We are very pleased to announce the results of the seventh edition of the Library Journal Index of Public Library Service, sponsored by Baker & Taylor’s Bibliostat. The LJ Index is a measurement tool that compares U.S. public libraries with their spending peers on four types of per capita output measures. For this year’s Star Libraries, see “The Star Libraries”; for more on what’s next for the index, see “What’s Next for the LJ Index.”
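For readers curious about the mechanics, here is a minimal sketch, in Python, of how a peer-group, per capita comparison of this kind can work. It is an illustration only: the field names and the simple scoring rule are assumptions, not LJ’s published formula.

```python
# A minimal sketch of the general idea behind a peer-group, per capita
# index. This is NOT Library Journal's published formula; the field names
# and the scoring rule are illustrative assumptions only.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Library:
    name: str
    population: int          # legal service-area population
    expenditures: float      # total operating expenditures, used for peer grouping
    visits: int              # annual library visits
    circulation: int         # annual circulation
    program_attendance: int  # annual program attendance
    internet_sessions: int   # annual public internet terminal uses

OUTPUT_MEASURES = ("visits", "circulation", "program_attendance", "internet_sessions")

def per_capita(lib: Library, measure: str) -> float:
    """An output measure divided by the library's service population."""
    return getattr(lib, measure) / lib.population

def score_peer_group(peers: list[Library]) -> dict[str, float]:
    """For each library in a spending peer group, average the ratios of its
    per capita outputs to the group means; a score of 1.0 means 'typical
    for libraries that spend this much.'"""
    group_mean = {m: mean(per_capita(p, m) for p in peers) for m in OUTPUT_MEASURES}
    return {
        p.name: mean(per_capita(p, m) / group_mean[m] for m in OUTPUT_MEASURES)
        for p in peers
    }
```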
When the LJ Index and its Star Library ratings were introduced in 2008, our hope was that whether libraries were awarded stars or not, they would examine these statistics more closely—both for their own library and for their peers—and make fuller use of these and other types of data for local planning and evaluation purposes.
In the meantime, however, another type of data has come to the fore: outcomes. The conventional wisdom in the public library community today is that output data alone are insufficient to assess the performance of public libraries. The new big question is: what difference do libraries make in the lives of their users and communities? Yet for many, the distinction between an output and an outcome has remained elusive and often confusing. Fortunately, over the past year or two, several major projects have launched or matured to the point of providing public library administrators and stakeholders with carefully crafted and broadly tested tools for making sense of output and outcome data.
Here we will explore what some of this year’s Star Libraries are doing with outcome measures, chiefly through their involvement with up-and-running projects such as the Edge Initiative and the Impact Survey, as well as developing efforts—for example, the work of the Public Library Association (PLA) Performance Measures Task Force. Comments were solicited from directors and other representatives of Star Libraries about how their experiences with outcome measurement affect their views about where public libraries need to go with output measurement.
» Next page: “The Star Libraries”
Will
Some people make better use of libraries than actually reading a book.
Posted: Feb 24, 2015 05:53
Ray Lyons & Keith Curry Lance
Thank you for your thoughtful comments, Mary Jo. It is heartening to see librarians taking quantitative data seriously and putting in the time, as you have, to think through the issues. Please accept our apologies for this delayed response: we didn’t see your comment right away, and our schedules slowed our reply.

We can’t really disagree with the main points you make, but we do want to offer our perspective. We’ve written before about the strengths and weaknesses of national rating systems of any sort. These are report-card measurement systems that, by definition, are simplistic, broad-brush reflections of institutional data. Recognizing the limited nature of national ratings, this year our article focused entirely on a more robust and fruitful kind of performance measurement: library outcome evaluation.

Yes, in the LJ Index ratings, libraries with lower populations benefit if they also serve patrons who are not local residents (see LJ Index FAQ item #12). At the same time, we do not alter the national public library data or create an alternate set (FAQ item #20). Attempting to correct or omit libraries with high per capita values is a slippery slope. How do you decide which values are so extreme that they should be considered invalid? Which values close to those extremes should also be omitted? And which values close to those? Eliminating one outlier creates another. And if we screened the data for this particular problem, what about other cases where reporting practices make comparisons potentially unfair? These become subjective decisions, while our design aim is to keep the ratings as impartial as possible. In report-card systems, measurement decisions are usually trade-offs.

Your suggestion to use total expenditures per capita in local peer group comparisons is another alternative. However, it can introduce problems similar to those that bother you about the LJ Index. Libraries sharing a given expenditures-per-capita level can easily vary in population from 10,000 to more than 100,000, a more than tenfold difference (larger than the discrepancy you note). The ideal method for forming library comparison groups is to combine expenditures and population, as recommended in a statistical brief published by the National Center for Education Statistics (NCES) in the late ’90s (see http://nces.ed.gov/pubs98/98310.pdf). Yet even this system will not avoid the problem that concerns you. In a national rating system the number of “winners” is arbitrary (i.e., set by the ratings rules). Libraries with low populations and high expenditures would still be guaranteed winning spots, since they would excel even within their own category. Given a set number of total winners, these same libraries would still claim spots that would otherwise go to libraries with different expenditure ratios. When we created the LJ Index in 2008, we considered the NCES model but decided it was too complicated: it produces 45 to 50 peer groups, which would be difficult to report on.

We stand by per capita library output measures as legitimate for purposes of comparing libraries. Per capita measures have been traditional library statistics since the 19th century. (This is not to say they don’t have limitations that need to be taken into account.) More importantly, the per capita measures used in the LJ Index are not “false data.” They are replicated exactly from the IMLS data files. Comparisons made with them may be unsatisfactory; perhaps this is what you are referring to as “false.” In any case, the data we use are trustworthy to the extent that they have been reported accurately to IMLS.

Finally, the fact that a limited group of libraries repeatedly earns LJ Index stars is not a sign of a defective measuring system. In fact, it is the opposite. Ratings of all kinds (cities, hospitals, universities, state governments, etc.) have contestants that earn high scores year after year. This reflects a basic tenet of good measurement design: repeated measurements should not vary erratically over time.

Since the inception of the LJ Index we have encouraged libraries to pursue local comparisons for a richer evaluation of their performance. So we applaud your recommendations (other than characterizing IMLS data as “false”). We hope libraries will explore the data more thoroughly by conducting their own customized comparisons.

Ray Lyons and Keith Curry Lance
Posted: Nov 16, 2014 03:41
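As an aside for readers curious about the two-way grouping the NCES brief describes, here is a minimal sketch that bins libraries on both total expenditures and population, so a library is compared only with others similar on both dimensions. The bin edges are hypothetical placeholders, not the brief’s actual cutoffs.

```python
# A sketch of two-way peer grouping in the spirit of the NCES brief cited
# above: libraries are compared only with others similar in BOTH total
# expenditures and population. The bin edges are hypothetical placeholders,
# not the brief's actual cutoffs.

import bisect

EXPENDITURE_EDGES = [100_000, 400_000, 1_000_000, 5_000_000, 10_000_000, 30_000_000]
POPULATION_EDGES = [2_500, 10_000, 25_000, 50_000, 100_000, 500_000]

def peer_group_key(total_expenditures: float, population: int) -> tuple[int, int]:
    """Identify a peer group by its (spending bin, population bin) pair."""
    return (bisect.bisect_right(EXPENDITURE_EDGES, total_expenditures),
            bisect.bisect_right(POPULATION_EDGES, population))

# Example: a library spending $2.3M and serving 30,000 residents lands in
# spending bin 3 and population bin 3.
print(peer_group_key(2_300_000, 30_000))  # (3, 3)
```

Note that six cut points on each dimension yield up to 49 cells, which is in the ballpark of the 45 to 50 peer groups the comment above mentions; this illustrates why the authors found the model unwieldy to report on.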
Mary Jo Finch
Thank you for continuing to encourage librarians to look at measurement for what we can learn about ourselves by studying comparable libraries. Unfortunately, because you group libraries by TOTAL expenditures and then rank them by PER CAPITA measures, you set up a statistical mismatch that rewards libraries with under-reported populations and encourages false comparisons for the rest of us.

By setting up comparisons this way, Library Journal is choosing to believe that Library ABC, which gets five stars every year, actually spends in excess of $1,600 per capita and has a visitation rate in excess of 74. That would mean the city where this library resides spends over $2 million annually on library services for a very small population, and that every single man, woman, and child in that community comes to the library 1.42 times per week. Obviously this is not the case. The library has a sizable budget because it actually serves a larger population than it reports; its measurements are being divided by the legal service population rather than the population actually served.

In each grouping, the five-star libraries have per capita expenditures well in excess of those of the group as a whole. The problem is worst in the $1M-$5M group, where five-star libraries’ per capita expenditures run about 568% of the group average ($317.90 versus $55.70). Wherever you see a huge per capita expenditure, you know you have an under-reported population, which means the other per capita measurements are false numbers as well.

I would encourage library directors who wish to use the statistics for comparison to sort the libraries in their grouping by expenditures per capita (you will have to add a column for this) and then look at the libraries that spend similarly to yours. Besides providing reasonable benchmarks for performance, it will help you explain to your board why you remain star-less.

Posted: Nov 05, 2014 09:00
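To make the workflow suggested above concrete, here is a minimal sketch in Python/pandas, assuming a hypothetical CSV export of the IMLS data with illustrative column names: add an expenditures-per-capita column, then benchmark against libraries that spend similarly per resident rather than against the total-expenditure group.

```python
# A sketch of the comparison suggested above: add an expenditures-per-capita
# column to an IMLS-style table, then pull the libraries whose per-resident
# spending is close to yours. File and column names are hypothetical.

import pandas as pd

df = pd.read_csv("imls_public_libraries.csv")  # hypothetical export
df["exp_per_capita"] = df["total_expenditures"] / df["population"]

def spending_peers(df: pd.DataFrame, library_name: str, band: float = 0.15) -> pd.DataFrame:
    """Return libraries whose expenditures per capita fall within +/-15%
    (by default) of the named library's, sorted for side-by-side review."""
    target = df.loc[df["name"] == library_name, "exp_per_capita"].iloc[0]
    mask = df["exp_per_capita"].between(target * (1 - band), target * (1 + band))
    return df[mask].sort_values("exp_per_capita")

peers = spending_peers(df, "Anytown Public Library")  # hypothetical name
```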
Cynthia J. Davis
I am the director of the Spirit Lake Public Library, which you have incorrectly listed as having a population of over 10,000. My city’s population is 4,840. Any idea why the population is wrong?
Posted: Nov 04, 2014 01:46