When I first proposed beginning a column on metrics, it seemed like a common-sense notion. In fact, the proposal practically wrote itself. Library metrics are the hottest of topics, as we’re simultaneously a service industry and an industry whose value to patrons and communities is difficult to quantify. As a result, our necks are traditionally among the first on the chopping block during budget cuts, and our staff and supporters are constantly fighting for more allocated resources. Qualitative anecdotes don’t defend our worth effectively in this business-savvy, metrics-driven world, nor do they ensure that we’re maximizing value for our patrons in our expenditure choices.
As a true librarian at heart, once the column was approved, I started my research. Often when beginning research, you cast your first net with extreme caution, prepared to be buried under a towering mound of inaccurate or inapplicable results. Surprisingly, despite the importance and value of library metrics, I discovered they aren’t discussed nearly as frequently as you’d expect. Why this gap? I have some ideas.
Let’s face it. Librarians are rarely math-centric. I learned this as an MLIS student with an undergraduate degree in actuarial science. While students in like majors could bond over their commonalities, I always felt a little lost – who needs a math librarian? Further into my library school career, I was swept up into Technical Services librarianship when I came in for a part-time reference desk job interview at my legal resources professor’s workplace and the Technical Services Director saw math featured prominently on my resume. She immediately usurped my reference interview and stole me away to the land of backlogged Westlaw and Lexis bills, much to my delight. In retrospect, I don’t even remember interviewing formally. You say “statistics,” and librarians’ ears perk up. You say “I like numbers,” and their eyes light up. Then they hand you a stack of papers covered with numbers and run before you can hand it back.
Yes, people who have bad memories of their math classes growing up are often squeamish around anything number-related. While I completely understand that fear, library metrics are a different beast. Hence, one of my goals at the outset of this column is to help our amazing group of technical services law librarian readers realize that “metrics” is not a synonym for “panic.”
To begin, let’s go over some basic concepts and vocabulary regarding metrics and their uses in libraries. First, not all metrics are created equal – for example, they: (1) use different collection and evaluation methods; (2) speak to different audiences; and (3) serve different purposes. Understanding the breadth of this topic is the first real step toward creating and tracking functional metrics, which can then effectively communicate value and aid in decision-making. The many things you can measure in a library fall into the general categories of inputs, processes, outputs, outcomes, and impacts.
“Inputs” is a fancy name for the resources used to produce or deliver a program or service: staff, supplies, equipment, and money. Through processes, these inputs become outputs – the resources and services that you produce, including your available materials and the programs you organize and host. Input and output tracking gives you those first-glance statistics, easy to count, measure, and report, as these are tangible things. Outputs are usually what get reported to stakeholders or decision makers, e.g., we check out this many books, we have this many research guides, or this many people use the library. However, these metrics don’t accurately demonstrate the value of our services and products.
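To make the distinction concrete, here’s a minimal sketch in Python of first-glance input and output tracking. Everything in it – the category names, the measures chosen, the figures – is an illustrative assumption of mine, not data from any actual library:

```python
# A minimal sketch of first-glance input/output tracking.
# All names and numbers are illustrative placeholders.

inputs = {  # resources used to produce or deliver services
    "staff_fte": 6.5,
    "materials_budget_usd": 250_000,
    "public_terminals": 12,
}

outputs = {  # resources and services produced
    "checkouts": 18_432,
    "research_guides": 41,
    "programs_hosted": 27,
    "gate_count": 96_210,
}

# Tangible and easy to report -- but note that nothing here
# says whether any of it actually helped a patron.
for name, value in outputs.items():
    print(f"{name}: {value:,}")
```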
And here’s where outcomes and impacts come in. I tend to agree with the school of thought that outcomes and impacts are the same thing seen from different perspectives: outcomes are changes from the perspective of our customers, while impacts are those same changes from the perspective of a stakeholder, usually at a higher level, with long-term effects on the larger community. These metrics go by quite a few names, including impact metrics, performance metrics, and outcome metrics, and are primarily intangible, making them much more difficult to measure. Naturally, they also communicate the most value and provide the most guidance and support.
Let’s be clear: metrics are different from statistics, and for that matter, so is data. Just because you did poorly in your statistics class or didn’t score highly on the quantitative section of the GRE doesn’t mean you should run from data or cringe when “metrics” is bandied about in a meeting with stakeholders or decision makers. Formally, data is the set of qualitative or quantitative attributes of a variable or set of variables, and it typically arises from measurements. Statistics don’t even come into play until you study the collection, organization, and interpretation of that data. Even better, in the library world, statistics don’t necessarily require Greek letters or convoluted equations. Most statistics, measures, and metrics can be organized into operating metrics, customer and user satisfaction metrics, and value and impact metrics.
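If it helps to see that distinction in miniature, here’s a toy Python sketch separating data (raw measurements) from a statistic (a summary of that data) and a metric (the summary put in decision-making context). The gate counts and open hours are invented for illustration:

```python
from statistics import mean

# Data: raw measured attributes -- one week of daily gate counts
# (invented numbers; zero means the library was closed that day).
daily_gate_counts = [312, 287, 401, 356, 298, 145, 0]

# Statistic: a summary from organizing and interpreting the data.
avg_daily_visits = mean(c for c in daily_gate_counts if c > 0)

# Metric: the statistic placed in context so it can guide a decision,
# e.g., visits per open hour, which you might track over time.
open_hours_per_day = 10
visits_per_open_hour = avg_daily_visits / open_hours_per_day

print(f"Average daily visits: {avg_daily_visits:.0f}")
print(f"Visits per open hour: {visits_per_open_hour:.1f}")
```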
Operating measures and operational statistics (such as how many people came to the library, how many check-outs took place on a certain day, and how many hits a database received) lend themselves well to understanding resource allocation, improving efficiencies, and making budget determinations. Customer and user satisfaction metrics, on the other hand, tell us how well the choices we made based on those operating measures are working and indicate what improvements may be required. Value and impact measures are incredibly meaningful in their own right, as they often incorporate satisfaction along with the importance of individual outcomes. These are the most elusive of all measurements, so naturally, they’re also the most valuable.
Martha Kyrillidou, senior director of the Association of Research Libraries statistics and service quality programs, once said, “what is easy to measure is not necessarily what is desirable to measure.” This is such a true observation about metric gathering in libraries – easy measurements rarely result in meaningful statistics, so one of your first challenges is figuring out how to make the things you choose to measure meaningful. Simply put, a meaningful measure shows you how much value you’re getting out of your investment. That could mean the investment in the library itself and the value that stakeholders or decision makers are getting out of that investment, or it could mean the value your customers are getting out of how the library chooses to invest its resources, both in terms of financial outlay and staff time. To determine meaningful measures, you need to understand your stakeholders or decision makers, and you need to understand your customers.
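One familiar way to operationalize “value per investment” is a cost-per-use calculation. Here’s a rough sketch – the subscription names, prices, and usage figures are all invented for illustration:

```python
# Rough cost-per-use sketch; all figures below are invented.
subscriptions = {
    "Database A": {"annual_cost": 12_000, "annual_uses": 4_800},
    "Database B": {"annual_cost": 9_500, "annual_uses": 650},
}

for name, sub in subscriptions.items():
    cost_per_use = sub["annual_cost"] / sub["annual_uses"]
    print(f"{name}: ${cost_per_use:.2f} per use")

# A high cost per use flags a resource for a closer look -- though,
# as the next paragraphs show, raw use counts still need qualitative
# context before you act on them.
```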
For instance, quantitative resource usage information doesn’t show how or why users are using materials, or even indicate how satisfied they are with the products. Relying solely on quantitative data, such as a basic count of hits, isn’t necessarily enough to demonstrate value to stakeholders and customers. Our most popular blog post at the law school, according to easily generated WordPress statistics, is one featuring a cartoon sun. Looking at the numbers and reports, you’d assume this was an incredibly popular post and maybe even assume it contributes a lot of value. However, this particular post features a metadata tag for “cartoon sun,” and one of the most searched keywords that leads people to our blog is – you guessed it – “cartoon sun.” Here, it’s obvious that a simple number doesn’t communicate actual value to our customer base or to our stakeholders and decision makers.
Similarly, one database may show twice as many hits as another in its generated usage reports, but that could be because it has a convoluted interface (possibly even designed for the sole purpose of generating inflated hits). Again, just because something is easy to measure doesn’t mean it’s meaningful. Qualitative data, such as patron survey feedback and user experience testing, provides the context within which to view these numbers. This often means using a hybrid approach that combines quantitative and qualitative data, as sketched below.
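One way to picture that hybrid approach: weight each database’s raw hit count by a satisfaction score gathered from patron surveys. In this toy sketch, the weighting scheme, the 1–5 scale, and every figure are my own invention rather than an established method, but it shows how a well-liked database can outrank a click-inflating one:

```python
# Toy hybrid scoring: combine quantitative hits with qualitative
# survey satisfaction (1-5 scale). All values are invented.
databases = [
    {"name": "Database A", "hits": 20_000, "satisfaction": 2.1},
    {"name": "Database B", "hits": 10_000, "satisfaction": 4.6},
]

for db in databases:
    # Normalize satisfaction to 0-1 and use it to discount raw hits,
    # so a convoluted interface that inflates clicks is penalized.
    db["adjusted_use"] = db["hits"] * (db["satisfaction"] / 5)

for db in sorted(databases, key=lambda d: d["adjusted_use"], reverse=True):
    print(f"{db['name']}: {db['adjusted_use']:,.0f} adjusted uses")
```

Here Database B, with half the hits but far happier users, comes out ahead – the kind of conclusion the raw usage report alone would have hidden.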
So there you have it. The metrics world is wide and wild, and this column will do its best to shine light on as many parts of it as possible. Beyond detailed discussions of the general metric concepts already mentioned, topics will include collection methods, statistical concepts in a nutshell, resource usage statistics, COUNTER and SUSHI, collection and transactional statistics, consortia challenges, web metrics, altmetrics, faculty support, law firm and public law library metrics, and performance indicators and benchmarks, as well as tools for presenting and manipulating data.
I’m still figuring out how best to approach the column to meet the needs of our audience, and since the next issue is devoted to American Association of Law Libraries (AALL) Annual Meeting program reports, this column won’t reappear until fall. I’d love to hear any suggestions on format and approach, any questions you’d like for me to attack, or any topics you’d like for me to cover. Shoot me an email at firstname.lastname@example.org, and let me know what you think!
Technical Services Law Librarian (TSLL) is an official publication of the Technical Services Special Interest Section and the Online Bibliographic Services Special Interest Section of the American Association of Law Libraries. This article originally appeared in the June 2014 issue and is reprinted here with permission.