Usage Statistics: Do They Drive You…or Do You Drive Them?
by Ron Burns (Vice President of Global Software Services, EBSCO Information Services)
Column Editor: Kathleen McEvoy (EBSCO Information Services)
How would you depict the ultimate measure of library success? An obvious focus is end users and their satisfaction. They came to the library with a need — did they leave fulfilled? This particular “statistic” is not easy to come by no matter how many surveys we may conduct, but we can certainly influence the level of satisfaction. Statistics can guide collection development decisions, but they can also help guide our actions when we go behind the numbers with an eye toward enhanced end user satisfaction.
For better or worse, Google changed the game. It showed users that it should be simple to get strong answers to their queries. The most important thing we all have in common is “time.” No end user wants to go to the library only to waste time figuring out where to go and what resource to use, sifting through less-than-stellar results — only to then have difficulty getting the actual full text of the item they wanted. We are no longer in the business of teaching users to search. There is no time for that. We can teach them to be good finders (i.e., good discerners of information), but more and more, that is the value proposition the library brings to students — the assumption is that the library has the “good stuff.” And if it is as good as Google in terms of presenting the right information at the right time, the best materials should be right at the top of the result list. There is no time for anything less. Just as we no longer do long division by hand (we have calculators on our smartphones), we shouldn’t have to teach users to search. Our systems should be smart enough to get students exactly what they come to the library for — the right information. And it had better be easy and fast.
So, how can usage statistics help us? It goes without saying that we need the statistics themselves — presented in a way that is simple for librarians to use. Of course, various resources will provide their own statistics, but there are tools that can help us consolidate usage information in a single workflow/environment to make the most of it. And some serials agents are now merging usage information with journal analytics, going as far as to provide cost detail in relation to usage. But the decisions we make and the actions we take when we have this detail are what will separate success from failure.
Are you quick to look at low usage statistics and conclude that the material with low usage is not as valuable or important? Do you jump straight to reassessing your collection development decisions? Or do you think about why something may not be used as much as something else? Sharing statistics is an interesting way to better understand the “why.” Why would two university libraries of similar size (FTE) supporting similar programs have such a discrepancy in usage of an identical resource? Looking at two similar universities in North Carolina (Chart 1), the discrepancy in usage of the same popular full-text resource over the same time period was staggering.
Chart 1 – “Searches” conducted in an identical multidisciplinary full-text database during the same time period at two similar universities (site names omitted to protect the “innocent”)
Some of us may be quick to conclude that “searches” are not a worthwhile statistic to investigate when seeking end user satisfaction and full-text usage, but the reality is that they are step one. We have to get people to our resources before we can worry about whether they find the right material. If we assume that the multidisciplinary, full-text database represented here has strong content, why is there such a discrepancy?
A quick look at the library Websites provides an answer. Both universities have a discovery service. University B, however, removes the burden from the end user by making the choices obvious. Think about Google again. When you go to Google — even when you used it for the first time — did you have to debate what to do? Or was it exceedingly obvious? Of course, libraries have more to make available than just a search box, but how much more is really necessary? The reality is that University B had a minimal number of links/options for students, with its discovery service prominently featured. By contrast, University A had seven times the number of options on its home page, with the discovery service not prominently featured. Further still, University A defaulted the visible search box to the catalog (not the discovery service, which includes the catalog).
What percentage of overall library “traffic” comes through your Website? And are you spending the relative time and resources to optimize it? These are some “off the path” statistics that may be the most meaningful to examine closely. Vendors spend tens of thousands of hours conducting focus groups and studying user behavior. One thing we know for sure is that the defaults libraries provide are almost always what students use. So, if you default to the catalog as the lone search box on your home page, you may have already led your user down the wrong path. Try switching the default to discovery (with the catalog included), then look downstream and study some of the more intricate statistics. Did you see fewer searches per session and quicker value to the end user?
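The “searches per session” figure mentioned here is simple to compute once session-level counts are available. A minimal sketch, using made-up session counts rather than any real library’s data:

```python
# Hypothetical session log: session ID -> number of searches in that session.
# These counts are illustrative only, not drawn from a real library's logs.
sessions = {"s1": 5, "s2": 2, "s3": 2}

# Average searches per session; a drop after a defaults change may suggest
# users are finding what they need with less trial and error.
searches_per_session = sum(sessions.values()) / len(sessions)
print(searches_per_session)  # → 3.0
```

Tracked before and after a change like switching the default search box to discovery, a falling average can be one signal of quicker value to the end user.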
So, once we get users to “search,” we have to make sure they “find.” The only way to do this is to ensure the best possible results at the top of the result list and then simplify the pathway to the full text. It is the combination of comprehensiveness and precision that gets us the best possible results at the top of a discovery result list, and neither is easy to come by. Discovery services, by and large, were designed to utilize base metadata (e.g., article title) and full text. This may return a lot of results, but the idea of the “best” results at the top of the page is quickly lost. And in the end, so is the user. EBSCO’s EDS is an example of an approach to discovery made to emulate the academic research experience of the most refined indexes. By ensuring that detailed subject indexing (together with full-text searching and other metadata) is the core of the discovery service, it opens the door for sophisticated relevancy and value-ranking algorithms to ensure that users get what they came for when they come to the library — the right result…right away. The fact is that end users don’t care if they get ten million results to a search; they care about the ten results at the top of the page.
One way to determine if the best items are surfacing to the top is to look at year-on-year usage of top resources. ScienceDirect is commonly viewed as a top university resource and as such can be a good indicator of whether end users are finding some of the most valuable content in a library’s collection. It was used as an example in a recently published article outlining the impact of EDS on usage of ScienceDirect at Bournemouth University (UK). The following appeared:
In the second year of EDS use at Bournemouth, there was a 1362 percent increase in JSTOR linking and 357 percent increase in ScienceDirect linking. Because EDS allows for the infusion of high-end subject indexes, the statistics related to use of these critical resources can be illuminating. For example, usage records from A&I service CAB Abstracts increased by 81 percent from 2010-11 to 2011-12.* See Table 3.
[*Note: Because Bournemouth subscribes to CAB Abstracts on EBSCOhost, the University takes advantage of the EDS “platform blending” technology, which allows for infusion of results from subject indexes that don’t otherwise participate in discovery services.] — Sam Brooks, “Increasing Value and Usage of Information Resources Through Discovery.” Panlibus, Winter 2012: 18. Web. http://issuu.com/panlibus/docs/panlibus26
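The year-on-year figures quoted above are ordinary percent-change arithmetic. A minimal sketch, using made-up counts (not Bournemouth’s actual usage numbers) purely to show how such a figure is derived:

```python
def pct_increase(before: float, after: float) -> float:
    """Percent increase from one period's usage count to the next."""
    return (after - before) / before * 100

# Made-up counts chosen only to illustrate the arithmetic behind a
# "1362 percent increase"; these are not actual Bournemouth figures.
print(pct_increase(100, 1462))  # → 1362.0
```

The same calculation applied to any COUNTER-style report (searches, record views, or linkouts) gives comparable year-on-year growth figures.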
As discovery becomes more prominent and more universities default to discovery (as opposed to the catalog), EDS customers see an increase in usage of key resources due to the availability of subject indexing and refined approaches to relevancy ranking. The single-site example above and the following aggregate example move beyond the number of “searches” (step 1) and “record views” (step 2) and into “linkouts” (step 3 — i.e., getting to the full text). Linkouts in this case are a combination of link resolver use and SmartLink access to Elsevier full-text content. The chart above (top right) represents an aggregate view of a random set of universities that used EDS for two consecutive years and have access to Elsevier full text.
We attribute the massive increase in full-text usage to three major cascading reasons: 1) a more prominently featured discovery service from year one to year two, resulting in more searches; 2) shifting defaults from catalog to discovery on library Web pages, resulting in more searches; and 3) a concerted effort in the last 12 months to enable SmartLinks+ for customers purchasing e-journal packages from EBSCO, complementing the link resolver and streamlining access to full text.
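The three-step funnel described here (searches, then record views, then linkouts) can be expressed as step-to-step conversion rates. A sketch with hypothetical monthly counts, not vendor-reported data:

```python
# Hypothetical monthly counts for the three funnel steps; illustrative only.
funnel = [("searches", 10000), ("record_views", 4000), ("linkouts", 1500)]

# Conversion rate from each step to the next: where a rate is low,
# that is the stage at which users are being lost.
rates = {}
for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
    rates[f"{prev_name}->{name}"] = count / prev_count
    print(f"{prev_name} -> {name}: {count / prev_count:.0%}")
```

A low searches-to-views rate points at relevancy ranking or content problems; a low views-to-linkouts rate points at full-text access problems, the “closing the loop” issue discussed next.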
So, if step one is making sure users are searching, and step two is making sure they get the right results, step three is closing the loop. For libraries, it means getting the user to the full text quickly and accurately. Studies show, however, that it is here that we lose end users…and most typically, they go back to Google. Unfortunately, some of our problem lies within our baseline solution — link resolvers. Link resolvers are vital, but suffice it to say libraries are over-reliant on these tools in general. Not only do link resolvers require multiple clicks to get to the full text, but “links fail nearly a third of the time” (Trainor, Cindi; Price, Jason. “Digging into the Data: Exposing the Causes of Resolver Failure.” Library Technology Reports, October 2010, Vol. 46, Issue 7, p. 15). EBSCO’s SmartLinks+ serve as a way for libraries that use EBSCO as a serials agent (for e-journals and e-journal packages) to complement link resolvers with more accurate, single-click access to the full text. Studies show that frustration surrounding link resolvers results in users dropping a session before they reach the full text (that the library owns) because they are either unable to find it quickly enough or confused by the path. Removing the obstacles increases the usage statistics of collections and, as a result, the overarching success metrics.
By doing the things mentioned earlier, a library is bound to see its number of searches and linkouts in discovery increase. One way in EDS to dive into a deeper evaluation of the total value users derive from the discovery service (i.e., finding information for their research — a list of articles, a citation, full text) is to look at the following four metrics.
1. Abstract Views: The user clicked into the detailed record view.
2. Full Text Views: The user clicked directly into the article full text, which was available from an EBSCOhost full-text database.
3. Custom Link From: The record’s metadata and relevancy ranking placed it in a position on the result list where the user could easily find it. Full text was matched to a library holding via the publisher site (e.g., ScienceDirect) or via a link resolver, and the link displayed on the result list. The user clicked the link, with or without first causing an Abstract View.
4. Smart Link From: The record’s metadata and relevancy ranking placed it in a position on the result list where the user could easily find it. Full text was matched to a library holding automatically via SmartLinking, which placed a pre-constructed PDF link (a PDF icon rather than a “Find It” link) on the result list for direct access. Smart Link From is an important measure of success because EBSCO can guarantee that the user was rendered the PDF. The user clicked the link, with or without first causing an Abstract View.
These metrics can be viewed against the main discovery index as the content source, but are most powerful when viewed against the library’s subscribed subject indexes (e.g., PsycINFO, CAB Abstracts, etc.) because it tells the story of search index content quality combined with use of subject indexing-heavy relevancy ranking, and its role in user satisfaction.
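Tallying these four metrics per content source is, at bottom, a grouping-and-counting exercise. A minimal sketch, where the event records and field names are assumptions for illustration, not EDS’s actual report schema:

```python
from collections import defaultdict

# Hypothetical click events; field names are assumptions, not EDS's schema.
events = [
    {"database": "PsycINFO", "metric": "abstract_view"},
    {"database": "PsycINFO", "metric": "full_text_view"},
    {"database": "PsycINFO", "metric": "smart_link_from"},
    {"database": "CAB Abstracts", "metric": "custom_link_from"},
]

# Tally each of the four metrics per content source (database).
totals = defaultdict(lambda: defaultdict(int))
for event in events:
    totals[event["database"]][event["metric"]] += 1
```

Viewing the resulting per-database breakdown side by side is what lets a library compare, say, how often PsycINFO records lead users all the way to full text versus stopping at the abstract.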
Statistics that will help uncover the areas in need of improvement and help to close the loop between the end user’s information need and the information we have are not always simple to view. So, while there is a logical path (three steps) to investigate, and while EBSCO is structured to help libraries with the four metrics mentioned above, there are other statistics libraries should consider that may require some digging. Did we get our users to the right place? How can we streamline and increase that traffic? Did they end sessions before clicking on a record? Did they conduct multiple similar searches because results weren’t what they considered “great”? Did we get them to the full text quickly and easily? And if not, where did we lose them?
Can librarians study the value of the results and users’ perceptions of whether they quickly got the best results from their library experience? Have libraries conducted studies of user behavior and experience similar to the C&RL study conducted by Bucknell University and Illinois Wesleyan University? (http://crl.acrl.org/content/early/2012/05/07/crl-374.full.pdf+html) Users can tell us more than simple statistics can. And it may be these “unavailable” statistics that can help us better understand user behavior and potential solutions to close the gap between end user need and ultimate library success. It is our users who determine the answer to the opening question: How would you depict the ultimate measure of library success? The answer from our users is likely very simple — results. And our path to getting there is becoming clearer.