v26 #1 Analyze This: Usage and Your Collection

Mar 31, 2014

COUNTER: Basic Explanations to Disabuse Expectations

by Athena Hoeppner  (Electronic Resources Librarian, University of Central Florida)

Column Editor:  Kathleen McEvoy  (EBSCO Information Services)

As the Electronic Resources Librarian, I frequently compile usage reports for librarians and administrators.  Almost as frequently, I find myself explaining the reports.  In my experience, we librarians look at usage data through a lens of expectations.  We expect stable usage with moderate increases yearly;  we expect usage on par with our peers; and we look for low cost/use to prove the value of e-resources.  Over the years, I’ve encountered many things that confound those expectations: large year-to-year fluctuations, usage lower than our peers’, and unreliable or incalculable cost-per-use.

At the core of usage analysis and comparisons is the COUNTER Code of Practice.  COUNTER establishes protocols widely adopted by e-resource vendors to produce and deliver consistent usage reports to libraries.  The first Code of Practice, released in 2003, described seven reports.  The newest release (required as of 31 December 2013) describes 23 reports.  The reports document three basic types of interactions between users and e-resources:  Search Activity, Full Content Access, and Turnaways, with variations for type of content (e.g., article, book, multimedia), mode of access (e.g., desktop, mobile device), file format delivered, and year of publication.  For UCF’s searches and full content access data, I use the 10 go-to reports discussed below.

Search Activity Reports

Four reports give a complete account of all of UCF’s searches in COUNTER-compliant e-resources:  Platform Report 1 (PR1), Database Report 1 (DB1), Book Report 5 (BR5), and Journal Report 4 (JR4).  BR5 and JR4 include only Total Searches.  PR1 and DB1 include a richer view of search behavior with data for:

•    Regular Searches

•    Searches-federated and automated

•    Result Clicks

•    Record Views

I sum searches from the PR1, BR5, and JR4 to calculate UCF’s total searches across all of our COUNTER-compliant e-resources.  For vendors that offer more than one interface or service for interacting with the content, the platform report reveals how much each interface is used.  For example, PR1 for EBSCO delineates searches run on their EBSCOhost, EDS, EDS API, and Mobile interfaces.
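As a rough illustration of that tallying, the sketch below sums a searches column across exported PR1, BR5, and JR4 files.  This is not UCF’s actual workflow: the file names and the “Total Searches” column header are placeholders, and real COUNTER exports carry header rows and month-by-month columns that would need extra handling first.

```python
import csv

def total_searches(path, column="Total Searches"):
    """Sum the named column across every data row of one exported report."""
    with open(path, newline="", encoding="utf-8") as f:
        return sum(int(row[column] or 0) for row in csv.DictReader(f))

# Hypothetical file names for the three exported reports.
reports = ["PR1.csv", "BR5.csv", "JR4.csv"]
grand_total = sum(total_searches(p) for p in reports)
print(f"Total searches across COUNTER-compliant e-resources: {grand_total}")
```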

DB1 is more detailed than PR1, with usage for each database on a platform.  On multi-database platforms, a single query typically runs simultaneously in several databases.  The usage statistics count the search in each database, so one query can increase the DB1 totals by the number of databases searched; for example, a single query run against five databases adds five searches to the DB1 report but only one to PR1.  Use PR1 to see total usage instead of summing the data reported on DB1.

In Release 4, sessions are no longer counted or reported, but Result Clicks and Record Views have been added.  ARL will need to update its Survey in response to the changes, and usage summaries in library annual reports around the world will look different next cycle!

Full Content Access Reports

COUNTER Release 4 offers reports for the variety of content types modern libraries provide to users, including articles, eBooks, eBook chapters or sections, and multimedia of all kinds.  The following reports provide a complete view of UCF’s use of full content from COUNTER-compliant vendors:

•    Book Report 1 (BR1) – title requests

•    Book Report 2 (BR2) – section requests

•    Journal Report 1 (JR1) – full-text article requests

•    Journal Report 1a (JR1a) – journal archives

•    Journal Report 1 GOA (JR1GOA) – Gold Open Access

•    Multimedia Report 1 (MR1)

Joe User:  A Time Traveler’s Walk-Through

To illustrate how user behavior translates into usage statistics, let’s track Joe User as he proceeds through a typical library research session in three settings:  Single Database, Federated Search, and Web Scale Discovery (WSD).  Joe’s basic behavior will remain consistent: he enters a query for “knee,” clicks on five results, and accesses five full content items.  We’ll look just at the statistics in DB1, PR1, and the suite of full content reports JR1-MR1.  For the sake of simplicity and space, I have combined and compacted the data in the examples below.

One Database Setting

Joe starts his session in 2003, using one database, CINAHL, on one platform, EBSCOhost.  He enters “knee,” clicks on five results, and opens five full-content items.  His activity generates straightforward search usage data: one Regular Search counted for CINAHL on DB1 and for EBSCOhost on PR1, along with Result Clicks and Record Views for the five results he opens.

Joe opens one Springer book chapter, three full-text articles (one each from EBSCOhost, Wiley, and PLOS ONE), and one video from Alexander Street Press.  Each opened item is counted on the appropriate vendor report for its content type.  If the Wiley article is Gold Open Access, it is counted on JR1GOA.  If it is from a purchased archive, it goes on JR1a.  The article from PLOS ONE is not recorded on any COUNTER report: PLOS journals are open access, so no authentication is needed to read the article, and PLOS does not issue COUNTER reports, relying instead on article-level metrics.
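That routing logic can be summarized in a small sketch.  The items come from Joe’s scenario above; the function and its flags are illustrative only, not any vendor’s API, and the Wiley article is shown as Gold Open Access purely for the sake of the example.

```python
def report_for(item):
    """Return the COUNTER report that would record one request for this item."""
    if not item.get("counter_compliant", True):
        return None  # vendor issues no COUNTER reports (e.g., PLOS)
    if item["type"] == "book_section":
        return "BR2"
    if item["type"] == "multimedia":
        return "MR1"
    if item["type"] == "article":
        if item.get("gold_oa"):
            return "JR1 GOA"
        if item.get("archive"):
            return "JR1a"
        return "JR1"
    return None

# Joe's five full-content items from the scenario above.
items = [
    {"vendor": "Springer", "type": "book_section"},
    {"vendor": "EBSCOhost", "type": "article"},
    {"vendor": "Wiley", "type": "article", "gold_oa": True},  # would map to JR1a instead if from a purchased archive
    {"vendor": "PLOS ONE", "type": "article", "counter_compliant": False},
    {"vendor": "Alexander Street Press", "type": "multimedia"},
]
for item in items:
    print(item["vendor"], "->", report_for(item) or "no COUNTER report")
```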

Multiple Databases/Federated Searches

We teleport Joe into the near past, 2009, where he tries MetaLib using a Nursing Quick Search form that sends the query to five databases:  CINAHL and Alt-Health Watch from EBSCOhost;  PsycInfo and Dissertations Full-text on ProQuest;  and Cochrane from Ovid.  Joe runs his search for “knee,” clicks two results from CINAHL, one from PsycInfo, and two from Cochrane.

This time, Joe’s usage is distributed across the five databases and three platforms, plus MetaLib.  Result Clicks and Record Views are new to Release 4, so I did not know how they would be counted in federated search systems like MetaLib.  Oliver Pesch, a COUNTER Executive Committee member, technical committee chair, and Chief Product Strategist at EBSCO, explained the accounting for me:

“Record Views” would be counted by the platform where the records are retrieved from; however, “Result Clicks” would happen on the platform that generated the result list.  Therefore, in the table that follows, the Record Views would be 2 for CINAHL, 1 for PsycInfo and 2 for Cochrane – and 0 for MetaLib since MetaLib does not host the “records” being viewed.   The “Result Clicks” are as would be expected.
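A minimal sketch of that attribution rule, applied to Joe’s five federated clicks; the tallying is illustrative and is not part of any COUNTER tool.

```python
from collections import Counter

# Joe's five federated clicks, with the database that hosts each clicked record.
RESULT_LIST_PLATFORM = "MetaLib"
clicks = ["CINAHL", "CINAHL", "PsycInfo", "Cochrane", "Cochrane"]

result_clicks = Counter()
record_views = Counter()
for hosting_db in clicks:
    result_clicks[RESULT_LIST_PLATFORM] += 1  # the click happens on the platform that built the result list
    record_views[hosting_db] += 1             # the viewed record is served by the hosting database

print("Result Clicks:", dict(result_clicks))  # {'MetaLib': 5}
print("Record Views:", dict(record_views))    # {'CINAHL': 2, 'PsycInfo': 1, 'Cochrane': 2}
```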

Assuming that Joe discovers and selects the exact same full content as in the first scenario, there is no change in the full content usage statistics.  The full content reports are not affected by federation.  How the user got to the full text makes no difference, be it through an A&I database, a link in an online course system, or a Google Scholar search.  So long as the content is hosted on a COUNTER-compliant vendor site, the use is tallied on the most suitable report: JR1, JR1a, JR1GOA, BR1, BR2, or MR1.

Web Scale Discovery

Joe catches up with modern times and repeats his activity in a Web scale discovery service with one query, five clicks on results, and five full-content accesses as before.  Because WSD is relatively new, and because Release 4 is brand new, I was once again unsure how the activity translates into COUNTER statistics. Oliver Pesch explained:

Our user Joe searches EBSCO Discovery Service (EDS), which covers 100 databases (for sake of an example)… each of the 100 databases will receive a +1 for “Searches — Federated and Automated”; however, the  PR1 for EBSCOhost will receive only a +1 for Searches Regular to represent the user’s actual search on EDS.  Since EDS shows which database a result is from, each result click will be attributed to the database the result is from, and each view of an abstract will be reflected on that database as a “Record View.”  If EDS is also searching other databases via federated search “connectors,” the individual searches will not show on EDS but would show as “Searches — Federated” on the content-provider’s COUNTER DB1 report.  Record views would show on the content-provider’s COUNTER DB1 report.  EBSCOhost PR1 report would only reflect Result Clicks and Record Views for databases hosted, searched, and accessed on EBSCOhost.
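A sketch of the same accounting for the discovery scenario, assuming a hypothetical 100-database EDS profile and made-up source databases for Joe’s five clicks:

```python
from collections import Counter

# Hypothetical 100-database EDS profile and made-up source databases for Joe's five clicks.
profile_databases = [f"Database {i}" for i in range(1, 101)]
clicked_from = ["CINAHL", "CINAHL", "PsycInfo", "MEDLINE", "MEDLINE"]

db1_federated_searches = Counter({db: 1 for db in profile_databases})  # +1 per database for the one query
pr1_regular_searches = Counter({"EBSCOhost (EDS)": 1})                 # the user's single actual search
db1_result_clicks = Counter(clicked_from)                              # attributed to each result's source database
db1_record_views = Counter(clicked_from)                               # assuming Joe views the record behind each click

print(sum(db1_federated_searches.values()), "searches spread across DB1 reports as 'Searches - federated and automated'")
print("PR1 Regular Searches:", dict(pr1_regular_searches))
print("DB1 Result Clicks / Record Views:", dict(db1_result_clicks))
```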

Each WSD service ingests records and content differently, which may affect the Result Click and Record View statistics.  Consider the different approaches used by EDS and Summon when multiple data sources supply metadata for an item.  EBSCO Discovery Service identifies and associates the multiple records for the item, but keeps each record separate.  Summon matches and merges multiple records for a single item to create a Summon record.  In search results, EDS shows the best original record for the item; Summon shows the Summon record.  In EDS, when Joe clicks and views a record, he sees a record supplied by a specific database, and the usage is attributed to that database.  In Summon, when Joe clicks and views a record, he sees the Summon record, and I suspect the usage is attributed to Summon, not to the databases that originally supplied the metadata.

The scenarios above illustrate how changes in search modes and technologies can have a big effect on statistics.  Search statistics increased from 1 to 6 to 100+.

Choices, Circumstances, and Complications

Now that we’ve examined how searches, clicks, and full-text data are affected by different scenarios, I’ll revisit my original point that year-to-year or library-to-library comparisons of COUNTER data are problematic.  The scenarios demonstrate that a library’s use of federation or WSD has an appreciable effect on search statistics.  I experienced this impact firsthand in the late 2000s when many of UCF’s peers implemented federated search as their primary access to e-resources.  UCF chose to use federated searching in a limited capacity.  For a couple of years, around ARL Survey and Annual Report time, I had to explain why UCF’s search statistics were much lower than our similarly sized peers’.

One year, UCF’s limited federated search implementation, dubbed Quick Articles, experienced a hiccup.  We included only three databases in our general search group, and one stopped working with our system for a period of months.  Even though we did not steer traffic to Quick Articles, our search statistics for the problem database plummeted.  I have seen similar effects from network and EZproxy downtime.

Some libraries make the perfectly valid choice to encourage searching individual databases.  Most will implement a discovery service and include as many relevant databases as possible.  The exact contents of the discovery index will vary from service to service and library to library.  In addition, each discovery service uses proprietary relevancy ranking algorithms.  Even if the services included exactly the same data sources in their index, they would each surface different results in the first few pages.  All of these choices and differences will increase use of some e-resources and likely decrease use of others.  Different choices by libraries may result in peer libraries showing very different usage patterns.

Cost-per-use calculations are also affected by the issues above, but the larger difficulty stems from inconsistencies in the availability and granularity of pricing data.  Many of UCF’s e-journals are part of state-wide packages, and many are access-only titles.  We have access to thousands of e-journals with no itemized prices.  Our most used databases are, similarly, grouped into packages with no itemized pricing.  Such cases make it impossible to calculate cost-per-use.

In addition, much of our full-text usage is from aggregator databases.  Calculating the cost-per-use for a journal available both through a direct subscription and through aggregators requires summing the use wherever the journal is hosted, but determining the full price for access to the journal is often too complicated to be feasible.
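A minimal sketch of what that calculation would look like, and where it breaks down, using entirely hypothetical figures:

```python
# Entirely hypothetical figures for one journal held both directly and via aggregators.
jr1_requests = {"Publisher platform": 420, "Aggregator A": 310, "Aggregator B": 95}
total_use = sum(jr1_requests.values())

itemized_price = None  # unknown: the title is bundled in a state-wide package with no per-title price

if itemized_price is None:
    print(f"{total_use} uses recorded, but no itemized price, so cost-per-use cannot be calculated")
else:
    print(f"Cost per use: {itemized_price / total_use:.2f}")
```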

I’d like to conclude by stating that I am a fan of COUNTER and do think that libraries should use COUNTER data for many purposes, including year-to-year and library-to-library comparisons.  I hope that I’ve provided some basis for making such comparisons with care, and with plenty of salt.  More details, including descriptions of the reports I did not cover, are available in the full COUNTER Code of Practice for e-Resources: Release 4 on the COUNTER Code of Practice site:  http://www.projectcounter.org/code_practice.html

The COUNTER Code of Practice for e-Resources: Release 4.  Published April 2012. http://www.projectcounter.org/r4/COPR4.pdf

Appendix A (Glossary of Terms). Updated November 2012.  http://www.projectcounter.org/r4/APPA.pdf

COUNTER Compliance: A Step-by-Step Guide for Vendors.  Published May 2012.  http://www.projectcounter.org/documents/COUNTER_compliance_stepwise_guide.pdf
