v27 #5 Let’s Get Technical — A Technical Services Perspective on Taking on a Shared Retention Project, Part 1

Dec 7, 2015

Column Editors:  Stacey Marien  (Acquisitions Librarian, American University Library)

and Alayne Mundt  (Resource Description Librarian, American University Library)

In 2014, American University, in conjunction with the other eight WRLC (Washington Research Library Consortium) schools, entered into a memorandum of understanding committing to coordinate, retain, and share collections, preserve rarely held items, and make better use of library resources and spaces.  As part of this project, WRLC contracted the services of Sustainable Collections Services to conduct an analysis of the nine WRLC libraries' shared collections.  The analysis was intended to identify rarely held materials at each of the libraries, provide insight that would allow for strategic management of the library collections, and ensure retention of materials across the consortium and across the library community.  An additional goal of the project was to ensure that the WRLC's Shared Collections Facility (SCF) is used appropriately as a repository of library assets while preventing unnecessary duplication within the SCF.

These types of shared retention projects are becoming increasingly common as libraries and consortia reduce their print collections in favor of expanding electronic collections and using library space for collaboration and other activities.  Reducing collections responsibly means aiming to retain unique and rarely held titles while preventing duplication and redundancy.  These retention projects can also show where collections are strong and are getting the most use, so that library resources can be allocated where they are most needed.

With these goals in mind, WRLC and its member libraries entered into an agreement with Sustainable Collections Services (SCS) to conduct an analysis of our shared collection.  Several task forces and pre-existing WRLC committees were involved in getting the project off the ground.  These groups set up local policies, drafted the scope of the retention commitments, and addressed other goals such as reducing redundancy at our shared storage facility.  Perhaps because this project was so closely related to the collections side of librarianship, technical services librarians were not involved in many of the initial policymaking and workflow discussions and decisions.  We felt this was a mistake, as many errors in the implementation process could have been avoided with more input from the technical services staff.

Although our consortium is not the first to participate in one of these retention projects, the configuration of our shared catalog and our shared offsite storage center has added some unique challenges to implementing the goals of the retention project.  To flag the different retention commitments required by the project, it was decided that location codes in the shared catalog would be the easiest means of identifying and managing them (a brief sketch of the naming scheme follows the list below).  These location codes are:

XXXPERM: for items identified as being “cultural heritage” (i.e., having ten or fewer holdings within the U.S.)

XXXRET: for items we are committing to retain up to two copies of within the consortium

XXXDUP: for items identified as already having two duplicates within the shared storage facility

XXXDIS: for items that have been discarded in favor of a shared retention copy for the consortium.
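The "XXX" in each code stands for an existing location code, to which the retention suffix is appended.  As a minimal illustration only, assuming a hypothetical base location code and a simple string-suffix scheme (the real codes were configured in Voyager rather than generated by script), the naming might look like this:

```python
# Hypothetical sketch of the location-code naming scheme described above.
# The suffixes come from the project; the base code "AUSTACK" is invented.

RETENTION_SUFFIXES = {
    "PERM": "cultural heritage (ten or fewer U.S. holdings)",
    "RET":  "commitment to retain up to two copies within the consortium",
    "DUP":  "two duplicates already held at the shared storage facility",
    "DIS":  "discarded in favor of a shared retention copy",
}

def derive_retention_codes(base_code: str) -> dict:
    """Return the four new location codes derived from an existing code."""
    return {f"{base_code}{suffix}": meaning
            for suffix, meaning in RETENTION_SUFFIXES.items()}

# A hypothetical stacks code "AUSTACK" would gain AUSTACKPERM, AUSTACKRET,
# AUSTACKDUP, and AUSTACKDIS.
for code, meaning in derive_retention_codes("AUSTACK").items():
    print(code, "-", meaning)
```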

It was decided that location codes would be a better marker of these different commitments than a note in an item or holding record, for several reasons.  A location code makes it easier to run reports or search for the different statuses, and it makes identification by circulation staff and WRLC processing staff much simpler.  Location codes also lend themselves to batch processing with Voyager's pick-and-scan function, which places a lower load on system performance, a real consideration when making significant changes to a database containing roughly 11 million records.  Pick-and-scan uses barcodes and works easily with location code changes, while Voyager's Global Data Change module relies on bibliographic and holdings ID numbers and places greater stress on overall system performance when making batch changes on a large scale (a sketch of preparing such barcode batches appears below).

Thus, for every location code that originally existed at each WRLC school and the shared storage facility, four new location codes were created with the above suffixes, adding more than 600 new location codes to our shared catalog, and at least 33 so far for our university alone.  This has had a significant impact on several local workflows and has generated a great deal of confusion and uncertainty among staff in Cataloging and Acquisitions, as well as at all of the participating WRLC schools.  Issues that have arisen include how to handle replacements when books are lost or damaged, how to correct errors, and how to work with other schools and WRLC Central to communicate the transfer of retention commitments from one school to another.  There has also been confusion about where to send questions now that the task force has completed its work.  Stacey and I plan to address these issues in more detail in future columns.
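As an aside on the batch mechanics mentioned above: because pick-and-scan works from item barcodes rather than record IDs, the practical preparation step is to turn the analysis output into one barcode list per new location code.  Here is a minimal sketch in Python, assuming the report is a CSV with invented column names ("barcode", "current_location", "retention_category"); the actual SCS deliverables and Voyager input formats may differ:

```python
import csv
from collections import defaultdict
from pathlib import Path

# Hypothetical sketch: split a retention report into one plain-text barcode
# file per target location code, suitable for batch changes driven by
# barcode (as with pick-and-scan).  Column names are assumptions.

def split_report(report_path: str, out_dir: str = "pick_and_scan_batches"):
    batches = defaultdict(list)
    with open(report_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            # New code = existing location code + retention suffix.
            new_code = row["current_location"] + row["retention_category"]
            batches[new_code].append(row["barcode"])

    Path(out_dir).mkdir(exist_ok=True)
    for new_code, barcodes in batches.items():
        Path(out_dir, f"{new_code}.txt").write_text("\n".join(barcodes) + "\n")

# Usage (with an assumed report file name):
# split_report("scs_retention_report.csv")
```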

The batch location code changes were performed across the consortium's shared catalog over the course of a month or two at the end of 2014.  They were based on the analysis performed by Sustainable Collections Services in April 2014.  However, as we all know, library catalogs are living, non-static entities, and unfortunately a good deal of information had changed in the months between the analysis and the batch location changes.  This was compounded by the fact that we were still in the process of moving 100,000 items to storage, a project we outlined in a previous column.  Additionally, problems with the project's dataset were identified when the location changes were made.  Professor-owned copies of books on Reserve were accidentally included as retention copies when they should not have been.  We also discovered that our Visual Arts Collection of nearly 30,000 titles had accidentally been left out of the analysis.  Any item whose location had changed since the initial analysis, such as items moving from Reference to the stacks, had an erroneous location code applied.  We were able to correct many of these errors using the log files from the pick-and-scan process, and the analysis was re-run to include our Visual Arts Collection and exclude items on Reserve.  The analysis was also rerun, and retention commitments reallocated, when one school in the consortium was unable to participate in the project.  Additionally, the new location codes were not initially included in the limits in our discovery layer, Summon.  Although these issues were fixed relatively easily, if technical services staff had been included in some of the initial planning, some of them would not have occurred in the first place, and provisions could likely have been put in place to minimize the errors that resulted when the location changes were made.
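Correcting those errors from the pick-and-scan logs amounted to comparing each item's location at the time of the batch change against its location in the April 2014 analysis snapshot and flagging anything that had drifted.  A hedged sketch of that comparison, assuming invented CSV layouts for both the log and the snapshot:

```python
import csv

# Hypothetical sketch: flag items whose location changed between the
# analysis snapshot and the batch location change.  The log layout
# ("barcode", "location_before_change") and snapshot layout ("barcode",
# "location_at_analysis") are assumptions for illustration only.

def find_drifted_items(log_path: str, snapshot_path: str) -> list:
    with open(snapshot_path, newline="", encoding="utf-8") as fh:
        analysis = {r["barcode"]: r["location_at_analysis"]
                    for r in csv.DictReader(fh)}

    drifted = []
    with open(log_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            expected = analysis.get(row["barcode"])
            # Items that moved after the analysis (e.g., Reference to the
            # stacks) received the wrong code; list them for manual review.
            if expected and expected != row["location_before_change"]:
                drifted.append(row["barcode"])
    return drifted
```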

Although the project and its associated processes have been confusing for staff and have complicated certain workflows, at times to the point of bringing work to a standstill while we resolve questions, there have been some ancillary benefits on the technical services side.  As part of the analysis done by Sustainable Collections Services, we were provided with several remediation lists that have enabled us to do systematic database cleanup in certain areas.  We have been able to clean up records whose title and author in our local database did not match the corresponding record in OCLC, records that did not have holdings set in OCLC, and records lacking an OCLC number.
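The last of those categories is also easy to check locally against a MARC export.  A minimal sketch using pymarc, with an assumed export file name and the common "(OCoLC)" prefix convention in the 035 field; our actual remediation lists came from SCS rather than from a script like this:

```python
from pymarc import MARCReader

# Hypothetical sketch: list bibliographic records with no OCLC number in
# any 035 $a.  The file name and the "(OCoLC)" prefix rule are assumptions.

def records_missing_oclc(marc_path: str = "local_bibs.mrc") -> list:
    missing = []
    with open(marc_path, "rb") as fh:
        for record in MARCReader(fh):
            if record is None:  # skip records pymarc could not parse
                continue
            oclc_numbers = [
                sub
                for field in record.get_fields("035")
                for sub in field.get_subfields("a")
                if sub.startswith("(OCoLC)")
            ]
            if not oclc_numbers:
                # Fall back to the 001 control number to identify the record.
                control = record["001"].data if record["001"] else "unknown"
                missing.append(control)
    return missing
```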

Projects such as these are not necessarily meant to be perfect in their execution.  When performing any process on nine million records, there are bound to be errors and inconsistencies.  For example, the task force that outlined the scope of the project, its retention commitments, and its criteria acknowledged in its final report that there would be errors in the dataset, and it did not consider an inventory of titles identified for retention to be a good use of resources.  However, it is the technical services staff that handles the day-to-day, title-by-title issues as they come up, and it is imperative that they be included in the decision-making process for projects such as these from the beginning.  Doing so will reduce potential errors and will improve the overall outcome of the project.

 
