v31#2 It’s High Time — Metrics in the Administration of Higher Ed


Column Editor:  Darby Orcutt  (Assistant Head, Collections & Research Strategy, North Carolina State University Libraries, Box 7111, Raleigh, NC  27695-7111; Phone: 919-513-0364) 

Column Editor’s Note:  This column is a revised and abridged version of my remarks as a part of the National Information Standards Organization (NISO) Virtual Conference, Advancing Altmetrics: Best Practices and Emerging Ideas, December 13, 2017. — DO

A Bird’s Eye View

We’re going to look at this subject from a higher-level view, focusing on the uses and potential uses of metrics at the level of university administration, particularly with regard to research strategy. This perspective is important for understanding the larger context of the work that libraries do, and the potential roles for libraries in addressing these sorts of higher-level needs: the sorts of things we as libraries or librarians may be asked to do, should appropriately do, or should avoid doing. Perhaps most importantly, we need to be thinking about how we, not just as librarians but as faculty of our institutions, could and should contribute to the ongoing alignment of our metrics with our overarching institutional goals and mission.

What is Data For?

I find it useful to consider data through the lens of its purposes. These purposes often overlap, but broadly speaking: some metrics serve more or less mundane decision-making; some serve a communicative purpose (say, quantitatively representing the extent of faculty publication for marketing or for an associational survey); some serve a specific tactical purpose; and some serve a much broader strategic purpose.

It is always extremely important that we focus on these ends when we are working with any metrics, so that the cart doesn’t drive the horse.  There is an old adage in scientific practice that the “scale creates the phenomenon,” and we should keep this firmly in mind. With traditional scholarly publishing metrics, for example, we often look for higher numbers of citations or better impact factors for journals.  And it can be seductive, for tenure committees or for individuals, to look at some numbers and to allow them a certain intrinsic authority. But we need to bear in mind that all these quantitative measures are only significant to the extent that they serve as proxies for quality.

Faculty versus Administrator Views

Individual faculty tend to approach metrics from an on-the-ground perspective, and generally only their own. Their first interest is in “their” numbers, how they themselves are represented. I was reminded of this yet again in a meeting between a metrics vendor and our faculty governance group; the preponderance of questions and concerns from faculty revolved around how the data was collected, whether the process was fair, and why certain measures were reported and not others. From a faculty standpoint, the hazards of traditional metrics are often well-known, well-feared, and vary widely by discipline. What they tend to misperceive, quite frankly, are the risks associated with these hazards; in other words, how likely are the bad-case scenarios? Judging from spot checks of some services conducted by others, faculty across institutions often seem more likely to unintentionally omit publications from their own CVs than to lose credit for publications due to auto-harvesting errors.

Of course, we need to tease out the two major kinds of “error.” A discrete error generally stems from a specific mistake with a specific piece of metadata (say, an author’s name was spelled incorrectly). A systematic error derives from a larger coding issue, and may still be a simple mistake (for example, a column was accidentally omitted from the formula that sums citations) or may reflect an intentional coding choice (for example, deciding not to count data for “proceedings” because, while these are considered highly significant within engineering disciplines, they are considered supplementary at best within many other disciplines).
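To make the distinction concrete, here is a minimal sketch in Python; the records, field names, and numbers are all invented for illustration. A discrete error (a misspelled author name) distorts a single record, while a systematic choice (leaving the proceedings column out of the sum) skews every total it touches, and skews them unevenly across disciplines:

    # Minimal sketch of discrete vs. systematic error in a citation tally.
    # All records, field names, and numbers here are invented for illustration.
    records = [
        {"author": "Smith, J.", "journal_cites": 12, "proceedings_cites": 8},
        {"author": "Smtih, J.",  # discrete error: one misspelled name
         "journal_cites": 5, "proceedings_cites": 20},
        {"author": "Lee, K.", "journal_cites": 30, "proceedings_cites": 2},
    ]

    def total_cites(rec, include_proceedings=True):
        # Setting include_proceedings=False models a systematic coding choice:
        # every proceedings-heavy record is undercounted, not just one.
        total = rec["journal_cites"]
        if include_proceedings:
            total += rec["proceedings_cites"]
        return total

    # Discrete error: "Smtih, J." fails to merge with "Smith, J.",
    # so one person's output is split across two identities.
    by_author = {}
    for rec in records:
        by_author[rec["author"]] = by_author.get(rec["author"], 0) + total_cites(rec)
    print(by_author)

    # Systematic choice: dropping proceedings shifts every total,
    # and shifts them most for disciplines that publish in proceedings.
    print([total_cites(r, include_proceedings=False) for r in records])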

Faculty often fear that their contributions, especially contributions of certain kinds, will not be “counted” by their administration.  Thoughtful and well-informed university administrators, however, are cognizant of the issues with metrics, of the sorts of differential impacts that systematic issues may have, and of the cultural and practical differences between how these play out across various disciplines.

“If All You Have Is a Hammer, Everything Looks Like a Nail”

Quantitative measuring (that is, counting) should never simply substitute for thinking.  We need to always bear in mind that our metrics at best approximate, indicate, or correlate with our genuine interests.  High rates of citation often accompany scholarly work of quality and significance, but the latter are our true aims.

Considered properly, counting can significantly enable thinking — but again, it must be contextualized in a great many ways.

For one example, all data covers a range of time.  To put it in the language of the fine print on your mutual fund statements, “past performance does not guarantee future results.”  But it certainly may be indicative of such. In addition, some metrics by their nature only reveal trends when considered in aggregate over longer periods, particularly data that tends to fluctuate over shorter terms.
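As a minimal sketch of that point, in Python and with invented annual counts, aggregating noisy year-by-year figures over a longer window can reveal a trend that the single-year numbers obscure:

    # Invented annual citation counts for a hypothetical department.
    # Year to year they bounce around; over a longer window a trend emerges.
    annual_cites = [40, 62, 38, 71, 55, 83, 60, 95, 72, 104]

    def rolling_mean(values, window=3):
        # Average each consecutive run of `window` years.
        return [
            sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)
        ]

    print(annual_cites)                # noisy single-year figures
    print(rolling_mean(annual_cites))  # smoother multi-year view of the same data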

For another example, the scale at which datasets are used may make all the difference.  Is the data suitably scoped for the intended purpose? For most traditional metrics, for example, allowing counting to substitute for thinking in the evaluation of a single faculty member or even academic program would generally be irresponsible; however, using diverse counts at an aggregate university level might be “close enough” to note certain trends.

Library Roles and Contributions

In the present environment, and I think we’ll see this trend growing, research offices and other university administrators are ending up in new and different relationships with their libraries. On some campuses, much of the work of gathering, reporting, and even interpreting various metrics is being shifted to the libraries. On other campuses, the libraries are being left out of the conversation altogether. And, of course, there are all sorts of possibilities in between these poles. There are also many new commercial players in the realm of academic metrics, but most are already familiar to us: vendors we have worked with for many years as library vendors, now redefining themselves as information brokers in order to reach new markets within our larger institutions. Many of the databases, tools, and technologies used to create metrics, dashboards, and related products, from Web of Science to CrossRef, are already familiar to us, and draw on librarian skill sets in search strategies, metadata creation, and fostering information interoperability, just to name a few.

One of the librarian’s key roles traditionally has been promoting information literacy, and in the realm of data that includes educating our user communities about the full contexts of data. I do think we can responsibly help assuage the fears of individual faculty regarding the reliability of certain metrics, but we should also play a role in educating our administrators as to appropriate uses of metrics. Perhaps most significantly, the values of librarianship can help properly steer campus practices. Not only do we promote ethics around the informed use of contextualized information, but we are advocates for the sharing of information, and transparency across the community is key to the responsible use of metrics. This transparency applies both to the data itself (individual scholars will be less fearful and more supportive of data efforts if they can see and double-check the information about themselves and their work) and, just as importantly, to the policies and practices around administrative data. How is the data to be used? How should it not be used? Such transparency is especially useful in the training of new administrators, to ensure that a culture of responsible use of metrics persists despite personnel changes.

In all of this, we as librarians probably ought to tread carefully, as many of the potential roles for libraries in this new landscape put us in new — and perhaps unwanted — relational structures with our users and institutions.  Could existing relationships with individual researchers be potentially tainted by new perceptions of librarians as part of the “them” of upper university administration? How can we effectively remain in the hallowed “neutral” territory of facilitating information sharing and proper use if we ourselves are enmeshed in the gathering, outsourcing, and reporting of institutional metrics?  Obviously, any of the potentially fraught issues in this regard might further play out differently across institutions with varying histories, librarian faculty status and tenure differences, funding realities, governance structures, etc. — but the first step at every institution is having a seat at the table and making clear how universities can leverage the existing expertise of their libraries to help foster success at this level of the institution as well.  

Next time:  Altmetrics in the Administration of Higher Ed. — DO

 
