v31#4 Asking the Right Questions: Bridging Gaps Between Information Literacy Assessment Approaches

Oct 4, 2019

by Alison J. Head  (Executive Director, Project Information Literacy) 

and Alaina C. Bull  (First Year Experience Librarian, University of Washington Tacoma) 

and Margy MacMillan  (Senior Researcher, Project Information Literacy) 

Abstract

A large volume of research on information literacy assessment has measured students’ skills and competencies against librarians’ expectations.  Far fewer studies have reported on the holistic experiences of students with finding, using, and creating information to fulfill their academic and personal needs.  Consequently, the picture emerging from the assessment literature often portrays students as unskilled, uncreative, and uninterested when fulfilling course research assignments.  Drawing on our combined experience at community colleges and universities in the U.S. and Canada and at Project Information Literacy (PIL), this article introduces a typology for classifying and critiquing four levels of information literacy assessment (micro, meso, macro, and mega) and presents a series of reflective questions to spark useful connections among these approaches and to maximize librarians’ teaching, learning, and assessment outcomes.

Introduction

How students find, use, and create information has intrigued librarians for decades.  As the Internet and social media have rapidly and dramatically redefined what it means to be part of an informed society, academic librarians have turned even more of their attention to teaching, learning, and information literacy assessments.  In the past ten years, the sheer number of assessment articles, books, and conference presentations has increased exponentially, signaling the importance of this inquiry.1 

The quality and potential usefulness of the assessment literature, however, remains a topic of discussion and debate among librarians.2  According to some critics, much of the literature is anecdotal and relies on methods that lack replicability from one class or library setting to the next.  Others have noted a preponderance of research based on deficit-based tests that measure students’ atomized skills against librarians’ expectations for information literacy competencies.  These critics instead call for strengths-based assessment focused on how students solve their information problems.

In this article, we draw upon our combined assessment experience at U.S. and Canadian community colleges and universities, and on our multi-institutional studies at Project Information Literacy (PIL), a national research institute that studies students’ research practices, to introduce a typology for classifying four approaches to information literacy assessment.  We also present reflective questions for strengthening the ties among these approaches, so that librarians’ practices and instruction methods may ultimately advance and improve teaching and learning outcomes.

Levels of Assessment

Information literacy assessment methods are as varied as the librarians who have developed and shared their creative tools for measuring student success.  To bring order to these individual accounts and to capture the breadth and depth of the assessment field, we have developed a typology that draws on a broad swath of the assessment literature.

We define four levels of assessment:  (1) micro: assessment within a single course; (2) meso: assessment at a program or institutional level; (3) macro: assessment of skill sets across multiple institutions; and (4) mega: assessment contextualized against larger trends within society.3  Table 1 summarizes each assessment category in our typology and its related details.

Each assessment approach in our typology has its own goals, methods, motivations, and desired outcomes.  For instance, micro-level assessments provide insights into specific measures of student learning in individual classroom situations.  In many cases, we use these assessment methods to document the impact of our instructional efforts.  Much of this assessment is done quickly to check the validity of instructional approaches.  In class, librarians frequently use tools such as “think-pair-share” exercises, surveys embedded in learning management systems, “Poll Everywhere” polls, and one-minute papers that identify keywords, all of which check for quick understanding of surface-level concepts.

Meso-level assessment methods gather data to measure how effective librarians have been at reaching all students with information literacy instruction.  Data can include the percentage of classes taught in a department, the total number of students reached plotted against overall trends in retention or GPA, and the types of items taught in different types and levels of courses, as well as student performance on particular outcomes.  This approach often incorporates data on the achievement of particular institutional outcomes, and meso methods frequently look at information literacy instruction from the specific perspective of proving that it should be retained.  In some cases, this kind of data is compiled across institutions as a macro-level assessment to reveal comparisons between academic libraries.  These macro efforts typically use standardized question banks and are often implemented by library consortia.
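To make the kind of tallying behind meso-level reports concrete, the short sketch below shows one way a library might compute instruction reach by department and set it beside an institutional retention figure.  It is a hypothetical illustration written in Python with the pandas library; the departments, section counts, and retention numbers are invented for the example and are not drawn from any study cited here.

import pandas as pd
# Hypothetical instruction-session log: one row per course section that
# received library instruction this term (departments and counts invented).
sessions = pd.DataFrame({
    "department": ["History", "History", "Biology", "English", "English", "English"],
    "students_reached": [28, 31, 45, 22, 25, 27],
})
# Hypothetical departmental totals and institutional retention figures.
departments = pd.DataFrame({
    "department": ["History", "Biology", "English"],
    "total_sections": [12, 15, 20],
    "first_year_retention": [0.81, 0.84, 0.79],
})
# Meso-level view: share of sections receiving instruction and total students
# reached, placed alongside the retention figures for comparison.
reach = (
    sessions.groupby("department")
    .agg(sections_taught=("department", "size"),
         students_reached=("students_reached", "sum"))
    .reset_index()
)
summary = reach.merge(departments, on="department")
summary["pct_sections_taught"] = (
    100 * summary["sections_taught"] / summary["total_sections"]
).round(1)
print(summary[["department", "pct_sections_taught",
               "students_reached", "first_year_retention"]])

Even a small summary like this makes plain why such numbers are better suited to demonstrating coverage than to explaining what, if anything, students learned.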

With the constant threat to library funding, it is understandable that librarians assess from a strategic, and some might even say defensive, mindset.  Libraries are under relentless public scrutiny, and they are subject to reduced funding, whether on a campus, in a county, or nationwide.  In academia, we are always expected to do more with less.  In a society focused on data-driven decision making, we as librarians are constantly trying to justify why and how our work has impact.  Embracing assessment has allowed us to come to negotiations prepared to discuss our impact and to back our statements up with data.

One departure from the factors that drive these library-centered assessments is the series of ongoing mega-level studies we have conducted with PIL.  Unlike macro studies among pre-existing groups of institutions, PIL works to ensure a wide representation of institutional types and locations.  To date, there have been ten large-scale analytical research studies about student research in the digital age.  We collect data about how students solve information problems for coursework and in their daily lives.  We look for robust relationships in students’ research practices across schools that suggest generalizable trends (e.g., the growing use of Wikipedia for course research).15  In this sense, we examine what students actually do, rather than what we think they should do.

At PIL, we have used interviews, surveys, and, most recently, in our news study,16 direct observation and a computational analysis of social media as methods for assessing information practices through the lens of the student experience.  These methods allow for a deeper understanding of human experiences, attitudes, and opinions about the research process, from the students’ perspective.  Since 2009, more than 22,000 young adults enrolled in 89 U.S. public and private colleges and universities, community colleges, and 34 high schools have been interviewed or surveyed.  PIL’s research goal is to fill in missing pieces of the information literacy puzzle by finding out how early adults (in their own words and based on their own experiences) put their information literacy competencies into practice in learning environments in a digital age, regardless of how they may measure up to standards for being information-literate.

Minding the Gaps

Even though there are different rationales for each of these four levels of assessment, a symbiotic relationship does exist among the approaches in our typology.  We see clear opportunities for each level to borrow from the others to enrich the entire assessment process.  Still, assessment data may end up being incomplete.  Even when the data seem straightforward as a measure of students’ skills, the results may not always point clearly to the next step.  While assessment data points are effective as bargaining tools, they are less helpful in leading to a more thorough understanding of information literacy and in promoting change.  A deeper understanding of this interplay, and knowing the “right questions” to ask for reconciling these approaches, we argue, holds great promise for maximizing information literacy teaching, learning, and assessment outcomes.

In recent years, there has been increasing interest in moving beyond status quo assessment models.  For instance, Magnus, Belanger, and Faber have discussed the importance of incorporating feminist and critical pedagogy into assessment efforts in their 2018 article “Towards a Critical Assessment Practice.”17  Similarly, we need to critically examine the questions we ask students when developing assessments intended to drive change.

The kinds of questions librarians frequently ask reveal the (often) narrow kinds of information literacy we value: the use of library terms and systems.  Such questions are often shaped more by external needs for particular kinds of data, or by external pressures to prove the worth of a program, than by a genuine desire to improve learning.  Unsurprisingly, these questions fall into what the scholarship of teaching and learning (SoTL) would call the “what works” category.18  Examples might be whether a particular type of teaching increases students’ use of library resources or their confidence in using them.  The data gathered from students are often coded according to whether, or to what extent, students demonstrate a positive emotional response to interventions.

Even when we think we are assessing learning gains in information literacy, our questions often fall short of our goal of improving teaching and learning.  Instead, we are often only assessing students’ ability to memorize library jargon.  In other cases, we are evaluating the identification of information containers that have limited application in the dynamic information environment where students fulfill classroom and personal research needs.

At the same time, our assessments tend to privilege certain information behaviors (e.g., ones that mimic our own professional ideals as librarians for seeking information to be used for academic assignments or learning).  While these assessments may provide useful data to the librarian or about the library program, it is arguable whether that information is actually about the kind of deep learning we claim we want to support. If the questions we ask are not about learning, it is hardly surprising we struggle to implement results that improve the experience of students.

Like many undergraduate researchers, we librarians, too, may be rushing to prove a position before we understand enough about the context (similar to all of the first-year papers on why marijuana must/must not be legalized).  Before we can really tell “what works” for learning, we need to understand precisely what students actually do. SoTL would suggest asking more “what is” questions — ones that explore what is actually happening when students are doing something, regardless of whether it matches an instructor’s expectations, fits into wider frameworks, or serves particular institutional goals.  We cannot measure the difference instruction makes until we truly understand what is happening as students are learning.

For instance, an assessment librarian might ask questions such as these:  “What are the first steps students in a freshman composition class take when searching for sources for an argumentative essay?”  “Is there a difference in how students in 300-level history courses assess sources in a class where issues of equity and diversity are explicitly addressed in the readings?”  “What aspects of the research process do students find most satisfying and most frustrating, and are there differences related to year of study or whether the course is in the student’s main discipline?” 

In these cases, data come from material generated while students experience learning, such as reflective journals, lab work, and successive drafts of papers, or from protocols like think-alouds.  PIL studies often incorporate this kind of data through focus groups and interviews that ask students to narrate their own experiences with information.  The insights gained can illuminate why an intervention might have “worked” and therefore how to implement changes that create the conditions that foster learning.  Table 2 presents a framework of what some of these questions may be for each level of assessment in our typology.  While they are broken down by level, the questions in Table 2 can also spark useful connections between the approaches and may be used outside the levels we associate them with here.

To Prove and Improve

Information literacy instruction has dramatically changed as libraries have situated themselves as centers of pedagogy.  Still, many of our assessment methods remain fossilized. Far too often, librarians reuse tested assessment methods that focus on proving worth rather than measuring actual learning.  The typology we present in this article is intended to help us recognize where we can usefully borrow questions from different levels and types of assessments. For instance, we may want to look for tested macro questions that might identify learning gains, or micro assessments that can scale up to provide deeper insights.  What can we learn from the voices of students heard in more qualitative work that can help us ask better questions? What changes when we move the focus of assessment from proving something works to understanding and improving the student experience?

In recent years, as Donna Lanclos and Andrew Asher have noted, there has been a promising expansion of work that explores IL from the student perspective.19  These studies show the benefits of asking different questions about IL and assessment and using student experience as a lever for change.  This approach has the benefit of destabilizing comfortable assumptions while deepening our understanding of learning, a necessary condition for real change.  

We are not saying that library assessment is broken.  Instead, we contend that information literacy assessment would benefit from both proving and improving teaching and learning outcomes.  In order to accomplish this, we need to change our questions.  A simple example in the context of instruction is to ask students two quick questions at the end of the session: “What is one thing you learned?  What are you still unclear on, or what questions do you have?”  This kind of assessment shows where students get stuck, providing useful feedback on something the librarian can actually rework and improve.  It might even prompt a “what is” question, such as “What is happening in students’ lived experiences that might be contributing to their confusion?”  This is the kind of question that can inform the librarian’s response.

Ultimately, we must bring the same intentionality to assessment as we do to teaching and learning.  We need to incorporate the same kinds of reflective practices as we do in our instruction.  By doing so, we can ask questions that prove our worth while also improving instruction, an all-important goal for advancing the profession.

About the Authors

Alison J. Head, Ph.D. is the Founder and Director of Project Information Literacy (PIL) and a Visiting Scholar at Harvard Graduate School of Education.

Alaina C. Bull is a Research Analyst at PIL and the First-Year Experience Librarian at the University of Washington Tacoma.

Margy MacMillan is Senior Researcher at PIL, an I-SoTL Outreach Associate, and Professor Emerita at Mount Royal University, where she spent 27 years working on information literacy initiatives.

Acknowledgements

We are grateful to Barbara Fister, PIL’s Scholar-in-Residence, for making recommendations for this paper and her keen insights, and to Steven Braun for designing the tables.  

Endnotes

1.  When we conducted a search of “all databases” in the University of Washington library portal using the terms “information literacy assessment,” we found more than 500 books and articles published between 2008 and 2018.  Search conducted: January 3, 2019.

2.  Andrew Walsh, “Information Literacy Assessment: Where Do I Start?,” Journal of Librarianship and Information Science  Vol. 41, No. 1 (March 2009): 19-28, http://eprints.hud.ac.uk/id/eprint/2882/1/Information_literacy_assessment_where_do_we_start.pdf;  Zoe Fisher, “Information Literacy Assessment (Day 88/100)” (blog), (June 8, 2017), https://quickaskzoe.com/2017/06/08/information-literacy-assessment-day-88100/.

3.  Brad Wuetherick and Stan Yu, “The Canadian Teaching Commons: The Scholarship of Teaching and Learning in Canadian Higher Education,” New Directions for Teaching and Learning (2016), No. 146: 23-30, https://onlinelibrary.wiley.com/doi/pdf/10.1002/tl.20183.

4.  “Framework for Information Literacy for Higher Education,” American Library Association, (February 9, 2015), http://www.ala.org/acrl/standards/ilframework.

5.  Megan Oakleaf, Academic Library Value: The Impact Starter Kit (Syracuse, NY: Dellas Graphics, 2017), https://www.alastore.ala.org/content/academic-library-value-impact-starter-kit.

6.  Terrel Rhodes, Assessing Outcomes and Improving Achievement: Tips and Tools for Using Rubrics (Washington, DC: Association of American Colleges & Universities, 2010), https://www.aacu.org/value-rubrics.

7.  “The Tests,” Project SAILS, (2016), https://www.projectsails.org.

8.  “NSSE,” National Survey of Student Engagement, (2019), http://nsse.indiana.edu.

9.  “WASSAIL,” University of Alberta Libraries, (2019), https://guides.library.ualberta.ca/augustana/information-literacy/wassail.

10.  “The ERIAL Project,” The ERIAL Project, (2013), http://www.erialproject.org.

11.  “What is PIL?” Project Information Literacy, (2019), https://www.projectinfolit.org.

12.  “Our Work,” Ithaka S+R, (2019), https://sr.ithaka.org/our-work/.

13.  “Pew Research Center,” Pew Research Center, (2019), https://www.pewresearch.org.

14.  “GALLUP,” GALLUP, (2019), https://www.gallup.com.

15.  Alison J. Head and Michael B. Eisenberg, “How Today’s College Students Use Wikipedia for Course-Related Research,” First Monday, Vol. 15, No. 3, (March 1, 2010), http://firstmonday.org/ojs/index.php/fm/article/view/2830.

16.  Alison J. Head, John Wihbey, P. Takis Metaxas, Margy MacMillan, and Dan Cohen, “How Students Engage with News: Five Takeaways for Educators, Journalists, and Librarians,” Project Information Literacy Research Institute, (October 16, 2018), https://www.projectinfolit.org/uploads/2/7/5/4/27541717/newsreport.pdf.

17.  Ebony Magnus, Jackie Belanger, and Maggie Faber, “Towards a Critical Assessment Practice,” In the Library with a Lead Pipe, (Oct. 31, 2018), http://inthelibrarywiththeleadpipe.org/2018/towards-critical-assessment-practice/.

18.  Pat Hutchings, “Approaching the Scholarship of Teaching and Learning,” in Opening Lines: Approaches to the Scholarship of Teaching and Learning, edited by Pat Hutchings (Menlo Park, Calif.: Carnegie Foundation for the Advancement of Teaching, 2000), https://files.eric.ed.gov/fulltext/ED449157.pdf.

19.  Donna Lanclos and Andrew D. Asher, “‘Ethnographish’: The State of the Ethnography in Libraries,” Weave: Journal of Library User Experience Vol. 1, No. 5, (2016), http://dx.doi.org/10.3998/weave.12535642.0001.503.
