Tuesday, February 19, 2013

Collection Assessment: Going in the right direction


For the last six months or so, I've been trying to develop a more systematic method of evaluating our collections, incorporating different kinds of measures.  So it's nice to see examples from other libraries, as demonstrated by the slew of posters and presentations from the last Library Assessment Conference.  Here are highlights from a few that piqued my interest...

From UC Berkeley, Susan Edwards et al. describe their evaluation of the library's collections based on three types of measures: collection uniqueness (overlap with their closest peer), direct usage (cross-tab analysis of book usage by location and patron affiliation), and indirect usage (citation analysis of dissertations).  This is very much the direction I've been working in, evaluating the collection from different angles using these same measures (among others).  For collection uniqueness, they point out that while a fair amount of overlap is appropriate, there is no national benchmark for overlap percentages.  How unique should the collection be?  I'd be interested in collaborating with UC Berkeley to come up with that national benchmark for overlap.  But the citation analysis was most interesting, in part because they used a random selection of citations.  A major obstacle to conducting citation analyses is the time and labor needed to gather and record every citation, and much of that effort is unnecessary if a random sample is drawn appropriately.  I'd really like to learn exactly how they did their selection.  From this analysis, they learned that their monograph collection did not meet the needs of social welfare students as well as other collections did.  An interesting feature of their poster was an interactive slide on which attendees could add stickers indicating how well they thought their own libraries met their users' needs.
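Just to make the sampling idea concrete, here is a rough sketch (in Python) of how a random sample of dissertation citations could be drawn and checked against holdings. The file names, column names, and the title-matching shortcut are all my own assumptions, not a description of the Berkeley team's actual workflow.

```python
import csv
import random

SAMPLE_SIZE = 400  # assumed sample size; choose one that gives an acceptable margin of error


def load_citations(path):
    """Read a flat file of dissertation citations, one citation per row."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


def draw_sample(citations, n=SAMPLE_SIZE, seed=2013):
    """Draw a reproducible simple random sample of citations."""
    rng = random.Random(seed)
    return rng.sample(citations, min(n, len(citations)))


def summarize_holdings(sample, held_titles):
    """Count how many sampled citations match a title the library holds."""
    held = sum(1 for c in sample if c.get("title", "").strip().lower() in held_titles)
    return held, len(sample)


if __name__ == "__main__":
    citations = load_citations("dissertation_citations.csv")   # hypothetical citation export
    with open("holdings_titles.txt", encoding="utf-8") as f:    # hypothetical list of held titles
        held_titles = {line.strip().lower() for line in f if line.strip()}
    sample = draw_sample(citations)
    held, total = summarize_holdings(sample, held_titles)
    print(f"{held}/{total} sampled citations ({held / total:.1%}) matched local holdings")
```

The whole point is that coding a few hundred randomly selected citations can stand in for coding thousands, as long as the sample is drawn from the full citation pool rather than from a convenient subset.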

From the University of Maryland University College Library, Lenore England and Barbara J. Mann describe their efforts to centralize the evaluation of electronic resources.  Their poster described the criteria included in the evaluations, as well as the methods of communicating with faculty and students about the review process.  Most interesting was a LibGuide used both to document the process and to communicate its progress to those who may be most affected by collection development decisions.  The LibGuide not only makes the process transparent, but also provides an opportunity for comments from stakeholders.  This may be a useful method to employ in our next go-around of budget cuts.

Alicia Estes and Samantha Guss from NYU described their methods for Data Gathering and Assessment for Strategic Planning.  This was accomplished using a team-based approach, with librarians drawn from a wide range of the library's divisions.  The team gathered data to be used in the planning process, including summarizing recent library assessment activities, producing an inventory of the data already collected, and "identifying trends."  In addition to providing data for strategic planning, the poster listed some lessons learned from this project: the need for more training in gathering, analyzing, and understanding statistics; the need for an individual explicitly responsible for gathering and managing data ("to 'own' assessment"); and, most notably, the need for a "more uniform process for data collection."  Alicia and Samantha, I feel your pain.

But this is a good lead-in to a set of posters on developing such processes and repositories.  Joanne Leary and Linda Miller describe Cornell Library's implementation of LibPAS for their annual data collection.  This caught my eye because we, too, are implementing LibPAS as a central repository for our statistics.  Some of the challenges, opportunities, and "Conceptual Shifts" seemed quite familiar, including the "chance to review and rethink" data collection, the challenge of a large and complicated organization, and the shift to having standardized data that is immediately available.  Although it's a little late for us to learn from their efforts, it is good to know with whom we could collaborate or to whom we could go for ideas.  Nancy B. Turner, from Syracuse University, described their use of SharePoint for data collection.  Their document repository was most intriguing, with its "structured metadata for filtering results."  Finally, there is the poster from Kutztown University Library (you learn something new every day), which describes their efforts to combine their locally developed data repository system (ROAR) with the university's TracDat system.  Again, this caught my eye because of our use of TracDat for campus assessment.

Of course, the latest efforts have been to associate use of library resources and services with student outcomes, notably grades.  The poster from the University of Minnesota focused on making this connection using data already available to the library: circulation, computer workstation logins, e-resource logins (mostly from off-campus users), registration for library instruction, and individual consultations.  Despite certain limitations of these data, they were able to demonstrate clear quantitative associations between several of these measures and student grades and re-enrollment.  They do not mention whether these associations were tested for statistical significance, but I am definitely interested in their methods.
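Out of curiosity, here is what a quick significance check on that kind of data might look like. This is purely my own sketch, assuming a per-student extract with hypothetical `used_library`, `gpa`, and `re_enrolled` columns; it is not the Minnesota team's analysis.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-student table: anonymized ID, a 0/1 flag for any library use,
# term GPA, and a 0/1 flag for whether the student re-enrolled the next term.
students = pd.read_csv("student_library_use.csv")

users = students[students["used_library"] == 1]
nonusers = students[students["used_library"] == 0]

# Difference in mean GPA between library users and non-users (Welch's t-test).
t_stat, gpa_p = stats.ttest_ind(users["gpa"].dropna(),
                                nonusers["gpa"].dropna(),
                                equal_var=False)

# Association between library use and re-enrollment (chi-square on a 2x2 table).
table = pd.crosstab(students["used_library"], students["re_enrolled"])
chi2, enroll_p, dof, _ = stats.chi2_contingency(table)

print(f"GPA difference: t = {t_stat:.2f}, p = {gpa_p:.4f}")
print(f"Re-enrollment:  chi2 = {chi2:.2f} (df = {dof}), p = {enroll_p:.4f}")
```

Of course, even a statistically significant association at this level says nothing about causation, but it would at least tell us whether the patterns are more than noise.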

Overall, I realized how much I missed from last year's Library Assessment Conference and what I hope to contribute this coming year.
