Tuesday, February 19, 2013

Collection Assessment: Going in the right direction


For the last six months or so, I've been trying to develop a more systematic method of evaluating our collections, incorporating different kinds of measures.  So it's nice to see examples from other libraries, as demonstrated by the slew of posters and presentations from the last Library Assessment Conference.  Here are highlights from a few that piqued my interest...

From UC Berkeley, Susan Edwards et al. describe their evaluation of the library's collections based on three types of measures: collection uniqueness (overlap with their closest peer), direct usage (cross-tab analysis of book usage by location and patron affiliation), and indirect usage (citation analysis of dissertations).  This is very much the direction I've been working in, evaluating the collection from different angles using these exact same measures (among others).  For collection uniqueness, they point out both that having a fair amount of overlap is appropriate and that there is no national benchmark for overlap percentages.  How unique should the collection be?  I'd be interested in perhaps collaborating with UC Berkeley to come up with that national benchmark for overlap.  But the citation analysis was most interesting, in part because they used a random selection of citations.  A major obstacle to conducting citation analyses is the time and labor necessary to gather and record each citation.  This is really not necessary if a random selection is used appropriately.  I'd really like to learn exactly how they did their selection.  From this analysis, they learned that their monograph collection for the social welfare students did not meet those students' needs as well as other collections did.  An interesting feature of their poster was an interactive slide on which visitors could add stickers indicating how well they estimated their own libraries met their users' needs.
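Out of curiosity, here is a minimal sketch of how an "appropriate" random selection might work, assuming the full citation list has already been extracted from the dissertations; the citation list, sample size, and holdings check below are all hypothetical, not a description of the Berkeley team's actual method:

    import random
    import math

    # Hypothetical: every citation pulled from a set of dissertations.
    all_citations = [f"citation-{i}" for i in range(12000)]

    # Draw a simple random sample rather than recording every citation.
    random.seed(42)
    sample = random.sample(all_citations, 400)

    # Hypothetical holdings check; in practice this would be a catalog/ILS lookup.
    def held_locally(citation):
        return int(citation.split("-")[1]) % 3 != 0  # stand-in for an OPAC query

    hits = sum(held_locally(c) for c in sample)
    p = hits / len(sample)

    # 95% margin of error for the estimated proportion of cited items held locally.
    moe = 1.96 * math.sqrt(p * (1 - p) / len(sample))
    print(f"Estimated {p:.1%} of cited items held locally (plus or minus {moe:.1%})")

With a few hundred sampled citations, the estimate is usually tight enough for collection decisions, which is what makes the random-selection approach so much cheaper than recording everything.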

From the University of Maryland University College Library, Lenore England and Barbara J. Mann describe their efforts to centralize the evaluation of electronic resources.  Their poster described the criteria included in the evaluations, as well as the methods of communication with faculty and students regarding the review process.  What was most interesting was the use of a LibGuide to both document the process and communicate its progress to those who may be most affected by the collection development decisions.  The LibGuide not only makes the process transparent, but also provides the opportunity for comments from the stakeholders.  This may be a useful method to employ in our next go-around of budget cuts.

Alicia Estes and Samantha Guss from NYU described their methods for Data Gathering and Assessment for Strategic Planning.  This was accomplished using a team-based approach, with librarians from a wide range of divisions of the library.  The team gathered data to be used in the planning process, including summarizing recent library assessment activities, discovering and producing an inventory of data collected, and "identifying trends."  In addition to providing data for strategic planning, the poster listed some lessons learned from this project.  These included discovering a need for more training in gathering, analyzing and understanding statistics, the need for an individual explicitly responsible for gathering and managing data ("to 'own' assessment"), and most notably, the need for a "more uniform process for data collection."  Alicia and Samantha, I feel your pain.

But this is a good lead-in to a set of posters on developing such processes and repositories.  Joanne Leary and Linda Miller describe Cornell Library's implementation of LibPAS for their annual data collection.  This caught my eye because we, too, are implementing LibPAS as a central repository for our statistics.  Some of the challenges, opportunities, and "Conceptual Shifts" seemed quite familiar, including the "chance to review and rethink" data collection, the challenge of a large and complicated organization, and the shift to having standardized data that is immediately available.  Although it's a little late for us to learn from their efforts, it is good to know with whom we could collaborate or to whom we could go for ideas.  Nancy B. Turner, from Syracuse University, described their use of SharePoint for their data collection.  Their document repository was most intriguing, with its "structured metadata for filtering results".  Finally, there is the poster from Kutztown University Library (you learn something new every day), which describes their efforts to combine their locally grown data repository system (ROAR) with the university's TracDat system.  Again, this caught my eye because of our use of TracDat for campus assessment.

Of course, the latest efforts have been to associate usage of library resources and services with student outcomes, notably grades.  The poster from the University of Minnesota focused on using data that are already available to the library to make this connection.  This included circulation, computer workstation logins, e-resource logins (mostly from off-campus users), registration for library instruction, and individual consultations.  Despite certain limitations of these data, they were able to demonstrate clear quantitative associations between several of these measures and student grades and re-enrollment.  They do not mention whether these associations were tested for statistical significance, but I am definitely interested in their methods.
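Since the poster doesn't describe their methods, here is only a sketch of how one might test such associations for significance, assuming a flat per-student table joining library activity with registrar data; every column name and number below is invented:

    import pandas as pd
    from scipy import stats

    # Hypothetical per-student table; in practice this would come from joining
    # library login/circulation data with registrar records.
    df = pd.DataFrame({
        "used_library": [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1],
        "re_enrolled":  [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1],
        "gpa":          [3.4, 3.1, 2.2, 3.6, 2.9, 2.0, 3.3, 3.8, 2.5, 3.0, 2.4, 3.5],
    })

    # Re-enrollment vs. library use: chi-square test on a 2x2 table.
    table = pd.crosstab(df["used_library"], df["re_enrolled"])
    chi2, p_chi, _, _ = stats.chi2_contingency(table)
    print(f"Re-enrollment association: chi2={chi2:.2f}, p={p_chi:.3f}")

    # GPA for users vs. non-users: two-sample t-test (unequal variances).
    users = df.loc[df["used_library"] == 1, "gpa"]
    non_users = df.loc[df["used_library"] == 0, "gpa"]
    t, p_t = stats.ttest_ind(users, non_users, equal_var=False)
    print(f"GPA difference: t={t:.2f}, p={p_t:.3f}")

The hard part, of course, is not the test itself but getting permission to join library data with the registrar's records in the first place.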

Overall, I realized how much I missed from last year's Library Assessment Conference and what I hope to contribute this coming year.

Sunday, February 10, 2013

Favorite TEDTalk of the week

For the last few months, I've been trying to schedule time on Sunday mornings to watch the latest TEDTalks posted during the week.  Today, I'm catching up after missing a couple of weeks for various reasons.  While most are quite interesting, I did want to highlight one that I found especially intriguing and inspirational.



Tyler DeWitt talks about his efforts as a middle-school science teacher to explain science without the "tyranny of precision" and without conforming to the "cult of seriousness".  His key line is, "Let me tell you a story...", advocating storytelling as a tool for engaging students.  Tyler seems like a person who was born and raised in a cult and has just thought of, and is bewildered by, ideas such as speaking your mind and the freedom to believe in any (or no) religion.  He (metaphorically) wonders why others haven't thought of this before.  It's almost as if Tyler believes he's the first person to use storytelling to reach students in science.  And it's quite reasonable for him to believe this.  He, himself, has been indoctrinated into the "cult of seriousness" and has been complicit in the "tyranny of precision" through his own science education and his training to teach and write about science.  So we can forgive his seeming egocentrism and examine the meat of his argument: make the teaching of science engaging and inspiring by using simpler language, metaphors the students can relate to, and occasional "little lies".  He winces when people call it "dumbing down" (you'd be surprised at how many of the negative comments use this exact phrase), but he stands by the approach.  The "little lies" are another aspect that some people take exception to, but Tyler affirms it is better that students learn overall concepts that may not be 100% accurate than not learn the concepts at all.

I always find it interesting to read the comments on TEDTalks to see what others think.  Inevitably, there are the gushing "Right on!" and "Amen!" comments, but just as inevitably there are naysayers.  I think this exchange is good and right.  And I found it quite interesting that the bulk of the negative comments came from fellow science teachers who express the exact sentiments Tyler advocates against: a tyranny of precision, objections focused on the "little lies", and worries about the "dumbing down" of science.  I think the concerns about the "little lies" are due to a poor choice of words.  From what I understood, Tyler is not advocating lying per se, but rather not being 100% precise.  Using an example from his talk: by leaving out the fact that a few viruses use RNA instead of DNA, he avoids confusing students as they attempt to understand the basic concept of bacteriophage viruses.  The more exceptions you throw into an explanation, the harder it is to understand.  So, while Tyler is not telling the whole truth, he is, nonetheless, telling the truth.  The second issue, "dumbing down", is also about choice of words, in this case by the naysayers.  Tyler emphasizes "simplifying" the language, not "dumbing it down".  I believe the difference is based on your assumptions about how learning should happen.  If you believe it is the individual student's responsibility to "do the work" and understand it on his or her own, then using simpler language is "dumbing down".  If you believe it is the responsibility of the teacher to explain and to, well, teach, then using simpler language is one tool among many.

Finally, the context of the class is key to the teaching methods used.  Tyler (like most science teachers in primary, secondary, and lower-level undergraduate education) is teaching students, many of whom will not even go to college, let alone become a scientist or science teacher.  His goals are to have his students understand the basic concepts of science and the scientific method, and to inspire interest in discovering the ways of nature.

So, what does this have to do with librarianship?  Well, librarianship is an extension of education, and we, too, lean on library-centric terms and teach searching skills using complex concepts like "Boolean" and "relevance".  Just as with science teachers, there are some who think we should teach the terminology and not "dumb it down".  I believe there is a compromise of sorts - teaching the terminology by telling a story.  Similarly, by using metaphors and storytelling to explain concepts of information literacy, our goal should not be to make our students professional searchers, but rather to enable them to find relevant and useful information and to evaluate their sources and the potential biases of those sources.

Saturday, February 9, 2013

The data you need?

Walt Crawford, who has done some rather amazing analyses of library data, notably the freely available data from IMLS (for public libraries) and NCES (for academic libraries), appears to be feeling a little, well, under-appreciated.  In his post, "The data you need? Musings on libraries and numbers" (which he admits to having edited "to reduce the whininess"), he expresses his concern that there appears to be a lot of data out there but nobody seems to care.  He cites a series of examples, including the low sales of his own work, the apparent demise of Tom Hennen's American Public Library Ratings (which, admittedly, I was previously unaware of), and the unfortunate circumstances that led a PhD colleague to pursue other avenues because there were no jobs analyzing data in libraries.  This last example hits home with me because I feel quite fortunate to have the title of Collection Assessment Librarian.  Not only am I paid to analyze data about our collections, but that is my primary responsibility; it was not tacked on to the list of responsibilities of the Collection Development Librarian or the Reference Librarian or even the Dean.  This is what I do.  I do not mean to boast, but rather to express my appreciation.  I also hope to point out that while such data analysis work may be under-appreciated, I think there is interest, however scattered it may be.

But this is a general problem of the LIS field itself - are we a science or are we a profession?  Can we be both?  The science of LIS implies that data is analyzed to answer fundamental questions regarding the who, what, when, where, why, and how of libraries and information.  But the big questions are almost always asked by the academicians.  Those in the profession are generally more concerned with the little questions regarding their collections, their budgets, their users.  What I think is needed is a greater connection of the little questions to the bigger ones.  How is the collection at my library affected by the economic forces of scholarly communication?  In exactly what ways will the local, state, and national economies affect the services and collections of my library?

My concern is that we are not preparing professional librarians to make these connections.  While "research" is mentioned in 16 of the approximately 90 sections/standards in the 2008 Accreditation Standards, courses in research methods and data analysis are only haphazardly required of LIS graduates.  Of the three LIS schools in Texas, only UT requires a course in research methods, and even UT does not require statistics.  While I don't think a practicing librarian needs the same training and skills as an epidemiologist or social sciences researcher, I do believe they should be able to read and evaluate published research and apply it to their smaller questions.  I also think they should be able to conduct small-scale studies to answer their questions using methods that will provide valid answers.

Walt goes on to question his own contributions, given the lack of response from the library community.  He is, essentially, taking a sounding, asking - Is anybody there? Does anybody care?  Well, Walt, I think we do care, and some of us do read your results.  I think what is contributing to this apparent anomie is not disinterest, but perhaps a kind of paralysis - what do we do with this?  It is interesting that the libraries in my state have generally moderate circulation rates, or that circulation is correlated with expenditures.  But what can I do with that information?  While this may be taught in the core curriculum of MLS programs, it may be forgotten as graduates enter the workforce and get sucked into the drudgery of their everyday routines.

Trying to address these issues, he asks some questions which are quite familiar to me:
  • Am I asking the right questions?
  • Is there any analysis that is worth doing?
  • How can this information be made "meaningful and useful to librarians"?
  • Are librarians "willing to deal with data at all–to work with the results, to go beyond the level of analysis I can do and make it effective for local use"?
  • Can librarians "get" the differences in statistical measures, such as averages versus medians?
I don't believe the answer is clear.  It is probably something like: some of us do care but don't get it; some of us get it but don't care; some care and understand it, but don't have the time; and some of us are quite interested and can follow through.  And it's not clear whether this last group is growing in numbers or just staying on the fringes.
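On the averages-versus-medians question in particular, a tiny illustration with invented circulation counts shows why the distinction matters for skewed library data:

    from statistics import mean, median

    # Circulation counts are typically skewed: a few heavily used titles
    # pull the mean well above the median. (Illustrative numbers only.)
    circs_per_title = [0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 40, 75]

    print("mean:  ", round(mean(circs_per_title), 1))
    print("median:", median(circs_per_title))

Here the mean is around 11 circulations per title while the median is 2, and which one you report changes the story you tell about the collection.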

Finally, he asks the one question that got me to start this post in the first place: What is the data we need?  This struck me because I've been working on my first collection assessment using a more formal process, and I keep thinking of other measures to include:
  • Relative circulation rates compared with holdings rates (see the sketch after this list)
  • Distribution of age of books
  • Distribution of materials by type, age and usage
  • Comparisons of these against our peer institutions
  • Comparisons of databases against our peers
  • Spending on these subject areas compared with our peers
  • Acquisitions of recognized materials (highly-recommended, highly-cited, award-winning, etc.)
  • Coverage of resources in databases
  • Usage of all of our resources (notably electronic)
  • Publication of materials in this area, especially given changes to the ecology and economy of scholarly communications
  • Impact of primary research materials on the field and in the school
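For the first measure on that list, here is a minimal sketch of a relative-use comparison (a subject's share of circulation divided by its share of holdings), assuming the counts have already been summarized by subject; the LC classes and numbers are invented:

    # Relative use: a subject's share of circulation divided by its share of holdings.
    # Values well above 1 suggest heavy use relative to holdings; well below 1, the opposite.
    holdings = {"HQ": 4200, "QA": 6100, "PS": 9800, "RT": 1500}   # titles held (invented)
    circs    = {"HQ": 3100, "QA": 2900, "PS": 4200, "RT": 2600}   # circulations (invented)

    total_holdings = sum(holdings.values())
    total_circs = sum(circs.values())

    for lc_class in holdings:
        holdings_share = holdings[lc_class] / total_holdings
        circ_share = circs[lc_class] / total_circs
        print(f"{lc_class}: relative use = {circ_share / holdings_share:.2f}")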
Some of this data we have or can start collecting.  We are paying dearly for the use of the WorldCat Collection Analysis System so that we can compare with our peers (I understand the risks and problems with this but we believe it can still provide valid trends and comparisons).  We have been working to standardize how circulation and in-house usage data is collected at the different collections or libraries within our system.  And we have been working to bring all the data into central repositories to make comparisons and analysis a little easier.

Other measures, notably usage and publication, are notoriously difficult.  It would be very useful to know whether our usage of selected databases differs significantly from usage of the same resources at our peer institutions.  Heck, even after nearly 10 years of the COUNTER standard, our own usage data is still quite difficult to compile and understand (for some vendors, that data is absolutely worthless because it is masked by queries through a common interface).
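To give a sense of the compilation problem, here is a rough sketch of stitching vendor COUNTER-style CSVs together, assuming each report has a title column and a "Reporting Period Total" column; the file names, header offsets, and column labels are hypothetical, since every vendor's export differs:

    import glob
    import pandas as pd

    # Hypothetical: one COUNTER-style CSV per vendor, each with a title column and
    # a "Reporting Period Total" column, but different header rows and title labels.
    VENDOR_QUIRKS = {
        "vendor_a.csv": {"skiprows": 7, "title_col": "Journal"},
        "vendor_b.csv": {"skiprows": 8, "title_col": "Title"},
    }

    frames = []
    for path in glob.glob("counter_reports/*.csv"):
        quirks = VENDOR_QUIRKS.get(path.split("/")[-1], {"skiprows": 7, "title_col": "Title"})
        report = pd.read_csv(path, skiprows=quirks["skiprows"])
        report = report.rename(columns={quirks["title_col"]: "title"})
        report["vendor"] = path
        frames.append(report[["title", "Reporting Period Total", "vendor"]])

    usage = pd.concat(frames, ignore_index=True)
    print(usage.groupby("title")["Reporting Period Total"].sum().sort_values(ascending=False).head())

Most of the real work ends up in that quirks table, which is exactly the per-vendor wrangling that makes the data so hard to compile.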

So, Mr. Crawford, I just wanted to say, I feel your pain.  It can seem lonely doing all this work without the formal recognition and the ultimate expression of value (money).  This is why I will start looking at the NCES data more carefully and try to think of how this information can be applied locally.  And I finally put my money where my mouth is...I purchased the book (print is still my preferred format) and downloaded the Graphing ebook - you now have one sale.

Saturday, February 2, 2013

What could I do with 20 extra hours a week?

This past week, I hired two library student assistants.  This is a first for me: being solely responsible for interviewing, hiring, and training anyone.  While each student has other responsibilities, about half of their time can be spent on tasks that I can assign.  So this adds up to one part-time worker!  Now, what could I do with essentially 20 extra hours a week?  Here are some ideas:

  • Compare our holdings in specific subject fields with those of our peer institutions.
  • Check the rate of ownership of award-winning books by year of award.
  • Analyze the distribution of circulation by subject area, year of publication and patron type and compare it with the distribution of usage of ebooks.
  • Analyze the life-cycle of materials by format (print book, ebook, media), determining the period to first use and the length of "time on shelf" between uses (see the sketch after this list).
  • Compare the items requested via ILL from previous years to determine how many we eventually gain access to.
  • Determine differences in MARC records and other metadata between items used and items not used.
  • Compare the usage of ebooks through different vendors to answer the question, Does platform matter?
  • Conduct a Brief Test of Collection Strength.
  • Conduct a basic descriptive assessment of our media collection.
  • ....I've only just begun....
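For the life-cycle item above, here is a minimal sketch of the two measurements for a single title, assuming acquisition and checkout dates can be pulled from the ILS; the dates are invented:

    from datetime import date

    # Hypothetical acquisition and checkout dates for one title.
    acquired = date(2011, 3, 15)
    checkouts = [date(2011, 9, 2), date(2012, 1, 10), date(2012, 11, 30)]

    checkouts.sort()
    days_to_first_use = (checkouts[0] - acquired).days
    gaps = [(b - a).days for a, b in zip(checkouts, checkouts[1:])]

    print("days to first use:", days_to_first_use)
    print("average days on shelf between uses:", sum(gaps) / len(gaps))

Run over the whole collection, the same two numbers summarized by format would answer the print-versus-ebook-versus-media question.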
Admittedly, it will take me some time to develop the methods and document the procedures for the assistants to do these tasks.  But these are pretty smart students who have a desire to do well and gain experience.  I, too, look forward to developing some basic managerial and supervisory skills and experience.

Although you could say they are only students, I feel a responsibility to mentor each one and provide an introduction to professional librarianship in an academic setting.  Towards that end, I am considering providing a space for students on this blog, in which they could develop their writing skills.  I'd be interested in learning more from them.