Saturday, July 20, 2013

The Performance-based Funding Model: Creating New Research Databases in Sweden and Norway

This article made me wonder about our own institution's research impact and, notably, how the library could help its members increase theirs.  One of my original 5-year goals was to establish a "Research Impact Measures Service" - à la the University of New South Wales in Australia.  This service would assist not only individual researchers in assessing their own performance, but also departments and even the university as a whole.  I still keep this idea in the back of my mind...and it comes forward when reading articles such as this.
This article refers to national efforts to develop quantitative measures of impact as one decision factor (sometimes the primary one) in distributing funding.  There are, of course, many opinions on the validity and value of such methods.  They do tend to reward success, but, like pure capitalism, this can widen the gap between the "haves" and the "have-nots".  They can also stifle innovation by essentially betting on sure things; many breakthroughs start with research that has a high risk of failure.
But it did get me thinking about comparing my university's output with others.  A cursory look at data from Web of Knowledge (a resource with documented limitations) demonstrates that the university's impact has been limited.
2000-Current:

Institution    | # Articles | Articles Ratio | # Citations | Citations Ratio | # Citing Articles | Citing Articles Ratio | Avg Cites | Avg Cites Ratio | h-index | h-Index Ratio
UT Dallas      | 8367       | 0.97           | 106873      | 1.66            | 78644             | 1.63                  | 13.9      | 1.68            | 112     | 1.30
UT Arlington   | 9972       | 1.16           | 92587       | 1.44            | 68423             | 1.41                  | 10.52     | 1.27            | 97      | 1.13
UT San Antonio | 5664       | 0.66           | 36593       | 0.57            | 31200             | 0.64                  | 7.3       | 0.88            | 66      | 0.77

2008-2012:

Institution    | # Articles | Articles Ratio | # Citations | Citations Ratio | # Citing Articles | Citing Articles Ratio | Avg Cites | Avg Cites Ratio | h-index | h-Index Ratio
Texas Tech     | 8761       | 2.19           | 38517       | 2.42            | 29387             | 2.36                  | 5.47      | 1.23            | 63      | 1.50
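For readers unfamiliar with the h-index reported above: it is the largest number h such that an institution (or author) has h publications each cited at least h times. A minimal sketch in Python, with made-up citation counts for illustration:

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # counts are sorted, so no later paper can
    return h

# Five hypothetical papers with 10, 8, 5, 4, and 3 citations:
print(h_index([10, 8, 5, 4, 3]))  # 4 (four papers have >= 4 citations)
```

This also illustrates why the metric rewards sustained output over a single highly cited hit: one paper with 1,000 citations alone still yields an h-index of 1.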
This is a puzzle -- UT Dallas published about as many articles as UNT, but had over 50% more citations.  I would like to investigate this further - is it due to differences in subject coverage?  UT Dallas was initially established as an upper-level and graduate school focusing on technology, while UNT was originally a teacher's college - research has been a relatively recent focus there.  Could the association with the UT System be a factor?  This could also help explain how UT Arlington, which similarly started as a teacher's college, has 40% more citations and a higher h-index than UNT.  That explanation fails, though, in the comparison with UT San Antonio.
This brief inquiry has only raised more questions.  I would like to delve into the details more thoroughly, controlling for number of faculty, subject coverage, longitudinal trends, graduate degrees awarded - what else?  I'd really like to know how my institution could earn more respect.
