I’m going to start this blog post with a little bit of honesty: more often than not, I skip conference keynote addresses. I do so on the oft-confirmed assumption that the talks will range from vaguely informative reproductions of work I can read elsewhere to attempts at saccharine inspiration that eschew much-needed critical insight.
Breaking habit, I attended all three keynote talks at the 12th International Conference on Performance Measurement in Libraries (LibPMC). And, refreshingly, the presentations – delivered by Ayub Khan, MBE, Vice President of CILIP; Dr. Steve New, Professor of Operations Management, Saïd Business School, University of Oxford; and Dr. Colleen Cook, Trenholme Dean of Libraries, McGill University – illuminated questions that, for me, have been bubbling beneath the surface for a while. During LibPMC, I realized these questions can be summarily (if reductively) expressed in a single principle: assessment – like libraries – is not neutral.
In his talk about assessment and impact in UK public libraries, Khan presented a list of reasons it’s important to measure the performance of libraries – essentially, why we assess. While the list wasn’t meant to be exhaustive, the reasons it offered were all extrinsic in nature, concerning budgets, perceptions, and external stakeholders. It’s a reality that much of what we do is, as Khan put it, to “persuade” others of our value. But to think that justifying libraries’ existence is the primary or driving motivation behind library assessment makes me feel uncomfortable about my position as an assessment librarian.
If we choose only to conduct assessment to convince those with the purse strings of our worth, are we failing the communities we serve?
Two presentations that highlighted assessment in service to our constituencies were Selena Killick’s paper on using a customer relationship management (CRM) tool to better understand and proactively respond to student needs at Open University, and Maggie Faber’s lightning talk about a model for sustainable space assessment and evidence-based decision-making.
Killick discussed the implementation of an institutional CRM and how the volume and specificity of the data collected have allowed library staff to proactively identify pain points – a task especially important given the changing demographics at the OU, which include 22,000 students with declared learning disabilities. Killick emphasized that “students feel like a failure when they have to ask a librarian”; by leveraging CRM data to predict common issues and implement solutions, the library can improve the student experience and encourage success.
Faber explained the process by which she engages colleagues and branch staff at University of Washington Libraries to produce meaningful data visualizations that, when used in concert with situational information, support sustainable decision-making around service provision. In one example, Faber was asked to examine space usage by time of day to determine whether an hours reduction was feasible for certain locations. The question arose in response to Washington State’s recent minimum wage increase and the reality that budget cuts would have to be made in order to meet the new pay standard. Faber’s spatial data analysis confirmed that certain locations received little use during late hours; coupled with the knowledge that another library location with long hours was literally steps away, staff were able to confidently shorten hours – and redirect expenditures to wages – knowing they would not be eliminating a service valued by students.
Both Killick and Faber struck me as placing students at the centre of the assessment practice and giving primacy to their needs. The projects were prompted by external (even political) factors – institutional implementation of a CRM and changes to state legislation, respectively – yet the outcomes at which Killick and Faber arrived were not overtaken or defined by these factors.
In the second keynote, New shared his expertise on process improvement in automotive factories and hospitals, briefly discussing the challenges and opportunities of adopting certain process improvement methods in libraries. He drew a distinction between the Ford method of manufacturing – maximum efficiency for reduced cost, which necessitates excessive quality assurance inspection – and the Toyota philosophy – a human-centred process in which frontline workers are entrusted to “pull the cord” when they see a problem and are subsequently engaged in finding a solution.
I’ll mention Faber here again because, in her presentation, she described a model of assessment in which the knowledge of branch staff is critical to the decision-making process. New emphasized the need for trust, respect, and the humanization of workflows in order for frontline workers to confidently “pull the cord” or offer their knowledge in pursuit of process improvement. To my mind, Faber’s data visualization presents a well-informed and analytically sound “cord” which branch staff are able to “pull” when making decisions about space use and service provision.
Frankie Wilson, in her presentation about research diaries for graduate students, demonstrated how New’s thesis on process improvement and frontline engagement can be extended to students. While piloting the methodology, Wilson treated participants as co-researchers in their own right and provided compensation – acknowledging not only that these individuals are trained researchers (as doctoral students) but that they are providing time and labour in service of the study. Characterizing them as collaborators rather than subjects, and compensating them, proved both a matter of respect and a means of engagement: the participants were highly engaged and invested in the project – a necessity for a study that required weekly participation over an entire semester. Many of us offer incentives to study participants, but acknowledging them as collaborators highlights, for me, a matter of ethics in how we engage with and treat the communities for which we exist. To humanize the assessment practice by acknowledging the subjectivity of our constituents can only illuminate the static data we gather.
Cook closed out the keynote trilogy by “assessing assessment in libraries”: how far we’ve come and where we should be looking to go. She framed her talk in Western (i.e., cowboy) terms – the good, the bad, and the ugly. For Cook, “the ugly” is when assessment gets political – especially when statistical rankings (think: journal impact factor, h-index, and ARL index) are treated as surrogates for evaluation. The assumption that a rank position signifies inherent value (of a publication, author, or library) is a dangerous one that threatens the integrity of the work we do. I appreciate Cook’s assertion that we must not misuse statistical ranking systems beyond the scope of their original intent, but I found myself thinking: when is assessment not political?
The conference’s theme was “communicating value and leadership”. Value is an inherently political concept – how it’s defined, how and why it’s communicated, and to whom. And, of course, so is leadership: who occupies that space when it comes to directing assessment and making decisions?
The points made by Khan and New – that much of our work is extrinsically motivated, and that the frontline must be engaged in process improvement for it to be effective and sustainable – are themselves political statements. Both observations are accurate, but the tension in using assessment conducted by and for the populations we serve to satisfy the requirements of decision-makers who operate at a distance from our libraries must be interrogated.
These points have compelled me – during the conference and since returning to work – to acknowledge that there is more to my assessment work than asking a question and choosing a methodology. It is incumbent on me, as I learned from many of the presenters at LibPMC, to consider what and who is at stake in the work that I do and how I communicate it.
We treat the populations we serve like test subjects to be studied; we focus on communicating results to administration in an attempt to secure funding or legitimacy; we locate the work of assessment and improvement in the office of a single librarian or administrator. These statements are not true of every library or assessment program, but I’ve observed that they are true for many – and, largely, for me. As Emily Drabinski tweeted from afar (following the conference hashtag), “Evidence matters when power wants it to” (https://twitter.com/edrabinski/status/891956060986191876).
Just as I started this blog post, I’m going to end it with a little bit of honesty: I haven’t been doing enough in my assessment work to question power structures, interrogate assumptions, and disrupt perceived neutrality. The steps may be small at first, but after LibPMC 2017 I will take them toward a more conscious practice.
Because assessment is not neutral, nor should it be easily accepted as such.