
Core Competencies for Electronic Resources Librarians #alamw14

How I missed adding Dr. Sarah Sutton’s presentation of Electronic Resources Core Competencies to my scheduler, I’ll never know.  But, thanks to Twitter, I got there in time to catch the key points, just after Sutton’s overview of the competencies themselves.

Sutton gave high praise to many of the unique ways (mostly academic) libraries are already putting the competencies into practice. Most are using them to analyze, restructure, and define workflows and staffing, at the department or unit level, or even across the entire library. The latter speaks to a significant takeaway of the competencies, that in most cases “one person can’t possibly do all of this”.  The competencies document emphasizes that these are not a set of competencies for a single e-resources librarian, but a reflection of the collaborative nature of managing these resources throughout the organization.  Other applications Sutton shared include informing MLS course programming and continuing education opportunities for both professionals and paraprofessionals, and creating job descriptions and hiring advertisements.  The audience provided additional applications, such as assessing and targeting specific areas of strengths and weaknesses.

Sutton plans to continue her research by investigating how the competencies shape student learning outcomes in MLS programs.  For myself, I see connections to my research interest in organizational communication, as well as pursuing the question of how you develop training in these competencies, especially in such amorphous concepts as “tolerance for ambiguity and complexity”.  How do you practice that, and how do you measure it?!

There was an important final question from the floor that spoke to how these competencies relate to Emery & Stone’s Techniques for E-resources Management (TERMS).  Sutton aptly addressed the similarities between the two, while noting that they take differing approaches — TERMS being more practical in nature and the Core Competencies being more conceptual, addressing the knowledge, skills, and abilities of the people doing the work of e-resources management.  I shared my agreement with others in the audience that the two are complementary, pointing out that I posed a very similar question for the TERMS project — imagining how techniques mapped to the e-resources life cycle could extend to mapping improved workflows and organizational communication.

The nature of e-resources evokes themes of constant change and adaptability.  As such, the process for updating these competencies, according to Sutton, will be ongoing, and the opportunities for training and other applications of the competencies will continue to evolve.  It will be interesting to see how the programming takes shape for the upcoming ER&L, and especially NASIG’s Annual Conference in Fort Worth, as its call for proposals was modeled on these Core Competencies.

Unified Resource Management (Alma) #erl13

Jimmy Ghaphery of Virginia Commonwealth University (VCU) shared the public side of the unified resource management system, Alma.  As an early adopter, VCU migrated to Alma between April and October 2012 from their existing array of systems: SFX, ARC, Aleph, and Verde.  January 2013 included further migration to Alma OpenURL and Alma Course Reserves.

Ghaphery emphasized the benefit of Alma as the back-end system upon which other layers can be added. Alma at VCU means no other catalog for users. Staff search as the public do, and there has been no huge uproar. Browse functionality is still necessary, though, and that need appears to be a bigger issue in humanities research.

Ghaphery noted that the OpenURL interface is one of the most used by the public and found that Alma provided better visibility, especially of print holdings.  There is still a need for better support for custom parsers in order to include collections not indexed by Primo Central.

The details of the back-end system were presented by Erika Johnson (Boston College), who emphasized the dashboard and task list interface and the system requirement to set up workflows. Staff training in this was done in house, not by Ex Libris, and within a sandbox.  The benefit of the system is that it knows the next step in a workflow; you don’t have to tell it, or track an event and then manually push it to its next step.  For example, if you created the order and then “sent to vendor,” it automatically moves to the “activation” task list. Alma’s task list and centralized workflow design simplify the renewal process within one system.
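Johnson’s point about the system knowing the next step can be pictured as a simple state machine. The sketch below is purely illustrative — the state names and the `complete_step` function are my own hypothetical stand-ins, not Alma’s actual API:

```python
# Hypothetical sketch of an auto-advancing acquisition workflow,
# loosely modeled on the Alma behavior described above.  The state
# names and transitions here are illustrative, not Alma's real API.

WORKFLOW = {
    "order_created": "sent_to_vendor",
    "sent_to_vendor": "activation",
    "activation": "done",
}

def complete_step(current_state: str) -> str:
    """Mark the current task done and return the next task list the
    item should appear on -- no manual push required by staff."""
    return WORKFLOW.get(current_state, "done")

state = "order_created"
while state != "done":
    print(f"task list: {state}")
    state = complete_step(state)
```

The appeal, as Johnson described it, is that staff never have to remember what comes after “sent to vendor” — the transition table does.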

Johnson also talked a bit about Alma Analytics, which offers a number of widgets to produce budget, task, and cost-per-use analysis reports.  In the April release, this cost-per-use tool will be visible from the search interface, and potentially coming to the Primo interface as well. Johnson noted that their prior reorganization, which created a continuing/e-resources and metadata unit, worked well for the Alma implementation.

Susan Stearns (Ex Libris) finished up with additional Alma updates and summarized the four major areas of evaluation focus on which they have worked closely with partners.
1) Streamlining workflows
2) Increased visibility through Analytics
3) Creating an environment for collaboration (ARL community facilitation, Orbis Cascade award, infrastructure to support sharing resources on both the collection and technical services sides)
4) Becoming agile — agile development and a different (agile) mindset required for dashboard workflow interface


Internal and External Client Service #erl13

What a refreshingly lively presentation by McGill University’s Dawn McKinnon and Amy Buckland!

Responding to the need at McGill to bring the customer service component back into the technical services environment, McKinnon and Buckland shared stories of their transitioning roles from public to technical services and offered suggested communication strategies based on the question: “Would you treat a patron this way?”

Some of the common communication pitfalls between these two groups include using too much tech speak or the communication black hole (non-response to communication). McKinnon and Buckland promote the “you can’t communicate too much” philosophy shared by their Dean, and suggest answering an email promptly, even if you don’t have the answer right away. They also offered four basic solutions to internal communication issues:

1) require “job talk” during the hiring process, and recreate this for current folks in get-to-know-you brown-bag meetings. Try expanding this to other departments, even those outside the library.
2) workshops about various technical services processes that impact your internal customers, or even more regular topic or update open sessions. It’s important to include staff in these meetings, so information isn’t privileged only to faculty.
3) intentionally create diverse representation from public and tech services on committees.
4) communicate! — open door policy, email, blogs (subject-related ones forthcoming), weekly management meetings, open office hours, bimonthly recorded talks with the Dean.

Excellent suggestions and stories from the audience indicated the relevance of this topic to any organization undergoing restructuring of its technical services. Virginia Tech, for example, shared the importance of having a safe environment for communication, especially in order to understand new roles before effectively being able to communicate with others about what they do. Having an internal collaborative group meeting before opening it up to a larger (public) group meeting is one approach to that. Another audience member suggested implementing a “service level agreement” to understand and communicate what staff do as well as give the option to say no.

I was particularly energized by this session, as it speaks to my own research interest in reapplying the reference interview (or other service methodology) to meeting internal customer information needs. In my own organizational circle of concern we’ve discussed keeping at least one position connected to reference and public service duties, with the idea of cross pollinating ideas to both areas. Likewise, I’ve contemplated the benefits of a reorganized structure that brings public and technical services closer together.

So, is it as simple as “would you kiss your mother with that mouth”? Or is there a methodology that translates to a practical customer service philosophy? Or both?

the truth about reference

It’s been quite a month in my personal life, and no wonder I never got back to filling out that last truthberry picking post.  I see some where I have no memory of what I found interesting at the time.  But others, like Sheehan’s recent ALA TechSource post on AI and reference, are still relevant and worth building on, as other insights and starting points toward my big research interest — the reapplication of the reference interview to interorganizational communication/information seeking — have come about since then.

It is also ARL stat collection time.  I serve on a team monitoring a shared email account for e-resources troubleshooting questions (think of it as a distant cousin of virtual reference) and annually question whether I am supposed to count these as reference transactions.  For your information, ARL defines a reference transaction as:

…an information contact that involves the knowledge, use, recommendations, interpretation, or instruction in the use of one or more information sources by a member of the library staff. The term includes information and referral service. Information sources include (a) printed and non-printed material; (b) machine-readable databases (including computer-assisted instruction); (c) the library’s own catalogs and other holdings records; (d) other libraries and institutions through communication or referral; and (e) persons both inside and outside the library. When a staff member uses information gained from previous use of information sources to answer a question, the transaction is reported as a reference transaction even if the source is not consulted again.


I’ve always held, and our head of reference agrees, that we should count them.  But as distant cousins, the majority of questions we get are referrals from the real reference folks who are, thus, already counting them.  This year we may have more transactions to count, as we have begun putting our face (our email address) out there a little to assist more directly with things like persistent linking, when resources are on order (and not yet available), and when we know there are likely to be problems with e-resources.  The latter two actually pick up the slack for what our ERM ought to be doing for us — but that’s another post.  So, what I ultimately mean to point out here is two-fold:

1) technical services librarians are increasingly access services librarians (our email troubleshooting group is a concrete example).

2) as a result (and in addition to our counting transactions in this new role), we ought to look at the ARL definition above more closely.
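For what it’s worth, the counting question above could be made concrete with a simple tally: flag each email thread as countable or not, skipping the referrals the reference desk has already reported so nothing gets double-counted. The thread topics and field names below are hypothetical examples of my own, not anything from the ARL definition:

```python
# Hypothetical tally of e-resources email threads, skipping referrals
# already counted by the reference desk to avoid double-counting.

threads = [
    {"topic": "persistent linking", "referral_from_reference": False},
    {"topic": "broken e-journal link", "referral_from_reference": True},
    {"topic": "on-order resource status", "referral_from_reference": False},
]

def countable(thread: dict) -> bool:
    # Count only contacts we answered directly; referrals are assumed
    # to have been reported already by the referring desk.
    return not thread["referral_from_reference"]

total = sum(countable(t) for t in threads)
print(f"reference transactions to report: {total}")
```

Even a crude tally like this forces the real question: which categories of contact satisfy the ARL definition in the first place.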

Garden Libraries - The Imaginarium Garden (courtesy of Southfield Public Library, Southfield, MI)

My guess is traditional reference or public services librarians translate these transactions primarily as a service to users wherever they are — as in “the library as place” and that place is inside and outside the library (in the Union, dorms, faculty offices, or even via email, IM, social media).  By seeking these reference stats from their colleagues, traditional reference librarians do concede that they aren’t the only transactors with our users.  But, I wonder how many interpret that definition to apply to transactions with people inside the library who work there?  This internal reference transaction among colleagues, I argue, is an activity technical services librarians have long been doing but perhaps not historically thought of as a reference transaction. Some examples of this I’ve thought of might be when we are helping reference staff to answer more technical questions (maybe we should count these twice!), when we help a subject liaison by pulling together reports for collection management, and maybe even when problem solving organizationally and seeking information about each others’ workflows in order to put a bigger picture together.

As for how to go about answering my research question, this leads me back to Sheehan’s post and whether a direct reapplication of the theory behind the reference interview is the way to go, or whether so much has changed both in reference (going virtual) and technical services (going reference) that a new theory is needed.  In addition to my own fascination with AI, the post connects to a debate about 2.0 vs. f2f communication that has stalled me in starting my research.  The post, specifically, led me to ask this question: has the stigma of ‘why’ questions in the reference interview (see Dervin & Dewdney, 1986) diminished as a result of more open sharing in social media?  Or is it (as Sheehan seems to point to) precisely because it’s online that this openness in social media occurs, while f2f human interaction still requires the finesse of something like neutral questioning?

Other questions I’ve mulled over, related to the ARL stats definition, are whether the differences in the reference transaction (and the use of reference interview skills) are too fundamental when the players are our working peers rather than students, faculty, or community users.  I’d be interested to know what you think and any suggestions you may have for methodological starting points.

Comments below or emails to atruthbrarian[at]gmail[dot]com are welcome.

Dervin, B., & Dewdney, P. (1986). Neutral questioning: A new approach to the reference interview. Reference Quarterly, 25(4), 506-513.

ala dc – the rest of the story

It was simply foolish of me to think I could do a play-by-play of this enormous conference. I quickly realized what a tiring feat attending an entire ALA conference can be.  I arrived Thursday to attend a Friday all-day preconference and stayed on through the closing sessions Tuesday, including the Library Advocacy Day rally on Capitol Hill.  It wasn’t that I woke each morning at 6am to catch the metro, but more the greater amount of walking, intense thinking, and extended mingling with large groups of people than this introvert usually would experience in a given day. So, while what follows is a lot more to go through in one sitting, I will try to cover just the highlights, giving you a good idea of what a technical services academic librarian might be thinking about these days.

The preconference:  I think there probably ought to be a session at the next conference that helps people come up with original titles for these sessions. They all seem to follow tired formulas, like “Taming the [fill in the blank library problem] and [fill in the blank wild animal],” or use that really terrible herding-cats metaphor often applied to stubborn personnel or other things beyond our control.  Perhaps I could start with a study of the frequency of these titles — I personally have attended two “Taming…” preconferences that were not explicitly part of a series. Anyway, this one was on e-resource license negotiations. There was good discussion, good presenters, and the opportunity to go through a practical license example as a group. All in all, very useful.

The multiple overlapping main conference sessions that were heavily attended included various takes on the topics of weeding (or if you prefer, deaccessioning), e-books, collaboration, and data driven decision-making.

Ebooks and Usage

About e-books and usage data, the big takeaway is that e-books lag behind their e-journal cousins in both functionality and usage data.  Most folks seemed to think the solution would be for e-books to move toward the e-journal model in accessibility and, of course, in standardization (COUNTER compliance) of usage data.  I liked the response that put the question of whether e-books have been worth it in perspective by countering with the question of whether print books were worth it (see the 80/20 phenomenon in collection development; Leimkuhler, 1969). But the argument can’t be resolved on use alone.  As the deaccessioning debate makes clear, there are bigger, more fundamental issues to address with respect to the value of electronic resources, or even the entire concept of the digital library.

Deaccessioning and the Digital Library

One of the more memorable and heavily attended main sessions, Multiple Formats and Multiple Copies in a Digital Age: Acceptance, Tolerance, Elimination (ALCTS – CMDS, RUSA – CODES), approached the topic of deaccessioning and seemed a perfect follow-up to the conversation my library began just before I left. This group of presenters was more than approaching the topic, however.  They experienced, for better or worse, the deaccessioning process, including deduping print with online formats. They reminded and reassured the audience of the library’s long reformatting history (e.g. newspapers, microfilm, audio/video) and offered simultaneously visionary and tangible paths for the future of libraries. Some of these ideas included that in 10-20 years most print collections will be special collections; that in fact, “collection” will no longer be a useful term – preferring rather the term network or cloud library; and the mindset that if it is not indexed on the web, its existence will be hard to prove.  We are certainly experiencing bits of all of this already.

Some useful advice included forging agreements with partner libraries, aggressively pushing electronic browsing for all collections, and — as I, an alum of a university with a strong engineering program, have been saying all along — pushing Engineering schools to develop e-readers [and more!] that are better suited to the needs of networked libraries.

Communication/Collaboration

Consortial licensing was touched on in the preconference and ‘memorandums of understanding’ were proposed in the deaccessioning discussion. Another opportunity I had to delve into the realm of collaboration and communication came in a round table discussion with Technical Services Managers in Academic Libraries Interest Group Program (ALCTS).  We discussed many of the ways each of our organizations is trying to manage communication. Many of us are creating wikis with similar underuse by the rest of our staff.  Many of us are having regular face to face meetings with similar under-participation by the rest of our staff.  But, two big takeaways I did appreciate:

1) successful implementation of any development effort involves identifying ‘expert users’, specifically individuals who are both good with people and good with technology, and…

2) the utter importance of organization (hey, this is what we do best, right?)

One library credited the success of its wiki to the well-organized effort of the Department Head with the responsibility of keeping it well-structured and  up to date.  This echoes a consistent theme I heard: “pick one thing and do it well”.  That goes for development and communication, eh?

I was also anticipating a session by LLAMA/LOMS addressing my research interest in interorganizational communication. I had a good experience last annual in Chicago at a session from this same group.  This year, the session was Communication at the Crossroads: The Theory and Practice of Connecting Effectively Within and Without the Organization.  Sadly, the presenters kind of failed — or their presentations failed — to communicate on many levels.  Presenting lesson #1: if you’re going renegade and not using a PowerPoint:

1) you’d better be a dynamite verbal communicator, especially in a large setting (in other words: don’t read from your material), and…

2) at the very least, don’t leave the former presenter’s PowerPoint slide up while you’re talking.

The first presenter (with PowerPoint) basically gave the 10-minute version of Organizational Communication Theory 101.  Interesting, but underdeveloped and disconnected from the practical concerns of those in the room.  It also focused more on the sender-receiver side of communication theory overall. Only with the brief mention of making decisions in context (Schein) and reducing anxiety (Van Maanen) did it touch on my MLS-preferred construct of addressing communication gaps (see Dervin).

The second presenter talked about how much she loves being a manager, and her practical advice could be summarized as “you have to talk to your staff”.  True — and sad that there is actually enough of a need to have to point that out.  But she made some good points, like reminding us that work doesn’t get done through the org chart. What is really needed to fill in the gaps (there you go) are behavioral charts, because it’s not so much what you know these days as who you know — and she meant that in the least nepotistic way possible. I think.

Another large point brought up but inadequately addressed was the issue of multimodal communication.  It reminded me of the work of Nancy Baym — a communications professor whom our library’s Development Committee brought into a panel discussion. Baym points to research testing the prevailing assumption that electronic communication negates face-to-face communication or, worse, erodes our relational closeness. The research, she argues, actually suggests that we use each mode in the context of the communication need, and that multimodal communication may even be an indicator of increased relational closeness or satisfaction.

So, I suppose sometimes the value of a session comes in what is left out, because it presents opportunities to expand one’s own research.  In addition to what I’ve mentioned expanding upon above, other random thoughts that sparked some potential for further reflection include the pick-something-and-do-it-well philosophy; don’t fear transparency; and remembering that good organization is the key to so many successes.

Baym, N. K. (2009). A call for grounding in the face of blurred boundaries. Journal of Computer-Mediated Communication, 14(3), 720-723.

Dervin, B. (May 2005). Sense-making studies. The Sensemaking Methodology Site. http://communication.sbs.ohio-state.edu/sense-making/

Leimkuhler, F. F. (1969). Some behavioral patterns of library users: The 80/20 rule. Wilson Library Bulletin, 43, 458-461.

Schein, E. (2004). Organizational culture and leadership. San Francisco: Jossey-Bass.

Van Maanen, J. (1979). People-processing: Strategies of organizational socialization. Organizational Dynamics, 7(1), 18-36.

Journal 09-14-03

More thoughts on issues Kurzweil stirs in me, mostly defining humanity apart from a solely mechanistic rationale.  Searle rebuts with distinctions between computing symbols and conscious understanding.  Dembski follows by showing how computers lack a frame of reference, context (the inability to get the joke).  Though, how many of us out there are just like that — ha ha.

Anyway, what Searle said got me thinking about the technical services side of the library (that being my current work).  Don’t we just sort out [make symbols or code] the info?  How much is conscious understanding, or how many decisions require that “gut” feeling?  A subject cataloger might argue that it is quite a bit — not a technical service, but an art. Taken too far in my train of thought, I wondered how many of us techies could be (ARE BEING) replaced by technology.  In fact, we embrace it to a large extent — anything that helps us do our job faster.  What we’ve found is that this sometimes causes a predicament.  If you don’t use the fast technology, your work becomes irrelevant (too slow, unnecessary work to get the job done).  On the other hand, embrace it too fully and one might end up wondering what your warm body is even doing there. Maybe that’s drastic.  But I have found myself twiddling my thumbs every now and then when a pile of work I thought would take two hours I managed through in one.  This is also partly my keeping up with the pace.  My skills [get] faster and computers are [getting] faster.


Is this where libraries in general will find themselves if they embrace too fully the electronic format, if they abandon too fully the traditional library?  I guess it is with relief that I return to the point that coincides with my last statement.  Since we (humans) will create the machines, we (librarians) will integrate them into the library.  Then — call me naive — I think any further argument Kurzweil makes (machines self-replicating and such) is too “out there” to worry about right now, if ever.
