Posts Tagged ‘ technology ’

Unified Resource Management (Alma) #erl13

Jimmy Ghaphery of Virginia Commonwealth University (VCU) shared the public side of the unified resource management system, Alma.  As an early adopter, VCU migrated to Alma between April and October 2012 from their existing array of systems: SFX, ARC, Aleph, and Verde.  In January 2013 they further migrated to Alma OpenURL and Alma CourseReserves.

Ghaphery emphasized the benefit of Alma as the back-end system upon which other layers can be added. Alma at VCU means no separate catalog for users: staff search just as the public do, and there has been no huge uproar. Browse functionality is still necessary, however, and that need appears to be a bigger issue in humanities research.

Ghaphery noted that the OpenURL interface is one of the most used by their public and found that Alma provided better visibility, especially of print holdings.  There is still a need for better support for custom parsers in order to include collections not indexed by PrimoCentral.
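For readers unfamiliar with how an OpenURL request reaches a resolver like Alma's, here is a minimal sketch of building a Z39.88-2004 key/encoded-value (KEV) link. The resolver base URL and the citation values are hypothetical; only the parameter names come from the OpenURL standard.

```python
from urllib.parse import urlencode

# Hypothetical resolver base URL -- each library has its own
base = "https://resolver.example.edu/openurl"

# KEV context-object parameters for a journal article citation
params = {
    "url_ver": "Z39.88-2004",                       # OpenURL version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # referent is a journal item
    "rft.jtitle": "Journal of Academic Librarianship",  # made-up citation
    "rft.issn": "0099-1333",
    "rft.volume": "38",
    "rft.spage": "105",
}

link = base + "?" + urlencode(params)
print(link)
```

A database that supports OpenURL generates a link like this for each citation, and the resolver parses the parameters to display the library's holdings for that article.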

The details of the back-end system were presented by Erika Johnson (Boston College), who emphasized the dashboard and task-list interface and the system requirement to set up workflows. Staff training was done in house, not by Ex Libris, and within a sandbox.  The benefit of the system is that it knows the next step in a workflow; you don’t have to tell it, or track an event and then manually push it to its next step.  For example, if you created an order and then marked it “sent to vendor,” it automatically moves to the “activation” task list. Alma’s task-list and workflow design and its centralized structure simplify the renewal process within one system.

Johnson also talked a bit about Alma Analytics, which offers a number of widgets to produce budget, task, and cost-per-use analysis reports.  In the April release, this cost-per-use tool will be visible from the search interface, and potentially from the Primo interface as well. Johnson noted that their prior reorganization, which created a continuing/e-resources and metadata unit, worked well for the Alma implementation.
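The cost-per-use figure behind reports like these is simple arithmetic: annual cost divided by reported uses. A tiny sketch, with made-up numbers (the zero-use guard matters, since titles with no recorded use should be flagged rather than divided):

```python
def cost_per_use(annual_cost, counter_uses):
    """Annual subscription cost divided by reported uses (e.g. COUNTER stats)."""
    if counter_uses == 0:
        return None  # no recorded use: flag for review instead of dividing by zero
    return round(annual_cost / counter_uses, 2)

# Hypothetical title: $4,500/year with 1,800 full-text uses
print(cost_per_use(4500.00, 1800))  # → 2.5
```

An analytics layer adds the real value by joining the cost and usage data automatically, but the underlying metric is just this ratio.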

Susan Stearns (Ex Libris) finished up with additional Alma updates and summarized the four major areas of evaluation focus on which they have worked closely with partners.
1) Streamlining workflows
2) Increased visibility through Analytics
3) Creating an environment for collaboration (ARL community facilitation, Orbis Cascade award, infrastructure to support sharing resources in both collections and technical services)
4) Becoming agile — agile development and a different (agile) mindset required for dashboard workflow interface



Webscale Collection Analysis and Development (Intota) #erl13

Marist College is one of the development partners for Intota. Kathryn (Katie) Silberger gave an overview of assessment efforts at Marist and how webscale (360COUNTER, Google, Summon) has helped them.

Using 360COUNTER provides multi-year comparison and centralized gathering and storage, while still offering robust reports in Excel format. With cost data in the system, a renewal analysis and decision takes 10 minutes.  You even get stats for products that do not provide reports: for example, click-throughs for open access resources (to raise faculty awareness); referrer reports showing where people are starting; and a report of widget usage in LibGuides (which LibGuides itself can’t provide).  Other assessment services they tried include using Google Forms (DIY) for reference-question analysis with a direct connection to collection decisions, and looking at discovery logs and posting the top 10 or more questions for internal staff.

Like many, their assessment environment means dealing with data in multiple systems and a proliferation of spreadsheets.  Besides collecting these into one system, another reason for going webscale is that e-stats and p-stats are different. Current p(hysical)-stats, like circulation statistics reports, don’t account for highly circulating items like laptops and study rooms. Also, when assessment takes too much time and effort, you often can’t ask for things “out of curiosity.”  Webscale means less time manipulating data and more time for analysis.

Mark Tullos (ProQuest) discussed how to bring all of this together in one place with Intota Assessment.  Intota Assessment has been rolled out this year in advance of the full Intota webscale system. The claim is that Intota offers “a total picture of holdings, usage and overlap across all formats.”

This Spring they are beta testing with current partners (and possibly adding other partners). After this process they will be recommending best practices.

Question — How do you deal with the fact that ingesting data into 360COUNTER or homegrown solutions is problematic and requires a lot of normalization?
Answer — 360’s Data Retrieval Service (DRS) has helped by using authority control, much like the questioner suggested they were doing manually. The problem with normalizing COUNTER data is often in the header, so DRS replaces it. DRS doesn’t require SUSHI compliance.
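To make the header problem concrete, here is a minimal sketch of the kind of normalization being described: a vendor report arrives as CSV with a nonstandard header row, and the fix is to swap in the expected header while keeping the data rows intact. The header fields, vendor data, and function name are all hypothetical illustrations, not any real service's API.

```python
import csv
import io

# Hypothetical canonical header that downstream tools expect
EXPECTED_HEADER = ["Title", "Publisher", "Jan-2013", "Feb-2013"]

def normalize_counter(raw_csv):
    """Replace a vendor's nonstandard header row with the expected one,
    keeping the data rows as-is (hypothetical helper)."""
    rows = list(csv.reader(io.StringIO(raw_csv)))
    # Assume the first row is the (often malformed) vendor header
    return [EXPECTED_HEADER] + rows[1:]

# Made-up vendor report with idiosyncratic column names
raw = "Journal Name,Pub.,January,February\nJournal of X,Acme,10,12\n"
normalized = normalize_counter(raw)
print(normalized[0])  # the replacement header
```

Real vendor reports vary far more than this (multi-line headers, totals rows, different delimiters), which is exactly why a service that does this normalization centrally is attractive.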

Question — What about non COUNTER data?
Answer — For products with no stats, use click-throughs; for non-COUNTER data, normalize it to make it COUNTER-ish and load it.

Question — How are you loading cost?
Answer — By hand. The form has not been easy enough to hand off to clerical staff, given that invoicing is so varied across products. Once you have it in,

Question — If Banner could interact with Serials Solutions, it seems this process would be easier. Are you planning for this?
Answer — Serials Solutions is following this and other payment systems, anticipating something to assist when rolling out the full Intota webscale system.

Troubleshooting and Tracking #erl13

Nathan Hosburgh (Montana State University) and Katie Gohn (University of Tennessee) spoke to a packed crowd about troubleshooting and tracking e-resource access problems by reviewing the various approaches, tools, and information resources used.

Outlining approaches to troubleshooting through the lens of “psychology and philosophy” seemed to speak more to the fundamental skills and talents effective troubleshooters have — remain calm, high tech with a human touch, logical & analytical thinking, can-do attitude, and don’t assume operator error.

Knowing your users is foremost, and this includes both internal and external users. Your internal users (ILL, reference, collection development, systems) provide valuable feedback from varied points of access and patterns of use. Knowing specifics about your external users — who will have different enrollment statuses, needs, and devices — will inform the approach for solving problems.

How problems are reported and solved varies widely — email, a link to a problem report form, an internal error log, a ticket system, and AskaLibrarian. The lengths people go to to solve problems range from simple user-facing guides to more detailed internal documentation.

The question is: how are you evaluating the effectiveness of these methods?

Katie Gohn shared observations of the widely varying sources for reporting e-resource troubles — anywhere from water cooler talks to direct emails. But her portion of the presentation focused primarily on the e-resource tracking system Footprints. Her library had this system set up as an instance of the wider University IT’s version.

Their web-based “Report IT” form populates the system from a user-selected category assignment for the problem and a general comment box. On the back end, the form also gathers computer type, IP address, and referring URL.
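The back-end details a form like this can gather come free with any web request. A minimal sketch of how such a form handler might assemble a ticket from the standard WSGI request environment — the function name, field names, and example values are hypothetical, not the actual Footprints implementation:

```python
def build_ticket(environ, category, comment):
    """Assemble a trouble ticket from user input plus standard
    WSGI environ keys (hypothetical sketch, not the real Report IT form)."""
    return {
        "category": category,                          # user-selected problem category
        "comment": comment,                            # free-text comment box
        "ip": environ.get("REMOTE_ADDR", ""),          # requester's IP address
        "referrer": environ.get("HTTP_REFERER", ""),   # page the user came from
        "user_agent": environ.get("HTTP_USER_AGENT", ""),  # browser/OS details
    }

# Made-up request environment for illustration
ticket = build_ticket(
    {"REMOTE_ADDR": "10.0.0.5",
     "HTTP_REFERER": "https://library.example.edu/databases",
     "HTTP_USER_AGENT": "Mozilla/5.0"},
    category="Broken link",
    comment="Full text will not load.")
print(ticket["ip"])
```

Capturing the referrer and user agent automatically is what lets troubleshooters reproduce a problem without grilling the user about which database and browser they were in.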

What are the key features that a tracking system provides that email or other existing methods don’t?
1) The ability to see status and who’s responsible
2) Centralized communication in a system that is easily searchable
3) The ability to categorize, which allows you to assess needs from vendors or identify internal training needs
4) Numbers to justify staffing needs in this area

Six people are assigned to these troubleshooting teams for a 12K-FTE organization, and they are hoping to justify hiring one more. Basic troubleshooting training is important, and this tool will help shape it.

Google Lite

I spent the ENTIRE day reworking my online presence on the heels of Google’s new privacy policy implementation happening March 1st.  Thank God it’s a Leap Year!  I justify this procrastination by considering myself fairly savvy in these realms, or at least savvy in my connections with savvier helpers like Sense and Reference and the EFF.

Now, I know I have not secured my everlasting privacy.  The internet is both permanently public in one sense (data is forever and no longer my own), and publicly private in another (there is so much out there, my contributions are likely to go unnoticed anyway).  But my hope was to begin sorting my online lives a little more clearly into basic camps of what I want to share and what I want to store.  I am also not giving up Google entirely.  I am keeping my Gmail account and the services for which I’ve used that email to register.  But in order to disassociate it from my daily searching and reading (which I prefer to keep somewhat private), I had to figure out a new browser, search engine, and reader.  So, here are the end results and what I learned in the process.

Google Bookmarks –> Evernote

I’ve always been uncomfortable with the public aspect (sharing) of bookmarks, which is why I never took full advantage of Delicious.  But I had been hanging on to Google Bookmarks and justified Google knowing those bookmarks — well, Google knew my (and your) search history too — because until the recent privacy policy changes, Google kept that information separate and somewhat anonymized from its other personalized Google account features.  So, I hung on to Bookmarks even after losing its seamless functionality when Firefox force-upgraded some months back.  I had also (with the Firefox change) decided to try out Chrome, thinking it would integrate Bookmarks more seamlessly. It did not, and I’ve been living, not exactly pleased, with the Chrome browser and Bookmarks since.  I decided to tackle finding a new bookmark service before dealing with search and my other Google accounts.

I had tried Evernote as a personal note-taking tool, to-do list keeper, and potential research-ideas organizer.  So, I decided to add my bookmarking there. Because I wanted to clean them up in the process, I manually reviewed, moved, and tag-categorized over 150 sites.  I’m not totally jazzed with the default display, but I’m still learning and feel like there is plenty of flexibility.

Google Reader –> Netvibes

I took Sense and Reference’s suggestion of Netvibes as an alternative to Google Reader.  Along with a good take on what the Google privacy changes mean, you can see his full Google alternative recommendations here.  I like Netvibes both visually and organizationally.  And it seems, like Evernote, to have much more to explore.

Firefox and Chrome (Google Lite)

To take EFF’s recommendation to separate my search from my services, I had to really think through how I work during the day.  Ultimately, I went back to Firefox as my default browser, giving it my home page for work, my bookmarks, and my reader.  I kept Google Chrome, opening it to my Gmail, Twitter, and this blog (which I might reconsider — I’m blogging right now intentionally not yet signed into my Google accounts).  Luckily, I have two screens, so I can visually keep these browser universes separate.  Although I’ll probably have to put a big Post-it note on the Chrome screen that reminds me: DO NOT SEARCH IN GOOGLE CHROME!

Sidebar on dual monitors (in Ferris Bueller voice): “It is so choice. If you have the means, I highly recommend picking [another] one up.”

Google Search –> Duck Duck Go

I also took a recommendation for a new search engine, trying out Duck Duck Go. It is very clean visually and has a nifty Firefox plugin.  So far I also like the functionality and speed of the results.  Best of all, it is not tracking my stuff.  See what I mean in a nutshell or in their full privacy policy.

Google+ –> Facebook (for now)

Finally, I cancelled my Google+ account which wasn’t much of anything anyway. When it asked why I was deciding to leave, I should have said:  “Your algorithm can probably figure that one out.”

I’m sure I’ve still got some blind spots in this whole thing.  So, please feel free to educate me, especially since next up is Facebook Timeline.

truthberry picking (to be continued…)

Just a placeholder for my post on the berries from ALA Direct this week.


Journal 09-14-03

More thoughts on issues Kurzweil stirs in me, mostly defining humanity apart from a solely mechanistic rationale.  Searle rebuts with distinctions between computing symbols and conscious understanding.  Dembski follows by showing how computers lack a frame of reference, a context (the inability to get the joke).  Though, how many of us out there are just like that — ha ha.

Anyway, what Searle said got me thinking about the technical services side of the library (that being my current work).  Don’t we just sort out [make symbols or code] the info?  How much is conscious understanding, or how many decisions require that “gut” feeling?  A subject cataloger might argue that it is quite a bit — not a technical service, but an art. Taken too far in my train of thought, I wondered how many of us techies could be (ARE BEING) replaced by technology.  In fact, we embrace it to a large extent — anything that helps us do our jobs faster.  What we’ve found is that this sometimes causes a predicament.  If you don’t use the fast technology, your work becomes irrelevant (too slow, unnecessary work to get the job done).  On the other hand, embrace it too fully and one might end up wondering what your warm body is even doing there. Maybe that’s drastic.  But I have found myself twiddling my thumbs every now and then when a pile of work I thought would take two hours, I managed through in one.  This is also partly my keeping up with the pace.  My skills [get] faster and computers are [getting] faster.


Is this where libraries in general will find themselves if they embrace the electronic format too fully, if they abandon the traditional library too fully?  I guess with relief I return to the fact that coincides with my last statement.  Since we (humans) will create the machines, we (librarians) will integrate them into the library.  Then — call me naive — I think any further argument Kurzweil makes (machines self-replicating and such) is too “out there” to worry about right now, if ever.
