Digital Odyssey 2007

Building our Future

Beth Jefferson on the BiblioCommons

Posted by odyssey2007 on April 21, 2007

Slides | Photo
1:30 – 3pm; Room 205.

In her Digital Odyssey 2006 session, Beth talked about the research being conducted with BiblioCommons. Since then, the project has received funding from Knowledge Ontario to further the work of building a proof of concept by implementing a pilot project with Ontario public libraries.

Key ideas:

  • Social discovery (vs. social software)
  • Averages hide important differences

  • Introduction:

    • Consider that: “A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be” (Gretzky)
    • The question then is, what kind of player plays where the puck has been?
    • Many are frustrated with existing OPACs, so what can be done about it?
    • Next-generation catalogue functionality already includes the following: spell-checking, field weighting, truncation, sort flexibility, in-line query limiters; and, in more advanced examples, faceted search, duplicate detection (FRBR), ratings, reviews, tags…
    • Libraries are behind commerce in this area; therefore there is an opportunity here for libraries to ‘leapfrog’.
      Where are things headed? ‘Social searching’: not blogs or podcasts, but using what others have found to aid the discovery of resources.
    • BiblioCommons focuses on:
      • The library catalogue as being about ‘discovery’ (and discovery as ‘fun’) vs. being limited to ‘finding’
      • Relevancy: no single set of criteria is relevant to ‘average users’. What is relevant to you is not equal to what is relevant to me; ‘averages’ gloss over subtle but important differences.
      • Communities, niche groups of users.
    • If you think about the search interface, it is essential to offer tools that refine searches for users, but tools are also essential for helping users ‘expand’ their search and get to a place they didn’t know how to ask for. This is where the value of social context comes in.
    • How does the concept of BiblioCommons differ from the ‘social OPAC’ or modules added to existing OPACs (e.g. add a review, tag, rate this book, etc.)? Existing modules are pretty straightforward to implement; they are not tough, and vendors already focus on this aspect of the ILS. What is tough is getting at questions such as:
      • Which data do you ask for? (Which data is most valuable, in what format, and what is not asked for?)
      • How do we ensure the quality of search results?
      • How do we allow for aggregation across systems? (Social data is only valuable when there is tons of it, but current practice is to limit data to the local ILS.)
      • If we provide reviews, how many? 10,000 reviews? No: only the 5 of the 10,000 that are relevant to a user (i.e. data from sources they ‘trust’).
      • How do we help create or support communities of interest that share values, tastes, affinities, etc., and also integrate that back into the search process? (Once you have data collected, you can invest in algorithms that make the data valuable. Use the metadata of library collections, but also people and the conversations they have about the collection.)

    • I. Aggregation:

    • BiblioCommons proposes a subscription web services model with a central repository interacting with multiple library systems in a direct way. Batch loading of records and authentication would occur on a nightly basis. With aggregation across library systems users can access the central repository through a local library interface where users can also contribute metadata, and in this way work towards a kind of ‘universal library catalogue’.
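The aggregation model described above can be illustrated with a minimal sketch. This is not BiblioCommons code; all function and field names here are hypothetical, assuming records are matched across systems on a shared identifier such as ISBN:

```python
# Hypothetical sketch of nightly batch aggregation from local ILS
# systems into a shared central repository, keyed on ISBN.

def nightly_batch_load(central_repo, library_id, local_records):
    """Copy local ILS records into the central repository.

    Records are held redundantly, as described in the talk: each local
    ILS keeps its own copy, while the central repository aggregates
    holdings and user-contributed metadata across systems.
    """
    for record in local_records:
        isbn = record["isbn"]
        entry = central_repo.setdefault(isbn, {"holdings": {}, "user_metadata": []})
        entry["holdings"][library_id] = record  # redundant copy

def contribute_metadata(central_repo, isbn, contribution):
    """Attach user-contributed metadata so every member library sees it."""
    central_repo[isbn]["user_metadata"].append(contribution)

# Example: two library systems sharing one repository
repo = {}
nightly_batch_load(repo, "toronto", [{"isbn": "0771034695", "title": "A Fine Balance"}])
nightly_batch_load(repo, "ottawa", [{"isbn": "0771034695", "title": "A Fine Balance"}])
contribute_metadata(repo, "0771034695", {"user": "anon123", "tag": "epic"})
```

Because the contributed tag lives in the central repository rather than either local ILS, both libraries’ interfaces can surface it, which is the point of aggregating across systems.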

    • Question: Local ILS record vs. a central repository: does one record replace the other?
    • Response: The records are copied, so there is redundancy in the system.

    • What data should be gathered? What enhances discovery?

    • User reviews: how useful are they? The problem with reviews is that they are only useful once you get to a record; they are of limited use for ‘discovery’.

    • Ratings: Ratings are a huge part of the culture, a kind of voting; there is an expectation of being able to use ratings as a way of expressing what we like or don’t like (numerous examples are available, from movies, etc.). So, why not let users rate the library and its collections in this way?
    • We need to think about ways to engage huge numbers of users in rating their own experiences and using that information to enhance search capabilities. How do we do this? The ‘my account’ feature of OPACs has the potential to help create an environment for this kind of interaction to take place.

    • Lists: Lists are also a big part of popular culture and serve as a way of curating collections. Notice how keyword search results can be very random (e.g. a search for ‘head injury’ returns a doctor’s story, a mystery novel, etc.). But if we give the task of creating lists to users, the results can be less random, because they can specify things like how a subject such as ‘head injury’ is treated, from what perspective, in relation to which other topics, of interest to whom, etc. If we unleash tools for doing this on the OPAC, users will use them as they do elsewhere on the web.
    • If we offer ‘lists’ today, what is expected in the next iteration? Perhaps helping users judge the authority of list creators by looking for consensus. Another possibility is supporting a foundation for conversation between users; this can be done with an optional check box that permits users to contact a list creator to talk about that topic.

    • Tags: The use of tags in finding fiction is important to consider. In public libraries, fiction is the dominant resource type (about 70%), while at the same time LCSH is often highly inappropriate for it (e.g. A Fine Balance, about ‘apt. houses’?). Tags can help us get there. On Amazon (vs. LibraryThing), you can get a list of terms expressing what a book is ‘about’ (friendship, poverty, etc.) and characteristics of its ‘tone/style’ (haunting, heart-wrenching, epic, etc.). User vocabulary can be helpful in this way, but the challenge lies in how to elicit rich vocabulary from users to aid the discovery process (vs. what tag clouds show now). What we are doing now is experimenting with ‘faceted tagging’, prompting users to fill in genre, type, about, tone/style, and of-interest-to fields. Almost nobody in our research of 50 in-depth interviews was familiar with the concept of ‘tagging’, so the term was changed and participants were asked to ca‘tag’orize the books. A focus group of 11 or 12 teens generated a rich set of terms for Harry Potter, but consider the different terms used by different groups of users: ‘dark’ vs. ‘scary’. Adjectives themselves carry social context, and these subtle differences among what could be treated as synonyms can instead be used for exploration across facets. Tagging can also be used to indicate objectionable content or age appropriateness, and to combine one person’s opinion with another’s comment on the same resource to gauge the relevancy of the tag for your own use.
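As a minimal sketch of the faceted-tagging idea, the structure below keeps each term under its facet and remembers which user group used it, so that near-synonyms like ‘dark’ and ‘scary’ retain their social context. The facet names come from the talk; everything else is hypothetical:

```python
# Hypothetical sketch of 'faceted tagging': instead of one flat tag
# cloud, users are prompted to fill in separate facets, and each term
# records which user group supplied it.

FACETS = ("genre", "type", "about", "tone/style", "of interest to")

def add_faceted_tags(record_tags, user_group, facet, terms):
    """Record terms under a facet, tracking the contributing user group."""
    if facet not in FACETS:
        raise ValueError(f"unknown facet: {facet}")
    bucket = record_tags.setdefault(facet, {})
    for term in terms:
        bucket.setdefault(term, set()).add(user_group)

# Example: teens and adults tagging the same book differently
tags = {}
add_faceted_tags(tags, "teens", "tone/style", ["dark"])
add_faceted_tags(tags, "adults", "tone/style", ["scary"])
add_faceted_tags(tags, "teens", "about", ["friendship"])
```

Keeping the group alongside each term is what lets an interface later answer “what do readers like me call this book?” rather than flattening everything into one cloud.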

    • If we want users to provide input, how do we not only get them to do it, but get lots of people to do it? Compare OCLC’s Open WorldCat, with 2 reviews of The Da Vinci Code, to LibraryThing (launched 2 or 3 months after the former), which has 400 reviews of it. This comparison illustrates how implementation is key.
      Tim O’Reilly writes on this in ‘The Architecture of Participation’.
    • If you look at the steep pyramid of participation, the breakdown is: 1 creator, 10 synthesizers, 100 consumers. Look at any number of initiatives and consider how often you get comments back.
    • So we need to focus on how to remove the barriers. People won’t go out of their way to do this, contrary to the idea that everyone wants an outlet for their ‘voice’ (early implementers vs. general public users). In our survey, out of 7 statements of motivating factors, only 1 respondent of 50 said that their motivation would be to ‘have my voice heard’ (vs. other ‘perks’ or the sense of ‘giving back’).
    • Currently, the small number of people who have confidence in their own voice have a huge impact on how collective energy and attention is directed, because they can heavily influence e.g. relevancy ranking in Google. The library can play a role in helping the rest be heard.
    • Another factor to consider is the fear of ‘If I write a review, can I edit it afterwards?’ (you can’t with WorldCat).
    • In a survey of online users, 40% (n = 45) responded that they access their library account online several times a week (e.g. hold status, due dates). Projected across the rest of North America, that would probably reach several million people, more than adequate to compete with sites like MySpace, Facebook, Amazon, etc. Consider that we get this kind of traffic with the interface we have now; imagine what could happen if we offered a page that was more interesting!
    • We need to make the process ‘in the flow’ for most users but also add functionality for engaged users. Three major motivations to contribute (each representing about 1/3 of the responses) were:
      • a) public good will;
        b) want personalized recommendations;
        c) nothing or won’t do it unless given something
    • Jakob Nielsen, on ‘participation inequality’, points out the need to offer rewards. For this reason we are looking at a community credits program, so that we can give the sense that ‘someone is thanking me for doing this’.
    • But remember, don’t overlook the obvious! Currently libraries don’t have the ability to send e-mail notices to users for changes to their account, and this is the one thing they would absolutely like.
    • Privacy control: When users contribute, they have control over what is shared and what is hidden. At first we went out to users with an interface specifying the ‘public’ or ‘private’ nature of their contribution, but most people wanted to keep things private, so we had to reverse the incentives and shifted the semantics to ‘share’ vs. ‘hide’.

    • II. Quality Assurance:

    • You can’t monitor every contribution, for two reasons: 1) it is not scalable, and 2) you run into legal complications.
    • We need to create a welcoming area by laying out the very basic, clear rules, but leave grey areas to be sorted out by users.
    • The points of caution we are taking include:
      1. users have to authenticate to contribute (often users log in at library stations anyway);
      2. maintain an anonymous username, but one that corresponds to their library account so that they can be kicked out if they cause problems;
      3. let users control what is relevant to them (e.g. ‘I like this person’s comments but ignore these others’);
      4. flag content (flags are reviewed across the various library systems, and any discrepancies go to the committee of provincial partners);
      5. vocabulary filtering (we are testing ways to let users customize offensive-vocabulary lists, e.g. x-ing out the vowels of specific words).
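The vowel-masking idea in point 5 can be sketched in a few lines. This is an illustrative guess at the mechanism, assuming a library-customizable list of blocked words whose vowels are replaced with ‘*’:

```python
# Hypothetical sketch of the vowel-masking vocabulary filter: words on
# a customizable blocked list keep their consonants but have their
# vowels replaced with '*', so readers can tell what was filtered.

import re

def mask_vowels(word):
    """Replace the vowels of a single word with '*'."""
    return re.sub(r"[aeiou]", "*", word, flags=re.IGNORECASE)

def filter_text(text, blocked_words):
    """Mask the vowels of any blocked word appearing in user content."""
    pattern = r"\b(" + "|".join(re.escape(w) for w in blocked_words) + r")\b"
    return re.sub(pattern, lambda m: mask_vowels(m.group(0)), text,
                  flags=re.IGNORECASE)

# Example with a placeholder blocked list
print(filter_text("this book is darn terrible", ["darn"]))
# prints "this book is d*rn terrible"
```

Masking rather than deleting keeps the contribution readable while letting each library tune its own list, which matches the ‘customize’ wording above.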

      III. Back-end Analytics:

    • To improve relevancy, we need to consider niche user groups and aim for ‘results that are relevant to me’. Consider children vs. adults: some people think the point is to create interfaces and portals that are friendly to children, but more specifically we need to focus on the trouble children have in filtering results, by getting better at presenting smaller subsets of results, etc.
    • Averages: consider teachers vs. kids. IMDb now offers demographic information with ratings, so a 7-to-12-year-old girl can get an average specific to the group of 7-to-12-year-olds. Take for example a set of recommendations for ‘funny’: a ranked listing of what other people have found ‘funny’, but funny to whom? How do we allow people to get to their own definition of ‘funny’? Socially provided data is important, and who provided it is important to the discovery process. In some cases the question ‘was this review helpful to you?’ is asked, but the answer is not tied to the user, so you can’t easily find the same reviewers again (people are working on this for the next iteration, e.g. the ability to pull down a menu and select ‘trusted sources’ of reviewers, but also to limit that influence to certain subject areas).

    • Question: Legal implications? In police investigations, must the library provide information?
    • Response: All information is collected by users opting in to do so.

    • Question: When a user is rating a review, do they think about legal implications?
    • Response: In testing, users were asked about their expectations of what would be done with their contribution; they all expected the content to appear in the catalogue, but not necessarily tied to their profile. So we are trying to get at what users expect, vs. just covering ourselves legally.

    • Question: What happens if you move to another city or library system and acquire a new ID?
    • Response: Users have the choice of linking accounts so that one profile exists for all instances, and you can toggle between library systems.

    • IV. Communities:

    • How do we support communities, conversations, and help users find what they couldn’t ask for? How do we allow users to expand their searches?
    • Present services like ILL the way you want them to be used (though this may be an issue for some), and offer browsing not only of the local library but also of resources held in other libraries and community groups (e.g. hospitals).
    • Same thing for ‘questions and answers’. Consider Yahoo Answers: the challenge is, if you have 3 million people, how do you match the questions with the answers? Problems include granularity (e.g. ‘mental health’ vs. ‘mood disorders’). One approach is to ask a person to catalogue questions and answers to help match them up.
    • What does this do for the role of the librarian? It creates a megaphone for librarians to give rich answers, with the potential to be viewed by thousands of users.

    • Notes by Hyun-Duck Chung

