Background related to this project can be found in a paper presented to the Sydney Information Online conference of 2007 by Alison Dellit and Tony Boston (also available in MS Word format). The slides are also available as a 1.8 MB PowerPoint.

See also Rethinking the catalogue, a paper delivered by Alison Dellit and Kent Fitch to the NLA Innovative Ideas Forum, 19 April 2007.

About the demonstrator

This system is a very simple demonstration of searching MARC bibliographic records using Lucene for storage and indexing. The database being searched is a copy of the Australian National Bibliographic Database (ANBD) as at March 2008. It contains 16 million bibliographic records with holdings information for Australian libraries. The demonstrator extracts topics and relationships from the records retrieved by a simple full-text search and uses them to present the search results.

Record details are shown augmented with data from external services such as Amazon, Google Books and LibraryThing (described below).

The user's libraries can be used to boost rankings, and "online" resources can also be selected (although the current definition of what is an "online" resource is too broad to be very useful).

The same data (a few months more up to date) is also accessible in a public form through Libraries Australia. Please note that this system is a demonstrator of ideas, not a statement of direction by the National Library of Australia for the Libraries Australia service. The user interface has been left undesigned because we want feedback on the basic ideas rather than graphic design.

Please provide feedback on the NBD Prototype Discussion Wiki or by email to Kent Fitch, NLA.

As shown in the Rethinking the catalogue presentation at the NLA Innovative Ideas Forum, here's a static demo of integrating library metadata search with remote full-text source search.

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 2.1 Australia License.


Database

This system has been implemented using Lucene as the storage and indexing mechanism. Four Lucene databases have been created:

  1. A simple database with 16 million MARC records, stored in a compressed XML representation with a single unique identifying key (index) - 22 GB
  2. An index database of these 16 million MARC records. Over 150 logical indices have been created (mimicking many of the indices in the current production NBD), but most of these are tiny and are mapped onto just 1 of the 26 Lucene "physical" indices created on the database. Fewer than 10 indices are used by the searching and clustering functionality of this demonstrator. The process creating these indices exploits much of the coded data in the MARC representation; for example, the 048 codes are mapped into words, which are indexed as part of genre; collection level records have their ranking boosted; holding count also boosts ranking. There are dozens of such mappings. This database is about 23 GB.
  3. A database used to drive spelling suggestions (since replaced by Yahoo spelling suggestions). This database was built using sample Lucene spellchecking code, augmented with phonetic representation and word frequency data. 0.4 GB
  4. Subject authority index - 0.2 GB

The program which reads the raw MARC records and constructs the two large databases runs for about 16 hours of elapsed time to produce the final, merged versions of these databases (on a 4 x 3 GHz Xeon box with 8 GB of memory). The other databases require relatively insignificant elapsed time to generate.

Approach

MARC records are heavily processed during the indexing phase to extract field/sub-field contents into Lucene indices. Parts of fields are stored with different Lucene field boosts (for example, the main title, 245$a, is heavily boosted, whereas the subtitle, alternative title, series title, added entry title etc. are also indexed as "title", but with differing, lower boosts).
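
As a concrete illustration, here is a minimal sketch of that index-time boosting against the Lucene 2.x-era API the prototype would have used (index-time field boosts were removed from later Lucene versions). The field names, boost values and log-of-holdings document boost are illustrative assumptions, not the prototype's actual configuration.

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    class TitleIndexing {
        // Both fields are indexed under the same logical "title" index, but a
        // match on the main title (245$a) outranks a match on the subtitle.
        static Document buildDoc(String mainTitle, String subTitle, int holdingsCount) {
            Document doc = new Document();
            Field main = new Field("title", mainTitle, Field.Store.NO, Field.Index.TOKENIZED);
            main.setBoost(5.0f);  // 245$a: heavily boosted
            doc.add(main);
            Field sub = new Field("title", subTitle, Field.Store.NO, Field.Index.TOKENIZED);
            sub.setBoost(2.0f);   // 245$b: indexed as "title" too, lower boost
            doc.add(sub);
            // A search-term-independent boost, here derived from holdings count
            // (see "Relevance ranking rules" below).
            doc.setBoost(1.0f + (float) Math.log1p(holdingsCount));
            return doc;
        }
    }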

At query time, a very large query is constructed, searching many separate indices with separate boosts for exact matches, "near" matches and keyword anywhere clauses.
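
A minimal sketch of the shape of such a composite query; the index names ("title", "keywords") and boost values are illustrative assumptions:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.PhraseQuery;
    import org.apache.lucene.search.TermQuery;

    class QueryBuilding {
        static BooleanQuery build(String[] words) {
            BooleanQuery q = new BooleanQuery();
            // Exact/near match: the words as a phrase in the title index.
            PhraseQuery phrase = new PhraseQuery();
            for (String w : words) phrase.add(new Term("title", w));
            phrase.setSlop(2);        // "near" match: tolerate small gaps
            phrase.setBoost(10.0f);
            q.add(phrase, BooleanClause.Occur.SHOULD);
            // Scattered words in the title index, lower boost.
            for (String w : words) {
                TermQuery t = new TermQuery(new Term("title", w));
                t.setBoost(3.0f);
                q.add(t, BooleanClause.Occur.SHOULD);
            }
            // Keyword-anywhere clauses, lower boost again.
            for (String w : words) {
                q.add(new TermQuery(new Term("keywords", w)), BooleanClause.Occur.SHOULD);
            }
            return q;
        }
    }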

The first 700 records (the most relevant 700) are then retrieved and specific field values accumulated to find clusters (date, subject, classification, etc), and for the top clusters, the population across the whole database is counted, enabling the cluster presentation on the right-hand side of the results page. Spell checking and subject authority matching are also performed.
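
A sketch of the cluster accumulation step, assuming a stored "subject" field and a later-2.x-era Lucene API:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TopDocs;

    class Clustering {
        static Map<String, Integer> subjectClusters(IndexSearcher searcher, Query q)
                throws Exception {
            Map<String, Integer> counts = new HashMap<String, Integer>();
            TopDocs top = searcher.search(q, 700);   // the 700 most relevant records
            for (ScoreDoc sd : top.scoreDocs) {
                Document doc = searcher.doc(sd.doc);
                String[] subjects = doc.getValues("subject");
                if (subjects == null) continue;
                for (String subject : subjects) {
                    Integer c = counts.get(subject);
                    counts.put(subject, c == null ? 1 : c + 1);
                }
            }
            return counts;   // the top entries become the displayed clusters
        }
    }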

This approach was originally tried using Oracle, and it worked fine up to about 200,000 records (on a dual-core 3 GHz Pentium 4 with 2 GB of memory) before becoming very slow. The Lucene implementation seems much more scalable (we've benchmarked 5 queries/second against 16 million records as attainable).

The demonstrator is a very simple JSP implementation, but the code betrays the layers of experimentation required to achieve the current functionality and is fit for no purpose other than mining for ideas.

More details - the 22 things going on here

  1. Augmentation of search data with decoded MARC data

    The idea is to make the coded data searchable, and hence contribute to ranking and clustering, by converting the codes to words. A small decoding sketch follows the examples below.

    • Can we use LC/Dewey code names as "subjects"? That is, should we add the subject "plant injuries, diseases, pests" based on [ 082 a 632.5/0994 2 22 ]
    • Can we reliably set "audience" based on, for example: [ 650 0 v Juvenile fiction ]
    • Genre: "percussion xylophone" based on, for example:[ 048 a pb01 ]
    • Genre: "bibliography" and "technical report" based on, for example: [ 008 040308s2003    xraa     bt  f000 0 eng ]
    • Subject: "United States -- Florida" based on, for example: [ 043 a n-us-fl ]

  2. Relevance ranking rules

    Search term specific:

    1. Occurs in Title/Subject/Author rather than notes/TOC; main Title/Author rather than added entry...
    2. Occurs as a phrase or near phrase rather than as scattered words
    3. Occurs as an exact match
    4. Occurs multiple times (especially the unusual words)
    5. Occurs as the only or main words (e.g., as the only subject rather than as 1 of 10)
    6. Occurs as entered rather than occurs as a matching stem
    7. Occurs in "more important" tags

    Item characteristics (not search-term specific; a query-time sketch follows this list):

    1. Is a collection level record
    2. Is widely held
    3. Is held by one of your libraries
    4. Is on the shelf at one of your libraries (not implemented)
    5. Is available online
    6. Is highly rated (sales/reviews) on Amazon or LibraryThing (not implemented)
    7. Is widely cited by other books or by credible web pages (not implemented)
    8. Is available for inexpensive purchase and quick delivery new or second hand (not implemented)
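
    A sketch of how a couple of these characteristics could be folded in at query time. The field names ("library", "online") and boost values are assumptions; the optional SHOULD clauses affect ranking without restricting the result set:

        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        class CharacteristicBoosts {
            static Query withBoosts(Query userQuery, String[] myLibraries, boolean preferOnline) {
                BooleanQuery q = new BooleanQuery();
                q.add(userQuery, BooleanClause.Occur.MUST);   // record must match the search
                for (String lib : myLibraries) {
                    TermQuery held = new TermQuery(new Term("library", lib));
                    held.setBoost(2.0f);                      // held by one of your libraries
                    q.add(held, BooleanClause.Occur.SHOULD);
                }
                if (preferOnline) {
                    TermQuery online = new TermQuery(new Term("online", "y"));
                    online.setBoost(3.0f);
                    q.add(online, BooleanClause.Occur.SHOULD);
                }
                return q;
            }
        }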

  3. Clustering by subject hierarchy -v- by subject facets

    The search result UI currently shows both views (selectable by a kludgey button/tab). The problems with hierarchies are fragmentation of areas of interest (eg, search for computer art and then decide you want to narrow to "exhibitions") and inconsistent placement of terms in, and structures of, hierarchies (eg, search for florida). The problem with facets is that sometimes more context is required to make sense of a single facet (eg, the "to 640 a d" facet shown if you search for ancient egypt).

    Maybe the OCLC FAST approach would help.

  4. Clustering by date

    The initial date clusters vary in "width" (number of years), with more recent widths being narrower (fewer years). Clicking on a date range results in date clustering by year.
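
    A sketch of variable-width date bucketing; the particular widths are assumptions:

        class DateClusters {
            // Map a publication year to a cluster label whose width (in years)
            // shrinks as the year approaches the present.
            static String clusterFor(int year, int currentYear) {
                int age = currentYear - year;
                int width = age < 5 ? 1 : age < 20 ? 5 : age < 100 ? 20 : 100;
                int start = year - (year % width);
                return width == 1 ? Integer.toString(year)
                                  : start + "-" + (start + width - 1);
            }
        }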

  5. Clustering by Conspectus discipline and sub-category

    Conspectus (not quite dead yet) is another way of grouping bibliographic resources. Warwick Cathro suggested applying some mappings from DDC to Conspectus and from LC to Conspectus as an experiment. (As at 12 Nov 06, only the DDC mappings have been applied.)

  6. Stem matching

    Searches for Wallaby rugby and Wallabies rugby should probably return the same set, just differently ranked.
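
    One way to get that behaviour is a stemming analyzer on the keyword indices, sketched here against the Lucene 2.x-era API; the exact-match clauses described above then rank the as-entered form higher:

        import java.io.Reader;
        import org.apache.lucene.analysis.Analyzer;
        import org.apache.lucene.analysis.LowerCaseTokenizer;
        import org.apache.lucene.analysis.PorterStemFilter;
        import org.apache.lucene.analysis.TokenStream;

        class StemmingAnalyzer extends Analyzer {
            // "wallaby" and "wallabies" both stem to "wallabi", so either
            // search matches the same set of records; only the ranking differs.
            public TokenStream tokenStream(String field, Reader reader) {
                return new PorterStemFilter(new LowerCaseTokenizer(reader));
            }
        }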

  7. Advanced search

    A prototype advanced search is available.

  8. Opensearch and XML output support

    Eg, openSearch results for ancient egypt, general XML results for ancient egypt

  9. "My Library" support

    Searchers can nominate their libraries which are remembered using a cookie. Results from their libraries are annotated and can be optionally used to boost ranking in search results. Holdings in their libraries are emphasised in the detailed title display. "Deep Linking" into some OPACs has been implemented.

  10. "Online" results boosting

    Searchers can elect to boost the ranking of "online" results. However, the identification of truly online results is currently rather hit and miss, as "online" covers everything from a web page holding just an LC table of contents to a fully online version of the work.

  11. FRBR-like title grouping

    The current Libraries Australia database contains many "duplicates": records not merged due to subtle differences in metadata which are often inconsequential or errors. Many people also think it would be a good idea to combine various editions of works in the search results interface, although how far this combining should go is debatable. Should it be the equivalent of an FRBR work, or of an FRBR expression? Should it include works across languages and material types?

    The first approach taken by this prototype was to use an adaptation of the OCLC FRBR Work-Set Algorithm to group MARC records with a matching (normalised) author/title/material type/language.

    This has since been refined to group into the following "layers" (a key-construction sketch follows the list):

    1. Author / title grouping to a "superwork"; not part of FRBR but an attempt to group "works" across forms such as books and films.
    2. Author / title / form grouping to a "work", although form is more likely to be an expression-level definer in real FRBR. This experiment currently uses a hacked material type for "form", which gives rise to some inconsistencies in the display (which uses a different, more subtle "form" definition).
    3. Author / title / form / language grouping to an "expression".
    4. Author / title / form / language / edition / year of publication / publisher grouping to "manifestation"; a mixture of real FRBR expression and manifestation layers. (The "edition" string is normalised from either the 250$a or guessed from the 245$a contents. The "publisher" string used to group is just the first 6 normalised characters in the 260$b in a heuristic attempt to survive the many variations in publisher names that hinder exact record matching.)
    5. MARC record, which is perhaps a manifestation or an item in real FRBR. These records often have multiple holdings, sometimes hundreds, because the NBD system they come from is a union catalogue.
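
    A sketch of how grouping keys for these layers could be constructed; the normalisation shown (lower-casing and stripping punctuation) is a simplified assumption:

        class FrbrKeys {
            static String norm(String s) {
                return s == null ? "" : s.toLowerCase().replaceAll("[^a-z0-9 ]", "").trim();
            }
            static String superwork(String author, String title) {
                return norm(author) + "|" + norm(title);
            }
            static String work(String author, String title, String form) {
                return superwork(author, title) + "|" + norm(form);
            }
            static String expression(String author, String title, String form, String language) {
                return work(author, title, form) + "|" + norm(language);
            }
            static String manifestation(String author, String title, String form,
                    String language, String edition, String year, String publisher) {
                String pub = norm(publisher);                    // just the first 6 normalised
                if (pub.length() > 6) pub = pub.substring(0, 6); // characters of the 260$b
                return expression(author, title, form, language) + "|"
                        + norm(edition) + "|" + norm(year) + "|" + pub;
            }
        }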

    The OCLC workset algorithm contains approaches for bringing together groups based on extensive 7xx author added entry processing, but we've taken a simple approach which still needs work! However, we have fruitfully grouped on exact ISBN and title matches when no author is available.

    What we're trying to achieve is a set of groupings most likely to be useful to a searcher wanting to find a resource. The searcher probably has very strong preferences for the form and language of the resource they're seeking, which is why they're our top two layers/groupings. After that, they may have a preference for a particular edition or, less likely but possibly, even a particular manifestation (publisher, publication year, place of publication).

    Of course, they don't actually care about the bibliographic record; they want to get their hands on the resource, so we have to think about how they can easily tell the system to:

    • Locate any edition I can get today for free
    • Locate any edition published after 1960 I can get today for free
    • Locate either of these two editions I can get cheapest and soonest
    • Locate any French edition available for electronic access...

    All records grouped into an FRBR-like structure display a "This title can be viewed as part of an experimental FRBR group" hyperlink near the top of their detail display page. The resulting display is a long way from what we want it to look like, but it shows the results of our clustering. The demonstrator includes some examples of the FRBR-like displays.

    Grouping statistics (05Oct06):

    • 105,791 "superwork" groupings containing more than 1 work (author/title)
    • 1,765,347 "work" groupings (author/title/form)
    • 1,944,388 "expression" groupings (author/title/form/language)
    • 3,860,503 "manifestation" groupings (author/title/form/language/edition/year/publisher)
    • 4,887,047 MARC records grouped

    That is, 4.9 million (about 30%) of the ~16 million MARC records in the LA extract used by the NBD prototype were grouped.

  12. Work grouping - OCLC xISBN service

    We also query the OCLC xISBN service when displaying a title, list the associated ISBNs, and allow this group to be searched.
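
    A sketch of such a lookup; the URL form (the xID plain-text interface, one ISBN per line) follows OCLC's xISBN documentation of the time and is an assumption — the service has since been retired:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.util.ArrayList;
        import java.util.List;

        class XisbnLookup {
            // Fetch the ISBNs that xISBN associates with the given ISBN.
            static List<String> relatedIsbns(String isbn) throws Exception {
                URL url = new URL("http://xisbn.worldcat.org/webservices/xid/isbn/"
                        + isbn + "?method=getEditions&format=txt");
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream(), "UTF-8"));
                List<String> isbns = new ArrayList<String>();
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.trim().length() > 0) isbns.add(line.trim());
                }
                in.close();
                return isbns;
            }
        }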

  13. News headlines

    We're using a news feed to automatically select some current news stories and (hopefully) relevant works. The idea is to promote library resources as an effective way to get the background information you need to really understand current affairs.

  14. Authority file lookup hoping to find "see", "see also" terms (more below)

  15. Corpus based spelling suggestions (more below)

  16. Full retrieval of metadata from Amazon for titles with an ISBN, showing cover art, price, availability (new and used), customer & editorial reviews, ListMania!, similar titles, reading age...

  17. "Buy it" link to isbn.nu on the title detail page for titles with an ISBN links to isbn.nu.

  18. "Borrow it" link to netBooks (netBooks is a completely imaginary service...)

  19. "Tag it" link to add searcher-supplied tags (currently disabled, sorry).

  20. CQL search

    A CQL search function is gradually being implemented.

  21. Google Books lookup

    We use the Google Book Search Book Viewability API to test whether the title is available in full or partial view at Google Books, or whether Google Books has metadata (including reviews, citations, holdings) for the book. Search results and single book displays are annotated accordingly. The long-term goal is to rank full and partial views higher in search results, particularly when the Online checkbox is selected.

    We pass the first ISBN, or if none, the LCCN (tag 010) to the GBSAPI. With some luck, searchers may be able to find lots of useful text online.
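
    A sketch of the lookup; the request shape (jscmd=viewapi with a bibkeys list, returning JSONP in which each entry's "preview" property is "full", "partial" or "noview") matches the Viewability API of the time, which has since been superseded:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLEncoder;

        class GbsViewability {
            // Query viewability for one bib key, e.g. "ISBN:1558605703" or "LCCN:...".
            static String viewability(String bibkey) throws Exception {
                URL url = new URL("http://books.google.com/books?jscmd=viewapi&bibkeys="
                        + URLEncoder.encode(bibkey, "UTF-8") + "&callback=cb");
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(url.openStream(), "UTF-8"));
                StringBuilder response = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) response.append(line).append('\n');
                in.close();
                return response.toString();   // JSONP to be parsed by the caller
            }
        }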

  22. LibraryThing JSON API

    We use the LibraryThing JSON API to retrieve holdings, number of reviews and ratings from LibraryThing. This API was deliberately designed to be very similar to the Google Book Viewability API, and we use it in much the same way.

More details - Basic Authority file lookup

A dump of the LA authority file was loaded. The file contained 1,750,475 MARC authority records, of which 521,668 were selected as being "of interest" because they were more than just an authority file entry: they had at least one "see also" or "see" reference, or at least one note. These 521,668 records were used to create 928,848 Lucene documents in a new index by "splitting" each "see also" or "see" reference into its own document and duplicating any notes. The process took 40 minutes of elapsed time using one CPU on handford (parse, build, index, optimise). The index is small, only 172 MB.
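
A sketch of the splitting step; the "from"/"to"/"notes" field names are assumptions suggested by the usage described below:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    class AuthoritySplitting {
        // One Lucene document per "see"/"see also" reference, with any notes
        // duplicated onto each document.
        static List<Document> split(String heading, List<String> references, String notes) {
            List<Document> docs = new ArrayList<Document>();
            for (String ref : references) {
                Document doc = new Document();
                doc.add(new Field("from", heading, Field.Store.YES, Field.Index.TOKENIZED));
                doc.add(new Field("to", ref, Field.Store.YES, Field.Index.TOKENIZED));
                if (notes != null) {
                    doc.add(new Field("notes", notes, Field.Store.YES, Field.Index.TOKENIZED));
                }
                docs.add(doc);
            }
            return docs;
        }
    }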

The usage is currently very simple - the user's "text box" terms are searched for in the index both as the "from" and the "to" heading. Eg, cuban missile crisis.

More details - Basic spelling suggestions

The corpus-based spelling suggestions were replaced on 25 Sep 08 with spelling suggestions from the Yahoo BOSS API.

A spelling dictionary was created using all the keywords (all text appearing in data tags) from the entire NBD by reading all the terms in the "keywords" index and storing, for each term, its n-grams, a phonetic representation and its frequency.

The "n gram" code was based on the Nicolas Maisonneuve and David Spencer Lucene-contribution code and I added the frequency boost and phonetic key to experiment with faster and better checking.

Terms containing digits or with a frequency of one were dropped, leaving almost 2.1 million terms which were built into a database of about 422 MB.

Each word in the user-supplied search term is checked against the top 80 (highest ranked) matches from a Lucene search on, typically, the 3-gram and 4-gram components of the word (boosting the first 3-gram and final n-grams) and the phonetic key. The checking involves calculating the Levenshtein distance between each entered word and each replacement candidate, multiplying the (inverse) Levenshtein score by the square root of the Lucene score, and taking the single best result.
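
A sketch of the candidate scoring; normalising the "inverse" Levenshtein score by the longer word's length is an assumption about the exact formula:

    class SpellScoring {
        // Combined score for one replacement candidate: inverse edit distance
        // multiplied by the square root of the Lucene n-gram score.
        static double score(String entered, String candidate, float luceneScore) {
            int d = levenshtein(entered, candidate);
            double inverse = 1.0 - (double) d / Math.max(entered.length(), candidate.length());
            return inverse * Math.sqrt(luceneScore);
        }

        static int levenshtein(String a, String b) {
            int[][] m = new int[a.length() + 1][b.length() + 1];
            for (int i = 0; i <= a.length(); i++) m[i][0] = i;
            for (int j = 0; j <= b.length(); j++) m[0][j] = j;
            for (int i = 1; i <= a.length(); i++) {
                for (int j = 1; j <= b.length(); j++) {
                    int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                    m[i][j] = Math.min(Math.min(m[i - 1][j] + 1, m[i][j - 1] + 1),
                            m[i - 1][j - 1] + cost);
                }
            }
            return m[a.length()][b.length()];
        }
    }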

Although this approach is fast and takes into account database term frequency, phonetic similarity and likely typo/misspelling errors, it fails to weight candidates based on their probability of appearing with the other supplied search terms (which I think we really should do!).

This works: cronula, thatcher reagon; but this is unsatisfactory: egyptian fairos, because candidates aren't selected based on all the user-supplied words (or the "error" word in context).

17Jun2008 - exposed the spelling suggestor as a service

More details - Linking to Libraries Australia from Amazon

If you use Firefox, then

  1. install greasemonkey
  2. restart Firefox
  3. install the Libraries Australia greasemonkey script by clicking here and then clicking "Install" on the page that appears
  4. navigate to Amazon (eg, http://www.amazon.com/gp/product/1558605703 )
and after a few seconds you'll see a link to Libraries Australia under the title.

This technique was developed by Carrick Mundell and has been extended and used by many others as a demonstration of taking library catalogues "into the users' space".

Reports and presentations

Some index scans

Most Recent Changes

  • 25 Sep 08 - replace corpus based spelling suggestions with those from the Yahoo BOSS API
  • 23 Sep 08 - use Google Book Search Embedded Viewer, eg http://ll01.nla.gov.au/show.jsp?rid=000014629180
  • 26 Mar 08 - use LibraryThing JSON API to augment details on search results
  • 18 Mar 08 - Google Book Search Book Viewability API used to annotate search results and single title displays; database reloaded with current data from the NBD. The FRBR database has not been recalculated yet.
  • 30 Nov 06 - added auto search discovery file for Firefox v2

    To use this service as a search destination in Firefox v2:

    1. On this and most other pages on this site, the Firefox v2 search box (normally to the right of the URL at the top of the browser) might glow a bit, or at least look a bit excited about something.
    2. Click the drop down box to the left of the search box - it probably has a big "G" in it for "Google".
    3. At the end of the list should be the item: "Add NLA Library Labs"; click it to add NLA Library Labs to your list of search engines.
    4. To use, just click the drop down box again and click "NBD Library Labs" - it's now your Firefox 2 search engine (don't worry - it's easy to change back to Google!)
    5. Type "half a chook" in the Firefox2 F search box, and you'll get a Library Labs search result in the document window. Because the search results are returned in openSearch/RSS format, Firefox 2 (and IE7) treats it as a feed to which you can subscribe; So, we could have a new books feed, "new soccer books" feed, etc etc.
    6. Don't forget to change your Firefox 2 search engine back to Google! (just click the dropdown next to the search box...)

  • 12 Nov 06 - added Conspectus grouping derived from DDC
  • 18 Oct 06 - started adding CQL support
  • 05 Oct 06 - added index scans
  • 04 Oct 06 - new improved FRBR-like display which attempts to create an FRBR-like structure on loose notions of a super-work, work, expression, manifestation and item. This view is not yet searchable: its purpose at the moment is just to display a given title in some sort of FRBR context and help us understand how the grouping algorithm is performing.
  • 29 Sep 06 - add index with title and place of publication for serials/newspapers etc ("continuing resources") to address the common (?) requirement to search for a magazine/newspaper by entering its name and place of publication, eg: People Sydney.
  • 29 Sep 06 - remove the blood from the banner image
  • 28 Sep 06 - change the "form" cluster to a smallish set of broad forms. All the detailed "form"s have been left in the "genre" dumpster.
  • 22 Sep 06 - the index used for exact match titles has been changed so that for a title with filing characters (eg, 245$a with indicator 2 > 0) both versions of the title (with and without the filing characters) are tested for an exact match. This helps when searchers omit a leading "the", "an" etc (eg, Canberra Times).
  • 22 Sep 06 - serials have been boosted on the basis that they are "containers" and hence of more interest than monographs etc. Unlike "collections" which received a static 10 points boost, serials have their native document boost (based on holdings) increased by 50%.
  • 21 Sep 06 - "Do you want to search for works by {author}?" added to the top of the results if the top-clustered author was responsible for at least an estimated 100 works or 10% of the search results. I suspect this is a kludge to paper over our poor presentation of clustering results: we can't move everything up to the top! But it helps when people want to find works by an author, rather than works about an author.
  • 21 Sep 06 - more... added to the news items. News results still aren't great, and I really think manual tweaking of news results would be a good investment in time by a knowledgeable librarian collective.
  • 21 Sep 06 - added paging through search results for those finding 100 results just not enough...

Credits

Big Ideas: Alison Dellit, Alexander Johannesen, Joanna Meakins, Judith Pearce
Encouragement, Patronage, Tolerance and Forbearance: Monica Berko, Warwick Cathro, Mark Corbould, Judith Pearce, AustLit
Documentation: Alison Dellit
Relevance Ranking tweaking and testing: Kate Davis, Alison Dellit, Paul Livingston, Suzanne Morris, Judith Pearce, Belinda Tiffen
Cataloguing-based suggestions and insight: Sandra Henderson, Deirdre Kiorgaard, Judith Pearce
Feedback, help and suggestions: Anne Beaumont, Christine Fernon, Jenny Warren, Julie Whiting
Conspectus suggestion: Warwick Cathro
Data supply: Tony Boston, Simon Jacob
Benchmarking Assistance: Steven McPhillips
Hardware and Operating System: Steven McPhillips, Mark Triggs
Banner: Alexander Johannesen, Ben Warren
Design: Position vacant


Kent Fitch, August 2006

Version 0.03 Prototype service with stale data; this is not a production service.

Uses the Lucene library.