This system is a very simple demonstration of searching MARC bibliographic records using Lucene for storage and indexing. The database being searched is a copy of the Australian National Bibliographic Database (ANBD) as at March 2008. It contains 16 million bibliographic records with holdings information for Australian libraries. The demonstrator extracts topics and relationships from records retrieved from a simple full text search to present search results:
Record details are shown augmented with:
The user's libraries can be used to boost rankings, and "online" resources can also be selected (although the current definition of what is an "online" resource is too broad to be very useful).
The same data (a few months more up to date) is also accessible in a public form through Libraries Australia. Please note that this system is a demonstrator of ideas, not a statement of direction by the National Library of Australia for the Libraries Australia service. The user interface has been left undesigned because we want feedback on the basic ideas rather than graphic design.
As shown in the Rethinking the catalogue presentation at the NLA Innovative Ideas Forum, here's a static demo of the integration of library metadata search and remote full text source search.
This system has been implemented using Lucene as the storage and indexing mechanism. Four Lucene databases have been created:
The program which reads the raw MARC records and constructs the 2 large databases runs for about 16 hours of elapsed time to produce the final, merged versions of these databases (on a 4 x 3GHz Xeon box with 8GB of memory). The other databases require relatively insignificant elapsed time to generate.
MARC records are heavily processed during the indexing phase to extract field/sub-field contents into Lucene indices. Parts of fields are stored with different Lucene field boosts (for example, the main title, 245$a, is heavily boosted, whereas subtitle, alternative title, series title, added entry title etc are also indexed as "title", but with differing and lower boosts).
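As a rough illustration, here's a minimal sketch (using the Lucene API of that era) of indexing several title parts with different index-time boosts; the field names and boost values are assumptions, not the prototype's actual configuration:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class TitleIndexingSketch {

        // Field names and boost values are illustrative assumptions.
        static Document titleFields(String mainTitle, String subTitle, String addedTitle) {
            Document doc = new Document();

            // 245$a: main title, heavily boosted
            Field f = new Field("title", mainTitle, Field.Store.YES, Field.Index.TOKENIZED);
            f.setBoost(5.0f);
            doc.add(f);

            // 245$b: subtitle, same "title" index, lower boost
            f = new Field("title", subTitle, Field.Store.YES, Field.Index.TOKENIZED);
            f.setBoost(2.0f);
            doc.add(f);

            // 740$a: added entry title, lower boost again
            f = new Field("title", addedTitle, Field.Store.NO, Field.Index.TOKENIZED);
            f.setBoost(1.2f);
            doc.add(f);

            return doc;
        }
    }

(Lucene multiplies the boosts of same-named fields into one norm per document, so the real implementation may also spread title parts across separate field names and rely on query-time boosts, as sketched below.)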
At query time, a very large query is constructed, searching many separate indices with separate boosts for exact matches, "near" matches and keyword anywhere clauses.
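A sketch of the general shape of such a query, again with assumed field names, boosts and slop values:

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.PhraseQuery;
    import org.apache.lucene.search.TermQuery;

    public class QueryShapeSketch {

        // Field names, boosts and slop are illustrative assumptions.
        static BooleanQuery build(String[] words) {
            BooleanQuery q = new BooleanQuery();

            // Exact match on a normalised, untokenised title index: highest boost
            TermQuery exact = new TermQuery(new Term("title-exact", join(words)));
            exact.setBoost(10.0f);
            q.add(exact, BooleanClause.Occur.SHOULD);

            // "Near" match: all the words in the title within a small window
            PhraseQuery near = new PhraseQuery();
            for (String w : words) {
                near.add(new Term("title", w));
            }
            near.setSlop(3);
            near.setBoost(4.0f);
            q.add(near, BooleanClause.Occur.SHOULD);

            // Keyword-anywhere clauses: lowest boost, one per word
            for (String w : words) {
                TermQuery any = new TermQuery(new Term("keywords", w));
                any.setBoost(1.0f);
                q.add(any, BooleanClause.Occur.SHOULD);
            }
            return q;
        }

        static String join(String[] words) {
            StringBuilder sb = new StringBuilder();
            for (String w : words) {
                if (sb.length() > 0) sb.append(' ');
                sb.append(w);
            }
            return sb.toString();
        }
    }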
The first 700 records (the most relevant 700) are then retrieved and specific field values accumulated to find clusters (date, subject, classification, etc), and for the top clusters, the database population is found, enabling the cluster presentation on the right hand side of the results page. As well, spell checking and subject authority matching is performed.
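A minimal sketch of the clustering pass over the top 700 hits, assuming a stored "subject" field; the real code accumulates several fields (date, subject, classification, etc) and also finds the whole-database population for the top clusters:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TopDocs;

    public class ClusterSketch {

        // Count subject values across the most relevant 700 hits.
        static Map<String, Integer> countSubjects(IndexSearcher searcher, Query query)
                throws Exception {
            TopDocs top = searcher.search(query, null, 700);
            Map<String, Integer> counts = new HashMap<String, Integer>();
            for (int i = 0; i < top.scoreDocs.length; i++) {
                Document doc = searcher.doc(top.scoreDocs[i].doc);
                String[] subjects = doc.getValues("subject");
                if (subjects == null) continue;
                for (int j = 0; j < subjects.length; j++) {
                    Integer n = counts.get(subjects[j]);
                    counts.put(subjects[j], n == null ? 1 : n + 1);
                }
            }
            return counts;   // sort by count to pick the top clusters for display
        }
    }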
This approach was originally tried using Oracle, and it worked fine up to about 200,000 records (on a dual-core 3GHz Pentium 4 with 2GB) before becoming very slow. The Lucene implementation seems much more scalable (we've benchmarked 5 queries/second on 16 million records as attainable).
The demonstrator is a very simple JSP implementation, but the code betrays the layers of experimentation required to achieve the current functionality and is fit for no purpose other than mining for ideas.
The idea is to make the coded data searchable and hence contribute to ranking and clustering by converting the codes to words.
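For example (the specific mappings below are illustrative; the real tables come from the MARC coded-data definitions), 008 language and target-audience codes can be expanded into indexable words:

    import java.util.HashMap;
    import java.util.Map;

    public class CodedDataSketch {

        // Illustrative code-to-word tables; the prototype's tables cover many more codes.
        private static final Map<String, String> LANGUAGE = new HashMap<String, String>();
        private static final Map<String, String> AUDIENCE = new HashMap<String, String>();
        static {
            LANGUAGE.put("eng", "English");
            LANGUAGE.put("fre", "French");
            AUDIENCE.put("j", "juvenile");
            AUDIENCE.put("a", "preschool");
        }

        // Turn 008 fixed-field codes into words that can be indexed,
        // searched, and used for ranking and clustering.
        static String wordsFor(String field008) {
            StringBuilder words = new StringBuilder();
            String lang = LANGUAGE.get(field008.substring(35, 38));   // positions 35-37
            if (lang != null) words.append(lang).append(' ');
            String audience = AUDIENCE.get(field008.substring(22, 23)); // position 22 (books)
            if (audience != null) words.append(audience);
            return words.toString().trim();
        }
    }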
Search term specific:
Item characteristics (search term non specific):
The search result UI currently shows both views (selectable by a kludgey button/tab). The problem with hierarchies is fragmentation of areas of interest (e.g., search for computer art and then decide you want to narrow to "exhibitions") and inconsistent placement of terms within, and structures of, hierarchies (e.g., search for florida). The problem with facets is that sometimes more context is required to make sense of a single facet (e.g., the "to 640 a d" facet shown if you search for ancient egypt).
Maybe the OCLC FAST approach would help.
The initial date clusters vary in "width" (number of years), with more recent widths being narrower (fewer years). Clicking on a date range results in date clustering by year.
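A sketch of the kind of variable-width bucketing this implies (the boundaries shown are assumptions, not the prototype's actual ranges):

    public class DateClusterSketch {

        // Recent ranges are narrower, older ranges progressively wider.
        static String bucketFor(int year) {
            if (year >= 2005) return "2005-2008";
            if (year >= 2000) return "2000-2004";
            if (year >= 1990) return "1990-1999";
            if (year >= 1950) return "1950-1989";
            if (year >= 1900) return "1900-1949";
            return "before 1900";
        }
    }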
Conspectus (not quite dead yet) is another way of grouping bibliographic resources. Warwick Cathro suggested applying some mappings from DDC to Conspectus and from LC to Conspectus as an experiment. (As at 12 Nov 06, only the DDC mappings have been applied.)
A prototype advanced search is available.
Searchers can nominate their libraries which are remembered using a cookie. Results from their libraries are annotated and can be optionally used to boost ranking in search results. Holdings in their libraries are emphasised in the detailed title display. "Deep Linking" into some OPACs has been implemented.
Searchers can elect to boost the ranking of "online" results. However, the identification of true online results is currently rather hit and miss, as "online" covers everything from a web page with an LC table of contents to a fully online version of the work.
The current Libraries Australia database contains many "duplicates": records not merged due to subtle differences in metadata which are often inconsequential or errors. Many people also think it would be a good idea to combine various editions of works in the search results interface, although how far this combining should go is debatable. Should it be the equivalent of an FRBR work, or of an FRBR expression? Should it include works across languages and material types?
The first approach taken by this prototype was to use an adaptation of the OCLC FRBR Work-Set Algorithm to group MARC records with a matching (normalised) author/title/material type/language.
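A minimal sketch of such a grouping key, assuming a much simpler normalisation than the real OCLC algorithm (which handles stopwords, 7xx added entries and many edge cases):

    import java.util.Locale;

    public class WorkSetKeySketch {

        // Records with the same normalised author/title/material type/language
        // key fall into the same group.
        static String groupKey(String author, String title, String materialType, String language) {
            return normalise(author) + "/" + normalise(title) + "/"
                    + materialType.toLowerCase(Locale.ENGLISH) + "/"
                    + language.toLowerCase(Locale.ENGLISH);
        }

        // Lower-case, strip punctuation, collapse whitespace.
        static String normalise(String s) {
            return s.toLowerCase(Locale.ENGLISH)
                    .replaceAll("[^a-z0-9 ]", " ")
                    .replaceAll("\\s+", " ")
                    .trim();
        }
    }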
This has since been refined to group into the following "layers":
The OCLC workset algorithm contains approaches for bringing together groups based on extensive 7xx author added entry processing, but we've taken a simple approach which still needs work! However, we have fruitfully grouped on exact ISBN and title matches when no author is available.
What we're trying to achieve is a set of groupings most likely to be useful to a searcher wanting to find a resource. The searcher probably has very strong preferences for the form and language of the resource they're seeking, which is why they're our top two layers/groupings. After that, they may have a preference for a particular edition or, less likely but possibly, even a particular manifestation (publisher, publication year, place of publication).
Of course, they don't actually care about the bibliographic record; they want to get their hands on the resource, so we have to think about how they can easily tell the system to:
All records grouped into an FRBR-like structure display a "This title can be viewed as part of an experimental FRBR group" hyperlink near the top of their detail display page. The resulting display is a long way from what we want it to look like but it shows the results of our clustering. Here are some examples of the FRBR-like displays:
Grouping statistics (05Oct06):
That is, 4.9 million (about 30%) of the ~16 million MARC records in the LA extract used by the NBD prototype were grouped.
We also query the OCLC xISBN service when displaying a title, list the associated ISBNs and allow this group to be searched.
We're using a news feed to automatically select some current news stories and (hopefully) relevant works. The idea is to promote library resources as an effective way to get the background information you need to really understand current affairs.
Authority file lookup hoping to find "see", "see also" terms (more below)
Corpus based spelling suggestions (more below)
Full retrieval of metadata from Amazon for titles with an ISBN, showing cover art, price, availability (new and used), customer & editorial reviews, ListMania!, similar titles, reading age...
"Buy it" link to isbn.nu on the title detail page for titles with an ISBN links to isbn.nu.
"Borrow it" link to netBooks (netBooks is a completely imaginary service...)
"Tag it" link to add searcher-supplied tags (currently disabled, sorry).
A CQL search function is gradually being implemented.
Google Books lookup
We use the Google Book Search Book Viewability API to test whether the title is available in full or partial view at Google Books, or whether Google Books have metadata (including reviews, citations, holdings) for the book. Search results and single book displays are annotated accordingly. The long term goal is to prefer full and partial views higher in search results, particularly when the Online checkbox is selected.
We pass the first ISBN, or if none, the LCCN (tag 010) to the GBSAPI. With some luck, searchers may be able to find lots of useful text online.
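A sketch of such a lookup, treating the API response as plain text rather than parsing the JSON; the request parameters and the "preview" values follow the Book Viewability API documentation of the time, but the rest (method name, return values) is illustrative:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class ViewabilitySketch {

        // Returns "full", "partial", "metadata" or "none" for an ISBN.
        static String viewability(String isbn) throws Exception {
            URL url = new URL("http://books.google.com/books?jscmd=viewapi"
                    + "&bibkeys=ISBN:" + isbn + "&callback=gbs");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), "UTF-8"));
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) body.append(line);
            in.close();
            String text = body.toString();
            if (text.indexOf("\"preview\":\"full\"") >= 0) return "full";
            if (text.indexOf("\"preview\":\"partial\"") >= 0) return "partial";
            if (text.indexOf(isbn) >= 0) return "metadata";   // Google Books knows the book
            return "none";
        }
    }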
LibraryThing JSON API
We use the LibraryThing JSON API to retrieve holdings, number of reviews and ratings from LibraryThing. This API was deliberately designed to be very similar to the Google Book Viewability API, and we use it in much the same way.
A dump of the LA authority file was loaded. The file contained 1,750,475 MARC authority records, of which 521,668 were selected as being "of interest" because they were more than just an authority file entry: they had at least one "see also" or "see" reference or at least one note. These 521,668 records were used to create 928,848 Lucene documents in a new index by "splitting" each "see also" or "see" reference into its own document and duplicating any notes. The process took 40 minutes elapsed using one CPU on handford (parse, build, index, optimise). The index is small, only 172MB.
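A sketch of that splitting step, with assumed field names; one Lucene document is created per reference, and any notes are duplicated onto each document:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class AuthoritySplitSketch {

        // Add one document per "see"/"see also" reference.
        static void addReference(IndexWriter writer, String fromHeading,
                                 String toHeading, String type, String notes)
                throws Exception {
            Document doc = new Document();
            doc.add(new Field("from", fromHeading, Field.Store.YES, Field.Index.TOKENIZED));
            doc.add(new Field("to", toHeading, Field.Store.YES, Field.Index.TOKENIZED));
            doc.add(new Field("type", type, Field.Store.YES, Field.Index.UN_TOKENIZED)); // "see" or "seeAlso"
            if (notes != null) {
                doc.add(new Field("notes", notes, Field.Store.YES, Field.Index.TOKENIZED));
            }
            writer.addDocument(doc);
        }
    }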
The usage is currently very simple: the user's "text box" terms are searched for in the index as both the "from" and the "to" heading (e.g., cuban missile crisis).
A spelling dictionary was created using all the keywords (all text appearing in data tags) from the entire NBD by reading all the terms in the "keywords" index and storing:
The "n gram" code was based on the Nicolas Maisonneuve and David Spencer Lucene-contribution code and I added the frequency boost and phonetic key to experiment with faster and better checking.
Terms containing digits or with a frequency of one were dropped, leaving almost 2.1 million terms which were built into a database of about 422 MB.
Each word in the user supplied search term is checked against the top 80 (highest ranked) matches from a Lucene search on, typically, 3-gram and 4-gram components of the word (boosting the first 3-gram and final n-grams) and the phonetic key. The checking involves calculating the Levenshtein distance between each entered word and each replacement candidate and multiplying the (inverse) Levenshtein score by the square root of the Lucene score and taking the single best result.
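A sketch of that candidate scoring, taking "inverse Levenshtein score" to mean 1/(1 + distance); the n-gram/phonetic candidate retrieval and frequency boost described above are assumed to have already produced the Lucene score:

    public class SuggestionScoreSketch {

        // Combined score: inverse edit distance times the square root of the
        // Lucene (n-gram + phonetic) score; the best-scoring candidate wins.
        static double candidateScore(String entered, String candidate, float luceneScore) {
            int distance = levenshtein(entered, candidate);
            return (1.0 / (1 + distance)) * Math.sqrt(luceneScore);
        }

        // Standard dynamic-programming Levenshtein edit distance.
        static int levenshtein(String a, String b) {
            int[][] d = new int[a.length() + 1][b.length() + 1];
            for (int i = 0; i <= a.length(); i++) d[i][0] = i;
            for (int j = 0; j <= b.length(); j++) d[0][j] = j;
            for (int i = 1; i <= a.length(); i++) {
                for (int j = 1; j <= b.length(); j++) {
                    int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                    d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                       d[i - 1][j - 1] + cost);
                }
            }
            return d[a.length()][b.length()];
        }
    }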
Although this approach is fast and takes into account database term frequency, phonetic similarity and likely typo/misspelling errors, it fails to weight candidates based on their probability of appearing with the other supplied search terms (which I think we really should do!).
17Jun2008 - exposed the spelling suggestor as a service
If you use Firefox, then
This technique was developed by Carrick Mundell and has been extended and used by many others as a demonstration of taking library catalogues "into the users' space".
To use this service as a search destination in Firefox v2:
Big Ideas: Alison Dellit, Alexander Johannesen, Joanna Meakins, Judith Pearce
Encouragement, Patronage, Tolerance and Forbearance: Monica Berko, Warwick Cathro, Mark Corbould, Judith Pearce, AustLit
Documentation: Alison Dellit
Relevance Ranking tweaking and testing: Kate Davis, Alison Dellit, Paul Livingston, Suzanne Morris, Judith Pearce, Belinda Tiffen
Cataloguing-based suggestions and insight: Sandra Henderson, Deirdre Kiorgaard, Judith Pearce
Feedback, help and suggestions: Anne Beaumont, Christine Fernon, Jenny Warren, Julie Whiting
Conspectus suggestion: Warwick Cathro
Data supply: Tony Boston, Simon Jacob
Benchmarking Assistance: Steven McPhillips
Hardware and Operating System: Steven McPhillips, Mark Triggs, Alexander Johannesen, Ben Warren
Design: Position vacant
Kent Fitch, August 2006
Version 0.03. Prototype service with stale data; this is not a production service. Uses the Lucene library.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 2.1 Australia License.