Competency 5

“Design, query, and evaluate information retrieval systems”

Introduction

Librarians and information professionals depend daily on information retrieval (IR) systems, so much so that one might call them the backbone of our profession. We have long been at the forefront of building and using digital computing services to enhance our profession and improve its ability to serve the public and private good. We use information systems such as online public access catalogs (OPACs) in libraries, archives, museums, and many other settings where cabinets of index cards once marked the peak of organizational technology. Today, instead of thumbing through a card catalog, librarians, information professionals, and users alike can search for collection items using IR systems.

The creation of a useful IR system is a complex matter. The nature of comparable systems, the makeup of the collection, and knowledge of the intended user group are among the major design considerations that professionals must weigh.

In one of my coding classes (LIBR 240) my teacher imparted something that really stuck with me: when designing a new web page, why start from scratch? Why not draw from your old code and adapt it to the new page? This concept also applies to the design of IR systems. It is a best practice for an academic library creating an open-access repository to look critically at a similar IR system that a fellow academic library is using for the same purpose. While not every aspect of the evaluated system may be relevant, this practice encourages interoperability and intuitive design standards, and it saves time and reduces needless errors.

The nature of the collection’s media and content also determines the design of its IR system. A search interface made for a collection of archived photos from the 1900s requires a system that functions somewhat differently than one which searches a collection of recent popular music. Every collection’s IR system is only as good as its metadata schema, which must reflect the collection’s contents accurately and efficiently. A long-term concern of IR designers is the homogeneity of the collection over time: will it acquire new and varied types of content, or will its media formats remain much the same? Serious IR engineers must take collection growth into account.

Different user groups use and relate to information systems in different ways. A patron searching for a book on dinosaurs will use her IR system in a totally different way than an artist trying to find a particular painting. Users have expectations of how their IR system should work, how its interface ought to look, and how it must perform. When their expectations aren’t met, they will not use it as much, or at all, and the system must be considered flawed and in need of a redesign. This is why user testing is so important when designing an IR system, even for simple web page interfaces.

User tests usually focus on first-time users, to make sure that the widest possible audience finds the system approachable, usable, and functional. Creators of IR systems sometimes conduct user tests even before the final, live version of the system is ready, creating mock-ups of how the system will function and designing a series of tasks for users to attempt. For example, a user might search for the term “Shakespeare” by interacting with the mock-ups or the developing system, and the tester will record the results at various stages. This helps the creators tweak or adjust the design (e.g., make an often-missed button bigger or a different color) and then test again, fine-tuning the system before release. Even after its official launch, creators of IR systems often conduct tests to determine how well the system is performing and meeting user needs.

Evaluation is done by the IR system designers themselves, sometimes with the assistance of experienced, high-level users and/or new users. These two levels of experience describe IR users well, because an IR system typically has two modes of functioning: a simple, baseline mode in which the most rudimentary searches will execute successfully, even when performed by first-time users; and a more advanced tier of search operations utilizing selection options such as Boolean operators and narrowing by date, document type, author, or language. Wikipedia offers a fine baseline mode: a new user can successfully search for a term and read the results, yet a more experienced user can navigate to a page on a rare goldfish, find a factual error or typo, and even correct the text of the entry.
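
The contrast between a baseline search and an advanced, fielded search can be sketched in a few lines of Python. This is only an illustration: the records, field names, and filters below are hypothetical and not drawn from any real catalog.

    # A sketch of baseline vs. advanced (fielded) searching over made-up records.
    records = [
        {"title": "Goldfish of the World", "author": "Lee", "year": 2001, "type": "book"},
        {"title": "Rare Goldfish Varieties", "author": "Osman", "year": 2015, "type": "article"},
        {"title": "Koi and Goldfish Care", "author": "Lee", "year": 1998, "type": "book"},
    ]

    def baseline_search(keyword):
        """Simple mode: match a keyword anywhere in the title."""
        return [r["title"] for r in records if keyword.lower() in r["title"].lower()]

    def advanced_search(keyword, author=None, after_year=None, doc_type=None):
        """Advanced mode: layer optional field filters on top of the keyword match."""
        hits = [r for r in records if keyword.lower() in r["title"].lower()]
        if author:
            hits = [r for r in hits if r["author"] == author]
        if after_year:
            hits = [r for r in hits if r["year"] >= after_year]
        if doc_type:
            hits = [r for r in hits if r["type"] == doc_type]
        return [r["title"] for r in hits]

    print(baseline_search("goldfish"))                                 # all three titles
    print(advanced_search("goldfish", author="Lee", after_year=2000))  # ['Goldfish of the World']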

A ‘query,’ in database language, is the technical term for a controlled search request: a request to an IR system’s database to return matches to specified search terms. Queries are typically keyword based, with the best-known example, Google, relying mainly on keyword matching to rank its returned results. But this isn’t to say that you’ll always find what you want when you search. Some IR systems can’t handle misspellings or symbols, such as “&”, that confuse the underlying code. And while some IR systems support multiple types of queries, such as searching by a certain field like ‘author’ or ‘date,’ new users may not have the practice with the system to fully utilize these features.
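
To illustrate the basic idea of keyword matching (and not how Google actually ranks results), here is a minimal Python sketch. The tiny collection and the simple match-counting score are assumptions made purely for demonstration.

    # A minimal sketch of keyword-based querying over a tiny, made-up collection.
    # Each record is scored by how many of the query's keywords appear in its fields;
    # real IR systems use far more sophisticated ranking.
    records = [
        {"title": "Hamlet", "author": "Shakespeare", "text": "Hamlet, Prince of Denmark"},
        {"title": "Dinosaurs!", "author": "Smith", "text": "An illustrated guide to dinosaurs"},
        {"title": "Sonnets", "author": "Shakespeare", "text": "The collected sonnets of Shakespeare"},
    ]

    def keyword_search(query, records):
        keywords = query.lower().split()
        results = []
        for record in records:
            haystack = " ".join(record.values()).lower()
            score = sum(1 for kw in keywords if kw in haystack)
            if score > 0:
                results.append((score, record["title"]))
        # Records with the most keyword matches come first
        return [title for score, title in sorted(results, reverse=True)]

    print(keyword_search("Shakespeare sonnets", records))  # ['Sonnets', 'Hamlet']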

Often in the process of using IR systems, the first query does not display exactly what is expected. This is typically due to a misspelling, a term the IR system does not recognize, or the system using a different term altogether. For example, when searching for “heart attack” in a medical-focused IR system, the results may be scarce, likely because the system uses the term myocardial infarction instead. Controlled vocabularies, or having the IR system itself suggest alternative spellings, phrasings, or synonyms, can help resolve this issue. Yet regardless of input success, the first query is usually not the last, as many users rephrase and refine their search terms.
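
One simple way a system can bridge everyday phrasing and a controlled vocabulary is to expand the query with preferred terms before searching. The sketch below assumes a made-up, two-entry mapping; it is not an actual medical thesaurus.

    # A minimal sketch of query expansion with a (hypothetical) controlled vocabulary.
    controlled_vocabulary = {
        "heart attack": "myocardial infarction",
        "high blood pressure": "hypertension",
    }

    def expand_query(query):
        """Return the user's query plus any preferred term it maps to."""
        expanded = [query]
        preferred = controlled_vocabulary.get(query.lower())
        if preferred:
            expanded.append(preferred)
        return expanded

    print(expand_query("Heart Attack"))
    # ['Heart Attack', 'myocardial infarction']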

Experienced IR users take advantage of Boolean operators, a logical means of combining or filtering results using AND, OR, and NOT in addition to the search keywords. The AND operator narrows a search by requiring both terms: “a AND b” returns only records that contain both a and b. The OR operator broadens a search: “a OR b” returns records that contain either term, or both. The NOT operator allows users to exclude an element from their results: “a NOT b” yields records that contain a but excludes those that also contain b. Boolean operators obviously allow for greater control in querying, yet remain unknown to most new IR users.
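
Boolean querying maps neatly onto set operations. The short Python sketch below uses hypothetical record ID numbers to show what each operator returns.

    # Boolean operators expressed as set operations over (hypothetical) record IDs.
    records_with_a = {1, 2, 3, 4}   # records containing term a
    records_with_b = {3, 4, 5}      # records containing term b

    print(records_with_a & records_with_b)  # a AND b -> {3, 4}
    print(records_with_a | records_with_b)  # a OR b  -> {1, 2, 3, 4, 5}
    print(records_with_a - records_with_b)  # a NOT b -> {1, 2}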

Many IR systems use querying, from search engines to library OPACs and social networking sites; relational databases queried with languages like SQL exist to perform rapid and flexible searches. The whole point of an IR system is finding the information you wish to retrieve. While a perfectly good IR system will provide “accurate enough” results to queries, a better design will consider search precision and recall. Precision is the proportion of retrieved records that are actually relevant to the query; recall is the proportion of all relevant records in the collection that were retrieved. High precision but low recall means that the records retrieved all meet the search criteria, but many relevant records are missed. Low precision but high recall means that most of the relevant records are returned, but so are many records that do not fulfill the criteria. A delicate balance between these two aspects of IR querying provides users with correct and useful search results.
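
As a concrete illustration, precision and recall can be computed from the sets of retrieved and relevant records. The record IDs below are hypothetical and chosen only to make the arithmetic visible.

    # Precision and recall computed over (hypothetical) sets of record IDs.
    retrieved = {1, 2, 3, 4, 5, 6}   # what the system returned
    relevant = {2, 4, 6, 8, 10}      # what the user actually needed

    true_positives = retrieved & relevant              # relevant records that were retrieved
    precision = len(true_positives) / len(retrieved)   # 3 / 6 = 0.5
    recall = len(true_positives) / len(relevant)       # 3 / 5 = 0.6

    print(f"precision = {precision:.2f}, recall = {recall:.2f}")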

Evidence

My first piece of evidence is my final project from LIBR 251 Web Usability. This assignment asked me to create a five-page report in which I chose a computer interaction problem to solve, defined the user population, formulated a design solution, created a prototype, and implemented a user test. Based on this test I then redesigned the prototype and implemented a second user test. My interaction problem was how to improve the ease of scrolling through long documents and finding information within them. My solution involved highlighting particular sections of the scroll bar with user-created markers. The first prototype was created on paper, while the second was created in MS Paint and interacted with on a computer. I included screenshots of this assignment, as it clearly demonstrates my understanding of the design and evaluation of IR systems.

My second piece of evidence is Assignment #1 from LIBR 242 Database Management. This assignment tasked me with examining a database system and analyzing its design, its functions, its search capacities, and what sort of underlying database information yields the search results. I selected the website Expedia and discussed the different sorts of queries that are possible in its IR system. I identified useful information, such as the unique flight number. I included screenshots because this assignment also demonstrates my understanding of IR system design, query, and evaluation.

My third piece of evidence is Assignment #3 from LIBR 247 Vocabulary Design. For this assignment I was asked to evaluate and compare two thesaurus-enhanced IR systems, one free and one commercial. I examined their query features, interface design, and retrieval processes, and evaluated their interfaces from an end-user perspective. I also provided suggestions for improvement, such as a frequently asked questions (FAQ) section. By performing searches, I was able to play the role of a user and evaluate the two thesaurus-enhanced IR systems.

Conclusion

The creation of an effective IR system must be iterative, as the design and evaluation stages are not static and recur throughout the process. The critical testing of a system design helps to further refine its interface and operations, resulting in a more usable product. One important concept to highlight is that not everyone uses IR systems in the same way, because people have different needs and think in very different ways. Not all IR systems will work equally well for all users, nor will each system design work well for all types of information. A useful IR system needs to be tailored to particular information, and to the users who will be retrieving that information.

Evidence 1

Evidence 2

Evidence 3