
Our most recent LAPIS session yesterday featured a guest lecture by Katharine Schopflin, a corporate information professional who has also conducted research into the roles of the encyclopaedia and reference librarianship. An encyclopaedia’s principal defining characteristics were identified in her research as accuracy, a lack of bias, being up-to-date, authoritativeness, providing subject coverage in sufficient and appropriate depth, and succinct writing. The encyclopaedia is essentially derived from the same principles of information organisation—such as controlled vocabularies and classification schemes—that we have encountered in other modules, and is historically rooted in the positivist school of philosophy.

The modern concept of the general reference encyclopaedia, with comprehensive A-to-Z subject coverage written by experts, with cross-references, indexes and other information-seeking tools, developed over hundreds of years, through “proto-encyclopaedias” such as Thomas Aquinas’s Summa Theologica and the later, alphabetical, Lexicon Technicum by John Harris. Many of these early encyclopaedias were compiled in the rationalist belief, typical of the Age of Enlightenment, that they represented an order of things (and therefore of knowledge about them) inherent in the universe; none more so than the famous French Encyclopédie, edited by Denis Diderot and Jean-Baptiste le Rond d’Alembert and first published in 1751. Not only did this establish the standard practice of including material contributed by a group of named experts in particular fields, but it was also explicitly based upon a “Figurative System of Human Knowledge” that was published in its first volume.

The tripartite classification of knowledge that underpinned the Encyclopédie. Like many other information organisation tools, it is based upon the earlier work of the polymath Francis Bacon.

The desire to create a universal, systematic body of knowledge in this way—which was also driven by the exponential increase of information being produced by human activity, and by growing awareness of the problem of “information overload” that this could cause—reached its apogee in the early twentieth century through the work of the Belgian bibliographer and proto-information scientist Paul Otlet and his collaborator, Henri La Fontaine. Amongst a multitude of projects for international co-operation and standardisation, particularly in the bibliographic and information fields, was the Universal Bibliographic Repertory: an enormous collection of catalogue cards intended to function, amongst other things, in much the same way as a colossal encyclopaedia. This form of information organisation had by now superseded the individual book, or the volumes of a printed encyclopaedia: Otlet recognised the need for the information contained within a book to be separated from the physical form of the book itself, as demonstrated in his own conception of human knowledge:

Paul Otlet’s conceptual model of how human knowledge is recorded. The universal catalogue transcends the limitations of individual books and other physical “carriers” of information.

This need for separation was later echoed by J.C.R. Licklider who, when authoring a report in the mid-1960s on the feasibility of a networked knowledge environment (i.e. the technology that developed into the current Internet), encountered the same problems:

[Licklider] wrote about ‘the difficulty of separating the information in books from the pages’—a problem that, he argued, constituted one of ‘the most serious shortcomings of our present system for interacting with the body of recorded knowledge’. What he would do was create a system of cataloguing information from a wide range of sources, extracting and indexing that information—and distributing it over a network.

(Quoted in Alex Wright’s book, Cataloging the World: Paul Otlet and the birth of the Information Age, p.250.)

Fifty or so years later, the technology to create a networked, global encyclopaedia that can transcend the limits of its printed cousins has existed for over two decades (although the fully-indexed Semantic Web remains a distant dream for now), and the transition of the traditional general reference encyclopaedias into digital forms—first on CD-ROM and now on the Internet with subscription access—has been the key development in the genre’s publishing industry. However, this is not the most significant change in the encyclopaedia world, as the rise of Wikipedia—an online encyclopaedia that can be accessed and edited freely by anyone—has demonstrated since its foundation in 2001. At the time of writing, it comprises 4,752,420 articles, and it has been studied in some detail: its organisation, social environment and potential as a model for innovation and collaborative work have all been investigated, in addition to some famous studies into its reliability versus established subscription-based online encyclopaedias such as Britannica.

The advantages of Wikipedia—it is free, inclusive and collaborative, and able to cover, to a high standard, a wide range of topics that would not feature in a “conventional” encyclopaedia—have been readily apparent, and tend to outweigh its flaws, such as unevenness of coverage in certain disciplines; moreover, well-established practices exist to minimise the risks of malicious vandalism associated with a freely-editable encyclopaedia. Its success can be seen in the wide adoption of the wiki (the term derives from a Hawaiian word for “quick”) web application across the Internet, from relatively well-known websites such as WikiHow, to the many hundreds, if not thousands, of wikis on obscure and niche topics (such as the first PlayStation game I owned back in 1997!) hosted by Wikia. There is even a WikiIndex, which acts as a directory for all other wikis! In addition, I imagine that many of the readers of this blog have also referred to, or even edited, wikis owned by their employers and used as part of their knowledge management programmes.

The rise of Wikipedia, in conjunction with Google as the exemplar of search engines and the spread of information technology in general, has also revolutionised librarianship in practice and caused much debate in the wider LIS sector. As users’ information behaviour increasingly becomes a case of “Google plus Wikipedia”, and library catalogues move towards the model of search engine-inspired discovery tools that include resources available digitally and outside the physical space of the building (perhaps becoming a form of encyclopaedia in their own right in the process), is there any need for traditional information retrieval and reference librarian skills within the profession? I would answer with a resounding “yes!”. The very fact that a plethora of information is now immediately available in a variety of formats, produced by sources that differ in trustworthiness, reliability, usage and permission rights, and so forth, means that the role of the librarian or information professional as a mediator between users and the information they seek remains of vital importance if society is not to succumb to the strain of information overload. Thus librarians maintain their traditional role as guardians of knowledge—possibly to the extent of being encyclopaedias in themselves—alongside the new skills and tools demanded by the digital age.