Reductio ad Wikipedia?



Our most recent LAPIS session yesterday featured a guest lecture by Katharine Schopflin, a corporate information professional who has also conducted research into the roles of the encyclopaedia and of reference librarianship. An encyclopaedia—whose principal defining characteristics her research identified as accuracy, a lack of bias, currency, authoritativeness, subject coverage in sufficient and appropriate depth, and succinct writing—is essentially derived from the same principles of information organisation, such as controlled vocabularies and classification schemes, that we have encountered in other modules, and is historically rooted in the positivist school of philosophy.

The modern concept of the general reference encyclopaedia, with comprehensive A-to-Z subject coverage written by experts, with cross-references, indexes and other information-seeking tools, developed over hundreds of years, through “proto-encyclopaedias” such as Thomas Aquinas’s Summa Theologica and the later, alphabetical, Lexicon Technicum by John Harris. Many of these early encyclopaedias were compiled in the rationalist belief, typical of the Age of Enlightenment, that they represented an order of things (and therefore of knowledge about them) inherent in the universe; none more so than the famous French Encyclopédie, edited by Denis Diderot and Jean-Baptiste le Rond d’Alembert and first published in 1751. Not only did this establish the standard practice of including material contributed by a group of named experts in particular fields, but it was also explicitly based upon a “Figurative System of Human Knowledge” that was published in its first volume.

The tripartite classification of knowledge that underpinned the Encyclopédie. Like many other information organisation tools, it is based upon the earlier work of the polymath Francis Bacon.

The desire to create a universal, systematic body of knowledge in this way–which was also driven by the exponential increase of information being produced by human activity, and growing awareness of the problem of “information overload” that this could cause—reached its apogee in the early twentieth century through the work of the Belgian bibliographer and proto-information scientist, Paul Otlet, and his collaborator, Henri La Fontaine. Amongst a multitude of projects for international co-operation and standardisation, particularly in the bibliographic and information fields, was the Universal Bibliographic Repertory: an enormous collection of catalogue cards intended to function in, amongst other things, much the same way as a colossal encyclopaedia. This form of information organisation had by now superseded the individual book, or volumes of a printed encyclopaedia, as Otlet recognised the need for the information contained within a book to be separated from the physical form of the book itself, as demonstrated in his own conception of human knowledge:

Paul Otlet’s conceptual model of how human knowledge is recorded. The universal catalogue transcends the limitations of individual books and other physical “carriers” of information.

This need for separation was later repeated by J.C.R. Licklider, who, when authoring a report in the mid-1960s into the feasibility of a networked knowledge environment (i.e. the technology that developed into the current Internet), encountered the same problems:

[Licklider] wrote about ‘the difficulty of separating the information in books from the pages’—a problem that, he argued, constituted one of ‘the most serious shortcomings of our present system for interacting with the body of recorded knowledge’. What he would do was create a system of cataloguing information from a wide range of sources, extracting and indexing that information—and distributing it over a network.

(Quoted in Alex Wright’s book, Cataloging the World: Paul Otlet and the Birth of the Information Age, p. 250.)

Fifty or so years later, the technology to create a networked, global encyclopaedia that can transcend the limits of its printed cousins has already been in existence for over two decades (although the fully-indexed Semantic Web remains a distant dream for now), and the transition of the traditional general reference encyclopaedias into digital forms—first on CD-ROM and now on the Internet with subscription access—has been the key development in the genre’s publishing industry. However, this is not the most significant change in the encyclopaedia world, as the rise of Wikipedia—an online encyclopaedia that can be accessed and edited freely by anyone—has demonstrated since its foundation in 2001. At the time of writing, it comprises 4,752,420 articles, and it has been studied in some detail: its organisation, social environment and potential as a model for innovation and collaborative work have all been investigated, in addition to some famous studies of its reliability versus established subscription-based online encyclopaedias such as Britannica.

The advantages of Wikipedia—its free availability, inclusiveness, collaborative nature and ability to cover, to a high standard, a wide range of topics that would not feature in a “conventional” encyclopaedia—are readily apparent, and tend to outweigh its flaws, such as an unevenness of coverage across certain disciplines; well-established practices also exist to minimise the risks of malicious vandalism associated with a freely-editable encyclopaedia. Its success can be seen in the wide adoption of the wiki (the term derives from a Hawaiian word for “quick”) web application across the Internet, from relatively well-known websites such as WikiHow, to the many hundreds, if not thousands, of wikis on obscure and niche topics (such as the first PlayStation game I owned back in 1997!) hosted by Wikia. There is even a WikiIndex, which acts as a directory for all other wikis! In addition, I imagine that many readers of this blog have also referred to, or even edited, wikis owned by their employers and used as part of their knowledge management programmes.

The rise of Wikipedia, in conjunction with Google as the exemplar of search engines and the spread of information technology in general, has also revolutionised librarianship in practice and caused much debate in the wider LIS sector. As users’ information behaviour increasingly becomes a case of “Google plus Wikipedia”, and library catalogues move towards the model of search engine-inspired discovery tools that include resources available digitally and outside the physical space of the building (perhaps becoming a form of encyclopaedia in their own right in the process), is there any need for traditional information retrieval and reference librarian skills within the profession? I would answer with a resounding “yes!”. The very fact that a plethora of information is now immediately available, in a variety of formats, produced by sources that differ in trustworthiness and reliability, and subject to differing usage and permission rights, means that the role of the librarian or information professional as a mediator between the user and the information they seek remains of vital importance if society is not to succumb to the strain of information overload. Thus librarians maintain their traditional role as guardians of knowledge—possibly to the extent of being encyclopaedias in themselves—augmented by the new skills and tools demanded by the digital age.


Evolve or die



My earlier blog posts on copyright, the serials crisis and Open Access may have given the impression that large commercial publishers are the natural antagonists of librarians; that librarians are the heroic, under-resourced champions of access to information that would otherwise be locked and bound behind a labyrinthine network of paywalls, copyright law and restricted permissions. This is, of course, largely untrue: the researcher, the publisher and the librarian are all integral parts of the information communication chain and are therefore in a symbiotic relationship with each other, even if one group’s needs and goals have the potential to cause tension with those of another. It is important to bear in mind that innovations such as Creative Commons licenses and Open Access publishing are not completely revolutionary: the former merely forms a legal superstructure on an existing foundation, while the latter shifts the payment structure within the existing publishing framework. And whilst librarians may face financial problems caused by the high price of subscriptions to academic journals, it must also be remembered that the publishers themselves are facing huge challenges for a variety of reasons. These include technological developments in how books are published and read—most obviously the ascendancy of the e-book—business-model challenges from internet giants such as Google Books’ digitisation programme and Amazon’s print-on-demand service, and societal changes in what is being read (witness the popularity of the Fifty Shades of Grey series, which was originally written as fan-fiction).

Our most recent LAPIS lecture covered the current situation from the publisher’s point-of-view, and included the pertinent observation by Michael Cairns of Publishing Technology that:

Technology in publishing, how it is implemented and how it is used is increasingly the differentiator—not the content!—between the publishers that will succeed and those that will fail.

In other words, publishers must stay on top of the latest technological developments, or they will go out of business: evolve or die. In practical terms, this process has extended from the automation of backroom publishing (already largely completed, from computerised typesetting through to algorithmically-driven distribution), through the digitisation of content, to the increasing personalisation of marketing and user-centred material through analytics programmes, such as a soon-to-be-released service offered to publishers by Jellyfish, described, in terms that capture the current data-mining zeitgeist, as “Google Analytics for e-books”.

To reinforce the plethora of possibilities being pursued by publishers at present, we also had a guest lecture from Dan Franklin, the Digital Publisher at Penguin Random House UK, the British arm of a global conglomerate formed by the merger of the two historic publishing houses in 2013. He emphasised that, as digital publishing reaches a stable state of maturity (to the point at which the very term becomes a tautology, as publishers now include it within their overall publishing strategy across all forms of media), it is important for publishers to engage fully with the new and constantly-evolving technology available in order to maximise readership, which increases both information consumption (and therefore, hopefully, knowledge) and the companies’ commercial viability, in an era influenced hugely by the expectation of free content engendered by the development and growth of the Internet. He also picked out the music and entertainment industry as a cautionary example: it has a long history of resisting change for fear of losing revenue, only to suffer in the longer term as a result.

Franklin also took us on a whistle-stop tour of some of Penguin Random House’s current digital projects, including YourFry, a digital storytelling project to create a crowdsourced biography based on the memoirs of Stephen Fry; new e-reading websites for Penguin’s Pelican and Little Black Classics imprints; the use of other forms of media such as podcasts and films to attract new readers; technological changes such as the creation of a template for building recipe e-book apps; and the use of social media websites such as My Independent Bookshop to improve the user-centred, personalised experience for consumers. The message is clear, and also applicable to libraries (for example, the website LibraryThing is a clear parallel to My Independent Bookshop): innovate or become irrelevant.

[N.B. The cover image for this post is by Ben Tubby (CC BY 2.0, via Wikimedia Commons).]

Opening up access



In a previous blog post, I wrote about the “serials crisis” that has threatened to undermine, and restrict, the existing economic model of the dissemination of scholarly communication through academic publishing. This week, we discussed the reaction to this crisis through the Open Access model, and also enjoyed a guest lecture from Martin Paul Eve, author of the recently-published book Open Access and the Humanities: Contexts, Controversies and the Future. In a nutshell, the term “serials crisis” refers to the fact that the cost of subscribing to an academic journal has risen much faster than inflation ever since the 1980s, with a resulting squeeze on libraries’ acquisition budgets. This has led to calls for an alternative model of academic publishing that shifts the financial burden away from potential readers (many of whom are put off entirely by high subscription and individual article-purchase fees), allows unrestricted access to and re-use of research, and delivers the attendant benefits of sharing information.

The Open Access movement is derived from the world of computer software licensing: in the 1980s, Richard Stallman published the freely-licensed GNU operating system as a reaction against the growing preponderance of proprietary software. Just after the turn of the millennium, Lawrence Lessig founded Creative Commons, which extended these principles to any form of published authorial work; over the next few years, the principles of Open Access were developed and set out by Peter Suber and others following conferences in Budapest, Bethesda and Berlin, leading to phrases such as “BBB Statements” and the “BBB definition” of Open Access. As is clear from this timeline, Open Access evolved in step with the contemporaneous development and growth of the Internet, and one of the key points that underlies the movement is that the low cost of disseminating digital works via this technology has rendered traditional publishing models obsolete.

The raison d’être of Open Access is summed-up clearly and succinctly in this video, drawn and animated by the author of the popular PHD Comics webcomic:

In his aforementioned book, Eve provides an equally clear and succinct definition of Open Access as follows:

The term ‘open access’ refers to the removal of price and permission barriers to scholarly research.

Open Access means peer-reviewed academic research that is free to read online and that anyone may redistribute and reuse, with some restrictions.

The fact that the work is still subject to the peer-review process is important, as many people misguidedly equate Open Access with a drop in quality compared to the traditional subscription-based model of scholarly publishing. Looking at the “access” component of Open Access in academic journal publishing, there are two models:

  • Gold—the article is made freely accessible (usually by publishing under a Creative Commons license) by the publisher, and is funded by an Article Processing Charge (APC) paid by the author, their research institution or another funding body. The cost of publishing is therefore shifted from the demand side (the audience) to the supply side.
  • Green—the author deposits a pre- or post-print copy of their article within an institutional repository, subject to certain provisions as set out by the publisher (for instance, the author may only be able to submit a pre-print copy that has not been subject to professional copyediting or typesetting standards, or the article may be embargoed until a certain time has passed since publication).

There are also two “permissions” models within Open Access:

  • Gratis—the article is free to read.
  • Libre—the article is free to read and also reuse, for example for the purposes of text- or data-mining.

The massive change that the Open Access models represent, in comparison to the subscription-based publication paradigm, has caused, and will continue to cause, a number of problems: the uncertainty of guaranteeing financial viability with unproven economic models; the high cost of developing “hybrid” or “legacy” Open Access models based on the adaptation of existing subscription-based systems; the difficulty of persuading researchers—who traditionally are pressured into publishing in the well-established “prestige” journals—publishers and even librarians to adopt a totally new system; and the fact that humanities subjects face additional challenges in a movement that was initially driven by the sciences. These include the fact that arts and humanities research is less well-funded than its scientific equivalent, making it harder to pay APCs, and the fact that the “half-life” of an arts or humanities research article is likely to be significantly longer than that of a scientific one, complicating the Green Open Access model.

Despite these challenges, the Open Access movement has nevertheless made significant progress in the decade or so since it was launched. In the United Kingdom, recommendations made by the 2012 Finch Report led the government to mandate Open Access publishing models for publicly-funded research (making the country one of fourteen to adopt such legislation); a model which has also been taken up by institutions such as the Higher Education Funding Council for England (HEFCE) and the Wellcome Trust. It appears that the Open Access movement is currently at a watershed moment during its history, and could soon become the prevailing model for scholarly publishing if current trends continue.

Crisis? What crisis?



The LAPIS module has, up to now, taken us on a very broad and philosophical journey, asking us to consider questions like “what is an author?”, “what is an authorial work?”, and “what is copyright?”. The fifth lecture of term applied some of these questions and general themes to the narrower and more focussed field of scholarly publishing and its effect on libraries.

Scholarly publishing is distinct from trade publishing, in that it aims to provide resources for research and advanced study rather than merely turning a profit (although this remains an important consideration!). It dates back to the 1660s, with the Enlightenment-era formation of the first learned societies and the publication of their activities in serial form.

The oldest English scientific journal is the Philosophical Transactions of the Royal Society, which was first published in 1665.

This format of scholarly publishing continues to the present day; the number of academic journals has increased steadily at about three per cent every year to a current total of approximately 25,000, most of which are marketed primarily at academic libraries and other large institutions, instead of individual customers. This has created problems for librarians, particularly those who work in academic libraries: as the number of journals has increased, it has become more and more difficult to afford access to those necessary to sustain a scholarly, research-orientated institution–a particularly severe problem for those institutions that exist in the developing world.
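As a back-of-envelope sketch (using only the approximate figures quoted above, so the exact numbers should be taken with a pinch of salt), steady compound growth of three per cent per year implies that the number of journal titles doubles roughly every generation:

```python
import math

# Figures from the text: roughly 3% annual growth in journal titles,
# reaching approximately 25,000 today.
growth_rate = 0.03
current_total = 25_000

# Doubling time under steady compound growth: solve (1 + r)^t = 2.
doubling_time = math.log(2) / math.log(1 + growth_rate)

# Projection a decade ahead, assuming the same trend continues.
in_ten_years = current_total * (1 + growth_rate) ** 10

print(f"Doubling time at 3% growth: {doubling_time:.1f} years")   # ≈ 23.4 years
print(f"Projected titles in ten years: {in_ten_years:,.0f}")      # ≈ 33,598
```

In other words, a library that could afford comprehensive coverage in one decade faces a third more titles by the next, even before any price rises are taken into account.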

Most current academic journals are published on behalf of their societies by commercial publishers, although university presses and not-for-profit organisations are still part of the market as well. We referred to two related articles from the journal The Serials Librarian, both of which emphasise the financial risk taken on by a publisher that establishes a new journal: this has led to the widespread practice of “bundling”, whereby a large academic publisher such as Elsevier, Springer or Wiley sells a large collection of journal titles–some highly desirable, others less so–as an inclusive package, thus using its more successful journals to subsidise the rest.

This has further increased the financial pressure on the acquisitions budgets of academic libraries, as the price of journal subscriptions has consistently increased faster than the general Consumer Price Index since the 1980s. An example of the effects of this phenomenon—popularly dubbed the “serials crisis”—can be seen in this (American) Association of Research Libraries survey of changes in library expenditure by resource from 1986 to 2007, which shows a 340% increase in expenditure on serials, compared to an 89% increase in the CPI and a lower figure for every other type of resource.
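To put those survey figures in perspective, a quick calculation (a sketch based only on the 340% and 89% totals quoted above) shows what they imply as annual growth rates and in real, inflation-adjusted terms:

```python
# Figures from the ARL survey cited above: serials expenditure rose 340%
# between 1986 and 2007, while the Consumer Price Index rose 89%.
serials_factor = 1 + 3.40   # total growth factor for serials spending
cpi_factor = 1 + 0.89       # total growth factor for the CPI
years = 2007 - 1986         # 21 years

serials_cagr = serials_factor ** (1 / years) - 1   # compound annual growth, ≈ 7.3%
cpi_cagr = cpi_factor ** (1 / years) - 1           # compound annual growth, ≈ 3.1%
real_increase = serials_factor / cpi_factor - 1    # growth above inflation, ≈ 133%

print(f"Serials: {serials_cagr:.1%} per year; CPI: {cpi_cagr:.1%} per year")
print(f"Real-terms increase in serials spending: {real_increase:.0%}")
```

That is, serials spending grew more than twice as fast as general prices every year for two decades, so that by 2007 libraries were paying well over double, in real terms, what they had paid in 1986 for their subscriptions.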

The resulting backlash against the large commercial publishers’ models has taken many forms, from petitions and campaigns against individual publishing houses, through new, less restrictive forms of copyright licensing, to the Open Access movement, which we will cover in more detail in future lectures.

A cartoon by Giulia Forsythe that demonstrates the strength of feeling against the Dutch publishing giant, Elsevier, amongst supporters of The Cost of Knowledge campaign.

The session also featured a guest lecture from Suzanne Kavanagh of the Association of Learned and Professional Society Publishers (ALPSP). The ALPSP supports these groups—which are often small and under-resourced—in a demanding environment: squeezed between commercial publishers on the one hand and the stretched resources of libraries and other customers on the other, not to mention the challenges of ongoing technological developments, which are changing traditional reading habits and giving rise to new scholarly publishing paradigms developed by corporate megaliths such as Amazon and Google. It does so by providing a number of support services: information, advocacy and representation, professional development, and opportunities for networking. The lecture helped remind us that the issues we discuss in lectures are not just philosophical or theoretical: they affect entire real-world industries.

Getting creative with copyright



In our fourth LAPIS session of term, we discussed the often thorny issue of intellectual property law and how it fits into the library and publishing sectors. The principle of copyright—the aspect of intellectual property most likely to be of relevance to librarians—dates back to when the development of the printing press and related technology made the mass copying of authors’ works feasible; prior to this, works had to be copied out manually by professional scribes, a laborious but accepted part of the publishing process.

In the United Kingdom, copyright law dates back to 1735, with the implementation of the Engravers’ Copyright Act, which conferred exclusive usage rights on the author of the engraving for a period of fourteen years. It also recognised and codified the distinction between the intellectual work of the creator of the engraving, and the physical labour of the craftsmen who carried out the engraving process itself. The contemporary record of the law makes it clear that its purpose is to incentivise the production of intellectual content:

An act for the encouragement of the arts of deſigning, engraving, and etching hiſtorical and other prints, by veſting the properties thereof in the inventors and engravers, during the time therein mentioned.

The Engravers’ Copyright Act was sponsored by William Hogarth, the renowned artist and engraver who was a prominent victim of unauthorised copying. Much of his work carries a strong moral message, including this scene from A Rake’s Progress (also published in print form in 1735).

Over the following centuries, this copyright principle was extended to cover all forms of intellectual and artistic authorial works, and in general the framework of laws, in the United Kingdom and in other countries, held together. Yet the advent of the current digital age has posed significant challenges to this model, as the interconnectedness of the Internet, combined with ever-advancing computer technology, has made it much simpler to copy, modify and disseminate these works. Moreover, the very nature of digital technology raises its own questions: is a digital file in itself a copy of something else? Do multiple different saved versions of the same document (perhaps backed up automatically by the author’s own computer) count as copies? And so forth. Often such copying is not done for any commercial gain, and the creation of another copy, although technically piracy, can be said (for example, by the legal scholar Stefan Larsson) to differ from “theft”, as the creation of a copy does not remove the original. Overall, the gradual extension of copyright, in the form of “all rights reserved”, to include all published works by default is incompatible with contemporary socio-cultural norms driven by technological progress, as argued by theorists such as Lawrence Lessig, and alternative models are needed to prevent creativity and entrepreneurship from stagnating under unnecessary and unworkable restrictions.

The most successful alternative copyright model has proven to be Creative Commons, a non-profit organisation founded by Lessig in 2001 to offer a range of licensing alternatives to the default of “all rights reserved”. Creative Commons licenses offer a range of options to content producers, all based around the core principle of attribution, whilst allowing people to make use of this content without needing to ask the author’s permission every time, thus removing an unnecessary level of bureaucracy and helping to foster a more creative environment. The principles of Creative Commons are summed up in the organisation’s original signature video:

Creative Commons has been extremely successful, and is used by a variety of companies and organisations, including Al-Jazeera, Google, Wikipedia and Flickr. If you look at the bottom right-hand corner of this blog, you can see that its original contents are licensed under a Creative Commons license; the organisation produces handy graphics, links and HTML snippets for authors (like me!) who produce Web-based content.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

It should be borne in mind that Creative Commons is intended to complement traditional copyright laws, not to supersede them. In the United Kingdom, copyright law continues to be modified as the digital age poses new challenges: in 2014, for instance, the rules concerning the making of personal copies for private use, and the making of derivative works including quotations and for parody, were relaxed.

Copyright is a particularly tricky subject for libraries, which need to balance respecting the law, encouraging authors to produce information, and facilitating unrestricted access to that information. Through the Public Lending Right scheme, British public libraries pay royalties, via the Treasury, to the authors, illustrators and other contributors of the books they circulate, but this scheme has faced challenges in recent years, due both to technological change in the form of e-books, and to the fact that many public libraries are now community-run following recent government expenditure cuts. We can but hope that updating the legal framework, as Creative Commons has proved it is possible to do for freely-licensed works, will help libraries to continue providing satisfactory user access to information.

What is an author, what is an audience, and can they be one and the same?



Our third week of LAPIS continued to explore the wide-ranging philosophical concepts of authorship, and the socio-cultural effects of how these concepts have changed over time. We also benefited from a guest appearance from The Guardian’s commissioning editor, Eliza Anyangwe, to offer a journalist’s perspective on current issues of authorship and publishing, which proved to be a useful addition to our existing LIS paradigm.

One of the major themes discussed was how the greater access to information that has taken place throughout human history, and particularly over the last twenty years with the development and exponential growth of Internet communications, has blurred the traditional distinction between “author” and “audience”, and led to a far more participatory culture.

A sketch by H. P. Lovecraft of Cthulhu, his most famous literary creation.

The American author H. P. Lovecraft is best-known for his works of fantasy-horror, exemplified by his conception of the Cthulhu Mythos (pictured above), but he also wrote essays and letters on a number of subjects, including an essay on amateur journalism. Written circa 1920, much of it seems quaint: it is largely concerned with the potential decline in quality when journalism is carried out by amateurs rather than professionals, whereas citizen journalism is now an accepted part of the modern profession. (A notable recent example is that of Daniel Wickham, whose Tweets exposing the hypocrisy of many of the world leaders who attended the recent Charlie Hebdo memorial in Paris were subsequently widely reported by the “mainstream” media.)

Nevertheless, I was struck by this passage that occurs near the end of the essay:

Above all, let mutual comment be encouraged. We should make it virtually a rule to see that as many articles as possible receive printed replies.

This anticipates the widespread (albeit not universal, and still much-debated) practice of news outlets allowing members of the public to comment on their online articles, one of the key elements of modern media culture.

We also discussed the essay What is an author? by the philosopher and cultural theorist Michel Foucault, for whom the author transcends the simple definition of the person who produces an artistic or intellectual work, and is instead “a certain functional principle”, shaped by societal influences that have the potential to stifle his or her output by circumscribing it within established conventions and received knowledge. This led us to the very recent, and similarly-titled, essay What is an @uthor? by Matthew Kirschenbaum, which updates many of these questions for the contemporary digital age. Kirschenbaum compares William Faulkner with the contemporary author William Gibson: Faulkner was a pioneering figure in the communication of literature, as he allowed a series of classroom conferences he held at the University of Virginia to be recorded and published; Gibson is a current example of this approach taken to extremes, with the entirety of a recent book tour recorded, in addition to frequent interactions with his audience through social media such as Twitter. The question that Kirschenbaum raises is simple: when studying contemporary authors, is it now to be expected that the researcher must trawl these extensive digital archives in search of illumination on a particular subject, in addition to the author’s works themselves and the standard print sources of information?

“Unofficial” fan-art (by Spankeye on DeviantArt) of Cthulhu – is it a less valid interpretation as a result? Or does “ownership” transcend the concept of an individual author (in spirit if not in law)?

A key part of authors’ use of social media is their direct interaction with their audience, and this raises the additional questions that surround fan (or participatory) culture, and whether or not the audience can, in effect, secure “ownership” of an artistic or intellectual creation over its original author. A relevant example is the current Star Wars Uncut project, which aims to remake the original Star Wars trilogy with each scene shot by a different group of fans. To many of us in the lecture, this seemed like a new and rather indulgent form of artistic expression, but we were then asked to consider whether the underlying concept was really so very different from an established artistic form, the parody. And is the desire to film this project perhaps born out of frustration at the subsequent direction of the franchise under George Lucas and, latterly, Disney, and therefore a (possibly subconscious) attempt to claim “ownership” of the original films?

Eliza’s lecture on contemporary trends in journalism raised many of the same points, and emphasised the fact that, outside the theoretical discourse of the lecture theatre and in news agencies across the world, there is a greater reliance than ever before on the traditional, professional side of the discipline being supported by the work of amateurs, essentially “fans” of newsworthy stories. She concluded that there is currently a choice between two models in the journalism sector: first, a culture in which everyone is encouraged to contribute content and everyone can view the news agency’s output, the drawback of which is that it may not be economically sustainable; and secondly, a subscription-based model which necessarily restricts participation and access, but may provide a more solid financial basis, provided that the agency’s reputation is already high enough to encourage readers to pay for content. Something tells me that we will encounter the choice between these models again throughout the LAPIS module!

The medium is the message, and the medium is changing


In our second LAPIS lecture of the term, we continued to discuss the philosophy and history of publishing. In a continuation of last week’s discussion of the distinction between form and content, inspired by Walter Benjamin, we discussed the works of the Canadian philosopher of communication and media, Marshall McLuhan. One of McLuhan’s best-known ideas is that “the medium is the message”, as expressed in his 1964 book, Understanding Media: The Extensions of Man: that the medium through which content is communicated is of greater importance to society than the content itself. This is because different forms of media cause different social effects due to their inherent characteristics: for example, content that is communicated by radio is consumed in a fundamentally different way from content communicated by television, or the print media. The video below features McLuhan elaborating on his theories in a televised question-and-answer session.

The idea itself is now over fifty years old, yet has retained its fundamental importance as one of the cornerstones of modern media theory. It continues to remain relevant, as new forms of communication media continue to emerge; most obviously those associated with the growth of the Internet. One notable trend of recent years is the growth of media which demand extreme brevity on the part of those who transmit messages, for example the 140-character limit of Twitter or the six-second video loops on Vine.

McLuhan himself divided media into “hot” and “cool” types. The former, which include film and radio, engage one sense completely and require little active participation on the part of the consumer. The latter require greater participation to “fill in the gaps” of the experience with one’s own imagination, and include telephone conversations, comic books, and (perhaps surprisingly) television. I wonder what McLuhan would have made of the Internet, which is extremely participatory in terms of user experience, multimedia in form, and increasingly immersive in nature?

The growth of the Internet leads me in nicely to another important lecture topic: disruptive innovation. The concept originated with the business professor Clayton Christensen: it refers to a new technology that at first disrupts, and later completely supersedes, its predecessor, whilst retaining some of its main identifying elements and hence resulting in long-term progress. The publishing sector has been severely disrupted by the emergence and growth of Internet-based technologies—notable examples include the development of self-publishing as a viable business model, the development of e-readers, and the replacement of traditional encyclopaedias and reference sources with free, crowdsourced alternatives such as Wikipedia—and we will explore the wider effects of this disruption upon society, and how the publishing sector is reacting, in future lectures.

One important societal shift caused by the disruptive innovation associated with the Internet that we briefly touched on appears to be the development of a Sharing Economy from the earlier Knowledge Economy model first espoused by Peter Drucker in the 1960s. Drucker argued that modern society consisted mostly of knowledge, rather than manual, workers, and that this preponderance had significant ramifications for society. The Sharing Economy takes this approach one step further: now that information is recognised as a commodity, and that this commodity has never been so easy to share, it follows that its value increases for everyone when it is shared. This in turn has redefined publishing, according to John Feather, as “the commercial activity of putting books into the public domain”. This naturally has a wide range of repercussions, as we will again investigate later in the module.

The artist in a hostile world


With a new term comes a change in focus: this blog will now primarily be used to post my thoughts and reflections on the Libraries & Publishing in an Information Society module that I am currently studying. In our first lecture for this module, we discussed (amongst other things) the role of “the author as producer” – the title of an address delivered by the German philosopher and critic, Walter Benjamin, in 1934. In this address, he discusses several topics, such as his belief that the form and content of a work of art or literature cannot be separated from one another, and the role of technological and political developments in changing the nature of authorship.

What I was most drawn to, however, was the fact that Benjamin delivered this address in a period of history when freedom of creative expression was under threat from the twin pillars of contemporary totalitarianism: fascism, represented by the accession to power of Benito Mussolini in Italy and Adolf Hitler in Germany, and communism, represented by the creation of the Soviet Union in the wake of the Russian Revolutions in 1917. Benjamin acknowledges this by referring to Plato’s Republic, a discussion of a utopian society in which the common good is achieved through just governance by an élite of “philosopher-kings”, but which also includes a great deal of artistic censorship.

Odessa Port (1898) by Wassily Kandinsky – an example of his “realistic” early work.

The late nineteenth and early twentieth centuries saw the rapid development of new artistic movements that reacted violently against the established practices that preceded them. These developments can often be identified in the progression of individual artists’ works. For example, the Russian painter Wassily Kandinsky’s early works are rooted in Impressionism—a revolutionary movement itself when it first appeared, but well-established by the beginning of the twentieth century—but quickly become more abstract over the course of his artistic development, culminating in the compositions of seemingly random colours and shapes of his mature works.

Composition VI (1913) by Kandinsky, a totally abstract painting.

A similar example in the field of music can be taken from the artistic development of Arnold Schoenberg, the Austrian composer. As with Kandinsky, his early works, such as Verklärte Nacht (1899), are rooted in the late-Romantic tradition of Richard Wagner.

However, he quickly abandoned this style, instead experimenting with atonalism, before developing his own technique of twelve-tone serialism, which, like abstract painting, was alien to anything that had gone before it.

The total creative freedom espoused by these and other artistic developments was anathema to the totalitarian régimes that emerged during the inter-war period. Both the fascist and communist governments aspired to total control of their nations’ cultural lives, and anything that ran counter to the official, state-sanctioned artistic movements was repressed with increasing severity. In Nazi Germany, not only were the works of “traditional” artists who happened to be Jewish (such as Felix Mendelssohn) or otherwise undesirable censored, but modern, “degenerate” artworks in general were treated as objects of ridicule.

In the Soviet Union, the collapse of the Tsarist society during the revolutions of 1917 and the subsequent civil war at first allowed artists—at least those who did not go into exile—complete creative freedom, and the country’s avant-garde movements initially flourished as a result. However, as the Bolsheviks consolidated their hold on the country, the party leadership eventually rejected these movements as being associated with the bourgeoisie, and instead promoted and sponsored the style of socialist realism: many artists who were not able to abandon their favoured styles in favour of socialist realism were subsequently demoted, exiled to Siberia, or even executed, and their works suppressed.

Yet despite these oppressive and often dangerous environments, it was still possible for an artist to defy the régime and retain his or her creative independence and integrity. Probably the most famous example of this is the Soviet composer Dmitri Shostakovich: initially a rising star of the music world, he abruptly fell from favour after a hostile review of his opera, Lady Macbeth of the Mtsensk District, was published in Pravda, the state newspaper, allegedly by Joseph Stalin himself. Fearing for his life, he worried that his Fourth Symphony, in development at the time, was too tragic in tone to be performed at a time when the state was proclaiming a rapid improvement in living conditions through the measures of the second Five-Year Plan, and withdrew it from performance. He later completed a triumphant Fifth Symphony, which received glowing reviews, and was officially subtitled “A Soviet Artist’s Response to Just Criticism”. However, many musicologists have noted that the triumphalism of the music sounds exaggerated and forced, as in the final section of the fourth and final movement (starting at approximately 10:05 in the video below), an interpretation which was too subtle to be noticed by the state organs responsible for vetting artistic output.

Despite the fact that fascism and communism are now largely discredited, and their control of the arts is an extreme example of how the artist and society are related, it is clear from what we discussed in the lecture that authors, publishing and libraries are intrinsically interlinked with one another, and with wider society in general. I look forward to studying these issues from additional perspectives in the coming weeks.

New year, new name



It has been my intention to keep this blog running, even though the module for which it was required is now finished. Hence I have changed its name, in order to remove the specific reference to the Digital Information Technologies and Architectures module, and promoted a modified version of the original subtitle (“Welcome to the Library of Tomorrow!”) in its place. I have chosen to do this because I think that this blog has, in its brief existence so far, covered a number of areas at the forefront of current LIS research, and I intend to keep it that way (although I remain partial to whimsical historical digressions as well). I am also interested in researching the futurology of the discipline for my dissertation later in the year – more on which later.

The second term of my course’s lectures begins tomorrow, so please stay tuned for more posts soon!