Note: there is a nice photo gallery of the event on Facebook.

In my previous post I recorded some thoughts on the opening of “Öffentlichkeit, Medien und Politik”; today I continue where I left off. While the first day of the event focused on the role of the intellectual, the second day included a discussion of how the Internet is changing scholarship. I will concentrate here on the (pseudo-)Delphi round in the afternoon, in which we pursued this question, but first give a fairly brief rundown of the contributions from the three preceding sessions. Björn Brembs live-blogged very good summaries of these contributions, which I link to again here.

In Autorschaft und kollaboratives Publizieren: Wissenschaft und ‘Werk’ im digitalen Zeitalter (“Authorship and collaborative publishing: scholarship and the ‘work’ in the digital age”), moderated by Friedrich Jaeger, Gerhard Lauer (1) and Daniela Pscheida (2) raised the question of creator and creation in the context of scholarship. Both made clear that the meaning of these two concepts will change substantially in the future, and that our current institutional framework supports this change only to a limited degree. Data publications (corpora and their annotation) and collaborative forms of publication (wikis) still fit only partially into the accepted schema of scholarly works, at least as far as hiring committees are concerned.

The program continued with Qualitätsstandards und institutionelle Kontexte digitaler Wissenschaftskommunikation (“Quality standards and institutional contexts of digital scholarly communication”), moderated by me, where we shifted toward library work, publication costs, and quality standards and assessment. The presentations by Gregor Horstkemper (3) and Jochen Johansen concentrated on the first two aspects, and Martin Warnke's contribution (4) on the third. For people from the library world, the talks by Horstkemper and Johansen contained much that was familiar, but for the largely non-librarian audience they were, in my view, very important. One should talk to researchers about library services and the exorbitant costs of toll-access scholarly publishing at every opportunity, and never assume that by now everyone knows this. Researchers who are better informed make better decisions on these matters. Martin Warnke's contribution was compelling above all for its conclusion: the long tail of research matters, Warnke argued, and not just the most prominent and economically "sensible" disciplines, fields, and works.

The third session, Öffentlichkeit und Wissenschaftskommunikation im digitalen Zeitalter: Mythen und Realitäten (“The public sphere and scholarly communication in the digital age: myths and realities”), moderated by Mareike König, finally turned our attention to the relationship between scholarship and the public.
Steffen Albrecht spoke on Onlinediskurse als Reflexionsspiele. Veränderungen öffentlicher Kommunikation durch die neuen Medien, Torsten Reimer (5) on the work of JISC under the title Forschung im galaktischen Zoo. Neue Medien, neue Wissenschaftskommunikation, neue Wissenschaft?, and finally Rainer Winter (6) on Das Internet und die Konstitution einer transnationalen Öffentlichkeit. Torsten Reimer's contribution was the most exciting for me, since the citizen science projects he described (above all Galaxy Zoo) in my view point the way for science in the 21st century. This aspect certainly deserves a post of its own, but participation is so interesting above all because it not only lets certain scientific problems be crowdsourced, but also because it can build broad public support for science. As Reimer nicely put it: it is hard to cut a project in which 10,000 taxpayers participate. It is worth thinking about how participatory concepts could be integrated into virtual research environments, for example. In the humanities in particular, an immense amount can be achieved by actively involving interested laypeople in research processes. It should also be clear that collaboration between laypeople and experts on an equal footing can give rise to conflicts.

From left to right: Claus Leggewie, Felix Lohmeyer, Jens Klump, Stephan Humer, Sonja Palfner, Jan Schmidt, Manfred Thaller, Patrick Sahle, Jan Schmirmund, Cornelius Puschmann and Björn Brembs. Copyright: KWI / Photo: Georg Lukas

Which brings us to the Delphi round, where this aspect also played a role. The aim of the round was to explore how scholarly communication, and scholarship as a whole, are being changed by the Internet. Originally we had hoped to pay particular attention to how different generations approach these tools, but we dropped that in favor of a more individual approach. The round was well mixed in terms of disciplines, but rather homogeneous with regard to age, gender, and Internet affinity (namely predominantly 30-40 years old, male, and "Internet-friendly").

Here is a list of the participants:

Three aspects have stayed with me in particular, though this is a very subjective selection. The discussion was very productive, and the audio recording will hopefully be published, so that anyone interested can follow the entire conversation.

Is the digital-scholarship clique talking to itself?
Let me stay with the question of representativeness. When Jan Schmidt opened the discussion by describing his computer socialization with the family C64 and Björn Brembs promptly nodded knowingly, I too was reminded of similar experiences. That struck me as slightly suspect, at least in part: how generalizable can our perspectives be if we form such a homogeneous group? I do not mean the representativeness of the round as the basis for an empirical study, but rather possible science-policy developments that orient themselves, in the long run, toward a small and very specific group whose behavior is classed as progressive (and which, through its digital presence, continually certifies to itself how progressive it is). That does not mean the direction is wrong, of course, but this virtual echo chamber does give me pause.

Pensive: Björn Brembs, Stephan Humer and Jens Klump. Copyright: KWI / Photo: Georg Lukas

A digital divide within academia, too
In this vein, Patrick Sahle observed right at the start of the round: "the gap is widening." He meant specifically the gap between the digital humanities and the "normal" humanities, which in Patrick's impression is growing wider rather than narrower. Björn Brembs complained that many practices already common outside academia are adopted within it only at a snail's pace. He criticized the large discrepancy between what is customary and what is (technically) possible, a frustration probably everyone has felt when confronted with the sometimes arcane practices of university life.

The humanities must conquer the Internet (Manfred Thaller, next to him Sonja Palfner). Copyright: KWI / Photo: Georg Lukas

From invisible work to institutional recognition
On the one hand, the participants highlighted the many indirect benefits of their use of digital media (being well informed, better networking with colleagues, greater visibility for one's own research); on the other hand, the demand for stronger institutional recognition also came through clearly to me. As Sonja Palfner noted, the share of "invisible work" is constantly growing: the work a researcher performs in connection with grant proposals, reviews, evaluations, and so on. Add social media use and other informal communication activities, and ever less room remains for the scholarly publications that are at the same time the central criterion by which an academic career is judged. And yet, demonstrably, more and more is being published and (presumably) less and less is being read. Recognizing the various "invisible" tasks that researchers take on could be a first step toward stemming the flood of publications produced purely for career advancement.

I read about this new book series titled Scholarly Communication: Past, present and future of knowledge inscription this morning on the Humanist mailing list. Since scholarly communication is one of my main research interests, I'm thrilled to hear that there will be a series devoted to publications focusing on the topic, edited and reviewed by a long list of renowned scholars in the field.

On the other hand it's debatable (see reactions by Michael Netwich and Toma Tasovac) whether a book series on the future of scholarly communication isn't a tad anachronistic, assuming it is published exclusively in print (which seems to be the case, judging from the announcement on the website). New approaches, such as the crowdsourcing angles of Hacking the Academy or Digital Humanities Now, seem more in sync with Internet-age publishing to me, but sadly such efforts usually don't involve commercial publishers**. My recent struggles with Oxford University Press over a subscription to Literary and Linguistic Computing (the only way of joining the ALLC) have added once more to my skepticism towards commercial publishers. Not because their goal is to make money (there is nothing inherently wrong with that), but because they largely refuse to innovate when it comes to their products and business models. Mailing a paper journal to someone who has no use for it is a waste of resources and a sign that you are out of touch with your customers' needs… at least if your customer is this guy.

Do scholars in the Humanities and Social Sciences* still need printed publications and (consequently) publishers?

Do we need publishers if we decide to go all-out digital?

Do we need Open Access?

I have different stances in relation to these questions depending on the hat I’m wearing. Individually I think print publishing is stone dead, but I also notice that by and large my colleagues still rely on printed books and journals much more heavily than digital sources. Regarding the role of publishers and Open Access the situation is equally complex: we need publishers if our culture of communication doesn’t change, because reproducing digitally what we used to create in print is challenging (see this post for some deliberations). If we decide that blog posts can replace journal articles because speed and efficiency ultimately win over perfectionism, since we are no longer producing static objects but a constantly evolving discourse — in that case the future of commercial publishers looks uncertain. Digital toll-access publishing seems to have little traction in our field so far, something that is likely to change with the proliferation of ebooks we are likely to see in the next few years.

Anyhow — what’s your take?

Should we get rid of paper?

Should we get rid of traditional formats and post everything in blogs instead?

Is Cameron Neylon right when he says that the future of research communication is aggregation?

Let me know what you think — perhaps the debate can be a first contribution to Scholarly Communication: Past, present and future. :-)

(*) I believe the situation is fundamentally different in STM, where paper is a thing of the past but publishers are certainly not.

(**) An exception of sorts could be Liquid Pub, but that project seems focused on STM rather than Hum./Soc.Sci.

Timely or Timeless? The Scholar’s Dilemma.

On May 19, 2010, in Thoughts, by cornelius

Note: this introduction, co-authored with Dieter Stein, is part of the volume Selected Papers from the Berlin 6 Open Access Conference, which will appear via Düsseldorf University Press as an electronic open access publication in the coming weeks. It is also a response to this blog post by Dan Cohen.

Timely or Timeless? The Scholar’s Dilemma. Thoughts on Open Access and the Social Contract of Publishing

Some things don’t change.

We live in a world seemingly over-saturated with information, yet getting it out there in both an appropriate form and a timely fashion is still challenging. Publishing, although the meaning of the word is undergoing significant change in the time of iPads and Kindles, is still a very complex business. In spite of a much faster, cheaper and simpler distribution process, producing scholarly information that is worth publishing is still hard work and so time-consuming that the pace of traditional academic communication sometimes seems painfully slow in comparison to the blogosphere, Wikipedia and the ever-growing buzz of social networking sites and microblogging services. How idiosyncratic does it seem in the age of cloud computing and the real-time web that this electronic volume is published one and a half years after the event its title points to? Timely is something else, you might say.

Dan Cohen, director of the Center for History and New Media at George Mason University, discusses the question of why academics are so obsessed with formal details and consequently so slow to communicate in a blog post titled “The Social Contract of Scholarly Publishing“. In it, Dan retells the experience of working on a book together with colleague Roy Rosenzweig:

“So, what now?” I said to Roy naively. “Couldn’t we just publish what we have on the web with the click of a button? What value does the gap between this stack and the finished product have? Isn’t it 95% done? What’s the last five percent for?”

We stared at the stack some more.

Roy finally broke the silence, explaining the magic of the last stage of scholarly production between the final draft and the published book: “What happens now is the creation of the social contract between the authors and the readers. We agree to spend considerable time ridding the manuscript of minor errors, and the press spends additional time on other corrections and layout, and readers respond to these signals — a lack of typos, nicely formatted footnotes, a bibliography, specialized fonts, and a high-quality physical presentation — by agreeing to give the book a serious read.”

A social contract between author and reader. Nothing more, nothing less.

It may seem either endearing or quaint how Roy Rosenzweig elevates the product of scholarship from a mere piece of more or less monetizable content to something of cultural significance, but he also aptly describes what many academics, especially in the humanities, think of as the essence of their work: creating something timeless. That is, in short, why the humanities are still in love with books, and why they retain a pace of publishing that seems entirely snail-like, both to other academic fields and to the rest of the world. Of course humanities scholars know as well as anyone that nothing is truly timeless, and they understand that trends and movements shape scholarship just as they shape fashion and music. But there is still a commitment to spend the time to deliver something to the reader that is as polished and perfected as one can manage. Something that is not rushed, but refined. Why? Because the reader expects authority from a scholarly work, and authority is derived from getting it right to the best of one's ability.

This is not just a long-winded apology to the readers of and contributors to this volume, although an apology for the considerable delay is surely in order, especially taking into account the considerable commitment and patience of our authors (thank you!). Our point is something equally important, something that connects to Roy Rosenzweig’s interpretation of scholarly publishing as a social contract. This publication contains eight papers produced to expand some of the talks held at the Berlin 6 Open Access Conference that took place in November 2008 in Düsseldorf, Germany. While Open Access has successfully moved forward in the past eighteen months and much has been achieved, none of the needs, views and fundamental aspects addressed in this volume — policy frameworks to enable it (Forster, Furlong), economic and organizational structures to make it viable and sustainable (Houghton; Gentil-Beccot, Mele, and Vigen), concrete platforms in different regions (Packer et al.) and disciplines (Fritze, Dallmeier-Tiessen and Pfeiffenberger) to serve as models, and finally technical standards to support it (Zier) — none of these things have lost any of their relevance.

Open Access is a timely issue and therefore the discussion about it must be timely as well, but “discussion” in a highly interactive sense is hardly ever what a published volume provides anyway – that is something the blogosphere is already better at. That doesn’t mean that what scholars produce, be it in physics, computer science, law or history, should be hallowed tomes that appear years after the controversies around the issues they cover have all but died down, existing purely as historical documents. If that happens, scholarship itself has become an obsolete museum piece, because a total lack of urgency will rightly suggest to people outside of universities that a field lacks relevance. If we don’t care when it’s published, how important can it be?

But can’t our publications be both timely and timeless at once? In other words, can we preserve the values cited by Roy Rosenzweig, not out of some antiquated fetish for scholarly works as perfect documents, but simply because thoroughly discussed, well-edited and proofed papers and books (and, for that matter, blog posts) are nicer to read and easier to understand than hastily produced ones? Readers don’t like it when their time is wasted; this is as true as ever in the age of information overload. Scientists are expected to get it right, to provide reliable insight and analysis. Better to be slow than to be wrong. In an attention economy, perfectionism pays a dividend of trust.

How does this relate to Open Access? If we look beyond the laws and policy initiatives and platforms for a moment, it seems exceedingly clear that access is ultimately a solvable issue and that we are fast approaching the point where it will be solved. This shift is unlikely to happen next month or next year, but if it hasn’t taken place a decade from now our potential to do innovative research will be seriously impaired and virtually all stakeholders know this. There is growing political pressure and commercial publishers are increasingly experimenting with products that generate revenue without limiting access. Historically, universities, libraries and publishers came into existence to solve the problem of access to knowledge (intellectual and physical access). This problem is arguably in the process of disappearing, and therefore it is of pivotal importance that all those involved in spreading knowledge work together to develop innovative approaches to digital scholarship, instead of clinging to eroding business models. As hard as it is for us to imagine, society may just find that both intellectual and physical access to knowledge are possible without us and that we’re a solution in search of a problem. The remaining barriers to access will gradually be washed away because of the pressure exerted not by lawmakers, librarians and (some) scholars who care about Open Access, but mainly by a general public that increasingly demands access to the research it finances. Openness is not just a technicality. It is a powerful meme that permeates all of contemporary society.

The ability for information to be openly available creates a pressure for it to be. Timeliness and timelessness are two sides of the same coin. In the competitive future of scholarly communication, those who get everything (mostly) right will succeed. Speedy and open publication of relevant, high quality content that is well adjusted to the medium and not just the reproduction of a paper artifact will trump publications that do not meet all of these requirements. What is possible in form and pace will be tempered by what is considered normal in individual academic disciplines, and the conventions of one field will differ from those of another. Publishing less or at a slower pace is unlikely to be perceived as a fault in the long term, with all of us having long gone past the point of informational over-saturation. The ability to effectively make oneself heard (or read), paired with having something meaningful to say, will (hopefully) be of increasing importance, rather than just a high volume of output.

Much of the remaining resistance to Open Access is simply due to ignorance, and to murky premonitions of a new dark age caused by a loss of print culture. Ultimately, the relationship between digital and print publication will be redefined. There will be a place for both: the advent of mass literacy did not lead to the disappearance of the spoken word, so the advent of the digital age is unlikely to lead to the disappearance of print culture. Transitory compromises such as delayed Open Access publishing are paving the way to fully digital scholarship. Different approaches will be developed, and those who adapt quickly to a new pace and new tools will benefit, while those who do not will ultimately fall behind.

The ideological dimension of Open Access – whether knowledge should be free – seems strangely out of step with these developments. It is not unreasonable to assume that in the future, if it’s not accessible, it won’t be considered relevant. The logic of informational scarcity has ceased to make sense and we are still catching up with this fundamental shift.

Openness alone will not be enough. The traditional virtues of a publication – the extra 5% – are likely to remain just as important for as long as there is such a thing as institutional scholarship. We thank the authors of this volume for investing the extra 5%, for entering into a social contract with their readers, and another, considerably higher percentage for their immense patience with us. The result may not be entirely timely and, as has been outlined, nothing is ever truly timeless, but we strongly believe that its relevance is undiminished by the time that has passed.

Open Access, whether 2008 or 2010, remains a challenge – not just to lawmakers, librarians and technologists, but to us, to scholars. Some may rise to the challenge while others remain defiant, but ignorance seems exceedingly difficult to maintain. Now is a bad time to bury one’s head in the sand.

Düsseldorf,

May 2010

Cornelius Puschmann and Dieter Stein

Note: this is a crossposting with cyberling.org.

The World Loanword Database (WOLD, http://wold.livingsources.org/), edited by Martin Haspelmath and Uri Tadmor and published by the Max Planck Digital Library (http://www.mpdl.mpg.de/), is a new digital resource for linguists that allows tracing the origin of loanwords.

We had the opportunity to interview WOLD web developer Robert Forkel and ask him about the design philosophy and technology behind the platform. Soon (in about 1-2 weeks) we will also post an interview with Martin Haspelmath on the potential of WOLD for data-driven linguistic research.

Cornelius Puschmann: Robert, WOLD is a rich, open-access resource for studying a range of different questions in linguistics. Could you tell us a bit more about the history of WOLD itself, how it came into being?

Robert Forkel: Martin can tell you everything about the concept and history of WOLD, so I’ll focus on the development process. Successful collaboration with the Max Planck Institute for Evolutionary Anthropology (EVA, http://www.eva.mpg.de/english/index.htm) on the World Atlas of Language Structures Online (WALS, http://wals.info/) led to the Cross-Linguistic Database Platform project (http://www.mpdl.mpg.de/projects/intern/cldp_de.htm). The idea behind the platform is the post-hoc integration of distributed resources via linked data (http://linkeddata.org/). WOLD is the second linked data resource for linguistics we have developed, so now the work on integration of the two can begin.

Cornelius Puschmann: Where does the data for WOLD come from and who contributed to it, apart from the editors and yourself?

Robert Forkel: I’ll also refer you to Martin for a detailed answer to that question. The short version is that the data was contributed by a large group of researchers over several years in the Loanword Typology Project and then adapted for Web publication.

Cornelius Puschmann: What kind of technology is WOLD based on and how can researchers interact with the data?

Robert Forkel: WOLD is implemented using a Python web application framework (currently TurboGears, but we’ll move to Pylons soon), serving data stored in a relational database (PostgreSQL). Good question regarding how researchers can interact with the data — we’d like to find out more about that once more people use WOLD. As stated above, we want to establish linked data and RDF as data access and exchange protocols. This will be beneficial to our own integration plans, but ideally it would also replace CSV/Excel/etc. as exchange formats. Our own plan in terms of data integration involves harvesting dispersed data and putting it in a central repository where it could be queried using SPARQL (http://www.w3.org/TR/rdf-sparql-query/). Pretty much like OLAC (http://www.language-archives.org/), just for data.
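The "harvest and merge by shared identifier" idea behind this kind of linked-data integration can be illustrated with a toy Python sketch. The URIs, field names and values below are entirely invented for illustration; the real WOLD and WALS data models and identifiers differ, and a production setup would use RDF triples and SPARQL rather than plain dictionaries:

```python
# Sketch: post-hoc integration of records harvested from two hypothetical
# resources, keyed by a shared language URI. All data here is made up.

wold_records = {
    "http://example.org/language/swahili": {"loanword_ratio": 0.28},
    "http://example.org/language/english": {"loanword_ratio": 0.41},
}

wals_records = {
    "http://example.org/language/swahili": {"word_order": "SVO"},
    "http://example.org/language/tagalog": {"word_order": "VSO"},
}

def merge_by_uri(*sources):
    """Combine records from several resources that share URI keys."""
    merged = {}
    for source in sources:
        for uri, fields in source.items():
            # The shared URI is what makes distributed records mergeable.
            merged.setdefault(uri, {}).update(fields)
    return merged

combined = merge_by_uri(wold_records, wals_records)
# combined now answers questions across both resources, e.g. the word
# order AND loanword ratio recorded for the Swahili URI.
```

The design point is that no resource needs to know about the others in advance; agreeing on URIs is enough to make later integration a simple join.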

Cornelius Puschmann: How long did it take to develop WOLD, and what resources, in terms of specialists and work hours, are needed to put a project of this scale together?

Robert Forkel: There is no simple answer to this, since different steps were involved, with the development of the WOLD web platform just being the last one. The data for WOLD was collected in a project running over several years. During this project, the data was stored in a FileMaker database (http://www.filemaker.com/), which made for easy data input but also required an extra data migration step for the online publication. Having gathered experience with this kind of toolset and with the workflow of the linguists in the WALS Online project helped a lot.

The work on the online publication of the data was also an ongoing process over the course of more than a year. There are always delays in a project with many contributors and parties involved, where careful coordination between scholars and developers is pivotal. I think putting together a project of this scale requires an organization that can dedicate small amounts of resources over a longer period of time. The finished web application could probably be rewritten right now within a week or two — which I’m actually doing for the switch to a new software framework. But as with WALS, an iterative process was essential. There is simply no way of imagining (let alone specifying) such an application without looking at it and discussing it with practitioners.

Cornelius Puschmann: How does WOLD tie in with other MPDL/MPG-EVA projects and who do you see as target audiences for the different resources you provide?

Robert Forkel: In various ways. For resources like the Intercontinental Dictionary Series (http://lingweb.eva.mpg.de/ids/), and word lists in general, the ties are very strong, i.e. I think it should be possible to mix and match data from these resources without much programming. In fact, we are thinking about reusing the web application serving WOLD to serve IDS as well, thereby also publishing the IDS data as linked data. With resources like WALS, integration will probably be on a more superficial level à la “and what does WALS say about language X?” Finding out what it may mean to query WALS, WOLD and IDS data at once is ultimately the goal of the Cross-Linguistic Database Platform project, so stay tuned.
Regarding the target audience: the first week after publication showed that, just as with WALS, WOLD’s user community is not restricted to linguistic specialists, but is quite diverse.

Cornelius Puschmann: How do legal and licensing issues come into play when developing such resources? What role does Open Access play?

Robert Forkel: Legal and data licensing issues should come into play at a very early stage of your project. There is significant demand for qualified legal advice, since all of this is uncharted territory. With WOLD we were in the fortunate situation that the data had not been published before and the editors agreed to publishing it under a Creative Commons Attribution (CC-BY) license, which I’m told qualifies as “real” open access. Even so, licensing and conveying license information remains a largely unsolved problem for research data, if not in principle, then practically in each concrete dataset I’ve encountered so far. A lot of the insecurity in this area stems from a lack of precedent and explicit licensing terms.
Being able to publish WOLD and WALS open access is certainly essential for getting an entity like the MPDL involved, since we are committed to open access (http://oa.mpg.de/openaccess-berlin/berlindeclaration.html). Publishing restricted data would be hard to justify in our context.

Cornelius Puschmann: Where do you see the field moving in terms of digital resources and cyberinfrastructure in the future?

Robert Forkel: Well, fortunately for researchers, I don’t see the field moving forward so quickly that one risks falling behind. My personal opinion is that if, in maybe three years, a WOLD vocabulary can be imported into Excel or Google Spreadsheets simply by giving the vocabulary URL — and be meaningfully merged with a word list from IDS — I’d consider this a bright future.
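The "fetch a vocabulary by URL and merge it with a word list" scenario Forkel describes can be sketched in a few lines of Python. The CSV snippets below are inline stand-ins for data that would really be fetched over HTTP (e.g. with urllib), and the column names and entries are invented; they do not reflect WOLD's or IDS's actual export formats:

```python
import csv
import io

# Stand-in for a CSV vocabulary fetched from a URL; columns are invented.
wold_csv = """meaning,word,borrowed
house,nyumba,no
table,meza,yes
"""

# Stand-in for a word list from a second resource.
ids_csv = """meaning,word
house,nyumba
sun,jua
"""

def read_wordlist(text):
    """Index a CSV word list by its 'meaning' column."""
    return {row["meaning"]: row for row in csv.DictReader(io.StringIO(text))}

wold = read_wordlist(wold_csv)
ids = read_wordlist(ids_csv)

# A "meaningful merge": meanings attested in both lists, with the
# borrowing judgment from the first resource attached.
shared = {
    meaning: {"word": ids[meaning]["word"],
              "borrowed": wold[meaning]["borrowed"]}
    for meaning in wold.keys() & ids.keys()
}
```

The hard part in practice is not the join itself but agreeing on stable shared keys (here a bare "meaning" string; in a linked-data setting, a URI), which is exactly what the platform work described above aims to provide.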

Cornelius Puschmann: What are your recommendations for developers and researchers who want to build such resources or contribute to existing ones?

Robert Forkel: Get in touch! Actually the “contribution” question is still a big one for us. WALS has been a tremendous success in soliciting feedback.

I’d like to thank Robert for taking the time to chat with me.

The failure of media theory, which believed that with the exegesis of a few essays by Walter Benjamin and the repetition of a few unsupported theses of Michel Foucault it already had something viable to say about the current media transformations, has contributed, on the side of the humanities, to their having little to say about their own real working environments. Publishers bet for too long on defending the existing publication channels as the only sensible ones. The research organizations tend to level out the differences between disciplinary cultures. Meanwhile, we are all being overtaken by Google and co. Other processes, such as the worldwide competition between research locations or the metricization of the sciences, accelerate this development. We stand here at the beginning of a development that none of us can fully survey.

http://www.zotero.org/coffee001/items/51403460

At last, and with quite a bit of lag, here are the slides for last month’s talk at the 1st European Summer School “Culture & Technology” in Leipzig. It was a fabulous event and I cannot praise Elizabeth Burr and her staff enough for making it happen and hosting us so graciously. Digital humanists are a wonderfully diverse and friendly lot and I can’t wait to get to know more of them at Digital Humanities 2010 and next year’s ESU-CT.
