Corpus Linguistics with R, Day 2

On July 28, 2009, in Code, by cornelius

R Lesson 2


text<-c("This is a first example sentence.", "And this is a second example sentence.")

# gsub performs search-and-replace on strings

> gsub ("second", "third", text)
SEARCH-REPLACE-SUBJECT
[1] "This is a first example sentence."
[2] "And this is a third example sentence."
> gsub ("n", "X", text)
[1] "This is a first example seXteXce."
[2] "AXd this is a secoXd example seXteXce."
> gsub ("is", "was", text)
[1] "Thwas was a first example sentence."
[2] "And thwas was a second example sentence."

---

Perl-style regex

^ beginning of string, e.g. "^x", ***OR*** negation inside [] (e.g. [^aeiou])
$ end of string, e.g. "x$"
. any single character
\ escape character - TWO are needed in R strings ("\\")
[] character classes, e.g. [aeiou] matches vowels, [a-h] is the same as [abcdefgh]
| alternation, e.g. (first|second)
? zero or one, * zero or more, + one or more of the immediately preceding unit
{MIN,MAX} number of occurrences of the immediately preceding unit (character or group)

example: "lo+l" matches "lol", "lool", "loool", etc.
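
A couple of quick examples of the metacharacters above, reusing the text vector from the top:

> grep("^And", text, perl=T, value=T)
[1] "And this is a second example sentence."
> grep("lo{2,3}l", c("lol", "lool", "loool", "looooool"), perl=T, value=T)
[1] "lool"  "loool"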

> grep("analy[sz]e", c("analyze", "analyse", "moo"), perl=T, value=T)
[1] "analyze" "analyse"

> grep("(first|second)", text, perl=T, value=T)
[1] "This is a first example sentence."
[2] "And this is a second example sentence."
> grep("(first|lalala)", text, perl=T, value=T)
[1] "This is a first example sentence."
>

> grep("ab{2}", z, perl=T, value=T)
[1] "aabbccdd"
> grep("(ab){2}", z, perl=T, value=T)
[1] "ababcdcd"
>
> gsub("a (first|second)", "another", text, perl=T)
[1] "This is another example sentence."
[2] "And this is another example sentence."
>
> gsub("[abcdefgh]", "X", text, perl=T)
[1] "TXis is X Xirst XxXmplX sXntXnXX."
[2] "AnX tXis is X sXXonX XxXmplX sXntXnXX."

> grep("forg[eo]t(s|ting|ten)?_v", a.corpus.file, perl=T, value=T)
all forms of forget
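
a.corpus.file is not shown here, so here is a minimal sketch with a made-up tagged vector (assuming verbs carry the suffix _v, as the pattern implies):

> tagged<-c("forget_v", "forgets_v", "forgetting_v", "forgotten_v", "forget_n")
> grep("forg[eo]t(s|ting|ten)?_v", tagged, perl=T, value=T)
[1] "forget_v"     "forgets_v"    "forgetting_v" "forgotten_v"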

*? lazy (non-greedy) matching - matches as little as possible, e.g.

> gregexpr("s.*?s", text[1], perl=T)
[[1]]
[1] 4 14
attr(,"match.length")
[1] 4 12

# note: characters that are part of a match are consumed and cannot be matched again in the same pass
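
A quick illustration with made-up input: in "aaaa" the pattern "aa" matches at positions 1 and 3 only, because the first match consumes characters 1-2:

> gregexpr("aa", "aaaa", perl=T)
[[1]]
[1] 1 3
attr(,"match.length")
[1] 2 2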

> gsub("(19|20)[0-9]{2}", "YEAR", text)
[1] "They killed 250 people in YEAR." "No, it was in YEAR."
> #replaces only 19xx and 20xx

---

> textfile<-scan(file.choose(), what="char", sep="\n")
Enter file name: corp_gpl_short.txt
Read 9 items
> textfile<-tolower(textfile)
> textfile
[1] "the licenses for most software are designed to take away your"
[2] "freedom to share and change it. by contrast, the gnu general public"
[3] "license is intended to guarantee your freedom to share and change free"
[4] "software--to make sure the software is free for all its users. this"
[5] "general public license applies to most of the free software"
[6] "foundation's software and to any other program whose authors commit to"
[7] "using it. (some other free software foundation software is covered by"
[8] "the gnu library general public license instead.) you can apply it to"
[9] "your programs, too."
> unlist(strsplit(textfile, "//W"))
[1] "the licenses for most software are designed to take away your"
[2] "freedom to share and change it. by contrast, the gnu general public"
[3] "license is intended to guarantee your freedom to share and change free"
[4] "software--to make sure the software is free for all its users. this"
[5] "general public license applies to most of the free software"
[6] "foundation's software and to any other program whose authors commit to"
[7] "using it. (some other free software foundation software is covered by"
[8] "the gnu library general public license instead.) you can apply it to"
[9] "your programs, too."
> text_split<-unlist(strsplit(textfile, "//W"))
> text_split
[1] "the licenses for most software are designed to take away your"
[2] "freedom to share and change it. by contrast, the gnu general public"
[3] "license is intended to guarantee your freedom to share and change free"
[4] "software--to make sure the software is free for all its users. this"
[5] "general public license applies to most of the free software"
[6] "foundation's software and to any other program whose authors commit to"
[7] "using it. (some other free software foundation software is covered by"
[8] "the gnu library general public license instead.) you can apply it to"
[9] "your programs, too."
>
> text_split<-unlist(strsplit(textfile, "//W"))
> text_split
[1] "the licenses for most software are designed to take away your"
[2] "freedom to share and change it. by contrast, the gnu general public"
[3] "license is intended to guarantee your freedom to share and change free"
[4] "software--to make sure the software is free for all its users. this"
[5] "general public license applies to most of the free software"
[6] "foundation's software and to any other program whose authors commit to"
[7] "using it. (some other free software foundation software is covered by"
[8] "the gnu library general public license instead.) you can apply it to"
[9] "your programs, too."
> text_split<-unlist(strsplit(textfile, "\\W"))


> text_split<-unlist(strsplit(textfile, "//W+"))
> text_split
[1] "the licenses for most software are designed to take away your"
[2] "freedom to share and change it. by contrast, the gnu general public"
[3] "license is intended to guarantee your freedom to share and change free"
[4] "software--to make sure the software is free for all its users. this"
[5] "general public license applies to most of the free software"
[6] "foundation's software and to any other program whose authors commit to"
[7] "using it. (some other free software foundation software is covered by"
[8] "the gnu library general public license instead.) you can apply it to"
[9] "your programs, too."
> sort(table(text_split), decreasing=T)
text_split
               to software      the     free      and  general
    9      9        7        5        4        3        3
is it license public your by change
3 3 3 3 3 2 2
for foundation freedom gnu most other share
2 2 2 2 2 2 2
all any applies apply are authors away
1 1 1 1 1 1 1
can commit contrast covered designed guarantee instead
1 1 1 1 1 1 1
intended its library licenses make of program
1 1 1 1 1 1 1
programs s some sure take this too
1 1 1 1 1 1 1
users using whose you
1 1 1 1
>
# the unnamed column (count 9) holds empty strings "" - an artifact of splitting
# on single non-word characters; splitting on "\\W+" (one or more) avoids this:
> text_split<-unlist(strsplit(textfile, "\\W+"))
> text_freqs<-sort(table(text_split), decreasing=T)

> text_freqs
text_split
to software the free and general is
9 7 5 4 3 3 3
it license public your by change for
3 3 3 3 2 2 2
foundation freedom gnu most other share all
2 2 2 2 2 2 1
any applies apply are authors away can
1 1 1 1 1 1 1
commit contrast covered designed guarantee instead intended
1 1 1 1 1 1 1
its library licenses make of program programs
1 1 1 1 1 1 1
s some sure take this too users
1 1 1 1 1 1 1
using whose you
1 1 1
> text_freqs[text_freqs>1]   # only the types that occur more than once
text_split
to software the free and general is
9 7 5 4 3 3 3
it license public your by change for
3 3 3 3 2 2 2
foundation freedom gnu most other share
2 2 2 2 2 2
>
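
# stop_list is not defined in this transcript; judging from the FALSE values
# below, it was a stopword vector containing at least "the", "and" and "of":
> stop_list<-c("the", "and", "of")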

> !(text_split %in% stop_list)
[1] FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
[13] TRUE TRUE FALSE TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE
[25] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE
[37] TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
[49] TRUE TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE TRUE TRUE TRUE
[61] TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
[73] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE
[85] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
> text_stopremoved<-text_split[!(text_split %in% stop_list)]
> text_stopremoved
[1] "licenses" "for" "most" "software" "are"
[6] "designed" "to" "take" "away" "your"
[11] "freedom" "to" "share" "change" "it"
[16] "by" "contrast" "gnu" "general" "public"
[21] "license" "is" "intended" "to" "guarantee"
[26] "your" "freedom" "to" "share" "change"
[31] "free" "software" "to" "make" "sure"
[36] "software" "is" "free" "for" "all"
[41] "its" "users" "this" "general" "public"
[46] "license" "applies" "to" "most" "free"
[51] "software" "foundation" "s" "software" "to"
[56] "any" "other" "program" "whose" "authors"
[61] "commit" "to" "using" "it" "some"
[66] "other" "free" "software" "foundation" "software"
[71] "is" "covered" "by" "gnu" "library"
[76] "general" "public" "license" "instead" "you"
[81] "can" "apply" "it" "to" "your"
[86] "programs" "too"
>

# LOAD (and execute) an R source file
source("something.r")


Corpus Linguistics with R, Day 1

On July 28, 2009, in Code, by cornelius

(This post documents the first day of a class on R that I took at ESU C&T. It is posted here purely for my own use.)


R Lesson 1

> 2+3; 2/3; 2^3
[1] 5
[1] 0.6666667
[1] 8

---

Fundamentals - Functions

> log(x=1000, base=10)
[1] 3

---

(formals() shows the formal arguments of a function, i.e. its calling syntax)

formals(sample)

---

Variables

( <- assigns a value to a variable, i.e. saves something in a data structure )
> a<-2+3
> a
[1] 5

# starts a comment

whitespace doesn't matter

---
# Pick files
file.choose()

# Get working dir
getwd()

# Set working dir
setwd("..")

# Save
> save(VARIABLE_NAME, file=file.choose())
Error in save(test, file = file.choose()) : object 'test' not found
# (the error came from trying save(test, ...) when no object called test existed)
> save.image("FILE_NAME")

---

> setwd("/home/cornelius/Code/samples/Brown_95perc")
> getwd()
[1] "/home/cornelius/Code/samples/Brown_95perc"
> dir()

> my_array <- c(1,2,3,4)
> my_array
[1] 1 2 3 4
> my_array <- c("lalala", "lululu", "bla")
> my_array2 <- c(1,2,3,4)
> c(my_array, my_array2)
[1] "lalala" "lululu" "bla" "1" "2" "3" "4"
>

# arithmetic is vectorized - adding a number to a vector adds it to EVERY value, e.g.
my_array2 + 10
# [1] 11 12 13 14

# c (combine/concatenate) builds a vector
stuff1<-c(1,2,3,4,5)

---

# sequence from 1 (first arg) to 5 (second arg) in steps of 1 (third arg)
seq(1, 5, 1)

---

# read a file into a corpus vector
# what = data type ("char" for text), sep = separator (here: one element per line)
> my_corpus<-scan(file=file.choose(), what="char", sep="\n")

# unique elements in a vector
unique(array)

# count the occurrences of each unique element
table(array)

# sort the frequency table
sort(table(array))
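
# a quick made-up illustration of the three functions together:
> animals<-c("cat", "dog", "dog", "gnu", "dog")
> unique(animals)
[1] "cat" "dog" "gnu"
> table(animals)
animals
cat dog gnu
  1   3   1
> sort(table(animals), decreasing=T)
animals
dog cat gnu
  3   1   1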

---
# this tells me the position of the elements in my text that aren't "this"
> values<-which(my_little_corpus!="this")
> values
[1] 2 3 4 5 6 7 8 9 11 12 13 14

# this will produce TRUE|FALSE for my condition (is this element "this")
> values<-my_little_corpus!="this"
> values
[1] FALSE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE
[13] TRUE TRUE

# this will return the array without "this"
> values<-my_little_corpus[my_little_corpus!="this"]
> values
[1] "is" "just" "a" "little" "example" "bla" "bla"
[8] "bla" "is" "the" "third" "line"

...

> cc<-c("banana", "bagel")
> cc == "banana"; cc!="banana" #
[1] TRUE FALSE
[1] FALSE TRUE
> "banana" %in% cc
[1] TRUE
> c("bagel", "banana") %in% cc
[1] TRUE TRUE
> match ("banana", cc)
[1] 1
> match (c("bagel","banana"), cc)
[1] 2 1

# match takes a vector of tokens and returns the position of each token's first occurrence in the data structure

---
> cat(bb, sep="\n", file=scan(what="char"), append=F)
# write the contents of bb to a file; scan(what="char") prompts the user for the file name

moo<-scan(what="char")
# read lines the user types into a variable (input ends with an empty line)

# clear the workspace (remove ALL objects)
> rm(list=ls(all=T))
>

---

# create vector1 (ordered)
vec1<-c("a","b","c","d","e","f","g","h","i","j")

# or
# > letters[1:10]
# [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j"

# create vector2 (a random permutation of vector1)
# > vec2<-sample(vec1)

---

length()
# number of elements

nchar()
# number of characters

> aa<-"know"
> nchar(aa)
[1] 4
> aa<-c("I","do","not","know")
> nchar(aa)
[1] 1 2 3 4
> lala<-c("cat","gnu","hippopotamus")
> lala
[1] "cat" "gnu" "hippopotamus"
> nchar(lala)
[1] 3 3 12

> substr("hippopotamus", 0, 5)
[1] "hippo"
>

# like explode() / implode(): paste() glues strings together
# sep = string placed between the arguments, collapse = string placed between
# the elements when collapsing a vector into a single string
paste(string, sep="my_separator", collapse=NULL)
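
# a quick illustration of sep vs. collapse:
> paste("a", "b", sep="-")
[1] "a-b"
> paste(c("a", "b", "c"), collapse="-")
[1] "a-b-c"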

---

# relative frequencies (proportions of the total)
x/sum(x)

# barplot() takes a vector of bar heights
barplot(c(1,2,3))

Read in corpus data and build a list of word frequencies (a minimal sketch follows the list):
1) scan file
2) strsplit by " "
3) unlist to make vector
4) make a table with freqs
5) sort
6) output
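
A minimal sketch of this pipeline ("corpus.txt" is a placeholder file name):

my_corpus<-scan(file="corpus.txt", what="char", sep="\n")   # 1) scan file
words<-strsplit(tolower(my_corpus), " ")                    # 2) strsplit by " "
words<-unlist(words)                                        # 3) unlist to make vector
word_freqs<-table(words)                                    # 4) make a table with freqs
word_freqs<-sort(word_freqs, decreasing=T)                  # 5) sort
word_freqs                                                  # 6) output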

# search for strings
grep("needle", haystack)

> grep("is", text, value=T)
[1] "This is a first example sentence."
[2] "And this is a second example sentence."
> grep("And", text, value=T)
[1] "And this is a second example sentence."
> grep("sentence", text, value=T)
[1] "This is a first example sentence."
[2] "And this is a second example sentence."
>

gregexpr
# unlike grep, gregexpr returns the exact positions (and lengths) of all matches, as a list with one vector per element searched

> mat<-gregexpr("e", text)
> mat
[[1]]
[1] 17 23 26 29 32
attr(,"match.length")
[1] 1 1 1 1 1

[[2]]
[1] 16 22 28 31 34 37
attr(,"match.length")
[1] 1 1 1 1 1 1

> unlist(mat)
[1] 17 23 26 29 32 16 22 28 31 34 37
> mat<-gregexpr("sentence", text)
> sapply(mat, c)   # flatten the list of match positions into a simple vector
[1] 25 30


A first glimpse at my Twitter corpus

On July 14, 2009, in Thoughts, by cornelius

I’ve finally found the time to do some initial number-crunching on my Twitter corpus in preparation for my presentation at IR 10.0. In this post, I’ll document some very (!) basic first observations, all of which are work in progress, but will probably show up in a published paper at some point.

Existing research by Honeycutt/Herring and boyd/Golder/Lotan has already examined several aspects of Twitter language thoroughly (retweeting, @-messaging), but I hope to add insight in a few areas by examining facets that have so far been explored less, namely the relation of tweeting to other forms of CMC, and uses of Twitter which don't involve retweeting and messaging but can be regarded as more introspective and (to me) “blog-like”.

I’ll start with some very basic stuff that won’t really be too surprising to most of you.

First the corpus data:

a) extract from my larger Twitter corpus (Twitter_SmallCorp)
Size: 1,932,772 tokens / 149,292 types

b) NPS Chat corpus
Size: 45,010 tokens / 6,066 types

c) Webtext corpus included in the NLTK
Size: 396,736 tokens / 21,537 types

My three corpora differ drastically in size, and by the standards of most computational linguists all three are pretty small. It's my impression, however, that top X word lists (e.g. top 50) do not change significantly once a corpus exceeds a six-digit token count, but tend to show characteristic distribution patterns for a given genre. Word lists are too ambiguous to identify a type of text reliably, but looking at them can still be interesting.

The table below shows the most frequent types in all three corpora ranked by frequency (the first 20 ranks are reproduced here). I deliberately don't provide unnormalized counts in the table, as they would be fairly meaningless; the rank is what I'm interested in.

Rank  Twitter_SmallCorp  NPS_Chat  Webtext
  1   I                  lol       I
  2   the                to        the
  3   to                 i         to
  4   a                  the       a
  5   and                you       you
  6   of                 I         in
  7   for                a         and
  8   in                 hi        on
  9   you                me        of
 10   is                 is        is
 11   it                 in        it
 12   s                  and       not
 13   on                 it        that
 14   my                 that      with
 15   that               hey       for
 16   n’t                my        Girl
 17   have               of        2
 18   with               u         Guy
 19   at                 s         when
 20   me                 for       like

There’s a lot of uniformity at first glance if you compare the three lists. However, the comparison of Twitter with Web chat shows some interesting (though largely unsurprising) differences:

  1. the first person forms (the pronouns I, me and the determiner my) are used frequently in all three corpora, but Twitter seems to have a slight lead
  2. the second person (you) is less frequent in Twitter than in the other corpora
  3. greetings and emotives (hi, hey, lol) are frequent in chat, but occur much less frequently in Twitter
  4. words expressing relations (and, of) are significantly more frequent in Twitter than in chat

In contrast to chat, Twitter is generally used in a (more) asynchronous fashion, which provides motivation for (1)-(4). Going hand in hand with this is the lack of cospatiality (virtual cospatiality, that is; obviously there's generally no real cospatiality online): Twitter does not evoke the image of a “room” or shared space as most chats do. Depending on my Twitter client, I can only see what my followers are writing, but not my own tweets and direct messages, at least not in the same window. Finally, the participant structure is open and opaque: I may not know the participants in a chat personally, but I can identify them individually. Unless my updates are protected, anyone can potentially read my tweets, and at the same time it is less obvious that anyone will necessarily read them. This situation is comparable to that of blogs, and it explains the lesser degree of linguistically enacted performance in Twitter vs. chats and the higher degree of propositional language. Everyone controls his/her own discourse environment in Twitter, and accordingly fewer expressions are used that relate the speaker to others (2 and 3), while more are used that include the Twitterer (1).

Below are three plots showing the cumulative type distributions in each corpus. Note that they are rough and contain noise and punctuation.

I’m still just scratching the surface here, but a comparison of verbs and verb classes using larger Twitter, blog and chat corpora will come next. I’ll also look at tweets on the discourse level, specifically at (for lack of a better word) “non-communicative tweets”, i.e. those which are not RTs and not @-messages. Stay tuned. :-)


NLTK corpus functions

On July 11, 2009, in Code, by cornelius

fileids() The files of the corpus
fileids([categories]) The files of the corpus corresponding to these categories
categories() The categories of the corpus
categories([fileids]) The categories of the corpus corresponding to these files
raw() The raw content of the corpus
raw(fileids=[f1,f2,f3]) The raw content of the specified files
raw(categories=[c1,c2]) The raw content of the specified categories
words() The words of the whole corpus
words(fileids=[f1,f2,f3]) The words of the specified fileids
words(categories=[c1,c2]) The words of the specified categories
sents() The sentences of the whole corpus
sents(fileids=[f1,f2,f3]) The sentences of the specified fileids
sents(categories=[c1,c2]) The sentences of the specified categories
abspath(fileid) The location of the given file on disk
encoding(fileid) The encoding of the file (if known)
open(fileid) Open a stream for reading the given corpus file
root() The path to the root of the locally installed corpus
readme() The contents of the README file of the corpus


NLTK corpora

On July 11, 2009, in Things I want to look up later, by cornelius

[*] alpino………….. Alpino Dutch Treebank
[*] nombank.1.0……… NomBank Corpus 1.0
[*] abc…………….. Australian Broadcasting Commission 2006
[*] maxent_ne_chunker… ACE Named Entity Chunker (Maximum entropy)
[*] conll2000……….. CONLL 2000 Chunking Corpus
[*] chat80………….. Chat-80 Data Files
[*] brown…………… Brown Corpus
[*] brown_tei……….. Brown Corpus (TEI XML Version)
[*] cmudict…………. The Carnegie Mellon Pronouncing Dictionary (0.6)
[*] biocreative_ppi….. BioCreAtIvE (Critical Assessment of Information
Extraction Systems in Biology)
[*] cess_cat………… CESS-CAT Treebank
[*] conll2002……….. CONLL 2002 Named Entity Recognition Corpus
[*] conll2007……….. Dependency Treebanks from CoNLL 2007 (Catalan
and Basque Subset)
[*] city_database……. City Database
[*] indian………….. Indian Language POS-Tagged Corpus
[*] shakespeare……… Shakespeare XML Corpus Sample
[*] dependency_treebank. Dependency Parsed Treebank
[*] inaugural……….. C-Span Inaugural Address Corpus
[*] ieer……………. NIST IE-ER DATA SAMPLE
[*] gutenberg……….. Project Gutenberg Selections
[*] gazetteers………. Gazeteer Lists
[*] names…………… Names Corpus, Version 1.3 (1994-03-29)
[*] mac_morpho………. MAC-MORPHO: Brazilian Portuguese news text with
part-of-speech tags
[*] movie_reviews……. Sentiment Polarity Dataset Version 2.0
[*] cess_esp………… CESS-ESP Treebank
[*] genesis…………. Genesis Corpus
[*] kimmo…………… PC-KIMMO Data Files
[*] floresta………… Portuguese Treebank
[*] qc……………… Experimental Data for Question Classification
[*] nps_chat………… NPS Chat
[*] paradigms……….. Paradigm Corpus
[*] pil…………….. The Patient Information Leaflet (PIL) Corpus
[*] stopwords……….. Stopwords Corpus
[*] propbank………… Proposition Bank Corpus 1.0
[ ] pe08……………. Cross-Framework and Cross-Domain Parser
Evaluation Shared Task
[*] state_union……… C-Span State of the Union Address Corpus
[*] sinica_treebank….. Sinica Treebank Corpus Sample
[*] ppattach………… Prepositional Phrase Attachment Corpus
[*] senseval………… SENSEVAL 2 Corpus: Sense Tagged Text
[*] problem_reports….. Problem Report Corpus
[*] reuters…………. The Reuters-21578 benchmark corpus, ApteMod
version
[*] swadesh…………. Swadesh Wordlists
[*] rte…………….. PASCAL RTE Challenges 1, 2, and 3
[*] udhr……………. Universal Declaration of Human Rights Corpus
[*] treebank………… Penn Treebank Sample
[*] unicode_samples….. Unicode Samples
[*] verbnet…………. VerbNet Lexicon, Version 2.1
[*] wordnet_ic………. WordNet-InfoContent
[*] book_grammars……. Grammars from NLTK Book
[*] words…………… Word Lists
[*] punkt…………… Punkt Tokenizer Models
[*] wordnet…………. WordNet
[*] large_grammars…… Large context-free grammars for parser
comparison
[*] ycoe……………. York-Toronto-Helsinki Parsed Corpus of Old
English Prose
[*] spanish_grammars…. Grammars for Spanish
[*] rslp……………. RSLP Stemmer (Removedor de Sufixos da Lingua
Portuguesa)
[*] tagsets…………. Help on Tagsets
[*] sample_grammars….. Sample Grammars
[*] timit…………… TIMIT Corpus Sample
[*] maxent_treebank_pos_tagger Treebank Part of Speech Tagger (Maximum entropy)
[*] toolbox…………. Toolbox Sample Files
[*] basque_grammars….. Grammars for Basque
[*] hmm_treebank_pos_tagger Treebank Part of Speech Tagger (HMM)
[*] webtext…………. Web Text Corpus
[*] switchboard……… Switchboard Corpus Sample


Accessing corpora: nltk.corpus
String processing: nltk.tokenize, nltk.stem
Collocation discovery: nltk.collocations
Part-of-speech tagging: nltk.tag
Classification: nltk.classify, nltk.cluster
Chunking: nltk.chunk
Parsing: nltk.parse
Semantic interpretation: nltk.sem, nltk.inference
Evaluation metrics: nltk.metrics
Probability and estimation: nltk.probability
Applications: nltk.app, nltk.chat


Publishing my dissertation Open Access

On July 11, 2009, in Thoughts, by cornelius

I am proud to announce that my dissertation The corporate blog as an emerging genre of computer-mediated communication: features, constraints, discourse situation will be published with Universitätsverlag Göttingen in the series Göttinger Schriften zur Internetforschung (Göttingen publications on Internet research). The series is edited by Svenja Hagenhoff, Dieter Hogrefe, Elmar Mittler, Matthias Schumann, Gerald Spindler and Volker Wittke and has thus far featured five works investigating different aspects of medial change brought about by the Internet and digital technology, such as individualization in the media and new forms of academic communication on the Net. I am delighted to be the first linguist to publish in this interdisciplinary series, and it is extremely gratifying to see my thesis published with a university press that has a modern approach to scholarly communication. All works in the series are hybrid Open Access and print-on-demand publications; in other words, you can either read them online (or download them to your computer) or order the traditional dead-tree version. Different channels of distribution are also supported, i.e. my dissertation will be on Google Books and Amazon.

I’d like to thank the editors and especially Svenja Hagenhoff for taking the time to consider my thesis for inclusion and Margo Bargheer for pointing the series out to me.

(Three cheers)
