I’ve been following the development of googleVis, the implementation of the Google Visualization API for R, for a bit now. The library has a lot of potential as a bridge between R (where data processing happens) and HTML (where presentation is [increasingly] happening). A growing number of visualization frameworks are on the market and all have their perks (e.g. Many Eyes, Simile, Flare). I guess I was so inspired by Hans Rosling’s TED talk, which makes such great use of bubble charts, that I wanted to try the Google Vis API for that chart type alone. There’s more, however, if you don’t care much for floating bubbles: neat chart variants include the geochart, area charts and the usual classics (bar, pie, etc.). Check out the chart gallery for an overview.

So here are my internet growth charts:

(1) Motion chart showing the growth of the global internet population since 2000 for 208 countries

(2) World map showing global internet user statistics for 2009 for 208 countries

Data source: data.un.org (ITU database). I’ve merged two tables from the database into one (absolute numbers and percentages) and cleaned the data up a bit. The resulting tab-separated CSV file is available here.
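(For the curious, the merge itself is just a join on country and year. Here’s a rough sketch of the kind of thing I mean; the file and column names below are made up for illustration and are not the actual ITU export names.)

# Illustrative sketch only -- file and column names are assumptions, not the real ITU exports
users.abs <- read.csv("itu_users_absolute.csv", sep="\t")     # Country, Year, Users
users.pct <- read.csv("itu_users_percentage.csv", sep="\t")   # Country, Year, Percentage
netstats <- merge(users.abs, users.pct, by=c("Country", "Year"))
write.table(netstats, "netstats.csv", sep="\t", row.names=FALSE, quote=FALSE)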

And here’s the R code for rendering the chart. Basically you just replace gvisMotionChart() with gvisGeoChart() for the second chart, the rest is the same.

library("googleVis")
n <- read.csv("netstats.csv", sep="\t")
nmotion <- gvisMotionChart(n, idvar="Country", timevar="Year", options=list(width=1024, height=768))
plot(nmotion)
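
In case it helps, the geochart variant might look roughly like this. The Percentage column name and the subsetting to 2009 are assumptions about the cleaned data, not something prescribed by googleVis:

# Sketch of the geochart call; "Percentage" is an assumed column name in netstats.csv
ngeo <- gvisGeoChart(subset(n, Year == 2009), locationvar="Country", colorvar="Percentage", options=list(width=1024, height=768))
plot(ngeo)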

Dynamic Twitter graphs with R and Gephi (clip and code)

On January 2, 2011, in Code, by cornelius

Note 1/4/11: I’ve updated the code below after discovering a few bugs.

Back in October when Jean Burgess first posted a teaser of the Gephi dynamic graph feature applied to Twitter data, I thought right away that this was going to bring Twitter visualization to an entirely new level. When you play around with graph visualizations for a while, you inevitably come to the conclusion that they are of very limited use for studying something like Twitter because of its dynamic nature as an ongoing communicative process. Knowing that someone retweeted someone else a lot or that a certain word occurred many times is only half the story. When someone got a lot of retweets or some word was used frequently is often much more interesting.

Anyhow, Axel Bruns posted a first bit of code (for generating GEXF files) back in October, followed by a detailed implementation (1, 2) a few days ago. Since Axel uses Gawk and I prefer R, the first thing I did was to write an R port of Axel’s code. It does the following:

  1. Extract all tweets containing @-messages and retweets from a Twapperkeeper hashtag archive.
  2. Generate a table containing the fields sender, recipient, start time and end time for each data point.
  3. Write this table to a GEXF file.

The implementation as such wasn’t difficult and I didn’t really follow Axel’s code too closely, since R is syntactically different from Gawk. The thing I needed to figure out was the logic of the GEXF file, specifically start and end times, in order to make sure that edges decay over time. Axel explains this in detail in his post and provides a very thorough and clean implementation.
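
To make the decay logic concrete, here’s a toy example with made-up numbers (not taken from the actual data): each @-message keeps its edge alive for decaytime seconds, the edge as a whole runs from its first occurrence to its last occurrence plus the decay window, and each individual occurrence becomes one slice.

# Toy example: three @-messages from the same sender to the same recipient,
# sent 0, 1800 and 7200 seconds after the first tweet in the archive, decaytime = 3600
starts <- c(0, 1800, 7200)
ends <- starts + 3600
# The <edge> element spans min(starts)..max(ends) = 0..10800,
# and each occurrence becomes one <slice start=".." end=".."> inside it
c(edge.start = min(starts), edge.end = max(ends))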

My own implementation is rougher and probably still needs polishing in several places, but here’s a first result (no sound; watch in HD and fullscreen):

Note 1/7/11: I’ve replaced the clip above with a better one after ironing out a few issues with my script. The older clip is still available here.

Like previous visualizations I’ve done, this also uses the #MLA09 data, i.e. tweets from the 2009 convention of the Modern Language Association.
1/7/11: The newer clip is based on data from Digital Humanities 2010 (#dh2010).

And here’s the R code for generating the GEXF file, in case you want to play around with it:

# Build a dynamic (GEXF) @-message graph from a Twapperkeeper hashtag archive
rm(list=ls(all=T));
 
outfile.gexf <- "dh2010.gexf";
decaytime <- 3600;   # how long (in seconds) an @-message keeps its edge "alive"
buffer <- 0;         # optional offset added to all timestamps
eid <- 1;            # running edge id
 
# Read the pipe-separated Twapperkeeper export
tweets <- read.csv(file.choose(), head=T, sep="|", quote="", fileEncoding="UTF-8");
 
# Keep only tweets containing an @-message (match on lowercased text so mixed-case usernames are caught too)
ats <- tweets[grep("@([a-z0-9_]{1,15}):?", tolower(tweets$text)),];
g.from <- tolower(as.character(ats$from_user));
g.to <- gsub("^.*@([a-z0-9_]{1,15}):?.*$", "\\1", tolower(ats$text), perl=T);
 
# Express times relative to the first tweet; each occurrence ends decaytime seconds after it starts
g.start <- ats$time - min(ats$time) + buffer;
g.end <- ats$time - min(ats$time) + decaytime + buffer;
g <- data.frame(from=g.from, to=g.to, start=g.start, end=g.end);
g <- g[order(g$from, g$to, g$start),];
 
# GEXF header; the graph ends decaytime seconds after the last edge occurrence
output <- paste("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<gexf xmlns=\"http://www.gexf.net/1.2draft\" version=\"1.2\">\n<graph mode=\"dynamic\" defaultedgetype=\"directed\" start=\"0\" end=\"", max(g$end) + decaytime, "\">\n<edges>\n", sep="");
 
# One <edge> per sender/recipient pair, with one <slice> per occurrence
all.from <- as.character(unique(g$from));
for (i in 1:length(all.from))
{
	this.from <- all.from[i];
	this.to <- as.character(unique(g$to[g$from == this.from]));   # exact match, not a regex
	for (j in 1:length(this.to))
	{
		all.starts <- g$start[g$from == this.from & g$to == this.to[j]];
		all.ends <- g$end[g$from == this.from & g$to == this.to[j]];
		output <- paste(output, "<edge id=\"", eid, "\" source=\"", this.from, "\" target=\"", this.to[j], "\" start=\"", min(all.starts), "\" end=\"", max(all.ends), "\">\n<attvalues>\n", sep="");
		for (k in 1:length(all.starts))
		{
			# overlap
			# if (all.starts[k+1] < all.ends[k]) output <- paste(output, "", sep=""); ... ?
			output <- paste(output, "\t<attvalue for=\"0\" value=\"1\" start=\"", all.starts[k], "\" />\n", sep="");
		}
		output <- paste(output, "</attvalues>\n<slices>\n", sep="");
		for (l in 1:length(all.starts))
		{
			output <- paste(output, "\t<slice start=\"", all.starts[l], "\" end=\"", all.ends[l], "\" />\n", sep="");
		}
		output <- paste(output, "</slices>\n</edge>\n", sep="");
		eid <- eid + 1;
	}
}
output <- paste(output, "</edges>\n</graph>\n</gexf>\n", sep="");
cat(output, file=outfile.gexf);

A new, simpler approach to Twitter visualization

On November 27, 2010, in Uncategorized, by cornelius

If you’ve been following my work recently you might have noticed my interest (or slight obsession) with visualization, especially in relation to communication on Twitter. I’ve been experimenting both with graphs and with traditional bar and pie charts to show what happens when people use Twitter.

Now I’ve tried something new, somewhat inspired by an essay on info visualization recently published by Lev Manovich. In it, Manovich describes an approach that he calls direct visualization:

In direct visualization, the data is reorganized into a new visual representation that preserves its original form. Usually, this does involve some data transformation such as changing data size. For instance, text cloud reduces the size of text to a small number of most frequently used words. However, this is a reduction that is quantitative rather than qualitative. We don’t substitute media objects by new objects (i.e. graphical primitives typically used in infovis), which only communicate selected properties of these objects (for instance, bars of different lengths representing word frequencies). My phrase “visualization without reduction” refers to this preservation of a much richer set of properties of data objects when we create visualizations directly from them.

Applying this idea to the Twitter data I work with, I decided to try something new. Instead of reducing the richness of the data, why not rearrange it to make it more readable? And here’s the result of my attempt to do that:

All tweets using the #MLA09 hashtags in one large PDF

(Note: download the PDF and look at it in your favorite PDF viewer if the zooming in Scribd is sluggish)


After recently discovering the excellent methods section on mappingonlinepublics.net, I decided it was time to document my own approach to Twitter data. I’ve been messing around with R and igraph for a while, but it wasn’t until I discovered Gephi that things really moved forward. R/igraph are great for preprocessing the data (not sure how they compare with Awk), but rather cumbersome to work with when it comes to visualization. Last week, I posted a first Gephi visualization of retweeting at the Free Culture Research Conference and since then I’ve experimented some more (see here and here). #FCRC was a test case for a larger study that examines how academics use Twitter at conferences, which is part of what we’re doing at the junior researchers group Science and the Internet at the University of Düsseldorf (sorry, website is currently in German only).

Here’s a step-by-step description of how those graphs were created.

Step #1: Get tweets from Twapperkeeper
Like Axel, I use Twapperkeeper to retrieve tweets tagged with the hashtag I’m investigating. This has several advantages:

  • it’s possible to retrieve older tweets which you won’t get via the API
  • tweets are stored as CSV rather than XML, which makes them easier to work with for our purposes.

The sole disadvantage of Twapperkeeper is that we have to rely on the integrity of their archive: if for some reason not all tweets with our hashtag have been retrieved, we won’t know. Also, certain information that is present in Twitter’s XML and that we might be interested in (e.g. geolocation) is not retained in Twapperkeeper’s CSV files.

Instructions:

  1. Search for the hashtag you’re interested in (e.g. #FCRC). If no archive exists, create one.
  2. Go to the archive’s Twapperkeeper page, sign into Twitter (button at the top) and then choose export and download at the bottom of the page.
  3. Choose the pipe character (“|”) as separator. I use that one rather than the more conventional comma or semicolon because we are dealing with text data which is bound to contain those characters a lot. Of course the pipe can also be parsed incorrectly, so be sure to have a look at the graph file you make (see the quick sanity check after this list).
  4. Voilà. You should now have a CSV file containing tweets on your hard drive. Edit: Actually, you have a .tar file that contains the tweets. Look inside the .tar for a file with a very long name ending with “-1” (not “info”); that’s the data we’re looking for.
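
One quick sanity check that isn’t part of the original workflow: once you’ve pulled the data file out of the .tar (and renamed it tweets.csv as in step 2 below), you can count the fields per line to make sure the pipe-separated file parses cleanly.

# Rough sanity check: every row should have the same number of pipe-separated fields;
# rows that differ were probably broken by a "|" inside the tweet text
fields <- count.fields("tweets.csv", sep="|", quote="")
table(fields)
which(fields != median(fields, na.rm=TRUE))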

Step #2: Turn CSV data into a graph file with R and igraph
R is an open source statistics package that is primarily used via the command line. It’s absolutely fantastic at slicing and dicing data, although the syntax is a bit quirky and the documentation is somewhat geared towards experts (=statisticians). igraph is an R package for constructing and visualizing graphs. It’s great for a variety of purposes, but due to the command line approach of R, actually drawing graphs with igraph was somewhat difficult for me. But, as outlined below, Gephi took care of that. Running the code below in R will transform the CSV data into a GraphML file which can then be visualized with Gephi. While R and igraph rock at translating the data into another format, Gephi is the better tool for the actual visualization.

Instructions:

  1. Download and install R.
  2. In the R console, run the following: install.packages("igraph")
  3. Copy the CSV you’ve just downloaded from Twapperkeeper to an empty directory and rename it to tweets.csv.
  4. Finally, save the R file below to the same folder as the CSV and run it.

Code for extracting RTs and @s from a Twapperkeeper CSV file and saving the result in the GraphML format:

Download: tweetgraph.R
# Extract @-message and RT graphs from conference tweets
library(igraph);
 
# Read Twapperkeeper CSV file
tweets <- read.csv("tweets.csv", head=T, sep="|", quote="", fileEncoding="UTF-8");
print(paste("Read ", length(tweets$text), " tweets.", sep=""));
 
# Get @-messages, senders, receivers
ats <- grep("^\\.?@[a-z0-9_]{1,15}", tolower(tweets$text), perl=T, value=T);
at.sender <- tolower(as.character(tweets$from_user[grep("^\\.?@[a-z0-9_]{1,15}", tolower(tweets$text), perl=T)]));
at.receiver <- gsub("^\\.?@([a-z0-9_]{1,15})[^a-z0-9_]+.*$", "\\1", ats, perl=T);
print(paste(length(ats), " @-messages from ", length(unique(at.sender)), " senders and ", length(unique(at.receiver)), " receivers.", sep=""));
 
# Get RTs, senders, receivers
rts <- grep("^rt @[a-z0-9_]{1,15}", tolower(tweets$text), perl=T, value=T);
rt.sender <- tolower(as.character(tweets$from_user[grep("^rt @[a-z0-9_]{1,15}", tolower(tweets$text), perl=T)]));
rt.receiver <- gsub("^rt @([a-z0-9_]{1,15})[^a-z0-9_]+.*$", "\\1", rts, perl=T);
print(paste(length(rts), " RTs from ", length(unique(rt.sender)), " senders and ", length(unique(rt.receiver)), " receivers.", sep=""));
 
# This is necessary to avoid problems with empty entries, usually caused by encoding issues in the source files
at.sender[at.sender==""] <- "<NA>";
at.receiver[at.receiver==""] <- "<NA>";
rt.sender[rt.sender==""] <- "<NA>";
rt.receiver[rt.receiver==""] <- "<NA>";
 
# Create a data frame from the sender-receiver information
ats.df <- data.frame(at.sender, at.receiver);
rts.df <- data.frame(rt.sender, rt.receiver);
 
# Transform data frame into a graph
ats.g <- graph.data.frame(ats.df, directed=T);
rts.g <- graph.data.frame(rts.df, directed=T);
 
# Write sender -> receiver information to a GraphML file
print("Write sender -> receiver table to GraphML file...");
write.graph(ats.g, file="ats.graphml", format="graphml");
write.graph(rts.g, file="rts.graphml", format="graphml");

Step #3: Visualize graph with Gephi
Once you’ve completed steps 1 and 2, simply open your GraphML file(s) with Gephi. You should see a visualization of the graph. I won’t give an in-depth description of how Gephi works, but the users section of gephi.org has great tutorials which explain both Gephi and graph visualization in general really well.

I’ll post more on the topic as I make further progress, for example with stuff like dynamic graphs which show change in the network over time.


Graphing Twitter friends/followers with R (updated)

On June 25, 2010, in Code, by cornelius

Edit: And here is an update of the update, this one contributed by Kai Heinrich.

Here’s an updated version of my script from last month, something I’ve been meaning to do for a while. I thank Anatol Stefanowitsch and Gábor Csárdi for improving my quite sloppy code.


# Load twitteR and igraph packages.
library(twitteR)
library(igraph)


# Start a Twitter session.
sess <- initSession('USERNAME', 'PASSWORD')


# Retrieve a maximum of 20 friends/followers for yourself or someone else. Note that
# at the moment, the limit parameter does not [yet] seem to be working.
friends.object <- userFriends('USERNAME', n=20, sess)
followers.object <- userFollowers('USERNAME', n=20, sess)


# Retrieve the names of your friends and followers from the friend
# and follower objects.
friends <- sapply(friends.object,name)
followers <- sapply(followers.object,name)


# Create a data frame that relates friends and followers to you for expression in the graph
relations <- merge(data.frame(User='YOUR_NAME', Follower=friends), data.frame(User=followers, Follower='YOUR_NAME'), all=T)


# Create graph from relations.
g <- graph.data.frame(relations, directed = T)


# Assign labels to the graph (=people's names)
V(g)$label <- V(g)$name


# Plot the graph using plot() or tkplot().
tkplot(g)


Post-event Twitter stats for #THATcamp

On May 26, 2010, in data, by cornelius

I thought I’d post an updated version of the simple stats on Twitter activity presented here. The data in the older post was collected before THATcamp took place, the graphs below show the activity during and after the camp.

The tweets I’ve collected are also available here (my own file) and on TwapperKeeper.

Tweets over time (roughly 14th of May to 24th)

Most active users

Most @-messaged users

Most retweeted users
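
The plotting code isn’t included in this post, but roughly speaking the counts behind graphs like these can be pulled straight out of the Twapperkeeper CSV. Here’s a sketch only (the file name is made up, and I’m assuming the time column holds Unix timestamps, as in the scripts above), not the script that actually produced the graphs:

# Sketch only -- not the code used for the graphs above
tweets <- read.csv("thatcamp.csv", head=T, sep="|", quote="", fileEncoding="UTF-8")
 
# Tweets over time, bucketed by day (assumes the time column holds Unix timestamps)
days <- format(as.POSIXct(tweets$time, origin="1970-01-01"), "%Y-%m-%d")
barplot(table(days), las=2)
 
# Most active users
head(sort(table(tolower(tweets$from_user)), decreasing=T), 10)
 
# Most @-messaged users (same regular expressions as in the graph script above)
ats <- grep("^\\.?@[a-z0-9_]{1,15}", tolower(tweets$text), perl=T, value=T)
head(sort(table(gsub("^\\.?@([a-z0-9_]{1,15})[^a-z0-9_]+.*$", "\\1", ats, perl=T)), decreasing=T), 10)
 
# Most retweeted users
rts <- grep("^rt @[a-z0-9_]{1,15}", tolower(tweets$text), perl=T, value=T)
head(sort(table(gsub("^rt @([a-z0-9_]{1,15})[^a-z0-9_]+.*$", "\\1", rts, perl=T)), decreasing=T), 10)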
