Friday, October 25, 2013

DH Data Talk

Last night I was on a panel organized by Duke Libraries' Digital Scholarship group. The panelists each gave some brief remarks and then we had what I thought was a really productive and interesting discussion. The following are my own remarks, with links to my slides (they open in a new tab). In my notes, //slide// means click forward (not always to a new slide, maybe just a fragment).
This is me, and I work //slide// for this outfit. I'm going to talk just a little about an old project and a new one, and not really give any details about either, but surface a couple of problems that I hope will be fodder for discussion. //slide// The old project is Papyri.info, which publishes all kinds of data about ancient documents mostly written in ink on papyrus. The new one, Integrating Digital Epigraphies (IDEs), is about doing much the same thing for ancient documents mostly incised on stone.
If I had to characterize (most of) the work I'm doing right now, I'd say I'm working on detecting and making machine-actionable the scholarly links and networks embedded in a variety of related projects, with data sources including plain text, XML, relational databases, web services, and images. These encompass critical editions of texts (often in large corpora), bibliography, citations in books and articles, images posted on Flickr, and databases of texts. You could think of what I'm doing as recognizing patterns and then converting those into actual links: building a scaffold for the digital representation of networks of scholarship. This is hard work. //slide// It's hard because, while superficial patterns are easy to detect, //slide// without access to the system of thought underlying them (and computers can't get at that yet, maybe never), the patterns are really just proxies kicked up by the underlying system. They don't themselves have meaning, but they're all you have to hold on to. //slide// Our brains (with some prior training) are very good at navigating this kind of mess, but digital systems require explicit instructions //slide// (though granted, you can sometimes use machine learning techniques to generate those).
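To make "recognizing patterns and converting them into links" concrete, here's a minimal sketch in Python. The citation pattern and the URI template are my own illustrative assumptions (loosely modeled on the shape of papyri.info's document URIs), not the project's actual code:

    import re

    # Illustrative toy pattern for citations like "P.Oxy. 3.412"
    # (series, volume, document). Real citation formats are much messier.
    CITATION = re.compile(r"(?P<series>P\.[A-Za-z]+\.)\s*(?P<vol>\d+)\.(?P<doc>\d+)")

    def citation_to_uri(text):
        """Turn a surface citation pattern into a resolvable link (assumed URI shape)."""
        m = CITATION.search(text)
        if m is None:
            return None  # no recognizable pattern; a human would have to decide
        series = m.group("series").lower().rstrip(".")
        return "http://papyri.info/ddbdp/%s;%s;%s" % (
            series, m.group("vol"), m.group("doc"))

    print(citation_to_uri("See P.Oxy. 3.412 for a parallel."))
    # http://papyri.info/ddbdp/p.oxy;3;412

The point of the sketch is its fragility: the regular expression sees only the surface pattern, not the system of citation conventions behind it, which is exactly the gap I mean.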
When I say I'm working on making scholarly networks machine actionable, I'm talking about encoding as digital relations the graph of references embedded in these books, articles, and corpora, and in the metadata of digital images. There are various ways one might do this, and the one we're most deeply into right now is called //slide// RDF. RDF models knowledge as a set of simple statements in the form Subject, Predicate, Object. //slide// So A cites B, for example. RDF is a web technology, so all three of these elements may be URIs that you could open in a web browser, //slide// and if you use URIs in RDF, then the object of one statement can be the subject of another, and so on. //slide// So you can use it to model logical chains of knowledge. Now notice that these statements are axioms. You can't qualify them, at least not in a fine-grained way. So this works great in a closed system (papyri.info), where we get to decide what the facts are; it's going to be much more problematic in IDEs, where we'll be coordinating data from at least half a dozen partners. Partners who may not agree on everything. //slide// What I've got is the same problem from a different angle: I need to model a big pile of opinion, but all I have to do it with are facts.
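Here's what the triple model looks like in practice, as a short sketch using the Python rdflib library. The edition URIs are invented for the example, and the "cites" predicate is borrowed from the CiTO vocabulary; I'm not claiming this is papyri.info's actual data model:

    from rdflib import Graph, Namespace, URIRef

    CITO = Namespace("http://purl.org/spar/cito/")

    g = Graph()
    a = URIRef("http://example.org/editions/A")
    b = URIRef("http://example.org/editions/B")
    c = URIRef("http://example.org/editions/C")

    # Subject, Predicate, Object: "A cites B."
    g.add((a, CITO.cites, b))
    # The object of one statement can be the subject of another...
    g.add((b, CITO.cites, c))

    # ...so you can follow chains of statements through the graph.
    for s, p, o in g.triples((None, CITO.cites, None)):
        print(s, "cites", o)

Notice that nothing in the graph records how confident anyone is about either statement, or who asserted it. Each triple just is. That's the axiom problem.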
Part of the solution to these problems has to be about learning how to make the insertion of machine-actionable links and facts (or at least assertions) a side effect of the normal processes of resource creation and curation. But it also has to be about building systems that can cope with ambiguity and opinion.
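One standard RDF tactic for coping with opinion (a sketch of one possibility, not a description of what IDEs will actually do) is to keep each partner's assertions in a named graph, so that who-said-what survives even when the partners disagree. All the URIs here are invented for the example:

    from rdflib import Dataset, Namespace, URIRef

    CITO = Namespace("http://purl.org/spar/cito/")

    ds = Dataset()
    a = URIRef("http://example.org/editions/A")
    b = URIRef("http://example.org/editions/B")

    # Each partner's assertions live in their own named graph, so the
    # "fact" is recorded along with who is asserting it.
    partner1 = ds.graph(URIRef("http://example.org/graphs/partner1"))
    partner1.add((a, CITO.cites, b))

    partner2 = ds.graph(URIRef("http://example.org/graphs/partner2"))
    partner2.add((a, CITO.citesAsRelated, b))  # a different, weaker claim

    # Who asserts what about A?
    for g in ds.contexts():
        for s, p, o in g.triples((a, None, None)):
            print(g.identifier, "asserts:", p)

The statements themselves stay simple facts; the ambiguity and the disagreement live one level up, in the attribution.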