I led the web scraping session at the Getting Started in Digital History workshop. I’ve tried to make it as accessible as possible online: the slides, links, and other materials are all available. If you’re curious about how to grab data from the web, what scraping resources exist, and how to work with social media, please do check it out.
I gave a paper on collaborative digital history, examining how our team works and what has made it successful. I began by discussing our research project’s objectives (web archives and historical research), then offered general thoughts on teamwork and how we’ve achieved success with two of our projects. It was a round-table discussion, so we had an incredible conversation afterwards.
I had the opportunity to publish “The Problem of History in the Age of Abundance” in the December 16th issue of the Chronicle Review. It is now available online, albeit behind a paywall (some free links have circulated on social media). You can read it here now for a limited time. Writing for the Chronicle was a great experience: fantastic editors, careful copyediting, and thoughtful engagement with my work.
This, in a nutshell, is the core argument of a much larger manuscript I’m working on, so any general thoughts or comments are always appreciated.
The use of computational methods for the exploration and analysis of web archival sources is emerging in disciplines such as the digital humanities. This raises urgent questions about how research projects process web archival material with computational methods to construct their findings. This paper aims to enable web archive scholars to document their practices systematically and so improve the transparency of their methods. We adopt the Research Object framework to characterize three case studies that use computational methods to analyze web archives within digital history research. We then discuss how the framework can support the characterization of research methods and serve as a basis for discussions of issues such as reuse and provenance. The results suggest that the framework provides an effective conceptual perspective for describing and analyzing, at a high level, the computational methods used in web archive research, and for making transparent the choices made in the process. Documenting the research process contributes to a better understanding of the findings and their provenance, and to the possible reuse of data, methods, and workflows.
I had the opportunity to present “Studying the Web in the Shadow of Uncle Sam,” a paper that Tom Smyth (Library and Archives Canada) and I proposed to the National Webs workshop that Niels Brügger and Ditte Laursen hosted in Aarhus, Denmark. Our paper abstract is below, as are the slides that I presented.
Hopefully more cool things can be announced soon (the paper should appear in an edited collection, hopefully in early 2018; our drafts are due in March).
What is the Canadian Web? While Canada does have the .ca top-level domain, this does not capture Canada: universities, governmental institutions, and some companies use the .ca TLD, but many other corporations, small businesses, bloggers, and others gravitate towards .com, .org, or .net (a question we briefly explore in our paper). In short, the .ca domain is a relatively niche player, and analyses based on the top-level domain alone would be skewed towards certain kinds of content providers. This question presents a considerable challenge for national libraries and for researchers working from a national perspective on an inherently global network.
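To make that skew concrete, here is a small, hypothetical sketch of tallying top-level domains in a seed list using only Python’s standard library. The seed URLs are invented for illustration; they are not from our actual collections.

```python
# Hypothetical sketch: tally top-level domains in a seed list to see how
# much "Canadian" content sits outside .ca. All seed URLs are invented.
from collections import Counter
from urllib.parse import urlparse

seeds = [
    "http://www.canada.ca/",
    "http://uwaterloo.ca/",
    "http://cbc.ca/news",
    "http://montrealgazette.com/",
    "http://canadianblogger.net/post/1",
    "http://davidsuzuki.org/",
]

def tld(url):
    """Return the last label of the URL's hostname (e.g. 'ca', 'com')."""
    return urlparse(url).hostname.rsplit(".", 1)[-1]

counts = Counter(tld(u) for u in seeds)
print(counts["ca"], "of", sum(counts.values()), "seeds use .ca")
```

Even in this toy list, half the “Canadian” seeds live outside .ca, which is exactly why a domain-only crawl would miss so much.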
We approach this question in three ways in this paper. First, we present the state of the Canadian Web: drawing on initial work by Library and Archives Canada and twenty-five Archive-It partners in Canada, we discuss what it means to study the Canadian Web. Second, we explore what work has been done to date: how Library and Archives Canada and its Canadian partners have embraced the challenge, and what they have been collecting. How do librarians and archivists select their seeds in this context, and does the result approach a national web? This collection development strategy is an interim one, beginning to lay the foundation for greater domain-crawling capacity: “thematic web collections” steward parts of the Canadian Web, with a recognition of their stopgap nature. Finally, we use the piece to chart paths forward towards a domain crawl of the “Canadian Web,” highlighting the Web Archives for Longitudinal Knowledge (WALK) project, which is beginning to integrate disparate web archives from across the country.
Part of the problem with warcbase is that you need a decently powerful laptop or desktop to run meaningful analysis on even small-to-medium-sized collections (although it does run on a Raspberry Pi). While our team has access to a cluster, that’s not the case for everybody. What if you had a few WARC files and wanted to run some analysis on them with warcbase, without working through our documented yet sometimes challenging installation process? What if you wanted to deploy it on a cloud provider such as AWS? Maybe you have collections in an Amazon S3 bucket?
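For readers who haven’t worked with WARC files before: they are plain records with human-readable headers (standardized as ISO 28500), which is what tools like warcbase parse at scale. The sketch below, which is illustrative only and not warcbase’s actual code, pulls target URIs out of an uncompressed WARC stream’s headers using just Python’s standard library; real tooling also handles gzipped records, payloads, and many edge cases.

```python
# Illustrative sketch (not warcbase): read WARC record headers from an
# uncompressed stream and collect the target URI of each response record.
import io

def warc_target_uris(stream):
    """Return WARC-Target-URI values from 'response' records."""
    uris = []
    record_type, uri = None, None
    for raw in stream:
        line = raw.strip()
        if line.startswith(b"WARC/"):           # a new record header begins
            record_type, uri = None, None
        elif line.startswith(b"WARC-Type:"):
            record_type = line.split(b":", 1)[1].strip()
        elif line.startswith(b"WARC-Target-URI:"):
            uri = line.split(b":", 1)[1].strip().decode("utf-8")
        elif not line and record_type == b"response" and uri:
            uris.append(uri)                    # blank line ends the headers
            record_type, uri = None, None
    return uris

# A tiny in-memory sample standing in for a real WARC file.
sample = io.BytesIO(
    b"WARC/1.0\r\n"
    b"WARC-Type: response\r\n"
    b"WARC-Target-URI: http://example.ca/\r\n"
    b"\r\n"
    b"...payload...\r\n"
)
uris = warc_target_uris(sample)
print(uris)  # ['http://example.ca/']
```

In practice you would point warcbase (or a similar toolkit) at a directory or S3 bucket of gzipped WARCs rather than hand-rolling a parser like this, but the record structure it is working with is no more exotic than the above.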
Note that this is all part of our exploration of ways to eventually bring warcbase to a wider audience. It’s still command-line based, unfortunately, and requires knowledge of the AWS stack. But for technically inclined people or developers with web archival collections, we’re moving closer to helping you work with your collections.
One of the bottlenecks in our work has been processing time, and having a dedicated Spark cluster will let us engage in exploratory research as well as the sustained processing that makes our research projects a reality.