HistoryCrawler is a Debian Linux virtual machine that comes pre-configured for research with digitized primary sources. It was created by William Turkel, Mary Beth Start (a PhD candidate at Western), and me under our HistoryCrawler SSHRC grant. Instructions are here.
The 7GB download of a fully installed version is available at
The password is ‘go’. You can load it using VirtualBox.
The machine is equipped primarily for textual analysis: it has Python with many libraries pre-installed (SciPy, scikit-learn, Beautiful Soup, Internet Archive, Orange), the Overview clustering engine, the R statistical programming language, SEASR’s visual programming portal, the Solr search engine, the Stanford Named Entity Recognizer, and an older version of WARC Tools.
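If you want to confirm that the Python side of the stack is in place on your copy of the VM, a quick sketch like the following will do it. The import names here are assumptions on my part (Beautiful Soup installs as `bs4`, scikit-learn as `sklearn`, the Internet Archive client as `internetarchive`), so adjust them if your versions differ.

```python
import importlib.util

# Module names for the pre-installed Python libraries listed above.
# These names are assumptions; check your own installation if one
# reports "missing" unexpectedly.
packages = ["scipy", "sklearn", "bs4", "internetarchive", "Orange"]

for name in packages:
    # find_spec returns None (rather than raising) when a top-level
    # module is not installed, so this check is safe to run anywhere.
    status = "found" if importlib.util.find_spec(name) else "missing"
    print(f"{name}: {status}")
```

Running this inside the VM should report every package as found; on a bare machine it doubles as a checklist of what you would need to install yourself.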
It aims to deal with a major problem that we face: we write programs to work with digital sources on our own computers, and they fail on another person’s machine because it lacks the ‘right’ version of a software tool. More vexingly, programs change, and older versions can be pulled down off the web. This is especially pronounced when working with web archives, as we often use tools in ways their creators did not foresee.
I am using this platform to further develop tools for analyzing web archives, and I am also distributing it to the Research Assistants on my team. If you read through my blog, you will see that I use many of these tools when analyzing web archives.