
Text Mining For Historians

Scholarship on the transformation of gender norms and women’s emancipation in the twentieth century has largely relied on the fundamental tools of historical work: archival research and close reading of primary sources. But in an increasingly digital world, where massive archives of newspapers, magazines, and even parliamentary debates can be found online, there are new opportunities to add to the historian’s toolbox and expand our understanding of how women’s rights have evolved over time.

We are pleased to offer scholars and students of history access to two free online courses providing instruction in digital humanities methods and techniques. The first course introduces historians to the methods of text mining. Text mining offers a way to read large bodies of text (or corpora) digitally and gives historians new ways to analyse them. For example, you could trace how often certain keywords (like ‘feminist’ or ‘suffragist’) are used in a particular newspaper, and see the terms and word clusters that are associated with them. Each learning module provides a video explaining the content and some activities to help you develop your skills. It also shows you how you can integrate text mining into your historical research.
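To give a concrete (if simplified) sense of what this involves, the short Python sketch below counts two such keywords across a folder of plain-text newspaper issues. It is our own illustration rather than part of the course materials, and the folder name and file layout are assumptions.

```python
from collections import Counter
from pathlib import Path
import re

# Hypothetical folder of plain-text newspaper issues, one file per issue.
corpus_dir = Path("newspaper_corpus")
keywords = ["feminist", "suffragist"]

counts = Counter()
for txt_file in corpus_dir.glob("*.txt"):
    # Lower-case the text and split it into rough word tokens.
    tokens = re.findall(r"[a-z']+", txt_file.read_text(encoding="utf-8").lower())
    for word in keywords:
        counts[word] += tokens.count(word)

print(counts)
```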

These two courses were commissioned as part of the research of the International Standing Working Group on Medialization and Empowerment at the German Historical Institute London. They are designed to build skills in the digital humanities, and to help course-takers develop the confidence and ability to use text mining – and the quantitative techniques often required to interpret its results – in their own historical research. The debate on computing in History, and indeed on the role of quantitative analysis in the discipline, is a long and involved one stretching back to the 1960s. While the popularity of both declined as History’s status as a humanities subject became entrenched, the data age and the growing influence of the social sciences have placed both issues prominently back on the agenda. These courses do not attempt to create acolytes, but to reflect critically on the scope, power, and limitations of these methodologies.

The course leaders are Dr. Luke Blaxill of the University of Oxford, and Dr. Kaspar Beelen of the Turing Institute. Responsibility for the content lies with the developers.


Unit 1: Introduction to Text Mining For Historians

Key Readings

  • Luke Blaxill, The War of Words: The Language of British Elections 1880-1910 (Woodbridge, 2020), Chapters 1 and 2 (‘Introduction’ and ‘On Method’).
  • Daniel Greenstein, A Historian’s Guide to Computing (Oxford, 1994). An old book, but good on historians and quantification in general.
  • Jo Guldi and David Armitage, The History Manifesto (Cambridge, 2015), Chapter 4.
  • Nan Z. Da, ‘The Computational Case against Computational Literary Studies’, Critical Inquiry (2019).

Exercises

Alone, or with a friend, make a list of five things that you think text mining might be able to help you with in your own research. What might it help you do better than you already do, and what new possibilities might it bring? You may want to consider the key readings for ideas.

Then, make a list of five ways in which you are sceptical about text mining as a tool for the sort of historical research that interests you.


Unit 2: Data: Selection, Collection, and Structure

Key Readings

  • Martin Wynne, ‘Creating a Corpus’ and ‘Developing Linguistic Corpora: A Guide to Good Practice’ (2005). Link

Then there are dedicated chapters on corpus creation in:

  • Tony McEnery and Andrew Hardie, Corpus Linguistics: Method, Theory and Practice (Cambridge, 2011).
  • Magali Paquot and Stefan Th. Gries, A Practical Handbook of Corpus Linguistics (Springer, 2020).
  • Svenja Adolphs, Introducing Electronic Text Analysis: A Practical Guide for Language and Literary Studies (Trowbridge, 2006).

Exercises

Alone, or with a friend, consider what sort of corpus might support your research. What primary source texts are there, and what sort of research questions would be interesting to ask with them? In particular, consider:

  1. How would you structure such a corpus with metadata? What comparative groups are there for you to work with? Would you need to add metadata with markup of some kind? (A minimal sketch of one way to record such metadata follows this list.)
  2. Where are the sources located? Are they available digitally? If not, could you digitise them yourself?
  3. Is your data structured, semi-structured, or unstructured?
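As a minimal sketch of what item 1 might look like in practice, the example below records metadata in a small CSV table and groups the texts into comparative subcorpora; the filenames and fields are hypothetical and not drawn from the course. Structured metadata of this kind sits alongside the unstructured text files themselves.

```python
import csv
from io import StringIO

# Hypothetical metadata table: one row per text file, with the comparative
# groups (year, party, speaker sex) you might want to filter on later.
# In practice this would live in a separate metadata.csv file.
metadata_csv = """filename,year,party,speaker_sex
speech_001.txt,1966,Labour,male
speech_002.txt,1966,Conservative,female
speech_003.txt,1987,Liberal,male
"""

rows = list(csv.DictReader(StringIO(metadata_csv)))

# Group filenames by party to form comparative subcorpora.
by_party = {}
for row in rows:
    by_party.setdefault(row["party"], []).append(row["filename"])
print(by_party)
```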

Unit 3: Basic Analysis with AntConc Part 1

Key Readings

  • Eva Andersen and Jolien Gijbels, AntConc, Historians and their Diverging Research Methods (2020). Link
  • Svenja Adolphs, Introducing Electronic Text Analysis: A Practical Guide for Language and Literary Studies (Trowbridge, 2006). A concise book that remains excellent after 15 years – and a recommended starting point.
  • John Sinclair, Corpus, Concordance, Collocation (Oxford, 1991). A classic pioneering text that is still referenced a great deal. The two textbooks below are modern evolutions of Sinclair.
  • Tony McEnery and Andrew Hardie, Corpus Linguistics: Method, Theory and Practice (Cambridge, 2011).
  • Magali Paquot and Stefan Th. Gries, A Practical Handbook of Corpus Linguistics (Springer, 2020).

Exercises

Alone, or with a friend, think back to your reflections on the exercises for Units 1 and 2. By now you should have an idea of what sort of corpus might be useful to you, how you might structure it, and how you might collect data and add metadata to it. Think about the main techniques outlined here. Which of these might, in theory, be useful? Which less so? How might they complement the methods you already use?


Unit 4: Basic Analysis with AntConc Part 2

Key Readings

  • Laurence Anthony, Basics with AntConc
  • Svenja Adolphs, Introducing Electronic Text Analysis: A Practical Guide for Language and Literary Studies (Trowbridge, 2006). A concise book that remains excellent after 15 years – and a recommended starting point.
  • John Sinclair, Corpus, Concordance, Collocation (Oxford, 1991). A classic pioneering text that is still referenced a great deal. The two textbooks below are modern evolutions of Sinclair.
  • Tony McEnery and Andrew Hardie, Corpus Linguistics: Method, Theory and Practice (Cambridge, 2011).
  • Magali Paquot and Stefan Th. Gries, A Practical Handbook of Corpus Linguistics (Springer, 2020).

Exercises

It’s time to get our hands dirty with a practical exercise in AntConc! Please see the following PDF document for a full outline. You will need to download AntConc, as well as the exercise corpora – specifically the manifestos corpus – from GitHub.


Unit 5: (More) Advanced Analysis: Python Part 1

This unit is composed of three lectures that will introduce you to Python.

Lecture A: Introduction

This session runs in an interactive notebook on MyBinder. An overview of all interactive materials is available here.

We start with a brief introduction to the aims and principles of this course: why should a historian bother to learn a programming language for analysing textual and other types of data? Why Python (notebooks) in particular? We also discuss what to expect from this course (and what not) and give an overview of the skills you will obtain.

Lecture B: Basic Python: A Gentle Initiation

This session runs in an interactive notebook on MyBinder. An overview of all interactive materials is available here.

This notebook starts with a gentle introduction to the basic elements of the Python syntax. We discuss how to create and manipulate variables, and demonstrate common operations. Some topics are more extensively discussed in ‘break out’ notebooks or in external documentation.
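To give a flavour of what the notebook covers (the values below are our own illustration, not taken from the notebook), creating variables and applying a few common operations looks like this:

```python
# Creating variables and applying a few common operations.
year = 1918
title = "Representation of the People Act"

decade = (year // 10) * 10    # integer division gives 1910
label = f"{title} ({year})"   # string formatting
print(decade)
print(label.upper())          # a string method: upper-case the label
```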

Lecture C: Text and String Methods

This session runs in an interactive notebook on MyBinder. An overview of all interactive materials is available here.

Finally, we move on from more fundamental syntax to working with actual text data. In this notebook, we introduce ‘string methods’, which are Python tools for processing and manipulating text. We also demonstrate how to open and read text files (at scale).
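A minimal sketch of this kind of workflow is shown below; the directory name and search term are our own assumptions, not the notebook’s.

```python
from pathlib import Path

# Hypothetical directory of plain-text files; adjust the path to your corpus.
corpus_dir = Path("corpus")

for txt_file in sorted(corpus_dir.glob("*.txt")):
    text = txt_file.read_text(encoding="utf-8")
    # Common string methods: normalise case, strip whitespace,
    # test for a substring, and split into rough word tokens.
    text = text.lower().strip()
    if "workhouse" in text:
        tokens = text.split()
        print(txt_file.name, len(tokens), "tokens; mentions 'workhouse'")
```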

Key Readings

  • Eric Matthes, Python Crash Course: A Hands-On, Project-Based Introduction to Programming (No Starch Press, 2019).
  • Nick Montfort, Exploratory Programming for the Arts and Humanities (MIT Press, 2021).
  • Al Sweigart, Automate the Boring Stuff with Python: Practical Programming for Total Beginners (No Starch Press, 2019).
  • Peter Wentworth, Jeffrey Elkner, Allen B. Downey, and Chris Meyer, “How to Think like a Computer Scientist: Learning with Python 3” (2012).


Exercises
Integrated into the course (please see above)


Unit 6: Analysis with Python Part 2

This unit is composed of four lectures which will cover more advanced text mining analysis with Python.

Lecture A: Processing Texts

This session runs in an interactive notebook on MyBinder. An overview of all interactive materials is available here.

This lesson introduces core Python objects, such as lists and dictionaries, that you will need when processing text files. We discuss the application of Natural Language Processing tools to historical documents. More precisely, we show how to use NLTK and spaCy to split a text into tokens and to analyse the grammatical structure of a sentence with part-of-speech tagging.
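The sketch below (our own, not taken from the notebook) shows the kind of pipeline involved, assuming NLTK and spaCy are installed along with their English language data.

```python
import nltk
import spacy

# One-off downloads may be needed first, e.g.:
#   nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
#   python -m spacy download en_core_web_sm

sentence = "The Medical Officer of Health reported an outbreak of typhoid."

# NLTK: tokenise, then tag each token with its part of speech.
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))

# spaCy: the pipeline tokenises and tags in a single pass.
nlp = spacy.load("en_core_web_sm")
doc = nlp(sentence)
print([(token.text, token.pos_) for token in doc])
```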

Lecture B: Corpus Selection

This session runs in an interactive notebook on MyBinder. An overview of all interactive materials is available here.

In this notebook, we introduce techniques for selecting relevant information from large data sets. We discuss how to filter and select documents based on their metadata as well as their textual content. The strategies covered here allow you to select documents that are relevant to your research question and build question-specific subcorpora.
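As an illustration of the idea (not the notebook’s own code), the pandas sketch below filters a hypothetical metadata table by borough, date range, and a keyword in the text column; the file and column names are assumptions.

```python
import pandas as pd

# Hypothetical table: one row per document, with a 'text' column plus
# metadata fields such as 'year' and 'borough'.
df = pd.read_csv("moh_reports.csv")

# Select by metadata: interwar reports from one borough...
interwar_poplar = df[(df["borough"] == "Poplar") & df["year"].between(1919, 1939)]

# ...and by textual content: reports that mention tuberculosis.
tb_reports = interwar_poplar[
    interwar_poplar["text"].str.contains("tuberculosis", case=False, na=False)
]
print(len(tb_reports), "documents in the subcorpus")
```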

Lecture C: Corpus Exploration

This session runs in an interactive notebook on MyBinder. An overview of all interactive materials is available here.

After building a subcorpus, you need tools to explore and analyse the texts meaningfully. We focus on a wide range of tools provided by the Natural Language Toolkit, such as concordance or Keyword in Context (KWIC), collocation analysis and feature selection. We use reports written by Victorian Medical Officers of Health as a case study.
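By way of illustration (our own sketch, with a hypothetical filename), NLTK’s concordance and collocation tools can be used like this:

```python
from pathlib import Path
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Hypothetical file holding one MOH report (or a whole subcorpus joined
# together); nltk.download("punkt") may be needed for the tokeniser.
raw = Path("moh_report_1925.txt").read_text(encoding="utf-8").lower()
tokens = nltk.word_tokenize(raw)

# Keyword in Context (KWIC): print occurrences of 'sanitary' with their context.
nltk.Text(tokens).concordance("sanitary", width=60, lines=10)

# Collocations: word pairs that co-occur more often than chance, ranked by PMI.
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(3)  # ignore pairs seen fewer than 3 times
print(finder.nbest(BigramAssocMeasures().pmi, 10))
```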

Lecture D: Trends over Time

This session runs in an interactive notebook on MyBinder. An overview of all interactive materials is available here.

The last notebook in the text mining series focuses on studying discursive trends over time. The goal of this notebook is to understand the changing content of British political manifestos.
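The sketch below gives one simple way to approach such a question: it computes the relative frequency of a single keyword per election year, assuming one plain-text manifesto per file named by party and year. The layout and keyword are our own assumptions, not the course’s.

```python
from collections import Counter
from pathlib import Path
import re

# Assumed layout: one plain-text manifesto per file, named e.g.
# 'labour_1966.txt', 'conservative_1983.txt'.
corpus_dir = Path("manifestos")
keyword = "unemployment"

hits = Counter()
totals = Counter()
for path in corpus_dir.glob("*.txt"):
    year = int(path.stem.split("_")[-1])
    tokens = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
    totals[year] += len(tokens)
    hits[year] += tokens.count(keyword)

# Relative frequency per 1,000 words, so longer manifestos do not dominate.
for year in sorted(hits):
    print(year, round(1000 * hits[year] / totals[year], 2))
```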

Further Readings

  • Folgert Karsdorp, Mike Kestemont, and Allen Riddell, Humanities Data Analysis: Case Studies with Python (Princeton University Press, 2021).
  • Brian Kokensparger, Guide to Programming for the Digital Humanities: Lessons for Introductory Python (Springer, 2018).
  • Edward Loper and Steven Bird, “NLTK: The Natural Language Toolkit”, arXiv preprint cs/0205028 (2002).
  • Mark Lutz, Learning Python: Powerful Object-Oriented Programming (O’Reilly Media Inc., 2013).
  • Alex Martelli, Anna Ravenscroft, and David Ascher, Python Cookbook (O’Reilly Media Inc., 2005).
  • Jacob Perkins, Python 3 Text Processing with NLTK 3 Cookbook (Packt Publishing Ltd, 2014).
  • Matthew J. Salganik, Bit by Bit: Social Research in the Digital Age (Princeton University Press, 2019).
  • Jake VanderPlas, Python Data Science Handbook: Essential Tools for Working with Data (O’Reilly Media Inc., 2016).

Exercises

Integrated into the course (please see above)


Downloads

The five ‘Exercise Corpora’ below have been compiled and segmented by us from publicly available textual historical archives. They support the exercises for both courses and can be downloaded in full here.

Medical Officers of Health in London, 1848-1972
Medical Officers of Health (MOH) were appointed to investigate the health of the population, sanitary conditions, disease, housing, and clinical services in each London borough. Our corpus enables comparisons between interwar and Victorian MOH, as well as between a wealthy borough (Westminster) and a poor one (Poplar).

British House of Commons Debates, 1945-2014
Taken from the official record of Parliament (Hansard). Our corpus enables comparisons between parties, ministers, and male vs. female MPs. The debates we have selected concern the issue of abortion law.


Heritage Made Digital Newspapers, 1800-1880
Nineteenth-century British articles from numerous newspapers. We have created two subcorpora: all of the articles containing the word ‘slavery’ and all of the articles containing ‘workhouse’, each subdivided by decades that saw key legislation and campaigns on these issues.

British Election Manifestos, 1966-2019
The printed national manifestos of the Liberal, Labour, and Conservative parties in every general election from 1966 to the present. We have set this corpus up to enable comparisons between parties, and between two key decades: the 1960s (1964, 1966, 1970 elections) and the 1980s (1979, 1983, 1987 elections).

The Times Headlines from the 1960s onwards
The Times is often credited as being Britain’s national ‘newspaper of record’. This corpus is set up as a CSV file and contains every headline from every day.
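As a quick example of how such a file might be explored (the filename and column names here are assumptions; check the downloaded file and adjust), pandas can count matching headlines per year:

```python
import pandas as pd

# Assumed column names: 'date' and 'headline'; adjust to the actual file.
df = pd.read_csv("times_headlines.csv", parse_dates=["date"])

# Count headlines per year that mention 'equal pay'.
mentions = df[df["headline"].str.contains("equal pay", case=False, na=False)]
print(mentions.groupby(mentions["date"].dt.year).size())
```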




Max Weber Stiftung

The Max Weber Foundation promotes global research, focused on the areas of social sciences, cultural studies and the humanities. Our research is conducted at ten institutes in various countries across the globe, each with different and independent fields of focus. Through our globally operating institutes, we are able to contribute to communication and networking between Germany and our host countries or regions. By promoting academic dialogue and bringing together academic and non-academic staff from several countries and cultural backgrounds, the Max Weber Foundation strengthens the internationalization of research.
