Curation Project Concept:
This semester, I’m taking a graduate seminar called Reading Remains. We’re reading essays, most of them published in the last ten years, that discuss the practice of reading – how we should read, what stance we should take in relation to a text, how much or little we should engage emotionally, whether we should be suspicious of a text and seek to expose its hidden meaning, or whether we should read generously and assume that a text is openly presenting its meaning on the surface.
One of the challenges the course presents is keeping track of the terms. Different authors use different terms to describe slightly different reading styles: suspicious reading, symptomatic reading, surface reading, reparative reading, thin-description reading, thick-description reading, paranoid reading, to name a few.
So, I decided to upload the essays into Voyant to help me discern patterns in how the authors were using these terms. (Here are three examples of the eight I used in total.)
Voyant’s word clouds gave me a broad picture of the keywords in each essay, but I quickly realized I would need an organizing strategy. How to synthesize the information Voyant presented? I decided to focus on Voyant’s ‘Words in Context’ function. I chose words from the word cloud that I deemed important and searched for them in context. Then, as I started to see how these words coalesced in different contexts to form definitions of certain styles of reading, I copied and pasted choice sentences into a spreadsheet, creating one spreadsheet for each style of reading. See examples below:
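This keyword-in-context step can be sketched in a few lines of Python. This is only a rough approximation of what a concordance tool like Voyant’s does, not its actual implementation, and the sample sentence and keyword below are invented placeholders:

```python
import re

def keyword_in_context(text, keyword, window=5):
    """Return each occurrence of `keyword` with up to `window`
    words of context on either side, concordance-style."""
    words = re.findall(r"\w+", text.lower())
    hits = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"{left} [{keyword}] {right}")
    return hits

sample = ("Surface reading assumes the text presents its meaning openly; "
          "suspicious reading seeks hidden meaning beneath the surface.")
for line in keyword_in_context(sample, "reading", window=3):
    print(line)
```

Scanning the output of a function like this for a handful of chosen keywords is essentially what I did by hand before pasting sentences into the spreadsheets.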
The next step was to make a more visually cohesive, immediately graspable depiction of the relationships between these terms. At this point I would have loved to have some skills with a program like InDesign. But I didn’t, so I drew it by hand. I drew these flow charts on two pages. The first page shows what I determined to be a series of more generous reading styles: tending towards the surface of texts, tending to trust texts, tending to want to engage texts with positive emotions, surprise, and excitement. The second page represents a more ‘rigorous’ series of critical reading styles that broadly focus on distrusting texts, exposing hidden meaning, feeling paranoid about ideology in texts, and reading texts ‘against the grain’ by posing difficult questions and drawing out a text’s inconsistencies.
Voyant provided an interesting lens for reading these texts. Having the visualizations allowed me to look at a large swath of information at a glance, without having to flip through pages. I liked the idea of using spreadsheets, flow charts, and word clouds as supplements to traditional reading.
When I told my friends about this project, I was amused by their resistance to using digital tools for reading. There is a strong ‘traditionalist’ stance, especially among English majors, according to which anything not printed on paper is inferior. I agree with that sentiment to a point – I would never read a book on a Kindle, for example. But I will read academic essays in .pdf format on my computer. And I am definitely open to using a tool like Voyant as an aid.
One of the things I liked about Voyant was that it wasn’t ‘smart’ enough to distinguish the ‘actual’ texts themselves from the titles, the words in the margins, the chapter headings, and even the names of my classmates who had uploaded the files into our shared Dropbox folder (e.g., Word Cloud #2 shows ‘clairegrandy’ as a hugely important word in one of the texts, because the phrase ‘uploaded by clairegrandy’ somehow appeared in the .pdf file). It brought to the fore the various ‘messengers’ and mediums who had delivered the text to me. I liked that.
What I would really be interested to see in the future is a program that I can upload a .pdf text into, like Voyant, but that not only calculates word frequency but also produces substantive analysis of some kind. I would like to use a program that helps me understand my cognitive processes when I read a text. Could someone write a program that could produce the same flow charts that I drew in the final analysis stage of my project? I think it could be fascinating – especially because literary critics like to think of their work as somehow outside the reach of machines. I wonder how much analysis a program could theoretically produce, and at what exact point it would find its limit. At what point would I have to take over with instinct, recursive logic, and ‘artistic’ association in order to formulate a more ‘complete’ analysis?

Perhaps a preliminary way to approach this would be to design a program like Voyant – one that produces word clouds – but instead of having the largest words in the cloud be the most frequently occurring, it would make the largest words the ones the program deemed most important, based on a set of criteria the programmer decided to implement. Then, perhaps, different readers could alter the criteria and have different words appear in the cloud based on different sets of criteria. These are intriguing possibilities. I think Voyant is a great first step towards such a program.