By: Virginia Conn
Lauren M.E. Goodlad, professor of English and Comparative Literature at Rutgers University, asked the following provocative question at Comp Lit’s inaugural Spring 2020 Brown Bag Lunch: how did a study of genre and the longue durée end
up addressing artificial intelligence in the present day? She began by suggesting that a pronouncement from billionaire investor Mark Cuban typifies today’s zeitgeist: “AI is going to change everything. There’s nothing that AI won’t impact.” Whether AI so conceived is just the latest speculative bubble—or rather the end of life as we know it—is a question Goodlad ponders at the close of her essay, “Genres that Matter: The Long Afterlives of Nineteenth-Century Fiction,” which is forthcoming this fall and upon which her talk was based.
To historicize and situate the impact of artificial intelligence, machine learning, and big data on the question of genre, Goodlad referred to four touchstones for her work: Fernand Braudel’s claim that the work of the longue durée historiographer is to cultivate awareness of temporal plurality; Franco Moretti’s effort to theorize genre by drawing on evolutionary biology, conceiving genres as coherent types that vie with one another in a bid for survival of the fittest; Wai Chee Dimock’s claim for deep time, which conceives generic transtemporality by analogy to fractal geometry; and Ted Underwood’s move toward genre analysis through machine-generated scatterplots. Notably, all of these longue durée analyses conceive genre as a relatively fixed and transparent object: for Moretti, a macro-category for tracing the DNA of dominant types and cycles; for Dimock, a neutral vehicle for fractal forays into the past; and for Underwood, a fundamentally lexical object ideal for statistical analysis. As such, none of them offers the nuanced genealogy or transtemporal comparativism that might encourage the rapprochement between formal and historical methods that is the focus of Goodlad’s work.
She recuperates the idea that genre does not exist in a vacuum: there can be no “horizon” without circulation, nor “expectation” without perceiving subjects. Genre, therefore, is as much about the legibility of conventions for particular readers as about authors’ intentions or critics’ taxonomies. Her point during the talk was not so much that these works create problems for distant reading as that distant readers who single out one scale of analysis create problems for themselves. A strong genre theory, much like an influential genre itself, operates at multiple scales. Genres so theorized operate not only supratextually (as the articulation of types) and intertextually (as in the influence of old forms on new), but also intratextually.
What does this ultimately mean for literary and digital humanists? Goodlad suggests that DH modelers clarify their statistical assumptions: not because quantitative tools are necessarily untrustworthy, but because DH computationalists cannot be both conventional data scientists when arguing for robust results and postmodern experimentalists when rejecting the mantle of statistics, empiricism, or positivism. Plainly put: overstated claims on the part of computational theorists play into the hype and confusion surrounding AI and big data, enhancing the tremendous economic and cultural power of those who already benefit most. As humanist researchers and teachers in the 21st century, we have more important things to do than protect tech bros from their own hubris.