Humanities professor leads interdisciplinary effort
The topic of artificial intelligence (AI) is everywhere, with relentless media coverage proclaiming the start of a momentous shift in the way we learn and work.
A group of scholars at Rutgers, meanwhile, is working to cut through the “hype” to reach a deeper understanding of this new technology, educate students and the public, and evaluate the opportunities and dangers posed by AI.
Critical AI @ Rutgers is a School of Arts and Sciences (SAS) initiative that has drawn faculty from across the academic spectrum and is supported by the Center for Critical Analysis, the Center for Cognitive Science, and the SAS Division of Humanities.
“Touted as a fourth industrial revolution, AI is nonetheless poorly understood and subject to hype, misinformation, and anxiety,” says the group’s homepage.

Since forming in 2018, Critical AI @ Rutgers has been developing its distinctive presence through a range of public-facing activities. Earlier this fall, the group organized a one-day virtual conference, Critical AI Literacy in a Time of Chatbots: A Public Symposium for Educators, Writers, and Citizens. The group also recently published the first issue of the journal Critical AI, which is edited at Rutgers and published by Duke University Press.
The group is planning a two-week global humanities institute in Pretoria next July that will focus on Design Justice AI, which explores the impact of new technologies on local and Indigenous languages and cultures.
Leading the work of Critical AI @ Rutgers is Lauren M. E. Goodlad, a Distinguished Professor of English and a scholar of 19th-century literature and culture. In the interview below, she discusses the group’s priorities and the contributions that humanities scholars like herself bring to the mission of engaging with AI.
Q: Critical AI @ Rutgers might be one of the most interdisciplinary initiatives at the university, with computer scientists, biologists, philosophers, literary scholars, and many others participating. What is the unifying mission that brings all these very different faculty members together?
A: Artificial intelligence is a term that many people still associate with science fiction, though it’s now the name given to a dizzying array of real-life technologies. The disconnect between fictional and real-world “AI” creates misunderstanding and encourages hype. Humanities and social science researchers are trained to ask critical questions, such as: What does it mean to be “intelligent,” “conscious,” “human,” “artificial,” or “ethical”? AI’s enthusiasts have by and large either left these questions unasked or answered them from narrow technical perspectives.
So, the mission of Critical AI @ Rutgers is to bring the humanities and interpretive social sciences into dialogue with other AI researchers for joint explorations of a wide range of AI-related topics, including their social, pedagogical, and environmental impacts. Approaching these topics rigorously can help ensure that new technologies are designed to work for communities and serve the public good.
Bear in mind that “critical” in this context does not necessarily mean negative or even skeptical. It simply means pursuing research with the judgment and discernment necessary to pose relevant questions and explore them rigorously.
Q: You are a professor of English with a specialty in 19th-century Victorian literature. Yet you have devoted many years to studying automated technologies and are now leading this initiative. What is it about AI that you find compelling?
A: So many things! First, as a scholar of nineteenth-century British literature and culture, I’m aware that some of the statistical mainstays that drive today’s technologies were developed more than a hundred years ago. In fact, what’s today called linear regression (what readers might think of as bell curve thinking) originated in the work of Francis Galton, a cousin of Charles Darwin who was interested in “anthropometrics” and coined the term “eugenics.”
I also love interdisciplinary challenges. Bringing together folks in computer science, media studies, information science, digital humanities, and technology policy and law with their colleagues in the humanities, arts, and social sciences has been an amazing learning opportunity for me and, I hope, for others as well.
Critical AI @ Rutgers is also increasingly interested in the challenges that automated chatbots present to teachers and learners, which is why we organized the October 6 public symposium for educators, writers, and citizens.
Q: When people hear the words “artificial intelligence,” they might ordinarily think of science, computers, robots, and technology, but not necessarily literature or philosophy. What are the critical contributions that humanities scholars can bring to issues of AI?
A: To me, that question seems almost limitless, and very important as well.
AI enthusiasts like to think of the technology as “disruptive” but seldom specify the goals behind that disruption. A critical perspective encourages thinking about whose goals count and why. Instead of assuming a new technology always ushers in “progress,” a critical perspective asks: progress of what kind, for whom, and with what effects? Very often, AI systems are sold as “solutions,” but without any clear sense of the problem they are aiming to solve.
The fact is that what goes by the name of AI is usually data-driven predictive analytics. Many of these predictions are notoriously biased against particular people, places, cultures, or practices that are either excluded from or marginalized in the datasets used for training AI systems. As computer scientists often say about systems trained on flawed datasets: “garbage in/garbage out.”
So, with that in mind, think of the histories that must be told, the philosophical perspectives that need unpacking, or the anthropological or sociological questions evoked when AI marketing sets out to “revolutionize” diverse ways of living and knowing.
Think too about how we write, read, and learn.
Q: In the short time that Critical AI @ Rutgers has been together, what do you see as some of its key accomplishments?
A: There are several that I hope readers will be interested in. To help faculty get accustomed to teaching in a climate of chatbots, we prepared a document that offers guidance and resources for the university’s Office of Teaching Evaluation and Assessment Research. The document is available to all faculty and instructors, and we continue to solicit comments and suggestions.
Last month, the first issue of Critical AI was published: a special issue devoted to “Data Worlds.” We hope the journal will help spur interest among humanities scholars in researching these topics, as well as interest in interdisciplinary collaborations.
Finally, Critical AI’s work has been supported by a number of grants (including from Rutgers Global and the National Endowment for the Humanities). Right now, we’re preparing Design Justice AI, a Mellon-CHCI-sponsored Global Humanities Institute at the University of Pretoria, with partners from the Australian National University and the University of Connecticut.
The institute will meet next July to discuss how design justice principles, which emphasize the participation of local people, can help foster alternative technologies attentive to the insights of local arts, languages, and communities. This approach calls attention to the fact that most of the data used to train models for generating text and images comes from North American or European speakers of English or other dominant languages.
What could be lost from human creativity and diversity if writers or visual artists anywhere in the world came to rely on predictive models that excluded the majority of the world’s many cultures and languages? We have funding to bring more than a dozen early career scholars from across the disciplines to join these conversations.
The call for proposals for Design Justice AI is available online.