Generative AI is taking humanities research to a new level
Steven Emmanuel | August 8, 2023
Not surprisingly, the release of ChatGPT has produced a host of concerns about its potentially harmful effects on society. In higher education, commonly cited concerns center on threats to academic integrity, particularly the worry that students may soon depend on generative AI to do their thinking and writing.
In response to these challenges, many schools have either set institution-wide guidelines or encouraged faculty members to establish policies appropriate to their disciplines and courses. In some cases, this has meant restricting or even banning the use of ChatGPT. This drastic response is problematic for a variety of reasons — not least because it fails to appreciate the increasingly prominent role that AI will play in shaping the way we live, work and learn.
While it would be unwise to minimize the challenge posed by AI, it is important to recognize that large language models (LLMs) like ChatGPT are merely the next evolution in a long history of technological innovation aimed at expanding the scope of our intellectual reach. Indeed, scholarship in the humanities has long played a significant role in the development of new knowledge technologies, including AI. What we now call “digital humanities” traces back to the early days of computing in the 1940s, when the Jesuit scholar Roberto Busa used the IBM punch card machine to create his Index Thomisticus, a searchable electronic database of more than 10 million words. Busa’s pioneering work not only transformed the way scholars would study Thomas Aquinas but also helped pave the way for the development of machine translation and natural language processing.
Today, AI is taking humanities research to an entirely new level. To take but one example, researchers at Notre Dame have developed a technique that combines deep learning with LLM algorithms to produce automated transcriptions of ancient manuscripts. The benefit of this technology to scholarship is immense. At the very least, it will accelerate access to troves of ancient literary and historical texts that might otherwise have taken decades to come to light.
The value of AI for humanities scholarship is twofold. First, it gives researchers an unprecedented ability to access, collect, organize, analyze, and disseminate ideas. Second, as the Notre Dame project shows, AI can perform a kind of labor that saves time and allows researchers to focus their efforts on the important human work of analysis and interpretation.
The same is true for workplace applications of AI. As Paul LeBlanc recently wrote:
“Power skills, often associated with the humanities, will be ever more important in a world where AI does more knowledge work for us and we instead focus on human work. I might ask my AI cobot what I need to know to assess a business opportunity – say, an acquisition – and to run the analysis of their documents and budgets and forecasts for me. However, it will be in my read of the potential business partner, my sense of ways the market is shifting, my assessment of their culture, the possibilities for leveraging the newly acquired across my existing business lines – that combination of critical thinking, emotional intelligence, creativity, and intuition that is distinctly human – in which I perform the most important work.” (“The Day Our World Changed Forever,” Trusteeship, Mar/Apr 2023)
This optimistic vision for the future of AI depends, of course, on our graduates having acquired the kind of moral and intellectual skills that are developed most fully through the study of great works of philosophy, literature, and the arts.
Viewed in this light, the real challenge posed by AI is not the technology per se, but rather that it arrives at a time when the humanities are in decline. In recent years, decreasing numbers of majors and flagging course enrollments have led to the downsizing or closure of core humanities programs across the nation. Indeed, we are witnessing a fundamental shift in our cultural understanding of the purpose of higher education. The traditional liberal arts values of intellectual curiosity and breadth of knowledge have been replaced by a narrow focus on the technical skills and training considered most useful in the job market.
Rather than challenging the cultural attitude that devalues the humanities, many institutions have leaned into it. Under pressure to compete for a diminishing pool of students, liberal arts institutions have sought to make themselves more attractive by expanding their STEM and pre-professional programs while at the same time disinvesting in areas of the curriculum that students perceive to be at best a luxury, and at worst a waste of time.
At its core, study in the humanities helps students develop the capacity to empathize with others, to wonder and think for themselves, and to inquire deeply into questions about meaning, truth, and value. These are abilities our graduates must have if they are to live and flourish in a world increasingly shaped by AI and autonomous systems.
The academic concerns currently being raised about AI are legitimate. However, the temptation to misuse this technology will be greatest in an environment where a utilitarian attitude toward education prevails. Whether our students’ ability to think and write deteriorates because they have access to technologies like ChatGPT will depend on the message we send about the value and purpose of higher education. At this critical juncture, we must commit ourselves to helping students understand and embrace the aspect of the liberal arts that focuses on cultivating moral and intellectual growth and a deeper appreciation for what makes us human.
In a somewhat ironic twist, ChatGPT just might be the wake-up call that saves the humanities.
Steven M. Emmanuel, Ph.D., is a professor of philosophy at the Susan S. Goode School of Arts and Humanities at Virginia Wesleyan University.