Abstract

In lieu of an abstract, here is a brief excerpt of the content:

Reviewed by: Luke Munn

Baron, Naomi. Who Wrote This?: How AI and the Lure of Efficiency Threaten Human Writing. Stanford University Press, 2023. 344 pp.

Who Wrote This? is Naomi Baron’s latest book exploring the emergence of AI language models and their potential implications for writing. A linguist, educator, and emeritus professor at American University, Baron should be well placed to tease out the nuances of language, showcase the creativity of the human literary canon across history, and deliver a set of findings and recommendations informed by deep experience. Indeed, since the release of ChatGPT in November 2022, there has been a hunger for this kind of expertise and guidance both inside and outside academia, as the deluge of debates, panels, and forums in the past year has attested. This is the perfect storm, one that should be harnessed by a linguist with two decades of work on digital writing and reading into a powerful book. Instead, the reader must wade through a morass of truisms, historical anecdotes, and wiki-like summaries of research studies, interspersed with surface-level ponderings.

The problems begin almost immediately with the structure of the book itself. While the book is ostensibly divided into chapters, each chapter is atomized into dozens of subsections, some no more than a paragraph or two in length. In this whirlwind tour, the reader is swiftly confronted with a jumble of quotes from journalists, jokes about language, historical events, a “layperson’s AI roadmap,” and recent news stories, flitting from one to the next with barely a pause. The “AI Applications” section takes this same piecemeal approach, laying out the possible uses of artificial intelligence and then painstakingly stepping through each one.
Each subsection features a few paragraphs that regurgitate material you might find, more cohesively and in more depth, in a Wikipedia article. The material on the history of artificial intelligence is handled simplistically and uncritically, skipping across a handful of heroic figures like Yann LeCun, Yoshua Bengio, and Fei-Fei Li.

While anecdotes and quotes abound, an argument is much harder to locate. On the one hand, Baron seems to uncritically echo the AI hype that has dominated news headlines in the past year. “We’ve come a long way,” she opines, “we have vacuum cleaning robots, drones, and self-driving cars” (59). The reader learns that AI can generate poetry, correct grammar, play games and beat masters (AlphaGo), and even identify cancer (machine learning for mammograms). These use cases, and the ambitious claims that accompany them, are never really challenged in any sustained theoretical or methodological way. On the other hand, Baron seems highly conservative when it comes to the proper domain of “the human” and of writing in particular. “AI isn’t up for self-expression or thinking” (14), she says in passing, dismissing the recent revolutionary improvements in large language models that she has just hyped up.

The possibility of machinic intelligence and creativity is, in fact, a fascinating if messy one. Turing was not so interested in demonstrably “proving” that computers could be intelligent, but instead (and very much in line with his everyday navigation of British sexual norms) in developing something that could “pass” as human. Intelligence was not an acid test but a set of gestures that could be performed or pantomimed, emulated until they were “good enough” to deceive. N. Katherine Hayles has continued this trajectory in her work on “nonconscious cognition,” removing the barriers to what qualifies as intelligence and subsequently extending cognition down to unicellular organisms and plants.
In my own work on language models alongside colleagues (Munn et al. 2023; Magee, Munn, and Arora 2023), we highlight how models like GPT-3.5 achieve an approximation of natural speech. What is important is not the authenticity or actual intelligence of this feat, but that this “good enough” ability means users treat language models as an interlocutor, a listener, a subject. This projection, even if misplaced, opens up the interaction and requires a conceptualization that is at least as attuned to memory, affect, and subjectivity as to software interfaces and linear...