Hannes Bajohr is a Professor in the Department of German. Dr. Bajohr’s primary areas of focus include digital writing and literature, German philosophy of the 20th century, and political theory. He received the N. Katherine Hayles Award for Criticism of Electronic Literature from the Electronic Literature Organization in 2024. Dr. Bajohr studied philosophy, German literature, and modern history at Humboldt University of Berlin, and received his PhD from Columbia University.
Firstly, can you introduce yourself and speak a bit about your areas of focus within the Department of German?
I have been at Berkeley for about a year now and teach in the German department, where I work in three fields: The first is intellectual history in the German context – authors like Hannah Arendt, Günther Anders, or Hans Blumenberg, who was also the topic of my dissertation. In particular, I research a specific current in German philosophy starting in the 1920s called “philosophical anthropology,” which is the study of how, if at all, we can define the human.
Second, I work on digital media studies and digital writing in particular. One topic of mine is the long history of automated text production that goes back all the way to the Baroque. The idea that the permutation of words can be mechanized is then taken up with the advent of computers in the 1950s. So having machines write text, even literary text, is certainly nothing that suddenly appeared with ChatGPT!
Finally, I also work on political theory, mostly liberalism and republicanism; here a focus is the work of the political theorist Judith N. Shklar, about whom I have also co-authored a book in German that will appear in English next year.
What was the first computer-generated text?
The very first texts one could describe as literary are the “Love Letters” by Christopher Strachey, a British computer scientist and colleague of Alan Turing’s working in Manchester. In 1953, Strachey wrote a program that worked as a slot-and-fill mechanism, just like the game Mad Libs. It would run overnight because computing time was so expensive back then. The next morning – this was before monitors, so it was output on a teletype printer – the people working on the computers would find a love letter, mysteriously signed by the Manchester University computer itself. There is a wonderful anthology called Output: Computer-Generated Text 1953–2023 that collects this and other examples of digital writing.
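To give a sense of how simple the mechanism was, here is a minimal Python sketch of a Strachey-style slot-and-fill generator. The template, word lists, and function name are illustrative stand-ins of my own, not Strachey’s original program; only the sign-off echoes the Manchester University computer.

```python
import random

# Illustrative word lists; Strachey's program drew on similar
# hand-curated vocabularies of endearments.
ADJECTIVES = ["beloved", "darling", "dear", "loving", "precious"]
NOUNS = ["affection", "desire", "fancy", "longing", "tenderness"]
ADVERBS = ["anxiously", "fervently", "tenderly", "wistfully"]
VERBS = ["adores", "cherishes", "longs for", "yearns for"]

def love_letter(sentences: int = 5) -> str:
    """Fill a fixed template with randomly chosen words, Mad Libs-style."""
    salutation = f"{random.choice(ADJECTIVES).title()} {random.choice(NOUNS).title()},"
    body = " ".join(
        f"My {random.choice(ADJECTIVES)} {random.choice(NOUNS)} "
        f"{random.choice(ADVERBS)} {random.choice(VERBS)} your {random.choice(NOUNS)}."
        for _ in range(sentences)
    )
    return f"{salutation}\n\n{body}\n\nYours {random.choice(ADVERBS)},\nM.U.C."

print(love_letter())
```

Even this toy version makes the point: the “creativity” lies entirely in the fixed template and the curated word lists, with chance doing the rest.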
Can you speak about some of the recent projects you have worked on?
The last book I published is an introduction to digital literature. It came out in German and is currently being translated into English. It relates the long history of automated writing I just mentioned, from Baroque combinatorial literature to ChatGPT, and discusses how to read and interpret these artifacts.
Speaking more directly to the current situation around machine learning, I also wrote a number of essays on AI and text – on authorship and reception in particular. What does it mean if a machine can write text that is virtually indistinguishable from something human-written? Does this make the machine an author? My argument is that it doesn’t because authorship is about more than simply the production of text; it’s about the status of being recognized as a subject capable of writing, and is thus a question of social recognition. In other words, authorship comes down to our collective willingness to grant it to non-human entities – it is not a technical question at all but a social one.
I also work on something I call “post-artificial writing,” which asks the reverse question, focusing not on authorship but on reception: What happens to our standard expectation of unknown texts if there is a machine that can produce writing that looks just like what a human could have written? What’s interesting here is that our standard expectation comes to light only retrospectively: Only once there is an alternative do we understand that we have always (at least in modernity) implicitly assumed that any text we encounter is human-written. That is obviously in the process of changing now, and we have entered a new expectation, or rather, a new state of doubt. This is evident in everyday acts of reading. An email that uses an em-dash or the phrase “to delve into” (both hallmarks of ChatGPT) may suddenly prompt suspicion: Was this generated? Increasingly, we all perform what I call “folk-forensics”: scanning texts for signs of AI influence, weighing stylistic tics, and recalibrating our reception accordingly.
The striking thing is: No matter what legal restrictions are imposed, and no matter what forms of AI-detection technology are developed (none of which truly work), the doubt, once introduced, will never disappear. But if there is no surefire method to prove the human origin of a text, what happens next? My proposal is that we might finally enter a phase of what I call post-artificial writing, in which the difference between “natural” and “artificial” writing no longer plays any serious role, because other things – what the text says, its style, the information it conveys – will have become more important than the question of origins.
I might say that this would have some serious impact on our understanding of literature. Incidentally, I am also a poet working in this medium, and I have written a novel, (Berlin, Miami), using a self-trained language model that is, let’s say, not quite on par with current LLMs. The book is pretty absurd and great fun, and it, too, is being translated into English at the moment – by a human! This is something I find quite important because an AI translation would smooth over the wonderful strangeness of the language my model produced. I am very curious to see what the translator will make of it.
How have you seen pedagogy changing due to the onset of Artificial Intelligence? How do you expect students and faculty to respond to this?
I think that we’re a little past the initial panic. In 2023 and 2024, people were all doom and gloom about the possibility of generative AI taking over essay-writing. With a few semesters of experience behind us, the atmosphere has become more pragmatic. Broadly speaking, I see two camps emerging. One attempts to stem the tide by going analog: blue books for exams, more in-class writing, oral exams, and an increased emphasis on presentations. The other accepts that there is no turning back and instead integrates AI into pedagogy, sometimes even requiring it. Many universities now provide faculty with suggested language for their syllabi, specifying whether a class permits AI freely, allows it under certain conditions, or prohibits it altogether.
Where on that spectrum you fall depends on what you think the point of essay-writing is. Is it a product or a process? If it’s a product – and thus teaching is about educating people to produce something that reads like a good text – there’s not really a convincing argument against AI. Our students entering the labor market will encounter AI everywhere, mostly because companies think this is a way to automate tasks and streamline productivity (which may or may not be a good strategy). Here, education would mean making students aware of possible pitfalls and limitations and building a certain AI literacy – the case of the lawyer submitting ChatGPT-generated briefs riddled with hallucinated precedents is a cautionary example. This also means impressing on students that the responsibility for a text cannot be ceded to a machine – even if no word is yours, you need to treat it as if you had written it and have to answer for it. In this scenario, our job would be to help students make the most of artificial intelligence without getting complacent about its shortcomings.
The other camp says that writing isn’t really about the product – the product is just the end-point of a process that runs much deeper. For them, writing is thinking. If you don’t write a text yourself, you delegate the thought-process to the machine, and then you’re also at the mercy of the machine – including its predilections and its blind spots. This might be discipline-specific up to a point: In the sciences, one could argue that if the essential intellectual work has already taken place in the lab, then summarizing the findings in prose is a secondary task. But in the humanities specifically, writing and thinking form a recursive feedback loop that cannot be pulled apart. And I suspect this is not entirely foreign to STEM fields either: Consider the slow trial and error of programming, which itself risks being increasingly replaced by generative AI.
From this point of view, the danger is that AI will narrow our intellectual possibilities, steering thought into pre-shaped and commercially acceptable corridors. This camp’s concern is bound up with its awareness of the ethical, legal, and political problems of AI writing: questions of copyright and training data, the reproduction of bias, and, more broadly, the commodification of the linguistic commons. OpenAI, one shouldn’t forget, is a company with a profit interest that dictates what its models do and don’t output. In other words, what can be said using a commercial model is based on the parameters of its business model, not its technical capacity alone. One can make a reasonable case that, in the extreme, in a society where communication is overwhelmingly mediated by language models, the very conditions of democratic participation are at stake.
Another way to frame the question is to ask what, exactly, is being lost when we use AI. Is it comparable to the decline of mental arithmetic once calculators were introduced into math class? Or is it closer to the fading practice of memorizing poetry, which schools gradually abandoned? The answer depends, at least in part, on whether one sees these activities as having intrinsic value beyond their immediate utility – whether mental calculation fosters intellectual flexibility, or memorized poetry gives you a wider range of expressive means.
Thinking about the negative reports about what will happen to academia due to AI, have you seen this reflected in the work of your students? Have there been ways in which you were surprised by positive impacts?
I think that generally, it can go both ways. The whole problem with generative AI for students is that the thing is just so tempting – it’s hard not to use it! One reason for this temptation may be that it does not feel like cheating in the traditional sense and so for a while there was confusion about what is acceptable and what isn’t. I heard from a colleague that one of their students said, “This is not plagiarism, because there is no original. Plagiarism means copying something verbatim, and ChatGPT always produces something new.” So clarifying what is allowed or not is always the first step.
But this whole situation also asks us to consider what the point of the student essay is in the first place. What does the five-paragraph essay actually test? Is it knowledge, the ability to construct an argument, self-expression, or competence in academic form? Or does it, more narrowly, test nothing beyond the skill of producing a five-paragraph essay – at which point the exercise risks becoming moot. Now, I do think that writing is thinking and in that sense, even this general formula might be a useful scaffolding for thought. But ideally, the exercise should move beyond the mechanical sequence of introduction, claim, argument, and reiteration.
A positive I can see: If everyone uses AI, the playing field is, to a certain degree, leveled. Class markers that have to do with differences in pre-university education are to a large extent gone here. Of course, some are still better writers than others. In a freshman class, we used to have good writers and bad writers; now we have competent writers and excellent writers. With AI assistance, many essays reach the level of competence expected in college writing. What begins to stand out, then, are the texts that are more imaginative, unexpected, daring, or aesthetically experimental, that is, the ones that can break free from the five-paragraph mold. This was borne out in my classes: I had some excellent papers – some that used a little bit of AI, and some that were not AI generated. But only in the latter was there a deeper investment and pride in the work. Our grading practices as teachers will have to adjust to this raised baseline and not just look for competence but excellence.
What courses do you regularly teach for undergraduates?
I teach two classes about AI. One is a freshman class called “Literary AI” (offered this fall) and it basically follows the long history of rule-based writing, from the Baroque to today, with a detour through the avant-garde – Dada, Surrealism, Oulipo, and so on. The second, “Language after Language Models,” asks a more fundamental question: are the outputs of large language models truly “language”? What is “meaning” in a machine that cannot really mean? This course also engages translation theory as a way of probing those limits. In both classes, technical knowledge is interwoven with humanistic inquiry: in “Literary AI,” for instance, students might read Kant alongside a Tristan Tzara poem, and then experiment by building small poetry generators in Python.
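To illustrate the kind of small poetry generator students experiment with, here is a minimal Python sketch of Tzara’s cut-up recipe (“take a newspaper, take some scissors…”): it shreds a source text into words, shuffles them, and re-lines the result as verse. The function name, verse lengths, and sample text are my own choices for illustration, not a specific class assignment.

```python
import random

def dadaist_poem(text, seed=None):
    """Tzara-style cut-up: split a source text into words,
    shuffle them, and re-line the result as short verses."""
    rng = random.Random(seed)
    words = text.split()
    rng.shuffle(words)
    lines = []
    i = 0
    while i < len(words):
        n = rng.randint(2, 5)          # verses of two to five words
        lines.append(" ".join(words[i:i + n]))
        i += n
    return "\n".join(lines)

article = (
    "choose an article as long as you are planning to make your poem "
    "cut out the article then cut out each of the words and put them in a bag"
)
print(dadaist_poem(article, seed=7))
```

The exercise makes the avant-garde precedent tangible: chance procedures were a writing technology long before neural networks, and a dozen lines of code can already reproduce them.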
I think one important point in both classes is that as you learn about AI, you’re more capable of seeing its limits. In the end, there’s no magic here but also no apocalypse brought about by the singularity. Yet it takes some work to get there. One of the most useful early assignments is to ask students to get an LLM to completely generate an essay. It turns out that this is not so easy after all and a lot of work goes into it when you want something halfway decent. We then analyze these texts and assess: what worked, what didn’t, is there a sustained argument or just a lot of hand waving, and so on. Of course, this exercise is effective because the subject of the course is AI itself and students have become experts in the topic; it may be more difficult to transpose this into other disciplinary contexts.
How do you see translation changing because of Artificial Intelligence?
We are lucky to have an exceptionally diverse student body at Berkeley, and in my class about language models, at least half the students spoke a second language. I asked them to translate poems from their languages for each other and to explain the nuances and the richness of certain words. We then compared this to AI translation. The contrast was instructive: What you see is that translation is an art, but it is also a practice that is rooted in cultural knowledge and the relation to other texts of that language. If there are echoes or citations of older traditions or allusions to vernacular tropes, they’re not usually picked up by an automatic translator. This exercise helps students to understand that while machine translation is very competent, it again only raises the baseline. Translation, and especially literary translation, still requires a lot of background knowledge and a certain feeling for the language that is difficult to automate. Yet this is, in some sense, also an idealist view: in practice, it is often the publisher’s profit margin that determines whether a translator produces a text themselves or merely “post-edits” the output of a machine.
Can you tell me more about the term, “negative anthropology” and how it is related to the use of artificial intelligence in writing?
By negative anthropology I mean less a fixed school than a loose constellation of thinkers in Germany between the 1920s and 1970s such as Helmuth Plessner, Günther Anders, Ulrich Sonnemann, and, to some extent, Hannah Arendt. Their shared point of departure was the conviction that “the human” cannot be defined positively: attempts to do so inevitably produce exclusions, as Sylvia Wynter has argued even more forcefully in the present. Yet, like Wynter, they also insisted that we cannot simply dispense with the term. Abandoning “the human” altogether, as some strands of posthumanism suggest, risks losing an indispensable conceptual register, particularly an ethical one. In analogy to negative theology, the human becomes something that can be circumscribed but not defined, that is, approached by way of what it is not. This produces a flexible notion of humanity that remains usable across domains, from climate change to AI ethics, without collapsing entirely into pure relationality.
In our discussions of AI, both the enthusiasts and the critics often compare the technology against some innate human essence or defining trait. That trait used to be rationality, but with computers having out-computed humans for a while now, the last bastion of the human has become the opposite: the non-rational, the creative, which either has to be defended against encroaching automation or, conversely, to be conquered by increasingly sophisticated AI. One lesson of negative anthropology is that this opposition might be misguided in the first place, since there have never been humans without technology (a point my colleague David Bates has made very convincingly).
But the other lesson is that this comparison between humans and machines also produces certain aesthetic norms that should not be accepted so easily. For instance, rejecting computer-produced writing because it fails some kind of standard of “good” writing presupposes precisely the kind of strong anthropology that negative anthropology would unsettle. A couple of years ago, the German novelist Daniel Kehlmann tried to write a book using an AI model and was disappointed because the output was not up to his standard. Yet he was measuring the output against what he would have expected from a human. If we insist less on this anthroponormative standard, we can begin to ask what is interesting or distinctive in such texts on their own terms and treat them less as failed imitations of human writing than as artifacts with their own aesthetic affordances. For that reason, I would even argue that the recent improvements of language models – especially reinforcement learning from human feedback – have made them less interesting aesthetically; their writing is blander even though it is more “human.”
What book would you recommend to everyone reading this interview?
If I may, I’d like to recommend two books. The first is called Language Machines by Leif Weatherby, who teaches in the German Department at NYU. It is a sophisticated discussion of what kind of language is produced by language models and what its political and literary implications are. The other is by our own Nina Beguš, who teaches at Berkeley’s CSTMS. She wrote a book called Artificial Humanities, which also insists that AI needs to be taken seriously on its own terms, but holds that the humanities, as the disciplines that focus on human meaning, have a major contribution to make. I share that view: as AI increasingly becomes a cultural machine, it is humanists who are best equipped to assess its impact.
