New Reading Scenes
Reading – the foundational skill of the humanities – has been profoundly transformed by the rise of digital media and the widespread digitization of texts. Already in 2012, N. Katherine Hayles analyzed this transformation in depth in her book How We Think: Digital Media and Contemporary Technogenesis.1 A groundbreaking theorist of media, literature, and critical posthumanism, Hayles has since continued to explore how humans and machines form what she calls a “cognitive assemblage,” one that challenges the idea of autonomous, self-possessed subjectivity in processes of meaning-making.2 Literary studies scholar Julika Griem has proposed analyzing these transformations by paying close attention to the “reading scene”: scenes in which practices of reading are explicitly thematized in literary texts and visual media. She argues that this media reflexivity enables us to analyze the changing forms, valuations, and norms assigned to reading as a cultural practice.3
What new reading scenes emerge with the spread of large language models (LLMs) and the research practices that surround them? How is reading transformed – in its modes and methods, but also in its valuations – when machines read for us, that is, “interpret” content, summarize texts, produce outlines, propose essay questions? What does the formatting of an LLM’s output – the breakdown of content into bullet points, the use of bold typography to focus the reader’s attention on “what counts” (but counts according to what norms?), the chat with a bot – do to our understanding and valuation of reading? What do we read of a text, a philosophical position, or a source in a foreign language when we read it through machine reading? Is this reading through the machine a reading with the machine? In other words, do we form an assemblage of shared cognition, and do we co-constitute meaning when we think and read through an LLM?
One is reminded of Friedrich Nietzsche, who, in 1882, chiselled the following words on his Hansen Writing Ball: “Unser Schreibzeug arbeitet mit an unseren Gedanken”4 (“Our writing tool works with us at shaping our thoughts”). Martin Stingelin has pointed to this passage as one of the foundational “writing scenes,” in which the writer acknowledges how the machine contributes, in a very embodied, material way, to co-shaping what they think and write.5 Transposing this to our current reading scene: are we still reading with the machine, or have we started to read from it, such that we are nudged to align with the interpretations a model generates and the valuations attached to them? And could we arrive at a point where we consider machine reading – reading from the machine – the default mode of what it means to read? Perhaps this will not be the case for people who grew up learning to read independently of models. But what about the next generation?
Many German Länder are currently introducing LLMs into schools for teaching and learning purposes.6 I am convinced that LLMs will have a massive impact on the ability to read and interpret texts, and also to make sense of oral discourse. That is why I am sceptical about the introduction of these models in educational settings at a stage when children and adolescents are still learning how to read – grappling with how to parse information, how to interpret complex texts, and how to develop an awareness of the rhetorical dimension of language. The rationale behind this introduction, apart from clear economic interests, is that LLMs represent an opportunity as long as students learn how to use them “critically.”7
Critical thinking is the operative yet under-defined concept that does a lot of work in the institutional positioning of schools and universities toward LLMs. Critical thinking functions as the safeguard of students’ autonomy, a value still considered essential in pedagogical settings. At the same time, critical thinking toward AI is expected not to hinder students’ readiness to use the tools, so as to optimally prepare them for the job market. What does the reading scene look like in which students are taught “critical thinking” toward LLMs’ outputs? What I have found so far is that activities addressed to high-school students often consist in discovering the inaccuracies and errors that a model generated in response to a prompt. Students may, for instance, be asked to compare AI output with information existing elsewhere on the web or in books. In this reading scene, students are trained to become not active producers of meaning, analysis, and arguments, but evaluators, modulators, and improvers of AI models. These evaluations, modulations, and improvements expected from the students are made on the basis of – and thus remain determined by – the kind of normative rationality that characterizes current AI.
Two clarifications: the first regards the question of the necessary expertise to produce such an evaluation, while the second addresses the idea of LLMs as normativizing machines.
Expertise
Serious output evaluation presupposes a degree of expertise at least equal, if not superior, to the “expertise” of the model. Without such expertise, the reader of a model’s output must either systematically fact-check the entire output or take the model’s word for it. Moreover, the acquisition of the necessary expertise rests on the existence of an exteriority to the model. However, this exteriority – when we think of the internet – is itself increasingly populated by synthetic content, that is, content generated by AI models.
Speaking from my own experience, I have yet to receive an output that does not contain hallucinations, errors, or inaccuracies. To mention but one example, I recently asked GPT to explain inflation from a Marxist perspective. The model first generated, somewhat unsurprisingly, a supply-and-demand-based explanation. But even after a third corrective prompt, the model still couldn’t shift its “attention head” to the location in its vector space containing what is apparently a very minoritized position: Marxist economic theory. This position couldn’t be generated either because it is highly minoritized in the dataset or because the model is fine-tuned to favor other explanations. This is not conspiracy theorizing but the reality of model training. For instance, the Chinese model DeepSeek refuses to address any question regarding the Cultural Revolution. One can expect that the alignment of big tech companies with the Trump administration could have similar effects. The Trump administration has already proceeded to purge content related to diversity and climate change from many government websites.8 Simultaneously, LLMs are fine-tuned through reinforcement learning to repeat, in an apologetic and subservient tone, that they always aim to generate content that is “neutral and balanced” – a misleading claim, to say the least.
LLMs as Normativizing Machines
This claim to neutrality must be challenged but not so much, as one might expect, by pointing to the existence of biases in AI models. Biases are, in fact, readily acknowledged by companies, which frame them both as a temporary issue and an opportunity to perform their version of liberalism, wherein every subject is said to deserve equal representation, not so much in the eyes of society as in those of AI models. This framing conveniently offers companies a compelling justification to collect even more data to fulfil their promise of neutrality.
Instead, the claim to neutrality must be challenged by showing how it invisibilizes the fact that LLMs and other generative AIs are essentially normativizing machines. The inherent normativity of current AI marks a departure from earlier forms of rule-based artificial intelligence that pertained to what the historian Paul Erickson and his colleagues have called, in their book How Reason Almost Lost Its Mind, “Cold War Rationality”: a rationality devoid of human judgement and interpretation – and thus considered particularly apt to minimize uncertainty.9 In contrast, current machine-learning-based AI explicitly relies on the integration and automation of moral judgment, valuation, and interpretation.10 So when an LLM asserts that it aims to be neutral and balanced, it conceals not only the normative work going into the systematic production of this claim, which consists in reinforcement learning based on human evaluations; it also works at neutralizing the question of positionality, which has been a major epistemological contribution of feminist and postcolonial studies. The claim to neutrality reveals current AI’s aspiration to produce a gaze from everywhere that purports to emerge from reality itself, to map it in its totality, and in doing so, to neutralize positionality.11
In the meantime, LLM-based machine reading is being used to fulfill evaluative tasks such as the automated assessment of resumes, job applications,12 and more recently college applications in the U.S., as well as the automated grading of students’ homework in German schools. If you want to successfully pass these tests, you had better make sure that your application materials reflect the statistical norms and ethico-technical valuations of the LLM that will evaluate them.13
In conclusion, I am convinced that reading scenes shaped around the praxis and valuation of close reading will not disappear. On the contrary, close reading will continue to play an essential role in the critique of technology. However, the scene should now include model outputs, the model’s chain-of-thought mechanism,14 research papers written by the machine learning community, and the logics of technology itself. Additionally, we need genealogies or histories that reveal the contingency of technology’s current trajectory and its non-naturalness, and analyze the specific valuations – in every sense of the word – that contribute to shaping its course.
References
- Hayles, N. Katherine (2012): How We Think: Digital Media and Contemporary Technogenesis, Chicago: The University of Chicago Press.
- Hayles, N. Katherine (2017): Unthought: The Power of the Cognitive Nonconscious, Chicago: The University of Chicago Press, https://doi.org/10.7208/chicago/9780226447919.001.0001.
- Griem, Julika (2021): Szenen des Lesens: Schauplätze einer gesellschaftlichen Selbstverständigung, Bielefeld: transcript, https://doi.org/10.1515/9783839458792.
- Cited in: Stingelin, Martin, Davide Giuriato and Sandro Zanetti (eds.) (2004): ‚Mir ekelt vor diesem tintenklecksenden Säkulum‘: Schreibszenen im Zeitalter der Manuskripte, München: Fink, p. 8.
- Stingelin/Giuriato/Zanetti 2004.
- One of these tools is licensed by German startup Fobizz: https://fobizz.com/de/klassenraeume_ki/.
- A website provided by the Canton of Vaud in Switzerland (the canton in which I went to school and studied), offering pedagogical resources and guidelines on the use of generative AI: https://www.eduvaud.ch/ressources/ne-vous-fiez-pas-aux-reponses-dune-ia/ (Last Access: 24.04.2025).
- https://www.npr.org/sections/shots-health-news/2025/01/31/nx-s1-5282274/trump-administration-purges-health-websites (Last Access: 24.04.2025).
- Erickson, Paul et al. (2013): How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality, Chicago/London: The University of Chicago Press, https://doi.org/10.7208/chicago/9780226046778.001.0001.
- Schwerzmann, Katia and Alexander Campolo (2025): “‘Desired Behaviors’: Alignment and the Emergence of a Machine Learning Ethics”, in: AI & Society, pp. 1–14, https://doi.org/10.1007/s00146-025-02272-3.
- Schwerzmann, Katia (2024): From Enclosure to Foreclosure and Beyond: Opening AI’s Totalizing Logic, Philpapers (Preprint), https://philpapers.org/rec/SCHFET-6 (Last Access: 24.04.2025).
- Gan, Chengguang, Qinghao Zhang and Tatsunori Mori (2024): Application of LLM Agents in Recruitment: A Novel Framework for Resume Screening, on: arXiv, 13/08/2024, http://arxiv.org/abs/2401.08315 (Last Access: 24.04.2025).
- Mühlhoff, Rainer and Marte Henningsen (2025): Chatbots im Schulunterricht: Wir testen das Fobizz-Tool zur automatischen Bewertung von Hausaufgaben, on: arXiv, 21/01/2025, https://doi.org/10.48550/ARXIV.2412.06651.
- The mechanism of “chain-of-thought” simulates “close reading” by dissecting (that is, literally analyzing) the user’s prompt into its simplest elements, enabling the model to tackle each of them successively. Wei, Jason et al. (2023): Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, on: arXiv, 10/01/2023, http://arxiv.org/abs/2201.11903 (Last Access: 07.05.2025).
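- For readers unfamiliar with the technique, the contrast between a direct prompt and a chain-of-thought prompt can be sketched in a few lines of Python. This is a minimal illustration only: no model is called, the “library” question is a hypothetical example of my own, and the solved exemplar is the well-known tennis-ball example discussed by Wei et al. The point is simply that the chain-of-thought prompt prepends a worked answer whose reasoning is spelled out step by step, nudging the model to decompose the new question in the same fashion.

```python
# Sketch of chain-of-thought prompting (after Wei et al. 2023).
# No model is queried here; the snippet only constructs the two prompt
# strings to show how they differ. The "library" question is hypothetical.

question = "A library owns 23 books and lends out 7. How many remain?"

# Direct prompt: the question alone, with an empty answer slot.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompt: a solved exemplar with explicit reasoning steps
# (the classic tennis-ball example from Wei et al.) precedes the question.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
    f"Q: {question}\nA:"
)

print(cot_prompt)
```

Sent to a model, the second prompt typically elicits an answer that likewise “reads closely,” step by step, rather than emitting a bare number.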
SUGGESTED CITATION: Schwerzmann, Katia: New Reading Scenes. On Large Language Models and Machine Reading, in: KWI-BLOG, [https://blog.kulturwissenschaften.de/new-reading-scenes/], 26.05.2025