AI is Inherently Evil: Parts 5 + 6
The concluding sections
5. The Limit of Intensification
As any AI fan will happily tell you, AI and human learning look similar from the broadest possible perspective: both use information about the past to evaluate and respond to new events. The similarities end there.
Most obviously, humans can derive information from material experiences as well as immaterial, secondary information; AI has no material existence[1] and as such has no direct perceptions to compare with the data it collects. Humans have obligations aside from self-preservation and efficiency; AI has no obligations except those of technocapitalism, which all hinge on self-preservation, expansion, and efficiency. Humans collect and use information selectively, from multiple sources; AI does so indiscriminately, from a single source[2]. Finally, humans use information as a representation of reality, understanding that decisions—our labor—change that reality. AI understands no reality outside of information and is a digitization of labor itself; this is a complicated way of saying that AI does not create information but predicts it, and it predicts new results based exclusively on old outcomes, not on imagined possible futures.
Because AI imitates patterns, not exceptions, it will imitate any biases or qualitative trends in training data. The information AI consumes is aggressive and narrative, so AI predicts that users want aggressive narratives. It is personalized, so AI will attend to personal desires and assumptions above external realities. It will also replicate the hierarchical structures imported with the training data. Worst of all, it will leave information incomplete and hollow, just like the fragmented, incomplete information filling platforms. Despite this distortion, AI can still emulate form with great consistency: whether information is true or false, deep or shallow, it retains structure and pattern; otherwise, there would be no relationship of parts, and no pattern for AI to predict and replicate[3]. More concisely, every data point is an example of how to write a data point; not every data point is an accurate data point. This is why AI has become notorious for lying with enormous confidence[4]; this is also the factor that elevates AI from a problem to an existential threat. Form and reference are related, but not closely enough to matter.
Model collapse—the problem in which an AI uses its own output as input—is a recurring subject of the AI news cycle[5]. Regardless of whether AI is years away from collapsing or has already started, the basic principle still holds. As the information an AI uses is already algorithmically optimized[6], the data it generates will be optimized for copying and sharing—even if the model excludes copies from future datasets, it may fall for modified copies or derivative works. Besides, AI outmatches any other technology for sheer data output; even if AI doesn't replace human content, it will vastly exceed it in scale. Say some humans keep actively creating new data, perhaps incorporating AI into their workflow: this scenario ignores both the drive of AI to automate all labor and the fact that it would still be scraping its own content, albeit embedded within new, human-made content. Most importantly, the entire point of contriving schemes to bypass autocannibalism is to ensure AI can imitate humans more consistently and accurately. An AI that could reliably detect AI-generated content would just end up producing content it could not detect.
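For the technically inclined, the narrowing dynamic behind model collapse can be sketched with a toy simulation. This is purely illustrative—`collapse_sim` and its parameters are invented for this example and model no real system—but it shows the mechanism: a "model" that trains only on its predecessor's output loses variance generation after generation.

```python
import random
import statistics

def collapse_sim(generations=2000, sample_size=50, seed=42):
    """Each 'model' is just a Gaussian fitted to samples drawn from
    the previous model. After the first step, the chain never touches
    the original 'real' distribution again."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the real data distribution: mean 0, spread 1
    for _ in range(generations):
        # train exclusively on the predecessor's output
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
    return sigma

final_spread = collapse_sim()
print(final_spread)  # a tiny fraction of the original spread of 1.0
```

The exceptions—the tails of the distribution—vanish first; what survives is an ever-narrower imitation of an imitation. That is the mechanical content of the claim that AI imitates patterns, not exceptions.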
In other words, the development of AI starts an algorithmic, recursive, two-stage process.
In the first stage, AI content gets increasingly indistinguishable from human content. More platforms adopt it; they profit from reduced labor and materials costs, and add more users and more data. The more platforms use AI, the more of them add their data to its training pools. AI accumulates samples of both probable and improbable responses to prompts; hallucinations and factual errors dwindle, even though the output is still flawed, personal, aggressive, incomplete, and hierarchical. This process accelerates recursively[7]; the profits generated by AI enable its makers to scrape yet more data and automate yet more tasks. This pattern approaches an end state in which neither material commodities nor physical labor exist; all users and utilities are represented by information, and, since complete access to data entails complete predictive abilities, all present and future information would exist at once on one platform.
The internet will never reach this end state. As AI content replaces less optimal content, and this immaterial content replaces material commodities, humans depend on AI and learn to imitate it as well. It doesn't matter whether AI content dominates the web or just makes up a sizable fraction of it, nor does it matter whether AI scrapes its own content or human content inspired by AI: some form of collapse begins, and new output stops representing reality. It instead represents invented personalities, dramatic narratives, and artificial hierarchies. In this scenario, human reliance on online data would render conflicting alternatives unavailable. Worse, even as information loses reference, it loses form more slowly; it appears coherent, logical, and relevant, even if it completely contradicts reality.
Collapse ends when the data a system generates contradicts decisions necessary to the survival of either the system or its users. More simply, AI either generates information causing the mass killing of its users or a terminal error in its own program[8]. In this scenario, the AIs manage most information used by technological society. When one gives its final instructions, no one will have the "common sense" to refuse—common sense is still just learned information. In the best-case outcome, certain models collapse before others, giving survivors a chance to shut down the rest beforehand. The worst-case outcomes are endless: AI forces us to eat rocks and glue[9], AI tells us to poison ourselves with botulism[10], or a federally managed AI hallucinates an enemy invasion and reacts accordingly.
Of course, the alternative is that the conditions necessary for AI to collapse change before collapse can happen. As computers, technocapitalism, and AI are now societally integrated, such change would itself be apocalyptic. The likeliest candidate is the ongoing climate apocalypse—if enough humans die to wipe out the labor force and consumer base of AI, or if extreme weather destroys the hardware itself, absolute collapse will be the least of our concerns. Personally, I expect a combination: AI insulates affluent users from the situation, nullifying mitigation and migration efforts, and the global information system collapses around the same time the global resource network does.
6. Discussion and Conclusion
Model collapse is not actually the critical problem. The internet itself is the problem.
I use AI as the example because it's concrete and specific, but AI just accelerates what the internet already does. The totalizing attribute makes users dependent on the internet for more information, and the automation attribute makes them increasingly unnecessary to the production of that information. The universal attribute allows single sources to control the production of diverse information, while the immaterial attribute disconnects that information from material reference points. This creates a closed system in which inaccurate information inspires the imitation, recycling, and augmentation of said inaccuracies, whether by users or machines. Automation prevents the system from detecting inaccuracies; totalization prevents users from doing so.
I say "the internet" because the problem is not strictly computers but the ability of many computers to share information. Once the online market becomes the best way to buy and sell commodities, the compounding force of technocapitalism begins; once that market is mediated through platforms, those platforms compete until a few hold all the data. The other main agent of absolute collapse is the universal attribute. A computer without it could not be fully automated, nor could it represent all objects through data. It would also not be a computer anymore.
Yet even the attributes are just impossible ideals, shaping the development of technology but not achievable with it. Every technology exists within a larger, non-technological context. You can make GPT code a whole website for you, but you still have to prompt it. You can move all your savings into crypto, but the car you buy is made of metal, not JavaScript. You can fire all your workers and replace them with AI, but you're not going to fire yourself. Like most of us, the leaders of the tech industry want something they can never have. Unlike us, they have the power to make universal decisions for everyone based on lies and fabrications, until the lies stop working and everything collapses—and everything dies.
[1] Except in a very generic and unhelpful sense as computer hardware.
[2] Yes, that data needs to be cleaned and labeled; yes, AI has parameters to filter out certain results. These tasks increase labor and invite automation. The expansion of AI capabilities, with the end goal of AGI, also weakens selectivity—the more an AI can do, the less selective it can afford to be. ChatGPT isn't supposed to give instructions for illegal or dangerous actions, but the fact that it can leaves one wondering why it didn't filter out that data in the first place.
[3] Most poisoning techniques, like Glaze or Nightshade, take advantage of this fact by attempting to disrupt image patterns while preserving information. But the pattern still exists (otherwise the image would be unintelligible to the intended human audience); developers just need to adjust AI to find it.
[4] Cade Metz, "What Exactly Are the Dangers Posed by A.I.?," The New York Times, 1 May 2023, https://www.nytimes.com/2023/05/01/technology/ai-problems-danger-chatgpt.html.
[5] Matteo Wong, "AI Is an Existential Threat to Itself," The Atlantic, 21 Jun. 2023, https://www.theatlantic.com/technology/archive/2023/06/generative-ai-future-training-models/674478/.
[6] Firstly, the people who initially upload information are predisposed to optimize it. Secondly, a certain amount of non-optimized content fails to pay for itself and disappears altogether. Lastly, many users of AI specifically want to optimize their results (as with those recipe sites generated purely for SEO), which means basing them on previous success cases.
[7] Millett, "Technology is Capital," 73–98.
[8] Note that information doesn't decay from safe to harmful, but from true to false. This is why the scenarios suggested by Roko, Nick Bostrom, or the Terminator films are ridiculous—they impart agency and motivations to AI it cannot acquire from machine learning. Rather, in my scenario, AI continues making errors ranging from harmless to destructive but non-terminal, until it makes an error it cannot survive.
[9] Nico Grant, "Google's A.I. Search Errors Cause a Furor Online," The New York Times, 24 May 2024, https://www.nytimes.com/2024/05/24/technology/google-ai-overview-search.html.
[10] Nik Suresh, "I Will Fucking Piledrive You If You Mention AI Again," Ludicity, 19 Jun. 2024, https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/.


I have read your six-part essay with great interest! There is much I do not fully understand because I am not in the flux of your resources and experiences (and the attendant vocabulary), but I get the gist of it. Might you be interested to post a postscript, as it were: what are your recommendations for an individual in daily interaction with the internet and AI? You are not off-grid yourself, since your work is to some degree on the grid, but you doubtless have tips for how we can somewhat-safely deal with tech while we wait for it to destroy itself and ourselves in the bargain. Also, your footnotes allude to a rich exploration of related literature. What is a starting place for that exploration? What is a reading list for beginners?
I have the (probably delusional) sense that I am a "discerning" user of tech. If the internet is the ocean and I am shopping for seafood, I am looking for sustainable, uncontaminated, satisfying nourishment. From your essay I get the feeling that everything is actually unsustainable, contaminated, and only partially satisfying because the entire structure is tainted with the motives intrinsic to capitalism and the monopolistic structure of the corporations that trawl the ocean "on our behalf" to extract the resources of the ocean, market them and display them prettily in the grocery store (i.e. our screens). There is no "satisfaction" or real nourishment because the platform and its advertisers seek our continuing interaction. But isn't reading the NY Times or Le Monde on my phone a simulacrum of the traditional knowledge-gathering experience, simply transferred to the digital domain? Or does the digital presence of such "trustworthy" sources simply contribute to the evil scraping of content into vast holds of AI banks? What is a modern person to do?