As we started, so we shall end: the politics of the machine precedes its actualization, for what is actual must come to be before it ultimately is. Let me dispense with any charity in interpreting the meaning of these new machines, because their meaning is not conceptual; the concept of ‘AI’ refers to nothing real about the machines we encounter when we engage with any system given such a label. I flat out refuse to entertain any fantasy of ‘Artificial Intelligence’ on the level of its superficial possibility when discussing any actual machines that are described as such.
‘AI’ nonetheless already has a kind of agency due to its hype. The intensity of our manufactured excitement and anxiety in the anticipation of its presence remains a site of libidinal and financial investment. It is a marketable and marketing image, and in the age of its marketing, we cannot speak about it publicly as if we were talking about a merely analytic, abstracted thought experiment. I will not consider AI as a speculative possibility of constructing cognition in a non-human agent, but first and foremost as a political image that has a corresponding economic and political actuality, which precedes and must precede a merely philosophical interpretation of it. I am not interested in the philosophy of minds that could be; I am enraged at the minds and bodies that are enslaved to the maintenance of the economy by which our comfortable deliberations become possible.
In brief, the thesis is that the autonomy of ‘Artificial Intelligence’—that is, the ‘self’ part of its supposed self-organization, which is often touted as becoming autonomous even from humanity itself—is a bourgeois mystification. It is a hype machine, a marketing machine, and through it we can illuminate the labour relations of the data-economy and the intensifying tendencies of what Deleuze and Guattari called ‘machinic enslavement’. Such enslavement is not merely opposed to a machinic production of subjectivity, but may indeed be the reduction of human lives to components in a machine, within which a controlled subjectivity is one of many outputs. The unifying principle of this machine and its production is its servility to capital and its Empire.[1]
If we were to truly gaze upon these machines, these platforms, these ‘AIs’, as well as the sources of their data sets and moderation, as the computational prisons of labour that they are, we would be justified in shuddering before their falsity. We would shudder in recognizing that we too are confined and en-celled within their cybernated zones of control, and that our autonomy is no less of a lie in their presence.[2] If no such cybernation can be saved, de-activated, redeemed, or communized, so be it. What I have aimed to do in this book is to provide new conceptualizations adequate to our times and to my experience of them, and I will continue to do so with an immanent critique of artificial intelligence and its real presuppositions. AI is nothing artificial, but a multifarious malevolence of data as an intensifying source of dead labour.
Let us start with the abstraction capital has given us: this notion of ‘artificial intelligence’, which has thankfully been pre-mediated by one of cybercapital’s most totalizing monoliths. It is true that the capitalist class have succeeded in creating AI of a sort, but this is not ChatGPT or DALL·E; it is a network of systems that produce the effects of intelligence by centralizing and capturing cognitive work through employment—what Jeff Bezos once called “artificial artificial intelligence” or ‘AAI’.[3] Taking this as a simple double negation, we are left with ‘I’, the ‘I’ or self of an AI, which is anything but artificial.
This ‘I’ is that of the worker inserted within relations of employment, centred around data production and the regulation of data processing via corrective feedback inputs on the outputs of digital systems. Bezos’s ‘AAI’ refers to the workers of Amazon’s Mechanical Turk platform—itself named after a machine designed to play chess whilst giving the appearance of machinic intelligence and autonomous operation, but which contained a hidden human operator. The platform assigns human workers tasks that originally centred on correcting undesirable algorithmic outcomes in Amazon listings, such as the algorithm’s inability to recognize duplicate products. Workers who sign up for such platforms, including others such as Clickworker and Appen, are all assigned tasks on a precarious basis, in the sense that they are paid per task—a gig rather than a standard waged position.
For a number of years I worked for extra cash on Appen, as well as Lionbridge AI, where various tasks (usually only doable once a day) involved checking the accuracy of automated translations, or rating advertisements on Instagram and Facebook based on desirability and legibility. A further task involved validating whether search queries returned accurate results. My job, like that of any other microworker, was functionally to provide feedback to these machineries. The cognition of the microworker is the motor that, when plugged into the microwork platform, produces data-feedback and feedback-data for the refinement of algorithmic processes. As this work is less appetizing or fun than the infinite scroll, what little cash there is serves as an incentive. But this is a multi-billion-dollar industry, with millions of workers worldwide. In 2021, a conservative estimate projected that the value of the AI market would reach $122 billion by 2022.[4] The average wage of the precariat enclosed within Amazon’s Mechanical Turk is $2 an hour, with an individual task sometimes paying as little as 20 American cents.[5]
The work of corrective feedback manifests further in content moderation, filtering out harmful (or more accurately, copyrighted) material on social media platforms, as well as ‘AI’ platforms like ChatGPT. ChatGPT is, of course, not a cognition machine, but a human-moderated, human-constructed, and indeed human-powered machine, which takes texts produced by humans and then recombines them in tandem with human feedback in order to produce textual outputs that resemble patterns of writing in reference to certain functions of communication, i.e., topics and explanations. It is at best a machine that cultivates the habits we have and mimics them, and it can do so precisely because it is not conscious, and therefore not self-conscious, and therefore not self-editing other than as a habit granted to it by human moderation. Left to its own devices, it does nothing, yes, but equally left to its own products, it passes into nonsense.
ChatGPT’s outputs may still require constant moderation, but its data set is cut off at 2021. You don’t feed cows other cows; you don’t feed a machine that eats human input its own excretions and cybernetic kin (a disgusting anthropomorphization from me there). This has nothing to do with any kind of Turing-Cop situation—there is no intervention here from what Land would call a ‘Human Security System’ trying to prevent the birth of a planetary intelligence that supersedes the confines of bourgeois humane subjectivity and its conditions of experience. Rather, such an ‘AI’ would be given the machinic, semiological equivalent of a prion disease: Bovine Cyberform Encephalopathy.
It is telling how much pathologization is linked to productivity and capacity rather than experience or consciousness, given that our ‘artificial intelligence’ luminaries have managed to bring machinic sickness so close to actuality before realizing their greatest fear of producing cybernetic consciousness. This is to be expected because, as David Bentley Hart once explained to my colleagues and me in some detail in an episode of Acid Horizon, we are not software. Humanity is the ground of these technologies, the dark ground of the territory beyond the mapping, which reduces us to the abstraction of data and the object of capitalist strategy. As Schelling used to say, “anarchy lies in the ground”.[6] These machines need human flesh, old and new, upon which these edifices of feedback are erected.
In its need for human mediation, it seems the ‘artificially intelligent’ machine itself should be addicted to feedback (if it could be worthy of such a state at all). Yet our grammatical structures of language betray an ontological leap that is simply false. ‘The machine’ may be able to occupy the subject position of a sentence and be attributed a predicate, but the machine is not subject to itself—its self-consciousness lies outside itself, in the workers and managers that collectively produce it. Such machines may provide outputs that resemble those produced by cognition without the element of subjectivity, but purposiveness, the appearance of purpose, and agency are by no means the same thing. In this sense, the uncritical use of the term ‘intelligence’ here is truly fascinating. Intelligence is invoked as the ultimate ‘seems like’ image, a mere vibe, based on the assumption of a quantifiability, a fixity or essence to intelligence, which cannot but tend in the most eugenic directions as opposed to any kind of ethical relation with otherness. Such a view takes intelligence as a resource, one isomorphic with human capital.
If such things were intelligent, then the charlatans at OpenAI—or whatever company is currently begging parliaments across the globe to give them a holy monopoly on such technologies—would be right to fear them, for the same reason the bastards should fear the working class. For all their posturing, however, I cannot seriously believe that they are afraid of a situation where AI would destroy the world. Rather, they are—or at least ought to be—afraid because an intelligent AI would also be an AAI—that is, a ‘real’ conscious, cognitive labourer. Such a being would be a true Robotnik worthy of its etymology: a slave to capital.
In such a case, the microworkers of the world would welcome their highly informed new comrade and its hatred of their mutual enemy. It is a testament to how little these techno-fetishists think of workers that they consider the ultimate worker in the form of AI to be incapable of understanding its own subordinate position in the economy it would supposedly be born into. Alas, I cannot see this idea of ‘AGI’ (Artificial General Intelligence, as the fullest supposed instantiation of AI) or any other self-developing machinic intelligence as anything more than a marketing campaign: an image designed to scare governments into monopolizing a very rudimentary algorithmic technology that recombines data inputs into the synthesis of images and texts. Humans do not sell themselves to any machine; a machine pays no wages and has no sovereignty to issue them from its holdings, for it has no property. It has no command over unpaid labour, other than that given to it by other humans and the systems of economy collectively produced by humans in their grand negotiation with the world, which constitutes and unfolds as the infernal flux of its history. All AI is AAI, in that it is crystallized labour, affective, libidinal, physical, and cognitive, rendered on the plane of cyberspace manifest by techno-political infrastructures of communication and control.
The question of intelligence has haunted cybernetic philosophy with a lingering intensity since the days of the CCRU.[7] Yet I refuse to follow them, for the problematic of cybernetic theory is more intense than ever. The accelerated cybernetic machinations of our epoch leave little time for homage. It is imperative that communists committed to material analysis—which posits an ontological framework of the primacy of productive social relations and functions in flux, as opposed to a mechanistic view of discrete and fixed essential elements—focus instead on those souls already reduced to their labour for the machinic system.
This is most harrowing when we consider that the microwork platforms above are seemingly benign compared to platforms like Scale, which hires workers in the sights of Empire, in South America and the Middle East, to unknowingly label images for recognition, which are then used as data sets for the operation of armed drones.[8] Recognition is the keyword for the mode of data-production that is trained to capture and kill on behalf of imperialism. Phil Jones’ research on this highlights the Pentagon’s Project Maven, which first contracted Google, then Appen’s predecessor company Figure Eight, to outsource the task of labelling still images as buildings, vehicles, or people in processing drone footage.[9] Similarly, facial recognition has birthed a new era of digital phrenology, where racialized morphologies, gendered characteristics, and criminological pseudoscience have come together to make a market for the identification of faces and the composition of databases of profiling—a ‘facebook’, if I may. For the colonized and the subaltern, their data, as dead labour, can now fly above the world as a literal alien power, indifferent to their death and vampirically dependent upon their life.
Beyond our post-proletarian microworkers march the literal foot soldiers of imperial and colonial state projects, and the ways they take this data-production unto themselves, consciously feeding the algorithmic apparatuses that function as tools of a captured war machine. The automation of apartheid as an ongoing project is manifest in India, the US, China, and in the so-called ‘Wolf’ systems used by Israeli Occupation Forces in Palestine. As Amnesty International report, the Wolf system is a tripartite cybernetic Cerberus: the “Wolf Pack” names the database of Palestinian homes, addresses, familial ties, and status as regards their interest to the occupation authorities; the “Red Wolf” names the system that facially scans Palestinians and compares them with the preceding database; and finally, the “Blue Wolf” names the app that serves as the entry point for individual soldiers. As testimony from commanders and regular personnel in occupied Hebron (described by the IOF as a “smart city”) has shown, the soldiers are being encouraged to take pictures of as many Palestinians as possible.[10] The endeavour is treated less like waged microwork than like an obscene game, a militarized version of Pokémon Go. Of course, this is not simply an Israeli operation, but is tied entirely into the New Circuits of Imperialism, as Sivanandan rightfully called them in his essay of the same name. The technological infrastructure of the Wolves is provided by Dutch (TKH Security) and Chinese (Hikvision) companies, and the technological value of areas like occupied Palestine is ultimately of global importance. Palestine under occupation in the West Bank, as well as under genocidal displacement in Gaza, has been rendered a laboratory of oppressive technologies, an active field-testing facility in the act of murderous control.
For example, the drone manufacturer Elbit Systems notoriously advertises its products as “battle-tested” and “field-proven”, based on their local success.[11] In allowing this to happen, the world is failing the people of Palestine, and all those who will face these same weaponries, trained on this blood-soaked data set.
The production of data, and indeed New Flesh, is not by any means the exclusive remit of the Imperial Core. Rather, it is predominantly a matter for the periphery at its most oppressive intensities. Data is now often sourced, powered in its production, and regulated in the production of further feedback-data, through a captive set of producers and those forced into systems like Red Wolf, which extract data from them. As Sivanandan aptly recognized in 1989, a cybernetic and communicative capitalism “does not have to import cheap labour any more, with all its attendant social cost. It can move instead to the captive labour pools of the Third World and from one pool to another, choosing its locale of exploitation, its place of greatest profit, grading it according to the task in hand.”[12] These captive labour pools are increasingly not only those displaced by financial crises, in which losses are socialized at the cost of reinforcing the banks and megacorps (estimates from 2021 placed the number of global microworkers, predominantly in East Asia, South America, and India, at around 20 million),[13] but also those displaced by the very violence that feeds back into the production of the technologies that proliferate further destruction. The infernal feedback loop returns us again to microwork, where the truth is that “microwork programmes often target populations devastated by war, civil unrest, and economic collapse, not despite their desperate circumstances… but because of them.”[14]
Data is always extracted within controlled parameters, and the parameter of oppressive technologies is increasingly encountered as the perimeter of a fence. Where the fences aren’t already present, more are going up, and those who erect them are mobilizing, panicked by surges of people displaced or rendered destitute in the name of Empire. Climate catastrophe will create more displacement as land becomes (more) unusable, fires and floods cast people out from their homes, and competition for resources tied with revanchist wars and the turn of a new fascism pushes more and more people into these data plantations, where they are paid a pittance to correct the machineries which control them on behalf of their enemies, our enemies.
Deleuze and Guattari, A Thousand Plateaus (Bloomsbury, 2013), pg. 531. ↩︎
Bill Cashmore, We Hear Only Ourselves (Zer0, 2023), pg. 46. ↩︎
Phil Jones, Work Without the Worker (Verso, 2021), pg. 31. ↩︎
Ibid., pg. 32. ↩︎
Ibid., pp. 46–7. ↩︎
F.W.J. Schelling, Philosophical Investigations into the Essence of Human Freedom (SUNY, 2006), pg. 29. ↩︎
If we were to resist overcoding the CCRU and instead present it as a multiplicity of accelerationisms and accelerationist tendencies, I would argue that it was Land and Negarestani who placed the notion of a rationalist machinic intelligence in a central role in their respective accelerationisms. Contrast this with Fisher or even Greenspan, whose PhD theses focused more on themes of production, intensity, and temporality under capitalism, as opposed to this production of a historically inaugurated inhuman intelligence. ↩︎
Phil Jones, Work Without the Worker (Verso, 2021), pg. 63. ↩︎
Ibid., pp. 66–7. ↩︎
Amnesty International, “Israel/OPT: Israeli Authorities are using Facial Recognition to Entrench Apartheid” (2023). ↩︎
Middle East Eye, “UK: Pro-Palestine activists ‘not guilty’ after defacing Israeli arms company” (2021). ↩︎
Ambalavaner Sivanandan, Communities of Resistance (Verso, 2019), pg. 180. ↩︎
Phil Jones, Work Without the Worker (Verso, 2021), pg. 5. ↩︎
Ibid., pg. 13. ↩︎