Workplace artificial intelligence training won’t help us understand what we are doing, writes Jason Walsh
Make no mistake: artificial intelligence (AI) means job cuts. In part, this is because of the prevailing wisdom that ‘AI-augmented’ workers are more productive. More importantly, though, cuts excite investors. Once you understand this, the fantasy driving AI hype makes perfect sense: fewer producers, fewer consumers, and businesses shifting symbolic money around between themselves and the shrunken pool of people able to spend their money on Veblen goods.
This process was already underway before we had developed plausible simulations of thinking and working: such is the lopsided concentration of wealth in the United States that the richest ten per cent of households now account for some 49% of consumer spending. Still, AI is accelerating the trend.
Today’s announcement by Facebook owner Meta Platforms that it planned to make 10% of its workforce redundant makes this perfectly clear. Chief executive Mark Zuckerberg, for instance, previously noted AI would lead to “flattening teams”. Likewise, Microsoft is planning to shed staff in favour of building more sheds.
It is against this background that Ireland’s government, like its peers elsewhere in the EU, has announced plans to train workers to work with AI: the AIReady.ie platform will offer a series of short courses aimed at making workplace AI use as commonplace as any other information technology tool.
Launching AIReady.ie this week, the minister for further and higher education, research, innovation and skills, James Lawless, said: “We are now at a point where AI readiness is no longer optional – it is essential. Being ‘AI‑ready’ is about more than technology, it is about giving people the skills, confidence and understanding they need to participate fully in an AI‑enabled society.”
I have no particular objection to a few training courses being offered, but let’s tell the truth: like so much in the technology industry, the rush to AI is a cargo cult. Fringe absurdities, such as the now infamous shoe company ‘pivoting’ to AI, may be little more than desperation, but in all their extremity they do lay bare the shape of the thinking that will itself shape the things to come.
Hard of thinking
In his 1998 book, The Corrosion of Character, sociologist Richard Sennett documented the impact on workers when skilled labour, such as baking, was transformed by quasi-automated, rules-based processes. The result was not merely de-skilling but the erosion of the worker’s capacity to understand, and therefore to critique, the process they were part of.
AI training, as currently conceived, is the digital equivalent. How could it be otherwise with the level of skills we are talking about: pressing buttons?
Here is what the department itself says: “Current content focuses on building foundational AI literacy and practical digital capability. Initially, the platform will offer a series of four free, short courses tailored to older people, small businesses such as sole traders and farmers, and individuals returning to the workforce. Each AIReady course takes less than 30 minutes to complete and can be accessed on a smartphone, tablet or laptop, allowing learners to build skills quickly and flexibly, wherever they are.”
Typing sentence fragments into a box is not something that requires significant training. What workers asked to use AI might benefit from, though, is education. Education in, for instance, epistemology, which would allow them to interrogate the ‘facts’ cheerfully churned up by chatbots. Education in the English language, meanwhile, might rid us of common misuses, from American orthography to the assumption that nouns should be capitalised as though we were all speaking pre-reform German.
This is not just a journalist’s tedious stylistic preference. To write a grammatical sentence with a subject, verb, and object that means what you intend it to mean requires having actually completed a thought. Consequently, the sentence is evidence that the thinking occurred. Bullet points, fragments, and AI-generated prose can simulate the appearance of thought without the thought having occurred.
Taste, too, matters, and while it is inherently nebulous, discernment can be taught, at least in the sense that repeated exposure to ideas, speech and all forms of human culture produces reference points that allow us to contextualise and critique other aspects of culture.
With an AI, the box you are typing into is, in fact, not a box but a representation of one, a simulacrum. This sounds trivial. It is not. Understanding the process requires a basic understanding of the machine and how it works.
The chatbot is a representation that has replaced the thing it represents, making the original inaccessible – and possibly even meaningless. The interface not only looks like a thinking entity but is, for most users, indistinguishable from one. The consequence is that the AI, which is in reality performing mathematical ordering of data, produces output that is then projected into the world as fact, as though reality were calculable in advance. It is not.
On top of this, there is the brute fact that we, humanity, do not have a working model of the mind, and that any ‘mind’ we construct will be only partially understood even by its creators.
The end result is that, beyond simple, well-understood tasks, what AI is doing is imposing patterns onto noise and then mistaking that pattern for truth.
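The point about patterns and noise is not rhetorical; it is a familiar statistical failure. A toy sketch makes it concrete (everything here is illustrative and not drawn from any real AI system): fit a simple model to data that is, by construction, pure noise, and the machinery will still dutifully report a ‘pattern’.

```python
import random

random.seed(0)

# Pure noise: 20 points with no underlying relationship at all.
xs = [i / 20 for i in range(20)]
ys = [random.gauss(0, 1) for _ in xs]

# "Impose a pattern": fit a straight line y = a*x + b by least squares.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# The fit happily reports a slope and an intercept -- a "pattern" --
# even though the data contains none by construction.
print(f"slope={a:.3f} intercept={b:.3f}")
```

The calculation is flawless and the answer is meaningless: nothing in the arithmetic distinguishes signal from noise. That distinction has to come from a human who understands the process, which is precisely what button-pressing training does not teach.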
In short, AI compromises our ability to think and, thus, the cargo cult becomes numerology.
Anyone who has actually used AI will know that, useful as it can be, it also acts as an impediment to comprehension. Tasks are performed, correctly or not, but not comprehended and results are decoupled from cognition. This is, literally, stupid.
Addressing this gaping knowledge chasm is not what AI training, as it is being proposed by business and government, does. Instead, what is offered are ‘micro-credentials’ supported by ‘industry partners’.
What never seems to be on offer is a toolbox that can be used to open up the machine and peer inside. More immediately, however, it is long past time that we, as consumers, computer users, and as citizens, demanded better.
The message tech companies have been telegraphing, for decades now, is straightforward: bad is good enough. It isn’t.


