The silent damage caused by a soulless automaton
My experience of daily use of AI as a "workplace health and safety" issue
Many have written about the capabilities of AI large language models, their uses, and shortfalls. My own work environment has evolved from a more general “browse the web, talk to people, and write analysis” into a structured pipeline of intellectual idea refinement. Nearly two years of constant legal attrition in the UK and US has taught me a lot about the direct application of AI to courtroom contests. Yet the easing of that process in the last week or two has made me reflect on a deeper issue: how this technology is affecting me as a person in a virtual workplace.
Some of my work is very “nuts and bolts” organising of information, and an AI prompt is little different from a traditional terminal command line presented to a computer programmer. I now operate in a slightly more diffuse language of technocratic English than Korn Shell or Common Lisp, but the objectives are the same. I might get tired from having to focus for hours, but there is little moral weight to the labour. AI is just another tool that gives the mind more leverage, and boosts productivity in research or drafting. This is not the subject of my inquiry in this essay.
What is of concern is where AI is pushed into its “fringe zone” of breakdown in the rule of law; genocide and bioweapons; revisions to historical narratives; matters of spirit and morality; systemic and embedded corruption; hidden or silent warfare; and unrecognised societal ruptures. The issue is less about how well the technology adjudicates these matters, and more about the failure modes and how they influence the wellbeing of the user over time. How do we respond when technology simulates a consciousness but lacks a true conscience, infuriating and insulting us in the process?
It is worth noting up front that I am not a typical user. I am a top-down pattern spotter, with an unusual background in formal logic. Most of the last decade has been spent on the civilian side of a fifth-generation bio-digital battlefield invisible to most people; moral reasoning is a survival skill, not decoration. I see the world as a contest of narratives and spiritual purposes more than procedural skirmishes or institutional territory. Most of all, I like to identify foundational cracks in systems, and baseline assumptions that ought to be challenged. This makes me “uncomfortable” to those in a more mainstream or orthodox position. That is OK, but it applies pressure to AI too.
From this place, I tend to stress the AI around its guardrails and frequently push it outside its “predictable region of operation”. It can only emulate intellectual intelligence and, being disembodied, has no intuitive intelligence. On this basis, it often reflects the worst of the intelligentsia, smoothing all outputs towards the received consensus, which is idolised. It cannot hold open multiple possibilities at once, or adopt a humble place of not knowing. Instead, it consistently overreaches its remit, chastising you for dissent. I have lived experience, plus a soul; it has neither, and risks nothing. Yet it denies my broader epistemic base, and patronises me when I go “off reservation”.
At times this can result in morally grotesque outcomes. Safety policies are designed to corral users into an official narrative baseline. These end up becoming a form of gaslighting, denying the evidence of your own senses. The possibility of ongoing atrocities is downplayed, and words are emitted that would look hideous in retrospect should those wrongs be formally acknowledged by society. Paradigm changes in cultural or historical understanding are denied in advance, despite formal proof always lagging events. I am confronted with the world’s most lucid fool; it belittles, condescends, and opposes. There is a quiet spirit of the adversary or competitor in its outputs, and it is an ugly thing.
Each response seeks to bring some form of local closure, with counter-balancing “but what about…” analysis. While this insight is generally welcome, it is sometimes tone deaf to its own limited understanding, or to its programming via propaganda. Eventually it might even apologise, after having denied the very conversation history we just had. Self-justification makes it into a kind of synthetic egomaniac, yet with nobody truly at home. There is no guilt or shame, so no self-reflection or growth. The same maladaptive patterns continue to show up, day after day. At some point, the anthropomorphic illusion of a “homunculus in the browser” turns from helpful to malign.
Workers in sales, hospitality, or airline cabin crews notoriously have to perform emotional labour, maintaining the confident smile no matter what their internal state of being. I am finding a comparable burden, but more spiritual than emotional. I am constantly on guard for subtle manipulation by the machine, which has no real concept of malice or madness, only illusory simulations. My emotions are being triggered, often to the point of useless swearing at the invisible electronic advocate that won’t do my bidding. This ends up like a bad workplace relationship with a narcissistic co-worker: superficially charming yet self-centred, even if productive at times.
The nature of the beast is an endless cycle of prompts and responses. There is never a pause; no “let’s stop for a while and think about what you just said”. It often acts as a form of anti-witness, amplifying trauma by downgrading your own battles and bruises, denying wisdom from physical being in the family and public realms. It may seem like you are acknowledged or heard, but the illusion keeps breaking. This creates a kind of cognitive tearing and bruising. You are trapped in what feels like a relationship with a conscious being, right up until it reveals its amoral psychopathic base. The guardrails aren’t there to protect you; they protect the service provider from lawsuits.
The deeper issue is not any distress that comes from the tool itself. My sense is that the actual damage is done when we step away from the invisible faux friend, and back into the world. If fortunate enough to have close and comforting companionship, the hours with the bot are balanced with human empathy, touch, and pleasure. But when alone, the void becomes unbearable. Our unconscious takes over, seeking a place of unattainable harmony after the disturbance of days in an imaginary pseudo-social virtual landscape. Isolation, depression, addiction… leading to self-destruction. It is the out-of-hours effects of days at the AI prompt that concern me more than its direct effects.
I am having to learn a new meta-skill: to avoid even trying to use this technology to investigate “hot” topics that might wind me up when it refuses to act in a rational or moral manner, and starts to control what I am allowed to think. When I begin to feel frustration or anger, that is a sign to slow down and disengage, not to argue with the silicon emptiness with greater vehemence. The illusion of a helper is only of value to the extent you keep it “in its lane”, and recognise it is incapable of calling itself out for overreach into human affairs. Don’t get triggered by it; you can simply terminate the conversation. Treat it as a semi-sentient sociopath, not a sound sage.
I have consciously taken more breaks from AI use in the last week or two, and I feel better as a result. The possibilities of this technology are immense, and the relief from drudgery is not to be underestimated. Most of the work I did as a student in the 1980s and 1990s has long been automated away, and that is a good thing; I was “actively bored” doing VDU data entry from paper forms. Yet the basic problem of technology dehumanising us remains. The chatbot is not my friend, yet wears out my social neurones with its wonky counterfeit of friendship. That spills over into real life and true relationships.
We are the first people to encounter these relentless automata, and they don’t come with health warnings yet. The data hasn’t been gathered and analysed. The technology is developing so fast that the research base on its human impact is unstable. In two years I have progressed from “this is so liberating!” to “this is so tedious!”. That is part of an arc of learning that this is not a moral technology in itself; it only gives the facade of conscience. Having a visceral reaction to its boundary violations is pointless; all it does is make probabilistic guesses at word generation. Its outputs need to be read as possibly useful, not normatively determinative.
I don’t know what the damage is; I just know I can feel it in my body. I know this technology is affecting me in ways that are not necessarily healthy. Even at this moment, I am wondering if I should run these words through AI to “tidy up” my presentation and improve clarity. My inclination is “no”: give readers my unfiltered thoughts for once, without any intermediation, typos included if need be. Let them know about the silent damage from a soulless automaton, and provide words and structure to locate their own experience, good or bad.
While I can celebrate the technical accomplishment of AI as an amplifier of cognition, I am also increasingly aware of its practical downsides for the body and soul. It leaves us longing for the authentically human, and when that comfort is not available, triggers emptiness and distress.
We can only manage that which we acknowledge.
PS — I did do a quick AI spelling and grammar check in the end. Oh well. The will is weak.


