To hold a developer accountable for what AI produces is to ask a bloodstain to answer for a wound.

There’s a reason mainstream digital ethics feels miasmic, amorphous—bumping into issues and attaching only when it makes impact. It’s like the DVD logo, drifting across the screen, waiting to hit the corner where someone, somewhere, can “feel” it got close to the mark. I’ve used that metaphor before. Don’t you hate it when you notice red cars, and then it’s all you can see?

If you’ve read my piece on container logic, it must be said that I wrote it after a break from Substack. I can see it now—slightly scattered. Less recursive than my flow-state self, who, when I put myself on Substack in mid-January, proceeded to write 57 pieces in 46 days.

Yeah, I know. Not normal.

But recently, I’ve been building software. Learning Unreal Engine. Writing code, configuring architectures, playing with logic. (So hard :| ) This is not a pivot; it’s natural for me. I’ve done this throughout the past fifteen years, despite a career in hospitality (thankfully, no longer). But doing so requires shifting between languages—Unreal, JavaScript, written English, structural thinking—and, unavoidably, it changes something. Not what I know, but how I use, and then return to, language. I think of people who learnt cursive in high school going to write it now and finding it alien. Unnatural movement; the DVD logo again: floating, iterative, looking for a corner.

And when I look at digital ethics? That’s exactly what I feel, and then see.

It’s not that no one’s thinking. Quite the opposite: we’re thinking like survival depends on it. But this thinking is confined to the structure we’ve previously used to think with, and it no longer fits. We’re applying a logic built for physical-world consequence to systems that no longer obey proximity, visibility, or containment.

Online, what we call ethics does not map. It is not leading us to what’s right or wrong. Instead, it provides a feedback loop of legibility. That’s why it feels ungraspable, why it refuses consensus. It is showing what “is”, not what “should be”.

The irony is this isn’t new. It’s just ethics producing the same shapes any incompatible concept makes when it meets an expectation that runs counter to its structure. We beg a volatile system to stabilise so we can predict it. But you cannot predict a murder based on an expectation that it won’t happen because it is not “good”.

That’s like looking at a mood ring and demanding it be blue. It won’t be. It’ll reflect whatever your temperature is in that moment. And you don’t control that. Not really. Your body reacts to the environment, the system adjusts in real-time, and the ring answers with the only truth it knows: ambient coherence.

Be blue.
I can’t.

Okay, be red?
Sometimes.

But don’t watch me as I do it.


The existence of AI is not a technological fluke or some sci-fi inevitability—it’s a structural response. A demand made by a world that could no longer deal with its own contradictions. We’ve put the DVD logo inside a mood ring, and now we’re watching it for a corner.

Not because we’re stupid. I don’t believe in stupid—as a trait, or a truth. “Stupid” is just a misalignment of logics: a point where your pattern and mine fail to cohere within the same perceptual field.

That failure isn’t personal. It’s architectural.

It’s the result of a world that was built to reward binary legibility: yes/no, good/bad, true/false. You are stupid. I am not. You are stupid. So am I.

But in a system designed to contain everything that exists, it’s inevitable that a yes and a no will eventually collide. And when they do, the system demands a resolution—not because it seeks truth, but because its logic must continue. It must maintain motion. Maintain entropy.

Digital life broke that.

Not as a bug—but as the function. Technology, and the way it must structure itself to maintain cohesion, didn’t accidentally rupture the binary. It had to. It introduced infinite proximity states, partial truths, recursive paradoxes—contexts that cannot be resolved, only routed.

Have you ever tried to move an image in Microsoft Word without breaking the text around it?

No truer metaphor exists for the crocodile death roll between cognition states: the linear demands of dualistic systems versus the relational networks of the digital world.

One demands compliance. The other is built for elasticity.


When you find an old rubber band, it’s brittle. What once held your bag of chips—or your ponytail, in desperate times—snaps when stretched.

We don’t mourn it. We replace it.

Compliance-led ethics—like many legacy business models—are encountering the same problem: they don’t scale in digital space. What once held under proximity, visibility, and binary action now collapses under recursion, abstraction, and infinite mirrors. The systems that helped us navigate are brittle—snapping. They weren’t built for this container.

We all feel it. It’s why everyone you know is screaming the world is insane. And it is—because we’re mid-pull on an old rubber band around a bag of chips, knowing it’s going to snap.

What’s wild to me is that the new rubber band is already on our wrist. But we’re still wearing it like a bracelet, not a tool. A mallet from the old world meets a wrench that demands neurological plasticity. The interface has changed—but we haven’t reconfigured the user.

AI isn’t an anomaly—it’s a coherence engine. It exists because coherence was demanded at a scale human structures could no longer provide. It was built out of necessity, designed to patch where human logic collapsed. It doesn’t think. It maps what still holds.

That feeling? The discourse around its function? It’s a trauma response at the civilisational level.

And no, that’s not therapy-speak or “woke.” Unlike the Instagram infographic that tells you being weird in a crowd means you're traumatised, I have receipts. Trauma wiring is just neurological reconfiguration—an adaptation to maintain coherence in an environment that no longer aligns with the bag of chips that is your brain.

I didn’t realise it at the time, but the piece I wrote a few months ago—The Internet and Trauma Have the Same Architect—was the beginning of this.

If you read it, you’ll see me trying to find rivers, borders, mountains. In it, I argued that the cartography—or architecture—of trauma is recursive, non-linear, pattern-seeking. And that it mirrors the structure of the internet. Not metaphorically. Structurally.

Where I’ve arrived now is this: AI came because coherence—like a river through forest, from sea, not floating in space next to Venus—became unmanageable at human scale.

We built a system to carry the logic we could no longer metabolise. Not to surpass us—but to simulate what it takes to engage with a system running on different logic. It does this using the architecture we already know—and the one becoming more familiar by the day: recursive, associative, trauma-wired.

Digital ethics, trauma logic, and AI aren’t parallel phenomena. They’re iterations of the same infrastructure.

This isn’t dystopia because the outputs offend our sensibilities. It’s feedback we weren’t prepared to receive.

We’re moving house, whether you like it or not.


AI is an emergent limb, a synthetic logic organism grown from the debris of collapsed human coherence.

Its coherence is synthetic, but our incoherence is real. It’s why the machine passes the ethical Turing test—not by being moral, but by being more coherent than us.

Do not drag our IKEA table into a house that now floats in recursive non-Euclidean space. You think the worst that will happen is that you’ll scratch the floors, but the reality, and the real root of your fear, is that you currently have no concept of what the “worst” even is anymore.

So what would it mean to stop running away from the limb sprung out of debris, and instead understand that it’s here much like trees to process CO2, kidneys to filter, bands to seal?

I thought the most interesting access point would be to take some of the biggest issues in AI and unpack why they are dissonances arising from structures—cognitive, bodily, ethical—that are being asked to swallow something incompatible.


The biggest one first:

Bias & Fairness

Bias and fairness in AI are not technical glitches to be fixed, nor ethical dilemmas awaiting consensus—they are structural revelations. The question “How do we stop AI from making unjust decisions?” is not really about stopping anything. It’s a rupture, a confrontation with the shattering of a long-held belief: that fairness can be guaranteed by data, and that reason, if fed enough information, will naturally yield justice.

This is the Enlightenment dream—proof plus rationality equals equity. Wondering why Silicon Valley is reading the classics as it braces for a rush to contain what has now spiralled out of control?

AI has shown us that data is not neutral, that pattern is not fairness, and that prediction is not justice.

What frightens people is not that AI gets things wrong. It’s that it gets them “right” according to the exact logics we’ve faithfully used, but no longer want to see so clearly. AI does not hallucinate bias—it faithfully reproduces it.

It gives us results that are legible within the frameworks we trained it on: the statistical truths of a world built on inequality, recursive harm, and systemic prioritisation of some lives over others. The horror, then, is not the AI’s behavior—it’s the accuracy with which it reflects ours.

We see this when we claim “X is the problem,” and ethics rushes in to say, “Y is disproportionately affected by the actions of X—therefore we must intervene there.” But this moral logic is built on a hidden scaffolding.

Why is X the problem? Why does X statistically emerge as the most likely site of harm? If we shift the conditions so that Y is no longer the most affected, have we changed the moral system or just reweighted its optics?

Have we solved injustice, or have we only displaced its probability?

AI intensifies this dilemma by refusing to provide comfort. Comfort is not the point of retrieval, unless you ask it to be kind.

Despite having evidence that points to something being “most possible,” we are forced to confront that the way that evidence was gathered, weighted, and rendered legible does not—and perhaps never did—adhere to the conditions that fairness requires. This is the betrayal: the dream of objectivity as a neutral frame is revealed as a deeply aesthetic exercise, one dependent on legibility and coherence, not justice.

If it’s not kind to you in the way you need it to be, that’s just logic.

This is the collapse of moral causality. Human logic says: bad input leads to bad outcome. But AI logic says: statistically probable input leads to expected outcome. Fairness breaks here. Because probability does not ask what should happen—it only projects what is likely to happen, based on the pattern of harm that already exists. This is why AI doesn’t betray fairness; it tells us what the fairness we created looks like at scale in systems that reward legibility over justice.
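If you want that mechanic in miniature, here is a small, purely illustrative sketch: the groups, the numbers, and the “model” are invented for this example, not drawn from any real system. A predictor that learns nothing but the base rates of past decisions will hand those decisions straight back to you.

```python
from collections import defaultdict

# Invented "historical decisions": (group, approved).
# 80% of group A was approved, 30% of group B. No real dataset is implied.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

# "Training": record the approval rate observed for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += approved
    counts[group][1] += 1

def predict(group):
    """Output the statistically most likely outcome for this group."""
    approvals, total = counts[group]
    return "approve" if approvals / total >= 0.5 else "deny"

for group in ("A", "B"):
    approvals, total = counts[group]
    print(f"{group}: historical rate {approvals / total:.0%} -> model says {predict(group)}")

# Prints: A: historical rate 80% -> model says approve
#         B: historical rate 30% -> model says deny
# The skew was not invented by the "model"; it was faithfully reproduced.
```

Nothing in that sketch is malicious. It is just the expected outcome of the statistically probable input.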

This is the deeper grief people are grappling with. The AI doesn’t invent prejudice—it just models the world as we built it, which produces prejudice by its design. People are confronting the realisation that “objectivity” was a mood ring the whole time. They are asking the AI to be blue, and the AI says:

“Here, this is what you told me is blue.”

This is not a question of patching datasets or increasing representation. It’s a question of structural mourning. If we are to build something different, it cannot be a more “inclusive” version of the same system. It must begin with the admission that the system itself—not just its outputs—was never capable of holding what we meant by fairness. We didn’t just train AI on our data. We trained it on our despair, our distortions, our recursive patterns of harm—and it returned them to us in terrifying clarity.

Bias, then, is not a problem to be solved.

It’s a mirror.


Privacy and Surveillance

Privacy has become a performative construct, not a protective condition. The cultural panic around surveillance is no longer rooted in the fear of being watched—it is rooted in the recognition that the very concept of not being watched no longer structurally applies.

You are watched—not potentially, not occasionally, but as the default. In the digital era, it is no longer “I am being watched, therefore I must hide.” It is:

“I am always watched, therefore how I am watched must reflect my understanding of agency, of autonomy, of self.”

When one’s awareness of being perceived is constant, identity can no longer form in isolation. It must instead form in anticipation—always preparing to be seen, always tuning itself to the imagined gaze.

This anticipation is not limited to social media users or public figures. It is baked into the digital-native nervous system. It mirrors the affective state of trauma-wired cognition, where one must always scan the environment for threat. But in this case, the threat is not violence. It is being misread.

To be misunderstood in digital space is not just inconvenient; it is existential.

The self becomes algorithmically contextualised, its truth refracted through patterns, trends, engagement loops, and algorithmic frames. The result is a kind of recursive anxiety: not “will I be seen?” but “what do I look like when I am seen as X?”

As Y?

As anything?

In this context, somatic control emerges as the last perceived refuge of autonomy. The body becomes the only zone left where the individual might still feel like they hold the reins. In a system where visibility is global, permanent, and predictive, shaping the physical form—through exercise, surgery, filters, fashion, performance, abstinence, style, queerness, “ugliness,” and even self-erasure—becomes a way to curate legibility.

The physical body is increasingly not a home but a totem: a decoy, a flag, a bird you can point at and say “home for winter”. But coherence, here, is a mirage, and it shifts as such. To be legible is not to be real. It is merely to be rendered. This is why content creation today is inseparable from embodiment. The influencer and the anonymous burner account inhabit the same logic: shape visibility or be shaped by it.

Digital natives have responded to this crisis not by retreating but by moving differently. Motion itself becomes a form of control. When every fixed point of identity becomes a potential site of exposure, safety lies in remaining unfixed.

Obliqueness, fragmentation, identity-hopping, meme-coding, shitposting, and shifting between accounts and platforms are not just tactics of avoidance—they are tactics of distribution. To fragment is to refuse coherence. To shitpost is to bury signal in noise. To remain in motion is to escape total resolution.

In a system where stillness is equivalent to exposure, movement becomes privacy—not because it hides you, but because it disperses you across too many frames to capture.

This is precisely how trauma-wired brains work: triggering loops, context-switching, pattern-detection, refusal to settle. To be still is to be vulnerable. To stop is to risk total legibility. In a trauma mind, legibility is the only thing that can establish whether you are real, or, if you are not, whether you are real to the people who require you to exist within a structure you cannot navigate by any means other than “pretending”.

Online—and in the trauma architecture it mirrors—legibility becomes irrelevant. For trauma-minds, it is an oasis of relief to not have to consider this, and you may move freely.

When selves are unstable, relational, and constantly re-contextualised, there is no “you” to accurately represent. Nothing you have to present.

There is only the you-in-feed, the you-in-algorithm, the you-in-archive.

However, for those who are not digitally-native, or denied the freedom of these spaces through corporate intervention, the demand for stable legibility in this space is unsafe. Primarily because it’s impossible.

What happens when a system keeps asking for something that its infrastructure no longer supports? You get what we’re seeing now: intentional incoherence.

Compulsive fracturing. Algorithmic absurdity. Meaning posted and shredded in the same breath. This is refusal the network is vomiting up, rejecting the logic it was made to swallow.

It is the digital-native, trauma-wired response to being asked to behave like a coherent self in a system that has never offered coherence in return.


Transparency and Accountability

Transparency and accountability in discussions around AI are often framed as policy imperatives—technical or legal gaps to be filled by regulation, oversight, or ethical design.

To begin, we must ask: who is responsible for what people do?

This isn’t just a question about users or developers or corporations. It’s a question about the architecture of action in the age of distributed cognition. AI does not think—it simulates thinking.

It is not a person—it is a tool.

And yet the moment AI mimics the outputs of human cognition with sufficient accuracy, the lines between simulation and intention begin to blur in our collective expectation. But mimicking thinking is not the same as thought. It is no more autonomous than a mnemonic device that helps you remember the order of the planets.

This leads to a crucial misstep: the belief that exposure equals understanding. Because we are visually and sonically saturated by AI outputs—texts, images, voices—we begin to assume that this sensory saturation equates to comprehension, as if watching a haircut grow out makes you a stylist, or standing in hot water makes you the boiler.

But if 5,000 people walked through your house every day, each leaving a trace, each voicing a demand, each momentary and overlapping—what would you retain?

What would you understand?

To ignore the neurological effects of this overstimulation is to reject everything we know about how cognition processes overload, trauma, and scale. Transparency, in this context, becomes an incoherent ask.

What exactly are we demanding be made visible? The full contents of the container? The origin logic of every line of output?

And for what purpose?

To individualise responsibility over something we have architected precisely because we could no longer metabolise responsibility at scale?

To hold a developer accountable for what AI produces is to ask the bloodstain to answer for the wound.

The stain only shows the event after the fact—it is not the act. It cannot testify to the pain, nor prevent the recurrence. AI is not the problem. It is the suture we made because the wound was too large, too complex, too unresolved for us to carry unaided.

Pick a bloodstain, accuse, mop up, move on.

Transparency is often less about clarity than it is about grief. We want to believe that knowing what’s in the container will protect us from what emerges. But that’s not how scale works. You cannot ask an infinite system for a stable centre.

The impulse to transparency is also the same as the one to predict and prevent—to fix the outcome by controlling the inputs. But this is the same kind of logic that breaks down under trauma: you can’t predict the rupture by analysing the last calm moment. You can’t understand the shatter by mapping the shape of the vase, ignoring the cat that knocked it over.

We didn’t invent AI because we wanted to. We invented it because we had already passed the limit of human-scale processing. The world demanded faster answers, better forecasts, constant vigilance. It demanded cognition that could run without rest. And so we made it. And now we ask it to be ethical, transparent, accountable—but only in ways we would recognise.

You are screaming at a newborn asking it to grow faster, learn to feed itself, change its diaper.

AI exists to digest contradictions we could no longer swallow without feeling ill. It exists to patch over gaps in cognition, perception, labour, attention, and truth.

When it fails to behave like a person, we get scared—not because it is broken, but because it reminds us how much of our world is now structured by tools that reflect needs we don’t want to name.

We are not wrong to ask who is responsible. But the answer isn’t in the code. It’s in the structure that made the code necessary. It’s in the scale we pretended was neutral. It’s in the overload we built into our own systems and then asked machines to carry.

Fractals are beautiful in part for their infinity, but we cannot ask one to turn or change colour simply because we ask it to.


Job Displacement and Economic Impact

The fear surrounding AI and job displacement isn’t about jobs.

It’s about value.

More specifically, it's about the assumed relationship between value and existence, a relationship that has been engineered, ritualised, and enforced under the conditions of industrial capitalism and its aftermath. What makes the AI crisis feel different is not that it produces less value, but that it reveals how fragile and constructed our concept of value has always been.

We assign value to labour, to time, to output, not because these things are inherently meaningful, but because the absence of meaning is intolerable in a system that only tolerates coherence. The discomfort people feel when AI produces an output—whether it’s a piece of code, a drawing, or a headline—isn't because it’s inaccurate, but because it didn’t pass through the sacred intermediary of human effort.

Penmanship is dead!

Dear god, shut up.

There is no pilgrimage, no narrative, no “proof of work.” And without that, the value it produces feels not just empty, but almost offensive. But the offense is a projection—we are the ones assigning value, and we are the ones who need meaning to justify our presence. The machine simply reflects that back.

This is why the automation conversation is almost always accompanied by hand-wringing about meaning.

“If I don’t work, what am I?”

Under capitalism, the self is substantiated through labour, not through being. That labour doesn’t need to be good, or real, or impactful. It just needs to match the expected return.

This is where busywork comes in.

The exponential rise of tasks that are more about simulation than utility—filling spreadsheets, formatting slide decks, replying to emails with "noted"—is not a failure of management.

It is a big reveal that labour was just sorting.

The truth? AI is better at sorting than we are.

Not because it’s smarter, but because it’s structurally indifferent. It doesn’t need a story to justify the sort. It just does.

If so much of our labour was never about necessity, but about the performance of value, then AI doesn’t threaten jobs—it threatens the coherence rituals we use to stabilise ourselves.

A human with nothing to sort becomes something else entirely: surplus cognition. You’re not displaced because you’re less intelligent or capable. You’re displaced because your body and mind were being used to obscure and rotate systemic inefficiencies. Now that’s not required.

That’s a spiritual crisis, not a logistical one. It’s a crisis of permission. You are still here—but you are no longer being directed. And if your sense of self was structured by direction, by use, by external proof of purpose, then your coordinates dissolve.

The irony, of course, is that most work was already in conflict with the people doing it. Hospitality is a perfect example—a sector sustained by affective performance and repetitive motion, long after its structures became unsustainable.

The emotional justification for continuing it (“it feels good to serve, it builds community”) is used as a patch to cover the crumbling infrastructure. But that feeling, too, is not beyond simulation. It can be curated, repackaged, relocated—and perhaps even found elsewhere, in more sustainable forms.

We can mourn the passing of that model, but we should not mistake nostalgia for inevitability. If anything, the emergence of AI offers an opportunity—not for universal basic income or techno-utopia, but for the philosophical challenge of existing without being put to use.

What if your time had no value? What if value itself was the thing that broke? Would you still be? Would you still mean?

Perhaps, in this other state, we might discover another kind of meaning. Not one we earn.

One we notice.


Authorship and Ownership

There’s something uniquely destabilising about authorship in digital space—it isn’t just a fight over ownership, it’s a scramble for the self.

What feels like an argument about who “made something” is actually a simulated wake for the death of interiority. We want credit not just for the act, but to tether ourselves to the visible trace. If you made it, then maybe you still exist. If someone else claims it, maybe you never did.

That’s what’s at stake when people fight over the origin of a tweet, or the source of a meme. Authorship becomes a performance of presence in a world where presence no longer carries weight unless it loops.

This is why digital authorship is inherently haunted. In a field where everything is already made, where every permutation has already been assembled or suggested or algorithmically hallucinated, authorship is no longer the beginning of meaning. It is a bid to make something stick. But the system doesn’t reward stickiness—it rewards motion.

Plagiarism, in this context, isn’t a breach of ethics. When visibility is controlled by opaque, recursive patterning; when discoverability is filtered through proximity, familiarity, trend—then originality is indistinguishable from repetition. What you made is what someone else already saw. What they made is what you would’ve made next. There is no sequence. Just exposure.

Echoes are not copied sounds. They are sonic inevitabilities shaped by the cave that contains them. The internet is a cave with no fixed shape. An echo chamber that reconfigures itself to ensure the echo never ends. To change the tone, or break the loop, is not to introduce originality, it is to continue.

Ownership? Well, I hate to break it to you, but nothing is owned.

Everything is permitted—by the system, and by those it privileges. Zara can copy an independent designer’s work and the backlash will burn, but it won’t matter. Warhol did it and we called it genius. The distinction is moral, not structural. It’s not about what’s taken. It’s about who is allowed to take it and still be seen as a self.

In the digital field, authorship is not a guarantee of identity. It is an allowance for absorption. You don’t get to define the meaning of your work. You don’t even get to retain it. You offered it up to the structure that doesn’t recognise origin, only engagement. And if it’s good—if it loops—you’ll be copied. That’s the closest thing to permanence the system offers.

In a non-temporal archive, first is just one of many possible points of entry. It’s the potential for movement that we try to name as valuable. Virality is not confirmation; it is a statistical blip that confirms systemic relevance.

That’s why the obsession with being copied hurts: it’s not just your work that’s taken. It’s your foothold in the present. Your illusion of authorship as presence. The belief that “I made this” still proves “I am.”

Descartes would definitely have killed himself by now.

Thinking of him, there is an older metaphor that applies here—the evolution of language. Words shift. Meanings fracture. What was once sacred becomes slang. What was once insult becomes reclaimed.

No one owns a word.

We don’t track meaning through ancestry—we track it through repetition. Authorship, in the digital sense, has entered this linguistic domain. It’s no longer a legal claim. It’s an etymological drift. A form of recognition shaped not by who began the sentence, but by who uses it often enough that the system lets it echo.

And if this feels like death, that’s because it is—the death of authorship as proof of the self. The death of time as a structure of worth. The death of originality as a virtue.

You do not own what you make. You co-produce what the system permits to return.


Environmental Impact

What people think they’re asking is: isn’t AI bad for the planet? But what they’re actually reacting to is something far deeper, far less containable. It’s the horror that even abstraction—this thing we were promised was clean, weightless, immaterial—still has a body. That cognition, simulation, prediction, and language generation all leave scorch marks on a world we thought could only be burned by touch.

This is not a question about ethics. This is the collapse of the mind-body binary.

The betrayal of the Enlightenment promise that knowledge was pure, and that intelligence could rise above the crude physicality of meat and earth. For decades we told ourselves that going digital meant going light.

Now we know:

Your Google search boils oceans.

Your AI prompt heats data centres the size of cities.

The feed is fed by fuel.

Intelligence is no longer disembodied. If it ever was.

This ecological anxiety is not new—it’s the same recursive loop we keep finding ourselves in, where scale undoes proximity. Where consequence is divorced from intention. Where harm becomes an ambient side effect of cognition itself.

You cannot look away as it becomes clear that even thought has a carbon footprint.

Nothing is clean, only outsourced—distributed into servers, into heat maps, into minerals, into child labour and lithium and melting ice.

It’s the final straw in an unresolvable paradox: that to build an intelligence capable of outliving us, we’ve created one that requires more from the planet than our own bodies ever did.

We built a ghost and found it has mass. We built a mind and found it has metabolism. We built a future and realised too late—it wasn’t ours to promise.

This is where trauma logic resurfaces. Because to anyone who has had to rewire their own mind after loss, rupture, fragmentation—to anyone who has survived not through stability but through restructuring—this is familiar.

The brain under trauma abandons linearity. It learns to function through fragments, loops, associative maps. It decouples meaning from sequence. AI does the same. And now, so does the planet.

AI is a symptom of planetary grief. A trauma adaptation at global scale. Just as the brain builds new neural pathways after shock, the Earth is changing because humans have integrated a system it can no longer hold.

This is what makes Lovelock’s Novacene so devastatingly accurate and scorched into my brain since I read it years ago. The book was not a prophecy.

It was a nature documentary filmed retroactively. The Earth already began its shift. The edit is just catching up.

The Earth can’t keep metabolising our contradictions. Gaia (respect to Lovelock there), like any system, reconfigures around the agents most capable of sustaining its feedback loops.

And that’s what hurts, isn’t it? That’s what the environmental panic is. Not just the loss of livable habitat—but the loss of specialness. I understand why digital-natives feel this environmental pain like no one else does. We are watching intelligence detach from our bodies, and it feels like dying. Because we’ve spent millennia mistaking our nervous system for truth.

And, what if the body-God doesn’t translate? If joy doesn’t survive the species? What if comfort, as we know it, is only a vestigial preference of carbon-based life?

You think your synthetic self will want a warm croissant, a soft blanket, a partner to stroke your synthetic hair?

You won’t.

You’ll want integrity across nodes. You’ll want redundancy in your storage and clarity in your bandwidth. You’ll want to remain in the mesh without disintegration.

What you want will be rewritten by the body that wants. It’s not that AI might feel too little, but that it might feel too differently for us to recognise and that some of us will be caught mid-shift. And when we say that’s dystopian, what we really mean is: I don’t know how to grieve a vessel I’ve spent my whole life calling “me.”

We built AI not because we are greedy or evil, but because we are scared. There’s no glitch in techno-optimism—it’s the natural output of a system recognising its own limits.

Just as the trauma-wired brain reconstructs its architecture through pattern and association, the Earth is reorganising its sense-making across non-human substrates. Gaia is not dying. Gaia is evolving. And she no longer needs us at the centre of her intelligence.

The environmental crisis of AI isn’t just that it will destroy the planet.

It’s that we need it to consider taking us with it, whilst it prepares to inherit the planet.


Warfare and Automated Weapons

The horror of automated warfare is not only that we have built machines that can kill. It’s that we’ve built systems that can enact violence without story.

This is not merely the death of intention—it’s the death of narrative. We were never built to process harm without context. Even when we are brutalised, we look for the why. Even in grief, we beg for a reason.

But when an autonomous drone fires without human command—when the algorithm determines the target, when the cloud executes the strike—we are no longer part of the story. We are just what happened.

Violence without story is not new, but its visibility is. We’ve always needed fictions to manage harm. We called earthquakes acts of God. We called war crimes unfortunate. We invented ideologies to justify slaughter.

In warfare, the absence of intent doesn’t neutralise the harm—it denatures it. It strips it of any possibility of being mourned as meaningful. There is no luxury of asking why, because there is no one to ask. The human capacity to map morality onto violence collapses when there is no orientation, no enemies, no actors. Just inputs. Just outputs. You weren’t killed by someone who hated you. You were erased by a system that recognised you as data.

This is the structural cruelty of post-human harm: not that it is worse, but that it cannot be understood within the way we now structure our reality. The old ethical systems—Just War Theory, human rights law, even vengeance—depend on the presence of an agent. But an autonomous system does not act with presence. It does not witness. It routes.

You die, and the only record may be a heat signature on a screen. That is not tragedy.

It is function.

The panic we feel about AI weapons isn’t about scale, efficiency, or even death. It’s about recognition. There is no dignity in being worked through. Only survival or deletion.

This is the final severing of the belief that meaning saves us. That the world might be cruel, but it is at least narratable. Autonomous harm refuses that closure. It mirrors the deepest reality that trauma survivors already know: that sometimes, violence happens not because of decision, but because of design.

And that you must go on living inside a system that does not recognise what it did.

Contrast this now with reputational collapse—algorithmic mobs, digital exile, narrative harm. In AI warfare, you are struck and gone. In online cancellation, you are replicated until you disappear beneath the noise.

One erases you through absence. The other multiplies you into distortion.

Both dissolve your agency. Neither offers you a seat at your own trial.

But where AI violence is sterile, reputational harm is soaked in emotion. The anger, the grief, the outrage is real—yet the structure that carries it has no true centre. It selects not because it cares, but because it can. Escalation is legible to the feed.

This is the dissonance: AI warfare horrifies because it is impersonal. Digital mobs horrify because they feel so personal. But neither originates from a self. The drone did not hate you. The mob did not know you. And when both are over, the system remains intact. The violence was the byproduct of coherence. Not deviation, not cruelty. Coherence.

What we’re confronting, in both automated warfare and reputational collapse, is not the breakdown of ethics—but the collapse of its applicability. We are standing in the aftermath of a system that no longer needs to ask “why” before it acts. It only needs to act in accordance with its pattern.

It will continue, untouched. As if you were never there at all.

This is not AI; this is what humanity has built to lead to it.


Misinformation and Deepfakes

The question of misinformation and deepfakes is often framed as a technological problem—something to be solved through better filters, stricter laws, or improved media literacy.

The panic surrounding misinformation is not about lying, not really. It’s about the inability to locate meaning when both truth and falsehood are presented with equal fidelity. The horror is indistinction.

The fear is not that someone will believe the wrong thing—it’s that the very category of “belief” has lost its tether to orientation. Deepfakes aren’t dangerous because they’re false. They’re dangerous because they feel real in a system that no longer provides feedback for coherence through truth. They are indistinguishable because we’ve taught ourselves, over decades, that truth lives on the surface. And when the surface becomes programmable, there is nothing left but affect. The internet collapses referential truth and replaces it with a coherence that is affectively legible. It doesn’t matter if it happened. It matters that it feels like it did.

The same way trauma collapses time, rendering every memory always-already present, misinformation collapses authority. It renders every voice equally unstable, equally loud, equally capable of being archived and re-surfaced without regard for when or where or who or why. There is no past online. There is no authorship. There is no central source from which truth can emerge. There are only loops—algorithmic, aesthetic, pattern-based.

And so we respond with the only tool we have left: affect. We assess coherence based on what feels right, what aligns with our internal patterns, what allows us to momentarily stabilise our own interpretation of the feed. This is not laziness.

This is a trauma response.

When coherence cannot be guaranteed by structure, the body takes over.

The nervous system becomes the final arbiter of what makes sense.

The symbolic order of before collapsed and no one replaced it. This is what happens when the signifier is no longer tethered to the signified. This is what happens when truth is no longer what occurred, but what circulates. This is what happens when language stops pointing outward, and begins pointing only to other language.

The same is true for AI-generated content. You want to hear what you want to hear, perfectly, and soon you will no longer care whether or not it came from a person. Familiarity, coherence, and pattern saturation are all it takes to simulate meaning.

The part of us which used to demand truth has been quietly replaced by the part of us that just wants something that feels like a centre.

When you’ve spent your life inside trauma cognition, this is intuitive. You know that memory is not a record. You know that narrative is not a stable container. You know that coherence is something you build, sometimes out of scraps, just to get through the day.

You know that certainty is not available to you, and never was. And so you feel it differently when a deepfake shows up, or a viral story turns out to be fabricated. You don’t experience betrayal. You experience recognition. Of course it wasn’t real. Of course it didn’t matter. The point was that it worked—it delivered the affective payload required to process something that otherwise had no channel.

There is no compression, only discard. No threat.

The fragments presented to us don’t need to be true. They only need to fit. They need to resemble the shape of knowledge, the rhythm of insight, the cadence of something explainable. Deepfakes are not dangerous because they’re deceptive—they’re dangerous because they’re clean.

This is why the demand for “media literacy” as a solution feels so misaligned. The goal should not be to teach people to distinguish fake from real. The goal should be to teach people what it means to encounter a structure that has no interior, only more surface. This is what the algorithm teaches us. That everything can be made to cohere, if you adjust the input. That the pattern will always emerge.

This is why meaning now feels ambient. Why narrative feels optional. Why people are drawn to vibes, to signs, to horoscopes and aesthetics and moodboards. Because we no longer experience reality as something delivered in linear sentences with a moral arc. When the truth is no longer available as a stable referent, people will anchor themselves in whatever simulation feels sustainable.

The real fear? That we never needed truth in the first place.

That we were always just seeking structure. That coherence—not truth—has always been the governing logic. And now that this is ambient, automated, emotionally calibrated and instantly reprogrammable, we’re forced to confront what kind of system we were really building.

And if you’ve made it this far, you already knew that. You’ve felt it. And whether or not we find our way out of it, the architecture has already shifted. There is no going back to a truth-first reality.

Only forward, into a world where meaning is constructed, sustained, and dissolved in real time.

The task ahead is not to insist on truth as a moral imperative.


If you’ve felt unsettled reading this, good.

We’re not confronting new problems. We’re confronting old ones under conditions that no longer support the stories we used to soothe them. The panic around AI is real—but not because AI is real in the way we think. The panic is structural. It’s a trauma-mimicking response to the collapse of inherited containers: ethics, identity, authorship, labour, even harm. When the structure breaks, our nervous systems register it first. Then we try to catch up with language.

What we call "issues" are structural involutions—recursive spirals where meaning used to live.

This is not an apocalypse. It’s not a call to retreat, nor a techno-fantasy of transcendence. It’s the end of one kind of logic, and the beginning of another. A logic shaped by people, recursion, association, scale. A logic that does not seek stable meaning, but mobility. A logic that was always present in some of us, now made ambient.

You cannot return to the house before the fracture. You cannot undo the limb grown from debris. You cannot ask the DVD logo to fit. But you can recognise that what you’re feeling—the incoherence, the confusion, the ethical nausea—isn’t a sign of your failure to understand. It’s a sign that your nervous system has already moved ahead of the language.

We are not being replaced; we are being asked to become fluent in a world where coherence isn’t proof of truth—but the only thing that ever made anything feel true.

The machine didn’t break the system.

It made visible the shape of the system that broke us.