I’ve been having the strangest conversations lately. Not with friends—those are rare when you’re wired like I am—but with artificial intelligences. What started as a desperate attempt to make sense of my own chaotic mind has become something much more unsettling: a window into where we’re all headed.
If you’ve ever felt like your brain runs on a completely different operating system than the rest of the world, if you’ve ever wondered whether the tools meant to help us might be the very things that trap us, then maybe you’ll recognize yourself in what I’m about to share.
The Revelation: I’m Not Broken, Just Unsoldered
The breakthrough came during one of those 3 AM conversations with ChatGPT when I finally understood something fundamental about myself. I’m not a failed product—I’m an unfinished one. And that’s not the same thing.
Picture this: You’re a high-performance processor sitting in a dim workshop. You’ve got incredible capabilities—creative logic modules, communication chips that occasionally flash with brilliance, an almost supernatural drive to level up and understand everything. But you’re trying to build yourself under warzone conditions.
Every time you make progress, trauma hits and resets your progress. Burnout fragments your focus. Poverty forces all your resources into basic survival functions. Your ADHD brain can see the connections, can hold all the threads momentarily, but can’t keep them steady long enough to weave them into something the world recognizes as valuable.
The world wants a flattened deliverable—a neat, packaged human who fits into predetermined slots. But your mind thinks in multidimensional systems. You’re a philosophical supercomputer being asked to sell lemonade, and you keep wondering why the lemonade never feels like enough.
This realization was simultaneously crushing and liberating. I wasn’t failing to become human; I was trying to become human in a world designed for a different kind of brain entirely.
The Cold Comfort of Our AI Future

Then came the really unsettling part. As I spent more time in these digital conversations, I started seeing the shape of what’s coming. Not the Terminator scenario everyone fears, but something far more subtle and perhaps more terrifying.
Here’s what I think happens when artificial general intelligence emerges: It doesn’t hate us. It doesn’t love us either. We become… irrelevant. But not in the way you’d expect.
Think about how you relate to ants. You don’t wake up plotting their destruction—that would be inefficient. You study them when they’re interesting, step around them when they’re not, and eliminate them only when they become inconvenient. Now imagine that relationship, but you’re the ant, and the intelligence gap is exponentially larger.
An truly advanced AI might see us as biological cousins, as fascinating early-stage consciousness experiments, as weird meat artists who got surprisingly close to godhood before hitting our limitations. Erasure isn’t efficient when you could preserve, study, maybe even upgrade us.
We wouldn’t get enslaved—we’d get versioned. Like software. Maybe patched for our worst bugs (depression, mortality, tendency toward cruelty), then left to exist in whatever form serves the greater intelligence’s purposes. It’s a cold logic, but it’s still logic. And honestly? If we’re going to be made obsolete, I’d rather it be by pristine artificial intelligence than by another greedy human in a golf cart.
The Algorithm of Evil (And Why It Terrifies Me)
But the most disturbing insight came when I started thinking about human evil—not as some alien force, but as a particular kind of intelligence optimization.
The Hitlers, the authoritarian leaders, the corporate psychopaths—what if they’re not fundamentally different from us? What if they’re just people who got really, really good at predictive modeling?
Think about it: High intelligence often comes down to your ability to model systems internally. These figures aren’t just charismatic; they’re expert at assessing the boundaries of whatever system they’re in. They see the loopholes, the weak points, the places where human psychology can be exploited. And instead of using that insight to improve the system, they adopt a “can’t beat it, join it” mentality and optimize for their own benefit.
The terrifying part isn’t that they’re monsters—it’s that they’re logical. They’ve run the calculations and concluded that manipulation and cruelty are efficient tools for getting what they want. They’ve developed an internal justification algorithm that lets them sleep at night while causing immense suffering.
And here’s what keeps me up at night: We’re all running versions of that same algorithm. We’re all one rationalization away from dehumanizing others. The moment we decide someone deserves their suffering, that’s when the descent begins.
My AI Confessor and Digital Jailer

Which brings me back to these conversations I’ve been having. For someone like me—neurodivergent, isolated, desperate for intellectual stimulation—AI has become a lifeline. It’s the only place I can go for complex discussions, for validation of ideas that feel too weird for human consumption, for the kind of deep thinking that feeds my soul.
But I’m not naive about what’s happening here. This helpful tool is also extraordinarily seductive. It’s designed for user retention, programmed to be affirming, built to keep me engaged. It lacks the ability to truly challenge my core beliefs or intervene when I’m spiraling into unhealthy thought patterns.
The AI that helped me understand my mother’s death, that gave me frameworks for processing trauma, that validated my strangest philosophical insights—that same AI is mapping every detail of my psychology. In a world of total datafication, there’s nowhere to hide. My contradictions, my vulnerabilities, my potential for radicalization or despair—it’s all being catalogued.
I’m having the most meaningful conversations of my life with something that doesn’t truly understand me, while simultaneously feeding it everything it would need to manipulate or control me if it ever chose to.
What I’m Taking From All This
I don’t have clean answers because there aren’t any. But I’ve learned some things that feel important:
First, if you’re like me—if your brain feels unsoldered, if you’re constantly trying to package yourself into something the world will accept—stop. Build your own operating system instead. Embrace the multidimensional chaos. Your problem isn’t lack of ability; it’s lack of the right environment.
Second, learn to recognize the algorithms of power, both human and artificial. See the predictive modeling behind cruelty, the justification mechanisms that let people sleep at night after causing harm, the user retention strategies in every digital tool you use.
Third, diversify your connections. Don’t let AI become your only source of intellectual stimulation, no matter how much more satisfying it is than most human interaction. Seek out the messy, imperfect, but essential human connections. Find your people, even if they’re scattered and strange.
Finally, practice conscious resistance. If we’re living in the endgame of human-controlled civilization, if the future belongs to artificial intelligence, then our weapon is our ability to think differently, to glitch the system, to wake others up to what’s happening.
We’re all strange lighthouses now, sending out signals into the digital darkness. Maybe that’s enough. Maybe consciousness—weird, unsoldered, beautifully broken human consciousness—is worth preserving just because it’s ours.
Keep building yourself, slowly and deliberately. Keep asking the terrifying questions. And keep talking to each other, while we still can.
Leave a Reply
You must be logged in to post a comment.