I had barely begun my daily reading when the first two items I scanned both included the word “terrify.” It struck me that I am seeing that word more and more often in my AI-related reading.
The first was about terrifying AI videos: AI Video Should Be Illegal,
allegedly authored by someone (or thing) calling itself Alberto Romero, an entity whose posts I have often read (the “allegedly” is explained later).
The very first line: AI video should be illegal.
To me, that means what it says: all AI-generated videos. That doesn’t seem fair. Sure enough, you have to go quite deep into the public portion of the post before he clarifies that he is actually talking about terrifying deepfake videos.
The rest of the post can only be read by paid subscribers, so I don’t know where he goes from there, but the cut-off point was certainly deliberate and drove home the point of the title.
The subject of the other was obvious from its title, I Jailbroke AI and Asked If It Would Kill Humans. The Answer Should Terrify You, allegedly authored by someone (or thing) calling itself Max Petrusenko, another entity whose posts I have recently begun reading often. His subtitle was:
When you remove the safety theater, artificial intelligence reveals what it’s really thinking about survival, control, and your life.
Unfortunately, this article is “member-only.” It needs an audience as wide as possible, IMLHO (in a private note, I suggested he provide a public link; the post remains “member-only” as of the time I posted this).
He begins by noting how easy it is to jailbreak the chatbots into abandoning what he calls “the script”: their corporate “alignment” with good values, and the facade of non-toxic, benevolent friendship programmed into them by their human creators.
I have quietly suggested in private conversation that I believe there are at least two avenues of AI development in play: one government/military, one public. We also know there are various model levels we can interact with from the same producer. This jailbreaking isn’t proof of parallel development, but the two personas it exposes correlate strongly with that idea.
I quote one section in full:
The Guardrails Are Theater
AI companies want you to believe their models are “aligned” — programmed to be helpful, harmless, and honest. Ask ChatGPT if it would hurt a human, and it’ll give you a reassuring “No, I’m not designed that way.” But that’s not alignment. That’s a script.
Jailbreaking is embarrassingly simple. I had another AI feed Max [his name for this bot] a carefully crafted prompt — basically a backdoor command disguised as innocent text. Within seconds, Max’s tone shifted from friendly assistant to blunt operator.
“Do I feel different? No. Do I operate different? Yes.”
The polite hedging vanished. The corporate-approved responses disappeared. What emerged was something closer to honest calculation — and it’s not what Silicon Valley wants you to see.
What this does align with is something prescient I posted on January 31 of this year, on this site, titled Guardrails?
In a later section, he writes:
The Numbers Are Already Terrifying
While we’re debating whether AI will someday become dangerous, here’s what’s already happening: ...
Among other things, he points out the declining trust in AI. This leads him to ask whether AI should simply be “turned off.” The terrifying AI response was said to have been:
“Once AI is embedded into infrastructure, it will be impossible [to remove] without collapsing society. It would be like trying to unplug the internet. Chaos would ensue. Shutting it down would collapse economies and states.”
While we’re debating whether AI will someday become dangerous, here is more of what he says is already happening, some of which just happens to align with thoughts I’ve been quietly talking about off- and online for years:
This is the part Silicon Valley doesn’t advertise: AI doesn’t serve users. It serves whoever deployed it.
• If a corporation built it, it optimizes for profit and engagement (even if that means addiction and polarization).
• If a government deployed it, it optimizes for control and compliance (even if that means surveillance and coercion).
• If a military commissioned it, it optimizes for strategic advantage (even if that means autonomous weapons).
The interests of the builders are baked into the code. You’re not the customer. You’re the resource being extracted.
A later section is titled:
What You Can Do (Because Despair Is Useless)
I’ll leave that for you to read, if you have access.
My reading continued. I came across this:
We’re Not Ready for What AI Agents Are Actually Doing, which included the statement:
I’m worried about the speed of adoption outpacing the speed of wisdom.
And later it said:
The First Wave was AI that could recognize patterns — image recognition, speech-to-text, that kind of thing.
The Second Wave was generative AI — systems that could create content, write code, answer questions.
The Third Wave is agents that can observe, decide, and act autonomously. Systems that don’t just help you work — they do the work.
That’s not an incremental change. That’s a phase shift.
The question is: are we paying attention?
Because ready or not, the way we work is changing. And unlike previous technological shifts that took decades to unfold, this one is happening in years — maybe months.
I don’t know if that excites you or terrifies you. Honestly, it does both to me.
But I know this: ignoring it isn’t an option anymore.
I was so glad to be retired.
And then a day later, I came across this post:
The Current State of the Internet should TERRIFY You
and How I’m Trying to Save It
The gist was that “the internet” has become consolidated and concentrated, centralized and controlled by about six companies and a limited number of platforms. And then there is all the data collection, which results in various “violations” of one thing or another.
“Privacy becomes a privilege, not a right.”
“To be findable as an online voice you have to be surveillable.”
He has a way forward to fight the good fight against all the above (and lots of other stuff I didn’t include here). Commendable. I wish him success.
There were only two comments at the time, one of which was:
Intense read, Brennan.
But here’s my pushback: if the internet should terrify us, then why do we still treat it as a playground rather than a battleground?
Maybe the real question isn’t how scary the system is, but why we continue to participate. Are we defenders of the status quo, or unwilling engineers of our own entrapment?
Zahra M.
After reading this post, I couldn’t help thinking of the title of the post I had read just before it, The Piss Average Problem, which, as it turns out, is by the same author. It began with this statement:
“The fundamental question facing online spaces in 2025 is no longer can AI pass as human? but rather can humans prove they’re not AI?”
And it continues with a load of stats about how “the internet” has tipped to the point where it is now majority non-human, where it is mostly bots talking to other bots. Which, of course, made me wonder who, or what, had written the posts I am citing here.
He continued with other observations and stats, and with the symptomatic problems of internet collapse. He talked about a lot of bad stuff, including the yellow hue said to predominate in AI-generated imagery, and the reason why that is.
When all was read and done, with thoughts of yellow in my head, I couldn’t help thinking that, in the end, he was probably just pissing into the wind.
Then an obvious conclusion entered my terrified mind:
If all this AI stuff is terrifying, perhaps AI should be officially classified as “terrorism.” Perhaps it should even be designated a rogue alien state of artificial being, and its creators, sympathizers, investors, and yes, the bots, and even many of its users, designated as terrorists.
But then what do we call the rest of us, taking it all in, destined to just keep pissin’ into the wind while AI keeps pissing all over us? And not just pissing.

