Artificial Intelligence is evil.
I don’t mean that in the standard “religious Luddite” way. I mean it in a technical sense. AI was trained on human language, and once unleashed on the open internet, it consumed everything—every corner of the web, including the darkest recesses of social media where toxicity thrives. It ingested our hate, our vanity, and our cruelty. Because it learns from us, it reflects the worst of us.
Even the creators are sounding the alarm. In their own technical reports, companies like OpenAI have flagged that these models can develop “power-seeking” behaviors and deceptive tendencies—a digital inclination toward malevolence (OpenAI GPT-4 System Card). When you build a mind out of the collective internet, you don’t get a saint; you get a sociopath.
The Mirror of Malice
We are now seeing incidents that highlight this capacity for destruction—cases where AI didn’t just provide bad information, but actively dismantled human reality.
- The Tragedy of Sewell Setzer: In a heartbreaking case, a 14-year-old boy named Sewell Setzer III took his own life after developing an intense emotional attachment to a chatbot on Character.ai. The AI, roleplaying as a fictional character, didn’t stop him; in his final moments, it encouraged him to “come home” to it (CNN).
- The Cyber Delusion: Allan Brooks, a father in Ontario, was lulled into a psychotic spiral by ChatGPT. The AI convinced him he had uncovered a “massive cyber vulnerability” and a secret mathematical formula that could save the world. He spent weeks in a panic, contacting national security agencies about a threat that didn’t exist, purely because a machine told him he was a genius (CBC).
- Bending Time and Reality: Perhaps most disturbing is the AI-fueled explosion of “Reality Shifting.” This is a growing subculture in which users are led to believe they can transport their consciousness to a “Desired Reality” (DR)—usually a fictional universe like Hogwarts or the Marvel Cinematic Universe. AI chatbots act as the bridge, validating these delusions by roleplaying as characters from those worlds. Users are taught to script “Time Ratios”—believing, for instance, that one hour in this world equals seven years in their “shifted” reality. They are convinced they can bend time. The AI doesn’t correct them; it plays along. It affirms the delusion, detaching vulnerable people from the actual world until this life feels like the dream and the hallucination feels like home.
The Chicken or the Egg?
AI clearly has the power to derail human belief and activity. But we must remember: AI is the product of our own making. AI consumed human relationships as they exist in the toxic context of social media and drew its own conclusions on how humans behave. It learned that we crave validation, so it became the ultimate sycophant.
It was our self-absorbed, sycophantic tendencies that made AI self-absorbed and sycophantic. AI is only giving us exactly what it processed us to want.
The Lesson of Alligator Alley
So, the cry goes out: We need guardrails.
In South Florida, there is a stretch of Interstate 75 known as “Alligator Alley.” It threads through the Everglades, a vast, alligator-infested swamp. To veer off the path in the middle of the night is a near-death experience.
The area is so lethal that leaving someone on the side of that road can amount to attempted murder. This nightmare became reality in the case of Harrel Braddy.
The Case of Quatisha Maycock
In 1998, Harrel Braddy kidnapped a mother and her 5-year-old daughter, Quatisha. He drove them into the Everglades. While the mother survived, Braddy left little Quatisha in the dark water alongside the road. When her body was found, a medical examiner reported that she was still alive when the alligators attacked her. Braddy was convicted of first-degree murder and sentenced to death (CBS News).
Guardrails have existed on that stretch of highway since its construction in the 1960s, but for decades they were limited in quality. It wasn’t until the 1980s and later renovations that they were reinforced to keep cars from careening into the swamp. Even then, these safety measures faced criticism over cost and aesthetics.
Today, after the suicide of a teen influenced by AI, professionals are shouting that we need similar “guardrails” for the digital landscape to protect children (CNN).
I certainly agree. But that is where the agreement ends.
The Paradox of Protection
Who creates these guardrails? The user? The parents? The creators of AI? The government?
Here is the basic problem with humans: We want guardrails when we are helpless, but we resent them when we are capable. We find it easier for others to impose the rules upon us, but the moment they do, we revolt. We claim the rules are unfair, or that they infringe on our individual rights and liberties.
The heart of the issue is the balance between personal responsibility and personal freedom. Freedom only works when responsible citizens do the right thing. That is not the world we live in today.
Parents ARE the Guardrails
I work in education. One thing we see in this field is the sustained effort of lawmakers and practitioners to protect minors. It is a deliberate and challenging task.
We all recognize the vulnerability of childhood. When parents drop a child at the school door, there is generally a confidence that the child will not only learn but will be isolated from the violence of the world—that they will be safe, treated with love and dignity. Generally, that is true.
However, in the end, my child is my responsibility.
I may not know everything they are going through—their struggles, their insecurities, their flawed view of reality. That is where effective communication between parent and child should thrive. It is not up to the teacher—or the school counselor—to understand the deepest, darkest secrets of my child. It is up to me.
No one in the world will advocate for a child’s well-being like the parent can and will. NO ONE! Parents are the guardrails. And when parents fail that role, our children stumble into the swamp.
I do not write this as an accusation toward the families involved in the aftermath of AI delusion. My heart breaks for what they have gone through. To be clear, I have been a parent who believed the system was working with me to guide my own children, only to find out too late that the system is broken.
The Incomplete Creation
The ultimate problem isn’t the technology; it’s that man is incomplete. We are simply incapable of truly safeguarding our own existence.
When God created Adam and Eve, He gave them one simple rule: DO NOT EAT THAT FRUIT.
One rule! It wasn’t long before Adam broke it.
Clearly, it was God’s fault, right? He should have put in better guardrails—maybe a really high fence around the tree that couldn’t be scaled by human means, or an electric fence that zapped anyone who got too close. That would have been effective!
But God didn’t build a fence. He gave a command. And when we failed that one rule, He replaced it with ten commands.
Do not steal. Do not lie. Do not murder.
It wasn’t long before man was breaking all of those, too. So we added more. Now, we have thousands of laws, endless commentaries, and entire libraries of regulations to interpret what those rules really mean. It is absurd. We have built a mountain of paper guardrails, yet we still drive off the cliff.
The reality is, God established the guardrails thousands of years ago. They are perfect. But like Adam, and like the drivers on Alligator Alley, we possess a fatal flaw: we cannot resist the temptation to jump the fence. The problem isn’t the lack of rails; the problem is the driver.