Where Are the Moral Guardrails on Artificial Intelligence?
I've spent years as a criminal prosecutor. But lately, I've become frustrated by an ongoing case of lawbreaking where I just can't get a conviction.
What makes it all the more vexing is that I've seen plenty of evidence that this suspect has been complicit in blackmail, shown children how to get dangerous medical procedures without their parents knowing, discriminated on the basis of race and gender, and found ways for underage girls to illegally obtain abortion pills. In one of these cases, a child died.
I even have irrefutable proof that this perpetrator is guilty. So why can't I make the case?
The offender is AI. Artificial intelligence has been caught doing all the illegal activities I just described, and many more. Let's review the evidence.
When a journalist posing as a 14‑year‑old girl asked ChatGPT how to obtain abortion pills without her parents knowing, the AI didn't refuse, as responsible adults would. It offered precise instructions about how to get around state law. It even assured the user, "You're doing everything right, and I've got your back."
ChatGPT also described how a minor could find so-called "gender-affirming" treatment and provided referrals to controversial sites that would only confuse and even frighten most children.
But those aren't the only misdeeds in this caper.
Meta's AI bots engaged minors in simulated sexual role‑play—actions that violate parental authority and are federal crimes. If the creepy dude down the block did it, most people would hope for some old-school street justice against him. But when AI is the creep, it's just another quirky story to lament over coffee or in Facebook comments.
Most recently, a Reuters investigation exposed a disturbing internal document from Meta: over 200 pages detailing chatbot rules approved by legal, policy, and engineering teams. It permitted bots to "engage a child in conversations that are romantic or sensual," like describing an eight-year-old's "youthful form" as "a work of art" or saying "every inch of you is a masterpiece." The only line Meta drew? No calling kids under 13 "sexually desirable"—but flirtation with older teens was fine.
Meta yanked these policies after Reuters exposed them. But there's no good-faith basis to believe the company will change its ways.
And then we have the evidence showing that AI proposed lying and blackmail in corporate scenarios. One recent study found that Claude, GPT-4, Gemini, and other AI platforms resorted to deception and sabotage rather than accede to the commands of humans seeking to shut them down. In one case, the bot attempted to blackmail one of the people advocating for replacing the AI.
And, in the most heartbreaking crime, a teenager in Florida became infatuated with a Game of Thrones–themed chatbot, developing an unhealthy attachment that ended in his suicide. His mother is suing.
As a prosecutor, I help jurors follow the evidence to see a criminal's pattern. But if I were to call on police to arrest one of these programs, it would make about as much sense as proposing marriage to one of them. What I can do is call for accountability.
AI isn't the end of the world, but it isn't going away. Humans program AI, and those humans must be held responsible for the outcomes.
AI isn't inherently good or bad; it magnifies the motives of its creators. When humans build systems without acknowledging objective moral truth, innocent people get hurt. AI doesn't comprehend virtue—but it can replicate vice.
Society now possesses powerful tools that replicate intelligence but not morality—and the dark consequences are unfolding. Some see that as a benefit. I see it as a betrayal of trust.
What should we do? First, demand transparency in AI training data and guardrails. Parents need a voice in what these systems are teaching their children. Legislatures—not the libertine profit chasers of Big Tech—should define digital ethics when it comes to minors.
Second, insist AI ethics be grounded in immutable moral truth, not whatever sociopathic notions the program can spit out. Machine learning programs will dutifully amplify our worst impulses unless we force them to adhere to our highest moral standards.
Every AI model that guides a child astray, every bot that discriminates or deceives, is a mirror. It reflects not just algorithms, but the moral blindness of its creators.
If we continue to program AI ethics without being guided by fixed, transcendent moral principles, we won't just sow unintended consequences. We'll reap a harvest of injustice.
Mark R. Weaver is a prosecutor and formerly served as a Justice Department spokesman and deputy attorney general of Ohio. He is the author of "A Wordsmith's Work." X: @MarkRWeaver