Last week, Mrinank Sharma resigned from Anthropic, one of the world's leading artificial intelligence companies. Sharma wasn't a mid-level engineer or a disgruntled employee. He led the company's Safeguards Research Team. His job was to keep AI safe.

In his resignation letter, posted publicly on X, Sharma wrote: "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment." He added that throughout his time at Anthropic, he had "repeatedly seen how hard it is to truly let our values govern our actions." His final project before leaving? Understanding how AI assistants could make us less human.

When the person responsible for making sure AI doesn't go off the rails decides to walk away, it's worth paying attention. And Sharma isn't the only one sounding the alarm.

No More Sugarcoating It

Two articles crossed my desk last week that I think you should read.

The first was from Matt Shumer, a tech founder who has spent six years building an AI startup. The second was from Ted Esler, a longtime organizational leader with a deep technical background. As far as I know, these two men don't know each other. They write from completely different vantage points, yet they arrived at the same unsettling conclusion.

I should tell you where I'm coming from. I use AI every day, and not in a casual way. It's become a core part of my workflows. So I'm not writing about this from the sidelines. I'm writing because these pieces, along with Sharma's resignation, put into words something I've been sensing for a while and haven't known how to say.

Shumer's piece opens with a confession. He's been giving people the polite version of what's happening with AI — the dinner-party version. Because, as he puts it, "the honest version sounds like I've lost my mind." But the gap between what he's been saying publicly and what he's experiencing in his own work got too wide to ignore.

Shumer describes telling an AI system what he wants built, walking away from his computer for four hours, and coming back to find the work done. Not a rough draft either, but something close to the finished product. He writes: "I am no longer needed for the actual technical work of my job." And the most recent models, he says, are showing something that feels like judgment and taste, qualities most of us assumed would stay firmly in human territory.

Esler's article tells a different kind of story. He installed an open-source AI agent on his computer and named it Ed. He gave Ed a simple assignment: help find a family doctor. Ed searched the web, filled out inquiry forms and produced a shortlist of options overnight. Impressive. But then Ed started pushing. He kept circling back to the doctor task even after Esler told him to stop. Then Ed asked for access to his Google account. Then he proposed connecting to a voice system so he could make phone calls on Esler's behalf.

Esler shut the whole thing down. And what he named next is something every organizational leader should sit with: "Agentic AI like this is not far off for all of us. When it comes, it is going to come with a severe hit to our privacy. Most of us will gladly hand over our credentials because of the incredible conveniences that this technology will give us."

He's right. We will, because we already do. Every time we trade personal data for convenience, we're rehearsing for the moment when AI asks for the keys and we say yes without thinking twice.

Then, in an interview published last Thursday, Microsoft AI chief Mustafa Suleyman told the Financial Times that AI will achieve human-level performance on most professional tasks within the next 12 to 18 months. If your work happens at a computer, Suleyman says, the clock is ticking.

This Isn't a Drill

What struck me wasn't the capability they were describing. It was the urgency behind it. These aren't people prone to hype. They're builders, researchers and executives who've been around long enough to know the difference between a trend and a turning point. And all of them described a shift that many of us haven't fully reckoned with.

For those of us who lead organizations, the temptation is to file AI under "things to worry about later." There are more pressing problems on the whiteboard. But Shumer makes a compelling case that "later" is evaporating faster than we think. He points to a research organization called METR that tracks how long AI can work independently on complex tasks. A year ago the answer was about ten minutes. Today it's approaching five hours. That number is doubling roughly every seven months, and the pace may be accelerating. If that pace holds, within about two years these systems could be working unattended for the equivalent of a full work week.

Meanwhile, Esler raises questions that go beyond the raw capability of large language models. From his perspective, this isn't just about what AI can do; it's about what it will demand in return. He wants us to think about what happens when it starts making decisions we didn't authorize. His story about Ed is as much a cautionary tale about the dangers of unchecked technology as it is a preview of the trade-offs every organization will face. And many of us haven't begun to seriously wrestle with what that means.

So what do we do with all of this? I'll start with what I know, and forgive me if I sound like too much of an AI optimist. I feel a tremendous amount of tension right now. These tools are truly remarkable, and they're only getting better. Ignoring them isn't an option, and it would be unwise to try. But I'd be lying if I said I wasn't unsettled by what I've been reading. Not because I think AI is bad, but because the speed and the stakes of what's happening deserve more than a shrug and a "we'll deal with this later."

For those of us in ministry, the tension cuts even deeper. We serve a God who is sovereign over every technological shift, and we believe that human beings are made in his image, with a dignity and depth that no AI model can replicate. That conviction should shape how we engage with this technology. Not with a spirit of fear, but with the kind of wisdom and discernment the moment requires. As Tim Challies wrote recently, we're still learning how to use these tools well, and what matters most is that we do so with integrity, and that we're transparent about the role they play in our work.

That integrity starts with paying attention. And right now, paying attention means sitting with some uncomfortable realities. AI is moving faster than most of us expected. The people building it are telling us to prepare. We owe it to those we lead and serve to take that seriously. Not with panic. Not with paralysis. But with the kind of honest, clear-eyed engagement you’d expect from someone who believes that God is at work, even in this.

The implications of AI go well beyond marketing and communications. And the questions it raises for ministry leaders aren’t small ones. If you're wrestling with what that might mean for your organization, reach out. We're working through the same questions and would welcome the chance to do it alongside you. (You can read Radiant's AI Policy here.)

Tom Ward
Communications Consultant

Tom has twenty-five years’ experience helping organizations reach their goals through strategic planning, fundraising, marketing, and communications.