POPE FRANCIS smiles warmly as he stares into the camera, his arm around the shoulder of a bearded, long-haired man in white robes. The man, also smirking, holds up a glass of water, and, as the camera rolls, the crystal-clear liquid turns a deep, rich red. Soon, the Pope is repeating the miracle himself, before the video cuts to him soaring through cloud-flecked skies, his white soutane billowing, and next to him, we now realise, is a suspiciously European-looking Jesus. In the background, “Knockin’ on Heaven’s Door” by Bob Dylan plays.
The title of this 60-second video — viewed more than 26 million times on TikTok — is “Jesus performs miracles in Heaven with Pope Francis”. And, just beneath that, TikTok has added an extra caption: “Creator labelled as AI-generated.”
On another social-media platform, Facebook scrollers were confronted with an “Urgent message” from “Edward and Helen’s parish”. In a selection of photos attached to the post, a kindly, wizened, elderly couple painstakingly put together a craft project in a workshop that, they say, they have built in their home.
Edward and Helen mournfully inform the people of Facebook that, as their home town of Bradford has changed, their church has dwindled to a handful of worshippers. The diocese is threatening to close the parish down, probably so that it can be turned into a mosque.
The post also includes two photos of their precious church: one in black and white from its bygone glory days, and another colour image of its present situation, hemmed in by looming blocks of ominous-looking council housing. But there is a chance to keep the doors open — if only Edward and Helen can raise £18,000 a year. So, they ask, please buy their Christian nativity sets, crosses, ornaments, and more: “Every item handmade, signed and guaranteed for your lifetime. Made while we’re still able.”
In truth, the website that Edward and Helen link to is a generic online storefront selling mass-produced Christian trinkets. The couple do not exist, and neither does their mythical parish. Unsurprisingly, this shameless scam is not labelled “AI-generated”, though that is what it is.
THE TikTok “miracle” clip was first uploaded the day after Pope Francis died, in April 2025, to capitalise on the wave of media coverage and international interest. Social media are awash with kitsch “inspirational” Christian content. While the Facebook post is a rarer example of AI fakery directly targeting Christians, these scams are saturating social-media feeds, too. It is increasingly difficult to know whether what you encounter online is real or fake.
Thumbnails of AI videos by the TikTok creator “HolyVlogz” showing Jesus and other biblical figures taking part in podcasts and interviews
Artificial Intelligence (AI) technology has turbocharged fake news and misinformation on the internet. In the past few years, new AI tools have been rolled out that can turn a simple text prompt into images, audio, and videos. Models with esoteric names such as Sora, Veo 3, Nano Banana Pro, and Kling are available to anyone prepared to cough up as little as £20 a month; and some more basic image-generating tools are free. In just a few minutes, almost anyone can create content that appears real, simply by describing in writing what they want, such as “a photo-realistic image of an elderly couple in their eighties working on crafts at their kitchen workbench”.
In a world in which an ever-growing part of the population get their news and often entertainment from YouTube, Facebook, X, and other platforms, generative AI tools are a growing issue. Faking photos once took hours and required skill in programs such as Photoshop. Now, even entirely fake videos can be pumped out with Sora or Veo 3 in a matter of minutes.
Social-media feeds are filling up with what is often derisively called “AI slop”. In the face of this onslaught, some are starting to worry that the connection between the real world outside and the online world is breaking down. Can you be sure that anything you see on the internet is real any more?
THE Church is not immune to this trend. Beyond deliberately deceptive uses, AI is also taking over in subtler, some might say insidious, ways.
One of the first examples was a photo, from March 2023, of Pope Francis wearing an oversized white puffer jacket to match his cassock. I and millions of others idly came across this mildly humorous image, and assumed without question that it was real. It was actually made with Midjourney, one of the first AI image-generators available to the public.
What began as an odd if fairly harmless meme quickly grew into something stranger. After the American right-wing activist Charlie Kirk was assassinated in September 2025 (News, 19 September 2025), the Christian internet was flooded with AI-made images of him in heaven alongside biblical saints or other slain heroes such as Martin Luther King and Abraham Lincoln, or meeting Jesus at the pearly gates, still wearing his red MAGA hat.
An AI-generated image of Pope Francis in a puffer jacket
More troublingly, somebody used AI to clone Kirk’s voice, and then asked a chatbot to imagine what Kirk would like to say to his fellow believers from beyond the grave. Several Evangelical megachurches played this recording to their congregations on the Sunday after the shooting. “Don’t waste one second mourning me,” the AI-generated Kirk said. “I knew the risks of standing up in this cultural moment, and I’d do it all over again. So, dry your tears, pick up your cross, and get back in the fight.” Congregations responded to this clip with applause and tearful ovations.
Other AI-driven accounts are drawing a large audience of believers online by turning Bible stories into short videos. One account on TikTok, which goes by the name HolyVlogz, has amassed millions of views, thanks to quirky 60-second videos that imagine what would have happened if famous biblical figures had had present-day technology.
In “If Daniel had an iPhone”, the Old Testament hero is reimagined as a 21st-century social-media influencer, filming himself being lowered into the lion’s den and vlogging his experience. Another recent video features Jesus sitting down for an extended podcast chat with Judas, complete with oversized mics in front of them as they talk through his betrayal.
WHILE the technology is new, people have made dubious artwork loosely inspired by Christianity for thousands of years. Does this new iteration matter? Experts working at the intersection of AI and the Church all appear to agree that the rise of AI-generated misinformation is a threat — to society at large, and to the Church in particular.
Stephen Driscoll ministers to students in Canberra, and is the author of Made in Our Image: God, artificial intelligence and you. He described AI misinformation as a “massive issue”, precipitating the “erosion of trust at the core of democracy”. He estimated that as much as two-thirds of what was currently being shared online about the anti-Semitic Bondi Beach terrorist attack, which had taken place days before our conversation (News, 19 December 2025), was incorrect or bogus, including AI-generated photos of a Jewish person dousing themselves in fake blood (in reality, this person was a genuine victim of the shooting).
Even when disinformation — the deliberate sharing of fake content — is not directly targeted at the Church, its poisonous impact does not spare congregations, Mr Driscoll says. As people on both Right and Left begin to curdle within their own AI-fed information bubbles, it becomes harder for them to put aside their polarised positions and worship side by side.
Hannah Mudge, a member of the Church of England’s national digital team, says that, while she had not yet come across disinformation directly aimed at the C of E, she was aware of examples in the global Church. She echoed Mr Driscoll’s concerns about polarisation, which was particularly a problem in the divided American Church. She described it as “something that people are really concerned about”. Her team is working on an official guide to AI in a church context, in response to questions from concerned members of the clergy.
A Vatican document on AI from last year — Antiqua et Nova — addressed this directly, warning that AI-faked videos and false content caused people to “question everything” and eroded “trust in what they see and hear” (News, 7 February 2025). “Polarisation and conflict will only grow. Such widespread deception is no trivial matter; it strikes at the core of humanity, dismantling the foundational trust on which societies are built.”
Work addressing AI within the C of E has been driven primarily by the Bishop of Oxford, Dr Steven Croft, who retires in May. He has spoken regularly in the House of Lords on the need for ethics in the development and use of AI, including the dangers of “deepfakes” (News, 14 January 2026).
JAMES POULTER, a Christian entrepreneur and technologist who has been working in the AI industry for years, says that he saw the threats emerging from AI misinformation on several levels. Individuals, especially church leaders or public figures, were acutely at risk of the use of AI to create spoofed images, audio, or video of them doing inappropriate things. “That threat is very real, and, to be honest, we don’t have particularly good solutions for solving that,” Mr Poulter says.
But, he continues, everyone, including those of us too obscure to worry about AI impersonation, should also be concerned about the societal impact of misinformation. “That’s where you get a slower burn: what is the compounding, cumulative effect of people feeling that they can’t trust what they see?”
The Revd Dr Simon Cross is a member of the C of E’s Faith and Public Life team who advises the Bishops on AI. He is pessimistic. “Actually, there really aren’t that many benefits [to AI], but there are an astronomical number of risks and harms,” he says. “This stuff is being developed and deployed without any duty of care, without any product-safety testing.”
Even when it comes to ostensibly non-deceptive uses of AI in a church context, he is cautious: “I think we should absolutely be wary of this stuff.”
If AI misinformation is recognised as a concern, what can be done about it? It is widely agreed that it is harder and harder to identify fake images or videos online. Just a couple of years ago, the key tip was to look for strange reproductions of human hands, which many AI models struggled to generate without extra fingers or contortions that defied physiology. This is mentioned specifically in a 2024 guide by one of Ms Mudge’s former colleagues at the C of E’s digital team.
Hannah Mudge
The field is moving so quickly that already this advice is mostly out of date: AI has largely fixed its hand problem. Mr Poulter says that there are other things to look out for, such as fuzzy, out-of-focus backgrounds. Written text sometimes appears in imagined languages or alphabets, and often AIs will introduce incorrect elements, such as US plug sockets in an ostensibly UK-based scene.
But, Mr Poulter says, the truth is that, every day, the technology grows more sophisticated, and most of us do not have the time or expertise to interrogate everything we scroll past at a pixel level. It is better to focus on encouraging Christians to be “more awake to what they are doing when they’re consuming content, particularly on social media”, he says. Who is posting this image or video? What else do they post about? Are they a real human being with a name and photo, and other real human followers?
While it would be good if there were some special training that inoculated you against falling for AI fakery, this is a delusion, says Canon Tim Bull, a former computer scientist and software engineer, now a diocesan director of ordinands in St Albans. “We need to acknowledge the fact that we will never go through life completely unfooled by AI, and just go with some epistemic humility.”
His advice seems paradoxical: use AI more yourself. As a parent comes to learn their own child’s handwriting or painting style (and what they can and cannot do), so we can become seasoned in identifying the distinctive (often eerily perfectionist) style of an AI by immersing ourselves in its content.
Others say that it is important to look inwards, not just closer at the screen. The Revd Chris Goswami, a Baptist minister and technology writer, says that he advises Christians always to ask, before forwarding something, how it made them feel. Gossipy? Angry? Salacious? Afraid? “I think Christians need a real self-awareness,” he says. Anything that seems to be manipulating our emotions should raise red flags and prompt a second look.
ALONGSIDE the risks of societal polarisation, descent into conspiracy theories, and a tsunami of scams, are there also positive ways that AI-generated content could help ministry in church? Ms Mudge says that there is plenty to get excited about, at least in theory, and has been at conferences where some church leaders were already experimenting in “cool” ways.
Using the Bible as a text prompt for an AI image generator had caught the eye of Mr Goswami. “It’s attractive, and it really is entertaining, whether you’re a Christian or not. It’s bound to impact views of what people think the Christian gospel is,” he says. Fundamentally, it was “much easier” to take in a 60-second TikTok video filmed from Moses’s perspective than to sit through one of his sermons on the patriarch.
Chris Goswami
Other churches have been experimenting with using AI to make social-media posts and videos for outreach, Ms Mudge says. And how many priests might appreciate being able to create, in a matter of minutes, some high-quality professional-looking slides to accompany their sermon?
And is the kitsch Christian “art” online — such as the genre of the “dead Christian celebrity meeting Jesus in heaven” — really so new or troubling, Canon Bull asks. All traditional human-made icons or paintings of Jesus were just as made up: we do not know what Christ really looked like. “And yet people find it deeply inspiring [and] encouraging in their faith,” he says.
Mr Goswami’s instinct is not to try to purge the Church of any AI content, but to ensure that what is being made is being made well. He has no idea who was behind HolyVlogz on TikTok, but doubts whether they were theologically trained: almost all of the videos depart from the biblical narrative. Rather than try to stop Christian teenagers (let alone their non-believing friends) from enjoying these “very watchable videos”, the solution, he suggests, has to be reputable mainstream church groups’ producing their own material.
ALL agree that attribution is vital: if you are going to jump into this brave new world and begin to use AI at church, be transparent about it. Mr Goswami says that almost every single church in the UK was probably — even if unwittingly — using AI in their processes, somehow. And this “widespread silent adoption” of the technology was a problem, because nobody was openly talking about it. “If you’ve used AI, just name it,” Dr Cross says.
Some voices caution against even the apparently harmless applications of AI in a church context, however, concerned that something bigger is at stake than an easier way to illustrate the weekly email to the congregation. Antiqua et Nova sums it up neatly: “When society becomes indifferent to the truth, various groups construct their own versions of ‘facts’, weakening the reciprocal ties and mutual dependencies that underpin the fabric of social life.” If culture and the Church lose their moorings on what is real and what is not, what else might be lost?
Mr Goswami emphasises, as did other people interviewed, that Christians must be people of truth. “If Jesus is the truth, ultimately there have to be black-and-whites, and some things don’t happen. And we have to have our hearts tuned to say, ‘This is not true,’ and AI is making that so nebulous now. We need to get our act together as Christians and say ‘No, truth actually matters.’”
Mr Driscoll concurs, warning that any technology that was “detaching people from reality” would be “bad in the long run”. You might feel excitement or enthusiasm as you watch a beloved late Christian celebrity welcomed into heaven by an AI Jesus, but it’s little more than a “dopamine hack”, he says.
On a deeper, theological level, Canon Bull says: “You shall know the truth, and the truth will set you free.” A core part of Christian discipleship is learning to distinguish “truth from falsehood”. Dr Cross recalls Pontius Pilate’s scornful retort to Jesus during his trial: “What is truth?”
THERE are other concerns about generative AI, including a technology that can create spiritual content without human involvement. Mr Poulter says that he is less worried about the precise means by which something is communicated than by its origin. If a worship song or prayer in your Instagram feed comes out of a large language model (LLM) rather than a brother or sister in Christ, it is closer to “idol worship than it is to a confessional”, he says.
Stephen Driscoll
Alongside a witness about truth, perhaps our AI moment is also an opportunity for the Church to witness about humanity, Canon Bull muses. A community that worships the incarnate God and is committed to meeting in the flesh every week to consume bread and wine together could be a beacon for “genuine, bona fide human relationship” in a world relapsing into atomised individuals consuming AI-generated virtual humanity alone, hour after hour.
“It turns out the answer to all of this is that human beings are made in the image of God, and everything that threatens that, however subtly or indirectly, is where the harm falls, and therefore where the risk lies,” Dr Cross concludes. The Christian response should not be to put the tech first and then try to work out how humans fit into it.
Last, there is the contested world of AI ethics. These models are not paragons of neutrality: they have simply ingested vast amounts of the internet, and are now spewing it back at us, with all its biases and prejudices. Ms Mudge says that churches must beware of unintentionally furthering harmful stereotypes through AI. Many churches start to use generative AI simply because it is an easy repository of copyright-free images.
But was this AI model trained on the intellectual property of human artists, writers, videographers, and creators, without their consent? In most instances, yes. Does generative AI risk putting an entire generation of artists, photographers, animators, and more out of a job, cutting off entry-level positions in these creative industries?
Then there are the poor working conditions and low pay of contractors in India or Africa used by Silicon Valley companies in their thousands to tag data and refine AI content during the training process. Add to that the environmental impact: the colossal amounts of energy and water involved in building an AI in the first place. Some estimates suggest that, by 2030, AI data centres could consume as much as 20 per cent of the world’s electricity.
Dr Cross says: “The more people understand about how this technology works, the less positive or the less seduced they become.”