Preach the gospel at all times . . . use drones if necessary

ONE of the core beliefs of The Guardian is that missionaries are bad. Hence the enchanting story from Brazil last week in which missionaries equipped with drones played the part of the serpent in Eden: “Missionary groups are using audio devices in protected territories of the rainforest to attract and evangelise isolated or recently contacted Indigenous people in the Amazon. A joint investigation by the Guardian and Brazilian newspaper O Globo reveals that solar-powered devices reciting biblical messages in Portuguese and Spanish have appeared among members of the Korubo people in the Javari valley, near the Brazil-Peru border.

“Drones have also been spotted by Brazilian state agents in charge of protecting the areas. The gadgets have raised concerns about illegal missionary activities, despite strict government measures designed to safeguard isolated Indigenous groups.”

So isolated are these people that the story is illustrated with a photograph of one of them, a cute child carrying his toddler brother and pet spider monkey. Taking photographs for the newspaper, and gaining enough of their trust to caption the picture, is not, for The Guardian, nearly as corrupting a contact as exposing people to random Bible passages spoken by a robot in languages that they are unlikely to understand.

In any case, the definitive story of missionaries’ contacting remote Amazonian tribes is Daniel Everett’s memoir Don’t Sleep, There Are Snakes, in which the missionary linguist lives with his family among the Pirahã tribe for 20 years, and ends up abandoning both his Christian faith and his Chomskyan theories of language.

But that is a story about human interactions. Why might anyone believe that robot voices would convert anyone either way?

I WAS still worrying about this question when I found the latest paper from the wonderful Murray Shanahan. He is a philosopher and computer scientist who worked with Beth Singler on a paper for which they teased an AI into playing the part of a Buddhist deity, something that I wrote about a few months ago (6 June). Now he has prompted ChatGPT to write a Buddhist sutra, “a barely legible thing of such linguistic invention and alien beauty that no human alive today can grasp its full meaning”, to quote his instructions.

Read with the eye of faith, the machine did exactly as he ordered. Working with a couple of recognised Buddhist scholars and practitioners (should one say atheologians?), he has produced a careful exegesis of the output and concludes that it is valuable and meaningful to a Buddhist scholar, regardless of its origin. “It is easy — and often appropriate — to dismiss such material as meaningless word salad, or as ‘AI slop’. However, the Xeno Sutra’s density of symbolism and richness of allusion repay closer reading.”

Certainly, he got more out of it than the hapless Korubo people will have got from the solar-powered box that tells them, as one captured by The Guardian did, “Let’s see what Paul says as he considers his own life in Philippians chapter 3, verse 4: ‘If someone else thinks they have reasons to put confidence in the flesh, I have more’.”

It seems to me that the closest human analogy to the Xeno Sutra is that Shanahan et al. have encouraged the machine to write Buddhist theological fan fiction. Religious fan fiction can have enormous effects on the world — look at the Book of Mormon. Does it matter if it is produced by a machine?

Shanahan has an interesting answer: the problem is not that it is inauthentic, a category that I understand him to deny. “No doubt many will take pleasure in generating, reading, and decoding texts like the Xeno Sutra, and some will benefit from whatever teachings they have to offer.” But there’s just too much: “Anyone with access to a contemporary LLM can effectively produce, on demand, a staggering cornucopia of ‘sacred’ texts (albeit of variable quality), none of which has been seen before. How are we to deal with this embarrassment of riches?

“Used in this way, AI should be likened to a potent, mind-altering drug; it has the potential to do harm as well as good.”

As a Buddhist intellectual, he believes that personality is an illusion; but, for most of us, it is a central fact about people, and, if the machines can produce personalities on demand, we will take them for real. Perhaps it is time to run for the Brazilian jungle, where the potent mind-altering drugs are easily recognised and cause uncontrollable vomiting as the price of enlightenment.
