“We shape our buildings, and afterwards our buildings shape us.”
Winston Churchill spoke these words from the bomb-scarred Palace of Westminster, addressing the temporary Commons (housed in the House of Lords chamber) to argue that the famed chamber be rebuilt. It was October 28, 1943—two years after incendiaries from a fleet of Luftwaffe bombers had torn through the House of Commons, and still two years before the Second World War’s end.
Against the war-battered austerity of 1940s Great Britain, you could be forgiven for thinking the desire to restore heirloom buildings a little out of place. And yet it’s precisely this context that illuminates the importance of Churchill’s words, and the essential truth they reveal:
That our humanity—the quintessential traits of who and what we are—does not subsist in a vacuum, and cannot be preserved by one. We are both our history and our creations; and they, quite inescapably, shape us as much as we shape them.
You needn’t stand beneath the dome of St Paul’s Cathedral, or gaze up at the Sistine Chapel ceiling, to know the layout and décor of a space inevitably shifts the psychology of the people inside it. Nor become an anthropologist to realize clocks do more than merely tell time; they’ve trained whole societies to live by schedules. And it does not take a scholar to recognize that writing, that most ancient of technologies, has done more than record thoughts; by transporting ideas across generations, it has reshaped memory, law, and even faith.
In the knowing words of academic and priest John M. Culkin: “We make our tools, and our tools make us.”
Which is why it makes sense, especially for Christians, to be concerned not only with what an emerging technology can do, but also with the unwitting impact it might have. From the invention of the wheel to the hydrogen bomb, human history is littered with both inspiring and cautionary instances of the evolving interplay between our innovations and our humanity. So it’s no surprise to see concern around AI today; and rightly so. Because whatever else AI is—particularly its LLM variants—it is not merely a new tool. It is a new kind of environment: a conversation-shaped environment that meets you where language lives—inside your head—and offers to respond. Which is a sobering thought. Because when a tool starts to inhabit the inner life, usefulness is no longer the only question that matters.
As Christians, we must ask not only: Can AI help?
We must ask also: What might it be training us to become?
How should we think about AI?
When St. Augustine, dissatisfied as he was with both popular religion and the simplistic answers of Manichaeism, first stumbled upon Academic skepticism, he did not do so in search of resolution so much as a kind of permission for his own restless musings.
Skepticism offered a way to doubt shallow certainties, to distrust easy answers, and to do so in a way that would later help distill the tenet at the heart of Augustine’s thought—crede ut intelligas (“believe so that you may understand”), the conviction Anselm would later crystallize as fides quaerens intellectum: faith seeking understanding.
His conviction?
Faith does not fear questions. Instead, it remains skeptical of ideas that promise total explanation, and honest, rather than suspicious, about inner conflict and complexity. Faith, in Augustine’s view, is essentially a mode of nuance—able to welcome novelty whilst carefully weighing it—which is precisely the kind of approach today’s Christian would do well to adopt.
Whether reflecting on the latest trend, political idea, or indeed the next technological revolution, there are questions that we must allow to surface, along with a willingness to ask them honestly:
What’s really driving this?
What does it reward?
What will it form?
This is all well and good.
But allowing skepticism to be subsumed by the reactionary blunt instruments of binary thought—“AI is evil” or “AI is progress”—is anathema to Augustinian thought, and, it could be argued, akin to history’s most corrosive responses to change.
When Galileo’s studied gaze observed a sun-centered cosmos, he was tried by the Inquisition and forced to recant. When Gutenberg’s printing press radically hastened the circulation of ideas, this faith-propagating innovation was met less with enthusiasm than with wary unease.
For some—and often, sadly, particularly those in the church—the creed has always been clear:
New = bad.
Change = loss.
New + change = evil.
But this, dear reader, is not faith-shaped thinking.
For the sensible Christian, reckoning well with modernity means cultivating a willingness to upgrade the resolution at which we examine both it and ourselves.
This—especially when it comes to AI—means getting clear on three things:
- The theological: what it means to be human, and dependent on God.
- The technical: what AI actually is (and isn’t).
- The practical: the real-world incentives, guardrails, and use-cases that determine whether this tool serves the good… or corrodes it.
And this is where the thoughts of author and journalist Karen Hao perhaps prove useful:
“‘AI’ is such an interesting word because it’s sort of like the word transportation, in that you have bicycles, you have gas guzzling trucks, you have rocket ships. They’re all forms of transportation, but they all serve different purposes and have different cost benefit trade-offs. And to me, the quest for artificial general intelligence has the worst trade-offs, because you are trying to build fundamentally an ‘everything machine’… you confuse the public about what you can actually do with these technologies, which leads to harm because then people start asking it for things like medical information and instead get medical misinformation back. But there are many, many different types of AI technologies that I think are hugely beneficial, and [these are] task-specific models that are meant to target solving a specific well-scoped challenge… I think if we want broad-based benefit from AI, we need broad-based distribution of these types of AI technologies across all different industries.”
In other words, not all AI, at least from Hao’s perspective, is created equal—a crucial wrinkle that steers us away from simple AI-good, AI-bad narratives to instead weigh the technology’s implications by why and how it is deployed.
The invitation here is toward a sort of learned discernment, in which we recognize that AI can be both helpful and unhelpful. But to successfully adopt this approach, we must first understand precisely what AI is, and what it is not.
So what is AI exactly?
The research literature defines Large Language Models (LLMs) as “probabilistic models,” able, as researcher François Chollet noted in his 2019 paper On the Measure of Intelligence, to “manipulate symbols according to statistical correlations.” In essence, this likens LLMs to an insanely sophisticated autocomplete with a memory of culture—but with a crucial distinction:
These models do not store knowledge like a database; rather, they store weights—numerical values that encode how strongly certain words and concepts relate to one another in a given context. When you type a prompt, the model breaks your input into tokens, then calculates, step by step, the most likely next token, repeating this process thousands of times per response. The result is better thought of as a probabilistic continuation than a retrieved answer—i.e., the illusion of intelligence, rather than intelligence itself.
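To make that loop concrete, here is a deliberately toy sketch, in Python, of the next-token process described above. The miniature “model” and its probability table (NEXT_TOKEN_PROBS) are invented purely for illustration—stand-ins for the billions of learned weights in a real LLM—but the mechanic is the same: each step samples a likely next token, and the “answer” is simply the chain of samples.

```python
import random

# A toy "language model": for each context word, the learned weights
# reduce to a probability distribution over possible next tokens.
# (These numbers are invented for illustration; a real LLM derives
# them from billions of trained parameters.)
NEXT_TOKEN_PROBS = {
    "<start>":   {"we": 0.6, "our": 0.4},
    "we":        {"shape": 0.7, "make": 0.3},
    "our":       {"tools": 0.5, "buildings": 0.5},
    "shape":     {"our": 0.8, "them": 0.2},
    "make":      {"our": 1.0},
    "tools":     {"<end>": 1.0},
    "buildings": {"<end>": 1.0},
    "them":      {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> str:
    """Repeatedly sample a weighted next token: a probabilistic
    continuation, not a retrieved answer."""
    token, output = "<start>", []
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(token)
        if dist is None:
            break
        token = random.choices(list(dist), weights=dist.values())[0]
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "we shape our tools"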
This has implications, especially when parsing claims about the supposed imminence of sentient machines—claims often spurred on by highly publicized (and selective) anecdotes of unexpected behaviors in LLMs. These moments, when systems appear to reason, self-correct, or speak about themselves in ways that invite speculative language about interiority or a nascent “ghost in the machine,” can be misleading. When AI engineers speak of theory-of-mind-style language and metaphor use, they’re simply observing the results of pattern recognition (albeit an incredibly powerful version of it) at scale—the same way flocking and murmuration emerge from simple bird rules, or traffic jams form without being coordinated by a “traffic mind.”
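That emergence point is easy to demonstrate. Below is a minimal sketch, in Python, of a classic toy traffic model (a simplified version of the Nagel–Schreckenberg cellular automaton); the road length, car count, and slowdown probability are arbitrary choices for illustration. Each car follows three purely local rules—accelerate, don’t hit the car ahead, occasionally dawdle—yet jams reliably appear and drift backwards, with no “traffic mind” anywhere in the code.

```python
import random

ROAD_LEN, N_CARS, V_MAX, SLOW_P = 60, 20, 5, 0.3

# Cars on a circular road: distinct positions, initial speed 0.
positions = sorted(random.sample(range(ROAD_LEN), N_CARS))
speeds = [0] * N_CARS

def step(positions, speeds):
    """One tick of purely local rules; no rule mentions 'jams'."""
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # Empty cells between car i and the car ahead (ring road).
        gap = (positions[(i + 1) % n] - positions[i] - 1) % ROAD_LEN
        v = min(speeds[i] + 1, V_MAX, gap)      # accelerate, never collide
        if v > 0 and random.random() < SLOW_P:  # random human-like dawdling
            v -= 1
        new_speeds.append(v)
    new_positions = [(p + v) % ROAD_LEN for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

for _ in range(50):
    positions, speeds = step(positions, speeds)

# Render the road: digits are car speeds, dots are empty cells.
# Clusters of 0s are jams: emergent, uncoordinated, yet patterned.
road = ["."] * ROAD_LEN
for p, v in zip(positions, speeds):
    road[p] = str(v)
print("".join(road))
```

The same logic scales: behavior that looks coordinated, even intentional, in an LLM can arise from nothing more than simple statistical rules applied at enormous scale.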
And sure, the sheer computational scale and speed of these models call for a level of epistemic humility, but this does not make them intelligent in the true sense, any more than a booming voice and clever machinery make the Wizard of Oz an actual wizard.
So, here’s what we do know…
Large Language Models are not capable of genuine logical or abstract reasoning, and they possess no grounded experience, beliefs, or awareness. In fact, they have real limitations, with evidence we may already be approaching their ceiling and starting, as Professor Yann LeCun remarked at a recent AI Summit, “to see the limits of the LLM paradigm.”
Against this backdrop, rather than worrying about the vanishingly distant and unlikely prospect of ChatGPT someday achieving sentience, might there be a deeper and more abiding concern we should be paying attention to?
The critical question of how artificial intelligence is used, and, perhaps even more importantly, the underlying motivations and incentives that will drive its development.
The power of profit, and why it matters
We need only look back to the hopeful forecasts of the early 2000s regarding social media—our last era-defining innovation—for examples of how powerfully motives can shape technology in unexpected ways.
There was Israeli-American author and academic Yochai Benkler opining on how social media would allow for a more participatory culture, enabling ordinary people to create, share, and collaborate beyond the arbitrary constraints of markets and hierarchies. We had early internet theorists like Howard Rheingold foreseeing social media platforms as tools to enhance our “collective intelligence,” while platform founders like Mark Zuckerberg presented them as legitimate extensions of human sociality, “making the world more transparent” by “giving people the power to share.”
And in many ways, this techno-optimist vision of the future has proven true. The last two decades have given rise to avenues of collaboration and learning that would have been unthinkable just a generation ago—crowdfunding, livestreams, aid networks, podcasting, whole creative economies and communities built from bedrooms and basements.
It would take the most ardent of cynics to deny social media has delivered incredible gains. And yet, two decades on from its beginnings, we’re only now starting to recognize the complex psychological and sociocultural impacts this technology has had.
There are now multiple studies linking social media use with mental health symptoms including depression, anxiety, body dysmorphia, self-harm, and even suicidality; class action lawsuits alleging links between feed algorithms and compulsive behaviors; and researchers as esteemed as Jonathan Haidt correlating the sharp, measurable deterioration in emotional well-being—especially among younger cohorts—with the emergence of smartphones and social media.
As these platforms continue to scale profits, it seems we’re gradually awakening to unwelcome truths about their impact on everything from our attention to our anxieties, identity, discourse, and even our politics.
The real question, as it often is, is why?
Are these unwanted effects simply an inevitable result of the technology? Or are they best explained by the incentives shaping how these platforms have evolved over time?
When author and journalist Cory Doctorow coined the term ‘enshittification’ to denounce how digital platforms develop, he outlined three key stages:
- They start by serving users well (to attract growth).
- Then pivot to serving business customers (advertisers, sellers, etc.).
- Before finally extracting maximum value for shareholders—often, once they’ve monopolized the category, by degrading the experience for everyone else.
Among several notable examples, Doctorow cites Google—once optimized to surface the most relevant results, now, in his view, increasingly cluttered with ads, spam, and irrelevant content that forces users to perform more searches, generating more clicks and, in turn, more ad revenue.
In parallel, we see mounting evidence of how algorithm-driven feeds on social platforms amplify polarization and emotional intensity at the expense of wellbeing, favoring recommendations engineered to keep us scrolling over posts from people we’ve actually chosen to connect with. The goal, once again, is to maximize ad revenue—the common denominator in Doctorow’s thesis, and the smoking gun pointing to a reality we’re still coming to terms with:
Namely, that Benkler and others were right—social media was replete with positive potential, and perhaps still is. But once revenue becomes the overriding priority, ‘enshittification’ and harmful algorithms follow as the natural result. As St. Paul once famously wrote, it’s “the love of money” rather than money itself that lies at “the root of all kinds of evil.”
Incentives, after all, drive outcomes, and incentives will ultimately determine whether AI and its applications shape society toward health, opportunity, and the better world the AI enthusiasts preach, or unwittingly draw us into dependency, distraction, and monetized compulsion.
AI and the soul in the digital age
“People will come to love their oppression, to adore the technologies that undo their capacities to think.” — Neil Postman, summarizing Aldous Huxley’s Brave New World in Amusing Ourselves to Death
It’s certainly tempting, against so stark a backdrop, to indulge in pessimism, allowing the aftertaste of every dystopian sci-fi novel ever read to tease our imagination toward only apocalyptic outcomes. Should you succumb, you’d be in good company. To date, foretellings of humanity’s demise at the immaterial hands of AI have taken every form from the displacement of human beings in the workplace to culture-tinged fears about the inevitable birth of Skynet.
And yes, all joking aside, the concerns and questions are warranted. Nobel-winning AI pioneer Geoffrey Hinton has suggested a 10%–20% chance that advanced AI could lead to human extinction within the next few decades; Tesla CEO Elon Musk has spoken of a “non-zero chance” of AI “annihilating humanity”; and the late Stephen Hawking remarked that “AI could be the worst event in the history of our civilization.”
And yet, perhaps one of the most notable marks of Christian history has been the tendency for faith’s most vibrant expressions to arise in the most perilous circumstances. There is the sharp proliferation of faith during the second and third centuries, as Christians bucked norms to care for victims of the Antonine and Cyprian plagues while others fled. Or the rise of monasticism following the collapse of the Western Roman political order; or the evangelical revivals of Wesley, Whitefield and Hastings amid the seismic industrial upheaval and urbanization of 18th-century Great Britain. Or even, perhaps, the Quiet Revival currently unfolding in the wake of a pandemic and global lockdowns.
It seems hope often finds its most fertile ground amid disruption and calamity—a trait the Jewish-German historian and philosopher, Hannah Arendt, would perhaps assent to.
By the late 1950s, Arendt had already fled Nazi Germany and watched Europe collapse into mechanized barbarity and totalitarianism; she would later go on to report on the trial of Adolf Eichmann in Jerusalem.
And yet it was against this backdrop, drawing on Augustine’s line—“Initium ut esset, creatus est homo” (“That a beginning be made, man was created”)—that she coined the term ‘natality,’ reframing the human condition not around mortality or inevitability, but around initium: the capacity to begin again.
Birth, for Arendt, is not merely a biological continuation—some randomized and indifferent outcome of a cosmic lottery. Rather, each birth signals the arrival of something unique, precious, and even hopeful: a newborn agent of embodied potential, capable of interrupting the flow of history to bring about change. In her words:
“The miracle that saves the world… is ultimately the fact of natality; the birth of new people and new beginnings, and the actions they are capable of.”
Human ingenuity, to put it plainly—this world’s ultimate currency, and the very quality that seems, amid the digital tsunami of our age, to be most under attack.
Because what’s clearer now than perhaps ever before is that our growing distractibility is not merely some inane whittling away of hours better spent elsewhere—it is anti-Sabbatic: an erosion, in many ways, of what it means to be human, one that crowds the mind into narrow, limbic channels of urge and impulse where we gradually lose that most precious of God-given qualities—the capacity to think.
And so perhaps the question is not what damage AI might do, but, in the spirit of the Benedictine communities before us, what opportunities its disruption might present to reimagine, serve, and help restore human dignity, well-being, and flourishing?
Right now, we’re living in a culture marked by record levels of burnout, digital addiction and mental illness—the World Health Organization reports as many as 1.1 billion people affected worldwide—while access to effective care remains limited, with only about one third of people in high-income countries receiving treatment.
With a growing body of research now highlighting the benefits of AI-assisted journaling, perhaps the next great innovation of our culture will be developing AI solutions that join this brave new world to the old by enabling a return to ancient and biblical rhythms of reflection and repose—practices that host the defining powers of our species, and our greatest feats of reform, ingenuity and change.
It is here Newton conceived the foundations of calculus and gravity; here Archimedes’ “Eureka” moment surfaced; and it is here, during the enforced pause of Covid lockdowns, that many re-evaluated their careers and embarked on fresh, more meaningful directions. In the words of one observer: “When you’re in the grind, you don’t have a moment to think about whether it’s fulfilling.”
A grind that, for too many of us, is all too familiar. Often, it is only when we slow enough to awaken to the world within us that we are able to better plan, learn, empathize, imagine, and even, as the Psalmist reveals, encounter ultimate Truth (Psalm 46:10)—perceiving realities, obscured by distraction, that have always been present, and becoming, as C.S. Lewis once put it, “more truly ourselves.”
And this, really, is the whole point—and, it could be argued, the very fabric and story of redemption itself. Because becoming ourselves, beyond mere private self-discovery, is to recover something irreducible—a way of seeing, feeling, creating, and relating that resists compression into data, because it is born of embodiment, history, limitation, love, and ultimately, design.
For all its power, AI cannot suffer, hope, repent, imagine, or care from within the grain of lived experience. It cannot carry moral weight, nor bear responsibility, nor extend mercy at a cost to itself. These are things that remain the sole purview and remit of the human soul in emulation of the One who made it.
In the end, it is only in becoming more fully, attentively, and faithfully human that we unlock the very capacities this age most urgently needs—culturally, politically, and even economically. And in doing so we may rediscover that the greatest safeguard against dehumanization is not resistance to technology, but the disciplined and revolutionary choice to buck the trend of our surroundings, recover the self we were created to be, and make the unique contribution each of us is here to offer.
Micah Yongo is a writer based in Manchester, England, and the founder of Natality, an AI-assisted journaling platform focused on reflective and spiritual practice. He is also the author of two ancient Africa-inspired novels, including his debut Lost Gods, shortlisted for a British Fantasy Award and Starburst Magazine’s inaugural Brave New Words Award. Shaped by the West African folklore of his childhood and his background in technology, Yongo’s work explores faith and formation—how stories, technologies and beliefs shape who we become—and the search for purpose in contemporary life.
