The Age of AI | Critical Review & Notes

Jason Thacker. The Age of AI: Artificial Intelligence and the Future of Humanity. Zondervan, 2020. (192 pages)


It is hard to describe just how disappointing this book is. I was hoping to read something far more robust and insightful. Instead, most of the content consists of sermonizing, insufficient definitions, oversimplified explanations, perplexing and incoherent arguments, and inadequate understanding. A few examples:

Thacker argues for the moral neutrality of technology but then argues that we should be developing weapons. He does not address that technology has a bias (cf. What Technology Wants). And even if technology were morally neutral, he does not address technology’s ontology, the very nature of “the medium” itself.

Thacker argues that we are not “just” machines, and cautions us about how AI may reduce our view of humanity. However, he does not address the simple fact that we are machines. How we work is an aspect of who we are. That is part of the definition of humanity, bolstered by Thacker’s own theology of “work.” There’s no need to bifurcate or qualify that truth with a “just.” And by ignoring this aspect of ourselves, he misses an opportunity to address the core distinctions between AI and humanity that are being illuminated.

Thacker does not define consciousness very well, and his simplistic definition is unhelpful and minimizing. His entire philosophy is sorely inadequate. I would not engage it here, except to say that he misses yet again an opportunity to discuss the distinction and difference (as Yuval Noah Harari does in Sapiens and Homo Deus).

Perhaps the greatest failure is Thacker’s plea that “Christians must be the ones who champion the dignity of every human life,…” (p.181). This is “Christian exceptionalism,” and is a worldview that minimizes and sublimates the very Christian theology from which it emerges. Why shouldn’t we all, regardless of religious affiliation, “champion the dignity of every human life?” The author who does that most brilliantly on the topic of AI is not a Christian. As such, I do not recommend The Age of AI by Jason Thacker. Instead, read Kai-Fu Lee’s book AI Superpowers and watch his TED talk.


Foreword by Richard J. Mouw

Sometimes we define humanity down, reducing ourselves to the level of animals. And sometimes we define ourselves up, as in the recent transhumanist patterns of thought… (12)

1. Foundation: Man and Machine

Artificial intelligence is the technology of today and tomorrow. So where do we begin our journey to understand it? We must begin where everything began: in the pages of Scripture. (16)

[via: *sigh. So, this may render this book “D.O.A.” How in the world are we to understand AI through “scripture?!” We could “theologize” about it. For example, is not AI humanity actually living out the image of God, creating “in our image and in our likeness?”]


So I propose two questions to aid us in our journey to apply God’s Word to artificial intelligence:

  1. What does it mean to be human?
  2. What is technology and artificial intelligence?


The Image of God

After God created man, he said that he was “very good,” as opposed to just “good” like he had said on the previous five days. (17)

[via: Correction: The declaration of “very good” is to “all that God had made,” not just ADAM.]

[Being fruitful and multiplying]…means raising families, working the ground, and caring for creation. (18)

[via: Raising families? This opening is extremely interpretive, bordering on “Christian midrash” and allegory.]

To “work the ground” is another way of saying to do our jobs by using the abilities God has given us. … These jobs are not a result of sin or a punishment for our rebellion. They were given to us so that we might reflect him in the world.

[via: Better. I concur with that last sentence.]

As part of our creativity, we make technology to aid us in our God-given responsibilities. (18)

As you and I reflect God with our creative abilities, we are able to make things that allow us to live our callings as workers in easier and more efficient ways that benefit all of society. Think about all of the tools that we use each day that make our lives easier. From hammers and nails to digital devices like smart phones and computers, we design and create tools that allow us to work in a broken world that is not the way God designed it to be. (19)

[via: There’s no mention of “dominion” here. Hmm.]

The Rebellion


Technology is a tool that helps us live out our God-given callings. This is one of the most important things for us to learn as we engage the topic of technology and artificial intelligence. Because we often see the tremendous power that technology has over our lives, we are tempted to treat technology as more than a tool, as something with a value similar to our own if it is powerful enough or does enough work on its own. (20)

Scripture shows that technology and tools can be used for both good and evil. Even if a tool was designed for evil, the tool itself isn’t evil. What is sinful isn’t the sword but how people choose to use it. … Technology is not morally neutral, because it influences and changes us each time we use it. (20)

[via: It feels like he’s saying contradictory things here. Also, McLuhan has posited that technology is an extension of ourselves, and therefore, each “tool” that we make does have a bias.]

Technology expands what is possible for humans to do. It can be best thought of as a catalyst or an accelerant for change because it opens new opportunities for humans to live in this world. (20)


[via: I concur.]

Without the printing press, we likely would not have twenty-four hour cable news and networks and the rise of fake news. (22)

[via: Although Jonathan Gottschall and Yuval Noah Harari would argue we have had fake news since the time humans have been telling stories.]

Technology is amoral…but it is a catalyst for change and an opportunity for both good and evil. (22)


Learning vast amounts of information used to be something only humans could do,… (23)

[via: WHAAAT?! Uh, every living entity is learning vast amounts of information!]

Artificial intelligence is an emerging field of technology defined as nonbiological intelligence, where a machine is programmed to accomplish complex goals by applying knowledge (23) to the task at hand. Because it’s nonbiological, AI can be copied and reprogrammed at relatively low cost. In certain forms, it is extremely flexible and can be harnessed for great good or for ill. (24)


We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.


J. McCarthy, Dartmouth College
M. L. Minsky, Harvard University
N. Rochester, I.B.M. Corporation
C.E. Shannon, Bell Telephone Laboratories

August 31, 1955

cf. AlphaGo

[via: It is important to point out that the example of AlphaGo shows what AI does better than humans, and that is calculation. It does not do everything better than humans. There are extreme limitations to what AlphaGo can do in the world; namely, it can play Go.]


We are able to use technology for the glory of God and the betterment of society, or we can use it to push aside the dignity of others created in God’s image for sinful and contorted means. (26)

[via: Thacker dives head first into moral evaluation without a robust epistemological or ontological assessment.]

Many see artificial intelligence through a materialist worldview,… Everything about you and your life is reducible to some chemical and natural reaction. Materialists rely heavily on the theory of evolution from Charles Darwin’s 1859 book, On the Origin of Species, which offers an explanation for how humanity came to be and how our world developed over time. While not all of those who hold to evolution are materialists, all materialists believe in some form of evolution and deny the existence of any creator God or special creation. Some see AI as the next stage of our world’s evolution. It is argued that human beings are nothing more than a set of organic algorithms or a biological computer, albeit an extremely complex one. (27)

[via: *sigh. So, for a book published in 2020, it is astounding that we’re still hashing through 19th century arguments and debates. This whole segment on “materialism” is neither accurate, helpful, nor pertinent to the real challenges of AI.]


[via: This section started to feel defensive. That AI is “producing results and accomplishing goals that amaze even the most well informed in the AI community” (30) is not a great feat of AI, but the exposure of humanity’s inability to imagine.]

While the benefits of AI are many, the threat to human dignity is real and must be addressed by thoughtful Christians. (30)

As with every new development, Christians must proclaim who Scripture reveals God to be, and who we are as created in his image. The tools we make are to be used well and not used to rule over us. (31)

[via: The irony of this statement is just too much. 🙄]

2. Self: Alexa, Are You There?



In May 2018, Google announced a groundbreaking technology that it hoped would revolutionize the way humans interact with computers. Google Duplex …

[via: What Thacker doesn’t mention is that the system relies heavily on humans, and has a long, long way to go before it can truly function as advertised. And, because my church is in Silicon Valley, and I have a parishioner who actually worked on Google Duplex, I have it on personal testimony that the demonstration was largely fabricated (not “faked”), and in accordance with the linked NY Times article, the technology is nowhere near what is claimed.]

…many of us are beginning to question the nature of intelligence and what it means to be human. … But the danger is that we might dumb down what it means to be human, assigning more value to machines as they begin to take on tasks and jobs once reserved for humans. And soon enough these machines may shock us as they successfully pass themselves off as humans and outperform us in numerous ways. (37)

[via: The Turing test, however, is not so much about AI’s capabilities as it is human gullibility and unintelligence.]


Intelligence can be broadly defined as the ability to acquire and apply knowledge or skills. (37)

AI can be divided into two categories: narrow and general. (38)

I believe computer systems should be called intelligent, albeit artificially, because intelligence doesn’t define what it means to be human. … Intelligence doesn’t mean that a system is aware of itself or is able to outperform humans in all areas. That said, our current AI systems are nowhere near the level of intelligence of a human being, even that of my toddler sons. (39)


Everything about us is reducible to the matter that makes up our bodies, including our thoughts and emotions. But it truly takes a step of faith for scientists to make that assertion, because we are just beginning to understand the smallest processes in our brains and have no idea where consciousness lies. (40)

[via: This is a straw man. cf. The Big Picture.]

Everything about us is explained as just another part of our materialistic world. We don’t have any more worth or dignity than a chimpanzee or Google’s AlphaGo. (41)

[via: Non sequitur.]

In this materialistic world, we are not persons with immortal souls, created in the image of God. (41)

[via: Neither in the Bible!]


If these machines are able to mimic ways that we process information and even pick up on things that we routinely miss, are they really that smart, or are we just not as intelligent as we often portray ourselves to be? The truth is that we tend to talk about AI in ways that dehumanize us and humanize our machines. We deny our dignity by acting as if we are just advanced machines, all the while treating machines as if they are persons with certain rights, thoughts, and even feelings. (42)

[via: I’m curious what AI research he is referring to.]

One reason that we seek to humanize our machines is because we like to play God. (42)

[via: But, as I alluded to before, are we not “in God’s image?”]

The greatest delusion of our age is the paradoxical penchant to deny our own agency while attributing agency to the machines we create. – Jay Richards

Our paradox is that we explain away what it means to be human, while trying to create machines (42) that are just like us or that can surpass us. We dumb down what it means to be human and treat each other as simple machines, but at the same time put our hope and faith in these machines to solve the problems and ills that we deal with each day. We rightfully see where we fall short but put our hope in our own creations rather than in our Creator. (43)

[via: This is starting to feel like a flimsy apologetic/polemic, some kind of blowing into the wind.]

I believe that we long to create something that is more intelligent than ourselves because we know that we are not perfect.

[via: I disagree. We create because we are curious, because we bear God’s image, and because of our innate evolutionary impulses.]

As AI becomes more and more advanced, taking on characteristics that we think of as uniquely human, we are led to an identity crisis. (44)


The same technology that has liberated us from so much inconvenience and drudgery has also unmoored us from the things that anchor our identities. – Ben Sasse


We’ll spend the next three decades–indeed, perhaps the next century–in a permanent identity crisis, continually asking ourselves what humans are good for. – Jay Richards

If our worth and that of our neighbor is based solely on what we do, then there will be no reason to treat each other with respect and dignity because true value is found in who we are rather than what we do. Even if we treat others with respect out of our own personal interest or survival, we are essentially treating other people as a means to an end rather than image-bearers of God. (45)

[via: There’s really “no reason”?! And do we need to create such a {false?} dichotomy between “utility” and “human value?”]


In this popular comparison, our bodies are a lot like computer hardware, of lesser value than our minds. Our bodies can be upgraded or even disposed of in the future when we no longer need them. (46)

[via: It feels as if he’s missing a key dynamic, that humans analogize everything, and we frequently anthropomorphize our machines, including calling the CPU the “brains” of the computer.]


There are three prevailing ways to think about what it means to be created in God’s image: the substantive view, the relational view, and the functional view. (47)

Substantive View

According to the substantive view, something about how God created our minds and bodies distinguishes us from the rest of creation. (47)

Relational View

The relational view of the image of God simply states that we are the only part of creation that can have a relationship with God. (47)

Functional View

The functional view of being made in God’s image revolves around our status as God’s image-bearers. (48)

[via: I think I may subscribe to a “descriptivist” view, that the Bible is merely “describing” a view.]

Which View Is Correct?


cf. Emily Esfahani Smith, The Power of Meaning: Crafting a Life That Matters


3. Medicine: The Doctor Will See You Now


While early cultures did not understand germ theory, which was first proposed by Girolamo Fracastoro in 1546, they did understand one of the key elements of why we get sick and why our bodies break down over time: sin. Our rebellion against God and his design resulted in the fall of man and the brokenness that we experience every day (Rom. 5:12). (56)

Whoever believes in me, though he die, yet shall he live, and everyone who lives and believes in me shall never die” (John 11:25-26).

God’s sovereignty over death and disease is a reminder that this life is not eternal. Each one of us will die at some point, no matter how much we fight it or try to put it off. But the good news of death is that it ushers in our new life of eternity with God, the one who made us and sustains us in this life and the one to come (Phil. 1:21). (57)

[via: This doesn’t really make any sense.]


Death is a reminder that we are not gods, but it is also a reminder that we are created in the image of the one who has overcome death through his resurrection (1 Cor. 15:55-57). (58)

[via: Again, this doesn’t make sense.]


Are the goals of overcoming death and becoming like God even within our reach? Should they be our aim? I don’t believe so, and that puts me at odds with many thinkers like Harari. … But we must keep in mind what the Bible tells us about our lives and how our universe is designed to work. We are not designed to overcome death through our cunning and abilities, even as we help to push back the effects of sin and death. We are designed to worship the one who already overcame death on our behalf more than two thousand years ago. And even in our dying, we are to be reminded of Christ and his victory over death. (60)

[via: Once again, a bit nonsensical, and very much sermonizing.]





As we continue to develop AI medical technology and AI-empowered medical care and intervention, our guiding ethic should be to use these tools responsibly and in pursuit of loving our neighbor. A major risk is that these tools can be used (66) to reduce a human being to a set of data. (67)

But is this really safe and good for society?

[via: An ethics question.]


The human species can, if it wishes, transcend itself–not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity. – Julian Huxley, “Transhumanism,” 1957

cf. L.U.K.E. Life Under Kinetic Evolution

The transhumanist line of thinking will quickly lead to humans being treated like pieces of flesh to be manipulated in search of some upgrade to become greater than ourselves. [NO.] In this pursuit, it will be easy to regard as less than human those who have no clear societal value. [NO.] If we successfully upgrade ourselves, a new disparity between the haves and have-nots will appear. [YES.] An unfettered hope in our ability to fix the world’s problems through technology will end only in heartbreak and broken bodies. We were not designed to carry that weight or responsibility. We are not gods, but we were made like the one who created everything. [BOTH.] … While we should pursue technological innovation to help push back the effects of the fall on our bodies, we should not seek to keep up with the machines, because they are never going to rival us in dignity and worth in the eyes of God. Our machines will increasingly have abilities that surpass ours, but they never will achieve dignity on a par with ours. (70)

| God proclaims that we are not the sum of our parts, nor are we just bodies that should be upgraded at will. (70)

[via: Where?]


4. Family: Welcome to the Family

cf. (the voice of Siri)




cf. Stefania Druga and Sherry Turkle


As our homes become smarter, we quite possibly will become less engaged in the daily lives of our families, because things just always work or are being done for us. While we might constructively use freedom from our daily drudgeries such as cleaning or mowing, the temptation will be strong to use this time to pursue our own devices at the expense of learning and growing together as a family. (83)

[via: But we’re fighting the fall; humans have been doing this for millennia. “Contentophobia” (yes, I made up that word) is a lack of imagination and a failure to understand the true nature of humanity.]




Sex robots are just another form of sexual perversion that promotes this lie.

[via: I’d be curious if Thacker would deem lingerie as a “perversion?”]



These applications are (95) incredibly beneficial, but they also jeopardize our ability to think and process information on our own. (96)

[via: I would call this a “Generational Technological” chauvinism.]

5. Work: Meet Your New Coworker





Imagine a classroom in which the teacher serves as an intermediary between students and their personalized AI teachers. (110)

[via: Already happening!]


Job Loss, Retraining, and the Next Generation

While we can’t fix every problem that will arise from the AI revolution, we can join the next generation in learning about these changes in order to set them (and ourselves) up for future success. (114)

[via: I agree. ;-)]

Free Money?

If all your basic needs are met, why would you pursue meaningful and life-giving work? (116)

[via: Aaand, we’re right back to disagreeing. This kind of sentiment, again, lacks human understanding and is in many ways a very low view of humanity. Pursuing “meaningful and life-giving work” is intrinsic to human nature. UBI does not exacerbate sin; it is designed to mitigate a broken system.]


We Are Adaptable and Will Survive

Our adaptability is a part of what it means to be human, but it takes a lot of time and creative energy. We should welcome the challenge to adapt in the age of AI because AI tools will allow us to grow and mature in ways that aren’t possible without their advances. (120)


So should we fear the future? No. But we should be aware of how AI is changing our society. (120)

[via: I concur, though the general messaging of the book feels a bit schizophrenic.]

6. War: The New Battlefield


It occurred to me that if I could invent a machine–a gun–which could by its rapidity of fire enable one man to do as much battle duty as a hundred, that it would, to a great extent, supersede the necessity of large armies, and consequently, exposure to battle and disease be greatly diminished. – Richard Gatling, quoted in Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: Norton, 2018), 36.






In Army of None, Paul Scharre writes, “Humans are not perfect, but they can empathize with their opponents and see the bigger picture. Unlike humans, autonomous weapons would have no ability to understand the consequences of their actions, no ability to step back from the brink of war.” (132)

AI can desensitize us to the reality that in war, real human lives are lost. (134)

[via: Yes, but this has been happening long before AI.]


While I hear, understand, and largely agree with the concerns raised by Google and others about AI weapons, I fear the lack of development of these weapons will do more harm than good. These tools should be used as deterrents against rogue states and groups. It’s a question not of whether this technology will be created and used but of who will create it and how they will use it. (136)

[via: So, this A) is a violation of a principle articulated by Albert Einstein: “You cannot simultaneously prevent and prepare for war. The very prevention of war requires more faith, courage, and resolution than are needed to prepare for war. We must all do our share, that we may be equal to the task of peace.” B) misses the larger question of the bias of the technology, and what it does to us as humans when we create these tools. And C) appears to be in violation of a biblical theology of “shalom,” a principle that is in concert with the other sermonizing found throughout the rest of the book.]


Scripture calls us to engage in the mission of justice, rather than to sit passively by waiting for others to fight our battles for us. But we also need to be aware of the temptation to pursue militarism as some type of patriotic duty. … This concept of just war must be the basis for how we think about military engagement in the age of AI. (137)

7. Data and Privacy: You Are the Product


Part of our image-bearing capacity is the (142) ability to observe the universe and use that knowledge to further our dominion over the earth and to care for society. These observations allow us to adapt to the world around us and reorient our plans to better fit with the way that the world was designed. (143)

[via: So, here’s the perplexing thing about this statement. Isn’t all of that what we are doing via our technology?]



The power of AI and modern technology can fool us into thinking that we are something that we are not. That we are in control. That we are mini gods. (146)

[via: Um, this is dissonant with what was stated on p. 143.]




cf. Golden State Killer case


…some foundational truths in our pursuit of privacy in the age of AI. (157)

Complete Privacy Is a Lie

Our God is sovereign over all of creation. (157)

Nothing to Hide

The early church didn’t have our modern notions of privacy, but they were not nearly as connected with the wider world as we are. (159)

Data as Property

Exodus 20:15…”You shall not steal.” The Ten Commandments sum up the entire law in the Old Testament,…

[via: Does Thacker not accept Jesus’s teachings, that Deuteronomy 6:4-5 and Leviticus 19:18 sum up the entire law?]

Second, we see that there is a sense in which the things we have been given by God are our personal stewardship and property until the time of Christ’s return (Matthew 13). (161)

Wisdom in Sharing

We should not openly share all things with all people but in wisdom openly share the most precious details (162) of our lives with the church. (163)


[via: Yes, we are.]


Believers must wisely consider the implications of data mining and other issues of digital privacy. (165)

…let us not forget that Christians’ view of data and privacy will increasingly be at odds with the world around us. (166)

8. Future: What’s Coming Next?

We need not fear the future because we know the one who created all things. (168)

As we begin this age of AI, we need to understand what we believe and how others think about our world. The time to engage in conversations about technological innovations is now, while they are taking place, rather than responding to them afterward. (168)

[via: This is a fundamental thesis, and a common premise amongst “Christian” books; to preserve and pursue one’s beliefs first.]



cf. 1737, Jacques de Vaucanson

…just like past generations, the dreams are not always filled with wonder. Often these glimpses of what might be cause us to shudder. The comforting yet often terrifying fact is that for all of our advances in technology specifically with AI, we are still unable to know what will happen tomorrow, much less in fifty to a hundred years from now. (172)


Narrow AI versus General AI (AGI)

The only general intelligence the world has ever known is a human being. No other creature or technology has ever risen to that level of intelligence. But what if there was something greater than AGI on the horizon? (174)


Let an ultraintelligent machine be defined as a machine that can far surpass all of the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; (174) there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus, the first intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” – I. J. Good, 1965


The argument for the singularity rests on two main pillars: exponentially improving computer systems and the understanding that humanity is nothing more than a machine. (176)

Look around you–you’re witnessing the final decades of a hundred-thousand-year regime. – Jaan Tallinn

In response to Tallinn, we simply have to say that if you get the starting point of humanity wrong, it seems that you also getting [sic] the ending wrong. (177)


But what is consciousness, anyway? Simply put, consciousness is the ability to know that you exist. It is being aware, having the ability to think about thinking. (178)

[via: So, animals are not conscious? And fetuses are not conscious?]

The reason that my son is more valuable than my dog is based solely on the fact that God created humanity in his image and made dogs lower than humanity in creation. (179)

These [future] machines will follow programs that will grow more and more complex, and it will become more and more difficult to discern what they’re doing, but it will always be possible to trace every action they perform back to a deterministic set of instructions. – Sean Gerrish



To me, the greatest danger is not humanity designing an AI system that will take over the world but humanity using AI tools in ways that dishonor God and our fellow image-bearers. Christians must be the ones who champion the dignity of every human life, not just the ones in the womb or on their deathbeds. (181)

I don’t fear AI or even the moment of singularity. I don’t fear robots, even if one day they do wake up with some level of consciousness. Rather, I fear the people of God buying the lie that we are nothing more than machines and that somehow AI will usher in a utopian age. AI is not a savior. It is not going to fix all of our world’s problems. It is a tool that must be wielded with wisdom. AI will lead to many great advances but also will open up new opportunities to dishonor God and devalue our neighbor. Just as technology always has. (182)

Onward to the future, full of joy, expectation, and hope as we seek to navigate the age of AI as the people of God. (182)
