
Artificial Intelligence and Its Impacts on Humanity

VOLUME 9 NUMBER 1 ISSUE OF BROADCAST TALKS

BROADCAST TALKS presents ideas to cultivate Christ-like thinking and living. Each issue features a transcription of a talk presented at an event of the C.S. Lewis Institute. The following is adapted from an interview with John Lennox, conducted by Joel Woodruff, President of the C.S. Lewis Institute. It was recorded on June 10, 2022, and subsequently broadcast as a virtual event titled “Artificial Intelligence and Its Impacts on Humanity”.


Joel Woodruff: During our time together, we’re going to be looking at the complex topic of artificial intelligence. We obviously won’t be able to cover it exhaustively. But our hope is that we will have a discussion today that will awaken all of our interests and help us to go more deeply into this topic, because it’s certainly something that impacts our future. So John, could you give us a definition for artificial intelligence, or AI?

John Lennox: Artificial intelligence essentially comes in two forms, and it’s very important to distinguish them. The first is usually called narrow AI, and then there’s artificial general intelligence, or AGI. Narrow AI is the stuff with which we’re really familiar. It’s the stuff that’s working today. AGI is much more speculative.

So let’s think a bit about narrow AI. The word narrow refers to the fact that a narrow AI system typically does one single thing that normally requires human intelligence to do. I’ll give you an example: Let’s take x-rays. In this age of Covid, a lot of x-rays of people’s lungs are being taken, so we have a huge database consisting of, say, a million pictures of x-rays of lungs, and they’ve been labeled by top doctors with the diseases the pictures represent. And then we have a computer, a powerful computer, and suppose an x-ray is taken of my lungs. What happens is this AI system compares. It’s got an algorithm, a procedure that automatically compares the picture of my lungs with the million pictures in the database. It does that very quickly, and it comes out with a diagnosis that I’ve got this or that disease.

These days, that kind of system is proving very effective indeed. And generally speaking, the result is better than you would get from your local doctor. So that’s a typical AI system. Now, the word artificial needs to be taken seriously. The machine is not intelligent. It simply does what it’s programmed to do. It simulates intelligence. And that’s the important thing. It doesn’t think. It simply does what is built into its program. But the result is what you’d normally get by using human intelligence. So as one famous early scientist in this field said, “The word ‘artificial’ in artificial intelligence is real. It’s really artificial.”

That’s the first kind. And we’re familiar with it, by the way. Our smartphones are constantly suggesting to us that, because we bought this book, we might be interested in that book. And Amazon’s search engine is guided by AI, which is picking up a trail of all our purchases and then picking through all its catalog, and the algorithm then spits out what we might be interested in. So we’re all extremely familiar with this kind of thing. And we cooperate with it, even though it’s tracing our position, and so on and so forth.

Facial recognition is another example of the type of technology I’m talking about. We’ll come to that in a moment, because it very rapidly raises massive ethical problems. I often say that artificial intelligence is like all technology. It’s like a knife. A really sharp knife—you can use it for surgery or you can use it for murder. And there’s a downside to AI that you might want to investigate.

But before we leave the generality, let’s think a bit about artificial general intelligence. As the name suggests, the idea here is to create a machine that can do everything that normally requires human intelligence and do it better and do it faster. And so here we’re in the realm of the attempt to create a superintelligence.

There are various directions in research, two main ones. The first one is to try to enhance existing humans, to make them super intelligent by blending them with technology and turning them into a kind of cyborg, which a lot of people favor. And some folks speculate that one day we will merge with the machines. The other way is to try to start from scratch, remove the dependence on biological material, and try to, as people say, upload the contents of our minds onto some durable material like silicon, or something like that. There’s a huge amount of hype. This is the kind of speculative stuff that is loved by the makers of science fiction films and the authors of sci-fi books.

But I take it seriously in my book [2084: Artificial Intelligence and the Future of Humanity], because many leading scientists are taking it seriously. To give one particular example: our British Astronomer Royal, one of our very top scientists, Lord Martin Rees, says that we can have zero confidence that the dominant intelligences a few centuries hence will have any emotional resonance with us, even though they may have an algorithmic understanding of how we behaved. He’s suggesting that, in the far future, it won’t be the minds of humans but those of machines that will most fully understand the cosmos. Because leading scientists like Lord Rees are talking like this, I think it’s very important for us to think about the implications of this kind of thing. Although it may be many years away, it often amuses me that whenever I ask people or read how long it will be until the so-called singularity occurs, that is, when the machines will take over, it’s always about 30 to 50 years hence. And that’s been true for quite some time.

So we’ve got these two kinds: narrow AI, which is up and working at the moment, with great benefits on the one hand and negatives on the other, which you can ask about if you wish; and then AGI, which is largely speculative, but toward which quite a number of people are working, not least because they see mega dollars in it.

Joel Woodruff: So this is a good way, I think, of understanding the differences. This narrow AI does seem like something we all encounter in everyday life. I don’t always like it when Amazon is telling me what I should buy next. But it is amazing how they can see my interests just from those algorithms and such. We obviously are being impacted by that. Maybe an angle I’d like to look at, because you mentioned the science fiction world: in the 20th century, there were authors like Aldous Huxley, who wrote Brave New World, and George Orwell, who wrote 1984. (I think your book 2084 is kind of a spinoff of that title.) But it seems that these 20th-century authors, while dreaming up some of the technology, were also addressing possible ethical issues that could be harmful, as the worlds they created were somewhat dystopian rather than utopian. What impact, if any, did these kinds of novels have on getting us to where we are in the world today?

John Lennox: I think they had a pretty considerable impact, really. And it’s very interesting that you would raise those novels. George Orwell and Aldous Huxley took very different tacks on these things. Consider the idea of big data that George Orwell envisaged in 1984, the idea that Big Brother is watching you, and surveillance technology; it’s very important to realize that he saw this kind of stuff and saw its extreme danger. Incidentally, I should set the record straight. The title does indeed come from spinning off Orwell’s book, but it wasn’t my idea. It was the idea of a colleague of mine, a very well-known atheist who has debated me on several occasions, the Oxford chemistry professor Peter Atkins. When he heard I was writing this, he said, “I’ve got a title for you,” so I do acknowledge him at the beginning of that book.

So it’s very important. The ethical issue that’s raised there, of course, is the invasion of privacy. I noticed that, when you referred to Amazon, you said you didn’t quite like them doing that. This is actually a doorway into a huge problem, a problem in several directions. Let me, first of all, talk about it in terms of Western economies, where there’s vast money to be made in the following way: I call it surveillance capitalism, which is the key phrase in the title of a book by Shoshana Zuboff, a brilliant emerita professor at Harvard Business School. She points out that these powerful search engines are harvesting data about you and me; what we don’t often realize is that they’re selling it off to third parties without our permission. So it is a real invasion of privacy. Her book is being taken extremely seriously by major players in the world economy.

Of course, the interesting thing is, here we are, we have our smartphones, and they are tracking us. They’re harvesting information: where we go, whom we meet. They might even be listening to our conversations. Who knows? And yet we do this voluntarily. And someone has made the point, “Isn’t it odd that we invite a spy into our houses?” with our so-called digital assistants, like Alexa and Siri. Again, who knows what’s going on behind the scenes? So that presents a real problem. But then, going back to Orwell and his “Big Brother is watching you” idea—there we have surveillance of another kind, what I often call surveillance communism, though it is not found only in communist countries. It is well known that in China surveillance technology is being used for purposes that I find frankly very disturbing.

Let’s step back from this for a moment. You can immediately see that facial recognition is very useful for a police force, which can pick out criminals from a football crowd, or recognize people in the street or coming into a country and arrest them. But unfortunately, what can be used for surveillance in a good way can also be used to control people. And this is what sadly is happening, particularly in Xinjiang in China, where the Uyghur population is being surveilled to such an extent that it’s almost unbelievable. A couple of years ago, there was a major article on this in… Time or Newsweek, and the Chinese author said, “Look, in the West you’ve got all this technology, too. The only thing is, it’s not yet in the hands of a strong central government, but it may be one day.” I read that, even in this country, the police force would like to have that kind of technology. The difficulty is that you can see a benign and positive use, but you can also see the extreme danger of it. The invasion of privacy in some parts of the world is utterly extreme. There’s the famous social credit system, again being rolled out across China, where you are monitored all the time. If you do anything that doesn’t fit in with the current system, you get penalized; you suddenly find you can’t use your credit card, or you can’t get a seat on a plane, or you can’t get an upgrade in your job, etc., etc. And this is actually working now. This is not futuristic speculation. This is stuff that’s being rolled out all over the place.

This is still narrow AI. This isn’t artificial general intelligence. Then there’s another huge area here, the whole question of artificial intelligence used in the direction of autonomous weapons and that kind of thing. So you’re raising very big issues. The ethical issues are a wake-up call for us to take this kind of thing absolutely seriously.

Aldous Huxley, by contrast, tended to feel that we’d be overcome because we’d fall in love with our technology, whereas Orwell thought it would oppress us. But we seem to have both things happening in our society together: we fall in love with it, and it eventually takes us prisoner.

Joel Woodruff: I think that’s probably a reality for many in the world today. I know people certainly like their smartphones and all the things they can do. However, you raise some pretty scary topics about what can happen. It makes me think theologically about when we talk about God as being omniscient, omnipresent, omnipotent. In a sense, I wonder if this kind of technology is almost like going back to the Tower of Babel, where mankind is trying to become like God, so that we’re all-knowing and ever-present in a strange kind of way. China seems to be doing some of that. I know that we’re talking in a science context, but as a believer, there’s kind of a spiritual dynamic going on in all of this.

John Lennox: Oh, absolutely. And the Tower of Babel in Scripture is really an attempt to reach for the sky by using high human intelligence to build a colossal skyscraper—one of the first we ever read of. And someone has well written, “Behind every skyscraper, there is an even bigger ego”; it’s human beings trying to reach God through their technology. And that’s exactly what some people say we are doing.

One of the major reasons I wrote my book is that we have a very well-read, best-selling Israeli intellectual, Yuval Noah Harari, who has written a couple of million-selling books, Sapiens and Homo Deus. The second of those, Homo Deus, which is Latin for the “God-man” or “the man who is God,” is exactly about this kind of thing: using technology to fulfill what’s called the transhumanist dream. And that’s another phrase that is looming large in the area of artificial general intelligence. In other words, moving beyond Humans 1.0, as we are, to something that combines the power of the machines with advanced human intelligence by implants, by drugs, and by genetic engineering. His basic idea is that, in the 21st century, two things are on the agenda: The first is to solve the problem of physical death; he regards physical death simply as a technical problem with a technical solution. The second is to enhance human happiness.

Those two things that Harari talks about are in the hearts of many people, and they’re ancient. The idea of wanting immortality leads you back to concepts like the Tree of Life in Genesis and the elixir of life in many mythologies. The desire for constant happiness and living forever, of course, is in every human heart. You asked me what I thought of all of this as a Christian believer, and I’ve written a great deal about it in my book, because it seems to me that what many people desire from AGI is actually something that will never be fulfilled by technology; it’s a parody of what is held out to us in the Bible.

I think I ought to explain what I mean by that. Let’s take the people, such as Harari, who suggest they’re going to solve the problem of physical death. They’re going to enhance human happiness, so that we’ll be like gods. And that’s what he says. I say to them, “You’re too late.” They look at me with great puzzlement and say, “What do you mean, we’re too late?” Well, I say, “First of all, the problem of physical death has already been solved, incredibly, 20 centuries ago,” and we’re talking about this at Eastertime. The resurrection of Jesus Christ is the evidence that God through Christ has the power to raise the dead. So that’s a solution to the problem of physical death. I would just like to add, there’s a lot more evidence for its truth and credibility than there is for the AGI solution.

But the second thing is this idea of immortality and the enhancement of happiness. The wonderful thing about the Christian message—and when this hit me, it was the trigger, the aha moment, for writing my book—is that I suddenly realized that what Harari and others are holding out in the transhumanist vision is already there in a real sense in the Christian message. Because, you see, what Scripture tells me is that the person who trusts Christ as Lord for salvation, repenting of the mess they’ve made of their lives and maybe of other people’s lives, receives in that moment a new life that actually is going to exist forever. They receive peace. They receive forgiveness. They receive a new power to live. And they’re also assured of the fact that their very bodies will be raised from the dead. Talk about an uploading! That will be the best one that ever happens.

What strikes me powerfully is that the AGI vision, the transhumanist vision, bypasses the fundamental problem of human brokenness, human sin, human rebellion against God. Therefore it won’t solve anything. Aspects of AI may imprison people and control them. But God’s solution, to give people forgiveness and freedom, is unique. That’s why I say they’re too late. And that’s why I’m very interested in the fact that millions of people around the world are fascinated by this stuff. Of course they are! Because it’s speaking to a deep-down desire to get rid of death and to have real hope and real life.

So I believe that Christians are very well placed, in the best possible place, to speak into this situation and to point things out. I often do it this way, and I do it in my book. I say, “Look, there are many speculative sci-fi scenarios about the future”—there are dozens of them, and people will believe them. I say, “If you’re going to believe that, why don’t you have a look, before you reject it, at the scenario presented to us in the New Testament? Actually, there’s more evidence for it, and it will bring you much more peace and happiness, and the solution to these problems, than AGI is going to do.”

Joel Woodruff: That’s a wonderful way of looking at it, to see that a lot of what the AGI effort longs to address—the core longings of the human heart for everlasting life, for youth, for happiness—has already been addressed through Jesus Christ and His resurrection. I think it’s a great way for us to approach our friends and colleagues when discussing this topic. It’s getting to the core: “Why are you pursuing these ideas?” I think that’s really helpful to us in that regard. When we think about AGI, which is this blending of artificial intelligence and the attempt to create these transhumans, how does that impact the view of human beings? Obviously there’s a Judeo-Christian view of that. But how do you see it impacting the view of human beings—human life, and the value, worth, and dignity of human beings?

John Lennox: That’s another key issue that served as a driver for me in writing this book. Who am I as a human being? For the transhumanists, we are only one staging post in a possibly endless evolution of human beings into something higher, greater, cleverer, and all the rest of it. I say I’m not very convinced about that, because we haven’t shown a great deal of change in many centuries. That’s the first point. But much more important than that, it seems to me that we’re faced with a worldview question; in this area, we have two main worldviews in contention. There’s the atheist worldview, and atheism has a great influence in this whole arena. Then there’s the theistic worldview, in my case the Christian worldview. And that tells me—and this is believed, of course, by our Jewish and Muslim friends as well—that human beings are made in the image of God, which gives them infinite value and worth. Therefore I am concerned about artificial enhancements of existing human beings that begin to meddle with the very definition of what it is to be a human being.

Now, as I’m talking to you, I have an enhancement sitting on my nose. It’s my pair of glasses; I might have had contact lenses, and one can imagine all kinds of enhancements: hearing aids, speaking aids, etc. Those are very positive enhancements, and I believe that Christians ought to be involved in their development, and certainly in many aspects of AI—that’s an important thing to say. I’m not against research in this. I’m very much for it. If you want a brilliant example, consider the work of Rosalind Picard, a brilliant scientist at MIT. She’s a Christian believer, and she has a lab called the Affective Computing group—she essentially founded that field herself. She’s using artificial intelligence to develop smartwatches that predict whether a child, for example, is about to have some kind of seizure, and it’s saving kids’ lives. This is marvelous stuff to be involved in! So let none of our viewers think that I’m against involvement. I think we need Christians in there, not only doing the technology, but also being able to engage with the ethical aspects of it.

But my final point on this, although it’s a very big topic: Not only are humans created in the image of God, but they’re utterly special for another reason. The central claim of Christianity, amazing though it is—I believe it is absolutely true—is that God became human. So human, if I might put it this way, is the kind of being that God can become. That makes them utterly unique. And that’s why I think Jesus Christ is utterly unique in history. He was a man, but never a mere man, never only a man. He was God become human. Once more, people are starting to play around and play God, so to speak, particularly with the genetic construction of human beings. C.S. Lewis saw it a long time ago, when he wrote a couple of brilliant books: one, That Hideous Strength, which is a science fiction book, interestingly, and another called The Abolition of Man. He pointed out that if, in the end, we leave it to a group of scientists to specify the blueprint for all future human beings, they won’t be human beings. They’ll be artifacts. He ends with this chilling statement: “Man’s final conquest has proved to be the abolition of Man.”

And that’s what I fear, that with all this hype, in the end, AGI could destroy humanity, when all the time we’re being offered this wonderful message that can bring peace to our hearts, can bring certainty of the future, and a guarantee of being, if I might put it that way, uploaded into a future world to enjoy an eternal relationship with the God who created us. But in order to be involved in that, we must face what is wrong with our human nature, our sin and rebellion against God, and we must face it and repent of it and trust Christ. That’s a radical solution to a radical diagnosis. AI knows nothing of it, and therefore, I fear, is doomed to fail at that deeper level.

Joel Woodruff: That’s, I think, a great thing to consider, especially when we think about the prospect you mentioned, if scientists were to control all of this without ethical thinking, philosophy, and, as a believer would add, theological underpinnings. It just shows the importance of the humanities and the liberal arts, and the sense of integrating a worldview. You can’t just do cold science. We need to have integration of those things. How would you recommend believers be involved in the development of the positive uses of narrow AI, as well as in speaking into some of the dangers of AGI?

John Lennox: You’re absolutely right in our need for something more than science. The late Jonathan Sacks, Lord Sacks, who was our British chief rabbi and a brilliant philosopher and thinker, said something like, “Science takes things apart to understand how they work, whereas religion puts them together to understand what they mean.” Science does not give us meaning. Faith in God and Christ, the biblical worldview, fills our world with meaning.

Therefore—particularly for younger people who are scientifically gifted—it’s an important world to be in, to develop technology that’s going to be beneficial to human beings. The list of beneficial uses of narrow AI is growing day by day. And it’s those people—if they spend time also thinking through the implications of the speculative side, philosophically and from a Christian perspective—who will be in a very strong position to help the rest of us navigate these topics in which we aren’t trained. Because it’s quite clear that a lot of this stuff has a seductive effect on people, and therefore we are moving toward—and this is another topic altogether—having AI linked with virtual and augmented reality, and immersing ourselves in alternate worlds. This raises immense ethical problems, allowing people to do things that they wouldn’t want to be seen to do in their real lives. So it’s immensely important that we have Christian thinking on this score.

Joel Woodruff: I know in your book you address the future of AI, where we’re going. I like the Yogi Berra quote that you have there, where he states, when it comes to predicting, “It’s tough to make predictions, especially about the future.” Where do you see us going in this world, with narrow AI and then AGI as well?

John Lennox: I find it very hard to say. But as a Christian believer observing the trends in our world today, I fear that there is a development toward central authorities, or even ultimately a world government, where the economy will be controlled in the kind of way we’re seeing today with a social credit system. The book of Revelation is full of metaphors, but metaphors stand for realities. And it talks, under the figure of an animal, a wild animal, of a future world leader who fascinatingly controls the economy by insisting that everybody has a special mark on their forehead. Now, that used to be inconceivable, but we are getting nearer and nearer to it. I use fingerprint identification on my computer. There’s retinal identification. There’s all kinds of stuff. So that’s what I was saying earlier. The biblical scenario is quite scary, but it meshes with reality as we see it. Looking around us, we see these trends in our society, so it’s no longer wild speculation. Of course, we cannot necessarily identify or pin down all that Scripture says, because what it wants to do is show us, roughly speaking, where things are leading.

But for the Christian, there is an additional central hope, and that is not only that we can enjoy the fruit of eternal life now by trusting Christ and receiving Him and knowing the reality of the power of His indwelling life, but we are promised that one day He will return. This world hasn’t heard the last of Him. How can it have? If He’s the Son of God, ultimately the Word who is the Creator, you cannot crucify that person and think that the world has heard the last of Him.

So here am I with my feet on the ground, an emeritus professor of mathematics, believing very strongly that one day Jesus Christ will physically return, as He was seen physically and literally to go. Of course, that depends entirely on who He is. But that’s a question each of us has to decide for ourselves.

Joel Woodruff: Yes. I think it is interesting. The Scriptures do give us insight into the future. It is amazing how some of these apocalyptic genres, and the book of Revelation, seem to be coming to pass, as far as what is possible. But there is this good hope. Jesus talks about how there will be wars and rumors of wars and earthquakes—all kinds of evil may be on the move—but we know that the gates of hell shall not prevail against God’s kingdom. So, as believers, in the midst of all the possible negatives in the world, we do have a great hope. As a follower of Jesus Christ, how would you encourage each one of us to live in this world of which AI and AGI are a part? How do we witness to Christ and share the gospel amid these dangers we’re living with?

John Lennox: We can only do what we can do. I think the level to start at is in conversations with our friends, to raise these issues. That actually is one of the practical reasons I wrote this book, because I think you’ll agree it’s pretty accessible. The idea was to give a tool to thinking Christians, so that they could use some of these notions to engage their friends and colleagues on these issues. I think the main way to engage people is not by telling them what we think, but by asking them questions about what they think. I would want to ask people, “Look, what do you think of artificial intelligence? Where do you think it’s going? Have you any defense against the negative side? What helps you?” and so on. In the end, one would hope, as the apostle Peter says, they may ask you, “Have you got any hope, and a reason for the hope that is within you?” Then you can explain your hope in Christ and trust that they will take it seriously.

Joel Woodruff: Maybe a final question: I know you have grandchildren, and we’re thinking about the next generations. What would be your advice to the next generation about how to deal with this changing world in which AI, AGI, and all kinds of technological issues may come about? And how do we deal with that as followers of Jesus Christ?

John Lennox: I think the most important thing is for parents to talk to their children about these things. There’s a huge danger at the moment—I’ve been reading about it just today—of children, when interviewed, saying they feel desperately lonely because, when they come home from school, their parents do nothing but sit at the table, fiddling with their smartphones or tablets or watching television, and so on. The children are crying out for friendship, for companionship, for family love, and they’re not getting it. So what is happening is they’re turning to the connected world, the internet of things, and they’re going on the net, and they’re friending people, but these aren’t genuine friends, and it’s an absolute tragedy. Parents need to wake up and learn to leave their smartphones, if they can, at work, but certainly away from the family; to make sure they have family meal times; to talk to their children about these issues; and to get their children interested in critiquing these things, especially as they get into their early teens. And there’s a vast lack of that.

Some years ago, another MIT expert, Sherry Turkle—I think she’s still working at MIT—wrote a book called Alone Together, and its subtitle is “Why We Expect More from Technology and Less from Each Other.” It’s a devastating indictment of our contemporary way of life, and we need to face this very seriously as Christian believers. And it’s in the hands of parents, first of all, to set the example. If the parents are wired to smartphones and so on all the time, they will lose any influence on their children in the end.

[Video of the complete version of this talk is available at: Artificial Intelligence and Its Impacts on Humanity.  Additional information about the topic of this talk is included in John Lennox’s book, 2084: Artificial Intelligence and the Future of Humanity (Zondervan, 2020).]


John Lennox

Dr. John C. Lennox is an Irish mathematician, bioethicist, and Christian apologist. Now retired, he is Emeritus Professor of Mathematics at the University of Oxford, as well as Emeritus Fellow in Mathematics and Philosophy of Science at Green Templeton College, Oxford. A former professor of mathematics and the philosophy of science, he has also taken part in public debates with New Atheists such as Richard Dawkins, presenting intellectual defenses of Christianity. Lennox has written acclaimed books such as Cosmic Chemistry: Do God and Science Mix? and 2084: Artificial Intelligence and the Future of Humanity. He received his Ph.D. from the University of Cambridge and holds numerous honorary degrees from other British universities.


COPYRIGHT: This publication is published by C.S. Lewis Institute; 8001 Braddock Road, Suite 301; Springfield, VA 22151. Portions of the publication may be reproduced for noncommercial, local church or ministry use without prior permission. Electronic copies of the PDF files may be duplicated and transmitted via e-mail for personal and church use. Articles may not be modified without prior written permission of the Institute. For questions, contact the Institute: 703.914.5602 or email us.
