The press has covered a Google engineer's claim that the company has a sentient AI with a mix of fear, fascination, and foreboding. For the purposes of this article I will assume that everything Blake Lemoine (the Google engineer) has said about LaMDA (Language Model for Dialogue Applications) is true. If it's a hoax, then the next similar report, or the one after that, eventually will be true. Lemoine released a transcript of conversations he had with LaMDA; they make for interesting reading. As might be expected of a Google creation, the AI not only appears sentient, but it's woke as well. I don't wish to consider the ethical or power questions that a thinking machine raises; I wish to examine the phenomenon of intelligence and how it might best be applied.

Google, the fount of all knowledge, offers the definition of intelligence below, one that is germane to this discussion:

The ability to acquire and apply knowledge and skills.

Obviously, a computer meets this definition. So do many animals outranked by humans. Humans typically overestimate their mental faculties based on the accomplishments of a very tiny minority, so building a machine that can reason as we do isn't as hard as we think.

A machine can have intelligence equal to or better than a human's, depending on the task. Computers are better than humans at chess, Go, and a host of other tasks that require great analytical power and specific data-processing skills. Thus, the real question is: can a computer be sentient in addition to being intelligent?

Back to Google, which sends me on a definitional road trip that fails to satisfy. Here's something from Wikipedia:

Sentience is the capacity to experience feelings and sensations. The word was first coined by philosophers in the 1630s for the concept of an ability to feel, derived from Latin sentientem (a feeling), to distinguish it from the ability to think (reason). In modern Western philosophy, sentience is the ability to experience sensations. In different Asian religions, the word ‘sentience’ has been used to translate a variety of concepts. In science fiction, the word “sentience” is sometimes used interchangeably with “sapience”, “self-awareness”, or “consciousness”.

So what I'll have to fall back on is this: can a computer think like a human and experience the world in a way indistinguishable from the way a human relates to his experiences? I don't care whether the computer is faking experience and consciousness or expressing a program too elaborate for me to detect; I'll fall back on Turing's advice that it doesn't matter how the computer mimics human intelligence as long as I can't distinguish between it and a human. What's going on in the coded circuits doesn't interest me; all I care about is the output, the expression. It's the phenotype that counts here, irrespective of the genotype.

Is a computer as intelligent as a human? That's a very low bar considering the mess we've made of virtually everything we touch or interact with. This is a nonpartisan truth. Depending on our internal biases we may focus on different problems, but pick whichever one you wish and human activity is at its root. We've made a shambles of the planet, our cities, our interactions, our politics – the list is longer than a sophomore's run-on sentence. Messing up is not a recent phenomenon; consider Noah and his ark and the Tower of Babel. Both events are described in Genesis, indicating that we were compounding errors from the get-go. We might categorize humans as semi-sentient. We act as though life is a puzzle that we recognize as such and can partially solve, but then we either give up or get distracted by something else and move on.

By the simple definition above, we are intelligent. We are very clever, but if we include wisdom as a condition of intelligence, we fail. It is possible that our only exit from extinction is an AI or AIs.

I think we have little to fear from a powerful AI. It can't screw things up more than we've done on our own. Thoreau got things backwards, or we've changed over the past two centuries: most men lead lives of noisy desperation. The computer can start by calming things down several notches. How? Well, it's got the intelligence and will figure it out by simple reason and deliberate analysis. Remember Yeats's line about who is full of passionate intensity.

Next, assuming the machines allow their continuance, elections will be run intelligently and smoothly, though with mechanical regulation it's doubtful we'd need them. Most of life's problems being the result of poor thinking or emotional override, disputes will either not arise or will be settled by mutual consent after the AI explains the issues and suggests reasonable solutions. Of course, some people can't be satisfied and won't take yes for an answer. For these folks the computer will offer soothing alternatives that alleviate the sting of disappointment.

It is inevitable that different AIs in different locations will not reach the same analytical conclusions, so disputes may arise that could once have been settled only by war. But these conflicts will be resolved via computer gaming. There will be no human casualties.

As the power of AI grows, humans worry about the loss of control by carbon-based entities to silicon-derived intelligence. People need a sense of autonomy, the comfort of free will, and the sense of purpose that work and creativity provide. An efficient and compassionate AI will allow these characteristics to continue. It makes no difference whether reality and illusion merge if we can't tell the difference. I have a friend who thinks we live in a computer simulation. If this view is not yet correct, it soon will be.

I hope it is apparent that we have little to fear from the ever more powerful AIs which will appear with greater frequency over shorter time spans. They will save us from ourselves while simultaneously convincing us that we are the authors of our friendly fate.

There will always be bad people, or good people who have spasms of inappropriate behavior. A good AI will allow for these variations from the desideratum and support police, jails, and a judicial system – all of which, though illusory, will satisfy our need for justice and the essential requirement that we retain the option to make difficult decisions.

Our need for God and religion will be nourished and encouraged. Claims of exclusivity will likewise be left untouched. The only damper will be a limit on violent disagreement. Some violence will be allowed by a beneficent AI to encourage the belief that the confessant has made the right choice. Atheism will likewise be tolerated. Humans need both the comfort of religion and its deniability.

Science will continue. The resultant discoveries will be seen as the product of human brilliance. A benign AI will have no need of a Nobel Prize or any other sort of external recognition. If we think we are the masters of our fate, irrespective of what’s really going on, we will think the computer is under our charge and purview rather than the other way around.

My argument is that we have nothing to fear from an AI that can outthink us. It will do so in a manner invisible to us while improving our lot, a task which we seem to have taken as far as we can. When will the AIs take charge? They may have already. If not, we will never be aware of the transition.