This is a follow-up to the article I wrote here five days ago, in which I described the failure of OpenAI's ChatGPT to handle medical material. Obviously, the AI program has not had much exposure to medicine, which accounts for its poor performance in that field. Specialized AI programs have done as well as or better than physicians on medical certifying exams, though I'm not certain they could write an essay adequately answering the question that ChatGPT failed to handle.
An AI programmed to play Go (AlphaGo), the most complex game yet conceived, beat the world's champion player four games to one in 2016. Kellin Pelrine, an American player not at the top of the amateur rankings, defeated KataGo, a program considered the equal of AlphaGo, which is not publicly available. Pelrine beat the computer by using another program to probe for weaknesses that KataGo's designers had overlooked. No AI can be better than the situations it is constructed to respond to, which is why ChatGPT made such a hash of gastric alkalosis.
After one million run-throughs, the analyzing computer found a "blind spot" that a human would easily recognize. Exploiting this omission in the coding, Pelrine was able to beat the supposedly invincible machine with ease. This victory shows that none of the so-called AI machines really possesses artificial intelligence. To be intelligent, they would have to respond to and analyze situations they had not been programmed to deal with.
Since I wrote the initial piece linked above, I have encountered additional puff pieces about AI programs attributing to them more power than they possess. AI seems a bad term for the programs now being written, which doubtless will be very good at handling specific tasks. "Limited AI" might be better, as it makes no claim to universal knowledge, or even to skill beyond what a program has been designed to do.
Computers can do calculations far beyond the capacity of humans. They can do or oversee specific tasks better than we can. But they do not yet seem capable of imagination or of adapting to new circumstances as the best humans can. They may be able to operate at a mediocre level in situations at the edge of their programming, but when creativity is required they cannot yet match the best humans.
Of course, most humans who function at the highest skill levels do so under a limited set of conditions. Even the greatest minds and bodies ever seen have limits to their skills. Thus, it seems unlikely that we will ever develop a computer that has mastered every aspect of human behavior and that can create new knowledge or art unmatched by anything previously done. Such a computer would also have to be devoid of bias and able to deal with complicated subjects in depth.
Ask ChatGPT to discuss the difference between justice and social justice and you'll get a left-biased, wholly inadequate discussion of both subjects. Go to the Wikipedia article on justice and you'll get pages (with just a tinge of bias) on the subject, with links to many other articles on the topic. ChatGPT still has a lot to learn.
Thus, limited AI will doubtless be a boon to human achievement, but whether it will create great art, literature, or music is doubtful. That a machine can beat a man at something does not seem very disturbing. No one cares that a car could beat Usain Bolt in a race. Bolt didn't race against automobiles; he competed against other men. Which brings us to how he would have done against women sprinters, but that would be beating a dead horse.