I previously touched this subject several times; the last piece was a couple of years ago, but its ever-increasing shadow made me think an encore might be appropriate. Everybody is talking or writing about AI, though the level of comprehension about the subject seems inversely proportional to its ubiquity. You certainly won’t get any deep insight here about its current or future utility or harm. The best I can offer is a few observations about how it’s worked for me. If I can outsmart it on occasion, it’s got a long way to go before it can live up to its second word.
I’ve been regularly using ChatGPT and Google’s AI. If you want to gather facts quickly, the AIs are very good at presenting a useful compendium of the data relevant to the subject you queried. If you want a critical analysis of a subject you’ve input, you’ll get a regurgitation of whatever opinions the AIs have found in the sources they scanned. They have yet to offer any independent analysis that goes beyond what they find pre-cooked on the web.
At a more basic level, ChatGPT often gets the facts wrong. I asked where in Italy the American tenor Richard Tucker sang. It listed a few cities and then said he didn’t sing at Verona or La Scala. He made his Italian debut in Verona in 1947 and appeared at La Scala in 1969. When I corrected the AI, it admitted its error and acknowledged Tucker’s presence at both sites. Google got it right the first time I asked it the same question. I asked ChatGPT the question again. This time the AI said he sang at Verona but that he never appeared at La Scala. I corrected it, and again it admitted its error – a slow learner. The lesson is that today’s AI makes mistakes. So when given an answer, the accuracy of which you don’t know, check other sources.
When I asked ChatGPT why Windows 11 kept switching my color preference from light to dark, it gave me a long list of complicated directions that didn’t work. Google immediately diagnosed the problem as a bug in Microsoft’s PowerToys and told me which setting (just one) to change – problem solved.
When asked to write a review of a book, play, or performance of any kind, the AIs delivered a solid, integrated amalgam of the reviews they found online. They’re very good at collating but show no sign of original thought. They’re also very good at specific tasks, like playing chess. But the computer program that can beat the best human chess player in the world can do nothing else. The same holds for Go, which DeepMind’s AlphaGo mastered in 2016; that program, too, does nothing but play Go. The question of whether such systems will gain mastery at multitasking is more grounded in fear than in reality.
Will it write great music in a style never heard before? I doubt it; even if it did, could it do anything else at an above-human level? Does anyone know how to code for genius-level creativity? Why was Beethoven a greater composer than just about anyone else? Everyone agrees his music is unsurpassed, but try to explain what makes it great. A musicologist may dissect it with learned analyses of chord relations, recapitulations at the right moment, and a host of mastered techniques. Yet why do the first four notes of his Fifth Symphony hit the listener with a visceral impact that defies analysis? Beethoven had it, whatever it is. There’s the problem: how to teach a computer to do something we don’t understand.
A lot of people with varying degrees of expertise worry that the various AIs will get a lot smarter than their human authors and turn on all of us like Pharaoh and the Hebrews. Of course, fear swamps reason every time. But before the AIs can take over the planet and become our masters they will first have to become intelligent. Right now they’re like an idiot savant who’s also learned to cut and paste.
One of the most reliable signs of error is when everyone starts thinking the same way and becomes wildly enthusiastic about the same thing. Without nay-sayers and Cassandras we tend to jump out of airplanes without a parachute. With them a few of us may stay on the ground.
AI may have many uses, such as running machines and assembly lines, spotting design errors, and driving cars more safely than humans, who are easily distracted and prone to operational errors. You can easily expand the list of things a computer can or soon will do better than people. Computers do not have emotions, which can contaminate performance and lead to error. But our greatest and worst achievements stem, in large part, from emotion. Without it there is no Beethoven or Bach, and also no Hitler or Stalin.
AI will doubtless yield many tools and actions that surpass what humans have developed over the millennia. But before it can become a threat or a benevolent guardian it must first gain consciousness, and human consciousness is itself a mystery. Consciousness resides in the brain, but beyond that we really don’t know how it works. If it’s just a spontaneously emergent property of sufficient computing power, then a computer will eventually achieve it. If, as is highly likely, it’s much more complicated than that, a computer may never be conscious in any way approaching the way we are.
AI is certainly going to exert a powerful force over our lives as have epochal technologies in the past. It will doubtless add and subtract depending on the skill sets it enhances or renders obsolete. It will also cause many people to lose a lot of money because of reckless investments in technologies they don’t understand and which fail. Whether it will have more of an effect on our lives than the development of modern agriculture, the automobile, the transistor, or other revolutionary technologies is debatable. Remember the surgeon’s creed – when in doubt, cut it out.

And DeepSeek thinks that Charlie Kirk is still alive…