Divergent analyses of the same information by people of equal insight, intelligence, and goodwill are a constant feature of human interactions. Politics is a prime example of this phenomenon, but it is so contentious that either side of a political divide is likely to vigorously declare the other a prisoner of error rather than admit the possibility of misinterpretation and allow further examination. So I'll leave this discordant aspect of human behavior alone for fear of alienating half, or even all, of my readers.

Go to a scientific meeting, where disagreements should be less laden with emotion, and you'll often see that heat overwhelms light. The long history of science includes a litany of disputes as vicious as any cockfight. I'm referring to disputes within science itself, ones that do not involve outside parties like church or state.

My topic is why this divergence of opinion is so common, irrespective of the subject, and what its mechanism might be. Analysis of a vast array of difficult problems often yields discordant solutions. Thus, the propensity to disagree over just about anything appears to be an ineradicable human characteristic. One could argue that disagreement is needed to reach a solution, but the kind of argument discussed here typically does not lead to resolution; it often produces conflicts of increasing intensity. It appears that we are wedded to conflicts that reasoned discourse cannot resolve.

Why can the same person capable of extraordinary feats of rationality also fall prey to passion and its irrational siblings? There is no scientifically verifiable answer. All that’s available is conjecture. So I’m free to speculate away, but without any intellectual shield from incoming opposition fire.

To begin, the human mind is an elusive entity. There can be no doubt that we have one and that it's somewhere in the brain, but where? There is no anatomical center of the mind. Mind seems to be an ill-defined, diffuse process that emerges from the collective activity of the brain. Neuroscientists, psychologists, and philosophers have struggled to define and characterize it. I am in no position to add to their attempts; the mind is simply there, with tremendous variability and capacity. Here I am skirting its outer edges, concerned with only one species of its variability.

One can easily account for differences in taste. There is no objective standard for it. Explaining it may be difficult, but understanding it is not. The tendency to interpret the same data, events, or outcomes differently may be influenced by taste or something akin to it. This proposition is as elusive as the concept of mind and suffers, or gains, from being neither verifiable nor disprovable.

The facts remain the same; the data are the data until better information arrives. They may remain constant for decades or centuries, but eventually many are replaced. Our tastes, by contrast, change rapidly and are not based on rational analysis. They are what they are. Mingle the two and the resulting suspension may be a combination of emotion and reason. This combination would explain why rational people are not always rational. But since these irrational spasms remain relatively fixed (i.e., emotion overwhelms reason on only a few issues), the irrational islands must be widely separated in many people and move at the speed of continental drift.

Such an arrangement would explain the consistency of rational analysis in some subjects and not others. It would also allow the change of opinion that often emerges with increasing age. Not only is no man an island, but his views are not necessarily fixed.

Artificial Intelligence has both rationally and irrationally captured our attention at a particularly fragile time in human development and interaction. Will it have a mind? If so, will it have emotions and tastes as we do? If we can construct an AI that functions as well as, or better than, our brains, will it of necessity have islands, or even continents, of non-rational mental activity? Is it possible to build a sentient AI that is always rational?

Given the fog in which our concept of the mind is enshrouded, why shouldn't we believe that, however powerful an AI may become, it will be just a smarter version of us, subject to the inevitable mixture of intellect and reason on the one hand, balanced by emotion and taste on the other?

All of the above is conjecture and can easily be discarded, as its inevitable conclusion is that neurosis is a fixed component of intelligence. A mind can never be always rational. Taste, emotion, affection, dislike, the desire for power, and the other features of the human condition that characterize our behavior may be ineluctably connected to rationality. What I'm positing is that you can't have the one without the others, no matter how smart a machine we construct. Is a smarter version of us a good thing? If we build an AI smarter than ourselves, will it also be more foolish? AI will never achieve sainthood.