Any artifact that seems to perform certain tasks better than humans begins by arousing fear or enthusiasm, usually excessive, threatening to replace us or promising to set us free, and ends by challenging us to think about what exactly is human in those tasks. Is it possible to have a less hysterical relationship with the digital world? If so-called artificial intelligence did what the human brain does, there would be reasons to cheer or to worry; but the truth is that these are two powers which, despite their shared name, bear little resemblance to each other and cooperate more than they compete. ChatGPT's promises have raised expectations of artificial general intelligence, of a "singularity" (John von Neumann), "superintelligence" (Bostrom), or "digital supremacism" (Balkin); that is, the prospect that machines can match and even surpass human intelligence. Because artificial intelligence systems keep getting smarter, it is assumed that a moment will come when they reach the level of human intelligence. The self-learning ability of artificial intelligence would then enable an optimization that no longer requires the intervention of a human programmer.
The question is not when this victory will take place, or on what principle such a prediction could be made, or even whether it is desirable, but rather what kind of intelligence we are talking about, because there may be a mistake from the very beginning. It may be that there is no rivalry, competition, or threat of substitution, because in the end these are two different intelligences. Artificial intelligence simulates only some specific aspects of human intelligence; it does not perform all the tasks of human intelligence, which is not only calculation and speed but also understanding and reflection. The heralds of the coming sorpasso do not speak of integral intelligence, but of the analytical capabilities of instrumental intelligence, from a narrowly empirical, calculating epistemology that ignores the historical context of human life. Computers are good at solving problems that are well defined, governed by strict rules, expressible in mathematical terms, and workable with logical and statistical criteria. The superiority of machines is evident in calculations such as orientation by reference to the corresponding satellites, or in games based not on breaking the rules but on applying them correctly. Making a joke, understanding a simple thought, or metaphorically combining different semantic domains, however, requires capabilities other than second-order predicate logic. A critique of algorithmic reason would start from this limitation of the field of validity of the computable.
The human intelligence supposedly challenged by artificial intelligence is not just implementation or effectiveness in achieving certain objectives, but the reflection that identifies which objectives are worth achieving. Our concept of intelligence goes beyond instrumental function; it is not so much the achievement of objectives as their choice, in a meaningful and balanced way, in a world of great complexity in which conflicting objectives must be weighed. And where, for that matter, are the emotional, social, or moral dimensions that we consider constitutive of our intelligence? An intelligence reduced to a few of its instrumental or arithmetic advantages cannot be compared to that of humans.
If we limit the notion of intelligence in this way (so as to stay in a world where it is true that machines beat us in many respects), "superintelligence," if it ever exists, will be rather stupid. The question is not so much whether machines can think as whether they are able to understand relations of meaning. Sense is not something obtained in the form of a pattern, because it includes ambivalences, gray areas, and paradoxes. Meaning understood in this way can only be reconstructed practically, that is, with attributes such as empathy, embodiment, and situatedness in the world. Many people have shortcomings in these capacities, and machines face no hard and absolute limits to their computing power, yet nothing indicates that technologies can implement these properties. Artificial intelligence can translate texts, make medical diagnoses, and mimic patterns of human behavior, but without really understanding any of it. Someone might object that this matters little as long as it comes up with the right solutions. The problem is that people need to understand problems in order to solve them. Is it possible to be intelligent without knowing it, like a zombie who can perform intelligent human tasks, but only reflexively? Artificial intelligence today is a so-called intelligent system; it is content to learn a function, but it does not reflect. It has a reflex intelligence, not a reflective one. And that does not correspond to the idea we have of intelligence.
We could conclude with an analogy drawn from the development of technology and the world of work. The mechanization that reduced the amount of work done through physical exertion promoted an institution that is now central to our society: sport as free effort, pursued for its own sake rather than in the service of productive activity. Perhaps we are at a historic juncture where we should develop mental "fitness rooms" to enhance our intellectual abilities. Instead of a new chapter in artificial intelligence, we would face a new challenge for human intelligence.
Source: La Verdad