
Machine learning, creation, discovery... intelligence?

Carlo A. Trugenberger, 31.3.2023

Artificial intelligence, after its heyday in the 1980s, has been a renewed hot topic for experts since the deep learning revolution of the early 2010s. Since the disruption caused by generative chatbots late last year, it has been on everyone's lips.

Since the word “intelligence” is the subject of heated debate, though, let us first focus on the less controversial concept of machine learning. Machine learning techniques are broadly divided into two categories, supervised and unsupervised. The aim of supervised algorithms is to learn a function that maps inputs to outputs based on available labelled examples of this function. This can be done with discriminative or generative techniques but, in all cases, there is a human teacher who knows the function, at least on the training set. The trained software is thus nothing more than a supporting tool for applying a known function to data sets too large for humans to handle.
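To make this concrete, here is a minimal sketch of supervised learning in Python, using scikit-learn's k-nearest-neighbours classifier; the tiny data set and its labels are invented purely for illustration:

```python
# Minimal supervised-learning sketch: a human "teacher" supplies labelled
# examples of the input-output function; the algorithm merely interpolates it.
# The data below are made up for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Labelled training examples: (height_cm, weight_kg) -> species label
X_train = [[30, 4], [32, 5], [95, 60], [100, 70]]
y_train = ["cat", "cat", "dog", "dog"]   # the "teacher" knows the answers

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)              # learn the mapping from examples

# The trained model just applies the learned function to new, unseen inputs.
print(model.predict([[33, 5], [90, 55]]))  # -> ['cat' 'dog']
```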

Unsupervised learning is essentially pattern recognition. These algorithms try to identify relevant patterns within huge sets of unlabelled and unclassified data on their own, with no human guidance. The task itself is not simple at all. On top of this, two questions immediately arise, which further complicate the issue (a sketch of the pattern-finding step itself follows the list):

  • What should be done with the identified patterns?
  • Are all patterns admissible?
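Before turning to these questions, here is a minimal sketch of what identifying patterns with no human guidance can look like in practice, using k-means clustering on invented, unlabelled data points (one unsupervised technique among many):

```python
# Minimal unsupervised-learning sketch: no labels, no teacher; the algorithm
# must find structure (here: clusters) in the data by itself.
# The data are made up for illustration.
from sklearn.cluster import KMeans

# Unlabelled data points; nobody tells the algorithm what they "mean".
X = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
     [8.0, 8.2], [7.9, 8.1], [8.1, 7.9]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: two patterns were "discovered"

# The two open questions remain: what should be done with these patterns,
# and should every pattern the algorithm finds be accepted?
```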

These questions are not independent, of course: the answer to the second largely depends on the answer to the first. And forgetting to properly address the second question is what causes many problems, like the failures experienced by the present generation of generative chatbots. Let us consider music-generating algorithms as a first example. In this case, the task is to produce new music pieces that closely resemble one, or a set of, patterns previously identified in the listening phase. There are not many restrictions on the admissible patterns here; music (and art in general) is quite subjective, and one can easily find an enthusiast for works that others may find less appealing.
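The pattern-then-generate principle can be caricatured in a few lines: a first-order Markov chain that “listens” to a melody and then emits new sequences with similar local patterns. Real music generators are vastly more sophisticated; the note sequence below is made up:

```python
# Toy generative sketch: learn transition patterns from "listened" material,
# then emit new sequences that statistically resemble it.
import random
from collections import defaultdict

melody = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]

# Listening phase: record which note tends to follow which.
transitions = defaultdict(list)
for cur, nxt in zip(melody, melody[1:]):
    transitions[cur].append(nxt)

# Generation phase: random-walk through the learned transitions.
random.seed(1)
note, piece = "C", ["C"]
for _ in range(10):
    note = random.choice(transitions[note])
    piece.append(note)
print(" ".join(piece))  # a new sequence resembling the learned pattern
```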

The same procedure is blindly applied to language in chatbots, unfortunately. These are trained to recognize patterns in huge quantities of text and are then asked to answer new questions with similar patterns. This, of course, leads to disasters, as in the following exchange.

Question: what happens if I hold a piece of paper horizontally with two hands and I let go of one hand?

ChatGPT answer: if you were holding a piece of paper horizontally with two hands and let go of one hand, the paper would likely tilt or rotate in the direction of the hand you let go of. This is because the paper is no longer being held evenly on both sides, and the weight distribution is now uneven. If the paper is a light weight, it will drift to the side where you let go of and fall to the ground. If the paper has a little more weight or if you have a strong grip, it will stay in the air for a moment before falling down.

This failure could have been anticipated. The point is that language, contrary to music, is subject to a huge number of constraints: those of meaning or, in simpler words, of “making sense”. Most identified patterns, like most possible sentences, must be rejected, simply because they lack any sense. What is missing in present generative chatbots are the constraints imposed by semantics.

There are two ways to satisfy constraints on the admissible patterns. Either the system must learn them by brute force in a large number of interactions with several instances of itself, with a positive reward (essentially life vs. death) for remaining within the permitted bounds, or they must be hard-wired by humans. The first way is evolution; the second is a pragmatic shortcut.

The evolutionary path is what was followed by the creators of AlphaGo Zero, the 2017 successor of the program that beat one of the world's best Go players in 2016. The program initially knew only the rules; it then played against itself for many days, using reinforcement learning to hone its skills, until it became the world's best player. What is most interesting is that, in its learning process, AlphaGo Zero developed new, very effective strategies that were unknown and surprising to the best human players. This is undeniably a process of “discovery” and, correspondingly, it may be argued that the program is indeed “intelligent”, albeit in a specific and limited domain. Personally, I fully agree with this point.
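AlphaGo Zero's actual machinery (deep neural networks plus Monte Carlo tree search) cannot be reproduced in a few lines, but the self-play principle it rests on can be sketched with tabular Q-learning on the much simpler game of Nim: the program starts from the rules alone, plays against itself, and is rewarded only for winning. All parameters below (pile size, learning rate, exploration rate, episode count) are arbitrary illustrative choices:

```python
# Self-play sketch in miniature: learning Nim (take 1-3 objects from a pile;
# whoever takes the last one wins) from the rules alone, via self-play.
import random

PILE, ACTIONS = 10, (1, 2, 3)
Q = {}  # Q[(pile, action)] = learned value of making that move

def best_action(pile, eps):
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < eps:                  # explore occasionally
        return random.choice(legal)
    return max(legal, key=lambda a: Q.get((pile, a), 0.0))

random.seed(0)
for episode in range(20000):
    pile, history = PILE, []                   # both "selves" share one policy
    while pile > 0:
        a = best_action(pile, eps=0.2)
        history.append((pile, a))
        pile -= a
    reward = 1.0                               # the player who moved last won
    for state_action in reversed(history):
        old = Q.get(state_action, 0.0)
        Q[state_action] = old + 0.1 * (reward - old)
        reward = -reward                       # alternate moves, alternate sides

# Nim theory: the winning move leaves a multiple of 4 behind.
# From piles 4 and 8 every move loses against perfect play, so no tag appears.
for pile in range(1, PILE + 1):
    a = best_action(pile, eps=0.0)
    tag = "(optimal)" if (pile - a) % 4 == 0 else ""
    print(f"pile {pile:2d}: take {a} {tag}")
```

Like AlphaGo Zero in miniature, the program is never shown a single human game; the winning strategy emerges purely from reward-driven self-play.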

While the game of Go is extremely complex in its strategic development, its basic rules are quite simple. The same cannot be said about language, unfortunately. The complexity of semantics, the set of “rules” governing meaning in human communication, is humongous. This is already true of lexical semantics, which governs the meanings of words and their relations. But things get really out of control with logical semantics, concerned with concepts such as reference, presupposition and implication. It could even be surmised that an algorithm which learns semantics from scratch in interactions with instances of itself would retrace human evolution and acquire general intelligence, i.e., consciousness.

Unfortunately, I believe this is still far beyond the horizon. A more pragmatic, albeit less elegant, approach is to hard-wire the rules of lexical semantics into an algorithm that combines a linguistic approach with pattern-recognition machine learning methods. This is what has been done by InfoCodex Semantic Technologies. Since this software is enabled for actual content recognition, it is capable of discovery, which it has demonstrated by finding new biomarkers for diabetes solely by text mining a large quantity of biomedical research. Moreover, it can summarize large documents on any subject without specific training, a task which is outside the scope of chatbots based exclusively on probabilistic language models.
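InfoCodex's actual technology is proprietary, so the following is only a cartoon of the general idea of hard-wired lexical semantics: map surface words onto hand-built canonical concepts, and let the statistical matching operate on concepts rather than on raw strings. The mini-lexicon and the documents are invented:

```python
# Cartoon of hard-wired lexical semantics + statistical pattern matching.
# NOT InfoCodex's actual method; the mini-lexicon is made up for illustration.
from collections import Counter

# Hand-wired lexicon: surface words -> canonical concepts.
LEXICON = {
    "glucose": "BLOOD_SUGAR", "glycemia": "BLOOD_SUGAR",
    "insulin": "INSULIN", "car": "VEHICLE", "automobile": "VEHICLE",
}

def concepts(text):
    return Counter(LEXICON[w] for w in text.lower().split() if w in LEXICON)

def overlap(a, b):
    """Crude similarity: shared concept mass between two documents."""
    return sum((concepts(a) & concepts(b)).values())

# Different words, same meaning: matching on concepts instead of strings
# lets the pattern recognition "see through" the vocabulary.
print(overlap("glucose levels and insulin response",
              "glycemia modulates insulin secretion"))   # 2
print(overlap("glucose levels and insulin response",
              "the automobile industry"))                # 0
```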

Another difficult question is whether a machine learning algorithm can acquire the capability for independent scientific discovery. Here too, there are two levels. In mathematics, a system of axioms is a circumscribed set of rules, somewhat similar in nature to the rules of a game like Go. And indeed, once these rules are programmed, automated reasoning and deduction algorithms can prove new theorems, exactly as AlphaGo Zero can find new strategies.
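A minimal sketch of this kind of mechanical deduction: a forward-chaining engine that, given axioms and inference rules, derives every reachable “theorem” by applying the rules until nothing new appears. The toy axioms and rules are invented; serious theorem provers are vastly more powerful, but the principle is the same:

```python
# Minimal forward-chaining deduction: starting from axioms, mechanically
# apply inference rules until a fixed point is reached.
axioms = {"A", "B"}
rules = [                       # (premises, conclusion): premises entail conclusion
    ({"A", "B"}, "C"),
    ({"C"}, "D"),
    ({"D", "A"}, "E"),
]

known = set(axioms)
changed = True
while changed:                  # keep applying rules until nothing new appears
    changed = False
    for premises, conclusion in rules:
        if premises <= known and conclusion not in known:
            known.add(conclusion)
            changed = True

print(sorted(known))            # ['A', 'B', 'C', 'D', 'E'] -- derived "theorems"
```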

However, suppose that an algorithm is shown a huge quantity of data on the distribution of galaxies in the universe and asked to detect patterns in these data. Can it then infer from these patterns a new “law” of galaxy formation? Of course not! This is as impossible as it is for chatbots to discover new knowledge from text patterns. In this case, what is missing is the entire status quo of scientific knowledge accumulated over the history of humanity. This is infinitely more complex than the simple rules of an axiomatic system, a complexity comparable, if not superior, to the logical semantics of human communication. Can an algorithm learn this knowledge through accelerated interactions with instances of itself, coupled with some form of reward? Here too, I fear this is not yet on the horizon. And unfortunately, in this case the shortcut of hard-wiring at least a subset of the needed concepts is also not feasible: scientific discoveries often stem from unsuspected correlations and similarities between very distant fields.

The debate about the possible “intelligence” of machines can sometimes be virulent. It seems to me, however, that both sides just entrench themselves in unproductive dogmatic positions. Programs like AlphaGo Zero undeniably show some form of “intelligence”, although it is evidently not complete human intelligence, and one should not read too much into it. But the ideology that machines cannot acquire human-like intelligence in principle is also just an antiquated anthropocentric dogma. After all, we acquired our own intelligence through evolution, and there is nothing in principle stopping algorithms from going through an immensely accelerated evolution in interactions with themselves. In the meantime, the shortcut of hard-wiring some semantics and combining it with machine learning offers a pragmatic way to “simulate” some intelligence and make some discoveries.
