Quantum AI - Part 3 - The rise of the AIs

Previously in the Quantum AI column, we saw that quantum computing could, in theory, significantly increase the execution speed of some machine learning algorithms. This is due to the phenomena at the core of such computers (superposition of states, qubit entanglement, decoherence of superposed states, immediate wave function collapse), which enable parallel calculations by storing several superposed pieces of information within a single particle. By considerably speeding up algorithmic processing, quantum computing would let AI solve new problems. Some people go even further and say it could enable the design of an artificial brain as efficient as a human’s, notably one able to integrate abstract concepts such as consciousness and feelings. The question that then arises is whether quantum AI would be more likely to become “strong AI” (artificial general intelligence)…

First of all, what is human intelligence?

Before looking at the complementarity between quantum AI and strong AI, let’s start by delimiting the notion of human intelligence (HI). To give one definition, we could say that HI is a set of cognitive abilities that enable humans to learn, understand, communicate, think and act in a relevant, rational, logical or emotional way, depending on the situation.

This raises the question of what human intelligence is made up of. Several models of intelligence exist, such as the Cattell–Horn–Carroll (CHC) theory. Although there is not yet a consensus on this question, the cognitive skills related to human intelligence can be classified into a few categories (somewhat re-aggregated here), such as:
  • Logical and quantitative intelligence: capacity for abstraction and formalization, rationality and reasoning; 
  • Literary and philosophical intelligence: language, reading, writing, structuring of thought and ideas; 
  • Social and emotional intelligence: introspection, the ability to interact with others, the ability to understand, perceive, feel and express emotions; 
  • Reactive and psychomotor intelligence: attention, coordination, parallel processing of multiple stimuli, mental endurance; 
  • Perceptive and artistic intelligence: visual, auditory, spatial, temporal, kinesthetic representations…; 
  • Memory intelligence: use of short-, medium- or long-term memory depending on the task. 

Obviously, these facets of intelligence overlap and can be reorganized differently. Moreover, some are subtly correlated, especially with memory. This is why theoretical models tend to distinguish so-called “fluid” intelligence from so-called “crystallized” intelligence, in order to differentiate mental processing that does or does not rely on learning and on knowledge permanently held in memory. There is a cold analogy with computers, which can use information permanently stored on hard disks, or the faster-to-access information temporarily held in RAM, or even in cache.

But above all, scientists like Spearman (the father of the eponymous correlation coefficient) have long sought to demonstrate the existence of a “g factor” that would consolidate all these abilities into a single form of general intelligence. To date, however, there is no real consensus on any model.

Then, how do you define artificial intelligence?

Artificial intelligence, on the other hand, is a set of sciences and techniques aimed at designing stand-alone machines with learning and reasoning abilities similar to those of humans. Be careful: this is an analogy with human intelligence rather than a crude technical clone of it. Among the sciences underlying AI are machine learning (including deep learning, which has characterized the rise of AI since 2012), followed by natural language processing, neuroscience, robotics, etc.

However, the term AI has very different definitions depending on the industry. On this subject, I invite you to read these articles, which explain the difference between machine learning AI and AI in video games.

In the business world, machine learning (or even deep learning) algorithms have made it possible to create extremely efficient programs – in terms of precision and speed – for certain kinds of processing. This is how three types of AI (here we could rather call them algorithms) emerged:
  • Vertical or specialist: highly efficient in the execution of a task but not very adaptable to different tasks; 
  • Horizontal or generalist: reasonably efficient in the execution of a task but above all adaptable to different tasks (for example through transfer learning mechanisms, sketched just after this list); 
  • Versatile: superior to humans in the execution of several tasks and adaptable. 
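
To make the “horizontal” idea concrete, here is a minimal transfer learning sketch. It assumes Python with PyTorch and torchvision (my illustrative choice; the article does not prescribe any framework): a network pretrained on a generic task is reused, and only a thin layer is retrained for a new, hypothetical task.

```python
# Hedged sketch: transfer learning with PyTorch/torchvision (illustrative
# framework choice). A backbone pretrained on ImageNet is reused and only
# its final layer is retrained for a new, hypothetical 5-class task.
import torch
import torch.nn as nn
from torchvision import models

# Load a "generalist" feature extractor pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights: the general visual features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace only the classification head for the new task (5 classes here).
model.fc = nn.Linear(model.fc.in_features, 5)

# Train just the new head; everything else is transferred, not relearned.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

The point is economy: instead of learning a task from scratch, the model transfers what it already knows and adapts a small head on top of it.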

Today, it has been possible to implement high-performance vertical AI (for example, medical diagnosis from radiographs) and, to some extent, horizontal AI in the fields of vision, language and games. But it is still very difficult to design good generalist AIs (let alone versatile ones). Just look at the AIs that drive autonomous vehicles: it is hard to imagine them performing stock market analyses to steer corporate investments, whereas human financial executives generally have no problem driving a car!

In addition, what is important to remember is that all of these AIs are said to be “weak” (including versatile AIs): they may be superior to humans in performing specific tasks, but they lack inherently human abilities such as: 
  • Self-awareness and the capacity for introspection; 
  • The ability to make unlearned decisions or to experience unprogrammed emotions (assuming emotions are even programmable); 
  • Motivation, curiosity and the search for meaning in life. 

An AI that acquired such human abilities would qualify as artificial general intelligence (AGI), or more simply “strong AI”, or even “complete AI”.

A definition of strong AI

Thus, strong AI could be defined as versatile AI endowed with consciousness, capable of feeling, desiring, judging and deciding with complete autonomy. Having reached such a level of sophistication, these strong AIs could, for example, be able to update the very code that designed them, in order to adapt to their environment or to their new aspirations.


Many people fear this strong AI – starting with Elon Musk, whose alarmist outbursts have made the rounds of social networks several times. Why? Because if such an AI were to try to emancipate itself or “go off the rails”, as potentially any human might in an extreme situation, the consequences could be disastrous: its execution capabilities would be disproportionately greater than those of the human brain and, above all, not restricted by the imperfections of our limiting physical body. That day, when AI definitively escapes human control, seems worthy of dystopian novels and films such as The Matrix, Terminator, Transcendence or I, Robot. But is this singularity even possible? It is difficult to give a date for the birth of strong AI, since the obstacles standing in the way of its design are multiple and perhaps irresolvable.

Strong AI vs. the materialist understanding of thought

First of all, one would need to determine whether human intelligence – as defined above –, emotions and sensory perceptions are reducible to purely material processes built from electronic components. This is far from obvious, even for the most agnostic among us. Who has never been fascinated by our body’s capacity to produce unsuspected reminiscences from the mere perception of a smell or a piece of music, often linked to a memory with strong affect? Is it therefore possible to reconstruct such a psychic and sensory process with a set of transistors and deterministic, computational logic gates?

Even without having all the evidence, it may be tempting to answer in the negative, because neural biophysics is inherently different from computer electronics. But to answer in more detail, one would need a full understanding of the general mechanics of the brain. From this arises a new question: however surprising the latest discoveries in neuroscience may be, does the human brain even have at its disposal all the elements of intelligence needed to understand its own functioning with certainty? Mathematicians will certainly see here a reference to Gödel’s incompleteness theorem.

More concretely – not being comfortable with applying this theorem to the functioning of the mind –, I wonder here about the physical limitations of the brain that would deprive us of the insights necessary for its own understanding. For example, if we try to imagine another color perceptible to the human eye (outside the visible spectrum), another state of matter (different from solid, liquid, gas or plasma), or even a mental representation of a fourth spatial dimension, we quickly realize that we are incapable of it. This does not mean that these objects inaccessible to our mind do not exist, but rather that our brain cannot imagine perceptions that are foreign to its own physical structure. This is also the whole problem of qualia (the subjective sensations triggered by the perception of something) and their links with consciousness. We naturally come to wonder whether the brain is not the seat of phenomena other than neural electricity and the biochemistry of synapses…

The use of quantum AI to reach artificial consciousness

Even without taking these metaphysical considerations into account, it is clear that a Turing machine – the computational model on which traditional algorithms are based –, although able to simulate many things, may never intrinsically be able to feel, perceive and reason like a human being. This is why, to bypass the barriers of determinism and of the perception of reality, and to achieve artificial consciousness, some would be tempted to call upon two recent areas of fundamental research:
  • The first, which I will not detail here, is that of what we could call hybrid neural networks (artificial and biological!), which combine electronic circuits and living cells, following the example of the company Koniku; 
  • The second is of course quantum computing, whose fundamental characteristic is to leverage the properties of particles to carry out intrinsically parallel operations whose results are no longer deterministic but probabilistic (a minimal simulation follows the next paragraph). 

Why this choice? Because some scientists, such as Penrose and Hameroff, have hypothesized that quantum phenomena (superposition and entanglement) take place in our brain through the spins (intrinsic magnetic moments) of proteins present in neuronal microtubules, which would behave like the qubits of a quantum computer. For more details on these phenomena, I refer you to the second article of this quantum AI column.
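
To make these notions tangible, here is a minimal NumPy sketch (my illustrative choice; no quantum hardware or dedicated SDK assumed) of the two phenomena mentioned above: a Hadamard gate puts a qubit in superposition, a CNOT gate entangles it with a second one, and measurement outcomes are drawn probabilistically from the squared amplitudes.

```python
# Minimal sketch, NumPy only: superposition (Hadamard) and entanglement
# (CNOT) on two qubits, with probabilistic measurement of the result.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)                                 # identity on one qubit
CNOT = np.array([[1, 0, 0, 0],                 # CNOT, control = qubit 0
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, superpose qubit 0, then entangle qubit 1 with it.
state = np.array([1, 0, 0, 0], dtype=complex)  # |00>
state = np.kron(H, I2) @ state                 # (|00> + |10>) / sqrt(2)
state = CNOT @ state                           # (|00> + |11>) / sqrt(2)

# Measurement: outcomes sampled from squared amplitudes (Born rule).
probs = np.abs(state) ** 2                     # [0.5, 0, 0, 0.5]
print(np.random.choice(["00", "01", "10", "11"], size=10, p=probs))
```

Measuring this state can only ever return “00” or “11”, each with probability 1/2: the two qubits are correlated even though neither individual outcome is determined in advance.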

Technical digression
It may indeed seem natural to use Dirac’s formalism (the algebraic foundation of quantum mechanics modeling) and qubits to describe the evolution of these brain microsystems. There is a similar approach in the field of atomistics, where molecular orbitals are described with precision using the same formalism. It comes down to using quantum computing to simulate quantum physics – a bit like using traditional computing to simulate processor electronics.
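
For readers who have not seen this formalism, the state of a single qubit is written in Dirac notation as a weighted superposition of two basis states (a standard textbook expression, nothing specific to this article):

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

where the complex amplitudes α and β encode both superposed values at once, and their squared moduli give the probabilities of measuring 0 or 1.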

Nevertheless, the reception given to this theory was very mixed, for the simple reason that, apart from photons (the constituents of light), there is almost no physical system at room temperature capable of making quantum phenomena last long enough for them to have any impact on neuronal functioning (we speak of the decoherence time, generally well below a billionth of a millisecond, i.e. a picosecond, for natural molecular aggregates).

So even if quantum computing allowed vertical AI to solve problems that are intractable today, it does not seem capable of adding new properties attributable only to the biological brain, whose quantum nature is itself uncertain. In other words, quantum can make AI progress along the performance axis, but not along the adaptability axis nor, a priori, along the consciousness axis. Quantum or not, the prospect of artificial thinking still seems far away.

Technical digression
However, I still have one favorable reservation about the possibility that quantum mechanics may one day make it possible to model mechanisms such as subjective reality or consciousness. Indeed, certain phenomena, like quantum counterfactuality (cf. the Mach-Zehnder interferometer or the Elitzur-Vaidman bomb tester), could never have been understood without quantum mechanics.

In brief, counterfactuality describes the fact that a particle “tests” all possible paths (without actually taking them; we are speaking here of its associated probability wave) before actually “choosing” one with a given probability. In other words, a phenomenon that could take place on one of the particle’s paths, whether it occurs or not, betrays its potential occurrence by probabilistically interfering with the results of measurements on the other possible paths. It is as if all the possible futures of an action interfered with one another, and only the future with the best resulting probabilities emerged into the present when the wave functions collapse.
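
For the curious, here is a minimal NumPy sketch of the Elitzur-Vaidman setup (my illustrative reconstruction from the standard textbook description, not the author’s code): a photon crosses a Mach-Zehnder interferometer whose two arms recombine on a second beam splitter, and a bomb placed in one arm acts as a which-path measurement.

```python
# Hedged sketch, NumPy only: the Elitzur-Vaidman bomb tester on a
# Mach-Zehnder interferometer (standard textbook amplitudes).
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50/50 beam splitter
photon = np.array([1, 0], dtype=complex)        # photon enters in arm 0

# Bomb absent: the paths interfere; the "dark" detector (arm 0) never fires.
out = BS @ (BS @ photon)
print(np.abs(out) ** 2)                         # [0., 1.]

# Bomb present in arm 1: it measures which path the photon took.
mid = BS @ photon                               # amplitudes on the two arms
p_boom = np.abs(mid[1]) ** 2                    # 0.5 -> photon hit the bomb
survivor = np.array([1, 0], dtype=complex)      # collapsed onto arm 0
out = BS @ survivor
p_dark = (1 - p_boom) * np.abs(out[0]) ** 2     # 0.25

print(p_boom, p_dark)  # 0.5, 0.25: one run in four detects the live bomb
                       # at the dark port without the photon touching it
```

The dark detector clicking is precisely the counterfactual event: it only happens because the bomb could have absorbed the photon on the other arm, even when it did not.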

This surprising phenomenon testifies to the fact that we have found nothing better than quantum mechanics to accurately describe certain counter-intuitive microscopic experiments; even if reality is quite different, quantum mechanics appears to be the best tool for describing what we observe (I refer here to the Copenhagen school of thought). I tell myself that it is perhaps the same with consciousness, which only a theory as abstract as quantum mechanics could describe. In any case, if there really were such an opportunity to model subtle physical processes with a quantum-like formalism (which is above all a tool), then the birth of strong AI would be back in play. But today, quantum mechanics does not seem to help us on the path to consciousness, whose quantum nature is itself uncertain.

And if strong AI existed theoretically, would it be possible to create it for real?

But let's assume that it is possible, that we can create a strong AI. Would we empirically have the talent and the power required to build this AI before the end of mankind? The question does indeed arise. Leaving aside the conceptual complexity of such a program, we can look at two important metrics: time and energy.

Regarding time, it is above all a question of when we will finally have robust enough knowledge of the mechanisms of consciousness to actually design an artificial mind, assuming humanity has no vital issues to solve as a priority before then. As for energy, IBM gives some orders of magnitude for supercomputer consumption: it would take no less than 12 gigawatts to power a single computer with as many artificial neurons as the human brain (about 80 billion). This is roughly equivalent to the power delivered by some fifteen French nuclear reactors (a quarter of the country’s reactors!). From this point of view, the human being, consuming only 20 watts, is incomparably more economical… And do not expect quantum computing to optimize energy: to preserve the coherence of qubit superposition/entanglement (required to speed up algorithmic processing), the temperature of the computers must be lowered to near absolute zero (-273.15 °C). The energy required is colossal.
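
As a quick back-of-the-envelope check of these orders of magnitude (the figure of roughly 900 MW for a typical French reactor is my assumption; the other numbers come from the article):

```python
# Back-of-the-envelope check of the article's orders of magnitude.
supercomputer_w = 12e9   # 12 GW for a brain-scale machine (IBM estimate)
brain_w = 20.0           # ~20 W for the human brain
reactor_w = 0.9e9        # ~900 MW: assumed output of a typical French reactor

print(supercomputer_w / reactor_w)  # ~13 reactors, the "some fifteen" above
print(supercomputer_w / brain_w)    # 6e8: the brain is ~600 million x thriftier
```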

So, if we assume that creating an artificial consciousness will require as much energy as these supercomputers, or even more – quantum or not –, it seems unthinkable to devote so many resources to developing such an AI, however strong it may be. And even if we did, we would have created an artificial conscious brain resembling a cold data center, in no way a humanoid walking down the street (otherwise, I’ll let you imagine the size of the battery powering such a cyborg’s brain). Even though current research focuses on making neural networks more efficient, we will probably face an industrial dead end when it comes to making AI both strong and mobile, assuming this even holds anthropological interest.

In summary, given our ignorance of the mechanisms of consciousness, quantum AI will probably be no more of a “strong AI” than conventional AI. The question of the usefulness of a strong AI may still arise, but we will likely put more effort into making vertical AI lighter and less energy-consuming, especially when the latter is partially quantum.

____

This article is part of a column dedicated to quantum AI. Find all the posts on the same theme:

Part 1 – Ending impotence!
Part 2 – The dice have been cast!
Part 3 – Rise of the AIs
Part 4 – The key role of DataOps

A. Augey
