How artificial intelligence will change our lives


Many see it as the promise of a new industrial revolution that will overturn our relationship to work and knowledge; others see in it the first signs of a "superhuman" intelligence. Over the past ten years, artificial intelligence has experienced a renaissance: what are its causes, its players and its limits? This article, published in the latest issue of "Carnets de science", offers some answers.

In less than a decade, computers, which had been content to replace our office equipment, communication devices, graphics tools, media players and editors, and entertainment devices, have acquired capacities long thought to be reserved for the human brain: beating poker champions, translating texts correctly, recognizing faces, driving cars, holding an (almost) sensible conversation, even anticipating our desires. Real and dazzling progress that could herald the long-postponed golden age of artificial intelligence (AI), a discipline whose disappointments had seemed to match the hopes it raised. The Gafam, whose valuation is intimately linked to this progress, now invest massively in artificial intelligence laboratories. They promise us that machines will soon be able not only to sort, analyze and interpret, as well as we can, the masses of data that we produce continuously, but also to help us make decisions, whether choosing a partner or a piece of music, or even to act on our behalf as robots or autonomous vehicles.

But where exactly does this sudden and sometimes disturbing burst of intelligence from machines come from? “The practical issues of current AI are the same as those defined sixty years ago: it involves simulating and articulating tasks of learning, perception and classification, reasoning, decision-making and, finally, action,” tempers Sébastien Konieczny, researcher at the Lens IT Research Center.1 “Most of the current results are due to the recent development of learning and classification methods associated with deep learning. There has been a technical breakthrough whose results are indisputable and spectacular. But the algorithms on which these advances are based - statistical modeling, neural networks, machine learning, etc. - were developed some thirty years ago.”

Indeed, in August 1955, on the initiative of John McCarthy, a mathematician specializing in Turing machines, Claude Shannon, inventor of information theory, Marvin Minsky, future co-founder of the artificial intelligence laboratory at the Massachusetts Institute of Technology (MIT), and Nathaniel Rochester, creator of the IBM 701 - the first general-purpose commercial computer - used the term "artificial intelligence" for the first time in an appeal to the scientific community. The main practical objectives of AI were laid out: autonomous robots, comprehension and translation of writing and speech, artificial vision, decision support, and the solving of mathematical problems.

Our goal is for machines to perform, better than we do, tasks once considered intelligent.

In the early days, starting from the observation that the computer is above all a symbol-manipulation system, researchers tried to model and emulate intelligence through the notions of symbols and internal representations of the world. Drawing on advances in mathematical logic and linguistics, this symbolic approach remained dominant in artificial intelligence until the 2000s.

It resulted in particular in the first expert systems for medical diagnosis and the first game-playing programs - Deep Blue's victory over world chess champion Garry Kasparov was its apotheosis - as well as some unconvincing attempts at natural language processing.

From symbolic methods to numerical methods

One of the big drawbacks of symbolic approaches is that they handle noisy data very poorly: there must be no errors in the data presented to the system. This learning problem has been addressed, and partially resolved, by another stream of AI research known as connectionism. Connectionism takes the biological brain as its model and attempts to reproduce some of its faculties, notably vision, by numerically simulating the behavior of formal neural networks. Drawing inspiration from the hierarchical, layered structure of neurons in the human visual cortex, researchers learned how to train a neural network to recognize the objects in an image thanks to the error back-propagation algorithm: a numerical method that improves the system's recognition rate by adjusting, after each classification error, the weights of the connections between the neurons of the network.
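To give an idea of what this adjustment looks like in practice, here is a minimal sketch of back-propagation in Python with NumPy: a toy two-layer network trained on synthetic data. The network size, learning rate and data are illustrative assumptions, not the historical systems discussed in the article.

```python
# Minimal sketch of error back-propagation on a toy task (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 points in 2D, labelled 1 if they fall inside the unit circle.
X = rng.uniform(-1.5, 1.5, size=(200, 2))
y = (np.sum(X**2, axis=1) < 1.0).astype(float).reshape(-1, 1)

# Network parameters: 2 inputs -> 16 hidden units -> 1 output.
W1 = rng.normal(0, 0.5, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for epoch in range(5000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # predicted probability of class 1

    # Backward pass: propagate the classification error back through the layers.
    err_out = out - y                          # error at the output
    grad_W2 = h.T @ err_out / len(X)
    grad_b2 = err_out.mean(axis=0)
    err_h = (err_out @ W2.T) * h * (1 - h)     # error propagated to the hidden layer
    grad_W1 = X.T @ err_h / len(X)
    grad_b1 = err_h.mean(axis=0)

    # Adjust the connection weights in proportion to the error they caused.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

accuracy = ((out > 0.5) == (y > 0.5)).mean()
print(f"training accuracy after back-propagation: {accuracy:.2%}")
```

Each pass over the data nudges the weights in the direction that reduces the classification error, which is the core idea behind the much larger deep networks described below.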

Yann LeCun, now director of Facebook's artificial intelligence laboratory, demonstrated this by developing, in 1989, a device for recognizing handwritten postal codes based on deep learning: as long as we can present a large enough sample of examples to a neural network, it ends up matching the performance of a human being, at least in theory...2

The problem with these devices is that once trained, they work like a black box.

“We have known since the 1980s that these methods worked; they simply required considerable computing power and quantities of data that long remained out of our reach. Everything changed in the mid-2000s: with the increase in processor capacity, it became possible to perform, in a reasonable time, the calculations required by deep-learning algorithms,” explains Jérôme Lang, research director at the Laboratory for Analysis and Modeling of Systems for Decision Support3 and 2017 CNRS Silver Medalist. “The other major change is the Internet and the exponential increase in the amount of data it has made available. For example, to train a visual recognition system, you have to present it with as many annotated photos as possible: where it once took painstaking work to build samples of a few hundred photos indexed by students, today we can fairly easily access hundreds of millions of photographs annotated by Internet users via 'Captchas'.”

The unreasonable effectiveness of data

The most spectacular results of artificial intelligence, those that no specialist dared hope for in the early 2000s, have been obtained in the fields of visual recognition - from face recognition to scene analysis -, voice and music recognition, and natural language processing - from translation to automatic meaning extraction. What these systems have in common is that their "intelligence", or at least their performance, increases with the size of the corpus on which they were trained. The problem with these devices is that, once trained, they work like a black box: they give good results but, unlike symbolic systems, give no indication of the "reasoning" they followed to succeed. A phenomenon that three Google researchers dubbed, in 2009, "the unreasonable effectiveness of data".

The algorithms on which recent progress is based were developed some thirty years ago.

This dependence on data explains the decisive advantage that the Gafam have taken in these fields, as well as the frenzy with which they extract data from us in exchange for promises of intelligence. Some technology gurus even extrapolate from it the imminent emergence of an artificial consciousness: a superhuman intelligence made of machines that will invent and create by themselves, for us... and possibly against us.

“These 'predictions' are made by people who are far removed from research, or who have something to sell,” stresses Sébastien Konieczny. “AI researchers have never claimed to recreate intelligence - which would first require an operational definition. Our goal is for machines to perform, better than we do, tasks considered intelligent because until now only we were able to carry them out, such as classifying thousands of photos or pieces of music. The only thing we master today is goal-oriented tasks; a machine capable of defining its own goals is something we do not know how to build. So consciousness...”
