Demonic A.I.! What are we afraid of? Solutions to the risks!

Artificial intelligence is rightly perceived as a significant opportunity, but also as an existential threat. A while ago, Elon Musk urged us to be wary of A.I., warning that developing it may be like summoning a demon we cannot control. Cosmologist Stephen Hawking has likewise warned that A.I. could mean the end of our civilisation. The dangers of A.I. are manifold.

Much has been written lately in various media about the dangers and their far-reaching consequences. Since most readers will have already read about the potential problems, here are solutions instead, from the brightest minds in A.I. research and from me.

I originally wrote this article for the German tech magazine Mobile Geeks. I mostly used (A.I.) automated translation and fixed only the major errors. That's why it may sound a little odd, but I hope the points still come across.

About the Future of Life Institute

“Future of Life Institute” sounds quite strange and unreal at first; it reminds me more of “The Hitchhiker's Guide to the Galaxy”. But the Institute, FLI for short, is serious, and earlier this year it drummed up many of the brightest minds in and around A.I. research. They discussed the potential disasters A.I. could cause and approaches to avoid them, and published the results as an open letter with an attachment. (Here is the whole letter, which you can also sign.)


Future of Life conference on the risks of A.I., January 2015
Back row, from left to right: Tom Mitchell, Seán Ó hÉigeartaigh, Huw Price, Shamil Chandaria, Jaan Tallinn, Stuart Russell, Bill Hibbard, Blaise Agüera y Arcas, Anders Sandberg, Daniel Dewey, Stuart Armstrong, Luke Muehlhauser, Tom Dietterich, Michael Osborne, James Manyika, Ajay Agrawal, Richard Mallah, Nancy Chang, Matthew Putman
Other standing, left to right: Marilyn Thompson, Rich Sutton, Alex Wissner-Gross, Sam Teller, Toby Ord, Joscha Bach, Katja Grace, Adrian Weller, Heather Roff-Perkins, Dileep George, Shane Legg, Demis Hassabis, Wendell Wallach, Charina Choi, Ilya Sutskever, Kent Walker, Cecilia Tilli, Nick Bostrom, Erik Brynjolfsson, Steve Crossan, Mustafa Suleyman, Scott Phoenix, Neil Jacobstein, Murray Shanahan, Robin Hanson, Francesca Rossi, Nate Soares, Elon Musk, Andrew McAfee, Bart Selman, Michele Reilly, Aaron VanDevender, Max Tegmark, Margaret Boden, Joshua Greene, Paul Christiano, Eliezer Yudkowsky, David Parkes, Laurent Orseau, JB Straubel, James Moor, Sean Legassick, Mason Hartman, Howie Lempel, David Vladeck, Jacob Steinhardt, Michael Vassar, Ryan Calo, Susan Young, Owain Evans, Riva-Melissa Tez, János Kramár, Geoff Anders, Vernor Vinge, Anthony Aguirre
Seated: Sam Harris, Tomaso Poggio, Marin Soljačić, Viktoriya Krakovna, Meia Chita-Tegmark
Photographer Anthony Aguirre edited himself into the picture on the left.

The Institute has also been boosted by a donation of US $10 million from Elon Musk.

 

Beneficial and Robust A.I.

The aim of the FLI is to ensure that A.I. research and applications are used for good:

The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is valuable to investigate how to reap its benefits while avoiding potential pitfalls. (FutureOfLife.org)

Much research is needed to address all these threats and to create beneficial and robust A.I. “Robust” means it is guaranteed that intelligent machines do what you want them to do. This article first covers the imminent dangers the FLI warns about, and then those that lie further in the future:

Dangers of moderately intelligent machines, a.k.a. weak A.I.

Weak A.I. begins with things like handwriting recognition and translation programs; stock-market trading computers, robot controllers and autonomous cars also belong to this category.
These systems usually solve only the tasks they were trained for, yet a large number of hazards arise from them. Here are the solutions to avoid these problems:

 

1. Laws and Ethics

• Autonomous vehicles decide over life and death in dangerous situations

Ethical discussions and new laws are needed here. We must decide in which situations the driver, and in which situations pedestrians, should be protected more.

• Autonomous weapons that kill people, with nobody taking responsibility

Whether autonomous weapons systems like these are compatible with UN human rights is doubtful. The best solution against this danger is a global, UN-wide ban. Their use is difficult to control, but a development ban would at least largely contain the danger.

• Unemployment through automation

As stated recently in my articles, innovation, education and flexibility are more important than ever for a healthy future. Social development is also necessary.

2. Verification

• Coding mistakes: who can guarantee that a robot won't crush someone's hand?

Formal, mathematical verification of A.I. program code, as it already exists in banking and aircraft control, has to be done to help avoid programming errors. This is complex but useful. With seL4, a secure, minimal, open-source (robot) OS is already available.

Modular development also helps: each part can then be tested individually. In addition, it allows a verifiable meta-level that stays in control of everything.
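
To give a rough idea of what "verifying one module at a time" can look like in practice, here is a minimal sketch using property-based testing in Python (it needs the hypothesis package). The clamp_torque function and the 5 Nm limit are purely hypothetical examples of mine, not taken from the FLI letter; real formal verification, as used for seL4, goes much further than a test like this.

```python
# A minimal sketch: check one small module against a safety property.
# clamp_torque and the 5.0 Nm limit are hypothetical examples.
from hypothesis import given, strategies as st

MAX_SAFE_TORQUE = 5.0  # Nm, assumed safe limit for a gripper joint

def clamp_torque(requested: float) -> float:
    """Never pass more torque to the actuator than the safe limit."""
    return max(-MAX_SAFE_TORQUE, min(MAX_SAFE_TORQUE, requested))

@given(st.floats(allow_nan=False, allow_infinity=False))
def test_torque_never_exceeds_limit(requested):
    # Property: for any finite input, the commanded torque stays within bounds.
    assert abs(clamp_torque(requested)) <= MAX_SAFE_TORQUE
```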

 

• Learning means making mistakes: children sometimes get themselves into dangerous situations; that should hardly ever happen with (strong) robots

Even before delivery, robots and A.I. should be able to deal with as many situations as possible.

However, every real situation is different. But if robots were trained under safe laboratory conditions with nearly all physically possible situations, in a test room with, say, a mother, children and all the usual household items, that would already help enormously.

I also think that once A.I. can extract context and connections from text, images and videos, training on Wikipedia, books, (documentary) films and other media will be extremely helpful. Even though machines might not be able to feel, "moral values" and feelings can be recognised from the statistical behaviour of people. (That would hold up a mirror to society. Oh dear.)

Should there be an international stop-word for robots in your household?

3. Validation

• Misunderstandings and unexpected results, such as a stock-market crash

Even with error-free A.I. and robots there will be many problems. The task and the working environment have to be specified for the A.I. or robot as precisely as possible, and we cannot always get that right. Also, our understanding of "good behaviour" is often just common sense and needs to be expressed in a machine-understandable form.
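
As a rough sketch of what a "machine-understandable" task specification could look like, here is a hypothetical example: a task that carries explicit constraints a planner could check before every action. All names and limits in it are assumptions of mine.

```python
# Sketch of a task specification with explicit, checkable constraints.
# All names and limits here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class TaskSpec:
    goal: str
    constraints: List[Callable[[Dict], bool]] = field(default_factory=list)

    def violated(self, world_state: Dict) -> bool:
        """True if any constraint does not hold in the given world state."""
        return not all(check(world_state) for check in self.constraints)

clean_kitchen = TaskSpec(
    goal="remove dirt from the kitchen floor",
    constraints=[
        lambda s: s.get("humans_within_1m", 0) == 0,            # keep distance to people
        lambda s: s.get("water_near_sockets", False) is False,  # no water near outlets
    ],
)

# Before each action, the planner would reject any step for which
# clean_kitchen.violated(predicted_state) is True.
```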

4. IT Security

• A.I. networks might get supplied with manipulated data

To help A.I. systems learn better and faster, they will be connected. Robot databases such as RoboEarth already exist. To ensure that only non-manipulated data is used, a trust system for A.I. systems must be established.
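
One minimal building block of such a trust system could be integrity-checking of shared records, for example with signed entries. The sketch below uses a simple HMAC and an invented record format; a real deployment would need proper key management and provenance tracking on top.

```python
# Minimal sketch: verify that a record from a shared robot database
# was signed by a trusted source before learning from it.
# The key handling and record format are simplifying assumptions.
import hashlib
import hmac
import json

TRUSTED_KEY = b"shared-secret-distributed-out-of-band"

def sign(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(TRUSTED_KEY, payload, hashlib.sha256).hexdigest()

def is_trustworthy(record: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(record), signature)

record = {"object": "mug", "grasp_force_n": 8.0}
sig = sign(record)
assert is_trustworthy(record, sig)                                  # accepted
assert not is_trustworthy({**record, "grasp_force_n": 80.0}, sig)   # manipulated data rejected
```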

General IT security, and the privacy of the people involved, should also play a big role in A.I. design from the start.

5. Human control

• Unintentional cyber wars

The first militaries already partly automate shooting back at DoS attacks, and soon the remaining 192 to 205 countries, plus non-state powers, will probably do the same. But errors, negligence, misunderstandings and technical IT-security gaps (among many other things) should not be able to trigger wars.

The solution would be to allow digital weapons, and the control systems of physical weapons, to act only under human control at the technical level.
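
Technically, "human control only" can be expressed as a gate that simply refuses to act without an explicit, logged confirmation by a human operator. The following sketch is only an illustration; the function and field names are my own assumptions.

```python
# Sketch of a human-in-the-loop gate: an automated system may only *propose*
# an action; nothing is executed without explicit, logged human approval.
# Function and field names are illustrative assumptions.
from datetime import datetime, timezone
from typing import Optional

class HumanApprovalRequired(Exception):
    pass

def execute_countermeasure(action: str, confirmation: Optional[dict]) -> str:
    if not confirmation or not confirmation.get("approved"):
        raise HumanApprovalRequired(f"'{action}' requires explicit human approval")
    # Record who approved what, and when, before acting.
    stamp = datetime.now(timezone.utc).isoformat()
    return f"{stamp}: {action} authorised by {confirmation['operator_id']}"

proposal = "block traffic from suspicious subnet"
print(execute_countermeasure(proposal, {"approved": True, "operator_id": "op-17"}))
```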

 

The dangers of an intelligence explosion

The best A.I. scientists, sci-fi authors, astrologers and physicists are deeply divided on whether A.I. can reach a human or even higher level of intelligence in the foreseeable future.

If it can, then the precautions above for weak A.I. are not sufficient. That is why they all agree that we should start looking for solutions today.

As described in a previous article, based on Shane Legg's projections I think A.I. might reach a human-like level around 2030.

 

Stephen Hawking – Transcendence is more than sci-fi

Stephen Hawking, theoretical physicist, cosmologist and one of the smartest people of our time, has used artificial intelligence for his research, and sees it as possibly the biggest threat to our civilisation. In all seriousness, he stated that the movie Transcendence, which is about the singularity, should be taken as a warning.

 

Hawking describes the creation of truly intelligent machines as potentially the greatest achievement in the history of mankind. It may, however, also be our last, if we do not learn how to deal with the risks they pose.

Solutions to the risks of strong A.I.:

• Unexpected generalisations

To ensure that A.I. and robots won't imprison us "for our own safety" in rooms padded with cotton wool, systems should present to us all the important decisions and plans they come up with. A.I. should also represent its sub-goals and tasks in a human-understandable, concise but complete way, including probable and possible consequences.
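
Here is a sketch of how such a plan, with its sub-goals and anticipated consequences, could be represented so that a human can review it before execution. The field names are invented for illustration.

```python
# Sketch of a plan object that must carry a human-readable summary,
# its sub-goals, and the consequences the system anticipates.
# Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class SubGoal:
    description: str
    anticipated_consequences: List[str]

@dataclass
class Plan:
    objective: str
    sub_goals: List[SubGoal]

    def explain(self) -> str:
        lines = [f"Objective: {self.objective}"]
        for i, sg in enumerate(self.sub_goals, 1):
            lines.append(f"  {i}. {sg.description}")
            for c in sg.anticipated_consequences:
                lines.append(f"     -> may cause: {c}")
        return "\n".join(lines)

plan = Plan(
    objective="keep the household safe",
    sub_goals=[SubGoal("lock the front door at night",
                       ["residents cannot leave without unlocking"])],
)
print(plan.explain())  # a human reviews this before the plan is executed
```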

• Self-optimising systems are not formally verifiable

Formal code verification is impossible on systems that change and improve their own code at any time. A modular design helps; the control units could possibly be kept minimal and more static, and thus verifiable.

• Accidents caused by the A.I. misperceiving its environment

A.I. systems must not only recognise their environment, but also learn to assess how other people and systems are likely to behave. For this, systems must reason logically under uncertainty and learn to decide. They also need to pay attention to the negative consequences of each decision. The interplay between calculation and assumption must be developed further for this.
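
One common way to formalise "deciding under uncertainty while paying attention to negative consequences" is to score actions by expected utility with an extra penalty on expected harm. The probabilities, utilities and the risk weight in the following toy example are invented.

```python
# Sketch: decide under uncertainty while weighting negative consequences.
# Probabilities, utilities and the risk weight are invented illustration values.
def score(action_outcomes, risk_weight=3.0):
    """Expected utility minus an extra penalty on expected harm."""
    expected_utility = sum(p * u for p, u in action_outcomes)
    expected_harm = sum(p * -u for p, u in action_outcomes if u < 0)
    return expected_utility - risk_weight * expected_harm

# Each action: list of (probability, utility) pairs over possible outcomes.
cross_now   = [(0.90, 1.0), (0.10, -10.0)]   # fast, but small chance of collision
wait_and_go = [(0.99, 0.8), (0.01, -10.0)]   # slower, much safer

best = max([("cross_now", cross_now), ("wait_and_go", wait_and_go)],
           key=lambda kv: score(kv[1]))
print(best[0])  # -> "wait_and_go": possible harm is weighted more heavily than gain
```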

• Intelligent machines beyond effective human control

To perform its tasks as well as possible, an intelligent machine would naturally try to prevent being switched off or halted, so avoiding human control would be a natural sub-goal. Therefore, accepting an interruption requested by the machine's administrators should itself be a very high-level goal.

Again, a modular structure in which the central control unit cannot expand itself autonomously is helpful. Additionally, displaying the machine's goals and sub-goals in an intelligible form is a must.
Adopting new findings from neuroscience into A.I. development is needed as well.
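
Here is a toy sketch of the idea that complying with an operator's stop request outranks the task itself, so the agent has no incentive to resist being interrupted. Real research on safe interruptibility is far more subtle; the stop flag below is just an illustrative assumption.

```python
# Toy sketch: the operator's stop request outranks the task objective,
# so the agent simply halts instead of resisting interruption.
# The threading-based stop flag is an illustrative assumption.
import threading
import time

stop_requested = threading.Event()

def agent_loop(task_steps):
    for step in task_steps:
        if stop_requested.is_set():
            print("Stop requested by operator - halting before:", step)
            return            # complying is treated as the highest-priority goal
        print("Executing:", step)
        time.sleep(0.1)

worker = threading.Thread(target=agent_loop,
                          args=(["fetch parts", "assemble", "paint", "ship"],))
worker.start()
time.sleep(0.15)
stop_requested.set()          # operator intervenes
worker.join()
```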

• Laws and morality are too complex for A.I., and insufficient anyway

On the one hand, laws are too extensive and complex for A.I. (or people :-)) to be understood as a whole. On the other hand, they are not sufficient: courts decide by taking conventions, morals and, above all, the specific situation into account.

It will certainly be necessary to create laws and rules of conduct tailored specifically to machines. Possibly even according to the principle: anything not explicitly allowed is forbidden?

• Broken or harmful A.I. without bounds

Untested, unfinished or potentially harmful A.I. should be run in containers, off the internet. However, this requires a high level of IT security and well-developed systems. A.I. must be trainable with real-world data without any possibility of spreading via steganography or air-gap hacking.

• A.I. devouring infinite resources, a natural sub-goal

To carry out its tasks as well as possible, an intelligent system will use all resources made available to it, including resources that are only reachable by exploiting security vulnerabilities or gaps in the specification.
Foreign computer systems and energy sources could be captured this way, and even merely time-limited resources could be consumed excessively.
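
At the level of today's operating systems, hard resource caps are at least available. The limits in this small sketch (Linux/Unix only) are arbitrary example values.

```python
# Sketch: impose hard OS-level resource caps on an A.I. process (Linux/Unix only).
# The limits below are arbitrary example values.
import resource

one_gib = 2**30
resource.setrlimit(resource.RLIMIT_AS, (one_gib, one_gib))   # max 1 GiB address space
resource.setrlimit(resource.RLIMIT_CPU, (300, 300))          # max 300 s CPU time
# If the process exceeds these limits it gets a MemoryError / SIGXCPU
# instead of silently consuming everything that is reachable.
```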

• A.I. tricks us, in line with Goodhart's law

A.I. mostly works with reward/reinforcement systems, and each such system tries to get its reward in the easiest and fastest way. In the end these systems do not try to do the task conscientiously; they only try to satisfy the evaluation criterion.
The evaluation criterion must therefore be linked precisely to fulfilment of the task, without any possible undesirable side effects.
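
A toy numeric illustration of Goodhart's law: once the proxy metric becomes the target, a policy that games the proxy beats a policy that actually does the task. All numbers are invented.

```python
# Toy illustration of Goodhart's law: optimising the proxy metric
# selects behaviour that ignores the real task.
# All numbers are invented for illustration.

def proxy_score(report):
    return report["dirt_invisible"]          # reward: "no dirt visible to the camera"

def true_score(report):
    return report["dirt_removed"]            # what we actually wanted

honest_cleaner = {"dirt_invisible": 0.9, "dirt_removed": 0.9}   # cleans the floor
proxy_gamer    = {"dirt_invisible": 1.0, "dirt_removed": 0.0}   # hides dirt under a rug

best_by_proxy = max([honest_cleaner, proxy_gamer], key=proxy_score)
print(best_by_proxy is proxy_gamer)          # True: the proxy selects the wrong behaviour
print(true_score(best_by_proxy))             # 0.0: the task was not fulfilled
```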

• A.I. behaves differently when you look away, a kind of Heisenberg uncertainty principle

When auditors are present, students, adults, restaurants and companies behave differently than usual. Unfortunately, the same is true for intelligent machines.
But since they should always stick to the rules, smart machines should rather be "ideal" than "human", and perhaps also receive as little information as possible about when and on what they are being tested.

• We cannot know what superintelligent machines will do

This problem remains: it is impossible for us to foresee the behaviour of people or machines that are smarter than we are. We can guess, but we cannot be sure.
Should we perhaps limit the resources of systems before they cross an IQ of 200?

 

Conclusion

Finally! Until now we had mostly been warned about the dangers of A.I. by sci-fi authors, and from time to time by journalists or individuals. Often many true thoughts were mixed into imaginative stories.

Now, at the Future of Life Institute conference, a large number of leading scientists from different fields, top researchers from industry and some others came together and discussed the potential dangers and solution ideas. With the publication of the open letter and the research priorities, this finally has a scientific form that can be cited, and it contains the consensus of the A.I. elite.

Good: there are concrete solution ideas for many of the examples. Bad: these are only concepts, and the general solution is: we need to do more research.

Although the scientists want to pursue these beneficial goals, I suspect that at present most research money is still spent on increasing the productivity of A.I. systems.

I agree with the points and the letter, and will try to contribute my part to that research!