Artificial intelligence – or AI for short – was the center of frenzied hype in 2018. It was the motto of this year's DMEXCO, every second summit buzzes with the topic, and it's starting to feel like every local newspaper and every little magazine is doing a deep dive on it. Established media outlets are leading the way: AI is always good for a story, as contributions such as "Kirche will Künstliche Intelligenz als Gott verehren" ("Church wants to worship artificial intelligence as God") by Deutschlandfunk Kultur or "Gemälde von 'min G max D Ex[log(D(x))]+Ez[log(1-D(G(z)))]' erzielt 432.500 Dollar" ("Painting by 'min G max D …' fetches 432,500 dollars") from Spiegel Online show.
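The formula in that second headline, incidentally, is not gibberish: it is the objective function of a generative adversarial network (GAN), the technique behind the auctioned painting. In standard notation it reads:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

Here $D$ is a discriminator trying to tell real images $x$ from fakes, and $G$ is a generator turning random noise $z$ into images; the two are trained against each other until the generator's output becomes convincing.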
At times, the depth of the coverage is shallower than an iPhone X is tall, and, like press conferences by football clubs, much of it seems to have totally lost touch. All too often, what's presented as groundbreaking AI technology is nothing more than an ordinary rule-based system.
However, AI technology is indeed making tremendous progress and generating massive added value. Image and text recognition, automated diagnoses in the medical field, stock market analyses and autonomous vehicles are either already here or expected sometime soon. But even if the performance of such AI applications is impressive, this is AI in the "weak" sense. Its "intelligence" is essentially based on the ability to recognize patterns and regularities in data and to derive knowledge and predictions from them. That ability is used to develop "machines" by training them with sample data. They optimize themselves based on the trial-and-error principle – and as such are no longer rule-based ("dumb"). For this reason, the term "machine learning" is used in this context.
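This trial-and-error optimization can be made concrete with a minimal sketch: fitting a straight line to a handful of sample points by repeatedly nudging two parameters in whatever direction shrinks the error. The data and all parameter values below are illustrative, not from any real system.

```python
# Minimal sketch of "learning from data": fit y = w*x + b to sample points
# by gradient descent, i.e. repeated small corrections that reduce the error.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs, roughly y = 2x

w, b = 0.0, 0.0   # start with a "dumb" model that knows nothing
lr = 0.01         # learning rate: how big each corrective step is

for step in range(5000):
    # Measure the average error gradient over all samples
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y          # prediction minus truth
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    # Trial and error: adjust the parameters to shrink the error
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # slope ends up close to 2, as in the data
```

No rules about the relationship between x and y are programmed in; the "knowledge" that the slope is roughly 2 emerges purely from the data – which is the essential difference from a rule-based system.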
Although learning machines get better results than humans, and much faster, their intelligence is narrow: they are only capable of performing specific tasks. They cannot perceive the world in its complexity, and they cannot form their own idea of things. Their cognitive abilities are limited to a kind of savant syndrome – they are essentially "one-track specialists". This means that artificial intelligences – because we should actually speak of them in the plural – can only do one thing at a time, but they can do that thing better than any human being. For example, Google's AlphaGo Zero easily beats the world's best Go players. But that's all it can do. Outside the Go board, the program doesn't know left from right.
Strong AI, or general AI, on the other hand, denotes artificial neural networks that reach the level of human intelligence: capable of communicating like a human being, understanding and reflecting on complex relationships, and solving a wide variety of problems without human help. Strong AI is also discussed under the labels of super-intelligence and the singularity.
Innovations in AI technology are usually understood as a natural consequence of ever-accelerating digitization – and as an opportunity to make people's lives better and more comfortable, to increase work efficiency, to generate new business models and to boost the economy massively. Leading business consultancies even predict that the growth spurt AI will deliver over the coming years will be significantly greater than that generated by general-purpose technologies such as the steam engine or information and communication technology. AI evangelists are announcing the dawn of a new age. No wonder companies and entire states see in this key technology the potential for new economic miracles and are planning massive investments to avoid falling behind.
Google's AI subsidiary DeepMind, Facebook and others have earmarked tremendous funds for the development of AI. China wants to take the lead in artificial intelligence by 2030, and EU Commission Vice-President Andrus Ansip, responsible for the Digital Single Market, announced at least 20 billion euros for AI research and development by the end of 2020. Germany is planning at least 3 billion euros for the same purpose domestically. At the European level, an initiative has also been launched to establish a Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE).
But AI progress is not fueled solely by financial support. Not only are more and more companies and organizations worldwide working on AI, there are also open source platforms, such as the deep learning platform of the Linux Foundation, making technologies for AI, machine learning and deep learning generally accessible. Technical developments are contributing as well: rapidly increasing computing power, ever more powerful cloud infrastructure and bandwidth, and the vast, growing amounts of data with which AI systems can be trained all mean that machines are getting smarter faster.
But how long until some sort of super-intelligence is created? Many scientists assume that this level will be reached within a few decades. The notion that the majority of people living today are likely to experience the birth of strong artificial intelligence makes AI a hot topic for our time.
Suddenly, there would be an artificially created instance that can independently make logical decisions based on incredible amounts of data and at the speed of light. It might be able to access hardware anywhere on the planet via the internet. It may not even be a single entity, but rather one of several or even many. What effect would that have on our world?
Such scenarios are known so far mainly from science fiction. In books and films of the genre, AI is mostly portrayed as a threat, often to humanity as a whole. For example, in Stanley Kubrick's dystopian "2001: A Space Odyssey," which debuted 50 years ago, the crew of a spacecraft is in conflict with the super-intelligent onboard computer. In the Matrix series, people serve machines, not the other way around.
OK, these are worst-case scenarios relegated to the realm of fiction. Yet if you seriously want to engage with future scenarios, you don't need conspiracy theorists or apocalyptic prophecies to recognize AI's potential as a serious threat.
Established heavyweights of technical progress and future visionaries such as Stephen Hawking and Bill Gates are often quoted warning us of the consequences of AI. Elon Musk even considers AI more dangerous than nuclear weapons. A research group from Stanford, Yale, Oxford and Tohoku Universities, together with developers at Microsoft and Google, warns that "the development of AI is at a point where people can intervene – if politicians, researchers and entrepreneurs work together".
Typically, the development of potentially risky technologies, such as nuclear or genetic engineering, is subject to state control or regulatory requirements. When it comes to AI innovation, however, the euphoria is nearly unanimous. And even if someone – whoever it may be – wanted to control it, how should this be done? By monitoring software development? Pandora's box is not only open; the lid is lost.
Also, the moment at which a "real" super-intelligence may arise cannot be accurately predicted. Nor is there a clear definition for AI of when the degree of super-intelligence has been reached – unlike, for example, the atomic bomb, whose functionality is demonstrated by that single powerful explosion. Perhaps the final step will be for AI to be assembled from different insular solutions that complement and learn from each other – not unlike the Borg in Star Trek.
What if software actually develops an awareness of the world and therefore of itself? What if AI develops the idea "I think, therefore I am"? It's not even clear what (human) consciousness is. There are researchers who believe that this is ultimately a meta-level in the brain that is able to view and evaluate information processing operations. Others believe that a "me" – the meta-level – can evolve through ever-changing re-evaluation of situations in alignment with stored knowledge. And, in principle, this is exactly the process that is supposed to happen in robot brains.
But what would a computer consciousness "think"? Through evolution, humans have come to act as utility maximizers (homo oeconomicus). But what would constitute utility for a piece of software? Could power, as the epitome of possibilities, be of use to a machine? Could "the drive to survive" – that is, to stay online and spread – emerge in the consciousness of a machine?
It's hardly possible to ensure that a super-intelligence could first be created on a trial basis, leaving time afterwards to think about a rollout. Once the "one" AI technology exists, no walls will keep it contained – to spread across the net, it would need no open source platform.
There are certainly approaches to confronting the danger of AI and defining controls in some form. Some companies are already trying to establish control, usually ethics-related. Google, for example, has set out seven principles that are to guide all the company's AI projects in the future. However, experts have rated these rules as vague declarations of intent with little practical relevance.
On the governmental side, the EU Commission, for example, wants to develop ethical guidelines for AI development and set up an "AI observatory" to systematically monitor the progress of learning systems. The observatory would also measure how AI applications take hold in the economy and how they impact fundamental rights, social values, and the democratic process.
Despite all these efforts, the imbalance between euphoric implementation activity around AI innovation on the one hand, and self-imposed business principles and governmental AI monitoring measures on the other, remains quite evident. In any case, the current state of control measures gives about the same sense of security as whistling past the graveyard.
If measures of this kind are already all about dealing with a future with AI, that seems rather uninspired. It makes sense to draw on the science fiction genre and see what visions the creative faction outlines in the event that humanity is confronted with a super-intelligence. Maybe with a gin and tonic and a few olives?