“Enlightenment is man’s emergence from his self-imposed immaturity. Immaturity is the inability to use one’s understanding without guidance from another.”
—Immanuel Kant, “What is Enlightenment?” (1784)
The digital revolution is in full swing. How will it change our world? The amount of data we produce doubles every year. In other words: in 2016 we produced as much data as in the entire history of humankind through 2015. Every minute we produce hundreds of thousands of Google searches and Facebook posts. These contain information that reveals how we think and feel. Soon, the things around us, possibly even our clothing, will also be connected to the Internet. It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours. Many companies are already trying to turn this Big Data into Big Money.
Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?
The field of artificial intelligence is, indeed, making breathtaking advances. In particular, it is contributing to the automation of data analysis. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself. Recently, Google’s DeepMind algorithm taught itself how to win 49 Atari games. Algorithms can now recognize handwritten language and patterns almost as well as humans and even complete some tasks better than them. They are able to describe the contents of photos and videos. Today 70% of all financial transactions are performed by algorithms. News content is, in part, automatically generated. This all has radical economic consequences: in the coming 10 to 20 years around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.
It can be expected that supercomputers will soon surpass human capabilities in almost all areas—somewhere between 2020 and 2060. Experts are starting to ring alarm bells. Technology visionaries, such as Elon Musk from Tesla Motors, Bill Gates from Microsoft and Apple co-founder Steve Wozniak, are warning that super-intelligence is a serious danger for humanity, possibly even more dangerous than nuclear weapons. Is this alarmism?
One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements.
In the 1940s, the American mathematician Norbert Wiener (1894–1964) invented cybernetics. According to him, the behavior of systems could be controlled by the means of suitable feedbacks. Very soon, some researchers imagined controlling the economy and society according to this basic principle, but the necessary technology was not available at that time.
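Wiener’s basic principle, steering a system by feeding its measured deviation from a goal back into it, can be sketched in a few lines (an illustrative toy of my own; the function name and all parameters are invented):

```python
# Toy negative-feedback loop: measure the deviation from a target and
# apply a correction proportional to it, as in a simple thermostat.
def run_feedback_loop(target, state, gain, steps):
    """Return the trajectory of a state repeatedly corrected toward a target."""
    history = [state]
    for _ in range(steps):
        error = target - state        # the "feedback": observed deviation
        state = state + gain * error  # correction proportional to the error
        history.append(state)
    return history

trajectory = run_feedback_loop(target=20.0, state=5.0, gain=0.5, steps=10)
print(round(trajectory[-1], 3))  # close to 20.0: each step halves the error
```

With a gain between 0 and 1, the remaining error shrinks geometrically; applying the same principle to the economy and society, rather than a thermostat, is exactly what those early researchers imagined.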
Today, Singapore is seen as a perfect example of a data-controlled society. What started as a program to protect its citizens from terrorism has ended up influencing economic and immigration policy, the property market and school curricula. China is taking a similar route. Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. It involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called “Citizen Score”, which will determine under what conditions they may get loans, jobs, or travel visas to other countries. This kind of individual monitoring would include people’s Internet surfing and the behavior of their social contacts (see “Spotlight on China”).
With consumers facing increasingly frequent credit checks and some online shops experimenting with personalized prices, we are on a similar path in the West. It is also increasingly clear that we are all the focus of institutional surveillance. This was revealed in 2015 when details of the British secret service’s “Karma Police” program became public, showing the comprehensive screening of everyone’s Internet use. Is Big Brother now becoming a reality?

Programmed society, programmed citizens
Everything started quite harmlessly. Search engines and recommendation platforms began to offer us personalized suggestions for products and services. This information is based on personal data and metadata gathered from previous searches, purchases and mobility behavior, as well as social interactions. While the identity of the user is officially protected, it can, in practice, be inferred quite easily. Today, algorithms know pretty well what we do, what we think and how we feel—possibly even better than our friends and family, or even ourselves. Often the recommendations we are offered fit so well that the resulting decisions feel as if they were our own, even though they are actually not. In fact, we are being remotely controlled ever more successfully in this manner. The more that is known about us, the less likely our choices are to be free and not predetermined by others.
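How such personalized suggestions can arise is easy to sketch (a hypothetical content-based recommender; the item names, topic weights and similarity scoring are all invented for illustration):

```python
# Hypothetical toy recommender: a user profile is averaged from items already
# consumed, and candidates are ranked by cosine similarity to that profile.
import math

ITEM_FEATURES = {              # invented (politics, sports, tech) topic weights
    "article_a": (0.9, 0.0, 0.1),
    "article_b": (0.8, 0.1, 0.1),
    "article_c": (0.0, 0.9, 0.1),
    "article_d": (0.1, 0.1, 0.8),
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(history, candidates):
    """Rank candidates by similarity to the mean of the user's history."""
    vectors = [ITEM_FEATURES[i] for i in history]
    profile = tuple(sum(xs) / len(xs) for xs in zip(*vectors))
    return sorted(candidates, key=lambda i: cosine(profile, ITEM_FEATURES[i]),
                  reverse=True)

# A reader of one political article is shown more politics first:
print(recommend(["article_a"], ["article_b", "article_c", "article_d"]))
# ['article_b', 'article_d', 'article_c']
```

Even this crude scheme illustrates the feedback the text describes: what you have already consumed determines what you are shown next.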
But it won’t stop there. Some software platforms are moving towards “persuasive computing.” In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people.
These technologies are also becoming increasingly popular in the world of politics. Under the label of “nudging,” governments are trying, on a massive scale, to steer citizens towards healthier or more environmentally friendly behavior by means of a “nudge”—a modern form of paternalism. The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is “big nudging”, the combination of big data with nudging. To many, this appears to be a sort of digital scepter that allows one to govern the masses efficiently, without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then citizens could be governed by a data-empowered “wise king”, who would be able to produce desired economic and social outcomes almost as if with a digital magic wand.

Pre-programmed catastrophes
But one look at the relevant scientific literature shows that attempts to control opinions, in the sense of their “optimization”, are doomed to fail because of the complexity of the problem. The dynamics of the formation of opinions are full of surprises. Nobody knows how the digital magic wand, that is to say the manipulative nudging technique, should best be used. Whether a measure was right or wrong often becomes apparent only afterwards. During the German swine flu epidemic in 2009, for example, everybody was encouraged to go for vaccination. However, we now know that a certain percentage of those who received the immunization were affected by an unusual disease, narcolepsy. It is fortunate, then, that not more people chose to get vaccinated!
Another example is the recent attempt of health insurance providers to encourage increased exercise by handing out smart fitness bracelets, with the aim of reducing the amount of cardiovascular disease in the population; but in the end, this might result in more hip operations. In a complex system, such as society, an improvement in one area almost inevitably leads to deterioration in another. Thus, large-scale interventions can sometimes prove to be massive mistakes.
Regardless of this, criminals, terrorists and extremists will, sooner or later, try to take control of the digital magic wand—and they will likely succeed, perhaps even without us noticing. Almost all companies and institutions have already been hacked, even the Pentagon, the White House, and the NSA.
A further problem arises when adequate transparency and democratic control are lacking: the erosion of the system from the inside. Search algorithms and recommendation systems can be influenced. Companies can bid on certain combinations of words to gain more favorable results. Governments are probably able to influence the outcomes too. During elections, they might nudge undecided voters towards supporting them—a manipulation that would be hard to detect. Therefore, whoever controls this technology can win elections—by nudging themselves to power.
This problem is exacerbated by the fact that, in many countries, a single search engine or social media platform has a predominant market share. It could decisively influence the public and interfere with these countries remotely. Even though the European Court of Justice judgment made on 6th October 2015 limits the unrestrained export of European data, the underlying problem still has not been solved within Europe, and even less so elsewhere.
What undesirable side effects can we expect? In order for manipulation to stay unnoticed, it takes a so-called resonance effect—suggestions that are sufficiently customized to each individual. In this way, local trends are gradually reinforced by repetition, leading all the way to the “filter bubble” or “echo chamber effect”: in the end, all you might get is your own opinions reflected back at you. This causes social polarization, resulting in the formation of separate groups that no longer understand each other and find themselves increasingly in conflict with one another. In this way, personalized information can unintentionally destroy social cohesion. This can currently be observed in American politics, where Democrats and Republicans are increasingly drifting apart, so that political compromises become almost impossible. The result is a fragmentation, possibly even a disintegration, of society.
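The reinforcement dynamic behind the filter bubble can be illustrated with a deliberately minimal toy model (my own sketch, not from the text; the update rule and all parameters are invented): each agent keeps receiving content matching its current leaning and drifts a little further that way.

```python
# Toy resonance effect: personalized feeds match the sign of each opinion,
# and every exposure pushes the opinion slightly further toward that side.
def polarize(opinions, rate, rounds):
    """Reinforce each opinion toward its own extreme, clipped to [-1, 1]."""
    for _ in range(rounds):
        opinions = [max(-1.0, min(1.0, o + rate * (1 if o > 0 else -1)))
                    for o in opinions]
    return opinions

start = [-0.1, -0.05, 0.05, 0.1]              # mild initial leanings
print(polarize(start, rate=0.05, rounds=40))  # [-1.0, -1.0, 1.0, 1.0]
```

Mild leanings end up at the extremes, with no middle ground left. A caricature, certainly, but it captures how repetition alone can split a population into camps.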
Owing to the resonance effect, a large-scale change of opinion in society can only be produced slowly and gradually. The effects occur with a time lag, but they also cannot be easily undone. It is possible, for example, that resentment against minorities or migrants gets out of control; too much national sentiment can cause discrimination, extremism and conflict.
Perhaps even more significant is the fact that manipulative methods change the way we make our decisions. They override the otherwise relevant cultural and social cues, at least temporarily. In summary, the large-scale use of manipulative methods could cause serious social damage, including the brutalization of behavior in the digital world. Who should be held responsible for this?

Legal issues
This raises legal issues that, given the huge fines against tobacco companies, banks, IT and automotive companies over the past few years, should not be ignored. But which laws, if any, might be violated? First of all, it is clear that manipulative technologies restrict the freedom of choice. If the remote control of our behavior worked perfectly, we would essentially be digital slaves, because we would only execute decisions that had actually been made by others beforehand. Of course, manipulative technologies are only partly effective. Nevertheless, our freedom is disappearing slowly but surely—in fact, slowly enough that there has been little resistance from the population so far.
The insights of the great enlightener Immanuel Kant seem to be highly relevant here. Among other things, he noted that a state that attempts to determine the happiness of its citizens is a despot. However, the right of individual self-development can only be exercised by those who have control over their lives, which presupposes informational self-determination. This is about nothing less than our most important constitutional rights. A democracy cannot work well unless those rights are respected. If they are constrained, this undermines our constitution, our society and the state.
As manipulative technologies such as big nudging function in a similar way to personalized advertising, other laws are affected too. Advertisements must be marked as such and must not be misleading. They are also not allowed to utilize certain psychological tricks such as subliminal stimuli. This is why it is prohibited to show a soft drink in a film for a split-second, because then the advertising is not consciously perceptible while it may still have a subconscious effect. Furthermore, the current widespread collection and processing of personal data is certainly not compatible with the applicable data protection laws in European countries and elsewhere.
Finally, the legality of personalized pricing is questionable, because it could be a misuse of insider information. Other relevant aspects are possible breaches of the principles of equality and non-discrimination—and of competition laws, as free market access and price transparency are no longer guaranteed. The situation is comparable to businesses that sell their products cheaper in other countries, but try to prevent purchases via these countries. Such cases have resulted in high punitive fines in the past.
Personalized advertising and pricing cannot be compared to classical advertising or discount coupons, as the latter are non-specific and do not invade our privacy with the goal of exploiting our psychological weaknesses and knocking out our critical thinking.
Nonetheless, experiments with manipulative technologies, such as nudging, are performed on millions of people, without informing them, without transparency and without ethical constraints. Even large social networks like Facebook or online dating platforms such as OkCupid have already publicly admitted to undertaking these kinds of social experiments. If we want to avoid irresponsible research on humans and society (just think of the involvement of psychologists in the torture scandals of the recent past), then we urgently need to impose high standards, especially scientific quality criteria and a code of conduct similar to the Hippocratic Oath.

Has our thinking, our freedom, our democracy been hacked?
Let us suppose there was a super-intelligent machine with godlike knowledge and superhuman abilities: would we follow its instructions? This seems possible. But if we did, then the warnings expressed by Elon Musk, Bill Gates, Steve Wozniak, Stephen Hawking and others would have come true: computers would have taken control of the world. We must be clear that a super-intelligence could also make mistakes, lie, pursue selfish interests or be manipulated. Above all, it could not be compared with the distributed, collective intelligence of the entire population.
The idea of replacing the thinking of all citizens with a computer cluster would be absurd, because that would dramatically lower the diversity and quality of the achievable solutions. It is already clear that the problems of the world have not decreased despite the recent flood of data and the use of personalized information—on the contrary! World peace is fragile. The long-term change in the climate could lead to the greatest loss of species since the extinction of the dinosaurs. We are also far from having overcome the financial crisis and its impact on the economy. Cyber-crime is estimated to cause an annual loss of 3 trillion dollars. States and terrorists are preparing for cyberwarfare.
In a rapidly changing world a super-intelligence can never make perfect decisions (see Fig. 1): systemic complexity is increasing faster than data volumes, which are growing faster than the ability to process them, and data transfer rates are limited. This results in disregarding local knowledge and facts, which are important to reach good solutions. Distributed, local control methods are often superior to centralized approaches, especially in complex systems whose behaviors are highly variable, hardly predictable and not capable of real-time optimization. This is already true for traffic control in cities, but even more so for the social and economic systems of our highly networked, globalized world.
Furthermore, there is a danger that the manipulation of decisions by powerful algorithms undermines the basis of “collective intelligence,” which can flexibly adapt to the challenges of our complex world. For collective intelligence to work, information searches and decision-making by individuals must occur independently. If our judgments and decisions are predetermined by algorithms, however, this truly leads to a brainwashing of the people. Intelligent beings are downgraded to mere receivers of commands, who automatically respond to stimuli.
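Why independence matters can be made concrete with a small simulation (an illustrative sketch under invented assumptions: a numeric estimation task with Gaussian errors). Averaging many independent guesses cancels individual errors, while guesses anchored to one shared algorithmic suggestion inherit that suggestion’s error no matter how large the crowd.

```python
# Toy "wisdom of crowds" experiment: compare the error of a crowd's mean
# guess when people estimate independently versus following a shared cue.
import random

def crowd_error(n, independent, trials=300, seed=0):
    """Average absolute error of the crowd's mean guess over many trials."""
    rng = random.Random(seed)
    truth = 100.0
    total = 0.0
    for _ in range(trials):
        if independent:
            # each person errs on their own; errors largely cancel in the mean
            guesses = [truth + rng.gauss(0, 10) for _ in range(n)]
        else:
            # everyone follows one shared (algorithmic) suggestion
            shared = truth + rng.gauss(0, 10)
            guesses = [shared + rng.gauss(0, 1) for _ in range(n)]
        total += abs(sum(guesses) / n - truth)
    return total / trials

# True: the independent crowd lands near the truth, the herded one does not.
print(crowd_error(500, independent=True) < crowd_error(500, independent=False))
```

The independent crowd’s error shrinks roughly as one over the square root of its size; the herded crowd’s error stays at the size of the shared cue’s error. This is the statistical core of the argument that algorithmically predetermined judgments destroy collective intelligence.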
In other words: personalized information builds a “filter bubble” around us, a kind of digital prison for our thinking. How could creativity and thinking “out of the box” be possible under such conditions? Ultimately, a centralized system of technocratic behavioral and social control using a super-intelligent information system would result in a new form of dictatorship. Therefore, the top-down controlled society, which comes under the banner of “liberal paternalism,” is in principle nothing else than a totalitarian regime with a rosy cover.
In fact, big nudging aims to bring the actions of many people into line, and to manipulate their perspectives and decisions. This puts it in the arena of propaganda and the targeted incapacitation of the citizen through behavioral control. We expect that the consequences would be fatal in the long term, especially when considering the above-mentioned effect of undermining culture.

A better digital society is possible
Despite fierce global competition, democracies would be wise not to cast the achievements of many centuries overboard. In contrast to other political regimes, Western democracies have the advantage that they have already learned to deal with pluralism and diversity. Now they just have to learn how to capitalize on them more.
In the future, those countries will lead that reach a healthy balance between business, government and citizens. This requires networked thinking and the establishment of an information, innovation, product and service “ecosystem.” In order to work well, it is not only important to create opportunities for participation, but also to support diversity. After all, there is no way to determine the best goal function: should we optimize the gross national product per capita or sustainability? Power or peace? Happiness or life expectancy? Often enough, what would have been better is only known after the fact. By allowing the pursuit of a variety of goals, a pluralistic society is better able to cope with the range of unexpected challenges to come.
Centralized, top-down control is a solution of the past, which is only suitable for systems of low complexity. Therefore, federal systems and majority decisions are the solutions of the present. With economic and cultural evolution, social complexity will continue to rise. Therefore, the solution for the future is collective intelligence. This means that citizen science, crowdsourcing and online discussion platforms are eminently important new approaches to making more knowledge, ideas and resources available.
Collective intelligence requires a high degree of diversity. This is, however, being reduced by today’s personalized information systems, which reinforce trends.
Sociodiversity is as important as biodiversity. It fuels not only collective intelligence and innovation, but also resilience—the ability of our society to cope with unexpected shocks. Reducing sociodiversity often also reduces the functionality and performance of an economy and society. This is the reason why totalitarian regimes often end up in conflict with their neighbors. Typical long-term consequences are political instability and war, as have occurred time and again throughout history. Pluralism and participation are therefore not to be seen primarily as concessions to citizens, but as functional prerequisites for thriving, complex, modern societies.
In summary, it can be said that we are now at a crossroads (see Fig. 2). Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse. If such widespread technologies are not compatible with our society’s core values, sooner or later they will cause extensive damage. They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act. We are at the historic moment, where we have to decide on the right path—a path that allows us all to benefit from the digital revolution.
Thanks to Big Data, we can now make better, evidence-based decisions. However, the principle of top-down control increasingly fails, since the complexity of society grows explosively as we go on networking our world. Distributed control approaches will become ever more important. Only by means of collective intelligence will it be possible to find appropriate solutions to the complexity challenges of our world.
Our society is at a crossroads: if ever more powerful algorithms were controlled by a few decision-makers and reduced our self-determination, we would fall back into a kind of Feudalism 2.0, as important historical achievements would be lost. Now, however, we have the chance to choose the path to digital democracy, or democracy 2.0, which would benefit us all.