
A new book by Gerd Gigerenzer is something to look forward to. When it was announced that the German version of his new work would appear half a year before the English one, I jumped at the opportunity to have an early read. No regrets: I think this is one of the top books of the year.

At first glance the book seems different from his previous works such as Gut Feelings (2007) and Risk Savvy (2014), but there is a continuous theme running through all of them: how ordinary people can understand risk and make good decisions under uncertainty in everyday situations.

The book deals with the digitalisation of our world and is split into two parts. The first part deals with humans and Artificial Intelligence (AI). Digitalisation, Big Data, and AI are often presented as a panacea, and this view is easily believed by the public and by decision makers. But are they the silver bullet? And can they really replace humans?

Gigerenzer emphasizes the fundamental differences between human intelligence and AI. AI is based on “data crunching” and brute-force high-speed computing, which is something quite different from intelligence. When neural networks learn, they do so from correlations, not causation. Humans have mental models of things, e.g., a tree. An AI may recognize shapes that are tree-like but does not know what a tree is. Humans have an intuitive feel for causation. This can sometimes lead them astray, but AI can likewise make errors based on irrelevant correlations it has learned from the data fed to it. While AI will not make certain mistakes that humans are prone to, it can make mistakes that no human would make. This leads to the conclusion that AI will do better at certain tasks while humans are superior at others.
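
To illustrate the correlation-versus-causation point (a sketch of my own, not an example from the book; the data and feature names are invented), the following trains a standard classifier on data where an irrelevant “background” feature accidentally tracks the label. Once the accident disappears, the model fails in a way no human would:

```python
# Toy sketch: a classifier that latches onto a spurious correlation.
# All data here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

def make_data(flip_background: bool):
    y = rng.integers(0, 2, n)
    # "shape": genuinely (but noisily) related to the label.
    shape = y + rng.normal(0.0, 1.0, n)
    # "background": irrelevant, but in the training data it happens
    # to track the label almost perfectly.
    background = (1 - y if flip_background else y) + rng.normal(0.0, 0.1, n)
    return np.column_stack([shape, background]), y

X_train, y_train = make_data(flip_background=False)
X_test, y_test = make_data(flip_background=True)  # the accident goes away

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # near 1.0
print("test accuracy :", model.score(X_test, y_test))    # collapses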

One of the core messages of the book is that of the “stable world”. AI functions best in well-defined, stable environments where good theories exist about how things work and plenty of data is available. Or, to put it another way, where the past serves as a model for the future. This is not the case in many real-world environments, especially those involving many people.
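
Again as my own illustration of the “stable world” principle (not from the book; all numbers are made up): a model fitted to the past predicts well only as long as the world keeps behaving like the past.

```python
# Toy sketch: a trend fitted to the past works in a stable world
# and fails after a regime shift.
import numpy as np

rng = np.random.default_rng(1)
t_past, t_future = np.arange(0, 50), np.arange(50, 100)
past = 2.0 * t_past + rng.normal(0.0, 2.0, 50)

slope, intercept = np.polyfit(t_past, past, 1)  # least-squares line
pred = slope * t_future + intercept

stable = 2.0 * t_future                                   # world unchanged
shifted = 2.0 * t_future - 0.05 * (t_future - 50.0) ** 2  # regime shift

print("mean abs error, stable world:", np.mean(np.abs(pred - stable)))
print("mean abs error, after shift :", np.mean(np.abs(pred - shifted)))
```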

Humans, on the other hand, have minds that evolved to deal with uncertainty and ambiguity. They can (and must) do so with relatively little data at hand and using little energy. Because AI has difficulty operating in complex, uncertain, and ambiguous environments, if one wants to use it, the environment needs to be adapted to its needs. So, if we ever want truly autonomous cars, we need to change roads and city centres to make these environments stable and predictable for the cars.

Unrealistic expectations of AI, and the unrealistic promises made by those selling it, can also hide possible abuse of AI systems. Gigerenzer discusses, for example, electronic patient record systems, which were intended to improve medical care, make information exchange between health providers easier, and reduce costs. Instead, their implementation led to higher costs and unnecessary treatments because the systems were misused. Another pitfall is believing the claims made about such systems' predictive power. In many cases the results of testing this in real-life situations are sobering, to say the least. One recommendation is therefore to always test them “in the wild”.

The second part deals with the ethical and political implications of these developments. It is more ethically and politically engaged than one is perhaps used to from Gigerenzer, but it is very consistent with his other work.

The first chapter of part 2 deals with transparency. If we are handing over control to algorithms, then it is important that the people affected by their results (e.g., medical treatments, applications for credit or jobs, or criminal judgements) understand what an algorithm does and how it arrives at its results. Transparency means that users can understand, remember, explain, and execute algorithms themselves. This rarely applies, however. Algorithms are often “black boxes”, which makes independent quality control of their functioning impossible. It also disguises possible intended or unintended biases in their functioning. One problem is that algorithms are often treated as proprietary information (although their criteria need not be!), and business interests are given precedence over the interests of users and citizens.

Strangely, there is a persistent belief in complex solutions and algorithms, while simple algorithms (heuristics) often give just as good or better results, with considerably less effort and fewer resources. Instead of complex algorithms and machine learning, we should to a greater degree try to build “psychological AI” (based on human heuristics) for many situations. However, many believe that if ordinary humans can understand what an AI does, it can't be any good. So AI must be complicated and opaque.
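
To make the heuristics point concrete, here is a minimal sketch of my own in the spirit of Gigerenzer's fast-and-frugal “tallying” rule (the data, cue thresholds, and majority cutoff are assumptions for illustration): ignore the weights, just count how many cues point in the positive direction. With little training data, such a unit-weight rule is often competitive with a fitted model, and unlike a black box it is fully transparent.

```python
# Toy sketch of a "tallying" heuristic: unit weights, just count cues.
# Compared with a logistic regression fitted on little data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, k = 500, 5
X = rng.normal(0.0, 1.0, (n, k))
# Invented ground truth: all cues point the same way, unequal weights.
y = (X @ np.array([1.0, 0.8, 0.6, 0.4, 0.2]) + rng.normal(0.0, 1.0, n)) > 0

def tally(X, thresholds):
    # Predict positive if a majority of cues exceed their threshold.
    return (X > thresholds).sum(axis=1) >= 3

thresholds = X[:50].mean(axis=0)  # cue thresholds from the training part
lr = LogisticRegression().fit(X[:50], y[:50])  # sparse data, as in real life

print("tallying accuracy  :", (tally(X[50:], thresholds) == y[50:]).mean())
print("regression accuracy:", lr.score(X[50:], y[50:]))
```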

Related to the subject of transparency are legal agreements (e.g., on data protection) that are ridiculously long and often impossible to read or understand. Many data-collection settings are also rigged to your disadvantage: one click agrees to everything, while disagreeing with some of them requires much reading and clicking. Things like these pose threats to privacy and freedom (and democracy) through greater surveillance and possible abuse of collected data.

Then there is the “privacy paradox”: people complain about decreasing privacy while mindlessly giving away personal information, and they do little to verify what happens with their information or what they are agreeing to. Many people also prove not to be “risk savvy” when it comes to digital matters.

Another chapter deals with how we are made addicted to online tools in a very Skinnerian way. There is also a section on the implications of automation and how it de-skills ordinary people and professionals: for example, we unlearn how to navigate a city without GPS, and pilots have a hard time handling a plane manually (the Air France 447 case is mentioned).

The final chapter includes a sobering look at the sense and usefulness of online marketing. It also deals with fake news (which, by the way, includes the overblown claims about the abilities of AI) and, very usefully, offers suggestions on how to fact-check better (and how to decide which sources to trust) through lateral reading and by giving less weight to appearances. Following these suggestions will increase citizens' risk competence.

 

Gigerenzer, G. (2021). Klick: Wie wir in einer digitalen Welt die Kontrolle behalten und die richtigen Entscheidungen treffen. Gütersloh: C. Bertelsmann Verlag.

In spring 2022 an English version will be published by Penguin as How to Stay Smart in a Smart World.

Watch the presentation of the book on YouTube (in German).

Available, for example, here.