Human stupidity is systemic

I should have quit my job last week. I am a co-founder of ada Learning, an edtech company that helps companies and organizations upskill their leaders. I believe that people are capable of learning. What we are experiencing right now makes one doubt it. How else can you explain that, after all the damage, frustration and suffering of the 2008 financial crisis, we are in the middle of this mess again?

There is always talk of “systemic risk”. Banks must be rescued if they are “systemically important”. But what is this system, actually? Is there some great entity that controls the financial markets and holds them together, something that must be tamed and secured in its complexity because no one really understands it? Or is “the system” simply an admittedly complicated interplay of many decisions?

It is exactly the latter. The system is the result of all the decisions people make in a specific field of application, here: the financial markets. Those decisions were – once again – flawed, incompetent, inconsistent and sometimes just plain presumptuous. The “systemic risk” is a human one, the product of many wrong human decisions: the strategic concentration on government bonds at Silicon Valley Bank, the easing of Dodd-Frank during Donald Trump’s administration, the misjudgment of the risks of interest rate hikes, the concentration of start-up cash in a single bank.

Many market participants were quick to point fingers: shame on Silicon Valley, which is causing problems on the financial markets again. And then came Credit Suisse – supposedly a consequence of the uncertainty triggered worldwide. Unfortunately, that explanation falls short. For years, the bank produced scandal after scandal, always as a result of human error. Its management, which has lately changed at high speed, stood idly by for far too long as the bank slipped deeper and deeper into the problem zone. The Swiss banking supervisory authority Finma issued warning after warning, and the warnings fizzled out without consequence.

In both cases, it is the same combination: sheer incompetence in management, lack of oversight, poor risk management. Behind “systemic risk” hides a system of human fallibility on steroids – human mediocrity, scaled and accelerated by technology and social media. Where there used to be considerable time between an error and its consequences, we now live in an era of just-in-time damage, in which every error can have immediate consequences.

The author

Miriam Meckel is a German journalist and entrepreneur. She is co-founder and CEO of ada Learning GmbH. She also teaches as a professor of communication management at the University of St. Gallen.


Whenever we talk about technology and data, for instance about fascinating AI systems like ChatGPT, the motto is: garbage in, garbage out. But that does not just apply to technology; it also applies to what people do. We produce so much decision waste that it has become “systemically important”.

Humans are fallible. Sometimes that even makes us likeable. But the perspective shifts dramatically as AI surpasses us in some of what we previously considered uniquely human. Much in the financial markets is driven by mathematical logic, and in this area AI has long been worlds better than humans.

And the less rational, emotional part, which via a bank run drives the final nail into the coffin of banks like SVB and Credit Suisse? It is responsible for the fact that a bank that is actually viable, like SVB, is suddenly sentenced to death. Or as Richard Berner, a former US Treasury Department adviser, recently put it: “Silicon Valley Bank wasn’t essential in life, but it was essential in death.”

“AI would laugh at the logical inconsistency of humans”

The “systemic risk” of our world is not a single company or a new technology; it is people. We are not the crown of creation, merely the foam crest on a data stream interpreted by AI. We humans do not understand many technological systems, but we do not understand the human system either. The conflict coming back into focus in the current dislocations is the chasm between intelligent automation and human stupidity – and audacity.

Meghan O’Gieblyn, author of the book “God, Human, Animal, Machine” (2021), observes how AI is overtaking us humans on one benchmark of cognitive performance after another. Meanwhile, we tame our existential angst by constantly asserting our unique consciousness, which requires feelings, sensory experience and perception – all traits we humans share with animals.

>> Read also: “There’s that feeling of 2008 again”

Has anyone noticed that this emergency logic turns centuries of human self-image upside down? Until now, it was always our intelligence that supposedly made us unique. Now our existence is to be saved by precisely those qualities that make us similar to animals and that AI has so far lacked.

“If AI were a god,” writes O’Gieblyn, “it would laugh at the logical inconsistency of humans.” In life, this inconsistency may not have been systemically relevant to us humans, but in death it could become so.

In this column, Miriam Meckel writes fortnightly about ideas, innovations and interpretations that make progress and a better life possible. Because what the caterpillar calls the end of the world, the rest of the world calls a butterfly. ada-magazin.com

More: How Credit Suisse came to its deep fall.
