Artificial intelligence will not protect banks from short-sightedness

Didier Sornette holds the Chair of Entrepreneurial Risks in the Department of Management, Technology and Economics at the Swiss Federal Institute of Technology ETH Zurich.

Banks like Credit Suisse use sophisticated models to analyze and predict risk, but too often people ignore or circumvent them, says risk management expert Didier Sornette.

This content was posted on March 28, 2023.

Sara Ibrahim

Writes about the impact of new technologies on society: are we aware of the ongoing revolution and its consequences? Hobby: free thinking. Habit: asking too many questions.

The collapse of Credit Suisse has once again exposed the high-stakes risk culture in the financial sector. The many sophisticated artificial intelligence (AI) tools used by the banking system to predict and manage risk are not enough to keep banks from failing.

According to Didier Sornette, Associate Professor of Entrepreneurial Risk at the Swiss Federal Institute of Technology ETH Zurich, the problem isn’t the tools, it’s the short-sightedness of bank bosses who prioritize profit.

SWI Banks use AI models to predict risk and assess the performance of their investments, but these models couldn’t save Credit Suisse or Silicon Valley Bank from collapse. Why didn’t they act on the predictions? And why didn’t decision makers intervene sooner?

Didier Sornette: I have made so many successful predictions in the past that have been systematically ignored by managers and decision makers. Why? Because it is so much easier to say that the crisis was a “force majeure” and not foreseeable, and to shirk any responsibility.


Responding to predictions means “stopping the dance,” that is, taking painful action. Because of this, policymakers are essentially reactive, always behind the curve. It is political suicide to impose pain in order to confront a problem and solve it before it explodes in your face. This is the fundamental problem of risk control.

Credit Suisse had very weak risk control and a weak risk culture for decades. Instead, it was always left to the business units to decide what to do, and inevitably this built up a portfolio of latent risk – or, I’d say, a lot of put options that were way out of the money [when an option has no intrinsic value]. Then, when a handful of random events occurred that were symptomatic of the fundamental lack of controls, people began to worry. When a major US bank [Silicon Valley Bank] with assets of USD 220 billion (CHF 202 billion) quickly defaulted, people began to reconsider their willingness to leave uninsured deposits with a poorly run bank – and voilà.
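Sornette’s analogy can be made concrete. A put option that is far out of the money is worth nothing today, but pays off sharply if the underlying asset falls far enough, which is exactly the profile of latent risk he describes. A minimal sketch with hypothetical numbers:

```python
def put_intrinsic_value(strike: float, spot: float) -> float:
    """Intrinsic value of a put: worth exercising only if the spot price
    has fallen below the strike; otherwise it is worth nothing today."""
    return max(strike - spot, 0.0)

# A put struck at 80 while the asset trades at 100 is "way out of the money":
print(put_intrinsic_value(80.0, 100.0))  # 0.0, no intrinsic value today

# But if the asset crashes to 60, the same position suddenly has real value:
print(put_intrinsic_value(80.0, 60.0))   # 20.0 per unit
```

The risk looks invisible (value zero) in calm markets, then appears all at once when prices move, which is why a portfolio of such latent exposures can surprise a bank’s management.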

SWI: Does this mean that risk prediction and management do not work if the problem is not solved at a systemic level?

DS: The zero or negative interest rate policy is the root cause of all of this. It led these banks into positions that are vulnerable to rising interest rates. The enormous debts of many countries have also made them vulnerable. We live in a world made very fragile by the short-sighted and irresponsible policies of major central banks, which have failed to consider the long-term consequences of their “firefighting” interventions.

The shock is a systemic one, beginning with Silicon Valley Bank, Signature Bank and others, with Credit Suisse being just one episode that reveals the system’s main problem: the consequences of catastrophic central bank policies since 2008 that have flooded markets with easy money and led to enormous excesses at financial institutions. We are now seeing some of these episodes unfold.


SWI: What role can AI-based risk prediction play in the surviving giant UBS, for example?

DS: AI and mathematical models are irrelevant in the sense that (risk control) tools are only useful if there is a will to use them!

When there is a problem, many people always blame the models, risk methods, etc. That’s wrong. The problems lie with people simply ignoring and bypassing models. There have been so many cases in the last 20 years. The same kind of story keeps repeating itself without anyone learning the lessons. So AI can’t do much because the problem is no longer “intelligence” but greed and short-sightedness.

Despite the obvious financial benefits, this is likely a bad and dangerous deal for UBS. That’s because it takes decades to create the right risk culture, and they are now likely to do tremendous damage to morale through the big downsizing. In addition, no regulator will grant them indemnity for inherited regulatory or anti-money-laundering breaches involving clients of Credit Suisse, which we know had very weak compliance. They will have to struggle with surprising problems for years.

SWI: Could we envision a tighter form of oversight of the banking system by governments – or even taxpayers – using data collected by AI systems?

DS: Collecting data is not the job of AI systems. Collecting clean and relevant data is the most difficult challenge, much more difficult than machine learning and AI techniques. Most data is noisy, incomplete, inconsistent and very expensive to acquire and maintain. This requires huge investments and a long-term view, which is almost always missing. Therefore, crises occur about every five years.


SWI: Recently, we’ve been hearing more and more about behavioral finance. Is there more psychology and irrationality in the financial system than we think?

DS: There’s greed, fear, hope and… sex. Jokes aside, people in banking and finance are generally super rational when it comes to optimizing their goals and getting rich. It’s not irrationality, it’s betting and taking big risks, where profits are privatized and losses are socialized.

Strict regulations need to be put in place. In a way, we need to make “banking boring” to tame the beasts that tend to destabilize the financial system by design.

SWI: Is there a future where machine learning can prevent too big to fail banks like Credit Suisse from collapsing, or is that just science fiction?

DS: Yes, an AI can prevent future failure if the AI takes power and enslaves humans to follow risk management with incentives dictated by the AI, as in many scenarios depicting the dangers of super-intelligent AI. I am not joking.

The conversation was conducted in writing. It has been edited for clarity and brevity.
