The big opportunity
Harnessing big data and the power of artificial intelligence (AI) could be a match made in heaven for the insurance industry. At the heart of insurance, and underwriting in particular, is the analysis of large amounts of data to make informed decisions about the likelihood of outcomes based on prior knowledge – a natural task for AI.
Of course, there are other uses for AI, such as improving the customer experience during the term of a policy – for example, chatbots for personalized offers and faster claims processing. For products like health insurance, using AI to analyze medical claims has the potential to reduce health insurance fraud. Partnerships between insurers and technology companies that use data from wearable devices are expected to result in fewer claims payouts and more attractive premiums for customers. In the world of auto insurance, AI continues to give consumers access to products that better suit them, with prices and updates appropriate to their individual risk factors.
Insurtechs are rapidly unlocking this potential for the insurance industry, but it comes with challenges and increasing regulatory focus.
Challenges of AI for the insurance industry
But AI brings its own challenges to the industry. First of all, there are significant concerns surrounding data use and privacy. AI systems rely on large amounts of data. The underwriting process itself relies on collecting and analyzing data to create personalized policies and to eliminate repetitive tasks and unnecessary delays. Both data sourced from the prospective policyholder and broader big data sets enable companies to carry out targeted risk analysis. The sheer volume of often sensitive personal data required to maximize the benefits of AI requires organizations to protect that material or obtain the necessary consent for its use.
Failure to do so can result in heavy fines for the insurer under the GDPR, the EU's data protection framework. However, this regulation was drafted before the rise of AI and therefore does not adequately cover the ethical challenges posed by AI's rapid growth. From a consumer perspective, this often sensitive information is vulnerable to cybercriminals, so managing this risk requires constant action from the insurer and all third-party providers.
The lack of regulation, the unavailability of trusted data and public perceptions of risk are the top barriers to the widespread adoption and insurability of new and evolving technologies in the insurance industry, according to an International Underwriting Association (IUA) survey conducted earlier this year and released in March.
There is hope in the form of international standard-setting. The EU Artificial Intelligence Act (EU AI Act) is expected to come into force in 2024. It will be the first international regulation for AI, but it will not cover all firms everywhere and is limited in its application. The scope introduced by the proposal, the tools and the governance framework are still being discussed and refined by the European co-legislators. The hope is that the EU AI Act will become a global standard. Indeed, UK regulators have recognized the need to avoid regulatory fragmentation and, where possible, to harmonize regulation both domestically and internationally.
As recognized by the Bank of England (BoE), governance is critical to the safe adoption of AI in financial services. It ensures accountability and implements the rules, controls and policies for a company's use of AI. Good governance can ensure effective risk management and help address many of the data and model-related issues discussed in previous chapters. Poor governance, on the other hand, can compound these challenges and pose risks for consumers, businesses and the financial system.
AI also raises complex ethical issues. The analysis of claims, and the speed at which claims can be processed, continue to appeal to businesses and consumers alike. But discrimination is not always obvious, and when AI is used in the underwriting process, there is a significant risk that the insurer violates anti-discrimination laws when the AI begins adjusting premiums based on gender information.
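One practical guardrail is a routine statistical check on quoted premiums before policies go out. A minimal sketch in Python follows; the function name, data shape and the 5% threshold are illustrative assumptions, not drawn from any regulation or actual insurer practice:

```python
# Hypothetical illustration: a simple disparate-impact check on quoted premiums.
# The threshold and data format are assumptions for demonstration only.

def premium_disparity(quotes):
    """quotes: list of (group, premium) pairs, where group is a
    protected attribute such as gender.
    Returns the ratio of the highest to the lowest group-average premium;
    1.0 means the group averages are identical."""
    by_group = {}
    for group, premium in quotes:
        by_group.setdefault(group, []).append(premium)
    means = [sum(ps) / len(ps) for ps in by_group.values()]
    return max(means) / min(means)

quotes = [("F", 310.0), ("F", 295.0), ("M", 300.0), ("M", 305.0)]
ratio = premium_disparity(quotes)
# Flag the pricing model for human review if group averages
# diverge by more than an assumed 5% tolerance.
flagged = ratio > 1.05
```

A check like this does not prove the model is fair (proxy variables can encode the same bias indirectly), but it gives compliance teams a concrete, auditable signal to act on.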
Regulatory focus
Regulators undoubtedly recognize the value of AI and the focus is on ensuring that the financial services sector is able to harness the value of AI for the benefit of society at large.
The pandemic has accelerated the overall pace of AI adoption, but the gap between insurtechs' rapid advances and the existing regulatory framework means that clear ethical rules to protect consumers are high on the agenda. UK regulators are investigating the issue and assessing the need for action, but have yet to implement specific guidance on the use of AI and big data.
Together, the Bank of England and the FCA established the AI Public-Private Forum (AIPPF) in October 2020 to encourage dialogue between the public sector, private sector and academia on AI. Earlier this year, the AIPPF published its final report on the various barriers to adoption, challenges and risks associated with the use of AI in financial services. This was followed by the publication of a Bank of England Discussion Paper which looks at the current regulatory framework and examines how key existing sectoral legal requirements and guidance for UK financial services apply to AI.
The discussion will span a range of topics, with a focus on how policy can mitigate AI risks while fostering beneficial innovation. Do technical and even global standards matter? If so, which ones?
These are all issues the industry continues to work on collectively to create a framework that encourages innovation while protecting consumers.