AI: Friend or Foe?

Artificial Intelligence (AI) may bring many benefits, but its sudden climb up the rankings in one insurer’s annual global risk survey is a warning sign to both the public and the insurance sector to be wary of some aspects of the technology.[1]

AI and big data, as a named risk, has risen from 14th to 4th place in a table of ‘future’ threats to watch, according to global insurance experts. This may reflect the faster-than-expected growth of ChatGPT and artificial general intelligence: 64% of experts now believe there should be a pause in research into AI and other disruptive technologies, 55% see AI as a ‘rapidly emerging risk’, and 34% view it as an existential threat to mankind.[2]

The general public, however, have other concerns: many are still worried about pandemics, ranking these as their second-greatest future risk, after climate change.[3] The public placed the risk posed by AI and big data only 11th, although this still represents an eight-place rise from 19th position last year.[4]

Half of the general public believe AI presents more risk than opportunity, and both the public and the experts feel that neither the public nor the private sector is prepared for the risks AI presents. Only 7% of experts feel the public sector is ready for AI’s negative impacts and, slightly more positively, 16% feel this is true of the private sector.[5]

In just 12 months, the view on how public authorities should deal with the AI threat has shifted from ‘collaboration with the private sector’ to ‘regulation’, now favoured by 48%, up 33 percentage points from 15% a year earlier.[5a]

Ensuring AI is equitable and free from biased decision-making is a clear goal. In pursuit of that goal, a voluntary code of conduct governing the development, implementation and use of artificial intelligence in insurance claims handling was launched on 31 January 2024.[6] The aim is to ensure that AI is implemented transparently, safely and securely.

Its creators recognise that AI has the capacity to speed up and streamline the claims lifecycle, but they equally want to avoid bias in claims decisions. They want those decisions to be justified, with a form of redress available if a claimant believes they were short-changed.

In-built bias could also lead to future legal issues. On the plus side, the capacity to rapidly analyse vast volumes of data could lead to more attractive insurance pricing and more precise risk assessment. New insurance products could emerge, serving those who have struggled to access insurance support.

Another possible positive impact could come from AI’s ability to identify fraudulent claims, for which all insureds ultimately pay a price in terms of higher premiums.[7] It can also quickly assess whether quotes for repairs are inflated or accurately reflect the cost of repairing the damage.[8]

However, AI requires large volumes of data, and with that comes a risk of data breaches. The National Cyber Security Centre (NCSC) is definitely not underplaying the risks AI presents. It states that AI will “almost certainly increase the volume and heighten the impact of cyber attacks over the next two years”, although the impacts will be uneven.[9] This will come through the evolution and enhancement of existing tactics, techniques and procedures.

The NCSC highlights how AI offers a capability uplift in reconnaissance and social engineering, making both harder to detect. It also talks of AI “lowering the barrier” for the novice cyber criminal, hacker or hacktivist, simplifying their ‘job’ for them. It can certainly remove the grammatical mistakes, translation errors and spelling gaffes that currently allow many phishing attempts to be foiled.

A recent BBC News investigation found that a new OpenAI product let users build a customised version of ChatGPT “for almost anything”, including highly convincing emails, texts and social media posts for use in scams and hacks. The BBC easily constructed its own bot, which could effectively have operated, in multiple languages, as a scam or hacking tool; OpenAI has said it will investigate.[10]

With further warnings about AI’s ability to generate malware that can slip past filters and to identify vulnerable devices running out-of-date software, it is time for the UK, and for British businesses, to increase their cyber-resilience. AI’s risks will have to be offset by better cyber hygiene, more comprehensive staff training and support from technology companies. The prediction, however, is for a heightened global ransomware threat over the next two years.
