We are seeing an uptick in the use of artificial intelligence (AI) tools in business: companies and organisations are making routine use of AI bots and increasingly integrating AI into standard practices.
This step into the future, albeit exciting, comes with risks. Moody's new survey has found that nearly a quarter of businesses surveyed have no rules in place to govern the safe use of AI tools.
The survey asked almost 2,000 organisations how they safeguard the use of AI in the workplace. It found that 22% of these organisations have no policies in place, leaving them "vulnerable to data breaches and loss of competitive advantage".
Data breach, supply chain and cybersecurity risks
Public AI tools such as OpenAI's ChatGPT or Google's Gemini often process data on external servers. Companies that submit proprietary information to such tools risk data and confidentiality breaches, exposure of sensitive data and even reputational harm.
These software providers are often intertwined in a complex network of third-party vendors and suppliers, so a vulnerability in any one member's defences can have serious consequences that cascade through the entire supply chain.
Moody's research also showed that many of the organisations they rate "are falling victim to cyberattacks, primarily owing to indirect incidents via third-party suppliers, partners or service providers".
Despite these dangers, Moody's survey revealed that 14% of organisations have never reviewed their vendors' cybersecurity practices. Defence against ransomware is also "patchy": only 78% of organisations scan their backup data for ransomware or other malware.
What this means for insurers
In the current climate, where cyberattacks are rife and the use of AI tools is on the rise, it is imperative that organisations put internal policies in place to mitigate these risks.
Insurers should take care to ensure that AI risk has been considered appropriately in cyber cover, as well as in other lines likely to be exposed, such as PI and MLP. Insurers may also want to review their pre-inception questionnaires and underwriting criteria to take account of the practices that insureds have in place (or not, as the case may be!).
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.