Artificial intelligence (AI) is not new, but a rapidly evolving form known as generative AI is introducing a variety of new risks for technology companies. In the past, AI has been used to analyze large volumes of data, identify patterns, and classify items according to defined parameters. Among other use cases, these capabilities power chatbots and help automate time-consuming, repetitive processes. Generative AI, on the other hand, takes those capabilities further to quickly create new content, from computer code to images to the written word. For all its promise and opportunity, generative AI also presents significant risks and concerns for technology companies that develop or work with it.
The risks that generative AI is introducing do not always fit neatly into existing coverage lines, such as Technology Errors and Omissions Liability (Tech E&O) and Cyber Liability; more often, they overlap both. As generative AI develops, an integrated approach to mitigating and managing these emerging risks becomes more important than ever.
Legacy AI and New Risks
Before generative AI's emergence, legacy artificial intelligence had already found use in many applications, including customer service chatbots, automated data-gathering from forms, pre-programmed decision logic that routes requests to the appropriate contacts, and machine learning algorithms that make decisions by matching patterns. The risks in legacy AI tend to be narrow in scope and confined by the end use - for example, errors in routing or pattern recognition can lead to a poor customer experience and require human intervention. By comparison, generative AI tends to pose more complex risks with wider implications.
One powerful and well-publicized feature of generative AI is its ability to deliver detailed written responses in a matter of seconds. For example, an insurance agent could type a simple prompt into a generative AI tool - "Write an email to a prospective customer seeking Tech E&O coverage" - and quickly receive a suitable draft email requiring minimal edits. This kind of performance, at this scale and available to the general public, is unprecedented. Behind the scenes, the technology is made possible by large language models (LLMs) that are "trained" on an extensive trove of existing text, data, and other internet content. Speed and efficiency aside, trouble can arise when the source material on which a model is trained contains inherent biases or inaccuracies.
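For readers curious about the mechanics, that kind of request is often nothing more than a short prompt sent to an LLM provider's API. The sketch below is purely illustrative, assuming the OpenAI Python SDK; the model name is a placeholder, and any comparable provider's API would look much the same.

```python
# Illustrative only: drafting the email described above via an LLM API.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute your provider's offering
    messages=[
        {
            "role": "user",
            "content": "Write an email to a prospective customer seeking Tech E&O coverage.",
        }
    ],
)

# The draft comes back in seconds, but it still warrants human review
# for accuracy, tone, and compliance before it is sent.
print(response.choices[0].message.content)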
New risks that generative AI can introduce include:
- Inaccuracy. Depending on the data and algorithms used to train the model, generative AI can produce inaccurate results. AI will often respond to inquiries with answers that sound or look plausible but are not factual. In some cases, generative AI has delivered answers that are false, irrelevant, or not even remotely logical - a phenomenon known in the AI field as hallucination. Such output can also give rise to libel, slander, or defamation claims; in one instance, a man brought a defamation suit after a generative AI tool fabricated a legal complaint accusing him of embezzlement.
- Bias. Another emerging risk is the possibility of bias in the results, which again depends on how the AI is trained. Skewed or subjective training data could lead the AI to deliver undesired or inappropriate findings. Depending on the end use of those results, bias could violate state or federal laws and lead to litigation. There is also a concern that AI will simply reinforce biases already present in the source material.
- Copyright Infringement. A significant liability exposure lies in the risk of generative AI infringing on copyrighted text, images, and other media. If the AI is trained on copyrighted works for which permission was not obtained, its output could infringe on the owners' copyrights. For example, generative AI asked to produce music can mimic existing recordings, prompting lawsuits from the recordings' owners. In another example, a copyright infringement lawsuit alleges that a company misused photos to train its AI, scraping millions of images without a license.
- Fraud and Disinformation. A concern with generative AI in the wrong hands is the proliferation of "deepfakes" - artificially generated yet lifelike images, video, or speech replications - that criminals may use to commit fraud, crack passwords, or defame individuals or businesses. AI could also be used to launch disinformation campaigns, creating fake news backed by believable (yet artificial) images and video. The general public would find it difficult to distinguish what is real from what is fake, further fueling public division.
Why Generative AI Calls for Proper Coverage for Tech Firms
The nature of the exposures that generative AI is introducing requires technology companies to adopt a different approach to insurance coverage. Stand-alone Cyber Liability insurance, which has been available for many years, remains an important coverage for technology companies, but by itself it will not respond to some of the other risks of generative AI. Technology E&O Liability insurance is a complementary form of protection, yet it too can come up short. An integrated insurance product that combines both Cyber and Tech E&O is therefore a better solution.
Philadelphia Insurance Companies (PHLY) has extensive experience and expertise in underwriting technology risks through stand-alone, monoline coverage offerings for Cyber Liability insurance as well as Tech E&O insurance. With our new Integrated Technology coverage form in the final stages of filing, our agents and brokers will have a distinctive new Tech E&O/Cyber offering for their technology company customers. This coverage will include:
- Significant limits for select risks
- A broad definition of covered technology products and services
- Network security and privacy liability
- Employee privacy liability
- Media liability
- First-party cyber coverage for lost digital assets and computer bricking, non-physical business income and extra expense, extortion threats, security event costs, and cyber terrorism
To learn why Philadelphia Insurance Companies is an outstanding partner to help tech companies manage the risks of generative AI as well as adjacent exposures through complementary insurance products, talk with your insurance agent or contact PHLY today.
Authors: Evan Fenaroli (VP, Cyber Product Manager) and Jason Keeler (Underwriting Executive, Integrated Technology Product Manager), Philadelphia Insurance Companies