Annamarie Giblin, a Partner with Norton Rose Fulbright, presented on the topic of artificial intelligence (“AI”) technology and its related legal concerns, tailored for insurance professionals. A video replay of this presentation is available on the AIRROC On Demand platform.
The proliferation and power of AI are currently the subject of intense debate. This presentation provides an overview of AI, summarizes current laws and regulations both general and specific to insurance, and highlights areas to watch as this powerful tool develops.
Noting that forms of AI have existed for decades, Ms. Giblin offered one definition of AI: the field of computer science dedicated to simulating or creating intelligent behavior or thought in a computer. She cautioned, however, that there is no single agreed-upon definition of AI and that courts are defining it “as they go along.”
She presented a brief history: AI dates to 1950, when Alan Turing devised a test to determine whether a machine was intelligent. The term “artificial intelligence” itself came out of a 1956 conference at Dartmouth College. After decades of slower progress, AI re-emerged into public view in the late 1990s with IBM’s Deep Blue computer defeating chess champion Garry Kasparov in 1997, followed by IBM’s Watson beating two former Jeopardy! champions in 2011, and finally the public release of ChatGPT in 2022.
Two foundational concepts driving AI are machine learning and deep learning: the first is when a computer improves its own performance by learning from data; the second involves training a computer to “think” using a neural network (or artificial neural network) consisting of millions of densely interconnected processing nodes. The key to usable, comprehensive AI is the data from which it draws its information: the more accurate, complete, and interconnected the data source, the more reliable the AI’s response.
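To make these two concepts concrete, the following minimal sketch (in Python, using purely hypothetical toy data) shows the basic “learning” loop: a single artificial neuron, the building block of a neural network, repeatedly adjusts its internal weights so that its predictions better match example data. It is an illustration of the idea only, not any particular system described in the presentation.

```python
# A minimal sketch of "machine learning": a single artificial neuron
# (the building block of a neural network) that improves its predictions
# by adjusting its weights from example data. All data is hypothetical.
import math

# Toy training data: (inputs, label). Hypothetical reading:
# inputs = [prior claims, years insured], label = 1 if high risk.
examples = [
    ([4.0, 1.0], 1),
    ([0.0, 9.0], 0),
    ([3.0, 2.0], 1),
    ([1.0, 7.0], 0),
]

weights, bias = [0.0, 0.0], 0.0
rate = 0.1  # learning rate: how far to adjust after each error

def predict(x):
    # Weighted sum of inputs passed through a sigmoid "activation"
    z = sum(w * v for w, v in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))

for _ in range(1000):  # repeated passes over the data = "training"
    for x, y in examples:
        error = predict(x) - y
        # Nudge each weight to reduce the error (gradient descent)
        weights = [w - rate * error * v for w, v in zip(weights, x)]
        bias -= rate * error

print([round(predict(x), 2) for x, _ in examples])  # approaches [1, 0, 1, 0]
```

Deep learning applies the same loop to many layers of such nodes at once, which is why the quality and completeness of the underlying data matter so much.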
Ms. Giblin outlined AI’s current capabilities: natural language processing, speech recognition, computer vision, decision intelligence, ChatGPT, negotiation, robotic process automation, text-to-image generation, and brain implants. For lawyers, decision intelligence and negotiation tools may prove extremely helpful in the near future for gauging an opponent’s biological responses to particular arguments, indicating how the opponent is likely to react to each.
One problem for lawyers has been the unchecked use of ChatGPT to generate legal arguments. As a predictive technology, AI will fabricate case law to support a position when none exists, even inventing the full text of a supportive judicial opinion. The most incredible, and scariest, functionality is brain implant applications, in which a chip is inserted into a person’s brain and a computer translates their thoughts into text.
Developments in quantum computing will allow AI to impact underwriting and policy coverage. Although incredibly expensive and sensitive to outside environmental factors, these supercomputers will be able to connect with brain tissue, introducing computer functions into the brain, and even enable “quantum stealth,” in which large objects and construction sites can be hidden from view, raising the question of whether hiding defects in this manner would be covered under a standard policy.
General legal issues include how AI is legally defined and who will be liable for its actions and mistakes. Imagine a passenger in a self-driving car arguing that the car, not the passenger, is responsible for an accident because the car was in fact driving. And for underwriting purposes, do you underwrite the particular driver of such a vehicle or the type of vehicle they own? There are also AI bias and data usage issues: if the data an AI system taps into is biased, its responses will be biased as well, and how do you sort that out? Finally, there are legal issues surrounding the “black box” effect: how does an algorithm make the particular decision that ultimately causes liability? If you cannot dissect the algorithm’s decision path, how do you determine where culpability lies?
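The “black box” problem can be illustrated with a short sketch. Below, a hand-written rule-based system can state exactly why it reached a decision and be audited line by line, while a model that decides by arithmetic over learned weights offers only a number. The rules, weights, and applicant data are all hypothetical, for illustration only.

```python
# A minimal sketch of the "black box" contrast, with hypothetical data.

def rule_based_decision(applicant):
    # Transparent: the decision path can be read and audited directly.
    if applicant["claims"] > 2:
        return "deny", "more than 2 prior claims"
    if applicant["years_insured"] < 1:
        return "deny", "insured for less than 1 year"
    return "approve", "passed all underwriting rules"

def network_decision(applicant, weights=(0.9, -0.4), bias=-0.5):
    # Opaque: the decision emerges from arithmetic over learned weights,
    # with no human-readable reason attached to the output.
    score = (weights[0] * applicant["claims"]
             + weights[1] * applicant["years_insured"] + bias)
    return ("deny" if score > 0 else "approve"), f"score={score:.2f}"

applicant = {"claims": 3, "years_insured": 4}
print(rule_based_decision(applicant))  # ('deny', 'more than 2 prior claims')
print(network_decision(applicant))     # ('deny', 'score=0.60')
```

Both models reach the same conclusion, but only the first can explain itself, which is precisely the gap the “black box” liability debate turns on.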
Ms. Giblin outlined the current “tsunami” of new AI regulations, applicable to everything from privacy and cybersecurity, copyright/IP, and software and hardware development to sector-specific functions including healthcare, insurance, and financial matters.
Colorado is currently the first state to enact a law solely dedicated to regulating AI, the Colorado AI Act, enacted in May 2024. The law applies to persons or entities that develop or deploy high-risk AI systems, or that offer or deploy any consumer-facing AI system. Developers and deployers have a duty of care to protect consumers from both intentional and disparate-impact algorithmic discrimination. The act grants consumers rights when an adverse decision is made by a high-risk AI system, including the right to an explanation of the reasons, the right to a correction, and the right to appeal for human review where feasible. Although the act provides no private right of action, the Colorado Attorney General has authority to enforce the law and implement regulations.
As respects insurance, the NAIC adopted a Model Bulletin: Use of Artificial Intelligence Systems by Insurers in December 2023, and it has already been applied in 11 states (four states, California, Colorado, Texas, and New York, have their own insurance-specific regulations). The NAIC model bulletin is primarily focused on personal lines underwriting, but expansion to other lines is being considered. Significantly, the model bulletin makes insurers responsible for vetting their AI data sets; they cannot rely on a third-party vendor to satisfy this obligation. Senior management must be involved in the incorporation of AI systems into insurance operations and must ensure adoption of a sound governance framework addressing the policies, processes, and procedures followed in each phase of the AI system lifecycle.
Looking to the future, AI will raise issues in biometrics, space law, technology liability, export controls, and synthetic and alternative data sets.