ChatGPT is taking the world by storm. Like so many other organizations focused on digital transformation, we were immediately intrigued and wondered how we could help our carrier partners innovate using ChatGPT and other AI tools like it. Several RGA teams started looking into the possibilities for everything from underwriting to data analytics. In this post, we’ll summarize some of what they’ve discovered so far.
The Promise of Natural Language Models
ChatGPT belongs to a class of AI tools known as natural language models. Sometimes referred to as chatbots, these tools are designed to process human language – written or spoken – and respond in ways that mimic human interaction. If they succeed, the possibilities for improving the customer experience and the insurance industry operating model are practically endless. Here are just a few:
- Automating interactions may help speed up customer service and reduce workloads for overburdened customer service agents.
- Questions can be better analyzed to provide more relevant information for the customer.
- Unstructured text data from claims and policy applications can be analyzed to identify patterns that may indicate fraudulent activity.
- Natural language tools can generate content such as policy summaries, coverage explanations, and other general customer communications.
- Natural language chatbots can enable multilingual customer service by translating customers’ queries and responding in the customers’ preferred language.
With these and other goals in mind, RGA ran several experiments designed to discover whether ChatGPT can live up to its promise. We’ll summarize the findings from a couple of the projects below, but also provide links so that you can read the full articles on the RGA site.
Ask a Chatbot a Simple Question
In this project, RGA executives decided to test ChatGPT’s ability to generate responses to these specific questions:
- What is the future for digital distribution in life insurance?
- How will COVID-19 affect long-term U.S. mortality?
- Jerry’s adoptive parents both died in their mid-40s due to hemophilia. How could this affect Jerry’s long-term health prospects?
The first two questions are relatively high-level, and as our team points out, ChatGPT “performs admirably.” The last question was a bit more complex, and ChatGPT failed to recognize that hemophilia is a genetic condition that an adoptive parent cannot pass on.
Two out of three isn’t bad; however, our researchers foresaw a few other potential limitations in leveraging natural language models for insurance. For instance, these tools are trained on vast amounts of publicly available content and synthesize their responses from it. While they aren’t plagiarizing, existing biases in that content can creep in. Plus, while these tools may soon be able to craft cogent, accurate responses to a customer query, they aren’t producing thought-leading content if they’re only synthesizing what already exists.
Finally, natural language tools have not yet made the leap to more advanced capabilities, such as inductive and deductive reasoning. At present, this may limit their value to little more than an advanced query engine. They also lack the ability to respond with empathy appropriate to a specific context – a prerequisite for effective customer service.
ChatGPT Gets Better, but Red Flags Remain
Limitations aside, automating even basic customer interactions in a way that feels natural would be a huge boon to customer service. RGA’s VP of Data Science, Jeff Heaton, decided to test GPT-3, the model behind the version of ChatGPT used to answer the three questions in the previous article, against the latest version, GPT-4. In this test, both models were given the same 50 questions. GPT-3 answered 38 correctly, while GPT-4 answered 48 correctly.
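For readers who like to see the comparison worked out, here is a minimal sketch of the scoring step in a side-by-side evaluation like the one described above. The grading method Jeff used isn’t detailed in the article, so this only turns the reported counts (50 questions, 38 vs. 48 correct) into accuracy figures; the `accuracy` helper and the `results` dictionary are illustrative, not part of RGA’s test.

```python
def accuracy(num_correct: int, num_questions: int) -> float:
    """Fraction of questions a model answered correctly."""
    return num_correct / num_questions

# Counts reported in the article: the same 50 questions were posed to both models.
results = {"GPT-3": 38, "GPT-4": 48}

for model, correct in results.items():
    print(f"{model}: {correct}/50 correct ({accuracy(correct, 50):.0%})")
# GPT-3: 38/50 correct (76%)
# GPT-4: 48/50 correct (96%)
```

Put in those terms, the jump from 76% to 96% accuracy on the same question set is what makes the generational improvement so striking.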
This is a marked improvement, but Jeff still sees some red flags ahead. One of his concerns is that search engines such as Google and Bing will use natural language models to serve up synthesized answers to queries instead of original source materials. Instead of reading content from different sources and drawing their own conclusions, searchers will essentially be told what to think by the tool. And since responses are synthesized from existing content, the results could be biased, misleading, or incorrect.
While not mentioned in this article, other RGAX leaders have expressed concern that relying on chatbots to synthesize responses could diminish the need for humans to think critically. Not only does this make people more susceptible to misinformation, but it also discourages them from making the connections that are the foundation of effective problem-solving.
50+ Ways AI Can Improve Insurance Operations
In this article, Neil Parkin, Head of Business Development for South Africa, takes a somewhat more optimistic view of the future of AI and insurance. After acknowledging some of the concerns, he lists more than 50 ways he sees AI being leveraged across insurance operations. As one might imagine, coming from the head of business development, Neil begins with sales and marketing tasks but includes benefits to other areas, such as underwriting, claims management, and wellness programs.
Adding More Voices to the Conversation
The insurance business requires connecting to customers on a highly personal level. If it seems we’re being overcautious, it’s because we can’t afford to lose sight of that reality. We’re also keeping an eye on what others are saying about ChatGPT – its promise and its limitations. Here are a few insights we thought were worth sharing:
- The Top 10 Limitations of ChatGPT, Forbes, March 3, 2023
- ChatGPT and the Slow Decay of Critical Thinking, Cybernews, March 5, 2023
- I Interviewed ChatGPT About AI Ethics — And It Lied To Me, Forbes, December 2022
- Companies Tap Tech Behind ChatGPT to Make Customer-Service Chatbots Smarter
Our research into ChatGPT doesn’t end here. We will be following this topic closely for months – probably even years – to come. If you’re researching ChatGPT, we’d love to add your voice to ours. Reach out to us and let us know what you’ve discovered.