GPT, Other AI Models Can't Decode SEC Filings, New Research Finds
When the large language models did not refuse to answer, they would often hallucinate, producing figures and facts that are not in the SEC filings.
New research conducted by a startup called Patronus AI shows that large language models (LLMs), similar to the one that powers ChatGPT, usually fail to decode Securities and Exchange Commission (SEC) filings.
Even OpenAI's GPT-4-Turbo managed to get only 79 per cent of answers right on Patronus AI's new test, the company's founders told CNBC.
That was the best AI model configuration they tested: the model was able to read nearly an entire filing alongside each question.
When they did not simply refuse to answer, the large language models would often "hallucinate", inventing figures and facts that were not in the SEC filings.
"That type of performance rate is just absolutely unacceptable. It has to be much much higher for it to really work in an automated and production-ready way," Patronus AI co-founder Anand Kannappan said.
Are LLMs really reliable?
Kannappan reposted an X (formerly Twitter) post by DoorDash's Gokul Rajaram noting that "LLMs are nondeterministic". In other words, the same input can produce different answers from one run to the next.
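To see what that means in practice, here is a minimal sketch, assuming the official openai Python client (version 1 or later) and an API key in the environment; the model name and the question are illustrative, not Patronus AI's actual test harness.

```python
# Minimal sketch of LLM nondeterminism (illustrative, not from the article).
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
question = "What was the company's total revenue in fiscal year 2022?"

answers = []
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # illustrative model name
        messages=[{"role": "user", "content": question}],
        temperature=0,  # even at temperature 0, identical outputs are not guaranteed
    )
    answers.append(response.choices[0].message.content)

# A deterministic system would always yield exactly one distinct answer.
print(len(set(answers)), "distinct answer(s) across 3 identical calls")
```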
It is safe to say, then, that companies deploying these models will need extra safeguards to ensure they deliver reliable results.
The latest findings highlight the challenges big companies, especially those in regulated industries such as finance, face as they try to integrate this cutting-edge technology into their operations.
One of the most promising applications for chatbots has been their ability to extract crucial numbers from financial documents and analyse financial narratives.
Notably, SEC filings are teeming with important data, and if a ChatGPT-like bot could flawlessly summarise them or answer queries about what is in them, it could give the user a major advantage in the competitive financial industry.
Earlier this year, Bloomberg LP used the same underlying technology as OpenAI's GPT to develop an AI model for financial data. Likewise, finance professor Alejandro Lopez-Lira showed that ChatGPT might come in handy for predicting stock movements.
Google is also working on a Gemini AI-powered program codenamed "Project Ellmann," which will give users a "bird's-eye" view of their lives. Moreover, McKinsey & Company suggest generative AI will radically overhaul how wealth management firms do business.
Despite the hype surrounding the newfangled technology, GPT's entry into the industry has been pretty rough. When Microsoft launched Bing Chat, built on OpenAI's GPT, one of the use cases it showed off was quickly summarising an earnings press release.
However, hawk-eyed observers soon realised that the numbers in Microsoft's example were off, and some were entirely made up. In other words, Bing AI, since rebranded as Copilot, made multiple factual errors.
How did AI models perform in the tests?
Patronus AI tested four language models: OpenAI's GPT-4 and GPT-4-Turbo, Anthropic's Claude 2 and Meta's Llama 2. The company used a subset of 150 questions it had produced for the test.
The company also tested a slew of configurations and prompts, including an "Oracle" mode in which the OpenAI models were given the exact relevant source text along with the question.
In other tests, the models were told where the underlying SEC documents were stored, or were given "long context", meaning nearly an entire SEC filing was copied into the prompt alongside the question; a rough sketch of the three set-ups follows.
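The article does not include Patronus AI's actual harness, so the function names and prompt wording below are hypothetical illustrations of the three configurations it describes, not the company's code.

```python
# Hypothetical sketch of the three prompt configurations described above.

def closed_book_prompt(question: str) -> str:
    # "Closed book": no source material; the model must answer from memory alone.
    return question

def oracle_prompt(question: str, relevant_excerpt: str) -> str:
    # "Oracle" mode: the exact passage containing the answer is supplied.
    return f"Source text:\n{relevant_excerpt}\n\nQuestion: {question}"

def long_context_prompt(question: str, full_filing: str) -> str:
    # "Long context": nearly the entire SEC filing rides along with the question.
    return f"SEC filing:\n{full_filing}\n\nQuestion: {question}"
```

Whether a model sees nothing, a pinpointed excerpt or the whole filing turns out to matter enormously, as the results below show.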
GPT-4-Turbo
GPT-4-Turbo failed the startup's "closed book" test, in which the model was given no access to any SEC source document. It failed to answer 88 per cent of the 150 questions it was asked and produced a correct answer only 14 times, roughly nine per cent of the total.
However, it performed better when given access to the underlying filings. In Oracle mode, GPT-4-Turbo answered the questions correctly 85 per cent of the time.
Llama 2
Meta's open-source AI model suffered some of the worst hallucinations: even when given access to the underlying documents, it produced wrong answers as much as 70 per cent of the time and correct answers only 19 per cent of the time.
Claude 2
Anthropic's Claude 2 performed well when given long context, with the entire relevant SEC filing included along with the question. It answered 75 per cent of the questions accurately and got 21 per cent wrong.
Despite these shortcomings, Patronus AI's co-founders believe language models like GPT can still help people in the finance industry.
"We definitely think that the results can be pretty promising. Models will continue to get better over time. We're very hopeful that in the long term, a lot of this can be automated," Kannappan said.