Transcription Flaws From OpenAI's Whisper Raise Concerns

Whisper's hallucinations spark concerns over accuracy and safety
By Newser.AI
Posted Oct 27, 2024 11:00 AM CDT
Updated Oct 28, 2024 12:30 AM CDT
Assistant professor of information science Allison Koenecke, an author of a recent study that found hallucinations in a speech-to-text transcription tool, sits for a portrait in her office at Cornell University in Ithaca, N.Y., Friday, Feb. 2, 2024.   (AP Photo/Seth Wenig)

OpenAI's transcription tool, Whisper, lauded for approaching "human level robustness and accuracy," appears to have a significant flaw. Experts say the tool sometimes fabricates text, a problem known as hallucination, and the invented passages can include racial comments or imagined medical treatments. The concern is growing as Whisper gains traction in a range of fields, including medical centers that use it to transcribe patient consultations, despite OpenAI's warning against its use in "high-risk domains."

The full scope of Whisper's shortcomings is hard to measure, but the problem appears widespread: a University of Michigan researcher studying public meetings said he found hallucinations in 80% of the audio transcriptions he reviewed, and a developer said he identified hallucinations in nearly every one of the 26,000 transcripts he created. Advocates are calling for government intervention and for OpenAI to address the flaw.

The tool's integration into prominent platforms like Microsoft's cloud services has led to widespread global usage. "This seems solvable if the company is willing to prioritize it," said William Saunders, a former OpenAI employee. "It's problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems." (This story was generated by Newser's AI chatbot. Source: the AP)
