OpenAI Faces a Paradoxical Dilemma: Fixing ChatGPT's 'Hallucinations' Could Backfire
SAN FRANCISCO – OpenAI is unlikely to eliminate "hallucinations," the tendency of ChatGPT to generate incorrect or fabricated details, despite widespread criticism, according to a recent report by The Conversation and analysis from Futura-Sciences. The core issue is not a lack of solutions, but rather the counterproductive consequences of implementing them.
Eliminating hallucinations would require considerably more computational power, leading to higher energy consumption and operating costs for OpenAI, particularly given ChatGPT's massive user base. This increased cost would not necessarily translate into user growth, because current benchmarks prioritize correct answers over acknowledging uncertainty. A ChatGPT that frequently admits it doesn't know would likely be outperformed in rankings by models that confidently, but incorrectly, provide responses.
"The benchmarks used to rate and rank the different models do not take uncertainty into account. Again, only the right answers count," explains Wei Xing, as reported by Futura-Sciences.
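To make that scoring argument concrete, here is a minimal Python sketch contrasting a hypothetical accuracy-only leaderboard with an alternative that penalizes confident errors. The scoring rules and numbers are illustrative assumptions, not taken from any actual benchmark.

```python
# Illustrative sketch (hypothetical scoring rules, not any real benchmark's code):
# under accuracy-only grading, an abstention counts the same as a wrong answer,
# so a model that guesses confidently can outrank one that admits uncertainty.

def accuracy_only(answers):
    """Leaderboard-style scoring: 1 point per correct answer, 0 otherwise."""
    return sum(1 for a in answers if a == "correct") / len(answers)

def uncertainty_aware(answers, wrong_penalty=1.0):
    """Alternative scoring that penalizes confident errors and is neutral on abstentions."""
    score = 0.0
    for a in answers:
        if a == "correct":
            score += 1.0
        elif a == "wrong":
            score -= wrong_penalty
        # "abstain" earns 0: no reward, no penalty
    return score / len(answers)

# A cautious model: 70 correct answers, 30 abstentions.
cautious = ["correct"] * 70 + ["abstain"] * 30
# A confident model: answers everything, 75 correct and 25 wrong.
confident = ["correct"] * 75 + ["wrong"] * 25

print(accuracy_only(cautious), accuracy_only(confident))          # 0.70 vs 0.75 -> confident model ranks higher
print(uncertainty_aware(cautious), uncertainty_aware(confident))  # 0.70 vs 0.50 -> cautious model ranks higher
```

Under the accuracy-only rule, the cautious model's abstentions simply look like missed points, which is the dynamic the report describes.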
Furthermore, OpenAI risks alienating users. The public may perceive a chatbot that expresses uncertainty or declines to answer as unreliable, potentially driving users to competitors that offer seemingly more definitive, even if inaccurate, responses.
While advances in energy efficiency could mitigate the increased costs, OpenAI appears to be considering a tiered approach: a specialized, highly accurate version of ChatGPT, priced accordingly, could be developed for professional applications where reliability is paramount. The widely used general-public version, however, would likely continue to "hallucinate" as long as users demonstrate a preference for confident, albeit potentially flawed, answers.