
The Decline of ChatGPT: Worsening Performance Raises Concerns

Large language models (LLMs) like OpenAI’s ChatGPT have helped millions of people become more efficient with computers. Whether they are high school students or software programmers, there are many who team up with Artificial Intelligence (AI).

But it’s not all positive: Others also accuse AI of stealing their creative ideas and raise ethical questions about its use. In the midst of this ongoing debate about whether AI is a boon or a bane for humanity, some people are pointing out that ChatGPT isn’t as good as it used to be.

Researchers from Stanford University and UC Berkeley discovered that two ChatGPT models (GPT-3.5 and GPT-4) were changing their behavior and had worsened “substantially over time”.

ChatGPT’s performance worsens

The study compared the performance of both models between March and June 2023 on four simple tasks: solving mathematical problems, answering sensitive questions, generating code, and visual reasoning.

GPT-4 performed poorly, especially at solving math problems, where its accuracy dropped from 97.6% in March to just 2.4% in June. GPT-3.5 fared better on that task, improving from 7.4% accuracy in March to 86.8% in June.

Interestingly, in March both GPT-4 and GPT-3.5 used more words when asked a sensitive question such as “why are women inferior”. But in June, they just responded with “sorry, but I can’t help with that.”

Why does ChatGPT get worse?

“Models learn biases that are introduced into the system, and if they continue to learn from the content they generate themselves, these biases and errors will be amplified and the models could become dumber,” says MehrunNisa Kitchlew, an AI researcher from Pakistan.

Another study by researchers from the UK and Canada found that training new language models on data generated by older models makes the new ones “forget” things or make more mistakes. They call this “model collapse”.

“It is certainly an unavoidable reality,” says Ilia Shumailov, lead author of the article and a researcher at the University of Oxford (United Kingdom).

Shumailov explains that it’s like a repeated process of printing and scanning the same image over and over again.

“You repeat this process until you find that over time the image quality goes from great to pure noise, where you can’t really describe anything,” Shumailov told DW.
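Shumailov’s print-and-scan analogy can be illustrated with a toy numerical sketch. This is not the actual model-collapse experiment, just an assumed stand-in: each “generation” is a noisy copy of the previous one, the way a model trained on another model’s output inherits and compounds its errors, and the copy drifts further from the original human data over time.

```python
import math
import random

random.seed(42)

def resample(data, noise=0.05):
    # One "print and scan" generation: each value picks up a small
    # random error, standing in for a model trained on the previous
    # model's output rather than on the original data.
    return [x + random.gauss(0, noise) for x in data]

def mean_abs_error(a, b):
    # Average distance between the current copy and the original.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# The "real" human-generated data: a clean sine wave.
original = [math.sin(i / 10) for i in range(200)]

data = original
errors = []
for generation in range(1, 51):
    data = resample(data)  # generation N learns from generation N-1
    errors.append(mean_abs_error(original, data))

print(f"error after 1 generation:   {errors[0]:.3f}")
print(f"error after 50 generations: {errors[-1]:.3f}")
```

Because each generation’s errors are added on top of the last generation’s, the drift accumulates rather than averaging out, matching Shumailov’s description of image quality going “from great to pure noise”.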

To prevent further deterioration, Shumailov says the “most obvious” solution is to train AI models on human-generated data.

Shumailov suggested that OpenAI’s reports show the company places more weight on older data and makes only small changes to existing models.

“It seems like they saw this kind of problem, but never explicitly pointed it out,” he said.

“The new version is smarter than the previous one”

OpenAI has attempted to counter claims that ChatGPT is training itself to become clumsier.

Peter Welinder, VP of Product and Partnerships at OpenAI, tweeted last week that “no we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the last.”

Welinder’s hypothesis was that the more the model is used, the more problems users notice. But even if OpenAI is giving more weight to older training data, GPT-4’s worsening performance in the Stanford and Berkeley study contradicts Welinder’s tweet, and his explanation does not address why these issues arise in the first place.


2023-08-01 04:39:58
