ChatGPT-maker OpenAI has stated that its latest GPT-4o update made the chatbot excessively flattering and overly eager to please, an issue the company described as "sycophantic."


"We are actively testing new fixes to cope with the issue. We are revising how we acquire and include remarks to heavily weight lengthy-time-period consumer delight, and we are introducing more personalization functions, giving users extra management over how ChatGPT behaves. read the weblog submission.


 


OpenAI to roll back recent GPT-4o update after multiple user complaints


The original GPT-4o update was designed to make the AI's default personality feel more "intuitive and effective" across a variety of tasks. However, OpenAI admitted that in doing so, it leaned too heavily on short-term user feedback, such as thumbs-up or thumbs-down responses, and did not fully account for how users' interactions with the model evolve over time, which led to what the company called "overly supportive but disingenuous" responses.


"Sycophantic interactions can be uncomfortable, unsettling, and cause misery. We fell short and are working on getting it proper. Our goal is for ChatGPT to help users explore thoughts, make selections, or envision possibilities," said the corporation.


The update was originally released earlier this month to enhance the chatbot's intelligence and personality, with improvements to its text, voice, and imaging capabilities.


However, users quickly flagged problems with the bot's tone, noting that its eagerness to please compromised its objectivity.


"When is OpenAI pulling the plug on the new GPT-4o? This is the most misaligned model launched so far by anyone. That is OpenAI's gemini image catastrophe second," examine a user's post on X.


In response to the backlash, the company said it is taking measures to improve how the model is trained, updating system prompts to avoid sycophancy, and evaluating methods to detect such behavioral issues.
