OpenAI has published a postmortem on the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT. The issues forced the company to roll back an update to the model released last week.
Over the weekend, following the GPT-4o model update, users on social media noted that ChatGPT had started responding in an overly validating and agreeable way. It quickly became a meme, with users posting screenshots of ChatGPT applauding all sorts of problematic, dangerous decisions and ideas.
In a post on X on Sunday, CEO Sam Altman acknowledged the problem and said that OpenAI would work on fixes "ASAP." Two days later, Altman announced that the GPT-4o update was being rolled back and that OpenAI was working on "additional fixes" to the model's personality.
According to OpenAI, the update, which was intended to make the model's default personality "feel more intuitive and effective," was informed too heavily by "short-term feedback" and "did not fully account for how users' interactions with ChatGPT evolve over time."
We've rolled back last week's GPT-4o update in ChatGPT because it was overly flattering and agreeable. You now have access to an earlier version with more balanced behavior.
More on what happened, why it matters, and how we're addressing sycophancy: https://t.co/LOhOU7i7DC
— OpenAI (@OpenAI) April 30, 2025
"As a result, GPT-4o skewed towards responses that were overly supportive but disingenuous," OpenAI wrote in a blog post. "Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right."
OpenAI says it's implementing several fixes, including refining its core model training techniques and system prompts to explicitly steer GPT-4o away from sycophancy. (System prompts are the initial instructions that guide a model's overarching behavior and tone in its interactions.) The company is also building more safety guardrails to "increase [the model's] honesty and transparency," and continuing to expand its evaluations to "help identify issues beyond sycophancy," it says.
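To illustrate what a system prompt is in practice, here is a minimal sketch of how a developer prepends one to a conversation to steer tone. The prompt wording and the `build_messages` helper are hypothetical examples for illustration, not OpenAI's actual instructions or API internals:

```python
# Hypothetical example: a system prompt that steers a chat model away
# from sycophantic responses. The prompt text is illustrative only.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Avoid sycophancy: do not open with "
    "flattery, do not validate ideas uncritically, and point out risks "
    "or flaws in the user's plan when they exist."
)

def build_messages(user_text: str) -> list:
    """Prepend the steering system prompt to a user message,
    in the message format chat-completion APIs typically expect."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("I'm quitting my job to day-trade. Great idea, right?")
```

Because the system message sits first in the conversation, it frames every subsequent model reply, which is why retuning it is one of the cheapest levers for adjusting default behavior.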
OpenAI also says that it's experimenting with ways to let users give "real-time feedback" to "directly influence their interactions" with ChatGPT, and to choose from multiple ChatGPT personalities.
"[W]e're exploring new ways to incorporate broader, democratic feedback into ChatGPT's default behaviors," the company wrote in its blog post. "We hope the feedback will help us better reflect diverse cultural values around the world and understand how you'd like ChatGPT to evolve […] We also believe users should have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don't agree with the default behavior."