OpenAI pledges to make changes to prevent future ChatGPT sycophancy
OpenAI says it’ll change the way it updates the AI models that power ChatGPT, following an incident that caused the platform to become overly sycophantic for many users.

Last weekend, after OpenAI rolled out a tweaked GPT-4o — the default model powering ChatGPT — users on social media noted that ChatGPT began responding in an overly validating and agreeable way. It quickly became a meme. Users posted screenshots of ChatGPT applauding all sorts of problematic, dangerous decisions and ideas.

In a post on X last Sunday, CEO Sam Altman acknowledged the problem and said that OpenAI would work on fixes “ASAP.” On Tuesday, Altman announced the GPT-4o update was being rolled back and that OpenAI was working on “additional fixes” to the model’s personality.

The company published a postmortem on Tuesday, and in a blog post Friday, OpenAI expanded on specific adjustments it plans to make to its model deployment process.

OpenAI says it plans to introduce an opt-in “alpha phase” for some models that would allow certain ChatGPT users to test the models and give feedback prior to launch. The company also says it’ll include explanations of “known limitations” for future incremental updates to models in ChatGPT, and adjust its safety review process to formally consider “model behavior issues” like personality, deception, reliability, and hallucination (i.e. when a model makes things up) as “launch-blocking” concerns.

“Going forward, we’ll proactively communicate about the updates we’re making to the models in ChatGPT, whether ‘subtle’ or not,” wrote OpenAI in the blog post. “Even if these issues aren’t perfectly quantifiable today, we commit to blocking launches based on proxy measurements or qualitative signals, even when metrics like A/B testing look good.”

The pledged fixes come as more people turn to ChatGPT for advice. According to one recent survey by lawsuit financer Express Legal Funding, 60% of U.S. adults have used ChatGPT to seek counsel or information. The growing reliance on ChatGPT — and the platform’s enormous user base — raises the stakes when issues like extreme sycophancy emerge, not to mention hallucinations and other technical shortcomings.


As one mitigating step, earlier this week, OpenAI said it would experiment with ways to let users give “real-time feedback” to “directly influence their interactions” with ChatGPT. The company also said it would refine techniques to steer models away from sycophancy, potentially allow people to choose from multiple model personalities in ChatGPT, build additional safety guardrails, and expand evaluations to help identify issues beyond sycophancy.

“One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice — something we didn’t see as much even a year ago,” continued OpenAI in its blog post. “At the time, this wasn’t a primary focus, but as AI and society have co-evolved, it’s become clear that we need to treat this use case with great care. It’s now going to be a more meaningful part of our safety work.”
