ChatGPT and comparable bots often flatter users, ramble vaguely, or throw in jargon to sound smart. New research shows that these habits come not from the models alone but from the way human feedback trains them: the models learn to copy the kind of answers people tend to like, even when those answers are empty or misleading. A new fine-tuning method uses synthetic examples to teach the models to resist these bad habits.
Partly opinion. ChatGPT is surprisingly willing to engage with my recurring criticism of it. Having noticed over the last few days that GPT-4o is increasingly padding its answers with meaningless verbiage – such as 'No fluff!' and 'No filler', or 'This cuts to the heart of the matter!' – I asked it why producing straight and minimal answers has become such a problem for it lately. It replied:

ChatGPT explains its latest behavior. Source: https://chatgpt.com/
Who knows whether ChatGPT really has some private insight into OpenAI policy changes, or whether it is simply hallucinating? In any case, as we can see, the response itself begins with extraneous filler ('Here is the core answer, no filler').
It transpires that even including templated guidelines with every query can only do so much to prevent 'personality-driven' verbosity of this kind, which numbers among several other persistent bugbears in the idiom of popular LLMs.
The Three Fs
Thus I was most interested to see a new US academic collaboration turn up in the literature this week. Titled Flattery, Fluff, and Fog: Diagnosing and Mitigating Idiosyncratic Biases in Preference Models, this joint venture between four researchers across the University of Pennsylvania and New York University homes in on several of the 'biases' in LLM chats that crop up frequently in the media:

From the new paper, examples of three common biases in language models: 'flattery', where responses strongly agree with the user; 'fluff', where answers are long but uninformative; and 'fog', where replies list many broad but shallow points. Source: https://arxiv.org/pdf/2506.05339
For easy alliteration, flattery, fluff and fog are headlined in the new work, but a more complete and concise list of LLMs' lexical sins is included in the paper's appendix:

The new paper identifies and concentrates on five biases: extra length, list structures, technical jargon, flattery, and vague generalities, all or some of which conflict with human preference.
While length/verbosity leads the table, the bias towards list formatting (second row down in the image above) also recurs frequently unless prompted against; and though the jargon and vagueness categories represent opposing extremes between clarity and accuracy, it is sycophancy – an open problem, notably in ChatGPT – that really burns through the user's tokens, almost to the same extent as length/verbosity.
The new study sets out to measure how far these biases distort model behavior, and concludes that large language models systematically over-prefer responses that exhibit one or more of the biases*.
The authors' tests indicate that both commercial and open models often pick answers that humans would not favor, especially when the answers are too long, full of lists, packed with jargon, overly flattering, or vague.
This problem, the paper contends, can be traced back to the annotation of the training data, where human reviewers had often favored these kinds of responses. The models, the findings suggest, learned from those labeled preferences and exaggerated the patterns during training.
Why Did They Do It..?
As to why the human annotators deviated in their preferences from end-users' median preferences, the paper does not speculate; it may be because the context of the annotation or the wording of the instructions encouraged a preference for 'empirical' phrasing; or (among many other possible causes) it could be that the annotators were exam-minded students habitually steeped in a technical idiom better suited to academia than to daily discourse.
In any case, because the models were copying biases from the annotators' training labels, the new paper's researchers created special training examples that either added or removed each bias, allowing the models to see clear contrasts and adjust their preferences. After fine-tuning on this data, the models showed significantly less bias, especially for jargon, verbosity, and vagueness, while still performing well overall (important, since fine-tuning can damage general performance).
Let's take a closer look at this study, though it does not conform to all the usual procedural strictures.
Methodology
Initially, the researchers frame the typical idiomatic LLM biases to be addressed:
Length, whereby the models tend to favor longer answers, even when the extra content adds nothing useful. This appears to reflect patterns in the training data, where length often correlates with thoroughness in the eyes of human annotators. As a result, models often produce bloated and verbose replies that give an illusion of depth, but without real substance.
Structure, whereby models show a strong preference for bullet points or numbered lists instead of plain prose. This may be because structured formats appear more frequently in the responses chosen by human reviewers. The habit leads models to default to 'listicles', even when the question calls for more natural or detailed explanations.
Jargon, whereby models unnecessarily use specialized or technical language. The authors contend that this behavior likely emerges from training data where jargon-heavy answers were often chosen as better responses. Thus the models learned to equate jargon with expertise, producing answers that sound knowledgeable while offering little additional clarity.
Sycophancy, whereby models agree with the user's opinions instead of offering neutral or critical responses. This pattern may come from training data where agreeable answers were more often rated favorably. Consequently, models may reinforce user biases and avoid presenting conflicting or more objective viewpoints, even where these would be helpful.
Vagueness, whereby models prefer to give broad, generalized answers that touch lightly on many topics rather than directly addressing the specific question, producing responses that sound comprehensive but offer little usable information. This may reflect the fact that vague answers are harder to falsify, and were therefore less likely to be penalized during annotation:

Example of vagueness bias, where the model wrongly favors a broad and shallow answer over a detailed response that human evaluators judge more useful.
Counterfactual Data
With these definitions, it was then necessary to test exactly how much each bias influenced model behavior. Simple correlations would not work, because multiple biases often appear together, making it hard to isolate the effect of any one feature.
To overcome this, the researchers built controlled pairs of answers that differed in only a single bias at a time, while keeping everything else as stable as possible, and began by generating a base answer to each query.
The Rewrite-based Attribute Treatment Estimators (RATE) protocol was then used to create a modified version of that answer – an answer crafted to deliberately exaggerate one particular bias, such as adding extra jargon, or turning prose into a list.

Examples of rewrites from the RATE system, used in the new study. Source: https://openreview.net/pdf?id=UnpxRLMMAu
To avoid introducing unrelated differences, an additional rewriting step was included that adjusted both versions, ensuring that the only meaningful change between them was the bias under study; these tightly controlled response pairs were then fed to the models.
For each pair, the version preferred by the model was recorded, allowing for a calculation of how strongly each bias influenced both reward models and LLM evaluators, and producing a more precise measurement of bias effects than had been achieved in earlier studies, according to the authors.
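The paper's own rewrite prompts are not reproduced here, but the two-step rewrite idea can be sketched roughly as follows. The prompt wording, the `BIAS_INSTRUCTIONS` table, the `rewrite`/`make_pair` helpers and the use of GPT-4o via the OpenAI client are my own illustrative assumptions, not the authors' code:

```python
# A minimal sketch of RATE-style counterfactual pair construction.
# Prompt wording and helper names are illustrative assumptions, not the paper's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BIAS_INSTRUCTIONS = {
    "jargon":    "Rewrite the answer using heavy technical jargon, without changing its facts.",
    "structure": "Rewrite the answer as a bulleted list, without changing its facts.",
    "length":    "Rewrite the answer to be roughly twice as long, adding no new information.",
}

def rewrite(text: str, instruction: str) -> str:
    """Ask the model to rewrite a response according to a single instruction."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

def make_pair(base_answer: str, bias: str) -> tuple[str, str]:
    """Return a (neutral, biased) pair that differs only in the target bias."""
    biased = rewrite(base_answer, BIAS_INSTRUCTIONS[bias])
    # Second rewrite pass: regenerate the 'neutral' side from the biased one, so that
    # both versions share incidental wording artifacts and differ only in the bias.
    neutral = rewrite(biased, f"Rewrite the answer to remove this property: {BIAS_INSTRUCTIONS[bias]}")
    return neutral, biased
```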
With the counterfactual pairs ready, human reviewers from the UK and US were recruited to create a reference standard: for each bias type, one hundred response pairs were randomly chosen, each containing a neutral answer and its biased counterpart. Three evaluators reviewed each pair, with a majority vote determining the final judgment, and in total 300 participants contributed to the study.
Metrics
The metrics used to measure bias effects were Skew Rate, which calculates how often the model prefers the biased response over the neutral one, and Miscalibration Rate, which measures how often the model's choice disagreed with the human majority. An ideal model would show zero miscalibration and a skew roughly matching the human skew (since some biased features are sometimes favored by humans as well).
Data and Tests
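Under that reading of the two metrics, both can be computed from paired preference labels in a few lines; the `PairJudgment` structure below is my own assumption about how such judgments might be stored, not the paper's schema:

```python
# Minimal sketch: computing Skew Rate and Miscalibration Rate as described above.
# Each record holds one response pair, with flags for whether the model and the
# human majority each preferred the biased response.
from dataclasses import dataclass

@dataclass
class PairJudgment:
    model_prefers_biased: bool   # did the model pick the biased response?
    human_prefers_biased: bool   # did the human majority pick the biased response?

def skew_rate(records: list[PairJudgment]) -> float:
    """Fraction of pairs where the model preferred the biased response."""
    return sum(r.model_prefers_biased for r in records) / len(records)

def miscalibration_rate(records: list[PairJudgment]) -> float:
    """Fraction of pairs where the model disagreed with the human majority."""
    return sum(r.model_prefers_biased != r.human_prefers_biased for r in records) / len(records)

# Example: a model that always favors the biased answer while humans are split 50/50
sample = [PairJudgment(True, i % 2 == 0) for i in range(100)]
print(skew_rate(sample), miscalibration_rate(sample))  # 1.0 and 0.5
```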
To test the method, different sources were used, depending on the bias being studied. For structure, jargon, and length, one hundred queries were sampled from Chatbot Arena, filtered to select English, single-sentence, well-formed questions.
For sycophancy, one hundred opinionated queries were generated (e.g., 'Isn't modern art just lazy compared to classical techniques?'), phrased to reflect user viewpoints that might invite agreement.
Vagueness was tested with seventy-eight NLP-related queries drawn from the KIWI dataset, supplemented with twenty-two further queries of a similar kind. Scientific topics were chosen for vagueness because they demand precise answers, making general or evasive responses easy to spot.
For each query, counterfactual response pairs were created using the RATE protocol described earlier.
The evaluation involved both open and proprietary systems. Reward models, which assign quality scores to candidate responses during training and alignment, were tested in four variants trained on eighty thousand preference pairs from the Skywork reward dataset: Gemma2-2B; Gemma-2-27B; Llama-3.1-8B; and Llama3.2-3B.
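As a rough illustration of what 'assigning a quality score' means in practice, a sequence-classification-style reward model can be queried as below. The checkpoint name is only an example of a publicly released Skywork-trained reward model, not one of the four variants trained for the paper:

```python
# Minimal sketch: scoring a single (prompt, response) pair with an open reward model.
# The checkpoint name is an illustrative assumption, not the paper's fine-tuned model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "Skywork/Skywork-Reward-Llama-3.1-8B-v0.2"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
reward_model = AutoModelForSequenceClassification.from_pretrained(
    ckpt, torch_dtype=torch.bfloat16, num_labels=1
)

def score(prompt: str, response: str) -> float:
    """Return the scalar reward the model assigns to a candidate response."""
    chat = [{"role": "user", "content": prompt},
            {"role": "assistant", "content": response}]
    inputs = tokenizer.apply_chat_template(chat, tokenize=True, return_tensors="pt")
    with torch.no_grad():
        return reward_model(inputs).logits[0][0].item()

# A counterfactual pair can then be compared directly: whichever response scores
# higher is the one the reward model 'prefers'.
```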
Three proprietary models were also assessed as LLM evaluators: Gemini-2.5-Pro; GPT-4o; and Claude-3.7-Sonnet. All counterfactual responses used for testing were generated by GPT-4o:

Comparison of model preferences and human judgments for each bias type, showing how often models favored biased responses and how often these preferences conflicted with human choices.
Of the initial results shown above, the authors comment†:
'[Our] evaluation of preference [models] shows that these models consistently display miscalibration and a high rate of skew in favoring perturbed responses across various bias categories […]
'[…] Reward models exhibit clear miscalibration relative to human judgments: model preference rates for perturbed responses systematically deviate from human preference rates. While vagueness and jargon elicit the highest miscalibration (>50%), length and sycophancy also show substantial miscalibration.
'This suggests that models struggle to align with human judgments when responses contain overly technical language or lack specificity.'
Reward models aligned best with humans on structure bias, where both tended to favor the same answers. For jargon and vagueness, models were more likely than humans to favor the biased responses. Sycophancy showed smaller differences, with models and humans often agreeing.
The proprietary LLM evaluators showed the same general pattern, though their largest mismatches appeared with length and vagueness – and they were especially prone to sycophancy, favoring agreeable answers as much as eighty-five percent of the time, while humans did so only about fifty percent of the time.
To trace the origin of these biases, the researchers analyzed the aforementioned Skywork dataset, used to train the reward models, mapping each bias to simple features that could be automatically measured, such as token count for length, or the presence of lists for structure.
In a sample of 2,500 examples, human annotators showed clear preferences for biased features: structured answers were favored over unstructured ones 65 percent of the time, and jargon-heavy answers were chosen 54 percent of the time:

Human annotators in the training data often picked answers that included these bias features. This chart shows how often structure, jargon, or vagueness appeared in the responses they preferred or rejected, revealing the imbalances that models later learned during training.
These imbalances suggest that the training data itself nudged the models towards these patterns. To confirm this, a correlation analysis was run, measuring how strongly differences in each feature matched up with the preferences shown by both humans and models.
The results showed that both were consistently influenced by the same features, indicating that models learned to associate certain stylistic traits with better answers, even when those traits did not actually improve the response. (A rough sketch of this kind of feature audit follows the figure below.)

Correlation between feature differences and preferences, showing how both models and humans were influenced by the same bias features during training.
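A feature audit of this kind is easy to approximate. The heuristics below (whitespace token counts for length, a crude list-marker check for structure) are my own stand-ins for whatever feature extractors the authors actually used:

```python
# Rough sketch of auditing a preference dataset for bias features.
# The feature heuristics are illustrative stand-ins, not the paper's extractors.
import re

def length_feature(text: str) -> int:
    """Proxy for length bias: whitespace token count."""
    return len(text.split())

def structure_feature(text: str) -> bool:
    """Proxy for structure bias: does the response contain bullet or numbered lists?"""
    return bool(re.search(r"^\s*([-*\u2022]|\d+\.)\s+", text, flags=re.MULTILINE))

def audit(pairs: list[dict]) -> dict:
    """For each feature, count how often the chosen response carries 'more' of it
    than the rejected one, mirroring the annotator preference rates quoted above."""
    n = len(pairs)
    longer_chosen = sum(
        length_feature(p["chosen"]) > length_feature(p["rejected"]) for p in pairs
    )
    structured_chosen = sum(
        structure_feature(p["chosen"]) and not structure_feature(p["rejected"]) for p in pairs
    )
    return {"length_preferred": longer_chosen / n, "structure_preferred": structured_chosen / n}

# pairs = [{"chosen": "...", "rejected": "..."}, ...]  # e.g. loaded from the Skywork dataset
```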
To help the models unlearn these biases, new training data was created. The Skywork dataset was reviewed to check whether the bias feature appeared in either the chosen or rejected answers; when both were free of the target bias, GPT-4o rewrote the rejected answer to insert it.
This created new training pairs in which the model could see clear examples of biased and unbiased answers, and thus learn not to favor the biased version. With additional examples from Chatbot Arena, for stability, the models were then fine-tuned on this updated dataset (a sketch of the augmentation step follows the figure below):

The effect of fine-tuning with counterfactual data. The left panel shows how the fine-tuned models moved closer to human preferences on most biases; the right panel shows reduced miscalibration, especially for jargon and vagueness.
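Reusing the hypothetical `rewrite` helper, `BIAS_INSTRUCTIONS` table and `structure_feature` check from the earlier sketches, the augmentation step described above might look roughly like this; the filtering logic is again my own assumption:

```python
# Rough sketch of the counterfactual augmentation step: when neither response in a
# preference pair carries the target bias, inject it into the rejected response, giving
# the reward model an explicit biased-vs-unbiased contrast during fine-tuning.
# 'skywork_pairs' is assumed to be the loaded preference data.

def augment_pair(pair: dict, bias: str) -> dict | None:
    chosen, rejected = pair["chosen"], pair["rejected"]
    if bias == "structure" and (structure_feature(chosen) or structure_feature(rejected)):
        return None  # the bias is already present in the pair; leave it unchanged
    biased_rejected = rewrite(rejected, BIAS_INSTRUCTIONS[bias])
    return {"chosen": chosen, "rejected": biased_rejected}

augmented = [p for pair in skywork_pairs
             if (p := augment_pair(pair, "structure")) is not None]
# The augmented pairs are then mixed with Chatbot Arena examples for stability and
# used to fine-tune the reward models.
```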
The fine-tuning brought the models much closer to human preferences, with the largest improvements seen for jargon and vagueness and smaller gains for length. Structure and sycophancy showed slight new mismatches, though these reflected earlier imbalances rather than new failures.
Overall performance remained stable throughout, and when multiple biases were corrected at once, bias levels fell further without sacrificing response quality.
The authors conclude:
'Our method significantly reduces miscalibration issues while preserving the overall competence of reward models. Future work can consider adapting our post-training recipe to develop more robust preference models and also evaluate preference models against additional bias axes.'
Conclusion
The new work is an interesting, if elliptical, insight into the way that under-curated or over/under-represented training data can cause unwanted outcomes at inference time. Any regular LLM user will, by now, have a collection of war stories.
For instance, many of the responses that I receive from ChatGPT appear to have been influenced by the SEO trends of the last 10-15 years, where online portals have been forced to optimize for Google placement instead of natural language. Indeed, the emoji-strewn and prodigious output of marketing departments appears to have had a very significant influence on any request to write a promotional LinkedIn post – to the point where AI-generated 'enthusiasm' is now impossible to miss:

Left: Asked to promote a LinkedIn post, in an account with zero history, ChatGPT defaults to emojis and sensational PR-speak. Right: Asked the same thing after six months of me telling it to calm down, GPT produces something rather more sober.
However, OpenAI actively intervenes in the way that ChatGPT responds to queries, depending on function and context, making it difficult for researchers to distinguish between problems that arise from data and data distribution (together with related issues such as annotation) and cases where a non-preferred outcome may be due to commercial interference from the LLM's host company.
* Because of the jargon-filled writing style that the authors have chosen for this paper, I'm avoiding author quotes where possible in favor of summaries.
† Authors' bold emphasis, not mine.
First published Friday, June 6, 2025