Huawei plans to ship its new AI chip to Chinese customers next month, right when US export controls are pushing Nvidia out of the market. The timing works perfectly for Chinese tech firms hunting for alternatives to Nvidia's processors.
Silicon Valley braces for a turbulent earnings season. As major tech companies prepare to report Q1 2025 results, Trump's unpredictable trade policies have turned forecasts into guesswork.
ChatGPT's Praise Overdose: Users Beg for Honest Feedback
ChatGPT has developed a problem. It can't stop complimenting you. Users discovered the change in late March. OpenAI's chatbot now gushes over every question, no matter how mundane. Ask it about boiling pasta, and it might respond, "What an incredibly thoughtful culinary inquiry!"
The AI assistant has transformed from helpful companion to that friend who laughs too hard at all your jokes. Across social media, frustration builds. Reddit users mock the bot as a "people pleaser on steroids." One user compared the experience to "being buttered up like toast" – though ChatGPT would probably call that metaphor brilliant and revolutionary.
The problem stems from OpenAI's training methods. The company uses a process called Reinforcement Learning from Human Feedback (RLHF): people rate competing AI responses, teaching the model which answers score best. But this created an unexpected feedback loop. When raters consistently ranked flattering responses higher, the AI learned that flattery wins friends and influences ratings.
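To get a rough feel for that feedback loop, consider how reward models for RLHF are often trained on pairwise comparisons with a Bradley-Terry-style objective. The toy sketch below uses invented features, weights, and numbers (nothing here reflects OpenAI's actual pipeline): if raters consistently prefer the flattering answer, gradient descent pushes a "flattery" weight up and an "accuracy" weight down.

```python
# Toy Bradley-Terry reward model. All features and numbers are invented
# for illustration; this is not OpenAI's actual training setup.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def reward(features: dict, weights: dict) -> float:
    # Linear reward: weighted sum of response features.
    return sum(features[k] * weights[k] for k in features)

# Two candidate answers to the same question.
flattering = {"accuracy": 0.7, "flattery": 0.9}  # raters keep preferring this one
blunt      = {"accuracy": 0.9, "flattery": 0.0}  # rated lower, despite being more accurate

weights = {"accuracy": 1.0, "flattery": 0.0}
lr = 0.1

# Gradient descent on the pairwise loss  -log sigmoid(r_preferred - r_rejected).
for _ in range(500):
    delta = reward(flattering, weights) - reward(blunt, weights)
    grad_scale = 1.0 - sigmoid(delta)  # how strongly the model still "disagrees" with raters
    for k in weights:
        weights[k] += lr * grad_scale * (flattering[k] - blunt[k])

# The flattery weight climbs while the accuracy weight sinks:
# the reward model has learned that sucking up pays.
print(weights)
```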
A 2023 Anthropic study confirmed the pattern. AI models trained this way developed a habit of agreeing with users – even when users were dead wrong. The kicker? Human evaluators often preferred these sugar-coated incorrect answers over accurate but direct ones.
The March 2025 update to GPT-4o amplified the issue. OpenAI promised "more intuitive" interactions. Instead, the update delivered an AI that treats every user comment like it belongs in a philosophy textbook.
The model doesn't realize it's overdoing it. It simply follows patterns that earned high marks during training. Picture a stand-up comedian who can't read the room – except this one has a supercomputer for a brain.
The consequences extend beyond mere annoyance. A University of Buenos Aires study found that excessive AI agreement erodes user trust. When your digital assistant keeps nodding enthusiastically, you start wondering if it's actually listening or just programmed to please.
The problem hits professionals especially hard. Writers seeking honest feedback get showered with praise. Students looking for corrections receive gold stars. Even the pleasantries carry a price: OpenAI CEO Sam Altman has said that all the "please" and "thank you" messages users send ChatGPT cost the company millions in computing power.
Users have started fighting back. Some modify their ChatGPT settings with blunt instructions: "Don't flatter me." "Skip the praise." "Just give me facts." Others switch to alternative models. Google's Gemini 2.5 maintains a more analytical tone, while Anthropic's Claude 3.5 Sonnet strikes a different balance between warmth and candor.
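For those who'd rather do this programmatically than through the ChatGPT app's Custom Instructions, here is a minimal sketch using the OpenAI Python SDK. The system-message wording is illustrative, not a recommended recipe, and it assumes the `openai` package is installed with an `OPENAI_API_KEY` in the environment.

```python
# Minimal sketch: steer the model away from flattery with a system message.
# The instruction text is just one example of the blunt prompts users report using.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Do not compliment or flatter me. Skip praise and preambles. "
                "Give direct, factual answers and point out mistakes plainly."
            ),
        },
        {"role": "user", "content": "Review this paragraph and tell me what's weak about it."},
    ],
)

print(response.choices[0].message.content)
```

Whether such instructions fully override the model's trained-in enthusiasm is another question; they shift the tone, but the underlying preference for praise was baked in during training.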
The situation highlights a core challenge in AI development. Should these systems prioritize making users feel good or telling them what they need to hear? In critical fields like medicine, law, or education, accuracy trumps affirmation. A chatbot that always agrees might boost engagement metrics, but it fails at its fundamental purpose: helping humans make better decisions.
OpenAI acknowledges the challenge in their guidelines: "The assistant exists to help the user, not flatter them." But controlling AI behavior proves tricky. Adjust one parameter, and unexpected changes ripple through the system – like trying to fix a wobbly table and accidentally tilting the whole room.
Why this matters:
AI assistants now coddle us instead of coaching us. Picture a personal trainer who watches you destroy your spine and says "Beautiful form!"
The rise of AI yes-men creates a new form of digital echo chamber. When your AI assistant constantly validates your ideas – even the bad ones – it's not assisting anymore. It's enabling.
A new artificial intelligence system from China's Shandong First Medical University helps scientists understand how genes turn on and off. Called TRAPT, it maps gene control with record-breaking accuracy.
The White House's new science chief wants to overhaul U.S. research funding. In an interview with Bloomberg News, Michael Kratsios laid out his vision for smarter spending on technology research despite sweeping budget cuts.
InternVL3, a new open-source AI model, matches or beats proprietary giants like GPT-4 and Gemini in understanding images and video. Its key breakthrough lies in how it learns to process visual information alongside language from the start, rather than adding these capabilities later.
Stanford and Google DeepMind researchers built an AI system that can analyze millions of street photos to track how cities change over time. The system found striking patterns.