Even the most advanced AI models stumble when faced with basic physics problems. A new benchmark called PHYBench reveals these supposedly intelligent systems solve physics problems about as well as a struggling high school student.
The research comes from Professor Wei Chen's team at Peking University. The test puts AI through its paces with 500 carefully crafted physics problems. These range from simple mechanics to head-scratching quantum physics puzzles. The results? Not great. Gemini 2.5 Pro, Google's latest AI powerhouse, managed only 37% accuracy. For comparison, human experts hit nearly 62%.
PHYBench doesn't just check if answers are right or wrong. It uses a clever scoring system called Expression Edit Distance (EED) to measure how close AI gets to the correct solution. Think of it as giving partial credit for showing your work. Even here, the gap between human and machine remains stark. Humans scored 70.4 on the EED scale, while Gemini limped in at 49.5.
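The paper's exact scoring rules live in the benchmark itself, but a minimal sketch of the idea might look like the following, using sympy expression trees and a token-level edit distance as a cheap stand-in for a full tree edit distance. The eed_score function and its thresholds are invented here for illustration:

```python
# Minimal sketch of an EED-style partial-credit score. This is NOT
# PHYBench's implementation: it approximates tree edit distance with
# a token-level edit distance over a preorder traversal of the tree.
import sympy as sp

def tree_tokens(expr):
    """Flatten a sympy expression tree into a preorder token list."""
    tokens = [type(expr).__name__ if expr.args else str(expr)]
    for arg in expr.args:
        tokens.extend(tree_tokens(arg))
    return tokens

def edit_distance(a, b):
    """Classic Levenshtein distance between two token sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # delete x
                            curr[j - 1] + 1,          # insert y
                            prev[j - 1] + (x != y)))  # substitute
        prev = curr
    return prev[-1]

def eed_score(answer: str, reference: str) -> float:
    """0-100: identical trees score 100, unrelated ones drop toward 0."""
    a = tree_tokens(sp.simplify(sp.sympify(answer)))
    b = tree_tokens(sp.simplify(sp.sympify(reference)))
    dist = edit_distance(a, b)
    return 100.0 * max(0.0, 1.0 - dist / max(len(a), len(b)))

print(eed_score("m*g*h", "m*g*h"))    # 100.0 -- exact match
print(eed_score("m*g*h/2", "m*g*h"))  # 80.0  -- near miss, partial credit
print(eed_score("x**2", "m*g*h"))     # 0.0   -- unrelated expression
```

The key design point is grading the structure of the expression rather than its surface string, so a near-miss answer earns meaningfully more credit than a wild guess.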
How the Test Works
The problems in PHYBench are purely text-based. No diagrams, no graphs – just words describing physical scenarios. AI must figure out the forces at play and translate them into mathematical expressions. It's like asking someone to picture a game of pool and predict where the balls will go without seeing the table.
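To make that concrete, here is a hypothetical item in the benchmark's style (invented for illustration, not an actual PHYBench problem), along with the kind of symbolic equivalence check that exact grading requires:

```python
# Hypothetical PHYBench-style item (invented for illustration):
# plain text in, exact symbolic expression out.
import sympy as sp

problem = ("A block slides down a frictionless incline at angle theta. "
           "Express its acceleration along the incline in terms of g and theta.")

g, theta = sp.symbols("g theta")
reference = g * sp.sin(theta)

# Suppose the model answers in a different but equivalent form.
model_answer = sp.sympify("g*cos(pi/2 - theta)")

# Grading checks symbolic equivalence, not string equality:
# cos(pi/2 - theta) == sin(theta), so this prints True.
print(sp.simplify(model_answer - reference) == 0)
```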
The benchmark emerged from a rigorous development process. A team of 178 physics students helped refine the problems, while 109 human experts validated the final set. This ensures the test measures real physics understanding, not just pattern matching.
Where AI Falls Short
The results expose two major weaknesses in AI. First, physical perception – the ability to understand how objects interact in the real world. Second, robust reasoning – the capacity to turn that understanding into correct mathematical expressions. AI often identifies the right physics principles but applies them incorrectly, like knowing the rules of chess but making illegal moves.
These shortcomings show up across all physics domains, but some areas prove particularly challenging. Thermodynamics and advanced physics concepts give AI the most trouble. It's as if the models hit a wall when physics gets more abstract.
The findings carry weight beyond physics. They suggest current AI systems, despite their impressive abilities in language and pattern recognition, lack fundamental reasoning capabilities we take for granted in humans. This gap matters for any field requiring precise logical thinking.
Traditional AI benchmarks often use simplified problems with multiple-choice or single-number answers. PHYBench raises the bar by demanding exact symbolic solutions. This approach reveals subtle differences between models that might look equally capable on simpler tests.
A More Efficient Way to Test
The benchmark's scoring system proves remarkably efficient. The EED score can distinguish between AI models using far fewer test problems than traditional right/wrong scoring. This efficiency makes PHYBench a powerful tool for measuring progress in AI reasoning.
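As a back-of-envelope illustration (all numbers invented, not taken from the paper), a quick simulation shows why this holds: for the same gap in mean score, partial credit shrinks the per-item variance, so far fewer items are needed to reliably rank one model above another.

```python
# Illustrative simulation (invented numbers, not from the paper):
# a continuous partial-credit score separates two models with far
# fewer test items than binary right/wrong grading.
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS, MAX_ITEMS = 2000, 400

def items_to_separate(sample_a, sample_b):
    """Smallest test size at which model A's mean score beats
    model B's in at least 95% of simulated test sets."""
    for n in range(5, MAX_ITEMS, 5):
        wins = sum(sample_a(n).mean() > sample_b(n).mean()
                   for _ in range(N_TRIALS))
        if wins / N_TRIALS >= 0.95:
            return n
    return None

# Binary grading: model A solves 40% of items, model B 30%.
n_binary = items_to_separate(
    lambda n: rng.binomial(1, 0.40, n),
    lambda n: rng.binomial(1, 0.30, n))

# Partial-credit grading: the same 0.10 gap in mean score, but each
# item's score is continuous with a small spread instead of 0-or-1.
n_partial = items_to_separate(
    lambda n: rng.normal(0.50, 0.15, n).clip(0, 1),
    lambda n: rng.normal(0.40, 0.15, n).clip(0, 1))

print(f"binary grading needs ~{n_binary} items")   # roughly 125
print(f"partial credit needs ~{n_partial} items")  # roughly 15
```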
The Road Ahead
Looking ahead, PHYBench sets clear goals for AI development. Future models need better ways to represent physical concepts internally. They must learn to derive relationships from first principles rather than memorizing patterns from training data.
Why this matters:
- The gap between AI and human physics understanding remains massive, suggesting current AI systems lack true reasoning capabilities.
- This benchmark gives us a clear way to measure progress in AI's ability to understand the physical world – a crucial step toward more capable and reliable systems.