Limited usability: Why Anthropic's Claude 3.7 disappoints

Anthropic's Claude 3.7 launched with fanfare but quickly hit turbulence. Users report persistent technical issues hampering daily use.
Extended thinking mode, touted as a breakthrough feature, creates more problems than it solves for many users. The feature overcomplicates simple tasks while burning through valuable time.
Despite Anthropic's efforts to improve accuracy, Claude 3.7 still exhibits a creative flair for fictitious references. The official system card acknowledges that 0.31% of outputs (roughly one in every 320) contain knowingly hallucinated information. In some cases, the model fabricates plausible statistics and sources, even compiling an extensive "Works Cited" section filled with non-existent references.
Users have echoed these concerns. One Reddit user, relying on AI for writing support in the humanities, noted that Claude habitually invents scholarly references with fake authors and publications. Even with explicit instructions to use only real sources, the model's "fixes" often reduce but do not eliminate fabricated citations.
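Until the model stops inventing sources outright, the practical defense is to verify every reference yourself. Below is a minimal sketch of one way to automate a first pass, assuming Python with the requests library and the public Crossref REST API; the find_suspect_citations helper, the example citation, and the score cutoff are illustrative assumptions, not part of any official tooling.

```python
import requests

CROSSREF_URL = "https://api.crossref.org/works"

def find_suspect_citations(citations, min_score=60.0):
    """Return citations Crossref cannot match with reasonable confidence."""
    suspect = []
    for cite in citations:
        resp = requests.get(
            CROSSREF_URL,
            params={"query.bibliographic": cite, "rows": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()["message"]["items"]
        # Crossref attaches a relevance score to each match; no hit, or a
        # weak top hit, suggests the reference may be fabricated. The cutoff
        # here is arbitrary and needs tuning; flagged entries still require
        # human review.
        if not items or items[0].get("score", 0.0) < min_score:
            suspect.append(cite)
    return suspect

if __name__ == "__main__":
    works_cited = [
        "Smith, J. (2021). Imagined Histories. Journal of Fictive Studies 12(3).",
    ]
    for cite in find_suspect_citations(works_cited):
        print("Could not verify:", cite)
```

A script like this only narrows the search: fuzzy matching means some fabricated citations will still score well against real papers with similar titles, so a clean pass is no substitute for checking the sources themselves.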
"I was checking for the update obsessively," admits one power user who codes with Claude 25 hours weekly. "Now I've switched back to the previous version after wasting days on failed projects."
Technical glitches plague the rollout. Users face connection errors, high server loads, and broken export tools. Anthropic's own status page confirmed elevated error rates shortly after launch.
Why this matters:
- A stumbling rollout tarnishes Anthropic's reputation for cautious, safety-focused development
- Citation hallucinations undermine trust in AI-generated content for academic and professional use
- The gap widens between AI marketing promises and real-world usefulness