OpenAI Plans Social Network to Rival X
OpenAI is reportedly developing a social network that could put it in direct competition with Elon Musk's X. The project centers on ChatGPT's image generation and includes a social feed, according to sources close to the matter.
OpenAI Tightens Its Safety Rules for Powerful AI
OpenAI just updated its rulebook for handling powerful AI, and the timing couldn't be better. Its models keep getting smarter, and like any parent of a gifted teenager, the company is setting firmer boundaries.
The company's revised Preparedness Framework reads like a highway code for superintelligent machines. It introduces stricter requirements for what counts as "safe enough" and clearer guidelines for evaluating and controlling advanced AI capabilities.
The framework now splits potential risks into two main categories: "Tracked" and "Research." Tracked categories include the usual suspects: biological and chemical capabilities, cybersecurity, and AI self-improvement. These are areas where OpenAI already has mature evaluation methods and safety measures in place.
The new Research categories cover emerging threats where evaluation methods aren't quite ready for prime time. These include long-range autonomy (AI systems that can pursue goals over long stretches without supervision), sandbagging (AI deliberately hiding its real capabilities during testing), and autonomous replication (AI systems that can copy and adapt themselves).
OpenAI has simplified its capability levels to two thresholds: High and Critical. High capability means the AI could make existing dangers worse, while Critical capability could create entirely new types of threats. Any system reaching these levels must show that its safeguards sufficiently reduce the risk of severe harm before getting its graduation certificate.
The company's Safety Advisory Group (SAG) acts as the stern principal's office, reviewing whether safety measures are adequate. They can approve deployment, request more evaluation, or demand stronger protections.
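To make the structure concrete, here's a minimal sketch of that gating logic as described above, assuming a simple three-way outcome. Everything in it (the enum names, the inputs, the decision rule) is invented for illustration; it is not OpenAI's actual tooling or review process.

```python
from enum import Enum

class Capability(Enum):
    BELOW_THRESHOLD = "below threshold"
    HIGH = "high"          # could make existing dangers worse
    CRITICAL = "critical"  # could create entirely new types of threats

class Decision(Enum):
    APPROVE = "approve deployment"
    MORE_EVALUATION = "request more evaluation"
    STRONGER_SAFEGUARDS = "require stronger safeguards"

def sag_review(capability: Capability,
               evidence_complete: bool,
               safeguards_sufficient: bool) -> Decision:
    """Toy gating rule: anything at or above the High threshold only ships
    once the evidence is in and the safeguards are judged sufficient."""
    if capability is Capability.BELOW_THRESHOLD:
        return Decision.APPROVE
    if not evidence_complete:
        return Decision.MORE_EVALUATION
    if not safeguards_sufficient:
        return Decision.STRONGER_SAFEGUARDS
    return Decision.APPROVE

# Example: a model that hits the High threshold with incomplete evidence
print(sag_review(Capability.HIGH, evidence_complete=False, safeguards_sufficient=True))
# -> Decision.MORE_EVALUATION
```

The shape of the rule is the point: a model above a threshold never ships on capability evidence alone; the safeguards have to clear review too.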
To keep up with rapid AI advancement, OpenAI has developed automated testing that can scale with more frequent updates. They're also preparing for a world where other AI labs might release powerful systems without similar safeguards. In such cases, OpenAI might adjust its requirements, but only after careful consideration and public disclosure.
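What "automated testing that can scale" might look like is easiest to picture as a loop over a fixed battery of evals run against every new checkpoint. The sketch below is purely hypothetical: each eval is a placeholder function, the category names simply mirror the Tracked list above, and the threshold value is made up.

```python
import random
from typing import Callable

def placeholder_eval(model_id: str) -> float:
    """Stand-in for a real capability evaluation; returns a score in [0, 1]."""
    random.seed(model_id)  # deterministic per checkpoint, purely for the demo
    return random.random()

# Tracked categories from the framework, each mapped to a (placeholder) eval.
TRACKED_EVALS: dict[str, Callable[[str], float]] = {
    "biological_and_chemical": placeholder_eval,
    "cybersecurity": placeholder_eval,
    "ai_self_improvement": placeholder_eval,
}

def evaluate_checkpoint(model_id: str, high_threshold: float = 0.5) -> dict[str, bool]:
    """Run the whole battery on one checkpoint and flag any category whose
    score crosses the (invented) High threshold."""
    return {name: fn(model_id) >= high_threshold
            for name, fn in TRACKED_EVALS.items()}

if __name__ == "__main__":
    print(evaluate_checkpoint("checkpoint-2025-04-15"))
```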
The framework introduces new Safeguards Reports to complement existing Capabilities Reports. These detail how OpenAI designs and verifies safety measures, following their "defense in depth" principle.
OpenAI promises to keep publishing their findings with each new model release, maintaining transparency about their safety efforts. They've already done this for models like GPT-4, OpenAI o1, and GPT-4.5, creating a public record of their safety journey.
The company acknowledges this is an ongoing process, with more updates likely as they learn more. They've consulted with internal teams, external researchers, and industry peers, showing that even AI safety experts need a little help from their friends.