Category: Safety and Alignment

Who gets to decide what behavior AI systems should exhibit?

We’re making it clearer how ChatGPT behaves and what we intend to do to make it […]

Planning for AGI and beyond

Our goal is to ensure that artificial intelligence (AI) systems—which are generally smarter than […]

Our approach to AI safety

Our mission depends on ensuring that AI systems are built, deployed, and used safely. OpenAI is […]

Insights from global conversations

We’re sharing the lessons we learned from our conversations in 22 countries, as […]

Advancing AI governance

Through voluntary commitments, OpenAI and other leading labs are reinforcing the safety, security, and trustworthiness of AI. […]

Frontier Model Forum

We’re forming a new industry body to promote the safe and responsible development of frontier AI […]

How OpenAI is preparing for worldwide elections in 2024

We’re working to prevent misuse, provide transparency around AI-generated content, and improve access to accurate […]

Democratic inputs to the AI grant program: lessons learned and implementation plans

We funded 10 teams from around the world to design ideas and tools to collectively govern […]

Reimagining secure infrastructure for advanced AI

OpenAI calls for an evolution in infrastructure security to protect advanced AI. Securing advanced AI systems […]

OpenAI safety update

More than a hundred million users and millions of developers rely on the work of our safety teams. We view safety as something we must invest in and succeed at across multiple time horizons, from aligning today’s models to the far more capable systems we expect in the future. This work has always happened across OpenAI, and our investment in it will only increase over time.