Summarizes OpenAI's Charter, the company's principles for building safe and beneficial artificial general intelligence.
Outlines OpenAI's commitment to using its influence over AGI deployment to ensure it benefits all of humanity and avoids harm or undue concentration of power.
Pledges to assist other safety-conscious AGI projects nearing completion, rather than compete, under specific conditions.
Strives for technical leadership in AI capabilities and for cooperation with other research and policy institutions.
Commits to multistakeholder cooperation and to publishing its research, while acknowledging potential safety and security constraints on traditional AI research publishing.
This summary is awaiting validation (peer review by a second AGORA editor).
Key facts
🏛️ This document has been enacted by a private-sector company.
For authoritative text and metadata, visit the official source.
🎯 This document primarily applies to the private sector, rather than the government.
📜 This document's name is OpenAI Charter.
Themes: AI risks, applications, governance strategies, and other themes addressed in AGORA documents.
Thematic tags for this document are awaiting validation (peer review by a second AGORA editor).
This is an unofficial copy. The document has been archived and reformatted in plaintext for AGORA. Footnotes, tables, and similar material may be omitted. For the official text, visit the original source.
OpenAI Charter
Our Charter describes the principles we use to execute on OpenAI’s mission.
This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development.
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:
Establishes that OpenAI is committed to ensuring that artificial general intelligence will benefit all of humanity.
Broadly distributed benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
Establishes that OpenAI is committed to ensuring that AGI benefits all and avoids harm or undue power concentration.
Long-term safety
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Establishes that OpenAI will prioritize aiding other value-aligned, safety-conscious projects over competing with them if those projects are nearing AGI.
Technical leadership
To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.
We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.
Establishes that OpenAI is committed to leading in technical AI capabilities alongside policy and safety advocacy.
Cooperative orientation
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.
We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
Establishes that OpenAI will cooperate with other research and policy institutions and is committed to providing public goods to address AGI's global challenges.