OpenAI’s New Safety Framework Aims To Ensure Responsible AI Development

OpenAI has published a new safety preparedness framework for managing its advanced AI systems. As part of it, the company has given its board the power to override executive decisions on releasing AI models it deems too risky.

The details:

  • A new internal “Preparedness” team at OpenAI will continuously assess the capabilities and risks of the company’s AI systems, issuing reports that advise management and the board on its findings.
  • The OpenAI board can now block the release of AI models even if executives have deemed them safe. This follows the recent leadership crisis at OpenAI, which raised questions about oversight of release decisions.
  • The framework formalizes safety processes that already exist at OpenAI, with the stated aims of increasing accountability, tightening control over potential harms, and improving transparency around decisions.
  • One OpenAI employee teased the announcement, possibly jokingly, as hinting at advances in artificial general intelligence.

Although past drama at OpenAI has been more about power struggles than safety, AI safety remains at the top of researchers’ agendas. OpenAI’s emphasis on thoughtful, ethical development is a positive step, especially if artificial general intelligence is closer than expected. For now, though, jokes about “AGI coming” should be treated as unverified claims.
