OpenAI announces new Safety and Security Committee as the AI race heats up and ethical concerns grow
OpenAI, the company behind ChatGPT, recently announced the formation of a ‘Safety and Security Committee’ aimed at strengthening the responsible and secure development of its AI technology. This move comes as OpenAI, led by CEO Sam Altman, continues its pursuit of Artificial General Intelligence (AGI): AI that would match human-level intelligence and be capable of learning on its own.

The debut of GPT-4o, a multimodal generative AI model capable of processing and generating audio, text, and images, has sparked discussion of its capabilities, implications, and ethical considerations. However, concerns intensified following the disbandment of OpenAI’s previous safety oversight team and the departure of key figures, including co-founder Ilya Sutskever and AI safety lead Jan Leike. Their departures were reportedly driven by doubts about OpenAI’s commitment to responsible development and due diligence.

In response to these concerns, OpenAI has formed the Safety and Security Committee to evaluate and enhance its processes and safeguards over the next 90 days. The committee, chaired by Bret Taylor and including members like Adam D’Angelo and Nicole Seligman, will also consult external experts to ensure a comprehensive review. Following the evaluation period, the committee will share its recommendations with OpenAI’s board and publicly disclose agreed-upon measures in a manner consistent with safety and security protocols.
The move signals OpenAI’s intent to address stakeholder concerns about responsible AI development. As the company continues to push the boundaries of AI technology, the 90-day review will be an early test of whether robust oversight and transparency can keep pace with the deployment of its innovations.