Jen Easterly, the Director of the U.S. Cybersecurity and Infrastructure Security Agency, has urged that safeguards be built into artificial intelligence (AI) systems to counter threats posed by the technology's rapid development. Easterly emphasized that security measures must be incorporated into AI systems from the outset rather than bolted on through post-release patching.
In a telephone interview following discussions in Ottawa with Sami Khoury, head of the Canadian Centre for Cyber Security, Easterly expressed concern about the fast pace of AI development, stating, “It is too powerful, it is moving too fast.” She stressed the importance of breaking with the industry norm of releasing technology products with vulnerabilities that consumers are later expected to patch.
The conversation aligns with the recent endorsement by agencies from 18 countries, including the United States, of new guidelines on AI cybersecurity developed in Britain. The guidelines focus on secure design, development, deployment, and maintenance of AI systems, emphasizing a comprehensive approach to security throughout the lifecycle of AI capabilities.
Easterly and Khoury stressed the need for a proactive stance in addressing AI vulnerabilities. Earlier this month, leading AI developers committed to collaborating with governments to conduct testing on new AI models before their release, aiming to manage the risks associated with the rapid evolution of AI technology.
“I think we have done as much as we possibly could do at this point in time, to help come together with nations around the world, with technology companies, to set out from a technical perspective how to build these capabilities as securely and safely as possible,” Easterly stated. The call for global collaboration and proactive security measures reflects growing awareness that responsible AI development is essential to mitigating risks and ensuring the safe deployment of advanced technologies.