Washington: The United States, the United Kingdom, Australia, and 15 other nations have released international guidelines to safeguard artificial intelligence (AI) models against tampering. The directive urges companies to build security into their models from the outset, as reported by Cointelegraph.
On November 26, 2023, the 18 countries introduced a 20-page document that provides guidance for AI companies on handling cybersecurity while developing or using AI models. It aims to counter the prevailing notion that “security can often be a secondary consideration” in the rapidly advancing AI industry.
“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy,” said Alejandro Mayorkas, US Secretary of Homeland Security.
The guidelines align with other governmental initiatives focused on AI, including the AI Safety Summit held in London in November, where government officials and AI companies gathered to reach a mutual agreement on the development of AI.
Europe is outpacing the United States on AI regulation, with legislators in the region actively shaping rules for artificial intelligence. France, Germany, and Italy recently reached an agreement on AI regulation that supports “mandatory self-regulation through codes of conduct” for foundation models, which are designed to produce a broad range of outputs.
Despite the Biden administration’s push for AI regulation, progress in the polarized U.S. Congress has been minimal, and lawmakers have struggled to pass comprehensive, effective legislation on artificial intelligence.