In a landmark move, AI startups OpenAI and Anthropic have entered into agreements with the U.S. government to collaborate on the research, testing, and evaluation of their artificial intelligence models.
HIGHLIGHTS
- The U.S. Artificial Intelligence Safety Institute, part of the National Institute of Standards and Technology (NIST), announced the deals on Thursday, marking the first such partnerships in the rapidly evolving AI landscape.
- Legislators in California are set to vote on a bill that could introduce broad regulations on AI development and deployment, reflecting growing concerns over the safe and ethical use of AI.
- Jason Kwon, Chief Strategy Officer at OpenAI, highlighted the role of the institute in establishing U.S. leadership in AI. He expressed hope that their joint efforts would serve as a global framework for responsible AI development.
- Jack Clark, Co-Founder and Head of Policy at Anthropic, emphasized the critical nature of this collaboration, noting that safe and trustworthy AI is essential for the technology’s positive impact.
- Clark expressed confidence that working with the U.S. AI Safety Institute would enable comprehensive testing of their models, backed by expertise from the agency.
- Elizabeth Kelly, Director of the U.S. AI Safety Institute, described the agreements as an important milestone in guiding the future of AI responsibly. She noted that these partnerships are just the beginning, with more collaborative research planned to assess the capabilities and risks of AI models.
- The U.S. AI Safety Institute, which was established last year under an executive order by President Joe Biden, plays a key role in assessing both known and emerging risks of AI models.
- The institute will also collaborate with its U.K. counterpart, providing feedback to the companies involved on potential safety improvements.
- Under the terms of the agreements, the U.S. AI Safety Institute will gain access to significant new AI models from both OpenAI and Anthropic, both before and after their public release. This access will enable collaborative research aimed at evaluating the models’ capabilities and identifying potential risks associated with their deployment.