On November 2, 2023, the U.S. National Institute of Standards and Technology (NIST) issued a call for collaboration, inviting organizations to participate in a consortium that will support the development of innovative methods for evaluating artificial intelligence (AI) systems. The announcement comes at a critical juncture: as AI adoption grows rapidly, ensuring the safety and reliability of these systems is imperative.
The AI Safety Consortium
This consortium is a pivotal element of the NIST-led U.S. AI Safety Institute, which U.S. Secretary of Commerce Gina Raimondo announced at the 2023 UK AI Safety Summit. The consortium is also an integral part of NIST's response to the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. That order assigns NIST several responsibilities, including the development of a companion resource to NIST's AI Risk Management Framework (AI RMF). This resource is intended to address foundational challenges such as evaluating AI capabilities, authenticating human-generated content, watermarking AI-generated content, and establishing test environments for AI systems.
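To make one of these topics concrete: "watermarking AI-generated content" generally refers to statistically biasing a model's outputs so that the bias can later be detected with a secret key. The sketch below is purely illustrative and is not a method NIST has specified; the hashing scheme, key, and detection threshold are assumptions loosely modeled on published "green list" watermarking research.

```python
import hashlib
import math

def is_green(prev_token: str, token: str, key: str, green_ratio: float = 0.5) -> bool:
    """Pseudorandomly assign `token` to the 'green list', seeded by a secret
    key and the preceding token, as in green-list watermark schemes."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    # Map the first 4 digest bytes to [0, 1); values below green_ratio are green.
    return int.from_bytes(digest[:4], "big") / 2**32 < green_ratio

def watermark_z_score(tokens: list[str], key: str, green_ratio: float = 0.5) -> float:
    """z-score of the observed green-token count against the binomial null
    (unwatermarked text). Large positive values suggest a watermark."""
    n = len(tokens) - 1  # number of (previous, current) token pairs scored
    hits = sum(is_green(p, t, key) for p, t in zip(tokens, tokens[1:]))
    return (hits - green_ratio * n) / math.sqrt(green_ratio * (1 - green_ratio) * n)

# Hypothetical usage: ordinary text should score near 0; text from a generator
# biased toward green tokens would score well above ~4.
print(watermark_z_score("the quick brown fox jumps over the lazy dog".split(), key="demo-key"))
```

The detection side shown here is the easier half; the generation side (nudging a model toward green tokens at sampling time) and questions of robustness are exactly the kinds of open problems the consortium is meant to study.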
Promoting AI Safety and Reliability
The primary objective of NIST's AI Safety Consortium is to foster close collaboration among government agencies, businesses, and engaged communities. Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and Director of NIST, emphasized the importance of developing ways to test and evaluate AI systems to harness their potential while safeguarding safety and privacy.
A Foundation for Trustworthy AI
The U.S. AI Safety Institute will build on the ongoing work of NIST and other organizations to establish a foundation for trustworthy AI systems. This effort supports the adoption of NIST's AI Risk Management Framework, released in January 2023, a voluntary resource that helps organizations manage the risks of their AI systems and make them more trustworthy and responsible. As outlined in the AI RMF Roadmap, the institute aims to measurably improve organizations' ability to evaluate and validate AI systems.
Collaborative Research for Equitable Safety
The institute's collaborative research will bolster the scientific underpinnings of AI measurement, ensuring that remarkable innovations in artificial intelligence benefit all individuals safely and equitably. Elham Tabassi, the Federal AI Standards Coordinator at NIST and a member of the National AI Research Resource Task Force, is enthusiastic about the potential of this endeavor.
An Invitation to Collaborate
NIST, with its long history of collaboration with both the public and private sectors and its dedication to measurement- and standards-based solutions, is seeking partners from all segments of society to join the consortium. The consortium will serve as a venue for informed dialogue and the exchange of information and knowledge, support collaborative research and development through shared projects, and promote the evaluation of test systems and prototypes to inform future AI measurement efforts.
Open Participation for All Organizations
Participation in the consortium is open to any organization interested in AI safety that can contribute expertise, products, data, or models. Jacob Taylor, NIST's Senior Advisor for Critical and Emerging Technologies, emphasized the importance of involving stakeholders who work at the intersection of technical and applied domains. The goal is for the U.S. AI Safety Institute to remain highly interactive: because the technology is advancing rapidly, the consortium can help the community's approach to safety evolve in step.
Invitation to Collaborate and Participate
In particular, NIST is seeking responses from all organizations with relevant expertise and capabilities to enter into a Consortium Cooperative Research and Development Agreement (CRADA) to support and demonstrate pathways for making AI systems safe and reliable. Members are encouraged to contribute:
- Expertise in one or more specific areas, including AI metrology, responsible AI, AI system design and development, human-AI interaction, socio-technical methodologies, AI explainability and interpretability, and economic analysis.
- Models, data, and/or products to support and demonstrate pathways for making AI systems safe and reliable through the AI RMF.
- Infrastructure support for consortium projects.
- Facility space and management to host consortium researchers, workshops, and conferences.
Deadline for Participation
Interested organizations with relevant technical capabilities should submit a letter of interest by December 2, 2023. Further details about NIST's request for collaborators are available in the Federal Register. NIST also plans to host a workshop on November 17, 2023, for those who want to learn more about the consortium and engage in a dialogue about AI safety.
National and International Collaboration
The U.S. AI Safety Institute will collaborate with other U.S. government agencies to assess AI capabilities, limitations, risks, and impacts, and to coordinate the creation of test environments. The institute will also work with organizations in allied and partner countries to share best practices, align capability assessments, and provide guidance and benchmarks that support responsible AI development worldwide.
This call for collaboration is a significant step toward a future in which AI is safe, secure, and reliable, ensuring that emerging technologies can fully benefit humanity. Join the AI Safety Consortium and be part of this exciting initiative for a better and safer future. #AI #AISafety #Collaboration #EmergingTechnologies #Innovation