Department of Defense Releases AI Toolkit

November 14, 2023

United States | U.S. Executive Branch

Summary

On November 14, 2023, the Department of Defense's (DoD) Chief Digital and Artificial Intelligence Office (CDAO) announced its Responsible Artificial Intelligence (RAI) Toolkit, which implements the DoD's AI Ethical Principles. The Toolkit is part of the DoD's RAI Strategy & Implementation Pathway, announced last year, and was created by the RAI Division, the DoD Responsible AI Working Council and experts from academia, industry and government. The RAI Toolkit provides assessments, tools and examples to help align AI projects with the DoD's best practices for RAI and its AI Ethical Principles. The core of the Toolkit is the SHIELD assessment, consisting of six sequential steps:

• Set foundations
• Hone operationalizations
• Improve and innovate
• Evaluate status
• Log for traceability
• Detect via continuous monitoring

