Appendix B. Mapping Best Practices in AI Governance to the CIPL Accountability Wheel

Throughout the roundtables, organizations offered some of the practices they use to ensure responsible and accountable deployment of AI. The table below surveys those practices, pairing each accountability element with its related practices.
Leadership and Oversight
• Public commitment and tone from the top to respect ethics, values and specific principles in AI development
• Institutionalized AI processes and decision-making
• Internal Code of Ethics rules
• AI/ethics/oversight boards, councils, committees (internal and external)
• Appointing a board member for AI oversight
• Appointing a responsible AI lead/officer
• Privacy/AI engineers and champions
• Setting up an internal interdisciplinary board (lawyers, technical teams, research, business units)
• Appointing privacy stewards to coordinate others
• Engaging with regulators in regulatory sandboxes
Risk Assessment
• Understand the AI purpose and use case in business and processes—for decision-making, as an input into decisions, or otherwise
• Understand the impact on individuals
• Understand and articulate the benefits of the proposed AI application and the associated risks
• Fairness assessment tools
• Algorithmic Impact Assessment
• Ethics Impact Assessment
• Broader Human Rights Impact Assessment
• DPIA for high-risk processing
• Consider anonymization techniques
• Document tradeoffs (e.g., accuracy vs. data minimization, security vs. transparency, impact on a few vs. benefit to society)
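The fairness assessment tools mentioned above can range from commercial toolkits to simple internal scripts. As a purely illustrative sketch (not drawn from the report, and with hypothetical names), one of the most basic fairness checks computes the difference in positive-outcome rates between demographic groups ("demographic parity difference"):

```python
# Illustrative sketch: demographic parity difference as a minimal
# fairness assessment. All names here are hypothetical examples.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-outcome rates across groups.

    predictions: list of 0/1 model outcomes
    groups: list of group labels (e.g., "A"/"B"), same length
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for y, g in zip(predictions, groups):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: group A receives positive outcomes 3/4 of the time, group B 1/4.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.5
```

A value near 0 indicates similar outcome rates across groups; larger values flag a disparity for human review. Real assessments would also consider error-rate balance and context, not this single metric.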
Policies and Procedures
• High-level principles for AI—how to design, use, sell, etc.
• Assessment questions and procedures
• Accountability measures for two stages—training and decision-making
• White, black, and gray lists of AI use
• Evaluate the data against the purpose—quality, provenance, personal or not, synthetic, in-house or external sources
• Verification of data input and output
• Algorithmic bias—tools to identify, monitor and test, including sensitive data in datasets to avoid bias
• Pilot testing AI models before release
• Testing robustness of de-identification techniques
• Use of encrypted data or synthetic data in some AI/ML models and for model training
• Use of high-quality but smaller data sets
• Federated AI learning models (data doesn’t leave the device)
• Special considerations for companies creating and selling AI models, software, applications
• Due diligence checklists for business partners using AI tech and tools
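The federated learning practice above keeps raw data on the device: clients train locally and share only model parameters, which a server averages. As an illustrative sketch (not from the report; the function names and the toy one-parameter model are hypothetical), the core federated-averaging loop looks like this:

```python
# Illustrative sketch of federated averaging: raw data stays with each
# client; only updated weights leave the device. Hypothetical names.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on a 1-D least-squares model y ≈ w*x,
    using only this client's private data."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_average(global_w, client_datasets, lr=0.1):
    """Each client trains locally; the server averages the weights."""
    local_ws = [local_update(global_w, d, lr) for d in client_datasets]
    return sum(local_ws) / len(local_ws)

# Two clients, each holding private samples of the relationship y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_average(w, clients)
print(round(w, 3))  # converges toward 2.0
```

Production systems (e.g., on mobile devices) add secure aggregation and differential privacy on top of this basic loop, but the data-minimization property shown here is the reason the practice appears in this table.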