Responsible AI
Responsible AI refers to the practice of designing and managing AI systems that are trustworthy, explainable, and human-centric.
Overview
Artificial Intelligence (AI) offers many advantages for companies but comes with significant risks. Companies using AI or creating AI systems must adopt specific rules, methods, and technologies to reduce these risks. Responsible AI means ensuring AI does not hurt people, the companies that use it, or the environment.
This article describes Responsible AI and how SAFe practices can help companies quickly advance their efforts to use AI responsibly. It extends the guidance provided in the Artificial Intelligence (AI) and SAFe article.
In 2023, a survey of Scaled Agile customers showed that 43% hesitated to use AI because of its risks. Despite these concerns, AI helps employees do their jobs better and helps companies create new, competitive products. Companies that have dealt with AI’s risks by setting clear rules, following proven methods, and using the right technology have gained an edge over their competitors. This article aims to help SAFe companies use AI responsibly. By following these principles and practices, they can avoid the dangers of AI and benefit from what it has to offer.
Key takeaways from the following guidance are:
- AI systems come with their own risks that need to be addressed to use this technology safely.
- SAFe provides detailed recommendations for using AI responsibly.
- Adopting SAFe values and methods can help companies implement Responsible AI.
Why Do Companies Need Responsible AI?
Why is it essential for organizations to put effort and resources into Responsible AI? Because the risks of AI, if unchecked, can far outweigh the benefits it can provide.
AI has its own specific risks, such as:
Bias in AI systems arises from the data on which they are trained. If the training data is biased, the AI will also exhibit bias. For instance, a major technology firm was heavily criticized when its AI facial recognition system failed to accurately recognize individuals with dark skin tones. This issue stemmed from the fact that the system was trained on a dataset of images in which most of the people had lighter skin tones.
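To make this concrete, here is a minimal sketch of how a team might audit a classifier for this kind of bias by comparing accuracy across demographic groups. The records and group labels are hypothetical stand-ins for a real annotated evaluation set.

```python
from collections import defaultdict

# Hypothetical evaluation records: (predicted_label, true_label, group).
# In practice these would come from a held-out test set annotated with
# the demographic attribute being audited.
records = [
    ("match", "match", "lighter_skin"),
    ("match", "match", "lighter_skin"),
    ("no_match", "match", "darker_skin"),
    ("match", "match", "darker_skin"),
    ("no_match", "match", "darker_skin"),
]

correct = defaultdict(int)
total = defaultdict(int)
for predicted, actual, group in records:
    total[group] += 1
    if predicted == actual:
        correct[group] += 1

# A large accuracy gap between groups is a signal of biased training data.
for group in total:
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")
```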
Hallucination happens when an AI produces output that is false or nonsensical yet presents it as fact, often because of gaps in its training data or flaws in its design. For example, an airline’s AI chatbot once offered a customer a discount on an airfare. When the customer tried to use the discount, they were told it didn’t exist. The customer took the airline to court and won, showing that companies can be held responsible for their AI systems’ mistakes.
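One common mitigation is to verify any factual claim an AI makes against an authoritative system before it reaches a customer. The sketch below illustrates the idea; the verified_reply() guardrail and the stand-in table of valid discount codes are assumptions for illustration, not a real airline API.

```python
from typing import Optional

# Hypothetical guardrail: never surface an AI-generated offer unless it can
# be verified against the company's authoritative pricing system. This set
# is a stand-in for a real lookup against that system of record.
VALID_DISCOUNT_CODES = {"SPRING10", "LOYALTY15"}

def verified_reply(ai_reply: str, discount_code: Optional[str]) -> str:
    """Pass the chatbot reply through only if any offer it makes is real."""
    if discount_code is not None and discount_code not in VALID_DISCOUNT_CODES:
        # Fall back to a safe response instead of an unverifiable claim.
        return "Let me connect you with an agent to confirm available fares."
    return ai_reply

# The hallucinated code is caught and replaced with a safe fallback.
print(verified_reply("You qualify for discount code BEREAVE20!", "BEREAVE20"))
```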
Data leaks can happen when employees enter confidential company or customer information into cloud-based AI tools provided by external companies. Since most AI software is built and hosted by outside vendors, companies must be cautious about how their sensitive data is collected, stored, and used by these AI systems to prevent data breaches or improper use. For example, developers at a global consumer electronics company used an AI tool to improve their software code and unintentionally shared highly sensitive company information with the outside world.
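A simple safeguard is to filter obvious secrets and personal data out of text before it leaves the company for an external AI service. The following sketch shows the idea; the patterns and example prompt are illustrative, and a production system would rely on a dedicated data loss prevention (DLP) tool rather than a few regular expressions.

```python
import re

# Hypothetical pre-submission filter: redact obvious secrets and PII before
# any text is sent to a third-party AI service. Patterns are illustrative.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN format
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[API_KEY]"),
]

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholder tokens."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Debug this: api_key=sk-123secret, then notify jane.doe@example.com"
print(redact(prompt))  # now safer to forward to the external tool
```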
Because of these and other risks associated with AI, companies must take intentional steps to mitigate them so that they can use AI safely.
Aspects of Responsible AI
Companies can mitigate the risks of using AI by following the best practices of SAFe’s Responsible AI model. Responsible AI means that an AI system exhibits all of the following aspects.
Trustworthy AI means AI systems work as designed, are secure, and protect private information. When AI is trustworthy, it has the following attributes:
- Privacy – making sure customer and company confidential information doesn’t leak out, especially when using third-party AI tools
- Security – preventing hackers from compromising systems by taking advantage of AI’s weaknesses
- Resilience – handling attacks and fixing itself quickly if a part of the AI stops working
- Reliability – working as it should, being available when needed, performing quickly, and doing what it was created to do
- Accuracy – giving correct information and results
When an AI system has all these qualities, companies and their customers are more likely to trust and rely on it.
Explainable AI makes AI systems open and clear so people can understand how they arrive at the results they produce. This involves:
- Transparency – Providing clear documentation that makes it easy to understand how an AI system works and how it produces its results
- Interpretability – Ensuring that the way AI makes decisions is easy for humans to understand
- Accountability – Holding organizations responsible for AI behavior and outcomes
When companies make their AI explainable, it helps fix mistakes faster and makes customers trust their AI more.
Human-centric AI systems should always be safe and avoid harming people, property, or the environment. They should also respect the rules and values of society. Here are the key attributes of human-centric AI:
- Safety – Ensuring AI doesn’t pose any dangers to humans
- Fairness – Ensuring AI systems treat everyone equally, without bias
- Ethics – Ensuring AI systems follow the moral principles and values that society holds
- Inclusiveness – Ensuring AI considers the wide range of people who might use it
- Sustainability – Ensuring AI does not harm the environment
- Compliance – Ensuring AI follows existing laws, regulations, and standards
The more we empower AI to act independently, the more important it becomes for AI systems to be designed to ‘do no harm’ to the people who create and use them.
Responsible AI is an Executive-Level Concern
Responsible AI usually starts as a top-priority initiative for senior executives and the board of directors. The initial leader of a company’s Responsible AI initiative is often a C-level leader. It is also common for the company’s board of directors to create a special advisory group to guide the AI strategy and ensure AI is used responsibly.
When the Responsible AI effort starts, a team with members from different departments handles the detailed work of creating rules and practices for Responsible AI use. This team often engages outside legal experts with experience in this area.
This team’s main goal is to develop a wide-ranging set of rules and guidelines to ensure that AI is used responsibly in the company. Some rules, like policies on using AI tools on company devices and networks, are for everyone. Other rules are specific to certain roles or departments. Many guidelines help shape the company’s strategy and highlight the initiatives needed to develop systems that support Responsible AI.
SAFe Roles and Practices Support Responsible AI
Companies that use SAFe are well-prepared to work with Responsible AI. This is because the core values of SAFe closely match the idea of AI being ‘responsible.’
Beyond a shared set of values, here are some ways SAFe can help with Responsible AI:
- Strategic Themes help communicate Responsible AI priorities throughout the organization
- Portfolio leaders manage Responsible AI governance within development value streams and make Responsible AI a top priority
- System Teams incorporate automated Responsible AI tests into the DevOps pipeline (see the sketch after this list)
- Agile Release Trains include experts focused on integrating Responsible AI into solutions
- Agile Teams apply SAFe practices like creating non-functional requirements to embed Responsible AI in solution features
- Procurement follows SAFe’s Agile contract principles to ensure Responsible AI qualities are included in agreements with AI providers
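As an illustration of the automated tests mentioned above, a System Team might add a pipeline check like the following, which fails the build when a candidate model exceeds a fairness threshold. The demographic parity metric, the 0.05 threshold, and the hard-coded rates are assumptions for this sketch, not a prescribed SAFe standard.

```python
# Hypothetical CI gate, runnable with pytest: a model that fails the
# fairness threshold never reaches deployment.

def demographic_parity_difference(positive_rates: dict) -> float:
    """Gap between the highest and lowest positive-prediction rates."""
    return max(positive_rates.values()) - min(positive_rates.values())

def test_fairness_gate():
    # In a real pipeline these rates would come from evaluating the
    # candidate model on a labeled audit dataset, one rate per group.
    positive_rates = {"group_a": 0.41, "group_b": 0.39}
    assert demographic_parity_difference(positive_rates) <= 0.05
```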
By using SAFe to practice Responsible AI, companies can make the most of their investments in Lean and Agile practices. This approach helps them keep pace with the fast changes in AI technologies and Responsible AI practices.
Learn More
U.S. National Institute of Standards and Technology (NIST) – Trustworthy and Responsible AI Resource Center – https://airc.nist.gov/home
International Organization for Standardization (ISO) – ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system – https://www.iso.org/standard/81230.html
E.U. Artificial Intelligence Act – https://artificialintelligenceact.eu/
Responsible Artificial Intelligence Institute – https://www.responsible.ai/