To Be a Responsible AI Leader, Focus on Being Responsible (2022)

Findings from the 2022 Responsible AI Global Executive Study and Research Project

By Elizabeth M. Renieris, David Kiron, and Steven Mills

Executive Summary

As AI’s adoption grows more widespread and companies see increasing returns on their AI investments, the technology’s risks also become more apparent.1 Our recent global survey of more than 1,000 managers suggests that AI systems across industries are susceptible to failures, with nearly a quarter of respondents reporting that their organization has experienced an AI failure, ranging from mere lapses in technical performance to outcomes that put individuals and communities at risk. It is these latter harms that responsible AI (RAI) initiatives seek to address.

Meanwhile, lawmakers are developing the first generation of meaningful AI-specific legislation.2 For example, the European Union’s proposed AI Act would create a comprehensive scheme to govern the technology. And in the U.S., lawmakers in New York, California, and other states are working on AI-specific regulations to govern its use in employment and other high-risk contexts.3 In response to the heightened stakes around AI adoption and impending regulations, organizations worldwide are affirming the need for RAI, but many are falling short when it comes to operationalizing RAI in practice.

There are, however, exceptions. A number of organizations are bridging the gap between aspirations and reality by making a philosophical and material commitment to RAI, including investing the time and resources needed to create a comprehensive RAI program. We refer to them as RAI Leaders or Leaders. They appear to enjoy clear business benefits from RAI. Our research indicates that Leaders take a more strategic approach to RAI, led by corporate values and an expansive view of their responsibility toward a wide array of stakeholders, including society as a whole. For Leaders, prioritizing RAI is inherently aligned with their broader interest in leading responsible organizations.

This MIT Sloan Management Review and Boston Consulting Group report is based on our global survey, interviews with several C-level executives, and insights gathered from an international panel of more than 25 AI experts. (For more details on our methodology, including how the research team surveyed Africa and China, see “About the Research.”) It provides a high-level road map for organizations seeking to enhance their RAI efforts or become RAI Leaders. Though negotiating AI-related challenges and regulations can be daunting, the good news is that a focus on general corporate responsibility goes a long way toward achieving RAI maturity.

Introduction

Responsible AI has become a popular term in both business and the media. Many companies now have responsible AI officers and teams dedicated to ensuring that AI is developed and used appropriately. This emphasis reflects an increasingly common point of view that as AI gains influence over operations and how products work, companies need to address novel risks associated with this emerging technology.

However, leading companies are taking a more expansive approach: For them, RAI is about expanding their foundation of corporate responsibility. These companies are responsible businesses first: The values and principles that determine their approach to responsible conduct apply to their entire suite of technologies, systems, and processes. For these leading companies, RAI is less about a particular technology than about the company itself.

H&M Group is a case in point. Linda Leopold, the company’s head of responsible AI and data, notes, “There is a close connection between our strategy for responsible AI and our efforts to promote social and environmental sustainability.” One example of where these strategies align “is our ambition to use AI as a tool to reduce CO2 emissions,” she explains.

Nitzan Mekel-Bobrov, chief AI officer at eBay, sees an inherent connection between RAI and a broader view of corporate responsibility. He notes, “Many of the core ideas behind responsible AI, such as bias prevention, transparency, and fairness, are already aligned with the fundamental principles of corporate social responsibility, so it should already feel natural for an organization to tie in its AI efforts.”

Their views on RAI reflect a powerful theme that runs throughout our research this year: As organizations develop and mature their RAI programs, they come to see RAI as an organizational issue, not just a technological one. At the same time, many organizations have yet to make this transition.

The State of RAI: Aspirations Versus Reality

Without question, AI adoption is accelerating across organizations in all industries and sectors. An overwhelming majority of the companies surveyed for MIT SMR and BCG’s 2019 report on AI — 90% — had made investments in the technology.4 Our research suggests that organizations deploy AI to optimize internal business processes and improve external customer relations and products. Levi Strauss & Co. provides an example from the retail industry. “AI is starting to permeate the entirety of Levi Strauss & Co.,” observes Katia Walsh, the apparel company’s chief global strategy and AI officer. Far from playing a limited role in a single area, she explains, AI is being implemented horizontally across the enterprise: personalizing consumer experiences online and in stores, automating and optimizing internal processes such as pricing, production, and order fulfillment, predicting demand for products, and more.

While corporate adoption of AI has been rapid and wide-ranging, the adoption of responsible AI across organizations worldwide has thus far been relatively limited. RAI is often seen as necessary to mitigate the technology’s risks — which encompass issues of safety, bias, fairness, and privacy, among others — yet it is by no means standard practice. Just over half of our respondents (52%) report that their organizations have an RAI program in place. Of those with an RAI program, a majority (79%) report that the program’s implementation is limited in scale and/or in scope. (See Figure 1.)

Notably, 42% of our respondents say that AI is a top strategic priority for their organization, but even among those respondents, only 19% affirm that their organization has a fully implemented RAI program. In other words, responsible AI initiatives often lag behind strategic AI priorities. (See Figure 2.)

One factor that could be contributing to RAI’s limited implementation is confusion over the term itself. Given that RAI is a relatively nascent field, it is hardly surprising that there is a lack of consensus on the meaning of responsible AI. Only 36% of respondents believe the term is used consistently throughout their organization. Even enterprises that have implemented RAI programs find that the term is used inconsistently. Kathy Baxter, principal architect of ethical AI practice at Salesforce, notes that there has been discussion at the cloud-based software company over whether to use the term responsible AI or ethical AI and, indeed, over whether the two are interchangeable. Similarly, at H&M Group, Leopold agrees that the terms ethical, trustworthy, and responsible in connection with AI are “used very much interchangeably.” To be consistent and avoid confusion, she and her team decided to use responsible as an umbrella term, with ethics as one key component.

Other factors that contribute to the limited implementation of RAI have less to do with the technical complexities of AI than with more general organizational challenges. When respondents were asked which factors were preventing their organizations from starting, sustaining, or scaling RAI initiatives, the most common factors were shortcomings related to expertise and talent, training or knowledge among staff members, senior leadership prioritization, funding, and awareness. (See Figure 3.)

Given the rapid spread of AI technology and growing awareness of its risks, most organizations recognize the importance of RAI and want to prioritize it. The vast majority of respondents (84%) believe that RAI should be part of the top management agenda. Several of our RAI panel members share that sentiment.5 Paula Goldman, chief ethical and humane use officer at Salesforce, forcefully makes the point, declaring, “As we navigate increasing complexity and the unknowns of an AI-powered future, establishing a clear ethical framework isn’t optional. It’s vital for its future.” Riyanka Roy Choudhury, a CodeX fellow at Stanford Law School’s Computational Law Center, concurs, describing RAI as “an economic and social imperative.” She adds, “It’s vital that we are able to explain the decisions we use AI to make, so it is important for companies to include responsible AI as a part of the top management agenda.” Our research supports the prioritization of RAI as a business imperative.

Despite widespread agreement regarding the importance of RAI, however, the reality is that most organizations have yet to translate their beliefs into action. Furthermore, even organizations that have implemented RAI to some degree have, in most cases, done so to only a limited extent. Of the 84% of respondents who believe that RAI should be a top management priority, only 56% say that it is in fact a top priority. And of those, only 25% report that their organizations have a fully mature RAI program in place. (See Figure 4.)

RAI Leaders Bridge the Gap

A small cohort of organizations, representing 16% of our survey respondents, has managed to bridge the gap between aspirations and reality by taking a more strategic approach to RAI. These RAI Leaders have distinct characteristics compared with the remainder of the survey population (84%), whom we characterize as Non-Leaders.6 Specifically, they are organizations whose management prioritizes RAI, that include a wide array of participants in RAI implementations, and that take an expansive view of their stakeholders with respect to RAI. Accordingly, three-quarters (74%) of Leaders report that RAI is in fact part of their organization’s top management agenda, as opposed to just 46% of Non-Leaders. This prioritization is reflected in the commitment of 77% of Leaders to invest material resources in their RAI efforts, as opposed to just 39% of Non-Leaders.

Steven Vosloo, digital policy specialist in UNICEF’s Office of Global Insight and Policy, attests to the importance of leadership support for RAI practices. “It is not enough to expect product managers and software developers to make difficult decisions around the responsible design of AI systems when they are under constant pressure to deliver on corporate metrics,” he contends. “They need a clear message from top management on where the company’s priorities lie and that they have support to implement AI responsibly.” Without leadership support, practitioners may lack the necessary incentives, time, and resources to prioritize RAI.

In addition to investing in their RAI efforts, Leaders also include a broader range of participants in those efforts. Leaders include 5.8 roles in their RAI efforts, on average, as opposed to only 3.9 roles for Non-Leaders. Notably, this involvement is tilted toward senior positions. Leaders engage 59% more C-level roles in their RAI initiatives than Non-Leaders, and nearly half (47%) of Leaders involve the CEO in their RAI initiatives, more than double the percentage of Non-Leaders (23%). (See Figure 5.)

Leaders believe that RAI should engage a broad range of participants beyond the organization’s boundaries, even viewing society as a whole as a key stakeholder. Significantly, a strong majority of Leaders (73%) see their RAI efforts as part of their broader corporate social responsibility (CSR) efforts. Brian Yutko, Boeing’s vice president and chief engineer for sustainability and future mobility, embraces this outlook: “There’s nothing that we can do in this industry that doesn’t come with safety as one of the driving requirements. So, it’s hard for me to extract ‘responsible AI’ from the notion of safety, because that’s just simply what we do.” H&M Group’s Leopold suggests that RAI is connected to CSR, “but it needs to be treated as a separate topic with its own specific challenges and goals. It’s not entirely overlapping and connected” with CSR. Non-Leaders are more likely to define RAI in relation to their business bottom line or internal stakeholders, with only 35% connecting RAI with CSR efforts.7

Our research indicates that as organizations mature their RAI initiatives, they become even more interested in aligning their AI use and development with their values and broader social responsibility, and less concerned with limiting risk and realizing business benefits.

It is also significant that Leaders are far more likely than Non-Leaders to disagree that RAI is a “check the box” exercise (61% versus 44%, respectively). These divergent outlooks are reflected in different outcomes: Our survey results show that organizations with a box-checking approach to RAI are more likely to experience AI failures than Leader organizations.

RAI Leaders Realize Clear Business Benefits

As we have noted, RAI Leaders can realize measurable business benefits from their RAI efforts even if they are not primarily motivated by the promise of such benefits. Benefits include better products and services, improved brand differentiation, accelerated innovation, enhanced recruiting and retention, increased customer loyalty, and improved long-term profitability, as well as a better sense of preparedness for emerging regulations.

Overall, 41% of Leaders affirm that they are already realizing business benefits from their RAI efforts, compared with only 14% of Non-Leaders. Moreover, the benefits of AI maturity are amplified when organizations have a robust RAI program in place.8 Thirty percent of RAI Leaders see business benefits from their RAI programs even with immature AI efforts, compared with just 11% of Non-Leaders. Forty-nine percent of Leaders see business benefits from their RAI programs with mature AI efforts, compared with 23% of Non-Leaders. Whether their AI program is mature or immature, Leaders stand to reap more business benefits with RAI.

In addition to increasing business benefits, mature RAI programs also reduce the risks associated with AI itself. With growing AI maturity, and as more AI applications are deployed, the risk of AI failures increases. Increasing RAI maturity ahead of AI maturity significantly reduces the risks associated with scaling AI efforts over time and helps organizations identify AI lapses. Conversely, organizations that mature their AI programs before adopting RAI see more AI failures.

In terms of specific business benefits, half of Leaders report better products and services as a result of their RAI efforts, whereas only 19% of Non-Leaders do. Almost as many Leaders (48%) say that their RAI efforts have resulted in enhanced brand differentiation, while only 14% of Non-Leaders have realized such benefits.

Furthermore, contrary to popular perception, 43% of Leaders report accelerated innovation as a result of their RAI efforts, compared with only 17% of Non-Leaders. (See Figure 6.) Indeed, the overwhelming majority of our AI panel sees RAI as having a positive impact on innovation, with many citing RAI’s ability to curtail the negative effects of AI that can hinder its development or adoption.9

Vipin Gopal, chief data and analytics officer at Eli Lilly, believes that rather than stifling innovation, “responsible AI enables responsible innovation.” He explains: “It would be hard to make the argument that a biased and unfair AI algorithm powers better innovation compared with the alternative. Similar observations can be made with other dimensions of responsible AI, such as security and reliability. In short, responsible AI is a key enabler to ensure that AI-related innovation is meaningful and something that positively benefits society at large.”

Finally, with AI regulations on the horizon, Leaders also experience better preparedness. Most organizations say that they are ill-equipped to face the forthcoming regulatory landscape. But our survey results indicate that those with a mature RAI program in place feel more prepared. A majority of Leaders (51%) feel ready to meet the requirements of emerging AI regulations, compared with less than a third (30%) of organizations with nascent RAI programs.

Recommendations for Aspiring RAI Leaders

Clearly, there are compelling reasons for organizations to transform their own RAI aspirations into reality, including general corporate responsibility, the promise of a range of business benefits, and potentially better preparedness for new regulatory frameworks. How, then, should businesses begin or accelerate this process? These recommendations, inspired by lessons from current RAI leaders, will help organizations scale or mature their own RAI programs.

Ditch the “check the box” mindset. In the face of impending AI regulations and increasing AI lapses, RAI may help organizations feel more prepared. But a mature RAI program is not driven solely by regulatory compliance or risk reduction. Consider how RAI aligns with or helps to express your organizational culture, values, and broader CSR efforts.

Zoom out. Take a more expansive view of your internal and external stakeholders when it comes to your own use or adoption of AI, as well as your AI offerings, including by assessing the impact of your business on society as a whole. Consider connecting your RAI program with your CSR efforts if those are well established within the organization. There are often natural overlaps and instrumental reasons for linking the two.

Start early. Launch your RAI efforts as soon as possible to address common hurdles, including a lack of relevant expertise or training. It can take time — our survey shows three years on average — for organizations to begin realizing business benefits from RAI. Even though it might feel like a long process, we are still early in the evolution of RAI implementation, so your organization has an opportunity to be a powerful leader in your specific industry or geography.

Walk the talk. Adequately invest in every aspect of your RAI program, including budget, talent, expertise, and other human and nonhuman resources. Ensure that RAI education, awareness, and training programs are sufficiently funded and supported. Engage and include a wide variety of people and roles in your efforts, including at the highest levels of the organization.

Conclusion

We are at a time when AI failures are beginning to multiply and the first AI-related regulations are coming online. While both developments lend urgency to the efforts to implement responsible AI programs, we have seen that companies leading the way on RAI are not driven primarily by risks, regulations, or other operational concerns. Rather, our research suggests that Leaders take a strategic view of RAI, emphasizing their organizations’ external stakeholders, broader long-term goals and values, leadership priorities, and social responsibility.

Although AI has unique properties that require an organization to articulate specific cultural attitudes, priorities, and practices, similar strategic considerations might influence how an organization approaches the development or use of blockchain, quantum computing, or any other technology, for that matter.

Given the high stakes surrounding AI, and the clear business benefits stemming from RAI, organizations should consider how to mature their RAI efforts and even seek to become Leaders. Philip Dawson, AI policy lead at the Schwartz Reisman Institute for Technology and Society, warns of liabilities for corporations that neglect to approach this issue strategically. “Top management seeking to realize the long-term opportunity of artificial intelligence for their organizations will benefit from a holistic corporate strategy under its direct and regular supervision,” he asserts. “Failure to do so will result in a patchwork of initiatives and expenditures, longer time to production, damages that could have been prevented, reputational damages, and, ultimately, opportunity costs in an increasingly competitive marketplace that views responsible AI as both a critical enabler and an expression of corporate values.”

On the flip side of those liabilities, of course, are the benefits that we have seen accrue to Leaders that adopt a more strategic view. Leaders go beyond talking the talk to walking the walk, bridging the gap between aspirations and reality. They demonstrate that responsible AI actually has less to do with AI than with organizational culture, priorities, and practices — how the organization views itself in relation to internal and external stakeholders, including society as a whole.

In short, RAI is not just about using a particular technology more responsibly. RAI Leaders see RAI as integrally connected to a broader set of corporate objectives and to being a responsible corporate citizen. If you want to be an RAI Leader, focus on being a responsible company.

Appendix: Responsible AI Adoption in Africa and China

In order to better understand how industry stakeholders in Africa and China approach responsible AI, our research team conducted separate surveys in those two key geographies. The Africa survey, conducted in English, returned 100 responses, and the China survey, localized in Mandarin Chinese, returned 99. African respondents represented organizations grossing at least $100 million in annual revenues, and Chinese respondents represented organizations grossing at least $500 million.

A majority of respondents in Africa (74%) agree that responsible AI is a top management agenda item in their organizations. Sixty-nine percent agree that their organizations are prepared to address emerging AI-related requirements and regulations. The highest percentage of African respondents (55%) report that their organizations’ RAI efforts have been underway for a year or less (with 45% at six to 12 months, and 10% at less than six months).

In China, 63% of respondents agree that responsible AI is a top management agenda item, and the same percentage agree that their organizations are prepared to address requirements and regulations. Based on our survey data, China appears to have longer-standing efforts around RAI, with respondents reporting that their organizations have focused on RAI for one to three years (39%) or more than five years (20%).

Respondents in both geographies have realized clear business benefits from their RAI efforts. A majority of respondents — 55% in Africa and 51% in China — cite better products and services as a top benefit. A significant minority have benefited from increased customer retention — 38% in Africa and 34% in China. In Africa, 38% of respondents also cite improved longer-term profitability, while 40% of respondents in China say they have experienced accelerated innovation as a result of RAI. (See Figure 7.)

About the Research

In the spring of 2022, MIT Sloan Management Review and Boston Consulting Group fielded a global executive survey to learn the degree to which organizations are addressing responsible AI. We focused our analysis on 1,093 respondents representing organizations reporting at least $100 million in annual revenues. These respondents represented companies in 22 industries and 96 countries. The team separately fielded the survey in Africa, as well as a localized version in China, to yield 100 and 99 responses from those geographies, respectively.

We defined responsible AI as “a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.”

To quantify what it means to be a responsible AI Leader, the research team conducted a cluster analysis on three numerically encoded survey questions: “What does your organization consider part of its responsible AI program? (Select all that apply.)”; “To what extent are the policies, processes, and/or approaches indicated in the previous question implemented and adopted across your organization?”; and “Which of the following considerations do you personally regard as part of responsible AI? (Select all that apply.)” The first and third questions were first recategorized into six options each to ensure equal weighting of both aspects.

The team then used an unsupervised machine learning algorithm (K-means clustering) to identify naturally occurring clusters based on the scale and scope of the organization’s RAI implementation. The K-means algorithm requires the number of clusters (K) to be specified in advance; this choice was verified through exploratory data analysis of the survey data and direct visualization of the clusters via UMAP. We then defined an RAI Leader as the most mature of the three maturity clusters identified through this analysis.

Scale is defined as the degree to which RAI efforts are deployed across the enterprise (e.g., ad hoc, partial, enterprisewide). Scope includes the elements that are part of the RAI program (e.g., principles, policies, governance) and the dimensions covered by the RAI program (e.g., fairness, safety, environmental impact). Leaders were the most mature in terms of both scale and scope.
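
To make the approach concrete, here is a minimal sketch of such a clustering step in Python. It is an illustration under stated assumptions, not the research team’s actual pipeline: the synthetic data, column names, and feature encoding are hypothetical, while the choice of K = 3 and the scale-and-scope framing follow the description above.

```python
# Hypothetical sketch of the maturity clustering described above.
# The encoded features and data are illustrative; only K = 3 and the
# scale/scope framing come from the report's methodology.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    # Degree of enterprise deployment: ad hoc = 0 ... enterprisewide = 3
    "scale": rng.integers(0, 4, 500),
    # Number of program elements (principles, policies, governance, ...)
    "scope_elements": rng.integers(0, 7, 500),
    # Number of dimensions covered (fairness, safety, environment, ...)
    "scope_dimensions": rng.integers(0, 7, 500),
})

# Standardize features so no single question dominates the distance metric.
X = StandardScaler().fit_transform(df)

# K-means with K = 3, matching the three maturity clusters in the report.
# (The report verified the choice of K via exploratory analysis and UMAP.)
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Rank clusters by average scale and scope; the most mature cluster
# corresponds to the "RAI Leaders" segment.
maturity = df.groupby("cluster")[
    ["scale", "scope_elements", "scope_dimensions"]
].mean().sum(axis=1)
leader_cluster = maturity.idxmax()
print(f"Share classified as Leaders: {(df['cluster'] == leader_cluster).mean():.0%}")
```

With the actual survey data, the share of respondents falling into the most mature cluster corresponds to the 16% identified as Leaders in the report.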

Finally, the research team assembled a panel of 26 RAI thought leaders from industry and academia, who were polled on key questions to inform this research multiple times through its cycle. We conducted deeper-dive interviews with four of those panelists.

About the Authors

Elizabeth M. Renieris is a senior research associate at Oxford’s Institute for Ethics in AI and the founder and CEO of Hackylawyer, a law and policy consultancy. A former fellow at Stanford’s Digital Civil Society Lab, Harvard’s Carr Center for Human Rights Policy, and the Berkman Klein Center for Internet & Society, Renieris is the author of Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse (MIT Press, 2023).

David Kiron is the editorial director for research at MIT Sloan Management Review and program lead for its Big Ideas research initiatives. Previously, he was a senior researcher at Harvard Business School and a researcher at the Global Development and Environment Institute at Tufts University. He is coauthor of the forthcoming book Workforce Ecosystems: Reaching Strategic Goals With People, Partners, and Technology (MIT Press, 2023).

Steven Mills is a managing director and partner at Boston Consulting Group (BCG), where he serves as the chief AI ethics officer. He is responsible for developing BCG’s internal responsible AI program as well as guiding clients as they design and implement their own RAI programs. Mills has been recognized by DataIQ as one of the 100 most influential people in data (2022) and by Forbes as one of 15 AI ethics leaders shaping the future (2021).

Contributors

François Candelon, Maxime Courtaux, Michele Lee DeFilippo, Todd Fitz, Carolyn Ann Geason-Beissel, Franz Gravenhorst, Abhishek Gupta, Sarah Johnson, Tom Porter, Lauren Rosano, Allison Ryder, Max Santinelli, Sean Singer, Barbara Spindel, Peter Strutt, and Yunke Xiang

To cite this report, please use:

Elizabeth M. Renieris, David Kiron, and Steven Mills, “To Be a Responsible AI Leader, Focus on Being Responsible,” MIT Sloan Management Review and Boston Consulting Group, September 2022.

Acknowledgments

We thank each of the following individuals, who were interviewed for this report:

Kathy Baxter, principal architect, ethical AI practice, Salesforce

Linda Leopold, head of responsible AI and data, H&M Group

Katia Walsh, chief global strategy and AI officer, Levi Strauss & Co.

Brian Yutko, vice president and chief engineer, sustainability and future mobility, Boeing

MIT Sloan Management Review

At MIT Sloan Management Review (MIT SMR), we explore how leadership and management are transforming in a disruptive world. We help thoughtful leaders capture the exciting opportunities — and face down the challenges — created as technological, societal, and environmental forces reshape how organizations operate, compete, and create value.

MIT Sloan Management Review Big Ideas

MIT Sloan Management Review’s Big Ideas Initiatives develop innovative, original research on the issues transforming our fast-changing business environment. We conduct global surveys and in-depth interviews with frontline leaders working at a range of companies, from Silicon Valley startups to multinational organizations, to deepen our understanding of changing paradigms and their influence on how people work and lead.

Boston Consulting Group

Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we work closely with clients to embrace a transformational approach aimed at benefiting all stakeholders — empowering organizations to grow, build sustainable competitive advantage, and drive positive societal impact. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives that question the status quo and spark change. BCG delivers solutions through leading-edge management consulting, technology and design, and corporate and digital ventures. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, fueled by the goal of helping our clients thrive and enabling them to make the world a better place.

GAMMA, part of BCG X

BCG X is Boston Consulting Group’s home for tech-build and design talent. The multidisciplinary unit develops cutting-edge AI, visionary business ventures, and unique software and products powered by the combined expertise of BCG Digital Ventures, BCG GAMMA, and BCG Platinion. Together as BCG X, this team collaborates at all levels with the world’s leading organizations to solve their biggest strategy and technology challenges. BCG X is at the forefront of thought leadership, with a breadth of industry-recognized experts and deep engagement in industry thought leadership.

References

1. T.H. Davenport and R. Bean, “Companies Are Making Serious Money With AI,” MIT Sloan Management Review, Feb. 17, 2022, https://sloanreview.mit.edu.

2. For a summary of legislative action taken in the U.S., see C. Kraczon, “The State of State AI Policy (2021-22 Legislative Session),” Electronic Privacy Information Center, Aug. 8, 2022, https://epic.org.

3. See, for example, N.E. Price, “New York City’s New Law Regulating the Use of Artificial Intelligence in Employment Decisions,” JD Supra, April 11, 2022, http://www.jdsupra.com; and J.J. Lazzarotti and R. Yang, “Draft Regulations in California Would Curb Use of AI, Automated Decision Systems in Employment,” Jackson Lewis, April 11, 2022, http://www.californiaworkplacelawblog.com.

4. S. Ransbotham, S. Khodabandeh, R. Fehling, et al., “Winning With AI,” MIT Sloan Management Review and Boston Consulting Group, Oct. 15, 2019, https://sloanreview.mit.edu.

5. D. Kiron, E. Renieris, and S. Mills, “Why Top Management Should Focus on Responsible AI,” MIT Sloan Management Review, April 19, 2022, https://sloanreview.mit.edu.

6. Leaders are the most mature of the three maturity clusters identified by analyzing the survey results. An unsupervised machine learning algorithm (K-means clustering) was used to identify naturally occurring clusters based on the scale and scope of the organization’s RAI implementation. Scale is defined as the degree to which RAI efforts are deployed across the enterprise (e.g., ad hoc, partial, enterprisewide). Scope includes the elements that are part of the RAI program (e.g., principles, policies, governance) and the dimensions covered by the RAI program (e.g., fairness, safety, environmental impact). Leaders were the most mature in terms of both scale and scope.

7. We offer a deeper analysis of the connection between RAI and CSR here: E.M. Renieris, D. Kiron, and S. Mills, “Should Organizations Link Responsible AI and Corporate Social Responsibility? It’s Complicated,” MIT Sloan Management Review, May 24, 2022, https://sloanreview.mit.edu.

8. To assess whether an organization’s AI use was mature or immature, we asked respondents, “What is the level of adoption of AI in your organization?” Those who selected “AI at scale with applications in most business and functional areas” or “Large number of applications in select business and functional areas” were classified as mature, and those who answered, “Only prototypes and/or pilots, without full-scale implementations” or “Some applications deployed and implemented” were classified as immature.

9. E.M. Renieris, D. Kiron, and S.D. Mills, “RAI Enables the Kind of Innovation That Matters,” MIT Sloan Management Review, May 24, 2022, https://sloanreview.mit.edu.
