Promise or Peril? Artificial Intelligence and the Future of Corporate Governance




SALZBURG GLOBAL SEMINAR IS GRATEFUL TO THE FOLLOWING ORGANIZATIONS FOR THEIR SUPPORT FOR THIS PROGRAM:

SALZBURG GLOBAL SEMINAR WOULD LIKE TO THANK ALL PARTICIPANTS FOR DONATING THEIR TIME AND EXPERTISE TO THIS PROGRAM.

PROMISE OR PERIL?

ARTIFICIAL INTELLIGENCE AND THE FUTURE OF CORPORATE GOVERNANCE

OCTOBER 4 TO 6, 2023

DEPUTY CEO AND MANAGING DIRECTOR, PROGRAMS Benjamin Glahn
EDITOR Aurore Heugas
SENIOR PROGRAM MANAGER Antonio Riolino
PHOTOS Katrin Kerschbaumer
RAPPORTEUR AND INTERVIEW AUTHORS Aurore Heugas, Audrey Plimpton, Paul Mart Jeyand J. Matangcas, Alexandra Walsh

Sandie O’Connor: What are some of the potential opportunities for more effective regulatory environments surrounding the use of AI in financial services?

Natasha Blycha: What are the principal weaknesses and imperfections/risks of AI and how can humans manage them?

Schellie-Jayne Price’s thoughts on corporate governance and the potential and risks of AI

Alex Jenkins: What is the most revolutionary AI technology in education so far?


In a world of accelerating digitalization and disruption, Salzburg Global Seminar’s 2023 Corporate Governance Forum explored the complex geopolitical considerations and risks posed by artificial intelligence (AI) and whether corporations should (or must) adopt AI approaches in their corporate oversight functions as well as in their business operations, internal governance, and management of human capital.

Participants in the Forum addressed the opportunities and risks of artificial intelligence for corporate governance, what this means for the future of work following the pandemic, AI’s role in, and impact (if any) on ESG, stakeholder governance and corporate purpose, and lessons learned from other examples of disruptive technological change and accompanying societal developments, including their effect on corporate governance, oversight, and risk management.

While AI clearly presents an opportunity to improve corporate decision-making, operations, oversight, sustainability and reporting, will the widespread use of AI increase vulnerability to cyberattacks and instability, reinforce discrimination and biases, and risk losing control of privacy and critical corporate assets? What can be learned from companies that have rushed to invest in AI capabilities before they are prepared to use them effectively? What might be learned from previous examples of rapid technological and social disruption and the risks, opportunities, and potential impact on board duties and the delivery of corporate and social value?

In addition to the technological and operational challenges posed by AI, corporations around the world are facing difficult and complicated strategic questions about the geopolitical, infrastructure, and cyber risks posed by AI and the potential long-term risks of the “decoupling” and “regionalization” of digital infrastructure.

This report summarizes some of the discussions and takeaways from this year’s session.

INTRODUCTION

From October 4 to 6, 2023, industry leaders, directors of corporations, legal experts, academics, and tech enthusiasts from around the globe convened at Schloss Leopoldskron for the Corporate Governance Forum. Together, they grappled with one of the most important questions of the moment: is artificial intelligence (AI) something to be feared or to be leveraged in business?

As emphasized over three days of vibrant discussion at Salzburg Global Seminar’s program, Promise or Peril? Artificial Intelligence and the Future of Corporate Governance, the answer to this query is complicated.

In a world of accelerating digitization and disruption, AI has become increasingly significant to the realm of corporate direction and control. When used properly, AI systems can analyze enormous amounts of data to identify trends accurately and efficiently and provide insights that might elude human analyses. This capability can inform strategic decision-making, making it particularly valuable in financial forecasting, market analysis, and risk assessment.

AI technologies can also play a critical role in monitoring compliance with jurisdictional laws and regulations. By automating tracking and reporting processes, AI tools help companies adhere to legal standards and ethical norms and reduce the likelihood of expensive litigation or reputational harm. AI-driven tools can also foster shareholder engagement by making learning and voting more accessible and efficient.

Still, AI—whether generative or predictive—heralds a complex set of geopolitical and business considerations that every company must square with its unique goals. Indeed, cybersecurity and data breaches, lack of transparency in AI decision-making processes, and potential bias affecting ESG objectives are real obstacles companies face, even when AI systems are used ethically and in alignment with governance objectives.

Drawing from their training and ample experiences, the participants of the program waded into the gray. The following six key takeaways emerged from their discussions.

1. AI IS HERE TO STAY…LEARN TO EMBRACE IT

Results of a poll circulated to participants on day two of the program indicated that most felt the biggest AI risk for their organization was not using it. “AI is evolving rapidly, and predicting a clear path for how things will play out is difficult,” one participant noted. “Whether or not we’re comfortable with these technologies,” she asserted, “[our businesses] need them to keep pace.” The group acknowledged that people across organizational structures may be frustrated by a dearth of clear answers as AI systems develop. The key, participants concluded, is to be constant learners, working actively to become educated on developing issues and technologies.

2. AI ISN’T JUST AN IT PROBLEM

As synthetic analytic and decision-making systems proliferate throughout business operations, boards and senior management cannot afford to treat AI as an issue for technologists alone. People at all tiers of the organization need to know what they are looking at, what they are looking for, and when they are using AI. For example, employees must learn how to use AI appropriately to inform their workflows, while compliance officers and internal auditors must understand AI’s relationship with privacy and data integrity. Generally, participants agreed that AI development necessitates greater lines of communication between engineers and business leaders, not only for corporate responsibility but also for innovation. “The peril of top-down decision-making concerning AI,” one participant argued, “is that ill-informed directors push back on ideas about AI systems. This can weaken energy and enthusiasm at other levels of the organization.” Better integration, he argued, and greater attention to those in the organization with subject-matter expertise will allow for more creative applications of AI to business goals.

3. BOARDS NEED TO HAVE AN AI PHILOSOPHY

Although directors, principals, and senior management do not need to know the precise mechanisms underlying AI systems, they should cultivate an AI philosophy that aligns best with their organization’s mission and objectives. Ultimately, the group agreed, AI is a tool that—like all tools—requires thoughtful implementation. Because data is the most important asset companies have after their people, boards must consider how AI evaluates, proliferates, and generates value from that data. Directors should also examine how AI can enhance products and services and how these technologies will interface with their employees. Without an AI philosophy, organizations may use AI inefficiently or irresponsibly, inadvertently causing organizational harm. Conversely, a thoughtful and strategic AI philosophy can support innovation and competitive advantage.

4. BETTER REMEDIATION TOOLS ARE NEEDED FOR AI NIGHTMARES

From ChatGPT’s made-up court cases to privacy violations carried out by Target’s pregnancy-prediction analytics, AI carries undeniable risk when it comes to cybersecurity, transparency, and data protection. AI systems often rely on vast datasets to function effectively, which can lead to increased surveillance of consumers and data misuse. Additionally, AI tools may produce hallucinations, a phenomenon in which a system generates false or nonsensical information absent from the input data. AI systems’ implicit biases can lead to discriminatory outcomes, like gender preference in hiring decisions. Furthermore, the complexity of training algorithms can make it difficult to identify the sources of error when AI tools make mistakes. During the discussion, one participant asked the group, “Because of the speed and blast radius of AI that goes wrong, what are the guardrails and mechanisms to remediate accidents?” Robust ethical frameworks, transparent AI design, rigorous data protection, and consistent monitoring are all necessary for accident prevention, the group concluded, but they may do little in the aftermath. The AI community must develop better damage control mechanisms.

5. COMPANIES MUST CONSIDER HOW AI AFFECTS THE HUMAN ELEMENT

The opportunities and concerns generated by AI create an acute need for business leaders to think about corporate culture, employees’ sense of self, and AI’s effect on the workforce. Employees already hold attitudes and fears about AI technology. Some of these anxieties are valid, but others arise from misinformation. Participants agreed that directors and senior management must know what to say to someone who walks into their office and asks what AI means for their job. Ultimately, AI is groundbreaking for organizations, but “it’s a cognitive prosthetic,” one participant noted. “It doesn’t replace human cognition.” Employees must know their skills are valued. Consequently, as a board builds its AI philosophy, it must consider how to emphasize and employ AI as a support for, rather than a replacement of, its human capital.

6. WITH AI, BUSINESS AND ETHICS CANNOT BE SEPARATED

At a time when hard law and regulations differ substantially across nations and when AI systems differ in their sophistication, the pillars of good governance—accountability, transparency, fairness, and responsibility—are more important than ever. A strong ethical framework can play a significant role in combating algorithmic bias and discrimination originating from AI. Boards can act as a north star to ensure that ethics and business goals align. For example, directors should consider how KPIs might incentivize bad actors and the abuse of AI systems. They should also contemplate how to promote upstanding business practices when statutes have yet to catch up with AI advancement.

SANDIE O’CONNOR: WHAT ARE SOME OF THE POTENTIAL OPPORTUNITIES FOR MORE EFFECTIVE REGULATORY ENVIRONMENTS SURROUNDING THE USE OF AI IN FINANCIAL SERVICES?

I think the opportunity is to consider regulated banking, at least, as a reference point. Longstanding rules and regulations in this sector have been written to be technology-agnostic.

AI is already being applied within this framework for things like surveillance, fraud identification, and the improvement of operating processes, and its use still complies with existing regulations anchored in consumer protections or requirements for model explainability that support outcome reviews and mitigate bias, for example. So there’s opportunity to see how that’s being done. I think we need to think about guardrails more broadly. Why? Because regulated banking is only one segment of financial services, which has so many more participants, like hedge funds, private equity, asset managers, or the variety of companies offering payment alternatives. And so trying to figure out the right rules of the road in the context of AI use across the entire spectrum of financial services and participants would be valuable.

Also, stating the obvious: your AI can only hope to be as good as your data. So accountability for data accuracy is important, as are guardrails around data use, data privacy, and data sharing. Regulated banking has been deeply engaged in this space. I would think there is value in a broader framework on the use of financial data that is consistent with corporate culture, societal values, ethics, etc., as society moves forward in this space.

Again, there might be opportunity to carry best practices from regulated banking into other applications where, frankly, there may not be rules of the road, or there may be different standards of data quality, data use, or rules around sharing.

Sandie O’Connor

NATASHA BLYCHA: WHAT ARE THE PRINCIPAL WEAKNESSES AND IMPERFECTIONS/RISKS OF AI AND HOW CAN HUMANS MANAGE THEM?

I don’t think we should think about AI and its associated risks and opportunities based on the AI we are using now. It’s easy to make general statements about what’s right or wrong with AI when you’re thinking about, for example, a large language model or a narrow AI model that is set in a specific domain. In the future, it is highly likely that AI will be ubiquitous and in everything.

If AI is in everything, then the range of ways that it could be right and the range of ways that it could be wrong are manifold. It’s difficult to even quantify – i.e. what are the threats and what are the opportunities when AI is in your vacuum, in your car, or even in the legal structure of a corporate entity?

A better focus may be to instead work out what going forward we will value humans doing or being responsible for, and then use that framework to establish the legal changes we need to make to underpin that position.

Are there any industries, roles or governance positions that we would like to reserve for humans?

Given that most organisations value profit and efficiency – what impact will that have on human employment if an algorithm or a bot can outpace a human worker? We like to think that humans are valuable, but will corporate duties as they currently stand allow us to value human outputs if they are technically “inferior” (i.e. slower, more expensive, and less accurate) than algorithmic outputs?

Of course, even this analysis and focus on “human” value may also be flawed; after all, in the future it is very likely that humans will also have embedded AI. The more I think on these things and the more we work in policy in this space, the less confident I am that there is a clear right answer or an ethically obvious approach. What we do know is that AI is here to stay, so we need to still play the ball and cross our fingers that we have installed the best legal guardrails in the game.

Natasha Blycha

SCHELLIE-JAYNE PRICE’S THOUGHTS ON CORPORATE GOVERNANCE AND THE POTENTIAL AND RISKS OF AI

Schellie-Jayne Price

WHY IS AI CORPORATE GOVERNANCE IMPORTANT? WHY NOW?

AI isn’t new.

But the AI landscape has changed dramatically: the rapid evolution of increasingly capable AI; the rise of foundational models capable of doing many different tasks – the ‘Swiss Army knife’ of AI, if you will; and the ease of use, or so-called “democratisation,” of AI, enabling people with zero computer science skills to use it in ways previously unimaginable.

WHY SHOULD BOARDS BE FOCUSED ON AI?

These advancements require Boards to engage proactively in understanding and governing AI. Our experience suggests that many Boards are scrambling to ensure their organisations not only capture the opportunities of AI, but also manage and mitigate associated risks and ethical concerns. The title “AI – Promise or Peril” of the Salzburg Global Corporate Governance Forum cleverly describes this conundrum.

WHAT QUESTIONS SHOULD BOARDS BE ASKING?

1. Does the Board have the right skills to govern AI? Boards must evaluate their collective expertise in AI, identifying any skill gaps that could impede effective governance and strategic oversight. A common lexicon and understanding of AI is foundational to enable Board members to engage in nuanced discussions about AI and to ask the higher-order questions about corporate risks and opportunities. A positive corporate culture will be increasingly important as organisations navigate a future of dramatic and rapid change, increased uncertainty, and workforce fear. The SGS Corporate Governance Forum has provided a valuable opportunity to explore some of the common challenges facing corporations in navigating AI governance in the vibrant, inclusive SGS environment of sharing and learning.

2. Does the Board know what the AI risks are? It’s imperative for the Board to have a broad understanding of AI-related risks to ensure robust governance. There are new AI risks, particularly those arising from generative AI, including risks to which corporations are already exposed regardless of whether they choose to adopt AI.

Does the Board appreciate what those new risks are, and has it considered how new systemic risks may be amplified by the markets? Understanding potential AI risks enables the Board to develop strategies for effective risk management, risk tolerance, and oversight.

3. Does the Board know what the AI opportunities are?

The Board’s awareness of AI-driven opportunities is critical to avoid obsolescence akin to a ‘Kodak moment’. Is the corporation implementing measures to capture opportunities consistent with its risk appetite? What are competitors doing? Further, is AI being considered not merely to do new things, but to envisage business transformation akin to an ‘Uber moment’? Proactive identification of AI opportunities is essential to ensure the company capitalises on opportunities rather than being left behind.

4. Is the Corporation capturing the value of its data? Data is foundational in AI. Authentic, high quality data is required to train AI (machine learning) systems and continual refreshment of data is necessary to maintain their relevance and accuracy. Data is a valuable asset and data management, utilisation, procurement and commercialisation is increasingly important. The Board should scrutinise data strategies to ensure they underpin AI initiatives and drive value creation.

5. Does the Corporation have the right talent?

The competition for key AI talent is on! With this in mind, the Board should prioritise AI talent development, acquisition and retention. Corporate culture is critical to address workforce concerns surrounding AI, particularly fears of job displacement. Boards should actively engage in transparent communication and invest in re-skilling and upskilling initiatives. A culture of continuous learning, resilience and adaptability is needed to recast AI as a positive opportunity for enhanced work efficiency and professional growth. The Board’s focus on building and nurturing a diverse team proficient in AI management underscores the importance of human expertise in harnessing AI’s full potential.

HOW DOES SALZBURG GLOBAL SUPPORT THE DEPLOYMENT OF RESPONSIBLE, ETHICAL AI THAT BENEFITS HUMANITY?

Salzburg Global understands that human connection across the globe is a force multiplier to navigate the most critical challenges our world faces. AI is one of them. It is my vision that AI could also help us solve some of the biggest issues facing humanity. To do so we need connection in a very human and personal way – and that in person connection with inspiring and diverse leaders from around the globe is what I appreciated the most from my time at the Salzburg Global Corporate Governance Forum.

ALEX JENKINS: WHAT IS THE MOST REVOLUTIONARY AI TECHNOLOGY IN EDUCATION SO FAR?

I think that there has been a promise in education that digital technology would uplift the standard of education that we can provide our students. And it has largely failed.

There have been all kinds of technologies that have helped us in the classroom. But what we haven’t seen is an increase in the educational outcomes of students. Artificial Intelligence stands to revolutionize education. And there’s one area in particular where I think it will be amazing, and that is the ability to provide a tutor to every child.

If we look back at the history of education, the classroom model, where we have one teacher for about 30 students, dates back to the Industrial Revolution. And it hasn’t really changed that much since then. Now, one other thing that we know about education is that when students receive one-on-one tutoring, their outcomes improve by about two standard deviations, and that’s massive. That’s the difference between an average student becoming an exceptional student and a below-average student becoming a great student.

Alex Jenkins

As a society, though, we’ve never had the resources to provide a tutor for every child. I think with AI, we have the opportunity to provide a tutor for every child. We can change our education system to a method that allows something called mastery learning.

What I mean by that is that currently, in a classroom, you might learn fractions for three weeks. At the end of the three weeks, you move on to the next topic, and it doesn’t matter so much if some of the students in the class haven’t mastered what it means to do a fraction. The difference between mastery learning and that kind of classroom model is that, in mastery learning, no one moves forward to the next topic until they’ve got 100%. So currently we have a situation, particularly in mathematics, where students really lose faith in their own ability to do maths, because if they haven’t mastered a previous topic, it’s like building a house on bad foundations: everything falls apart.

We are in many ways leaving the educational outcomes of our students up to chance: that they get a good teacher, that they are able to master the material in the time that is provided to them. What I’m proposing instead is a model where a teacher orchestrates the class and an AI tutor works with every child and ensures that they do not move on to the next topic until they have mastered the previous one.

That allows students to learn at different rates, and for the AI tutor to focus on its student, while the teacher can orchestrate the whole classroom and not worry that some students are still doing fractions while others are doing multiplication, or whatever it might be. This is an experiment worth doing, and Khan Academy has released an AI called Khanmigo. It’s currently in beta testing, but I believe it will transform classroom education as we know it.

I think we will look back in ten years and think, isn’t it crazy that we thought that we could educate a classroom of kids with one teacher? That is the future I would like to see and something that I’m very excited about.

NAVIGATING AI’S MORAL COMPASS: UPHOLDING HUMAN RIGHTS AND GENDER EQUITY

Salzburg Global Fellow Michelle Odayan’s perspectives on inclusive AI, bias mitigation, and corporate board responsibilities

Michelle Odayan is an advocate of the High Court of South Africa, a social entrepreneur, and the cofounder of IndibaConsult, a human rights-focused professional services and development practice. She has substantive experience working with government, the private sector, and civil society in high-impact skills development and decent work, social and economic capacity-building interventions, business and human rights integration, gender and women’s rights, access to justice, and technological advancement. She currently serves on the boards of several large companies and state-owned entities.

Audrey Plimpton, Salzburg Global Communications Associate: What are the opportunities and risks of artificial intelligence (AI), especially in the areas of human rights, women’s rights, and equality?

Michelle Odayan, Partner and Co-founder, IndibaAfrica Group: The AI phenomenon is here. It’s certainly something that we will incorporate into substantive business processes, for personal use, and in different ways to assist us in being generative and efficient with time, because there’s a time advantage to using generative AI. I do think that with it comes some disadvantages. I’m a novice in the field of AI, but I do feel like we have to be careful not to exaggerate its value against the human being. There’s much more work that has to go into the development process of all of the various AI tools and platforms around hidden bias or discriminatory behavior. It is clear that some of the race/class considerations have been aggregated from contexts where there isn’t an aliveness to being inclusive and there isn’t an aliveness to being responsible for shifting stereotypes and categorizations of different groups. For me, the concern around an inclusive language model for AI is going to be really important, because I do feel that it has the potential to discriminate unconsciously. That would have dire effects on business operations and on society more generally if we are striving for an inclusive world order. I think there’s a hidden bias against women in AI. The overemphasis on gender neutrality is also problematic, because I think women experience life very differently. Machine learning and women’s lived experiences and reality are not at the point that I would like them to be in terms of a more critical focus on shifting gender norms and behaviors so that women feel truly enabled and supported in an inclusive way through all the AI tools that are available.

Michelle Odayan

AP: How can we ensure the ethical use of AI across businesses and companies?

MO: Ethical considerations will remain a human responsibility. If we take the example of board oversight of the implementation of generative AI in business processes, it’s going to be important that critical oversight of AI and all its perils and promises is built into the board process. The philosophy of the organization in how it attempts to utilize AI for its business needs to be set at the board level and then cascaded into an implementation and execution framework with which the board is able to track progress and measure its efficacy and its value to the business. It is the key responsibility of the board to determine, through the ethics around AI’s implementation and the philosophy with which it is used as an enabler, a zero-harm approach to people and the planet and fair, just, reasonable, and responsible corporate behavior.

AP: How are the companies that you serve on the board for utilizing AI in their corporate governance and operations?

MO: In some of my board roles with big companies and state-owned entities, AI has been used in creative ways, but I also feel like we haven’t explored the full potential. I leave here with a deep curiosity about its potential, but at the same time, I open myself up to deeply understand the risks and the perils associated with incorporating generative AI in a substantive way. I understand that there are so many technological advancements and finding what’s the best in our digital universe is going to take a fair amount of time. I think they’re all going to be building on some of the knowledge I take back and possibly allow me to influence our board to have a session that focuses on the digital universe and generative AI… I’m looking forward to influencing the start of a series of discussions at the board level around the potential and the perils of AI and the speed with which we actually use these tools to make the business more effective and understand the value added to the business.

THE FUTURE IS FEMALE: AI, IDENTITY, AND (RE)CLAIMING SPACE

Salzburg Global Fellow May Tan advocates for ethical AI and inclusive corporate leadership

May Tan is the Board Director of Hong Kong-based CLP Holdings Limited with over 30 years of experience as an investment banker, CEO, and executive director across various transnational companies.

AI, the Internet, and all things inevitable

When asked to reflect on the program theme “Promise or Peril? Artificial Intelligence and the Future of Corporate Governance,” May Tan underscored that in today’s highly digitized world, the dominance of artificial intelligence (AI) is inevitable, since it can be used as a catalyst to enable different kinds of businesses. Against the backdrop of geopolitical issues and corporate governance, May strongly believes that AI will become more relevant in the future.

From a corporate perspective, she asserts that companies should work with various sectors to regulate AI, not in a way that restricts advancement but rather encourages the “right” type of innovation.

She cautioned that AI may be susceptible to various kinds of abuse, as it can be misused for personal or political ends, saying, “I’m from the financial industry; I’ve seen how crypto and other instruments have been abused. So, I suppose that’s where the ‘peril’ comes in.” Despite this, May believes that AI holds great promise as a transformative tool to foster innovation. She compared the rise of AI to the emergence of the Internet decades ago, adding, “I cannot imagine a life without the Internet. I imagine us having another conversation ten years from now, and we’ll be like: ‘Can you imagine life without AI?’”

According to May, we should not look at technological advances from a simplistic binary view of good versus evil. She believes that “for us to move forward in the future, we need both the human and the machine”, especially at a time when there has been a growing trend of corporations utilizing AI to analyze various metrics such as customer data, delivery processes, and pricing. As a board director, she emphasized that “it’s really about using these tools to look at the strategy of the organization and to make sure that it continues to fit its purpose while also maintaining checks and balances.”

On identity, authority, and (re)claiming space

When May began her career, she did not see many people who looked like her represented in boardrooms. May further explained, “When I was entering the banking industry in the eighties, there were not many female role models. I was happy to do the work and I learned a lot along the journey. But I never really expected to get to where I got to in terms of my career.”

The representation of women in top executive positions is severely lacking despite their undeniable potential. In fact, a report revealed that women in the Asia-Pacific (APAC) region are underrepresented in leadership positions and are less likely to be promoted than their male colleagues. This glaring reality was something that May personally experienced in her career. For her, rising through the ranks necessitated making the most of the environment she was subjected to daily. “Was it tough? Yes, because I always felt like I had to work harder to prove myself. I had to show that I deserved to be in the same room as a lot of my colleagues,” May added.

May asserted that a person’s background does not determine what can become of their career. For May, you become deserving of the place that you are occupying in the workforce through hard work and results. However, she also believes a person’s identity provides a more nuanced lens for seeing different perspectives. “Sometimes it is very difficult to assert yourself when you’re a woman because, more often than not, you’re the only different voice in the room. So, it’s not as loud as ten of the men in the room. It helped that I was Asian. I was giving a perspective that a lot of my white male colleagues could not give. I knew the language and I knew the culture. My corporate clients loved me because my approach was quite different,” she recalled.

In describing her brand of leadership, May stated that as a person in a position of authority, there is a stereotype that women are “soft” and that you must be authoritative to be respected. But for her, there is value in being firm about what you want to achieve while still maintaining empathy for other people. She further explained, “I take the view that people have pride and self-respect. So, if I have to deliver a tough message, I tend to do it one-on-one and in private. Is that showing softness? No, I think that’s showing respect for the individual. You don’t want to shame them, and you should always put yourself into that person’s shoes while making sure that you try to nurture them as much as possible.”

May’s lived experience is also what prompted her and two others to establish ACcelerate, a program intended to bridge the gap in senior female representation in the C-suite and accelerate the progression of women to senior management roles. Since its inception, the program has trained and equipped over a hundred women, and in 2023 it welcomed its fourth cohort of aspiring female executives. May considers this non-profit work her way of giving back the opportunities she was afforded throughout her career.

THE EMERGING ROLE OF AI IN CORPORATE OVERSIGHT

Salzburg Global Fellow Sandra Lawrence provides a cross-sector view of AI’s impact on corporate governance

Sandra A.J. Lawrence is an accomplished executive with extensive experience in financial and corporate governance leadership, spanning roles in the healthcare, finance, and non-profit sectors. Her awards and affiliations have included: NACD’s Directorship 100; WomenInc’s Most Influential Corporate Directors; the C3KC Award; the Executive Leadership Council; Savoy Magazine’s Top 100 Corporate Directors; Directors & Boards’ Directors to Watch; Ingram’s 250 Most Powerful; and many others. She currently serves on the boards of multiple listed companies as well as on several non-profit boards.

Sandra A.J. Lawrence

Audrey Plimpton, Salzburg Global Communications Associate: You have decades of executive leadership experience in healthcare, real estate, financial services, and personal computing. What are some lessons you have learned about corporate governance from these diverse sectors?

Sandra Lawrence, Independent/Corporate Director: Initially, you look at them and think that they’re all different and that the rules that apply to one don’t apply to the other. The one thing that I’ve learned is that even though the rules might look different in one industry, if you take those lessons to another industry, it will sometimes inform how you shape the rules for that one. You might make some revisions [and] you might look at things differently. You might be more critical of information that’s coming in because you can recognize that there may be other data out there. It just broadens that toolkit and your information base.

AP: What do you view as the opportunities and risks of AI for corporate governance?

SL: The opportunities are huge. I think that it’s almost as if it’s not a choice anymore. It’s already here and it will just be expanding tremendously over years to come. So that opportunity is one that we just kind of accept, not necessarily have to go out and find, because it’s there. The risks are that having more data and more access to different uses of data also means that you may not necessarily know what is the most meaningful to you. I think how we use it [and] how we define how we need to use it will be important. Otherwise, we could become overwhelmed. It’s like when email came into play, and everyone thought that it would make the world so much easier. But it just extends your workday. It gives you access to a lot of information. But if you’re not careful about how you catalog and monitor it, it can be overwhelming. AI can be the same way. It’s important for us to figure out upfront what that business model is [and] what the strategies are and let that drive how you adopt AI.

AP: How are the companies on whose boards you serve utilizing AI in their corporate governance and operations?

SL: The boards on which I serve are all having conversations about AI and even generative AI. We’ve had staff bring in information for us. We’ve had consultants come in to talk with us. We’re recognizing that generative AI is starting to make its way into those businesses already, but we’re still trying to figure out exactly what it means for us and how it fits into the companies’ operations, the companies’ governance, and probably most importantly, the risk profiles of the companies.

AP: How do you think the discussions from this program can inform your current and future work?

SL: There are a number of ways. One of them is that just hearing people around the table and knowing what their backgrounds have been, I now have resources I can go to when I have questions beyond this program. But as important has been the fact that around the room we have people from so many different geographies. They look at the world in different ways because the structures in different countries are different. It’s been helpful to me to understand some of the reasons why. But it also makes me ask why our systems need to be designed the way that we might be contemplating them. Maybe some of the things that are happening in other places are appropriate for my environment. I like the cross-fertilization that comes from taking information from one place and trying to apply pieces of it to someplace else… [At Salzburg Global,] we were having a conversation about how we, as a world, can get people on the same page about some of the important values that should be embedded in financial models.

PROGRAM PARTICIPANTS

FELLOWS

Nick Allen Director, CLP Holdings LTD., Hong Kong

Maria Axente Head of AI Public Policy and Ethics, PwC UK, UK

Natasha Blycha Managing Director, Stirling and Rose, Australia

Walt Burkley General Counsel, Capital Group, USA

John Cannon Partner, Shearman & Sterling LLP, USA

Chris Dolman Executive Manager, Data and Algorithmic Ethics, Insurance Australia Group, Australia

Bharat Doshi Chairman, Mahindra Accelo LTD., India

Aline Eibl Independent NED, Board Advisor, Venture Capital MD

Katherine Forrest Partner, Paul, Weiss, Rifkind, Wharton & Garrison LLP, USA

Anne Gates Director, Kroger, USA

Positions correct at time of session, October 2023

Jeffrey Grant Board of Trustees, Aerospace Corporation, USA

Holly Gregory Partner, Sidley Austin LLP, USA

Liselotte Hägertz Engstam Chair, NED, Boards Impact Forum, TietoEvry, Cint, Zalaris, Transtema, BoardClic, Digoshen, Climate Governance Initiative, Sweden

George Hines Chief Innovation & Technology Officer, Lithia & Driveway (NYSE:LAD), USA

Alex Jenkins Director, WA Data Science Innovation Hub - Curtin University, Australia

Rachael Johnson Global Head of Risk Management and Corporate Governance, ACCA, UK

James Killerlane Corporate Secretary, Managing Director and Deputy General Counsel, BNY Mellon, USA

Thomas Lang Global Head of Medical Affairs, Access & Partnerships, Novartis Pharma AG, Switzerland

Anastassia Lauterbach Managing Director, The ExCo Leadership Group, Germany

Sandra Lawrence Independent/Corporate Director, Evergy, Brixmor, Delaware Funds by Macquarie, Sera, NACD, Hall (Hallmark) Family Foundation, USA

Heather Laychack Vice President & Chief People Officer, The Aerospace Corporation, USA

Chris Lee Senior Partner, FAA Investments, Hong Kong

Monica Lopez Co-Founder & CEO, Cognitive Insights for Artificial Intelligence, USA

Katharina Meran Partner & Consultant, Meran & Beacock, Austria

Bob Mundheim Of Counsel, Shearman & Sterling, USA

Melissa Obegi President, Conduit Capital U.S., USA

Sandie O’Connor Board Director, BNY Mellon, Terex, Ripple, YMCA, USA

Michelle Odayan Chairperson / NED, FP&M SETA / Legal Aid South Africa / Daybreak Farms, South Africa

Barak Orbach Robert H. Mundheim Professor of Law & Business, University of Arizona, USA

Shellie-Jane Price Partner, Head of AI Advisory Practice, Stirling and Rose, Australia

Stephen Scott CEO, Starling, USA

David Simmonds Chief Strategy, Sustainability & Governance Officer, CLP Holdings Limited, Hong Kong

May Tan Board Director, CLP Holdings Limited, Hong Kong

Vikas Thapar Board Member, PE Funds and Financial Institutions, Canada

Alexandra Walsh JD/MBA Student, University of Pennsylvania, USA

STAFF

Nicola Daniel Program Director, China Forum

Olisa Dellas Development Associate

Charles Ehrlich Director, Peace and Justice

Benjamin Glahn Deputy CEO and Managing Director, Programs

Faye Hobson Director, Culture

Aurore Heugas Communications Manager

Charlotte Müer Program Manager, Health

Soyoung Park Program Intern

Audrey Plimpton Communications Associate

Antonio Riolino Senior Program Manager

Isabelle Weber Impact Fellow

REPORT AUTHOR

Alexandra Walsh is a dual-degree law and business candidate at the University of Pennsylvania in Philadelphia, USA. Prior to attending Penn, Alexandra received her bachelor’s degree in history and literature, as well as a minor in the Classics and language citation in Spanish from Harvard College. After graduating, she was awarded a Fulbright Scholarship to live in Madrid, where she worked for a Spanish NGO promoting educational advancement for Gitano youth. Following this, she received her MSc in child development and education from the University of Oxford. Alexandra has previously worked in the education and community advocacy spaces at various non-profits. She has also clerked for the United States District Court for the District of Columbia and worked as a summer associate at a boutique law firm in Washington D.C. that focuses on health services. Alexandra is passionate about equity and interested in the impact of a rapidly digitizing world on social outcomes.

CONTACT

For more information contact:

Benjamin Glahn, Deputy CEO and Managing Director, Programs bglahn@SalzburgGlobal.org

Antonio Riolino, Senior Program Manager ariolino@SalzburgGlobal.org

Aurore Heugas, Communications Manager aheugas@SalzburgGlobal.org

For more information visit: www.SalzburgGlobal.org

SALZBURG GLOBAL SEMINAR

Salzburg Global Seminar is an independent non-profit organization founded in 1947 with a mission to challenge current and future leaders to shape a better world.

Together with our world-spanning network of 40,000 Fellows, we have been at the forefront of global movements for change for 76 years, with significant impact on individuals, institutions, and systems.

Whether at our home of Schloss Leopoldskron, online, or in locations around the world, our programs are inclusive, interdisciplinary, international and intergenerational, and are designed to provide a global lab for innovation and transformation.

We convene cohorts of passionate changemakers across diverse fields and backgrounds. We develop and curate networks that support collaboration, share innovations with new audiences, and expand our impact by working with partners around the globe.

We are supported by a combination of institutional partnerships, generous individual donations and revenue generated from our social enterprise, Hotel Schloss Leopoldskron.

SALZBURG GLOBAL CORPORATE GOVERNANCE FORUM

The Salzburg Global Corporate Governance Forum enables critical thinking on the changing roles and responsibilities of directors across jurisdictions and cultures. Launched in 2015, its annual meeting explores how corporations can pursue both profit and public good in a fast-moving global environment, taking account of growing risks, disruptions, regulation, public scrutiny and consumer pressure.

For more information, please visit: www.SalzburgGlobal.org
