Staying Current on AI

AI is here to stay, and it’s incredible. But it also comes with terrifying risks that we’ve heard about in the press, including IBM’s Watson, which debuted on Jeopardy! in 2011 and then, in the healthcare space by 2021, sometimes performed at less than 50% accuracy against human clinicians (https://www.theregister.com/2022/01/31/machine_learning_the_hard_way/).

Other significant AI failures include Tesla crashes attributed to the Autopilot feature, Amazon’s AI recruiting tool that showed bias against women, an AI camera that mistook a linesman’s head for the ball, Microsoft’s AI chatbot that turned sexist and racist, and a false facial recognition match that led to a Black man’s arrest (https://www.jumpstartmag.com/ai-gone-wrong-5-biggest-ai-failures-of-all-time/).

Even with these failures, AI has become mainstream, and pundits have said if you don’t get on the bandwagon now, you will be behind, and may never catch up!

“With the release of ChatGPT in November 2022, OpenAI “suddenly” and shockingly threatened to overthrow Google’s hitherto total dominance of internet search. In the process, it also became a $29 billion dollar organization, seemingly overnight. Microsoft, OpenAI’s largest stakeholder, will reportedly receive the lion’s share of OpenAI’s profits for years to come.

Board members and corporate execs don’t need AI to decode the lessons to be learned from this. The lessons should be loud and clear: If even the mighty Google can be potentially overthrown by AI disruption, you should be concerned about what this may mean for your company.” (https://www.forbes.com/sites/glenngow/2023/02/09/if-chatgpt-can-disrupt-google-in-2023-what-about-your-company/?sh=304216e24d39)

Artificial Intelligence is dominating the headlines, as it should. AI holds tremendous benefits, and risks.

Regulations are coming, but what will they be, and how soon?

Below is an overview of a closed-door meeting held in the U.S. Senate in September. The key question: what does AI oversight mean for board members in the short term, and what will it mean in the long term?

“The US's biggest technology executives on Wednesday loosely endorsed the idea of government regulations for artificial intelligence at an unusual closed-door meeting in the US Senate but there is little consensus on what regulation would look like, and the political path for legislation is difficult.”1

“Senate Majority Leader Chuck Schumer, who has made AI one of his top issues as leader, said regulation of artificial intelligence will be “one of the most difficult issues we can ever take on” and he listed some of the reasons why: it is technically complicated, it keeps changing and it “has such a wide broad effect across the whole world,” he said.”1

Schumer asked everyone in the room – including almost two dozen tech executives, advocates, and skeptics – whether government should have a role in the oversight of artificial intelligence, and “every single person raised their hands, even though they have diverse views,” he said.1

Executives attending the meeting included Sam Altman, CEO of OpenAI; Elon Musk, CEO of Tesla and X; Meta’s Mark Zuckerberg; former Microsoft CEO Bill Gates; and Google CEO Sundar Pichai.

Musk said the meeting “might go down in history as being very important for the future of civilization.”1

“In the United States, major tech companies have expressed support for AI regulations, though they don't necessarily agree on what that means.  Similarly, members of Congress agree that legislation is needed but there is little consensus on what to do.”1  

This lack of consensus and the time it will take to agree on regulations/legislation should be a concern to all board members.  The key question for board members is what to do now.

To leverage the power of AI, I started using the free version of ChatGPT (it’s worth a few minutes to set it up; check out a YouTube video for a quick tutorial). I asked ChatGPT to “write a 250-word blog what are the risks of AI to private board governance.”

If you have not tried ChatGPT from OpenAI, you can ask it a question or make a request, and it provides a response in seconds. The rapid response shows the power of AI. Most of the time I’ve found ChatGPT to be accurate, but not always, so you must double-check it. Verification is vital: it does make mistakes, even while exhibiting full confidence in its responses.
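For readers who want to go beyond the web interface, the same kind of request can also be sent to OpenAI programmatically. The short Python sketch below is purely illustrative; it assumes OpenAI’s official openai package (version 1 or later) and an API key set in the OPENAI_API_KEY environment variable, none of which is needed for the free ChatGPT website.

# Illustrative sketch only: sending the same kind of request through OpenAI's API.
# Assumes the official `openai` package (v1+) is installed and the
# OPENAI_API_KEY environment variable holds a valid API key.
from openai import OpenAI

client = OpenAI()  # reads the API key from OPENAI_API_KEY

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # model name is an assumption; any chat model works
    messages=[{
        "role": "user",
        "content": "Write a 250-word blog on the risks of AI to private board governance.",
    }],
)

# Always verify the output: the model can be confidently wrong.
print(response.choices[0].message.content)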

Below is ChatGPT’s response to my blog request:

Title:  Navigating Uncertainty: Risks of AI in Private Board Governance

Introduction

As Artificial Intelligence (AI) becomes increasingly integrated into corporate boardrooms, private boards face not only opportunities but also potential risks. While AI offers transformative potential, it's crucial to recognize and address the challenges it presents to private board governance. In this article, we explore the key risks associated with AI in the context of private boards.

1. Data Privacy Concerns:

- AI relies on past data sets, and this raises concerns about data privacy. Boards must ensure that sensitive corporate information and personal data are adequately protected and that AI systems comply with data privacy regulations.

2. Ethical Dilemmas:

- The deployment of AI algorithms can sometimes raise ethical dilemmas especially when it comes to making decisions that impact employees, customers, or stakeholders. Boards must ensure AI aligns with their organization’s ethical principles.

3. Bias and Fairness:

- AI systems may inherit biases from their training data, potentially leading to biased decision-making. Private boards must actively work to identify and correct these biases to ensure fairness and equity.

4. Cybersecurity Threats:

- As AI systems become more sophisticated, so do the threats they face.  Boards must be diligent in securing AI systems against cyberattacks and potential vulnerabilities.

5. Reliance on AI:

- Over-reliance on AI for decision-making can lead to the erosion of human judgment and accountability. Boards should strike a balance between AI support and human oversight.

6. Lack of Transparency:

- Some AI models are inherently complex and lack transparency. Boards may struggle to understand how AI systems arrive at their conclusions, making it difficult to justify decisions to stakeholders. 

7. Cost and Resource Implications:

- Implementing and maintaining AI systems can be costly. Private boards must weigh the potential benefits against these expenses and allocate resources accordingly. 

8. Regulatory Compliance:

- Evolving regulations around AI can pose challenges for private boards. Keeping up with and adhering to regulatory changes is crucial to avoid legal issues. 

Conclusion

While AI holds immense potential to enhance private board governance, these risks should not be underestimated.  To harness the benefits of AI while mitigating its risk, private boards must adopt comprehensive AI governance frameworks. This includes transparent AI development, ethical guidelines, robust cyber security measures, and ongoing education for board members. By addressing these challenges practically, private boards can navigate the AI landscape with confidence and ensure responsible and effective governance in the digital age.

AI is a huge topic and staying on top of regulatory requirements and avoiding risks is paramount for board members. 

My inbox is inundated with webinars on AI, and I attend them regularly. One of my favorites was offered by the Silicon Valley Directors’ Exchange (SVDX.org) several months ago – “AI Implications, Are You Ready?”

“Artificial intelligence has spiked in news coverage recently, especially with the Microsoft multibillion investment in OpenAI.  As expected, there are a multitude of both challenges and opportunities as the AI technology becomes more advanced.  Will this technological change upend or advance business?  It’s not always obvious how AI will change any particular business, but it’s happening faster than anyone thought possible.  Board members need to prepare.  Join us for a session that will discuss the current state and rapid advancement of AI and its implications for business and for board member fiduciary duties.” 

The panel included a wealth of expertise on the subject.

https://www.svdx.org/calendar/2023/3/9/webinar-ai-implications-are-you-ready

In November, I’m attending the Stanford AI Symposium: AI Applications, Risks, and Oversight for Business, an event sponsored by the Rock Center for Corporate Governance at Stanford University.

And between now and then, I’ll be a sponge for all AI-related updates, as I serve on several private boards of companies that offer AI solutions. AI is here to stay. It is our responsibility to stay current, especially on regulatory changes.

1Mary Clare Jalonick and The Associated Press, “Tech Industry Leaders Endorse Regulating Artificial Intelligence,” Times Colonist, Thursday, September 14, 2023, B3.


ABOUT PATRICIA WATKINS

Patricia Watkins is an experienced board member, Go-To-Market (GTM) strategist, and sales growth expert. She has held senior leadership roles in Sales, Marketing, Alliances, and Channels with Fortune companies including HP, Teradata, AT&T, and NCR, as well as a number of start-ups in Silicon Valley. Patricia has led teams ranging from new teams starting at $0 to existing teams delivering in excess of $800 million in sales.

 

She is currently an Independent Board Director on one public board and one private board, and she serves on four Advisory Boards.

 

She is the #1 Amazon best-selling author of two books, Driving More Sales: 12 Essential Elements and Land and EXPAND: 6 Simple Strategies to Grow Your Top and Bottom Line.

 

She graduated with a BBA from The University of Texas, and an MBA from Santa Clara University, both with honors.

 
