The AI Artifacts Podcast

Episode 4: Guest Jade Newton on how to approach responsible AI and regulation

Plus: Sam Altman fired at OpenAI, Ed Newton-Rex leaves Stability AI, and Sundar Pichai addresses regulation

This week’s episode of "The AI Artifacts Podcast" kicks off with fresh news that Sam Altman was ousted as CEO at OpenAI on Nov. 17. Brian and Sarah get into the known details and review other stories, including Ed Newton-Rex leaving Stability AI and Sundar Pichai’s comparison of AI to climate change.

Then, the podcast welcomes Jade Newton, an AI data expert who has worked at large-scale companies in a variety of contexts, as she discusses the future of AI regulation. Get a deep look into standards for responsible use, as well as a very special dog-themed edition of "Two truths and l’AI."

Timestamps for this episode:

[0:00] Intro

[0:27] Breaking news from Friday on Sam Altman being pushed out at OpenAI

[5:07] Altman confirms plans for GPT-5, though training has not yet begun

[7:15] Ed Newton-Rex leaves Stability AI, criticizing fair use approaches toward training data

[12:22] Sundar Pichai compares AI to climate change in regulation context

[17:44] "Two truths and l’AI: Dog edition"

[25:36] Interview with Jade Newton

[28:45] What does "responsible AI" mean?

[32:33] How to solve issues of access

[36:26] Best and worst ideas for regulatory action

[45:43] How encryption and data privacy concerns relate to AI

[48:14] Risk of regulatory capture by incumbents

[51:39] Regulation in the European Union

[55:20] Best practices for companies implementing AI

Links for topics referenced in this episode:

OpenAI’s board fires Sam Altman as CEO (The Verge):

Official OpenAI announcement about Altman leaving (OpenAI):

Podcast episode from 2019 where Brian and Sarah interviewed Mira Murati, who is now OpenAI’s interim CEO (Apple Podcasts):

Altman confirms plans for GPT-5 at OpenAI (Decrypt):

Ed Newton-Rex quits at Stability AI (BBC):

Sundar Pichai makes AI and climate change comparison (CNBC):

Microsoft makes Bing Image Creator changes after Disney copyright complaints (Ars Technica):

Meta’s AI tools enable dog additions to any photo (Mashable):

U.S. Census data on broadband internet access in 2018:

Summary of individual parts in Biden’s executive action on AI (The Markup):

Additional note on the White House’s executive action from Jade:

I think that people need to understand that the White House's executive order on AI is a good first step to putting guardrails in place when it comes to developing, testing, training, and optimizing emerging technologies that use machine learning. 

However, I also think that this executive order is very broad and there is this thinking that AI can solve systemic issues (think discrimination/bias/access, etc.). AI is not going to fix systemic racism, sexism, xenophobia, education funding, etc. These are issues that are rooted in the fabric of what America was designed to be. AI cannot change people's mindsets. And despite the daily advances of technology, there will always be a need for a human-in-the-loop. AI is a tool -- it is not the ultimate problem solver. Humans can use AI to solve some problems, but ultimately AI can't do it on its own.

I think it is important for those at the table (i.e., NIST and whoever else the White House appoints to the committee(s) ensuring implementation of these guardrails) to understand the above. AI can only do what the humans tell it to do. And the systemic issues that are rampant in American culture are too complex to be solved by technology.

Jade Newton on LinkedIn:

More links to news that didn’t make the show:

Discord gets rid of its AI assistant Clyde (The Verge):

Microsoft Ignite announcements related to AI (The Verge):

Reactions to the Humane AI pin (Axios):

Music used in this podcast comes from "Vanishing Horizon" by Jason Shaw and is licensed under an Attribution 3.0 United States License.

Coming Fall 2023: The AI Artifacts Podcast, co-hosted by Brian Warmoth and Sarah Luger, features timely conversations with founders, technical experts, and practitioners to de-mystify AI's current rise, what it does best, and how it will shape the future.