Damns Given with Nick Richtsmeier

Using AI without It Using You: AI Risk, Labor, Dangerous Incentives, What Joe CEO Should Do with Tim Marple

Season 2 Episode 6



Tim Marple has a PhD in political science, spent time at Google and OpenAI, and left before his equity vested, unwilling to accept what staying would cost him. Now he co-leads Maiden Labs, a nonprofit focused on measuring emerging technologies' effects on society and the economy. In short, he's the guy to talk to about what happens when you build AI into your business, and about what's really going on with Anthropic, OpenAI, Google, and the other big players.

Nick and Tim start with the Anthropic Mythos announcement and work their way through some of the most important questions in the AI conversation right now: 

Why do AI labs have every incentive to scare you? 

What's the difference between framing and informing? 

Why is the government their most important buyer? 

What is the actual impact of AI on careers and job prospects?

And what should a CEO who isn't an AI teetotaler actually do?

They cover the strategic independence argument — why locking yourself into one AI provider right now is the equivalent of making all your employees sell their cars and take Uber, before Uber raised its prices. They cover the Klarna story and what the CEO didn't tell you when he said he rehired all the humans. They cover labor displacement, the gig-economification of knowledge work, and the project Tim's running at Maiden Labs called Cubit: measuring job vulnerability for almost every role and task imaginable. 

And they end, genuinely, with hope. Not the utopian kind. The kind that comes from sitting with your disappointment long enough to see what you actually believe.

In this episode: The Anthropic Mythos announcement and what to make of it. Why AI labs have strategic incentives to misrepresent their models. The blackmail story and what the documentation actually showed. Why emotional reasoning beats analytical reasoning in a vacuum of meaning. The government as AI's most important and most gullible buyer. The case for strategic independence over AI teetotaling. What Klarna didn't tell you. The O*NET and HELM datasets and how Maiden Labs is measuring labor vulnerability. Why the discovery moment ends — and why every lab knows it. The Uber metaphor and what it means for your tech stack. And why Tim left OpenAI before his equity vested.

Subscribe at damnsgiven.com

Join our community at TrustMadeGrowth.com 

Work with Nick at www.CultureCraft.com

Trust-Made Growth®

Leaders who want to understand how to reformat their growth strategies to address trust decay should explore more at CultureCraft.com

Independent Professionals can join the free community exploring how to return trust to our commerce and our communities at trustmadegrowth.com 

Have a business topic you want us to decide if it's working or broken? Have a question about the episode? You can email us at podcast@culturecraft.com.