Column: To navigate the promise and peril of AI, resist the urge to fall into extreme narratives

Editor’s note: This is guest commentary from Dave Siegfried, CEO of Official AI, a Seattle startup building a marketplace designed to help people control their digital likeness and connect them with marketers interested in using AI-generated talent.

AI is everywhere these days, and I get it — it’s overwhelming.

If you’ve been paying attention to the headlines, you’ll notice two main narratives.

On one side, people think AI is going to solve all our problems — making life longer, easier, and maybe even better.

On the other, you have folks saying AI is a disaster waiting to happen: that it will replace jobs, stifle creativity, and leave us drowning in meaningless content.

Honestly? Neither of those extreme views feels quite right.

Any time we face a big technological shift, it’s natural for people to lean into these polarized positions. But I think the truth is somewhere in the middle.

It’s not all doom and gloom, nor is it the silver bullet some are selling. What we need to do is embrace a healthy curiosity — exploring what’s exciting about AI, but also being honest about the risks.

Extreme takes grab the spotlight, but most people are in the middle

From my conversations with people — whether it’s talent, brands, or just folks in my network — I’ve noticed that most don’t have extreme opinions about AI. They’re curious, they want to understand, but they haven’t fallen into the “AI is everything” or “AI is evil” camps.

And yet, the loudest voices in the room — often those who have something to gain — dominate the narrative.

When you’ve raised billions in investment, like some of the big AI companies, you have a reason to cheerlead. But it’s important to remember that even those leaders, people like Sam Altman or the folks behind companies like Anthropic, have financial incentives driving their optimism.

They’ve got access to the roadmaps and early experiments, which we don’t. That gives them reason to push the idea that AI will revolutionize everything — but it doesn’t make them impartial.

At the same time, I understand the concerns being raised by artists like Joseph Gordon-Levitt, who’s been unapologetically vocal about the risks AI poses to creativity. He worries that AI is using years of human effort — actors’ performances, writers’ scripts — to train models without permission or compensation.

He’s not wrong. There’s a real danger here, especially for industries like entertainment, where the lines between authentic art and AI-generated content are blurring fast.

The challenge of deepfakes and AI-generated content

We’re also seeing another problem unfold: AI-generated deepfakes. These tools can create hyper-realistic clones of public figures, making it easy to spread misinformation.

It’s becoming so serious that California recently passed a deepfake law aimed at preventing election-related AI trickery. On the surface, it seems like a step in the right direction, but it also raises real First Amendment concerns. Parody and satire have long been protected forms of speech, and clamping down too hard on AI risks crossing into censorship territory.

A recent court ruling blocked much of California’s deepfake law, which highlights the tension we’re facing: How do we protect ourselves from the misuse of AI without stifling creativity and free speech? It’s not an easy balance to strike, and that’s something we’ll need to keep working on.

Why we need curiosity, not fear

For me, the most important thing is to approach this whole AI conversation with curiosity.

It’s easy to get caught up in the extremes, but what we really need is to engage with the technology thoughtfully. AI has the potential to do incredible things — like breaking down communication barriers, extending life expectancy, or even creating new forms of art we haven’t imagined yet.

At the same time, we need to be mindful of the risks. AI can’t be embraced blindly. We have to set guardrails and protections in place to ensure that it’s used ethically and fairly. That means taking the time to understand both the good and the bad — something that only happens if we resist the urge to fall into extreme narratives.

Guardrails, consent, and trust: Building a sustainable future

A key part of navigating this space will be ensuring that those contributing to AI — whether they’re artists, actors, or everyday users — are properly compensated and credited.

As Gordon-Levitt pointed out, studios and tech companies are already using content from years of creative work to train their models. If AI is going to reshape industries, the people whose work made that possible should benefit, too.

AI also raises questions around trust.

Take deepfakes, for example. While the technology can be used creatively, it can also be weaponized to manipulate public opinion or spread false information. And this isn’t just happening in entertainment — politics is already feeling the impact, with deepfake videos misleading voters.

California’s law may not be perfect, but it shows just how important it is to start building guardrails now, before things get out of hand.

The path forward: Staying curious, staying engaged

So, where do we go from here?

The truth is, none of us has all the answers yet. Even the experts are still figuring things out as we go.

But what I do know is that curiosity will get us further than fear. The more we engage with AI — whether it’s experimenting with tools ourselves or staying informed about the latest developments — the better equipped we’ll be to shape the future we want.

It’s okay not to have everything figured out. This technology is still new, and it’s evolving fast. But if we stay open-minded, set smart guardrails, and ensure fair treatment for everyone involved, I think we can unlock AI’s potential without losing what makes us human.

At the end of the day, it’s not about choosing between optimism and fear — it’s about finding that middle ground where we can innovate responsibly. Because the future isn’t written yet, and it’s up to us to make sure AI is a tool that helps us, not one that controls us.