December 2025

Today's Tech with Clare Duffy

Overwhelmed by the dizzying evolution of new technology and artificial intelligence? CNN reporter Clare Duffy '17 is here to help.

Interview by Jessica Murphy Moo

CLARE DUFFY STARTED her journalism career here on The Bluff, at The Beacon, under the mentorship of Nancy Copic. After UP she worked for the Portland Business Journal, earned her degree from Columbia University’s Graduate School of Journalism, and landed a job on the tech beat at CNN.

She quickly learned that tech reporters tend to approach their beat with an eye toward the future. That forward-looking lens makes sense—new technology is always pushing consumers toward what’s next. “But,” Clare said, “I felt that we weren’t always connecting the dots for people about what these technology changes mean for people’s lives in the moment.” So she pitched CNN a podcast, Terms of Service, that walks listeners through present-day changes in the vast landscape of technology, episode by episode, from smartphone use among teens to Fitbit health trackers to artificial intelligence. Under Clare’s steady hand, the podcast is as much a treasure trove of information as it is a public service.

What follows is an edited version of our conversation with Clare about artificial intelligence, plus a list of some of our favorite episodes.

Last year, UP hosted an “AI & Ethics” conference for professors from Catholic universities across the US. What would be the top subject—or blind spot—you’d want to discuss at a conference about AI and ethics?

One potential blind spot is user privacy, both as a consideration for the builders of this technology and as something for users to be aware of as they’re engaging with AI. How much data from our conversations with AI tools are companies collecting and holding onto, and who can access that information? How are they using it? We’ve seen from the social media era just how valuable our personal data is, and social media companies have been inferring things about us with remarkable accuracy based on what we watch, what we click on, and who we’re connected to. It gets taken to a new level with AI, because we’re directly telling these tools things about ourselves. Does that information get used to sell us things or keep us on these platforms for longer than is healthy?

In general, there’s this framing from Silicon Valley right now that the evolution of AI is sort of inevitable—here’s where it’s going and this is what it’s going to look like. But I actually think that individual people have a lot more control over the evolution of this technology than Silicon Valley might want us to realize. We have a choice about how we engage with it, and hopefully more builders recognize that they have a choice about how they build these technologies.

Your episode “Are Your Conversations with AI Killing the Planet?” delved into the environmental challenges posed by AI data centers. Can you summarize these environmental impacts for our readers, and how does this issue affect how you use AI?

I am really concerned about the environmental impact of AI. This is another ethical consideration around this technology.

If you break AI systems down into their most basic components, they rely on data centers, which are basically giant warehouses full of rows and rows of stacked computers. Running those computers requires a ton of electricity and a whole bunch of water to cool them down. And then, of course, there are the rare earth metals that have to be mined out of the ground to create the components for those computers. We’re learning that AI systems are even more resource intensive than traditional computing systems. The expert I spoke with for that episode, Sasha Luccioni, estimates that an AI web search is 30 times more energy intensive than a traditional web search.

These environmental concerns are limiting how much I’m choosing to use it. Every time I think about asking ChatGPT a question, I think, “Is it worth it?” or “Is there a different way to find this information?”

In your “Can AI Help Us Grieve” episode, a daughter made a “grief bot” using her late father’s voice and later regretted the experiment. In your “Dating” episode, a young woman uses AI as a part-time therapist and dating advisor. Are there examples of positive outcomes when people use AI bots as human stand-ins? What cautions should people put in place to avoid negative consequences?

The most successful applications I’ve seen, when it comes to how AI is contributing to or impacting our human relationships, are situations where it’s a supplement to human connection rather than a stand-in or replacement. That dating coach story is actually a great example. The woman I spoke to—Grace—wanted to be able to process dates as soon as she got home, but a human therapist generally isn’t going to be available to chat at 9 or 10 p.m. She found that ChatGPT was a really good way of doing that sort of immediate thinking and processing about her dates. But she was still relying on her human friends, her human connections. AI didn’t become her sole source of support.

But then, on the flip side, the woman who created an AI replica of her late father found that, at least for her, AI couldn’t be a stand-in for her dad. In fact, she felt that when the AI hallucinated—when it made up memories of them together—it interfered with her real memories and her ability to connect with her true memory of her father. It was really heartbreaking. I can understand wanting to find some sense of support through AI in your grief. But in general, I think it’s important to make sure you’re not trying to replace human relationships or spending too much time with AI rather than with people.

And it’s also important to remember that AI is programmed to tell you what you want to hear, to be really agreeable. Of course, we know it’s important to sometimes have critical feedback or tough love, and that is what the humans in our lives do for us.

We’re starting to see companies experiment with different ways—with certain chat features—of reminding people on a regular basis that they’re talking to AI, that they’re interacting with a tool and not a person. Because it is, I think, easy to get caught up in the story that AI can create for you.

Clare Duffy's Terms of Service podcast was advertised in New York City's Times Square in May.


What AI legislation should we be paying attention to?

A lot of the action right now is happening at the state level, although one notable exception is the Take It Down Act, which became law in May. It makes it a crime to share nonconsensual explicit images of people (whether they’re real or created with AI) and also puts some requirements on tech companies to remove those images when they’re alerted to them. That’s a really important example of people on both sides of the aisle coming together to acknowledge a real harm from these systems and take action on it.

I’m keeping a close eye on the debate around whether we actually need new laws and new regulation around AI, or whether we can apply existing laws to address some of the harms from the technology. I don’t think we know the answer to that yet, but I think we’re going to start to get a better sense as we see recent lawsuits, whether they’re around safety or copyright protections, play out.

There is, of course, massive interest in how AI can make businesses more efficient. Is this happening?

In most cases, I think organizations are still in the early stages of figuring out where to apply this technology. A recent MIT study found that 95 percent of organizations that have piloted AI systems haven’t actually seen any benefit to their bottom line, even though everybody’s talking about AI taking jobs and saving companies money. Part of the finding was that companies just haven’t figured out the processes and the best way to roll out AI systems. It’s important for leaders in organizations that are thinking about using AI to actually get their hands on the technology and use it, because there can be a disconnect between what we hear about AI (that it’s going to take our jobs, cure cancer, and make everything more efficient) and what it can actually do right now.

There are efficiency benefits from AI in some fields. I’ve been learning about the ways AI can help with drug discovery, because it can test the interactions between different compounds so much more quickly than a human could. That’s a potentially promising area of study where it’s not just efficiency for efficiency’s sake, but efficiency in doing things that are genuinely challenging for people and that benefit the world.

What do you use AI for? How have you found it useful?

Going back to the environmental piece, I use it sparingly. I do experiment with it because I cover it and need to know how it works, but in my personal life I don’t use it a ton. I don’t use it at all for writing at work, because I just don’t trust that it’s going to be accurate. I don’t even really use it for research, although it could potentially be a starting point. But again, it’s not really faster than a traditional web search or going on social media and finding the experts or sources I want to speak to, so I use it in a limited fashion in my work. Sometimes the things it gets wrong are just so silly and trivial.

So much of writing—good writing—is about the process, about the messiness of the first draft and how you write as a way of thinking through your ideas. And so I don’t know that getting a first draft put in front of you by AI actually results in something better than if you had just done the process yourself.

I got a demo recently of an AI system that basically can read what’s on your computer screen and then do research on the web for you to answer questions about what you’re seeing. I see how this is faster than me going through and doing the research online. But I also wonder what you lose if people stop learning how to read and process information from different sources and draw a conclusion from all of that.

One thing I will say I’ve found to be really cool is that it’s useful for language translation and language learning. I’ve used it when I’m traveling, pointing my phone at a sign or a menu in a language I don’t speak so that I can read it.

Do you have any advice for those who want to use the tool well?

A helpful piece of advice that I’ve heard from people who I think are really smart about this stuff is… if you’re going to ask AI to help you write an email or give you advice about your love life, really prompt it: give it backstory and a specific area of expertise, or say, “I want you to use the works of [this human professional in this field] as you’re thinking about how to respond to me.” You’re much more likely to get a useful answer if you give it a perspective from which to respond to your question.

TERMS OF SERVICE: RECOMMENDED EPISODES

  1. “AI Voice Scams Are on the Rise. Here’s How You Stay Safe”

A father saw his son’s phone number on his cell, picked up the call, and heard his son’s voice telling him he’d been in an accident and needed help. The dad didn’t find out that he’d been scammed until he was at an ATM and his son FaceTimed him to ask what was going on. This episode offers tips to avoid being duped.

  2. “Robot Recruiters: How AI is Helping Decide Who Gets Hired”

How do you write that cover letter and resume, knowing that companies may be using AI as a first screen for applications? This episode offers helpful tips for job seekers.

  3. “Got Kids? Think Before You Post”

How should we think about online privacy for our children? What should we post about on social media? When might it be a good idea to hold off?

  4. “How Fake Videos Are Adding to the Fog of War”

Advice on how to avoid being duped by fabricated videos, even as experts say it’s getting harder and harder to spot a fake.

  5. “Love and Robots: How AI Is Changing the Dating Scene”

Can AI—which is now part of dating apps—be an effective dating coach, matchmaker, or even a means for self-reflection? How might it enhance human relationships? How do you set boundaries to make sure you’re not relying on it over human connections?