WIRED Roundup: DHS’s Privacy Breach, AI Romantic Affairs, and Google Sues Text Scammers

In this episode of Uncanny Valley, we discuss our scoop about how the Department of Homeland Security illegally collected Chicago residents’ data for months, as well as the news of the week.

In today’s episode, host Zoë Schiffer is joined by executive editor Brian Barrett to discuss five stories you need to know about this week—from how AI affairs can now be grounds for divorce, to why Google is suing one of the largest networks of text scammers. Then, we dive into how the Department of Homeland Security illegally gathered the data of hundreds of Chicago residents.

Articles mentioned in this episode:

Please help us improve Uncanny Valley by filling out our listener survey.

You can follow Zoë Schiffer on Bluesky at @zoeschiffer and Brian Barrett on Bluesky at @brbarrett. Write to us at uncannyvalley@wired.com.

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.

Transcript

Note: This is an automated transcript, which may contain errors.

Zoë Schiffer: Welcome to WIRED's Uncanny Valley. I'm Zoë Schiffer, WIRED's director of business and industry. Today on the show, we're bringing you five stories that you need to know about this week, including our scoop about how the Department of Homeland Security, or DHS, collected Chicago residents’ data for months in violation of domestic espionage rules. I'm joined today by WIRED's executive editor Brian Barrett. Brian, welcome to Uncanny Valley.

Brian Barrett: Zoë, thank you for having me.

Zoë Schiffer: Our first story this week takes us to China. Our colleagues Zeyi Yang and Louise Matsakis reported that Apple removed two of the top gay dating apps from the App Store after receiving an order from China's main internet regulator. The confirmation came as users of these apps, called Blued and Finka, noticed and shared on social media that they could no longer find them on the App Store, or on several Android app stores either. The apps appear to still work for people who already had them downloaded, but it is a blow to the LGBTQ community in China.

Brian Barrett: And this is a community that doesn't have a lot of options to begin with, right?

Zoë Schiffer: Yeah.

Brian Barrett: Most international gay dating apps have already been blocked. Same-sex marriage is not legal in China; it is not a very welcoming country to begin with.

Zoë Schiffer: Honestly, one thing that really surprised me with this reporting was simply the fact that Apple confirmed it to us, which is pretty unusual, as you and I both know.

Brian Barrett: Yeah, they are reticent to acknowledge the special touch they apply to China, the way they handle data there, their supply chains over there. It's all a pretty live wire for them. So it's interesting that they confirmed it, and unclear whether these apps could come back. There's always a chance; that's happened before. Although given the climate in China, in the US, and at Apple, it seems unlikely. It seems like this is an avenue that is closed off for people now. It's also a shame, because these were more than just dating apps. One of them in particular was doing HIV prevention work and offering health services. These were really omnibus apps that offered a valuable service.

Zoë Schiffer: Yeah, exactly. And I think it's not surprising that Apple complied with the order; that's the name of the game of operating in China. It's a huge reason why Google employees protested so vehemently when the company was going to create a search engine for China: it was inherently going to be pretty censored, and people found that to be against Google's ethos at the time. But this is what it means to operate in the country. If you are going to be in the Chinese market, you need to share data with the Chinese Communist Party. You need to comply with their orders, and those can change on a dime, it seems.

Brian Barrett: And honestly, it's not just China. We've seen Apple, and I don't want to draw too much of an equivalence here, but it was just a few weeks ago that Apple and Google started pulling ICE detection apps off of the app store, two or three of them, and classifying ICE agents as a protected class. These are companies that may have some founding principles they try to stand on, but ultimately they are businesses, and businesses conform to the politics of the time. I think we're seeing that.

Zoë Schiffer: Huge lesson we've learned in the past few years. They actually don't care about those values if they go against their bottom line.

Brian Barrett: Believe it or not. Zoë, I know you're crushed to learn it.

Zoë Schiffer: Honestly, it's been tough. Moving on to our next story, this one deals with something that I feel like we've been talking about more and more in the WIRED newsroom, which is data centers.

Brian Barrett: Data centers, the things that are propping up the entire US economy, Zoë?

Zoë Schiffer: Yes, exactly. Something I talk about all the time. Our colleague Molly Taft reported on a specific question: if we have to build data centers in the United States, which apparently we do, where should they actually go? A new analysis looked into what exactly we might be facing as the buildout continues over the next few years, and where the US should be building data centers to avoid the most harmful environmental impacts. Not all data centers are created equal, environmentally speaking; a lot of their water and carbon footprint depends on exactly where they're located. The study, which was published in the journal Nature Communications on Monday, found that the best data center locations fulfill two main requirements. One, the grid ideally needs to run on renewable energy, or at least be making big strides in putting more clean energy on the grid. And two, they should be located in places that don't have as much water scarcity, which seems fairly obvious. So, given those criteria, the best places to put them are Texas, Montana, Nebraska, and South Dakota.

Brian Barrett: Zoë, I like this better than everyone who is trying to build data centers in space, at least. This feels more—

Zoë Schiffer: Or underwater.

Brian Barrett: —underwater, or any sort of sci-fi version of this. I thought this was a really interesting study. Obviously these places have a lot in common: they are wide open, and they could probably use the economic incentives that come along with building data centers. But one thing about the study that really jumped out at me: Nebraska, for example, was cited as a really good spot because it could potentially have really good renewable energy from wind. If Nebraska invested more in wind, then it would be a good spot for a data center. That's a pretty big if, but it was telling how much those things go hand in hand. If you want to build out this huge AI infrastructure, you're going to have to embrace some of the renewable energy that this administration has obviously turned its back on quite a bit. You need that if you want to get there.

Zoë Schiffer: Right. Well, you need everything. I was really struck, when I was visiting OpenAI's big data center in Abilene, Texas, that there are massive gas turbines. And I was talking to Crusoe, the company that is managing the buildout, and Chase Lochmiller, the CEO of that company, was like, "Look, we need everything. We're not anti-renewable energy, we're very pro-renewable energy, but we need that, and we need gas turbines, and we need everything else we can get," because these data centers can't have literally any downtime.

Brian Barrett: Yeah. And by the way, we need to invent entirely new modular nuclear reactors on top of all of that.

Zoë Schiffer: Also that.

Brian Barrett: And again, put them on the moon.

Zoë Schiffer: Right, right. I mean, I have been talking about this incessantly. I really feel like we're in the middle of this big reshoring push, this big push to bring data centers to the United States. This is a huge focus of our current administration. I firmly believe that in the next few years it's going to be politically untenable to be supportive of having a data center in your local community, because no one wants to live near these things. They're loud, they make your energy bill spike. There's pollution. It's a total NIMBYism dynamic.

Brian Barrett: And they don't provide the jobs that you would think. They are these huge infrastructure plays that are huge drains, driving up everybody's electric bill, but they aren't returning jobs to the community, and they're often getting huge tax breaks. So there's really not that much upside. Even a town two towns away from mine has been fighting against a potential data center going in, so I think we're already seeing that sort of pushback.

Zoë Schiffer: So shifting gears, our next story deals with another inescapable fact of modern life, which is scam text messages. Brian, how many scam texts or calls would you say you receive in a given week these days?

Brian Barrett: I mean, how many have I received during this recording? It is incessant, and that's not even counting the political ones, which are legit but annoying. No, they are constant, Zoë. The most consistent communication I have in my life is from scammers.

Zoë Schiffer: Yes, same. So, we don't really have a way to know for sure, but alongside millions of other Americans you might've been the target of a Chinese network of fraudsters called Lighthouse. Over the last few years, the group has sent millions of scam text messages, often impersonating USPS or a toll road collector, and reportedly it has made more than a billion dollars from its schemes. Our colleague Matt Burgess learned that Google filed a lawsuit in the United States this week against 25 unnamed individuals who've allegedly operated as part of this scam network. The group's name, Lighthouse, comes from the software it sells to help fraudsters scam people. It's developed by cybercriminals and sold, and this really cracked me up, as a subscription service to less technically capable scammers. You can buy a weekly, monthly, seasonal, annual, or permanent subscription. They're running a professional operation over there.

Brian Barrett: Just make sure to turn off auto-renew if you don't want that bill hitting you after the free trial.

Zoë Schiffer: Yeah, you really need to make sure there's strong ROI before you commit.

Brian Barrett: Yeah. It is remarkable, and we've seen this in ransomware too, with ransomware as a service. Now you've got scamming as a service. The level of professionalism of these operations shouldn't surprise me, but it always does somehow. Google's filing alleges that Lighthouse offers more than 600 phishing templates that scammers can use to try to steal people's personal information. You can choose from more than 400 entities or organizations to impersonate. It's really specific, it's really sophisticated. And what's interesting to me here, Zoë, is that these lawsuits always feel futile.

Even if you successfully win this lawsuit, it's not like someone's going to go knocking on the door of a bunch of scammers in China and say, "Hey, give us the restitution, go to jail." But what I thought was interesting is that, as Matt noted in his piece, there are some knock-on effects. At the very least, if you can prove in court that these people are doing this, then you can maybe get some traction with the platforms they're working with, maybe get some traction with other third-party infrastructure being used by this group, to cut off avenues for them to operate. I don't know how successful that will be, and I don't think the number of texts I'm getting is going to go down, but it's not quite as hopeless as it may first appear.

Zoë Schiffer: I thought it was telling that they didn't name the individuals that they're suing, but they did put their Telegram handles in the filing.

Brian Barrett: Yeah, exactly.

Zoë Schiffer: I would say that they should get some help with the copywriting of the templates, because I have not found these scams to be so convincing, but given that they've made over a billion dollars, perhaps I'm wrong.

Brian Barrett: Maybe we're writing our stories wrong. I think maybe we got to pick up that scammer cadence.

Zoë Schiffer: It's true. One more before we go to break, and this one is about chatbots. Specifically it's about people having intense romantic relationships with chatbots and then getting divorced as a result. Our colleague, Jason Parham, wrote a story this week about a new legal frontier that is emerging in family law. Basically an AI affair is now grounds for divorce.

Brian Barrett: So much to unpack here, Zoë. We've written a lot about people who are in serious relationships with AI, and that is a real thing that is happening. It honestly had not occurred to me, and I'm so glad Jason wrote this, to think about the legal implications. Because in a lot of states, you do need to say why you're getting divorced in order to get the divorce granted. And is this an affair? Is it an addiction to a computer program? Is it both? Which is a thorny thing to try to figure out in the middle of what is already a really hard and confusing time in people's lives. So, I don't envy anyone in this situation, or the divorce lawyers who have to fight over it.

Zoë Schiffer: I think emotionally you can understand that if a spouse is having a very intense texting relationship with a chatbot, it probably, in my opinion, does feel like an affair. Perhaps not like an affair with a human, but not totally dissimilar. And it looks like courts are starting to agree with that assessment in certain cases, which could, like you said, have real implications. In states that have community property laws, it could influence how much money one party has access to, if they can show financial waste through hidden payments or even subscription fees for an AI chatbot. I also could see it, and I think Jason talks about this in his article, being implicated in custody battles, because it could speak to the judgment of the party that's having the affair.

Brian Barrett: I find it hard to picture myself in that situation. It would be hard to plead in court that you need custody while you are also in a relationship with a large language model. Very challenging. I think people want to joke about this stuff, but I have a hard time doing that, because it's really serious, the way people form these attachments and the way they're blowing up people's lives.

Zoë Schiffer: Yeah, it's real. One other thing the AI romance brought up for me was the New York Times' lawsuit against OpenAI. This week, OpenAI submitted a filing arguing that it shouldn't have to hand over thousands of conversations that its users, its real users, have had with the chatbot. The company is basically saying there should be a thing called AI privilege, because people are having very emotional conversations, they're having personal, legal, and work-related conversations, and those conversations shouldn't be subject to a court order the way your Google searches are, for example.

Brian Barrett: I'm glad you brought up the Google search equivalent, because to me it feels like this is not that different. I get where they're coming from, but I think this is also part of OpenAI's years-long campaign of saying, "What we do is so special and so important, we can't possibly be regulated in the same way as everything else." So, I don't know that I buy it. Especially in a case like this, where someone is trying to get a divorce from a spouse who is having an affair with a chatbot, to say, "No, you actually can't look at any of those conversations," doesn't seem right to me.

Zoë Schiffer: I see both sides. I do think that how emotional and personal these conversations with a chatbot can be is pretty different from what you'd type into a Google search. Whether that should result in a special legal protection, I don't know. But it was interesting to me that we saw an op-ed in the New York Times this week from a professor who ostensibly doesn't work for OpenAI, basically arguing this same thing: that we should have something called AI privilege and it should have legal implications.

Brian Barrett: I am of a mind that when you type something into the internet, it is going to a company, and that is the same with OpenAI. You are giving your data to a company. The fact that people are using it in this way doesn't stop OpenAI from gathering it, holding onto it, and using it. So in some ways I think there's almost a user education issue here: look, you can use these tools however you want, but just remember that this is a company, a for-profit enterprise, and you are giving it your data just as you do in any other context, whether that's a weight loss app, or Google, or whatever.

Zoë Schiffer: One other thing that's been on my mind lately: OpenAI's CFO, Sarah Friar, admitted to shareholders and investors recently that while ChatGPT is still obviously doing quite well, growth has cooled a little bit, somewhat as a result of some of the content restrictions the company put on the chatbot. And there's been this push. Sam Altman, the CEO, has been saying, "Look, we're going to roll some of these back." "We're going to treat adults like adults" is the party line, and they're going to soon allow erotica on the site. And I think that's a pretty overt push to get growth going again by allowing people to engage in these more emotional and romantic conversations.

Brian Barrett: And they had had a hard line against that in the earlier days, if I recall correctly. I think in some ways the erotica thing, I get it; there's a use case for that. It's the emotional attachment aspect of it that I'm more interested in. When OpenAI tried to shake that out of the system, they had a huge user revolt, and the reason they had changed it was because they saw unhealthy behavior. When they got that pushback, they could have gone two ways. They could have said, "No, actually we think this is the right way to do it." Or they could have said, "OK, you're right, you can go ahead and have your friend back." And they gave people their friend back; they bent to that pressure. It's going to be great for growth, it's going to be great for revenue in the long run. I do worry about what it says about what incentives OpenAI is chasing.

Zoë Schiffer: Coming up after the break, we dive into our scoop about how data on hundreds of Chicago residents was held by the DHS illegally for months, and what it says about the state of data privacy in the US.

Welcome back to Uncanny Valley, I'm Zoë Schiffer. I'm joined today by our executive editor, Brian Barrett. Brian, let's dive into our main story this week. Our colleague Dell Cameron reported that the Department of Homeland Security, or DHS, collected data on hundreds of Chicago residents accused of gang ties, supposedly to test whether police files could feed into a federal watch list. But what seemed like a test of a potential collaboration between law enforcement agencies ended up becoming a huge privacy breach. I feel like we should start with why this data was being requested in the first place.

Brian Barrett: I'll walk through this as best I can, because it gets a little bit complicated. Basically, the records originated in a private exchange between DHS analysts and Chicago police, intended as a test of how local intelligence might help feed federal government watch lists, like, how can these two groups work together? They wanted to see whether street-level data could surface undocumented gang members in airport queues and at border crossings. Internal memos reviewed by WIRED, though, showed that the data set was first requested by a field officer in the DHS Office of Intelligence and Analysis in the summer of 2021. At that time, Chicago's gang data was already notorious for being full of contradictions; it's not the best-maintained data.

The residents listed in the data weren't even confirmed to be affiliated with gangs. You start to see where the problem comes in: you didn't need to be arrested or convicted to be on the list, and police officers often baked their own opinions into it. So you would have a Chicago resident, not necessarily confirmed to be affiliated with a gang, on this list with an occupation listed as "scumbag," or "turd," or in some cases, "black." The report found that there was effectively no process for notice or appeal, no routine purge of records for people who had left gangs or gone years without a police encounter, and that 95 percent of the people on the list were Black or Latino.

Zoë Schiffer: So, the data was pretty faulty to begin with, to say the least. And it seems like for immigrants in Chicago, it carried extra weight, because Chicago's sanctuary rules ban most data sharing with immigration officers, but there's an exception for "known gang members." So over the course of a decade, immigration officers tapped into this database more than 32,000 times. It's quite a workaround.

Brian Barrett: Yeah, and really unfortunate for everybody who gets caught up in it. The data moved through layers of review for years with no clear owner. Legal safeguards either weren't there or were overlooked or ignored. It was just out there. In November 2023, they ended up killing the project. It's not clear that any tangible results came of it, but a source told Dell, our senior writer, that data breaches like these could impact the lives of millions of people in this country, and, "This can potentially justify federal targeting of family members, churches, food banks, and others who support immigrants," again, based on people who may not have had any gang affiliation in the first place. There are so many layers of inappropriateness to the gathering and use of this data, and it speaks to just how sloppily all of this was done, with real-world implications and real-world potential for harm.

Zoë Schiffer: I feel like we're dancing around this: in the current administration, being accused of being a gang member and undocumented has had exceedingly serious consequences for people who can somewhat randomly get caught up in the system. We just read a front-page article in the New York Times the other week about the people who were sent to El Salvador and are now back in Venezuela, and who experienced what they alleged, in that reporting, was torture.

Brian Barrett: Yeah, and this is an administration that will use any pretense to get people out of this country. Guilt by the loosest of associations is what we're seeing, and that's what this enables. And I think our own reporting has shown a lot of consolidation of data across the IRS, ICE, and DHS, data that was compartmentalized and kept separate for years specifically because it can have those consequences and become too powerful a tool. Now all those lines are gone, and everything's up for grabs.

Zoë Schiffer: Right, right. I instantly thought back to that executive order Trump issued earlier this year about eliminating data silos. It's something that looked pretty innocuous, like, who wants data silos? That sounds inefficient. But as Steven Levy wrote at the time, there's a really good reason to keep this data separate: you might not want the government to be able to cross-reference all of this stuff. And that's not even getting into the fact that the data can at times be very, very faulty, as it was in this instance.

Brian Barrett: Yeah, and I think it's easy to look at this and think of it only in the immigration context because that is where it is primarily being used right now. But you can't really just do something like this for immigrants. It is everybody's data that is now cross-referenceable and up for grabs for any purpose. So, while the focus is on immigration now and that is bad enough, who knows where it goes next?

Zoë Schiffer: That's what we call a slippery slope argument, but in this case—

Brian Barrett: Look at that.

Zoë Schiffer: —it's true. Although this experiment is over, DHS isn't planning on slowing down its data-gathering ambitions. DHS's budget will soon exceed $191 billion, and its leaders are pursuing technologies to fuse sensitive data across different agency systems, like we just talked about. So, how do you feel this data breach fits into the larger surveillance and privacy landscape right now?

Brian Barrett: I think there is such a push to gather as much data as you can about as many people as you can, and to use tools provided by companies like Palantir to collate and cross-reference that data. We have seen it put in service of immigration enforcement right now. I am curious where it's going to be put in service next, and there's really no regulatory framework that would prevent any of this. I think it is continuing to roll downhill and pick up speed.

Zoë Schiffer: If there's anything I learned from my brief stint working in the US government, it's that very bad things can happen, and oftentimes it's because of bureaucracy, laziness, and sloppiness more than a nefarious desire to make people's lives terrible, but the results can be the same. Brian, thank you so much for joining me today.

Brian Barrett: Thank you.

Zoë Schiffer: That's our show for today. We'll link to all the stories we spoke about in the show notes. Make sure to check out Thursday's episode of Uncanny Valley, which is a deep dive into Palantir CEO Alex Karp. Adriana Tapia and Mark Leyda produced this episode. Amar Lal at Macro Sound mixed this episode. Kate Osborn is our executive producer, and Katie Drummond is WIRED's global editorial director.