Content Moderation

Politics: TikTok Has a Nazi Problem
Business: How Watermelon Cupcakes Kicked Off an Internal Storm at Meta
Business: A Nonprofit Tried to Fix Tech Culture—but Lost Control of Its Own
Business: The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’
Security: Eventbrite Promoted Illegal Opioid Sales to People Searching for Addiction Recovery Help
Business: Twitter’s Former Trust and Safety Chief Is Trying to Clean Up Your Dating Apps
Business: The Dark Side of Open Source AI Image Generators
Business: Elon Musk’s Lawsuit Against a Group That Found Hate Speech on X Isn’t Going Well
The Big Story: The One Internet Hack That Could Save Everything
Politics: A Sudanese Paramilitary Group Accused of Ethnic Cleansing Is Still Tweeting Through It
Business: Generative AI Learned Nothing From Web 2.0
Politics: Israel–Hamas Conflict Sparks Meta Oversight Board’s First Emergency Case
Business: Bumble, Grindr, and Hinge Moderators Struggle to Keep Users—and Themselves—Safe
Business: Parental Advisory: This Chatbot May Talk to Your Child About Sex and Alcohol
Business: Underage Workers Are Training AI
Politics: This New Tool Aims to Keep Terrorism Content Off the Internet
Politics: Here’s How Violent Extremists Are Exploiting Generative AI Tools
Business: Sweeping New Powers Could Let the UK Block Big Tech Platforms
Business: Big Tech Ditched Trust and Safety. Now Startups Are Selling It Back As a Service
Security: How Telegram Became a Terrifying Weapon in the Israel-Hamas War
Business: Inside Elon Musk’s First Election Crisis—a Day After He ‘Freed’ the Bird
Business: The UK’s Controversial Online Safety Act Is Now Law
Security: The AI-Generated Child Abuse Nightmare Is Here
Security: The Hamas Threat of Hostage Execution Videos Looms Large Over Social Media