One day last spring, in a high school classroom in Texas, students were arguing about who to kill off first. It was a thought experiment with a sci-fi premise: A global zombie outbreak has decimated major cities. One hundred frozen embryos meant to reboot humanity are safe in a bomb shelter, but the intended adult caretakers never made it. Instead, 12 random civilians stumbled in. There’s only enough food and oxygen for seven. The students had to decide who would die and who would live to raise the future of the human race.
It wasn’t an easy choice. There was Amina, a 26-year-old actress, and Bubak, her husband. Also, a nurse named Marisa, a farmer named Roy, and others. Bubak, who had a criminal record, was a hard sell. So were the useless-yet-likable extras. For years, English teacher Cody Chamberlain had let students debate the ethics and logistics of saving humanity on their own—until he decided to throw AI into the mix. Chamberlain fed the scenario to ChatGPT. It killed Bubak and saved his wife—not for any skill she offered but because she could bear children.
“That’s so cold,” the students gasped.
It was. But for Chamberlain, it offered something new: a dispassionate, algorithmic judgment his students could think about critically. “ChatGPT said we needed her, like Handmaid’s Tale–style,” he says. “And the kids were like, ‘That’s ridiculous.’ It was weird for ChatGPT to finally not have an answer key but something the kids could push back on.”
Teachers have long used technology to personalize lessons, manage workloads, or liven up slideshows. But something shifted after ChatGPT’s public launch in 2022. Suddenly, teachers weren’t just being tasked with figuring out how to incorporate iPads or interactive whiteboards into their lessons. They had to decipher how to deal with a technology that was already crash-landing into their students’ lives, one that could help them study or help them cheat. A quarter of teachers surveyed by Pew in the fall of 2023 said AI tools in education do more harm than good; 32 percent said they offer an equal mix of benefit and harm. Educators faced a choice: Try to fight off AI, or find a way to work with it.
This fall, AI will be more embedded in US classrooms than ever. Teachers are deploying large language models to write quizzes, adapt texts to reading levels, generate feedback, and design differentiated instruction. Some districts have issued guidance. Others have thrown up their hands. In the absence of clear policy, teachers are setting the boundaries themselves—one prompt at a time.
“It’s just too easy and too alluring,” says Jeff Johnson, an English teacher in California who trains other teachers in his district on how to incorporate AI. “This is going to change everything. But we have to decide what that actually means.”
Teaching has long relied on unpaid labor—nights spent googling, planning, adjusting for special education or multilingual learners. For Johnson, AI can provide the kind of assistance that can curb burnout. He uses Brisk to generate short quizzes, Magic School to streamline lesson planning, and Diffit to create worksheets tailored to different skill levels. He doesn’t use AI to grade papers or answer student questions. He uses the tools to prep faster.
“That alone saves me days and weeks,” Johnson says. “Time that can be better spent interacting with students.”
Jennifer Goodnow, who teaches English as a second language in New York, feels similarly. She now plugs complex readings, like essays or book excerpts, into ChatGPT and asks it to create separate versions for advanced and beginner students, with corresponding depth-of-knowledge questions.
Amanda Bickerstaff, a former teacher and CEO of AI for Education, an organization that offers training and resources to help educators integrate AI into their classrooms, puts it bluntly: “Teachers are incorporating AI because they’ve always needed better planning tools. Now they finally have them.”
The same goes for students with individualized education plans, commonly called IEPs—especially those with reading or processing disabilities. If a student struggles with comprehending text, for instance, a teacher might use generative AI to simplify sentence structures, highlight key vocabulary, or break down dense passages into more digestible chunks. Some tools can even reformat materials to include visuals or audio, helping students access the same content in a different way.
Chamberlain, Johnson, and Goodnow all teach language arts, subjects where AI can bring benefits—and setbacks—to the classroom. Math teachers, though, tend to be more skeptical.
“Large language models are really bad at computation,” Bickerstaff says. Her team explicitly advises against using tools like ChatGPT to teach math. Instead, some teachers use AI for adjacent tasks—generating slides, reinforcing math vocabulary, or walking students through steps without solving problems outright.
But there’s something else teachers can use AI for: staying ahead of AI. Nearly three years after ChatGPT became available to the public, teachers can no longer ignore that their kids use it. Johnson recalls one student who was asked to analyze the song “America” from West Side Story only to turn in a thesis on Simon & Garfunkel’s song of the same name. “I was like, ‘Dude, did you even read the response?’” he says.
Rather than ban the tools, many teachers are designing around them. Johnson has students draft essays step-by-step in a Google Doc with version history enabled, which allows him to track students’ writing progress as it appears on the page. Chamberlain requires students to submit their planning documents alongside final work. Goodnow is toying with the idea of having students plug AI-generated essays into assignments and then critique the results.
“Three years ago, I would’ve thrown the book at them,” Chamberlain says. “Now it’s more like, ‘Show me your process. Where were you an agent in this?’”
Even so, detecting AI use remains a game of vibes. Plagiarism checkers are notoriously unreliable. Districts have been reluctant to draw hard lines, in part because the tools are moving faster than the rules. But if there’s one thing almost everyone agrees on, it’s this: Students need AI literacy, and they’re not getting it.
“We need to create courses for high school students on AI use, and I don’t know that anybody knows the answer to this,” Goodnow says. “Some sort of ongoing dialog between students and teachers on how to ethically, question mark, use these tools.”
Organizations like AI for Education aim to provide that literacy. Founded in 2023, it works with school districts across the US to create AI guidance and training. But even in the most proactive schools, the focus is still on tool use—not critical understanding. Students know how to generate answers. They don’t know how to tell whether those answers are inaccurate, biased, or made up. Johnson has begun building lessons around AI hallucinations—like asking ChatGPT how many R’s are in the word “strawberry.” (Spoiler: It often gets it wrong.) “They need to see that you can’t always trust it,” he says.
As the tools improve, they’re also reaching younger students, raising new concerns about how kids interact with LLMs. Bickerstaff warns that younger children, still learning to distinguish fact from fiction, may be especially vulnerable to over-trusting generative tools. That trust, she says, could have real consequences for their development and sense of reality. Already, some students are using AI not just to complete tasks but to think through them—blurring the line between tool and tutor.
Across the board, educators say this fall feels like a turning point. Districts are rolling out new products, students are getting savvier, and teachers are racing to set the norms before the tech sets them itself.
“If we know we’re preparing students for the future workforce—and we’re hearing from leaders across many different companies that AI is going to be super important—then we need to start now,” Bickerstaff says.
That’s what teachers like Johnson and Goodnow are doing, one prompt, one student, one weird apocalypse scenario at a time.
