Generative artificial intelligence (AI) tools continue to capture the imagination, but the technology's damaging potential is increasingly revealing itself. One especially problematic use of generative AI is the creation and distribution of deepfakes online, the vast majority of which are sexually explicit intimate depictions. In the past school year (2023–2024), the rise of generative AI has collided with a long-standing problem in schools: the sharing of non-consensual intimate imagery (NCII). K-12 schools are often the first to encounter large-scale manifestations of the technology-related risks and harms facing young people, and NCII, both deepfake and authentic, is no exception. Over the past year, major news outlets have covered accounts of children as both perpetrators and victims of deepfake NCII, elevating concerns about how to curb the issue in schools. But just how widespread is NCII really? And how well equipped are schools to handle this challenge?