Pre-Failure? How AI Decides the Fate of Nevada's Students
In Minority Report, a dystopian thriller, the government employs predictive technology to arrest individuals before they commit crimes, relying on the projections of a mysterious and inscrutable system. The chilling premise forces us to confront the unsettling consequences of acting on predictions rather than actual events—where personal autonomy and the right to a fair process are sacrificed at the altar of technological efficiency. It’s a future where human lives are controlled by algorithms, and potential wrongdoing trumps individual freedoms.
Fast-forward to today, and Nevada’s educational system finds itself in a similarly precarious situation, though instead of pre-crime, we have pre-failure. An “artificial intelligence system” has been deployed to identify “at-risk” students before they even begin to fall behind. Much like the pre-crime program in Minority Report, this AI employs a proprietary algorithm (meaning: we’re not allowed to know what it’s doing) to sift through masses of student data—attendance records, home language, parental engagement, GPA, and even minute details such as how often a parent logs into the school portal—to determine which students might be on a trajectory toward academic failure.
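To be clear, nobody outside the company can say what the model actually does; that secrecy is the point. But to make the general shape of such a system concrete, here is a minimal, entirely hypothetical sketch of a risk-scoring function in Python: invented feature names, invented weights, and an invented “grad score” scale, standing in for whatever Infinite Campus actually computes.

```python
from dataclasses import dataclass

# Hypothetical illustration only. The real Infinite Campus model is proprietary;
# none of the features, weights, or scaling below reflects how it actually works.

@dataclass
class StudentRecord:
    attendance_rate: float          # fraction of school days attended, 0.0-1.0
    gpa: float                      # 0.0-4.0
    portal_logins_per_month: float  # how often a parent checks the school portal
    home_language_english: bool     # listed among the inputs; how it is weighted is unknown

# Made-up weights, chosen only so the arithmetic below is easy to follow.
WEIGHTS = {
    "attendance": 0.40,
    "gpa": 0.35,
    "portal_logins": 0.15,
    "home_language": 0.10,
}

def grad_score(s: StudentRecord) -> float:
    """Return a hypothetical 0-100 'grad score'; higher means lower predicted risk."""
    features = {
        "attendance": s.attendance_rate,
        "gpa": s.gpa / 4.0,
        "portal_logins": min(s.portal_logins_per_month / 20.0, 1.0),
        "home_language": 1.0 if s.home_language_english else 0.0,
    }
    return 100.0 * sum(WEIGHTS[k] * v for k, v in features.items())

# A student with patchy attendance and a middling GPA lands somewhere in the middle.
print(grad_score(StudentRecord(0.82, 2.4, 3, False)))  # ~56
```

Even in this toy version, the problem is visible: every factor is reduced to a number and a weight, and a family crisis, a disability, or a parent who simply never uses the portal shows up only as a lower score.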
The system, designed by Infinite Campus, effectively serves as the educational version of the Pre-Cogs—the mysterious beings in Minority Report who predict criminal behaviour. In this case, the AI becomes a shadowy decision-maker, working behind the scenes to predict which children will struggle in school, without any real transparency or accountability. It makes these determinations based on abstract patterns in the data, but it’s impossible to know how it weighs each factor or whether those predictions bear any resemblance to the lived experiences of the students it assesses.
And that’s where the real trouble begins.
In Minority Report, citizens had no say in how the system determined their guilt, nor could they challenge its predictions. Similarly, in Nevada’s schools, neither educators nor parents have a clear window into the inner workings of the AI. The decision-making process is kept strictly proprietary—an algorithmic black box that spits out scores without any explanation. Students are then slotted into categories, labelled with “grad scores” that dictate whether they are deemed worthy of extra support or not. Those with lower scores are considered more likely to fail, and it is this small group that now receives state resources and intervention.
The consequences of this shift have been startling, to say the least. In 2022, Nevada classified more than 270,000 students as “at risk,” eligible for additional funding and educational support. After the AI’s implementation, however, that number nosedived to fewer than 65,000. The AI, in its cold pursuit of efficiency, has raised the bar for who counts as at risk, cutting more than 200,000 children from the support network they once relied on.
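The mechanism behind that nosedive is easy to demonstrate, even though the real thresholds are secret. Below is another purely hypothetical sketch: an invented population, invented scores, and invented cut-offs, chosen only so the resulting counts land in the same ballpark as Nevada’s before-and-after figures. The students do not change; only the line drawn through their scores does.

```python
import random

# Hypothetical illustration only: neither the score distribution nor the
# cut-offs below have anything to do with Nevada's actual system.

random.seed(0)
scores = [random.uniform(0, 100) for _ in range(500_000)]  # invented population of "grad scores"

def count_flagged(scores: list[float], cutoff: float) -> int:
    """Count students whose score falls below the at-risk cut-off."""
    return sum(1 for s in scores if s < cutoff)

print(count_flagged(scores, cutoff=55.0))  # a broad definition flags roughly 275,000
print(count_flagged(scores, cutoff=13.0))  # a stricter cut-off flags roughly 65,000
```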
One might argue that in doing so, Nevada’s AI is not just predicting failure—it’s ensuring it.
Much like the citizens in Minority Report, who are stripped of their right to a fair process, Nevada’s students are now at the mercy of a system that decides their educational fate based on probabilities and projections rather than their current reality. The algorithm doesn’t account for the nuances of a student’s life or their specific learning needs. Instead, it uses cold, impersonal data to predict outcomes that shape a student’s future, leaving no room for individual context or teacher judgment.
For students with Individualised Education Programs (IEPs)—who require tailored interventions and accommodations due to disabilities—this system represents an even more dangerous proposition. The AI’s inability to consider the full complexity of learning disabilities, trauma, or neurodivergence means that these students are more likely to be miscategorised or, worse, ignored entirely. An algorithm that fails to understand their specific challenges could remove them from the pool of students considered “at risk,” effectively stripping away their access to crucial support services. It is a grim reminder that efficiency in resource allocation does not necessarily equate to fairness.
In Minority Report, the pre-crime system is dismantled once it becomes clear that human lives cannot and should not be determined by technology alone. But in Nevada, the AI system continues to reshape the educational landscape, forcing schools to cut programs, rework budgets, and leave many students without the help they desperately need. Teachers and school leaders, already struggling with post-pandemic realities, have expressed their horror at how the number of students needing extra support seems to have shrunk—on paper, at least—when, in reality, it has only grown.
There’s something disturbingly dystopian about the idea of children being assessed by an algorithm that no one fully understands. These are not just data points; these are young people, each with a unique story, context, and set of challenges. Yet, under the AI’s gaze, they are reduced to numbers in a system that values cold, calculating efficiency over humanity. Just as in Minority Report, where citizens were robbed of their agency in favour of predictions, Nevada’s students are losing their access to support based on a machine’s assumptions about their future potential.
The system’s reliance on abstract data—GPA, attendance, parental engagement—may seem logical on the surface. But what happens when a student’s circumstances change? What about the child who suddenly becomes disengaged due to a family crisis or the student whose neurodivergence manifests in ways that don’t fit neatly into the AI’s algorithm? The AI cannot account for these human complexities, and as a result, many students who should be receiving support are now left to fend for themselves in an education system that prioritises cost-cutting over care.
In Minority Report, we are asked to confront the moral implications of predictive technology gone too far, of a system where humans become subject to a dehumanising force that prioritises prevention over justice. The same question must be asked of Nevada’s AI-driven education system: At what cost are we choosing efficiency? Who is being left behind in the pursuit of technological ‘progress’? And most importantly, how do we safeguard the futures of students who may fall through the cracks of an algorithm that sees only data, not lives?