“We Missed Some”: The Eugenicist Logic Behind AI-Driven Autism Detection
How Machine Learning Is Refining the Surveillance, Classification, and Control of Autistic People
Today’s article critiques a JAMA Pediatrics study that uses machine learning to refine autism detection—not to support autistic people, but to integrate us into systems of control. It examines the study’s eugenicist framing and its ties to Israeli research and AI-driven surveillance.
Introduction
The recent JAMA Pediatrics study, “Data-Driven Characterization of Individuals With Delayed Autism Diagnosis Across the Life Span”, claims to offer insights into why some autistic people are diagnosed later in life. Using machine learning techniques, the researchers categorise autistic people based on how they interact with education and healthcare systems, rather than centring their lived experiences. The study’s authors are affiliated with Israeli institutions, a detail that is significant given the country’s broader role in medicalised autism research and behavioural science. Israel has long been a site of research that might be considered ethically questionable elsewhere, particularly when it comes to surveillance, behavioural control, and the intersection of medicine and state power. The study’s framing is deeply medicalised, treating autism as a condition to be detected and managed rather than a neurotype that exists independently of institutional oversight.
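To make concrete what ‘data-driven characterisation’ of this kind usually involves, here is a minimal sketch of clustering people by their service-use records. The paper’s actual pipeline is not reproduced here; every feature, distribution, and cluster count below is hypothetical, chosen only to illustrate the shape of the method:

```python
# A minimal, hypothetical sketch of 'data-driven characterisation': clustering
# people by administrative records of their contact with institutions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n = 5_000
# Invented system-interaction features per person: age at diagnosis,
# school referral count, healthcare visits before diagnosis.
records = np.column_stack([
    rng.normal(12, 6, n),   # age_at_diagnosis (years)
    rng.poisson(2, n),      # school_referrals
    rng.poisson(8, n),      # healthcare_visits
])

# Cluster people by how they moved through institutions. Nothing here encodes
# lived experience; the resulting 'types' are artefacts of service contact.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(records)
)
print(np.bincount(labels))  # sizes of the resulting 'diagnosis-delay profiles'
```

Notice what the inputs are: ages, referral counts, visit tallies. Whatever clusters fall out of a pipeline like this describe contact with systems, not autistic experience, which is exactly the problem.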
This study is not about helping autistic people. It is about making their identification more efficient—not for their benefit, but for the convenience of institutions like education, healthcare, and the labour market. The idea that the primary concern with autism is whether or not someone is “missed” by these systems speaks to a fundamental misunderstanding of what actually harms autistic people. It is not diagnosis itself that determines autistic well-being, but the societal structures that shape how an autistic person is treated before and after being identified. Would autistic people, given the opportunity to shape research priorities, really choose to study how to detect them earlier? Or would they focus on systemic barriers to access, the harms of forced intervention, the ways autistic identity is erased through compliance training? The very premise of this research assumes that the problem is one of detection, rather than one of how autistic people are treated once they are recognised.
This fits into a long historical pattern: science refining tools of control under the guise of progress. Just as intelligence testing was once framed as a way to “better serve” children in education but became a tool of eugenics and segregation, so too does machine learning offer a new, more technologically advanced way to categorise and manage autistic people. The history of psychology, medicine, and state control is littered with studies that claim to improve lives but ultimately serve the needs of the powerful. This study is no different. It asks not how autistic people can thrive on their own terms, but how they can be identified and managed more efficiently within existing structures. It is a study of classification, not liberation.
The Role of Machine Learning in Autism Research
Machine learning is increasingly being used to classify and predict so-called “autistic traits,” a deeply flawed and troubling approach. The very concept of ‘traits’ reduces autism to a checklist of observable behaviours, reinforcing stereotypes rather than engaging with the reality of autistic experience. Many of us reject this framework entirely, favouring instead a recognition of autism as a distinct way of being. The ‘traits’ model has always been reductive, shaped by a history of clinical observation that centres white, cisnormative, AMAB presentations of autism. It is the reason so many autistic people—especially women, trans and non-binary people, people of colour, and those without access to formal diagnosis—are overlooked or dismissed. Machine learning does not challenge these biases; it automates and scales them, creating a feedback loop in which only those who fit pre-existing diagnostic moulds are recognised, while others remain invisible.
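A small sketch makes the automation point concrete. Assume, purely synthetically, two groups with identical traits and identical rates of being autistic, where one group was historically referred for diagnosis far less often. A model trained on those historical diagnoses learns the gap and reproduces it:

```python
# A synthetic sketch of bias replication: a model trained on historically
# biased diagnoses under-identifies the historically under-referred group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

autistic = rng.random(n) < 0.2   # same base rate in both groups
group = rng.random(n) < 0.5      # True = historically under-referred group

# Historical labels: autistic people in the under-referred group were
# diagnosed far less often, for reasons unrelated to their traits.
referral_rate = np.where(group, 0.3, 0.8)
diagnosed = autistic & (rng.random(n) < referral_rate)

# Observable 'traits' track being autistic, not group membership.
traits = autistic[:, None] * 1.5 + rng.normal(size=(n, 3))

# Group membership enters the model, as demographic and service-use
# variables implicitly do in real pipelines.
X = np.column_stack([traits, group])
model = LogisticRegression().fit(X, diagnosed)

# Score two people with IDENTICAL traits who differ only in group.
same_traits = np.array([[1.5, 1.5, 1.5, 0.0], [1.5, 1.5, 1.5, 1.0]])
p = model.predict_proba(same_traits)[:, 1]
print(f"P(flagged), majority group: {p[0]:.2f}; under-referred group: {p[1]:.2f}")
```

Two identical presentations receive different scores because the training labels encoded who used to get referred. Nothing in the fitting procedure corrects for that; the model is rewarded for reproducing it at scale.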
At its core, this approach is not about understanding autism—it is about surveillance and control. The use of AI to classify and predict behaviours is already deeply embedded in policing, biometric tracking, and behavioural monitoring. Predictive policing algorithms claim to identify individuals at “high risk” of committing crimes, but in reality, they reinforce existing racial biases, criminalising Black and brown communities while absolving systemic inequality of any role in shaping outcomes. Autism research is following a similar trajectory, treating identification as a means of ‘managing’ autistic people within institutions rather than addressing the real sources of harm: systemic ableism, social exclusion, and forced assimilation into the norms of the neuromajority.
The question of whose definition of autism is being used is central to this issue. If the ‘traits’ being fed into machine learning models are based on outdated, deficit-based understandings of autism, then the outputs will only serve to entrench these misconceptions. This is no different from the way police use ‘racial profiling’—taking pre-existing biases and legitimising them through the appearance of scientific neutrality. AI does not create an objective or fair system; it simply encodes the prejudices of those who train it. In this case, the prejudice is not just about who is considered autistic, but about what kinds of autistic people are considered worth recognising. The emphasis remains on those who can be categorised in ways that serve institutional priorities—whether that is education, employment, or public health—rather than those whose experiences fall outside these neat classifications.
The reality is that AI-generated profiles of autistic people will not challenge biases; they will reinforce them. Rather than broadening the definition of autism to include those historically excluded, these models will refine the process of exclusion, making it more difficult for autistic people outside the medicalised ‘standard’ to be recognised at all. Worse still, they will likely be used to justify interventions designed to shape autistic behaviour into something more palatable to the dominant culture within society. Machine learning in autism research is not a neutral tool—it is a mechanism of gatekeeping, determining who gets access to support, who is deemed “too autistic” to integrate, and who is erased altogether. The future this research is building is not one of greater understanding or acceptance. It is a future where autistic existence is increasingly defined by how efficiently we can be monitored, classified, and controlled.
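The feedback loop described above can even be simulated directly. In this toy model (all distributions, thresholds, and the notion of a ‘presentation’ score are invented for illustration), only people the current model flags are assessed, and those assessments become the next round’s training labels, so presentations outside the original mould never enter the data:

```python
# A toy simulation of the diagnostic feedback loop: each round's model output
# decides who gets diagnosed, and those diagnoses train the next round's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
autistic = rng.random(n) < 0.2
presentation = rng.random(n)   # 0 = 'textbook' presentation, 1 = atypical
# Observable intensity is weaker for atypical presentations (e.g., masking).
traits = autistic * (2.0 - presentation) + rng.normal(scale=0.5, size=n)

# Round 0: clinicians mostly recognise the 'textbook' presentation.
labels = autistic & (presentation < 0.4)

X = np.column_stack([traits, presentation])
for rnd in range(5):
    model = LogisticRegression().fit(X, labels)
    flagged = model.predict(X)
    labels = autistic & flagged   # only flagged people reach diagnosis
    atypical = autistic & (presentation > 0.6)
    print(f"round {rnd}: recall for atypical presentations = "
          f"{labels[atypical].mean():.2f}")
```

Under these assumptions the recognised population never expands toward atypical presentations, because each model is only ever trained on the people earlier rounds already recognised. That is the gatekeeping mechanism in miniature.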
“We Missed Some” – The Eugenicist Framing of Diagnosis
The entire premise of this study hinges on the assumption that some autistic people were “missed” and must be caught earlier. It treats delayed diagnosis as a problem to be solved, as though the failure to identify autistic people sooner is an institutional oversight that must be corrected. But this framing is deeply flawed—it assumes that earlier detection is inherently beneficial, without interrogating why autistic people are missed in the first place or what happens to them once they are identified. The push for earlier and more precise categorisation is not about supporting autistic people; it is about refining systems of surveillance and control to ensure fewer slip through the cracks.
Autistic people are not overlooked because they are undetectable, but because diagnosis is tied to how an individual interacts with institutions, not to their intrinsic experience of being autistic. In both Israel and the US, an autism diagnosis is often dependent on whether a person’s way of being disrupts—or is perceived to disrupt—systems like schools and workplaces. Those who struggle to meet institutional expectations, whether due to communication differences, sensory needs, or difficulties with compliance, are far more likely to be flagged for evaluation. For those who do not, either because they have learned to mask or because their differences do not visibly challenge institutional norms, diagnosis often comes much later, if at all. This is not about recognising autism in its full diversity—it is about identifying individuals who are inconvenient to existing structures.
In education, this means that autistic children are usually diagnosed only when they fail to conform to classroom expectations. In both countries, autism evaluations are often initiated by schools, and a child is far more likely to be assessed if their behaviour is seen as disruptive, if they are not meeting academic benchmarks, or if they require accommodations that schools are reluctant to provide without formal documentation. Children who internalise distress, who excel academically, or who are able to suppress their ‘autistic traits’—whether out of cultural expectation, fear of punishment, or simply because they have learned to navigate the spaces of the neuromajority through trial and error—are less likely to be identified. The system does not recognise need unless it manifests as disruption.
In the workplace, the same logic applies. Many autistic adults remain undiagnosed until their struggles begin to interfere with employment. The worker who burns out from sensory overwhelm but never speaks up, the one who silently absorbs every microaggression, the one who manages to perform the majority’s social scripts at great personal cost—these individuals are rarely recognised as autistic because their difficulties do not inconvenience the institution. By contrast, those who cannot or will not mask, whose ‘autistic differences’ become visible in ways that challenge expectations, are far more likely to be seen as needing intervention. Diagnosis is not about identifying autism in its own right; it is about assessing its impact on institutions.
This is why the push for early diagnosis is framed as a matter of intervention, not accommodation. The earlier an autistic person is identified, the sooner they can be placed into behavioural therapies aimed at making them more manageable. The goal is not to help autistic people understand themselves, but to ensure that they do not disrupt capitalism. In Israel, this aligns with a broader culture of behavioural intervention, in which compliance-based therapies like ABA are widely accepted as necessary preparation for adulthood. In the US, it fits within a legal and bureaucratic framework that treats diagnosis as a prerequisite for access to services, reinforcing the idea that autism is something to be mitigated rather than understood. In both contexts, the system only recognises autism when it becomes a problem for the system—never when the system itself is the problem.
This study is not about understanding autism. It is about refining the mechanisms of control, ensuring that fewer autistic people go unnoticed—not because they need support, but because institutions need more efficient ways to identify and manage those who do not conform.

This is not new. It is the realisation of a dream that eugenicists have pursued for over a century: the ability to categorise, classify, and sort human beings with mechanical precision. The first half of the twentieth century was defined by their crude attempts to do so—IQ testing, racial classification, forced sterilisation—all reliant on human judgement, with all its inconsistencies and inefficiencies. But in the 1930s, automated data processing offered something new. With IBM’s punch-card systems, the Nazi regime could process vast amounts of data, tracking racial and genetic ‘undesirables’ with unprecedented efficiency, turning eugenics into something automated, something scalable. The dream was no longer limited by the imprecision of human record-keeping. The Holocaust was, in part, a bureaucratic achievement—an industrialised system of human sorting, where decisions of life and death could be executed with machine-like precision.
Machine learning today is lauded as progress, but its function remains eerily similar: optimising the sorting process, refining the ability to categorise, and ensuring that those deemed inconvenient are identified as early as possible. The difference is that, instead of race or perceived genetic inferiority, today’s data-driven eugenics focuses on neurodivergence, disability, and so-called ‘deviance’ from the dominant culture’s norms. Autism is only seen as real when it disrupts institutions, just as ‘unfitness’ was only acknowledged when it threatened the social order. Machine learning does not challenge this framework; it perfects it. It replaces fallible human classification with algorithmic precision, ensuring that no autistic person goes undetected—not for their benefit, but for the system’s.

Autism research has its own place in this history. Hans Asperger’s diagnostic work in Nazi-era Vienna fed directly into the regime’s child ‘euthanasia’ programme: children he assessed were referred to the Am Spiegelgrund clinic, where many were killed. Aktion T4 and the child-killing programme that ran alongside it removed disabled and autistic children from their families, institutionalised them, and ultimately murdered them. This was not just eugenics in theory; it was a pilot project, a test run for the industrialised mass murder that would follow. The Nazi regime framed these killings as ‘mercy deaths,’ but they were about efficiency—the removal of those who could not be made productive under fascism’s brutal logic. When Aktion T4 proved effective, the programme was scaled up. The concentration camps that followed were not just sites of extermination but of forced labour, where prisoners were expected to produce for the Reich until they collapsed. The disabled—those deemed incapable of work—were not given even that cruel chance. They were marked for immediate death upon arrival.
The ideology that underpinned Aktion T4 has not disappeared; it has simply evolved, embedded now in the logic of ‘early detection’ and ‘intervention.’ The same fear of difference, the same desire to control and manage populations, now comes with the sleek, neutral façade of AI. Machine learning offers the promise of more precise sorting—not for the benefit of autistic people, but to ensure that none go undetected, that all can be identified, categorised, and intervened upon before they become a ‘burden’ to the system. The logic remains unchanged: those who cannot be made productive must be controlled, managed, or removed. The only difference is that today, the tools are more sophisticated, and the language more sanitised.
Who Benefits from This Research?
The beneficiaries of this research are not autistic people, but institutions. Schools, healthcare systems, and labour markets all have a vested interest in refining the process of autism identification—not to accommodate autistic needs, but to better manage and extract from autistic individuals. The study frames earlier diagnosis as an unquestionable good, but the question that remains unasked is: good for whom? The answer is not autistic people, but the systems that rely on our compliance, our labour, and our ability to function within their rigid structures.
In education, autism diagnosis is rarely about support—it is about management. Schools do not seek to identify autistic students out of a commitment to neurodiversity, but because knowing who is autistic allows them to enforce compliance more effectively. Once a student is labelled, the focus is not on fostering their strengths or creating environments that respect their needs, but on integrating them into the existing structure as seamlessly as possible. Accommodations, where they exist, are rationed; support is framed as a privilege rather than a right. Behavioural expectations remain unchanged, and autistic students who do not—or cannot—mask well enough are often subjected to increased scrutiny, punishment, or forced intervention. Diagnosis becomes a means of regulation, ensuring that autistic children can be tracked, monitored, and moulded to fit institutional expectations.
Healthcare systems, too, stand to profit from earlier identification—not because it improves autistic well-being, but because it expands the market for medical and behavioural intervention. A diagnosis is not an entry point to understanding; it is an entry point to monetisation. Under the medical model, autism is framed as a disorder in need of correction, and the earlier a child is diagnosed, the earlier they can be fed into the machinery of ‘treatment.’ Big Autism—the vast network of diagnostic clinics, therapy providers, pharmaceutical companies, and research institutions—thrives on this. The earlier a child is identified, the sooner they can become a lifelong consumer of therapies, services, and interventions designed not to support their autonomy, but to generate profit.
Autistic people are not seen as individuals with intrinsic worth. We are therapy cattle, fed into the industry as soon as possible to ensure the longest period of profitability. The goal is not to help us navigate a world that refuses to accommodate us—it is to make us more palatable to neurotypical society, to train us into submission, to extract as much financial value from our existence as possible before we are deemed beyond fixing. This is why early diagnosis is framed as urgent: not because autistic children need help, but because they are more valuable to the industry when intervention begins early. Big Autism is not in the business of supporting autistic people. It is in the business of ensuring that every autistic person can be identified, processed, and profited from for as long as possible.
And then there is labour—the ultimate measure of a person’s worth under capitalism. The state and corporate interests have no use for autistic people who cannot be optimised for productivity, which is why research like this so often focuses on identifying autism before an individual reaches working age. The unspoken goal is not to make workplaces more accessible, but to ensure that autistic people can be shaped into workers before they become a ‘problem.’ The emphasis on early diagnosis is an investment in future labour management: identifying potential difficulties before they emerge, streamlining the process of intervention, and ensuring that autistic individuals can be slotted into the workforce with minimal disruption. The message is clear: productivity is the only measure of value. Autistic people who can be trained to serve the machine are permitted to exist within it. Those who cannot are discarded, just as they always have been.
This study does not exist in a vacuum. It is part of a long history of research designed to serve power rather than people, to refine the processes of categorisation and control. It is not about helping autistic people thrive—it is about ensuring that every autistic person is accounted for, assessed, and placed into the system in whatever way best serves institutional interests. It is not about recognition. It is about management. And it is not about support. It is about control.
Israel as a Research Black Site for Eugenicist Studies
Israel has long served as a research black site for the kinds of studies that might be met with resistance elsewhere, a testing ground for technologies and frameworks of control that are later exported globally. Autism research, particularly in its medicalised and behaviourist forms, exists within this broader landscape of experimentation—one that intersects directly with AI-driven surveillance, military applications, and the policing of populations deemed undesirable. The development of machine learning models to better detect and categorise autistic people is not separate from this context. It is part of the same machinery of control, part of the same logic that drives the refinement of predictive policing, biometric surveillance, and automated warfare. And in the midst of the ongoing genocide of Palestinians, as reports continue to surface of medical experimentation on prisoners and the use of AI to ‘optimise’ mass killings in Gaza and the West Bank, it is impossible to ignore the connections.
Israel has long been a leader in the development of AI-driven behavioural monitoring, exporting its expertise to police forces, intelligence agencies, and military contractors worldwide. The same companies that develop predictive policing tools to monitor Palestinians in the occupied territories also develop technologies for monitoring ‘at-risk’ children in schools. The same algorithms that refine facial recognition for border security are adapted for autism screening, using the same principles of pattern recognition and behavioural prediction. The impulse is the same: to detect, to categorise, and to control. The data-driven autism research coming out of Israel is not about helping autistic people; it is about refining systems of classification that will inevitably be used to surveil and manage populations more efficiently.
The exportation of Israeli research into autism and machine learning follows the same path as its military technologies—developed, tested, and then sold to allied states under the guise of progress. The US, in particular, has a long history of integrating Israeli security technologies into its own policing and surveillance apparatus, from the use of Israeli biometric tracking systems at US airports to the adoption of Israeli-developed predictive policing models in American cities. Autism research will follow the same pattern, reinforcing the medicalised, pathologising framework that serves institutional needs over individual well-being. AI-driven autism detection is not about ensuring better outcomes for autistic people; it is about ensuring that no autistic person goes unidentified by the system.
This study may not be human-subjects research, but the refinement of machine learning models for classifying and tracking autistic people will, without question, contribute to the furtherance of genocide. The same logic that drives the need for AI-driven ‘precision’ in autism detection—greater efficiency in identifying and managing those who do not conform—is the logic that underpins every AI-assisted war crime currently unfolding in Gaza. The same technology that refines the process of diagnosing autistic children at scale refines the process of identifying Palestinian resistance fighters through automated surveillance. The same data-driven ‘optimisation’ that improves the categorisation of neurodivergent people improves the efficiency of targeting civilians in drone strikes. The tools are interchangeable because the function is the same: the management of human populations in service of state and corporate power.
Israel has spent decades conducting ethically dubious studies on its prison population, on asylum seekers, on Palestinians in occupied territory. Whether it is the forced administration of birth control to Ethiopian Jewish women, the testing of experimental drugs on prisoners, or the algorithmic refinement of battlefield targeting, the country’s research infrastructure is built on the premise that certain populations exist to be studied, controlled, and, when necessary, eliminated. This autism study does not exist in isolation—it is part of this broader framework, a refinement of categorisation that will not stop with autistic people. The purpose of a system is what it does, and what this system does is create more precise tools for managing and eliminating those deemed inconvenient to the order it serves.
Resisting the Surveillance and Medicalisation of Autism
Resisting the surveillance and medicalisation of autism requires a fundamental rejection of the framework this study represents—a framework that sees autistic people not as humans with intrinsic worth, but as subjects to be detected, categorised, and controlled. The future it envisions is one where machine learning models dictate the terms of our recognition, where our existence is only acknowledged when it becomes an institutional inconvenience, and where our ‘support’ is reduced to interventions designed to extract compliance rather than foster autonomy. This is the world built by the Autism Industrial Complex, a multi-billion-dollar system in which research is not conducted for autistic people but on us, often to justify therapies and treatments that serve the needs of the system rather than those of the community. Resisting this future means shifting the locus of control away from the institutions that profit from our existence and toward autistic-led research that centres lived experience, ethical inquiry, and meaningful change.
Able Grounded Phenomenology (AGP) provides a necessary alternative, rejecting the deficit-based, medicalised framing of autism and demanding that research be conducted with and for autistic people, not as a means of refining our categorisation but as a tool for improving our lives. AGP disrupts the colonial and eugenicist logic that has shaped autism research for decades, challenging the assumption that autistic people must be studied as problems to be solved rather than as individuals with valid and varied ways of being. It exposes how research is often conducted not to benefit autistic people, but to sustain an industry that profits from our struggles—an industry that sees early detection not as a means of support, but as an opportunity to capitalise on a lifetime of interventions, therapies, and ‘treatments’ that rarely, if ever, prioritise autistic well-being.
The reality is that traditional autism research has never been about autistic people. It has been about institutions—about schools, healthcare systems, and employers—looking for better ways to manage us, to ensure that we remain productive, compliant, and undetectable. AI-driven categorisation does not challenge this structure; it perfects it, creating a seamless pipeline from early identification through behavioural intervention to workforce integration, all under the guise of ‘support.’ Autistic people have fought against this model for decades, exposing its failures, its dehumanisation, and its profound ethical violations. We have documented the harm caused by masking, the violence of compliance-based therapies, the trauma of being forced into a world that refuses to accommodate us. We have built our own models for understanding autism—ones that do not rely on surveillance, control, or intervention, but on the recognition that autistic ways of being are valid in their own right.
Resisting the normalisation of AI-driven categorisation in disability and mental health is not just about rejecting machine learning as a diagnostic tool; it is about rejecting the entire premise that autism should be something detected and managed by institutions at all. It is about recognising that the purpose of autism research, as it currently exists, is not to serve autistic people, but to refine the systems that control us. It is about challenging the deeply entrenched structures of power that make our suffering profitable and our identities negotiable.
This study and others like it reveal a grim future for autism research—one where AI, big data, and behavioural science converge to create more precise methods of sorting and intervention, one where autistic people are ‘identified’ earlier not to be supported but to be shaped into something more convenient for the system. But it does not have to be this way. The urgency of pushing back against this trajectory cannot be overstated. The work being done by autistic-led research collectives, by disability justice advocates, and by those fighting against the Autism Industrial Complex offers an alternative: a future where research is a tool for liberation, not control. AGP is a crucial part of this resistance, a framework that insists that autistic voices must be at the centre of autism research, that our ways of being do not need to be studied into oblivion but understood, respected, and accommodated on our own terms.
To accept the future outlined by this study is to accept the continued colonisation of autistic experience, the continued medicalisation of our identities, and the continued refinement of surveillance technologies designed to categorise us into compliance. To resist it is to insist on something radically different: a world where autism research is no longer in the hands of those who profit from our suppression, but in the hands of those who live it, who know it, and who will fight to ensure that we are recognised not as anomalies to be managed, but as people, worthy of dignity, autonomy, and respect.
Final thoughts …
This study is not about supporting autistic people. It is about controlling us—refining the mechanisms of detection, classification, and intervention to ensure that fewer autistic people go unnoticed, not for our benefit, but for the benefit of the institutions that seek to manage us. It is about optimising the systems that dictate our existence, ensuring that we can be identified, processed, and conditioned into compliance as early as possible. Under the guise of scientific progress, it advances the same logic that has underpinned autism research for decades—the belief that autistic people must be shaped into something more palatable, more productive, more invisible. It does not seek to understand autism as a way of being; it seeks to contain it.
This is nothing new. It is the continuation of a long history of eugenicist science masquerading as social progress, a history in which classification has always been a precursor to control. The same institutions that once used IQ tests to justify institutionalisation, that used genetic theories to sterilise those deemed ‘unfit,’ that used behaviourist conditioning to erase difference, now turn to AI and machine learning to continue the work of refining human categorisation. The technology may be new, but the impulse remains the same: to eliminate divergence, to force conformity, to optimise human populations for the needs of power. Machine learning does not liberate us from these histories—it extends them, deepening the efficiency with which institutions can predict, sort, and intervene. It does not make autism research more ethical; it makes it more precise.
Resisting this trajectory is not just about rejecting one study, or even rejecting AI-driven autism detection—it is about rejecting the entire framework that treats autism as something to be managed rather than understood. It is about dismantling the Autism Industrial Complex, about refusing to participate in research that serves institutional control rather than autistic liberation. It is about demanding that autism research be led by autistic people, grounded in neurodiversity-affirming principles, and committed to supporting autistic lives rather than erasing or suppressing them.
There is a future where autism research does not exist to categorise and correct us, but to foster understanding, accessibility, and justice. But that future will not be built by those who see autism as a problem to be solved. It will only be built by those who refuse to be erased. The time to push back is now.