A recent article in USA Today begins with a misleading headline: “New algorithm detects autism in infants. How might that change care?” The author then continues to mislead in her opening sentence: “Signs of autism can be picked up as early as the first month of life…”
Not until paragraph four does she point out that the premise of her article remains to be tested or confirmed, noting, “Though the findings still need to be confirmed with further studies …,” before letting her readers know the actual point of the article. She reaffirms the untested nature of the methodology later, quoting the System’s director: “if further studies confirm the finding, the algorithm could be used alongside other screening tools, parent reports and medical observations …”
Later, we learn that the Duke University Health System created an algorithm to scan its health records in an attempt to correlate healthcare utilization with an eventual autism diagnosis. The System believes, rightly or wrongly, that autistic children use more healthcare services and that this usage may signal, as early as one month old, that a child is autistic. From this data, the System can alert its doctors to “screen” for autism and offer “treatments.”
What could possibly go wrong with such “surveillance”? Do you think this will help or hurt in the long run?
When headlines mislead.