Algorithms are powerful tools, but they can only be as clever as the humans who trained them. They are now used in a wide variety of areas: in application-screening software, as propaganda machines, or in insurance systems. Algorithms learn from the data that researchers make available to them — and in doing so, they also absorb those researchers' prejudices. Why this is so dangerous, and how it can be counteracted: a commentary on programmed discrimination.
A well-intentioned approach
The most recent case of a misguided algorithm comes from US hospitals and their health insurers. To reduce costs, more and more of these healthcare institutions are investing in preventive care for patients. One approach is a categorization system: an algorithm that classifies patients according to the level of care they require. At first glance, a laudable idea — it saves time and money.
For classification, these algorithms evaluate data from patient files: diagnoses, treatments, medications. From these, they calculate a risk score that predicts how a person's state of health will develop over the next year. Based on this score, the patient is offered better prevention and health care. So if the algorithm determines that diabetes, hypertension, and chronic kidney disease add up to a dangerous combination, precautions must be taken: the physician could, for instance, put the patient on an intensive programme to lower blood sugar levels. Problem solved, algorithms save lives!
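The scoring step described above can be sketched in a few lines. This is a deliberately simplistic illustration: the condition names, weights, and threshold are invented for this sketch and are not taken from the article or from any real insurance system, which would use a trained statistical model rather than fixed rules.

```python
# Toy illustration of a risk score computed from a patient file.
# All weights are invented; a real system would learn them from data
# (and, as the article goes on to show, inherit that data's biases).

CONDITION_WEIGHTS = {
    "diabetes": 0.3,
    "hypertension": 0.2,
    "chronic_kidney_disease": 0.4,
}

INTENSIVE_CARE_THRESHOLD = 0.6  # invented cutoff for the prevention programme


def risk_score(patient_record):
    """Sum the (invented) weights of the chronic conditions on file.

    Returns a value capped at 1.0; higher means the patient is
    predicted to need more care over the next year.
    """
    score = sum(
        CONDITION_WEIGHTS.get(condition, 0.0)
        for condition in patient_record["diagnoses"]
    )
    return min(score, 1.0)


patient = {"diagnoses": ["diabetes", "hypertension", "chronic_kidney_disease"]}
score = risk_score(patient)
if score >= INTENSIVE_CARE_THRESHOLD:
    print(f"score {score}: offer intensive prevention programme")
```

The fatal combination from the example above scores 0.9 and crosses the invented threshold, so this patient would be flagged for extra preventive care — exactly the mechanism whose real-world failure the next section describes.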
Not quite, I’m afraid. In a new study, Ziad Obermeyer, a health policy researcher at the University of California, and his colleagues investigated the effectiveness of such a risk-prediction program at a large research hospital. They wanted to find out how well the algorithm’s predictions match reality. The team soon noticed that the program assigned a “strangely low” risk score to many Black patients despite their deteriorating health.