What is algorithmic justice and why is it important in medicine?

In recent years, there has been a marked increase in research on artificial intelligence (AI) models for the automated analysis of medical images, which promise faster and more accurate diagnoses in diseases ranging from breast cancer to multiple sclerosis. However, as these models gained traction, warnings also began to appear about possible biases in their results for certain subpopulations (by age, race, or gender, among others). In health, an AI-based system that performs best in one group of people can lead to profound disparities in access to appropriate diagnoses and treatments.

Thus, an emerging field in computer science began to gain strength: so-called algorithmic fairness in machine learning. In this context, a group of scientists from the Research Institute for Signals, Systems and Computational Intelligence, sinc(i), and from the Artificial Intelligence Program of the Department of Health Informatics of the Hospital Italiano de Buenos Aires (DIS-HIBA) published a Comment in Nature Communications in which they present the state of the art on this topic and warn of challenges still to be solved, focusing on medical image analysis. In 2020, sinc(i), based at the Universidad Nacional del Litoral (UNL) and CONICET, had already published pioneering work on the gender biases that computer-aided diagnosis models can acquire if they are trained on unbalanced data.
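The effect described in that 2020 work, performance gaps that appear when training data are unbalanced by gender, can be illustrated with a deliberately simplified sketch. Everything below is invented for illustration and is not code or data from the sinc(i) study: a one-dimensional "image feature" whose disease signature sits at a different intensity in each group, and a toy threshold classifier fit to the midpoint of the class means.

```python
def fit_threshold(xs, ys):
    """Midpoint-of-class-means classifier: predict 'diseased' above threshold."""
    healthy = [x for x, y in zip(xs, ys) if y == 0]
    diseased = [x for x, y in zip(xs, ys) if y == 1]
    return (sum(healthy) / len(healthy) + sum(diseased) / len(diseased)) / 2

def accuracy(t, xs, ys):
    """Fraction of cases where 'above threshold' matches the true label."""
    return sum((x > t) == bool(y) for x, y in zip(xs, ys)) / len(xs)

# Synthetic feature values: male healthy 0.0 / diseased 1.0,
# female healthy 0.8 / diseased 1.8 (the signature is shifted between groups).
male = ([0.0] * 45 + [1.0] * 45, [0] * 45 + [1] * 45)
female = ([0.8] * 5 + [1.8] * 5, [0] * 5 + [1] * 5)

# Unbalanced training set: 90 male samples, only 10 female.
t = fit_threshold(male[0] + female[0], male[1] + female[1])
print(accuracy(t, *male))    # 1.0 -> perfect on the majority group
print(accuracy(t, *female))  # 0.5 -> every healthy woman is misclassified

# Rebalancing the training data removes the disparity.
bal_female = ([0.8] * 45 + [1.8] * 45, [0] * 45 + [1] * 45)
t_bal = fit_threshold(male[0] + bal_female[0], male[1] + bal_female[1])
print(accuracy(t_bal, *female))  # 1.0
```

The imbalanced threshold lands near the majority group's decision boundary, so the underrepresented group pays the cost: the model looks excellent in aggregate (95% overall accuracy) while systematically failing one subpopulation, which is exactly why disaggregated evaluation matters.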

“The field of algorithmic fairness seeks to ensure that the performance of a given model is not unfairly biased by demographic characteristics,” María Agustina Ricci Lara, of DIS-HIBA, explained to CyTA-Leloir. The bioengineer, who led the new study in the context of her PhD, added: “The danger of these biases lies mainly in providing different standards of care to groups of individuals simply because they belong to a particular population. It could mean, for example, different rates of resource allocation or of referral to treatment.”

María Agustina Ricci Lara, Rodrigo Echeveste and Enzo Ferrante warn of the biases that artificial intelligence systems applied to medical image analysis can acquire.

The debate in this area has grown to the point that the World Health Organization last year presented its report Ethics and Governance of Artificial Intelligence for Health. There it recognizes the benefits of artificial intelligence in research, diagnosis and disease detection, but warns of the challenges and dangers of “biases encoded in algorithms”.

Ricci Lara is doing her PhD at the Universidad Tecnológica Nacional (UTN), where she is developing new fair and adaptive machine learning methods for processing medical images, together with artificial intelligence specialists Rodrigo Echeveste and Enzo Ferrante, both CONICET researchers at sinc(i) and the directors of her thesis.

“We were able to verify that many of the studies assessing biases found them in disease-detection models in specialties such as radiology, dermatology, ophthalmology, and cardiology. We evaluated each of the databases used in these studies and found that most come from high-income countries, which means the algorithms are trained on groups with sociodemographic characteristics and epidemiological situations very different from those of Latin America, Africa, and parts of Asia,” emphasized Ricci Lara.

According to her, biases arise mainly from three sources: the databases; the AI models themselves; and the people responsible for designing these solutions, who may inadvertently build their own biases or mindsets into the tool. In this sense, she emphasized, there are strategies that can reduce the risk: choosing appropriate definitions of computational fairness, adopting methods capable of dealing with imbalances in the data, and forming interdisciplinary working groups with multiple perspectives. The researcher also stressed that a major challenge in health is building international and public databases that represent the entire population, not just that of the central countries.
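The first of those strategies, choosing an appropriate definition of computational fairness, can be made concrete. One widely used definition, equal opportunity, asks that a diagnostic model's sensitivity (true-positive rate) be similar across demographic groups. The sketch below is illustrative only; the helper functions and the toy data are not from the study:

```python
from collections import defaultdict

def true_positive_rates(y_true, y_pred, groups):
    """Per-group sensitivity: TP / (TP + FN), computed over positive cases."""
    tp = defaultdict(int)
    pos = defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, groups):
        if yt == 1:
            pos[g] += 1
            if yp == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos}

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest difference in sensitivity between any two groups.
    A gap of 0 means the model detects the disease equally well in all groups."""
    rates = true_positive_rates(y_true, y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: the model misses most diseased cases in group "B".
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(true_positive_rates(y_true, y_pred, groups))   # {'A': 1.0, 'B': 0.25}
print(equal_opportunity_gap(y_true, y_pred, groups)) # 0.75
```

Reporting a gap like this alongside overall accuracy is the kind of disaggregated evaluation the authors argue for: an aggregate metric alone would hide that group "B" receives a much lower standard of detection.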

A local look

Enzo Ferrante collaborates on the project “Epidemiological Management Based on Artificial Intelligence and Data Science” (ARPHAI), led by the Interdisciplinary Center for Studies in Science, Technology and Innovation (CIECTI), a civil association based in the Palermo scientific hub. There, they analyze health data, including detecting and mitigating biases, to improve the capabilities of the electronic medical record promoted by the national Ministry of Health. “Although this project does not work with medical images, we have applied many of the questions we raised in Nature Communications regarding the need to address the problem of bias from a multidimensional perspective,” Ferrante said.

He, Ricci Lara, and Echeveste seek to advance the evaluation of algorithms in the Argentine population, considering all of its diversity, and their implementation in clinical practice.

“The diversity is not only demographic, but also in terms of local diseases, such as Chagas. It is important to have our own data that allow us to study this type of disease, uncommon elsewhere,” explained Ricci Lara, who also stressed the importance of building Latin American databases that can be used both to train new models and to evaluate those already in use in other parts of the world. “This is costly and requires joint work between the various institutions and the support of the state,” she concluded.


Aileen Morales

"Beer nerd. Food fanatic. Alcohol scholar. Tv practitioner. Writer. Troublemaker. Falls down a lot."
