By Rodrigo C. Barros
What do AI and Chloroquine have in common?
The reader has already grasped the astronomical impact of artificial intelligence (AI) on business and government, so much so that the world's large economies have felt compelled to establish strategic plans for the technology. What not everyone yet understands are the real risks the technology poses.
A historical overview of artificial intelligence takes us on a roller coaster of exaggerated promises and gigantic disappointments. One of its milestones is the emergence of artificial neural networks (ANNs) in 1958, when Frank Rosenblatt invented the “Perceptron”. It was only in the 2010s, however, that such networks became the field's main driving force. Thanks to a favorable union of catalytic factors, such as the explosion in data availability and the arrival of hardware specialized in matrix multiplication, ANNs sparked an astonishing revolution, surprising the world with their ability to handle complex tasks. The area was rebranded “Deep Learning”, an allusion to the growing number of layers of neurons in network architectures, now much deeper.
With “Deep Learning” invading our everyday lives, there was no shortage of futurologists reviving the old-fashioned prophecies: the singularity and the revolt of the machines, complete with Schwarzenegger in his Terminator costume. But make no mistake. The probability that a current ANN will gain consciousness is as small as the size of a biological neuron.
The great threat of AI, surprisingly, is that it reproduces human behavior all too well. Worse still, it reproduces the worst in us: prejudices. It must be clear that ANNs are machines of correlation, not of cause and effect. More than that, in a country where the President of the Republic does not understand that “correlation does not necessarily imply causation”, we need to be didactic and teach the public that data may contain many correlations, but that good science is the kind that treats categorical claims about causality with suspicion. Otherwise, we would be forced to admit that US government spending on science is responsible for the number of suicides by strangulation and hanging in the US.
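The spurious-correlation trap is easy to reproduce. As a minimal sketch (the numbers below are simulated, not the real spending or suicide figures), two series that merely share an upward trend over the same years can correlate almost perfectly with no causal link between them:

```python
import random

# Two independent simulated series that both drift upward over time.
# Neither causes the other; they only share a trend.
random.seed(0)
years = range(2000, 2020)
spending = [10 + 0.5 * t + random.gauss(0, 0.3) for t, _ in enumerate(years)]
suicides = [100 + 3.0 * t + random.gauss(0, 2.0) for t, _ in enumerate(years)]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(spending, suicides)
print(f"correlation: {r:.2f}")  # very strong, yet entirely non-causal
```

An ANN trained on these two columns would happily exploit the correlation; nothing in the data tells it that the relationship is meaningless.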
The clearest example of how much of the population fails to grasp the difference between correlation and causation is the pseudoscientific outbursts at Brazil's Covid parliamentary inquiry (CPI) in defense of using chloroquine to fight the virus. It is true that those responsible for the sanitary tragedy we are living through acted out of ignorance: they do not know the difference between correlation and causation, nor do they understand the specifics and nuances of the scientific method.
We run the same risk when we blindly trust ANNs. If we train such methods to discover patterns in data riddled with disparities, the resulting models will reproduce those disparities. A classic case of injustice carried out by AI is that of the COMPAS tool (Correctional Offender Management Profiling for Alternative Sanctions), which helped US courts estimate the probability that defendants would reoffend. Would anyone be surprised to learn that the algorithm singled out Black individuals as more likely to reoffend?
The field of “Fairness in Machine Learning” has been gaining traction in academia, serving as a warning to all who benefit from AI: it is not enough for models to learn the patterns in the data well; they must also be prevented from propagating prejudices. The effort toward fairness in AI is only beginning, and there are many avenues for combating harmful biases. Models can be designed to deliberately counteract previously identified confounding factors. Synthetic databases can be built and adjusted to account for such factors. What we cannot do is pretend that prejudices do not exist, or that it is not everyone's problem if machines reproduce them.
In times of far-right governments, which exude and promote prejudice, it is clear that the main struggle within AI is the same one we fight daily: the battle against injustice and prejudice.
Rodrigo C. Barros is a computer scientist with a doctorate in artificial intelligence from USP. He is an AI researcher at PUCRS and Research Director at Teia Labs.
Subscribe to Serrapilheira’s newsletter to follow more news from the institute and from the Ciência Fundamental blog.