Legislating science fiction

I am often asked whether I think AI should be regulated. Of course, I answer. We do it with medicines, with driving vehicles, and even with bread, through the EU's harmonized horizontal legislation on food. Why would we not regulate the most socially and economically transformative technology we have ever developed?

There are those who turn to ethics to keep AI in check, preventing its misuse and even its abuse. No doubt some of the companies leading the research, development and commercialization of artificial intelligence would like that very much. Ethics is essential when we deal with matters about which we lack a clear, coherent picture (for example, how to make decisions, resolve conflicts, and set limits on what is possible but not desirable). Still, we cannot leave issues that are critical for us, the people, in the hands of supposedly moral machines. An ethical framework can serve as a precursor to what should later become legislation, but it must not replace it. What would you think if issues as important as the use of your personal data, the possibility of being identified anywhere through facial recognition, the transparency (or opacity) of the algorithms that make decisions about your personal or professional life, liability in the event of a self-driving car crash, or the decision such a car might take in the face of an unavoidable collision, were covered only by recommendations? To me that seems intolerable, and I would rebel against anything of the sort.


Another question I am frequently asked is why legislation always runs one step behind technology, and even more so in the case of AI. It is a question that legal experts in these matters will answer much better than I can. In any case, I think it cannot be otherwise, and that this lag is natural and even desirable in a legal system. Imagine for a moment trying to regulate the use of a technology that does not yet exist but might become a reality in the future. We would end up relying on predictions, speculation and fantasy. Does it make sense to regulate the teleportation of people today? Or trips to the moon in the time of Jules Verne? Yes, I know I am exaggerating, but taking examples to the extreme is sometimes useful for exposing absurdities or, at least, inconsistencies in an approach.

Moreover, trying to prevent at all costs every bad thing we can merely imagine happening with a new technology can stop other, desirable things from happening at all. In such a scenario we risk passing laws that, instead of protecting and guiding the sound development of these technologies, end up hindering them, if not killing them at the root. Prohibition can be a temptation, because banning is much easier than getting regulation right.

This is why I understand that the European Union has spent years working on its AI law (the AI Act). In April 2021 the European Commission presented a proposal for a Regulation of the European Parliament and of the Council which, once its passage is complete, will be the world's first comprehensive legal framework for AI; as a regulation, it will apply directly in all EU member states without needing transposition into national law. Meanwhile, we are far from helpless. Laws of great importance that bear directly on artificial intelligence have already been approved, such as those on digital markets and digital services (the latter curbs the spread of dangerous digital content, for example); the General Data Protection Regulation (GDPR) has imposed strict limits on the collection of personal data since 2018; and new legislative proposals are on the table, such as a specific directive on liability in the field of AI and an updated directive on liability for damage caused by defective products.


The idea behind the AI regulation is simple to state, but its development is complex. Different uses of AI are analyzed and classified according to the potential risk they pose to users, so that different risk levels carry different obligations for providers and users. Systems of unacceptable risk (the cognitive manipulation of human behavior, for example) would be banned. The rest are divided among: high-risk systems (such as those involved in the management and operation of critical infrastructure, in health, or in assisting legal interpretation and enforcement); limited-risk systems (which would be obliged, for example, to disclose that it is an AI we are interacting with); and, finally, minimal-risk systems, which can be used freely, even if recommendations exist for them (an AI-enabled video game, for example).
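The tiered scheme described above can be pictured as a simple lookup from use case to obligations. The sketch below is purely illustrative: the four tiers and the example use cases come from the paragraph above, but the mapping and all names in the code are my own assumptions, not the legal text of the AI Act.

```python
# Illustrative sketch of the AI Act's risk tiers (an assumption, not the legal text).
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations (e.g. disclose that it is an AI)"
    MINIMAL = "free use, possibly with non-binding recommendations"


# Example use cases mentioned in the article, mapped to assumed tiers.
EXAMPLES = {
    "cognitive manipulation of human behavior": RiskTier.UNACCEPTABLE,
    "critical infrastructure management": RiskTier.HIGH,
    "chatbot interacting with users": RiskTier.LIMITED,
    "AI-enabled video game": RiskTier.MINIMAL,
}


def obligations(use_case: str) -> str:
    """Return a one-line summary of the obligations for a given use case."""
    tier = EXAMPLES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"


if __name__ == "__main__":
    for case in EXAMPLES:
        print(obligations(case))
```

The point the sketch makes is the regulation's design choice: obligations attach to the *use* of an AI system, not to the underlying technology, so the same model can fall into different tiers depending on where it is deployed.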

Beyond regulating what it should, the law must do so in a way that inspires public confidence and encourages companies to keep investing in the development of AI and in new products, services and applications. In fact, one of the most recent provisions included in the current draft of the AI Act requires member states to promote research and development of AI solutions that support socially and environmentally beneficial outcomes, such as reducing social and economic inequality. Allow me to be deeply skeptical that this goal will be reached, much as I applaud it. But we will discuss that another day.

AI applications must be safe, transparent, traceable, non-discriminatory and respectful of the environment. That is why they must be regulated. But the law cannot run ahead of reality, and certainly not in a wild race. The law must be careful, reflective and, above all, concrete, within its inevitable margin of imprecision. One cannot legislate on theories or hypotheses, only on facts and their implications for society. Anything else could lead us to legislate science fiction.


Aileen Morales

"Beer nerd. Food fanatic. Alcohol scholar. Tv practitioner. Writer. Troublemaker. Falls down a lot."
