Eight technology companies commit to developing ethical AI | A new agreement to regulate progress in the field

Since the emergence of ChatGPT towards the end of 2022, not a single day has passed without artificial intelligence being on the agenda, with all the potential and concerns that progress in this field entails. States and other major actors in society, however, are still trying to catch up and to organize a technological revolution whose dimensions are difficult to measure. In this context, eight global companies in the field, including Microsoft and Telefónica, have committed to cooperating with UNESCO to build “more ethical” artificial intelligence. They did so at the global forum organized by the agency in the city of Kranj, Slovenia, whose second edition carries the slogan “Changing the landscape of AI governance.”

“Today, we take another big step forward by securing the same tangible commitment from leading global technology companies, and I call on all stakeholders in the sector to follow suit,” noted Audrey Azoulay, Director-General of UNESCO. “An alliance between the public and private spheres is essential if we are to advance AI for the common good.” The formula of the “common good” invoked by Azoulay is concrete; the questions, however, multiply when it comes to artificial intelligence. In the face of such a revolution: who is responsible if artificial intelligence reproduces harm? What penalties should apply if its applications affect basic rights such as privacy? What role should states play, and what contributions can private actors make?

As these questions emerged, the eight companies signed a document committing them to align their developments in the field with the principles set forth by the United Nations in 2021, which were adopted at the time by more than 190 member states. Specifically, the aim is for all developments in this area to follow, from now on, the organization's ethical recommendation, on the basis of “protecting and promoting human rights and human dignity and ensuring diversity and inclusion.” In this way, organizations that design and implement AI systems would answer to standards that protect populations.


In addition to Microsoft and Telefónica, six other companies joined the agreement: GSMA, Lenovo Group, INNIT, LG AI Research, Salesforce, and Mastercard. The signed agreement obliges them to “guarantee human rights in the design, development, purchase, sale and use of artificial intelligence,” and likewise to “comply with safety rules and identify harmful effects of artificial intelligence.”

According to UNESCO's official website, companies must establish “verification procedures to ensure that safety standards are met” and adopt “measures to prevent, reduce or correct risks, in accordance with national legislation.” On this basis, the agreement highlights the importance of ex ante testing, that is, before an AI system is commercialized, and also requires ex post evaluation practices, that is, once its use becomes effective.

Attempts with little bite

The agreement signed by these eight companies joins other attempts to regulate progress in the field of artificial intelligence. In October 2022, Joe Biden's administration released a blueprint for an AI bill of rights: a plan that, while it did not specify concrete implementation measures, was presented as one of the first calls to action aimed at protecting digital and civil rights in the face of advances in artificial intelligence.

In March 2023, more than a hundred specialists from various disciplines called for “Artificial Intelligence in Latin America at the Service of People.” This came within the framework of the Latin American AI Meeting 2023, held in Montevideo, where for the first time the region's scientific and technological community reflected on this phenomenon from a human rights perspective. That meeting almost coincided in time with another turning point: in early April, figures such as Elon Musk, CEO of Twitter, Tesla and SpaceX, and Apple co-founder Steve Wozniak signed a letter (published on the Future of Life Institute website) urging an “immediate pause” in the training of AI systems for at least six months.


In mid-2023, scattered speeches turned into concrete action when the European Parliament approved a law to control the use of artificial intelligence on the continent. With this standard, the aim was to begin closely monitoring the risks the technology might pose. At the end of October of that year, the G7, which gathers the most powerful countries in the world, did the same and agreed on a first code of conduct to limit the actions of companies developing artificial intelligence. The stated goal is to restrict misleading practices and violations of basic rights such as privacy and intimacy. The document, called the “Hiroshima AI Process,” constitutes one of the first initiatives seeking to regulate this phenomenon.

Despite these attempts and good intentions, for now the algorithms created by powerful groups have a green light to act without constraint. They are training machines that are flexing muscles which, sooner rather than later, may slip out of control.
