Argentine engineer Freddy Vivas, who trained at the center created by Google and NASA, explains how artificial intelligence is invisible yet present in every area. What are its advantages and risks?
Freddy Vivas is a systems engineer and musician. His first passion began at the age of 12 in Lomas de Zamora. He has done every imaginable tech-related job, from internet installation to technical services of all kinds. But his life was turned upside down when he won a scholarship to Singularity University in Silicon Valley, the space run by NASA and Google to empower future technology leaders.
In this magical place for IT (information technology) enthusiasts, 90 professionals from 50 countries around the world mingle, drawn from the most diverse fields: doctors, engineers, teachers and others.
Vivas has become a beacon of data science in the country and has his own company, RockingData, a startup dedicated to providing data science and big data services. He is the academic coordinator of the Big Data Program at the University of San Andrés. He has also released a new book, How Do Machines Think? (Galerna), which seeks to popularize his specialty. As a 10-year-old reader of Muy Interesante, he became passionate about science and technology. Although he was never what is considered a genius in the exact sciences, as he tells it, he found in computer science a tool to better himself and make a positive impact on society.
“Artificial intelligence (AI) is one of those technical words that has fully entered into the language of everyday life. We can define it as a combination of algorithms developed with the aim of demonstrating behaviors considered intelligent that, in some respects, mimic human capabilities,” he explained.
He admits that while it is a “very technical” concept that can be off-putting to most people, “we live with it in our daily lives. When we talk to Siri, the iPhone’s virtual assistant, we are in front of a system that processes natural language: that is artificial intelligence. When we use Waze, the system chooses the best route by making calculations that improve navigation in real time: artificial intelligence. When Amazon gives us product recommendations based on past shopping carts, that is also artificial intelligence.”
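Vivas’s Amazon example can be sketched in miniature. The following toy Python snippet (not Amazon’s actual system; all item names are invented) recommends the products that most often co-occurred with a target item in past shopping carts:

```python
from collections import Counter

def recommend(carts, target_item, k=2):
    """Recommend the items that most often co-occur with target_item
    across past shopping carts (a toy item-to-item approach)."""
    counts = Counter()
    for cart in carts:
        if target_item in cart:
            counts.update(i for i in cart if i != target_item)
    return [item for item, _ in counts.most_common(k)]

# Hypothetical past carts:
past_carts = [
    ["guitar", "strings", "tuner"],
    ["guitar", "strings", "capo"],
    ["strings", "picks"],
]
print(recommend(past_carts, "guitar"))  # ['strings', 'tuner']
```

“Strings” comes first because it appears alongside “guitar” in two carts; real recommenders use far richer signals, but the co-occurrence intuition is the same.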
Vivas is convinced that technology has made the prolonged COVID-19 pandemic more bearable. With artificial intelligence, something similar happens as happened, for example, with electricity: it is such an everyday thing that it is no longer visible. We no longer pay attention to it, but it is there.
Faced with the heavy demand on health centers as the number of people infected with COVID-19 grew exponentially, Vivas and his team implemented “in a Buenos Aires sanatorium, an algorithm that made it possible to organize the huge volume of available data, find patterns and predict questions such as how many beds need to be freed up, how many supplies are required, how to schedule and plan doctors’ breaks and, above all, how many people will enter intensive care with COVID-19.”
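The article does not describe the sanatorium’s actual model, but the idea of anticipating tomorrow’s demand from recent history can be sketched very simply. Below is a naive moving-average forecast in Python; the admission figures are hypothetical:

```python
import math

def forecast_next(daily_admissions, window=3):
    """Naive forecast: average of the last `window` days,
    rounded up so beds are never under-provisioned."""
    recent = daily_admissions[-window:]
    return math.ceil(sum(recent) / len(recent))

# Hypothetical daily ICU admissions with COVID-19:
admissions = [4, 6, 5, 7, 9]
beds_needed = forecast_next(admissions)
print(beds_needed)  # ceil((5 + 7 + 9) / 3) = 7
```

A production system would account for trends, seasonality and uncertainty, but even this sketch shows how “knowing what happened” feeds a prediction of what might happen.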
The world of health is being deeply transformed by artificial intelligence. For example, “there are many programs that make it possible to detect certain abnormalities in X-rays with great accuracy, visualizing, for example, tumors that in some cases are imperceptible to the human eye. They apply an artificial intelligence technology called computer vision, or artificial vision, which has a tremendous ability to identify objects within videos or photos.”
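Real radiology systems use deep neural networks, but the basic computer-vision idea of flagging pixels that stand out from their surroundings can be illustrated with a toy example (the “scan” values below are invented):

```python
def find_bright_spots(image, factor=2.0):
    """Flag pixels much brighter than the image average,
    a toy stand-in for spotting anomalies in a grayscale scan."""
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    return [(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v > factor * mean]

scan = [
    [10, 12, 11],
    [11, 95, 10],   # 95 is an unusually bright pixel
    [12, 10, 11],
]
print(find_bright_spots(scan))  # [(1, 1)]
```

A trained model learns far subtler patterns than a brightness threshold, but both reduce an image to “where should a human look more closely?”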
The same goes for agriculture, where the applications “are endless, and Argentina is a benchmark in agricultural technology. Transportation, politics, finance and education: there is artificial intelligence in everything.” Data analysis provides predictability and answers to the questions we ask ourselves in these circumstances. “Data science relies on knowing what happened in order to predict what might happen. Around the world there are hundreds of examples of cases similar to this one.”
Artificial intelligence has also appeared in the daily lives of those who spent many hours in isolation, making life more enjoyable. “Netflix’s series recommendation algorithm, based on our tastes, made quarantine easier for us,” he said. “No matter what region or field you work in, our activities are affected by technological disruption.”
Vivas believes certain cultural barriers still lead many people to distrust AI, because “every technological change generates not only excitement and enthusiasm, but also fear and anxiety. The fear that technology will replace us, that we will not be able to adapt to a new way of doing things, or simply rejection of the unknown.” That is why “the best way to overcome it is to learn how AI works.”
Without losing sight of the harm technology can cause, the specialist lists the best examples of its benefits: “We must think about how machines can help us with heavy, dangerous and unhealthy tasks that require repetitive actions. Or how humans can complement machines in tasks that require high levels of intuition, emotion or cultural sensitivity. I think books, education and conversations are ways I try to break down some of the myths about the relationship between humans and artificial intelligence.”
Ethical issues in the application of information technology then come into play. The scandal of Cambridge Analytica, the British consultancy that used Facebook data to design targeted messages in political campaigns, is still fresh in many people’s memory.
“For our team, it is important not to lose sight of the purpose of our work: it is about using knowledge to think creatively and critically, and collaborating to solve real-life problems. If we want to take advantage of data to improve people’s quality of life, it is time to establish solid legal and ethical standards,” he explained.
The best way to manage and avoid these risks is to make them known, not sweeping them under the rug but putting them on the table. We are facing a very powerful technology; we cannot focus only on the opportunities without knowing the risks. But we should not let the trees hide the forest: if we use this technology well, we can revolutionize the world we live in. We are already doing that. When we use artificial intelligence and data science, we create models so that machines can make sense of the world, and that process can replicate and, worse, amplify human error.
Biases in the development of AI are widely discussed among professionals. “As users, we seek more transparency in any process, whether or not an algorithm is involved. That is why the main goal should be to design intelligent, human-centered decision-making mechanisms. In this context, and given the infinite complexity of humanity’s value system, it is necessary to give artificial intelligence values and principles. This is undoubtedly the responsibility of the people who build these technologies, who must understand the moral and ethical implications of our work.”
– A machine learns what a human teaches it. What if we don’t teach it well?
As the number of AI projects run by organizations grows, so does the need to explore more deeply how AI affects the organization’s activities, boundaries and goals, including its mechanisms and processes. Organizations work this path out as they go, which is why it is important to build in considerations for responsible AI from the start.
– What are some ideas for exercising that responsibility?
I think it is companies’ responsibility to educate and spread awareness about AI. Work in a multidisciplinary way: it brings to the table opinions and conversations that are impossible to achieve any other way. Create a reliable, interpretable AI framework whose goal is to ensure safe AI in the future, which is why it is important to understand the discussions guiding today’s AI regulation and to draw up policies and design principles. Work on establishing anti-bias processes: we can probably never keep biases out of our algorithms entirely, but we can take measures that minimize them and stay alert to detect them as they arise. And when you start an AI project, consider the potential impact of the product.
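One concrete anti-bias measure of the kind Vivas describes is monitoring whether a model’s positive outcomes differ across demographic groups. A minimal sketch, assuming hypothetical loan-approval decisions, computes per-group selection rates and their largest gap (a simple demographic-parity check):

```python
def selection_rates(decisions):
    """Per-group rate of positive outcomes.
    `decisions` maps group -> list of 0/1 model decisions."""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def parity_gap(decisions):
    """Largest difference between any two groups' rates;
    a large gap is a signal to audit the model."""
    rates = selection_rates(decisions).values()
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions by demographic group:
outcomes = {"group_a": [1, 1, 0, 1], "group_b": [0, 1, 0, 0]}
print(parity_gap(outcomes))  # 0.75 - 0.25 = 0.5
```

A gap alone does not prove unfairness, but tracking it over time is one practical way to “be on the alert” for biases as they arise.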
– Centennials have an attention span of 8 seconds and currently account for 40% of consumers. How do we get their attention?
– The ways we consume content have changed drastically, and this is neither better nor worse; it simply is. Messages that are more relaxed and genuine, that have a practical element and that get to the point are, I think, the best way to reach them. Together with a large team of data professionals, we are working on something we cannot officially announce yet but, let me tell you, it will be online courses on data that seek to change the way we learn and teach these topics. We use microlearning: content based on shorter, more fragmented formats.
– How does data science change ordinary people’s lives?
It really changes almost everything we do: the movies we watch, the music we listen to, the products we buy, the news we consume and the conversations we have in chat. All of it is driven by data science and, what is more interesting, it all generates more data that will be used to improve the recommendations, suggestions and experience each of us has with technology. It is everywhere; you just have to sharpen your eye a bit to see it.
– Is it possible to become a subject unknown to machines?
In the United States, for example, about half of all citizens’ faces are recorded in police facial recognition systems. That discussion would take a long time but, in principle, it is very difficult to become invisible to technology. Still, AI does not recognize me, Freddy Vivas, as a unique, unrepeatable human being; it does so only on the basis of the data it can obtain about me.