Rosario3 |
The developers of Bard, Google’s new artificial intelligence project seeking to compete with ChatGPT, were surprised to discover that the system was able to learn a new language on its own, without having been trained in it. The finding has fueled further speculation about what is known as the AI “black box”.
“We found that with very few prompts in Bengali, it can now translate all of Bengali,” James Manyika, Google’s head of artificial intelligence, told CBS in a recent interview.
For his part, Google CEO Sundar Pichai acknowledged that the ability of these AI models to generate skills and provide answers in unexpected ways is what experts call a “black box”.
“You don’t fully understand it. You can’t quite tell why (the AI) said this, or why it got it wrong. We have some ideas, and our ability to understand it gets better over time. But that’s where we are so far,” Pichai explained.
Along these lines, he added: “I think developing this should involve not only engineers, but also social scientists, ethicists, philosophers and so on, and we should be very thoughtful. These are things that society needs to figure out as we move forward. It’s not for a company to decide.”
What are the risks of the “black box” of artificial intelligence?
As big tech companies accelerate their investment in AI systems based on natural language models, concerns are also growing about the risks of allowing free use of tools that have not been fully explored. One of the main questions is precisely that: it is not clear how these systems acquire skills they were never trained for.
At the opposite end from the black box is what AI experts call the “white box”: software whose operation is fully defined by the instructions that developers write in programming languages.
Ian Hogarth, co-founder of technology company Plural and co-author of the State of AI Report, noted that as these AI models evolve, “there are huge leaps in their capabilities.” Unlike more traditional software, where the instructions given to an application are written to produce a specific outcome, in artificial intelligence engineers work to create systems that mimic the “neural networks” of human intelligence.
That is, the AI can process massive amounts of data, detect patterns among millions of variables through machine learning and, most importantly, adapt itself as it continues to operate.
David Stern, director of quantitative research at G-Research, a technology company that uses machine learning to predict prices in financial markets, warned in a BBC interview that AI developments over the past few years have “increasingly involved a black box approach.”
The result is a situation in which it is increasingly difficult to understand precisely how these systems operate.
“We’ve increased the amount of computing power these models consume by a factor of about 100 million over the past decade. So while ChatGPT seems to have come out of nowhere for most people, it is part of a very long-term trend, and one that will continue,” as Hogarth described it.
In conclusion, the Plural co-founder noted: “I think it has remarkable potential to change our lives. In some ways, it is perhaps the most powerful technology of our time. The main thing is that we should have a more public discussion about how quickly these systems are advancing and how different they are from previous generations of programs.”