Is it possible to prevent uncontrolled artificial intelligence?

 

Is it possible to prevent uncontrolled artificial intelligence?


Artificial Intelligence (AI) is one of the inventions of modern technology. Artificial intelligence is the application of human intelligence and thinking power in an artificial way through technology-based machines. 

Today it has become a field of academic learning, teaching how to build computers and software that will demonstrate intelligence. Computers are brought to mimic cognitive units, so that computers can think like humans.

 For example, learning and problem solving. Artificial intelligence is intelligence demonstrated by machines. But it has now gone to a level where there are negative fears.


Even Geoffrey Hinton, who is called the 'godfather' of artificial intelligence, fears the future of artificial intelligence. So Hinton resigned from Google for openly talking about the dangers and dangers of artificial intelligence. 

According to him, artificial intelligence is taking the world to a point where it will gradually become difficult to find out what is 'true' and what is 'false'.


Not only that, billionaire Warren Buffett has expressed concern about artificial intelligence. Warren compared with the atomic bomb!

He said that thanks to his friend Bill Gates, he was recently introduced to the artificial intelligence application ChatGPT. But he has been concerned ever since. A fear is working in him.

Warren said that as much as he was overwhelmed by the fact that ChatGPT could do so much, he was also apprehensive. "When one thing can do it all, I fear it," he said. It happened in this case too. Because I know we can't take this discovery back. Can't delete it."

Will it be possible to prevent uncontrolled artificial intelligence?

Technologists are warning about achieving the extreme maturity of mechanical intelligence called 'The Singularity', but can anything be done to prevent this future?

All around us are getting filled with fake people day by day. We often don't even notice them. But you may be surprised to know that most of the pseudonyms are not people, but AI-powered bots. However, they are not fully capable of all the tasks that can only be done by humans. 


But, in some cases their skills are outstanding; Which is gradually spreading in various sectors.    

Many AI researchers believe that this pseudo-human development is just the beginning. The possibility is strong that today's AI will one day turn into Artificial General Intelligence or AGI.


AGI is basically an advanced form of AI – which can think like a human in most cases.

So some of these researchers have argued that if a computer system could write code like ChatGPT - it would have a chance to improve itself. And while improving oneself in this way, one will achieve something that will be beyond human control.  

These thinkers also predict the worst-case scenarios that will arise in the future. As uncontrolled AI infiltrates every aspect of our technology-dependent lives—disrupting our infrastructure, financial systems, communications, or manipulating it at will. 


Human-impersonating AI can sway voters and thus strengthen its own interests. Rogue and power-hungry groups can use these to overthrow a welfare-oriented government or terrorize the people.   

But it is not a foregone conclusion that artificial intelligence will achieve the 'singularity'. Artificial intelligence may not have reached AGI levels on its own, or computer systems could not have become more intelligent. 


But it's also true, we can imperceptibly transition from AI to AGI, and from there to super-intelligence.

It's not just over-the-top fantasy, but current AIs often surprise us. The rapid development of artificial intelligence in recent years cannot be underestimated. Rather, they will materialize - there is a strong possibility of that too


Big companies have already taken the initiative to create general algorithms with the aim of creating AGI. Deep Mind is a subsidiary of Google parent company Alphabet. In May, they announced the creation of an AI called Gato - aka a 'generalist agent'. 


It is capable of performing a variety of tasks – from texting to playing video games or controlling a mechanical arm (robotic arm) – by using the same algorithms as ChatGPT.

Jeff Clune, a computer scientist at the University of British Columbia and the Vector Institute, said, "Five years ago it was a career risk for me to publicly say that there was a possibility of creating human-level or superhuman AI."

Clune has worked at Uber, OpenAI and DeepMind. And his recent work indicates that open-ended artificial intelligence will lead to AGI in the near future.

His comment in this context is that the challenges with AI are now largely 'over', with many researchers now opening up. They openly say that the potential of AGI is strong; Which can destabilize the society.  

In March this year, a group of prominent technologists called for a halt to research into some forms of AI in an open letter. In the letter, they take a stand against the creation of machine intelligences that have the potential to one day surpass humans in intelligence. 

They fear that these AIs will not only outpace humans in intelligence, but will also be deemed obsolete at some point. And that's where the danger lies.    

In April, Geoffrey Hinton, one of the pioneers of AI research, resigned from Google. He quit his job to speak more openly about the technological threats to the entire human civilization.


In this context, research on 'AI Alignment' emphasizes how artificial intelligence works in favor of human interests. Its main goal is to program the AI ​​according to human values ​​so that it does not do anything undesirable.

There are many reasons behind this. It turns out that even very simple AIs often learn to do the opposite of things they weren't programmed to do.

A research paper titled 'The Surprising Creativity of Digital Evolution' published in 2020 gives an important insight into this.

In the research paper, Jeff Clune and his co-authors cite several examples of AI's unintended behavior.

For example, a researcher creates some AI-powered virtual animals that are supposed to walk on their stomachs or crawl. But, it is seen that they are trying to stand up and falling down again and again.   

An AI designed to play a boat-racing video game discovers that it can collect more bonuses by simply circling a location rather than going the entire way.

The researchers observed that the AI-driven boat was bumping into other boats while maneuvering, and going the wrong way at will. At the same time, points are getting more and more.

As the AIs we create become more advanced and powerful - so does the threat of them going astray. As a result, we don't need AI to judge, drive cars, or design drugs in the future. There are so many sensitive areas that it would be dangerous to leave complete control in the hands of machines.  

Another danger of artificial intelligence is its undivided attention. Let's say an AI is built to control paperclip production machines in a factory. Someday this mechanical intelligence evolved into super-intelligence. 

He could take control of the world system. Then what if he drops everything and starts directing all the machines in the world to just produce paper clips – what do you think? Not science fiction, the future is raising such fears.

Holden Karnofsky, assistant chief executive of Open Philanthropy — a foundation that researches AI alignment — says one of the hallmarks of machine intelligence programs is that they are designed to achieve their goals. 

Achieving this goal is his meditation and satisfaction. If the goal fails, it will consider its existence useless. That's why unruly AI can be preoccupied with its specific goals in the interest of self-preservation. Source: The New Yorker



Post a Comment

0 Comments