A week before the second-ever global summit on artificial intelligence, protesters from a small but growing movement called “Pause AI” demanded that the world’s governments regulate AI companies and freeze the development of new, cutting-edge artificial intelligence models. They argue that development of these models should only be allowed to continue if companies agree to let them be thoroughly evaluated to test their safety first. Protests took place in 13 countries on Monday, including the U.S., the U.K., Brazil, Germany, Australia, and Norway.
In London, a group of about 20 demonstrators stood outside Britain’s Department of Science, Innovation and Technology, chanting things like “stop the race, it’s not safe” and “whose future? our future,” in hopes of attracting the attention of policymakers. Protesters say their goal is to get governments to regulate the companies developing frontier AI models, such as OpenAI’s ChatGPT. They say companies are not taking enough precautions to make sure their AI models are safe enough to release into the world.
“[AI companies have] proven time and time again… through the way these companies treat their workers, and the way they treat other people’s work by literally stealing it and putting it into their models, that they cannot be trusted,” said Gideon Futerman, an undergraduate at the University of Oxford who gave a speech at the protest.
One protester, Tara Steele, a freelance writer who works on blogs and SEO content, said she has already seen the technology affect her livelihood. “Since ChatGPT came out, I’ve noticed the demand for freelance work drop dramatically,” she says. “I personally love writing… I really do. And it’s kind of sad, emotionally.”
Read More: Pausing AI Developments Isn’t Enough. We Need to Shut It All Down
She says her main reason for protesting is that she fears more dangerous consequences from frontier AI models in the future. “We have some of the most qualified and knowledgeable people in this field, Turing Award winners, highly regarded AI researchers, and the CEOs of the AI companies themselves [saying that AI could be extremely dangerous].” (The Turing Award is an annual prize given to computer scientists for contributions of major importance to the field, and is sometimes referred to as the “Nobel Prize” of computing.)
She is particularly concerned by the growing number of experts warning that improperly controlled AI could have catastrophic consequences. A report commissioned by the U.S. government and published in March warned that “the rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” Today, the largest AI labs are attempting to build systems capable of outperforming humans at nearly every task, including long-term planning and critical thinking. If they succeed, ever more aspects of human activity could become automated, from everyday things like online shopping to the introduction of autonomous weapons systems that could act in ways we cannot predict. That could fuel an “arms race” and increase the likelihood of “global- and WMD [weapons of mass destruction]-scale fatal accidents, interstate conflict, and escalation,” the report said.
Read More: Exclusive: U.S. Must Act ‘Decisively’ to Avert ‘Extinction-Level’ Threat From AI, Government-Commissioned Report Says
Experts still don’t understand the inner workings of AI systems like ChatGPT, and they worry that in more sophisticated systems, this lack of knowledge could lead to serious miscalculations about how more powerful systems would behave. Depending on how deeply AI systems are integrated into human life, they could wreak havoc or seize control of dangerous weapons systems, leading many experts to worry that they could bring about human extinction. “These warnings aren’t making it through to the general public, and they need to know,” she said.
At the moment, machine learning experts are divided over exactly how much risk further development of artificial intelligence entails. Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers” of deep learning, a type of machine learning that allows AI systems to better simulate the human brain’s decision-making processes, have publicly stated that they believe the technology risks leading to human extinction.
Read More: Eric Schmidt and Yoshua Bengio Debate How Much A.I. Should Scare Us
The third godfather, Yann LeCun, who is also chief AI scientist at Meta, adamantly disagrees with the other two. He told Wired in December: “AI will bring a lot of benefits to the world. But people are exploiting fear about the technology, and we run the risk of scaring people away from it.”
Anthony Bailey, another Pause AI protester, said that while he understands there are benefits that could come from new AI systems, he worries that tech companies will be incentivized to build technologies that humans could easily lose control over, because those technologies also have immense potential for profit. “That’s the economically valuable stuff. If people are not dissuaded that it’s dangerous, those are the kinds of modules which are naturally going to get built.”