Our Future Artificial Intelligence Overlords Need a Resistance Movement


Artificial intelligence is moving so fast that even the scientists building it are struggling to keep up. In the past year, machine-learning algorithms have begun generating rudimentary movies and strikingly realistic fake photos. They are writing code, too. We may well look back on 2022 as the year AI shifted from processing information to creating content as capably as many humans.

But what if we also look back on it as the year AI took a step toward the destruction of humanity? As hyperbolic and absurd as that may sound, public figures from Bill Gates, Elon Musk and Stephen Hawking, going back as far as Alan Turing, have voiced concern about the fate of humans in a world where machines surpass them in intelligence, with Musk saying AI was becoming more dangerous than nuclear war.

After all, humans don’t have a great record of treating less intelligent species well, so who’s to say that computers, trained on data that reflects every facet of human behavior, won’t “put their goals ahead of ours,” as the famed computer scientist Marvin Minsky once warned.

Refreshingly, there is some good news. More scientists are working to make deep-learning systems more transparent and measurable, and that momentum should not stall. As these programs gain influence over financial markets, social media and supply chains, technology companies will need to start prioritizing AI safety over capability.

Last year, across the world’s AI labs, only about 100 full-time researchers were focused on building safe systems, according to the State of AI 2021 report, produced annually by the London-based investors Ian Hogarth and Nathan Benaich. Their report for this year found there are still only about 300 researchers working full time on AI safety.

“It’s a low number,” Hogarth said during a Twitter Spaces chat with me this week about the future threat of AI. “Not only are there very few people working on making these systems aligned, but it’s also kind of the Wild West.”


Hogarth was referring to how, over the past year, more AI software and research has come out of open-source collectives, which argue that intelligent machines should not be built and controlled in secret by a handful of big companies but created out in the open. In August 2021, for example, the community-led collective EleutherAI developed a public version of a powerful tool that can write realistic comments and essays on almost any topic, called GPT-Neo. The original tool, called GPT-3, was built by OpenAI, a firm co-founded by Musk and heavily backed by Microsoft Corp., which grants only limited access to its powerful systems.

This year, several months after OpenAI impressed the AI community with a revolutionary image-generating system called DALL-E 2, an open-source firm, Stability AI, released its own version of the tool, Stable Diffusion, to the public free of charge.

One advantage of open-source software is that, by being out in the open, it is constantly probed for flaws by a large number of people. That is one reason Linux is among the most secure publicly available operating systems.

But throwing powerful AI systems out into the open also raises the risk that they will be misused. If AI is potentially as damaging as a virus or nuclear contamination, then it might make sense to control its development. After all, dangerous pathogens are studied in biosafety laboratories and uranium is enriched in carefully constrained environments. Research into viruses and nuclear power is governed by regulation, though, and with governments far behind the pace of AI, there are no clear guidelines for its development.


“We’ve almost got the worst of both worlds,” Hogarth said. We raise the risks of AI by building it in the open, yet no one is accountable for what happens when it is built behind closed doors, either.

For now at least, it’s encouraging to see a growing focus on AI alignment, a nascent field concerned with designing AI systems that are “aligned” with human goals. Leading AI companies such as Alphabet Inc.’s DeepMind and OpenAI have multiple teams working on AI alignment, and many researchers from those firms have gone on to launch their own startups, some of which are focused on making AI safer. These include San Francisco-based Anthropic, whose founding team walked out of OpenAI and raised $580 million from investors earlier this year, and London-based Conjecture, which was recently backed by the founders of GitHub Inc., Stripe Inc. and FTX Trading Ltd.

Conjecture operates under the assumption that AI will reach parity with human intelligence within the next five years, and that its current trajectory spells catastrophe for the human race.

But when I asked Conjecture’s Chief Executive Officer Connor Leahy why AI might want to hurt humans in the first place, he answered with an analogy. “Imagine humans want to flood a valley to build a hydroelectric dam, and there is an anthill in the valley,” he said. “That won’t stop the humans from their construction, and the anthill will promptly be flooded. At no point did any human think about harming the ants. They just wanted more energy, and this was the most efficient way to achieve that goal. Analogously, autonomous AIs will need more energy, faster communication and more intelligence to achieve their goals.”

Leahy says that to prevent that dark future, the world needs a “portfolio of bets,” including scrutinizing deep-learning algorithms to better understand how they make decisions, and trying to imbue AI with more human-like reasoning.


Even if Leahy’s fears seem overblown, it’s clear that AI is not on a path that is entirely aligned with human well-being. Just look at some of the recent attempts to build chatbots. Microsoft abandoned its Tay bot in 2016, which learned from its interactions with Twitter users, after it posted racist and sexist messages within hours of its launch. In August of this year, Meta Platforms Inc. released a chatbot that claimed Donald Trump was still president, having been trained on public text from the internet.

No one knows whether AI will one day wreak havoc on financial markets or upend the food supply chain. But it can already turn people against one another through social media, something that is arguably happening now. The powerful AI systems that recommend posts to people on Twitter Inc. and Facebook are designed to maximize our engagement, which inevitably means serving up content that provokes outrage or spreads misinformation. When it comes to “AI alignment,” changing those incentives would be a good place to start.

More from Bloomberg Opinion:

• A Terrible, Horrible, No-Good Tech Week Explained in 10 Charts: Tim Culpan

• It’s Wile E. Coyote Time as Tech Races Off the Cliff: John Authers

• Microsoft’s AI Art Tool Could Be a Good Thing: Parmy Olson

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous.”

More stories like this one are available at bloomberg.com/opinion
