
Elon Musk, Steve Wozniak urge OpenAI to stop ChatGPT upgrades


Several well-known researchers, technologists, and other experts are calling on OpenAI to pause training of AI systems more powerful than its ChatGPT platform.

A group of well-known AI experts, including Elon Musk, Steve Wozniak, and Andrew Yang, has signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, the latest version of the model behind ChatGPT, which launched in November 2022.

The letter's authors say that "governments" should intervene if private companies do not halt training beyond GPT-4 on their own.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter reads.

Who signed the letter?

The letter has gathered almost 1,100 signatures so far, though some observers have questioned whether all of the well-known names on it actually signed. Notable signatories include Yoshua Bengio, winner of the Turing Award; Stuart Russell; Emad Mostaque, CEO of Stability AI; and Chris Larsen, co-founder of Ripple.

A spokesperson for the project said the initial signatures had been double-checked, but that not all identities had been verified yet because of the volume of interest.

“All of the top signatories on the list have been independently verified. Doing so for the whole list (which is now well over 30,000) exceeds our capacity,” said Anthony Aguirre, vice president of the Future of Life Institute.

A note on the website says that, due to high demand, signatures are still being collected but are no longer being displayed on the letter until the screening process catches up.

Musk was one of the co-founders of OpenAI. He left its board in 2018, but he reportedly put $100 million into the project.

AI's serious risks

Since GPT-4's release two weeks ago, reactions have ranged from excitement to fear. Some have called it more significant than the discovery of fire, while others worry that, left unchecked, it could destroy humanity.

The letter argues that AI systems with human-competitive intelligence pose serious risks to society and humanity, and that the focus should instead be on making existing systems more accurate, safe, transparent, robust, and trustworthy while accelerating the development of AI governance systems.

“With AI, people can have a bright future. We can do so. Let's have a long AI summer and not rush into fall without being ready. Other technologies that could be very bad for civilization have been put on hold,” the letter says.

Responding to a tweet about the petition, Musk wrote that leading AI developers won't heed this warning, but at least it was said.

OpenAI has not yet responded to a request for comment.

Content Source: decrypt

About MahKa

MahKa loves exploring the decentralized world. She writes about NFTs, the metaverse, Web3 and similar topics.
