Big Data News Hubb

Open Letter Urges Pause on AI Research

By admin | April 3, 2023 | News


(ANDRANIK HAKOBYAN/Shutterstock)

The Future of Life Institute has issued an open letter calling for a six-month pause on some forms of AI research. Citing “profound risks to society and humanity,” the group is asking AI labs to pause research on AI systems more powerful than GPT-4 until more guardrails can be put around them.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the Future of Life Institute wrote in its March 22 open letter, which you can read here. “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening.”

Without an AI governance framework–such as the Asilomar AI Principles–in place, we lack the proper checks to ensure that AI develops in a planned and controllable manner, the institute argues. That’s the situation we face today, it says.

Turing Award winner Yoshua Bengio supports the AI pause

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one–not even their creators–can understand, predict, or reliably control,” the letter states.

Absent a voluntary pause by AI researchers, the Future of Life Institute urges government action to prevent harm caused by continued research on large AI models.

Top AI researchers were split on whether or not to pause their research. Nearly 1,200 individuals, including Turing Award winner Yoshua Bengio, OpenAI co-founder Elon Musk, and Apple co-founder Steve Wozniak, signed the open letter before the institute temporarily paused the signature-counting process itself.

However, not everybody is convinced that a pause on research into AI systems more powerful than GPT-4 is in our best interests.

“The letter to pause AI training is ludicrous,” Bindu Reddy, the CEO and founder of Abacus.AI, wrote on Twitter. “How would you pause China from doing something like this? The US has a lead with LLM technology, and it’s time to double-down.”

“I did not sign this letter,” Yann LeCun, the chief AI scientist for Meta and a Turing Award winner (he won it together with Bengio and Geoffrey Hinton in 2018), said on Twitter. “I disagree with its premise.”

Turing Award winner Yann LeCun opposed the pause in AI research

LeCun, Bengio, and Hinton, whom the Association for Computing Machinery (ACM) has dubbed the “Fathers of the Deep Learning Revolution,” kicked off the current AI craze more than a decade ago with their research into neural networks. Fast forward 10 years, and deep neural nets are the predominant focus of AI researchers around the world.

Following their initial work, AI research was kicked into overdrive with the publication of Google’s Transformer paper in 2017. Soon, researchers were noting unexpected emergent properties of large language models, such as the capability to learn math, chain-of-thought reasoning, and instruction-following.

The general public got a taste of what these LLMs can do in late November 2022, when OpenAI released ChatGPT to the world. Since then, the tech world has been consumed with implementing LLMs into everything they do, and the arms race to build ever-bigger and more capable models has gained extra steam, as seen with the release of GPT-4 on March 15.

While some AI experts have raised concerns about the downsides of LLMs, including a tendency to lie, the risk of private data disclosure, and the potential impact on jobs, those concerns have done nothing to quell the general public’s enormous appetite for new AI capabilities. We may be at an inflection point with AI, as Nvidia CEO Jensen Huang said last week. But the genie would appear to be out of the bottle, and there’s no telling where it will go next.

Related Items:

ChatGPT Brings Ethical AI Questions to the Forefront

Hallucinations, Plagiarism, and ChatGPT

Like ChatGPT? You Haven’t Seen Anything Yet



© Big Data News Hubb All rights reserved.
