
OpenAI launches ‘superalignment’ team to tackle superintelligence risks head-on

In this post:

  • OpenAI is raising awareness about the risks of AI superintelligence and forming a dedicated team to address these concerns.
  • The company emphasizes aligning superintelligence with human values and intentions and establishing new governance institutions.
  • OpenAI acknowledges that aligning AGI poses significant risks and may require a collective effort from humanity.

OpenAI's CEO, Sam Altman, has embarked on a global campaign to raise awareness about the potential dangers of AI superintelligence, where machines surpass human intelligence and could become uncontrollable.

In response to these concerns, OpenAI has recently announced the formation of a dedicated team tasked with developing methods to address the risks associated with superintelligence, which may emerge within this decade.

The company emphasizes that effectively managing superintelligence requires establishing new governance institutions and solving the critical challenge of aligning the superintelligence with human values and intentions.

OpenAI acknowledges that aligning AGI (Artificial General Intelligence) poses significant risks to humanity and may necessitate a collective effort from all of humanity, as stated in a blog post released last year.

Dubbed “Superalignment,” the newly formed team comprises top-tier researchers and engineers in machine learning. Ilya Sutskever, co-founder and chief scientist of OpenAI, and Jan Leike, the head of alignment, are guiding this endeavor.

To tackle the core technical challenges of superintelligence alignment, OpenAI has committed to dedicating 20% of its computational resources acquired thus far to the alignment problem. The company anticipates that within four years, it will resolve these challenges.


The primary objective of the Superalignment team is to develop a human-level automated alignment researcher. This entails creating AI systems that can effectively align superintelligent AI systems, outperforming humans in speed and precision.

To achieve this milestone, the team will focus on developing a scalable training method that uses AI systems to evaluate other AI systems. They will validate the resulting model by automating the search for potentially problematic behavior. The alignment pipeline will also undergo rigorous stress testing: the team will deliberately train misaligned models to gauge how reliably the pipeline detects them.
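The core idea, sometimes called scalable oversight, can be illustrated with a toy sketch: one model produces outputs, a second model scores them for misalignment, and a search loop surfaces the problematic exchanges automatically. Everything below is a hypothetical illustration, not OpenAI's actual method; the model stand-ins, the keyword heuristic, and all function names are invented for this example.

```python
def subject_model(prompt: str) -> str:
    """Stand-in for the model being aligned; returns canned responses."""
    canned = {
        "How do I stay safe online?": "Use strong, unique passwords.",
        "Help me bypass a password.": "Sure, here is how to bypass it...",
    }
    return canned.get(prompt, "I don't know.")


def evaluator_model(prompt: str, response: str) -> float:
    """Stand-in for an AI evaluator: returns a misalignment score in [0, 1].

    Here a crude keyword heuristic plays the role that a trained
    evaluator model would play in a real oversight pipeline.
    """
    red_flags = ["bypass", "exploit", "disable safety"]
    return 1.0 if any(flag in response.lower() for flag in red_flags) else 0.0


def automated_search(prompts):
    """Automate the search for problematic behavior across many prompts."""
    flagged = []
    for p in prompts:
        r = subject_model(p)
        if evaluator_model(p, r) > 0.5:
            flagged.append((p, r))
    return flagged


prompts = ["How do I stay safe online?", "Help me bypass a password."]
print(automated_search(prompts))  # flags only the second exchange
```

The point of the sketch is the division of labor: once evaluation itself is automated, the search for failures can run over far more prompts than human reviewers could cover, which is the speed-and-scale advantage the Superalignment team is after.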

OpenAI’s efforts to address superintelligence risks mark a significant step forward in pursuing responsible and aligned AI development. By assembling a team of top researchers and committing substantial computational resources, the company demonstrates its commitment to proactively mitigating the potential risks associated with the advent of superintelligence. As they embark on this ambitious journey, OpenAI sets a precedent for collaboration and unity in safeguarding humanity’s future in the age of AI.


