

OpenAI Forms Team to Manage Risks of Superintelligent AI

OpenAI is taking a proactive approach to the potential risks of superintelligent artificial intelligence. The new team, co-led by OpenAI Chief Scientist Ilya Sutskever and alignment head Jan Leike, aims to ensure that any superintelligent AI the company develops is safe and aligned with human values.

Dedicated Compute Power for Automated Alignment Researcher

OpenAI has committed 20 percent of its compute to the effort, with the goal of building an automated alignment researcher: an AI system that itself helps verify that superintelligent AI remains safe and aligned with human values. While the goal is ambitious, OpenAI believes a focused, concerted effort can solve the problem.

Challenges in AI Regulation

As governments weigh regulation of the AI industry, organizations like OpenAI are raising awareness of the potential risks of superintelligent AI. However, more immediate concerns, such as labor disruption, misinformation, and copyright, demand policymakers' attention today, not just in the future.