OpenAI Launches $10M Superalignment Fast Grants for AI Safety Research
In a groundbreaking move to address the potential risks associated with superhuman artificial intelligence (AI) systems, OpenAI, in collaboration with Eric Schmidt, has announced the Superalignment Fast Grants program. Valued at $10 million, this initiative aims to fund and support technical research focusing on the alignment and safety of future AI systems, particularly those surpassing human intelligence.
As OpenAI envisions the possibility of superintelligence emerging within the next decade, the organization recognizes the need for innovative approaches to ensure the safety of such advanced AI. Unlike current AI systems, which are aligned using reinforcement learning from human feedback (RLHF), superhuman AI presents unprecedented challenges. With capabilities beyond human comprehension, these systems demand new strategies to guarantee alignment and safety.
One of the critical challenges is that humans cannot fully comprehend and evaluate the complex behaviors generated by superhuman models. For instance, when confronted with a million lines of intricate code, humans may struggle to determine whether the code is safe or poses risks. Conventional alignment techniques like RLHF, which depend on human supervision, may not scale to such advanced AI capabilities. The fundamental question arises: how can humans effectively steer and trust AI systems that surpass their intelligence?
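To see concretely why human supervision is the load-bearing step, consider the preference-learning stage at the core of RLHF: a reward model is trained so that responses human raters preferred score higher than responses they rejected. The snippet below is a toy sketch of that step, not OpenAI's implementation; the embeddings, dimensions, and model architecture are placeholders chosen only to make the idea runnable.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: scores an already-embedded response with a scalar."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-ins for embedded (preferred, rejected) response pairs from human raters.
preferred = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Bradley-Terry-style preference loss: push preferred scores above rejected scores.
loss = -nn.functional.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
print(f"preference loss: {loss.item():.3f}")
```

Every gradient step here is anchored to a human judgment of which answer is better; superalignment research asks what can replace that anchor once humans can no longer reliably make the call.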
OpenAI’s Superalignment project seeks to engage top researchers and engineers globally to address this critical challenge. In partnership with Eric Schmidt, the organization has allocated $10 million in grants to support technical research that ensures superhuman AI systems remain aligned and safe.
The Superalignment Fast Grants program offers grants ranging from $100,000 to $2 million for academic labs, non-profits, and individual researchers. Additionally, a one-year $150,000 OpenAI Superalignment Fellowship, comprising a $75,000 stipend and $75,000 in compute and research funding, is available for graduate students. Notably, the grants are open to researchers regardless of their prior experience in the alignment field, encouraging newcomers to contribute to advancing AI safety.
The application process is straightforward, and OpenAI has promised responses within four weeks of the February 18 application deadline. The organization is particularly interested in funding work on weak-to-strong generalization, interpretability, and scalable oversight, along with related areas such as honesty, chain-of-thought faithfulness, adversarial robustness, and evaluations.
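Weak-to-strong generalization, the first direction on that list, studies whether a strong model trained only on labels from a much weaker supervisor can end up outperforming that supervisor, serving as a proxy for humans supervising superhuman models. The following is a self-contained toy sketch of that setup on synthetic data; it is an illustration of the idea, not code from OpenAI's grant materials, and every model and dataset in it is made up.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "ground truth" task: the label is the sign of a linear function of the input.
X = torch.randn(2000, 20)
true_w = torch.randn(20)
y_true = (X @ true_w > 0).float()

weak = nn.Linear(20, 1)                                                  # small "weak supervisor"
strong = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))   # larger "strong student"

def train(model, inputs, labels, steps):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        loss = nn.functional.binary_cross_entropy_with_logits(model(inputs).squeeze(-1), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

train(weak, X[:200], y_true[:200], steps=50)               # the weak supervisor sees little data
weak_labels = (weak(X).squeeze(-1) > 0).float().detach()   # its imperfect labels for everything
train(strong, X, weak_labels, steps=200)                   # the strong student learns only from weak labels

accuracy = lambda m: ((m(X).squeeze(-1) > 0).float() == y_true).float().mean().item()
print(f"weak supervisor: {accuracy(weak):.2f}  strong student: {accuracy(strong):.2f}")
```

The open problem the grants target is understanding when and why the student recovers more of the intended task than its supervisor, rather than simply inheriting the supervisor's mistakes, and making that behavior reliable at frontier scale.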
The Superalignment Fast Grants program reflects OpenAI’s commitment to collaborative efforts in addressing the technical challenges posed by superhuman AI systems. By supporting innovative research in these areas, OpenAI and Eric Schmidt aim to foster advancements that will contribute to the responsible development and deployment of future AI technologies.
OpenAI Suspends ByteDance’s Account Over GPT Use
In a recent development within the artificial intelligence (AI) community, OpenAI, the company behind the influential GPT models, has suspended the account of ByteDance, the parent company of the widely popular TikTok app. The decision comes in response to reports that ByteDance had quietly used OpenAI’s technology while developing its own proprietary AI model, “Project Seed.”
This maneuver by ByteDance has stirred tensions within the AI community, as it breaches generally accepted norms in the field and directly violates OpenAI’s terms of service. OpenAI’s policies explicitly state that the output from its models cannot be used “to develop any artificial intelligence models that compete with our products and services.” ByteDance reportedly purchased its OpenAI access through Microsoft, OpenAI’s principal backer, which compounds the gravity of the violation.
ByteDance is already under scrutiny in the United States over its Chinese government affiliations and concerns surrounding TikTok, and the revelation of its unauthorized use of OpenAI’s technology adds further complexity to its operations there.
OpenAI spokesperson Niko Felix emphasized the organization’s commitment to ethical usage of its technology, stating, “All API customers must adhere to our usage policies to ensure that our technology is used for good.” While acknowledging that ByteDance’s utilization of the API was minimal, OpenAI has suspended their account pending further investigation. Felix added, “If we discover that their usage doesn’t follow these policies, we will ask them to make the necessary changes or terminate their account.”
The revelation unfolds against the backdrop of the heated AI race in China, which ByteDance joined in June. The tech giant, best known for TikTok, began internal testing of its own AI chatbot, positioning it as China’s answer to ChatGPT, OpenAI’s dominant AI language model. The move places ByteDance in direct competition with other tech giants such as Alibaba and Baidu, all vying for supremacy in the rapidly expanding Chinese AI market.
The suspension of ByteDance’s OpenAI account highlights the significance of adhering to ethical AI practices and respecting the terms of service laid out by organizations contributing to the advancement of AI technology. As the investigation unfolds, the AI community will be closely watching the implications of this incident on the broader landscape of AI collaborations and ethical considerations.