US Vice President Kamala Harris met with the CEOs of leading AI companies to discuss the risks associated with AI. Notably, Meta CEO Mark Zuckerberg was not in attendance. Nine top Biden administration advisers in science, national security, policy, and economics also attended the meeting.
Harris calls for research into AI technology
Before the meeting, the White House announced several AI-related initiatives, including funding for AI research institutes, a draft government policy on AI use, and public evaluations of AI systems.
The meeting focused on three key areas: the transparency of AI systems, the importance of evaluating and validating their safety, and securing them against malicious actors.
It was agreed that more work is needed to develop appropriate safeguards and protections for AI, and the CEOs committed to engaging with the White House to ensure that Americans can benefit from AI innovation.
Harris did not specify what safeguards are required or what engagement with the government will involve. The Biden administration also raised national security concerns related to AI, specifically mentioning cybersecurity and biosecurity, again without providing further details.
The government announced $140 million in funding to establish seven new national AI research institutes, bringing the total to 25 across the country.
The institutes aim to bolster America’s AI research and development infrastructure and to drive breakthroughs in areas such as climate, agriculture, energy, public health, education, and cybersecurity.
The government will release a draft policy on AI regulation
AI development firms, including Anthropic, Google, Microsoft, OpenAI, NVIDIA, Hugging Face, and Stability AI, will participate in publicly evaluating AI systems on a platform from AI training firm Scale AI at the hacker convention DEFCON in August.
Additionally, Harris said the government will release a draft policy on how it plans to use AI, which will be open for public comment over the summer. The policy is intended to serve as a model for state and local governments in their procurement and use of AI.
The meeting between the US government and leading AI company CEOs underscores the need for transparency, safety, and security in AI systems. As the use of AI has spread across fields, concerns about its impact on society and individuals have grown. The administration's initiatives to address national security concerns and fund AI research institutes aim to drive breakthroughs in various fields while ensuring appropriate safeguards and protections are in place.
Transparency is essential to building public trust in AI, as the public remains skeptical of the technology and how it will be used. Developing appropriate safeguards and protections is crucial to ensuring AI is used ethically and responsibly. The government's initiatives, such as the draft policy on AI use, demonstrate a commitment to responsible AI development. Likewise, the public evaluation of AI systems at DEFCON in August will subject those systems to rigorous testing and scrutiny.