Nvidia CEO Jensen Huang has stated that developing more advanced artificial intelligence is the best way to combat AI abuse. Speaking at an event hosted by the Bipartisan Policy Center in Washington, Huang emphasized that AI technology is both a significant threat and a crucial tool in the ongoing battle against misinformation and data manipulation.
AI threats to U.S. elections
As the U.S. approaches the federal elections in November, concerns surrounding the misuse of AI are escalating. Huang pointed out that AI can generate fake information at a rapid pace, which can distort public perception and influence democratic processes. A survey conducted by the Pew Research Center revealed that nearly 60 percent of Americans are deeply concerned about AI’s potential role in spreading false information about candidates.
This anxiety spans both major political parties, with around two in five respondents believing that AI will predominantly be used for harmful purposes during the elections. Huang urged the U.S. government to adopt AI technologies to counter these threats. He advocated for every government department, especially those related to energy and defense, to leverage AI capabilities, highlighting the necessity of a government-led AI initiative.
The energy demands of AI
A substantial increase in energy consumption accompanies the rapid evolution of AI. According to Huang, AI data centers currently account for approximately 1.5 percent of global electricity use, which is expected to rise significantly. He predicts that future data centers may require 10 to 20 times more energy than what is utilized today.
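Taken at face value, the figures Huang cites imply a striking share of today's electricity supply. A minimal back-of-envelope sketch (illustrative only; it assumes the 10–20× multiplier applies to the current ~1.5 percent share and holds total global consumption constant):

```python
# Back-of-envelope projection of AI data-center electricity demand,
# using the figures quoted above. Illustrative assumption: the 10-20x
# growth applies to the current ~1.5% share of global electricity use.

current_share = 0.015  # AI data centers: roughly 1.5% of global electricity today

for multiplier in (10, 20):
    projected_share = current_share * multiplier
    print(f"{multiplier}x today's demand -> {projected_share:.0%} "
          f"of today's global electricity use")
```

On these assumptions, a 10–20× increase would correspond to 15–30 percent of today's global electricity use, which helps explain why siting data centers near otherwise stranded energy sources is on the table.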
Huang explained that energy consumption will further escalate as AI models learn from one another. He proposed constructing data centers near abundant but hard-to-transport energy resources to address these demands. This strategy would allow AI systems to operate efficiently in remote locations while ensuring adequate energy supply.
Regulatory challenges facing AI development
As AI technology progresses, the conversation surrounding its regulation becomes increasingly important. California Governor Gavin Newsom recently vetoed SB 1047, a bill designed to impose mandatory safety measures on AI systems. The proposed legislation faced strong opposition from major tech companies, including OpenAI, Meta, and Google. Newsom expressed concern that the bill would hinder innovation while failing to adequately protect the public from genuine threats posed by AI.
The bill, authored by Democratic Senator Scott Wiener, sought to require AI developers to incorporate a “kill switch” into their models and to create plans for mitigating extreme risks. Additionally, it would have held developers liable for any ongoing threats posed by their systems, such as a potential AI takeover of the power grid. The veto reflects the continuing struggle to balance safety with innovation in the AI landscape.
Jensen Huang’s remarks underscore AI’s dual nature as both a risk and a potential solution. As the technology evolves, stakeholders in the public and private sectors must address the associated challenges and opportunities presented by AI advancements.