When OpenAI's ChatGPT took the world by storm last year, it caught many power brokers in both Silicon Valley and Washington, DC, by surprise. The U.S. government now wants advance warning of future AI breakthroughs involving large language models, the technology behind ChatGPT.
The Biden administration is preparing to use the Defense Production Act to force tech companies to notify the government when they use large amounts of computing power to train AI models. The rule could come into effect as early as next week.
This new requirement will give the U.S. government access to sensitive information about some of the most closely guarded projects inside OpenAI, Google, Amazon, and other technology companies competing in AI. Companies will also have to provide information about the safety testing being done on their new AI creations.
OpenAI has been tight-lipped about how much work has been done on the successor to its current top-of-the-line product, GPT-4. The U.S. government may be the first to know when work and safety testing on GPT-5 actually begins. OpenAI did not immediately respond to a request for comment.
“We're using the Defense Production Act, which is the authority that we have because of the president, to do a survey requiring companies to share with us every time they train a new large language model, and share with us the results—the safety data—so we can review it,” U.S. Commerce Secretary Gina Raimondo said Friday at an event at Stanford University's Hoover Institution. She did not say what action the government might take after receiving information about an AI project, adding that more details will be announced next week.
The new rules are being implemented as part of a sweeping White House executive order issued last October. The executive order gave the Commerce Department a January 28 deadline to devise a scheme requiring companies to notify U.S. authorities of details about powerful new AI models in development. The order says those details must include the amount of computing power being used, information on the ownership of the data being fed to the model, and details of safety testing.
The October order calls for work to begin on defining when AI models should trigger reporting to the Commerce Department, but it sets an initial threshold of 100 septillion (10²⁶) floating-point operations, or flops, with a threshold 1,000 times lower for large language models working on DNA sequencing data. Neither OpenAI nor Google has disclosed how much computing power was used to train their most powerful models, GPT-4 and Gemini, respectively, but a Congressional Research Service report on the executive order suggests that 10²⁶ flops is slightly beyond what was used to train GPT-4.
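The threshold arithmetic above can be sketched as a short illustrative check. This is not an official tool, just a hedged example of the reporting rule as described: a general threshold of 10²⁶ flops and one 1,000 times lower for models trained on biological sequence data. The function name and the sample run sizes are hypothetical.

```python
# Illustrative sketch of the executive order's reporting thresholds,
# as described in news reports; not an official or definitive tool.

GENERAL_THRESHOLD = 10**26   # flops: general-purpose AI models
BIO_THRESHOLD = 10**23       # flops: 1,000x lower for DNA-sequence models


def must_report(total_flops: int, biological: bool = False) -> bool:
    """Return True if a training run crosses the relevant reporting threshold."""
    threshold = BIO_THRESHOLD if biological else GENERAL_THRESHOLD
    return total_flops >= threshold


# A hypothetical run of 2e25 flops falls under the general threshold,
# but the same run on DNA-sequence data would have to be reported.
print(must_report(2 * 10**25))                    # False
print(must_report(2 * 10**25, biological=True))   # True
```

The factor-of-1,000 gap means a biological-sequence model can trigger reporting with far less compute than a general-purpose model like GPT-4.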
Raimondo also confirmed that the Commerce Department will soon implement another requirement of the October executive order: cloud computing providers such as Amazon, Microsoft, and Google must notify the government when foreign companies use their resources to train large language models. Foreign projects must be reported when they cross the same initial threshold of 100 septillion flops.