National Cyber Warfare Foundation (NCWF)

Critics warn America’s ‘move fast’ AI strategy could cost it the global market


2026-02-10 00:39:31
milo
Blue Team (CND)

As the U.S. promises a light-touch approach to AI regulation, businesses and other stakeholders must work out the rules of the road for themselves.


The post Critics warn America’s ‘move fast’ AI strategy could cost it the global market appeared first on CyberScoop.



The Trump administration has made U.S. dominance in artificial intelligence a national priority, but some critics say a light-touch approach to regulating security and safety in U.S. models is making it harder to promote adoption in other countries.





White House officials have said since taking office that Trump intended to move away from predecessor Joe Biden’s emphasis on AI safety. Instead, they would allow U.S. companies to test and improve their models with minimal regulation, prioritizing speed and capability. 





But this has left other stakeholders, including U.S. businesses, to work out the rules of the road for themselves.





Camille Stewart Gloster, a former deputy national cyber director in the Biden administration, now owns and manages her own cyber and national security advisory firm. There are some companies, she said, who “recognize that security is performance.”





This means putting governance and security guardrails in place so the AI behaves as intended, access is tightly restricted, and inputs and outputs are monitored for unsafe or malicious activity that could create legal or regulatory risk.





“Unfortunately [there are] a small amount of organizations that realize it at a real, tangible ‘let’s put the money behind it’ level, and there are a number of small and medium organizations, and even some larger ones, that really just want to move fast and don’t quite understand how to strike that balance,” she said Monday at the State of the Net conference in Washington D.C.





Stewart Gloster said she has seen organizations inadvertently put users at risk by giving AI agents too much authority and too little oversight, leading to disastrous results. One company she advised was “effectively DDoSing their customers” with its AI agent, which was “flooding their customers with notifications to the point where they were upset, but they could not stop it, because cutting off the agent meant cutting off a critical capability.”





The Trump administration and Republicans in Congress have made global AI leadership a top national priority. They argue that new regulations for the fast-growing AI industry would inhibit innovation and make U.S. tech companies less competitive. 





Some worry that the GOP’s zeal to boost U.S. AI companies may backfire. Michael Daniel, former White House Cybersecurity Coordinator during the Obama administration, said artificial intelligence regulations in the U.S. remain woefully inadequate to gain broad adoption in other parts of the world, like Europe, where regulatory safety and security standards for commercial AI models are often higher.





“If we don’t take action here in the United States, we may find ourselves…being forced to play the follower, because not everybody will wait for us,” said Daniel, “And I would say that geopolitics are making that even less likely, and it’s making it more likely that others will move faster and more sharply than the U.S. will.”





One recent example: Elon Musk’s xAI is currently under investigation by multiple regulators at the state and international level after its AI tool Grok generated millions of nonconsensual deepfake nudes, sexualized photos and child sexual abuse material from real user photos. Multiple countries have threatened to ban or restrict the use of X and Grok over the episode.





Musk himself has at times endorsed Grok’s propensity for producing controversial or objectionable content, promoting features like “spicy mode” that make the model more offensive and vulgar, including by generating nude deepfakes from photos of real individuals.





AI researcher Emily Barnes noted that Grok’s Spicy Mode “sits squarely in a zone where intellectual property jurisprudence, platform governance and human rights frameworks have yet to align.”

“The result is a capability that can mass-produce non-consensual sexual images at scale without triggering consistent legal consequences” in the U.S., she wrote.





Daniel is part of a growing chorus of U.S. policymakers – mostly Democrats – who have argued over the past year that strong security and safety guardrails will help U.S.-made AI models compete on the world stage, not hurt them.





Last year, Sen. Mark Kelly, D-Ariz., urged that similar security and safety protections become a core part of how U.S. AI tools are built “not only to ensure the technology is safe for businesses and individuals to use and isn’t leveraged in widespread discrimination or scamming, but also because they can serve as a key differentiator between the U.S. and other competitors like China and Russia.”





“If we create the rules, maybe we can get our allies to work within the system that we have and we’ve created,” Kelly added. “I think we’ll have leverage there, I hope we do.”

Stewart Gloster said that in the absence of direction or regulation by the federal government, industry is finding that any rules of the road around ensuring security and reliability will have to come from companies looking to protect their own brand, partnering with other, smaller regulatory stakeholders.





“There are a lot of organizations that are contending with this new role that they must play as [the federal] government pushes down the responsibility of security to state government and as they look to industry to drive what innovation looks like,” she said.





While businesses are starting to have those conversations in trade associations and consortia to brainstorm alternatives, “this is not happening generally,” she said.





What’s more likely is that legal liability for AI developers, organizations and individuals around AI security and privacy failures will be shaped through lawsuits and the court system.





“That’s probably not the way we want it to happen, because bad facts make bad law, which means if it’s litigated in the courts, we’re likely to see a precedent that is very tailored to that set of facts, and that will be a really tough place for us to operate from,” she said.





Source: CyberScoop
Source Link: https://cyberscoop.com/trump-ai-policy-global-adoption-safety-regulation-critics/





Copyright 2012 through 2026 - National Cyber Warfare Foundation - All rights reserved worldwide.