
The Unpredictable AI Plans of the Trump Administration: What’s on the Horizon


Trump’s surprising AI agenda: What to expect next

President-elect Donald Trump has telegraphed big changes to the nation’s all-important AI strategy, many of which are expected to be implemented immediately after his inauguration in January. 

But while some of Trump’s plans are predictable, as part of an effort to make the U.S. the world’s leader in the fast-emerging technology, others are still a mystery, experts told Fortune. 

Part of the reason is that AI policy is complex. And because AI is such a new technology, officials are still trying to figure it out. “Nobody has clearly laid out a perfect AI regulation strategy, because, frankly, there probably isn’t one, we’re still so early in this innovation cycle,” said Aaron Levie, CEO of cloud storage company Box.

Another wildcard is the chorus of voices advising Trump on technology and AI policy, including billionaire Elon Musk, who campaigned for Trump and contributed over $100 million to a pro-Trump political action committee. Who Trump will ultimately choose to listen to, among the conflicting agendas, is unknown. “Given that there are so many voices in that room and so many powerful men with egos, how is that going to work out?” said Chloe Autio, an AI policy consultant who works with AI companies and government.

Still, Trump has sent some very clear signals about what he’ll do about AI. The most obvious, experts agree, is that he’ll make good on his promise to repeal President Joe Biden’s year-old executive order aimed at making AI safe and secure.

The order sets safety and privacy standards for AI, and promotes its ethical use. But the 2024 Republican platform called the order “dangerous,” saying that it “hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.” 

In general, Trump will likely pick up where his first administration left off in January 2020, when it issued guidance to federal agencies about AI. The memo called on the government to reduce barriers to AI development and adoption and avoid regulations that hamper innovation and growth, said Adam Thierer, a senior fellow at the R Street Institute, a center-right think tank in Washington, D.C.

AI safety on the chopping block?

One thing that may be on the chopping block is the AI Safety Institute (AISI). The executive order directed the Department of Commerce to create the institute, housed within the National Institute of Standards and Technology, to evaluate the risks that the most advanced artificial intelligence poses to national security, public safety, and individual rights. 

Adam Aft, the lead attorney in Baker McKenzie’s North America technology transactions group, with a focus on AI, said the safety institute is among the elements of Biden’s executive order that are most likely to be killed. And since Trump has said he would repeal the order, it would likely be one of the first and easiest changes.

However, there are many supporters inside and outside government who don’t want the AISI to vanish, said Thierer. A group of tech industry players and think tanks have been pushing Congress to make the AISI permanent before the end of the year, and before Trump takes office.

If AISI survives, Trump could appoint new leaders to it who, in a twist, could be among those who fear AI is a long-term risk to humanity. Among those who have talked about the dangers is Musk, who is now in a position to influence Trump’s AI policies and his picks for the AISI’s leadership. “Trump could turn to Musk and say, ‘Who do you want to bring in?’” said Thierer. “And that’s going to be a really interesting moment.”

Open source AI: Friend or foe?

Another big question is Trump’s position on open source AI, or AI tools and models available for anyone to use, modify, and distribute. Supporters of open source AI, which includes models from Meta, Mistral, and Musk’s xAI, describe it as a counterbalance to AI from Big Tech companies like OpenAI, Anthropic, and Google, which typically keep their AI models closed and proprietary.

But there is also a strong push, driven by national security concerns, to block unfriendly nations from getting access to advanced AI by regulating AI exports and limiting cybersecurity vulnerabilities. For example, Chinese researchers reportedly developed an AI model for military use by building on Meta’s open source model, Llama.

“That is going to be a high-level cat fight all the way up,” Thierer said about the coming debate within the Trump administration about how to regulate open source AI.

Autio pointed out that JD Vance, Trump’s vice president-elect, has previously supported open source AI development. “How do we reconcile that? I think it will be a big question like who will be the loudest voice in [Trump’s] ear when it comes to figuring out some of these very deeply substantive, thorny issues,” she said.  

AI is also an indirect consideration when it comes to Trump’s plan to increase tariffs on products imported from countries like China. It was a core part of his campaign, intended to encourage U.S. manufacturing.

But tariffs could increase costs for hardware that is critical for AI, such as chips, many of which are manufactured abroad. They may also disrupt the supply chains of tech companies and put U.S. businesses at a competitive disadvantage to companies in Asia and Europe, due to higher component costs, retaliatory tariffs, or foreign firms that can undercut on price. “We’re hearing from people, across the board, the possibly unintended impacts that might have on research and development in this space,” said Danielle Benecke, global head of law firm Baker McKenzie’s machine learning practice.

You can also expect pushback on so-called woke AI, Thierer said, using a term for AI that is considered too left-leaning. Trump could use an executive order to pressure tech companies to disclose or revise algorithms deemed politically biased, or establish guidelines or oversight that review algorithms for bias, ensuring they do not favor one political viewpoint over another.

Previously, Musk has attacked OpenAI and Google, claiming they are influenced by a “woke mind virus.” For example, in February, when Google’s Gemini chatbot generated historically inaccurate images, such as Black Nazis and Vikings, Musk cited it as evidence of Google’s AI promoting what he viewed as an excessively “woke” perspective.

“Conservatives, since the time Trump left office and his de-platforming on X, have been very fired up about what they regard as algorithmic bias or discrimination,” Thierer said. “I’ve pushed back myself kind of aggressively against that, but the bottom line is they feel it’s very real, and it made for a strong shift by MAGA conservatives against so-called woke tech issues.” 

Any efforts by Trump to regulate or censor what AI produces, however, could face legal challenges under the First Amendment, which guarantees free speech. Even so, such efforts could have a chilling effect on AI research or adoption, with businesses pulling back on developing or deploying AI systems if they face unpredictable legal consequences based on perceived social or political bias.

Division in Silicon Valley

Much of what Trump ultimately does will depend on who advises him on AI. In addition to Musk, there’s venture capitalist Marc Andreessen, investor and podcaster David Sacks, and Sequoia Capital’s Shaun Maguire. Jacob Helberg, a senior adviser to the CEO of software company Palantir, is another who may have Trump’s ear.

Trump’s tech supporters are willing to work closely with the government on national security issues to counter China, Thierer said. It’s a big change from recent years when Big Tech largely balked at allying with Washington. “This is a very different voice from Silicon Valley than in the past,” Thierer said.

The U.S. political divide also risks playing out among career government employees working on AI technology or policy issues. Some may decide to quit if they disagree with Trump’s policies while recruiting replacements may be made more difficult, said Dr. Rumman Chowdhury, a member of the U.S. Department of Homeland Security’s AI Safety and Security Board as well as a U.S. Science Envoy for AI for the State Department. “There are thoughtful, hardworking and kind people in government who are about to be in a difficult situation, and I have every sympathy for the tough decisions they are going to have to make,” she said.

No matter what happens, Box’s Levie, for one, said he’s more optimistic about Trump’s future AI policy than he would have been during Trump 1.0. It boils down to what he considers to be more knowledgeable people in his orbit now. “Trump is surrounded by more tech-centric folks, like Elon, that I think are directionally aligned with where I see a lot of the most important technology innovations going, whether that’s AI, EVs or energy production.”

