
California Leads in AI Regulation, Despite Governor Newsom’s Uncertainty on SB1047

California is leading the way on AI regulation—but Governor Gavin Newsom is still unsure about SB1047

Hello and welcome to Eye on AI. In this edition…California takes the reins on AI regulation, LinkedIn quietly starts training its AI on user data, and new research shows how generative AI is guzzling up water and electricity on a massive scale. 

California Governor Gavin Newsom is going to need a bigger desk to accommodate all the AI bills the state legislature has sent to his office. This week alone, Newsom signed no fewer than five AI bills into law—three aimed at election deepfakes and two to protect the digital likeness of performers—and he still has 38 in front of him, he told Salesforce CEO Marc Benioff at the company’s Dreamforce conference in San Francisco.


While the fate of the most significant bill of them all, SB 1047, is still very much up in the air, the bills enacted this week are set to have real impact. 

California targets AI election deepfakes

Perhaps the most interesting of the AI bills enacted in California this week is AB 2655, which requires large platforms to remove or label deceptive election-related content that was digitally altered or created during specified periods before and after elections. It also mandates that platforms give California residents a way to report such content, and it authorizes various parties (candidates, elected officials, elections officials, the Attorney General, and district or city attorneys) to seek court orders against large online platforms that fail to comply, with such filings given precedence in court.

It’s the first law in the country to place such an onus on social media platforms, which are accustomed to punting responsibility for misinformation. (At least in the U.S.; in Europe, laws already require the biggest social media companies to rapidly take down illegal content, including misinformation in some cases.) The tech companies are likely to challenge the new California law in court, but critics say it’s about time they were forced to do more.

The next law, AB 2839, referred to in a press release as an “emergency measure,” prohibits any person or entity from knowingly distributing certain deceptive content within 120 days of an election in California and, in specified cases, 60 days after an election. It’s not the first bill of this type (the U.S. Senate passed a similar one in May that has since stalled), but it’s the first to actually go into effect in the lead-up to this November’s election.

Finally, AB 2355 requires that political ads generated or substantially altered using AI carry a disclosure alerting viewers to the use of AI.

Another win for SAG-AFTRA

Tackling another hot-button AI issue, AB 2602 and AB 1836 address concerns raised about AI by members of the screen actors’ union during strikes last year. Now, the contract provisions they fought for—and won—are officially law.

AB 2602 requires contracts to specify the use of AI-generated digital replicas of a performer’s voice or likeness, giving performers more control. The law also requires that performers be professionally represented in negotiating such contracts. AB 1836 protects the likeness of deceased performers, prohibiting commercial use of digital replicas of their likeness without prior consent of their estates.

Newsom still concerned about SB 1047

But despite signing all these bills, Governor Newsom said he’s still mulling over SB 1047, the controversial bill that seeks to prevent what it terms “catastrophic” harms from the most powerful AI models. The state’s lawmakers overwhelmingly passed the bill in late August. But it’s split the AI community, is opposed by many in Silicon Valley, and would represent the most sweeping AI law passed in the U.S. to date. 

“We’ve been working over the last couple years to come up with some rational regulation that supports risk-taking, but not recklessness. That’s challenging now in this space, particularly with SB 1047, because of the sort of outsized impact that legislation could have, and the chilling effect, particularly in the open source community,” said Newsom onstage during the conversation with Benioff on Tuesday, according to TechCrunch.

Newsom has until September 30 to sign or veto bills, giving him about two weeks to make up his mind. He can also choose to do neither, in which case SB 1047 would become law without his signature.

The bigger picture

During his chat with Benioff, Governor Newsom also criticized the federal government for having “failed to regulate AI”—a feeling shared by many.

California is, of course, a main hub for the development and commercialization of generative AI. But the passage of more state laws continues the fractured approach to AI and data regulation in the U.S. that has made life more complicated for users and companies alike. Hopefully, the new California laws will inspire Congress to finally stop dragging its feet on the many stalled AI bills that have been sitting on Capitol Hill all year.

And with that, here’s more AI news. 

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS

LinkedIn began training AI on user data by default before updating its terms of service. The company confirmed the move to 404 Media, saying it plans to update its terms of service “shortly.” LinkedIn quietly published a help center post about opting out of AI training last week but did not notify users of the change. Users yesterday began noticing a new setting indicating that the company was using their data to improve its generative AI, causing backlash over the lack of transparency surrounding the rollout and the automatic opt-in. 

The U.S. will host a global summit on AI safety in November. Set to be held in San Francisco on November 20 and 21, the meeting—the first official gathering of the International Network of AI Safety Institutes—aims to jumpstart technical collaboration before the AI Action Summit in Paris in February. Technical experts from each member’s AI safety institute are expected to share knowledge to advance priority areas of work, such as the secure and trustworthy development of generative AI. Members include Australia, Canada, the E.U., France, Japan, Kenya, South Korea, Singapore, Britain, and the U.S. You can read more from Reuters.

ChatGPT isn’t getting a new feature to allow it to proactively message users—at least not any time soon. After users began sharing stories online this week describing how ChatGPT seemingly messaged them unbidden and speculating that a new feature is being tested, an OpenAI spokesperson told Eye on AI the company “addressed an issue” where it appeared that ChatGPT was starting new conversations. “This issue occurred when the model was trying to respond to a message that didn’t send properly and appeared blank. As a result, it either gave a generic response or drew on ChatGPT’s memory,” OpenAI said.

FORTUNE ON AI

Mark Zuckerberg says Europe needs more consistent AI regulation—and even his privacy nemesis agrees —by David Meyer

AI video startup Runway scores a first-of-its-kind deal with John Wick studio Lionsgate —by David Meyer

AI could soon be beyond our control—and the scientists who created it are worried —by Marco Quiroz-Gutierrez

Mars to spend $1 billion on tech-focused hiring, AI, and other digital initiatives to bolster its pet food division —by John Kell

AI CALENDAR

Sept. 25-26: Meta Connect in Menlo Park, Calif. 

Oct. 22-23: TedAI, San Francisco

Oct. 28-30: Voice & AI, Arlington, Va.

Nov. 19-22: Microsoft Ignite, Chicago, Ill.

Dec. 2-6: AWS re:Invent, Las Vegas, Nev.

Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024 in Vancouver, British Columbia

Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)

EYE ON AI NUMBERS

1,468

That’s how many milliliters of water GPT-4 uses at a data center in Washington state to generate a 100-word email, according to data published in the Washington Post. That’s a little more than six eight-ounce bottles of water.

The same email would use 925 milliliters of water at a data center in Arizona, 464 milliliters in Illinois, and 235 milliliters in Texas. Even where the amount of water used is significantly less, the impact is enormous. Data centers are also straining power grids: generating a 100-word email once a week for a year requires 7.5 kilowatt-hours of electricity, equal to the power consumed by 9.3 D.C. households for one hour, according to the Post.
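Those figures lend themselves to a quick back-of-envelope check. Here’s a minimal sketch in Python: the 1,468-milliliter and 7.5-kilowatt-hour inputs come from the Post’s reporting cited above, while the roughly 237 milliliters per eight-ounce bottle and the implied per-household power draw are conversions made here for illustration, not numbers from the article.

```python
# Back-of-envelope check of the Post's figures (a sketch: the mL-per-bottle
# constant and the implied household draw are conversions made here, not
# numbers reported in the article).

ML_PER_8OZ_BOTTLE = 236.6  # 8 fl oz at ~29.57 mL per U.S. fluid ounce

# Water used to generate one 100-word email, by data center location (mL)
water_ml = {"Washington": 1_468, "Arizona": 925, "Illinois": 464, "Texas": 235}
for state, ml in water_ml.items():
    print(f"{state}: {ml} mL ≈ {ml / ML_PER_8OZ_BOTTLE:.1f} eight-ounce bottles")
# Washington comes out to ≈ 6.2 bottles, i.e. "a little more than six"

# Electricity: 7.5 kWh per year of weekly emails, said to equal what
# 9.3 D.C. households consume in one hour
yearly_kwh = 7.5
implied_household_kw = yearly_kwh / 9.3  # average draw per household, in kW
print(f"Implied average D.C. household draw: {implied_household_kw:.2f} kW")  # ≈ 0.81
```

That works out to roughly 800 watts per household on average, which is how the article’s one-hour comparison hangs together.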

Of course, many consumers and businesses are using generative AI to write a lot more than emails—and much more frequently than once a week. OpenAI recently said 200 million people now use ChatGPT each week. What’s more, generative AI models that create images and videos use even more resources to generate outputs. 

