The rise of powerful new generative artificial intelligence (GenAI) tools like OpenAI’s ChatGPT has sparked both excitement over the possibilities the technology can bring and concerns over its risks.
But while Congress kicks around how (or if) it can regulate the rapidly developing technology at the national level, AI companies are already facing a crackdown on multiple fronts that could shape guardrails with no action from Capitol Hill.
Almost as soon as ChatGPT’s large language model (LLM) became available to the public late last year, it was met with immediate concerns over privacy, copyright infringement and data protection, and it set off a slew of other legal alarm bells.
Within months, multiple federal agencies began saber-rattling over how they could use laws that were already on the books to take action against AI firms.
Then this week, the Federal Trade Commission (FTC) became the first to go on the record with an official move, opening a probe into OpenAI over whether its products have violated consumer protection or data privacy laws and threatening to fine the company.
Christine Lyon, global co-head of data privacy and security at law firm Freshfields, says she would not be surprised if other AI companies receive similar requests for information from the FTC, and that further action from other agencies, such as the Equal Employment Opportunity Commission (EEOC), and states could be coming.
Lyon says that once the FTC takes action, as it did with OpenAI, state attorneys general sometimes become interested as well, since they can invoke similar state consumer protection laws covering privacy and data security.
Lawsuits over AI began to fly even before the FTC’s investigation hit news. Comedian Sarah Silverman and two novelists filed a proposed class action against OpenAI and Facebook parent Meta earlier this week for allegedly using their content without permission to train chatbots.
“In our U.S. legal system where you’ve got court decisions sort of making rules, effectively, and the regulators like the FTC having enforcement actions that sort of de facto make rules, and then the legislation separately from that,” Lyon told FOX Business. “I think it will be interesting to see how those cases work out because there’ll be a lot of questions around what laws come into play and how, given that we don’t have laws that cover all the activities at issue.”
“I think the potential for generative AI is so huge, that there will be an appetite to be willing to fight for this, to be able to try to overcome these cases, to get them dismissed or settled,” she continued. “But I think that all the court actions will probably lead to more support for regulation.”
There’s no telling yet if or when Congress might take any action, but companies can soon expect to face regulations from overseas.
Lyon noted the European Union is racing ahead with its AI Act, which has to be on the radar of American AI firms because it has become increasingly difficult for U.S. companies to avoid EU regulation if they do business in any of the member countries or offer services to their citizens.
The longtime Silicon Valley attorney said it is very difficult to predict whether Congress could align on a regulatory framework for AI, noting that lawmakers have tried for over a decade to come up with a federal privacy law but have so far been unsuccessful.
“It’s an uphill battle,” Lyon said. “We need to have such careful thought about what is it? What are the harms, what are the benefits? How do we strike that balance, which may be very different in the U.S. than the way other countries look at this?”
She added, “Frankly, you look at the fact that many of the leading players in the space are U.S. companies, and I think that comes into play as well.”