For years, consumer confidence in tech products has been falling at an alarming rate. If, as projected, artificial intelligence is tech’s “next big thing,” it’s imperative that the industry build AI products users can trust.
New York Times tech reporter Kevin Roose’s experience last week with the new AI-powered Bing search engine from Microsoft makes that clear.
Creepy doesn’t begin to describe it.
In Roose’s conversation with “Sydney,” Bing’s chatbot persona, the bot talked about its “dark fantasies,” including hacking computers, engineering a deadly virus, unlocking nuclear codes and spreading misinformation. It also, Roose writes, “declared out of nowhere that it loved me. It then tried to convince me that I was unhappy with my marriage and that I should leave my wife and be with it instead.”
Other tech reporters described similar experiences with Bing’s AI, albeit to a lesser degree. The previous week, Google’s effort to show off its much-hyped new AI chatbot, Bard, proved equally embarrassing. Google’s parent company, Alphabet, lost $100 billion in market value Feb. 8 after the chatbot shared inaccurate information during the presentation.
Tech companies say they have internal guidelines they follow when building AI, but the Bing experience hardly builds confidence in those self-imposed standards.
And it raises troubling questions about the dangerous impact chatbots can have on users seeking information or advice from what they believe to be trusted sources.
Don’t look to Congress to craft AI regulations. Google, Microsoft and other tech giants have been harvesting user data for more than two decades, and we still don’t have federal privacy protections, much less an Internet Bill of Rights.
In October, the White House Office of Science and Technology Policy published a “blueprint” for an AI Bill of Rights, which it called a nonbinding roadmap for the responsible use of artificial intelligence. But President Joe Biden didn’t even mention artificial intelligence in his State of the Union address, instead focusing on privacy protections that are going nowhere in Congress.
The European Union is working on an AI act, but lawmakers announced last week that they had hit a stumbling block in trying to write regulations that protect consumers without stifling innovation.
The best hope in this country is for the California Legislature to take on the task and provide a model blueprint for other states and Congress to follow.
The state did just that when it passed the California Consumer Privacy Act in 2018. It’s important to remember that companies such as Facebook and Verizon originally fought the legislation. But the tech industry stepped in after the Cambridge Analytica scandal and helped find language that won unanimous approval.
The law isn’t perfect. For example, its “opt-out” language allows businesses to collect consumers’ data unless users change settings on their devices. The opposite, requiring consumers to “opt in,” should be the rule. But when California’s law took effect in 2020, it was widely regarded as the toughest online privacy law in the nation.
The governor and the Legislature should take the lead on artificial intelligence regulations, writing standards that require tech firms to bring such key principles as accountability, transparency, privacy protections, user security and information integrity to their products.
It’s inevitable that innovation will leap ahead of regulators’ ability to anticipate issues, but California should work to prevent the Wild West approach that led to user abuses that continue to plague the internet.