I agree with each and every one of those points, which could point us toward realistic limits to mitigate the dark side of AI. Things like disclosing what goes into training large language models like ChatGPT, and allowing opt-outs for those who don’t want their content to be part of what LLMs present to users. Rules against built-in bias. Antitrust laws that prevent a few giant companies from forming an artificial intelligence cabal that homogenizes (and monetizes) almost all of the information we receive. And protection of your personal information as used by those know-it-all AI products.
But reading that list also highlights the difficulty of turning uplifting suggestions into actual binding legislation. When you look closely at the points in the White House blueprint, it’s clear that they apply not just to AI but to almost everything in tech. Each one embodies a user right that has been infringed upon forever. Big Tech didn’t wait around for generative AI to develop biased algorithms, opaque systems, abusive data practices, and a lack of opt-outs. That’s table stakes, dude, and the fact that these problems surface in a discussion of a new technology only highlights the failure to protect citizens from the ill effects of our current technology.
During the Senate hearing where Altman spoke, senator after senator struck the same chord: We blew it when it came to regulating social media, so let’s not blow it with AI. But there’s no statute of limitations on legislating to prevent past abuses. Last time I looked, billions of people, including nearly everyone in America with the ability to poke a smartphone display, were still on social media, being bullied, having their privacy compromised, and being exposed to horrors. Nothing stops Congress from getting tough on those companies and, above all, passing privacy laws.
The fact that Congress hasn’t done so casts serious doubt on the prospects for an AI bill. No wonder some regulators, notably FTC Chair Lina Khan, aren’t waiting for new laws. She claims that current law already gives her agency sufficient jurisdiction to act on issues of bias, anticompetitive behavior, and invasion of privacy posed by new AI products.
Meanwhile, the difficulty of actually coming up with new laws, and the enormity of the work that remains to be done, was highlighted this week when the White House released an update on that AI Bill of Rights. It showed that the Biden administration is breaking a sweat devising a national AI strategy. But apparently the “national priorities” of that strategy are still up in the air.
Now the White House wants the general public, as well as tech companies and other AI stakeholders, to submit answers to 29 questions about the benefits and risks of AI. Just as the Senate subcommittee asked Altman and his fellow panelists to suggest a way forward, the administration is asking corporations and the public for ideas. In its request for information, the White House promises to consider “each comment, whether it contains a personal narrative, experiences with AI systems, or technical, legal, research, policy, or scientific materials, or other content.” (I am relieved to see that comments are not being solicited from large language models themselves, though I’m willing to bet that GPT-4 will make a big contribution despite this omission.)