A COLD WAR CLICHÉ IS A LESSON ON AI CHATBOTS
Clichés are widely denigrated in writing guides as indicative of laziness and a lack of original thinking. An ironic exception is the Russian proverb “Doveryai, no proveryai” (“Trust, but verify”) when weighing the merits, or otherwise, of AI chatbots.
The phrase was pushed to cliché status by President Ronald Reagan, who used it to excess in the talks with his then Soviet counterpart, Mikhail Gorbachev, aimed at ending the Cold War.
At a White House meeting, Reagan attributed their success to the fact that “We have listened to the wisdom in an old Russian maxim… ‘Dovorey no provorey’.” (He also apologised for his pronunciation.)
AI is transforming the world faster than the two presidents could ever have dreamt.
History and AI’s cheerleaders make it clear that trust should be minimal at best, and verify needs to be rigorous in the extreme.
The right balance between the two was inadvertently highlighted by Google’s chief scientist Jeff Dean. In a recent effort to stress the first part of the proverb, and in apparent hope that the second would be ignored, he claimed: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Sound familiar? It should, and with dread.
Think Mark Zuckerberg’s heedless founding motto for Facebook: “Move fast and break things.”
Google apparently feels the same. Its ethical researchers were barred from publishing papers critical of, or warning about, AI.
Microsoft released its AI chatbot despite cautions that the technology behind it had the potential to “degrade critical thinking and erode the factual foundation of modern society.”
Executives chose instead to focus on market share. In an internal memo, Microsoft technology executive Sam Schillace said it was an “absolutely fatal error in this moment to worry about things that can be fixed later.”
Is there a proverb along the lines of “Ignore, but profit”?
HISTORY TELLS ANOTHER STORY
As for “things that can be fixed later”, look to the history of climate change warnings.
The fossil fuel industry, major auto manufacturers and electric utilities knew as far back as the 1950s that fossil fuel products could lead to global warming with “dramatic environmental effects before the year 2050.”
So naturally, they put profit over responsibility and set about “denigrating climate models, mythologizing global cooling, feigning ignorance about the discernibility of human-caused warming…”
AI is touted as playing “a crucial role in developing electric vehicles by enhancing safety, improving efficiency, and reducing emissions.”
AI’s developers and manufacturers pay as much heed to the enormous environmental and human costs of mining the rare minerals needed for batteries, and for pretty much every device involved in AI, as the tobacco industry paid to the damage it knew smoking causes. That industry did its damnedest to demand public trust and block scientific verification. Or, as U.S. District Court Judge Gladys Kessler opined in 2006, it marketed its product with zeal, “with a single-minded focus on their financial success, and without regard for the human tragedy or social costs that success exacted.”
Those hell-bent on unfettered development of chatbots risk fitting the same mould. Among the problems the makers admit AI generates are misinformation, biases, and “synthetically generated photos, audio or video that are fake but look real”, quaintly called deepfakes.
Then there is “information that sounds plausible but is irrelevant, nonsensical or entirely false”. That’s labelled hallucinations, a disingenuous way of saying bull****.
Another applicable cliché for that is the GIGO rule: Garbage In, Garbage Out.
There’s an ongoing debate about whether advances in AI portend “a utopian future or a dangerous new reality, where truth is indecipherable from fiction.”
REALITY AND MORALITY
Nonetheless, executives from OpenAI, which created the viral chatbot ChatGPT, aver that “the bounds and defaults for AI systems” should be “democratically set” (whatever that means), while admitting “we don’t yet know how to design such a mechanism.”
For a clue on how to do so, those who want AI to shape the 21st century can learn something from the intelligence of the 18th, specifically that of French writer and philosopher Voltaire: “No problem can withstand the assault of sustained thinking.”
The moral case for caution with AI was made by Pope Francis, who noted that the positive potential of AI “will be realized only if there is a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly.”
I suspect that, like Reagan and Gorbachev, the Pope, who is widely held to be a moral compass, would put more faith in verify than in trust.
So should the legislators, and the public.
Astounding that ethical researchers in an organisation such as Google can be banned from publicising ethical observations that don’t suit the marketing aspirations of the company they work for.
“Move fast and break things” indeed! Where to next?
Where to next? One fears it could be along the lines of “to Hell in a handbasket”.