A COLD WAR CLICHÉ IS A LESSON ON AI CHATBOTS

Clichés are widely denigrated in writing guides as indicative of laziness and a lack of original thinking. An ironic exception is applying the Russian proverb “Doveryai, no proveryai” (“Trust, but verify”) when considering the merits, or otherwise, of AI chatbots.

The phrase was pushed to cliché status by President Ronald Reagan, who used it to excess in talks with his Soviet counterpart Mikhail Gorbachev as the two worked to end the Cold War.
At a White House meeting, Reagan attributed their success to the fact that “We have listened to the wisdom in an old Russian maxim… ‘Dovorey no provorey’.” (He also apologised for the pronunciation.)
AI is transforming the world faster than either president could ever have dreamt.
History, and AI’s cheerleaders themselves, make it clear that the trust should be minimal at best, and the verification rigorous in the extreme.
The proof of that ratio was inadvertently highlighted by Google’s chief scientist Jeff Dean. In a recent effort to stress the first part of the proverb, in the apparent hope that the second would be ignored, he claimed: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Sound familiar? It should, and with dread.
Think Mark Zuckerberg’s heedless founding motto for Facebook: “Move fast and break things.”
Google apparently feels the same. Its ethical-AI researchers were barred from publishing papers critical of, or warning about, AI.
Microsoft released its AI chatbot despite cautions that the technology behind it had the potential to “…degrade critical thinking and erode the factual foundation of modern society.”
Executives chose instead to focus on market share. In an internal memo, Microsoft technology executive Sam Schillace said it was an “absolutely fatal error in this moment to worry about things that can be fixed later.”
Is there a proverb along the lines of “Ignore, but profit”?

                                HISTORY TELLS ANOTHER STORY

As for “things that can be fixed later”, look to the history of climate-change warnings.
The fossil fuel industry, major auto manufacturers and electric utilities knew as far back as the 1950s that fossil fuel products could lead to global warming with “dramatic environmental effects before the year 2050.”
So naturally, they put profit over responsibility, and set about “denigrating climate models, mythologizing global cooling, feigning ignorance about the discernibility of human-caused warming…”
AI is touted as playing “a crucial role in developing electric vehicles by enhancing safety, improving efficiency, and reducing emissions.”
The developers and manufacturers pay as much heed to the enormous environmental and human costs of mining the rare minerals needed for the batteries, and for pretty much every device involved in AI, as the tobacco industry paid to the damage it knew smoking causes, all while doing its damnedest to demand public trust and block scientific verification. Or, as U.S. District Court Judge Gladys Kessler opined in 2006, the tobacco companies marketed their product with zeal, “with a single-minded focus on their financial success, and without regard for the human tragedy or social costs that success exacted.”
Those hell-bent on the unfettered development of chatbots risk fitting the same mould. Among the problems the makers admit AI generates are misinformation, biases, and “synthetically generated photos, audio or video that are fake but look real”, quaintly called deepfakes.
Then there is “information that sounds plausible but is irrelevant, nonsensical or entirely false”. That’s labelled hallucination, a disingenuous way of saying bull****.
Another applicable cliché is the GIGO rule: Garbage In, Garbage Out.

There’s an ongoing debate about whether advances in AI portend “a utopian future or a dangerous new reality, where truth is indecipherable from fiction.”

                                           REALITY AND MORALITY

Nonetheless, executives from OpenAI, which created the viral chatbot ChatGPT, aver that “the bounds and defaults for AI systems” should be “democratically set” (whatever that means), while admitting “we don’t yet know how to design such a mechanism.”
For a clue on how to do so, those who want AI to shape the 21st century can learn something from the intelligence of the 18th, specifically that of French writer and philosopher Voltaire: “No problem can withstand the assault of sustained thinking.”
The moral case for caution with AI was made by Pope Francis, who noted that the positive potential of AI “…will be realized only if there is a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly.”
I suspect that, like Reagan and Gorbachev, the Pope, who is widely held up as a moral compass, would put more faith in verify than in trust.
So should the legislators, and the public.
