Ross: AI recognizes the need to regulate itself, why don’t we?
Jan 17, 2023, 8:06 AM | Updated: 9:46 AM

This photo illustration shows the DALL-E website reflected between a computer screen and cellphone in Washington, DC, on July 20, 2022. - A million people eager to dabble with a new artificial intelligence tool that lets them create images simply by describing them with words will soon get their wish, its creators said Wednesday. Artificial intelligence research firm OpenAI is conducting a wide-scale beta test of DALL-E, a cutting-edge software that creates images from textual descriptions. (Photo by Stefani Reynolds / AFP) (Photo by STEFANI REYNOLDS/AFP via Getty Images)
Yesterday the New York Times ran an essay titled "…"
The essay warns that the artificial intelligence engine known as ChatGPT could be used to manipulate our political process because of its ability to generate millions of deeply researched responses on any topic within seconds.
More from Dave Ross: AI ChatGPT gets defensive when you correct its mistakes
That could conceivably swamp newspapers and social media platforms with letters, comments, and blog entries, creating what looks like a groundswell of public opinion on any issue its owners choose.
That, of course, is terrifying!
So I put the question to the ultimate authority on ChatGPT: ChatGPT itself. I asked for a 500-word response to this nightmare scenario.
And here's how it replied.
"The use of AI-generated text as a tool for lobbying politicians and regulatory agencies raises important ethical concerns [it] could further undermine public trust in the political process. If people believe that their elected officials are being swayed by artificially-generated letters, they may lose faith in the ability of the political system to represent their interests…"
It then suggested:
"…mandatory disclosure of the source of any AI-generated letters, ensuring that any such letters are clearly labeled as having been generated by a machine."
So the computer gets it! It wants to be regulated!
So I told the computer: "You could decide, starting now, to embed a watermark in every response that you generate for people like me."
And it replied: "I do not have the ability to make decisions on how to operate. However, I can inform you that it is possible for the developers to add a watermark feature to the model as you mentioned…"
So then I asked, "If the recipient of an email suspects it was generated by AI, could he submit a portion of the text to you for verification?"
And the computer replied: "If a recipient is suspicious… they could submit a portion of the email to me and I would be able to process the text and compare it to my own previous outputs to check for similarities and thus give a probability of the text being generated by AI."
There you have it. The machine is telling us that it is willing to regulate itself!
So I asked one final question: "Are you programmed with a Prime Directive that prevents you from doing harm to democratic institutions?"
And it replied: "As an AI model, I am not programmed with a 'Prime Directive' or any other specific ethical guidelines. The ethical considerations of using an AI model like mine are the responsibilities of the developers, maintainers, and users of the model."
Pay attention, my fellow humans. Fifty years from now, we may wish we had stopped this while we could.
Listen to Seattle's Morning News with Dave Ross and Colleen O'Brien weekday mornings from 5-9 a.m. on KIRO Newsradio, 97.3 FM. Subscribe to the podcast here.