Ross: If AI can learn to be good, maybe people can too
Oct 18, 2022, 8:18 AM | Updated: 8:34 AM

Developed by Aldebaran, the Pepper robot gives hugs during the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge Expo at the Fairplex June 6, 2015 in Pomona, California. Organized by DARPA, the Pentagon's science research group, 24 teams from around the world are competing for $3.5 million in prize money that will be awarded to the robots that best respond to natural and man-made disasters. (Photo by Chip Somodevilla/Getty Images)
Many of you know that I'm an Artificial Intelligence skeptic. I don't believe machines can achieve consciousness, and I don't believe they can think in the human sense, because they don't have human needs.
I do believe that if you feed them enough raw text they can mimic an interview, but ultimately, they're just assembling the written thoughts of actual people.
And that's what University of Washington Professor Yejin Choi is trying to change by bringing some common sense to AI programs. She was awarded a MacArthur grant for recognizing that you can't simply feed raw text into these programs and expect them to think straight.
“Currently the AI just consumes any data out there, but that’s not safe. That’s dangerous even for humans,” Choi said.
So she came up with a solution.
“One way to fix this, to make an analogy, is to write textbooks for machines to learn from, in the same way that humans also learn from textbooks, not just any random data from the internet,” Choi said.
Yes: her idea is to prepare a textbook to help the computer teach itself to recognize information that might be compromised by sexism, racism, or outright lies.
“And then that textbook could consist of examples of what’s right from wrong,” Choi explained.
But here's the catch: a lot depends on who writes the textbook.
“I mean, there are cases where it is so obvious that it’s a clear case of sexism and racism,” Choi said. “But then there are cases where two people disagree depending on their upbringing, or depending on their depth of understanding of the issue. Someone might think, ‘oh, that’s freedom of speech,’ while others think that that’s a clear case of microaggression.”
What that tells me is that an AI program is going to take on the personality and prejudices of its creator.
“That’s right. The creator should be not just one person, but a diverse set of people representing it, but even if so, it’s going to reflect some biases,” Choi confirmed.
So the idea is to get a spectrum of smart, well-adjusted people to feed the AI program examples of right and wrong so that the computer can teach itself which information to accept and which information to avoid.
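To make that concrete, here is a minimal sketch, in Python, of what learning right from wrong out of a curated textbook could look like in its simplest possible form. This is not Choi's actual system; the four "textbook" lines, the labels, and the toy word-counting model below are all hypothetical illustrations of the general idea.

from collections import Counter

# The "textbook": a tiny, hand-curated set of examples labeled
# "accept" or "avoid." In practice this would be assembled by a
# diverse group of people, as Choi suggests. All examples here
# are hypothetical illustrations.
textbook = [
    ("everyone deserves equal treatment", "accept"),
    ("be kind when you disagree", "accept"),
    ("that group of people is inferior", "avoid"),
    ("spread this lie about them", "avoid"),
]

# Count how often each word appears under each label.
word_counts = {"accept": Counter(), "avoid": Counter()}
label_counts = Counter()
for text, label in textbook:
    word_counts[label].update(text.lower().split())
    label_counts[label] += 1

VOCAB = len({w for c in word_counts.values() for w in c})

def score(text, label):
    # Toy Naive Bayes-style score with add-one smoothing.
    total = sum(word_counts[label].values())
    s = label_counts[label] / sum(label_counts.values())
    for word in text.lower().split():
        s *= (word_counts[label][word] + 1) / (total + VOCAB)
    return s

def classify(text):
    # Accept or avoid new text based only on the textbook.
    return max(("accept", "avoid"), key=lambda lbl: score(text, lbl))

print(classify("treat everyone with kindness"))  # accept
print(classify("spread this lie"))               # avoid

The point of the toy is the catch Choi describes: the program's entire sense of right and wrong comes from whoever wrote those four textbook lines.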
And some of you may be thinking: that's great! In fact, if you can come up with a universal truth filter, why limit it to computers? Why not teach it to humans?
The answer, of course, is that it's been tried for thousands of years and we're still fighting over things like pronouns and bathrooms.
I say we test it on the computers first, and see if they really learn to behave, or just try to unplug each other.
Listen to Seattle's Morning News with Dave Ross and Colleen O'Brien weekday mornings from 5 – 9 a.m. on KIRO Newsradio, 97.3 FM. Subscribe to the podcast here.