A couple of thoughts spurred by a quote from an obituary and a comment in a podcast about AI.
The SF Chronicle obituary for Ellen Tauscher, a moderate Democrat from the East Bay who served in Congress and the Obama Administration, had this piece of advice from her:
Politics, she said in a 2013 interview with University of California television, is the “ability to listen to people and understand what they’re saying.”
“You have to remember that you (may) know more than your constituents, but you don’t know better than they do,” she said.
It’s something to keep in mind in an age of artificial intelligence. Although there are people and systems who might seem to know more about you, they don’t know you better. And they’re not really listening to you.
Stuart Russell, an AI professor at UC Berkeley, said in an interview with Sam Harris on the Making Sense podcast episode “Possible Minds” that these systems aren’t paying attention to you and what you like. He talked about how the reinforcement learning algorithms of social media work.
“The system, which is a combination of the algorithms and the corporation — the people in the corporation that are adjusting them — have created a reinforcement learning process to maximize click-through revenue. When you run a reinforcement learning algorithm, here’s what it’s not doing. It is not looking at what that person clicks on and learning what that person likes and sending them more of what that person likes. That’s what you think (happens), and that’s what the designers of these algorithms imagine would happen. But that’s not what reinforcement learning does. What it does is it acts on its environment to maximize its reward. The environment here is your brain. It acts on your brain to make you a more predictable clicker. By a typical process of trial and error, what I think these algorithms have figured out is how to gradually feed you articles that will move you in a direction towards being a more predictable person, a more predictable clicker. That’s all it cares about.”
He remarked that people on the extreme ends of the political spectrum, left and right, are more predictable in what they click on. The people in the middle are less predictable. So the reinforcement learning algorithms push people out to the extremes, where they will be more predictable, without caring whether that extreme is to the left or to the right.
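Russell’s point can be caricatured in a few lines of Python. This is a toy sketch, not any real platform’s code: the “learned” policy is hardcoded as the end product of the trial-and-error process he describes, and the positions, drift model, and predictability measure are all invented for illustration. The user sits somewhere on a political spectrum from -1 to 1, the recommender serves content slightly more extreme than the user’s current position, and the user (the environment being acted on) drifts toward what they are served.

```python
def predictability(pos: float) -> float:
    """Assumed stand-in metric: the further a user sits from the
    center, the more predictable their clicks (0.5 = coin flip,
    1.0 = certain click)."""
    return 0.5 + 0.5 * abs(pos)

def nudge_outward(pos: float, step: float = 0.1) -> float:
    """The 'learned' policy: serve an article slightly more extreme
    than the user's current position, in whichever direction the
    user already leans (clipped to the ends of the spectrum)."""
    direction = 1.0 if pos >= 0 else -1.0
    return max(-1.0, min(1.0, pos + direction * step))

def simulate(start: float = 0.05, rounds: int = 100, drift: float = 0.2) -> float:
    """The environment here is the user: each round, their position
    drifts part of the way toward whatever article they were served."""
    pos = start
    for _ in range(rounds):
        article = nudge_outward(pos)
        pos += drift * (article - pos)
    return pos

# A user who starts near the center ends up at an extreme,
# where their clicking is close to fully predictable.
final = simulate()
print(round(predictability(0.05), 2), "->", round(predictability(final), 2))
```

The algorithm never asks what the user likes; it only profits from making the user easier to predict, which is exactly the inversion Russell is pointing at.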
This thought is elaborated on in Shoshana Zuboff’s “The Age of Surveillance Capitalism.” She writes that Internet companies use your data to build a predictive model of your behavior and create a prediction market. What they sell to other companies is their ability to predict whether you will click on one ad versus another, whether you are more likely to buy one thing instead of another. Surveillance capitalism, she writes, “unilaterally claims human experience as free raw material for translation into behavioral data.” This, in turn, is “fabricated into prediction products that anticipate what you will do now, soon, and later.” What hubris!
Tauscher’s advice is worth remembering for technocrats and technologists alike. Even though you may know more than another person, it doesn’t mean that you know what’s better for them.