As crazy as this may sound, none of this is science fiction. It is happening right now. Machines already filter, sort and choose the information we base our decisions upon. They count our votes. They sort the tasks we spend our time on. They choose the people we talk to and meet. More and more key aspects of our lives are decided by information technology. And things go wrong. Machines are made by humans. As long as we make mistakes, our machines make mistakes.
When things go wrong, neither party takes responsibility: those who use machines hide behind their lack of power, and those who build, manage and own information technology hide behind “the algorithm”. They sell artificial intelligence as a deus ex machina, and when it fails, they dismiss it as a mere machine.
The question “Who serves whom?” is not a topic for experts in 2047. It is a key question for all of us, today, right here and now. Whether or not machines can be intelligent is not just technically or scientifically relevant; it is existential.
Imagine if Facebook decided who you could marry, because the network knows more about you and your tastes than you do yourself. What if machines made better politicians? iA raises these and other scenarios and offers several suggested safeguards we could put in place to help keep a clearer line between human- and machine-generated content.
When most people think of world-ruining A.I., they probably picture a robotic uprising: legions of drones, smart light bulbs, and other sentient machines overthrowing their creators. That’s how the movies paint it, at least.
In reality, damaging, world-shaping A.I. will look less exciting, and we’re already getting glimpses of it. Consider the vast influence and reach that A.I. and machine learning have over the algorithms that feed our social networks, populate our search results, and prompt us with contextual lifestyle suggestions. How much of your life is guided by a machine, and to what degree does that influence sway your decisions? Now consider that these same algorithms and neural networks are proprietary code, often written and improved by engineers who rarely understand the full complexity of the system. At what point does the situation become a problem, dangerous, or irreversible? iA:
We need to know who runs these robots. And we need to know how they work. Bots have no right to anonymity. Algorithms that influence human existence on the deepest level shouldn’t be trade secrets.
I don’t think we have any clear answers yet, but that doesn’t mean we shouldn’t be asking the questions. If anything, these questions need to come up more often in our daily lives.
A common illustration of the dilemma we’re in goes like this: what should happen if a self-driving vehicle suddenly had to decide between striking a group of pedestrians and swerving off the road into a ditch, putting its passengers at risk? Should it consider the passengers’ health? What about the pedestrians’ apparent age? Either way, if a human driver had made that choice, we would want to know everything we could about them. Will we hold machines and their makers to the same standard?
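To make the transparency point concrete, here is a purely hypothetical sketch, in Python, of the kind of value judgment that ends up encoded in software. Every name, field, and tie-breaking rule below is invented for illustration; no real vehicle works this way. The point is simply that choices like these exist as explicit, inspectable lines of code, exactly the sort of logic that, iA argues, shouldn’t be a trade secret.

```python
# Hypothetical sketch only: every name and rule here is invented to
# illustrate how a moral trade-off becomes ordinary, inspectable code.

from dataclasses import dataclass


@dataclass
class Outcome:
    people_at_risk: int    # how many people this maneuver endangers
    inside_vehicle: bool   # True if those people are the passengers


def choose_maneuver(stay: Outcome, swerve: Outcome) -> str:
    """Return "stay" or "swerve", endangering as few people as possible.

    The tie-breaker below (protecting whoever is outside the vehicle)
    is itself a moral decision someone had to write down, and could be
    asked to defend, if the code were open to inspection.
    """
    if stay.people_at_risk != swerve.people_at_risk:
        return "stay" if stay.people_at_risk < swerve.people_at_risk else "swerve"
    # Equal harm either way: arbitrarily favor the people outside the car.
    return "swerve" if swerve.inside_vehicle else "stay"


# Five pedestrians ahead; swerving into the ditch endangers two passengers:
print(choose_maneuver(stay=Outcome(5, False), swerve=Outcome(2, True)))  # swerve
```

Even in this toy version, the weights and tie-breakers are plainly visible, which is exactly why keeping the real ones anonymous and proprietary should worry us.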