The courses are emerging at a moment when big tech companies have been struggling to handle the side effects — fake news on Facebook, fake followers on Twitter, lewd children’s videos on YouTube — of the industry’s build-it-first mind-set. They amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.
“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” said Laura Norén, a postdoctoral fellow at the Center for Data Science at New York University who began teaching a new data science ethics course this semester. “You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”
Computer science would benefit from an equivalent to the medical profession’s Hippocratic oath. As computer systems — especially A.I. and machine learning — grow more complex, it becomes easier to disregard, or remain ignorant of, the damage these tools can inflict. Personally, I’m still unsure where the ethical line should be drawn, or to what degree, say, an open source software maintainer is responsible for the eventual uses of her code. Somewhat? Not at all? This, to me, is where any comparison between the medical field and computer science breaks down: a doctor performs actions, while a computer scientist creates tools. Both can be used unethically, but a tool can operate independently, in ways its creator never imagined. So should the tool never have been created in the first place?
I always liked Google’s now-defunct mantra of “don’t be evil,” because even if the motto was only paid lip service in its final years, it served as a reminder that technology can be, and is, used for evil every day. These systems are too large to pin the blame on any one developer or computer scientist, so it’s on all of us not only to discuss, but also to reach an agreement on, the boundaries of what technology should do and how wide-ranging its influence should be.