Posted in Big Data
Facebook recently stirred controversy after reports emerged that it manipulated user timelines to study the effects of people receiving positive or negative content in their feeds. The GPS company TomTom stepped in a puddle when its traffic data ended up in the hands of police looking for speeders. Target incorrectly presumed female consumers were pregnant after a failed attempt at predictive modeling. And before an update, Siri would direct you to the nearest bridge if you asked to jump off one.
All of these instances involve morally questionable uses of data, in which people’s privacy was violated or conclusions were drawn that led to invasive or poor decisions on the company’s part. And yet for each of these instances, the genesis of the idea and what data analysis could accomplish was probably first seen as “cool”. Did anyone in the room question up front whether it was creepy?
“In the absence of an ethical framework in talking about business decisions, we revert back to our moral code,” says Kord Davis, author of the book “Ethics of Big Data.” This is where we are with Big Data, stuck in a lawless Wild West in which the technology is ethically neutral but everything that’s done with it is volatile.
As Gartner Research VP Frank Buytendijk explained, companies have long had a desire to know who they are dealing with, who their customers are and what their needs are. And with new technology, every business sector will work to develop new, better, smarter ways of identifying these people. But the legitimate goal of determining just who your customers are, Buytendijk said, is being overshadowed by the additional data that’s captured around it.
Data that users may wish to keep private, that may lead to false conclusions, or that may even prove dangerous and contrary to what the customer needs is getting in the way of more traditional business intelligence. Those making the decisions don’t understand the power and the consequences behind data-driven insights, and those with the ability to create those insights aren’t asking questions.
“I am not afraid of the bad people (there will always be bad people),” Buytendijk said. “I am afraid of the ignorant ones, who have no idea what they are doing. For them all consequences are unintended. There is no intention.” This was the basis of our show “Big Data Ethics: Privacy, Risks and Principles”, in which we invited both Buytendijk and Davis to hash out this thorny issue.
Buytendijk wrote that with every new technology innovation that comes up, the question of ethics comes to check whether or not this is a good thing for customers, businesses or society. But Davis says in the case of Big Data there’s more to it. “We are facing an unprecedented capability with unprecedented outcomes. The implications range from the deeply and directly personal (health insurance premiums being calculated based on your purchase history) to the broadest cultural and social impacts (political events like the Arab Spring),” Davis wrote in a pre-show interview.
With that unprecedented capability comes the realization that companies are doing more with the data they’ve always collected than ever before. “Today, your personal data is being shared, sold, analyzed and sliced and diced in a hundred different ways you know nothing about,” Davis said.
And the reason it begins to cross the line into the creepy territory is because the analysis that Big Data allows is no longer just based on simple demographics. Buytendijk calls it “behavioral analysis”, in which companies and governments might know all too well, or perhaps even better than you do, what you’re doing.
If you’re a CEO or business leader with this capability in your back pocket, the question may very well be, “Why not use it?” You have stakeholder demands to meet, you can get to know your customers’ needs better to improve their satisfaction and move more product, and the ability to churn this information may just be the competitive advantage you need. But as companies neglect the choices of right and wrong, they also face adverse business effects.
“They face risks of negative brand impacts, fear, uncertainty, and doubt from customers, existing and emerging legislation and regulation, decreasing trust and damaged reputations,” Davis said. “The trick is to understand where that moral boundary lies, and few organizations are actively working to identify it.” Because the regulations are as yet undefined, the moral boundaries are likewise vague, and companies will have to review their values in order to decide which side of history they wish to land on.
For some companies like Facebook, challenging the notion of what data should be private and what should be public has been a part of their organization’s brand and how they’ve shaped their success over time. For others, claiming transparency with data decisions and a respect for privacy could be its own competitive advantage, one that customers will value on ethical grounds of right and wrong rather than dollars and cents.
Although Davis says there is no specific vocabulary yet to define Big Data ethics, Buytendijk explains that we have all of recorded human history’s thinking on right and wrong to draw from, and can recognize that privacy is something humans deeply value.
Organizations can begin forming ethics boards and a “Big Data code of conduct”. They need to develop new capabilities to bring ethical discussions into the context of business and data decisions, and this practice starts with internal education and media awareness rather than regulation alone, which will only exist for people to get around.
We have ethics in every field of our working society. Doctors have medical ethics, soldiers have military codes, teachers have to take into consideration what is being shown to kids, and now IT people have digital ethics. Determining what is right and wrong with Big Data isn’t just a business need or a customer concern; it’s a human responsibility.
Hear more from Davis and Buytendijk on our show “Big Data Ethics: Privacy, Risks, and Principles”.