We recently discussed some of the ethical challenges facing the development and deployment of artificial intelligence (AI) systems. One area of AI ethics in particular that demands more attention and scrutiny is data privacy.
This is especially true in the age of Facebook and its connection with Cambridge Analytica. In case you’re unfamiliar with the situation: Cambridge Analytica harvested data from some 50 million Facebook profiles and used a sophisticated machine learning system to analyze that data and deliver highly targeted, personalized political messages intended to change voters’ behavior (the firm has since been shut down). The scandal is that the user data was collected illegally, and that it was used to attempt to manipulate voters in both the 2016 United States presidential election and the 2016 Brexit referendum in the U.K. As a result, Facebook was recently fined $5 billion USD for the poor data privacy practices that allowed the breach in the first place.
To be sure, that’s just a drop in the bucket for Facebook, a company valued at over $500 billion USD. Most businesses aren’t that big, of course, and most probably couldn’t weather a fine of that size. What’s more, Europe’s General Data Protection Regulation (GDPR) now requires companies doing business in Europe to comply with a number of data protection requirements or face fines of their own.
With serious legal and financial consequences at stake for companies using artificial intelligence, it’s essential to develop policies that ensure your business maintains compliance at all times. Following are two key areas of consideration for developing such a policy.
Understanding Your Data
Understanding Legal Compliance
With more and more regulations on data collection and usage being developed, it’s essential to have policies that ensure compliance, or risk the penalties of non-compliance. This includes examining the legality of:
- Collection &amp; warehousing practices: Do your data collection and storage practices comply with legal requirements around privacy and data security?
- Notification and opt-out practices: Do you sufficiently notify users of your data collection practices, and do you provide sufficient opportunity to opt out of data collection?
- User control: Can the users whose data you collect control what is collected and who it is shared with? Can they change their preferences at any time or request that their data be deleted from your system?
- Audit practices: What is your company doing to ensure ongoing compliance?
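To make the user-control and opt-out points above concrete, here is a minimal sketch of per-purpose consent tracking. All names (`ConsentRecord`, `may_collect`, the `"analytics"` and `"marketing"` purposes) are hypothetical illustrations, not a reference to any particular law or library; a real system would also need persistence, audit logging, and an actual deletion pipeline.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """Tracks one user's data-collection preferences over time (illustrative only)."""
    user_id: str
    # Per-purpose consent flags; collection is off until the user opts in.
    purposes: dict = field(
        default_factory=lambda: {"analytics": False, "marketing": False}
    )
    deletion_requested: bool = False
    updated_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def set_preference(self, purpose: str, allowed: bool) -> None:
        # Users can change their preferences at any time.
        self.purposes[purpose] = allowed
        self.updated_at = datetime.now(timezone.utc)

    def request_deletion(self) -> None:
        # Flag the record for erasure and revoke all consent; a downstream
        # job would then purge the user's stored data.
        self.deletion_requested = True
        self.purposes = {p: False for p in self.purposes}
        self.updated_at = datetime.now(timezone.utc)


def may_collect(record: ConsentRecord, purpose: str) -> bool:
    """Gate every collection point on explicit, current consent."""
    return not record.deletion_requested and record.purposes.get(purpose, False)
```

The key design choice is that `may_collect` is consulted at every collection point, so a preference change or deletion request takes effect immediately rather than at the next batch job.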
Beyond that, from a legal standpoint it’s essential to ensure that users are made aware of, and agree to, whatever your data collection and usage practices are. This might be as simple as a privacy statement on your website, or it might be a more complex opt-in/opt-out system.
If your company is thinking about developing an AI program, it’s important to think through all the different implications of data collection and usage. And we can help. Contact us to discuss how we can work with you to ensure your AI program meets both your business needs and your customers’ privacy expectations.