The U.S. Senate and House hearings last week on Facebook’s use of data and foreign interference in the U.S. election raised important challenges concerning data privacy, security, ethics, transparency, and responsibility. They also illuminated what could become a vast chasm between traditional privacy and security laws and regulations and rapidly evolving internet-related business models and activities. To help close this gap, technologists need to seriously reevaluate their relationship with government. Here are four ways to start.
Help to increase tech literacy in Washington. Lawmakers expressed surprise and confusion about Facebook’s business model, including how the company generates revenue and uses data for targeted advertising. They also seemed to misunderstand how Facebook functions as a platform for third-party applications and how users’ data flows among the user, Facebook, and those third parties. This lack of knowledge, despite the millions of dollars the tech industry spent lobbying Washington in 2017, shows that technology literacy among lawmakers still needs to improve.
Closing this gap is especially important because Washington is clearly interested in tighter regulation of the data economy. Multiple lawmakers suggested more expansive federal privacy legislation, and there are state efforts as well, such as the California Consumer Privacy Act. The question is whether regulators will be able to create rules that reflect modern business models and data flows while still supporting innovative services and products. To ensure that they do, internet and technology companies must engage with legislators in a different way, one that goes beyond the transactional and fleeting.
To accomplish this, those who care about the future of technology must be more strategic, focusing on educating the federal government and changing its culture over the long term. TechCongress, the U.S. Digital Service, and the Presidential Innovation Fellows are a few examples of how to embed tech expertise into policy making. These programs give people with different experiences and backgrounds an extended opportunity to work together to improve how government works: increasing their exposure to different ideas, forcing collaboration, and requiring them to work through tension and conflicting viewpoints. And when those technologists finish their public-service tour of duty and return to industry, they will bring that firsthand understanding of government back with them.
Create and enforce stronger policies for governing third parties’ use of data. Much of the internet economy runs on application programming interfaces (APIs). Companies such as Facebook, Google, Apple, Amazon, and Salesforce have robust developer programs that encourage third-party integration with their platforms via APIs. These partnerships make it possible to offer customers valuable services — for example, add-on applications that allow you to customize your Gmail experience.
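To make the mechanics concrete, here is a minimal sketch of how such an integration typically works under an OAuth-style model: the add-on never sees the user’s password, only a token scoped to what the user approved. The endpoint, scope name, and PLATFORM_TOKEN environment variable are hypothetical stand-ins, not any real platform’s API.

```typescript
// A third-party add-on calling a platform API with a scoped access token.
// All endpoint and scope names here are hypothetical.

const ACCESS_TOKEN = process.env.PLATFORM_TOKEN ?? ""; // issued via the platform's OAuth flow
const GRANTED_SCOPES = ["email.labels.read"];          // the only thing the user approved

interface LabelList {
  labels: { id: string; name: string }[];
}

async function listLabels(): Promise<LabelList> {
  const res = await fetch("https://api.example-platform.com/v1/me/labels", {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  if (res.status === 403) {
    // The platform rejects any call outside the granted scopes.
    throw new Error(`Token lacks a required scope; granted: ${GRANTED_SCOPES.join(", ")}`);
  }
  return (await res.json()) as LabelList;
}

listLabels().then((result) => console.log(result.labels.map((l) => l.name)));
```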
However, these partnerships can also lead to data spreading to places far beyond users’ awareness or control. As many commentators have pointed out, the Facebook and Cambridge Analytica story is about how platforms, in partnership with third-party applications, can use and misuse our data in ways that many of us did not know were possible.
In order for industry — and users — to realize the full benefit of these types of collaborations, information access by third parties must be accompanied by strong API policies and transparent business practices. This includes responsible vetting of third-party applications, clear policies around what third parties can and cannot do with user data, and dedicated resources and processes for monitoring and enforcing the policies.
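What that looks like on the platform’s side can be sketched simply: every third-party request is checked against the app’s vetting status and the scopes the user actually granted, and each decision is logged so that violations can be detected and enforced later. The registry, scope names, and quiz-app example below are illustrative assumptions, not any company’s actual system.

```typescript
// Enforcement sketch: vet the app, check its granted scopes, log the access.

interface AppRecord {
  vetted: boolean;     // passed the platform's review process
  scopes: Set<string>; // what the user consented to share with this app
}

const registry = new Map<string, AppRecord>([
  ["quiz-app-123", { vetted: true, scopes: new Set(["profile.read"]) }],
]);

const auditLog: { appId: string; scope: string; allowed: boolean; at: Date }[] = [];

function authorize(appId: string, requestedScope: string): boolean {
  const app = registry.get(appId);
  const allowed = !!app && app.vetted && app.scopes.has(requestedScope);
  // Every attempt is recorded, so monitoring can flag apps probing for
  // data they were never granted.
  auditLog.push({ appId, scope: requestedScope, allowed, at: new Date() });
  return allowed;
}

// An app granted only "profile.read" cannot quietly pull friend data:
console.log(authorize("quiz-app-123", "profile.read"));  // true
console.log(authorize("quiz-app-123", "friends.read"));  // false, and logged for review
```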
Finally, companies must also be proactive in notifying consumers when violations occur and in taking timely action when they do. Mark Zuckerberg, Facebook’s CEO, admitted last week that his team knew about the improper use of 87 million people’s data as far back as December 2015, yet did not disclose it to users. Timely notification is critical to establishing trust with users.
Invest in user-centered models for consent and terms of service. Several lawmakers questioned the ability of Facebook’s users to truly read and comprehend the company’s terms of service. It’s a problem that extends well beyond Facebook. Most consumer internet services and products are designed in ways that encourage consumers to click quickly through long terms of service and legal policies in order to move on to the app or website they are trying to access.
A new consent model is needed for the data economy, one designed around the user’s experience that actually helps people understand how their data is being used. This means less legalese, and clearer, simpler design and language that have been tested and shown to convey meaning, crafted with the same care as a website’s primary services, so that an “opt in” is truly meaningful rather than just another checkbox.
Some pioneering efforts in this area have been made to improve electronic consent in the clinical research space. One example is using images in consent forms, which can slow down a reader’s eye and focus their attention. Another is allowing users to select the level of detail they want for any given consent provision, so they can learn more about the topics they care about most.
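A rough sketch of the data model behind such a layered consent flow follows: each provision pairs a plain-language summary with expandable detail and carries its own explicit opt-in, rather than hiding everything behind one blanket checkbox. The provisions and field names are hypothetical.

```typescript
// Layered consent: per-provision summaries, on-demand detail, explicit opt-ins.

interface ConsentProvision {
  id: string;
  summary: string; // short, plain-language statement shown up front
  detail: string;  // fuller explanation, revealed only when the user expands it
  optedIn: boolean; // defaults to false: opting in is an explicit act
}

const provisions: ConsentProvision[] = [
  {
    id: "ads-personalization",
    summary: "We use your activity to personalize the ads you see.",
    detail: "Activity includes pages you visit and posts you interact with.",
    optedIn: false,
  },
  {
    id: "third-party-sharing",
    summary: "Apps you connect can request parts of your profile.",
    detail: "Each app must ask for specific data, and you can revoke access.",
    optedIn: false,
  },
];

// Consent is recorded as a set of explicit per-provision choices;
// anything the user did not affirmatively select stays opted out.
function submitConsent(choices: Record<string, boolean>): ConsentProvision[] {
  return provisions.map((p) => ({ ...p, optedIn: choices[p.id] === true }));
}

console.log(submitConsent({ "ads-personalization": true }));
```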
Future federal policy will likely focus on stronger consent regulations. The Federal Trade Commission has pushed for “just-in-time” disclosures to consumers to obtain their affirmative express consent, and the European Union’s General Data Protection Regulation (GDPR) places great importance on consent. Yet usability and user-friendly software design are difficult to mandate by legislation. That should be the responsibility of tech companies, which already have robust user-centered design teams and clearly know how to make their services engaging (and possibly even addictive).
Include data ethics as a central component of any regulatory reform. As Zuckerberg said on April 10, he considers Facebook to be a content-neutral platform for users to share ideas and opinions freely. Yet many lawmakers expressed concerns over extremist content on the site, such as hate speech and terrorist propaganda, and over the proliferation of false information intended to influence elections around the world. And while Zuckerberg frequently described his company’s efforts to use artificial intelligence (AI) to monitor Facebook’s content and purge material that violates its policies, using AI can also, whether intentionally or not, perpetuate or exacerbate bias.
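One simple check of this kind is comparing a moderation model’s error rates across groups: if benign posts from one community are wrongly flagged far more often than another’s, the system is silencing some users disproportionately. The sketch below illustrates the idea with made-up data; it is not how any particular platform audits its models.

```typescript
// Compare false-positive rates (benign posts wrongly flagged) across groups.

interface ModerationRecord {
  group: string;      // e.g., the post's language or community
  flagged: boolean;   // the model's decision
  violating: boolean; // ground truth from human review
}

function falsePositiveRate(records: ModerationRecord[], group: string): number {
  const benign = records.filter((r) => r.group === group && !r.violating);
  const wronglyFlagged = benign.filter((r) => r.flagged);
  return benign.length === 0 ? 0 : wronglyFlagged.length / benign.length;
}

// Fabricated sample for illustration only.
const sample: ModerationRecord[] = [
  { group: "en", flagged: false, violating: false },
  { group: "en", flagged: true, violating: false },
  { group: "ar", flagged: true, violating: false },
  { group: "ar", flagged: true, violating: false },
];

for (const g of ["en", "ar"]) {
  console.log(g, falsePositiveRate(sample, g).toFixed(2)); // en 0.50, ar 1.00
}
```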
Given that future hearings and legislative and regulatory activity may center around regulation of algorithms that power AI technology, the data science community and policy makers must work together to identify the principles and rules of the road to guide efforts to address this continuously evolving challenge. One great example of this ongoing work is a collaboration between Data for Democracy and Bloomberg on a data science code of ethics.
The congressional hearings represent a fascinating milestone in the evolution of the tech industry and its relationship with regulation. While Facebook is a unique case study, the challenges and opportunities the hearings amplified are pervasive across the digital economy, including among the entire sector of data brokers who move ever more consumer data every day yet rarely face the public scrutiny Facebook did last week.
It seems inevitable that the federal government will enact stronger privacy and consent safeguards, as the European Union has with GDPR. Instead of reacting to or resisting such efforts, tech companies must work proactively with governments and acknowledge that with their ever-greater power comes greater responsibility.