This policy brief was written by Minsu Longiaru, PowerSwitch Action; Wilneida Negrón, PhD, Coworker.org; Brian J. Chen, Data & Society; Aiha Nguyen, Data & Society; Seema N. Patel, University of California College of the Law, San Francisco (formerly UC Hastings Law); and Dana Calacci, PhD, Pennsylvania State University College of Information Sciences and Technology.
In response to widespread concerns about data tracking and the collection of personal information, corporations are deploying a new class of technologies, including forms of artificial intelligence, that claim to be “privacy-preserving” or “privacy-friendly” because they protect individuals’ personal data. But protecting workers’ personal data does not necessarily lead to protecting workers. Because corporations can use these Privacy-Preserving AI Techniques as workarounds to analyze data at scale and make predictions without “touching” personal data, these technologies can enable corporations to technically comply with data privacy laws while exerting control over workers in ways that should cause grave concern.
Absent proactive intervention, these technologies will be deployed in ways that further obscure accountability, entrench inequality, and strip workers of their voice and agency. Stronger state-level enforcement of existing laws — and most fundamentally, new workplace technology rights and standards — are necessary to protect workers from an expanding web of invisible control and digital exploitation.
This brief proposes three forward-looking design principles that target root causes, addressing the overall deficit of worker power and autonomy that tends to worsen as employers deploy new technologies. Paired with concrete legislative tools, enforcement reforms, and grassroots policy change efforts already underway across the US, these principles offer a roadmap for bold governance that provides meaningful protection to workers and positions them to be decision-makers in the digital age.