Two workers ensnared in wires and cables (Illustration by Hugo Herrera)

In 2018 in the Netherlands, the public learned that Dutch tax authorities had for years been using an AI-driven system that wrongly accused people of committing child care benefits fraud. Tens of thousands of parents stopped receiving child care support and instead received letters demanding repayment of thousands of euros. Another algorithmically controlled system subsequently decided that some of those parents’ income could no longer support their families, and nearly 1,700 children were forcibly removed from their homes. A later investigation found the AI system had systematically discriminated against non-white Dutch citizens.

The same year, an Australian court ruled that the government had illegally used a faulty automated decision-making system to determine citizens’ eligibility for welfare benefits, after around half a million Australians were ordered to repay years of government disability or unemployment support. At least three people killed themselves over the stress caused by the “Robodebt” scandal.

Making Tech Work for Workers
This in-depth article series, sponsored by the Ford Foundation, explores the harms of the digital economy and asks workers, organizers, technologists, economists, and funders: How can we collectively build a future of work that is just, equitable, and sustainable for all?

These cases, on opposite sides of the globe, illustrate with painful clarity how the irresponsible use of artificial intelligence can harm both workers and the public, and how unregulated technology in the workplace also hurts communities. They also show that protecting labor rights is foundational to protecting human and civil rights.

AI Management, at Work and Beyond

As workplaces become more reliant on technology, algorithmic management systems are being used for everything from hiring and training to scheduling and performance evaluation. Task automation is also growing, with jobs once performed by humans now run by AI, however unintelligent that AI may be.

These systems are often touted as increasing efficiency and productivity and as making decisions more neutral by removing human subjectivity. But too often, algorithms replicate and intensify inequality and bias, escalate demands for worker productivity, and use employees’ data in ways that are both secretive and abusive. Today, workers have little idea what data is being collected from them or how it is used to assess their performance, because those judgments are made by proprietary, third-party automated systems whose inner workings are not disclosed.

The effect can be hiring and firing decisions based on inscrutable considerations. In 2011, more than 200 teachers in Washington, D.C., were fired for poor performance, in some cases despite exemplary in-person evaluations, because the district’s algorithmic performance evaluation system gave more weight to student test scores than to any other factor. But the algorithm was flawed: it failed to account for variables outside teachers’ control, such as poverty or student learning disabilities.

Other automated management systems discriminate against workers or job applicants in ways that would be illegal if people did them—for example, AI evaluation systems that correlate worker productivity with personal characteristics like gender, race, or weight; or health tracking systems that weed out applicants at higher statistical risk of getting sick or becoming pregnant. When workers lack access to the algorithms used by employers, it’s nearly impossible to prove that discrimination has occurred.

The problem goes beyond the workplace, though. Increasingly, we’ve seen that the fates of workers’ rights and the community at large are intertwined, since the same opaque systems used to quantify and evaluate workers are also used to undermine and privatize public services. As governments around the world outsource elements of public service delivery to private tech corporations—a trend that exploded during the COVID-19 pandemic—they have increasingly allowed public data to be accessed and managed by private companies. This can undermine the public good in numerous ways: letting public data be used not to better public welfare but to increase corporate profit; placing the practice of public policy in private hands; and furthering the precariousness of work, as public servants are replaced by automated systems.

The solution to this mess, we believe, lies in the public sector and, in particular, with unions, which have long recognized that when some of us suffer, all of us eventually will.

Protecting Workers Protects Us All

One of unions’ most foundational principles is that workers deserve transparency in wages and working conditions. The proprietary algorithms now used by many employers thoroughly undermine this principle, which affects how unions must respond. As a starting point, unions and workers need to understand what is happening in digitized workplaces: which AI systems are in use and what data was used to train them; what data is being extracted from employees; and how that data affects employment decisions.

To that end, our organizations, Public Services International (PSI) and the Why Not Lab, have recently concluded a three-year capacity-building project, Our Digital Future, for public service unions around the world. Through that project we have trained union leaders, bargaining officials, and legal experts. We have developed recommendations to protect both workers’ rights and the integrity of public services, and we have created regional Digital Rights Organizer hubs to pass this knowledge on to local unions.

We’ve also created a number of other tools to help workers and unions. PSI’s Digital Bargaining Hub helps unions prepare to bargain over employees’ digital rights, and the Why Not Lab’s Governance of Algorithmic Systems Guide offers a checklist unions can use to talk effectively with management about digital products used in the workplace and the use of workers’ data. The Why Not Lab has also created the Data Lifecycle at Work, which helps unions grasp how data is collected and used by employers, and PSI is supporting affiliates in adding new clauses to collective agreements, such as a “digital right of entry” policy that expands unions’ right to enter the workplace to include the right to know what technology employers are using and how.

Unions must also recognize that they are part of a broader, joint struggle, across many sectors and around the world, that binds workers’ rights together with the rights of community members and those who rely on public services. Unions need to do more than win power and rights over digitalization in the workplace. They must also work to influence policies to ensure that digitalization in public services increases quality, access, and effectiveness, not just the bottom line. They must engage the public to understand the role technology plays in government systems, and popularize the idea that public data must serve the public good. Together, PSI, the Why Not Lab, and other civil society partners are working toward this broader goal as well, through the founding of the Global Digital Justice Forum, which promotes the idea that governments, not corporations, should be in charge of regulating data.

Inclusive Governance for Digital Justice

In 2024, the United Nations’ Summit of the Future will discuss and adopt the UN Global Digital Compact, which will outline universal standards for corporate responsibility in the digitized economy. In anticipation of the summit, the Global Digital Justice Forum submitted a recommendation to the UN this May, rejecting the model that has allowed corporations to set the terms of digital governance and calling for a new framework based on principles of human rights, democracy, and governments’ rightful role in setting digital policy. The process of negotiating next year’s compact is an opportunity to challenge Big Tech and fight for a more accountable and democratic social contract around data governance and full digital rights for workers and communities.

Our digital policies and practices must be calibrated to benefit everyone, not just a few, and that starts with making sure that everyone who will be subjected to algorithmic management, whether as workers or as citizens, has a seat at the table in determining which systems are used and how.

This is another area where unions can help lead the way. In Las Vegas, housekeepers represented by the Culinary Union were asked to use an app on an employer-provided smartphone. The app included an interactive list of rooms to clean that could be micromanaged in real time from a manager’s tablet, and it led to lots of extra walking and an increased workload. To preserve their autonomy, housekeepers documented where the phone sent them and eventually presented maps showing how unintelligent the smartphone-driven routing was and how much needless zigzagging it generated. As a result of their documentation, thousands of housekeepers won new language in their union contract preserving their ability to self-sequence their work, even as they use the smartphone.

It was a small but brilliant example of participatory design—bringing workers to the table to ensure that technology is actually additive and not detrimental. We need to broaden that concept into the idea of inclusive governance: that the only way the tech revolution can live up to its promise is if the people affected by it are included in its creation and governance. Workers managed by algorithm have the right to access and understand those systems. Communities whose interactions with the government are managed by algorithms must likewise have knowledge of and access to the systems that impact their lives.

If those principles had been in place in the Netherlands and Australia, where flawed and biased algorithms falsely accused citizens of welfare fraud, those scandals, and the human tragedies they caused, never would have happened. The Community and Public Sector Union (CPSU), PSI’s affiliate union in Australia representing workers who manage the welfare system, has successfully negotiated a set of clauses in its new collective agreement that allow workers to blow the whistle on the unethical use of algorithms in the future. The union persistently raised concerns about the Robodebt system and was accused of taking illegal strike action over the scandal. Ultimately, the system was ruled illegal, and the government was forced to repay the debts, along with additional compensation, to the people the algorithm had harmed.

The power shifts involved in the digitalization of our economy are as profound as those of the Industrial Revolution. Then, corporate abuses amid a rapidly changing economy led to the rise of trade unionism and the concept of workers’ rights. Today we face an equivalent challenge and opportunity. Building an equitable and just digital economy requires uprooting systems that have become embedded in workplaces and governments. Addressing problems of this scale won’t be easy, but if unions and their allies are ready for the fight, it comes with the chance to increase worker power, to save and improve public services, and to win back democratic control of public data.

None of the gains workers have made ever came as a gift; they came as the result of sustained struggle. Workers and communities alike need to renew that struggle now. Building solidarity across sectors and countries is a vital first step.
