by Arseniy Svynarenko, Mikko Perkiö, Benta Mbare, and Meri Koivusalo
Algorithmic management is increasingly important for occupational wellbeing. This is reflected in an Italian court ruling of January 2021 against Deliveroo’s rider-ranking algorithm, in a case brought by a group of delivery workers and the Italian General Confederation of Labour. The platform’s algorithmic management system profiled workers and evaluated their “reliability”. The platform did not ask whether workers were sick or had an emergency: the algorithm automatically downgraded delivery workers who failed to cancel pre-scheduled shifts. This automatic algorithmic decision had a significant impact on workers’ access to work. A similar practice is in place on one of the food delivery platforms in Finland.
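To make the court’s concern concrete, the logic at issue can be sketched in a few lines of code. This is a purely hypothetical illustration, not Deliveroo’s actual system: the function names, point values, and threshold are assumptions. What it captures is the feature the court objected to, namely that the score drops for any missed shift regardless of the reason, and access to work follows mechanically from the score.

```python
# Hypothetical sketch of a "reliability" scoring rule of the kind the court
# ruled against. All names and numbers are illustrative assumptions.

def update_reliability(score: float, cancelled_late: bool) -> float:
    """Return the new reliability score after a scheduled shift."""
    if cancelled_late:
        # The algorithm never asks *why* the shift was missed
        # (sickness and emergencies are penalised like no-shows).
        score -= 10.0
    return max(score, 0.0)

def has_priority_booking(score: float, threshold: float = 50.0) -> bool:
    """Workers below the threshold lose early access to shift slots."""
    return score >= threshold

score = 55.0
score = update_reliability(score, cancelled_late=True)  # e.g. worker was sick
print(has_priority_booking(score))  # prints False
```

The point of the sketch is that a single rule, applied automatically and without context, translates directly into reduced access to work.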
In February 2021, the District Court of Amsterdam issued a default judgment ordering Uber to reinstate five UK drivers and one Dutch driver who had been dismissed solely on the basis of automated processing. That was one of the first court cases on platform workers’ access to work brought under the EU General Data Protection Regulation (GDPR), Article 22 on “Automated individual decision-making, including profiling”.
The court cases signal the importance of algorithmic management to the platform economy and its workers. At the end of 2021, the European Commission proposed a draft Directive on Improving Working Conditions in Platform Work. The directive offers an important step forward in the regulation of platform work and algorithmic management. It raises the issues of i) improving the transparency of the use of algorithms, ii) fairness, through requirements for obligatory human monitoring of automated decisions that may significantly affect workers’ working conditions, and iii) accountability for automated decisions, so that these can be challenged.
“You have to take this ride. Otherwise, you may be blocked.”
The Commission’s draft Directive aims at increasing transparency in the operation of labour platforms. It is rather common that on-location labour platforms in the taxi and food delivery sectors do not provide drivers and delivery workers with sufficient information on how they monitor their workers and how the platform makes the decisions that directly affect their work. Who is involved in making these decisions? Adopting the new Directive on working conditions in platform work in its current version would make platforms more open to workers, civil society, and regulators.
They downgrade you as a driver. While I don’t know what the effects would be at the end of the day. … you have to take this ride. Otherwise, you may be blocked. (Food delivery worker, Tampere)
The application of algorithmic management differs between platform companies, but most require some management to ensure monitoring and follow-up of tasks. Many drivers and delivery workers interviewed in our research expressed difficulties in understanding how the platforms work. They know that in many cases a low rating, a damaged car, or a customer’s complaint may lead to automatic deactivation. Rejecting automatically assigned tasks may lead to losing priority points (affecting working time autonomy, or access to some of the platform’s features, thereby limiting work methods autonomy) or to temporary deactivation. Sometimes losing access to work comes without a proper explanation, without disclosure of the significant factors that influenced the automatic decision, and without any possibility for the worker to present their own position or challenge the decision. The lack of sufficient information on how the work is managed by the platform leaves a ride-hailing driver or a food delivery worker with very little work autonomy, i.e. the ability to make decisions on how and when to work.
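The mechanisms workers describe above can be sketched as follows. Again, this is not any platform’s real code: the class, point values, and rating threshold are assumptions made for illustration. What the sketch shows is how task rejection and customer ratings feed an opaque decision pipeline that ends in deactivation with no reasons given and no channel of appeal.

```python
# Illustrative sketch (not any platform's actual code) of the automated
# decisions described above. Names and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class Worker:
    priority_points: int = 100
    rating: float = 5.0
    active: bool = True

def on_task_rejected(w: Worker) -> None:
    # Losing points limits access to preferred shifts and features.
    w.priority_points = max(w.priority_points - 5, 0)

def review_account(w: Worker, min_rating: float = 4.5) -> None:
    # Automatic deactivation: the worker receives no explanation of the
    # factors involved and has no channel to contest the decision.
    if w.rating < min_rating:
        w.active = False

w = Worker(rating=4.2)
review_account(w)
print(w.active)  # prints False
```

Even in this toy form, the asymmetry is visible: every rule is legible to the platform and invisible to the worker it governs.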
Furthermore, some workers seem to be confused about their status and role in the business. This includes treating the platform company’s tech support team as their managers. Replacing human management with algorithmic management may lead to a diffusion of responsibility for workers’ protections. Treating platform workers as independent contractors further distances the platform from any responsibility for the wellbeing of the worker, including adequate insurance cover for work-related accidents.
“A journey of a thousand miles begins with a single step”
A few decades ago, public discussions were mostly concerned with robots replacing humans on assembly lines. Now automation has moved into management itself, through software- and data-based systems, and this has become especially visible in platform work. A broad range of issues arises from this rapidly growing sector of technology: from innovation and business transformation to people’s rights to social security and an adequate standard of living.
The European Commission’s (EC) draft Directive on improving the working conditions of platform work was tabled on 9 December 2021. It builds on previous initiatives and on growing recognition of the impact of AI and algorithmic management on people. The key EU regulation regarding algorithmic management, the General Data Protection Regulation (GDPR), was adopted in 2016. The EC communication on Artificial Intelligence for Europe touches upon the impact of AI on employment. In the White Paper on AI (2020), the Commission presented its policy outlines for AI, declaring the aims of achieving an ‘ecosystem of excellence’ (promoting the development of AI) and an ‘ecosystem of trust’ (ensuring compliance with EU rules and addressing the risks associated with AI). The Digital Services Act and Digital Markets Act, proposed in 2020, also cover several issues specific to algorithmic management, including algorithmic decision-making and human review, transparency, and explainability.
The provisions on algorithmic management build on previous work on data protection. The GDPR treats humans as data subjects and focuses on how data is collected and used. The Digital Markets and Digital Services Acts have addressed primarily the potential of innovation in the sphere of AI and its impact on the future of European businesses and the economy.
The EC draft proposal highlights the necessity of clear communication between a) companies who use algorithmic management and b) their workers who are affected by the AI, irrespective of employment status. Platform workers must be made aware that AI is involved, and their representatives (unions and other public organisations) must be informed of how AI and algorithmic management are affecting workers in the workplace.
Currently, these initiatives set the framework for regulation and European-wide action. A key question in this process is to what extent the focus will shift towards workers and their rights in the platform economy. The draft proposal is likely to face substantial lobbying, and yet it already represents a compromise. The importance of the proposed draft directive will depend 1) on how governments take it further, 2) on how the technical side of AI will define the level of transparency of AI, and 3) on how the standardisation of data and reporting is addressed. There will be substantial scope not only for lobbying, but for the utilisation and creation of loopholes and technological escape routes within the fast-developing context of AI and algorithmic management.
The Commission proposal will be subject to full scrutiny. Due to the commercial importance of AI, algorithms, data, and trade secrets, there is a risk that political compromise will water down the potential of the proposal to lead to actual improvements in working conditions.
Power, responsibility, and accountability
So far, most reactions to the draft directive have focused on the employment status of platform workers, with less attention to algorithmic management. While algorithmic management remains part of the means for defining the employment relationship, the proposed directive would in this respect apply also to contractors.
In Article 6 of the proposed directive on improving working conditions in the platform economy, the EC requires labour platforms to provide workers (irrespective of employment status) with information about automated monitoring and decision-making systems which ‘significantly affect’ working conditions. Article 7 of the proposed directive mandates human monitoring of the impact of automated systems on working conditions.
The GDPR covers a set of rights of data subjects, and the draft directive draws largely on it: Articles 6 and 7 of the proposed directive are based on the rights of data subjects set out in GDPR Article 22 on automated individual decision-making, including profiling. Further focus on shaping the rules of the digital economy has been sought at the European level through the AI Act, proposed in 2021. The proposal for the AI Act was presented as part of the ”coordinated European approach on the human and ethical implications of AI”, yet it does not mention the individuals who are affected by AI. The AI Act has been praised for recognising that AI and algorithmic decision-making are increasingly integrated into the functioning of social safety nets and affect social and economic rights. At the same time, however, it represents a missed opportunity to address the plight of workers affected by the use of AI.
The proposed Directive on improving the working conditions of platform work recognises that algorithmic management, and the broader use of workers’ data, has a significant impact on workers’ rights and wellbeing. The Directive offers a set of measures to be applied by platforms and involved parties to safeguard the rights of platform workers. These measures recognise: 1) the complexity of algorithmic management, 2) the lack of transparency of digital labour platforms, and 3) the need for human intervention in decisions that have a significant impact on the working conditions and wellbeing of platform workers.
The proposed Directive offers scope for increasing transparency, strengthening trust, and balancing the power between digital labour platforms and workers. The question is how much the legislative process will water down what is currently proposed, leave out key areas, or create mechanisms to bypass requirements.
It is essential that further legal initiatives offer rules for the proper standardisation of technical reporting on AI, and clearly define the rights of government agencies and workers’ organisations to assess the AI tools used by various business or public entities in their management of workers. The implementation of the proposed Directive will also depend on the outcomes of other legislative processes under multilevel governance. This will offer further scope for lobbying and amending technical documents.
It is justified that addressing the occupational impacts of AI and algorithmic management begins with platform work, where AI has the most pervasive influence. The proposal for the directive on working conditions of platform workers has the potential to be a step forward in regulating working conditions for all workers in the age of AI. The question is how large or small that step will be.