Commentary: Algorithms are horrible bosses, but they’re taking over the office
Machine-learning algorithms, on the other hand, learn to make decisions on their own after exposure to large amounts of training data. This means they grow more complex as they develop, making their operations opaque even to their own programmers.
When the reasoning behind a decision like whether to sack an employee is not transparent, a morally dubious arrangement is afoot. Was the algorithm’s decision to fire the employee biased, corrupt or arbitrary?
If so, its output would be considered morally illegitimate, if not illegal, in most cases. But how could an employee demonstrate that their dismissal was the result of unlawful motivations?
Algorithmic management exacerbates the power imbalance between employers and employees by shielding abuses of power from redress. And algorithms cut a critical human function from the employment relationship.
It’s what the 18th-century philosopher Jean-Jacques Rousseau called our “natural sense of pity” and “innate repugnance to seeing one’s fellow human suffer”.
Not all human managers are compassionate, but there is no chance an algorithmic manager will be. In our case study of Amazon Flex couriers, we observed the exasperation platform workers feel at the algorithm’s inability to accept human appeals.
Algorithms designed to maximise efficiency are indifferent to childcare emergencies. They have no tolerance for workers moving slowly because they are still learning the job. They do not negotiate to find a solution that helps a worker struggling with illness or disability.