The legal pitfalls of using Artificial Intelligence at the workplace

Artificial intelligence (AI) is being used by businesses more and more to expedite decision-making and other HR procedures, including hiring, task allocation, management choices, and terminations.

In the workplace, the use of AI typically involves the following types of decisions:

Profiling: the use of algorithms to classify data and discover relationships between different data sets. This can be used to generate predictions about specific people, such as by gathering information about employees and drawing inferences to determine whether they are failing to meet targets, which could result in capability hearings or dismissals.

Automated decision-making (ADM): artificial intelligence is utilised to make decisions automatically, without human involvement. For instance, a job applicant might be required to complete a personality test as part of the application process and be automatically disqualified based on the results. While AI can streamline the recruitment process, its use must still comply with employment law.

Machine learning: the process through which computers are trained, using algorithms, to mimic intelligent human behaviour. This is another example of AI use in the employment and recruitment process: image recognition, for instance, can be used to evaluate candidates’ performance in video interviews.

Risks of artificial intelligence in the workplace:

Discrimination risk 

The risk of “algorithmic discrimination” arises when AI is used in the workplace. Since AI algorithms are developed by humans, they may reflect the prejudices of their developers. Employers and HR professionals must take care that their use of AI does not violate the Equality Act 2010, which prohibits discrimination based on protected characteristics. It is therefore important to review the AI software’s “decisions” and the underlying data behind them, so that any apparent bias is detected and any errors (such as ignoring important criteria) are fixed.

An employment tribunal claim was filed against Uber by one of its former UK drivers on the grounds that the company’s facial recognition software performed less accurately on people of colour, and that an automated decision led to the claimant’s account being mistakenly deactivated. This case illustrates the risk posed by biases embedded in AI software.

Another instance of AI making headlines involves Amazon, which was forced to abandon its AI recruitment tool after the system favoured male applicants over female ones. This was because the software’s hiring recommendations were based on data about previously hired, successful candidates, the majority of whom were male owing to the male-dominated nature of the technology sector.

In such a case, a female candidate could bring a claim for indirect discrimination on the basis that the AI decision put her at a significant disadvantage because of her gender. To defeat such a claim, the employer would have to demonstrate that using the technology was a proportionate means of achieving a legitimate aim. While streamlining the hiring process with technology may be a legitimate aim, it is difficult to see how such a tool, which can have such profound effects on prospective employees, could be a proportionate means of achieving that aim without any human oversight.

The use of AI raises a further discrimination problem: accountability. It is clear that a manager responsible for recruitment is subject to anti-discrimination obligations, but who is held accountable when AI software introduces discrimination into the employer’s decision-making process? This question still has no obvious answer.

Data protection

Although there is presently no regulation prohibiting or restricting the use of AI at work, data protection law offers some protection for people who may be affected by AI-driven decisions.

Article 22 of the UK GDPR contains rules intended to protect people from automated decision-making and profiling by restricting their use and imposing safeguards on businesses that wish to employ them. However, as part of its proposed changes to the UK data protection framework, the government hinted in September of last year that it would remove Article 22. As a result, these protections may become less robust.

Possible reform

In an effort to address the hazards of AI use, particularly in areas that have a significant influence on people’s lives, the EU released a legislative proposal on the technology last year. It proposed that AI used in hiring decisions should be classified as “high risk” and, as a result, be subject to particular safeguards. If the legislation is adopted in the EU, it is unknown whether similar legislation would be implemented in the UK.

In a paper released in May 2021, the TUC and the AI Consultancy made a number of reform recommendations, including reversing the burden of proof in discrimination claims that seek to challenge AI or ADM systems in the workplace, so that the employer would have to disprove discrimination (rather than the employee having to prove it). The paper also called for statutory guidance on the precautions to take when AI is used, in order to prevent discrimination.

Without question, AI will continue to drive important innovation in the workplace. The UK’s official position on regulating AI is expected to be announced soon by the Office for Artificial Intelligence.

It will be interesting to see what proposals emerge that apply to the workplace. In the interim, employers should continue to ensure transparency, remain vigilant to reduce the risk of claims, and maintain some level of human involvement in their processes.