
Artificial Intelligence in employment – managing the risks
The use of generative AI tools such as ChatGPT, Copilot and Gemini in the workplace has gathered pace at breathtaking speed over the past few years. There are many benefits to using AI at work, and the Government's approach to date has generally been to favour innovation, regulating the use of AI through non-statutory guidance issued by existing regulatory bodies such as the Information Commissioner's Office, the Equality and Human Rights Commission and the Health and Safety Executive.
While there is no specific legislation regulating AI in the workplace, a number of existing laws protect workers who may be directly or indirectly affected by its use, particularly in relation to discrimination, unfair dismissal, health and safety, and data protection.
This article examines some of the potential pitfalls and considerations for businesses when using AI in recruitment and workplace management.
At a basic level, AI refers to computer systems able to perform tasks that normally require human intelligence. It encompasses a wide variety of technologies, systems, models and tools, touching every part of our lives. The forms of AI most commonly used in the employment space are algorithmic management tools and machine learning, which cover three main areas:
- Recruitment;
- HR and task management functions; and
- Employee performance.
These tools range from basic CV screening to the use of machine learning to generate content and predict performance outcomes.
Potential benefits of AI in the workplace include faster recruitment and better assessment of the quality of shortlisted candidates. This can deliver savings in both staff time and cost, improve the accuracy and impartiality of decision making, and provide workforce insights that allow businesses to adapt and innovate in ways that were not previously possible.
Despite these benefits, particularly in driving efficiencies and improving decision making, there are significant risks for employers if the use of AI is not risk assessed, implemented responsibly, tested and monitored, with safeguards in place to prevent its use falling foul of employment law protections.
As is the case with many emerging and groundbreaking technological advances, the same tools can be used, intentionally or inadvertently, for good or ill. An algorithm which monitors and identifies poor performers could be used by a good manager to provide additional support and training. The same tool could be used unlawfully by another manager to ensure the worst performers are selected for redundancy without consideration of the other selection criteria. The tool is the same; the managers aren’t.
Reducing the favouritism and unconscious bias that can intrude into management decisions around promotion, remuneration, holiday or shift allocation is arguably only a positive. In practice, however, AI can be flawed in various ways and produce the opposite of the intended effects: instead of eliminating biases, it can amplify them; it can introduce inaccuracies rather than increase accuracy; it risks dehumanising the management process and alienating staff; and over-surveillance and control of the workforce through AI can encourage managers to overstep ethical and professional boundaries.
A significant risk is the inability to know exactly which factors have been taken into account, and with what weight, when the decision-making process has been outsourced to an AI tool. There is a danger that when employers rely on AI to make decisions about recruitment, human resources, and employee management, they give effect to prejudices and biases hidden in the software which could lead to unfair and discriminatory outcomes.
AI tools are only as good as the data and information fed into them. Depending on the data and algorithms used, AI tools may replicate and perpetuate past discriminatory practices that favoured one group of candidates or staff over another. This was seen in 2018, when Reuters reported that Amazon had scrapped its algorithmic recruitment tool after finding that it discriminated against women. The AI had been trained on data submitted by applicants over a 10-year period, much of which came from men. As a result, the data was not balanced, and the AI had effectively taught itself that male candidates were preferable.
In addition to the risk of imbalanced data input, the ICO has highlighted other areas of concern in its Guidance on AI and data protection which could lead to discrimination when using AI:
- Prejudices or bias in the way variables are measured, labelled or aggregated;
- Biased cultural assumptions of developers;
- Inappropriately defined objectives (e.g. where the ‘best candidate’ for a job embeds assumptions about gender, race or other characteristics); or
- The way the model is deployed (e.g. via a user interface which doesn’t meet accessibility requirements).
What can employers do to ensure AI is a force for good in the workplace and not a liability?
- Employers should understand clearly the problem they are trying to solve by the use of AI, identify and assess the risks that it may pose if adopted, and consider alternative options where appropriate.
- Ensure that safeguards and additional measures are in place to mitigate any of the risks identified.
- Identify automated decision making in the workplace and ensure compliance with the UK GDPR and the Data Protection Act 2018.
- Adopt the “good data in, good data out” approach to ensure the quality of the input data is assessed for any potential risks of bias or discrimination.
- A human manager should always have final responsibility for any workplace decisions, with algorithms used only as a tool to aid management.
- Line managers should receive relevant training to ensure they understand the use of AI as an aid to decision making and not the decision maker.
- AI tools should never be used to mask intentional unfair treatment and discrimination by managers.
- When introducing new technology, communicate and consult with staff to ensure it is well implemented and improves workplace outcomes.
- Share the benefits of using AI in the workplace with staff.
- Ensure that appropriate policies are in place and that employees are informed when algorithms are being used and how they can be challenged, particularly in recruitment, task allocation and performance management.
- Staff should be trained on how the employer’s data privacy policies apply to the use of AI in the workplace and be aware of their data protection obligations, in particular the importance of not entering any personal data into AI tools, as any information entered could be made public or used by others.
As more businesses incorporate AI into their daily work, we are likely to see a rise in claims, not only in employment-related disputes but in litigation more generally; for example, in relation to data protection, intellectual property and negligent advice produced by AI. This will undoubtedly increase as the use of AI becomes more widespread, and in ways we may not be able to predict at this time.
Some AI related conflicts may require escalation to the Employment Tribunal or Courts. Given that AI is an emerging field, there is little legal precedent on how the Courts and Tribunals will rule on many AI-related legal issues.
Summary
It is important that employers have appropriate policies in place so that employees understand how they should use AI in carrying out their duties and the importance of the human factor in the final decision-making process. Moreover, employees need to be made aware that personal data should not be entered into AI tools, as several AI applications retain the data from queries and share inputs with third parties, which could result in unintended confidentiality and data breaches.
The Taylor Walton Employment Team can assist you in updating your contracts of employment and policies to ensure that staff using AI are aware of their contractual obligations and that your business complies with the legal requirements. If you require further advice on the use of AI in the workplace or any other employment law matter, please contact Taylor Walton’s employment team here.
Disclaimer: General Information Provided Only
Please note that the contents of this article are intended solely for general information purposes and should not be considered as legal advice. We cannot be held responsible for any loss resulting from actions or inactions taken based on this article.