The self-stated goal of ChatGPT is ‘to provide a more natural and efficient way of communicating with computers and other machines, allowing for more seamless interactions between humans and technology in various applications, including workplace communication.’ When faced with the demands of managing a twenty-first century workforce, this sounds like an enticing prospect for employers. Below, we look at what ChatGPT has to say about its own potential uses in supporting employers, and at what some of the pitfalls might be.
Those of us who have tried ChatGPT quickly realise its potential as a knowledgeable and (incredibly) quick-thinking assistant, able to converse and generate natural-sounding responses at lightning speed. How better to find out more about how ChatGPT can be used in the workplace than by going straight to the horse’s mouth?
We asked ChatGPT how it can help employers in managing their staff, and the list it generated is impressive, covering scheduling, time-tracking, communication, task management, meeting coordination, employee engagement, performance tracking, compliance and employee retention.
More specifically, ChatGPT explained how it could assist with a core function of an HR team: a performance management process.
‘ChatGPT can assist an employer with a performance management process by:
But its concluding remarks are telling: in addition to complying with local laws and maintaining the employee’s privacy, ‘it’s always recommended to have human oversight and review of the generated texts’. And in employee relations, this human filter will be critical.
James Davies, Employment Partner at Lewis Silkin and author of the recently published Eight Drivers of Change report on emerging trends in the world of work, believes the ramifications of this technology and the need for human oversight will be significant: ‘Generative AI like ChatGPT promises to transform the “knowledge job”, including many managerial and HR ones. The scary thing is that we are seeing only the beginnings of its capability and, in the years ahead, it will become much more powerful. I’m sure the future will see AI platforms like ChatGPT, with access to huge amounts of data, and humans working together. One thing which is for sure is the skills needed from people will evolve quickly.’
Of course, progress comes at a price. Luckily, ChatGPT was nothing but honest when we asked it about the risks of using it in the workplace.
‘There are several risks to using ChatGPT in the workplace, including:
Reflecting on this, Chief Technology Officer at Lewis Silkin, Alex Bazin, shared his concerns about the use of generative AI in this context: ‘Inaccuracy, formally known as “model hallucination”, is a real concern as most current tools don’t provide any indicators of their likely accuracy – they are “confidently wrong”.’
In contrast, explainable AI would provide clear and understandable explanations of its decision-making processes and predictions, thereby enhancing the transparency of AI. Alex explained that ‘explainable AI would go a long way to addressing inaccuracy, bias and transparency, but is in its early stages of development.’
But overall, Alex Bazin was positive: ‘The fine-tuning available in the commercial versions of GPT and its competitors will allow organisations to have more confidence in the results of AI tools. Combined with human supervision and review, generative AI is likely to be a valuable tool in the modern workplace, accelerating the creation of ideas and content.’
Indeed, recent news that BuzzFeed intends to use AI to inform its brainstorming shows that this is quickly becoming a reality.
James Davies agrees that ChatGPT is right about its own limitations. ‘Human oversight, at least for now, will remain important. As an employment lawyer, one fascinating question which many of us are already thinking about is liability for AI-made employment decisions. Much has been written about these challenges, including in our previous article about algorithms and employment law. AI can perpetuate discrimination and in the context of generative AI this may be overt and relatively easy to correct’, such as gender bias in the language generated, ‘but it may also be implicit and difficult to detect’.
At the recent Davos World Economic Forum, Erik Brynjolfsson, professor and senior fellow at the Stanford Institute for Human-Centered AI, echoed these concerns, strongly emphasising the current need for human supervision of AI: ‘Right now it’d be downright dangerous to use AI tools without a human in the loop.’
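For readers wondering what ‘a human in the loop’ looks like in concrete terms, the short Python sketch below illustrates the principle. The generate_draft function is a hypothetical placeholder for a call to a generative AI service, not any real API; the point is simply that no AI-generated text is used until a person has explicitly reviewed and approved it.

```python
from typing import Optional

def generate_draft(prompt: str) -> str:
    """Hypothetical placeholder for a call to a generative AI service."""
    return f"[AI-generated draft responding to: {prompt}]"

def draft_with_human_review(prompt: str) -> Optional[str]:
    """Generate a draft, then require explicit human approval before use."""
    draft = generate_draft(prompt)
    print("--- AI draft for review ---")
    print(draft)
    decision = input("Approve this draft for use? (y/n): ").strip().lower()
    if decision == "y":
        return draft  # only approved text leaves the review gate
    return None       # rejected drafts are discarded, not sent

if __name__ == "__main__":
    approved = draft_with_human_review("Summarise this employee's quarterly objectives")
    if approved is None:
        print("Draft rejected: a human will write or revise the text instead.")
```

In a real deployment the approval step would more likely be a review stage in an HR workflow tool than a console prompt, but the principle is the same: the model proposes, a human approves.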
As this technology moves at pace, it’s essential that organisations ensure that their use of AI is consistent with their values. We asked ChatGPT whether this technology could pose this kind of threat, and its response is practical:
‘ChatGPT, as a language model, is a tool that can be used to automate certain tasks and assist with information gathering and dissemination. It is not inherently capable of threatening company values. However, as with any technology, ChatGPT can be misused or implemented in ways that may negatively impact company values.
There are some potential risks to consider when using ChatGPT in a workplace, such as language bias, misinformation, and security and privacy concerns. It is important for companies to establish clear guidelines for the use of ChatGPT and to monitor its usage to ensure that it is not being used in a way that undermines the company’s values or ethical principles.
Additionally, it is important to ensure that the data used to train the model does not contain any information that would be in conflict with the company’s values. Companies should also consider the ethics of using AI, such as potential negative impacts on human employment, privacy, and fairness.
Overall, ChatGPT is a tool that can be used to improve efficiency and productivity, but it should be used with caution and with consideration of its potential risks and the company’s values.’
On this, James Davies agreed that ‘AI tools are vulnerable to the embedded bias in the data inputted into training and verification models. Taking steps to verify that AI tools do not discriminate is likely to become an increasing issue not least as maintaining corporate values becomes an ever more important issue.’
The mind-boggling technology behind ChatGPT is usefully explained in our post here, in particular how the programme is trained on data points and gradually optimised by a reinforcement learning algorithm. Many organisations will be thinking about use cases for generative AI in their own business. That may be through the use of industry models packaged with existing tools, or by fine-tuning models by training them with their own data. GPT-4 is anticipated to be released later this year and may be even quicker, more accurate and better at inferring human intentions.
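For organisations weighing the fine-tuning route, the preparatory work typically involves curating example prompts and desired responses from their own material. The sketch below shows one plausible shape for such a dataset; the field names and JSON Lines format are illustrative assumptions drawn from common practice, not any specific vendor’s schema.

```python
import json

# Illustrative fine-tuning dataset: pairs of prompts and desired completions
# drawn from an organisation's own material. Field names are assumptions;
# each provider defines its own required schema.
training_examples = [
    {
        "prompt": "Draft a polite reminder to complete the annual appraisal form.",
        "completion": "Dear colleague, a gentle reminder that your annual appraisal form is due by Friday.",
    },
    {
        "prompt": "Summarise our flexible working policy in two sentences.",
        "completion": "Employees may request changes to their hours or place of work. All requests are considered within the statutory framework.",
    },
]

# Fine-tuning data is commonly stored as JSON Lines: one example per line.
with open("fine_tune_data.jsonl", "w", encoding="utf-8") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")
```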
Alongside these technical developments, EU regulation of AI is under way in the form of the proposed AI Act. This looks to ensure that AI systems are safe and robust before they enter the EU market. Within this framework, certain systems are defined as ‘high risk’, notably including the use of AI in employment contexts such as recruitment and performance evaluation. Crucially, high-risk systems must (amongst other things) have human oversight.
Whilst the UK’s regulatory plans appear to be lighter-touch (as set out in the recent White Paper on AI), the impact of the AI Act will reach beyond the borders of the EU, particularly for organisations operating across Europe. What we can say with certainty is that this will be a topic to which we return soon.