1. Artificial Intelligence (AI)

AI continues to dominate headlines in 2025.  

Global regulation is advancing rapidly, with significant developments such as the UK’s AI Opportunities Action Plan, Australia’s National AI Capability Plan, and the first provisions of the EU AI Act coming into force. In the US, President Trump has rescinded existing Executive Orders and signed a new Executive Order to develop an AI action plan within 180 days. In Europe, on the other hand, regulatory scrutiny is increasing, with fines (e.g. for OpenAI) and investigations (e.g. Meta, Grok AI and, of course, DeepSeek). The UK Information Commissioner’s Office’s (‘ICO’s’) Gen AI outcomes report, the culmination of the series of consultations it conducted throughout the year, emphasises the need for transparency, urging companies to tell people how their information is used.  

Against this backdrop, ensuring your AI governance is fit for purpose is going to be key, whether by identifying your lawful basis to train or deploy a model, putting DPIAs in place, reviewing AI vendor terms, updating your procurement processes or setting boundaries with LLM/Gen AI policies. This is the year to get your house in order!  

2. Data Reform in the UK

Is it third time lucky for data reform in the UK? The Data (Use and Access) Bill is making significant progress through the UK Parliament. This Bill aims to simplify data protection requirements to encourage trade, while maintaining the UK’s adequacy status with the EU.  

So, what does this mean in practice? For now, it is a case of waiting to see what is finally enacted, but it likely means the relaxing of some, for want of a better phrase, “red tape”. We should all keep a close eye on what is happening in the Privacy and Electronic Communications Regulations (‘PECR’) space – UK GDPR-level fines are one thing, but cookie consent and the proposed exemptions will need to be scrutinised to understand their impact. At present, the lines between PECR and the UK GDPR are blurring, and how this will fit with the ICO’s new guidance (see below) remains to be seen.  

3. International Data Transfers

The EU-US Data Privacy Framework (‘DPF’) and its UK Extension and Swiss framework meant we all hoped to see a more settled period for international data transfers. However, recent fines and court decisions, such as the Dutch DPA’s €290 million fine for Uber and the EU General Court’s decision in Bindl v. European Commission, highlight ongoing challenges. Max Schrems and NOYB also appear to have shifted their focus to data transfers to China, filing complaints against companies like TikTok, AliExpress, Temu and SHEIN.  

As for existing EU adequacy decisions, the spotlight is on the UK, with a decision on the UK’s adequacy due by 27 June 2025. In the US, the potential impact of changes to the US Privacy and Civil Liberties Oversight Board (PCLOB) is also worth monitoring, as some believe they may affect the validity of the DPF.   

For global businesses with complex international data flows, it will be imperative to keep track of developments in order to ensure compliance. 

4. Ad tech and online tracking technologies

To answer our question from last year: 2024 was not the year we saw the death of third-party cookies; instead, Google announced an updated approach. Google has also announced a new stance on device fingerprinting.  

The ICO has also been busy publishing guidance related to online tracking. In December, it released a draft of its storage and access technologies (cookies) guidance, in which it took a conservative view on the need to obtain consent for non-essential cookies, although, interestingly, it seemed to take a more relaxed view on enforcement. Just last month, the ICO was particularly busy releasing various papers around pay or consent models (also known as ‘pay or ok’ models). These included its online tracking strategy, with an emphasis on fair and transparent online tracking, and a draft of its long-awaited consent or pay guidance, following its call for views last year.  

But it is not just the ICO focusing on this area. The European Data Protection Board (‘EDPB’) also published its Opinion on ‘pay or ok’ models, but given the difficult issues of consent, fairness, purpose limitation, data minimisation and so on, we doubt we’ve heard the last of this! The EDPB also adopted its own tracking technology guidance at the end of last year. As for the EU Commission’s cookie pledge, very little has happened, but that may well be because it has been eclipsed by the ‘pay or ok’ debate. 

Further, we have seen regulators and courts continuing to opine in this space. The Court of Justice of the European Union (CJEU) ruled on the use of personal data for targeted online advertising in Schrems v Meta (C-446/21). The ruling has prompted reviews of data cleansing and retention policies and processes to ensure compliance with the data minimisation principle, and has highlighted the need to consider stricter controls around how data sets can be used for retargeting.  

The Irish Data Protection Commission (DPC) fined LinkedIn EUR 310 million for processing personal data for behavioural analysis and targeted advertising, a decision which, interestingly, focused on bias as well as the usual issues around transparency and lawful bases. A reprimand and compliance order were also issued. In the long-running IAB Europe TCF litigation, the CJEU handed down its preliminary ruling and the proceedings resumed before the Belgian Market Court for final determination. We await the judgment. 

All in all, tracking technologies and the online advertising ecosystem remain a significant regulatory focus in the UK and EU, and it is crucial to stay on top of what is an ever-changing landscape.  

5. Online Tech Regulation

2025 is a pivotal year for online tech regulation in the UK, with many of the Online Safety Act (‘OSA’) provisions coming into force. Providers must assess the risk of illegal harms and implement safety measures set out in the codes of practice. The EU’s Digital Services Act (‘DSA’) and Digital Markets Act (DMA) continue to drive compliance through regulatory action, with investigations into companies like TikTok and X.  

Navigating the ever-evolving regulatory landscape will be a challenge, but those who champion online safety, bake in compliance by design, increase transparency and implement ethical practices will be better positioned to avoid the reputational and monetary consequences of regulatory action. If you haven’t already done so, assess whether the wide-ranging obligations apply to your organisation; if they do, it would be prudent to audit your policies and processes and put a roadmap in place to demonstrate how you will comply with the new regulatory landscape. 

6. Children's Data

Protecting children’s data remains a global priority. While there has been very little in the way of regulatory enforcement so far in the UK, the ICO and Ofcom are expected to begin actively enforcing the Children’s Code and the OSA, respectively. It appears the time for talking is over.  

The EU’s DSA prohibition on targeted advertising to children is now in force and those in scope must undertake risk assessments of the impact on children’s rights online as well as complying with transparency requirements.  

While the EU and UK have been quieter in this area than expected, that is certainly not the case across the pond. The TikTok litigation for alleged breaches of the Children’s Online Privacy Protection Rule (‘COPPA’) hit the headlines (although given TikTok’s current situation, this may not be top of its priority list!) and we’ve also seen the FTC’s changes to COPPA, setting new requirements for handling children’s data. Meanwhile, US states are conducting investigations, e.g. Texas’s investigation into Character.AI and fourteen other SMPs.  

The use of AI in children’s online services (e.g. in recommender systems, content moderation or age verification) is another area of concern. Given the regulatory focus on both AI and children’s data, it will come as no surprise that regulators are likely to scrutinise how AI is used both to collect and to process children’s data. Being proactive and adaptive to ongoing regulatory changes, while prioritising a child’s best interests, will be key to ensuring compliance.  

7. Cyber Security

With 2024 reportedly being a record year for ransomware payments, cyber security remains a top concern – governments around the world are looking for a way to tackle its scourge and ever-increasing scale. For its part, the UK is currently consulting on introducing a ban on ransomware payments, as well as a mandatory ransomware incident reporting regime. While international consensus is that additional transparency around mandatory incident reporting is a good thing, many governments have already considered and dismissed the idea of an absolute ban on payments.  

Meanwhile, many UK organisations are affected – either directly or indirectly – by the progress of the EU’s cybersecurity strategy. Notably, the deadline for EU Member States to transpose the Network and Information Systems Directive (NIS 2) has now passed. NIS 2 expands cyber security requirements to more critical services, including digital infrastructure such as managed service providers. The Digital Operational Resilience Act (‘DORA’) is also now in effect, affecting financial service institutions and their information communication technology supply chains. Each introduces significant obligations, so if you haven’t done so already, you’ll need to determine whether your organisation is in scope. 

With the UK still not having updated its previous implementation of the NIS 1 Directive, but critical sectors and supply chains being increasingly targeted by threat actors, it will be interesting to see what position the UK government adopts in its long-awaited Cyber Security and Resilience Bill, which is expected to be introduced in 2025. 

8. Workplace Privacy

AI in the workplace is under scrutiny, with the EU AI Act deeming most uses of AI in this context as high risk. In the UK, both the Department for Science, Innovation and Technology and the ICO have issued guidance on AI in recruitment. Clear concerns of the potential scale and the harm arising from biased systems/tools mean this is an area of focus (and concern) for the regulators and employees alike. 

Following the increased adoption of biometric recognition technology, the ICO has issued guidance and taken enforcement action, highlighting the need to conduct risk assessments before investing in and implementing workplace biometric technology.  

The key proposals of the Data (Use and Access) Bill of interest in this context are the changes to Subject Access Rights (SARs) – stopping the clock and “reasonable and proportionate” searches – and to automated decision-making – both clarified and liberalised, and therefore facing intense scrutiny as the Bill progresses. It will be important to follow the progress of the Bill to see where these proposals end up, with a view to reviewing your policies and processes once the final position is known. 

If you are interested in international workplace privacy developments you may like to sign up for the quarterly Workplace Privacy Update we produce in conjunction with Ius Laboris.  

9. Litigation

In the early days of the GDPR, data litigation focused on breaches. Seven years on, many breach-related issues are still being teased out in the courts, both here and in the EU. Many such cases have wider impacts, such as whether exfiltrated payment card information constitutes personal data – an issue impacting on identifiability more generally. An appeal by the ICO of the Upper Tribunal’s decision against it is reportedly on the cards, and is one to watch in 2025.  

But there has also been a rise in litigation beyond data breaches. We’ve already mentioned Bindl v Commission on international data transfers, Schrems v Meta in the context of targeted online advertising, and the long running IAB Europe TCF litigation (see above). In 2024, the CJEU also heard a number of cases making determinations such as: an apology can be sufficient for non-material damage depending on the circumstances (C-507/23); loss of control can constitute non-material damage (C-200/23); purely commercial interests can constitute legitimate interests (C-621/22); and providing clarification on what constitutes personal data (C-479/22). 

With the continued rapid adoption of AI by stakeholders across organisations, the risk of claims related to its misuse is all the greater – especially in higher risk settings such as the workplace. We will likely start to see more of these claims in 2025, leveraging the existing legal framework that has data protection at its heart. So, to mitigate this risk, it will be all the more important to step up governance efforts when it comes to using personal data with AI systems. 

10. Anonymisation

The debate as to whether data can ever truly be anonymised persists, especially given AI’s vast and ever-growing data demands. In the UK, the ICO’s long-awaited guidance on anonymisation and pseudonymisation is still pending, awaiting the Data (Use and Access) Bill’s enactment. We have more clarity on the European position following the recently published EDPB guidelines on pseudonymisation, which set out the legal and technical requirements for pseudonymisation, along with helpful, practical examples. 

This is still a complex and developing area, so while following the debate may provide some insight, a back-to-basics preparation plan might be worth considering: assess your current data sets and identify which are suitable for anonymisation; consider which techniques provide the robustness you need to minimise the risk of re-identification (e.g. would masking, aggregation or another solution work best for you?); and, when you deploy anonymisation techniques, be sure to train your employees and update your policies and procedures.  

Takeaway for employers

As ever 2025 is shaping up to be a busy year with lots of interesting developments. For employers tasked with navigating this complex and ever-evolving landscape, emphasis should be on the following:  

  • Ensure your AI governance is fit-for-purpose – this will mitigate the risks of claims related to its misuse;  
  • Keep abreast of developments in fast-moving areas such as data reform, international data transfers, advertising technology and workplace privacy; 
  • Assess whether online technology regulations apply to your organisation, and if so, consider how you will comply;  
  • When it comes to children’s data, be proactive and adaptive to ongoing regulatory changes while prioritising children’s best interests; and 
  • On anonymisation, assess data sets for anonymisation suitability, choose robust techniques to minimise re-identification risks, and ensure employee training and policy updates. 


Remaining proactive, informed and adaptable will certainly be essential for employers to effectively manage the dynamic world of data, privacy and cyber in 2025. 

Discover more about employee data privacy in our Global HR Law Guide