Law in the age of tech — a risk perspective
Calum MacLean, risk manager (professional and financial risks) at Miller, shares insights from the firm's 2025 Legal Risk & Compliance conference on how law firms can harness the benefits of AI while safeguarding their clients, data, and reputation
The intersection of law and technology was a dominant theme at Miller’s 2025 Legal Risk & Compliance conference, with a particular focus on artificial intelligence (AI) and its implications for legal practice and data protection.
Featuring expert insights from Andrea Ward, data protection officer at Charles Russell Speechlys (CRS), and Dean Armstrong KC, head of technology at KaurMaxwell, the sessions offered practical guidance for law firms as they adapt to this rapidly evolving landscape.
AI in legal practice: opportunity meets risk
The rise of AI in the legal sector is undeniable. Miller’s Risk Benchmarking survey revealed that 33% of firms are already using AI tools, with an additional 14% planning to adopt them within the next year. This aligns with the 46% of conference participants whose firms have implemented AI training programmes. Yet, alongside the excitement about AI’s potential, there is palpable concern about its risks, including accuracy, data protection, and supervision.
A clear message from the conference was that no firm can afford to bury its head in the sand on AI. It is here, and it is going to have a material impact on the profession. Warning against the dangers of furtive use of AI, Ward cited the example of her own firm, where staff are actively encouraged to propose innovative use cases for AI to improve work practices or better address client needs. This approach not only fosters innovation but also mitigates the risks associated with unauthorised AI usage.
Ward highlighted the importance of revisiting firm policies to address AI usage. Whether a firm actively uses AI or not, it is crucial to update acceptable use policies to clearly define permissible AI applications. For firms actively engaging with AI, a standalone AI policy may be necessary, and client engagement terms may need revision to reflect these changes.
Data and regulation: a complex interface
Dean Armstrong’s session underscored the crucial role of data in legal practice, describing it as the fuel that powers everything, including AI tools. However, the global variation in data regulation poses significant challenges for law firms. The USA adopts a free-market approach to data resale, the EU enforces stringent rules under GDPR and the upcoming AI Act, while the UK navigates a more ambiguous middle ground.
Armstrong illustrated these complexities with an example from the automotive industry: a Jaguar Land Rover electric vehicle produced in the Midlands is sold to the US, UAE, EU and British markets, amongst others. The car contains a considerable number of systems capturing and transmitting all sorts of data. If such a system is operating in an EU member state, then the EU AI Act will apply.
Law firms face a similar challenge in identifying and meeting their obligations across jurisdictions. When the audience was asked what algorithms their businesses used, unsurprisingly, no one could answer. The point was made: it is nigh on impossible for firms to work out with confidence exactly what they must comply with.
Armstrong then posed an even trickier question: can an AI system comply with GDPR, particularly given the regulation’s “right to be forgotten”? AI’s reliance on learning and data maximisation directly conflicts with GDPR’s principles of data minimisation and justified retention. Additionally, GDPR restricts certain automated decision-making processes unless there is human oversight, further complicating AI’s integration into legal workflows.
Practical guidance for law firms
Whilst the session posed more questions than answers, it did flag some practical steps law firms can take to navigate this landscape effectively:
- Update policies and training: Revise acceptable use policies to address AI tools explicitly. If AI is actively used, develop a standalone AI policy and ensure staff are trained on its implications.
- Be transparent with clients: Clearly communicate how client data is used and disclose any AI exposure. Transparency builds trust and helps manage client expectations.
- Encourage controlled innovation: Adopt initiatives like CRS's open-door policy to explore new AI use cases while maintaining appropriate safeguards.
- Understand regulatory obligations: Work closely with Data Protection Officers to navigate complex regulatory requirements across jurisdictions. Regular audits and compliance checks can help mitigate risks.
- Focus on risk management: Address key AI risk areas — accuracy, data protection, and supervision — through robust governance frameworks and oversight mechanisms.
As legal tech continues to evolve, law firms must balance innovation with compliance and risk management. By taking proactive measures, firms can harness the benefits of AI while safeguarding their clients, data, and reputation. The journey may be complex, but with the right strategies, the opportunities are immense.