
Data will decide whether AI works in your firm
Nigel Williams, product director and interim head at LexisNexis Enterprise Solutions, explores how structured data and operational discipline optimise AI usage
AI adoption is unfolding in a familiar pattern across law firms. First comes the excitement. AI demonstrations feel like magic. Instant summaries. Drafted clauses in seconds. Insights surfaced from thousands of documents in moments. Boards lean forward. Innovation committees accelerate. Budgets shift.
Then comes the quieter, more profound realisation. AI is powerful, but entirely dependent on the quality of the data it is fed. AI does not solve data problems or correct structural disorder. It amplifies them.
If client entities are duplicated, AI will draw on duplicated records. If matter naming conventions are inconsistent, AI will misinterpret them. If metadata is incomplete, AI will generate incomplete insight.
The danger in all of this? The AI output will still look confident.
The myth of the quick fix
AI is often positioned as a shortcut — a way to leapfrog process discipline, a tool that can sit above existing systems and make sense of whatever lies beneath. However, governance and risk teams instinctively understand that no technology can reliably interpret chaos.
Furthermore, feeding AI poor-quality data risks damaging results — there have already been numerous public examples of lawyers relying on AI-generated case law that did not exist.
Firms that treat AI as an overlay rather than a foundation risk building something impressive-looking but structurally unstable.
Structured data means clarity
In the legal context, structured data means that clients are defined consistently across the firm, matter records contain agreed and standardised fields, risk indicators are captured in the same way every time, documents are linked to matters in a coherent structure, and so on.
In essence, structured data provides clarity. This is important when using AI. For example, if a firm wants AI to identify trends across matters, but the matters are recorded inconsistently, the analysis will be flawed. If it wants AI to support risk review, but risk data is captured sporadically, the conclusions will be unreliable.
This is unlike human-led manual processes, where lawyers instinctively correct or compensate for inaccuracies and inconsistencies in the data. They remember which spreadsheet holds which information. They know where to look for that one critical document. They know which colleague is likely to have the answer. AI makes none of those leaps.
With structured data, AI scales
There is another dimension that firms must recognise. If a firm has strong, structured data, AI scales insight. That is powerful. But if a firm has inconsistent, duplicated, or incomplete data, AI scales errors. That is perilous.
Firms that invest first in structured matter management and data consolidation position themselves differently. They are not layering intelligence onto disorder: they are enabling intelligence to operate within a coherent architecture.
AI thrives on consistency and coherence
The structured data conversation often begins with workflow, efficiency and automation through matter and case management systems. However, the deeper value of structured matter management systems (MMS), such as Lexis Visualfiles and Lexis Everyfile, lies in how they organise information.
These MMSs deliver data consistency by default. If someone changes an email address or contact number, the change filters through the rest of the system.
This may sound operational, but it is in fact strategic — one update applied everywhere, ensuring one client entity and one version of the truth.
This coherence is needed to enable AI to operate safely. It ensures that when analysis is performed across matters, the underlying data points are comparable and complete. Similarly, when risk questionnaires or compliance fields are embedded into the matter lifecycle, the resulting dataset is structured by design rather than assembled retrospectively.
AI and the confidence question
Lawyers will only rely on AI if they trust it, and that trust is built through reliability. If early AI outputs are inconsistent or questionable, adoption stalls, scepticism hardens, and the narrative shifts from opportunity to threat. Lawyers are already wary of tools that appear to take control away from them. If AI produces unreliable answers, that suspicion intensifies.
The path to AI confidence is not persuading lawyers that the technology is clever. Rather, it is demonstrating that the data foundation beneath it is sound. When a lawyer knows that the system captures matter information consistently, that client data is accurate and that risk flags are structured, the outputs become easier to trust.
At the board level, the AI discussion often revolves around competitiveness, market positioning and innovation signalling.
The more fundamental strategic question the board should be asking is this: is our data mature enough to justify AI at scale?
Designing for AI, not chasing it
The firms that benefit most from AI over the next three years are unlikely to be those that moved fastest. They will be those that built foundations deliberately — through structured client entities, consistent matter metadata, embedded risk capture, governed retention and integrated systems.
Such initiatives rarely make headlines, but they certainly create the conditions in which AI can genuinely enhance legal delivery for commercial benefit.
AI is not a shortcut around operational discipline, but rather a reward for it.
From here on in, the competitive edge will not come from simply saying ‘we use AI’. It will come from being able to say, with credibility, that the firm’s data is ready for it.

