
Five risks of using ChatGPT

ChatGPT can provide quick wins, but as well as producing bland writing, its use carries risks, says this month’s Quiss columnist David Baskerville, a consultant advising law firms and legal IT tech providers

David Baskerville | Consultant advising law firms and legal IT tech providers

Over the last few months, it has been impossible to avoid the explosion of artificial intelligence (AI) news, with a plethora of new developments and products being discussed on mainstream news, in the legal press, on LinkedIn and at networking events.

However, we must remember that despite these giant leaps forward, the tools we are seeing are currently advanced “decision tree”, prediction and natural language processing models, which can repeat the failings and prejudices of their developers, or of the dataset used to “train” them.

These days, even traditional transcription tools are sold as AI solutions. Before the AI boom, we would simply have called them “transcription tools”.

So, I decided to try an experiment. Without telling my colleagues, I asked ChatGPT to write me a blog on system procurement and sent it around for review. The feedback was lukewarm: while the article made some good points, it had no personality or “bite”; it was bland and uninteresting. Yes, the article was publishable, but it very much left an impression of “so what” – the points made were not relatable and were no better than the results of a basic Google search.

There are also several risks of using ChatGPT which users need to be aware of.

1. Information governance

Any information uploaded to ChatGPT becomes available to the ChatGPT engineers to refine and improve their model, or the data sources it uses.

2. Unauthorised use by staff

While firm policy may urge caution over (or ban) the use of such tools, the excitement and ease of access will make them a very attractive “quick win” for less experienced lawyers, who will find them quicker and easier to use than the firm’s other knowledge libraries.

This risk of systems being adopted as “shadow IT” does not stop with junior lawyers. If your experienced fee earners were under pressure to deliver to tight deadlines, would they not also be attracted to a tool which provides a shortcut? And it doesn’t stop there: what about marketing? Need a marketing strategy for a regional law firm specialising in private client and business law? ChatGPT can rattle one off in a few minutes.

Think back to when Google and other early search tools were introduced. Some firms actively sought to stop fee earners from using them due to the inaccuracy of results. With ChatGPT, the risk is higher because staff can upload material for it to “learn” from. Unless the account is “opted out”, this content is then available for ChatGPT to draw on when it answers questions from other users of the system.

It is therefore essential that firms create and adopt a policy which staff understand. Equally, risk and compliance reviews must actively start to identify content that has been generated by such tools.

3. Accuracy

ChatGPT makes mistakes; surprisingly, it often gets basic maths questions wrong and struggles with logical reasoning. Its knowledge is only as good as its training dataset, and it is out of date, having been trained on data up to a certain point in time.

4. Bias

ChatGPT has been trained on a wide-ranging collection of information (it is not publicly known exactly which sources were used) and has reportedly “picked up” biases from the material it has consumed. Some reports suggest that because a higher proportion of higher-education academics, researchers and newspaper reporters tend to be “left-leaning”, the material consumed by ChatGPT will contain more “left-leaning” content than middle-ground or “right-leaning” articles.

5. Personality and reputation

If you are a regular writer of articles, you will, over time, develop your own distinctive style. The same is true of business communications. Clients will therefore come to know your “style” and will quickly pick up where you have used a tool to formulate your work.

I’m hugely excited by the advances in AI tools and think some will truly change the legal profession. These tools, however, need to be “managed”, nurtured and challenged to ensure that client data and intellectual property rights (IPR) are not made public or available to competitors. I think most firms will make use of their own installations of such tools rather than using “public” platforms.
