Does your business need an AI usage policy?

Tools like ChatGPT are infiltrating almost every aspect of our lives. Is it appropriate to allow unrestricted usage of these tools within your organisation?

Petru Tinca

When the Guardian reported back at the start of 2023 that ChatGPT had surpassed 100 million active users, it was clear that this niche generative AI tool was breaking through to the mainstream.

Throughout 2023, we have been inundated with stories about ChatGPT across a range of mainstream media outlets. It’s becoming clear that generative AI tools like ChatGPT now have their feet firmly under the proverbial coffee table.

What this means for HR professionals is that many of your employees are already using ChatGPT in an experimental way to increase their productivity, output, and creativity in their daily work.

Adopting generative AI in this unchecked fashion exposes your business to the risks of ChatGPT being used without boundaries. However, banning ChatGPT now would be like closing the stable door after the horse has bolted: it would simply push usage underground, with your staff using it on the quiet.

Where ChatGPT goes, AI usage policies must follow…

The answer, then, is for HR to develop an AI usage policy that guides your employees to make responsible use of ChatGPT and any other generative AI tool. The beauty of this approach is that you give your employees clear boundaries and mitigate some of the risks, while at the same time sending a timely message that you embrace generative AI.

But where should you start? Of course, you could find an AI usage policy template on the web, or you could even ask ChatGPT to create one! Either way, you'd need to customise it to suit your own business, and there are several themes that a good AI usage policy should cover.

Admittedly, policy-making is not for everybody, and many HR professionals believe that policies limit the flexibility of their workplace. But for those looking to create one, here are a few things to bear in mind.

It’s a tool and not a replacement for human intelligence

We should not allow the intoxicating effects of generative AI to obscure its current limitations. For example, the Centre for Science in the Public Interest has highlighted ChatGPT’s tendency to ‘hallucinate’, where it essentially creates fictional output that can seem real: ‘It sometimes puts words, names and ideas together that don’t actually belong together, such as discussing the record for crossing the English Channel on foot, or bibliographies of names and books that don’t exist or long lists of bogus legal citations.’

It could be embarrassing if such hallucinations ended up in important business presentations or corrupted a decision-making process. Racial and gender biases have also been noted within ChatGPT, along with egregious factual errors.

Whilst ChatGPT can increase productivity, using it freely and unchecked in its current form could undermine the integrity of your business processes.

Therefore, a good AI usage policy should acknowledge these shortcomings and insist that all AI-generated content be fact-checked, sanity-checked, offence-checked and GDPR-checked.

Generative AI is a tool and not a replacement for an employee and their experience, critical thinking, and intellectual scrutiny; it still needs a human firmly at the reins.

How to get the best out of generative AI

Once appropriate safeguards are established, it makes sense to have a proactive AI usage policy that ensures your organisation is maximising its use of generative AI.

To this end, we’d recommend including in your AI usage policy some links on prompt engineering: the craft of writing the text commands and inputs that a user gives ChatGPT to generate content. Prompt engineering is rapidly becoming an art form; the better your prompts, the more specific, unique, and aesthetically pleasing the outcome.

These resources from pluralinsights.com are a good starting point and will give your staff expert coaching on prompt engineering.

  1. AI & generative AI explained.
  2. Getting started on prompt engineering with generative AI.
  3. How to use ChatGPT’s new ‘code interpreter’. (This is a tool that lets general users analyse data, create charts, work with maths and so on.)

We haven’t covered everything that an HR professional might need to include in an AI usage policy, and we’ve come at this from a general user perspective rather than an advanced one. Some aspects of the policy (such as using AI to generate code or to scrutinise live or custom datasets) would need HR professionals to work in partnership with IT leaders anyway.

Of course, there are plenty of AI usage policy templates out there, some more restrictive, others more liberal and proactive. It’s a case of choosing the one that best suits your organisation and tailoring it so that it’s a snug fit.