AI In The Workplace

What businesses need to know about AI policies

What is an AI policy?

An AI policy forms part of an organisation’s internal policies and procedures and sets out the rules under which employees can use artificial intelligence (AI) for work purposes. 

Do businesses need an AI policy?

AI has already become a powerful tool for business, providing many benefits including increased efficiency in repetitive and routine tasks, valuable insights and improved business performance, and it is set to become ever more widely used, potentially revolutionising some knowledge-based areas.  The use of AI is not without risk, however, and businesses need to understand, communicate and manage the potential risks.  The AI policy is the core of this risk management, establishing a common set of rules by which all employees should operate when using AI.

A good AI policy is not, in itself, a guarantee that AI risks are mitigated.  It should be used in conjunction with staff training, internal monitoring and auditing of AI use, and senior management governance.  However, without the ground rules established by the AI policy, these additional measures will be difficult to implement.

How do we make an AI policy binding on employees?

The best way of introducing the AI policy is via a “staff handbook”.  The staff handbook contains policies that are not part of an employee’s formal terms and conditions, so they can be added, revised or deleted.  Introducing the AI policy in this way means there is no need to undergo a formal consultation with employees or to obtain their consent to change their terms of employment. 

Importantly, individual employees can still be subject to disciplinary procedures for failing to follow a non-contractual policy.

What should an AI policy include?

Permissive or Prohibitive?

One of the initial decisions an organisation needs to take is whether it will adopt a generally permissive attitude to the use of AI in the workplace, i.e. allowing the use of AI tools and/or allowing AI to be used for any purpose unless specifically excluded by the AI policy. 

Alternatively, organisations may take a prohibitive approach, i.e. not allowing the use of AI tools and/or the use of AI for any purpose unless specifically approved by the AI policy. 

A permissive approach may be appropriate for organisations that want workers to use AI to make efficiency gains and that are relatively low risk in AI terms, e.g. where they do not rely on their own intellectual property as core business value, do not use large amounts of personal data, or do not use AI to make decisions about individuals.  The prohibitive approach would suit more conservative organisations, organisations where one or more of the above factors do apply, or those operating in a highly regulated environment such as financial services.

Depending on this decision, the AI policy may need to list the AI tools that are prohibited or permitted, and also the purposes for which employees may or may not use AI systems.

Confidentiality

One key issue is protecting the business secrets of the organisation, and also those of third parties.  Employees will be inputting data into prompts, and this data will be processed by a third-party AI system provider.  Any confidential or business-sensitive data entered into prompts will therefore be leaving the organisation, which in itself could have significant implications or expose the organisation to a breach of confidentiality agreements with other parties.  For example, a prompt relating to due diligence on a potential business acquisition, or to product development, may allow inferences to be made about the future plans of the company.

Add to this that the AI system is likely to use the prompt data to perform tasks, “learn” and develop future responses to other users of the AI system.  This could lead to a loss of market advantage through losing control of business-sensitive information.  For example, some AI systems are used to develop software code; the submission of existing confidential company code in a prompt to assist with software development could lead to that code being used to help other users.

The AI policy should therefore have a section preventing confidential information from being submitted in an AI prompt, and should provide helpful, thought-provoking examples for employees of the information that should not be used in AI system prompts.

These issues can be mitigated to some extent by the use of opt-outs.  For example, ChatGPT has an opt-out option that prevents prompt data from being used to train the model.  The AI policy should educate users on this and require employees to use the opt-out function for all company-related queries.

Data Protection

An AI policy would typically prohibit the use of personal data in prompts for general foundation models.  Inputting personal data into AI prompts could be a breach of data protection law, for example where employee or customer personal data is processed in a way that is not in line with data protection transparency statements, or where it results in international data transfers that have not been approved.

Organisations may also be developing their own AI tools that involve the processing of personal data; the privacy aspects of these systems should be considered at an early stage.  The policy should direct personnel to the relevant department if systems of this type are to be developed.

In addition, the GDPR contains rules on the use of automated decision-making that must be taken into account if AI systems are to be used for these purposes.  The AI policy should refer personnel to the experts within the organisation to ensure that these rules are being met.

Intellectual Property Rights

The policy should manage the risk of AI use creating an infringement of intellectual property rights.  Many AI systems have been trained on materials that are available on the Internet but may not be licensed for use in this way.  This is a developing area of law and is the subject of a number of test cases in different jurisdictions.  In the UK, there is a text and data mining exception to copyright infringement, but this does not apply to commercial use.  In the US, there is a broader fair use defence to copyright infringement.

Until we have more clarity on this, the AI policies used by organisations can probably only highlight the risk of intellectual property infringement and warn users to take care over what information is used to generate the output they request. 

An AI policy could put in place a risk management process for IP.  For example, where a foundation model is being used for internal research purposes that are not for publication, the risk of IP infringement is relatively low.  If a foundation model is used to create a new product for commercial exploitation, the risk is greater and the user could be referred to an internal expert.  This could involve an assessment of the input used and of the terms and conditions of the relevant foundation model.

Acceptable use and ethics

The policy should establish the basis of acceptable use of AI systems.  In particular, offensive, discriminatory, inappropriate or otherwise unlawful content should not form part of the inputs a user enters into the prompts of AI tools.

The policy should require users to operate AI and machine learning tools in an ethical manner (in line with the organisation’s other corporate and social responsibility policies) and not to use them for purposes such as harassment or bullying.  Nor should any use of AI tools infringe data protection, intellectual property or confidentiality principles.

Bias and hallucinations

It is useful to educate and remind users that AI systems are susceptible to bias and to producing believable but highly inaccurate results.  Users of AI should therefore review and check results, especially where they are to be used for important business decisions.

Organisations may also want their staff to record and cite their use of AI for transparency and auditing purposes.

Sector specific

Some organisations may also have sector-specific rules and regulations to follow, and these should be reflected in the AI policy.  For example, there are existing financial services regulatory requirements that could be affected by AI, including consumer protection rules in the FCA Handbook aimed at preventing bias and discrimination.

AI System development

Most AI policies focus on the use of third-party AI systems such as foundation models.  If an organisation is going to develop its own AI systems, there are greater legal and regulatory issues to take into account.  For example, the data protection by design and by default principles and the automated decision-making requirements in the GDPR will become relevant.  An AI policy should point users to the correct internal experts before they embark on projects of this type.

Monitoring

Organisations may wish to monitor AI use to ensure that employees are complying with the rules set out in the policy.  Employees should be made aware of the monitoring and the purposes for which it is being carried out.  This could be set out in the AI policy or in an update to the organisation’s IT policy.

Breach

It is sensible to state that a breach of the AI policy may result in disciplinary action for an employee, and that in some circumstances this could apply to the use of technology outside working hours.  Employees should also be required to co-operate with investigations and to provide access to the AI tools they have been using.

Future Regulation

There is likely to be more AI regulation in the future; in particular, the EU AI Act will impose obligations on organisations using AI in the European Union.  Organisations will need to monitor this and update their AI policies as these new regulations become effective.

Conclusion

With the expanding use of AI in the workplace, it is advisable for an organisation to establish an AI policy.  This will allow it to set ground rules for the use of these tools, mitigate risk around matters such as IPR infringement and regulatory sanctions, and create a basis for dealing with employees who misuse AI and potentially cause reputational damage.

For more information on the issues raised in this note, or for any of your IT or data legal issues, please get in touch with us:

Codified Legal 
7 Stratford Place
London
W1C 1AY

The information contained in this briefing note is intended to be for information purposes only and is not legal advice. You must take professional legal advice before acting on any issues raised in this briefing.