I've already talked about the pros and cons of AI in the workplace. One of the most effective tools for quickly and transparently managing AI use in your company is an AI usage policy.
At Sommo, we developed ours about a year ago (although we should have done it much earlier 🤦‍♂️). Since then, it has gone through two more revisions. Below, I’ll briefly share my experience and offer tips for creating and implementing an AI usage policy.
AI usage policy template
For those who don’t want to waste a minute: download a ready-made AI usage policy template and adapt it to your company.
What is an AI usage policy
An AI acceptable use policy is a set of fundamental guidelines that explains to employees how to use AI tools like ChatGPT or Google's Bard in the workplace. These policies are designed to ensure everyone uses AI in an ethical and appropriate way.
Having such a policy protects the company from problems like:
- Leaks of Important/Confidential Information: Stories keep appearing on social media about employees getting fired for sharing confidential information with AI tools after misjudging the risks (see the guardrail sketch after this list).
- Erosion of Responsibility and Competency: AI does not handle complex or skilled tasks better than a capable employee, and treating its output as finished work erodes both skills and accountability. Any task completed with AI must be reviewed by a person who takes responsibility for the final outcome.
- Ethical Issues: Biased, unfair, or opaque AI-driven decisions can harm employees and customers. Deloitte’s recent report, "State of Ethics and Trust in Technology," is dedicated to AI ethics and its impacts.
- Lawsuits and Reputational Damage: These are the potential consequences of the above problems. The resources needed to compensate for damages and restore the status quo can be substantial.
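To make the first risk on this list concrete: a policy can mandate a simple technical guardrail that scans outgoing prompts for obvious secrets before they reach an external AI tool. Here is a minimal sketch in Python; the patterns and the `check_prompt` helper are hypothetical and would need to be replaced with your company's own data-classification rules.

```python
import re

# Hypothetical patterns for data the policy forbids sharing with external AI tools.
# A real deployment would use the company's own data-classification rules.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b(?:api[_-]?key|secret|password)\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like identifiers
    re.compile(r"\bCONFIDENTIAL\b"),       # documents stamped confidential
]

def check_prompt(prompt: str) -> list[str]:
    """Return the patterns an outgoing prompt violates (empty list = allowed)."""
    return [p.pattern for p in CONFIDENTIAL_PATTERNS if p.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this config: api_key=sk-12345 for the staging server."
    violations = check_prompt(prompt)
    if violations:
        print("Blocked, prompt matches confidential patterns:", violations)
    else:
        print("Prompt allowed.")
```

Even a crude filter like this catches the careless mistakes that end up on social media; it is a complement to training and human review, not a substitute for them.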
I hope I’ve convinced you that developing an AI policy is not just a trend but an effective tool for safeguarding and growing your company. Now, let’s move on to the second question—who should create such a policy?
In practice, this is usually handled by executive leadership. For instance, here is some data on developing ethical standards as part of an AI usage policy.
If possible, you can form a special committee—a group of individuals from different parts of your company—to work on the AI usage policy.
It is crucial that everyone involved in developing the policy understands the basics of AI and the challenges it can present. A good idea is to organize short workshops on key topics for the future AI usage policy—such as algorithmic bias, data privacy and protection, accountability, ethics, and social implications.
But most importantly, employees need to be aware of the AI usage policy, and it needs to actually work.
For myself, I've outlined the following rules for an effective AI usage policy:
- Team involvement: Involve different teams in developing, implementing, and updating the policy. This ensures the policy addresses the real interests and needs of the people who will use it and fosters a sense of ownership.
- Keep it simple: The policy should be straightforward, concise, and understandable. Time and cognitive resources are the most limited assets—respect that.
- Principles over rules: Flexible principles matter more than rigid rules. Before establishing your own, however, I recommend looking at examples from companies like Microsoft or Google.
At Sommo, our AI usage policy is built on five key principles: security, privacy, accountability, transparency, and growth (encouraging exploration of AI’s benefits).
How to develop an AI acceptable use policy
Developing an AI usage policy is still relatively new for most companies (see figure below), and there are no clear algorithms or tools yet. Therefore, when creating such a policy, you have to rely on common sense, experience, and your team.
Here are my personal tips for starting your work on an AI usage policy:
- Study government regulations: Look into relevant regulations and guidance, such as "Artificial Intelligence and Worker Well-being: Principles and Best Practices for Developers and Employers."
- Get familiar with the topic: There are already many courses, training programs, and guidelines for AI regulation. Simple options include the "AI Regulations and Frameworks Crash Course - 2024" or Google's "Introduction to Responsible AI."
- Don't reinvent the wheel: Study existing AI policies from similar companies or industry leaders. Use them as a reference to shape your policy. This will save time and ensure you don't miss key elements that have already been successfully implemented elsewhere.
- Focus on the goal(s): Remember that the goal of an AI policy is to encourage innovation and effective use of AI, not to impose unnecessary restrictions. The policy should promote responsible experimentation with AI while ensuring ethical and legal compliance.
The specific content and focus of an AI usage policy will vary depending on your company's industry, size, and specific use cases for AI. Regardless, I recommend the following structure:
- Objectives: Clearly outline the objectives of using AI in your company. Understand what problems AI aims to solve and how it aligns with your business strategy.
- Core principles: Based on your company’s values, establish ethical principles that will guide AI usage. We used the five principles described above: security, privacy, accountability, transparency, and growth.
- Use cases: Determine which business functions/areas will use AI and for what purposes (a small policy-as-code sketch follows this list).
- Ethics: Set guidelines to prevent bias, ensure fairness, and protect privacy.
- Data management: Define how data will be collected, stored, and used. Ensure compliance with relevant data protection laws and regulations, and include policies on data security and privacy.
- Risk management: Identify potential risks associated with AI usage, such as security vulnerabilities, biases, or job displacement, and outline measures to mitigate them.
- Accountability and responsibility: Assign responsibilities for monitoring AI systems and ensuring compliance with the AI policy. Define who is accountable for any issues that arise from AI usage.
- Training and awareness: Provide training for employees on how to use AI responsibly. Ensure that staff understand both the benefits and risks of AI and how to follow the policy.
- Continuous evaluation: Establish a process for regularly evaluating the effectiveness and impact of AI systems. Update the policy as needed to adapt to new challenges, technologies, or regulations.
- Transparency: Maintain transparency with stakeholders, including employees, customers, and partners, about how AI is used and the decisions it influences.
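None of these sections has to stay on paper. As an illustration only (the tools, purposes, and team names below are hypothetical), here is a sketch of how the "Use cases," "Data management," and "Accountability" sections could be encoded as a small registry that scripts and onboarding checklists can consume:

```python
from dataclasses import dataclass

@dataclass
class ApprovedUseCase:
    tool: str            # AI tool covered by the policy
    purpose: str         # what the tool may be used for
    max_data_class: str  # highest data classification permitted
    owner: str           # team accountable for compliance

# Hypothetical registry; a real one would mirror the policy's "Use cases" section.
REGISTRY = [
    ApprovedUseCase("ChatGPT", "drafting marketing copy", "public", "Marketing"),
    ApprovedUseCase("ChatGPT", "code review suggestions", "internal", "Engineering"),
]

LEVELS = ["public", "internal", "confidential"]  # ascending sensitivity

def is_approved(tool: str, purpose: str, data_class: str) -> bool:
    """Deny by default: approve only registered tool/purpose pairs
    at or below the permitted data classification."""
    if data_class not in LEVELS:
        return False
    for uc in REGISTRY:
        if uc.tool == tool and uc.purpose == purpose:
            return LEVELS.index(data_class) <= LEVELS.index(uc.max_data_class)
    return False

print(is_approved("ChatGPT", "drafting marketing copy", "public"))        # True
print(is_approved("ChatGPT", "code review suggestions", "confidential"))  # False
```

The design choice here is deny-by-default: any tool, purpose, or data class not explicitly listed is rejected, which mirrors how most acceptable use policies are written.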
Remember that developing an AI usage policy is an ongoing process. It is important to stay updated on changes in AI technology, regulations, and best practices. Regularly review and revise the policy to ensure it remains effective and relevant.
Even with a firm policy, it is essential to maintain human involvement in the development, deployment, and monitoring of AI systems to ensure they are used ethically and responsibly.