By Anne Bibeau & Ross Broudy
Are you aware that your employees are using commercially available generative artificial intelligence (AI) to do their work? Without the proper guardrails and AI use policies, rogue employee use of AI can pose complex legal risks and ethical challenges for businesses.
To limit these legal risks while fully realizing the wealth of opportunities that AI integration into the workplace presents, businesses must establish clear, written guidelines to ensure their workforce uses generative AI technology responsibly.
Here are some key considerations for addressing your employees’ usage of generative AI:

Establish a Generative AI Use Policy
As a critical first step, every business should develop a comprehensive AI use and ethics policy to establish clear principles and guidelines for the organization’s responsible and ethical use of AI. This is known as internal AI governance. Employees need rules on what AI systems are permitted, how those AI tools should be used (and when they should not), and what disclosures need to be made about AI use. A critical best practice is to prohibit all generative AI tools unless the tools themselves and specific use cases are approved by management.
Employees need to understand that they remain responsible for the content of their work. To this end, the AI use policy should require employees always to verify the accuracy and truthfulness of the AI output and to avoid potential intellectual property infringement.
The policy should include provisions related to accountability and transparency and, at minimum, address data privacy and security issues, such as what types of information can and cannot be shared with generative AI tools. As part of the generative AI tool approval process, the tool should be thoroughly vetted to carefully consider the terms of service and privacy policy.
Understand the Legal Landscape
Several legal concerns may be implicated in your business’s use of generative AI. Generative AI systems often rely on data inputted by users to improve and train the AI tools, known as training data, which raises data privacy concerns. Depending on the terms and conditions of the specific generative AI tool, information your employees share with the AI tool can become training data, meaning that your business has likely waived any claim to confidentiality over that information.
For example, an employee who shares confidential and proprietary business information, trade secrets, or sensitive personal information with a generative AI tool could trigger legal consequences, such as a reportable data breach or a breach of contractual obligations. Rogue employee use of generative AI significantly amplifies these data privacy risks, particularly when the business lacks a generative AI use policy that clearly establishes how these tools may be used and what information and data may be shared.
When AI is used in human resources functions, there is a risk of illegal discrimination. AI can promote efficiencies in areas like recruiting, hiring, and performance monitoring but can also perpetuate existing societal biases, leading to discriminatory outcomes. Organizations must implement measures to identify and mitigate bias in AI systems, ensuring fairness and equal opportunity. The EEOC has issued guidance on using AI in hiring, firing, and monitoring employees. Of particular note from this guidance is that your business cannot rely on representations from the AI developer that the program does not discriminate—the employer must perform its own analysis periodically to ensure that the AI system does not discriminate.
AI-generated content raises questions about copyright and ownership. Companies need clear policies on the use of AI in creating intellectual property.
Finally, transparency in your business’s AI decision-making processes can assist in building trust and ensuring accountability.

Guidelines for Ethical AI Management
After establishing a generative AI use policy, it is also essential to train employees on the policy itself so they understand the ethical considerations involved in using the technology responsibly. Bedrock ethical principles of generative AI use include fairness, transparency, accountability, privacy, and explainability.
The company needs mechanisms to enforce these policies. Leadership should designate individuals or teams responsible for overseeing generative AI usage and ensuring compliance with ethical guidelines. Procedures should also be established for reporting and addressing ethical concerns. Even with advanced generative AI, it is important to maintain human oversight of AI systems, especially when those systems are making important decisions.
By proactively addressing these legal and ethical considerations, organizations can continue to harness the power of generative AI while mitigating risks associated with rogue employee use of these tools.
Anne Bibeau co-chairs the management-side Labor & Employment Practice at Woods Rogers, where she is a principal. Ross Broudy is an associate attorney in the firm’s Cybersecurity & Data Privacy Practice. They may be reached at anne.bibeau@woodsrogers.com and ross.broudy@woodsrogers.com.