AI tools are showing up in nonprofit workplaces whether organizations have policies for them or not. Staff are using ChatGPT to draft emails, Claude to summarize documents, and Copilot to build spreadsheets. Without a clear policy, organizations face inconsistent practices, potential data privacy problems, and no shared standard for when AI output needs human review.
This template gives you a starting point. It is written for small nonprofits and educational organizations that want a clear, practical policy without legal jargon. Review it with your leadership team, adjust the text marked in [brackets] to fit your organization, and adopt it as a board-approved policy or staff operating guideline.
Note: This template is a practical starting point, not legal advice. Organizations with complex data handling, grant compliance requirements, or specific regulatory obligations should review any AI policy with legal counsel before adoption.
Artificial Intelligence Acceptable Use Policy
[Organization Name]
Adopted: [Date] • Review Date: [Date, typically one year after adoption]
1. Purpose
This policy establishes guidelines for the use of artificial intelligence (AI) tools by staff, volunteers, board members, and contractors of [Organization Name] (the Organization). It is intended to help the Organization benefit from AI capabilities while protecting the privacy of clients and donors, maintaining the accuracy of our communications, and upholding our organizational values.
2. Scope
This policy applies to all individuals who perform work on behalf of the Organization, including paid staff, volunteers, board members, and contracted service providers, when using AI tools for Organization-related tasks. It covers AI tools accessed through personal accounts, organizational accounts, or third-party platforms.
3. Approved Uses
The Organization encourages the thoughtful use of AI tools to improve efficiency and quality. The following uses are permitted:
- Drafting, editing, and proofreading documents such as newsletters, grant narratives, donor communications, meeting minutes, and social media content
- Summarizing reports, articles, or meeting notes
- Brainstorming ideas, program names, event themes, or outreach strategies
- Translating content into other languages for community outreach
- Creating templates for recurring documents
- Building spreadsheet formulas or automating simple administrative tasks
- [Add other uses specific to your organization's work]
4. Required Practices
When using AI tools for any Organization-related task, staff and volunteers must follow these practices:
- Human review is required. AI output must be reviewed and edited before it is used in any official communication, published material, or submitted document. AI-generated content should never be used verbatim.
- Verify facts and figures. AI tools can produce plausible-sounding but inaccurate information. Any facts, statistics, dates, names, or legal references generated by AI must be independently verified before use.
- Maintain your voice. Final materials should reflect the Organization's tone and mission. Review AI-generated drafts to ensure they sound like us, not like a generic document.
- Disclose AI use when appropriate. If AI was used to generate a significant portion of a grant application, staff should check the funder's policy on AI-generated content before submission and disclose use if required.
5. Data Privacy and Confidentiality
Protecting the privacy of our clients, donors, and partners is a core organizational responsibility. The following restrictions apply to all AI tool use:
- Do not enter personally identifiable information (PII) about clients, donors, or staff into any AI tool. This includes names, addresses, phone numbers, email addresses, Social Security numbers, financial information, and health or case-related information.
- Do not enter confidential organizational information into AI tools, including unpublished financial data, personnel matters, pending legal issues, or information shared under a nondisclosure agreement.
- Use general descriptions instead of specific details. If you need AI to help with a communication about a specific client situation, describe the situation in general terms without identifying the individual.
- Be aware of how AI tools use your input. Many free AI tools use conversation data to train or improve their models. Assume that anything you type into a free AI tool could be stored or reviewed. Use organizational accounts with privacy agreements where available.
6. Prohibited Uses
The following uses of AI tools are not permitted:
- Creating content intended to deceive, mislead, or misrepresent the Organization or its work
- Generating images, audio, or video of real individuals without their consent
- Using AI to make final decisions about client eligibility, staff hiring, or other consequential determinations without human judgment and review
- Submitting AI-generated content to grant funders in violation of their stated policies
- Using AI tools for personal tasks on Organization time or with Organization accounts
- [Add any restrictions specific to your funding requirements or program regulations]
7. Approved Tools
[Optional: List the AI tools your organization has reviewed and approved for use, or note that staff should check with their supervisor before using a new tool. Example below.]
The following tools have been reviewed and are approved for general staff use: [e.g., Microsoft Copilot (via our Microsoft 365 subscription), Claude.ai (free tier), ChatGPT (free tier)]. Staff wishing to use an AI tool not on this list should consult with [supervisor / executive director / IT contact] before doing so.
8. Responsibility and Accountability
Each staff member and volunteer is responsible for understanding and following this policy. Questions about whether a specific use is appropriate should be directed to [designated contact, e.g., the Executive Director or Operations Manager].
The Organization is responsible for any content it publishes or submits, regardless of whether AI tools were used in its creation. Staff remain accountable for the accuracy, appropriateness, and quality of their work.
9. Policy Review
AI technology and best practices are evolving rapidly. This policy will be reviewed annually, or sooner if significant changes in AI capabilities or relevant regulations require an update. Staff are encouraged to raise questions or suggest updates as they gain experience with these tools.
Approved by [the Board of Directors / the Executive Director] of [Organization Name] on [Date].
How to Use This Template
Step 1: Fill in the brackets. Replace every instance of bracketed text with your organization's specific information. Pay particular attention to Section 7 (Approved Tools) and the prohibited uses list, which may need additions based on your funding sources or program regulations.
Step 2: Review with leadership. Share the draft with your executive director and, if applicable, a board committee. The goal is a policy that your leadership understands and supports, not one that sits in a drawer.
Step 3: Train your staff. A policy is only useful if people know it exists and understand what it means. Plan a brief staff meeting or training to walk through the key points, especially the data privacy section.
Step 4: Adopt it formally. For most nonprofits, an AI use policy is an operational policy that the executive director can approve. If your bylaws or board prefer board approval for all policies, bring it to a regular board meeting for a vote.
Step 5: Schedule a review. Set a calendar reminder for 12 months out. AI is changing fast, and your policy should keep up.
If you'd like help customizing this template for your organization or training your staff on AI tools and practices, Cochise AI offers workshops and consulting specifically designed for nonprofits. Reach out through the contact form to start a conversation.