Template · 14 pages

AI & GenAI Security Policy Template

Customizable acceptable use policy for AI tools in K-12 environments. Covers ChatGPT, Copilot, Gemini, and emerging tools with monitoring and enforcement guidance.

01

Why Districts Need an AI Policy Now

Generative AI tools have been adopted by students and staff at a pace that has outstripped most districts' ability to develop governance frameworks. Survey data indicates that over 70% of K-12 educators have used generative AI tools for professional purposes, and student usage rates are even higher. This adoption is occurring largely without formal district guidance, creating immediate risks to data privacy, FERPA compliance, academic integrity, and equitable access.

The data privacy implications are the most urgent concern. When a teacher enters student IEP information into ChatGPT to help draft accommodation plans, or pastes behavioral incident reports into an AI tool to assist with documentation, they are transmitting FERPA-protected education records to a third-party service provider without the data processing agreements, parental consent, or privacy impact assessments that federal and state regulations require. Most commercial AI tools retain user inputs for model training purposes, meaning student data submitted to these platforms may be permanently incorporated into AI models and cannot be retrieved or deleted.

Academic integrity concerns, while important, should not be the primary driver of AI policy. Overly restrictive policies that attempt to ban AI entirely are unenforceable and counterproductive — they drive usage underground where it cannot be monitored and miss the opportunity to develop AI literacy skills that students will need. Effective policy balances appropriate use with strong guardrails around data protection, transparency, and attribution.

The pace of AI tool evolution demands that district AI policies be living documents with regular review cycles. New tools and capabilities emerge monthly, and policies that reference specific product names without establishing principle-based frameworks quickly become outdated. This template provides both the specific guidance needed for current tools and the flexible framework needed to adapt to emerging technologies.

02

Policy Framework

This policy establishes four classification tiers for AI tools based on data privacy risk, instructional value, and district review status. Every AI tool used in the district — whether by staff or students — must be classified into one of these tiers before use is permitted.

Tier 1 — Approved for Instruction: AI tools that have been fully vetted through the district's technology review process, have signed data processing agreements with acceptable privacy terms, and are approved for use with students. These tools may be integrated into lesson plans and used in classroom settings. Examples may include district-licensed educational AI platforms that have been reviewed and approved.

Tier 2 — Approved for Staff Professional Use Only: AI tools that are approved for staff use for professional purposes such as lesson planning, communication drafting, and administrative tasks, but are not approved for student use or for processing student data. Staff may use these tools with the strict understanding that no student PII, behavioral data, IEP information, or other FERPA-protected records may be entered. Examples may include commercial AI assistants used for drafting parent communication templates (without student-specific content) or generating lesson plan frameworks.

Tier 3 — Monitored: AI tools that have not been reviewed by the district but are not blocked. Usage is logged through iboss CASB monitoring and access may be revoked pending review. Staff and students should be aware that their usage of Tier 3 tools is visible to district technology administration.

Tier 4 — Blocked: AI tools that have been evaluated and rejected due to unacceptable data privacy practices, terms of service that claim ownership of input data, or tools that are specifically designed for purposes incompatible with educational use. Access to Tier 4 tools is blocked through iboss web filtering.

03

Approved AI Tools List Template

The Approved AI Tools List is maintained by the District Technology Committee and updated on a quarterly basis. Before any AI tool is added to the approved list, it must pass the district's AI tool evaluation process, which assesses the following criteria.

Data Processing and Privacy: Where is user input data processed and stored? Does the vendor retain user inputs, and if so, for how long? Is user input data used for model training? Can the district opt out of training data usage? Does the vendor's privacy policy comply with FERPA, COPPA (for tools used by students under 13), and applicable state student privacy laws? Has the vendor signed the district's Student Data Privacy Agreement or an equivalent data processing agreement?

Security Posture: Does the vendor maintain SOC 2 Type II certification or equivalent? What authentication methods are supported — does the tool integrate with the district's SSO? What encryption standards are used for data at rest and in transit? What is the vendor's incident response and breach notification process?

Terms of Service Analysis: Do the terms of service grant the vendor any rights to user-generated content or input data? Are there arbitration clauses that limit the district's legal remedies? Can the vendor make material changes to the terms without notice? What are the data portability and deletion provisions upon contract termination?

Data Retention and Deletion: What is the vendor's data retention policy for user inputs and outputs? Can the district request deletion of all district data? What is the timeline for data deletion upon request? Is deletion verified and certified by the vendor?

Minimum requirements for approval:

  • Data processing location must be within the United States
  • Vendor must provide signed Data Processing Agreement
  • SOC 2 Type II or equivalent security attestation required for tools handling student data
  • Terms of service must not claim ownership or training rights over input data
  • Vendor must support district SSO integration for staff-facing tools
  • Data retention period must not exceed the contractual service period
  • Vendor must provide documented data deletion procedures
  • Breach notification timeline must not exceed 72 hours
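The minimum requirements above can be captured in a simple vetting checklist that the Technology Committee fills out per tool. The following Python sketch is purely illustrative — the field names and the structure of the review record are assumptions, not a district system:

```python
from dataclasses import dataclass

@dataclass
class AIToolReview:
    """Vetting record for one AI tool; all field names are illustrative."""
    name: str
    data_processed_in_us: bool
    dpa_signed: bool
    soc2_type2: bool              # required only if the tool handles student data
    handles_student_data: bool
    tos_claims_training_rights: bool
    supports_district_sso: bool
    staff_facing: bool
    retention_within_contract: bool
    documented_deletion: bool
    breach_notification_hours: int

    def failed_requirements(self) -> list[str]:
        """Return the list of failed requirements (empty list = approvable)."""
        failures = []
        if not self.data_processed_in_us:
            failures.append("data must be processed in the United States")
        if not self.dpa_signed:
            failures.append("signed Data Processing Agreement required")
        if self.handles_student_data and not self.soc2_type2:
            failures.append("SOC 2 Type II required for student data")
        if self.tos_claims_training_rights:
            failures.append("ToS must not claim training rights over inputs")
        if self.staff_facing and not self.supports_district_sso:
            failures.append("district SSO required for staff-facing tools")
        if not self.retention_within_contract:
            failures.append("retention must not exceed service period")
        if not self.documented_deletion:
            failures.append("documented deletion procedures required")
        if self.breach_notification_hours > 72:
            failures.append("breach notification must be within 72 hours")
        return failures
```

A tool passes review only when `failed_requirements()` returns an empty list; recording the specific failures gives the vendor a concrete remediation list rather than a bare rejection.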

04

Prohibited Uses

Certain uses of AI tools are prohibited regardless of the tool's tier classification. These prohibitions are based on legal requirements, ethical considerations, and data privacy obligations that apply across all AI platforms.

No student personally identifiable information may be entered into any AI tool that has not been specifically approved for that purpose under Tier 1 classification. This includes student names, ID numbers, grades, attendance records, disciplinary information, IEP or 504 plan content, medical information, family financial information, and any other information that could identify a specific student. This prohibition applies even when the staff member believes they are anonymizing the data — contextual information can often re-identify students even when names are removed.

AI tools may not be used to make or inform disciplinary decisions, threat assessment determinations, special education eligibility decisions, or any other consequential decisions about students. These decisions require human professional judgment and due process protections that AI tools cannot provide. AI-generated content may not be used as evidence in student disciplinary proceedings.

No AI tool may be deployed in a student-facing capacity without prior parental notification that meets district policy requirements. This includes AI-powered tutoring systems, chatbots, writing assistance tools, and any application where students directly interact with AI-generated content or submit their work to AI systems. The notification must describe the tool's purpose, what student data is collected, how it is processed and stored, and how parents can opt their child out.

Staff may not use AI tools to generate official district communications — including letters to parents, board reports, or legal documents — without disclosure that AI assistance was used and human review of the final content for accuracy and appropriateness.

05

Staff Guidelines

All staff members who use AI tools for professional purposes must complete the district's AI Literacy and Responsible Use training module before access is provisioned. This training covers data privacy obligations, the specific prohibitions in this policy, appropriate use scenarios, and the technical enforcement mechanisms in place. Training must be renewed annually, with supplemental updates provided when significant policy changes occur.

AI-assisted lesson planning is an approved professional use that can enhance instructional quality and save preparation time. Staff may use approved AI tools to generate lesson plan frameworks, create differentiated activity suggestions, develop assessment questions, and brainstorm instructional strategies. However, all AI-generated instructional content must be reviewed by the teacher for accuracy, age-appropriateness, bias, and alignment with curriculum standards before use with students. AI tools are supplements to professional expertise, not replacements for it.

Grading and assessment present specific considerations. AI tools may be used to assist with rubric development, generate sample responses for calibration exercises, and provide feedback on assessment design. However, AI tools may not be used as the sole or primary mechanism for evaluating student work. Final assessment decisions — including grades — must reflect human professional judgment. If AI tools are used in any part of the assessment process, the specific use must be documented and approved by the building principal.

Staff should exercise particular caution with AI-generated content that presents factual claims. Current generative AI systems are known to produce confident but inaccurate statements, fabricate citations, and present plausible but false information. Any factual claims in AI-generated content intended for instructional or professional use must be independently verified before dissemination.

06

Student Guidelines

Student access to AI tools is governed by grade-level appropriateness, parental notification status, and the specific tool's tier classification. Because Tier 2 tools are approved for staff professional use only, student access is limited to Tier 1 tools at every grade level. Elementary students (K-5) may access only Tier 1 AI tools that have been specifically approved for their grade band, used under direct teacher supervision, and subject to parental notification. Middle school students (6-8) may use Tier 1 tools with teacher guidance for specific learning activities. High school students (9-12) may use Tier 1 tools for academic purposes within the framework established by their teachers.

Attribution requirements are fundamental to maintaining academic integrity while embracing AI as a learning tool. When students use AI tools to assist with academic work, they must disclose the specific tool used, the nature of the AI assistance (brainstorming, drafting, editing, research, coding, etc.), and which portions of the submitted work were AI-assisted. The specific attribution format should be determined by each department or grade-level team, but the principle of transparent disclosure is non-negotiable.

AI literacy should be integrated into the curriculum as a twenty-first century competency. Students need to understand how large language models work at a conceptual level, including their training process, limitations, tendency toward hallucination, and inherent biases. They should develop critical evaluation skills for AI-generated content, understand the ethical implications of AI use, and learn to use AI tools as productivity enhancers rather than substitutes for learning. The library media specialist and instructional technology staff should collaborate with classroom teachers to develop age-appropriate AI literacy activities.

Students must not enter personally identifiable information about themselves or others into AI tools, including names, addresses, phone numbers, student ID numbers, or any information that could identify classmates, teachers, or family members. This restriction should be taught as part of broader digital citizenship education and reinforced through technical controls.

07

Technical Enforcement with iboss

Policy without technical enforcement is merely aspiration. The iboss SASE platform provides the technical controls necessary to enforce this AI policy at the network level, ensuring consistent application across all users, devices, and locations.

iboss CASB (Cloud Access Security Broker) policies enable granular control over AI tool categories. AI tools are categorized and assigned to the policy tier framework, with iboss enforcing the appropriate access policy: Tier 1 tools are allowed with logging, Tier 2 tools are allowed for staff groups with logging, Tier 3 tools are allowed with enhanced monitoring and alerts, and Tier 4 tools are blocked with a custom block page explaining the district's AI policy and the process for requesting tool review.
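When scripting policy audits against exported configuration or logs, the tier-to-action mapping above can be represented as a small data model. To be clear, iboss itself is configured through its admin console; this Python sketch is an illustrative model of the policy logic, not an iboss API:

```python
from enum import Enum

class Action(Enum):
    ALLOW_LOG = "allow with logging"
    ALLOW_STAFF_LOG = "allow for staff groups with logging"
    ALLOW_MONITOR = "allow with enhanced monitoring and alerts"
    BLOCK = "block with custom policy block page"

# Tier-to-enforcement mapping taken from the policy framework above.
TIER_ACTIONS = {
    1: Action.ALLOW_LOG,
    2: Action.ALLOW_STAFF_LOG,
    3: Action.ALLOW_MONITOR,
    4: Action.BLOCK,
}

def action_for(tool_tier: int, is_staff: bool) -> Action:
    """Resolve the enforcement action for a user accessing a classified tool."""
    action = TIER_ACTIONS[tool_tier]
    if action is Action.ALLOW_STAFF_LOG and not is_staff:
        return Action.BLOCK  # Tier 2 tools are staff-only; students are blocked
    return action
```

Encoding the staff-only exception for Tier 2 in one place keeps audit scripts consistent with the written policy when tiers are reviewed quarterly.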

DLP (Data Loss Prevention) rules provide the critical technical backstop preventing FERPA-protected data from reaching AI services. iboss DLP inspects data submitted to AI tool interfaces — including text pasted into chat windows, files uploaded to AI platforms, and API calls from AI-integrated applications — and blocks transmissions containing patterns matching student PII including SSNs, student ID formats, and other configured identifiers. Custom DLP dictionaries can be created to match district-specific data patterns such as local student ID number formats.
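At their core, custom DLP dictionaries are pattern definitions. The sketch below shows the kind of patterns involved: the SSN format is standard, but the student ID format is a hypothetical local convention ("STU-" followed by seven digits), and real iboss DLP dictionaries are configured in the platform, not in Python:

```python
import re

# Illustrative PII patterns. The student_id pattern is a hypothetical
# district-local format; substitute your actual ID convention.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "student_id": re.compile(r"\bSTU-\d{7}\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the names of PII patterns found in outbound text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

def should_block(text: str) -> bool:
    """Block the transmission to an AI tool if any pattern matches."""
    return bool(find_pii(text))
```

Pattern-based matching catches structured identifiers but not free-text context that could re-identify a student, which is why the policy prohibition on entering student data applies even where DLP does not fire.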

URL filtering and categorization maintain the block list for Tier 4 AI tools and ensure that newly emerging AI tools are promptly categorized. iboss maintains real-time categorization of AI tool domains and automatically classifies new AI services as they appear. Comprehensive logging of all AI tool access — including timestamps, user identity, device information, data volume, and specific AI tool used — provides the audit trail necessary for policy compliance monitoring and incident investigation.

Monthly AI usage reports should be generated from iboss logs and reviewed by the District Technology Committee to identify emerging tool adoption patterns, detect potential policy violations, and inform decisions about tool classification and policy updates.
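A monthly review can start from an exported access log. The following sketch assumes a CSV export with hypothetical column names (`user`, `tool`, `tier`); real iboss log schemas will differ, so treat this as a shape for the report, not a working integration:

```python
import csv
from collections import Counter
from io import StringIO

def summarize_ai_usage(log_csv: str) -> dict:
    """Tally AI tool hits per tool and flag Tier 3 users for committee review."""
    by_tool = Counter()
    tier3_users = set()
    for row in csv.DictReader(StringIO(log_csv)):
        by_tool[row["tool"]] += 1
        if row["tier"] == "3":
            tier3_users.add(row["user"])  # unreviewed tools: queue for review
    return {
        "hits_by_tool": dict(by_tool),
        "tier3_users_for_review": sorted(tier3_users),
    }
```

Surfacing Tier 3 (monitored-but-unreviewed) usage separately gives the committee a concrete queue of tools gaining adoption ahead of formal review.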

08

Policy Review and Update Cadence

Given the unprecedented pace of AI tool development and the rapidly evolving regulatory landscape, this policy must be treated as a living document subject to regular review and revision. A quarterly review cycle is the recommended minimum, with provisions for emergency updates when significant developments warrant immediate policy changes.

The quarterly review should be conducted by the District Technology Committee, which should include representation from instructional technology, information security, curriculum and instruction, legal counsel, a building administrator, a teacher representative, and a parent representative. Each quarterly review should assess: new AI tools that have emerged and require classification, changes to existing tool privacy policies or terms of service, new regulatory guidance from the Department of Education or state education agencies, incident reports related to AI tool usage, and feedback from staff and students on policy effectiveness.

Emergency policy updates may be issued by the Chief Technology Officer or Superintendent in response to events including: a data breach involving an approved AI tool, regulatory action by a government agency affecting a specific AI platform, significant changes to an approved tool's data handling practices, or discovery of a technical vulnerability in an AI platform's security controls. Emergency updates take effect immediately upon issuance and are reviewed by the full committee at the next quarterly meeting.

Stakeholder input is essential for policy legitimacy and compliance. The district should maintain a public feedback mechanism — such as a form on the district website — where staff, students, parents, and community members can submit questions, concerns, and suggestions related to the AI policy. The Technology Committee should review and respond to all submissions as part of the quarterly review cycle. Annual community forums on AI in education provide additional opportunities for transparent dialogue about the district's approach to emerging technology.


Need help implementing this?

Calbrate configures iboss to meet every requirement covered in this resource. Free assessment included.

Free · No obligation · Response within 24 hours