[Clinic Name] — Generative AI Policy

Effective Date: [Date]
Last Revised: [Date]
Applies to: All students, faculty supervisors, and staff working in [Clinic Name]

1. Purpose

This policy governs the use of generative artificial intelligence (GAI) tools in [Clinic Name]. Because this clinic represents real clients in real legal matters, the use of GAI raises professional responsibility concerns that do not arise — or arise differently — in other law school courses. This policy exists to protect clients, ensure compliance with applicable rules of professional conduct, preserve the educational value of clinical work, and establish clear expectations for students and supervisors.

Commentary
Sag treats the absence of a policy as itself a policy choice — and a bad one. Without clear rules, students face uncertainty about what conduct is permissible, and faculty lack a shared vocabulary for enforcement. The opening paragraph makes the “why” explicit: clinic-specific concerns (live clients, professional duties) distinguish this context from doctrinal courses and justify a standalone policy.

This policy will be reviewed and updated at least once per academic year. GAI technology and the professional norms surrounding it evolve rapidly; this document reflects our best understanding as of its effective date.

Commentary
Both Sag and Bliss caution against policies too tightly coupled to specific tools or capabilities. The annual-review commitment and the epistemic humility of “best understanding as of its effective date” operationalize their shared insistence on adaptability.

2. Guiding Principles

This policy rests on several principles that inform every provision that follows. We state them here so that when a situation arises that this policy does not expressly address, you can reason from these principles to an appropriate course of action.

Commentary
This policy can be understood through a three-step analytical framework for evaluating any AI use case in a clinical setting: (1) identify the risks the use introduces, (2) design the safeguards necessary to mitigate those risks, and (3) determine whether the use serves the values the clinic exists to advance. The policy’s structure maps to this framework: Sections 4–5 address risk identification, Sections 5–7 operationalize safeguards, and the guiding principles below embed the values deliberation. This sequential design ensures that any decision to permit or restrict AI use rests on the full analysis — preventing both reflexive prohibition and reflexive adoption.

2.1 Pedagogical Purpose

Every restriction and permission in this policy serves a learning objective. Clinical education develops professional competencies — legal analysis, client counseling, advocacy, judgment — that require students to do the intellectual work themselves. Where AI use supports that development (for example, by freeing students to focus on higher-order analysis), it is permitted and encouraged. Where AI use threatens to substitute for the student’s own reasoning, it is restricted. When you encounter a restriction, understand that it exists to protect a specific learning outcome, not to express distrust of the technology.

Commentary
This section applies Bliss’s core pedagogical principle: AI policy should follow from learning outcomes, not the other way around. Bliss argues that when a restriction exists to protect a specific learning outcome, naming that outcome gives the restriction legitimacy and helps students internalize the values behind it. The final sentence — “not to express distrust of the technology” — preempts the perception that the policy is merely reactionary.
Commentary
The decision to permit or restrict AI use requires deliberation across three constituencies whose interests may not align: the clinic (efficiency, mission, resources), the client (quality, timeliness, thoroughness of representation), and the student (learning objectives, skill development). The guiding principles in Sections 2.1–2.6 address these constituencies individually, but the tension among them deserves explicit attention. A use that serves clinic efficiency may undermine student learning. A use that excites the student may not serve the client. Naming these tensions makes the decision honest rather than reflexive.

2.2 Professional Responsibility

Lawyers owe duties of competence, confidentiality, and candor that do not pause for new technology. This policy translates those duties into specific rules for GAI use. Students should understand that following this policy is not merely an academic exercise; it is practice in the kind of thoughtful, context-sensitive self-regulation that the profession demands.

Commentary
Both authors connect AI policy to professional responsibility. Sag argues that a well-designed AI policy models the kind of thoughtful, context-sensitive regulation lawyers will encounter in practice; a poorly reasoned one teaches students that regulation is arbitrary. The phrase “practice in…self-regulation” frames the policy itself as pedagogy — students are learning to navigate professional norms, not just following school rules.

2.3 Transparency over Prohibition

This policy does not impose a blanket ban on GAI. Blanket bans are unenforceable (detection tools are unreliable), and they forfeit the opportunity to teach students how to use AI responsibly. Instead, this policy requires transparency: you may use GAI within the boundaries described here, but you must always disclose that use and document your process. Transparency is the mechanism that makes everything else in this policy workable.

Commentary
This is the policy’s most consequential architectural choice. Both Sag and Bliss reject blanket bans, but for complementary reasons. Sag’s argument is pragmatic: detection tools are unreliable, so prohibition drives use underground rather than eliminating it. Bliss’s argument is pedagogical: prohibition forfeits the opportunity to teach responsible professional AI use — a competency graduates will need. The phrase “transparency is the mechanism” signals that the entire policy depends on this foundation. Faculty who are skeptical of this choice should consider the enforceability question directly: if a ban cannot be reliably detected and enforced, does it protect the interests it claims to serve?

2.4 Authentication

Regardless of what tools you use, you must be able to represent that the work product you submit reflects your own understanding and professional judgment. This means you can explain the reasoning, defend the conclusions, and identify the choices you made. AI may contribute to your process, but the professional responsibility for the product is yours.

Commentary
Sag’s “authentication requirement” is the complement to transparency. Transparency asks “did you disclose?” while authentication asks “can you stand behind it?” Together they shift the integrity question from “did you use AI?” (nearly impossible to verify) to “do you own the work?” (testable through oral discussion, process documentation, and supervisor review). This framing also aligns with professional responsibility: a lawyer who signs a brief must be able to defend it regardless of who — or what — contributed to the drafting.

2.5 Adaptability

This policy is written in terms of principles rather than specific products. Particular tools will come and go; the obligations of competence, confidentiality, transparency, and independent judgment will not. When evaluating a new tool or a novel use, apply these principles rather than looking for the tool’s name on a list.

2.6 Equity and Access

Students have unequal access to GAI tools. Some can afford premium subscriptions; others cannot. This policy does not permit advantages to flow from that disparity. [Clinic Name] will [select one]:

Commentary
Bliss raises this concern directly: policies that permit AI use without addressing access disparities risk exacerbating existing inequities. The options below force clinics to confront this rather than leave it implicit. Option A (providing institutional access) is the strongest equity measure but may require budget allocation. Option B (calibrating expectations to non-AI performance) avoids the cost but may feel like a concession. Neither option is wrong; what matters is that the clinic makes a deliberate choice.
Option A: Provide all students with access to [specific tool(s)] so that no student is disadvantaged by inability to afford premium AI tools.
Option B: Ensure that no assignment or workflow requires the use of a premium AI tool. Students who choose to use GAI do so voluntarily; the clinic’s expectations are calibrated to what a student can accomplish without it.
Option C: [Describe your clinic’s approach.]

3. Scope

3.1 Tools Covered

This policy applies to any software that uses generative AI to produce, edit, summarize, or analyze text, including:

  • General-purpose large language models (e.g., ChatGPT, Claude, Gemini, Copilot — see Section 5.4 for the tool classification that determines whether and how each may be used for clinic work)
  • Legal-specific AI tools (e.g., Westlaw AI, Lexis+ AI)
  • AI features embedded in other software (e.g., AI-powered drafting suggestions in Microsoft Word, AI summarization in email clients, browser-integrated AI assistants)
  • AI-powered transcription or translation services

This policy does not apply to traditional legal research databases (Westlaw, Lexis) when used without their AI features, standard spell-check or grammar-check tools without generative capabilities, or basic search engines.

If you are unsure whether a tool qualifies, ask your supervisor before using it.

3.2 Tasks Covered

This policy applies whenever GAI is used in connection with any clinic matter, including but not limited to: legal research, factual investigation, drafting, editing, client communication, case strategy, preparation for hearings or interviews, and administrative tasks involving client information.

3.3 Assignment-Level Flexibility

Supervisors may impose more restrictive or more permissive AI rules for specific tasks or assignments. For example, a supervisor may prohibit AI use on a first-draft memo to ensure the student works through the analysis independently, while permitting AI-assisted revision on a later draft. When a supervisor sets an assignment-specific rule, that rule governs for that assignment even if it differs from the general permissions in Section 4. Supervisors will communicate assignment-level AI rules in writing before the assignment begins.

Commentary
This implements Sag’s three-level architecture (institution → course → assignment), which he considers essential for balancing predictability with faculty autonomy. The example — prohibiting AI on a first draft but permitting it on a revision — reflects Bliss’s scaffolding principle: different stages of skill development warrant different levels of AI access. The “in writing” requirement protects both student and supervisor by eliminating ambiguity about what was permitted.

4. Permitted and Prohibited Uses

4.1 Permitted Uses

Commentary
The table below reflects Sag’s “permitted with restrictions” approach — his third category on the spectrum from prohibition to open use. Sag favors approaches toward the permissive end for practice-oriented courses. Every permitted use is conditioned on compliance with Sections 5–7, which means no permission here is unconditional.

The following uses are permitted, subject to the data privacy (Section 5), verification (Section 6), and documentation (Section 7) requirements of this policy:

| Use | Conditions |
| --- | --- |
| Brainstorming and idea generation | No client-identifying information entered into the tool |
| Legal research | All citations independently verified in primary sources; legal-specific tools preferred |
| Drafting and editing assistance | Supervisor review required before any work product is shared with a client, filed, or sent outside the clinic |
| Summarizing or analyzing non-confidential materials | Public documents only (e.g., published opinions, statutes, regulations) |
| Preparing for client interactions | Interview/hearing prep questions only; no client-identifying information entered |
| [Pedagogical exercises as assigned] | As directed by supervisor for specific learning objectives |

4.2 Prohibited Uses

The following uses are prohibited without exception:

  • Entering any client-identifying information into a GAI tool that lacks institutional data protection agreements (see Section 5)
  • Submitting any AI-generated or AI-assisted work product to a court, opposing party, government agency, or client without full supervisor review and approval — AI-generated work product that reaches a client or tribunal without review by a licensed attorney raises unauthorized-practice concerns in addition to competence and supervisory obligations
  • Using GAI to perform tasks you could not competently evaluate yourself — if you cannot assess whether the output is correct, you should not use GAI for that task
  • Relying on GAI-generated legal citations without independent verification in an authoritative source
  • Using GAI to communicate directly with a client (e.g., drafting and sending a client email without supervisor review)
  • Using personal GAI accounts for clinic work unless expressly authorized by your supervisor
  • Using GAI in any manner that violates a court order, local rule, or tribunal requirement regarding AI disclosure
  • Relying on AI-generated analysis or recommendations that affect a client’s matter without evaluating the output for potential bias — AI tools can reproduce or amplify systemic biases present in their training data, including biases related to race, gender, national origin, socioeconomic status, and other protected characteristics
  • Submitting AI-generated work without disclosure as though it reflects your own analysis — doing so is a form of misrepresentation that violates both this policy and the professional norms it models (see Section 2.4)
Commentary
The third bullet — “tasks you could not competently evaluate yourself” — deserves particular attention. It operationalizes Sag’s deskilling concern: if a student lacks the knowledge to recognize whether AI output is correct, using AI for that task creates a competence gap the student cannot see. This is a self-assessment standard, and it requires students to exercise judgment about their own limitations — itself a professional skill. The final bullet connects nondisclosure to Sag’s concept of “implicit misrepresentation,” framing it not as cheating in the academic sense but as a professional-responsibility failure.
Commentary
The bias-evaluation bullet reflects a concern that the NJ Supreme Court Preliminary Guidelines on the Use of AI uniquely emphasize among the ethical authorities: anti-discrimination duties. While ABA Opinion 512 and other bar opinions focus primarily on competence, confidentiality, and candor, the NJ Guidelines specifically identify the risk that AI tools may reproduce or amplify systemic biases. For Rutgers clinics — many of which represent clients from communities disproportionately affected by algorithmic bias — this concern is particularly salient.

4.3 A Note on Automation Bias and Deskilling

Commentary
This section addresses two of Sag’s six AI-specific risks head-on. Both risks are heightened in clinical education, where the stakes are real and students are still developing foundational skills. Including this discussion in the policy itself — rather than relegating it to a training session — reflects Bliss’s view that the policy should teach, not merely regulate.

GAI outputs are fluent, confident, and fast. These qualities make them persuasive — and dangerous. Research consistently shows that people over-rely on computer-generated output simply because it comes from a computer (a phenomenon called automation bias). In a clinical setting, automation bias can lead you to accept an incorrect legal standard, overlook a factual nuance, or adopt a strategic approach that sounds right but does not serve your client’s interests.

There is also a deskilling risk: if you delegate core analytical tasks to GAI before you have developed the skill those tasks are designed to build, you may never develop it. A student who uses GAI to draft every memo from scratch may graduate without learning how to write one independently.

To guard against both risks:

  • Approach all GAI output with professional skepticism. Assume it contains errors until you have confirmed otherwise through your own independent evaluation.
  • Be especially cautious early in your clinical experience, when the skills being developed are foundational.
  • If you find yourself unable to explain why the AI’s output is correct (or incorrect), that is a signal you should do the work yourself.

5. Data Privacy and Confidentiality

Commentary
This entire section addresses the risk Bliss highlights as unique to clinical and experiential courses: students who input client information into AI tools may violate confidentiality obligations. General academic AI policies typically do not address what information may be provided to AI systems — only whether AI may be used at all. A clinic policy must make this distinction because the duty of confidentiality under MRPC 1.6 applies to the student’s use of any tool, including AI.
Commentary
This policy’s safeguard architecture draws on James Reason’s Swiss Cheese model of risk mitigation (BMJ 320(7237), 768–770 (2000)): no single safeguard is perfect, but stacking imperfect layers blocks risk from reaching the client. The policy implements this model in two layers. The floor — safeguards that are always required — consists of a written AI policy (this document), supervision and review protocols (Section 6.3), and training (Section 8). The layers — task-specific safeguards selected based on risk — include anonymization (Section 5.3), verification (Section 6), and client consent and disclosure (Section 7.4). As Reason wrote: “We cannot change the human condition, but we can change the conditions under which humans work.”

5.1 Governing Rules

Lawyers have an ethical obligation to protect client information and preserve privilege. See ABA Model Rule of Professional Conduct 1.6; [State] RPC 1.6; ABA Formal Opinion 512 (2024); NJ Supreme Court Preliminary Guidelines on the Use of Artificial Intelligence; United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026) (addressing waiver of privilege through AI use). This obligation extends to information entered into GAI tools.

Commentary
The ethical rules serve distinct functional roles in determining how AI tools may be used. Rule 1.6 (Confidentiality) determines which tools may be used with client information — it drives the tool classification in Section 5.4. Rules 1.1 (Competence) & 3.3 (Candor) mandate training and validation protocols appropriate for each tool — they drive Section 6 and Section 8. Rule 1.4 (Communication) requires disclosure of AI use to clients — it drives Section 7.4. Rules 5.1 & 5.3 (Supervision) require oversight intensity that matches the risk of the tool and task — they drive Section 6.3 and Section 3.3.

5.2 Prohibited Inputs

Never enter the following into any GAI tool unless the tool operates under an institutional data processing agreement that your supervisor has confirmed provides adequate protection:

  • Client names, nicknames, or other identifying information
  • Case numbers, docket numbers, or internal file identifiers
  • Addresses, phone numbers, Social Security numbers, or other personal identifiers
  • Specific facts of a client’s case that, alone or combined, could identify the client
  • Financial records, medical records, immigration records, or other sensitive documents
  • Attorney-client communications
  • Work product reflecting case strategy or mental impressions

5.3 Anonymization Protocols

If you wish to use GAI to assist with a task that relates to a specific client matter:

  1. Strip all identifying information before entering any text into the tool. Replace client names with generic placeholders (e.g., “Client A,” “Landlord”). Remove dates, locations, case numbers, and any other facts that could identify the client.
  2. Assess whether anonymized facts remain identifying. In small communities or unusual fact patterns, even anonymized information may identify a client. When in doubt, consult your supervisor.
  3. Do not rely on the GAI tool’s privacy settings or “private mode” features as a substitute for anonymization. These features vary by provider and may not prevent data retention or use in model training.
Commentary
Step 2 is the most important — and the hardest to teach. Anonymization can create a false sense of security; in small-community or unusual-fact-pattern cases, even stripped-down facts may be identifying. Step 3 addresses a common misconception: students often assume that “private mode” or “don’t train on my data” toggles resolve confidentiality concerns. They do not — terms vary by provider, enforcement is opaque, and data retention practices change.

5.4 Approved Tools

Not all GAI tools carry the same data privacy risk. [Clinic Name] classifies tools into three categories based on their data protection profile. The category determines what information may be entered, what approvals are required, and whether the tool may be used for clinic work at all.

Consumer Tools (e.g., personal ChatGPT, free Claude, personal subscriptions)

These tools typically lack institutional data processing agreements. User inputs may be retained, used for model training, or accessible to the provider’s employees. Consumer tools are not approved for use in connection with any clinic work. The absence of institutional data protections, combined with uncertain data retention practices, makes these tools incompatible with the confidentiality obligations that apply to all clinic activities — even tasks that do not directly involve client information. See United States v. Heppner, No. 25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026) (addressing privilege waiver through AI use). Students may use consumer tools for personal learning outside of clinic matters, but should not use them for any task connected to a clinic case, client, or assignment.

University-Licensed Tools (e.g., university-provided Copilot, university Gemini)

These tools operate under an institutional data processing agreement between the provider and the university. Data is generally not used for model training, and the university’s IT security office has reviewed the provider’s terms. However, it remains uncertain whether all such tools’ data retention practices fully comply with Rule 1.6’s confidentiality requirements. This policy’s permission to use anonymized information with university-licensed tools reflects a considered judgment that the anonymization protocols in Section 5.3 adequately mitigate the residual data-retention risk. University-licensed tools may be used with properly anonymized client information subject to those protocols. Students must use their institutional accounts, not personal accounts, when working on clinic matters.

Legal Platform Tools (e.g., Westlaw AI, Lexis+ AI)

These tools are designed for legal practice and operate under contractual protections specific to confidential legal work. They draw on verified legal databases and provide source-linked citations. Legal platform tools may be used with client information to the extent permitted by the provider’s terms and the supervising attorney’s judgment, though the general principle of minimizing unnecessary disclosure of client information still applies.

| Category | Example Tools | Permitted for Clinic Work? | Client Info Permitted? | Approval Required? |
| --- | --- | --- | --- | --- |
| Consumer tools | Personal ChatGPT, free Claude | No — not approved for any clinic work | N/A | N/A |
| University-licensed tools | University Copilot, university Gemini | Yes | Anonymized only — per Section 5.3 protocols | Use institutional account; follow clinic protocols |
| Legal platform tools | Westlaw AI, Lexis+ AI | Yes | Yes — within provider terms and supervisor judgment | Follow clinic protocols |

No GAI tools outside these three categories may be used for clinic work without advance supervisor approval. If you wish to use a tool not listed above, submit a written request to your supervisor explaining the tool, the proposed use, and the tool’s data privacy terms.

Commentary
The category structure reflects Sag’s principle that different risks warrant different responses. Confidentiality risk does not argue for a blanket ban on AI; it argues for content-specific restrictions tied to the data-protection profile of the tool. This approach gives students a practical framework for evaluating new tools as they encounter them in practice — which tools require what level of caution — rather than a binary permitted/prohibited list that will be outdated within a year.
Commentary
The category system addresses data privacy risk, but different tool categories also carry qualitatively different competence risks. University-licensed general-purpose LLMs are trained on general internet data and are most prone to hallucinated citations and lack of legal grounding. Legal platform tools draw on primary and secondary legal materials and are more reliable on citations, but carry risks of omission and mischaracterization. The verification obligations in Section 6 should be calibrated accordingly: university-licensed tool outputs require more aggressive citation verification; legal platform tool outputs require closer scrutiny for analytical gaps and overgeneralization.

6. Verification Requirements

6.1 Governing Rules

Competent representation requires that a lawyer understand the tools they use and verify the accuracy of their work product. See ABA MRPC 1.1; [State] RPC 1.1; ABA Formal Opinion 512 (2024); NJ Supreme Court Preliminary Guidelines on the Use of Artificial Intelligence. GAI tools produce plausible-sounding text, not verified information. Outputs frequently contain fabricated citations, incorrect legal standards, jurisdictional errors, and outdated law presented as current.

Every piece of AI-generated or AI-assisted work product must be independently verified before it is relied upon, shared with a client, filed with a tribunal, or sent outside the clinic.
Commentary
Sag identifies hallucination and error as a foundational AI risk. The bolded rule above is absolute — no exceptions, no categories. This reflects the duty of competence under MRPC 1.1 and ABA Formal Opinion 512’s guidance that a lawyer must understand the tools used and verify accuracy. The verification obligation also reinforces the authentication principle from Section 2.4: you cannot defend work you have not checked.

6.2 Verification Checklist

Before any AI-assisted work product proceeds beyond the initial draft stage, the student must confirm:

  • All legal citations have been located and verified in an authoritative primary or secondary source
  • All statements of law have been checked for accuracy, currency, and jurisdictional applicability
  • All factual assertions have been confirmed against the case file or other reliable sources
  • The analysis reflects the student’s own professional judgment, not simply a restatement of AI output
  • The work product has been reviewed for tone, clarity, and appropriateness for the intended audience
  • The student can explain and defend every substantive assertion in the document

6.3 Supervisor Review

The supervising attorney must review all AI-assisted work product before it is:

  • Sent to or shared with a client
  • Filed with any court or tribunal
  • Sent to opposing counsel, a government agency, or any third party
  • Relied upon for case strategy decisions

The supervisor’s review encompasses both the substance of the work product and the appropriateness of the student’s AI use. See Section 7 for documentation requirements.

Commentary
When supervising AI-assisted work, faculty should adopt specific review heuristics: assume unfamiliar authorities are unverified until confirmed, cross-check drafts against the factual record to catch “drifted” facts, and treat highly polished prose from junior students as an indication to scrutinize for blended doctrines or fabricated authorities.

6.4 A Note on AI Detection Tools

This policy does not rely on AI-detection software to enforce its requirements. Current detection tools produce both false positives and false negatives at rates that make them unsuitable as primary enforcement mechanisms. Instead, compliance rests on the transparency, documentation, and authentication obligations described in this policy. Supervisors who suspect undisclosed AI use should address the concern through conversation and process review, not through detection software.

Commentary
Both Sag and Bliss warn explicitly against reliance on AI-detection software. Sag calls detection tools unsuitable as a primary enforcement mechanism. This section names the alternative enforcement strategy: transparency, documentation, and authentication — the three-legged stool. The final sentence redirects supervisors toward conversation rather than surveillance, which aligns with Bliss’s emphasis on the supervisor-student relationship as the real enforcement mechanism in clinical education.

7. Documentation and Attribution

7.1 Process Documentation

Whenever you use GAI in connection with a clinic matter, you must retain and be prepared to produce:

  1. The prompt(s) you entered into the tool
  2. The complete output(s) the tool generated (unedited)
  3. The final work product incorporating or based on the AI output
  4. A reflective note identifying:
    • What changes you made to the AI output and why
    • What independent verification you performed
    • What the AI got wrong or what you disagreed with
    • What you learned from the interaction that you did not know before
Commentary
The reflective note (item 4) is the heart of this section and directly implements Bliss’s central insight: requiring students to document their AI interactions forces metacognitive engagement. Sag recommends that disclosure be specific rather than formulaic — prompts used, how output was used, what was changed. Both authors warn against disclosure requirements so onerous they become performative.

This documentation serves two purposes. First, it enables meaningful supervisor review. Second — and equally important — it develops your ability to evaluate AI-generated work critically. The act of articulating what the AI contributed, what you contributed, and where the two diverged is itself a professional skill. Do not treat this as a compliance exercise; treat it as a thinking exercise.

7.2 Scaffolded Workflow for AI-Assisted Tasks

For substantial work product (memos, briefs, motions), supervisors are encouraged to structure AI-assisted work in stages that make the student’s reasoning visible:

  1. Independent analysis first. The student identifies the legal issues, develops a research plan, and forms a preliminary view before consulting GAI.
  2. AI-assisted development. The student uses GAI to test, extend, or refine the analysis — for example, by asking the tool to identify counterarguments, check for overlooked authorities, or suggest alternative framings.
  3. Critical evaluation. The student evaluates the AI output against the independent analysis, identifies discrepancies, and resolves them through their own judgment.
  4. Oral discussion. The supervisor reviews the work product with the student in conversation, asking the student to explain key choices and defend the analysis. This step ensures the student can authenticate the work and has not passively adopted AI output.
Commentary
This four-step workflow is arguably the most practical section of the entire policy for clinical supervisors. It implements Bliss’s multi-stage assignment design and his “process over product” principle. Step 1 (independent analysis first) protects against deskilling by ensuring the student forms a preliminary view before consulting AI. Step 4 (oral discussion) is Bliss’s recommended oral component — it tests authentication and makes passive adoption of AI output visible. Supervisors who adopt nothing else from this policy should consider adopting this workflow.

7.3 Attribution in Work Product

When AI-assisted work product is submitted to a court or tribunal, comply with any applicable disclosure rules. Where no specific rule governs, [Clinic Name]’s default position is:

Option A (Full disclosure): All filings include a disclosure statement indicating that GAI was used in preparation and identifying the tool(s) used.
Option B (Disclosure upon inquiry): Disclosure is made if required by court rule, court order, or direct inquiry from the tribunal.
Option C (Supervisor discretion): The supervising attorney determines on a case-by-case basis whether disclosure is warranted, considering the nature of the filing, the degree of AI involvement, and applicable rules.

7.4 Client Disclosure

[Clinic Name]’s approach to client disclosure of AI use is as follows:

Option A (Proactive disclosure): Clients are informed during the initial engagement or orientation that the clinic may use AI tools in connection with their matter. The disclosure includes a plain-language explanation of what this means and how client information is protected. Client consent is documented in writing.
Option B (Disclosure when material): Clients are informed when GAI has materially contributed to their representation — for example, when AI-assisted research or drafting significantly shapes a filing or recommendation. Disclosure is not required for incidental or background use.
Option C (Disclosure upon request): Clients are informed if they ask whether AI was used. The clinic does not affirmatively disclose absent a client inquiry, but also does not conceal AI use if asked.

Note: ABA Formal Opinion 512 does not impose a blanket disclosure obligation but identifies circumstances where communication obligations under MRPC 1.4 may require disclosure — including when client information is entered into a tool, when the client asks, or when the client cannot make an informed decision about the representation without knowing. Review the [State]-specific guidance for any additional state requirements.

Commentary
Client disclosure is likely the most debated choice in the policy. Option A (proactive disclosure) is the most protective of client autonomy and aligns with the clinic’s educational mission — it teaches students the discipline of informed consent. Option C (disclosure upon request) may seem insufficient in a clinical setting where clients are often unfamiliar with the tools lawyers use. Consider your client populations and practice areas when selecting an option.

8. Training Requirement

Before using any GAI tool in connection with clinic work, each student must:

  1. Complete the clinic’s AI orientation session or module, covering the contents of this policy, basic GAI capabilities and limitations, data privacy protocols, and verification methods for AI-generated work product
  2. Demonstrate understanding by [completing a short assessment / acknowledging this policy in writing / participating in a supervised practice exercise — select as appropriate]
  3. Review ABA Formal Opinion 512 and [State] guidance on AI in legal practice

Supervisors are responsible for ensuring that students under their supervision have completed this training before authorizing GAI use. See ABA MRPC 5.1, 5.3; [State] RPC 5.1, 5.3. Supervisors should also maintain their own competence regarding GAI tools, including an understanding of the capabilities and limitations of tools students use. See ABA MRPC 1.1, Comment [8] (duty to keep abreast of changes in technology relevant to practice).

Commentary
The training obligation runs in both directions. MRPC 1.1, Comment [8] requires lawyers to keep abreast of changes in technology relevant to practice. A supervisor who authorizes students to use AI tools but does not understand those tools’ capabilities and limitations faces a competence gap of their own. This policy applies to faculty as much as to students.

Commentary
Rutgers provides access to AI literacy courses via LinkedIn Learning at https://it.rutgers.edu/ai/. These resources may supplement clinic-specific training, particularly for faculty and staff seeking to develop baseline AI competence before engaging with clinic-specific tools and protocols.

9. Error Correction and Incident Response

If a student or supervisor discovers that GAI use has resulted in an error — a fabricated citation in a filed document, client information entered into an unapproved tool, an inaccurate legal standard communicated to a client — the following steps apply:

  1. Notify the supervising attorney. A student who discovers the problem must report it immediately.
  2. Assess the scope. Determine what the error was, who has seen or relied on the affected work product, and whether the error has been incorporated into any filing, communication, or advice.
  3. Determine correction obligations. If a filing contains a fabricated citation or incorrect legal standard, it must be corrected. The duty of candor to the tribunal (MRPC 3.3) may require amendment or supplemental filing. If a client received inaccurate advice, the client must be re-advised.
  4. Determine disclosure obligations. If client information was entered into an unapproved tool, assess the scope of the confidentiality breach and whether the client must be notified under MRPC 1.4 and [State] RPC 1.4.
  5. Document the incident. Record what happened, when it was discovered, and what corrective steps were taken.
  6. Update protocols. Determine whether the incident reveals a gap in this policy or in supervisory procedures and revise accordingly.

10. Violations and Integration with Academic Integrity

Commentary
Sag argues that AI policies must be integrated into existing academic integrity frameworks; otherwise they risk being treated as advisory rather than binding. The opening sentence achieves this integration by placing AI policy violations on equal footing with other clinic protocol and professional responsibility breaches. This signals to students that AI use is a professional conduct matter, not an ancillary technology issue.

Violations of this policy will be addressed in the same manner as other breaches of clinic protocols and professional responsibility standards. This policy is part of [Clinic Name]’s broader professional and academic integrity framework — not a standalone document. A violation of this policy carries the same weight as other integrity violations, and the same processes apply.

Depending on the severity of the violation, consequences may include:

  • Additional training and closer supervision
  • Restriction or revocation of GAI use privileges
  • Grade consequences as set forth in the clinic syllabus
  • Referral to the law school’s academic integrity process
  • In cases involving client harm or breach of confidentiality, referral to the appropriate faculty or administrative body

Undisclosed AI use is treated as a form of implicit misrepresentation: by submitting work without disclosure, the student represents a level of independent understanding and effort that they may not possess. This is incompatible with the transparency obligations of this policy and the professional norms it models.

Commentary
The “implicit misrepresentation” framing comes from Sag: when a student submits AI-generated work as their own without disclosure, they implicitly represent a level of understanding and effort they may not possess. This framing ties nondisclosure to professional norms (candor, honesty) rather than to academic policing. It explains why nondisclosure is wrong in terms students can carry into practice.

11. Acknowledgment

I have read and understood the [Clinic Name] Generative AI Policy. I agree to comply with its terms and to seek guidance from my supervisor when I am uncertain about any aspect of this policy.

I understand that regardless of any tools I use, I am responsible for authenticating my work product — meaning I can explain, defend, and take professional responsibility for every substantive element of what I submit.

Commentary
The acknowledgment deliberately uses the language of authentication rather than prohibition. It does not ask students to promise they will not use AI; it asks them to take responsibility for whatever they produce. This is Sag’s authentication requirement in its purest form — and it mirrors the professional obligation a lawyer assumes when signing a brief or entering an appearance.

Student Name (print):

Student Signature:

Date:


This policy was synthesized and compiled by David S. Kemp on February 19, 2026, with the assistance of Claude Cowork (Opus 4.6 Extended), and draws on ABA Formal Opinion No. 512 (2024), the New Jersey Supreme Court Preliminary Guidelines on the Use of Artificial Intelligence, NYCBA Formal Opinion 2024-5: Generative AI in the Practice of Law, Pennsylvania Bar Association & Philadelphia Bar Association Joint Formal Opinion 2024-200, and guidance from the legal education literature, including Matthew Sag, AI Policies for Law Schools and John Bliss, Teaching Law in the Age of Generative AI. It is intended as a template and must be customized to reflect the specific needs, practice areas, and risk profile of each clinic. The information provided on this website does not, and is not intended to, constitute legal advice, but is for general informational purposes only.