Generative AI Policy Quiz

[Read the Policy]

Test your understanding of the Clinic's Generative AI Policy. Select an answer to receive immediate feedback.

Part 1: Knowledge-Based Questions

These questions test familiarity with the express rules, definitions, and requirements laid out in the policy.

1. Which of the following software applications falls completely OUTSIDE the scope of the clinic's Generative AI Policy?

A. An AI-powered legal drafting assistant embedded directly into Microsoft Word.
B. A third-party AI transcription service used to transcribe client interviews.
C. Microsoft Word’s standard spelling and grammar-check tool that lacks generative capabilities.
(Section 3.1 explicitly excludes standard spell-check/grammar-check tools without generative capabilities.)
D. Westlaw's AI-assisted legal research feature.

2. To fulfill the policy’s process documentation requirements, which of the following must a student retain and be prepared to produce whenever they use GAI for a clinic matter?

A. A screenshot of the tool's privacy settings confirming that "private mode" was enabled during the session.
B. Only the initial prompt entered into the tool and the final submitted work product.
C. A formal certificate generated by an approved AI-detection software tool showing the percentage of AI-generated text.
D. The prompts used, the unedited AI outputs, the final work product, and a reflective note detailing the student's independent verification and analytical choices.
(Section 7.1 outlines exactly these four elements for process documentation.)

3. Under the policy’s tool classification system, what is the rule regarding the use of "University-Licensed" GAI tools (e.g., university-provided Copilot or Gemini)?

A. They are strictly prohibited for any clinic work because of residual data retention risks inherent to all general-purpose LLMs.
B. They may be used for clinic work, but only if all client-identifying information is properly stripped and anonymized first.
(Section 5.4 specifies that University-Licensed tools may be used, but only with properly anonymized client information.)
C. They may be used with unredacted client information because the university’s IT department has signed an institutional data processing agreement.
D. They may only be used for clinic work if the client provides express written consent to data sharing during their initial intake interview.

4. What is the policy’s stated position on using AI-detection software to enforce student compliance with the rules?

A. Supervisors are required to run all major final drafts through university-approved AI-detection software before filing them with a court.
B. Students must run their own work through AI-detection tools and attach the resulting "zero-percent AI" report to their assignments.
C. AI-detection software may only be utilized if a supervisor has documented probable cause to suspect that a student used unauthorized tools.
D. The policy explicitly rejects relying on AI-detection tools as a primary enforcement mechanism because they produce unreliable false positives and false negatives.
(Section 6.4 explicitly states the policy does not rely on detection software due to false positives/negatives.)

5. According to the policy, how is a student’s submission of AI-generated work without proper disclosure treated?

A. As a form of implicit misrepresentation that violates the professional norms and academic integrity standards modeled by the clinic.
(Section 10 characterizes undisclosed AI use as implicit misrepresentation incompatible with professional norms.)
B. As an excusable technological oversight, provided the substantive legal analysis within the document is ultimately verified as accurate.
C. As an intellectual property and copyright infringement issue that must be referred to the university's general counsel.
D. As a minor procedural infraction that warrants mandatory supplemental training but cannot result in academic grade consequences.

Part 2: Application-Based Questions

These questions present novel scenarios to test whether you can apply the policy’s principles and rules in practice.

6. A 2L clinic student is asked to draft a motion on a highly complex, niche area of administrative law they have never studied. Hoping to get a head start, the student uses Lexis+ AI to generate the first draft. The student checks that the cited cases exist, thinks the reasoning sounds highly persuasive, and submits it to their supervisor. Under the policy, why is this use of GAI problematic?

A. The student relied on a legal platform tool rather than a university-licensed tool, which is required for first-draft motions.
B. The student engaged in a prohibited use by utilizing GAI to perform an analytical task they could not competently evaluate themselves.
(Section 4.2 prohibits using AI to perform tasks the student cannot competently evaluate themselves, highlighting the deskilling risk.)
C. The student failed to run the AI output through a standard grammar-check program before submitting it for supervisor review.
D. The general policy strictly prohibits the use of GAI on any first-draft motion, regardless of the student's level of expertise.

7. A student is working on a housing case involving a well-known local politician suing their landlord over a highly publicized eviction in a small township. To brainstorm legal theories, the student removes the client's name and address, replaces them with "Tenant" and "Township," and inputs the remaining facts into a university-licensed AI tool. Did the student comply with the policy’s anonymization protocols?

A. Yes, because they replaced the specific identifying markers with generic placeholders before entering the text into the tool.
B. Yes, because university-licensed tools operate under an institutional data agreement that permits the entry of unredacted client facts.
C. No, because in unusual fact patterns or small communities, even anonymized facts may remain identifying, meaning the anonymization was insufficient.
(Section 5.3 Step 2 warns that in small communities/unusual fact patterns, merely swapping names is insufficient anonymization.)
D. No, because the policy completely bans the use of university-licensed tools for the purpose of brainstorming legal theories.

8. While working from a coffee shop, a student uses their personal, free ChatGPT account to summarize an opposing party's interrogatory responses, which contain sensitive medical history. To protect privacy, the student turns on ChatGPT's "temporary chat" and "do not train on my data" toggles before uploading the document. Does this action violate the AI policy?

A. Yes, because consumer tools are strictly prohibited for clinic work, and relying on a tool's "private mode" is not a valid substitute for proper data protection.
(Sections 5.3 & 5.4 strictly prohibit consumer tools for clinic work and warn against relying on "private mode" features.)
B. Yes, because the student failed to use the university-provided version of ChatGPT, which is the only tool permitted for summarizing unredacted medical records.
C. No, because the student successfully utilized the tool's privacy settings to ensure the data would not be used for model training, fulfilling their confidentiality duties.
D. No, because summarizing non-confidential or opposing party documents is an expressly permitted use of GAI across all tool categories.

9. A student uses an approved AI tool to help draft a memo analyzing a multi-factor test for a client's statutory benefit eligibility. During review, the supervisor asks the student why they weighed the third factor so heavily. The student replies, "I'm not exactly sure; that's just how the AI structured the argument, but the cases it cited are real." Has the student complied with the policy?

A. Yes, because they fulfilled their core verification requirements by confirming the citations in the memo were real.
B. Yes, because the policy encourages transparency, and the student honestly disclosed that the AI generated the analytical structure.
C. No, because the student failed the authentication requirement, which mandates they must be able to explain, defend, and take professional responsibility for the reasoning.
(Section 2.4/11 "Authentication" requires students to be able to explain, defend, and take responsibility for the reasoning, not just confirm citations exist.)
D. No, because the policy strictly forbids using AI to draft legal memos unless the student has completed a purely paper-based independent analysis first.

10. The clinic's general policy permits using AI for drafting assistance. However, at the start of the semester, a supervisor emails their students: "For the initial client interview memo, you may not use any GAI tools whatsoever; I want to see your unassisted factual synthesis." A student uses AI to outline the memo anyway, assuming the clinic's general policy overrides the supervisor's specific preference. Is the student correct?

A. Yes, because individual supervisors cannot impose blanket bans on AI use for assignments under the policy's "Transparency over Prohibition" principle.
B. Yes, because assignment-level restrictions are only valid if they are published in the official clinic syllabus rather than in an email.
C. No, because supervisors have the flexibility to impose more restrictive rules for specific assignments, provided they communicate them in writing beforehand.
(Section 3.3 explicitly gives supervisors assignment-level flexibility to impose more restrictive rules if done in writing.)
D. No, because the general policy expressly prohibits the use of AI for factual synthesis in all circumstances.

11. A student uses Lexis+ AI to help draft a brief, which the supervisor reviews and files. The next day, the student realizes the AI hallucinated a quote from a key case, which made it into the filed brief. Under the policy's incident response protocol, what is the student's first step?

A. Contact the tribunal directly to file an amended brief.
B. Notify the supervising attorney immediately so they can assess the scope of the error and determine correction obligations.
(Section 9 Step 1 requires immediately notifying the supervising attorney when an error is discovered.)
C. Run the prompt again to see if the AI generates the same hallucinated quote in order to complete the required process documentation.
D. Contact the client to explain that an error was made due to generative AI hallucination.

12. A student uses a university-licensed AI tool to analyze a dataset of local sentencing outcomes to help recommend a plea strategy. The AI suggests accepting a harsh plea deal based on historical conviction rates in that jurisdiction. The student incorporates this recommendation into the client memo without further critical thought. Which specific prohibited use has the student likely engaged in?

A. Relying on AI-generated recommendations that affect a client's matter without evaluating the output for potential systemic biases present in the training data.
(Section 4.2 prohibits relying on AI recommendations affecting a client without evaluating for bias, a specific risk noted in the policy.)
B. Entering sensitive client information into a university-licensed tool without express written permission from the presiding judge.
C. Using AI to perform factual investigation, which is strictly prohibited across all tool categories under the scope of the policy.
D. Failing to use a legal platform tool for statistical analysis, as university tools are strictly approved only for text-based drafting.

13. A student uses Westlaw AI to research the statute of limitations for a specific tort. The AI outputs a confident summary stating the limitation period is three years, citing a 1998 state supreme court case. Pressed for time, the student pastes this summary into their memo without reading the 1998 case. In reality, the limitation period was shortened to two years by a 2021 statute. Under the policy, what failure occurred here?

A. The student failed to use a university-licensed tool, which the policy designates as better suited for statute of limitations queries.
B. The student fell victim to automation bias and violated the strict prohibition against relying on GAI-generated legal citations without independent verification.
(Sections 4.2 & 6.1 strictly prohibit relying on AI citations without independent verification in an authoritative source.)
C. The student breached client confidentiality by entering the client's tort claim into a legal platform tool without supervisor consent.
D. The student violated the policy by using an AI tool for legal research, a task which the policy restricts solely to brainstorming and idea generation.

14. A panicked client emails the clinic asking what to do about a sudden eviction notice. The assigned student drafts a reassuring response using Copilot, which accurately explains the immediate next steps in the legal process. The student reads it, verifies it is legally accurate, and emails it directly to the client. Why is this a policy violation?

A. The student did not use a legal platform tool to draft the email.
B. The policy strictly prohibits using GAI to communicate directly with a client without supervisor review, regardless of whether the output is legally accurate.
(Section 4.2 prohibits using GAI to communicate directly with a client without full supervisor review.)
C. The student failed to attach a reflective note to the email sent to the client as required by the documentation rules.
D. Copilot is a general-purpose tool incapable of understanding local eviction laws, meaning the student engaged in the unauthorized practice of law.

15. A student is drafting a confidential settlement demand letter. They enter the client's full name, settlement bottom line, and detailed case history into Lexis+ AI to help refine the tone of the demand. The student acts under the belief that this is permitted because Lexis+ AI is categorized as a "Legal Platform Tool." Have they followed the policy?

A. Yes, because legal platform tools operate under contractual protections and are approved for use with client information, subject to provider terms and supervisor judgment.
(Section 5.4 explicitly permits the use of Legal Platform Tools with client information, making them the exception to the strict anonymization rules required for university/consumer tools.)
B. Yes, but only because the student was refining the tone of the letter rather than generating substantive new legal arguments.
C. No, because all client-identifying information must be stripped before being entered into any AI tool, including legal platform tools.
D. No, because settlement demands involve attorney mental impressions, which are prohibited inputs for all AI tools without exception.

This quiz accompanies the Sample Generative AI Policy. If you modify the policy's provisions, review the corresponding quiz questions to ensure they remain accurate.

Compiled by David S. Kemp for Rutgers Law Clinical Faculty. [About this Policy]