Getting Started
AI Tools at Penn Law
Penn Law provides access to several AI platforms for faculty and staff. Here's what's available and how to get access.
General-Purpose AI Models
These are foundational AI models — they can write, analyze, summarize, brainstorm, and handle a wide range of tasks. Think of them as general-purpose assistants.
ChatGPT EDU
OpenAI's GPT models via Penn's institutional agreement — data protections built in. All 1Ls have accounts, plus Littleton Fellows and 1L TAs. Faculty and staff can request access — email ITShelp@law.upenn.edu.
Penn Law ITS guide (Penn Law access only)
Claude
Anthropic's AI assistant — strong at writing, analysis, long documents, and coding. Available to faculty using your research account — contact me for details at pwagner@law.upenn.edu.
Google Gemini
Google's AI model — strong at research, multimodal tasks (images, video, code), and integration with Google Workspace. Available at gemini.google.com with a Google account; paid tiers for advanced models.
Productivity Tools
AI features embedded in tools you already use — not standalone models, but AI built into your existing workflow.
Microsoft Copilot
Microsoft's AI assistant, integrated with the Office 365 apps you already use — Word, Outlook, Teams, Excel. Available through your Penn Carey Law O365 account.
Penn Law ITS guide (Penn Law access only)
Legal-Specific AI Tools
These tools are built specifically for legal work — trained on legal data, designed for legal research and drafting, with features tailored to how lawyers and law faculty work.
Harvey
Legal-specific AI for research, drafting, and analysis. Enterprise agreement with Penn Law — your data is not used for training. All upper-level students, full-time faculty, and staff have access. Log in with your LawKey username.
Log in to Harvey
Westlaw AI-Assisted Research
AI features built into Westlaw — natural language search, case analysis, and document review. Available through the law school's existing Westlaw subscription.
Lexis+ AI
LexisNexis's AI-powered legal research assistant — conversational search, document drafting, and summarization. Available through the law school's existing Lexis subscription.
Getting Set Up
Claude Code
Claude Code is Anthropic's AI coding and productivity tool — it works in your terminal, in VS Code, as a desktop app, or in your browser. Despite the name, it's not just for coding. I use it for writing, research, document production, and administrative tasks. It's the tool behind the Claude Code Skills listed on this page and on the pedagogy portal.
You need a paid Claude subscription (Pro, Max, Team, or Enterprise) to use it. The free tier won't cut it — you need the pro models and the privacy controls that come with a paid plan.
Getting Started
- Overview & installation — install the CLI, VS Code extension, or desktop app
- Quickstart — walk through your first task
- Law faculty skills — my open-source skills for faculty tasks (install instructions in the README)
If you want help getting set up, email me — I'm happy to walk you through it.
Orientation
What Can AI Actually Do?
The Short Version
The AI tools listed above are all built on large language models (LLMs) — software trained on enormous amounts of text that can generate fluent, often remarkably useful responses to natural-language prompts. You don't need to understand the engineering. The practical takeaway: these tools are very good at working with language, and not so good at everything else.
Think of it this way: you have a very fast, very well-read research assistant who sometimes makes things up. That's not a knock — it's the right mental model. When you treat the output as a strong first draft that needs your judgment and verification, these tools can save you real time.
Where AI is Genuinely Useful
Drafting and writing. LLMs are excellent first-draft machines. Emails, memos, syllabi, recommendation letters, committee reports — give it context and a clear prompt, and you'll get a solid starting point in seconds. I use this daily.
Brainstorming and outlining. When you're staring at a blank page, AI is a surprisingly good thought partner. It won't have your ideas, but it will give you a structured framework to react to — which is often exactly what you need to get moving.
Summarizing and explaining. Drop in a long document, article, or set of comments and ask for a summary. Ask it to explain a technical concept in plain language. This works well and can save significant time on reading-heavy tasks.
Research assistance. Harvey and the Westlaw/Lexis AI tools are designed specifically for legal research — finding relevant cases, statutes, and secondary sources. They're not a replacement for careful research, but they can accelerate the early stages significantly.
Where AI Falls Short
It makes things up. This is the big one. LLMs generate plausible-sounding text, and sometimes that text is simply wrong — fabricated case names, invented statistics, confident but incorrect legal analysis. The field calls this "hallucination." It's not a bug that's getting fixed next quarter; it's a fundamental feature of how these models work. Always verify anything that matters.
Math and precision. LLMs are language tools, not calculators. They'll get basic arithmetic right most of the time, but anything involving complex calculations, data analysis, or precise quantitative reasoning should be checked independently.
Confidentiality. When you type something into an AI tool, that text goes to a server. I recommend using only paid, enterprise-tier tools for professional work — and checking your privacy settings. More on this in the Policies & Guidelines tab.
It doesn't "understand" anything. This is worth saying plainly: LLMs don't know what they're saying. They predict the next word based on patterns in training data. The output can be impressive — even insightful — but there's no reasoning happening behind the curtain the way there is when you think through a problem. Your judgment is not optional.
Use Cases & Tips
Getting Better Results
Most people try an AI tool once, get a mediocre answer, and conclude it's not that useful. The difference between a mediocre answer and a genuinely helpful one usually comes down to how you ask. Here's what I've learned works.
Be Specific About What You Want
Don't just say "write me a memo." Say "write a two-page memo to the faculty curriculum committee recommending we add a course on AI regulation, in a professional but collegial tone." The more you specify — format, length, audience, tone — the better the output. Vague prompts get vague results.
Give It Context
AI tools work dramatically better when you give them something to work with. Paste in the document you want summarized. Copy in the email thread you need to respond to. Describe the situation in enough detail that a smart colleague could help you. Context is the single biggest lever you have.
Iterate — Treat It as a Conversation
The first response is almost never the final product. Push back. Say "make this more concise" or "you missed the point about X" or "rewrite the second paragraph in a more formal tone." These tools respond well to iteration, and the back-and-forth is where the real value emerges.
Ask It to Critique Its Own Work
One of the most underused techniques: after the AI gives you a draft, ask it to identify weaknesses in what it just wrote. "What are the strongest objections to this argument?" or "What did you leave out?" This often surfaces issues you'd catch on your own — but faster.
For a more detailed guide to prompting, the AI Law Lab has put together a comprehensive resource:
AI Law Lab Prompt Engineering Guide
Use Cases
What Can You Do With AI?
Research & Analysis
Summarize long documents, explore unfamiliar areas of law, find patterns in data, get up to speed on a topic quickly. Works best when you can verify the output.
Drafting & Writing
Draft emails, memos, reports, recommendation letters, grant applications, committee documents. Give it your existing text to revise, or describe what you need and iterate.
Data & Administration
Analyze survey results, clean up spreadsheets, prepare meeting agendas, summarize long email threads, draft routine communications.
Teaching
AI tools for the classroom — syllabus language, exam generation, virtual TAs, and more. We have a full set of guides on the pedagogy portal.
Visit Pedagogy Resources
Patterns That Work
A few specific approaches I come back to again and again:
- Paste in a draft and ask it to critique the argument
- Before a meeting, ask it to summarize the background materials
- Ask it to explain a concept as if to a specific audience
- Use it to generate multiple options, then pick the best one
- When it gets something wrong, tell it — it adjusts
Tools & Setup
Getting More Out of Claude Code
New to Claude Code? Start with the basics on the Getting Started tab. If you've already set it up and want to do more, here are some features worth knowing about.
CLAUDE.md — Persistent Instructions
Drop a file called CLAUDE.md in any project folder and Claude Code reads it at the start of every session. Use it for coding standards, project context, preferred conventions — anything you'd otherwise repeat every time. I use mine to set voice and formatting preferences so I don't have to re-explain them. Documentation →
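To make this concrete, here's what a minimal CLAUDE.md for a non-coding project might look like. The contents below are invented for illustration; yours should capture whatever you find yourself repeating.

```markdown
# Project: Curriculum committee materials

## Voice and formatting
- Professional but collegial tone; write as a colleague, not a vendor.
- Memos are two pages max unless I say otherwise.
- Use Oxford commas.

## Context
- "The committee" means the faculty curriculum committee.
- Working drafts live in drafts/; final versions go in final/.
```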
Custom Skills
Skills are reusable prompts you can install and invoke by name — like /commit or /review-pr. The law faculty skills I've built are examples of this. You can also create your own for any workflow you repeat. Documentation →
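The simplest version of this is a custom slash command: a Markdown file in your project's `.claude/commands/` folder whose contents become the prompt when you type its name. A sketch (the filename and task are invented; see the documentation for the exact conventions, including the richer skills format):

```markdown
<!-- .claude/commands/summarize-comments.md -->
Extract every margin comment from the attached .docx file and produce
a one-page summary grouped by theme, with each commenter's name in
parentheses after the points they raised.
```

With that file in place, typing /summarize-comments in a session runs the prompt without retyping it.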
MCP — Connecting to External Tools
The Model Context Protocol lets Claude Code connect to external services — Google Drive, Gmail, calendars, databases, Slack, and more. This is how I have Claude draft emails, check my calendar, and pull documents without leaving the conversation. Documentation →
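Project-level MCP servers are declared in a small JSON file (`.mcp.json`) at the project root. A hedged sketch of the shape; the server package named here is hypothetical, so substitute a real MCP server from the documentation:

```json
{
  "mcpServers": {
    "drive": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server-drive"]
    }
  }
}
```

Claude Code also has a `claude mcp` CLI subcommand for adding and listing servers interactively, which is the easier route if you'd rather not edit JSON.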
Multiple Environments
Claude Code works in the terminal, VS Code, the desktop app, and the web. Your settings, CLAUDE.md files, and MCP servers carry across all of them. Start a task on your laptop, pick it up from your phone.
Full documentation: code.claude.com/docs
For the Technically Curious
Model APIs
Everything above — ChatGPT, Claude, Harvey — uses a conversational interface. But the same AI models are also available through APIs, which let you build your own tools, automate workflows, and integrate AI into custom applications. You don't need to be a software engineer to find this useful — if you can write a basic script (or ask an AI to write one for you), you can use an API.
Why Use an API?
The conversational tools are great for one-off tasks. But if you find yourself doing the same thing over and over — processing a batch of documents, grading with a rubric, extracting data from a set of files — an API lets you automate it. You write a script once, and it runs the same prompt across hundreds of inputs without you copy-pasting anything.
APIs also give you more control: you can choose the model, adjust parameters like temperature (how creative vs. deterministic the output is), and build multi-step workflows where the output of one call feeds into the next.
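Here's a minimal sketch of that batch pattern against Anthropic's Messages API using only the Python standard library. The model name, folder layout, and prompt are illustrative assumptions; the official SDKs (below) make this shorter, but the structure is the same: build one request, run it over many inputs.

```python
import json
import os
import pathlib
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(doc_text: str, temperature: float = 0.2) -> dict:
    """Build one Messages API request body.

    The model name is an assumption; check Anthropic's docs for current
    identifiers. Lower temperature gives more deterministic output,
    which is usually what you want for summarization.
    """
    return {
        "model": "claude-sonnet-4-20250514",
        "max_tokens": 500,
        "temperature": temperature,
        "messages": [{
            "role": "user",
            "content": "Summarize this document in three bullet points:\n\n"
                       + doc_text,
        }],
    }

def summarize_folder(folder: str) -> None:
    """Run the same prompt over every .txt file in a folder."""
    for path in sorted(pathlib.Path(folder).glob("*.txt")):
        req = urllib.request.Request(
            API_URL,
            data=json.dumps(build_request(path.read_text())).encode(),
            headers={
                "x-api-key": os.environ["ANTHROPIC_API_KEY"],
                "anthropic-version": "2023-06-01",
                "content-type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            reply = json.load(resp)
        print(f"--- {path.name} ---\n{reply['content'][0]['text']}\n")

# To run: set ANTHROPIC_API_KEY, then call summarize_folder("documents")
```

The point of the design is the split: `build_request` holds the prompt and parameters in one place, so changing the model or temperature changes it for every document at once.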
Anthropic (Claude) API
Anthropic's API gives you direct access to the Claude models — the same ones powering Claude Code and the Claude chat interface, but programmatically. Strong at writing, analysis, long documents, and coding tasks.
- Documentation
- Getting started
- Pricing — pay per use; even fairly heavy use typically costs only a few dollars
OpenAI API
OpenAI's API gives you access to the GPT models (GPT-4o, o3, etc.) — the same models behind ChatGPT, but with full programmatic control. Broad capabilities across writing, reasoning, and multimodal tasks.
Other Models
The AI model landscape is broader than just Anthropic and OpenAI. Google's Gemini models are available through their API with similar capabilities. There's also a growing ecosystem of open-source models — Meta's Llama, Mistral, and others — that you can run locally or through hosting providers, sometimes for free. I'm happy to discuss options if you're exploring this space.
Advanced Computing
If you're doing heavy data work — training models, running large-scale analyses, or working with datasets that don't fit on a laptop — the Penn Advanced Research Computing Center (PARCC) provides high-performance computing clusters and storage for faculty research. Niche interest for most, but essential if you need it.
If you're interested in working with APIs and want help getting started, email me. I can point you to examples and walk through the basics.
Claude Code Skills
AI-Powered Tools for Faculty
I've built a set of open-source Claude Code skills for common faculty tasks — install the ones you want and use them in natural conversation. These require a paid Claude subscription. Email me if you want help getting set up.
Memo & Document Production
Produce formatted .docx memos and documents with Penn Carey Law letterhead — proper margins, fonts, and logo. Also includes PDF rendering from Markdown.
View on GitHub
Email Drafting
Draft emails and professional communications in your voice — replies, declines, invitations, follow-ups. Learns your style and preferred sign-off.
View on GitHub
Document Comment Summary
Extract and summarize all comments from Word (.docx) files into a clean report. Useful for compiling reviewer feedback on drafts, committee documents, or student papers.
View on GitHub
PDF Rendering
Convert Markdown files to polished, professionally formatted PDFs in Penn Carey Law house style. Reading lists, handouts, reports.
View on GitHub
Rex (Critical Reviewer)
A senior engineering critic persona that reviews code, plans, designs, and documents. Finds problems before they ship — blunt, specific, actionable feedback.
View on GitHub
Eddie (Senior Editor)
Editorial review of any document — checks factual accuracy, citations, internal consistency, institutional sensitivity, voice/style, and AI-specific failure modes. Prioritized revision report with self-check.
View on GitHub
For teaching-specific skills — exam question generators, class prep, slide review — see the Claude Code Skills section on the Pedagogy Resources portal.
Full list with installation instructions: github.com/polkwagner/law-faculty-claude-skills
Policies & Guidelines
Using AI Responsibly
AI tools are powerful, but they come with real risks around data, accuracy, and confidentiality. Here's what you need to know.
Use Pro-Tier Models — Always
I want to be very clear about this: do not use free-tier AI tools for any professional work. Free versions of ChatGPT, Claude, and other services may use your inputs as training data, offer weaker models, and lack the privacy controls you need. Always use the paid, pro-tier versions — and typically the most powerful model available.
Just as important: check your privacy settings. Even on paid tiers, most AI tools have settings that control whether your conversations are used for model training. Turn that off. On ChatGPT, it's under Settings → Data Controls. On Claude, it's under Settings → Privacy. Do this before you start using the tool for real work.
Penn Law provides institutional access to Harvey and ChatGPT EDU — these have enterprise agreements with data protections built in (see the ChatGPT EDU FAQ; Harvey details available from me). Use them. If you're using Claude or another tool on your own, make sure you're on a paid plan with training opt-outs enabled.
What's Generally Safe to Share
When you're using a properly configured pro-tier or enterprise tool:
- Published materials and public information
- Your own draft text and work product
- General questions about law, pedagogy, or administration
The key distinction is between information that's already public (or your own work product) and information that belongs to someone else or is institutionally confidential.
Information Security and Privacy
As with any technology, it's worth thinking about what information you're sharing with AI tools — student data, personnel matters, confidential deliberations, and so on. Penn has university-wide guidance on AI use that covers information security and privacy, and the specific terms of service for each tool spell out how your data is handled. I'd encourage you to be familiar with both.
If you have questions about a particular use case, reach out — I'm happy to think through it with you.
Our Institutional Tools
- Harvey — enterprise agreement with Penn Law. Your inputs are not used to train models. Appropriate for most work-related tasks.
- ChatGPT EDU — Penn's institutional agreement with OpenAI includes data protections. Your conversations are not used for model training. See the ChatGPT EDU FAQ for details.
- Claude, Copilot, and other tools — if you're using these independently, make sure you're on a paid plan, using the strongest model available, and have confirmed that your data is not being used for training. Check the privacy settings — they're not always set correctly by default.
Responsible Use
Accuracy, Attribution, and Bias
AI Hallucinates — Verify Everything That Matters
I said this in the Getting Started tab, and I'll say it again here because it's the single most important thing to understand about these tools: LLMs generate plausible text, not verified truth. They will fabricate case citations, invent statistics, and present made-up facts with complete confidence.
This isn't a minor issue. A lawyer was sanctioned for filing a brief with AI-fabricated case citations. Law review articles have been submitted with invented sources. It happens because the output looks right — and when you're moving fast, it's easy to trust it. Don't. Anything you plan to rely on, share externally, or put your name on should be independently verified.
Attribution
When and how to disclose AI use is still evolving, but the direction is clear: transparency is the right default. If AI contributed meaningfully to a piece of work, say so. For faculty publications, grant applications, and student-facing materials, err on the side of disclosure.
Professor Catherine Struve has put together a thoughtful guide on AI and attribution that's worth reading:
Struve Guide on AI Attribution (Pedagogy Portal)
Bias
AI models reflect the biases present in their training data. This is well-documented and worth keeping in mind — especially in contexts that affect people directly: hiring decisions, admissions-related work, student evaluations, or any process where fairness matters. AI output can be a useful input, but it shouldn't be the sole basis for consequential decisions about people.
Penn Policies
Institutional Guidelines
Penn Law Exam Policies
AI policies for exams are set by individual faculty and administered through the Registrar's office. The pedagogy portal has current guidance on exam AI policies, including model syllabus language and the different policy tiers available:
Penn Law Pedagogy Resources — Exams
University-Level AI Guidance
Penn has published institutional guidance on AI use:
- Statement on Guidance for the Penn Community on Use of Generative AI — the university-wide framework covering transparency, data privacy, and security
- ChatGPT EDU FAQ — data handling, account access, and usage guidelines for Penn's enterprise ChatGPT
Specific policies vary by context. Research use, classroom use, and administrative use may each have different considerations. When a situation doesn't fit neatly into the guidance above, reach out — I'm happy to think through it.
When in Doubt, Ask
AI policy at Penn and Penn Law is evolving. When in doubt about whether a particular use is appropriate, reach out — I'm happy to think through it with you. pwagner@law.upenn.edu
AI at Penn Law
What We're Building
Penn Carey Law has built meaningful AI infrastructure over the past two years across curriculum, technology partnerships, faculty development, student programs, and research. For an overview, see Forging the Future: AI at Penn Carey Law. Here's what's in place.
Curriculum
AI is integrated into the 1L Legal Practice Skills program — students engage AI tools as part of foundational legal training, not as an elective. Faculty across the curriculum have also experimented with AI-integrated assignments, simulations, and new assessment approaches.
Technology Partnerships
We have institutional partnerships with Harvey — one of the leading legal AI platforms, used by major law firms — and ChatGPT EDU, supporting AI use across teaching, research, and administration. Details and access info are on the Getting Started tab.
Faculty Support
Workshops on AI use cases, pedagogy, and tool adoption. A faculty AI toolkit with practical guidance and best practices. Regular communications keeping faculty current on developments. I coordinate all of this — reach out if you want to get involved.
Student Programs
The Madhani Legal Tech Fellowship supports students building legal technology ventures — an established entrepreneurial pathway connecting law students to the legal tech ecosystem.
Faculty Research
Multiple faculty are conducting active research on AI and law — spanning governance, intellectual property, regulatory frameworks, and the structure of legal work.
AI Announcements List
I maintain a mailing list for news about AI at Penn Law — new tools, policy updates, workshops, and anything else worth knowing. Low volume, high signal. Contact me at pwagner@law.upenn.edu to join.
Faculty Support
AI Office Hours
AI Office Hours: What's Worth Knowing
The current AI landscape for law faculty — the models worth using, the shift from chatbots to agents, and resources to get started. Reference screen from the April 1, 2026 faculty session.
View session screen
AI at Penn
The AI Law Lab
I run the AI Law Lab — Penn Law's initiative to help faculty and students navigate AI in legal education and practice. The Lab produces guides and resources, runs workshops and training sessions, provides access to AI tools, and supports faculty who want to experiment with AI in their teaching and research. If you've used the guides and skills on this site, you've already been using the Lab's work.
The Lab maintains a full resource menu with everything we offer — guides, tools access, workshop schedules, and more.
For teaching-specific AI resources — syllabus language, exam policies, classroom tools, and pedagogy guides — see the companion portal:
Teaching-specific AI guides on the Pedagogy Resources portal
AI Across Penn
Penn-Wide AI Initiatives
There's a lot happening with AI across Penn. Here are the initiatives and resources most worth knowing about.
Penn AI
Penn's central AI initiative — the university-wide hub for AI research, education, and events across all 12 schools. Good starting point for understanding the broader landscape.
Visit Penn AI
Penn AI Guidance
The university's official guidance on responsible use of generative AI — covers data privacy, security, and transparency expectations for faculty, staff, and students.
Read guidance
Wharton AI & Analytics Initiative
Wharton's integrated approach to AI and analytics — industry partnerships, research, student programs, and events. One of the most active AI efforts on campus.
Visit site
Penn Engineering AI
SEAS AI program — home to Penn's graduate AI degree (the first at an Ivy), research labs, and the new Amy Gutmann Hall for data science and AI.
Visit site
Penn Libraries AI Guide
The library's guide to AI tools and best practices — covers AI concepts, notable tools across domains, and practical guidance. A good resource to share with students and RAs.
View guide
ChatGPT EDU FAQ
Penn ISC's FAQ on the institutional ChatGPT EDU deployment — account access, data privacy, and usage guidelines.
View FAQ
PARCC
Penn Advanced Research Computing Center — high-performance computing clusters, GPU resources, and large-scale storage for data-intensive research. Niche, but essential if you're doing heavy computational work.
Visit site
Penn AI Fellows Program
A fellowship for postdocs and advanced grad students whose research involves AI — includes funding, mentoring, and a cross-disciplinary seminar. Law students doing AI-related work are encouraged to apply. Tell your students and RAs about this.
Learn more & apply
Know of a Penn AI initiative I should include here? Let me know.
Further Learning
Reading & Resources
For teaching-focused scholarship on AI and legal education, see the Reading & Research section on the Pedagogy Resources portal.
Below are the sources I follow most closely and recommend to colleagues. This is how I keep up — and honestly, keeping up is half the challenge.
Blogs & Newsletters
These are the writers I read consistently. They explain what's happening in AI clearly and honestly, without hype.
One Useful Thing
Ethan Mollick (Wharton) on AI's implications for work, education, and life. The single best resource for academics thinking about AI — practical, grounded, and updated frequently. If you read one thing on this list, make it this.
Subscribe
Simon Willison's Weblog
Deep, technical-but-accessible writing on LLMs, prompt engineering, and building with AI. Willison is one of the most thoughtful voices on how these tools actually work and what you can do with them.
Read blog
Stratechery
Ben Thompson on technology strategy — not AI-specific, but his AI coverage is among the best for understanding the business and policy implications. Paid, but worth it.
Visit site
Import AI
Jack Clark's weekly newsletter on AI policy, research, and capabilities. Clark co-founded Anthropic and previously led policy at OpenAI — excellent on the intersection of AI and governance.
Subscribe
News & Analysis
Ars Technica — AI
Strong technical reporting on AI developments — new models, capabilities, policy, and the occasional reality check. Good signal-to-noise ratio.
Read coverage
The Verge — AI
Accessible AI coverage aimed at a general audience — product launches, policy developments, and the cultural impact of AI tools.
Read coverage
Legal & Academic
AALS AI Resources
The Association of American Law Schools' collection of AI and legal education resources — reports, panel recordings, and guidance for law faculty.
Visit page
Stanford HAI
Stanford's Institute for Human-Centered AI — research, policy briefs, and the annual AI Index report. The best single source for data on where AI capabilities actually stand.
Visit site
Anthropic Research
Anthropic's research blog — technical papers on AI safety, interpretability, and capabilities. More technical than the others, but their safety work is worth following.
Read research
Have a source I should add? Let me know.
Tell Me What You're Doing
If you're using AI in interesting ways — for teaching, research, administration, anything — I want to hear about it. What's working, what's not, what you wish existed. This helps me figure out where to focus the Lab's efforts and what resources to build next.