Circles' five principles for using AI in the classroom

A short statement to anchor classroom work.


We publish this because we get asked often: how does Circles use AI? What do you recommend to the educators you work with? This is our position. It is not a manual or a policy document — it is how we think about AI in teaching practice, written down so it can be debated, cited, and revised.

We apply these principles ourselves when designing courses and when using AI in our own work. We wrote them with classroom teachers in mind, but they apply equally to school leaders, teacher educators, and curriculum teams. Each principle stands on its own; you do not need to read them in order.


1. AI does not teach; you teach

AI can prepare materials, generate options, summarize documents, and save time on repetitive tasks. What it cannot do is know your students, read the room, make the right pedagogical call at the right moment, or sustain the relationship that makes learning possible. Those are your skills. Using them well is still what separates a good teacher from one who simply covers the material. AI is an assistant that never sleeps and never complains, but it needs you to decide what it does and what it does not do.

A question to sit with: What parts of your teaching are irreplaceable by any tool — and are you spending enough time on those parts?


2. What goes into the model matters

When you paste text into an external AI model, that text leaves your control. Student data — names, identification numbers, grades, psychological assessments, special education records, disciplinary logs — should not be pasted into tools that transmit it to external servers without clear privacy guarantees. Your jurisdiction's data protection rules impose obligations on how personal data may be shared with third parties, and minors' data typically has additional protections. Using AI does not suspend those obligations. The practical solution is straightforward: anonymize before pasting. Remove names, ID numbers, and any detail that could identify a specific student. The model can still help you — it only needs the pedagogical context, not the personal details.

A question to sit with: Do you know what the AI services you use regularly do with the data you enter? Have you read their privacy policy?


3. Every AI output is a draft

The text an AI model generates is a starting point, not a finished product. Models can invent data, cite papers that do not exist, give outdated advice, or simply be wrong — with the same confidence as when they are right. This does not make them useless; it makes them tools that require an expert to review them. You are that expert. Before using any AI output in your classroom — a rubric, a worksheet, a piece of feedback — read it completely and verify anything you cannot independently confirm. That includes dates, figures, references to laws or regulations, and any claim that seems too convenient.

A question to sit with: Do you have a consistent practice of reviewing AI outputs before using them, or do you pass them directly to students?


4. If AI can complete an entire assignment for a student, redesign it

This principle is uncomfortable, and it is the most important one. If you can give Claude a prompt and get the requested assignment back in 30 seconds, your students can too. That is not an academic integrity problem — it is a design signal. Assignments that consist only of producing a text, answering closed questions, or summarizing information are assignments AI does quickly and well. Assignments AI cannot complete for a student are those requiring a personal position, situated decisions, a documented process, or real interaction. Redesigning assessments so that AI is a resource rather than a solution is the most urgent pedagogical work of the coming years. There are no shortcuts.

A question to sit with: Take one of your regular assessments. What part of this task could an AI model not complete in 2 minutes? If the answer is "very little," what would you change?


5. The best adoption is collective

Using AI alone, without discussing it with colleagues, produces poor results for two reasons. First: usage standards remain inconsistent across teachers in the same school, which confuses students and creates inequities. Second: good practices do not spread. A colleague who found a prompt that works for giving feedback on writing in 3rd grade has something worth sharing. The schools navigating AI integration best are those with teaching teams who talk explicitly about what they use, how they use it, what works, and what does not. That does not require a perfect institutional policy — it requires regular, honest conversations.

A question to sit with: How many colleagues have you talked to about how you use AI in your teaching? Is there something you learned this week worth sharing?


These principles are a starting point, not a closed document. If you have an objection, an experience that challenges them, or something you think is missing, we want to hear it. You can write to us directly or bring it up in one of our courses — that is exactly the kind of conversation we want to have.