Design an authentic assessment for higher education
- Tested on
- Claude Opus 4.7, May 2026
- Estimated time
- 20 min
- Time saved
- 2-3 hours
- Published
- 2026-05-10
- Last reviewed
- 2026-05-10
- Attribution
- Circles Team
Context
A university or post-secondary instructor who wants to replace or complement a traditional exam with an authentic assessment — a real-world challenge that students can't solve with AI alone or by memorizing content, and that demonstrates genuine competence.
Paste first
Before opening the model, have ready:
- The course learning outcomes (CLOs) this assessment should evidence
- The course level (first year, advanced, thesis seminar, etc.)
- The program's graduate profile or competencies the course belongs to
- Any constraints: time available for the assessment, whether it's individual or group, whether there's an oral component
Prompt
Act as an expert in authentic assessment design for higher education.
I need to replace (or complement) a traditional assessment in my course. The problem I have is that students can answer the current format using AI without demonstrating real understanding. I want to design something that requires genuine thinking and contextualized application.
**Course context:**
- Course name: {{course name}}
- Institution type: {{university / technical college — public or private}}
- Level: {{year or semester, e.g. "second year Civil Engineering"}}
- Learning outcomes I want to assess: {{paste the exact CLOs}}
- Relevant graduate profile or competencies: {{optional, but useful}}
- Possible format: {{individual / group / with oral component}}
- Time available for the assessment: {{hours, days, or weeks}}
**What I need:**
1. Three distinct authentic assessment proposals (one simple, one intermediate, one more complex), each with a task title, the student prompt, and the simulated or real context.
2. For each proposal: what makes this task difficult to complete with generative AI alone, and what evidence of genuine learning it produces.
3. A skeleton rubric (3–4 criteria with 4 levels) for the proposal you consider most viable.
4. An honest warning: what could go wrong with each proposal?
Don't give me tasks that sound good on paper but are impossible to grade. Be concrete: what does the student submit? How is it graded?
Expected output
**Proposal 1 (simple): Real case analysis with a stated position**
Task: From a real case in your region (instructor provides 3 cases to choose from), the student writes a 600-word memo addressed to a specific decision-maker, arguing a position.
Prompt: "You are an advisor to [position]. Problem X just occurred. Write a memo to the mayor with your recommendation, using at least two concepts from the course."
Why it's hard with AI: the model can draft the memo, but it can't choose the position or supply the local context unless the student provides it. A brief oral defense (5 min) reveals real comprehension.
Evidence of learning: contextualized application, taking a position, argumentation with course concepts.
**Skeleton rubric:**
Criterion 1: Application of course concepts — 4 levels...
Watch out for
- Don't paste previous students' work for the model to design the assessment. If you want to calibrate difficulty, describe the course's typical performance in general terms without identifying anyone.
- Poorly designed authentic assessments favor students with access to more resources (time, technology, networks). Factor this in when reviewing the model's proposals.
- AI may propose very creative assessments that take more grading hours than you have. Ask yourself "How long will it take me to grade this with 30 students?" before adopting one.
Suggested iteration
If no proposal fits your available time, ask: "I only have one week and 35 students. Proposal 2 looks good, but the oral component isn't feasible. How do you adapt it to stay authentic without the presentation?" If you want to include AI as part of the assessment rather than exclude it, ask: "Design a task where students deliberately use AI and then critique its output — make the reflection on the process the actual assessment."