How to Design Tests in the Age of AI
In this era of learning, everything has changed — not only how students learn but also how they prove that they have learned. Students today use tools such as ChatGPT, Grammarly, or AI math solvers as part of their daily routine. While this technology enables learning, it also makes conventional models of assessment based on memorization, essays, or homework far less meaningful.
So the challenge educators face today is:
How do we create fair, substantial, and authentic tests in a world where AI can produce “perfect” answers in seconds?
The solution isn’t to prohibit AI — it’s to redefine the assessment process itself. Let’s look at how.
1. Redefining What We’re Assessing
For generations, education has asked students what they know — formulas, facts, definitions. But machines can recall all of that in the blink of an eye, so tests based on memorization are becoming increasingly irrelevant.
In the AI era, we must test what AI does not do well:
- Critical thinking — Can students evaluate the information AI presents?
- Creativity — Can they leverage AI as a tool to make new things?
- Ethical thinking — Do they know when and how to apply AI in an ethical manner?
- Problem setting — Can they frame a problem before looking for a solution?
Try reframing questions: rather than asking “Explain the causes of World War I,” ask “If AI composed an essay on the causes of WWI, how would you analyze its argument or position?”
This shifts the attention away from memorization and toward analysis.
2. Creating “AI-Resilient” Tests
An AI-resilient assessment is one where even if a student uses AI, the tool can’t fully answer the question — because the task requires human judgment, personal context, or live reasoning.
Here are a few effective formats:
- Oral and interactive assessments: Ask students to explain their thought process verbally. You’ll see instantly if they understand the concept or just relied on AI.
- Process-based assessment: Rather than grading the final product alone, grade the process — brainstorming, drafts, feedback, revisions.
Have students record how they used AI tools ethically (e.g., “I used AI to grammar-check but wrote the analysis myself”).
- Scenario or situational activities: Provide real-world dilemmas that require interpretation, empathy, and ethical thinking — areas where AI still falls short.
Example: “You are an instructor in a class of learners from diverse backgrounds. How do you use AI to help them without introducing bias?”
- Metacognitive activities: Instruct students to compare or critique AI responses against their own ideas. This compels students to think about their thinking — an important metacognitive exercise.
3. Designing Tests That Are “AI-Inclusive,” Not “AI-Proof”
It’s a futile exercise to try to make everything “AI-proof.” Students will always find new ways of using the tools. Instead, tests need to accept AI as part of the process.
- Teach AI literacy: Demonstrate how to use AI to research, summarize, or brainstorm — responsibly.
- Request disclosure: Have students report when and how they utilized AI. It encourages honesty and introspection.
- Mark not only the result, but the thought process as well: Have students discuss why they accepted or rejected AI suggestions.
Example prompt:
- “Use AI to create three possible solutions to this problem. Then critique them and let me know which one you would use and why.”
This makes AI a study buddy, and not a cheat code.
4. Blending Technology with the Human Touch
AI should not drive teachers away from students — it should draw them closer by making assessment more human and participatory.
Ideas:
- Blend digital portfolios (AI-assisted writing, code, or design work) with face-to-face discussion of the student’s process.
- Tap into peer review sessions — students critique each other’s work, with human judgment set against AI-produced output.
- Use live, interactive quizzes — in which the questions change depending on students’ answers, so tests stay dynamic and unpredictable.
Human element: a student may use AI to polish a report, but a live presentation reveals how deep their understanding really goes.
5. Fairness and Integrity
Academic integrity looks different in the age of AI. Cheating is no longer just plagiarism; it is over-relying on tools without understanding the work.
Teachers can promote equity by:
- Having clear AI policies: Establishing what is acceptable (e.g., grammar assistance) and not acceptable (e.g., writing entire essays).
- Employing AI-detection software responsibly — not to sanction, but to encourage an open discussion.
- Requesting reflection statements: “Tell us how you used AI in completing this assignment.”
This builds trust, not fear, and shows that teachers care more about effort and integrity than about perfection.
6. Rethinking Feedback in the AI Era
- AI can speed up grading, but feedback must be human. Students learn optimally when feedback is personal, empathetic, and constructive.
- Teachers can use AI to produce first-draft feedback reports, then revise with empathy and personal insight.
- Have students use AI to edit their work — but ask them to explain what they learned from the process.
- Focus on growth feedback — learning skills, not grades.
Example: instead of an “AI plagiarism detected” alert, send a message like “Let’s discuss how you can use AI responsibly to enhance your writing instead of replacing it.”
7. From Testing to Learning
The most powerful change can be this one:
- Testing no longer has to be a judgment — it can be a journey.
AI dispels the myth that tests are the sole way of demonstrating what has been learned. Instead, tests become an act of self-discovery and skill-building.
Teachers can:
- Substitute high-stakes testing with continuous formative assessment.
- Incentivize creativity, critical thinking, and ethical use of AI.
- Help students learn from AI rather than dread it.
Final Thought
- The era of AI is not the end of actual learning — it’s the start of a new era of testing.
- A time when students won’t be tested on what they’ve memorized, but how they think, question, and create.
- An era where teachers are mentors and artists, leading students through a virtual world with sense and sensibility.
- When exams encourage curiosity rather than compliance, thinking rather than repetition, judgment rather than imitation — then AI is not the enemy but the ally.
The goal is not to outsmart AI, but to make students smarter, more ethical, and more human in a world of AI.
If Students Are Able to “Cheat” Using AI, How Should Exams and Assignments Adapt?
Artificial Intelligence (AI) has disrupted schools in ways no one had envisioned a decade ago. With ChatGPT, QuillBot, Grammarly, and AI-powered math solvers, students can write essays, summarize chapters, solve equations, and even simulate critical thinking — all in mere seconds. No wonder educators everywhere are on edge: if students can “cheat” using AI, does testing even mean anything anymore?
But the more profound question is not how to prevent students from using AI — it’s how to rethink learning and evaluation in a world where information is abundant, access is instantaneous, and automation is feasible. Rather than looking for AI-proof tests, educators can create AI-resistant, human-scale evaluations that demand reflection, imagination, and integrity.
Let’s consider what assignments and tests should look like so that education still matters with AI at every student’s fingertips.
1. Reinventing What’s “Cheating”
Historically, cheating meant copying someone else’s work or getting unauthorized help. But in 2025, AI has blurred the line. When a student uses AI to generate ideas, proofread for grammatical mistakes, or reword a piece of writing — is it cheating, or just taking advantage of smart technology?
The answer lies in intention and awareness:
Example: a student who has AI produce their essay isn’t learning. But a student who uses AI to outline arguments and structure, then composes their own work, is showing progress.
Teachers need to begin by explaining — not punishing — what good use of AI looks like.
2. Beyond Memory Tests
Rote memorization and fact-recall tests are obsolete in the age of AI. Anyone can access definitions, dates, or equations instantly. Tests must therefore change to measure what machines cannot instantly fake: understanding, reasoning, and imagination.
The aim isn’t to trap students — it’s to let actual understanding come through.
3. Building Tests That Respect Process Over Product
If the final product can be automated to perfection, then we should grade the path taken to get there.
One robust transformation: by asking students to reflect on why they used AI and what they learned from it, cheating gives way to self-reflection.
4. Using Real-World, Authentic Tests
Real life rarely takes the form of a closed-book test. It involves solving problems for ourselves, working with other people, and making choices — precisely the places where human beings and machines need to work together.
So tests need to reflect real-world issues:
Example: rather than “Analyze Shakespeare’s Hamlet,” ask a literature student, “How would an AI interpret Hamlet’s indecisiveness — and what would it misunderstand?”
That’s not just a test of literature — it’s a test of human perception.
5. Designing AI-Integrated Assignments
Rather than prohibit AI, let’s build it into the assignment. Not only does that recognize reality, it also teaches digital ethics and critical thinking.
For example: projects that build AI literacy — teaching students how to review, revise, and refine machine-generated content.
6. Building Trust Through Transparency
Fear of AI cheating stems from a loss of trust between students and teachers. That trust must be rebuilt through openness.
If students see honesty being practiced, they are more likely to practice it themselves.
7. Rethinking Tests for the Networked World
Old-fashioned timed tests — silent rooms, no computers, no conversation — no longer reflect how people actually think and work. The future of testing is adaptive, interactive, and human-facilitated.
Such models make cheating virtually impossible — not because rules are rigidly enforced, but because they demand real-time thinking.
8. Maintaining the Human Heart of Education
So the teacher’s job now needs to shift from tester to guide and architect — helping students apply AI properly and develop the distinctively human abilities machines can’t replicate: curiosity, courage, and compassion.
Last Thought
In the age of AI, the real question for students is no longer “What do you know?” but rather how they think, question, and create with what they know.