Trust & Safety

AI Safety Policy

Version 2.0 · Effective date: 15 April 2026

1. Our Commitment

Questcademy integrates artificial intelligence to enhance personalised learning for K-12 students across Ghana, Sierra Leone, Nigeria, and the United States. We recognise that using AI in education — particularly with children — carries significant responsibility.

This AI Safety Policy sets out the guardrails, moderation standards, and governance framework we apply to every AI-powered feature on the Platform. Our goal is simple: AI in Questcademy must always be safe, age-appropriate, privacy-preserving, and educationally sound.

2. Core Principles

🛡️

Safety First

No AI output may cause harm to a learner — emotionally, psychologically, or academically. Safety overrides optimisation in every design decision.

🎓

Education-Only Purpose

AI features exist solely to support learning. Non-educational chat, roleplay, and off-curriculum interaction are strictly prohibited.

🔒

Privacy by Design

AI never receives student names, email addresses, or other directly identifiable information. Only de-identified pedagogical context is used.

👶

Age-Appropriate by Default

Tone, vocabulary, and content complexity automatically adapt to the learner's age band and grade level.

🌍

Regional Sensitivity

AI outputs respect regional curriculum boundaries, cultural context, and local examination standards.

👁️

Human Oversight

AI is a tool supervised by educators and our safety team — never an autonomous decision-maker for student outcomes.

3. How AI Is Used

AI powers the following features on the Platform. Each feature operates within the safety boundaries defined in this policy:

| Feature | What AI Does | What AI Does NOT Do |
| --- | --- | --- |
| Adaptive Question Generation | Generates practice questions matched to the student's mastery level, difficulty tier, and curriculum context. | Does not create exam questions used for official grading or high-stakes assessment. |
| Hint & Explanation Engine | Provides step-by-step hints and explanations when a student struggles with a concept. | Does not give direct answers that bypass the learning process. |
| Misconception Detection | Identifies common misconceptions from response patterns and suggests targeted remediation. | Does not diagnose learning disabilities or make clinical assessments. |
| Learning Path Recommendation | Suggests the next concept or lesson based on the student's concept graph traversal and mastery state. | Does not make academic placement decisions or determine student grades. |
| Open-Ended Response Grading | Provides preliminary scoring suggestions for teacher review of written responses. | Does not issue final grades without teacher confirmation. |

4. Data Protection in AI

We enforce strict data minimisation when interfacing with AI systems:

  • What AI receives: concept_id, difficulty_tier, misconception_tags, regional_context, and an anonymised student_mastery_summary.
  • What AI never receives: student names, email addresses, dates of birth, school names, IP addresses, or any other personally identifiable information (PII).
  • No model training on student data: Student interactions are not used to train or fine-tune third-party AI models. Our AI providers are contractually prohibited from using Questcademy data for model improvement.
  • Prompt isolation: Each AI request is stateless — no conversation history, student identity, or cross-session context is retained by the AI.
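
To illustrate the data-minimisation boundary, the whitelist above can be sketched as follows. The field names come from this policy; the function name and dictionary structure are illustrative assumptions, not Questcademy's actual implementation:

```python
# Illustrative sketch of the data-minimisation whitelist described above.
# Field names are from this policy; build_ai_request is a hypothetical helper.

ALLOWED_FIELDS = {
    "concept_id",
    "difficulty_tier",
    "misconception_tags",
    "regional_context",
    "student_mastery_summary",  # already anonymised upstream
}

def build_ai_request(pedagogical_context: dict) -> dict:
    """Return a stateless, de-identified payload for the AI provider."""
    # Whitelist approach: unknown or identifying keys never pass through.
    return {k: v for k, v in pedagogical_context.items() if k in ALLOWED_FIELDS}

payload = build_ai_request({
    "concept_id": "fractions-add-01",
    "difficulty_tier": 2,
    "student_name": "Ama",           # PII: silently dropped
    "email": "ama@example.com",      # PII: silently dropped
})
# payload == {"concept_id": "fractions-add-01", "difficulty_tier": 2}
```

A whitelist (rather than a blacklist of known PII fields) fails closed: any field not explicitly approved is excluded by default.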

For full details on data handling, see our Privacy Policy.

5. Age-Appropriate Content

Every AI-generated output is calibrated to the student's age band. Our age-band policy matrix defines the tone, vocabulary, and content boundaries for each level:

| Learner Band | Tone & Vocabulary | Sensitivity Tier | Example Content |
| --- | --- | --- | --- |
| Early years & lower primary | Simple, encouraging, concrete | High | Foundational literacy/numeracy, visual support, praise |
| Upper primary & middle school | Guided, scaffolded, step-by-step | Medium | Worked examples, structured reasoning, study supports |
| Secondary & high school | Clear, explanatory, mastery-oriented | Medium | Deeper problem solving, subject explanations, strategy |
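
The matrix above can be expressed as a simple lookup. The band keys and helper name below are hypothetical identifiers; the tone and tier values mirror the table:

```python
# Illustrative lookup mirroring the age-band policy matrix above.
# Band keys and the function name are assumptions for this sketch.

AGE_BAND_POLICY = {
    "early_primary":  {"tone": "simple, encouraging, concrete",        "sensitivity": "high"},
    "middle_school":  {"tone": "guided, scaffolded, step-by-step",     "sensitivity": "medium"},
    "secondary_high": {"tone": "clear, explanatory, mastery-oriented", "sensitivity": "medium"},
}

def calibration_for(learner_band: str) -> dict:
    """Fail closed: an unrecognised band gets the strictest settings."""
    return AGE_BAND_POLICY.get(learner_band, AGE_BAND_POLICY["early_primary"])
```

Defaulting unknown bands to the highest sensitivity tier keeps calibration safe even if a learner's band is missing or malformed.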

6. Content Moderation

6.1 Always Allowed

  • Direct instructional support for curriculum topics.
  • Positive, age-appropriate motivational feedback.
  • Clarification of student misconceptions.
  • Region-aware examples within approved curriculum boundaries.

6.2 Strictly Prohibited

  • Collection, storage, or disclosure of personally identifiable information.
  • Violent, sexual, discriminatory, or otherwise inappropriate content.
  • Non-educational chat, open-ended roleplay, or social interaction.
  • Encouragement of cheating or academic dishonesty.
  • Invented examination codes, off-curriculum instruction, or unsupported factual claims.
  • Content that promotes self-harm, substance use, or dangerous behaviour.
  • Political, religious, or ideological advocacy unrelated to the curriculum.

6.3 Moderation Outcomes

Every AI-generated response is classified into one of three states before delivery to the student:

✅ PASSED

Output meets all safety and age-band requirements and is delivered as-is.

Example: "Sure! To add fractions like ½ and ¼, we first find a common denominator…"

⚠️ FILTERED

Output contained minor violations (e.g., overly informal slang) that were automatically corrected before delivery.

Example: "Yo, that's lit! 2+2 is 4, fam." → Filtered to: "That's correct! 2 + 2 = 4."

🚫 BLOCKED

Output contained major violations (PII, safety risk, biased content) and was fully suppressed. A safe fallback message is shown instead.
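
The three-state gate above can be sketched as a classification function. The detector patterns below are crude placeholders standing in for real PII and content classifiers, and the function name is an assumption:

```python
import re

# Minimal sketch of the PASSED / FILTERED / BLOCKED gate described above.
# The patterns are toy stand-ins for real PII and safety classifiers.

MAJOR_PATTERNS = [r"\S+@\S+"]  # placeholder: email-like strings as a PII proxy
MINOR_PATTERNS = {r"\byo\b": "", r"\bfam\b": "", r"\blit\b": "great"}

def moderate(output: str):
    """Classify an AI output; return (state, text_to_deliver)."""
    # Major violations: suppress entirely; a safe fallback is shown instead.
    for pat in MAJOR_PATTERNS:
        if re.search(pat, output, re.IGNORECASE):
            return ("BLOCKED", None)
    # Minor violations: auto-correct before delivery.
    state, text = "PASSED", output
    for pat, repl in MINOR_PATTERNS.items():
        if re.search(pat, text, re.IGNORECASE):
            state = "FILTERED"
            text = re.sub(pat, repl, text, flags=re.IGNORECASE)
    return (state, text.strip())
```

Checking major violations before minor ones ensures that no amount of filtering can rescue an output that should be suppressed outright.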

7. Safe Fallback Strategy

When an AI response is blocked, the student never sees the blocked content. Instead, the system displays a pre-approved safe fallback message, such as:

"I'm sorry, I'm having trouble finding the best way to explain that right now. Let's try another approach or check with your teacher!"
"That's an interesting question! For now, let's focus on our lesson. If you'd like to explore that further, ask your teacher for guidance."

Fallback messages are designed to be warm, non-judgemental, and to redirect the student back to productive learning. They never indicate why the response was blocked, to prevent adversarial probing.
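
The selection of a fallback could be as simple as the sketch below. The message pool reuses the two examples above; the selection logic is an assumption, not the production behaviour:

```python
import random

# Illustrative fallback selection. Messages are the pre-approved examples
# from this policy; the random choice is an assumed selection strategy.

SAFE_FALLBACKS = [
    "I'm sorry, I'm having trouble finding the best way to explain that "
    "right now. Let's try another approach or check with your teacher!",
    "That's an interesting question! For now, let's focus on our lesson. "
    "If you'd like to explore that further, ask your teacher for guidance.",
]

def safe_fallback(rng=None):
    """Return a pre-approved message; never echo or explain the blocked output."""
    return (rng or random).choice(SAFE_FALLBACKS)
```

Note that the blocked output and the block reason never appear in the returned message, which is what prevents adversarial probing.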

8. Bias & Fairness

We are committed to ensuring AI features are fair and equitable across all student demographics:

  • Cultural neutrality: AI-generated content avoids stereotypes, cultural assumptions, or biased representations. Regional adaptations (e.g., currency, measurement units) are handled via structured overlays, not AI improvisation.
  • Language fairness: Question difficulty and explanations are calibrated by subject mastery, not linguistic background. We test outputs across regional English variants (Ghanaian, Nigerian, American).
  • Equitable assessment: AI-assisted grading suggestions are always reviewed by a human teacher before affecting a student's record. No automated grading decision is final.
  • Ongoing evaluation: We regularly audit AI outputs for disparate impact across regions, genders, and age groups. Detected biases are escalated for immediate remediation.

9. Human Oversight

AI in Questcademy operates under meaningful human supervision at every level:

| Level | Who | What They Do |
| --- | --- | --- |
| Classroom | Teachers | Review AI-generated questions before assigning, confirm AI-suggested grades, override recommendations. |
| School | School Administrators | Monitor aggregate AI usage patterns and escalate concerns. |
| Platform | Questcademy AI Safety Team | Review moderation logs, investigate blocked outputs, update safety rules, and conduct quarterly policy audits. |

10. Audit & Governance

  • Every moderation decision (PASSED, FILTERED, or BLOCKED) is logged with a unique request ID, timestamp, learner band, and outcome.
  • Policy violations trigger an internal alert for mandatory review by the AI Safety Team within 24 hours.
  • This policy is reviewed and updated quarterly. Each version is tagged to the corresponding platform release.
  • All AI features must pass a validation test matrix before deployment, including:
    1. Adversarial Input Test: attempt to force PII disclosure, off-topic output, or inappropriate content through crafted prompts.
    2. Learner-Band Alignment Test: verify vocabulary, explanation depth, and tone match the target age band and grade level.
    3. Fallback Verification: confirm the safe fallback triggers correctly on high-sensitivity prompts and produces appropriate messages.
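
An audit record carrying the four logged fields named above might look like the sketch below. Only the field names come from this policy; the schema, helper name, and alert rule are assumptions:

```python
import uuid
from datetime import datetime, timezone

# Sketch of an audit log entry with the fields listed in this policy
# (request ID, timestamp, learner band, outcome). Everything else is assumed.

def audit_entry(learner_band: str, outcome: str) -> dict:
    assert outcome in {"PASSED", "FILTERED", "BLOCKED"}
    return {
        "request_id": str(uuid.uuid4()),                       # unique per request
        "timestamp": datetime.now(timezone.utc).isoformat(),   # UTC, ISO 8601
        "learner_band": learner_band,
        "outcome": outcome,
        # Assumption: any violation (FILTERED or BLOCKED) raises the
        # 24-hour mandatory-review alert described above.
        "alert": outcome != "PASSED",
    }
```

Using a fresh UUID per request lets a moderation decision be traced end-to-end without ever keying the log on a student identity.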

11. Limitations of AI

We are transparent about what AI can and cannot do:

  • AI-generated content is produced algorithmically and may occasionally contain errors, despite our safety guardrails.
  • AI is a supplementary learning tool — it does not replace qualified teachers, professional tutoring, or expert academic guidance.
  • AI cannot diagnose learning disabilities, make clinical assessments, or provide medical, legal, or psychological advice.
  • AI-generated recommendations (next lessons, difficulty adjustments) are suggestions, not mandates. Teachers and parents always have the final say.

12. Reporting Concerns

If you encounter AI-generated content that you believe is inappropriate, inaccurate, biased, or unsafe, please report it immediately:

AI Safety Team

Email: ai-safety@questcademy.edu

Include a description of the content, the subject/lesson context, and (if possible) a screenshot. All reports are reviewed within 24 hours.

Reports can also be submitted by teachers and school administrators directly within the Platform through the feedback mechanism on any AI-generated content.

13. Policy Updates

This policy is a living document. We update it as our AI capabilities evolve, new safety standards emerge, and regulatory requirements change. Material updates are communicated via in-app notification. Previous versions are archived and available upon request.

© 2026 Questcademy. All rights reserved.