TESLA Certified

Trust & Transparency

At Lit-Levels, we believe educators deserve complete visibility into how student data is calculated, analyzed, and presented. This page documents our commitment to accuracy, fairness, and algorithmic transparency.

Accuracy Calculation (SUM/SUM) [Canonical]

Student accuracy is calculated using the SUM/SUM methodology, which ensures consistent, fair measurement across all contexts:

Accuracy = (Total Correct Answers) / (Total Questions Attempted) × 100

This canonical formula is used identically across Teacher Dashboards, Admin Dashboards, and all reports. We do not average percentages; we aggregate raw counts first, then calculate a single percentage. Averaging per-session percentages would give a two-question warm-up the same weight as a fifty-question assessment, distorting the result. Aggregating counts first eliminates that mathematical artifact.

  • Teacher and Admin dashboards show identical accuracy values (0% delta)
  • All API endpoints use the same calculation module
  • Automated regression tests verify parity continuously
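The SUM/SUM approach above can be sketched as follows. The type and function names (`SessionResult`, `sumSumAccuracy`) are illustrative, not Lit-Levels' actual API; the second function exists only to show the distortion that averaging percentages would introduce.

```typescript
interface SessionResult {
  correct: number;   // correct answers in this session
  attempted: number; // questions attempted in this session
}

// Canonical SUM/SUM: aggregate raw counts first, then compute one percentage.
function sumSumAccuracy(sessions: SessionResult[]): number {
  const totalCorrect = sessions.reduce((sum, s) => sum + s.correct, 0);
  const totalAttempted = sessions.reduce((sum, s) => sum + s.attempted, 0);
  return totalAttempted === 0 ? 0 : (totalCorrect / totalAttempted) * 100;
}

// For contrast only: averaging per-session percentages over-weights
// small sessions. This is the approach the page says is NOT used.
function averagedPercentages(sessions: SessionResult[]): number {
  const pcts = sessions
    .filter((s) => s.attempted > 0)
    .map((s) => (s.correct / s.attempted) * 100);
  return pcts.length === 0 ? 0 : pcts.reduce((sum, p) => sum + p, 0) / pcts.length;
}

const sessions: SessionResult[] = [
  { correct: 1, attempted: 2 },   // short warm-up: 50%
  { correct: 45, attempted: 50 }, // full assessment: 90%
];

console.log(sumSumAccuracy(sessions));      // 46/52 ≈ 88.5, reflects real volume
console.log(averagedPercentages(sessions)); // (50 + 90) / 2 = 70, distorted
```

Because every surface calls the same aggregation, the warm-up session cannot drag the displayed accuracy down to 70% on one dashboard while another shows 88.5%.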

Confidence Score Factors

The Data Confidence Score reflects how reliable the displayed metrics are, based on the volume and recency of student activity:

Positive Factors

  • ✓ Sample size (more students = higher confidence)
  • ✓ Activity recency (data from past 7 days)
  • ✓ Question diversity (coverage across skills)
  • ✓ Floor completion rates

Thresholds

  • 🟢 High (80-100%): 50+ students, 7-day activity
  • 🟡 Medium (50-79%): 20-49 students
  • 🟠 Low (20-49%): 5-19 students
  • 🔴 Insufficient (<20%): <5 students
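The thresholds above can be expressed as a simple mapping. The tier names and cutoffs come from this page; the function itself is a sketch, and the handling of a large-but-stale sample (demoted to Medium) is an assumption rather than documented behavior.

```typescript
type ConfidenceTier = "High" | "Medium" | "Low" | "Insufficient";

// Maps sample size and activity recency to a confidence tier.
// Assumption: 50+ students without 7-day activity falls back to Medium.
function confidenceTier(studentCount: number, activeInPast7Days: boolean): ConfidenceTier {
  if (studentCount >= 50 && activeInPast7Days) return "High";
  if (studentCount >= 20) return "Medium";
  if (studentCount >= 5) return "Low";
  return "Insufficient";
}

console.log(confidenceTier(64, true));  // "High"
console.log(confidenceTier(64, false)); // "Medium" (assumed demotion)
console.log(confidenceTier(12, true));  // "Low"
console.log(confidenceTier(3, true));   // "Insufficient"
```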

Rigor Engine Explanation

The Rigor Engine analyzes question difficulty using multiple factors to ensure fair assessment and appropriate challenge levels:

  • Depth of Knowledge (DOK): Questions are tagged DOK 1-4, measuring cognitive complexity from recall to extended thinking.
  • Text Complexity: Lexile measures, sentence structure, and vocabulary density are factored into difficulty ratings.
  • Historical Performance: Aggregate success rates inform difficulty calibration without penalizing individual students.
  • Standards Alignment: Questions map to state standards and NC EOG benchmarks for grade-appropriate rigor.

The Rigor Engine does not modify scores retroactively. It informs adaptive pathways and coaching recommendations.
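One plausible way to combine the factors above into a single calibration signal is a normalized weighted score. Everything in this sketch is hypothetical: the weights, the normalization constants, and the field names are invented for illustration and are not the Rigor Engine's actual model.

```typescript
interface QuestionSignals {
  dok: 1 | 2 | 3 | 4;            // Depth of Knowledge tag
  lexile: number;                // text complexity measure
  historicalSuccessRate: number; // 0..1, aggregated across students
}

// Hypothetical difficulty score in [0, 1]; higher means more rigorous.
// Weights (0.4 / 0.3 / 0.3) and the 1600L ceiling are illustrative only.
function difficultyScore(q: QuestionSignals): number {
  const dokPart = (q.dok - 1) / 3;                 // recall → extended thinking
  const lexilePart = Math.min(q.lexile / 1600, 1); // cap at ~grade 12 complexity
  const successPart = 1 - q.historicalSuccessRate; // harder if fewer succeed
  return 0.4 * dokPart + 0.3 * lexilePart + 0.3 * successPart;
}

const score = difficultyScore({ dok: 3, lexile: 800, historicalSuccessRate: 0.7 });
console.log(score.toFixed(2)); // "0.51"
```

Note that, consistent with the page, a score like this would feed adaptive pathways and coaching recommendations only; it would never retroactively change a student's recorded results.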

AI Guardrails [Active]

All AI-generated content in Lit-Levels passes through multiple safety layers:

  1. Content Filtering: Generated stories and feedback are screened for age-appropriateness and educational value.
  2. Bias Detection: Automated checks flag potentially biased language or cultural insensitivity for human review.
  3. Factual Grounding: AI tutoring responses cite specific passage evidence rather than generating unsupported claims.
  4. Human Oversight: Teachers can review, edit, or reject AI-generated content before student exposure.

Transparency Note: AI assists with personalization and feedback but does not determine grades, advancement, or high-stakes outcomes.
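The layered review flow above can be sketched as a pipeline that collects flags and routes anything questionable to a human. All names and rules here are illustrative assumptions; real content filtering and bias detection would use trained classifiers, not a keyword list.

```typescript
interface Draft {
  text: string;
  citesPassage: boolean; // does the response point at passage evidence?
}

interface ReviewResult {
  approved: boolean; // safe to surface without edits
  flags: string[];   // reasons routed to teacher review
}

function reviewDraft(draft: Draft, blocklist: string[]): ReviewResult {
  const flags: string[] = [];

  // Layers 1-2 (content filtering / bias detection): a crude keyword
  // screen standing in for real classifiers.
  for (const term of blocklist) {
    if (draft.text.toLowerCase().includes(term.toLowerCase())) {
      flags.push(`content: "${term}"`);
    }
  }

  // Layer 3 (factual grounding): tutoring responses must cite passage evidence.
  if (!draft.citesPassage) {
    flags.push("grounding: no passage citation");
  }

  // Layer 4 (human oversight): anything flagged is held for teacher
  // review instead of being shown to students.
  return { approved: flags.length === 0, flags };
}

const ok = reviewDraft({ text: "Maya found the clue in paragraph 2.", citesPassage: true }, []);
console.log(ok.approved); // true
```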

Certification Process

Lit-Levels undergoes continuous automated testing (TESLA Certification) to ensure data integrity:

  • TypeScript Compilation: Zero type errors required
  • Unit Tests: 33+ automated tests for metric accuracy
  • Dashboard Parity: Teacher/Admin sync verified
  • Accessibility: WCAG AA contrast compliance
  • Bundle Limits: Performance thresholds enforced
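A parity check of the kind the certification suite describes can be sketched as follows: both dashboards derive accuracy from the same counts via the same module, and a guard fails loudly on any delta. The function names here are illustrative, not the actual test suite.

```typescript
// Shared calculation module: both dashboards call this one function.
function accuracyFromCounts(correct: number, attempted: number): number {
  return attempted === 0 ? 0 : (correct / attempted) * 100;
}

// Enforces the 0% delta requirement between Teacher and Admin surfaces.
function assertDashboardParity(teacherValue: number, adminValue: number): void {
  if (teacherValue !== adminValue) {
    throw new Error(`Parity violated: teacher=${teacherValue} admin=${adminValue}`);
  }
}

// Parity holds by construction when both surfaces share the module.
const teacherView = accuracyFromCounts(46, 52);
const adminView = accuracyFromCounts(46, 52);
assertDashboardParity(teacherView, adminView);
console.log("parity verified");
```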

Data Integrity & Anomaly Detection

Our systems continuously monitor for data anomalies that could indicate technical issues or integrity concerns:

  • Statistical Outliers: Scores significantly above or below historical norms are flagged for review.
  • Timing Anomalies: Unusually fast response patterns trigger verification checks.
  • Duplicate Detection: Idempotency keys prevent accidental double-counting of rewards or progress.
  • Cross-Reference Validation: Student data is validated against roster records and enrollment status.

Detected anomalies are logged in the Admin Integrity Panel for transparent investigation by authorized personnel.

Questions About Our Data Practices?

We welcome inquiries from educators, administrators, and district leaders.

Contact Us

Lit-Levels Trust Documentation • Version 1.0.0

Last Updated: February 2026