| Abstract |
|---|
| Communication barriers between deaf and hearing communities persist across home, school, work, and public services, exacerbated by scarce interpreter capacity and limited access to high-quality, scalable sign-language (SL) learning for hearing people. We address this problem by proposing LearnSLXR, a learner-centric, teacher-empowering, AI-enabled XR platform that delivers a scaffolded Learn → Practice → Assess flow with authoring, analytics, and low-friction web/XR deployment. Our approach is grounded in the deaf community's stated needs (CEFR-aligned, deaf-approved content and interpretable, real-time feedback on standard devices) and in lessons from the SIGNUM Battle serious game, which showed higher recognition accuracy and engagement than video-only materials while highlighting usability and analytics gaps that a platform must close. We (1) restate the societal and technical problem space and deaf-led requirements, (2) review related SL learning tools, (3) distil findings from evaluating SIGNUM Battle with student cohorts into pedagogical and technical design requirements, and (4) present the LearnSLXR architecture: teacher capture-to-publish authoring, on-device landmarking for formative feedback, voice/keyboard assessment, and item-level analytics for classroom use. We conclude with an evaluation roadmap aligned to deaf-governed metrics for inclusion, accessibility, and learning gains, positioning LearnSLXR to reduce the skills gap between hearing and deaf communities at scale. |