Gaze-based Child Reading Assistant
In this project, we explore how AI can support young children during picture-book reading by combining real-time eye-tracking with adaptive language generation. Our system monitors a child’s visual attention as they look around a storybook page and uses their gaze patterns to estimate when they are engaged and when their interest is drifting. Based on this estimate, the AI proactively offers short, child-friendly narrative prompts that highlight what the child is currently looking at and gently guide them toward parts of the picture they have not yet explored. By integrating multimodal sensing, lightweight attention modeling, and LLM-based storytelling, the project aims to create a more responsive, personalized, and engaging reading experience that supports early literacy and curiosity-driven exploration.
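
To make the pipeline concrete, below is a minimal sketch of the gaze-to-prompt loop in Python. Everything in it is an illustrative assumption rather than the project's actual implementation: the `GazeSample` and `PageRegion` types, the dispersion-based engagement heuristic, and the prompt wording are all hypothetical, and the LLM call is stubbed out as a plain prompt string.

```python
# Minimal sketch of the gaze -> attention -> prompt loop described above.
# All names, thresholds, and region annotations here are illustrative assumptions;
# the real system would send build_prompt()'s output to an LLM.

import math
import time
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float       # normalized page coordinates, 0..1
    y: float
    t: float       # timestamp in seconds

@dataclass
class PageRegion:
    name: str      # e.g. "the red fox" (hypothetical page annotation)
    x0: float
    y0: float
    x1: float
    y1: float

def region_under_gaze(sample: GazeSample, regions: list[PageRegion]) -> PageRegion | None:
    """Return the annotated picture region the child is currently looking at, if any."""
    for r in regions:
        if r.x0 <= sample.x <= r.x1 and r.y0 <= sample.y <= r.y1:
            return r
    return None

def gaze_dispersion(window: list[GazeSample]) -> float:
    """Spatial spread of recent gaze points: low spread ~ focused, high spread ~ wandering."""
    cx = sum(s.x for s in window) / len(window)
    cy = sum(s.y for s in window) / len(window)
    return sum(math.hypot(s.x - cx, s.y - cy) for s in window) / len(window)

def estimate_engagement(window: list[GazeSample], focus_threshold: float = 0.08) -> str:
    """Rough heuristic (assumed, not the project's model): tight fixations -> engaged."""
    return "engaged" if gaze_dispersion(window) < focus_threshold else "drifting"

def build_prompt(state: str, current: PageRegion | None, unseen: list[PageRegion]) -> str:
    """Compose an instruction for the storytelling LLM (placeholder wording)."""
    if state == "engaged" and current is not None:
        return (f"The child is looking closely at {current.name}. "
                f"In one short, playful sentence, say something curious about it.")
    suggestion = unseen[0].name if unseen else "another part of the picture"
    return (f"The child seems to be losing interest. "
            f"In one short, gentle sentence, invite them to look at {suggestion}.")

if __name__ == "__main__":
    regions = [
        PageRegion("the red fox", 0.1, 0.2, 0.4, 0.6),
        PageRegion("the old oak tree", 0.6, 0.1, 0.9, 0.7),
    ]
    # Simulated one-second window of gaze samples clustered on the fox.
    now = time.time()
    window = [GazeSample(0.25 + 0.01 * i, 0.4, now + 0.1 * i) for i in range(10)]

    state = estimate_engagement(window)
    current = region_under_gaze(window[-1], regions)
    unseen = [r for r in regions if r is not current]
    print(build_prompt(state, current, unseen))
```

In a real deployment the simulated gaze window would be replaced by the eye-tracker's streaming samples, and the returned prompt string would be passed to the language model and read aloud; the dispersion threshold here is only a stand-in for the project's attention model.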