NCERT - DIKSHA
Precision Navigation for Student-Centric E-Learning
Case Study
Problem Statement
With thousands of educational videos hosted on the DIKSHA platform, students often struggled to find specific segments within long lectures. A student looking for a two-minute explanation of a complex concept shouldn't have to watch a 40-minute video to find it.
Challenge
Educational content demands fine-grained retrieval. Traditional search looks only at video titles and descriptions, failing to index the actual knowledge contained in the video frames and spoken dialogue.
Solution
We implemented a Hybrid Contextual & Lexical Search solution:
- Automated Indexing: We used high-speed Speech-to-Text to generate time-coded transcripts for the entire video library.
- Multi-Layered Filtering: We integrated deep-link filters for Grade, Subject, and Chapter.
- Visual-Textual Synergy: The search engine treats each video as a searchable document, combining keyword matching (lexical) with concept-based understanding (semantic), as sketched below.
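To make the three layers concrete, here is a minimal, dependency-free sketch of how time-coded transcript chunks, curriculum filters, and a blended lexical/semantic score could fit together. Every name in it (the Segment record, the hashed bag-of-words embed stand-in for a real embedding model, the alpha blend weight, hybrid_search) is an illustrative assumption rather than DIKSHA's actual implementation.

```python
from dataclasses import dataclass
import math
import re


@dataclass
class Segment:
    """One time-coded chunk of a Speech-to-Text transcript."""
    video_id: str
    start_seconds: int   # where this chunk begins inside the video
    text: str            # transcript text for the chunk
    grade: str
    subject: str
    chapter: str


def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())


def lexical_score(query: str, seg: Segment) -> float:
    """Keyword (lexical) match: fraction of query terms found in the chunk."""
    q, s = set(tokenize(query)), set(tokenize(seg.text))
    return len(q & s) / max(len(q), 1)


def embed(text: str, dims: int = 64) -> list[float]:
    """Stand-in embedding: hashed bag-of-words, L2-normalised.
    A production system would call a sentence-embedding model here instead."""
    vec = [0.0] * dims
    for tok in tokenize(text):
        vec[hash(tok) % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def semantic_score(query: str, seg: Segment) -> float:
    """Concept-level (semantic) match: cosine similarity of the embeddings."""
    return sum(a * b for a, b in zip(embed(query), embed(seg.text)))


def hybrid_search(query: str, segments: list[Segment], *,
                  grade: str | None = None, subject: str | None = None,
                  chapter: str | None = None, alpha: float = 0.5):
    """Apply the curriculum filters first, then blend lexical and semantic scores."""
    candidates = [s for s in segments
                  if (grade is None or s.grade == grade)
                  and (subject is None or s.subject == subject)
                  and (chapter is None or s.chapter == chapter)]
    scored = [(alpha * lexical_score(query, s)
               + (1 - alpha) * semantic_score(query, s), s)
              for s in candidates]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```

Indexing at the level of transcript chunks rather than whole videos is what lets one 40-minute lecture answer many different two-minute queries: each chunk carries its own start time and is ranked independently.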
Example of Impact
A student searches for "Continental Drift Theory." Rather than dropping the student at the start of a video titled "Geography Class 10," the player opens and begins playback at the exact timestamp (e.g., 14:22) where the teacher starts the animation explaining the movement of tectonic plates.
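The "opens at 14:22" behaviour then amounts to resolving the top-ranked transcript chunk to its start time and building a deep link. Below is a hypothetical usage of the hybrid_search sketch above, with an assumed URL pattern and made-up catalogue data (862 seconds corresponds to 14:22).

```python
def deep_link(base_url: str, video_id: str, start_seconds: int) -> str:
    """Build a playback URL that opens the player at the matched moment.
    The path and the 't' parameter are assumptions, not DIKSHA's real scheme."""
    return f"{base_url}/play/{video_id}?t={start_seconds}"


# Made-up catalogue entry: the tectonic-plates animation starts 862 s (14:22) in.
segments = [
    Segment(video_id="geography-class-10", start_seconds=862,
            text="continental drift theory explains how the tectonic plates moved apart",
            grade="10", subject="Geography", chapter="Continental Drift"),
]

results = hybrid_search("Continental Drift Theory", segments,
                        grade="10", subject="Geography")
score, best = results[0]
print(deep_link("https://example.org", best.video_id, best.start_seconds))
# -> https://example.org/play/geography-class-10?t=862
```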
Conclusion
By transforming "static" video files into "interactive" data, we significantly enhanced the learning experience on DIKSHA. Students can now navigate directly to the "moment of learning," increasing engagement and information retention.
