Description
We should explore adopting or integrating the MidCamp Live Captioning project as part of FOSDEM’s accessibility efforts.
https://github.com/MidCamp/live-captioning?tab=readme-ov-file#live-captioning
Also worth looking at:
https://github.com/ggml-org/whisper.cpp
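A speech-to-text engine such as whisper.cpp emits timestamped transcript segments; turning those into an open caption format (one of the points to evaluate below) is straightforward. A minimal sketch, assuming hypothetical segment data, that renders segments as WebVTT:

```python
# Illustrative sketch: turn timestamped transcript segments (as a
# speech-to-text engine such as whisper.cpp can produce) into WebVTT,
# an open caption format usable on streaming pages and in archives.
# The segment values below are hypothetical examples.

def to_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp: HH:MM:SS.mmm."""
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

def segments_to_vtt(segments: list[tuple[float, float, str]]) -> str:
    """Render (start, end, text) segments as a WebVTT document."""
    lines = ["WEBVTT", ""]
    for start, end, text in segments:
        lines.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        lines.append(text)
        lines.append("")  # blank line terminates each cue
    return "\n".join(lines)

# Hypothetical segments from a short talk excerpt:
segments = [
    (0.0, 2.5, "Welcome to FOSDEM."),
    (2.5, 6.0, "This talk is about live captioning."),
]
print(segments_to_vtt(segments))
```

Because WebVTT is plain text, the same segments can feed both a live overlay and an archived caption file alongside the video.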
Why this matters
Deaf and hard-of-hearing attendees face significant barriers at hybrid conferences. Live captioning — especially real-time captioning that feeds both in-room displays and livestreams — makes talks accessible in the moment. Beyond accessibility for Deaf and hard-of-hearing participants, captions and transcripts provide broader value by:
• supporting non-native speakers
• improving comprehension in noisy environments
• enhancing remote participation
• making recorded talks more discoverable and searchable (SEO)
• creating a lasting archived text alongside video
The MidCamp Live Captioning project provides a working foundation and tooling for live captioning. Using or adapting it for FOSDEM could accelerate implementation rather than building from scratch.
I'd like to see us:
1. Evaluate the MidCamp Live Captioning repository for:
• feasibility with FOSDEM’s existing streaming infrastructure
• scalability to multiple simultaneous rooms
• support for open caption formats compatible with post-event use
• integration with the schedule/streaming pages
2. Produce a short prototype or proof of concept for one room at a future FOSDEM or related event.
3. Define accessibility requirements for caption quality (latency, accuracy, editing workflow).
4. Document any prerequisites (e.g., volunteer workflows, tooling, sponsorship needs).
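For step 3, accuracy is commonly quantified as word error rate (WER), the word-level edit distance between what was said and what was captioned, divided by the reference length. A minimal sketch (function names and sample strings are illustrative, not from either project):

```python
# Sketch of one possible accuracy check for the caption-quality
# requirements: word error rate (WER), a standard speech-recognition
# metric, computed via a dynamic-programming edit distance over words.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference words."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Levenshtein distance between the two word sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[len(hyp)] / max(len(ref), 1)

# Hypothetical caption vs. what the speaker actually said:
ref = "captions make talks accessible in the moment"
hyp = "captions make talks accessible in a moment"
print(f"WER: {word_error_rate(ref, hyp):.2f}")  # one substitution over seven words
```

A target WER threshold, together with a latency budget and a correction workflow, would make "caption quality" a measurable requirement rather than a subjective one.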
If we don't work to support Deaf and hard-of-hearing people, they won't be part of our community. Losing hearing (and indeed any of our abilities) is just part of being human.