I captioned half of a11ycampNYC a few weekends ago, switching off with the illustrious Stan Sakai. I also gave two presentations: a five-minute lunchtime demo that I captioned myself, and a full-length talk that Stan captioned. Eventually I want to upload these with fully edited, properly timed captions using Amara, but I thought I'd post the provisional versions for now, just so they're out there. If you'd like to see the other presentations from that day, all with recorded (unedited) live captions, check out The Internet Society's recording archive.
Here's my five-minute steno demo:
And here's my talk, The Three Prongs of Steno Accessibility:
Stenographic technology has been used to provide realtime captioning for over 25 years, but two other important potential applications of steno for accessibility are less well known: 1) as a way to rectify the catastrophic levels of underemployment in the blind/low vision community (especially among screen reader users, who are already comfortable processing speech at over 300 WPM) by making it possible for them to become professional realtime captioners, and 2) as a way to integrate steno with text-to-speech technology, allowing for a truly conversational speech synthesis system for AAC users. In this session, I'll discuss these three potential applications and the ways they intersect.
Many thanks to Stan for captioning me and to Thomas, Shawn, and Cameron, the organizers of a11ycampNYC and of the ongoing a11ynyc Meetup group! It was a fantastic experience.