Monday, October 28, 2013
In my article Schlepping the Gear, I gave a brief rundown of all the stuff I kept in my steno bag. But I'd like to speak in more detail about some of the less obvious items I carry every day, and what I use them for. None of these are specifically steno-related, but they sure do come in handy every now and then.
* Gum/breath mints (Most onsite CART jobs involve sitting next to the client. Make sure your breath is fresh! My favorite gum for this purpose is Ice Breakers Ice Cubes, because I love popping the tiny liquid mint bubbles with my teeth.)
* Screen cleaning kit (I used to carry a pen-sized bottle of screen cleaner, but these days I find it's more convenient to use the pre-moistened cloths from Staples, which pack flat and have no danger of leaking. Having a clean laptop -- especially your screen -- is a subtle but vital aspect of professionalism when providing onsite CART. You don't want your client to have to squint through the grime to read your captions, and you don't want them to get grossed out when you hand them a tablet to carry. I also have several cans of compressed air, which I keep in my home office, and I try to spray the crumbs out of my computers on a regular basis, but those are too bulky to carry on the job. Single-use screen cleaning cloths work just fine.)
* External battery for phone (I don't know about you, but my Motorola Droid 4 barely holds a charge past noon, no matter how sparingly I use it. Admittedly, that might be because I often get up at 5:30, but still; if I don't get home until 8:00 or so, that's eight hours without a phone unless I find some way to charge it. Typically I'll just plug it into my Surface Pro's auxiliary charger slot. But if for whatever reason I don't have the opportunity to connect it to AC power, this little external battery is there to save my bacon. I can plug my phone into it and leave it in my pocket. It takes an hour or so to charge it back up to full capacity (longer if I'm using it while it charges), and usually has enough juice for about 1.5 full charge cycles. I love it so much I'm thinking of buying another one just to have on hand. They can be pricey, but they're ridiculously useful when you can't get to an outlet, you don't want to drain the power on your laptops (or inconvenience your client) by plugging your phone into their USB slots, and suddenly going incommunicado is not an option.)
* Flat spool of gaffers tape (This is one of my easiest hacks: A length of plastic shelf divider snapped off so it's about 6x4 inches, with a few hundred lengths of gaffers tape wrapped around it. This way, I don't have to bring the huge, bulky spool the gaffers tape comes on, but I always have 30 or 40 feet of it at the ready. Gaffers tape is special because it doesn't leave behind adhesive residue on floors or carpet, and it's easy to tear off pieces either transversely or along its length. I use it to cover over extension cords when my laptop is placed at a distance from an outlet, so that I don't create a tripping hazard. I also use it to cover the indicator lights of my laptop when I'm in a theater or other situation where light leakage would be distracting.)
* ID card on retractable cord (The building I work at requires me to flash my official Contractor badge when I come in, so I hooked a retractable badge holder into the breast pocket of my ScottEVest jacket, which has a special loop just for the purpose. This way I just have to unzip the pocket, reach in for the card, show it to the guy at the security desk, and then let it go; it'll spool back into my pocket and I can go on my way. Very convenient.)
* Business card case (I have a slim leather one, with my business card on one side and Plover stickers on the other.)
* Towel tablets (As Douglas Adams so rightly pointed out, a towel is a ridiculously useful object, especially when you own a lot of expensive electronics. I keep a few lightload towel tablets on hand in case of rain, spills, puddles, wet subway seats, or anything else I might encounter. Extremely useful little objects!)
* Water bottle (I'm extremely happy with my insulated Klean Kanteen. It keeps stuff cold for hours and hours, and it's got a nice big plastic loop in the lid. I made a length of Velcro tape with the hooks on one side and the loops on the other and ran it through one of the straps on the side of my backpack. Now all I have to do is run it through the water bottle's lid and close it on itself, and I know that the bottle will stay in my backpack's mesh side pocket no matter how much bumping and jostling I put it through.)
* Multitool (I love my Leatherman Sidekick, though I admit that frequently I have to take it out of my backpack when I know I have to go through a metal detector or security checkpoint of some kind, and sometimes I forget to put it back in. Still a tremendously useful thing to keep around.)
* Superglue (Individual use tubes of Superglue can be absolute lifesavers when you need to do last-minute repairs to a steno machine that's been knocked to the floor by a careless student. Oh, yes, that's happened to me. More than once. Not fun.)
* 3-to-2 prong adapter (Use these sparingly; I understand that if you don't actually screw the metal bit in, they're not completely safe. But sometimes there just isn't a three-prong wall outlet anywhere, so these can mean the difference between running on battery and possibly going dark during a job or being able to plug in. I always make sure to keep a surge protector between the outlet and my electronic gear, and so far I haven't had any problems at all using this thing, but as with any electrical improvisations, be careful.)
* 1 foot extension cord (More common even than a two-prong outlet is a situation where I just can't fit my surge protector in the available outlet no matter how I twist and turn it. This short extension cord is much more convenient than my usual 15-foot-long one when the problem is just finding space around the outlet to plug in my stuff.)
* Sistema cutlery set (not to be confused with Systema, the scary Soviet martial arts technique. My lovely Ladybug Spork broke off one of its wings a few weeks ago, which of course was heartbreaking, but I have to admit that the Sistema Cutlery Set is actually way better. It takes up just a little more space in my bag, but it lets me assemble a full-sized spoon, fork, knife, and chopsticks to suit the needs of any meal. I really like it. I've also started using a Muji bento box, which fits in my backpack's other mesh side pocket, to supplement the stuff I carry in my lunch bag every day. It works really well, and holds more food than the small Tupperware containers I was using before.)
Many thanks to everyone who contributed to this post on Depoman. Below is a selection of things they carry in their bags. Click through to the post for more, especially for court reporting-specific items (about which I know next to nothing. Is an exhibit sticker the same thing as an exhibit label? I have no idea!)
* Campus map (for onsite CART)
* Cough drops (for scratchy throats)
* Fingerless gloves (for chilly mornings)
* Garbage bags (for putting over steno bag if it rains)
* Kleenex (for spills, runny noses, or emotional witnesses)
* Colgate Wisp toothbrushes (when you have garlic bread for lunch)
* Magnifying glass/whistle/compass (For... Um... I got nothing. Is this another court reporting thing?)
So that's all the non-steno stuff I keep in my bag. Have I left anything out? What bits, bobs, and gewgaws do you bring with you to every job?
Thursday, September 19, 2013
Bag lunch!
Bag lunch photo by Jeffrey Beale
When Fall semester started, I decided to start bringing my lunch to work, for the first time in the six years I've been CARTing. There are plenty of reasons why I hadn't done it before:
* New York City is full of delis, bodegas, diners, and restaurants, offering an unbelievable variety of food.
* I carry a 26-pound backpack stuffed to the gills with equipment, which just barely fits under a subway seat as it is. Clipping on a lunchbox or insulated lunch bag would add weight, bulk, and ungainliness, and would probably look kind of unprofessional, to boot.
* I wake up at 5:30 am several days a week. The last thing I want to do at that hour is put my lunch together from scratch.
But, on the other hand, bringing my lunch has real advantages:
* Eating out is expensive. The University's cash-only cafeteria is pretty good, but most entrees are $10 or more, and most of it isn't particularly healthy. There isn't always seating during the lunch rush, plus having to get cash for it all the time is a pain. Most of the places near the University are even more expensive, and their healthiness is usually in inverse proportion to their tastiness.
* I clean out my fridge every few weeks, and I always seem to come across leftovers that never got eaten and just went to waste. I hate that feeling.
* I'd rather eat several small meals than one big meal. If all I've eaten all day is a large lunch around 11:00, I'm much more likely to be hungry when I get out at 5:00, which makes it all too tempting to just buy a bag of chips at the 7-11 to eat on the bus ride home, spoiling both my dinner and my lipid levels. If I have some breakfast on the train to work, most of my lunch at 11:00, and then a handful of nuts and fruit at 3:00, I'm much better able to tide my belly over until I can get home and put dinner on the table.
There are lots of good reasons. But what do I do about the lunchbox situation? See, my subway stop is near the end of the line, so I'm almost always guaranteed a seat when I head in to work. When I head home at rush hour, on the other hand, I almost always have to stand, and my giant backpack makes things awkward both for me and for my fellow passengers. My backpack has two small mesh pockets, one on each side, but they're definitely not big enough to hold an entire day's worth of food.

Finally it hit me: The good old brown paper bag. If I pack a full bag in the morning, I can eat the most perishable items on the train to work as breakfast (taking care for courtesy's sake not to eat anything too crumbly, messy, or stinky). When I take my steno machine out and set it up for my first job, that frees up space in my backpack to stow away the remainder of my lunch. I nibble on it over the course of the day, and then I can throw the empty bag away before heading home completely unencumbered. I fit a small stainless steel water bottle in one of my backpack's mesh pockets, and it turns out that two empty Tupperware containers, nestled inside one another, fit very nicely in the other. That lets me bring leftovers from the night before, which eliminates the wasted food problem.

I feel a little bit bad about throwing away the bag every day, but I was able to find 100% recycled paper bags, and there are recycling cans all over the University, so the environmental impact isn't as bad as it could be. Bringing my lunch also gives me an excuse to use my ladybug spork, which lives happily inside my backpack among all the electronics and cables:
Here's my lunch for today:
* Refried beans left over from the Mexican restaurant delivery food we had on Tuesday.
* Mixed greens with homemade Mustard Fig Balsamic Vinaigrette.
* Miso Marinated Boiled Egg.
* Thyme-dill pita chips.
* Grape-flavored fruit leather.
* String cheese.
* Carrots, bell peppers, radishes, and celery.
Pretty tasty! When I finished the beans and greens, I just popped the small container inside the big one, and stuffed them both in the mesh pocket of my backpack. I've been making different variations on all these things for the past few weeks. Other things to go in the bag:
* Cut-up cubes of Jarlsberg and Gouda, two of my favorite cheeses.
* Apricots, plums, apples, pears, or grapes, sometimes with a small foil packet of peanut butter (which you can buy in small quantities at stores like Minimus).
* Granola bars, all along the spectrum from healthy (low sugar, flax seed and almonds) to not so healthy (chocolate and peanut butter).
* Mixed nuts. I like an equal mix of cashews, hazelnuts, pecans, walnuts, almonds and peanuts.
* Salsa, either homemade or storebought.
* Bean chips, which are a tiny bit better for you than potato chips, but still taste great with salsa. As a longtime junk food addict, I'm all about the marginally healthier alternatives.
* Crispbread with butter and Marmite. I know it's weird for someone who grew up in Montana to love a British staple like Marmite, but I can't help it. It's been my favorite condiment since I was a kid and my dad got me hooked on it. (No idea how he started eating it; he grew up in Queens.) So savory and salty. Mmmm. High in niacin and folic acid, too!
* The old standby: A classic peanut butter and jelly sandwich.
* Iced herbal tea. I like making some fridge tea -- usually a mix of berry flavors and chamomile -- and then diluting it heavily with water, so there's just a hint of flavor. The straight-up berry tea is usually too sweet and cloying for me.
I've also ordered a copy of The Brown Bag Lunch Cookbook, so when it arrives it'll hopefully give me even more ideas. And it's fun crawling around the web, looking at sites like Just Bento, which gave me the recipe for the miso marinated eggs. Even though I don't have room for an actual bento box and lack the patience to construct the intricate designs and patterns bento is famous for, it's fun to see what people are getting up to in other corners of the lunch-lugging world.
The last issue, of course, is how to deal with that whole bleary-eyed 5:30 am thing. I've realized that I can do most of the packing on the night before, or sometimes even the weekend before. I'll cut a whole bunch of vegetables on Sunday and put them in a quart-sized Ziploc. They'll stay fresh all week, and I can parcel them into individual sandwich-sized bags as the urge takes me the evening before I think I might want them. At first I'd get really excited about a particular lunch item and then forget to take it out of the fridge in the morning, but I've more or less solved that by writing a menu for myself on the whiteboard before going to bed. Then, even though my brain is only operating at a fraction of its normal capacity when I'm stumbling around in the wee hours of the morning, I can just go straight down the list and put everything together. I suppose it's only a matter of time before I forget my lunch entirely and have to eat out after all, but that'll be okay when it happens. It's the overall change of habit that counts, and I have to say I'm having a really good time with it. We'll see how long I manage to keep it up, but so far, so good!
Tuesday, August 27, 2013
Association of Adult Musicians with Hearing Loss 2013 Web Conference
I'm the captioning sponsor for the Adult Musicians with Hearing Loss web conference, to be held on Saturday, September 7th, 2013. I finish my weekly recorder lesson at noon, and then I'll have to ride my scooter quickly downhill to my home office in order to get set up in time for sound check, so it'll be a very musical day. I'm really looking forward to this conference. I've captioned for AAMHL for several years now, both onsite and remotely, and they always put together top quality events. So if you love music and you've either got hearing loss or you work/play with people who have hearing loss (and specifically, for this conference, cochlear implants, though I'm sure plenty of the information will be relevant for non-CI users as well), feel free to register for the conference!
--
Making Music with a Hearing Loss: Issues for Cochlear Implant Users
Note: All times listed in this announcement are Eastern Standard Time (EST)
The Association of Adult Musicians with Hearing Loss (AAMHL) is pleased to announce our second web conference on September 7, 2013, featuring presentations on listening and making music with a hearing loss when you are a cochlear implant user. The intended audience is consumers, musicians, and music teachers who are interested in the effects of cochlear implantation on music perception and music performance.
We will be using voice-over IP (no calling in via phone) and captioning will be provided for this online event.
If you have questions regarding the conference or registration, please email us at info@aamhl.org.
Agenda:
1. Introduction of our Association and our presenters (1:00-1:05 pm EST) / Wendy Cheng
2. Musical Interval Perception between Cochlear Implant Users and Individuals with Normal Hearing (1:10-1:40 pm EST) / Dr. Xin Luo
3. Factors that Contribute to and Impede Satisfactory Music Participation by Adult Cochlear Implant Users (1:45-2:15 pm EST) / Dr. Kate Gfeller and Ms. Virginia Driscoll
4. Developing a Music Rehabilitation Program (2:15-2:45 pm EST) / Mr. Richard Reed
5. Cochlear Implant Musicians Panel (2:45-3:30 pm EST) / Wendy Cheng, moderator; Blue O'Connell, Lisa Jordan, and Sara Gould, panelists
Bios of presenters and panelists:
Wendy Cheng is the founder of AAMHL, and is also studying viola and music theory while raising two musical daughters. She hopes to obtain a music degree someday.
Dr. Xin Luo is an assistant professor in the Department of Speech and Hearing Sciences at Purdue University. Prior to this appointment, he worked as a postdoctoral research fellow at the House Ear Institute. Dr. Luo has authored many publications and studies on pitch perception and cochlear implants.
Dr. Kate Gfeller holds a joint appointment at the University of Iowa's Music Therapy department and the Department of Communication Sciences and Disorders. She is currently the principal investigator of the Music Perception Project at the University of Iowa's Cochlear Implant Clinics.
Virginia (Ginny) Driscoll is an investigator at the University of Iowa's Music Perception Project. She received her masters in music therapy at the University of Iowa in 2006.
Richard Reed is a composer, musician, and cochlear implant advocate. Before losing his hearing to antibiotics, Richard played piano and Hammond organ with Junior Walker and the All Stars, Otis Rush, Mark Cutler, and many other R&B, blues, and rock and roll bands. Unable to appreciate music for almost ten years, Richard underwent CI surgery in 2001. Richard is certified by the Hearing Loss Association of America and Gallaudet University as a hearing loss specialist. Richard is a guest lecturer at universities, symposiums, and research facilities across the globe. His experiential knowledge helps researchers improve the fidelity of hearing loss technologies.
Blue O'Connell is a music practitioner at the University of Virginia's Medical Center, a songwriter, and an avid cochlear implant user. She resides in the Charlottesville, Virginia area, where she gives concerts in coffeehouse settings.
Lisa Jordan received her bachelor's degree in music education from West Virginia University in 2004. While working as a high school band director, she began to lose her hearing, and opted to receive bilateral cochlear implants in 2012. Lisa has even performed on oboe and flute in musical ensembles post-implant.
Sara Gould began playing saxophone at the age of 9. She continued playing all 4 kinds of saxophones even after her hearing loss progressed to the profound stage. She received her first cochlear implant in September 2009 and her second implant in December 2009. She currently resides in Charlottesville, Virginia, where she plays with several saxophone ensembles.
--
Friday, August 9, 2013
NCRA Convention Post!
I'm in the Newark airport, waiting for my flight to Nashville as I write this. I'll be getting in around 9:05 this evening, and staying until Sunday evening. If you see me in the hallway, please say hi! I'll be giving ad hoc Plover demonstrations at various points around the convention, and I'll be demonstrating Google Glass for captioning Saturday morning at 10:00 am at the "CART: The Tech Connection" seminar. Speaking of Plover, check out my most recent post on the Plover Blog, including my new Plover FAQ for Steno Professionals. If you have any questions about Plover that aren't answered by that FAQ, please feel free to ask, either in person or by emailing me at info@stenoknight.com. I'm really looking forward to meeting new people at the convention and to catching up with some of the people I've met at previous events. Please don't be shy about flagging me down if you see me around! I love talking about steno, and I can't wait to spend a whole weekend doing pretty much nothing but. Here's a bigger version of my profile picture, so you can see what I look like:
Making Tablets Client-Friendly
Most CART providers do the lion's share of their work one-on-one, with a single client reading from their laptop, which is usually mounted a foot or so away on a tripod. There's an unwritten social convention that it's not polite to reach over and fiddle with someone else's laptop, and I've found that virtually all my clients respect that rule without having to be asked. But what about when a CART provider sends their realtime feed to a tablet, which the client holds in their hands, often sitting at some distance away from the provider, or moving around the room while the provider stays in one place? For some reason, that simple act of holding a screen rather than viewing it while mounted on a tripod changes the whole situation. When people hold tablets, they're tempted to play with them; it's practically a law of nature. And if you don't want your client to have free rein over everything accessible via your tablet, what should you do? Fortunately, there's a very simple solution: Lock it up.
I use an app called Friend Lock Pro on my Samsung Galaxy Tab II. It's simple, aesthetically attractive, and only $1.99.
This is what the homescreen of my tablet looks like when it's unlocked. Notice the large dolphin icon. That's a link to the Dolphin Browser, which offers an aesthetically attractive full screen mode for realtime CART streaming. I made the icon nice and big using Giganticon, and set Dolphin Browser's homepage to the URL I use for realtime captioning via Streamtext. To get to the captions, it's only a single button push away. When I'm ready to hand the tablet to the client, I push the lock icon and get this:
Now Dolphin Browser is the only app accessible to the client. They can page through my homescreens and eventually find my personal screen (though I have blank homescreens on either side of the main one as a sort of buffer):
But if they try to access my email, they only get a notification that the app has been blocked. All notifications are also turned off, so the client won't get annoying (and possibly confidential) pop-ups from my email, calendar, or other apps while they're trying to read the captions. To unlock the tablet, I just have to draw a simple design in Friend Lock Pro's unlock screen, and the tablet is mine again. I can use it to access student schedules, display prep material, log my steno practice, or navigate to my next gig, using my own personal apps. Then I can lock it back up again and hand it on to the next client, secure in the knowledge that none of that information is accessible to them. Since installing this app, I haven't had to fret or worry about what the client is doing with my tablet when they're out of my sight. I can't recommend it highly enough.
I'm not a paid endorser for this app or anything; I'm just really, really happy with it! If anyone knows of equivalent apps for iOS devices, feel free to let me know about them in comments, and I'll pass them along to my iPad-loving colleagues.
Tuesday, July 16, 2013
Lucky Printer Giveaway!
My RMR certificate
So I'm a little late with blogging about it, but if you've checked my resume or FAQ page recently, you'll know that I am now Mirabai Knight, CCP, CBC, CRR, RPR, RMR! Yep, after several false starts, I finally passed that last pesky 260 WPM Testimony exam and got to add "Registered Merit Reporter" to my roster of certifications. I have to say it feels pretty good.

While I'm disappointed that there aren't any merit-speed realtime certifications, which would be more directly relevant to showcasing my professional skills, the RMR shows that I've got at least a certain amount of speed, even if it doesn't speak much to my realtime accuracy. I'll be taking the RDR written exam this fall, just to collect the whole set of certifications offered by the NCRA, and then I guess there's nowhere else to go but the annual conference realtime and speed competitions. Still haven't decided about those. I don't tend to do my best under test-taking conditions. If they had a test with guaranteed medical subject matter I think I'd do pretty well, but since most of it is full of legal vocabulary, I'd be starting from a disadvantage.

I did promise myself that I'd get to go to the NCRA convention if I passed the RMR, though, which is very exciting. It'll be my first national convention since I was a 120 student back in 2006. I wish I could be there for the world record attempt so I could root for my main man Stan Sakai, but sadly I have to work until Friday afternoon, so I won't be getting in until Friday night. That also probably means that I'll be missing the CART and Captioning dessert reception, though I'm gonna hustle from the airport as fast as I can, in hopes of catching the tail end of it. But if any readers of this blog are there, please feel free to flag me down in the hallway and say hi! I'll also be presenting a very short demo of how to use Google Glass for Captioning at the "CART: The Tech Connection" seminar on Saturday at 10:00. (Oh, and I'll be trying to do small informal demos of Plover throughout the weekend. Have you seen our latest release, complete with a video demonstrating our hot new on-the-fly dictionary update system?)
So anyway, I'm really looking forward to putting that RMR ribbon on my name tag. Lord knows it took long enough to get there; I had to take the thing five times, all told. But that's what steno is all about. You keep failing and failing and failing, and then finally you wake up and realize that not only do you have the speed, but you're so much faster than you need to be, and somehow that fact has made those test nerves just magically melt away.
This is my practice graph for the RMR. As you can see, I stopped practicing while waiting for results, which is not recommended, but I just couldn't bring myself to do yet another take when it might turn out that I'd passed.
My RMR practice graph
And I have to give big props to Ann Plainos Record for our competition. She won hands down (and I sent her a basket of gourmet New York-made munchies as a prize), but just having that extra push helped hugely with my motivation in those last crucial weeks of practice.
But there's one more component of my success. On my next-to-last attempt at the RMR, my old portable printer (which had seen me through the RPR and both the Jury Charge and Literary portions of the RMR) suddenly gave up the ghost, and I wound up unceremoniously dumping it in a garbage can next to the subway entrance before heading home. So for the attempt in May, I actually bought a printer solely for the purpose of getting this one last take.
It's an HP Deskjet 1000, and it cost me about $40. It's certainly not the best printer you'll ever use; these low-end machines tend to be made of cheap components and sold for rock bottom prices so that people will spend lots of money on ink. It's not a patch on the giant, sleek double-sided printer I keep in my home office, but then again, it's far more portable, and it's totally serviceable for low stakes, low volume printing jobs. I really don't see myself getting much use out of it, now that I've passed all the available NCRA speed tests (yeah, sorry, I just enjoy typing that sentence way too much), so I've decided to give it away. After all, it turned out to be lucky for me. Maybe it'll be lucky for you too! The box is a bit ripped up from when I opened it, but it's otherwise in near mint condition; only about 15 pages have been printed on it.
I'm going to be completely arbitrary and subjective about this give-away. Email me at info@stenoknight.com with the subject header "Printer Giveaway" and give me your most persuasive argument about why I should give it to you. Make it interesting! Tell me your story! The person whose email makes the most compelling case will get the printer shipped to them, and hopefully some of that good old test passing magic will come along with it.
Monday, June 17, 2013
Glass for Captioning: First Field Tests
Glass for Captioning series:
Augmented Reality Captioning
Preliminary Impressions of Google Glass
First Field Tests
Last week I tried Google Glass out in the field for the first time. I've gotten a new pair now; Google was very helpful and accommodating when I told them about the optical flaw in my last pair and happily switched them out for a new one. There's still a little bit of glare underneath the screen, which they told me is pretty much inherent to the design, but it's much less than before, and the glare along the sides is gone, plus the smeary left edge has cleared up and the text overall seems crisper and less diffuse than before. I think my prior set just had a slightly misaligned projector or something. Absolute top scores to Google's customer service team, which was communicative, timely, and quick to move.
There are still some frustrations when it comes to actually using Glass for captioning. Initial tests seemed to offer Hangout Screen Sharing as a good solution; the resolution was clear enough that about 8 lines of captioning were visible, very readable, with all of my carefully tweaked Eclipse display and font options on view, plus it would allow me to use all the realtime editing tricks I rely on every day to clean up misstrokes and define new words from my steno machine. It sounded like a slam dunk. When I was just writing a few experimental words for myself, it seemed perfect. Unfortunately, as I feared, when I started actually transcribing the professor in action, the amount of lag involved in refreshing an entire computer screen several times a second quickly tanked the experiment. This was a particularly slow and steady lecturer, but the display was consistently 10 to 20 lines behind my laptop's display, and sometimes it would skip whole swaths of text in order to catch up to the present, only to fall immediately behind again. So much for that idea.

Incidentally, my laptop was on good quality institutional Wi-Fi, and Glass was tethered via Bluetooth to my phone, using the connection from that same institutional Wi-Fi. Glass can connect to most Wi-Fi networks directly, but this particular one required an authentication type that wasn't supported, so it had to piggyback from the connection on my phone. I've also had some trouble connecting it to my 4G hotspot, but I want to fiddle with it some more before deciding whether it's Glass's fault or user error.
So the next day I gave up the beautiful dream of lagless screen sharing and went to the fallback option: Hangout Chat. I set it up well before the class started, which was good, because currently Glass offers no way to turn off the little blips, bloops, swoops, and blats it makes while connecting to someone via Hangout. I really hope they offer a silent option soon; the bone conducting speaker makes the sound louder to the user than to anyone else in the room, but it's still clearly audible and potentially quite disruptive. By default, Glass displays video from whoever you're hanging out with, but I turned that off to save Glass's battery. So then it displayed my user icon, but that was a distracting background against which to view the text, so I set my user icon to a plain black rectangle. Then I muted Glass's own microphone and camera, also to save on battery life. This is what I wound up with:
The Hangout Chat interface without text.
The Hangout Chat Interface with text.
The two main distractors are the prefix of my username before every line of text that's sent (inescapable in this sort of text chat format) and the two prominent "camera muted" and "microphone muted" icons in the center of the lowest part of the screen. I think this is somewhat poor design, considering that new text starts at the bottom and is pushed upward, and that the very top of the screen isn't used by Hangout Chat at all. So rather than keeping the mute icons down at the bottom, interfering with the newest and presumably most important lines of text, why not put them at the top and out of the way?
The battery was also a little disappointing. On the previous day, when I was screen sharing, it died completely after less than an hour. That was too bad, but I had higher hopes for Hangout Chat, which presumably required less juice. And indeed, its total life was about an hour and 40 minutes, but the intensely irritating low battery alert came on just about exactly halfway through:
So not only does the alert pop up when the battery's presumably only at 50% of its capacity, but the alert is an entire line of full-sized text smack in the middle of the screen. How does that make sense? What's wrong with a discreet little battery icon tucked away in the corner of the screen? I can only hope that as the UI is updated (which happens on a pretty regular basis, I'm happy to say), this will be fixed to be less disruptive. I'll also probably post a comment to the Glass Explorers' Forum. This is, as I have to keep reminding myself, a prototype device, and a lot can change over the next few months.
But here's the most serious issue, which you can also see in the pictures above: Once an old line of text is pushed up to make room for a new one, it's suddenly severely truncated. So if you didn't manage to read the entirety of the text the first time around, it's going to be all but useless to you as soon as another line comes in. For captioning, where text can come in at a pretty solid clip, that is a big, big problem.
This is the main thing that's keeping me from enthusiastically offering Glass to my clients. If all they've got to work with is one line of text, it won't be good for much except slow-paced one-on-one conversations, and if the battery is really only good for less than two hours (I haven't yet tested it with the microphone active, which would presumably reduce the battery life even more, even though it would potentially allow them to interact and even move around without me and my machine having to sit there at their elbow), that severely restricts the circumstances in which Glass will actually be useful.
What about the future? Will Screen Sharing get less laggy? Will they remove the truncation from previous messages? Will they show the username the first time a message is sent and then allow it to be implied for subsequent messages? Will they condense alerts to icons and move them out of the way? Or will I have to commission special captioning Glassware to solve all these problems for me? I guess we'll have to wait and find out.
Oh, and one last thing. By request, a picture of me actually wearing Glass. Dork Factor: Significant.
Tuesday, June 4, 2013
Preliminary Impressions of Google Glass
For background, read my first post on augmented reality captioning.
I picked up my Google Glass last Thursday. It's certainly an impressive bit of hardware, and I'm very excited about the possibilities for captioning, but of course it's still a prototype device; the consumer models won't be released until after a year of additional quality testing and user feedback. The first pair they gave me had an unresponsive touchpad, and the second pair (the one I have now) seems to have some kind of optical defect that results in a lot of light scattering and glare, which I didn't notice with the first pair. I think I'm going to have to go back to Google to see if they can either repair the problem or give me another pair. The light scattering is obnoxious, though it doesn't actually prevent me from using the device. The voice recognition is about as good as one would expect (which is to say, borderline okay when one speaks slowly and deliberately, but pretty terrible with casual speech); no surprises there. It comes with a nice clear plastic lens insert, which will be good to protect the user's eyes in potentially messy situations. It's more lightweight than I expected, and the interface is pleasantly intuitive.
But the really exciting thing is that it seems to be caption-ready pretty much out of the box. I just started a hangout with myself, using my personal Gmail account on Glass and my professional Gmail account on my computer. My computer got video from Glass's camera (which was pointing over the top of my computer monitor, into my apartment's foyer), and Glass got video from my laptop's camera, which showed my own face wearing the admittedly dorky-looking Glass. I muted my laptop's microphone to test the sound quality of Glass's microphone and was impressed with its clarity, though of course we'll have to see how that alters depending on background noise and how far away the people we're captioning stand from the person wearing Glass. Best of all, though, when I typed into the hangout's chat window, the text came up instantly on Glass, with perfect clarity.

So even though I haven't actually tested it with my steno machine yet, I think that as long as I use Plover or Eclipse with the Keyboard Macro setting turned on, I'll be able to send captions to Glass without having to commission any additional software. The Wi-Fi in the place I'll be using it is fairly reliable, but if it isn't I can always use my 4G hotspot as a backup. And if the microphone proves to be as good as it seems to be at first glance, I'll be able to caption remotely instead of having to stand next to my client, cramming myself into tiny spaces and generally making a nuisance of myself.

The only downside is that I'll have to press "Enter" on my steno machine (which I've mapped to R-R, because it uses the two strongest fingers of the hands) after everything I write. But that won't be so terrible. I actually had to do that when I captioned a webinar two weeks ago, using Plover with the closed captioning feature built into InstantPresenter.com. It's a little tricky to get into the rhythm of pressing Enter each time, but it's certainly not a dealbreaker.

More concerning is that Glass's display is designed to be above and to the right of a typical user's line of sight, forcing the user to glance upwards whenever they want to read anything on it, which might result in some eyestrain after constant use. I also haven't tested the battery life yet, though I'm hoping that Glass's battery will be able to withstand at least an hour or two of constant video chat. It's all very promising. Now I just have to get that optical defect sorted out, and then start testing it out with clients!
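(For the curious: mapping a chord like R-R to the Enter key in Plover is just a dictionary entry. Here's a minimal sketch in Python that adds the mapping to a JSON dictionary, assuming Plover's standard dictionary format; the "user.json" path is a hypothetical example.)

```python
# Minimal sketch, assuming Plover's JSON dictionary format.
# {#Return} is Plover's key-combo syntax for an Enter keypress;
# "user.json" is a hypothetical user dictionary path.
import json

with open("user.json", "r+", encoding="utf-8") as f:
    dictionary = json.load(f)
    dictionary["R-R"] = "{#Return}"  # R-R chord -> press Enter
    f.seek(0)
    json.dump(dictionary, f, ensure_ascii=False, indent=0)
    f.truncate()
```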
Our accessible cyberpunk future is so close, I can practically taste it.
Tuesday, May 21, 2013
Variables in Wireless Captioning
The end of the semester is looming, and with it I'm taking on more work outside of my ordinary daily academic CART schedule. Last week I did an awards ceremony and a graduation ceremony, and yesterday I captioned one of the monthly Songbook performances at the New York Public Library. All three of those events had one thing in common: Wireless captioning. In each case, I was given a partial script of the event, which I was able to feed line by line to the client's screen. Other parts were CARTed live, so I had my steno machine at the ready to switch off from line feeding when necessary. This necessitated a split screen view in Eclipse, which was very different from the clean, stripped-down view I like to use with my clients. In addition, two of the events had multiple viewers, seated at a distance from one another, and the event organizers didn't want me to project open captions to a big screen at the front of the venue. Wireless captioning to the rescue!
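(As an aside, here's a minimal sketch in Python of what line-by-line script feeding boils down to; it's not Eclipse's actual feature, just the general idea, and send_caption is a hypothetical stand-in for the realtime output connection.)

```python
# Minimal sketch of line-by-line script feeding -- not Eclipse's actual
# feature, just the general idea: the captioner releases each prepared
# line by pressing Enter, and can break off to write live at any time.
def send_caption(line):
    # Hypothetical stand-in for the realtime output (e.g. a caption feed).
    print(line)

def feed_script(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines in the script
            input(f"next line ready> {line[:60]}")  # wait for the operator
            send_caption(line)

if __name__ == "__main__":
    feed_script("awards_ceremony_script.txt")  # hypothetical script file
```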
Samsung Galaxy Tab
Microsoft Surface Pro
I used my laptop to send the script and monitor my CART output, with pending translation display turned on to give me an extra 1.5 seconds of error correction, since the client wasn't reading my screen and wouldn't be forced to read Eclipse's confusing markup syntax. Then I used Streamtext to send the captions to web browsers on my Microsoft Surface and Samsung Galaxy Tab 2 (to replace my dear old Samsung Q1, now on its last legs and looking somewhat junky), as well as to the smartphones and iPads of any audience members who pointed their browsers to the caption feed's URL. According to the guy in the NYPL sound booth, there were about 15 people using their own equipment to view captions yesterday, which is probably a record at that particular event. Why Streamtext? Well, there are a few options for wireless captions, with pros and cons for each:
* Screen sharing apps such as ScreenLeap or Join.me.
* Peer-to-Peer connections such as Teleview and Bridge.
* Free document collaboration services such as Etherpad or Google Docs.
* Instant messaging applications such as Google Talk.
At these particular events, I didn't want to share my screen, since it was split into two unsightly panes, and since screen sharing is usually restricted to specific devices, while I wanted the captions to be accessible to any number of audience members without having to hook up their equipment individually. Screen sharing is also heavier on bandwidth than simple text streaming, doesn't allow the caption viewer to scroll backwards and review captions they might have missed, and is prone to lag, especially as more devices are added to a single screen. Peer-to-peer connections such as Teleview and Bridge have a lot of potential, and many of my colleagues have used them, but I've been reluctant to rely on them after experiencing several problems with freezing, broken connections, and incompatibilities with institutional Wi-Fi. Since reliability is all-important in a captioning situation where you're not on hand to troubleshoot potential problems with every caption viewer's device, I prefer server-hosted text streaming services. That way, if the connection drops on the user's end, they just have to refresh their browser once their connection resumes, and the captions start streaming again as usual. If the connection drops on the captioner's end, the captioner has to reset the connection and then wait for users to refresh their browsers.
That's not ideal, but it's better than peer-to-peer services, which require both provider and users to go through a synchronized handshaking process whenever either party drops a connection. Instant messaging applications are similarly limited to predetermined lists of users, which wouldn't have worked for me in this situation (though they've proven to be helpful as a stopgap during one-on-one CART when the text streaming service has a sudden outage and I know the user's IM identity). Additionally, instant messaging requires the captioner to press "enter" after every line of text, which slows the rate of captioning, and it doesn't tend to support script feeding. Document collaboration services don't tend to support script feeding either, though they can be useful for live captioning when no script is involved. Even then, the collaborative editing features often prove to be more of a hindrance than an asset, and the interface isn't always as clean and simple as I'd like.
So as of right now, Streamtext is my go-to service. It's server-hosted, reliable, supports line-by-line script sending, and can connect any number of users on several different devices by streaming the captions to a single URL accessible by nearly any web browser. It also offers customizable font and color settings, which can be set by the captioner or customized by the user. The only real disadvantage is that, like most good things in life, it costs a pretty penny, starting at $6/hour and increasing from there, depending on how many users are connected at a time. At times, my Streamtext bill has exceeded $400/month, and while it's deductible as a business expense, it hasn't been much fun to pay that bill, knowing that I might have been able to get by with a cheaper or entirely free service instead. Still, I've been burned too often by inconsistent services to want to switch from Streamtext without an extremely compelling reason.
So what other variables are at play, besides the text streaming service? Well, if you're supplying your own devices to clients instead of requiring that they provide their own, you'll want to configure them properly. In my case, the Surface was easy; I just pointed Chrome at my all-purpose text streaming URL (http://stenoknight.com/nypl, since I first set it up for use at the New York Public Library, and have been using it for various other purposes ever since), which redirects to http://www.streamtext.net/player?event=nypl. That way, as long as I set up my Streamtext job to use an event called "nypl", users only have to input my short Stenoknight.com URL instead of the long, awkward Streamtext URL; one way to set up that sort of redirect is sketched below. I've put a link to the site on both my Surface and my Galaxy Tab 2, for quick and easy access. On the Galaxy Tab 2, I initially tried Streamtext with the default browser and then with Chrome for Android, but neither of them supported full-screen viewing, and I didn't like how much real estate was taken up by the address bar and browser UI, so I installed the Dolphin Browser, which supports simple toggling in and out of full screen mode. The result is a clean, simple text-only interface on both tablets, with customizable font resizing and seamless transitioning between portrait and landscape mode, to fit each client's preferences. One of the three events I captioned was held outdoors, so I was able to crank up the contrast on both devices, with large white text on a black background, to compensate as much as possible for glare.
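As for that redirect: there are plenty of ways to set one up, and the details depend entirely on your web host. As a rough sketch, on an Apache server that allows .htaccess overrides, a single line is enough (hypothetical file contents; swap in your own short path and Streamtext event name):

    # .htaccess in the web root: forward the short URL to the Streamtext player
    Redirect 302 /nypl http://www.streamtext.net/player?event=nypl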
The last and ultimately most crucial decision was how to make the internet connection that would keep everything running smoothly. My preference when providing remote or wireless captioning is always to connect my captioning computer to a wired broadband Ethernet connection such as the one I use in my home office, but that's not possible in every venue. At all three recent wireless captioning events, I had access both to institutional Wi-Fi and to the connection offered by my 4G wireless modem/hotspot, but my decision on which to use varied wildly with the circumstances. At the awards ceremony and the library gig (both indoors), the institutional Wi-Fi was strong and steady, faster than my 4G modem and more responsive, with significantly less lag time. At the outdoor graduation ceremony, however, the situation was reversed. The Wi-Fi signal was weak and patchy, dropping frequently and showing significant lag. My 4G modem, on the other hand, had a strong signal throughout, and I quickly switched all my devices over to it from the Wi-Fi during setup. The only disadvantage there, of course, is that the 4G modem has a limited range, and I was concerned that my client's connection would drop if the Galaxy Tab were brought up to the stage during the actual diploma-granting portion of the ceremony. My client decided to go without captioning for that part of the event, so it was never put to the test, but the range limitation is definitely something to keep in mind when choosing a hotspot-based internet solution over institutional Wi-Fi. I've heard of services such as Connectify, which claim to consolidate multiple internet connections as a sort of failsafe mechanism, but I haven't yet given them a try; definitely something to investigate now that the semester's wrapping up.
So those are some of the things to consider for onsite streaming to multiple wireless devices at public events. Please feel free to share your own tips and tools if you solve these problems differently! There's always something to learn in this business, and the technology is advancing all the time, so it's important to stay as up to date as possible. As more people start carrying smartphones and tablets, essentially providing their own caption-viewing devices, I foresee a boom in open-URL wireless captioning for public events, and we captioners will need to be able to offer it.
Tuesday, May 14, 2013
Former CART Client Wins NSF Fellowship!!
CART providers are bound by rules of confidentiality not to disclose the names or details of people they've captioned for, but in this case my client graciously allowed me to use her name and link to her information. A few years ago, I captioned several classes (including Latin, one of my all-time favorite subjects) for Navena Chaitoo, an undergraduate at Fordham University up in the Bronx. Now she's graduating, and yesterday she informed me that she won a Graduate Research Fellowship from the National Science Foundation to further her education in public policy and management at Carnegie Mellon University! From the article she sent me:
“I was diagnosed with a severe-to-profound hearing loss when I was about 5 years old, and at the time, my audiologists relied on the latest medical studies to determine that I would probably never graduate high school,” said the Brooklyn native. “Ultimately, my parents knew better and saw to it that I had all the accommodations necessary to offset my hearing loss, which allowed me to be as successful as I am today.”
[...]
Chaitoo will continue research she began at Fordham on the economic wellbeing of persons with disabilities in the United States, particularly the indirect as well as direct medical costs of persons with disabilities—a topic in which she has been personally invested.
Navena is only one of countless examples demonstrating how important accommodations can be, and how much can be achieved if they're put in place. The communication access came from CART providers like me and the other captioners who've worked with her, but the brilliance, insight, and dedication all came from her. This woman is amazing, and I'm honored to have played a part in her success. I know she'll just keep going up and up from here, and I'll definitely be watching to see the great things she does in the future.
Tuesday, May 7, 2013
Thresholds and Tolerance
I'm not a fan of starting a blog post by quoting the definition of the topic in question; it's virtually always just a lazy attempt to co-opt some of the dictionary's presumed authority or credibility and doesn't add anything of substance to the author's argument. That said...
"Tolerance is the permissible limit or limits of variation in a measured value or physical property of a material, manufactured object, system, or service. [...] A variation beyond the tolerance [...] is said to be non-compliant, rejected, or exceeding the tolerance."
I'm quoting this definition because it refers to a specific technical meaning of an otherwise well-known word. Most people aren't familiar with "tolerance" used in this sense, but it's a useful concept not just in mechanical engineering but in the provision of transcription services for Deaf and hard of hearing students and professionals. In my CART Problem Solving series, I addressed the popular misconception that a tolerance of 90% accuracy is acceptable. Most people think of 90 and 100 as rather large numbers that are pretty much equivalent to each other, but language is such a fine-grained system that 100 words constitutes only about a paragraph of text, and a 10% error rate works out to an error in just about every sentence. I also talked about the ways in which human captioners are able to use lateral context clues to fill in the gaps of non-ideal audio conditions, while automated speech recognition systems, outside of a perfectly amplified, perfectly enunciated standard American accent, go from almost adequate to laughably awful perilously quickly.
Tolerance enters the captioning sphere in other cases as well. Speed, for instance: if a professor's average rate of speed is 160 words per minute (quite a bit below the typical rate of speech, which tends to be between 180 and 220 WPM), a stenocaptioner (AKA a CART provider like me) with a speed of 240 words per minute will be able to achieve virtually 100% accuracy, because any errors can be immediately caught and corrected. A text expansion provider (using a system such as C-Print or Typewell) may have a speed of 140 words per minute or so, which means that if the professor's rate stays completely steady all the way through, they will probably be able to capture a good 85% of what's spoken.
Since they're human and not just a mindless speech recognition system, they will give preference to writing down important things (names, technical terms, relationships between concepts), and will try to make sure that the remaining 15% of speech that they're too slow to capture consists mainly of "Um", "Uh", "You know", repeated words, irrelevant asides, and inefficient phrasing that can be tightened up and paraphrased to use fewer keystrokes. In some cases, that will be enough. The professor's speed will never rise above 160 WPM throughout the entire class, and there will be plenty of chaff to ignore, leaving enough time to take down the important content, even though the provider's writing speed is lower than the professor's average rate of speech.
By contrast, the stenocaptioner will probably choose to leave out the "Um", "Uh", and "You know" sorts of filler words for clarity's sake, but will not omit repeated words or attempt to paraphrase the professor's wording, no matter how inefficient it might be. Stenocaptioners are focused on providing a verbatim realtime stream, only omitting words that add absolutely no value to understanding, while text expansion providers are focused on tightening up whatever they hear so that it can be written in as few keystrokes as possible.
So far, so good. This is a case where stenocaptioning and text expansion are more or less equivalent, and the difference lies mostly in whether the client wants the pure, unmediated words of their professors to interpret for themselves, or whether they'd rather have a condensed version of the information delivered in class, more along the lines of the bullet points on a PowerPoint slide.
Change any of the factors in play, and the results will be very different. For instance, say the professor's average rate of speed is still 160 words per minute, but that's because his rate is 135 when he's writing formulas on the board (about half the class) and 185 when he's explaining what the formulas mean (the other half of the class). Or it's 140 for long stretches at a time, when he's lecturing on the information mandated by the syllabus, but it shoots up to 200 for brief moments, when he gets excited about a particular detail of whatever he's talking about. The stenocaptioner, whose top speed is 240 WPM, is still able to get 100% in all of these situations. The text expansion provider, on the other hand, will be able to handle the 135 WPM formula sections almost perfectly, but will start cutting or condensing words and phrases from the 185 sections, and will be forced to leave out over a quarter of the material from the 200 WPM sections. If this particular professor has a tendency to repeat words, insert lots of filler words, pause between sentences to take a drink of water, or otherwise speak in a lightweight, inefficient way, the text expansion provider might be able to deliver a workable portion of the class's important material, because there will be enough less important stuff they can cut out and still have enough reserve speed to write down the good parts.
If, on the other hand, the professor is an accomplished speaker who says precisely what she means in precisely the way she means it, if her lectures are a constant stream of dense technical jargon and precise, specific descriptions of how everything fits together, if there's no chaff or filler to cut out and no awkward repetitions to rephrase... the text expansion provider is out to sea. They've got to start cutting merely important material in favor of vital material, and that becomes a dangerous guessing game when the student's grade is on the line. Text expansion services acknowledge this to a certain extent; they tend to say that CART is recommended when the material is technical or highly precise, such as in the graduate and professional programs I specialize in. And admittedly, there are some classes, some subjects, and some professors for which a 140 WPM typing speed, slow as it is compared to a stenocaptioner's 240 WPM, is enough to deliver most of the important material given in the class.
The question is: How do you tell which situation you're dealing with? If you're a disability director and you're trying to decide between hiring a text expansion provider or a certified CART provider for a given student's schedule of classes, it may seem obvious to choose the former, since text expansion services are cheaper and more widely available. But have you audited the professors in all of the classes in question? Does their average speed always stay under that 160-180 WPM sweet spot? Is there enough extraneous speech to discard and paraphrase without losing important information? Are there ever spikes of higher speeds, and if there are, can you guarantee that none of that high speed material will appear on the test? Have you checked to make sure that there won't be any guest lecturers or student presentations during the course of the semester? Guest experts, since they're not used to speaking for students, tend to speak at 200 to 220 WPM or higher. One that I transcribed a few years ago spoke at 280 WPM, and I found myself starting to do the same sort of paraphrasing and chaff cutting that my text expansion colleagues do as a matter of course. I think I managed a good 90% to 95% of relevant material given in that lecture. But I didn't reach that paraphrasing threshold until I encountered a speaker at the high end of the rate-of-speech bell curve; for text expansion providers, it's their starting point. They don't have any speed in reserve, and if there's nothing extraneous to cut out, they start losing important material very quickly. Give them a 280 WPM speaker, and they're now losing a full 50% of everything that's spoken.
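If you want to put rough numbers on all of this, the arithmetic is simple enough to sketch in a few lines of Python. This is a toy model, not a claim about any particular provider: it assumes a provider captures everything up to their own writing speed and nothing beyond it, ignoring paraphrasing skill, fatigue, and time spent on corrections.

    # Toy model: fraction of spoken words a provider can keep up with.
    # Assumes perfect capture up to the provider's speed, nothing beyond it.
    def capture_fraction(provider_wpm, speaker_wpm):
        return min(1.0, provider_wpm / speaker_wpm)

    for speaker_wpm in (140, 160, 185, 200, 280):
        steno = capture_fraction(240, speaker_wpm)     # stenocaptioner at 240 WPM
        expander = capture_fraction(140, speaker_wpm)  # text expansion at 140 WPM
        print(f"{speaker_wpm} WPM speaker: steno {steno:.0%}, text expansion {expander:.0%}")

Run it and the cliff is plain to see: at 160 WPM the text expansion provider is in the high 80s, by 200 WPM they're down to 70%, and at 280 WPM exactly half the lecture is gone, while the stenocaptioner stays at 100% until the speaker outruns 240 WPM.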
Of course, you could make the argument that most students without hearing loss don't take in 100% of every lecture. They might daydream or nod off, experience a moment of inattention, miss a word or two here or there while skimming through their notes from the class before. Even without getting every word of every lecture, many students do quite well. But where's the cutoff? How many words can you lose and still receive equal access? Which words can you leave out and which must you absolutely leave in? Who do you trust to make that call? It all comes down to tolerance.
"Tolerance is the permissible limit or limits of variation in a measured value or physical property of a material, manufactured object, system, or service. [...] A variation beyond the tolerance [...] is said to be non-compliant, rejected, or exceeding the tolerance."
I'm quoting this definition because it refers to a specific technical meaning of an otherwise well known word. Most people aren't familiar with the word "tolerance" used in this sense, but it's a useful concept not just in mechanical engineering but in the provision of transcription services for Deaf and hard of hearing students and professionals. In my CART Problem Solving series, I addressed the popular misconception that a tolerance of 90% accuracy was acceptable, because most people think of 90 and 100 as rather large numbers that are pretty much equivalent to each other, even though language is such a fine-grained system that 100 words constitutes only about a paragraph of text, and a 90% error rate works out to an error in just about every sentence. I also talked about the ways in which human captioners are able to use lateral context clues to fill in the gaps of non-ideal audio conditions, while outside of a perfectly amplified, perfectly enunciated standard American accent, automated speech recognition systems go from almost adequate to laughably awful perilously quickly.
Tolerance enters the captioning sphere in other cases as well. Speed, for instance; if a professor's average rate of speed is 160 words per minute (quite a bit below the typical rate of speech, which tends to be between 180 and 220 WPM), a stenocaptioner (AKA a CART provider like me) with a speed of 240 words per minute will be able to achieve virtually 100% accuracy, because any errors can be immediately caught and corrected. A text expansion provider (using a system such as C-Print or Typewell) may have a speed of 140 words per minute or so, which means that if the professor's rate stays completely steady all the way through, they will probably be able to capture a good 85% of what's spoken. Since they're human and not just a mindless speech recognition system, they will give preference to writing down important things (names, technical terms, relationships between concepts), and will try to make sure that the remaining 15% of speech that they're too slow to capture consists mainly of "Um", "Uh", "You know", repeated words, irrelevant asides, and inefficient phrasing that can be tightened up and paraphrased to use fewer keystrokes. In some cases, that will be enough. The professor's speed will never rise above 160 WPM throughout the entire class, and there will be plenty of chaff to ignore, leaving enough time to take down the important content, even though the provider's writing speed is lower than the professor's average rate of speech. By contrast, the stenocaptioner will probably choose to leave out the "Um", "Uh", and "You know" sorts of filler words for clarity's sake, but will not omit repeated words or attempt to paraphrase the professor's wording, no matter how inefficient it might be. Stenocaptioners are focused on providing a verbatim realtime stream, only omitting words that add absolutely no value to understanding, while text expansion providers are focused on tightening up whatever they hear so that it can be written in as few keystrokes as possible. So far, so good. This is a case where stenocaptioning and text expansion are more or less equivalent, and the difference lies mostly in whether the client wants the pure, unmediated words of their professors to interpret for themselves, or whether they'd rather have a condensed version of the information delivered in class, more along the lines of the bullet points on a PowerPoint slide.
Change any of the factors in play, and the results will be very different. For instance, say the professor's average rate of speed is still 160 words per minute, but that's because his rate is 135 when he's writing formulas on the board (about half the class) and 185 when he's explaining what the formulas mean (the other half of the class). Or it's 140 for long stretches at a time, when he's lecturing on the information mandated by the syllabus, but it shoots up to 200 for brief moments, when he gets excited about a particular detail of whatever he's talking about. The stenocaptioner, whose top speed is 240 WPM, is still able to get 100% in all of these situations. The text expansion provider, on the other hand, will be able to handle the 135 WPM formula sections almost perfectly, but will start cutting or condensing words and phrases from the 185 sections, and will be forced to leave out over a quarter of the material from the 200 WPM sections. If this particular professor has a tendency to repeat words, insert lots of filler words, pause between sentences to take a drink of water, or otherwise speak in a lightweight, inefficient way, the text expansion provider might be able to deliver a workable portion of the class's important material, because there will be enough less important stuff they can cut out and still have enough reserve speed to write down the good parts.
If, on the other hand, the professor is an accomplished speaker, who says precisely what she means in precisely the way she means it, if her lectures are a constant stream of dense technical jargon and precise, specific descriptions of how everything fits together, if there's no chaff or filler to cut out and no awkward repetitions to rephrase... The text expansion provider is out to sea. They've got to start cutting important material in favor of leaving in vital material, and that becomes a dangerous guessing game when it comes to the grade of the student they're transcribing for. Text expansion services acknowledge this to a certain extent; they tend to say that CART is recommended when the material is technical or highly precise, such as in the graduate and professional programs that I specialize in. And admittedly, there are some classes and some subjects and some professors where a 140 WPM typing speed, as slow as it is when compared to a stenocaptioner's 240 WPM typing speed, is enough to deliver most important material given in the class.
The question is: How do you tell which situation you're dealing with? If you're a disability director and you're trying to decide between hiring a text expansion provider or a certified CART provider for a given student's schedule of classes, it may seem obvious to choose the former, since text expansion services are cheaper and more widely available. But have you audited the professors in all of the classes in question? Does their average speed always stay under that 160-180 WPM sweet spot? Is there enough extraneous speech to discard and paraphrase without losing important information? Are there ever spikes of higher speeds, and if there are, can you guarantee that none of that high speed material will appear on the test? Have you checked to make sure that there won't be any guest lecturers or student presentations during the course of the semester? Guest experts, since they're not used to speaking for students, tend to speak at 200 to 220 WPM or higher. One that I transcribed a few years ago spoke at 280 WPM, and I found myself starting to do the same sort of paraphrasing and chaff cutting that my text expansion colleagues do as a matter of course. I think I managed a good 90% to 95% of relevant material given in that lecture. But I didn't reach that paraphrasing threshold until I encountered a speaker at the high end of the rate-of-speech bell curve; for text expansion providers, it's their starting point. They don't have any speed in reserve, and if there's nothing extraneous to cut out, they start losing important material very quickly. Give them a 280 WPM speaker, and they're now losing a full 50% of everything that's spoken.
Of course, you could make the argument that most students without hearing loss don't take in 100% of every lecture. They might daydream or nod off, experience a moment of inattention, miss a word or two here or there while skimming through their notes from the class before. Even without getting every word of every lecture, many students do quite well. But where's the cutoff? How many words can you lose and still receive equal access? Which words can you leave out and which must you absolutely leave in? Who do you trust to make that call? It all comes down to tolerance.
Monday, April 22, 2013
Word Boundary Error Commentary Track
Word boundary errors! They don't come up as often as you'd think, especially if you have a robust conflict-free dictionary full of prefix and suffix strokes, but when they do, they're baffling and embarrassing in equal measure.
I recently downloaded my entire Twitter archive and then searched for the hashtag "#wordboundaryerrors". It offered up a treasure trove of them, collected over the last several years. Here are some of the best, with my comments on how I can avoid errors like them in the future.
"more tartar" came out "mortar tar". #wordboundaryerrors
I should probably take MOR/TAR out of my dictionary altogether. Eclipse tells me I've written it MORT/*AR 168 times, but MOR/TAR only once: When someone said "more tartar". #facepalm
"sing Hallelujah" came out "Singha lay lieu ya". #wordboundaryerrors #mmmthaibeer
Two ways to keep this from happening again: Redefine SING/HA as S*ING/HA (with the asterisk denoting a proper noun), or just stick to writing "hallelujah" as HAL/LAOU/YA (already defined in my dictionary that way). I'll probably do both, since it's silly to do "Hallelujah" in four strokes.
Conversely, #stenofail of the day: "salivary gland malignancy" came out "salivary grandma lignancy". Not always smart to define misstrokes!
Take out that GLAND/MA -> "grandma" definition! It might have come up as a misstroke that once, but I probably shouldn't have kept it.
"These cisterns" came out "thesis terns". #steno #wordboundaryerrors
I now write "thesis" THAES, so I don't think this will happen again. Alternately, I could use SIFT/*ERNS, but that feels unintuitive to me.
Argh! And "call me Sophia" came out "call miso feia". #wordboundaryerrors #rackinfrackin
This is a tricky one. I think I'll have to redefine "miso" as MIS/JO. (JO is my {^o} suffix stroke). And Sophia should probably have an asterisk, though that makes it tough to distinguish from Sofia. So I might just leave it and concentrate on "miso".
"optically pure lens" came out "optically purulence". #wordboundaryerrors
Simple way to fix this is keep "pure lens" as PAOUR/LENS and redefine "purulence" as PUR/LENS. Not sure why it wasn't that way already. I think it was a legacy entry.
"supplied by lingual nerve" came out "supplied bilingual nerve". Argh! I have a bi- prefix; it was a legacy entry. #wordboundaryerrors
All your masterful prefix and suffix definitions won't help you if you don't weed out your conflict-ridden legacy entries! This one came from either my NYCI dictionary or the Sten Ed dictionary. Tsk-tsk.
"They're mossy fibers" came out "Thermosy fibers". Sigh. #wordboundaryerrors
Change Thermos to THERM/OS to use the -os suffix stroke. In general, try to avoid using briefs for common words like articles, prepositions, and pronouns as word parts, because the chance of a conflict is just too high.
"key efficacy objectives" came out "Kiev case objectives". #wordboundaryerrors
I had both KAOE/EF and KAOE/*EF defined as "Kiev". Delete the first one! It's not theory-appropriate anyway.
"could coexist" came out "cocoa exist" #wordboundaries
Like with "Thermos" above, I shouldn't use KO for both my "could" brief and my "co-" prefix. Usually my "co-" prefix is KOE, but this must have been a misstroke define that bit me later on.
"acyanotic" came out "acai nottic". Yep, I had "cyanotic" and the "a" prefix defined, but "acai" got in there first. #wordboundaries
To be honest, this is a tough one. I could have written "cyanotic" SAON/OT/IK, or just have predefined "acyanotic" so that the problem wouldn't have come up, but I'm cutting myself a little slack on this particular error.
Today "saturated fat was bad" came out "saturated fatwas bad". #boundaryerrors
Inflections of "to be" should never be used in word parts. I should either have written FAT/WA/S or FAT/W*AS, or even FA/TWAS. (Since 'twas is pretty uncommon in modern usage, though I do have it in my dictionary.)
Argh. "cost Coca-Cola" came out "Costco ka cola". #wordboundaryerrors
When brand names collide! I probably should have thrown an asterisk in at least one of these corporations, since they are both proper nouns.
"crazy cat lady" came out "Krazy Kat lady". #wordboundaryerrors
Krazy Kat came up in a History of Comics course. I really should have used an asterisk in that proper noun.
Ha! Funniest boundary error in a while: "big surveillance studies" came out "Big Sur valance studies".
An asterisk would have helped here too, especially in a proper noun that's only one syllable long. Nearly all one-syllable words can come up as word parts at some point. That's kind of what syllables are. (':
Stickler Syndrome isn't kicking yourself because "past attendance" came out "pasta tendance". It's a genetic disorder: http://bit.ly/4fPTdj
Another legacy entry! I've written pasta PAFT/A precisely zero times, PAFT/YA 29 times, and PAS/TA 106 times. But of course it had to come up here.
"Broadly correlated" came out "broad liqueurlated". Man, am I glad that was transcription and not CART. How embarrassing. Fixed now.
This is actually a bit of a hole in my current theory. I don't distinguish between the {^ly} suffix and the "li" word part. Boo, hiss. It doesn't come up as often as a lot of other word boundary errors do, but I should still really fix that, and soon. I mostly write "liqueur" LIK/AOUR, but LI/KOR was in there as an alternate stroke.
Tricky boundary error -- "Chris Crosby" came out "criss-crossby".
Easy fix is to redefine KRIS/KROS as KRIS/KR*OS and pray that nobody mentions the short-lived backwards-trouser-wearing '90s rap group Kris Kross. (Or be prepared to fingerspell it!)
Where pharmacology and medieval studies collide: "Fetishistic reliquaries" came out "Fetishist Ikorel wears". Sigh.
Should have kept my medical dictionary turned off during my Medieval studies class! And also should have put an asterisk somewhere in Ikorel, since it is a proper noun.
Worst error so far from tonight's class on Job: "An Israelite" came out "Anise realite".
Probably should start writing "Israel" with an asterisk, since it's a proper noun.
"Per vertebra" came out "pervert bra" #steno #wordboundaryerrors #particularlyunfortunatewordboundaryerrors
Yeah. I got nothing. :'o
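For the Plover users out there: most of these fixes boil down to plain JSON dictionary edits. Here's a sketch of the general pattern, using a few of the fixes discussed above (the outlines are from my own theory, so yours will differ, and removing a bad entry like MOR/TAR for "mortar" just means deleting its line):

    {
      "S*ING/HA": "Singha",
      "THERM/OS": "Thermos",
      "PAOUR/LENS": "pure lens",
      "PUR/LENS": "purulence"
    }

Eclipse users can make the same changes through its dictionary editor; the principle is identical either way: one outline per translation, with no overlap between whole words and word parts.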
Feel free to post samples from your own word boundary rogues' gallery, if you like! I promise I won't belittle you for them. No matter how diligent we are, we can never completely avoid every possible word boundary error in the universe. We've just got to keep trying to squash them, one word part/suffix overlap at a time.
Saturday, April 6, 2013
Competition
I'm taking the RMR 260 WPM Q&A again this May. Last time I failed by 11 points, because my everloving nerves took over and made my hands shake. This time I really, really want to get it. I found someone on Facebook who also only has the Q&A left before getting the RMR, who also failed by 11 points last winter. I challenged her to a little competition. First I'll pick one of the five-minute practice tests (actually tests administered in previous years), and give it my best shot. Then I'll transcribe the test verbatim (pausing and rewinding as necessary) to compare my own transcript to. Then I'll do the same with the transcript she sends me. Next round, she'll pick another take, then do the verbatim transcription and grading. The choice of which take to do in the final round will go to whoever is winning after two takes, and the one with the fewest overall errors after all three takes wins the contest. The loser has to find something that can only be purchased in their home city and send it to the winner. I'm hoping this will turn out to be pretty motivating for both of us. I like a little competition to keep things interesting. I'll keep you posted on how it all turns out.
Wednesday, March 13, 2013
From the New York Times Classifieds, 1925
Just something fun my partner found while scanning through the New York Times archive. Highlighted portion reads: "Stenographer, 4 years' experience. Refined, accurate, ambitious. $20. G 714 Times Downtown."
That's what I call a bargain!
Friday, March 8, 2013
Survey on CART by CCAC
The Collaborative for Communication Access via Captioning just released the results of their survey on the importance of CART in the lives of people with hearing loss. It's a really good read, and I encourage people especially to look at the testimonials quoted at the bottom of the survey. Some really excellent and eloquent stuff here. A very small sampling:
"It is the single most important thing anyone can do to help aid communication and equally understanding. With both of those things covered everyone benefits and the speaker retains an audience fully."
"I used to stay home rather than pretend to be involved in an event or take a course or seminar. With CART, I can attend, understand, and contribute to a discussion with less fear that I’ve missed a crucial point or even have the topic confused. It gives me more confidence that I’m understanding new information."
"It is the single most important thing anyone can do to help people follow audio communication. Any time it is made available I’ve noticed almost everyone in an audience will make use of it (hearing or deaf), whether they admit to it or not is a different thing."
"I used CART for college. I never realized how much I missed in classes until I started using CART. Now I use it for all training events and classes."
Thanks to the CCAC for conducting the survey. It's so important to listen to the opinions of the people we work for, and really gratifying to learn how dramatic an impact CART can have on a Deaf, deafened, or hard of hearing person's life.
"It is the single most important thing anyone can do to help aid communication and equally understanding. With both of those things covered everyone benefits and the speaker retains an audience fully."
"I used to stay home rather than pretend to be involved in an event or take a course or seminar. With CART, I can attend, understand, and contribute to a discussion with less fear that I’ve missed a crucial point or even have the topic confused. It gives me more confidence that I’m understanding new information."
"It is the single most important thing anyone can do to help people follow audio communication. Any time it is made available I’ve noticed almost everyone in an audience will make use of it (hearing or deaf), whether they admit to it or not is a different thing."
"I used CART for college. I never realized how much I missed in classes until I started using CART. Now I use it for all training events and classes."
Thanks to the CCAC for conducting the survey. It's so important to listen to the opinions of the people we work for, and really gratifying to learn how dramatic an impact CART can have on a Deaf, deafened, or hard of hearing person's life.
Saturday, February 23, 2013
How CART Helped Me Sneak into STEM
I'm terrible at math. I always have been. Despite coming from a family of engineers and teachers (including at least one math teacher, who gave me what turned out to be one of my favorite books as a kid, though unfortunately it didn't rub off all that well), I've never had a talent for it, and when I was younger I had a tendency not to work hard at anything that didn't come naturally. Hint for any small humans who might be reading this: This is a really bad habit to get into. It will come back to bite you in the butt so many times. Repair those small deficiencies when they're small, no matter how much of a grind it might be. You'll thank me later.
Anyway, my dislike for arithmetic in elementary school turned into barely passing algebra in middle school turned into failing physics in high school. When I got to college, I had to take four years of math, but fortunately for me it was "Great Books"-style math, where I had to read Euclid and Ptolemy and Lobachevsky and Einstein and talk about them on an abstract level, but never had to take any actual tests on them. Even so, my math grades in college were not particularly great, and I emerged with a B.A. in Liberal Arts plus a pretty deep-seated math phobia.
The frustrating thing is that I think math is pretty interesting, even though I have no aptitude to actually do it. And more than that, I absolutely love science, and so much of science is underpinned by math. Right out of college, I briefly enrolled in a post-baccalaureate science program, with the intention of applying to medical school, but the amount of math involved quickly forced me to give up and put that long-held dream on the shelf. I stuck to what I was good at and applied to an MA program in English, which I eventually turned down so that I could go to steno school.
It was only after starting work as a CART provider that I realized what a gift I'd been given. Despite my lousy math scores and dreadful number crunching skills, I'd be able to sit in all sorts of math and science classes that people had sweated bullets to test into. I could absorb as much of the material as my brain would let me, and as long as I wrote the words down correctly, it didn't matter if I didn't grok some of the concepts. I wouldn't have to prove my fitness to be there. No tests, no papers, no chance of being called on. I got to be a fly on the wall, getting paid to help my brilliant clients flex their own math and science muscles while I sat back and marveled. Over the years I've been CARTing, I've worked for future economists, architects, pharmacists, doctors, and dentists. Along the way, I've gotten to take in:
Math for Economists
Financial Instruments
International Taxation
Intro to General Relativity
Economics for Urban Planners
Advanced Statistical Methods
Architectural Structures: Steel and Concrete
Plus a ton of gigs that involved anywhere from a dribble to a torrent of math, such as:
Radiology
Epidemiology
Biochemistry
Anesthesiology
Pharmacotherapeutics
Thermal and Statistical Physics
Meetings of Math for America
Meetings of the American Chemical Society
(You can read the complete list, if you're interested, on my Experience Page.)
Even though I don't think I'd be able to recall more than a small fraction of what I've learned in these classes, it's still way more exposure to these subjects than I ever would have been able to get if I'd done it the old fashioned way. Even getting out of the 101 level courses would have been a struggle, but the graduate and professional school material that I've been exposed to would have been stratospherically above my cognitive pay grade. And yet... There I was, sitting in the classes, absorbing all this cool information about the structures and systems that make up our universe. I've even developed sort of a specialty in captioning technical and scientific material. It's my favorite sort of job to take. All for someone who barely managed to learn her times tables. I'll never be a doctor, and I've made my peace with that, but I still get to swim in this stuff every day. Of all the gifts CART has given me, this might be the one I'm most grateful for.
Friday, February 15, 2013
Temporarily Non-Disabled
Some disability rights activists, understandably objecting to obnoxious terms like "able-bodied", occasionally refer to non-disabled people semi-facetiously as "temporarily non-disabled". Tongue-in-cheek or not, it's factually true. The disability community is often called The Largest Minority, because virtually every human being who doesn't die a sudden and premature death will join it eventually. Of course, there's a big difference between people who were born with a disability or acquired it relatively young and people who become disabled as a consequence of aging. Many older people don't consider themselves part of the disability community at all. Negotiating a sensory, mobility, or cognitive disability can look very different when someone is settling into sedate retirement versus when they're trying to navigate employment, education, and relationships while laying the groundwork for the next several decades of their life.
Because I work in the accessibility industry, I think about disability a lot. I'm currently the only breadwinner in my household, and we might start planning for a kid in the next few years, so I have to anticipate any possible disruptions in income and make contingency plans for them. Any number of things could happen. The student loan crisis could explode, leaving higher education enrollment looking very different from the way it does right now, and I might have to transition out of academic CART and start specializing in CART for deaf professionals or public events. A lot of people worry about speech recognition replacing stenographers, but I think that's pretty unlikely, as I've explained in my CART Problem Solving Series. What could happen, though, is a restructuring of case law pertaining to the Americans with Disabilities Act. If a precedent were established that providing captioning accommodations is an unreasonable burden for companies, schools, or event organizers -- or that verbatim captioning is a luxury, and deaf or hard of hearing people are only entitled to non-verbatim notetaking or summarizing services such as C-Print or Typewell -- my job situation would look very different. Even if the amount of work available stays essentially the same, rates could fall, and I might have to settle for taking home less money while doing the same amount of work. Or the number of CART providers could fall so drastically, through retirement and a lack of qualified graduates, that CART loses its place as the gold standard of speech-to-text accommodation simply because supply can't meet the growing demand. Anything could happen.
By and large (disability advocacy and steno promotion projects like Plover aside), most of that stuff is out of my control. The captioning business will do what it will do, and I'll have to navigate whatever changes are ahead. I'm saving money, establishing good relationships with fellow captioners, building connections with international firms in case the disability access situation in the US gets dicey, and providing the best service I possibly can to my clients while they're in school, knowing that someday soon they'll be successful professionals who will continue to need occasional captioning. But what if I acquired a disability that interfered with my ability to caption? Since my trade involves my ears, eyes, hands, and brain, it leaves me quadruply vulnerable.
Sight loss would be the easiest to adapt to without seriously affecting my career. There are quite a few blind and low vision stenographers out there. I'd probably learn Braille and buy a refreshable Braille display to confirm that my steno strokes were translating correctly. It might take me a while to get back up to speed, but I think I could make it work.
Hearing is another story, of course, since it's the fundamental basis of the service I'm providing. So far my hearing is pretty darn good (as I confirmed at a theater talkback last weekend, when I was able to pick up very quiet questions from the back of the audience; lots of people told me afterwards that they couldn't make out a word without reading the captions), but my dad developed significant early hearing loss, and while in his case there was a specific noise exposure precipitating it, I don't want to get too cocky about my so-far functional ears. As I mentioned in the previous post, a third of people over 65 have some degree of hearing loss, and as a freelancer I don't have an employer-run pension fund, so I'd be a little nervous about finances if I were forced to stop working at 65. I try to protect my hearing by listening to music at a fairly low volume and keeping out of loud nightclubs, but I take the subway every day, and the sound of those trains rattling by is pretty cataclysmic. I've contemplated wearing earplugs during my morning commute, but I'd miss listening to music and podcasts; I might compromise by buying a set of expensive noise-blocking earbuds sometime soon. While there are a few acknowledged hard of hearing court reporters and ASL interpreters out there, I've never spoken to any CART providers who have admitted to having hearing loss. If I did start losing mine, it would definitely be tricky to balance being fair to my clients -- not providing them substandard service -- against not counting myself out of the game too early. I'd probably switch from onsite CART to doing only remote CART jobs with excellent quality (i.e., direct line) audio, and I'd have to keep testing myself to make sure that I didn't miss or misinterpret anything I heard.
And what about my hands? Even though I love it, I've decided not to go downhill skiing anymore. The fun of it wouldn't erase the risk of falling and breaking a wrist or a finger. Even if it healed completely, I'd be out of the game for months, and I'd hate to think how much speed and accuracy (not to mention income) I'd have to make up when I was able to start writing again. While I have long-term disability insurance, I love this job too much to jeopardize it for some cheap thrills. On the other hand, I don't want to wear bubble wrap and wrist guards everywhere I go. I still ride my Razor scooter around town, and I play video games heedless of any potential (really rather small) risk of repetitive strain injury. Trying to guard against every possible contingency is a fool's errand. You do what you can, hope that your luck holds out, and figure out workarounds when it doesn't.
The simple fact is this: Disability can affect anyone, no matter how much they try to avert it. If something happened to me that prevented me from providing CART, I'd have to find an alternative career. The good thing about working within the disability community is that acquiring a disability isn't an instant mark against your prospects the way it is in most of the non-disabled employment sector. Illegally, unjustly, and wrong-headedly, too many companies rely on unfounded prejudices in their hiring decisions, assuming a disabled employee will be expensive, high-maintenance, incompetent, unreliable, et cetera, ad nauseam. In fact, employees with disabilities demonstrate improved retention and loyalty, and are often better workers than their non-disabled peers, thanks to a high level of ingenuity and technical knowledge, plus a general tendency not to take their jobs for granted.
If I ever do develop a disability that prevents me from providing CART, I have a feeling that the connections I've made within the disability and accessibility communities will be able to give me some solid ideas on searching for new employment. Also, by that point, The Plover Project will hopefully have found its wings, and I might be able to devote my time to spreading the word about the fantastic usefulness of steno both for people who don't use their voices to speak and for anyone who wants to write text swiftly, fluently, and efficiently while minimizing the risk of the repetitive stress injuries commonly seen in frequent qwerty use.
Of course, there's no need to borrow trouble. With any luck, I'll be providing CART for many decades to come. But I'm glad that my job has led me to be more aware of the potential for disability, so that I can plan for the possibility if it should arise. As the saying goes, forewarned is half an octopus.
Saturday, February 9, 2013
Conference Captioning
I really enjoy open captioning on the big screen for conferences and professional events, but I don't get the chance to do it as often as I'd like. Partly that's because my weekday schedule is pretty full, so I'm only available on weekends. But partly it's because, while captioning is an extremely useful accommodation for many people, most of those people either don't know that captioning exists or don't know that they have the right to request it from the organizers of the conference.
In the USA, one in seven people has hearing loss. For people over 65, that rate goes up to one in three. Events at most conferences seat hundreds of people, so statistically it's a sure bet that at least some of those people would benefit from captioning. Even people with mild hearing loss, who do quite well in one-on-one social situations by using a combination of residual hearing, lip reading, and context clues, often have trouble with conference audio, which can be distorted in the amplification process, and which puts the speaker so far away from the audience that lipreading becomes impossible.
There are also the benefits that captioning can offer people without hearing loss, who may be more comfortable reading written English than understanding spoken English (very common when English isn't a person's first language), or who may have central auditory processing issues (very common in Asperger's and autism) or attention deficit issues such as ADHD.
I remember one event I captioned where I looked over my shoulder and saw a bevy of Samsung executives all reading my captions with great excitement. Their English was excellent, but the rate of American speech was sometimes too quick for them to parse comfortably, so they found the captions incredibly useful for making sure that they were getting everything. After the event, one of them asked if I would be willing to move to Korea so I could caption all of their English-language meetings, but I told him regretfully that I needed to stay in New York.
Even non-disabled native English speakers often find captioning helpful when trying to assimilate a large amount of rapid-fire information; captioning can give them correct spellings of difficult words, allow them to take more detailed notes, and provide dual-sensory feedback by sending the same information to their eyes and ears at the same time, which improves memory and retention. After every event I caption, I get dozens of people coming up to me and saying how useful they found the captioning. Some of those people self-identify as deaf or hard of hearing, but the majority do not.
So why isn't conference captioning more common? There are a number of reasons:
• People don't request captions. Refer back to that figure I mentioned up above. Of those one in seven people with hearing loss, very few feel comfortable requesting captioning. It takes an average of five years between the onset of hearing loss and a person admitting that they have it, even to themselves. There's still a tremendous amount of social stigma involved in admitting hearing loss. People don't like talking about it, and many would rather suffer through the frustrations of inaudible speech and missed information than ask for any special treatment. And even among people who realize they can't hear very well in a large lecture hall, so few have seen or heard of captioning that most wouldn't know to ask for it in the first place.
• Conferences don't want to pay for it. Captioners come with a certain amount of sticker shock, it's true, but I think the problem is more that it's an unfamiliar service, and its value is not clear to the people in charge of deciding what to spend their attendees' money on. Thirty years ago, most conference organizers would have balked at the idea of having to supply a computer, projector, and screen to every room so that each presenter could display PowerPoint slides, but these days it's de rigueur. Food costs money. Chairs cost money. Event space costs money. By adding a small surcharge to each attendee's ticket price, the captioning could be paid for quite easily (see the back-of-the-envelope sketch after this list), but organizers need to be convinced of its value first, and that's difficult to do because of the relative rarity of captioning right now. It needs to build up a certain amount of recognition and momentum before it's truly accepted as an ordinary conference amenity, like free wi-fi or complimentary lanyards. I've found that it's often easier to get the sponsors of conferences to pay for the captioning than to ask the organizers themselves. The companies that sponsor conferences like to be seen providing a public service, and accessibility is becoming a mark of good citizenship. If you propose captioning at a conference and the organizer swears that it's impossible (which, incidentally, is a violation of the ADA, but there's not much you can do to enforce that short of a lawsuit), ask them whether any of their sponsors would like to pay for the captioning in exchange for prominent billing. Also, whenever possible, ask organizers to survey their attendees after every captioned conference. The more positive feedback they get from people who appreciated the captions, the more likely they are to offer captioning in the future.
• CART provider availability is limited. Again, this is a bit of a chicken-and-egg problem; there isn't enough conference captioning to fill a typical provider's schedule on its own, so most providers pay the bills with academic captioning, and academic captioning schedules tend to conflict with all but weekend conferences. Conferences sometimes reach out to providers, only to find that no one's available. Captioning is put back in the "impossible" column, and the cycle perpetuates itself. I think the only solution is to increase the amount of conference captioning, so that some providers can specialize in it rather than being forced to tie themselves down to academic schedules.
• Reserved caption seating is often counterproductive. I've captioned at a few events in the past few weeks, and two of them employed a captioning section near the CART screens. Of course, this was better than no captioning at all, but it still wasn't ideal. For one thing, the seatbacks had "reserved for CART" posted on them by the conference staff. At one conference these seats were nearly all occupied by a group of self-identified late-deafened people who had requested the captioning in advance, and it worked out fairly well, even though, looking out into the crowd, I saw dozens of people over 65, many of whom almost certainly had some hearing loss, who were unable to benefit from the captions due to the screen size and placement. At the other conference, nearly all the "reserved for CART" seats were empty for several hours until I got wise and removed the signs. Then the seats filled up with people who followed the captions avidly and made a point of coming up to thank me afterwards, telling me how useful they'd found them. The problem was that word "reserved". It makes people think that they've got to be on some list before they're allowed to sit there, and many people who need captions stay away from those seats because they assume they must not be in the "reserved" group. The solution, of course, is to avoid small projector screens and caption vs. non-caption seating whenever possible, and instead to provide open captions to the entire room (and, if the event has a simultaneous webcast, to the internet as well) on large centralized screens. I'm very excited about Text On Top, a new device that seems to allow CART providers to overlay their captions on the presenter's own PowerPoint slides. Up until now it's only been available in Europe, but it just came out in the United States, and I'll be buying one soon. I'll probably put a review of it up here, so stay tuned.
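To put rough numbers on two of the claims above -- that any sizable audience is bound to include people who'd benefit, and that a small per-ticket surcharge could cover the cost -- here's a back-of-the-envelope sketch in Python. The audience size, over-65 share, and day rate are placeholders I've invented purely for illustration, not actual quotes; only the one-in-seven and one-in-three prevalence figures come from the discussion above.

# Back-of-the-envelope conference captioning math.
# All inputs except the two prevalence figures are hypothetical placeholders.

attendees = 300            # assumed audience size
share_over_65 = 0.2        # assumed share of attendees over 65
rate_general = 1 / 7       # one in seven people in the USA have hearing loss
rate_over_65 = 1 / 3       # one in three people over 65 do

# Expected number of attendees with some degree of hearing loss.
expected = attendees * (share_over_65 * rate_over_65
                        + (1 - share_over_65) * rate_general)
print(f"Expected attendees with some hearing loss: {expected:.0f}")   # ~54

day_rate = 1500            # hypothetical captioner's day rate, in dollars
surcharge = day_rate / attendees
print(f"Surcharge per ticket to cover captioning: ${surcharge:.2f}")  # $5.00

Even with guesses this conservative, the arithmetic works out to a few dozen likely beneficiaries per room, at roughly the price of a cup of coffee added to each ticket.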
For a great discussion of event accessibility from a consumer's perspective, read CART or ASL or ALD by Svetlana Kouznetsova on her excellent Audio Accessibility page. She goes into the intricacies of when CART is preferred to Sign Language interpretation (and vice versa) and the logistical tradeoffs of employing each accommodation. I hope that eventually captioning will become second nature to all organizers of large events, without it having to be specifically requested each time, but for now I'm grateful for each conference I get the chance to caption. The more people see it, the more they'll want it.