I was at the Eastside Xcoders on Thursday night for a talk by Alexander Caskey, a former employee at Microsoft Research, Wildfire, Cisco, and Linguistic Technology. He has been working on speech and natural language processing (NLP) for the past 20 years.
His talk was on Speech Recognition in Mobile Apps.
The talk went deep into the history and forms of speech recognition and NLP, and covered the state of the art today.
I went to hear what is available for mobile developers who want to add speech recognition into their apps. I have a future fitness app I am designing that would be served well by speech recognition as an alternate interaction model.
From the talk, I picked up that for iOS devs the place to start for speech recognition is Open Ears - iPhone Speech Recognition and Text To Speech. It looks like exactly what I need for the problem I have.
I have a limited vocabulary of words that I want to use as the interaction model for my app, so the user can control it by voice in situations where touch is not an option. I'll be investigating Open Ears to see how it works and will report back on how it goes.
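To give a sense of the shape of it, here is a rough sketch of the limited-vocabulary approach with OpenEars. This is based on my first read of the OpenEars documentation, so the class names and method signatures are assumptions that may not match the current API exactly, and the command words and path variables are hypothetical placeholders for my fitness app:

```objc
// Sketch only -- OpenEars class/method names here are from a first read
// of the docs and may differ by version; verify against the framework.
#import <OpenEars/LanguageModelGenerator.h>
#import <OpenEars/PocketsphinxController.h>

- (void)startListeningForCommands {
    // 1. Build a small language model from the app's command vocabulary.
    //    These words are hypothetical commands for a fitness app.
    LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];
    NSArray *words = @[@"START", @"STOP", @"PAUSE", @"NEXT", @"REPEAT"];
    NSError *error = [generator generateLanguageModelFromArray:words
                                                withFilesNamed:@"FitnessCommands"];

    // 2. Point the Pocketsphinx-based recognizer at the generated model
    //    and start listening. lmPath and dicPath stand in for the paths
    //    to the files the generator writes out.
    if (error == nil) {
        [self.pocketsphinxController startListeningWithLanguageModelAtPath:lmPath
                                                          dictionaryAtPath:dicPath
                                                       languageModelIsJSGF:NO];
    }
}
```

The appeal of this model for my use case is that a vocabulary of a handful of commands should be far more robust to noisy gym conditions than open-ended dictation would be.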