Speaking in Context: Medical Language Models and Mobile Dictation

Written on November 1, 2011

Speech recognition is an increasingly common interface – we interact with speech systems over the phone, on our phones and in our cars. But as Jonathan Dreyer points out in this piece, speech recognition for general use is different from speech recognition in healthcare, where an appropriate context is required to attain the necessary levels of accuracy.
>>>What’s “humerus” to a clinician, and what’s “humorous” to a consumer are two very different things

Quite! So it matters to use the right version, tuned for the user and their domain – and in the case of healthcare there are many different domains that can be applied for different specialties (radiology, orthopedics, general surgery, general medicine, to mention just a few). With the right context and model applied, medical speech recognition has become an integral part of clinical solutions, and it is becoming increasingly important in mobile applications, where the keyboard interface is not always ideal or as easily accessible.
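To make the humerus/humorous point concrete, here is a minimal sketch of why the domain model matters. Everything in it is hypothetical – toy scores and an invented `transcribe` helper, not any vendor's API – but it shows the core idea: two homophones can be acoustically identical, so the domain-tuned language model is what breaks the tie.

```python
# Toy per-domain unigram log-probabilities; the numbers are invented
# purely for illustration of domain bias.
DOMAIN_LM = {
    "radiology": {"humerus": -2.0, "humorous": -9.0},
    "general":   {"humerus": -9.0, "humorous": -2.0},
}

def transcribe(candidates, acoustic_scores, domain):
    """Pick the candidate with the best combined acoustic + language-model score."""
    lm = DOMAIN_LM[domain]
    return max(
        candidates,
        key=lambda word: acoustic_scores[word] + lm.get(word, -20.0),
    )

# Same audio, so the homophones get identical acoustic scores;
# only the domain model differs between the two calls.
cands = ["humerus", "humorous"]
acoustic = {"humerus": -5.0, "humorous": -5.0}
print(transcribe(cands, acoustic, "radiology"))  # humerus
print(transcribe(cands, acoustic, "general"))    # humorous
```

Real medical recognizers are far more sophisticated (contextual n-gram or neural language models per specialty), but the design choice is the same: the acoustic signal alone cannot separate these words, and a generic model will systematically pick the consumer-frequency spelling.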

So while general speech recognition solutions are delivering real value, to achieve the same results in healthcare it is important not to fall into the trap of offering generic solutions. They will work, but they will generate too many errors to be usable and, worse, will turn clinicians off the tools before they have even had a chance to experience the results that are possible today with the right tools for medical speech recognition.

So if you are looking to integrate speech into your healthcare applications, use the right version – one that includes the relevant context and vocabulary models – and help create a positive experience for users from the beginning.

Posted via email from drnic’s posterous
