Language is more than just a set of vocabulary words. For example, think of how two different people can say the same thing two different ways. This complexity is what makes it so hard to create adequate machine translation programs, or even to teach machines to recognize spoken commands.
In an attempt to build cars that people can more easily control with their voices, Ford is teaming up with a company called Nuance Communications to address this issue using a technique called “statistical language modeling,” or SLM.
Ford’s SYNC system is one of the car company’s major selling points, letting drivers call people, control the stereo and more without taking their eyes off the road. At the moment, though, it’s pretty finicky about the commands it will accept: Ford programs the car to recognize specific recorded phrases, and if you give a command in a different form, the car won’t respond.
That’s not how people naturally talk, so in practice humans have to be trained to interact with the car. As Ford’s lead engineer for voice control, Brigitte Richardson, told MSNBC, statistical language modeling is “a totally different way of doing things. It wants to incorporate more natural ways of talking.” Ideally, you’d drive the car off the lot and it would “just work,” with little to no learning curve.
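To make the idea concrete, here is a minimal sketch of what a statistical language model looks like: a bigram model with add-one smoothing, trained on a handful of made-up commands. The corpus and every name here are illustrative assumptions, not Nuance's actual system.

```python
from collections import defaultdict

# Toy corpus of in-car commands (illustrative only, not real training data).
corpus = [
    "call mom",
    "call my mom",
    "please call mom",
    "play some jazz",
    "play jazz",
    "turn up the volume",
]

# Count unigram and bigram frequencies, using <s> as a start-of-sentence token.
unigrams = defaultdict(int)
bigrams = defaultdict(int)
for sentence in corpus:
    tokens = ["<s>"] + sentence.split()
    for i, token in enumerate(tokens):
        unigrams[token] += 1
        if i > 0:
            bigrams[(tokens[i - 1], token)] += 1

def score(sentence):
    """Bigram probability of a sentence, with add-one smoothing so
    word pairs never seen in training still get a small probability."""
    tokens = ["<s>"] + sentence.split()
    vocab_size = len(unigrams)
    probability = 1.0
    for prev, token in zip(tokens, tokens[1:]):
        probability *= (bigrams.get((prev, token), 0) + 1) / (unigrams.get(prev, 0) + vocab_size)
    return probability

# A natural word order scores higher than a garbled one, so the system can
# rank candidate transcriptions instead of demanding one exact phrase.
print(score("call mom") > score("mom call"))  # prints True
```

The key property is that every phrasing gets *some* probability rather than a hard accept/reject, which is what lets a recognizer prefer likelier wordings instead of matching one fixed template.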
To reach this goal, Ford is working with Nuance to build an “inference engine” that can learn, understand, and interpret voice commands. It won’t be easy, especially since the technology needs to work without the benefit of an internet connection. If they succeed, though, the end result will be a car you can talk to just as you would another human being, like a less super-powered version of KITT from Knight Rider.
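One plausible way to sketch the “inference engine” idea, purely as an illustration and not Ford's or Nuance's actual design, is to train one small statistical model per intent and pick whichever model finds the utterance most probable. All intent names, corpora, and functions below are hypothetical:

```python
from collections import defaultdict

# Hypothetical per-intent training phrases (illustrative only).
INTENT_CORPORA = {
    "phone_call": ["call mom", "call my mom", "please call mom at home"],
    "play_music": ["play some jazz", "play jazz please", "play my driving playlist"],
}

# Shared vocabulary across all intents, so smoothing is comparable between models.
VOCAB = {"<s>"} | {word for phrases in INTENT_CORPORA.values()
                   for phrase in phrases for word in phrase.split()}

def train(phrases):
    """Count unigrams and bigrams for one intent's phrases."""
    unigrams, bigrams = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        tokens = ["<s>"] + phrase.split()
        for i, token in enumerate(tokens):
            unigrams[token] += 1
            if i > 0:
                bigrams[(tokens[i - 1], token)] += 1
    return unigrams, bigrams

MODELS = {intent: train(phrases) for intent, phrases in INTENT_CORPORA.items()}

def likelihood(utterance, unigrams, bigrams):
    """Add-one-smoothed bigram probability under one intent's model."""
    tokens = ["<s>"] + utterance.split()
    p = 1.0
    for prev, token in zip(tokens, tokens[1:]):
        p *= (bigrams.get((prev, token), 0) + 1) / (unigrams.get(prev, 0) + len(VOCAB))
    return p

def infer_intent(utterance):
    """Pick the intent whose model finds the utterance most probable."""
    return max(MODELS, key=lambda name: likelihood(utterance, *MODELS[name]))

print(infer_intent("play some jazz please"))  # play_music
print(infer_intent("call my mom"))            # phone_call
```

Because everything here is counting and arithmetic over a fixed table, this style of model can run entirely on the car's own hardware, which matters given the no-internet constraint the article mentions.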