Ford Motor Company utilizes Nuance speech solutions for its new in-car digital system
BURLINGTON, Mass. - Ford Motor Company is utilizing an array of Nuance speech solutions for
its new in-car digital system. The new factory-installed, in-car communications and
entertainment system, dubbed Sync, is designed to change the way consumers use portable
digital media players and mobile phones in their vehicles. This Ford-exclusive
technology, based on Microsoft Auto software, includes Nuance's speech recognition and
speech synthesis technologies that provide consumers with the ability to bring into their
vehicle nearly any mobile phone or digital media player and operate it using voice commands,
or the vehicle's steering wheel or radio controls. Sync integrates the vehicle with portable
electronic devices and is upgradeable. Features of the new Sync system based on Nuance
speech solutions include:
Voice-activated calling: Drivers can press the Push-to-Talk button on the steering wheel and
then say the name of the person to call. Sync matches the spoken name against the mobile
phone's contact list and places the call.
Audible text messages: With Nuance speech synthesis, Sync will convert text messages from a
phone to audio and read them out loud. The system also translates commonly used text
messaging expressions such as LOL.
Voice-activated music: Drivers can browse the music collection on a digital media player or
USB drive by genre, album, artist, and song title using voice commands, such as "Play Genre
Rock," "Play U2," or "Play Playlist Acoustic."
Voice recognition: The Nuance speech recognition in Sync lets a driver speak commands that
are understood by the system, without any training.
Multilingual intelligence: Sync is fluent in English, French, and Spanish. Sync will debut
this calendar year on the 2008 Ford Focus, Fusion, Five Hundred, Edge, Freestyle, Explorer,
and Sport Trac; Mercury Milan, Montego and Mountaineer; and Lincoln MKX and MKZ. The
technology is expected to be available on all Ford, Lincoln, and Mercury vehicles in the near
future.
from Speech Technology, September 2007, page 12