25 Facts About Speech Synthesis

1. The first video game to feature speech synthesis was the 1980 shoot 'em up arcade game Stratovox, from Sun Electronics.

2. The first personal computer game with speech synthesis was Manbiki Shoujo, released in 1980 for the PET 2001, for which the game's developer, Hiroshi Suzuki, developed a "zero cross" programming technique to produce a synthesized speech waveform.

3. The quality of synthesized speech has steadily improved, but, as of 2016, output from contemporary speech synthesis systems remains clearly distinguishable from actual human speech.

4. Concatenative synthesis is based on the concatenation of segments of recorded speech.

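To make the idea concrete, here is a minimal sketch of unit concatenation with a short crossfade at each join; the waveform "units", the sample rate, and the fade length are placeholder assumptions rather than details from the fact above.

```python
import numpy as np

def concatenate_units(units, sample_rate=16000, fade_ms=5):
    """Join recorded speech segments, crossfading a few milliseconds at each boundary."""
    fade = int(sample_rate * fade_ms / 1000)
    out = units[0].astype(np.float64)
    for seg in units[1:]:
        seg = seg.astype(np.float64)
        ramp = np.linspace(0.0, 1.0, fade)
        # Blend the tail of the output with the head of the next unit.
        out[-fade:] = out[-fade:] * (1.0 - ramp) + seg[:fade] * ramp
        out = np.concatenate([out, seg[fade:]])
    return out

# Placeholder "recordings": in a real system these would be units cut from a speech database.
units = [np.random.randn(4000), np.random.randn(3200), np.random.randn(4800)]
speech = concatenate_units(units)
```
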
5. Diphone synthesis uses a minimal speech database containing all the diphones occurring in a language.

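As a hedged illustration of how a diphone inventory is used, the sketch below turns a phoneme sequence into diphone keys and concatenates the corresponding recordings; the phoneme symbols and the tiny in-memory database are invented for the example.

```python
import numpy as np

def phonemes_to_diphones(phonemes):
    """Convert a phoneme sequence into the 'left-right' diphone keys a database stores."""
    padded = ["_"] + phonemes + ["_"]          # "_" marks silence at the utterance edges
    return [f"{a}-{b}" for a, b in zip(padded, padded[1:])]

def synthesize(phonemes, diphone_db):
    keys = phonemes_to_diphones(phonemes)
    return np.concatenate([diphone_db[k] for k in keys])

# Toy database: one short recording per diphone (a real one covers every diphone in the language).
diphone_db = {k: np.random.randn(1600) for k in ["_-h", "h-@", "@-l", "l-ow", "ow-_"]}
print(phonemes_to_diphones(["h", "@", "l", "ow"]))   # ['_-h', 'h-@', '@-l', 'l-ow', 'ow-_']
speech = synthesize(["h", "@", "l", "ow"], diphone_db)
```
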
6. An early example of diphone synthesis is Leachim, a teaching robot invented by Michael J. Freeman.

7. Domain-specific speech synthesis concatenates prerecorded words and phrases to create complete utterances.

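The classic case is a talking clock or transit announcement. The sketch below shows the lookup-and-concatenate pattern; the recording names and the load_wave helper are hypothetical stand-ins for prerecorded audio files.

```python
import numpy as np

def load_wave(name):
    """Stand-in for loading a prerecorded word or phrase; a real system would read an audio file."""
    return np.random.randn(8000)

def say_time(hour, minute):
    """Assemble 'the time is <hour> <minute>' from prerecorded pieces."""
    pieces = ["the_time_is", str(hour)]
    pieces.append("oclock" if minute == 0 else f"{minute:02d}")
    return np.concatenate([load_wave(p) for p in pieces])

announcement = say_time(9, 30)   # "the time is nine thirty"
```
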
8. Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech.

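For a sense of why formant synthesis sounds the way it does, here is a toy source-filter sketch: an impulse train at the pitch period is passed through second-order resonators placed at assumed formant frequencies. The pitch and formant values are illustrative guesses for a vowel, not anything taken from the fact above.

```python
import numpy as np
from scipy.signal import lfilter

def resonator(signal, freq, bandwidth, fs):
    """Second-order IIR resonance at one formant frequency."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]
    b = [1.0 - r]                      # rough gain normalization
    return lfilter(b, a, signal)

fs, f0, dur = 16000, 120, 0.5
t = np.arange(int(fs * dur))
source = (t % (fs // f0) == 0).astype(float)   # buzzy impulse train at the pitch period

# Cascade resonators at rough formant frequencies for an /a/-like vowel (illustrative values).
vowel = source
for freq, bw in [(700, 130), (1220, 70), (2600, 160)]:
    vowel = resonator(vowel, freq, bw, fs)
```
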
9. Articulatory synthesis refers to computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there.

10. Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems.

11. HMM-based speech synthesis, also called statistical parametric synthesis, is a synthesis method based on hidden Markov models.

12. In HMM-based synthesis, speech waveforms are generated from the HMMs themselves based on the maximum likelihood criterion.

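As a hedged sketch of what "maximum likelihood" means here: the generated parameter trajectory is the one that maximizes the likelihood of the per-frame Gaussians (taken from the HMM state sequence) under dynamic-feature constraints, which reduces to a weighted least-squares solve, c = (W'PW)^-1 W'P mu. The toy below does this for a one-dimensional feature stream; the means, variances, and delta window are invented for illustration.

```python
import numpy as np

def mlpg_1d(means, variances, d_means, d_variances):
    """Maximum-likelihood parameter generation for one static stream plus its delta stream."""
    T = len(means)
    # W stacks the static window (identity) and a simple delta window, (c[t+1] - c[t-1]) / 2.
    W_static = np.eye(T)
    W_delta = np.zeros((T, T))
    for t in range(T):
        if t > 0:
            W_delta[t, t - 1] = -0.5
        if t < T - 1:
            W_delta[t, t + 1] = 0.5
    W = np.vstack([W_static, W_delta])
    mu = np.concatenate([means, d_means])
    prec = np.concatenate([1.0 / np.asarray(variances), 1.0 / np.asarray(d_variances)])
    # Solve (W' P W) c = W' P mu: the ML static trajectory given the Gaussians.
    A = W.T @ (prec[:, None] * W)
    b = W.T @ (prec * mu)
    return np.linalg.solve(A, b)

# Toy per-frame Gaussians from an HMM state sequence; small delta variances
# pull the result toward a smooth trajectory rather than a step function.
means       = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])
variances   = np.full(6, 0.1)
d_means     = np.zeros(6)
d_variances = np.full(6, 0.05)
trajectory = mlpg_1d(means, variances, d_means, d_variances)
```
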
13. Sinewave synthesis is a technique for synthesizing speech by replacing the formants with pure tone whistles.

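A minimal sketch of the idea, assuming a steady vowel: replace the first three formants with pure sinusoids at fixed frequencies and sum them. Real sinewave speech varies these frequencies and amplitudes frame by frame to follow the formant tracks of an actual utterance; the constants here are illustrative only.

```python
import numpy as np

fs, dur = 16000, 0.5
t = np.arange(int(fs * dur)) / fs

# Three "formant whistles" for a steady /a/-like vowel (illustrative constants).
formants   = [700.0, 1220.0, 2600.0]   # Hz
amplitudes = [1.0, 0.5, 0.25]

sinewave_speech = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(formants, amplitudes))
```
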
14. Deep learning speech synthesis uses deep neural networks to produce artificial speech from text or spectrum.

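As a hedged sketch of the "features in, speech parameters out" idea, here is a tiny feed-forward acoustic model in PyTorch that maps per-frame linguistic features to mel-spectrogram frames. The feature sizes and layer widths are arbitrary choices for the example, and a real system would add a vocoder to turn the predicted spectrogram into a waveform.

```python
import torch
import torch.nn as nn

class TinyAcousticModel(nn.Module):
    """Maps per-frame linguistic features to mel-spectrogram frames."""
    def __init__(self, in_dim=100, hidden=256, n_mels=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_mels),
        )

    def forward(self, linguistic_features):      # (batch, frames, in_dim)
        return self.net(linguistic_features)     # (batch, frames, n_mels)

model = TinyAcousticModel()
features = torch.randn(1, 200, 100)              # 200 frames of dummy linguistic features
mel = model(features)                            # predicted spectrogram; a vocoder would follow
```
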
15. The quality of speech synthesis systems depends on the quality of the production technique and on the facilities used to replay the speech.

16. The speech synthesis demo at the Macintosh's 1984 introduction was accomplished with a prototype 512K Mac, although those in attendance were not told of this, and the demo created considerable excitement for the Macintosh.

17. The Amiga's voice synthesis was licensed by Commodore International from SoftVoice, Inc., which developed the original MacinTalk text-to-speech system.

18. The Amiga synthesis system was divided into a translator library, which converted unrestricted English text into a standard set of phonetic codes, and a narrator device, which implemented a formant model of speech generation.

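The division described here is the familiar TTS front-end/back-end split. The sketch below mirrors it with a toy dictionary-based translator and a placeholder narrator; the lexicon, phoneme codes, and function names are hypothetical, and the real Amiga narrator device rendered the phonemes with a formant model.

```python
import numpy as np

# Front end: unrestricted text -> a standard set of phonetic codes (toy dictionary lookup).
LEXICON = {"hello": ["HH", "AH", "L", "OW"], "world": ["W", "ER", "L", "D"]}

def translate(text):
    phonemes = []
    for word in text.lower().split():
        phonemes.extend(LEXICON.get(word, []))   # a real translator falls back to letter-to-sound rules
    return phonemes

# Back end ("narrator"): phonetic codes -> waveform via some acoustic model (placeholder here).
def narrate(phonemes, fs=16000):
    return np.concatenate([np.random.randn(fs // 10) for _ in phonemes])

audio = narrate(translate("hello world"))
```
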
19. On the Amiga, speech synthesis was occasionally used in third-party programs, particularly word processors and educational software.

20. The synthesis software remained largely unchanged from the first AmigaOS release, and Commodore eventually removed speech synthesis support from AmigaOS 2.1 onward.

21. Microsoft Speech Server is a server-based package for voice synthesis and recognition.

22. Texas Instruments offered speech synthesizers free with the purchase of a number of cartridges, and they were used by many TI-written video games.

23. Speech synthesis has long been a vital assistive technology tool, and its application in this area is significant and widespread.

24. Speech synthesis techniques are used in entertainment productions such as games and animations.

25. Text-to-speech is finding new applications; for example, speech synthesis combined with speech recognition allows for interaction with mobile devices via natural language processing interfaces.
