Speech synthesis of auditory lecture books for blind school children (SALB)
Information technology in general, and speech technology in particular, have improved access to information for blind and partially sighted users. Today, blind users can reach the full range of information on the web through speech-based user interfaces (UIs), whose advantages over Braille displays are lower cost and usability without special training. Combining speech-based UIs with Braille displays makes the interaction even more robust.
Parametric speech synthesis methods are now used in many speech-based UIs because they require little memory, are computationally efficient, and are highly flexible. Their intelligibility is sufficient at normal speaking rates, but quality and intelligibility at high speaking rates still need improvement.
Parametric methods based on hidden Markov models (HMMs) allow a high degree of flexibility. Through model adaptation, voices for specific speakers can be created with little effort. Adaptive methods can also be used to generate fast speech, which is essential for blind users to interact efficiently with an information system. In this project we evaluated HMM-based synthesis of different language varieties (standard, dialect, sociolect) for audio textbooks. Moreover, we analyzed the influence of different social roles (teacher vs. student) and of the self-perception and perception by others that exist between the listener and the person whose voice is synthesized.
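As an illustration of how such model adaptation is commonly realized in HMM-based synthesis (the project description does not specify the exact algorithm used), an MLLR-style linear transform maps the Gaussian means of an average-voice model towards a target speaker. The sketch below is a minimal, hypothetical example; all names and values are illustrative.

```python
import numpy as np

def mllr_adapt_means(means, A, b):
    """Apply an MLLR-style linear transform to Gaussian mean vectors.

    means : (num_states, dim) array of average-voice model means
    A     : (dim, dim) regression matrix estimated from adaptation data
    b     : (dim,) bias vector
    Returns the adapted means mu_hat = A @ mu + b for every state.
    """
    return means @ A.T + b

# Hypothetical usage: adapt a tiny 3-state, 4-dimensional model.
rng = np.random.default_rng(0)
avg_means = rng.standard_normal((3, 4))
A = np.eye(4) * 1.05          # transform estimated from the target speaker's data
b = np.full(4, 0.1)
speaker_means = mllr_adapt_means(avg_means, A, b)
```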
On the basic-technology side, we optimized the synthesis of fast speech with HMMs using adaptation and interpolation methods.
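A minimal sketch of the interpolation idea, under the assumption that a normal-rate and a fast-rate model are blended linearly in parameter space (spectral and state-duration means); the model structure, names, and weights below are illustrative, not the project's actual implementation.

```python
import numpy as np

def interpolate_models(model_a, model_b, alpha):
    """Linearly interpolate two HMM parameter sets.

    model_a, model_b : dicts with 'spectral_means' and 'duration_means' arrays
    alpha            : interpolation weight, 0.0 -> model_a, 1.0 -> model_b
    """
    return {
        key: (1.0 - alpha) * model_a[key] + alpha * model_b[key]
        for key in ("spectral_means", "duration_means")
    }

# Hypothetical usage: blend a normal-rate voice with a fast-rate voice.
normal = {"spectral_means": np.zeros((5, 40)), "duration_means": np.full(5, 8.0)}
fast = {"spectral_means": np.ones((5, 40)) * 0.1, "duration_means": np.full(5, 4.0)}
blended = interpolate_models(normal, fast, alpha=0.5)  # intermediate speaking rate
```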
School children and teachers contributed their knowledge through user workshops and also took part in the development and evaluation of the synthetic voices. The findings from this cooperation substantially improved the development of the speech-based user interfaces.
This project has been completed.