Posted: Sun Jun 18, 2006 6:22 am
by Shroomz~>
I'm thinking this is an idea for the more advanced SDK devs out there. Imagine a module with text entry & alphabetical recognition linked to an internal speech synthesiser. You type in a word or two & press a 'talk' button...

Anyone think this would be possible? Anything is possible right?


Posted: Sun Jun 18, 2006 12:15 pm
by Shroomz~>
In fact, a standalone SFP device with the power of Vocaloid would be incredible.


Posted: Sun Jun 18, 2006 1:53 pm
by HUROLURA
From what I remember from my studies, speech synthesis is mostly a matter of taking some sound sources (called voiced or unvoiced) and passing them through formant filters (bandpass filters, actually).
Voiced sounds are responsible for vowel generation ([a], [i], [o]), while unvoiced sounds are just filtered noise...
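The source-filter idea above can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation: the voiced source is a crude impulse train at the pitch frequency, each formant is a two-pole digital resonator, and the formant frequencies/bandwidths for [a] are rough textbook values I'm assuming here.

```python
import math

def resonator(signal, freq, bandwidth, fs):
    """Two-pole bandpass (formant) filter, i.e. a digital resonator."""
    r = math.exp(-math.pi * bandwidth / fs)          # pole radius from bandwidth
    c1 = 2.0 * r * math.cos(2.0 * math.pi * freq / fs)
    c2 = -r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x + c1 * y1 + c2 * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def voiced_source(f0, fs, n):
    """Impulse train at the pitch frequency (very crude glottal source)."""
    period = int(fs / f0)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

def synth_vowel(formants, f0=110, fs=8000, n=8000):
    """Sum the resonator outputs over one voiced source to approximate a vowel."""
    src = voiced_source(f0, fs, n)
    out = [0.0] * n
    for freq, bw in formants:
        for i, y in enumerate(resonator(src, freq, bw, fs)):
            out[i] += y
    return out

# Roughly the first two formants of [a]: ~700 Hz and ~1200 Hz
samples = synth_vowel([(700, 130), (1200, 70)])
```

Swap the impulse train for white noise (and skip the pitch) and you get the unvoiced/fricative case.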

Now the problem would be to build up something which could be fed with text.

The only idea I have now would be to use a pattern generator ...
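A pattern-generator front end might just be a lookup table from letters to synthesis parameters. The table below is entirely hypothetical (one letter = one phoneme is far too naive for real English text), but it shows the shape of the idea: each entry says whether the source is voiced and which formants to apply.

```python
# Hypothetical phoneme table: letter -> (voiced?, list of (formant_hz, bandwidth_hz)).
# Unvoiced entries would drive filtered noise instead of the pulse train.
PHONEMES = {
    "a": (True, [(700, 130), (1200, 70)]),
    "o": (True, [(450, 80), (800, 80)]),
    "s": (False, []),
}

def text_to_patterns(text):
    """Very naive front end: one recognised letter = one phoneme pattern."""
    return [PHONEMES[ch] for ch in text.lower() if ch in PHONEMES]

patterns = text_to_patterns("ao")
```

Each pattern could then be handed to the formant-filter stage, one phoneme at a time, with some cross-fading between them.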

Will have a look if I can still find something more accurate about this topic.



Posted: Sun Jun 18, 2006 2:12 pm
by Shroomz~>
Here's an article with some good links at the bottom >>> http://emusician.com/tutorials/emusic_v ... ex.html