Was just wondering if anyone has ideas on how to get speech recognition and synthesis working on the board. I've already got recognition and synthesis working in my web-based GUI that controls the board, but some stages of my project require the board itself to be able to hear and speak.
1. What microphone would be best to use on the board?
2. What speech recognition API is best to use? In my web GUI I use a combination of the Web Speech API and espeak depending on the browser, but I'm not sure which would be best in a browserless situation.
3. What text-to-speech API would be best to use?
4. What speakers would be best for outputting speech generated by text-to-speech?
If anyone could point me to code/shields/APIs, it would be a big help.
Guess nobody has done much with this. I use a simple USB sound card and espeak to produce speech, which works with any cheap powered speakers (I have some that are powered over USB), and the sound card can also record, but I haven't tried speech recognition. If you are using AlexT's version of Yocto you can install espeak from his repository: opkg install espeak
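To expand on the espeak suggestion, here is a minimal Python sketch of shelling out to espeak from a script on the board. This assumes espeak is installed and on PATH (e.g. via opkg as above); the -v (voice) and -s (speed, words per minute) flags are standard espeak options, and the speak helper name is just for illustration.

```python
import shutil
import subprocess

def speak(text, voice="en", speed=160):
    """Build an espeak command line and run it if espeak is available.

    Returns the command list either way, so callers can inspect or log it.
    """
    cmd = ["espeak", "-v", voice, "-s", str(speed), text]
    if shutil.which("espeak"):
        # Audio goes to the default ALSA device, e.g. the USB sound card.
        subprocess.run(cmd, check=True)
    return cmd

if __name__ == "__main__":
    print(speak("Hello from the board"))
```

The guard with shutil.which means the script degrades gracefully when espeak isn't installed, which is handy while you're still setting up the image.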