Oddcast
Part of the Personal Messenger has been designed to handle human-device interaction. One goal of the Marid project is to touch lots of networks, including those in which you do not have a keyboard: for instance, listening and responding to your email via phone, or speaking and listening to your IM messages. The key part of this is the translation between the channels.
This involves two types of communication channels - immediate/live and offline/delayed. The offline case usually requires less bandwidth and processing time. If we can translate your email to sound as it comes in, and have the audio waiting for your phone call, then we gain both functionality and performance. It would be nice to say: forward this email, in voice format, to another phone number, where it will be played after a short introduction. That would be difficult with a live conversation (but doable after the conversation has completed).
With this in mind, there are currently two ways of creating human-audible speech: translate the text through a TTS engine (Festival or FreeTTS) ahead of time, or perform the same operation on the fly.
In the immediate/live case, the device that will render the sound needs to be considered. As of this post, we do not have an immediate/live TTS solution for Asterisk. We do have a solution for a web device (limited to the browser at the moment).
Wow - nasty preamble.
Ok, to the solution. We are using Oddcast's VHost solution on the MaridSpace home page. As part of the solution, you can connect the tool to an ALICE chatterbot (notice they are using Oddcast also). But more importantly, you can use JavaScript to push/pull content and then have the tool do the TTS for you. Below is most of the code required to pull off a full chatterbot implementation.
<script type="text/javascript">
AC_VHost_Embed_9329(112,150,'FFFFFF',1,1,123207,0,0,0,'69e82a5026978b9d275a540feb6fdfed',6);
</script>
<form name="myForm" class="form" onsubmit="sayAIResponse(textToSay.value, 1, 1, 1, 1);textToSay.value='';return false;">
<input name="textToSay" value="" class="form" size="8" maxlength="80">
<input type="button" value="Ask AI Engine" onclick="sayAIResponse(textToSay.value, 1, 1, 1, 1);textToSay.value='';" class="form">
<textarea name="message" rows="1" cols="1" readonly></textarea>
</form>
<a href="/index.php?replay=true">Replay</a>
Two items of interest: the textarea is required to hold the response from the chatterbot server, and the Replay link removes (on the reload) the cookie that says you have already heard the presentation.
Now to the details. sayAIResponse is a JavaScript function that makes a call to Oddcast's servers _without_ doing a POST. This allows for quicker render time on the page (since we don't have to reload the Oddcast tool), and we can immediately start rendering the TTS sound.
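The post doesn't show the internals of sayAIResponse, but a "postless" call like this is typically done by injecting a dynamic script tag whose GET response is JavaScript that calls back into the page. A minimal sketch of that technique, assuming a hypothetical endpoint and parameter name (not Oddcast's real API):

```javascript
// Build the request URL; the '/ai/say' path and 'text' parameter are
// illustrative assumptions, not Oddcast's actual interface.
function buildSayRequestUrl(host, text) {
  // Encode the user's text so it survives the query string.
  return host + '/ai/say?text=' + encodeURIComponent(text);
}

function postlessCall(url) {
  // Injecting a script tag makes the browser GET the URL without a
  // form POST or page reload; the returned JavaScript can then call
  // back into the page (e.g. to start the TTS audio).
  var s = document.createElement('script');
  s.type = 'text/javascript';
  s.src = url;
  document.getElementsByTagName('head')[0].appendChild(s);
}
```

Because the Oddcast tool stays loaded in the page across these calls, the character can start speaking as soon as the response arrives.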
We are using this technology in the Personal Messenger.
So to implement the same postless communication with a Marid server, we are using Brent Ashley's Remote Scripting (JSRS) tools against a servlet. This allows us to implement our own ALICE server (with a little spice) and still use the TTS features.
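For the Marid side, a JSRS-style round trip might be wired up roughly as below. jsrsExecute is the entry point of Brent Ashley's JSRS library; the /marid/chat servlet URL and the 'chat' remote-function name are assumptions for illustration:

```javascript
// Ask our own chat servlet a question via JSRS, then hand the reply
// to the Oddcast tool. The servlet URL and remote-function name are
// hypothetical; sayAIResponse is the Oddcast function shown above.
function askMaridServer(userText) {
  // JSRS performs the round trip in a hidden frame, then invokes the
  // callback with the servlet's reply -- again, no page reload.
  jsrsExecute('/marid/chat', onMaridReply, 'chat', userText);
}

function onMaridReply(reply) {
  // Show the text response and let the Oddcast character speak it.
  document.myForm.message.value = reply;
  sayAIResponse(reply, 1, 1, 1, 1);
}
```

The servlet can then do whatever it likes behind that call - an ALICE engine, mail lookup, or both - as long as it returns plain text for the callback.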
This should allow for the final goal of listening to your email over coffee (but not yet responding to it).