DMCPP: development of another dialog manager

An experimental dialog manager for the Linux version of Galatea, written in C++ using libjulius and OpenCV, is under development.
The project focuses on low-level multimodal event handling, while Galatea Dialog Studio focuses on higher-level dialog management using VoiceXML.
Although the current version is only an application skeleton, I would like to ask developers for feedback.

DMCPP in English
DMCPP in Japanese

Please join the mailing lists for discussion.

Voice interface and effectiveness

One of my colleagues gave a presentation at the Human-Agent Interaction symposium in Tokyo yesterday.

Our assumption is that human-like spoken dialogs are highly effective. We propose using reinforcement learning to acquire a strategy for responding quickly to overlapped utterances, interruptions, and gestures during spoken dialog between human and machine. Although the research is still at an early stage, we hope that something like mind-reading will become possible; in other words, users of spoken dialog systems will not need to speak every utterance from beginning to end.
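As a rough illustration of the idea, here is a minimal tabular Q-learning loop in Ruby; the states, actions, and rewards below are hypothetical stand-ins for illustration, not the actual system.

    # Minimal Q-learning sketch: should the agent keep speaking or yield
    # the turn when the user barges in? (Hypothetical states and rewards.)
    STATES  = [:no_overlap, :user_barge_in]
    ACTIONS = [:keep_speaking, :yield_turn]

    q = Hash.new(0.0)                      # Q-table: [state, action] => value
    alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

    # Hypothetical reward: yielding on a barge-in is good, talking over the
    # user is bad, and yielding for no reason slows the dialog down.
    reward = lambda do |state, action|
      if state == :user_barge_in
        action == :yield_turn ? 1.0 : -1.0
      else
        action == :keep_speaking ? 0.5 : -0.5
      end
    end

    10_000.times do
      s  = STATES.sample                   # simulated user behavior
      a  = rand < epsilon ? ACTIONS.sample : ACTIONS.max_by { |x| q[[s, x]] }
      s2 = STATES.sample                   # simulated next state
      q[[s, a]] += alpha * (reward.call(s, a) +
                            gamma * ACTIONS.map { |x| q[[s2, x]] }.max - q[[s, a]])
    end

    STATES.each { |s| puts "#{s}: best action is #{ACTIONS.max_by { |a| q[[s, a]] }}" }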
Continue reading “Voice interface and effectiveness”

Research on Spoken Dialogue Agent

The upcoming publications at the Human Agent Interaction Symposium (HAI2009) are as follows:

  • Masayuki Nakazawa, Takuya Nishimoto, Shigeki Sagayama:
    Title: Behavior Generation for Spoken Dialogue Agent by Dynamical Model
    Abstract: For spoken dialog systems with anthropomorphic agents, it is important to give natural impressions and a sense of real presence to humans. For this purpose, head and gaze controls of the agent that are consistent with the spoken dialog are expected to be effective. Our approach is based on the following hypotheses: 1) an agent performs the dialog concurrently with intentional control of the head and gaze, in order to retrieve information and to give signals; 2) the movement of the head and eyeballs is based on mathematical models. To achieve these purposes, we have adopted a mathematical model for the movements of the agent.
    There are several merits to formulating movements with a mathematical model: a) the parameters can reflect subjectivity, so that various movements can be generated from the model; b) the movements of the agent can reflect personality; c) the continuous movements of the agent can be controlled mathematically. In this paper, we propose a mathematical model based on a second-order system, compare it with a linear model, and show its superiority (see the sketch after this list).
  • Di Lu, Masayuki Nakazawa, Takuya Nishimoto, Shigeki Sagayama:
    Title: Barge-in Control with Reinforcement Learning for Efficient Multi-modal Spoken Dialogue Agent
    Abstract: To make the dialog between the agent and the user smoother, we propose a multimodal user simulator that can be widely used for real-time control of a multimodal dialog agent with reinforcement learning. We also implemented a prototype system that utilizes the results of the reinforcement learning.
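The abstracts describe the models only at a high level. As a hedged illustration of the first paper's approach, a standard second-order system driving a head or gaze angle toward a target could take the following form (this is the textbook form; the paper's exact model may differ):

    % Assumed standard second-order system, not necessarily the paper's model.
    % \theta(t): head/gaze angle, \theta_d(t): target angle,
    % \zeta: damping ratio, \omega_n: natural frequency.
    \ddot{\theta}(t) + 2\zeta\omega_n\,\dot{\theta}(t) + \omega_n^2\,\theta(t)
        = \omega_n^2\,\theta_d(t)

The damping ratio and natural frequency shape the motion: a small \zeta gives quick, overshooting movements, while a large \zeta gives slow, deliberate ones, which suggests how such parameters could express the subjectivity and personality mentioned in merits a) and b).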

Date: Fri, Dec 4 – Sat, Dec 5, 2009

Place: Tokyo Institute of Technology

Language: Japanese

Date: Thu, Oct 29 – Fri, Oct 30, 2009

Place: ASPAM (Aomori City, Japan)

Language: Japanese

Galatea release announcement

The latest Galatea Toolkit (beta version) has been released:

http://en.sourceforge.jp/projects/galatea/releases/

Please note that the current version supports Japanese conversations only. I would like to discuss plans for internationalizing the toolkit. The English documents are not fully checked. Please send comments and suggestions to the galatea-i18n mailing list, which is hosted at sourceforge.jp.

P.S. Galatea Toolkit video demos are now available on YouTube.

A multimodal interactive system based on hierarchical Model-View-Controller architecture

Multimodal interactive systems are expected to come into wide use. To realize life-like agents or humanoid robots, a flexible architecture for integrating software modules is necessary, and many frameworks have been proposed:

  • Joseph Polifroni, Stephanie Seneff. 2000. Galaxy-II as an Architecture for Spoken Dialogue Evaluation. Proceedings of the Second International Conference on Language Resources and Evaluation, pp. 42-50.
  • Yosuke Matsusaka, Kentaro Oku, Tetsunori Kobayashi. 2003. Design and Implementation of Data Sharing Architecture for Multi-Functional Robot Development. Trans. of IEICE, Vol. J86-D1, No. 5, pp. 318-329 (in Japanese).
  • SRI International, The Open Agent Architecture. http://www.ai.sri.com/~oaa/

In this post, the following topics related to Galatea Toolkit are discussed:

  1. A developer should be able to easily customize a parameter that influences many modules within the system.
  2. A developer who has no knowledge of speech technology should be able to develop spoken dialog applications efficiently.

Continue reading “A multimodal interactive system based on hierarchical Model-View-Controller architecture”

Japanese TTS for NVDA

Objectives
Many of the tools that enable visually impaired people to use a PC and the Web are currently commercial software. This causes the following problems:

  • The financial burden on users.
  • It is difficult to respond flexibly and rapidly to changes in users' needs and OS environments.
  • Needs cannot be shared between Web developers and people with visual disabilities. A tool with speech output is useful for verifying Web accessibility, but many Web developers do not use one because such software is commercial.

In recent years, NVDA, an open-source screen reader for Windows, has attracted attention.

Continue reading “Japanese TTS for NVDA”

Galatea English Technical Notes

The English technical notes page for the Linux version of Galatea is available:

(continued from the previous post)
I had implemented a simple template engine in Java before experiencing Ruby on Rails, and I have also used template engines for PHP and Perl. However, installation could be troublesome, and I was dissatisfied with having to use engine-dependent description languages.
Ruby ships with a template engine called ERB by default. In ERB we can use the Ruby language itself, and the functionality of ERB is easily available from our own Ruby scripts. I have a good feeling about these things.
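For example, here is a minimal sketch of calling ERB from an ordinary Ruby script; the VoiceXML fragment and the variable name are illustrations only.

    require 'erb'

    # The template embeds Ruby itself: <%= ... %> is replaced by the value
    # of the enclosed expression when the template is rendered.
    template = ERB.new(<<~VXML)
      <vxml version="2.1">
        <form>
          <block><prompt><%= greeting %></prompt></block>
        </form>
      </vxml>
    VXML

    greeting = "Hello from Galatea"
    puts template.result(binding)  # renders with the local variables in scope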
A VoiceXML browser and an HTML browser do not occupy completely equivalent positions. In addition, further consideration is necessary when modalities are put together. In which hierarchy should each part be built? We want to make proposals on this point, based on our implementation experience.

Galatea English Tutorial

A new English tutorial page for the Linux version of Galatea is available:

An English version of the Release Notes page has also been created:

We have been involved in the Galatea voice interaction toolkit project and in the standardization of multimodal dialog descriptions and architecture, and we have considered implementing VoiceXML applications with Ruby on Rails. This led us to the idea that the implementation of a hierarchized system becomes a hierarchy of template engines.

Many Web application frameworks offer a template engine. Standardizing the description within each of the many hierarchies has merits, but it also has the demerit that descriptions become redundant. The template engine is one expedient for solving this problem.

The Interactive Speech Technology Consortium (ISTC) investigated the standardization of interaction description specifications for each hierarchy of an interface system with voice and GUI input/output. Looking more closely at the so-called Model/View/Controller structure, six classes were proposed.
Considering Galatea Dialog Studio, the dialog control engine that I continue to develop, some of these hierarchies correspond to the MVC framework of Ruby on Rails.
A reasonable way to implement a voice interaction system is therefore as follows: first implement a so-called Web application, then replace only the HTML-dependent layer with VoiceXML, as sketched below.
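As a minimal sketch of this idea (the templates and names are hypothetical), the same data can be rendered through an HTML template for a GUI browser or through a VoiceXML template for a voice browser, so that only the view layer differs:

    require 'erb'

    # Two views over the same data: only the template depends on the modality.
    TEMPLATES = {
      html:     "<p>Hello, <%= name %>.</p>",
      voicexml: "<vxml version=\"2.1\"><form><block>" \
                "<prompt>Hello, <%= name %>.</prompt></block></form></vxml>"
    }

    def render(modality, name)
      ERB.new(TEMPLATES.fetch(modality)).result(binding)
    end

    puts render(:html, "Galatea")      # the HTML-dependent layer
    puts render(:voicexml, "Galatea")  # the same dialog, spoken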

In the Linux version of Galatea Toolkit, the difficulty of installation and configuration remained a problem. We succeeded in unifying the modules with our original design; at present, however, when customization or device settings are needed, many places must be changed without contradiction.

Each hierarchy requires its own parameters and setting information in order to operate, and this is not the interaction description itself. For example, information for language processing and speaker models for speech synthesis must be given, and there are many parameters for audio input, speech detection, and the acoustic models for speech recognition. We want Galatea Toolkit to handle these settings all together.
After all, this also seems to become "the hierarchy of the template," as sketched below.
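As a minimal sketch of what "the hierarchy of the template" could mean for settings (the parameter and file names are hypothetical), one master parameter set could be expanded into per-module configuration files, so that a single change propagates to every module consistently:

    require 'erb'

    # One master parameter set shared by all modules (hypothetical values).
    MASTER = {
      sampling_rate:  16_000,
      acoustic_model: "jnas-triphone",
      speaker_model:  "female01"
    }

    # Per-module configuration templates that draw on the master set.
    CONF_TEMPLATES = {
      "asr.conf" => "rate=<%= sampling_rate %>\nam=<%= acoustic_model %>\n",
      "tts.conf" => "rate=<%= sampling_rate %>\nvoice=<%= speaker_model %>\n"
    }

    CONF_TEMPLATES.each do |file, tmpl|
      puts "--- #{file} ---"
      puts ERB.new(tmpl).result_with_hash(MASTER)  # requires Ruby >= 2.5
    end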