Research on Spoken Dialogue Agent

The upcoming publications at the Human Agent Interaction Symposium (HAI2009) are as follows:

  • Masayuki Nakazawa, Takuya Nishimoto, Shigeki Sagayama:
    Title: Behavior Generation for Spoken Dialogue Agent by Dynamical Model
    Abstract: For spoken dialogue systems with anthropomorphic agents, it is important to give the user a natural impression and a sense of real presence. For this purpose, head and gaze control of the agent that is consistent with the spoken dialogue is expected to be effective. Our approach is based on the following hypotheses: 1) an agent carries out the dialogue concurrently with intentional control of its head and gaze, both to retrieve information and to give signals; 2) the movements of the head and eyeballs can be described by mathematical models. To this end, we adopt a mathematical model for the movements of the agent.
    Formulating the movements with a mathematical model has several merits: a) the parameters can reflect subjective impressions, so that various movements can be generated from the model; b) the movements of the agent can reflect personality; c) the continuous movements of the agent can be controlled mathematically. In this paper, we propose a mathematical model based on a second-order system, compare it with a linear model, and show its superiority.
  • Di Lu, Masayuki Nakazawa, Takuya Nishimoto, Shigeki Sagayama:
    Title: Barge-in Control with Reinforcement Learning for Efficient Multi-modal Spoken Dialogue Agent
    Abstract: To make the dialogue between the agent and the user smoother, we propose a multi-modal user simulator that can be widely used for real-time agent control in multi-modal dialogue agents with reinforcement learning. We also implemented a prototype system that utilizes the result of the reinforcement learning.
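The second-order system mentioned in the first abstract can be illustrated with a small simulation. The sketch below is only a minimal illustration of the general technique, assuming a damped second-order form x'' + 2ζωx' + ω²x = ω²·target driving a gaze or head angle toward a target; the function name and all parameter values are hypothetical and are not taken from the paper:

```python
# Illustrative sketch of head/gaze motion from a second-order system.
# The stiffness (omega) and damping (zeta) values here are assumptions
# for demonstration, not the parameters used in the paper.

def second_order_trajectory(target, x0=0.0, omega=8.0, zeta=0.7,
                            dt=0.01, steps=200):
    """Simulate x'' + 2*zeta*omega*x' + omega**2*x = omega**2*target
    with semi-implicit Euler integration; return the angle at each step."""
    x, v = x0, 0.0
    traj = []
    for _ in range(steps):
        a = omega**2 * (target - x) - 2 * zeta * omega * v
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

# e.g. turn the head from 0 to 30 degrees over 2 seconds of simulated time
traj = second_order_trajectory(target=30.0)
```

Varying ω (speed) and ζ (smoothness versus overshoot) is one way such a model can produce a family of movements from a small set of parameters, which is the kind of flexibility the abstract attributes to the mathematical formulation.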
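The second abstract combines a user simulator with reinforcement learning for barge-in control. As a rough illustration of that combination (not the system described in the paper), the following sketch trains a tabular Q-learner against a toy user simulator to decide whether the agent should yield the turn when the user barges in; all state, action, and reward definitions are assumptions made for this example:

```python
import random

# Toy barge-in control with tabular Q-learning against a user simulator.
# States, actions, rewards, and the 30% interruption rate are illustrative
# assumptions, not the design described in the paper.

ACTIONS = ("continue", "yield")
STATES = ("quiet", "barge_in")

def user_barges_in():
    """Toy user simulator: the user interrupts 30% of the time."""
    return random.random() < 0.3

def reward(state, action):
    # Yielding when the user barges in is rewarded (smooth turn-taking);
    # talking over the user, or stopping for no reason, is penalized.
    if state == "barge_in":
        return 1.0 if action == "yield" else -1.0
    return 1.0 if action == "continue" else -1.0

def train(episodes=2000, alpha=0.1, epsilon=0.1):
    random.seed(0)  # fixed seed for reproducibility
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = "barge_in" if user_barges_in() else "quiet"
        if random.random() < epsilon:          # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        q[(state, action)] += alpha * (reward(state, action) - q[(state, action)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

After training, the greedy policy yields the turn on a barge-in and keeps speaking otherwise; the learned table could then drive the agent in real time, which is the role the abstract assigns to the result of the reinforcement learning in the prototype system.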

Date: Fri, Dec 4 – Sat, Dec 5, 2009

Place: Tokyo Institute of Technology

Language: Japanese

Date: Thu, Oct 29 – Fri, Oct 30, 2009

Place: ASPAM (Aomori City, Japan)

Language: Japanese