Research on Effective Designs and Evaluation for Speech Interface Systems

This post is an excerpt from the draft version of the abstract of my doctoral dissertation.
My public hearing for the dissertation will be held this month at Waseda University.
Although the thesis itself is written in Japanese, I plan to write about the related topics here in English.

This paper describes a systematic way of enabling developers and designers to build information-communication systems successfully with speech technologies, such as speech synthesis and speech recognition. As a result of this work, applications of speech technologies can be made easy for everyone to use.

This work also describes four research projects, including the development of speech applications and the evaluation of speech interfaces, which were performed based on the proposed methodology.

Continue reading “Research on Effective Designs and Evaluation for Speech Interface Systems”

ICCHP 2010 talk

I gave my talk on audio CAPTCHAs at ICCHP in Vienna yesterday.

I enjoyed using Twitter during the conference. Thank you.

Dear new friends,
Usually I tweet in Japanese with my @nishimotz account, so please feel free to unfollow me. I prefer Facebook for English conversation, but both Facebook and my @nishimotz account on Twitter can be used for it.

My slides and my tweets at ICCHP are as follows: Takuya Nishimoto, Takayuki Watanabe: The Evaluations of Deletion-Based Method and Mixing-Based Method for Audio CAPTCHAs.

Notice (2010-10-16): An updated slide deck on this topic (for Interspeech in Sep. 2010) is available.

Continue reading “ICCHP 2010 talk”

DMCPP: development of another dialog manager

An experimental dialog manager for Galatea for Linux, written in C++ and using lib-julius and OpenCV, is under development.
The project focuses on low-level multimodal event handling, while Galatea Dialog Studio focuses on higher-level dialog management using VoiceXML.
Although the current version is just a skeleton application, I would like to ask developers for feedback.
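
To illustrate what low-level multimodal event handling means here, the following is a conceptual sketch in Python (DMCPP itself is written in C++): events from a speech recognizer and a camera arrive on a single queue and are dispatched to handlers. All names are hypothetical and do not reflect DMCPP's actual API.

```python
# Conceptual sketch: one queue for events from multiple modalities,
# dispatched to per-modality handlers (hypothetical names throughout).
import queue
from dataclasses import dataclass

@dataclass
class Event:
    modality: str  # e.g. "speech" (recognizer) or "vision" (camera)
    payload: str

events = queue.Queue()
handlers = {
    "speech": lambda e: print("recognized:", e.payload),
    "vision": lambda e: print("detected:", e.payload),
}

# In a real system, recognizer and camera threads would enqueue events.
events.put(Event("speech", "hello"))
events.put(Event("vision", "face detected"))

while not events.empty():
    event = events.get()
    handlers[event.modality](event)
```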

DMCPP in English
DMCPP in Japanese

Please join the mailing lists for discussions.

pyAA

To test the MSAA-related features of the Microsoft Japanese Input Method Editor (MS-IME 2002) on the Japanese version of Windows, I am working with pyAA.
This is preliminary work for the localization of NVDA for Japanese users.

I tried to adapt the original pyAA to Python 2.6.x.
First I obtained the source code (of the simpler branch) from the CVS repository, then built it with Visual Studio 2008 and SWIG. I also modified the code so that the Value property can be accessed correctly under multibyte character encoding environments.
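
For illustration, here is a minimal sketch of what reading the Value property looks like at the MSAA level. It uses ctypes and comtypes directly rather than pyAA itself, so it is a sketch of the underlying calls under stated assumptions, not pyAA's API; the constants are standard MSAA values.

```python
# Sketch: obtain IAccessible for the foreground window's client area and
# read the MSAA Name and Value properties (Windows only).
import ctypes
from ctypes import POINTER, byref
import comtypes.client

# Generate the COM wrapper for the MSAA type library in oleacc.dll.
comtypes.client.GetModule("oleacc.dll")
from comtypes.gen.Accessibility import IAccessible

OBJID_CLIENT = 0xFFFFFFFC  # the window's client area
CHILDID_SELF = 0           # the object itself, not one of its children

hwnd = ctypes.windll.user32.GetForegroundWindow()
acc = POINTER(IAccessible)()
ctypes.oledll.oleacc.AccessibleObjectFromWindow(
    hwnd, OBJID_CLIENT, byref(IAccessible._iid_), byref(acc))

# COM returns BSTR (UTF-16) strings, which is where multibyte text such as
# Japanese can be lost if the wrapper converts them incorrectly.
print(acc.accName(CHILDID_SELF))
print(acc.accValue(CHILDID_SELF))
```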

We are proceeding successfully with this work at the moment.
At the previous meeting of the NVDAjp project, we added some code to NVDA and verified that the WinEvents of MS-IME 2002 can be captured and that the Value property can be accessed through the IAccessible interface.
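
As a rough sketch of the WinEvent side (again with plain ctypes rather than pyAA, and simplified), capturing value-change events looks something like this:

```python
# Sketch: hook EVENT_OBJECT_VALUECHANGE with SetWinEventHook (Windows only).
# An out-of-context hook needs a message loop to receive callbacks.
import ctypes
import ctypes.wintypes as wt

EVENT_OBJECT_VALUECHANGE = 0x800E
WINEVENT_OUTOFCONTEXT = 0x0000

WinEventProc = ctypes.WINFUNCTYPE(
    None, wt.HANDLE, wt.DWORD, wt.HWND, wt.LONG, wt.LONG, wt.DWORD, wt.DWORD)

@WinEventProc
def on_event(hook, event, hwnd, obj_id, child_id, thread_id, timestamp):
    # Here one would fetch the IAccessible for (hwnd, obj_id, child_id)
    # and read its Value property, as in the snippet above.
    print("value changed on hwnd", hwnd)

hook = ctypes.windll.user32.SetWinEventHook(
    EVENT_OBJECT_VALUECHANGE, EVENT_OBJECT_VALUECHANGE,
    0, on_event, 0, 0, WINEVENT_OUTOFCONTEXT)

msg = wt.MSG()
while ctypes.windll.user32.GetMessageW(ctypes.byref(msg), 0, 0, 0):
    ctypes.windll.user32.TranslateMessage(ctypes.byref(msg))
    ctypes.windll.user32.DispatchMessageW(ctypes.byref(msg))
```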

Related pages in Japanese (not yet translated): pyaa and nvdajp

Notes on Feb 27: I created a GitHub repository for pyaa.

Voice interface and effectiveness

One of my colleagues gave a presentation at the Human-Agent Interaction symposium in Tokyo yesterday.

The assumption is that human-like spoken dialogs are highly effective. Our proposal is to use reinforcement learning to acquire a strategy for responding quickly to overlapped utterances, interruptions, or gestures during spoken dialogs between human and machine. Although the research is still at an early stage, we hope that something like mind-reading will become possible; in other words, users of spoken dialog systems will not need to say everything from beginning to end.
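
As a toy illustration of this idea, a tabular Q-learning loop can acquire a policy for when to respond immediately and when to wait. The states, actions, and reward function below are hypothetical stand-ins, not our actual experimental setup.

```python
# Toy Q-learning: decide whether to respond now or keep listening,
# given a coarse dialog state (hypothetical setup for illustration).
import random

STATES = ["user_speaking", "overlap", "silence"]
ACTIONS = ["respond_now", "wait"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def simulated_reward(state, action):
    # Stand-in for user satisfaction: interrupting a speaking user is
    # penalized, while answering promptly during silence is rewarded.
    if state == "silence" and action == "respond_now":
        return 1.0
    if state == "user_speaking" and action == "respond_now":
        return -1.0
    return 0.1 if action == "wait" else 0.0

for _ in range(5000):
    state = random.choice(STATES)
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                      # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit
    reward = simulated_reward(state, action)
    next_state = random.choice(STATES)                       # toy transitions
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```
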
Continue reading “Voice interface and effectiveness”

orpheus_tw

I am developing a service called orpheus_tw.
Japanese songs composed by the automatic composition system “Orpheus” (a research project at the University of Tokyo) can be shared with the followers of the Twitter account @orpheus_tw.

This service was built with Ruby on Rails and is hosted on Heroku. The additional “delayed job” option is also used.

Research on Spoken Dialogue Agent

The upcoming publications at the Human Agent Interaction Symposium (HAI2009) are as follows:

  • Masayuki Nakazawa, Takuya Nishimoto, Shigeki Sagayama:
    Title: Behavior Generation for Spoken Dialogue Agent by Dynamical Model
    Abstract: For the spoken dialog systems with the anthropomorphic agents, it is important to give the natural impressions and the real presence to human. For this purpose, the head and gaze controls of the agent which are consistent with the spoken dialogs are expected to be effective. Our approach is based on the following hypotheses: 1) An agent performs the dialog concurrently with the intentional controls of the head and gaze to retrieve the information and to give signals. 2) The movement of the head and eyeballs is based on mathematical models. To achieve these purpose, we have adopt the mathematical model for movements of the agent.
    There are several merits to formulate by the mathematical model, a) the parameters can reflect the subjectivity which can generate various movement from this model, b) the movements of the agents can reflect the personality, c) the continuous movements of the agent can be controlled by the mathematics. In this paper, we propose a mathematics model by the second order system and perform comparison with the linear model and show the superiority.
  • Di Lu, Masayuki Nakazawa, Takuya Nishimoto, Shigeki Sagayama:
    Title: Barge-in Control with Reinforcement Learning for Efficient Multi-modal Spoken Dialogue Agent
    Abstract: To make the dialogue between the agent and the user smoother, we propose a multi-modal user simulator that can be widely used for real-time agent control in multi-modal dialog agents with reinforcement learning. We also implemented a prototype system that utilizes the results of the reinforcement learning.
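
As a small illustration of the dynamical-model idea in the first paper above, here is a minimal sketch comparing a second-order system with a simple linear move for a single gaze angle. The parameter values (omega, zeta, speed) are illustrative assumptions, not the paper's settings.

```python
# Sketch: head/gaze trajectory from a second-order system
#   x'' + 2*zeta*omega*x' + omega^2*x = omega^2*target
# versus a constant-velocity linear move toward the same target.
def second_order_step(x, v, target, omega=8.0, zeta=1.0, dt=0.01):
    # Critically damped (zeta=1.0): smooth acceleration and deceleration.
    a = omega * omega * (target - x) - 2.0 * zeta * omega * v
    v += a * dt
    x += v * dt
    return x, v

def linear_step(x, target, speed=2.0, dt=0.01):
    # Moves at constant speed and stops abruptly at the target.
    step = speed * dt
    return x + max(-step, min(step, target - x))

x2, v2, x1, target = 0.0, 0.0, 0.0, 1.0  # gaze angle in arbitrary units
for _ in range(100):  # simulate one second at dt=0.01
    x2, v2 = second_order_step(x2, v2, target)
    x1 = linear_step(x1, target)
print(round(x2, 3), round(x1, 3))
```

The second-order trajectory eases in and out of the motion, which is what makes it look natural compared with the piecewise-linear one.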

Date: Fri, Dec 4 – Sat, Dec 5, 2009

Place: Tokyo Institute of Technology

Language: Japanese

A research on speech CAPTCHA systems

I am working on my presentation at the WIT/SP meeting as follows.

Title: The comparison between the deletion-based methods and the mixing-based methods for safe speech CAPTCHA systems

Authors: Takuya NISHIMOTO, Hitomi MATSUMURA and Takayuki WATANABE

Abstract: Speech-based CAPTCHA systems, which distinguish between software agents and human beings, are especially important for persons with visual disabilities. The popular approach is based on mixing-based methods, which use mixed sounds of target speech and noise. We have proposed a deletion-based method that uses the phonemic restoration effect. Our approach can control the difficulty of tasks simply by the masking ratio. Our design principle for CAPTCHAs insists that tasks should be chosen so as to provide a larger difference in performance between machines and human beings. In this paper, we give some hypotheses on the differences between the deletion-based method and the mixing-based methods. We also show a plan of experiments comparing the automatic speech recognition performance, speech intelligibility, and mental workload of the two approaches.
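
To make the two stimulus types concrete, here is a sketch on a synthetic waveform: the deletion-based method silences a fraction of each segment (the masking ratio), while the mixing-based method adds noise at a target SNR. The segment length and SNR values below are arbitrary illustrations, not our experimental settings.

```python
# Sketch: deletion-based vs. mixing-based degradation of a signal.
import numpy as np

rate = 16000
t = np.arange(rate) / rate
speech = np.sin(2 * np.pi * 440 * t)  # 1-second stand-in for a speech signal

def deletion_based(signal, masking_ratio=0.5, segment=800):
    # Zero out the leading fraction of each segment; the phonemic
    # restoration effect lets humans "fill in" the deleted portions.
    out = signal.copy()
    for start in range(0, len(out), segment):
        out[start:start + int(segment * masking_ratio)] = 0.0
    return out

def mixing_based(signal, snr_db=0.0):
    # Add white noise scaled so the signal-to-noise ratio equals snr_db.
    noise = np.random.randn(len(signal))
    scale = np.sqrt(np.mean(signal ** 2) /
                    (10 ** (snr_db / 10.0) * np.mean(noise ** 2)))
    return signal + scale * noise

deleted = deletion_based(speech, masking_ratio=0.5)  # difficulty: masking ratio
mixed = mixing_based(speech, snr_db=0.0)             # difficulty: SNR
```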

Date: Thu, Oct 29 – Fri, Oct 30, 2009

Place: ASPAM (Aomori City, Japan)

Language: Japanese

Japanese TTS for NVDA

Objectives
Many of the tools that visually impaired people use to access a PC and the Web are currently commercial software. This causes the following problems:

  • The financial cost.
  • It is difficult to respond flexibly and rapidly to changes in users' needs and OS environments.
  • The needs cannot be shared between Web developers and persons with visual disabilities. A tool with speech output is useful for verifying Web accessibility; however, many Web developers do not use one because such software is commercial.

In recent years, NVDA, an open-source screen reader for Windows, has been attracting attention.

Continue reading “Japanese TTS for NVDA”