Author: Le Quoc Anh
Email: email@example.com
Project: GV-Lex, Telecom-ParisTech/TSI, 2009
Homepage: http://www.tsi.enst.fr/~quoc/

Context:
The project takes place within the GV-Lex project, which aims to model a humanoid robot, NAO (developed by Aldebaran, http://www.aldebaran-robotics.com/), able to read a story in an expressive manner. The expressive gesture model will be based at first on an existing virtual agent system, the ECA system Greta:
The system takes as input a text to be said by the agent. The text is enriched with information on the manner in which it ought to be said (i.e., with which communicative acts). The behavioral engine selects the multimodal behaviors to display and synchronizes the verbal and nonverbal behaviors of the agent.
The objective of the project is to create an agent (be it virtual or physical) able to read a story expressively. While other partners of the GV-Lex project will deal with expressive voice, my work will focus on expressive behaviors. The work to be done concerns mainly the animation of the 3D Greta agent and of the humanoid robot NAO.
The respective animation modules for the humanoid robot and the virtual agent are script-based; that is, the animation is generated from a command language of the type 'move the right arm forward with the palm up'. Both languages should be made compatible, to ensure that they both respect the limitations of the robot's movement capabilities and are able to produce equivalent movements on the robot and on the virtual agent. A repertoire of gestures will be established.
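The idea of a shared command language realized by two different back-ends can be pictured as a symbolic gesture repertoire. The sketch below is purely illustrative: the class, dictionary entries, and joint names are assumptions for the example, not the actual Greta or NAO APIs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GestureCommand:
    """A symbolic gesture command, e.g. 'move the right arm forward, palm up'."""
    limb: str        # e.g. "right_arm"
    direction: str   # e.g. "forward"
    palm: str        # e.g. "up"

# A shared repertoire maps each symbolic command to back-end-specific
# realizations: a symbolic script fragment for the virtual agent, and a
# joint-angle keyframe for the robot (names here are hypothetical).
REPERTOIRE = {
    GestureCommand("right_arm", "forward", "up"): {
        "greta": "gesture=beat_right palm=up",
        "nao": [("RShoulderPitch", 0.3), ("RWristYaw", -1.2)],
    },
}

def realize(cmd: GestureCommand, backend: str):
    """Look up the back-end-specific realization of a symbolic command."""
    entry = REPERTOIRE.get(cmd)
    if entry is None:
        raise KeyError(f"command not in repertoire: {cmd}")
    return entry[backend]
```

Keeping the repertoire as a single table is one way to guarantee that a gesture only enters the system if both agents can realize it.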
The animation module for the virtual agent should be made expressive. A first approach has been implemented on the virtual agent: expressivity has been defined over six dimensions, namely overall activation, spatial extent, temporal extent, fluidity, power, and repetition. This model needs to be extended and refined to cover aspects of expressivity that have not been considered yet, such as tension or continuousness.
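To make the dimensions concrete, the toy sketch below shows how a few of them could modulate a gesture stroke: spatial extent scales amplitude, temporal extent scales duration, and repetition duplicates the stroke. The parameter names and the keyframe representation are assumptions for illustration, not the Greta implementation.

```python
from dataclasses import dataclass

@dataclass
class Expressivity:
    spatial_extent: float = 1.0   # amplitude scaling of the movement
    temporal_extent: float = 1.0  # duration scaling of the movement
    repetition: int = 1           # number of times the stroke is repeated

def modulate(keyframes, expr: Expressivity):
    """keyframes: list of (time, amplitude) pairs describing one stroke."""
    # Scale each keyframe in time and space.
    stroke = [(t * expr.temporal_extent, a * expr.spatial_extent)
              for t, a in keyframes]
    # Concatenate repeated copies of the stroke, shifting each in time.
    out, offset = [], 0.0
    for _ in range(expr.repetition):
        out += [(t + offset, a) for t, a in stroke]
        offset += stroke[-1][0] if stroke else 0.0
    return out
```

A wide, slow, repeated beat and a small, quick, single beat would then come from the same keyframes with different parameter values.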
Computer graphics aspects will be considered to study parametric surface deformation based on meshes. In particular, recent work on physics-based animation as well as variational shape modeling can be reused to improve the visual quality of the 3D Greta agent. This part of the work will involve GPU programming and realistic real-time rendering methods.
Finally, the gesture animation and expressivity models should be evaluated. An objective evaluation will be set up to measure the capabilities of the implementation. A subjective evaluation will be carried out to test how expressive the gesture animation is perceived to be on the robot and on the agent when reading a story.
Work to be done:
The control of the agent (be it virtual or physical) will be done through FML (Function Markup Language) and BML (Behavior Markup Language) [BML, TCF, NST]. The Greta agent is driven by these languages [GRE], and its behavior is described through a symbolic language. NAO's behavior, on the other hand, is keyframed: each keyframe is described by the values of all the articulators of the robot. The behavior of the robot will need to be made BML-compatible. The robot and the virtual agent do not have the same behavior capabilities; for example, the robot can move its legs and torso, but has no facial expressions and only very limited hand movements. To ensure that both agents can be driven by the same language and that their animations convey the same meaning, behavior invariants will be elaborated [CAS]. BML will be extended to include information related to these behavior invariants [SDG]. Moreover, the mapping between an element of FML (e.g., an emotional state) and behaviors will be extended to cover both the robot's and the virtual agent's repertoires.
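One way to picture the extended FML-to-behavior mapping is as a lexicon of candidate behaviors per communicative intention, filtered by each agent's physical capabilities. The sketch below uses invented intention, behavior, and modality names; it is a minimal illustration of the filtering idea, not the project's actual mapping.

```python
# Which modalities each agent can actually use (illustrative, per the
# description: NAO has legs and torso but no face and limited hands).
CAPABILITIES = {
    "greta": {"face", "arms", "hands", "torso"},
    "nao": {"arms", "torso", "legs"},
}

# Hypothetical lexicon: intention -> candidate behaviors, each tagged with
# the modality it requires.
INTENTION_LEXICON = {
    "joy": [("smile", "face"), ("arms_up", "arms"), ("torso_bounce", "torso")],
}

def select_behaviors(intention: str, agent: str):
    """Keep only the candidate behaviors the target agent can display."""
    caps = CAPABILITIES[agent]
    return [name for name, modality in INTENTION_LEXICON[intention]
            if modality in caps]
```

Under this view, a behavior invariant would be whatever the two filtered sets must have in common for the same intention to be conveyed on both platforms.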
The algorithm that selects the behaviors to display for a given communicative intention or emotional state (based on Greta's algorithm) will need to be further elaborated to include:
- synchronization mechanism: it will ensure that behaviors are tightly tied to speech.
- physical constraint mechanism: articulators need to be coordinated to obtain a given behavior (e.g., the torso follows the arm when it is stretched) [GGG]. This coordination will need to respect the physical constraints of the agent (more particularly of the robot).
- repair mechanism: this mechanism will be especially important in the case of the robot. It will ensure that the robot continues communicating even after an interruption (e.g., due to falling down after losing its equilibrium).
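The repair mechanism, in particular, can be sketched as a scheduler that re-attempts an interrupted behavior after recovery instead of aborting the whole utterance. This is a hypothetical design for illustration, not the project's implementation; `execute` stands in for whatever actually drives the robot.

```python
def play_sequence(behaviors, execute, max_retries=3):
    """Play behaviors in order; retry any that get interrupted.

    behaviors: list of behavior names.
    execute(name) -> True on success, False if interrupted
    (e.g. the robot fell and had to recover its equilibrium).
    """
    done = []
    for name in behaviors:
        for _attempt in range(max_retries):
            if execute(name):
                done.append(name)
                break
        else:
            # Could not repair within the retry budget: give up on this one.
            raise RuntimeError(f"could not repair behavior {name!r}")
    return done
```

A real version would also need the synchronization mechanism to re-align the resumed behavior with the ongoing speech, which this sketch ignores.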
The work will be evaluated to ensure that both agents convey similar information. An evaluation will also be conducted to check whether they are capable of reading a story expressively.
The last part of the work will focus more specifically on the geometry and animation of the 3D Greta agent. The idea is to define animation transfer methods from either captured or designed facial and body animation toward the agent. High-dimensional embedding [SDT] and variational deformation methods will be used to provide an intuitive interactive tool allowing new agents to be quickly generated from a carefully designed one and several different assets (faces, bodies, etc.). Recent advances in face picture generation will be generalized to the 3D case [VGN].