PROGRAMME


Thursday, November 20:

Registration with coffee (8:30 - 9:45)

Welcome (9:45 - 10:15)

Session "Virtual Characters" (10:15 - 12:15)

    Invited Lecture:
      Building virtual actors who can really act (summary)
        Ken Perlin, New York University Media Research Laboratory and Center for Advanced Technology, U.S.A.
    Scientific Talks:

      The V-Man Project: towards autonomous virtual characters
        E. Menou, L. Philippon, S. Sanchez, J. Duchon, O. Balet
        Virtual Reality Dept., C-S, Toulouse, France

      Tell me that bit again... Bringing interactivity to a virtual storyteller

        André Silva, Guilherme Raimundo, Ana Paiva
        IST/INESC-ID, Lisbon, Portugal
    Demonstrations:
      A New Automated Workflow for 3D Character Creation Based on 3D Scanned Data
        Alexander Sibiryakov, Xiangyang Ju, Jean-Christophe Nebel
        Computing Science Department, University of Glasgow, U.K.

      Using motivation-driven continuous planning to control the behaviour of virtual agents
        N. Avradinis, R.S. Aylett, T. Panayiotopoulos
        Centre for Virtual Environments, University of Salford, U.K. / Knowledge Engineering Lab, University of Piraeus, Greece

Lunch (12:15 - 13:45)

Session "Narrativity and Authoring" (13:45 - 15:45)

    Scientific Talks:

      Authoring Highly Generative Interactive Drama
        Nicolas Szilas, Olivier Marty, Jean-Hughes Rety
        www.idtension.com, Paris, France / LSS ENS-EHESS, Paris, France / LINC - Lab. Paragraphe, Montreuil, France

      Character-focused Narrative Planning for Execution in Virtual Worlds
        Mark Riedl, R. Michael Young
        Liquid Narrative Group, Dept. of Computer Science, North Carolina State University, U.S.A.

      Managing Authorship in Plot Conduction
        Daniel Sobral, Isabel Machado, Ana Paiva
        INESC-ID, Lisboa, Portugal / IST - Technical University at Lisbon, Portugal

      Authoring Edutainment Stories for Online Players (AESOP): Introducing Gameplay into Interactive Dramas
        Barry G. Silverman, Michael Johns, Ransom Weaver, Joshua Mosley
        ACASA, University of Pennsylvania, U.S.A.
    Demonstrations:
      From the Necessity of Film Closure to Inherent VR Wideness
        Nelson Zagalo, Vasco Branco, Anthony Barker
        Dept. of Communication and Art, University of Aveiro, Portugal / Dept. of Languages and Cultures, University of Aveiro, Portugal

      Virtual StoryTelling: a Methodology for Developing Believable Communication Skills in Virtual Actors
        Sandrine Darcy, Julie Dugdale, Mehdi El Jed, Nico Pallamin, Bernard Pavard
        IRIT-GRIC, Toulouse, France

Session Poster / Demo 1 with coffee (15:45 - 16:30): 4 demos

Session "Mediation/Interface"

    Scientific Talks:
      Stories in Space: The Concept of the Story Map
        Michael Nitsche, Maureen Thomas
        Digital Studios for Research in Design, Visualisation and Communication, University of Cambridge, U.K.

      Mediating Action and Background Music
        Pietro Casella, Ana Paiva
        Instituto Superior Técnico, Intelligent Agents and Synthetic Characters Group, Lisboa, Portugal

      The Effects of Mediation in a Storytelling Virtual Environment
        Sarah Brown, Ilda Ladeira, Cara Winterbottom, Edwin Blake
        Collaborative Visual Computing Laboratory, University of Cape Town, Republic of South Africa
    Demonstrations:
      Context Design and Cinematic Mediation in Cuthbert Hall Virtual Environment
        Stanislav Roudavski, François Penz
        Digital Studios for Research in Design, Visualisation and Communication, Cambridge University, U.K.

      Group Interaction and VR Storytelling in Museums
        Raul Cid
        Barco Simulation Products, Kuurne, Belgium

      Beyond Human, Avatar as Multimedia Expression
        Ron Broglio, Steve Guynup
        Georgia Institute of Technology, U.S.A./Georgia State University, U.S.A.

Cocktail in the Salle des Illustres, City Hall (18:00 - 20:00)

Gala Dinner (20:00 - ...)


Friday, November 21:

Session "Real-Time" (8:30 - 10:30)

    Invited Lecture:
      Seizing the Power: Shaders and Storytellers (summary)
        Kevin Bjorke, Nvidia Corporation, U.S.A.
    Scientific Talks:
      Real-Time Lighting Design for Interactive Narrative
        Magy Seif El-Nasr, Ian Horswill
        Computer Science Dept., Northwestern University, U.S.A.

      Interactive Out-of-Core Visualisation of Very Large Landscapes on Commodity Graphics Platform
        P. Cignoni, F. Ganovelli, E. Gobbetti, F. Marton, F. Ponchio, R. Scopigno
        CRS4, Pula, Italy / ISTI-CNR, Pisa, Italy
    Demonstrations:
      A Cinematography System for Virtual Storytelling
        Nicolas Courty, Fabrice Lamarche, Stéphane Donikian, Eric Marchand
        IRISA, Rennes, France

Session Poster / Demo 2 with coffee (10:30 - 11:15): 4 demos

Session "Applications" (11:15 - 12:15)

    Scientific Talks:

      Using Virtual Reality for "New Clowns"

        Martin Hachet, Pascal Guitton
        LaBRI (Université de Bordeaux 1, ENSEIRB, CNRS) - INRIA, France

      Storytelling for Recreating Our Selves: ZENetic Computer
        Naoko Tosa, Koji Miyazaki, Hideki Murasato, Seigo Matsuoka
        Center for Advanced Visual Studies, MIT, U.S.A./ Japan Science Technology Corporation "Interaction & Intelligence"/ Adaptive Communications Research Laboratories, Japan / Editorial Engineering Laboratory, Japan
    Demonstrations:
      A Distributed Virtual Storytelling System for Firefighters Training
        Eric Perdigau, Patrice Torguet, Cédric Sanza, Jean-Pierre Jessel
        Computer Graphics and Virtual Reality Group, IRIT, Toulouse, France

      CITYCLUSTER "From the Renaissance to the Megabyte Networking Age" - A Virtual Reality & High Speed Networking Project.
        Franz Fischnaller
        Electronic Visualization Lab., University of Illinois at Chicago, U.S.A. / F.A.B.R.I.CATORS, Milan, Italy

      A Storytelling Concept for Digital Heritage Exchange in Virtual Environments
        Stefan Conrad, Ernst Kruijff, Martin Suttrop, Frank Hasenbrink, Alex Lechner
        Fraunhofer Institute for Media Communication, Dept. of Virtual Environments, Sankt Augustin, Germany / rmh, Köln, Germany / Vertigo Systems, Köln, Germany

Lunch (12:15 - 13:45)

Session "Mixed Reality" (13:45 - 14:45 & 15:30 - 17:00)

    Invited Lecture:
      The Art of Mixing Realities (summary)
        Sally Jane Norman, Ecole Supérieure de l'Image, Angoulême/Poitiers, France.

Session Poster / Demo 3 with coffee (14:45 - 15:30): 3 demos

    Scientific Talks:
      "Just Talking About Art" - Creating Virtual Storytelling Experiences in Mixed Reality
        Ulrike Spierling, Ido Iurgel
        FH Erfurt, University of Applied Sciences, Erfurt, Germany / Zentrum für Graphische Datenverarbeitung, Darmstadt, Germany

      Users Acting in Mixed Reality Interactive Storytelling
        Marc Cavazza, Olivier Martin, Fred Charles, Steven J. Mead, Xavier Marichal
        School of Computing and Mathematics, University of Teesside, U.K. / Laboratoire de Télécommunications et Télédétection, UCL, Belgium / Alterface, Louvain-la-Neuve, Belgium

      Is Seeing Touching? Mixed Reality Interaction and Involvement Modalities
        Alok Nandi, Xavier Marichal
        Alterface, Louvain-la-Neuve, Belgium

Prize Ceremony (17:00 - 17:15)

Keynote Speakers

    "Building virtual actors who can really act" by Prof. Ken Perlin


    Professor in the Department of Computer Science, and Director of the New York University Media Research Laboratory and Center for Advanced Technology.

    Ken Perlin's research interests include graphics, animation, and multimedia. In 2002 he received the NYC Mayor's award for excellence in Science and Technology and the Sokol award for outstanding Science faculty at NYU. In 1997 he won an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television.
    In 1991 he received a Presidential Young Investigator Award from the National Science Foundation.
    Dr. Perlin received his Ph.D. in Computer Science from New York University in 1986, and a B.A. in theoretical mathematics from Harvard University in 1979. He was Head of Software Development at R/Greenberg Associates in New York, NY from 1984 through 1987. Prior to that, from 1979 to 1984, he was the System Architect for computer-generated animation at Mathematical Applications Group, Inc., Elmsford, NY; TRON was the first movie on which he received screen credit. He has served on the Board of Directors of the New York chapter of ACM/SIGGRAPH, and currently serves on the Board of Directors of the New York Software Industry Association.


    "Seizing the Power: Animation and Realtime Hardware" by Kevin Bjorke


    Kevin Bjorke is a shading engineer and evangelist for developer marketing and content development at NVIDIA Corporation. He was intimately involved in the creation and use of the Cg shading language and the CgFX format for realtime shading, and has created numerous shaders, sample scenes, tutorials, educational talks, lab classes, online videos, and tools. He has visited and worked with a wide variety of production studios in both the film and game industries, and worked with NVIDIA's software and architecture groups to bring high-end 3D rendering capabilities to realtime hardware. He has lectured at many graphics developer events around the US and the world, including Siggraph 2002, Game Developers Conference 2003, Developer Deep Fry Austin, Iron Developer Tokyo, Gathering 2, and Dawn to Dusk London. He contributed to the book The Cg Tutorial, and worked directly with the providers of CAD and DCC tools (such as Maya, SolidWorks, 3DStudio Max, and SoftImage XSI) to ensure that high-quality realtime shading is available on every 3D artist's desktop around the world. Previously, he supervised lighting, camera work, shading and imaging for The Animatrix and Final Fantasy, and contributed similarly to the films A Bug's Life and Toy Story.


    "The Art of Mixing Realities" by Dr Sally Jane Norman


    Sally Jane Norman is a performing arts theorist and practitioner, Docteur d'état (Institute of Theatre Studies, Paris III), and the author of numerous papers, including new media studies commissioned by UNESCO and the French Ministry of Culture. She has been involved since 1996 in EU R&D programmes (ESPRIT and IST), and has directed experimental platforms testing the creative use of digital tools in live performance (International Institute of Puppetry, Charleville-Mézières; Studio for Electro-Instrumental Music, Amsterdam; Zentrum für Kunst und Medientechnologie, Karlsruhe; European Festival of Young Digital Creation, Valenciennes; Ecole supérieure de l'image, Angoulême). Director General of the Ecole supérieure de l'image, Angoulême/Poitiers, she directs the "Digital Arts" doctoral programme linking ESI with the Universities of Poitiers and La Rochelle in the Poitou-Charentes region.


Other Demonstrations:

Holografika's holographic display.

Artificial three-dimensional visualisation has existed for a long time: it is holography. This invention set the minimum requirements for 3D visualisation: viewers should see a 3D image on the screen just as they would see it in reality, and systems that cause any discomfort or restrain the viewer will not be broadly accepted. Several announcements have been made of the invention of the ultimate 3D display, but none of these are "true" 3D display solutions, since none of them complies with all of the following criteria:
  • No glasses are needed; the 3D image can be seen with the unassisted naked eye
  • Viewers can walk around the screen within a wide field of view, seeing objects and shadows move continuously as in normal perspective. It is even possible to look behind objects: hidden details appear while others disappear (motion parallax)
  • An unlimited number of viewers can simultaneously see the same 3D scene on the screen, each potentially seeing different details
  • Objects appear behind or even in front of the screen, as on holograms
  • No positioning or head tracking is applied
  • Spatial points are addressed individually
  • Objects can be animated
HoloVizio is the first and only operating 3D display that meets all the above requirements simultaneously.


Immersion will present the first laser device projecting information onto the retina

The Nomad™ Personal Display System offers users an entirely new level of man-machine interface. Worn in front of the eye, the Nomad System displays images and data that appear to the user to be floating directly in front of them: it is as if the very air before you became a 17-inch computer screen.
The Nomad™ System uses a laser-based light source to project images directly onto the user's retina. It superimposes data or images on what is viewed without hampering the user's vision, which is extremely advantageous to users who require access to information directly at their point of task. The Nomad™ System eliminates the viewing and performance limitations of large, bulky stationary computer monitors and of small, unreadable portable devices.


Immersion will demonstrate a 6x2 meter POWERwall powered by several genlocked NVIDIA Quadro FX 3000G boards

Immersion will present the latest version of the NVIDIA Quadro FX 3000G series with PowerWall capabilities.
Designed for full-scale models, the NVIDIA Quadro FX 3000 by PNY delivers advanced features for very high-resolution visualisation on wide screens.



F.A.B.R.I.CATORS's CityCluster: "From the Renaissance to the Gigabits Networking Age"

CITYCLUSTER is a virtual-reality networking framework with original technological features, navigation, interactivity and graphic style. The framework was developed according to a creative method tracing diverse concepts and systems: a collection of cities, urban ambiences and virtual spaces interconnected by a high-speed network, which enables participants in remote locations to interact in shared environments. The framework can be expanded, modified and enriched in accordance with the nature and typology of the environments to be incorporated.
With their virtual bodies, visitors become active protagonists in CityCluster's virtual terrain: free to communicate, intervene, share viewpoints, exchange knowledge, ideas, buildings and objects, build a new shared virtual ambience, recreate a new city, or design their own urban environment. Meta-Net-Page (MNP), a virtual-reality networking interface display, was designed and implemented ad hoc for CityCluster. It assists the visitor in finding and seeing things, keeps the visitor instantaneously informed of his current location, detects information, images and details that are invisible or intangible to the naked eye, and permits the user to "teleport" to the location shown on the view panel. In addition, MNP allows the user to "grab" a building shown within the panel and move it to another location, or even to another city, in real time over the net.
The first CITYCLUSTER virtual-reality networked application, "From the Renaissance to the Megabyte Networking Age", offers an actively creative experience in the language of interactive design through the use of multiple layers of interactive narrative. Visitors can experience an interactive journey departing from the time of the Renaissance and arriving at the super-broadband Networking and Electronic Age. For this application, two virtual-reality environments were created: Florence, metaphorically representing the "Renaissance Age", and Chicago, representing the "Gigabits Age". Each virtual city is inhabited by a group of avatars: David, Venus and Machiavelli in Florence, and Mega, Giga and Picasso in Chicago.


France Telecom's conversational agents.

The design of a 3D Embodied Conversational Agent with which one can interact in a natural way requires at least the following modules to run in real time: speech recognition, dialogue, text-to-speech (TTS) and animation engines.
We propose this kind of system, where all the modules are embedded in a generic network-based architecture. Speech recognition, dialogue and TTS run on a server, while the avatar animation engine and the audio rendering modules run on the client. The avatar animation is driven by speech phonemes provided by the TTS and by behavior tags provided by the dialogue engine. The modules running on the client have been designed to run on both PDAs and mobile phones.
We will present a demonstration with an interactive 3D Embodied Conversational Agent, Nestor, helping the user find a restaurant in Paris.
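The module split described above can be sketched roughly as follows. This is a minimal illustration, not France Telecom's implementation: all names (`ServerReply`, `server_turn`, `client_render`) are hypothetical, and the recognition, dialogue and TTS engines are stubbed out; only the division of labour between server (understanding and speech synthesis) and client (animation driven by phonemes and behavior tags) reflects the description.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ServerReply:
    """What the server sends back to the thin client after one dialogue turn."""
    text: str                        # dialogue engine's answer, to be spoken
    phonemes: List[Tuple[str, int]]  # (phoneme, duration in ms) from the TTS
    behaviors: List[str]             # behavior tags from the dialogue engine

def server_turn(user_utterance: str) -> ServerReply:
    """Server side: speech recognition output -> dialogue -> TTS (all stubbed)."""
    answer = f"I heard: {user_utterance}"                    # dialogue stub
    phonemes = [(ch, 80) for ch in answer if ch.isalpha()]   # TTS stub
    behaviors = ["smile"] if "restaurant" in user_utterance else ["nod"]
    return ServerReply(answer, phonemes, behaviors)

def client_render(reply: ServerReply) -> List[str]:
    """Client side: map phonemes to mouth shapes and apply behavior tags."""
    frames = [f"viseme:{p}" for p, _ in reply.phonemes]
    frames += [f"gesture:{b}" for b in reply.behaviors]
    return frames

frames = client_render(server_turn("find a restaurant"))
```

Keeping the heavy engines server-side and shipping only phonemes and tags over the network is what makes the client light enough for PDAs and mobile phones.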