Mixed Reality in Education, Entertainment, and Training
Article in IEEE Computer Graphics and Applications · November 2005
DOI: 10.1109/MCG.2005.139 · Source: PubMed
Moving Mixed Reality into the Real World
Mixed Reality in Education, Entertainment, and Training
Charles E. Hughes, Christopher B. Stapleton, Darin E. Hughes, and Eileen M. Smith, University of Central Florida
Transferring research from the laboratory to mainstream applications requires the convergence of people, knowledge, and conventions from divergent disciplines. Solutions involve more than combining functional requirements and creative novelty. Transforming the technical capabilities of emerging mixed reality (MR) technology into mainstream products involves the integration and evolution of unproven systems. For example, real-world applications require complex scenarios (a content issue) involving an efficient iterative pipeline (a production issue) and driving the design of a story engine (a technical issue) that provides an adaptive experience with an after-action review process (a business issue).
A multidisciplinary research team transforms mixed reality technology into diverse applications used for military training, situational awareness, and community learning.

This article describes how a multidisciplinary research team transformed core MR technology and methods into diverse urban terrain applications. These applications are used for military training and situational awareness, as well as for community learning to significantly increase the entertainment, educational, and satisfaction levels of existing experiences in public venues.

Background
An MR experience is one where the user is placed in an interactive setting that is either real with virtual asset augmentation (augmented reality), or virtual with real-world augmentation (augmented virtuality).1 Figure 1 depicts these discrete points in the MR continuum. In previous projects, the Media Convergence Laboratory at the University of Central Florida has developed novel tools, displays, and interfaces to produce instances using each of these aspects of MR technology. The Mixed Reality Software Suite includes a special-effects engine that employs traditional scenography and show control technology from theme parks to integrate the physical realities of haptic and olfactory devices.2 The graphics and audio engines capture, render, and mix visual and audio content for both augmented reality and augmented virtuality.3 The Mixed Simulation Demo Dome introduces compelling augmented virtuality interface display devices to provide real group interaction within a virtual setting, seamlessly mixed with real and virtual objects.4 Traditional VR production techniques and tools help create the virtual assets. Users can then blend these assets with the real environment using real-time, realistic rendering algorithms.5

The key to creating an effective MR experience goes beyond Milgram's technology-centric continuum.1 The underlying story must draw on the user's imagination to complete the mixed fantasy continuum between the physical venue, the virtual medium, and the audience's interactive imagination.6 This latter requirement is what leaves a lasting impression of the experience with users. It is important in the fields of entertainment, training, and education, and is economically vital for any mainstream application or product.7 The ultimate criterion for judging an application's success is not a functional requirement, but the human impact measured by affective evaluation. Our research quest forced us out of the laboratory and into demanding experiential venues to evaluate this ultimate challenge. In particular, we were motivated to go beyond a one-time staged demonstration of MR, into settings where multiple uses occur without the intervention or oversight of our team members.
Filling in the gaps of MR
To illustrate our approach and accomplishments, we focus on two extreme experiences that are equally demanding, each presenting drastically different priorities and criteria for success. These tests validate both the functional demands and the effective range of our tools and techniques.
The MR for Military Operations in Urban Terrain (MR MOUT) is installed at the US Army's Research Development and Engineering Command (RDECom), where researchers evaluate the latest training technologies for transitioning into acquisition readiness. This application focuses on an extreme and complex layered representation of combat reality, using all the simulation domains (live, virtual, and constructive). It applies the advanced video see-through MR HMD from Canon8 and tracking technology from InterSense, integrated within a re-created city courtyard.
24
November/December 2005
Published by the IEEE Computer Society
0272-1716/05/$20.00 © 2005 IEEE
1 Milgram's continuum with examples from Media Convergence Laboratory projects: physical reality, augmented reality, augmented virtuality, and virtual reality.
MR Sea Creatures, on the other hand, deals with the engagement of a child's insatiable curiosity within a museum. The goal is to make the static contents of the museum come to life, leading to a continually evolving adventure in science. We developed this experience as an experiment to broaden the content and appeal of an existing dinosaur exhibit at the Orlando Science Center. Unlike desktop-based, optical see-through enhancements,9 our approach deals with large spaces (entire venues). For operational and economic necessity, we replaced the advanced HMD and tracking in this system with an MR portal (a fixed Elumens Vision Station equipped with a camera with a fish-eye lens and special multisensory effects) to capture and engage the entire space. Unlike typical digital technology used in science centers, this effect does not separate the dynamic media from the immersive venue.

Most MR experiences are about the visual domain, with the principal differentiation being between display types.1,8 The primary scientific and technical issues generally center on tracking, registration, and rendering. Our emphasis is to supply the missing parts to produce a full spectrum of multisensory, nonlinear, immersive MR experiences. Our goal is to give as much attention to the audio as to the visual, with special effects currently addressing the olfactory and tactile senses. Such effects include water vapor to simulate steam and smoke; servomechanisms to cause objects to act as if they have been hit, for instance by being bumped by a virtual character; and haptic vests, vibrating devices, and shakers to provide tactile stimulation for the users.

Figure 2 depicts a roadmap for MR, showing the key areas of development that we identified as critical to providing a seamless MR experience. The areas we recognized and have sought to address include a creative pipeline, MR audio capture and production tools, and a delivery suite that can aid in the transfer of this emerging technology to practical commercial use. We measure how well we achieve our goals by the affective evaluations conducted on each project. MR Sea Creatures preliminary evaluations are complete and encouraging. RDECom and the Army Research Institute are presently conducting equivalent evaluations of MR MOUT.

2 Mixed reality development roadmap.
Virtual production: asset production (visual tools, audio tools*, haptic tools*, olfactory tools*), audio*/visual editing, and story scripting* tools.
Real-time reality mixing: graphics engine, audio engine*, special-effects engine*, and story engine*.
Venue reality: sensory capture and sensory augmentation (visual displays, audio displays*, haptic displays*, olfactory displays*), with affective evaluation*.
(* Critical improvements needed to advance MR experiences.)

MR Sea Creatures: Flights of fancy
The Sea Creatures experience begins with the reality of the Orlando Science Center's DinoDigs exhibition hall (see Figure 3a), which presents fossils of marine reptiles and fish in an elegant, uncluttered environment. As visitors approach a spherical screen and projector beyond a scenic MR portal at one end of the exhibit's venue, a virtual guide walks onto the screen and welcomes visitors to an underwater journey. When the guide finishes telling the back story, the museum venue, as seen in the dome exhibit, appears to be flooding with water, and visitors experience the virtual Cretaceous environment as fossils come alive (see Figure 3b). Guests navigate a rover vehicle through the ocean environment to explore and collect specimens of the vegetation, reptiles, and fish. The augmented reality experience includes an embedded VR window, showing the rover's virtual point of view of the sea life that augments the DinoDigs exhibit (see Figure 3c).

As the experience winds down, the water recedes within the dome, and the unaugmented science center hall begins to emerge again. At the point where the water is about head high, a group of pterodactyls flies overhead, only to have the straggler snagged by a tylosaur leaping out of the water (Figure 3d). Holding the pterodactyl in its mouth, the tylosaur settles back down to the ocean floor. A walk through the exhibit space (the real exhibit) reveals that the tylosaur fossil is there with a pterodactyl fossil in its mouth. This connection of the MR experience back to the pure real experience is intended to imply unbounded pathways to explore the museum content through MR experiences layered within a social and immersive physical surrounding. This physical and interpersonal interaction is intended to permanently bind the experience to the visitor's mind.

3 Cretaceous life at Orlando Science Center: (a) physical reality, (b) MR dome, (c) augmented reality, and (d) tylosaur lunch.
Because the purpose of a free-learning education experience is to inspire curiosity, create a positive attitude toward the topic, and engage the visitor in a memorable experience that inspires discussion long after the encounter, we conducted an affective evaluation to test whether this installation met these criteria. We were particularly interested in whether the experience led to deeper learning, better entertainment, more return visits, and an increased interest in exploring the subject matter. We developed a questionnaire that was given to visitors during the three weeks we had MR Sea Creatures installed. Table 1 shows those results. What is critical to recognize from this data is that, for 98 percent of visitors, the MR experience encouraged them to spend more time in the hall. More than 80 percent of the visitors queried noted that this exhibit encouraged repeat
visits. Furthermore, approximately the same number of guests noted that they would visit similar exhibits. Importantly, visitors also felt that the experience improved their understanding of the Cretaceous period. In summary, these preliminary data strongly support our contention that MR substantially augments the overall experience and encourages repeat visits.
These results were made possible by an iterative production process that allowed subject matter expertise, play testing, and artistic conventions to be employed to overcome technical deficiencies. The MR portal's limited visual representation was expanded with 3D surround sound that incorporated novel capture, rendering, and display technology. This increased the interactive group's (visitors') immersion in a shared space and provided a heightened sense of presence beyond the projected video display, deemphasizing the fidelity limitations of the visual projection system. We created a subtle ambient environment employing underwater audio, acquired using unique surround capture tools. We then rendered these effects on hybrid audio surround displays that grabbed the group's attention without the isolating effects of earphones. The creative mixing and design system allowed for subtle effects that complemented the visual exhibit and drew the visitors' attention without increasing the dissonance that would have been caused by a higher volume. The audio provided the emotional and immersive qualities normally attributed to subtle artistic techniques used in motion pictures and theme parks.

Table 1. User reactions to the MR Sea Creatures experience (percentages of respondents).

Statement                               Strongly agree   Agree   Neutral   Disagree   Strongly disagree
Encouraged longer time                        34           64        2         0              0
Encouraged repeat visits                      25           59       16         0              0
Learned more about Cretaceous period          20           63       16         2              0
Visit similar exhibits                        22           66        8         2              2
Entertaining experience                       35           53       10         2              0
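The agreement figures cited in the discussion of Table 1 can be recomputed directly by summing the "strongly agree" and "agree" columns (a top-two-box score). The sketch below simply re-enters the percentages as printed in Table 1:

```python
# Top-two-box agreement computed from the Table 1 percentages (as printed).
# Row values: (strongly agree, agree, neutral, disagree, strongly disagree).
responses = {
    "Encouraged longer time":               (34, 64, 2, 0, 0),
    "Encouraged repeat visits":             (25, 59, 16, 0, 0),
    "Learned more about Cretaceous period": (20, 63, 16, 2, 0),
    "Visit similar exhibits":               (22, 66, 8, 2, 2),
    "Entertaining experience":              (35, 53, 10, 2, 0),
}

def agreement(row):
    """Percentage who chose 'strongly agree' or 'agree'."""
    return row[0] + row[1]

for statement, row in responses.items():
    print(f"{statement}: {agreement(row)}%")
```

Running this reproduces the 98 percent figure for longer hall visits and the 84 percent figure for repeat visits quoted in the text.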
MR MOUT: Extreme reality
The MR MOUT testbed is a training simulation using a video see-through HMD and a re-creation of urban facades that represent a 360-degree mini MOUT site. We extended the realism to a four-city-block situational awareness experience using computer-generated environments; characters; props; a realistic gun or other tracked interface device; and real lights, crates, and walls. Standing inside the mini MOUT creates the sense of ultimate combat reality (and nightmare) of a soldier dismounted from a vehicle who is open to attack on all sides and from high up.
Using a combination of bluescreen technology and occlusion models, we developed a script that directs our software system to layer and blend the real and virtual elements into a rich MOUT site. The trainee can move around the courtyard and hide behind objects, with real and virtual players popping out from portals to engage in close-combat battle. The most effective and powerful result of this MR training is that the virtual characters can occupy the same complex terrain as the trainee. The trainees can literally play hide-and-seek with a virtual soldier, thereby leveraging the compelling nature of passive haptics.
Figure 4a shows the mini MOUT without virtual assets. Figure 4b shows an MR version of this testbed, along with environmental models (sky, clouds, and buildings in the distance). This is the view the trainee sees through the HMD. The models that match physical assets are not displayed, but are used for occlusion (they clip the rendered images of characters inside the buildings, allowing us to see only those parts that are visible through portals). The environmental models are displayed, thereby completing the visual landscape. Show action control completes the effects to create realistic combat scenarios where the real world around the trainee feels physically responsive. This is done using computers to control lights, doors, window shutters, blinds, and other types of on/off or modulated actions. The system can monitor the trainee by an interface device, such as a special gun with tracking, and then change the real environment based on the users behavior. For example, the lights on buildings can be shot out, resulting in audio feedback (the gunshot and shattered glass sounds) and visual changes (the real lights go out).
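The occlusion mechanism described above can be illustrated with a toy per-pixel compositor: invisible "phantom" models of the real walls contribute depth but no color, so a virtual character standing behind a real facade is clipped, while the see-through camera's video shows through. This is a simplified sketch, not the MRSS rendering pipeline; all scene values are invented for illustration.

```python
# Toy per-pixel compositor illustrating occlusion by phantom models:
# geometry matching the real walls is written to the depth buffer but not
# to color, so virtual pixels farther away than the real surface are
# discarded and the camera video of the real world shows through instead.
# Scene values below are invented for illustration.

def composite(video_px, virtual_px, virtual_depth, phantom_depth):
    """Return the displayed pixel for one HMD pixel.

    video_px      -- color from the see-through camera (the real world)
    virtual_px    -- color of the virtual character at this pixel (or None)
    virtual_depth -- distance of the virtual surface from the eye
    phantom_depth -- depth of the invisible model of the real surface
                     (float('inf') where no real geometry was modeled)
    """
    if virtual_px is not None and virtual_depth < phantom_depth:
        return virtual_px   # virtual character is in front: draw it
    return video_px         # otherwise the real video wins

wall_depth = 3.0            # phantom wall 3 m from the eye
# A virtual soldier 5 m away is hidden by the wall...
assert composite("brick", "soldier", 5.0, wall_depth) == "brick"
# ...but visible through a portal where no phantom geometry exists.
assert composite("sky", "soldier", 5.0, float("inf")) == "soldier"
```

In a real renderer the same effect falls out of an ordinary depth test once the phantom geometry is drawn with color writes disabled.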
With all the compelling visual and haptic effects, this training can provide a competitive edge due to a heightened acoustical situational awareness.10 You can't see or feel through walls, around corners, or behind your head. However, your ears can perceive activity where you cannot see it. In urban combat, where a response to a threat is measured in seconds, realistic audio representation is vital to creating a combat simulation and to training soldiers in basic tactics. Standard 3D audio with earphones shuts out critical real-world sounds, such as a companion's voice or a radio call. The typical surround audio is still 2D, with audio assets designed for a desktop video game that tend to flatten the acoustical capture. Our system temporally and spatially synchronizes audio, leading to an immersive experience. This is especially important in military training, where sensory overload prepares the soldier for the real battlefield.

4 Views of MR MOUT: (a) physical reality and (b) augmented reality.

Iterative MR production pipeline
While creating MR Timeportal for Siggraph 2003 (an experience extending Canon Mixed Reality Laboratory's groundbreaking work on AquaGauntlet entertainment8), our team devised an iterative MR production pipeline for developing scenario content, incorporating interdisciplinary collaboration at each iteration of development. Taking our cue from the pipeline used in the film industry and the experimental development of off-Broadway theater, our approach allows the artistic and the programming teams to move forward in interdependent, parallel steps, going from the concept to the delivery of an MR scenario.
Consensus: Eyes on the prize
The process starts with graphic preproduction that includes the written story and a rapidly produced animatic. The video animatic is a simple visual rendering of the story from a single point of view following the participant's emotional journey. Its purpose is to communicate the vision of the creative team and confirm the key stakeholders' expectations. This then allows the subject matter expert, art director, audio producer, and lead programmer to effectively exchange ideas and determine each team's focus, seeking to avoid compromise and achieve a compelling creative solution. This supports an early determination as to whether a perceived problem has a technical, scientific, artistic, or programmatic solution.
Asset production: Into the third dimension
Once the animatic is presented and the behaviors are agreed upon, the artists can begin creating high-quality virtual assets (CG models, textures, animations, images, and videos) in parallel. Concurrently, the programmers implement a first-cut virtual 3D experience using the preliminary models developed for the animatic. Similarly, the audio producer creates and/or captures appropriate ambient, 3D, and point-source sounds. Typically these tasks take about the same amount of time. This represents the 3D prototype stage, where alternative scripts, effects, and viewpoints are tested to validate the big-picture scope.
Scenario design: Into real time
The next step is to validate the cause-and-effect interaction with more complex, compounded action scripts playing out behavioral and environmental assumptions. Before enhancing the virtual world with the new artistic creations, a purely virtual version of the scenario needs affective evaluation. This is where we view and hear the scene from multiple angles and positions. This bird's-eye view provides us with the equivalent of a virtual camera that can move around the environment in real time to see every aspect and interaction point in the scenario. This allows the teams to see problems and prioritize solutions now, rather than after creating the full MR experience. The content and story are evaluated, and decisions are made that improve the scenario's playability. The art, audio, and programming teams then continue to work on their respective areas, addressing the issues that were raised at this stage.
Game design: Play ball
The next step is the interactive scenario. This is a version of the scenario implementation that is interactive and nonlinear, but still completely virtual. This is the final time to make minor changes and tweaks to the interaction, story, and technology.
Technical rehearsal: Melting the boundaries
The last step is integration. If all of the previous stages have been followed, there should be no major surprises. This is the step in which the entire team needs to be involved, from the programmers to the artists to the audio producers. All the pieces (audio, graphics, special effects, and story) of the MR scenario come together now without the confusion of rendering, tracking, story, or game issues. The key at this stage is to recognize and make the subtle tweaks that turn an interesting experience into a compelling one. This occurs in the refinement of the detail where the real meets the virtual. If the system is well designed, these changes are made through GUI interfaces, for example, to adjust chroma keys or to reposition virtual assets to interact more dramatically or precisely with real elements.
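The chroma-key adjustment made at this stage amounts to tuning a color-distance threshold that decides which camera pixels count as bluescreen. The sketch below illustrates the idea; the key color and tolerance are illustrative values, not the production settings:

```python
# Minimal chroma-key matte: pixels close to the key color (in RGB distance)
# are treated as bluescreen and replaced by the virtual layer. The key
# color and tolerance are the kinds of values tuned through the GUI during
# technical rehearsal; the numbers here are illustrative only.

def keyed(real_px, virtual_px, key=(0, 0, 255), tolerance=80):
    """Return the virtual pixel where the camera sees bluescreen."""
    dist = sum((a - b) ** 2 for a, b in zip(real_px, key)) ** 0.5
    return virtual_px if dist < tolerance else real_px

# A slightly off-blue screen pixel lets the virtual scene show through...
assert keyed((10, 20, 240), "virtual") == "virtual"
# ...while a non-blue pixel (a real prop or trainee) stays real.
assert keyed((200, 150, 120), "virtual") == (200, 150, 120)
```

Raising the tolerance widens the matte (risking holes in real objects); lowering it leaves blue fringes, which is why interactive adjustment matters.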
Production and delivery tools
Our modeling team uses standard tool sets, particularly Autodesk 3ds Max with its comprehensive set of plug-ins. The programming team uses Eclipse (http://www.eclipse.org) and Microsoft Visual Studio .NET as its development environments. Audio production and story development use homegrown tools.
Our focus on creating original audio production tools was driven by the limited capabilities of existing tools. These limitations exist partially because venues such as games and VR systems, with their constrained movement, can focus on delivering audio to select sweet spots. With MR, while we can assume some constraints on a person's motion, these constraints do not lead to any single sweet spot. Moreover, since MR is integrated into the real world, an MR audio system must perform within the constraints on speaker placement imposed by the physical environment. This is especially true when we wish to install a single experience in diverse settings such as museums.
The audio landscape
Our experience with mainstream audiences is that the standard for a compelling story is cinematic expression. In films, audio is the equal of visuals. In contrast, audio production in virtual environments is rarely given the attention it deserves. This is unfortunate, as sound can travel through walls and around corners, providing information that is well out of the line of sight. Additionally, audio plays a crucial role in environmental awareness, immersion, and presence, and is essential in most forms of communication. Adequate methods of capturing, synthesizing, mixing and mastering, designing and integrating, and delivering soundscapes are crucial to creating immersive and compelling experiences.
We are currently using a Holophone (http://www.holophone.com) to capture accurate 3D soundscapes to produce a sense of subtle environmental presence. The Holophone H2 is an eight-channel microphone that captures audio in 360 degrees and on the vertical plane. This capture device can record a fairly accurate spatial impression and has the advantage of portability and ease of use. It is especially useful in capturing general ambience for military simulations, for example, MR MOUT, where an accurate sound field is required.
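Delivering such a multichannel capture over an installed speaker array reduces, at its simplest, to a gain matrix that routes each recorded channel to each speaker. The sketch below is a generic illustration of that routing; the channel names and gain values are assumptions for this example, not the Holophone H2's documented layout or the values used in our installations:

```python
# Sketch of routing a multichannel surround capture to an installed
# speaker array with a gain matrix. Channel names and gains are invented
# for illustration; they are not the Holophone H2 layout or production
# mix values.

capture_channels = ["front_L", "front_R", "rear_L", "rear_R"]
speakers = ["dome_L", "dome_R"]

# gains[s][c]: how much of capture channel c feeds speaker s
gains = {
    "dome_L": {"front_L": 1.0, "front_R": 0.0, "rear_L": 0.7, "rear_R": 0.0},
    "dome_R": {"front_L": 0.0, "front_R": 1.0, "rear_L": 0.0, "rear_R": 0.7},
}

def route(frame):
    """Mix one frame of capture samples {channel: sample} to the speakers."""
    return {s: sum(gains[s][c] * frame[c] for c in capture_channels)
            for s in speakers}

out = route({"front_L": 0.5, "front_R": 0.0, "rear_L": 0.1, "rear_R": 0.0})
assert abs(out["dome_L"] - 0.57) < 1e-9 and out["dome_R"] == 0.0
```

Because speaker placement is dictated by the venue, the matrix, rather than a fixed surround standard, is what gets tuned per installation.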
For entertainment or artistic applications, more experimental techniques can be used. One such technique is spatial scaling, whereby individual microphones can be placed over extended distances. The effect created by this technique is unlike anything that human ears are accustomed to hearing. Where a car passing by might take only a few seconds, this capture technique stretches the spatial image out over a longer time span, avoiding changes in pitch or additional audio artifacts. Likewise, the reverse can be done with miniature microphones embedded in places such as ant mounds. These experimental techniques might create perceptual changes that can alter the emotional experience of a simulation. The novel use of captured sound from foley sessions (audio effects capture for cinema) can provide a more real-than-real experience when the captured audio is played out of context for heightened emotional effect. The classic example is how a lion's roar was used in an Indiana Jones movie to intensify the turning of a steering wheel in a climactic action.

Transducers are a particularly useful tool for capturing the direct vibrations of an object. This is convenient when either the desired source does not transmit a sufficient signal for standard microphone capture or when the sound designer is interested in capturing only the direct vibration of a source and not the sound it creates in air. Dramatic effects can be created when transducer capture is delivered through a subwoofer or bass shaker beneath a user in an interactive experience. This provides the artistic variable of scale to audio for unique effects. Details of our mixing are published elsewhere.10

Delivery system: MR Software Suite
The visual compositing and blending of real and virtual objects requires an analysis and understanding of the real objects so that proper relative placement, interocclusion, illumination, and intershadowing can occur. We assume here that this is possible in real time, and so our efforts focus on the delivery of the experience.5,11,12 The key areas that we address are the

■ layering of the real and the simulated,
■ interaction of dynamic agents, and
■ integration of multiple synchronized senses in real time.

The MR Software Suite (MRSS) acts as the development and delivery system for the MR experience.2 It integrates a collection of concurrent cooperating components. The central component is the MR story engine, a container for agents (actors): one for every user; virtual and real objects that interact with other agents; plus abstract agents that might be useful for the story line. The agents manage the story semantics in that their states and behaviors determine what a user sees, hears, and feels. Environments that are candidates for using the MRSS include those delivering interactive storytelling experiences, collaborative design environments, technology demonstrations, games, and educational or training programs. In other words, the MRSS is not strictly tied to MR, although that's the focus here.

The MRSS consists of four subsystems called engines (see Figure 5a). Three rendering engines deliver the multimodal simulation (visual, audio, and special effects), while a fourth engine drives the integration and creates an interactive, nonlinear scenario. These engines are interoperable with other commercial components. The central networking protocols receive and integrate sensor data such as tracking, registration, and orientation (see Figure 5b) with input from other sources, for example, artificial intelligence and specialized physics engines (see Figure 5c), and execute a nonlinear, interactive script (see Figure 5d). This then produces a multimodal simulation situated within real-world conditions based on the rendering and display technology available (see Figure 5e). The experience is captured (by means of multimodal capture, physiological monitoring, and so on) for human performance evaluation, aggregate empirical data, and playback (see Figure 5f). The scenario is then adapted based on the conditions and actions of the participants, whether human, robotic, or virtual (see Figure 5e).

5 Flow of major Mixed Reality Software Suite components: (a) Mixed Reality Software Suite (graphics, audio, special-effects, and story engines), (b) sensor data, (c) plug-ins, (d) scripted behaviors, (e) multisensory augmentation, and (f) experience capture.

The key technologies used in this MR system are Open Scene Graph and Cal3D for graphics, Port Audio for sound, and a DMX (http://www.usitt.org/standards/DMX512.html) chain for talking to special-effects devices. Our network protocol is built on top of TCP/IP. Authoring of stories is done in XML, which can include C- or Java-style advanced scripting. The MR system can run stand-alone (one user) or in combination with multiple MR systems (each managing one or more users). Thus, the system can be configured for collaboration. In this context, users see each other as real people in a common setting, while interacting with each other, real props, virtual characters, and virtual props.

Next-generation MR
Every new medium has required decades to overcome its limitations before being transformed from a laboratory invention into mainstream innovation. This is achieved through the maturation of creative conventions and experimentation with real-world applications. To achieve this desired maturation with MR, we must recognize that it is an expressive medium and not just a rendering tool. MR's success will come about not only by advancing technological capabilities, but also by exploiting creative possibilities. We will need to make creative leaps in how we define the breadth and depth of the next generation of MR in terms of art, science, and business before we will transform the laboratory application of MR from novelty to true real-world innovation. This is something we have sought to achieve in every case study reported here.
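As a concrete footnote to the MRSS description: the article does not show an actual story script, so the fragment below is a hypothetical illustration of XML story authoring in the style described, with agents carrying trigger-to-action rules that a story engine dispatches. The element and attribute names are invented for this sketch; they are not the MRSS schema.

```python
# Hypothetical illustration of XML story authoring in the style described
# in the Delivery system section: agents carry trigger -> action rules
# that a story engine evaluates. Element and attribute names are invented;
# they are not the actual MRSS schema.
import xml.etree.ElementTree as ET

SCENARIO = """
<scenario name="courtyard">
  <agent id="light_1" kind="real-prop">
    <rule trigger="shot" action="turn_off"/>
  </agent>
  <agent id="sniper" kind="virtual-character">
    <rule trigger="trainee_visible" action="fire"/>
    <rule trigger="shot" action="fall"/>
  </agent>
</scenario>
"""

def build_rules(xml_text):
    """Map agent id -> {trigger: action} from the scenario script."""
    root = ET.fromstring(xml_text)
    return {agent.get("id"): {r.get("trigger"): r.get("action")
                              for r in agent.findall("rule")}
            for agent in root.findall("agent")}

rules = build_rules(SCENARIO)
# A tracked gunshot event is dispatched to every agent that handles it:
# the real light turns off (via show control) and the virtual sniper falls.
assert rules["light_1"]["shot"] == "turn_off"
assert rules["sniper"]["trainee_visible"] == "fire"
```

Declarative rules like these are what make the scenario editable through tools rather than recompilation, matching the article's emphasis on an iterative pipeline.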
Acknowledgments
The research reported here is in participation with the Research in Augmented and Virtual Environments program supported by the Naval Research Laboratory VR Laboratory. The MR MOUT effort is funded by the US Army's Science and Technology Objective, Embedded Training for Dismounted Soldier, at RDECom. Special thanks to the Mixed Reality Laboratory, Canon, for their generous support and technical assistance. The work of Darin Hughes was supported in part by a fellowship from the Interdisciplinary Information Science Laboratory of the University of Central Florida's College of Engineering and Computer Science. Major contributions were made to this effort by Scott Malo, Matthew O'Connor, Shane Taber, Theo Quarles, Nathan Selikoff, Nick Beato, and Scott Vogelpohl.
References
1. P. Milgram and F. Kishino, "A Taxonomy of Mixed Reality Visual Displays," IEICE Trans. Information and Systems, vol. E77-D, no. 12, 1994, pp. 1321-1329.
2. M. O'Connor and C.E. Hughes, "Authoring and Delivering Mixed Reality Experiences," Proc. 2005 Int'l Conf. Human-Computer Interface Advances in Modeling and Simulation, Soc. for Modeling and Simulation Int'l, 2005, pp. 33-39.
3. D.E. Hughes, "Defining an Audio Pipeline for Mixed Reality," Proc. HCI Int'l, CD-ROM, Lawrence Erlbaum Assoc., 2005.
4. C.E. Hughes and C.B. Stapleton, "The Shared Imagination: Creative Collaboration in Augmented Virtuality," Proc. HCI Int'l, CD-ROM, Lawrence Erlbaum Assoc., 2005.
5. J. Konttinen, C.E. Hughes, and S.N. Pattanaik, "The Future of Mixed Reality: Issues in Illumination and Shadows," J. Defense Modeling and Simulation, vol. 2, no. 1, 2005, pp. 51-59.
6. C.B. Stapleton and C.E. Hughes, "Interactive Imagination: Tapping the Emotions through Interactive Story for Compelling Simulations," IEEE Computer Graphics and Applications, vol. 24, no. 5, 2003, pp. 11-15.
7. C.B. Stapleton and C.E. Hughes, "Mixed Reality and Experiential Movie Trailers: Combining Emotions and Immersion to Innovate Entertainment Marketing," Proc. 2005 Int'l Conf. Human-Computer Interface Advances in Modeling and Simulation, Soc. for Modeling and Simulation Int'l, 2005, pp. 40-48.
8. S. Uchiyama et al., "MR Platform: A Basic Body on which Mixed Reality Applications Are Built," Proc. Int'l Symp. Mixed and Augmented Reality (ISMAR), IEEE CS Press, 2002, pp. 246-256.
9. S. Beckhaus, F. Ledermann, and O. Bimber, "Storytelling and Content Presentation with the Virtual Showcase in a Museum Context," Proc. Int'l Committee for Documentation, Int'l Council of Museums, 2003; http://viswiz.gmd.de/~steffi/Publications/Cidoc2003.pdf.
10. D.E. Hughes et al., "Spatial Perception and Expectation: Factors in Acoustical Awareness for MOUT Training," Proc. Army Science Conf. (ASC), CD-ROM, US Army; http://www.asc2004.com/Manuscripts/sessionI/IO-05.pdf.
11. M. Haller, S. Drab, and W. Hartmann, "A Real-Time Shadow Approach for an Augmented Reality Application Using Shadow Volumes," Proc. ACM Symp. Virtual Reality Software and Technology (VRST), ACM Press, 2003, pp. 56-65.
12. M. Nijasure, S.N. Pattanaik, and V. Goel, "Interactive Global Illumination in Dynamic Environments Using Commodity Graphics Hardware," Proc. Pacific Graphics, vol. 11, Academic Press Professionals, 2003, pp. 450-454.
Charles E. Hughes is a professor of computer science at the University of Central Florida. He is a principal in the Media Convergence Laboratory and the CS Graphics Group. His research interests include MR, interactive simulation, and models of computation. Hughes has a BA in mathematics from Northeastern University, and an MS and a PhD in computer science from Pennsylvania State University. Contact him at ceh@cs.ucf.edu.

Christopher B. Stapleton is founding director of the Media Convergence Laboratory and a faculty member at the Graduate School of Film and Digital Media of the University of Central Florida. His research interests include developing creative and scientific research for next-generation, experience-based digital media for art, entertainment, and education. Stapleton has a BFA in theater and design and an MFA in film and design from New York University. Contact him at chris@mcl.ucf.edu.

Darin E. Hughes is a research associate in the Media Convergence Laboratory at the University of Central Florida's Institute for Simulation and Training. His research interests include MR, auditory perception, and human-computer interaction. Hughes has a BA in English from the University of Florida and a BS in information technology from the University of Central Florida, where he is pursuing a PhD in modeling and simulation. Contact him at darin@cs.ucf.edu.

Eileen M. Smith is a research associate at the Institute for Simulation and Training and a faculty member in the School of Film and Digital Media at the University of Central Florida. Her research interests include creating rich, dynamic environments for learning. Smith has a BA in speech and communications from Georgia State University and an MA in theater from the University of Louisville. Contact her at esmith@ist.ucf.edu.