Agenda

May 26

01:00 PM - 01:20 PM

Description

From black and white to color, and desktop to mobile, our electronic devices are visual media that have rapidly evolved and advanced over the past century. New device technologies continue to add clarity and mobility, enhancing our viewing experience to be more convenient and closer to our reality. Today, more than ever, these devices have become indispensable information vehicles ingrained in every aspect of our daily lives.
The average person is now so acclimated to thousands of digital interactions each day that they're hungry for a more immersive experience. They're ready for the next medium that will seamlessly enhance their lives and bridge the gap between real life and technology (without a headset). They're ready for light fields.
In this talk, David will discuss light fields as the next-generation medium. He will explain how this emerging technology provides users with a fully interactive, lifelike viewing experience by rendering images and videos with 3D depth and complex, realistic light effects such as sparkle, texture, and highlights. He will also discuss how this technology will change the game for businesses by empowering them to provide end users with more immersive and engaging experiences across industries including auto, retail, medical, and education.
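To make the rendering idea concrete: one basic ingredient of multi-view and light field displays is synthesizing extra viewpoints from an image plus per-pixel depth (depth-image-based rendering). A minimal generic sketch, assuming illustrative values, and not drawn from Leia's actual pipeline:

```python
# A minimal, generic sketch of depth-image-based rendering (DIBR), one
# building block behind multi-view/light field displays. This is NOT Leia's
# pipeline; the function and all values are illustrative.
import numpy as np

def render_shifted_view(image, disparity, shift):
    """Forward-warp an HxWx3 image horizontally by shift * disparity pixels.

    Larger disparity (closer pixels) moves further, producing parallax
    between synthesized views. Occlusion holes are left black in this sketch.
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    u = np.arange(w)
    for v in range(h):
        target = np.clip(np.round(u + shift * disparity[v]).astype(int), 0, w - 1)
        out[v, target] = image[v, u]  # last write wins where pixels collide
    return out

# Illustrative use: four views with increasing parallax, as a multi-view
# display might require.
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
disparity = np.linspace(0.0, 8.0, 640)[None, :].repeat(480, axis=0)
views = [render_shifted_view(image, disparity, s) for s in (-1.5, -0.5, 0.5, 1.5)]
```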

Speakers

CTO, Leia Inc
May 26

01:40 PM - 02:00 PM

Description

3D face scanning is the first step in cloning a person and bringing that person into a virtual environment with high fidelity. High-quality 3D face scanning has typically been done with specialized and complicated photogrammetry systems. The emergence of low-cost, high-quality mobile depth sensors, such as those in the Apple iPhone X, helps fuel the advance of 3D scanning and puts scanning capabilities in the hands of millions of users. The talk will discuss different types of mobile depth sensors, including structured light, time-of-flight (ToF), and active stereo, and compare their pros and cons for close-range 3D face scanning. The talk will also present the unique challenges of 3D face scanning and discuss its applications in fields such as medical, dental, eyewear, entertainment, and 3D printing.
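To make the sensing pipeline concrete: whichever technology the sensor uses (structured light, ToF, or active stereo), its raw output is a depth map that gets back-projected into a 3D point cloud before any face reconstruction. A minimal sketch of that standard pinhole-model step, assuming made-up intrinsics and not drawn from Bellus3D's code:

```python
# A minimal sketch of the standard first step in depth-based 3D scanning:
# back-projecting a depth map into a point cloud with the pinhole model.
# Not Bellus3D's pipeline; intrinsics (fx, fy, cx, cy) are illustrative.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert an HxW depth map (meters) into an Nx3 point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Illustrative use: a synthetic 640x480 frame at ~30 cm, roughly the
# close-range regime face scanning operates in.
depth = np.full((480, 640), 0.3, dtype=np.float32)
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```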

Speakers

CEO, Bellus3D
May 26

02:00 PM - 02:10 PM

May 26

03:10 PM - 03:30 PM

Description

Learn how Artie’s technology enables developers to create games in Unity that have next-gen interactive features, including speech recognition and computer vision.
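To give a flavor of the speech-recognition piece (Artie's Unity SDK itself is not shown here), a minimal stand-in sketch using the open-source Python SpeechRecognition package:

```python
# A generic speech-recognition sketch (not Artie's SDK): capturing microphone
# audio and transcribing it, the kind of voice input an interactive game
# character might respond to.
# Requires: pip install SpeechRecognition pyaudio
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
    print("Say something to the character...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # demo-grade free web endpoint
    print(f"Player said: {text}")
except sr.UnknownValueError:
    print("Could not understand audio")
except sr.RequestError as e:
    print(f"Recognition service error: {e}")
```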

Speakers

Founder, Artie
May 26

03:30 PM - 03:50 PM

Description

The Augmented Conversation is a dialogue between human and AI participants that enables them to imagine, describe, and create live virtual objects and simulations, which they can interactively and simultaneously explore and modify. The AI is a full participant in this exploration – listening and watching the human participants and responding instantly to reify the ideas and concepts they discuss.
This isn't an app. This is live collaboration – a foundation for how the next devices will mediate our engagement with others and with the world. Not only will these devices replace your smartphone, but they will also replace your PC. The new devices will, and must, become super workstations that live up to the promise of delivering the "bicycle for the mind". They will enable you to work with ideas and concepts that are simply not possible today, and you'll be able to share those ideas at any time with anyone.

Speakers

CTO and Founder, Croquet
May 26

03:50 PM - 04:10 PM

Description

Since the mid-1990s, a significant scientific literature has evolved regarding the mental/physical health outcomes from the use of what we now refer to as Clinical Virtual Reality (VR). While the preponderance of clinical work with VR has focused on building immersive virtual worlds for treating anxiety disorders with exposure therapy, providing distracting immersive experiences for acute pain management, and supporting physical/cognitive rehabilitation with game-based interactive content, other emerging areas have extended the impact of VR in healthcare.

One such area involves the evolution of conversational virtual human (VH) agents, driven by seminal research and development leading to the creation of highly interactive, artificially intelligent, natural-language-capable VHs that can engage real human users in a credible fashion. No longer at the level of a prop used to add context or minimal faux interaction to a virtual world, VH representations can now be designed to perceive and act in a 3D virtual world, engage in face-to-face spoken dialogues with real users, and, in some cases, exhibit human-like emotional reactions.

This presentation will provide a brief rationale and overview of research showing the benefits derived from the use of virtual humans in healthcare applications. It will detail studies reporting positive outcomes from VHs in the role of virtual patients for training novice clinicians, as job interview/social skill trainers for persons on the autism spectrum, and as online healthcare support agents for university students and military Veterans.

The computational capacity now exists to deliver similar VH interactions on mobile devices. This capability can support the "anywhere/anytime" availability of VH characters as agents for engaging users with healthcare information, and could improve access to care and emotional support across a wide range of wellness and clinical applications and populations. This work will be discussed along with a look into the future of this next major movement in Clinical VR.

For more information on this topic, please visit our website: http://medvr.ict.usc.edu/ and YouTube channel: https://www.youtube.com/user/AlbertSkipRizzo/videos?view=0&sort=dd&shelf_id=1

Speakers

Director of Medical Virtual Reality, Institute for Creative Technologies, USC