Agenda

Jun 2

10:00 AM - 10:55 AM

Description

VR and AR tools offer numerous ways to align and expand the physical and the virtual, and these experiences can shape our interpersonal interactions. New research from StanfordVR has added context to our understanding of avatars and virtual environments. Whether you are a developer, artist, or stakeholder, come discuss the latest findings with our panel of industry leaders, researchers, artists, and visionaries.

The panel will take a variety of perspectives into account, from philosophy and AI-generated content to platforms, avatar embodiment, and ethics. Guided by audience questions, the group will also cover cross-reality methods for training, entertainment, and personal development, with takeaways for creating effective and memorable experiences.

Speakers

XR Producer & Director, Dimension Adventures
Graduate Research Fellow, Stanford University
CEO, Odeon Theatrical
Advisor, Dulce Dotcom
Jun 2

11:00 AM - 11:30 AM

Description

On their own, text-based LLM interfaces like ChatGPT can’t deliver exchanges akin to one person chatting with another in the real world. They lack a face, a personality, and a physical presence with which to connect. But very soon, we won’t just be interacting with LLMs through a text prompt. The AIs we chat with will have a physical form. Will that form be built of real-world holograms or physical robots? Just as iPhones were the “container” for the mobile revolution, the race between holograms and emotional robots as the “containers” for the conversational AI revolution is heating up. This talk will touch on the early commercial examples of each.

Speakers

Chief Executive Officer, Looking Glass
Jun 2

11:35 AM - 12:30 PM

Description

Coming Soon!

Speakers

CEO and Chief Scientist, Unanimous AI
President, Rethink Next
Chief Strategy Officer & Co-Founder, Subtext
CEO / Co-Founder, WaveAI
Founder, Virtual Events Group
Jun 2

01:30 PM - 01:55 PM

Description

Facial. Vocal. Verbal. Human conversations are complex. Many factors are at work when we speak, including our facial expressions, tone of voice, and word choice. Today, companies operate in a more globalized and virtualized world than ever before, and it can be challenging to accurately read the conversations needed to establish business-critical relationships. The stakes are high: reading a situation incorrectly can lead to miscommunication, confusion, and ultimately the erosion of trust.

Leveraging machine learning algorithms that draw from an ever-expanding database of emotional cues, Multimodal Emotional AI fuses data from all three modalities (face, voice, words), helping to overcome this relationship challenge. By capturing the complex messages conveyed through emotional cues, we can now identify key moments that influence sentiment and engagement, as well as the emotional intelligence (EQ) factors that build trust and empathy. It’s like having a coach who senses even the most subtle emotions in the room and makes real-time recommendations on how to create the best possible human connection.
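
As a concrete illustration of the fusion idea, here is a minimal, hypothetical late-fusion sketch in Python; the scores, weights, and labels are invented for the example and do not represent Uniphore's actual models or data.

```python
# Hypothetical late-fusion sketch of multimodal emotion AI (illustrative only):
# separate models score the same conversational moment per modality, and a
# weighted combination yields a single sentiment estimate.
import numpy as np

# Assumed per-modality scores over (negative, neutral, positive).
scores = {
    "face":  np.array([0.10, 0.30, 0.60]),  # e.g. from a facial-expression model
    "voice": np.array([0.20, 0.50, 0.30]),  # e.g. from a tone-of-voice model
    "words": np.array([0.05, 0.25, 0.70]),  # e.g. from a text-sentiment model
}
weights = {"face": 0.4, "voice": 0.3, "words": 0.3}  # illustrative; learned in practice

fused = sum(weights[m] * scores[m] for m in scores)
labels = ["negative", "neutral", "positive"]
print(labels[int(np.argmax(fused))])  # -> "positive"
```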

Drawing on real-world examples, Umesh Sachdev, Co-founder and CEO of Uniphore, will describe how behavioral science and Emotional AI are helping to close the divide between humans and machines, driving deep and meaningful connections built on trust. Umesh will also discuss potential applications for this technology and how it is being used to derive value from critical conversations across the enterprise.

And make every conversation count.

Speakers

Co-Founder and CEO, Uniphore
Jun 2

02:00 PM - 02:25 PM

Description

EMPLOYMENT IS DEAD: HOW THE METAVERSE WILL FUNDAMENTALLY CHANGE THE WAY WE WORK, by New York Times best-selling author Deborah Perry Piscione and Harvard Innovation Lab advisor Josh Drean, addresses the elephant in the room: work no longer works! The fundamental structure of work is in desperate need of a new vessel for productivity and fulfillment. With the explosion of technological advances in AI, web3, digitization, and cryptocurrency, employees are discovering new ways to earn a living that break the mold of traditional employment and offer greater ownership, flexibility, and agency over their working lives. EMPLOYMENT IS DEAD explores the mindset shift necessary to thrive in the metaverse, gives reasons why employees will eventually abandon traditional employment for the bounty of web3, and outlines practical strategies that digital-first leaders can start implementing today to take advantage of this vast new world of endless possibilities.

Speakers

Co-Founder & Dir. of Employee Experience, Work3 Institute
Co-Founder & Chief Metaverse Officer, Work3 Institute
Jun 2

02:30 PM - 02:55 PM

Description

Over the last two decades, 3D capture has become a core technology in nearly every field, from architecture and construction to VFX and gaming. In 2022, a novel machine learning technique called NeRF (Neural Radiance Fields) became accessible and rapidly began shifting the way we approach 3D capture.

Originally introduced in a 2020 academic paper, NeRF first became accessible to creators when NVIDIA released its "Instant NGP" research tool two years later. In recent months, it has rapidly become a consumer-ready technology, with platforms like Luma AI dramatically lowering the barrier to entry.

Using NeRF, anyone can now create stunningly detailed, photorealistic captures of scenes from simple photo or video input, without expensive specialized hardware. By leveraging emerging AI methodologies, it leapfrogs previous capture techniques while reducing the need for human labor, dramatically cutting the cost of creating VFX-grade 3D scenes.
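
For the technically curious, a minimal sketch of the core NeRF idea follows: a learned field maps 3D points and view directions to color and density, and a pixel is produced by volume rendering along the camera ray. The function names and the toy field are illustrative assumptions, not Luma AI's or NVIDIA's implementation.

```python
# Minimal sketch of the core NeRF idea (illustrative, not a real product's code).
# In an actual NeRF, `field` is an MLP trained so that rendered rays match the
# input photos; here it is any callable returning (rgb, sigma) per sample.
import numpy as np

def render_ray(field, origin, direction, t_near=0.1, t_far=5.0, n_samples=64):
    """Volume-render one camera ray through a radiance field."""
    t = np.linspace(t_near, t_far, n_samples)        # depths along the ray
    points = origin + t[:, None] * direction         # 3D sample positions
    rgb, sigma = field(points, np.broadcast_to(direction, points.shape))

    delta = np.append(np.diff(t), 1e10)              # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)             # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1] + 1e-10))  # transmittance
    weights = alpha * trans                          # contribution per sample
    return (weights[:, None] * rgb).sum(axis=0)      # composited pixel color

# Toy field (a translucent red unit cube) just to exercise the renderer.
def toy_field(points, dirs):
    inside = np.all(np.abs(points) < 1.0, axis=-1)
    rgb = np.tile([1.0, 0.2, 0.2], (len(points), 1))
    return rgb, np.where(inside, 2.0, 0.0)

pixel = render_ray(toy_field, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```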

With its advanced capabilities and efficiency, NeRF is revolutionizing how industries think about 3D capture and enabling a future where anybody can capture truly immersive memories.

Speakers

Founder, Luma AI
Managing Director, PropTech Consulting