03:00 PM - 03:25 PM
LiFi is a technology that delivers high-speed data wirelessly to XR devices using light waves, enabling wireless connectivity where radio frequencies are not permitted for security or safety reasons. Signify (formerly Philips) is connecting XR devices at a stable 250 Mbps with ultra-low latency that is safe and secure. This technology is being deployed in hospitals, the military, oil and gas, mining, and manufacturing. Find out how LiFi will enable wireless XR with even higher data throughput for superior performance.
02:25 PM - 02:50 PM
In 2019, Eliud Kipchoge chased a GPS-guided green laser into history and ran the first sub-2-hour marathon. In technical terms, the laser was a real-time positional indicator. Said differently, the green laser was a ghost runner, a virtual running partner - a tech-enabled Enhanced Reality solution.
What if everyone - not just world-class athletes - could compete against a ghost runner in real time? Not against an abstract segment displayed on a watch or a cycling computer, but a visible projection in the natural field of view - non-distracting, yet instantly accessible. That's Enhanced Reality.
Mark Prince, GM of smart eyewear company ENGO, will present advances in display systems that enable anyone to do exactly that. ENGO's product is the first wearable display for sports that achieves glanceability for instant access to data, light weight for comfort and practical use, the long battery life required for general use and for endurance sports in particular, and a high-performance display that works in all light conditions.
Prince will explain how ENGO's solution can be used to change the dominant mode of sports training from post-activity review and analysis to in-activity decision making - with the potential to unlock performance for runners, cyclists, and other athletes - much like a vehicle-based green laser helped Kipchoge break the 2-hour barrier. Prince will demonstrate how use cases like this will finally pave the way for mass adoption of AR technology beyond niche applications.
05:10 PM - 05:35 PM
Due to the demanding optical architectures of diffractive waveguide gratings used for augmented reality applications, the gratings must be manufactured to extremely high accuracies, or the image quality will suffer. The grating period has to match the designs within tens of picometers, and the tolerances for the relative orientation of the gratings are in the arcsecond range. Both the production masters and the replicated gratings need to be characterized nondestructively, and the grating areas scanned to ensure uniformity. The measurement system should work for surface relief and volume holographic gratings in various material systems. We describe a Littrow diffractometer that can perform this challenging task. A narrow-band and highly stable laser source is used to illuminate a spot on the sample. Mechanical stages with high-accuracy encoders rotate and tilt the sample until the laser beam is diffracted back to the laser. This so-called Littrow condition is detected through a feedback loop with a beam splitter and a machine vision camera. The grating period and relative orientation can then be calculated from the stage orientation data. With the system properly constructed, and custom software algorithms performing an optimized measurement sequence, it is possible to reach repeatability in the picometer and arcsecond range for the grating period and relative orientation, respectively. By carefully calibrating the stages or by using golden samples, absolute accuracy for the grating period can also reach the picometer range.
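The period calculation the abstract describes follows directly from the Littrow condition, m·λ = 2d·sin(θ): once the stages have found the retro-diffraction angle, the period drops out in one line. A minimal sketch (the function name and the example wavelength and angle are illustrative, not values from the talk):

```python
import math

def littrow_period(wavelength_nm: float, littrow_angle_deg: float, order: int = 1) -> float:
    """Grating period d (in nm) from the Littrow condition m*lambda = 2*d*sin(theta)."""
    theta = math.radians(littrow_angle_deg)
    return order * wavelength_nm / (2.0 * math.sin(theta))

# Illustrative values: a 532 nm laser retro-diffracted in first order at 41.0 degrees
period = littrow_period(532.0, 41.0)   # ~405.45 nm
```

Because the period depends on the measured angle only through sin(θ), picometer-level repeatability on d translates into the arcsecond-level angular repeatability the abstract cites.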
04:40 PM - 05:05 PM
More than a decade ago, the industry started to signal interest in enabling core XR technologies. Around 6-8 years into that phase, first-generation XR devices were introduced into the market with varying degrees of success. Just two years ago, we started to see an explosion in user interest in XR-related products, in part due to the pandemic influencing user behavior, but also thanks to companies trail-blazing in products and services. The question we continue to ask ourselves is: what devices and user experiences should be offered to end users in the next 3-5 years? Goertek is proud of its legacy and its continued investment in the XR category, and we look forward to using this session to engage with the XR community.
11:10 AM - 11:35 AM
In the backdrop of the Metaverse, how does the real world fit in, what do open, interoperable, and ethical look like, and what is the role of the Web? We explore some potential guiding principles for the real-world Spatial Web and early steps toward realization through standards activities and the Open Spatial Computing Platform (OSCP). Along the way, we highlight areas that present strong opportunities for academic research, standards, and open-source contributions.
01:00 PM - 01:25 PM
The NVIDIA CloudXR SDK empowers developers to deliver highly complex applications and 3D data to low-powered VR & AR devices. We discuss what NVIDIA CloudXR is, how to get started integrating the SDK into your applications, and the benefits customers have shared during their development. We learn from Bentley Systems as they discuss how NVIDIA CloudXR introduces new ways of working while preserving and elevating existing AEC/O workflows for design and building.
11:00 AM - 11:25 AM
Whether for hands-free mobile displays or in-situ data overlay, head-mounted augmented reality offers much to improve productivity and reduce human error in space. Unfortunately, existing solutions for tracking and holographic overlay alignment tend to rely on, or at least assume, Earth gravity. Nothing inherent to a microgravity environment makes AR tracking impossible, but several factors need to be taken into account. First, high-frequency camera pose estimation uses SLAM, which relies on data from IMU sensors that, by default, accommodate the acceleration of gravity in their base measurements. Second, most holographic alignment strategies assume a consistent down direction. This session will explore strategies to mitigate these limitations.
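To illustrate the first point: accelerometers sense specific force rather than acceleration, so a pose estimator must add the local gravity vector back in before integrating, and that term vanishes in microgravity. A minimal sketch assuming a known sensor-to-world rotation (the names and the simple toggle are illustrative, not the session's actual approach):

```python
import numpy as np

G_EARTH = np.array([0.0, 0.0, -9.81])  # m/s^2, world frame, z-axis up

def linear_accel_world(f_sensor, R_world_from_sensor, microgravity=False):
    """Recover linear acceleration in the world frame from a raw accelerometer
    sample. The sensor measures specific force f = a - g, so in a 1 g
    environment gravity must be added back; in microgravity that term is ~zero."""
    g = np.zeros(3) if microgravity else G_EARTH
    return R_world_from_sensor @ np.asarray(f_sensor) + g

# A stationary IMU on Earth reads +9.81 m/s^2 "up" yet is not accelerating:
a_earth = linear_accel_world([0.0, 0.0, 9.81], np.eye(3))          # ~[0, 0, 0]
# A free-floating IMU in orbit reads ~zero, and no correction is needed:
a_orbit = linear_accel_world([0.0, 0.0, 0.0], np.eye(3), True)     # [0, 0, 0]
```

A SLAM stack hard-wired to the first branch will misestimate motion in orbit, which is one concrete reason off-the-shelf trackers assume Earth gravity.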
10:40 AM - 11:05 AM
Smart glasses in a normal form factor will significantly change our everyday lives. tooz envisioned this more than a decade ago when the tooz journey began in the corporate research labs of ZEISS Germany. Starting with the first curved waveguide, tooz developed several generations of optical engines and provided a continuous stream of patented inventions to the smart glasses industry. Five years ago, Deutsche Telekom/T-Mobile joined the journey as a 50% shareholder and enabled the development of full smart glasses solutions based on tooz’ waveguides. At AWE 2022, tooz will launch its next breakthrough innovation on its mission to lead this market: tooz ESSNZ Berlin is the first market-ready smart glasses reference design with vision correction that will change the daily interaction of consumers with data, media and ecosystem interfaces. The underlying tooz technology is highly customizable, scalable, and marketable – not in the future, but already today.
02:00 PM - 02:25 PM
How hologram telecommunication delivers the parts that every other technology leaves out — the parts that make us human.
10:00 AM - 10:25 AM
03:40 PM - 04:35 PM
How does augmented reality overlap with the shift from web2 to web3?
Amy will moderate a panel of experts on the convergence of #AR with the growth in adoption of blockchain, NFT, and cryptocurrency.
04:05 PM - 05:00 PM
Haptics is the next major unlock for augmented reality and virtual reality, much as “HD” and “4K” were for audio and video. This panel will explore the current state of haptics tech in XR, ethics, accessibility, and future-forward use cases, including live events, training, and esports. The discussion will cover technologies, innovations, standards, advice on adoption, and hands-on development.
03:25 PM - 03:50 PM
Recently, there has been, and continues to be, a flurry of activity around AR and the Metaverse. How these domains intersect and unfold over time is still very much in the early stages. What is clear, however, is that the “on-ramp,” or gateway, into the Metaverse starts with the ability to perceive the physical and digital worlds simultaneously. Many technologies and devices are needed to enable true immersion, and first and foremost is the ability to overlay the digital domain onto physical space. In this talk we will discuss these aspects and delve deeply into near-to-eye display technologies that allow users to coexist in the physical and digital domains.
02:30 PM - 02:55 PM
Headset-based AR/VR offers an immersive dive into these new digital worlds, but to many it still feels cumbersome and unfamiliar. As a result, mass adoption is still relatively slow.
3D Lightfield displays offer a naturally immersive, “window-like” 3D look into the Metaverse while leaving users’ faces unencumbered. They can be readily deployed on familiar terminals, from smartphones and tablets to laptops and automotive displays.
Better still, this method ensures compatibility with much of the existing digital content ecosystem, hence democratizing access to the Metaverse and potentially accelerating its deployment.
In this talk, I will review our efforts at Leia to commercialize Lightfield-based mobile devices and our take on how to steadily ramp consumer adoption of the Metaverse.
12:00 PM - 12:25 PM
Volumetric video technology captures full-body, dynamic human performance in four dimensions. An array of 100+ cameras points inward at a living subject (a person, animal, or group of people) and records its movement from every possible angle. Processed and compressed video data from each camera becomes a single 3D file – a digital twin of the exact performance that transpired on stage – for use on virtual platforms. Finished volcap assets are small enough to stream on mobile devices yet deliver the visual detail of 100+ cameras, making them a go-to solution for bringing humans into the Metaverse.
The volumetric video market is expected to grow from $1.5B USD in 2021 to $4.9B USD by 2026 as holographic imaging becomes increasingly crucial for the development of compelling, human-centric immersive content and Metaverse creators strive to solve the “uncanny valley” problem.
The session dives into the latest and greatest applications of volcap in augmented reality across multiple sectors – including fashion, entertainment, AR marketing and branding, enterprise training, and more…
We’ll examine the ground-breaking potential this technology holds for augmented and mixed reality as well as some of the challenges that may face this burgeoning industry.
05:05 PM - 05:30 PM
In this talk, Luis will provide a structured overview of the key delivery bottlenecks, the technology advancements being made, and some recent examples of metaverse implementations spanning transportation, education, and entertainment, in order to answer the question: “Are we there yet?!”
11:35 AM - 12:00 PM
The Open AR Cloud is working to democratize the AR Cloud with infrastructure based on open and interoperable technology, and we are building city-scale AR testbeds that can be experienced in cities around the world. These are real-world use cases that combine the digital with the physical: rich experiences that are synchronous, persistent, and geospatially tied to a specific location. Content in situ allows the user to explore the world, connect with others, and have a shared experience.
We will discuss new types of content activations based on proximity, gaze, voice, sensor data, and algorithmic spatial ads. Partners will present use cases such as wayfinding and NFT exhibits, as well as case studies that demonstrate how the technology is being used to build more diverse, equitable, and inclusive real-world communities that raise awareness of critical issues like climate change and public health.
01:30 PM - 01:55 PM
Building digital twins of our environments is a key enabler for XR technology. In this session, we will cover several works on 3D reconstruction (building digital twins) from monocular images recently developed at InnoPeak Technology. We first present MonoIndoor, a self-supervised framework for training depth neural networks in indoor environments, and consolidate a set of good practices for self-supervised depth estimation. We then introduce GeoRefine, a self-supervised online depth refinement system for accurate dense mapping. Finally, we talk about PlaneMVS, a novel end-to-end method that reconstructs semantic 3D planes using multi-view stereo.
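The training signal behind self-supervised depth systems of this kind typically comes from photometric reprojection: a pixel in the target view is warped into a source view using the predicted depth and relative camera pose, and the depth network is trained to make the two views photometrically agree. A minimal sketch of that warp under a pinhole camera model (the names and example values are ours, not the talks' actual code):

```python
import numpy as np

def warp_pixel(uv, depth, K, R, t):
    """Warp a target-view pixel into the source view given its predicted depth
    and the relative pose (R, t): p_s ~ K (R * depth * K^-1 * p_t + t)."""
    p_t = np.array([uv[0], uv[1], 1.0])
    X = depth * (np.linalg.inv(K) @ p_t)   # back-project into the target camera
    X_s = R @ X + np.asarray(t)            # move into the source camera frame
    p_s = K @ X_s                          # project with the source intrinsics
    return p_s[:2] / p_s[2]                # perspective divide -> source pixel

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
# A 0.1 m sideways baseline shifts a pixel at 2 m depth by f*tx/Z = 25 px:
u, v = warp_pixel((100.0, 80.0), 2.0, K, np.eye(3), [0.1, 0.0, 0.0])  # (125, 80)
```

Minimizing the photometric difference between the target image and the source image sampled at these warped coordinates is what lets such networks learn depth without ground-truth depth labels.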
02:35 PM - 03:00 PM
Ray tracing techniques have dramatically improved graphics rendering quality in recent years. In the coming few years, mobile chipsets will also support hardware-accelerated ray tracing, which will bring more visually believable virtual environments with realistic lighting and shadowing effects. It will become a major technique in mobile gaming and in augmented and virtual reality devices.
OPPO, in collaboration with partners, began developing its ray tracing technology in early 2018 and initially adopted a hybrid rendering method to gradually introduce ray tracing to existing mobile devices. This talk will introduce the short history of mobile ray tracing, forecast its trends in mobile devices, and explore potential applications.
02:05 PM - 02:30 PM
As part of a DoD project, Deloitte recently built a successful 5G infrastructure, along with a multitude of technologies on top of it, adding measured value to one of the largest military branches in the US. In this talk we'll discuss AR, edge compute, and the impact 5G is actively having across dozens of military bases today, and where we see the future of XR going beyond the current 'Metaverse' buzz.