An Austin-based startup best known for its VR and mixed reality workspace software for other companies’ headsets has developed its own hardware. The Immersed Visor lands somewhere between Apple’s Vision Pro and Xreal’s AR glasses: a lightweight, head-worn device that creates high-resolution spatial computing environments on the cheap (well, relatively speaking).
After months of rumors, Immersed founder Renji Bijoy finally unveiled the Visor at an event in Austin on Thursday. More capable than smart glasses but far short of a full headset, the device provides the equivalent of a 4K OLED screen for each eye, has a 100-degree field of view, supports 6DoF tracking (meaning it responds to movement along multiple axes, not just simple head rotation), offers hand and eye tracking, and supports five or more screens in a virtual or mixed reality environment.
During the presentation, Bijoy revealed that the Immersed Visor weighs just 186 grams, making it slightly lighter than an iPhone 16 Pro, 64 percent lighter than the Meta Quest 3 (515 grams) and about 70 percent lighter than the Apple Vision Pro (600 to 650 grams). Weight and ergonomics have been a sticking point for many early adopters of VR and mixed reality technology, including customers of the $3,500 Vision Pro. So keeping the Visor’s weight close to that of a high-end smartphone could, in theory, translate into success in an area where competitors struggle. Part of the reason for the low weight is the wired battery pack that (borrowing a page from Apple) can be tucked away in your pocket.
But unlike those devices, the Immersed Visor doesn’t include onboard experiences like app stores or games. Instead, it’s tailored for work: You connect it (wirelessly or wired) to your Windows, macOS, or Linux computer and get work done on an array of immersive virtual screens. 6DoF tracking means that even if you stand up, lean, or twist, the virtual screens stay fixed where you place them, rather than moving unnaturally through space.
As with the company’s workspace apps for Meta Quest and Vision Pro, you can work with either a pass-through view of your physical space or a completely virtual one (including cozy environments like sitting by a fireplace at a mountaintop ski resort). You can also collaborate with others in shared spaces.
The device is powered by the Qualcomm XR2+ Gen 2 chip, which debuted at CES 2024. The chip supports up to 4.3K resolution per eye and can process content at up to 90fps.
Immersed has adopted an unconventional pricing structure. The one-time purchase price for the device starts at $1,050, but if you opt for a subscription plan, you can get it for $400 up front plus either $40 per month for 24 months or $60 per month for 12 months. This model won’t ship until roughly “six months” after the October announcement, which would mean April 2025. If you want the “Founder’s Edition,” which starts shipping the following month, the one-time price rises to $1,350, or $700 up front on the same monthly subscription terms as the later-shipping model.
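For readers comparing the options, here is a quick back-of-the-envelope sketch of what each payment plan adds up to, using only the figures quoted above (the plan labels are my own shorthand, not Immersed’s):

```python
# Total cost of each announced Visor payment option (standard model, USD).
# Figures are taken from Immersed's stated pricing; labels are illustrative.
plans = {
    "one-time purchase": 1050,
    "$400 down + $40/mo x 24": 400 + 40 * 24,  # 400 + 960
    "$400 down + $60/mo x 12": 400 + 60 * 12,  # 400 + 720
}

for name, total in plans.items():
    print(f"{name}: ${total}")
# one-time purchase: $1050
# $400 down + $40/mo x 24: $1360
# $400 down + $60/mo x 12: $1120
```

Note that both installment plans total more than the $1,050 one-time price, which is typical of financing arrangements: the 24-month plan comes out to $1,360 and the 12-month plan to $1,120.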
In theory, the Immersed Visor could be an attractive product for people interested in spatial computing who want something cheaper than the Vision Pro, higher-resolution than the Meta Quest 3 and less beta-like than Xreal’s AR glasses. Whether it delivers on those points remains to be seen: as far as I can tell, no major media outlet (including Engadget) has published hands-on impressions of the device. And as this year’s wave of AI gadgets has reminded us, big promises mean nothing if the product ends up a $1,000 paperweight.
Watch the presentation below, and if it piques your interest, you can pre-order the Visor on Immersed’s website.
