Some research published by Xuan Luo and colleagues has introduced notable improvements to the estimation and depiction of depth in augmented reality footage.
As summarized nicely in a , for AR objects to be integrated convincingly and naturally within a given scene (say, one captured using a mobile phone), a decent depth map is needed. This is essentially a visual representation of the distance between the camera and each point in the scene, and it looks a bit like one of those infrared heat maps.
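To make the heat-map analogy concrete, here is a minimal sketch (not from the paper; the function name and the toy scene are illustrative) of how a raw depth map, a 2-D grid of camera-to-point distances, might be normalised for that kind of colour display:

```python
import numpy as np

def normalize_depth_map(depth):
    """Scale raw per-pixel distances (in metres) to [0, 1] for display.

    depth: 2-D array where each entry is the distance from the camera
    to the scene point behind that pixel.
    """
    d = np.asarray(depth, dtype=float)
    near, far = d.min(), d.max()
    if far == near:  # perfectly flat scene: avoid division by zero
        return np.zeros_like(d)
    # Closer points map toward 1, farther points toward 0, mimicking a
    # "hot = near" infrared-style colour scheme.
    return (far - d) / (far - near)

# A toy 2x3 "scene": a near object (1 m) in front of a far wall (5 m).
depth = [[5.0, 5.0, 5.0],
         [1.0, 1.0, 5.0]]
heat = normalize_depth_map(depth)
```

Feeding `heat` into any image viewer with a warm colour map would give the familiar heat-map look the article describes.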
The thing is, most of the techniques that have produced these depth maps until now tend to generate maps with a fair amount of visual artefacts: patchy areas of blurry, unresolved detail that appear to flicker. Since the depth map informs how AR objects should behave in a scene, more artefacts like this ultimately make for more jarring animations of those objects.
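One rough way to picture the flicker problem: for a static scene filmed by a static camera, a stable estimator should report the same depth every frame, so the frame-to-frame variation of each pixel's depth acts as a crude flicker score. This is a simplified sketch of that idea, not a metric from the paper:

```python
import numpy as np

def temporal_flicker(depth_frames):
    """Mean per-pixel standard deviation of depth across frames.

    For a static scene and camera, a perfectly temporally consistent
    depth estimator would score 0; flickering artefacts push it up.
    """
    stack = np.stack([np.asarray(f, dtype=float) for f in depth_frames])
    return float(stack.std(axis=0).mean())

# Identical depth every frame -> no flicker.
stable = [[[2.0, 4.0]]] * 3
# Depth jitters from frame to frame -> nonzero flicker score.
noisy = [[[2.0, 4.0]], [[2.5, 3.5]], [[1.5, 4.5]]]
```

A real evaluation would also have to account for camera motion, which is exactly why producing geometrically consistent depth across a moving-camera video is the hard part.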
Enter the team's research: consistent video depth estimation. With this, significantly higher-quality and more accurately detailed depth maps are produced, which means notable improvements to the animation of AR objects and their on-screen interactions with real-life objects (like ).
Basically, smoother depth maps mean smoother, more natural integration of AR components into video-captured footage. With this technique, a global, geometrically consistent depiction of depth can be retained throughout an entire video. For an idea of what this all looks like, the researchers have offered some videos showcasing the effects, viewable .
As you will also see, though, some regions of flickering and high-frequency noise still show up as the camera moves around, so there is a bit of tweaking left to do. Without a doubt, though, this is a big improvement on past techniques, and it bodes well for the future of AR gaming (the legendary Pokémon GO and the like) as its popularity continues to expand.
Source: