News: volumetric video technology on display at Siggraph 2019, cinematic-quality digital avatars…
What are we going to learn about the latest advances in cinematic quality volumetric video at Siggraph?
Article by Micah Blumberg, http://vrma.io
It’s Saturday, July 27th. I’m a journalist and a software architect, and I’m packing my bags to go to Siggraph 2019 in Los Angeles tomorrow.
For the first time I am bringing a depth camera with me: Microsoft’s new Azure Kinect DK, which has been on sale for over a month.
The camera is aimed specifically at developers; it’s not consumer-ready. For example, recording sensor data is done through a command-line interface, it doesn’t record sound out of the box, you can’t simultaneously record and preview what you’re capturing with the default software, and, like other depth cameras, it needs to be plugged into a PC to operate.
Buy the Azure Kinect developer kit - Microsoft
Azure Kinect DK is a developer kit that contains a best-in-class 1MP depth camera, 360˚ microphone array, 12MP RGB… (www.microsoft.com)
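To give a sense of the current developer workflow, here is a minimal sketch of recording a short clip by driving the SDK’s k4arecorder command-line tool from Python. The specific flag values and the output filename are assumptions on my part (check them against your installed SDK), so treat this as illustrative rather than a recipe.

```python
import subprocess

# Minimal sketch: drive the Azure Kinect SDK's k4arecorder command-line
# tool from Python to capture a short clip. Assumes the SDK is installed
# and k4arecorder is on the PATH; the flag values ("-d" depth mode,
# "-c" color resolution, "-l" record length in seconds) follow the SDK
# documentation but should be verified against your installed version.
subprocess.run(
    [
        "k4arecorder",
        "-d", "NFOV_UNBINNED",   # depth mode
        "-c", "1080p",           # color resolution
        "-l", "10",              # record for 10 seconds
        "capture.mkv",           # output file (hypothetical path)
    ],
    check=True,
)
```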
Fortunately, Microsoft has partnered with Depthkit, whose beta is ready now to help you capture depth video with the Azure Kinect DK (with a Pro license).
Announcing Azure Kinect support in Depthkit!
The time has come! Depthkit Pro users can start using the Azure Kinect with Depthkit today and take their volumetric… (www.depthkit.tv)
Eventually there will be other software options as well, with Brekel, a suite of affordable motion-capture tools, already pledging to support the Azure Kinect in the near future.
Imagine you wanted to combine sensor feeds from multiple Kinects into a single model. Camera alignment, calibration, and fusing 3D captures from multiple cameras is an extremely hard problem in general. To my knowledge, Brekel’s beta software is the only off-the-shelf software that does it; a sketch of the underlying fusion step follows the link below.
- Brekel
"Ahh Screw it, Let's Use Depth Sensors and VR/AR Equipment in Production" - VRLA/FMX talk Pro Hands Track hands &…brekel.com
Even as I begin to experiment with volumetric filmmaking as a journalist, I’m surprised by what I’m already learning from companies coming to Siggraph that have been emailing me (my email is micah@vrma.io) to share their press releases: technology that is already going to change what I do in the future as a journalist.
Think about computer-generated influencers for a moment. Lil Miquela, for example, despite not being a real flesh-and-blood person, has attracted a real following: millions of people are influenced by her on social media, and it’s big business for the companies that advertise with her.
The existence of these computer-generated influencers threatens the future incomes of real influencers.
Lil Miquela is an example of how companies are already using high-resolution CG models for sales and marketing.
These Influencers Aren't Flesh and Blood, Yet Millions Follow Them
The kiss between Bella Hadid and Miquela Sousa, part of a Calvin Klein commercial last month, struck many viewers as… (www.nytimes.com)
So what does that have to do with volumetric filmmaking?
Well, while the Microsoft Azure Kinect DK is currently the best of the best when it comes to its combination of sensors and price, there is a new approach, combining two older techniques, that is producing cinematic-quality volumetric video: it essentially animates 3D computer-generated models of a person with motion capture achieved with depth-sensing cameras.
I have two examples of this:
I received a video from a company called Dynamixyz.
What Dynamixyz has shown is a proof of concept demonstrating how facial tracking trained directly from scans, along with a direct-solve rig in Maya, can deliver high-fidelity raw results. In other words, you create a digital double of an actor, extract key poses from scanned data, and you get next-level volumetric video.
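As a back-of-the-envelope illustration of what a “direct solve” can look like, here is a generic least-squares blendshape solve in numpy: given a neutral mesh, a blendshape basis, and a scanned key pose, solve for the weights that best reproduce the scan. This is a textbook simplification and my own assumption, not Dynamixyz’s actual solver or their Maya rig.

```python
import numpy as np

# Hypothetical illustration: solve for blendshape weights that best
# reproduce a scanned key pose, given a neutral mesh and a blendshape
# basis. A generic linear least-squares "direct solve".
n_vertices = 5000       # assumed mesh size
n_blendshapes = 50      # assumed rig size

rng = np.random.default_rng(0)
neutral = rng.normal(size=(n_vertices, 3))                 # neutral face mesh
basis = rng.normal(size=(n_blendshapes, n_vertices, 3))    # per-shape vertex offsets
scan = neutral + 0.3 * basis[3] + 0.7 * basis[17]          # a "scanned" key pose

# Flatten the geometry so the solve is ordinary least squares:
# scan - neutral ≈ sum_i w_i * basis_i
A = basis.reshape(n_blendshapes, -1).T    # (3*V, B)
b = (scan - neutral).reshape(-1)          # (3*V,)

weights, *_ = np.linalg.lstsq(A, b, rcond=None)
weights = np.clip(weights, 0.0, 1.0)      # keep weights in the usual [0, 1] range

print("recovered weights for shapes 3 and 17:", weights[3], weights[17])
```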
The other example is from a company called ICVR.
They are showcasing their own approach to creating a photorealistic human. In this case they collaborated with “The Scan Truck,” a company that makes a 3D model of a person using cameras pointed at you from every direction. They took 30,000 photographs and rendered the resulting model in real time in Unreal Engine.
This technology again uses motion capture to animate the scanned model, creating volumetric video with far superior image quality to what you can capture with a Microsoft Azure Kinect DK using just its default sensor recording configuration.
ICVR Interactive - Full Service VR, AR & App Development
ICVR's experienced team of software developers & designers will bring your product to life. We specialize in AR & VR…icvr.io
The technique of the future might be to use depth cameras like the Azure Kinect as motion-tracking devices for animating 3D models that you have captured with high-resolution cameras. There is a lot more to it than that, but it’s exciting to see what I think is an industry shift towards turning reality into a cinematic-quality object first, before bringing it to your screen, AR/VR device, or holographic monitor.
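Conceptually, the last step of that pipeline is standard skeletal animation: the depth camera supplies joint transforms, and those transforms deform the high-resolution scanned mesh. The sketch below shows that idea with plain linear blend skinning in numpy; the skinning weights and the per-frame joint transform are made-up stand-ins for whatever the tracking and rigging tools would actually produce.

```python
import numpy as np

def skin(vertices, weights, joint_transforms):
    """Linear blend skinning: deform rest-pose vertices (N, 3) by a
    weighted blend of per-joint 4x4 transforms. weights is (N, J) with
    rows summing to 1; joint_transforms is (J, 4, 4)."""
    homo = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)  # (N, 4)
    blended = np.einsum("nj,jab->nab", weights, joint_transforms)           # (N, 4, 4)
    return np.einsum("nab,nb->na", blended, homo)[:, :3]

# Stand-in data: a high-res scanned mesh would have far more vertices,
# and the joint transforms would come from depth-camera motion tracking.
rest_vertices = np.random.rand(10, 3)
weights = np.tile([0.5, 0.5], (10, 1))      # each vertex half-bound to two joints
identity = np.eye(4)
raised_arm = np.eye(4)
raised_arm[:3, 3] = [0.0, 0.2, 0.0]         # joint 1 translated upward this frame

posed = skin(rest_vertices, weights, np.stack([identity, raised_arm]))
print(posed.shape)   # (10, 3)
```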
Combine advanced motion capture techniques with digital influencers like Shudu Gram, and you have disruptive new technology that will change all the media industries.
Hey, Quick Question: Are Computer-Generated Influencers About to Take Over The Beauty Industry?
Welcome to our column, " Hey, Quick Question, " where we investigate seemingly random happenings in the fashion and… (fashionista.com)