
Real-Time Hair Rendering With Deep Opacity Maps | Two Minute Papers #171

December 2, 2019


Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In earlier episodes, we've seen plenty of video footage about hair simulation and rendering, and today we're going to look at a cool new technique that produces self-shadowing effects for hair and fur. In this image pair, you can see the drastic difference that shows how prominent this effect is in the visual appearance of hair. Just look at that. Beautiful.

But computing such a thing is extremely costly. Since we have a dense piece of geometry, for instance hundreds of thousands of hair strands, we have to know how each one occludes the others. This would take hopelessly long to compute. To get a program that executes in a reasonable amount of time, we clearly need to simplify the problem.

An earlier technique takes a few planes that cut the hair volume into layers. These planes are typically regularly spaced outward from the light source, and it is much easier to work with a handful of these volume segments than with the full geometry. The more planes we use, the more layers we obtain, and the higher quality results we can expect. However, with around 16 layers we can render in real time, but the images look unrealistic. Well, of course, we should then crank up the number of layers some more! If we do that, for instance by using 128 layers, we can expect better quality results, but we'll be able to process an image only twice a second, which is far from competitive. And even then, the final results still contain layering artifacts and are not very close to the ground truth.
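To make the layered approach concrete, here is a minimal Python sketch of this earlier planar technique, known as opacity shadow maps. This is not the paper's code or any shipping implementation: it collapses the per-layer 2D shadow maps down to a single depth axis, and all function and variable names are illustrative.

```python
import numpy as np

def opacity_shadow_map(strand_points, strand_alpha, num_layers=16):
    """Slice the hair volume with regularly spaced planes along the
    light direction (here: the z-axis in light space) and accumulate
    opacity per layer. `strand_points` is an (N, 3) array of hair
    samples already transformed into light space; `strand_alpha`
    holds each sample's opacity contribution."""
    z = strand_points[:, 2]
    # Regularly spaced layer boundaries between the nearest and
    # farthest hair point, as seen from the light.
    boundaries = np.linspace(z.min(), z.max(), num_layers + 1)
    # A real implementation splats into a 2D map per layer; this
    # sketch keeps only the depth axis.
    layer_opacity = np.zeros(num_layers)
    idx = np.clip(np.searchsorted(boundaries, z) - 1, 0, num_layers - 1)
    np.add.at(layer_opacity, idx, strand_alpha)
    return boundaries, layer_opacity

def transmittance(depth, boundaries, layer_opacity):
    """Fraction of light reaching `depth`: attenuate by the opacity
    accumulated in every layer fully in front of it."""
    in_front = boundaries[1:] <= depth
    return np.exp(-layer_opacity[in_front].sum())

# Example: 100k hair samples, each blocking a little light.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100_000, 3))
b, op = opacity_shadow_map(pts, np.full(100_000, 1e-4))
print(transmittance(0.0, b, op))  # light reaching the volume's middle
```

In this scheme, doubling the number of layers doubles both the memory and the per-layer rendering cost, which is exactly the scaling problem described above.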
There has to be a better way to do this. And indeed there is: with this new technique, called Deep Opacity Maps, the layers are chosen more wisely. This way, we can achieve higher quality results using only 3 layers, and it runs easily in real time. It is also more memory efficient than previous techniques.

The key idea is that if we look at the hair from the light source's point of view, we can record how far away the different parts of the geometry are from the light source. Then, we can create the new layers further and further away according to this shape. This way, the layers are no longer planar; they adapt to the scene we have at hand and contain significantly more useful occlusion information.
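Here is a minimal sketch of how such depth-following layers could be constructed, assuming we already have a depth map of the hair rendered from the light's point of view. The uniform layer spacing and all names are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

def deep_opacity_layers(depth_map, layer_spacing, num_layers=3):
    """Per-texel layer boundaries that start at the depth where the
    light first hits the hair (read from a depth map rendered from
    the light) and extend outward, so the layers follow the hair's
    shape instead of being flat planes. Returns (H, W, num_layers + 1)."""
    offsets = np.arange(num_layers + 1) * layer_spacing
    return depth_map[..., None] + offsets

def layer_index(depth, texel_boundaries):
    """Which layer a light-space depth falls into, for one texel;
    used when accumulating or looking up occlusion."""
    i = np.searchsorted(texel_boundaries, depth) - 1
    return int(np.clip(i, 0, len(texel_boundaries) - 2))

# Example: a 4x4 depth map; each texel gets 3 layers hugging the
# surface the light sees first.
depth_map = np.full((4, 4), 5.0)
bounds = deep_opacity_layers(depth_map, layer_spacing=0.5)
print(bounds[0, 0])                    # [5.  5.5 6.  6.5]
print(layer_index(5.7, bounds[0, 0]))  # falls into layer 1
```

Because each texel's layers begin exactly where the hair begins, even three layers concentrate their resolution where the occlusion actually changes, which is why so few of them can compete with dozens of planar slices.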
As you can see, this new technique blows all previous methods away and is incredibly simple. I have found an implementation from Philip Rideout; the link to it is available in the video description. If you have found more, let me know and I'll include your findings in the video description for the fellow tinkerers out there. The paper is ample in comparisons, so make sure to have a look at that too.

Sometimes I get messages saying, "Károly, why do you bother covering papers from so many years ago? It doesn't make any sense!" Here you can see that part of the excitement of Two Minute Papers is that the next episode can be about absolutely anything. The series has mostly focused on computer graphics and machine learning papers, but don't forget that we also have an episode on whether we're living in a simulation, another on the Dunning-Kruger effect, and so much more. I've put links to both of them in the video description for your enjoyment. The other reason for covering older papers is that a lot of people don't know about them, and if we can help even a tiny bit to make sure these incredible works see more widespread adoption, we've done our job well.

Thanks for watching and for your generous support, and I'll see you next time!


28 Comments

  • Reply hustlingHassler July 16, 2017 at 4:08 pm

    First comment, fourth like. The viewers are definitely more adult than on other channels 🙂

  • Reply Reinier Vens July 16, 2017 at 4:11 pm

    Isn't this also great for self-shadowing alpha-blended particles?

    Edit: The implementation you linked doesn't really do deep opacity maps. He's raymarching against the density field, which is trivial. If I understand correctly, deep opacity maps are just opacity shadow maps plus that depth offset optimization. I don't know if that implementation can even be called shadow mapping.

  • Reply Leon Zinkleiche July 16, 2017 at 4:13 pm

Love the use of the teapot

  • Reply The rig July 16, 2017 at 4:17 pm

I love this channel. Research says that it takes an average of 12.5 years for journal-published ideas to reach pilot or commercial scale. With this channel, that path can be accelerated. So I don't see a problem covering older papers; we must start from somewhere. Kudos @3-minute papers

  • Reply Zhorky July 16, 2017 at 4:24 pm

I didn't know about this until now, and I think it's often not used in game development (I'm a dev)… Great to know!

  • Reply cr9pr3 July 16, 2017 at 4:54 pm

Understanding state-of-the-art techniques requires knowledge of how stuff has been done in the past.
If there is a very important paper in a field, you should probably know about it so you can reason a bit better about new ones that might be influenced by it.
    I love the way you do your show 🙂

  • Reply TH July 16, 2017 at 4:56 pm

    lol dat hairy teapot

  • Reply h0lyRS July 16, 2017 at 5:39 pm

    This is easily one of my favorite channels on YouTube

  • Reply iLikeTheUDK July 16, 2017 at 6:13 pm

Hasn't this technique already been used by Pixar for hair and fur in offline rendering since 2000?

  • Reply eerereps July 16, 2017 at 6:47 pm

Doesn't matter how old the papers are. Most of us usually don't have time to read all the papers in the world; that's why we subscribed, so you read them for us and break them down in a 3-minute video 🙂 You have NO idea how much time you are saving us! Keep them coming, the new and the old!

  • Reply Ryan Roberson July 16, 2017 at 6:48 pm

Maybe make episode 200 the last Two Minute Papers episode and switch to Three Minute Papers, since you tend to go longer these days, and less of a time constraint might do you better.

  • Reply F. S. July 16, 2017 at 8:06 pm

    The teapot is like "duuude!"

  • Reply GaborBartal July 16, 2017 at 8:15 pm

Ah yes, the "seen it before" comment effect is incomprehensible to me; even if something existed before, not everyone could have seen it.

  • Reply foobargorch July 16, 2017 at 9:28 pm

    the famous teapot made me chuckle 🙂

  • Reply user73o1u 81716 July 16, 2017 at 11:49 pm

Awesome, thanks. Good explanations for covering older papers. Though I still sort of yawned and crave the new ones 🙁

  • Reply E Borge July 17, 2017 at 12:13 am

    Could you use this to simulate skin subsurface scattering since skin has multiple layers?

  • Reply Chikato 710 July 17, 2017 at 5:43 am

Another reason to bring up old papers is that the techniques may have only applied to offline, pre-rendered renderers before, and may now apply to real-time environments. The shader model may have finally caught up. With these techniques layered on top of one another, they are creating new levels of graphics. Many techniques that were originally developed for film are now real-time.

  • Reply bumsahoy July 17, 2017 at 11:55 pm

    @2:00 oh that's a covfefe

  • Reply José Neto July 18, 2017 at 1:54 pm

Where do you think simulation research (fluids, fracture, etc.) is going now? It seems we have already achieved great visual results using the hardware's full capability.

  • Reply Kevin Comerford July 18, 2017 at 9:16 pm

    Holy moly. This is great. I'm going to look deeper into this and I might be implementing this into our next game.

  • Reply Smaakjeks K July 21, 2017 at 2:31 pm

    Any video about the Dunning-Kruger effect I consider a PSA.

  • Reply Yves Gomes July 23, 2017 at 11:37 pm

Amazing! I think Batman: Arkham Knight used something like this, but it looked a bit weird, as if the layers were too translucent. I'm far from sure if this (layering) was the actual technique they used, though.

  • Reply Christopher July 27, 2017 at 12:45 am

What is old can be new again; am I not correct? If Károly covers older papers, so be it.

  • Reply Rubi Wiliams August 11, 2017 at 3:43 am

    https://www.youtube.com/watch?v=9Oo0TlprwAQ

  • Reply blakegriplingph January 12, 2018 at 9:40 am

    Has this been implemented in a retail game already? My beef with most games is that grass and other foliage, besides hair, tend to look rather flat.

  • Reply Glicher 3 January 23, 2018 at 8:48 am

Great channel, GG 🙂

  • Reply Almarma March 20, 2018 at 11:14 pm

This channel alone is now more interesting than TED talks! Thank you!

  • Reply Raul Diaz December 29, 2018 at 3:26 am

I'm going to integrate LuxRender into my pipeline if you've implemented this in your renderer.
