Thursday, December 8, 2011

Volume Rendering

I'm playing around with volume rendering for my final project.  The first step was to make some simplifying assumptions; for example, I'm only going to allow volumetric data to be rendered inside a transparent sphere.

I started by just implementing Beer's Law using marched rays that sample the constants of absorption of the material.  The result was supposed to look identical to the way I implemented Beer's Law in the first place, i.e. using the single path distance across the object.  This worked, and my images looked the same no matter how small I made my parameter step size.  Here's an image of a transparent sphere with a refractive index of 1, with rays marched across the inside of the sphere, attenuating by the constants of absorption.




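Just to spell out the sanity check: with a constant absorption coefficient, the optical depth accumulated by the march should match the single-path Beer's Law value exp(-sigma * d) over the same chord, no matter the step size.  A tiny self-contained sketch of that comparison (not my project code, and the numbers are made up):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Marched transmittance: accumulate absorption in steps of size dt along the chord.
    double marchedTransmittance(double chordLength, double sigmaA, double dt) {
        double opticalDepth = 0.0;
        for (double t = 0.0; t < chordLength; t += dt) {
            double step = std::min(dt, chordLength - t);   // don't overshoot the exit point
            opticalDepth += sigmaA * step;                 // constant absorption here
        }
        return std::exp(-opticalDepth);
    }

    int main() {
        const double chord  = 1.7;   // made-up distance the ray travels inside the sphere
        const double sigmaA = 2.0;   // made-up constant of absorption
        const double closedForm = std::exp(-sigmaA * chord);   // single-path Beer's Law

        const double stepSizes[] = { 0.1, 0.01, 0.001 };
        for (double dt : stepSizes) {
            std::printf("dt = %.3f   marched = %.6f   closed form = %.6f\n",
                        dt, marchedTransmittance(chord, sigmaA, dt), closedForm);
        }
        return 0;
    }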
The next step was to start visualizing a vector field inside the sphere, rather than just the constants of absorption.  Happily, I was given a library for generating 3D Perlin noise, so a vector field to visualize was quickly within reach, and I could produce the image below:




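Roughly, the idea looks like this.  It's a simplified sketch rather than my actual code: perlin3() just stands in for the library's noise lookup, and the three offset lookups are an arbitrary way of turning scalar noise into a vector that can scale the per-channel constants of absorption.

    #include <cmath>

    // perlin3() stands in for the 3D Perlin noise lookup from the library I was given.
    double perlin3(double x, double y, double z);   // hypothetical, returns roughly [-1, 1]

    struct Vec3 { double x, y, z; };

    // Build a "vector" of noise at p by sampling three offset copies of the noise field,
    // remapped from [-1, 1] to [0, 1] so the components can scale the absorption constants.
    Vec3 noiseField(const Vec3& p) {
        Vec3 n;
        n.x = 0.5 * (perlin3(p.x,        p.y,        p.z)        + 1.0);
        n.y = 0.5 * (perlin3(p.x + 17.0, p.y + 31.0, p.z + 47.0) + 1.0);
        n.z = 0.5 * (perlin3(p.x + 59.0, p.y + 71.0, p.z + 89.0) + 1.0);
        return n;
    }

    // Per-channel marched transmittance: each color channel accumulates its own optical
    // depth, with the absorption constant scaled by the matching noise component.
    Vec3 transmittanceRGB(const Vec3& entry, const Vec3& dir, double chordLength,
                          const Vec3& sigmaA, double dt) {
        Vec3 depth = { 0.0, 0.0, 0.0 };
        for (double t = 0.0; t < chordLength; t += dt) {
            Vec3 p = { entry.x + dir.x * t, entry.y + dir.y * t, entry.z + dir.z * t };
            Vec3 n = noiseField(p);
            depth.x += sigmaA.x * n.x * dt;
            depth.y += sigmaA.y * n.y * dt;
            depth.z += sigmaA.z * n.z * dt;
        }
        Vec3 T = { std::exp(-depth.x), std::exp(-depth.y), std::exp(-depth.z) };
        return T;
    }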
The still frame of that looks pretty neat, kind of swirly like oily smoke or something.  So I added a time parameter that lets the noise drift directionally, creating a swirling effect that could be captured in a video.




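The drift itself is nothing fancy.  Reusing the Vec3 and perlin3() stand-ins from the sketch above, it's just a noise lookup offset along a fixed "wind" direction by an amount that grows with time:

    const Vec3 drift = { 0.4, 0.1, 0.0 };   // arbitrary wind velocity

    // Sample the noise at a point shifted by the drift, so the whole field appears to
    // blow past the sphere as the frames advance.
    double driftedNoise(const Vec3& p, double time) {
        return perlin3(p.x + drift.x * time,
                       p.y + drift.y * time,
                       p.z + drift.z * time);
    }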
I also did a white one of these things, above, and it was, well, white, but essentially the same thing.  Then, as I was working on a simple and inefficient direct lighting scheme, I discovered a bug in the code that was sampling the noise field: I had been treating all points within the sphere as if they were on the surface for the purposes of sampling.  When I fixed the bug and began sampling the noise field the way I had intended, I saw something that looked more like what I had been expecting:




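For the record, the bug and the fix boiled down to something like this (again a sketch with the stand-ins from above, not the real code; the sphere is centered at the origin with radius 1):

    const double radius = 1.0;   // the sphere sits at the origin

    Vec3 sampleField(const Vec3& p) {
        // Buggy version (roughly what I had): project p out onto the sphere's surface
        // before the lookup, so every march step along a ray saw surface values only.
        // double len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        // Vec3 onSurface = { radius * p.x / len, radius * p.y / len, radius * p.z / len };
        // return noiseField(onSurface);

        // Fixed version: sample the noise at the actual interior point along the ray.
        return noiseField(p);
    }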
Now it's easier to see that before I was really just getting a lot of highlights on the surface, whereas now I can see some depth in the stuff inside the sphere.  I made a video of that too.  It looks like the wind is blowing pretty hard in there!




Boy, these videos really don't look very good compared to the ppm images they came from; I wonder if there are some ffmpeg settings I need to master.  Just to see how it looked, I tried using 25 rays per pixel instead of only one, hoping that less aliasing in the individual frames might make for a better looking final video.  Here's the same video (except about half as long) rendered with 25 rays per pixel:




Well, I guess that's a bit better.  I should still look into getting the best quality when I'm using ffmpeg.
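In case it's useful, the 25 rays per pixel mentioned above amount to a jittered 5x5 grid of sub-pixel samples averaged into one color.  Color and traceRay() below are just stand-ins for whatever the renderer actually uses:

    #include <cstdlib>

    struct Color { double r, g, b; };
    Color traceRay(double px, double py);   // hypothetical: trace one ray through (px, py)

    Color renderPixel(int x, int y) {
        const int grid = 5;                 // 5 x 5 = 25 rays per pixel
        Color sum = { 0.0, 0.0, 0.0 };
        for (int j = 0; j < grid; ++j) {
            for (int i = 0; i < grid; ++i) {
                // Jitter each sample within its own cell of the pixel.
                double jx = (i + std::rand() / (RAND_MAX + 1.0)) / grid;
                double jy = (j + std::rand() / (RAND_MAX + 1.0)) / grid;
                Color c = traceRay(x + jx, y + jy);
                sum.r += c.r;  sum.g += c.g;  sum.b += c.b;
            }
        }
        const double inv = 1.0 / (grid * grid);
        Color avg = { sum.r * inv, sum.g * inv, sum.b * inv };
        return avg;
    }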

So I also tried my hand at some rudimentary direct lighting.  To begin with, I just wanted the simplest O(N^2) algorithm: from every sampling point along the refracted ray I march through the sphere, I march rays in the direction of each point light in my scene until I hit the edge of the sphere.  Here's an image of that, where the parameter step size is 0.05 (inside a sphere of radius 1):



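In sketch form, that direct-lighting pass looks roughly like the following.  density() and distanceToExit() are assumed helpers rather than my real functions, and I'm not distinguishing scattering from absorption here:

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };          // same minimal vector type as above
    struct Light { Vec3 pos; double intensity; };

    // Assumed stand-ins: density() is the noise-modulated density at a point, and
    // distanceToExit() is the distance from p along unit direction d to where the
    // ray leaves the sphere.
    double density(const Vec3& p);
    double distanceToExit(const Vec3& p, const Vec3& d);

    double directLighting(const Vec3& entry, const Vec3& dir, double chordLength,
                          const std::vector<Light>& lights, double sigmaA, double dt) {
        double radiance = 0.0;
        double eyeDepth = 0.0;   // optical depth from the entry point to the sample
        for (double t = 0.0; t < chordLength; t += dt) {
            Vec3 p = { entry.x + dir.x * t, entry.y + dir.y * t, entry.z + dir.z * t };
            double d = density(p);
            eyeDepth += sigmaA * d * dt;

            for (size_t i = 0; i < lights.size(); ++i) {
                // Unit vector from the sample point toward this light.
                Vec3 L = { lights[i].pos.x - p.x, lights[i].pos.y - p.y, lights[i].pos.z - p.z };
                double len = std::sqrt(L.x * L.x + L.y * L.y + L.z * L.z);
                L.x /= len;  L.y /= len;  L.z /= len;

                // Second march: from p toward the light until the ray exits the sphere.
                double shadowDepth = 0.0;
                double exitDist = distanceToExit(p, L);
                for (double s = 0.0; s < exitDist; s += dt) {
                    Vec3 q = { p.x + L.x * s, p.y + L.y * s, p.z + L.z * s };
                    shadowDepth += sigmaA * density(q) * dt;
                }

                // Beer's Law attenuation on both legs: eye -> sample and sample -> light.
                radiance += lights[i].intensity * d * std::exp(-(eyeDepth + shadowDepth)) * dt;
            }
        }
        return radiance;
    }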
So I tried changing my parameter step size down to 0.01 (again, the radius of the sphere is 1).  Multiplying the number of steps by 5 definitely resulted in a factor of nearly 25 increase in running time, which makes sense: both the primary march and each of the marches toward the lights take 5 times as many steps, so the work scales with the product.  The resulting image was disappointing, though.  All I did was decrease the step size, and I was expecting to get better resolution, but instead I got this:




This is a bummer, because it looks like I might be handling the transparency incorrectly, and I'm convinced that shouldn't be the case!

Here's a short clip of the lower resolution fog; I feel like seeing it animated gives you a better sense of what you're looking at.




