Friday, December 16, 2011

Final Images and Animations

For my final scene, I'm going to have the camera follow the same path as in the last video from the previous post.  The only real difference is that this video will demonstrate direct lighting of the smoke volume.  This is a really cool effect, and I already love tinkering with it and watching the results.


One of the main takeaways for me at this point is the drastically increased rendering time that comes with doing direct lighting on the volume.  My algorithm for lighting the volume data is O(N^2) in the number of steps taken while marching through the volume.  Most of the images I rendered for this post had to be done with no anti-aliasing and slightly larger step sizes, just so I could finish more than a couple of them in time for the post.  Next, I'll document an example of how much longer it now takes to render a frame.

The screenshot below is a single frame I rendered at 512 x 512 pixels, with 25 rays per pixel.  I also decreased the marching step size from 0.05 to 0.03 (the sphere diameter is 1.4 in the same units).  This image took about an hour and a half to render.  In this image, most of the rays that go out have to march through the volume, so you really get the flavor of the O(N^2) performance.


Here's a quickly rendered video of the rising smoke effect.  I think it's because of the larger step size and the single ray per pixel that you see those floating, flipping artifacts.


Below are several images I captured while playing around with the absorption and out-scatter coefficients of the volume data as well as the colors of the four lights in my scene.










Final Animation

My final video is rendered with 25 rays per pixel, and the resolution is 1024 x 1024 pixels.  I rendered 1001 frames in all, and the frames took anywhere from less than a minute to over 14 minutes each, depending on how many rays had to travel through the smoke volume.  Still, I rendered all the frames for the final video in just about 2 hours!  This magic was made possible by using all four cores on each of 35 new computers in a brand-new computer science classroom/lab on the UNM campus for about an hour straight.  If I hadn't been able to render 140 frames at a time, the video simply wouldn't have been possible.


Here's a link to the full-resolution video.  Be warned though, it's almost 25MB.

Full resolution video: video

This video is a "Where's Waldo" of items I didn't have time to complete.  Most people will notice that the smoke volume casts the wrong kind of shadow on the ground.  Also, when I calculate the color of the volume for a reflection, I'm not marching through it; I'm just treating it like a Blinn-Phong shaded object, and the default surface colors on my volume are all set to black.  You can see this about halfway through the video.

In spite of these issues, I'm pretty pleased with how the smoke and my green dielectric sphere look, and with how nice the video looks in general.

Cheers.

Tuesday, December 13, 2011

Toward the Final Project

I'm going to make an animation for the final project that includes a lot of the fun or tricky things from the semester, along with the volume rendering I was working on in the last post.  Toward that end, I have lots to do: script a smooth path through the scene; synchronize the movements and animations of all the moving parts; fix the script that should let all the machines in the CS lab work simultaneously on different frames of my scene; fix the bugs that were keeping me from properly texturing a sphere with the map of the earth; kill a newly introduced bug where all my spheres are filled with smoke (oops!); and so on.

Here's the first step: getting my rotating, textured sphere back in the mix.  I haven't had it available since my raytracer rewrite (which is already really suffering from sophomore syndrome, by the way).  So I brought it back and fixed the texture coordinates.  See below.
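For reference, here's a minimal sketch of the kind of spherical texture lookup I mean, assuming a unit sphere centered at the origin with its poles on the y axis.  This isn't necessarily the exact mapping my raytracer uses, and the rot parameter is just an illustrative way to spin the texture over time.

#include <cmath>

struct Vec3 { double x, y, z; };

const double PI = 3.14159265358979323846;

// Map a point p on the unit sphere to texture coordinates (u, v),
// adding a rotation angle so the texture can spin with the sphere.
void sphereUV(const Vec3& p, double rot, double& u, double& v) {
    double theta = std::acos(p.y);              // polar angle from +y
    double phi   = std::atan2(p.z, p.x) + rot;  // azimuth, plus the spin
    u = phi / (2.0 * PI);
    u -= std::floor(u);                         // wrap into [0, 1)
    v = theta / PI;                             // [0, 1]
}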


Then it was time to parameterize a plane curve for my raytracing camera to follow.  I chose to move it along the y axis while its x position oscillates sinusoidally (there's a sketch of what I mean just below).  Check out the video, with direct lighting of the volume turned off.
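Here's that sketch; the amplitude, cycle count, and z offset are made-up illustrative values rather than my actual numbers.

#include <cmath>

struct Vec3 { double x, y, z; };

// Eye position as a function of a parameter t that runs from 0 to 1
// over the animation: a steady sweep along y with a sinusoidal x.
Vec3 cameraPosition(double t) {
    const double PI = 3.14159265358979323846;
    double amplitude = 3.0;   // assumed swing in x
    double cycles    = 2.0;   // assumed number of oscillations
    Vec3 eye;
    eye.x = amplitude * std::sin(2.0 * PI * cycles * t);
    eye.y = -5.0 + 10.0 * t;  // assumed sweep along the y axis
    eye.z = 8.0;              // assumed fixed distance from the scene
    return eye;
}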


Same scene below, but now with the direct lighting turned on.  Kind of interesting, but wrong.


Now I took some time to set up a little scene and include some of the recent features of my raytracer, like texturing, dielectrics and volume rendering.  Here's a trip around the scene.


I want the final scene rendered at as high a quality as possible: I'd like the direct lighting of my volume to work properly, and I'm hoping to get a good-quality video at a resolution of about 1024 x 768.  If I could fix the shadows cast by the transparent and smoky spheres so they look correct, that would be great too.  I'll probably have to forget (for now) about putting the whole scene on something that looks like a park bench and lighting it with a light probe map.

Here's a really large (~25MB) file you can download if you want to see a nicer looking version of the round trip around the scene.  It's 1024 x 1024 pixels, and I used 25 rays per pixel to anti-alias it, which still probably wasn't enough for that resolution.




More Volume Rendering

This weekend I worked on making the volume data (generated by 3D Perlin noise) look smoother, trying to light it directly using the most basic O(N^2) approach, and parameterizing the activities that I'll do for my final animation.

Direct lighting of the volume data has given me the most trouble this weekend.  I've seen total light blowout, a white shell around my volumetric sphere, lighting that appears to be stuck to the outside of the sphere, and strange little color explosions that shift around with the time parameter; I'm just not having any luck getting the direct lighting working correctly.  There are several images and videos below illustrating the various weird effects when direct lighting is turned on.

Below, the noise is tweaked to keep it from filling the sphere by applying a step-function "cut-off" to the perturbed position.  The result looks something like lava lamp blobs.
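My cut-off amounts to something like the sketch below: zero out the noise wherever the perturbed position gets too long, so the smoke clumps into blobs instead of filling the sphere.  The 0.3 threshold matches the value I mention further down; treat it as illustrative here.

#include <cmath>

struct Vec3 { double x, y, z; };

double lengthOf(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Step-function cut-off: keep the noise only where the perturbed
// position stays within the threshold radius.
double cutoffDensity(const Vec3& perturbedPos, double noiseVal) {
    return (lengthOf(perturbedPos) > 0.3) ? 0.0 : noiseVal;
}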


Below I've turned down the density scalar on the resulting noise.


By giving the sphere containing the volume data the same index of refraction as air, the containing sphere sort of disappears.  I say "sort of" because when it's animated, you can still make out the spherical shape of the container.


Changing the RGB absorption coefficients changes the color of the smoke.


Now I'm trying to do direct lighting of the volume using, roughly, the following algorithm:

At each little step along each marched ray through the volume, I send out a "shadow" ray to each of the point lights.  The light refracted through the volume (attenuated along the primary ray) is, at each step, added to by the sum of the attenuated contributions from the shadow rays.  That sum of refracted light plus attenuated shadow-ray light is then attenuated back along the primary ray toward the eye.
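Here's a rough sketch of that march, not my literal code.  The outer loop walks the primary ray; the inner loops walk a shadow ray toward each point light, which is what makes the whole thing O(N^2) in the step count.  densityAt() and distanceToEdge() stand in for my noise lookup and my ray/sphere exit computation, and sigma lumps the absorption and out-scatter coefficients into a single number.

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
double lengthOf(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

double densityAt(const Vec3& p);        // noise-density lookup (assumed)
double distanceToEdge(Vec3 p, Vec3 d);  // distance to sphere's edge (assumed)

double marchWithDirectLighting(Vec3 origin, Vec3 dir, double exitDist,
                               const std::vector<Vec3>& lightPositions,
                               double step, double sigma) {
    double transmittance = 1.0;  // attenuation back toward the eye
    double radiance = 0.0;
    for (double t = 0.0; t < exitDist; t += step) {
        Vec3 p = origin + t * dir;
        double d = densityAt(p);
        for (const Vec3& lightPos : lightPositions) {
            Vec3 toLight = lightPos - p;
            Vec3 L = (1.0 / lengthOf(toLight)) * toLight;
            double lightT = 1.0;  // attenuate the light into the volume
            double edge = distanceToEdge(p, L);
            for (double s = 0.0; s < edge; s += step)
                lightT *= std::exp(-sigma * densityAt(p + s * L) * step);
            radiance += transmittance * d * lightT * step;  // in-scatter
        }
        transmittance *= std::exp(-sigma * d * step);  // absorb/out-scatter
    }
    return radiance;
}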

Here's a screenshot showing way too much light, apparently scattered evenly throughout the volume.  Where the density is really low (wherever the black cloud isn't), there are few particles to scatter the available direct light, so the light should keep going until it hits an area of greater density, where it should be absorbed and scattered in proportion to the density.


This image makes it look like I have a white shell around a volume that still appears unlit.


Changing the density of the noise inside the volume makes the cloud disappear behind the white shell.


Here I turned up the noise density and turned off the direct lighting.


Now I have positioned the eye inside the volume.  You can see artifacts I think are related to the white shell you see from the outside.


I've started using the positional density of the noise along with the angle between the view direction and each light direction.  I've noticed various mentions that local density gradients can be used to approximate normals for Blinn-Phong-style shading of the volume, so perhaps I'll need to come up with an efficient way to compute or keep track of those.  Below, I'm using density and the angle between the view and light directions to scale the amount of light attenuated at each step.  It has created white blobs in my cloud, but the shell is still all white.
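The gradient idea would look something like this central-difference sketch, where densityAt() is again the assumed noise-density lookup and eps is a tuning knob.  The negated, normalized gradient can stand in for a surface normal in Blinn-Phong-style shading.

#include <cmath>

struct Vec3 { double x, y, z; };

double densityAt(const Vec3& p);  // the volume's density field (assumed)

// Approximate a shading normal from the local density gradient.
Vec3 densityGradientNormal(const Vec3& p, double eps) {
    Vec3 g = {
        densityAt({p.x + eps, p.y, p.z}) - densityAt({p.x - eps, p.y, p.z}),
        densityAt({p.x, p.y + eps, p.z}) - densityAt({p.x, p.y - eps, p.z}),
        densityAt({p.x, p.y, p.z + eps}) - densityAt({p.x, p.y, p.z - eps})
    };
    double len = std::sqrt(g.x*g.x + g.y*g.y + g.z*g.z);
    if (len < 1e-12) return Vec3{0.0, 0.0, 0.0};  // flat region: no normal
    return Vec3{-g.x / len, -g.y / len, -g.z / len};
}

Note that this costs six extra density lookups per step, which is exactly why I'd want an efficient way to track the gradients.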


Aha!  I've finally discovered the source of the white shell.  I had divided the distance from the point of interest to the edge of the sphere (in the direction of a light) into evenly sized bits plus a remainder.  Then I started at the edge of the sphere (well, just short of it, actually) and worked back toward the point of interest, attenuating the light along the way.  By taking care of the remainder and attenuating from the very edge of the sphere, I made the white shell go away.
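The bookkeeping of the fix looks roughly like this; it uses a constant sigma instead of sampling density at each step, just to show where the remainder goes.

#include <cmath>

// Attenuate a light's contribution from the sphere's edge back to the
// point of interest, making sure the leftover partial step at the edge
// (the bit I was dropping) gets attenuated too.
double attenuateTowardLight(double edgeDist, double step, double sigma) {
    int    n         = (int)(edgeDist / step);
    double remainder = edgeDist - n * step;           // the leftover sliver
    double lightT    = std::exp(-sigma * remainder);  // start at the very edge
    for (int i = 0; i < n; ++i)
        lightT *= std::exp(-sigma * step);            // then the even-sized bits
    return lightT;
}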

Unfortunately, the volume still doesn't look properly lit.  It's got a few white blobs near the edge of the sphere, but the rest just looks like the color prescribed by the absorption coefficients.


Here's another look at the same problem.


I was using a step-function to cut off the noise based on the length of the perturbed position.  This was an attempt to keep the noise from just filling the sphere with what appeared to be a slightly more homogeneous mixture of smoke and air, which was kind of boring.

So to blur the smoke a little more, I changed that step-function cut-off to scale the noise using a negative sigmoidal curve.  If the length of the perturbed position is greater than 0.3, I use something akin to:  1 / (1 + exp((<perturbed_pos> - 0.3) * 12)).  Subtracting 0.3 shifts what is normally a zero-centered sigmoid to the region where you want the cutoff.  The 12 is a scalar that affects how steep the sigmoid is; the higher it is, the closer it gets to a step function.  I picked 12 by trial and error.
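In code, the fall-off is just a shifted, sharpened logistic:

#include <cmath>

// Scale factor for the noise: near 1 inside the 0.3 cut-off, falling
// smoothly toward 0 beyond it.  Larger steepness values push this back
// toward the old step function.
double sigmoidFalloff(double perturbedLen) {
    return 1.0 / (1.0 + std::exp((perturbedLen - 0.3) * 12.0));
}

// Usage: density = noiseVal * sigmoidFalloff(lengthOf(perturbedPos));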

Below is the result, with direct lighting turned off.


Now with it turned on.


I monkeyed around with the RGB-dependent scatter and absorption coefficients, the density of the noise, and lots of other things, generating a lot of messy pictures.











I started making animations by using a time value to parameterize the way things move or rotate in the scene.  One of the first things I learned about ffmpeg is that even if I generate anti-aliased frames, the video quality can still be really crappy.  Here's an example.


That happened when I did it like this:

ffmpeg -i screen%04d.ppm video.mpg

Then I saw a tip that helped, and now I get somewhat better results with this:

ffmpeg -i screen_%04d.ppm -vcodec mpeg4 -b 4800k video.avi

As you can see below, it's a little better, but still not great.


So off I go, using the machines in the new CS lab to render little movies, all night long.

Here's a close-up movie of my first attempt at direct lighting, back when the noise was more homogeneous.  It really does look kind of like the lights are interacting with the smoke, though it's hard to tell because it's so noisy.


Here's something with nicer looking noise, and with the direct lighting turned off.


Now here it is with direct lighting:


I started scaling the attenuated light at each step along the shadow rays by the local density and the angle between the view and light vectors.  That sort of made it look like the direct lighting affects the outside of the sphere, while the absorption proceeds unlit on the inside:


Thursday, December 8, 2011

Volume Rendering

I'm playing around with volume rendering for my final project.  The first simple step was to make some assumptions that simplify the problem; for example, I'm only going to allow volumetric data to be rendered inside a transparent sphere.

I started by just implementing Beer's Law using marched rays that sample the material's constants of absorption.  The result was supposed to look identical to the way I implemented Beer's Law in the first place, i.e. using the single path distance across the object.  This worked, and my images looked the same no matter how small I made my parameter step size.  Here's an image of a transparent sphere with a refractive index of 1, with rays marched across the inside of the sphere, attenuating by the constants of absorption.




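As a sanity check on the marched Beer's Law above: for a constant absorption coefficient, multiplying exp(-sigma * dt) over all the little steps is the same as exp(-sigma * pathLen), as long as the last partial step is handled.  A minimal sketch:

#include <algorithm>
#include <cmath>

// Single-path-distance version of Beer's Law.
double beerAnalytic(double sigma, double pathLen) {
    return std::exp(-sigma * pathLen);
}

// Marched version: should agree with the analytic one for any step size.
double beerMarched(double sigma, double pathLen, double step) {
    double transmittance = 1.0;
    for (double t = 0.0; t < pathLen; t += step) {
        double dt = std::min(step, pathLen - t);  // handle the last partial step
        transmittance *= std::exp(-sigma * dt);
    }
    return transmittance;
}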
The next step was to start visualizing a vector field inside the sphere, rather than just the constants of absorption.  Happily, I was given a library for generating 3D Perlin noise, so generating a vector field to visualize was quickly within reach, and I could generate the image below:




The still frame of that looks pretty neat, kind of swirly, like oily smoke or something.  So I created a time parameter that lets the noise drift directionally, creating a swirling effect that can be captured in a video.
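How I imagine the drift working, as a sketch: offset the noise lookup by the time parameter times a wind direction, so the field appears to flow.  Here perlin3() stands in for the noise library I was given (its signature is an assumption), and the wind vector is illustrative.

struct Vec3 { double x, y, z; };

double perlin3(double x, double y, double z);  // from the noise library

// Sample the density at point p and time t by sliding the noise field
// along an assumed wind direction.
double smokeDensity(const Vec3& p, double t) {
    Vec3 wind = {0.0, 0.4, 0.0};  // assumed upward drift
    return perlin3(p.x + t * wind.x, p.y + t * wind.y, p.z + t * wind.z);
}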




I did a white one of these things, above, and it was, well, white, but essentially the same thing.  Then, as I was working on a simple and inefficient direct lighting scheme, I discovered a bug in the code that was sampling the noise field: I had been treating all points within the sphere as if they were on the surface for the purposes of sampling.  When I fixed the bug and began sampling the noise field the way I had intended, I saw something that looked more like what I had been expecting:




Now it's easier to see that before, I was really just getting a lot of highlights on the surface; now I'm getting something with visible depth to the stuff inside the sphere.  I made a video of that too.  It looks like the wind is blowing pretty hard in there!




Boy, these videos really don't look very good compared to the ppm images they came from.  I wonder if there are some ffmpeg settings I need to master?  Just to see how it looked, I tried using 25 rays per pixel instead of only one, hoping that less aliasing in the individual frames might make a better-looking final video.  Here's the same video (except about half as long) rendered with 25 rays per pixel:




Well, I guess that's a bit better.  I should still look into getting the best quality when I'm using ffmpeg.
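For what it's worth, here's roughly how I'm spending those 25 rays per pixel; I'm assuming a 5 x 5 jittered grid here, and traceRay() plus the Color type are stand-ins for whatever the raytracer actually does.

#include <cstdlib>

struct Color { double r, g, b; };

Color traceRay(double u, double v);  // shoot one primary ray (assumed)

double jitter() { return std::rand() / (double)RAND_MAX; }

// Average 25 jittered samples spread over the pixel's area.
Color renderPixel(int px, int py) {
    Color sum = {0.0, 0.0, 0.0};
    for (int i = 0; i < 5; ++i)
        for (int j = 0; j < 5; ++j) {
            Color c = traceRay(px + (i + jitter()) / 5.0,
                               py + (j + jitter()) / 5.0);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
    sum.r /= 25.0; sum.g /= 25.0; sum.b /= 25.0;
    return sum;
}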

So I also tried my hand at some rudimentary direct lighting.  To begin with, I just wanted the simplest O(N^2) algorithm: from every sampling point along a refracted ray marched through the sphere, I march rays in the direction of each point light in my scene until I hit the edge of the sphere.  Here's an image of that, where the parameter step size is 0.05 (inside a sphere of radius 1):



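The "march until I hit the edge of the sphere" piece boils down to a little geometry: the distance from a point p inside a radius-1 sphere at the origin to the surface, along a unit direction d, comes from solving |p + t*d| = 1 for the positive root.  A sketch, assuming that setup:

#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Distance along unit direction d from interior point p to the surface
// of the unit sphere; the discriminant is non-negative for p inside.
double distanceToEdge(const Vec3& p, const Vec3& d) {
    double b = dot(p, d);
    double c = dot(p, p) - 1.0;        // radius 1, as in the text
    return -b + std::sqrt(b * b - c);
}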
So I tried turning my parameter step size down to 0.01 (again, the radius of the sphere is 1).  Multiplying the number of steps by 5 definitely resulted in a factor of nearly 25 increase in running time, just as the O(N^2) analysis predicts, but the resulting image was disappointing.  All I did was decrease the step size, and I was expecting better resolution; instead I got this:




This is a bummer because it looks like I might be handling the transparency incorrectly, and I'm convinced it should not be so!

Here's a short clip of the lower-resolution fog; I think seeing it animated gives you a better feel for what you're looking at.