Friday, December 16, 2011

Final Images and Animations

For my final scene, I'm going to have the camera follow the same path as in the last video of the previous post.  The only real difference is that this video will demonstrate the direct lighting of the smoke volume.  It's a really cool effect, and I already love tinkering with it and seeing the results.


One of the main takeaways for me at this point is the drastically increased rendering time that direct lighting of the volume incurs.  My algorithm for lighting the volume data is O(N^2) in the number of steps taken while marching through the volume: each of the N steps along a primary ray spawns shadow-ray marches that are themselves O(N) steps long.  Most of the images I rendered for this post had to be done with no anti-aliasing and slightly larger step sizes just so I could finish more than a couple of them in time for the post.  Next, I'll document an example of how much longer it now takes to render a frame.

The screenshot below is a single frame rendered at 512 x 512 pixels with 25 rays per pixel.  I also decreased the marching step size from 0.05 to 0.03 (the sphere diameter is 1.4 in the same units).  This image took about an hour and a half to render.  Most of the rays in this scene have to march through the volume, so you really get a taste of the O(N^2) performance.


Here's a quickly rendered video of the rising smoke effect.  I think you see those floating, flipping artifacts because of the larger step size and the single ray per pixel.


Below are several images I captured while playing around with the absorption and out-scatter coefficients of the volume data as well as the colors of the four lights in my scene.










Final Animation

My final video is rendered with 25 rays per pixel at a resolution of 1024 x 1024 pixels.  I rendered 1001 frames in all, and individual frames took anywhere from less than a minute to over 14 minutes, depending on how many rays had to travel through the smoke volume.  Still, I rendered all the frames for the final video in just about 2 hours!  This magic was made possible by using all four cores on each of 35 new computers in a brand-new computer science classroom/lab on the UNM campus for about an hour straight.  If I hadn't been able to render 140 frames at a time, the video simply wouldn't have been possible.


Here's a link to the full-resolution video.  Be warned though, it's almost 25MB.

Full resolution video: video

This video is a "Where's Waldo" of items I didn't have time to complete.  Most people will notice that the smoke volume casts the wrong kind of shadow on the ground.  Also, when I calculate the color of the volume for a reflection, I don't march through it; I just treat it like a Blinn-Phong shaded object, and the default surface colors on my volume are all set to black.  You can see this about half-way through the video.

In spite of these issues, I'm pretty pleased with how the smoke and the green dielectric sphere turned out, and with how nice the video looks in general.

Cheers.

Tuesday, December 13, 2011

Toward the Final Project

I'm going to make an animation for the final project that includes a lot of the fun or tricky things from the semester, along with the volume rendering I was working on in the last post.  Toward that end, I have lots to do: scripting a smooth path through the scene; synchronizing the movements and animations of all the moving parts; fixing the script that should let all the machines in the CS lab work simultaneously on different frames of my scene; fixing the bugs that kept me from properly texturing a sphere with the map of the earth; killing a newly introduced bug where all my spheres get filled with smoke (oops!); etc.

Here's the first step: getting my rotating, textured sphere back in the mix.  I haven't had it available since my raytracer rewrite (which is already really suffering from sophomore syndrome, by the way).  So I brought it back and fixed the texture coordinates.  See below.


Then it was time to parameterize a plane curve for my raytracing camera to follow.  I chose to move it along the y axis while its x position oscillates sinusoidally.
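In case the parameterization is useful, here's roughly its shape (a sketch with made-up constants and a minimal Vec3, not my exact code):

#include <cmath>

struct Vec3 { double x, y, z; };

// Sketch of the path: the camera slides along the y axis while its x
// position oscillates sinusoidally.  't' runs from 0 to 1 over the
// animation; all the constants here are made up for illustration.
Vec3 cameraPosition(double t)
{
    const double PI = 3.14159265358979323846;
    const double yStart = -5.0, yEnd = 5.0;  // travel along y
    const double amplitude = 2.0;            // width of the x oscillation
    const double cycles = 3.0;               // full sine periods over the path

    Vec3 eye;
    eye.x = amplitude * std::sin(2.0 * PI * cycles * t);
    eye.y = yStart + t * (yEnd - yStart);
    eye.z = 2.0;                             // fixed distance from the scene
    return eye;
}

Check it out below: here it is, with direct lighting of the volume turned off.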


Same scene below, but now with the direct lighting turned on.  Kind of interesting, but wrong.


Now I took some time to set up a little scene and include some of the recent features of my raytracer, like texturing, dielectrics and volume rendering.  Here's a trip around the scene.


I want the final scene rendered at as high a quality as possible: I'd like the direct lighting of my volume to work properly, and I'm hoping for a good-quality video at a resolution of about 1024 x 768.  If I could fix the shadows cast by transparent and smoky spheres to look correct, that would be great too.  I'll probably have to forget (for now) about putting the whole scene on something that looks like a park bench and lighting it with a light probe map.

Here's a really large (~25MB) file you can download if you want to see a nicer looking version of the round trip around the scene.  It's 1024 x 1024 pixels, and I used 25 rays per pixel to anti-alias it, which still probably wasn't enough for that resolution.




More Volume Rendering

This weekend I worked on making the volume data (generated by 3D Perlin noise) look smoother, trying to light it directly using the most basic O(N^2) approach, and parameterizing the activities that I'll do for my final animation.

Direct lighting of the volume data has given me the most trouble this weekend.  From total light blowout, to a white shell around my volumetric sphere, to lighting that appears stuck to the outside of the sphere, to strange little color explosions that shift around with the time parameter, I'm just not having any luck getting the direct lighting working correctly.  Several images and videos below illustrate the various weird effects when direct lighting is turned on.

Below, the noise is tweaked to keep it from filling the sphere by applying a step-function "cut-off" to the perturbed position.  The result looks something like lava-lamp blobs.


Below I've turned down the density scalar on the resulting noise.


By giving the sphere containing the volume data the same index of refraction as air, the containing sphere sort of disappears.  I say "sort of" because when it's animated, you can still make out the spherical shape of the container.


By changing the rgb absorption coefficients, the smoke changes colors.


Now I'm trying to do direct lighting of the volume using, roughly, the following algorithm:

At each little step along each ray marched through the volume, I send out "shadow" rays to each of the point lights.  The light refracted through the volume (attenuated along the primary ray) is, at each step, augmented by the sum of the attenuated contributions from the shadow rays.  That sum of refracted light plus attenuated shadow-ray light is then attenuated back along the primary ray toward the eye.
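In rough code shape, it's something like the sketch below.  The names (densityAt, distanceToEdge) are hypothetical stand-ins, the radiance is grayscale, the lights are treated as distant and unit-intensity, and scattering is isotropic, all for brevity; this is the idea, not my actual tracer code.

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, const Vec3& v)      { return {s * v.x, s * v.y, s * v.z}; }
double dot(const Vec3& a, const Vec3& b)     { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Stand-ins for the real pieces: the Perlin-noise density field and the
// exit distance of a ray from the unit sphere containing the volume.
double densityAt(const Vec3& p) { return 0.5; }   // placeholder for the noise lookup

double distanceToEdge(const Vec3& p, const Vec3& dir)  // p inside unit sphere, dir normalized
{
    double b = dot(p, dir);
    return -b + std::sqrt(b * b - (dot(p, p) - 1.0));
}

// Attenuation (transmittance) of one shadow ray marched from p toward a light.
// (As written, the loop drops the leftover partial step at the sphere's edge;
// that detail turns out to matter, as described further down.)
double shadowTransmittance(const Vec3& p, const Vec3& toLight, double sigma_t, double dt)
{
    double edge = distanceToEdge(p, toLight);
    double T = 1.0;
    for (double s = 0.0; s < edge; s += dt)
        T *= std::exp(-sigma_t * densityAt(p + s * toLight) * dt);
    return T;
}

// March the primary (refracted) ray: at each step, add the in-scattered
// light from each light, attenuated by its shadow march, scaled by the
// transmittance accumulated so far back toward the eye.
double marchPrimary(const Vec3& origin, const Vec3& dir,
                    const Vec3* lightDirs, int numLights,
                    double sigma_t, double dt)
{
    double edge = distanceToEdge(origin, dir);
    double T = 1.0;   // transmittance from the eye to the current step
    double L = 0.0;   // accumulated radiance
    for (double s = 0.0; s < edge; s += dt) {
        Vec3 p = origin + s * dir;
        double d = densityAt(p);
        for (int i = 0; i < numLights; ++i)   // a whole march per step: the O(N^2) part
            L += T * d * dt * shadowTransmittance(p, lightDirs[i], sigma_t, dt);
        T *= std::exp(-sigma_t * d * dt);     // attenuate back along the primary
    }
    return L;
}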

Here's a screenshot showing way too much light, apparently evenly scattered in the volume.  Where the density is really low (wherever the black cloud isn't), there are few particles to scatter the available direct light, so the light should keep going until it hits an area of greater density, where it should be absorbed and scattered in proportion to the density.


This image makes it look like I have a white shell around a volume that still appears unlit.


By changing the density of the noise inside the volume, the cloud disappears behind the white shell.


Here I turned up the noise density and turned off the direct lighting.


Now I have positioned the eye inside the volume.  You can see artifacts that I think are related to the white shell visible from the outside.


I've started using the positional density of the noise along with the angle between the view direction and each light direction.  I've also seen various mentions that local density gradients can be used to approximate normals for Blinn-Phong-style shading of the volume, so perhaps I'll need to come up with an efficient way to compute or keep track of those.
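I haven't tried it yet, but the usual trick seems to be central differences over the density field; something like this sketch, with densityAt again standing in for the noise lookup (and note the six extra density samples per shading point, so it isn't cheap):

#include <cmath>

struct Vec3 { double x, y, z; };

double densityAt(const Vec3& p) { return 0.5; }   // placeholder for the noise lookup

// Approximate a shading "normal" from the local density gradient using
// central differences; h is a small offset on the order of the step size.
Vec3 densityGradientNormal(const Vec3& p, double h)
{
    Vec3 g;
    g.x = densityAt({p.x + h, p.y, p.z}) - densityAt({p.x - h, p.y, p.z});
    g.y = densityAt({p.x, p.y + h, p.z}) - densityAt({p.x, p.y - h, p.z});
    g.z = densityAt({p.x, p.y, p.z + h}) - densityAt({p.x, p.y, p.z - h});
    double len = std::sqrt(g.x*g.x + g.y*g.y + g.z*g.z);
    if (len > 0.0) { g.x /= -len; g.y /= -len; g.z /= -len; }  // point away from denser smoke
    return g;
}

Below, I'm using density and the angle between the view and light directions to scale the amount of light attenuated at each step.  It has created white blobs in my cloud, but the shell is still all white.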


Aha!  I've finally discovered the source of the white shell.  I had divided the distance from the point of interest to the edge of the sphere (in the direction of a light) into evenly sized steps plus a remainder.  Then I started at the edge of the sphere (well, just short of it, actually) and worked back toward the point of interest, attenuating the light along the way.  I think that's why only a thin shell near the surface blew out: for points close to the edge, the whole path was remainder, so no attenuation happened at all.  By taking care of the remainder and attenuating from the very edge of the sphere, the white shell went away.
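Sketched with a hypothetical 1D densityAt along the shadow ray, the fix looks like this (it patches the same hole left in the shadowTransmittance sketch above):

#include <cmath>

double densityAt(double s) { return 0.5; }   // placeholder: density at distance s along the shadow ray

// March from the sphere edge (s = distToEdge) back toward the point of
// interest (s = 0).  The distance almost never divides evenly into the
// step size, so attenuate the leftover "remainder" segment at the very
// edge too; skipping it was what painted the white shell.
double shadowAttenuation(double distToEdge, double dt, double sigma_t)
{
    int    steps     = static_cast<int>(distToEdge / dt);
    double remainder = distToEdge - steps * dt;

    double T = std::exp(-sigma_t * densityAt(distToEdge) * remainder);  // the edge segment first
    for (int i = steps; i > 0; --i)                                     // then the whole steps
        T *= std::exp(-sigma_t * densityAt(i * dt) * dt);
    return T;
}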

Unfortunately, the volume still doesn't look properly lit.  It's got a few white blobs near the edge of the sphere, but the rest just looks like the color prescribed by the absorption coefficients.


Here's another look at the same problem.


I was using a step-function to cut off the noise based on the length of the perturbed position.  This was an attempt to keep the noise from just filling the sphere with what appeared to be a slightly more homogeneous mixture of smoke and air, which was kind of boring.

So, to blur the smoke a little more, I changed that step-function cut-off to scale the noise using a decreasing sigmoid curve.  If the length of the perturbed position is greater than 0.3, I use something akin to:  1 / (1 + exp((<perturbed_pos> - 0.3) * 12)).  Subtracting 0.3 shifts what is normally a zero-centered sigmoid to the region where I want the cutoff.  The 12 is a scalar that affects how steep the sigmoid is: the higher it is, the closer it gets to a step function.  I picked 12 by trial and error.
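As a tiny function, the cut-off is just (same trial-and-error constants as above):

#include <cmath>

// Soft cut-off: scale the noise by a decreasing sigmoid of the length of
// the perturbed position.  Nearly 1 inside radius 0.3, falling smoothly
// to 0 outside it; larger steepness values approach the old step function.
double softCutoff(double perturbedLength)
{
    const double radius = 0.3, steepness = 12.0;
    return 1.0 / (1.0 + std::exp((perturbedLength - radius) * steepness));
}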

Below is the result, with direct lighting turned off.


Now with it turned on.


I monkeyed around with the rgb-dependent scatter and absorb coefficients, the density of the noise, and lots of other things, generating a lot of messy pictures.











I started making animations by using a time value to parameterize the way things move or rotate in the scene.  One of the first things I learned about ffmpeg is that even if I generated anti-aliased frames, the video quality can still be really crappy.  Here's an example.


That happened when I did it like this:

ffmpeg -i screen%04d.ppm video.mpg

Then I saw a tip that helped, and now I get somewhat better results with this:

ffmpeg -i screen_%04d.ppm -vcodec mpeg4 -b 4800k video.avi

As you can see below, it's a little better, but still not great.


So off I go, using the machines in the new cs lab to render little movies, all night long.

Here's a close-up movie of my first attempt at direct lighting, back when the noise was more homogeneous.  It does look kind of like the lights are interacting with the smoke, or at least it's hard to tell because it's so noisy.


Here's something with nicer looking noise, and with the direct lighting turned off.


Now here it is with direct lighting:


I started scaling the attenuated light at each step along the shadow rays by the density and the angle between the view and light vectors at that step.  That sort of made it look like the direct lighting affects the outside of the sphere, while the absorption proceeds unlit on the inside:


Thursday, December 8, 2011

Volume Rendering

I'm playing around with volume rendering for my final project.  The first simple step was to make some assumptions to simplify the problem, for example, I'm only going to allow volumetric data to be rendered inside a transparent sphere.

I started by just implementing Beer's Law using marched rays that sample the material's constants of absorption.  The result was supposed to look identical to my first implementation of Beer's Law, which used the single path distance across the object.  It worked: my images looked the same no matter how small I made my parameter step size.
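Just to spell out why those should match (made-up constants; this is only the arithmetic, not raytracer code): for a homogeneous medium, the marched product of per-step attenuations collapses back to the closed-form Beer's Law value, independent of the step size.

#include <cmath>
#include <cstdio>

// Sanity check: Beer's Law gives T = exp(-sigma_a * distance), and
// marching the same span in little steps multiplies exp(-sigma_a * dt)
// factors that collapse to the same total.
int main()
{
    const double sigma_a  = 1.2;   // made-up absorption coefficient
    const double distance = 1.4;   // made-up path length across the sphere

    double closedForm = std::exp(-sigma_a * distance);

    const double stepSizes[] = {0.05, 0.01, 0.001};
    for (double dt : stepSizes) {
        double T = 1.0, s = 0.0;
        for (; s + dt <= distance; s += dt) T *= std::exp(-sigma_a * dt);
        T *= std::exp(-sigma_a * (distance - s));   // leftover segment
        std::printf("dt=%.3f  marched=%.6f  closed-form=%.6f\n", dt, T, closedForm);
    }
    return 0;
}

Here's an image of a transparent sphere with a refractive index of 1, with rays marched across the inside of the sphere, attenuating by the constants of absorption.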




The next step was to start visualizing a vector field inside the sphere, rather than just the constants of absorption.  Happily, I was given a library for generating 3D Perlin noise, so generating a vector field to visualize was quickly within reach, and I could generate the image below:




The still frame of that looks pretty neat, kind of swirly, like oily smoke or something.  So I created a time parameter I can use to let the noise drift directionally, creating a swirling effect that can be captured in a video.
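The time parameter amounts to something like the sketch below, where perlin3 is a stand-in for the noise library's sampler and the wind vector is made up:

#include <cmath>

struct Vec3 { double x, y, z; };

// Stand-in for the 3D Perlin noise library I was given.
double perlin3(double x, double y, double z)
{
    return std::sin(x) * std::sin(y) * std::sin(z);
}

// Drift the noise field over time by sampling it at a position offset
// along a fixed "wind" direction; t advances a little each frame.
double driftingDensity(const Vec3& p, double t)
{
    const Vec3 wind = {0.0, 0.35, 0.0};   // drift speed/direction chosen by eye
    return perlin3(p.x + wind.x * t, p.y + wind.y * t, p.z + wind.z * t);
}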




I did a white one of these things, above, and it was, well, white, but essentially the same thing.  Then, as I was working on a simple and inefficient direct lighting scheme, I discovered a bug in the code that was sampling the noise field: I had been treating all points within the sphere as if they were on the surface for the purposes of sampling.  When I fixed the bug and began sampling the noise field the way I had intended, I saw something that looked much more like what I had been expecting:




Now it's easier to see that before I was really getting a lot of highlights on the surface, and now I'm getting something where I can see some depth in the stuff in the sphere.  I made a video of that too.  It looks like the wind is blowing pretty hard in there!




Boy, these videos really don't look very good compared to the ppm images they came from; I wonder if there are some ffmpeg settings I need to master.  Just to see how it looked, I tried using 25 rays per pixel instead of only one, hoping that less aliasing in the individual frames might make a better looking final video.  So here's the same video (except it's shorter by about half) rendered with 25 rays per pixel:




Well, I guess that's a bit better.  I should still look into getting the best quality when I'm using ffmpeg.

So I also tried my hand at some rudimentary direct lighting.  To begin with, I just wanted the simplest O(N^2) algorithm: from every sampling point along a refracted ray that I march through the sphere, I march rays toward each point light in my scene until I hit the edge of the sphere.  Here's an image of that, with a parameter step size of 0.05 (inside a sphere of radius 1):



So I tried decreasing my parameter step size to 0.01 (again, the radius of the sphere is 1).  Multiplying the number of steps by 5 definitely resulted in a factor of nearly 25 increase in running time, but the resulting image was disappointing.  All I did was decrease the step size, so I was expecting better resolution; instead I got this:




This is a bummer because it looks like I might be handling the transparency incorrectly, and I'm convinced it should not be so!

Here's a short clip of the lower-resolution fog; I feel like seeing it animated gives you a better sense of what you're looking at.





Thursday, November 24, 2011

Raytracing OBJ Models

I recently found this website and wrapped the obj reader I found there for use in my raytracer.  To make this work, I had to re-implement (or at least refactor) my bounding volume hierarchy code and get it working in my new raytracer.  Once I did that, I verified on a sphereflake that it still provided the speedup I had seen before.  It did.  So then I implemented another subclass of my GeometricObject type and used my BVH class inside it to efficiently hit the many triangles that make up a model.
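The shape of that compound object is roughly the following interface sketch (stand-in names, not my exact classes, with the BVH internals omitted):

#include <utility>
#include <vector>

class Ray;
class ShadeRecord;
struct Triangle { /* three vertices, a facet normal, ... */ };

class GeometricObject {
public:
    virtual ~GeometricObject() {}
    virtual bool hit(const Ray& ray, double& tmin, ShadeRecord& sr) const = 0;
};

class BVH {
public:
    explicit BVH(const std::vector<Triangle>& tris);   // build over the triangles
    bool hit(const Ray& ray, double& tmin, ShadeRecord& sr) const;
};

// The mesh owns the model's triangles and delegates hit tests to a BVH
// built over them, so most of the 16k-64k triangles are never tested.
class TriangleMesh : public GeometricObject {
public:
    explicit TriangleMesh(std::vector<Triangle> tris)
        : triangles(std::move(tris)), bvh(triangles) {}

    bool hit(const Ray& ray, double& tmin, ShadeRecord& sr) const
    {
        return bvh.hit(ray, tmin, sr);
    }

private:
    std::vector<Triangle> triangles;
    BVH bvh;
};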

The image below depicts a model with roughly 16,000 triangles and facet normals.



Here it is up close, again with 16,000 triangles, facet normals, and 1 ray per pixel.  The image below took about 2 or 3 seconds to render.




Ok, now I'm rendering the "smooth mesh" model with more like 64,000 triangles, still using facet normals, and 1 ray per pixel.  This image took more like 5 seconds to render.




Finally, below you can see the model with 64,000 triangles, facet normals, and 25 rays per pixel.  This took about 2.5 minutes to render, and almost 15 million rays were generated.




Below is my glass version of the model, with 64,000 triangles, facet normals, and 25 rays per pixel.  It took my raytracer over 9 minutes to render the image.  I'm kind of disappointed with the results; I'm not sure where all the black lines and patches are coming from.




The soccer ball model looked a little better as glass, but not much.




I think if I use the vertex normals provided by Nate Robins' excellent sample code, and maybe even interpolate those across each triangle using barycentric coordinates, some of the mess might get cleaned up.
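If I get to it, the interpolation itself is small.  Here's a sketch, assuming the triangle hit test already produces barycentric coordinates u and v:

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, const Vec3& v)      { return {s * v.x, s * v.y, s * v.z}; }

// Smooth shading: weight the three vertex normals by the barycentric
// coordinates of the hit point -- weights (1-u-v, u, v) -- then renormalize.
Vec3 interpolatedNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                        double u, double v)
{
    Vec3 n = (1.0 - u - v) * n0 + u * n1 + v * n2;
    double len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return {n.x / len, n.y / len, n.z / len};
}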

Still, it's kind of cool to be able to raytrace obj files now, and it's pretty neat to see them as glass, or at least a little like glass.

Monday, November 21, 2011

Dielectric Spheres

After rewriting my raytracer (most of it, anyway; some pieces still haven't made the transition), I've got much of the functionality back and, for the most part, working better than before.

Now that it's easier to swap out "tracers", it was a snap to, for example, drop one in that visualizes normals instead of doing normal raytracing, the results of which you see below.




Additionally, after the rewrite I didn't even need to worry about where those weird speckles were coming from in my previous refraction blog entry.  I think adding a ShadeRecord to the hit functions of all my objects took care of that problem.  So I started working on splitting the energy between reflected and refracted rays by just dividing it in half everywhere.




Then I tried fiddling with the constant value a bit.  This resulted in interesting images, but they're definitely not correct.




The red sphere that appears on the surface of the left-most sphere in this scene is actually behind it; its image is visible due to refraction.  I don't seem to have the reflections here.




Now in the image below, you can see that the reflection of the red sphere, which should have become more visible in its transparent neighbor due to the glancing view angle on the transparent sphere, is completely missing.  So my Fresnel term isn't working yet.




Oh, is this more like it?  I can see below that when my view angle at a transparent sphere's hit point gets closer to 90 degrees from the normal, the reflections on the surface of the sphere are more visible, but where my view angle is close to the normal, I see more of the refraction.




I don't know.  These spheres look weird below.  They don't seem to have any blending between where you see refracted and reflected images.  The set of seven spheres has indices of refraction ranging from 1.1 to 1.7, and to me, the lower-IOR spheres look better.  Maybe that's because a perfect sphere with an IOR of 1.7 just doesn't exist in nature, so my eye isn't used to what it should look like?




It's very pronounced below.  Around the edges, you see all reflection, and in the middle you see all refraction.




I'm definitely missing something, because when I set a sphere's index of refraction to 1.0, it still completely disappears.




In the image below, I tried setting the index of refraction of the sphere to something less than that of air; I believe it was 0.9.




Below you can see all seven spheres in one shot; the one with the lowest index of refraction is on the right, and the sphere with the highest is on the left.





This one is still a mystery to me.  I put my camera inside this sphere, and I get a mess!  If this is what is seen from inside the sphere, why doesn't it show up in refracted rays that leave the sphere and reflect off other objects?




I'm not sure it's correct, but I like this image below, looking down the line of spheres, with the dark side of the scene in the background.




Here I decided to take a little break from the Fresnel term and get some color filtering, i.e. attenuation due to Beer's Law, working.




I'm currently working on getting OBJ file reading working.  I'm actually just wrapping some code I found from Nate Robins to read the OBJ files, and then I'm going to create a compound object in my geometric object hierarchy which will contain all the triangles of an OBJ file in a BVH within the compound object itself.  Hopefully soon I'll have a Frank model rendered as colored glass!

I'm also still working on other types of compound objects, which can be composed of parts of implicitly defined objects.

Ok, well, I got a good tip: visualize the Fresnel terms on my transparent spheres and see if they do, indeed, look like a step function, as all the above images seem to indicate.  The following sequence of images illustrates the situation.


Above is an image created by a "Tracer" that keeps only refracted rays from transparent surfaces and doesn't compute reflections at all.  Now look at the image below, created by my buggy tracer, which computes the Fresnel term and allows for wavelength-dependent absorption in the medium of each sphere.


Instead of a gradual change from refraction to reflection as rays approach the sphere at a glancing angle, there appears to be an arbitrary cutoff.  So in the next image, I'm visualizing the Fresnel term that determines the reflect/refract ratio, and it's clearly a step function.  The white outer rim indicates a Fresnel term of 1.0, corresponding to pure reflection and no refraction; the black inner circle indicates a Fresnel term of 0.0, corresponding to pure refraction.
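For reference, a correct Fresnel term should follow a smooth curve; Schlick's approximation is the usual compact form of it (a reference sketch, not the code I'm debugging):

#include <cmath>

// Schlick's approximation of the unpolarized Fresnel reflectance for a
// dielectric.  cosTheta is the cosine of the angle between the incident
// ray and the normal; eta1 and eta2 are the indices of refraction on
// each side of the surface.  It runs smoothly from r0 at normal
// incidence (cosTheta = 1) up to 1 at grazing angles (cosTheta = 0).
// (Rays leaving the denser medium also need a total-internal-reflection
// check, which this sketch omits.)
double schlickFresnel(double cosTheta, double eta1, double eta2)
{
    double r0 = (eta1 - eta2) / (eta1 + eta2);
    r0 *= r0;
    return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
}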


So I went to work and, now that I was sure where the problem was, found it.  Once it was fixed, the first thing I saw was a much more reasonable looking set of Fresnel terms, seen below.


So then I went back to doing the normal refraction/reflection rendering, and the images immediately looked a lot better.  Notice in the following images that the reflections on the spheres are visible pretty much all over, but they're very faint when you look straight into the center of the sphere, and they get more visible as you look closer to the edge of the sphere.