I forgot to add this in. It wasn't noticeable, since the QMC
sequences did use the seed, and we probably don't ever get to
the random values for 15+ light bounces. But it seems worth
fixing anyway!
This produces identical results, but generates the direction
vectors from the original sources at build time. This makes
the source code quite a bit leaner, and will also make it easier
to play with other direction vectors in the future if the
opportunity arises.
1. Use better constants for the hash-based Owen scrambling.
2. Use golden ratio sampling for the wavelength dimension.
On the use of golden ratio sampling:
Since hero wavelength sampling uses multiple equally-spaced
wavelengths, and most samplers only consider the spacing of
individual samples, those samplers weren't actually doing a
good job of distributing all the wavelengths evenly. Golden
ratio sampling, on the other hand, does this effortlessly by
its nature, and the resulting reduction of color noise is huge.
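For illustration, here's a sketch of the idea. The function names, the per-pixel seed offset, and the four-wavelength layout are all illustrative assumptions, not the renderer's actual code:

```rust
// Illustrative sketch of golden-ratio sampling feeding hero wavelength
// sampling. The names and the 4-wavelength layout are assumptions.

const GOLDEN_RATIO_FRACT: f64 = 0.618_033_988_749_894_9; // 1/phi

/// The i-th golden-ratio sample, offset by a per-pixel `seed` in [0, 1).
/// Successive samples land maximally far from each other, and because
/// the golden ratio is irrational, rotated copies of the samples stay
/// well distributed too.
fn golden_ratio_sample(i: u32, seed: f64) -> f64 {
    (seed + i as f64 * GOLDEN_RATIO_FRACT).fract()
}

/// Hero wavelength sampling: one primary sample `u` in [0, 1) expands
/// into four equally spaced wavelengths across [min, max) nanometers.
fn hero_wavelengths(u: f64, min: f64, max: f64) -> [f64; 4] {
    let span = max - min;
    let mut out = [0.0f64; 4];
    for (j, o) in out.iter_mut().enumerate() {
        *o = min + (u + j as f64 * 0.25).fract() * span;
    }
    out
}
```

Because the hero wavelengths are equally spaced rotations of `u`, a sampler that only spaces out the `u` values can still cluster the rotated copies; the golden-ratio sequence avoids that for free.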
The previous implementation was fundamentally broken because it
was mixing the bits in the wrong direction. This fixes that.
The constants have also been updated. I created a (temporary)
implementation of slow-but-full Owen scrambling to test against,
and these constants appear to give results consistent with it
on all the test scenes I rendered. It is still, of course,
possible that my full implementation was flawed, so more validation
in the future would be a good idea.
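To make the "direction" point concrete, here's a sketch of hash-based Owen scrambling in the Laine–Karras style. The hash constants below are placeholders, not the ones this commit settled on:

```rust
// Sketch of hash-based Owen scrambling. In bit-reversed space,
// additions and (even-constant) multiplies only let lower bits
// influence higher ones, which maps back to Owen scrambling's
// requirement that each digit be permuted based only on the digits
// *above* it. Mixing in the other direction breaks that property.
// The constants are illustrative placeholders.

fn owen_scramble_rev(mut x: u32, seed: u32) -> u32 {
    // Operates on an already bit-reversed value.
    x = x.wrapping_add(seed);
    x ^= x.wrapping_mul(0x6c50_b47c);
    x ^= x.wrapping_mul(0xb82f_1e52);
    x ^= x.wrapping_mul(0xc7af_e638);
    x ^= x.wrapping_mul(0x8d22_f6e6);
    x
}

fn owen_scramble(x: u32, seed: u32) -> u32 {
    owen_scramble_rev(x.reverse_bits(), seed).reverse_bits()
}
```

A quick sanity check on the direction: two inputs that share their high bits must still share those high bits after scrambling, since a valid Owen scramble permutes each digit based only on the digits above it.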
This gives better variance than random digit scrambling, at a
very tiny runtime cost (so tiny it's lost in the noise of the
rest of the rendering process).
The important thing here is that I figured out how to use the
scrambling parameter properly to decorrelate pixels. Using the
same approach as with Halton (just adding an offset into the sequence)
is very slow with Sobol, since computing samples at higher indices
is more computationally expensive. So using the scrambling parameter
instead was important.
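The scheme can be sketched like this. The hash and the mixing constants are illustrative stand-ins, not the renderer's actual code:

```rust
// Sketch of per-pixel decorrelation via the scramble seed rather than
// an index offset. The hash and constants are illustrative only.

/// A small bijective integer hash (Wang-style).
fn hash(mut n: u32) -> u32 {
    n = (n ^ 61) ^ (n >> 16);
    n = n.wrapping_mul(9);
    n ^= n >> 4;
    n = n.wrapping_mul(0x27d4_eb2d);
    n ^ (n >> 15)
}

/// Scramble seed for a given pixel and dimension. Every pixel then
/// draws samples 0, 1, 2, ... of the *same* Sobol sequence,
/// decorrelated purely by the scramble, so the cheap low-index
/// samples are reused instead of paying for ever-higher indices.
fn pixel_scramble_seed(x: u32, y: u32, dim: u32) -> u32 {
    hash(x.wrapping_mul(0x9e37_79b9) ^ y.wrapping_mul(0x85eb_ca6b) ^ dim)
}
```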
This is the inverse of what was being done before, which was to
loop over all of the rays for each triangle. At the moment, this
actually appears to be a tiny bit slower, but it should allow
for future optimizations testing against multiple triangles at once.
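The two loop orders in sketch form, with a placeholder intersection test standing in for the real one:

```rust
// Illustrative only: `hit_test` is a stand-in for a real ray/triangle
// intersection test.
fn hit_test(ray: f64, tri: f64) -> bool {
    ray < tri
}

/// Old order: for each triangle, loop over all rays.
fn count_hits_tri_major(rays: &[f64], tris: &[f64]) -> usize {
    tris.iter()
        .flat_map(|t| rays.iter().map(move |r| hit_test(*r, *t)))
        .filter(|h| *h)
        .count()
}

/// New order: for each ray, loop over all triangles. Same result,
/// but the inner loop is now a natural place to later test one ray
/// against several triangles at once (e.g. with SIMD).
fn count_hits_ray_major(rays: &[f64], tris: &[f64]) -> usize {
    rays.iter()
        .flat_map(|r| tris.iter().map(move |t| hit_test(*r, *t)))
        .filter(|h| *h)
        .count()
}
```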
It has a slight color cast at the moment, which I believe is due
to incorrect color space conversions rather than the upsampling
method itself. So Meng upsampling is still the active method
for now.
Eventually the Surface trait will be changed to actually mean the
ability to be processed _into_ a MicropolyBatch. So it's ultimately
nonsensical for MicropolyBatch to implement it.
This uses a normalized version of blackbody radiation, so the
colors still vary but the brightness doesn't vary nearly as
wildly as with genuine blackbody radiation.
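One way to do this kind of normalization is to divide Planck's law by its own peak value, found via Wien's displacement law. This is an illustrative sketch of that approach, not necessarily the exact normalization the renderer uses:

```rust
// Illustrative "normalized blackbody": Planck's law divided by its
// peak value, so hue still varies with temperature but brightness
// stays tame.

/// Planck's law, up to constant factors (they cancel in the ratio
/// below). `lambda` in meters, `t` in Kelvin.
fn planck(lambda: f64, t: f64) -> f64 {
    const H: f64 = 6.626_070_15e-34; // Planck constant
    const C: f64 = 2.997_924_58e8; // speed of light
    const KB: f64 = 1.380_649e-23; // Boltzmann constant
    1.0 / (lambda.powi(5) * ((H * C / (lambda * KB * t)).exp() - 1.0))
}

/// Emission normalized so the spectral peak is 1.0 at any temperature.
fn blackbody_normalized(lambda: f64, t: f64) -> f64 {
    const WIEN_B: f64 = 2.897_771_955e-3; // Wien displacement const, m*K
    planck(lambda, t) / planck(WIEN_B / t, t)
}
```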
The sampling method used before is numerically unstable for very
small lights. That sampling method is still used for large/close
lights, since it works very well in that case. But for small/distant
lights a simpler and numerically stable method is used.
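As a generic illustration of the kind of instability involved (not the renderer's actual SphereLight code): cone sampling of a sphere light involves cos_theta_max = sqrt(1 - (r/d)^2), and for a tiny ratio s = r/d, computing 1 - cos_theta_max directly cancels catastrophically. Rewriting it as s^2 / (1 + sqrt(1 - s^2)) is algebraically identical but stable:

```rust
/// Naive form: loses essentially all precision when `s` is small,
/// because sqrt(1 - s*s) rounds to exactly 1.0.
fn one_minus_cos_naive(s: f64) -> f64 {
    1.0 - (1.0 - s * s).sqrt()
}

/// Stable form: same value mathematically, but no cancellation.
fn one_minus_cos_stable(s: f64) -> f64 {
    (s * s) / (1.0 + (1.0 - s * s).sqrt())
}
```

For s = 1e-9 the naive form returns exactly 0.0, while the stable form returns the correct ~5e-19; for moderate s the two agree.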
It *seemed* to fix the problem I was running into, but it actually
made the SphereLight ray intersection code incorrect, and was just
avoiding intersections that should have happened.
I should test better before committing. :-)
Thanks to a discovery by Petra Gospodnetic during her GSoC
project, I was able to substantially improve light tree sampling
for lambert surfaces. As part of this, the part of the surface
closure API relevant to light tree sampling has been adjusted to
be more flexible.
These improvements do not yet affect GTR surface light tree
sampling.
More specifically: prior to this, SurfaceLights returned the
shadow ray direction vector to use. That was fine, but it
kept the responsibility of generating proper offsets (to account
for floating point error) inside the lights.
Now the SurfaceLights return the world-space point on the light
to sample, along with its surface normal and error magnitude.
This allows the robust shadow ray generation code to be in one
place inside the renderer code.
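A minimal sketch of what that centralization can look like. The types, field names, and offset rule here are assumptions for illustration, not the actual API:

```rust
// Illustrative: the light reports a sampled point, its normal, and an
// error bound; the renderer nudges the point along the normal so
// floating-point error can't cause self-intersection.

#[derive(Clone, Copy)]
struct Vec3 {
    x: f64,
    y: f64,
    z: f64,
}

impl Vec3 {
    fn add(self, o: Vec3) -> Vec3 {
        Vec3 { x: self.x + o.x, y: self.y + o.y, z: self.z + o.z }
    }
    fn scale(self, s: f64) -> Vec3 {
        Vec3 { x: self.x * s, y: self.y * s, z: self.z * s }
    }
}

/// What a light sample might carry under this design.
struct LightSample {
    point: Vec3,  // world-space point on the light
    normal: Vec3, // surface normal at that point
    err: f64,     // magnitude of accumulated floating-point error
}

/// One shared place to push the sampled point off its own surface
/// before tracing the shadow ray toward it.
fn offset_light_point(s: &LightSample) -> Vec3 {
    s.point.add(s.normal.scale(s.err))
}
```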
Reorganized light and surface traits so that light sources are
surfaces as well, which will let them slide easily into
intersection tests with the rest of the scene geometry.
It's not used right now, but in the future I want shaders to be
able to vary over time and have motion blur. This serves as a
nice little reminder by putting it in the API.
The main change is that SurfaceClosures now have the hero
wavelength baked into them. Since surface closures come from
surface intersections, and intersections are always specific to
a ray or path, and rays/paths have a fixed wavelength, it doesn't
make sense for the surface closure to constantly be converting
from a more general color representation to spectral samples
whenever it's used.
This is also nice because it keeps surface closures decoupled from
any particular representation of color. All color space handling
etc. can be kept inside the shaders.
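One way to sketch the "bake the wavelength in" idea (the names and the four-sample layout are illustrative assumptions, not the actual types):

```rust
// Illustrative: the shader converts its color representation to
// spectral samples once, at closure-construction time; the closure
// then only ever works in terms of those samples.

/// Four spectral samples: the hero wavelength plus three rotations.
#[derive(Clone, Copy)]
struct SpectralSample {
    hero_wavelength: f64,
    values: [f64; 4],
}

/// Evaluated many times per intersection; no color-space conversion
/// happens in here because it was already done at construction.
struct LambertClosure {
    albedo: SpectralSample,
}

impl LambertClosure {
    fn evaluate(&self, cos_theta: f64) -> [f64; 4] {
        // Lambert BRDF (albedo / pi) weighted by the cosine term.
        let f = cos_theta.max(0.0) * std::f64::consts::FRAC_1_PI;
        let mut out = self.albedo.values;
        for v in out.iter_mut() {
            *v *= f;
        }
        out
    }
}
```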