If the average surface area of all the time samples is close enough
to the surface area of their union, just take the union and use that.
This both makes the BVH smaller in memory (time samples don't
propagate up the tree beyond their usefulness) and makes traversal
faster, since it can skip interpolating BBoxes when a node has only
one BBox.
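As a rough sketch of that collapse test (the BBox here is a minimal
stand-in, and the 1% tolerance is an assumed value, not the actual
threshold):

    #[derive(Copy, Clone)]
    struct BBox {
        min: [f32; 3],
        max: [f32; 3],
    }

    impl BBox {
        fn union(&self, o: &BBox) -> BBox {
            let mut b = *self;
            for i in 0..3 {
                b.min[i] = b.min[i].min(o.min[i]);
                b.max[i] = b.max[i].max(o.max[i]);
            }
            b
        }

        fn surface_area(&self) -> f32 {
            let d = [
                self.max[0] - self.min[0],
                self.max[1] - self.min[1],
                self.max[2] - self.min[2],
            ];
            2.0 * (d[0] * d[1] + d[1] * d[2] + d[2] * d[0])
        }
    }

    /// If the samples' average surface area is close to the union's,
    /// return the union so the node can store a single BBox.
    /// Assumes `samples` is non-empty.
    fn collapse_time_samples(samples: &[BBox]) -> Option<BBox> {
        const TOLERANCE: f32 = 1.01; // assumed 1% threshold
        let union = samples[1..].iter().fold(samples[0], |a, b| a.union(b));
        let avg_sa = samples.iter().map(|b| b.surface_area()).sum::<f32>()
            / samples.len() as f32;
        if union.surface_area() <= avg_sa * TOLERANCE {
            Some(union) // collapse: one BBox for the whole time range
        } else {
            None // keep the per-time-sample BBoxes
        }
    }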
Reduced the maximum BVH depth from 64 to 42. This still allows each
BVH to hold about 4.4 trillion elements, while guaranteeing that the
accel ray's traversal bitstack can accommodate at least two nested
max-depth trees.
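The arithmetic behind those numbers, assuming one bitstack bit per
tree level (the 128-bit bitstack width is an inference from the two
nested trees claim, not stated here):

    max depth 42      =>  up to 2^42 = 4,398,046,511,104 ≈ 4.4 trillion leaves
    two nested trees  =>  2 × 42 = 84 bits of bitstack, which fits in 128 bits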
In practice it worked fine, but only by accident: NaNs were being
passed to the lerp_slice function, which happened to produce the
correct result in this case, but that's fragile and dependent on how
lerp_slice is implemented.
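A plausible illustration of why the NaN happened to be harmless (this
lerp_slice is a guess at the general shape, not the real
implementation):

    /// Piecewise-linear interpolation over `vals` at t in [0, 1].
    /// Assumes `vals` is non-empty.
    fn lerp_slice(vals: &[f32], t: f32) -> f32 {
        if vals.len() == 1 {
            // Single element: t is never touched, so even t = NaN
            // "works". That's the accident described above.
            vals[0]
        } else {
            // With more than one element, a NaN t would poison the
            // result, so relying on this behavior is fragile.
            let scaled = t * (vals.len() - 1) as f32;
            let i = (scaled as usize).min(vals.len() - 2);
            vals[i] + (vals[i + 1] - vals[i]) * (scaled - i as f32)
        }
    }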
The BVH building code is now largely split out into a separate
type, BVHBase. The intent is that this will also be used by
the BVH4 when I get around to it.
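Roughly, the intended shape is something like this (everything below
other than the BVHBase name is a guess for illustration):

    /// Shared building machinery: recursive splitting, bound merging,
    /// and so on. Width-specific BVH types consume its output to lay
    /// out their own node formats.
    struct BVHBase {
        nodes: Vec<BuildNode>, // intermediate build-time nodes
    }

    struct BuildNode {
        bounds_range: (usize, usize),     // range into a shared BBox list
        children: Option<(usize, usize)>, // child indices, None for leaves
    }

    // Both node widths could then build from the same intermediate tree:
    //     let base = BVHBase::from_objects(&mut objects);
    //     let bvh  = BVH::from_base(arena, &base);
    //     let bvh4 = BVH4::from_base(arena, &base); // planned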
The BVH itself now uses references instead of indexes, allocating
and pointing directly into the MemArena. This allows the nodes
to all be right next to their bounding boxes in memory.
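A minimal sketch of what reference-based nodes look like, assuming a
bump-style arena whose allocations hand back `&'a` references (the
exact MemArena API and node layout may differ):

    struct BBox; // stub; stands in for a real bounding box type

    // Nodes point at other arena allocations directly, so a node and
    // its per-time-sample BBoxes can be laid out side by side.
    enum BVHNode<'a> {
        Internal {
            bounds: &'a [BBox], // one BBox per time sample
            children: (&'a BVHNode<'a>, &'a BVHNode<'a>),
            split_axis: u8,
        },
        Leaf {
            bounds: &'a [BBox],
            object_range: (usize, usize), // range into the object list
        },
    }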
Growing the block size like this seems to work better than a fixed
block size, because it adapts to how much memory is being requested
overall. For example, a small scene won't allocate large amounts of
RAM, but a large scene with large data won't be penalized with a
lot of tiny allocations.
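A sketch of that strategy (the growth factor, starting size, and the
alignment-free alloc are all simplifying assumptions):

    /// Bump arena whose blocks grow geometrically, so total memory
    /// tracks overall demand: small scenes stay small, large scenes
    /// avoid death by a thousand tiny blocks.
    struct GrowArena {
        blocks: Vec<Vec<u8>>,
        used: usize,            // bytes used in the current block
        next_block_size: usize,
    }

    impl GrowArena {
        fn new() -> GrowArena {
            GrowArena {
                blocks: vec![Vec::with_capacity(4096)],
                used: 0,
                next_block_size: 4096, // assumed starting size
            }
        }

        /// Allocate `size` bytes (alignment ignored for brevity).
        fn alloc(&mut self, size: usize) -> &mut [u8] {
            if self.used + size > self.blocks.last().unwrap().capacity() {
                // Each new block doubles in size (and is at least big
                // enough for this request), keeping the block count low.
                self.next_block_size = (self.next_block_size * 2).max(size);
                self.blocks.push(Vec::with_capacity(self.next_block_size));
                self.used = 0;
            }
            let start = self.used;
            self.used += size;
            let block = self.blocks.last_mut().unwrap();
            block.resize(start + size, 0);
            &mut block[start..start + size]
        }
    }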
Not tested yet, just a straightforward conversion from the C++
Psychopath codebase. So there are probably bugs in it from the
conversion. But it compiles!
Also created a proper World struct in the process, to store all
infinite-extent type stuff.
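Something along these lines; the exact fields aren't enumerated in
this commit, so the ones below are illustrative:

    struct DistantDiskLight; // stub for illustration

    /// Scene-global data with infinite extent, which doesn't belong
    /// in any spatial acceleration structure.
    struct World<'a> {
        background_color: [f32; 3],
        distant_lights: &'a [DistantDiskLight],
    }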
Note that I goofed and did a new rustfmt pass but forgot to
commit before making these changes, so there are a lot of
formatting changes mixed into this too. *sigh*
After some experimentation, it's pretty clear that the LightTree
performs a lot better with a model of spherical _volume_ light
sources. This makes sense, considering that the tree's nodes
generally represent a distribution of other lights spread through
space.
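The relevant geometric quantity for a spherical-volume light model is
the solid angle the sphere subtends; a small sketch using the standard
formula (the function name is made up):

    use std::f32::consts::PI;

    /// Solid angle subtended by a sphere of radius `r` whose center
    /// is distance `d` from the shading point.
    fn sphere_solid_angle(r: f32, d: f32) -> f32 {
        if d <= r {
            // Shading point inside the sphere: it covers every direction.
            4.0 * PI
        } else {
            // Omega = 2 * pi * (1 - cos(theta)), with sin(theta) = r / d.
            let sin2 = (r / d) * (r / d);
            2.0 * PI * (1.0 - (1.0 - sin2).sqrt())
        }
    }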
This is a quick hack to make it behave a bit more like that. The
long-term solution, though, will be to adjust how the surface
closures' estimate_eval_over_solid_angle() is implemented.