Previously normalization happened per image buffer individually, which
was wrong in cases where different faces of the same mesh use different images.
Also solved possible threading issues when calculating min/max displacement.
This will calculate the maximal distance automatically and normalize displacement
to it. Before this change, normalization would not happen at all unless the max
distance was set manually.
This affects the "regular" baker only; there are still some fixes to come for the
multiresolution baker, but that can be solved separately.
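As a sketch of the idea (illustrative names, not the actual baker code):
the maximum is found across all samples of the mesh first, regardless of
which image buffer they came from, and only then applied:

    #include <math.h>

    /* Hypothetical sketch: normalize displacement against a mesh-wide
     * maximum instead of a per-image-buffer maximum. */
    static void normalize_displacement(float *displacement, int num_samples)
    {
        float max_dist = 0.0f;
        int i;

        /* Pass 1: mesh-wide maximum across all image buffers. */
        for (i = 0; i < num_samples; i++) {
            if (fabsf(displacement[i]) > max_dist)
                max_dist = fabsf(displacement[i]);
        }

        /* Pass 2: normalize to that shared maximum. */
        if (max_dist > 0.0f) {
            for (i = 0; i < num_samples; i++)
                displacement[i] /= max_dist;
        }
    }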
It was caused by the image thread-safety commit and was noticeable
only on really multi-core CPUs (like dual-socket Xeon stations); it was
not visible on a Core i7 machine.
The reason for the slowdown was a spinlock around image buffer referencing,
which led to lots of cores waiting for a single core, while actually using
the image buffer after it was referenced did not take much longer than
doing the reference itself.
The clearest solution here seemed to be introducing an Image Pool,
which contains a list of loaded and referenced image buffers, so
all threads can skip the lock if the pool is used for reading only.
The lock is only needed when the buffer for the requested image user is
missing from the pool. This lock happens only once per image, so the
overall number of locks is much lower than it was before.
To operate with the pool:
- BKE_image_pool_new() creates a new pool
- BKE_image_pool_free() destroys the pool and dereferences all image
buffers which were loaded into it
- BKE_image_pool_acquire_ibuf() returns the image buffer for a given
image and user. The pool may be NULL, in which case a fallback to
BKE_image_acquire_ibuf happens.
This helps to avoid lots of if(pool) checks in image sampling
code.
- BKE_image_pool_release_ibuf() releases an image buffer. In fact, it
only does something if the pool is NULL; in all other cases it is
equal to a do-nothing operation.
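A sketch of the intended usage pattern, with signatures paraphrased from
the description above rather than copied from the headers (image,
image_user and num_samples are assumed to come from the caller):

    ImagePool *pool = BKE_image_pool_new();
    int i;

    /* Sampling loop: the first acquire per image takes the lock and
     * caches the buffer in the pool; later acquires are lock-free reads. */
    for (i = 0; i < num_samples; i++) {
        ImBuf *ibuf = BKE_image_pool_acquire_ibuf(image, &image_user, pool);
        /* ... sample from ibuf ... */
        BKE_image_pool_release_ibuf(image, ibuf, pool); /* no-op when pool != NULL */
    }

    /* Dereferences every buffer that was loaded into the pool. */
    BKE_image_pool_free(pool);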
Detect the feedback loop, do not bake to images detected in this loop, and show
a nice warning message in such cases.
This approach avoids overcomplicating the code by trying to duplicate images,
which would have no real benefit.
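The check itself amounts to something like the following sketch; the
types and fields are illustrative, not Blender's actual DNA:

    /* An image is in a feedback loop if it is both the bake target and
     * a texture input of the same mesh (illustrative types only). */
    typedef struct TexImage { int id; } TexImage;

    static int image_in_feedback_loop(const TexImage *bake_target,
                                      const TexImage **inputs, int num_inputs)
    {
        int i;
        for (i = 0; i < num_inputs; i++) {
            /* baking here would read and write the same pixels */
            if (inputs[i]->id == bake_target->id)
                return 1;
        }
        return 0;
    }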
* Enabled modifier "Apply" button since it can now be used to apply displacement or output layers to the mesh.
* Default surface output names are now unique in case the canvas has multiple surfaces of the same type.
* Merged "face aligned" and "non-closed" brush options to a single "Project" toggle, available for "Proximity" brushes.
* Added more icons to user interface selections.
* Increased default proximity distance.
* Set proximity falloff ramp to only affect alpha by default.
* Removed some no longer required render ext. functions.
* Fix: geometry node vertex alpha didn't work unless "Vertex Color Paint/Light" was enabled in the material.
* Object velocity can now be used to determine brush influence and color.
* Brushes can now be set to "smudge" existing paint.
* Added new operators to easily add and remove surface output mesh data layers from the Dynamic Paint UI.
* Fixed the drip effect algorithm to work properly with forces pointing towards the surface.
* Adjusted drip effect speed.
* Drip effect can now use canvas velocity and acceleration to influence drip direction.
* Fixed texture mapping for material enabled brushes.
* "Object Center" type brushes can now use "material color" as well.
* Improved surface partitioning grid generation algorithm.
* Fixed possible invalid brush collision detection when OpenMP is enabled.
* Fixed incorrect random sized particle displace/wave influence.
* Fixed "Object Center" brush color ramp falloff.
* Fixed invalid zero alpha sampling when rendering vertex colors.
* Lots of smaller tweaks.
* Added alpha support to the renderer for vertex colors. You can now easily render Dynamic Paint produced vertex colors by checking "Vertex Color" in the material options.
* Added "Vertex Alpha" socket for "Geometry" material node.
* Fixed vertex surface color output issues.
Tangent space calculation is now implemented in the two files mikktspace.h
and mikktspace.c. These are standalone files which can be redistributed into
any other application and regenerate the same tangent spaces. The
implementation is independent of the ordering of faces and the vertex
ordering of faces.
These should not have any effect on render results, except in some cases where
you have overlapping faces, where the noise seems to be slightly reduced.
There are some performance improvements; for simple scenes I wouldn't expect
more than 5-10% to be cut off the render time, while for Sintel scenes we got
about 50% on average, with millions of polygons on Intel quad cores. This is
because memory access / cache misses were the main bottleneck for those scenes,
and the optimizations improve that.
Internal changes:
* Remove RE_raytrace.h, raytracer is now only used by render engine again.
* Split non-public parts of rayobject.h into rayobject_internal.h; hopefully
this makes it clearer how the API is used.
* Added rayintersection.h to contain some of the stuff from RE_raytrace.h.
* Change Isect.vec/labda to Isect.dir/dist; previously vec was sometimes
normalized and sometimes not, which was confusing. Now dir is always
normalized and dist contains the distance.
* Change VECCOPY and similar to BLI_math functions.
* Force inlining of auxiliary functions for ray-triangle/quad intersection,
helps a few percentages.
* Reorganize svbvh code so all the traversal functions are in one file.
* Don't do the test for the root so that push_childs can be inlined.
* Make shadow a template parameter so it doesn't need to be runtime checked.
* Optimization in raytree building, was computing bounding boxes more often
than necessary.
* Leave out the logf() factor in SAH; it makes tree building quicker with no
noticeable influence on raytracing performance (see the sketch after this list).
* Set max childs to 4; this simplifies the traversal code a bit, and also
seems to help slightly in general.
* Store child pointers and child bounding boxes as fixed arrays of size 4 in
nodes; nearly all nodes have this many children, so overall it actually
reduces memory usage a bit and avoids a pointer indirection.
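For reference, the standard surface area heuristic cost that the logf()
remark refers to looks roughly like this; the constants and the removed
logf() weighting are assumptions, not the builder's actual code:

    /* Standard SAH cost of a candidate split: traversal cost plus the
     * child intersection costs weighted by surface-area probabilities. */
    static float sah_cost(float area_left, int count_left,
                          float area_right, int count_right,
                          float area_parent)
    {
        const float c_traversal = 1.0f; /* assumed constants */
        const float c_intersect = 1.0f;

        return c_traversal +
               c_intersect * (area_left / area_parent) * (float)count_left +
               c_intersect * (area_right / area_parent) * (float)count_right;
        /* The removed variant additionally weighted the child terms by
         * logf(count), which cost build time without measurably helping
         * raytracing performance. */
    }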
Compositor: the Texture node didn't use texture nodes itself.
Now composites initialize texture nodes correctly.
Also reviewed the fix for crashing texture nodes for displace.
It appears texture nodes are also used for sculpt/paint
brushes; in these cases it can be allowed again. But don't
do this during rendering for now!
* The problem is that the shadow pass is derived from the diffuse pass as
shad = shad'/diff, where shad' = shad*diff. In cases where diff is
0 and the division can't be done, shad is left as shad' (= 0).
* This all works just fine until the diffuse color is 0 on just one
channel (no red in the material color, for example). In this case the shadow
pass is left as 0 too, regardless of the existence of an actual shadow,
so the end result is a colored shadow!
* The only real solution is to use the original shadow intensity to
determine if there actually is a shadow or not. This is now stored in
shr->shad[3] from the lamp shadow calculation.
Note: The best solution would probably be to calculate the shadow pass on
its own and not derive it from the diffuse pass, but I didn't dare to
start messing up the shading code totally.
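Per channel, the derivation with the fallback looks roughly like this
(an illustrative sketch, not the actual shading code; only shr->shad[3]
is taken from the description above):

    /* diff_shad = shad' (shadowed diffuse), diff = unshadowed diffuse,
     * shad[3] = original lamp shadow intensity. */
    static void derive_shadow_pass(float shad[4], const float diff_shad[3],
                                   const float diff[3])
    {
        int c;
        for (c = 0; c < 3; c++) {
            if (diff[c] != 0.0f) {
                shad[c] = diff_shad[c] / diff[c]; /* shad = shad'/diff */
            }
            else {
                /* Zero diffuse channel: fall back to the stored intensity
                 * instead of leaving 0, which caused colored shadows. */
                shad[c] = shad[3];
            }
        }
    }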
Now, rather than the bit-too-alarming stop sign, threaded wmJobs
display a progress indicator in the header. This is an optional feature
for each job type, and still uses the same hardcoded UI template
(could use further work here...).
Currently implemented for:
Render - parts completed, then nodes comped
Compositor - nodes comped
Fluid Sim - frames simulated
Texture Bake - faces baked
Example: http://mke3.net/blender/devel/2.5/progress.mov
* Remove the manual OSA method and instead pass on derivatives to the
textures (see the sketch after this list). This means that at the moment
e.g. the bricks node is not antialiased, but image textures now use
mipmaps. Doing oversampling on the whole nodetree is convenient, but it
is really the individual textures that can do filtering best and quickest.
* Image textures in a texture node tree were not color corrected and
did not support 2d mapping; now shadeinput is passed along to
make this possible. Would like to avoid this but not sure how.
* Fix preview not filling in all pixels when scaling or rotating in
the texture nodes.
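How derivatives drive mipmap selection, as a standalone illustration
(the standard technique, not the actual texture code):

    #include <math.h>

    /* dx, dy: texture-coordinate derivatives across one pixel;
     * texsize: image resolution in texels. Returns the mip level. */
    static float mip_level_from_derivatives(const float dx[2], const float dy[2],
                                            float texsize)
    {
        float lx = sqrtf(dx[0] * dx[0] + dx[1] * dx[1]);
        float ly = sqrtf(dy[0] * dy[0] + dy[1] * dy[1]);
        float footprint = (lx > ly ? lx : ly) * texsize; /* texels per pixel */

        /* One mip level per doubling of the pixel footprint. */
        return (footprint > 1.0f) ? log2f(footprint) : 0.0f;
    }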
Ambient occlusion: multiplied with direct lighting by default; add
is also still available, and more blending methods might be added if
they are useful. This is fundamentally a non-physical effect.
Environment lighting: always added as you would expect (though you can
subtract by specifying negative energy). This can be just white or take
colors or textures from the world.
Indirect lighting: only supported for AAO at the moment (and is still
too approximate), and is also always added. A factor is available to
specify how much is added, though a value of 1.0 is correct.
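Put together, the three contributions combine roughly like this (an
illustrative sketch; the names and parameters are assumptions, not the
shading code):

    /* direct: direct lighting; ao: occlusion factor in [0,1];
     * env: environment light; indirect: single-bounce term. */
    static float combine_lighting(float direct, float ao, int ao_multiply,
                                  float env, float indirect,
                                  float indirect_factor)
    {
        /* AO: multiplied with direct lighting by default, 'add' is
         * still available as an alternative. */
        float result = ao_multiply ? direct * ao : direct + ao;

        result += env;                        /* environment light is always added */
        result += indirect_factor * indirect; /* 1.0 is the correct factor */
        return result;
    }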
Also:
* Material ambient value now defaults to 1.0.
* Added Environment, Indirect and Emit pass.
* "Both" blending method is no longer available.
* Attenuation and sampling parameters are still shared; some could be split
up, though if they are different this would affect performance.
This brings back the single bounce indirect diffuse lighting for AAO;
it's not integrated well, but that will be tackled later as part of the
shading system refactor and subdivision changes. The caveats are the
same as for AAO, with one extra thing: the diffuse lighting is sampled once
per face, so it will not be accurate unless faces are subdivided.
I'm committing this now so we can start testing it for Durian, and
since the changes needed to make it work properly are planned.
* Transparency is now its own panel, with a boolean toggle
plus an enum for z/ray transparency (following a mockup made by
William). Also had to change DNA flags for this.
* Disabled radiosity a bit more in the render engine; it still had
some effects like auto autosmooth.
* Make some sliders in material buttons percentages in RNA.
* Some other small tweaks in layout and naming.
Integration is still very rough around the edges and WIP, but it works, and can render smoke (using new Smoke format in Voxel Data texture) --> http://vimeo.com/6030983
More to come, but this makes things much easier to work on for me :)
*"Added" SCE_PASS_RAYHITS to visually see each pixel primitive and BB tests (not-completed for UI)
*Added runtime exchange of tree structure
*Removed FLOAT_EPSILON from BLI_bvhkdop BB's
svn merge https://svn.blender.org/svnroot/bf-blender/trunk/blender -r19820:HEAD
Notes:
* Game and sequencer RNA, and the sequencer header, are now a bit out of
date after changes in trunk.
* I didn't know how to port these bugfixes; most likely they are
not needed anymore:
* Fix: "duplicate strip" always increased the user count for the ipo.
* IPO pinning on sequencer strips was lost during Undo.
Robin (Frrr) Allen did a decent job on this, so we can also welcome him
as a member of the svn committers team to maintain it!
I did the first commit with some minor fixes:
- got the Makefiles to work
- fixed a rounding issue with tiles on unit faces
- removed UI includes from the texture node
A nice doc in the wiki is here:
http://wiki.blender.org/index.php/User:Frr/TexnodeManual
On the todo for Robin is:
- When using one or more Texture-input nodes, you cannot edit them by activating
(as works now for Material nodes).
- The new "output node" option fails on the default case, when only one
output node is active. It then shows often a blank menu. Will get fixed asap.
- When using a NodeTree-Texture as input node, the menu for 'active output'
should not show. NodeTree should ignore other nodetrees to keep things sane
for now.
- On the future todo list is proper usage of the "Dxt" and "Dyt" texture
vectors for superior antialiasing of checkers/bricks.
General note: I know people are dying to get a fully integrated shader system
with nodes. In theory we could merge this with Material Nodetrees... but I'd
rather wait for a solid and very well thought out design proposal for this,
also including design ideas for unifying with a shader language (GPU, CPU).
For the time being this is a nice extension of current textures. :)
This was a bit complicated to do, but is working pretty well now, and can make shading significantly faster to render.
This option pre-calculates self-shading information into a
3d voxel grid before rendering, then uses and interpolates
that data during the main rendering phase, rather than
calculating shading for each sample. It's an approximation
and isn't as accurate as getting the lighting directly,
but in many cases it looks very similar and renders much faster.
The voxel grid covers the object's 3D screen-aligned bounding box
so this may not be that useful for large volume regions like a
big range of cloud cover, since you'll need a lot of resolution.
The render time speaks for itself here:
http://mke3.net/blender/devel/rendering/volumetrics/vol_light_cache_interpolation.jpg
The resolution is set in the volume panel - it's the resolution
of one edge of the voxel grid. Keep in mind that the higher the
resolution, the more memory is needed, as in fluid sim. The
memory requirements increase with the cube of the edge
resolution, so be careful. I might try to add a little memory
calculator thing like fluid sim has there later.
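That calculator would be a one-liner; a sketch, assuming one stored
value per voxel:

    #include <stddef.h>

    /* res^3 voxels, each storing bytes_per_voxel of shading data. */
    static size_t light_cache_memory_bytes(int res, size_t bytes_per_voxel)
    {
        return (size_t)res * (size_t)res * (size_t)res * bytes_per_voxel;
    }

    /* e.g. res = 50 with one float per voxel: 50*50*50*4 = 500000 bytes
     * (~0.5 MB); doubling res to 100 gives ~4 MB, eight times as much. */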
The voxels are interpolated using trilinear interpolation -
here's a comparison image I made during testing:
http://mke3.net/blender/devel/rendering/volumetrics/vol_light_cache_compare.jpg
There might still be a couple of little tweaks I can do to
improve the visual quality; I'll see.
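Trilinear interpolation of such a grid, as a standalone reference (the
standard formulation; the actual cache code differs):

    /* Sample a res^3 float grid at voxel-space coords (x, y, z),
     * assumed to lie inside the grid. */
    static float trilinear_sample(const float *grid, int res,
                                  float x, float y, float z)
    {
        int x0 = (int)x, y0 = (int)y, z0 = (int)z;
        int x1 = x0 + 1 < res ? x0 + 1 : x0;
        int y1 = y0 + 1 < res ? y0 + 1 : y0;
        int z1 = z0 + 1 < res ? z0 + 1 : z0;
        float fx = x - x0, fy = y - y0, fz = z - z0;

    #define V(i, j, k) grid[(k) * res * res + (j) * res + (i)]
        /* interpolate along x on the four cell edges ... */
        float c00 = V(x0, y0, z0) * (1 - fx) + V(x1, y0, z0) * fx;
        float c10 = V(x0, y1, z0) * (1 - fx) + V(x1, y1, z0) * fx;
        float c01 = V(x0, y0, z1) * (1 - fx) + V(x1, y0, z1) * fx;
        float c11 = V(x0, y1, z1) * (1 - fx) + V(x1, y1, z1) * fx;
    #undef V

        /* ... then along y, then along z. */
        float c0 = c00 * (1 - fy) + c10 * fy;
        float c1 = c01 * (1 - fy) + c11 * fy;
        return c0 * (1 - fz) + c1 * fz;
    }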
due to jittering of the start position for antialiasing in a pixel.
Now it distributes the start position over the fixed osa sample
positions, instead of over random positions in space. The ugly bit is
that a custom ordering was defined for osa 8/11/16 to ensure that the
first 4 are distributed relatively fairly, so adaptive sampling can
decide whether more samples need to be taken.
Otherwise known as a phase function, this determines in which directions
the light is scattered in the volume. Until now it's been isotropic
scattering, meaning that the light gets scattered equally in all
directions. This adds some new types for anisotropic scattering, to
scatter light more forwards or backwards towards the viewing direction,
which can be more similar to how light is scattered by particles in nature.
Here's a diagram of how light is scattered isotropically and anisotropically:
http://mke3.net/blender/devel/rendering/volumetrics/phase_diagram.png
The new additions are:
- Rayleigh
describes scattering by very small particles in the atmosphere.
- Mie Hazy / Mie Murky
more generalised, describes scattering from large particle sizes.
- Henyey-Greenstein
a very flexible formula that can be used to simulate a wide range of
scattering. It uses an additional 'Asymmetry' slider, ranging from -1.0
(backward scattering) to 1.0 (forward scattering) to control the
direction of scattering.
- Schlick
an approximation of Henyey-Greenstein, working similarly but faster.
And a description of how they look visually (just an omnidirectional lamp
inside a volume box):
http://mke3.net/blender/devel/rendering/volumetrics/phasefunctions.jpg
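For reference, the Henyey-Greenstein formula and the Schlick
approximation mentioned above, in their standard forms (not the render
code itself):

    #include <math.h>

    /* g is the 'Asymmetry' value: -1.0 backward, 0 isotropic, 1.0 forward.
     * cos_theta is the cosine of the scattering angle. */
    static float phase_henyey_greenstein(float g, float cos_theta)
    {
        float d = 1.0f + g * g - 2.0f * g * cos_theta;
        return (1.0f - g * g) / (4.0f * (float)M_PI * d * sqrtf(d));
    }

    static float phase_schlick(float g, float cos_theta)
    {
        /* k is fitted so Schlick approximates HG at the same asymmetry,
         * trading the 3/2 power for a cheaper squared denominator. */
        float k = 1.55f * g - 0.55f * g * g * g;
        float d = 1.0f + k * cos_theta;
        return (1.0f - k * k) / (4.0f * (float)M_PI * d * d);
    }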
* Sun/sky integration
Volumes now correctly render in front of the new physical sky. Atmosphere
still doesn't work correctly with volumes, due to something that I hope
can be fixed in the atmosphere rendering, but the sky looks quite good.
http://mke3.net/blender/devel/rendering/volumetrics/sky_clouds.png
This also works very nicely with the anisotropic scattering, giving
clouds their signature bright halos when the sun is behind them:
http://mke3.net/blender/devel/rendering/volumetrics/phase_cloud.mov
In comparison, here's a render with isotropic scattering:
http://mke3.net/blender/devel/rendering/volumetrics/phase_cloud_isotropic.png
* Added back the max volume depth tracing limit as a hard-coded value;
fixes crashes with weird geometry, like the overlapping faces around
Suzanne's eyes. As a general note, it's always best to use volume
materials on airtight geometry, without intersecting or overlapping faces.
solids, in front of other volumes, etc. Now there's a 'layer depth'
value that works similarly to refraction depth - a limit for how many
times the view ray will penetrate different volumetric surfaces.
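The layer depth limit behaves roughly like the loop below; the helper
functions are hypothetical, named only for illustration:

    typedef struct VolRay { float origin[3], dir[3]; } VolRay;

    int hit_next_volume_surface(VolRay *ray);  /* hypothetical */
    void shade_volume_segment(VolRay *ray);    /* hypothetical */

    static void trace_volumes(VolRay *ray, int layer_depth_max)
    {
        int layers = 0;

        /* Stop once the ray has penetrated the allowed number of
         * different volumetric surfaces, like refraction depth. */
        while (layers < layer_depth_max && hit_next_volume_surface(ray)) {
            shade_volume_segment(ray);
            layers++;
        }
    }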
I have it close to being able to return alpha, but it's still not 100%
correct and needs a bit more work. Going to sit on this for a while.