- move: material.add_texture(tex, coords, mapto) --> material.texture_slots.add()
- added material.texture_slots.create(index), material.texture_slots.clear(index)
- texture slot functions also work for lamp and world now.
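For example, where a script previously called material.add_texture(tex, coords, mapto), it now adds an empty slot and fills it in. A minimal bpy sketch (the slot attribute names are the usual 2.5-era ones and may differ slightly):

    import bpy

    mat = bpy.data.materials["Material"]
    tex = bpy.data.textures["Texture"]

    # old: mat.add_texture(tex, 'UV', 'COLOR')
    # new: add an empty slot, then configure it
    slot = mat.texture_slots.add()
    slot.texture = tex
    slot.texture_coords = 'UV'
    slot.use_map_color_diffuse = True

    # create/clear operate on a specific slot index
    mat.texture_slots.create(5)
    mat.texture_slots.clear(5)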
Other minor changes
- allow rna functions to set FUNC_NO_SELF and FUNC_USE_SELF_ID at once.
- [#23317] Changed some operators' RNA to accept lengths. A modification I made to this patch stopped it working as intended; removed that edit so unit buttons appear in the UI for certain operators.
- Sphinx doc gen: 2 columns rather than 3, since 3 didn't quite fit in some cases.
Changed some RNA names to be more consistent
- use_texture -> use_image, since it sets whether 'image' is used.
- use_map_color_diff -> use_map_color_diffuse, since 'diffuse' is used elsewhere in the same type.
numbers for negative influences (as opposed to the old 3-state button),
but the UI range was only set to 0,1.
Changed the defaults to -1,1 and added a shortcut: pressing the minus
key while the mouse is over a number field or slider will make it negative.
Also: Changed 'Spread' value to be proportional to the light cache voxel grid
(i.e. 0.5 spreads half the width of the grid), so that it's independent of light
cache resolution. This means that results should be similar as you increase/
decrease resolution.
* Property update functions no longer get context; instead they get only
Main and Scene. The RNA API was intended to be as context-less as
possible, since it doesn't really matter who is changing the property;
everything that uses the property should be updated.
* There's still one exception that uses it for now: screen operations
still depend on context too much. This also revealed a few places using
context where they shouldn't.
* Ideally Scene shouldn't be passed either, but much of Blender still
depends on it; it should be dropped when we try to support multiple-scene
editing. The change was planned for a while, but it's needed now to be
able to call update without a context pointer.
After testing and feedback, I've decided to slightly modify the way color
management works internally. While the previous method worked well for
rendering, was a smaller transition, and had some advantages over this
new method, it was a bit more ambiguous and made things difficult
for other areas such as compositing.
This implementation now considers all color data (with only a couple of
exceptions such as brush colors) to be stored in linear RGB color space,
rather than sRGB as previously. This brings it in line with Nuke, which also
operates this way, quite successfully. Color swatches, pickers, color ramp
display are now gamma corrected to display gamma so you can see what
you're doing, but the numbers themselves are considered linear. This
makes understanding blending modes more clear (a 0.5 value on overlay
will not change the result now) as well as making color swatches act more
predictably in the compositor. However, bringing over color values from
applications like Photoshop or GIMP, which operate in a gamma space,
will not give identical results.
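To illustrate the distinction, the standard sRGB transfer functions look roughly like this (a conceptual sketch; Blender's internal conversion may differ in detail):

    def srgb_to_linear(c):
        # display (sRGB) value -> scene-linear value, per channel
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        # scene-linear value -> display (sRGB) value, as used for swatch display
        return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

    # a swatch displayed as mid-grey 0.5 is stored internally as roughly 0.214 linear
    print(round(srgb_to_linear(0.5), 3))  # 0.214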
This commit will convert over existing files saved by earlier 2.5 versions to
work generally the same, though there may be some slight differences with
things like textures. Now that we're set on changing other areas of shading,
this won't be too disruptive overall.
I've made a diagram explaining the pipeline here:
http://mke3.net/blender/devel/2.5/25_linear_workflow_pipeline.png
and some docs here:
http://www.blender.org/development/release-logs/blender-250/color-management/
Collection properties can now have functions and properties of their own, e.g.:
scene.objects.link()
object.constraints.new()
mesh.verts.transform(...)
mesh.faces.active
PropertyRNA stores a StructRNA pointer where these can be defined.
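A minimal bpy sketch of calling such collection functions (assuming 2.5-era names; mesh attribute names in particular changed over time):

    import bpy

    scene = bpy.context.scene

    # create an object and link it into the scene's object collection
    ob = bpy.data.objects.new("Example", None)
    scene.objects.link(ob)

    # add a constraint via the function on the object's constraint collection
    con = ob.constraints.new(type='COPY_LOCATION')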
* Enums with an _itemf callback now never get a NULL context passed in;
rather, a fixed list of enum items is defined which should contain
all items (if possible), from which the _itemf callback can then use
a subset.
Since the deep shadow buffer summer of code project is not actively under
development anymore, I decided to build my own DSM implementation from
scratch, based on reusing as much existing shadow buffer code as possible.
It's not very advanced, but implements the basic algorithm. Just enough so
we can do shading tests with it, optimizations and other improvements can
be done later.
Supported:
* Classical shadow buffer options: filter, soft, bias, ..
* Multiple sample buffers, merged into one.
* Halfway trick to support lower bias.
* Compression with user defined threshold.
* Non-textured alpha transparency, using Casting Alpha value.
* Strand render.
Not Supported:
* Tiling disk cache, so it can use a lot of memory.
* Per-part rendering for lower memory usage during creation.
* Colored shadow.
* Textured color/alpha shadow.
* Mipmaps for faster filtering.
* Volume shadows.
Usage Hints:
* Use sample buffers + smaller size rather than large size.
* For example 512 size x 9 sample buffers instead of 2048 x 1.
* Compression threshold 0.05 works, but is on the conservative side.
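As a rough bpy sketch of those hints (the property names and enum identifiers below are assumptions about the 2.5-era lamp RNA and may not match exactly):

    import bpy

    lamp = bpy.data.lamps["Spot"]             # deep buffers apply to buffered spot lamp shadows
    lamp.shadow_method = 'BUFFER_SHADOW'      # assumed identifier for buffered shadows
    lamp.shadow_buffer_type = 'DEEP'          # assumed identifier for the deep shadow buffer
    lamp.shadow_buffer_size = 512             # smaller buffer size...
    lamp.shadow_sample_buffers = 'BUFFERS_9'  # ...merged from 9 sample buffers
    lamp.compression_threshold = 0.05         # conservative threshold, as noted above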
Selecting a material in the node tree sets this as the active material and the buttons view redraws.
Added rna prop material.active_node_material
Currently it's not clear which settings are used by the node material and which by the base material (needs some tedious research), so I made most panels use the node material, with the exceptions of volumetrics, physics and halo settings.
We'll probably need to split the panels up to do this properly.
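For example, code that needs the material whose settings are being shown could do something like this (a small sketch, assuming active_node_material points to the active node's material and is None when there isn't one):

    import bpy

    mat = bpy.context.object.active_material
    # most panels now act on the active node material when nodes are used,
    # falling back to the base material otherwise
    shown = mat.active_node_material if (mat.use_nodes and mat.active_node_material) else mat
    print(shown.name)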
Volumes can now receive shadows from external objects, either raytraced shadows or shadow maps.
To use external shadows, enable 'external shadows' in the volume material's 'lighting' panel. This is an extra toggle since it causes a performance hit, but it can probably be revisited/optimised when the new raytrace accelerator is integrated. For shadow maps at least, it's still very quick.
Renamed 'scattering mode' to 'lighting mode' (a bit simpler to understand), and renamed the options inside. Now there's:
- Shadeless
takes light contribution, but without shadowing or self-shading (fast)
good for fog-like volumes, such as mist, or underwater effects
- Shadowed (new)
takes light contribution with shadows, but no self-shading. (medium)
good for mist etc. with directional light sources
eg. http://vimeo.com/6901636
- Shaded
takes light contribution with internal/external shadows, and self shading (slower)
good for thicker/textured volumes like smoke
- Multiple scattering etc. (still doesn't work properly, on the todo).
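Selecting between these from a script would look something like the following (the property and enum names are assumptions about the volume material RNA and may not match exactly):

    import bpy

    vol = bpy.data.materials["Smoke"].volume
    vol.light_method = 'SHADED'       # assumed enum: 'SHADELESS', 'SHADOWED', 'SHADED', ...
    vol.use_external_shadows = True   # the extra toggle from the 'lighting' panel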
After code review and experimentation, this commit makes some changes to the way that volumes are shaded. Previously, there were problems with the 'scattering' component, in that it wasn't physically correct - it didn't conserve energy and was just acting as a brightness multiplier. This has been changed to be more correct, so that as the light is scattered out of the volume, there is less remaining to penetrate through.
Since this behaviour is very similar to absorption but more useful, absorption has been removed and has been replaced by a 'transmission colour' - controlling the colour of light penetrating through the volume after it has been scattered/absorbed. As well as this, there's now 'reflection', a non-physically correct RGB multiplier for out-scattered light. This is handy for tweaking the overall colour of the volume, without having to worry about wavelength dependent absorption, and its effects on transmitted light. Now at least, even though there is the ability to tweak things non-physically, volume shading is physically based by default, and has a better combination of correctness and ease of use.
There's more detailed information and example images here:
http://wiki.blender.org/index.php/User:Broken/VolumeRendering
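Conceptually, the energy conservation described above behaves like Beer-Lambert attenuation: whatever is scattered out along a step is no longer available for transmission. A rough single-step illustration (not the actual shader code; names are made up for the sketch):

    import math

    def step_shade(light, scattering, distance, transmission_color, reflection):
        tau = math.exp(-scattering * distance)            # fraction passing through the step
        transmitted = light * tau * transmission_color    # tinted by the 'transmission colour'
        out_scattered = light * (1.0 - tau) * reflection  # 'reflection' scales out-scattered light
        return transmitted, out_scattered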
Also did some tweaks/optimisation:
* Removed shading step size (was a bit annoying, if it comes back, it will be in a different form)
* Removed phase function options, now just one asymmetry slider controls the range between back-scattering, isotropic scattering, and forward scattering. (note, more extreme values gives artifacts with light cache, will fix...)
* Disabled the extra 'bounce lights' from the preview render for volumes, speeds updates significantly
* Enabled voxeldata texture in preview render
* Fixed volume shadows (they were too dark, fixed by avoiding using the shadfac/AddAlphaLight stuff)
More revisions to come later...
Editors Modules
* render/ module added in editors, moved the preview render code there and
also shading related operators.
* physics/ module made more consistent with other modules: renamed files,
made a single physics_ops.c for operators and keymaps. Also moved all
particle related operators here now.
* space_buttons/ now should have only operators relevant to the buttons
specifically.
Updates & Notifiers
* Material/Texture/World/Lamp can now be passed to DAG_id_flush_update,
which will go back to a callback in editors. Eventually these should
be in the depsgraph itself, but for now this gives a unified call for
doing updates.
* GLSL materials are now refreshed on changes. There are still various
cases missing.
* Preview icons now hook into this system, solving various update cases
that were missed before.
* Also fixes an issue from my last commit where some previews would not
render; the problem is avoided in the new system.
Icon Rendering
* On systems with support for non-power-of-two textures, an OpenGL texture
is now used instead of glDrawPixels. This avoids problems with icons getting
clipped on region borders. On my Linux desktop, this gives a 1.1x speedup,
and on my Mac laptop a 2.3x speedup overall in redrawing the full window,
with the default setup. The glDrawPixels implementation on Mac seems to
have a lot of overhead.
* Preview icons are now drawn using proper premul alpha, and never faded so
you can see them clearly.
* Also tried to fix an issue with texture node preview rendering; globals
can't be used reliably with threads.
* copied I/O scripts
* copied, modified rna_*_api.c and rna_*.c
I/O scripts not working yet due to slight BPY differences and RNA changes. Will fix them later.
Not merged changes:
* C unit testing integration, because it is clumsy
* scons cross-compiling, can be merged easily later