Safe method to move render results to the displayed image.
It now allocates a single image for display, and on each
refresh callback from the renderer it copies the refreshed
section over to this image, in 32 bits. While rendering,
that image then only shows progress updates, as usual.
This also now works for scenes used in composites and for
composite results.
This should solve the reported crashes for MBlur or SSS.
Added WM Jobs manager
- WM can manage threaded jobs for you; just provide a couple
  of components to get it working:
- customdata, free callback for it
- timer step, notifier code
- start callback, update callback
- Once started, each job runs its own timer, and on every
  time step it checks for necessary updates, or closes the
  job when ready.
- No drawing happens in jobs, that's for notifiers!
- Every job stores an owner pointer, and based on this owner
  it prevents multiple jobs from entering the stack.
  Instead it re-uses the running job, signals it to stop,
  and even allows the caller to re-initialize it.
- Check new wm_jobs.c for more explanation. Jobs API is still
under construction.
Fun: BLI_addtail(&wm->jobs, steve); :)
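For illustration, here is a small self-contained C sketch of that
callback pattern (all names here are invented stand-ins, not the actual
wm_jobs API; the flags are polled without locking to keep it short, real
code should synchronize properly):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical stand-ins for the components listed above: customdata
     * plus a free callback, a start callback that runs in a thread, and
     * an update callback checked on every timer step. */
    typedef struct Job {
        void *customdata;
        void (*free_customdata)(void *);
        void (*startjob)(void *customdata, int *stop, int *do_update);
        void (*update)(void *customdata);
        int stop, do_update, running;
        pthread_t thread;
    } Job;

    static void *job_thread(void *jobv)
    {
        Job *job = jobv;
        job->startjob(job->customdata, &job->stop, &job->do_update);
        job->running = 0;
        return NULL;
    }

    static void job_start(Job *job)
    {
        job->stop = 0;
        job->do_update = 0;
        job->running = 1;
        pthread_create(&job->thread, NULL, job_thread, job);
    }

    /* One timer step: check for updates, close the job when ready. No
     * drawing here; a real implementation would send a notifier instead. */
    static int job_timer_step(Job *job)
    {
        if (job->do_update) {
            job->do_update = 0;
            job->update(job->customdata);
        }
        if (!job->running) {
            pthread_join(job->thread, NULL);
            if (job->free_customdata)
                job->free_customdata(job->customdata);
            return 0;  /* job finished, the timer can stop */
        }
        return 1;
    }

    /* Example worker: "renders" 5 tiles, flagging an update after each. */
    static void demo_startjob(void *customdata, int *stop, int *do_update)
    {
        int *progress = customdata;
        for (*progress = 0; *progress < 5 && !*stop; (*progress)++) {
            usleep(100000);  /* pretend to render a tile */
            *do_update = 1;
        }
    }

    static void demo_update(void *customdata)
    {
        printf("progress: %d tiles done\n", *(int *)customdata);
    }

    int main(void)
    {
        Job job = {calloc(1, sizeof(int)), free, demo_startjob, demo_update};
        job_start(&job);
        while (job_timer_step(&job))
            usleep(50000);  /* stands in for the wm timer step */
        return 0;
    }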
Put Node shader previews back using wmJobs
- Preview calculating is now fully threaded (1 thread still)
- Thanks to new event system + notifiers, you can see
previews update even while dragging sliders!
- Currently it only starts when you change a node setting.
Warning: the thread render shares Node data, so don't delete
nodes while it renders! Making this safe is on the todo list.
Also:
- a bug in region initialization (do_versions) showed the channel
  list in the node editor wrongly.
- the channel list is now flagged 'hidden'; it was really in the
  way! It's something to work on later anyway.
- recoded the Render API callbacks so handlers get passed on;
  no globals to use anymore, remember?
- the previewrender code now gets so much nicer! Will remove a
  lot of stuff from the code soon.
Think global, act local!
The old favorite G.scene is gone! Man... that took almost 2 days.
Also removed G.curscreen and G.edbo.
Not everything could be solved; here are some notes.
- modifiers now store the current scene in ModifierData. This is not
  meant to be permanent, but it can probably stick there until we've
  cleaned up the anim system and depsgraph to cope better with
  timing issues.
- Game engine G.scene should become an argument for starting it.
  Didn't solve this yet.
- Texture nodes should get the scene's cfra, but the current
  implementation is too tightly wrapped to do it easily.
Cleanup
- for portability we can keep the old ugly defines for retrieving
  the active object, cfra and so on. But they will use 'scene',
  not G.scene.
- fixed code that uses those defines.
- some unused variables/functions removed
Robin (Frrr) Allen did a decent job on this, so we can also welcome
him as a member of the svn committers team to maintain it!
I do the first commit with some minor fixes:
- get the Makefiles to work
- fix rounding issue with tiles on unit faces
- removed UI includes from tex node
A nice doc in wiki is here:
http://wiki.blender.org/index.php/User:Frr/TexnodeManual
On the todo for Robin is:
- When using one or more Texture-input nodes, you cannot edit them by
  activating them (as now works for Material nodes).
- The new "output node" option fails in the default case, when only one
  output node is active. It then often shows a blank menu. Will get
  fixed asap.
- When using a NodeTree-Texture as input node, the menu for 'active output'
should not show. NodeTree should ignore other nodetrees to keep things sane
for now.
- On a future todo is proper usage of "Dxt" and "Dyt" texture vectors
  for superior antialiasing of checkers/bricks.
General note: I know people are dying to get a fully integrated shader
system with nodes. In theory we could merge this with Material
Nodetrees... but I'd rather wait for a solid and very well thought out
design proposal for this, one that also includes design ideas for
unifying with a shader language (GPU, CPU).
For the time being this is a nice extension of current textures. :)
This was a bit complicated to do, but it's working pretty well now, and
can make shading significantly faster to render.
This option pre-calculates self-shading information into a
3d voxel grid before rendering, then uses and interpolates
that data during the main rendering phase, rather than
calculating shading for each sample. It's an approximation
and isn't as accurate as getting the lighting directly,
but in many cases it looks very similar and renders much faster.
The voxel grid covers the object's 3D screen-aligned bounding box
so this may not be that useful for large volume regions like a
big range of cloud cover, since you'll need a lot of resolution.
The render time speaks for itself here:
http://mke3.net/blender/devel/rendering/volumetrics/vol_light_cache_interpolation.jpg
The resolution is set in the volume panel - it's the resolution
of one edge of the voxel grid. Keep in mind that the higher the
resolution, the more memory is needed, as in the fluid sim. The
memory requirements increase with the cube of the edge
resolution, so be careful. I might try to add a little memory
calculator there later, like the fluid sim has.
The voxels are interpolated using trilinear interpolation -
here's a comparison image I made during testing:
http://mke3.net/blender/devel/rendering/volumetrics/vol_light_cache_compare.jpg
There might still be a couple of little tweaks I can do to
improve the visual quality; I'll see.
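For reference, here's a small C sketch of a trilinear lookup into such a
grid (layout assumed: a res^3 float array, with co already mapped into
0..1 over the bounding box - not the actual render code). The cube law
above is easy to check: at 4 bytes per voxel, resolution 50 is 125,000
voxels (~0.5 MB), while resolution 200 is already 8 million (~32 MB).

    /* Trilinear interpolation in a res*res*res float grid; co in [0,1]^3
     * (assumed clamped by the caller). Memory is res^3 * sizeof(float),
     * hence the cube law mentioned above. */
    static float voxel_sample(const float *grid, int res, const float co[3])
    {
        float x = co[0] * (res - 1), y = co[1] * (res - 1), z = co[2] * (res - 1);
        int x0 = (int)x, y0 = (int)y, z0 = (int)z;
        int x1 = x0 < res - 1 ? x0 + 1 : x0;
        int y1 = y0 < res - 1 ? y0 + 1 : y0;
        int z1 = z0 < res - 1 ? z0 + 1 : z0;
        float tx = x - x0, ty = y - y0, tz = z - z0;

    #define V(X, Y, Z) grid[(Z) * res * res + (Y) * res + (X)]
        float c00 = V(x0, y0, z0) * (1.0f - tx) + V(x1, y0, z0) * tx;
        float c10 = V(x0, y1, z0) * (1.0f - tx) + V(x1, y1, z0) * tx;
        float c01 = V(x0, y0, z1) * (1.0f - tx) + V(x1, y0, z1) * tx;
        float c11 = V(x0, y1, z1) * (1.0f - tx) + V(x1, y1, z1) * tx;
    #undef V
        float c0 = c00 * (1.0f - ty) + c10 * ty;
        float c1 = c01 * (1.0f - ty) + c11 * ty;
        return c0 * (1.0f - tz) + c1 * tz;
    }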
due to jittering of the start position for antialiasing in a pixel.
Now it distributes the start position over the fixed osa sample
positions, instead of over random positions in space. The ugly bit is
that a custom ordering had to be defined for osa 8/11/16 to ensure
that the first 4 samples are distributed relatively evenly, so that
adaptive sampling can decide if more samples need to be taken.
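To illustrate (sample values invented, not Blender's actual tables): a
fixed table of offsets inside the pixel, plus an ordering that
front-loads four well-spread samples so adaptive sampling can look at
just those first:

    /* Hypothetical fixed offsets for 8 samples inside a pixel, and an
     * ordering whose first four entries cover all four quadrants. */
    static const float osa8_ofs[8][2] = {
        {0.0625f, 0.4375f}, {0.1875f, 0.9375f}, {0.3125f, 0.1875f},
        {0.4375f, 0.6875f}, {0.5625f, 0.3125f}, {0.6875f, 0.8125f},
        {0.8125f, 0.0625f}, {0.9375f, 0.5625f},
    };
    static const int osa8_order[8] = {2, 5, 1, 6, 0, 3, 4, 7};

    /* Start position for sample s of the pixel at (x, y): fixed per
     * sample instead of randomly jittered. */
    static void sample_start(int x, int y, int s, float start[2])
    {
        start[0] = (float)x + osa8_ofs[osa8_order[s]][0];
        start[1] = (float)y + osa8_ofs[osa8_order[s]][1];
    }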
Otherwise known as a phase function, this determines in which directions
the light is scattered in the volume. Until now it's been isotropic
scattering, meaning that the light gets scattered equally in all
directions. This adds some new types for anisotropic scattering, to
scatter light more forwards or backwards towards the viewing direction,
which can be more similar to how light is scattered by particles in nature.
Here's a diagram of how light is scattered isotropically and anisotropically:
http://mke3.net/blender/devel/rendering/volumetrics/phase_diagram.png
The new additions are:
- Rayleigh
describes scattering by very small particles in the atmosphere.
- Mie Hazy / Mie Murky
more generalised, describes scattering from large particle sizes.
- Henyey-Greenstein
a very flexible formula that can be used to simulate a wide range of
scattering. It uses an additional 'Asymmetry' slider, ranging from -1.0
(backward scattering) to 1.0 (forward scattering) to control the
direction of scattering.
- Schlick
an approximation of Henyey-Greenstein, working similarly but faster.
And a description of how they look visually (just an omnidirectional
lamp inside a volume box):
http://mke3.net/blender/devel/rendering/volumetrics/phasefunctions.jpg
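For the curious, the Henyey-Greenstein formula and the Schlick
approximation are compact enough to sketch in C (standard textbook
forms; sign conventions for cos_theta vary between sources, and the
exact constants in Blender's implementation may differ):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Henyey-Greenstein: g is the 'Asymmetry' value in (-1,1), cos_theta
     * the cosine of the scattering angle. g < 0 scatters backward,
     * g > 0 forward, g = 0 is isotropic. */
    static float phase_hg(float g, float cos_theta)
    {
        float denom = 1.0f + g * g - 2.0f * g * cos_theta;
        return (1.0f - g * g) / (4.0f * (float)M_PI * denom * sqrtf(denom));
    }

    /* Schlick: replaces the pow-3/2 with a square, with
     * k ~= 1.55g - 0.55g^3 chosen to match Henyey-Greenstein closely. */
    static float phase_schlick(float g, float cos_theta)
    {
        float k = 1.55f * g - 0.55f * g * g * g;
        float denom = 1.0f - k * cos_theta;
        return (1.0f - k * k) / (4.0f * (float)M_PI * denom * denom);
    }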
* Sun/sky integration
Volumes now correctly render in front of the new physical sky. The
atmosphere still doesn't work correctly with volumes, due to something
that I hope can be fixed in the atmosphere rendering, but the sky
looks quite good.
http://mke3.net/blender/devel/rendering/volumetrics/sky_clouds.png
This also works very nicely with the anisotropic scattering, giving
clouds their signature bright halos when the sun is behind them:
http://mke3.net/blender/devel/rendering/volumetrics/phase_cloud.mov
In comparison, here's a render with isotropic scattering:
http://mke3.net/blender/devel/rendering/volumetrics/phase_cloud_isotropic.png
* Added back the max volume depth tracing limit, as a hard coded value -
fixes crashes with weird geometry, like the overlapping faces around
suzanne's eyes. As a general note, it's always best to use volume
materials on airtight geometry, without intersecting or overlapping faces.
solids, in front of other volumes, etc. Now there's a 'layer depth'
value that works similarly to refraction depth - a limit for how many
times the view ray will penetrate different volumetric surfaces.
I have it close to being able to return alpha, but it's still not 100%
correct and needs a bit more work. Going to sit on this for a while.
Commit patch #7788, which allows setting the render step, so it's
possible to render only every Nth frame.
The step is changed in the Scene buttons (F10), below the start and
end frame buttons.
Also added a command line option (-j), so it's possible to
override the step stored in the file (useful for renderfarms).
[ Brecht, this works with OpenGL renders and simulates
the skipped frames, please double check ]
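For example, to render only every 5th frame of a 1-250 animation in the
background (file name and frame range here are placeholders):

    blender -b scene.blend -s 1 -e 250 -j 5 -a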
but only when the UV's are connected. That fixes some artifacts when
baking and using tangent space normal maps. It does mean increased
memory usage because it now stores 4 tangents per face like UV's,
and increased processing time, but there's no simple way around that.
scenes could crash; there was code to make sure the osa level is
the same in all scenes, but it was set too late, after the sample
tables were created.
Fix for some uninitialized vector warnings with FSA; these were
harmless, fortunately.
This completes the pipeline make-over, as started in 2006. With this
option, during rendering each sample for every layer and pass is
saved on disk (it looks like non-antialiased images). Then the
compositing and color correction happen, then a clip to the 0-1 range,
and only at the end are all samples combined - using sampling filters
such as gauss/mitch/catmul.
This results in artefact-free antialiased images. Even Z-combine or
ID masks now work perfectly with it!
This is an unfinished commit btw; Brecht will finish this for strands.
Also, Halo doesn't work yet.
To activate FSA: press "Save Buffers" and the new button next to it. :)
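As a sketch of that final combine step (simplified: real filtering also
weights samples from neighbouring pixels, and the buffer layout here is
an assumption):

    #include <string.h>

    /* Combine per-sample composited RGBA buffers into the final image:
     * clamp each sample to 0..1, then accumulate with the per-sample
     * filter weight (gauss/mitch/catmul would supply these weights). */
    static void fsa_combine(float *out, float **sample_bufs,
                            const float *filt_weight, int totsample, int totpix)
    {
        int pix, s, ch;
        for (pix = 0; pix < totpix; pix++) {
            float accum[4] = {0.0f, 0.0f, 0.0f, 0.0f};
            for (s = 0; s < totsample; s++) {
                const float *col = sample_bufs[s] + 4 * pix;
                for (ch = 0; ch < 4; ch++) {
                    float v = col[ch];
                    if (v < 0.0f) v = 0.0f;
                    if (v > 1.0f) v = 1.0f;  /* the 0-1 clip before combining */
                    accum[ch] += filt_weight[s] * v;
                }
            }
            memcpy(out + 4 * pix, accum, sizeof(accum));
        }
    }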
Problem: an artist wants a character to walk in grass, but still have
everything rendered in separate render-layers, for postpro effects and
vblur. How to efficiently create a mask image you can put *over* the
character for the grass?
The solution has two parts; this commit allows any layer inside of the
renderlayers to become a Z-mask (Z values for solid get filled in, but
not rendered).
The second part of the commit is the render option "Only render what's
in front of a zbuffer value that was filled in" (saves render time).
Also, duplis are now taken into account; the proper way to exclude
them is to set the material to be not traceable.
Removed an unnecessary pointer from the VlakRen struct to save some
memory; not really that significant, but still, it saves 70 MB for 10
million faces.
This is actually just the alpha value as currently calculated by the
mist code. In many cases it is not very useful to have this as the
alpha in the shading result, also for postprocessing and compositing.
Note: this pass also works with "Mist" not set in World, of course.
=============================
A new approximate ambient occlusion method has been added, next to the
existing one based on raytracing. This method is specifically targeted
at use in animations, since it is inherently noise free and so will
not flicker across frames.
http://www.blender.org/development/current-projects/changes-since-244/approximate-ambient-occlusion/
http://peach.blender.org/index.php/approximate-ambient-occlusion/
Further improvements are still needed, but it can be tested already. There
are still a number of known issues:
- Bias errors on backfaces.
- For performance, instanced objects do not occlude currently.
- Sky textures don't work well, the derivatives for texture evaluation
are not correct.
- Multiple passes do not work entirely correct (they are not accurate
to begin with, but could be better).
====================
- "From Dupli" option for orco and uv texture coordinates. For dupliverts,
duplifaces and dupli particles, this uses the orco and uv at the point
on the parent surface. Can for example be used for texturing feathers
and leaves. Note that uv only works for duplifaces and particles
emitted from faces, since it is not defined at vertices.
- "Width Fade" option for strand render, to fade out along the width of the
strand. Committing this so it can be tested; it might be changed or
even removed if it doesn't give nice results.
=================
Big commit, but few user-visible changes.
- Dupliverts and duplifaces are now rendered as instances: instead
  of storing all of the geometry for each dupli, an instance is
  created with a matrix transform referring to the source object.
  This should allow us to render tree leaves more memory-efficiently.
- Radiosity and, to some degree, raytracing of such objects is still
  not really efficient. For radiosity this is fundamentally hard
  to solve; for raytracing an octree could be created for each object,
  but the current octree code with its fixed size doesn't allow this
  efficiently.
- The regression tests survived, but I expect that some bugs will
  pop up .. hopefully not too many :).
Implementation Notes
====================
- Dupligroups and linked meshes are not rendered as instances yet,
  since they can in fact differ for various reasons; instancing
  of these types of duplis, when they are the same, can be added
  at a later point.
- Each ObjectRen now stores its own database, instead of there being
  one big database of faces, verts, .. . Which objects are actually
  rendered is defined by the list of ObjectRenInstances, which all
  refer to an ObjectRen (see the sketch after these notes).
- Homogeneous coordinates and clipping are now not stored in vertices
  anymore, but computed on the fly. This couldn't work for instances.
  That does mean some extra computation has to be done, but memory
  lookups can be slow too, and this saves some memory. Overall I
  didn't find a significant speed impact.
- OSA rendering for solid and ztransp is now different. Instead of
  e.g. going over the database 8 times and rendering the z-buffer,
  it now goes over the database once and renders each polygon 8
  times. That was necessary to keep instances efficient, and can
  also give some performance improvement without instances.
- There was already instancing support in the yafray export code, now it
uses Blender's render instances for export.
- UV and color layer storage in the render was a bit messy before;
  now it should be easier to understand.
- convertblender.c was reorganized somewhat. Regular render, speedvector
  and baking now use a single function to create the database; previously
  there was duplicated code for it.
- Some of these changes were done with future multithreading of scene
and shadow buffer creation in mind, though especially for scene creation
much work remains to be done to make it threadsafe, since it also involves
a lot of code from blenkernel, and there is an ugly conflict with the way
dupli groups work here .. though in the render code itself it's almost there.
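To illustrate the split, a simplified sketch of the two structs (names
modeled on the render code, most details omitted):

    /* Geometry is stored once per ObjectRen; every rendered dupli is just
     * a small ObjectInstanceRen carrying a transform that refers to it. */
    typedef struct ObjectRen {
        struct VertRen **vertnodes;   /* this object's own vertex database */
        struct VlakRen **vlaknodes;   /* and its own face database */
        int totvert, totvlak;
    } ObjectRen;

    typedef struct ObjectInstanceRen {
        struct ObjectRen *obr;  /* shared geometry */
        float mat[4][4];        /* per-instance transform */
        float nmat[3][3];       /* inverse transpose of mat, for normals */
    } ObjectInstanceRen;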
editmode was not exited, and vertex normals would not be written at
all! (probably my own error)
- Edited the tooltip for texture DVar (there was some user confusion in
  the studio as to its purpose)
- Set render border is disabled when it has no area - so drawing a box
  outside the camera disables it.
=========
- Fix crash in particle transform with the particle system not editable.
- Particle child distribution and caching is now multithreaded.
- Child particles now have a separate Render Amount next to the
  existing Amount. The render-amount particles are only distributed and
  cached at render time, which should make editing with child particles
  faster.
- Two new options for diffuse strand shading:
- Surface Diffuse: computes the strand normal taking the normal at
the surface into account.
- Blending Distance: the distance in Blender units over which to
  blend in the normal at the surface (see the sketch below).
- Special strand rendering for more memory efficient and faster hair and
grass. This is a work in progress and has a number of known issues;
don't report bugs to me for this feature yet.
More info:
http://www.blender.org/development/current-projects/changes-since-244/particles/
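A sketch of how the Blending Distance can be understood (hypothetical
helper, not the actual strand shading code): the strand's own normal is
blended toward the surface normal, fading the surface influence out
over dist_blend Blender units along the strand:

    #include <math.h>

    /* Blend the strand normal toward the emitting surface's normal,
     * fading the surface influence out over dist_blend units. */
    static void strand_blend_normal(const float nor_strand[3],
                                    const float nor_surface[3],
                                    float dist, float dist_blend,
                                    float nor_out[3])
    {
        float t = (dist >= dist_blend) ? 0.0f : 1.0f - dist / dist_blend;
        float len;
        int i;

        for (i = 0; i < 3; i++)
            nor_out[i] = (1.0f - t) * nor_strand[i] + t * nor_surface[i];

        len = sqrtf(nor_out[0] * nor_out[0] + nor_out[1] * nor_out[1] +
                    nor_out[2] * nor_out[2]);
        if (len > 0.0f)
            for (i = 0; i < 3; i++)
                nor_out[i] /= len;
    }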
=============
A new "Selected to Active" option in the Bake panel, to (typically) bake
a high poly object onto a low poly object. Code based on patch #7339 by
Frank Richter (Crystal Space developer), thanks!.
Normal Mapping
==============
Camera, World, Object and Tangent space are now supported for baking,
and for material textures. The "NMap TS" setting is replaced with a
dropdown
of the four choices in the image texture buttons.
http://www.blender.org/development/current-projects/changes-since-244/render-baking/
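For reference, decoding a tangent-space normal map sample into the
surface's space amounts to the standard basis transform below (a
generic sketch, not the exact baking/shading code):

    /* Map a stored color (0..1 per channel) to a normal in the
     * tangent/bitangent/normal basis of the shaded point. */
    static void nmap_tangent_decode(const float rgb[3],
                                    const float tang[3], const float bitang[3],
                                    const float nor[3], float out[3])
    {
        float n[3] = {2.0f * rgb[0] - 1.0f,
                      2.0f * rgb[1] - 1.0f,
                      2.0f * rgb[2] - 1.0f};
        int i;
        for (i = 0; i < 3; i++)
            out[i] = n[0] * tang[i] + n[1] * bitang[i] + n[2] * nor[i];
    }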
=========
Merge of the famous particle patch by Janne Karhu, a full rewrite
of the Blender particle system. This includes:
- Emitter, Hair and Reactor particle types.
- Newtonian, Keyed and Boids physics.
- Various particle visualisation and rendering types.
- Vertex group and texture control for various properties.
- Interpolated child particles from parents.
- Hair editing with combing, growing, cutting, .. .
- Explode modifier.
- Harmonic, Magnetic fields, and multiple falloff types.
.. and lots of other things, some more info is here:
http://wiki.blender.org/index.php/BlenderDev/Particles_Rewrite
http://wiki.blender.org/index.php/BlenderDev/Particles_Rewrite_Doc
The new particle system cannot be backwards compatible. Old particle
systems are being converted to the new system, but will require
tweaking to get them looking the same as before.
Point Cache
===========
The new system to replace manual baking, based on automatic caching
on disk. This is currently used by softbodies and the particle system.
See the Cache API section on:
http://wiki.blender.org/index.php/BlenderDev/PhysicsSprint
Documentation
=============
These new features still need good docs for the release logs; help
with this is appreciated.
This is an additional option for 'TexFace', which uses the alpha of
the UV assigned faces as well as the colour. It appears in the
material buttons as a little 'A' button next to 'TexFace', when
'TexFace' is switched on. It's a bit horrible, but there's no point
tweaking that layout in isolation at this stage.
This image is using texface alpha, with different assigned images, all
sharing the one material:
http://mke3.net/blender/devel/rendering/texface_alpha.jpg
Usually I consider texface (and teaching people to use it for UV
mapping) to be pretty evil, but in some cases, when you have lots of
separate images that you want to control in the one material, it can
be quite handy.
two separate files, raytrace.c and rayshade.c. The tracing code can now
be used separately from the renderer (will be used in a later commit),
and the raytracing acceleration structure can now also be easily replaced,
if someone wants to experiment with that.
* Geometry node: Front/back output
This is used as a mask for determining whether you're looking at the
front or back side of a mesh, useful for blending materials. My
practical need was giving different materials to the pages of a
magazine: http://mke3.net/blender/etc/frontback-h264.mov
It gives 1.0 if it's the front side, and 0.0 if it's the back side.
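One plausible implementation is a simple sign test against the view
vector (a sketch; the sign convention depends on whether the shading
code's view vector points toward or away from the eye):

    /* Front/back: 1.0 when the shaded face points toward the viewer. */
    static float geom_frontback(const float view[3], const float nor[3])
    {
        float d = view[0] * nor[0] + view[1] * nor[1] + view[2] * nor[2];
        return (d < 0.0f) ? 1.0f : 0.0f;
    }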
* Extended material node
This is the same as the material node, but it makes more inputs and
outputs available (basically just connecting up more of ShadeInput and
ShadeResult to the node). I didn't want to add these to the normal
simple Material node, since you don't always need all that stuff and it
would make the node huge; but when you do need it, it's nice to have.
== Comp nodes ==
* Invert node
Inverting is something that happens all the time in a node setup, and
this makes it easier. It's been possible to invert previously by adding
a mix node and subtracting the input from 1.0, but that's not the best
way of doing it. This node:
- makes it a lot faster to set up, rather than all the clicking required with the mix node
- is a lot more usable amidst a complex comp setup. When you're looking
at a node tree, it's very helpful to be able to see at a glance what's
going on. Using subtract for inverting is easily mixed up with nodes in
which you are actually subtracting, not inverting, and it looks very
similar to all the other mix nodes that usually litter a comp tree.
- has options to invert the RGB channels, the Alpha channel, or both. This saves adding lots of extra nodes (separate RGBA, subtract, set alpha) when you want to do something simple like invert an alpha channel. I'd like to add this option to other nodes too.
There's also a shader node version.
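The operation itself is trivial; a sketch with the RGB/Alpha toggles
described above (names invented):

    /* Invert an RGBA pixel in place; do_rgb and do_alpha mirror the
     * node's channel toggles. */
    static void invert_pixel(float col[4], int do_rgb, int do_alpha)
    {
        if (do_rgb) {
            col[0] = 1.0f - col[0];
            col[1] = 1.0f - col[1];
            col[2] = 1.0f - col[2];
        }
        if (do_alpha)
            col[3] = 1.0f - col[3];
    }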
* Also a few fixes that I committed ages ago, but which seem to have
been overwritten in Bob's node refactor:
- adding new compbufs to the set alpha and alphaover nodes when you have only one noodle connected to the lower input
- making the fac value on RGB curves still work when there's nothing connected to it
- Radius R, G, B sliders had too small a number increase on clicking.
- Preview render now renders with a higher SSS error setting to speed
  it up a bit.
- bug #6664: 3d preview render had artifacts. re->viewdx/dy wasn't set
  then, which is needed to estimate the area of each point. Have set
  this now, not in the nicest way; there is a bit of duplicated code,
  but I don't want to refactor existing code with the chance of
  breaking it at this point.
- bug #6665: grid like artifacts with parts rendering. The two extra pixels
around parts used for filtering were used as well, leading to double points.
Added support for multiple UVs in the render engine. This also involved
changing the way faces are stored, to allow data to be added optionally
per 256 faces, same as the existing system for vertices.
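The 'per 256 faces' idea can be pictured like this (a simplified
sketch, not the actual structs): face data lives in fixed-size chunks,
and optional layers stay NULL pointers until something allocates them:

    #define CHUNK 256

    /* One table node per 256 faces; the optional UV layer is only
     * allocated for chunks that actually need it. */
    typedef struct VlakTableNode {
        struct VlakRen *vlak;   /* the 256 faces themselves */
        float (*uv)[4][2];      /* optional: 4 corners x (u,v), or NULL */
    } VlakTableNode;

    static float *face_uv_get(VlakTableNode *table, int nr)
    {
        VlakTableNode *node = &table[nr / CHUNK];
        if (node->uv == NULL)
            return NULL;  /* no UV layer stored for this chunk */
        return node->uv[nr % CHUNK][0];
    }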
A UV layer can be specified in the Map Input panel and the Geometry node
by name. Leaving this field blank will default to the active UV layer.
Also added sharing of face selection and hiding between UV layers, and at
the same time improved syncing with editmode selection and hiding.
Still to do:
- Multi UV support for fastshade.
- Multires and NMesh preservation of multiple UV sets.
Please read:
http://www.blender3d.org/cms/Imaging.834.0.html
Or in short:
- adding MultiLayer Image support
- recoded entire Image API
- better integration of movie/sequence Images
Was a whole load of work... went down for a week to do this. So, will need
a lot of testing! Will be in irc all evening.
- New Passes: UV and Rad(iosity)
- New Nodes: UV Map and Index Mask
- Z-combine now is antialiased
As usual, please check the log. Has nice pics!
http://www.blender3d.org/cms/Composite__UV_Map__ID.830.0.html
For devs: the antialias code from Vector Blur is now exported in compo
too. Works pretty well. Even fixed a bug in antialias, so vectorblur
will be better.
Also: found out that the OpenGL display list speedup was accidentally
still triggered with the rt button... so it did not work by default.
- Bug: material emit was ignored (showed in preview render backdrop)
- Bug: world exposure was ignored
- Bug: lamp halo was ignoring 'render layer light override'.
Further reshuffled the way shadows are pre-calculated, to enable more
advanced (and faster) usage of Material lightgroups. Shadows are now
cached in lamps, using a per-sample counter to check if a recalc is
needed. This will also work (later) for raytracing node shaders.
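The caching idea in a nutshell (a sketch with invented names): the lamp
keeps the last shadow result together with the sample number it was
computed for, and only recalculates when the counter moves on:

    #include <string.h>

    typedef struct LampShadowCache {
        int samplenr;       /* sample this result was computed for */
        float shadfac[4];   /* cached shadow factor */
    } LampShadowCache;

    static void lamp_shadow(LampShadowCache *cache, int cur_samplenr,
                            void (*recalc)(float shadfac[4]), float out[4])
    {
        if (cache->samplenr != cur_samplenr) {
            recalc(cache->shadfac);         /* the expensive lookup */
            cache->samplenr = cur_samplenr;
        }
        memcpy(out, cache->shadfac, sizeof(cache->shadfac));
    }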
- New: Material LightGroup option "Always", which always shades the
  lights in the group, independent of the visibility layer (so such
  lights can be moved to a hidden layer without influencing anything).
Full log:
http://www.blender3d.org/cms/Render_Passes.829.0.html
In short:
- Passes now have option to be excluded from "Combined".
- RenderLayers allow overriding Light (Lamp groups) or Material.
- RenderLayers and Passes are in the Outliner now, (ab)using Matt's
  nice 'restriction columns'. :)
- Crash caused by the commit of 1 hour ago to fix the 'All Z' render
  problem.
- Bug: yesterday's fix for node material renders caused some issues
  with precalculating correct shadows.
- Composite Translate node: input sockets allowed multiple inputs