Added support for multiple UVs in the render engine. This also involved
changing the way faces are stored, to allow data to be added optionally
per 256 faces, same as the existing system for vertices.
A UV layer can be specified in the Map Input panel and the Geometry node
by name. Leaving this field blank will default to the active UV layer.
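To illustrate the lookup rule, here is a minimal sketch with made-up structs (not the actual Blender code): an empty name falls back to the active layer.

    #include <string.h>

    /* Hypothetical layer struct, only for this sketch. */
    typedef struct UVLayer {
        char name[32];
        float (*uv)[2];
    } UVLayer;

    /* Look up a UV layer by name; an empty name means "use the active layer".
       Falling back to the active layer when the name is not found is an
       assumption of this sketch. */
    static UVLayer *find_uv_layer(UVLayer *layers, int totlayer, int active,
                                  const char *name)
    {
        int i;

        if (name[0] != '\0') {
            for (i = 0; i < totlayer; i++)
                if (strcmp(layers[i].name, name) == 0)
                    return &layers[i];
        }
        return (totlayer > 0) ? &layers[active] : NULL;
    }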
Also added sharing of face selection and hiding between UV layers, and at
the same time improved syncing with editmode selection and hiding.
Still to do:
- Multi UV support for fastshade.
- Multires and NMesh preservation of multiple UV sets.
Please read:
http://www.blender3d.org/cms/Imaging.834.0.html
Or in short:
- adding MultiLayer Image support
- recoded entire Image API
- better integration of movie/sequence Images
Was a whole load of work... went down for a week to do this. So, will need
a lot of testing! Will be in irc all evening.
- New Passes: UV and Rad(iosity)
- New Nodes: UV Map and Index Mask
- Z-combine now is antialiased
As usual, please check the log. Has nice pics!
http://www.blender3d.org/cms/Composite__UV_Map__ID.830.0.html
For devs: the antialias code from Vector Blur is now exported in compo
too. Works pretty well. I even fixed a bug in the antialias code, so
vectorblur will be better too.
Also: found out that the OpenGL display list speedup was accidentally still
triggered with the rt button... so it did not work by default.
- Bug: material emit was ignored (showed in preview render backdrop)
- Bug: world exposure was ignored
- Bug: lamp halo was ignoring 'render layer light override'.
Further reshuffled the way shadows are being pre-calculated, to enable
more advanced (and faster) usage of Material lightgroups. Now shadows are
cached in lamps, using a per-sample counter to check whether a recalc is
needed. This will also work (later) for Raytracing node shaders.
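A minimal sketch of the caching idea (made-up struct and names, not the actual lamp code): the shadow result is stored together with the sample counter it was computed for, and only recalculated when the counter changes.

    typedef struct LampShadowCache {
        int cached_sample;   /* sample counter the cache belongs to, -1 = empty */
        float shadow[3];     /* cached shadow factor (RGB) */
    } LampShadowCache;

    static void lamp_shadow(LampShadowCache *cache, int cur_sample,
                            void (*recalc_shadow)(float shad[3]),
                            float r_shad[3])
    {
        if (cache->cached_sample != cur_sample) {
            recalc_shadow(cache->shadow);       /* the expensive part */
            cache->cached_sample = cur_sample;  /* valid for this sample now */
        }
        r_shad[0] = cache->shadow[0];
        r_shad[1] = cache->shadow[1];
        r_shad[2] = cache->shadow[2];
    }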
- New: Material LightGroup option "Always", which always shades the lights
in the group, independent of visibility layer. (This allows moving such
lights to a hidden layer without influencing anything else.)
Full log:
http://www.blender3d.org/cms/Render_Passes.829.0.html
In short:
- Passes now have option to be excluded from "Combined".
- RenderLayers allow overriding Light (Lamp groups) or Material.
- RenderLayers and Passes are in the Outliner now, (ab)using Matt's nice
'restriction columns'. :)
- Crash, caused by a commit from 1 hour ago to fix the 'All Z' render problem
- Bug: yesterday's fix for node material renders caused some issues with
precalculating correct shadow.
- Composite Translate node: input sockets allowed multiple inputs
-t <threads>
It overrides the settings as saved in scenes. It only works for background
rendering, to force the number of threads to match the CPUs in the system.
For funny jokers: the amount is clipped to MAXTHREADS :)
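Roughly what the clipping amounts to (illustrative sketch only, the MAXTHREADS value and parsing details are made up):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAXTHREADS 8  /* assumed value, for this sketch only */

    static int parse_thread_amount(const char *arg)
    {
        int threads = atoi(arg);

        /* clip to a sane range, whatever the joker typed */
        if (threads < 1) threads = 1;
        if (threads > MAXTHREADS) threads = MAXTHREADS;
        return threads;
    }

    int main(int argc, char **argv)
    {
        if (argc > 2 && argv[1][0] == '-' && argv[1][1] == 't')
            printf("rendering with %d thread(s)\n", parse_thread_amount(argv[2]));
        return 0;
    }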
Bake-render:
Quad faces still didn't get handled properly; the error was visible for
vertex color or UV textures.
Also: added an error menu for when a Bake cannot work because there are no
Images or no Images with buffers.
Here's the full release log with example file.
http://www.blender3d.org/cms/Render_Baking.827.0.html
For people who don't read docs: just press ALT+CTRL+B on a Mesh
with texture faces!
Todos:
- maybe some extra filter options?
- Make normal maps in Tangent space
Full log:
http://www.blender3d.org/cms/Irregular_Shadow_Buffe.785.0.html
In short: this is a shadow buffer approach that always results in crispy
shadows, independent of lamp buffer size or zoom level. This shadow buffer
system also supports transparent shadow.
This is part of work on refreshing Shadow Buffers in Blender. You can now
choose between two types (Classical, Irregular). More types will follow. Also
quality issues for Classical shadow buffers are going to be reviewed,
especially to solve the lousy Biasing.
For the CVS log record: it is based on these articles:
Gregory Johnson et al, University of Texas, Austin. (Regular grid method).
Timo Aila and Samuli Laine, Helsinki University of Technology. (BSP method).
- ImagePaint now uses ImBuf directly, and the rect blending functions
were moved into the imbuf module.
- The brush spacing, timing and sampling was abstracted into brush.c, for
later reuse in other paint modes.
Float ImagePaint support.
Textured Brushes:
- Only the first texture channel is used now.
- Options for size and offset should be added, but need to find some space
in the panel, or add a second one ..
The raytracer wasn't calling node shaders yet, so results showed only
shading for the base material.
This now works, but there's a conflict in the internal Blender shader that
makes recursive raytracing with nodes unpredictable. Basically the conflict
is that when a ray wants to shade a point, it should be able to check the
material for mirror properties, but this is undefined for node trees...
Probably we need to separate raytrace entirely from material shading. Is
a good topic for NodeShader 2.0, when we really split up materials in
shading components.
I'll add a note in the release log about this. You get the best results now
when you don't include mirror/ray-transp inside a node tree; in that case
a regular material mirror can render that material perfectly.
suffered for the entire movie. :)
It only happened when rendering large frames, using a lot of memory, and
typically when you also use other software in the meantime.
Reason: the main thread does the drawing updates while rendering is
still continuing. When using Ztransp, a buffer was freed while a draw could
possibly still be in progress. It only crashed when drawing is slow...
explaining why it only showed up in more complex cases.
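The general fix pattern for this kind of race looks like the sketch below (pthreads and all names are illustrative, not the actual Blender code): the free has to happen under the same lock the drawing takes.

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t display_lock = PTHREAD_MUTEX_INITIALIZER;
    static float *display_rect = NULL;

    /* main thread: draw only while holding the lock */
    void draw_render_result(void (*draw)(const float *rect))
    {
        pthread_mutex_lock(&display_lock);
        if (display_rect) draw(display_rect);
        pthread_mutex_unlock(&display_lock);
    }

    /* render thread: never free while a draw could still be in progress */
    void free_render_result(void)
    {
        pthread_mutex_lock(&display_lock);
        free(display_rect);
        display_rect = NULL;
        pthread_mutex_unlock(&display_lock);
    }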
- Shaded drawmode is back (shift+z).
Note it still only uses orco texture; but lighting/shading is using
the internal render module entirely.
- "Make Sticky" option back.
(Also a fix in sticky texture render, it was wrongly scaled)
This commit brings back:
- Field Render
- MBlur Render (old style)
- Border render with or without cropping
Note: Field Render is not supported in the Compositor yet. Blurring or
filtering will destroy field information.
Both MotionBlur and Field render are done before Compositing happens.
Fixes:
- The "Save Buffers" option only worked on single frame renders, not for
Anim render.
- Found an un-initalized variable in Render initialize... this might have
caused the unknown random crashes with render.
Code restructure:
Cleaned up names and calls throughout the pipeline, more clearly telling
what goes on in functions.
This is visible in the updated first image of the Wiki doc:
http://mediawiki.blender.org/index.php/BlenderDev/RenderPipeline
Using "Fresnel" for transparency only worked when material had "ZTransp"
set. That's not a real problem, but it made Fresnel not work for Materials
used in Nodes.
Now a Fresnel on alpha works always.
Material Nodes: The Texture node didn't do the standard "2d mapping" yet
in case an Image Texture is used. Caused wrong mapping for example for UV
coordinate inputs.
- Bug fix: the upper tile in a column for Panorama render didn't put the
main thread to sleep properly. Now panorama renders 25% faster if you had
set Y-Parts to 4.
- Enabling Compositing in Scene for first time now adds a "Composite" node
too, so render output gets applied.
- An attempt to render with "Do Composite" without a "Composite" node will
throw an error and stop rendering. In background mode it will just not
render at all, and print errors.
- Errors that prevent rendering now give a popup menu again.
- Having the MBlur or Fields option on will now render normally, but with an
error print in the console (not done yet...)
the ones that get changed within threads, to communicate with the main
thread.
(Part of the long quest to get threaded render safe, especially in Linux)
passes in a single file. The code is currently disabled; the commit is mainly
to have a nicer method of excluding the OpenEXR dependency from the render
module. This should also compile with WITH_OPENEXR disabled.
The reason EXR is great to include by default in Blender is its ability
to store unlimited layers and channels, and to write these tile based. I
need the feature for saving memory: while rendering tiles, all full-size
buffers for all layers and passes are currently kept in memory, which can
easily go into 100s of MB.
The code I commit now doesn't allocate these buffers while rendering, but
saves the tiles to disk. In the end it is read back. The overhead for large
renders (like 300 meg buffers) is 10-15 seconds, not bad.
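To give a rough sense of the numbers (an illustrative calculation only, not figures from an actual render): a 2048x1080 frame with five float passes of four channels each already takes 2048 * 1080 * 5 * 4 * 4 bytes, roughly 170 MB, in full-size buffers; with the tile approach only the tiles currently being rendered have to stay in memory.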
Two more interesting aspects:
- Blender can save such multi-layer files in the temp directory, storing
them with the .blend file name and scene name. That way, on each restart of
Blender, or on switching scenes, these buffers can be read back. So you always
see what was rendered last. Also great for compositing work.
- This can also become an output image type for rendering. There's plenty of
cases where you want specific layers or passes saved to disk for later use.
Anyhoo, finishing it is another day of work, and I got more urgent stuff
now!
Until now, on each mouse/key event the preview render restarted with the
first tile. It now remembers where it was, and continues rendering.
Also tried to get threaded preview working, but it's more work than I can
spend on it right now. Back to bugs :)
vectors. It's actually shutter speed, but in this case it works identically
to the old motionblur 'blur fac' button.
Note; the "Max Speed" button only clips speed, use this to prevent
extreme speed values. Max speed is applied before the scaling happens.
After a couple of experiments with variable blur filters, I tried
a more interesting, and who knows... original approach. :)
First watch results here:
http://www.blender.org/bf/rt0001_0030.avi
http://www.blender.org/bf/hand0001_0060.avi
These are the steps in producing such results:
- In preprocess, the speed vectors to previous and next frame are
calculated. Speed vectors are screen-aligned and in pixel size.
- While rendering, these vectors get calculated per sample, and
accumulated in the vector buffer, checking for "minimum speed".
(On start the vector buffer is initialized to max speed.)
- After render:
- The entire image, all pixels, is then converted to quad polygons.
- Also the z value of the pixels is assigned to the polygons
- The vertices for the quads use averaged speed vectors (of the 4
corner faces), using a 'minimum but non-zero' speed rule (a small sketch
of this rule follows below).
This minimal speed trick works very well to prevent 'tearing' apart
when multiple faces move in different directions within a pixel, and to
separate moving pixels clearly from non-moving ones.
- So, now we have a sort of 'mask' of quad polygons. The previous steps
guaranteed that this mask doesn't have antialiased color info, and has
speed vectors that ensure individual parts move nicely without
tearing effects. The Z allows multiple layers of moving masks.
- Then, in a temporary buffer, faces get tagged as moving or not
- These tags then go to an anti-alias routine, which assigns alpha
values to edge faces, based on the method we used in past to antialias
bitmaps (still in our code, check the antialias.c in imbuf!)
- Finally, the tag buffer is used to tag which z values of the original
image have to be included (to allow blur to go behind stuff).
- OK, now we're ready for accumulating! In a loop, all faces then get
drawn (with zbuffer) with increasing influence of their speed vectors.
The resulting image then is accumulated on top of the original with a
decreasing weighting value.
It sounds all quite complex... but the speed is still encouraging. Above
images have 64 mblur steps, which takes about 1-3 seconds per frame.
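Here's the promised sketch of one way to read the 'minimum but non-zero' rule for the quad vertices (not the actual vectorblur code; vectors are 2D, in pixels):

    #include <math.h>

    /* Pick a vertex speed from the 4 surrounding speed vectors: the shortest
       one that is not zero, so still parts don't drag moving parts along. */
    static void min_nonzero_speed(const float corner[4][2], float r_speed[2])
    {
        float best_len = -1.0f;
        int i;

        r_speed[0] = r_speed[1] = 0.0f;

        for (i = 0; i < 4; i++) {
            float len = sqrtf(corner[i][0] * corner[i][0] +
                              corner[i][1] * corner[i][1]);
            if (len > 0.0f && (best_len < 0.0f || len < best_len)) {
                best_len = len;
                r_speed[0] = corner[i][0];
                r_speed[1] = corner[i][1];
            }
        }
    }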
Usage notes:
- Make sure the render-layer has passes 'Vector' and 'Z' on.
- add the VectorBlur node in the Compositor, and connect the image, Z and
speed to the inputs.
- The node allows setting the amount of steps (10 steps = 10 forward, 10
back), and setting a maximum speed in pixels... to prevent extremely fast
moving things from blurring too wide.
- Scene support in RenderLayers
You can now indicate in the Compositor to use RenderLayer(s) from other scenes.
Use the new dropdown menu in the "Render Result" node. It will change the
title of the node to indicate that.
The other Scenes are rendered fully separately, creating their own databases
(and octrees) after the current scene has finished. They use their own render
settings, with the exception of the render output size (and optional border).
This makes the option an interesting memory saver and speedup.
Also note that the render results of other scenes are kept in memory while
you work. So, after a render, you can tweak all composite effects.
- Render Stats
Added an 'info string' to the stats, printed in the renderwindow header. It
now gives info on the steps "creating database", "shadow buffers", and "octree".
- Bug fixes
Added a redraw event for the Image window when using compositor render.
Text objects were not rendered using background render (probably a bug
since the depsgraph was added).
Dropdown buttons in the Node editor were not refreshed after usage.
Sometimes the render window did not open, due to a wrong check for 'esc'.
Removed the option that renders view-layers on F12 with the mouse in a 3d
window. Not only was it confusing, it's now handled more efficiently by the
Preview Panel, which does this nicely.
http://www.blender.org/bf/filters/
I found out the current blur actually doesn't do a gaussian, but rather a
regular quadratic. Now you can choose common filter types, but more specifically;
- set gamma on, to emphasize bright parts in the blur more than darker parts
(a small sketch of this follows below)
- use the bokeh option for (currently circular only) blur based on true
area filters (meaning, for each pixel it samples the entire surrounding).
This enables more effects, but is also much slower. Have to check on
optimization for this still... use with care!
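The gamma option boils down to something like this sketch (the exponent and names are made up, not the actual compositor code): encode before blurring, decode after, so bright pixels dominate the filtered result.

    #include <math.h>

    #define BLUR_GAMMA 2.0f  /* assumed exponent, for illustration only */

    static float gamma_encode(float v) { return powf(v > 0.0f ? v : 0.0f, BLUR_GAMMA); }
    static float gamma_decode(float v) { return powf(v > 0.0f ? v : 0.0f, 1.0f / BLUR_GAMMA); }

    /* blur one float channel with any blur kernel, gamma-weighted */
    static void blur_with_gamma(float *buf, int len,
                                void (*blur)(float *buf, int len))
    {
        int i;
        for (i = 0; i < len; i++) buf[i] = gamma_encode(buf[i]);
        blur(buf, len);
        for (i = 0; i < len; i++) buf[i] = gamma_decode(buf[i]);
    }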
- RenderLayers with the 'view layers' option set now also take visible lights
into account. This works just like the scene layer settings.
- On ESC from a render, compositing (if set) is skipped too.
- While rendering with multiple RenderLayers it will end with a display
of the current RenderLayer (as set in the Scene buttons).
- Enabled Groups to execute in the Compositor. They were still being ignored.
Note; inside of groups nothing is cached, so a change to a group input
will recalculate it fully. This is needed because groups are linked
data (instances use the same internal nodes).
- Made the Composite node "Viewer" correctly display input for images with
1/2/3/4 channels.
- Added pass rendering, tested now with only regular Materials. For
Material nodes this is quite a bit more complex... since they cannot be
easily separated into passes (each Material does a full shade).
In this commit all pass render is disabled though; will continue work on
that later.
Sneak preview: http://www.blender.org/bf/rt.jpg (temporary image)
- What did remain is the 'Normal' pass output. Normal works very nicely for
relighting effects. Use the "Normal Node" to define where more or less
light should be. (Use the "Map Value" node to tweak the influence of the
Normal node's 'dot' output.) A tiny sketch of the 'dot' idea follows below.
- EVIL bug fix: I've spent almost a day finding it... when combining AO and
mirror render, the event queue was totally screwing up... two things not
related at all!
Found out the error was in the ray-mirror code, which was using partially
uninitialized 'ShadeInput' data to pass on to render code.
- Another fix; made sure that during threaded render the threads don't get
events, only the main program does. Might fix issues reported by
people on linux/windows.
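The promised sketch of the 'dot' idea (hypothetical code, not the node implementation): the per-pixel normal from the pass is dotted with a chosen direction, giving a mask you can remap (e.g. with Map Value) to brighten or darken the image.

    /* dot the Normal pass value with a user-chosen direction, clamped to 0 */
    static float normal_dot(const float nor[3], const float dir[3])
    {
        float d = nor[0] * dir[0] + nor[1] * dir[1] + nor[2] * dir[2];
        return (d > 0.0f) ? d : 0.0f;
    }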
- Live scanline updates while rendering
Using a timer system, each second the tiles that are being processed are
now checked to see whether they can be displayed.
To make this work nicely, I had to use the threaded 'tile processor' for
a single thread too, but that's now proven to be stable.
Also note that these updates draw per layer, including ztransp progress
separately from solid render.
- Recode of ztransp OSA
Until now (since blender 1.0) the ztransp part was fully rendered and
added on top of the solid part with alpha-over. This adding was done before
the solid part applied sub-pixel sample filtering, causing the ztransp
layer to always be too blurry.
Now the ztransp layer uses the same sub-pixel filter, resulting in the same
AA level (and filter results) as the solid part. Quite noticeable with hair
renders.
- Vector buffer support & preliminary vector-blur Node
Using the "Render Layer" panel "Vector" pass button, the motion vectors
per pixel are calculated and stored. Accessible via the Compositor.
The vector-blur node is horrible btw! It just uses the length of the
vector to apply a filter like with current (z)blur. I'm committing it anyway,
I'll experiment with it further, and who knows some surprise code shows up!
extern/bullet/BulletDynamics/ConstraintSolver/SimpleConstraintSolver.h
added newline at end of file.
intern/boolop/intern/BOP_Face2Face.cpp
fixed indentation; it also had nested declarations of a variable i used
for multiple for loops, changed it to just one declaration.
source/blender/blenkernel/bad_level_call_stubs/stubs.c
added prototypes and a couple other fixes.
source/blender/include/BDR_drawobject.h
source/blender/include/BSE_node.h
source/blender/include/butspace.h
source/blender/render/extern/include/RE_shader_ext.h
added struct definitions
source/blender/src/editmesh_mods.c
source/gameengine/Ketsji/KX_BlenderMaterial.cpp
source/gameengine/Ketsji/KX_ConvertPhysicsObjects.cpp
source/gameengine/Ketsji/KX_RaySensor.cpp
removed unused variables;
source/gameengine/GameLogic/Joystick/SCA_Joystick.cpp
changed format of case statements to avoid warnings in gcc.
Kent
-> Rendering in RenderLayers
It's important to distinguish a 'render layer' from a 'pass'. The first is
control over the main pipeline itself, to indicate what geometry is being
rendered. The 'pass' (not in this commit!) is related to internal
shading code, like shadow/spec/AO/normals/etc.
Options for RenderLayers now are:
- Indicate which 3d 'view layers' have to be included (so you can render
front and back separately)
- "Solid", all solid faces, includes sky at the moment too
- "ZTransp", all transparent faces
- "Halo", the halos
- "Strand", the particle strands (not coded yet...)
Currently only 2 'passes' are exported for render, which are the "Combined"
buffer and the "Z" buffer. The latter now works, and can be turned on/off.
Note that all layers are still fully kept in memory now; saving the tiles
and layers to disk (in exr) is also todo.
-> New Blur options
The existing Blur Node (compositor) now has an optional input image. This
has to be a 'value buffer', which can be a Zbuffer, or any mask you can
think of. The input values have to be in the 0-1 range, so another new
node was added too: "Map Value" (a small sketch of what it does is below).
The value input can also be used to tweak blur size with the (todo)
Time Node.
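A sketch of the kind of mapping meant here (parameter names are illustrative, not necessarily the node's exact buttons): shift and scale the input, with optional clamping, to get it into the 0-1 range the Blur node expects.

    static float map_value(float v, float offset, float size,
                           int use_min, float min_v, int use_max, float max_v)
    {
        v = (v + offset) * size;
        if (use_min && v < min_v) v = min_v;
        if (use_max && v > max_v) v = max_v;
        return v;
    }

For a linear Z running from clipsta to clipend, for example, offset = -clipsta and size = 1/(clipend - clipsta) would map it to 0-1.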
Temporary screenies:
http://www.blender.org/bf/rt.jpg
http://www.blender.org/bf/rt1.jpg
http://www.blender.org/bf/rt2.jpg
BTW: The compositor is still very slow, it recalculates all nodes on each
change. Persistent memory and dependency checks are coming!
- New Node "Composite" is output node that puts composited result back
in render pipeline.
- This then also displays in the render window while editing
- But, only with Scene buttons option "Do Compositor" set
- Then, just press F12 or render anims to see the magic!
For clarity, the former "Output" node is renamed to "Viewer".
A full detailed description of this will be done later... is several days
of work. Here's a summary:
Render:
- Full cleanup of render code, removing *all* globals and bad level calls
all over blender. The render module is not called abusively anymore
- API-fied calls to rendering
- Full recode of internal render pipeline. Is now rendering tiles by
default, prepared for much smarter 'bucket' render later.
- Each thread now can render a full part
- Renders were tested with 4 threads, goes fine, apart from some lookup
tables in softshadow and AO still
- Rendering is prepared to do multiple layers and passes
- No single 32 bits trick in render code anymore, all 100% floats now.
Writing images/movies
- moved writing images to blender kernel (bye bye 'schrijfplaatje'!)
- made a new Movie handle system, also in kernel. This will enable much
easier use of movies in Blender
PreviewRender:
- Using new render API, previewrender (in buttons) now uses regular render
code to generate images.
- new datafile 'preview.blend.c' has the preview scenes in it
- previews get rendered in exact displayed size (1 pixel = 1 pixel)
3D Preview render
- new; press Pkey in the 3d window for a panel that continuously renders
(Pkey is for games, I know... but we don't do that in orange now!)
- this render works nearly identically to the buttons-preview render, so it
stops rendering on any event (mouse, keyboard, etc)
- on moving/scaling the panel, the render code doesn't recreate all geometry
- same for shifting/panning view
- all other operations (now) regenerate the full render database still.
- this is WIP... but big fun, especially for simple scenes!
Compositor
- Using the same node system as now in use for shaders, you can composite
images
- works pretty straightforwardly... needs many more options/tools and
integration with rendering still
- is not threaded yet, nor is it smart enough to only recalculate changes...
will be done soon!
- the "Render Result" node will get all layers/passes as output sockets
- The "Output" node renders to a builtin image, which you can view in the Image
window. (yes, output nodes to render-result, and to files, is on the list!)
The Bad News
- "Unified Render" is removed. It might come back in some stage, but this
system should be built from scratch. I can't really understand this code...
I expect it is not much needed, especially with advanced layer/passes
control
- Panorama render, Field render, Motion blur, is not coded yet... (I had to
recode every single feature in render, so...!)
- Lens Flare is also not back... needs total revision, might become a
composite effect though (using the zbuffer for visibility)
- Part render is gone! (Well, that's obvious, it's the default now.)
- The render window is only restored with limited functionality... I am going
to check first the option to render to an Image window, so Blender can become
a true single-window application. :)
For example, the 'Spare render buffer' (jkey) doesn't work.
- Render with border now by default creates a smaller image
- No zbuffers are written yet... on the todo!
- Scons files and MSVC will need work to get compiling again
OK... that's what I can quickly recall. Now go compiling!
Until now, the zbuffer was written straight from the internal zbuffer,
which has values that are inverse-proportional (like 1.0/z), which makes
it very hard to use for postprocessing, like zblur or other composite effects
that require Z.
Based on info from ILM, the values stored for Z in exr files are the
actual distance from the camera. I think it's about time to migrate to that
convention!
By default now, after render, the z values are converted to floats. This
saves in exr files now, but not in the Iris Z files. The latter was
Blender-only anyway, so it might not be a real hassle to drop. :)
You can see the difference in the image window, but notice the range now
is linearly mapped from camera clipstart to clipend.
Note; I just discovered that ortho Z values need a different correction...
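For reference, the standard perspective conversion from a 0-1 depth value to an actual camera distance looks like the sketch below (the exact conversion used internally may differ, and as noted, ortho buffers are linear already and need different handling):

    /* z01: non-linear depth, 0.0 at clipsta and 1.0 at clipend */
    static float zbuf_to_dist(float z01, float clipsta, float clipend)
    {
        return (clipsta * clipend) / (clipend - z01 * (clipend - clipsta));
    }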
- Image textures use float colors now, when present. Works for mipmap too,
and for AO skycolor (probes)
- Backbuffer option uses float buffers too. Note that rendering OSA will
resample the backbuffer, filtering it... this will need to be solved with the
new composite stage
- LMB sampling in image window now shows float color too
+ bugfix in imbuf, filtering for float buffers had an error.
the coordinate outputs now have correct dx/dy vectors for Image AA, and
texture delivers correct intensity, rgb, alpha and normal.
Note; we need a "Vector Mapping" node, to do 2d/3d mapping, like in the
Material "Map In" panel.
**** NEW: Group Nodes
Node trees usually become messy and confusing quickly, so we need
not only a way to collapse Nodes into single 'groups', but also a
way to re-use that data to create libraries of effects.
This has been done by making a new Library data type, the NodeTree.
Everything that has been grouped is stored here, and available for
re-use, appending or linking. These NodeTrees are fully generic,
i.e. can store shader trees, composite trees, and so on. The 'type'
value as stored in the NodeTree will keep track of internal type
definitions and execute/drawing callbacks. Needless to say, re-using
shader trees in a composite tree is a bit useless, and will be
prevented in the browsing code. :)
So; any NodeTree can become a "Group Node" inside a NodeTree. This
Group Node then works just like any Node.
To prevent the current code from becoming too complex, I've disabled
the possibility to insert Groups inside of Groups. That might be
enabled later, but it is a real nasty piece of code to get OK.
Since Group Nodes are a dynamic Node type, a lot of work has been
done to ensure Node definitions can be dynamic too, but still allow
being stored in files, and being verified for type-definition
changes on reloading. This system still needs a bit of maturing,
so the Python gurus had better wait a little bit! (Also for me to
write the definitive API docs for it.)
What works now:
- Press CTRL+G to create a new Group. The grouping code checks for
impossible selections (like an unselected node between selected nodes).
Everything that's selected then gets removed from the current tree, and
inserted in a new NodeTree library data block. A Group Node is then
added which links to this new NodeTree.
- Press ALT+G to ungroup. This will not delete the NodeTree library
data, but just duplicate the Group into the current tree.
- Press TAB, or click on the NodeTree icon to edit Groups. Note that
NodeTrees are instances, so editing one Group will also change the
other users.
This also means that when removing nodes in a Group (or hiding sockets
or changing internal links) this is immediately corrected for all users
of this Group, also in other Materials.
- While editing Groups, only the internal Nodes can be edited. A single
click outside of the Group boundary will close this 'edit mode'.
What needs to be done:
- SHIFT+A menu in toolbox style, also including a list of Groups
- Enable the single-user button in the Group Node
- Displaying all (visible) internal group UI elements in the Node Panel
- Enable Library linking and prevent editing of Groups then.
**** NEW: Socket Visibility control
Node types will be generated with a lot of possible inputs or outputs,
and drawing all sockets all the time isn't very useful then.
A new option in the Node header (the 'plus' icon) allows to either hide all
unused sockets (first keypress) or to reveal them (when there are hidden
sockets, the icon displays black, otherwise it's blended).
Hidden sockets in Nodes also are not exported to a Group, so this way
you can control what options (in/outputs) exactly are available.
To be done:
- a way to hide individual sockets, like with a RMB click on it.
**** NEW: Nodes now render!
This is still quite primitive, more on a level to replace the (now
obsolete and disabled) Material Layers.
What needs to be done:
- make the "Geometry" node work properly, also for AA textures
- make the Texture Node work (does very little at the moment)
- give Material Nodes all inputs as needed (like Map-to Panel)
- find a way to export more data from a Material Node, like the
shadow value, or light intensity only, etc
It is also very important to separate the "global" options from the Material
Buttons, like "Ztransp" or "Wire" or "Halo". These cannot
be set for each Material-Node individually.
Also note that the Preview Render (Buttons window) now renders a bit
differently. This was a horrid piece of antique code, using a totally
incompatible way of rendering. Target is to fully re-use internal
render code for previews.
OK... that's it mostly. Now test!
********* Node editor work:
- To enable Nodes for Materials, you have to set the "Use Nodes"
button, in the new Material buttons "Nodes" Panel or in header
of the Node editor. Doing this will disable Material-Layers.
- Nodes now execute materials ("shaders"), but still only using the
previewrender code.
- Nodes have (optional) previews for rendered images.
- Node headers allow hiding buttons and/or the preview image
- Nodes can be dragged larger/smaller (right-bottom corner)
- Nodes can be hidden (minimized) with hotkey H
- CTRL+click on an Input Socket gives a popup with default values.
- Changing Material/Texture or Mix node will adjust Node title.
- Click-drag outside of a Node changes the cursor to "Knife" and allows
drawing a rect where to cut Links.
- Added new node types RGBtoBW, Texture, In/Output, ColorRamp
- Material Nodes have options to output diffuse or specular, or to use
a negative normal. The input socket 'Normal' will force the material
to use that normal; otherwise it uses the normal from the Material
that has the node tree.
- When drawing a link between two not-matching sockets, Blender inserts
a converting node (now only for value/rgb combos)
- When drawing a link to an input socket that's already in use, the
old link will either disappear or flip to another unused socket.
- A click on a Material Node will activate it, and show all its settings
in the Material Buttons. Active Material Nodes draw the material icon
in red.
- A click on any node will show its options in the Node Panel in the
Material buttons.
- Multiple Output Nodes can be used, to sample contents of a tree, but
only one Output is the real one, which is indicated in a different
color and red material icon.
- Added ThemeColors for node types
- ALT+C will convert existing Material-Layers to Nodes... this currently
only adds the material/mix nodes and connects them. Dunno if it's
worth a lot of coding work to make this perfect?
- Press C to call another "Solve order", which will show all possible
cyclic conflicts (if there are any).
- Technical: nodes now use "Type" structs which define the
structure of nodes and in/output sockets. The Type structs store all
fixed info and callbacks, and allow reconstructing saved Nodes to match
what is required by Blender. (An illustrative sketch follows after this
list.)
- Defining (new) nodes now is as simple as filling in a fixed
Type struct, plus coding some callbacks. A doc will be made!
- Node preview images are by default float
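The promised illustrative sketch of such a Type struct (field and callback names are made up for the example, not the actual struct):

    typedef struct ExampleSocketType {
        const char *name;   /* e.g. "Color", "Value" */
        int type;           /* socket data type */
    } ExampleSocketType;

    typedef struct ExampleNodeType {
        int type;                         /* fixed id, stored in files */
        const char *name;                 /* UI name, e.g. "Mix" */
        const ExampleSocketType *inputs;  /* fixed socket layout, used to */
        const ExampleSocketType *outputs; /* verify nodes read from a file */
        void (*exec)(void *data, void **in, void **out);  /* execute callback */
        void (*draw_buttons)(void *block, void *node);    /* UI callback     */
    } ExampleNodeType;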
********* Icon drawing:
- Cleanup of how old icons were implemented in new system, making
them 16x16 too, correctly centered *and* scaled.
- Made drawing Icons use float coordinates
- Moved BIF_calcpreview_image() into interface_icons.c, renamed it
icon_from_image(). Removed a lot of unneeded Imbuf magic here! :)
- Skipped scaling and imbuf copying when icons are OK size
********* Preview render:
- Huge cleanup of code....
- renaming BIF_xxx calls that only were used internally
- BIF_previewrender() now accepts an argument for the rendering method,
so it supports icons, buttonwindow previewrender and the node editor
- Only a single BIF_preview_changed() call now exists, supporting all
signals as needed for the buttons and node editor
********* More stuff:
- glutil.c, glaDrawPixelsSafe() and glaDrawPixelsTex() now accept format
argument for GL_FLOAT rects
- Made the ColorBand become a built-in button for interface.c
Was a load of cleanup work in buttons_shading.c...
- removed a load of unneeded glBlendFunc() calls
- Fixed bug in calculating text length for buttons (ancient!)
(As usual, the movies disappear after a while)
Face example showing stress values on a blend. White is stretch, black
is squeeze:
http://www.blender.org/bf/0001_0014.avi
Quick test with softbody stretch
http://www.blender.org/bf/0001_0100.avi
Based on the difference between the "Orco" (original undeformed coordinate)
and the actual render coordinate, a stress value is computed to make
textures react to stretching or wrinkling skin.
The texture coordinate is neutral (0) in the relaxed state; -1 is squeezed
to zero, +1 is stretched to infinity.
Note that scaling (object itself or parent) also will result in
stress values.
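One mapping that matches this description (a sketch only, not necessarily the formula used): with the length ratio of deformed over original coordinates, the stress goes to -1 as the ratio goes to zero and approaches +1 as it goes to infinity.

    static float stress_value(float orco_len, float deformed_len)
    {
        float ratio = (orco_len > 0.0f) ? deformed_len / orco_len : 1.0f;

        if (ratio >= 1.0f)
            return 1.0f - 1.0f / ratio;  /* stretch: 0 .. +1 */
        else
            return ratio - 1.0f;         /* squeeze: -1 .. 0 */
    }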
The reason for the huge commit is a cleanup in allocating memory for
the vertices. These were growing too large with new options, so now it
allocates the optional coordinates dynamically.
Saves about 20 MB memory per 1M vertices already. But best of all is that
I now can add much more fun... so tangents, here we come!