* Fix for plane material preview render. Now the light cache aborts if there isn't enough volume and falls back on non-cached single scattering. It still doesn't make much sense to render a plane as a volume, but for now the preview will shade the region between the plane and the checker background.
Also, made the Outliner's horizontal scrollbar work better for keymaps view. It's still using an approximation of the width, but at least you can scroll now.
Integration is still very rough around the edges and WIP, but it works, and can render smoke (using the new Smoke format in the Voxel Data texture) --> http://vimeo.com/6030983
More to come, but this makes things much easier to work on for me :)
- 1st stage: Linear Workflow
This implements automatic linear workflow in Blender's renderer. With the
new Colour Management option on in the Render buttons, all inputs to the
renderer and compositor are converted to linear colour space before
rendering, and gamma corrected afterwards. In essence, this makes all
manual gamma correction with nodes, etc. unnecessary, since it's done
automatically through the pipeline.
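As a point of reference, the conversions involved look something like this (a
minimal sketch assuming the standard sRGB transfer functions; the renderer may
use a plain gamma approximation instead):

    #include <math.h>

    /* assumed: standard sRGB <-> linear transfer functions */
    static float srgb_to_linear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : powf((c + 0.055f) / 1.055f, 2.4f);
    }

    static float linear_to_srgb(float c)
    {
        return (c <= 0.0031308f) ? c * 12.92f
                                 : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }

Inputs get the first conversion on their way into the pipeline, and the second
is applied to the final result for display.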
It's all explained much better in the notes/doc here, so please have a look:
http://wiki.blender.org/index.php/Dev:Source/Blender/Architecture/Colour_Management
And an example of the sort of difference it makes:
http://mke3.net/blender/devel/rendering/b25_colormanagement_test01.jpg
This also enables Colour Management in the default B.blend, and changes the
default lamp falloff to inverse square, which is more correct, and much
easier to use now that it's all gamma corrected properly.
Next step is to look into profiles/soft proofing for the compositor.
Thanks to brecht for reviewing and fixing some oversights!
* Added support for additional file types in the voxel data texture: 8 bit raw
files and, most notably, image sequences.
Image sequences generate the voxel grid by stacking image slices on top of each
other along the Z axis - the number of slices in the sequence is the resolution
of the voxel grid's Z axis.
i.e. http://mke3.net/blender/devel/rendering/volumetrics/skull_layers.jpg
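A sketch of the stacking idea, just to illustrate the indexing (names are
illustrative, not the actual code):

    /* build a flat voxel grid from n_slices images of w*h greyscale
     * floats; the slice index becomes the Z coordinate */
    static void stack_slices(float *voxels, float **slices,
                             int w, int h, int n_slices)
    {
        for (int z = 0; z < n_slices; z++)
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    voxels[(long)z * w * h + y * w + x] = slices[z][y * w + x];
    }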
The image sequence option is particularly useful for loading medical data into
Blender. 3d medical data such as MRIs or CT scans are often stored as DICOM
format image sequences. It's not in Blender's scope to support the DICOM format,
but there are plenty of utilities such as ImageMagick, Photoshop or OsiriX that
can easily convert DICOM files to formats that Blender supports, such as PNG or JPEG.
Here are some example renderings using these file formats to load medical data:
http://vimeo.com/5242961
http://vimeo.com/5242989
http://vimeo.com/5243228
Currently the 8 bit raw and image sequence formats only support the 'still'
rendering feature.
* Changed the default texture placement to be centred around 0.5,0.5,0.5, rather
than within the 0.0,1.0 cube. This is more consistent with image textures, and it
also means you can easily add a voxel data texture to a default cube without
having to mess around with mapping.
* Added some more extrapolation modes such as Repeat and Extend rather than just clipping
http://mke3.net/blender/devel/rendering/volumetrics/bradybunch.jpg
* Changed the voxel data storage to use MEM_Mapalloc (memory mapped disk cache)
rather than storing in ram, to help cut down memory usage.
* Refactored and cleaned up the code a lot. Now the access and interpolation code
is separated into a separate voxel library inside blenlib. This is now properly
shared between voxel data texture and light cache (previously there was some
duplicated code).
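For an idea of what the shared interpolation code does, a generic trilinear
lookup looks roughly like this (the real blenlib API differs in naming and
edge handling):

    #include <math.h>

    #define CLAMP(v, lo, hi) ((v) < (lo) ? (lo) : (v) > (hi) ? (hi) : (v))

    /* sample a w*h*d float grid at normalized coords (0..1),
     * clamping at the borders for simplicity */
    static float voxel_sample_trilinear(const float *data, int w, int h, int d,
                                        float x, float y, float z)
    {
        float fx = x * w - 0.5f, fy = y * h - 0.5f, fz = z * d - 0.5f;
        int x0 = (int)floorf(fx), y0 = (int)floorf(fy), z0 = (int)floorf(fz);
        float tx = fx - x0, ty = fy - y0, tz = fz - z0;

    #define V(xi, yi, zi) data[(long)CLAMP((zi), 0, d - 1) * w * h + \
                               (long)CLAMP((yi), 0, h - 1) * w + \
                               CLAMP((xi), 0, w - 1)]
        float c00 = V(x0, y0,     z0) * (1 - tx) + V(x0 + 1, y0,     z0) * tx;
        float c10 = V(x0, y0 + 1, z0) * (1 - tx) + V(x0 + 1, y0 + 1, z0) * tx;
        float c01 = V(x0, y0,     z0 + 1) * (1 - tx) + V(x0 + 1, y0,     z0 + 1) * tx;
        float c11 = V(x0, y0 + 1, z0 + 1) * (1 - tx) + V(x0 + 1, y0 + 1, z0 + 1) * tx;
    #undef V

        float c0 = c00 * (1 - ty) + c10 * ty;
        float c1 = c01 * (1 - ty) + c11 * ty;
        return c0 * (1 - tz) + c1 * tz;
    }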
* Made volume light cache support non-cubic voxel grids. Now the resolution
specified in the material properties is used for the longest edge of the volume
object's bounding box, and the shorter edges are proportional (similar to how
resolution is calculated for fluid sim domains).
This is *much* more memory efficient for squashed volume regions like clouds
layer bounding boxes, allowing you to raise the resolution considerably while
still keeping memory usage acceptable.
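The per-axis resolution calculation is simple; roughly (illustrative, not the
exact code):

    /* give the user resolution to the longest bounding box edge,
     * scale the other two axes proportionally */
    static void calc_voxel_res(const float size[3], int res_max, int res[3])
    {
        float longest = size[0];
        if (size[1] > longest) longest = size[1];
        if (size[2] > longest) longest = size[2];

        for (int i = 0; i < 3; i++) {
            res[i] = (int)(res_max * size[i] / longest);
            if (res[i] < 1) res[i] = 1;
        }
    }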
This adds a header to the voxel data texture's data file format, to
auto-fill the resolution settings upon loading a file. I don't have a data
file of the right format to test with, so I'll trust it works and wait for
confirmation!
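Conceptually the header is just enough to describe the grid (field names here
are hypothetical, not the actual on-disk layout):

    /* hypothetical sketch of the raw voxel file header, letting the
     * texture auto-fill its resolution settings on load */
    typedef struct VoxelDataHeader {
        int resol[3];   /* grid resolution in x, y, z */
        int frames;     /* number of frames in the sequence */
    } VoxelDataHeader;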
It also adds a 'still frame' setting, to pause the voxel data sequence on a
specified frame, throughout the course of the rendered animation.
Safe method to move render results to the displayed image.
It now allocates a single image for display, and on each
refresh callback from render, it copies the refreshed
section over to this image, in 32 bits. While rendering
that image then only shows progress updates, as usual.
This also now works for scenes used in composites, and for composite
results.
This should solve reported crashes for MBlur or SSS.
This is mostly a contribution from Raul 'farsthary' Hernandez - an approximation for
multiple scattering within volumes. Thanks, Raul! Where single scattering considers
the path from the light to a point in the volume, and to the eye, multiple scattering
approximates the interactions of light as it bounces around randomly within the
volume, before eventually reaching the eye.
It works as a diffusion process that effectively blurs the lighting information
that's already stored within the light cache.
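In spirit it's close to repeatedly blurring the cached lighting grid; a very
loose sketch of one such step (the actual diffusion solver, boundaries and
weighting are more involved):

    /* one diffusion/blur step over a w*h*d light cache, mixing each
     * interior cell with its six neighbours */
    static void lightcache_diffuse_step(float *dst, const float *src,
                                        int w, int h, int d, float factor)
    {
        for (int z = 1; z < d - 1; z++)
        for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++) {
            long i = (long)z * w * h + (long)y * w + x;
            float nb = src[i - 1] + src[i + 1] +
                       src[i - w] + src[i + w] +
                       src[i - (long)w * h] + src[i + (long)w * h];
            dst[i] = src[i] * (1.0f - factor) + (nb / 6.0f) * factor;
        }
    }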
A cloudy sky setup, with single scattering, and multiple scattering:
http://mke3.net/blender/devel/rendering/volumetrics/vol_sky_ss_ms.jpg
http://mke3.net/blender/devel/rendering/volumetrics/sky_ms.blend
To enable it, there is a menu in the volume panel (which needs a bit of cleanup,
for later) that lets you choose between self-shading methods:
* None: No attenuation of the light source by the volume - light passes straight
through at full strength
* Single Scattering: (same as previously, with 'self-shading' enabled)
* Multiple Scattering: Uses multiple scattering only for shading information
* Single + Multiple: Adds the multiple scattering lighting on top of the existing
single scattered light - this can be useful to tweak the strength of the effect,
while still retaining details in the lighting.
An example of how the different scattering methods affect the visual result:
http://mke3.net/blender/devel/rendering/volumetrics/ss_ms_comparison.jpg
http://mke3.net/blender/devel/rendering/volumetrics/ss_ms_comparison.blend
The multiple scattering methods introduce 3 new controls when enabled:
* Blur: A factor blending between fully diffuse/blurred lighting and sharper,
more directional lighting
* Spread: The range that the diffuse blurred lighting spreads over - similar to a
blur width. The higher the spread, the slower the processing time.
* Intensity: A multiplier for the multiple scattering light brightness
Here's the effect of multiple scattering on a tight beam (similar to a laser). The
effect of the 'spread' value is pretty clear here:
http://mke3.net/blender/devel/rendering/volumetrics/ms_spread_laser.jpg
Unlike the rest of the system so far, this part of the volume rendering engine isn't
physically based, and currently it's not unusual to get non-physical results (i.e.
much more light being scattered out than goes in via lamps or emit). To counter
this, you can use the intensity slider to tweak the brightness. On the todo:
perhaps there's a more automatic method we can work on for this later on. I'd
also like to look into speeding this up further with threading.
Added WM Jobs manager
- WM can manage threaded jobs for you; just provide a couple
of components to get it working:
- customdata, free callback for it
- timer step, notifier code
- start callback, update callback
- Once started, each job runs its own timer, and on every
time step it will check for necessary updates, or close the
job when ready.
- No drawing happens in jobs, that's for notifiers!
- Every job stores an owner pointer, and based on this owner
it will prevent multiple jobs from entering the stack.
Instead it will re-use a running job, signal it to stop,
and even allow the caller to re-initialize it.
- Check new wm_jobs.c for more explanation. Jobs API is still
under construction.
Fun: BLI_addtail(&wm->jobs, steve); :)
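From memory, typical usage looks roughly like this (signatures approximate
and still changing; check wm_jobs.c for the real API):

    /* approximate sketch only -- argument order may differ */
    wmJob *job = WM_jobs_get(wm, win, owner);          /* re-uses job with same owner */
    WM_jobs_customdata(job, my_data, my_data_free);    /* customdata + free callback */
    WM_jobs_timer(job, 0.2, NC_SCENE, 0);              /* timer step + notifier code */
    WM_jobs_callbacks(job, my_start, NULL, my_update); /* start + update callbacks */
    WM_jobs_start(job);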
Put Node shader previews back using wmJobs
- Preview calculating is now fully threaded (1 thread still)
- Thanks to new event system + notifiers, you can see
previews update even while dragging sliders!
- Currently it only starts when you change a node setting.
Warning: the thread render shares Node data, so don't delete
nodes while it renders! This topic is on the todo to make safe.
Also:
- bug in region initialize (do_versions) showed the channel list in
the node editor wrongly.
- flagged the channel list 'hidden' now; it was really in the
way! This is for later to work on anyway.
- recoded Render API callbacks so they get handlers passed on,
no globals to use anymore, remember?
- previewrender code now gets so much nicer! Will remove a lot
of stuff from the code soon.
Merged the multires modifier from the soc-2008-nicholasbishop branch.
Note: any old code with multires_test() or multires_level1_test() can
just be deleted, not needed by the multires modifier.
* Fixed an old problem where, if both the camera and a solid surface were
inside a volume, the volume wouldn't attenuate in front of the surface (this
was visible in the Blender conference art festival clouds animation):
http://mke3.net/blender/devel/rendering/volumetrics/vol_shade_cam_inside.mov
* Initial support for refracting solids inside volumes. I might be able to
make this work better, but I'm not sure at the moment. It's a bit dodgy,
limited by the code that does the recursive ray shading - it's really not
set up for this kind of thing and could very much use a refactor.
* Multithreaded volume light cache
While the render process itself is multithreaded, the light cache pre-process
previously wasn't (painfully noticed this the other week rendering on some
borrowed octocore nodes!). This commit adds threading, similar to the tiled render -
it divides the light cache's voxel grid into 3d parts and renders them with the
available threads.
This makes the most significant difference on shots where the light cache pre-
process is the bottleneck, so shots with either several lights, or a high res light
cache, or both. On this file (3 lights, light cache res 120), on my Core 2 Duo it now
renders in 27 seconds compared to 49 previously.
http://mke3.net/blender/devel/rendering/volumetrics/threaded_cache.jpg
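Schematically, each thread takes its own chunk of the grid; a simplified
sketch with plain pthreads and flat slabs (the real code uses Blender's thread
utilities and 3d parts):

    #include <pthread.h>

    typedef struct CacheTask {
        float *cache;
        int w, h, d;
        int z_start, z_end;   /* this thread's slab of the grid */
    } CacheTask;

    static void *cache_part_thread(void *arg)
    {
        CacheTask *t = (CacheTask *)arg;
        for (int z = t->z_start; z < t->z_end; z++) {
            /* ... shade every voxel in slice z into t->cache ... */
        }
        return NULL;
    }

    /* assumes totthread <= 64 */
    static void lightcache_threaded(float *cache, int w, int h, int d,
                                    int totthread)
    {
        pthread_t threads[64];
        CacheTask tasks[64];

        for (int i = 0; i < totthread; i++) {
            tasks[i] = (CacheTask){cache, w, h, d,
                                   d * i / totthread, d * (i + 1) / totthread};
            pthread_create(&threads[i], NULL, cache_part_thread, &tasks[i]);
        }
        for (int i = 0; i < totthread; i++)
            pthread_join(threads[i], NULL);
    }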
This commit introduces a new texture ('Voxel Data'), used to load up saved voxel
data sets for rendering, contributed by Raúl 'farsthary' Fernández Hernández
with some additional tweaks. Thanks, Raúl!
The texture works similarly to the existing point density texture; currently it
only provides intensity information, which can then be mapped (for example) to
density in a volume material. This is an early version, intended to read the
voxel format saved by Raúl's command line simulators; in future revisions
there's potential for making a more full-featured 'Blender voxel file format',
and also for supporting other formats too.
Note: Due to some subtleties in Raúl's existing released simulators, in order
to load them correctly in the voxel data texture, you'll need to raise the
'resolution' value by 2. So if you baked out the simulation at resolution 50,
enter 52 for the resolution in the texture panel. This can possibly be fixed in
the simulator later on.
Right now, the texture is mapped just in the space 0,0,0 <-> 1,1,1 and it can
appear rotated 90 degrees incorrectly. This will be tackled; for now, probably
the easiest way to map it is with an empty, using Map Input -> Object.
Smoke test: http://www.vimeo.com/2449270
One more note: trilinear interpolation seems a bit slow at the moment; we'll
look into this.
For curiosity, while testing/debugging this, I made a script that exports a mesh
to voxel data. Here's a test of grogan (www.kajimba.com) converted to voxels,
rendered as a volume: http://www.vimeo.com/2512028
The script is available here: http://mke3.net/projects/bpython/export_object_voxeldata.py
* Another smaller thing: brought back early ray termination (it was disabled
previously for debugging) and made it user configurable. It now appears as a new
value in the volume material: 'Depth Cutoff'. For some background info on what
this does, check:
http://farsthary.wordpress.com/2008/12/11/cutting-down-render-times/
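The idea: while marching through the volume, once the accumulated
transmittance drops below the cutoff, nothing further along the ray can
contribute visibly, so the march stops early. A sketch (not the actual shader
code; 'sample_density' is a hypothetical helper):

    #include <math.h>

    static float march_transmittance(float t_start, float t_end, float stepsize,
                                     float absorption, float depth_cutoff,
                                     float (*sample_density)(float t))
    {
        float tr = 1.0f;  /* transmittance along the ray */
        for (float t = t_start; t < t_end; t += stepsize) {
            tr *= expf(-sample_density(t) * absorption * stepsize);
            if (tr < depth_cutoff)
                break;  /* early termination: the rest is invisible */
        }
        return tr;
    }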
* Also some disabled work-in-progress code for the light cache.
Robin (Frrr) Allen did a decent job on this, so we can also welcome him
as a member in the svn committers team to maintain it!
I do the first commit with some minor fixes:
- got the Makefiles working
- fix rounding issue with tiles on unit faces
- removed UI includes from tex node
A nice doc in wiki is here:
http://wiki.blender.org/index.php/User:Frr/TexnodeManual
On the todo for Robin is:
- When using one or more Texture-input nodes, you cannot edit them by activating
(as works now for Material nodes).
- The new "output node" option fails on the default case, when only one
output node is active. It then shows often a blank menu. Will get fixed asap.
- When using a NodeTree-Texture as input node, the menu for 'active output'
should not show. NodeTree should ignore other nodetrees to keep things sane
for now.
- On a future todo is proper usage of "Dxt" and "Dyt" texture vectors for
superior antialising of checkers/bricks.
General note; I know people are dying to get a full integrated shader system
with nodes. In theory we could merge this with Material Nodetrees... but I'd
rather wait for a solid and very well thought out design proposal for this,
also including design ideas for unifying with a shader language (GPU, CPU).
For the time being this is a nice extension of current textures. :)
Index OB pass didn't support FSA for Ztransp.
Also made the buttons to set black/white for non-RGBA images hidden in the
Image Window; the Curves color code only supports 4 channels atm.
Baking with SSS didn't give proper results; it should bake as if SSS was
disabled.
- Fix for GLSL to handle failing shadow buffer creation better.
- Fix for sky/atmosphere version patch; it was not handling files from 2.46
and newer.
due to jittering of the start position for antialiasing in a pixel.
Now it distributes the start position over the fixed osa sample
positions, instead of over random positions in space. The ugly bit is
that a custom ordering was defined for osa 8/11/16 to ensure that the
first 4 are distributed relatively fairly, for adaptive sampling to decide
if more samples need to be taken.
SunSky didn't include skycolor in raytrace.
Note: there seems to be an error in sunsky when looking straight down,
so this option requires raytracing stuff not in outer space. :)
- removed ugly clamping function (it was dividing XYZ based on max of
one of the values)
- added option to use Exposure; this only works for brightness (Y).
Results look very pleasant, foggy and hazy results are possible.
With exposure==0, no exposure happens for HDR extreme range skies;
this is how yafray rendered it. (See the sketch below.)
- added menu for choosing color spaces (CIE = modern LCDs)
Please review! (and yes i know it's still not in World :)
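The exposure sketch mentioned above - if I read it right, it's the usual
exponential mapping on luminance, with zero meaning pass-through (the exact
curve in the code may differ):

    #include <math.h>

    /* assumed exposure curve on luminance Y; exposure == 0 leaves the
     * extreme HDR range untouched, yafray-style */
    static float apply_exposure(float Y, float exposure)
    {
        if (exposure == 0.0f)
            return Y;
        return 1.0f - expf(-Y * exposure);
    }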
Removed all the old particle rendering code and options I had in there
before, in order to make way for...
A new procedural texture: 'Point Density'
Point Density is a 3d texture that finds the density of a group of 'points'
in space and returns that in the texture as an intensity value. Right now,
it's at an early stage and it's only enabled for particles, but it would be
cool to extend it later for things like object vertices, or point cache
files from disk - i.e. to import point cloud data into Blender for
rendering volumetrically.
Currently there are just options for an Object and its particle system
number, this is the particle system that will get cached before rendering,
and then used for the texture's density estimation.
It works just like any other procedural texture, so
previously where I've mapped a clouds texture to volume density to make
some of those test renders, now I just map a point density texture to
volume density.
Here's a version of the same particle smoke test file from before, updated
to use the point density texture instead:
http://mke3.net/blender/devel/rendering/volumetrics/smoke_test02.blend
There are a few cool things about implementing this as a texture:
- The one texture (and cache) can be instanced across many different
materials:
http://mke3.net/blender/devel/rendering/volumetrics/pointdensity_instanced.png
This means you can calculate and bake one particle system, but render it
multiple times across the scene, with different material settings, at no
extra memory cost.
Right now, the particles are cached in world space, so you have to map it
globally, and if you want it offset, you have to do it in the material (as
in the file above). I plan to add an option to bake in local space, so you
can just map the texture to local and it just works.
- It works for solid surfaces too; it just gets the density at that
particular point on the surface, e.g.:
http://mke3.net/blender/devel/rendering/volumetrics/pointdensity_solid.mov
- You can map it to whatever you want, not only density but the various
emissions and colours as well. I'd like to investigate using the other
outputs in the texture too (like the RGB or normal outputs), perhaps with
options to colour by particle age, generating normals for making particle
'dents' in a surface, whatever!
Initial commit for supporting rendering particles directly as
volume density. It works by looking up how many particles are
within a specified radius of the currently shaded point and using
that to calculate density (which is used just as any other
measure of density would be).
http://mke3.net/blender/devel/rendering/volumetrics/smoke_test01.mov
http://mke3.net/blender/devel/rendering/volumetrics/smoke_test01.blend
Right now it's an early implementation, just to see that it can
work - it may end up changing quite a bit. Currently, it's just a
single switch on the volume material - it looks up all particles
in the scene for density at the current shaded point in world
space (so the volume region must enclose the particles in order
to render them).
This will probably change - one idea I have is to make the
particle density estimation a procedural texture with options for:
* the object and particle system to use
* the origin of the co-ordinate system, i.e. object center, world
space, etc.
This would allow you, in a sense, to instance particle systems for
render - you only need to bake one particle system, but you can
render it anywhere.
Anyway, plenty of work to do here, firstly on getting a nice
density evaluation with falloff etc...
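The obvious starting point for that is counting nearby particles with a
smooth falloff kernel, along these lines (illustrative sketch, and O(N) per
lookup - a spatial structure like a kd-tree is what would make it usable):

    /* density at point p from totpart particle positions, with a
     * smooth quadratic falloff inside 'radius' */
    static float particle_density(const float p[3], const float (*parts)[3],
                                  int totpart, float radius)
    {
        float dens = 0.0f, r2 = radius * radius;
        for (int i = 0; i < totpart; i++) {
            float dx = parts[i][0] - p[0];
            float dy = parts[i][1] - p[1];
            float dz = parts[i][2] - p[2];
            float d2 = dx * dx + dy * dy + dz * dz;
            if (d2 < r2) {
                float f = 1.0f - d2 / r2;   /* 1 at center, 0 at radius */
                dens += f * f;
            }
        }
        return dens;
    }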
* Now it's possible to render with the camera inside a volume. I'm not sure how this goes with overlapping volumes yet, will look at it. But it allows nice things like this :)
http://mke3.net/blender/devel/rendering/volumetrics/clouds_sky.mov
* Sped up shading significantly by not doing any shading if the density
of the current sample is less than 0.01 (there's nothing to shade there
anyway!). Speeds up around 200% on that clouds scene.
* Fixed a bug in global texture coordinates for volume textures
Now other objects (and sky) correctly render if they're partially
inside or behind a volume. Previously all other objects were ignored,
and volumes just rendered on black. The colour of surfaces inside or
behind the volume gets correctly attenuated by the density of the
volume in between - i.e. thicker volumes will block the light coming
from behind. However, other solid objects don't receive volume shadows
yet, this is to be worked on later.
http://mke3.net/blender/devel/rendering/volumetrics/vol_inside_behind.png
Currently this uses raytracing to find intersections within the volume,
and rays are also traced from the volume, heading behind into the
scene, to see what's behind it (similar effect to ray transp with IOR
1). Because of this, objects inside or behind the volume will not be
antialiased. Perhaps I can come up with a solution for this, but until
then, for antialiasing, you can turn on Full OSA (warning, this will
incur a slowdown). Of course you can always avoid this by rendering
volumes on a separate renderlayer, and compositing in post, too.
Another idea I've started thinking about is to calculate an alpha
value, then use ztransp to overlay on top of other objects. This won't
accurately attenuate and absorb light coming from objects behind the
volume, but for some situations it may be fine, and faster too.
Rather than a single absorption value to control how much light is absorbed as it
travels through a volume, there's now an additional absorption colour. This is
used to absorb different R/G/B components of light at different amounts. For
example, if a white light shines on a volume which absorbs green and blue
components, the volume will appear red.
To make it easier to use, the colour set in the UI is actually the inverse of the
absorption colour, so the colour you set is the colour that the volume will
appear as.
Here's an example of how it works:
http://mke3.net/blender/devel/rendering/volumetrics/vol_col_absorption.jpg
And this can be textured too:
http://mke3.net/blender/devel/rendering/volumetrics/vol_absorb_textured.png
Keep in mind, this doesn't use accurate spectral light wavelength mixing (just
R/G/B channels), so in cases where the absorption colour is fully red, green or
blue, you'll get non-physical results.
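Per channel this behaves like Beer's law, with the inverse of the UI colour
as the absorption coefficient; a sketch of the idea (names illustrative):

    #include <math.h>

    /* attenuate RGB light over one raymarching step; ui_col is the
     * colour the volume should appear as, so its inverse is absorbed */
    static void absorb_rgb(float light[3], const float ui_col[3],
                           float absorption, float density, float stepsize)
    {
        for (int i = 0; i < 3; i++) {
            float absorb = (1.0f - ui_col[i]) * absorption;
            light[i] *= expf(-absorb * density * stepsize);
        }
    }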
Todo: refactor the volume texturing internal interface...
This is an initial commit to get it in SVN and make it easier to work on.
Don't expect it to work perfectly, it's still in development and there's
plenty of work still needing to be done. And so, no, I'm not very interested
in hearing bug reports or feature requests at this stage :)
There's some info on this, and a todo list at:
http://mke3.net/weblog/volume-rendering/
Right now I'm trying to focus on getting shading working correctly (there's
currently a problem in which 'surfaces' of the volume facing towards or away
from light sources are getting shaded differently to how they should be),
then I'll work on integration issues, like taking materials behind the volume
into account, blending with alpha, etc. You can do simple testing though,
mapping textures to density or emission on a cube with volume material.
- Added blending mode and factor option, so it's more clear and
controllable what happens with it. Also nice for crazy effects
of course!
- Preview render now shows preview for it too
On the todos:
- have this in World buttons (as well) for quicker sky setups
- review math of color clamping and scaling, this is definitely
not good... but a fix will make old files look very different.
Finally, after a long time new render candy for the non-game peoples! :)
Good doc is here:
http://www.harkyman.com/2008/08/06/controllable-shadow-intensity-and-color/
Note the colorpicker for shadow is in "Shadow and Spot" panel. A bit
hidden, could get more attention. For later. :)
the features that are needed to run the game. Compile tested with
scons and make, but not cmake, which seems to have an issue not related
to these changes. The changes include:
* GLSL support in the viewport and game engine, enable in the game
menu in textured draw mode.
* Synced and merged part of the duplicated blender and gameengine/
gameplayer drawing code.
* Further refactoring of game engine drawing code, especially mesh
storage changed a lot.
* Optimizations in game engine armatures to avoid recomputations.
* A python function to get the framerate estimate in game.
* An option to take object color into account in materials.
* An option to restrict shadow casters to a lamp's layers.
* Increase from 10 to 18 texture slots for materials, lamps, world.
An extra texture slot shows up once the last slot is used.
* Memory limit for undo, not enabled by default yet because it
needs the .B.blend to be changed.
* Multiple undo for image painting.
* An offset for dupligroups, so not all objects in a group have to
be at the origin.
Baking would split non-planar quads in an unpredictable way, which is fine for
rendering, but game engines often use a fixed order: (0,1,2), (0,2,3) or
(1,2,3), (1,3,0).
Added an option to use a fixed order when baking.
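i.e. with the fixed order enabled, a quad (v0,v1,v2,v3) always splits the
same way, regardless of its shape (sketch):

    /* split a quad into two triangles in a fixed (0,1,2), (0,2,3)
     * order, matching what game engines typically assume */
    static void quad_split_fixed(const int quad[4], int tri_a[3], int tri_b[3])
    {
        tri_a[0] = quad[0]; tri_a[1] = quad[1]; tri_a[2] = quad[2];
        tri_b[0] = quad[0]; tri_b[1] = quad[2]; tri_b[2] = quad[3];
    }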