Show World will now influence whether the world is rendered in OpenGL rendering.
This was a little undefined in Blender's history, since the sky used
to always be drawn when offscreen rendering, as if "Only Render" was
ticked. If we don't draw the sky in that case there's no valid color
really (and using theme colors is not so nice), so we just draw a transparent
background.
This commit introduces a few ready-made effects for the 3D viewport
and OpenGL rendering.
Included effects are Depth of Field, accessible from camera view,
and Screen Space Ambient Occlusion. These effects can be turned on and
tweaked from the Shading panel in the 3D viewport.
Offscreen rendering will use the settings of the current camera.
WIP documentation can be found here:
http://wiki.blender.org/index.php/User:Psy-Fi/Framebuffer_Post-processing
D937 with minor edits (whitespace only)
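For reference, a quick way to toggle these effects from the Python console; note that the property names below (fx_settings, use_ssao, use_dof) are an assumption based on the 2.7x RNA and are not spelled out in this commit:

  import bpy

  # Hypothetical console sketch: enable the new viewport effects on all 3D Views
  # of the current screen. Property names (fx_settings, use_ssao, use_dof) are
  # assumed from the 2.7x Python API, not taken from this commit message.
  for area in bpy.context.screen.areas:
      if area.type == 'VIEW_3D':
          space = area.spaces.active
          space.fx_settings.use_ssao = True  # screen space ambient occlusion
          space.fx_settings.use_dof = True   # depth of field (uses camera settings)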
@aligorith, I double checked that everything runs smoothly, blame me if I missed something ;). Sorry for just taking the initiative and committing without talking to you, but I wasn't able to catch you over the last few days. This should be fixed before the release IMHO, but I don't think it's important enough to be committed during BCon5, so sorry again, but hopefully everything is okay :)
After double checking the sequencer code, there doesn't seem to be any reason to
exclude these from the sequencer previews. This makes it possible to use the
sequencer to non-destructively chain different Grease Pencil animated
shots together without having to render each image sequence first, allowing for
a smoother workflow.
Just in case the initial assumption isn't entirely correct, I've put in place
an extra arg to the relevant functions which can be hooked up to a suitable
option on the scene strip later to turn this on/off as needed.
This merge-commit brings in a number of new features and workflow/UI improvements for
working with Grease Pencil. While these were originally targeted at improving
the workflow for creating 3D storyboards in Blender using the Grease Pencil,
many of these changes should also prove useful in other workflows.
The main highlights here are:
1) It is now possible to edit Grease Pencil strokes
- Use D Tab, or toggle the "Enable Editing" toggles in the Toolbar/Properties regions
to enter "Stroke Edit Mode". In this mode, many common editing tools will
operate on Grease Pencil stroke points instead.
- Tools implemented include Select, Select All/Border/Circle/Linked/More/Less,
Grab, Rotate, Scale, Bend, Shear, To Sphere, Mirror, Duplicate, Delete.
- Proportional Editing works when using the transform tools
2) Grease Pencil stroke settings can now be animated
NOTE: Currently drivers don't work, but if time allows, this may still be
added before the release.
3) Strokes can be drawn with "filled" interiors, using a separate set of
colour/opacity settings from the ones used for the lines themselves.
This makes use of OpenGL filled polys, which have the limitation of only
being able to fill convex shapes correctly. Some artifacts may be visible on
concave shapes (e.g. pacman's mouth will be overdrawn).
4) "Volumetric Strokes" - An alternative drawing technique for stroke drawing
has been added which draws strokes as a series of screen-aligned discs.
While this originally started as an experimental technique for getting better
quality 3D lines, the effects possible using this technique were interesting
enough to warrant making it a dedicated feature. Best results are achieved when
partial opacity and large stroke widths are used.
5) Improved Onion Skinning Support
- Different colours can be selected for the before/after ghosts. To do so,
enable the "colour wheel" toggle beside the Onion Skinning toggle, and set
the colours accordingly.
- Different numbers of ghosts can be shown before/after the current frame
6) Grease Pencil datablocks are now attached to the scene by default instead of
the active object.
- For a long time, the object-attachment has proved to be quite problematic
for users to keep track of. Now that this is done at scene level, it is
easier for most users to use.
- An exception, for old files (and for any addons which may benefit from object
attachment), is that if the active object has a Grease Pencil datablock,
that one will be used instead.
- It is not currently possible to choose object-attachment from the UI, but
it is simple to do this from the console instead, by doing:
context.active_object.grease_pencil = bpy.data.grease_pencil["blah"]
7) Various UI Cleanups
- The layers UI has been cleaned up to use a list instead of the nested-panels
design. Apart from saving space, this is also much nicer to look at now.
- The UI code is now all defined in Python. To support this, it has been necessary
to add some new context properties to make it easier to access these settings,
e.g. "gpencil_data" for the datablock,
"active_gpencil_layer" and "active_gpencil_frame" for active data, and
"editable_gpencil_strokes" for the strokes that can be edited
(see the console sketch after this list).
- The "stroke placement/alignment" settings (previously "Drawing Settings" at the
bottom of the Grease Pencil panel in the Properties Region) are now located in
the toolbar. These were more tool settings than properties for how GPencil gets drawn.
- "Use Sketching Sessions" has been renamed "Continuous Drawing", as per a
suggestion from an earlier discussion on developer.blender.org.
- By default, the painting operator will wait for a mouse button to be pressed
before it starts creating the stroke. This is to make it easier to include
this operator in various toolbars/menus/etc. To have it start immediately
(as when you hold down the D-Key to draw), set "wait_for_input" to False.
- GPencil Layers can be rearranged in the "Grease Pencil" mode of the Action Editor
- Toolbar panels have been added to all the other editors which support these.
8) Pie menus for quick-access to tools
A set of experimental pie menus has been included for quick access to many
tools and settings. It is not necessary to use these to get things done,
but they have been designed to help make certain common tasks easier.
- Ctrl-D = The main pie menu. Reveals tools in a context sensitive and
spatially stable manner.
- D Q = "Quick Settings" pie. This allows quick access to the active
layer's settings. Notably, colours, thickness, and turning
onion skinning on/off.
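As a quick console sketch of the new context properties and the wait_for_input option mentioned above (the property names come from this log; the gpencil.draw operator call is an assumption about the 2.7x API):

  import bpy

  # Access the new Grease Pencil context properties (names as listed above).
  gpd = bpy.context.gpencil_data                 # active Grease Pencil datablock
  layer = bpy.context.active_gpencil_layer       # active layer
  for stroke in bpy.context.editable_gpencil_strokes:
      print("stroke with", len(stroke.points), "points")

  # Start drawing immediately instead of waiting for a mouse press
  # (operator name and arguments assumed from the 2.7x API).
  bpy.ops.gpencil.draw(mode='DRAW', wait_for_input=False)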
Summary:
Made object updates happen from multiple threads. It is a task-based
scheduling system which uses the current dependency graph for spawning new
tasks. This means threading happens at the object level, but the system is
flexible enough for higher granularity.
Technical details:
- Uses the task scheduler which was recently committed to trunk
(the one which Brecht ported from Cycles).
- Added two utility functions to the dependency graph:
* DAG_threaded_update_begin, which is called to initialize the threaded
object update. It will also schedule the root DAG node to the queue,
hence starting the evaluation process.
Initialization will calculate how many parents need to be evaluated
before the current DAG node can be scheduled. This value is used by task
threads for faster detection of which nodes might be scheduled.
* DAG_threaded_update_handle_node_updated, which is called from the task
thread function when a node has been fully handled.
This function decreases num_pending_parents of the node's children and
schedules children with zero pending parents (see the sketch after this list).
As might have become clear, a task thread receives DAG nodes and
decides which callback to call for each of them.
Currently only BKE_object_handle_update is called for object nodes.
In the future it'll call node->callback() from Ali's new DAG.
- This required adding some workarounds to the render pipeline,
mainly to stop using get_object_dm() from the modifiers' apply callback.
Such a call was only a workaround for a dependency graph glitch when
rendering a scene with, say, boolean modifiers before displaying
this scene.
This change moves the workaround from one place to another, so the overall
hackentropy remains the same.
- Added the paradigm of an EvaluationContext. Currently it's just a
more reliable replacement for G.is_rendering, which fails in some
circumstances.
The future idea for this context is to also store all the local data needed
for object evaluation, such as local time, Copy-on-Write data and so on.
There are two types of EvaluationContext:
* A context used for viewport updates and owned by Main. In the future
this context might easily be moved to Window or Screen to allow
per-window/per-screen local time.
* A context used by render engines to evaluate objects for render purposes.
The render engine is the owner of this context.
This context is passed to all object update routines.
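For illustration, a minimal Python sketch of the pending-parents scheduling described above; the names (Node, threaded_update) and the thread pool are made up for the example, while the real code uses Blender's C task scheduler:

  import threading
  from concurrent.futures import ThreadPoolExecutor

  class Node:
      def __init__(self, name):
          self.name = name
          self.children = []
          self.num_pending_parents = 0

  def add_dependency(parent, child):
      parent.children.append(child)
      child.num_pending_parents += 1

  def threaded_update(all_nodes, evaluate, num_threads=4):
      lock = threading.Lock()   # stand-in for atomic counter updates
      done = threading.Event()
      remaining = [len(all_nodes)]
      pool = ThreadPoolExecutor(max_workers=num_threads)

      def handle(node):
          evaluate(node)        # e.g. BKE_object_handle_update for object nodes
          ready = []
          with lock:
              # Node fully handled: decrease the children's pending-parents count
              # and collect those which can now be scheduled.
              for child in node.children:
                  child.num_pending_parents -= 1
                  if child.num_pending_parents == 0:
                      ready.append(child)
              remaining[0] -= 1
              if remaining[0] == 0:
                  done.set()
          for child in ready:
              pool.submit(handle, child)

      # Schedule the root nodes (no pending parents), then wait for the graph.
      for node in all_nodes:
          if node.num_pending_parents == 0:
              pool.submit(handle, node)
      done.wait()
      pool.shutdown()

  # Example: a -> (b, c), then both b and c -> d
  a, b, c, d = Node("a"), Node("b"), Node("c"), Node("d")
  for parent, child in [(a, b), (a, c), (b, d), (c, d)]:
      add_dependency(parent, child)
  threaded_update([a, b, c, d], lambda node: print("update", node.name))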
Reviewers: brecht, campbellbarton
Reviewed By: brecht
CC: lukastoenne
Differential Revision: https://developer.blender.org/D94
It was only used by OpenGL render and in fact it just needed to
set the DISPLAY_BUFFER_INVALID flag for the image buffer.
In theory this shouldn't make any difference to OpenGL render speed
(because the change just moved rect_from_float from the color
management code to the image save code), and I could not see any speed
changes on my laptop.
were not properly updating when rendering animations.
The render engine was only updating the image user's current frame on images used
by material textures. Now the function that updates them all has been moved from
the editors to the blenkernel level, and it runs on all frame changes.
of editmode on the child object.
The problem was that the object custom data mask was not taken into account when
rebuilding the derivedmesh in some cases; this mask is needed for the derivedmesh
to contain the mapping back to the original vertices. Now this data mask is
used for any derivedmesh build that will be cached.
Also problematic was that the datamask for the active object was applied to
all objects in the scene, which caused the parent object to be recalculated
when it didn't need to be. Now this datamask is only used for the active object.
Doing linearization with GLSL was already faster, but even faster is to just read the
bytes instead of floats and convert those to linear, since byte => float is just a quick
256-entry table lookup. Also made it assign the bytes directly to the image buffer so
they do not need to be converted back from float to byte for file saving, and made sky
render write the background color with OpenGL instead of doing it on the CPU.
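As an illustration of the byte => float table lookup (a conceptual numpy sketch; the real conversion lives in the C color management code and uses the OCIO-configured transform rather than this hard-coded sRGB curve):

  import numpy as np

  # Build a 256-entry byte -> linear-float lookup table (standard sRGB curve).
  def srgb_to_linear(v):
      return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

  lut = srgb_to_linear(np.arange(256) / 255.0).astype(np.float32)

  def byte_rect_to_linear(byte_rect):
      # byte_rect: (height, width, 4) uint8 RGBA; result: float32, linear RGB.
      linear = lut[byte_rect]
      linear[..., 3] = byte_rect[..., 3] / 255.0   # alpha is not color managed
      return linear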
Uses GLSL for drawing the image in the Image Editor space.
This required a change in image_buffer_rect_update, so
the original float buffer is being updated as well. This
is unlikely to be something bad, but I will keep an eye
on this change.
Also, no byte buffer allocation happens there anymore; this
is because the byte buffer is used for display only,
and in the case of GLSL display such allocation and
partial update is just a waste of time.
Also switched OpenGL render from using CPU color
space linearization to a GLSL color space transform.
This makes OpenGL rendering quite a bit faster (but
still slower than in 2.60).
Internal changes:
- Added functions to set up a GLSL shader for color
space conversion in colormanagement.c. Currently
conversion from a colorspace defined by a role to
linear space is implemented. Easy to extend to
other cases.
- Added helper functions to glutil.c which do a
smarter image buffer draw (calling all the needed OCIO
stuff; editors can now draw an image buffer with a
single function call -- all the checks are done in
glutil.c).
- Also added a helper function for buffer linearization
from a given role to glutil.c. Everyone is now able to
linearize a buffer with a single call.
This function will do nothing if the GLSL routines fail
or are not supported.
And one last thing: this function uses offscreen
drawing, which could potentially give issues on some
cards; I will also keep an eye on this.
with transparency, would show as too-dark colors on edges.
Found a strange issue here though: the alpha value in the OpenGL render result
is not the same as the one specified in the material. It's not clear to me why
this happens; color space conversions should not influence the alpha channel.
without hurting quick texture painting
- ED_view3d_draw_offscreen will now output a buffer with
transparent alpha; if sky is needed it should be alpha-undered
later (see the sketch after this list).
- ED_view3d_draw_offscreen_imbuf now accepts an alpha mode
argument, which can be either R_ADDSKY or R_PREMULALPHA.
- OpenGL render and the sequencer's OpenGL preview will now reflect
the scene's Alpha Mode.
- Quick Edit will use OpenGL with the transparent alpha mode.
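A conceptual numpy sketch of the alpha-under step mentioned in the first point (putting the sky color underneath a premultiplied-alpha render result); names and shapes are illustrative, not the actual R_ADDSKY code path:

  import numpy as np

  def alpha_under_sky(render_rgba, sky_rgb):
      # render_rgba: (height, width, 4) premultiplied float buffer from the
      # offscreen draw; sky_rgb: background color to place underneath it.
      out = render_rgba.copy()
      alpha = render_rgba[..., 3:4]
      out[..., :3] += (1.0 - alpha) * np.asarray(sky_rgb)  # sky shows through
      out[..., 3] = 1.0                                    # result is fully opaque
      return out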
This codec is absolutely needed to generate a DCP using OpenDCP;
before, an external application was used to convert JP2 to J2K,
which slowed down export a lot.
The new codec is exposed in the image format settings panel and called
Codec. The default one is JP2, which creates files with the .jp2 extension;
the new one is called J2K and creates files with the .j2c extension.
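A possible way to select the new codec from Python (the property names file_format/jpeg2k_codec are assumptions about how the setting is exposed in RNA; the commit only names the UI option "Codec"):

  import bpy

  # Assumed RNA names -- double check against the actual image format settings.
  settings = bpy.context.scene.render.image_settings
  settings.file_format = 'JPEG2000'
  settings.jpeg2k_codec = 'J2K'   # writes .j2c files, suitable for OpenDCP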
Other changes:
- Fixed an AVI JPEG warning which was treated as an error here.
- Made it so the extension is detected from ImageFormatData instead
of the image file type, which makes it possible to have different
extensions for the same file type depending on its settings.
The IRIS format should still be changed (depending on the number of
channels it'll use the .bw, .rgb or .rgba extension).
- Default image format settings are now set from the image buffer
when re-saving it. This makes it possible to easily open a .j2c file
and save it using the J2K codec (without this change it would save as
.jp2 using the JP2 codec).
This commit makes BKE_image_acquire_ibuf reference its result, which means that once
some area has requested an image buffer, it is guaranteed that this buffer won't
be freed by an image signal.
To de-reference the buffer, BKE_image_release_ibuf should now always be used.
To make referencing work correctly we can not rely on the result of
image_get_ibuf_threadsafe called outside of the thread lock. This is because
we need to guarantee that getting the image buffer from the list of loaded buffers
and referencing it happen atomically. Without a lock here it would be possible for
IMA_SIGNAL_FREE to be called between the call to image_get_ibuf_threadsafe and
referencing the buffer. Image signal handling is now blocking too, to prevent such
a situation.
Threads lock using a spinlock, which is faster than a mutex. There were some
reports in the past about render slowdowns when using OSX on a Xeon CPU.
It shouldn't happen with spinlocks, but more tests on different hardware would
be really welcome. So far I can not see speed regressions on my own computers.
This commit also removes BKE_image_get_ibuf, because it was not so intuitive
when get_ibuf vs. acquire_ibuf should be used.
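A minimal Python sketch of the acquire/release referencing pattern described above (illustrative names only; the actual implementation is in C and protects the lookup with a spinlock):

  import threading

  class ImageBufferCache:
      # Illustrative acquire/release referencing, not the actual Blender code.
      def __init__(self):
          self._lock = threading.Lock()   # stand-in for the spinlock
          self._buffers = {}              # name -> [buffer, refcount]

      def acquire_ibuf(self, name, load_func):
          # Look up (or load) and reference the buffer under the same lock, so a
          # free signal can never run between the lookup and the refcount bump.
          with self._lock:
              entry = self._buffers.get(name)
              if entry is None:
                  entry = [load_func(name), 0]
                  self._buffers[name] = entry
              entry[1] += 1
              return entry[0]

      def release_ibuf(self, name):
          with self._lock:
              self._buffers[name][1] -= 1

      def signal_free(self, name):
          # Analogue of IMA_SIGNAL_FREE: only drop buffers nobody references.
          with self._lock:
              entry = self._buffers.get(name)
              if entry is not None and entry[1] == 0:
                  del self._buffers[name]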
Thanks to Ton and Brecht for discussion/review :)
Screencast recording stopped on an undo/redo. This was because all thread jobs
were killed then. Now screen jobs (screencast) are left running; that is
data that doesn't change on undo.
Also renamed jobs_stop_all() to jobs_kill_all() - it terminates threads.