The idea is to avoid a roundtrip from byte to float and back to a byte buffer,
and to use the render result's byte buffer to store the result of sequencer
rendering. This matches what the regular render pipeline does, and gives
around a 2-3x speedup of sequencer export on simple scenes.
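To illustrate the cost that goes away, here is a minimal sketch of the kind of
per-frame float-to-byte conversion such a roundtrip implies; the function is
made up for illustration and is not Blender's actual conversion code:

    #include <stddef.h>

    /* Purely illustrative: this float->byte conversion is the per-frame
     * roundtrip that writing sequencer output straight into the render
     * result's byte buffer avoids. */
    void float_rgba_to_byte_rgba(const float *src, unsigned char *dst, size_t num_pixels)
    {
        for (size_t i = 0; i < 4 * num_pixels; i++) {
            float v = src[i];
            if (v < 0.0f) v = 0.0f;
            if (v > 1.0f) v = 1.0f;
            dst[i] = (unsigned char)(v * 255.0f + 0.5f);
        }
    }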
non-power-of-2 texture support. Note that all I did was pass
the correct width/height into glReadPixels; the result may not
be the same as if non-power-of-2 textures were enabled.
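For reference, a minimal sketch of the read-back in question; glPixelStorei
and glReadPixels are the standard GL calls, while the wrapper itself is made
up here:

    #include <GL/gl.h>

    /* Read the bound framebuffer into a caller-allocated RGBA byte buffer.
     * Passing the real width/height matters for non-power-of-2 sizes, and a
     * pack alignment of 1 avoids padded rows for widths not divisible by 4. */
    void read_framebuffer_rgba(int width, int height, unsigned char *rect)
    {
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rect);
    }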
file, since we need the updated times and frames.
This was lost during the stamp code refactoring. The refactoring moved
stamping to render initialization, so we would be guaranteed to have
correct cameras even when saving render stills at a later time (and even
if the cameras were changed). For a regular render this works since
render init takes care of the stamp, but for OpenGL rendering we need to
do this manually.
Still not 100% correct: it does not apply multiview cameras to the metadata.
Originally I wanted to get rid of RenderResult->rect* entirely, but it's
convenient to have for temporary structs.
This patch makes sure they are used only when really needed, which
should help clean up the code.
(They are needed when using RE_AcquireResultImage(), which produces a
RenderResult with no RenderView.)
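As a purely illustrative sketch of the intended rule (the structs below are
simplified stand-ins, not Blender's real definitions): prefer the per-view
buffer and only fall back to the RenderResult-level rect when no RenderView
is present:

    /* Hypothetical, simplified stand-ins for the real structs. */
    typedef struct DemoRenderView {
        unsigned int *rect32;       /* per-view byte buffer */
    } DemoRenderView;

    typedef struct DemoRenderResult {
        DemoRenderView *first_view; /* NULL for temporary results without views */
        unsigned int *rect32;       /* legacy buffer, e.g. from RE_AcquireResultImage() */
    } DemoRenderResult;

    /* Use the per-view buffer when available, the legacy rect otherwise. */
    unsigned int *result_byte_buffer(DemoRenderResult *rr)
    {
        if (rr->first_view && rr->first_view->rect32) {
            return rr->first_view->rect32;
        }
        return rr->rect32;
    }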
Reviewers: sergey
Differential Revision: https://developer.blender.org/D1270
Official Documentation:
http://www.blender.org/manual/render/workflows/multiview.html
Implemented Features
====================
Builtin Stereo Camera
* Convergence Mode
* Interocular Distance
* Convergence Distance
* Pivot Mode
Viewport
* Cameras
* Plane
* Volume
Compositor
* View Switch Node
* Image Node Multi-View OpenEXR support
Sequencer
* Image/Movie Strips 'Use Multiview'
UV/Image Editor
* Option to see Multi-View images in Stereo-3D or as individual images
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
I/O
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
Scene Render Views
* Ability to have an arbitrary number of views in the scene
Missing Bits
============
First rule of Multi-View bug reports: if something is not working as it should *when Views is off*, this is a severe bug; do mention this in the report.
Second rule: if something works *when Views is off* but doesn't (or crashes) *when Views is on*, this is an important bug. Do mention this in the report.
Everything else is likely a small todo, and may wait until we are sure none of the above is happening.
Apart from that there are those known issues:
* Compositor Image Node works poorly for Multi-View OpenEXR
(this was working perfectly before the 'Use Multi-View' functionality)
* Selecting camera from Multi-View when looking from camera is problematic
* Animation Playback (ctrl+F11) doesn't support stereo formats
* Wrong filepath when trying to play back animated scene
* Viewport Rendering doesn't support Multi-View
* Overscan Rendering
* Fullscreen display modes need to warn the user
* Object copy should be aware of views suffix
Acknowledgments
===============
* Francesco Siddi for the help with the original feature specs and design
* Brecht Van Lommel for the original review of the code and design early on
* Blender Foundation for the Development Fund to support the project wrap up
Final patch reviewers:
* Antony Riakiotakis (psy-fi)
* Campbell Barton (ideasman42)
* Julian Eisel (Severin)
* Sergey Sharybin (nazgul)
* Thomas Dinges (dingto)
Code contributors of the original branch in github:
* Alexey Akishin
* Gabriel Caraballo
range and extra frames.
The issue here is that the movie backend would unconditionally use the start
frame of the scene instead of the preview frame. Solved by passing an
explicit "preview" argument.
Strictly speaking, the preview argument is part of the renderdata
struct, which is also passed to the code, but when rendering the final
result we want to unconditionally render the full range regardless of
the preview setting of the render structure.
However, OpenGL rendering does use the preview range, so we need to
account for that when making those exports.
This is also a nice chance to correct the filenames, which still used
the full range.
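A rough sketch of the frame-range decision described above; the struct and
field names are assumptions standing in for Blender's render settings, not
the actual RenderData layout:

    #include <stdbool.h>

    /* Hypothetical, simplified render settings. */
    typedef struct DemoRenderRange {
        int sfra, efra;    /* full scene range */
        int psfra, pefra;  /* preview range */
        bool use_preview_range;
    } DemoRenderRange;

    /* Final renders pass preview=false and always get the full range;
     * OpenGL/preview exports pass preview=true and honour the preview range. */
    void export_frame_range(const DemoRenderRange *r, bool preview, int *r_start, int *r_end)
    {
        if (preview && r->use_preview_range) {
            *r_start = r->psfra;
            *r_end = r->pefra;
        }
        else {
            *r_start = r->sfra;
            *r_end = r->efra;
        }
    }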
Show World now influences whether the world is rendered in OpenGL rendering.
This is a little undefined in Blender's history, since the sky always used
to be drawn when rendering offscreen, as if "Only Render" was ticked. Since
there is no really valid color if we don't draw the sky in that case (and
using theme colors is not so nice), we just draw a transparent background.
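A minimal sketch of that background decision; glClearColor/glClear are real
GL calls, while the sky drawing is only hinted at as a comment:

    #include <GL/gl.h>
    #include <stdbool.h>

    /* When the world is not shown there is no meaningful background colour,
     * so clear to fully transparent instead of using a theme colour. */
    void draw_viewport_background(bool show_world)
    {
        if (show_world) {
            /* draw the world/sky as before (hypothetical draw_sky() call) */
        }
        else {
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            glClear(GL_COLOR_BUFFER_BIT);
        }
    }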
This commit introduces a few ready made effects for the 3D viewport
and OpenGL rendering.
Included effects are Depth of Field (accessible from camera view) and
Screen Space Ambient Occlusion. These effects can be turned on and
tweaked from the shading panel in the 3D viewport.
Offscreen rendering will use the settings of the current camera.
WIP documentation can be found here:
http://wiki.blender.org/index.php/User:Psy-Fi/Framebuffer_Post-processing
D937 with minor edits (whitespace only)
@aligorith, I double-checked that everything runs smoothly, blame me if I missed something ;). Sorry for just taking the initiative and committing without talking to you, but I wasn't able to catch you over the last few days. This should be fixed before the release IMHO, but I don't think it's important enough to be committed during BCon5, so sorry again, but hopefully everything is okay :)
After double-checking the sequencer code, there doesn't seem to be any reason to
exclude these from the sequencer previews. This makes it possible to use the
sequencer to non-destructively chain together different Grease Pencil animated
shots without having to render each image sequence first, allowing for
a smoother workflow.
Just in case the initial assumption isn't entirely correct, I've put in place
an extra arg to the relevant functions which can be hooked up to a suitable
option on the scene strip later to turn this on/off as needed.
This merge commit brings in a number of new features and workflow/UI improvements for
working with Grease Pencil. While these were originally targeted at improving
the workflow for creating 3D storyboards in Blender using the Grease Pencil,
many of these changes should prove useful in other workflows too.
The main highlights here are:
1) It is now possible to edit Grease Pencil strokes
- Use D Tab, or toggle the "Enable Editing" toggles in the Toolbar/Properties regions
to enter "Stroke Edit Mode". In this mode, many common editing tools will
operate on Grease Pencil stroke points instead.
- Tools implemented include Select, Select All/Border/Circle/Linked/More/Less,
Grab, Rotate, Scale, Bend, Shear, To Sphere, Mirror, Duplicate, Delete.
- Proportional Editing works when using the transform tools
2) Grease Pencil stroke settings can now be animated
NOTE: Currently drivers don't work, but if time allows, this may still be
added before the release.
3) Strokes can be drawn with "filled" interiors, using a separate set of
colour/opacity settings from the ones used for the lines themselves.
This makes use of OpenGL filled polys, which has the limitation of only
being able to fill convex shapes. Some artifacts may be visible on concave
shapes (e.g. pacman's mouth will be overdrawn)
4) "Volumetric Strokes" - An alternative drawing technique for stroke drawing
has been added which draws strokes as a series of screen-aligned discs.
While this originally started as an experimental technique for getting
better-quality 3D lines, the effects possible using it were interesting
enough to warrant making it a dedicated feature. Best results are achieved
when partial opacity and large stroke widths are used.
5) Improved Onion Skinning Support
- Different colours can be selected for the before/after ghosts. To do so,
enable the "colour wheel" toggle beside the Onion Skinning toggle, and set
the colours accordingly.
- Different numbers of ghosts can be shown before/after the current frame
6) Grease Pencil datablocks are now attached to the scene by default instead of
the active object.
- For a long time, the object-attachment has proved to be quite problematic
for users to keep track of. Now that this is done at scene level, it is
easier for most users to use.
- An exception for old files (and for any addons which may benefit from object
attachment instead), is that if the active object has a Grease Pencil datablock,
that will be used instead.
- It is not currently possible to choose object-attachment from the UI, but
it is simple to do this from the console instead, by doing:
bpy.context.active_object.grease_pencil = bpy.data.grease_pencil["blah"]
7) Various UI Cleanups
- The layers UI has been cleaned up to use a list instead of the nested-panels
design. Apart from saving space, this is also much nicer to look at now.
- The UI code is now all defined in Python. To support this, it has been necessary
to add some new context properties to make it easier to access these settings.
e.g. "gpencil_data" for the datablock
"active_gpencil_layer" and "active_gpencil_frame" for active data,
"editable_gpencil_strokes" for the strokes that can be edited
- The "stroke placement/alignment" settings (previously "Drawing Settings" at the
bottom of the Grease Pencil panel in the Properties Region) is now located in
the toolbar. These were more toolsettings than properties for how GPencil got drawn.
- "Use Sketching Sessions" has been renamed "Continuous Drawing", as per a
suggestion for an earlier discussion on developer.blender.org
- By default, the painting operator will wait for a mouse button to be pressed
before it starts creating the stroke. This is to make it easier to include
this operator in various toolbars/menus/etc. To get it immediately starting
(as when you hold down the D key to draw), set "wait_for_input" to False.
- GPencil Layers can be rearranged in the "Grease Pencil" mode of the Action Editor
- Toolbar panels have been added to all the other editors which support these.
8) Pie menus for quick-access to tools
A set of experimental pie menus has been included for quick access to many
tools and settings. It is not necessary to use these to get things done,
but they have been designed to help make certain common tasks easier.
- Ctrl-D = The main pie menu. Reveals tools in a context sensitive and
spatially stable manner.
- D Q = "Quick Settings" pie. This allows quick access to the active
layer's settings. Notably, colours, thickness, and turning
onion skinning on/off.
Summary:
Made object updates happen from multiple threads. It is a task-based
scheduling system which uses the current dependency graph for spawning new
tasks. This means threading happens at the object level, but the system is
flexible enough for higher granularity.
Technical details:
- Uses the task scheduler which was recently committed to trunk
  (the one Brecht ported from Cycles).
- Added two utility functions to the dependency graph:
  * DAG_threaded_update_begin, which is called to initialize the threaded
    objects update. It also schedules the root DAG node to the queue,
    hence starting the evaluation process.
    Initialization calculates how many parents need to be evaluated
    before the current DAG node can be scheduled. This value is used by
    task threads to detect faster which nodes may be scheduled.
  * DAG_threaded_update_handle_node_updated, which is called from the
    task thread function when a node has been fully handled.
    This function decreases num_pending_parents of the node's children
    and schedules children with zero pending parents (see the sketch
    after this list).
  As might have become clear by now, a task thread receives DAG nodes and
  decides which callback to call for each of them.
  Currently only BKE_object_handle_update is called for object nodes.
  In the future it will call node->callback() from Ali's new DAG.
- This required adding some workarounds to the render pipeline, mainly
  to stop using get_object_dm() from the modifiers' apply callback.
  Such a call was only a workaround for a dependency graph glitch when
  rendering a scene with, say, boolean modifiers before displaying that
  scene.
  This change moves the workaround from one place to another, so the
  overall hackentropy remains the same.
- Added the paradigm of an EvaluationContext. Currently it's just a more
  reliable replacement for G.is_rendering, which fails in some
  circumstances.
  The future idea of this context is to also store all the local data
  needed for object evaluation, such as local time, Copy-on-Write data
  and so on.
  There are two types of EvaluationContext:
  * A context used for viewport updates and owned by Main. In the future
    this context might easily be moved to Window or Screen to allow
    per-window/per-screen local time.
  * A context used by render engines to evaluate objects for render
    purposes. The render engine is the owner of this context.
  This context is passed to all object update routines.
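The scheduling rule above can be sketched without the real task pool: every
node keeps a count of unevaluated parents and becomes schedulable once that
count reaches zero. The sketch below is single-threaded, a plain queue stands
in for the task scheduler, and all names are made up rather than taken from
the actual DAG API:

    #include <stdio.h>

    #define MAX_CHILDREN 8

    /* Hypothetical, simplified DAG node. */
    typedef struct DemoNode {
        const char *name;
        int num_pending_parents;
        struct DemoNode *children[MAX_CHILDREN];
        int num_children;
    } DemoNode;

    /* A plain FIFO stands in for pushing tasks to the scheduler. */
    static DemoNode *queue[64];
    static int q_head = 0, q_tail = 0;

    static void schedule(DemoNode *node) { queue[q_tail++] = node; }

    static void evaluate_node(DemoNode *node)
    {
        /* The real code calls BKE_object_handle_update() for object nodes. */
        printf("evaluate %s\n", node->name);
    }

    /* Equivalent of "node was fully handled": decrement the children's
     * pending-parent counts and schedule the ones that reach zero. */
    static void node_updated(DemoNode *node)
    {
        for (int i = 0; i < node->num_children; i++) {
            DemoNode *child = node->children[i];
            if (--child->num_pending_parents == 0) {
                schedule(child);
            }
        }
    }

    int main(void)
    {
        DemoNode root = {"root", 0, {NULL}, 0};
        DemoNode a = {"object A", 1, {NULL}, 0};
        DemoNode b = {"object B (depends on A)", 1, {NULL}, 0};
        root.children[root.num_children++] = &a;
        a.children[a.num_children++] = &b;

        schedule(&root);  /* DAG_threaded_update_begin() would do this part */
        while (q_head < q_tail) {
            DemoNode *node = queue[q_head++];
            evaluate_node(node);
            node_updated(node);
        }
        return 0;
    }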
Reviewers: brecht, campbellbarton
Reviewed By: brecht
CC: lukastoenne
Differential Revision: https://developer.blender.org/D94
It was only used by the OpenGL render, and in fact it only needed to set
the DISPLAY_BUFFER_INVALID flag for the image buffer.
In theory this shouldn't make any difference to OpenGL render speed
(because the change just moved rect_from_float from the color
management code to the image save code), and I could not see any speed
changes on my laptop.
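For illustration only, the change amounts to tagging the buffer after its
byte rect is written, instead of running the display-buffer conversion right
away; the struct and flag below are simplified stand-ins, not Blender's
actual ImBuf definition:

    /* Hypothetical, simplified image buffer. */
    typedef struct DemoImBuf {
        unsigned int *rect;  /* byte pixels just written by the OpenGL render */
        int userflags;
    } DemoImBuf;

    #define DEMO_DISPLAY_BUFFER_INVALID (1 << 0)  /* assumed flag value */

    /* Mark the display buffer as outdated so colour management regenerates it
     * lazily, instead of doing a rect_from_float style conversion here. */
    void tag_display_buffer_invalid(DemoImBuf *ibuf)
    {
        ibuf->userflags |= DEMO_DISPLAY_BUFFER_INVALID;
    }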