This introduces a snap-context that can be re-used for casting rays into the scene
(by operators such as walk-mode, ruler and transform code).
This can be used to cache data between calls too.
We cannot use FLT_MAX as the initial distance for raycast...
Renamed TRANSFORM_DIST_MAX_RAY to BVH_RAYCAST_DIST_MAX, moved it into BLI_kdopbvh,
and use it in the RNA raycast callbacks (and all other places using that API).
- It wasn't possible to know whether Object.ray_cast was successful
when it hit a face with no original index.
- Take (ray_start, ray_direction) vectors.
- Take an optional distance argument.
- Scene/Object.ray_cast now use matching arguments & return values.
See D1650 (own patch)
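A minimal sketch of how the updated API might be used from Python, assuming the post-change signature of (origin, direction[, distance]) and a boolean success flag in the return tuple; check the RNA docs of your build for the exact return values:

    import bpy
    from mathutils import Vector

    obj = bpy.context.object
    ray_origin = Vector((0.0, 0.0, 10.0))
    ray_direction = Vector((0.0, 0.0, -1.0))

    # Assumed return tuple: (success, location, normal, face_index).
    success, location, normal, face_index = obj.ray_cast(
        ray_origin, ray_direction, 100.0)

    if success:
        # face_index can be -1 when the hit face has no original index,
        # but 'success' still tells us the ray hit something.
        print("hit at", location, "normal", normal, "face", face_index)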
This was getting very hard to follow:
- mixing input/output args.
- mixing arg order between functions.
- arg names (mode, snap_mode) renamed to (snap_to, snap_select)
We need to avoid passing a NULL string here, and we also need to pass
the correct suffix. We used to pass the view string directly, which is
probably not what we want.
Official Documentation:
http://www.blender.org/manual/render/workflows/multiview.html
Implemented Features
====================
Builtin Stereo Camera
* Convergence Mode
* Interocular Distance
* Convergence Distance
* Pivot Mode
Viewport
* Cameras
* Plane
* Volume
Compositor
* View Switch Node
* Image Node Multi-View OpenEXR support
Sequencer
* Image/Movie Strips 'Use Multiview'
UV/Image Editor
* Option to see Multi-View images in Stereo-3D or its individual images
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
I/O
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
Scene Render Views
* Ability to have an arbitrary number of views in the scene
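A hedged sketch of driving the stereo camera and render view settings above from Python; the exact RNA property names used here (use_multiview, views_format and the camera's stereo block) are assumptions, so check the API docs of your build:

    import bpy

    scene = bpy.context.scene

    # Enable multi-view rendering with the built-in stereo pair
    # (property names assumed from the Multi-View RNA).
    scene.render.use_multiview = True
    scene.render.views_format = 'STEREO_3D'

    # Built-in Stereo Camera settings live on the camera data-block.
    stereo = scene.camera.data.stereo
    stereo.convergence_mode = 'OFFAXIS'      # Convergence Mode
    stereo.interocular_distance = 0.065      # Interocular Distance
    stereo.convergence_distance = 1.95       # Convergence Distance
    stereo.pivot = 'CENTER'                  # Pivot Mode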
Missing Bits
============
First rule of Multi-View bug reports: if something is not working as it should *when Views is off*, this is a severe bug; do mention this in the report.
Second rule: if something works *when Views is off* but doesn't work (or crashes) *when Views is on*, this is an important bug. Do mention this in the report.
Everything else is likely small TODOs, and may wait until we are sure none of the above is happening.
Apart from that there are those known issues:
* Compositor Image Node works poorly for Multi-View OpenEXR
(this was working perfectly before the 'Use Multi-View' functionality)
* Selecting camera from Multi-View when looking from camera is problematic
* Animation Playback (ctrl+F11) doesn't support stereo formats
* Wrong filepath when trying to play back animated scene
* Viewport Rendering doesn't support Multi-View
* Overscan Rendering
* Fullscreen display modes need to warn the user
* Object copy should be aware of views suffix
Acknowledgments
===============
* Francesco Siddi for the help with the original feature specs and design
* Brecht Van Lommel for the original review of the code and design early on
* Blender Foundation for the Development Fund to support the project wrap up
Final patch reviewers:
* Antony Riakiotakis (psy-fi)
* Campbell Barton (ideasman42)
* Julian Eisel (Severin)
* Sergey Sharybin (nazgul)
* Thomas Dinges (dingto)
Code contributors of the original branch in github:
* Alexey Akishin
* Gabriel Caraballo
This can be considered a TODO, but it's not bad to support either. Also added
an RNA API to get the aspect ratio of the assigned UV image - it returns
aspect-corrected image dimensions, so it needs adjustments for UV editing.
Looks like the cleanest way to handle this is to not do bounding-box collision
for edit mode at all. But this is easy to enforce.
This reverts commit 7b5fe4f316.
Conflicts:
source/blender/editors/transform/transform_snap.c
slightly outside the mesh.
Reported by Thomas Beck on irc. Issue here is that the mesh bounding box
changes as we are transforming the vertices. Solution is to collide
against the initial bounding box. Unfortunately the snapping functions
are made in a way that a lot of code needed to be tweaked here, but the
change should be straightforward and harmless (famous last words, I
know).
Ideally we might want to even increase the size of the bounding box a
little (as seen in screen space) to allow snapping even in cases where
the cursor is slightly outside the bounding box, but since this is not so
straightforward to do for all cases, at least for me, leaving this as
a TODO.
The issue was caused by Cycles setting the scene frame, which updates the scene
for all layers (not just the visible ones); this confuses the depsgraph, so
objects which are needed as dependencies are not really evaluated.
Made it so setting the frame via scene.frame_set() checks whether updates
need to be flushed to invisible objects, and does so if needed.
Not an ideal solution, but it seems to be the safest at this point.
Summary:
Made object updates happen from multiple threads. It is a task-based
scheduling system which uses the current dependency graph for spawning new
tasks. This means threading happens at the object level, but the system is
flexible enough for higher granularity.
Technical details:
- Uses the task scheduler which was recently committed to trunk
(the one Brecht ported from Cycles).
- Added two utility functions to the dependency graph:
* DAG_threaded_update_begin, which is called to initialize the threaded
object update. It will also schedule the root DAG node to the queue,
hence starting the evaluation process.
Initialization will calculate how many parents need to be evaluated
before the current DAG node can be scheduled. This value is used by task
threads to quickly detect which nodes might be scheduled.
* DAG_threaded_update_handle_node_updated, which is called from the task
thread function when a node has been fully handled.
This function decreases num_pending_parents of the node's children and
schedules children whose pending-parent count reaches zero.
As might have become clear by now, a task thread receives DAG nodes and
decides which callback to call for each of them.
Currently only BKE_object_handle_update is called for object nodes.
In the future it'll call node->callback() from Ali's new DAG.
- This required adding some workarounds to the render pipeline.
Mainly to stop using get_object_dm() from modifiers' apply callback.
Such a call was only a workaround for a dependency graph glitch when
rendering a scene with, say, boolean modifiers before displaying
this scene.
Such a change moves the workaround from one place to another, so the
overall hackentropy remains the same.
- Added the paradigm of an EvaluationContext. Currently it's more like just a
more reliable replacement for G.is_rendering, which fails in some
circumstances.
The future idea of this context is to also store all the local data needed
for object evaluation, such as local time, Copy-on-Write data and so on.
There are two types of EvaluationContext:
* Context used for viewport updates and owned by Main. In the future
this context might easily be moved to Window or Screen to allow
per-window/per-screen local time.
* Context used by render engines to evaluate objects for render purposes.
The render engine is the owner of this context.
This context is passed to all object update routines.
Reviewers: brecht, campbellbarton
Reviewed By: brecht
CC: lukastoenne
Differential Revision: https://developer.blender.org/D94
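The implementation above is C code inside the dependency graph; the single-threaded Python sketch below only illustrates the scheduling idea (count pending parents per node, schedule a node once the count drops to zero) with made-up names, not the actual DAG API:

    from collections import deque

    def threaded_update_sketch(nodes, parents, children, handle_node):
        # "DAG_threaded_update_begin": count how many parents each node
        # still needs evaluated and schedule the roots (zero pending parents).
        num_pending_parents = {n: len(parents[n]) for n in nodes}
        queue = deque(n for n in nodes if num_pending_parents[n] == 0)

        while queue:
            node = queue.popleft()
            handle_node(node)  # stands in for BKE_object_handle_update()

            # "DAG_threaded_update_handle_node_updated": decrement the
            # counter of each child and schedule children that became ready.
            for child in children[node]:
                num_pending_parents[child] -= 1
                if num_pending_parents[child] == 0:
                    queue.append(child)

In the real code the ready nodes are pushed to the task scheduler instead of a local queue, so several of them can be handled by worker threads at once.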
Change Scene.frame_set so that it ensures the subframe is in the range [0,1[, as
Blender expects; otherwise some things like physics point cache lookups don't get
evaluated properly.
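For illustration, a caller stepping to a fractional frame would split it so the subframe stays inside [0,1[; the helper name below is made up:

    import math
    import bpy

    def frame_set_float(scene, float_frame):
        # Hypothetical helper: split a float frame into an integer frame
        # plus a subframe in [0,1[, which is what Scene.frame_set expects.
        frame = math.floor(float_frame)
        scene.frame_set(int(frame), subframe=float_frame - frame)

    frame_set_float(bpy.context.scene, 12.25)  # frame 12, subframe 0.25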
This codec is absolutely needed to generate DCPs using OpenDCP;
before, an external application was used to convert JP2 to J2K,
which slowed down export a lot.
The new codec is exposed in the image format settings panel and called
Codec. The default one is JP2, which creates files with a .jp2 extension;
the new one is called J2K, which creates files with a .j2c extension.
Other changes:
- Fixed an AVI JPEG warning which was being treated as an error here.
- Made it so the extension is detected from ImageFormatData instead
of the image file type, which makes it possible to have different
extensions for the same file type depending on its settings.
The IRIS format should still be changed (depending on the number of
channels it'll get a .bw, .rgb or .rgba extension).
- Default image format settings are now set from the image buffer
when re-saving it. This makes it possible to easily open a .j2c file
and save it using the J2K codec (without this change it would save as
.jp2 using the JP2 codec).
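A hedged example of selecting the new codec from Python, assuming the setting is exposed on ImageFormatSettings as jpeg2k_codec:

    import bpy

    settings = bpy.context.scene.render.image_settings
    settings.file_format = 'JPEG2000'

    # 'JP2' (default, writes .jp2) or the new 'J2K' (writes .j2c),
    # which is the one needed to build DCPs with OpenDCP.
    settings.jpeg2k_codec = 'J2K'   # property name is an assumption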
Added support for features such as:
- Ability to call RNA functions using C++ classes
For example RenderEngine.tag_update
- Property setters (for scalars and arrays)
A Qt/jQuery-like getter/setter style is used, meaning Class.prop() is a getter
and Class.prop(value) is a setter.
Still to come:
Collection functions are not currently registered inside a property,
meaning BlendData.meshes wouldn't be a subclass of BlendDataMeshes; as a
result you'll need to explicitly create BlendDataMeshes for now instead of
doing BlendData.meshes.remove().