There is one legitimate place in the code where memcpy was used as an
optimization trick. It was needed for an older version of GCC, but now
it should be re-evaluated to check whether the trick still helps.
In other places it was somewhat lazy programming to zero out all
object members that way. That is absolutely unsafe: the moment a less
trivial class is used as a member in such an object, things will break.
Other cases were using memcpy into an object which comes from an
external library. We don't control that object, we cannot guarantee it
will always be safe for such memory tricks, and debugging bugs caused
by such low-level access is far from fun. Ideally we need to use more
proper C++, but that has to be done with great care, including
benchmarks of each change. For now, do the annoying but simple cast to
void*, as sketched below.
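A minimal sketch of that cast, with a hypothetical stand-in type for the
real external objects; the cast to void* keeps the raw copy while
silencing diagnostics such as GCC's -Wclass-memaccess:

```cpp
#include <cstring>

/* Hypothetical stand-in for an external-library object we don't control. */
struct ExternalHandle {
  ExternalHandle() {}  /* Non-trivial constructor: memcpy straight into
                        * this type is what newer GCC warns about. */
  float data[16];
};

void raw_copy(ExternalHandle *dst, const ExternalHandle *src)
{
  /* Casting to void* makes the low-level intent explicit and silences the
   * warning; it does not make the copy any safer than before. */
  std::memcpy(static_cast<void *>(dst), static_cast<const void *>(src),
              sizeof(*dst));
}
```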
This saves a little memory and copying in the kernel by storing only a 4x3
matrix instead of a 4x4 matrix. We already did this in a few places, and
those no longer need to be special exceptions.
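For illustration, a sketch of the storage change (names are illustrative,
not the actual kernel types); the bottom row of an affine transform is
always (0, 0, 0, 1), so it never needs to be stored or copied:

```cpp
/* Affine transform stored as 3 rows of 4 floats: 12 floats instead of the
 * 16 of a full 4x4 matrix; the last row (0, 0, 0, 1) is implicit. */
struct Transform4x3 {
  float x[4], y[4], z[4];
};

/* Transform a point; the homogeneous w component is implicitly 1. */
void transform_point(const Transform4x3 *t, const float p[3], float out[3])
{
  out[0] = t->x[0] * p[0] + t->x[1] * p[1] + t->x[2] * p[2] + t->x[3];
  out[1] = t->y[0] * p[0] + t->y[1] * p[1] + t->y[2] * p[2] + t->y[3];
  out[2] = t->z[0] * p[0] + t->z[1] * p[1] + t->z[2] * p[2] + t->z[3];
}
```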
The offscreen dicing scale helps to significantly reduce memory usage
by reducing the dicing rate of objects the further they are outside of
the camera view.
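A rough sketch of the idea; the falloff function and names here are
assumptions for illustration, not the actual implementation:

```cpp
/* Objects in view keep full tessellation density; outside the view the
 * density falls off with the distance from the view frustum, controlled
 * by the offscreen dicing scale. */
float offscreen_dicing_density(float base_density, float frustum_distance,
                               float offscreen_scale)
{
  if (frustum_distance <= 0.0f)
    return base_density; /* (partly) visible: full quality */
  return base_density / (1.0f + frustum_distance * offscreen_scale);
}
```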
The dicing camera can now be specified, to keep the geometry fixed and
avoid crawling artifacts in animation. It is also useful for debugging,
to see the tessellation from a different camera location.
Differential Revision: https://developer.blender.org/D2891
The idea is to make include statements more explicit about where the
file is coming from, additionally reducing the chance of a wrong header
being picked up.
For example, it was not obvious whether bvh.h referred to the builder
or to traversal, whether node.h is a generic graph node or a shader
node, and cases like that.
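For illustration, the kind of change this implies (the directory prefixes
shown are examples of the proposed style):

```cpp
/* Before: ambiguous which header is meant. */
#include "bvh.h"
#include "node.h"

/* After: the directory prefix makes the origin explicit. */
#include "bvh/bvh.h"
#include "graph/node.h"
```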
Surely this might look obvious to the active developers, but after some
time of not touching the code it becomes less obvious where a file is
coming from.
This was briefly mentioned in T50824 and it seems @brecht is fine with
such explicitness, but we need to agree with all active developers before
committing this.
Please note that this patch is lacking changes related to GPU/OpenCL
support. This will be solved if/when we all agree this is a good idea to
move forward.
Reviewers: brecht, lukasstockner97, maiself, nirved, dingto, juicyfruit, swerner
Reviewed By: lukasstockner97, maiself, nirved, dingto
Subscribers: brecht
Differential Revision: https://developer.blender.org/D2586
The idea of pole merge is to fade the interocular distance to zero past
a certain altitude, as the view direction gets closer to a pole. This
should prevent annoyances when looking up into the sky or down to the
ground.
Works for both panorama and perspective cameras when Spherical Stereo
is enabled.
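A minimal sketch of the fade, assuming illustrative parameter names and a
smooth cosine falloff between two altitude thresholds:

```cpp
#include <math.h>

/* Fade the interocular distance to zero between angle_from and angle_to
 * (altitude in radians); past angle_to the stereo pair collapses to mono. */
float pole_merge_interocular(float interocular, float altitude,
                             float angle_from, float angle_to)
{
  const float a = fabsf(altitude);
  if (a <= angle_from)
    return interocular; /* full stereo below the merge zone */
  if (a >= angle_to)
    return 0.0f; /* mono at the pole */
  const float fac = (a - angle_from) / (angle_to - angle_from);
  return interocular * 0.5f * (1.0f + cosf(fac * 3.14159265f));
}
```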
Reviewers: dfelinto, brecht
Reviewed By: brecht
Subscribers: sebastian_k
Differential Revision: https://developer.blender.org/D1998
NOTE: this is only the first of many patches towards completing the
subdivision and displacement system in Cycles. These patches will be
reviewed and committed one by one over the coming weeks.
Reviewed By: brecht, sergey
Differential Revision: https://developer.blender.org/D1909
This is a new option for panorama cameras to render stereo that can be
used in virtual reality devices.
The option is available under the camera panel when Multi-View is enabled
(Views option in the Render Layers panel).
Known limitations:
------------------
* Parallel convergence is not supported (you need to set a convergence distance really high to simulate this effect).
* Pivot was not supposed to affect the render but it does; this has to be looked at, but for now set it to CENTER.
* Derivatives in the perspective camera need to be pre-computed, or we should get rid of kcam->dx/dy (Sergey's words, I don't fully grasp the implications here).
* This works in perspective mode and in panorama mode. However, to fully benefit from this effect in perspective mode you need to render a cube map (there is an addon for this, developed separately; perhaps we could include it in master).
* We have no support for "neck distance" at the moment. This is supposed to help with objects at short distances.
* We have no support to rotate the "Up Axis" of the stereo plane. Meaning, we hardcode 0,0,1 as UP and create the stereo pair relative to that (although we could take the camera's local UP when rendering panoramas, this wouldn't work for perspective cameras); see the eye-placement sketch after this list.
* We have no support for interocular distance attenuation based on the proximity of the poles (which helps to reduce the pole rotation effect/artifact).
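A sketch of the eye placement under the hardcoded up axis mentioned in
the list above; all names are illustrative:

```cpp
#include <math.h>

/* Shift each eye from the camera center along the side vector, which is
 * perpendicular to both the view direction and the hardcoded (0,0,1) up. */
void stereo_eye_position(const float center[3], const float view_dir[3],
                         float interocular, bool left_eye, float eye[3])
{
  const float up[3] = {0.0f, 0.0f, 1.0f}; /* hardcoded, as noted above */
  float side[3] = {view_dir[1] * up[2] - view_dir[2] * up[1],
                   view_dir[2] * up[0] - view_dir[0] * up[2],
                   view_dir[0] * up[1] - view_dir[1] * up[0]};
  const float len = sqrtf(side[0] * side[0] + side[1] * side[1] +
                          side[2] * side[2]);
  const float offset = (left_eye ? -0.5f : 0.5f) * interocular;
  for (int i = 0; i < 3; i++)
    eye[i] = center[i] + side[i] / len * offset;
}
```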
THIS NEEDS DOCS - both in the 2.78 release log and the Blender manual.
Meanwhile you can read about it here: http://code.blender.org/2015/03/1451
This patch specifically dates from March 2015, as you can see in the code.blender.org post. Many thanks to all the reviewers, testers and minor sponsors who helped me maintain spherical-stereo for 1 year.
All that said, have fun with this. This feature was what got me started with Multi-View development (at the time what I was looking for was Fulldome stereo support, but the implementation is the same). In order to get this into Blender I had to aim it at a less specific use case; thus Multi-View started. (This was December 2012, during SIGGRAPH Asia and a chat I had with Paul Bourke during the conference.) I don't have the original patch anymore, but you can find a rebased version of it from March 2013, right before I started with the Multi-View project: https://developer.blender.org/P332
Reviewers: sergey, dingto
Subscribers: #cycles
Differential Revision: https://developer.blender.org/D1223
Basically the idea is to make the code robust against enum options being
extended in the future, by falling back to a known safe default setting
when RNA is set to something unknown.
While this approach solves issues similar to T47377, it wouldn't really
help when/if any of the RNA values ever gets deprecated and removed.
There'll be no simple solution to that apart from defining an explicit
mapping from RNA values to Cycles ones.
Another part which isn't so great is that we now have to have enum
guards and give explicit values to the enum items, but we can perhaps
live with that; see the sketch below.
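A sketch of what the guard plus fallback looks like; the enum values and
the chosen default are illustrative:

```cpp
enum PanoramaType {
  PANORAMA_EQUIRECTANGULAR = 0,
  PANORAMA_MIRRORBALL = 1,
  PANORAMA_FISHEYE_EQUIDISTANT = 2,
  PANORAMA_NUM_TYPES, /* guard: one past the last valid value */
};

/* Any RNA value outside the known range falls back to a safe default, so
 * enum options added in future versions can't map to garbage. */
static PanoramaType panorama_type_from_rna(int rna_value)
{
  if (rna_value < 0 || rna_value >= PANORAMA_NUM_TYPES)
    return PANORAMA_EQUIRECTANGULAR;
  return (PanoramaType)rna_value;
}
```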
Reviewers: dingto, juicyfruit, lukasstockner97, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D1785
This way we avoid passing structures, which could be up to a few
hundred bytes, by value to the utility functions.
Ideally we would also add a `const` qualifier to the majority of the
calls, but C++ RNA does not allow us to do that, because it does not
know whether a function modifies the contents or not.
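An illustration of the signature change, with a hypothetical stand-in
struct for the C++ RNA types:

```cpp
/* Stand-in for C++ RNA types such as BL::Object. */
struct BigRNAStruct {
  char payload[512];
};

/* Before: the whole structure is copied on every call. */
void sync_something_by_value(BigRNAStruct b_ob);

/* After: only a reference is passed. A const reference would be better
 * still, but C++ RNA getters are not const-qualified. */
void sync_something(BigRNAStruct &b_ob);
```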
This is an attempt to emulate real CMOS cameras, which read the sensor
scanline by scanline; different scanlines are therefore sampled at
different moments in time, which causes the so-called rolling shutter
effect. This effect will, for example, make straight vertical lines
appear curved when doing a horizontal camera pan.
This is controlled by the Shutter Type option in the Motion Blur panel.
Additionally, since scanline sampling is not instantaneous, it is
possible to have motion blur on top of rolling shutter.
This is controlled by the Rolling Shutter Time slider, which balances
between a pure rolling shutter effect and a pure motion blur effect; see
the sketch below.
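A simplified sketch of the blend, not the actual kernel code: each
scanline has its own readout moment, and the slider mixes between that
moment and a regular full-interval motion blur time sample:

```cpp
/* shutter_sample in [0, 1) is a regular motion blur time sample;
 * rolling_shutter_time = 0 gives pure rolling shutter, 1 pure motion blur. */
float rolling_shutter_ray_time(float shutter_sample, int y, int height,
                               float rolling_shutter_time)
{
  const float scanline_time = 1.0f - (float)y / (float)height;
  return scanline_time * (1.0f - rolling_shutter_time) +
         shutter_sample * rolling_shutter_time;
}
```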
Reviewers: brecht, juicyfruit, dingto, keir
Differential Revision: https://developer.blender.org/D1624
The shutter curve can now be controlled using the curve mapping widget
in the Motion Blur panel in the Render buttons. Only mappings over the
0..1 range on the X axis are allowed; Y values will automatically be
normalized to fill the 0..1 space as well. A Y value of 0 means a fully
closed shutter, a Y value of 1 a fully opened shutter.
The default mapping is set to the old behavior, where the shutter opens
and closes instantly.
This shutter mapping curve could easily be used by any other render
engine by accessing scene.render.motion_blur_shutter_curve.
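As an illustration of how an engine could use the curve, a sketch that
importance-samples a shutter time from a tabulated version of it (the
table itself is assumed to come from evaluating the curve mapping):

```cpp
#include <vector>

/* curve[i] is the shutter openness at time i / (n - 1); build a CDF and
 * invert it so time samples concentrate where the shutter is most open. */
float sample_shutter_time(const std::vector<float> &curve, float u)
{
  const int n = (int)curve.size();
  std::vector<float> cdf(n + 1, 0.0f);
  for (int i = 0; i < n; i++)
    cdf[i + 1] = cdf[i] + curve[i];
  const float target = u * cdf[n];
  int i = 0;
  while (i + 1 < n && cdf[i + 1] < target)
    i++;
  const float d = cdf[i + 1] - cdf[i];
  const float frac = (d > 0.0f) ? (target - cdf[i]) / d : 0.5f;
  return (i + frac) / (float)n;
}
```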
Reviewers: #cycles, brecht, juicyfruit, campbellbarton
Differential Revision: https://developer.blender.org/D1585
This adds an option to control at what time relative to the current frame
the shutter is fully opened. Supported options are:
- Shutter is starting to open at the current frame
- Shutter is fully opened at the current frame
- Shutter is fully closed at the current frame
A custom shutter time offset would be possible, same as a custom curve
for shutter openness, but those are considered nice things to have rather
than something crucial. The three supported placements are sketched below.
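A sketch of the three placements (names illustrative, times in frames):

```cpp
enum MotionPosition { MOTION_START, MOTION_CENTER, MOTION_END };

/* Compute the [open, close] shutter interval around the current frame. */
void shutter_interval(MotionPosition pos, float frame, float shutter,
                      float *open_time, float *close_time)
{
  switch (pos) {
    case MOTION_START: /* starts to open at the current frame */
      *open_time = frame;
      *close_time = frame + shutter;
      break;
    case MOTION_CENTER: /* fully opened at the current frame */
      *open_time = frame - 0.5f * shutter;
      *close_time = frame + 0.5f * shutter;
      break;
    case MOTION_END: /* fully closed at the current frame */
      *open_time = frame - shutter;
      *close_time = frame;
      break;
  }
}
```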
Reviewers: juicyfruit, dingto
Subscribers: venomgfx, hjalti
Differential Revision: https://developer.blender.org/D1380
Works quite similarly to camera motion blur, and the majority of the
changes are related to just passing extra arguments to the sync()
functions.
Couple of things still to look into:
- Motion pass will not include motion caused by the zoom.
- Only perspective cameras are supported currently.
- Motion is being interpolated on projected coordinates, which might give
  different results from constructing the projection matrix from an
  interpolated field of view (see the sketch after this list).
This could be good enough for us, but we need to consider improving this
at some point.
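A minimal sketch of the interpolation from the last point: element-wise
blending of the two (already projected) camera matrices at the sampled
time, which is what can differ from rebuilding the projection from an
interpolated field of view:

```cpp
/* t = 0 is the shutter-open matrix, t = 1 the shutter-close matrix. */
void interpolate_matrix(const float a[16], const float b[16], float t,
                        float out[16])
{
  for (int i = 0; i < 16; i++)
    out[i] = (1.0f - t) * a[i] + t * b[i];
}
```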
Reviewers: juicyfruit, dingto
Reviewed By: dingto
Differential Revision: https://developer.blender.org/D1383
This was a mistake in the original code from D1079.
With the current way the mirror ball direction works, the camera should
look in the negative Y direction. Corrected this in the camera matrix
synchronization, so no extra calculations are needed at render time.
That's a bit annoying, but we'd better port it to the release branch,
otherwise we'll end up with files created with the wrong camera mapping
after the 2.75 release.
Official Documentation:
http://www.blender.org/manual/render/workflows/multiview.html
Implemented Features
====================
Builtin Stereo Camera
* Convergence Mode
* Interocular Distance
* Convergence Distance
* Pivot Mode
Viewport
* Cameras
* Plane
* Volume
Compositor
* View Switch Node
* Image Node Multi-View OpenEXR support
Sequencer
* Image/Movie Strips 'Use Multiview'
UV/Image Editor
* Option to see Multi-View images in Stereo-3D or its individual images
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
I/O
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
Scene Render Views
* Ability to have an arbitrary number of views in the scene
Missing Bits
============
First rule of Multi-View bug reports: if something is not working as it should *when Views is off*, this is a severe bug; do mention this in the report.
Second rule: if something works *when Views is off* but doesn't (or crashes) *when Views is on*, this is an important bug. Do mention this in the report.
Everything else is likely small todos, and may wait until we are sure none of the above is happening.
Apart from that there are those known issues:
* Compositor Image Node works poorly with Multi-View OpenEXR
(this was working perfectly before the 'Use Multi-View' functionality)
* Selecting camera from Multi-View when looking from camera is problematic
* Animation Playback (ctrl+F11) doesn't support stereo formats
* Wrong filepath when trying to play back animated scene
* Viewport Rendering doesn't support Multi-View
* Overscan Rendering
* Fullscreen display modes need to warn the user
* Object copy should be aware of views suffix
Acknowledgments
===============
* Francesco Siddi for the help with the original feature specs and design
* Brecht Van Lommel for the original review of the code and design early on
* Blender Foundation for the Development Fund to support the project wrap up
Final patch reviewers:
* Antony Riakiotakis (psy-fi)
* Campbell Barton (ideasman42)
* Julian Eisel (Severin)
* Sergey Sharybin (nazgul)
* Thomas Dinges (dingto)
Code contributors of the original branch in github:
* Alexey Akishin
* Gabriel Caraballo
This inconsistency drove me totally crazy; it's really confusing,
especially when you work on both the Cycles and Blender sides.
Shouldn't cause a merge PITA, since it's whitespace changes only; Git
should be able to merge it nicely.
The issue was caused by the whole viewplane being used for the mapping
calculation, which would certainly lead to differences between the final
camera render and a viewport render from the camera view.
This commit makes window texture mapping match the final render when
viewing from the camera in a viewport render.
It's not totally clear what the right thing to do is when the viewport is
not in camera view mode, so that part is left unchanged.
This patch adds the option to set minimum/maximum latitude/longitude values for
the equirectangular panorama camera in Cycles, as discussed in T34400.
The separate functions in kernel_projection.h are needed because the regular
ones are also used as helper functions for environment map sampling.
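A sketch of the range-aware mapping, with illustrative names, ranges in
radians and an assumed axis convention:

```cpp
#include <math.h>

/* Map normalized image coordinates (u, v) in [0, 1] to a direction within
 * the configured longitude/latitude range; the full-range version stays
 * untouched for environment map sampling. */
void equirectangular_range_to_direction(float u, float v,
                                        float lon_min, float lon_max,
                                        float lat_min, float lat_max,
                                        float dir[3])
{
  const float lon = lon_min + u * (lon_max - lon_min);
  const float lat = lat_min + v * (lat_max - lat_min);
  dir[0] = cosf(lat) * cosf(lon);
  dir[1] = cosf(lat) * sinf(lon);
  dir[2] = sinf(lat);
}
```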
Reviewers: #cycles, sergey
Reviewed By: #cycles, sergey
Subscribers: dingto, sergey, brecht
Differential Revision: https://developer.blender.org/D960
This means it's no longer needed to enable the experimental feature set
in order to have proper camera-in-volume support. It also means that if
something goes wrong, or if there's a speed regression in cases where the
camera is obviously not in the volume, these issues are to be reported
and handled in the regular manner.
Happy blending!
Basically the title says it all: volume stack initialization is now
aware that the camera might be inside a volume. This gives quite
noticeable render time regressions in cases where the camera is in a
volume (didn't measure them yet), because it requires quite a few ray
casts per camera ray in order to check which objects we're inside. Not
quite sure if this can be optimized.
But the good thing is that we can do quite a good job of detecting
whether the camera is outside of all volumes, and in that case there
should be no time penalty at all (apart from some extra checks during
the sync stage).
For now we're only doing rather simple AABB checks between the viewplane
and the volume objects. This could give some false positives, but it
should be a good starting point; see the sketch below.
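A sketch of the conservative test; a false positive only costs speed,
never correctness:

```cpp
struct AABB {
  float min[3], max[3];
};

/* If the viewplane bounds overlap a volume object's bounds, volume stack
 * initialization has to ray cast; otherwise the camera is known to be
 * outside that volume and there is no per-ray cost. */
bool aabb_overlap(const AABB &a, const AABB &b)
{
  for (int i = 0; i < 3; i++)
    if (a.max[i] < b.min[i] || b.max[i] < a.min[i])
      return false;
  return true;
}
```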
Panoramic cameras need a mention here: for them the only check is
whether there are volumes in the scene at all, which would lead to speed
regressions even if the camera is outside of the volumes. A proper check
for such cameras still needs to be figured out.
There are still quite a few TODOs in the code, but the patch is good
enough to start playing around with, to check whether there are some
obvious mistakes somewhere.
Currently the feature is only available in the Experimental feature set;
we need to solve some of the TODOs and look into making things faster
before considering the feature ready for the official feature set. This
would still likely happen in the current release cycle.
Reviewers: brecht, juicyfruit, dingto
Differential Revision: https://developer.blender.org/D794
Thanks to Aldo Zang for the help with the fix for the panorama/fisheye
depth of field calculation and the overall math.
Reviewed By: sergey, dingto
Subscribers: juicyfruit, gregzaal, #cycles, dingto, matray
Differential Revision: https://developer.blender.org/D753
This now supports multiple steps and subframe sampling of motion.
There is one difference for object and camera transform motion blur. It still
only supports two steps there, but the transforms are now sampled at subframe
times instead of the previous and next frame and then interpolated/extrapolated.
This will give different render results in some cases but it's more accurate.
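A sketch of the step timing under assumed names: with N steps, motion is
sampled at evenly spaced subframe times across the shutter interval rather
than at the previous and next frame:

```cpp
/* Returns the sampling time (in frames) for one motion step. */
float motion_step_time(int step, int num_steps, float shutter_open,
                       float shutter_close)
{
  const float t = (num_steps > 1) ? (float)step / (float)(num_steps - 1)
                                  : 0.5f;
  return shutter_open + t * (shutter_close - shutter_open);
}
```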
Part of the code is from the summer of code project by Gavin Howard, but it has
been significantly rewritten and extended.