This was a mistake in the original code from D1079.
With the way the mirror ball direction mapping currently works, the camera
should look in the negative Y direction. Corrected this in the camera matrix
synchronization, so no extra calculations are needed at render time.
That's a bit annoying, but we'd better port it to the release branch,
otherwise we'll end up with files created with the wrong camera mapping after
the 2.75 release.
Official Documentation:
http://www.blender.org/manual/render/workflows/multiview.html
Implemented Features
====================
Builtin Stereo Camera (see the Python sketch after this feature list)
* Convergence Mode
* Interocular Distance
* Convergence Distance
* Pivot Mode
Viewport
* Cameras
* Plane
* Volume
Compositor
* View Switch Node
* Image Node Multi-View OpenEXR support
Sequencer
* Image/Movie Strips 'Use Multiview'
UV/Image Editor
* Option to see Multi-View images in Stereo-3D or its individual images
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
I/O
* Save/Open Multi-View (OpenEXR, Stereo3D, individual views) images
Scene Render Views
* Ability to have an arbitrary number of views in the scene
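As a rough illustration of the built-in stereo camera options from Python
(property names are assumed from the settings listed above and may differ):

    import bpy

    scene = bpy.context.scene
    cam = scene.camera.data

    # Enable Multi-View with the built-in left/right stereo pair.
    scene.render.use_multiview = True
    scene.render.views_format = 'STEREO_3D'

    # Built-in stereo camera options.
    cam.stereo.convergence_mode = 'OFFAXIS'      # or 'PARALLEL', 'TOE'
    cam.stereo.interocular_distance = 0.065      # in scene units
    cam.stereo.convergence_distance = 1.95
    cam.stereo.pivot = 'CENTER'                  # or 'LEFT', 'RIGHT'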
Missing Bits
============
First rule of Multi-View bug reporting: if something is not working as it should *when Views is off*, this is a severe bug; do mention this in the report.
Second rule: if something works *when Views is off* but doesn't work (or crashes) *when Views is on*, this is an important bug. Do mention this in the report.
Everything else is likely a small todo and may wait until we are sure none of the above is happening.
Apart from that, these are the known issues:
* Compositor Image Node works poorly with Multi-View OpenEXR
(this was working perfectly before the 'Use Multi-View' functionality)
* Selecting a camera from Multi-View while looking through the camera is problematic
* Animation Playback (Ctrl+F11) doesn't support stereo formats
* Wrong filepath when trying to play back an animated scene
* Viewport Rendering doesn't support Multi-View
* Overscan Rendering
* Fullscreen display modes need to warn the user
* Object copy should be aware of views suffix
Acknowledgments
===============
* Francesco Siddi for the help with the original feature specs and design
* Brecht Van Lommel for the original review of the code and design early on
* Blender Foundation for the Development Fund to support the project wrap up
Final patch reviewers:
* Antony Riakiotakis (psy-fi)
* Campbell Barton (ideasman42)
* Julian Eisel (Severin)
* Sergey Sharybin (nazgul)
* Thomas Dinges (dingto)
Code contributors to the original branch on GitHub:
* Alexey Akishin
* Gabriel Caraballo
This inconsistency drove me totally crazy; it's really confusing,
especially when you work on both the Cycles and Blender sides.
Shouldn't cause a merge PITA; it's whitespace changes only, so Git should
be able to merge it nicely.
The issue was caused by the whole viewplane being used for the mapping
calculation, which would certainly lead to differences between the final
camera render and a viewport render from the camera view.
This commit makes window texture mapping in viewport rendering match the
final render when viewing from the camera.
It's not totally clear what the right thing to do is when the viewport is not
in camera view, so that part is left unchanged.
This patch adds the option to set minimum/maximum latitude/longitude values for
the equirectangular panorama camera in Cycles, as discussed in T34400.
The separate functions in kernel_projection.h are needed because the regular
ones are also used as helper functions for environment map sampling.
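A minimal sketch of how this could be set from Python, assuming the new
options are exposed as min/max latitude/longitude properties on the Cycles
camera settings (values in radians):

    import bpy
    from math import radians

    cam = bpy.context.scene.camera.data
    cam.type = 'PANO'
    cam.cycles.panorama_type = 'EQUIRECTANGULAR'

    # Limit the panorama to the front hemisphere, +/-45 degrees latitude.
    cam.cycles.latitude_min = radians(-45.0)
    cam.cycles.latitude_max = radians(45.0)
    cam.cycles.longitude_min = radians(-90.0)
    cam.cycles.longitude_max = radians(90.0)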
Reviewers: #cycles, sergey
Reviewed By: #cycles, sergey
Subscribers: dingto, sergey, brecht
Differential Revision: https://developer.blender.org/D960
This means it's no longer necessary to enable the Experimental feature set in
order to have proper camera-in-volume support. It also means that if something
goes wrong, or if there's a speed regression in cases where the camera is
obviously not in the volume, these issues are to be reported and handled in
the regular manner.
Happy blending!
Basically the title says it all: volume stack initialization is now aware that
the camera might be inside a volume. This gives quite noticeable render time
regressions when the camera is in the volume (not measured yet), because it
requires quite a few ray-casts per camera ray in order to check which objects
we're inside. Not quite sure whether this can be optimized.
The good thing is that we can do quite a good job of detecting whether the
camera is outside of all volumes, in which case there should be no time
penalty at all (apart from some extra checks during the sync stage).
For now we're only doing rather simple AABB checks between the viewplane and
the volume objects. This could give some false positives, but it should be a
good starting point.
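The check itself is just an axis-aligned bounding box overlap test; a minimal
sketch (not the actual Cycles code):

    def aabb_overlap(min_a, max_a, min_b, max_b):
        # Boxes are given as (x, y, z) min/max corners; they overlap when
        # the intervals overlap on every axis.
        return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i]
                   for i in range(3))

Only when the viewplane box overlaps a volume object's box do the per-ray
volume stack ray-casts need to happen; otherwise the camera is known to be
outside and no penalty is paid.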
Panoramic cameras need to be mentioned here: for them we only check whether
there are volumes in the scene at all, which leads to speed regressions even
if the camera is outside of the volumes. A proper check for such cameras still
needs to be figured out.
There are still quite a few TODOs in the code, but the patch is good enough
to start playing around with and to check whether there are obvious mistakes
somewhere.
Currently the feature is only available in the Experimental feature set; some
of the TODOs need to be solved, and things need to be made faster, before the
feature can be considered ready for the official feature set. This will still
likely happen in the current release cycle.
Reviewers: brecht, juicyfruit, dingto
Differential Revision: https://developer.blender.org/D794
Thanks to Aldo Zang for the help with the fix for the panorama/fisheye
depth of field calculation and the overall math.
Reviewed By: sergey, dingto
Subscribers: juicyfruit, gregzaal, #cycles, dingto, matray
Differential Revision: https://developer.blender.org/D753
This now supports multiple steps and subframe sampling of motion.
There is one difference for object and camera transform motion blur. It still
only supports two steps there, but the transforms are now sampled at subframe
times instead of the previous and next frame and then interpolated/extrapolated.
This will give different render results in some cases but it's more accurate.
Part of the code is from the Summer of Code project by Gavin Howard, but it has
been significantly rewritten and extended.
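For illustration, a hedged Python sketch of what subframe transform sampling
looks like from the exporter side (sampling the world matrix at evenly spaced
subframe times within the shutter interval; not the actual exporter code):

    import bpy
    from math import floor

    def sample_motion_matrices(obj, frame, steps=2, shutter=0.5):
        # Sample obj's world matrix at 'steps' evenly spaced times
        # centered on 'frame', spanning 'shutter' frames.
        scene = bpy.context.scene
        matrices = []
        for i in range(steps):
            t = frame - 0.5 * shutter + shutter * i / (steps - 1)
            scene.frame_set(floor(t), subframe=t - floor(t))
            matrices.append(obj.matrix_world.copy())
        return matrices

The sampled matrices can then be interpolated (or extrapolated) at render
time instead of relying on the previous and next frame.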
Before this, only the active scene would be rendered with a border.
When do_render_fields_blur_3d() is finished, it modifies the render's
display rect so it corresponds to the bordered render result placed on a
black background. The actual border is stored nowhere, which makes
re-calculating the disprect from the source the only way to handle all
other renders used in the compositor. Not such a big deal actually.
Also needed to modify Cycles a bit, because before this patch it used the
border settings from the scene being rendered. Render data is now passed to
external engines, using a property inside the RenderEngine structure. Not
the best design ever for passing render data, but it prevents API breakage.
External engines can now access engine.render to get the active rendering
settings.
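A rough sketch of how an external engine might read the border from the
render data it is handed (the engine.render property mentioned above; the
class itself is hypothetical, and the disprect math is the usual
normalized-border-to-pixels conversion):

    import bpy

    class BorderAwareEngine(bpy.types.RenderEngine):
        bl_idname = "BORDER_AWARE_SKETCH"
        bl_label = "Border Aware (sketch)"

        def render(self, scene):
            # Fall back to scene.render if the engine property is absent.
            r = getattr(self, "render", scene.render)
            size_x = int(r.resolution_x * r.resolution_percentage / 100)
            size_y = int(r.resolution_y * r.resolution_percentage / 100)
            if r.use_border:
                # Convert the normalized border into a pixel disprect.
                xmin = int(r.border_min_x * size_x)
                xmax = int(r.border_max_x * size_x)
                ymin = int(r.border_min_y * size_y)
                ymax = int(r.border_max_y * size_y)
                print("border disprect:", xmin, ymin, xmax, ymax)

    bpy.utils.register_class(BorderAwareEngine)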
Reviewed by Brecht, thanks!
One no longer needs to match the sensor dimensions to the render dimensions manually.
IMPORTANT NOTE: if you were using AUTO before with a mismatching sensor aspect ratio (compared to the render dimensions),
this will change your render! We could do_version this, but apart from the Tube project I don't know if anyone else
is using this yet (partly due to this bug and the only recently fixed 3D view preview aspect ratio).
This should help more artists take advantage of this fantastic Blender feature.
It still helps to know the parameters of known cameras/lenses, though.
For example:
A Nikon DX2S with a 10.5mm fisheye can be set with:
Render resolution: 4288 x 2848
Sensor: 23.7 x 15.70 (15.70 can be omitted if AUTO is used as the fit method)
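In Python terms the AUTO fit boils down to something like this (property
names assumed, sketch only):

    import bpy

    scene = bpy.context.scene
    cam = scene.camera.data

    scene.render.resolution_x = 4288
    scene.render.resolution_y = 2848
    cam.sensor_fit = 'AUTO'
    cam.sensor_width = 23.7
    # With AUTO, the 15.70 sensor height no longer has to be set by hand;
    # it follows from the render aspect ratio.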
Note: some cameras record at different sizes depending on the recording mode.
For example, a Red Scarlet in 5K (at 12 fps) can record a full circular fisheye with a Sigma 4.5 lens.
The same camera in the 30 fps recording mode records 4K as a cropped circular image.
So it's not only the resolution that changes, but the actual sensor area being used.
Just keep in mind: the more information you have about the camera/lens you want to emulate, the better.
Bug found at, and patch written as a follow-up to, BlenderPRO 2012; patch reviewed by Brecht Van Lommel.
Patch by Yasuhiro Fujii, thanks!
The original issue was that in cases where the viewport's lens differs from
the default value, switching between perspective and orthographic projections
changes the viewplane a lot, which is disorienting and annoying.
This makes it possible to do a border render inside a viewport even
when not looking through the camera.
The render border can be defined with the Ctrl-B shortcut (works for both
the camera render border and the viewport render border).
The camera render border can still be defined using Shift-B (so no
muscle memory is broken). Currently a special operator flag is used
for this; otherwise you'd need either two operators with different
poll callbacks, or it could conflict with border zoom.
Border render of a viewport can be enabled/disabled in the View
panel using the "Render Border" option.
still look a bit strange since the viewport can't actually render such panorama views,
so the OpenGL-drawn scene behind the border render will not match up.
For sample images see:
http://www.dalaifelinto.com/?p=399 (equisolid)
http://www.dalaifelinto.com/?p=389 (equidistant)
The 'use_panorama' option is now part of a new Camera type: 'Panorama'.
Created two other panorama cameras:
- Equisolid: most fisheye lenses on the market follow this model (e.g. Nikon,
  Canon, ...). This works like a real lens up to an extent. The final result
  also takes the sensor dimensions into account (see the Python sketch after
  this list).
.:. to simulate a Nikon DX2S with a 10.5mm lens do:
sensor: 23.7 x 15.7
fisheye lens: 10.5
fisheye fov: 180
render dimensions: 4288 x 2848
- Equidistant: this is not a real lens model, although the old equidistant
  lens simulated it. The result is always a circular fisheye that takes up the
  whole sensor (in other words, it doesn't take the sensor into consideration).
This is perfect for fulldomes ;)
For the UI we have 10 to 360 as soft values and 10 to 3600 as hard values (because we can).
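A hedged Python sketch of the Nikon DX2S example above, assuming the Cycles
camera exposes the fisheye settings as shown:

    import bpy
    from math import radians

    scene = bpy.context.scene
    cam = scene.camera.data

    cam.type = 'PANO'
    cam.cycles.panorama_type = 'FISHEYE_EQUISOLID'  # or 'FISHEYE_EQUIDISTANT'
    cam.cycles.fisheye_lens = 10.5                  # mm
    cam.cycles.fisheye_fov = radians(180.0)
    cam.sensor_width = 23.7
    cam.sensor_height = 15.7
    scene.render.resolution_x = 4288
    scene.render.resolution_y = 2848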
Reference material:
http://www.hdrlabs.com/tutorials/downloads_files/HDRI%20for%20CGI.pdf
http://www.bobatkins.com/photography/technical/field_of_view.html
Note, this is not a real simulation of the light path through the lens.
The ideal solution would be this:
https://graphics.stanford.edu/wikis/cs348b-11/Assignment3
http://www.graphics.stanford.edu/papers/camera/
Thanks Brecht for the fix, suggestions and code review.
Kudos for the dome community for keeping me stimulated on the topic since 2009 ;)
Patch partly implemented during lab time at VisGraf, IMPA - Rio de Janeiro.
Most of the changes are related to adding support for motion data throughout
the code. There's some code for actual camera/object motion blur raytracing
but it's unfinished (it badly slows down the raytracing kernel even when the
option is turned off), so that code is still disabled.
Motion vector export from Blender tries to avoid computing derived meshes
when the mesh does not have a deforming modifier, and it also won't store
motion vectors for every vertex if only the object or camera is moving.
environment map, by enabling the Panorama option in the camera.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Camera#Panorama
The focal length and sensor settings are not used; the UI could still be
tweaked to communicate this. Panorama should probably also become a proper
camera type like perspective or ortho.
* Passes renamed to samples
* Camera lens radius renamed to aperture size/blades/rotation
* Glass and fresnel nodes input is now index of refraction
* Glossy and velvet fresnel socket removed
* Mix/add closure node renamed to mix/add shader node
* Blend weight node added for shader mixing weights
There is some version patching code for reading existing files, but it's not
perfect, so shaders may work a bit differently.