Input constants are to be connected before removing proxies, otherwise node
groups might give a totally different result.
This is a regression fix and is to be put into the final release.
The functionality got lost when the new compositor system landed, and it
wasn't always clear what was causing the hiccups. Now it's nicely
reported in the stats line.
It was actually an old issue with a wrong conversion happening for muted
nodes, which wasn't visible before the memory optimization commit.
This is to be backported to the final release.
This commit makes some preliminary fixes and tweaks aimed at making Blender
compilable with the C++11 feature set. This includes:
- Build system attribute to enable the C++11 feature set.
It defaults to OFF for sure, but is easy to enable to have a play around with
it and make sure all the stuff compiles before we go C++11 for real.
- Changes in Compositor to use non-named cl_int structure fields.
This is because __STRICT_ANSI__ is defined by default by GCC and OpenCL
does not use named fields in this case.
- Changes to TYPE_CHECK() related to the lack of typeof() in C++11.
This uses decltype() instead, with some trickery to make sure the returned
type is not a reference (see the sketch after this list).
- Changes for auto_ptr in Freestyle.
This actually conditionally switches between auto_ptr and unique_ptr, since
auto_ptr is deprecated in C++11. Seems not to be strictly needed, but it's
still nice to be ready for such an update anyway.
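
A minimal sketch of the decltype() trick mentioned above (macro body and
helper names are illustrative, not necessarily the actual Blender macro):

#include <type_traits>

/* Sketch only: typeof() is a GCC extension unavailable in strict C++11, so
 * decltype() is used instead. decltype() on an lvalue yields a reference
 * type, so it is stripped with std::remove_reference to keep the old
 * typeof()-like behavior. A mismatched type fails to compile. */
#define TYPE_CHECK(var, type) \
  { \
    typedef std::remove_reference<decltype(var)>::type _type_check_t; \
    _type_check_t *_type_check_ptr = (type *)0; \
    (void)_type_check_ptr; \
  } \
  ((void)0)

/* Usage: TYPE_CHECK(some_float_var, float); compiles, while passing a
 * variable of a different type does not. */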
This is all based on changes from the depsgraph_refactor branch, apart from
the weird changes which were made in order to support MinGW compilation. Those
parts of the change would need to be carefully reviewed again after the
official move to gcc49 in MinGW.
Tested on Linux with GCC-4.7 and Clang-3.5; other platforms are not tested and
likely need some more tweaks.
Reviewers: campbellbarton, juicyfruit, mont29, lukastoenne, psy-fi, kjym3
Differential Revision: https://developer.blender.org/D1089
The issue seems to be caused by an integer overflow. It still needs to be
investigated why exactly the buffer contained such a huge value, but the patch
is legit and seems to solve the issue just nicely.
This was a regression caused by attempts to fix T42844, and there were some
red herrings which led me down the wrong path when fixing it. It's a deeper
issue than just the interpolation offset; it's mainly about how the node
resolutions are mapped to each other.
It could actually be a part of the canvas awareness project.
The issue was caused by the AO operation reporting itself as a color operation
(which means it's expected to output RGBA) while internally it's RGB only in
the render engine, which caused some memory to be left uninitialized.
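
A minimal sketch of the idea behind such a fix (illustrative names, not the
actual render or compositor API): when the engine only provides three
channels, the alpha of the RGBA buffer has to be written explicitly, otherwise
it stays uninitialized.

/* Sketch only: copy an RGB-only pass into an RGBA buffer, filling the alpha
 * channel explicitly so no memory is left uninitialized. */
static void copy_rgb_pass_to_rgba(const float *src_rgb, float *dst_rgba, int num_pixels)
{
  for (int i = 0; i < num_pixels; i++) {
    dst_rgba[i * 4 + 0] = src_rgb[i * 3 + 0];
    dst_rgba[i * 4 + 1] = src_rgb[i * 3 + 1];
    dst_rgba[i * 4 + 2] = src_rgb[i * 3 + 2];
    dst_rgba[i * 4 + 3] = 1.0f;  /* Would otherwise be read uninitialized. */
  }
}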
This was still the known issue with the pixel center; the original commit
didn't cover all the cases by the looks of it.
Should be all fine now, but much more intense testing is welcome.
This way re-mapping scene nodes to EXR files becomes much easier; no extra
trickery with separate RGBA setups is needed.
Plus it makes it more consistent with regular EXR files.
This uses the RGBA pass to get the alpha from.
Quite a straightforward change, and in theory we can even try supporting
motion blur for the corner pin node (which is tricky because the coordinates
actually come from sockets, but with some black magic it should be doable).
The issue was caused by a conflict between the preview render, which would set
the R_NO_IMAGE_LOAD flag on the renderer, and texture samplers called outside
of the render pipeline trying to use this flag.
Now the sampler functions accept an extra argument so the render pipeline can
still skip image loading, but calls outside of the pipeline will nicely load
all the images.
Not the cleanest change in the world, but good enough to unlock the Gooseberry
team, and since we already had the pool passed all over the place it should be
all fine.
Will need to reshuffle the arguments into a SamplerOptions structure later.
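
A rough sketch of what such a SamplerOptions structure could look like
(hypothetical names, just to illustrate the direction, not an existing
Blender struct):

struct ImagePool;

/* Sketch only: bundle the per-call sampler state into one structure instead
 * of growing every sampler function's argument list. */
struct SamplerOptions {
  ImagePool *pool;       /* The image pool which is already passed around everywhere. */
  bool skip_load_image;  /* True when called from the render pipeline with
                          * R_NO_IMAGE_LOAD set; false for callers outside the
                          * pipeline, which then load images on demand. */
};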
Different interpolation methods in the compositor could lead to a 0.5 pixel
offset in final renders. This is because of an inconsistency in whether
integer coordinates mean the pixel corner or the pixel center.
Should be all fine now.
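
A minimal illustration of the convention mismatch (assumed names, not actual
compositor code):

/* Sketch only: convert integer pixel indices (pixel corner convention) to the
 * sample position at the pixel center. Doing this in some interpolation paths
 * but not in others is exactly what produces a half pixel shift. */
static inline void pixel_index_to_center(int x, int y, float r_co[2])
{
  r_co[0] = (float)x + 0.5f;
  r_co[1] = (float)y + 0.5f;
}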
The compositor used a fixed size of 4 floats to hold pixel data. This patch
selects the size of a pixel based on its type: 1 float for Value, 3 floats
for Vector and 4 floats for Color data types.
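
As a rough sketch of that mapping (enum and function names are assumptions,
not necessarily the actual compositor symbols):

/* Sketch only: number of floats stored per pixel for each socket data type,
 * instead of a fixed 4 floats for everything. */
enum DataType {
  DATA_TYPE_VALUE,   /* e.g. factors, masks, alpha: 1 float. */
  DATA_TYPE_VECTOR,  /* e.g. normals, speed vectors: 3 floats. */
  DATA_TYPE_COLOR,   /* RGBA: 4 floats. */
};

static unsigned int num_channels(DataType type)
{
  switch (type) {
    case DATA_TYPE_VALUE:  return 1;
    case DATA_TYPE_VECTOR: return 3;
    case DATA_TYPE_COLOR:  return 4;
  }
  return 4;  /* Fallback: the old fixed-size behavior. */
}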
When benchmarking on shots (the opening shot of Caminandes) we get a memory
reduction of 30% and a tiny speedup, as fewer data transformations need to
take place (but these are negligible).
More information on the patch can be found at
https://developer.blender.org/D627 and
http://wiki.blender.org/index.php/Dev:Ref/Proposals/Compositor2014_p1.1_TD
Developers: jbakker & mdewanchand
Thanks to Sergey for his in-depth review.
Buffers should actually be cleared before running operations on them,
but this doesn't work for some reason.
Note also that the sunbeams node can show some creases and hard aliasing
when the source point is close to a bright area with a strong gradient.
To fix this, a better filtering algorithm, dithering or ray sampling would
need to be implemented. In the meantime simply blurring the sunbeams result
a bit should help (or simply avoid putting the source on a bright spot).
Pretty much a straightforward change which gives around a 30% speedup on my
laptop and around a 2x speedup on the desktop at the BI (which uses a gts580).
Tested with huge blurs (like 10% blur) which were rather common during
Caminandes.
For now OpenCL is only used for blur sizes of more than 100 pixels.
This is a bit experimental still, feedback is welcome.
Reviewers: jbakker, lukastoenne
Subscribers: ton
Differential Revision: https://developer.blender.org/D576
The UV values include the image width/height now. To restore the previous
method as closely as possible (even though it is not documented anywhere how
this is supposed to work), we have to ignore this scaling.
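
One way to read "ignoring this scaling" is dividing the image size back out of
the incoming UVs before sampling; a minimal sketch under that assumption, with
illustrative names only:

/* Sketch only: compensate for UV values that now carry the image
 * width/height, restoring the previous normalized behavior. */
static inline void uv_unscale(const float uv_in[2], int width, int height, float r_uv[2])
{
  r_uv[0] = uv_in[0] / (float)width;
  r_uv[1] = uv_in[1] / (float)height;
}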
This ensures that the beam color does not darken along borders, by using
the last valid color of the ray as the border color (extending colors in
the direction of the source point).
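
A minimal self-contained sketch of that border handling (assumed buffer layout
and names, not the actual sunbeams kernel): while marching along the ray, a
sample that falls outside the buffer reuses the last color read inside it, so
the accumulated beam keeps the border color instead of fading to black.

/* Sketch only: accumulate RGBA samples along a ray; out-of-bounds steps reuse
 * the last valid color, extending the border color along the ray. */
static void accumulate_ray(const float *buffer, int width, int height,
                           float start_x, float start_y, float dx, float dy,
                           int num_steps, float r_sum[4])
{
  float last_valid[4] = {0.0f, 0.0f, 0.0f, 0.0f};
  float x = start_x, y = start_y;
  r_sum[0] = r_sum[1] = r_sum[2] = r_sum[3] = 0.0f;
  for (int step = 0; step < num_steps; step++) {
    int px = (int)x, py = (int)y;
    if (px >= 0 && px < width && py >= 0 && py < height) {
      const float *pixel = &buffer[4 * (py * width + px)];
      last_valid[0] = pixel[0];
      last_valid[1] = pixel[1];
      last_valid[2] = pixel[2];
      last_valid[3] = pixel[3];
    }
    /* Outside the buffer: keep last_valid, extending the border color. */
    r_sum[0] += last_valid[0];
    r_sum[1] += last_valid[1];
    r_sum[2] += last_valid[2];
    r_sum[3] += last_valid[3];
    x += dx;
    y += dy;
  }
}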