That would break proxy behavior after a library reload.
The usual super-annoying loop-back pointers... At least that one is
easily detectable and can be fixed in-place.
Found while investigating T64764.
Not re-reading linked data-blocks in undo/redo case also means that we
do not touch their usercounts. Even worse, the lib_link process in
readfile will increase those (for cases where local data uses linked
one).
Whole data management code is now heavily relying on valid consistent
refcount of all IDs, so we cannot allow that anymore.
Simple solution here could have been to then not increase that one for
linked IDs in `newlibadr_us()`, but unfortunately that would not be
totally bullet-proof, as some local users of linked data may be added or
removed by an undo step...
So I cannot think of any other solution than the ugly brute force one,
i.e. going over the whole Main database and recomputing linked IDs' user
counts... Should not be a big issue performance-wise though, this is a
fairly cheap process.
This will be needed in the undo/redo case: since we do not re-read linked
IDs, their usercounts become total garbage (especially in 'used by local
ID' cases)...
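A minimal sketch of the brute-force recount described above, using hypothetical data structures rather than the actual Blender Main/ID types:

```cpp
// Hypothetical sketch only -- not the real Blender data structures or API.
#include <vector>

struct ID {
  bool is_linked = false;   /* data-block comes from a library */
  int users = 0;            /* reference count to recompute */
  std::vector<ID *> refs;   /* other IDs this data-block uses */
};

/* Reset the count of every linked ID, then add one user per reference
 * found while walking the whole database. */
void recount_linked_id_users(std::vector<ID *> &main_database)
{
  for (ID *id : main_database) {
    if (id->is_linked) {
      id->users = 0;
    }
  }
  for (ID *id : main_database) {
    for (ID *ref : id->refs) {
      if (ref && ref->is_linked) {
        ref->users++;
      }
    }
  }
}
```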
Steps to reproduce were:
* Duplicate some area into new window (shift-click corner triangle)
* Make it fullscreen
* Close the window again
* Activate the added screen from the menu (the one without the
-nonnormal prefix)
-> Crash (you may have to press "Back to Previous" first though)
When activating a screen, code should check if there's a fullscreen
variant of it and activate that instead. From what I can tell that's
what the code tried to do, but incorrectly.
Same issue as T64045, but things are a bit different for 2.7.
For some reason the buildinfo header was not re-generated. The root
reason is not really clear to me, so simply remove the header similar
to the CMake cache.
GCC 9 started implementing the OpenMP 4.0 and later behavior. When not using
default clause or when using default(shared), this makes no difference, but
if using default(none), previously the choice was to either not specify the
const-qualified variables on the construct at all, or to specify them in a
firstprivate clause. In GCC 9, as well as for OpenMP 4.0 compliance, those variables need
to be specified on constructs in which they are used, either in shared or
in a firstprivate clause. Specifying them in a firstprivate clause is one way to
achieve compatibility with both older GCC versions and GCC 9;
another option is to drop the default(none) clause.
This patch thus drops the default(none) clause.
See https://gcc.gnu.org/gcc-9/porting_to.html#ompdatasharing
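Illustrative snippet, not code from this patch, showing the situation:

```cpp
// Illustrative only. With default(none), GCC 9 (per OpenMP 4.0) requires the
// const-qualified 'n' to appear in a data-sharing clause; firstprivate(n)
// works with both old and new GCC, while dropping default(none) entirely --
// as this patch does -- sidesteps the question.
#include <cstdio>

int main()
{
  const int n = 16;
  int sum = 0;

  #pragma omp parallel for reduction(+ : sum)  /* default(none) dropped */
  for (int i = 0; i < n; i++) {
    sum += i;
  }

  printf("%d\n", sum);
  return 0;
}
```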
Signed-off-by: Robert-André Mauchin <zebob.m@gmail.com>
Most of the source tarballs are retrieved via http, but a few remain
that are still downloaded via ftp. This causes some pain with corporate
firewalls, so moving the last two URIs to http helps ease the build process.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D4192
For custom paths selected during 'install_deps.sh' using '--source'/'--install', the paths for blosc, embree and opencollada are not printed/included in the BUILD_NOTES.txt file.
As '/opt/lib/<package>' paths are hardcoded into CMake's Find* modules, this error is not noticeable, but for custom paths it is.
This patch adds those prints/fixes for those packages.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D4574
This is only available through the API, mainly intended for render farms to
combine rendered multilayer EXR Files with different samples. The images are
currently expected to have the exact same render layers and passes, just with
different samples.
Variance passes are still simply a weighted average, ideally these should be
merged more intelligently.
Differential Revision: https://developer.blender.org/D4554
Always use the native functions, since this was already the case due to
__CL_USE_NATIVE__ not being defined in time, and it seems to have caused no
known issues.
It is effectively always enabled, except on some unsupported OpenCL devices.
For testing those it is not useful to disable these features. This is replaced
by the more fine-grained feature toggles that we have now.
This version fixes various bugs, and there is no need anymore to use both
9.1 and 10.0 for different cards.
There is a bug related to WITH_CYCLES_CUBIN_COMPILER and bump mapping in the
regression tests, so that remains disabled same as it was for CUDA 10.0.
Fix T59286: CUDA bake failing on some cards.
Fix T56858: CUDA 9.2 and 10 issues.
It only returned those for the active device type. For backwards compatibility
return them all again, but still avoid enumerating them from our own code on
startup or opening preferences.
The main goal of this change is faster startup when using foreground
rendering.
This patch will build kernels in parallel to the update process of
the scene. When these optimized kernels are not available (yet) an AO
kernel will be used.
These AO kernels are fast to compile (3-7 seconds) and can be
reused by all scenes. When the final kernels become available we
will switch to these kernels.
In background mode the AO kernels will not be used.
Some kernels are used during scene update (displace, background
light). When these kernels are needed the process can halt until
they become available.
Reviewed By: brecht, #cycles
Maniphest Tasks: T61752
Differential Revision: https://developer.blender.org/D4428
The functions that determine the program name + filename of kernels
were missing some base kernels like denoising and base. For completeness
I added those kernels so the function returns the correct results.
This patch puts threads that render the same pixel closer together,
as opposed to threads that render the same sample. Thus threads
within a warp are more coherent in memory access and control flow,
leading to performance improvements.
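A rough sketch of the index decode change described above (hypothetical helper, not the actual Cycles kernel code):

```cpp
// Hypothetical sketch only: how a flat work index can be decoded so that
// adjacent threads trace the same pixel rather than the same sample.
struct Work {
  int pixel;
  int sample;
};

/* Old order: adjacent global ids -> same sample, different pixels. */
Work decode_sample_major(int global_id, int num_pixels)
{
  return {global_id % num_pixels, global_id / num_pixels};
}

/* New order: adjacent global ids -> same pixel, different samples,
 * so a warp stays coherent in memory access and control flow. */
Work decode_pixel_major(int global_id, int samples_per_pixel)
{
  return {global_id / samples_per_pixel, global_id % samples_per_pixel};
}
```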
Example benchmarks on a Quadro RTX4000 (WDDM) on Windows 10:
Koro: 4:23 -> 3:46
BMW: 1:18 -> 1:25
Barbershop Interior: 17:52 -> 14:55
Classroom: 4:37 -> 3:45
Performance differences on OpenCL/AMD were hit and miss, some scenes
became faster, others lost significantly. Therefore, this is kept as a
CUDA-only change for now.
The NODE_GROUP_LEVEL of the Geometry node should be bumped to 1
when Backface is connected. Backface uses `NODE_LIGHT_PATH`, which
is part of NODE_GROUP_LEVEL_1; the rest of the Geometry node is
NODE_GROUP_LEVEL_0.
This patch will reduce the number of times that we need to
recompile kernels. It does this by enabling/disabling features
by default, so that when the user needs them the kernels are
already available.
Other features are enabled by default for background and foreground
rendering. When in background rendering the user wants the best
render performance; when in foreground rendering the user wants
as few recompilations as possible.
Enabling volumetrics or subdivision evaluation will still trigger
a recompilation during foreground rendering.
Reviewed By: #cycles, brecht
Differential Revision: https://developer.blender.org/D4485
This is something that is caused by the OCIO library. The patch
has been submitted there:
https://github.com/imageworks/OpenColorIO/pull/638
Until it is refined and checked we do a workaround on
our side.
Solves a weird situation where the default display name is queried
from OCIO, but the Default view is assumed to be set for it.
Now the view is initialized to the default view of that display.
Steps to reproduce were:
* Maximize area (Shift+Spacebar in 2.7, Ctrl+Spacebar in 2.8)
* Open temp file browser (Ctrl+O)
* Cancel file browser (Esc) - should return to previous full-screen
* Press "Return to Previous" button
The previously maximized area would turn into a file-browser.
Note that the issue will still happen when opening old files saved while
in maximized area full-screen.
Part of the cleanup of the OpenCL codebase.
Single program is not effective when using OpenCL; it is slower
to compile and slower during rendering (when used in, for example,
`barbershop` or `victor`).
Reviewers: brecht, #cycles
Maniphest Tasks: T62267
Differential Revision: https://developer.blender.org/D4481
llvm generates some header files at build time that differ between
debug/release, causing linker errors when you use the release headers
for a debug build.
Float2 is now a new type for attributes in Cycles. Before, the choices
for attribute storage were float and float3, the latter padded to
float4. This meant that UV maps were inflated to twice the size
necessary.
Reviewers: brecht, sergey
Reviewed By: brecht
Subscribers: #cycles
Tags: #cycles
Differential Revision: https://developer.blender.org/D4409
The Lamp data was not always set. When using CUDA or CPU it was, but when using OpenCL
without `OBJECT_MOTION`, `sd->lamp` was not updated to the actual lamp. This made the
TextureCoordinate node output the wrong normal when used in a light shader.
As the normal was incorrect, it made the IES node render incorrectly
(the normal being the default for the IES node).
Setting the lamp data when no `__OBJECT_MOTION__` compile directive is present makes
sure that the normal is correctly calculated.
Fix D4450
Reviewed By: Brecht van Lommel
Displacement and Background kernels are selectively used, but always compiled. This patch will not compile these kernels when they are not needed.
Displacement kernel is only used for true displacement.
Background kernel is only used when there is a (Cycles)Light of type `LIGHT_BACKGROUND`.
Reviewed By: brecht, #cycles
Tags: #cycles
Maniphest Tasks: T61971
Differential Revision: https://developer.blender.org/D4412
The goal of this patch is to limit the number of times
kernels need to be compiled and to reuse them, as kernels with
different compile directives can lead to identical
binaries.
The implementation does this by stripping the compile directives
and reshuffling kernels so the output is more likely to be the
same.
We focused on the kernels where it was easy to detect and maintain
(bundle, bake, displace, do_volume and background). More optimizations
could be done but they are probably less obvious.
Merged the data_init and state_buffer_size kernels to split_bundle.
This patch will also remove empty kernels for do_volume and bake
when their features are not enabled.
When using the benchmark files there are less background, bake and
do_volume kernels compiled.
Fix: T61576, T61501, T61466
Reviewed By: brecht, #cycles
Differential Revision: https://developer.blender.org/D4390
VS2019 is binary compatible with the existing vc14 libraries and no
new libraries are required in svn.
VS2019 support requires cmake 3.14.
VS2019 is still in pre-release state, you are required to explicitly
select the pre-release version by using:
make full 2019pre
The goal is to make it easy to know which exact Blender version
and build was used for a job on a farm. This includes, but is not
limited to, render farms. The same is handy for simulation tasks
as well.
When using preview rendering through a camera or final rendering,
`scene.render.use_motion_blur` was not respected when building
the compile directives.
This patch checks, when building the compile directives, whether
motion blur is enabled at all. This should lead to more efficient
kernels when no motion blur is needed.
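A hedged sketch of the idea, with hypothetical names rather than the actual Cycles scene API:

```cpp
// Hypothetical sketch only -- names do not match the real implementation.
#include <string>

struct MotionBlurSettings {
  bool use_motion_blur = false;  /* mirrors scene.render.use_motion_blur */
};

std::string build_motion_directives(const MotionBlurSettings &settings)
{
  std::string directives;
  /* Only request motion features when the scene actually uses motion blur,
   * so the compiled kernel stays smaller. */
  if (settings.use_motion_blur) {
    directives += " -D__OBJECT_MOTION__ -D__CAMERA_MOTION__";
  }
  return directives;
}
```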
Tags: #cycles
Differential Revision: https://developer.blender.org/D4387
It was copying the alpha from the foreground instead of background image,
which is not usually what is needed and inconsistent with the compositor.
Differential Revision: https://developer.blender.org/D4371
The bake kernels are also used during mesh displacement and light
importance sampling. We had disabled the implementation of these kernels
when baking was not enabled.
Was happening when looking for all intersections for transparent shadow rays
in the case the ray is degenerate.
Still questionable whether we should consider this a transparent or opaque
configuration. Ideally, we should prevent such rays from happening, but that
is another vector of debugging.
To keep these tests relatively fast and practical to run often,
running them on all .blend files is a bit much. So now we only run them on
files from this directory.
Additionally this adds support for following symlinks, so that you can
easily symlink to other directories if you want to test extra files
which may have linked libraries.
Using the OpenCL MegaKernel has been slow and therefore not useful.
This patch will remove the mega kernel from the OpenCL codebase
and the OpenCLDeviceBase class.
T61736: removal of mega kernel
T61703: baking does not work with mega kernel
Tags: #cycles
Differential Revision: https://developer.blender.org/D4383
There are generic functions to retrieve float and float3 attributes,
`primitive_attribute_float` and `primitive_attribute_float3`. Inside
these functions a prioritised if-else construction checked where
the attribute is stored and then retrieved it from that location.
Actually, the calling function most of the time already knows where
the data is stored, so we can simplify this by splitting these
functions and removing the check logic.
This patch splits the `primitive_attribute_float?` functions into
`primitive_surface_attribute_float?` and `primitive_volume_attribute_float?`,
which leads to less branching and more optimal kernels.
The original function is still being used by OSL and `svm_node_attr`.
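A rough sketch of the split (hypothetical types and stub helpers, not the Cycles kernel code):

```cpp
// Hypothetical sketch only.
enum AttributeElement { ATTR_ELEMENT_SURFACE, ATTR_ELEMENT_VOXEL };

struct AttributeDescriptor {
  AttributeElement element;
  float value;  /* stand-in for the real storage lookup */
};

/* Stub lookups standing in for the real surface/volume storage fetch. */
static float surface_attribute_lookup(const AttributeDescriptor &desc) { return desc.value; }
static float volume_attribute_lookup(const AttributeDescriptor &desc) { return desc.value; }

/* Before: one generic function with a prioritised if-else chain. */
float primitive_attribute_float(const AttributeDescriptor &desc)
{
  if (desc.element == ATTR_ELEMENT_SURFACE) {
    return surface_attribute_lookup(desc);
  }
  return volume_attribute_lookup(desc);
}

/* After: dedicated variants, no storage branch in the kernel hot path. */
float primitive_surface_attribute_float(const AttributeDescriptor &desc)
{
  return surface_attribute_lookup(desc);
}

float primitive_volume_attribute_float(const AttributeDescriptor &desc)
{
  return volume_attribute_lookup(desc);
}
```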
This will reduce the compilation time and render time for kernels.
Especially in production scenes there is a lot of benefit.
Impact in compilation times
job | scene_name | previous | new | percentage
-------+-----------------+----------+-------+------------
t61513 | empty | 10.63 | 10.66 | 0%
t61513 | bmw | 17.91 | 17.65 | 1%
t61513 | fishycat | 19.57 | 17.68 | 10%
t61513 | barbershop | 54.10 | 24.41 | 55%
t61513 | classroom | 17.55 | 16.29 | 7%
t61513 | koro | 18.92 | 18.05 | 5%
t61513 | pavillion | 17.43 | 16.52 | 5%
t61513 | splash279 | 16.48 | 14.91 | 10%
t61513 | volume_emission | 36.22 | 21.60 | 40%
Impact in render times
job | scene_name | previous | new | percentage
-------+-----------------+----------+--------+------------
61513 | empty | 21.06 | 20.35 | 3%
61513 | bmw | 198.44 | 190.05 | 4%
61513 | fishycat | 394.20 | 401.25 | -2%
61513 | barbershop | 1188.16 | 912.39 | 23%
61513 | classroom | 341.08 | 340.38 | 0%
61513 | koro | 472.43 | 471.80 | 0%
61513 | pavillion | 905.77 | 899.80 | 1%
61513 | splash279 | 55.26 | 54.86 | 1%
61513 | volume_emission | 62.59 | 61.70 | 1%
There is also a positive impact when using CPU and CUDA, but it is small.
I didn't split the hair logic from the surface logic due to:
* Hair and surface use the same attribute types. It was not clear if it could be
split when looking at the code only.
* Hair and surface are quick to compile and to read. So the benefit is quite
small.
Differential Revision: https://developer.blender.org/D4375
The code assumed all datablocks were read from .blend files saved with the
same version. This restructures the Cycles versioning code to take into
account libraries.
For llvm 6 the visual studio integration was 'not great' and we had
our own, which broke when llvm 7.0.1 came out. llvm now has properly
supported integration available on the VS marketplace, hence we can
retire our custom support.
This patch implements a workaround to get the multithreaded compilation from D2231 working.
So far, it only works for Blender, not for Cycles Standalone. Also, I have only tested the Linux codepath in the helper function.
Depends on D2231.
Patch by lukasstockner97, jbakker, brecht
job | scene_name | compilation_time
----------+-----------------+------------------
Baseline | empty | 22.73
D2264 | empty | 13.94
Baseline | bmw | 56.44
D2264 | bmw | 41.32
Baseline | fishycat | 59.50
D2264 | fishycat | 45.19
Baseline | barbershop | 212.28
D2264 | barbershop | 169.81
Baseline | victor | 67.51
D2264 | victor | 53.60
Baseline | classroom | 51.46
D2264 | classroom | 39.02
Baseline | koro | 62.48
D2264 | koro | 49.03
Baseline | pavillion | 54.37
D2264 | pavillion | 38.82
Baseline | splash279 | 47.43
D2264 | splash279 | 37.94
Baseline | volume_emission | 145.22
D2264 | volume_emission | 121.10
This patch reduced compilation time as the split kernels and base
kernels are compiled in parallel. In Cycles debug mode (256) you can
untick the OpenCL single program option, which reduces the compilation time
even further (bmw 17 seconds, barbershop 53 seconds).
Reviewers: brecht, dingto, sergey, juicyfruit, lukasstockner97
Reviewed By: brecht
Subscribers: Loner, jbakker, candreacchio, 3dLuver, LazyDodo, bliblubli
Differential Revision: https://developer.blender.org/D2264
Values outside the 0..1 range produce negative colors, so now clamp to that
range everywhere. Also fixes improper handling of hue > 2.0 in some places.
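A minimal illustration of the clamping, not the actual conversion code (the hue wrap here is an assumption for the example):

```cpp
// Illustrative only: clamp saturation and value to 0..1 before converting,
// so out-of-range inputs cannot produce negative RGB components.
#include <algorithm>
#include <cmath>

struct Color3 { float r, g, b; };

Color3 hsv_to_rgb_clamped(float h, float s, float v)
{
  s = std::clamp(s, 0.0f, 1.0f);
  v = std::clamp(v, 0.0f, 1.0f);
  h = h - std::floor(h);  /* wrap hue into 0..1 (assumption for this sketch) */

  const float r = std::clamp(std::fabs(h * 6.0f - 3.0f) - 1.0f, 0.0f, 1.0f);
  const float g = std::clamp(2.0f - std::fabs(h * 6.0f - 2.0f), 0.0f, 1.0f);
  const float b = std::clamp(2.0f - std::fabs(h * 6.0f - 4.0f), 0.0f, 1.0f);

  return {((r - 1.0f) * s + 1.0f) * v,
          ((g - 1.0f) * s + 1.0f) * v,
          ((b - 1.0f) * s + 1.0f) * v};
}
```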
For OIIO 2.x we must use unique_ptr. This also required updating the
guarded allocator for std::move to work. Since C++11, construct/destroy
have a default implementation that also works in this case, so we just
leave them out.
The render layer name is now always included. Best to keep these consistent,
so that animation denoising and sample merging works the same for both and
tests can be the same. Ref D4311.
This adds a cycles.denoise_animation operator, which denoises an animation
sequence or individual file. Renders must be saved as multilayer EXR files
with denoising data passes.
By default file path and frame range come from the current scene, and EXR
files are denoised in-place. Alternatively, a different input and/or output
file path can be provided.
Denoising settings come from the current view layer. Renders can be denoised
again with different settings, as the original noisy image is preserved along
with other passes and metadata.
There is no user interface yet for this feature, that comes later.
Code by Lukas with modifications by Brecht. This feature was originally
developed for Tangent Animation, thanks for the support!
Differential Revision: https://developer.blender.org/D3889
This is backporting a change from 2.8, which may help solve crashes when
activating a window. Previously bringTabletContextToFront() would call
tablet API functions with NULL tablet, which may crash on some drivers.
Ref T60811.
This is the internal implementation, not available from the API or
interface yet. The algorithm takes into account past and future frames,
both to get more coherent animation and reduce noise.
Ref D3889.
Prefiltering of feature passes will happen during rendering, which can
then be used for denoising immediately or written as a render pass for
later (animation) denoising.
The number of denoising data passes written is reduced because of this,
leaving out the feature variance passes. The passes are now Normal,
Albedo, Depth, Shadowing, Variance and Intensity.
Ref D3889.
BF-admins agree to remove header information that isn't useful,
to reduce noise.
- BEGIN/END license blocks
Developers should add non license comments as separate comment blocks.
No need for separator text.
- Contributors
This is often invalid, outdated or misleading
especially when splitting files.
It's more useful to git-blame to find out who has developed the code.
See P901 for script to perform these edits.
We've had many reported crashes on Windows where we suspect there is a
corrupted OpenCL driver. The purpose here is to keep Blender generally
usable in such cases.
Now it always shows None / CUDA / OpenCL in the preferences, and only when
selecting one will it reveal if there are any GPUs available. This should
avoid crashes when opening the preferences or on startup.
Differential Revision: https://developer.blender.org/D4265
This bring macOS on par with Windows and Linux. It uses the OpenMP library
added to our precompiled libraries.
Custom flags are set because FindOpenMP from CMake below 3.12 does not support
AppleClang, and more recent versions do not work with our custom directory
location either.
Differential Revision: https://developer.blender.org/D4257
The integrator maximum number of closures was not set properly for the CPU/mega
kernels to match the actual available memory. Before relatively recent code
refactoring we did not use this value in those kernels so it worked fine.
Do not see why flags from the loaded file should be skipped when we do not
load UI, this is not related to UI...
Think we can keep flags from the file in both cases; should this raise some
other issue we'll just have to fine-tune masked flags in each case
separately.
Even though it makes sense logically to have displacement actually displace
the mesh, this is causing a lot of confusion for existing users that are used
to the previous behavior. Further, Eevee does not support displacement
yet, and the discrepancy between the viewport and final render is problematic.
Allows users to select a font for text strips in the video sequence editor.
Related: 3610f1fc43 Sequencer: refactor clipboard copy to no longer increase user count.
Reviewed by: Brecht
Differential Revision: https://developer.blender.org/D3621
The clipboard is not a real user and should not be counted. Only on paste
should the user count increase.
This is part of D3621, and was implemented by Richard Antalik and me.
Index files used by emacs, vim and others, for autocompletion and
searching; generated by etags, universal-ctags and others.
Differential Revision: https://developer.blender.org/D4208
The problem here was that when a render result is allocated, the standard render passes are added according to the
pass bitfield. Then, when the render engine sets the result, it adds the additional passes which are then merged
into the main render result.
However, when using Save Buffers, the EXR is created before the actual render starts, so it's missing all those
additional passes.
To fix that, we need to query the render engine for a list of additional passes so they can be added before the EXR
is created. Luckily, there already is a function to do that for the node editor.
The same needs to be done when the EXR is loaded back.
Due to how that is implemented though (Render API calls into engine, engine calls back for each pass), if we have
multiple places that call this function there needs to be a way to tell which one the call came from in the pass
registration callback. Therefore, the original caller now provides a callback that is called for each pass.
Xorg's XIWarpPointer doesn't support multi-head display while
XWarpPointer does.
Revert since this is a known TODO in Xorg and setting a custom
xinput matrix seems not to be used often.
Resolves T50383
This disables touch gesture recognition in Blender, avoiding any initial delay
when drawing with grease pencil, texture paint, etc.
Differential Revision: https://developer.blender.org/D4203
Fix T60194: Sequencer cut loses animation data for the right strip.
Fixing the first also fixes the second. First attempt was delaying the
uniquename check to a later step of the cut process, after everything had
been duplicated. While this fixed the first issue, the second one became even
more prominent (it became active for all strips, and not only
video/audio movie strips in meta's).
So instead, passing along the list of (new) sequences, so that duplicated
seqs can be put there immediately, before checking for unique names,
thereby ensuring even strips inside meta's get properly handled.
This partially reverts commit bb98e83b99.
It fixed the 'strips having the same name' issue, but broke handling of
animation then. Need to find a better way to handle this.
This change brings back the old original logic which was checking
whether worker threads fit into an active CPU group. But
it does it a bit smarter now and also checks affinity
within that group. This way Cycles will use all threads on a
Threadripper2 CPU if it's set to an automatic number of threads,
but on the other hand will not change affinity if the user requested
16 threads and changed Blender's affinity.
Tweaked scheduling so it survives this situation by scattering
"extra" threads uniformly over all the NUMA nodes.
There are still tweaks possible to make some specific hardware
configurations work better.
Exiting modes shouldn't be needed since loading the new memfile
will free the old data.
Sculpt mode dynamic topology was adding undo data on exiting the mode
which isn't logical in this case and can be avoided altogether.
The intention was to flip normals when extruding in the opposite
direction, however the sign of the angle isn't meaningful unless
the geometry center and region normal are taken into account.
Disable, may add back in a way that works more predictably.
Please always build tests when messing with the build system/libs, I am tired
of fixing that kind of issue...
Also, that fix is probably not working for standalone; no idea where the
numaapi lib is then, but committing since I need a building Blender
here (with the tests, yes).
The issue was introduced by a Threadripper2 commit back in
ce927e15e0. This boils down to threads inheriting affinity
from the parent thread. It is a question how this slipped
through review (we definitely ran a benchmark round).
A quick fix could have been to always set CPU group affinity
in Cycles, and it would work for Windows. On other platforms
we did not have the CPU groups API finished.
Ended up making Cycles aware of NUMA topology, so now we
bind threads to a specific NUMA node. This required adding
an external dependency to Cycles, but made some code there
shorter.
Previously we would try to guess what the main tablet device is, but this is
error prone. Now we keep a list of X11 devices and try to match events to
them. On the Blender side there are still some limitations in regards to using
multiple devices at the same time, but this should improve things already.
Fixes T59645.
The issue was caused by a NaN value of the average spring length being
stored in the file. This caused accumulation in the springs builder
to also deliver NaNs, which then caused the solver itself to not do
anything.
Not sure why these values were never initialized prior to the
accumulation, or even why this runtime data is stored in DNA.
Some sanitizing is possible here, but it needs to be done with care
to not disrupt Spring production.
The key indices were wrong: need to offset curve key index
by first curve key index. Also corrected calculation of the
interpolation step.
Annoyingly, cannot reproduce this on a simple file, need a
production rig. For possible future reference, the following
file from Spring was used: 03_005_A.lighting.debug.blend
The original issue was that different platforms would use different
hash lengths, just because defaults on the Git client were different.
Now we use an explicit length for the hash, and the length is the same as
is used for short hashes in Linux -- apparently they started to have
collisions with a length of 11.
rBa520e7c85c83 defined T_OVERRIDE_CENTER (1 << 25),
which was already in use by T_PROP_PROJECTED (1 << 25),
thus skipping the center calculation.
Fixes T58882, T59518
Reviewers: campbellbarton, brecht
Maniphest Tasks: T58882, T59518
Differential Revision: https://developer.blender.org/D4100
This has been unbelievably painful to understand... And the solution is only
partially good actually; we may even want a single axis for all the
islands in that case? But for now this is giving much better results
already, compared to the random craziness it used to produce.
Prefer the legacy OpenGL library, for compatibility and portability
reasons.
Also use the proper OpenGL libraries to link against, so we can
change the preference to GLVND.
There are some changes in the OpenImageIO API, but those are quite
simple to keep working with both older and newer library versions.
Reviewers: brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D4064
It used to be used for some sort of ignoring of automatically
generated bump nodes, but nowadays it causes one of the shaders
in the Classroom demo file to be compiled wrong.
Better have this vertex color layer cover the whole 0-1 range.
Thanks @sergey for checking.
Maniphest Tasks: T57994
Differential Revision: https://developer.blender.org/D3976
This is unfortunate, but the number of bugs in this configuration
keeps growing, and almost all of them are caused by bugs in the OpenCL
compiler.
The compiler is not likely to be fixed, since Apple declared OpenCL
deprecated.
This evil commit is aimed to keep officially supported features
of Blender in a good working and stable state.
Other software uses this to define UV islands, so we can't just merge
any UVs with the same coordinate. They have to share a vertex too.
Contributed by Maxime Robinot, with changes by me.
Differential Revision: https://developer.blender.org/D4006
This reverts commit 3f31c28a02.
Gives issues zooming, could be resolved but it mostly worked OK before,
and it's not a priority to spend time on, so leave as is for now.
Now use the original dist instead, since using the distance between
the camera and the view's offset may seem random from the user's POV.
This addresses strange behavior noticed in T56934.
While the atomics library was trying to use "user-space" defined
LIKELY() and UNLIKELY(), it is not always true that user
code provides those macros, since they come from an unrelated
area.
This commit adds a sample-based profiler that runs during CPU rendering and collects statistics on time spent in different parts of the kernel (ray intersection, shader evaluation etc.) as well as time spent per material and object.
The results are currently not exposed in the user interface or via Python yet; to see the stats on the console pass the "--cycles-print-stats" argument to Cycles (e.g. "./blender -- --cycles-print-stats").
Unfortunately, there is no clear way to extend this functionality to CUDA or OpenCL, so it is CPU-only for now.
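A very small sketch of a sampling profiler of this kind (hypothetical, not the Cycles implementation):

```cpp
// Hypothetical sketch only: a background thread samples which phase each
// render thread reports itself to be in and accumulates counters.
#include <atomic>
#include <chrono>
#include <thread>

enum Phase { PHASE_INTERSECT, PHASE_SHADER_EVAL, PHASE_UNKNOWN, PHASE_COUNT };

std::atomic<int> current_phase{PHASE_UNKNOWN};   /* set by the render thread */
std::atomic<long> phase_samples[PHASE_COUNT];    /* globals, zero-initialized */

void profiler_loop(const std::atomic<bool> &stop)
{
  while (!stop.load()) {
    phase_samples[current_phase.load()]++;
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
  }
}
```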
Reviewers: brecht, sergey, swerner
Reviewed By: brecht, swerner
Differential Revision: https://developer.blender.org/D3892
The idea is to make the main thread and job threads be scheduled
on CPU dies which have direct access to memory (those are NUMA
nodes 0 and 2).
We also do this for new EPYC CPUs, since their NUMA nodes 1 and 3
do have access, but only to a higher range of DDR slots. By preferring
nodes 0 and 2 on EPYC we make it so users with partially filled
DDR slots have fast memory access.
One thing which is not really solved yet is localization of
memory allocation: we do not guarantee that memory is allocated
on the DDR slot closest to the NUMA node, and hope that the memory
manager of the OS acts in our favor.
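A hedged sketch of the node-preference logic only; the query helper below is a placeholder, and the real binding goes through the numaapi library, whose calls are not reproduced here:

```cpp
// Hypothetical sketch only.
#include <vector>

/* Placeholder query: does this NUMA node have locally attached memory?
 * Not a real API; a stand-in for the platform-specific check. */
static bool node_has_local_memory(int node) { return node == 0 || node == 2; }

std::vector<int> preferred_numa_nodes(int num_nodes)
{
  std::vector<int> nodes;
  /* Prefer nodes with direct memory access (0 and 2 on Threadripper2/EPYC). */
  for (int i = 0; i < num_nodes; i++) {
    if (node_has_local_memory(i)) {
      nodes.push_back(i);
    }
  }
  /* Fallback: keep the default order if nothing matched. */
  if (nodes.empty()) {
    for (int i = 0; i < num_nodes; i++) {
      nodes.push_back(i);
    }
  }
  return nodes;
}
```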
In some instances, the number of control vertices of a hair could change mid-frame.
Cycles would then be unable to calculate proper motion blur for those hairs. This adds
interpolated CVs to fill in for the missing data. While this will not necessarily result in
a fully accurate reconstruction of the guide hair, it preserves motion blur instead of disabling it.
Reviewers: #cycles, sergey
Reviewed By: #cycles, sergey
Subscribers: sergey, brecht, #cycles
Tags: #cycles
Differential Revision: https://developer.blender.org/D3695
Second part of the fix: do not try at all to compute normals in degenerate
geometry. That is just a loss of time and a source of potential issues later
with weird invalid computed values.
The goal is to address a performance regression when going from
a few threads to tens of threads. On systems with more than 32
CPU threads the threaded loop was actually harmful.
The following tweaks are now in place:
- The chunk size is adaptive for the number of threads, which
minimizes scheduling overhead.
- The number of tasks is adaptive to the list size and chunk
size.
Here comes performance comparison on the production shot:
Number of threads | DEG time before | DEG time after
------------------+-----------------+---------------
               44 |           0.09  |          0.02
               32 |           0.055 |          0.025
               16 |           0.025 |          0.025
                8 |           0.035 |          0.033
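A rough sketch of the chunk-size adaptation described above (the constants are illustrative, not the values used in the code):

```cpp
// Illustrative only: scale chunk size with thread count so per-chunk
// scheduling overhead stays bounded, and cap the task count by the
// number of available chunks.
#include <algorithm>

int adaptive_chunk_size(int list_size, int num_threads)
{
  /* More threads -> bigger chunks, so each thread grabs work less often. */
  return std::max(1, list_size / (num_threads * 8));
}

int adaptive_num_tasks(int list_size, int chunk_size, int num_threads)
{
  /* Never spawn more tasks than there are chunks of work. */
  return std::min(num_threads, (list_size + chunk_size - 1) / chunk_size);
}
```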
For some reason when building with gcc-7.2 (which is the default
in the previous Ubuntu LTS) the guarded allocator is not being
properly instantiated.
This doesn't happen with a newer version of gcc-7, which is 7.3, and
also doesn't happen with gcc-6 and gcc-8.
Would be nice to know what is wrong, but for the time being
committing workaround which keeps Blender users happy.
While we shouldn't have logic in an entry point, and since one should
not be making typos when moving lines around, there is a bigger entanglement
issue with BVH host code using a kernel function. This is a bad violation,
but it is tricky to get solved moments before the weekly.
In order to keep things in a (less) broken state than before own cleanup,
reverting the changes.
This reverts commit 2bad10be96.
This reverts commit ddabb21d05
Note that this is turned off by default and must be enabled at build time with the CMake WITH_CYCLES_EMBREE flag.
Embree must be built as a static library with ray masking turned on, the `make deps` scripts have been updated accordingly.
There, Embree is off by default too and must be enabled with the WITH_EMBREE flag.
Using Embree allows for much faster rendering of deformation motion blur while reducing the memory footprint.
TODO: GPU implementation, deduplication of data, leveraging more of Embree's features (e.g. tessellation cache).
Differential Revision: https://developer.blender.org/D3682
The code this was taken from assumes a 'size_t' result,
which isn't the case here.
In practice the bucket distribution wasn't bad;
even so, this was a no-op, so best to fix it.
Mainly useful for debugging. Previously, when AVX2 was disabled
in the debug panel but BVH layout was kept on BVH8 nothing was
rendered.
Needed to make it so the supported BVH layout mask for devices is
queried dynamically, so it is possible to use DebugFlags there.
Needed for the animation denoiser since the denoising filter is done separately there.
Reviewers: brecht, sergey
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D3833
This allows for extra output passes that encode automatic object and material masks
for the entire scene. It is an implementation of the Cryptomatte standard as
introduced by Psyop. A good future extension would be to add a manifest to the
export and to do plenty of testing to ensure that it is fully compatible with other
renderers and compositing programs that use Cryptomatte.
Internally, it adds the ability for Cycles to have several passes of the same type
that are distinguished by their name.
Differential Revision: https://developer.blender.org/D3538
Apparently quite a few users would like to have the noisy pass available when using the denoiser, and since it's being generated anyway we might as well expose it by default.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D3608
This function is supposed to prevent the black artifacts caused by strong normal- or bumpmapping, but failed in some cases.
Now the code correctly handles all test files and previous issues I am aware of and also has extensive comments describing
the algorithm and the math behind it.
Basically, the main problem was that there can be multiple valid solutions that fulfil the reflection angle criterion,
but I had assumed that only one would exist and therefore simply picked the first solution with a positive term in sqrt().
Now, the code uses additional validity checks and a simple heuristic to pick the best valid solution.
Additionally, the code messed up very shallow reflections even if the normal map strength was zero due to the constant
limit for the outgoing ray angle, which caused shallow incoming rays to fail the initial test even when reflected directly
on Ng. Now, the code accounts for this by reducing the threshold in the case of a shallow incoming ray, ensuring that at
least N=Ng is always a valid solution.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D3816
Looks like we need to explicitly set the i18n language to its default value on
some OSes... unless that 'need to create new translated-name IDs in
versioning code for the startup file' situation is really rare.
Anyway, hopefully that will fix the crash.
This is to solve an issue where a blend file could be compressed
unbeknownst to the artist. This happened in the following situation:
- Artist edits an uncompressed blend file.
- Some script saves a compressed blendfile to a separate location.
- When the artist saves the file (s)he is editing (File>Save, or Ctrl+S),
it was silently compressed.
Cherry pick from: cd3b313d5f
This is a bug experienced by animators in the Blender Studio that developers
have been trying to fix for a /long/ time.
What happens is that partial file writing extracts the needed datablocks from
the main list of datablocks into a smaller one. Afterwards they are added back
to the main list, but in some cases not exactly in the same order.
There is file path remapping code that depends on the datablocks being in
exactly the same order as before, and when this was not the case filepaths
would get swapped between datablocks.
The reason datablocks are not restored in the same order is because the sorting
of datablocks by name is a) case insensitive and b) undefined if there are
multiple datablocks with the same name from different libraries. This should
be made well defined, but the fix in this commit is simpler.
The way animators ran into this bug is that they use the Copy Attributes addon
a lot, which has as the first item in the menu Copy Selection to Buffer. In
some cases this would be clicked accidentally when the menu is near the edge of the
window, breaking the library paths, which would only be noticed much later on
file save and reload.
The way this bug was finally tracked down is that it was suspected that the
undo system was the cause, and so Bastien added library validation for undo.
When Hjalti then did undo and noticed the error, he remembered accidentally
clicking Copy Selection to Buffer just before, and we could finally reproduce
the bug.
There are serious suspicions that weird corruptions faced by studio
artists may happen in undo/redo code, so let's see whether that's the
case.
With this, and when the --debug-io arg is passed on startup, the whole lib
data is checked at every undo. This makes undo slower (from two to
three times slower), but it could help us better spot what happens...
At least on Windows we do not re-run datatoc when the .glsl files change.
Testing is simple: just change edit_mesh_overlay_common_lib.glsl
(remove lines, write plain text, ...), then rebuild and go into edit mode
with the default cube.
I also had to remove the entry in gpu/CMakeLists.txt for
gpu_shader_material.glsl since this was being tracked directly, as well
as running data_to_c_simple (otherwise CMake raises an error for
duplicated entries).
We probably want to do the same for the other datatoc functions.
Reviewers: LazyDodo, brecht
Differential Revision: https://developer.blender.org/D3803
Since my temporary buffer commit (about a month ago), the OpenCL device was zeroing the wrong buffer, leading to
completely wrong filtered feature passes and therefore significantly lower-quality results than CPU and CUDA.
With new jemalloc versions, memory allocated by threads that then become
inactive is no longer automatically freed. Instead we have to enable a
background thread to do it.
Some testing is needed to find out if this is sufficient, because the
background thread only runs periodically.
The crash was caused by the BVH traversal stack being overflowed.
That overflow was caused by lots of false-positive intersections
for rays originating at a non-finite location.
Not sure why those rays exist in the first place;
this is to be investigated separately.
This commit moves the pre-SSE4.1 check to a higher level function
and enables it for all microarchitectures.
I think there are two possible ways to fix that.
1. Make the name a required parameter.
2. Provide a default value.
I chose option 1 in this fix to be consistent with other .new() functions.
Also I think `RNA_def_string` should be used here instead of `RNA_def_string_file_path`; looks like a copy-paste error.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D3728
The default compositor node update function sets the need_exec flag on the node
which the Auto Render feature checks, but the custom update function that was
added as part of rB4cf7fc3b3a4d didn't do so.
Therefore, the two custom update functions that were added now also call the
default update function.
Most other software expects to read indexed vertex colors, so write indices
along with the colors as we already do for UVs.
Differential Revision: https://developer.blender.org/D3704
Terms get/set don't make much sense when casting values.
Name macros so the conversion is obvious,
use a common prefix for easier completion.
- GET_INT_FROM_POINTER -> POINTER_AS_INT
- SET_INT_IN_POINTER -> POINTER_FROM_INT
- GET_UINT_FROM_POINTER -> POINTER_AS_UINT
- SET_UINT_IN_POINTER -> POINTER_FROM_UINT
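For reference, the conversions look roughly like this (illustrative definitions; the real macros live in the BLI headers and may differ in detail):

```cpp
// Illustrative only -- not copied from BLI_utildefines.h.
#include <stdint.h>

#define POINTER_AS_INT(p)    ((int)(intptr_t)(p))
#define POINTER_FROM_INT(i)  ((void *)(intptr_t)(i))
#define POINTER_AS_UINT(p)   ((unsigned int)(uintptr_t)(p))
#define POINTER_FROM_UINT(u) ((void *)(uintptr_t)(u))

int main()
{
  void *key = POINTER_FROM_INT(42);  /* store an int where a pointer is expected */
  int value = POINTER_AS_INT(key);   /* read it back */
  return value == 42 ? 0 : 1;
}
```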
This partially solves an ASAN report about unfreed memory. There is still
something in the report; need to have a closer look with a debug version
of the OpenEXR library.
While building the AVX kernel, util_avxf.h/avxb.h were using some AVX2 intrinsics.
These were never called, so it wasn't a run-time issue, but the intrinsics headers
on CentOS excluded the AVX2 prototypes when building the AVX kernel, causing build errors.
This commit cleans up the improper usage of the AVX2 intrinsics and provides AVX
fallback implementations for future use.
Differential Revision: https://developer.blender.org/D3696
ffi stubbornly wants to put libs in lib64 even when you tell it not to on some Linux distributions.
Patch based on the sed fix the Gentoo guys did [1]
[1] https://bugs.gentoo.org/462814
It isn't really possible to do the shuffle which was attempted here.
While it's possible to achieve the expected behavior, the function needs to
be rewritten. Since it's not used anyway, it's simpler to remove it for
now.
Since we are not linking against OpenCV ourselves, that generated
linking errors later on (while building OSL e.g.).
Those 'open' libs link against way too many other libs... :/
Thanks to @intrah for initial report (T56785), and @LazyDodo for
suggested solution.
That bug probably did not affect 2.7x, only 2.8 with COW copying IDs in
threads... But the root of the issue is that the underlying boost i18n lib does
not support multi-threaded access well. So simply forbid any translation
from non-main threads. This *may* be an annoying limit at some point, but
I doubt it will be any issue currently.
That pointer can be NULL, and RNA default string handling does not support
that. (That whole uv_layer prop is quite nasty actually, since it does
not own that string, it always borrows it from some other data :((( ).
Failed as in did not allocate, possibly due to weight cutoff.
Trying to allocate extra storage for the closure in such a situation
will confuse Cycles and cause crashes later on due to obscure
values in ShaderData.
* WITH_SYSTEM_OPENJPEG is removed and is now always on, this was already
the case for macOS and Windows.
* This should not break existing Linux builds. If no new enough
OpenJPEG is installed, CMake will not find libopenjp2 and WITH_IMAGE_OPENJPEG
will be disabled.
* install_deps.sh was updated with new package names, since distributions
put this version in a new package.
Differential Revision: https://developer.blender.org/D3663
Mainly this is following Linux to build own xml2/lzma/ssl/sqlite and linking
them all statically. This ensures the Python ssl module uses a recent openssl
version rather than a very old one shipped with macOS.
Was using UI_BLOCK_LOOP to control draw style,
this meant we couldn't use popup theme colors for cases
where the interface has the same purpose as a popup but happens
not to use this flag.
When using the 'normal' orientation, the normal would be ignored
if the plane couldn't be calculated.
Now use only the normal if the plane is zero length;
this was already done, just not in all cases.
Tested with Debian Testing, might need some adjustments for other
distributions...
Also removed the last patches we used here; we should not need any anymore!
In the real world it is very weird to disable AA on a mask;
it will give an ugly looking result. For some fast preview
passes (like in the node preview) the system can decide
to disable AA without asking the user to do anything.
One thing we can consider doing is to remove the Feather
option as well. If real compositing becomes measurably slower
in cases when the mask has no real feather, we can disable
feather internally, without user input. Disabling
feather in the interface is like making things faster
but giving a wrong result, which doesn't sound that
helpful either.
Reviewers: brecht
Reviewed By: brecht
Subscribers: hype, sebastian_k
Differential Revision: https://developer.blender.org/D3677
The issue there was that the number of layers did not include normals,
while the element size counts bytes used by normals. This sounds very
fragile and dangerous to work with further. Also, one value can easily
be derived from the other, so there is redundancy going on here.
A possible difference now is that multires subdivision will copy
normals to higher levels. Shouldn't be that big of a problem, since
leaving old normals and updating coordinates is not correct either.
The MASS unit was already implemented for the C API; this only makes sure it is
accessible in the Python API. Also added 'CAMERA' to the documentation as a valid option.
The statement that PBVH needs to keep track of CCGDM is wrong; PBVH itself
does not care about CCGDM at all, and it's weird for it to carry around this
beast so others can access it.
Even more, nobody actually cares about CCGDM itself; all the usages
were checking whether there is a CCGDM or not. This is as good as simply
checking the PBVH type.
Tested with the original report T53551 and everything is still stable.
WITH_ASSERT_ABORT was not disabled for release builds. In most cases asserts
are disabled in release builds, but not always.
This also changes the buildbot to use blender_release.cmake instead of
blender_full.cmake, the only effective difference should be WITH_ASSERT_ABORT.
The assembler template was backing up and restoring ebx, which is
fair enough. However, this did not prevent the compiler from putting
result variables in ebx. This was causing data corruption.
The easiest solution to prevent this is to list ebx in the clobbers
for the assembly.
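A simplified illustration of the fix (not the actual assembler template): listing ebx in the clobber list tells the compiler it cannot keep a live value there across the statement.

```cpp
// Illustrative only, 32-bit x86 GCC-style inline assembly.
#if defined(__GNUC__) && defined(__i386__)
static void cpuid_basic(unsigned int op, unsigned int *eax, unsigned int *ecx, unsigned int *edx)
{
  __asm__ __volatile__("cpuid"
                       : "=a"(*eax), "=c"(*ecx), "=d"(*edx)
                       : "a"(op)
                       : "ebx"); /* ebx listed as clobbered instead of manual push/pop */
}
#endif
```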
There is a chance that on a system with both versions installed this
*might* cause some issues. Such a system will be a pain to support out
of the box anyway.
This change allows using precompiled libraries without extra
modifications in the config.
- For some reason CMake's platform and processor are not initialized there.
- Need to set variables in the cache, otherwise they are not visible in the
actual CMake files.
The script clearly states:
This makes the presumption that you are include al.h like
#include "al.h"
and not
#include <AL/al.h>
The reason for this is that the latter is not entirely portable.
Windows/Creative Labs does not by default put their headers in AL/ and
OS X uses the convention <OpenAL/al.h>.
This commit makes the default precompiled OpenAL be properly detected,
and also removes a hack on macOS which was finding the OpenAL package but
then overwriting the include directory.
Note that the new audaspace in 2.8 is using the expected #include <al.h>.
This is an initial implementation of the BVH8 optimization structure
and packeted triangle intersection. The aim is to get faster ray-to-scene
intersection checks.
Scene BVH4 BVH8
barbershop_interior 10:24.94 10:10.74
bmw27 02:41.25 02:38.83
classroom 08:16.49 07:56.15
fishy_cat 04:24.56 04:17.29
koro 06:03.06 06:01.45
pavillon_barcelona 09:21.26 09:02.98
victor 23:39.65 22:53.71
As memory goes, peak usage rises by about 4.7% in complex
scenes.
Note that BVH8 is disabled when using OSL; this is because the OSL
kernel does not get per-microarchitecture optimizations and
hence always considers BVH3 is used.
Original BVH8 patch from Anton Gavrikov.
Batched triangles intersection from Victoria Zhislina.
Extra work and tests and fixes from Maxym Dmytrychenko.
There is system-wide libz development package installed by default,
needed for some other dependencies. This patch ensures Python will
use our own self-compiled Zlib.
This involved getting SSL compiled from sources first, ensuring
it is a static library with position independent code. Configuration
is based on what Debian is using. CFlags required having their own
configuration file, for which I didn't find a better place than next
to the corresponding CMake file.
It is OpenSSL btw.
It is set to Python via --with-openssl= configuration argument.
This works fine in a clean chroot, but having libssl-dev installed
might make Python prefer the system-wide library. This was worked
around by using the libssl_pic.a name for the library and modifying
setup.py. Would be cool to ensure system-wide libraries are not
a problem, but the official release builder is safe against this,
since it will catch possible non-static dependencies.
There is also a new map file which shadows a bunch of Python
symbols. Without this, Python's shared libraries might bring
conflicting symbols into the Blender namespace at runtime.
Hopefully this doesn't break other platforms.
There is an issue with objects destructing in a non-deterministic way during process shutdown; temporarily work around this until OSL has a fix in place.
With small tiles, the repeated allocations on GPUs can actually slow down the denoising quite a lot.
Allocating the buffer just once reduces rendertime for the default cube with 16x16 tiles and denoising on a mobile 1050 from 22.7sec to 14.0sec.
While the crash is in 2.8, it's possible undo operates on data
which isn't only owned by the current scene (any object, for example).
Thanks to @mont29 for suggesting the fix.
Building the CUDA kernels takes quite a bit of memory, and when building all of
them the combined usage can be too much on some systems (especially VMs).
Therefore, this patch adds an option to force the build system to build them
sequentially by making each build step depend on the previous kernel.
Reviewers: brecht, sergey
Differential Revision: https://developer.blender.org/D3623
This changes the text hinting setting to be an enum with options
Auto / None / Slight / Full. The default is Auto which currently disables
hinting.
The hinting was tested with a new FreeType version, but this is not what
is used on the buildbots and official release environment, and the fonts
look quite bad because of that. Once FreeType has been upgraded we can
change the default.
Even then the results are not ideal, perhaps due to missing subpixel
positioning and linear color blending support in BLF.
Blosc embedded a copy of zlib/pthreads, causing duplicate symbol linker errors. The pthreads issue was Windows-specific, but zlib may apply to other platforms as well.
Without this, there was no simple way to have
launchers for different app-templates.
Also allows force-disabling the app-template stored in the preferences.
Very silly mistake (someone forgot to increment one of the variables in
one of the loops in that spaghetti nightmare that is nurbs shapekey
code), took half an age to spot it... :/
-Moved from dynamic linking to static on Windows
-Gained lcms/tinyxml/yamlcpp deps, since we need a little more control over the build flags than the built-in options provide.
Conversion code could leave an object with inconsistent material data
compared to its new obdata.
Ideally, the various conversion code would handle that properly,
preserving materials when possible, but for now at least ensure we get a valid
result!
Related to T56363, this is not fixing the root of the bug, but ID
copying should always be a good occasion to ensure sanity of our data
(and error checking is always better than a crash!).
To get consistent, user-expected results here, we need to 'fake'
starting immediately after a 'skip' block (such that we start with a
full block of selected elements).
The same issue affected vertex and edge selection of course; did not
check the other usages of WM_operator_properties_checker_interval_test()
though.
Seems like the OpenEXR 2.3.0 release tarball has broken CMake support; the latest from git works.
We'll have to revisit this once they get a new release out.
Beautiful example of a typo going unnoticed and firing back in a totally
unexpected place years later. Guess nobody actually duplicated a Clip
data-block before! :P
Most likely my own fault, during the refactor of the ID copying code.
Patch porting to OpenJPEG 2.3 is by Campbell.
Once all platforms are upgraded we can remove the code for 1.5, and upgrade
or remove the openjpeg version from extern/. This intermediate step makes it
possible for platform maintainers to upgrade to 2.3 without breaking other
platforms.
This makes the root flag writable using the Python API, using the
generic skin vertex flag setter function.
Reviewed By: Campbell Barton
Differential Revision: http://developer.blender.org/D3583
Having 'flag, flag2, flag3' is getting out of hand especially
when we support increasing the size of types.
Make flag2 into an int.
Note, this loses the 'show world' option,
but it's not such an important setting.
This is in preparation of upgrading our library dependencies, some of which
need C++11. We already use C++11 in blender2.8 and for Windows and macOS, so
this just affects Linux.
On many distributions this will not require any changes; on some,
install_deps.sh will need to be run again to rebuild libraries.
Differential Revision: https://developer.blender.org/D3568
Just basic algebra - because all vectors have the same z coordinate, a lot of terms end up cancelling out.
Not exactly a massive improvement, but it's measurable with Branched PT and a high sample count on the lamp.
Reviewers: brecht, sergey
Reviewed By: brecht
Subscribers: swerner
Differential Revision: https://developer.blender.org/D3540
Gathers information about object geometry and textures. Very basic at
this moment, but need to start somewhere.
Things which still need to be included:
- "Runtime" information, like BVH. While it is not directly controllable
by artists, it's still important to know.
- Device array sizes. Again, not under the artist's control, but it is added to
the overall size.
- Memory peak at different synchronization stages.
At this point it simply prints info to stdout after F12 is done;
need better control over that too.
Reviewers: brecht
Differential Revision: https://developer.blender.org/D3566
There is no reason or justification to have helper functions as
class methods: they do not depend on anything in the class itself.
There are probably more cases like that.
While changing the shading normal is a great way to add additional detail to a model, there are some problems with it.
One of them is that at grazing angles and/or strong changes to the normal, the reflected ray can end up pointing into the actual geometry, which results in a black spot.
This patch helps avoid this by automatically reducing the strength of the bump/normal map if the reflected direction would end up too shallow or inside the geometry.
Differential Revision: https://developer.blender.org/D2574
RNA API Object.ray_cast would not normalize the direction vector before
doing the first quick bbox intersection test, while using its returned
distance value. This could lead to wrong exclusion of the object.
Thanks to @codemanx for finding that issue.
The old springs with damping 1.0 operate in a special way that
is more similar to plastic deformation than a spring. Some users
rely on that, so let the user choose which implementation to use.
This also restores full backward compatibility with 2.79.
Reviewers: sergof
Differential Revision: https://developer.blender.org/D3544
We actually still had cases of Meta strip duplication resulting in
non-unique strip names. Quite surprising this went unnoticed for so long. :(
Fixed that bug, and think it was the last one (at least, no other case of
SEQ_DUPE_UNIQUE_NAME usage should be broken, I think...), and raised
subversion and updated doversion to run the uniquename check on strips on
all previous file versions.
Note: will have to do that again when merging in 2.8...
Add a new tag to bSound (runtime flags), and make the read code set a new 'no
reload waveform' tag, since it uses a mapping to get the existing
waveform in the undo case...
Damped Track by specification attempts to arrive at the desired
direction via the shortest rotation. However with opposite vectors
there are infinitely many valid 180 degree rotations. Currently
it gives up and does nothing.
I think that it would be more reasonable to resolve the ambiguity
arbitrarily, so that Damped Track won't have a weird dead zone.
To make it more predictable I use a local axis.
In addition, the singularity area vicinity has some floating
point precision problems that result in significant jitter.
This applies workarounds for two causes of instability.
Differential Revision: https://developer.blender.org/D3530
This increases stack memory usage some, and ideally we'd support a dynamic
size. But this is quite difficult on the GPU and hopefully 32 is enough even
for very complex cases.
This is a physically-based, easy-to-use shader for rendering hair and fur,
with controls for melanin, roughness and randomization.
Based on the paper "A Practical and Controllable Hair and Fur Model for
Production Path Tracing".
Implemented by Leonardo E. Segovia and Lukas Stockner, part of Google
Summer of Code 2018.
This patch adds a new matte node that implements the Cryptomatte specification.
It also includes a custom eye dropper that works outside of a color picker.
Cryptomatte export for the Cycles render engine will be in a separate patch.
Reviewers: brecht
Reviewed By: brecht
Subscribers: brecht
Tags: #compositing
Differential Revision: https://developer.blender.org/D3531
Metadata loading code was assuming all videos in Blender were from
FFMPEG... Added empty place-holders for the other types too; we probably
could load some metadata from pictures or AVI files as well!
Features to get the 2nd, 3rd, 4th closest point instead of the closest, and
various distance metrics. No viewport/Eevee support yet.
Patch by Michel Anders, Charlie Jolly and Brecht Van Lommel.
Differential Revision: https://developer.blender.org/D3503
Useful to store a snapshot of the current keymap state
so changes to the default keymap are ignored.
Also useful for testing keymap export works properly.
This works for Cycles, Eevee, texture nodes and compositing. It helps to
reduce the number of math nodes required in various node setups.
Differential Revision: https://developer.blender.org/D3537
Previously CMake was raising a fatal error, which wasn't too helpful.
There is still some fatal messages about Audaspace and Game Engine,
but the latter is at its EOL and is removed in Blender 2.8.
The X resource database has to be explicitly destroyed. This fixes a 46 byte
leak per window DPI query (which happens a lot on window move/resize
and even on area resize).
Unfortunately, this does not fully fix the leak, due to the known issue:
https://bugs.freedesktop.org/show_bug.cgi?id=94604
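The general shape of such a query/destroy pair, as a sketch (this mirrors the usual Xlib pattern rather than the actual GHOST code; error handling and the fallback value are illustrative):
```
#include <X11/Xlib.h>
#include <X11/Xresource.h>

#include <cstdlib>

/* Query Xft.dpi and free the parsed database afterwards; without the
 * XrmDestroyDatabase() call, every query leaks the database.
 * XrmInitialize() is assumed to have been called once at startup. */
static int query_dpi(Display *display)
{
  int dpi = 96;  /* Fallback. */
  char *resource_string = XResourceManagerString(display);
  if (resource_string == nullptr) {
    return dpi;
  }
  XrmDatabase db = XrmGetStringDatabase(resource_string);
  if (db != nullptr) {
    char *type = nullptr;
    XrmValue value;
    if (XrmGetResource(db, "Xft.dpi", "Xft.Dpi", &type, &value) && value.addr) {
      dpi = (int)atof(value.addr);
    }
    XrmDestroyDatabase(db);  /* The fix: explicitly destroy the database. */
  }
  return dpi;
}
```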
The flag was only used in readfile.c, and resulted in a delayed call to
BKE_ocean_add(); this call is now immediately made instead as it's not
very expensive.
Textures in 16 bit integer format are sometimes used for displacement, bump and normal maps and can be exported by tools like Substance Painter. Without this patch, Cycles would promote those textures to single precision floating point, causing them to take up twice as much memory as needed.
Reviewers: #cycles, brecht, sergey
Reviewed By: #cycles, brecht, sergey
Subscribers: sergey, dingto, #cycles
Tags: #cycles
Differential Revision: https://developer.blender.org/D3523
This deduplicates the calls for tile (un)mapping and allows to have a target buffer that is different from the source buffer (needed for baking and animation denoising).
This reverts commit 357b72e0a7, which caused
the issue; we need a better fix for the cosmetic issue from T50862. For
now, displaying keyframes and drivers is the more important thing.
Abort from BLI_assert by default in debug builds
(instead of just printing a warning).
Some developers ignored the warning, causing errors for others.
Better that debug builds cause a hard error, so the checks aren't ignored.
Disabling this is still useful when bisecting or testing outdated code.
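A hedged sketch of the macro shape (simplified; the real BLI_assert has more plumbing, and the name here is deliberately different):
```
#include <cstdio>
#include <cstdlib>

/* In debug builds, print the failing expression and abort so the failure
 * cannot be ignored; in release builds, compile to nothing. NDEBUG stands in
 * for whatever build option toggles the hard abort. */
#ifndef NDEBUG
#  define MY_ASSERT(expr) \
     do { \
       if (!(expr)) { \
         fprintf(stderr, "Assert failed: %s (%s:%d)\n", #expr, __FILE__, __LINE__); \
         abort(); \
       } \
     } while (0)
#else
#  define MY_ASSERT(expr) ((void)0)
#endif
```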
find_elem(olddata=NULL) doesn't work reliably for existence checks; it will
return NULL both when the field is found at offset 0 and when it is not
found at all.
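A minimal illustration of the ambiguity, using a generic lookup rather than the actual DNA code: a return value that can legitimately be 0 (or NULL) cannot double as "not found" unless the caller gets a separate signal.
```
#include <cstring>

struct Field {
  const char *name;
  int offset;
};

/* Ambiguous: 0 is both a valid offset and the "not found" value. */
static int find_field_offset_ambiguous(const Field *fields, int count, const char *name)
{
  for (int i = 0; i < count; i++) {
    if (strcmp(fields[i].name, name) == 0) {
      return fields[i].offset;  /* May legitimately be 0. */
    }
  }
  return 0;  /* Indistinguishable from "found at offset 0". */
}

/* Unambiguous: report existence separately from the offset. */
static bool find_field_offset(const Field *fields, int count, const char *name, int *r_offset)
{
  for (int i = 0; i < count; i++) {
    if (strcmp(fields[i].name, name) == 0) {
      *r_offset = fields[i].offset;
      return true;
    }
  }
  return false;
}
```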
The latest clang compiler (at least the one in Xcode 9.4.1) warns about the register keyword and macro expansions using defined().
Since these warnings come from third party code, we can't address them directly in Blender. Silencing them via #pragmas will
at least keep the warnings during a build down to the ones that are relevant to Blender code.
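The usual pattern for this, as a sketch (the warning names below are plausible matches for the two categories mentioned, not necessarily the exact ones used; the standard header stands in for the third-party include):
```
/* Silence warnings coming from a third-party header without hiding the same
 * warnings in our own code: push, ignore, include, pop. */
#ifdef __clang__
#  pragma clang diagnostic push
#  pragma clang diagnostic ignored "-Wdeprecated-register"
#  pragma clang diagnostic ignored "-Wexpansion-to-defined"
#endif

#include <cmath>  /* Stands in for the third-party header. */

#ifdef __clang__
#  pragma clang diagnostic pop
#endif
```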
block and layout could be NULL and checking this everywhere
wasn't practical.
Instead of lazy initializing, add UI_popup_menu_end_or_cancel
which cancels empty popup menus.
Silences the following strict flags from external libraries:
- -Wclass-memaccess
- -Wswitch
- -Wtype-limits
- -Wint-in-bool-context
Needed to tweak the macro a bit, since the old logic was wrong:
we cannot use CXX flags for the C compiler, and need a much stricter
separation between what goes where.
Again, we cannot actually get rid of G_MAIN global access here, so in
most cases I just 'marked' them as valid, and added assert checks to ensure
we only work with IDs in G_MAIN in those cases.
Validate some cases using G_MAIN instead (I don't think we want to work
on any other Main than G.main one when registering/unregistering nodes
etc.).
And when freeing, all IDs not in Main shall now be tagged accordingly, so
we *should* not need to do that stupid search over all ntrees in G.main
to check whether we have to free it ourselves or not!
SculptSession.mode_type wasn't initialized until painting,
making it unreliable for checks in other parts of the code.
Also remove unnecessary initialization,
matching sculpt mode more closely.
There were two issues here, introduced by rB66aa4af836:
* Forgot to change length of some filter_glob var deep in filebrowser code.
* Truncating filter_glob in general can be dangerous, generating
unexpected patterns.
That last point was the root of the issue here: truncating the string to 63
chars left the last group as a 'match everything' `*` pattern.
To fix that to some extent, added a new BLI_path_extension_glob_validate
helper to BLI_path_util, which ensures we do not have last
wildcards-only group in our pattern, when there are more than one group.
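A sketch of the intended behavior under the stated rule, using std::string rather than the actual BLI string API (the helper names here are made up): if the pattern has more than one ';'-separated group and the last group contains only wildcard characters, drop that last group.
```
#include <string>

/* Return true if the group consists only of wildcard characters. */
static bool group_is_all_wildcards(const std::string &group)
{
  if (group.empty()) {
    return true;
  }
  for (char c : group) {
    if (c != '*' && c != '?') {
      return false;
    }
  }
  return true;
}

/* "*.txt;*.csv;*" -> "*.txt;*.csv", while a single "*" is left untouched. */
static std::string glob_validate(std::string pattern)
{
  const size_t last_sep = pattern.rfind(';');
  if (last_sep == std::string::npos) {
    return pattern;  /* Only one group: nothing to strip. */
  }
  const std::string last_group = pattern.substr(last_sep + 1);
  if (group_is_all_wildcards(last_group)) {
    pattern.erase(last_sep);  /* Drop the trailing match-everything group. */
  }
  return pattern;
}
```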
Limit to a restricted set of built-ins, as well as the math module.
Also restrict the set of op-codes, disallowing imports and attribute access.
This allows most math expressions to run
without any performance cost once the initial check is done.
See: D1862 for details.
This means the shader can now be used for procedural texturing. New
settings on the node are Samples, Inside, Local Only and Distance.
Original patch by Lukas with further changes by Brecht.
Differential Revision: https://developer.blender.org/D3479
Need to use the 'use_partial_connect' option in island connect,
so changed signatures of various functions to pass that into and
then down from BM_mesh_intersect (making true for intersect, false
for boolean).
Then fix bm_face_split_edgenet_partial_connect to work when
input edges are not necessarily wire, but at least not in the
face they are being connected in. That caused generalization
of core BM_vert_separate_hflag_wire (which is only used in
this one place in all Blender).
I've limited it to just the RGB<->XYZ stuff for now; correct image handling is the next step.
Reviewers: brecht, sergey
Differential Revision: https://developer.blender.org/D3478
The automatic mode checks all Environment Texture nodes and picks the largest image's resolution.
If there are no Environment Textures, it just uses the old default.
Also, the sampling map now isn't limited to square shapes. The automatic detection uses the exact image size;
the manual UI option now halves the value to get the height.
A default aspect ratio of 2:1 makes sense since this is what most HDRIs use.
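A sketch of the selection logic as described, with illustrative types and defaults (not the actual Cycles sync code):
```
#include <vector>

struct EnvironmentTexture {
  int width;
  int height;
};

/* Pick the sampling-map resolution: use the largest environment texture's
 * exact size, otherwise fall back to a default 2:1 map. The numbers are
 * placeholders for the real defaults. */
static void sampling_map_resolution(const std::vector<EnvironmentTexture> &textures,
                                    int *r_width, int *r_height)
{
  int width = 1024;  /* Stand-in for the old default. */
  int height = 512;
  for (const EnvironmentTexture &tex : textures) {
    if (tex.width > width) {
      width = tex.width;
      height = tex.height;  /* Keep the exact image size, not a square map. */
    }
  }
  *r_width = width;
  *r_height = height;
}
```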
Reviewers: brecht, sergey
Differential Revision: https://developer.blender.org/D3477
This is actually adding an option to add buggy behavior, but... NPR often
expects buggy behaviors, and it's one of the main targets for normal editing.
So I think it's reasonable to add that option (disabled by default of
course).
Note that I am not really happy with the UI, but:
* Not sure where to put it, it's kind of its own self-contained area option.
* Don't want to make it too visible, using this should be the exception!
For grouped undo we should not skip the undo push, rather replace the
previous undo push. This way undo goes back to the state after the last
operation in the group.
Note that due to the RNA getters/setters issue, this one may actually add
some G.main usages to the total... But at least it's not hidden anymore in a
very low-level, dark corner of the BKE pointcache code!
This is supposed to be handled by the calling code! Hence, no need to
call BKE_sequencer_clear_scene_in_allseqs() here, and... no need for
that ugly G.main case. ;)
There is one legit place in the code where memcpy was used as an
optimization trick. It was needed for an older version of GCC, but now
it should be re-evaluated and checked whether that trick still helps.
In other places it's somewhat lazy programming to zero out all
object members. That is absolutely unsafe: the moment a less trivial
class is used as a member in that object, things will break.
Other cases were using memcpy into an object which comes from
an external library. We don't control that object, and we can
not guarantee it will always be safe for such memory tricks
and debugging bugs caused by such low-level access is far from fun.
Ideally we need to use more proper C++, but that needs to be done with
great care, including benchmarks of each change. For now, do the
annoying but simple cast to void*.
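The cast in question looks roughly like this (the type is a made-up stand-in for an external-library struct; shown only to illustrate how the void* cast sidesteps -Wclass-memaccess):
```
#include <cstring>

/* Stand-in for a struct we do not control; the user-provided copy
 * constructor makes it non-trivially-copyable, so GCC's -Wclass-memaccess
 * warns when it is passed to memcpy() directly. */
struct ExternalHandle {
  ExternalHandle() : id(0), value(0.0f) {}
  ExternalHandle(const ExternalHandle &other) : id(other.id), value(other.value) {}
  int id;
  float value;
};

static void raw_copy(ExternalHandle *dst, const ExternalHandle *src)
{
  /* The "annoying but simple" workaround: go through void* so the compiler
   * treats this as a raw byte copy we explicitly asked for. */
  memcpy((void *)dst, (const void *)src, sizeof(*dst));
}
```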
In C++ it is not really safe to memcpy objects, and newer GCC will warn
about this. However, we don't use our vector for unsafe-to-memcpy objects,
so just explicitly silence that warning.
Thanks to bblanimation (Christopher Gearhart) for spotting the issue and
providing the fix!
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D3449
atoi usage in BLI_stringdec could overflow; use strtoll instead and check
the valid range against INT_MIN and INT_MAX.
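The general pattern, as a sketch (illustrative, not the exact BLI_stringdec code; overflow of the long long itself is ignored here for brevity):
```
#include <climits>
#include <cstdlib>

/* Parse a decimal number, clamping to the int range instead of silently
 * overflowing the way atoi() can. */
static int parse_int_clamped(const char *str)
{
  char *end = nullptr;
  const long long value = strtoll(str, &end, 10);
  if (end == str) {
    return 0;  /* No digits found. */
  }
  if (value < INT_MIN) {
    return INT_MIN;
  }
  if (value > INT_MAX) {
    return INT_MAX;
  }
  return (int)value;
}
```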
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D3452
Use a better poll and get the object with 'ED_object_active_context'
(instead of 'CTX_data_active_object').
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D3467
All keyboard events were sending double key events (including modifiers)
when xinput was enabled with gnome (causing much confusion!).
I can't test whether XIM works,
but sending double events isn't useful, so disable it for now.
Notes:
* Really need to address RNA setters case, end up adding way too much
G.main here these days... :/
* Added Main pointer into bAnimContext, helps a lot in anim code ;)
Not sure why exactly this is called a cleanup; the code was much clearer
and more robust against possibly missing return statements, which are MANDATORY.
A missing return statement will:
- Cause two different BVH traversals to be run. This is not happening
currently, but if more BVH layouts are added, it will become a problem.
- Cause assert() statements to fail (this is already happening), since
functions no longer return when they are supposed to.
If there is any measurable reason to keep this change, let me know.
Otherwise just stick to reliable/tested/robust code.
This reverts commit ba65f7093b.
The recent change also used the buildtools instead of the regular compiler; you now have to explicitly state what you want to use:
2017 - the standard msvc compiler
2017pre - the msvc compiler from the preview installation
2017b - the msvc compiler from the buildtools installation
This helps make things clearer and cleaner. The function returning the
filepath of G.main is separate, so that we can easily track its usages, and
hopefully deprecate it at some point. Though that usage of G.main is
likely the least evil one, since you nearly always want the current
blendfile path in those cases anyway.
When run from make.bat the environment is set up correctly and the VCToolsRedistDir environment variable exists. On later invocations of cmake this may no longer be the case, and a warning was emitted about the missing runtime. We can't rely on InstallRequiredSystemLibraries.cmake here, since it uses the compiler version to figure out the correct location and it doesn't know how to deal with clang.
- Expanded build_deps.cmd with 2017 support; it can't locate msvc2017, so it needs to be run from a developer prompt.
- Newer cmake was unhappy with openal's cmakelists.txt.
- collada has warning-as-error enabled and errored out on new msvc2017 warnings.
This is really convenient for development. Either for profiling the
generated shaders or to check if the generated code is correct.
It writes the shaders to the temporary blender session folder.
(ported over from blender2.8)
This will currently only work for the RelWithDebInfo configuration, since ASan
does not support the debug CRT. For source line information in the reports,
you need a copy of llvm-symbolizer in the blender folder or set the
ASAN_SYMBOLIZER_PATH environment variable to point to it. Currently (as of
6.0.0) llvm-symbolizer does not ship with the binary clang/llvm distribution.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D3446