Multithreading causes collisions to be detected in different orders, which
made the clustering step of collision resolution produce slightly
different results on each run. This commit makes the collision
order consistent.
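For illustration, a minimal sketch of the idea in C++ (hypothetical names, not
the actual Blender code): collect the collision pairs reported by all threads,
then sort them by primitive indices so clustering always sees them in the same
order.

  #include <algorithm>
  #include <vector>

  struct CollisionPair {
    int prim_a, prim_b; /* indices of the two colliding primitives */
  };

  /* Sort pairs so clustering sees them in the same order on every run,
   * regardless of which thread reported them first. */
  static void sort_collision_pairs(std::vector<CollisionPair> &pairs)
  {
    std::sort(pairs.begin(), pairs.end(),
              [](const CollisionPair &a, const CollisionPair &b) {
                return (a.prim_a != b.prim_a) ? (a.prim_a < b.prim_a) :
                                                (a.prim_b < b.prim_b);
              });
  }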
This increases stack memory usage somewhat, and ideally we'd support a dynamic
size. But that is quite difficult on the GPU, and hopefully 32 is enough even
for very complex cases.
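Conceptually the fixed-size approach looks like this (an illustrative sketch;
the names and element type are assumptions, only the capacity of 32 comes from
the commit):

  #define STACK_CAPACITY 32

  struct FixedStack {
    int items[STACK_CAPACITY];
    int size;
  };

  /* Drop the item (and report failure) when the stack is full; growing the
   * array dynamically would be nicer, but that is hard to do in GPU code. */
  static bool stack_push(FixedStack *stack, int item)
  {
    if (stack->size >= STACK_CAPACITY) {
      return false;
    }
    stack->items[stack->size++] = item;
    return true;
  }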
This is a physically-based, easy-to-use shader for rendering hair and fur,
with controls for melanin, roughness and randomization.
Based on the paper "A Practical and Controllable Hair and Fur Model for
Production Path Tracing".
Implemented by Leonardo E. Segovia and Lukas Stockner, part of Google
Summer of Code 2018.
This patch adds a new matte node that implements the Cryptomatte specification.
It also includes a custom eye dropper that works outside of a color picker.
Cryptomatte export for the Cycles render engine will be in a separate patch.
Reviewers: brecht
Reviewed By: brecht
Subscribers: brecht
Tags: #compositing
Differential Revision: https://developer.blender.org/D3531
Metadata loading code was assuming all videos in Blender were from
FFMPEG... Added empty place-holders for other types too; we could probably
load some metadata from pictures or AVI files as well!
Features to get the 2nd, 3rd, 4th closest point instead of the closest, and
various distance metrics. No viewport/Eevee support yet.
Patch by Michel Anders, Charlie Jolly and Brecht Van Lommel.
Differential Revision: https://developer.blender.org/D3503
Useful to store a snapshot of the current keymap state
so changes to the default keymap are ignored.
Also useful for testing that keymap export works properly.
This works for Cycles, Eevee, texture nodes and compositing. It helps to
reduce the number of math nodes required in various node setups.
Differential Revision: https://developer.blender.org/D3537
Previously CMake was raising a fatal error, which wasn't too helpful.
There are still some fatal messages about Audaspace and the Game Engine,
but the latter is at its EOL and is removed in Blender 2.8.
The X resource database must be explicitly destroyed. This fixes a 46-byte
leak per window DPI query (which happens a lot on window move/resize,
and even on area resize).
Unfortunately, this does not fully fix the leak, due to this known one:
https://bugs.freedesktop.org/show_bug.cgi?id=94604
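For reference, the leak pattern looks roughly like this (a sketch, not the
exact GHOST code): the database returned by XrmGetStringDatabase has to be
destroyed explicitly once the Xft.dpi value has been read.

  #include <X11/Xlib.h>
  #include <X11/Xresource.h>
  #include <cstdlib>

  /* Query Xft.dpi without leaking the X resource database. */
  static double query_dpi(Display *display)
  {
    double dpi = 96.0;
    XrmInitialize();
    char *resources = XResourceManagerString(display);
    if (resources == nullptr) {
      return dpi;
    }
    XrmDatabase db = XrmGetStringDatabase(resources);
    char *type = nullptr;
    XrmValue value;
    if (XrmGetResource(db, "Xft.dpi", "Xft.Dpi", &type, &value) && value.addr) {
      dpi = atof(value.addr);
    }
    XrmDestroyDatabase(db); /* without this, a few dozen bytes leak per query */
    return dpi;
  }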
The flag was only used in readfile.c, and resulted in a delayed call to
BKE_ocean_add(); this call is now immediately made instead as it's not
very expensive.
Textures in 16 bit integer format are sometimes used for displacement, bump and normal maps and can be exported by tools like Substance Painter. Without this patch, Cycles would promote those textures to single precision floating point, causing them to take up twice as much memory as needed.
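Conceptually (a simplified sketch, not the actual Cycles image code), the
texels can stay as 16-bit integers in memory, two bytes per channel instead of
four, and only be normalized to float when sampled:

  #include <cstddef>
  #include <cstdint>

  /* Keep the texture stored as uint16_t and convert to [0, 1] float only at
   * sample time, halving the memory footprint compared to float storage. */
  static inline float sample_ushort_texel(const uint16_t *pixels, size_t index)
  {
    return pixels[index] * (1.0f / 65535.0f);
  }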
Reviewers: #cycles, brecht, sergey
Reviewed By: #cycles, brecht, sergey
Subscribers: sergey, dingto, #cycles
Tags: #cycles
Differential Revision: https://developer.blender.org/D3523
This deduplicates the calls for tile (un)mapping and allows having a target buffer that is different from the source buffer (needed for baking and animation denoising).
This reverts commit 357b72e0a7, which caused
the issue; we need a better fix for that cosmetic issue from T50862. For
now, displaying keyframes and drivers is the more important thing.
Abort from BLI_assert by default in debug builds
(instead of just printing a warning).
Some developers ignored the warnings, causing errors for others.
Better that debug builds cause a hard error so failing checks aren't ignored.
Disabling is still useful when bisecting or testing outdated code.
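In spirit, the debug-build behavior is roughly this (a sketch, not the real
BLI_assert macro):

  #include <cstdio>
  #include <cstdlib>

  /* Debug builds abort on a failed assert instead of only printing a warning,
   * so broken assumptions can no longer be silently ignored. */
  #ifndef NDEBUG
  #  define ASSERT(cond) \
      do { \
        if (!(cond)) { \
          fprintf(stderr, "Assert failed: %s (%s:%d)\n", #cond, __FILE__, __LINE__); \
          abort(); \
        } \
      } while (0)
  #else
  #  define ASSERT(cond) ((void)0)
  #endif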
find_elem(olddata=NULL) doesn't work reliably for existence checks; it will
return NULL both when the field is found at offset 0 and when it is not
found at all.
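The problem in a nutshell (a simplified, hypothetical signature): with
olddata == NULL, a field found at offset 0 and a field that is not found at
all both produce NULL, so the return value cannot be used as an existence
check.

  #include <cstddef>

  /* Simplified sketch of the pitfall; a separate boolean/offset query is
   * needed for existence checks instead of testing the returned pointer. */
  static void *find_elem_sketch(char *olddata, bool found, size_t offset)
  {
    if (!found) {
      return nullptr;
    }
    return olddata + offset; /* olddata == NULL && offset == 0 -> NULL again */
  }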
The latest clang compiler (at least the one in Xcode 9.4.1) warns about the register keyword and about macro expansions using defined().
Since these warnings come from third-party code, we can't address them directly in Blender. Silencing them via #pragmas will
at least keep the warnings during a build down to the ones that are relevant to Blender code.
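The pattern is the usual diagnostic push/ignore/pop around the offending code;
a self-contained sketch (the warning flag and macro here only stand in for
what a real third-party header would do):

  #if defined(__clang__)
  #  pragma clang diagnostic push
  #  pragma clang diagnostic ignored "-Wexpansion-to-defined"
  #endif

  /* Typical third-party construct: a macro whose expansion produces defined(),
   * which clang warns has undefined behavior. */
  #define HAS_SOME_FEATURE defined(SOME_FEATURE)

  #if HAS_SOME_FEATURE
  /* feature-specific code would go here */
  #endif

  #if defined(__clang__)
  #  pragma clang diagnostic pop
  #endif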
block and layout could be NULL and checking this everywhere
wasn't practical.
Instead of lazy initializing, add UI_popup_menu_end_or_cancel
which cancels empty popup menus.
Silences the following strict flags from external libraries:
- -Wclass-memaccess
- -Wswitch
- -Wtype-limits
- -Wint-in-bool-context
Needed to tweak the macro a bit, since the old logic was wrong:
we cannot use CXX flags for the C compiler, and need a much stricter
separation between what goes where.
Again, we cannot actually get rid of G_MAIN global access here, so in
most cases just 'marked' them as valid, and added assert checks to ensure
we only work with IDs in G_MAIN in those cases.
Validate some cases using G_MAIN instead (I don't think we want to work
on any Main other than G.main when registering/unregistering nodes
etc.).
And when freeing, all IDs not in Main shall now be tagged accordingly, so
we *should* not need to do that stupid search over all ntrees in G.main
to check whether we have to free it ourselves or not!
SculptSession.mode_type wasn't initialized until painting,
making it unreliable for checks in other parts of the code.
Also remove unnecessary initialization,
matching sculpt mode more closely.
There were two issues here, introduced by rB66aa4af836:
* Forgot to change the length of some filter_glob var deep in filebrowser code.
* Truncating filter_glob in general can be dangerous, generating
unexpected patterns.
The latter was the root of the issue here: truncating the string to 63 chars
left the last group as a 'match everything' `*` pattern.
To fix that to some extent, added a new BLI_path_extension_glob_validate
helper to BLI_path_util, which ensures the last group of the pattern is not
wildcards-only when there is more than one group.
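A rough sketch of the check (not the actual BLI_path_util implementation): if
the pattern has more than one ';'-separated group and the last group contains
only wildcard characters, drop that last group.

  #include <cstddef>
  #include <cstring>

  /* True if the group consists only of wildcard characters. */
  static bool group_is_wildcards_only(const char *group, size_t len)
  {
    for (size_t i = 0; i < len; i++) {
      if (group[i] != '*' && group[i] != '?') {
        return false;
      }
    }
    return len > 0;
  }

  /* Trim a pattern like "*.blend;*.jpg;*" in place, so a trailing
   * wildcards-only group cannot turn the whole filter into "match everything". */
  static void glob_validate_sketch(char *pattern)
  {
    char *last_sep = strrchr(pattern, ';');
    if (last_sep == nullptr) {
      return; /* only one group: leave it alone */
    }
    if (group_is_wildcards_only(last_sep + 1, strlen(last_sep + 1))) {
      *last_sep = '\0'; /* drop the trailing "match everything" group */
    }
  }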
Limit to a restricted set of built-ins, as well as the math module.
Also restrict op-codes, disallowing imports and attribute access.
This allows most math expressions to run
without any performance cost once the initial check is done.
See: D1862 for details.
This means the shader can now be used for procedural texturing. New
settings on the node are Samples, Inside, Local Only and Distance.
Original patch by Lukas with further changes by Brecht.
Differential Revision: https://developer.blender.org/D3479
Need to use the 'use_partial_connect' option in island connect,
so changed signatures of various functions to pass that into and
then down from BM_mesh_intersect (making it true for intersect, false
for boolean).
Then fixed bm_face_split_edgenet_partial_connect to work when
input edges are not necessarily wire, but at least not in the
face they are being connected in. This required generalizing
core BM_vert_separate_hflag_wire (which is only used in
this one place in all of Blender).
I've limited it to just the RGB<->XYZ stuff for now, correct image handling is the next step.
Reviewers: brecht, sergey
Differential Revision: https://developer.blender.org/D3478
The automatic mode checks all Environment Texture nodes and picks the largest image's resolution.
If there are no Environment Textures, it just uses the old default.
Also, the sampling map now isn't limited to square shapes. The automatic detection uses the exact image size,
the manual UI option now halves the value to get the height.
A default aspect ratio of 2:1 makes sense since this is what most HDRIs use.
Reviewers: brecht, sergey
Differential Revision: https://developer.blender.org/D3477
This is actually adding an option to enable buggy behavior, but... NPR often
expects buggy behaviors, and it's one of the main targets for normal editing.
So I think it's reasonable to add that option (disabled by default of
course).
Note that I am not really happy with the UI, but:
* Not sure where to put it; it's kind of its own self-contained area option.
* Don't want to make it too visible, since using this should be the exception!
For grouped undo we should not skip the undo push, rather replace the
previous undo push. This way undo goes back to the state after the last
operation in the group.
Note that due to the RNA getters/setters issue, that one may actually add some
G.main usages to the total... But at least it's not hidden anymore in a
very low-level, dark corner of the BKE pointcache code!
This is supposed to be handled by calling code! Hence, no need to
call BKE_sequencer_clear_scene_in_allseqs() here, and... no need for
that ugly G.main case. ;)
There is one legit place in the code where memcpy was used as an
optimization trick. It was needed for older versions of GCC, but now
it should be re-evaluated and checked whether that trick still helps.
In other places it's somewhat lazy programming to zero out all
object members. That is absolutely unsafe: the moment a less trivial
class is used as a member in such an object, things will break.
Other cases were using memcpy into an object which comes from
an external library. We don't control that object, and we cannot
guarantee it will always be safe for such memory tricks,
and debugging bugs caused by such low-level access is far from fun.
Ideally we'd use more proper C++, but that needs to be done with
great care, including benchmarks of each change. For now, do the
annoying but simple cast to void*.
In C++ it is not really safe to memcpy objects, and newer GCC will warn
about this. However, we don't use our vector for unsafe-to-memcpy objects,
so just explicitly silence that warning.
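The warning and the silencing cast look roughly like this (a sketch, not the
actual vector code):

  #include <cstddef>
  #include <cstring>

  struct Element {
    Element() : value(0) {} /* non-trivial default constructor */
    int value;
  };

  static void copy_elements(Element *dst, const Element *src, size_t count)
  {
    /* GCC 8 flags memcpy on a non-trivial class with -Wclass-memaccess;
     * casting to void* states explicitly that we know this type is safe
     * to copy bytewise. */
    memcpy((void *)dst, (const void *)src, sizeof(Element) * count);
  }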
Thanks to bblanimation (Christopher Gearhart) for spotting the issue and
providing the fix!
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D3449
atoi usage in BLI_stringdec could overflow; use strtoll instead and check
the valid range with INT_MIN and INT_MAX.
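The overflow-safe parse follows this pattern (a sketch of the approach, not
the exact BLI_stringdec change):

  #include <climits>
  #include <cstdlib>

  /* Parse a frame number from a digit string, clamping to the int range
   * instead of letting atoi() overflow. */
  static int parse_frame_number(const char *digits)
  {
    const long long value = strtoll(digits, nullptr, 10);
    if (value < INT_MIN) {
      return INT_MIN;
    }
    if (value > INT_MAX) {
      return INT_MAX;
    }
    return (int)value;
  }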
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D3452
Use a better poll and get the object with 'ED_object_active_context' (instead of
'CTX_data_active_object').
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D3467
All keyboard events were sending double key events (including modifiers)
when xinput was enabled with gnome (causing much confusion!).
I can't test if XIM works, but sending double events isn't useful,
so disabling for now.
Notes:
* Really need to address the RNA setters case; we end up adding way too much
G.main here these days... :/
* Added Main pointer into bAnimContext, helps a lot in anim code ;)
Not sure why exactly this was called a cleanup; the code was much clearer
and more robust against possibly missing return statements, which are MANDATORY.
A missing return statement will:
- Cause two different BVH traversals to be run.
  This is not happening currently, but if more BVH layouts are added, it will
  become a problem.
- It is already causing assert() statements to fail, since functions
  no longer return when they are supposed to.
If there is any measurable reason to keep this change, let me know.
Otherwise just stick to reliable/tested/robust code.
This reverts commit ba65f7093b.
The recent change also used the build tools instead of the regular compiler; you now have to explicitly state which one you want to use:
2017 - the standard msvc compiler
2017pre - the msvc compiler from the preview installation
2017b - the msvc compiler from the buildtools installation
This helps make things clearer and cleaner. The function returning the filepath of
G.main is separate, so that we can easily track its usages, and
hopefully deprecate it at some point. Though that usage of G.main is
likely the least evil one, since you nearly always want the current blendfile path
in those cases anyway.
When run from make.bat the environment is set up correctly and the VCToolsRedistDir environment variable exists; on later invocations of cmake this may no longer be the case, and a warning was emitted about the missing runtime. We can't rely on InstallRequiredSystemLibraries.cmake here, since it uses the compiler version to figure out the correct location and it doesn't know how to deal with clang.
-Expanded build_deps.cmd with 2017 support; it can't locate msvc2017, so it needs to be run from a developer prompt.
-Newer cmake was unhappy with openal's cmakelists.txt.
-Collada has warnings-as-errors on and errored out on new msvc2017 warnings.
This is really convenient for development. Either for profiling the
generated shaders or to check if the generated code is correct.
It writes the shaders to the temporary blender session folder.
(ported over from blender2.8)
This will currently only work for the RelWithDebInfo configuration, since asan
does not support the debug crt. For source line information in the reports,
you need a copy of llvm-symbolizer in the blender folder or to set the
ASAN_SYMBOLIZER_PATH environment variable to point to it. Currently (as of
6.0.0) llvm-symbolizer does not ship with the binary clang/llvm distribution.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D3446
This commit contains the minimum to make clang build/work with Blender; asan and ninja build support is forthcoming.
Things to note:
1) Builds and runs, and is able to pass all tests (except for the freestyle_stroke_material.blend test which was broken at that time for all platforms by the looks of it)
2) It's slightly faster than msvc when using cycles (time in seconds, on an i7-3370):
victor_cpu               msvc: 3099.51   clang: 2796.43
pavillon_barcelona_cpu   msvc: 1872.05   clang: 1827.72
koro_cpu                 msvc: 1097.58   clang: 1006.51
fishy_cat_cpu            msvc:  815.37   clang:  722.20
classroom_cpu            msvc: 1705.39   clang: 1575.43
bmw27_cpu                msvc:  552.38   clang:  561.53
barbershop_interior_cpu  msvc: 2134.93   clang: 1922.33
3) clang on windows uses a drop-in replacement for the Microsoft cl.exe (it takes some of the Microsoft parameters, but not all, and some of the clang parameters, but not all) and uses MS headers + libraries + linker, so you still need Visual Studio installed and it will use our existing vc14 svn libs.
4) X64 only currently, X86 builds but crashes on startup.
5) Tested with llvm/clang 6.0.0
6) Requires visual studio integration, available at https://github.com/LazyDodo/llvm-vs2017-integration
7) The Microsoft compiler spawns a few copies of cl in parallel to get faster build times, clang doesn't, so the build time is 3-4x slower than with msvc.
8) No openmp support yet. Have not looked at this much, the binary distribution of clang doesn't seem to include it on windows.
9) No ASAN support yet, some of the sanitizers can be made to work, but it was decided to leave support out of this commit.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D3304
This patch adds support for IES files, a file format that is commonly used to store the directional intensity distribution of light sources.
The new IES node is supposed to be plugged into the Strength input of the Emission node of the lamp.
Since people generating IES files do not really seem to care about the standard, the parser is flexible enough to accept all test files I have tried.
Some common weirdnesses are distributing values over multiple lines that should go into one line, using commas instead of spaces as delimiters and adding various useless stuff at the end of the file.
The user interface of the node is similar to the script node, the user can either select an internal Text or load a file.
Internally, IES files are handled similar to Image textures: They are stored in slots by the LightManager and each unique IES is assigned to one slot.
The local coordinate system of the lamp is used, so that the direction of the light can be changed. For UI reasons, it's usually best to add an area light,
rotate it and then change its type, since especially the point light does not immediately show its local coordinate system in the viewport.
Reviewers: #cycles, dingto, sergey, brecht
Reviewed By: #cycles, dingto, brecht
Subscribers: OgDEV, crazyrobinhood, secundar, cardboard, pisuke, intrah, swerner, micah_denn, harvester, gottfried, disnel, campbellbarton, duarteframos, Lapineige, brecht, juicyfruit, dingto, marek, rickyblender, bliblubli, lockal, sergey
Differential Revision: https://developer.blender.org/D1543
ninja is an alternative to msbuild designed for fast rebuilds. However, there is no IDE support; it builds only from the command line.
Comparison between msbuild and ninja for a full build, build time in seconds.
Full Clean Build
msbuild 867.5
Ninja 801.2
Difference -66.3 (-7.6%)
Minor Change
msbuild 43.0
Ninja 14.9
Difference -28.1 (-64.4%)
No Changes
msbuild 23.0
Ninja 6.1
Difference -16.9 (-73.5%)
make.bat was starting to become hard to maintain; this refactors it into separate batch files for each stage of the process.
-Improved detection of msvc2013/2015
-Improved failure handling.
-Added check for working msbuild and C++ compiler
-Added verbose switch to ease troubleshooting.
-Added check if svn/cmake/git are in the path before using them
-Display the build configuration before asking to download the libraries
-Offer an option to recover an interrupted checkout of the libraries.
-Automatically check out sub-modules in case they are missing.
This commit adds number formatting (thousands separator) to the baking panel. It also adds a new function to format memory sizes (KB/GB/etc) and applies it to the baking panel and scene stats. The new function is unit tested.
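As an illustration of the kind of helper added (a hedged sketch; the real
function name and its exact rounding rules may differ):

  #include <cstddef>
  #include <cstdio>

  /* Format a byte count as a human-readable size, e.g. 2048 -> "2.0 KB". */
  static void format_memory_size(char *buf, size_t buf_len, double bytes)
  {
    const char *units[] = {"B", "KB", "MB", "GB", "TB"};
    int unit = 0;
    while (bytes >= 1024.0 && unit < 4) {
      bytes /= 1024.0;
      unit++;
    }
    snprintf(buf, buf_len, "%.1f %s", bytes, units[unit]);
  }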
Reviewers: Severin
Tags: #user_interface
Differential Revision: https://developer.blender.org/D1248
The registry hack we were using wasn't very reliable; the recommended way to locate Visual Studio is using vswhere (15.2 and up). Using it also allows switching between the regular and pre-release versions.
The GPU kernel needs to use atomics for accumulation since all offsets are processed in
parallel, but on CPUs that's not the case, so we can disable them there for a considerable speedup.
The Math node currently has the normal atan() function, but for
actual angles this is fairly useless without additional nodes to handle the signs.
Since the node has two inputs anyway, it only makes sense to add an arctan2 option.
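The difference in a nutshell: plain atan(y / x) loses the quadrant, while
atan2(y, x) keeps it (illustrative snippet, not the node implementation):

  #include <cmath>

  /* atan(y / x) cannot distinguish (x, y) from (-x, -y); atan2(y, x) can,
   * returning the full-range angle in (-pi, pi]. */
  static float angle_of_vector(float x, float y)
  {
    return atan2f(y, x);
  }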
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D3430
UI editing of multiple selected items missed the case of PROP_POINTER
properties, so such an edit only affected one item.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D3373
Some conversion helper functions were (most likely by accident) contained
inside an ifdef for SSE2 support, so on e.g. ARM they would be undefined
and therefore cause compilation to fail.
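Schematically the fix is to keep only the genuinely SSE2-specific code inside
the guard (a simplified sketch, not the affected Cycles helpers):

  /* Helpers like this must live outside the guard so non-SSE2 platforms
   * (e.g. ARM) still get a definition. */
  static inline float clamp_01(float x)
  {
    return (x < 0.0f) ? 0.0f : (x > 1.0f) ? 1.0f : x;
  }

  #ifdef __SSE2__
  #  include <emmintrin.h>

  static inline __m128 clamp_01_sse(__m128 x)
  {
    return _mm_min_ps(_mm_max_ps(x, _mm_setzero_ps()), _mm_set1_ps(1.0f));
  }
  #endif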
Regression in recent undo system changes; this caused T55048.
When each mode had its own undo stack it was important
to initialize it when entering edit-mode.
Freeing the sequencer would always handle user counts, which is now
forbidden when called from main ID freeing code.
Annoying in 2.7x, much more critical issue in 2.8!
Also, moved RNA sequencer API functions to proper rna_scene_api.c file.
Read code always set relative paths of indirectly linked libs relative to
the *current* .blend file, not to the library using them.
But BKE_library_filepath_set was then setting them relative to their
parent library, breaking the checking code (and even saved files :((( ).
Some years-old deprecated stuff has now been removed.
The correct solution is probably to use valid defines etc. in our own code, but
this is more of an FFMPEG maintainer task (since it may also change how old
an FFMPEG we support...).
Having both dupli_frames_on and dupli_frames_off set to zero frames
caused a division by zero. Doing this doesn't seem useful; dupliframes
can be disabled in other ways.
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D3132
The value of epsilon was never used to create this bvhtree, because whenever this constraint is activated, a bvhtree with epsilon parameter 0.0 is created and cached.
Seems to be related only to the linked nature of particles.
This is caused by a conflicting optimization done for the viewport, which
does not recalculate particles if they do not depend on time
(which is crucial for big layout scene grass fields), and the particle render
setting switch, which relied on the fact that the render pipeline will do the
particle update via the time dependency.
Now we extend an old workaround for invisible objects, which also
deals with particles in the same way the old dependency graph was dealing
with this: tag object data for update if there is a particle system.
There shouldn't be any speed difference between old and new depsgraph,
since tagging was already needed and was happening.
In Blender 2.8 such things should be easier to deal with since the whole
depsgraph is to be evaluated for render engine anyway.
`MREMAP_RAYCAST_APPROXIMATE_BVHEPSILON(ray_radius)` greatly increased the radius, making for example 0.1 become 1.5.
Now the result is much more predictable.
This allows one to set a vertex group to which "trouble" data is
written, so one can see where issues occurred, for instance in the
event of a cloth explosion.
Collision object coordinates were not being updated when moving back in
time, causing simulation recalculations to sometimes have the collision
object only at the final position on the first simulated frame.
This also changes collision objects to use static bvh (like the cloth
bvh), which should cause a slight performance improvement.
Cloth was being re-initialized whenever pinning settings were changed.
This was necessary in the past, as pins were only computed during
initialization, but ever since animated vertex weight support was added,
pins are recalculated on every step, so this re-initialization has
become unnecessary.
Implement a dedicated collision detection function (was previously
relying on Bullet's generic `plNearestPoints`).
This function computes all the collision data to be used for response:
coordinates, distance, direction vector.
This new function has three advantages:
* Remove a dependency from cloth simulation (Bullet).
* Give more pleasing collision results (this function is tailored
specifically for our collision response method).
* Much faster computation (not benchmarked extensively, but observed
overall simulation time was cut roughly in half with "collision-heavy"
simulations).
This adds single sided collisions without necessarily using normal
override, and adds a separate normal override option. (The code could do
with some cosmetic cleanup.)
Also removed a printf I had forgotten.
This makes all multi-object collisions be solved simultaneously, so as not
to give any of them priority over the others (as in the sequential
collision solve that was previously implemented).
Note that self collisions have not been integrated, and are instead
solved independently after the object collisions. This is so that they
don't have to fight against the object collisions, as those are
generally more robust, and might cause issues with the self collisions.
Instead of doing a full implicit pre-collision solve, do a simple
inertial solve (advance using velocities from the previous step). This saves
time by doing one less solve per step, and also improves resting
stability by eliminating the influence of external forces disturbing the
mesh in contacting areas.
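The inertial predictor is essentially just this (an illustrative sketch, with
made-up names):

  /* Advance positions with the previous step's velocities instead of running
   * a full implicit solve before collision detection. */
  static void inertial_predict(float (*x)[3], const float (*v)[3], int count, float dt)
  {
    for (int i = 0; i < count; i++) {
      x[i][0] += v[i][0] * dt;
      x[i][1] += v[i][1] * dt;
      x[i][2] += v[i][2] * dt;
    }
  }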
This improves general cloth performance by about 30-40% in relation to
the new cloth code, and improves performance by about 3 times in
relation to the original cloth code. This also makes collisions more
reliable by removing the need for positional interpolation, and thus
decreases the number of subframes required for a successful solve.
No need to use a moving BVH for intersection checking, as in the end
collisions are only evaluated at the next state anyway, so we can use a
static BVH to reduce the number of intersections to check.
A moving BVH will only make sense if CCD is implemented.
It seems I was a bit too optimistic in thinking that distance based
repulses alone could handle contacts nicely. Turns out impulse scaled repulses
really have enough of an influence in making the collision response more
effective that they are worth keeping for now, even if at the cost of a
very slight stability reduction and a bit of an increase in
collision elasticity.
I hope to come up with a better solution, and eliminate this, after
cluster based impulse pruning is implemented.
Self collision pointer logic has been simplified and allocations
reduced, as it only collides with one object (self).
Also, allocation size for all collisions has been reduced to 1/4,
because it was allocating extra space for deprecated stuff.
Self collisions are now impulse based, and utilize the same solver as
the object collisions. Having separate quality controls would cause
issues when self and object collisions occur simultaneously.
The repulse when colliding points are approaching was being scaled
incorrectly by the impulse, causing some undesired elasticity in the
collision. I imagine this was a workaround to avoid penetrations,
because the collision response was being calculated with respect to
the incorrect state, but this is no longer necessary now that that has
been fixed. (I had missed this case in my previous elasticity commit.)
Other than that, I have bary-interpolated the static repulses to avoid a
nasty instability issue in some corner cases.
Collision distance was computed with respect to the positions before the
pre-collision solve, but should rather be computed with respect to the
newly solved positions.
This fixes the issue of the collisions being semi-elastic (bouncy),
making them pretty much perfectly inelastic. This also improves the
effectiveness of the collision response, preventing penetrations even
with far fewer collision steps.
This cleans up the cloth caching logic a bit.
Now, a simulation step only occurs if the current time is exactly one
frame after the last simulated step, thus preventing time gaps and
runtime data corruption. Also, now the cloth is only re-initialized
if the cache is actually outdated, instead of always being reset when
on the first frame.
This solves a bug where the cache was freed when moving the time to
a point within the cached period.
These changes also fix the issue of non-cached runtime simulation state
properties (e.g. plasticity) being reset when going to the first frame
or retaining a future state when simulating over an existing cache with
time gaps.
This uses the pre-collision solve result only to find the collisions and
calculate the response impulses, but rolls back to before the
pre-collision solve when it is time to actually apply the response.
This prevents the cloth from undergoing a double solve per time-step,
which essentially made colliding clothes move much faster than
non-colliding clothes.
Also removed the "vel_damping" option from the UI, because it sucks,
and nobody shall ever use it, or speak of it, again. (Though I left its
implementation and RNA definition, for historic reasons perhaps?)
A bending spring runs along each structural and shear spring. Taking
advantage of that, this integrates the required data for both into the
same spring struct instance. This greatly simplifies the bending spring
generation code, and also reduces the memory usage for spring storage
by about 40-50%.
This change also fixes minor memory management issues.
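Purely as an illustration of the shared layout (hypothetical fields, not the
actual ClothSpring struct):

  /* One record covers both the structural/shear spring between two vertices
   * and the bending spring that runs along it. */
  struct SpringSketch {
    int ij, kl;         /* endpoints of the structural/shear spring */
    int mn;             /* opposing vertex used by the bending spring */
    float restlen;      /* structural/shear rest length */
    float restlen_bend; /* bending rest length */
  };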
The issue here is that the rest length factor isn't cached, because the
cache only allows storing a single type of data per cache (I'll try
to fix this later), so if you interrupt the simulation midway through and
go to the start frame, the plastic deformations will be reset.
This implements bending resistant forces between adjacent polygons in
cloth simulation. Note that derivatives are not yet included in the
computations (and might not even be needed, as the simulation is already
quite stable). Angular damping is not yet implemented either.
This implements the Choi and Ko compression model, with an added damping
component, instead of using the same model as for tension, which
improves stability of the internal forces.
Spring forces and jacobians (f, dfdx and dfdv) were unnecessarily being
stored in the spring struct. They are only used locally at computation time,
and don't have to remain in memory.
editrestlen was unused.
message("Unable to detect the Visual Studio redist directory, copying of the runtime dlls will not work, try running from the visual studio developer prompt.")
setBUILD_CMAKE_ARGS=%BUILD_CMAKE_ARGS% -G "Visual Studio %BUILD_VS_VER%%BUILD_VS_YEAR%%WINDOWS_ARCH%"%TESTS_CMAKE_ARGS%%CLANG_CMAKE_ARGS%%ASAN_CMAKE_ARGS%