Now test against being smaller than the minimum mass of 0.001 and set it to 0.001,
instead of testing against 0 (which is useless with floats).
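For illustration, a minimal sketch of the new check (names are made up, not the actual FM code):

    #define MIN_MASS 0.001f

    /* clamp instead of comparing against 0, which is meaningless for floats */
    if (mass < MIN_MASS) {
        mass = MIN_MASS;
    }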
Adapted the displayed UI precision for breaking threshold, cluster breaking threshold, breaking distance and cluster breaking distance.
fixes for:
- speed transfer (shards could stop their motion)
- shards could be stuck in the air
- ghost dynamic triggers work now too (firing a contact callback)
Parts of the constraint rules can influence compound simulation behavior too, especially
clustering. You can simulate compounds either with or without triggers. The first method might be slower than with constraints, but there is no bending. With triggers it's faster, but objects always fall apart entirely (clusters can control this a bit).
After a local upgrade to Linux Mint 19, basically nothing worked anymore... and in order to use this with the blender docker build system, it needed the newer cmake-based library building system.
It caused crashes at baking and rendering, as well as on saving and loading.
Needs to be re-implemented differently, without customdata; rather let Cycles read the rigidbody
data directly via the FM.
Previous fix for T53430 caused T54200.
The edge cases for soft & hard cuts weren't working,
where the strip used start/end-still & the frame was placed exactly on
the start/end of the sequence content.
T54200 fixed the end-still case but broke hard-cuts for all other cases.
This fixes the case for soft/hard cuts with/without start/end-still.
Now repeating the operator will use the previously chosen offset, either with
the modal operator or typed in. The modal operator will still start at zero.
The check to see if `use_advanced_hair` was enabled was actually in two places
(render panel `draw` function and physics panel `poll` function). As these
properties are only in one place now, the check in `draw` isn't needed anymore.
Related: T53513, a6c69ca57f
We shouldn't mix image pool acquisition with and without a user provided;
the fact that internally image.c uses the last frame from the Image datablock
confuses the logic.
Ensure refresh of the "parent" FM object on every refresh of connected objects; also ensure evaluation order in rigidbody.c / Bullet (parent always after children).
This can be very slow if it contains a big texture, and it's not
necessarily set up in a useful way anyway, and materials can be used
in multiple scenes.
Was happening when viewport visibility on the particle system is disabled.
This became an issue after c45afcf, but the actual issue goes a bit deeper
and the following aspects were involved:
- Relations builder for particle system was ignoring the particle system if
its visibility is not enabled for the viewport. This is something that
shouldn't have been done -- depsgraph relations are supposed to be the
same no matter whether it's viewport or render.
- Relation builder was only dealing with duplication set to object, but
was ignoring group duplication.
This is NOT a regression since 2.79, but a regression since 2.79a-rc1.
A single diagonal axis was used for sorting coordinates,
the algorithm relied on users not having axis-aligned vertices.
Use BLI_kdtree to remove doubles instead.
Overall speed varies, it's more predictable than the previous method.
Some typical tests gave speedup of ~1.4x - 1.7x.
FFMPEG uses an int for the numerator, while Blender uses a short. So in
cases where people gave weird exotic framerate values and we cannot reduce
the numerator enough, we'd get totally weird values (even negative frame
rates sometimes!)
Now we add checks for short overflow and approximate as well as possible
in that case (the error should not matter unless you have shots of at least
several hundred hours ;) ).
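A hedged sketch of the overflow handling (hypothetical names, not the actual code):

    #include <limits.h>

    /* FFMPEG stores the rational as int, Blender as short: halve both parts
     * until they fit, approximating the original framerate */
    static void framerate_to_short(int num, int den, short *r_num, short *r_den)
    {
        while (num > SHRT_MAX || den > SHRT_MAX) {
            num = (num + 1) / 2;
            den = (den + 1) / 2;
        }
        *r_num = (short)num;
        *r_den = (short)(den > 0 ? den : 1);
    }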
This reverts commit dc2617130b, which disabled
writing of previews for undo. While this uses some memory, re-rendering all
previews is very expensive, especially if for example you have lots of materials
using high-res image textures.
The idea is to avoid any threading overhead when we start pushing tasks in a
loop. Similarly to how we do it in the new dependency graph. Gives a couple of
percent of speedup here, but also improves scalability.
We can not store pointers to elements of a collection property in the
case we modify that collection. This is like storing pointers to
elements of an array before calling realloc().
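The analogy in code form (plain C, not the actual RNA code):

    #include <stdlib.h>

    int main(void)
    {
        int *array = malloc(10 * sizeof(int));
        int *elem = &array[3];                       /* pointer to an element */
        array = realloc(array, 1000 * sizeof(int));  /* the buffer may move */
        /* 'elem' may now dangle; always re-derive it from 'array' */
        elem = &array[3];
        free(array);
        return 0;
    }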
The issue was caused by the light sample being evaluated to NaN at some point.
The root cause of this is still to be fixed, but it is very hard to trace down,
especially via ssh (the issue only happens on AVX2 release builds). Will give it
a closer look when back at my AVX2 machine.
For until then this is a good check to have anyway, it corresponds to what's
happening in regular radiance sum.
Our own implementation was behaving differently compared to OSL and GPU:
on the border pixels OSL and CUDA were doing interpolation with
black, but we were clamping the coordinate.
This partially fixes issue reported in T53452.
Similar change should also be done for 3D interpolation perhaps, but this
is to be investigated separately.
This would lead to sock.default_value pointing to the wrong data type,
possibly causing crashes. Unfortunately, this bug will still exist for
older Blender versions that try to load newer files, which makes
changing the type of a node socket problematic.
Drawing hair weights read before the hair array start.
This code could be improved since it currently copy-pastes
from do_particle_interpolation, but this would need larger changes.
For now just correct existing logic.
Solves these security issues from T52924:
CVE-2017-12102
CVE-2017-12103
CVE-2017-12104
While the specific overflow issue may be fixed, loading the repro .blend
files may still crash because they are incomplete and corrupt. The way
they crash may be impossible to exploit, but this is difficult to prove.
Differential Revision: https://developer.blender.org/D3002
Solves these security issues from T52924:
CVE-2017-12081
CVE-2017-12082
CVE-2017-12086
CVE-2017-12099
CVE-2017-12100
CVE-2017-12101
CVE-2017-12105
While the specific overflow issue may be fixed, loading the repro .blend
files may still crash because they are incomplete and corrupt. The way
they crash may be impossible to exploit, but this is difficult to prove.
Differential Revision: https://developer.blender.org/D3002
CUDA 9.0.176 apparently caused some slow down on high-end Pascal cards that can be mitigated by increasing the number of registers. See https://developer.blender.org/F1142667 for a detailed comparison.
We tried to do as much as possible in a single threaded callback, which
led to using some nasty tricks like fake atomic-based spinlocks to
perform some operations (like float addition, which has no atomic
intrinsics).
While OK with a 'standard' low number of working threads (8-16), because
collisions were rather rare and the implied memory barrier was not *that* much
overhead, this performed poorly on more powerful systems reaching a
hundred threads and beyond (like workstations or render farm hardware).
There, both memory barrier overhead and more frequent collisions would
have a significant impact on performance.
This was addressed by splitting the process further; we now have three
loops, one each over polys, loops and vertices, and we added intermediate
storage for weighted loop normals. This allows us to completely avoid any
atomic operation in the body of the threaded loops, which should fix the
scalability issues. This costs us slightly higher temp memory usage (something
like 50MB per million polygons on average), but that looks like an acceptable
tradeoff.
Furthermore, tests showed that we could gain an additional ~7% of speed
in computing normals of heavy meshes by also parallelizing the last two
loops (might be 1 or 2% on overall mesh update at best...).
Note that further tweaking of this code should be possible once Sergey
adds the 'minimum batch size' option to the threaded foreach API, since very
light loops like the one over loops (a mere v3 addition) require much bigger
batches than heavier code (like the one over polys) to keep optimal
performance.
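A rough sketch of why the split removes the need for atomics (all names hypothetical, condensed to two of the passes):

    /* Pass over polys: every loop belongs to exactly one poly, so writing a
     * weighted face normal into per-loop storage never collides. */
    for (int p = 0; p < numpolys; p++) {
        for (int l = polys[p].loopstart; l < polys[p].loopstart + polys[p].totloop; l++) {
            madd_v3_v3fl(loop_nors_tmp[l], poly_nors[p], corner_weight(l));
        }
    }

    /* Pass over vertices: each vertex gathers from its own loops only, so the
     * accumulation is again free of write collisions. */
    for (int v = 0; v < numverts; v++) {
        zero_v3(vert_nors[v]);
        for (int i = 0; i < vert_totloop[v]; i++) {
            add_v3_v3(vert_nors[v], loop_nors_tmp[vert_loops[v][i]]);
        }
        normalize_v3(vert_nors[v]);
    }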
This is a bit annoying to have per-DM locking, but it's way better (as in, up to
4 times better) for playback speed when having lots of subsurf objects.
This is a small cleanup of something which I think is just a typo anyway.
With all the recent talks of harassment and groping, I think we better avoid
that within our source code! :)
Reviewers: sergey
Reviewed By: sergey
Tags: #motion_tracking
Differential Revision: https://developer.blender.org/D2979
Technically this was introduced in 01b547f993 when
exposing size and randomness for particles.
This "fixes" makes sure particle size and size randomness is always in the
Render panel when it affects the particle system (i.e., always unless using
advanced hair or hair that is not rendering groups/objects).
The issue was caused by the SpinLock implementation in the old pthreads we are using on
Windows. Using a newer one (2.10-rc) demonstrates the same exact behavior. But likely
using our own atomics and memory barrier based implementation solves the issue.
A bit annoying that we need to change such a core part of Blender just to make
a specific CPU happy, but it's better to have artists happy on all computers.
There are no expected downsides of this change, but it is in the so-called "works for
me" category. Let's see how it all goes.
There were some changes to namespaces, which caused ambiguities.
Replace using namespace with the explicit symbols we need. It is a good idea to NOT
pull in the whole namespace anyway!
Now we replace O(N^2) computational complexity with an O(N) extra memory penalty.
Memory is much cheaper than CPU time. Keep in mind, the memory penalty is
something like 4 megabytes per 1M vertices.
The issue here is that we can not read scale from the socket when determining
the dependent area of interest. This area will depend on the current pixel. Now fall
back to a more stupid but reliable thing: if the scale size input is connected to some
nodes, we use the whole frame as the area of interest.
The issue was caused by a threading conflict around looptris: it was possible
that the DM would return a non-NULL but non-initialized array of looptris.
Thanks Campbell for second pair of eyes!
Use a more watertight and robust intersection test.
It now uses ray-to-triangle intersection, but it's all fine because the segment was
covering the whole bounding box anyway.
Mainly when object origin is not at the geometry bounding box center.
Seems to be straightforward to fix, hopefully it doesn't break some obscure case
where this was a desired behavior.
This is fully unpredictable for artists when one damaged object makes the whole
scene render incorrectly. This involves two main changes:
- It is not enough to check triangle bounds to be valid when building the BVH.
This is because a triangle might have some finite vertices and some non-finite ones.
- We shouldn't add a non-finite triangle area to the overall area for MIS.
Best guess is that cuInit() somehow interferes with the AMD graphics driver
on Windows, and switching the initialization order to do OpenCL first seems
to solve the issue.
This reverts commit d749320e3b.
It's possible the container struct is larger,
we could do sizeof checks that fall back to memmove,
but rather avoid complicating things.
Recent addition of 'reinsert' didn't match the logic of the ghash API.
Rename to BLI_heap_node_value_update,
also add BLI_heap_insert_or_update since it's a common operation.
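A hedged usage sketch of the renamed/added API (check BLI_heap.h for the exact signatures):

    Heap *heap = BLI_heap_new();
    HeapNode *node = BLI_heap_insert(heap, 1.0f, item);

    /* when the priority of an already-inserted item changes */
    BLI_heap_node_value_update(heap, node, 0.5f);

    /* when the caller doesn't know whether the item was inserted yet */
    BLI_heap_insert_or_update(heap, &node, 0.25f, item);

    while (!BLI_heap_is_empty(heap)) {
        void *best = BLI_heap_pop_min(heap);
        process(best);  /* hypothetical consumer */
    }
    BLI_heap_free(heap, NULL);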
Was using an edge hash for triangle -> edge lookups,
updating triangle indices for each edge-rotation.
Replace this with half-edge which can rotate edges much more simply,
writing triangles back once the solution has been calculated.
Gives ~33% speedup in own tests.
This causes render differences in some scenes, for example fishy_cat
and pabellon scenes render brighter in a few spots. This is an old
bug, not due to recent RR changes.
SVM nodes need to read all data to get the right offset for the following node.
This is quite weak, a more generic solution would be good in the future.
The loc one (shift-alt-G) was the same as the 'remove selected from active group'
action... Clear delta transform is not a common operation, so we can
live without a default shortcut for it.
Note that using the same key (G) in the same space for two completely different
kinds of operations is probably a rather bad thing, a nice topic for future
keymap work. ;)
Probably nice to have in 2.79a.
by the transform constraint lines
Ported over e7395c75d5 from the
greasepencil-object branch. I should've fixed this ages ago, but
couldn't figure out why at the time.
When the mesh changed topology but kept the vertex count the same, it would
result in a corrupt mesh. By checking the face & loop counts too, this has
become less likely.
I've checked IPolyMeshSchema::isConstant(), but it returns true even when
we see that the mesh changed topology.
Before this patch, the XBlur/YBlur compositor nodes would crash for me when run in a MSVC 2015 debug build (test scene: BMW27_cpu). I added the compiler instructions to explicitly align the local variables that the SSE instructions are accessing.
This was wrong since its conception in 28ee0f9218.
The if statement was returning true when pinid was NULL, and false otherwise.
However, when the scene is pinned we also want to run this code.
Code snippet by Brecht Van Lommel.
Since in Alembic the loop order seems to be reversed when exporting and
importing, and this was the only place where it was not, I decided
to match this to the convention of reversing the loop order as well.
Reviewers: sybren, kevindietrich
Tags: #alembic
Differential Revision: https://developer.blender.org/D2968
Two issues here:
- Checking the table size to be non-zero is not a proper way to go here. This is
because we first resize the table and then fill it in. So it was possible that
a non-initialized table was used.
Trickery with using temporary memory and then doing table.swap() might work,
but we can not guarantee that the table size will be set after the data pointer.
- The mutex guard was useless, because every thread was using its own mutex. Need to
make the mutex guard static so all threads are using the same mutex.
One crucial thing here: OpenVDB should be compiled WITHOUT the
OPENVDB_ENABLE_3_ABI_COMPATIBLE flag. This is how OpenVDB's Makefile is
configured and it's not really possible to detect this for a compiled library.
If we ever want to support that option, we need to add extra CMake argument and
use old version 3 API everywhere.
There is absolutely no reason to have such an indentation level, it only causes
readability and maintainability issues. It is really simple to make the code more
"streamlined".
The issue actually goes a bit deeper: converting a curve to a mesh will
change the texture space, just because font and bezier curves are using CVs
to calculate their texture space.
So now when those objects are converted to mesh, we disable auto
texture space and copy evaluated space over.
Previously, hitting Shift-LMB would first invoke the selection operator, which
later on is transformed into the mouse tweak used for the reroute operator.
This was causing problems extending the selection with Shift-LMB when clicking
fast or from a tablet.
This was expected behavior for over-exposed lamps when the mode was originally
created for Tears of Steel. Turns out, there could be a really bad green screen in
real production which will only have the green (or rather screen) channel
over-exposed.
Tweaked the condition now so we use the least bright channel to see if the area has
proper exposure or not.
Seems to work fine in tests, but further tweaks are possible.
Was a mistake in an optimization commit which was disconnecting closures and nodes
which do not make sense for volume output.
We can't ignore an OSL script, and we can't currently know in advance whether it's a proper
volume shader or not. So we never disconnect OSL nodes from the volume output.
This is a good candidate for corrective release.
The issue here was that removing a datablock from the main database will poke editors
to update, which includes the buttons context freeing users of the texture. Since Cycles
will free datablocks from a job thread, it might crash Blender since the main thread
might be in the middle of drawing.
Solved by exposing extra arguments to bpy.data.foo.remove() which indicate
whether we want to perform ID user count and interface updates. While scripts
shouldn't be using those normally, this is the only way to allow Cycles to skip
the interface update when removing a datablock.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2840
Was caused by numeric overflow when calculating preview dimensions.
Now we try to avoid really insane preview resolutions by fitting the
aspect into a square.
The issue was caused by operator redo which frees all of the object's evaluated data,
including the bounding box. This bounding box can not be reconstructed properly
without full curve evaluation (we need to at least convert the font to nurbs, which is
not cheap already).
In fact, any type of baking might have caused holes in mesh.
The issue was caused by zspan_scanconvert() attempting to get the order of traversal
'a priori', which might have failed if the check happens at the "tip" of a span where
`zspan->span1[sn1] == zspan->span2[sn1]`.
Didn't see anything bad in making it a check when iterating over scanlines and
picking the minimal span based on the current scanline. It's slower, but unlikely to cause
a measurable difference. Quality should stay the same unless I'm missing something.
Reviewers: brecht, dfelinto
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2837
Fix T52977: Parent bone name disappeared in the UI in pose mode.
Regression caused by own rBc57636f060018. So instead of changing the widget
type, just flag it as disabled.
Note that the core of the issue is elsewhere though - there is absolutely no
reason to have a search widget for pointers we cannot change nor
search! But fixing this is not really top priority, one of the many
glitches of our UI code, so I think we can live with the current code.
To be backported to 2.79a.
Inner DAG code would not check against a NULL pointer, and in case of an
active linked scene, the scene pointer will be NULL here, so we have to
check it ourselves. ;)
No need to print status for basic & reliable operations,
build systems can output operations they run if needed,
or debug output changed in the source if developers are debugging.
Nice for ninja, so any printed text hints at a problem to fix.
Was using cursor position from within menu,
clicking on the same position for every selected item (toggling).
Now operate on each selected outliner element, without toggling.
When 2x loops have different numbers of vertices,
the distribution for vertex fan-fill depended on the loop order
and was often lop-sided.
This caused noticeable inconsistencies depending on the input,
since edge-loops are flipped to match each other's winding order.
The pixel size was not initialized early enough. The first time this was not a problem
because the bevel amount starts at 0 then, and after the mouse moves the pixel
size is initialized. The second time the bevel amount starts at a non-zero
value, and it failed then.
Use triple buffer by default now on all platforms, the remaining ones were:
* Mesa: seems to have been working well for a long time now, and not using
it gives issues with the latest Mesa 17.2.0.
* Windows software OpenGL: no longer supported since OpenGL 2.1 requirement
was introduced.
* OS X with thousands of colors: this option was removed in OS X 10.6, and
that's our minimum requirement.
Camera clipping was left at default values, which won't work well for
very large (or small) objects. Now recompute valid clipping start/end
based on the bounding box of the rendered data, and the final location of the camera.
Tentative fix, since I cannot reproduce the issue for some reason here
on Linux.
Core of the problem is pretty clear though, thanks to Germano Cavalcante
(@mano-wii): another thread could try to use the looptris data after the worker
one had allocated it, but before it had actually computed the looptris.
So now, we use a temp 'wip' pointer to store the looptris being computed
(since this is protected by a mutex, other threads will have to wait on
it, with no possibility for them to double-compute the looptris here).
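A hedged sketch of the 'wip' pointer idea (names are made up):

    BLI_mutex_lock(&dm->looptris_mutex);
    if (dm->looptris == NULL) {
        /* Compute into a private pointer first; the shared pointer is only
         * published once the data is complete, so no reader can observe an
         * allocated-but-uninitialized array. */
        MLoopTri *wip = MEM_mallocN(sizeof(*wip) * (size_t)tottri, __func__);
        recalc_looptris(wip, dm);
        dm->looptris = wip;
    }
    BLI_mutex_unlock(&dm->looptris_mutex);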
This should probably be backported to 2.79a if done.
Should have been done before, ahoy, sorry about that. Means the 2.79 tag will
be one (no functional changes) commit ahead of our 2.79 builds, think
we can live with that.
The previous fix wasn't working correctly for certain compiler and CPU intrinsics
modes, causing quite some crashes.
This should be a safer fix, which is closer in behavior to previous release
but which should still fix issues with robust curve intersection.
Note: this commit seems to work as expected (also with transform
snapping etc.). However, it is rather unsafe - not enough for 2.79 at
least, unless we get much more testing on it. It also depends on three
previous ones.
Note that using a global lock here is far from ideal, we should rather
have a lock per DM, but that will do for now; the whole DM thing is doomed
to oblivion anyway in 2.8.
Also, we may need a `DM_DIRTY_LOOPTRIS` dirty flag at some point. Looks
like we can survive without it for now though... Probably because cached
looptris are never copied across DMs?
This was... horribly wrong, CDDM will often *not* need to allocate
anything to return arrays of mesh items! Just check whether the array
pointer is NULL.
Also, remove `DM_get_looptri_array`; that one is currently useless, as
`dm->getLoopTriArray` will always return the cached array (computing it if
needed).
Only the camera from View3D.localvd is used,
other pointers may be invalid.
Longer term we should probably clear these to ensure no accidents.
For now just follow the rest of Blender's code and don't access.
Baking rigid body cache was broken if some cached frames already
existed.
This is just a band-aid for the release; the logic needs to be looked into
further.
scripts being "Artwork" which is your sole property and free to license.
I've removed the reference to scripts in this text.
This was from 2002! With our Python scripts becoming part of how Blender runs,
such scripts now are officially required to be compliant with GNU GPL.
For more information, check the FAQ or contact foundation@blender.org: https://www.blender.org/support/faq/
We have a hardcoded limit of 1000 images to be baked.
However, anything above 100 would lead to overflow in the code.
Caught by warning from builder bot (my compiler doesn't even complain
about this, but it should).
Apparently with Maya in a certain configuration, it's possible to have an
Alembic object without schema in the Alembic file. This is now handled
properly, instead of crashing on a null pointer.
For example, if you have two keyframes:
k1 = 1px, k2 = 10px
it was doing:
1px, 9px, 8px, ..., 3px, 2px, 10px
instead of:
1px, 2px, 3px, ..., 8px, 9px, 10px
- Vertex only meshes never restored their selection history.
- Select history was cleared on the source instead of the target.
Simple Optimizations:
- Avoid O(n^2) linked list looping that checked the entire list before
adding elements (NULL values in the source array to prevent dupes).
- Re-use vert & edge lookup tables instead of allocating new ones.
For some specific pipelines (e.g., holographic rendering) you can easily
need over a million frames (1k * 1k view angles).
It seems a corner case, but there is no real reason not to allow users
to do that.
That said, we do lose subframe precision in the highest frame range, which can
affect motion blur. The current maximum sub-frame precision we have is 16,
while the previous limit of 500k frames had a precision of 32.
Thanks to Campbell Barton for the help here.
To be backported to 2.79
Issue was a nasty hidden one, the dual status (mix of local and linked)
of proxies striking again.
Here, remapping process was considering obdata pointer of proxies as
indirect usage, hence clearing the 'LIB_TAG_EXTERN' of obdata pointer.
That would make savetoblend code not store any 'lib placeholder' for
obdata data-block, which was hence lost on next file read.
Another (probably better) solution here would be to actually consider
obdata of proxies as fully indirect usage, and simply reassign proxies
from their linked object's obdata on file read...
However, that change shall be safer for now, probably good for 2.79 too.
Would be nice to be able to catch this with an assert as well, will see what would
be the best way to do this.
Need to verify with Mai that this solves crash for her and maybe consider
porting this to 2.79.
We need to make sure we can store all volume closures for all objects in the volume
stack. It is a bit tricky to detect what the "nestedness" level of volumes
would be, so for now use the maximum possible stack depth. Might cause some slowdown,
but better to give reliable render output than to fail quickly.
Should be safe for 2.79 after extra eyes.
We were showing "search for unknown menutype WM_MT_button_context" messages in terminal which were not helpful for users, so now they are disabled.
To be backported to 2.79
It is possible to have the same image used multiple times at different frames,
which means we can not free its buffers without any guard. From quick tests
this seems to be doing what it is supposed to.
Need more testing and port this to 2.79.
Regression from rBfed853ea78221; calling this inside the thread worker was
not a really good idea anyway, and we already have all the code we need in
the pre-threading init function, it was just disabled for vertex particles
before.
To be backported to 2.79.
The documentation for the bpy.app.handlers.scene_update_{pre,post}
handlers states that they're called "on updating the scenes data".
However, they're called even when the data hasn't changed. Of course
such handlers are useful, but the documentation should reflect the
current behaviour.
Reviewers: mont29, sergey
Subscribers: Blendify
Maniphest Tasks: T46329
Differential Revision: https://developer.blender.org/D1535
Commit b6d7cdd3ce changed how the mesh data
is deformed, which wasn't taken into account yet in this unit test.
Instead of directly reading the mesh vertices (which aren't animated any
more), we convert the modified mesh to a new one, and inspect those
vertices instead.
When a mesh changes its number of vertices during the animation,
Blender rebuilds the DerivedMesh, after which the materials weren't
applied any more (causing it to default to the first material slot).
This can happen with Alembic files exported from Maya. I'm unsure as to the
root cause, but at least this fixes the crash itself.
Thanks to @looch for reporting this with a test file. The test file has to
remain confidential, though, so it's on my workstation only.
Simply disabled python tests, they can't be run anyway (since blender target is
not enabled) and we don't have any player-related tests in that folder.
Creating ngons with multiple axis aligned shapes in the middle of a
single face would fail in some cases.
This exposed multiple problems in BM_face_split_edgenet_connect_islands
- Islands needed to be sorted on Y axis when X was aligned.
- Checking edge intersections needed increased endpoint bias.
- BVH epsilon needed to be increased.
Basically, make the re-alloc and memcpy happen under the same lock, otherwise one
thread might be re-allocating the array while another one is trying to
copy data there.
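Sketch of the pattern (hypothetical names): the grow and the copy have to share one critical section.

    BLI_spin_lock(&buf->lock);
    if (buf->count == buf->capacity) {
        buf->capacity *= 2;
        buf->data = MEM_reallocN(buf->data, sizeof(*buf->data) * buf->capacity);
    }
    /* safe: nobody can re-allocate between the check above and this copy */
    memcpy(&buf->data[buf->count], &item, sizeof(item));
    buf->count++;
    BLI_spin_unlock(&buf->lock);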
Reported by Mohamed Sakr in IRC, thanks!
The tweakmode flag and the selected-channels flag accidentally
used the same value, due to confusion over where these flags were
supposed to be set. The selected-channels flag has now been moved
to use a different value, so that there shouldn't be any further
conflicts.
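A hypothetical illustration of the clash (not the actual flag names):

    #define FLAG_TWEAKMODE          (1 << 2)
    #define FLAG_SELECTED_CHANNELS  (1 << 2)  /* same bit: setting one sets the other */

    /* the fix: move the selected-channels flag to an unused bit */
    #define FLAG_SELECTED_CHANNELS_NEW  (1 << 9)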
To be ported to 2.79.
Own previous fix (rBd5d626df236b) was not valid, curves are actually
supported by SoftBodies. It was rather a mere UI bug, which was not
including Surfaces and Font object types in those valid for softbody UI.
Thanks to @brecht for the heads up!
Also, fix safe for 2.79, btw.
Adds thin/default/thick modes to add -1/0/1 to the auto detected line width,
while leaving the overall UI scale unchanged.
Also tweaks the default line width threshold, so thicker lines start from
slightly higher UI scales.
Differential Revision: https://developer.blender.org/D2778
* Changed categories of some keywords
* Reordered some longer keywords that didn't appear
* Activated another color (reserved builtins) by Leonid
* Added some HGPOV and UberPOV missing keywords
Patch by Maurice Raybaud (@mauriceraybaud). Thanks to Leonid for additions, feedback and Linux testing.
Related diffs: D2754 and D2755.
While not a regression, this is a new feature and it would be nice to have it
backported to the final 2.79.
As part of the fix for T51587, I removed the Depth output for non-Multilayer
images since it seemed weird that PNGs etc. that don't have a Z pass still get
a socket for it.
However, I forgot about non-multilayer EXRs, which are a special case that can
actually have a Z pass.
Therefore, this commit brings back the Depth output for non-multilayer images
just like it was in 2.78.
New code was correctly handling an ID's internal references to itself, but
not references between different 'made real' objects...
Regression, to be backported to 2.79.
Additionally added a python-settable variable to indicate whether the dynamic fracture data has been loaded externally or not (just set it from the handler).
This is supposed to keep externally loaded data in place when changing over to dynamic fracture; the handler triggers a python script which populates the meshislands and another to populate the constraints via the FM python API.
Additionally, a couple of minor fixes for dynamic fracturing.
There was fully wrong logic in the comparison: it was actually accessing memory
past the array boundary. Ran the test manually and the figure seems correct
to me now.
Spotted by @LazyDodo, thanks!
The issue was caused by pointiness now being calculated after the
faces split. Ported all the fixes we did here.
Should be safe, pointiness is used all over the barbershop.
This commit contains all commits required to get proper hair rendering
with BVH motion steps enabled.
The issue here was mainly coming from minimal pixel width feature
which is quite commonly enabled in production shots.
This feature will use some probabilistic heuristic in the curve
intersection function to check whether we need to return intersection
or not. This probability is calculated for every intersection check.
Now, when we use multiple BVH nodes for curve primitives we increase
the probability of that primitive being considered a good intersection
for us. This is similar to increasing the minimal width of the curve.
What is worse here is that the change in the intersection probability
fully depends on the exact layout of the BVH, meaning the probability might
change differently depending on the view angle, the way the builder
binned the primitives and such. This makes it impossible to do a
simple check like dividing the probability by the number of BVH steps.
Another solution might have been to split the BVH into fully independent
trees, but that would increase the memory usage of all the static
objects in the scenes, which is also not something desirable.
For now we used the most simple but robust approach: store the BVH primitive's
time and test it in the curve intersection functions. This solves the
regression, but has two downsides:
- Uses more memory,
which isn't surprising, and ANY solution to this problem will
use more memory.
What we still have to do is to avoid this memory increase for
cases when we don't use BVH motion steps.
- Reduces the number of maximum available textures on pre-Kepler cards.
There is not much we can do here, hardware gets old but we need
to move forward to more modern hardware.
Deleting the bake after changing the rigidbody count is still necessary, else the shard order and the mesh appearance of fracture modifier objects will be messed up.
Furthermore, decreased the convert-to-keyframes default threshold from 0.05 to 0.005; reconverting by re-using
the operator again is still somewhat broken.
Fixes for clamp-omp, fewer shared variables, fix some cases of threads writing
to the same memory location. Issue found by Jens Verwiebe, who reports 30%
speedup with 16 core CPU, when using this with a recent clang-omp version.
Blender's baking system currently doesn't support the topology used by
adaptive subdivision and primitive ids will be wrong or out of range
leading to crashes. Updating the baking system to support other
topologies would be a bit involved, so for now we simply disable
subdivision while baking to avoid crashes.
Cycles add-on did not actually support reloading correctly.
When you want to correctly reload sub-modules (i.e. modules of an add-on
which is a package), you need to use importlib; a mere import will do
nothing with already loaded modules (RNA classes are sort of
pre-registered when they are evaluated, through the meta-class system).
This way we can stop traversing BVH nodes early on.
Gives about 2-2.5x render time improvement with 3 BVH steps.
Hopefully this gives no measurable performance loss for scenes with a
single BVH step.
Traversal is currently only implemented for QBVH, meaning old CPUs
and GPUs do not benefit from this change.
Similar to the previous commit, the statistics go as follows:
BVH Steps Render time (sec) Memory usage (MB)
0 46 260
1 27 373
2 18 598
3 15 826
Scene used for the tests is the agent's body from one of the barber
shop scenes (no textures or anything, just a diffuse material).
Once again this is limited to regular (non-spatial split) BVH.
Support of spatial split for this feature will come later.
The idea is to create several smaller BVH nodes for each of the motion
curve primitives. This acts as a forced spatial split for the single
primitive.
This gives a render time speedup of motion blurred hair at the cost
of extra memory usage. The numbers go as follows:
BVH Steps Render time (sec) Memory usage (MB)
0 258 191
1 123 278
2 69 453
3 43 627
Scene used for the tests is the agent's hair from one of the barber
shop scenes.
Currently it's only limited to scenes without spatial split enabled,
since the spatial split builder requires some changes to work properly
with motion steps coordinates.
Also fixed some issues with motion keys calculation:
- Clamp lower and upper limits of curves so we can safely call those
functions for the very first and very last curve segment.
- Fixed wrong indexing for the curve radius array.
- Fixed wrong motion attribute offset calculation.
Mimics how regular triangles are working and makes it more clear where
the stuff is located in the kernel.
Needed to have some forward declarations because of the current placement
of things in the kernel.
This kind of keeps threads "warmer" and should in theory give better
cache coherency, bringing some %% of speedup. It was already tested a
few months ago and it gave a few % of speedup in the barber shop, but was
reverted due to some bone popping. The popping is now fixed so it
should be fine to use the new scheduling policy.
Did similar trick to old dependency graph: tag invisible relations for update.
Might need some re-consideration, see the comment.
This should solve our issues with powerlib addon here in the studio.
The new dependency graph expects strict separation between the nodes and relations builders,
meaning, if we try to create a relation with an object which is not in the graph yet
we'll have an error in the depsgraph.
Now, so far object nodes were created from the bases of the current scene, which caused
missing objects in the graph in certain cases.
Didn't find a better approach than to simply ensure object nodes exist when we know
they'll be used by the relation builder.
The idea is simple: when falling back to one of the nodes which was partially
handled, we "resume" checking outgoing relations from the index at which we stopped.
This gives about a 15-20% saving in depsgraph construction time.
This is more proper way to go:
- Avoids re-compilation of all dependent files when implementation changes
without changed API,
- Linker should have much simpler time now de-duplicating and getting rid
of redundant implementations.
The idea here is to address the issue that a name on its own is not
always unique: for example, when adding driver operations the
name used for nodes is the RNA path (and multiple drivers can
write to different array indices of the path). Basically, now
it's possible to pass an extra integer value to distinguish
operations in such cases.
So now we've already switched from using sprintf() to construct a unique
operation name to passing the RNA path and array index.
There should be no functional changes yet, but this work is
required for further work on replacing the string with const
char*.
There is no real reason to have nodes storing heap-allocated names
and descriptions. Doing this increases the number of allocations during
dependency graph building, which usually means some slowness.
We're temporarily losing some eyecandy in the graphviz visualizer,
but we can bring those back as a part of the graphviz dump (which happens
much less often than a depsgraph build).
This will happen in multiple commits for the ease of bisect in the
future just in case this causes any regression. This commit contains
ID creation API changes.
The issue was caused by image ID nodes not being in the depsgraph.
Now, tricky part: we only add nodes but do not add relations yet. Reasoning:
- It's currently important to only call editor's ID update callback to solve
the issue, without need to flush changes somewhere deeper.
- Adding relations might cause some unwanted updates, so will leave that for
a later investigation.
The animation system has separate fcurves for each of the array elements and
the dependency graph creates separate nodes for each fcurve. This is
needed to keep granularity of updates, but causes issues because the
animation system will actually write the whole array to the property when
modifying a single value (this is a limitation of the RNA API).
Worked around by adding an operation relation between array drivers
so we never write the same array from multiple threads.
They were not real issues, it's just some areas of code tried to create
relations between non-existing nodes without checking whether such
relations are really needed.
Now it should be easier to see real bugs printed.
Hopefully should be no regressions here.
Do this for the new dependency graph: handling of OB_UPDATE_TIME in tag update was missing.
Hopefully it's all still correct.
The old dependency graph needs work, but I'm tempted to call it unsupported and move on
to the 2.8 branch.
It was quite weak to consider all scripted expressions time-dependent.
The current solution is somewhat better but still crappy. Not sure how we can make
it really nice.
It was possible to have synchronization issues when accumulating the smooth
normal to a vertex, causing shading artifacts during playback.
Bug found by Dalai, thanks!
This way the render engine can request the mesh to be auto-split and not
worry about implementing this functionality on its own.
Please note that this split is to be performed prior to tessellation.
Was a bit confusing to have transparent and translucent depth
exposed but no diffuse or glossy.
Reviewers: brecht
Subscribers: eyecandy
Differential Revision: https://developer.blender.org/D2399
This is important for the reliable behavior of the isnan/isfinite/min/max
functions to work with NaN and non-finite values. Some of the issues
with fast math are possible to work around, but I didn't find a way to
have a reliable min/max implementation yet.
Please NEVER EVER use such a statement, it's only causing HUGE
issues. What is even worse: it's not always possible to immediately
see that the trouble is coming from such a statement.
There are still some statements in the existing code, will leave
those for a later cleanup.
This avoids intersecting AABBs of different curve primitives,
which results in fewer ray-to-primitive intersections.
This gives about 30% speedup of hair rendering in the barber
shop scenes here. There is still some work to be done on those
files to solve major speed issues on certain frames.
This way we can have different limits for regular and motion curves
which we'll do in one of the upcoming commits in order to gain some
percent of speedup.
The reasoning here is that motion curves are usually intersecting
lots of other bounding boxes, which makes it inefficient to have a
single primitive in the leaf node.
The maximal number of elements is supposed to be inclusive. That is what
was always meant in this file and what @brecht considered still
the case in 6974b69c61.
In fact, the commit message of that change mentions that we allowed
up to 2 curve primitives per leaf while in fact it was doing up to 1
curve primitive.
Making it really 2 primitives at a max gives about 5% slowdown for the
koro.blend scene. This is the reason why BVHParams.max_curve_leaf_size
was changed to 1 by this change.
Since the beginning of time, hair settings in Cycles were global for
the whole scene but were located in the particle context. This caused
quite some trickery to get shots set up for the movies here in the
studio, by forcing artists to create a dummy particle system to change
the settings of hair on the shot.
While ideally these settings should properly become per-particle-system,
for the time being it will save sweat and blood to move the
settings to the scene context.
Reviewers: brecht
Subscribers: jtheninja, eyecandy, venomgfx, Blendify
Differential Revision: https://developer.blender.org/D2287
The issue was that we used to compare the number of vertices for the mesh after auto
smooth was applied (at the center of the shutter time) with the number of vertices
prior to auto smooth being applied. This caused false-positive consideration of a
mesh as changing topology.
Now we do the autosplit as early as possible and do it from the Blender side, so Cycles
does not need to re-implement splitting on its side.
This matches the behavior of Multiscatter GGX and could become handy later on
when/if we decide it would be beneficial to replace one closure with another.
Reviewers: lukasstockner97, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2413
There were 16 bits reserved for the primitive type, while we only need 4.
Reviewers: brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2401
Discard the whole volume stack on the last bounce (but keep
the world volume if present).
Volumes are expected to be closed manifold meshes, meaning if
a ray entered the volume there should be an intersection event
of the ray exiting the volume. The case when the ray hits nothing and
there are still non-world volumes in the stack can happen in
either of these cases:
1. The mesh is not a closed manifold.
Such configurations are not really supported anyway and should
not be used.
Previous code would have considered the infinite length of the
ray to sample across, so the render result wasn't really correct
anyway.
2. The exit intersection is farther away than the camera far
clip distance.
This case will also behave differently now, but previously it
wasn't really correct either, so it's not like we're breaking
something which was working as expected.
3. We missed the exit event due to intersection precision issues.
This is exactly the case which this patch fixes and avoids
fireflies.
4. The volume has Camera-only visibility (all the rest of the visibility
is set to off).
This is what could be considered a regression, but it could be
solved quite easily by checking the volume stack's object flags
and keeping entries which don't have Volume Scatter visibility
(or even better: ensuring Volume Scatter visibility for objects
with volume closures).
Fixes T46108: Cycles - Overlapping emissive volumes generates unexpected bright hotspots around the intersection
Also fixes fireflies appearing on the edges of a cube with an
emissive volume.
Reviewers: juicyfruit, brecht
Reviewed By: brecht
Maniphest Tasks: T46108
Differential Revision: https://developer.blender.org/D2212
The Progress system in Cycles had two limitations so far:
- It just counted tiles, but ignored their size. For example, when rendering a 600x500 image with 512x512 tiles, the right 88x500 tile would count for 50% of the progress, although it only covers 15% of the image.
- Scene update time was incorrectly counted as rendering time - therefore, the remaining time started very long and gradually decreased.
This patch fixes both problems:
First of all, the Progress now has a function to ignore time spans, and that is used to ignore scene update time.
The larger change is the tile size: Instead of counting samples per tile, so that the final value is num_samples*num_tiles, the code now counts every sample for every pixel, so that the final value is num_samples*num_pixels.
Along with that, some unused variables were removed from the Progress and Session classes.
Reviewers: brecht, sergey, #cycles
Subscribers: brecht, candreacchio, sergey
Differential Revision: https://developer.blender.org/D2214
They are defined for MSVC but seem to be missing in GCC and Clang-3.8.
Maybe some further tweaks to the policy of when to define those functions
are needed, but this should be fine for now.
I can no longer reproduce the crash with either of the files where
the crash was originally visible. This is something where other
changes (light threshold, sampling) had an effect and made the code
work as it is supposed to. Could have been an optimizer issue
or something like that.
Let's see if we hit the same issue again.
There were two cases where correlation issues were obvious:
- File from T38710 was giving issues in 2.78a again
- File from T50116 was having totally different shadow between
sample 1 and sample 32.
Use a more simplified version of the CMJ hash which seems to give
nicely randomized values which solve the correlation.
This commit will break all unit test files, but it's a bug fix
so perhaps OK to commit this.
This also fixes T41143: Sobol gives nonuniform noise
A proper science paper about the hash function is coming.
Reviewers: brecht
Reviewed By: brecht
Subscribers: lukasstockner97
Differential Revision: https://developer.blender.org/D2385
This is a way to avoid possible memory corruption when render threads work
in parallel with the UI thread.
Doesn't guarantee complete safety, but makes things easier to check anyway.
Was giving huge artifacts in the barber shop file here in the studio.
Maybe not a fully optimal solution, but committing it for now to have a
closer look later.
The main intention is to give some quick way to control the scene's memory
usage by clamping textures which are too big. This is really handy
at the early production stages when you first create really nice
looking hi-res textures, and only when it all works and is approved
start investing time in optimizing your scene.
This is a new option in the Scene Simplify panel and it acts as
follows: when the texture size is bigger than the given value it'll
be scaled down by half until it fits into the given limit.
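A minimal sketch of that rule (hypothetical names):

    /* halve until both dimensions fit into the user-given limit */
    while (width > texture_limit || height > texture_limit) {
        width = MAX2(1, width / 2);
        height = MAX2(1, height / 2);
    }
    scale_image(image, width, height);  /* hypothetical helper */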
There are various possible improvements, such as:
- Use threaded scaling using our own task manager.
This is actually one of the main reasons why image resize is
manually implemented instead of using OIIO's resize. The other
reason here is that the API seems limited for constructing a 3D texture
description easily.
- Vectorization of uchar4/float4/half4 textures.
- Use something smarter than box filter.
Was playing with some other filters, but not sure they are
really better: they kind of cause more fuzzy edges.
Even with such TODOs in the code the option is already quite
useful.
Reviewers: brecht
Reviewed By: brecht
Subscribers: jtheninja, Blendify, gregzaal, venomgfx
Differential Revision: https://developer.blender.org/D2362
There is some define conflict between system headers and clew,
so delay the include of clew.h as much as possible.
This is something which needed to be done in the code before
the refactor, hopefully such change will still work.
For the multi-GPU case users still have to reconfigure the devices they want to use.
Based on patch from Lukas Stockner.
Differential Revision: https://developer.blender.org/D2347
This can be used together with camera culling to keep nearby objects visible in
reflections, using a minimum distance within which objects are visible. It is
also useful to cull small objects far from the camera.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2332
The code was templated already, so I don't see a big reason to have
3 versions of the templated functions. It was giving some extra code
to maintain and in fact already had divergence in the support of huge
image resolution (missing size_t cast in byte image loading).
There should be no changes visible by artists.
`kernel_path.h` and `kernel_path_branched.h` have a lot of conditional code and
it was kind of hard to tell what code belonged to which directive. Should be
easier to read now.
Add subframe to the animated seed hash calculation.
Should be no difference for the regular files, only for cases when scene is
rendered from sequencer with a speed effect, which is not really a common thing.
This is a late follow-up commit to the light sample threshold changes which
caused differences in rendering of all existing .blend files, which is not something
we are happy about: it is fine to use new optimized defaults for new files, but
existing ones should always render the same way they used to.
Sorry for the inconvenience, but such a thing should have been done to begin with.
If this setting was modified it will not be reset to zero.
Now all render tests should be passing again.
P.S. It is also really annoying to bump the subversion for such reasons, but currently we
don't have a better way to achieve what we want.
After the CUDA dynload changes, having the CUDA toolkit became required
in order to compile Cycles. This only happened due to a wrong
default value for the option.
Previously, it was only possible to choose a single GPU or all of that type (CUDA or OpenCL).
Now, a toggle button is displayed for every device.
These settings are tied to the PCI Bus ID of the devices, so they're consistent across hardware addition and removal (but not when swapping/moving cards).
From the code perspective, the more important change is that now, the compute device properties are stored in the Addon preferences of the Cycles addon, instead of directly in the User Preferences.
This allows for a cleaner implementation, removing the Cycles C API functions that were called by the RNA code to specify the enum items.
Note that this change is neither backwards- nor forwards-compatible, but since it's only a User Preference no existing files are broken.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: brecht, juicyfruit, mib2berlin, Blendify
Differential Revision: https://developer.blender.org/D2338
With this fix, using a MIS map resolution equal to the image size for closest interpolation, or twice the size for linear interpolation, gets rid of all fireflies.
Previously, a much higher resolution was needed to get acceptable noise levels.
Basically, the problem here was that the transform that's used to bring texture coordinates
to world space is either fetched while setting up the shader (when Object Motion is enabled) or
fetched when needed (otherwise). That helps to save ShaderData memory on OpenCL when Object Motion isn't needed.
Now, if OM is enabled, the Lamp transform can just be stored inside the ShaderData as well. The original commit just assumed it is.
However, when it's not (on OpenCL by default, for example), there is no easy way to fetch it when needed, since the ShaderData doesn't
store the Lamp index.
So, for now the lamps just don't support local texture coordinates anymore when Object Motion is disabled.
To fix and support this properly, one of the following could be done:
- Just always pre-fetch the transform. Downside: Memory Usage increases when not using OM on OpenCL
- Add a variable to ShaderData that stores the Lamp ID to allow fetching it when needed
- Store the Lamp ID inside prim or object. Problem: Cycles currently checks these for whether an object was hit - these checks would need to be changed.
- Enable OM whenever a Texture Coordinate's Normal output is used. Downside: Might not actually be needed.
This allows to save a memory copy, which will be particularly useful for network rendering.
Reviewers: sergey, brecht, dingto, juicyfruit, maiself
Differential Revision: https://developer.blender.org/D2323
In scenes with many lights, some of them might have a very small contribution to some pixels, but the shadow rays are traced anyways.
To avoid that, this patch adds probabilistic termination to light samples - if the contribution before checking for shadowing is below a user-defined threshold, the sample will be discarded with probability (1 - (contribution / threshold)) and otherwise kept, but weighted more to remain unbiased.
This is the same approach that's also used in path termination based on length.
Note that the rendering remains unbiased with this option, it just adds a bit of noise - but if the setting is used moderately, the speedup gained easily outweighs the additional noise.
Reviewers: #cycles
Subscribers: sergey, brecht
Differential Revision: https://developer.blender.org/D2217
This option allows to create a smoother transition between Bricks and Mortar - 0 applies no smoothing, and 1 smooths across the whole mortar width.
Mainly useful for displacement textures.
The new default value for the smoothing option is 0.1 to give some smoothing that helps with antialiasing, but existing nodes are loaded with smoothing 0 to preserve compatibility.
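A hedged sketch of the blend (hypothetical names; x is the normalized distance into the mortar):

    static float smooth01(float x)  /* classic smoothstep on [0, 1] */
    {
        x = fminf(fmaxf(x, 0.0f), 1.0f);
        return x * x * (3.0f - 2.0f * x);
    }

    /* t = 0 -> mortar, t = 1 -> brick; mortar_smooth = 0 keeps the hard edge */
    float t = (mortar_smooth > 0.0f) ? smooth01(x / mortar_smooth)
                                     : (float)(x >= 1.0f);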
Reviewers: sergey, dingto, juicyfruit, brecht
Reviewed By: brecht
Subscribers: Blendify, nutel
Differential Revision: https://developer.blender.org/D2230
When using the Normal output of the Texture Coordinate node on Point and Spot lamps, the coordinates now depend on the rotation of the lamp.
On Area lamps, the Parametric output of the Geometry node now returns UV coordinates on the area lamp.
Credit for the Area lamp part goes to Stefan Werner (from D1995).
Oh man, is it a compiler bug? Is it something stupid we do?
For now more crap to prevent crashes. During the conference I will talk to
Maxym about how we can troubleshoot such weird issues.
Basically don't use rcp() in areas which seem to be critical after
a second look. Also disabled some multiplication operators, not sure
yet why they might be a problem.
Tomorrow will be setting up a full test with all cases which were
buggy in our farm to see if this fix is complete.
There are some precision issues for big magnitude coordinates which started
to give weird behavior in release builds. Some weird memory usage in BVH
which is tricky to nail down, because it only happens in release builds and GDB
reports all variables as optimized out when trying to use RelWithDebInfo.
There are two things in this commit:
- Attempt to make vectorized code closer to original one, hoping that it'll
eliminate precision issue.
This seems to work for transform_point().
- A similar trick did not work for transform_direction() even though the absolute
error here is much smaller. For now disabled that function, need a more
careful look here.
Several ideas here:
- Optimize calculation of near_{x,y,z} in a way that does not require
3 if() statements per update, which avoids negative effect of wrong
branch prediction.
- Optimization of direction clamping for BVH.
- Optimization of point/direction transform.
Brings ~1.5% speedup again depending on the scene (unfortunately, this
speedup can't be summed across all previous commits because the speedup of
each of the changes varies from scene to scene, but it still seems to
be a nice solid speedup of a few percent on Linux, and a bigger speedup was
reported on Windows).
Once again, thanks Maxym for the inspiration!
Still TODO: We have multiple places where we need to calculate the near
x,y,z indices in BVH, for now it's only done for the main BVH traversal.
Will try to move this calculation to a utility function and see if
it can be easily re-used across all the BVH flavors.
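A hedged sketch of the near_{x,y,z} idea (hypothetical node layout): derive the slab order per axis once per ray from the direction sign, instead of three if() checks per visited node.

    const int near_x = (idir.x >= 0.0f) ? 0 : 1;  /* near/far slab index on X */
    const int near_y = (idir.y >= 0.0f) ? 2 : 3;
    const int near_z = (idir.z >= 0.0f) ? 4 : 5;

    /* per node, assuming bounds stored as [lo_x, hi_x, lo_y, hi_y, lo_z, hi_z] */
    const float t_near_x = (node->bounds[near_x]     - P.x) * idir.x;
    const float t_far_x  = (node->bounds[near_x ^ 1] - P.x) * idir.x;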
Similar to the previous commit, avoid negative effect of bad branch prediction.
Gives measurable performance up to ~2% in tests here.
Once again, thanks to Maxym Dmytrychenko!
The idea here is to avoid if statements which could cause wrong
branch prediction.
Gives a bit of measurable speedup up to ~1%. Still nice :)
Inspired by Maxym Dmytrychenko, thanks!
Similar to the regular triangle intersection case. Gives about 3% speedup rendering an
SSS object on my desktop.
Question: how to avoid such code duplication in a nice way without speed loss?
Note that volume rendering is not supported yet, this is a step towards that.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2299
Previously an error message would be printed whenever the OpenCL build produced output.
However, some frameworks seem to print extra information even if the build succeeded, so now the actual returned error is checked as well.
When --debug-cycles is activated, the build output will always be printed, otherwise it only gets printed if there was an error.
The problem here was, as the title says, that the two kernels were swapped.
Since shader evaluation is only used for building the sampling map when World MIS is enabled, rendering without it would still work fine, although baking was also broken.
The previous refactor changed the code to use a separate logging mechanism to support multithreaded compilation.
However, since that's not supported by any frameworks yet, it just resulted in bad logging behaviour.
So, this commit changes the logging to go directly to stdout/stderr once again by default.
This was giving some speedup but made intersection tests fail
from a watertightness point of view.
Needs deeper investigation, but need to quickly get it fixed for
the studio.
This gives about 5% speedup on AVX2 kernels (other kernels still
have SSE disabled for math operations) and solves the slowdown
of the koro scene mentioned in the previous commit.
The title says it all actually. This commit also contains
changes to pass float3 as const reference in affected functions.
This should make MSVC happier without breaking OpenCL because it's
only done in areas which are ifdef-ed for non-OpenCL.
Another patch based on inspiration from Maxym Dmytrychenko, thanks!
This commit basically vectorizes existing code using AVX2 instructions
(without modifying algorithm itself). This gives quite nice speedups:
BMW: -8%
Classroom: -5%
Cat: -5%
Koro: +1%
Barcelona: -8%
That's on a Linux machine; the reported performance improvement on Windows
goes up to 20%.
Not currently sure why Koro is somewhat slower, because it mainly uses
curve intersection tests. Could it be timing noise? Or something with
cache utilization perhaps? In any case, the speedup in other scenes makes
me think that the current state is acceptable for an initial implementation.
This is again inspired by Maxym Dmytrychenko.
Based on the existing ssef data type, and to my knowledge it's also what happens in
Embree nowadays.
Inspired by Maxym Dmytrychenko and required for the upcoming triangle
intersection commit.
Hopefully the copyright message is correct.
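As an illustration only, such a type can be a thin wrapper over __m256 with a few overloaded operators, mirroring how ssef wraps __m128. This is a hypothetical sketch, not the actual Cycles declaration, and needs AVX2/FMA compiler flags:

```cpp
#include <immintrin.h>

/* Eight packed floats, in the spirit of the existing ssef type. */
struct avxf {
  __m256 m256;

  avxf() = default;
  avxf(__m256 v) : m256(v) {}                      /* implicit from intrinsics */
  explicit avxf(float f) : m256(_mm256_set1_ps(f)) {}
};

inline avxf operator+(const avxf &a, const avxf &b) { return _mm256_add_ps(a.m256, b.m256); }
inline avxf operator*(const avxf &a, const avxf &b) { return _mm256_mul_ps(a.m256, b.m256); }

/* Fused multiply-add (a * b + c), used heavily by intersection code. */
inline avxf madd(const avxf &a, const avxf &b, const avxf &c)
{
  return _mm256_fmadd_ps(a.m256, b.m256, c.m256);
}
```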
This is also an important mathematical operation that can be folded
if it is known that one argument is a certain constant. For colors
the operation is provided as a Gamma node.
The SVM Gamma node needs a small fix to make it follow the 0 ^ 0 == 1
rule, same as the Power node, or the Gamma node itself in OSL mode.
Reviewers: #cycles
Differential Revision: https://developer.blender.org/D2263
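A minimal sketch of the folding rule described above, with a hypothetical try_fold_pow() helper (not the actual node-graph folding code):

```cpp
#include <cmath>

/* Fold pow(x, y) to a constant when enough is known, following the
 * 0 ^ 0 == 1 convention mentioned above. */
bool try_fold_pow(bool x_known, float x, bool y_known, float y, float *r)
{
  if (y_known && y == 0.0f) { *r = 1.0f; return true; }           /* x^0 == 1, even 0^0 */
  if (x_known && x == 1.0f) { *r = 1.0f; return true; }           /* 1^y == 1 */
  if (x_known && y_known)   { *r = std::pow(x, y); return true; } /* fully constant */
  return false;  /* cannot fold: keep the node */
}
```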
The light sampling functions calculate light sampling PDF for the case that the light has been randomly selected out of all lights.
However, since BPT handles lamps and meshlights separately, this isn't the case. So, to avoid a wrong result, the code just included the 0.5 factor in the throughput.
In theory, however, the correction should be made to the sampling probability, which needs to be doubled. Now, for the regular calculation, that's no real difference since the throughput is divided by the pdf.
However, it does matter for the MIS calculation - it's unbiased both ways, but including the factor in the PDF instead of the throughput should give slightly better results.
Reviewers: sergey, brecht, dingto, juicyfruit
Differential Revision: https://developer.blender.org/D2258
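A scalar sketch of the difference (hypothetical names; in reality eval and throughput are spectral): both forms produce the same raw estimate, but only the second feeds the true selection probability into the MIS weight.

```cpp
static float mis_power(float a, float b) { return (a * a) / (a * a + b * b); }

void lamp_vs_meshlight_factor(float eval, float pdf, float bsdf_pdf)
{
  /* Old: 0.5 selection factor folded into the throughput. */
  float tp_old  = 0.5f * eval / pdf;
  float mis_old = mis_power(pdf, bsdf_pdf);   /* pdf is too small here */

  /* New: double the PDF instead. */
  float pdf2    = 2.0f * pdf;
  float tp_new  = eval / pdf2;                /* identical to tp_old   */
  float mis_new = mis_power(pdf2, bsdf_pdf);  /* correct MIS weight    */

  (void)tp_old; (void)mis_old; (void)tp_new; (void)mis_new;
}
```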
This reverts commit ecbfa31caa.
The original commit broke logic in node re-fitting. That area can
access non-existing children momentarily. Not sure what the best
solution is here; for now simply reverting the change.
Both spot and area lights have large regions where they're not visible.
Therefore, this patch stops the light sampling code when one of these cases (outside of the spotlight cone or behind the area light) occurs, before the lamp shader is evaluated.
In the case of the area light, the solid angle sampling can also be skipped.
In a test scene with Sample All Lights and 18 Area lamps and 9 Spot lamps that all point away from the area that the camera sees, render time drops from 12sec to 5sec.
Reviewers: brecht, sergey, dingto, juicyfruit
Differential Revision: https://developer.blender.org/D2216
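For the spot case, the early-out reduces to a single dot product against the cone's cosine before any shader work happens. A minimal sketch under assumed conventions (unit vectors, invented names):

```cpp
struct float3 { float x, y, z; };

static inline float dot(const float3 &a, const float3 &b)
{
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Returns false when the shading point lies outside the spot cone, in
 * which case the contribution is exactly zero and the lamp shader can
 * be skipped entirely. */
bool spot_light_visible(const float3 &lamp_to_point, /* unit vector    */
                        const float3 &spot_dir,      /* unit cone axis */
                        float cos_half_spot_angle)
{
  return dot(lamp_to_point, spot_dir) >= cos_half_spot_angle;
}
```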
The title says it all actually. From tests with the barber shop scene here it
gives a 2-3x speedup for shader compilation on my oldie i7 machine. The
gain is mainly due to the textures metadata query from JPEG files (which
seems to require decompression before metadata can be read). But in
theory it could give nice improvements for scenes with huge node trees
as well (I'm talking about node trees of fractal-like complexity, which
we had reports about in the past).
Reviewers: juicyfruit, dingto, lukasstockner97, brecht
Reviewed By: brecht
Subscribers: monio, Blendify
Differential Revision: https://developer.blender.org/D2215
Basically just moves cached kernels from ~/.config/blender/BLENDER_VERSION to
~/.cache/cycles/kernels. This has the following benefits:
- Follows the XDG specification more closely;
not as if it's totally crucial or measurable by users, but still nice.
- Prevents unexpected sizes of the config folder, making disk space usage more
predictable for users.
- Allows sharing kernels across multiple Blender versions,
which makes debugging easier at times close to a release.
- The "Copy Previous Settings" operator will no longer copy possibly
gigabytes of cached kernels, which used to lead to really nasty disk usage
and annoying delays when copying settings.
- In the future we can have some smart logic to clear old unused cached
kernels.
Currently only done for Linux and OSX. Windows still follows old "cache"
folder logic, but it's not really important for now because we don't
support kernel compilation on this platform yet.
Reviewers: dingto, juicyfruit, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2197
Most of the time, Lamps in Cycles are just a constant emission closure, no texturing etc. Therefore, running a full shader evaluation is wasteful.
To avoid that, Cycles now detects these constant emission shaders and stores their value in the lamp data along with a flag in the shader.
Then, at runtime, if this flag is set, the lamp code just uses this value and only runs the full shader evaluation if it is necessary.
In scenes with a lot of lamps and with "Sample all direct/indirect" enabled, this saves up to 20% of rendering time in my tests.
Reviewers: #cycles
Differential Revision: https://developer.blender.org/D2193
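Roughly, the mechanism splits into a sync-time detection and a render-time shortcut. A hypothetical sketch (structs and names invented for illustration, not the actual Cycles code):

```cpp
struct Lamp { float emission[3]; unsigned flags; };
enum { LAMP_CONSTANT_EMISSION = 1u };

/* Sync time: a shader that is just "constant Emission -> Output"
 * collapses to a single RGB value stored on the lamp. */
void sync_lamp(Lamp *lamp, bool is_constant, const float value[3])
{
  if (is_constant) {
    lamp->emission[0] = value[0];
    lamp->emission[1] = value[1];
    lamp->emission[2] = value[2];
    lamp->flags |= LAMP_CONSTANT_EMISSION;
  }
}

/* Render time: use the cached value, fall back to full evaluation. */
const float *lamp_eval_fast(const Lamp *lamp)
{
  if (lamp->flags & LAMP_CONSTANT_EMISSION)
    return lamp->emission;  /* skip the full shader evaluation */
  return nullptr;           /* caller runs the full shader     */
}
```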
The propagate trigger allows propagating impulses across different FM objects, the dynamic trigger can start a dynamic fracture independent of force, and constraint dissolve means that all constraints of activated shards will be disabled immediately.
Bugfix: switching from kinematic to simulated should work now; performance improvements with dynamic fracture.
The new option just exposes the disable-collision flag to FM (it works inverted here: enable it to let constrained objects collide, disable it for the default behavior).
This will confuse the hell out of guarded allocators, because it is possible
to have an allocation happen before Blender's guarded allocator is
fully initialized.
This was causing crashes and assert failures when running blender
with fully guarded memory allocator.
The initialization order of global stats and node types was not strictly
defined, and it was possible to have node types initialized first and
stats after that. This would zero out memory which was allocated from
the statistics, causing an assert failure when de-initializing node types.
It was possible for a non-initialized unaligned BVH split
to be used when the regular BVH split SAH was inf. Now we ensure
that the unaligned splitter is only used when it's really initialized.
It's a regression and should be in 2.78a.
Material linking might (and does) change the way drawObject is calculated,
but does not tag drawObject for recalculation in any way.
Now use the dependency graph to tag the draw object for recalculation. Currently we do
this using the OB_RECALC_DATA tag since tagging is not very granular yet. In the
future we can easily introduce more granular tagging in the new dependency
graph.
Simple and safe for 2.78a.
When ED_screen_animation_play is called from wm_event_do_handlers, ScrArea *sa = CTX_wm_area(C); is NULL in ED_screen_animation_timer.
Informing the audio system in CTX_data_main_set, that a new Main has been set.
The issue was caused by a wrong default value for the brush particle count,
which was clamped on display from 0 to 1. This is technically a regression,
but how to port this to 2.78a?
Regression from rB69b66d549bcc8, which was supposed to be a non-functional
change, so not sure why the search menu was reduced here? For now, restore
the 2.77 width.
Seems to be a bug in the original implementation of a830280: code was always
using tangent space instead of the UV map because it had the same name. Now
prefer UVMap over tangent because this is how Cycles works. At least it's
closer to that.
Not sure if the save+reload issue is still relevant after this fix, that
needs to be double-checked.
Thanks @dfelinto for looking into the report and simplifying the case.
Should be included into 2.78a.
Column flow layout was abusing ui_item_fit in a weird way, which was
broken for last-column items.
Now rather use our own code, which basically spreads the available width as
equally as possible between all columns.
When a ray hits a curve segment with an SSS shader, it was possible to have
an uninitialized hit_P variable used for sampling.
Seems that was the reason for our headache about the difference between AVX2
and SSE4 render results here, so now we can revert all the nasty
ifdef-ed inline policies.
Original fix in this area was not really complete (but was the safest at
the release time). Now all the crazy configurations of slots going out
of sync should be handled here.
It was possible to have two viewports opened and start using Ctrl-0
to make different objects an active camera for the viewport. This
worked fine for viewports which had decoupled camera from the scene,
but if viewport was locked to scene camera it was possible to run into
situation when two different viewports are locked to scene camera but
had different v3d->camera pointers.
We were checking the number of tasks from a given pool already active, and
then atomically increasing it if allowed - this is not correct, the number
could be increased by another thread between the check and the atomic op!
Atomic primitives are nice, but you must be very careful with *how* you
use them... Now we atomically increase the counter, check the result, and if we
end up over the max value, abort and decrease the counter again.
Spotted by Sergey, thanks!
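The pattern generalizes beyond this task pool; a minimal sketch with std::atomic:

```cpp
#include <atomic>

std::atomic<int> num_active{0};
constexpr int max_active = 8;

/* Racy version: another thread can increment between the load and the
 * add, pushing the count past the limit:
 *   if (num_active.load() < max_active) num_active.fetch_add(1);
 */

/* Safe version: increase first, inspect the result, undo on failure. */
bool try_acquire_slot()
{
  if (num_active.fetch_add(1) >= max_active) {
    num_active.fetch_sub(1);  /* went over the limit: roll back */
    return false;
  }
  return true;
}
```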
Pretty much the same reason as for the 'from' pointer of shapekeys - runtime
data creating loops and 'ghost' dependencies between datablocks.
We need to handle them in cases like remapping, but shall not take them
into account when checking dependencies between datablocks... :/
The new recursive iteration over IDs in BKE_library_foreach_ID_link() was
broken by the infamous nodetree case. We cannot really recursively call
this function in that case, so better to defer handling of
non-datablock NodeTrees as if they were real IDs here.
Also fixed the initial ID not being stored as handled; in rare cases this
could also lead to infinite looping.
To be backported to 2.78a.
Mostly this is making inlining match CUDA 7.5 in a few performance critical
places. The end result is that performance is now better than before, possibly
due to less register spilling or other CUDA 8.0 compiler improvements.
On benchmark scenes, there are 3% to 35% render time reductions. Stack memory
usage is reduced a little too.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D2269
We *always* want to increase the mat user count when it comes from the Object (and not
the Data), because in that case we are moving the mat from the object to the temp
generated mesh; the material can never be 'borrowed' in that case.
To be backported to 2.78a
If I didn't miss anything, these are indeed not used. Old themes should still work (Blender will only print info about redundant theme defines to the console), but updated the non-contrib themes already.
- Explicitly specify the platform for msbuild, to facilitate builds with just the Visual C++ Build Tools installed.
- When vs2013 is not found, try looking for 2015 as a fallback
- Clear up any batch variables that might have been set from previous runs
'camera' Object pointer of TimeMarkers is a 'temp' hack since Durian project...
Would need to be either made definitive now, or removed/reworked/whatever.
But since we intend to use that object pointer for other needs, and current code
could lead to crashing .blend files, for now let's fix that mess (was missing
some bits in read code, and also totally ignored in libquery code).
Should be safe for 2.78a.
It will discard the whole tile, but it's still kind of more friendly than
a fully locked interface (sort of) until the tile is fully sampled.
Sorry if it causes a PITA to merge for the OpenCL split work, but this issue
was bothering me a lot when collecting benchmarks.
This change makes it so that when the sequences within a Scene strip are
evaluated, they use the Scene that they come from as the context as opposed
the Scene that the Scene strip is in. This is necessary, for example, in the
case of the MulticamSelector where it needs to reference strips in the original
Scene as opposed to the Scene where the Scene strip is located.
Patch by @Matt (HyperSphere), thanks!
Previously it was falling back to just a path after #include
statement was finished. Now we fall back to a proper current
file name after dealing with the preprocessor statement.
- The build folder name used to depend on the order of the parameters; this is now normalized to
"build_windows_[Release/Full/Lite/Headless/Cycles/Bpy]_[x86/x64]_vc[12/14]_[Release/Debug]" regardless of the order of the parameters.
- Use CUDA 8 for all kernels when building the release convenience target with Visual Studio 2015.
Another case of float imprecision leading to an endless loop. Increasing the 'noise threshold' a bit seems to work OK.
Not a regression, but might be nice to have in 2.78a.
Not sure where this comes from, but code was converting BMEdge* to BMVert* to check oflags,
i.e. not accessing correct memory.
Regression, to be backported to 2.78a.
Regression caused by rBbcc863993ad; write code was assuming dw->ob was always valid,
which is no longer the case right after reading a file, e.g.
Another good example of how bad it is to use 'hidden' dependencies between datablocks. :(
And another fix to be backported to 2.78a. :(((
Moved that code forth and back a few times while creating the rB776a8548f03a fix,
and ended up forgetting to update it correctly for the function where it finally landed.
Last-minute fixes are never a good thing... Now we already have a real reason for a 2.78 'a' release :(
Code was not getting the correct boundbox in some cases (a group only instancing other groups, e.g.); now
compute our own bbox in those cases.
Based on patch by @lichtwerk, but extended the fix to include linked datablocks in some cases
(linked objects in local groups, linked objects in local scene, etc.), this was also broken in existing code.
Reviewers: mont29
Subscribers: duarteframos
Differential Revision: https://developer.blender.org/D2257
Regression from rBa4a968f: we would adjust the current point's wetness without actually protecting it
in the new multi-threaded context, leading to a concurrent access mess.
Now delay applying the wetness reduction to the current point to the end of the function, which allows us to avoid having
to lock the current point twice together with the neighbor one (and reduces spinlock waiting too).
To be backported to 2.78a.
Never call function that might recompute a DM in an RNA itemf callback (or any UI-related func in general)!
There was an XXX comment asking if this was OK - well, no, it was not. :P
Could be ported back to some 2.78 flavour should we need it.
Was only happening with the new dependency graph.
The issue here is that the scene's depsgraph layers will be 0 unless
it was ever visible. Worked around by checking for a 0 layer in the
update_tagged of the new depsgraph. This currently kind of follows the
logic of visible_layers, but is weak.
Committing so the studio is unlocked here; will re-evaluate this later.
Now using new system dedicated to that kind of cases, id_ensure_real_user(), instead.
That way, usercount of Scenes is handled correctly at deletion time.
Reported by @sergey over IRC, thanks.
Some of the files were wrongly attributing code to some other
organizations, and in a few places proper attribution was missing.
This is mainly either a copy-paste error (when a new file was
created from an existing one and the header wasn't updated) or due
to some refactor which split non-original-BF code from purely
BF code.
Should resolve some confusion around this.
The buildbot machine was updated to the new SDK, which seems to have
QTKit removed.
Until we've installed an older SDK or ported our code to the new
AVFramework, QuickTime is disabled.
Actually two errors here:
* The Properties editor wasn't refreshing on (NC_SCENE | ND_RENDER_OPTIONS) notifiers.
* Notifier info bits were being used wrongly; two separate notifiers need to be sent.
Decided to remove ND_RENDER_OPTIONS rather than adding a properties editor scene context refresh for it; this is more than a render option change.
This is something what was guaranteed in give_current_material(), just
copied some range checking logic from there.
Not sure what would be a proper fix here though.
In fact, it was the whole remapping process that was broken in the logic bricks area,
due to the terrible design of links between those bricks...
Object copying was also broken in that case; fixed as well.
To be backported to 2.78.
Note that the issue was actually probably there for ages, hidden behind dirty hacks
used in the previous append code (though likely visible in some corner cases).
Listen kids: do not, never, ever, do what has been done for links between logic bricks. Never. Ever.
Even as pure runtime data it would have been bad, but as stored data...
Using context.active_gpencil_brush to access the active Grease Pencil brush
would result in a crash if trying to rename the brush, because the "ID" pointer
was not set.
To be backported to 2.78
The optimization attempt with BKE_library_idtype_can_use_idtype() was not taking into account
the fact that drivers may link virtually against any datablock...
Has to be rethought, but for after the 2.78 release; this commit is safe to backport.
Regression caused by rBb27ba26: we would always tag datablocks for update in G.main,
ignoring the given bmain; now always use the latter instead.
To be backported to 2.78.
The idea here is to select the lowest isolation level that won't compromise quality.
By using the lowest level we save memory and processing time. This will also
help avoid precision issues that have been showing up from using the highest
level (T49179, T49257).
This is a pretty simple heuristic that gives OK results. There's more we could
do here, such as filtering for vertices/edges adjacent to geometric features that
need isolation instead of checking them all, but the logic there could get a
bit involved.
There's potential for slight popping of edges during animation if the dice
rate is low, but I don't think this should be a problem since low dice rates
really shouldn't be used in animation anyways.
Reviewed By: brecht, sergey
Differential Revision: https://developer.blender.org/D2240
New features:
1) A release target that checks for both CUDA 7.5 and 8 with the WITH_CYCLES_CUDA_BINARIES=ON and CYCLES_CUDA_BINARIES_ARCH=sm_20;sm_21;sm_30;sm_35;sm_37;sm_50;sm_52;sm_60;sm_61 options set.
2) An option to switch between x86 and x64 builds; the default remains (auto-detect the architecture) but can be overridden.
3) An option to switch between vs12 (2013) and vs14 (2015); the default is 2013.
Reviewers: juicyfruit, sergey
Reviewed By: sergey
Tags: #platform:_windows
Differential Revision: https://developer.blender.org/D2180
Stupid mistake wrapping path validation code inside a BLI_assert, which means it was
only called in debug builds...
Found by Sergey, thanks.
Should be backported to 2.78.
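The pitfall generalizes; sketched here with plain assert and a hypothetical validate_path() (BLI_assert compiles away in release builds just like assert does with NDEBUG):

```cpp
#include <cassert>

/* Hypothetical validation with side effects the caller relies on. */
bool validate_path(char *path) { return path && path[0] != '\0'; }

void save_file(char *path)
{
  /* Wrong: the whole expression is stripped in release builds, so the
   * validation never runs there:
   *   assert(validate_path(path));
   */

  /* Right: always run the call, only assert on the result. */
  const bool ok = validate_path(path);
  assert(ok);
  (void)ok;
}
```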
Those 'never null' ID pointers are really a PITA to handle... luckily we don't have many of those around!
Found by Sybren, thanks.
Should be backported to 2.78.
This is an internal pointer helper for scene evaluation and tools; though exposed to the bpy API,
it can give false 'dependency cycles' in bpy.data.user_map() results.
That's a followup to rBe007552442634 really; both should be backported to 2.78.
This is an internal pointer helper for scene evaluation and tools; it's not exposed to the bpy API anyway,
and can give false 'dependency cycles' in bpy.data.user_map() results.
Found by sybren in his Splode work.
The problem was a zero-length normal caused by a precision issue in patch evaluation.
This is somewhat of a quick fix, but is better than allowing possible NaNs to
occur and cause problems elsewhere.
Small issues in GHOST:
- use the NSApplicationDelegate protocol for our app delegate
- make sure NSApp is initialized before using it
(cherry picked from commit df7be04ca6)
Regression from rB036c006cefe471. We can't use self here; self is bpy.app, not the pydescriptor of the python path getsetter...
So for now, do not try to replace the getsetter by the actual value in bpy.app's dict,
just return a static var generated on first run.
Should be safe for 2.78.
When drawing with Grease Pencil "continuous drawing" for a long time
(i.e. basically, drawing a very large number of strokes), it could be
possible to cause lower-specced machines to run out of RAM and start
swapping. This was because there was no limit on the number of undo
states that the GP undo code was storing; since the undo states grow
exponentially on each stroke (i.e. each stroke results in another undo
state which contains all the existing strokes AND the newest stroke), this
could cause issues when taken to the extreme.
I) The filename was not put in the temp Main generated to save selected data only;
this was breaking read code when trying to open the partial file, leading to a missing
filename in the final loaded Main data.
II) Read code would confuse partial .blend files with undo ones when they had no screen in them
(which happens to 99.999% of partial .blend files, I guess).
Reported by @sybren, thanks.
Should be safe enough for 2.78 release.
Based on patch by @codemanx, but with slightly less verbose descriptions.
More detailed behavior etc. rather belongs to doc/python_api/examples/bpy.ops.x.py imho.
The issue was caused by some false-positive empty non-AABB intersection.
Tried to tweak it a bit so it does not record intersection anymore.
Hopefully will work for all platforms. Tested here on iMac and Debian.
Constant folding was removing all nodes connected to the displacement output
if they evaluated to a constant, causing there to be no valid graph for
displacement even when there was displacement to be applied, and sometimes
caused crashes.
Using one's complement for detecting if a transform has been applied was confusing
and led to several bugs. With this, proper checks are made.
Also added a few transforms where they were missing, mostly affecting baking
and displacement when `P` is used in the shader (previously `P` was in the
wrong space for these shaders).
Also removed `TIME_INVALID`, as this may have resulted in incorrect
transforms in some cases.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2192
Bump mapping was happening in world space while displacement happens in object
space, causing shading errors when displacement type was used with bump mapping.
To fix this the proper transforms are added to bump nodes. This is only done
for automatic bump mapping however, to avoid visual changes from other uses of
bump mapping. It would be nice to do this for all bump mapping to be consistent
but that will have to wait till we can break compatibility.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2191
The existing code uses the input value count of the first channel
for all of them. If the first channel is the largest, it leads to
a crash-causing buffer overrun in the memcpy below. Likely this was
left over from the time when only one channel was supported.
As a crash fix, probably should go into 2.78.
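The shape of the fix, with hypothetical types (the point is only that each channel must use its own count):

```cpp
#include <cstring>

struct Channel { const float *values; int totvalue; };

void copy_channels(float *dst, const Channel *ch, int num_channels)
{
  for (int i = 0; i < num_channels; i++) {
    /* Bug: sizeof(float) * ch[0].totvalue overruns the destination when
     * the first channel is the largest; use the per-channel count. */
    std::memcpy(dst, ch[i].values, sizeof(float) * ch[i].totvalue);
    dst += ch[i].totvalue;
  }
}
```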
The root of the issue is that the active render index became wrong. This is the actual
thing to be fixed, but as usual this is quite tricky to reproduce. Since such
a bad situation might have happened more often, and the fix isn't really difficult or
intrusive, let's avoid the crash for now.
Can be revisited once we figure out the root of the issue.
Nice for 2.78 release.
Own fault in the new ID management work: thought rebuilding the DAG itself was
enough to actually update the whole scene, but we actually need to tag datablocks
for update as well when we change (or remove) one of their ID pointers...
For proper indexing to work we need to use an unaligned node with
an identity transform instead of aligned nodes when doing a refit.
To be backported to 2.78 release.
This is really a hack-fix actually; not sure why `get_pointcache_keys_for_time()` seems to assume
it will always find a key for the given part index, at least for the current frame, and whether this assumption
is wrong or whether the bug happens elsewhere...
Anyway, this is to be wiped out in 2.8, so no point losing too much time on it; for now merely
returning unchanged (i.e. zero'ed) ParticleKeys in case index2 is invalid. Won't hurt anyway;
even if this did not crash in release builds, it would be returning gibberish values.
Weirdly enough, this version of XCode seems to have static_assert()
even when NOT using C++11. This is totally weird and counter-intuitive,
since static_assert() is supposed to be a C++11-only feature.
Can XCode stop using the future, please? :)
The two SVM nodes added with e7ea1ae78c caused a slowdown on AMD cards when rendering with OpenCL, whether displacement was used or not.
In the Barcelona Pavillon scene on an RX480, this would cause a 12% slowdown.
Therefore, this commit adds an additional flag for feature-adaptive compilation so that the new SVM nodes are only enabled when they are needed (node tree connected to the Displacement output and Displacement type set to Both).
Also, the nodes were added to shaders when the Displacement Type was set to Bump (the default), which was unnecessary and is fixed now.
Thanks to linda2 on IRC for reporting and testing, and to maiself for help with the displacement shader code.
This fix might be relevant for 2.78, but it should be tested further before including it.
See the commit's comments for details, but this boils down to: do not try to use
purely runtime cache data as a 'real' ID pointer in read code, it's likely
doomed to fail in some cases, and is bad practice in any case!
This fix implies the dupliweight's object will be invalid until the first scene update
(i.e. first particles evaluation).
Cpack generates a standard filename with git information in it, which might not always be wanted for release builds, this patch adds an option to override that default filename.
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Differential Revision: https://developer.blender.org/D2199
Amended to fix: wrong variable name in the main CMakeLists.txt.
This reverts commit 9444cd56db.
This commit caused some flickering in the bones when swapping IK to FK.
While it's unclear why such a change caused any regressions, let's revert
it to unlock the studio.
Based on reading documentation around. This particular version
is based on the ImageMagick documentation, which can be found here:
http://www.imagemagick.org/Usage/filter/
http://www.imagemagick.org/Usage/filter/nicolas/
The current filter is based on measuring the mean error against the current
splash screen and choosing the combination of parameters which
gives the minimal mean error.
Fix cast shadows options (in material tab) not working in the viewport.
An off-by-one error. See D2194 for more.
Committing for Ulysse Martin (youle) who found & fixed this.
This is a bug in the multithreaded task manager in negative value range.
The problem here is that if previter is unsigned, the comparison in the
return statement is unsigned, and works incorrectly if stop < 0 &&
iter >= 0. This in turn can happen if stop is close to 0, because this
code is designed to overrun the stop by chunk_size*num_threads as
the threads terminate.
This probably should go into 2.78 as it prevents a crash.
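A self-contained illustration of the mixed signed/unsigned comparison (not the actual BLI_task code):

```cpp
#include <cstdio>

int main()
{
  unsigned previter = 4;  /* unsigned iterator state */
  int stop = -2;          /* the range is designed to be overrun below zero */

  /* Mixed comparison: 'stop' converts to 4294967294u, so the "still
   * inside the range" test is wrongly true and work never stops. */
  printf("unsigned compare: %d\n", previter < (unsigned)stop); /* 1, wrong */

  /* Fix: do the comparison in a signed type. */
  printf("signed compare:   %d\n", (int)previter < stop);      /* 0, right */
  return 0;
}
```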
Incorrect indexing was done in the array. Caused by 5abae51.
Not sure why it needed to be changed here, but the array here is supposed to be
loop data, so bringing back the loop index as it originally was. The shading was
wrong in edit mode with BI active as well (so it's not like it's needed for
BI only).
Patch in collaboration with Alexander Gavrilov (angavrilov), thanks!
Should be double-checked and ported to 2.78.
The crash was due to a name collision in Alembic objects, caused by the fact
that names derive from the one of the Blender object. An object having
multiple particle systems would thus give its name to various
subobjects.
Now use the name of the particle system for the Alembic object.
Another great example of inconsistency in usercount handling - dupli_group was considered
as refcounted by the readfile.c code (and hence by the library_query.c one, which is based on it),
but not by editor/BKE_object code, which never increased the group's usercount when creating
an instance of it, etc.
To be backported to 2.78.
If you use voronoi+bisect and solidify after FM, you can automerge now without having autohideable inner faces. This should have the same effect as autohide has in the regular use case.
Note: the cache will always be overwritten in autoexec sim mode, because keeping it may lead to incorrect sim results.
Furthermore, a couple of changes for external mode (an attempt to be more BCB compatible).
The solver iteration override was taken into account too late, after the first validation, which probably caused different simulation behavior using the default value for one frame.
The addressed issue is a regression from Blender 2.75, after the internal
switch from double to single precision floating-point numbers in the
Freestyle code base. Face normal calculations require the higher
precision during the computations, even though the results can be stored
as single precision numbers.
This reverts commit 2bd3acf7cd.
The commit is valid in theory - but in practice, it means nearly all edited hair systems
would need to be re-created from scratch... Not nice, so better to revert and note in code
that particle distribution is ugly and does not respect the 'use modifier stack' option in most cases.
Disclaimer: this is a tentative fix; it seems to be working, but you never know with particles.
If issues arise, we'll just revert this commit and hide the 'use modifier stack' option
for grid distribution instead.
This commit also tries to make the choice of which DM to use during distribution more generic
(deduplication of code), again hoping not to break anything. :P
Looks like some distros still provide llvm-3.4-devel, while no longer clang-3.4.
Since clang depends on llvm of the same version, checking clang only should ensure
we also have matching llvm... *sigh*
That might cause a few more objects to be considered deform-modified,
but it covers the common case of using a taper object without requiring
recursive checks.
In the worst case it'll be just some extra synchronization time; no render
time difference will happen for false positives, because of the extra checks
happening in Cycles.
When non-random, particle distribution used a small start offset (to avoid
zero-weight faces), which is fine with "continuous" entities like faces, but not
for discrete ones like vertices - in that case it was generating some undesired
"jump" over a few verts in case step was small enough
(i.e. total number of verts/particles was big enough).
Point-cached particles (those using simulations) would not update at all outside of
the first frame, due to the PSYS_RECALC_RESET flag being ignored in `system_step()`...
For some mysterious reasons, the update is still not fully functional outside of the start frame
(e.g. changing face distribution between random and jittered), but at least when choosing
'Vertices' you get particles from verts and not faces!
Follow the same logic in `psys_render_restore` as in `psys_render_set` - if hair and
the display percentage is not 100%, we have to recompute particles...
With regular 'emitter' particles just hiding some is fine (though using random here
will not give a precise proportion...).
readfile.c would increment the object usercount in three places where it should not.
Remember kids: Objects are **only** refcounted by Scene's bases, and Object->proxy!
rB5b3af3dd made the poll function here slightly too lax.
To be backported to 2.77 should we make an 'a' release.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D1861
This allows us to verify certificates of HTTPS connections, which is
mandatory for logins like on Blender ID.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D1845
World space and view space normals were mixed up, we should only convert from
world to view space if a custom normal is connected, otherwise it is already in
view space.
This seems a bit more stable, but does crash occasionally too, and may even
corrupt the blend. Workaround: open with official Blender and FM is gone; save,
and the blend should be usable again.
This should allow controlling the breaking behavior of the compounds better.
As before, the masses still have the biggest influence on the breaking behavior, followed by thresholds and
connection count per shard.
While this isn't essential, accessing this setting required navigating to each scene and using render menu.
Expose in sequencer UI for more convenient access.
A static schedule was responsible here...
Also, made a minor optimization in case adaptive (auto) subframes are enabled,
gives a few percent of speedup here.
Similar cause as in T47482: we used to have poor handling of 'user_one' cases of ID usage,
leading to inconsistent behavior depending on the order of operations, e.g.
here an object was used by a group but not linked in any scene - once linked in a scene,
its usercount would be 2, leading to 'making single copy' when it's actually not needed.
We now have better control here, so let's use it!
Note that other ID 'make single user' code will likely need similar fix (Images, etc.).
Safe to be backported to 2.77.
The problem was, during initialization of boids particles in `dynamics_step()`,
the psys of target objects was not obtained with the generic `psys_get_target_system()`
as later in the code, which could lead to some uninitialized `psys->tree` usage...
Think it's safe enough for 2.77, though not a regression.
This assumed the OSX SDK version matched the OSX version, which isn't always true.
Also problematic for maintenance and would make building older Blender versions on OSX fail.
Passing in pre-defined OSX_SYSTEM is also supported,
if you have multiple and want to select one.
Once a dupli had a valid bbox, that bbox would be used for all following objects
without a bbox, instead of skipping the clipping check.
Issue unveiled by rB3fa0a1a5bc0ff2, but not related at all (in fact, the bug was present before that commit).
The image filter was not set, but only if invoked from the toolbar (an image strip needs to be selected to see the button).
Caused by rB7fa72b8970; wasn't aware there's another button for this for image strips.
The current startup .blend has old (percent?) values for particle brush strength.
Since rBe4e21480d6331903c90ab073746484498441e1ac, UI controls do not automatically clamp values anymore,
which means when you first enable comb (or any other brush) you get a strength of 50, waaaayyyy too powerful.
This commit fixes this in `BLO_update_defaults_startup_blend`; note that it does not fix custom users'
startup files, nothing to do here...
Handling `me` data here is not a good idea anyway; we override it completely with data
from `tmp` (the crash came from freeing the already existing bb from me, while the pointer still existed in tmp).
(Rediscovered it while working on T47676...).
To be backported to 2.77.
Pointers of faces were passed as face keys during the parametrizer's face creation. Since those
addresses were different for every run, the layout of the faces ended up being different
in the internal hash, leading to an inconsistent order of their evaluation during LSCM solving,
and slightly different UV maps.
Solved by simply using the faces' indices as keys instead, which ensures we always get the same results
with the exact same input data now.
Many thanks to Roman Nagornov (RomanN) for raising the issue, investigating it and finding
the solution! And thanks to Brecht for quick review too.
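The effect is easy to reproduce with any ordered container keyed by address; a generic illustration (std::map here, not the parametrizer's actual hash):

```cpp
#include <map>

struct Face { int index; };

/* Keyed by pointer: iteration order follows heap addresses, which
 * differ between runs, so downstream evaluation order differs too. */
std::map<const Face *, int> faces_by_pointer;

/* Keyed by index: iteration order is identical for identical input,
 * which makes the solve (and the resulting UV map) reproducible. */
std::map<int, const Face *> faces_by_index;
```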
Code was using degrees as radians.
Still unclear why the default cube has a 180-degree angle but new meshes 30,
but that's kind of a separate topic which is to be addressed separately.
This is a subject for final 2.77 release.
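The classic shape of the bug, for the record:

```cpp
#include <cmath>

int main()
{
  const float deg2rad = 3.14159265f / 180.0f;
  const float angle_deg = 30.0f;

  float bad  = std::cos(angle_deg);            /* cos(30 radians): nonsense */
  float good = std::cos(angle_deg * deg2rad);  /* cos(30 degrees) ~= 0.866  */
  (void)bad; (void)good;
  return 0;
}
```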
See rB935e241fa6ea095493 for details of the issue, but the first fix caused regression T47632.
So for now handling the issue in a localized way; this is not a real solution (since this could happen
in other cases), but it will do for 2.77.
This commit is to be backported to 2.77.
This reverts commit 935e241fa6.
The issue will be fixed in a more localized way for now (not that nice, since this use-after-free can possibly happen
in other places too, but it's the only safe solution for 2.77).
This commit is to be backported in 2.77.
Now using the same UI as in object/armature properties; also saves one line in the 3DView panels. ;)
Nothing crucial there, but nice & safe to backport to 2.77 imho.
Brush.toggle_brush was allowed to be an invalid pointer,
it worked for the one operator that used it - but in general bad practice,
requiring a lookup on every access.
Ensure the pointer is kept valid now.
Metas are scanning all scenes' duplis,
which can go into particle systems without an initialized derived mesh.
For now just do a NULL check; it's not correct, but the real fix does not fit well with the current design.
'Distance from Object' color/alpha/thickness modifiers without a target
object were raising a run-time exception although it is not considered an
error condition.
The reported crash was confirmed as a segmentation fault in std::sort().
The cause of the crash was traced down to a binary comparison function
that was not satisfying the so-called strict weak ordering requirements of
the C++ standard sorting function. Specifically, the comparison operator
has to return false when two objects are equivalent (i.e., comp(a, a) must
be false), but that requirement was not met.
Since the binary comparison operator in question could be a user-defined
Python function, here a safety measure is implemented in the C++ layer to
make sure the aforementioned requirement is always satisfied.
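A minimal sketch of both the violation and a safety wrapper in the spirit of the fix (simplified to int; the real comparison may be a user-defined Python callable):

```cpp
#include <algorithm>
#include <vector>

/* comp(a, a) must be false; '<=' breaks that and can crash std::sort(). */
bool bad_less(int a, int b) { return a <= b; }  /* bad_less(x, x) == true */

/* Wrapper: never report "a < b" and "b < a" at the same time, whatever
 * the wrapped comparison returns. */
struct SafeComp {
  bool (*user)(int, int);
  bool operator()(int a, int b) const { return user(a, b) && !user(b, a); }
};

int main()
{
  std::vector<int> v = {3, 1, 2, 2, 1};
  std::sort(v.begin(), v.end(), SafeComp{bad_less});  /* safe despite '<=' */
  return 0;
}
```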
Previously, a warning was added to provide feedback to users trying to change the values
of driven properties, explaining why their edits would not have any effect on the property. However,
it turned out that instead of only showing up when the user tried to increment/decrement/slide
the property's value, it was also firing every time they were trying to edit the expression.
That however is not what we want at all!
This fix assumes that BUTTON_STATE_TEXT_EDITING is used for expression editing, and
BUTTON_STATE_NUM_EDITING (or everything else) refers to the user trying to adjust the
value normally.
When the "Use Y" option in the Copy Rotation constraint is disabled, the constraint
behaves erratically when rotating the target on all axes at the same time.
This is partially to be expected due to the way that euler rotations work
(i.e. the rotation orders stuff - you should use a rotation order based on most to
least important/significant rotations). Hence, by locking Y, you're causing accuracy
problems for Z.
What was not expected though was that changing the rotation orders on the objects
involved (for the record, it's the constraint owner that counts) did nothing.
It turns out that for objects, the rotation order settings were getting ignored!
This commit fixes this problem, and this particular case can be resolved by using
"XZY".
Notes:
* Since all object constraints were previously working on the assumption that they
used XYZ (default) order, it is possible that this change may have the unintended
consequence of changing the behaviour of some rigs which relied on the buggy
behaviour. Hopefully this will be a rare occurrence.
Sometimes the timeline header didn't update after time-scrubbing in the graph
editor ends, leaving the "Pause" button visible until the next refresh of the
timeline (e.g. on mouse over)
It was possible to miss some intersections, caused by the wrong sign of barycentric
coordinates.
Cases where one of the coordinates is zero and the others are negative were not
handled correctly.
The issue was caused by wrong sign check. It originally came from more optimized
Cycles code where because of other reasons it wasn't visible yet. But in fact it
should be solved there as well.
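In sign-test form the corrected logic looks roughly like this (a sketch, not the actual kernel code): zero must be allowed to match either side.

```cpp
/* A hit requires u, v, w to share a sign, with zero matching either side.
 * A strict form like (u > 0 && v > 0 && w > 0) || (u < 0 && v < 0 && w < 0)
 * wrongly rejects edge hits such as (0, -1, -2). */
bool barycentric_signs_ok(float u, float v, float w)
{
  return (u >= 0.0f && v >= 0.0f && w >= 0.0f) ||
         (u <= 0.0f && v <= 0.0f && w <= 0.0f);
}
```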
This fixes T46381. Normally BL_Action::Update (manages action time, end, loop…) should be called the same number of times as BL_Action::UpdateIPO (updates action position, scale etc… in the game object).
But the bug report shows that UpdateIPO is called one less time than Update. To fix it I reverted the commit 362b25b382 and implemented a mutex in BL_Action::Update.
Example file: {F245823}
Reviewers: lordloki, kupoman, campbellbarton, youle, moguri, sybren
Reviewed By: youle, moguri, sybren
Maniphest Tasks: T39928, T46381
Differential Revision: https://developer.blender.org/D1562
Same case as with the space char really: one should not use those special chars in
filenames, but they are globally supported by all current FS/OS, so no real reason
to enforce that behavior on users here.
To be backported to 'a' release.
Not all optional image formats are #define'd in submodules like the DDS read/write code.
This means values of `eImbTypes` would not always be the same in all contexts, yuck!
This is a regression and should be backported to the 'a' release.
Even if the weights are normalized, the weighted sum of normalized vectors
usually does **not** give a normalized vector (unless all source vectors
are aligned).
This probably was not a big issue in most cases, since we usually interpolate
similar vectors here - but still!
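For illustration: the weighted sum of the unit vectors (1,0,0) and (0,1,0) at w = 0.5 has length sqrt(0.5) ≈ 0.707, hence the re-normalization. A minimal sketch:

```cpp
#include <cmath>

struct float3 { float x, y, z; };

float3 interp_normals(const float3 &a, const float3 &b, float w)
{
  float3 n = {a.x * (1.0f - w) + b.x * w,
              a.y * (1.0f - w) + b.y * w,
              a.z * (1.0f - w) + b.z * w};
  const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
  if (len > 0.0f) {  /* opposite inputs can cancel out exactly */
    n.x /= len; n.y /= len; n.z /= len;
  }
  return n;
}
```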
The final stage of the process (copying/interpolating new dst cddata from src cddata)
was simply broken in the normal case, where we need to convert from source to destination
object space.
This patch is a bit verbose, but I cannot see how to avoid it really.
To think this code has been in master for over 6 months and it only gets reported now... :/
Filenames over 128 chars would crash.
Move BLI_newname into file_ops,
this was only used in one place and isn't all that re-usable.
Also remove special behavior for 4 digits.
Currently OpenCL devices are packing images into a single texture,
which means technically the number of textures is not limited here.
Now OpenCL will use the same number of textures as the CPU. If we want
to bump the number of textures further, these values are to be modified
in sync.
NOTE: OpenCL still does not support float textures.
Original patch from a guy called bliblubli in the tracker with
some own modifications.
Reviewers: brecht, dingto, sergey
Differential Revision: https://developer.blender.org/D1530
Looks like instancing of smoke sim is not supported at all
(it was fake-working in the 3DView in 2.74, but not rendered).
But it should not crash - code was adding a temp 'fromdupli' base to the delayed
drawing list...
Nice to backport this to 2.76, I think.
The "tweak user" flag used to flag strips using the same action as the active strip
was not getting set on other strips that live on the same track as the active one.
Strips with this flag set are shown with a red colour to indicate that editing the
action may have the unintended consequence of modifying another strip.
We now have to explicitly ensure tessellation of DMs when we need it.
Note: maybe we could use looptris here as well?
Not a regression (bug already present in 2.75, but not 2.74); nice to backport to 2.76 nonetheless.
Own regression caused by fix for T46067,
edit-mode bvh only contained unselected faces.
This commit adds support for an edit-mode bvh containing all faces.
Would write 1.04 seconds as `00:00:01,40` instead of `00:00:01,040`...
Anyway, we already have a BLI API for timecodes; much better to add the
SubRip timecode format there, which heavily simplifies the code.
To be backported to final 2.76.
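The formatting bug in miniature: without zero-padding, 40 milliseconds prints the same digits as 400 milliseconds would.

```cpp
#include <cstdio>

int main()
{
  const double t = 1.04;  /* seconds */
  const int ms = (int)(t * 1000.0 + 0.5) % 1000;  /* 40 */

  printf("00:00:01,%d\n", ms);    /* "00:00:01,40"  -- missing zero-pad */
  printf("00:00:01,%03d\n", ms);  /* "00:00:01,040" -- valid SubRip     */
  return 0;
}
```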
If t->mode remains edge/vert slide, restoreTransObjects() ends up calling
projectVert/EdgeSlideData(), which tries to access invalid customdata...
Not sure why we call restoreTransObjects() and resetTransRestrictions() again here tbh,
but safer not to change that for now.
Should be backported to 2.76 if possible.
Windows-only bug: mmap & co are not threadsafe by default on this platform, so we have to add a dedicated
spinlock for them on win32.
Note that we may try to get rid of those mmap calls later, but not for 2.76!
To be backported to final 2.76...
The issue was introduced by the original Cycles point density support commit;
it led to a constant density of 1 for the object vertices point density source.
This is a mere bandage; that whole area is known broken anyway, but at least it should prevent the crash.
Note that that kind of stuff (the efd->index being a pointer) is really bad practice imho...
Should be backported to final 2.76.
Calling event handling recursively during window live resize is problematic,
the code wasn't designed to do that. Instead postpone event handling until
after live resize.
This was broken for ages I think; it did not really hurt since we usually never use libs' names
to access them. Rather bad behavior however, breaking a ground rule of our ID system!
Also, no real reason to add new libraries to the new (split) Main at all; libraries are
never considered linked datablocks, which means they should always be in the 'main' Main->library list.
Not a regression, but should be included in 2.76 imho.
Compositing might make the main thread so busy that the animation is considered done (due to its duration) before the final position is reached.
Also added check to avoid unnecessary redraws.
The issue is, when 'Rotate Around Selection' is set, in Edit mode we do a fake transform operation
to get the center point around which to rotate. For curves, most transform operations involve
a check of handle types. For now, added 'TFM_DUMMY' as an exception here.
Think it would be best to actually undo those changes in case of a cancelled operation,
but this is much more involved, while this fix is safe enough to be included in final 2.76.
Basically, after border-selecting, a wrong file was selected when using arrow keys if the screen was scrolled a bit vertically. The reason was that we didn't use correct view-space coordinates, but region-space coordinates, for measuring the distance from the mouse to the first/last file in the selection after border select.
Added some extra seed to the hash, so it's now less likely to give repetitive
patterns at values around zero.
This will change the distribution of bricks for existing files, but it's something
inevitable.
When the active object had no shapekey data, trying to create a new action from the
Shape Keys mode of the DopeSheet would crash. The segfault here was a silly regression
caused by my earlier Action Stashing work.
However, the old (pre-Action Stashing) code here also wasn't that great either.
While it didn't crash, it would still silently create a new action, even if that
could not get assigned/used anywhere. To prevent both of these problems from
happening again, I've added additional null checks, as well as beefing up the poll
callback here to forbid keyframing.
The issue has been here since we changed drawing code for meshes to use
vertex arrays instead of immediate mode when VBO was off. Basically we
should now always invalidate the GPU objects regardless of the VBO
setting in the preferences.
The bug has been there since 2.73 at least, but what made it apparent
now is that new version resets preferences and as an extension the VBO
flag.
Should be included in final 2.74 release
Typical error using the '->next' member of a freed linked list item. A bit trickier
even here, since we have some recursion...
Trivial fix for a nasty crasher; safe for 2.74 imho?
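The standard defensive pattern for such traversals, sketched generically:

```cpp
struct Link { Link *next; };

/* Fetch 'next' before the node is freed; reading 'link->next' after
 * free_node(link) dereferences freed memory. */
void free_list(Link *first, void (*free_node)(Link *))
{
  Link *next = nullptr;
  for (Link *link = first; link; link = next) {
    next = link->next;  /* grab it while the node is still alive */
    free_node(link);
  }
}
```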
It was an issue with which bounds to use for a BVH node during construction.
Also corrected the case when all 4 primitive types are in the range and
there are also objects in the same range.
Input constants are to be connected before removing proxies,
otherwise node groups might give a totally different result.
This is a regression and is to be put into the final release.
Simply check and early return in case we have no source or destination items
(verts/edges/loops/polys) available...
Also, fix an assert in `BKE_mesh_calc_normals_poly()`, when called with no poly.
The issue was caused by an accident in c8a9a56, which not only disabled glossy
reflection if Glossy visibility is disabled, but also Diffuse reflection.
Quite safe and should go to the final release branch.
The issue was introduced in 01ee21f, where I didn't notice the *_setup()
function only does partial initialization, and some of the parameters
are expected to be initialized by the callee function.
This was hitting only some setups, so tests with benchmark scenes
didn't unleash the issues. Now it should all be fine.
This is to go to the 2.74 branch, and we actually might re-AHOY.
340b76b42c
Reported by formerly Old_Demon on blenderartists.
Apparently this caused old files to lose their links to material sockets
(noob own mistake from inexperience with the node system).
This should either be included in the release with the version check being
set to version 2.73 and subversion 10, without tweaking the
BKE_blender.h file,
OR
340b76b42c should be reverted for this
release.
Thanks to Lukas for checking this out.
Conflicts:
source/blender/blenkernel/BKE_blender.h
source/blender/blenloader/intern/versioning_270.c
Transformed 'OrientationHelper' class into 'orientation_helper_factory' function,
which returns an OrientationHelper customized class with specified default axes.
For some reason a recent change in avoiding non-aligned eigen vectors
was behaving differently for cmake and scons. Made it a bit different
now by storing scalars. This is a more robust approach anyway, because
it's not really guaranteed Mat.col() gives a pointer inside the data,
depending on column-major vs. row-major storage.
This is to be backported to 2.74 branch.
mode.
Yes it will, because those modes stay active. So on the user side, expose
the depth of field option always (I don't see why not), but disable SSAO in
wireframe/bounding box mode. It is a known limitation that compositing
does not support antialiasing yet, but better to give users some more
control.
This could be included in the final release, but it's not that serious
either.
Basically, out-of-range access could happen when opening files made in 2.72, when
the new icons for texture painting were added. Apparently some more
caution is needed here.
Fix T44007: Cycles Volumetrics: block artifacts with overlapping volumes
The issue was caused by uninitialized parameters of some closures, which
led to unpredictable behavior of shader_merge_closures().
The new 'strip' snapping was simply not computed in case of a constrained transform, hence the initial
'0' value was used as the frame offset in this case.
This commit reorganizes that snapping a bit, to keep it more 'confined' in the dedicated
`snapSequenceBounds()` function. It still needs a minor hack (setting the snapping mode to something other than
the default `SCE_SNAP_MODE_INCREMENT`, to avoid this snapping being called by the constraint code).
Thanks to Antony for review and enhancements.
This fix should be backported to 2.74.
It was actually an old issue with a wrong conversion happening for muted
nodes, which wasn't visible before the memory optimization commit.
This is to be backported to the final release.
The way we were getting the diff to apply to vcos from the target object was just bad!
Also fixed another related issue - negated scale would be clamped to nearly zero;
now only consider the absolute version of the size (we do not care about its sign here anyway).
This should be backported to 2.74 (with previous commit too).
This patch reassigns the relative of all keyblocks to the relative
of the deleted keyblock. And it fixes the misalignment of the index values
after the keyblock is deleted.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D1176
I noticed our version code and subversion got out of sync in the past; maybe
that's what the issue was here.
Deleting the entries from the .xml makes it fall back to the default values.
The fix was really flaky: during speed benchmarks I had an
abort() in the fallback block to be sure it never runs in production
scenes, but that affected the optimization as well. Without this
abort there's quite a bad slowdown of 5-7% on the renders, even though
the Plücker fallback was never run.
This is all weird, so for now reverting the change which affects
all the production scenes; will look into alternative fixes for
the original issue with precision loss on huge planes.
This reverts commit 9489205c5c.
The problem here was that when a Grease Pencil datablock is shared between
the 3D view and another one of the editors, all the strokes were getting handled
by the editing operators, even if those strokes could not be displayed/used
in that context. As a result, the coordinate conversion methods would fail,
as some of the needed data would not be set.
The fix here involves not including any offending strokes in such cases...
Conflicts:
source/blender/editors/gpencil/gpencil_edit.c
The issue was caused by the wrong order of scene device updates, which could
lead to missing object flags in the shader kernel.
This patch solves a bit more than that, making sure object flags are
always properly updated, so adding/removing a volume BSDF will properly
reflect on the viewport, where the camera might end up being in a volume and so on.
A really stupid issue caused by a typo in a bitfield bit led to a bit conflict.
Not sure how it was done; could be some bad merge conflict resolution in the
original commit, or just pure man stupidity.
This is a nice example of when having a set of small test render scenes hooked
up to ctest would really help.
It's probably not that much of a stopper issue (even though still quite bad), since it
was made 2 months ago. But if we ever do an 'a' this time, it's a nice change
to include.
Happens because material setting now occurs in the derived mesh drawing
routine, as it should. However, that means it also happens during
selection, and that influenced the drawing state somehow.
In 2.72 this did not occur because material setting happened during draw
setting (skip or draw) instead of after the draw setting passed (so
selection would skip it by using another draw setting function). Of course
this violated design, but it worked.
Made it now so backbuffer selection does not enable materials (it's
redundant in those cases anyway).
This could be ported to a possible 'a' release, but as is classic with
display code there may be some other places where it could backfire.
Tested the fix with texture/vertex painting and selection, which use the
backbuffer for both subsurf and regular meshes, and it seems to work OK.
Own fault in rBb154aa8c060a60d to fix T42447... Reverted that commit, and added a
kind of not-so-nice hack instead.
Note the root of the issue comes from the special case we are doing here re the 'Local'
space of parent-less objects. In that case, local space should be the same as
the world one, but instead we apply the object rotation to it... This is inconsistent
with all other cases and could very well lead to other issues like T42447, but I'm afraid
fixing that properly would be rather hairy - not to mention it would likely break
all existing riggings etc. :(
Should be safe for a 2.73a, shall we need it.
Generally for build systems, libraries that do not depend on other
libraries, such as system libraries, OpenGL etc., always go at the end.
We could even get rid of some duplicate dependency libraries here, but
auto-duplication by build systems and differences between OSes make this
difficult.
GTest still duplicates all libraries twice to solve some issues, which is
weird (maybe the libs are not sorted correctly for some reason? needs
investigation).
Instead of getting fancy this time, we'll just use Mahalin's simpler
fix. This may have slight performance impacts, but it is a lot simpler
than the previous fix and shouldn't cause as many bugs.
They're not intended to be executed directly, and it seems the mode change happened
by accident.
Setting -x for these files to avoid possible incidents from trying to run these
files in a shell.
The issue was originally caused by 2e8ba17 removing a necessary call to
GPU_enable_material(). It was probably removed because in some cases the
material was enabled after calling setDrawOptions.
That wasn't always the case for edit mode.
This absolutely needs to be included in 2.73.
This isn't so bad until one re-poses the armature and then uses undo.
It is the same issue as with edit mode, which was solved back in the day.
This reverts commit 1549fea999.
After some further discussion with other developers in the team it became
clear there's no correct solution here. It is just a matter of what's
more convenient in a particular case.
We're just going back to the old code to avoid possible frustration with
older files in newer Blenders. This also means all HSV/HSL is considered
to be "linear" in the shading nodes.
Will be ported to 2.73 final.
Safe for 2.73...
This reverts rB9b0ab890676790bb1e8e77797629b889ea66f69e - that threshold needed to be set to a small
negative value to remove the last artifacts reported in T39735, but now I could not reproduce
any with the previous 0.0f value, so restoring it for the time being.
If this 'shadowed neighbor face' case re-appears, we can always choose a value in-between, like -1e-18f...
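A sketch of the trade-off being described, with names invented for clarity (not the actual code around T39735): a strictly-zero threshold rejects values that sit exactly on a neighboring face, while a tiny negative one also accepts results that differ from zero only by floating-point noise.

```c
#define FACE_SHADOW_THRESHOLD 0.0f  /* restored value; was -1e-18f */

static int hit_is_valid(float signed_dist)
{
	return signed_dist >= FACE_SHADOW_THRESHOLD;
}
```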
Basically shadow rays were totally broken and most of the time did not record
any intersections, leading to really bad rendering artifacts.
This commit makes it so the render result is the same regardless of the
enabled optimization level.
birth coordinates as point source, but you can override this to get the coordinates of the current particle simulation state. However, as the particle cache is somewhat broken and shows wrong positions at the beginning if the simulation has been run before, this can lead to errors. Best practice is to keep this option enabled; only disable it if you really need to!
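A hedged sketch of this choice; the struct layout and flag below are assumptions, not the actual particle system code. Birth coordinates are stable, while the current simulation state can be wrong early on when the cache is stale.

```c
typedef struct ParticleState { float co[3]; } ParticleState;
typedef struct Particle {
	ParticleState birth;  /* position at emission time */
	ParticleState state;  /* current simulated position */
} Particle;

static const float *point_source_co(const Particle *pa, int use_birth_coords)
{
	/* Default (recommended): use_birth_coords != 0. */
	return use_birth_coords ? pa->birth.co : pa->state.co;
}
```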
The number of neighbors is exactly the same as the number of faces - each
face has one neighbor. Moved this into the face section of the voro++
output too, since it's essentially face data.
Use the available print options from voro++ to get the number of verts and
loops in advance, to avoid reallocs. Also fixed bad mesh results with
missing vertices.
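A sketch of the preallocation idea, assuming the per-cell vertex/face counts have already been parsed from voro++'s custom output (it can print such counts per cell); the struct name here is illustrative. Summing the counts up front allows one exact allocation instead of growing arrays while parsing.

```c
#include <stdlib.h>

typedef struct CellCounts { int totvert, totloop, totpoly; } CellCounts;

/* Returns a pointer to an array of float[3] vertex coordinates,
 * allocated once at the exact total size. */
static float (*alloc_all_verts(const CellCounts *cells, int totcell))[3]
{
	size_t totvert = 0;
	for (int i = 0; i < totcell; i++) {
		totvert += (size_t)cells[i].totvert;
	}
	return malloc(sizeof(float[3]) * totvert);
}
```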
The goal of this commit is to support NGons in the boolean modifier
(currently the mesh is tessellated before performing the boolean
operation) and also to remove the limitation of losing edge custom
data layers after a boolean operation is performed.
The main idea is to make the boolean modifier use the Carve library
directly via its C-API, avoiding the intermediate BSP level, which
doubled the amount of memory needed for the operation and which
also added a considerable amount of overhead time.
Reviewers: lukastoenne, campbellbarton
CC: scorpion81, karja, jsm
Differential Revision: https://developer.blender.org/D274
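A hedged pseudo-interface for the direct path this describes; the names below are illustrative, not the real carve-capi signatures. The point is that both operands are imported once and evaluated in a single boolean call, with no intermediate BSP representation duplicating the mesh data.

```c
typedef struct BoolMesh BoolMesh;

/* Import an operand mesh (NGons allowed, custom data preserved). */
BoolMesh *bool_import_mesh(const float (*verts)[3], int totvert,
                           const int *face_verts, const int *face_sizes,
                           int totface);
/* Evaluate one boolean operation directly; returns nonzero on success. */
int bool_perform(BoolMesh *a, BoolMesh *b, int operation, BoolMesh **r_result);
/* Convert the result back to the modifier's mesh representation. */
void bool_export_mesh(const BoolMesh *m);
void bool_free_mesh(BoolMesh *m);
```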
BMesh element arrays.
This will make Shard work more like a mini-Mesh struct and allow mesh data
to be stored in blend files.
Larger construction methods currently disabled, TODO.
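A hedged sketch of the "mini-Mesh" layout this implies; the element types are reduced stand-ins and the field names are assumptions, not the exact Fracture Modifier struct. Because the shard owns its element arrays, it can be serialized into .blend files like regular mesh data.

```c
typedef struct MVert { float co[3]; } MVert;
typedef struct MLoop { unsigned int v; } MLoop;
typedef struct MPoly { int loopstart, totloop; } MPoly;

typedef struct Shard {
	MVert *mvert;   /* owned element arrays, mirroring Mesh */
	MLoop *mloop;
	MPoly *mpoly;
	int totvert, totloop, totpoly;
	float centroid[3];  /* assumed per-shard convenience data */
} Shard;
```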
FractureModifierData.
This keeps the derived mesh inside the modifier system and is more
in line with common principles in other modifiers.
Also did lots of style cleanup.
Was a regression from rBaa3c06b41ca9; hope this time all things are OK again (note the X/Y subdivision values are still different than before (-1 for the same result), but IMHO they make more sense this way).
[BlenderWiki page about Fracture Modifier](https://wiki.blender.org/index.php/User:Scorpion81/Fracture_Documentation)
## Useful Videos
[Development and experimental videos from the fracture modifier build & helpers addon by Dennis Fassbaender](https://www.youtube.com/playlist?list=PLyWdRVpqt5ZdQ6SdPuLQ76nShiuwXu_uC)<br>
[Simulation Videos made by Kai Kostack](https://www.youtube.com/playlist?list=PLGYnM-Mk7-OeqpCBmw6ZjFyiopkvkE6_S)
COMMAND ${CMAKE_COMMAND} -E copy "${PYTHON_OUTPUTDIR}/python${PYTHON_SHORT_VERSION_NO_DOTS}${PYTHON_POSTFIX}.lib" ${BUILD_DIR}/python/src/external_python/run/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}.lib # missing postfix on purpose, distutils is not expecting it
COMMAND ${CMAKE_COMMAND} -E copy "${PYTHON_OUTPUTDIR}/python${PYTHON_SHORT_VERSION_NO_DOTS}${PYTHON_POSTFIX}.lib" ${BUILD_DIR}/python/src/external_python/run/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}${PYTHON_POSTFIX}.lib # other things like numpy still want it though.