Previous fix for T53430 caused T54200.
The edge case for soft & hard cuts wasn't working,
where the strip used start/end-still & the frame was placed exactly on
the start/end of the sequence content.
T54200 fixed the end-still case but broke hard-cuts for all other cases.
This fixes the case for soft/hard cuts with/without start/end-still.
Now repeating the operator will use the previously chosen offset, either with
the modal operator or typed in. The modal operator will still start at zero.
The check to see if `use_advanced_hair` was enabled was actually in two places
(render panel `draw` function and physics panel `poll` function). As these
properties are only in one place now, the check in `draw` isn't needed anymore.
Related: T53513, a6c69ca57f
We shouldn't mix image pool acquisition with and without a user provided;
the fact that image.c internally uses the last frame from the Image datablock
confuses the logic.
This can be very slow if it contains a big texture, and it's not
necessarily set up in a useful way anyway, and materials can be used
in multiple scenes.
Was happening when viewport visibility on the particle system is disabled.
This became an issue after c45afcf, but the actual issue goes a bit deeper
and the following aspects were involved:
- Relations builder for particle system was ignoring the particle system if
its visibility is not enabled for the viewport. This is something that
shouldn't have been done -- depsgraph relations are supposed to be the
same no matter if it's viewport or render.
- Relation builder was only dealing with duplication set to object, but
was ignoring group duplication.
This is NOT a regression since 2.79, but a regression since 2.79a-rc1.
A single diagonal axis was used for sorting coordinates;
the algorithm relied on users not having axis-aligned vertices.
Use BLI_kdtree to remove doubles instead.
Overall speed varies, it's more predictable than the previous method.
Some typical tests gave speedup of ~1.4x - 1.7x.
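For illustration, a minimal sketch of the approach, assuming the 2.7x-era BLI_kdtree API (BLI_kdtree_new/insert/balance/range_search); the helper name and map convention are made up, not the actual commit code:
```
/* Hypothetical sketch: find doubles within a merge threshold using a
 * kd-tree instead of sorting along a single diagonal axis.
 * doubles_map[i] == -1 means "keep vertex i"; otherwise it holds the
 * index of the vertex that i merges into. */
#include "BLI_kdtree.h"
#include "MEM_guardedalloc.h"

static void tag_doubles(const float (*verts)[3], int totvert,
                        float threshold, int *doubles_map)
{
	KDTree *tree = BLI_kdtree_new(totvert);
	for (int i = 0; i < totvert; i++) {
		BLI_kdtree_insert(tree, i, verts[i]);
		doubles_map[i] = -1;
	}
	BLI_kdtree_balance(tree);

	for (int i = 0; i < totvert; i++) {
		KDTreeNearest *nearest = NULL;
		if (doubles_map[i] != -1) {
			continue;  /* already merged into an earlier vertex */
		}
		const int found = BLI_kdtree_range_search(tree, verts[i], &nearest, threshold);
		for (int k = 0; k < found; k++) {
			const int j = nearest[k].index;
			if (j > i && doubles_map[j] == -1) {
				doubles_map[j] = i;  /* merge j into i */
			}
		}
		if (nearest) {
			MEM_freeN(nearest);
		}
	}
	BLI_kdtree_free(tree);
}
```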
FFMPEG uses int for the numerator, while Blender uses a short. So in
cases where people gave weird, exotic framerate values and we could not
reduce the numerator enough, we'd get totally weird values (even negative
frame rates sometimes!)
Now we add checks for short overflow and approximate as best as possible
in that case (error should not matter unless you have shots of at least
several hundreds of hours ;) ).
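A minimal sketch of the idea (illustrative names, not the actual Blender/FFmpeg code): when the reduced fraction still doesn't fit in a short, rescale both parts so the rate is approximated instead of overflowing:
```
#include <limits.h>

static void frame_rate_to_short(int num, int den, short *r_num, short *r_den)
{
	if (num > SHRT_MAX || den > SHRT_MAX) {
		/* Rescale so the larger value fits exactly into a short; the
		 * relative error stays tiny for any realistic frame rate. */
		const double scale = (double)SHRT_MAX / (double)(num > den ? num : den);
		num = (int)((double)num * scale + 0.5);
		den = (int)((double)den * scale + 0.5);
		if (den < 1) {
			den = 1;  /* rates beyond SHRT_MAX simply get clamped */
		}
	}
	*r_num = (short)num;
	*r_den = (short)den;
}
```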
This reverts commit dc2617130b, which disabled
writing of previews for undo. While this uses some memory, re-rendering all
previews is very expensive, especially if for example you have lots of materials
using high-res image textures.
The idea is to avoid any threading overhead when we start pushing tasks in a
loop, similarly to how we do it from the new dependency graph. Gives a couple
percent of speedup here, but also improves scalability.
We cannot store pointers to elements of a collection property in the
case we modify that collection. This is like storing pointers to
elements of an array before calling realloc().
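The failure mode in a nutshell, as a generic C illustration (not the actual RNA code):
```
#include <stdlib.h>

void dangling_pointer_example(void)
{
	int *items = malloc(4 * sizeof(int));
	int *first = &items[0];  /* pointer to an element of the buffer */

	/* Growing the collection may move the whole buffer... */
	items = realloc(items, 1024 * sizeof(int));

	/* ...so 'first' may now dangle, exactly like a stored pointer to a
	 * collection-property element after the collection is modified. */
	/* *first = 1;  <-- undefined behavior */
	free(items);
}
```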
The issue was caused by the light sample being evaluated to NaN at some point.
This is the root of the cause which is to be fixed, but it is very hard to
trace down, especially via ssh (the issue only happens on AVX2 release builds).
Will give it a closer look when back at my AVX2 machine.
Until then this is a good check to have anyway; it corresponds to what's
happening in the regular radiance sum.
Our own implementation was behaving differently compared to OSL and GPU:
on border pixels OSL and CUDA were interpolating with black, but we were
clamping the coordinate.
This partially fixes issue reported in T53452.
Similar change should also be done for 3D interpolation perhaps, but this
is to be investigated separately.
This would lead to sock.default_value pointing to the wrong data type,
possibly causing crashes. Unfortunately, this bug will still exist for
older Blender versions that try to load newer files, which makes
changing the type of a node socket problematic.
Drawing hair weights read before the hair array start.
This code could be improved since it currently copy-pastes from
do_particle_interpolation, but this would need larger changes.
For now just correct existing logic.
Solves these security issues from T52924:
CVE-2017-12102
CVE-2017-12103
CVE-2017-12104
While the specific overflow issue may be fixed, loading the repro .blend
files may still crash because they are incomplete and corrupt. The way
they crash may be impossible to exploit, but this is difficult to prove.
Differential Revision: https://developer.blender.org/D3002
Solves these security issues from T52924:
CVE-2017-12081
CVE-2017-12082
CVE-2017-12086
CVE-2017-12099
CVE-2017-12100
CVE-2017-12101
CVE-2017-12105
While the specific overflow issue may be fixed, loading the repro .blend
files may still crash because they are incomplete and corrupt. The way
they crash may be impossible to exploit, but this is difficult to prove.
Differential Revision: https://developer.blender.org/D3002
CUDA 9.0.176 apparently caused some slow down on high-end Pascal cards that can be mitigated by increasing the number of registers. See https://developer.blender.org/F1142667 for a detailed comparison.
We tried to do as much as possible in a single threaded callback, which
led to using some nasty tricks like fake atomic-based spinlocks to
perform some operations (like float addition, which has no atomic
intrinsics).
While OK with a 'standard' low number of working threads (8-16), because
collisions were rather rare and the implied memory barrier was not *that*
much overhead, this performed poorly on more powerful systems reaching
100 threads and beyond (like workstations or render farm hardware).
There, both memory barrier overhead and more frequent collisions would
have a significant impact on performance.
This was addressed by splitting the process further; we now have three
loops, one each over polys, loops and vertices, and we added intermediate
storage for weighted loop normals. This allows us to completely avoid any
atomic operations in the body of the threaded loops, which should fix the
scalability issues. This costs us slightly higher temp memory usage
(something like 50MB per million polygons on average), but that looks
like an acceptable tradeoff.
Furthermore, tests showed that we could gain an additional ~7% of speed
in computing normals of heavy meshes by also parallelizing the last two
loops (might be 1 or 2% on overall mesh update at best...).
Note that further tweaking of this code should be possible once Sergey
adds the 'minimum batch size' option to the threaded foreach API, since very
light loops like the one over loops (a mere v3 addition) require much bigger
batches than heavier code (like the loop over polys) to keep optimal
performance.
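A condensed, single-threaded sketch of the data layout (triangles and illustrative names only; the real code works on polys/loops and weights by corner angle): per-corner temp storage means each pass writes only to slots owned by its own iteration index, so the threaded versions need no atomics:
```
#include <math.h>
#include <stdlib.h>
#include <string.h>

typedef struct { int v[3]; } Tri;

static void tri_normal(const float a[3], const float b[3], const float c[3],
                       float no[3])
{
	const float e1[3] = {b[0] - a[0], b[1] - a[1], b[2] - a[2]};
	const float e2[3] = {c[0] - a[0], c[1] - a[1], c[2] - a[2]};
	no[0] = e1[1] * e2[2] - e1[2] * e2[1];
	no[1] = e1[2] * e2[0] - e1[0] * e2[2];
	no[2] = e1[0] * e2[1] - e1[1] * e2[0];
}

void accumulate_vertex_normals(const Tri *tris, int tottri,
                               const float (*co)[3], int totvert,
                               float (*r_vnor)[3])
{
	/* Temp storage: one normal per corner (the extra memory noted above). */
	float (*cnor)[3] = malloc(sizeof(*cnor) * (size_t)tottri * 3);

	/* Pass 1 (parallelizable over tris): each corner slot is written by
	 * exactly one triangle, so threads never collide. */
	for (int t = 0; t < tottri; t++) {
		float no[3];
		tri_normal(co[tris[t].v[0]], co[tris[t].v[1]], co[tris[t].v[2]], no);
		for (int c = 0; c < 3; c++) {
			memcpy(cnor[t * 3 + c], no, sizeof(no));
		}
	}

	/* Pass 2 (parallelizable over verts given a vert->corner map; shown
	 * serially here): sum each vertex's corner normals and normalize. */
	memset(r_vnor, 0, sizeof(*r_vnor) * (size_t)totvert);
	for (int t = 0; t < tottri; t++) {
		for (int c = 0; c < 3; c++) {
			float *vn = r_vnor[tris[t].v[c]];
			vn[0] += cnor[t * 3 + c][0];
			vn[1] += cnor[t * 3 + c][1];
			vn[2] += cnor[t * 3 + c][2];
		}
	}
	for (int v = 0; v < totvert; v++) {
		float *vn = r_vnor[v];
		const float len = sqrtf(vn[0] * vn[0] + vn[1] * vn[1] + vn[2] * vn[2]);
		if (len != 0.0f) {
			vn[0] /= len; vn[1] /= len; vn[2] /= len;
		}
	}
	free(cnor);
}
```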
It is a bit annoying to have per-DM locking, but it's way better (as in, up to
4 times better) for playback speed when having lots of subsurf objects.
This a small cleanup of something which I think is just a typo anyway.
With all the recent talks of harrassment and groping, I think we better avoid
that within our source code! :)
Reviewers: sergey
Reviewed By: sergey
Tags: #motion_tracking
Differential Revision: https://developer.blender.org/D2979
Technically this was introduced in 01b547f993 when
exposing size and randomness for particles.
This "fixes" makes sure particle size and size randomness is always in the
Render panel when it affects the particle system (i.e., always unless using
advanced hair or hair that is not rendering groups/objects).
The issue was caused by the SpinLock implementation in the old pthreads we are
using on Windows. Using a newer one (2.10-rc) demonstrates the same exact
behavior. But likely using our own atomics and memory barrier based
implementation solves the issue.
A bit annoying that we need to change such a core part of Blender just to make
a specific CPU happy, but it's better to have artists happy on all computers.
There are no expected downsides of this change, but it is in the so-called
"works for me" category. Let's see how it all goes.
There were some changes to namespaces, which caused ambiguities.
Replaces using-namespace with the explicit symbols we need. It is a good idea
to NOT pull in the whole namespace anyway!
Now we replace O(N^2) computational complexity with an O(N) extra memory
penalty. Memory is much cheaper than CPU time. Keep in mind, the memory
penalty is about 4 megabytes per 1M vertices.
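The generic shape of such a trade, as an illustrative sketch (not the actual commit code): spend one int per vertex -- the quoted ~4 bytes -- on a lookup table built once, so each query is O(1) instead of an O(N) search:
```
#include <stdlib.h>

/* Build map[orig] = new index in O(N); afterwards every "which new vertex
 * corresponds to original vertex X?" query is a single array read,
 * instead of scanning the whole array per query (O(N^2) total). */
int *build_orig_index_map(const int *orig_index, int totvert)
{
	int *map = malloc(sizeof(int) * (size_t)totvert);  /* 4 bytes/vertex */
	for (int i = 0; i < totvert; i++) {
		map[i] = -1;
	}
	for (int i = 0; i < totvert; i++) {
		if (orig_index[i] >= 0 && orig_index[i] < totvert) {
			map[orig_index[i]] = i;
		}
	}
	return map;
}
```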
The issue here is that we cannot read the scale from the socket when
determining the dependent area of interest: this area will depend on the
current pixel. Now fall back to a more stupid but reliable approach: if the
scale size input is connected to some nodes, we use the whole frame as the
area of interest.
The issue was caused by a threading conflict around looptris: it was possible
that the DM would return a non-NULL but non-initialized array of looptris.
Thanks Campbell for a second pair of eyes!
Use a more watertight and robust intersection test.
It now uses ray-to-triangle intersection, but it's all fine because the
segment was covering the whole bounding box anyway.
Mainly when object origin is not at the geometry bounding box center.
Seems to be straightforward to fix, hopefully it doesn't break some obscure case
where this was a desired behavior.
This is fully unpredictable for artists when one damaged object makes the whole
scene render incorrectly. This involves two main changes:
- It is not enough to check triangle bounds to be valid when building the BVH.
This is because a triangle might have some finite vertices and some non-finite
ones.
- We shouldn't add non-finite triangle area to the overall area for MIS.
Best guess is that cuInit() somehow interferes with the AMD graphics driver
on Windows, and switching the initialization order to do OpenCL first seems
to solve the issue.
This reverts commit d749320e3b.
It's possible the container struct is larger;
we could do sizeof checks that fall back to memmove,
but rather avoid complicating things.
Recent addition of 'reinsert' didn't match logic for ghash API.
Rename to BLI_heap_node_value_update,
also add BLI_heap_insert_or_update since it's a common operation.
Was using an edge hash for triangle -> edge lookups,
updating triangle indices for each edge-rotation.
Replace this with half-edge which can rotate edges much more simply,
writing triangles back once the solution has been calculated.
Gives ~33% speedup in own tests.
This causes render differences in some scenes, for example fishy_cat
and pabellon scenes render brighter in a few spots. This is an old
bug, not due to recent RR changes.
SVM nodes need to read all data to get the right offset for the following node.
This is quite weak, a more generic solution would be good in the future.
The loc one (shift-alt-G) was the same as the 'remove selected from active
group' action... Clear delta transform is not a common operation, so we can
live without a default shortcut for it.
Note that using the same key (G) in the same space for two completely different
kinds of operations is probably a rather bad thing, nice topic for future
keymap work. ;)
Probably nice to have in 2.79a.
by the transform constraint lines
Ported over e7395c75d5 from the
greasepencil-object branch. I should've fixed this ages ago, but
couldn't figure out why at the time.
When the mesh changed topology but kept the vertex count the same, it would
result in a corrupt mesh. By checking the face & loop counts too, this has
become less likely.
I've checked IPolyMeshSchema::isConstant(), but it returns true even when
we see that the mesh changed topology.
Before this patch, the XBlur/YBlur compositor nodes would crash for me when run in a MSVC 2015 debug build (test scene: BMW27_cpu). I added the compiler instructions to explicitly align the local variables that the SSE instructions are accessing.
This was wrong since its conception in 28ee0f9218.
The if statement was returning true when pinid was NULL, and false otherwise.
However, when the scene is pinned we also want to run this code.
Code snippet by Brecht Van Lommel.
Since in Alembic the loop order seems to be reversed when exporting and
importing, and this was the only place where it was not, I decided to match
this to the convention of reversing the loop order as well.
Reviewers: sybren, kevindietrich
Tags: #alembic
Differential Revision: https://developer.blender.org/D2968
Two issues here:
- Checking table size to be non-zero is not a proper way to go here. This is
because we first resize the table and then fill it in, so it was possible that
a non-initialized table was used.
Trickery with using temporary memory and then doing table.swap() might work,
but we cannot guarantee that the table size will be set after the data pointer.
- Mutex guard was useless, because every thread was using its own mutex. Need
to make the mutex guard static so all threads are using the same mutex.
One crucial thing here: OpenVDB should be compiled WITHOUT the
OPENVDB_ENABLE_3_ABI_COMPATIBLE flag. This is how OpenVDB's Makefile is
configured, and it's not really possible to detect this for a compiled library.
If we ever want to support that option, we need to add extra CMake argument and
use old version 3 API everywhere.
There is absolutely no reason to have such an indentation level; it only causes
readability and maintainability issues. It is really simple to make the code
more "streamlined".
The issue actually goes a bit deeper, converting curve to mesh will
change texture space just because font and bezier curves are using CV
to calculate texture space.
So now when those objects are converted to mesh, we disable auto
texture space and copy evaluated space over.
Previously, hitting Shift-LMB would first invoke the selection operator, which
later on was transformed into the mouse tweak used for the reroute operator.
This was causing problems extending selection with Shift-LMB when clicking
fast or from a tablet.
This was expected behavior for over-exposed lamps when the mode was originally
created for Tears of Steel. Turns out, there can be a really bad green screen
in real production which will only have the green (or rather screen) channel
over-exposed.
Tweaked the condition now so we use the least bright channel to see if the
area has proper exposure or not.
Seems to work fine in tests, but further tweaks are possible.
Was a mistake in an optimization commit, which was disconnecting closures and
nodes that do not make sense for the volume output.
OSL scripts we can't ignore: we can't currently know in advance whether one is
a proper volume shader or not. So we never disconnect OSL nodes from the
volume output.
This is a good candidate for corrective release.
The issue here was that removing datablock from main database will poke editors
update, which includes buttons context to free users of texture. Since Cycles
will free datablocks from job thread, it might crash Blender since main thread
might be in the middle of drawing.
Solved by exposing extra arguments to bpy.data.foo.remove() which indicates
whether we want to perform ID user count and interface updates. While scripts
shouldn't be using those normally, this is the only way to allow Cycles to skip
interface update when removing datablock.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2840
Was caused by numeric overflow when calculating preview dimensions.
Now we try to avoid really insane preview resolutions by fitting the
aspect into a square.
The issue was caused by operator redo which frees all object's evaluated data,
including the bounding box. This bounding box cannot be reconstructed properly
without full curve evaluation (we need to at least convert the font to nurbs,
which is not cheap already).
In fact, any type of baking might have caused holes in mesh.
The issue was caused by zspan_scanconvert() attempting to get order of traversal
'a-priori', which might have failed if check happens at the "tip" of span where
`zspan->span1[sn1] == zspan->span2[sn1]`.
Didn't see anything bad in making it a check when iterating over scanlines and
picking the minimal span based on the current scanline. It's slower, but
unlikely to cause a measurable difference. Quality should stay the same unless
I'm missing something.
Reviewers: brecht, dfelinto
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2837
Fix T52977: Parent bone name disappeared in the UI in pose mode.
Regression caused by own rBc57636f060018. So instead of changing widget
type, just flag it as disabled.
Note that the core of the issue is elsewhere though - there is absolutely no
reason to have a search widget for pointers we cannot change nor
search! But fixing this is not really top priority; it's one of the many
glitches of our UI code, so think we can live with current code.
To be backported to 2.79a.
Inner DAG code would not check against a NULL pointer, and in case of an
active linked scene, the scene pointer will be NULL here, so we have to
check it ourselves. ;)
No need to print status for basic & reliable operations,
build systems can output operations they run if needed,
or debug output changed in the source if developers are debugging.
Nice for ninja, so any printed text hints at a problem to fix.
Was using cursor position from within menu,
clicking on the same position for every selected item (toggling).
Now operate on each selected outliner element, without toggling.
When 2x loops have different numbers of vertices,
the distribution for vertex fan-fill depended on the loop order
and was often lop-sided.
This caused noticeable inconsistencies depending on the input
since edge-loops are flipped to match each other's winding order.
Pixel size was not initialized early enough. The first time this was not a
problem because the bevel amount starts at 0 then, and after the mouse moves
the pixel size is initialized. The second time the bevel amount starts at a
non-zero value, and it failed then.
Use triple buffer by default now on all platforms. The remaining ones were:
* Mesa: seems to have been working well for a long time now, and not using
it gives issues with the latest Mesa 17.2.0.
* Windows software OpenGL: no longer supported since OpenGL 2.1 requirement
was introduced.
* OS X with thousands of colors: this option was removed in OS X 10.6, and
that's our minimum requirement.
Camera clipping was left at default values, which won't work well for
very large (or small) objects. Now recompute valid clipping start/end
based on the bounding box of the rendered data, and the final location of
the camera.
Tentative fix, since I cannot reproduce the issue for some reason here
on Linux.
Core of the problem is pretty clear though, thanks to Germano Cavalcante
(@mano-wii): another thread could try to use looptris data after worker
one had allocated it, but before it had actually computed looptris.
So now, we use a temp 'wip' pointer to store looptris being computed
(since this is protected by a mutex, other threads will have to wait on
it, no possibility for them to double-compute the looptris here).
This should probably be backported to 2.79a if done.
Should have been done before, ahoy, sorry about that. Means the 2.79 tag will
be one (no functional changes) commit ahead of our 2.79 builds; think
we can live with that.
Previous fix wasn't working correct for certain compiler and CPU intrinsics
mode, causing quite some crashes.
This should be a safer fix, which is closer in behavior to previous release
but which should still fix issues with robust curve intersection.
Note: this commit seems to work as expected (also with transform
snapping etc.). However, it is rather unsafe - not enough for 2.79 at
least, unless we get much more testing on it. It also depends on three
previous ones.
Note that using a global lock here is far from ideal; we should rather
have a lock per DM, but that will do for now, the whole DM thing is doomed
to oblivion anyway in 2.8.
Also, we may need a `DM_DIRTY_LOOPTRIS` dirty flag at some point. Looks
like we can survive without it for now though... Probably because cached
looptris are never copied across DMs?
This was... horribly wrong: CDDM will often *not* need to allocate
anything to return arrays of mesh items! Just check whether the array
pointer is NULL.
Also, remove `DM_get_looptri_array`, that one is useless currently,
`dm->getLoopTriArray` will always return cached array (computing it if
needed).
Only the camera from View3D.localvd is used,
other pointers may be invalid.
Longer term we should probably clear these to ensure no accidents.
For now just follow the rest of Blender's code and don't access.
Baking rigid body cache was broken if some cached frames already
existed.
This is just a band aid for release, the logic need to be looked into
further.
scripts being "Artwork" which is your sole property and free to license.
I've removed the reference to scripts in this text.
This was from 2002! With our Python scripts becoming part of how Blender runs,
such scripts now are officially required to be compliant with GNU GPL.
For more information, check the FAQ or consult foundation@blender.org:
https://www.blender.org/support/faq/
We have a hardcoded limit of 1000 images to be baked.
However, anything above 100 would lead to overflow in the code.
Caught by warning from builder bot (my compiler doesn't even complain
about this, but it should).
Apparently with Maya in a certain configuration, it's possible to have an
Alembic object without schema in the Alembic file. This is now handled
properly, instead of crashing on a null pointer.
For example, if you have two keyframes:
k1 = 1px, k2 = 10px
it was doing:
1px, 9px, 8px, ..., 3px, 2px, 10px
instead of:
1px, 2px, 3px, ..., 8px, 9px, 10px
- Vertex only meshes never restored their selection history.
- Select history was cleared on the source instead of the target.
Simple Optimizations:
- Avoid O(n^2) linked list looping that checked the entire list before
adding elements (NULL values in the source array to prevent dupes).
- Re-use vert & edge lookup tables instead of allocating new ones.
For some specific pipelines (e.g., holographic rendering) you can easily
need over a million frames (1k * 1k view angles).
It seems a corner case, but there is no real reason not to allow users
doing that.
That said, we do lose subframe precision in the highest frame range, which can
affect motion blur. The current maximum sub-frame precision we have is 1/16,
while the previous limit of 500k frames had a precision of 1/32.
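For context, my own back-of-the-envelope (not part of the original commit): those precision figures follow directly from the spacing of 32-bit floats, which carry a 24-bit significand:
```
\mathrm{ulp}(f) = 2^{\lfloor \log_2 f \rfloor - 23},\quad
\mathrm{ulp}(500{,}000) = 2^{18-23} = \tfrac{1}{32},\quad
\mathrm{ulp}(1{,}000{,}000) = 2^{19-23} = \tfrac{1}{16}
```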
Thanks to Campbell Barton for the help here.
To be backported to 2.79
Issue was nasty hidden one, the dual status (mix of local and linked)
of proxies striking again.
Here, remapping process was considering obdata pointer of proxies as
indirect usage, hence clearing the 'LIB_TAG_EXTERN' of obdata pointer.
That would make savetoblend code not store any 'lib placeholder' for
obdata data-block, which was hence lost on next file read.
Another (probably better) solution here would be to actually consider
obdata of proxies as fully indirect usage, and simply reassign proxies
from their linked object's obdata on file read...
However, this change shall be safer for now, probably good for 2.79 too.
Would be nice to be able to catch this with an assert as well; will see what
would be the best way to do this.
Need to verify with Mai that this solves crash for her and maybe consider
porting this to 2.79.
We need to make sure we can store all volume closures for all objects in the
volume stack. It is a bit tricky to detect what the "nestedness" level of
volumes would be, so for now use the maximum possible stack depth. Might cause
some slowdown, but better to give reliable render output than to fail quickly.
Should be safe for 2.79 after extra eyes.
We were showing "search for unknown menutype WM_MT_button_context" messages in terminal which were not helpful for users, so now they are disabled.
To be backported to 2.79
It is possible to have the same image used multiple times at different frames,
which means we cannot free its buffers without any guard. From quick tests
this seems to be doing what it is supposed to.
Need more testing and port this to 2.79.
Regression from rBfed853ea78221; calling this inside a thread worker was
not really a good idea anyway, and we already have all the code we need in
the pre-threading init function, it was just disabled for vertex particles
before.
To be backported to 2.79.
The documentation for the bpy.app.handlers.scene_update_{pre,post}
handlers states that they're called "on updating the scenes data".
However, they're called even when the data hasn't changed. Of course
such handlers are useful, but the documentation should reflect the
current behaviour.
Reviewers: mont29, sergey
Subscribers: Blendify
Maniphest Tasks: T46329
Differential Revision: https://developer.blender.org/D1535
Commit b6d7cdd3ce changed how the mesh data
is deformed, which wasn't taken into account yet in this unit test.
Instead of directly reading the mesh vertices (which aren't animated any
more), we convert the modified mesh to a new one, and inspect those
vertices instead.
When a mesh changes its number of vertices during the animation,
Blender rebuilds the DerivedMesh, after which the materials weren't
applied any more (causing it to default to the first material slot).
This can happen with Alembic files exported from Maya. I'm unsure as to the
root cause, but at least this fixes the crash itself.
Thanks to @looch for reporting this with a test file. The test file has to
remain confidential, though, so it's on my workstation only.
Simply disabled python tests, they can't be run anyway (since blender target is
not enabled) and we don't have any player-related tests in that folder.
Creating ngons with multiple axis aligned shapes in the middle of a
single face would fail in some cases.
This exposed multiple problems in BM_face_split_edgenet_connect_islands
- Islands needed to be sorted on Y axis when X was aligned.
- Checking edge intersections needed increased endpoint bias.
- BVH epsilon needed to be increased.
Basically, do the re-alloc and memcpy under the same lock, otherwise one
thread might be re-allocating the array while another one is trying to
copy data into it.
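Schematically (illustrative, not the actual code), the fixed pattern looks like this; both the realloc and the memcpy sit under one lock so no thread can copy into a buffer that another thread is re-allocating:
```
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static float *buffer = NULL;
static size_t buffer_len = 0;

void append_data(const float *data, size_t len)
{
	pthread_mutex_lock(&lock);
	/* Re-alloc and copy must be one critical section: releasing the lock
	 * between them re-introduces the race. */
	buffer = realloc(buffer, (buffer_len + len) * sizeof(float));
	memcpy(buffer + buffer_len, data, len * sizeof(float));
	buffer_len += len;
	pthread_mutex_unlock(&lock);
}
```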
Reported by Mohamed Sakr in IRC, thanks!
The tweakmode flag and the selected-channels flag accidentally
used the same value, due to confusion over where these flags were
supposed to be set. The selected-channels flag has now been moved
to use a different value, so that there shouldn't be any further
conflicts.
To be ported to 2.79.
Own previous fix (rBd5d626df236b) was not valid; curves are actually
supported by SoftBodies. It was rather a mere UI bug, which was not
including Surface and Font object types in those valid for softbody UI.
Thanks to @brecht for the heads up!
Also, fix safe for 2.79, btw.
Adds thin/default/thick modes to add -1/0/1 to the auto detected line width,
while leaving the overall UI scale unchanged.
Also tweaks the default line width threshold, so thicker lines start from
slightly higher UI scales.
Differential Revision: https://developer.blender.org/D2778
*Changed categories of some keywords
*reordered some longer keywords that didn't appear
*Activated another color (reserved builtins) by Leonid
*added some HGPOV and UberPOV missing keywords
Patch by Maurice Raybaud (@mauriceraybaud). Thanks to Leonid for additions, feedback and Linux testing.
Related diffs: D2754 and D2755.
While not a regression, this is new feature and would be nice to have it
backported to final 2.79.
As part of the fix for T51587, I removed the Depth output for non-Multilayer
images since it seemed weird that PNGs etc. that don't have a Z pass still get
a socket for it.
However, I forgot about non-multilayer EXRs, which are a special case that can
actually have a Z pass.
Therefore, this commit brings back the Depth output for non-multilayer images
just like it was in 2.78.
New code was correctly handling IDs' internal references to themselves, but
not references between different 'made real' objects...
Regression, to be backported in 2.79.
This is still far from perfect, but yet much better than what we had so
far (more consistent with the inherent precision available in floats).
Note that this fixes some (currently commented out) units unittests, and
requires adjusting some others; will be done in next commit.
Bug was in RNA nodes code actually, itemf functions shall never, ever
return NULL!
Note that there were other itemf functions there that were potentially
buggy. Also harmonized their code a bit.
* Numbers with units (especially angles) were not handled correctly
regarding the number of significant digits (spotted by @brecht in T52222
comment, thanks).
* Zero value has no valid log, need to take that into account!
When calling sculpt from Python,
setting 3D 'location' but not 2D 'mouse' stopped working in 2.78.
Now check if the operator is running non-interactively and
skip the mouse-over check.
Regression in D1812: PyDriver variables as Objects
Taking the Python representation is nice in general
but for enums it would convert them into strings,
breaking some existing drivers.
Some users really liked previous behavior,
so making it an option.
Cursor Lock Adjustment can be disabled to give something close to
2.4x behavior of cursor locking.
When lock-adjustment is disabled, placing the cursor re-centers the view.
This avoids the issue reported in T40353
where the cursor could get *lost*.
Even strands that were excluded by the density texture were being added
to the DM passed to cloth, but these ended up having some invalid data,
because they were not fully constructed.
This simply excludes `UNEXISTED` particles from the DM generation, as
would be expected.
Note that the fix is not perfect: it systematically makes refcounting of all
IDs assigned to a node's id pointer, which breaks the 'do not refcount
scene/object/text datablocks' principle...
But besides that principle being far from ideal in general, it becomes
pretty much impossible to apply when using a //generic// ID pointer,
unless we add some kind of type data to that pointer somehow.
So for now, better to live with that than having broken usercounts.
The purpose of the keymap strings is probably for un-embossed menu items
like seen in most pulldowns. I can't see a reason for also adding that
string for regularly drawn buttons within popups, we don't add it
anywhere else in the UI either. So this commit makes sure shortcut
strings are only added to buttons that are drawn like pulldown-menu
items.
The Blender text editor's built-in Python template "Gamelogic" has a reference near the bottom to "objectHitList" as an alleged attribute of the KX_TouchSensor. This name is incorrect; its correct name is "hitObjectList".
Attempting to access the suggested objectHitList returns an error...
```
AttributeError: 'KX_TouchSensor' object has no attribute 'objectHitList'
```
The provided diff corrects this minor error.
Reviewers: kupoman, moguri, campbellbarton, Blendify
Reviewed By: Blendify
Tags: #game_engine, #game_python
Differential Revision: https://developer.blender.org/D2748
`defvert_array_find_weight_safe()` was confusing 'invalid vgroup' and
'valid but totally empty vgroup' cases.
Note that this also affected at least ShrinkWrap and SimpleDeform
modifiers.
The order of evaluation of function arguments is undefined, and the order
was reversed between these compilers. This was causing regressions tests
to give different results between Linux and macOS.
In the future we should put these two buttons on one line.
However, because we need `gen_context = 'PAINT_STENCIL'`,
this is a little hard and we need to find a proper solution.
One might be using `context_pointer_set`
Patch by @craig_jones with edits by @blendify
Differential Revision: https://developer.blender.org/D2710
Moving the ray_start_local to the new position does not lose as much precision as moving the ray_org_local to the corresponding position.
The problem of inaccuracy is within the functions `bvhtree_ray_cast_data_precalc` and `fast_ray_nearest_hit`, and not directly in the values of the rays.
The way we use it, UI_PRECISION_FLOAT_MAX is actually +1 to get the total
number of digits, and float only has 7 meaningful digits, so that define
should be 6.
GCC now seems to detect uninitialized variables passed into function calls,
but then isn't always smart enough to see that they are actually initialized.
Disabling this warning entirely seems a bit too much, so initialize a bit
more now.
This commit unifies the flattened texture slot names for bindless and regular CUDA textures. Texture indices are now identical across all CUDA architectures, where before Fermi used different indices, which lead to problems when rendering on multi-GPU setups mixing Fermi with newer hardware.
Hopefully this fixes it actually; at least I could not reproduce it anymore
with that change, but it was already quite hard to trigger before.
We need a memory barrier at this allocation, otherwise it might happen
after preview gets added to done queue, so preview could end up being
freed twice, leading to crash.
Change the implementation so it no longer takes over the mouse cursor motion
from the OS, instead only move it when warping, similar to Windows and X11.
Probably the reason it was not done this way originally is that you then get
a 500ms delay after warping, but we can use a trick to avoid that and get much
smoother mouse motion than before.
While drawing nice 'rounded' values is OK also for 'low precision'
editing like dragging and such, it's quite an issue when you type in a
precise value, validate, edit the value again, and find a rounded
version of it instead of what you typed in!
So now, *only when entering textedit of num buttons*, we always get the highest
reasonable precision for floats (and use exponential notation when
values are too low or too high, to avoid tremendous amounts of zero's).
Tweaked the path radiance summing and alpha to accommodate the possible
contribution of light by transparent surface bounces happening prior to the
shadow catcher intersection.
This commit will change the way the shadow catcher result looks when it was
behind a semi-transparent object, but the old result seemed to be fully wrong:
there were big artifacts when alpha-overing the result on some actual footage.
Since we added auto DPI on Linux, on some systems the UI draws smaller than before
due to the monitor reporting DPI values like 88. Blender font drawing gives quite
blurry results for such slightly smaller DPI, apparently because the builtin font
isn't really designed for such small font sizes. As a workaround this clamps the
auto DPI to minimum 96, since the main case we are interested in supporting is
high DPI displays anyway.
Differential Revision: https://developer.blender.org/D2740
This is a different fix for the issue from D2088, preserving backwards compatibility
for IK stretching. The main problem with this patch is that this new behavior has
been there for a year, so it may break rigs created since then which rely on the new
IK stretch behavior.
Test file for various cases:
https://developer.blender.org/diffusion/BL/browse/trunk/lib/tests/animation/IK.blend
Reviewers: campbellbarton
Subscribers: maverick, pkrime
Differential Revision: https://developer.blender.org/D2743
Revise the logic here to be more robust in cases where keyframes with
similar-but-different frame numbers (e.g. 70.000000 vs 70.000008)
would cause the search to go into an infinite loop, as the same
keyframe was repeatedly found (and skipped).
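Schematic version of the pitfall (illustrative code; the threshold constant mirrors the spirit of the keyframe search, not its exact value): comparing with an explicit tolerance while always shrinking the bounds guarantees termination:
```
#include <math.h>

#define FRAME_THRESH 0.01f  /* "similar enough" tolerance, illustrative */

static int binary_search_frame(const float *frames, int tot, float target)
{
	int lo = 0, hi = tot - 1;
	while (lo <= hi) {
		const int mid = (lo + hi) / 2;
		if (fabsf(frames[mid] - target) < FRAME_THRESH) {
			return mid;        /* close enough: found */
		}
		else if (frames[mid] < target) {
			lo = mid + 1;      /* bounds always shrink... */
		}
		else {
			hi = mid - 1;      /* ...so the search cannot loop forever */
		}
	}
	return lo;  /* insertion point */
}
```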
Tweak the bias from the previous fix a bit to be more backwards compatible in
some scene. In the end which way we round is quite arbitrary, but keeping the
case where the texture coordinate is exactly zero the same seems better.
The option has always (un)set the "Overwrite" flag on all strips. Calling
it "Override" seems misleading, since even when unchecking it, it overrides
whatever was set on the selected strips. It really just (un)sets the
"Overwrite" flag, and now it is also labeled as such.
In the reported example it seemed reasonable to apply this change.
But it causes a much more common case (selecting projections)
to be split into 2x islands.
Resolves T50970
This makes sense when we want to avoid float precision error
for near co-linear edges. OTOH, this is an arbitrary decision,
so keep functions separate.
That problem occurs because of the imprecision of `short int` (16 bits).
The 3d coordinates are converted to 2d, and when they are off the screen, their values can exceed 32767 (the max short int value)!
One quick solution is to use float instead of short.
The snap code is actually a little tricky; I want to make some arithmetic simplifications in it.
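The overflow in miniature (illustrative, not the actual snap code):
```
#include <stdio.h>

int main(void)
{
	const float projected_x = 40000.0f;  /* off-screen 2D coordinate */
	const int px = (int)projected_x;
	const short sx = (short)px;  /* 40000 > 32767: wraps to -25536 */
	printf("short: %d, float: %.1f\n", sx, projected_x);
	return 0;
}
```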
The only similarity between these functions is that both serve to snap.
However their codes are totally different from one another.
So separating these functions:
- removes the need for several conditions;
- simplifies the code; and
- optimizes it.
* Display a warning above the pose list if the pose library is in an invalid
state (i.e. when it has keyframes but no pose-markers associated with those
keyframes). This warning prompts users to run the "Sanitize Pose Library Action"
operator, which should fix up such issues.
* "Sanitize" operator now creates unique names for each newly create pose
marker it generates, including the frame on which it found the pose
The problem here was that the "frame_start" and "frame_end" RNA properties of
the Stepped FModifier were shadowing/overriding "frame_start" and "frame_end"
properties of the base FModifier. As a result, when the range() callback
for the In/Out parameters (defined as part of the base FModifier) checked
its start/end properties, they were always still zero, meaning that the
acceptable range for the In/Out parameters was 0 -> 0 = 0.
Note:
If you've got old files with this problem, you'll need to manually click on
the frame_start/end properties to flush out the old values. It's probably
not worth the effort of applying a version patch for this (given that this
modifier is not one of the most often used ones AFAIK).
As with Strip Time, the updates here would get triggered before the
autokeying had a chance to record the unkeyed values, making it impossible
to autokey.
This is something which was reported to work fine by Mai, Benjamin and
confirmed by myself. Disabling this workaround gains us some speedup:
                     Before     Now
bmw27                04:28.42   04:07.79
classroom            09:26.48   08:54.53
fishy_cat            08:44.01   08:18.70
koro                 09:17.98   08:57.18
pavillon_barcelone   12:26.64   11:52.81
Test environment is:
- Ubuntu 16.04, with all updates installed
- AMD RX 480 GPU
- amdgpu pro driver version 17.10-450821
The issue was caused by combination of following factors:
- Clipboard cleanup function will pass node tree as NULL to node free
function.
This is fine on its own; we don't have a tree in the clipboard.
- Node free function will call node storage cleanup only when there is
a non-NULL node tree.
This is somewhat weird, because storage cleanup does not take node
tree as argument.
So the solution here: move node storage cleanup outside of check that
node tree is not NULL.
imapaint's clone and canvas are refcounting Image usages.
And particle's editsettings' object and scene seem to be pure runtime
data (they are reset to NULL in readcode), so resetting them to NULL
here as well.
Really, really, really need to get rid of this usercount handling
everywhere; hopefully the incoming ID copying rewrite will help sanitize
that mess. But a fix was needed for the 2.79 release!
The code was somewhat weird: it was first copying border/crop settings from
the "source" scene, then was checking border settings of the current scene
and only then was copying border from "source" scene.
Now we first copy border/crop flags, then copy border from source and then
check whether border is a full-frame.
Auto & aligned handles wouldn't restore to their correct locations.
Note that a more direct fix for the bug is possible
(storing the handle locations to restore on cancel).
But that still gives some odd behavior, see code-comments for details.
The default anisotropic tangent computation could fail in some cases,
leading to NaNs and artifacts. Use a simpler formulation that doesn't
suffer from this.
Unfortunately this means disabling the code that ensures the title
bar is properly scaled with DPI, however better to have that as a
cosmetic issue than Blender being unusable with a lot of Intel GPUs.
The issue was caused by combination of following factors:
- Blender Internal viewport render can not distinguish between which parts of
main database changed, so it does full database re-sync when anything is
tagged for an update.
This way, if any NodeTree (including compositor) is changed, Blender Internal
viewport is tagged for full render database update.
- With old dependency graph, scene-level drivers are evaluated on every
iteration of scene_update_tagged, even if nothing is tagged for an update.
This causes compositor drivers be evaluated quite often.
- Driver evaluation checks whether value was changed, and if so it tags
corresponding ID type as updated (this is what was telling viewport to do
render database update).
This check was quite stupid: current property value was checked against the
one coming from driver expression. This means, if driver value is outside
of the hard limit range of the property, the property will always be
considered updated.
The fix is to compare current property value against clamped value from the
driver.
The new dependency graph is taking the root bone into account when building
the graph. This is required in order to get proper dependencies between bones,
so we can reliably use bones as targets from the same rig (and even indirect
relations via external objects). This forces us to tag relations for update
when we change the root IK chain bone.
Since relations rebuild is not fully trivial operation, we only do it for
the new dependency graph. In the future it'll be nice to avoid whole graph
rebuild for such cases, but that's mentioned as a TODO.
This situation happens when a file with a text effect sequencer strip is
loaded in Blender < 2.76 and saved. This destroys the effect data, causing
a crash in Blender ≥ 2.76.
d2f748a222 prevented the crash when opening such a file, but accessing
the strip still caused a crash. This commit fixes that by actually
initialising the invalid strip. Of course this still causes data loss, but
that already happened by opening & overwriting the file in Blender < 2.76.
Using an arbitrary face as the source of the UV data is mostly fine, as
vertices on seams will generally map to different parts of the texture
that have the same color.
This is regarding fed853ea78.
Some of the functions might have been inlined, but for others I don't see
how that was possible (I don't think virtual functions can be inlined here).
In any case, better be explicitly optimal in the code.
Clearing of custom bone outlines' line thickness was not done at the proper
point; wireframe drawing never changes line thickness, only solid draw
with outline does...
Last fix only accounted for direct changes to the RB settings, but
failed for, say, object transformations. This fix accounts for any
change that might invalidate the RB cache.
Fix 9cd6b03187 introduced a bug that
prevented simulation after a cache invalidation (for instance when
changing a setting after simulating). This fixes that.
The problem here was that when a "invalid" path is generated by the panoramic camera, it was tagged
as RAY_TO_REGENERATE with the intention of generating a new path in kernel_buffer_update.
However, since that state was not handled in kernel_queue_enqueue, kernel_buffer_update did not
process the path which resulted in an infinite loop.
Things like missing directories are now properly checked for, rather than
crashing Blender.
This also adds support for relative paths when opening an ABC file.
This makes the last time (`ltime`) stored in the rigid body world (`rbw`)
only be updated once a simulation step actually occurs, this prevents
another simulation step from being solved unless the current time is
exactly one frame after the last cached frame. Thus this prevents the
formation of gaps in the cache, such as seen in T50230.
Reviewers: mont29, sergey, angavrilov
Tags: #physics
Maniphest Tasks: T50230
Differential Revision: https://developer.blender.org/D2458
D2729 by @IgorNull
Currently, trackball rotation sequentially applies rotation across x axis and y axis,
which produces a strange/unusable result on diagonal pointer motion.
This change fixes the problem by using a single axis which is orthogonal
and proportional to mouse delta - matching view-port trackball.
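The idea in miniature (illustrative math, not the actual operator code): rotate about the single screen-space axis orthogonal to the pointer delta, with the angle proportional to the delta's length:
```
#include <math.h>

void trackball_axis_angle(float dx, float dy, float sensitivity,
                          float r_axis[3], float *r_angle)
{
	const float len = sqrtf(dx * dx + dy * dy);

	/* Axis orthogonal to the pointer delta (dx, dy) in the view plane. */
	r_axis[0] = -dy;
	r_axis[1] = dx;
	r_axis[2] = 0.0f;
	if (len != 0.0f) {
		r_axis[0] /= len;
		r_axis[1] /= len;
	}

	/* Angle proportional to pointer motion: diagonal drags now behave
	 * exactly like horizontal or vertical ones. */
	*r_angle = len * sensitivity;
}
```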
As the title says, the normal wasn't set for the Hair BSDF because it wasn't
needed before. However, the denoiser uses it to store the feature passes, so
it needs to be set now.
Now that some node types may have custom context, we need to handle that
in the (convoluted :| ) UI code of nodes as well.
Reported in T43295 by Gabriel Gazzán (@gab3d), thanks.
`BMO_iter_as_array()` may fill less items than requested in given array,
so we have to update number of items to work on from its returned value,
otherwise code might try to use uninitialized memory.
The original code was doing a sanity check to see if the existing index was
out of range. However the comparison was wrong.
So if the previous ct->user (active index of texture node) was larger
than the number of available texture nodes + 1 in the other material,
we would never re-set the index to 0.
Bug introduced on c31f74de6b.
There was an early attempt at fixing this (2b2ac5d3cc) but it was just working
by pure luck, and failing in cases like the one from this bug report.
Convenience makefile now uses CMAKE_BUILD_TYPE_INIT,
this means you can change the build type of an existing build
and it won't be overwritten when running `make`.
Useful if you want to add debug info to a release build for profiling.
This is a very important, potentially deadly side-effect of this
operator: if something goes wrong, it can save a broken .blend file.
Ideally we could get rid of that operation anyway, once ID management is
fully renewed, but for now would rather keep it around.
Related: T51902.
Fix is a bit ugly, but cannot think of another solution for now, at
least this **should** not break anything else.
And now I go find myself a very remote, high and lonely mountain, climb
to its top, roar "I hate proxies!" a few times, and relax hearing the echoes...
Based on D2578, now you can install JACK audio server and use it in
Blender build without having to specify the `--with-all` option (that
one still enables also JACK of course).
Reviewers: mont29
Maniphest Tasks: T51033
Differential Revision: https://developer.blender.org/D2578
@campbell Barton: why is this declaration needed at all in stubs.c?
Further up the file collada.h is imported, and that already declares
the function, resulting in a duplicate declaration.
avoids wrong texture data when multiple objects are exported. Note: this
commit might possibly not work fully. The full feature is added with the
next commit.
Reorder keyingsets registration order, since it also affects the order of the
I-menu options; better to show the most used ones first, pure alphabetical
order is not great here... Will likely break some muscle memory though. :|
Based on D2720 by Carlo Andreacchio (@candreacchio), thanks.
Although the original report was about the docs, the real issue was in
the API.
My original commit started from a copy-paste from the Switch
Node. However I don't use custom1 for the Switch View node.
The docs are slightly incomplete since it would be nice to mention the
views here, or maybe even expose them via Python. But honestly they are
generated depending on the scene multi-view settings.
If there was any specularity in the Principled BSDF, it would get a sampling
weight of one regardless of its actual impact.
This commit makes Cycles estimate the contribution of the component and adjust
the weighting accordingly, which greatly improves the noise characteristics of
the Principled BSDF in many cases.
Note that this commit might slightly change the brightness of areas when using
MultiGGX and high roughnesses, but the new brightness is more accurate and
closer to the result of Branched Path Tracing. See T51836 for details.
Differential Revision: https://developer.blender.org/D2677
The PDF of the MultiGGX sampling is approximated by the singlescattering GGX
term as well as a scaled diffuse term that makes up for the energy in the
multiscattering component that's missed by GGX.
However, there were two problems with the glossy terms: The diffuse term missed
a normalization factor, and the singlescattering term was not properly scaled
down based on the albedo estimate.
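Schematically (my paraphrase of the description above, not the exact kernel math), the approximate PDF has the shape
```
p(\omega) \approx E \, p_{\mathrm{GGX}}(\omega) + (1 - E) \, \frac{\cos\theta}{\pi}
```
where E is the single-scattering albedo estimate: the GGX term is scaled down by E, and the cosine-weighted diffuse term (with the 1/pi normalization that was missing) absorbs the remaining multi-scattered energy.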
The glass term was completely wrong and has been rewritten. It uses the fresnel
factor to weight reflection vs. refraction and uses the glossy MultiGGX model
for reflection.
For refraction, the correct singlescattering term is now used, and a new
albedo approximation is used that was derived by evaluating GGX albedo for
roughnesses from 0 to 1 and IORs from 1 to 3 and fitting numerical
approximations to it. The resulting model has a mean relative error of 9e-5,
but could probably be simplified without losing noticeable accuracy in the
final render.
The improved PDFs help with glossy highlights (due to better light sampling vs.
closure sampling MIS) and fix the situation described in T51836 where mixing
MultiGGX with other closures (as it happens in e.g. the Principled
BSDF) causes incorrect darkening.
This is especially annoying for exporters which do use the mesh name, since it
broke any relation with the actual Mesh naming in the original Blend file.
Unfortunately, we cannot avoid the extra .xxx digits. ;)
It tried to assert that
addons/io_blend_utils/blender_bam-unpacked.whl/__init__.py was loaded when
the io_blend_utils module was imported. However, this happens only on
demand, and not directly when importing the add-on.
*Sigh* One more example of why we should keep ID management handling in
as few places as possible! It's impossible to keep more than a few
places in sync regarding which ID pointer is refcounted etc.
Children were always getting at least one segment of fixed length...
Now fully hidden ones (zero length) get no segment at all.
Note that even very short ones keep getting one 'unit'-length segment; would
rather avoid changing that at this point, given how complex children
particles' 'length' can get with all kinds of modifiers... Think we can
live with that for now anyway.
The crash did not happen yet because we always had proper vmemh defined in
the parent scope.
Patch by Ivan Ivanov (aka obiwanus), thanks!
Differential Revision: https://developer.blender.org/D2715
This is not enough to mutex-guard modification code of integer values,
since this operation is NOT atomic. This is not even safe for a single
byte data types.
For now guarded the getter functions, similar to other functions in
this module.
Ideally we want to switch modification to an atomic operations, so we
wouldn't need any locks in the getters.
'Convert To...' Object operation has very weird effect of actually
working at obdata level, not object level, which means *all* objects
(even unselected/hidden/in other scenes/...) using same obdata will be
converted to new selected type.
IMHO this is very bad behavior, but... not a bug really, so do not
change this for now.
But at least, do not do that when working on some linked data, else it
leaves the Blend file in an invalid (incoherent) state until the next reload.
So workaround for now is to enforce the 'Keep Original' option when some
linked object/obdata is affected by the operation.
Also fixed somewhat broken usercount handling in Curve->Mesh part.
That one was probably not an actual issue, except maybe in some corner
cases (like deleting a linked scene also used by some other linked scene).
Again, better not to try to do smart & complex freeing logic outside of
the BKE_library area; let's keep the spaghetti nightmare in a single place!
Previous code (same as what `BKE_libblock_free_us()` is doing when
usercount reach 0) was probably OK in that specific case, but still not
good idea, and potentially risky.
Even though in that specific case it was probably safe-ish, there is no
guarantee at this point that Brushes we want to remove are not used somewhere;
better take the slightly slower, much safer `BKE_libblock_delete()` path here.
Eeeeeek!^2 Unconditionally calling ID freeing `BKE_libblock_free()` on a
datablock (ob->data, i.e. Curve) that may be used elsewhere...
Veryveryvery bad!
*tryed "#" as preprocessor used in POV-Ray for language keywords best behaviour was to have it as a punctuation symbol
*moved "finish" to its proper category
*changed order of some POV-Ray ini files keywords to have them work better
*added a few keywords from latest pov version
*Fixed C-style closing of multiline comments (*/)
Reviewers: campbellbarton, mont29
Reviewed By: campbellbarton, mont29
Subscribers: mont29
Differential Revision: https://developer.blender.org/D2707
Noisy change, but safe, and better do it sooner than later if we are to
rework copying code. Also, previous commit shows this *is* useful to
catch some mistakes.
Avoids the copy constructor being invoked every time we pass a function
to the builder functions.
Should lower the number of CPU ticks spent during DEG construction.
Was rather weird and only used for the time source. It is simpler to make the
depsgraph keep track of the time source directly.
No need to introduce extra entities without actual need.
Was some mismatch in address space. Seems to be caused by recent additions.
Additionally, moved decoupled ray marching functions under ifdef, so they
don't try to use malloc() functions.
Thanks Mai for testing the patch!
Now, when there is no usable neighboring pixel for denoising, the noisy value
is preserved instead of producing a NaN.
Also, negative results are clamped to zero.
Note that these are just workarounds that don't fix the underlying problems,
but those issues are very rare and I'm not sure if it's even possible to fix
the underlying problems without introducing a significant slowdown or quality
decrease in other situations.
Because of that and since 2.79 is happening very soon, I just went for these
workarounds for now.
Replaces the placeholder 'empty' icons of the "Force Field" and "Group
Instance" entries in the object-add menu with proper new ones.
Icons by @zlsa, thanks a lot!
Maniphest task T51291.
Technically not passing all buffers used by a kernel is undefined
behavior. We haven't had any issues with this so far on AMD or
Nvidia, but it's known to be a problem with Intel and we received
a report from AMD that this is a problem on newer hardware, so we
need to make this change at some point.
Unfortunately there is a cost to being correct, about 5% for the
benchmark scenes. For low sample counts it's even worse; I've
seen up to 50% slowdown. For the latter case I think adjusting the
tile updating logic can help, but I'm not sure yet what that would look
like (it would be just a few lines of change however).
Unlike regular path tracing, branched path tracing is usually used with lower
sample counts, at least for primary rays. This means there are fewer samples
for the GPU to work on in parallel and rendering is slower. As there is less
work overall, there are also more inactive threads during rendering with BPT.
This patch makes use of those inactive rays to render branched samples in
parallel with other samples.
Each thread that is preparing for a branched sample will attempt to find an
inactive thread and if one is found the state for the sample is copied to that
thread. Potentially, if there are enough inactive threads, 100s of branched
samples could be generated from the same originating thread and ran in
parallel giving large speed ups.
Gives 70% faster render for pavillion midday scene. 20-60% faster on BMW
with car paint replaced with SSS/volumes.
Due to various driver issues with AMD GCN 1 cards we can no longer support
these GPUs. This patch makes them unavailable to select for Cycles rendering.
GCN cards 2 and higher are still supported. Please use the most recent
drivers available to ensure proper functionality.
See here for a list to check which GPUs are supported:
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units
Reported by Andy Goralczyk (@eyecandy) over IRC, thanks!
Simply nuke all that poor broken custom one-by-one handling in
object_relations.c code, and use highly complex but powerful and
well-tested BKE_library_make_local() in all cases of MakeLocal!
ID management, especially related to linking, is a very hairy matter,
better to have as few core functions as possible managing all the dirty
details. ;)
Edge collapse was using bounding box center as the point to collapse to.
When collapsing multiple adjacent edges together, this caused
inconsistencies in placement of the collapsed point, depending on the
orientation of the edges in relation to the space axis.
This makes edge collapse use the mean point instead.
The previous outlier heuristic only checked whether the pixel is more than
twice as bright compared to the 75% quantile of the 5x5 neighborhood.
While this detected fireflies robustly, it also incorrectly marked a lot of
legitimate small highlights as outliers and filtered them away.
This commit adds an additional condition for marking a pixel as a firefly:
In addition to being above the reference brightness, the lower end of the
3-sigma confidence interval has to be below it.
Since the lower end approximates how low the true value of the pixel might be,
this test separates pixels that are supposed to be very bright from pixels that
are very bright due to random fireflies.
Also, since there is now a reliable outlier filter as a preprocessing step,
the additional confidence interval test in the reconstruction kernel is no
longer needed.
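The combined test, as a hedged sketch (illustrative names; `value` is the pixel brightness, `ref` is twice the 75% quantile of the 5x5 neighborhood):

    #include <cmath>

    /* Flag a pixel as a firefly only if it is above the reference
     * brightness AND the lower end of its 3-sigma confidence interval
     * falls below that reference, i.e. the pixel's true value is
     * plausibly not that bright. */
    bool is_firefly(float value, float ref, float variance)
    {
        const float lower_bound = value - 3.0f * std::sqrt(variance);
        return (value > ref) && (lower_bound < ref);
    }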
The problem was that the strokes in the copy-paste buffer could be keeping
dangling pointers to colors that were already freed. Therefore, this commit
makes it so that when copying the strokes, we now make copies of the colors
and put them in a hashtable beside the stroke buffer. This is convenient,
as it saves us having to look up what colours need to be copied over each
time when pasting.
The problem was that newly pasted strokes were still using colours from
the original datablock. As a result, you'd either get an immediate crash,
or if you managed to save the file before it crashed, each stroke would get
reloaded with a dummy colour.
This commit makes it possible to copy/paste strokes between datablocks
again. However, there are still problems when trying to paste across file
boundaries (i.e. copy strokes in one file, paste in another), which the next
commit will address.
Previously the returned dict only contained 'faces'.
This was requested by script writers. Especially needed if beveling
wire edges with vertex_only.
Should be backward compatible, as this just adds two new keys to the
returned dict in Python ('edges' and 'verts').
Was internally a no-op operation, which only caused extra work
to be done during depsgraph traversal and evaluation, without
making any measurable improvement.
This is a minor update adding more information on how Blender handles modal
operators. The existing docs provide a good overview, but might not be
as helpful to those unfamiliar with modal programming. This patch also
corrects a few small grammar issues.
It was never actually used apart from being stored at construction time.
This caused some redundancy and uncertainty about which relation type to use
during construction (often existing types were not close enough to a
particular use case).
Made this resilient to unknown types, for now. Supporting specific INT
sockets (through implicit conversion to GPU_FLOAT ones) is considered a nice TODO.
Was just keeping the default '1' user from `BKE_libblock_alloc()`,
instead of using the correct way to handle the extra virtual user needed
when we want to keep unused datablocks around...
Do not call invoke ops from the outliner's operations menus. An invoked op
would search again for the item under the mouse coordinates... when it is
invoked! This means the menu entry you clicked would often not be over the
target item, leading to either nothing happening or the operation being
applied to the wrong item.
Note: about groups, there is another minor annoyance leading to some
assert - groups have an annoying virtual fake user which breaks
usercount, will see whether this is easily fixable. :|
The idea is to accumulate all new tasks in a thread local queue
first without doing any thread synchronization (aka, locks and
conditional variables) and move those tasks to a scheduler queue
once they are all ready. This way we avoid per-task-pool lock
and only have one lock per bunch of tasks.
This is particularly handy when scheduling new dependency graph
node children. Brings FPS of cached simulation from the linked
below file from ~30 to ~50.
See documentation for BLI_task_pool_delayed_push_{begin, end}
and for TaskThreadLocalStorage::do_delayed_push.
Fixes T50027: Rigidbody playback and simulation performance regression with new depsgraph
Thanks Bastien for the review!
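A hypothetical usage sketch of the batching described above (a fragment, not compilable standalone; signatures assumed from the BLI_task API mentioned in the message, and evaluate_node_func is a made-up callback):

    #include "BLI_task.h"

    /* Pushes between begin/end accumulate in thread-local storage,
     * so the scheduler lock is taken once per batch instead of once
     * per task. */
    void schedule_children(TaskPool *pool, int thread_id,
                           void **children, int num_children)
    {
        BLI_task_pool_delayed_push_begin(pool, thread_id);
        for (int i = 0; i < num_children; i++) {
            BLI_task_pool_push(pool, evaluate_node_func, children[i],
                               false, TASK_PRIORITY_HIGH);
        }
        BLI_task_pool_delayed_push_end(pool, thread_id);
    }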
If users wanted to bake only a few of the mesh materials, they would
still need to create dummy textures for the other parts.
This commit reports (as RPT_INFO) the materials with no texture, but moves
on to bake the other materials.
This was causing proxies updates on every frame, even if they
do not really change. Additionally, it was causing second round
of armature update when used from inside dupligroup (viewport
ensures all objects from dupligroup are up to date before draw).
It's now less confusing (for example, using nr_of_samples directly,
instead of 1 / (1 / nr_of_samples)). Might also have fixed a bug.
Also added unittests.
The scale matrix must have its homogeneous 'w' (at mat[3][3]) set to the
scale in order to also scale the translations along with it. However, this
also scales the transform matrix's 'w' component, which is not supposed
to happen.
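A minimal numeric sketch of the problem and fix (assumed row-vector convention with the translation in the last row; not the actual Blender code):

    /* r = m * s, where s is a uniform scale with w also set to `scale`
     * so that the translation row of m gets scaled too. The unwanted
     * side effect is r[3][3] == scale, so it is reset to 1 afterwards. */
    void mul_with_global_scale(float r[4][4], const float m[4][4], float scale)
    {
        float s[4][4] = {{scale, 0, 0, 0},
                         {0, scale, 0, 0},
                         {0, 0, scale, 0},
                         {0, 0, 0, scale}};
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                r[i][j] = 0.0f;
                for (int k = 0; k < 4; k++) {
                    r[i][j] += m[i][k] * s[k][j];
                }
            }
        }
        r[3][3] = 1.0f;  /* undo the scaling of the homogeneous 'w' */
    }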
The old default values (start/end frame = 1) could have been an actually
desired setting (for example when exporting a non-animated model). To
make this worse, this was only interpreted as "start/end of the scene" by
the export operator when running interactively, but not when run from
Python.
By choosing INT_MIN as the default it's highly unlikely that the interval
[start, end) was intended as the actual export range.
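A sketch of the sentinel logic (hypothetical helper; INT_MIN can never be a sensible frame number, so it reliably means "not set by the caller"):

    #include <climits>

    /* Resolve the export range: INT_MIN marks "use the scene's range",
     * while any other value - including 1 - is an explicit choice. */
    void resolve_export_range(int *start, int *end,
                              int scene_start, int scene_end)
    {
        if (*start == INT_MIN) *start = scene_start;
        if (*end == INT_MIN) *end = scene_end;
    }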
This way we always have predictable behavior, especially from the
performance point of view. Additionally, if some bottleneck is found
in stack implementation it'll be easier for us to address.
Yep, that got reported... Was slightly more involved than UI message
fixing though: the RNA string length getter shall return the exact length of
the string (same as strlen), not the size of the allocated buffer containing
it! Otherwise, the final NULL char leaks in and...
Denoising was setting session parameters for every frame, which was detected as
a change and therefore caused a resync.
Since the parameter modification change is only needed for viewport rendering
(which doesn't support denoising anyways) and resyncing after a frame change
(which isn't affected by denoising settings), an easy fix is to just ignore
the denoising parameters like it's currently done with the samples.
Follow up to 9f044cb422
These comments described the difference between Microsoft & MinGW's struct definition. Now that we dropped MinGW we don't need to go into these details.
DM evaluation code was simply never clearing the `deformedOnly` flag
when evaluating a generative modifier...
Quite astonishing this never got caught before, a lot of particle code
relies on a valid value of this flag!!!
The fact that we can end up with uninstantiated objects is not expected
currently, but would rather not start chasing all corner cases that may
lead to that situation.
User shall be able to delete uninstantiated objects from Outliner, though!
Properly remap nodes' pointers to copied IDs in copied ntrees.
Note that this only affects root trees, node groups are not concerned
here, since they are assumed to be reusable chunks and hence *not*
duplicated.
The operator is indeed not adding frames but inserting them at the current frame (shifting all subsequent ones). Changed the operator name and description.
Approved by Antonio.
This operator relies on a rather specific context setup, so it shall not
be exposed to user in 'operator search' menu etc.
Based on D2528 by Vuk Gardašević (lijenstina).
The Issue
=========
For a long time now MinGW has been unsupported and unmaintained and at this point,
it looks like something that we should just leave behind and move on.
Why Remove
==========
One of the big motivations for MinGW back in the day was that it was free,
compared to MSVC which was license-based.
However, now that this is no longer true, we have basically stopped updating
the needed CMake files.
Along with the CMake files, there are several patches to the extern libs needed to make this work. For example, see:
https://developer.blender.org/diffusion/B/browse/master/extern/carve/patches/mingw_w64.patch
If we wanted to keep MinGW then we would need to make more custom patches to the external libs and
this is not something our platform maintainers are willing to do.
For example, here are the patches needed to build python: https://github.com/Alexpux/MINGW-packages/tree/master/mingw-w64-python3
Fixes T51301
Differential Revision: https://developer.blender.org/D2648
We had handling of fully duplicated polygons already, but... absolutely
nothing to sanitize partially merged polygons! This was giving us
totally invalid geometry, with duplicated vertices in a single poly,
invalid edges, etc.
Now we do check for invalid loops inside polys, and generate new edges
as needed to get only valid polys.
For some reason this was a nightmare to get running fully OK, playing
with old and new indices is really, really mind breaking.
This feature got lost with the new auto-track API.
Added it back by extending the frame accessor class. This isn't really
a frame thing, but we don't have another type of accessor here.
Surely, we can use old-style API here and pass mask via region
tracker options for this particular case, but then it becomes much
less obvious how real auto-tracker will access this mask with old
style API.
So it seems we do need an accessor for such data, it's just a matter of
finding a better place than the frame accessor.
It's a bit ugly but I couldn't find a better way to keep fast installs and
correct handling of switching between master and blender2.8 with different
lib directories.
This makes it possible to have an animated / procedurally generated mesh
that starts empty and obtains data in later frames.
Fixes the export of an empty mesh with an Ocean Modifier, as described in
issue T51351.
This allows you to put any kind of animation data on the mesh, and its
shape will be exported on each timekey. Note that this timekey is unrelated
to the animation data (so we don't export on each keyframe, for example).
A practical example is the addition of an animated custom property to
trigger the export of animated mesh data. The mesh data can then be created
from any source, like Python scripts.
Not only is this useful in itself, it also provides a workaround for one
of the two issues described in T51351.
This is written in a custom metadata key, so it isn't shown by utilities
like abcecho or abcls. However, it's still something that's useful to
have available.
Houdini writes vertex data in a different format than Blender does; Houdini
uses "face-varying scope", which means that the vertex colours are indexed
by an ever-increasing number over all vertices of all faces instead of the
vertex index.
I've also merged the read_custom_data_mcols() and read_mcols() functions,
because the latter was only called from the former, and the changes in this
commit would add yet more function parameters to pass.
A big chunk of code was copied between the if and else bodies. By using
a boolean to store whether the c3f_ptr or c4f_ptr should be used, the
in-loop condition is kept as simple as possible.
Note: the angle in the bug isn't really reflex - using the vertex normal
for this test isn't always right, but usually is. At any rate, we
shouldn't try to put the vertex on the edge between when the angle is reflex.
This was two-fold.
1) The export used viewport settings to obtain the particle cache, rather
than render settings.
2) The child hair writer tried to obtain UV-coordinates from the parent
hair, without checking whether those were available in the first place.
Since we already have a rather advanced PovRay exporter, makes sense to
also nicely display generated 'code'.
Patch by Maurice Raybaud (@mauriceraybaud), thanks!
Cleanup (mostly styling) by @mont29.
The Brightness/Contrast node was changing color but did not modify alpha
or ensure colors are premultiplied on the output. This was giving
artifacts later on unless alpha was manually converted.
Compositor is supposed to work in premultiplied alpha (except of
some really corner cases) so it makes sense to ensure premultiplied
alpha after brightness/contrast node.
This is now done as an option enabled by default, so we:
(a) Keep compatibility with old files.
(b) Have correct behavior for newly created files.
Later on we can get rid of this option.
We were looping over all vgroups in the destination mesh and making string
comparisons, for every vgroup of every vertex of the merged mesh! Crazy!
Now we simply create a temp mapping of vgroup indices, which seriously
simplifies things (and gives significant speedup when merging huge meshes
with lots of vgroups; here a quick stupid test went from 120ms in
vgroup merging to less than 5ms, 25 times quicker!).
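A sketch of the index-mapping idea (hypothetical data layout): build the name-to-index map once per merge, then each vertex only does an array lookup.

    #include <string>
    #include <vector>

    /* One-time name matching replaces a string comparison per vgroup
     * per vertex; per-vertex remapping becomes map[index]. */
    std::vector<int> build_vgroup_map(const std::vector<std::string> &src_names,
                                      const std::vector<std::string> &dst_names)
    {
        std::vector<int> map(src_names.size(), -1);
        for (size_t i = 0; i < src_names.size(); i++) {
            for (size_t j = 0; j < dst_names.size(); j++) {
                if (src_names[i] == dst_names[j]) {
                    map[i] = (int)j;
                    break;
                }
            }
        }
        return map;
    }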
Root of the issue here was that two stupid modifiers could create named
vgroup CD layers (vgroup editing ones... shame on me :") ).
Fix that, and added some versioning code to also fix 'corrupted' blend
files created by those so far.
It seems re-loading a module invalidates memory pointers,
which gives an error on the next kernel call.
Not sure how to move memory pointer from one CUDA module to another one,
so for now simply disabling kernel re-load for CUDA devices. Not ideal,
but better than failing render.
Feature-selective option for CUDA is not an official feature anyway.
`screen_findedge()` is not expected to return NULL in that case, but
checking against that does not hurt (we do it in all its other call
cases anyway), better than crashing.
For some reason GCC-6 successfully compiles test program with
-Wno-implicit-fallthrough passed via command line. It just
silently ignores the unknown arguments which are starting with
-Wno-.
The issue is, if some other warning happens in the code, then
GCC will complain about an unknown -Wno- argument which is not
supported by the current GCC version.
This caused some misleading warning prints about an unknown
command line argument whenever any other warning happened in code
from extern/.
Volume shaders without anything connected to the surface output are treated
as if they had a transparent BSDF as the surface shader in Cycles, so the
denoiser should skip feature pass writing for them just as it does with an
actual transparent BSDF.
If the central pixel is an outlier, the denoiser is supposed to predict its
value from the surrounding pixels. However, in some cases the confidence
interval test would reject every single surrounding pixel, which leaves the
model fitting with no data to work with.
- Some arguments were inappropriately tagged as unused
using the (void)foo semantic.
Only use such semantics in tricky cases, when something
needs to be ignored in release builds or something is
dependent on a tricky ifdef policy.
For the rest of the cases just use the void foo(int /*bar*/)
semantic, which ensures the variable is not used. Solves
confusion and code running out of sync with later
development.
- Used proper unused semantics for some arguments.
- Added braces to make code easier to follow, tricky
indentation with ifdef, uh.
For example, when using a radius of 1, only 9 pixels (due to weighting maybe
even less) will be used, but the transform code may still decide to use a
5-dimensional (or even higher) fit.
This causes severe overfitting and therefore weird pixel values.
To avoid this, this commit limits the amount of dimensions to a third of the
pixel number. For a radius of 3 or more, this doesn't change anything, but
for 1 and 2 it can prevent fireflies and/or negative values being produced.
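The clamp itself is tiny (illustrative names, not the committed code):

    /* Keep the fit's dimensionality at no more than a third of the
     * number of contributing pixels, so e.g. a radius-1 window (at
     * most 9 pixels) can use at most a 3-dimensional fit. */
    int clamp_transform_rank(int rank, int num_pixels)
    {
        const int max_rank = num_pixels / 3;
        return (rank > max_rank) ? max_rank : rank;
    }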
Once again, numerical instabilities causing the Cholesky decomposition to fail.
However, further increasing the diagonal correction just because of a few
pixels in very specific scenes and settings seems unjustified.
Therefore, this commit simply falls back to the basic NLM-filtered pixel
if the more advanced model fails.
I wouldn't mind switching fully to Google style, but I am against
mixing two different styles in the same project. So just stick to the brace
on a new line after function definitions.
There were following issues with ccl_restrict_ptr:
- We already had ccl_restrict for all platforms.
- It was secretly adding `const` qualifier to the declaration,
which is quite weird since non-const pointer can also be
declared as restricted.
- We never in Blender are using foo_ptr or FooPtr type definitions,
so not sure why we should introduce such a thing here.
- It is absolutely wrong from a semantic point of view to put the pointer
into the restrict macro -- const is a part of the type, not a part of the
hint for the compiler that some pointer is never aliased.
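For illustration, the preferred form keeps `const` in the type and the aliasing hint separate (assuming ccl_restrict expands to the compiler's restrict keyword, as on GCC/Clang):

    #define ccl_restrict __restrict__  /* assumed expansion */

    /* `const` is part of the pointee's type; the restrict qualifier
     * is only a no-aliasing promise to the compiler. Both const and
     * non-const pointers can carry it. */
    void accumulate(const float *ccl_restrict src,
                    float *ccl_restrict dst, int n)
    {
        for (int i = 0; i < n; i++) {
            dst[i] += src[i];
        }
    }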
Denoise commit introduced kernel_write_result() which saves light passes, so
no need to call both kernel_write_result() and kernel_write_light_passes() from
the split kernel.
Weirdly enough, kernel_write_result() does not take care of debug passes.
The issue is coming from some weird semi-finished canvas feature, which
was remapping coordinates without applying any differential on the sampling
ellipse (in fact, there is no ellipse; the sampling thing is always a single
pixel).
The whole thing is just weak in the compositor, for now just bring behavior
back to how it was prior to optimization (multithreading) commit.
The problem was that Cycles implicitly uses a transparent surface shader when only
volume nodes are used, but since the black emission shader gets optimized away,
it was no longer detected and therefore no transparent surface was used.
Therefore, the shader now stores whether volume nodes were connected before
optimizing.
Extremely bright pixels in the rendered image cause the denoising algorithm
to produce extremely noticeable artifacts. Therefore, a heuristic is needed
to exclude these pixels from the filtering process.
The new approach calculates the 75% quantile of the 5x5 neighborhood of
each pixel and flags the pixel if it is more than twice as bright.
During the reconstruction process, flagged pixels are skipped. Therefore,
they don't cause any problems for neighboring pixels, and the outlier pixels
themselves are replaced by a prediction of their actual value based on their
feature pass values and the neighboring pixels.
Therefore, the denoiser now also works as a smarter despeckling filter that
uses a more accurate prediction of the pixel instead of a simple average.
This can be used even if denoising isn't wanted by setting the denoising
radius to 1.
The implementation originally handled four different cases:
Regular glossy, glass, metallic fresnel glossy and diffuse.
However, only the first two are actually used currently. Therefore, this commit
removes the other two, which allows simplifying the code.
Additionally, due to the Principled BSDF, the function arguments are now
identical for glossy and glass, which allows getting rid of some ugly #ifdefs.
Use smarter check of where the file is coming from instead of
attempting to replace same source twice with different settings.
Brings down processing time from 3.6sec to 1.8sec.
It is not a good idea to:
1. Duplicate metadata to self
2. Ignore the fact that something might have had metadata already.
Also moved metadata copy to a preparation function, so it is
never lost.
The mesh interpolation code had an edge case where one of two
adjacent edges to a vertex has 0 length. This caused an assert
failure indexing the vertex mesh for splash Blenderman.blend.
Found while looking into T49864. The issue is caused here
by render_copy_renderdata() doing a copy of views with
BLI_duplicatelist(), so we can not just zero the pointers out.
A similar thing is happening for layers as well.
Fix T50882: VSE: Blend Modes on Scenes do not layer properly
Fix T51002: Scene strip with Alpha over not working as expected
The byte-to-float conversion was being skipped if the color spaces of the sequence and the scene
are the same, which is the default, resulting in any non-float strips becoming invisible.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D2635
Normally, segments up to 50 can be quite enough for most cases.
However, when dealing with things like braids,
the current limit can sometimes be quite a pain.
This commit fixes the crash, but user feedback can be improved here to
inform the artist that one can't use Render Result as a texture, since that
will cause a feedback loop.
Basically, upon invoking Cycles baking we could cancel it, which would
leave G.is_break hanging as true. Since we were not setting is_break to
false before executing baking, it would misbehave.
The issue was caused by a stupid workaround for libav. Now things work for
FFmpeg. Some tweaks might still be needed for Libav, but that one is
not really a priority for support.
The previous method was based on face area, giving uneven results
based on topology and gave issues with zero-area faces.
This method gives matching results for concave ngons and the same geometry triangulated.
The code only updated nodes in the nodetree of the scene to which the render layer belongs. Therefore, when using scene B in the compositor setup of scene A, A's node wouldn't be updated.
With this fix, the update function loops over all scenes and checks them for relevant nodes.
Was preventing update in 3DView etc. when changing something in the
World's NodeTree, especially annoying in blender2.8 branch (since legacy
depsgraph has been removed there), but also affecting master.
Old code was working quite unreliably in combination with the fast math
flag, especially when compiling with Clang. It seems we were hitting the
result of the following bug submitted to Clang [1].
Basically, it was happening so that (int)sqrtf(64) was 7 when Cycles
is built with Clang but was a correct 8 when built with GCC.
This commit works around this. Annoying, but I don't see another way to
keep the sampling pattern the same for Clang and GCC.
[1] https://bugs.llvm.org//show_bug.cgi?id=24063
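A hedged sketch of such a workaround (not necessarily the committed code): take the float square root, then correct integer rounding in both directions.

    #include <cmath>

    /* Stable integer square root: fast-math builds may return e.g.
     * sqrtf(64) as slightly below 8, which truncates to 7, so nudge
     * the result until r*r <= x < (r+1)*(r+1) holds exactly. */
    int int_sqrt_stable(int x)
    {
        int r = (int)std::sqrt((float)x);
        while (r > 0 && r * r > x) r--;
        while ((r + 1) * (r + 1) <= x) r++;
        return r;
    }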
The idea here is to keep things in a logical order to match the order of one's
workflow. This concept can be seen in Graph > Dope Sheet > NLA. This issue is
mainly affecting the manual.
Fixes T50709
Differential Revision: https://developer.blender.org/D2630
The goal is to reduce wasted space and improve clarity in the 'N' panel of the VSE through layout changes.
The changes are intentionally conservative to avoid making people re-learn anything.
Author: @mpan3
Differential Revision: https://developer.blender.org/D2439
* "Filmic" and "False Color" view transforms added (sRGB display device only).
* "Very Low/Low/Base/High/Very High Contrast" looks added.
* Added filtering so that Filmic only shows look names prefixed with "Filmic - ".
Filmic Dynamic Range LUT configuration created by Troy James Sobotka with
special thanks and feedback from Guillermo, Claudio Rocha, Bassam Kurdali,
Eugenio Pignataro, Henri Hebeisen, Jason Clarke, Haarm-Peter Duiker, Thomas
Mansencal, and Timothy Lottes.
Differential Revision: https://developer.blender.org/D2659
This commit contains the first part of the new Cycles denoising option,
which filters the resulting image using information gathered during rendering
to get rid of noise while preserving visual features as well as possible.
To use the option, enable it in the render layer options. The default settings
fit a wide range of scenes, but the user can tweak individual settings to
control the tradeoff between a noise-free image, image details, and calculation
time.
Note that the denoiser may still change in the future and that some features
are not implemented yet. The most important missing feature is animation
denoising, which uses information from multiple frames at once to produce a
flicker-free and smoother result. These features will be added in the future.
Finally, thanks to all the people who supported this project:
- Google (through the GSoC) and Theory Studios for sponsoring the development
- The authors of the papers I used for implementing the denoiser (more details
on them will be included in the technical docs)
- The other Cycles devs for feedback on the code, especially Sergey for
mentoring the GSoC project and Brecht for the code review!
- And of course the users who helped with testing, reported bugs and things
that could and/or should work better!
Those shall not be considered while checking whether a to-be-made-local
ID will end up fully local, or still be partially used by linked data...
Even less since we already do have special handling of proxies later.
Fixes main remaining issue found with 04_01_H.lighting.blend Agent327
file, and allows us to switch back to optimized post-processing in
make_local code.
That one tags those ugly little 'from' ID pointers (shape keys and
proxies), which point back from used to user ID, and require a lot of
special care in data-block management...
Again, Agent327's 04_01_H.lighting.blend shows some problem here, it
triggers several times the 'not used at all' assert in step 5 of secure
code, and with optimized version we lose the connection between
rigs and the main characters!
Will keep investigating on this, but for now let's try to give something
working to the studio.
This should not be needed imho, we already set POSE_RECALC flag
correctly there, but it still is missing actual update of poses in some
(complex and convoluted) cases. So at least for now, let's go with this
hack, it's not really harming anyone anyway.
Fixes crash in Agent327's 04_01_H.lighting.blend when making all local.
Not sure how this happens, but in some cases we can evaluate
deformations of an armature whose pose is not valid; at least put a
warning here to help identify the issue quickly.
The issue was caused by unlimited textures commit, root of the issue is that
displacement code updates some of the image slots directly, so it needs to
ensure device vectors are all proper size.
This was set to maxcpu, which on an 8-core box would be 8; each project would
then spawn 8 instances of cl.exe, making for a possible 64 simultaneously
running compiler instances, slowing the compile down instead of speeding it up.
We unfortunately cannot fix this for previous versions of Blender, but
at least the issue (Blender crashing on unknown IDProp types) should now
be addressed for the future.
Simply reset unknown IDProp types to integer one, and reset its value to zero.
Previously, every RenderPass would have a bitfield that specified its type. That limits the number of passes to 32, which was reached a while ago.
However, most of the code already supported arbitrary RenderPasses since they were also used to store Multilayer EXR images.
Therefore, this commit completely removes the passflag from RenderPass and changes all code to use the unique pass name for identification.
Since Blender Internal relies on hardcoded passes and to preserve compatibility, 32 pass names are reserved for the old hardcoded passes.
To support these arbitrary passes, the Render Result compositor node now adds dynamic sockets. For compatibility, the old hardcoded sockets are always stored and just hidden when the corresponding pass isn't available.
To use these changes, the Render Engine API now includes a function that allows render engines to add arbitrary passes to the render result. To be able to add options for these passes, addons can now add their own properties to SceneRenderLayers.
To keep the compositor input node updated, render engine plugins have to implement a callback that registers all the passes that will be generated.
From a user perspective, nothing should change with this commit.
Differential Revision: https://developer.blender.org/D2443
Differential Revision: https://developer.blender.org/D2444
Reduce thread divergence in kernel_shader_eval.
Rays are sorted in blocks of 2048 according to shader->id.
On R9 290 Classroom is ~30% faster, and Pabellon Barcelone is ~8% faster.
No sorting for CUDA split kernel.
Reviewers: sergey, maiself
Reviewed By: maiself
Differential Revision: https://developer.blender.org/D2598
Previously the logic was different for duplis and regular objects: regular
objects were using render visibility when the Render Layer option is enabled,
while duplis were always using viewport visibility when rendering from the
viewport.
This was quite confusing because it caused different results in viewport and
render when artists were expecting them to match 1:1.
This implements branched path tracing for the split kernel.
General approach is to store the ray state at a branch point, trace the
branched ray as normal, then restore the state as necessary before iterating
to the next part of the path. A state machine is used to advance the indirect
loop state, which avoids the need to add any new kernels. Each iteration the
state machine recreates as much state as possible from the stored ray to keep
overall storage down.
It's kind of hard to keep all the different integration loops in sync, so this
needs lots of testing to make sure everything is working correctly. We should
probably start trying to deduplicate the integration loops more now.
Nonbranched BMW is ~2% slower, while classroom is ~2% faster, other scenes
could use more testing still.
Reviewers: sergey, nirved
Reviewed By: nirved
Subscribers: Blendify, bliblubli
Differential Revision: https://developer.blender.org/D2611
Negative scale on camera is a nice trick to invert render image on one
axis at no extra CPU cost. It was implemented in the Decklink branch but
I introduced a typo when porting it to master. It is now fixed.
The change was initially needed for Blender 2.8 branch but the actual
function was reverted in there. So no reason to keep dead unused
placeholder in the dependency graph.
This reverts commit fd69ba2255.
Avoid calculating a new split-index when re-fitting.
While checking if a knot can be removed, the index with the highest error
can be used as a candidate to replace the knot
(in the case it can't be removed).
Not sure if this is a proper fix, but was getting frequent crashes, so
committing this real quick just to make master stable again. Can be
reverted later if there's a better fix. The changes to images really
need a closer look...
Hi Guys,
as one of my clients needs the possibility to have custom menu entries in the general right click menu (all over Blender: in the node editor, properties, toolbars,..) I talked with Campbell about expanding our hard coded menu a bit. This is the outcome. As I only need those two, I currently support a button_prop and a button_pointer.
{F540397}
I tested the changes with a custom script where I added a custom entry and executed an operator on click - it seems to work exactly how it's intended to. The script: {F540435}
As I'm not too experienced in rna stuff I would really appreciate any review.
Thanks very much Campbell for his open ears & help on this issue!
Reviewers: campbellbarton, mont29
Reviewed By: campbellbarton, mont29
Subscribers: sybren, mont29
Tags: #addons
Differential Revision: https://developer.blender.org/D2612
The previous fix did not work for mixed textures. This one will over-allocate
the information array, but it's better than not being able to render at all.
Some more cleanup and improvement is coming.
This patch allows for an unlimited number of textures in Cycles where the hardware allows. It replaces a number of static arrays with dynamic arrays and changes the way the flat_slot indices are calculated. Eventually, I'd like to get to a point where there are only flat slots left and textures of all kinds are stored in a single array.
Note that the arrays in DeviceScene are changed from containing device_vector<T> objects to device_vector<T>* pointers. Ideally, I'd like to store objects, but dynamic resizing of a std::vector in pre-C++11 calls the copy constructor, which for a good reason is not implemented for device_vector. Once we require C++11 for Cycles builds, we can implement a move constructor for device_vector and store objects again.
The limits for CUDA Fermi hardware still apply.
Reviewers: tod_baudais, InsigMathK, dingto, #cycles
Reviewed By: dingto, #cycles
Subscribers: dingto, smellslikedonkey
Differential Revision: https://developer.blender.org/D2650
Previously canceling a render done by the split kernel could cause artifacts
such as very bright or dark tiles. This was caused by unfinished samples
being included in the output buffer. To avoid this we now wait till all the
currently rendering samples have finished, up to a limit of twice the
expected time for them to finish (currently this is no more than 20 seconds,
but usually it's much less). If samples still haven't finished by then we
stop anyway in case there's an endless loop occurring.
It was totally unclear whether the device is enabled or disabled.
Lots of people got fully lost in the current interface.
While the solution is not fully ideal, it at least solves the
ambiguity in the interface.
Simple child hairs don't have a face index number assigned, so the
call to dm->getTessFaceData(dm, num, CD_MFACE) would cause a crash. To
work around this, UV and normal vectors are copied from the parent
hair.
I've also removed an unnecessary call to dm->getTessFaceArray(dm);
Reviewers: kevindietrich
Differential Revision: https://developer.blender.org/D2638
The function doesn't return whether the object is a shape at all, since
it also returns true for camera objects (and soon also for empties). It
returns true when objects of this type can be exported to Alembic at all.
This is now reflected in the name.
This works around the long-outstanding issue T50176 with Cycles on msvc2015/x86. The root cause is still unknown though; feels like a game of whack-a-mole.
Reviewers: sergey, dingto
Subscribers: Blendify
Tags: #cycles
Differential Revision: https://developer.blender.org/D2573
Using -cl-fast-relaxed-math assumes no NaN/Inf values in any expression.
This causes problems on overflow, division by zero, square root of negative number.
Comparisons with NaN or infinite value are affected as well.
This patch causes <2% slowdown on benchmark scenes.
Fix T50985: Rendering volume scatter with GPU OpenCL comes to a halt after a few seconds
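An illustrative reason why (an assumption about compiler behavior, consistent with the flag's documented semantics): relaxed math lets the compiler assume NaN never occurs, so the classic self-comparison test can be folded away.

    /* Under -cl-fast-relaxed-math / -ffast-math the compiler may fold
     * `x != x` to `false`, so NaN guards like this silently vanish and
     * overflow or division-by-zero results propagate unchecked. */
    bool nan_guard(float x)
    {
        return x != x;  /* true only for NaN -- unless fast math removes it */
    }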
HDF5 Alembic files are not officially supported by Blender. With this
commit, the HDF5 format is detected even when Blender is compiled without
HDF5 support, and the user is given an explanatory error message (rather
than the generic "Could not open Alembic archive for reading").
The final goal to reach is to make vectorized types much easier to maintain
and the previous design had following issues:
- Having all the types and method implementations in one place made the
source file rather bloated and unfun to navigate in.
- It was not possible to quickly glance available API for the type you are
interested in.
- Adding more vectorization types will bloat the file even more, making
things even more tricky to follow.
Fixes performance issues of the C++ one with Windows MSVC debug builds...
Merely a translation from msgfmt.cc code by @sergey, using BLI libs instead of C++'s stdlib.
Reviewers: sergey, campbellbarton, LazyDodo
Subscribers: sergey
Differential Revision: https://developer.blender.org/D2605
QtKit was removed in macOS Sierra, this patch disables WITH_CODEC_QUICKTIME
in Sierra and greater versions of macOS.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2645
By mistake, the code relied on ALEMBIC_ROOT_DIR being defined by the user
running the tests. Now CMake macros are used to correctly find the Alembic
root directory.
Matrix.decompose() should either return "location, orientation, size" or
"translation, rotation, scale". Since there are constructors for the former,
I've replaced "location" in the documentation with "translation".
The code is still the same, I just changed the documentation.
The idea is to have some generic BSDF node which is subclassed by
"regular" BSDF nodes and uber shaders.
This way we can access the special type and closure type for making
decisions somewhere else.
No longer passing time as float and constructing ISampleSelectors all
over the place. Instead, just construct an ISampleSelector once and
pass it along.
It is disabled by default, so should not affect existing configurations.
Main benefits of this goes as:
- Linux distros can use that to avoid library duplication and link the
blender package against the gflags package from the system.
- It is easier to test whether Blender works with an updated version of
Gflags prior to re-bundling the library.
This switches the internal color representation of the eye dropper from display space to linear. Any time a linear color is requested and the color is picked from a linear object, the result is now precise to the bit as the color gets patched through directly. Color space conversion now only happens when a color is picked from non-linear display space objects or when the color is requested to be returned in non-linear space.
In addition, this patch changes the DifferenceMatte node to interpret a tolerance of 0.0 to accept colors that are identical bit by bit, as opposed to simply rejecting all colors.
Replaced some STREQ(snode->tree_idname, ...) calls with ED_node_is_*() calls for improved readability, fixed one case where the STREQ was used the wrong way
That's a quick hack to address that specific case; the new pointer IDProp
actually highlights a generic problem - datablocks using themselves - which
is not really handled by current code. Would consider this a not-so-urgent
TODO though.
The ABC_export and ABC_import functions both take a as_background_job
parameter, and return a boolean.
When as_background_job=true, returns false immediately after scheduling
a background job. This was the old behaviour of this function, which makes
it very hard for scripts to do something with the data after the import
or export completes.
When as_background_job=false, performs the export synchronously, and
returns true when the export was ok, and false if there were any errors.
This allows further processing.
The Scene.alembic_export() function is deprecated, and will be removed from
Blender 2.8 in favour of calling the bpy.ops.wm.alembic_export() operator.
As such, it has been hard-coded to the old background job behaviour.
The export is still slower than needed, as the particle systems themselves
aren't disabled during the export. It's only the writing to the Alembic
file that's skipped.
We could not edit them, but still could delete them, which makes no
sense, API-defined properties are similar to class members, removing
them from single instances is pure garbage. And it was broken anyway.
Found by @a.romanov while checking on T51198, thanks.
Curve resolution isn't natively supported by Alembic, hence it is stored
in a user property "blender:resolution". I've looked at a Maya curves
example file, but that also didn't contain any information about curve
resolution.
Differential Revision: https://developer.blender.org/D2634
Reviewers: kevindietrich
The order number written to Alembic is the same as we use in memory, so
the +1 wasn't needed, at least according to the reference Maya exporter
maya/AbcExport/MayaNurbsCurveWriter.cpp, function
MayaNurbsCurveWriter::write(), in the Alembic source code.
Furthermore, when writing an array of nurb orders, the curve type should
be set to kVariableOrder, otherwise the importer will ignore it.
commit 90778901c9
Merge: 76eebd93bf0026
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Mon Apr 3 07:52:05 2017 +0200
Merge branch 'master' into cycles_disney_brdf
commit 76eebd9379
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Thu Mar 30 15:34:20 2017 +0200
Updated copyright for the new files.
commit 013f4a152a
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Thu Mar 30 15:32:55 2017 +0200
Switched from multiplication of base and subsurface color to blending
between them using the subsurface parameter.
commit 482ec5d1f2
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Mon Mar 13 15:47:12 2017 +0100
Fixed a bug that caused an additional white diffuse closure call when using
path tracing.
commit 26e906d162
Merge: 0593b8c223aff9
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Feb 6 11:32:31 2017 +0100
Merge branch 'master' into cycles_disney_brdf
commit 0593b8c51b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Feb 6 11:30:36 2017 +0100
Fixed the broken GLSL shader and implemented the Disney BRDF in the
real-time view port.
commit 8c7e11423b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Feb 3 14:24:05 2017 +0100
Fix to comply strict compiler flags and some code cleanup
commit 17724e9d2d
Merge: 379ba34520afa2
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jan 24 09:59:58 2017 +0100
Merge branch 'master' into cycles_disney_brdf
commit 379ba346b0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jan 24 09:28:56 2017 +0100
Renamed the Disney BSDF to Principled BSDF.
commit f80dcb4f34
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Dec 2 13:55:12 2016 +0100
Removed reflection call when roughness is low because of artifacts.
commit 732db8a57f
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Nov 16 09:22:25 2016 +0100
Indication if to use fresnel is now handled via the type of the BSDF.
commit 0103659f5e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Nov 11 13:04:11 2016 +0100
Fixed an error in the clearcoat where it appeared too bright for default
light sources (like directional lights)
commit 0aa68f5335
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Nov 7 12:04:38 2016 +0100
Resolved inconsistencies in using tabs and spaces
commit f5897a9494
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Nov 7 08:13:41 2016 +0100
Improved the clearcoat part by using GTR1 instead of GTR2
commit 3dfc240e61
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 31 11:31:36 2016 +0100
Use reflection BSDF for glossy reflections when roughness is 0.0 to
reduce computational expense and some code cleanup
Code cleanup includes:
- Code style cleanup and removed unused code
- Consolidated code in the bsdf_microfacet_multi_impl.h to reduce
some computational expense
commit a2dd0c5faf
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Oct 26 08:51:10 2016 +0200
Fixed glossy reflections and refractions for low roughness values and
cleaned up the code.
For low roughness values, the reflections had some strange behavior.
commit 9817375912
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Oct 25 12:37:40 2016 +0200
Removed default values in setup functions and added extra functions for
GGX with fresnel.
commit bbc5d9d452
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Oct 25 11:09:36 2016 +0200
Switched from uniform to cosine hemisphere sampling for the diffuse and
the sheen part.
commit d52d8f2813
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 16:17:13 2016 +0200
Removed the color parameters from the diffuse and sheen shader and use
them as closure weights instead.
commit 8f3d927385
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 09:57:06 2016 +0200
Fixed the issue with artifacts when using anisotropy without linking the
tangent input to a tangent node.
commit d93f680db9
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 09:14:51 2016 +0200
Added subsurface radius parameter to control the per color channel
effect radius of the subsurface scattering.
commit c708c3e53b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 08:14:10 2016 +0200
Rearranged the inputs of the shader.
commit dfbfff9c38
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Oct 21 09:27:05 2016 +0200
Put spaces in the parameter names of the shader node
commit e5a748ced1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Oct 21 08:51:20 2016 +0200
Removed code that isn't in use anymore
commit 75992bebc1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Oct 21 08:50:07 2016 +0200
Code style cleanup
commit 4dfcf455f7
Merge: 243a0e32cd6a89
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Thu Oct 20 10:41:50 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit 243a0e3eb8
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Thu Oct 20 10:01:45 2016 +0200
Switching between OSL and SVM is more consistent now when using the Disney
BSDF.
There were some minor differences in the OSL implementation, e.g. the
refraction roughness was missing.
commit 2a5ac50922
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 09:17:57 2016 +0200
Fixed a bug that caused transparency to be always white when using OSL and
selecting GGX as distribution of the Disney BSDF
commit e1fa862391
Merge: d0530a87f76f6f
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 08:59:32 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit d0530a8af0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 08:53:18 2016 +0200
Cleanup the Disney BSDF implementation and removing unneeded files.
commit 3f4fc826bd
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 08:36:07 2016 +0200
Unified the OSL implementation of the Disney clearcoat as a simple
microfacet shader like it was previously done in SVM
commit 4d3a0032ec
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Sep 26 12:35:36 2016 +0200
Enhanced performance for Disney materials without subsurface scattering
commit 3cd5eb56cf
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Sep 16 08:47:56 2016 +0200
Fixed a bug in the Disney BSDF that caused specular reflections to be too
bright and diffuse is now reacting to the roughness again
- A normalization for the fresnel was missing which caused the specular
reflections to become too bright for the single-scatter GGX
- The roughness value for the diffuse BSSRDF part has always been
overwritten and thus always 0
- Also the performance for refractive materials with roughness=0.0 has
been improved
commit 7cb37d7119
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Thu Sep 8 12:24:43 2016 +0200
Added selection field to the Disney BSDF node for switching between
"Multiscatter GGX" and "GGX"
In the "GGX" mode there is an additional parameter for changing the
refraction roughness for materials with smooth surfaces and rough interns
(e.g. honey). With the "Multiscatter GGX" this effect can't be produced at
the moment and so here will be no separation of the two roughness values.
commit cdd29d06bb
Merge: 02c315ab40d1c1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 6 15:59:05 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit 02c315aeb0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 6 15:16:09 2016 +0200
Implemented the OSL part of the Disney shader
commit 5f880293ae
Merge: 630b80eb399a6d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Sep 2 10:53:36 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit 630b80e08b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Sep 2 10:52:13 2016 +0200
Fresnel in the microfacet multiscatter implementation improved
commit 0d9f4d7acb
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Aug 26 11:11:05 2016 +0200
Fixed refraction roughness problem (refractions were always 100% rough)
and set IOR of clearcoat to 1.5
commit 9eed34c7d9
Merge: ef29aaeae475e3
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Aug 16 15:22:32 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit ef29aaee1a
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Aug 16 15:17:12 2016 +0200
Implemented the fresnel in the multi-scatter GGX for the Disney BSDF
- The specular/metallic part uses the multi-scatter GGX
- The fresnel of the metallic part is controlled by the specular value
- The color of the reflection part when using transparency can be
controlled by the specularTint value
commit 88567af085
Merge: cc267e5285e082
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Aug 3 15:05:09 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit cc267e52f2
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Aug 3 15:00:25 2016 +0200
Implemented the Disney clearcoat as a variation of the microfacet bsdf,
removed the transparency roughness again and added an input for
anisotropic rotations
commit 81f6c06b1f
Merge: ece5a087065022
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Aug 3 11:42:02 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit ece5a08e0d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 16:29:21 2016 +0200
Base color now applied again to the refraction of transparent Disney
materials
commit e3aff6849e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 16:05:19 2016 +0200
Added subsurface color parameter to the Disney shader
commit b3ca6d8a2f
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 12:30:25 2016 +0200
Improvement of the SSS in the Disney shader
* Now the bump normal is correctly used for the SSS.
* SSS in Disney uses the Disney diffuse shader
commit d68729300e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 12:23:13 2016 +0200
Better calculation of the Disney diffuse part
Now the values for NdotL and NdotV are clamped to 0.0f for a better look
when using normal maps
commit cb6e500b12
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Jul 25 16:26:42 2016 +0200
Now one can disable specular reflections again by setting specular and
metallic to 0 (broke this in the previous commit)
commit bfb9cb11b5
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Jul 25 16:11:07 2016 +0200
fixed the Disney SSS and cleaned the initialization of the Disney shaders
commit 642c0fdad1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Jul 25 16:09:55 2016 +0200
fixed an error that was caused by the missing LABEL_REFLECT in the Disney
diffuse shader
commit c10b484dca
Author: Jens Verwiebe <info@jensverwiebe.de>
Date: Fri Jul 22 01:15:21 2016 +0200
Rollback attempt to fix sss crashing, it prevented crash by disabling sss completely, thus useless
commit 462bba3f97
Author: Jens Verwiebe <info@jensverwiebe.de>
Date: Thu Jul 21 23:11:59 2016 +0200
Add an undef for sc_next for safety
commit 32d348577d
Author: Jens Verwiebe <info@jensverwiebe.de>
Date: Thu Jul 21 00:15:48 2016 +0200
Attempt to fix Disney SSS
commit dbad91ca6d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Jul 20 11:13:00 2016 +0200
Added a roughness parameter for refractions (for scattering of the rays
within an object)
With this, one can create a translucent material with a smooth surface and
with a milky look.
The final refraction roughness has to be calculated using the surface
roughness and the refraction roughness because those two are correlated
for refractions. If a ray hits a rough surface of a translucent material,
it is scattered while entering the surface. Then it is scattered further
within the object. The calculation I'm using is the following:
RefrRoughnessFinal = 1.0 - (1.0 - Roughness) * (1.0 - RefrRoughness)
commit 50ea5e3e34
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jun 7 10:24:50 2016 +0200
Disney BSDF is now supporting CUDA
commit 10974cc826
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 31 11:18:07 2016 +0200
Added parameters IOR and Transparency for refractions
With this, the Disney BRDF/BSSRDF is extended by the BTDF part.
commit 218202c090
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 15:08:18 2016 +0200
Added an additional normal for the clearcoat
With this normal one can simulate a thin layer of clearcoat by applying a
smoother normal map than the original to this input
commit dd139ead7e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 12:40:56 2016 +0200
Switched to the improved subsurface scattering from Christensen and
Burley
commit 11160fa4e1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 10:16:30 2016 +0200
Added Disney Sheen shader as a preparation to get to a BSSRDF
commit cee4fe0cc9
Merge: 4f955d06b5bab6
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 09:08:09 2016 +0200
Merge branch 'cycles_disney_brdf' of git.blender.org:blender into cycles_disney_brdf
Conflicts:
intern/cycles/kernel/closure/bsdf_disney_clearcoat.h
intern/cycles/kernel/closure/bsdf_disney_diffuse.h
intern/cycles/kernel/closure/bsdf_disney_specular.h
intern/cycles/kernel/closure/bsdf_util.h
intern/cycles/kernel/osl/CMakeLists.txt
intern/cycles/kernel/osl/bsdf_disney_clearcoat.cpp
intern/cycles/kernel/osl/bsdf_disney_diffuse.cpp
intern/cycles/kernel/osl/bsdf_disney_specular.cpp
intern/cycles/kernel/osl/osl_closures.h
intern/cycles/kernel/shaders/node_disney_bsdf.osl
intern/cycles/render/nodes.cpp
intern/cycles/render/nodes.h
commit 4f955d0523
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 24 16:38:23 2016 +0200
SVM and OSL are both working for the simple version of the Disney BRDF
commit 1f5c41874b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 24 09:58:50 2016 +0200
Disney node can be used without SVM and started to cleanup the OSL implementation
There is still some wrong behavior for SVM for the Schlick Fresnel part at the
specular and clearcoat
commit d4b814e930
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:22:29 2016 +0200
Switched from a parameter struct for Disney parameters to ShaderClosure params
commit b86a1f5ba5
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:19:57 2016 +0200
Added additional variables for storing parameters in the ShaderClosure struct
commit 585b886236
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 12:03:17 2016 +0200
added output parameter to the DisneyBsdfNode
That has been forgotten after removing the inheritance of BsdfNode
commit f91a286398
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 10:40:48 2016 +0200
removed BsdfNode class inheritance for DisneyBsdfNode
That's due to a naming difference. The Disney BSDF uses the name 'Base Color'
while the BsdfNode had a 'Color' input. That caused a text message to be
printed while rendering.
commit 30da91c9c5
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 16:08:10 2016 +0200
disney implementation cleaned
commit 30d41da0f0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 13:23:07 2016 +0200
added the disney brdf as a shader node
commit 1f099fce24
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 3 16:54:49 2016 +0200
added clearcoat implementation
commit 00a1378b98
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Apr 29 22:56:49 2016 +0200
disney diffuse and specular implemented
commit 6baa7a7eb7
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 15:21:32 2016 +0200
disney diffuse is working correctly
commit d8fa169bf3
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 08:41:53 2016 +0200
added vessel for disney diffuse shader
commit 6b5bab6cec
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:22:29 2016 +0200
Switched from a parameter struct for Disney parameters to ShaderClosure params
commit f6499c2676
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:19:57 2016 +0200
Added additional variables for storing parameters in the ShaderClosure struct
commit 7100640b65
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 12:03:17 2016 +0200
added output parameter to the DisneyBsdfNode
That has been forgotten after removing the inheritance of BsdfNode
commit 419ee54411
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 10:40:48 2016 +0200
removed BsdfNode class inheritance for DisneyBsdfNode
That's due to a naming difference. The Disney BSDF uses the name 'Base Color'
while the BsdfNode had a 'Color' input. That caused a text message to be
printed while rendering.
commit 6006f91e87
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 16:08:10 2016 +0200
disney implementation cleaned
commit 0ed0895914
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 13:23:07 2016 +0200
added the disney brdf as a shader node
commit 0630b742d7
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 3 16:54:49 2016 +0200
added clearcoat implementation
commit 9f3d39744b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Apr 29 22:56:49 2016 +0200
disney diffuse and specular implemented
commit 9b26206376
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 15:21:32 2016 +0200
disney diffuse is working correctly
commit 4711a3927d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 08:41:53 2016 +0200
added vessel for disney diffuse shader
Differential Revision: https://developer.blender.org/D2313
This makes it possible to use surface-only nodes and connect them to the volume output.
If there was something connected to surface output those extra connections
will not change anything visually but will force volume features to be included
into feature-adaptive kernels.
In fact, this exact reason seems to be causing slowdown of Barcelone file
comparing AMD OpenCL to NVidia CUDA.
Currently only supported by the final F12 renders because of the current design
of what gets optimized out when and how feature-adaptive kernel accesses
list of required features.
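As a rough sketch of the idea (all names here are invented for illustration; this is not the actual Cycles feature-detection code): the feature-adaptive build collects a bitmask of required features from the shader graphs, and anything reaching the volume output forces the volume feature on, even when it never changes the rendered image.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical feature bits, for illustration only.
enum KernelFeature : uint32_t {
  FEATURE_SURFACE = 1 << 0,
  FEATURE_VOLUME = 1 << 1,
};

struct ShaderGraph {
  bool has_surface_links;
  bool has_volume_links;  // true as soon as anything reaches the volume output
};

static uint32_t required_features(const ShaderGraph &graph)
{
  uint32_t features = 0;
  if (graph.has_surface_links)
    features |= FEATURE_SURFACE;
  if (graph.has_volume_links)
    features |= FEATURE_VOLUME;  // surface-only nodes still trigger this
  return features;
}

int main()
{
  ShaderGraph graph = {true, true};
  printf("feature mask: 0x%x\n", (unsigned)required_features(graph));
  return 0;
}
```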
Reviewers: dingto, nirved, maiself, lukasstockner97, brecht
Reviewed By: brecht
Subscribers: bliblubli
Differential Revision: https://developer.blender.org/D2569
Remapping an ID to itself is nonsense here (it was actually triggering an
assert in BKE_library code), so just bail out early in the RNA callback in
that case.
Sanitize a bit how the cache path is handled by fluidsim (there is much more
to be done here though :( ), and forbid an empty path (we reset to the default
path relative to the current .blend file in case it's empty).
If people really, really want to use the current OS working directory, they
can at least use '.' as the path. ;)
When the options were moved to toolsettings, this part was missed. The problem
was not the pointer, as suggested in D2629.
Thanks to Arvīds Kokins for his help fixing this bug.
This avoids the unnecessary creation of a bvhtree, which can be highly
inefficient in some cases (for example in the
`operator_modal_view3d_raycast.py` template).
Unfortunately this does break compatibility in that the viewport will look a
bit different depending on the settings, but the old behavior was simply not
usable for higher distances.
The Object Info node can be useful for giving some variation to a single material assigned to multiple instances. This patch adds support for the Viewport and BI.
{F499530}
Example: {F499528}
Reviewers: merwin, brecht, dfelinto
Reviewed By: brecht
Subscribers: duarteframos, fclem, homyachetser, Evgeny_Rodygin, AlexKowel, yurikovelenov
Differential Revision: https://developer.blender.org/D2425
The U-resolution of the imported curves was kept at the default value
of 12, which is way too high for imported hair. We export hair at a
fairly high resolution already, so there is no need to subdivide even
further when importing.
Of course this may have an impact on other curves that do require this
U-resolution to be higher. In that case the resolution can be
increased after importing.
I removed the default nu->orderu = num_verts, as that allowed every
point to influence the entire spline, which was more expensive for the
CPU, and unlikely to be needed. The orderu computations had off-by-one
errors in the curve importer, which are now also fixed. The correct
values are:
- Linear: orderu = 2
- Quadratic: orderu = 3
- Cubic: orderu = 4
These values are also what is stored in the Alembic file for curves of
type kVariableOrder, according to the reference Maya exporter
maya/AbcExport/MayaNurbsCurveWriter.cpp, function
MayaNurbsCurveWriter::write(), in the Alembic source code.
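In sketch form (not the actual importer code; the enum and function names are illustrative), the off-by-one-free mapping is simply the standard NURBS relation order = degree + 1:

```cpp
#include <cstdio>

// Hypothetical degree values, for illustration.
enum CurveDegree { DEGREE_LINEAR = 1, DEGREE_QUADRATIC = 2, DEGREE_CUBIC = 3 };

// NURBS order is degree + 1: linear -> 2, quadratic -> 3, cubic -> 4.
static int order_from_degree(int degree)
{
  return degree + 1;
}

int main()
{
  printf("linear: %d, quadratic: %d, cubic: %d\n",
         order_from_degree(DEGREE_LINEAR),
         order_from_degree(DEGREE_QUADRATIC),
         order_from_degree(DEGREE_CUBIC));
  return 0;
}
```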
The result is a frame rate increase of roughly 100x (tested with one
100-hair test on one machine, so take it with a grain of salt).
This test checks that a set of cubes are exported with the correct
transform, both with flatten=True and flatten=False.
This commit also adds an easy-to-use superclass for upcoming Alembic
unit tests.
This supports our common character animation workflow, where a character,
its rig, and the custom bone shapes are all part of a group. This group
is then linked into the scene, the rig is proxified and animated. Such
a group can now be exported. Use "Renderable objects only" to prevent
writing the custom bone shapes to the Alembic file.
The absence of datablock properties "will certainly be resolved soon as the need for them is becoming obvious" said the [[http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.67/Python_Nodes|Python Nodes release notes]]. So this patch allows Python scripts to create ID Properties which reference datablocks.
This functionality is implemented for `PointerProperty`, and such properties can now be created with Python.
In addition to the standard update callback, `PointerProperty` can have a `poll` callback (standard RNA) which is useful for search menus. For details see the test included in this patch.
Original author: @artfunkel
Alexander (Blend4Web Team)
Reviewers: brecht, artfunkel, mont29, campbellbarton
Reviewed By: mont29, campbellbarton
Subscribers: jta, sergey, campbellbarton, wisaac, poseidon4o, mont29, homyachetser, Evgeny_Rodygin, AlexKowel, yurikovelenov, fjuhec, sharlybg, cardboard, duarteframos, blueprintrandom, a.romanov, BYOB, disnel, aditiapratama, bliblubli, dfelinto, lukastoenne
Maniphest Tasks: T37754
Differential Revision: https://developer.blender.org/D113
We cannot re-use anything for such pools, because we know nothing about whether
the main thread is sleeping or not. So we identify such threads as 0, but we
don't use the main thread's TLS.
This fixes deadlocks and crashes reported by Luca when doing playblasts.
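A very rough sketch of the idea (all types and names invented; this is not Blender's actual task scheduler API): threads the scheduler did not create are tagged with id 0 and get their own dedicated storage instead of borrowing the main thread's TLS, whose state we cannot reason about.

```cpp
#include <cstdio>

struct ThreadLocalStore { int cached_items; };

// Hypothetical pool layout, for illustration only.
struct TaskPool {
  ThreadLocalStore main_tls;     // owned by the thread that created the pool
  ThreadLocalStore foreign_tls;  // used by "unknown" threads (id 0)
  bool created_by_scheduler;
};

static ThreadLocalStore *pool_tls(TaskPool *pool, int thread_id)
{
  if (thread_id == 0 && !pool->created_by_scheduler) {
    // We know nothing about the main thread here (it may be sleeping),
    // so don't touch its TLS; use dedicated storage instead.
    return &pool->foreign_tls;
  }
  return &pool->main_tls;
}

int main()
{
  TaskPool pool = {{0}, {0}, false};
  printf("%p\n", (void *)pool_tls(&pool, 0));
  return 0;
}
```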
The global size depends on memory usage, which might change during rendering.
Haven't seen it happen, but it seems possible that this could cause the global
size to differ from what was used for allocating buffers.
The issue here is that the preferences are still used, because both modes can
be accessed from the 3D View's View menu. In the future the old mode will
likely be removed (maybe in 2.8?), but for now we want to keep both operational.
Differential Revision: https://developer.blender.org/D2320
Do not recompute both points' 2D coordinates for each segment; we can copy
them over from the previous one... Does not give any measurable speedup
offhand, though.
Couple of things here:
- Boost is not necessarily compiled into your /opt/lib, and the system-wide
version might have been used. The recent change in Alembic did not
take this into account.
- Alembic needs some extra components of Boost.
These might be missing now for distros other than DEB.
The issue was caused by a recent change in the inlining policy.
There is some sort of memory corruption happening here; ASAN suggests
it's a stack overflow issue. Not quite sure why it is happening though, and I
was not able to solve it in the past hours.
Committing a fix which works, with a big TODO note.
The issue is visible on an AVX2 machine when rendering cycles_reports_test.
Doing this in a fully 'clean' way is far from obvious; especially on
unregister, you often end up leaving nasty 'orphaned' keymap items
referring to unregistered operators...
This seems to happen on Windows only; it has happened to Thomas and Nathan
already. Thomas was showing a similar patch, but I do not see it committed.
So committing now in order to keep more developers and users happy.
Ever since we merged the extra texture types (half etc.) and the split kernel, the compile time for cycles_kernel has been going out of control.
It's currently sitting at a cool 1295.762 seconds with our standard compiler (2013/x64/release).
I'm not entirely sure why MSVC gets upset with it, but the inlining of the matrix code near the bottom of the tri-cubic 3D interpolator is the source of the issue; this patch excludes it from being inlined.
This patch brings it back down to a manageable 186 seconds (7x faster!!).
With the attached bzzt.blend that @sergey kindly provided, I got the following results with builds with identical hashes:
58:51.73 buildbot
58:04.23 patched
It's really close. The slight speedup could be explained by the switch instead of multiple ifs (switches do generate more optimal code than a chain of if/else statements), but in all honesty it might just have been pure luck (dev box, very polluted, bad for benchmarks). Regardless, this patch doesn't seem to slow down anything in my limited testing.
{F532336}
{F532337}
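The general shape of such a fix, as a sketch (the macro and function names here are illustrative, not the actual Cycles sources): mark the offending helper as never-inline on MSVC so the optimizer stops paying the inlining cost at every call site.

```cpp
#include <cstdio>

// Compiler-specific "do not inline" annotation.
#ifdef _MSC_VER
#  define NEVER_INLINE __declspec(noinline)
#else
#  define NEVER_INLINE __attribute__((noinline))
#endif

// Stand-in for the matrix helper near the bottom of the tri-cubic 3D
// interpolator; excluding it from inlining keeps the compile time of
// the surrounding kernel under control.
static NEVER_INLINE float weight_matrix_eval(float t)
{
  return ((t - 1.0f) * t * t) * 0.5f;  // placeholder cubic weight
}

int main()
{
  printf("%f\n", weight_matrix_eval(0.25f));
  return 0;
}
```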
Reviewers: brecht, lukasstockner97, juicyfruit, dingto, sergey
Reviewed By: brecht, dingto, sergey
Subscribers: InsigMathK, sergey
Tags: #cycles
Differential Revision: https://developer.blender.org/D2595
It's possible that cancellation occurred between the creation of the reader
and the creation of the Blender object, in which case reader->object()
returns a NULL pointer.
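In sketch form (simplified, hypothetical types, not the actual importer classes), the guard amounts to checking the reader's object before using it:

```cpp
#include <cstdio>

struct Object { const char *name; };

struct AbcObjectReader {
  Object *object_ptr;  // stays NULL if the import was cancelled early
  Object *object() const { return object_ptr; }
};

static void finish_import(const AbcObjectReader *reader)
{
  Object *ob = reader->object();
  if (ob == nullptr) {
    // Cancellation happened between creating the reader and creating
    // the Blender object; nothing to hook up.
    return;
  }
  printf("imported %s\n", ob->name);
}

int main()
{
  AbcObjectReader cancelled = {nullptr};
  finish_import(&cancelled);
  return 0;
}
```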
BKE_libblock_free_us() was called on the object data, which decrements
its user count, after which the same function was called on the object,
which decrements the user count of the object data again. This double
decrement was too much.
The state mask wasn't applied before comparison, giving false results. It
shouldn't really happen that a ray state contains any flags that need to
be masked away, but if it does happen it's better not to get stuck.
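A tiny sketch of the fix (the flag values are invented for illustration): mask the ray state before comparing it, so stray bookkeeping bits cannot make the comparison fail.

```cpp
#include <cstdint>
#include <cstdio>

enum : uint32_t {
  RAY_ACTIVE = 1,
  RAY_STATE_MASK = 0x0F,    // low bits hold the actual state
  RAY_FLAG_EXTRA = 1 << 6,  // bookkeeping bit that must be ignored
};

static bool is_state(uint32_t ray_state, uint32_t state)
{
  // Without the mask, (ray_state == state) fails as soon as any
  // extra flag happens to be set.
  return (ray_state & RAY_STATE_MASK) == state;
}

int main()
{
  uint32_t ray_state = RAY_ACTIVE | RAY_FLAG_EXTRA;
  printf("active: %d\n", is_state(ray_state, RAY_ACTIVE));
  return 0;
}
```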
Previously, a GHash was used to store a flattened mapping of parent
information based on the Alembic hierarchy, and then that hash was used to
set parent pointers on Blender objects. This resulted in errors and
some duplicate objects. The new approach stores parent pointers while
traversing the Alembic hierarchy, which means that there is much more
information about the actual context of the Alembic object itself,
producing a more stable import.
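Schematically (hypothetical types, not the actual importer code), the new approach passes the parent down while walking the hierarchy, instead of reconstructing it later from a flattened hash:

```cpp
#include <cstdio>
#include <vector>

struct AbcNode {
  const char *name;
  std::vector<AbcNode> children;
};

// Depth-first traversal: the parent is known at the moment each object
// is visited, so no flattened name -> parent mapping is needed afterwards.
static void visit(const AbcNode &node, const char *parent_name)
{
  printf("%s (parent: %s)\n", node.name, parent_name ? parent_name : "none");
  for (const AbcNode &child : node.children)
    visit(child, node.name);
}

int main()
{
  AbcNode root = {"root", {{"xform", {{"mesh", {}}}}}};
  visit(root, nullptr);
  return 0;
}
```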
Also replaced the bool param "to_yup" with "AbcAxisSwapMode mode", so that
it's more explicit that axes are swapped.
Also added unit tests for create_swapped_rotation_matrix.
There was a problem with parent-child relations not getting set up
correctly when an Alembic object was both the transform for a mesh object
and the parent of other mesh objects.
convert_matrix() now only converts from Imath::M44d to float[4][4] (taking
different camera orientations into account). Import-time scaling is now
performed by the caller.
Also renamed AbcObjectReader::readObjectMatrix to
setupObjectTransform, as it does more than just reading the object
matrix; it also sets up an object constraint if the Alembic Xform is
animated.
Alembic is an interchange and caching format that can contain custom
object schemas. Blender shouldn't crash (because of failing asserts) just
because it doesn't know such an object type.
It's a mapping from full path of an Alembic object to an AbcObjectReader*.
The fact that at some point it is used to construct parent-child relations
doesn't matter.
The importer was guessing whether an Alembic IXform object was part of a
child object, or should be represented as an Empty in Blender. By reversing
the order in which objects are visited, the children can now claim their
parent as part of the same object (so IPolyMesh claims its parent IXform
as part of the same Blender object). This results in much less guesswork.
I've also removed similar guesswork from the code that sets parent pointers,
by simply searching for the parent in a hierarchical way, instead of trying
to predict (again) which IXforms were turned into empties.
Also, visit_object() now actually visits the object -- previously it only
visited its children, and assumed the object it was called on was already
handled by a previous call.
create_transform_matrix(float[4][4]) did mostly the same as
create_transform_matrix(Object *, float[4][4]), but more elegantly.
However, the former had some inconsistencies with the latter (they are
now merged and made explicit; it turned out one was for z-up→y-up
while the other was for y-up→z-up), and it was renamed to
copy_m44_axis_swap(...) to convey its purpose more clearly.
Furthermore, "loc" has been renamed to "trans", as matrices don't
store locations but translations; and more variables now have a src_
or dst_ prefix to denote whether they contain a matrix/vector in the
source or destination axis orientation.
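As a sketch of what the two swap directions do (shown here for a translation vector only; the real copy_m44_axis_swap applies the same swap to the whole 4x4 matrix, and the function names below are illustrative):

```cpp
#include <cstdio>

// z-up (Blender) -> y-up (Alembic): (x, y, z) becomes (x, z, -y).
static void yup_from_zup(float yup[3], const float zup[3])
{
  yup[0] = zup[0];
  yup[1] = zup[2];
  yup[2] = -zup[1];
}

// y-up (Alembic) -> z-up (Blender): (x, y, z) becomes (x, -z, y).
static void zup_from_yup(float zup[3], const float yup[3])
{
  zup[0] = yup[0];
  zup[1] = -yup[2];
  zup[2] = yup[1];
}

int main()
{
  const float trans_zup[3] = {1.0f, 2.0f, 3.0f};
  float trans_yup[3], back[3];
  yup_from_zup(trans_yup, trans_zup);
  zup_from_yup(back, trans_yup);  // round-trips to the original
  printf("y-up: (%g, %g, %g), round-trip: (%g, %g, %g)\n",
         trans_yup[0], trans_yup[1], trans_yup[2], back[0], back[1], back[2]);
  return 0;
}
```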
AbcExporter::createTransformWriter() tries to predict the parent Xform
name, but if it cannot be found, it has multiple ways of creating it, possibly
under a different name than the one originally searched for.
The idea is to have a system where we properly log error messages and
let the users know that errors occurred, redirecting them to the console
for explanations. This is only implemented for the exporter, since the
importer already has similar functionality; however, they shall
ultimately be unified in some way.
Reviewers: sybren, dfelinto
Differential Revision: https://developer.blender.org/D2541