Issue was a nasty hidden one: the dual status (mix of local and linked)
of proxies striking again.
Here, the remapping process was considering the obdata pointer of proxies as
an indirect usage, hence clearing the 'LIB_TAG_EXTERN' tag of the obdata pointer.
That would make the savetoblend code not store any 'lib placeholder' for the
obdata data-block, which was hence lost on next file read.
Another (probably better) solution here would be to actually consider
obdata of proxies as fully indirect usage, and simply reassign proxies
from their linked object's obdata on file read...
However, this change should be safer for now, and is probably good for 2.79 too.
Don't use quick sort for small arrays; bubble sort works way faster for them
due to cache coherency, and this is actually what qsort() from libc does.
We can also experiment with unrolling for some extra-small arrays, for
example 3- and 4-element ones.
This reduces tangent space calculation for dragon from 3.1sec to 2.9sec.
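For illustration, a minimal sketch of that dispatch idea (the threshold, the
names and the exact small-array sort are hypothetical, not the actual Blender
code):
```
#include <stdlib.h>

/* Hypothetical cutoff; the best value depends on element size and cache. */
#define SMALL_SORT_THRESHOLD 16

/* Bubble sort does O(n^2) compares, but they all touch the same few
 * cache lines, which beats qsort()'s call overhead for tiny arrays. */
static void bubble_sort(char *base, size_t n, size_t size,
                        int (*cmp)(const void *, const void *))
{
  for (size_t i = 0; i + 1 < n; i++) {
    for (size_t j = 0; j + 1 < n - i; j++) {
      char *a = base + j * size, *b = a + size;
      if (cmp(a, b) > 0) {
        /* byte-wise swap; cheap since 'size' is small */
        for (size_t k = 0; k < size; k++) {
          char t = a[k];
          a[k] = b[k];
          b[k] = t;
        }
      }
    }
  }
}

static void sort_small_aware(void *base, size_t n, size_t size,
                             int (*cmp)(const void *, const void *))
{
  if (n <= SMALL_SORT_THRESHOLD) {
    bubble_sort(base, n, size, cmp);
  }
  else {
    qsort(base, n, size, cmp);
  }
}
```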
Brings tangent space calculation from 4.6sec to 3.1sec for dragon model in BI.
Cycles is also somewhat faster, but it has other bottlenecks.
Funny thing: using a simple `static inline` already gives a lot of speedup here.
That pretty much answers the question of whether it's OK to leave the decision
on what to inline up to the compiler...
Would be nice to be able to catch this with an assert as well; will see what
the best way to do that would be.
Need to verify with Mai that this solves the crash for her, and maybe consider
porting this to 2.79.
The old text described scripts as "Artwork" which is your sole property and
free to license. I've removed the reference to scripts in this text.
This was from 2002! With our Python scripts becoming part of how Blender runs,
such scripts now are officially required to be compliant with GNU GPL.
For more information, check the FAQ (https://www.blender.org/support/faq/) or consult foundation@blender.org.
We have a hardcoded limit of 1000 images to be baked.
However, anything above 100 would lead to overflow in the code.
Caught by warning from builder bot (my compiler doesn't even complain
about this, but it should).
Fishy cat benchmark was rendering with wrong shadows. Cause is unclear,
adding printf or rearranging code seems to avoid this issue, possibly a
compiler bug. This reverts the fix and solves the OSL bug elsewhere.
This was needed when we accessed OSL closure memory after shader evaluation,
which could get overwritten by another shader evaluation. But all closures
are immediately converted to ShaderClosure now, so no longer needed.
While unlikely to have had any serious effects because of limited use, the
previous implementation was not actually atomic due to a data race and
incorrectly coded CAS loop. We also had duplicates of this code in a few
places, it's now been moved to a single location with all other atomic
operations.
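For reference, a correctly coded CAS loop for an atomic float add can be
sketched like this with C11 atomics (illustrative only, not the actual
Blender atomic_ops implementation):
```
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

/* Atomically add 'add' to the float stored at 'p', looping on a
 * compare-and-swap over the raw 32-bit representation. */
static float atomic_add_float(_Atomic uint32_t *p, float add)
{
  uint32_t old_bits = atomic_load(p);
  for (;;) {
    float old_val, new_val;
    uint32_t new_bits;
    memcpy(&old_val, &old_bits, sizeof(float));
    new_val = old_val + add;
    memcpy(&new_bits, &new_val, sizeof(float));
    /* On failure, old_bits is refreshed with the current value, so the
     * loop retries against the latest state instead of racing. */
    if (atomic_compare_exchange_weak(p, &old_bits, new_bits)) {
      return new_val;
    }
  }
}
```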
We need to make sure we can store all volume closures for all objects in the
volume stack. It is a bit tricky to detect what the nesting level of volumes
would be, so for now use the maximum possible stack depth. Might cause some
slowdown, but better to give reliable render output than to fail quickly.
Should be safe for 2.79 after extra eyes.
We were showing "search for unknown menutype WM_MT_button_context" messages in the terminal, which were not helpful for users, so they are now disabled.
To be backported to 2.79
It is possible to have the same image used multiple times at different frames,
which means we cannot free its buffers without any guard. From quick tests
this seems to be doing what it is supposed to.
Need more testing and port this to 2.79.
This was meant to be generic but introduced possible type errors
and unnecessary complication.
Replace with typed PyC_Tuple_PackArray_* functions.
Also add PyC_Tuple_Pack_* macro which replaces some uses of
Py_BuildValue, with the advantage of not having to parse a string.
Regression from rBfed853ea78221: calling this inside a thread worker was
not really a good idea anyway, and we already have all the code we need in
the pre-threading init function; it was just disabled for vertex particles
before.
To be backported to 2.79.
Python's C-API doesn't provide functions to get
ints at specific integer sizes,
leaving the caller to check for overflow,
which ended up being ignored in practice.
Add API functions that convert int/uint 8/16/32/64, also bool,
raising an overflow exception for unsupported ranges.
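A minimal sketch of one such conversion (the real helpers live in Blender's
PyC utilities; this name and shape are hypothetical):
```
#include <Python.h>
#include <stdint.h>

/* Convert a Python int to int32_t, raising OverflowError instead of
 * silently truncating. Returns 0 on success, -1 with an exception set. */
static int int32_from_pyobject(PyObject *value, int32_t *r_value)
{
  const long v = PyLong_AsLong(value);
  if (v == -1 && PyErr_Occurred()) {
    return -1; /* not an int, or outside 'long' range */
  }
  if (v < INT32_MIN || v > INT32_MAX) {
    PyErr_SetString(PyExc_OverflowError, "value out of range for int32");
    return -1;
  }
  *r_value = (int32_t)v;
  return 0;
}
```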
The thing that most often still goes wrong for new users building Blender on Windows is checking out the libraries: some skip over the wiki, some check out to the wrong folder. In an effort to reduce the time I spend on this, I added detection of svn and missing libs to make.bat.
When the user has svn installed and the libdir is missing, they'll be asked whether they want to download the libraries;
if svn is not installed, or the user chooses 'no', the current error message is shown.
Reviewers: Blendify, sergey, juicyfruit
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D2782
We should only early out with any hit in BVH traversal if the only visibility
bits used are opaque shadow, not when opaque shadow is one of multiple bits.
Also pass by value and don't write back now that it is just a hash for seeding
and no longer an LCG state. Together this makes CUDA a tiny bit faster in my
tests, but mainly simplifies code.
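In code terms the corrected test looks something like this (the flag name is
from the Cycles kernel; its value here is illustrative):
```
#include <stdbool.h>

#define PATH_RAY_SHADOW_OPAQUE (1 << 5) /* illustrative value */

static bool bvh_can_terminate_on_any_hit(unsigned int visibility)
{
  /* Wrong: fires when opaque shadow is merely one of several bits:
   *   return (visibility & PATH_RAY_SHADOW_OPAQUE) != 0;
   * Right: only when it is the sole visibility bit used: */
  return visibility == PATH_RAY_SHADOW_OPAQUE;
}
```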
This implements Arvo's "Stratified sampling of spherical triangles". Similar to how we sample rectangular area lights, this is sampling triangles over their solid angle. It does significantly improve sampling close to the triangle, but doesn't do much for more distant triangles. So I added a simple heuristic to switch between the two methods. Unfortunately, I expect this to add render time in any case, even when it does not make any difference whatsoever. It'll take some benchmarking with various scenes and hardware to estimate how severe the impact is and if it is worth the change.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: Vega-core, brecht, SteffenD
Tags: #cycles
Differential Revision: https://developer.blender.org/D2730
Caused by own recent changes in handling of verts/edges/etc. array storage
for raycasting (rBe324172d9ca6690e8).
Issue was actually even weirder - there is absolutely no reason at all to
release the DM here; those finaldm are stored in Object or EditMesh structs
and handled by the general update system, other code shall never try to
release them!
We stop using the .zip file and just have all files now in
lib/darwin/python/lib, along with numpy, numpy headers and requests.
This makes it consistent with Linux and simplifies code.
For old libraries the .zip stays, code for that gets removed when we
fully switch to new libraries.
This can happen with Alembic files exported from Maya. I'm unsure as to the
root cause, but at least this fixes the crash itself.
Thanks to @looch for reporting this with a test file. The test file has to
remain confidential, though, so it's on my workstation only.
FFMPEG & VPX don't handle the target with the --build parameter, so we need to make sure the plain configure command is used.
Reviewed by: Brecht Van Lommel
Differential Revision: http://developer.blender.org/D2791
This patch adds "Pixel Size" to the performance options, which allows rendering
at a lower resolution, which is especially useful for displays with high DPI.
Reviewers: Severin, dingto, sergey, brecht
Reviewed By: brecht
Subscribers: Severin, venomgfx, eyecandy, brecht
Differential Revision: https://developer.blender.org/D1619
The only reason shutter time was marked as non-animatable is that Blender
Internal rendering does not support such animation. But this is something
users keep asking for, and now Blender Internal is on its way out.
Enabled animation of this property, but noted in the tooltip that Blender
Internal does not support it.
Bug in the new ID copying code; thanks once again to stupid nodetrees, we
ended up wrongly remapping MA node->id pointers to the NodeTree when copying
materials using node trees...
Basically, do the re-alloc and memcpy under the same lock, otherwise one
thread might be re-allocating the array while another one is trying to
copy data there.
Reported by Mohamed Sakr in IRC, thanks!
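A minimal sketch of that locking pattern (pthread-based and with illustrative
names, not the actual code):
```
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

typedef struct SharedBuffer {
  pthread_mutex_t mutex;
  void *data;
  size_t size;
} SharedBuffer;

/* Grow the buffer and copy into it under one lock, so no thread can
 * memcpy into memory that another thread is re-allocating. */
static void buffer_append(SharedBuffer *buf, const void *src, size_t len)
{
  pthread_mutex_lock(&buf->mutex);
  buf->data = realloc(buf->data, buf->size + len);
  memcpy((char *)buf->data + buf->size, src, len);
  buf->size += len;
  pthread_mutex_unlock(&buf->mutex);
}
```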
Enabled cache for frame accessor and tweaked policy so we guarantee keyframed
images to be always in the cache. The logic might fail in some real corner case
(for example, when doing multiple tracks at once on a system where we can not
fit 2 clip frames in cache) but things are much better now for regular use.
Creating ngons with multiple axis aligned shapes in the middle of a
single face would fail in some cases.
This exposed multiple problems in BM_face_split_edgenet_connect_islands:
- Islands needed to be sorted on Y axis when X was aligned.
- Checking edge intersections needed increased endpoint bias.
- BVH epsilon needed to be increased.
Note: this commit seems to work as expected (also with transform
snapping etc.). However, it is rather unsafe - not enough for 2.79 at
least, unless we get much more testing on it. It also depends on three
previous ones.
Note that using a global lock here is far from ideal; we should rather
have a lock per DM, but that will do for now, the whole DM thing is doomed
to oblivion anyway in 2.8.
Also, we may need a `DM_DIRTY_LOOPTRIS` dirty flag at some point. Looks
like we can survive without it for now though... Probably because cached
looptris are never copied across DMs?
This was... horribly wrong, CDDM will often *not* need to allocate
anything to return arrays of mesh items! Just check whether array
pointer is NULL.
Also, remove `DM_get_looptri_array`, that one is useless currently,
`dm->getLoopTriArray` will always return cached array (computing it if
needed).
The tweakmode flag and the selected-channels flag accidentally
used the same value, due to confusion over where these flags were
supposed to be set. The selected-channels flag has now been moved
to use a different value, so that there shouldn't be any further
conflicts.
To be ported to 2.79.
Old bevel 'Clamp overlap' code was very naive: it just limited the amount
to half the edge length. This uses more accurate (but not perfect)
calculations for the max amount before (many) geometry collisions
happen. This is not a backward compatible change - meshes that
have modifiers with 'Clamp overlap' will likely have larger allowed
bevel widths now. But that can be fixed by turning off clamp overlap
and setting the amount to the desired value.
Own previous fix (rBd5d626df236b) was not valid; curves are actually
supported by SoftBodies. It was rather a mere UI bug, which was not
including Surface and Font object types in those valid for the softbody UI.
Thanks to @brecht for the heads up!
Also, the fix is safe for 2.79, btw.
Adding structs was checking for duplicates,
causing approx 75k string comparisons on startup.
While overall speedup is minimal,
Python access to `bpy.types` will now use a hash lookup
instead of a full linked list search.
See D2774
It doesn't seem that useful in practice; it was mostly added to match some
other renderers, but also seems to be causing user confusion and accidental
long render times. So let's just keep the UI simple and remove this.
Differential Revision: https://developer.blender.org/D2768
We're adding some bias by default, which now I think is the right thing
to do from a usability point of view since you really need to use those
options anyway to get clean renders in a practical time.
Differential Revision: https://developer.blender.org/D2769
Adds thin/default/thick modes to add -1/0/1 to the auto detected line width,
while leaving the overall UI scale unchanged.
Also tweaks the default line width threshold, so thicker lines start from
slightly higher UI scales.
Differential Revision: https://developer.blender.org/D2778
* Changed categories of some keywords
* Reordered some longer keywords that didn't appear
* Activated another color (reserved builtins) by Leonid
* Added some missing HGPOV and UberPOV keywords
Patch by Maurice Raybaud (@mauriceraybaud). Thanks to Leonid for additions, feedback and Linux testing.
Related diffs: D2754 and D2755.
While not a regression, this is a new feature, and it would be nice to have it
backported to final 2.79.
Bug introduced in recent ID copying refactor.
This commit basically sanitizes seq strip copying behavior, by making the
destination scene pointer mandatory (and the source one a const one).
Nothing then prevents you from using the same pointer as source and
destination!
This is a bit confusing, especially when one mixes OpenCL code, where ulong
equals uint64_t, with CPU-side code, where (judging by the naming) ulong is
expected to be something else.
This commit makes it so we use an explicit name, common on all platforms.
When a mesh changes its number of vertices during the animation,
Blender rebuilds the DerivedMesh, after which the materials weren't
applied any more (causing a fallback to the first material slot).
Commit b6d7cdd3ce changed how the mesh data
is deformed, which wasn't taken into account yet in this unit test.
Instead of directly reading the mesh vertices (which aren't animated any
more), we convert the modified mesh to a new one, and inspect those
vertices instead.
Problem was that some code checks whether device_pointer is null or not,
and the new allocator wasn't even setting the pointer to anything, as it
tracks memory location separately. Setting the pointer to non-null keeps
all users of device_pointer happy.
This could make output really polluted, where it'll be hard to see actual
issues.
It is still possible to have all backtraces printed using BLENDER_VERBOSE
environment variable.
As part of the fix for T51587, I removed the Depth output for non-Multilayer
images since it seemed weird that PNGs etc. that don't have a Z pass still get
a socket for it.
However, I forgot about non-multilayer EXRs, which are a special case that can
actually have a Z pass.
Therefore, this commit brings back the Depth output for non-multilayer images
just like it was in 2.78.
- Apparently MSVC does not support compound literals
in C++ (at least by the looks of it).
- Not sure how opencl_device_assert was managing to
  set a protected property of the Device class.
We don't enable global SSE optimizations in the regular kernel, and we
keep those disabled on Linux 32bit.
One possible workaround would be to pass arguments by ccl_ref, but
that is quite a bit of code which had better be done accurately.
Steps to reproduce:
- Create shader Image texture -> Diffuse BSDF -> Output. Do NOT select image yet!
- Start viewport render.
- Select image from the ID browser of Image Texture node.
Thing is: with the memory manager we always need to inform the device that
memory was freed.
Come on folks, nobody considered master a C++11-only branch. Such a decision
is to be done officially and will involve changes in quite a few
infrastructure-related areas.
New code was correctly handling IDs' internal references to self, but
not references between 'made real' different objects...
Regression, to be backported in 2.79.
It is defined to & for CPU-side compilation, and defined to empty for any GPU
platform. The idea here is to use this macro instead of an #ifdef block with a
bunch of duplicated lines, just to make it so CPU code is efficient.
Eventually we might switch to references on CUDA as well, but that would require
some intensive testing.
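Such a macro can be sketched as follows (the exact guard used by the Cycles
headers may differ):
```
#ifdef __KERNEL_GPU__
#  define ccl_ref        /* pass by value on GPU */
#else
#  define ccl_ref &      /* pass by C++ reference on CPU */
#endif

typedef struct float3 { float x, y, z; } float3; /* illustrative */

/* One declaration serves both targets: a reference on CPU,
 * a plain copy on any GPU platform. */
float average(const float3 ccl_ref v);
```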
The documentation for the bpy.app.handlers.scene_update_{pre,post}
handlers states that they're called "on updating the scenes data".
However, they're called even when the data hasn't changed. Of course
such handlers are useful, but the documentation should reflect the
current behaviour.
Reviewers: mont29, sergey
Subscribers: Blendify
Maniphest Tasks: T46329
Differential Revision: https://developer.blender.org/D1535
Image textures were being packed into a single buffer for OpenCL, which
limited the amount of memory available for images to the size of one
buffer (usually 4gb on AMD hardware). By packing textures into multiple
buffers that limit is removed, while simultaneously reducing the number
of buffers that need to be passed to each kernel.
Benchmarks were within 2%.
Fixes T51554.
Differential Revision: https://developer.blender.org/D2745
Simply disabled python tests, they can't be run anyway (since blender target is
not enabled) and we don't have any player-related tests in that folder.
Note these are intended for platform maintainers, we do not intend to
support users making their own builds with these. For that precompiled
libraries from lib/ should be used.
Implemented by Martijn Berger, Ray Molenkamp and Brecht Van Lommel.
Differential Revision: https://developer.blender.org/D2753
Since all the shadow catchers are already assumed to be in the footage,
the shadows they cast on each other are already in the footage too. So
don't just let shadow catchers skip self, but all shadow catchers.
Another justification is that it should not matter if the shadow catcher
is modeled as one object or multiple separate objects, the resulting
render should be the same.
Differential Revision: https://developer.blender.org/D2763
This will allow much finer control over how we copy data-blocks, from
full copy in Main database, to "lighter" ones (out of Main, inside an
already allocated datablock, etc.).
This commit also transfers a lot of what was previously handled by
per-ID-type custom code to generic ID handling code in BKE_library.
Hopefully this will avoid in the future the inconsistencies and missing
bits we had all over the codebase in the past.
It also adds missing copying handling for a few types, most notably
Scene (which was using fully customized handling previously).
Note that the type of allocation used during copying (regular in Main,
allocated but outside of Main, or not allocated by ID handling code at
all) is stored in ID's, which allows to handle them correctly when
freeing. This needs to be taken care of with caution when doing 'weird'
unusual things with ID copying and/or allocation!
As a final note, while rather noisy, this commit will hopefully not
break too much existing branches, old 'API' has been kept for the main
part, as a wrapper around new code. Cleaning it up will happen later.
Design task: T51804
Phab Diff: D2714
Shows new, reference and diff renders, with mouse hover to flip between
new and ref for easy comparison. This generates a report.html in
build_dir/tests/cycles, stored along with the new and diff images.
Differential Revision: https://developer.blender.org/D2770
* Remove some unnecessary SSE emulation defines.
* Use full precision float division so we can enable it.
* Add sqrt(), sqr(), fabs(), shuffle variations, mask().
* Optimize reduce_add(), select().
Differential Revision: https://developer.blender.org/D2764
I need to use some macros defined in util_simd.h for float3/float4, to emulate
SSE4 instructions on SSE2. But due to issues with the order of header includes
this was not possible; this does some refactoring to make it work.
Differential Revision: https://developer.blender.org/D2764
We already detect this automatically based on shading nodes and per-shader
settings, and performance of this option is OK now on all devices.
Differential Revision: https://developer.blender.org/D2767
Two main things here:
1. Replace all characters unsafe for the #line directive in a single loop,
avoiding multiple iterations and multiple temporary strings being created.
2. Don't merge the token char by char, but calculate the start and end points
and then copy the whole substring at once.
This gives about 15% speedup of source processing time. At this point
(with all previous commits from today) we've shrunk the compiled
sources size from 108 MB down to ~5.5 MB and lowered processing time
from 4.5 sec down to 0.047 sec on my laptop running Linux (this was a
constant time which Blender would always spend when first loading the
kernel, even if we've got a compiled clbin).
Add a safe version of normalize: since all uses of normalize
did zero-length checks, move this into the function.
Also avoid an unnecessary conversion.
Gives minor speedup here (approx 3-5%).
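A minimal sketch of such a safe normalize (vector type and names are
illustrative):
```
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* Normalize 'v', returning the zero vector when its length is zero;
 * this folds the zero-length check every caller used to repeat. */
static vec3 safe_normalize(vec3 v)
{
  const float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
  if (len != 0.0f) {
    const float inv = 1.0f / len;
    v.x *= inv;
    v.y *= inv;
    v.z *= inv;
  }
  return v;
}
```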
Basically, gather lines as-is during traversal, avoiding allocating
memory for all the lines in headers.
Brings an additional performance improvement of about 20%.
The idea here is that it is possible to mark certain include statements
as "precompiled", which means all subsequent includes of that file will
be replaced with an empty string.
This is a way to deal with the tricky include pattern happening in the
single-program OpenCL split kernel, which was including a bunch of headers
about 10 times.
This brings preprocessing time from ~1sec to ~0.1sec on my laptop.
The idea is to re-use files which were already processed. Gives about 4x
speedup of processing time (~4.5sec vs ~1.0sec) on my laptop for the whole
OpenCL kernel.
For users it will mean lower delay before OpenCL rendering might start.
This is still far from perfect, but much better than what we had so
far (more consistent with the inherent precision available in floats).
Note that this fixes some (currently commented out) units unittests, and
requires adjusting some others; that will be done in the next commit.
Bug was in RNA nodes code actually, itemf functions shall never, ever
return NULL!
Note that there were other itemf functions there that were potentially
buggy. Also harmonized their code a bit.
* Numbers with units (especially angles) were not handled correctly
regarding the number of significant digits (spotted by @brecht in a T52222
comment, thanks).
* Zero value has no valid log, need to take that into account!
When calling sculpt from Python,
setting 3D 'location' but not 2D 'mouse' stopped working in 2.78.
Now check if the operator is running non-interactively and
skip the mouse-over check.
Regression in D1812 (PyDriver variables as Objects):
taking the Python representation is nice in general,
but for enums it would convert them into strings,
breaking some existing drivers.
Some users really liked the previous behavior,
so making it an option.
Cursor Lock Adjustment can be disabled to give something close to the
2.4x behavior of cursor locking.
When lock-adjustment is disabled, placing the cursor moves the view.
This avoids the issue reported in T40353,
where the cursor could get *lost*.
Even strands that were excluded by the density texture were being added
to the DM passed to cloth, but these ended up having some invalid data,
because they were not fully constructed.
This simply excludes `UNEXISTED` particles from the DM generation, as
would be expected.
Note that the fix is not perfect: it systematically refcounts all IDs
assigned to a node's id pointer, which breaks the 'do not refcount
scene/object/text datablocks' principle...
But besides that principle being far from ideal in general, it becomes
pretty much impossible to apply when using //generic// ID pointer,
unless we add some kind of type data to that pointer somehow.
So for now, better to live with that than to have a broken usercount.
The purpose of the keymap strings is probably for un-embossed menu items
like seen in most pulldowns. I can't see a reason for also adding that
string for regularly drawn buttons within popups, we don't add it
anywhere else in the UI either. So this commit makes sure shortcut
strings are only added to buttons that are drawn like pulldown-menu
items.
The Blender text editor's built-in Python template "Gamelogic" has a reference near the bottom to "objectHitList" as an alleged attribute of the KX_TouchSensor. This name is incorrect; its correct name is "hitObjectList".
Attempting to access the suggested objectHitList returns an error:
```
AttributeError: 'KX_TouchSensor' object has no attribute 'objectHitList'
```
The provided diff corrects this minor error.
Reviewers: kupoman, moguri, campbellbarton, Blendify
Reviewed By: Blendify
Tags: #game_engine, #game_python
Differential Revision: https://developer.blender.org/D2748
`defvert_array_find_weight_safe()` was confusing 'invalid vgroup' and
'valid but totally empty vgroup' cases.
Note that this also affected at least ShrinkWrap and SimpleDeform
modifiers.
The order of evaluation of function arguments is undefined, and the order
was reversed between these compilers. This was causing regressions tests
to give different results between Linux and macOS.
In the future we should make these two buttons on one line
However because we need `gen_context = 'PAINT_STENCIL'`
this is a little hard and we need to find a proper solution.
One might be using `context_pointer_set`
Patch by @craig_jones with edits by @blendify
Differential Revision: https://developer.blender.org/D2710
Moving the ray_start_local to the new position does not lose as much precision as moving the ray_org_local to the corresponding position.
The problem of inaccuracy is within the functions `bvhtree_ray_cast_data_precalc` and `fast_ray_nearest_hit`, and not directly in the values of the rays.
The way we use it, UI_PRECISION_FLOAT_MAX is actually +1 to get the total
number of digits, and float only has 7 meaningful digits, so that define
shall be at 6.
GCC seems to detect uninitialized variables passed into function calls now,
but then isn't always smart enough to see that they are actually initialized.
Disabling this warning entirely seems a bit too much, so initialize a bit
more now.
This commit unifies the flattened texture slot names for bindless and regular CUDA textures. Texture indices are now identical across all CUDA architectures, where before Fermi used different indices, which led to problems when rendering on multi-GPU setups mixing Fermi with newer hardware.
Hopefully this actually fixes it; at least I could not reproduce it anymore
with that change, but it was already quite hard to trigger before.
We need a memory barrier at this allocation, otherwise it might happen
after preview gets added to done queue, so preview could end up being
freed twice, leading to crash.
Change the implementation so it no longer takes over the mouse cursor motion
from the OS, instead only move it when warping, similar to Windows and X11.
Probably the reason it was not done this way originally is that you then get
a 500ms delay after warping, but we can use a trick to avoid that and get much
smoother mouse motion than before.
While drawing nice 'rounded' values is OK also for 'low precision'
editing like dragging and such, it's quite an issue when you type in a
precise value, validate, edit the value again, and find a rounded
version of it instead of what you typed in!
So now, *only when entering textedit of num buttons*, we always get the highest
reasonable precision for floats (and use exponential notation when
values are too low or too high, to avoid tremendous amounts of zero's).
Tweaked the path radiance summing and alpha to accommodate the possible
contribution of light by transparent surface bounces happening prior to the
shadow catcher intersection.
This commit will change the way shadow catcher results look when behind a
semi-transparent object, but the old result seemed to be fully wrong: there
were big artifacts when alpha-overing the result on some actual footage.
Since we added auto DPI on Linux, on some systems the UI draws smaller than before
due to the monitor reporting DPI values like 88. Blender font drawing gives quite
blurry results for such slightly smaller DPI, apparently because the builtin font
isn't really designed for such small font sizes. As a workaround this clamps the
auto DPI to minimum 96, since the main case we are interested in supporting is
high DPI displays anyway.
Differential Revision: https://developer.blender.org/D2740
This is a different fix for the issue from D2088, preserving backwards compatibility
for IK stretching. The main problem with this patch is that this new behavior has
been there for a year, so it may break rigs created since then which rely on the new
IK stretch behavior.
Test file for various cases:
https://developer.blender.org/diffusion/BL/browse/trunk/lib/tests/animation/IK.blend
Reviewers: campbellbarton
Subscribers: maverick, pkrime
Differential Revision: https://developer.blender.org/D2743
Revise the logic here to be more robust when keyframes with
similar-but-different frame numbers (e.g. 70.000000 vs 70.000008)
would cause the search to go into an infinite loop, as the same
keyframe was repeatedly found (and skipped).
Tweak the bias from the previous fix a bit to be more backwards compatible in
some scene. In the end which way we round is quite arbitrary, but keeping the
case where the texture coordinate is exactly zero the same seems better.
The option has always (un)set the "Overwrite" flag on all strips. Calling
it "Override" seems misleading, since even when unchecking it, it overrides
whatever was set on the selected strips. It really just (un)sets the
"Overwrite" flag, and now it is also labeled as such.
In the reported example it seemed reasonable to apply this change.
But it causes a much more common case (selecting projections)
to be split into 2x islands.
Resolves T50970
This makes sense when we want to avoid float precision error
for near co-linear edges. OTOH, this is an arbitrary decision,
so keep functions separate.
That problem occurs because of the imprecision of `short int` (16 bits).
The 3d coordinates are converted to 2d, and when they are off the screen, their values can exceed 32767 (the max short int value)!
One quick solution is to use float instead of short.
The snap code is actually a little tricky; I want to make some arithmetic simplifications in it.
The only similarity between these functions is that both serve to snap;
however, their code is totally different from one another.
So by separating these functions, it:
- removes the need for several conditions;
- simplifies the code; and
- optimizes it.
* Display a warning above the pose list if the pose library is in an invalid
state (i.e. when it has keyframes but no pose-markers associated with those
keyframes). This warning prompts users to run the "Sanitize Pose Library Action"
operator, which should fix up such issues.
* "Sanitize" operator now creates unique names for each newly create pose
marker it generates, including the frame on which it found the pose
The problem here was that the "frame_start" and "frame_end" RNA properties of
the Stepped FModifier were shadowing/overriding "frame_start" and "frame_end"
properties of the base FModifier. As a result, when the range() callback
for the In/Out parameters (defined as part of the base FModifier) checked
its start/end properties, they were always still zero, meaning that the
acceptable range for the In/Out parameters was 0 -> 0 = 0.
Note:
If you've got old files with this problem, you'll need to manually click on
the frame_start/end properties to flush out the old values. It's probably
not worth the effort of applying a version patch for this (given that this
modifier is not one of the most often used ones AFAIK).
As with Strip Time, the updates here would get triggered before the
autokeying had a chance to record the unkeyed values, making it impossible
to autokey.
This is something which was reported to work fine by Mai and Benjamin, and
confirmed by myself. Disabling this workaround gains us some speedup:
                     Before    Now
bmw27                04:28.42  04:07.79
classroom            09:26.48  08:54.53
fishy_cat            08:44.01  08:18.70
koro                 09:17.98  08:57.18
pavillon_barcelone   12:26.64  11:52.81
Test environment is:
- Ubuntu 16.04, with all updates installed
- AMD RX 480 GPU
- amdgpu pro driver version 17.10-450821
The issue was caused by combination of following factors:
- Clipboard cleanup function will pass node tree as NULL to node free
function.
This is fine on its own; we don't have a tree in the clipboard.
- Node free function will call node storage cleanup only when there is
a non-NULL node tree.
This is somewhat weird, because storage cleanup does not take node
tree as argument.
So the solution here: move node storage cleanup outside of the check that
the node tree is not NULL.
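Schematically the fix looks like this (types and names are illustrative, not
the actual node code):
```
struct Node;

typedef struct NodeType {
  void (*free_storage)(struct Node *node); /* storage cleanup callback */
} NodeType;

typedef struct Node {
  NodeType *typeinfo;
  void *storage;
} Node;

typedef struct NodeTree NodeTree; /* opaque here */

/* Storage cleanup does not need the tree, so it now runs
 * unconditionally; only tree-level bookkeeping stays in the check. */
static void node_free(NodeTree *ntree, Node *node)
{
  if (ntree != NULL) {
    /* unlink from the tree, tag updates, etc. */
  }
  if (node->typeinfo && node->typeinfo->free_storage) {
    node->typeinfo->free_storage(node); /* was inside the NULL check */
  }
}
```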
imapaint's clone and canvas are refcounting Image usages.
And particle's editsettings' object and scene seem to be pure runtime
data (they are reset to NULL in readcode), so resetting them to NULL
here as well.
Really, really, really need to get rid of this usercount handling
everywhere; hopefully the incoming ID copying rewrite will help sanitize
that mess. But the fix was needed for the 2.79 release!
The code was somewhat weird: it was first copying border/crop settings from
the "source" scene, then was checking border settings of the current scene
and only then was copying border from "source" scene.
Now we first copy border/crop flags, then copy border from source and then
check whether border is a full-frame.
Auto & aligned handles wouldn't restore to their correct locations.
Note that a more direct fix for the bug is possible
(storing the handle locations to restore on cancel).
But that still gives some odd behavior, see code-comments for details.
The default anisotropic tangent computation could fail in some cases,
leading to NaNs and artifacts. Use a simpler formulation that doesn't
suffer from this.
Unfortunately this means disabling the code that ensures the title
bar is properly scaled with DPI, however better to have that as a
cosmetic issue than Blender being unusable with a lot of Intel GPUs.
The issue was caused by combination of following factors:
- Blender Internal viewport render can not distinguish between which parts of
main database changed, so it does full database re-sync when anything is
tagged for an update.
This way, if any NodeTree (including compositor) is changed, Blender Internal
viewport is tagged for full render database update.
- With old dependency graph, scene-level drivers are evaluated on every
iteration of scene_update_tagged, even if nothing is tagged for an update.
This causes compositor drivers be evaluated quite often.
- Driver evaluation checks whether value was changed, and if so it tags
corresponding ID type as updated (this is what was telling viewport to do
render database update).
This check was quite stupid: current property value was checked against the
one coming from driver expression. This means, if driver value is outside
of the hard limit range of the property, the property will always be
considered updated.
The fix is to compare the current property value against the clamped value
from the driver.
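A sketch of the corrected comparison (names are illustrative):
```
#include <stdbool.h>

static float clampf(float v, float min, float max)
{
  return (v < min) ? min : (v > max) ? max : v;
}

/* Clamp the driver result to the property's hard range before
 * comparing, so a driver whose expression is permanently out of range
 * no longer registers as "changed" on every evaluation. */
static bool driver_value_changed(float current, float driver_result,
                                 float hard_min, float hard_max)
{
  return clampf(driver_result, hard_min, hard_max) != current;
}
```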
The new dependency graph takes the root bone into account when building the
graph. This is required in order to get proper dependencies between bones, so
we can reliably use bones as targets from the same rig (and even indirect
relations via external objects). This forces us to tag relations for update
when we change the root IK chain bone.
Since relations rebuild is not fully trivial operation, we only do it for
the new dependency graph. In the future it'll be nice to avoid whole graph
rebuild for such cases, but that's mentioned as a TODO.
This situation happens when a file with a text effect sequencer strip is
loaded in Blender < 2.76 and saved. This destroys the effect data, causing
a crash in Blender ≥ 2.76.
d2f748a222 prevented the crash when opening such a file, but accessing
the strip still caused a crash. This commit fixes that by actually
initialising the invalid strip. Of course this still causes data loss, but
that already happened by opening & overwriting the file in Blender < 2.76.
Using an arbitrary face as the source of the UV data is mostly fine, as
vertices on seams will generally map to different parts of the texture
that have the same color.
This is regarding fed853ea78.
Some of the functions might have been inlined, but for others I don't see
how that was possible (don't think virtual functions can be inlined here).
In any case, better to be explicitly optimal in the code.
Clearing of custom bones outline's line thickness was not done at the proper
point; wireframe drawing never changes line thickness, only solid draw
with outline does...
Last fix only accounted for direct changes to the RB settings, but
failed for, say, object transformations. This fix accounts for any
change that might invalidate the RB cache.
Fix 9cd6b03187 introduced a bug that
prevented simulation after a cache invalidation (for instance when
changing a setting after simulating). This fixes that.
The problem here was that when an "invalid" path is generated by the panoramic camera, it was tagged
as RAY_TO_REGENERATE with the intention of generating a new path in kernel_buffer_update.
However, since that state was not handled in kernel_queue_enqueue, kernel_buffer_update did not
process the path, which resulted in an infinite loop.
Things like missing directories are now properly checked for, rather than
crashing Blender.
This also adds support for relative paths when opening an ABC file.
This makes the last time (`ltime`) stored in the rigid body world (`rbw`)
only be updated once a simulation step actually occurs, this prevents
another simulation step from being solved unless the current time is
exactly one frame after the last cached frame. Thus this prevents the
formation of gaps in the cache, such as seen in T50230.
Reviewers: mont29, sergey, angavrilov
Tags: #physics
Maniphest Tasks: T50230
Differential Revision: https://developer.blender.org/D2458
D2729 by @IgorNull.
Currently, trackball rotation sequentially applies rotation across the X axis
and the Y axis, which produces a strange/unusable result on diagonal pointer
motion. This change fixes the problem by using a single axis which is
orthogonal and proportional to the mouse delta - matching viewport trackball.
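The fixed mapping can be sketched like this (names and the sensitivity
handling are illustrative):
```
#include <math.h>

/* Rotate about a single axis orthogonal to the pointer delta (dx, dy),
 * by an angle proportional to its magnitude, instead of applying X and
 * Y rotations one after the other. */
static void trackball_axis_angle(float dx, float dy, float sensitivity,
                                 float r_axis[3], float *r_angle)
{
  const float len = sqrtf(dx * dx + dy * dy);
  if (len != 0.0f) {
    r_axis[0] = -dy / len; /* perpendicular to the motion, in view space */
    r_axis[1] = dx / len;
    r_axis[2] = 0.0f;
    *r_angle = len * sensitivity;
  }
  else {
    r_axis[0] = 1.0f;
    r_axis[1] = 0.0f;
    r_axis[2] = 0.0f;
    *r_angle = 0.0f;
  }
}
```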
As the title says, the normal wasn't set for the Hair BSDF because it wasn't
needed before. However, the denoiser uses it to store the feature passes, so
it needs to be set now.
Now that some node types may have custom context, we need to handle that
in the (convoluted :| ) UI code of nodes as well.
Reported in T43295 by Gabriel Gazzán (@gab3d), thanks.
`BMO_iter_as_array()` may fill less items than requested in given array,
so we have to update number of items to work on from its returned value,
otherwise code might try to use uninitialized memory.
The original code was doing a sanity check to see if the existing index was
out of range. However, the comparison was wrong,
so if the previous ct->user (active index of texture node) was larger
than the number of available texture nodes + 1 in the other material,
we would never re-set the index to 0.
Bug introduced in c31f74de6b.
There was an early attempt at fixing this (2b2ac5d3cc), but it was just
working by pure luck, and failing in cases like the one from this bug report.
Convenience makefile now uses CMAKE_BUILD_TYPE_INIT,
this means you can change the build type of an existing build
and it won't be overwritten when running `make`.
Useful if you want to add debug info to a release build for profiling.
This is a very important, potentially deadly side-effect of this
operator. If something goes wrong, it can save a broken .blend file.
Ideally we could get rid of that operation anyway, once ID management is
fully renewed, but for now would rather keep it around.
Related to T51902.
Fix is a bit ugly, but cannot think of another solution for now, at
least this **should** not break anything else.
And now I go find myself a very remote, high and lonely mountain, climb
to its top, roar "I hate proxies!" a few times, and relax hearing the echoes...
Based on D2578, now you can install JACK audio server and use it in
Blender build without having to specify the `--with-all` option (that
one still enables also JACK of course).
Reviewers: mont29
Maniphest Tasks: T51033
Differential Revision: https://developer.blender.org/D2578
@campbell Barton: Why is this declaration needed at all in stubs.c?
Further up the file collada.h is imported, and that already declares
the function, resulting in a duplicate declaration.
Avoids wrong texture data when multiple objects are exported. Note: this
commit might possibly not work fully; the full feature is added with the
next commit.
Reorder keyingsets registration order, since it also affects the order of the
I-menu options; better to show the most used ones first, pure alphabetical
order is not great here... Will likely break some muscle memory though. :|
Based on D2720 by Carlo Andreacchio (@candreacchio), thanks.
Although the original report was about the docs, the real issue was in
the API.
My original commit started from a copy-paste from the Switch
Node. However, I don't use custom1 for the Switch View node.
The docs are slightly incomplete, since it would be nice to mention the
views here, or maybe even expose them via Python. But honestly they are
generated depending on the scene multi-view settings.
If there was any specularity in the Principled BSDF, it would get a sampling
weight of one regardless of its actual impact.
This commit makes Cycles estimate the contribution of the component and adjust
the weighting accordingly, which greatly improves the noise characteristics of
the Principled BSDF in many cases.
Note that this commit might slightly change the brightness of areas when using
MultiGGX and high roughnesses, but the new brightness is more accurate and
closer to the result of Branched Path Tracing. See T51836 for details.
Differential Revision: https://developer.blender.org/D2677
The PDF of the MultiGGX sampling is approximated by the singlescattering GGX
term as well as a scaled diffuse term that makes up for the energy in the
multiscattering component that's missed by GGX.
However, there were two problems with the glossy terms: The diffuse term missed
a normalization factor, and the singlescattering term was not properly scaled
down based on the albedo estimate.
The glass term was completely wrong and has been rewritten. It uses the fresnel
factor to weight reflection vs. refraction and uses the glossy MultiGGX model
for reflection.
For refraction, the correct singlescattering term is now used, and a new
albedo approximation is used that was derived by evaluating GGX albedo for
roughnesses from 0 to 1 and IORs from 1 to 3 and fitting numerical
approximations to it. The resulting model has a mean relative error of 9e-5,
but could probably be simplified without losing noticeable accuracy in the
final render.
The improved PDFs help with glossy highlights (due to better light sampling vs.
closure sampling MIS) and fix the situation described in T51836 where mixing
MultiGGX with other closures (as it happens in e.g. the Principled
BSDF) causes incorrect darkening.
This is annoying especially for exporters which do use the mesh name, since it
broke any relation with the actual Mesh naming in the original Blend file.
Unfortunately, we cannot avoid the extra .xxx digits. ;)
It tried to assert that
addons/io_blend_utils/blender_bam-unpacked.whl/__init__.py was loaded when
the io_blend_utils module was imported. However, this happens only on
demand, and not directly when importing the add-on.
*Sigh* One more example of why we should keep ID management handling in
as few places as possible! It's impossible to keep more than a few
places in sync regarding which ID pointer is refcounted etc.
Children were always getting at least one segment of fixed length...
Now fully hidden ones (zero length) get no segment at all.
Note that even very short ones keep getting one 'unit' length segment - would
rather avoid changing that at this point, given how complex children
particles 'length' can get with all kind of modifiers... Think we can
live with that for now anyway.
The crash did not happen yet because we always had proper vmemh defined in
the parent scope.
Patch by Ivan Ivanov (aka obiwanus), thanks!
Differential Revision: https://developer.blender.org/D2715
This is not enough to mutex-guard modification code of integer values,
since this operation is NOT atomic. This is not even safe for
single-byte data types.
For now guarded the getter functions, similar to other functions in
this module.
Ideally we want to switch modification to an atomic operations, so we
wouldn't need any locks in the getters.
'Convert To...' Object operation has very weird effect of actually
working at obdata level, not object level, which means *all* objects
(even unselected/hidden/in other scenes/...) using same obdata will be
converted to new selected type.
IMHO this is very bad behavior, but... not a bug really, so do not
change this for now.
But at least, do not do that when working on some linked data, else it
leaves Blend file in invalid (incoherent) state until next reload.
So workaround for now is to enforce the 'Keep Original' option when some
linked object/obdata is affected by the operation.
Also fixed somewhat broken usercount handling in Curve->Mesh part.
That one was probably not an actual issue, except maybe in some corner
cases (like deleting a linked scene also used by some other linked scene).
Again, better not try to do smart & complex freeing logic outside of
BKE_library area; let's keep the spaghetti nightmare in a single place!
Previous code (same as what `BKE_libblock_free_us()` is doing when
usercount reaches 0) was probably OK in that specific case, but still not
a good idea, and potentially risky.
Even though in that specific case it was probably safe-ish, there is no
guarantee at this point that the Brush we want to remove is not used
somewhere; better take the slightly slower, much safer
`BKE_libblock_delete()` path here.
Eeeeeek!^2 Unconditionally calling the ID freeing `BKE_libblock_free()` on a
datablock (ob->data, i.e. Curve) that may be used elsewhere...
Veryveryvery bad!
* Tried "#" as the preprocessor character used in POV-Ray for language
keywords; best behaviour was to have it as a punctuation symbol
* Moved "finish" to its proper category
* Changed the order of some POV-Ray ini file keywords to have them work better
* Added a few keywords from the latest POV version
* Fixed C-style closing of multiline comments (*/)
Reviewers: campbellbarton, mont29
Reviewed By: campbellbarton, mont29
Subscribers: mont29
Differential Revision: https://developer.blender.org/D2707
Noisy change, but safe, and better do it sooner than later if we are to
rework copying code. Also, previous commit shows this *is* useful to
catch some mistakes.
Avoids the copy-constructor being invoked every time we pass the function
to the builder functions.
Should lower the number of CPU ticks spent during DEG construction.
Was rather weird and only used for the time source. It is simpler to make the
depsgraph keep track of the time source directly.
No need to introduce extra entities without actual need.
Was some mismatch in address space. Seems to be caused by recent additions.
Additionally, moved decoupled ray marching functions under an ifdef, so they
don't try to use malloc() functions.
Thanks Mai for testing the patch!
Now, when there is no usable neighboring pixel for denoising, the noisy value
is preserved instead of producing a NaN.
Also, negative results are clamped to zero.
Note that these are just workarounds that don't fix the underlying problems,
but these issues are very rare and I'm not sure if it's even possible to fix
the underlying problems without introducing a significant slowdown or quality
decrease in other situations.
Because of that and since 2.79 is happening very soon, I just went for these
workarounds for now.
Replaces the placeholder 'empty' icons of the "Force Field" and "Group
Instance" entries in the object-add menu with proper new ones.
Icons by @zlsa, thanks a lot!
Maniphest task T51291.
Technically, not passing all buffers used by a kernel is undefined
behavior. We haven't had any issues with this so far on AMD or
Nvidia, but it's known to be a problem with Intel, and we received
a report from AMD that this is a problem on newer hardware, so we
need to make this change at some point.
Unfortunately there is a cost to being correct, about 5% for the
benchmark scenes. For low sample counts it's even worse; I've
seen up to 50% slowdown. For the latter case I think adjusting
tile updating logic can help, but I'm not sure what that would look
like yet (it would be just a few lines change however).
Unlike regular path tracing, branched path tracing is usually used with lower
sample counts, at least for primary rays. This means there are fewer samples
for the GPU to work on in parallel, and rendering is slower. As there is less
work overall, there are also more inactive threads during rendering with BPT.
This patch makes use of those inactive rays to render branched samples in
parallel with other samples.
Each thread that is preparing for a branched sample will attempt to find an
inactive thread and if one is found the state for the sample is copied to that
thread. Potentially, if there are enough inactive threads, 100s of branched
samples could be generated from the same originating thread and ran in
parallel giving large speed ups.
Gives 70% faster render for pavillion midday scene. 20-60% faster on BMW
with car paint replaced with SSS/volumes.
Due to various driver issues with AMD GCN 1 cards we can no longer support
these GPUs. This patch makes them unavailable to select for Cycles rendering.
GCN cards 2 and higher are still supported. Please use the most recent
drivers available to ensure proper functionality.
See here for a list to check which GPUs are supported:
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units
Reported by Andy Goralczyk (@eyecandy) over IRC, thanks!
Simply nuke all that poor broken custom one-by-one handling in
object_relations.c code, and use highly complex but powerful and
well-tested BKE_library_make_local() in all cases of MakeLocal!
ID management, especially related to linking, is very hairy matters,
better to have as few as possible core functions managing all the dirty
details. ;)
Edge collapse was using bounding box center as the point to collapse to.
When collapsing multiple adjacent edges together, this caused
inconsistencies in placement of the collapsed point, depending on the
orientation of the edges in relation to the space axis.
This makes edge collapse use the mean point instead.
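For illustration, the collapse target as a mean point (a sketch, not the
actual BMesh code):
```
/* Mean of the vertex coordinates being collapsed together; unlike the
 * bounding-box center, this is independent of how the edges happen to
 * be oriented relative to the space axes. */
static void collapse_target_mean(const float (*verts)[3], int totvert,
                                 float r_co[3])
{
  r_co[0] = r_co[1] = r_co[2] = 0.0f;
  for (int i = 0; i < totvert; i++) {
    r_co[0] += verts[i][0];
    r_co[1] += verts[i][1];
    r_co[2] += verts[i][2];
  }
  if (totvert != 0) {
    const float inv = 1.0f / (float)totvert;
    r_co[0] *= inv;
    r_co[1] *= inv;
    r_co[2] *= inv;
  }
}
```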
The previous outlier heuristic only checked whether the pixel is more than
twice as bright compared to the 75% quantile of the 5x5 neighborhood.
While this detected fireflies robustly, it also incorrectly marked a lot of
legitimate small highlights as outliers and filtered them away.
This commit adds an additional condition for marking a pixel as a firefly:
In addition to being above the reference brightness, the lower end of the
3-sigma confidence interval has to be below it.
Since the lower end approximates how low the true value of the pixel might be,
this test separates pixels that are supposed to be very bright from pixels that
are very bright due to random fireflies.
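The combined test boils down to something like this (a sketch with
illustrative names):
```
#include <stdbool.h>

/* Bright alone is not enough: the lower end of the 3-sigma confidence
 * interval must also fall below the reference brightness for the pixel
 * to be classified as a firefly. */
static bool pixel_is_firefly(float value, float sigma, float reference)
{
  return (value > reference) && (value - 3.0f * sigma < reference);
}
```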
Also, since there is now a reliable outlier filter as a preprocessing step,
the additional confidence interval test in the reconstruction kernel is no
longer needed.
The problem was that the strokes in the copy-paste buffer could be keeping
dangling pointers to colors that were already freed. Therefore, this commit
makes it so that when copying the strokes, we now make copies of the colors
and put them in a hashtable beside the stroke buffer. This is convenient,
as it saves us having to look up what colours need to be copied over each
time when pasting.
The problem was that newly pasted strokes were still using colours from
the original datablock. As a result, you'd either get an immediate crash,
or if you managed to save the file before it crashed, each stroke would get
reloaded with a dummy colour.
This commit makes it possible to copy/paste strokes between datablocks
again. However, there are still problems when trying to paste across file
boundaries (i.e. copy strokes in one file, paste in another), which the next
commit will address.
The bevel operator now also returns newly created edges and verts, in
addition to faces.
This was requested by script writers. Especially needed if beveling
wire edges with vertex_only.
Should be backward compatible, as it just adds two new keys to the returned
dict in Python ('edges' and 'verts').
Was internally a no-op operation, which only caused extra work
to be done during depsgraph traversal and evaluation, without
making any measurable improvement.
This is a minor update to add more information on how Blender handles modal
operators. The existing docs provide a good overview, but might not be
as helpful to those unfamiliar with modal programming. This patch also
corrects a few small grammar issues.
It was never actually used apart from being stored at construction time.
This caused some redundancy and uncertainty about which relation type to use
during construction (often existing types were not close enough to a
particular use case).
Made this resilient to unknown types, for now. Supporting specific INT
sockets (through implicit conversion to GPU_FLOAT ones) is considered a
nice TODO.
Was just keeping the default '1' user from `BKE_libblock_alloc()`,
instead of using the correct way to handle the extra virtual user needed
when we want to keep unused datablocks around...
Do not call invoke ops from outliner's operations menus. The invoke op would
search again for the item under the mouse coordinates... when it is invoked!
This means the menu entry you clicked would often not be over the target item,
leading to either nothing, or the operation being applied to the wrong item.
Note: about groups, there is another minor annoyance leading to some
assert - groups have an annoying virtual fake user which breaks
usercount, will see whether this is easily fixable. :|
The idea is to accumulate all new tasks in a thread local queue
first without doing any thread synchronization (aka, locks and
conditional variables) and move those tasks to a scheduler queue
once they are all ready. This way we avoid per-task-pool lock
and only have one lock per bunch of tasks.
This is particularly handy when scheduling new dependency graph
node children. Brings FPS of cached simulation from the linked
below file from ~30 to ~50.
See documentation for BLI_task_pool_delayed_push_{begin, end}
and for TaskThreadLocalStorage::do_delayed_push.
Fixes T50027: Rigidbody playback and simulation performance regression with new depsgraph
Thanks Bastien for the review!
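The pattern can be sketched like this (a simplified fixed-size queue, not the
actual BLI_task implementation):
```
#include <pthread.h>

typedef struct Task {
  void (*run)(void *userdata);
  void *userdata;
} Task;

typedef struct TaskPool {
  pthread_mutex_t queue_mutex;
  Task queue[1024]; /* fixed size to keep the sketch short */
  int queue_len;
} TaskPool;

/* Tasks are first accumulated in a thread-local array (no locking),
 * then the whole batch is pushed under a single lock. */
static void task_pool_push_batch(TaskPool *pool, const Task *local, int len)
{
  pthread_mutex_lock(&pool->queue_mutex);
  for (int i = 0; i < len; i++) {
    pool->queue[pool->queue_len++] = local[i];
  }
  pthread_mutex_unlock(&pool->queue_mutex);
}
```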
If users wanted to bake only a few of the mesh materials, they would
still need to create dummy textures for the other parts.
This commit reports (as RPT_INFO) the materials with no texture, but moves
on to bake the other materials.
This was causing proxy updates on every frame, even if they
do not really change. Additionally, it was causing a second round
of armature update when used from inside a dupligroup (the viewport
ensures all objects from a dupligroup are up to date before drawing).
It's now less confusing (for example, using nr_of_samples directly,
instead of 1 / 1 / nr_of_samples). Might also have fixed a bug.
Also added unittests.
The scale matrix must have its homogeneous 'w' (at mat[3][3]) set to the
scale in order to also scale the translations along with it. However, this
also scales the transform matrix's 'w' component, which is not supposed
to happen.
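The situation can be sketched like this (column-vector convention; an
illustrative helper, not the actual code):
```
/* Multiply 'm' by a uniform scale matrix whose homogeneous 'w' is also
 * set to the scale, so the translation column is scaled too; then reset
 * the product's mat[3][3] back to 1.0, undoing the unwanted scaling of
 * the transform's own 'w' component. */
static void mul_m4_uniform_scale(float r[4][4], const float m[4][4],
                                 float scale)
{
  for (int i = 0; i < 4; i++) {
    for (int j = 0; j < 4; j++) {
      r[i][j] = m[i][j] * scale; /* diagonal scale matrix, w = scale */
    }
  }
  r[3][3] = 1.0f; /* 'w' must stay 1.0 */
}
```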
The old default values (start/end frame = 1) could have been an actually
desired setting (for example when exporting a non-animated model). To
make this worse, this was only interpreted as "start/end of the scene" by
the export operator when running interactively, but not when run from
Python.
By choosing INT_MIN as default it's highly unlikely that the interval
[start, end) was intended as actual export range.
This way we always have predictable behavior, especially from the
performance point of view. Additionally, if some bottleneck is found
in stack implementation it'll be easier for us to address.
Yep, that got reported... Was slightly more involved than UI message
fixing though: the RNA string length getter shall return the exact length
of the string (same as strlen), not the size of the allocated buffer
containing it! Otherwise, the final NULL char leaks in and...
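For illustration (a generic sketch, not the actual RNA code):
```
#include <string.h>

typedef struct MyData {
  char name[64];
} MyData;

/* An RNA-style string length getter must return strlen(), never the
 * allocated buffer size, or the trailing NULL char leaks into the
 * string value. */
static int my_data_name_length(const MyData *data)
{
  return (int)strlen(data->name); /* NOT sizeof(data->name) */
}
```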
Denoising was setting session parameters for every frame, which was detected as
a change and therefore caused a resync.
Since the parameter modification change is only needed for viewport rendering
(which doesn't support denoising anyways) and resyncing after a frame change
(which isn't affected by denoising settings), an easy fix is to just ignore
the denoising parameters like it's currently done with the samples.
Follow-up to 9f044cb422.
These comments described the difference between Microsoft's & MinGW's struct
definitions. Now that we have dropped MinGW we don't need to go into these
details.
DM evaluation code was simply never clearing the `deformedOnly` flag
when evaluating a generative modifier...
Quite astonishing this never got caught before, a lot of particle code
relies on a valid value of this flag!!!
The fact that we can end up with uninstantiated objects is not expected
currently, but would rather not start chasing all the corner cases that may
lead to that situation.
The user shall be able to delete uninstantiated objects from the Outliner,
though!
Properly remap nodes' pointers to copied IDs in copied ntrees.
Note that this only affects root trees, node groups are not concerned
here, since they are assumed to be reusable chunks and hence *not*
duplicated.
The operator is indeed not adding frames but inserting them at the current frame (shifting all subsequent ones). Changed the operator name and description.
Approved by Antonio.
This operator relies on a rather specific context setup, so it shall not
be exposed to user in 'operator search' menu etc.
Based on D2528 by Vuk Gardašević (lijenstina).
The Issue
=========
For a long time now MinGW has been unsupported and unmaintained and at this point,
it looks like something that we should just leave behind and move on.
Why Remove
==========
One of the big motivations for MinGW back in the day was that it was free, compared to MSVC which required a paid license.
However, now that this is no longer true, we have basically stopped updating the needed CMake files.
Along with the CMake files, there are several patches to the extern libs needed to make this work. For example, see:
https://developer.blender.org/diffusion/B/browse/master/extern/carve/patches/mingw_w64.patch
If we wanted to keep MinGW then we would need to make more custom patches to the external libs, and
this is not something our platform maintainers are willing to do.
For example, here are the patches needed to build Python: https://github.com/Alexpux/MINGW-packages/tree/master/mingw-w64-python3
Fixes T51301
Differential Revision: https://developer.blender.org/D2648
We had handling of fully duplicated polygons already, but... absolutely
nothing to sanitize partially merged polygons! This was giving us
totally invalid geometry, with duplicated vertices in a single poly,
invalid edges, etc.
Now we do check for invalid loops inside polys, and generate new edges
as needed to get only valid polys.
For some reason this was a nightmare to get running fully OK; playing
with old and new indices is really, really mind-breaking.
This feature got lost with the new auto-track API.
Added it back by extending the frame accessor class. This isn't really
a frame thing, but we don't have another type of accessor here.
Surely, we can use old-style API here and pass mask via region
tracker options for this particular case, but then it becomes much
less obvious how real auto-tracker will access this mask with old
style API.
So it seems we do need an accessor for such data; it's just a matter
of finding a better place for it than the frame accessor.
It's a bit ugly but I couldn't find a better way to keep fast installs and
correct handling of switching between master and blender2.8 with different
lib directories.
This makes it possible to have an animated / procedurally generated mesh
that starts empty and obtains data in later frames.
Fixes the export of an empty mesh with an Ocean Modifier, as described in
issue T51351.
This allows you to put any kind of animation data on the mesh, and its
shape will be exported on each timekey. Note that this timekey is unrelated
to the animation data (so we don't export on each keyframe, for example).
A practical example is the addition of an animated custom property to
trigger the export of animated mesh data. The mesh data can then be created
from any source, like Python scripts.
Not only is this useful in itself, it also provides a workaround for one
of the two issues described in T51351.
This is written in a custom metadata key, so it isn't shown by utilities
like abcecho or abcls. However, it's still something that's useful to
have available.
Houdini writes vertex data in a different format than Blender does; Houdini
uses "face-varying scope", which means that the vertex colours are indexed
by an ever-increasing number over all vertices of all faces instead of the
vertex index.
I've also merged the read_custom_data_mcols() and read_mcols() functions,
because the latter was only called from the former, and the changes in this
commit would add yet more function parameters to pass.
A big chunk of code was copied between the if and else bodies. By using
a boolean to store whether the c3f_ptr or c4f_ptr should be used, the
in-loop condition is kept as simple as possible.
Note: the angle in the bug isn't really reflex - using the vertex normal
for this test isn't always right, but usually is. At any rate, we
shouldn't try to put a vertex on the edge between two others when the
angle is reflex.
This was two-fold.
1) The export used viewport settings to obtain the particle cache, rather
than render settings.
2) The child hair writer tried to obtain UV-coordinates from the parent
hair, without checking whether those were available in the first place.
Since we already have a rather advanced PovRay exporter, makes sense to
also nicely display generated 'code'.
Patch by Maurice Raybaud (@mauriceraybaud), thanks!
Cleanup (mostly styling) by @mont29.
Brightness/contrast node was changing color but did not modify alpha
or ensure colors are premultiplied on the output. This was giving
artifacts later on unless alpha was manually converted.
The compositor is supposed to work in premultiplied alpha (except for
some really special corner cases), so it makes sense to ensure
premultiplied alpha after the brightness/contrast node.
This is now done as an option enabled by default, so we:
(a) Keep compatibility with old files.
(b) Have correct behavior for newly created files.
Later on we can get rid of this option.
We were looping over all vgroups in the destination mesh and doing a string
comparison, for every vgroup of every vertex of the merged mesh! Crazy!
Now we simply create a temp mapping of vgroup indices, which seriously
simplifies things (and gives a significant speedup when merging huge meshes
with lots of vgroups; here a quick stupid test went from 120ms in
vgroup merging to less than 5ms, 25 times quicker!).
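A sketch of the idea with simplified types (the actual code works on Blender's vertex group lists, not std::string):
```
#include <string>
#include <vector>

/* Build the name -> index mapping once, then remap each vertex's group
 * indices with a plain array lookup instead of a string comparison per
 * vertex per group. */
std::vector<int> build_vgroup_map(const std::vector<std::string> &src_names,
                                  const std::vector<std::string> &dst_names)
{
  std::vector<int> map(src_names.size(), -1);
  for (size_t i = 0; i < src_names.size(); i++) {
    for (size_t j = 0; j < dst_names.size(); j++) {
      if (src_names[i] == dst_names[j]) {
        map[i] = (int)j; /* string compare once per vgroup, not per vertex */
        break;
      }
    }
  }
  return map;
}
```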
Root of the issue here was that two stupid modifiers could create named
vgroup CD layers (vgroup editing ones... shame on me :") ).
Fixed that, and added some versioning code to also fix 'corrupted' blend
files created by those so far.
Re-loading the module seems to invalidate memory pointers, which gives
an error on the next kernel call.
Not sure how to move memory pointer from one CUDA module to another one,
so for now simply disabling kernel re-load for CUDA devices. Not ideal,
but better than failing render.
Feature-selective option for CUDA is not an official feature anyway.
`screen_findedge()` is not expected to return NULL in that case, but
checking against that does not hurt (we do it at all its other call
sites anyway); better than crashing.
For some reason GCC-6 successfully compiles a test program with
-Wno-implicit-fallthrough passed via the command line. It just
silently ignores unknown arguments which start with -Wno-.
The issue is, if some other warning happens in the code, then
GCC will complain about the unknown -Wno- argument which is not
supported by the current GCC version.
This caused some misleading prints about an unknown command
line argument whenever any other warning happened in code
from extern/.
Volume shaders without anything connected to the surface output are treated
as if they had a transparent BSDF as the surface shader in Cycles, so the
denoiser should skip feature pass writing for them just as it does with an
actual transparent BSDF.
If the central pixel is an outlier, the denoiser is supposed to predict its
value from the surrounding pixels. However, in some cases the confidence
interval test would reject every single surrounding pixel, which leaves the
model fitting with no data to work with.
- Some arguments were inappropriately tagged as unused
using the (void)foo semantic.
Only use such semantic in tricky cases, when something
needs to be ignored in release builds or something is
dependent on tricky ifndef policy.
For the rest of the cases just use the void foo(int /*bar*/)
semantic, which ensures the variable is not used. Solves
confusion and code running out of sync with later
development.
- Used proper unused semantic for some arguments.
- Added braces to make code easier to follow, tricky
indentation with ifdef, uh.
For example, when using a radius of 1, only 9 pixels (due to weighting maybe
even less) will be used, but the transform code may still decide to use a
5-dimensional (or even higher) fit.
This causes severe overfitting and therefore weird pixel values.
To avoid this, this commit limits the amount of dimensions to a third of the
pixel number. For a radius of 3 or more, this doesn't change anything, but
for 1 and 2 it can prevent fireflies and/or negative values being produced.
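The constraint, as a hypothetical standalone helper:
```
#include <algorithm>

/* Cap the number of fit dimensions at a third of the pixel count of the
 * filter window, so tiny neighborhoods cannot be overfitted. */
int limit_fit_dimensions(int max_dimensions, int radius)
{
  const int window = 2 * radius + 1;      /* 3x3 window for radius 1 */
  const int num_pixels = window * window; /* 9 pixels for radius 1 */
  return std::min(max_dimensions, num_pixels / 3);
}
```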
Once again, numerical instabilities causing the Cholesky decomposition to fail.
However, further increasing the diagonal correction just because of a few
pixels in very specific scenes and settings seems unjustified.
Therefore, this commit simply falls back to the basic NLM-filtered pixel
if the more advanced model fails.
I wouldn't mind switching fully to Google style, but I am against
mixing two different styles in the same project. So just stick to the
brace on a new line after function definitions.
There were the following issues with ccl_restrict_ptr:
- We already had ccl_restrict for all platforms.
- It was secretly adding a `const` qualifier to the declaration,
which is quite weird since a non-const pointer can also be
declared as restricted.
- We never use foo_ptr or FooPtr type definitions in Blender,
so not sure why we should introduce such a thing here.
- It is absolutely wrong from a semantic point of view to fold
const into the restrict macro -- const is a part of the type, not
part of a hint to the compiler that some pointer is never aliased.
Denoise commit introduced kernel_write_result() which saves light passes, so
no need to call both kernel_write_result() and kernel_write_light_passes() from
the split kernel.
Weirdly enough, kernel_write_result() does not take care of debug passes.
The issue is coming from some weird semi-finished canvas feature, which
was remapping coordinates without applying any differential on the sampling
ellipse (in fact, there is no ellipse; the sampling thing is always a single
pixel).
The whole thing is just weak in the compositor; for now, just bring behavior
back to how it was prior to the optimization (multithreading) commit.
The problem was that Cycles implicitly uses a transparent surface shader when only
volume nodes are used, but since the black emission shader gets optimized away,
it was no longer detected and therefore no transparent surface was used.
Therefore, the shader now stores whether volume nodes were connected before
optimizing.
Extremely bright pixels in the rendered image cause the denoising algorithm
to produce extremely noticeable artifacts. Therefore, a heuristic is needed
to exclude these pixels from the filtering process.
The new approach calculates the 75th percentile of the 5x5 neighborhood of
each pixel and flags the pixel if it is more than twice as bright.
During the reconstruction process, flagged pixels are skipped. Therefore,
they don't cause any problems for neighboring pixels, and the outlier pixels
themselves are replaced by a prediction of their actual value based on their
feature pass values and the neighboring pixels.
Therefore, the denoiser now also works as a smarter despeckling filter that
uses a more accurate prediction of the pixel instead of a simple average.
This can be used even if denoising isn't wanted by setting the denoising
radius to 1.
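Sketched in plain C++ (a standalone approximation, not the kernel code itself, operating on a scalar brightness image), the heuristic is:
```
#include <algorithm>

bool is_outlier(const float *image, int w, int h, int x, int y)
{
  /* Gather the clamped 5x5 neighborhood. */
  float values[25];
  int n = 0;
  for (int dy = -2; dy <= 2; dy++) {
    for (int dx = -2; dx <= 2; dx++) {
      int sx = std::min(std::max(x + dx, 0), w - 1);
      int sy = std::min(std::max(y + dy, 0), h - 1);
      values[n++] = image[sy * w + sx];
    }
  }
  /* Partial sort up to the 75th percentile element. */
  std::nth_element(values, values + (3 * n) / 4, values + n);
  const float percentile75 = values[(3 * n) / 4];
  /* Flag the pixel if it is more than twice as bright. */
  return image[y * w + x] > 2.0f * percentile75;
}
```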
The implementation originally handled four different cases:
Regular glossy, glass, metallic fresnel glossy and diffuse.
However, only the first two are actually used currently. Therefore, this commit
removes the other two, which allows simplifying the code.
Additionally, due to the Principled BSDF, the function arguments are now
identical for glossy and glass, which allows us to get rid of some ugly #ifdefs.
Use smarter check of where the file is coming from instead of
attempting to replace same source twice with different settings.
Brings down processing time from 3.6sec to 1.8sec.
It is not a good idea to:
1. Duplicate metadata to self
2. Ignore the fact that something might have had metadata already.
Also moved metadata copy to a preparation function, so it is
never lost.
The mesh interpolation code had an edge case where one of two
adjacent edges to a vertex has 0 length. This caused an assert
failure indexing the vertex mesh for splash Blenderman.blend.
Found while looking into T49864. The issue is caused here
by render_copy_renderdata() doing a copy of views with
BLI_duplicatelist() so we can not just zero the pointers out.
Similar thing is happening for layers as well.
Fix T50882: VSE: Blend Modes on Scenes do not layer properly
Fix T51002: Scene strip with Alpha over not working as expected
The byte-to-float conversion was being skipped if the color spaces of the sequence and the scene
are the same, which is the default, resulting in any non-float strips becoming invisible.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D2635
Normally, up to 50 segments is quite enough for most cases.
However, when dealing with things like braids,
the current limit can sometimes be quite a pain.
This commit fixes the crash, but user feedback can be improved here to
inform the artist that one can't use Render Result as a texture, since that
will cause a feedback loop.
Basically, upon invoking Cycles baking we could cancel it, which would
leave G.is_break hanging as true. Since we were not setting is_break back to
false before executing baking, it would misbehave.
The issue was caused by a stupid workaround for Libav. Now things work for
FFmpeg. There might be some tweaks needed for Libav, but that one is
not really a priority to support.
Previous method was based on face area, giving uneven results
based on topology and giving issues with zero-area faces.
This method gives matching results for concave ngons and the same geometry triangulated.
The code only updated nodes in the nodetree of the scene to which the render layer belongs. Therefore, when using scene B in the compositor setup of scene A, A's node wouldn't be updated.
With this fix, the update function loops over all scenes and checks them for relevant nodes.
Was preventing update in 3DView etc. when changing something in the
World's NodeTree, especially annoying in blender2.8 branch (since legacy
depsgraph has been removed there), but also affecting master.
Old code was working quite unreliably in combination with the fast math
flag, especially when compiling with Clang. It seems we were hitting the
result of the following bug submitted to Clang [1].
Basically, it was happening that (int)sqrtf(64) was 7 when Cycles
is built with Clang, but was a correct 8 when built with GCC.
This commit works around this. Annoying, but I don't see another way to
keep the sampling pattern the same for Clang and GCC.
[1] https://bugs.llvm.org//show_bug.cgi?id=24063
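A defensive pattern against this class of problem, as a sketch (not the actual Cycles change):
```
#include <cmath>

/* Integer square root of a perfect square: round the float result, then
 * verify with integer math, so fast-math rounding differences between
 * compilers cannot change the result. */
int int_sqrt_exact(int n)
{
  int r = (int)std::sqrt((float)n);
  while ((r + 1) * (r + 1) <= n) r++; /* correct a result rounded down */
  while (r * r > n) r--;              /* correct a result rounded up */
  return r;
}
```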
The idea here is to keep things in a logical order to match the order of one's workflow.
This concept can be seen in Graph > Dope Sheet > NLA. This issue mainly affects the manual.
Fixes T50709
Differential Revision: https://developer.blender.org/D2630
The goal is to reduce wasted space and improve clarity in the 'N' panel of the VSE through layout changes.
The changes are intentionally conservative to avoid making people re-learn anything.
Author: @mpan3
Differential Revision: https://developer.blender.org/D2439
* "Filmic" and "False Color" view transforms added (sRGB display device only).
* "Very Low/Low/Base/High/Very High Contrast" looks added.
* Added filtering so that Filmic only shows look names prefixed with "Filmic - ".
Filmic Dynamic Range LUT configuration created by Troy James Sobotka with
special thanks and feedback from Guillermo, Claudio Rocha, Bassam Kurdali,
Eugenio Pignataro, Henri Hebeisen, Jason Clarke, Haarm-Peter Duiker, Thomas
Mansencal, and Timothy Lottes.
Differential Revision: https://developer.blender.org/D2659
This commit contains the first part of the new Cycles denoising option,
which filters the resulting image using information gathered during rendering
to get rid of noise while preserving visual features as well as possible.
To use the option, enable it in the render layer options. The default settings
fit a wide range of scenes, but the user can tweak individual settings to
control the tradeoff between a noise-free image, image details, and calculation
time.
Note that the denoiser may still change in the future and that some features
are not implemented yet. The most important missing feature is animation
denoising, which uses information from multiple frames at once to produce a
flicker-free and smoother result. These features will be added in the future.
Finally, thanks to all the people who supported this project:
- Google (through the GSoC) and Theory Studios for sponsoring the development
- The authors of the papers I used for implementing the denoiser (more details
on them will be included in the technical docs)
- The other Cycles devs for feedback on the code, especially Sergey for
mentoring the GSoC project and Brecht for the code review!
- And of course the users who helped with testing, reported bugs and things
that could and/or should work better!
Those shall not be considered while checking whether a to-be-made-local
ID will end up fully local, or still be partially used by linked data...
Even less since we already do have special handling of proxies later.
Fixes main remaining issue found with 04_01_H.lighting.blend Agent327
file, and allows us to switch back to optimized post-processing in
make_local code.
That one tags those ugly little 'from' ID pointers (shape keys and
proxies), which point back from used to user ID, and require a lot of
special care in data-block management...
Again, Agent327's 04_01_H.lighting.blend shows some problem here, it
triggers several times the 'not used at all' assert in step 5 of secure
code, and with optimized version we lose the connection between
rigs and the main characters!
Will keep investigating on this, but for now let's try to give something
working to the studio.
This should not be needed imho, we already set POSE_RECALC flag
correctly there, but it still is missing actual update of poses in some
(complex and convoluted) cases. So at least for now, let's go with this
hack, it's not really harming anyone anyway.
Fixes crash in Agent327's 04_01_H.lighting.blend when making all local.
Not sure how this happens, but in some cases we can evaluate
deformations of an armature whose pose is not valid; at least put a
warning here to help identify the issue quickly.
The issue was caused by the unlimited textures commit; the root of the issue
is that the displacement code updates some of the image slots directly, so it
needs to ensure device vectors are all of the proper size.
This was set to maxcpu, which on an 8-core box would be 8; each project would then spawn
8 instances of cl.exe, making for a possible 64 simultaneously running compiler instances,
slowing the compile down instead of speeding it up.
We unfortunately cannot fix this for previous versions of Blender, but
at least the issue (Blender crashing on unknown IDProp types) should now
be addressed for the future.
Simply reset unknown IDProp types to the integer one, and reset the value to zero.
Previously, every RenderPass would have a bitfield that specified its type. That limits the number of passes to 32, which was reached a while ago.
However, most of the code already supported arbitrary RenderPasses since they were also used to store Multilayer EXR images.
Therefore, this commit completely removes the passflag from RenderPass and changes all code to use the unique pass name for identification.
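For illustration, name-based lookup is as simple as this (simplified struct; field names are hypothetical):
```
#include <cstring>

struct RenderPass {
  char name[64];
  /* channels, rect data, etc. omitted */
};

const RenderPass *pass_find_by_name(const RenderPass *passes, int totpass, const char *name)
{
  for (int i = 0; i < totpass; i++) {
    if (std::strcmp(passes[i].name, name) == 0) {
      return &passes[i];
    }
  }
  return nullptr; /* no 32-entry bitfield limit to run into */
}
```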
Since Blender Internal relies on hardcoded passes and to preserve compatibility, 32 pass names are reserved for the old hardcoded passes.
To support these arbitrary passes, the Render Result compositor node now adds dynamic sockets. For compatibility, the old hardcoded sockets are always stored and just hidden when the corresponding pass isn't available.
To use these changes, the Render Engine API now includes a function that allows render engines to add arbitrary passes to the render result. To be able to add options for these passes, addons can now add their own properties to SceneRenderLayers.
To keep the compositor input node updated, render engine plugins have to implement a callback that registers all the passes that will be generated.
From a user perspective, nothing should change with this commit.
Differential Revision: https://developer.blender.org/D2443
Differential Revision: https://developer.blender.org/D2444
Reduce thread divergence in kernel_shader_eval.
Rays are sorted in blocks of 2048 according to shader->id.
On R9 290 Classroom is ~30% faster, and Pabellon Barcelone is ~8% faster.
No sorting for CUDA split kernel.
Reviewers: sergey, maiself
Reviewed By: maiself
Differential Revision: https://developer.blender.org/D2598
Previously the logic was different for duplis and regular objects: regular objects
were using render visibility when the Render Layer option is enabled, while duplis were
always using viewport visibility when rendering from the viewport.
This was quite confusing because it caused different results in viewport and render
when artists were expecting them to match 1:1.
This implements branched path tracing for the split kernel.
General approach is to store the ray state at a branch point, trace the
branched ray as normal, then restore the state as necessary before iterating
to the next part of the path. A state machine is used to advance the indirect
loop state, which avoids the need to add any new kernels. Each iteration the
state machine recreates as much state as possible from the stored ray to keep
overall storage down.
It's kind of hard to keep all the different integration loops in sync, so this
needs lots of testing to make sure everything is working correctly. We should
probably start trying to deduplicate the integration loops more now.
Nonbranched BMW is ~2% slower, while classroom is ~2% faster, other scenes
could use more testing still.
Reviewers: sergey, nirved
Reviewed By: nirved
Subscribers: Blendify, bliblubli
Differential Revision: https://developer.blender.org/D2611
Negative scale on camera is a nice trick to invert render image on one
axis at no extra CPU cost. It was implemented in the Decklink branch but
I introduced a typo when porting it to master. It is now fixed.
The change was initially needed for the Blender 2.8 branch, but the actual
function was reverted there. So no reason to keep the dead unused
placeholder in the dependency graph.
This reverts commit fd69ba2255.
Avoid calculating a new split-index when re-fitting.
While checking if a knot can be removed, the index with the highest error
can be used as a candidate to replace the knot
(in the case it can't be removed).
Not sure if this is a proper fix, but I was getting frequent crashes, so
committing this real quick just to make master stable again. Can be
reverted later if there's a better fix. The changes to images really
need a closer look...
Hi Guys,
as one of my clients needs the possibility to have custom menu entries in the general right click menu (all over Blender: in the node editor, properties, toolbars, ...) I talked with Campbell about expanding our hard coded menu a bit. This is the outcome. As I only need those two, I currently support a button_prop and a button_pointer.
{F540397}
I tested the changes with a custom script where I added a custom entry and executed an operator on click - it seems to work exactly how it's intended to. The script: {F540435}
As I'm not too experienced in rna stuff I would really appreciate any review.
Thanks very much Campbell for his open ears & help on this issue!
Reviewers: campbellbarton, mont29
Reviewed By: campbellbarton, mont29
Subscribers: sybren, mont29
Tags: #addons
Differential Revision: https://developer.blender.org/D2612
Previous fix did not work for mixed textures. This one will over-allocate
information array, but it's better than not being able to render at all.
Some more cleanup and improvement is coming.
This patch allows for an unlimited number of textures in Cycles where the hardware allows. It replaces a number of static arrays with dynamic arrays and changes the way the flat_slot indices are calculated. Eventually, I'd like to get to a point where there are only flat slots left and textures of all kinds are stored in a single array.
Note that the arrays in DeviceScene are changed from containing device_vector<T> objects to device_vector<T>* pointers. Ideally, I'd like to store objects, but dynamic resizing of a std::vector in pre-C++11 calls the copy constructor, which for a good reason is not implemented for device_vector. Once we require C++11 for Cycles builds, we can implement a move constructor for device_vector and store objects again.
The limits for CUDA Fermi hardware still apply.
Reviewers: tod_baudais, InsigMathK, dingto, #cycles
Reviewed By: dingto, #cycles
Subscribers: dingto, smellslikedonkey
Differential Revision: https://developer.blender.org/D2650
Previously canceling a render done by the split kernel could cause artifacts
such as very bright or dark tiles. This was caused by unfinished samples
being included in the output buffer. To avoid this we now wait till all the
currently rendering samples have finished, up to a limit of twice the
expected time for them to finish (currently this is no more than 20 seconds,
but usually it's much less). If samples still haven't finished by then, we
stop anyway in case there's an endless loop occurring.
It was totally unclear whether the device is enabled or disabled.
Lots of people got fully lost in the current interface.
While the solution is not fully ideal, it at least solves the
ambiguity in the interface.
Simple child hairs don't have a face index number assigned, so the
call to dm->getTessFaceData(dm, num, CD_MFACE) would cause a crash. To
work around this, UV and normal vectors are copied from the parent
hair.
I've also removed an unnecessary call to dm->getTessFaceArray(dm);
Reviewers: kevindietrich
Differential Revision: https://developer.blender.org/D2638
The function doesn't return whether the object is a shape at all, since
it also returns true for camera objects (and soon also for empties). It
returns true when objects of this type can be exported to Alembic at all.
This is now reflected in the name.
This works around a long-outstanding issue (T50176) with Cycles on MSVC 2015/x86. Root cause is still unknown though; feels like a game of whack-a-mole.
Reviewers: sergey, dingto
Subscribers: Blendify
Tags: #cycles
Differential Revision: https://developer.blender.org/D2573
Using -cl-fast-relaxed-math assumes no NaN/Inf values in any expression.
This causes problems on overflow, division by zero, square root of negative number.
Comparisons with NaN or infinite value are affected as well.
This patch causes <2% slowdown on benchmark scenes.
Fix T50985: Rendering volume scatter with GPU OpenCL comes to an halt after a few seconds
HDF5 Alembic files are not officially supported by Blender. With this
commit, the HDF5 format is detected even when Blender is compiled without
HDF5 support, and the user is given an explanatory error message (rather
than the generic "Could not open Alembic archive for reading").
The final goal is to make vectorized types much easier to maintain;
the previous design had the following issues:
- Having all types and method implementations in one file made the source
rather bloated and unfun to navigate in.
- It was not possible to quickly glance available API for the type you are
interested in.
- Adding more vectorization types will bloat the file even more, making
things even more tricky to follow.
Fixes performance issues of C++ one with Windows MSVC debug builds...
Merely a translation of the msgfmt.cc code by @sergey, using BLI libs instead of C++'s stdlib.
Reviewers: sergey, campbellbarton, LazyDodo
Subscribers: sergey
Differential Revision: https://developer.blender.org/D2605
QtKit was removed in macOS Sierra, this patch disables WITH_CODEC_QUICKTIME
in Sierra and greater versions of macOS.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2645
By mistake, the code relied on ALEMBIC_ROOT_DIR being defined by the user
running the tests. Now CMake macros are used to correctly find the Alembic
root directory.
Matrix.decompose() should either return "location, orientation, size" or
"translation, rotation, scale". Since there are constructors for the former,
I've replaced "location" in the documentation with "translation".
The code is still the same, I just changed the documentation.
The idea is to have some generic BSDF node which is subclassed by
"regular" BSDF nodes and uber shaders.
This way we can access the special type and closure type when making
decisions somewhere else.
No longer passing time as float and constructing ISampleSelectors all
over the place. Instead, just construct an ISampleSelector once and
pass it along.
It is disabled by default, so it should not affect existing configurations.
The main benefits of this are:
- Linux distros can use it to avoid library duplication and link the
blender package against the gflags package from the system.
- It is easier to test whether Blender works with an updated version of
Gflags prior to re-bundling the library.
This switches the internal color representation of the eye dropper from display space to linear. Any time a linear color is requested and the color is picked from a linear object, the result is now precise to the bit as the color gets patched through directly. Color space conversion now only happens when a color is picked from non-linear display space objects or when the color is requested to be returned in non-linear space.
In addition, this patch changes the DifferenceMatte node to interpret a tolerance of 0.0 as accepting colors that are identical bit by bit, as opposed to simply refusing all colors.
Replaced some STREQ(snode->tree_idname, ...) calls with ED_node_is_*() calls for improved readability, fixed one case where the STREQ was used the wrong way
That's a quick hack to address that specific case; the new pointer IDProp
actually highlights a generic problem - datablocks using themselves - which
is not really handled by current code. Would consider this a not-so-urgent
TODO though.
The ABC_export and ABC_import functions both take a as_background_job
parameter, and return a boolean.
When as_background_job=true, returns false immediately after scheduling
a background job. This was the old behaviour of this function, which makes
it very hard for scripts to do something with the data after the import
or export completes.
When as_background_job=false, performs the export synchronously, and
returns true when the export was ok, and false if there were any errors.
This allows further processing.
The Scene.alembic_export() function is deprecated, and will be removed from
Blender 2.8 in favour of calling the bpy.ops.wm.alembic_export() operator.
As such, it has been hard-coded to the old background job behaviour.
The export is still slower than needed, as the particle systems themselves
aren't disabled during the export. It's only the writing to the Alembic
file that's skipped.
We could not edit them, but still could delete them, which makes no
sense, API-defined properties are similar to class members, removing
them from single instances is pure garbage. And it was broken anyway.
Found by @a.romanov while checking on T51198, thanks.
Curve resolution isn't natively supported by Alembic, hence it is stored
in a user property "blender:resolution". I've looked at a Maya curves
example file, but that also didn't contain any information about curve
resolution.
Differential Revision: https://developer.blender.org/D2634
Reviewers: kevindietrich
The order number written to Alembic is the same as we use in memory, so
the +1 wasn't needed, at least according to the reference Maya exporter
maya/AbcExport/MayaNurbsCurveWriter.cpp, function
MayaNurbsCurveWriter::write(), in the Alembic source code.
Furthermore, when writing an array of nurb orders, the curve type should
be set to kVariableOrder, otherwise the importer will ignore it.
commit 90778901c9
Merge: 76eebd9 3bf0026
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Mon Apr 3 07:52:05 2017 +0200
Merge branch 'master' into cycles_disney_brdf
commit 76eebd9379
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Thu Mar 30 15:34:20 2017 +0200
Updated copyright for the new files.
commit 013f4a152a
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Thu Mar 30 15:32:55 2017 +0200
Switched from multiplication of base and subsurface color to blending
between them using the subsurface parameter.
commit 482ec5d1f2
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Mon Mar 13 15:47:12 2017 +0100
Fixed a bug that caused an additional white diffuse closure call when using
path tracing.
commit 26e906d162
Merge: 0593b8c 223aff9
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Feb 6 11:32:31 2017 +0100
Merge branch 'master' into cycles_disney_brdf
commit 0593b8c51b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Feb 6 11:30:36 2017 +0100
Fixed the broken GLSL shader and implemented the Disney BRDF in the
real-time view port.
commit 8c7e11423b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Feb 3 14:24:05 2017 +0100
Fix to comply strict compiler flags and some code cleanup
commit 17724e9d2d
Merge: 379ba34 520afa2
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jan 24 09:59:58 2017 +0100
Merge branch 'master' into cycles_disney_brdf
commit 379ba346b0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jan 24 09:28:56 2017 +0100
Renamed the Disney BSDF to Principled BSDF.
commit f80dcb4f34
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Dec 2 13:55:12 2016 +0100
Removed reflection call when roughness is low because of artifacts.
commit 732db8a57f
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Nov 16 09:22:25 2016 +0100
Indication of whether to use fresnel is now handled via the type of the BSDF.
commit 0103659f5e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Nov 11 13:04:11 2016 +0100
Fixed an error in the clearcoat where it appeared too bright for default
light sources (like directional lights)
commit 0aa68f5335
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Nov 7 12:04:38 2016 +0100
Resolved inconsistencies in using tabs and spaces
commit f5897a9494
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Nov 7 08:13:41 2016 +0100
Improved the clearcoat part by using GTR1 instead of GTR2
commit 3dfc240e61
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 31 11:31:36 2016 +0100
Use reflection BSDF for glossy reflections when roughness is 0.0 to
reduce computational expense and some code cleanup
Code cleanup includes:
- Code style cleanup and removed unused code
- Consolidated code in the bsdf_microfacet_multi_impl.h to reduce
some computational expense
commit a2dd0c5faf
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Oct 26 08:51:10 2016 +0200
Fixed glossy reflections and refractions for low roughness values and
cleaned up the code.
For low roughness values, the reflections had some strange behavior.
commit 9817375912
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Oct 25 12:37:40 2016 +0200
Removed default values in setup functions and added extra functions for
GGX with fresnel.
commit bbc5d9d452
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Oct 25 11:09:36 2016 +0200
Switched from uniform to cosine hemisphere sampling for the diffuse and
the sheen part.
commit d52d8f2813
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 16:17:13 2016 +0200
Removed the color parameters from the diffuse and sheen shader and use
them as closure weights instead.
commit 8f3d927385
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 09:57:06 2016 +0200
Fixed the issue with artifacts when using anisotropy without linking the
tangent input to a tangent node.
commit d93f680db9
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 09:14:51 2016 +0200
Added subsurface radius parameter to control the per-color-channel
effect radius of the subsurface scattering.
commit c708c3e53b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 08:14:10 2016 +0200
Rearranged the inputs of the shader.
commit dfbfff9c38
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Oct 21 09:27:05 2016 +0200
Put spaces in the parameter names of the shader node
commit e5a748ced1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Oct 21 08:51:20 2016 +0200
Removed code that isn't in use anymore
commit 75992bebc1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Oct 21 08:50:07 2016 +0200
Code style cleanup
commit 4dfcf455f7
Merge: 243a0e3 2cd6a89
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Thu Oct 20 10:41:50 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit 243a0e3eb8
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Thu Oct 20 10:01:45 2016 +0200
Switching between OSL and SVM is more consistent now when using the Disney
BSDF.
There were some minor differences in the OSL implementation, e.g. the
refraction roughness was missing.
commit 2a5ac50922
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 09:17:57 2016 +0200
Fixed a bug that caused transparency to be always white when using OSL and
selecting GGX as distribution of the Disney BSDF
commit e1fa862391
Merge: d0530a8 7f76f6f
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 08:59:32 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit d0530a8af0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 08:53:18 2016 +0200
Cleanup the Disney BSDF implementation and removing unneeded files.
commit 3f4fc826bd
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 08:36:07 2016 +0200
Unified the OSL implementation of the Disney clearcoat as a simple
microfacet shader like it was previously done in SVM
commit 4d3a0032ec
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Sep 26 12:35:36 2016 +0200
Enhanced performance for Disney materials without subsurface scattering
commit 3cd5eb56cf
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Sep 16 08:47:56 2016 +0200
Fixed a bug in the Disney BSDF that caused specular reflections to be too
bright and diffuse is now reacting to the roughness again
- A normalization for the fresnel was missing which caused the specular
reflections to become too bright for the single-scatter GGX
- The roughness value for the diffuse BSSRDF part has always been
overwritten and thus always 0
- Also the performance for refractive materials with roughness=0.0 has
been improved
commit 7cb37d7119
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Thu Sep 8 12:24:43 2016 +0200
Added selection field to the Disney BSDF node for switching between
"Multiscatter GGX" and "GGX"
In the "GGX" mode there is an additional parameter for changing the
refraction roughness for materials with smooth surfaces and rough interns
(e.g. honey). With the "Multiscatter GGX" this effect can't be produced at
the moment and so here will be no separation of the two roughness values.
commit cdd29d06bb
Merge: 02c315a b40d1c1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 6 15:59:05 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit 02c315aeb0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 6 15:16:09 2016 +0200
Implemented the OSL part of the Disney shader
commit 5f880293ae
Merge: 630b80e b399a6d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Sep 2 10:53:36 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit 630b80e08b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Sep 2 10:52:13 2016 +0200
Fresnel in the microfacet multiscatter implementation improved
commit 0d9f4d7acb
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Aug 26 11:11:05 2016 +0200
Fixed refraction roughness problem (refractions were always 100% rough)
and set IOR of clearcoat to 1.5
commit 9eed34c7d9
Merge: ef29aae ae475e3
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Aug 16 15:22:32 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit ef29aaee1a
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Aug 16 15:17:12 2016 +0200
Implemented the fresnel in the multi-scatter GGX for the Disney BSDF
- The specular/metallic part uses the multi-scatter GGX
- The fresnel of the metallic part is controlled by the specular value
- The color of the reflection part when using transparency can be
controlled by the specularTint value
commit 88567af085
Merge: cc267e5 285e082
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Aug 3 15:05:09 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit cc267e52f2
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Aug 3 15:00:25 2016 +0200
Implemented the Disney clearcoat as a variation of the microfacet bsdf,
removed the transparency roughness again and added an input for
anisotropic rotations
commit 81f6c06b1f
Merge: ece5a08 7065022
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Aug 3 11:42:02 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit ece5a08e0d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 16:29:21 2016 +0200
Base color now applied again to the refraction of transparent Disney
materials
commit e3aff6849e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 16:05:19 2016 +0200
Added subsurface color parameter to the Disney shader
commit b3ca6d8a2f
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 12:30:25 2016 +0200
Improvement of the SSS in the Disney shader
* Now the bump normal is correctly used for the SSS.
* SSS in Disney uses the Disney diffuse shader
commit d68729300e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 12:23:13 2016 +0200
Better calculation of the Disney diffuse part
Now the values for NdotL and NdotV are clamped to 0.0f for a better look
when using normal maps
commit cb6e500b12
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Jul 25 16:26:42 2016 +0200
Now one can disable specular reflections again by setting specular and
metallic to 0 (broke this in the previous commit)
commit bfb9cb11b5
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Jul 25 16:11:07 2016 +0200
fixed the Disney SSS and cleaned the initialization of the Disney shaders
commit 642c0fdad1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Jul 25 16:09:55 2016 +0200
fixed an error that was caused by the missing LABEL_REFLECT in the Disney
diffuse shader
commit c10b484dca
Author: Jens Verwiebe <info@jensverwiebe.de>
Date: Fri Jul 22 01:15:21 2016 +0200
Rollback attempt to fix sss crashing, it prevented crash by disabling sss completely, thus useless
commit 462bba3f97
Author: Jens Verwiebe <info@jensverwiebe.de>
Date: Thu Jul 21 23:11:59 2016 +0200
Add an undef for sc_next for safety
commit 32d348577d
Author: Jens Verwiebe <info@jensverwiebe.de>
Date: Thu Jul 21 00:15:48 2016 +0200
Attempt to fix Disney SSS
commit dbad91ca6d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Jul 20 11:13:00 2016 +0200
Added a roughness parameter for refractions (for scattering of the rays
within an object)
With this, one can create a translucent material with a smooth surface and
with a milky look.
The final refraction roughness has to be calculated using the surface
roughness and the refraction roughness because those two are correlated
for refractions. If a ray hits a rough surface of a translucent material,
it is scattered while entering the surface. Then it is scattered further
within the object. The calculation I'm using is the following:
RefrRoughnessFinal = 1.0 - (1.0 - Roughness) * (1.0 - RefrRoughness)
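As code, that is simply (function name made up for the example):
```
/* Direct translation of the formula above. */
float final_refraction_roughness(float roughness, float refraction_roughness)
{
  return 1.0f - (1.0f - roughness) * (1.0f - refraction_roughness);
}
```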
commit 50ea5e3e34
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jun 7 10:24:50 2016 +0200
Disney BSDF is now supporting CUDA
commit 10974cc826
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 31 11:18:07 2016 +0200
Added parameters IOR and Transparency for refractions
With this, the Disney BRDF/BSSRDF is extended by the BTDF part.
commit 218202c090
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 15:08:18 2016 +0200
Added an additional normal for the clearcoat
With this normal one can simulate a thin layer of clearcoat by applying a
smoother normal map than the original to this input
commit dd139ead7e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 12:40:56 2016 +0200
Switched to the improved subsurface scattering from Christensen and
Burley
commit 11160fa4e1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 10:16:30 2016 +0200
Added Disney Sheen shader as a preparation to get to a BSSRDF
commit cee4fe0cc9
Merge: 4f955d0 6b5bab6
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 09:08:09 2016 +0200
Merge branch 'cycles_disney_brdf' of git.blender.org:blender into cycles_disney_brdf
Conflicts:
intern/cycles/kernel/closure/bsdf_disney_clearcoat.h
intern/cycles/kernel/closure/bsdf_disney_diffuse.h
intern/cycles/kernel/closure/bsdf_disney_specular.h
intern/cycles/kernel/closure/bsdf_util.h
intern/cycles/kernel/osl/CMakeLists.txt
intern/cycles/kernel/osl/bsdf_disney_clearcoat.cpp
intern/cycles/kernel/osl/bsdf_disney_diffuse.cpp
intern/cycles/kernel/osl/bsdf_disney_specular.cpp
intern/cycles/kernel/osl/osl_closures.h
intern/cycles/kernel/shaders/node_disney_bsdf.osl
intern/cycles/render/nodes.cpp
intern/cycles/render/nodes.h
commit 4f955d0523
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 24 16:38:23 2016 +0200
SVM and OSL are both working for the simple version of the Disney BRDF
commit 1f5c41874b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 24 09:58:50 2016 +0200
Disney node can now be used without SVM; also started to clean up the OSL
implementation.
There is still some wrong behavior in SVM for the Schlick Fresnel part at the
specular and clearcoat
commit d4b814e930
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:22:29 2016 +0200
Switched from a parameter struct for Disney parameters to ShaderClosure params
commit b86a1f5ba5
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:19:57 2016 +0200
Added additional variables for storing parameters in the ShaderClosure struct
commit 585b886236
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 12:03:17 2016 +0200
added output parameter to the DisneyBsdfNode
That has been forgotten after removing the inheritance of BsdfNode
commit f91a286398
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 10:40:48 2016 +0200
removed BsdfNode class inheritance for DisneyBsdfNode
That's due to a naming difference. The Disney BSDF uses the name 'Base Color'
while the BsdfNode had a 'Color' input. That caused a text message to be
printed while rendering.
commit 30da91c9c5
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 16:08:10 2016 +0200
disney implementation cleaned
commit 30d41da0f0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 13:23:07 2016 +0200
added the disney brdf as a shader node
commit 1f099fce24
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 3 16:54:49 2016 +0200
added clearcoat implementation
commit 00a1378b98
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Apr 29 22:56:49 2016 +0200
disney diffuse and specular implemented
commit 6baa7a7eb7
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 15:21:32 2016 +0200
disney diffuse is working correctly
commit d8fa169bf3
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 08:41:53 2016 +0200
added vessel for disney diffuse shader
commit 6b5bab6cec
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:22:29 2016 +0200
Switched from a parameter struct for Disney parameters to ShaderClosure params
commit f6499c2676
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:19:57 2016 +0200
Added additional variables for storing parameters in the ShaderClosure struct
commit 7100640b65
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 12:03:17 2016 +0200
added output parameter to the DisneyBsdfNode
That has been forgotten after removing the inheritance of BsdfNode
commit 419ee54411
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 10:40:48 2016 +0200
removed BsdfNode class inheritance for DisneyBsdfNode
That's due to a naming difference. The Disney BSDF uses the name 'Base Color'
while the BsdfNode had a 'Color' input. That caused a text message to be
printed while rendering.
commit 6006f91e87
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 16:08:10 2016 +0200
disney implementation cleaned
commit 0ed0895914
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 13:23:07 2016 +0200
added the disney brdf as a shader node
commit 0630b742d7
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 3 16:54:49 2016 +0200
added clearcoat implementation
commit 9f3d39744b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Apr 29 22:56:49 2016 +0200
disney diffuse and specular implemented
commit 9b26206376
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 15:21:32 2016 +0200
disney diffuse is working correctly
commit 4711a3927d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 08:41:53 2016 +0200
added vessel for disney diffuse shader
Differential Revision: https://developer.blender.org/D2313
It is possible to use surface-only nodes and connect them to the volume output.
If there was something connected to the surface output, those extra connections
will not change anything visually, but will force volume features to be included
in the feature-adaptive kernels.
In fact, this exact reason seems to be causing the slowdown of the Barcelone file
when comparing AMD OpenCL to NVidia CUDA.
Currently only supported by the final F12 renders because of the current design
of what gets optimized out when and how feature-adaptive kernel accesses
list of required features.
Reviewers: dingto, nirved, maiself, lukasstockner97, brecht
Reviewed By: brecht
Subscribers: bliblubli
Differential Revision: https://developer.blender.org/D2569
Remapping an ID to itself is nonsense here (it was actually triggering an
assert in BKE_library code); just bail out early in the RNA callback in
that case.
Sanitize a bit how the cache path is handled by fluidsim (there is much more
to be done here though :( ), and forbid an empty path (we reset to the default
path relative to the current .blend file in case it's empty).
If people really, really want to use the current OS-wise directory, they can at
least use '.' as the path. ;)
When the options were moved to toolsettings, this part was missed. The problem was not the pointer as suggested in D2629.
Thanks Arvīds Kokins for his help fixing this bug
This avoids the unnecessary creation of bvhtree, which can be highly inefficient in some cases
(for example: in the `operator_modal_view3d_raycast.py` template)
Unfortunately this does break compatibility in that the viewport will look a
bit different depending on the settings, but the old behavior was simply not
usable for higher distances.
Object Info node can be useful to give some variation to a single material assigned to multiple instances. This patch adds support for Viewport and BI.
{F499530}
Example: {F499528}
Reviewers: merwin, brecht, dfelinto
Reviewed By: brecht
Subscribers: duarteframos, fclem, homyachetser, Evgeny_Rodygin, AlexKowel, yurikovelenov
Differential Revision: https://developer.blender.org/D2425
The U-resolution of the imported curves was kept at the default value
of 12, which is way too high for imported hair. We export hair at a
fairly high resolution already, so it's not needed to subdivide even
further when importing.
Of course this may have an impact on other curves that do require this
U-resolution to be higher. In that case the resolution can be
increased after importing.
I removed the default nu->orderu = num_verts, as that allowed every
point to influence the entire spline, which was more expensive for the
CPU, and unlikely to be needed. The orderu computations had off-by-one
errors in the curve importer, which are now also fixed. The correct
values are:
- Linear: orderu = 2
- Quadratic: orderu = 3
- Cubic: orderu = 4
These values are also what is stored in the Alembic file for curves of
type kVariableOrder, according to the reference Maya exporter
maya/AbcExport/MayaNurbsCurveWriter.cpp, function
MayaNurbsCurveWriter::write(), in the Alembic source code.
The result is a frame rate increase of roughly 100x (tested with one
100-hair test on one machine, so take it with a grain of salt).
This test checks that a set of cubes are exported with the correct
transform, both with flatten=True and flatten=False.
This commit also adds an easy to use superclass for upcoming Alembic
unit tests.
This supports our common character animation workflow, where a character,
its rig, and the custom bone shapes are all part of a group. This group
is then linked into the scene, the rig is proxified and animated. Such
a group can now be exported. Use "Renderable objects only" to prevent
writing the custom bone shapes to the Alembic file.
The absence of datablock properties "will certainly be resolved soon as the need for them is becoming obvious" said the [[http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.67/Python_Nodes|Python Nodes release notes]]. So this patch allows Python scripts to create ID Properties which reference datablocks.
This functionality is implemented for `PointerProperty` and now such properties can be created with Python.
In addition to the standard update callback, `PointerProperty` can have a `poll` callback (standard RNA) which is useful for search menus. For details see the test included in this patch.
Original author: @artfunkel
Alexander (Blend4Web Team)
Reviewers: brecht, artfunkel, mont29, campbellbarton
Reviewed By: mont29, campbellbarton
Subscribers: jta, sergey, campbellbarton, wisaac, poseidon4o, mont29, homyachetser, Evgeny_Rodygin, AlexKowel, yurikovelenov, fjuhec, sharlybg, cardboard, duarteframos, blueprintrandom, a.romanov, BYOB, disnel, aditiapratama, bliblubli, dfelinto, lukastoenne
Maniphest Tasks: T37754
Differential Revision: https://developer.blender.org/D113
We can not re-use anything for such pools, because we know nothing about whether
the main thread is sleeping or not. So we identify such threads as 0, but we don't
use the main thread's TLS.
This fixes deadlocks and crashes reported by Luca when doing playblasts.
Global size depends on memory usage, which might change during rendering.
Haven't seen it happen, but it seems possible that this could cause the global
size to be different from what was used for allocating buffers.
The issue here is that the preferences are still used because both can be accessed from the 3D View, view menu. In the future, it is likely that the old mode will be removed (maybe 2.8?) but for now we want to keep both operational.
Differential revision: https://developer.blender.org/D2320
Do not recompute both points' 2D coordinates for each segment; we can
copy them over from the previous one... Does not give any measurable speedup
offhand, though.
Couple of things here:
- Boost is not necessarily compiled into your /opt/lib; a system-wide
version might have been used. The recent change in Alembic did not
take this into account.
- Alembic needs some extra components of Boost.
This part might be missing now for distros other than DEB.
The issue was caused by the recent change in inline policy.
There is some sort of memory corruption happening here; ASAN suggests
it's a stack overflow issue. Not quite sure why it is happening though, and
I was not able to solve anything here yet in the past hours.
Committing a fix which works, with a big TODO note.
The issue is visible on AVX2 machine when rendering cycles_reports_test.
Doing this in a fully 'clean' way is far from obvious, especially
unregister; you often end up leaving nasty 'orphaned' keymap items
referring to unregistered operators...
This seems to happen on Windows only; it happened to Thomas and Nathan already.
Similar to a patch Thomas was showing, but I do not see it committed. So committing
now in order to get more developers and users happy.
Ever since we merged the extra texture types (half etc) and the split kernel, the compile time for cycles_kernel has been going out of control.
It's currently sitting at a cool 1295.762 seconds with our standard compiler (2013/x64/release).
I'm not entirely sure why MSVC gets upset with it, but the inlining of the matrix math near the bottom of the tri-cubic 3D interpolator is the source of the issue; this patch excludes it from being inlined.
This patch brings it back down to a manageable 186 seconds. (7x faster!!)
with the attached bzzt.blend that @sergey kindly provided i got the following results with builds with identical hashes
58:51.73 buildbot
58:04.23 Patched
It's really close. The slight speedup could be explained by the switch instead of multiple ifs (switches do generate more optimal code than a chain of if/else statements), but in all honesty it might just have been pure luck (dev box, very polluted, bad for benchmarks). Regardless, this patch doesn't seem to slow down anything in my limited testing.
{F532336}
{F532337}
Reviewers: brecht, lukasstockner97, juicyfruit, dingto, sergey
Reviewed By: brecht, dingto, sergey
Subscribers: InsigMathK, sergey
Tags: #cycles
Differential Revision: https://developer.blender.org/D2595
It's possible that cancellation occurred between the creation of the reader
and the creation of the Blender object, in which case reader->object()
returns a NULL pointer.
BKE_libblock_free_us() was called on the object data, which decrements
its user count, after which the same function was called on the object,
which decrements the user count of the object data again. This double
decrement was too much.
The state mask wasn't applied before comparison, giving false results. It
shouldn't really happen that a ray state contains any flags that need to
be masked away, but if it does happen it's better to not get stuck.
Previously, a GHash was used to store a flattened mapping of parent
information based on the Alembic hierarchy, and then that hash was used to
set parent pointers on Blender objects. This resulted in errors and
some duplicate objects. The new approach stores parent pointers while
traversing the Alembic hierarchy, which means that there is much more
information about the actual context of the Alembic object itself,
producing a more stable import.
Also replaced the bool param "to_yup" with "AbcAxisSwapMode mode", so that
it's more explicit that axes are swapped.
Also added unittests for create_swapped_rotation_matrix.
There was a problem with parent-child relations not getting set up
correctly when an Alembic object was both the transform for a mesh object
and the parent of other mesh objects.
convert_matrix() now only converts from Imath::M44d to float[4][4] (taking
different camera orientations into account). Import-time scaling is now
performed by the caller.
Also renamed AbcObjectReader::readObjectMatrix to
setupObjectTransform, as it does more than just reading the object
matrix; it also sets up an object constraint if the Alembic Xform is
animated.
Alembic is an interchange and caching format that can contain custom
object schemas. Blender shouldn't crash (because of failing asserts) just
because it doesn't know such an object type.
It's a mapping from full path of an Alembic object to an AbcObjectReader*.
The fact that at some point it is used to construct parent-child relations
doesn't matter.
The importer was guessing whether an Alembic IXform object was part of a
child object, or should be represented as an Empty in Blender. By reversing
the order in which objects are visited, the children can now claim their
parent as part of the same object (so IPolyMesh claims its parent IXform
as part of the same Blender object). This results in much less guesswork.
I've also removed similar guesswork from the code that sets parent pointers,
by simply searching for the parent in a hierarchical way, instead of trying
to predict (again) which IXforms were turned into empties.
Also, visit_object() now actually visits the object -- previously it only
visited its children, and assumed the object it was called on was already
handled by a previous call.
create_transform_matrix(float[4][4]) did mostly the same as
create_transform_matrix(Object *, float[4][4]), but more elegant.
However, the former has some inconsistencies with the latter (which
are now merged and made explicit, turned out one was for z-up→y-up
while the other was for y-up→z-up), and was renamed to
copy_m44_axis_swap(...) to convey its purpose more clearly.
Furthermore, "loc" has been renamed to "trans", as matrices don't
store locations but translations; and more variables now have a src_
or dst_ prefix to denote whether they contain a matrix/vector in the
source or destination axis orientation.
AbcExporter::createTransformWriter() tries to predict the parent Xform
name, but if it cannot be found, it has multiple ways of creating it, possibly
under a different name than originally searched for.
The idea is to have a system where we properly log error messages and
let the users know that errors occurred, redirecting them to the console
for explanations. This is only implemented for the exporter since the
importer already has similar functionalities; however they shall
ultimately be unified in some way.
Reviewers: sybren, dfelinto
Differential Revision: https://developer.blender.org/D2541
This avoids write access happening in non-atomic manner in
Shader::tag_update which modifies the global managers. Even
for 1 byte data types it's quite dangerous.
The issue is coming from the fact that float3 is actually a 16-byte-aligned
data type and the "padding" was not initialized. This caused memcmp() to
access non-initialized memory.
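To illustrate the hazard, here is a minimal sketch with a hypothetical float3-like type (not the actual Cycles code):

```
#include <string.h>

/* A float3-like type padded to 16 bytes, as SIMD-aligned types often are. */
typedef struct { float x, y, z, _pad; } float3_like;

/* Unsafe: memcmp() also compares the padding bytes; if '_pad' was never
 * initialized, this reads indeterminate memory and can miscompare. */
static int equal_unsafe(const float3_like *a, const float3_like *b)
{
	return memcmp(a, b, sizeof(*a)) == 0;
}

/* Safe: compare only the meaningful members (or initialize the padding). */
static int equal_safe(const float3_like *a, const float3_like *b)
{
	return a->x == b->x && a->y == b->y && a->z == b->z;
}
```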
This provides us with a clearer API (so I don't have to use const_cast<>
in upcoming code). It also allows layering of different Alembic files,
so you can have a base file and load a separate file containing overrides.
Verbally approved by Dr. Sergey.
Alembic requires one of ALEMBIC_LIB_USES_BOOST, ALEMBIC_LIB_USES_TR1, or
C++11, and silently defaults to the latter if the former two are OFF.
Before this change, Alembic was only built without C++11 if OpenEXR
was built at the same time. This dependency was both unnecessary and
undocumented.
This function was modifying the texture datablock, which makes it
unsafe to call from multiple threads. Now we pass the argument that
we don't need nodes to the underlying functions.
There will still be a race condition in the noise texture, but that should
at least be free from crashes. Doesn't mean we shouldn't fix it, though.
In this case the PyObject gets lost from pybm, and bm.free() does not invalidate the PyElem.
This will cause Python's destructor to read invalid memory and crash.
The solution is to make a copy of the PyObject pointers before overwriting.
This area is a subject of reconsideration, so for now use the simplest
way possible -- ensure the depsgraph's nodes have proper layer flags
when going in and out of local mode.
The issue was apparently caused by -fno-finite-math-only added to kernel.cpp
CFLAGS. For now just removed this flag from the kernel (we don't really want
it there at this point, and we don't have it for SSE/AVX optimized kernels).
But surely more investigation is needed here.
The idea is to make include statements more explicit and obvious where the
file is coming from, additionally reducing the chance of the wrong header being
picked up.
For example, it was not obvious whether bvh.h was referring to the builder
or traversal, whether node.h is a generic graph node or a shader node,
and cases like that.
Surely this might look obvious for the active developers, but after some
time of not touching the code it becomes less obvious where a file is coming
from.
This was briefly mentioned in T50824 and it seems @brecht is fine with such
explicitness, but we need to agree with all active developers before committing
this.
Please note that this patch is lacking changes related to GPU/OpenCL
support. This will be solved if/when we all agree this is a good idea to move
forward.
Reviewers: brecht, lukasstockner97, maiself, nirved, dingto, juicyfruit, swerner
Reviewed By: lukasstockner97, maiself, nirved, dingto
Subscribers: brecht
Differential Revision: https://developer.blender.org/D2586
The intention of this commit is to address issues mentioned in the
reports T43865, T50164 and T50452.
The code is based on Embree code with some extra vectorization
to speed up single ray to single triangle intersection.
Unfortunately, such a fix is not coming for free. There is some
slowdown for AVX2 processors, mainly due to different vectorization
code, which causes a different number of instructions to be executed
and different instructions-per-cycle counters. But on the other hand
this commit makes pre-AVX2 platforms such as AVX and SSE4.1 a bit
faster. The performance goes as follows:

            2.78c AVX2   2.78c AVX    Patch AVX2         Patch AVX
BMW         05:21.09     06:05.34     05:32.97 (+3.5%)   05:34.97 (-8.5%)
Classroom   16:55.36     18:24.51     17:10.41 (+1.4%)   17:15.87 (-6.3%)
Fishy Cat   08:08.49     08:36.26     08:09.19 (+0.2%)   08:12.25 (-4.7%)
Koro        11:22.54     11:45.24     11:13.25 (-1.5%)   11:43.81 (-0.3%)
Barcelone   14:18.32     16:09.46     14:15.20 (-0.4%)   14:25.15 (-10.8%)
On GPU the performance is about 1.5-2% slower in my tests on a GTX 1080,
but I'm afraid we can't do much as a part of this change here, and
consider it a price to pay for a more proper intersection check.
Made in collaboration with Maxym Dmytrychenko, big thanks to him!
Reviewers: brecht, juicyfruit, lukasstockner97, dingto
Differential Revision: https://developer.blender.org/D1574
That one was:
* Resetting non-ID pointers (lib_link_xxx funcs should only affect ID
pointers, everything else shall be done in direct_link_xxx func).
* Even worse, always calling lib_link_animdata, even when
LIB_TAG_NEED_LINK tag was unset...
We do not need any special handling anymore for usercount of images used
by faces/polygons (tpage stuff), since we have the 'real_user' handling,
which will gracefully cope with all possible situations.
So better not keep that ugly confusing useless special case.
Mainly:
* Add missing `IDP_LibLinkProperty()` calls for many ID types
(harmless currently, but better be consistent here!).
* Bring lib_link_xxx functions more in line with each other.
* Replace some long if/else by switch.
Simplifies code quite a bit, making it shorter and easier to extend.
Currently no functional changes for users, but is required for the
upcoming work of shadow catcher support with OpenCL.
It uses the idea of accumulating all possible light reachable across the
light path (without taking shadow blocking into account) and accumulating
the total shaded light across the path. Dividing the second figure by the first
seems to give a good estimate of the shadow.
In fact, to my knowledge, it's something really similar to what is
happening in the denoising branch, so we are aligned here, which is good.
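In symbols, the estimate is roughly the following (a sketch of the idea described above, not the exact Cycles formula):

```
\text{shadow} \approx \frac{\sum_i L_i^{\text{shaded}}}{\sum_i L_i^{\text{unoccluded}}},
\qquad 0 \le \text{shadow} \le 1
```

Both sums run over the light samples along the path; a ratio of 1 means fully unshadowed, 0 fully shadowed.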
The workflow is as follows:
- Create an object which matches the real-life object on which the shadow is
to be caught.
- Create an approximately similar material on that object.
This is needed to make indirect light properly affect CG objects
in the scene.
- Mark the object as Shadow Catcher in the Object properties.
Ideally, after doing that it will be possible to render the image and
simply alpha-over it on top of real footage.
Before, Cycles would first sync the shader exactly as shown in the UI, then determine and sync the used attributes, and later optimize the shader.
Therefore, even completely unconnected nodes would cause unnecessary attributes to be synced.
The reason for this is to avoid frequent resyncs when editing shaders interactively, but it can still be avoided for non-interactive renders - which is what this commit does.
Reviewed by: sergey
Differential Revision: https://developer.blender.org/D2285
Moving to manual class registration means it's easier to accidentally
miss registering classes.
Now detect missing class registration
and warn when running with `--debug-python`
For Windows 8.1 and X11 (Linux, BSD) now use the DPI specified by the operating
system, which previously only worked on macOS. For Windows this is handled per
monitor, for X11 this is based on Xft.dpi or xrandr --dpi. This should result
in appropriate font and button sizes by default in most cases.
The UI has been simplified to a single UI Scale factor relative to the automatic
DPI, instead of two DPI and Virtual Pixel Size settings. There is forward and
backwards compatibility for existing user preferences.
Reviewed By: brecht, LazyDodo
Differential Revision: https://developer.blender.org/D2539
This adds the ability to switch between different application-configurations
without interfering with Blender's normal operation.
This commit doesn't include any templates,
so it's mostly to allow collaboration for the Blender 101 project
and other custom configurations.
Application templates can be installed & selected from the file menu.
Other details:
- The `bl_app_template_utils` module handles template activation
(similar to `addon_utils`).
- The `bl_app_override` module is a general module
to assist scripts overriding parts of Blender in reversible way.
See docs:
https://docs.blender.org/manual/en/dev/advanced/app_templates.html
See patch: D2565
Use fast-math friendly version of this function.
We should probably avoid unsafe fast math, but this is to be done with
real care with all the benchmarks properly done.
For now committing a much safer fix.
Ideally we need to find a way to remove such a static limit here, but it's not so
trivial to implement for texture nodes. Requires some bigger system redesign there.
Just raising limit for now, which is fine for modern systems.
It is unused now, and if we want a similar function we should use
Pluecker intersection, which is the same performance with SSE optimization
but is more watertight.
The title says it all actually. Gives up to 10% speedup on test scenes here
on i7-6800K.
Render times on GPU are unreliable here, but there might be some slowdown
caused by watertight nature of intersections.
Avoid construction of temporary array and make utility function force-inlined.
Additionally avoid calling float4_to_float3 twice.
This brings render times back to the same values as before the current patch series.
This is preparation work for the followup commit which will move
the remaining parts of the Woop intersection logic to a utility file.
Doing it as a separate commit to keep changes more atomic and easier
to bisect when/if needed.
There are the following benefits:
- Modifying intersection algorithm will not cause so much re-compilation.
- It works around header dependency hell and allows us to use vectorization
types much easier in there.
This removes the goal springs, in favor of simply calculating the goal forces on the vertices directly. The vertices already store all the necessary data for the goal forces, thus the springs were redundant, and just defined both ends as being the same vertex.
The main advantage of removing the goal springs, is an increase in flexibility, allowing us to much more nicely do some neat dynamic stuff with the goals/pins, such as animated vertex weights. But this also has the advantage of simpler code, and a slightly reduced memory footprint.
This also removes the `f`, `dfdx` and `dfdv` fields from the `ClothSpring` struct, as that data is only used by the solver, and is re-computed on each step, and thus does not need to be stored throughout the simulation.
Reviewers: sergey
Reviewed By: sergey
Tags: #physics
Differential Revision: https://developer.blender.org/D2514
This is already called by wm_init_userdef; in old code
different initialization methods were used, but now it's not needed.
Confusing, since prefs are loaded in this function but it doesn't initialize temp.
There seems to be a compiler bug in MSVC2013. The issue does not happen on Linux and
does not happen on Windows when building with MSVC2015.
Since it's really a pain to debug release builds with MSVC2013, the AVX2 optimization
is disabled for curve segments for this compiler.
Utility to get a file/dir in the path by index,
supporting negative indices to start from the end of the path.
Without this it wasn't straightforward to get
a file's parent directory name from a filepath.
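A minimal sketch of the idea (illustrative code, not the actual BLI utility; it assumes '/' and '\' as separators):

```
#include <stdbool.h>
#include <string.h>

/* Return the offset/length of the path component at 'index';
 * negative indices count from the end, so -1 is the last component. */
static bool path_name_at_index(const char *path, int index,
                               int *r_offset, int *r_len)
{
	const int len = (int)strlen(path);
	int count = 0;
	/* First pass: count components so negative indices can be normalized. */
	for (int i = 0; i < len;) {
		while (i < len && (path[i] == '/' || path[i] == '\\')) i++;
		if (i == len) break;
		count++;
		while (i < len && path[i] != '/' && path[i] != '\\') i++;
	}
	if (index < 0) index += count;
	if (index < 0 || index >= count) return false;
	/* Second pass: locate the requested component. */
	for (int i = 0, c = 0; i < len;) {
		while (i < len && (path[i] == '/' || path[i] == '\\')) i++;
		const int start = i;
		while (i < len && path[i] != '/' && path[i] != '\\') i++;
		if (c++ == index) {
			*r_offset = start;
			*r_len = i - start;
			return true;
		}
	}
	return false;
}
```

With this, index -1 yields the file name and -2 its parent directory's name, e.g. for "/data/shots/shot_010/scene.blend" index -2 gives "shot_010".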
When the new window didn't end up using the size stored in the preferences
the splash would not be centered (even outside the screen in some cases).
Now centered popups listen for window resizing.
For example, for RX480 you'll no longer see "Ellesmere" but will see
"AMD Radeon RX 480 Graphics", which makes more sense and makes it easy to
distinguish which exact card it is when having multiple different cards
with Ellesmere codenames (i.e. RX480 and WX7100) in the same machine.
Previously, the code would only update the status string if the main status changed.
However, the main status did not include the remaining time, and therefore it wasn't updated until the amount of rendered tiles (which is part of the main status) changed.
This commit therefore makes the BlenderSession remember the time of the last status update and forces a status update if the last one was more than a second ago.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D2465
Although this wasn't so obvious since it
only showed up for factory settings and in the preferences window.
Panel display order depends on registration order.
Sorry for the noise. On the bright side, we no longer need to move
classes around to re-arrange panels.
Instead of calling a function that loops over the whole list of a given ID
type, do the whole loop over Main in the parent function, and call functions
writing a single datablock at a time.
This design is more in line with all other places in Blender where we
handle the whole content of Main (including readfile.c), and much easier
to extend and add e.g. some generic processing of IDs before/after
writing, etc.
From the user's point of view there should be no change at all; the only difference is
that data-block types won't be saved in the same order as before (.blend
file specs enforce no order here, so this is not an issue, but it could
perhaps trip up third parties using other, simplified .blend file readers).
Reviewers: sergey, campbellbarton
Differential Revision: https://developer.blender.org/D2510
Meshes w/o modifiers wouldn't have their derived mesh applied.
The check was to avoid a crash but it's in fact meaningless,
since the modifier might be disabled, or there may be virtual modifiers.
Not sure why I did not put those in from the start... Actually *not* having an
undo point here can be problematic, since undoing some previous action
was trying to restore from a bad pointer (I think) in UI, generating
asserts.
Note however that it's not a 'pure' undo, in that you may not find your
linked data in the exact same state as before deleting it, after an undo,
since it actually implies *reloading* the deleted libraries (and not
restoring from a previously stored memory dump).
Reported by @sergey, thanks.
There is no way currently to prevent the option from showing in the menu, so
instead report a warning to the user (and curse again the current nightmarish
system of operations in the outliner...).
Reported by @sergey, thanks.
Pass globals as a bare pointer, same as it used to be prior to the split kernel rework.
AMD CPU platform and Intel OpenCL were complaining about this.
Perhaps we shouldn't pass globals as a pointer at all; this isn't something that is
really portable and can cause issues on 32 bit perhaps.
Internal change needed for template support.
Loading the user preferences first so it's possible
for preferences to control startup behavior.
In general it's useful to load preferences before data-files,
so we know security settings, for example.
The range is controlled using the following command line arguments:
--cycles-resumable-start-chunk
--cycles-resumable-end-chunk
Those are 1-based indices of the range for rendering.
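As a rough illustration of how a 1-based chunk range might map onto a sample range (hypothetical arithmetic, not the exact Cycles code):

```
/* Hypothetical sketch: split 'total_samples' into 'num_chunks' equal chunks
 * and compute the sample range covered by the 1-based, inclusive chunk
 * range [start_chunk, end_chunk]. */
static void chunk_range_to_samples(int total_samples, int num_chunks,
                                   int start_chunk, int end_chunk,
                                   int *r_sample_start, int *r_num_samples)
{
	const int samples_per_chunk = (total_samples + num_chunks - 1) / num_chunks;
	*r_sample_start = (start_chunk - 1) * samples_per_chunk;
	*r_num_samples = (end_chunk - start_chunk + 1) * samples_per_chunk;
}
```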
The thing I'm really starting to hate is the requirement to specify both
operation code and node type. Seems to be duplicated enums without real
need for that.
This addresses an issue raised by D2453 -
that there was no way to check if operators are run
multiple times in a row.
Actions that don't cause an UNDO event are still ignored.
Was using some threaded queue on top of the task pool, tssk...
Now properly using the task pool directly to crunch chunks of smooth fans.
No noticeable changes in speed.
Tried to completely get rid of the 'no threading with few loops' code,
but even just creating/freeing the task pool, without actually pushing
any task, is enough to make the code 50% slower in the worst case scenario (i.e.
a few thousand simple cube objects).
The root of the issue was in the custom normal code; so far it assumed that
we could only have one cyclic smooth fan around each vertex, which is...
blatantly wrong (again, e.g. two cones sharing the same vertex tip).
This required a rather deep change in how smooth fans/clnor spaces are processed,
took me some time to find a 'good' solution.
Note that the new code is slightly slower than the previous one (maybe about 5%),
not much to be done here, I'm afraid.
Tested against all older report files I could find, seems OK.
- Add optional 'display_name' callback
so callers can construct own names.
- Add optional 'prop_filepath' argument
(for operators that don't use "filepath").
- Add doc-string.
- Use keyword only arguments.
While this compiler is not officially supported yet, getting it to work is
a nice thing because more and more AMD cards will fall under MESA driver.
It's also nice to use explicit comparison with NULL, which makes it more
clear whether variable is a boolean or pointer. Even Rust enforces this!
Patch by Ian Bruce with own modifications.
The mesh convert operator can 'freeze' a mesh
(WYSIWYG, modifiers, shape keys etc).
However it's not very obvious that the way to perform this
operation is to convert a mesh to a mesh.
Expose this as 'Visual Geometry to Mesh' in the 'Apply' menu,
since this is where users might expect to see it.
Use expanded names for bmesh primitive operations
(urmv jvke semv jfke).
Use 'bmesh_kernel_' prefix,
these functions aren't intended for wide use so favor readability.
Remove BM_face_vert_separate,
it wasn't used and only skipped the step of finding the correct loop of the face.
It might be useful to keep the search string stored in some cases, but
in most it's not useful, just confusing. Especially if the string is taken
from a menu showing a different enum.
When the loop region passed in had no loops to edge-split from,
it was assumed nothing needed to be done.
This ignored the case where loops share a vertex
without any shared edges.
Now BM_face_loop_separate_multi behaves like BM_face_loop_separate.
Fixed error where faces remained connected by verts in BM_mesh_separate_faces.
This commit adds new features to the breakdowner, giving animators more
control over what gets interpolated by the breakdowner. Specifically:
"Just as G R S let you move rotate scale, and then X Y Z let you do that
in one desired axis, when using the Breakdower it would be great to be
able to add GRS and XYZ to constrain what transform / axis is being
breakdowned."
As requested here:
https://rightclickselect.com/p/animation/csbbbc/breakdowner-constrain-transform-and-axis
Notes:
* In addition to G/R/S, there's also B (Bendy Bone settings) and C (custom properties)
* Pressing G/R/S/B/C or X/Y/Z again will turn these constraints off again
- Connectivity length was overwritten by the distance to the closest selected element.
- Vertices used the 'island' center of the closest vertex,
even if it wasn't connected.
Now optionally keep track of the original index used as the closest
connected distance.
To support this, optional support for islands of 1 vertex had to be added.
The changes introduced in rB3e628eefa9f55fac7b0faaec4fd4392c2de6b20e
made the non-subframe frame change behaviour less intuitive, by always
truncating downwards instead of rounding to the nearest frame.
This made the UI a lot less forgiving of pointing precision errors
(for example, as a result of hand shake, or using a tablet on a high-res screen).
This commit restores the old behaviour in this case only (subframe inspection
isn't affected by these changes).
Selection loop would draw the selection ignoring xray.
Now draw in a separate pass after clearing the depth buffer,
as with regular drawing.
Also disable depth sorting,
caller can sort the hit-list by depth if needed.
Single program generally compiles kernels faster (2-3 times), loads faster,
takes less drive space (2-3 times), and reduces the number of cached kernels.
Reduces memory allocation for split kernel.
This allows for faster rendering due to bigger global size,
specially when GPU memory is limited.
Performance results:

R9 290 total render time
                        Before   After    Change
BMW                     4:37     4:34     -1.1 %
Classroom               14:43    14:30    -1.5 %
Fishy Cat               11:20    11:04    -2.4 %
Koro                    12:11    12:04    -1.0 %
Pabellon Barcelona      22:01    20:44    -5.8 %
Pabellon Barcelona(*)   15:32    15:09    -2.5 %

(*) without glossy connected to volume
Decoupled ray marching is not supported yet.
Transparent shadows are always enabled for volume rendering.
Changes in kernel/bvh and kernel/geom are from Sergey.
This simplifies the code significantly, and prepares it for the
record-all transparent shadow function in the split kernel.
Intended to replace legacy GL_SELECT, without the limitations of
sample queries which can't access depth information.
This commit adds VIEW3D_SELECT_PICK_NEAREST and VIEW3D_SELECT_PICK_ALL
which access the depth buffers to detect what's under the pointer,
so initial selection is always the closest item.
The performance of this method depends a lot on the OpenGL
implementation's glReadPixels.
Since reading depth can be slow, buffers are cached for object picking
so selecting re-uses depth data, performing 1 draw instead of 3
(for 24, 18, 10 px regions, picking with many items under the pointer).
Occlusion queries draw twice when picking nearest,
so worst case 6x draw calls per selection.
Even with these improvements, occlusion queries are faster on AMD hardware.
Depth selection is disabled by default, toggle option under select method.
May enable by default if this works well on different hardware.
Reviewed as D2543
The issue was caused by a sometimes negative color returned by the filter node.
Seems to be caused by precision issues. Don't see any reason why we would want
negative colors in the output. They only cause issues later on.
By calculating the size of the state buffer in the kernel rather than the host
less code is needed and the size actually reflects the requested features.
Will also be a little faster in some cases because of larger global work size.
Because the split kernel can render multiple samples in parallel it is
necessary to have everything initialized before rendering of any samples
begins. The code that normally handles initialization of
`rng_state` (`kernel_path_trace_setup()`) only does so for the first sample,
which was causing artifacts in the split kernel due to uninitialized
`rng_state` for some samples.
Note that because the split kernel can render samples in parallel this
means that the split kernel is incompatible with the LCG.
This was only needed for the previous implementation of parallel samples. As
we don't have that any more it can be removed.
The real reason for removal, though, is this: `per_sample_output_buffers` was being
calculated too small and artifacts resulted. The tile buffer is already
the correct size and calculating the size for `per_sample_output_buffers`
is a bit difficult with the current layout of the code. As
`per_sample_output_buffers` was only needed for `sum_all_radiance`,
removing that kernel and writing output to the tile buffer directly
fixes the artifacts.
This is to help debug and track memory usage for generic buffers. We
have similar for textures already since those require a name, but for
buffers the name is only for debugging purposes.
Simple workaround for some issues we've been having with AMD drivers hanging
and rendering systems unresponsive. Unfortunately this makes things a bit
slower, but it's better than having to do hard reboots. Will be removed when
drivers have been fixed.
Define CYCLES_DISABLE_DRIVER_WORKAROUNDS to disable for testing purposes.
This does a few things at once:
- Refactors host side split kernel logic into a new device
agnostic class `DeviceSplitKernel`.
- Removes tile splitting, a new work pool implementation takes its place and
allows as many threads as will fit in memory regardless of tile size, which
can give performance gains.
- Refactors split state buffers into one buffer, as well as reduces the
number of arguments passed to kernels. Means there's less code to deal
with overall.
- Moves kernel logic out of OpenCL kernel files so they can later be used by
other device types.
- Replaced OpenCL specific APIs with new generic versions
- Tiles can now be seen updating during rendering
Suspended pools allow pushing a huge amount of initial tasks
without any threading synchronization, and hence without overhead.
This gives ~50% speedup of the cached rigid body sim with the file from
T50027 and seems to have no negative effect in other scenes
here.
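A minimal sketch of how such a suspended pool might be used, assuming the BLI_task API of that era (treat exact names and signatures as an assumption):

```
#include "BLI_task.h"

/* Worker callback: processes one queued chunk of work. */
static void process_chunk_func(TaskPool *pool, void *taskdata, int threadid)
{
	(void)pool; (void)taskdata; (void)threadid;
	/* ... crunch one chunk of the cached simulation ... */
}

static void process_all_chunks(int num_chunks)
{
	TaskScheduler *scheduler = BLI_task_scheduler_get();
	/* Worker threads stay asleep while tasks are pushed, so the pushes
	 * need no synchronization with running workers. */
	TaskPool *pool = BLI_task_pool_create_suspended(scheduler, NULL);
	for (int i = 0; i < num_chunks; i++) {
		BLI_task_pool_push(pool, process_chunk_func, NULL, false, TASK_PRIORITY_LOW);
	}
	/* Workers are only woken here, once the whole queue is in place. */
	BLI_task_pool_work_and_wait(pool);
	BLI_task_pool_free(pool);
}
```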
The idea is to allow some amount of tasks to be pushed from a working
thread to its local queue, so we can acquire some work without doing
a whole mutex lock.
This should allow us to remove some hacks from the depsgraph which were
added there to keep threads alive.
This allows us to avoid TLS stored in the pool, which gives us the advantage of
using a pre-allocated tasks pool for pools created from a non-main thread.
Even on systems with slow pthread TLS it should not be a problem because
we access it once at pool construction time. If we want to use this more
often (for example, to get rid of push_from_thread) we'll have to do a much
more accurate benchmark.
Basically move all thread-specific data (currently it's only task
memory pool) from a dedicated array of taskScheduler to TaskThread.
This way we can add more thread-specific data in the future with
less of a hassle.
This feature was adding extra complexity to task scheduling,
requiring yet more variables to be modified in an atomic
manner, which resulted in the following issues:
- More complex code to maintain, which increases risks of
something going wrong when we modify the code.
- Extra barriers and/or locks during task scheduling, which
causes extra threading overhead.
- Unable to use some other implementation (such as TBB) even for
the comparison tests.
Notes about other changes.
There are two places where we really had to use that limit.
One of them is the single threaded dependency graph. This will
now construct a single-threaded scheduler at evaluation time.
This shouldn't be a problem because it only happens when using
debugging command line arguments, and the code simply doesn't
run in regular Blender operation.
The code seems a bit duplicated here across the old and new
depsgraph, but I think it's OK since the old depsgraph is already
gone in the 2.8 branch and I don't see where else we might want
to use such a single-threaded scheduler.
When/if we'll want to do so, we can move it to a centralized
single-threaded scheduler in threads.c.
OpenGL render was a bit more tricky to port, but basically we
are using condition variables to wait for the background thread to
do all the work.
This slightly changes SDef behavior, by now respecting object transforms
at bind time, thus not requiring the objects to be aligned in their
respective local spaces, but instead using world space.
When rendering multi-view in side-by-side or top-bottom mode, we squash
the UI to half of its size and draw it twice on screen. That means the
cursor coordinates used for UI interaction don't match what's visible on
screen.
This commit is a little event system hack (tm) to fix this. It has some
small glitches with cursor grabbing, but nothing too bad.
We'll also use it for viewport HMD support.
D1350, thanks for the feedback @dfelinto!
It was only possible to separate all geometry from an intersection, or none.
Made this into an enum with a 3rd option, 'Cut' (now the default),
which keeps each side of the intersection separate
without splitting faces in half.
There was a bug in the intended code behaviour to always seek with a
pitch of 1.0 regardless of pitch/pitch animation/doppler effects.
Check the bug report for a more detailed explanation of problems
concerning pitch and seeking.
Comments said that the function was supposed to 'stop worker threads', but
it absolutely did not do anything like that; it was merely wiping out the TODO
queue of tasks from the given pool (kind of a subset of what
`BLI_task_pool_cancel()` does).
Misleading, and currently useless; we can always add it back if we need
it some day, but for now we try to simplify that area.
Freeing the pool was calling `BLI_task_pool_stop()`, which only clears the
pool's tasks that are in the TODO queue, without ensuring no more tasks
from that pool are being processed in worker threads.
This could lead to (random and seldom) use-after-free crashes.
Now use instead `BLI_task_pool_cancel()`, which does wait for all tasks
being processed to finish before returning.
The paint slot name was not the same as what is displayed on the texture properties panel.
Instead, the slot type (e.g. "Diffuse Color") was used as the name.
Patch by Suchaaver (@minifigmaster125) with minor changes from @mont29.
Reviewers: mont29, sergey
Maniphest Tasks: T50704
Differential Revision: https://developer.blender.org/D2523
Can't say enough how much I hate those proxies... their duality (sharing
some aspects of both direct *and* indirect users) is a nightmare to handle. :(
Issue was that the VIEW_OT_manipulator operator calls the transform
operators and passes them its own operator properties. That means the
transform operator got properties passed that it doesn't have.
The custom poll function for surfacedeform_bind seems to have caused
issues when calling it from Python. Fixed by using the generic modifier
poll function, and setting the button to be active or not in the
Python UI code instead. (there might be a better way, but for now this
works fine)
The issue was introduced by 4df75e5 and it seems we just need to explicitly
add a new keymap item now.
There is still some difference from the old behavior, which is that planar transform
is using precision movement since e138cde, and here I don't see a nice solution
currently: the change was requested here in the studio and it's just a
conflict in picking the shift key for something which is not supposed to be
accurate.
At least now it's possible to invoke the planar constraint and simply release
shift.
Not really happy with the per-pool threads limit, need to find a better
approach to that. But at least it's possible to get rid of half
of the nastiness here by removing the getter which was only used in
an assert statement.
That piece of code was already well-tested, and this code becomes
obsolete in the new depsgraph and no longer exists in the Blender
2.8 branch.
This was only used for progress report, and it's wrong because:
- The pool might in theory be re-used by different tasks
- We should not make any decision based on scheduling stats
The proper way is to take care of progress in the task itself.
Do a "full" update on leaving sculpt mode, so we are sure the scene will be brought
to a consistent state.
Ideally we'll only do that when there are objects which depend on the geometry
without re-calculating their own geometry, but that's a bit tricky currently.
The idea is to make it simpler to remove noise from scenes when some prop uses
a Sharp Glossy closure and causes noise in certain cases. Previously Sharp Glossy
was not affected by Filter Glossy at all, which was quite confusing.
Here is a file which demonstrates the issue: {F417797}
After applying the patch all the noise from the scene is gone.
This change also solves fireflies reported in T50700.
Reviewers: brecht, lukasstockner97
Differential Revision: https://developer.blender.org/D2416
Can't see any reason to call AUD exit early in WM_exit; that's a
low-level module that has no dependency on anything else in Blender, but
is a dependency of some other parts of Blender, so it should rather be
exited late in the process!
This adds an option to force fields of type "Force", which enables the
simulation of gravitational behavior (dist^-2 falloff).
Patch by @AndreasE
Reviewers: #physics, LucaRood, mont29
Reviewed By: #physics, LucaRood, mont29
Tags: #physics
Differential Revision: https://developer.blender.org/D2389
- for rc/release: /api/2.79c/, zip file named blender_python_reference_2.79c_release.zip
- for dev: /api/master/, zip file named blender_python_reference_2_79_4.zip
Please never, ever use the same DNA var for two different things. Even worse
if they do not have the same type and ranges!
This only guarantees issues (as described in the report, but also when
animating both RNA props using the same DNA var... yuck).
And we were not even saving any byte in DNA; we could reuse some padding
there to store the two new needed vars (yes, two, since we cannot re-use
the existing one if we want to keep backward *and* forward compatibility).
Was actually harmless and not crashing, but I'd say more or less only
by luck: the NULL-check for region data would only evaluate to true for
the correct 3D View region. However, if we were to add region data to a
different region type in the future, this would lead to undefined behavior
if executed in the wrong region.
So... Curve+shapekey was even more broken than it looked, this report was
actually a nice crasher (immediate crash in an ASAN build when trying to
edit a curve shapekey with some viewport rendering enabled).
There were actually two different issues here.
I) The less critical: rB6f1493f68fe was not fully fixing issues from
T50614. More specifically, if you updated obdata from editnurb
*without* freeing editnurb afterwards, you had a 'restored' (to
original curve) editnurb, without the edited shapekey modifications
anymore. This was fixed by tweaking again `calc_shapeKeys()` behavior in
`ED_curve_editnurb_load()`.
II) The crasher: in `ED_curve_editnurb_make()`, the call to
`init_editNurb_keyIndex()` was directly storing pointers of obdata
nurbs. Since those get freed every time `ED_curve_editnurb_load()` is
executed, it easily ended up being pointers to freed memory. This was
fixed by copying those data, which implied more complex handling code
for editnurbs->keyindex, and some reshuffling of a few functions to
avoid duplicating things between editor's editcurve.c and BKE's curve.c
Note that the separation of functions between editors and the BKE area for
curves could use a serious update; it's currently messy to say the least.
Then again, that area has been due for a rework for a long time now... :/
Finally, aligned 'for_render' curve evaluation to mesh one - now
editing a shapekey will show in rendered viewports, if it does have some
weight (exactly as with shapekeys of meshes).
This was fixed ages ago for the interface case but not for the
command line. The thing here is that currently external engines
are relying on the reports system to indicate that an error happened,
so suppressing reports storage in background mode prevented the
render pipeline from detecting errors.
This is all weak and I don't like it, but it's better than
delivering black frames from the farm.
New logic of split_faces was leaving mesh in a proper state
from Blender's point of view, but Cycles wanted loop normals
to be "flushed" to vertex normals.
Now we do such a flush from Cycles side again, so we don't
leave bad meshes behind.
Thanks Bastien for assistance here!
Finding which loop should share its vertex with which others is not easy
with regular Mesh data (mostly due to lack of advanced topology info, as
opposed with BMesh case).
Custom loop normals computing already does that - and can return 'loop
normal spaces', which among other things contain definitions of 'smooth
fans' of loops around vertices.
Using those makes it easy to find vertices (and then edges) that need
splitting.
This commit also adds support of non-autosmooth meshes, where we want to
split out flat faces from smooth ones.
The issue seems to be caused by vertex normal being re-calculated
to something else than loop normal, which also caused wrong loop
normals after re-calculation.
For now issue is solved by preserving CD_NORMAL for loops after
split_faces() is finished, so render engine can access original
proper value.
Logic of handling shapekeys when entering and leaving edit mode for
curves was... utterly broken.
Was leaving actual curve data with edited shapekey applied to it.
The release of these arrays should be at the programmer's discretion, since these arrays can continue to be used.
Only the expanded functions `bvhtree_from_mesh_edges_ex` and `bvhtree_from_mesh_looptri_ex` are currently being used in Blender (in mesh_remap.c), and from what I could analyze, these changes can prevent a crash.
A group of object groups can be formed by means of the dupli_group option in
the Object properties window. The present revision extends the Selection by
Group option in the Freestyle Line Set so as to support not only flat object
groups but also nested groups.
We need to first split all vertices before we can reliably
check whether an edge can be reused or not.
There is still a known issue happening with an edge-fan mesh
with some faces being on the same plane.
This commit adds a way to debug Cycles motion blur issues which
are usually happening due to something crazy happening in between
of frames. Biggest trouble was that artists had no clue about
what's happening in subframes before they render. This is at
least inefficient workflow when dealing with motion blur shots
with complex animation.
Now there is an option in the Time Line Editor which can be found
in View -> Show Subframe. This option will expose the current frame
with its subframe in the time line editor header, and it'll allow
scrubbing with subframe precision in the time line editor.
Please note that none of the tools in Blender are aware of
subframes, so they'll likely be using the current integer frame still.
This is something we don't consider a bug for now; the whole
purpose for now is to give a tool for investigation. Eventually
we'll likely tweak all tools to be aware of subframes.
Hopefully now we can finish the movie here in the studio.
Now new edges will be properly created between original and
new split vertices.
Now topology is correct, but shading is still not quite right in
some special cases.
Doesn't currently change anything, but will be needed for some future
work here.
It uses existing padding in kernel BVH structure, so there is
nothing changed memory-wise.
The issue here was mainly coming from minimal pixel width feature
which is quite commonly enabled in production shots.
This feature will use some probabilistic heuristic in the curve
intersection function to check whether we need to return intersection
or not. This probability is calculated for every intersection check.
Now, when we use multiple BVH nodes for curve primitives we increase
probability of that primitive to be considered a good intersection
for us. This is similar to increasing minimal width of curve.
What is worse here is that the change in the intersection probability
fully depends on the exact layout of the BVH, meaning the probability might
change differently depending on the view angle, the way the builder
binned the primitives, and such. This makes it impossible to do a
simple check like dividing the probability by the number of BVH steps.
Other solution might have been to split BVH into fully independent
trees, but that will increase memory usage of all the static
objects in the scenes, which is also not something desirable.
For now, the simplest but most robust approach is used: store the BVH primitive
time and test it in the curve intersection functions. This solves the
regression, but has two downsides:
- Uses more memory.
  This isn't surprising, and ANY solution to this problem will
  use more memory.
  What we still have to do is to avoid this memory increase for
  cases when we don't use BVH motion steps.
- Reduces the number of maximum available textures on pre-Kepler cards.
  There is not much we can do here; hardware gets old but we need
  to move forward on more modern hardware.
The change was delivering broken topology for certain cases.
The assumption that a new edge only connects new vertices was
wrong.
Reverting to a commit which was giving correct render results
but was using more memory.
This reverts commit af1e48e8ab.
The bug T46099 no longer applies since the addition of `dist_squared_to_projected_aabb_simple`.
Comments have also been added relating to an occlusion bug with the ruler. I'll investigate this.
When importing an Alembic file with grouped transforms, it would badly name the transforms, taking the name of the parent instead of its own.
Patch by @maxime.robinot
Differential Revision: https://developer.blender.org/D2507
Quite a simple fix for now, which only deals with this case. Maybe we want to do
some "clipping" at image load time so regular textures wouldn't give NaN as
well.
This allows us to use faster math and still have reliable
isnan/isfinite tests.
Only do it for the host side, kernels stay unchanged.
Thanks Lukas Stockner for the tip!
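The usual trick here is to test the IEEE-754 bit pattern directly, since under fast math the compiler may fold away (x != x). A minimal sketch of the technique (not the exact Cycles code):

```
#include <stdint.h>
#include <string.h>

/* Fast-math-safe NaN test: a float is NaN when all exponent bits are set
 * and the mantissa is non-zero, i.e. its absolute bit pattern compares
 * greater than that of infinity (0x7f800000). */
static inline int isnan_safe(float f)
{
	uint32_t bits;
	memcpy(&bits, &f, sizeof(bits)); /* well-defined type pun */
	return (bits & 0x7fffffffu) > 0x7f800000u;
}

static inline int isfinite_safe(float f)
{
	uint32_t bits;
	memcpy(&bits, &f, sizeof(bits));
	return (bits & 0x7fffffffu) < 0x7f800000u;
}
```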
The issue was caused by usage of non-initialized image user, which
could have different settings, causing some random image being loaded
or not loaded at all.
This caused non-deterministic behavior of Cycles image loading because
it was querying image information from several places.
This fixes crash reported in T50616, but it's not a complete fix
because preview rendering in material is wrong (same wrong as in
2.78a release).
We now assert that we know the file version of libraries (needed for
do_version after the linking step), so for missing libraries, set dummy
numbers (using the version of the main .blend file actually).
Works similar to regular Cycles tests, just does OpenGL render to
get output image.
Seems to work fine with the only funny effect: Blender window will
pop up for each of the tests. This is a current limitation of our
OpenGL context. Might be changed in the future.
Seems CUDA failed to de-duplicate the array across multiple inlined
versions of the shadow_blocked(). Helped it a bit with that now.
Gives about 100MB memory improvement on scenes after the previous
commit and brings the memory "regression" down to only 100MB compared to
the master branch now.
This commit enables record-all behavior of transparent shadows
rays.
Render time differences go as follows:

GTX 1080 render time
BMW                 -0.5%
Fishy Cat           -0.0%
Pabellon Barcelona  -11.6%
Classroom           +1.2%
Koro                -58.6%
Kernel will now use some extra VRAM memory to store the intersection
array (200MB on my configuration). This we can optimize out with some
further commits.
The idea is to record all possible transparent intersections when
shooting transparent ray on GPU (similar to what we were doing on
CPU already).
This avoids the need of doing whole ray-to-scene intersection queries
for each intersection, and speeds up cases like transparent hair a lot,
at the cost of extra memory.
This commit is the base groundwork for now, and this feature is kept
disabled until some further tweaks.
Now we break the traversal cycle and then perform volume attenuation
and check with zero throughput. Not sure it makes any measurable sense
at this moment, but in the future it might help de-duplicating some
extra logic here.
Removed unnecessary call to DM_update_tessface_data(). This call is
already performed by DM_ensure_tessface(dm). The call being performed
twice caused a failing BLI_assert().
Reviewed by: Kévin Dietrich
With the new names the arguments (yup, zup) are in the same order as
they appear in the function name. The old names used copy_src_dst(dst,
src), which I found very confusing. Furthermore, now it is clear from
where to where the copy is made.
This makes the function names a little bit longer, though. If that is
a real issue, we can just name them zup_from_yup(zup, yup).
Reviewed by: Kévin Dietrich
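For illustration, a conversion following that naming scheme might look like this (assumed bodies based on Alembic being y-up and Blender z-up; not necessarily the exact code):

```
/* The destination axis order comes first in the name, matching the
 * argument order: zup_from_yup(zup, yup). */
static void copy_zup_from_yup(float zup[3], const float yup[3])
{
	zup[0] = yup[0];  /* X stays X. */
	zup[1] = -yup[2]; /* y-up's Z becomes -Y. */
	zup[2] = yup[1];  /* y-up's Y becomes Z. */
}

static void copy_yup_from_zup(float yup[3], const float zup[3])
{
	yup[0] = zup[0];
	yup[1] = zup[2];
	yup[2] = -zup[1];
}
```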
Speedup is mainly gained by multi-threading. Gives about 3x
fps gain on an edit shot file.
There is still some room for improvements, will happen in one
of the upcoming commits.
Derived mesh for particles did not include tessellated faces when it
was expected to. Now an explicit function has been added to copy CDDM with tess
faces, without the need to re-tessellate the result.
It is not necessary to add the MOTO library dependency when we use
WITH_IK_SOLVER (now it uses Eigen) or WITH_MOD_BOOLEAN (it was
used by the bsp intern library some time ago, but it is not present in the
code anymore).
Reviewers: mont29, sergey
Subscribers: mont29, sergey
Differential Revision: https://developer.blender.org/D2477
No reason to not make this private to this file, and it gave a conflict
when using bpy as a module and loading it in a GLib application (which
also has a g_atexit var).
The title says it all actually. Use BLI task to loop over vertices
and distort their locations. Gives 2x FPS increase in a file with
just time-dependent displace modifier on my desktop.
This version will give fewer spin locks and is now well-tested by render engines.
This should reduce the amount of threading overhead when having multiple objects
with the displace modifier enabled.
In the future this will also help us threading the modifier.
There are more modifiers which could benefit from this, but let's first
investigate the new behavior with one of them.
We (the Microsoft C++ team) use the Blender project as part of our "Real world code" tests.
I noticed a place in WIN32 specific code (dvpapi.cpp:85) where a string literal is losing
its const-ness when being passed to BLI_dynlib_open(). This is not permitted when using the
/permissive- conformance compiler switch (see our blog
https://blogs.msdn.microsoft.com/vcblog/2016/11/16/permissive-switch/)
My suggested fix is to add const and propagate it where needed. Another possible fix would be
to explicitly cast away the const.
Reviewers: mont29, sergey, LazyDodo
Subscribers: Blendify, sergey, mont29, LazyDodo
Tags: #platform:_windows
Differential Revision: https://developer.blender.org/D2495
Instead of referencing the vertex first and testing the bitmap afterwards, test the bitmap first and reference the vertex after.
In a mesh with 31146 vertices and the entire bitmap disabled, the loop time is 243% faster.
With the entire bitmap enabled, the time becomes 463473% faster!!!
One possible reason for this huge difference in performance is that maybe the compiler is not inlining the function "BM_vert_at_index" (I don't know if buildbot does this, but it's good to investigate).
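Schematically, the reordering looks like this ('use_vertex' is an illustrative stand-in for the real per-vertex work, not actual Blender code):

```
#include "BLI_bitmap.h"
#include "bmesh.h"

/* Illustrative stand-in for whatever the real loop does per vertex. */
static void use_vertex(BMVert *v) { (void)v; }

/* Before: dereference the vertex first, test the bitmap afterwards. */
static void loop_before(BMesh *bm, const BLI_bitmap *vert_mask, int totvert)
{
	for (int i = 0; i < totvert; i++) {
		BMVert *v = BM_vert_at_index(bm, i); /* paid for every vertex */
		if (!BLI_BITMAP_TEST(vert_mask, i)) {
			continue;
		}
		use_vertex(v);
	}
}

/* After: test the cheap bitmap first, dereference only when needed. */
static void loop_after(BMesh *bm, const BLI_bitmap *vert_mask, int totvert)
{
	for (int i = 0; i < totvert; i++) {
		if (BLI_BITMAP_TEST(vert_mask, i)) {
			use_vertex(BM_vert_at_index(bm, i));
		}
	}
}
```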
Looks like `object_map` and `mem_arena` may be NULL sometimes...
Also, cleaned up function pointer declarations of Nearest2dUserData;
those were warning out in gcc. Please, *always* use typedef-defined
prototypes for function pointers, it is so much cleaner and clearer
that way. And easy to convert from compatible functions too.
BKE_lamp_free was somehow missing the refactor of datablocks handling
(which, among other things, completely separated ID refcounting and
linking management from ID freeing itself).
Either forgot during development, or lost during merge...
The code looks for the closest element by comparing centers. In the case of islands, the center for each vertex is the center of its island.
The solution here is to skip the island search when the operation is a translation.
Pretty straightforward actually, just do not bother about the obdata part
of vgroups in that case, only copy the object part of it.
And let's curse once again that stuff spread across several types of
data-blocks...
The order was wrong from the semantic point of view, caused
by some legacy workarounds in Libmv. Didn't realize it was
not how things were expected to be used.
Issue was indeed in join operation, mesh in which we join all others
could be re-added to final data after others, leading to undesired
re-ordering of CD layers, and existing vertices etc. being shifted away
from their original indices, etc.
All kind of more or less bad and undesired changes, fixed by always
re-inserting destination mesh first.
Also cleaned up a bit that code, it was doing some rather
non-recommended things (like allocating zero-sized mem, doing its own
cooking to remove a data-block from main, etc.).
Tricky issue caused by CDDM_copy() copying the MFACE array but not MTFACE, which
confused logic later on.
Now we don't copy ANY tessellation unless it is requested to.
Thanks Bastien for help and review!
The 'page' prop of scroll up/down operators would get stuck once set by
pageup/down keys... Now only take this prop into account if explicitly
set, not when its value is inherited from a previous run.
Strangely this change does not affect the performance very much.
Suzanne subdivided 6x (ortho view):
Before: 0.00013983
After:  0.00013920
But it makes it easier to read the code.
When the function that tests snap on multiple elements starts from the face and ends at the vertex, the transition between elements becomes much smoother.
Better to have a clear way to tell whether a flag is a parameter for
BKE_library_foreach_ID_link(), a parameter for its callback function, or
a return value from this callback function.
Taking advantage of the area, the depth is decreased by 0.01 BU on each loop to give priority to elements in the order: Vertex > Edge > Face. This increases the threshold and improves snapping to multiple elements.
The previous solution took arbitrary values to determine if the mouse was near the Bound Box or not (it simply scaled the Bound Box).
Now the same function that detects the distance from the BVHTree nodes to the mouse is used with the Bound Box.
This revision extends the functionality of the "Fill Range by Selection" button in
the "Distance from Camera/Object" modifiers so that only selected mesh vertices
in the edit mode are taken into account (instead of considering all vertices when
in the object mode) to compute the min & max distances from the reference.
This will give users much finer control on the range values.
Use new Main->relations ID usages mapping in BKE_library_make_local().
This allows a noticeable simplification in code, and can be up to twice
as quick as the previous code (Make Local: All from 2 to 1 minute e.g. in a
huge production file with thousands of linked data-blocks).
Note that the new code has been successfully tested with several complex cases
(production files from Agent327), as well as some testcases from recent
bug reports related to that function. But as always, nothing beats real
usage by real users, so please check this before we release 2.79. ;)
Main areas that would be affected: Make Local operations (L shortcut in
3DView), and append from libraries.
Use Main->relations in BKE_library_foreach_ID_link(), when possible
(i.e. when IDWALK_READONLY is set), and if the data is available of course.
This is a quite minor optimization, no sensible improvements are expected,
but it does not hurt either to avoid potentially tens of loops over e.g.
object constraints and modifiers, or heaps of drivers...
The new MainIDRelations stores two mappings, one from ID users to ID
used, the other vice-versa.
That data is assumed to be short-lived runtime data; code creating it is
responsible for clearing it asap. It will be very useful in places where we
handle relations between IDs for a lot of them at once.
Note: This commit is not fully functional, that is, the infamous, ugly,
PoS non-ID nodetrees will not be handled correctly when building relations.
The fix needed here is a bit noisy, so it will be done in its own commit next.
This provides a slight improvement in performance in specific cases, such as when the observer is inside a high-poly object and executes snap to edge or vertex.
Distance calculation performed by the "Fill Range by Selection" button of the
"Distance from Camera" color, alpha and thickness modifiers was incorrect,
limiting the usefulness of the functionality.
The problem was that the distance between the camera and individual vertex
locations was calculated in the world space, which was inconsistent with the
distance calculation done by the modifiers in the camera space.
The new `isect_ray_aabb_v3_simple` function replaces `BKE_boundbox_ray_hit_check` and can be used on the BVHTree root (first AABB), so it is much more efficient.
In order to simplify the reading of these functions, the parameters `snap_to`, `mval`, `ray_start`, `ray_dir`, `view_proj` and `depth_range` are now stored in the struct `SnapData`.
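The slab method behind such a simple ray/AABB test looks roughly like this (a sketch of the technique, not the exact Blender implementation):

```
#include <stdbool.h>

/* Slab-method ray/AABB intersection. 'ray_inv_dir' holds the per-axis
 * reciprocal of the ray direction, precomputed once per ray; zero
 * direction components are ignored here for brevity. */
static bool isect_ray_aabb_simple(const float origin[3],
                                  const float ray_inv_dir[3],
                                  const float bb_min[3],
                                  const float bb_max[3])
{
	float tmin = -1.0e30f, tmax = 1.0e30f;
	for (int i = 0; i < 3; i++) {
		float t1 = (bb_min[i] - origin[i]) * ray_inv_dir[i];
		float t2 = (bb_max[i] - origin[i]) * ray_inv_dir[i];
		if (t1 > t2) {
			const float tmp = t1;
			t1 = t2;
			t2 = tmp;
		}
		tmin = (t1 > tmin) ? t1 : tmin;
		tmax = (t2 < tmax) ? t2 : tmax;
	}
	/* Hit if the slab intervals overlap in front of the ray origin. */
	return (tmax >= tmin) && (tmax >= 0.0f);
}
```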
Checking only whether mverts is the same as the base mesh's is not enough in
all cases; some modifiers (deform ones) can generate only new mvert
data, while keeping others from the original mesh.
Now check both mvert and medge; hopefully this will be enough to catch
all problematic cases this time.
Thanks @gaia for finding that problem. :)
Although the "BLI_bvhtree_find_nearest_to_ray" function is more practical than the generic "BLI_bvhtree_walk_dfs", it does not work for snapping in perspective view. This makes it necessary to add "ifs" and functions that make the code difficult to understand.
patch: D2474
This is a speed-up option which is mainly useful for the viewport. Gives a nice 2x speedup in
the barbershop scene when replacing GI with AO after the 2nd bounce, without losing
too much detail.
Reviewers: brecht
Subscribers: eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D2383
We are not bumping file version, but we cannot have the doversion code running twice.
In this particular case it was crashing files, since we were setting node->storage to NULL, and later on accessing it.
The idea was to link something to a parent, but the point is:
we must not pass the owner deep down and then have any parent-type-related
logic implemented in the "children".
This is much more flexible solution which will allow doing some
more procedural features.
Reviewers: brecht, dfelinto, mont29
Reviewed By: mont29
Subscribers: Severin
Differential Revision: https://developer.blender.org/D2403
The freestyle data was never freed when removing a renderlayer.
```
blender -b --factory-startup --debug-memory --python-expr "import bpy;bpy.ops.scene.render_layer_add();bpy.context.scene.render.layers.active_index=0;bpy.ops.scene.render_layer_remove()"
```
Currently the tests don't run on Windows for the following reasons:
1) render_graph_finalize has a linking issue due to missing a bunch of libraries (not sure why this is not an issue for Linux).
2) This one is more interesting: in test/python/cmakelists.txt ${TEST_BLENDER_EXE_BARE} and ${TEST_BLENDER_EXE} are flat out wrong, but for some reason this doesn't matter for most tests, because ctest will actually go out and look for the executable and fix the path for you, *BUT* only for the command; if you use them in any of the parameters it'll happily pass on the wrong path.
3) On Linux you can just run a .py file; Windows is not as awesome and needs to be told to run it with python.
4) Had to use the NAME/COMMAND long form of add_test, otherwise $<TARGET_FILE:blender> doesn't get expanded. Why? Beats me.
5) Missing idiff.exe for msvc2015/x64 in the libs folder.
This patch addresses 1-4, but given I have no working Linux build environment, I'm unsure if it'll break anything there.
5 has been fixed in rBL61751
Reviewers: juicyfruit, brecht, sergey
Reviewed By: sergey
Subscribers: Blendify
Tags: #cycles, #automated_testing
Differential Revision: https://developer.blender.org/D2367
Blender's baking system currently doesn't support the topology used by
adaptive subdivision, and primitive ids will be wrong or out of range,
leading to crashes. Updating the baking system to support other
topologies would be a bit involved, so for now we simply disable
subdivision while baking to avoid crashes.