*Changed categories of some keywords
*reordered some longer keywords that didn't appear
*Activated another color (reserved builtins) by Leonid
*added some missing HGPOV and UberPOV keywords
Patch by Maurice Raybaud (@mauriceraybaud). Thanks to Leonid for additions, feedback and Linux testing.
Related diffs: D2754 and D2755.
While not a regression, this is a new feature and it would be nice to have it
backported to the final 2.79.
Bug introduced in recent ID copying refactor.
This commit basically sanitizes seq strip copying behavior by making the
destination scene pointer mandatory (and the source one const).
Nothing then prevents you from using the same pointer as both source and
destination!
This is a bit confusing, especially when one mixes OpenCL code, where ulong
equals uint64_t, with CPU-side code, where the naming suggests ulong is
something else.
This commit makes us use an explicit name that is common to all platforms.
When a mesh changes its number of vertices during the animation,
Blender rebuilds the DerivedMesh, after which the materials weren't
applied any more (causing a fallback to the first material slot).
Commit b6d7cdd3ce changed how the mesh data
is deformed, which wasn't yet taken into account in this unit test.
Instead of directly reading the mesh vertices (which aren't animated any
more), we convert the modified mesh to a new one, and inspect those
vertices instead.
The problem was that some code checks whether device_pointer is null or
not, and the new allocator wasn't setting the pointer to anything, as it
tracks the memory location separately. Setting the pointer to a non-null
value keeps all users of device_pointer happy.
This could make the output really polluted, making it hard to see actual
issues.
It is still possible to have all backtraces printed using the BLENDER_VERBOSE
environment variable.
As part of the fix for T51587, I removed the Depth output for non-Multilayer
images since it seemed weird that PNGs etc. that don't have a Z pass still get
a socket for it.
However, I forgot about non-multilayer EXRs, which are a special case that can
actually have a Z pass.
Therefore, this commit brings back the Depth output for non-multilayer images
just like it was in 2.78.
- Apparently MSVC does not support compound literals
in C++ (at least by the looks of it).
- Not sure how opencl_device_assert was managing to
set a protected property of the Device class.
We don't enable global SSE optimizations in the regular kernel, and we
keep those disabled on 32-bit Linux.
One possible workaround would be to pass arguments by ccl_ref, but
that is quite a bit of code which had better be done accurately.
Steps to reproduce:
- Create shader Image texture -> Diffuse BSDF -> Output. Do NOT select image yet!
- Start viewport render.
- Select image from the ID browser of Image Texture node.
Thing is: with the memory manager we always need to inform the device that memory
was freed.
Come on folks, nobody considered master a C++11-only branch. Such a decision is
to be made officially and will involve changes in quite a few
infrastructure-related areas.
The new code was correctly handling an ID's internal references to itself, but
not references between different 'made real' objects...
Regression, to be backported to 2.79.
It is defined to & for CPU-side compilation, and to nothing for any GPU
platform. The idea here is to use this macro instead of an #ifdef block with a
bunch of duplicated lines, just to make the CPU code efficient.
Eventually we might switch to references on CUDA as well, but that would require
some intensive testing.
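For illustration, a minimal sketch of how such a macro works (the guard define used here is an assumption for the example, not the actual Cycles source):

```cpp
/* Sketch: expands to pass-by-reference on the CPU, pass-by-value on GPUs.
 * Guarding on __KERNEL_GPU__ is an assumption for this example. */
#ifdef __KERNEL_GPU__
#  define ccl_ref
#else
#  define ccl_ref &
#endif

/* One declaration serves all platforms instead of a duplicated #ifdef block: */
void accumulate(const float ccl_ref weight, float ccl_ref sum);
```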
The documentation for the bpy.app.handlers.scene_update_{pre,post}
handlers states that they're called "on updating the scenes data".
However, they're called even when the data hasn't changed. Of course
such handlers are useful, but the documentation should reflect the
current behaviour.
Reviewers: mont29, sergey
Subscribers: Blendify
Maniphest Tasks: T46329
Differential Revision: https://developer.blender.org/D1535
Image textures were being packed into a single buffer for OpenCL, which
limited the amount of memory available for images to the size of one
buffer (usually 4gb on AMD hardware). By packing textures into multiple
buffers that limit is removed, while simultaneously reducing the number
of buffers that need to be passed to each kernel.
Benchmarks were within 2%.
Fixes T51554.
Differential Revision: https://developer.blender.org/D2745
Simply disabled Python tests; they can't be run anyway (since the blender target
is not enabled) and we don't have any player-related tests in that folder.
Note these are intended for platform maintainers, we do not intend to
support users making their own builds with these. For that precompiled
libraries from lib/ should be used.
Implemented by Martijn Berger, Ray Molenkamp and Brecht Van Lommel.
Differential Revision: https://developer.blender.org/D2753
Since all the shadow catchers are already assumed to be in the footage,
the shadows they cast on each other are already in the footage too. So
don't just let shadow catchers skip themselves, but all shadow catchers.
Another justification is that it should not matter if the shadow catcher
is modeled as one object or multiple separate objects, the resulting
render should be the same.
Differential Revision: https://developer.blender.org/D2763
This will allow much finer control over how we copy data-blocks, from
full copy in the Main database, to "lighter" ones (out of Main, inside an
already allocated datablock, etc.).
This commit also transfers a lot of what was previously handled by
per-ID-type custom code to generic ID handling code in BKE_library.
Hopefully this will avoid the inconsistencies and missing bits we had
all over the codebase in the past.
It also adds missing copying handling for a few types, most notably
Scene (which was using fully customized handling previously).
Note that the type of allocation used during copying (regular in Main,
allocated but outside of Main, or not allocated by ID handling code at
all) is stored in the ID, which allows handling them correctly when
freeing. This needs to be taken care of with caution when doing 'weird'
unusual things with ID copying and/or allocation!
As a final note, while rather noisy, this commit will hopefully not
break existing branches too much; the old 'API' has been kept for the most
part, as a wrapper around the new code. Cleaning it up will happen later.
Design task: T51804
Phab Diff: D2714
Shows new, reference and diff renders, with mouse hover to flip between
new and ref for easy comparison. This generates a report.html in
build_dir/tests/cycles, stored along with the new and diff images.
Differential Revision: https://developer.blender.org/D2770
* Remove some unnecessary SSE emulation defines.
* Use full precision float division so we can enable it.
* Add sqrt(), sqr(), fabs(), shuffle variations, mask().
* Optimize reduce_add(), select().
Differential Revision: https://developer.blender.org/D2764
I need to use some macros defined in util_simd.h for float3/float4, to emulate
SSE4 instructions on SSE2. But due to issues with the order of header includes
this was not possible; this commit does some refactoring to make it work.
Differential Revision: https://developer.blender.org/D2764
We already detect this automatically based on shading nodes and per-shader
settings, and the performance of this option is OK now on all devices.
Differential Revision: https://developer.blender.org/D2767
Two main things here:
1. Replace all characters which are unsafe for the #line directive in a single
loop, avoiding multiple iterations and multiple temporary strings.
2. Don't merge tokens char by char; calculate the start and end points and
then copy the whole substring at once.
This gives about a 15% speedup of source processing time. At this point
(with all previous commits from today) we've shrunk the compiled
sources size from 108 MB down to ~5.5 MB and lowered the processing time
from 4.5 sec down to 0.047 sec on my laptop running Linux (this was a
constant time which Blender would always spend the first time it loads the
kernel, even with a compiled clbin).
Add a safe version of normalize, since all uses of normalize
did zero-length checks; move this into a function.
Also avoid an unnecessary conversion.
Gives a minor speedup here (approx. 3-5%).
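A minimal sketch of the helper (the actual Cycles signature and float3 type differ slightly):

```cpp
#include <cmath>

struct float3 { float x, y, z; };

inline float len(const float3 a)
{
  return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
}

/* The zero-length check every caller used to do by hand, now in one place. */
inline float3 safe_normalize(const float3 a)
{
  const float t = len(a);
  return (t != 0.0f) ? float3{a.x / t, a.y / t, a.z / t} : a;
}
```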
Basically gather lines as-is during traversal, avoiding allocating
memory for all the lines in headers.
Brings an additional performance improvement of about 20%.
The idea here is that it is possible to mark certain include statements
as "precompiled" which means all subsequent includes of that file will
be replaced with an empty string.
This is a way to deal with the tricky include pattern in the single-program
OpenCL split kernel, which was including a bunch of headers about
10 times.
This brings preprocessing time from ~1sec to ~0.1sec on my laptop.
The idea is to re-use files which were already processed. Gives about 4x speedup
of processing time (~4.5sec vs 1.0sec) on my laptop for the whole OpenCL kernel.
For users it will mean lower delay before OpenCL rendering might start.
This is still far from perfect, but much better than what we had so
far (more consistent with the inherent precision available in floats).
Note that this fixes some (currently commented out) units unittests, and
requires adjusting some others; that will be done in the next commit.
The bug was in RNA nodes code actually; itemf functions shall never, ever
return NULL!
Note that there were other itemf functions there that were potentially
buggy. Also harmonized their code a bit.
* Numbers with units (especially angles) were not handled correctly
regarding the number of significant digits (spotted by @brecht in a T52222
comment, thanks).
* A zero value has no valid log; need to take that into account!
When calling sculpt from Python,
setting 3D 'location' but not 2D 'mouse' stopped working in 2.78.
Now check if the operator is running non-interactively and
skip the mouse-over check.
Regression in D1812: PyDriver variables as Objects
Taking the Python representation is nice in general
but for enums it would convert them into strings,
breaking some existing drivers.
Some users really liked the previous behavior,
so making it an option.
Cursor Lock Adjustment can be disabled to give something close to
2.4x behavior of cursor locking.
When lock-adjustment is disabled, placing the cursor re-centers the view on it.
This avoids the issue reported in T40353
where the cursor could get *lost*.
Even strands that were excluded by the density texture were being added
to the DM passed to cloth, but these ended up having some invalid data,
because they were not fully constructed.
This simply excludes `UNEXISTED` particles from the DM generation, as
would be expected.
Note that the fix is not perfect: it systematically refcounts all IDs
assigned to a node's id pointer, which breaks the 'do not refcount
scene/object/text datablocks' principle...
But besides that principle being far from ideal in general, it becomes
pretty much impossible to apply when using a //generic// ID pointer,
unless we somehow add some kind of type data to that pointer.
So for now, better to live with that than to have a broken usercount.
The purpose of the keymap strings is probably for un-embossed menu items
like those seen in most pulldowns. I can't see a reason for also adding that
string to regularly drawn buttons within popups; we don't add it
anywhere else in the UI either. So this commit makes sure shortcut
strings are only added to buttons that are drawn like pulldown-menu
items.
The Blender text editor's built-in Python template "Gamelogic" has a reference near the bottom to "objectHitList" as an alleged attribute of the KX_TouchSensor. This name is incorrect; the correct name is "hitObjectList".
Attempting to access the suggested objectHitList returns an error:
```
AttributeError: 'KX_TouchSensor' object has no attribute 'objectHitList'
```
The provided diff corrects this minor error.
Reviewers: kupoman, moguri, campbellbarton, Blendify
Reviewed By: Blendify
Tags: #game_engine, #game_python
Differential Revision: https://developer.blender.org/D2748
`defvert_array_find_weight_safe()` was confusing the 'invalid vgroup' and
'valid but totally empty vgroup' cases.
Note that this also affected at least ShrinkWrap and SimpleDeform
modifiers.
The order of evaluation of function arguments is undefined, and the order
was reversed between these compilers. This was causing regressions tests
to give different results between Linux and macOS.
In the future we should make these two buttons on one line
However because we need `gen_context = 'PAINT_STENCIL'`
this is a little hard and we need to find a proper solution.
One might be using `context_pointer_set`
Patch by @craig_jones with edits by @blendify
Differential Revision: https://developer.blender.org/D2710
Moving the ray_start_local to the new position does not lose as much precision as moving the ray_org_local to the corresponding position.
The problem of inaccuracy is within the functions `bvhtree_ray_cast_data_precalc` and `fast_ray_nearest_hit`, and not directly in the values of the rays.
The way we use it, UI_PRECISION_FLOAT_MAX is actually '+ 1' to get the total
number of digits, and a float only has 7 meaningful digits, so that define
shall be 6.
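In code form (a sketch of the reasoning, not a verbatim copy of the header):

```cpp
/* A float carries ~7 meaningful decimal digits; since the define is used as
 * "precision, plus 1 for the total digit count", the maximum must be 6. */
#define UI_PRECISION_FLOAT_MAX 6
```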
GCC seems to detect uninitialized variables passed into function calls now, but
then isn't always smart enough to see that they are actually initialized.
Disabling this warning entirely seems a bit too much, so initialize a bit
more now.
This commit unifies the flattened texture slot names for bindless and regular CUDA textures. Texture indices are now identical across all CUDA architectures, where before Fermi used different indices, which led to problems when rendering on multi-GPU setups mixing Fermi with newer hardware.
Hopefully this actually fixes it; at least I could not reproduce it anymore with
that change, but it was already quite hard to trigger before.
We need a memory barrier at this allocation, otherwise it might happen
after the preview gets added to the done queue, so the preview could end up
being freed twice, leading to a crash.
Change the implementation so it no longer takes over the mouse cursor motion
from the OS, instead only move it when warping, similar to Windows and X11.
Probably the reason it was not done this way originally is that you then get
a 500ms delay after warping, but we can use a trick to avoid that and get much
smoother mouse motion than before.
While drawing nice 'rounded' values is OK also for 'low precision'
editing like dragging and such, it's quite an issue when you type in a
precise value, validate, edit the value again, and find a rounded
version of it instead of what you typed in!
So now, *only when entering textedit of num buttons*, we always get the highest
reasonable precision for floats (and use exponential notation when
values are too low or too high, to avoid tremendous amounts of zeros).
Tweaked the path radiance summing and alpha to accommodate the possible
contribution of light by transparent surface bounces happening prior to the
shadow catcher intersection.
This commit will change the way shadow catcher results look when behind a
semi-transparent object, but the old result seemed to be fully wrong: there
were big artifacts when alpha-overing the result onto some actual footage.
Since we added auto DPI on Linux, on some systems the UI draws smaller than before
due to the monitor reporting DPI values like 88. Blender font drawing gives quite
blurry results for such slightly smaller DPI, apparently because the builtin font
isn't really designed for such small font sizes. As a workaround this clamps the
auto DPI to minimum 96, since the main case we are interested in supporting is
high DPI displays anyway.
Differential Revision: https://developer.blender.org/D2740
This is a different fix for the issue from D2088, preserving backwards compatibility
for IK stretching. The main problem with this patch is that this new behavior has
been there for a year, so it may break rigs created since then which rely on the new
IK stretch behavior.
Test file for various cases:
https://developer.blender.org/diffusion/BL/browse/trunk/lib/tests/animation/IK.blend
Reviewers: campbellbarton
Subscribers: maverick, pkrime
Differential Revision: https://developer.blender.org/D2743
No real reason to keep this restricted to copying; creating an ID outside of
the database is handy as well!
Also, add helpers to add/remove an ID from Main, and to set/clear its
'user refcounting' status. Those will be useful in future complex ID
manipulation cases (like static override...).
Revise the logic here to be more robust against keyframes with
similar-but-different frame numbers (e.g. 70.000000 vs 70.000008),
which would cause the search to go into an infinite loop, as the same
keyframe was repeatedly found (and skipped).
Tweak the bias from the previous fix a bit to be more backwards compatible in
some scenes. In the end, which way we round is quite arbitrary, but keeping the
case where the texture coordinate is exactly zero the same seems better.
The option has always (un)set the "Overwrite" flag on all strips. Calling
it "Override" seems misleading, since even when unchecking it, it overrides
whatever was set on the selected strips. It really just (un)sets the
"Overwrite" flag, and now it is also labeled as such.
In the reported example it seemed reasonable to apply this change.
But it causes a much more common case (selecting projections)
to be split into 2x islands.
Resolves T50970
This makes sense when we want to avoid float precision error
for near co-linear edges. OTOH, this is an arbitrary decision,
so keep functions separate.
That problem occurs because of the imprecision of `short int` (16 bits).
The 3D coordinates are converted to 2D, and when they are off-screen, their values can exceed 32767 (the max short int value)!
One quick solution is to use float instead of short.
The snap code is actually a little tricky. I want to make some arithmetic simplifications in it.
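A tiny self-contained illustration of the overflow (not Blender code):

```cpp
#include <cstdio>

int main()
{
  const int projected_x = 50000;             /* off-screen 2D coordinate */
  const short as_short = (short)projected_x; /* > SHRT_MAX (32767): wraps on typical platforms */
  const float as_float = (float)projected_x; /* float keeps the value intact */
  std::printf("short: %d, float: %.1f\n", as_short, as_float);
  return 0;
}
```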
The only similarity between these functions is that both serve to snap.
However, their code is totally different from one another.
So separating these functions:
- removes the need for several conditions;
- simplifies the code; and
- optimizes it.
* Display a warning above the pose list if the pose library is in an invalid
state (i.e. when it has keyframes but no pose-markers associated with those
keyframes). This warning prompts users to run the "Sanitize Pose Library Action"
operator, which should fix up such issues.
* "Sanitize" operator now creates unique names for each newly create pose
marker it generates, including the frame on which it found the pose
The problem here was that the "frame_start" and "frame_end" RNA properties of
the Stepped FModifier were shadowing/overriding "frame_start" and "frame_end"
properties of the base FModifier. As a result, when the range() callback
for the In/Out parameters (defined as part of the base FModifier) checked
its start/end properties, they were always still zero, meaning that the
acceptable range for the In/Out parameters was 0 -> 0 = 0.
Note:
If you've got old files with this problem, you'll need to manually click on
the frame_start/end properties to flush out the old values. It's probably
not worth the effort of applying a version patch for this (given that this
modifier is not one of the most often used ones AFAIK).
As with Strip Time, the updates here would get triggered before the
autokeying had a chance to record the unkeyed values, making it impossible
to autokey.
This is something which was reported to work fine by Mai and Benjamin, and
confirmed by myself. Disabling this workaround gains us some speedup:
Before Now
bmw27 04:28.42 04:07.79
classroom 09:26.48 08:54.53
fishy_cat 08:44.01 08:18.70
koro 09:17.98 08:57.18
pavillon_barcelone 12:26.64 11:52.81
Test environment is:
- Ubuntu 16.04, with all updates installed
- AMD RX 480 GPU
- amdgpu pro driver version 17.10-450821
The issue was caused by a combination of the following factors:
- The clipboard cleanup function will pass the node tree as NULL to the node
free function.
This is fine on its own; we don't have a tree in the clipboard.
- The node free function will call node storage cleanup only when there is
a non-NULL node tree.
This is somewhat weird, because storage cleanup does not take the node
tree as an argument.
So the solution here: move node storage cleanup outside of the check that
the node tree is not NULL.
imapaint's clone and canvas are refcounting Image usages.
And particle editsettings' object and scene seem to be pure runtime
data (they are reset to NULL in readcode), so resetting them to NULL
here as well.
Really, really, really need to get rid of this usercount handling
everywhere; hopefully the incoming ID copying rewrite will help sanitize
that mess. But the fix was needed for the 2.79 release!
Not too happy about this: have to include BKE_curve in BLI_freetypefont to copy
nurbs... Not sure why fonts are in BLI, tbh. We are already replicating
quite a bit of nurbs logic there. :(
The code was somewhat weird: it was first copying border/crop settings from
the "source" scene, then was checking border settings of the current scene
and only then was copying border from "source" scene.
Now we first copy border/crop flags, then copy border from source and then
check whether the border covers the full frame.
Auto & aligned handles wouldn't restore to their correct locations.
Note that a more direct fix for the bug is possible
(storing the handle locations to restore on cancel).
But that still gives some odd behavior, see code-comments for details.
The default anisotropic tangent computation could fail in some cases,
leading to NaNs and artifacts. Use a simpler formulation that doesn't
suffer from this.
Unfortunately this means disabling the code that ensures the title
bar is properly scaled with DPI, however better to have that as a
cosmetic issue than Blender being unusable with a lot of Intel GPUs.
Those are not actually copying the ID, only its internal subdata (and
other ID-specific handling). Generic processing common to all ID copying
is done by `BKE_id_copy_ex()`!
The issue was caused by a combination of the following factors:
- Blender Internal viewport render cannot distinguish which parts of the
main database changed, so it does a full database re-sync when anything is
tagged for an update.
This way, if any NodeTree (including compositor) is changed, Blender Internal
viewport is tagged for full render database update.
- With old dependency graph, scene-level drivers are evaluated on every
iteration of scene_update_tagged, even if nothing is tagged for an update.
This causes compositor drivers to be evaluated quite often.
- Driver evaluation checks whether value was changed, and if so it tags
corresponding ID type as updated (this is what was telling viewport to do
render database update).
This check was quite stupid: the current property value was checked against
the one coming from the driver expression. This means that if the driver
value is outside of the hard limit range of the property, the property will
always be considered updated.
The fix is to compare current property value against clamped value from the
driver.
Makes things much simpler, and more consistent.
Also fix issue with new copying and bloody nodetrees, using same hack as
in original ntree copying code to detect 'root' ntrees that shall never
be put into bmain :(((((((
The new dependency graph is taking the root bone into account when building the
graph. This is required in order to get proper dependencies between bones, so we
can reliably use bones as targets from the same rig (and even indirect relations
via external objects). This forces us to tag relations for update when we change
the root IK chain bone.
Since relations rebuild is not fully trivial operation, we only do it for
the new dependency graph. In the future it'll be nice to avoid whole graph
rebuild for such cases, but that's mentioned as a TODO.
This situation happens when a file with a text effect sequencer strip is
loaded in Blender < 2.76 and saved. This destroys the effect data, causing
a crash in Blender ≥ 2.76.
d2f748a222 prevented the crash when opening such a file, but accessing
the strip still caused a crash. This commit fixes that by actually
initialising the invalid strip. Of course this still causes data loss, but
that already happened by opening & overwriting the file in Blender < 2.76.
Using an arbitrary face as the source of the UV data is mostly fine, as
vertices on seams will generally map to different parts of the texture
that have the same color.
This is regarding fed853ea78
Some of the functions might have been inlined, but for others I don't see
how that was possible (I don't think virtual functions can be inlined here).
In any case, better to be explicitly optimal in the code.
Clearing of the custom bones outline's line thickness was not done at the
proper point; wireframe drawing never changes line thickness, only solid draw
with outline does...
Last fix only accounted for direct changes to the RB settings, but
failed for, say, object transformations. This fix accounts for any
change that might invalidate the RB cache.
Fix 9cd6b03187 introduced a bug that
prevented simulation after a cache invalidation (for instance when
changing a setting after simulating). This fixes that.
The problem here was that when an "invalid" path is generated by the panoramic camera, it was tagged
as RAY_TO_REGENERATE with the intention of generating a new path in kernel_buffer_update.
However, since that state was not handled in kernel_queue_enqueue, kernel_buffer_update did not
process the path, which resulted in an infinite loop.
Things like missing directories are now properly checked for, rather than
crashing Blender.
This also adds support for relative paths when opening an ABC file.
This makes the last time (`ltime`) stored in the rigid body world (`rbw`)
only be updated once a simulation step actually occurs, this prevents
another simulation step from being solved unless the current time is
exactly one frame after the last cached frame. Thus this prevents the
formation of gaps in the cache, such as seen in T50230.
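Roughly, the guard described above, as a self-contained sketch (names follow the description, not necessarily the actual code):

```cpp
struct RigidBodyWorld { float ltime; /* last simulated time */ };

static void rigidbody_world_step(RigidBodyWorld *rbw, float ctime)
{
  /* Only solve a step when the scene time is exactly one frame past the last
   * cached frame; anything else would write a gap into the cache. */
  if (ctime != rbw->ltime + 1.0f) {
    return;
  }
  /* ... solve one simulation step here ... */
  rbw->ltime = ctime; /* advance only once a step actually occurred */
}
```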
Reviewers: mont29, sergey, angavrilov
Tags: #physics
Maniphest Tasks: T50230
Differential Revision: https://developer.blender.org/D2458
D2729 by @IgorNull
Currently, trackball rotation sequentially applies rotation across the X axis and
the Y axis, which produces a strange/unusable result on diagonal pointer motion.
This change fixes the problem by using a single axis which is orthogonal
and proportional to mouse delta - matching view-port trackball.
As the title says, the normal wasn't set for the Hair BSDF because it wasn't
needed before. However, the denoiser uses it to store the feature passes, so
it needs to be set now.
Now that some node types may have custom context, we need to handle that
in the (convoluted :| ) UI code of nodes as well.
Reported in T43295 by Gabriel Gazzán (@gab3d), thanks.
`BMO_iter_as_array()` may fill fewer items than requested in the given array,
so we have to update the number of items to work on from its returned value,
otherwise the code might try to use uninitialized memory.
The original code was doing a sanity check to see if the existing index was
out of range. However, the comparison was wrong.
So if the previous ct->user (active index of the texture node) was larger
than the number of available texture nodes + 1 in the other material,
we would never re-set the index to 0.
Bug introduced in c31f74de6b.
There was an early attempt at fixing this (2b2ac5d3cc), but it was just working
by pure luck, and failing in cases like the one from this bug report.
Convenience makefile now uses CMAKE_BUILD_TYPE_INIT,
this means you can change the build type of an existing build
and it won't be overwritten when running `make`.
Useful if you want to add debug info to a release build for profiling.
This is a very important, potentially deadly side-effect of this
operator. If something goes wrong, it can save a broken .blend file.
Ideally we could get rid of that operation anyway, once ID management is
fully renewed, but for now would rather keep it around.
Related to T51902.
Fix is a bit ugly, but cannot think of another solution for now, at
least this **should** not break anything else.
And now I go find myself a very remote, high and lonely mountain, climb
to its top, roar "I hate proxies!" a few times, and relax hearing the echoes...
Based on D2578, now you can install JACK audio server and use it in
Blender build without having to specify the `--with-all` option (that
one still enables also JACK of course).
Reviewers: mont29
Maniphest Tasks: T51033
Differential Revision: https://developer.blender.org/D2578
Note that those are a nightmare to handle correctly; the current code is
like a plate of noodles, so... most likely not fully working. Think I'll
have to nuke the localize code as well :(
@campbell Barton: Why is this declaration needed at all in stubs.c?
Further up the file collada.h is imported, and that already declares
the function, resulting in a duplicate declaration.
Avoids wrong texture data when multiple objects are exported. Note: this
commit might possibly not work fully. The full feature is added with the
next commit.
Reorder keyingsets registration order, since it also affects the order of the I
menu options; better to show the most used ones first, pure alphabetical order
is not great here... Will likely break some muscle memory though. :|
Based on D2720 by Carlo Andreacchio (@candreacchio), thanks.
Although the original report was about the docs, the real issue was in
the API.
My original commit started from a copy-paste from the Switch
Node. However, I don't use custom1 for the Switch View node.
The docs are slightly incomplete, since it would be nice to mention the
views here. Or maybe even expose them via Python. But honestly they are
generated depending on the scene multi-view settings.
If there was any specularity in the Principled BSDF, it would get a sampling
weight of one regardless of its actual impact.
This commit makes Cycles estimate the contribution of the component and adjust
the weighting accordingly, which greatly improves the noise characteristics of
the Principled BSDF in many cases.
Note that this commit might slightly change the brightness of areas when using
MultiGGX and high roughnesses, but the new brightness is more accurate and
closer to the result of Branched Path Tracing. See T51836 for details.
Differential Revision: https://developer.blender.org/D2677
The PDF of the MultiGGX sampling is approximated by the singlescattering GGX
term as well as a scaled diffuse term that makes up for the energy in the
multiscattering component that's missed by GGX.
However, there were two problems with the glossy terms: The diffuse term missed
a normalization factor, and the singlescattering term was not properly scaled
down based on the albedo estimate.
The glass term was completely wrong and has been rewritten. It uses the fresnel
factor to weight reflection vs. refraction and uses the glossy MultiGGX model
for reflection.
For refraction, the correct singlescattering term is now used, and a new
albedo approximation is used that was derived by evaluating GGX albedo for
roughnesses from 0 to 1 and IORs from 1 to 3 and fitting numerical
approximations to it. The resulting model has a mean relative error of 9e-5,
but could probably be simplified without losing noticeable accuracy in the
final render.
The improved PDFs help with glossy highlights (due to better light sampling vs.
closure sampling MIS) and fix the situation described in T51836 where mixing
MultiGGX with other closures (as it happens in e.g. the Principled
BSDF) causes incorrect darkening.
This is annoying especially for exporters that do use the mesh name, since it
broke any relation with the actual Mesh naming in the original Blend file.
Unfortunately, we cannot avoid the extra .xxx digits. ;)
(continuation of previous WIP commit, sorry about that one :/ ).
This commit changes quite a few things, distributing new copying flags
into sub-data copying code (mostly concerns ID refcounting or not).
It also removes ID refcounting handling from Modifiers' copy callback
(this was ugly from the start, proved to be problematic in current
master, and generally bad practice). This is now done by calling code.
Also, it brings back ID refcounting handling to main BKE_library's copy
code, which means in generic ID copying lower-level IDType-specific copy
code should not use it at all. Support at lower-level remains needed
though, unfortunately, to cope with partial copying tools etc.
It tried to assert that
addons/io_blend_utils/blender_bam-unpacked.whl/__init__.py was loaded when
the io_blend_utils module was imported. However, this happens only on
demand, and not directly when importing the add-on.
*Sigh* One more example of why we should keep ID management handling in
as few places as possible! It's impossible to keep more than a few
places in sync regarding which ID pointer is refcounted etc.
Children were always getting at least one segment of fixed length...
Now fully hidden ones (zero length) get no segment at all.
Note that even very short ones keep getting one 'unit' length segment - would
rather avoid changing that at this point, given how complex children
particles' 'length' can get with all kinds of modifiers... Think we can
live with that for now anyway.
The crash did not happen yet because we always had proper vmemh defined in
the parent scope.
Patch by Ivan Ivanov (aka obiwanus), thanks!
Differential Revision: https://developer.blender.org/D2715
Idea is the same, looks like it will be a tad simpler than with copy
though, since we should not need to change each ID type freeing func, as
ID usercount handling is done in main BKE_library code (would like to do
that for copy as well, but it's not that simple).
It is not enough to mutex-guard only the modification code of integer values,
since the operation is NOT atomic and unguarded reads may see torn values.
This is not even safe for single-byte data types.
For now, guarded the getter functions, similar to other functions in
this module.
Ideally we want to switch the modification to atomic operations, so we
wouldn't need any locks in the getters.
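A sketch of that end goal (illustrative, not this module's actual API):

```cpp
#include <atomic>

/* Reads need atomicity too: a mutex around writers alone does not make
 * unguarded getters safe. With atomics, neither side needs a lock. */
static std::atomic<int> value{0};

int  value_get()      { return value.load(); }
void value_add(int v) { value.fetch_add(v); }
```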
The 'Convert To...' Object operation has the very weird effect of actually
working at the obdata level, not the object level, which means *all* objects
(even unselected/hidden/in other scenes/...) using the same obdata will be
converted to the new selected type.
IMHO this is very bad behavior, but... not a bug really, so do not
change this for now.
But at least, do not do that when working on some linked data, else it
leaves the Blend file in an invalid (incoherent) state until the next reload.
So the workaround for now is to enforce the 'Keep Original' option when some
linked object/obdata is affected by the operation.
Also fixed somewhat broken usercount handling in the Curve->Mesh part.
That one was probably not an actual issue, except maybe in some corner
cases (like deleting a linked scene also used by some other linked scene).
Again, better not to try to do smart & complex freeing logic outside of the
BKE_library area; let's keep the spaghetti nightmare in a single place!
The previous code (same as what `BKE_libblock_free_us()` does when the
usercount reaches 0) was probably OK in that specific case, but still not a
good idea, and potentially risky.
Even though in that specific case it was probably safe-ish, there is no
guarantee at this point that the Brushes we want to remove are not used
somewhere; better to take the slightly slower, much safer
`BKE_libblock_delete()` path here.
Eeeeeek!^2 Calling the ID freeing `BKE_libblock_free()` unconditionally on a
datablock (ob->data, i.e. Curve) that may be used elsewhere...
Veryveryvery bad!
*tryed "#" as preprocessor used in POV-Ray for language keywords best behaviour was to have it as a punctuation symbol
*moved "finish" to its proper category
*changed order of some POV-Ray ini files keywords to have them work better
*added a few keywords from latest pov version
*Fixed C-style closing of multiline comments (*/)
Reviewers: campbellbarton, mont29
Reviewed By: campbellbarton, mont29
Subscribers: mont29
Differential Revision: https://developer.blender.org/D2707
Noisy change, but safe, and better do it sooner than later if we are to
rework copying code. Also, previous commit shows this *is* useful to
catch some mistakes.
Avoids the copy-constructor being invoked every time we pass a function
to the builder functions.
Should lower the number of CPU ticks spent during DEG construction.
Was rather weird and only used for the time source. It is simpler to make the
depsgraph keep track of the time source directly.
No need to introduce extra entities without actual need.
Was some mismatch in address space. Seems to be caused by recent additions.
Additionally, moved decoupled ray marching functions under ifdef, so they
don't try to use malloc() functions.
Thanks Mai for testing the patch!
Now, when there is no usable neighboring pixel for denoising, the noisy value
is preserved instead of producing a NaN.
Also, negative results are clamped to zero.
Note that these are just workarounds that don't fix the underlying problems,
but these issues are very rare and I'm not sure if it's even possible to fix
the underlying problems without introducing a significant slowdown or quality
decrease in other situations.
Because of that and since 2.79 is happening very soon, I just went for these
workarounds for now.
Replaces the placeholder 'empty' icons of the "Force Field" and "Group
Instance" entries in the object-add menu with proper new ones.
Icons by @zlsa, thanks a lot!
Maniphest task T51291.
Technically not passing all buffers used by a kernel is undefined
behavior. We haven't had any issues with this so far on AMD or
Nvidia, but it's known to be a problem with Intel and we received
a report from AMD that this is a problem on newer hardware, so we
need to make this change at some point.
Unfortunately there is a cost to being correct, about 5% for the
benchmark scenes. For low sample counts it's even worse, I've
seen up to 50% slowdown. For the latter case I think adjusting
tile updating logic can help, but not sure what that would look
like yet (it would be just a few lines change however).
Unlike regular path tracing, branched path tracing is usually used with lower
sample counts, at least for primary rays. This means there are fewer samples for
the GPU to work on in parallel and rendering is slower. As there is less work
overall, there are also more inactive threads during rendering with BPT. This
patch makes use of those inactive rays to render branched samples in parallel
with other samples.
Each thread that is preparing for a branched sample will attempt to find an
inactive thread and if one is found the state for the sample is copied to that
thread. Potentially, if there are enough inactive threads, 100s of branched
samples could be generated from the same originating thread and ran in
parallel giving large speed ups.
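In pseudo-code, the idea is roughly this (hypothetical helper names, stubbed for the sketch; the real split kernel differs):

```cpp
/* Hypothetical helpers, stubbed so the sketch compiles. */
static int  find_inactive_ray() { return -1; }          /* -1: all threads busy */
static void copy_ray_state(int /*from*/, int /*to*/) {} /* duplicate sample state */
static void render_branched_sample(int /*ray_index*/) {}

static void queue_branched_sample(int my_index)
{
  const int idx = find_inactive_ray();
  if (idx != -1) {
    copy_ray_state(my_index, idx);    /* the idle thread renders it in parallel */
  }
  else {
    render_branched_sample(my_index); /* no idle thread: render it ourselves */
  }
}
```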
Gives 70% faster render for pavillion midday scene. 20-60% faster on BMW
with car paint replaced with SSS/volumes.
Due to various driver issues with AMD GCN 1 cards we can no longer support
these GPUs. This patch makes them unavailable to select for Cycles rendering.
GCN cards 2 and higher are still supported. Please use the most recent
drivers available to ensure proper functionality.
See here for a list to check which GPUs are supported:
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units
Reported by Andy Goralczyk (@eyecandy) over IRC, thanks!
Simply nuke all that poor broken custom one-by-one handling in
object_relations.c code, and use highly complex but powerful and
well-tested BKE_library_make_local() in all cases of MakeLocal!
ID management, especially related to linking, is a very hairy matter;
better to have as few core functions as possible managing all the dirty
details. ;)
Edge collapse was using bounding box center as the point to collapse to.
When collapsing multiple adjacent edges together, this caused
inconsistencies in placement of the collapsed point, depending on the
orientation of the edges in relation to the space axis.
This makes edge collapse use the mean point instead.
The previous outlier heuristic only checked whether the pixel is more than
twice as bright compared to the 75% quantile of the 5x5 neighborhood.
While this detected fireflies robustly, it also incorrectly marked a lot of
legitimate small highlights as outliers and filtered them away.
This commit adds an additional condition for marking a pixel as a firefly:
In addition to being above the reference brightness, the lower end of the
3-sigma confidence interval has to be below it.
Since the lower end approximates how low the true value of the pixel might be,
this test separates pixels that are supposed to be very bright from pixels that
are very bright due to random fireflies.
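Stated compactly (notation mine): a pixel with value $I$, standard deviation estimate $\sigma$, and 5x5-neighborhood quantile $Q_{75}$ is now flagged only when

$$I > 2\,Q_{75} \quad\text{and}\quad I - 3\sigma < 2\,Q_{75},$$

i.e. when it is suspiciously bright *and* its own confidence interval admits that its true value may lie below the reference.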
Also, since there is now a reliable outlier filter as a preprocessing step,
the additional confidence interval test in the reconstruction kernel is no
longer needed.
The problem was that the strokes in the copy-paste buffer could be keeping
dangling pointers to colors that were already freed. Therefore, this commit
makes it so that when copying the strokes, we now make copies of the colors
and put them in a hashtable beside the stroke buffer. This is convenient,
as it saves us having to look up what colours need to be copied over each
time when pasting.
The problem was that newly pasted strokes were still using colours from
the original datablock. As a result, you'd either get an immediate crash,
or if you managed to save the file before it crashed, each stroke would get
reloaded with a dummy colour.
This commit makes it possible to copy/paste strokes between datablocks
again. However, there are still problems when trying to paste across file
boundaries (i.e. copy strokes in one file, paste in another), which the next
commit will address.
faces.
This was requested by script writers. Especially needed if beveling
wire edges with vertex_only.
Should be backward compatible as just adds two new keys to returned
dict in python ('edges' and 'verts').
Was internally a no-op operation, which only caused extra work
to be done during depsgraph traversal and evaluation, without
making any measurable improvement.
This is a minor update to add more information on how Blender handles modal
operators. The existing docs provide a good overview, but might not be
as helpful to those unfamiliar with modal programming. This patch also
corrects a few small grammar issues.
It was never actually used apart from being stored at construction time.
This caused some redundancy and uncertainty about which relation type to use
during construction (often existing types were not close enough to the
particular use case).
Made this resilient to unknown types, for now. Supporting specific INT
sockets (through implicit conversion to GPU_FLOAT ones) is considered a nice TODO.
Was just keeping the default '1' user from `BKE_libblock_alloc()`,
instead of using the correct way to handle the extra virtual user needed when
we want to keep unused datablocks around...
Do not call invoke ops from the outliner's operations menus. The invoke op
would search again for the item under the mouse coordinates... when it is
invoked! This means the menu entry you clicked would often not be over the
target item, leading to either nothing, or the operation being applied to the
wrong item.
Note: about groups, there is another minor annoyance leading to some
assert - groups have an annoying virtual fake user which breaks
usercount; will see whether this is easily fixable. :|
The idea is to accumulate all new tasks in a thread local queue
first without doing any thread synchronization (aka, locks and
conditional variables) and move those tasks to a scheduler queue
once they are all ready. This way we avoid per-task-pool lock
and only have one lock per bunch of tasks.
This is particularly handy when scheduling new dependency graph
node children. Brings FPS of cached simulation from the linked
below file from ~30 to ~50.
See documentation for BLI_task_pool_delayed_push_{begin, end}
and for TaskThreadLocalStorage::do_delayed_push.
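Typical usage then looks roughly like this (a sketch; check BLI_task.h for the exact signatures):

```cpp
/* Batch the pushes; the scheduler lock is taken once per batch. */
BLI_task_pool_delayed_push_begin(pool, thread_id);
for (int i = 0; i < num_children; i++) {
  /* Accumulates in the thread-local queue: no synchronization per task. */
  BLI_task_pool_push(pool, schedule_node_func, &children[i], false, TASK_PRIORITY_HIGH);
}
/* The whole batch moves to the scheduler queue under a single lock. */
BLI_task_pool_delayed_push_end(pool, thread_id);
```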
Fixes T50027: Rigidbody playback and simulation performance regression with new depsgraph
Thanks Bastien for the review!
If users wanted to bake only a few of the mesh materials, they would
still need to create dummy textures for the other parts.
This commit reports (as RPT_INFO) the materials with no texture, but moves
on to bake the other materials.
This was causing proxy updates on every frame, even if they
do not really change. Additionally, it was causing a second round
of armature updates when used from inside a dupligroup (the viewport
ensures all objects from a dupligroup are up to date before drawing).
It's now less confusing (for example, using nr_of_samples directly,
instead of using 1 / 1 / nr_of_samples). Might also have fixed a bug.
Also added unittests.
The scale matrix must have its homogeneous 'w' (at mat[3][3]) set to the
scale in order to also scale the translations along with it. However, this
also scales the transform matrix's 'w' component, which is not supposed
to happen.
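In matrix terms (notation mine): the scale matrix carries the scale in its homogeneous component,

$$S = \operatorname{diag}(1,\,1,\,1,\,s),$$

so the homogeneous divide rescales the whole transformed point, translation included. But in a product $M\,S$ the transform's own $m_{33}$ gets multiplied by $s$ as well, and that component must stay 1.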
The old default values (start/end frame = 1) could have been an actually
desired setting (for example when exporting a non-animated model). To
make this worse, this was only interpreted as "start/end of the scene" by
the export operator when running interactively, but not when run from
Python.
By choosing INT_MIN as default it's highly unlikely that the interval
[start, end) was intended as actual export range.
This way we always have predictable behavior, especially from the
performance point of view. Additionally, if some bottleneck is found
in stack implementation it'll be easier for us to address.
Yep, that got reported... Was slightly more involved than UI message
fixing though: the RNA string length getter shall return the exact length of
the string (same as strlen), not the size of the buffer allocated to contain
it! Otherwise, the final NULL char leaks in and...
Denoising was setting session parameters for every frame, which was detected as
a change and therefore caused a resync.
Since the parameter modification change is only needed for viewport rendering
(which doesn't support denoising anyways) and resyncing after a frame change
(which isn't affected by denoising settings), an easy fix is to just ignore
the denoising parameters like it's currently done with the samples.
Follow up to 9f044cb422
These comments described the difference between Microsoft & MinGW's struct definition. Now that we dropped MinGW we don't need to go into these details.
The DM evaluation code was simply never clearing the `deformedOnly` flag
when evaluating a generative modifier...
Quite astonishing this never got caught before; a lot of particle code
relies on a valid value of this flag!!!
The fact that we can end up with uninstantiated objects is not expected
currently, but would rather not start chasing all the corner cases that may
lead to that situation.
User shall be able to delete uninstantiated objects from Outliner, though!
Properly remap nodes' pointers to copied IDs in copied ntrees.
Note that this only affects root trees, node groups are not concerned
here, since they are assumed to be reusable chunks and hence *not*
duplicated.
The operator is indeed not adding frames but inserting them at the current frame (shifting all subsequent ones). Changed the operator name and description.
Approved by Antonio.
This operator relies on a rather specific context setup, so it shall not
be exposed to user in 'operator search' menu etc.
Based on D2528 by Vuk Gardašević (lijenstina).
The Issue
=======
For a long time now MinGW has been unsupported and unmaintained and at this point,
it looks like something that we should just leave behind and move on.
Why Remove
==========
One of the big motivations for MinGW back in the day was that it was free, compared to MSVC which required a license.
However, now that this is no longer true, we have basically stopped updating the needed CMake files.
Along with the CMake files, there are several patches to the extern libs needed to make this work. For example, see:
https://developer.blender.org/diffusion/B/browse/master/extern/carve/patches/mingw_w64.patch
If we wanted to keep MinGW then we would need to make more custom patches to the external libs and
this is not something our platform maintainers are willing to do.
For example, here are the patches needed to build Python: https://github.com/Alexpux/MINGW-packages/tree/master/mingw-w64-python3
Fixes T51301
Differential Revision: https://developer.blender.org/D2648
We had handling of fully duplicated polygons already, but... absolutely
nothing to sanitize partially merged polygons! This was giving us
totally invalid geometry, with duplicated vertices in a single poly,
invalid edges, etc.
Now we do check for invalid loops inside polys, and generate new edges
as needed to get only valid polys.
For some reason this was a nightmare to get running fully OK; playing
with old and new indices is really, really mind-breaking.
This feature got lost with the new auto-track API.
Added it back by extending the frame accessor class. This isn't really
a frame thing, but we don't have another type of accessor here.
Surely, we could use the old-style API here and pass the mask via region
tracker options for this particular case, but then it becomes much
less obvious how the real auto-tracker will access this mask with the
old-style API.
So it seems we do need an accessor for such data; it's just a matter of
finding a better place than the frame accessor.
It's a bit ugly but I couldn't find a better way to keep fast installs and
correct handling of switching between master and blender2.8 with different
lib directories.
This makes it possible to have an animated / procedurally generated mesh
that starts empty and obtains data in later frames.
Fixes the export of an empty mesh with an Ocean Modifier, as described in
issue T51351.
This allows you to put any kind of animation data on the mesh, and its
shape will be exported on each timekey. Note that this timekey is unrelated
to the animation data (so we don't export on each keyframe, for example).
A practical example is the addition of an animated custom property to
trigger the export of animated mesh data. The mesh data can then be created
from any source, like Python scripts.
Not only is this useful in itself, it also provides a workaround for one
of the two issues described in T51351.
This is written in a custom metadata key, so it isn't shown by utilities
like abcecho or abcls. However, it's still something that's useful to
have available.
Houdini writes vertex data in a different format than Blender does; Houdini
uses "face-varying scope", which means that the vertex colours are indexed
by an ever-increasing number over all vertices of all faces instead of the
vertex index.
I've also merged the read_custom_data_mcols() and read_mcols() functions,
because the latter was only called from the former, and the changes in this
commit would add yet more function parameters to pass.
A big chunk of code was copied between the if and else bodies. By using
a boolean to store whether the c3f_ptr or c4f_ptr should be used, the
in-loop condition is kept as simple as possible.
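Schematically, the refactor boils down to this (a sketch with assumed types; names follow the description above):

```cpp
#include <cstddef>

struct V3 { float x, y, z; };
struct V4 { float x, y, z, w; };

void copy_loop_colors(const V3 *c3f_ptr, const V4 *c4f_ptr, size_t num_loops)
{
  /* Decide the source format once instead of duplicating the whole loop. */
  const bool use_c3f = (c3f_ptr != nullptr);
  for (size_t i = 0; i < num_loops; i++) {
    float r, g, b, a;
    if (use_c3f) { r = c3f_ptr[i].x; g = c3f_ptr[i].y; b = c3f_ptr[i].z; a = 1.0f; }
    else         { r = c4f_ptr[i].x; g = c4f_ptr[i].y; b = c4f_ptr[i].z; a = c4f_ptr[i].w; }
    /* ... the formerly duplicated body uses r/g/b/a from here on ... */
  }
}
```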
Note: the angle in the bug isn't really reflex - using the vertex normal
for this test isn't always right, but usually is. At any rate, we
shouldn't try to put a vertex on the edge in between if the angle is reflex.
This was two-fold.
1) The export used viewport settings to obtain the particle cache, rather
than render settings.
2) The child hair writer tried to obtain UV-coordinates from the parent
hair, without checking whether those were available in the first place.
Since we already have a rather advanced PovRay exporter, makes sense to
also nicely display generated 'code'.
Patch by Maurice Raybaud (@mauriceraybaud), thanks!
Cleanup (mostly styling) by @mont29.
The brightness/contrast node was changing color but did not modify alpha
or ensure colors are premultiplied on the output. This was giving
artifacts later on unless alpha was manually converted.
Compositor is supposed to work in premultiplied alpha (except of
some really corner cases) so it makes sense to ensure premultiplied
alpha after brightness/contrast node.
This is now done as an option enabled by default, so we:
(a) Keep compatibility with old files.
(b) Have correct behavior for newly created files.
Later on we can get rid of this option.
We were looping over all vgroups in the destination mesh and making a string
comparison, for every vgroup of every vertex of the merged mesh! Crazy!
Now we simply create a temp mapping of vgroup indices, which seriously
simplifies things (and gives a significant speedup when merging huge meshes
with lots of vgroups; here a quick stupid test went from 120ms in
vgroup merging to less than 5ms, 25 times quicker!).
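The mapping idea, sketched (illustrative, not the actual code):

```cpp
#include <cstring>

/* Resolve vgroup names to destination indices once, instead of a string
 * compare per vgroup per vertex; per-vertex remapping then becomes a plain
 * integer lookup: dw->def_nr = map[dw->def_nr]. */
void build_vgroup_map(const char (*src_names)[64], int src_count,
                      const char (*dst_names)[64], int dst_count, int *map)
{
  for (int i = 0; i < src_count; i++) {
    map[i] = -1; /* no match */
    for (int j = 0; j < dst_count; j++) {
      if (std::strcmp(src_names[i], dst_names[j]) == 0) {
        map[i] = j;
        break;
      }
    }
  }
}
```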
Root of the issue here was that two stupid modifiers could create named
vgroup CD layers (vgroup editing ones... shame on me :") ).
Fix that, and added some versioning code to also fix 'corrupted' blend
files created by those so far.
Seems re-loading a module invalidates memory pointers by the looks of it,
which gives an error on the next kernel call.
Not sure how to move a memory pointer from one CUDA module to another one,
so for now simply disable kernel re-load for CUDA devices. Not ideal,
but better than a failing render.
The feature-selective option for CUDA is not an official feature anyway.
`screen_findedge()` is not expected to return NULL in that case, but
checking against that does not hurt (we do it in all its other call
cases anyway), better than crashing.
For some reason GCC-6 successfully compiles the test program with
-Wno-implicit-fallthrough passed via the command line. It just
silently ignores unknown arguments which start with -Wno-.
The issue is, if some other warning happens in the code, then
GCC will complain about the unknown -Wno- argument which is not
supported by the current GCC version.
This produces some misleading warning prints about an unknown
command line argument whenever any other warning happens in code
from extern/.
Volume shaders without anything connected to the surface output are treated
as if they had a transparent BSDF as the surface shader in Cycles, so the
denoiser should skip feature pass writing for them just as it does with an
actual transparent BSDF.
If the central pixel is an outlier, the denoiser is supposed to predict its
value from the surrounding pixels. However, in some cases the confidence
interval test would reject every single surrounding pixel, which leaves the
model fitting with no data to work with.
- Some arguments were inappropriately tagged as unused
using the (void)foo semantic.
Only use such semantic in tricky cases, when something
needs to be ignored in release builds or something is
dependent on tricky ifndef policy.
For the rest of the cases just use the void foo(int /*bar*/)
semantic, which ensures the variable is not used. Solves
confusion and code running out of sync with later
development.
- Used proper unused semantic for some arguments.
- Added braces to make the code easier to follow; tricky
indentation with ifdefs, uh.
For example, when using a radius of 1, only 9 pixels (due to weighting maybe
even less) will be used, but the transform code may still decide to use a
5-dimensional (or even higher) fit.
This causes severe overfitting and therefore weird pixel values.
To avoid this, this commit limits the amount of dimensions to a third of the
pixel number. For a radius of 3 or more, this doesn't change anything, but
for 1 and 2 it can prevent fireflies and/or negative values being produced.
Once again, numerical instabilities causing the Cholesky decomposition to fail.
However, further increasing the diagonal correction just because of a few
pixels in very specific scenes and settings seems unjustified.
Therefore, this commit simply falls back to the basic NLM-filtered pixel
if the more advanced model fails.
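Detecting the failure cleanly is the crux; a hedged sketch of what that can look like (hypothetical 3x3 helper, not the kernel code):

    #include <cmath>

    /* A Cholesky decomposition that reports failure instead of producing
     * NaNs, so the caller can fall back to the NLM-filtered value. */
    bool cholesky3(const float A[3][3], float L[3][3])
    {
      for (int i = 0; i < 3; i++) {
        for (int j = 0; j <= i; j++) {
          float sum = A[i][j];
          for (int k = 0; k < j; k++) {
            sum -= L[i][k] * L[j][k];
          }
          if (i == j) {
            if (sum <= 0.0f) {
              return false;  /* Numerically not positive definite: bail. */
            }
            L[i][i] = sqrtf(sum);
          }
          else {
            L[i][j] = sum / L[j][j];
          }
        }
      }
      return true;
    }

A caller would then use the NLM-filtered value whenever this returns false.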
I wouldn't mind switching fully to Google style, but I am against
mixing two different styles in the same project. So just stick to the
brace on a new line after a function definition.
There were the following issues with ccl_restrict_ptr:
- We already had ccl_restrict for all platforms.
- It was secretly adding a `const` qualifier to the declaration,
which is quite weird since a non-const pointer can also be
declared as restricted.
- We never use foo_ptr or FooPtr type definitions in Blender,
so it is not clear why we should introduce such a thing here.
- It is absolutely wrong from a semantic point of view to fold the
pointer into the restrict macro -- const is part of the type, while
restrict is only a hint to the compiler that the pointer is never aliased.
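For reference, a simplified sketch of how such an abstraction usually looks (not the literal Cycles definitions):

    /* Platform abstraction: MSVC spells the keyword differently. */
    #ifdef _MSC_VER
    #  define ccl_restrict __restrict
    #else
    #  define ccl_restrict __restrict__
    #endif

    /* const remains an explicit part of the type; ccl_restrict only
     * carries the no-aliasing hint. */
    void film_accumulate(const float *ccl_restrict src,
                         float *ccl_restrict dst,
                         int num)
    {
      for (int i = 0; i < num; i++) {
        dst[i] += src[i];
      }
    }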
The denoise commit introduced kernel_write_result(), which saves light passes,
so there is no need to call both kernel_write_result() and
kernel_write_light_passes() from the split kernel.
Weirdly enough, kernel_write_result() does not take care of debug passes.
The issue comes from some weird semi-finished canvas feature, which
was remapping coordinates without applying any differential to the sampling
ellipse (in fact, there is no ellipse; the sampling area is always a single
pixel).
The whole thing is just weak in the compositor; for now, just bring the
behavior back to how it was prior to the optimization (multithreading) commit.
The problem was that Cycles implicitly uses a transparent surface shader when only
volume nodes are used, but since the black emission shader gets optimized away,
it was no longer detected and therefore no transparent surface was used.
Therefore, the shader now stores whether volume nodes were connected before
optimizing.
Extremely bright pixels in the rendered image cause the denoising algorithm
to produce extremely noticeable artifacts. Therefore, a heuristic is needed
to exclude these pixels from the filtering process.
The new approach calculates the 75th percentile of the 5x5 neighborhood of
each pixel and flags the pixel if it is more than twice as bright.
During the reconstruction process, flagged pixels are skipped. Therefore,
they don't cause any problems for neighboring pixels, and the outlier pixels
themselves are replaced by a prediction of their actual value based on their
feature pass values and the neighboring pixels.
Therefore, the denoiser now also works as a smarter despeckling filter that
uses a more accurate prediction of the pixel instead of a simple average.
This can be used even if denoising isn't wanted by setting the denoising
radius to 1.
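A sketch of the heuristic (hypothetical helper on a plain luminance buffer; the real kernel works on the rendered passes):

    #include <algorithm>
    #include <vector>

    /* Flag a pixel as an outlier when it is more than twice as bright as
     * the 75th percentile of its 5x5 neighborhood (clamped at borders). */
    bool is_outlier(const std::vector<float> &lum, int w, int h, int x, int y)
    {
      std::vector<float> vals;
      for (int dy = -2; dy <= 2; dy++) {
        for (int dx = -2; dx <= 2; dx++) {
          int nx = std::min(std::max(x + dx, 0), w - 1);
          int ny = std::min(std::max(y + dy, 0), h - 1);
          vals.push_back(lum[ny * w + nx]);
        }
      }
      size_t idx = (vals.size() * 3) / 4;  /* 75th percentile index. */
      std::nth_element(vals.begin(), vals.begin() + idx, vals.end());
      return lum[y * w + x] > 2.0f * vals[idx];
    }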
The implementation originally handled four different cases:
Regular glossy, glass, metallic fresnel glossy and diffuse.
However, only the first two are actually used currently. Therefore, this commit
removes the other two, which allows the code to be simplified.
Additionally, due to the Principled BSDF, the function arguments are now
identical for glossy and glass, which makes it possible to get rid of some ugly #ifdefs.
Use a smarter check of where the file comes from instead of
attempting to replace the same source twice with different settings.
Brings processing time down from 3.6sec to 1.8sec.
It is not a good idea to:
1. Duplicate metadata to self
2. Ignore the fact that something might have had metadata already.
Also moved metadata copy to a preparation function, so it is
never lost.
The mesh interpolation code had an edge case where one of the two
edges adjacent to a vertex has zero length. This caused an assert
failure when indexing the vertex mesh for the Blenderman.blend splash file.
Found while looking into T49864. The issue is caused here
by render_copy_renderdata() doing a copy of the views with
BLI_duplicatelist(), so we cannot just zero the pointers out.
A similar thing happens for the layers as well.
Fix T50882: VSE: Blend Modes on Scenes do not layer properly
Fix T51002: Scene strip with Alpha over not working as expected
The byte-to-float conversion was being skipped if the color spaces of the
sequence and the scene were the same (which is the default), resulting in
any non-float strips becoming invisible.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D2635
Normally, up to 50 segments is quite enough for most cases.
However, when dealing with things like braids,
the current limit can sometimes be quite a pain.
This commit fixes the crash, but user feedback can be improved here to
inform the artist that one can't use a Render Result as a texture, since
that would cause a feedback loop.
Basically, upon invoking Cycles baking we could cancel it, which would
leave G.is_break hanging as true. Since we were not setting is_break back
to false before executing the bake, it would misbehave.
The issue was caused by a stupid workaround for libav. Now things work for
FFmpeg. Some tweaks might still be needed for Libav, but that one is
not really a priority to support.
The previous method was based on face area, giving uneven results
depending on topology and having issues with zero-area faces.
This method gives matching results for concave ngons and the same geometry triangulated.
The code only updated nodes in the nodetree of the scene to which the render layer belongs. Therefore, when using scene B in the compositor setup of scene A, A's node wouldn't be updated.
With this fix, the update function loops over all scenes and checks them for relevant nodes.
This was preventing updates in the 3D View etc. when changing something in
the World's NodeTree; especially annoying in the blender2.8 branch (since
the legacy depsgraph has been removed there), but also affecting master.
The old code was working quite unreliably in combination with the fast math
flag, especially when compiling with Clang. It seems we were hitting the
result of the following bug submitted to Clang [1].
Basically, (int)sqrtf(64) was 7 when Cycles was built with Clang, but the
correct 8 when built with GCC.
This commit works around that. Annoying, but I don't see another way to
keep the sampling pattern the same for Clang and GCC.
[1] https://bugs.llvm.org//show_bug.cgi?id=24063
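The failure and the workaround idea in a nutshell (the epsilon nudge is illustrative only, not the committed fix):

    #include <cmath>
    #include <cstdio>

    int main()
    {
      /* Under fast math, Clang's sqrtf(64.0f) could come out a few ulps
       * below 8.0f, so plain truncation yielded 7 instead of 8. */
      int truncated = (int)sqrtf(64.0f);

      /* One way to work around it: nudge the value up by a tiny epsilon
       * before truncating, so a result just below the exact integer
       * still maps to it. */
      int nudged = (int)(sqrtf(64.0f) + 1e-4f);

      printf("truncated=%d nudged=%d\n", truncated, nudged);
      return 0;
    }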
The idea here is to keep things in a logical order matching one's workflow.
This concept can be seen in Graph > Dope Sheet > NLA. This issue mainly affects the manual.
Fixes T50709
Differential Revision: https://developer.blender.org/D2630
The goal is to reduce wasted space and improve clarity in the 'N' panel of the VSE through layout changes.
The changes are intentionally conservative to avoid making people re-learn anything.
Author: @mpan3
Differential Revision: https://developer.blender.org/D2439
* "Filmic" and "False Color" view transforms added (sRGB display device only).
* "Very Low/Low/Base/High/Very High Contrast" looks added.
* Added filtering so that Filmic only shows look names prefixed with "Filmic - ".
Filmic Dynamic Range LUT configuration created by Troy James Sobotka with
special thanks and feedback from Guillermo, Claudio Rocha, Bassam Kurdali,
Eugenio Pignataro, Henri Hebeisen, Jason Clarke, Haarm-Peter Duiker, Thomas
Mansencal, and Timothy Lottes.
Differential Revision: https://developer.blender.org/D2659
This commit contains the first part of the new Cycles denoising option,
which filters the resulting image using information gathered during rendering
to get rid of noise while preserving visual features as well as possible.
To use the option, enable it in the render layer options. The default settings
fit a wide range of scenes, but the user can tweak individual settings to
control the tradeoff between a noise-free image, image details, and calculation
time.
Note that the denoiser may still change in the future and that some features
are not implemented yet. The most important missing feature is animation
denoising, which uses information from multiple frames at once to produce a
flicker-free and smoother result. These features will be added in the future.
Finally, thanks to all the people who supported this project:
- Google (through the GSoC) and Theory Studios for sponsoring the development
- The authors of the papers I used for implementing the denoiser (more details
on them will be included in the technical docs)
- The other Cycles devs for feedback on the code, especially Sergey for
mentoring the GSoC project and Brecht for the code review!
- And of course the users who helped with testing, reported bugs and things
that could and/or should work better!
Those shall not be considered while checking whether a to-be-made-local
ID will end up fully local, or still be partially used by linked data...
Even less so since we already have special handling of proxies later.
Fixes main remaining issue found with 04_01_H.lighting.blend Agent327
file, and allows us to switch back to optimized post-processing in
make_local code.
That one tags those ugly little 'from' ID pointers (shape keys and
proxies), which point back from used to user ID, and require a lot of
special care in data-block management...
Again, Agent327's 04_01_H.lighting.blend shows a problem here: it
triggers the 'not used at all' assert in step 5 of the secure code
several times, and with the optimized version we lose the connection
between the rigs and the main characters!
Will keep investigating on this, but for now let's try to give something
working to the studio.
This should not be needed imho, we already set the POSE_RECALC flag
correctly there, but the actual update of poses is still missing in some
(complex and convoluted) cases. So at least for now, let's go with this
hack; it's not really harming anyone anyway.
Fixes crash in Agent327's 04_01_H.lighting.blend when making all local.
Not sure how this happens, but in some cases we can evaluate
deformations of an armature whose pose is not valid; at least put a
warning here to help identify the issue quickly.
The issue was caused by the unlimited textures commit; the root of the issue
is that the displacement code updates some of the image slots directly, so it
needs to ensure the device vectors are all of proper size.
This was set to maxcpu, which on an 8-core box would be 8; each project would
then spawn 8 instances of cl.exe, making for a possible 64 simultaneously
running compiler instances, slowing the compile down instead of speeding it up.
We unfortunately cannot fix this for previous versions of Blender, but
at least the issue (Blender crashing on unknown IDProp types) should now
be addressed for the future.
Simply reset unknown IDProp types to the integer type, and reset their value to zero.
Previously, every RenderPass would have a bitfield that specified its type. That limits the number of passes to 32, which was reached a while ago.
However, most of the code already supported arbitrary RenderPasses since they were also used to store Multilayer EXR images.
Therefore, this commit completely removes the passflag from RenderPass and changes all code to use the unique pass name for identification.
Since Blender Internal relies on hardcoded passes and to preserve compatibility, 32 pass names are reserved for the old hardcoded passes.
To support these arbitrary passes, the Render Result compositor node now adds dynamic sockets. For compatibility, the old hardcoded sockets are always stored and just hidden when the corresponding pass isn't available.
To use these changes, the Render Engine API now includes a function that allows render engines to add arbitrary passes to the render result. To be able to add options for these passes, addons can now add their own properties to SceneRenderLayers.
To keep the compositor input node updated, render engine plugins have to implement a callback that registers all the passes that will be generated.
From a user perspective, nothing should change with this commit.
Differential Revision: https://developer.blender.org/D2443
Differential Revision: https://developer.blender.org/D2444
Reduce thread divergence in kernel_shader_eval.
Rays are sorted in blocks of 2048 according to shader->id.
On R9 290 Classroom is ~30% faster, and Pabellon Barcelone is ~8% faster.
No sorting for CUDA split kernel.
Reviewers: sergey, maiself
Reviewed By: maiself
Differential Revision: https://developer.blender.org/D2598
Previously the logic was different for duplis and regular objects: regular objects
were using render visibility when the Render Layer option is enabled, while duplis
were always using viewport visibility when rendering from the viewport.
This was quite confusing because it caused different results in the viewport and
the render when artists expected them to match 1:1.
This implements branched path tracing for the split kernel.
General approach is to store the ray state at a branch point, trace the
branched ray as normal, then restore the state as necessary before iterating
to the next part of the path. A state machine is used to advance the indirect
loop state, which avoids the need to add any new kernels. Each iteration the
state machine recreates as much state as possible from the stored ray to keep
overall storage down.
It's kind of hard to keep all the different integration loops in sync, so this
needs lots of testing to make sure everything is working correctly. We should
probably start trying to deduplicate the integration loops more now.
Nonbranched BMW is ~2% slower, while classroom is ~2% faster, other scenes
could use more testing still.
Reviewers: sergey, nirved
Reviewed By: nirved
Subscribers: Blendify, bliblubli
Differential Revision: https://developer.blender.org/D2611
Negative scale on camera is a nice trick to invert render image on one
axis at no extra CPU cost. It was implemented in the Decklink branch but
I introduced a typo when porting it to master. It is now fixed.
The change was initially needed for the Blender 2.8 branch, but the actual
function was reverted there. So there is no reason to keep a dead, unused
placeholder in the dependency graph.
This reverts commit fd69ba2255.
Avoid calculating a new split-index when re-fitting.
While checking if a knot can be removed, the index with the highest error
can be used as a candidate to replace the knot
(in the case it can't be removed).
Not sure if this is a proper fix, but I was getting frequent crashes, so
committing this real quick just to make master stable again. Can be
reverted later if there's a better fix. The changes to images really
need a closer look...
Hi guys,
as one of my clients needs the possibility to have custom menu entries in the general right-click menu (all over Blender: in the node editor, properties, toolbars, ...), I talked with Campbell about expanding our hard-coded menu a bit. This is the outcome. As I only need those two, I currently support a button_prop and a button_pointer.
{F540397}
I tested the changes with a custom script where I added a custom entry and executed an operator on click - it seems to work exactly how it's intended to. The script: {F540435}
As I'm not too experienced in rna stuff I would really appreciate any review.
Thanks very much to Campbell for his open ears & help on this issue!
Reviewers: campbellbarton, mont29
Reviewed By: campbellbarton, mont29
Subscribers: sybren, mont29
Tags: #addons
Differential Revision: https://developer.blender.org/D2612
The previous fix did not work for mixed textures. This one will over-allocate
the information array, but it's better than not being able to render at all.
Some more cleanup and improvement is coming.
This patch allows for an unlimited number of textures in Cycles where the hardware allows. It replaces a number of static arrays with dynamic arrays and changes the way the flat_slot indices are calculated. Eventually, I'd like to get to a point where there are only flat slots left and textures of all kinds are stored in a single array.
Note that the arrays in DeviceScene are changed from containing device_vector<T> objects to device_vector<T>* pointers. Ideally, I'd like to store objects, but dynamic resizing of a std::vector in pre-C++11 calls the copy constructor, which for a good reason is not implemented for device_vector. Once we require C++11 for Cycles builds, we can implement a move constructor for device_vector and store objects again.
The limits for CUDA Fermi hardware still apply.
Reviewers: tod_baudais, InsigMathK, dingto, #cycles
Reviewed By: dingto, #cycles
Subscribers: dingto, smellslikedonkey
Differential Revision: https://developer.blender.org/D2650
Previously, canceling a render done by the split kernel could cause artifacts
such as very bright or dark tiles. This was caused by unfinished samples
being included in the output buffer. To avoid this we now wait until all the
currently rendering samples have finished, up to a limit of twice the
expected time for them to finish (currently this is no more than 20 seconds,
but usually it's much less). If samples still haven't finished by then, we
stop anyway in case there's an endless loop occurring.
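The waiting logic boils down to something like this (hypothetical helper, not the actual device code):

    #include <chrono>
    #include <functional>
    #include <thread>

    /* Wait for in-flight samples to finish, but never longer than twice
     * the expected time, in case a kernel loops forever. */
    void wait_for_samples(const std::function<bool()> &samples_done,
                          double expected_seconds)
    {
      using clock = std::chrono::steady_clock;
      const clock::time_point start = clock::now();
      const double limit = 2.0 * expected_seconds;
      while (!samples_done()) {
        const double elapsed =
            std::chrono::duration<double>(clock::now() - start).count();
        if (elapsed > limit) {
          break;  /* Give up; assume an endless loop and stop anyway. */
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
      }
    }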
It was totally unclear whether the device is enabled or disabled.
Lots of people got fully lost in the current interface.
While the solution is not fully ideal, it at least solves the
ambiguity in the interface.
Simple child hairs don't have a face index number assigned, so the
call to dm->getTessFaceData(dm, num, CD_MFACE) would cause a crash. To
work around this, UV and normal vectors are copied from the parent
hair.
I've also removed an unnecessary call to dm->getTessFaceArray(dm);
Reviewers: kevindietrich
Differential Revision: https://developer.blender.org/D2638
The function doesn't return whether the object is a shape at all, since
it also returns true for camera objects (and soon also for empties). It
returns true when objects of this type can be exported to Alembic at all.
This is now reflected in the name.
This works around the long-outstanding issue T50176 with Cycles on MSVC 2015/x86. The root cause is still unknown though; it feels like a game of whack-a-mole.
Reviewers: sergey, dingto
Subscribers: Blendify
Tags: #cycles
Differential Revision: https://developer.blender.org/D2573
Using -cl-fast-relaxed-math assumes there are no NaN/Inf values in any expression.
This causes problems on overflow, division by zero, and square root of a negative number.
Comparisons with NaN or infinite values are affected as well.
This patch causes <2% slowdown on benchmark scenes.
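A tiny illustration of the categories affected (host-side C++ analogue; with -ffast-math the checks below may be optimized into wrong answers, just like with -cl-fast-relaxed-math):

    #include <cmath>
    #include <cstdio>

    int main()
    {
      volatile float zero = 0.0f;
      float inf = 1.0f / zero;    /* Division by zero -> Inf (IEEE). */
      float nan = sqrtf(-1.0f);   /* Square root of a negative -> NaN. */

      /* Under fast/relaxed math the compiler may assume Inf/NaN never
       * occur, so these classification checks become unreliable. */
      printf("isinf=%d isnan=%d\n", (int)std::isinf(inf), (int)std::isnan(nan));
      return 0;
    }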
Fix T50985: Rendering volume scatter with GPU OpenCL comes to a halt after a few seconds
HDF5 Alembic files are not officially supported by Blender. With this
commit, the HDF5 format is detected even when Blender is compiled without
HDF5 support, and the user is given an explanatory error message (rather
than the generic "Could not open Alembic archive for reading").
The final goal is to make vectorized types much easier to maintain;
the previous design had the following issues:
- Having all types and method implementations in one file made the
source rather bloated and unpleasant to navigate.
- It was not possible to quickly glance at the available API for the
type you are interested in.
- Adding more vectorization types would bloat the file even more,
making things even trickier to follow.
Fixes performance issues of the C++ one with Windows MSVC debug builds...
Merely a translation of the msgfmt.cc code by @sergey, using BLI libs instead of C++'s stdlib.
Reviewers: sergey, campbellbarton, LazyDodo
Subscribers: sergey
Differential Revision: https://developer.blender.org/D2605
QtKit was removed in macOS Sierra, this patch disables WITH_CODEC_QUICKTIME
in Sierra and greater versions of macOS.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2645
By mistake, the code relied on ALEMBIC_ROOT_DIR being defined by the user
running the tests. Now CMake macros are used to correctly find the Alembic
root directory.
Matrix.decompose() should either return "location, orientation, size" or
"translation, rotation, scale". Since there are constructors for the former,
I've replaced "location" in the documentation with "translation".
The code is still the same, I just changed the documentation.
The idea is to have some generic BSDF node which is subclassed by
"regular" BSDF nodes and uber shaders.
This way we can access special type and closure type for making
decisions somewhere else.
No longer passing time as float and constructing ISampleSelectors all
over the place. Instead, just construct an ISampleSelector once and
pass it along.
It is disabled by default, so should not affect existing configurations.
The main benefits go as follows:
- Linux distros can use this to avoid library duplication and link the
blender package against the gflags package from the system.
- It is easier to test whether Blender works with an updated version of
Gflags prior to re-bundling the library.
This switches the internal color representation of the eye dropper from display space to linear. Any time a linear color is requested and the color is picked from a linear object, the result is now precise to the bit as the color gets patched through directly. Color space conversion now only happens when a color is picked from non-linear display space objects or when the color is requested to be returned in non-linear space.
In addition, this patch changes the DifferenceMatte node to interpret a tolerance of 0.0 as accepting colors that are identical bit for bit, as opposed to simply refusing all colors.
Replaced some STREQ(snode->tree_idname, ...) calls with ED_node_is_*() calls for improved readability, fixed one case where the STREQ was used the wrong way
That's a quick hack to address that specific case, new pointer IDProp
actually enlights a generic problem - datablocks using themselves - which
is not really handled by current code, would consider this not-so-urgent
TODO though.
The ABC_export and ABC_import functions both take a as_background_job
parameter, and return a boolean.
When as_background_job=true, returns false immediately after scheduling
a background job. This was the old behaviour of this function, which makes
it very hard for scripts to do something with the data after the import
or export completes.
When as_background_job=false, performs the export synchronously, and
returns true when the export was ok, and false if there were any errors.
This allows further processing.
The Scene.alembic_export() function is deprecated, and will be removed from
Blender 2.8 in favour of calling the bpy.ops.wm.alembic_export() operator.
As such, it has been hard-coded to the old background job behaviour.
The export is still slower than needed, as the particle systems themselves
aren't disabled during the export. It's only the writing to the Alembic
file that's skipped.
We could not edit them, but could still delete them, which makes no
sense: API-defined properties are similar to class members, and removing
them from single instances is pure garbage. And it was broken anyway.
Found by @a.romanov while checking on T51198, thanks.
Curve resolution isn't natively supported by Alembic, hence it is stored
in a user property "blender:resolution". I've looked at a Maya curves
example file, but that also didn't contain any information about curve
resolution.
Differential Revision: https://developer.blender.org/D2634
Reviewers: kevindietrich
The order number written to Alembic is the same as we use in memory, so
the +1 wasn't needed, at least according to the reference Maya exporter
maya/AbcExport/MayaNurbsCurveWriter.cpp, function
MayaNurbsCurveWriter::write(), in the Alembic source code.
Furthermore, when writing an array of nurb orders, the curve type should
be set to kVariableOrder, otherwise the importer will ignore it.
commit 90778901c9
Merge: 76eebd9 3bf0026
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Mon Apr 3 07:52:05 2017 +0200
Merge branch 'master' into cycles_disney_brdf
commit 76eebd9379
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Thu Mar 30 15:34:20 2017 +0200
Updated copyright for the new files.
commit 013f4a152a
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Thu Mar 30 15:32:55 2017 +0200
Switched from multiplication of base and subsurface color to blending
between them using the subsurface parameter.
commit 482ec5d1f2
Author: Schoen <schoepas@deher1m1598.emea.adsint.biz>
Date: Mon Mar 13 15:47:12 2017 +0100
Fixed a bug that caused an additional white diffuse closure call when using
path tracing.
commit 26e906d162
Merge: 0593b8c 223aff9
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Feb 6 11:32:31 2017 +0100
Merge branch 'master' into cycles_disney_brdf
commit 0593b8c51b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Feb 6 11:30:36 2017 +0100
Fixed the broken GLSL shader and implemented the Disney BRDF in the
real-time view port.
commit 8c7e11423b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Feb 3 14:24:05 2017 +0100
Fix to comply strict compiler flags and some code cleanup
commit 17724e9d2d
Merge: 379ba34 520afa2
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jan 24 09:59:58 2017 +0100
Merge branch 'master' into cycles_disney_brdf
commit 379ba346b0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jan 24 09:28:56 2017 +0100
Renamed the Disney BSDF to Principled BSDF.
commit f80dcb4f34
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Dec 2 13:55:12 2016 +0100
Removed reflection call when roughness is low because of artifacts.
commit 732db8a57f
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Nov 16 09:22:25 2016 +0100
Indication if to use fresnel is now handled via the type of the BSDF.
commit 0103659f5e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Nov 11 13:04:11 2016 +0100
Fixed an error in the clearcoat where it appeared too bright for default
light sources (like directional lights)
commit 0aa68f5335
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Nov 7 12:04:38 2016 +0100
Resolved inconsistencies in using tabs and spaces
commit f5897a9494
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Nov 7 08:13:41 2016 +0100
Improved the clearcoat part by using GTR1 instead of GTR2
commit 3dfc240e61
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 31 11:31:36 2016 +0100
Use reflection BSDF for glossy reflections when roughness is 0.0 to
reduce computational expense and some code cleanup
Code cleanup includes:
- Code style cleanup and removed unused code
- Consolidated code in the bsdf_microfacet_multi_impl.h to reduce
some computational expense
commit a2dd0c5faf
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Oct 26 08:51:10 2016 +0200
Fixed glossy reflections and refractions for low roughness values and
cleaned up the code.
For low roughness values, the reflections had some strange behavior.
commit 9817375912
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Oct 25 12:37:40 2016 +0200
Removed default values in setup functions and added extra functions for
GGX with fresnel.
commit bbc5d9d452
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Oct 25 11:09:36 2016 +0200
Switched from uniform to cosine hemisphere sampling for the diffuse and
the sheen part.
commit d52d8f2813
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 16:17:13 2016 +0200
Removed the color parameters from the diffuse and sheen shader and use
them as closure weights instead.
commit 8f3d927385
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 09:57:06 2016 +0200
Fixed the issue with artifacts when using anisotropy without linking the
tangent input to a tangent node.
commit d93f680db9
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 09:14:51 2016 +0200
Added a subsurface radius parameter to control the per-color-channel
effective radius of the subsurface scattering.
commit c708c3e53b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Oct 24 08:14:10 2016 +0200
Rearranged the inputs of the shader.
commit dfbfff9c38
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Oct 21 09:27:05 2016 +0200
Put spaces in the parameter names of the shader node
commit e5a748ced1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Oct 21 08:51:20 2016 +0200
Removed code that isn't in use anymore
commit 75992bebc1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Oct 21 08:50:07 2016 +0200
Code style cleanup
commit 4dfcf455f7
Merge: 243a0e3 2cd6a89
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Thu Oct 20 10:41:50 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit 243a0e3eb8
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Thu Oct 20 10:01:45 2016 +0200
Switching between OSL and SVM is more consistent now when using the Disney
BSDF.
There were some minor differences in the OSL implementation, e.g. the
refraction roughness was missing.
commit 2a5ac50922
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 09:17:57 2016 +0200
Fixed a bug that caused transparency to be always white when using OSL and
selecting GGX as distribution of the Disney BSDF
commit e1fa862391
Merge: d0530a8 7f76f6f
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 08:59:32 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit d0530a8af0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 08:53:18 2016 +0200
Cleanup the Disney BSDF implementation and removing unneeded files.
commit 3f4fc826bd
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 27 08:36:07 2016 +0200
Unified the OSL implementation of the Disney clearcoat as a simple
microfacet shader like it was previously done in SVM
commit 4d3a0032ec
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Sep 26 12:35:36 2016 +0200
Enhanced performance for Disney materials without subsurface scattering
commit 3cd5eb56cf
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Sep 16 08:47:56 2016 +0200
Fixed a bug in the Disney BSDF that caused specular reflections to be too
bright and diffuse is now reacting to the roughness again
- A normalization for the fresnel was missing which caused the specular
reflections to become too bright for the single-scatter GGX
- The roughness value for the diffuse BSSRDF part has always been
overwritten and thus always 0
- Also the performance for refractive materials with roughness=0.0 has
been improved
commit 7cb37d7119
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Thu Sep 8 12:24:43 2016 +0200
Added selection field to the Disney BSDF node for switching between
"Multiscatter GGX" and "GGX"
In the "GGX" mode there is an additional parameter for changing the
refraction roughness for materials with smooth surfaces and rough interns
(e.g. honey). With the "Multiscatter GGX" this effect can't be produced at
the moment and so here will be no separation of the two roughness values.
commit cdd29d06bb
Merge: 02c315a b40d1c1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 6 15:59:05 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit 02c315aeb0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Sep 6 15:16:09 2016 +0200
Implemented the OSL part of the Disney shader
commit 5f880293ae
Merge: 630b80e b399a6d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Sep 2 10:53:36 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit 630b80e08b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Sep 2 10:52:13 2016 +0200
Fresnel in the microfacet multiscatter implementation improved
commit 0d9f4d7acb
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Aug 26 11:11:05 2016 +0200
Fixed refraction roughness problem (refractions were always 100% rough)
and set IOR of clearcoat to 1.5
commit 9eed34c7d9
Merge: ef29aae ae475e3
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Aug 16 15:22:32 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit ef29aaee1a
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Aug 16 15:17:12 2016 +0200
Implemented the fresnel in the multi-scatter GGX for the Disney BSDF
- The specular/metallic part uses the multi-scatter GGX
- The fresnel of the metallic part is controlled by the specular value
- The color of the reflection part when using transparency can be
controlled by the specularTint value
commit 88567af085
Merge: cc267e5 285e082
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Aug 3 15:05:09 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit cc267e52f2
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Aug 3 15:00:25 2016 +0200
Implemented the Disney clearcoat as a variation of the microfacet bsdf,
removed the transparency roughness again and added an input for
anisotropic rotations
commit 81f6c06b1f
Merge: ece5a08 7065022
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Aug 3 11:42:02 2016 +0200
Merge branch 'master' into cycles_disney_brdf
commit ece5a08e0d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 16:29:21 2016 +0200
Base color now applied again to the refraction of transparent Disney
materials
commit e3aff6849e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 16:05:19 2016 +0200
Added subsurface color parameter to the Disney shader
commit b3ca6d8a2f
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 12:30:25 2016 +0200
Improvement of the SSS in the Disney shader
* Now the bump normal is correctly used for the SSS.
* SSS in Disney uses the Disney diffuse shader
commit d68729300e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jul 26 12:23:13 2016 +0200
Better calculation of the Disney diffuse part
Now the values for NdotL and NdotV are clamped to 0.0f for a better look
when using normal maps
commit cb6e500b12
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Jul 25 16:26:42 2016 +0200
Now one can disable specular reflections again by setting specular and
metallic to 0 (broke this in the previous commit)
commit bfb9cb11b5
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Jul 25 16:11:07 2016 +0200
fixed the Disney SSS and cleaned the initialization of the Disney shaders
commit 642c0fdad1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Jul 25 16:09:55 2016 +0200
fixed an error that was caused by the missing LABEL_REFLECT in the Disney
diffuse shader
commit c10b484dca
Author: Jens Verwiebe <info@jensverwiebe.de>
Date: Fri Jul 22 01:15:21 2016 +0200
Rollback attempt to fix sss crashing, it prevented crash by disabling sss completely, thus useless
commit 462bba3f97
Author: Jens Verwiebe <info@jensverwiebe.de>
Date: Thu Jul 21 23:11:59 2016 +0200
Add an undef for sc_next for safety
commit 32d348577d
Author: Jens Verwiebe <info@jensverwiebe.de>
Date: Thu Jul 21 00:15:48 2016 +0200
Attempt to fix Disney SSS
commit dbad91ca6d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed Jul 20 11:13:00 2016 +0200
Added a roughness parameter for refractions (for scattering of the rays
within an object)
With this, one can create a translucent material with a smooth surface and
with a milky look.
The final refraction roughness has to be calculated using the surface
roughness and the refraction roughness because those two are correlated
for refractions. If a ray hits a rough surface of a translucent material,
it is scattered while entering the surface. Then it is scattered further
within the object. The calculation I'm using is the following:
RefrRoughnessFinal = 1.0 - (1.0 - Roughness) * (1.0 - RefrRoughness)
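In code form, the formula above is simply:

    float refraction_roughness_final(float roughness, float refr_roughness)
    {
      /* RefrRoughnessFinal = 1.0 - (1.0 - Roughness) * (1.0 - RefrRoughness) */
      return 1.0f - (1.0f - roughness) * (1.0f - refr_roughness);
    }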
commit 50ea5e3e34
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue Jun 7 10:24:50 2016 +0200
Disney BSDF is now supporting CUDA
commit 10974cc826
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 31 11:18:07 2016 +0200
Added parameters IOR and Transparency for refractions
With this, the Disney BRDF/BSSRDF is extended by the BTDF part.
commit 218202c090
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 15:08:18 2016 +0200
Added an additional normal for the clearcoat
With this normal one can simulate a thin layer of clearcoat by applying a
smoother normal map than the original to this input
commit dd139ead7e
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 12:40:56 2016 +0200
Switched to the improved subsurface scattering from Christensen and
Burley
commit 11160fa4e1
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 10:16:30 2016 +0200
Added Disney Sheen shader as a preparation to get to a BSSRDF
commit cee4fe0cc9
Merge: 4f955d0 6b5bab6
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon May 30 09:08:09 2016 +0200
Merge branch 'cycles_disney_brdf' of git.blender.org:blender into cycles_disney_brdf
Conflicts:
intern/cycles/kernel/closure/bsdf_disney_clearcoat.h
intern/cycles/kernel/closure/bsdf_disney_diffuse.h
intern/cycles/kernel/closure/bsdf_disney_specular.h
intern/cycles/kernel/closure/bsdf_util.h
intern/cycles/kernel/osl/CMakeLists.txt
intern/cycles/kernel/osl/bsdf_disney_clearcoat.cpp
intern/cycles/kernel/osl/bsdf_disney_diffuse.cpp
intern/cycles/kernel/osl/bsdf_disney_specular.cpp
intern/cycles/kernel/osl/osl_closures.h
intern/cycles/kernel/shaders/node_disney_bsdf.osl
intern/cycles/render/nodes.cpp
intern/cycles/render/nodes.h
commit 4f955d0523
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 24 16:38:23 2016 +0200
SVM and OSL are both working for the simple version of the Disney BRDF
commit 1f5c41874b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 24 09:58:50 2016 +0200
Disney node can be used without SVM and started to cleanup the OSL implementation
There is still some wrong behavior for SVM for the Schlick Fresnel part at the
specular and clearcoat
commit d4b814e930
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:22:29 2016 +0200
Switched from a parameter struct for Disney parameters to ShaderClosure params
commit b86a1f5ba5
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:19:57 2016 +0200
Added additional variables for storing parameters in the ShaderClosure struct
commit 585b886236
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 12:03:17 2016 +0200
added output parameter to the DisneyBsdfNode
That has been forgotten after removing the inheritance of BsdfNode
commit f91a286398
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 10:40:48 2016 +0200
removed BsdfNode class inheritance for DisneyBsdfNode
That's due to a naming difference. The Disney BSDF uses the name 'Base Color'
while the BsdfNode had a 'Color' input. That caused a text message to be
printed while rendering.
commit 30da91c9c5
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 16:08:10 2016 +0200
disney implementation cleaned
commit 30d41da0f0
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 13:23:07 2016 +0200
added the disney brdf as a shader node
commit 1f099fce24
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 3 16:54:49 2016 +0200
added clearcoat implementation
commit 00a1378b98
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Apr 29 22:56:49 2016 +0200
disney diffuse and specular implemented
commit 6baa7a7eb7
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 15:21:32 2016 +0200
disney diffuse is working correctly
commit d8fa169bf3
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 08:41:53 2016 +0200
added vessel for disney diffuse shader
commit 6b5bab6cec
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:22:29 2016 +0200
Switched from a parameter struct for Disney parameters to ShaderClosure params
commit f6499c2676
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 18 10:19:57 2016 +0200
Added additional variables for storing parameters in the ShaderClosure struct
commit 7100640b65
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 12:03:17 2016 +0200
added output parameter to the DisneyBsdfNode
That has been forgotten after removing the inheritance of BsdfNode
commit 419ee54411
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 17 10:40:48 2016 +0200
removed BsdfNode class inheritance for DisneyBsdfNode
That's due to a naming difference. The Disney BSDF uses the name 'Base Color'
while the BsdfNode had a 'Color' input. That caused a text message to be
printed while rendering.
commit 6006f91e87
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 16:08:10 2016 +0200
disney implementation cleaned
commit 0ed0895914
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Wed May 4 13:23:07 2016 +0200
added the disney brdf as a shader node
commit 0630b742d7
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Tue May 3 16:54:49 2016 +0200
added clearcoat implementation
commit 9f3d39744b
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Fri Apr 29 22:56:49 2016 +0200
disney diffuse and specular implemented
commit 9b26206376
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 15:21:32 2016 +0200
disney diffuse is working correctly
commit 4711a3927d
Author: Pascal Schoen <pascal_schoen@gmx.net>
Date: Mon Apr 18 08:41:53 2016 +0200
added vessel for disney diffuse shader
Differential Revision: https://developer.blender.org/D2313
This makes it possible to use surface-only nodes and connect them to the volume output.
If something was connected to the surface output, those extra connections
will not change anything visually, but will force volume features to be included
in the feature-adaptive kernels.
In fact, this exact reason seems to be causing the slowdown of the Barcelone file
when comparing AMD OpenCL to NVidia CUDA.
Currently this is only supported by final F12 renders because of the current design
of what gets optimized out when, and of how the feature-adaptive kernel accesses
the list of required features.
Reviewers: dingto, nirved, maiself, lukasstockner97, brecht
Reviewed By: brecht
Subscribers: bliblubli
Differential Revision: https://developer.blender.org/D2569
Remapping to itself is nonsense here (it was actually triggering an assert
in BKE_library code); just bail out early in the RNA callback in that case.
Sanitize a bit how the cache path is handled by fluidsim (there is much more
to be done here though :( ), and forbid empty paths (we reset to the default
path relative to the current .blend file in case it's empty).
If people really, really want to use the current OS-wise directory, they can at
least use '.' as the path. ;)
When the options were moved to toolsettings, this part was missed. The problem was not the pointer, as suggested in D2629.
Thanks to Arvīds Kokins for his help fixing this bug.
This avoids the unnecessary creation of a bvhtree, which can be highly inefficient in some cases
(for example: in the `operator_modal_view3d_raycast.py` template).
Unfortunately this does break compatibility in that the viewport will look a
bit different depending on the settings, but the old behavior was simply not
usable for higher distances.
The Object Info node can be useful for giving some variation to a single material assigned to multiple instances. This patch adds support for the Viewport and BI.
{F499530}
Example: {F499528}
Reviewers: merwin, brecht, dfelinto
Reviewed By: brecht
Subscribers: duarteframos, fclem, homyachetser, Evgeny_Rodygin, AlexKowel, yurikovelenov
Differential Revision: https://developer.blender.org/D2425
The U-resolution of the imported curves was kept at the default value
of 12, which is way too high for imported hair. We export hair at a
fairly high resolution already, so it's not needed to subdivide even
further when importing.
Of course this may have an impact on other curves that do require this
U-resolution to be higher. In that case the resolution can be
increased after importing.
I removed the default nu->orderu = num_verts, as that allowed every
point to influence the entire spline, which was more expensive for the
CPU, and unlikely to be needed. The orderu computations had off-by-one
errors in the curve importer, which are now also fixed. The correct
values are:
- Linear: orderu = 2
- Quadratic: orderu = 3
- Cubic: orderu = 4
These values are also what is stored in the Alembic file for curves of
type kVariableOrder, according to the reference Maya exporter
maya/AbcExport/MayaNurbsCurveWriter.cpp, function
MayaNurbsCurveWriter::write(), in the Alembic source code.
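All of these follow from the basic NURBS relation order = degree + 1 (trivial hypothetical helper):

    /* A NURBS curve of polynomial degree d has order d + 1:
     * linear -> 2, quadratic -> 3, cubic -> 4. */
    int nurbs_order_from_degree(int degree)
    {
      return degree + 1;
    }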
The result is a frame rate increase of roughly 100x (tested with one
100-hair test on one machine, so take it with a grain of salt).
This test checks that a set of cubes are exported with the correct
transform, both with flatten=True and flatten=False.
This commit also adds an easy to use superclass for upcoming Alembic
unit tests.
This supports our common character animation workflow, where a character,
its rig, and the custom bone shapes are all part of a group. This group
is then linked into the scene, the rig is proxified and animated. Such
a group can now be exported. Use "Renderable objects only" to prevent
writing the custom bone shapes to the Alembic file.
The absence of datablock properties "will certainly be resolved soon as the need for them is becoming obvious" said the [[http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.67/Python_Nodes|Python Nodes release notes]]. So this patch allows Python scripts to create ID Properties which reference datablocks.
This functionality is implemented for `PointerProperty` and now such properties can be created with Python.
In addition to the standard update callback, `PointerProperty` can have a `poll` callback (standard RNA) which is useful for search menus. For details see the test included in this patch.
Original author: @artfunkel
Alexander (Blend4Web Team)
Reviewers: brecht, artfunkel, mont29, campbellbarton
Reviewed By: mont29, campbellbarton
Subscribers: jta, sergey, campbellbarton, wisaac, poseidon4o, mont29, homyachetser, Evgeny_Rodygin, AlexKowel, yurikovelenov, fjuhec, sharlybg, cardboard, duarteframos, blueprintrandom, a.romanov, BYOB, disnel, aditiapratama, bliblubli, dfelinto, lukastoenne
Maniphest Tasks: T37754
Differential Revision: https://developer.blender.org/D113
We cannot re-use anything for such pools, because we know nothing about whether
the main thread is sleeping or not. So we identify such threads as 0, but we don't
use the main thread's TLS.
This fixes dead-locks and crashes reported by Luca when doing playblasts.
The global size depends on memory usage, which might change during rendering.
Haven't seen it happen, but it seems possible that this could cause the global
size to be different from what was used for allocating buffers.
The issue here is that the preferences are still used because both can be accessed from the 3D View, view menu. In the future, it is likely that the old mode will be removed (maybe 2.8?) but for now we want to keep both operational.
Differential revision: https://developer.blender.org/D2320
Do not recompute both points' 2D coordinates for each segment; we can
copy them over from the previous one... Does not give any measurable
speedup offhand, though.
A couple of things here:
- Boost is not necessarily compiled into your /opt/lib, and a system-wide
version might have been used. The recent change in Alembic did not
take this into account.
- Alembic needs some extra components of Boost.
This part might be missing now for distros other than DEB.
The issue was caused by a recent change in inline policy.
There is some sort of memory corruption happening here; ASAN suggests
it's a stack overflow issue. Not quite sure why it is happening though,
and I was not able to solve anything here yet in the past hours.
Committing a fix which works, with a big TODO note.
The issue is visible on an AVX2 machine when rendering cycles_reports_test.
Doing this in a fully 'clean' way is far from obvious; especially on
unregister, you often end up leaving nasty 'orphaned' keymap items
referring to unregistered operators...
This seems to happen on Windows only; it already happened to Thomas and Nathan.
Thomas was showing a similar patch, but I do not see it committed. So committing
now in order to keep more developers and users happy.
Ever since we merged the extra texture types (half etc.) and the split kernel, the compile time for cycles_kernel has been going out of control.
It's currently sitting at a cool 1295.762 seconds with our standard compiler (2013/x64/release).
I'm not entirely sure why MSVC gets upset with it, but the inlining of matrix near the bottom of the tri-cubic 3D interpolator is the source of the issue; this patch excludes it from being inlined.
This patch brings it back down to a manageable 186 seconds (7x faster!!).
With the attached bzzt.blend that @sergey kindly provided I got the following results with builds with identical hashes:
58:51.73 buildbot
58:04.23 Patched
It's really close; the slight speedup could be explained by the switch instead of multiple ifs (switches do generate more optimal code than a chain of if/else/if/else statements), but in all honesty it might just have been pure luck (dev box, very polluted, bad for benchmarks). Regardless, this patch doesn't seem to slow down anything in my limited testing.
{F532336}
{F532337}
Reviewers: brecht, lukasstockner97, juicyfruit, dingto, sergey
Reviewed By: brecht, dingto, sergey
Subscribers: InsigMathK, sergey
Tags: #cycles
Differential Revision: https://developer.blender.org/D2595
It's possible that cancellation occurred between the creation of the reader
and the creation of the Blender object, in which case reader->object()
returns a NULL pointer.
BKE_libblock_free_us() was called on the object data, which decrements
its user count, after which the same function was called on the object,
which decrements the user count of the object data again. This double
decrement was one too many.
The state mask wasn't applied before comparison, giving false results. It
shouldn't really happen that a ray state contains any flags that need to
be masked away, but if it does happen it's better not to get stuck.
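The fix amounts to masking before comparing; a sketch (the bit layout here is hypothetical):

    #include <cstdint>

    /* Hypothetical layout: low bits hold the state, high bits hold flags. */
    enum { RAY_STATE_MASK = 0x0F };

    bool ray_state_is(uint8_t ray_state, uint8_t state)
    {
      /* Mask away the flag bits before comparing; otherwise a stray flag
       * makes the comparison fail and the ray can get stuck forever. */
      return (ray_state & RAY_STATE_MASK) == (state & RAY_STATE_MASK);
    }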
Previously, a GHash was used to store a flattened mapping of parent
information based on the Alembic hierarchy, and then that hash was used to
set parent pointers on Blender objects. This resulted in errors and
some duplicate objects. The new approach stores parent pointers while
traversing the Alembic hierarchy, which means that there is much more
information about the actual context of the Alembic object itself,
producing a more stable import.
Also replaced the bool param "to_yup" with "AbcAxisSwapMode mode", so that
it's more explicit that axes are swapped.
Also added unittests for create_swapped_rotation_matrix.
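For positions, the z-up to y-up swap boils down to (x, y, z) -> (x, z, -y); a minimal sketch (the real code also handles full 4x4 matrices and camera orientations):

    /* Convert a position vector from Blender's z-up to Alembic's y-up. */
    void vec_zup_to_yup(float r[3], const float v[3])
    {
      r[0] = v[0];
      r[1] = v[2];
      r[2] = -v[1];
    }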
There was a problem with parent-child relations not getting set up
correctly when an Alembic object was both the transform for a mesh object
and the parent of other mesh objects.
convert_matrix() now only converts from Imath::M44d to float[4][4] (taking
different camera orientations into account). Import-time scaling is now
performed by the caller.
Also renamed AbcObjectReader::readObjectMatrix to
setupObjectTransform, as it does more than just reading the object
matrix; it also sets up an object constraint if the Alembic Xform is
animated.
Alembic is an interchange and caching format that can contain custom
object schemas. Blender shouldn't crash (because of failing asserts) just
because it doesn't know such an object type.
It's a mapping from full path of an Alembic object to an AbcObjectReader*.
The fact that at some point it is used to construct parent-child relations
doesn't matter.
The importer was guessing whether an Alembic IXform object was part of a
child object, or should be represented as an Empty in Blender. By reversing
the order in which objects are visited, the children can now claim their
parent as part of the same object (so IPolyMesh claims its parent IXform
as part of the same Blender object). This results in much less guesswork.
I've also removed similar guesswork from the code that sets parent pointers,
by simply searching for the parent in a hierarchical way, instead of trying
to predict (again) which IXforms were turned into empties.
Also, visit_object() now actually visits the object -- previously it only
visited its children, and assumed the object it was called on was already
handled by a previous call.
create_transform_matrix(float[4][4]) did mostly the same as
create_transform_matrix(Object *, float[4][4]), but more elegantly.
However, the former had some inconsistencies with the latter (now
merged and made explicit; it turned out one was for z-up→y-up
while the other was for y-up→z-up), and it was renamed to
copy_m44_axis_swap(...) to convey its purpose more clearly.
Furthermore, "loc" has been renamed to "trans", as matrices don't
store locations but translations; and more variables now have a src_
or dst_ prefix to denote whether they contain a matrix/vector in the
source or destination axis orientation.
AbcExporter::createTransformWriter() tries to predict the parent Xform
name, but if it cannot be found it has multiple ways of creating it, possibly
under a different name than originally searched for.
The idea is to have a system where we properly log error messages and
let users know that errors occurred, redirecting them to the console
for explanations. This is only implemented for the exporter, since the
importer already has similar functionality; however, they shall
ultimately be unified in some way.
Reviewers: sybren, dfelinto
Differential Revision: https://developer.blender.org/D2541
This avoids write access happening in non-atomic manner in
Shader::tag_update which modifies the global managers. Even
for 1 byte data types it's quite dangerous.
The issue comes from the fact that float3 is actually a 16-byte-aligned
data type and the "padding" was not initialized. This caused memcmp() to
access non-initialized memory.
This provides us with a clearer API (so I don't have to use const_cast<>
in upcoming code). It also allows layering of different Alembic files,
so you can have a base file and load a separate file containing overrides.
Verbally approved by Dr. Sergey.
Alembic requires one of ALEMBIC_LIB_USES_BOOST, ALEMBIC_LIB_USES_TR1, or
C++11, and silently defaults to the latter if the former two are OFF.
Before this change, Alembic was only built without C++11 if OpenEXR
was built at the same time. This dependency was both unnecessary and
undocumented.
This function was modifying the texture datablock, which makes it unsafe
to call from multiple threads. Now we pass an argument to the underlying
functions saying that we don't need nodes.
There will still be a race condition in the noise texture, but that should
at least be free from crashes. Doesn't mean we shouldn't fix it, though.
In this case the PyObject gets lost from pybm, and bm.free() does not invalidate the PyElem.
This causes the Python destructor to read invalid memory and crash.
The solution is to make a copy of the PyObject pointers before overwriting.
This area is a subject of reconsideration, so for now use the simplest
way possible -- ensure the depsgraph's nodes have proper layer flags
when going in and out of local mode.
The issue was apparently caused by -fno-finite-math-only added to kernel.cpp
CFLAGS. For now just removed this flag from the kernel (we don't really want
it there at this point, and we don't have it for SSE/AVX optimized kernels).
But surely more investigation is needed here.
The idea is to make include statements more explicit about where the
file is coming from, additionally reducing the chance of a wrong header being
picked up.
For example, it was not obvious whether bvh.h referred to the builder
or the traversal, whether node.h is a generic graph node or a shader node,
and cases like that.
Surely this might look obvious to the active developers, but after some
time of not touching the code it becomes less obvious where a file comes
from.
This was briefly mentioned in T50824 and seems @brecht is fine with such
explicitness, but need to agree with all active developers before committing
this.
Please note that this patch is lacking changes related on GPU/OpenCL
support. This will be solved if/when we all agree this is a good idea to move
forward.
Reviewers: brecht, lukasstockner97, maiself, nirved, dingto, juicyfruit, swerner
Reviewed By: lukasstockner97, maiself, nirved, dingto
Subscribers: brecht
Differential Revision: https://developer.blender.org/D2586
The intention of this commit is to address issues mentioned in the
reports T43865, T50164 and T50452.
The code is based on Embree code, with some extra vectorization
to speed up single-ray-to-single-triangle intersection.
Unfortunately, such a fix does not come for free. There is some
slowdown for AVX2 processors, mainly due to different vectorization
code, which causes a different number of instructions to be executed
and different instructions-per-cycle counters. But on the other hand
this commit makes pre-AVX2 platforms such as AVX and SSE4.1 a bit
faster. The performance goes as follows:
2.78c AVX2 2.78c AVX Patch AVX2 Patch AVX
BMW 05:21.09 06:05.34 05:32.97 (+3.5%) 05:34.97 (-8.5%)
Classroom 16:55.36 18:24.51 17:10.41 (+1.4%) 17:15.87 (-6.3%)
Fishy Cat 08:08.49 08:36.26 08:09.19 (+0.2%) 08:12.25 (-4.7%)
Koro 11:22.54 11:45.24 11:13.25 (-1.5%) 11:43.81 (-0.3%)
Barcelone 14:18.32 16:09.46 14:15.20 (-0.4%) 14:25.15 (-10.8%)
On GPU the performance is about 1.5-2% slower in my tests on a GTX 1080,
but I'm afraid we can't do much about that as part of this change, and
consider it a price to pay for a more proper intersection check.
Made in collaboration with Maxym Dmytrychenko, big thanks to him!
Reviewers: brecht, juicyfruit, lukasstockner97, dingto
Differential Revision: https://developer.blender.org/D1574
That one was:
* Resetting non-ID pointers (lib_link_xxx funcs should only affect ID
pointers, everything else shall be done in direct_link_xxx func).
* Even worse, always calling lib_link_animdata, even when
LIB_TAG_NEED_LINK tag was unset...
We do not need any special handling anymore for usercount of images used
by faces/polygons (tpage stuff), since we have the 'real_user' handling,
which will gracefully cope with all possible situations.
So better not keep that ugly confusing useless special case.
Mainly:
* Add missing `IDP_LibLinkProperty()` calls for many ID types
(harmless currently, but better be consistent here!).
* Bring lib_link_xxx functions more in line with each other.
* Replace some long if/else by switch.
Simplifies code quite a bit, making it shorter and easier to extend.
Currently no functional changes for users, but is required for the
upcoming work of shadow catcher support with OpenCL.
The idea is to accumulate all light potentially reachable along the
light path (without taking shadow blocking into account) and to accumulate
the total shaded light along the path. Dividing the second figure by the
first one seems to give a good estimate of the shadow.
In fact, to my knowledge, it's something really similar to what is
happening in the denoising branch, so we are aligned here which is good.
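A minimal sketch of such an estimator, with hypothetical names (the actual
accumulation in Cycles is spread across the path tracing kernel):

```c
/* Ratio of actually shaded light to all potentially reachable light
 * along the path; names are hypothetical. */
static float shadow_catcher_estimate(float accum_unblocked, float accum_shaded)
{
  if (accum_unblocked <= 0.0f) {
    return 1.0f; /* no light reachable at all, treat as unshadowed */
  }
  float shadow = accum_shaded / accum_unblocked;
  /* clamp, since precision issues could push the ratio outside [0, 1] */
  if (shadow < 0.0f) shadow = 0.0f;
  if (shadow > 1.0f) shadow = 1.0f;
  return shadow;
}
```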
The workflow is as follows:
- Create an object which matches the real-life object on which the shadow
  is to be caught.
- Create an approximately similar material on that object.
  This is needed to make indirect light properly affect CG objects
  in the scene.
- Mark the object as Shadow Catcher in the Object properties.
Ideally, after doing that it will be possible to render the image and
simply alpha-over it on top of the real footage.
Before, Cycles would first sync the shader exactly as shown in the UI, then determine and sync the used attributes and later optimize the shader.
Therefore, even completely unconnected nodes would cause unnecessary attributes to be synced.
The reason for this is to avoid frequent resyncs when editing shaders interactively, but it can still be avoided for non-interactive renders - which is what this commit does.
Reviewed by: sergey
Differential Revision: https://developer.blender.org/D2285
Moving to manual class registration means it's easier to accidentally
miss registering classes.
Now detect missing class registration
and warn when running with `--debug-python`
For Windows 8.1 and X11 (Linux, BSD) now use the DPI specified by the operating
system, which previously only worked on macOS. For Windows this is handled per
monitor, for X11 this is based on Xft.dpi or xrandr --dpi. This should result
in appropriate font and button sizes by default in most cases.
The UI has been simplified to a single UI Scale factor relative to the automatic
DPI, instead of two DPI and Virtual Pixel Size settings. There is forward and
backwards compatibility for existing user preferences.
Reviewed By: brecht, LazyDodo
Differential Revision: https://developer.blender.org/D2539
This adds the ability to switch between different application-configurations
without interfering with Blender's normal operation.
This commit doesn't include any templates,
so it's mostly to allow collaboration for the Blender 101 project
and other custom configurations.
Application templates can be installed & selected from the file menu.
Other details:
- The `bl_app_template_utils` module handles template activation
(similar to `addon_utils`).
- The `bl_app_override` module is a general module
  to assist scripts overriding parts of Blender in a reversible way.
See docs:
https://docs.blender.org/manual/en/dev/advanced/app_templates.html
See patch: D2565
Use fast-math friendly version of this function.
We should probably avoid unsafe fast math, but this is to be done with
real care, with all the benchmarks properly done.
For now, committing a much safer fix.
Ideally we need to find a way to remove such a static limit here, but it's not so
trivial to implement for texture nodes. Requires some bigger system redesign there.
Just raising limit for now, which is fine for modern systems.
It is unused now, and if we want a similar function we should use the
Plücker intersection, which has the same performance with SSE optimization
but is more watertight.
The title says it all actually. Gives up to 10% speedup on test scenes here
on i7-6800K.
Render times on GPU are unreliable here, but there might be some slowdown
caused by watertight nature of intersections.
Avoid construction of temporary array and make utility function force-inlined.
Additionally avoid calling float4_to_float3 twice.
This brings render times back to the same values as before the current patch series.
This is preparation work for the followup commit which will move
remaining parts of the Woop intersection logic to a utility file.
Doing it as a separate commit to keep changes more atomic and easier
to bisect when/if needed.
There are the following benefits:
- Modifying the intersection algorithm will not cause so much re-compilation.
- It works around header dependency hell and allows us to use vectorization
  types much more easily in there.
This removes the goal springs, in favor of simply calculating the goal forces on the vertices directly. The vertices already store all the necessary data for the goal forces, thus the springs were redundant, and just defined both ends as being the same vertex.
The main advantage of removing the goal springs, is an increase in flexibility, allowing us to much more nicely do some neat dynamic stuff with the goals/pins, such as animated vertex weights. But this also has the advantage of simpler code, and a slightly reduced memory footprint.
This also removes the `f`, `dfdx` and `dfdv` fields from the `ClothSpring` struct, as that data is only used by the solver, and is re-computed on each step, and thus does not need to be stored throughout the simulation.
Reviewers: sergey
Reviewed By: sergey
Tags: #physics
Differential Revision: https://developer.blender.org/D2514
This is already called by wm_init_userdef; in old code
different initialization methods were used, but now it's not needed.
It was confusing, since prefs are loaded in this function without initializing temp.
There seems to be a compiler bug in MSVC 2013. The issue does not happen on Linux and
does not happen on Windows when building with MSVC 2015.
Since it's really a pain to debug release builds with MSVC 2013, the AVX2 optimization
is disabled for curve segments for this compiler.
Utility to get a file/dir in the path by index,
supporting negative indices to start from the end of the path.
Without this it wasn't straightforward to get
a file's parent directory name from a filepath.
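An illustrative sketch of the utility (not the actual BLI signature):

```c
#include <stdbool.h>

/* Find the path component at `index`; negative indices count from the
 * end, Python-style. Returns false if the index is out of range. */
static bool path_name_at_index(const char *path, int index,
                               const char **r_start, int *r_len)
{
  for (int pass = 0; pass < 2; pass++) {
    int i = 0, count = 0;
    while (path[i]) {
      while (path[i] == '/' || path[i] == '\\') i++;
      if (!path[i]) break;
      const int start = i;
      while (path[i] && path[i] != '/' && path[i] != '\\') i++;
      if (pass == 1 && count == index) {
        *r_start = path + start;
        *r_len = i - start;
        return true;
      }
      count++;
    }
    if (pass == 0) {
      if (index < 0) index += count;
      if (index < 0 || index >= count) return false;
    }
  }
  return false;
}

/* e.g. path_name_at_index("/a/b/c.txt", -2, &s, &n) yields "b",
 * the parent directory name. */
```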
When the new window didn't end up using the size stored in the preferences
the splash would not be centered (even outside the screen in some cases).
Now centered popups listen for window resizing.
For example, for an RX 480 you'll no longer see "Ellesmere" but will see
"AMD Radeon RX 480 Graphics", which makes more sense and makes it easy to
distinguish exactly which card it is when having multiple different cards
with the Ellesmere codename (e.g. RX 480 and WX 7100) in the same machine.
Previously, the code would only update the status string if the main status changed.
However, the main status did not include the remaining time, and therefore it wasn't updated until the amount of rendered tiles (which is part of the main status) changed.
This commit therefore makes the BlenderSession remember the time of the last status update and forces a status update if the last one was more than a second ago.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D2465
Panel display order depends on registration order,
although this wasn't so obvious since it
only showed up for factory settings and in the preferences window.
Sorry for the noise. On the bright side, we no longer need to move
classes around to re-arrange panels.
Instead of calling a function looping over whole list of a given ID
type, make whole loop over Main in parent function, and call functions
writing a single datablock at a time.
This design is more in line with all other places in Blender where we
handle whole content of Main (including readfile.c), and much more easy
to extend and add e.g. some generic processing of IDs before/after
writing, etc.
From the user's point of view there should be no change at all; the only
difference is that data-block types won't be saved in the same order as
before (.blend file specs enforce no order here, so this is not an issue,
but it could perhaps trip up some third-party users using other, simplified
.blend file readers).
Reviewers: sergey, campbellbarton
Differential Revision: https://developer.blender.org/D2510
Meshes w/o modifiers wouldn't have their derived mesh applied.
The check was to avoid a crash but it's in fact meaningless,
since the modifier might be disabled, or there may be virtual modifiers.
Not sure why I did not put those in from the start... Actually *not* having an
undo point here can be problematic, since undoing some previous action
was trying to restore from a bad pointer (I think) in the UI, generating
asserts.
Note however that it's not a 'pure' undo, in that you may not find your
linked data in exact same state as before deleting it, after an undo,
since it actually implies *reloading* the deleted libraries (and not
restoring from a previously stored memory dump).
Reported by @sergey, thanks.
There is no way currently to prevent the option from showing in menu, so
instead report a warning to user (and curse again current nightmarish
system of operation in outliner...).
Reported by @sergey, thanks.
Pass globals as a bare pointer, same as it used to be prior to the split kernel rework.
The AMD CPU platform and Intel OpenCL were complaining about this.
Perhaps we shouldn't pass globals as a pointer at all; this isn't something that is
really portable and could perhaps cause issues on 32 bit.
Internal change needed for template support.
Loading the user preferences first so it's possible
for preferences to control startup behavior.
In general it's useful to load preferences before data-files,
so we know security settings, for example.
The range is controlled using the following command line arguments:
--cycles-resumable-start-chunk
--cycles-resumable-end-chunk
Those are 1-based indices of the range for rendering.
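A sketch of the assumed chunk-to-sample mapping (the exact division may
differ):

```c
/* Split `num_samples` into `num_chunks` equal parts and compute the
 * sample range covered by the 1-based, inclusive chunk range
 * [start_chunk, end_chunk]. */
static void resumable_sample_range(int num_samples, int num_chunks,
                                   int start_chunk, int end_chunk,
                                   int *r_start_sample, int *r_num_samples)
{
  const int samples_per_chunk = num_samples / num_chunks;
  *r_start_sample = (start_chunk - 1) * samples_per_chunk;
  *r_num_samples = (end_chunk - start_chunk + 1) * samples_per_chunk;
}
```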
The thing I'm really starting to hate is the requirement to specify both
operation code and node type. They seem to be duplicated enums without any
real need for that.
This addresses an issue raised by D2453 -
that there was no way to check if operators are run
multiple times in a row.
Actions that don't cause an UNDO event are still ignored.
Was using some threaded queue on top of the task pool, tssk...
Now properly using the task pool directly to crunch chunks of smooth fans.
No noticeable changes in speed.
Tried to completely get rid of the 'no threading with few loops' code,
but even just creating/freeing the task pool, without actually pushing
any task, is enough to make code 50% slower in worst case scenario (i.e.
few thousands of simple cube objects).
The root of the issue was in custom normal code, so far it assumed that
we could only have one cyclic smooth fan around each vertex, which is...
blatantly wrong (again, the two cones sharing same vertex tip e.g.).
This required a rather deep change in how smooth fans/clnor spaces are processed,
took me some time to find a 'good' solution.
Note that the new code is slightly slower than the previous one (maybe about 5%);
not much to be done here, I'm afraid.
Tested against all older report files I could find, seems OK.
- Add optional 'display_name' callback
so callers can construct own names.
- Add optional 'prop_filepath' argument
(for operators that don't use "filepath").
- Add doc-string.
- Use keyword only arguments.
While this compiler is not officially supported yet, getting it to work is
a nice thing because more and more AMD cards will fall under MESA driver.
It's also nice to use explicit comparison with NULL, which makes it more
clear whether a variable is a boolean or a pointer. Even Rust enforces this!
Patch by Ian Bruce with own modifications.
The mesh convert operator can 'freeze' a mesh
(WYSIWYG, modifiers, shape keys etc).
However it's not very obvious that the way to perform this
operation is to convert a mesh to a mesh.
Expose this as 'Visual Geometry to Mesh' in the 'Apply' menu,
since this is where users might expect to see it.
Use expanded names for bmesh primitive operations
(urmv jvke semv jfke).
Use 'bmesh_kernel_' prefix,
these functions aren't intended for wide use so favor readability.
Remove BM_face_vert_separate,
it wasn't used and only skipped the step of finding the correct loop of the face.
It might be useful to keep the search string stored in some cases, but
in most it's not useful but confusing. Especially if the string is taken
from a menu showing a different enum.
When the loop region passed in had no loops to edge-split from,
it was assumed nothing needed to be done.
This ignored the case where loops share a vertex
without any shared edges.
Now BM_face_loop_separate_multi behaves like BM_face_loop_separate.
Fixed error where faces remained connected by verts in BM_mesh_separate_faces.
This commit adds new features to the breakdowner, giving animators more
control over what gets interpolated by the breakdowner. Specifically:
"Just as G R S let you move rotate scale, and then X Y Z let you do that
in one desired axis, when using the Breakdower it would be great to be
able to add GRS and XYZ to constrain what transform / axis is being
breakdowned."
As requested here:
https://rightclickselect.com/p/animation/csbbbc/breakdowner-constrain-transform-and-axis
Notes:
* In addition to G/R/S, there's also B (Bendy Bone settings) and C (custom properties)
* Pressing G/R/S/B/C or X/Y/Z again will turn these constraints off again
- Connectivity length was overwritten by distance to closest selected.
- Vertices used the 'island' center of the closest vertex,
even if it wasn't connected.
Now optionally keep track of the original index used as the closest
connected distance.
To support this, optional support for islands of 1 vertex needed to be added.
The changes introduced in rB3e628eefa9f55fac7b0faaec4fd4392c2de6b20e
made the non-subframe frame change behaviour less intuitive, by always
truncating downwards instead of rounding to the nearest frame.
This made the UI a lot less forgiving of pointing precision errors
(for example, as a result of hand shake, or using a tablet on a high-res screen).
This commit restores the old behaviour in this case only (subframe inspection
isn't affected by these changes).
Selection loop would draw the selection ignoring xray.
Now draw in a separate pass after clearing the depth buffer,
as with regular drawing.
Also disable depth sorting,
caller can sort the hit-list by depth if needed.
Single program generally compiles kernels faster (2-3 times), loads faster,
takes less drive space (2-3 times), and reduces the number of cached kernels.
Reduces memory allocation for split kernel.
This allows for faster rendering due to a bigger global size,
especially when GPU memory is limited.
Performance results:
R9 290, total render time:

                         Before   After   Change
BMW                      4:37     4:34    -1.1 %
Classroom                14:43    14:30   -1.5 %
Fishy Cat                11:20    11:04   -2.4 %
Koro                     12:11    12:04   -1.0 %
Pabellon Barcelona       22:01    20:44   -5.8 %
Pabellon Barcelona (*)   15:32    15:09   -2.5 %

(*) without glossy connected to volume
Decoupled ray marching is not supported yet.
Transparent shadows are always enabled for volume rendering.
Changes in kernel/bvh and kernel/geom are from Sergey.
This simplifies code significantly, and prepares it for the
record-all transparent shadow function in the split kernel.
Intended to replace legacy GL_SELECT, without the limitations of
sample queries which can't access depth information.
This commit adds VIEW3D_SELECT_PICK_NEAREST and VIEW3D_SELECT_PICK_ALL
which access the depth buffers to detect what's under the pointer,
so initial selection is always the closest item.
The performance of this method depends a lot on the OpenGL
implementation's glReadPixels.
Since reading depth can be slow, buffers are cached for object picking
so selecting re-uses depth data, performing 1 draw instead of 3
(for 24, 18, 10 px regions, picking with many items under the pointer).
Occlusion queries draw twice when picking nearest,
so worst case 6x draw calls per selection.
Even with these improvements, occlusion queries are faster on AMD hardware.
Depth selection is disabled by default, toggle option under select method.
May enable by default if this works well on different hardware.
Reviewed as D2543
The issue was caused by a sometimes-negative color returned by the filter node.
Seems to be caused by precision issues. Don't see any reason why we would want
negative colors in the output; those only cause issues later on.
By calculating the size of the state buffer in the kernel rather than the host
less code is needed and the size actually reflects the requested features.
Will also be a little faster in some cases because of larger global work size.
Because the split kernel can render multiple samples in parallel it is
necessary to have everything initialized before rendering of any samples
begins. The code that normally handles initialization of
`rng_state` (`kernel_path_trace_setup()`) only does so for the first sample,
which was causing artifacts in the split kernel due to uninitialized
`rng_state` for some samples.
Note that because the split kernel can render samples in parallel this
means that the split kernel is incompatible with the LCG.
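A sketch of the idea (hash and state layout hypothetical, not the actual
Cycles RNG):

```c
#include <stdint.h>

static uint32_t hash_u32(uint32_t v)
{
  v ^= v >> 16; v *= 0x7feb352du;
  v ^= v >> 15; v *= 0x846ca68bu;
  v ^= v >> 16;
  return v;
}

/* With samples rendered in parallel, every sample's RNG state must be
 * initialized up front, not just the first one. */
static void rng_states_init(uint32_t *rng_state, int num_samples,
                            uint32_t pixel_index, uint32_t start_sample)
{
  for (int s = 0; s < num_samples; s++) {
    rng_state[s] = hash_u32(pixel_index ^ hash_u32(start_sample + (uint32_t)s));
  }
}
```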
This was only needed for the previous implementation of parallel samples. As
we don't have that any more it can be removed.
Real reason for removal, though, is this: `per_sample_output_buffers` was being
calculated too small and artifacts resulted. The tile buffer is already
the correct size and calculating the size for `per_sample_output_buffers`
is a bit difficult with the current layout of the code. As
`per_sample_output_buffers` was only needed for `sum_all_radiance`,
removing that kernel and writing output to the tile buffer directly
fixes the artifacts.
This is to help debug and track memory usage for generic buffers. We
have similar for textures already, since those require a name, but for
buffers the name is only for debugging purposes.
Simple workaround for some issues we've been having with AMD drivers hanging
and rendering systems unresponsive. Unfortunately this makes things a bit
slower, but it's better than having to do hard reboots. Will be removed when
drivers have been fixed.
Define CYCLES_DISABLE_DRIVER_WORKAROUNDS to disable for testing purposes.
This does a few things at once:
- Refactors host side split kernel logic into a new device
agnostic class `DeviceSplitKernel`.
- Removes tile splitting, a new work pool implementation takes its place and
allows as many threads as will fit in memory regardless of tile size, which
can give performance gains.
- Refactors split state buffers into one buffer, as well as reduces the
number of arguments passed to kernels. Means there's less code to deal
with overall.
- Moves kernel logic out of OpenCL kernel files so they can later be used by
other device types.
- Replaced OpenCL specific APIs with new generic versions
- Tiles can now be seen updating during rendering
Suspended pools allow pushing a huge amount of initial tasks
without any threading synchronization, and hence without overhead.
This gives ~50% speedup of cached rigid body with the file from
T50027 and seems to have no negative effect in other scenes
here.
The idea is to allow some amount of tasks to be pushed from a working
thread to its local queue, so we can acquire some work without doing
a whole mutex lock.
This should allow us to remove some hacks from depsgraph which was
added there to keep threads alive.
This allows us to avoid TLS stored in the pool, which gives us the advantage of
using a pre-allocated task pool for pools created from a non-main thread.
Even on systems with slow pthread TLS it should not be a problem, because
we access it once at pool construction time. If we want to use this more
often (for example, to get rid of push_from_thread) we'll have to do much
more accurate benchmarks.
Basically move all thread-specific data (currently it's only task
memory pool) from a dedicated array of taskScheduler to TaskThread.
This way we can add more thread-specific data in the future with
less of a hassle.
This feature was adding extra complexity to task scheduling,
requiring yet more variables to be modified in an atomic
manner, which resulted in the following issues:
- More complex code to maintain, which increases risks of
something going wrong when we modify the code.
- Extra barriers and/or locks during task scheduling, which
causes extra threading overhead.
- Unable to use some other implementation (such as TBB) even for
the comparison tests.
Notes about other changes.
There are two places where we really had to use that limit.
One of them is the single threaded dependency graph. This will
now construct a single-threaded scheduler at evaluation time.
This shouldn't be a problem because it only happens when using
debugging command line arguments, and the code simply doesn't
run in regular Blender operation.
The code seems a bit duplicated here across the old and new
depsgraph, but I think it's OK since the old depsgraph is already
gone in the 2.8 branch and I don't see where else we might want
to use such a single-threaded scheduler.
When/if we'll want to do so, we can move it to a centralized
single-threaded scheduler in threads.c.
OpenGL render was a bit more tricky to port, but basically we
are using condition variables to wait for the background thread to
do all the job.
This slightly changes SDef behavior, by now respecting object transforms
at bind time, thus not requiring the objects to be aligned in their
respective local spaces, but instead using world space.
When rendering multi-view in side-by-side or top-bottom mode, we squash
the UI to half of its size and draw it twice on screen. That means the
cursor coordinates used for UI interaction don't match what's visible on
screen.
This commit is a little event system hack (tm) to fix this. It has some
small glitches with cursor grabbing, but nothing too bad.
We'll also use it for viewport HMD support.
D1350, thanks for the feedback @dfelinto!
It was only possible to separate all geometry from an intersection or none.
Made this into an enum with a 3rd option, 'Cut' (now the default),
which keeps each side of the intersection separate
without splitting faces in half.
There was a bug in the intended code behaviour to always seek with a
pitch of 1.0 regardless of pitch/pitch animation/doppler effects.
Check the bug report for a more detailed explanation of problems
concerning pitch and seeking.
Comments said that the function was supposed to 'stop worker threads', but
it absolutely did not do anything like that; it was merely wiping out the TODO
queue of tasks from the given pool (kind of a subset of what
`BLI_task_pool_cancel()` does).
Misleading, and currently useless, we can always add it back if we need
it some day, but for now we try to simplify that area.
Freeing a pool was calling `BLI_task_pool_stop()`, which only clears the
pool's tasks that are in the TODO queue, without ensuring no more tasks
from that pool are being processed in worker threads.
This could lead to random (and seldom) use-after-free crashes.
Now use `BLI_task_pool_cancel()` instead, which does wait for all tasks
being processed to finish before returning.
The paint slot name was not the same as what is displayed on the texture properties panel.
Instead, the slot type (e.g. "Diffuse Color") was used as the name.
Patch by Suchaaver (@minifigmaster125) with minor changes from @mont29.
Reviewers: mont29, sergey
Maniphest Tasks: T50704
Differential Revision: https://developer.blender.org/D2523
Can't say enough how much I hate those proxies... their duality (sharing
some aspects of both direct *and* indirect users) is a nightmare to handle. :(
Issue was that the VIEW_OT_manipulator operator calls the transform
operators and passes them its own operator properties. That means the
transform operator got passed properties that it doesn't have.
The custom poll function for surfacedeform_bind seems to have caused
issues when calling it from Python. Fixed by using the generic modifier
poll function, and setting the button to be active or not in the
Python UI code instead. (there might be a better way, but for now this
works fine)
The issue was introduced by 4df75e5 and it seems we just need to explicitly
add a new keymap item now.
There is still some difference from the old behavior, which is that planar
transform has used precision movement since e138cde, and here I don't see a
nice solution currently: the change was requested here in the studio and it's
just a conflict in picking the Shift key for something which is not supposed
to be accurate.
At least now it's possible to invoke the planar constraint and simply release
Shift.
Not really happy with the per-pool threads limit, need to find a better
approach to that. But at least it's possible to get rid of half
of the nastiness here by removing the getter which was only used in
an assert statement.
That piece of code was already well-tested, and this code becomes
obsolete in the new depsgraph and no longer exists in the Blender
2.8 branch.
This was only used for progress report, and it's wrong because:
- Pool might in theory be re-used by different tasks
- We should not make any decision based on scheduling stats
Proper way is to take care of progress by the task itself.
Do a "full" update on leaving sculpt mode, so we are sure scene will be brought
to a consistent state.
Ideally we'll only do that when there are objects which depends on geometry
without re-calculating self geometry, but that's a bit tricky currently.
The idea is to make it simpler to remove noise from scenes when some prop uses
Sharp glossy closure and causes noise in certain cases. Previously Sharp Glossy
was not affected by Filter Glossy at all, which was quite confusing.
Here is a file which demonstrates the issue: {F417797}
After applying the patch all the noise from the scene is gone.
This change also solves fireflies reported in T50700.
Reviewers: brecht, lukasstockner97
Differential Revision: https://developer.blender.org/D2416
Can't see any reason to call AUD exit early in WM_exit; that's a
low-level module that has no dependency on anything else in Blender, but
is a dependency of some other parts of Blender, so it should rather be
exited late in the process!
This adds an option to force fields of type "Force", which enables the
simulation of gravitational behavior (dist^-2 falloff).
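For illustration, an inverse-square falloff of that kind might look like
this (a sketch, not the actual force field code):

```c
/* Gravitational-style falloff: force magnitude scales with 1/r^2.
 * The clamp guards against division by zero at the field origin. */
static float force_gravitational(float strength, float dist)
{
  const float r = (dist > 1e-6f) ? dist : 1e-6f;
  return strength / (r * r);
}
```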
Patch by @AndreasE
Reviewers: #physics, LucaRood, mont29
Reviewed By: #physics, LucaRood, mont29
Tags: #physics
Differential Revision: https://developer.blender.org/D2389
- for rc/release: /api/2.79c/, zip file named blender_python_reference_2.79c_release.zip
- for dev: /api/master/, zip file named blender_python_reference_2_79_4.zip
Please never, ever use the same DNA var for two different things. Even worse
if they do not have the same type and ranges!
This is only asking for issues (as described in the report, but also if
animating both RNA props using the same DNA var... yuck).
And we were not even saving any bytes in DNA, we could reuse some padding
there to store the two new needed vars (yes, two, since we cannot re-use the
existing one if we want to keep backward *and* forward compatibility).
Was actually harmless and not crashing, but I'd say more or less only
by luck: the NULL-check for region data would only evaluate to true for
the correct 3D View region. However, if we were to add region data to a
different region type in future, this would lead to undefined behavior
if executed in the wrong region.
So... Curve+shapekey was even more broken than it looked, this report was
actually a nice crasher (immediate crash in an ASAN build when trying to
edit a curve shapekey with some viewport rendering enabled).
There were actually two different issues here.
I) The less critical: rB6f1493f68fe was not fully fixing issues from
T50614. More specifically, if you updated obdata from editnurb
*without* freeing editnurb afterwards, you had a 'restored' (to
original curve) editnurb, without the edited shapekey modifications
anymore. This was fixed by tweaking again `calc_shapeKeys()` behavior in
`ED_curve_editnurb_load()`.
II) The crasher: in `ED_curve_editnurb_make()`, the call to
`init_editNurb_keyIndex()` was directly storing pointers of obdata
nurbs. Since those get freed every time `ED_curve_editnurb_load()` is
executed, it easily ended up being pointers to freed memory. This was
fixed by copying those data, which implied more complex handling code
for editnurbs->keyindex, and some reshuffling of a few functions to
avoid duplicating things between editor's editcurve.c and BKE's curve.c
Note that the separation of functions between editors and BKE area for
curve could use a serious update, it's currently messy to say the least.
Then again, that area is due to rework since a long time now... :/
Finally, aligned 'for_render' curve evaluation to mesh one - now
editing a shapekey will show in rendered viewports, if it does have some
weight (exactly as with shapekeys of meshes).
This was fixed ages ago for the interface case but not for the
command line. The thing here is that currently external engines
are relying on reports system to indicate that error happened
so suppressing reports storage in the background mode prevented
render pipeline from detecting errors happened.
This is all weak and I don't like it, but this is better than
delivering black frames from the farm.
New logic of split_faces was leaving mesh in a proper state
from Blender's point of view, but Cycles wanted loop normals
to be "flushed" to vertex normals.
Now we do such a flush from Cycles side again, so we don't
leave bad meshes behind.
Thanks Bastien for assistance here!
Finding which loop should share its vertex with which others is not easy
with regular Mesh data (mostly due to lack of advanced topology info, as
opposed to the BMesh case).
Custom loop normals computing already does that - and can return 'loop
normal spaces', which among other things contain definitions of 'smooth
fans' of loops around vertices.
Using those makes it easy to find vertices (and then edges) that need
splitting.
This commit also adds support of non-autosmooth meshes, where we want to
split out flat faces from smooth ones.
The issue seems to be caused by vertex normal being re-calculated
to something else than loop normal, which also caused wrong loop
normals after re-calculation.
For now the issue is solved by preserving CD_NORMAL for loops after
split_faces() is finished, so the render engine can access the original
proper value.
Logic of handling shapekeys when entering and leaving edit mode for
curves was... utterly broken.
Was leaving actual curve data with edited shapekey applied to it.
The release of these arrays should be at the programmer's discretion, since these arrays can continue to be used.
Only the expanded functions `bvhtree_from_mesh_edges_ex` and `bvhtree_from_mesh_looptri_ex` are currently being used in Blender (in mesh_remap.c), and from what I could analyze, these changes can prevent a crash.
A group of object groups can be formed by means of the dupli_group option in
the Object properties window. The present revision extends the Selection by
Group option in the Freestyle Line Set so as to support not only flat object
groups but also nested groups.
We need to first split all vertices before we can reliably
check whether edge can be reused or not.
There is still a known issue happening with an edge-fan mesh
with some faces being on the same plane.
This commit adds a way to debug Cycles motion blur issues which
are usually happening due to something crazy happening in between
of frames. Biggest trouble was that artists had no clue about
what's happening in subframes before they render. This is at
least inefficient workflow when dealing with motion blur shots
with complex animation.
Now there is an option in the Timeline Editor which can be found
in View -> Show Subframe. This option will expose the current frame
with its subframe in the timeline editor header, and it'll allow
scrubbing with subframe precision in the timeline editor.
Please note that none of the tools in Blender are aware of the
subframe, so they'll likely still be using the current integer frame.
This is something we don't consider a bug for now, the whole
purpose for now is to give a tool for investigation. Eventually
we'll likely tweak all tools to be aware of subframe.
Hopefully now we can finish the movie here in the studio..
Now new edges will be properly created between original and
new split vertices.
Now topology is correct, but shading is still not quite right in
some special cases.
Doesn't currently change anything, but will be needed for some future
work here.
It uses existing padding in kernel BVH structure, so there is
nothing changed memory-wise.
The issue here was mainly coming from minimal pixel width feature
which is quite commonly enabled in production shots.
This feature will use some probabilistic heuristic in the curve
intersection function to check whether we need to return intersection
or not. This probability is calculated for every intersection check.
Now, when we use multiple BVH nodes for curve primitives we increase
probability of that primitive to be considered a good intersection
for us. This is similar to increasing minimal width of curve.
What is worse here is that the change in the intersection probability
fully depends on the exact layout of the BVH, meaning the probability might
change differently depending on the view angle, the way the builder
binned the primitives, and such. This makes it impossible to do a
simple check like dividing the probability by the number of BVH steps.
Another solution might have been to split the BVH into fully independent
trees, but that would increase memory usage of all the static
objects in the scenes, which is also not desirable.
For now, the simplest but robust approach is used: store the BVH primitive's
time and test it in the curve intersection functions. This solves the
regression, but has two downsides:
- Uses more memory.
which isn't surprising, and ANY solution to this problem will
use more memory.
What we still have to do is to avoid this memory increase for
cases when we don't use BVH motion steps.
- Reduces the number of maximum available textures on pre-Kepler cards.
  There is not much we can do here, hardware gets old but we need
  to move forward on more modern hardware..
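A sketch of the stored-time test (names illustrative, not the actual
kernel code):

```c
/* Each motion-blur BVH primitive stores the time range of the motion
 * step it was built for; the curve intersection test rejects primitives
 * whose range does not contain the ray time. */
typedef struct BVHPrimTime {
  float time_min, time_max;
} BVHPrimTime;

static int bvh_prim_time_valid(const BVHPrimTime *prim, float ray_time)
{
  return (ray_time >= prim->time_min) && (ray_time <= prim->time_max);
}
```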
The change was delivering broken topology for certain cases.
The assumption that new edge only connects new vertices was
wrong.
Reverting to a commit which was giving correct render results
but was using more memory.
This reverts commit af1e48e8ab.
The bug T46099 no longer applies since the addition of `dist_squared_to_projected_aabb_simple`.
Comments have also been added relating to an occlusion bug with the ruler. I'll investigate this.
When importing an Alembic file with grouped transforms, it would badly name the transforms, taking the name of the parent instead of its own.
Patch by @maxime.robinot
Differential Revision: https://developer.blender.org/D2507
Quite simple fix for now which only deals with this case. Maybe we want to do
some "clipping" on image load time so regular textures wouldn't give NaN as
well.
This allows us to use faster math and still have reliable
isnan/isfinite tests.
Only do it for host side, kernels stays unchanged.
Thanks Lukas Stockner for the tip!
The issue was caused by usage of non-initialized image user, which
could have different settings, causing some random image being loaded
or not loaded at all.
This caused non-deterministic behavior of Cycles image loading because
it was querying image information from several places.
This fixes the crash reported in T50616, but it's not a complete fix
because preview rendering in material is wrong (wrong in the same way as
in the 2.78a release).
We now assert that we know the file version of libraries (needed for
do_version after the linking step), so for missing libraries, set dummy
version numbers (actually using the version of the main .blend file).
Works similar to regular Cycles tests, just does OpenGL render to
get output image.
Seems to work fine, with the only funny effect being that the Blender window
will pop up for each of the tests. This is a current limitation of our
OpenGL context. Might be changed in the future.
Seems CUDA failed to de-duplicate the array across multiple inlined
versions of shadow_blocked(). Helped it a bit with that now.
Gives about 100MB memory improvement on scenes after the previous
commit and brings the memory "regression" down to only 100MB compared to
the master branch now.
This commit enables record-all behavior for transparent shadow
rays.
Render time differences go as follows:
GTX 1080 render time:

BMW                  -0.5%
Fishy Cat            -0.0%
Pabellon Barcelona  -11.6%
Classroom            +1.2%
Koro                -58.6%
Kernel will now use some extra VRAM memory to store the intersection
array (200MB on my configuration). This we can optimize out with some
further commits.
The idea is to record all possible transparent intersections when
shooting transparent ray on GPU (similar to what we were doing on
CPU already).
This avoids the need of doing whole ray-to-scene intersection queries
for each intersection, and speeds up a lot of cases like transparent
hair, at the cost of extra memory.
This commit is a base ground for now, and this feature is kept
disabled until some further tweaks.
Now we break the traversal cycle and then perform volume attenuation
and check with zero throughput. Not sure it makes any measurable sense
at this moment, but in the future it might help de-duplicating some
extra logic here.
Removed unnecessary call to DM_update_tessface_data(). This call is
already performed by DM_ensure_tessface(dm). The call being performed
twice caused a failing BLI_assert().
Reviewed by: Kévin Dietrich
With the new names the arguments (yup, zup) are in the same order as
they appear in the function name. The old names used copy_src_dst(dst,
src), which I found very confusing. Furthermore, now it is clear from
where to where the copy is made.
This makes the function names a little bit longer, though. If that is
a real issue, we can just name them zup_from_yup(zup, yup).
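For illustration, the conversion under the new naming might look like this
(axis mapping assumed from Alembic's Y-up and Blender's Z-up conventions):

```c
/* Z-up destination from Y-up source: arguments read in the same order
 * as the function name. */
static void copy_zup_from_yup(float zup[3], const float yup[3])
{
  const float old_yup1 = yup[1]; /* in case zup aliases yup */
  zup[0] = yup[0];
  zup[1] = -yup[2];
  zup[2] = old_yup1;
}
```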
Reviewed by: Kévin Dietrich
Speedup is mainly gained by multi-threading. Gives about 3x
fps gain on an edit shot file.
There is still some room for improvements, will happen in one
of the upcoming commits.
Derived mesh for particles did not include tessellated faces when it
was expected to. Now added explicit function to copy CDDM with tess
faces without need to re-tessellate the result.
It is not necessary to add MOTO library dependency when we use
WITH_IK_SOLVER (now it uses Eigen) or we use WITH_MOD_BOOLEAN (it was
used by bsp intern library some time ago but it is not present in the
code anymore).
Reviewers: mont29, sergey
Subscribers: mont29, sergey
Differential Revision: https://developer.blender.org/D2477
No reason to not make this private to this file, and it gave conflict
when using bpy as module and loading it in a GLib application (which
also has a g_atexit var).
The title says it all actually. Use BLI task to loop over vertices
and distort their locations. Gives 2x FPS increase in a file with
just time-dependent displace modifier on my desktop.
This version gives fewer spin locks and is now well-tested by render engines.
This should reduce amount of threading overhead when having multiple objects
with displace modifier enabled.
In the future this will also help us threading the modifier.
There are more modifiers which could benefit from this, but let's first
investigate the new behavior with one of them.
We (the Microsoft C++ team) use the Blender project as part of our "Real world code" tests.
I noticed a place in WIN32 specific code (dvpapi.cpp:85) where a string literal is losing
its const-ness when being passed to BLI_dynlib_open(). This is not permitted when using the
/permissive- conformance compiler switch (see our blog
https://blogs.msdn.microsoft.com/vcblog/2016/11/16/permissive-switch/)
My suggested fix is to add const and propagate it where needed. Another possible fix would be
to explicitly cast away the const.
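A self-contained illustration of the conformance issue (names illustrative,
not the actual BLI_dynlib_open signature):

```c
/* Under /permissive- (and standard C++), a string literal cannot bind
 * to a non-const char * parameter. */
static void dynlib_open(char *name) { (void)name; }             /* old */
static void dynlib_open_fixed(const char *name) { (void)name; } /* fixed */

static void example(void)
{
  /* dynlib_open("dvp.dll"); */  /* rejected under /permissive- */
  dynlib_open_fixed("dvp.dll");  /* OK: const-ness preserved */
}
```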
Reviewers: mont29, sergey, LazyDodo
Subscribers: Blendify, sergey, mont29, LazyDodo
Tags: #platform:_windows
Differential Revision: https://developer.blender.org/D2495
Instead of referencing the vertex first and testing the bitmap afterwards, test the bitmap first and reference the vertex after.
In a mesh with 31146 vertices and the entire bitmap disabled, the loop time is 243% faster.
With the whole bitmap enabled, the time becomes 463473% faster!!!
One possible reason for this huge difference in performance is that maybe the compiler is not inlining the function "BM_vert_at_index" (I don't know if the buildbot does this, but it's good to investigate).
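The reordering, sketched with the names from the commit (loop body
illustrative):

```c
/* before: reference first, test after */
for (int i = 0; i < totvert; i++) {
  BMVert *v = BM_vert_at_index(bm, i); /* potentially not inlined */
  if (!BLI_BITMAP_TEST(bitmap, i)) continue;
  /* ... use v ... */
}

/* after: test the cheap bitmap first, so the vertex lookup is skipped
 * entirely for disabled indices */
for (int i = 0; i < totvert; i++) {
  if (!BLI_BITMAP_TEST(bitmap, i)) continue;
  BMVert *v = BM_vert_at_index(bm, i);
  /* ... use v ... */
}
```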
Looks like `object_map` and `mem_arena` may be NULL sometimes...
Also, cleaned up function pointers declaration of Nearest2dUserData,
those were warning out in gcc. Please, *always* use typedef-defined
prototypes for function pointers, it is sooooo much cleaner and clearer
that way. And easy to convert from compatible functions too.
BKE_lamp_free was somehow missing the refactor of datablocks handling
(which, among other things, completely separated ID refcounting and
linking management from ID freeing itself).
Either forgot during development, or lost during merge...
The code looks for the closest element between centers. In the case of islands, the center of each vertex is the center of the island.
The solution here is to skip the search for islands when the operation is a translation.
Pretty straight forward actually, just do not bother about obdata part
of vgroups in that case, only copy object part of it.
And let's curse once again that stuff spread across several types of
data-blocks...
The order was wrong from the semantic point of view, caused
by some legacy workarounds in Libmv. Didn't realize it was
not how things were expected to be used.
Issue was indeed in join operation, mesh in which we join all others
could be re-added to final data after others, leading to undesired
re-ordering of CD layers, and existing vertices etc. being shifted away
from their original indices, etc.
All kind of more or less bad and undesired changes, fixed by always
re-inserting destination mesh first.
Also cleaned up a bit that code, it was doing some rather
non-recommended things (like allocating zero-sized mem, doing its own
cooking to remove a data-block from main, etc.).
Tricky issue caused by CDDM_copy() copying the MFACE array but not MTFACE,
which confused logic later on.
Now we don't copy ANY tessellation unless requested.
Thanks Bastien for help and review!
'page' prop of scroll up/down operators would get stuck once set by
pageup/down keys... Now only take this prop into account if explicitly
set, not when its value is inherited from a previous run.
Strangely this change does not affect the performance very much.
Suzanne subdivided 6x (ortho view):
  Before: 0.00013983
  After:  0.00013920
But it makes the code easier to read.
When the function that tests snap on multiple elements starts from the face and ends at the vertex, the transition between elements becomes much smoother.
Better to have a clear way to tell whether a flag is a parameter for
BKE_library_foreach_ID_link(), a parameter for its callback function, or a
return value from this callback function.
Taking advantage of the area, the depth is decreased by 0.01 BU on each loop to give priority to elements in the order: Vertex > Edge > Face. This increases the threshold and improves snapping to multiple elements.
The previous solution took arbitrary values to determine if the mouse was near the Bound Box or not (it simply scaled the Bound Box).
Now the same function that detects the distance from BVHTree nodes to the mouse is used for the Bound Box.
This revision extends the functionality of the "Fill Range by Selection" button in
the "Distance from Camera/Object" modifiers so that only selected mesh vertices
in the edit mode are taken into account (instead of considering all vertices when
in the object mode) to compute the min & max distances from the reference.
This will give users much finer control on the range values.
Use new Main->relations ID usages mapping in BKE_library_make_local().
This allows a noticeable simplification in code, and can be up to twice as
quick as the previous code (Make Local: All going from 2 to 1 minute e.g. in
a huge production file with thousands of linked data-blocks).
Note that the new code has been successfully tested with several complex cases
(production files from Agent 327), as well as some testcases from recent
bug reports related to that function. But as always, nothing beats real
usage by real users, so please check this before we release 2.79. ;)
Main areas that would be affected: Make Local operations (L shortcut in
3DView), and append from libraries.
Use Main->relations in BKE_library_foreach_ID_link(), when possible
(i.e. IDWALK_READONLY is set), and if the data is available of course.
This is a quite minor optimization; no sensible improvements are expected,
but it does not hurt either to avoid potentially tens of loops over e.g.
objects' constraints and modifiers, or heaps of drivers...
The new MainIDRelations stores two mappings, one from ID users to ID
used, the other vice-versa.
That data is assumed to be short-lived runtime data; code creating it is
responsible for clearing it asap. It will be most useful in places where we
handle relations between IDs for a lot of them at once.
Note: This commit is not fully functional, that is, the infamous, ugly,
PoS non-ID nodetrees will not be handled correctly when building relations.
Fix needed here is a bit noisy, so will be done in next own commit.
This provides a slight improvement in performance in specific cases, such as when the observer is inside a high poly object and executes snap to edge or vertex
Distance calculation performed by the "Fill Range by Selection" button of the
"Distance from Camera" color, alpha and thickness modifiers was incorrect,
limiting the usefulness of the functionality.
The problem was that the distance between the camera and individual vertex
locations was calculated in the world space, which was inconsistent with the
distance calculation done by the modifiers in the camera space.
The new `isect_ray_aabb_v3_simple` function replaces `BKE_boundbox_ray_hit_check` and can be used on the BVHTree root (first AABB), so it is much more efficient.
In order to simplify the reading of these functions, the parameters: `snap_to`, `mval`, `ray_start`, `ray_dir`, `view_proj` and `depth_range` are now stored in the struct `SnapData`
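A sketch of a simple ray/AABB slab test along those lines (not the exact
implementation):

```c
/* `dir_inv` holds the per-axis reciprocal of the ray direction. */
static int isect_ray_aabb_simple(const float origin[3], const float dir_inv[3],
                                 const float bb_min[3], const float bb_max[3])
{
  float tmin = 0.0f, tmax = 1e30f;
  for (int i = 0; i < 3; i++) {
    float t1 = (bb_min[i] - origin[i]) * dir_inv[i];
    float t2 = (bb_max[i] - origin[i]) * dir_inv[i];
    if (t1 > t2) { const float tmp = t1; t1 = t2; t2 = tmp; }
    if (t1 > tmin) tmin = t1;
    if (t2 < tmax) tmax = t2;
    if (tmin > tmax) return 0; /* slabs do not overlap: no hit */
  }
  return 1;
}
```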
Checking only whether mverts is the same as the base mesh's is not enough in
all cases; some modifiers (deform ones) can generate only new mvert
data, while keeping the others from the original mesh.
Now checking both mvert and medge; hopefully this will be enough to catch
all problematic cases this time.
Thanks @gaia for finding that problem. :)
Although the "BLI_bvhtree_find_nearest_to_ray" function is more practical than the generic "BLI_bvhtree_walk_dfs", it does not work to snap in perspective view. This makes it necessary to add "ifs" and functions that make the code difficult to understand
patch: D2474
This is a speed-up option which is mainly useful for the viewport. Gives a
nice 2x speedup in the barbershop scene when replacing GI with AO after the
2nd bounce, without losing too much detail.
Reviewers: brecht
Subscribers: eyecandy, venomgfx
Differential Revision: https://developer.blender.org/D2383
We are not bumping file version, but we cannot have the doversion code running twice.
In this particular case it was crashing files, since we were setting node->storage to NULL, and later on accessing it.
The idea was to link something to a parent, but the point is:
we must not pass owner deep and then have any parent-type-related
logic implemented in the "children".
This is much more flexible solution which will allow doing some
more procedural features.
Reviewers: brecht, dfelinto, mont29
Reviewed By: mont29
Subscribers: Severin
Differential Revision: https://developer.blender.org/D2403
The freestyle data was never freed when removing a renderlayer.
```
blender -b --factory-startup --debug-memory --python-expr "import bpy;bpy.ops.scene.render_layer_add();bpy.context.scene.render.layers.active_index=0;bpy.ops.scene.render_layer_remove()"
```
Currently the tests don't run on Windows for the following reasons:
1) render_graph_finalize has a linking issue due to missing a bunch of libraries (not sure why this is not an issue for Linux)
2) This one is more interesting: in test/python/cmakelists.txt ${TEST_BLENDER_EXE_BARE} and ${TEST_BLENDER_EXE} are flat out wrong, but for some reason this doesn't matter for most tests, cause ctest will actually go out and look for the executable and fix the path for you *BUT* only for the command; if you use them in any of the parameters it'll happily pass on the wrong path.
3) On Linux you can just run a .py file; Windows is not as awesome and needs to be told to run it with python.
4) Had to use the NAME/COMMAND long form of add_test, otherwise $<TARGET_FILE:blender> doesn't get expanded. Why? Beats me.
5) Missing idiff.exe for msvc2015/x64 in the libs folder.
This patch addresses 1-4, but given I have no working Linux build environment, I'm unsure if it'll break anything there
5 has been fixed in rBL61751
Reviewers: juicyfruit, brecht, sergey
Reviewed By: sergey
Subscribers: Blendify
Tags: #cycles, #automated_testing
Differential Revision: https://developer.blender.org/D2367
Blender's baking system currently doesn't support the topology used by
adaptive subdivision, and primitive ids will be wrong or out of range,
leading to crashes. Updating the baking system to support other
topologies would be a bit involved, so for now we simply disable
subdivision while baking to avoid crashes.
We started to run out of bits there, so now we separate flags
which came from __object_flags and which are either runtime or
coming from __shader_flags.
Rule now is: SD_OBJECT_* flags are to be tested against new
object_flags field of ShaderData, all the rest flags are to
be tested against flags field of ShaderData.
There should be no user-visible changes, and the time difference
should be minimal. In fact, from tests here I can only see a hardly
measurable difference, and sometimes the new code is somewhat
faster (all within the noise floor, so hard to tell for sure).
Reviewers: brecht, dingto, juicyfruit, lukasstockner97, maiself
Differential Revision: https://developer.blender.org/D2428
Cycles add-on did not actually support reloading correctly.
When you want to correctly reload sub-modules (i.e. modules of an add-on
which is a package), you need to use importlib, a mere import will do
nothing with already loaded modules (RNA classes are sort of
pre-registered when they are evaluated, through the meta-class system).
New options to define the style of the animation paths in order to get
better visibility in complex scenes.
Now it is possible to define the color, thickness and several options
relating to the style of the lines used to draw the motion path.
This way we can stop traversing BVH nodes early on.
Gives about 2-2.5x times render time improvement with 3 BVH steps.
Hopefully this gives no measurable performance loss for scenes with
single BVH step.
Traversal is currently only implemented for QBVH, meaning old CPUs
and GPU do not benefit from this change.
Similar to the previous commit, the statistics go as follows:
BVH Steps   Render time (sec)   Memory usage (MB)
0           46                  260
1           27                  373
2           18                  598
3           15                  826
Scene used for the tests is the agent's body from one of the barber
shop scenes (no textures or anything, just a diffuse material).
Once again this is limited to regular (non-spatial split) BVH,
Support of spatial split to this feature will come later.
The idea is to create several smaller BVH nodes for each of the motion
curve primitives. This acts as a forced spatial split for the single
primitive.
This gives a render time speedup for motion blurred hair at the cost
of extra memory usage. The numbers go as follows:
BVH Steps   Render time (sec)   Memory usage (MB)
0           258                 191
1           123                 278
2           69                  453
3           43                  627
Scene used for the tests is the agent's hair from one of the barber
shop scenes.
Currently it's only limited to scenes without spatial split enabled,
since the spatial split builder requires some changes to work properly
with motion steps coordinates.
Also fixed some issues with motion keys calculation:
- Clamp lower and upper limits of curves so we can safely call those
functions for the very first and very last curve segment.
- Fixed wrong indexing for the curve radius array.
- Fixed wrong motion attribute offset calculation.
Mimics how regular triangles are working and makes it more clear where
the stuff is located in the kernel.
Needed to have some forward declarations because of the current placement
of things in the kernel.
Following @AlonDan's feature request and @hjalti's screenshot yesterday,
I've decided to implement support for this to make it easier to scan which
keyframes correspond with which set of controls, especially when faced with
a large wall of keyframes.
In retrospect, I should've done this a long time ago!
Was a bit confusing to have transparent and translucent depth
exposed but no diffuse or glossy.
Reviewers: brecht
Subscribers: eyecandy
Differential Revision: https://developer.blender.org/D2399
This is important for the reliable behavior of isnan/isfinite/min/max
functions to work with NaN and non-finite values. Some of the issues
with fast math are possible to work around, but didn't find a way to
have reliable min/max implementation yet.
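One way to keep the NaN check reliable, as a sketch: test the bit pattern
directly, which fast-math cannot fold away.

```c
#include <stdint.h>
#include <string.h>

/* Under -ffast-math the compiler may assume NaNs never occur and fold
 * isnan() to false; a bit-level test avoids that. */
static int is_nan_safe(float x)
{
  uint32_t bits;
  memcpy(&bits, &x, sizeof(bits));
  return (bits & 0x7f800000u) == 0x7f800000u && (bits & 0x007fffffu) != 0;
}
```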
Please NEVER EVER use such a statement, it's only causing HUGE
issues. What is even worse: it's not always possible to immediately
see that the hell is coming from such a statement.
There are still some statements in the existing code, will leave
those for a later cleanup.
- flushing hidden state ran when it didn't need to.
- flushing checks didn't early exit when first visible element found.
- low level BM_*_hide API calls like this can skip iterators and
  loop over struct members directly.
No user-visible changes.
- face-create-extend option could add hidden verts and edges into
the selection history (invalid state).
- faces could be created that included existing hidden edges
that remained hidden (invalid state too).
- newly created faces could copy hidden flag from surrounding faces,
giving very confusing results (looks as if face creation failed).
Surprising nobody noticed these years-old bugs!
Experimental option for the Reproject Strokes operator to project strokes on to
geometry, instead of only doing this in a planar (i.e. parallel to viewplane) way.
The current implementation is quite rough, and may need to be improved before it
is really ready for use. Potential issues:
* Loss of precision (i.e. stairstepping artifacts) from the 3D -> 2D -> 3D conversion
as we don't have float version of one of the projection funcs
* Jagged depth if there are gaps, since it will default back to the 3d-cursor plane
if no geometry was found (instead of doing some fancy interpolation scheme)
* I'm not sure if it's that useful for adapting GP strokes to deforming geometry yet...
Now the eraser checks if there's an active frame with some strokes in it
before creating a new frame. There's no point in creating a new frame if
there are no strokes in the active frame (if one exists).
This still doesn't help much if there were strokes but they weren't touched though...
This operator adds a new frame with nothing in it on the current frame.
If there is already a frame there, all existing frames are shifted one frame later.
Quite often when animating, you may want a quick way to get a blank frame,
ready to start drawing something new. Or maybe you just need a quick way to
add a "placeholder" frame so that a suddenly-appearing element does not show
up before its time.
If the layers or the colors were renamed, the animation data was wrong
because the data path was not updated.
I have also fixed a possible stroke color name update issue when the name
was duplicated, by moving the rename function call after the unique-name
check.
Avoids possible jumps when one is trying to do some really precise tweak.
Quite a straightforward change for mouse input initialization: take Shift
state into account. However, this will interfere with the axis exclusion,
which currently also uses Shift (the feature to move something in a
plane which doesn't have the selected axis). This is probably not such a
commonly used feature (nobody in the studio even knew of it), and the only
downside now would be that such a constrained movement will become accurate
by default. That's easy to deal with from the user side by just releasing
the Shift key.
Reviewers: brecht, mont29, Severin
Differential Revision: https://developer.blender.org/D2418
To make it faster to try different interpolation curves, there's a new operator
"Remove Breakdowns" which will delete all breakdowns sandwiched by normal
keyframes (i.e. all the ones that the previous run of the Interpolation op created)
This commit introduces the ability to use the Robert Penner easing equations
or a Custom Curve to control the way that the "Interpolate Sequence" operator
interpolates between keyframes. Previously, it was only possible to get linear
interpolation between the gp frames.
Workflow:
1) Place current frame between a pair of GP keyframes
2) Open the "Interpolate" panel in the Toolshelf
3) Choose the interpolation type (under "Sequence Options")
4) Adjust settings (e.g. if you're using "Custom Curve", use the curvemap widget
to define the way that the interpolation proceeds)
5) Click "Sequence" to interpolate
6) Play back/scrub the animation to see if you've got the result you want
7) If you need to make some tweaks, undo, or delete the generated keyframes,
then repeat the process again from step 4 until you've got the desired result.
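For scripted use, here is a minimal sketch of the same workflow driven from Python (assuming the operator is exposed as `bpy.ops.gpencil.interpolate_sequence`; the operator name and frame value are assumptions for illustration):
```
import bpy

# Hedged sketch: run the "Interpolate Sequence" step from Python.
# Assumes the current frame sits between two grease pencil keyframes.
scene = bpy.context.scene
scene.frame_set(15)  # hypothetical frame between two GP keyframes
bpy.ops.gpencil.interpolate_sequence()
```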
The "gp_sculpt" settings should be strictly for stroke sculpting, and not abused by
other tools. (Similarly, if other general GP tools need one-off options, those should
go into the normal toolsettings->gpencil_flag)
Furthermore, this paves the way for introducing new settings for controlling the way
that GP interpolation takes place (e.g. with easing equations, or a custom curvemap)
* Reshuffled some blocks of code for better ease of navigation/flow in the file
* Improved some tooltips
* Removed "Helper" tag from some functions that serve bigger roles
* Fixed some errant formatting
The interpolation operators (and their associated code) occupied a significant
portion of gpencil_edit.c (which was getting a bit heavy). So, it's best to split
these out into a separate file to make things easier to handle, in preparation
for some further dev work.
Things like `BLI_uniquename` had nothing, but really nothing to do in
BLI_path_util files!
Also, got rid of the length limitation in `BLI_uniquename_cb`; we can use
alloca here to avoid the overhead of malloc while keeping the size
unconstrained (within reasonable limits of course).
Just store bones that could not get renamed to desired flipped name on the
first try into a temp list, and try to rename them a second time.
This is a rather simple solution; it will induce 'over-numbering' in case you
flip a bone to another unselected bone's name (since the number will be
incremented in both rename attempts), but think this is an acceptable minor
glitch for a corner-case situation that does not have any good
resolution anyway.
Also, set `strip_numbers` option of `BKE_deform_flip_side_name` to
false, otherwise chains of bones with same names would get their numbers
completely messed up after name flipping.
Based on work by @dfelinto in D2456 (https://developer.blender.org/D2456), thanks.
This adds two functions to project 3d coordinates onto a 3d plane,
to get 2d coordinates, essentially eliminating the plane's normal axis
from the coordinates.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2460
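As a rough Python illustration of the idea (hypothetical helper name, not the actual C functions), the projection amounts to building a 2D basis in the plane and dotting against it:
```
from mathutils import Vector

# Hedged sketch: project a 3D point onto a plane to get 2D coordinates,
# eliminating the plane's normal axis. Helper name is hypothetical.
def project_onto_plane_2d(p, plane_co, plane_no):
    n = plane_no.normalized()
    u = n.orthogonal().normalized()  # arbitrary axis perpendicular to n
    v = n.cross(u)                   # second in-plane axis
    d = p - plane_co
    return Vector((d.dot(u), d.dot(v)))
```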
It is quite likely in a triangulated mesh that the actual island edge
belongs to a different triangle than the current pixel; for example
consider corners of a triangulated axis aligned rectangle face that
have the additional edge: a pixel there will have to be assigned to
one of the triangles, but one of the edges of the original rectangle
can only be accessed through the other triangle.
Thus for robust operation it is necessary to do a recursive search.
The search is limited by requiring that it only goes through edges
that bring it closer to the target point, and also by depth as a
safeguard.
Differential Revision: https://developer.blender.org/D2409
The code requires the pixel on the other side of the seam to be assigned
precisely to the expected triangle. This can cause false negatives around
vertices, where a pixel is likely to touch multiple triangles and thus
cannot be said to unambiguously belong to any one of them, so check
distance to the intended triangle and accept the result if it's close.
1. Forcibly symmetrize the neighbor relations, so that if A is neighbor
of B, B is neighbor of A. The existing code is guaranteed to violate
this if texture resolution is different between the sides of a seam.
2. In texture mode dynamic paint adds a 1 pixel wide border around the
islands. These pixels aren't really part of the dynamic paint domain
and thus by design can't have symmetrical neighbor relations. This
means they can't be treated by effects like normal pixels.
The simplest way to handle it in a consistent way is to exclude
them from effects, but add an additional pass that recomputes them
as average of their non-border neighbors, located on both sides of
the seam.
This avoids intersecting AABBs of different curve primitives,
which results in fewer ray-to-primitive intersections.
This gives about 30% speedup of hair rendering in the barber
shop scenes here. There is still some work to be done on those
files to solve major speed issues on certain frames.
This way we can have different limits for regular and motion curves,
which we'll do in one of the upcoming commits in order to gain some
percent of speedup.
The reasoning here is that motion curves usually intersect
lots of other bounding boxes, which makes it inefficient to have a
single primitive in the leaf node.
The maximal number of elements is supposed to be inclusive. That is what
was always meant in this file and what @brecht still considered
the case in 6974b69c61.
In fact, the commit message of that change mentions that we allowed
up to 2 curve primitives per leaf while in fact it was doing up to 1
curve primitive.
Making it really 2 primitives at most gives about a 5% slowdown for the
koro.blend scene. This is the reason why BVHParams.max_curve_leaf_size
was changed to 1 by this change.
Since the beginning of time, hair settings in Cycles were global for
the whole scene but were located in the particle context. This caused
quite some trickery to get shots set up for the movies here in the
studio, forcing artists to create a dummy particle system to change
hair settings on the shot.
While ideally these settings should properly become per-particle-system,
for the time being it will save sweat and blood to move the
settings to the scene context.
Reviewers: brecht
Subscribers: jtheninja, eyecandy, venomgfx, Blendify
Differential Revision: https://developer.blender.org/D2287
Made them closer to how GTest shows the output, so reading test logs
is easier now (at least feels more uniform).
Additionally now we know how much time tests are taking so can tweak
samples/resolution to reduce render time of slow tests.
It is now also possible to enable colored messages using magic
CYCLESTEST_COLOR environment variable. This makes it even easier to
visually grep failed/passed tests using `ctest -R cycles -V`.
Reusing PROP_TEXTEDIT_UPDATE instead of adding a new property flag just for search strings. Currently it's only used for search strings anyway so seems fine for now.
Fixes T50336.
This splits `interp_weights_face_v3` into `interp_weights_tri_v3` and
`interp_weights_quad_v3`, in order to properly handle three sided polygons
without needing a useless extra index in your weight array. This also
improves clarity and consistency with other math_geom functions, thus
reducing potential future errors.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2461
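For reference, the triangle case boils down to barycentric weights; a minimal 2D sketch of the idea (illustrative only, the real `interp_weights_tri_v3` is C and works on 3D coordinates):
```
# Hedged sketch: barycentric weights of point p in triangle (a, b, c),
# via signed sub-triangle areas (2D case for brevity).
def interp_weights_tri(p, a, b, c):
    def area2(u, v, w):  # twice the signed triangle area
        return (v[0] - u[0]) * (w[1] - u[1]) - (w[0] - u[0]) * (v[1] - u[1])
    total = area2(a, b, c)
    return (area2(p, b, c) / total,
            area2(a, p, c) / total,
            area2(a, b, p) / total)
```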
The issue was that we used to compare number of vertices for mesh after the auto
smooth was applied (at the center of the shutter time) with number of vertices
prior to the auto smooth applied. This caused false-positive consideration of a
mesh as changing topology.
Now we do the autosplit as early as possible and do it from the Blender side,
so Cycles does not need to re-implement splitting on its side.
This way the render engine can request the mesh to be auto-split and not
worry about implementing this functionality on its own.
Please note that this split is to be performed prior to tessellation.
Other than implementing a `mid_v3_v3_array` function, this removes
`cent_tri_v3` and `cent_quad_v3` in favor of `mid_v3_v3v3v3` and
`mid_v3_v3v3v3v3` respectively.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2459
When layout has only small buttons (buttons with icon and without label)
its size should be fixed. Code was modified to be able to add a new UI_ITEM_MIN
flag which indicates that the layout has only small fixed-width buttons.
Patch by @raa, with minor style edits by @mont29.
Reviewers: Severin, mont29
Reviewed By: mont29
Tags: #bf_blender, #user_interface
Differential Revision: https://developer.blender.org/D2423
Am pretty sure node update should not touch the Main database like that,
but for now let's allow it; I guess the hack is needed for things like
Sverchok. ;)
If the active object is in weight paint mode, but some armatures in pose mode, 'manipulate center points' still affects the transformation. See bd2034a749.
Also removed redundant check, we basically did the same check for paint modes twice.
If a very low wetness absolute alpha brush is used with spread and
drying effects enabled, some pixels will rapidly accumulate paint.
This happens because paint drying code applies a minimal wetness
threshold that causes the paint to instantly dry out.
Specifically, every frame the brush adds paint at the specified
absolute alpha and wetness set to the minimal threshold, spread
drops it below threshold, and finally drying moves all paint to
the dry layer. This drastically accelerates the rate of flow of
paint into the affected pixels.
Fortunately, the reason paint spread actually ends up decreasing
wetness turns out to be a simple floating point precision problem,
which can be easily fixed by restructuring the affected expression.
Reported on IRC by dfelinto, thanks.
Root of the issue was that opening a new text file would create a
datablock with one user, when the Text editor is actually a 'user one' user.
This was leaving Text datablocks with an inconsistent user count,
generating asserts in the BKE_library area.
Also changed a weird piece of code related to that extra user thing in
the main remapping func.
Main issue here was that in the old usercount system 'user_real' simply
did not allow that kind of thing to work. With the new pair of 'USER_EXTRA'
tags, it becomes possible to handle the case correctly, by merely refining
the checks for indirectly used objects when removing them from a scene.
Incidentally, found another related bug: 'link group objects to scene' was not
incrementing objects' usercount - bad, very very bad!
The settings.frame_start RNA was clamping frame start to frame end when frame start was bigger than frame end.
The fix is simply to set frame end first.
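A minimal sketch of the ordering workaround from the scripting side (using the scene frame range for illustration; values are hypothetical):
```
import bpy

# Hedged sketch: assign frame_end before frame_start so the new start
# value is not clamped against the old (smaller) end value.
scene = bpy.context.scene
new_start, new_end = 500, 600  # hypothetical range above the old one
scene.frame_end = new_end
scene.frame_start = new_start
```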
This is a hacky fix for a regression introduced sometime after 2.76.
The "Strip Time" setting on NLA Strips could not be edited without the
value immediately jumping back to the current FCurve value (or 0.0 if no
keyframes existed); even enabling autokey wouldn't let you key the property.
Until we have proper overrides (that only lose their values on frame change),
it's best that this setting is editable, even if it does mean you have to
manually change the frame to see the updated values.
Sometimes it can be useful to be able to keep onion skins visible in the
OpenGL renders and/or when doing animation playback. In particular, there
are two use cases where this is quite useful:
1) For creating a cheap motion-blur effect, especially when the before/after
values are also animated.
2) If you've animated a shot with onion skinning enabled, the poses may end
up looking odd if the ghosts are not shown (as you may have been accounting
for the ghosts when making the compositions).
This option can be found as the small "camera" toggle between the "Use Onion Skinning"
and "Use Custom Colors" options.
This is a regression introduced in rB5bd9e832
It looks more like a hack than a proper fix, but the shader logic
changed a lot for blender2.8, so I would rather do the elegant fix
there, while leaving master working.
If we ever do a 2.78b (or 2.79) this should get in.
That code was a joke, letting some invalid utf8 bytes pass, returning
wrong offset for some invalid sequences, not to mention length and
pointer easily going out of sync, NULL final byte being 'forgotten' by
memcpy, etc. etc.
The miracle here is that we could survive using this for so long!
Probably because we do not use utf-8 sanitizing enough in Blender,
actually... :/
This test should ensure we correctly detect all invalid utf-8 sequences in a given string.
DISCLAIMER:
Do not run this with current code - you'll either laugh or cry, nearly *all* checks fail!
Based on utf-8 decoder stress-test (https://www.cl.cam.ac.uk/~mgk25/ucs/examples/UTF-8-test.txt)
by Markus Kuhn <http://www.cl.cam.ac.uk/~mgk25/> - 2015-08-28 - CC BY 4.0
This is the same issue as was fixed with T39486: the adjustment pass
that tries to equalize different widths at either end of an edge
sometimes causes the widths to get bigger and bigger.
The previous fix was to let "clamp_overlap" do double duty as a way
to limit this behavior. But clearly this is undiscoverable, as the
current bug report shows. So I put in an "auto-limiting" mode that
detects when adjustments are going crazy and then acts as if
clamp_overlap were set.
The reason we can't always act as if clamp_overlap is set is that
certain models (e.g., Bent_test in regression tests) look bad if
that is enabled.
Include the idea that Blender may fail to launch it even if the path is correct,
in some cases (dear Windows...).
Based on idea from @lijenstina and @blendify (D2349), thanks.
Over time roll and orbit would scale the quaternion
which is documented as unit length.
In practice any errors would be subtle,
but better normalize as other operators do.
Basic idea is to store fileversion in Library datablock, and split again
Main by libraries after lib linking, do_versions_after_liblink on
those separated Mains, and merge again.
This allows us to still have correct versions for each data-block in that
second do_versions step.
Note that this is not used currently in master (might be soon, though),
but is needed for 2.8 work.
The main scheduler would be created way before the `-t` argument was
parsed, since it was on the fourth pass! Moved it to the first pass of argparse;
that kind of stuff should be initialized ASAP on startup.
When linking data-blocks from the same library in several steps, the already
linked data-blocks of the same lib would go through versioning code again...
Note: only fixed for libraries, I can't imagine how this could happen
with local data...
Data transfer was not checking if the required geometry existed, thus
causing a segfault when it didn't. This adds the required checks, and
reports errors if geometry is missing.
This also replaces instances of the words "polygon" and "loop" in error
messages with "face" and "corner" respectively, to be consistent with
the rest of the existing UI.
Reviewed By: mont29
Differential Revision: http://developer.blender.org/D2410
When appending a datablock, the default brushes were not created; they
were only created when drawing new strokes. Now the default brushes are
created when drawing strokes, if necessary.
It's now possible to change the shortcut that enables planar transformation with the transform manipulators (shift+LMB on axis).
This actually fixes the workaround added in rB20681f49801fd. Thing is that we needed to allow using the manipulators, even if a modifier key is held so things like snapping work right away. That's why normal LMB behavior uses KM_ANY. However, event handling would always execute the KM_ANY keymap handler because it's iterated over first. Simply solved this by registering the KM_SHIFT keymap item first, so it has priority over the KM_ANY one.
Enum properties with icon only flag should use minimum/fixed width in expanded layouts (alignment=UI_LAYOUT_ALIGN_EXPAND).
Differential Revision: https://developer.blender.org/D2415 by @raa (only made some really minor corrections)
This matches the behavior of Multiscatter GGX and could become handy later on
when/if we decide it would be beneficial to replace one closure with another.
Reviewers: lukasstockner97, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2413
There were 16 bits reserved for the primitive type, while we only need 4.
Reviewers: brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2401
The comparison logic was completely wrong: it was actually accessing memory
past the array boundary. Ran the test manually and the figure seems correct
to me now.
Spotted by @LazyDodo, thanks!
This code has not been used for a long time, if not ever.
Most of the code was removed in rB1d3609262704f88c9e30b2cebdb236110b25cdc9;
however, this was forgotten.
Adds two buttons to the context (RMB) menu of path buttons:
* "Open File Externally" to open a file in an external app (only visible if the path contains a filename)
* "Open Location Externally" to open a path in an external file browser
The functionality for this was already there, just hidden behind Shift/Alt click of the file_browse button (folder icon next to the path button).
This gives us 9 flags available again for properties (we had none anymore),
and also makes things slightly cleaner.
To simplify (and make more clear the differences between mere properties
and function parameters), also added RNA_def_parameter_flags function (and
its clear counterpart), to be used instead of RNA_def_property_flag for
function parameters.
This patch is also a big cleanup (some RNA function definitions were
still using 'prop' PropertyRNA pointer, etc.).
And yes, am aware this will be annoying for all branches, but we really need
to get new flags available for properties (will need at least one for override, etc.).
Reviewers: sergey, Severin
Subscribers: dfelinto, brecht
Differential Revision: https://developer.blender.org/D2400
Was a waaaaayyyyy too generic name for such a specific func; renamed
to the much more descriptive BKE_libblock_relink_to_newid().
In near future (few weeks, to limit as much as possible silent mismatch
in branches), will rename BKE_libblock_relink_ex to BKE_libblock_relink,
this is the real generic data-block relinking func!
Discard the whole volume stack on the last bounce (but keep
world volume if present).
Volumes are expected to be closed manifold meshes, meaning if a
ray entered the volume there should be an intersection event of the
ray exiting the volume. The case when a ray hits nothing but there
are still non-world volumes in the stack can happen in any of the
following cases.
1. Mesh is not closed manifold.
Such configurations are not really supported anyway and should
not be used.
Previous code would have considered the infinite length of the
ray to sample across, so the render result wasn't really correct
anyway.
2. The exit intersection is farther away than the camera far
clip distance.
This case also will behave differently now, but previously it
wasn't really correct either, so it's not like we're breaking
something which was working as expected.
3. We missed the exit event due to intersection precision issues.
This is exactly the case which this patch fixes, avoiding
fireflies.
4. The volume has Camera-only visibility (all the rest of the visibility
is set to off).
This is what could be considered a regression, but it could be
solved quite easily by checking the volume stack's object flags
and keeping entries which don't have Volume Scatter visibility
(or even better: ensuring Volume Scatter visibility for objects
with a volume closure).
Fixes T46108: Cycles - Overlapping emissive volumes generates unexpected bright hotspots around the intersection
Also fixes fireflies appearing on the edges of a cube with an
emissive volume.
Reviewers: juicyfruit, brecht
Reviewed By: brecht
Maniphest Tasks: T46108
Differential Revision: https://developer.blender.org/D2212
The `ui_item_enum_expand` function replaces all of a pie menu's sub-layouts with the radial layout. It should replace only the root layout.
To reproduce the issue, paste the code below in Blender's text editor and press the Run Script button.
```
import bpy

class VIEW3D_PIE_template(bpy.types.Menu):
    bl_label = "Select Mode"

    def draw(self, context):
        layout = self.layout.menu_pie()
        layout.column().prop(
            context.scene.render.image_settings, "color_mode", expand=True)

def register():
    bpy.utils.register_class(VIEW3D_PIE_template)

def unregister():
    bpy.utils.unregister_class(VIEW3D_PIE_template)

if __name__ == "__main__":
    register()
    bpy.ops.wm.call_menu_pie(name="VIEW3D_PIE_template")
```
Differential Revision: https://developer.blender.org/D2394 by @raa
Reading the rest of the code, it's obvious we want to start at YOFF lines
from the start of rect2i, so we have to also multiply by the number of
components.
Also did some minor cleanup.
Code was not accounting for the possibility that the width or height of the given
buffers may be smaller than XOFF/YOFF...
Note that I seriously doubt the drop code actually works (as in, gives
expected results) when applied to tiles like it seems to be done
currently, but that is a much more complex (and involved) topic.
This adds a short message to the smoke, remesh and boolean modifiers' UI
when trying to use them when their compilation was turned off. This was
already implemented for the fluid and ocean simulation modifiers.
This also makes the 'quick fluid' and 'quick smoke' operators abort and
report an error when trying to use them while unavailable.
This kind of keeps threads "warmer" and should in theory give better
cache coherency, bringing some percent of speedup. It was already tested
a few months ago and gave a few percent speedup in the barber shop scene,
but was reverted due to some bone popping. The popping is now fixed, so it
should be fine to use the new scheduling policy.
This sets forces to zero when Nabla is zero and a grayscale texture is
used, or the texture mode is Gradient or Curl.
Nabla equal to zero was causing a division by zero, and forces ended up
being set to `nan`.
Reviewed By: mont29
Differential Revision: http://developer.blender.org/D2393
Own error when changing order,
moving experimental features last made some sense,
but causes them to be listed twice.
Reorder and comment to avoid it happening again.
- Expand overly dense & confusing delta assignments.
- Replace bit shift with multiply.
Also link to 'clipped' version of this function
which may be useful to add later.
The Progress system in Cycles had two limitations so far:
- It just counted tiles, but ignored their size. For example, when rendering a 600x500 image with 512x512 tiles, the right 88x500 tile would count for 50% of the progress, although it only covers 15% of the image.
- Scene update time was incorrectly counted as rendering time - therefore, the remaining time started very long and gradually decreased.
This patch fixes both problems:
First of all, the Progress now has a function to ignore time spans, and that is used to ignore scene update time.
The larger change is the tile size: Instead of counting samples per tile, so that the final value is num_samples*num_tiles, the code now counts every sample for every pixel, so that the final value is num_samples*num_pixels.
Along with that, some unused variables were removed from the Progress and Session classes.
Reviewers: brecht, sergey, #cycles
Subscribers: brecht, candreacchio, sergey
Differential Revision: https://developer.blender.org/D2214
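The pixel-weighted progress can be illustrated with a small sketch (hypothetical helper, not the actual Progress class):
```
# Hedged sketch: count samples per pixel rather than per tile, so an
# 88x500 edge tile of a 600x500 image contributes ~15%, not 50%.
def render_progress(finished_tiles, num_samples, image_pixels):
    done = sum(w * h * num_samples for (w, h) in finished_tiles)
    return done / (image_pixels * num_samples)

print(render_progress([(512, 500)], 100, 600 * 500))  # ~0.853
```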
Quite handy for debugging.
Unfortunately, this doesn't support viewport tweaks yet since those
require GLSL for colorspace conversion. Maybe this will be implemented
as well one day in the future.
They are defined for MSVC but seem to be missing in GCC and Clang 3.8.
Maybe some further tweaks to the policy of when to define those functions
are needed, but this should be fine for now.
I can no longer reproduce the crash with any of the files where
the crash was originally visible. This is something where other
changes (light threshold, sampling) had an effect and made the code
work as it is supposed to. Could have been an optimizer issue
or something like that.
Let's see if we hit the same issue again.
`IMB_remakemipmap` may 'shrink' the mipmap list without actually freeing
anything, so we need to check all possible levels in `imb_freemipmapImBuf`
to avoid memory leaks, not only those currently used.
In ccgDM and emDM, looptri array recalculation was being handled
directly by `*DM_getLoopTriArray` (`getLoopTriArray` callback), while
`*DM_recalcLoopTri` (`recalcLoopTri` callback) was doing nothing.
This results in the array not being recalculated when other functions
that depend on the array data called the recalc function.
This moves all the recalculation code to `*DM_recalcLoopTri` and makes
`*DM_getLoopTriArray` call that.
This commit also makes a minor change to the `getNumLoopTri` function,
so that it returns the correct number without having to recalculate the
looptri array.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2375
There were two cases where correlation issues were obvious:
- File from T38710 was giving issues in 2.78a again
- File from T50116 was having totally different shadow between
sample 1 and sample 32.
Use a more simplified version of the CMJ hash, which seems to give a
nicely randomized value that solves the correlation.
This commit will break all unit test files, but it's a bug fix
so perhaps OK to commit this.
This also fixes T41143: Sobol gives nonuniform noise
Proper science paper about hash function is coming.
Reviewers: brecht
Reviewed By: brecht
Subscribers: lukasstockner97
Differential Revision: https://developer.blender.org/D2385
Most of them are harmless implicit conversions (e.g. Alembic deals with
doubles for storing time information when Blender uses both ints and
floats/doubles) or class/struct mismatch on forward declarations.
This aims at always ensuring that ID.newid (and relevant LIB_TAG_NEW)
stay in clean (i.e. cleared) state by default.
To achieve this, instead of clearing after every ID copy call (which would be
horribly noisy, and bad for performance), we try to completely remove
the setting of id->newid by default when copying a new ID.
This implies that areas actually needing that info (mainly, object editing
area (make single user...) and make local area) have to ensure they set
it themselves as needed.
This is far from simple change, many complex code paths to consider, so
will need some serious testing. :/
Was groundwork for some more improvements here, but got dragged
into some other studio maintenance job.
The plan would be to enable exposure/gamma control for fallback mode
which will definitely be really handy for development and might be
handy for cases when OCIO config can not be read.
Crash is due to mismatching loop and face counts between the Alembic
data and the Blender DerivedMesh, which does not appear so
straightforward to fix (the crash happens deep in the DerivedMesh code).
So for now, try to detect if the topology has changed and, if so, only
read vertices (vertex colors and UVs won't be read, as they are tied to face
loops) and add a warning message in the modifier's UI to let the user
know.
The idea is simple: cache PD resolution from cache_point_density() RNA
function because that one is supposed to be called while database is
locked for original synchronization.
Ideally we would also pass array size to the sampling function, but
it turned out to be quite problematic because API only accepts int type
and passing size_t might cause some weird behavior.
This is a way to avoid possible memory corruption when render threads work
in parallel with the UI thread.
It does not guarantee complete safety, but makes things easier to check anyway.
To bring the API more into line with the UI (and the general expected behaviour of
Blender when it comes to adding stuff), newly created layers and palettes will be
made the active ones by default. It's possible to override this behaviour still
(e.g. in cases where you're auto-generating a large number of them), but otherwise,
this change will help prevent errors like T50123.
When there were no prior palettes, creating a new one didn't automatically make it active.
This caused problems when trying to rename the color, as the RNA code assumed that if there's
a color, it must come from the active palette.
This commit partially fixes the problem by ensuring that if there are no palettes, the first
one will always be made active.
- Remove 'rotate_m2', unlike 'rotate_m4' it created a new matrix
duplicating 'angle_to_mat2' - now used instead.
(better avoid matching functions having different behavior).
- Add 'axis_angle_to_mat4_single',
convenience wrapper for 'axis_angle_to_mat3_single'.
- Replace 'unit_m4(), rotate_m4()' with a single call to 'axis_angle_to_mat4_single'.
This is no longer needed since moving to MPoly/MLoop data structure.
Also use 3x3 matrix for transforming instead of quaternion
(slightly better performance).
Was giving huge artifacts in the barber shop file here in the studio.
Maybe not a fully optimal solution, but committing it for now to have a
closer look later.
All objects were being parented to a single instance of each parent
object, instead of their respective instances, when using dupliverts or
dupligroups.
Behavior was caused by the `persistent_id[0]` (vertex/face id) being
ignored when computing `parent_gh` hash, which caused all instances to
have the same hash, and thus only the first one was included.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2370
Was affecting armatures' pose drawing code, could try to draw with
non-updated pose, which may contain NULL bone pointers (e.g. after some
data-block management tool execution, like make local, remapping, etc.).
Main intention is to give some quick way to control the scene's memory
usage by clamping textures which are too big. This is really handy
in the early production stages when you first create really nice
looking hi-res textures, and only when it all works and is approved
start investing time in optimizing your scene.
This is a new option in the Scene Simplify panel and it acts as
follows: when the texture size is bigger than the given value, it'll
be scaled down by half until it fits into the given limit.
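In sketch form, the clamping rule described above amounts to (hypothetical helper for illustration):
```
# Hedged sketch of the Simplify texture size limit: halve until it fits.
def clamp_texture_size(width, height, limit):
    while max(width, height) > limit:
        width = max(1, width // 2)
        height = max(1, height // 2)
    return width, height

print(clamp_texture_size(4096, 4096, 1024))  # (1024, 1024)
```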
There are various possible improvements, such as:
- Use threaded scaling using our own task manager.
This is actually one of the main reasons why image resize is
manually-implemented instead of using OIIO's resize. Other
reason here is that API seems limited to construct 3D texture
description easily.
- Vectorization of uchar4/float4/half4 textures.
- Use something smarter than a box filter.
  Was playing with some other filters, but not sure they are
  really better: they kind of cause fuzzier edges.
Even with such TODOs in the code, the option is already quite
useful.
Reviewers: brecht
Reviewed By: brecht
Subscribers: jtheninja, Blendify, gregzaal, venomgfx
Differential Revision: https://developer.blender.org/D2362
This will make triple buffer used by default for such configuration.
Ideally we would switch to triple buffer on all platforms, but let's
do it in 2.8 branch and don't open can of worms in master now.
This should solve issues like T49945.
This is very confusing, in fact, and the RNA tooltip was wrong:
BKE_object_make_local_ex actually ensures we never have several proxies
of the same object, since it always clears the proxy when it has to copy the object
to make it local...
What that RNA function is probably missing, though, is the same logic as in
BKE_library_make_local to actually remap the proxy from the old linked object to
the new local one.
I) The `clear_proxy` parameter was not assigned to parm in the RNA define code,
so the 'pyfunc optional' flag was set on the `new_id` parameter of the `user_remap`
func - super ugly!
II) The `clear_proxy` parameter itself, when set to False, would allow leaving the
.blend file in an invalid state (more than one proxy of the same object),
which should never, ever be allowed in the RNA API imho. Left the API
untouched for now, just disabled any effect from this parameter (hence
always clearing the proxy when copying).
There is some define conflict between system headers and clew,
so delay the include of clew.h as much as possible.
This is something which needed to be done in the code before
the refactor; hopefully such a change will still work.
For the multi-GPU case users still have to reconfigure the devices they want to use.
Based on patch from Lukas Stockner.
Differential Revision: https://developer.blender.org/D2347
This can be used together with camera culling to keep nearby objects visible in
reflections, using a minimum distance within which objects are visible. It is
also useful to cull small objects far from the camera.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2332
When the parent object matrix changes after the layer was parented, the
inverse matrix for strokes must be updated when editing strokes, or the
transformations will be wrong.
We need to check node tree links are still valid, after we remapped
some NodeGroup.
Note: In fact, we have to run that for *all* ID types, since nodes may
use any kind of data-block (in theory)... :/
Forward compatibility code should never, ever be run during undo saving.
Note: related to T49991 (but does not fix it either, crash now happens
when doing a real file save...).
Adding a torus in edit-mode, with 'Generate UVs'
for example would either create another UV layer with the default name or
switch to the default UV layer name if it exists.
Now use the existing UV layer if present.
A '1' threshold value would only allow access to a third of the basic
'color space' (from black to white, from 0.0 to 1.0 component values),
when you expect it to access the whole range.
Unfortunately, this needs a subversion bump to allow already-defined
brushes to keep the exact same behavior!
Also, did not change the default value (0.2) for new brushes; think here
keeping the current one makes more sense.
Thanks to @LucaRood for confirming the issue.
Empty images were implemented to expand (and eventually replace)
the background images functionality. If we are ever to drop
background images, "image empties" should support stereo/multi-view as well.
The render passes for grease pencil are not necessary when rendering from
the sequencer.
This fix solves the GPF, but we need to rethink the complete render
process for grease pencil and integrate it better into the render and
compositing workflow.
Thanks to Dalai Felinto for helping in debugging and fixing the
problem.
It seems to me the icons are unused:
- VICO_VIEW3D_VEC
- VICO_EDIT_VEC
- VICO_EDITMODE_VEC_DEHLT
- VICO_EDITMODE_VEC_HLT
- VICO_DISCLOSURE_TRI_RIGHT_VEC
- VICO_DISCLOSURE_TRI_DOWN_VEC
- VICO_MOVE_UP_VEC
- VICO_MOVE_DOWN_VEC
- VICO_X_VEC
Since their code contains immediate mode GL calls and they seem to be unused, I thought we could remove them.
Reviewers: mont29
Reviewed By: mont29
Subscribers: merwin
Tags: #bf_blender_2.8
Differential Revision: https://developer.blender.org/D2356
On second and third thoughts, this should have been done that way since
the beginning; cases where you just delete a few data-blocks without any
serious knowledge of their usage are much, much more frequent than
cases where you are deleting thousands of data-blocks and are sure they
are not used anywhere anymore...
Own fault, but really frustrated that this topic was only raised the day
after 2.78a was released. :(
This has nothing to do here (freeing is not unlinking/remapping!), and
was actually redoing something already taken care of by
`BKE_libblock_relink_ex()` call in `BKE_libblock_free_ex()`.
Also, gives some noticeable speedup when removing datablocks with
do_unlink=True, about 5 to 10% quicker e.g. when deleting all objects
from a py console, in a big production file...
This commit reverts part of a fix for T33275, but the thing is:
- I can not reproduce the original issue at all, so this doesn't seem to
cause any regressions.
- It is a really bad idea to do delayed initialization in a threaded
environment, it's a straight way to some nasty issues.
- We can't do things like this anyway because we go more granular,
meaning such a delayed initialization will fail in the case of
having several IK solvers (unless they properly accommodate the
changed bone head).
- Verified the fix with various files from the Mango project and all of
them seem to work nicely with the new dependency graph now (the old depsgraph
has some flickering, but it's not related to DEG itself, but to an
environment with lots of proxies and threaded evaluation, and it
is not new behavior).
This reverts commit 9b5a32cbfb.
Apparently it is possible to have another thread messing around with the hash.
Needs deeper investigation; for the time being, reverting to prevent crashes.
This commit fixes two issues:
- UV/Image editor uvs menu did not match the 3D View's which was changed in rB2b240b043078
- Circle select tool was missing in particle edit mode
Reviewers: Severin
Differential Revision: https://developer.blender.org/D2329
Added BKE_libblock_free_data_ex(), which takes a special do_id_user
argument which basically indicates whether the main database was already
taken care of, i.e. has no "dead" pointers.
Gives about 40% speedup of main database freeing with the quadbot scene
(3.4 sec vs. 5.4 sec on a quite powerful desktop).
Just fixing the crash itself. Actually the operator shouldn't run in most editors (not in the dopesheet either, I guess), but I don't want to spend time on that right now.
This option makes an operator not push a task to the undo stack if the previously stored element is the same operator, or part of the same undo group.
The main usage is for animation, so you can change frames to inspect the
poses, and revert the previous pose without having to roll back tons of
"change frame" operator, or even see the undo stack full.
This complements rB13ee9b8e
Design with help by Sergey Sharybin.
Reviewers: sergey, mont29
Reviewed By: mont29, sergey
Subscribers: pyc0d3r, hjalti, Severin, lowercase, brecht, monio, aligorith, hadrien, jbakker
Differential Revision: https://developer.blender.org/D2330
- `bmesh_radial_faceloop_find_first` & `bmesh_disk_faceedge_find_first`
can be replaced with a single call to a new function:
`bmesh_disk_faceloop_find_first`
- `bmesh_disk_faceedge_find_first` called `bmesh_radial_facevert_check`
which isn't needed, since either the current or next loop in the
cycle is attached to the edge we're looking for.
The code was templated already, so I don't see a big reason to have
3 versions of templated functions. It was giving some extra code
to maintain and in fact already had divergence in support of huge
image resolutions (missing size_t cast in byte image loading).
There should be no changes visible to artists.
Just return the face or NULL, like BM_edge_exists(),
Also for BM_face_exists_overlap & bm_face_exists_tri_from_loop_vert.
No functional changes.
Old code did some partial overlap checks where this made some sense,
but it's since been removed.
Object freeing may in some cases access its obdata (e.g. in case it has some
caches); since here obdata may have already been freed, let's set the
object's data pointer to NULL (probably not an ideal solution, but we don't
care much: those form archipelagos of unused linked datablocks,
we nuke 'em all anyway).
Also fix a stupid mistake in one of my own recent commits (using an ID we just
freed, tsst...).
This should make it easier to sculpt in high resolutions, downside is that the new way to calculate maximum edge length is a bit less intuitive. Maximum edge length used to be calculated as blender_unit * percentage_value, now it's blender_unit / value.
Reused old DNA struct member, but had to bump subversion to ensure correct compatibility conversion. Also changed default value slightly (would have had to set to 3.333... otherwise).
Was requested by @monio (see https://rightclickselect.com/p/sculpting/zpbbbc/dyntopo-better-scale-input-in-constant-detail-mode) and I think it's worth testing.
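The difference between the two formulas, as a hedged sketch with made-up values:
```
# Hedged comparison of the old and new detail-size formulas.
blender_unit = 1.0
detail_percent = 30.0   # old-style percentage value (hypothetical)
constant_detail = 3.0   # new-style value (hypothetical)

old_max_edge = blender_unit * (detail_percent / 100.0)  # 0.3 units
new_max_edge = blender_unit / constant_detail           # ~0.333 units
```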
Do not set 'real user' to groups every time we run the first clearing loop.
And do properly clear LIB_TAG_DOIT (this is not yet enforced in
existing code, but would love to get to that stage in the future, so let's
do it at least with new code!).
Basic idea is to split first loop in two, and run checks before making
anything actually local, to detect data-blocks that we can directly make
local (because we are sure they are only used by already/future local
datablocks).
This allows us to avoid a lot of overhead in the later 'cleanup' steps of this
function; with the barbershop shot it's four times quicker here (from 190s to 48s).
We are still far from the instantaneous results of MakeLocal in 2.77,
but in that version main characters lose their connection to their
armature and remain static after makelocal, so guess new code is still
better. ;)
There are probably more optimizations possible here, but would rather
polish this area of code once we get rid of proxies, those really
make it a nightmare to work on.
If the OpenGL render with grease pencil is run from the VSE with the current
frame outside the visible frames, the render pass is wrong and the render
must be canceled because there is nothing to render. Related to #T49975.
Before this commit, the brush set was created with the first stroke
drawing, but if the user created the datablock or the layer manually
(without drawing), the brush list was empty.
This commit complements the Python fix by Sergey:
https://developer.blender.org/rB89c1f9db37cc1becdd437fcfdb1877306cc2b329
Culprit here was once more proxies. Think what was happening here was:
1) Both proxy and proxified armatures' PoseChannels were cleared
(needed after remapping due to Bone pointers being stored in pchans).
2) Proxy PoseChannels got rebuilt in `BKE_pose_rebuild_ex()`, which ends,
in proxy cases, by actually replacing rebuilt pchans by those from
the proxified object... which has not yet been rebuilt.
Fixed the issue by merely adding the bone pointer to the data copied from
the original pchan into the new 'from proxy' one... Sounds much, much safer and
saner anyway; that way we can be sure the bone pointer is actually pointing
to a bone of the object's armature (this is supposed to be the same
Armature datablock between proxy and proxified objects, but that may not
always be true, especially during the makelocal process).
Did a similar trick as in the old dependency graph: tag invisible relations for update.
Might need some reconsideration, see the comment.
This should solve our issues with the powerlib addon here in the studio.
The new dependency graph expects strict separation between the nodes and relations builders,
meaning if we try to create a relation with an object which is not in the graph yet,
we'll have an error in the depsgraph.
Now, so far object nodes were created from the bases of the current scene, which caused
missing objects in the graph in certain cases.
Didn't find a better approach than to simply ensure object nodes exist when we know
they'll be used by the relation builder.
`kernel_path.h` and `kernel_path_branched.h` have a lot of conditional code and
it was kind of hard to tell what code belonged to which directive. Should be
easier to read now.
Proxified objects can never be local, we can totally ignore them here.
This 'fixes' the asserts related to usercount when trying to remap the poselib
of localized proxified objects (not sure what exactly was going wrong here,
but proxies are a giant can of worms for sane data-block handling anyway :/).
This new `bpy.types.ID.make_local(clear_proxies=True)` allows Python
code to press the "Make Local" button on any ID block. I chose
`clear_proxies=True` as the default, since it's the default behaviour
of `id_make_local()` (defined in `library.c`).
The caller does need to take care of ensuring that linked-in objects
don't refer to local data, and that proxies aren't broken.
Reviewers: sergey, mont29
Reviewed By: mont29
Subscribers: dfelinto
Differential Revision: https://developer.blender.org/D2346
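A minimal usage sketch (the object name is a made-up example; it must refer to a linked datablock):
```
import bpy

# Hedged sketch: press "Make Local" on an ID from Python.
linked_ob = bpy.data.objects["Suzanne"]  # hypothetical linked object
local_ob = linked_ob.make_local()  # clear_proxies=True is the default
```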
Add subframe to the animated seed hash calculation.
Should be no difference for the regular files, only for cases when scene is
rendered from sequencer with a speed effect, which is not really a common thing.
This is a late follow-up commit to the light sample threshold changes which
caused a difference in rendering of all existing .blend files, which is not something
we are happy about: it is fine to use new optimized defaults for new files, but
existing ones should always render the same way they used to.
Sorry for the inconvenience, but such a thing should have been done to begin with.
If this setting was modified it will not be reset to zero.
Now all render tests should be passing again.
P.S. It is also really annoying to bump subversion for such reasons, but currently we
don't have a better way to achieve what we want.
Lamp Data node requires shadow sample array which is only enabled when
Shadows are enabled in the shading settings.
This commit prevents crash but might not give expected render results
in such a configuration.
After the CUDA dynload changes, having the CUDA toolkit became required
in order to compile Cycles. This only happened due to a wrong
default value for the option.
The idea is simple: when falling back to one of the nodes which was partially
handled, we "resume" checking outgoing relations from the index at which we stopped.
This saves about 15-20% of depsgraph construction time.
This is a more proper way to go:
- Avoids re-compilation of all dependent files when the implementation changes
without a changed API.
- The linker should have a much simpler time now de-duplicating and getting rid
of redundant implementations.
The idea here is to address the issue that the name on its own is not
always unique: for example, when adding driver operations the
name used for nodes is the RNA path (and multiple drivers can
write to different array indices of the path). Basically, now
it's possible to pass an extra integer value to distinguish
operations in such cases.
So now we've switched from using sprintf() to construct a unique
operation name to passing the RNA path and array index.
There should be no functional changes yet, but this work is
required for further work on replacing the string with a const
char*.
There is no real reason to have nodes storing a heap-allocated name
and description. Doing this increases the amount of allocations during
dependency graph building, which usually means some slowness.
We're temporarily losing some eye candy in the graphviz visualizer,
but that we can bring back as part of the graphviz dump (which happens
much less often than a depsgraph build).
This will happen in multiple commits for the ease of bisect in the
future just in case this causes any regression. This commit contains
ID creation API changes.
The Bullet spring constraint already supports rotational springs, but
they are not exposed in the Blender UI, likely due to a simple oversight.
Supporting them is as simple as adding a few DNA/RNA properties
with appropriate UI and passing them on to Bullet.
Reviewers: sergof
Reviewed By: sergof
Differential Revision: https://developer.blender.org/D2331
Previously, it was only possible to choose a single GPU or all of that type (CUDA or OpenCL).
Now, a toggle button is displayed for every device.
These settings are tied to the PCI Bus ID of the devices, so they're consistent across hardware addition and removal (but not when swapping/moving cards).
From the code perspective, the more important change is that now, the compute device properties are stored in the Addon preferences of the Cycles addon, instead of directly in the User Preferences.
This allows for a cleaner implementation, removing the Cycles C API functions that were called by the RNA code to specify the enum items.
Note that this change is neither backwards- nor forwards-compatible, but since it's only a User Preference no existing files are broken.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: brecht, juicyfruit, mib2berlin, Blendify
Differential Revision: https://developer.blender.org/D2338
With this fix, using a MIS map resolution equal to the image size for closest interpolation or twice the size for linear interpolation gets rid of all fireflies.
Previously, a much higher resolution was needed to get acceptable noise levels.
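As a rule of thumb in sketch form (hypothetical helper, not a Cycles API):
```
# Hedged sketch of the MIS map resolution rule described above.
def mis_map_resolution(image_width, image_height, interpolation):
    scale = 1 if interpolation == 'CLOSEST' else 2  # 2x for linear
    return image_width * scale, image_height * scale
```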
There are more DLLs hanging out in the ucrt folder, but I just grabbed the ones Blender requested (not sure if that's a wise idea, but it seems to work).
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Differential Revision: https://developer.blender.org/D2335
We have to clear `newid` of all datablocks, not only object ones.
Note that this whole stuff still uses some kind of older, primitive
'ID remapping'; would like to see whether we can replace it with the new,
more generic one, but that's for another day.
Code would try to add the same key multiple times in `parent_gh` (for this
ghash a lot of dupli-objects may generate the same key).
This was making the tool unusable in debug builds.
Also optimize things a bit by avoiding creating parent_gh when only
`use_base_parent` is set.
New code dealing with getting rid of lib-only cycles of data-blocks
could add the same datablock to the list of candidates several times. Now
this is avoided, and pointers are further cleaned up as a double-safety
measure.
Feature request during bconf; makes sense to have it even as a hack for
now, since this is probably one of the most common use cases. This should
be redone in bmesh once we have proper custom normals handling in edit mode...
The issue was caused by image ID nodes not being in the depsgraph.
Now, tricky part: we only add nodes but do not add relations yet. Reasoning:
- It's currently important to only call editor's ID update callback to solve
the issue, without need to flush changes somewhere deeper.
- Adding relations might cause some unwanted updates, so will leave that for
a later investigation.
Basically, the problem here was that the transform that's used to bring texture coordinates
to world space is either fetched while setting up the shader (with Object Motion is enabled) or
fetched when needed (otherwise). That helps to save ShaderData memory on OpenCL when Object Motion isn't needed.
Now, if OM is enabled, the Lamp transform can just be stored inside the ShaderData as well. The original commit just assumed it is.
However, when it's not (on OpenCL by default, for example), there is no easy way to fetch it when needed, since the ShaderData doesn't
store the Lamp index.
So, for now the lamps just don't support local texture coordinates anymore when Object Motion is disabled.
To fix and support this properly, one of the following could be done:
- Just always pre-fetch the transform. Downside: Memory Usage increases when not using OM on OpenCL
- Add a variable to ShaderData that stores the Lamp ID to allow fetching it when needed
- Store the Lamp ID inside prim or object. Problem: Cycles currently checks these for whether an object was hit - these checks would need to be changed.
- Enable OM whenever a Texture Coordinate's Normal output is used. Downside: Might not actually be needed.
The animation system has separate FCurves for each array element and
the dependency graph creates separate nodes for each FCurve. This is
needed to keep granularity of updates, but causes issues because the
animation system will actually write the whole array to the property when
modifying a single value (this is a limitation of the RNA API).
Worked around by adding an operation relation between array drivers
so we never write the same array from multiple threads.
It was possible to have synchronization issues when accumulating smooth
normals to a vertex, causing shading artifacts during playback.
Bug found by Dalai, thanks!
They were not real issues; it's just that some areas of code tried to create
relations between non-existing nodes without checking whether such
relations are really needed.
Now it should be easier to see real bugs printed.
Hopefully there should be no regressions here.
Some platforms are having a hard time using this linker, so added an option
to not use it. The option is an advanced one and enabled by default, so it
should not cause any changes for current users.
Request from Hjalti Hjalmarsson for the animation work.
Basically a common part of the workflow of animation is to change the pose, scrub back and forth a few times and roll back the changes when unsatisfied.
However, if you go back and forth too many times, the UNDO stack would be full and it would not be possible to bring back the previous pose.
I'm leaving clip_editor change frames as it is for now. But we can
probably change the behaviour there as well.
Issue here was that the py API code was keeping references (pointers) to the
linked data-blocks, which can actually be duplicated and then deleted
during the 'make local' process...
Would have liked to find a better way than passing an optional GHash to get
the oldid->newid mapping, but could not think of a better idea.
Radial append/remove had swapped args and *slightly* different behavior.
- bmesh_radial_append(edge, loop)
- bmesh_radial_loop_remove(loop, edge)
Match logic for append/remove,
Logic for the one case where the edge needs to be left untouched
has been moved to: `bmesh_radial_loop_unlink`.
This is yet another debug option that allows rendering an arbitrary
simulation field by using a color ramp to inspect its voxel values.
Note that when using this, fire rendering is turned off.
Reviewers: plasmasolutions, gottfried
Differential Revision: https://developer.blender.org/D1733
This allows saving a memory copy, which will be particularly useful for network rendering.
Reviewers: sergey, brecht, dingto, juicyfruit, maiself
Differential Revision: https://developer.blender.org/D2323
In scenes with many lights, some of them might have a very small contribution to some pixels, but the shadow rays are traced anyways.
To avoid that, this patch adds probabilistic termination to light samples - if the contribution before checking for shadowing is below a user-defined threshold, the sample will be discarded with probability (1 - (contribution / threshold)) and otherwise kept, but weighted more to remain unbiased.
This is the same approach that's also used in path termination based on length.
Note that the rendering remains unbiased with this option, it just adds a bit of noise - but if the setting is used moderately, the speedup gained easily outweighs the additional noise.
Reviewers: #cycles
Subscribers: sergey, brecht
Differential Revision: https://developer.blender.org/D2217
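The termination rule can be sketched as follows (hypothetical helper mirroring the description above):
```
import random

# Hedged sketch: weak light samples are kept with probability
# p = contribution / threshold and re-weighted by 1/p, keeping the
# estimator unbiased; strong samples always pass with weight 1.
def light_sample_weight(contribution, threshold):
    if threshold <= 0.0 or contribution >= threshold:
        return 1.0
    p = contribution / threshold
    return (1.0 / p) if random.random() < p else 0.0  # 0.0 = discarded
```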
This option allows to create a smoother transition between Bricks and Mortar - 0 applies no smoothing, and 1 smooths across the whole mortar width.
Mainly useful for displacement textures.
The new default value for the smoothing option is 0.1 to give some smoothing that helps with antialiasing, but existing nodes are loaded with smoothing 0 to preserve compatibility.
Reviewers: sergey, dingto, juicyfruit, brecht
Reviewed By: brecht
Subscribers: Blendify, nutel
Differential Revision: https://developer.blender.org/D2230
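Presumably a smoothstep-style blend over the mortar width; a sketch of that idea (an assumption, not the verified shader code):
```
# Hedged sketch: smoothstep-style brick/mortar blend. smooth = 0 keeps
# the old hard edge; smooth = 1 blends across the whole mortar width.
def brick_mortar_blend(dist_into_mortar, mortar_width, smooth):
    if smooth <= 0.0:
        return 1.0 if dist_into_mortar > 0.0 else 0.0
    t = min(max(dist_into_mortar / (mortar_width * smooth), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)
```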
When using the Normal output of the Texture Coordinate node on Point and Spot lamps, the coordinates now depend on the rotation of the lamp.
On Area lamps, the Parametric output of the Geometry node now returns UV coordinates on the area lamp.
Credit for the Area lamp part goes to Stefan Werner (from D1995).
This avoids traversing the archive every time object data is needed and
gives an overall consistent ~2x speedup here with files containing
between 136 and 500 Alembic objects. Also this somewhat nicely de-
duplicates code between data creation (upon import) and data streaming
(modifiers and constraints).
The only worrying part is what happens when a CacheFile is deleted and/or
has its path changed. For now, we traverse the whole scene and for each
object using the CacheFile we free the pointer and NULL-ify it (see
BKE_cachefile_clean), but at some point this should be re-considered and
make use of the dependency graph.
The 'local' layers were not correctly set when redoing 'add object'
addons using object_utils.py helper (we always want to restore layers
from view in local view, even if we set 'real' layers from operator
afterwards).
Issue was happening when removal of custom icons was done while they
were still being rendered by the preview job.
Now add a 'deferred deletion' system, to prevent the main thread from deleting
preview images until the loading thread is done with them.
Note that ideally, calling `ED_preview_kill_jobs()` on custom icon
removal would have been simpler, but we don't have easy access to
context here...
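A hedged sketch of the deferred-deletion idea (hypothetical types and ownership; the real code uses Blender's preview and job APIs):

    #include <list>

    struct PreviewImage {
      bool in_use_by_job = false;  // set/cleared by the preview render job
    };

    static std::list<PreviewImage *> deferred_deletion;

    // Called from the main thread: frees only the icons the preview job is
    // no longer rendering, instead of freeing them immediately on removal.
    static void deferred_deletion_flush()
    {
      deferred_deletion.remove_if([](PreviewImage *img) {
        if (img->in_use_by_job) {
          return false;  // job still reading it: keep it queued
        }
        delete img;  // safe now; the loading thread is done with it
        return true;
      });
    }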
Oh man, is it a compiler bug? Is it something we do stupid?
For now more crap to prevent crashes. During the conference we'll talk to
Maxyn about how we can troubleshoot such weird issues.
Basically, don't use rcp() in areas which seem to be critical after a
second look. Also disabled some multiplication operators; not sure
yet why they might be a problem.
Tomorrow I'll be setting up a full test with all the cases which were
buggy on our farm, to see if this fix is complete.
There are some precision issues for big-magnitude coordinates which started
to give weird behavior in release builds, plus some weird memory usage in BVH
which is tricky to nail down because it only happens in release builds and GDB
reports all variables as optimized out when trying to use RelWithDebInfo.
There are two things in this commit:
- Attempt to make the vectorized code closer to the original one, hoping that
  it'll eliminate the precision issue.
  This seems to work for transform_point().
- A similar trick did not work for transform_direction(), even though the
  absolute error there is much smaller. For now that function is disabled;
  it needs a more careful look.
Rewrite the current range-tree API used by dyn-topo undo
to avoid inefficiencies from stdc++'s set use.
- every call to `take_any` (called for all verts & faces) removed an item
  from the set and re-added it.
- further range adjustments also took 2x btree edits.
This patch inlines a btree which is modified in-place,
so common resizing operations don't need to perform a remove & insert.
Ranges are stored in a list so `take_any` can access the first item
without a btree lookup.
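A hedged sketch of that list-based fast path (not the actual range-tree code):

    struct Range {
      int min, max;  // inclusive bounds of the free values in this range
      Range *next;   // list link so take_any is O(1)
    };

    static int take_any(Range **head)
    {
      Range *r = *head;      // first range: no btree lookup needed
      int value = r->min++;  // resize in place, no remove + insert
      if (r->min > r->max) {
        *head = r->next;     // range exhausted: unlink it
        /* (freeing the node and the matching btree removal are elided) */
      }
      return value;
    }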
Since range-tree isn't a bottleneck in sculpting, this only gives minor speedups.
Measured approx ~15% overall faster calculation for sculpting,
although this number doesn't include GPU updates and depends on how
much the edits fragment the range-tree.
The existing method was fine for basic polygons but didn't scale well
because it was checking all coordinates for every y-pixel.
Here's an optimized version.
The basic logic remains the same; this just maintains an ordered list of
intersections, tracking in-out points, to avoid re-computing every row.
This means sorting is only done when out-of-order segments are found,
and the segments only need to be re-ordered if they cross each other.
The speedup isn't linear; a test with a full-screen complex lasso gave an 11x speedup.
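A hedged sketch of the core trick (illustrative, not the actual lasso code): the per-row x-intersections carry over from the previous row and are only re-sorted when advancing the scanline makes segments cross.

    #include <algorithm>
    #include <vector>

    static void fill_spans_for_row(std::vector<float> &isect_x)
    {
      if (!std::is_sorted(isect_x.begin(), isect_x.end())) {
        std::sort(isect_x.begin(), isect_x.end());  // rare: segments crossed
      }
      // Consecutive pairs (isect_x[0], isect_x[1]), (isect_x[2], isect_x[3]),
      // ... are the in-out spans to fill on this row.
    }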
Do this for the new dependency graph: the handling of OB_UPDATE_TIME was missing in tag update.
Hopefully it's all correct still.
The old dependency graph needs work, but I'm tempted to call it unsupported
and move on to the 2.8 branch.
Several ideas here:
- Optimize the calculation of near_{x,y,z} in a way that does not require
  3 if() statements per update, which avoids the negative effect of wrong
  branch prediction (see the sketch after this list).
- Optimization of direction clamping for BVH.
- Optimization of point/direction transform.
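A hedged sketch of the first idea, assuming IEEE-754 floats: the near/far child index comes straight from the direction's sign bit, so there is no branch to mispredict in the traversal loop.

    #include <cstdint>
    #include <cstring>

    static inline int near_index(float dir_component)
    {
      std::uint32_t bits;
      std::memcpy(&bits, &dir_component, sizeof(bits));  // type-pun safely
      return int(bits >> 31);  // 0 when dir >= 0.0f, 1 when dir < 0.0f
    }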
Brings ~1.5% speedup again, depending on the scene (unfortunately, these
speedups can't be summed across all the previous commits because the speedup
of each change varies from scene to scene, but it still seems to be a nice
solid speedup of a few percent on Linux, and a bigger speedup was reported
on Windows).
Once again, thanks Maxym for the inspiration!
Still TODO: we have multiple places where we need to calculate the near
x,y,z indices in the BVH; for now it's only done for the main BVH traversal.
Will try to move this calculation to a utility function and see if it can
be easily re-used across all the BVH flavors.
Similar to the previous commit, avoid the negative effect of bad branch prediction.
Gives a measurable performance improvement of up to ~2% in tests here.
Once again, thanks to Maxym Dmytrychenko!
The idea here is to avoid if statements which could cause wrong
branch prediction.
Gives a bit of measurable speedup, up to ~1%. Still nice :)
Inspired by Maxym Dmytrychenko, thanks!
Seems CMake will rearrange and copy libraries which are passed to the linker
when some of the libraries are listed twice (for example, -lz from the png
libraries and -l for blender itself). This was causing libopenimageio to be
added somewhere at the end of the linking flags without -ldl following it,
which was causing linking issues.
Similar to the regular triangle intersection case. Gives about a 3% speedup
rendering an SSS object on my desktop.
Question: how to avoid such code duplication in a nice way without losing speed?
This will confuse the hell out of guarded allocators, because it is possible
for an allocation to happen before Blender's guarded allocator is fully
initialized.
This was causing crashes and assert failures when running blender
with the fully guarded memory allocator.
The initialization order of global stats and node types was not strictly
defined, and it was possible to have the node types initialized first and
the stats after that, zeroing out the counters for memory that was already
allocated and causing an assert failure when de-initializing node types.
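A hedged sketch of the underlying C++ hazard and the usual "construct on first use" fix (illustrative types only):

    #include <cstddef>

    // Two globals in different translation units have unspecified
    // construction order, so node-type registration may touch the allocator
    // stats before they are constructed (and later zeroed). A function-local
    // static is constructed on first call, before any user can observe it.
    struct MemStats {
      std::size_t total_bytes = 0;
    };

    static MemStats &mem_stats()
    {
      static MemStats stats;  // initialized on first use
      return stats;
    }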
It was possible for a non-initialized unaligned BVH split to be used when
the regular BVH split SAH was inf. Now we ensure that the unaligned splitter
is only used when it's really initialized.
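A hedged sketch of the guard (illustrative, not the actual splitter code):

    #include <limits>

    struct Split {
      float sah = std::numeric_limits<float>::infinity();
      bool valid = false;  // set only when the splitter really ran
    };

    // Never pick the unaligned split unless it was actually computed, even
    // when the regular split's SAH ends up as inf.
    static const Split &choose_split(const Split &aligned,
                                     const Split &unaligned)
    {
      if (unaligned.valid && unaligned.sah < aligned.sah) {
        return unaligned;
      }
      return aligned;  // an inf-SAH split still beats a garbage one
    }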
It's a regression and should be in 2.78a.
Material linking can and does change the way drawObject is calculated,
but does not tag drawObject for recalculation in any way.
Now use the dependency graph to tag the draw object for recalculation.
Currently do this using the OB_RECALC_DATA tag since tagging is not very
granular yet. In the future we can easily introduce more granular tagging
in the new dependency graph.
Simple and safe for 2.78a.
Use a context manager for the output file itself, and a whole try/except
block to at least catch and print errors in the file.
Also some minor tweaks to the previous 'list add-ons' commit.
Note that volume rendering is not supported yet; this is a step towards that.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2299
Now the strokes can be locked to a plane set at the cursor location.
This option allows the artist to rotate the view and draw while keeping
the strokes flat over the surface. It is similar to the Surface option,
but doesn't need an object.
The option is only valid for the 3D view and for strokes in CURSOR mode.
When ED_screen_animation_play is called from wm_event_do_handlers, ScrArea *sa = CTX_wm_area(C); is NULL in ED_screen_animation_timer.
Inform the audio system in CTX_data_main_set that a new Main has been set.
Not really possible to precisely detect all cases in which they should or
should not be active, but at least now it won't show as disabled when it
actually has some effect.