This was introduced in 9ad2c0b615.
Although this still doesn't fix the issue, it updates the preview
system to use COLLECTION_DISABLED as intended.
What is missing now is for the flushing to work effectively.
This adds the possibility of screen-space raytraced shadows to fix light leaking caused by shadow maps.
These inherit the same artifacts as other screen-space methods.
Two issues here:
- Checking that the table size is non-zero is not a proper check here. This is
because we first resize the table and then fill it in, so it was possible for a
non-initialized table to be used.
Trickery with using temporary memory and then doing table.swap() might work,
but we can not guarantee that the table size will be set after the data pointer.
- The mutex guard was useless, because every thread was using its own mutex. The
mutex guard needs to be static so all threads use the same mutex.
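A minimal C sketch of the second fix, assuming a pthread-based guard (all names here are hypothetical):

```c
/* The 'static' is the point: one mutex shared by every thread,
 * instead of a new mutex per thread/per call. */
#include <pthread.h>

static pthread_mutex_t table_mutex = PTHREAD_MUTEX_INITIALIZER;

void table_ensure_initialized(void)
{
    pthread_mutex_lock(&table_mutex);
    /* Resize and fill the table while no other thread can observe
     * it in a half-initialized state. */
    pthread_mutex_unlock(&table_mutex);
}
```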
Tried 101 but it gives collisions.
I think 257 is enough now that we don't have thousands of uniforms.
This gives some noticeable performance improvement.
Could be refined further.
The issue was caused by the light sample being evaluated to NaN at some point.
The root cause is still to be fixed, but it is very hard to track down,
especially via ssh (the issue only happens on AVX2 release builds). Will give it
a closer look when back at my AVX2 machine.
Until then this is a good check to have anyway; it corresponds to what
happens in the regular radiance sum.
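As a sketch, such a guard can be as simple as rejecting non-finite contributions (hypothetical helper, assuming C99 isfinite()):

```c
#include <math.h>

/* Reject NaN/Inf light sample contributions, mirroring the check
 * done in the regular radiance sum. */
static int light_sample_is_finite(const float rgb[3])
{
    return isfinite(rgb[0]) && isfinite(rgb[1]) && isfinite(rgb[2]);
}
```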
This changes quite a few things:
- Drops the allocation of inputs as a chunk.
- Merges the linked list system into Gwn_ShaderInput.
- Puts the name buffer into another memory block, easily resizable.
- Uses an offset instead of a char* to point to the input name.
- Adds only the requested uniforms dynamically to the Shader Interface.
This drops some minor optimisations and uses a bit more memory for small shaders (which are fixed count).
But it saves a lot of memory when using UBOs, because the names and the Gwn_ShaderInput were alloc'ed for every UBO variable.
This also reduces the Shader Interface's initial generation time.
The lookup time is left unchanged.
Camera clipping was left to default values, which won't work well for
very large (or small) objects. Now recompute valid clipping start/end
based on the bounding box of the rendered data, and the final location of the camera.
This makes brush influence into a tube instead of a sphere.
It can be used along the outline of a mesh to adjust its silhouette.
Note that all this takes advantage of changes from vertex paint;
from testing, this seems useful, so it is exposed in the brush options.
This is an internal structure, and we don't put it in a list for anything other
than hash collision resolution. No need for a dedicated entry here; this saves us
an extra allocation and a pointer dereference.
This way we reduce the number of loops from loop-over-all-inputs to
loop-over-collisions, which is expected to take much fewer CPU ticks.
There is still a possible optimization: use a memory pool of some sort
to manage the memory needed for hash entries, but that will only speed up
shader interface construction/destruction time.
There is also some trickery happening to speed up the process even more
in the case where no hash collisions are detected when constructing the
shader interface.
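A sketch of the core idea in C; the struct layout and helper names are hypothetical, only the intrusive next pointer, the name offset, and the loop-over-collisions lookup reflect the description above:

```c
#include <stddef.h>
#include <string.h>

/* The collision chain lives inside the input struct itself,
 * so no separate hash-entry allocation is needed. */
typedef struct ShaderInput {
    struct ShaderInput *next; /* next input in the same hash bucket */
    unsigned int name_offset; /* offset into the shared name buffer */
    int location;
} ShaderInput;

static ShaderInput *input_lookup(ShaderInput **buckets, unsigned int num_buckets,
                                 const char *name_buffer, const char *name,
                                 unsigned int name_hash)
{
    /* Walk only the collision chain, not the whole input list. */
    for (ShaderInput *input = buckets[name_hash % num_buckets];
         input != NULL;
         input = input->next)
    {
        if (strcmp(name_buffer + input->name_offset, name) == 0) {
            return input;
        }
    }
    return NULL;
}
```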
This behavior makes more sense for sculpt, less so for painting.
Restores non-PBVH behavior, adding `BKE_pbvh_find_nearest_to_ray` -
similar to ray-cast except it finds the closest point on the surface.
The work size is still very conservative, and this doesn't help for progressive
refine. For that we will need to render multiple tiles at the same time. But this
should already help for denoising renders that require too much memory with big
tiles, and just generally soften the performance dropoff with small tiles.
Differential Revision: https://developer.blender.org/D2856
This was originally done with the first sample in the kernel for better
performance, but it doesn't work anymore with atomics. Any benefit was
very minor anyway, too small to measure it seems.
- Separate the Post Process settings into a sub-panel.
- Rename "Viewport Anti-Aliasing" to sampling & super-sampling, as it also reduces the noise of other effects.
- Remove the Temporal Anti-Aliasing toggle and make it always active unless the number of samples is 1.
Restoring weights is problematic when the stroke overlaps its mirror.
It's better to simply compute the new weight based on the saved data
rather than restoring things, and check that the change is monotonic.
This way is also closer to how things worked before the merge.
This secondary accumulation option accumulated brush falloff.
The same option in image painting accumulates color,
as vertex paint 'Spray' does.
Giving this option different behavior for vertex paint seems strange.
Also, this is basically increasing falloff over time.
Remove the new code, expose existing 'Spray' as 'Accumulate'
to match other paint modes.
Notes:
- Changes in paint_vertex.c were simple to merge, mainly related to passing
the evaluation context.
- Conflicts in EditDM and drawmesh.c are solved using code from the blender2.8
branch. Those areas are deprecated and not to be used in the final release.
However, it's possible that some reference code from master is lost, so
pay attention when adding alpha support for vertex painting.
The issue here is that we can not read the scale from the socket when determining
the dependent area of interest, since this area depends on the current pixel. Now fall
back to a dumber but more reliable approach: if the scale size input is connected to some
nodes, we use the whole frame as the area of interest.
This makes vertex paint match image painting more closely.
- Add falloff shape option sphere/circle
where sphere uses a 3D radius around the cursor and
circle uses a 2D radius (projected), like previous releases.
- Add normal angle option so you can control the falloff.
- Add Cull option, to avoid painting onto faces pointing away.
Disabling normals, culling and using circle falloff
allows you to paint through the mesh.
When painting with spray disabled - we need to re-apply
on top of the original each time.
Applying the soc-2016-pbvh-painting branch removed this.
While I'd added back a simple previous weight array,
this won't work when multiple groups are painted at once.
This is most pronounced in Auto-Normalize + Multi-Paint. Unlike
vertex paint, the weights being painted on in weight paint mode
don't necessarily correspond to the weight actually stored in
any one vertex group, and may instead be a computed aggregate.
This restores original code behavior lost in rB4f616c93f7cb.
This required some small changes to the data display shaders so that they match the way the object mode renders them.
Strangely enough, I had to remove the normal attribute from the display code because it was not being bound as soon as I created another rendering call in object mode. The problem may be deeper, but I did not have time for this, so I derive the normal from the sphere position.
Note that this tool seems like it might need to be rewritten
since results are quite strange.
Projecting on the view vector gives a small improvement though.
paint_vertex.c was getting too big, move all code unrelated to
mode switching and modal painting into their own files.
Also replace the vertex-color operators' region redraw tag with notifiers.
GSOC 2017 by Darshan Kadu, see: D2859.
This is a partial merge of some of the features from
the soc-2017-vertex_paint branch.
- Alpha painting & drawing.
- 10 new color blending modes.
- Support for vertex select in vertex paint mode.
This removes a bunch of code that is no longer needed, and running
"make update" will now automatically download the new libraries.
Differential Revision: https://developer.blender.org/D2861
This is done by storing only a subset of PathRadiance, and by storing
direct light immediately in the main PathRadiance. Saves about 10% of
CUDA stack memory, and simplifies subsurface indirect ray code.
This is really convenient for development. Either for profiling the
generated shaders or to check if the generated code is correct.
It writes the shaders to the temporary blender session folder.
2016 GSOC project by @nathanvollmer, see D2150
- Mirrored painting and radial symmetry, like in sculpt mode.
- Volume based splash prevention,
which avoids painting vertices far away from the 3D brush location.
- Normal based splash prevention,
which avoids painting vertices with normals opposite the normal
at the 3D brush location.
- Blur mode now uses a nearest-neighbor average.
- Average mode, which averages the color/weight
of the vertices within the brush.
- Smudge mode, which pulls the colors/weights
along the direction of the brush.
- RGB^2 color blending, which gives a more accurate
blend between two colors.
- Multithreading support (PBVH leaves are painted in parallel).
- Foreground/background color picker in vertex paint
by the transform constraint lines.
Ported over e7395c75d5 from the
greasepencil-object branch. I should've fixed this ages ago, but
couldn't figure out why at the time.
This adds TAA to Eevee. The only thing important to note is that we need to keep the unjittered depth buffer so that the other engines are composited correctly.
The problem was that orthographic views can have hit positions that are negative, so we cannot encode the hit in the sign of the Z component.
The workaround is to store the hit position in screen space. But since we are using a floating point render target, we are losing quite a bit of precision.
TODO: use RGBA16 instead of RGBA16F. But that means encoding the pdf value somehow.
You can change the number of samples in the user preferences. You do not need to restart Blender to see the effect in the new viewport.
This adds another multisample framebuffer and textures (so even more memory is required).
It works by blitting the default_fb to the multisample_fb each time the renderer needs to render one or more "wire" passes.
It is then blitted back to the default_fb so that the rest of the pipeline works as expected.
We COULD lower the GPU memory/bandwidth usage by rendering everything to the same multisample FBO and changing the logic depending on whether MSAA is enabled or not, but I think it's a bit too much work for now.
One crucial thing here: OpenVDB should be compiled WITHOUT
OPENVDB_ENABLE_3_ABI_COMPATIBLE flag. This is how OpenVDB's Makefile is
configured and it's not really possible to detect this for a compiled library.
If we ever want to support that option, we need to add an extra CMake argument and
use the old version 3 API everywhere.
Even if pointer assignment may be atomic, it does not prevent reordering
and other nifty compiler tricks; we need a memory barrier to ensure not
only that transferring the pointer from the wip array to the final one is atomic,
but also that all previous writes to memory are “flushed” to
(made visible to) all CPUs...
Thanks @sergey for finding the potential (though quite unlikely) issue.
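A sketch of the publish/consume pattern in C11 atomics (hypothetical names, not the actual Blender code):

```c
#include <stdatomic.h>
#include <stddef.h>

static _Atomic(float *) final_array = NULL;

void publish_array(float *wip)
{
    /* ...all writes to wip[] happen before this point... */
    /* Release store: earlier writes become visible to any thread
     * that later acquire-loads the pointer. */
    atomic_store_explicit(&final_array, wip, memory_order_release);
}

float *read_array(void)
{
    return atomic_load_explicit(&final_array, memory_order_acquire);
}
```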
Now uses "Text Selected" theme color of "Menu Back" widget colors. Also
repositioned text slightly to have same margin on top and right (measured
by eye ;) ).
Tested with all bundled themes (contrib and no-contrib) and worked fine.
Considering that different splashes may need different colors for
overlaid text, using theme color may not be the best solution. I would
like to try how this works before adding an ugly way to force a certain
text color though.
Also tried different approaches, but this one I find the least ugly :S
As far as longer term plans go, we wanted to get a redesigned multi-page
splash screen anyway. At this point we can rethink how splash colors work
in general (i.e. auto-contrast, own splash theme colors, etc).
It has been deprecated since at least macOS 10.9 and fully removed in 10.12.
I am unsure if we should remove it only in 2.8, but you cannot build Blender with it supported when using a modern Xcode version anyway, so I would tend towards removing it for 2.79 too, if that ever happens.
Reviewers: mont29, dfelinto, juicyfruit, brecht
Reviewed By: mont29, brecht
Subscribers: Blendify, brecht
Maniphest Tasks: T52807
Differential Revision: https://developer.blender.org/D2333
Adds FXAA for smoothing out the extracted outlines.
The post-process anti-aliasing is only done on the alpha channel of the outlines.
Because of that, we need to bleed the outline color out of the silhouette so the AA'd alpha can blend the right color and not pick black when the alpha is smoothed outside of the silhouette.
Also, because the AA needs clear contrast to work with, I decided to ditch the "blurring" of the occluded outlines.
The FXAA adds an overhead of 0.17ms, but we gain back 0.22ms * 4 = 0.88ms by removing the blur.
The FXAA implementation is from Corey Richardson (cmr) (D2717). I had to modify it a bit to only filter the alpha channel.
This introduces some small artifacts on the borders of edges, because some pixels with very low opacity do not get discarded and then occlude the face rendered behind if it has not been drawn yet.
To fix this, I added an offset in the geometry shader for the edge fixup. This makes the artifact only visible on the border of the object if there is a very dense wire region. It's only visible in edge select mode, since vertex and face centers also hide the artifacts.
We could enable this only when AA is enabled, but for now it's always enabled.
Introducing a new header using the Blender socket logo,
commit of the source file will follow soon.
Splash committee: Ton Roosendaal, Dalai Felinto, Pablo Vazquez.
Artwork is a screenshot of 'Wanderer', an Eevee sample file
by Daniel Bystedt, available on blender.org (license: CC-BY-SA)
Textures were bound once, but since they were not unbound, their bind_num would not change and they were considered still bound the next time a shader needed them.
Fix T52866
Fix T52855
Partial revert of 9068c0743e.
This commit tried to do two things:
(1) Fix UBO binding logic [good]
(2) "Improve" texture binding logic [bad]
Don't ever mix different fixes and refactors in the same commit.
Iterate over invisible objects too, so lamps can still light the scene.
Also, you can now use a collection to set an object to invisible, not
only to visible.
For example:
Scene > Master collection > bedroom > furniture
Scene > View Layer > bedroom (visible)
                   > furniture (invisible)
The View Layer has two linked collections, bedroom and furniture.
This setup will make the furniture collection invisible.
Note: Unlike what was suggested on D2849, this does not make collection
visibility influence camera visibility. I will keep this as a separate
patch.
Reviewers: sergey
Subscribers: sergey, brecht, fclem
Differential Revision: https://developer.blender.org/D2849
- No more hardcoded python35/36 tokens in the scripts.
- Disabled the Python module for Boost; it was not used.
- Updated patches for Python to support building with MSVC 2013.
For the first bounce we now give each BSDF or BSSRDF a minimum sample weight,
which helps reduce noise for a typical case where you have a glossy BSDF with
a small weight due to Fresnel, but not necessarily small contribution relative
to a diffuse or transmission BSDF below.
We can probably find a better heuristic that also enables this on further
bounces, for example when looking through a perfect mirror, but I wasn't able
to find a robust one so far.
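A rough sketch of such a heuristic; the floor fraction and all names are hypothetical, not Cycles' actual values:

```c
/* Raise each closure's sample weight to a fraction of the total,
 * so a low-weight glossy closure still receives enough samples. */
void closure_apply_min_sample_weight(float *sample_weights, int num_closures,
                                     const float min_fraction)
{
    float total = 0.0f;
    for (int i = 0; i < num_closures; i++) {
        total += sample_weights[i];
    }
    const float weight_floor = total * min_fraction;
    for (int i = 0; i < num_closures; i++) {
        if (sample_weights[i] < weight_floor) {
            sample_weights[i] = weight_floor;
        }
    }
}
```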
Similar to what we did for area lights previously, this should help
preserve stratification when using multiple BSDFs in theory. Improvements
are not easily noticeable in practice though, because the number of BSDFs
is usually low. Still nice to eliminate one sampling dimension.
Previously the Sobol pattern suffered from some correlation issues that
made the outline of objects like a smoke domain visible. This helps
simplify the code and also makes some other optimizations possible.
Right now this is exposed in the outliner, though all this
(visible/selectable/enable) should be moved to a new panel soon.
This removes objects from the depsgraph when the collection is disabled.
It allows you to "hide" lamps but still have them light the scene.
Same for light probes and other support objects.
Pending tasks:
* Have the depsgraph include invisible objects in DEG_OBJECTS_ITER, and
then have Eevee and other engines make a distinction between an
invisible and a visible object.
(for example, we probably want invisible objects to not show in the
viewport, but cast shadows and show up in light probes).
* Change how we evaluate collection settings so that an invisible
collection can force an object to be invisible.
Reviewers: campbellbarton
Subscribers: sergey
Differential Revision: https://developer.blender.org/D2848
Now we replace O(N^2) computational complexity with an O(N) extra memory penalty.
Memory is much cheaper than CPU time. Keep in mind, the memory penalty is about
4 megabytes per 1M vertices.
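The stated ~4 bytes per vertex suggests a plain reverse-index map; a hypothetical sketch of the trade-off:

```c
#include <stdlib.h>

/* One int per vertex (~4MB per 1M vertices) turns a linear search
 * per query, O(N^2) overall, into an O(1) lookup per query. */
int *build_reverse_index_map(const int *vert_orig_index, int num_verts)
{
    int *map = calloc((size_t)num_verts, sizeof(int));
    for (int i = 0; i < num_verts; i++) {
        map[vert_orig_index[i]] = i; /* direct lookup instead of searching */
    }
    return map;
}
```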
Tentative fix, since I cannot reproduce the issue for some reason here
on linux.
Core of the problem is pretty clear though, thanks to Germano Cavalcante
(@mano-wii): another thread could try to use looptris data after worker
one had allocated it, but before it had actually computed looptris.
So now, we use a temp 'wip' pointer to store looptris being computed
(since this is protected by a mutex, other threads will have to wait on
it, no possibility for them to double-compute the looptris here).
This should probably be backported to 2.79a if done.
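A condensed sketch of the described scheme (C, pthread-based, names hypothetical):

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct LoopTri { int tri[3]; } LoopTri;

static pthread_mutex_t looptris_mutex = PTHREAD_MUTEX_INITIALIZER;
static LoopTri *cached_looptris = NULL;

LoopTri *looptris_ensure(int num_tris)
{
    pthread_mutex_lock(&looptris_mutex);
    if (cached_looptris == NULL) {
        LoopTri *wip = calloc((size_t)num_tris, sizeof(LoopTri));
        /* ...fully compute wip[] here, still under the mutex... */
        /* Publish only once computed: readers never see a non-NULL
         * but uninitialized array. */
        cached_looptris = wip;
    }
    pthread_mutex_unlock(&looptris_mutex);
    return cached_looptris;
}
```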
The issue was caused by a threading conflict around looptris: it was possible
that the DM would return a non-NULL but non-initialized array of looptris.
Thanks Campbell for second pair of eyes!
Use triple buffer by default now on all platforms; the remaining cases where it was not used were:
* Mesa: seems to have been working well for a long time now, and not using
it gives issues with the latest Mesa 17.2.0.
* Windows software OpenGL: no longer supported since OpenGL 2.1 requirement
was introduced.
* OS X with thousands of colors: this option was removed in OS X 10.6, and
that's our minimum requirement.
UBOs need to be rebound every time the shader changes.
This simplifies the logic a bit.
Also modify texture binding logic to potentially reuse more already bound textures.
When two loops have different numbers of vertices,
the distribution for the vertex fan-fill depended on the loop order
and was often lop-sided.
This caused noticeable inconsistencies depending on the input,
since edge-loops are flipped to match each other's winding order.
This fixes the crappy binding logic.
Note the current method does a lot of useless binding. We should somehow order the textures so that reused textures are already bound most of the time.
Use more watertight and robust intersection test.
It now uses ray-to-triangle intersection, but that's all fine because the segment was
covering the whole bounding box anyway.
Mainly when object origin is not at the geometry bounding box center.
Seems to be straightforward to fix, hopefully it doesn't break some obscure case
where this was a desired behavior.
The issue here was that removing a datablock from the main database will poke editors to
update, which includes the buttons context freeing users of the texture. Since Cycles
frees datablocks from a job thread, it might crash Blender, since the main thread
might be in the middle of drawing.
Solved by exposing extra arguments to bpy.data.foo.remove() which indicate
whether we want to perform ID user count and interface updates. While scripts
shouldn't be using those normally, this is the only way to allow Cycles to skip
the interface update when removing a datablock.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2840
The issue was caused by the render result identifier consisting only of the scene name,
which could indeed cause conflicts.
On the one hand, there are quite some areas in Blender where we need the identifier
to be unique to properly address things. Usually this is required for sub-data
of IDs, like bones. On the other hand, it's not that hard to support this
particular case and avoid possible frustration.
The idea is, we add the library name to the render identifier for linked scenes. We use
the library name and not the pointer so we preserve render results through the undo stack.
Reviewers: campbellbarton, mont29, brecht
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2836
Add a sanitizer. I wanted to stay away from this because I think we should fix whatever causes the NaNs in the first place. But there can be too many different factors causing NaNs, and they can come from user inputs.
Was caused by numeric overflow when calculating preview dimensions.
Now we try to avoid really insane preview resolutions by fitting the
aspect into a square.
The branching introduced by the uniform caused problems on mesa + AMD in the resolve stage.
This patch creates one shader per sample count, without branching.
This improves performance of the single-ray-per-pixel case (3.0ms against 3.6ms in my testing).
The issue was caused by operator redo, which frees all of the object's evaluated data,
including the bounding box. This bounding box can not be reconstructed properly
without full curve evaluation (we need to at least convert the font to nurbs, which is
not cheap already).
There is absolutely no reason to have such an indentation level; it only causes
readability and maintainability issues. It is really simple to make the code more
"streamlined".
This was caused by a fairly funky uninitialized buffer (last frame) that was causing NaNs during the SSR resolve stage.
They were then propagated to the whole image during the next swap.
Bypassing the SSR completely if no valid history exists fixes the problem. Also disabling SSR data output in this case, so we can have correct reflections in the first history buffer.
Rather than treating all ray types equally, we now always render 1 glossy
bounce and unlimited transmission bounces. This makes it possible to get
good looking results with low AO bounces settings, making it useful to
speed up interior renders for example.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2818
You can now use a transparent shader as a completely transparent BSDF, and use whatever alpha mask in a mix shader between a transparent BSDF and another BSDF.
In fact, any type of baking might have caused holes in the mesh.
The issue was caused by zspan_scanconvert() attempting to get the order of traversal
'a-priori', which might have failed if the check happens at the "tip" of a span where
`zspan->span1[sn1] == zspan->span2[sn1]`.
Didn't see anything bad in making it a check when iterating over scanlines and
picking the minimal span based on the current scanline. It's slower, but unlikely to cause a
measurable difference. Quality should stay the same unless I'm missing something.
Reviewers: brecht, dfelinto
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2837
Previously we used a 1D sequence to select a light, and another 2D sequence
to sample a point on the light. For multiple lights this meant each light
would get a random subset of a 2D stratified sequence, which is not
guaranteed to be stratified anymore.
Now we use only a 2D sequence, split into segments along the X axis, one for
each light. The samples that fall within a segment then each are a stratified
sequence, at least in the limit. So for example for two lights, we split up
the unit square into two segments [0,0.5[ x [0,1[ and [0.5,1[ x [0,1[.
This doesn't make much difference in most scenes, mainly helps if you have a
few large area lights or some types of HDR backgrounds.
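A sketch of the remapping (hypothetical helper; uniform light selection assumed):

```c
/* The X coordinate of one 2D sample both selects a light (which
 * segment of [0,1) it falls in) and is rescaled so samples within a
 * segment remain stratified. */
void light_sample_from_2d(float u, int num_lights, int *r_light, float *r_u)
{
    const float scaled = u * (float)num_lights;
    int index = (int)scaled;
    if (index > num_lights - 1) {
        index = num_lights - 1; /* guard against u == 1.0 */
    }
    *r_light = index;
    *r_u = scaled - (float)index; /* back to [0,1), still stratified */
}
```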
This causes render differences in some scenes, for example fishy_cat
and pabellon scenes render brighter in a few spots. This is an old
bug, not due to recent RR changes.
This is in order to use the same texture on multiple samplers.
Also, the texture counter is reset after each shading group. This mimics the previous behaviour.
Just using the same code for distribution over faces/volume as the one
changed/used for vertices since some months ago.
Note that this change breaks compatibility, in that the distribution
of particles over faces/volume may not be exactly the same as
previously.
Previously only the active object was used.
Use coroutines to support baking frames for multiple objects at once,
without having to play back the animation multiple times.
- Replace poisson by concentric samples: less variance. They are sorted by radius, then by angle.
- Separate filtering into 2 blurs. The first blur is a 3x3 box blur. The second is user-dependent.
- Group fetches in groups of 4.
This brings some data structure changes.
Shared shadow data are stored in ShadowData (in glsl) (aka EEVEE_Shadow in C).
This structure contains the array indices of the first shadow element of this shadow "object".
It also contains how many shadows to evaluate (to be used for multiple shadow maps).
The filtering is noisy and needs improvement.
- Use only one 2D texture array to store all shadow maps.
- Allow changing the shadow map resolution.
- Do not output radial distance when rendering shadow maps. This will allow fast rendering of shadow maps once we drop the use of geometry shaders.
- Use indices instead of character args.
- Use numbered macros instead of variadic args.
Parsing using rtags used over 11GB of memory. While this should be
resolved upstream (reported as #1053), the extra complexity didn't give
any real advantage.
Disabled forceinline for those architectures, which seems to be compiling
successfully more often.
There might be a ~3% slowdown based on quick tests, but it's better to be rendering
something than failing to compile kernels again and again.
Those architectures are doomed to be abandoned once we switch to toolkit 9.
Empty BVH nodes are set to NaN which must be preserved all the way to the
tnear <= tfar test which can then give false for empty nodes. This needs
strict semantics and careful argument ordering for min() and max(), so
the second argument is used if either of the arguments is NaN.
Fixes T52635: crash in BVH traversal with SSE4.1.
Differential Revision: https://developer.blender.org/D2828
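The ordering trick can be sketched with a scalar version (hypothetical; the actual code uses SSE intrinsics):

```c
/* (a < b) ? a : b returns b whenever either argument is NaN, matching
 * SSE min/max semantics. Ordering the node-derived (possibly NaN)
 * values as the second argument lets the NaN propagate, so the final
 * tnear <= tfar test is false for empty nodes. */
static inline float min_nan_second(float a, float b)
{
    return (a < b) ? a : b; /* returns b if a or b is NaN */
}

static inline float max_nan_second(float a, float b)
{
    return (a > b) ? a : b; /* returns b if a or b is NaN */
}
```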
There was some invalid state in the screen here, some areas had
sa->full set even though no screen was maximized, which then caused
a restore from the wrong (empty) area, which then led to spacedata
being empty and a crash.
This fix properly clears the sa->full after restore, and also fixes
existing .blend files in such an invalid state.
Only the camera from View3D.localvd is used,
other pointers may be invalid.
Longer term we should probably clear these to ensure no accidents.
For now just follow the rest of Blender's code and don't access.
Baking rigid body cache was broken if some cached frames already
existed.
This is just a band-aid for the release; the logic needs to be looked into
further.
Since the change to prevent shader recompilation at every update, we got
a regression when clearcoat was used.
Basically, at shader build time we would determine if the shader
needed clear coat, and if it didn't, it would build a different GLSL
program.
However if later the user updated the clearcoat value so that it would
then require the full clearcoat shader, the user wouldn't get it until
manually forcing the shader to recompile, or reopening the file.
We now handle the optimization in the GLSL code. That adds minimal
overhead due to branching, but the overall performance seems unchanged
(tested on Linux on AMD and NVidia).
Reviewers: pascal, brecht, fclem
Differential Revision: https://developer.blender.org/D2822
Operators and their properties are two different types.
Previously both operators and their properties were added,
causing C operators to access the properties, and Python the classes.
Favor consistency in this case, so only Python classes are added.
A single diagonal axis was used for sorting coordinates,
the algorithm relied on users not having vertices axis aligned.
Use BLI_kdtree to remove doubles instead.
Overall speed varies; it's more predictable than the previous method.
Some typical tests gave speedups of ~1.4x - 1.7x.
For example, if you have two keyframes:
k1 = 1px, k2 = 10px
it was doing:
1px, 9px, 8px, ..., 3px, 2px, 10px
instead of:
1px, 2px, 3px, ..., 8px, 9px, 10px
This reverts commit 134e927965.
Writing into a const event is very bad,
but this change broke compositor manipulators.
Will look into a better solution eventually.
For some specific pipelines (e.g., holographic rendering) you can easily
need over a million frames (1k * 1k view angles).
It seems a corner case, but there is no real reason not to allow users
doing that.
That said, we do lose sub-frame precision in the highest frame range, which can
affect motion blur. The current maximum sub-frame precision we have is 16,
while the previous limit of 500k frames has a precision of 32.
Thanks to Campbell Barton for the help here.
To be backported to 2.79
Own error in recent type checks, in many cases the 'idname'
is used for the struct identifier, not the 'identifier'
which is the Python class name in this context.
Mostly internal changes, keeping both manipulators
could have worked but there was no point long term.
There are still some glitches to resolve, will work on those next.
Basically since g_data was malloc'ed (instead of calloc'ed)
g_data->minzbuffer was never initialized.
So when running DRW_framebuffer_init after EEVEE_effects_init, the test
of *g_data->minzbuffer would lead to unpredictable results.
This was caught by valgrind, reported by Sergey Sharybin.
Previous version was trying to do a quick and simple process in the case
where we were only considering the smooth/flat status of faces.
Thing is, even then, the algorithm was not actually working in all
possible situations, e.g. two smooth faces having a single vertex in
common, but no common edges, would not have split that vertex, leading
to incorrect shading etc.
So now, tweaked slightly our split normals code to be able to generate
lnor spaces even when autosmooth is disabled, and we always go that way
when splitting faces.
Using smooth fans from clnor spaces is not only the only way to get 100%
correct results, it also makes the face split code simpler.
One problem is that it was always using the __mm_blendv_ps emulation even if the
instruction was supported. The other is that the emulation function was wrong.
Thanks a lot to Ray Molenkamp for tracking this one down.
Apparently with Maya in a certain configuration, it's possible to have an
Alembic object without schema in the Alembic file. This is now handled
properly, instead of crashing on a null pointer.
This affects the curve display color setting, but is really intended
for future per-curve options.
The id_data reference in the created rna pointers refers to the object
even if the curve is actually owned by its action, which is somewhat
inconsistent, but the same problem can be found in existing code.
Fixing it requires changes in animdata filter API.
The idea is the following: split the copy-on-write update for node trees, and if we are only
tagging for a uniform buffer update, we skip the whole datablock copy and only
copy default_values from the original node tree to the copied one.
The thing I'm not sure about is whether we need to use different branches in the graph
itself to control such conditional behavior, or whether we need to store the tag
somewhere in the dependency graph. There are obviously cons and pros to both
approaches, and I need to think about this. Maybe with more examples it becomes
more obvious which way is better.
This only fixes manual tweaks for now, animation support is coming.
Adding alongside the existing one for now,
but it should eventually replace it.
Uses a matrix instead of (position + scale),
written so rotation can be done more easily.
Currently has a primitive handle for rotation, supports corner scaling.
There were two issues here:
1. material_update did not do anything, because DEG_id_tag_update was storing
update tags in original IDs, which had nothing evaluated. Even more, material
update should have been called with the evaluated version of the material. Solved
this by copying the update tag from the original ID to the copied one.
However, perhaps DEG_id_tag_update should tag both the original and copied IDs,
so updates never get lost if some depsgraph is not visible.
2. Tagging a material for update should ensure its copied version of the node tree is
up to date, otherwise the material will still use the old node tree.
This solves missing material updates when changing topology. Tweaking values is
still broken, because GPUMaterial uses pointers to the original nodes' socket
values, which get broken after copy-on-write of the node tree (pointers of nodes
change).
Previously it was returning short, which made it really easy to (a) compare against
a non-ID type value, and (b) forget to handle some specific value in a switch statement.
Both issues happened in the recent past, so it's time to tighten some nuts
here.
Most of the change is related to silencing strict compiler warnings now, but there
is also one tricky aspect: ID_NLA is not in the IDType enum, so there is still a
cast to short to handle that switch. If someone has better ideas how to deal
with this, please go ahead :)
A single manipulator could only assign a single operator to each part.
Now each part can have its own.
Also modify the 2D selection callback; 2D started at 1, 3D at 0.
Now use -1 for the unset value, and start both at 0.
- Vertex only meshes never restored their selection history.
- Select history was cleared on the source instead of the target.
Simple Optimizations:
- Avoid O(n^2) linked list looping that checked the entire list before
adding elements (NULL values in the source array to prevent dupes).
- Re-use vert & edge lookup tables instead of allocating new ones.
Issue was a nasty hidden one, the dual status (mix of local and linked)
of proxies striking again.
Here, the remapping process was considering the obdata pointer of proxies as
indirect usage, hence clearing the 'LIB_TAG_EXTERN' of the obdata pointer.
That would make the savetoblend code not store any 'lib placeholder' for the
obdata data-block, which was hence lost on the next file read.
Another (probably better) solution here would be to actually consider
obdata of proxies as fully indirect usage, and simply reassign proxies
from their linked object's obdata on file read...
However, that change shall be safer for now, probably good for 2.79 too.
This reverts commit 3888227a7b.
This "Fix" was made while ORCO was broken. Now that orco itself is fixed
this is no longer required, otherwise Tangent node produces different
results in Cycles and Eevee.
Don't use quick sort for small arrays, bubble sort works way faster for small
arrays due to cache coherency. This is what qsort() from libc is doing actually.
We can also experiment with unrolling some extra small arrays, for example 3- and 4-
element arrays.
This reduces tangent space calculation for dragon from 3.1sec to 2.9sec.
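A sketch of the dispatch (threshold and names hypothetical):

```c
#include <stdlib.h>

#define SMALL_SORT_THRESHOLD 16 /* hypothetical cutoff */

/* Simple quadratic sort: cache-coherent and branch-cheap for tiny n. */
static void sort_floats_small(float *arr, int n)
{
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {
                const float tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}

static int cmp_float(const void *a, const void *b)
{
    const float fa = *(const float *)a;
    const float fb = *(const float *)b;
    return (fa > fb) - (fa < fb);
}

void sort_floats(float *arr, int n)
{
    if (n <= SMALL_SORT_THRESHOLD) {
        sort_floats_small(arr, n);
    }
    else {
        qsort(arr, (size_t)n, sizeof(float), cmp_float);
    }
}
```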
Brings tangent space calculation from 4.6sec to 3.1sec for dragon model in BI.
Cycles is also somewhat faster, but it has other bottlenecks.
Funny thing: using a simple `static inline` already gives a lot of speedup here.
That pretty much answers the question of whether it's OK to leave the decision on
what to inline up to the compiler.
Would be nice to be able to catch this with an assert as well; will see what would
be the best way to do this.
Need to verify with Mai that this solves the crash for her, and maybe consider
porting this to 2.79.
Orco should behave the same if it comes from unconnected vec inputs, or
from the Texture Coordinate -> Generated node output.
Fixup for 11e7e0769a.
This is related to T52528. The coordinates are still different between
Eevee and Cycles, but at least it behaves consistent within itself.
In fact the shader should now be correct, but the orco attributes we are
passing to the shader seem to be where the problem is. That is to be
tackled separately.
The old text described scripts as "Artwork", which is your sole property and free
to license. I've removed the reference to scripts in this text.
This was from 2002! With our Python scripts becoming part of how Blender runs,
such scripts now are officially required to be compliant with GNU GPL.
For more information, check the FAQ (https://www.blender.org/support/faq/) or consult foundation@blender.org.
Some manipulators are used like on-screen buttons,
in this case it doesn't make sense to keep track of their state,
so zero the offset when it's unused.
Needed for lamp-target manipulator.
We have a hardcoded limit of 1000 images to be baked.
However, anything above 100 would lead to an overflow in the code.
Caught by warning from builder bot (my compiler doesn't even complain
about this, but it should).
Re-use operator return flags for manipulator modal & invoke,
this means manipulators can allow navigation or other events to be
handled as they run - see T52499
Fishy cat benchmark was rendering with wrong shadows. Cause is unclear,
adding printf or rearranging code seems to avoid this issue, possibly a
compiler bug. This reverts the fix and solves the OSL bug elsewhere.
This was needed when we accessed OSL closure memory after shader evaluation,
which could get overwritten by another shader evaluation. But all closures
are immediately converted to ShaderClosure now, so this is no longer needed.
While unlikely to have had any serious effects because of limited use, the
previous implementation was not actually atomic due to a data race and
incorrectly coded CAS loop. We also had duplicates of this code in a few
places, it's now been moved to a single location with all other atomic
operations.
We need to make sure we can store all volume closures for all objects in volume
stack. It is a bit tricky to detect what the "nestedness" level of volumes
would be, so for now use the maximum possible stack depth. Might cause some slowdown,
but better to give reliable render output than to fail quickly.
Should be safe for 2.79 after extra eyes.
We were showing "search for unknown menutype WM_MT_button_context" messages in the terminal, which were not helpful for users, so now they are disabled.
To be backported to 2.79
It is possible to have the same image used multiple times at different frames,
which means we can not free its buffers without any guard. From quick tests
this seems to be doing what it is supposed to.
Needs more testing, and to be ported to 2.79.
Although the problem was exposed in 9457715d9a, the problem was in the
original code that was copied over. To have:
```
} else { /* EXPECTED_VALUE */
```
Without a BLI_assert(value == EXPECTED_VALUE); it is asking for trouble.
Yet another reason to favour switch statements with:
```
default:
BLI_assert(!"value not implemented or supported");
```
Instead of chained if/else if/else /* expected_value */.
- NOCHECK -> ALL
- ALL -> MAYBE_ALL
Where 'MAYBE_ALL' checks to see if the mesh has changed.
This makes it clearer that `BKE_MESH_BATCH_DIRTY_ALL` is dirty and
going to be updated without any guess-work.
This was meant to be generic but introduced possible type errors
and unnecessary complication.
Replace with typed PyC_Tuple_PackArray_* functions.
Also add PyC_Tuple_Pack_* macro which replaces some uses of
Py_BuildValue, with the advantage of not having to parse a string.
- WM_manipulatorgrouptype_remove -> free
- WM_manipulator_group -> WM_manipulator_group_type
Naming here is still a bit confusing,
now at least free/remove are differentiated.
This means we have less overall noise in the rendered image.
SSR, AO, and refraction are affected by this change.
SSR still exhibits artifacts, because the reconstruction pattern needs to change every frame (TODO).
Regression from rBfed853ea78221; calling this inside the thread worker was
not a really good idea anyway, and we already have all the code we need in
the pre-threading init function, it was just disabled for vertex particles
before.
To be backported to 2.79.
This exposes 2 methods for manipulators:
- new_custom_shape
- draw_custom_shape
This can be used by script authors to create and re-use shapes
without dealing with lower-level APIs.
Python's C-API doesn't provide functions to get
ints at specific integer sizes.
This left the caller to check for overflow,
which ended up being ignored in practice.
Add API functions that convert int/uint 8/16/32/64, and also bool,
raising an overflow exception for unsupported ranges.
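One such converter could look like this in C (hypothetical function name; the CPython calls are standard):

```c
#include <Python.h>
#include <stdint.h>

/* Convert a Python int to int32_t, raising OverflowError instead of
 * silently truncating. Returns 0 on success, -1 with an exception set. */
static int pyc_long_as_int32(PyObject *value, int32_t *r_value)
{
    const long long v = PyLong_AsLongLong(value);
    if (v == -1 && PyErr_Occurred()) {
        return -1; /* not an int, or outside long long range */
    }
    if (v < INT32_MIN || v > INT32_MAX) {
        PyErr_SetString(PyExc_OverflowError, "value out of int32 range");
        return -1;
    }
    *r_value = (int32_t)v;
    return 0;
}
```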
The thing that most often still goes wrong for new users building Blender on Windows is checking out the libraries: some skip over the wiki, some check out to the wrong folder. In an effort to reduce the time I spend on this, I added detection of svn and missing libs to make.bat.
When the user has svn installed and the libdir is missing, they will be asked if they want to download the libraries;
if svn is not installed, or the user chooses 'no', the current error message is shown.
Reviewers: Blendify, sergey, juicyfruit
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D2782
We should only early out with any hit in BVH traversal if the only visibility
bits used are opaque shadow. Not when opaque shadow is one of multiple bits.
Also pass by value and don't write back now that it is just a hash for seeding
and no longer an LCG state. Together this makes CUDA a tiny bit faster in my
tests, but mainly simplifies code.
Its purpose is to limit the amount of light that spreads across the screen.
Not entirely sure if it's very useful, but it sure helps to avoid drowning the screen in bloom.
This function was called to recreate the lower mip levels of the probe texture. But this is not its purpose, and it introduced a stall.
This patch adds cubemap mipmap level regeneration in eevee_effects.c.
Since we started supporting the (Cycles) Material Output, old files
stopped working. There is no reason to keep the original Eevee material
output anymore.
This includes doversion for old files.
Move floats around when needed to accommodate vec3 arrays efficiently.
With this we use slightly less memory when possible. Basically, vec3s are not
treated as vec4s unless we have no float to use for padding.
Reviewers: fclem, sergey
Differential Revision: https://developer.blender.org/D2800
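A hypothetical illustration of the packing on the C side, assuming std140 layout rules:

```c
/* Under std140 a vec3 is aligned to 16 bytes; a lone float placed right
 * after it occupies the otherwise-wasted padding slot. */
typedef struct UniformsPacked {
    float light_color[3]; /* vec3: starts a 16-byte slot */
    float intensity;      /* reuses the vec3's padding float */
    float position[3];    /* next vec3: fresh 16-byte slot */
    float radius;         /* again fills the padding */
} UniformsPacked;         /* 32 bytes total, no wasted vec4 slots */
```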
This includes big improvements:
- The horizon search is decoupled from the BSDF evaluation. This means using multiple BSDF nodes has a much lower impact when enabling AO.
- The horizon search is optimized by splitting the search into 4 corners searching similar directions to help GPU cache coherence.
- The AO options are now uniforms and do not trigger shader recompilation (aka freezing the UI).
- Include a quality slider similar to the SSR one.
- Add a switch for disabling the bounce light approximation.
- Fix a problem with bent normals when occlusion gets very dark.
- Add a denoise option that takes the neighboring pixel values via GLSL derivatives. This reduces noise but exhibits 2x2 blocky artifacts.
The downside: separating the horizon search uses more memory (~3MB per sample on an HD viewport). We could lower the bit depth to 4 bits per horizon, but it produces noticeable banding (might be fixed with some dithering).
Diffuse was not outputting the right normal (this is not a problem with SSR, actually).
Glass did not have a proper ssr_id and was receiving environment lighting twice.
Also, it did not have proper fresnel on lamps.
This fixes the Principled shader in Eevee, among other nodes.
Basically, before we were treating all vec3s as vec4s as far as memory
goes. We now only do that when required (i.e., when the vec3 is not
followed by a float).
We can be even smarter about that and move the floats around to provide
padding for the vec3s. However, this is for a separate patch.
That said, there seems to be some strong consensus in corners of the
internet against using vec3 at all [1]. Basically even if we get all the
padding correct, we may still suffer from poor driver implementations in
some consumer graphic cards.
It's not hard to move to vec4, but I think we can avoid doing it as much
as possible. By the time 2.8 is out hopefully most drivers will be
implementing things correctly.
[1] - https://stackoverflow.com/questions/38172696
Deleting the old internal audaspace.
Major changes from there are:
- The whole library was refactored to use C++11.
- Many stability and performance improvements.
- Major Python API refactor:
- Most requested: Play self generated sounds using numpy arrays.
- For games: Sound list, random sounds and dynamic music.
- Writing sounds to files.
- Sequencing API.
- Opening sound devices, e.g. Jack.
- Ability to choose different OpenAL devices in the user settings.
This implements Arvo's "Stratified sampling of spherical triangles". Similar to how we sample rectangular area lights, this is sampling triangles over their solid angle. It does significantly improve sampling close to the triangle, but doesn't do much for more distant triangles. So I added a simple heuristic to switch between the two methods. Unfortunately, I expect this to add render time in any case, even when it does not make any difference whatsoever. It'll take some benchmarking with various scenes and hardware to estimate how severe the impact is and if it is worth the change.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: Vega-core, brecht, SteffenD
Tags: #cycles
Differential Revision: https://developer.blender.org/D2730
Use mesh batch cache for mesh selection.
Note that we could create the batches and free immediately
so they don't take up memory.
This resolves a problem where selection was limited
to immediate-mode buffer size.
Since the pasted object is pasted into the active collection, and not into
its original one, we need to flush/calculate the new collection base
settings (visibility, selectability, ...).
DEG_id_tag_update() for the scene now. Though it may be better to tag
only the object specific IDs in the future.
Caused by own recent changes in handling of verts/edges/etc. arrays storage
for raycasting (rBe324172d9ca6690e8).
Issue was actually even weirder - there is absolutely no reason at all to
release DM here, those finaldm are stored in Object or EditMesh structs and
handled by general update system, other code shall never try to release them!
Originally we were not respecting the original visibility flags of the
collections. However this is required for Copy-on-write (CoW).
Remember to update the svn lib tests folder. I had to update some of the
json files there.
Also adding a new unittest for this particular issue:
Test render_layer_scene_copy_f
Flag ownership for each index array & VBO,
so we don't have to manually keep track of this and use the right free call.
Instead this can be passed on creation.
See D2676
The 2.8x branch added a bContext arg in many places;
pass the eval-context instead, since it's not simple to reason about
what nested functions do when they can access and change almost anything.
Also use const to prevent unexpected modifications.
This fixes crash loading files with shadows,
since off-screen buffers use a NULL context for rendering.
I couldn't reproduce either, but calling min() with different argument
data types and indexing vectors with an index not known at compile time
seem likely to cause problems.
Ref T52404.
We stop using the .zip file and just have all files now in
lib/darwin/python/lib, along with numpy, numpy headers and requests.
This makes it consistent with Linux and simplifies code.
For old libraries the .zip stays, code for that gets removed when we
fully switch to new libraries.
We now have to support more complex copying types, which are controlled
by flags, so all copying logic will need to take those at some point (at
least, all potentially dealing with IDs).
_copy_data() functions shall not do that at all anymore. Kept as option
for now even though that helper is only called from here...
Also moar varnames renamed to standard _src/_dst suffixes.
We do have an history of those pieces of evil in our code, would be nice
to get fully rid of it, but at the very least let's not add more of them
in new code. :)
This can happen with Alembic files exported from Maya. I'm unsure as to the
root cause, but at least this fixes the crash itself.
Thanks to @looch for reporting this with a test file. The test file has to
remain confidential, though, so it's on my workstation only.
FFMPEG & VPX don't handle the target with the --build parameter, so we need to make sure to use the plain configure command.
Reviewed by: Brecht Van Lommel
Differential Revision: http://developer.blender.org/D2791
Was complicating the general use case, as well as support for transforming with matrix_space set.
Add matrix_space support for manipulator_window_project_2d too.
This patch adds "Pixel Size" to the performance options, which allows to render
in a smaller resolution, which is especially useful for displays with high DPI.
Reviewers: Severin, dingto, sergey, brecht
Reviewed By: brecht
Subscribers: Severin, venomgfx, eyecandy, brecht
Differential Revision: https://developer.blender.org/D1619
The only reason shutter time was marked as non-animatable is because the Blender
Internal renderer does not support such animation. But this is something
users keep asking for, and now Blender Internal is on its way out.
Enabled animation of this property, but noted in the tooltip that Blender Internal
does not support animating it.
Bug in new ID copying code, thanks once again to stupid nodetrees, we
ended up wrongly remapping MA node->id pointers to NodeTree when copying
materials using node trees...
Basically, do the re-alloc and memcpy under the same lock, otherwise one
thread might be re-allocating the array while another one is trying to
copy data there.
Reported by Mohamed Sakr in IRC, thanks!
Enabled cache for frame accessor and tweaked policy so we guarantee keyframed
images to be always in the cache. The logic might fail in some real corner case
(for example, when doing multiple tracks at once on a system where we can not
fit 2 clip frames in cache) but things are much better now for regular use.
This fixes a bug where occluders on the edge of the screen occlude more than they should.
Grouped the texture fetches together and clamped the ray at the border of the screen.
Also added a few util functions.
This should fix the error message saying that a stall occurred because of a busy mipmap.
This happened during min/max buffer generation and introduced a random 0.2ms latency.
I'm not sure what was happening exactly, though.
Creating ngons with multiple axis aligned shapes in the middle of a
single face would fail in some cases.
This exposed multiple problems in BM_face_split_edgenet_connect_islands
- Islands needed to be sorted on Y axis when X was aligned.
- Checking edge intersections needed increased endpoint bias.
- BVH epsilon needed to be increased.
Note: this commit seems to work as expected (also with transform
snapping etc.). However, it is rather unsafe - not enough for 2.79 at
least, unless we get much more testing on it. It also depends on three
previous ones.
Note that using a global lock here is far from ideal, we should rather
have a lock per DM, but that will do for now, whole DM thing is doomed
to oblivion anyway in 2.8.
Also, we may need a `DM_DIRTY_LOOPTRIS` dirty flag at some point. Looks
like we can survive without it for now though... Probably because cached
looptris are never copied across DMs?
This was... horribly wrong, CDDM will often *not* need to allocate
anything to return arrays of mesh items! Just check whether array
pointer is NULL.
Also, remove `DM_get_looptri_array`, that one is useless currently,
`dm->getLoopTriArray` will always return cached array (computing it if
needed).
The tweakmode flag and the selected-channels flag accidentally
used the same value, due to confusion over where these flags were
supposed to be set. The selected-channels flag has now been moved
to use a different value, so that there shouldn't be any further
conflicts.
To be ported to 2.79.
Old bevel 'Clamp overlap' code was very naive: just limit amount
to half edge length. This uses more accurate (but not perfect)
calculations for the max amount before (many) geometry collisions
happen. This is not a backward compatible change - meshes that
have modifiers with 'Clamp overlap' will likely have larger allowed
bevel widths now. But that can be fixed by turning off clamp overlap
and setting the amount to the desired value.
Own previous fix (rBd5d626df236b) was not valid; curves are actually
supported by soft bodies. It was rather a mere UI bug, which was not
including Surface and Font object types in those valid for the soft body UI.
Thanks to @brecht for the heads up!
Also, this fix is safe for 2.79, btw.
* 255 maximum seems excessive for F-Curve handle vertices; now reduced to 100
* Vertex Size is no longer restricted to the old 10px maximum size limit
(used because Windows limited the maximum vertex size drivers needed to
support)
It's more important that there is some form of feedback that the strips
are muted (i.e. dotted borders) than the fact that those dotted borders
may have slightly rounded corners. So, just use a regular sharp-cornered
rect when the strips need to be muted.
There's no reason to manually iterate over items in a DLRBT_Tree,
as the structure is designed to be safely cast down
to a ListBase and ListBase-like nodes.
Adding structs was checking for duplicates
causing approx 75k string comparisons on startup.
While overall speedup is minimal,
Python access to `bpy.types` will now use a hash lookup
instead of a full linked list search.
See D2774
Changes for 2.8x to use EvaluationContext caused some confusion
- Would use scene layer passed from snap context.
- Would generate duplis from Main eval context.
- Would take context argument and use it to create another eval context.
Adding context args all over and filling in a new eval-context
for every ray-cast test isn't ideal either.
Remove the context argument since the purpose of
SnapObjectContext is to avoid this kind of confusion.
Store the EvaluationContext once and re-use.
This effectively reduces firefly bleeding all over the place.
We still need the clamp in the resolve pass because level 0 has not been clamped.
NOTE: I did not clamp each sample individually for performance, BUT I did not profile it to know how much it costs.
This was surely caused by float overflow. Limit the roughness in this case to limit the BRDF intensity.
Also compute VH faster.
Add a sanitizer to the SSR pass for investigating where NaNs come from. Play with the roughness until you see where the black pixel is / comes from.
It doesn't seem that useful in practice; it was mostly added to match some
other renderers, but it also seems to be causing user confusion and accidental
long render times. So let's just keep the UI simple and remove this.
Differential Revision: https://developer.blender.org/D2768
We're adding some bias by default, which now I think is the right thing
to do from a usability point of view since you really need to use those
options anyway to get clean renders in a practical time.
Differential Revision: https://developer.blender.org/D2769
Adds thin/default/thick modes to add -1/0/1 to the auto detected line width,
while leaving the overall UI scale unchanged.
Also tweaks the default line width threshold, so thicker lines start from
slightly high UI scales.
Differential Revision: https://developer.blender.org/D2778
Group texture fetches to hide latency: 3.2ms -> 2.2ms (a constant-time improvement, not depending on scene complexity).
Could optimize further with textureGather (requires OpenGL 4.0).
These materials are rendered after the SSR pass.
The only difference with the previous method is that they have a depth prepass (less overdraw) and are not sorted.
For the moment the only way to enable this is to:
- enable Screen Space REFLECTIONS.
- enable Screen Space Refraction in the SSR parameters.
- enable Screen Space Refraction in the material tab.
We track the previous ray position offset by the thickness. If the sampled depth is between this value and the current ray position, then we have a hit.
This fixes rays that are almost collinear with the view vector. Thickness is now only important for rays that are coming back towards the camera.
As a consequence, this simplify a lot of things.
Also include some refactor.
Since we are working with non-power-of-2 textures, the mipmap level UVs do not line up perfectly.
This resulted in skewed filtering and bad sampling of the min/max depth buffer.
We generate a 3D LUT to precompute the BTDF intensity.
I decided to use 64*64*16 (N dot V, IOR, roughness) because the BTDF varies less with roughness than with IOR.
We also remap the IOR to make better use of the space in the LUT.
* Changed categories of some keywords
* Reordered some longer keywords that didn't appear
* Activated another color (reserved builtins) by Leonid
* Added some missing HGPOV and UberPOV keywords
Patch by Maurice Raybaud (@mauriceraybaud). Thanks to Leonid for additions, feedback and Linux testing.
Related diffs: D2754 and D2755.
While not a regression, this is new feature and would be nice to have it
backported to final 2.79.
Bug introduced in recent ID copying refactor.
This commit basically sanitizes seq strip copying behavior, by making
the destination scene pointer mandatory (and the source one const).
Nothing then prevents you from using the same pointer as source and
destination!
This is a bit confusing, especially when one mixes OpenCL code, where ulong
equals uint64_t, with CPU-side code, where ulong is expected to be something
else judging from the naming.
This commit makes it so we use an explicit name, common on all platforms.
When a mesh changes its number of vertices during the animation,
Blender rebuilds the DerivedMesh, after which the materials weren't
applied any more (causing a fallback to the first material slot).
Commit b6d7cdd3ce changed how the mesh data
is deformed, which wasn't taken into account yet in this unit test.
Instead of directly reading the mesh vertices (which aren't animated any
more), we convert the modified mesh to a new one, and inspect those
vertices instead.
The problem was that some code checks whether device_pointer is null or
not, and the new allocator wasn't setting the pointer to anything, as it
tracks memory location separately. Setting the pointer to non-null keeps
all users of device_pointer happy.
Also internal changes so arrow3d matches grab3d's behavior.
Needed to add WM_MANIPULATOR_DRAW_OFFSET_SCALE flag so
we can optionally apply offset in worldspace or screen scaled values.
This could make the output really polluted, making it hard to see actual
issues.
It is still possible to have all backtraces printed using the
BLENDER_VERBOSE environment variable.
As part of the fix for T51587, I removed the Depth output for non-Multilayer
images since it seemed weird that PNGs etc. that don't have a Z pass still get
a socket for it.
However, I forgot about non-multilayer EXRs, which are a special case that can
actually have a Z pass.
Therefore, this commit brings back the Depth output for non-multilayer images
just like it was in 2.78.
- Apparently MSVC does not support compound literals
  in C++ (at least by the looks of it).
- Not sure how opencl_device_assert was managing to
  set a protected property of the Device class.
We don't enable global SSE optimizations in the regular kernel, and we
keep those disabled on Linux 32bit.
One possible workaround would be to pass arguments by ccl_ref, but
that is quite a bit of code which had better be done accurately.
Steps to reproduce:
- Create shader Image texture -> Diffuse BSDF -> Output. Do NOT select image yet!
- Start viewport render.
- Select image from the ID browser of Image Texture node.
Thing is: with the memory manager we always need to inform the device that
memory was freed.
Come on, folks, nobody considered master a C++11-only branch. Such a decision
is to be made officially and will involve changes in quite a few
infrastructure-related areas.
New code was correctly handling IDs' internal references to self, but
not references between different 'made real' objects...
Regression, to be backported to 2.79.
It is defined to & for CPU-side compilation, and defined to be empty for any
GPU platform. The idea here is to use this macro instead of an #ifdef block
with a bunch of duplicated lines just to make CPU code efficient.
Eventually we might switch to references on CUDA as well, but that would
require some intensive testing.
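A minimal sketch of the pattern; ccl_ref and __KERNEL_GPU__ are the names used above, while the surrounding code is illustrative only:
```
#ifdef __KERNEL_GPU__
#  define ccl_ref      /* expands to nothing: pass by value on GPU */
#else
#  define ccl_ref &    /* pass by reference on CPU for efficiency */
#endif

struct float4 { float x, y, z, w; };

/* One signature instead of an #ifdef block around every function. */
float reduce_add(const float4 ccl_ref a)
{
  return a.x + a.y + a.z + a.w;
}
```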
The documentation for the bpy.app.handlers.scene_update_{pre,post}
handlers states that they're called "on updating the scenes data".
However, they're called even when the data hasn't changed. Of course
such handlers are useful, but the documentation should reflect the
current behaviour.
Reviewers: mont29, sergey
Subscribers: Blendify
Maniphest Tasks: T46329
Differential Revision: https://developer.blender.org/D1535
Image textures were being packed into a single buffer for OpenCL, which
limited the amount of memory available for images to the size of one
buffer (usually 4gb on AMD hardware). By packing textures into multiple
buffers that limit is removed, while simultaneously reducing the number
of buffers that need to be passed to each kernel.
Benchmarks were within 2%.
Fixes T51554.
Differential Revision: https://developer.blender.org/D2745
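A toy sketch of the packing idea (first-fit over size-capped buffers; the real code also has to deal with texture types, alignment and device limits):
```
#include <cstddef>
#include <vector>

struct Placement {
  size_t buffer;  /* which device buffer the image lands in */
  size_t offset;  /* byte offset inside that buffer */
};

/* Assign each image to the first buffer it fits in, growing the set of
 * buffers on demand, so no single buffer has to hold everything. */
std::vector<Placement> pack_images(const std::vector<size_t> &sizes,
                                   size_t max_buffer_size)
{
  std::vector<Placement> result;
  std::vector<size_t> used;  /* bytes used per buffer */
  for (size_t size : sizes) {
    size_t b = 0;
    while (b < used.size() && used[b] + size > max_buffer_size) {
      b++;
    }
    if (b == used.size()) {
      used.push_back(0);
    }
    result.push_back({b, used[b]});
    used[b] += size;
  }
  return result;
}
```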
Simply disabled Python tests, they can't be run anyway (since the blender
target is not enabled) and we don't have any player-related tests in that
folder.
This will allow much finer control over how we copy data-blocks, from
full copy in Main database, to "lighter" ones (out of Main, inside an
already allocated datablock, etc.).
This commit also transfers a lot of what was previously handled by
per-ID-type custom code to generic ID handling code in BKE_library.
Hopefully this will avoid the inconsistencies and missing bits we had
all over the codebase in the past.
It also adds missing copying handling for a few types, most notably
Scene (which was using fully customized handling previously).
Note that the type of allocation used during copying (regular in Main,
allocated but outside of Main, or not allocated by ID handling code at
all) is stored in IDs, which allows handling them correctly when
freeing. This needs to be taken care of with caution when doing 'weird'
unusual things with ID copying and/or allocation!
As a final note, while rather noisy, this commit will hopefully not
break too many existing branches; the old 'API' has been kept for the
most part, as a wrapper around the new code. Cleaning it up will happen
later.
Design task: T51804
Phab Diff: D2714
Note these are intended for platform maintainers, we do not intend to
support users making their own builds with these. For that precompiled
libraries from lib/ should be used.
Implemented by Martijn Berger, Ray Molenkamp and Brecht Van Lommel.
Differential Revision: https://developer.blender.org/D2753
Since all the shadow catchers are already assumed to be in the footage,
the shadows they cast on each other are already in the footage too. So
don't just let shadow catchers skip self, but all shadow catchers.
Another justification is that it should not matter if the shadow catcher
is modeled as one object or multiple separate objects, the resulting
render should be the same.
Differential Revision: https://developer.blender.org/D2763
Shows new, reference and diff renders, with mouse hover to flip between
new and ref for easy comparison. This generates a report.html in
build_dir/tests/cycles, stored along with the new and diff images.
Differential Revision: https://developer.blender.org/D2770
* Remove some unnecessary SSE emulation defines.
* Use full precision float division so we can enable it.
* Add sqrt(), sqr(), fabs(), shuffle variations, mask().
* Optimize reduce_add(), select().
Differential Revision: https://developer.blender.org/D2764
I need to use some macros defined in util_simd.h for float3/float4, to emulate
SSE4 instructions on SSE2. But due to issues with the order of header includes
this was not possible; this does some refactoring to make it work.
Differential Revision: https://developer.blender.org/D2764
We already detect this automatically based on shading nodes and per-shader
settings, and performance of this option is OK now on all devices.
Differential Revision: https://developer.blender.org/D2767
- New manipulator tracks lamps to position under cursor.
- Works with multiple lamps, keeping relative offsets.
- Holding Ctrl moves the lamp.
- Access via manipulator or Shift-T.
Code could be improved, but I'd like to get feedback from users.
Render-border & crop-node 2d-cage manipulators were unreasonably
complicated to implement because there was no good way to define
the sub-region the manipulator was transforming in
(render border within the camera's frame, for example).
Add matrix-space variable,
remove scale property from cage2d manipulator, use matrix instead.
Was doing this with property get/set but this made view operations
require refreshing manipulator properties.
Simplify by operating on properties in their own space.
Also disable clamping for now since it assumes pixel-space.
Two main things here:
1. Replace all characters that are unsafe for the #line directive in a
   single loop, avoiding multiple iterations and multiple temporary
   strings (a sketch follows below).
2. Don't merge tokens char by char, but calculate start and end points
   and then copy the whole substring at once.
This gives about a 15% speedup of source processing time. At this point
(with all previous commits from today) we've shrunk the compiled
source size from 108 MB down to ~5.5 MB and lowered processing time
from 4.5 sec down to 0.047 sec on my laptop running Linux (this was a
constant time which Blender would always spend the first time loading the
kernel, even if we've got a compiled clbin).
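A minimal sketch of point 1; the exact character set being replaced is an assumption:
```
#include <string>

/* Sanitize every character that would break a #line directive in one
 * pass over the string, instead of one search-and-replace pass (and one
 * temporary string) per character. */
void sanitize_for_line_directive(std::string &name)
{
  for (char &c : name) {
    if (c == '"' || c == '\'' || c == '\\' || c == ' ') {
      c = '_';
    }
  }
}
```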
Add a safe version of normalize, since all uses of normalize
did zero-length checks; move this into a function.
Also avoid an unnecessary conversion.
Gives a minor speedup here (approx 3-5%).
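A minimal sketch of such a safe normalize (illustrative types, not the actual Cycles API):
```
#include <cmath>

struct float3 { float x, y, z; };

/* The zero-length check every caller used to repeat now lives inside
 * the function itself. */
static inline float3 safe_normalize(const float3 &a)
{
  float len_sq = a.x * a.x + a.y * a.y + a.z * a.z;
  if (len_sq == 0.0f) {
    return a;  /* leave zero vectors untouched */
  }
  float inv_len = 1.0f / std::sqrt(len_sq);
  return {a.x * inv_len, a.y * inv_len, a.z * inv_len};
}
```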
Basically gather lines as-is during traversal, avoiding allocating
memory for all the lines in headers.
Brings an additional performance improvement of about 20%.
The idea here is that it is possible to mark certain include statements
as "precompiled", which means all subsequent includes of that file will
be replaced with an empty string.
This is a way to deal with the tricky include pattern happening in the
single program OpenCL split kernel, which was including a bunch of headers
about 10 times.
This brings preprocessing time from ~1sec to ~0.1sec on my laptop.
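A rough sketch of that preprocessing rule (names invented for illustration):
```
#include <set>
#include <string>

static std::set<std::string> precompiled_done;

/* The first include of a header marked "precompiled" is expanded as
 * usual; every subsequent include of it becomes an empty string. */
std::string expand_include(const std::string &path,
                           bool is_precompiled,
                           const std::string &contents)
{
  if (is_precompiled) {
    if (precompiled_done.count(path)) {
      return "";
    }
    precompiled_done.insert(path);
  }
  return contents;
}
```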
The idea is to re-use files which were already processed. Gives about a 4x
speedup of processing time (~4.5sec vs 1.0sec) on my laptop for the whole
OpenCL kernel. For users it will mean a lower delay before OpenCL rendering
can start.
Keyframe handle vertices (the circles on the ends of the handles)
should always be larger than the central vertex. This brings back the
"outer" radius value from the old gluDisk(), and doubles it to get the
necessary diameter, to scale it properly.
TODO's:
- Get rid of all fills inside these circles
- Make the central vertex square-shaped again
* Outlines of keyframes were too thick and ugly
* Size differences between keyframe types were being swallowed
  by the pixel-fudge factor, leaving colour as the only distinguishing
  factor (bad!)
This changes the Cycles exporting and Cycles/Eevee UI code to support both
output material nodes, giving priority to the renderer native one. Still
missing is Eevee code to prefer the Eevee output node.
This is still far from perfect, but much better than what we had so
far (more consistent with the inherent precision available in floats).
Note that this fixes some (currently commented out) units unittests, and
requires adjusting some others; this will be done in the next commit.
Bug was in RNA nodes code actually, itemf functions shall never, ever
return NULL!
Note that there were other itemf functions there that were potentially
buggy. Also harmonized their code a bit.
* Numbers with units (especially angles) were not handled correctly
  regarding the number of significant digits (spotted by @brecht in T52222
  comment, thanks).
* Zero value has no valid log, need to take that into account!
It now uses a quality slider instead of stride.
Lower quality takes larger strides between samples and uses lower mips when tracing rough rays.
Raytracing is now done entirely in homogeneous coordinate space. This runs much faster.
Should be fairly optimized. We are still bandwidth-bound.
Add a line-line intersection refine.
Add a ray jitter between the multiple rays per pixel to fill in some undersampling in mirror reflections.
The tracing now stops if it goes behind an object. This needs some work to allow it to continue even when behind objects.
Running undo would notify manipulators to refresh,
but this still allowed for events in the queue to be handled,
where manipulators could be drawn for selection before
their refresh callback runs.
This made Python manipulators raise exceptions
about referencing invalid data (or crash).
Now tag manipulator update on file load (including undo)
and ensure the refresh callback runs
before drawing manipulator selection.
Also split manipulator map refresh flag in two since selection doesn't
perform the same operations as regular drawing.
This will only be noticeable for drawing many instances.
In a contrived use-case with many instances and `USE_PROFILE` disabled,
this can close to double playback FPS.
The option to disable this is left in the code in case we want to
debug memory use.
See D2756 for details.
- Each allocation can be a different size
(but should be smaller than the chunk size).
- Result can be looped over in order of allocation.
- Allocations are aligned to pointer size to avoid unaligned reads.
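A condensed sketch of an allocator with those properties (illustrative, not the actual BLI API):
```
#include <cstddef>
#include <cstdint>
#include <vector>

struct ChunkIter {
  static const size_t CHUNK_SIZE = 1024;

  struct Chunk {
    std::vector<uint8_t> data;
    size_t used = 0;
  };
  std::vector<Chunk> chunks;

  void *alloc(size_t size)
  {
    /* Round up to pointer size so reads are never unaligned. */
    size = (size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
    const size_t total = sizeof(size_t) + size;  /* header + payload */
    if (chunks.empty() || chunks.back().used + total > CHUNK_SIZE) {
      chunks.push_back(Chunk());
      chunks.back().data.resize(CHUNK_SIZE);
    }
    Chunk &c = chunks.back();
    uint8_t *p = c.data.data() + c.used;
    *reinterpret_cast<size_t *>(p) = size;  /* per-allocation size */
    c.used += total;
    return p + sizeof(size_t);
  }

  /* Visit allocations in the order they were made. */
  template<typename Fn> void for_each(Fn fn)
  {
    for (Chunk &c : chunks) {
      for (size_t off = 0; off < c.used;) {
        size_t size = *reinterpret_cast<size_t *>(c.data.data() + off);
        fn(c.data.data() + off + sizeof(size_t), size);
        off += sizeof(size_t) + size;
      }
    }
  }
};
```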
When calling sculpt from Python,
setting 3D 'location' but not 2D 'mouse' stopped working in 2.78.
Now check if the operator is running non-interactively and
skip the mouse-over check.
Regression in D1812: PyDriver variables as Objects
Taking the Python representation is nice in general
but for enums it would convert them into strings,
breaking some existing drivers.
Some users really liked previous behavior,
so making it an option.
Cursor Lock Adjustment can be disabled to give something close to
2.4x behavior of cursor locking.
When lock-adjustment is disabled, placing the cursor moves the view to follow it.
This avoids the issue reported in T40353
where the cursor could get *lost*.
Even strands that were excluded by the density texture were being added
to the DM passed to cloth, but these ended up having some invalid data,
because they were not fully constructed.
This simply excludes `UNEXISTED` particles from the DM generation, as
would be expected.
Note that the fix is not perfect: it systematically refcounts all IDs
assigned to a node's id pointer, which breaks the 'do not refcount
scene/object/text datablocks' principle...
But besides that principle being far from ideal in general, it becomes
pretty much impossible to apply when using a //generic// ID pointer,
unless we add some kind of type data to that pointer somehow.
So for now, better to live with that than to have broken user counts.
This commit is a step towards having fewer updates during playback, which
speeds things up a lot here. The idea is simple: stop updating all
copy-on-write datablocks (which currently implies full re-evaluation) on
frame change and re-use existing evaluated meshes as much as possible.
This brings playback speed to 24 fps on the dino test scene here.
Performance drops a lot when the armature is animated though, but that's
because of the need for tangent space calculation, which we can't do much
about from just a dependency graph.
Hopefully this doesn't make copy-on-write too unstable, quick tests here are
surviving fine.
The old performance debug was doing queries for every frame even if not debugging performance.
Also, it did not record when a pass was drawn multiple times, leading to incorrect measurements.
The new module also allows grouping the timers to limit the info displayed.
Also fix the background CPU draw timer.
The purpose of the keymap strings is probably for un-embossed menu items
like seen in most pulldowns. I can't see a reason for also adding that
string for regularly drawn buttons within popups, we don't add it
anywhere else in the UI either. So this commit makes sure shortcut
strings are only added to buttons that are drawn like pulldown-menu
items.
The Blender text editor's built-in Python template "Gamelogic" has a reference near the bottom to "objectHitList" as an alleged attribute of the KX_TouchSensor. This name is incorrect; its correct name is "hitObjectList".
Attempting to access the suggested objectHitList returns an error:
```
AttributeError: 'KX_TouchSensor' object has no attribute 'objectHitList'
```
The provided diff corrects this minor error.
Reviewers: kupoman, moguri, campbellbarton, Blendify
Reviewed By: Blendify
Tags: #game_engine, #game_python
Differential Revision: https://developer.blender.org/D2748
`defvert_array_find_weight_safe()` was confusing 'invalid vgroup' and
'valid but totally empty vgroup' cases.
Note that this also affected at least ShrinkWrap and SimpleDeform
modifiers.
This means once an ID is created,
it will keep using the same PyObject instance.
This has some advantages:
- Avoids unnecessary re-creation of instances on UI poll / redraw.
- Accessing free'd ID's gives an exception instead of crashing.
(a long-standing annoyance, though this only applies to IDs
and not yet to other data that uses the IDs - vertices for example).
- Allows using instance comparison (a little faster).
Note that the instances won't be kept between undo.
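A rough C++ sketch of the caching and invalidation idea; Blender's real implementation lives in the Python/RNA layer and all names here are invented:
```
#include <stdexcept>
#include <unordered_map>

struct ID { int users; };

/* Stands in for the cached PyObject: once the ID is freed the wrapper
 * stays alive but raises on access instead of crashing. */
struct Wrapper { ID *id; };

static std::unordered_map<ID *, Wrapper *> instances;

Wrapper *instance_get(ID *id)
{
  auto it = instances.find(id);
  if (it != instances.end()) {
    return it->second;  /* the same instance every time */
  }
  Wrapper *w = new Wrapper{id};  /* refcount/ownership elided */
  instances[id] = w;
  return w;
}

void instance_invalidate(ID *id)  /* call when the ID is freed */
{
  auto it = instances.find(id);
  if (it != instances.end()) {
    it->second->id = nullptr;
    instances.erase(it);
  }
}

int id_users(const Wrapper *w)  /* example accessor */
{
  if (w->id == nullptr) {
    throw std::runtime_error("ID has been removed");
  }
  return w->id->users;
}
```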
Was doing 2x lookups which is OK for click-select
but this runs on mouse-move and can become slow.
May enable this again if highlighting logic changes.
Also scale hotspot by pixelsize.
- Cleanup array access, move into functions.
- Store allocated size to avoid realloc's on every add/remove.
- Make select editable from Python.
- Rename select callback to select_refresh
(collided with select boolean).
- Call select_refresh when de-selecting as well as selection.
This was caused by some post-processes not always sampling the highest mipmap.
But if there is no need for mipmapping (i.e. no SSR), these levels will be undefined.
So forcing all post-FX shaders to sample level 0 fixes this.
This adds the possibility to use planar probe information to create SSR.
This has 2 advantages:
- Tracing is less expensive since the hit is found much quicker.
- We have far fewer artifacts due to missing information.
There is still room for improvement.
BKE_scene_copy explicitly ignores the visibility of "source" collections,
making all collections visible. This is also tested by regression tests.
While it seems more logical to simply preserve all possible visibility flags
and overrides, I don't feel like committing a behavior change without talking
to the author of those guards first.
This commit fixes the Cycles material preview.
If an object is only listed in a collection but not added to any of the layers
we shouldn't create a placeholder for it, because otherwise we'll leave lots
of placeholder ID nodes.
Question: can we make this exception more reliable?
This commit separates the depth texture into another texture array.
This removes the need to output radial depth into alpha.
Unfortunately it's difficult to recover position from the non-linear depth buffer when applying reflection without adding a bunch of stuff.
This is in preparation for SSR planar reflections.
- Encode normals for other opaque BSDFs so they are not rejected by the normal facing test.
- Early out on non-reflective surfaces.
- Add a small offset to the raytrace to avoid self-intersection.
- Fix fallback probes not appearing.
Output normals, specular color and roughness in 2 buffers.
This way we can raytrace in a deferred fashion and blend the exact contribution of the specular lobe on top of the opaque pass.
Ideally we need a clean, sane, impossible-to-break way of making sure the
evaluation context is fully initialized, but that would need some thought
and experimentation.
Otherwise we'll have a confused dependency graph builder, which wouldn't be
able to build a proper graph.
Didn't find a way to avoid the world copy here; we can probably escape with
some shallow copy, but that would currently complicate the code a lot.
Ideas to consider here:
- Use a shallow copy of the existing world after the new ID management API
  is in place. Downside would be thread safety; it's kind of nice to have
  everything local.
- Switch the depsgraph away from ID_TAG and do a hash lookup or so.
  This will slow down the depsgraph builder, but will make the code more
  reliable.
The order of evaluation of function arguments is undefined, and the order
was reversed between these compilers. This was causing regression tests
to give different results between Linux and macOS.
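A standalone illustration of the underlying C++ pitfall (not the actual Blender code in question):
```
#include <cstdio>

static int counter = 0;
static int next() { return ++counter; }

static void pair(int a, int b) { std::printf("%d %d\n", a, b); }

int main()
{
  /* The two next() calls may run in either order, so depending on the
   * compiler this prints "1 2" or "2 1". */
  pair(next(), next());
  return 0;
}
```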
In the future we should put these two buttons on one line.
However, because we need `gen_context = 'PAINT_STENCIL'`
this is a little hard and we need to find a proper solution.
One might be using `context_pointer_set`.
Patch by @craig_jones with edits by @blendify
Differential Revision: https://developer.blender.org/D2710
Moving the ray_start_local to the new position does not lose as much precision as moving the ray_org_local to the corresponding position.
The inaccuracy problem is within the functions `bvhtree_ray_cast_data_precalc` and `fast_ray_nearest_hit`, not directly in the values of the rays.
Those functions did not use the evaluation context.
Also fixed lots of unused-variable warnings caused by commented-out code which
needs to be ported away from DerivedMesh and to the evaluation context.
Note that some small parts of code have been disabled because eval_ctx
was not available there. This should be resolved once DerivedMesh is
replaced.
The way we use it, UI_PRECISION_FLOAT_MAX is actually +1 to get the total
number of digits, and a float only has 7 meaningful digits, so that define
should be 6.
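A small standalone illustration of the 7-digit limit (not Blender code):
```
#include <cstdio>

int main()
{
  /* Floats carry about 7 significant decimal digits; "%.7g" prints that
   * many and switches to exponential notation for extreme values. */
  const float values[] = {0.123456789f, 0.0000001234567f, 123456789.0f};
  for (float v : values) {
    std::printf("%.7g\n", v);
  }
  return 0;
}
```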
GCC now seems to detect uninitialized variables passed into function calls,
but then isn't always smart enough to see that they are actually initialized.
Disabling this warning entirely seems a bit too much, so initialize a bit
more now.
The remapping code was creating placeholders for objects coming from legacy
bases, but since those objects were never created by the dependency graph
(since they are supposed to be ignored), the copy-on-write relations creation
was confused.
Now we do some special trickery to clear legacy bases on copy-on-write.
Currently RNA doesn't give us a good way of accessing singleton layers;
for now expose as a layer list (skin & paint-mask do this too).
Noted in T47811 that this should be changed.
This commit unifies the flattened texture slot names for bindless and regular CUDA textures. Texture indices are now identical across all CUDA architectures, where before Fermi used different indices, which led to problems when rendering on multi-GPU setups mixing Fermi with newer hardware.
Those checks are not always helpful, since ID remapping doesn't want to
worry about which components to tag for update. Perhaps in the future we
will introduce a special flag which would mean "tag everything possible".
We already get enough reports of people complaining about crashes in
edit mode, unaware that they are in the non-supported Blender Render
engine.
Blender Render is going away, no reason to keep it around. Once we have
a nice fallback on Eevee and fast file loading we can default to Eevee
instead.
This is the fake ID nature of compositor again. Need to discard such
pointers before freeing datablock even for scenes (before it was done
for objects only).
Previously it was possible to run into a situation where the armature is
constructed prior to objects which are used for its constraints. This was
causing wrong scene evaluation.
Now we create placeholders for objects used by the armature in case they
don't have an ID node yet, which ensures we have a proper mapping from
original to copy-on-write ID pointers.
This shows the bug where the IK solver doesn't update reliably when targeting
an external object and when that object is handled by build_object() after
the armature.
The issue was caused by id_copy_no_main() changing pointers of constraints
used in the pose to a newly allocated ID. This is correct, but caused
confusion in our copy-on-write remapping, because we are mimicking in-place
duplication by copying memory over from a temporarily duplicated ID to a
proper placeholder. This was causing dangling pointers in the pose to a
temporarily allocated ID.
Now we add special code to the remapping callback which replaces the
temporary ID with a proper one.
Couple of main things here:
- Properly handle PSYS_UPDATE_* flags from DEG_id_tag_update.
  There are still some possible issues here related to the fact
  that we don't differentiate the various PSYS_UPDATE_* flags
  and handle them all the same.
  Another possibility here is that object-level particle settings
  evaluation might be forced when particle system evaluation is
  tagged for update. Didn't see an actual issue here yet, but it
  needs a closer look.
- Don't tag non-object datablocks on visibility changes.
  Those don't depend on visibility anyway. This prevents particle
  settings IDs from flushing updates to all objects, causing all
  cached particles to be lost.
- Only update translation and geometry components on visibility
  changes.
  Once again, this prevents the particle cache from being invalidated.
  We might still need to tag material components here though.
This commit makes it so that only ID components which correspond to the tag
flag are tagged for update (previously the whole ID would have been updated
in most cases).
This allows us to have more granular tag flags and prevent tagging of things
we don't want to be tagged.
Previously, tagging particle settings for update would iterate over all
objects and all their particle systems to see whether something needs an
update or not. Now we put ParticleSettings as an ID into the dependency
graph, so tagging it for update will nicely flush updates to all dependent
particle systems.
The current downside of this is that due to a limitation of the flush
routines it will cause some extra particle system re-evaluation when it's
technically not needed, and, more annoyingly, it will currently discard
point caches more often.
However, this is a good and simple demonstration case to improve the
tagging/flushing system to accommodate such cases (similar issues happen
with CoW and shading components). So let's try to find some generic solution
to the problem!
This replaces usage of the generic PLACEHOLDER opcode with string lookup
with a more explicit opcode. This should make it faster to build the
dependency graph by avoiding string comparisons when they're not needed.
There should be no user-measurable difference.
Hopefully this actually fixes it; at least I could not reproduce it anymore
with that change, but it was already quite hard to trigger before.
We need a memory barrier at this allocation, otherwise it might happen
after the preview gets added to the done queue, so the preview could end up
being freed twice, leading to a crash.
Change the implementation so it no longer takes over the mouse cursor motion
from the OS, instead only move it when warping, similar to Windows and X11.
Probably the reason it was not done this way originally is that you then get
a 500ms delay after warping, but we can use a trick to avoid that and get much
smoother mouse motion than before.
While drawing nice 'rounded' values is OK for 'low precision'
editing like dragging and such, it's quite an issue when you type in a
precise value, validate, edit the value again, and find a rounded
version of it instead of what you typed in!
So now, *only when entering textedit of num buttons*, we always get the
highest reasonable precision for floats (and use exponential notation when
values are too low or too high, to avoid tremendous amounts of zeros).
Temporary screens caused a crash here;
if we can't assume an area will never have its screen changed,
better not to store a back-pointer.
This makes `rna_ManipulatorProperties_find_operator` search even more
exhaustive. Not nice but hard to avoid :S
Since we're unlikely to use this, it's tedious to keep so many
differences from 'blender2.8' branch.
Currently the only difference between this
branch and 2.8 is RNA/Python work-in-progress integration.
This is needed so manipulators can tag themselves for removal
without causing problems from freeing data within a callback.
Also use properties within the dial manipulator and fix an error where
removing a wmManipulatorGroupType didn't remove its keymap.
Until 3d-viewport manipulators change functionality we need to keep it
working how it already is.
So add a modal operator which is called from draw when there are no
properties applied to the manipulator.
This will be needed to have a general rotation manipulator.
- Add a different kind of properties that use function callbacks
instead of RNA.
Needed for situations when there isn't 1:1 correspondence
between the manipulator's position and the internal value.
- Move manipulator properties into their own file.
These changes are intended for operators to register their own
widget-types temporarily, so _every_ operator that uses manipulators
doesn't need to keep them continuously polling the view to check if the
operator is running.
- Register `wmManipulatorGroupType` globally (like all other RNA types).
- Add `wmManipulatorGroupTypeRef` for type-maps to reference groups.
- Remove `wmManipulatorMapType.idname` (spaceid & regionid are enough info).
- Add PERSISTENT flag for `wmManipulatorGroupType`
intended for use with operators so loading a new file for eg doesn't
keep the manipulator type around.
Having the type mixed in with the instance
made it hard to expose types to RNA/Python.
This disables face-map select,
we could enable it but it looks like face maps will be made to work
differently.
- move callbacks into type struct
- rename render_3d_intersection -> draw_select
- add header for function signatures (needed for types and api headers).
Poly index was used for a short time when switching to BMesh.
This is practically the same kind of data,
any inconsistency between object face-maps will need to be supported
anyway - since object & mesh might span blend-files.
Manually setting up links per bone to an object & facemap is tedious.
It's also going to cause quite a bit of book-keeping internally to
ensure it's always valid (linking, adding, removing objects & proxies).
Instead match the face-maps names to bones (as with vertex groups).
The DNA members have been moved into runtime cache.
This also adds an option for bones to use facemaps,
to avoid excessive searching on heavy armatures that might use very few
(if any) face-maps.
This decoupled setting draw flags from calculating the manipulator center.
I can't see any advantage to doing this with the current code;
it just increases the chance the two get out of sync.
This removes graph 3d render feature, also use same variables as master
for SpaceNode.
The feature (could be added back if needed),
but didn't get such a positive functionality review, see D1781,
although this is still an open topic.
At least for the initial merge we don't need anything in DNA (and I'm not sure how DNA structs are handled which are not explicitly saved to disk... does it still store some info about them?).
Had to make some changes to allow this; basically the wmManipulator.mgroup pointer to get the manipulator-group from the manipulator is back, which is a bit ugly. Think that's fine though, only a minor annoyance.
Updating and drawing works closer together now, makes it easier to define what to draw (3D vs 2D vs depth-culled manipulators) and to avoid unnecessary updates. Totally untested :)
Instead of having multiple manipulator-maps for 2D and 3D manipulators, manipulator-group-types should be tagged as being either 2D or 3D. When it comes to drawing these can be drawn separately.
Note that more work needs to be done here (added some todo marks), but there are probably going to be quite a few issues when merging into custom_manipulators, so will handle these first.
Will continue work on them in 2.8 based custom_manipulators branch, there I can also work on making manipulator drawing based on OpenGL 3.2 core profile.
This commit adds a new 'custom-manipulators' branch in which changes of the wiggly-widgets branch are applied onto the blender2.8 branch. I've done it so I can start porting manipulator drawing code to use the new abstractions for OpenGL 3.2 core profile.
At some point I had to do it anyway - better earlier than later to reduce loss of git history.
From now on this is the main branch for the custom manipulators project, I'll delete the wiggly-widgets branch in a bit (but keep it available at https://github.com/julianeisel/blender/tree/wiggly-widgets). It's still possible to merge some manipulators for pre-2.8 by rewriting the drawing of those to use OpenGL <= 2.1 (or copy & pasting from earlier state).
COMMAND ${CMAKE_COMMAND} -E copy "${PYTHON_OUTPUTDIR}/python${PYTHON_SHORT_VERSION_NO_DOTS}${PYTHON_POSTFIX}.lib" ${BUILD_DIR}/python/src/external_python/run/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}.lib # missing postfix on purpose, distutils is not expecting it
COMMAND ${CMAKE_COMMAND} -E copy "${PYTHON_OUTPUTDIR}/python${PYTHON_SHORT_VERSION_NO_DOTS}${PYTHON_POSTFIX}.lib" ${BUILD_DIR}/python/src/external_python/run/libs/python${PYTHON_SHORT_VERSION_NO_DOTS}${PYTHON_POSTFIX}.lib # other things like numpy still want it though.