Fixes the "emtpy scrolling" glitch by clamping the scroller offset to
the boundary of the view when it's smaller than the previous.
Fixes T45197. Patch by @januz.
Differential Revision: D1580
The new constraint is slower and not backward compatible, but should
behave better, especially on the damping side. The new constraint also
has a different valid range for the damping coefficient, and a limit
implementation that bounces instead of making the object stationary.
Reviewers: sergof
Differential Revision: https://developer.blender.org/D3125
WebM is the container format, and VP9 is the video codec (the older
codec "VP8" is less efficient than VP9).
WebM/VP9 and H.264 both have options to control the file size versus
compression time trade-off (e.g. fast but big, or slow and small, for the
same output quality). Since WebM/VP9 only has three choices, I've chosen to
map those to 3 of the 9 possible choices of H.264:
- BEST → SLOWER
- GOOD → MEDIUM
- REALTIME → SUPERFAST
The VERYSLOW and ULTRAFAST options give very little extra benefit
(see the mapping sketch below).
Reviewed by: @Severin
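As an illustration of the mapping above, a hedged sketch using FFmpeg's real AVOption API (av_opt_set); the `FFM_PRESET_*` names and the surrounding wiring are assumptions for illustration, while "deadline" (libvpx) and "preset" (libx264) are the actual encoder option names:

```
#include <stdbool.h>
#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

enum { FFM_PRESET_BEST, FFM_PRESET_GOOD, FFM_PRESET_REALTIME }; /* assumed names */

static void set_encoder_preset(AVCodecContext *c, int preset, bool is_vp9)
{
    if (is_vp9) {
        /* libvpx-vp9 only knows three speed/size trade-offs. */
        const char *deadline = (preset == FFM_PRESET_BEST)     ? "best" :
                               (preset == FFM_PRESET_REALTIME) ? "realtime" : "good";
        av_opt_set(c->priv_data, "deadline", deadline, 0);
    }
    else {
        /* Map onto 3 of the 9 libx264 presets, as described above. */
        const char *name = (preset == FFM_PRESET_BEST)     ? "slower" :
                           (preset == FFM_PRESET_REALTIME) ? "superfast" : "medium";
        av_opt_set(c->priv_data, "preset", name, 0);
    }
}
```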
Some of the code is simpler because we use Blender's triangulation directly
instead of dealing with quads. Also, some progress-printing code was removed
because the depsgraph cannot tell us the number of objects ahead of time.
Differential Revision: https://developer.blender.org/D3127
- Disable scissor test for fast clear. This could lead to some issues but
I cannot think of one and could not find one either.
- Manually wait for queries to be available instead of making the driver
wait and issue warnings.
The encoding panel mentions "None" in a few places, which is confusing.
- "Codec: None" now reads "No Video"
- "Audio Codec: None" now reads "No Audio"
- "Output Quality: None; ..." now reads "Constant Bitrate"
When selecting "No Video" the remaining video encoding options are
hidden, making it even more explicit that there will not be video in the
output file.
The label "Codec" now reads "Video Codec" for symmetry with "Audio
Codec".
Previous code assumed that the glyph texture would remain bound to
GL_TEXTURE0 until the cache was drawn. This is not always the case,
so it is better to save the texture and rebind it before drawing.
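A minimal sketch of the save-and-rebind idea in plain OpenGL; the variable names are illustrative, not Blender's actual BLF internals:

```
/* Saved when the glyph cache texture is created/filled. */
GLuint glyph_tex = cache_texture_id;

/* ... anything may have been bound to GL_TEXTURE0 in the meantime ... */

/* Rebind before drawing the cache instead of assuming it is still bound. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, glyph_tex);
```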
This ports the blurring of BLF fonts to the final drawing shader.
We add a bit of extra padding to each glyph so that jittering the texture
coordinates does not sample the neighboring glyphs.
Overall, a 10% improvement in general UI drawing time.
This commit can introduce ordering problems for some elements.
In that case you need to flush the widget cache to ensure the element that
is about to be drawn appears on top of any widget base.
To flush the cache use UI_widgetbase_draw_cache_flush.
This is already done for BLF and Icons.
When importing multiple materials for one object,
the imported material animation curves were all
assigned to the first material of the object.
This fix also improves the console logging whenever the importer
finds a consistency problem with the imported animation data.
Replace the 12 iterations of UI_draw_roundbox_4fv with only one batch.
This means less overdraw and fewer drawcalls.
I had to hack the opacity falloff curve manually to get approximately the
same result as the previous technique. I'm sure that with a bit more brain
power somebody could find the perfect function.
The MovieSequence and MovieClip classes now have a metadata() function
that exposes the `IDProperty *` holding the video metadata.
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
This is useful to create a mapping from the frame range in the video to
frame index in the blend file.
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
This is currently only supported by FFmpeg (so not frameserver, AVI RAW,
or AVI JPEG), and only seems to work when using Matroska or Ogg Theora
containers.
Only metadata that doesn't change from frame to frame is written to
video files. This distinction is visible in the UI by looking at the
stamp checkbox tooltips (they either mention "image" or "image/video").
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
- Metadata handling is now separate from `ImBuf *`, allowing it to be
used with a generic `IDProperty *`.
- Merged `IMB_metadata_add_field()` and `IMB_metadata_change_field()`
into a more robust `IMB_metadata_set_field()`. This new function
doesn't return any status (it now always succeeds, and the previously
existing return value was never checked anyway).
- Removed `IMB_metadata_del_field()` as it was never actually used
anywhere.
- Use `IMB_metadata_ensure()` instead of having
`IMB_metadata_set_field()` create the containing `IDProperty` for
you (see the usage sketch below).
- Deduplicated function declarations, moved `intern/IMB_metadata.h` out
of `intern/`. Note that this does mean that we have some extra
`#include "IMB_metadata.h"` lines now, as the metadata functions are
no longer declared in `IMB_imbuf.h`.
- Deduplicated function declarations, all metadata-related declarations
are now in imbuf/IMB_metadata.h.
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
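A hedged usage sketch of the refactored API listed above; the exact signatures are assumptions based on the function names in this commit:

```
#include <stdio.h>
#include "IMB_metadata.h"

void tag_frame_index(struct IDProperty **metadata, int frame)
{
    char value[32];
    /* Create the containing IDProperty only when needed. */
    IMB_metadata_ensure(metadata);
    snprintf(value, sizeof(value), "%d", frame);
    /* No status to check: setting a field now always succeeds. */
    IMB_metadata_set_field(*metadata, "FrameIndex", value);
}
```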
Use the new GPU_SHADER_2D_NODELINK and GPU_SHADER_2D_NODELINK_INST to
accelerate nodelink drawing.
This commit does not include the batching functionality, so it should
not make a lot of difference yet.
Now use a list of preset batches, with a function to add new ones to this
list.
This removes the need for new reset/exit functions all over the place.
Special shader to draw nodelinks for the node editor.
We only pass bezier points to the GPU; vertex positions are computed inside
the vertex shader.
The arrow is also part of the batch to avoid separate drawcalls for it.
We still draw 2 passes: one for the shadow and one for the link color on top.
One variation draws instances of these links so that we only do one
drawcall.
The issue was caused by a component tag forcing the CoW component to be run,
without actually flushing changes down the road from the CoW operation.
In a way, this is what we want: we do want CoW to run on changes, but
we don't want a tiny change forcing a full datablock update.
This commit makes it so the order of updates is correct, but the bigger
issue is still open: which parts of the datablock should CoW be updating?
Now it's possible to open the Spring file and play around.
The issue was mainly visible when copy-on-write was enabled. This was forcing
lots of meshes to be freed from multiple threads, causing all sorts of race
conditions in Gawain's VAO code.
OpenGL resources already seem to use deferred deletion; we need to do the
same for the CPU-side arrays.
Free code should not handle ID refcounting at all. This has to be done
at a higher level, since in some cases we want to free (temporary) data that
did not refcount its IDs at all.
This change seems to be working OK, but as usual in that area, only
lots of testing in real-case situation will say whether there are some
hidden bugs or not.
Issue was, *some* IDs (like infamous nodetrees from materials etc.)
would not go through the 'main' read_libblock() func, so their tags were
never reset.
So now, we ensure direct_link_id() always clears the tags, and move
setting them in read_libblock() to after the call to direct_link_id().
Needed for depsgraph, but a generally healthier fix actually.
The problem was that textures were assigned to different slots on different draw
calls, which caused shader specialization/patching by the driver. So the shader
would be compiled over and over until all possible assignments were used.
This is a part of copy-on-write sanitization, to avoid all the checks
which were attempting to keep sub-data pointers intact.
Point is: ID pointers never change for CoW datablocks, but nested
data pointers might change when updating existing copy.
Solution: Only bind ID data pointers and index of sub-data.
This will make the CoW datablock update function easier in 2.8.
In master we were only using pose channel pointers in callbacks,
this is exactly what this commit addresses. A linear lookup array
is created on pose evaluation init and is thrown away afterwards.
One thing we might consider doing is to keep an indexed array of
poses, similar to chanhash.
Reviewers: campbellbarton
Reviewed By: campbellbarton
Subscribers: dfelinto
Differential Revision: https://developer.blender.org/D3124
Back in the days (2.4x and before), it was rather easy to get some
invalid utf-8 strings in Blender. This is totally breaking modern code,
so this commit adds a simple 'check & fix strings' operator, available
from the main File menu.
E.g. typing `bpy.data.bl_rna.properties[8].<tab>` in console would hard-crash
trying to dereference NULL pointer. Was a missing check in rna_Property_tags_itemf().
This is really minor but anyway, now it will only leak if you cancel the menu,
and only if this is the last time you call this operator before closing
Blender.
Better fix for T54457. It seems Debian compiles OpenVDB without ABI 3
compatibility, while Arch does enable it, as is the default in the OpenVDB
CMake build system.
So now there's an option that the distribution can set depending on how
they compile their OpenVDB package.
Some functions always returned the input argument,
which was never used.
This made the code read as if there might be a leak.
Now return a boolean (true if the imbuf is modified).
- Undo that changes modes currently asserts,
since undo is now screen data.
Most likely we will change how object mode and workspaces work
since it's not practical/maintainable at the moment.
- Removed view_layer from particle settings
(wasn't needed and complicated undo).
- Use a single undo history for all operations.
- UndoType's are registered and poll the context to check if they
should be used when performing an undo push.
- Mode switching is used to ensure the state is correct before
undo data is restored.
- Some undo types accumulate changes (image & text editing)
others store the state multiple times (with de-duplication).
This is supported by checking UndoStack.mode `ACCUMULATE` / `STORE`.
- Each undo step stores the ID datablocks it uses, with utilities to help
manage restoring the correct IDs.
Needed since global undo is now mixed with other modes undo.
- Currently performs each undo step when going up/down history.
Previously this wasn't done, making history fail in some cases.
This can be optimized to skip some combinations of undo steps.
grease-pencil is an exception which has not been updated
since it integrates undo into the draw-session.
See D3113
For this we use a new shader that gets its data from a uniform array.
The vertex shader positions the vertices using this data.
Using glUniform is way faster than using imm for that matter.
Like BLF rendering, UI icons are always (as far as I know) non-occluded and
displayed above everything else. They also do not overlap with text, so
they can be batched at the same time.
This adds less than a megabyte of mem usage.
FT_Get_Kerning was the 2nd hotspot when profiling. This commit completely
removes this cost.
One concern though: I don't know if the kerning data is constant for every
size, but it seems to be the case. I tested different fonts at different
DPI scalings and saw no differences.
I left a flag to quickly debug if something is wrong.
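A sketch of how such a cache could look, assuming (as the commit does) that the kerning values can be treated as constant; the table layout and names are illustrative, not the actual blf code:

```
#include <ft2build.h>
#include FT_FREETYPE_H

#define KERN_RANGE 128 /* cache ASCII pairs, fall through for the rest */

static int kern_cache[KERN_RANGE][KERN_RANGE];
static char kern_valid[KERN_RANGE][KERN_RANGE];

int kerning_get(FT_Face face, unsigned int left, unsigned int right)
{
    if (left < KERN_RANGE && right < KERN_RANGE && kern_valid[left][right]) {
        return kern_cache[left][right]; /* hit: no FT_Get_Kerning call */
    }
    FT_Vector delta;
    FT_Get_Kerning(face,
                   FT_Get_Char_Index(face, left),
                   FT_Get_Char_Index(face, right),
                   FT_KERNING_DEFAULT, &delta);
    const int value = (int)(delta.x >> 6); /* 26.6 fixed point to pixels */
    if (left < KERN_RANGE && right < KERN_RANGE) {
        kern_cache[left][right] = value;
        kern_valid[left][right] = 1;
    }
    return value;
}
```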
But now that everything uses shaders, it seems to be all right, since a shader
is always set active before drawing.
I've made a separate version of the geometry shader that works with a full
3D modelview matrix.
This commit also includes some fixup inside blf_batching_start().
You can now use BLF_batching_start and BLF_batching_end to batch every
drawcall to BLF together, minimizing the overhead introduced by BLF and the
OpenGL driver.
These calls cannot be nested (for now).
If the modelview matrix changes, previously batched calls are issued and
the process resumes with the new matrix.
However, the projection matrix and GL scissor state MUST not change.
This is not a perfect win just yet. It's now calling glBufferSubData for
every call (instead of using glMapBufferRange, which is slightly faster), but
with this system we will be able to batch drawcalls together.
See next commit.
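For reference, a usage sketch of the API described above; BLF_batching_start/BLF_batching_end are the names given in this commit, while the argument lists and the loop are assumptions:

```
BLF_batching_start();
for (int i = 0; i < label_count; i++) {
    BLF_position(fontid, labels[i].x, labels[i].y, 0.0f);
    BLF_draw(fontid, labels[i].text, strlen(labels[i].text));
}
/* Flushes all queued glyphs in as few draw calls as possible. */
BLF_batching_end();
```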
This allows us to specify the number of vertices to upload to the GPU.
This keeps the same allocation in system memory while sending the least
amount of data to the GPU/driver.
- See `--log` help message for usage.
- Supports enabling categories.
- Color severity.
- Optionally logs to a file.
- Currently used to replace printf calls in the wm module.
See D3120 for details.
Two issues are fixed with this commit:
1) When we build OIIO (in unixoid build environments) and no /src/doc/oiiotool was present, we had no build target for it (which led to a build error). As we don't need docs for OIIO, we disable them now.
2) We specified a variable that OIIO does not recognize (it was removed upstream a long time ago): ILMBASE_VERSION.
Select not only the objects directly linked to the selected collection,
but also the objects in the nested collections, just as we can
do from the outliner.
Use Shift+G > Collection. If there is only one collection, it just selects it;
if there are multiple ones, the user gets to pick which one to select.
This is the same behaviour we have for groups. Note, we only select objects
directly in the collection, not the ones in any nested collection.
Feature suggested by Pablo Vazquez (venomgfx)
This means smaller imm buffer usage.
This does not reduce the number of drawcalls.
This uses a geometry shader, which is slow for the GPU, but given that we are
really CPU bound in this case, it should not matter.
A perfect implementation would:
- Set the glyph coordinates in a buffer texture and just send the glyph ID to
the GPU to read the buffer texture.
- Use GWN_draw_primitive, draw 2*strlen triangles and just retrieve the
glyph ID and color based on gl_VertexID / 6.
- Stream a fixed-size buffer that the driver can discard quickly, but this is
the same as improving IMM directly.
This update adds a link to the Wikipedia article "Euler angles" to the description of the mathutils.Euler class.
I initially was not sure what an "Euler" represented in the Blender API, but found the Wikipedia article helpful. I believe others will find the link helpful too if it appears in the class documentation.
This is similar to the Wikipedia links that appear in the mathutils.Matrix class, e.g: https://docs.blender.org/api/blender_python_api_current/mathutils.html?highlight=euler#mathutils.Matrix.adjugate
Author: @justasb
Reviewers: campbellbarton, trumanblending, Blendify
Reviewed By: Blendify
Subscribers: Blendify
Tags: #bf_blender
Differential Revision: https://developer.blender.org/D3077
Fix for T54437: Sequencer preview uses last updated scene
The fix started in master, moving EvaluationContext initialization
before we leave `deg_evaluate_on_refresh()`.
Upon merging master we can fix the actual issue, which was to set
the EvaluationContext depsgraph even if the depsgraph was already updated.
This is required for T54437 (sequencer preview uses last updated scene).
Although the fix itself needs to be in 2.8, for the 2.8 specific
initialization code.
Introduce a UI batch cache. For the moment it's only used by widgetbase, so
I'm leaving it in interface_widgets.c. If it grows, it can have its own file.
Like all preset batches (batches used by the UI context), VAOs must be
refreshed each time a new window context is bound.
This still does 3 GWN_batch_draw calls in the worst case, but at least it does
not use the IMM api.
I will continue and batch the 3 calls together, since we are really CPU
bound, so shader complexity does not really matter.
I cannot spot any difference on all the widgets I could test. I did not use
any unit tests, so I cannot tell if there are really any defects.
This is not a complete rewrite, but it addresses the top bottleneck found
after a profiling session.
Use more generic id->recalc flag.
Also sanitize flag flush from settings to particle system.
Need to do such flush before triggering point cache reset, since
point cache reset will do some logic based on what flags are set.
This solves a crash caused by threaded update, which would set
some bitflags while point cache reset is in progress.
The way particle state is accessed or used did not change
in Blender 2.8, so the drawing code should follow old design.
This code is somewhat duplicated from drawobject.c, but old draw
code is on the way to be removed anyway.
This fixes issue with disappearing particles when tweaking number
of particles.
Tagging based on components might not be granular enough.
For example, for particles we would want to know what part
of particles was changed exactly. For the flushing we wouldn't
worry too much, because we will want less granular updates
there anyway.
Roughness baking previously defaulted to 1.0 for all diffuse materials;
now we also bake roughness values of Oren-Nayar and Principled Diffuse.
Differential Revision: https://developer.blender.org/D3115
How to use: Select a few objects, and press "M" in the viewport.
If you hold ctrl the objects will be added to the selected collection.
Otherwise they are removed from all their original collections and moved
to the selected one instead.
Development Notes
=================
The ideal solution would be to implement an elegant generic multi-level
menu system similar to toolbox_generic() in 2.49.
Instead I used `uiItemMenuF` to achieve the required nesting of the menus.
The downside is that `uiItemMenuF` requires the data its callback uses to
remain valid until the menu is discarded. But there is no callback we
can call when the menu is discarded for operators that exited with
`OPERATOR_INTERFACE`.
That means we are using statically allocated data that is only freed the next
time the operator is called. Which also means there will always be some
memory leakage.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D3117
When importing an xfov curve, we must transform the data into
lens opening angles in degrees. While the curve value itself is
correctly transformed, the transformation of the tangents had been
forgotten. This is fixed now.
Use C++11 threads when available, and native critical sections on Windows.
Later on we can remove the pthread code when C++11 becomes required.
Differential Revision: https://developer.blender.org/D3116
Drawcall per window redraw on default layout:
- 4100+ without patch
- 1270 with patch
These drawcalls meant a lot of driver overhead, since each of them corresponds
to one glMapBuffer call, which is slow.
- Get memory usage from MemFile instead of the MEM API,
avoids possibly invalid values when threads allocate memory.
- Use size_t instead of uint and uintptr_t to store size.
- Rename UndoElem.str -> filename
- Rename MemFileChunk.ident -> is_identical
Random numbers for step offset were correlated, now use stratified samples
which reduces noise as well for some types of volumes, mainly procedural
ones where the step size is bigger than the volume features.
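A minimal sketch of the idea, assuming an RNG returning values in [0, 1): sample i of N gets a jittered offset inside its own stratum instead of a fully independent random value:

```
/* Stratified: offsets are spread evenly, with jitter inside each stratum. */
float stratified_offset(int i, int num_samples, float rand01)
{
    return ((float)i + rand01) / (float)num_samples;
}
```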
I remain convinced that it should not be possible for undo code to run in
parallel with DEG evaluation... But for now, this should prevent static
override code from diving into this collection.
This refactor modernises the use of framebuffers.
It also touches a lot of files, so breaking down the changes we have:
- GPUTexture: Allow textures to be attached to more than one GPUFrameBuffer.
This allows creating and configuring more FBOs without the need to attach
and detach textures at drawing time.
- GPUFrameBuffer: The wrapper starts to mimic OpenGL a bit more closely. This
allows configuring the framebuffer inside a context other than the one
that will be rendering the framebuffer. We do the actual configuration
when binding the FBO. We also keep track of config validity and save
drawbuffers state in the FBO. We remove the different bind/unbind
functions. These make little sense now that we have separate contexts.
- DRWFrameBuffer: We replace DRW_framebuffer functions with GPU_framebuffer
ones to avoid another layer of abstraction. We move the DRW convenience
functions to GPUFramebuffer instead and even add new ones. The macro
GPU_framebuffer_ensure_config is pretty much all you need to create and
configure a GPUFramebuffer (see the sketch after this list).
- DRWTexture: Due to the removal of DRWFrameBuffer, we needed to create
functions to create textures for those framebuffers. Pool textures now
use default texture parameters for the requested texture type.
- DRWManager: Make sure no framebuffer object is bound when doing cache
filling.
- GPUViewport: Add new color_only_fb and depth_only_fb along with FB API
usage updates. This lets draw engines render to color/depth-only targets
without the need to attach/detach textures.
- WM_window: Assert when a framebuffer is bound when changing context.
This balances the fact that we do not track the OpenGL context inside
GPUFrameBuffer.
- Eevee, Clay, Mode engines: Update to the new API. This comes with a lot of
code simplification.
This also comes with some cleanups in some engine code.
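A hedged sketch of typical usage of the macro described above; `fbl`, `depth_tx` and `color_tx` are placeholder names, not code from this commit:

```
/* Depth attachment first, then color attachments in order. */
GPU_framebuffer_ensure_config(&fbl->my_fb, {
    GPU_ATTACHMENT_TEXTURE(depth_tx),
    GPU_ATTACHMENT_TEXTURE(color_tx),
});

/* The actual GL configuration is deferred until the FBO is bound. */
GPU_framebuffer_bind(fbl->my_fb);
```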
This is a bit useless because GPU lamps are only used by the game engine,
and it is planned to be "removed" in some way.
Doing this to clean gpu_framebuffer.c.
This includes a few modifications:
- The biggest one is to call glActiveTexture before doing any call to
glBindTexture for rendering purposes (the uniform value depends on it).
This also makes it clearer what's going on when rendering the UI. So if
there are missing UI elements because of this commit, look for this first.
This allows us to have "fewer calls" to glActiveTexture (I did not
measure the final count) and fewer checks inside GPU_texture.
- Remove use of GL_TEXTURE0 as a uniform value in a few places.
- Be more strict and use BLI_assert for bad usage of GPU_texture functions.
- Disable filtering for integer and stencil textures (not supported by
the OpenGL spec).
- Replace bools inside GPUTexture by a bitflag supporting more options to
identify texture types.
Undo sometimes reserved too much space in the buffer,
now assert when this happens and allocate the exact size needed.
Note: this prepares for moving text editor undo out of the text block (D3113),
which will split the undo buffer into a list of undo steps.
This commit essentially introduces a new RNA property flag which, when
set, prevents the affected property from being processed at all in comparison
code (also used to automatically generate static override rules).
The idea is to use it on very low-level data in RNA, like e.g. mesh's
geometry or psys' particles collections.
For now this is only applied to psys' particle collections; on the main mesh
of the Agent 327 pigeon, it goes from 100ms to 0.5ms on a full
auto-override-generating comparison...
Also added some new RNA property helper funcs to check on comparable and
overridable status.
With a better directory layout and more proper include
statements we can avoid several local modifications,
such as changing config.h for Windows Glog and the
ones related to pass-through statements in logging
headers in Glog.
This commit also makes unused functions not-a-warning
for external code.
The list of editor-types is rather long by now, so better to arrange them into
sections.
Original patch by @jeske with updates by @Blendify and myself.
Design Task: T36028
Patch: D3112
* Pressing "OK" wouldn't close Blender anymore
* Using File -> Quit would use popup version, not OS native window
Cleaned up code a bit to avoid duplicated logic.
Steps to reproduce were:
* Open Blender (no need for factory settings, "Prompt Quit" needs to be enabled)
* Edit the file (e.g. translate some object)
* Quit Blender but don't skip the quit prompt
* Press "Save & Quit"
* Save the file
Not sure if Windows supports the "Save & Quit" behavior, so this may not have
applied to Windows.
You only had to close Blender through File -> Quit.
Leaks happened because WM_exit() was called from within an operator; the UI
wasn't able to free some of its heap data then. This data was the handler
added in uiTemplateRunningJobs() and the IDProperty group added in
uiItemFullO_ptr_ex().
There was obviously a general design issue which only became visible in this
specific case.
We now delay the WM_exit call by wrapping it into a handler that gets registered
as usual. I didn't see a better way to do this, all tricks done in
ui_apply_but_funcs_after() to prevent leaks didn't work here. In fact they may
be redundant now, but I am not brave enough to try ;)
Seems we cannot use the include-directories-order trick, since
files are included from inside ".." strings, which forces the current
directory to be checked first.
This module has no use now with the new DrawManager and DrawEngines and it
is using deprecated paths.
Moving gpu_shader_fullscreen_vert.glsl
to draw/modes/shaders/common_fullscreen_vert.glsl
Steps to reproduce were:
* Append a workspace (via the '+' icon) - make sure it's from the default workspaces.blend
* Activate it
* Should crash
Was accessing data from the view-layer which wasn't updated yet (and thus could
be NULL). The crash occurred after rB8153f89518b4a.
@campbellbarton, you may want to check if all object-mode stuff still works as
expected, not sure what's the state of it.
It is possible that a datablock will not be re-used for the new
dependency graph building. The freeing function was freeing all
the nested pointers of the datablock, but not the datablock memory
itself.
MSVC still defines __cplusplus as 199711L until it is in full conformance with the newer C++ standards. However, the things we need from the standard are fully supported, hence a check for the MSVC version was needed.
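An illustrative version of such a check; the exact _MSC_VER cutoff is an assumption:

```
/* Accept a real C++11 compiler, or a recent-enough MSVC whose
 * __cplusplus is still stuck at 199711L. */
#if (defined(__cplusplus) && __cplusplus >= 201103L) || \
    (defined(_MSC_VER) && _MSC_VER >= 1800)
#  define HAVE_CXX11_FEATURES 1
#else
#  define HAVE_CXX11_FEATURES 0
#endif
```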
With the API recently added to Gawain, it is now possible to update the VBOs
linked to a batch, so the batch does not have to be destroyed.
The optimization is most noticeable when sculpting on low-poly meshes.
Without this a "Clearcoat" link could be moved to "Clearcoat Normal"
for example, which doesn't make much sense.
Differential Revision: https://developer.blender.org/D3105
Increasing the sampling dimensions like this is not optimal. I'm looking
into some deeper changes to reuse the random number and change the RR
probabilities, but this should fix the bug for now.
We revert to malloc/realloc and manually manage the upload.
There seems to be a performance penalty from using glMapBuffer on some
hardware; the preferred way is glBufferData(NULL) followed by glBufferSubData.
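What that upload path looks like in plain OpenGL, as a sketch with illustrative variable names (the pattern is commonly called "buffer orphaning"):

```
glBindBuffer(GL_ARRAY_BUFFER, vbo_id);
/* Orphan the old storage so the driver doesn't stall on in-flight draws. */
glBufferData(GL_ARRAY_BUFFER, size_in_bytes, NULL, GL_DYNAMIC_DRAW);
/* Upload from the malloc'd/realloc'd CPU-side copy we manage ourselves. */
glBufferSubData(GL_ARRAY_BUFFER, 0, size_in_bytes, cpu_data);
```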
PointCache was having a collection of items of PointCache type, having a
collection of items of PointCache type, having...
Nuff said.
For now, I chose the 'ugly' way to fix it, that is, the one that changes
nothing to API and scripts using it: we define another 'PointCacheItem'
RNA type for items of our point cache collection, which has exact same
interface as PointCache except for the collection.
This is doomed to be rewritten at some point anyway, not worth spending
time trying to define a really correct data layout for now.
- Upload the data to the GPU directly when creating the element buffer in
GWN_indexbuf_build_in_place().
- Convert data in place when squeezing the indices and removing the need
for another allocation.
- GWN_indexbuf_build_in_place() can be used with already used element
buffers and reupload their data without changing vbo id (keeping vaos
up to date).
We now allocate a VBO id on creation and let OpenGL manage its memory directly.
We use glMapBuffer to get this memory location.
This enables us to reuse and modify any vertex buffer directly without
destroying it with its associated Batches.
This commit does not really improve performance but will let us implement
more optimizations in the future.
We can also resize the buffer, though this can be slow if we need to keep
the existing data.
The addition of the usage hint makes dynamic buffers not a special case
anymore, simplifying things a bit.
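A sketch of the glMapBuffer-based update described above; the GL calls are real, the surrounding variables are illustrative:

```
glBindBuffer(GL_ARRAY_BUFFER, vbo_id);
glBufferData(GL_ARRAY_BUFFER, size_in_bytes, NULL, usage_hint); /* allocate */
void *data = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (data) {
    /* Write vertices straight into the GL-managed memory. */
    fill_vertex_data(data);
    glUnmapBuffer(GL_ARRAY_BUFFER);
}
```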
* In the Collada Module parameters are typically ordered
in a similar way. I changed this to:
extern std::string get_joint_id(Object *ob, Bone *bone);
* The Object parameter was not used in get_joint_sid().
I changed this to:
extern std::string get_joint_sid(Bone *bone);
When a name property is defined for a collection's struct, but no name is
actually set, we want to also fall back to the index case. We cannot handle
empty names to address items of a collection!
We had a mix of two issues here actually:
* First, Brushes are currently using their own sauce for custom previews;
this is not great, but moving them to use the common ImagePreview system of
IDs is a low-priority TODO. For now, they should totally ignore their
own ImagePreview.
* Second, BKE_icon_changed() would systematically create a PreviewImage
for ID types supporting it, which does not really make sense; this
function is merely here to 'tag' previews as outdated. Actual creation
of previews is deferred to later, when we actually need them.
They are used to start and end colored output in the console.
Use with care; it is up to you to check that the console actually
supports truecolor ANSI.
In the future we can extend this to other consoles and platforms.
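What "start" and "end" boil down to is 24-bit ANSI escape sequences; a minimal self-contained sketch:

```
#include <stdio.h>

/* Only emit these after checking the console supports truecolor ANSI. */
void color_start(unsigned char r, unsigned char g, unsigned char b)
{
    printf("\x1b[38;2;%u;%u;%um", r, g, b); /* set RGB foreground color */
}

void color_end(void)
{
    printf("\x1b[0m"); /* reset all attributes */
}
```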
Previous approach was not clear enough and caused problems.
UBOs were taking slots and not releasing them after a shading group, even
if the UBO was only for this shading group (notably the nodetree UBO,
since we now share the same GPUShader for identical trees).
So I chose to have a better defined approach:
- Standard texture and ubo calls are assured to be valid for the shgrp
they are called from.
- (new) Persistent texture and ubo calls are assured to be valid across
shgrps unless the shader changes.
The standard calls are still valid for the next shgrp, but are not assured
to be so if this new shgrp binds a new texture.
This enables some optimisations by not adding redundant texture and ubo
binds.
Unity itself is deprecated, but the API is also supported by KDE and the GNOME Dock extension,
which means that it will be useful for a wide variety of distributions.
To get a progress bar, the system must have a blender.desktop file and libunity installed.
The need for libunity is annoying, but the only alternative would be to integrate a DBus library...
Reviewers: campbellbarton, brecht
Differential Revision: https://developer.blender.org/D3106
Basically, don't do full in-place copy of node tree datablock if it's
already expanded. Current way how node tree is evaluated is fully
built around the idea that evaluation copies values from original
to copied datablocks.
Changing links is handled on another level.
Old solution was to create a new vbo and copy it to the location of
the old vbo hoping for the batches to update their vaos before drawing.
The issue is that the new VAO caching is not updating the VAOs at all
unless the shader interface changes.
So unless we expand gawain to support updating of vertex buffers (in a
better way than the current "dynamic" buffer) we need to delete every batch
linked to the vbo we want to recreate.
This solution might have performance implications.
Requires BLI_utildefines.h to be included first,
(already noted in other inline code).
Possible alternative could be to move BLI_assert into own header.
Even if they are there for safety, they are not free to use!
On my system (Mesa + AMD Vega GPU) calling:
glBindVertexArray(1);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
in a loop shows the same overhead as full VAO switching (which is more
or less 10 times slower than just calling glDrawArrays).
Moreover, now that we use OpenGL 3.3, binding a VAO is REQUIRED to issue a
drawcall, so it is guaranteed to be overwritten before the next drawcall.
A problem can only happen if someone draws directly with OpenGL commands.
This allows drawing multiple primitives of the types
GWN_PRIM_LINE_STRIP
GWN_PRIM_LINE_LOOP
GWN_PRIM_TRI_STRIP
GWN_PRIM_TRI_FAN
GWN_PRIM_LINE_STRIP_ADJ
with only one drawcall. This should speed up some areas that are really
sensitive to drawcall counts: UV drawing, hair drawing...
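The commit doesn't name the exact GL mechanism; one standard way to get this behavior is glMultiDrawArrays, sketched here with illustrative sizes (hair strands as line strips):

```
#define STRIP_COUNT 1024
#define POINTS_PER_STRIP 32

GLint first[STRIP_COUNT];
GLsizei count[STRIP_COUNT];
for (int i = 0; i < STRIP_COUNT; i++) {
    first[i] = i * POINTS_PER_STRIP; /* where each strip starts in the VBO */
    count[i] = POINTS_PER_STRIP;     /* vertex count of each strip */
}
/* One call replaces STRIP_COUNT separate glDrawArrays calls. */
glMultiDrawArrays(GL_LINE_STRIP, first, count, STRIP_COUNT);
```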
For IDProp IDArrays, IDP_EqualsProperties was called for each item
instead of IDP_EqualsProperties_ex, discarding the value of the `is_strict`
option.
Probably not an issue with current code, though.
The issue was only visible with copy-on-write enabled, and related to the
fact that the dependency graph builder binds original FCurves.
For now use the smallest patch possible to make things work and to make
draguu happy.
Need to think of a smarter way to deal with drivers, bones and view layers.
I tried to use real multisampling, but the main problem is the outline detection, which needs matching depth samples.
So adding FXAA instead. Always on for now; we may add a parameter for it later.
One thing to note is that we need to copy the final output once again to the main color buffer, because we cannot swap the dtxl textures (they can be referenced elsewhere, like GPUOffscreen).
We could improve upon this and add TAA on top if viewport is still.
This means that rendering clay with AO only needs 1 geometry pass,
thus greatly improving performance of poly-heavy scenes.
This also fixes a self-shadow issue in the AO, making low sample counts
way better.
We also do not need to blit the depth anymore, since we
are doing a fullscreen shading pass.
The constant cost of running a deferred shading pass is negligible.
This includes quite a bit of code cleanup inside clay_engine.c.
The deferred pipeline is only enabled if at least one material needs it.
Multisampling is not supported yet.
Small hacks when doing deferred:
- We invert the normal before encoding it for precision.
- We put the facing direction into the sign of the mat_id.
- We dither the normal to fight the low bitdepth artifacts of the normal
buffer (which is 8bits per channel to reduce bandwidth usage).
- The common names in computer science are 'getters' and 'setters', so by
adding these names to the documentation (while 'get' and 'set' are still
also mentioned) we improve findability. Having 'Getters/Setters' as a
title also makes it clearer that this example is not just about
getting or setting the property value.
- Added a little prefix to each printed value, so that print statement,
expected output, and real output can be matched more easily.
The old example had two downsides:
- It promoted a blocking UI design, where the user is shown a popup
before actually executing the operator.
- It didn't show how to actually use the property values.
The new code avoids these mistakes. The properties are also shown in the
redo panel in the 3D view.
Note that I also changed the bl_idname, as this is an example about
properties, not about dialogue boxes, and changed the class name to use
the standard operator naming convention.
I also extended the example to include a panel that sets multiple
properties of the operator, since I see questions about this relatively
frequently.
Sequencer rendering can use multisample render targets. Be sure to sync
those after rendering.
Also disable the sample loop when not needed.
Do note that currently the color correction is broken with the sequencer.
Looks like someone changed the signature of some RNA update callback,
and for some reason that 'change skey' update function was not updated
(or later got merged from master)...
We'll need RNA to check for its func signatures, some day...
When adding scene strips to the sequencer, the wrong scenes were
getting added if some were skipped. For example:
Given 4 scenes (A, B, C, D) if you're trying to add the last 3 scenes
(B, C, D) as strips to the first scene (A), it would end up adding
"A, B, C" instead of "B, C, D" as expected.
Fix provided by Andrew (signal9).
Calling the rendering operator seems to kill any other running WM_job, leaving
uncompiled materials in a GPU_MAT_QUEUED state. This then made the probe update
loop indefinitely (all_materials_updated remaining false).
To fix this, we resume compilation for materials that are in this state.
Cancelling a render before all materials are compiled could make certain
materials remain uncompiled. Fortunately, this is not allowed as of now.
Need to clear the ID recalc flag on load. Otherwise it's possible to have
some IDs considered always updated by Cycles, when they were saved
in a tagged-for-update state.
Thanks Bastien for feedback and review!
Nothing user visible, only things needed for multi-object support,
making picking functions more flexible too.
- Support passing in an initialized hit-struct,
so it's possible to do multiple nearest calls on the same hit data.
- Replace manhattan distance w/ squared distance
so they can be compared.
- Return success to detect changes to a hit-data
which might already be initialized (also more readable).
This is mostly to avoid re-compilation when using undo/redo operators.
This also has the benefit of reusing the same GPUShader for multiple materials using the same nodetree configuration.
The cache stores GPUPasses that already contain the shader code, and a hash to test for matches.
We use refcounts to know when a GPUPass is not used anymore.
I had to move the GPUInput list from GPUPass to GPUMaterial because it contains references to the material nodetree and cannot be reused.
A garbage collection is hardcoded to run every 60 seconds to free every unused GPUPass.
* Suspicious usage of a pointer:
short *type = 0; // this creates a null pointer
When this is later used for anything, Blender would crash.
After following the code and checking what happens, I strongly believe
the author wanted to use a short, and not a pointer to a short, here.
* Local variables were reused later in the same function.
While this did no harm, I still felt it was better to use different
names here to keep things more separated:
- moved variable declarations into the loop (for (int a = 0; ...))
- renamed uv_images to uv_image_set
- renamed the index variable from i to j in the inner loop that
reused the same index name as the outer loop
The iterator was redeclared 3 times. I fixed this to avoid future issues.
I commit separately so that the changes are less cluttered all over
the place.
The variable child was redeclared multiple times in the same function.
While this has not created any issues, I still changed it to avoid
confusion and keep the usage of the variables more local.
The function validateConstraints() potentially causes a null pointer
exception. I changed this so that the function returns a failure as soon
as the validation fails. This avoids falling into the null pointer trap.
The 2 methods add_bezt() and create_bezt() do almost the same thing.
I combined them both into add_bezt() and added the optional parameter
eBezTriple_Interpolation ipo.
Don't write the multichannel metadata when there is only a single layer,
and don't unnecessarily consider single layer images with Blender metadata
as multi layer.
This saves a little memory and copying in the kernel by storing only a 4x3
matrix instead of a 4x4 matrix. We already did this in a few places, and
those don't need to be special exceptions anymore now.
This is in preparation of making Transform affine only, and also gives us
a little extra type safety so we don't accidentally treat it as a regular
4x4 matrix.
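A self-contained sketch of the layout change; the names loosely follow Cycles conventions and are assumptions, not the actual kernel code:

```
typedef struct float4 { float x, y, z, w; } float4;

/* Affine transform: 3 rows of 4 floats; the implicit 4th row is (0,0,0,1). */
typedef struct Transform {
    float4 x, y, z;
} Transform;

/* Transforming a point only ever needed the 12 stored floats. */
static void transform_point(const Transform *t, const float p[3], float r[3])
{
    r[0] = t->x.x * p[0] + t->x.y * p[1] + t->x.z * p[2] + t->x.w;
    r[1] = t->y.x * p[0] + t->y.y * p[1] + t->y.z * p[2] + t->y.w;
    r[2] = t->z.x * p[0] + t->z.y * p[1] + t->z.z * p[2] + t->z.w;
}
```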
The purpose of the previous code refactoring is to make the code more readable,
but combined with this change benchmarks also render about 2-3% faster with an
NVIDIA Titan Xp.
This leads to fewer lookups into the GWNShaderInterface and less uniform uploading.
We still keep a legacy path so that builtin uniforms can still work. We might restrict this path to builtin shaders only in the future.
You can now cancel renders that take too long. This will still output the current status of the render. For example, if you cancel at 50% rendering progress, you will have a render result with only half the render samples.
This also fixes a bug with the probe debug display when there were more than 2 probes: ped->probe_id was equal to 0 for all planar probes until the next frame, resulting in all planar data debug showing probe 0.
If no custom URL was set, add-ons would get a "Report a Bug" button opening
the default developer.blender.org bug tracker. Now we only add this default
button if the add-on is bundled and not installed by the user.
Premise: When pose bones are selected, applying a pose library should
only affect the selected bones.
This commit fixes a bug where the pose was also applied when there was
no overlap between the selected bones and the bones in the pose. For
example, applying a pose which contains only keyframes for the left
hand, while only right-hand bones are selected, would apply the pose
to the left hand anyway.
The code is now also slightly more efficient; the removed 'selcount'
counter was only used as a binary (i.e. zero or non-zero). It's now
stored as a bitflag instead.
Entering edit-mode uses all selected mesh objects.
- drawing.
- select (picking, circle, border).
- select mode vert/edge/face switching.
- handful of tools: subdivide, delete, select(all)
- uv project & unwrap.
- transform (uv's and vertices)
(crazy-space and islands aren't currently working).
Almost nothing else works, this is a proof of concept.
Note, missing indentation in some added loops to reduce diff noise and merge conflicts.
Currently only covering a handful of files from reports about wrongly detected
fps.
It will need D3083 applied first to get the tests to pass; also, the tests
themselves are to be committed to svn.
But there is some Python code which needs to be reviewed, like the blendfile
passed to run_blender().
Reviewers: sybren, mont29
Reviewed By: sybren, mont29
Subscribers: mont29
Differential Revision: https://developer.blender.org/D3096
This is an issue of which value to trust: fps vs. tbr. They both can be
somewhat broken. Currently the idea is:
- If file was saved with FFmpeg AND we are decoding with FFmpeg we trust tbr.
- If we are decoding with Libav we use fps (there does not seem to be tbr in
Libav, unless I'm missing something).
- All other cases we use fps.
Seems to work fine for files from T53857, T54148 and T51153. Ideally we
would need to collect some number of regression files to make further tweaks
more scientific.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D3083
- put the render iterator in its own scope
(it would shadow its own variable if used multiple times).
- enforce semicolon at end of iterator macros.
- no need to typedef one-off macro structs.
Each AnimData block has a set of Blend/Extrapolation/Influence settings
that can be used to control how the active action is blended with the
NLA stack. However, these settings were not getting copied over to the
newly created strips (as the push-down code existed long before these
settings were added).
This commit solves this in several ways:
* Active Action Blend/Extrapolation/Influence settings now get copied
to the new strips when adding them to the NLA stack via Push Down.
Note: This doesn't happen when there are no existing NLA tracks,
as these settings don't get used in that case.
* Strip Influence will be copied across when inf < 1.0 (i.e. when a
non-default value is used), to maintain the effect. To make this work,
the influence value will get added as a keyframe to the strip's
"Influence" Control FCurve.
- See code comments for an alternative approach and why that was not chosen
- Strip Time still doesn't get keyframes added automatically yet.
* To ensure the "extrapolation mode" settings don't get always overwritten,
I've put in place a compromise: the extrapolation will only get changed
if the chosen setting will cause problmes (i.e. hold forward & back -> hold forward
if there are other tracks before it already).
Not safe for backporting to 2.79[x] stable releases.
Now repeating the operator will use the previously chosen offset, either with
the modal operator or typed in. The modal operator will still start at zero.
Reverts rBb9ae517794765d6a1660 and fixes the issue properly. Old fix could cause
NULL to be passed to functions that expect all arguments to be non-NULL.
The idea is to separate out the most common case (a symmetric frustum) and use a simple but efficient calculation for it.
The new radius is usually 98% of the size of the radius from the asymmetric solution.
Thanks to @fclem for reviewing the patch on IRC
Also get rid of the static var and initialization.
This enables the user to see the progress in the info header.
Closing Blender or reading a file also kills the job, which is good.
Unfortunately, this job cannot be interrupted by users directly. We could make it interruptible, but we would need a way to resume the compilation.
On GPUs like the `AMD Radeon HD 7570M` and `Intel(R) HD Graphics 4000`, this solution improves performance by hundreds or even thousands of times, depending on the resolution.
Reviewed By: @brecht and @fclem
Differential Revision: https://developer.blender.org/D3095
Vertex group remapping utility function,
now shared between object join and array modifier cap-ends.
Weights which don't exist are removed.
D3092 by @Foaly
Main purpose is to make it possible to cover FPS detection with regression test.
But it might also be handy for some other scripters.
Thanks Campbell for review!
The issue was happening with fast Gaussian blur, and was caused by NaN-valued
pixels in the input buffer.
Now made it so the Map Range output does not produce NaN, by returning an
arbitrary value of 0. Still better than NaN!
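A minimal sketch of the guard, assuming the standard map-range formula; the result falls back to 0 when it would be NaN (e.g. a zero-sized input range):

```
#include <math.h>

float map_range(float value, float from_min, float from_max,
                float to_min, float to_max)
{
    const float result = to_min +
        ((value - from_min) / (from_max - from_min)) * (to_max - to_min);
    return isfinite(result) ? result : 0.0f; /* never output NaN */
}
```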
Previous fix for T53430 caused T54200.
The edge case for soft & hard cuts wasn't working,
where the strip used start/end-still and the frame was placed exactly on
the start/end of the sequence content.
The previous fix handled the end-still case but broke hard cuts for all other cases.
This fixes the case for soft/hard cuts with/without start/end-still.
I keep reading that texture painting is not working yet. However it is fully
working. We even have a "Full Shading" option in the viewport display panel.
Clay/EEVEE still need their UI figured out. But the context itself is doing
its part after this patch, and at least for Cycles it's working like 2.79.
Instead of creating a new instancing shading group without attribs, we now have instancing calls. The benefit is that they can be culled.
They can be used in conjunction with the standard and generate calls, but the shader must support it (which is generally not the case).
We store a pointer to the actual count so that the number can be tweaked between redraws.
This will make multi-layer rendering more efficient.
Introduced an explicit ID property node for drivers in the depsgraph,
so it is clear what the input for a driver is, and what the
output is.
This also solves the relations builder throwing lots of errors
due to ID properties not being found.
It was too tricky to know ahead of time if an object would still
be visible in the new window/workspace/scene/layer combination,
especially since other windows may share some of these data-blocks.
So store the context, make the change, then check if the object is
still visible, freeing its mode data if it's not.
It was possible to have relations like A -> B -> C -> A (the important thing is
that no other operations point into this cluster) which were not detected
or reported by the dependency cycle solver.
Now this is solved by ensuring we don't leave unvisited nodes behind.
This is probably a better way to handle it: instead of totally
discarding scaling of non-free axes, keep the ratio between them.
Basically the logic of the constraint is now that it rescales the
object uniformly in the non-free axis plane in order to force the
total volume change to the desired value.
Note that this code will likely be generalized,
currently each new case is a little different though
so it's too early to move them into general functions.
This (now removed) code used deprecated gl_Vertex draws. It was doing
background drawing (color gradient, flat background) which is not used
by any engine.
It seems the reason the old version of the constraint overcompensates
as reported in T48079 is to allow the constraint to work with uniform
scaling on all axes. However the way it did that actually _requires_
uniform scaling for the constraint to work correctly, and breaks if
only the free scaling axis is used to avoid redundant channels.
This version attempts to allow both by discarding scaling in the non-
free directions instead of applying the correction on top of it.
This merges changes to the internals (runtime-only) of the existing custom
normals code, which make sense on their own, and will make the diff of the
soc branch easier/lighter to review.
In the details, it mostly changes two things:
* Now, smooth fans (aka MLoopNorSpaceArray) can store either loop
indices, or pointers to BMLoop themselves. This makes sense since in
BMesh, it's relatively easy to get an index from a BMElement, but nearly
impractical to go the other way around.
* The first change enforces another: now we cannot rely anymore on `loops`
being NULL in MLoopNorSpace to detect single-loop fans, so we instead
store that info in a new flag.
Again, these are expected to be totally non-functional changes.
We generate a tight mesh around the active voxels of the volume in order
to effectively skip empty space, and start volume ray marching as close
to interesting volume data as possible. See code comments for details on
how the mesh generation algorithm works.
This gives up to 2x speedups in some scenes.
Reviewed by: brecht, dingto
Reviewers: #cycles
Subscribers: lvxejay, jtheninja, brecht
Differential Revision: https://developer.blender.org/D3038
Selection code relies on being able to set the depth functions
however passes have their own depth settings.
Add DRW_state_lock to ignore passes settings for particular flags.
This fixes occlusion queries cycling through objects under the cursor.
By default select wasn't picking the nearest object,
this could have been fixed by not clearing the depth buffer,
but calling GPU_select_(begin/end) without the binded frame-buffer
caused issues for depth-picking. So move GPU_select begin/end to a
callback.
This also has the advantage that the engines only need to be populated once
to draw two passes.
Note that cycling through objects fails with occlusion queries still,
will fix shortly.
This is very efficient and adds pretty low overhead (0.1ms of drawing time for 10K objects passing through all tests, on my i3-4100M).
Like the rest of the DRWCallState, the test is "cached" until the view matrices change.
In some cases it doesn't make sense for add-ons to be listed for hiding.
Especially for import/export which use minimal UI space.
This adds `bl_info["use_owner"]` to add-ons,
currently defaulting to True for all non Import-Export add-ons.
This is more important now that we will have tighter volume bounds that
we hit multiple times.
affecting these surfaces, which shouldn't have been the case and should
eventually be fixed for transparent BSDFs as well.
For non-volume scenes I found no performance impact on NVIDIA or AMD.
For volume scenes the noise decrease and fixed artifacts are worth the
little extra render time, when there is any.
* Use a subsurface color equal to the base color, and give the subsurface
radius skin like values by default. This is how the parameter should
typically be used.
* Use GGX by default, multiscatter GGX is still quite noisy and has some
fireflies so let's keep it optional for now.
Now the only missing bit seems to be in Cycles to pass depsgraph to
builtin_image_float_pixels().
Ideally we could get evaluation context instead of using depsgraph + settings.
But for the other RNA EvaluationContext functions this is how we are doing it.
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D3087
While doing so with Bone_R.001, Bone_R.002, Bone_R.003 etc. is doomed to cause
issues, doing that on duplicates of actually correctly named bones can
be handy, and safe.
So adding back as an option (was removed in rB702bc5ba26d5).
Flip names operator changed in rB702bc5ba26d5, to some sensible
behavior. But this breaks the common workflow of 'duplicate part of the
bones, scale-mirror new ones, and flip their names'.
So now, instead of doing this in two steps and trying to guesstimate which
bones should get which name, just add an option to flip names to the duplicate
operator itself. Simpler, safer, and much, much more consistent behavior
and predictable results.
This solves an issue with tweaking brush size when interleaving particle edit
and texture paint modes. The issue was caused by texture paint setting more
operator properties than is done for particle edit mode, which made the window
manager use saved properties for the "missing" ones.
I don't see any reason why we would want to save any of those properties.
This is a regression since rB83b60dac57a1.
Allows each workspace to have its own add-ons on display.
Filtering for: Panels, Menus, Keymaps & Manipulators.
Automatically applies to add-ons at the moment.
Access from the workspace, toggled off by default;
once enabled, add-ons can be white-listed.
See D3076
When changing the mode of an object, apply this to all other
workspaces that share the same active object.
Also copy the object-mode when duplicating workspaces.
Note that setting `glDepthFunc` isn't important.
Since the 2.8 branch changes this value it might seem like an error;
however, it's harmless in this case - so better make note of this.
The refactor includes:
- Removal of DRWInterface. (was useless)
- Split DRWCallHeader into a new struct DRWCallState that will be reused in the future.
- Use BLI_link_utils for APPEND/PREPEND.
- Creation of the new DRWManager struct type. This will enable us to create more than one manager in the future.
- Removal of some dead code.
This still does not make point density work in Cycles, but at least it passes
the depsgraph down the line.
Note this was working fine before the depsgraph/render refactor to pass
evaluated depsgraph to the engines.
User notes
----------
Compositing, rendering of multi-layers in Eevee should be fully working now.
Development notes
-----------------
Up until now we were still using the same depsgraph for rendering and viewport
evaluation. And we had to go out of our way to be sure the depsgraphs were
updated.
Now we iterate over the (to be rendered) view layers and create a depsgraph for
each one, fully evaluated, and call the render engines (Cycles, Eevee, ...) with
this viewlayer/depsgraph/evaluation context.
At this time we are not handling data persistency, Depsgraph is created from
scratch prior to rendering each frame. So I got rid of most of the partial
update calls we had during the render pipeline.
Cycles: Brecht Van Lommel did a patch to tackle some of the required Cycles
changes, but this commit marks these changes as TODOs. Basically Cycles needs to
render one layer at a time.
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D3073
- Make the view- and object-dependent matrix calculations isolated and separate, avoiding unneeded calculations.
- Add a per-drawcall matrix cache so that we can precompute these in advance in the future.
- Replace the integer uniform locations of view-dependent-only builtins with DRWUniforms that are only updated once per shgroup.
Now that Eevee has support for offline rendering (F12) we can use it for
the Material previews.
Note: This makes the duplicated UI issue one panel worse. That happens when
Cycles is your scene engine, and Eevee is your workspace engine.
Depth picking needs to read the depth buffer after drawing;
since GPU_select_end runs in a different OpenGL context,
reading the depth buffer wasn't working.
This caused the last object to be unselectable.
This patch adds F12 offline Freestyle rendering support to Eevee.
Most functionalities are identical with those found in Cycles.
The only major difference is that the per-view layer "use Freestyle" toggle
option is currently placed in the "Passes" panel of the "View Layers"
properties window instead of a "Layer" panel as in Cycles. Since Freestyle
is a post-processed overlay and not a pass, the present option location is
a compromise. To describe this fact, the per-layer "use Freestyle" option
is in a subsection labeled as "Layer".
Reviewers: fclem, brecht, campbellbarton
Reviewed By: fclem, brecht
Subscribers: dfelinto
Differential Revision: https://developer.blender.org/D3084
This separate context allows two things:
- It allows viewports in multi-window configurations.
- F12 render can use this context in a separate thread and do a non-blocking render.
The downside is that the context cannot be used while rendering, so a request to refresh a viewport will lock the UI. This is something that will be addressed in the future.
Under the hood, what this means:
- Not adding more mess with VAOs management in gawain.
- Doing depth only draw for operators / selection needs to be done in an offscreen buffer.
- The 3D cursor "autodis" operator is still reading the backbuffer so we need to copy the result to it.
- All FBOs needed by the drawmanager must be created/destroyed with its context active.
- We cannot use batches created for UI in the DRW context and vice-versa. There is a clear separation of resources that enables the use of safe multi-threading.
Offscreen contexts are not attached to a window and can only be used for rendering to framebuffer objects.
CGL implementation : Brecht Van Lommel (brecht)
GLX implementation : Clément Foucault (fclem)
WGL implementation : Germano Cavalcante (mano-wii)
Other implementations are just placeholders for now.
The exporter only exports matrix data (4x4 transformation matrix) for skeletal animation. For object animation, only exporting to trans/rot/loc is implemented.
This task implements matrix export for simple object animation as well.
Differential Revision: https://developer.blender.org/D3082
Turns out to be the call that was destroying performance.
I get 18ms->6ms improvement of drawing time with 10 000 unique objects.
And we can still improve upon this!
This started with a fix for an animated Object Hierarchy. Then I decided to clean up and optimize a bit. But in the end this has become a more or less full rewrite of the Animation Exporter. All of this happened in a separate local branch and I have retained all my local commits to better see what I have done.
Brief description:
* I fixed a few issues with exporting keyframed animations of object hierarchies where the objects have parent inverse matrices which differ from the Identity matrix.
* I added the option to export sampled animations with a user defined sampling rate (new user interface option)
* I briefly tested Object Animations and Rig Animations.
What is still needed:
* Cleanup the code
* Optimize the user interface
* Do the Documentation
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D3070
This is used to determine which voxels are to be considered empty space.
Previously it was hardcoded for converting dense grids to OpenVDB grids
to reduce disk space usage.
This value is also useful for rendering engines to know, e.g. to
optimize ray marching.
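To illustrate the idea, a minimal sketch (hypothetical names, not the actual
grid code) of how such a clipping threshold separates empty space when
building a sparse grid:
```
/* Hypothetical sketch: when converting a dense grid to a sparse one, voxels
 * below the clipping threshold are treated as empty space and never stored,
 * which is also what a ray marcher can use to skip them. */
static int sparse_fill(const float *dense, int num_voxels, float clipping,
                       float *r_values, int *r_indices)
{
	int count = 0;
	for (int i = 0; i < num_voxels; i++) {
		if (dense[i] >= clipping) { /* keep only non-empty voxels */
			r_values[count] = dense[i];
			r_indices[count] = i;
			count++;
		}
	}
	return count; /* number of stored voxels */
}
```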
Some other software cannot handle grid names with spaces in them. We still check for names with spaces so as to not break old
files.
This fixes T53802.
This replaces the blackbody to RGB code with the simpler and faster one from
Cycles. It's a little different, but the only other place using this is the
legacy volume drawing, so there is no need to stay compatible with that.
Similar to the Principled BSDF, this should make it easier to set up volume
materials. Smoke and fire can be rendered with just a single principled
volume node, the appropriate attributes will be used when available. The node
also works for simpler homogeneous volumes like water or mist.
Differential Revision: https://developer.blender.org/D3033
Technically the original issue is that xof/yof in render result is calculated
for drawing border render. So a simpler patch could be:
```
- rr->xof = re->disprect.xmin;
+ rr->xof = re->disprect.xmin + BLI_rcti_cent_x(&re->disprect) - (re->winx / 2);
```
However everywhere in the code we are getting border directly from re->disprect
which we may as well do here too.
Besides, I'm taking this as a chance to get rid of RenderResult in the internal
loop of Eevee, to help prepare the code for the upcoming rendering pipeline
changes.
When you were using autosmooth to generate some custom normals, and
created empty custom loop normal data, you would go back to an 'all
smooth' shading, cancelling some sharp edges generated by the mesh's
smooth threshold.
Now we will first tag such edges as sharp, such that shading remains the
same. This is not crucial in current master, but it is for the clnors
editing GSoC branch!
Regression caused by earlier commits to improve the automerge behaviour.
In this case, the problems only occurred when moving a selected keyframe
forwards in time to overlap an unselected keyframe.
While it is necessary to ignore duplicates when doing Deselect/Column Select
(where double-updates may result in nothing being selected), for borderselect,
not including the duplicates meant that sometimes, nothing would happen
if you were trying to borderselect keyframes originating from hidden channels.
This was first noticed in the greasepencil-object branch, but affects all
animation channel types.
This is not an ideal solution, but Blender's freeing system is already quite tangled.
So tracking and clearing VAO caches when destroying contexts does prevent bad behaviour.
This allows us to:
- Not mock around with tags stored in a global space,
and not to iterate over all datablocks in the database
to clear the tags.
- Properly deal with datablocks which might not be in main database.
While it sounds crazy, it might be handy when dealing with preview,
or some partial scene updates, such as motion paths.
- Avoids the majority of places where depsgraph construction needed bmain.
This is something that could help in the blender2.8 branch.
Tests with a production file here did not show any measurable slowdown.
Hopefully there are no functional changes :)
Was temporarily removed when moving object mode to workspace.
Note: there is an issue where eval_ctx->view_layer is NULL on load.
For now pass a view layer argument; we might want to set the value
instead.
This reverts commit 87c72a7d27.
Caused T54121 which breaks blend file saving.
For now crash on exit is preferable.
Possible solution is to free screen-manipulator batches in a separate
loop.
We now continue transparent paths after diffuse/glossy/transmission/volume
bounces are exceeded. This avoids unexpected boundaries in volumes with
transparent boundaries. It is also required for MIS to work correctly with
transparent surfaces, as we also continue through these in shadow rays.
The main visible change is that volumes will now be lit by the background
even at volume bounces 0, same as surfaces.
Fixes T53914 and T54103.
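A hedged sketch of the new termination rule, with illustrative names only
(not the actual kernel code):
```
/* Transparent bounces get their own counter and limit, instead of being cut
 * off by the diffuse/glossy/transmission/volume limits. */
typedef struct PathState {
	int bounce;
	int transparent_bounce;
} PathState;

static int path_continue(const PathState *state, int is_transparent,
                         int max_bounce, int max_transparent_bounce)
{
	if (is_transparent) {
		/* Same rule that shadow rays already use. */
		return state->transparent_bounce < max_transparent_bounce;
	}
	return state->bounce < max_bounce;
}
```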
A major bottleneck of the current implementation is the call to create_bindings() for basically every drawcall.
This is due to the VAO being tagged dirty when assigning a new shader to the Batch, defeating the purpose of the Batch (reusing it for drawing).
Since managing hundreds of batches in DrawManager and DrawCache did not seem fun enough to me, I preferred rewriting the batches themselves.
--- Batch changes ---
For this to happen I needed to change the Instancing to be part of the Batch rather than being another batch supplied at drawtime.
The Gwn_VertBuffers are copied from the batch to be instanced and a new Gwn_VertBuffer is supplied for instancing attribs.
This means a VAO can be generated and cached for this instancing case.
A Batch can be rendered with instancing, without instancing attribs and without the need for a new VAO, using GWN_batch_draw_range_ex with the force_instance parameter set to true.
--- Draw manager changes ---
The downside with this approach is that we must track the validity of the instanced batch (the original one). For this the only way (I could think of) is to set a callback for when the batch is getting freed.
This means a bit of refactor in the DrawManager with the separation of batching and instancing Batches.
--- VAO cache ---
Each VAO is generated for a given ShaderInterface. This means we can keep it alive as long as the shader interface lives.
If a ShaderInterface is discarded, it needs to destroy every VAO associated with it. Otherwise, a new ShaderInterface at the same address could be generated and reuse the same VAO with incorrect bindings.
The VAO cache itself uses a mix between a static array of VAOs and a dynamic array if there is not enough space in the static one.
Using this hybrid approach is a bit more performant than the dynamic array alone.
The array will not resize down, but empty entries will be filled up again. It's unlikely we get a buffer overflow from this. Resizing could be done on the next allocation if needed.
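A minimal sketch of such a hybrid cache, assuming illustrative names rather
than the real gawain structures:
```
/* A small static array covers the common case; the heap array is only used
 * when it overflows. */
#define VAO_STATIC_LEN 4

typedef struct VaoCache {
	const void *interfaces[VAO_STATIC_LEN]; /* ShaderInterface used as key */
	unsigned int vaos[VAO_STATIC_LEN];
	/* Overflow storage: grows but never shrinks, empty slots are reused. */
	int dyn_len;
	const void **dyn_interfaces;
	unsigned int *dyn_vaos;
} VaoCache;

static unsigned int vao_cache_lookup(const VaoCache *cache, const void *interface)
{
	for (int i = 0; i < VAO_STATIC_LEN; i++) {
		if (cache->interfaces[i] == interface)
			return cache->vaos[i];
	}
	for (int i = 0; i < cache->dyn_len; i++) {
		if (cache->dyn_interfaces[i] == interface)
			return cache->dyn_vaos[i];
	}
	return 0; /* not cached: caller creates the VAO and inserts it */
}
```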
--- Results ---
Using Cached VAOs means that we are not querying each vertex attrib for each vbo for each drawcall, every redraw!
In a CPU limited test scene (10000 cubes in Clay engine) I get a reduction of CPU drawing time from ~20ms to 13ms.
The only area that is not caching VAOs is the instancing from particles (see comment DRW_shgroup_instance_batch).
This allows allocation of VAOs from different OpenGL contexts and threads as long as the drawing happens in the same context.
Allocation is thread safe as long as we abide by the "one OpenGL context per thread" rule.
We can still free from any thread and actual freeing will occur at new vao allocation or next context binding.
The other approach was causing too much error in some cases (e.g. favouring
the lower-valued keyframes). This fix should make the resulting curves less
bumpy/jagged.
This commit removes an earlier attempt at optimising the lookups
for duplicates of a particular tRetainedKeyframe once we'd already
deleted all the selected copies. The problem was that now, instead
of getting rid of the unselected keys (i.e. the basic function here),
we were only getting rid of the selected duplicates.
With this fix, unselected keyframes will now get removed (as expected)
again. However, we currently don't take their values into account
when merging keyframes, since it is assumed that we don't care so much
about their values when overriding.
that end up on the same frame
Currently, when scaling keyframes in the Dopesheet, if multiple
selected keyframes end up on the same frame post-scaling, they
would not get removed by the "Automerge" setting that normally
removes duplicates on the same frame.
This commit changes the behaviour so that when multiple selected
keyframes end up on the same frame, instead of keeping all these
around on the same frame (e.g. resulting in a column of keyframes
on different values), we will instead merge them into a single
keyframe (by averaging the values). This should result in a
smoother F-Curve with fewer "stair-steps" that need to be carefully
cleaned out afterwards.
Requested by @hjalti
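A minimal sketch of the merge-by-averaging idea, assuming a sorted key array
and hypothetical types (the real code works on BezTriple keyframes):
```
/* Keys that landed on the same frame are collapsed into one key whose value
 * is the average. Assumes keys are sorted by frame. Returns the new length. */
typedef struct Key { float frame, value; } Key;

static int merge_keys_on_same_frame(Key *keys, int len)
{
	int out = 0;
	for (int i = 0; i < len;) {
		float sum = keys[i].value;
		int n = 1;
		/* Accumulate every key that landed on this frame. */
		while (i + n < len && keys[i + n].frame == keys[i].frame) {
			sum += keys[i + n].value;
			n++;
		}
		keys[out] = keys[i];
		keys[out].value = sum / (float)n; /* average, not min/max */
		out++;
		i += n;
	}
	return out;
}
```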
This bug took a while to track down. In the test file with this report,
the Nla-Strip Control Curve for strip time would get disabled if you
changed the NLA Editor to a second Graph Editor instance.
It turns out that because this second Graph Editor would have the
"filter fcurves by name" option enabled, this would trigger a lookup
of the referenced property's name (in order to test whether it matched
the filtering criteria). However, since that filtering code was written
before the introduction of these curves, it still assumed that the names
for these Control Curves should be handled the same as for standard FCurves.
Unfortunately, that doesn't work, as the property lookups fail if the standard
method is used - when the lookups fail, the F-Curves get tagged as being
invalid/disabled (and need to be reset using the "Revive Disabled FCurves"
operator).
Note: The changes in this patch look complicated, as I've had to shuffle
a bit of code around so that the name-filtering check can have access to
the additional info it needs. In the process, I've also removed the earlier
(hacky) approach where the control curves were getting added to a temp
buffer to get changed from normal FCurves to special ANIMTYPE_NLACURVES.
Not sure why we need a relation from the solver to a tip's local transform;
this will be handled via the parent relation.
Fixes remaining dependency cycles reported in T54083.
It is not possible to address a transform at a particular position of the
constraint stack, and when a constraint is being addressed it is usually from a
driver variable.
This fixes some of dependency cycles reported in T54083.
We need to move the render result logic outside the render engine code.
It makes no sense for Eevee/Clay/... to have to re-implement the render result
creation logic. Besides, the original implementation really got it wrong, by
ignoring the different render layers needed for the final render.
Finally, there is no need to re-create the logic for views. So this was also
fixed.
Note 1: This will break still if the depsgraph of the needed view layers is not
updated / created. We need to address this separately. For now if users want
to test this, just show each view layer in the viewport at least once.
Note 2: We are still getting depsgraph from scene and creating if needed.
`BKE_scene_get_depsgraph(scene, view_layer, true);` according to Sergey we need
to move the render depsgraph into the Render struct instead. I will do it
separately as well.
This is a regression in rB4f1c0a1 which allowed cutting hair only at the
second segment, while there is nothing wrong with cutting hair at the
first segment.
Don't use dm->get*Array for a DM you don't own. This call can allocate a
temporary CD layer, which is not thread safe at all.
Also removed hard-coded logic around the CDDM check. The new functions do the
same logic, but are DM-type-independent.
We shouldn't mix image pool acquisition with and without a user provided;
the fact that internally image.c uses the last frame from the Image datablock
confuses the logic.
Optionally don't remap indices for objects.
Checking all objects' parents would reference a freed pointer
while freeing all objects.
In the case of dynamic topology there is no use in keeping track
of hook/vertex-parent indices.
Also disable this when creating meshes for undo storage
since adding an undo step shouldn't be modifying other objects.
This reuses the Cycles regression test code to also work for OpenGL UI drawing.
We launch Blender with a bunch of .blend files, take a screenshot and compare
it with a reference screenshot, and generate an HTML report showing the failed
tests and their differences.
For Cycles we keep small reference renders to compare to in svn, but for OpenGL,
developers currently have to generate the references manually. How to use:
* WITH_OPENGL_DRAW_TESTS=ON in CMake
* BLENDER_TEST_UPDATE=1 ctest -R opengl_draw
* .. make code changes ..
* ctest -R opengl_draw
* open build_dir/tests/opengl_draw/report.html
Differential Revision: https://developer.blender.org/D3064
Once the 'losing lib' issue is fixed (in the previous commit), we have a new
issue: this could lead to several copies of the same linked data-block in the
.blend file. Which is not good. At all.
So had to add a GHash-based check in libraries reading code to ensure we
only load a same ID from a same lib once.
The issue was that when the same lib was found several times in the loaded
.blend, we'd only keep the first occurrence. But since Blender expects
next data-blocks to belong to last found library, we could actually
be adding data-blocks assigned to copies of the duplicated lib to
another, totally unrelated lib.
Those data-blocks were then obviously not found when actually loading
libs content, and lost.
Note that this only fixes one part of the issue; the current code can
generate several copies of the same linked data-block now. That will be fixed
in another commit.
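To illustrate the idea, a sketch of the duplicate check with a linear scan and
hypothetical types; the actual code uses a GHash keyed on the library and ID
name:
```
#include <string.h>

/* Before adding a data-block read from a library, check whether the same ID
 * from the same library file was already loaded. */
typedef struct LoadedID {
	char lib_path[256];
	char id_name[66];
} LoadedID;

static int id_already_loaded(const LoadedID *loaded, int len,
                             const char *lib_path, const char *id_name)
{
	for (int i = 0; i < len; i++) {
		if (strcmp(loaded[i].lib_path, lib_path) == 0 &&
		    strcmp(loaded[i].id_name, id_name) == 0)
		{
			return 1; /* skip: this ID from this lib was read already */
		}
	}
	return 0;
}
```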
While the script should be using INVOKE_PREVIEW for operators in clip view,
the window manager was lacking some switch statements.
Thanks Brecht for the review!
- Use BLI_threadpool_ prefix for (deprecated)
thread/listbase API.
- Use BLI_thread as prefix for other functions.
See P614 to apply instead of manually resolving conflicts.
- When returning the number of items in a collection use BLI_*_len()
- Keep _size() for size in bytes.
- Keep _count() for data structures that don't store length
(hint this isn't a simple getter).
See P611 to apply instead of manually resolving conflicts.
It kind of doesn't matter where the macro itself is defined.
We should stick to the following:
- If some macro is actually more an inline function, follow regular
function name conventions.
- If a macro is really a macro, type it in capitals. Use a module prefix if
that helps readability or if it helps avoid accidents.
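A small illustration of the two conventions; both macro names below are made
up for the example:
```
/* More an inline function: lowercase, module prefix, function-style name. */
#define blf_glyph_ascii_index(gc, c) \
	((gc)->glyph_ascii_table[(unsigned char)(c)])

/* A macro proper: capitals, with a module prefix to help avoid accidents. */
#define BLI_ARRAY_LEN(arr) (sizeof(arr) / sizeof((arr)[0]))
```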
This completes the twist feature, which is now possible to also control by
texture. Since textures cannot easily contain negative values either, the
same trick with a 0.5 neutral value as for vertex groups is used.
All in all, the twist feature allows doing the following things.
Original hair:
{F2287535}
Hair with scientifically calculated twist value of 0.5:
{F2287540}
And we can also twist braids in opposite directions dependent on left/right
side:
{F2287548}
The idea is to give control over the direction of twist, and maybe the amount
of twist as well. A more concrete example: make braids on the left and right
side of a character's head twist in opposite directions.
Now, the tricky part: we need negative values to flip direction, but weights
cannot be negative. So we use the same trick as displacement maps and tangent
normal maps, where 0.5 is neutral, values below 0.5 are considered negative and
values above 0.5 are considered positive.
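The remapping itself is trivial; a self-contained sketch:
```
#include <stdio.h>

/* Weights and textures can only store [0, 1], so 0.5 is the neutral value:
 * below it maps to negative twist, above it to positive twist. */
static float twist_from_weight(float w)
{
	return 2.0f * (w - 0.5f); /* [0, 1] -> [-1, 1] */
}

int main(void)
{
	printf("%.2f %.2f %.2f\n",
	       twist_from_weight(0.0f),  /* -1.00: full twist one way */
	       twist_from_weight(0.5f),  /*  0.00: neutral */
	       twist_from_weight(1.0f)); /*  1.00: full twist the other way */
	return 0;
}
```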
It allows child hair to be twisted around the parent curve, which is
quite an essential feature when creating hair braids.
There are currently two controls:
- Number of turns of the children around the parent.
- Influence curve, which allows modifying the "twistness" along the strand.
This isn't supported, since there are subsequent reads of all point coordinates
after the modification started.
Probably we need to create a temp copy of the point, but that's like extra CPU
ticks.
view_layer is NULL when the render engine is created; this gets passed
around and ends up in this code, causing a crash. This should be reverted
after the render engine API is updated to set view_layer.
The conditionals in particle code are... some sort of madness... I'm not
even sure what the correct behavior should be from looking at it.
In this case the path cache generation was being skipped in edit mode.
Due to changes in the draw code, particles from old files that may have had a
default draw size of 0 will not be visible. Old draw code would check
for this and adjust the size, however the unit for draw size has changed
from pixels to BU, and it no longer makes sense to have such checks.
This patch is to ensure particles from such files remain visible.
It seems to be useful still in cases where the particles are distributed in
a particular order or pattern, to colorize them along with that. This isn't
really well defined, but might as well avoid breaking backwards compatibility
for now.
This removes the need for custom attribs for instancing.
Instancing works fully with dynamic batches & Gwn_VertFormat now.
This is in preparation for the VAO manager patch.
This manager allows distributing existing batches for instancing
attributes. This reduces the number of batch creations.
Querying a batch is done with a vertex format. This format should
be static so that its pointer never changes (because we are using
this pointer as the identifier [we don't want to check the full format,
that would be too slow]).
This might make the original Instance Data manager useless but it's currently used by DRW_object_engine_data_ensure().
These batches keep their memory chunk allocated after transfer so they can be reused and updated very often.
NOTE: This commit breaks instancing in DRW (it's fixed in the next commit).
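A minimal sketch of the pointer-identity lookup this relies on (illustrative
names, not the actual manager code):
```
#include <stddef.h>

/* Batches are cached per vertex-format pointer, so the format must be static
 * (its address is the key; comparing full formats on every lookup would be
 * too slow). */
typedef struct FormatBatchPair {
	const struct Gwn_VertFormat *format; /* key: pointer identity */
	struct Gwn_Batch *batch;
} FormatBatchPair;

static struct Gwn_Batch *batch_request(FormatBatchPair *table, int len,
                                       const struct Gwn_VertFormat *format)
{
	for (int i = 0; i < len; i++) {
		if (table[i].format == format) /* pointer compare, no deep compare */
			return table[i].batch;
	}
	return NULL; /* caller allocates a new batch for this format */
}
```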
Now that the new 3D viewport draws to a multisample offscreen buffer, there is
no good reason anymore to create an entire multisample window and pay the
performance/memory cost for other regions that don't need it.
GL_MULTISAMPLE now only gets enabled for offscreen buffers, so we don't need
to check for it throughout the UI code anymore.
Differential Revision: https://developer.blender.org/D3062
This is like the only way to add variety to hair which is created
using simple children. Used here for the hair.
Maybe not ideal, but the time will show.
Burley SSS uses a bit of a strange setup where the albedo and closure weight are
different, which makes the subsurface color act a bit like a subsurface radius
indirectly by the way the Burley SSS profile works.
This can't work for random walk SSS though, and it's not clear to me that this
is actually a good idea since it's really the subsurface radius that is supposed
to control this. For now I'll leave Burley SSS working the same to not break
backwards compatibility.
This can be very slow if it contains a big texture, and it's not
necessarily set up in a useful way anyway, and materials can be used
in multiple scenes.
Instead of cloning the window to create dummy HWNDs and HDCs to avoid calling SetPixelFormat more than once on the same window, use the original window and HDC and do not call SetPixelFormat again.
In addition to avoiding a lot of unnecessary calls, this simplifies the code and makes it match the other OSs.
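A hedged sketch of the idea using the standard Win32 calls; the helper name
is made up:
```
#include <windows.h>

/* SetPixelFormat may only succeed once per window, so check whether the HDC
 * already has a pixel format before trying to set one. */
static BOOL ensure_pixel_format(HDC hdc, const PIXELFORMATDESCRIPTOR *pfd)
{
	if (GetPixelFormat(hdc) != 0) {
		return TRUE; /* already set for this window, never set it again */
	}
	int format = ChoosePixelFormat(hdc, pfd);
	return (format != 0) && SetPixelFormat(hdc, format, pfd);
}
```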
Cycles already uses 1.5 as default. BI's original 1.0 filter doesn't look good
for Eevee. The ideal scenario would be for both Cycles AND Eevee to use the same
DNA setting.
But for now it is nice to at least have Eevee renders look better by default.
Note: This handles doversion for 2.7x files only. Files previously created in
2.8 need to be manually corrected.
It is basically brute force volume scattering within the mesh, but part
of the SSS code for faster performance. The main difference with actual
volume scattering is that we assume the boundaries are diffuse and that
all lighting is coming through this boundary from outside the volume.
This gives much more accurate results for thin features and low density.
Some challenges remain however:
* Significantly more noisy than BSSRDF. Adding Dwivedi sampling may help
here, but it's unclear still how much it helps in real world cases.
* Due to this being a volumetric method, geometry like eyes or mouth can
darken the skin on the outside. We may be able to reduce this effect,
or users can compensate for it by reducing the scattering radius in
such areas.
* Sharp corners are quite bright. This matches actual volume rendering
and results in some other renderers, but maybe not so much real world
objects.
Differential Revision: https://developer.blender.org/D3054
Looks like there was no way to avoid that so far: since
WM_event_add_timer_notifier can store a mere int-in-pointer there, this can
cause issues. So added a simple flags system to wmTimer to allow
controlling this.
Instead of calling an operator I just call `collection.new()`. Moving the
code into a separate function also simplifies it. In its new form there is
also no undefined behaviour when me.vertex_colors is non-empty but has no
active layer.
We were not passing a scene collection parent to the BKE_collection_add
function, which in turn made syncing not work.
Right now we:
* Explicitly pass the master collection in this case
* Fallback to the master collection in other cases
With unittest.
- normalize → average the vector: the vector isn't normalized here, because
it doesn't necessarily become unit length. Instead, the sum is converted
to an average vector.
- angle is the acos()…: the dot product between the vertex normal and the
average direction of the connected vertices is computed, and not the
opposite.
- The initial `con` list was discarded immediately and replaced by a new
list.
- File didn't end with a newline.
We've got quite a comprehensive BMesh based implementation, which is way easier
to maintain than the abandoned Carve library.
After all this time, the BMesh implementation has ended up with the same
limitations regarding manifold meshes and touching edges as Carve. It is better
to focus on maintaining one boolean implementation now.
Reviewers: campbellbarton
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D3050
Previously quads were always split along the first-third vertices.
This is still the default, to avoid flickering with animated deformation;
however, concave quads that would create two opposing triangles now use the
second-fourth split.
Reported as T53999, although this issue has been a known limitation
for a long time.
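A rough sketch of how such a diagonal choice can be made for a planar quad
(illustrative, not the actual tessellation code):
```
/* Default to the v0-v2 diagonal, but if the quad is concave at v0 or v2 that
 * diagonal lies outside the quad, so split along v1-v3 instead. */
typedef struct { float x, y; } Vec2;

static float corner_cross(Vec2 a, Vec2 b, Vec2 c)
{
	/* Sign gives the orientation of triangle (a, b, c). */
	return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

static int use_second_fourth_split(Vec2 v0, Vec2 v1, Vec2 v2, Vec2 v3)
{
	float c0 = corner_cross(v3, v0, v1); /* corner at v0 */
	float c2 = corner_cross(v1, v2, v3); /* corner at v2 */
	/* Opposite signs: one of these corners is reflex, so the v0-v2 split
	 * would produce two opposing triangles. */
	return (c0 * c2 < 0.0f);
}
```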
- Read-only access can often use EvaluationContext.object_mode
- Write access to go to WorkSpace.object_mode.
- Some TODO's remain (marked as "TODO/OBMODE")
- Add-ons will need updating
(context.active_object.mode -> context.workspace.object_mode)
- There will be small/medium issues that still need resolving
this does work on a basic level though.
See D3037
Partially revert efe1af3d11
The offending commit over-zealously removed the datablocks viewer case
as well, when only the condition needed to be modified.
This was caused by dupli's ObjectEngineData that were not freed.
This allocates the data using the instance data manager (no alloc/free between frames). Though the data should be treated as not persistent in this case.
Made shape keys work for meshes. Also added missing code for curves.
Curves and lattices will not have shape keys visible, since modifiers support
is still to be done for them.
This brings separate initialization for libcuda and libnvrtc, which
fixes Cycles nvrtc compilation not working on build machines without
CUDA hardware available.
Differential Revision: https://developer.blender.org/D3045
It is possible to have a non-NULL scene in a graph which was never built yet;
this happens when an ID is tagged for update for a non-built graph.
Was causing crash opening deg_anim_pose_bones.
Reported by Mai in IRC, thanks!
This made vertex/weight/sculpt crash.
Add BKE_workspace_update_object_mode which sets the object mode from the
workspace.
We may want to re-visit exactly when this is set, for now call within
wm_event_do_refresh_wm_and_depsgraph.
This fixes an issue where old cache data was used after an object has been moved.
Particles were coming from very wrong positions. Reproduction case is to move an
object while animation is running and then let the animation loop back and
play again.
Differential Revision: https://developer.blender.org/D3044
Suggested by Pablo Vazquez (venomgfx).
The idea here is that it should be easy to work in the outliner by picking a
bunch of objects and adding them to a new collection.
Where is the new collection? At the same level as the "outliner active" object.
Note, since the outliner has no pure concept of an active object, I'm using
the highlight tag for this. Hopefully it works fine.
It should work in "Collections", "View Layer", and "Groups".
Only when collections are not filtered out.
The reason it appeared to work was left-over debug code forcing a time
dependency.
The real fix seems to involve force-tagging objects used by duplication,
similar to what we already do for some other modifiers.
When the destination IDProp did not exist, the new code (related to static
overrides) would do nothing...
IDProps and RNA are really not easy to tame; thinking more and more, we
should totally bypass RNA and directly use (add) IDP code to handle
comparison and diff creation/application of IDProps.
But for now, this bandage should do the trick.
Add enum headers to DNA, to be included in other headers
so function signatures can use enums for better type safety.
Add DNA_*_enums.h matching DNA_*_types.h as needed.
The check to see if `use_advanced_hair` was enabled was actually in two places
(render panel `draw` function and physics panel `poll` function). As these
properties are only in one place now, the check in `draw` isn't needed anymore.
Related: T53513, a6c69ca57f
As a follow-up to the commit rB354f92a49458795c69f857de927c5b1531cd3618
for fixing Freestyle crash when using Cycles (thanks Brecht for the fix), this revision
applies a related bugfix addressed partly in D3040 (item #2 in the description).
We should actually be using CL_DEVICE_MEM_BASE_ADDR_ALIGN for sub-buffers;
the previous change in this code was incorrect. Renamed the function now to
make the specific purpose of this alignment clear, it's not required for
data types in general.
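For reference, querying that alignment looks like this; note the value is
reported in bits, not bytes:
```
#include <CL/cl.h>

/* CL_DEVICE_MEM_BASE_ADDR_ALIGN reports the required sub-buffer alignment in
 * bits; clCreateSubBuffer fails with CL_MISALIGNED_SUB_BUFFER_OFFSET when a
 * sub-buffer origin is not a multiple of this value in bytes. */
static size_t device_sub_buffer_alignment(cl_device_id device)
{
	cl_uint align_bits = 0;
	clGetDeviceInfo(device, CL_DEVICE_MEM_BASE_ADDR_ALIGN,
	                sizeof(align_bits), &align_bits, NULL);
	return (size_t)align_bits / 8; /* bits -> bytes */
}
```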
Cycles' old behaviour is to hide the duplicator on rendering at all times.
For a few months now we have had an option in 2.8 to control the duplicator
visibility on its own. However, when the duplicator is also duplicated, things
were not working properly.
What we do now, in addition to the duplicator visibility control, is to not
let the source collection of the duplicator object ever influence its
visibility while the object is being duplicated.
So if the user wants to reproduce Cycles' old behaviour, all that is required
is to have different collections: one for the original to-be-duplicated objects,
which you hide in the view layer used for the final render, and another
collection with only the first duplicator (which in turn duplicates other
duplicators).
I know this all may sound confusing, so please just give it a try, it's simpler
than it sounds.
T53783.
Before, profile=1 ("square outside") only worked well in a few cases
(some "pipes", cube corners). This makes it work well pretty much
everywhere.
This leads to a huge improvement of anti-aliasing quality.
There is no other distribution now and there are no settings displayed to the user. That's for another commit.
This patch changes the huge list of projects in visual studio into a nice tree matching the source folder structure. see D2823 for details.
Differential Revision: http://developer.blender.org/D2823
nvcc is very picky regarding compiler versions, severely limiting the compilers we can use. This commit adds an nvrtc based compiler that'll allow us to build the cubins even if the host compiler is unsupported. For details see D2913.
Differential Revision: http://developer.blender.org/D2913
This adds midlevel and object/world space for displacement, and a
vector displacement node with tangent/object/world space, midlevel
and scale.
Note that tangent space vector displacement still is not exactly
compatible with maps created by other software, this will require
changes to the tangent computation.
Differential Revision: https://developer.blender.org/D1734
When duplicating a layer collection directly linked to the view layer we copy
the collection and link it.
For all the not directly linked layer collections, we try to sync the layer
collection flags, overrides, ...
Also we make sure the new collection is right after the original collection.
We also expose this in RNA, via collection.duplicate().
Was happening when viewport visibility on the particle system is disabled.
This became an issue after c45afcf, but the actual issue goes a bit deeper
and the following aspects were involved:
- Relations builder for particle systems was ignoring a particle system if
its visibility is not enabled for the viewport. This is something that
shouldn't have been done -- depsgraph relations are supposed to be the
same no matter whether it's viewport or render.
- Relation builder was only dealing with duplication set to object, but
was ignoring group duplication.
This is technically a regression in 2.79a-RC as well, so would need to
backport this fix to the branch after extra testing is done here in the
studio.
This is rather a workaround to avoid the main thread freeing all glyph caches
at the same time as the sequencer uses fonts to draw text sequences.
Ideally we need to either make the cache more local, or user-counted, or to
make somewhat more global locks. All this ends up in a bigger refactor which is
better for 2.8. For the meantime let's make Blender more stable with a tiny
workaround.
The downside is that repeatedly zooming the interface up and down during render
will increase memory usage due to unused glyph caches. It's not too bad though:
all unused caches will be freed at the first area zoom after render.
Thanks Bastien for review!
Behavior is expected to be similar to 'make proxy' on linked groups: it
basically allows you to select which object in the group will be the
'root' override (usually the armature), checks which other objects
need to be overridden as well, overrides the group itself too, and
instantiates the group and the root overridden object.
It seems to be working, though handling of armature deformation is kind
of totally broken in blender2.8 currently (modifiers...). ;)
At some point, we could probably think about removing IRIS file format
support; I don't think there are many of those around anymore. But for
now, let's add a translation context to wipe effect. :)
Reported in T43295 by @blend-it, thanks.
I was calling the ntree syncing function too late, so the index of the layer
was -1 since it was no longer in the ListBase, making all RenderLayer nodes
decrease their respective `custom1` (even going negative sometimes).
Since 30a966a726 when I removed the recursion, the code was still relying
on stack data. This would crash in release often, and it should crash always.
Big thanks to Sergey Sharybin for spotting the issue.
The issue was introduced by eb016eb as a fix for T41258, which added depsgraph
tagging with a zero flag. The comment was saying that it's to make derived caches
be updated; however, I'm not sure how that could possibly work: tagging an ID for
update with a 0 flag only sets updated tags in bmain in the old dependency graph.
In the new depsgraph, where object data is a part of the depsgraph, doing such a
tag forces the object to be updated, which re-triggers viewport rendering, which
causes an infinite viewport render reset.
Can not reproduce any crashes here, so maybe it's fine to move on with this
change.
This was leading to so much recursion that it was failing here.
How to test it: Open wanderer.blend and try to render (F12).
Note: This won't fix F12 rendering for wanderer with Eevee. Something else is
going wrong there.
For simplicity we choose to execute the rendering of OpenGL engines in the main thread and block the interface.
This might be addressed in the future, at least for video rendering.
A drawmanager wrapper (DRW_render_to_image) is called by the render pipeline to set up the OpenGL state and then call the specific draw_engine->render_to_image function.
General idea of the fix: skip the whole draw manager callback madness which
was used to tag object's engine specific data as dirty. Use generic recalc
flag in the ObjectEngineData structure instead. This gives us the following
benefits:
- Solves the mentioned bug report.
- Avoids whole interface lookup for opened viewports for EVERY changed ID.
- Fixes missing updates when viewport is temporarily invisible.
Reviewers: dfelinto, fclem
Differential Revision: https://developer.blender.org/D3028
The main idea is to make specific engine types a subclass of the generic
ObjectEngineData structure.
This required the following changes:
- Have an extra size argument to the engine data allocation function.
Not sure whether there is a less error-prone way of doing this.
- Add an init() callback to the engine data allocation function.
Additionally, added some extra checks to Eevee's engine data getters, so we do
not silently cast lamp data to lightprobe data.
Reviewers: dfelinto, fclem
Differential Revision: https://developer.blender.org/D3027
This was disabled to avoid updating the geometry every time the
material includes displacement, because there was no way to distinguish
between surface shader and displacement updates.
As a solution, we now compute an MD5 hash of the nodes linked to the
displacement socket, and only update the mesh if that changes.
Differential Revision: https://developer.blender.org/D3018
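To illustrate the approach, a simplified sketch with a toy hash instead of MD5
and a hypothetical node type; it assumes an acyclic node graph:
```
#include <string.h>

/* Only the subtree feeding the displacement socket contributes to the hash,
 * so editing unrelated surface nodes leaves it unchanged. */
typedef struct Node {
	int type;
	float params[4];
	struct Node *inputs[4];
} Node;

static unsigned int displacement_hash(const Node *node, unsigned int h)
{
	if (node == NULL) {
		return h;
	}
	h = h * 33 + (unsigned int)node->type; /* djb2-style mixing */
	for (int i = 0; i < 4; i++) {
		unsigned int bits;
		memcpy(&bits, &node->params[i], sizeof(bits));
		h = h * 33 + bits;
		h = displacement_hash(node->inputs[i], h);
	}
	return h;
}
/* The mesh stores the last hash; geometry is re-displaced only on change. */
```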
This code was disabled a while back and got re-enabled by some previous debug
process. Having relation names in the dot file helps understanding what's going
on in some cases, but makes things spread too far apart in others.
This does not affect current blender2.8, but is mandatory for the asset
engine branch.
Bottom line: we also need to 'survey' changes in the actual
SpaceFileBrowser struct, not only in its FileSelectParams sub-struct.
The generic ED_area_do_msg_notify_tag_refresh callback only tags the area for
refresh, not redraw. This was not updating the view, e.g. when changing
ordering options in the top region, until you'd mouse over the main file listing
region...
So now, always tag the area for redraw in the file browser's refresh callback.
When pressing a button to zoom, for example,
using zoom-to-mouse-position doesn't make any sense.
There is also zoom speed scaling which increases the closer the cursor
is to the top edge of the screen, which was noticeable since the
navigation widget is currently at the top of the screen.
The new depsgraph was only considering the active action
when attaching relations from the AnimData component/operation
to the properties that are affected by the animation data.
As a result, only properties animated by the active action
were working, while those animated by NLA strips did not change
when playing back/scrubbing the timeline.
This commit fixes this by introducing a recursive method to properly
visit all NLA strips, calling DepsRelBuilder::build_animdata_curves_targets()
on each of those strips.
- Rename eViewOpsOrbit to eViewOpsFlag
since VIEWOPS_ORBIT_DEPTH isn't just used for orbiting.
- Move use_ensure_persp & use_mouse_init into the flag.
- Remove viewops_data_create_ex.
Technically this was not a bug, as this functionality was not meant to
work. (Drivers were already handled though, as they are part of the rig)
It was assumed that there was little value in having this functionality
available, as in most pipelines, animation production only begins after
the rig has been locked down (see bug report comments for more details).
On reflection, in most common situations, there's probably no harm in
doing these rna path fixups. This commit takes advantage of some similar
code I recently put in place in the Grease Pencil branch (for joining GP
objects and their layers).
Important Note for Animators/Riggers/TD's:
Please be aware that after joining armatures, some of the animation may
still need to be redone (due to changes in the transform hierarchies/
transform spaces that the animation is applied in). We do not attempt
to correct for these problems, and it is unlikely that we will in future.
The "Apply Pose as Rest Pose" operator now affects Bendy Bone settings
too, making it possible to use interactive posing tools (e.g. Pose Sculpting
brushes) to get the desired shape for the rest-pose shape of Bendy Bones.
When such posing tools are available, this change makes it easier to get
the desired Bendy Bone shapes, as you are no longer restricted to using
buttons to get the desired effects.
USER_ZBUF_ORBIT -> USER_DEPTH_NAVIGATE
The name didn't make sense since it's used for all view navigation.
Also rename USER_ZBUF_CURSOR -> USER_DEPTH_CURSOR since zbuf
is an internal detail.
- Group initial/previous/current members
These were using the terms old/prev/last/orig in a confusing way.
- Replace x,y variables with vectors.
- Remove unused members.
When accessing view-port operators from widgets
we need the ability not to use auto-depth or zoom-to-mouse.
Trackball rotation still needs to be supported.
The old algorithm depended on vertex order.
The new one uses a global least squares solution on chains
and cycles of edges where loop slide induces a dependency.
See https://wiki.blender.org/index.php/Dev:Source/Modeling/Bevel
in the "Consistent Widths for Even Bevels" section for the derivation of
the new algorithm.
There was a check for volume bounces at every surface intersection. That could lead to a volume scattered path being terminated
when passing through a transparent surface. This check was superfluous, as the volume shader evaluation already checks the
number of volume bounces and once it passes the max, volume shaders will not return scatter events any more.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: brecht, #cycles
Tags: #cycles
Maniphest Tasks: T53914
Differential Revision: https://developer.blender.org/D3024
In my tests the previous loop was running in 200 ms. With this change it now runs in 17 ms.
The difference in the end is still not great, because the `draw_uvs_lineloop_bmface` function is called for each face and has an ImmBegin and ImmEnd in the function itself.
Previously we stored each color channel in a single closure, which was
convenient for sampling a closure and channel together. But this doesn't
work so well for algorithms where we want to render multiple color
channels together.
FFmpeg uses an int for the numerator, while Blender uses a short. So in
cases where people gave weird exotic framerate values and we could not reduce
the numerator enough, we'd get totally weird values (even negative frame
rates sometimes!)
Now we add checks for short overflow and approximate as best as possible
in that case (the error should not matter unless you have shots of at least
several hundreds of hours ;) ).
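A self-contained sketch of the overflow handling, with hypothetical function
names (the real code works on Blender's RenderData):
```
#include <limits.h>
#include <stdio.h>

/* FFmpeg rationals use int, Blender's frs_sec is a short, so after reducing
 * the fraction we may still need to approximate. */
static void fps_to_short(int num, int den, short *r_frs_sec, float *r_base)
{
	if (num <= SHRT_MAX) {
		*r_frs_sec = (short)num;
		*r_base = (float)den;
	}
	else {
		/* Scale both parts down so the numerator fits; the relative error
		 * is negligible for any realistic clip length. */
		double scale = (double)num / (double)SHRT_MAX;
		*r_frs_sec = SHRT_MAX;
		*r_base = (float)((double)den / scale);
	}
}

int main(void)
{
	short frs_sec;
	float base;
	fps_to_short(2997000, 100000, &frs_sec, &base);
	printf("%d / %.3f = %.3f fps\n", frs_sec, base, frs_sec / base);
	return 0;
}
```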
In the future we may have siblings to collections (like overrides) that are not
collections. This change makes sure tooltips will keep working.
Note: This was originally wrongly committed together with a Collada fix,
re-committing separately now. See bd7060a87f.
I really would prefer if we were to use the dropbox API for this.
That said, we now have some tooltips that work.
I'm using the new draw callback API for outliner tooltips.
Reviewers: mont29
Subscribers: venomgfx, mano-wii, Severin
Differential Revision: https://developer.blender.org/D3020
This reverts commit dc2617130b, which disabled
writing of previews for undo. While this uses some memory, re-rendering all
previews is very expensive, especially if for example you have lots of materials
using high-res image textures.
As done by c42fc19a8a - this was needed originally because notifiers were
not working so I had to force tagging.
And for the record, I should have used DEG_TAG_BASE_FLAGS_UPDATE instead of 0.
This is not the issue actually mentioned there. However it is the most serious
one.
Now if the object being dragged was not in a collection linked in the view layer,
or was invisible, we add it to the active collection (or create one if necessary).
This is related to T50967, which is now fully fixed.
We can't have more than one NOTE_SUBTYPE in the same notifier.
This is a partial revert of: cd4d5dcb46. In particular to the part concerning
"Also fixed a missing notifier of the object instancing operator".
Not only was this mixed with the original reason for the commit for no reason,
but it actually introduced a bug. Bad, bad developers ;)
Note: Although this commit is not needed for master, blender2.8 requires it for
the aforementioned bug report.
Only do special handling of ob->data pointer in case we are remapping to
a valid (non-NULL) other obdata. Otherwise, handle it as any other
'remapping to NULL' case.
Hopefully not breaking anything else...
Move the timer and tip out of button code;
now the button only requests a tooltip,
passing a creation callback to run.
Needed for manipulators in 2.8;
also helps de-duplicate logic, since we never want
multiple tooltips showing at once.
The issue was caused by Cycles allocating an ID property in a temporary object
which gets overwritten and thrown away every so often.
Now the dependency graph will try to reliably check whether ID properties from
a temp object are to be freed.
The issue here is that we can not duplicate the whole datablock since we
use view layer pointers in depsgraph callbacks.
Maybe this whole chunk of code belongs to somewhere else, or maybe we
can find a smart solution to avoid need of CoW pointers passed to the
evaluation functions.
This fixes lack of viewport update when toggling collection enabled flag.
This is similar to idproperty_reset() defined in layer.c, but it does not
re-alloc the property itself.
We should replace idproperty_reset() with IDP_Reset() now.
Make sure scene and view_layer are set for the depsgraph before running editors'
update. This is required since tagging might happen before we create the depsgraph.
The issue was caused by an incompatibility of the new API, which expects the ID
block to be specified explicitly, while old code tags object data using the
object's ID with the OB_RECALC_DATA flag.
We need to switch all areas to pass the proper ID and everything, but until
then we'd better stop crashing.
Compared to the usual cddm one, the ccgdm one was not applying the
ob->derivedDeform deformation to the pbvh generated from the
original mesh geometry, when possible.
We can only support painting from a subsurf DM in a limited subset of
cases; others (like multiple subsurfs, or topology-modifying ones)
break mapping to the original geometry.
This is not the most ideal fix (ideally, we should always be able to get
a mapping to the original geometry from any point in the modifier stack...).
We do not always have that one available, and even without the
isDisabled callback this func is helpful.
Note that this is a bit stupid; the only modifier actually needing a valid
Scene pointer here is subsurf... :|
It is possible to have animation (or a driver) modify a nested datablock, such
as a shape key value for example (where the animation is on the Mesh level, but
the shape key is its own datablock). To deal with such cases we need to create a
relation from the nested datablock's CoW to the animation/driver operation.
`gpu_texture_try_alloc` invalidates zero-sized textures.
The message in the console is not correct in this case (because it is not due to lack of memory).
Note this comes from the greasepencil-object branch, and is merged to help
prevent future merge conflicts.
Also, I renamed the icons for consistency's sake. So when this is merged into
2.8, other areas of the code will need to change.
Icons by Matias Mendiola
This reverts commits:
* f0ef360386 Grease-Pencil: Icons from the grease pencil branch
* 13bf4b3804 Grease-Pencil: Fixup for icons
* fb8c382fa1 Grease Pencil dat files fix
Previously only scalar displacement along the normal was supported,
now displacement can go in any direction. For backwards compatibility,
a Displacement node will be automatically inserted in existing files.
This will make it possible to support vector displacement maps in the
future. It's already possible to use them to some extent, but requires
a manual shader node setup. For tangent space maps the right tangent
may also not be available yet; that depends on the map.
Differential Revision: https://developer.blender.org/D3015
This converts object space height to world space displacement, to be
linked to the new vector displacement material output.
Differential Revision: https://developer.blender.org/D3015
The ones I previously committed were done with Inkscape 0.92.2.
But apparently this renders some parts of the icons transparent.
For example, the tip of the new grease pencil pencil icon.
This way we can introduce other types of BVH, for example wider ones, without
causing too much mess around boolean flags.
Thoughts:
- Ideally device info should probably return a bitflag of which BVH types it
supports.
It is possible to implement this based on simple logic in device/ and mesh.cpp;
the rest of the changes will stay the same.
- Not happy with the workarounds in util_debug and the duplicated enum in the
kernel.
Maybe the enum should be stored in the kernel, but then it's kind of weird to
include kernel types from utils. Sounds like some cyclic dependency.
Reviewers: brecht, maxim_d33
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D3011
This was originally a good idea. However we will need to pay special attention
to this when doing the dynamic overrides anyway. The placeholders won't be
enough to spare us that job.
That said, I left the ones in layer.c because we are actually calling these
BKE_override_*_add() functions from doversion, yet they don't do anything.
This was fixed in master with commit 9d873fc3de. However, this fix never made it to 2.8.
(The following merge (a96008f3aa) did not import the fixes.)
Note: This fix is meant to fix the alignment problem.
I don't know if other parts of the code not merged are interesting or not.
But if they are, they should be tackled separately.
Reviewers: dfelinto
Subscribers: venomgfx, dfelinto, raa, Severin
Differential Revision: https://developer.blender.org/D3014
This optimisation only works if no material in the scene requires the AO pass.
For this, either set the AO distance to 0 or both the Cavity and Edges factors to 0.
This doubles the performance of scenes with very high triangle counts.
This is an optimization / cleanup commit.
The use of a global UBO removes lots of uniform lookups and only transfers data when needed.
Lots of renaming for a more consistent code style.
This would lead to sock.default_value pointing to the wrong data type,
possibly causing crashes. Unfortunately, this bug will still exist for
older Blender versions that try to load newer files, which makes
changing the type of a node socket problematic.
This was introduced to the outliner when we had no User Preference
window back in 2.5x. Right now it makes no sense to keep this around.
But how about addon user preferences:
* They belong in the user preference window under the addon.
How about the user preferences themselves:
* You find them in the user preference window.
And templates?
* Why are they here in the first place?
After talking to Pablo Vazquez (who in turn poked Sergey Sharybin) we found
it reasonable to get rid of this. If it turns out that we were wrong we
revert this.
As for leaving this exposed as a debug option (as suggested on IRC) I would say
no, please. This ends up polluting the code and never gets cleaned up in the end
(this was specifically about templates).
Technical note: I left the functions in outliner still hanging around.
While I used UNUSED_FUNCTION for one of them, for the other one I had to use:
`#if 0` because the function was calling itself, which would fail to build if
I used UNUSED_FUNCTION.
Debug flags are meant to control render behavior; they have nothing to do with
low level system utilities.
It was simple to hack, but logically it is wrong. Let's do things where they are
supposed to be done!
We have different ways of exploring the scene objects, namely View Layer and
Collections. This change let us focus on compositing elements only such as:
* View Layers
** Collections
** Render Passes
* Freestyle
* Grease Pencil?
Not included in this commit is an option to handle filtering of
collections, passes, ... Not sure if we would like that, though.
Since they are all properly nested under a "Collections" / "Passes"
parent.
User notes:
The outliner so far was a great system to handle the object oriented workflow
we had in Blender prior to 2.8. However with the introduction of collections
the bloated amount of data we were exposed to at a given time was eventually
getting in the way of fully utilizing the outliner to manage collections and
their objects.
We hope that with this filtering system the user can put together the outliner
with whichever options he or she sees fit for a given task.
Features:
* Collection filter: In case users are only focused on objects.
* Object filter: Allow users to focus on collections only.
* (Object) content filter: Modifiers, mesh, constraints, materials, ...
* (Object) children filter: Hide object children [1].
* Object State (visible, active, selected).
* Compact header: hide search options under a search toggle.
* Preserve scrolling position before/after filtering [2].
[1] - Note we still need to be able to tell if a child of an object is in a
collection, or if the parent object is the only one in the collection.
This in fact was one of the first motivations for this patch. But it is to
be addressed separately now that we can at least hide children away.
[2] - We look at the top-most collection in the outliner, and try to find it again
after the filtering and make sure it is in the same position as before.
This works nicely now. But to work REALLY, REALLY nicely we need to also store
the previous filter options, to be sure the element we try to keep on top
was valid for both the old and new filters. I would rather do this later though,
since this smells a lot like feature creep ;)
Remove no longer needed display options:
* Current Scene (replaced by View Layer/Collections)
* Visible (replaced by filter)
* Selected (same)
* Active (same)
* Same Type (same-ish)
How about All Scenes? I have a patch that will come next to replace the current
behaviour and focus only on compositing. So basically stop showing the objects
and show only view layers, their passes and collections, besides freestyle.
Also, while at this I'm also reorganizing the menu to keep View Layer and
Collections on top.
Developer notes:
* Unlike the per-object filtering, for collections we need to filter at tree
creation time, to prevent duplication of objects in the outliner.
Acknowledgements:
Thanks Pablo Vazquez for helping testing, thinking some design questions
together and pushing this to its final polished state as you see here.
Thanks Sergey Sharybin and Julian Eisel for code review. Julian couldn't do a
final review pass after I addressed his concerns. So blame is on me for any
issue I may be introducing here. Sergey was the author of the "preserve
scrolling position" idea. I'm happy with how it is working, thank you.
Reviewers: sergey, Severin, venomgfx
Subscribers: lichtwerk, duarteframos
Differential Revision: https://developer.blender.org/D2992
Both object level and camera datablock properties animation did not work with
copy on write enabled.
The root of the issue comes down to the fact that all interface elements are
referencing the original datablock. For example, View3D has a pointer to the
camera it's using, and all areas which access v3d->camera should in fact query
for the evaluated version of that camera, within the current context.
The annoying part of this change is that we now need to pass the depsgraph in
lots of places. Which is rather annoying.
Alternative would be to cache evaluated camera in viewport itself, but then
it makes it annoying to keep things in sync.
Not sure if there is nicer solution here.
Reviewers: dfelinto, campbellbarton, mont29
Subscribers: dragoneex
Differential Revision: https://developer.blender.org/D3007
The displacement shader was running before particle data was copied to the
device, causing bad memory access when the particle info node was used. The fix
is simply to move the particle update before the mesh update so the data is
available to displacement shaders.
(Although this fixes the crash, the particle info node is still mostly useless
with displacement for now...)
Drawing hair weights read memory before the start of the hair array.
This code could be improved since it currently copy-pastes
from do_particle_interpolation, but that would need larger changes.
For now just correct the existing logic.
The sun is treated as a unit distant disk, like in Cycles.
Opti: Since computing the diffuse contribution via LTC is the same as not using the Linear Transformation, we can bypass most of the LTC code.
This replaces the sphere analytical diffuse computation, as it gives a more pleasing result very close to Cycles' AND is cheaper.
Light power has been retweaked to be coherent with Cycles (except the sun lamp with a large radius, where Cycles has a non-uniform light distribution).
This is an improvement on the old spinning quad method that was giving artifacts when the reflection ray was nearly aligned with the sphere center.
This might be a bit heavier but it's worth it.
This operator not only links a collection, it creates a new one and then
links it. Although the preferable way for users to handle their collections
is when viewing the "Collections", let's explore this workflow for now.
Suggested by Pablo Vazquez, thank you.
Headers should avoid operators as much as possible. The exception here is
for datablocks mode when you want to see the active keyset.
Edit menus on the other hand should be clearly distinct from the RMB context
menus. Edit menu options should be only the ones that apply to the entire
outliner, regardless of the selected element.
Context (rmb) menus should be related to the element you RMB on to invoke the
menu. I'm also taking this opportunity to start bringing the context menus
to Python. There is little reason not to, and it helps editing them (In this
case I'm doing it only for the Scene Collection one).
Solves these security issues from T52924:
CVE-2017-12102
CVE-2017-12103
CVE-2017-12104
While the specific overflow issue may be fixed, loading the repro .blend
files may still crash because they are incomplete and corrupt. The way
they crash may be impossible to exploit, but this is difficult to prove.
Differential Revision: https://developer.blender.org/D3002
Solves these security issues from T52924:
CVE-2017-12081
CVE-2017-12082
CVE-2017-12086
CVE-2017-12099
CVE-2017-12100
CVE-2017-12101
CVE-2017-12105
While the specific overflow issue may be fixed, loading the repro .blend
files may still crash because they are incomplete and corrupt. The way
they crash may be impossible to exploit, but this is difficult to prove.
Differential Revision: https://developer.blender.org/D3002
One thing I'm not fully happy with is all these is_same_* functions. Need to
get rid of this, probably by adding explicit entry/init/whatever nodes and
maybe making the node criteria aware of whether a key will be used as a "from"
or as a "to" node.
This leads to a ~3ms improvement of CPU time during drawing.
This prevents the rendering from being stalled waiting for the texture data to be transferred.
This is because certain parts of the engine may require a blank framebuffer to bind textures to.
This is the case when using only array textures, unsupported by DRW_framebuffer_init().
By adding the ANIMFILTER_NODUPLIS flag to the filter it'll only be
processing each F-Curve once, which means we can remove while iterating.
This also solves a potential issue when a datablock has a driver and is
shared among multiple objects.
The issue was happening because the dependency graph did not report particle
settings as modified. This is a regression caused by the tagging and flushing
mechanism refactor.
The real fix would be to make particle settings use ID-level recalc flags
rather than their own flags, which would also simplify relations around the
particle system and particle settings evaluation.
Reported by Mai in IRC.
Now hashed alpha materials are stable when moving the camera/not using TAA.
It also converges to a noise-free image when using TAA. No more numerical imprecision.
There can still be situations with multiple overlapping transparent surfaces that lead to residual noise.
Using a GL_RG16I texture for the hit coordinates increases the precision of the hit tremendously.
The sign of the integer is used to store 2 flags (has_hit and is_planar).
We do not store the depth but retrieve it from the depth buffer (increasing bandwidth by +8bit/px).
The PDF is stored into another GL_R16F texture.
We remove the raycount for simplicity and to reduce compilation time (less branching in the refraction shader).
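For illustration, here is a standalone sketch of how two flags can ride on the sign bits of a GL_RG16I texel; the +1 offset scheme and all names are assumptions for the example, not necessarily EEVEE's exact packing:
```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical packing: store the hit pixel coordinate offset by 1 so a
 * flagged coordinate of 0 still has a sign to flip, then reuse the sign
 * of x for has_hit and the sign of y for is_planar. */
typedef struct HitTexel {
  int16_t x, y;
} HitTexel;

static HitTexel hit_encode(int x, int y, bool has_hit, bool is_planar)
{
  HitTexel t = {(int16_t)(x + 1), (int16_t)(y + 1)};
  if (!has_hit) {
    t.x = -t.x; /* sign of x flags a miss */
  }
  if (is_planar) {
    t.y = -t.y; /* sign of y flags a planar reflection ray */
  }
  return t;
}

static void hit_decode(HitTexel t, int *x, int *y, bool *has_hit, bool *is_planar)
{
  *has_hit = (t.x > 0);
  *is_planar = (t.y < 0);
  *x = (t.x < 0 ? -t.x : t.x) - 1;
  *y = (t.y < 0 ? -t.y : t.y) - 1;
}

int main(void)
{
  int x, y;
  bool hit, planar;
  hit_decode(hit_encode(640, 360, true, false), &x, &y, &hit, &planar);
  printf("hit=(%d,%d) has_hit=%d is_planar=%d\n", x, y, hit, planar);
  return 0;
}
```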
Instead of creating non-temp textures only at framebuffer creation, we create and bind them if their pointer is NULL.
This should simplify the framebuffer creation code.
Now when you make an override of a linked armature, the code will
automatically also override objects using that armature (deformed by it, or
children of it), trying to replicate make_proxy results.
Also some initial code to replicate 'make_proxy' in the case of instantiated
linked groups, but that is not working yet (and will also require some
work in the RNA part of the group's objects collection anyway).
This simplifies the remapping task, since you don't have to ensure your
overrides are created in the correct dependency order.
Uses the famous LIB_TAG_DOIT to mark IDs to be overridden.
An index stored in Alembic wasn't used. Often this index is a no-op
(i.e. index[n] = n), in which case the result was fine. However, when it
isn't, it caused issues.
Before, we were re-using the newid pointer inside the ID structure to store a
pointer to an original datablock.
It seems there is no way we can avoid the requirement of having a pointer to
an original datablock, so let's stop abusing a system which was only designed
to be a runtime-only thingie. It will be safer this way, without needing to
worry about using any API which modifies newid.
There was a fake cyclic dependency happening when a node of a node tree was
driving another node of the same tree.
This is related to T53794, but more fixes are needed here.
Collection A [disabled]
-> Collection B
-> Collection C
-> object
Object should be invisible, but it is not. Reported by Antonio Vazquez.
Bug introduced on: 1f5106de61
How to reproduce it:
* Change Outliner from Active View Layer to Collections
* Create a new collection under Master Collection (Collection 2)
* Move all three objects from Collection 1 to Collection 2
* Move all three objects from Collection 2 to Collection 1
Bug introduced on fb4cd136a7 (multi-object drag-and-drop).
How to reproduce the bug:
* Create a new collection
* Move the Cube to the new collection
* Move the Camera to the new collection (Cube disappears)
* Move the Lamp to the new collection (Camera disappears)
Explanation of the bug:
The moved object was still selected, so we were trying to add the object to the
collection where the object was already inserted (which would fail silently) and
then remove it.
The reason for the crash is still a bit confusing, but on Windows with Intel HD Graphics 4000 it always happens when you enable `Use Nodes` or when you try to connect the Principled Shader node to the output without the `Subsurface Scattering` and `Subsurface Translucency` options enabled.
This patch fixes a 32-bit overflow that occurs on 64-bit systems due to a numeric literal being treated as 32-bit.
This patch allows for the generation of images that occupy more than 4GB of RAM, which previously caused a crash.
Reviewers: sergey
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D2975
Brushes themselves are still affected by the mask, but the viewport is not
showing the mask. This way it's easier to see details while sculpting.
Studio request by Julien Kaspar
Adds the code to get the screen size of a point in world space, which is
used for subdividing geometry to the correct level. The approximate
method of treating the point as if it were directly in front of the
camera is used, as panoramic projections can become very distorted
near the edges of an image. This should be fine for most uses.
There is also no support yet for offscreen dicing scale, though
panorama cameras are often used for rendering 360° renders anyway.
Fixes T49254.
Differential Revision: https://developer.blender.org/D2468
There is even a chance the compiler handles this itself, but we should try to
use the internal storage as much as possible (and save 0.000001s in the process)
For experimental options, outside the scope of typical preferences.
While templates are developed we might want to make changes
to behavior which aren't fully compatible with typical work-flows.
Instead of mixing these options in with the current preferences,
expose them separately (we could even force-disable them when templates
aren't in use).
This can be enabled in the Film panel, with an option to control the
transmission roughness below which glass becomes transparent.
Differential Revision: https://developer.blender.org/D2904
The offscreen dicing scale helps to significantly reduce memory usage,
by reducing the dicing rate for objects the further they are outside of
the camera view.
The dicing camera can be specified now, to keep the geometry fixed and
avoid crawling artifacts in animation. It is also useful for debugging,
to see the tessellation from a different camera location.
Differential Revision: https://developer.blender.org/D2891
The code for vertical lines was assuming that we necessarily needed vertical
lines for all the elements, which is not true since we are not drawing
vertical and horizontal lines for collections.
Patch made in collaboration with Philippe Schmid (@Quetzal).
Use the libraries if they exist in ../lib/linux_x86_64 or similar, so
that you can run "make deps && make full" to get a full static build.
Note that install_deps.sh is still the only officially supported way to
build Blender dependencies on Linux, but this may be useful to some.
Differential Revision: https://developer.blender.org/D2980
This was due to the fact that the instances don't have a "static" obmat that can be referenced for use as a uniform.
Solution: precompute the full matrix for each bone and pass it as instance data (these are copied into a buffer and can be discarded right away).
Note: this could be optimized further to make only one drawcall (shgroup) to draw all bone instances of one type (vs. one call per armature).
Remove the critical OMP sections used to protect memory allocation.
The first one can be done in a separate loop before the main, parallelized one.
The second one only affects 'private' data, so we only need to ensure
guardedalloc thread safety is enabled.
This is committed as a separate step to ease troubleshooting in case
bisecting becomes necessary.
Tests on my system with ~1200 objects and 128 shadow-casting lamps (the current max) show a significant perf improvement (cache timing: 22ms -> 9ms).
With a baseline of 6ms with no shadow-casting light, this gives a reduction of the overhead from 16ms to 3ms.
This removes pretty much all allocations during the cache phase, leading to a big improvement for scenes with a large number of lights & shadow casters.
The lamp storage has been replaced by a union to remove the need to free/allocate every frame (also reducing memory fragmentation).
We replaced the linked list system used to track shadow casters with a huge bitflag.
We gather the light shadow bounds as well as the shadow caster AABBs during the cache populate phase and put them in big, cache-friendly arrays.
Then in the cache finish phase, it's easier to iterate over the lamp shadow SphereBounds and test for intersection.
We use a double buffer system for the shadow caster arrays to detect deleted shadow casters.
Unfortunately, it seems that deleting an object triggers an update for all other objects (thus tagging most shadow-casting lamps for update), defeating the purpose of this tracking.
This needs further investigation.
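A standalone sketch of the double-buffered bitflag idea (sizes and names are hypothetical, not EEVEE's actual structures): XOR-ing the current and previous frame's bits exposes exactly the casters that appeared or disappeared:
```
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_CASTERS 1024
#define BITS_PER_WORD 32
#define NUM_WORDS (MAX_CASTERS / BITS_PER_WORD)

typedef struct CasterTrackBuf {
  uint32_t bits[2][NUM_WORDS]; /* [frame parity][bit words] */
  int current;                 /* which row is "this frame" */
} CasterTrackBuf;

static void casters_begin_frame(CasterTrackBuf *buf)
{
  buf->current ^= 1; /* last frame's bits become the "previous" row */
  memset(buf->bits[buf->current], 0, sizeof(buf->bits[0]));
}

static void caster_mark_visible(CasterTrackBuf *buf, int id)
{
  buf->bits[buf->current][id / BITS_PER_WORD] |= 1u << (id % BITS_PER_WORD);
}

static void casters_report_changes(const CasterTrackBuf *buf)
{
  const uint32_t *cur = buf->bits[buf->current];
  const uint32_t *prev = buf->bits[buf->current ^ 1];
  for (int w = 0; w < NUM_WORDS; w++) {
    uint32_t changed = cur[w] ^ prev[w]; /* appeared or disappeared */
    for (int b = 0; changed; b++, changed >>= 1) {
      if (changed & 1u) {
        printf("caster %d %s\n", w * BITS_PER_WORD + b,
               (cur[w] >> b) & 1u ? "added" : "deleted");
      }
    }
  }
}

int main(void)
{
  CasterTrackBuf buf;
  memset(&buf, 0, sizeof(buf));
  casters_begin_frame(&buf);
  caster_mark_visible(&buf, 5);
  caster_mark_visible(&buf, 40);
  casters_begin_frame(&buf);
  caster_mark_visible(&buf, 5); /* caster 40 is gone this frame */
  casters_report_changes(&buf); /* prints: caster 40 deleted */
  return 0;
}
```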
Gives about a 40% speedup for objects which have simple-ish deformation
applied on top of a subdivided mesh.
This can easily happen with a single character animation.
Helps in cases of not-very-complex scenes with lots of system threads available.
The change is a bit hard to measure on its own; it works best with the upcoming
changes, where it gives measurable improvements.
The mutex is now local to a particular CCGDM, guarding the edge hash, which is
only used by a single function. There is no need to acquire a read lock after
the edge hash was created.
Unfortunately, we cannot perform set/unset checks on 'resolved'
properties (i.e. from actual IDProperties pointers, and not virtual RNA
placeholders)... IDProps in RNA are a rather challenging topic. :|
This should fully fix T53715: 2.8: Removing keymap items no longer works
With factory settings, steps to reproduce were:
* Select "Collection 1" (in "RenderLayer")
* Delete
It might crash at this point, although maybe this crash is ASAN only.
However, this was also doing some weird things that I've corrected now. It
called outliner_build_tree in an operator callback. This should only be
called in the main redraw function or so, not in regular handlers.
Instead, we manually clean up the tree to keep it valid.
It is quite common to have 64GB of memory now, and even 128. There is no reason
to add any artificial caps on the cache and undo memory here. We cannot protect
against using too much memory in some cases while still allowing use of the
computer's full potential in others.
Now 32-bit builds will use 2GB max (as it used to be), but 64-bit builds will
use whatever number of megabytes fits into an integer.
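A minimal sketch of the cap logic described above (illustrative, not the exact Blender code):
```
#include <limits.h>
#include <stdio.h>

/* On 32-bit builds keep the old 2GB ceiling; on 64-bit builds allow any
 * megabyte count that still fits in a signed int (the UI property type). */
static int max_memory_cache_mb(void)
{
  if (sizeof(void *) == 8) {
    return INT_MAX; /* 64-bit: limited only by the int property */
  }
  return 2 * 1024; /* 32-bit: 2GB as before */
}

int main(void)
{
  printf("max cache: %d MB\n", max_memory_cache_mb());
  return 0;
}
```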
Reviewers: campbellbarton, mont29
Subscribers: sebastian_k
Differential Revision: https://developer.blender.org/D2972
This reverts change to BKE_brush_add,
callers now remove the extra user.
Note this isn't very convenient for callers but
is consistent with other ID types.
In the future we will probably remove this and have new
ID's created with zero users.
When using a tablet, detecting absolute motion only worked
when activating a tool with the tablet.
Pressing Enter to run a tool, for example, would use relative motion.
Now store is_motion_absolute in the event,
set for new events based on the most recent motion events.
The idea is to support the following: allow doing a parallel-for on a small
range, each iteration of which takes lots of compute power, but limit such a
range to a subset of threads.
For example, on a machine with 44 threads we can occupy 4 threads to handle a
range of 64 elements, 16 elements per thread, where each block of 16 elements
is very complex to compute.
The idea should be to use this setting instead of the global use_threading
flag, which is only based on the size of the array. Proper use of the new flag
will improve threadability.
This commit only contains internal task scheduler changes; this setting is not
yet used by any areas.
Now all the fine-tuning happens via a parallel range settings structure, which
avoids passing long lists of arguments, allows extending the fine-tuning
further, and avoids having lots of various functions which basically do the
same thing.
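A sketch of the settings-struct pattern; the names mimic the BLI_task style but are illustrative, and the stub below runs serially where the real scheduler would dispatch to worker threads:
```
#include <stdbool.h>
#include <stdio.h>

/* One settings struct instead of a growing list of bool/int arguments. */
typedef struct ParallelRangeSettings {
  bool use_threading;
  int min_iter_per_thread; /* hypothetical fine-tuning knob */
} ParallelRangeSettings;

typedef void (*RangeFunc)(void *userdata, int iter);

static void parallel_range_settings_defaults(ParallelRangeSettings *settings)
{
  settings->use_threading = true;
  settings->min_iter_per_thread = 0;
}

/* Serial stand-in: a real implementation would push tasks to a scheduler
 * when use_threading is set and the range is big enough. */
static void task_parallel_range(int start, int stop, void *userdata,
                                RangeFunc func,
                                const ParallelRangeSettings *settings)
{
  (void)settings;
  for (int i = start; i < stop; i++) {
    func(userdata, i);
  }
}

static void square_cb(void *userdata, int i)
{
  ((int *)userdata)[i] = i * i;
}

int main(void)
{
  int results[8];
  ParallelRangeSettings settings;
  parallel_range_settings_defaults(&settings);
  settings.use_threading = false; /* self-documenting, unlike a bare `false` */
  task_parallel_range(0, 8, results, square_cb, &settings);
  printf("results[7] = %d\n", results[7]);
  return 0;
}
```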
Still not fully working, more work TODO (IDProps are rather tedious to
handle in RNA... :/ ).
Partial fix of T53715: 2.8: Removing keymap items no longer works.
Some shortcuts can now be edited/deleted again, but some remain
mysteriously frozen!
This modifies the selection code quite a bit, but it's for the better.
When using selection we use the same batching / instancing process, but we draw one element at a time, using an offset to the first element we want to draw and drawing only one element.
This results in much less memory allocation and better draw time.
This is a special memory manager that keeps memory blocks ready to send as vbo data.
Since we lose track of which memory block was used by each DRWShadingGroup, we need to redistribute them in the same order/size to avoid reallocating each frame.
This is why DRWInstanceDatas are sorted in a list for each different data size.
This is a small cleanup of something which I think is just a typo anyway.
With all the recent talk of harassment and groping, I think we better avoid
that within our source code! :)
Reviewers: sergey
Reviewed By: sergey
Tags: #motion_tracking
Differential Revision: https://developer.blender.org/D2979
We can now drag multiple objects at once in the outliner. You are restricted to
working within a single outliner. Be sure to drag from the object's name, not
from its icon (otherwise it will try to parent it).
We don't use the same drag'n'drop system as IDs here, which, although I
dislike it, allowed this solution to be local and not dependent on the entire
drag'n'drop system of Blender.
This is a feature Andy Goralczyk requested a long time ago.
Kudos to him for his request.
This technically reverts 176698b2eb.
Drag and drop for scene collections requires an id for its poll function.
However, we were passing the collection as the id pointer to
outliner_add_element (which is ok since the function doesn't require a real
ID).
I couldn't reproduce the original issue tackled by the aforementioned commit,
so I'm going ahead and bringing drag and drop back for scene collections.
Note: We already pass the ID for view layer collections as well since we brought
collections into groups.
This is a partial revert of 1f5106de61.
First and foremost, for groups I was checking the wrong flag
(soops->flag & SO_GROUPS) instead of (soops->outlinevis == SO_GROUPS).
Second, the columns were entirely broken for things like Orphan Data.
Third, I tried to have different columns for different `outlinevis`, but we have
bones with only visible and select, modifiers with visible and render, render
passes with enable and another value ... I would rather stay away from this mess
at the moment, and stick to the more obvious bug fix.
Finally, there is a bug (not addressed here) where the whole line is selected,
regardless of the restriction column area. It should be fixed separately.
The result is less noisy ogl renders.
What this patch does:
- the draw loops get accumulated into the output buffer.
- disable TXAA persmat jittering in ogl render since ogl render already does that.
- make the noise texture update correctly across all draw loops. Previously it was reset between each FSAA sample.
- Hashed Alpha materials were outputting their alpha values even if the final pixel has no blending and thus no transparency.
- Opacity was not clamped when using "add closure" nodes.
We need to remove all transforms to display space during rendering for this to work. The float rect is then color managed when displayed.
This makes all interface colors wrongly displayed, because they should be color managed when rendering.
This fixes any function that relied on these iterators such as:
* Outliner Same Type
* Metaballs
* scene.objects
We were not considering the collections when there were collections nested
in the collections nested in the master collection.
It includes a unit test.
Adding new context modes requires adding a string in CTX_data_mode_string,
but there is no error when omitting this other than panels using
incorrect contexts. The static assert should at least help detect simple
missing strings, to avoid confusing errors.
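A minimal sketch of such a guard; the enum values and strings here are illustrative, not Blender's actual mode list:
```
#include <assert.h>

typedef enum eContextObjectMode {
  CTX_MODE_EDIT_MESH = 0,
  CTX_MODE_SCULPT,
  CTX_MODE_OBJECT,
  CTX_MODE_NUM, /* always keep last */
} eContextObjectMode;

static const char *data_mode_strings[] = {
    "mesh_edit",
    "sculpt_mode",
    "objectmode",
};

/* Compilation fails as soon as a mode is added without its string. */
static_assert(sizeof(data_mode_strings) / sizeof(data_mode_strings[0]) ==
                  CTX_MODE_NUM,
              "Context modes out of sync with context mode strings");

int main(void)
{
  return 0;
}
```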
In that case it can now fall back to CPU memory, at the cost of reduced
performance. For scenes that fit in GPU memory, this commit should not
cause any noticeable slowdowns.
We don't use all physical system RAM, since that can cause OS instability.
We leave at least half of system RAM or 4GB to other software, whichever
is smaller.
For image textures in host memory, performance was maybe 20-30% slower
in our tests (although this is highly hardware and scene dependent). Once
other types of data don't fit on the GPU, performance can be e.g. 10x
slower, and at that point it's probably better to just render on the CPU.
Differential Revision: https://developer.blender.org/D2056
This is part of T53495.
This makes sure the master collection is always expanded and you don't even get
the expand/collapse icons for it.
This is only for the Collections (currently Master Collection Tree) option, not
for active view layer.
This is part of T53495.
This operator is actually using existing code. The only new thing about it is
that it has a shortcut.
It will be exposed in the UI soon together with the nested collection operator.
This is part of T53495.
This operator is intended for the outliner when viewing Collections (at the moment, Master Collection Tree).
It has a shortcut "C", and will be added to a menu shortly.
Technically this was introduced in 01b547f993 when
exposing size and randomness for particles.
This "fixes" makes sure particle size and size randomness is always in the
Render panel when it affects the particle system (i.e., always unless using
advanced hair or hair that is not rendering groups/objects).
Fix T52977: Parent bone name disappeared in the UI in pose mode.
Regression caused by own rBc57636f060018. So instead of changing widget
type, just flag it as disabled.
Note that the core of the issue is elsewhere though - there is absolutely no
reason to have a search widget for pointers we can neither change nor
search! But fixing this is not really top priority; it is one of the many
glitches of our UI code, so I think we can live with the current code.
To be backported to 2.79a.
This allows users to have "Support", "Rig", "Characters" collections nested to
different collections without having to resort to "House.Rig", "House.Characters"
or "Rig.001", "Characters.003" :/
This is part of T53495.
This fixes renaming the view layer via Python.
This bug was introduced originally in 3a95bdfc65, although I suspect it was
around for longer, since that commit didn't touch this part of the code.
But basically we need the id of the RNA property to be the one that owns
the data (view layer).
This fixes renaming via the interface.
This bug was introduced originally in 9515737b55. We need the id of the RNA
property to be the one that owns the data (view layer).
So it can't be the window's id, but the scene one instead.
We were bumping the user count when duplicating a viewlayer and its
freestyleconfig depending on the flag; however, when freeing, we were always
decreasing the user count.
This fixes that and gets rid of the assert when running:
`--factory-startup --enable-copy-on-write`
and closing Blender.
SVM nodes need to read all data to get the right offset for the following node.
This is quite weak; a more generic solution would be good in the future.
The mental model is that a scene collection is a small wrapper on top of the
master collection, so all objects are in the master collection at all times.
When we remove a collection, there is no reason to remove an object. So if the
object was not linked to any other collection, we link it to the master one.
We tried to do as much as possible in a single threaded callback, which
led to using some nasty tricks like fake atomic-based spinlocks to
perform some operations (like float addition, which has no atomic
intrinsics).
While OK with a 'standard' low number of working threads (8-16), because
collisions were rather rare and the implied memory barriers did not add
*that* much overhead, this performed poorly on more powerful systems
reaching 100 threads and beyond (like workstations or render farm
hardware). There, both memory barrier overhead and more frequent
collisions had a significant impact on performance.
This was addressed by splitting the process further: we now have three
loops, one each over polys, loops and vertices, and we added intermediate
storage for weighted loop normals. This allows us to completely avoid any
atomic operation in the body of the threaded loops, which should fix the
scalability issues. This costs us slightly higher temporary memory usage
(something like 50MB per million polygons on average), but that looks
like an acceptable tradeoff.
Furthermore, tests showed that we could gain an additional ~7% of speed
in computing normals of heavy meshes by also parallelizing the last two
loops (might be 1 or 2% on overall mesh update at best...).
Note that further tweaking of this code should be possible once Sergey
adds the 'minimum batch size' option to the threaded foreach API, since
very light loops, like the one over loops (a mere v3 addition), require
much bigger batches than heavier code (like the loop over polys) to keep
optimal performance.
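Here is a toy serial version of the three-loop split (illustrative data layout, not the actual BKE_mesh code); because pass 2 writes only to a private per-loop slot, the threaded loop bodies need no atomics:
```
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } float3;

enum { NUM_VERTS = 4, NUM_POLYS = 2, NUM_LOOPS = 6 };

static const float3 verts[NUM_VERTS] = {
    {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}};
static const int loops_v[NUM_LOOPS] = {0, 1, 2, 0, 2, 3}; /* two triangles */
static const int poly_start[NUM_POLYS + 1] = {0, 3, 6};

static float3 v3_sub(float3 a, float3 b)
{
  return (float3){a.x - b.x, a.y - b.y, a.z - b.z};
}
static float3 v3_cross(float3 a, float3 b)
{
  return (float3){a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z,
                  a.x * b.y - a.y * b.x};
}

int main(void)
{
  float3 pnors[NUM_POLYS];
  float3 lnors_weighted[NUM_LOOPS]; /* the intermediate storage */
  float3 vnors[NUM_VERTS] = {{0, 0, 0}};

  /* Loop 1 -- parallel over polys: face normals (unnormalized). */
  for (int p = 0; p < NUM_POLYS; p++) {
    const int *lv = &loops_v[poly_start[p]];
    pnors[p] = v3_cross(v3_sub(verts[lv[1]], verts[lv[0]]),
                        v3_sub(verts[lv[2]], verts[lv[0]]));
  }
  /* Loop 2 -- parallel: each loop writes only its own private slot. */
  for (int p = 0; p < NUM_POLYS; p++) {
    for (int l = poly_start[p]; l < poly_start[p + 1]; l++) {
      lnors_weighted[l] = pnors[p]; /* real code weights by corner angle */
    }
  }
  /* Loop 3 -- parallel over verts in the real code (via a vert->loop map);
   * shown serially here: gather the private slots, then normalize. */
  for (int l = 0; l < NUM_LOOPS; l++) {
    vnors[loops_v[l]].x += lnors_weighted[l].x;
    vnors[loops_v[l]].y += lnors_weighted[l].y;
    vnors[loops_v[l]].z += lnors_weighted[l].z;
  }
  for (int v = 0; v < NUM_VERTS; v++) {
    float len = sqrtf(vnors[v].x * vnors[v].x + vnors[v].y * vnors[v].y +
                      vnors[v].z * vnors[v].z);
    if (len > 0.0f) {
      vnors[v].x /= len; vnors[v].y /= len; vnors[v].z /= len;
    }
  }
  printf("vnors[0] = (%g, %g, %g)\n", vnors[0].x, vnors[0].y, vnors[0].z);
  return 0;
}
```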
So they are:
House
-> House 1
-> House 2
-> ...
The exception is when the parent collection is the master collection. In this case we get:
Master Collection
-> Collection 1
-> Collection 2
-> ...
This is part of "T53495: View layer and collection editing - Design Task"
It is a bit annoying to have per-DM locking, but it's way better (as in, up to
4 times better) for playback speed when having lots of subsurf objects.
The idea is to avoid any threading overhead when we start pushing tasks in a
loop, similarly to how we do it in the new dependency graph. This gives a
couple percent of speedup here, but also improves scalability.
I could have done a subversion bump, but I found a safe way to avoid it.
It leads to a bit of ugly code, but once we bump the subversion
next time we can clean it up easily.
This allows a duplicator (also known as a dupli parent) to be in a visible
collection, so its duplicated objects are visible, while itself being
invisible in the final render.
An object that is a particle emitter is also considered a duplicator.
Many thanks to the reviewers for the extensive feedback.
Reviewers: sergey, campbellbarton
Differential Revision: https://developer.blender.org/D2966
These statistics are only collected when debug_value is different from 0.
They are stored in the depsgraph node itself, so we always have access to
averaged data and other stats which require persistent storage. This way we
also don't waste time looking up stats in a separately stored hash map.
I had to make Eevee draw its scene in the scene pass (before it was doing it
in the background pass). This is not ideal since reference images require
a separation between scene and background.
But it's the best way to solve it now. Clay is working fine.
This is something derived from the node type; there is no reason to try to
cache it anywhere, especially since it's not used in any performance-critical
code. A lighter-weight dependency graph is what we want.
Since in Alembic the loop order seems to be reversed when exporting and
importing, and this was the only place where it was not, I decided
to match this to the convention of reversing the loop order as well.
Reviewers: sybren, kevindietrich
Tags: #alembic
Differential Revision: https://developer.blender.org/D2968
The notifier was getting through, yet the tree wasn't rebuilding until
we forced a redraw by resizing the outliner.
Thanks to Danrae Pray (@spockTheGray) for looking at this issue.
Annoyingly, we need to convert the vfont to nurbs, do the minmax, and toss the
nurbs away.
This is likely to be fine, since this function is not intended to be used a
lot, and this is the only way to get a more meaningful result.
However, it's not very clear what to do with a font on a curve.
This fixes rendering of font object with auto texture space in Cycles
introduced in c34f3c7.
It is probably possible to introduce a new mode to vfont_to_curve which
will compute the boundbox without extra allocations, but that's more of an
optimization.
Reviewers: campbellbarton, mano-wii
Reviewed By: campbellbarton
Subscribers: zeauro
Differential Revision: https://developer.blender.org/D2971
We cannot store pointers to elements of a collection property in the
case where we modify that collection. This is like storing pointers to
elements of an array before calling realloc().
There is still a crash that you get because the draw manager needs to
handle duplis differently.
But the initial assert caused by this particular file is now fixed.
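To make the realloc() analogy concrete, here is a minimal standalone illustration of this bug class (nothing Blender-specific):
```
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  int *items = malloc(4 * sizeof(int));
  items[0] = 42;
  int *elem = &items[0]; /* cached pointer into the collection */

  /* The collection is modified and may move in memory... */
  items = realloc(items, 4096 * sizeof(int));

  /* ...so `elem` may now dangle; dereferencing it is undefined behavior.
   * The safe pattern is to store the index and re-resolve afterwards: */
  size_t elem_index = 0;
  printf("%d\n", items[elem_index]); /* OK */
  (void)elem;                        /* must not be used */
  free(items);
  return 0;
}
```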
The goal is: have id->recalc flags set to components which got changed.
To make it possible for render engines to check on a more granular basis
what changed in the object. For example, is it a transform which changed
or is it just some ID property changed which has nothing to do with rendering.
The tricky part is: we don't want duplicated logic in tagging and flushing.
In order to avoid this duplication, we store ID recalc flag in the component
node type information. That type information could easily be accessed by both
tagging and flushing routines.
The remaining part of the changes is related to changing the way tagging
works. The new idea here is to have a utility function which maps an update
tag to a component. This way we can easily set ID recalc flags right away,
without any duplication of ID recalc flags set in multiple flag handler
functions.
With all this being said, there should be no user-measurable difference for
now; it's a gigantic foundation for some upcoming work and fixes.
The issue actually goes a bit deeper: converting a curve to a mesh will
change the texture space just because font and bezier curves are using CVs
to calculate the texture space.
So now when those objects are converted to a mesh, we disable auto
texture space and copy the evaluated space over.
The original fix was assuming that the particle init operation is updated on
every frame, which is wrong behavior; that was fixed in a previous commit to
the original bugfix.
- Highlights were too intense/distracting;
use a more subtle alpha (consistent with the rest of our UI).
- Don't fill center cube (only draw edges).
- Draw widget while interacting since this is helpful in some cases.
Thanks to @jbakker for suggestions.
Also change axis hotspots so the nearest is always selected
for quicker axis picking (relies on dragging any axis to orbit).
Makes the 3D view navigation widget easier to use: dragging anywhere
in the rotation region now rotates without having to avoid the XYZ axis
hotspots which only activate on a single click.
Logic for drag detection is complicated by the manipulators' reliance
on keeping the modal operator running.
Currently this is wrapped in an ifdef,
we may want to implement it differently later.
A comparison should not just have been against an epsilon,
but relative to the edge length involved.
Thanks to mano-wii for the patch on which this is based.
The idea is to de-duplicate logic in DEG_id_tag_update() and flushing where we
need to translate depsgraph tag or component type to ID level recalc flag.
Currently unused, but is required for Blender 2.8.
Not only does this help merges from master to the branch, it also:
- Allows us to production-check changes as soon as possible.
- Avoids some unnecessary editors update about ID changes.
- Adds small optimization on queue size by always keeping one of the pointers
outside of the queue.
The idea is to allow iterating over ID nodes in the exact order of their
construction, an order which will not change depending on memory
pointers or anything.
Now we stick to a single button: when data is directly linked, clicking
on it will make a local copy, while shift-clicking on it will make a
static override.
When data is a static override, the icon is the DATA_OVERRIDE one, and clicking
on the button will make it a fully boring local data-block.
This is just the 'linked' icon with a top-down arrow instead of a left-right
one, if any graphist feels more inspired... ;)
Note that this is the 'new inkscape' version of the svg file, hope
everything is alright (it does change all icons when re-exporting :/ ).
The code also handles auto-generation of static overrides.
Aside from some naming consistency cleanup, this commit:
* Is the first step addressing the 'operator' issue with static
overrides, by implementing a first version of the 'restore from
reference' behavior.
* Fixes several issues that were discovered on the way in enhanced
RNA comparison code, like the 'zero-length dynamic array' case, or some
infinite looping caused by some non-ID pointers (that for some
mysterious reasons did not show up previously...).
* Factorizes a bit said RNA comparison code (auto-static override
generation and comparison/check were essentially doing the same thing).
This flag means that the pointer does not 'own' the data it references.
This is the case for nearly all ID RNA pointers (NodeTrees will probably
again be some nasty exception here :( ), but also several other cases.
That kind of information is mandatory for complex processing over whole
data-blocks done in RNA, like some static override tasks (advanced
comparison...).
We need to tag groups before and after rendering, so the group collections
viewport and render visibility are taken into account.
Note: This is a workaround that will be removed once the render engine has
its own depsgraph, instead of re-using the viewport depsgraph.
Users can change the group collection visibility in the outliner
when looking at groups.
Regular collections on the other hand don't have any special visibility
control; if you need a collection to be invisible during render, either don't
link it into the view layer used for F12, or disable it.
This includes:
* Updated unittests - update your lib/tests/layers folder.
* Subversion bump - branches be aware of that.
Note:
Although we are using eval_ctx to determine the visibility of a group
collection when rendering, we are still using the same depsgraph for the
viewport and the render engine, so at the moment the render visibility is
ignored.
Following next is a separate workaround that tags the groups before and
after rendering to tackle that.
Currently this is a no-visible-changes change, but the idea is to use this
dedicated flag to tell which exact components of an ID changed, making it more
granular than just OBJECT and OBJECT_DATA, and to allow setting this field
based on what components the new dependency graph flushed on evaluation.
Currently unused, but this is where LIB_TAG_ID_RECALC* flags will go.
Also modified other DNA to keep pointer properties followed by pointers.
This makes it easier to keep track of alignment and to extend nested
structures without ruining anything.
The possible issue with just listing arguments is that it might not be clear
what a particular value is used for. For example, is it the scene itself, or
is it a parent scene?
Not as if it's very unclear now, but better to be explicit for the future,
and for me reading the code in 10 years.
The outliner was using the old selection flag to show selected objects.
So if you selected an object in the outliner, it would stay "selected"
(drawn in yellow) even after you selected another object.
The new dependency graph is supposed to have a relation from the animation
node to the node which corresponds to the property modified by that curve.
This means it is up to the dependency graph to flush recalc flags, and no
manual control is needed in the animation code.
This is a part of ongoing work in Blender 2.8, where we need to replace
`object->id.tag & LIB_TAG_ID_RECALC_DATA`
with
`object->data->id.tag & LIB_TAG_ID_RECALC`
Should be no user measurable difference.
We now select the LayerCollection at index 0 for the active ViewLayer after a
collection deletion operation.
Added some functions to query outliner tree data & get LayerCollection
by index using a similar approach as we do for SceneCollection indexing.
With warning and style cleanups by Dalai Felinto.
Reviewers: dfelinto
Tags: #bf_blender_2.8
Differential Revision: https://developer.blender.org/D2942
This is something we would need to ensure anyway, so it doesn't seem to make
sense to NOT allocate the depsgraph and then worry about this externally.
Steps to reproduce: add a cube, change its size in the redo panel.
Found by Campbell during code review session.
Remove an unused function from the public API.
Make parameter & variable naming more consistent across the code.
Move RNA property validation/'conversion' (for the IDProps case) to an upper
level in the code; this will avoid some useless re-processing.
Previously, hitting Shift-LMB would first invoke the selection operator, which
later on is transformed into the mouse tweak used for the reroute operator.
This was causing problems when extending a selection with Shift-LMB while
clicking fast or from a tablet.
When one creates a new local static override from another linked
data-block already overriding a third one etc., walk the whole
inheritance chain up to the original ancestor to try to find an
overriding template, instead of only checking the immediate reference...
Avoid creating new Python instances
every time a scene, object, mesh, etc. is accessed.
Also resolves crashes T28724, T53530,
although it's only valid for ID types, not modifiers, vertices, etc.
Back-ported from blender2.8 branch.
WorkSpaceLayout->screen will be made public soon, but meanwhile this makes it
clear why we are not passing layout->screen to CALLBACK_INVOKE in this case.
We cannot store a pointer to an object in a temporary variable here, since
then the pointer will not be updated in the base itself.
This fixes missing modifiers on objects coming from a dupli-group.
Bug introduced on rB9f5bf197a0c3.
The offset for selection of vertices (`bm_vertoffs`) starts where the offset of edges ends (`bm_wireoffs`).
However, `bm_wireoffs` depends on the offset of face selection (`bm_solidoffs`).
Before the commit that introduced the bug, the drawing of edges (in the backbuffer) was always computed along with `bm_wireoffs`:
```
bm_wireoffs = bm_solidoffs + em->bm->totedge;
```
Now that the edges are not always drawn in the backbuffer, `bm_wireoffs` has to start from `bm_solidoffs`.
It is still based on generic collection evaluation, but the idea is to avoid
having the view_layer pointer passed from the group to its evaluation
function.
This is essential for copy-on-write, where we need to pass the view_layer
pointer from a copied datablock, but that copy is not yet available at
construction time. Also, this is NOT a case where we want to expand the
datablock at construction time just to keep our life easier.
The View Layer was not duplicated between destination and source.
This would lead to a crash if you duplicated the group and assigned
the new group to any object.
Our own implementation was behaving differently compared to OSL and GPU:
on the border pixels, OSL and CUDA were interpolating with black, while we
were clamping the coordinate.
This partially fixes the issue reported in T53452.
A similar change should perhaps also be done for 3D interpolation, but this
is to be investigated separately.
This only applies to IDs being copied outside of bmain. Handy for cases when
it is important to check whether the copy corresponds to a datablock coming
from a library.
An example of that is proxy evaluation with copy-on-write.
Thanks Bastien for review!
This should solve the crash on files having proxies, but there will still be
an assert failure, because proxy_from is expected to come from a library,
which is no longer true for objects which got copied.
Before this, it was up to lots of other places to keep track of whether
something is dependent on time or not. That was annoying, unreliable,
and fragile.
This commit avoids hacks in the object builder. Other areas will be adapted
soon.
This is something that should be supported by the new dependency graph.
Fixed by making build_animation() add a relation between the Animation
component and whatever-is-being-animated. In fact, for now, only relations to
ID properties are added. The rest of the relations are kind of hacked in all
over the code and need to be removed and verified with specific .blend files.
We can't use a single component here, since it might consist of multiple
operations. So, for example, having a driver operation would confuse the
targets of another driver.
Fill-selection would only go upward in the list of items to find an already
selected one and fill-select all items in-between. Now, in case the upward
search fails, it will also try to go downward, effectively allowing
'fill-selecting' from bottom to top.
Note that top-to-bottom keeps priority (i.e. if a top-to-bottom
fill-selection is possible, it will always happen, even if a
bottom-to-top one is also possible).
* For the T48988 fix (i.e. separate Ease In/Out properties for Bendy Bones
in Edit vs Pose modes), old animation data needed to be patched to use
the new property names. This is needed to partially fix some of the
issues in T53356 (though the Rigify code itself still needs to be patched).
* For the T52009 fix, old files needed to have the frame_start and frame_end
properties on the FModifier (base-class) updated to match that of the
FMod_Stepped type-specific class. This wasn't done in the earlier commit
since it wasn't worth going through all animation data just for the sake
of updating these relatively-rare settings, but since we're doing it anyway
now, it makes sense to include this here.
Patch from Richard Erhardt, with some additions & modifications.
Changes the bevel profile shape parameter so that one can get an arbitrarily
near-square profile as the parameter approaches 1.
Adds code to make the profile=0 case work, at least for cube corners,
so the hard minimum of the profile parameter changed from 0.15 to 0.
Currently this shouldn't make any difference, but it is something that needs
to be done to sanitize driver relations (with the idea of re-using some
generic code to get operations for driver variables).
Those are unused, and it is not clear whether we will ever support this.
It seems better to have more "component"-like tags; there would be less magic
involved in guessing what exactly is to be tagged.
One or two are OK, but more make it rather unreadable, and future work
is likely to require more toggle-specific behavior here. So switched to
bitflags, changing from short to int and using the 16 upper bits for
'internal' ones defined in BLO_readfile.h, combined with 'public' ones
from user interaction, defined in DNA_space_types.h.
Updated collection_delete_exec() so we don't try to delete elements as we search
the outliner tree anymore.
Now we search the whole tree first for the selected nodes that need to be
deleted and delete them afterward.
Reviewers: dfelinto
Tags: #bf_blender_2.8
Differential Revision: https://developer.blender.org/D2936
Differential Revision: https://developer.blender.org/D2940
Use dynamically generated message publish/subscribe
so buttons and manipulators update properly.
This resolves common glitches where manipulators weren't updating
as well as the UI when add-ons exposed properties which
hard coded listeners weren't checking for.
Python can also publish/subscribe to changes via `bpy.msgbus`.
See D2917
This augments the existing irradiance grid with a new visibility precomputation.
We store a small shadowmap for each grid sample so that light does not leak through walls and such.
The visibility parameters are similar to the ones used by the Variance Shadow Maps for point lights.
Technical details:
We store the visibility in the same texture (array) as the irradiance itself (in order to reduce the number of samplers).
But the irradiance and the visibility are not the same data, so we must encode them in order to use the same texture format.
We use a normalized RGBA8 texture and encode the irradiance as RGBE (shared exponent).
Using RGBE encoding instead of R11_G11_B10 may lead to some lighting changes, but quality seems to be nearly the same in my test cases.
Using full RGBA16/32F may be a future option, but that would require much more memory and reduce performance significantly.
Visibility moments (VSM) are encoded with 16-bit fixed-point precision using a special range. This seems to retain enough precision for our needs.
Also, interpolation does not seem to be a big problem (even though it's incorrect).
Before this patch, if one of the grids was updated (moved), only the subsequently evaluated grids had their level reset and all their bounces recomputed.
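For reference, the textbook shared-exponent (RGBE) pack/unpack looks like this in C; this sketches the concept only, and EEVEE's GLSL encoding may differ in details:
```
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Pack an HDR color into 8-bit mantissas plus one shared, biased exponent. */
static void rgbe_encode(const float rgb[3], uint8_t out[4])
{
  float maxc = fmaxf(rgb[0], fmaxf(rgb[1], rgb[2]));
  if (maxc < 1e-32f) {
    out[0] = out[1] = out[2] = out[3] = 0;
    return;
  }
  int exponent;
  float scale = frexpf(maxc, &exponent) * 256.0f / maxc;
  out[0] = (uint8_t)(rgb[0] * scale);
  out[1] = (uint8_t)(rgb[1] * scale);
  out[2] = (uint8_t)(rgb[2] * scale);
  out[3] = (uint8_t)(exponent + 128); /* shared exponent, biased */
}

static void rgbe_decode(const uint8_t in[4], float rgb[3])
{
  if (in[3] == 0) {
    rgb[0] = rgb[1] = rgb[2] = 0.0f;
    return;
  }
  float scale = ldexpf(1.0f, (int)in[3] - (128 + 8));
  rgb[0] = in[0] * scale;
  rgb[1] = in[1] * scale;
  rgb[2] = in[2] * scale;
}

int main(void)
{
  const float hdr[3] = {3.5f, 0.25f, 12.0f};
  uint8_t packed[4];
  float back[3];
  rgbe_encode(hdr, packed);
  rgbe_decode(packed, back);
  printf("roundtrip: %f %f %f\n", back[0], back[1], back[2]);
  return 0;
}
```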
Tests were broken since e8c15e0ed1.
We now get view_layer from window, not workspace, since the same workspace can
have a different view_layer depending on the window scene.
We do not assume subversion bump until we actually change the subversion.
For example, a branch may have bumped its subversion to 3, yet still require
the new doversion code introduced on 108c4bd502.
This way we can inform editors about all edits at once. Currently this is not
used, but in the next commits we will inform editors about what exactly has
changed.
We are using NC_SCENE | ND_LAYER_CONTENT for the shader; however, this does not work for groups
unless we manually handle the notifiers.
Otherwise the group id is passed, and the listener never gets the notification since a scene id
is expected, or no id at all.
Allow users to edit either the object group's active collection or the view
layer one.
We can't support users selecting the group collections from the outliner
group, because that would imply having an active group for the scene or
workspace. But the way it is now allows seeing and editing the collection
values after the group is instanced.
You can still create groups as before, with Ctrl + G. This will create a group
with a single visible collection.
However you can also create a group from an existing collection. Just go to
the menu you get in the outliner when clicking in a collection and pick
"Create Group".
Remember to instance the group afterwards, or link it into a new scene or file.
The group and the collection are not kept in sync afterwards. You need to manually
edit the group for further changes.
Since we are ditching layers from Blender (2.8) we need a replacement to
control groups visibility. This commit introduces collections as the building
blocks for groups, allowing users to control visibility as well as overrides
for groups.
Features
========
* Groups now have collections
This way you can change the visibility of a collection inside a group, and add
overrides which are part of the group and are prioritized over other overrides.
* Outliner
Groups can inspect their collections, change visibility, and add/remove members.
To change an override of a group collection, you need to select an instance of
the group, and then you can choose "group" in the collection properties editor
to edit this group active collection instead of the view layer one.
* Dupli groups overrides
We can now have multiple instances of the same group with an original "override"
and different overrides depending on the collection the instanced object is part
of.
Technical
=========
* Layers
We use the same api for groups and scene as much as possible.
Reviewers: sergey (depsgraph), mont29 (read/write and user count)
Differential Revision: https://developer.blender.org/D2892
Instead of storing a single active view-layer in the workspace, one is
stored for each scene the workspace showed before.
With this, some things become possible:
* Multiple windows in the same workspace but showing different scenes.
* Toggling back and forth scene keeps same active view-layer for each scene.
* Activating workspace which didn't show current scene before, the current view-layer is kept.
A necessary evil for this is that accessing view-layer and object mode
from .py can't be done via workspace directly anymore. It has to be done
through the window, so RNA can use the correct scene.
So instead of `workspace.view_layer`, it's `window.view_layer` now (same
with mode) even though it's still workspace data.
Fixes T53432.
It makes more sense to stick to DEG_iterator_object order in name, since we can
have functions to iterate over different entities and we want all of them to
have a common prefix.
The idea of this flag was to prevent snapping onto an object which depends on
the ones currently being modified. Using a single flag makes more sense here,
and also makes it possible to replace some ob->recalc based magic with a
depsgraph query to set those flags.
It looks stupid to first force some flag to be set and then have a workaround
to ignore that flag in the snapping code. Let's just not set the flag in the
first place.
The only useful situation where such snapping was usable is to move roots of
disconnected hair, which still works just fine. However, there might be some
other hidden corner case where this workaround was needed.
We should keep base_flags after the CoW object datablock is updated. Not
entirely happy with the current solution, but it fixes the crash and allows us
to run the tests again.
A more proper solution would be to make the CoW operation a per-component
thingie, which would only update the corresponding parts.
Previously, the NLM kernels would be launched once per offset with one thread per pixel.
However, with the smaller tile sizes that are now feasible, there wasn't enough work to fully occupy GPUs, which resulted in a significant slowdown.
Therefore, the kernels are now launched in a single call that handles all offsets at once.
This has two downsides: Memory accesses to accumulating buffers are now atomic, and more importantly, the temporary memory now has to be allocated for every shift at once, increasing the required memory.
On the other hand, of course, the smaller tiles significantly reduce the size of the memory.
The main bottleneck right now is the construction of the transformation - there is nothing to be parallelized there, one thread per pixel is the maximum.
I tried to parallelize the SVD implementation by storing the matrix in shared memory and launching one block per pixel, but that wasn't really going anywhere.
To make the new code somewhat readable, the handling of rectangular regions was cleaned up a bit and commented, it should be easier to understand what's going on now.
Also, some variables have been renamed to make the difference between buffer width and stride more apparent, in addition to some general style cleanup.
For some blend modes there would be no effect with factor 1.0, even if factor
0.999 would give a very different image. Now the result should have no
discontinuity.
Differential Revision: https://developer.blender.org/D2925
This is very bold right now - you can simply replace (or add) an action
to an override data-block. Actions themselves are not 'customizable'
through override at all currently (we may at least add
'add/remove/replace fcurves' feature in future), and nothing else in
animdata is overridable currently.
You can now override loc/rot/scale of objects and posebones.
Also added a basic operator to make an override of the active linked object,
but this is a very limited/WIP/testing feature (you have to manually override
the object and its armature, and relink to the proper local overrides
yourself...). The final 'make proxy killer' will be much more automated, of
course.
First real 'usable' commit, will be needed by the 'virtual data-block'
asset feature (i.e. to be able to link a mere image file as if it was a
linked datablock, and automatically generate an override of it to make
it editable).
We could do that in several different ways, e.g. adding some tag during
DEG evaluation, etc. But this is not a critical process (its main
purpose is user feedback), so the current solution seems to work well enough
- and it's dead simple! ;)
This is essentially a huge refactor/extension of our existing RNA
compare & copy code, since static override needs more advanced handling here.
Note that not all new features are implemented yet, advanced things like
collections insertion/deletion are still TODO (medium priority).
This completes the ground work for overrides, remaining commits will be
about UI and some basic/testing activation of overrides for a limited
set of data-blocks & properties.
For details see https://developer.blender.org/D2417
See https://developer.blender.org/D2417 for details.
Note that since static overrides rely heavily on RNA, this commit is
essentially invisible from a user PoV; more in the next commits.
This function swaps the memory content of two data-blocks (of the same type,
obviously), while preserving most of the ID 'header' itself.
It is intended to be used to quickly and easily replace the data of an
existing ID by another one, presumably a temporary 'working' one,
without having to suffer from things like name changes,
registering/removing from Main database, etc.
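A conceptual sketch of such a swap with a made-up struct layout (the real function works generically on ID memory; this only shows the header-preserving idea):
```
#include <stdio.h>

typedef struct ID {
  char name[66];
  int us; /* user count */
} ID;

typedef struct Mesh {
  ID id;       /* header must stay with its slot */
  int totvert; /* payload: everything after the header is swapped */
  float *verts;
} Mesh;

static void id_swap(Mesh *a, Mesh *b)
{
  Mesh tmp = *a;
  ID id_a = a->id;
  ID id_b = b->id;
  *a = *b;
  *b = tmp;
  a->id = id_a; /* restore headers: names, users, Main registration */
  b->id = id_b;
}

int main(void)
{
  Mesh work = {{"MEWorkCopy", 0}, 8, NULL};
  Mesh orig = {{"MECube", 3}, 0, NULL};
  id_swap(&orig, &work);
  printf("%s now has %d verts and still %d users\n",
         orig.id.name, orig.totvert, orig.id.us);
  return 0;
}
```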
Do a direct update of the object transform instead, without involving
manual trickery with the recalc flag.
There shouldn't be functional changes as far as artists are concerned,
but this will allow us to get rid of recalc flags in 2.8.
Thanks Bastien for review!
There is no reason to have such a long function; it is really easy to break it
down into smaller ones and call them from where needed. This makes them
smaller and easier to follow, and also avoids the use of confusing gotos.
For functions which allocate the requested data if it does not exist yet,
"_ensure" is to be used instead of "_get". "_get" functions should return
NULL when the requested data does not exist yet.
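A tiny illustration of the naming rule, with made-up types:
```
#include <stdio.h>
#include <stdlib.h>

typedef struct Cache { int value; } Cache;
typedef struct Object { Cache *cache; } Object;

static Cache *object_cache_get(Object *ob)
{
  return ob->cache; /* may be NULL, caller must handle it */
}

static Cache *object_cache_ensure(Object *ob)
{
  if (ob->cache == NULL) {
    ob->cache = calloc(1, sizeof(Cache)); /* allocate on demand */
  }
  return ob->cache; /* never NULL */
}

int main(void)
{
  Object ob = {NULL};
  printf("get:    %p\n", (void *)object_cache_get(&ob));
  printf("ensure: %p\n", (void *)object_cache_ensure(&ob));
  free(ob.cache);
  return 0;
}
```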
There were the following issues:
- This was used in a similar way of DEG's ID update callback. No reason to have
yet-another-way of informing editors/engines about changes. Better to keep
regular update mechanism usable and fast for those needs.
- It wasn't granular at all, and granularity in flags is something we need
to support anyway, even for the existing ID update.
- There is no reason to have it per-object. Depsgraph operates on IDs.
- It wasn't clear when and who clears the flag, and was possible to run into
conflicts.
This replaces a dedicated flag for which it wasn't clear who sets it and who
clears it, and which was also trying to re-implement existing functionality in
a way.
Flushing is currently not very efficient, but there are ways to speed it up
a lot; this needs more investigation.
Adds support for defining a number of tags as part of the rna-struct
definition, which its properties can set similar to property-flags.
BPY supports setting these tags when defining custom properties too.
* To define tags for a struct (which its properties can use then), define the tags in an `EnumPropertyItem` array, and assign them to the struct using `RNA_def_struct_property_tags(...)`.
* To set tags for an RNA-property in C, use the new `RNA_def_property_tags(...)`.
* To set tags for an RNA-property in Python, use the newly added tags parameter. E.g. `bpy.props.FloatProperty(name="Some Float", tags={'SOME_TAG', 'ANOTHER_TAG'})`.
ntree_shader_relink_displacement is creating a transient node that does not have a correct original to point to.
In this case we revert to a constant uniform.
This way it is more clear what is needed to be passed and what is available
in the callback itself.
Thanks Dalai for review and tips about engine type!
This commit effectively reverts fix T45702 done in 067fe2719a.
Reasoning:
- Blender Internal is being replaced with Eevee, and will be removed entirely
rather soon.
- All render engines are planned to have own depsgraph, so such threading
conflicts should no longer be an issue.
- We don't want to spend time on porting workarounds for EOL things to a new
design. Less code -- faster the work :)
- If such notifications end up being needed for some other cases, we would
need to re-implement this as a more proper depsgraph tagging/flushing and
make it work with all copy-on-write datablocks and everything.
There might be much more logic involved there; also, we might not know the
proper evaluated CoW pointer there yet. So we leave it to the dependency graph
to decide what exactly to do here.
No need to print status for basic & reliable operations,
build systems can output operations they run if needed,
or debug output changed in the source if developers are debugging.
Nice for ninja, so any printed text hints at a problem to fix.
User count was wrong for newly created files. We increase/decrease user count
when we link/delete objects from a SceneCollection.
So we don't want to leave a user count of 1 after calling BKE_libblock_alloc in
BKE_object_add_only_object().
It is fully unreadable to have lots of boolean arguments scattered across the
whole argument list. What does `false, true, true` mean in terms of behavior?
Replace those with a bitfield, which has the advantage of a more
human-readable meaning.
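A sketch of the readability win, with hypothetical flag names:
```
#include <stdio.h>

enum {
  REMAP_SKIP_INDIRECT = (1 << 0),
  REMAP_SKIP_USER_CLEAR = (1 << 1),
  REMAP_FORCE_UI_POINTERS = (1 << 2),
};

static void id_remap(int flags)
{
  if (flags & REMAP_SKIP_INDIRECT) {
    printf("skipping indirect usages\n");
  }
  if (flags & REMAP_SKIP_USER_CLEAR) {
    printf("keeping user counts\n");
  }
}

int main(void)
{
  /* Self-documenting at the call site, unlike id_remap(false, true, true). */
  id_remap(REMAP_SKIP_INDIRECT | REMAP_SKIP_USER_CLEAR);
  return 0;
}
```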
Pretty straightforward this time, we already have a single struct
pointer containing all needed data (or nearly).
And we gain about 10-15% speed on tracking! :)
Two more 'not really useful' cases (OMP only shows some noticeable
speedup above 1M elements, and since this is a quick operation anyway
compared to even other basic operators, the gain is in the 1% area of total
processing time in the best case).
So it's not worth parallelizing here; we'll gain much more by tackling heavy
operations. ;)
And BMesh is free from OMP now!
Performance tests on this one are quite surprising, actually...
The parallelized loop itself is at least 10 times quicker with the new
BLI_task code than it was with OMP. And subdividing e.g. a heavy mesh with 3
levels of multires (the whole process) takes 8 seconds with the new code,
versus 10 seconds with the OMP one. And as a cherry on top, the BLI_task code
only uses about 50% of the CPU load, while the OMP one was at nearly 100%!
In fact, I suspect the OMP code was not properly declaring outside vars,
generating a lot of unneeded locks.
Also, raised the minimum level of subdiv to enable parallelization; tests
here showed that we only start to get significant gains with subdiv levels of
4 and above, below that the single-threaded code is quicker.
Those three were actually giving no significant benefit, in fact
even slowing things down in one case compared to no parallelization at
all (in `BM_mesh_elem_table_ensure()`).
The point being, once more: parallelizing *very* small tasks (like index or
flag setting, etc.) is nearly never worth it.
Also note that we could not easily use per-item parallel looping in
those three cases, since they rely heavily on a valid
loop-generated index (or do non-threadable things like allocating
from a mempool)...
Previously the outcome depended on the order of edges;
now the longest boundary edges are rotated first,
then the face-connected edges.
This gives more predictable results, allowing regions containing
a vertex fan to be rotated onto the next vertex.
That was a nasty one: the Debug build would never have any issue (even tried
with 64 threads!), but the Release build would deadlock nearly immediately,
even with only 2 threads!
What happened here (I think) is that the gcc optimizer generated a
specific path that loops endlessly when the initial value of virtual_lock is
FLT_MAX, bypassing the re-assignment from v_no[0] and the atomic CAS
completely. Which would have been correct, had v_no[0] not been
shared (and modified) by multiple threads. ;)
The idea of that (broken) for loop was to completely avoid calling the
atomic CAS as long as v_no[0] was locked by some other thread, but...
I guess the avoided/missing memory barrier was the root of the issue here.
Lesson of the evening: Remember kids, do not trust your compiler to
understand all possible threading-related side effects, and be explicit
rather than elegant when using atomic ops!
Side-effect lesson: do check both release and debug builds when messing
with said atomic ops...
Using atomic CAS correctly is really hairy... ;)
In this case, the returned value from the CAS needs to satisfy *two*
conditions: it must not be FLT_MAX (which is our 'locked' value and
would mean another thread has already locked it), but it also must be
equal to the previously stored value...
This means we need two steps per loop here, hence using a 'for' loop
instead of a 'while' one now.
Note that collisions are (as expected) very rare, typically less than 1 in
10k, so I did not catch the issue initially (also because I was
mostly working with a release build to check on performance...).
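For illustration, here is a standalone approximation of that two-condition CAS loop using C11 atomics (Blender's code uses its own atomic_ops on the float directly; here the float lives as a bit pattern in an atomic uint32_t, with FLT_MAX as the 'locked' sentinel):
```
#include <float.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t f2b(float f) { uint32_t b; memcpy(&b, &f, 4); return b; }
static float b2f(uint32_t b) { float f; memcpy(&f, &b, 4); return f; }

/* Spin until we atomically replace the slot's value with the FLT_MAX
 * sentinel; returns the value we displaced (i.e. the "unlocked" content). */
static float lock_slot(_Atomic uint32_t *slot)
{
  const uint32_t locked = f2b(FLT_MAX);
  for (;;) {
    uint32_t expected = atomic_load(slot);
    /* Two conditions per iteration: the slot must not already be locked,
     * and the CAS must confirm nobody raced us since the load. */
    if (expected == locked) {
      continue; /* another thread holds the lock; re-read */
    }
    if (atomic_compare_exchange_weak(slot, &expected, locked)) {
      return b2f(expected); /* we own the slot now */
    }
  }
}

static void unlock_slot(_Atomic uint32_t *slot, float new_value)
{
  atomic_store(slot, f2b(new_value)); /* publishing the value unlocks */
}

int main(void)
{
  _Atomic uint32_t v_no_x;
  atomic_store(&v_no_x, f2b(0.25f));
  float old = lock_slot(&v_no_x);
  unlock_slot(&v_no_x, old + 1.0f); /* e.g. accumulate into a normal */
  printf("v_no[0] = %f\n", b2f(atomic_load(&v_no_x)));
  return 0;
}
```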
Previously the lighting of SSS materials was not present in reflection probes or irradiance grids.
This does not compute the SSS correctly, but at least outputs the corresponding irradiance power to the correct output.
While this probably isn't the final solution we'll go with, it's nicer
than the current one, which was basically broken. So consider this a
temporary solution.
It also allows testing how changing workspace changes mode & active
object, but only by having the workspaces use different view-layers.
Decided to remove WorkSpace.mode for now. If we need to bring it back,
we'll have to version patch it anyway.
This option prevents automatic blurring of the albedo color applied to the SSS.
While this is great for preserving detail, it can bleed more light onto nearby objects, since the blurring will be done on pure "white" irradiance.
This issue is to be tackled in a separate commit.
All depsgraphs share the same object state for now, which means doing the set
scene evaluation after the main scene evaluation will override all
modifications done by the main scene.
When setting a background (set) scene, it might pull new objects in, and those
objects will not have proper flags unless on_visible_update() is called
afterwards.
This reverts commit 90ff88646d.
Can not do this yet: if an object is not part of the graph yet, it will not
have an entry tag. Need some more generic solution here.
It is possible to have a situation where we need to both update relations and
do some updates on random IDs. Before, this was only done for objects, using
their recalc field. This means every update tag which did not fit in there
would have been lost after updating relations.
Now we do some smarter re-scheduling of operations after relations are updated.
object.is_from_set
object.is_from_duplicator
We need them for the unit tests, and users can benefit from them as well.
Note, this only makes sense when reading objects from depsgraph:
`bpy.context.depsgraph.objects`
This is an incomplete test since we cannot check for the
depsgraph selection value with the current API, nor can we
see if the relationship lines are being drawn.
Previously it was done during depsgraph iteration, which is not good at all,
since after evaluation nobody should really modify how the object was evaluated.
The idea then is to avoid doing a depsgraph tag for each object whose
selection is changed (which could be tricky to do anyway, due to the many
areas of selection code where this could happen), and to simply tag the
scene with a selection update tag.
This will involve synchronization of flags from bases to objects, which is
rather cheap anyway.
This is a crucial bit, since the batch cache is stored in the evaluated object,
meaning we can't tag its batch cache dirty from the notifier system.
Not easily at least. Better to leave this job to the depsgraph, it knows
all the copies of the data.
This cleanup removes the need for gigantic code duplication for each closure.
It also makes some performance improvements, since it removes some branches and duplicated loops.
It also fixes some mismatches (between Cycles and Eevee) with the principled shader.
This is a hack to let the user control the SSS radius even though the profile is baked with the default radius values.
This is completely against UI principles, since you cannot edit the profile radii while there is something plugged into the radius socket.
A better solution would be to have a dedicated node value for the RGB radii and an SSS scale socket only for Eevee.
`BM_mesh_normals_update` was converted from OMP to new parallel iterator code,
basic test with heavily subdivided cube (24.5k faces) gives:
- old OMP code: average 10ms per run.
- new BLI_task code: average 6ms per run.
So new code seems to be easily 40% quicker, in addition to getting rid of OMP. ;)
Reviewers: sergey, campbellbarton
Differential Revision: https://developer.blender.org/D2930
It merely uses the new thread-safe iterators system of mempool, quite
straightforward.
Note that to avoid possible confusion with two void pointers as
parameters of the callback, a dummy opaque struct pointer is used
instead for the second parameter (the pointer generated by iteration over
the mempool); callback functions must explicitly convert it to the
expected real type.
Also added a basic gtest for this new feature.
This will allow threaded tasks to 'consume' all mempool items in
parallel tasks, each one working on a whole chunk at once (to reduce
concurrency managing overhead).
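As a rough sketch of that convention (assuming a
`TaskParallelMempoolFunc`-style signature; the callback body and the
surrounding names are illustrative):
```
/* The second parameter is the opaque iteration pointer, not a void *, so
 * the callback must convert it explicitly to the real item type. */
static void bm_vert_task_cb(void *userdata, MempoolIterData *iter)
{
  BMVert *v = (BMVert *)iter; /* explicit conversion to the expected type */
  (void)userdata;
  zero_v3(v->no);
}

/* Consume all mempool items in parallel, one chunk per task. */
BLI_task_parallel_mempool(bm->vpool, NULL, bm_vert_task_cb, true);
```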
Adapted from http://www.pixelbeat.org/programming/gcc/static_assert.html.
Note that this macro just discards the error message, so errors when building
are much less nice than with gcc's _Static_assert... But the error log will
point to the right place in the code, so it should still be OK.
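The general shape of such a macro (a sketch in the spirit of the linked
page, not the exact code committed):
```
/* Fails to compile when `expr` is false by declaring an array of negative
 * size; the message is discarded, but __LINE__ makes the build error point
 * at the offending line. */
#define ASSERT_CONCAT_(a, b) a##b
#define ASSERT_CONCAT(a, b) ASSERT_CONCAT_(a, b)
#define STATIC_ASSERT(expr, msg) \
  typedef char ASSERT_CONCAT(static_assertion_at_line_, __LINE__)[(expr) ? 1 : -1]

STATIC_ASSERT(sizeof(void *) >= sizeof(int), "pointer_smaller_than_int");
```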
Pixel size was not initialized early enough. The first time this was not a
problem, because the bevel amount starts at 0 then, and after the mouse moves
the pixel size is initialized. The second time, the bevel amount starts at a
non-zero value, and so it failed.
These and other non-RGB passes should always be stored as full float, the
precision loss is too unpredictable.
Related to T53381, but that one is about file output nodes where we don't
know the type of data being saved currently.
The reason is mostly that dealing with type conversion in calling code is
not great, makes it less readable, and can generate hidden bugs in case the
original type changes and the atomic primitive calls are not updated
accordingly...
Sets the 'advanced' tag for some properties of following mesh edit operators:
* Loop Cut
* Subdivide
* Mark Seam
* Smooth Vertex
* Laplacian Smooth Vertex
* Merge
This will later be used to show advanced operator properties separate from
basic (as in non-advanced) ones in the UI.
Tagging a single operator property in C should be done via
`WM_operatortype_prop_tag()`. It does additional checks for type safety
that `RNA_def_property_tags()` doesn't do.
To avoid having to tag each advanced property individually, multiple
ones can be tagged by wrapping them into
`WM_operatortype_props_advanced_begin()` and
`WM_operatortype_props_advanced_end()` calls. It's also possible to only
call `_begin()`, all properties added after this will get tagged then.
In most cases this last approach should be sufficient.
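A minimal sketch of how such a wrap can look in an operator definition (the
operator and property names here are made up; only the begin()/end() calls
come from this change):
```
static void MESH_OT_example(wmOperatorType *ot)
{
  /* ... identifiers and callbacks ... */

  /* Basic properties, shown normally in the UI. */
  RNA_def_int(ot->srna, "number_cuts", 1, 1, 100, "Number of Cuts", "", 1, 10);

  /* Everything defined from here on gets the 'advanced' tag. */
  WM_operatortype_props_advanced_begin(ot);
  RNA_def_float(ot->srna, "smoothness", 0.0f, 0.0f, 1000.0f,
                "Smoothness", "", 0.0f, 1.0f);
  WM_operatortype_props_advanced_end(ot); /* optional, _begin() alone often suffices */
}
```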
Example of Python usage:
`my_float = bpy.props.FloatProperty(name="Some Float", tags={'ADVANCED'})`
Adds support for defining a number of tags as part of the rna-struct
definition, which its properties can set similar to property-flags.
BPY supports setting these tags when defining custom properties too.
* To define tags for a struct (which its properties can use then), define the tags in an `EnumPropertyItem` array, and assign them to the struct using `RNA_def_struct_property_tags(...)`.
* To set tags for an RNA-property in C, use the new `RNA_def_property_tags(...)`.
* To set tags for an RNA-property in Python, use the newly added tags parameter. E.g. `bpy.props.FloatProperty(name="Some Float", tags={'SOME_TAG', 'ANOTHER_TAG'})`.
Actual usage of this will be added in a follow-up commit.
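A hedged sketch of that flow (the two RNA functions are named in this log;
the struct, property and tag items are illustrative):
```
static void rna_def_example(BlenderRNA *brna)
{
  static const EnumPropertyItem prop_tag_items[] = {
      {1 << 0, "SOME_TAG", 0, "Some Tag", ""},
      {1 << 1, "ANOTHER_TAG", 0, "Another Tag", ""},
      {0, NULL, 0, NULL, NULL},
  };

  StructRNA *srna = RNA_def_struct(brna, "Example", NULL);
  /* Declare the tags available to this struct's properties... */
  RNA_def_struct_property_tags(srna, prop_tag_items);

  /* ...then set one of them on an individual property. */
  PropertyRNA *prop = RNA_def_property(srna, "some_float", PROP_FLOAT, PROP_NONE);
  RNA_def_property_tags(prop, 1 << 0);
}
```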
The RenderResult struct still has a listbase of RenderLayer, but that's ok
since this is strictly for rendering.
* Subversion bump (to 2.80.2)
* DNA low level doversion (renames) - only for .blend created since 2.80 started
Note: We can't use DNA_struct_elem_find or get the file version in init_structDNA,
so we are manually iterating over the array of SDNA elements instead.
Note 2: This doversion change with renames can be reverted in a few months. But
so far it's required for 2.8 files created between October 2016 and now.
Reviewers: campbellbarton, sergey
Differential Revision: https://developer.blender.org/D2927
This reverts commit d749320e3b.
It's possible the container struct is larger;
we could do sizeof checks that fall back to memmove,
but rather avoid complicating things.
Crash introduced on: 3a95bdfc65
We can't decrease the user count of freestyle linestyle IDs before linking.
Moving doversion to after linking.
And for the record, we are simply removing the freestyle data altogether.
This is only for files created with 2.8, so it should be fine.
This patch moves all the functionality previously in SceneRenderLayer to SceneLayer.
If we want to rename some of these structs, now would be a good time to do it, before they are in SceneLayer.
Everything should be working, though I will test things further tomorrow. Once this is committed depsgraph can get
rid of the workaround added in rna_Main_meshes_new_from_object and finish whatever this patch was preventing from being finished.
This patch also adds a few placeholders for the overrides (samples, ...). These are obviously not working, so some unit tests that rely on 'lay' and 'zmask' will fail.
This patch does not address the change of moving samples to ViewRender (I have this as a separate patch and it needs some separate discussion).
Following are the individual notes of the parts that were committed.
Note 1: It is up to Cycles to still get rid of exclude_layer internally.
Note 2: Cycles still needs to handle its own doversion for the use_layer_samples cases and
(1) Remove the override as it is
(2) Add a new override (scene.cycles.samples) if scene.cycles.use_layer_samples != IGNORE
Respecting the expected behaviour when scene.cycles.use_layer_samples == BOUNDED.
Note 3: Cycles still needs to implement the per-object holdout
(similar to how we do shadow catcher).
Note 4: There are parts of the old (Blender Internal) rendering pipeline that are still
using lay, e.g., in shi->lay.
Honestly it will be easier to purge the entire Blender Internal code away instead of taking things from it bit by bit.
Reviewers: sergey, campbellbarton, brecht
Differential Revision: https://developer.blender.org/D2919
There are parts of the old (Blender Internal) rendering pipeline that are still
using lay, e.g., in shi->lay.
Honestly it will be easier to purge the entire Blender Internal code away instead
of taking things from it bit by bit.
We cannot assume a render layer does not have a setting that was needed for
compositing. Even if:
```
(scene->lay & render_layer->lay) != (scene_lay) &&
(render_layer->lay | render_layer->lay_exclude) == 0))
```
Which would mean use the scene layers just as they are.
Note: Cycles still needs to handle its own doversion for these cases and
(1) Remove the override as it is
(2) Add a new override (scene.cycles.samples) if scene.cycles.use_layer_samples != IGNORE
Respecting the expected behaviour when scene.cycles.use_layer_samples == BOUNDED.
This adds the possibility to simulate things like red ears with strong backlight, or materials with high scattering distances.
To enable it you need to turn on the "Subsurface Translucency" option in the "Options" tab of the Material Panel (and of course have "regular" SSS enabled in both the render settings and the material options).
Since the effect adds more overhead, I prefer to make it optional. But this is open to discussion.
Be aware that the effect only works for direct lights (so no indirect/world lighting) that have shadowmaps, and is affected by the "softness" of the shadowmap and its resolution.
Technical notes:
This is inspired by http://www.iryoku.com/translucency/ but goes a bit beyond that.
We do not use a sum of gaussians applied with regard to the object thickness; instead we precompute a 1D kernel texture.
This texture stores the light transmitted to a point at the back of an infinite slab of material of varying thickness.
We make the assumption that the slab is perpendicular to the light, so that no fresnel or diffusion term is taken into account.
The light is considered constant.
If the setup is similar to the one assumed during the profile baking, the realtime render matches the Cycles reference.
Due to these assumptions, the computed transmitted light is in most cases too bright for curvy objects.
Finally, we jitter the shadow map sample per pixel so we can simulate dispersion inside the medium.
The radius of the dispersion is in world space and derived from the "soft" shadowmap parameter.
The idea for this comes from this presentation: http://www.iryoku.com/stare-into-the-future (slide 164).
Just removing it; such cases are not bottlenecks and are not worth the
complication of doing real threading with our own BLI_task.
Other (remaining) usages may be relevant, they need a case-by-case check.
This was expected behavior for over-exposed lamps when the mode was originally
created for Tears of Steel. Turns out there can be really bad green screens in
real production, which only have the green (or rather, screen) channel
over-exposed.
Tweaked the condition so that we now use the least bright channel to see if
the area has proper exposure or not.
Seems to work fine in tests, but further tweaks are possible.
This is tricky since we may want granular polling depending on the setting.
Or an option to pick whether we want the context or the scene to drive the
panels to prevent too many panels when mixing Eevee and Cycles for example.
This way we don't modify the scene to get the current frame from. It will also
hopefully let us get rid of the Scene stored in ModifierData.
Only did this for the Wave modifier for now, maybe someone is around to check
on other modifiers? :)
It was dangerous to do such calculations, and now it is solvable by making the
dependency graph more granular in this case. Removing the workaround also saves
us the hassle of passing lots of extra arguments down the evaluation routines.
In theory, we can now also remove EvaluationContext from constraints evaluation.
But it's probably better to wait with such a removal for now.
This commit effectively reverts 1130c53. Will do a proper fix in dependency
graph itself.
CUDA 9.0.176 apparently caused some slowdown on high-end Pascal cards that can be mitigated by increasing the number of registers. See https://developer.blender.org/F1142667 for a detailed comparison.
Was giving different results when using a sharpness of 1.0 vs. 0.999, even
though the results were expected to be really close to each other.
This SSS profile will probably be removed in the future in favor of the more
physically based Burley one, but for the time being I don't see anything wrong
with fixing the existing code.
Regression from rB823bcf1689a3 (VPaint 2017 GSoC, this is not in 2.79 release).
Also cleanup: using fake-array-ification to access struct members is
generally not a great idea, but when we already have a totally confusing
broken struct layout, it is pure evil, as demonstrated here!
Found while investigating T53341.
- initialize the cube-size from the bounding box when it's not set.
- no longer wrap faces to keep them in 0-1 bounds;
other projection methods don't do this, and calculating the scale
prevents the UVs from being too far outside the view.
Was using the cursor position from within the menu,
clicking on the same position for every selected item (toggling).
Now operate on each selected outliner element, without toggling.
This commit introduces the following changes:
* Modified the poll callback on the "Update Paths" operator for bones
so that it only checks if there are bones that have motion paths
(instead of checking whether the active bone has paths).
This makes it easier to update paths without having to first select one
that has them - useful when the paths are all on hidden/hard-to-select bones.
* Add a readonly property, "has_motion_paths" to the animviz.motion_path
RNA struct, providing easier access to the internal flag used above.
This makes it possible for the UI to display the "Update" button without
having to check various bones for motion paths.
Notes:
* The flag being used in these changes already existed, and was only really
intended for internal use. However, since it was already used in many places
for determining if auto-update of all bone paths was needed (e.g. after certain
editing ops), it should be safe to use here too.
* The update_paths operator currently bakes all paths when activated, so there's
currently no loss of functionality from no longer checking whether the active
bone has paths (e.g. we couldn't update only the active bone before either).
That is still listed as a todo in the code.
There were 2 issues here (first was the one reported):
1) Curve shape changes if multiple consecutive pairs of keyframes
are selected. The problem is that after the first pair is handled,
subsequent pairs get sampled on the basis of the modified curve.
2) With multiple separate "islands" selected, unselected points in between
would get ignored, causing the entire curve to get sampled.
Previously, Mikktspace just bucketed the vertices based on one spatial coordinate and then ran full pairwise comparisons inside each bucket.
However, since models are three-dimensional, the bucketing has a massive false-positive rate, and since pairwise comparison is O(n^2), the merging process is very slow.
But since we only care about exactly identical vertices, there is a much more efficient approach - we can just hash all values belonging to each vertex and form buckets based on the hash.
Since the hash has 32 bits and considers all values, false positives are very unlikely - and since both hashing and the radixsort that's used for bucketing are O(n), both asymptotic and
real-world performance (as well as code complexity) are significantly improved.
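A small sketch of the hashing idea (FNV-1a is chosen here purely for
illustration; the hash actually used may differ):
```
#include <stddef.h>
#include <stdint.h>

/* Hash every float value belonging to a vertex (position, normal, UV, ...).
 * Vertices are then radix-sorted by hash, and the exact pairwise comparison
 * only runs inside buckets of equal 32-bit hashes. */
static uint32_t vertex_hash(const float *values, int num_values)
{
  uint32_t h = 2166136261u; /* FNV-1a offset basis */
  const unsigned char *bytes = (const unsigned char *)values;
  for (size_t i = 0; i < (size_t)num_values * sizeof(float); i++) {
    h = (h ^ bytes[i]) * 16777619u;
  }
  return h;
}
```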
No color pass because it's hard to define what to use as color in a volume.
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D2903
UV project mixed up global/local space,
3D cursor offset didn't take object scale into account.
Minor improvements:
- Match Cube Project 'center' behavior w/ sphere & cylinder.
- Add active-element center.
- Wrap UVs in Cube Project based on center instead of first vertex.
This seems to be a correct implementation of the same diffusion profile as Cycles uses by default.
There are a few biases though:
- We consider _A_, the albedo, to be 1 when evaluating _s_.
- We use a factor of 0.6 when computing _d_ to more or less match Cycles' results.
Note that doing per-pixel jittering biases the result even further (loss of energy).
random_id() crashes when there is no current dupli object.
We could also throw a Python error when doing it via RNA, but as far as
Cycles is concerned we need to check if instanced.
The loc one (shift-alt-G) was the same as the 'remove selected from active
group' action... Clear delta transform is not a common operation, so we can
live without a default shortcut for it.
Note that using same key (G) in same space for two completely different
kind of operations is probably a rather bad thing, nice topic for future
keymap work. ;)
Probably nice to have in 2.79a.
This was caused by 93936b8643
From the GL spec:
GL_INVALID_OPERATION is generated if mask contains GL_DEPTH_BUFFER_BIT or GL_STENCIL_BUFFER_BIT and the source and destination depth and stencil formats do not match.
So blitting a framebuffer with depth or stencil requires the SAME FORMAT.
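In other words (illustrative GL calls, not the exact fix):
```
/* Per the quoted spec text: a depth/stencil blit is only valid when both
 * framebuffers use the same depth/stencil format, and must use GL_NEAREST. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, src_fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dst_fbo); /* same depth format as src */
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);
```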
The issue was caused by the SpinLock implementation in the old pthreads we are
using on Windows. Using a newer one (2.10-rc) demonstrates the exact same
behavior. But using our own atomics- and memory-barrier-based implementation
likely solves the issue.
A bit annoying that we need to change such a core part of Blender just to make
a specific CPU happy, but it's better to have artists happy on all computers.
There are no expected downsides to this change, but it is in the so-called
"works for me" category. Let's see how it all goes.
Samples: pretty self-explanatory.
Jitter Threshold: reduce cache misses and improve performance (greatly) by lowering this value. This setting lets the user decide how many samples should be jittered (rotated) to reduce banding artifacts.
How to use:
- Enable subsurface scattering in the render options.
- Add Subsurface BSDF to your shader.
- Check "Screen Space Subsurface Scattering" in the material panel options.
This initial implementation has a few limitations:
- only supports gaussian SSS.
- Does not support principled shader.
- The radius parameters are baked down to a number of samples and then put into a UBO. This means the radius input socket cannot be used. You need to tweak the default vector directly.
- The "texture blur" is considered as always set to 1.
Add the 3D view ruler as a tool,
the modal operator remains for now
however it may be removed if we use the tool-system for 2.8.
Note that this does copy code from the operator;
it's different enough that de-duplication isn't worth attempting.
This is not the commit you are looking for ...
This is not to be used lightly. But sometimes we change the name of the collections,
the initial value they have, ... and this helps to quickly update the tests.
Faces that have the last two indices equal are considered triangles, not those whose last index is 0.
Improvement of 7% in the performance of the `polygonize` function.
We now create nested collections for the original Collection 1, 2, ...:
collections for the "hide" and "hide_render" objects.
Also, remove the logic for renaming single-collection files; the name is now
kept as it was originally (Collection 1, Collection 5, ...).
Thanks Sergey Sharybin and Pablo Vazquez for patch review and suggestions.
There were some changes to namespaces, which caused ambiguities.
Replaces `using namespace` with the explicit symbols we need. It is a good
idea to NOT pull in the whole namespace anyway!
Previously we picked one of the RGB channels with equal probability, but this
works poorly in a dense volume after many bounces. Now we take into account
the throughput and single scattering albedo.
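As a sketch of what such a weighting implies (my notation, not taken from
the patch): with per-channel throughput $T_i$ and single scattering albedo
$\alpha_i = \sigma_{s,i} / \sigma_{t,i}$, the probability of picking
channel $i$ becomes

$$P_i = \frac{T_i \, \alpha_i}{\sum_j T_j \, \alpha_j},$$

so channels that no longer carry any energy are rarely sampled.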
This makes it a little more practical to do brute force SSS with volumes, but
is still very inefficient because we do direct light sampling at every volume
bounce even when inside an opaque mesh. In theory there could be a light inside
the mesh so we can't automatically disable direct lighting.
In fact this was an existing issue when exceeding the number of available
closures, but it's more common now that we set the number to 0 for shadows
and emission.
Checked really old revisions; it seems like this was never used. So it
doesn't matter for compatibility either (tested opening files saved with
this in 2.49).
Before this patch, the XBlur/YBlur compositor nodes would crash for me when run in an MSVC 2015 debug build (test scene: BMW27_cpu). I added the compiler instructions to explicitly align the local variables that the SSE instructions are accessing.
The only remaining part is the particle stuff, which needs a pointer to the
exact particle system, and that does not exist yet. Leaving it for a bit
later, until it's more clear what we do with particles.
Unless I'm mistaken, we've got all the proper CoW pointers bound now.
This is a final step of having proper ownership. Now selecting different
layers in the "top bar" will actually do what this is expected to do.
Surely, there are still things to be done under the hood, that will happen
in a less intrusive way.
This was wrong since its conception in 28ee0f9218.
The if statement was returning true when pinid was NULL, and false otherwise.
However, when the scene is pinned we also want to run this code.
Code snippet by Brecht Van Lommel.
Goal is to reduce OpenCL kernel recompilations.
Currently viewport renders are still set to use 64 closures as this seems to
be faster and we don't want to cause a performance regression there. Needs
to be investigated.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2775
While getting rid of Scene->base we got the following fixes:
* Fix "Convert To" operator
* Fix "NLA allowing to selected objects that are not selectable
* Fix scene.objects (readonly, no option to link/unlink)
Note: Collada needs to use the context SceneLayer for adding objects
however I added a placeholder, so Collada maintainers can fix this
properly.
The idea is to allow iterating over ID nodes in exact order of their
construction, and in order which will not change dependent on memory
pointers or anything.
Depsgraph itself is still created for the whole scene rather than for a
single layer; this is to be addressed next.
The storage for those dependency graphs is in scene, but now it is a hash
indexed by layer. In the future we can extend hash key to include extra
information (workspace? window?).
This fixes the issue for the Draw Manager, but for Cycles this is still not
working. The iterator bpy.context.depsgraph.duplis seems to be correct though.
* Fix saving a multiview render from the image editor giving invalid files.
* Fix failure to load multiview images with a single view per part.
* Fix loss of multiview metadata when saving/loading a single view.
* Fix Z-Buffer writing option for single layer EXR not being respected.
Multiview EXRs are now always handled as multilayer internally, significantly
reducing the amount of code.
Reviewed By: dfelinto
Differential Revision: https://developer.blender.org/D2887
The algorithm averages normals from nearby surfaces. It uses the same
sampling strategy as BSSRDFs, casting rays along the normal and two
orthogonal axes, and combining the samples with MIS.
The main concern here is that we are introducing raytracing inside
shader evaluation, which could be quite bad for GPU performance and
stack memory usage. In practice it doesn't seem so bad though.
Note that using this feature can easily slow down renders 20%, and
that if you care about performance then it's better to use a bevel
modifier. Mainly this is useful for baking, and for cases where the
mesh topology makes it difficult for the bevel modifier to work well.
Differential Revision: https://developer.blender.org/D2803
This causes some difference in the classroom scene, where ray visibility
tricks are used and break the MIS balance. Otherwise there doesn't seem
to be much effect, but better to use the right formulas. Problem originally
identified by Lukas.
We now initialize iter.valid as true as part of the main iterator (and manually
when using via Python). And we don't even bother setting iter->current to NULL
if it's invalid. Let's stick to using iter->valid only.
To help diagnose issues like T53259, it is useful to know the module causing the issue (is it us, or some opengl icd, or python module?) and while we cannot do stackdumps on release builds on windows, it is possible to display the faulting module. This commit changes the exception handler to output the following information:
Error : EXCEPTION_ACCESS_VIOLATION (Type of exception , this we had before)
Address : 0x0000000140193726 (Address of the exception, new)
Module : k:\BlenderGit\build_windows_Full_noge_x64_vc15_Release\bin\Release\blender.exe (module of the exception, new)
This was leading to crashes in Cycles, as well as a misleading
len(bpy.context.depsgraph.objects)
I could even move iter->skip into DEGObjectsIteratorData instead of
BLI_Iterator, but if I do, it will be a separate commit.
Thanks Sergey Sharybin for the well done sample file and patch suggestion.
This was originally done as a fix for T37713, but now this workaround becomes
tricky, since we don't know which layers to update the scene for. Even more,
the render engine is supposed to have its own dependency graphs, and those do
not exist yet at file open time.
Keep an eye on T37713, since that's where the original workaround is coming
from.
Need to ensure Render has a proper dependency graph.
While this is a subject of re-design (the render pipeline should manage all
the dependency graphs it needs, and not demand external users to provide a
depsgraph), it is good to have something working, so we can run regression
tests and such.
Although this works by itself, it should actually happen after:
"Reshuffle collections base flags evaluation, make it so object is gathering
its base flags from collections."
Meanwhile we have one single hacky function (deg_flush_base_flags_and_settings)
to be removed once the task above is tackled.
Reviewers: sergey
Differential Revision: https://developer.blender.org/D2899
This gets rid of the bottleneck of allocating / freeing thousands of elements every frame.
Cache time (Eevee) (test scene is default file with cube duplicated 3241 times)
pre-patch: 23ms
post-patch: 14ms
This makes the code closer to the id_override/asset-engine ones, which
introduce a new type of linked data, and hence reserve
ID_IS_LINKED_DATABLOCK for real linked datablocks.
Inner DAG code would not check against a NULL pointer, and in case of an
active linked scene, the scene pointer will be NULL here, so we have to
check it ourselves. ;)
New access is C.scene.render_layers.active.depsgraph. This will give the
depsgraph for a given layer. In the future some extra context will need
to be passed.
With a Titan Xp, reduces path trace local memory from 1092MB to 840MB.
Benchmark performance was within 1% with both RX 480 and Titan Xp.
Original patch was implemented by Sergey.
Differential Revision: https://developer.blender.org/D2249
This is a prerequisite for getting host memory allocation to work. There
appears to be no support for 3D textures using host memory. The original
version of this code was written by Stefan Werner for D2056.
This is a lot of changes, but they boil down to a simple API
change where we are no longer relying on implicit usage of the scene's
depsgraph and pass the depsgraph explicitly.
There should be no user measurable difference, render_layer* tests
are also passing.
Some drivers may report very large allocation sizes, which could cause
unnecessary memory usage. This is now limited to 2GB, which should
still be enough to get the needed performance benefits without waste.
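Roughly, the clamp described amounts to (names illustrative):
```
/* Clamp the driver-reported maximum single allocation to a sane upper
 * bound; huge reported values would otherwise inflate memory usage. */
size_t max_alloc_size = reported_max_alloc_size; /* from the driver */
const size_t limit = (size_t)2 * 1024 * 1024 * 1024; /* 2GB */
if (max_alloc_size > limit) {
  max_alloc_size = limit;
}
```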
The legacy algorithm only considers two adjacent points when computing
the bezier handles, which cannot produce satisfactory results. Animators
are often forced to manually adjust all curves.
The new approach instead solves a system of equations to trace a cubic spline
with continuous second derivative through the whole segment of auto points,
delimited at ends by keyframes with handles set by other requirements.
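For context, the standard uniform-spacing form of such a solve (my
notation; the actual patch also handles non-uniform frame spacing and end
conditions): requiring a continuous second derivative links each tangent
$t_i$ to its neighbours through the tridiagonal system

$$t_{i-1} + 4\,t_i + t_{i+1} = 3\,(y_{i+1} - y_{i-1}),$$

where $y_i$ are the keyframe values, solved over the whole segment at once
instead of looking only at two adjacent points.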
This algorithm also adjusts Vector handles that face ordinary bezier keyframes
to achieve zero acceleration at the Vector keyframe, instead of simply pointing
it at the adjacent point.
Original idea and implementation by Benoit Bolsee <benoit.bolsee@online.be>;
code mostly rewritten to improve code clarity and extensibility.
Reviewers: aligorith
Differential Revision: https://developer.blender.org/D2884
This fix enables the usage of bbones easing parameters for edit and pose mode separately. This allows animators to take advantage of the functionality and may eliminate confusion, as the parameters now behave similarly to other bbone parameters.
Note that splitting the parameters between the modes effectively creates a new parameter set. Blend files of previous versions do not contain this information and will have the values set to 0 on load. As it broke backwards compatibility for pose mode values anyway, I also took the liberty to rename the easing parameters in some places for consistency (which breaks edit mode values).
Reviewers: aligorith
Subscribers: aligorith
Tags: #animation
Differential Revision: https://developer.blender.org/D2796
This also:
- makes sure to only compile the shaders needed by the active effects.
- same thing for the shading groups.
- disables TAA if motion blur is active (avoids infinite refresh).
The idea is to make it possible to report extra metadata from the
render engine to the file writing code. This way we can provide
additional information such as the number of samples rendered by
resumable Cycles rendering, so we can easily combine files back.
Currently we only report the number of samples from Cycles when rendering
a single render-layer scene. This is something that was required
here at the studio. We can easily extend that further.
Ideally we would also need to support non-string metadata, but
that's for later.
Reviewers: mont29, campbellbarton
Reviewed By: mont29, campbellbarton
Subscribers: sybren, candreacchio
Differential Revision: https://developer.blender.org/D2502
If an object is in any visible collection, the object will be visible.
This behaviour has changed in 9ad2c0b615.
If it will change again, it will be for:
https://developer.blender.org/D2878
Bug introduced in 1c4c288727 (well, technically in b48694639a).
We should not remove the render layer from the context, but instead the one
that is active in the scene.
That said, the UI should make a distinction between the scene's active render
layer and the one that is active in the UI (and the latter should be the one
used when removing).
But for now this is at least more consistent for the users.
cmake's link_directories will supply forward slashes for the search paths, and the msvc linker has some issues with that: while it will search for the needed libs just fine, the incremental linker gets fed forward slashes for some libs, while the previous binary has backward slashes in its metadata; the linker then assumes obj files got added and performs a full link instead of an incremental one. This change brings the link time with newer msvc versions for a trivial edit down from a few minutes to a few seconds.
- Only basis balls are exported, as they represent the resulting mesh.
As a result the mesh is written to Alembic using the name of the basis
ball.
- MetaBalls are converted to a mesh on every frame, then an
AbcMeshWriter is used to write that mesh to Alembic.
When the mesh changed topology but kept the vertex count the same, it would
result in a corrupt mesh. By checking the face & loop counts too, this has
become less likely.
I've checked IPolyMeshSchema::isConstant(), but it returns true even when
we see that the mesh changed topology.
Recent addition of 'reinsert' didn't match logic for ghash API.
Rename to BLI_heap_node_value_update,
also add BLI_heap_insert_or_update since it's a common operation.
The single byte version of hash_data was casting from unsigned char
instead of signed.
This didn't cause any errors since the result of each aren't compared.
Even so, better keep them matching.
It should behave like Cycles.
Even if it's not efficient at all, we still do the same create - draw - free process that was done in the old viewport, to save VRAM (maybe not really the case now) and to not care about the simulation's GPU texture state sync.
This is quite basic, as it only supports bounding boxes.
But the material can refine the volume shape in any way the user likes.
To overcome this limitation, a voxelization should be done on the mesh (generating an SDF maybe?) and tested against every volumetric cell.
The system now uses several 3D textures in order to decouple every step of the volumetric rendering.
See https://www.ea.com/frostbite/news/physically-based-unified-volumetric-rendering-in-frostbite for more details.
On the technical side, instead of using a compute shader to populate the 3D textures, we use layered rendering with a geometry shader to render 1 fullscreen triangle per 3D texture slice.
Basically reverts rB65c4149f203610 and fixes the issue in a better way.
Keymaps using the removed operator will be affected. Switching header
from top to bottom now has the shortcut F5, just like switching other
regions.
This moves background images out of the 3D viewport,
to be used only as camera reference images.
For 3D viewport references,
background images can be used, see: D2827
Some work is still needed
(background option isn't working at the moment).
Tool for creating polygons, exact usage may change based on feedback.
LMB to add faces at boundaries (tris from edges, quads from verts).
- Ctrl splits edges
- Alt to dissolve edges/verts.
Works well with vertex snap & auto-merge.
This uses selection hover but isn't intended to introduce more widely
pre-selection highlighting, at least it will be restricted to this tool.
We are already using the AO distance, so might as well offer this extra
control over the intensity. Useful when an interior scene is supposed to
be significantly darker than the background shader.
This would break when using preview in the VSE. We now use the scene engine,
not the workspace engine.
That said, we could have the preview engine defined as part of the sequence
strip, as we had for draw modes in the past. But this is a separate topic for
a separate patch.
This issue in particular was introduced in e4f2b2be26.
Note: VSE preview is still broken in two cases:
* If you have Eevee as the engine in the Scene of the Scene strip.
* If you use Clay, save the file, and re-open.
Objects from a set scene get flattened out into the active scene's depsgraph,
so it is a big question why we need to build dependency graphs for set scenes.
Make it a tag for the relations update function instead, since we will not be
able to easily rebuild relations, and we wouldn't be able to iterate over all
scenes.
This is a part of moving the depsgraph to be per-workspace/layer in the 2.8 branch.
This replaces usage of the generic PLACEHOLDER with string lookup with a more
explicit opcode. This should make it faster to build the dependency graph by
avoiding string comparisons when they are not needed.
There should be no user-measurable difference.
Was never actually used, and the implementation seems to be slow: we shouldn't
be doing per-node evaluation hash lookups, it adds too much overhead. We can
instead store statistics in the node itself, and maybe even group them somehow.
Ideally such statistics should be user-friendly, so riggers and animators
can see exactly what's happening.
Steps to reproduce were:
* Open Blender, create a new scene
* Go back to initial scene, transform object
* Switch back to newly created scene, change operator settings there
* Should cause a crash (at least with asan)
Should behave like 2.7 now, that is, switch scene back to where
operator was executed.
We wouldn't know which dependency graphs need reconstruction / are safe to
reconstruct, so rather use the API which tells us that relations are out of
date. This way graph evaluation will take care of the rest.
Committing to 2.8 only, since it's where we can't reliably know the graph,
and it is probably not that safe to apply this in master.
This needs to be re-implemented in a new fashion, without touching global list
of bases and become compatible with the new dependency graph.
The idea to go here would be to create new dependency graph for motion path
evaluation, bring a single object in there (which will pull all dependencies
at a construction) and use that.
Needs working copy-on-write first, though.
It will not be possible to do that after the depsgraph becomes more context
oriented. Which means all code will need to explicitly tell which graph
to free.
This is a first step towards an updated API where we pass an explicit graph
rather than a scene. This is because we can no longer deduce which graph to
use, since it will depend on a context.
Will happen in several steps, so bisecting will not be such a pain.
This is a corner-case, but one that is too easy to reproduce:
* Unlink all the collections of active view layer.
* Link a group without "Instancing" it.
The test will leak CPU devices, but is all passing other than that.
Leak will be fixed shortly.
P.S. Committing code refactor without running regression tests, tsk ;)
* Remove tex_* and pixels_* functions, replace by mem_*.
* Add MEM_TEXTURE and MEM_PIXELS as memory types recognized by devices.
* No longer create device_memory and call mem_* directly, always go
through device_only_memory, device_vector and device_pixels.
Was actually possible to invoke this assert failure in two ways:
* Transforming in a newly created 3D View (as described in the report).
* Transforming in a newly appended workspace from the default workspaces.blend. The issue was that the default workspaces.blend was saved in 2.8.1, but in a branch state that didn't include the transform-orientation changes. So versioning code wouldn't run when needed.
Note that files saved with this bug will still cause the assert to
fail. Can be ignored then.
This is not related to manipulators (as suggested in the report).
rna_scene.c was getting way too big, with data that was related to
DNA_layer_types.h.
I tried doing it earlier, but failed. But now with the new changes I think
it's better to do this sooner rather than later.
Was using an edge hash for triangle -> edge lookups,
updating triangle indices for each edge-rotation.
Replace this with half-edge which can rotate edges much more simply,
writing triangles back once the solution has been calculated.
Gives ~33% speedup in own tests.
Progressive refine undoes memory saving from save buffers, so enabling
both does not make much sense. Previously enabling progressive refine
would disable denoising, but it should be the other way around since
denoise actually affects the render result.
Includes some code refactor for progressive refine render buffers, and
avoids recomputing tiles for each progressive sample.
CPU rendering will be restricted to a BVH2, which is not ideal for raytracing
performance but can be shared with the GPU. Decoupled volume shading will be
disabled to match GPU volume sampling.
The number of CPU rendering threads is reduced to leave one core dedicated to
each GPU. Viewport rendering will also only use GPU rendering still. So along
with the BVH2 usage, perfect scaling should not be expected.
Go to User Preferences > System to enable the CPU to render alongside the GPU.
Differential Revision: https://developer.blender.org/D2873
The tool-system itself is primitive and may be changed.
Adding to 2.8 to develop operators and manipulators as tools.
Currently this is exposed in the toolbar, collapsed by default.
Work-flow remains unchanged if you don't change the active tool.
Placing the 3D cursor is now a Click instead of a Press event,
this allows tweak events to be mapped to tools such as border select,
keeping click for 3D cursor placement when selection tools are set.
This way evaluation routines will know which exact depsgraph the evaluation
is happening for.
Mainly needed to get evaluation flags associated with ID nodes.
The idea is following: we do need to have multiple dependency graphs to denote
different scene layers (depsgraph should only contain objects from a specific
scene layer), and we also want to support same scene layer to be evaluated to
a different state in different windows. In order to achieve that we do need to
have a list or hash (for faster lookup presumably) somewhere. To keep things
easier for now, it will be a scene which owns that hash. This seems to make
sense anyway, since dependency graph only points to data which is owned by
scene.
This commit only introduces some basic API and hash itself stored in DNA, there
is no changes in behavior. See this as a first step towards getting rid of
scene-global dependency graph.
While such drivers will generally get evaluated too late to be of much
use during animations, it can still be useful to allow using drivers to
control a whole bunch of NLA strip properties (i.e. syncing NLA strip
timings via a single property/control).
Keyframe insertion however is still not allowed on these properties
(and an error message will now be displayed when trying to do so,
instead of silently failing), as it is useless.
This was never correctly implemented. It now works as expected (a la 2.79 behaviour).
The proxy object is added to all the collections of the original empty.
Before not only this wasn't the case, but it would crash Blender.
Loading blender with an unknown file name would interpret it as a blend file.
This meant passing `--arg` style arguments would end up creating new
blend files, which could be confusing if you made a typo on a command
line argument.
Now check the string has a blend file extension,
exiting if it doesn't.
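A minimal sketch of such a check (the helper name and the exact set of
accepted extensions are illustrative):
```
#include <stdbool.h>
#include <string.h>
#include <strings.h> /* strcasecmp */

static bool looks_like_blend_file(const char *path)
{
  const char *ext = strrchr(path, '.');
  /* Real code would likely also accept backups such as .blend1, .blend2. */
  return (ext != NULL) && (strcasecmp(ext, ".blend") == 0);
}
```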
This only applies when the LIB_ID_CREATE_NO_ALLOCATE flag is used, and
guarantees that non-memset-zero memory can be used (or that the same memory
chunk might be used over and over again, without the caller needing to clean it).
Our beloved root nodetrees... Had to check the code again to understand
why we copy them with bmain even though they are not in bmain, so this
is worth a comment. ;)
Changes from D2876 by @meta-androcto /w own edits
Move 3x undo items into Undo menu,
these are such common operations they're typically accessed by keys.
Also add to menus which didn't have undo
(seemed random which modes had undo, undo history in their menus).
Changes from D2876 by @meta-androcto /w own edits
- Move view axis & camera selection into "Viewpoint" menu.
- Move render border and clipping into border menu.
- Move Camera operators into own menu.
- View Selected was located in two menus,
only expose the "use_all_regions" version when quad-view is used.
This solves an issue with the user counter on materials, objects and such,
and additionally avoids having too much overhead from temporary locks and
datablock allocation.
Still need to do a similar thing for scene copy, and look into nested
ID datablocks somehow.
Before, it was a compile-time option which was not very easy to use or test.
Now the project is getting more mature, so very soon we will be able to call
for public tests of limited features.
The copy-on-write (which includes animation, modifiers) is enabled using
--enable-copy-on-write command line argument.
It is fully unpredictable for artists when one damaged object makes the whole
scene render incorrectly. This involves two main changes:
- It is not enough to check triangle bounds to be valid when building the BVH.
This is because a triangle might have some finite vertices and some non-finite ones.
- We shouldn't add a non-finite triangle area to the overall area for MIS.
Cyclic extrapolation is implemented as an f-curve modifier, so this
technically violates abstraction separation and is something of a hack.
However without such behavior achieving smooth looping with cyclic
extrapolation is extremely cumbersome.
The new behavior is applied when the first modifier is Cyclic
extrapolation in Repeat or Repeat with Offset mode without
using influence, repeat count or range restrictions.
This change in behavior means that curve handles have to be updated
when the modifier is added, removed or its options change. Due to the
way code is structured, it seems it requires a helper link to the
containing curve from the modifier object.
Reviewers: aligorith
Differential Revision: https://developer.blender.org/D2783
It seems that `typestr` does not always define the final size of the element, and it varies by operating system.
So use `typestr` only to know whether the item type is `float` or not.
Engine is now stored in WorkSpaces. That defines the "context" engine, which
is used for the entire UI.
The engine used for the poll of nodes (add node menu, new nodes when "Use Nodes")
is obtained from context.
Introduce a ViewRender struct for viewport settings that are defined for
workspaces and scene. This struct will be populated with the hand-picked
settings that can be defined per workspace as per the 2.8 design.
* use_scene_settings
* properties editor: workspace + organize context path
Use Scene Settings
==================
For viewport drawing, Workspaces have an option to use the Scene render
settings (F12) instead of the viewport settings.
This way users can quickly preview the final render settings, engine and
View Layer. This will affect all the editors in that workspace, and it will be
clearly indicated in the top-bar.
Properties Editor: Add Workspace and organize context path
==========================================================
We now have the properties of:
Scene, Scene > Layer, Scene > World, Workspace
[Scene | Workspace] > Render Layer > Object
[Scene | Workspace] > Render Layer > Object > Data
(...)
Reviewers: Campbell Barton, Julian Eisel
Differential Revision: https://developer.blender.org/D2842
NODE_NEWER_SHADING was introduced in e868b459bb, however it should have been
added as a bitflag.
BKE_scene_uses_blender_eevee() was used in gpu_shader_output() as a workaround
for the compatibility flag being poorly used.
Anyway, this fixes the situation. This is necessary for an upcoming patch, even
though it is considered temporary - since the other NODE_*_SHADING values are
legacy from Blender Internal drawing.
Border and circle select wait for input by default.
This commit uses bool properties on the operators instead of
magic number (called "gesture_mode").
Keymaps that define 'deselect' for border/circle select
begin immediately, exiting on button release.
There was noise correlation between the rotation random number and the radius random number used in the contact shadow algorithm.
Hacked a new distribution from the old one (may not be ideal because its discrepancy may be high).
Also distribute samples evenly on the shadow disc (add sqrt).
Fix the "bias floating shadows": this was caused by discarding backfacing geometry, which makes no sense in this case.
User count of scenes was inconsistent: screens only have a 'user_one' kind
of ownership over scenes, which means they shall never increment or
decrement the real user count. And usually, scenes have no real user
at all.
Would happen during a panel's refresh drawing, if the drawing code had to
adjust the final panel position compared to the initial one computed based
on the mouse coordinates, and the user had dragged the floating panel around.
Issue fixed by adjusting the stored mouse coordinates once the final panel
position is known, such that they would directly generate those
coordinates. That way, the basic offset applied to those stored mouse
coordinates during panel dragging is valid, and recreating the panel based
on them won't make it jump on screen.
Note that the panel will still jump in case the user dragged it partially
out of view - we could prevent that, but imho it's better to keep that
behavior, since a redraw can generate a popup of different size, which
could end up totally out of view...
Hopefully this fix does not break anything else!
This adds a custom depth test that has the benefit of glitching less and being more visually pleasing.
The downside is that it lets the grid pass through objects a little.
This effect is done in NDC space, so that it counteracts the logarithmic depth distribution imprecision (read: it's less visible near the camera but more present far away).
This patch also includes some cleanups.
Was a mistake in an optimization commit, which was disconnecting closures and
nodes in a way that does not make sense for the volume output.
We can't ignore OSL scripts, and we can't currently know in advance whether
one is a proper volume shader or not. So we never disconnect OSL nodes from
the volume output.
This is a good candidate for corrective release.
This patch moves away from using C++ RNA during tangent space calculation,
which avoids quite a bit of overhead. Now all calculation is done using data
which already exists in ccl::Mesh. This means tangent space is now calculated
from triangles, which doesn't seem to give any different results (at least as
far as regression tests are concerned).
One of the positive sides is that this change makes it possible to move tangent
space calculation from blender/ to render/ so we will have Cycles standalone
supporting tangent space.
Reviewers: brecht, lukasstockner97, campbellbarton
Differential Revision: https://developer.blender.org/D2810
Everything was fine if a batch was always used with instancing. But a problem arises if the next drawcall for this batch is not using instancing, as the attrib divisor stays set to 1 in the VAO.
As instancing is used less than normal drawing, I prefer to reset the divisor after drawing instances, given that it is set before drawing them anyway.
This was caused by small float precision being insufficient. The blue component of R11F_G11F_B10F has lower precision than the other 2 components. This resulted in colors drifting towards a yellowish tone.
Now using RGBA16F for the concerned buffer. This doubles the memory usage of the framebuffers and adds corresponding bandwidth usage.
This bug (explained here: https://github.com/dfelinto/opengl-sandbox/blob/downsample/README.md) is breaking Eevee beyond the point where it is workable.
This patch works around the issue by making sure every fbo has mipmaps that are strictly larger than 16px. This breaks the bloom visuals a bit, but only for this setup.
This change affects CUDA GPUs not connected to a display or connected to a
display but supporting compute preemption so that the display does not
freeze. I couldn't find an official list, but compute preemption seems to be
only supported with GTX 1070+ and Linux (not GTX 1060- or Windows).
This helps improve small tile rendering performance further if there are
sufficient samples x number of pixels in a single tile to keep the GPU busy.
Best guess is that cuInit() somehow interferes with the AMD graphics driver
on Windows, and switching the initialization order to do OpenCL first seems
to solve the issue.
* Use common TextureInfo struct for all devices, except CUDA fermi.
* Move image sampling code to kernels/*/kernel_*_image.h files.
* Use arrays for data textures on Fermi too, so device_vector<Struct> works.
Instead of trying to be clever with swaps and lazily updating the weight
data, simply recalculate one single array. To improve performance, use
threading for that.
This was introduced in 9ad2c0b615.
Although this still doesn't fix the issue, it updates the preview
system to use COLLECTION_DISABLED as intended.
What is missing now is for the flushing to work effectively.
This adds the possibility to add screen-space raytraced shadows, to fix light leaking caused by shadow maps.
These inherit the same artifacts as other screen-space methods.
Two issues here:
- Checking the table size to be non-zero is not a proper way to go here. This
is because we first resize the table and then fill it in. So it was possible
that a non-initialized table was used.
Trickery with using temporary memory and then doing table.swap() might work,
but we can not guarantee that the table size will be set after the data pointer.
- The mutex guard was useless, because every thread was using its own mutex.
Need to make the mutex guard static so all threads are using the same mutex.
Tried 101 but it gives collisions.
I think 257 is enough now that we don't have thousands of uniforms.
This gives some noticeable performance improvement.
Could be refined further.
The issue was caused by a light sample being evaluated to NaN at some point.
That is the root cause, which is still to be fixed, but it is very hard to
trace down, especially via ssh (the issue only happens on an AVX2 release
build). Will give it a closer look when back at my AVX2 machine.
Until then this is a good check to have anyway; it corresponds to what's
happening in the regular radiance sum.
This changes quite a few things:
- Drops the allocation of inputs as a chunk.
- Merges the linked-list system into Gwn_ShaderInput.
- Puts the name buffer into another memory block, easily resizable.
- Uses an offset instead of a char* to point to the input name.
- Adds only the requested uniforms dynamically to the Shader Interface.
This drops some minor optimisation and uses a bit more memory for small shaders (which are a fixed count),
but it saves a lot of memory when using UBOs, because the names and the Gwn_ShaderInput were alloc'ed for every UBO variable.
This also reduces the initial Shader Interface generation time.
The lookup time is left unchanged.
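A sketch of the offset-based layout described above (the struct shown here
is illustrative, not the actual Gwn_ShaderInterface definition):
```
#include <stdint.h>

typedef struct ShaderInput {
  struct ShaderInput *next; /* merged-in linked list, for hash collisions */
  uint32_t name_offset;     /* offset into the interface's shared name buffer */
  uint32_t name_hash;
  int32_t location;
} ShaderInput;

/* The name buffer can be reallocated freely: offsets stay valid where a
 * char * into the old block would have dangled. */
static const char *input_name(const char *name_buffer, const ShaderInput *input)
{
  return name_buffer + input->name_offset;
}
```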
Camera clipping was left to default values, which won't work well for
very large (or small) objects. Now recompute valid clipping start/end
based on boundingbox of rendered data, and final location of camera.
This makes the brush influence into a tube instead of a sphere.
It can be used along the outline of a mesh to adjust its silhouette.
Note that all this takes advantage of changes from vertex paint;
from testing, this seems useful, so it is exposed in the brush options.
This is an internal structure, and we don't put it in a list for anything
other than hash collision resolution. No need to have a dedicated entry here;
this saves us an extra allocation and pointer dereference.
This way we reduce the number of loops from loop-over-all-inputs to
loop-over-collisions, which is expected to be many fewer CPU ticks.
There is still a possible optimization: use a memory pool of some sort
to manage the memory needed for hash entries, but that will only speed up
shader interface construction / destruction time.
There is also some trickery happening to speed up the process even more
in the case where no hash collisions are detected when constructing the
shader interface.
This behavior makes more sense for sculpt, less so for painting.
Restores non PBVH behavior, adding `BKE_pbvh_find_nearest_to_ray` -
similar to ray-cast except it finds the closest point on the surface.
The work size is still very conservative, and this doesn't help for progressive
refine. For that we will need to render multiple tiles at the same time. But this
should already help for denoising renders that require too much memory with big
tiles, and just generally soften the performance dropoff with small tiles.
Differential Revision: https://developer.blender.org/D2856
This was originally done with the first sample in the kernel for better
performance, but it doesn't work anymore with atomics. Any benefit was
very minor anyway, too small to measure it seems.
- Separate the post-process settings into a sub-panel.
- Rename "Viewport Anti-Aliasing" to sampling & super-sampling, as it also reduces the noise of other effects.
- Remove the Temporal Anti-Aliasing toggle and make it always active unless the number of samples is 1.
Restoring weights is problematic when the stroke overlaps its mirror.
It's better to simply compute the new weight based on the saved data
rather than restoring things, and check that the change is monotonic.
This way is also closer to how things worked before the merge.
This secondary accumulation option accumulated brush falloff.
The same option in image painting accumulates color
as vertex paint 'Spray' does.
Giving this option different behavior for vertex paint seems strange.
Also this is basically increasing falloff over time.
Remove the new code, expose existing 'Spray' as 'Accumulate'
to match other paint modes.
Notes:
- Changes in paint_vertex.c were simple to merge, mainly related to passing the
evaluation context.
- Conflicts in EditDM and drawmesh.c are solved using code from blender2.8
branch. Those areas are deprecated and not to be used in final release.
However, it's possible that some reference code from master is lost, so
keep attention when adding alpha support for vertex painting.
The issue here is that we can not read scale from the socket when determining
the dependent area of interest, since that area depends on the current pixel. Now fall
back to a simpler but more reliable approach: if the scale size input is connected to
some nodes, we use the whole frame as the area of interest.
This makes vertex paint match image painting more closely.
- Add falloff shape option sphere/circle
where sphere uses a 3D radius around the cursor and
circle uses a 2D radius (projected), like previous releases.
- Add normal angle option so you can control the falloff.
- Add Cull option, to paint onto faces pointing away.
Disabling normals, culling and using circle falloff
allows you to paint through the mesh.
When painting with spray disabled - we need to re-apply
on top of the original each time.
Applying the soc-2016-pbvh-painting branch removed this.
While I'd added back a simple previous weight array,
this won't work when multiple groups are painted at once.
This is most pronounced in Auto-Normalize + Multi-Paint. Unlike
vertex paint, the weights being painted on in weight paint mode
don't necessarily correspond to the weight actually stored in
any one vertex group, and may instead be a computed aggregate.
This restores original code behavior lost in rB4f616c93f7cb.
This required some small changes to the data display shaders so that they match the way the object mode renders them.
Strangely enough, I had to remove the normal attribute from the display code because it was not being bound as soon as I created another rendering call in object mode. The problem may be deeper but I did not have time for this, so I derive the normal from the sphere pos.
Note that this tool seems like it might need to be rewritten
since results are quite strange.
Projecting on the view vector gives a small improvement though.
paint_vertex.c was getting too big, move all code unrelated to
mode switching and modal painting into their own files.
Also replace vertex-color operators' region redraw tag with notifiers.
GSOC 2017 by Darshan Kadu, see: D2859.
This is a partial merge of some of the features from
the soc-2017-vertex_paint branch.
- Alpha painting & drawing.
- 10 new color blending modes.
- Support for vertex select in vertex paint mode.
This removes a bunch of code that is no longer needed, and running
"make update" will now automatically download the new libraries.
Differential Revision: https://developer.blender.org/D2861
This is done by storing only a subset of PathRadiance, and by storing
direct light immediately in the main PathRadiance. Saves about 10% of
CUDA stack memory, and simplifies subsurface indirect ray code.
This is really convenient for development. Either for profiling the
generated shaders or to check if the generated code is correct.
It writes the shaders to the temporary blender session folder.
2016 GSOC project by @nathanvollmer, see D2150
- Mirrored painting and radial symmetry, like in sculpt mode.
- Volume based splash prevention,
which avoids painting vertices far away from the 3D brush location.
- Normal based splash prevention,
which avoids painting vertices with normals opposite the normal
at the 3D brush location.
- Blur mode now uses a nearest neighbor average.
- Average mode, which averages the color/weight
of the vertices within the brush
- Smudge mode, which pulls the colors/weights
along the direction of the brush
- RGB^2 color blending, which gives a more accurate
blend between two colors
- multithreading support. (PBVH leaves are painted in parallel.)
- Foreground/background color picker in vertex paint
by the transform constraint lines
Ported over e7395c75d5 from the
greasepencil-object branch. I should've fixed this ages ago, but
couldn't figure out why at the time.
This adds TAA to Eevee. The only important thing to note is that we need to keep the unjittered depth buffer so that the other engines are composited correctly.
The problem was that orthographic views can have hit positions that are negative. Thus we cannot encode the hit in the sign of the Z component.
The workaround is to store the hit position in screen space. But since we are using a floating point render target, we are losing quite a bit of precision.
TODO: use RGBA16 instead of RGBA16F. But that means encoding the pdf value somehow.
You can change the amount of samples in the user preferences. You do not need to restart blender to see the effect in the new viewport.
This adds another Multisample Framebuffer and textures (so even more memory required).
It works by blitting the default_fb to the multisample_fb each time the renderer needs to render one or more "wire" passes.
It is then blitted back to the default_fb so that the rest of the pipeline works as expected.
We COULD lower the GPU memory / bandwidth usage by rendering everything to the same multisample fbo and changing the logic depending on whether MSAA is enabled or not, but I think it's a bit too much work for now.
One crucial thing here: OpenVDB should be compiled WITHOUT
OPENVDB_ENABLE_3_ABI_COMPATIBLE flag. This is how OpenVDB's Makefile is
configured and it's not really possible to detect this for a compiled library.
If we ever want to support that option, we need to add extra CMake argument and
use old version 3 API everywhere.
Even if pointer assignment may be atomic, it does not prevent reordering
and other nifty compiler tricks; we need a memory barrier to ensure not
only that transferring the pointer from the wip array to the final one is
atomic, but also that all previous writes to memory are “flushed” to
(made visible to) all CPUs...
Thanks @sergey for finding the potential (though quite unlikely) issue.
Now uses "Text Selected" theme color of "Menu Back" widget colors. Also
repositioned text slightly to have same margin on top and right (measured
by eye ;) ).
Tested with all bundled themes (contrib and no-contrib) and worked fine.
Considering that different splashes may need different colors for
overlaid text, using theme color may not be the best solution. I would
like to try how this works before adding an ugly way to force a certain
text color though.
Also tried different approaches, but this one I find the least ugly :S
As far as longer term plans go, we wanted to get a redesigned multi-page
splash screen anyway. At this point we can rethink how splash colors work
in general (i.e. auto-contrast, own splash theme colors, etc).
It has been deprecated since at least macOS 10.9 and fully removed in 10.12.
I am unsure if we should remove it only in 2.8. But you cannot build blender with it supported when using a modern xcode version anyway so I would tend towards just removing it also for 2.79 if that ever happens.
Reviewers: mont29, dfelinto, juicyfruit, brecht
Reviewed By: mont29, brecht
Subscribers: Blendify, brecht
Maniphest Tasks: T52807
Differential Revision: https://developer.blender.org/D2333
Adds a FXAA for smoothing out the extracted outlines.
The Post Process Anti Aliasing is only done on the Alpha channel of the outlines.
Because of that we need to bleed the outline color out of the silhouette so the AA'd alpha can blend the right color and not pick black when the alpha is smoothed outside the silhouette.
Also, because the AA needs clear contrast to work with, I decided to ditch the "blurring" of the occluded outlines.
The FXAA adds an overhead of 0.17ms but we gain back 0.22ms * 4 = 0.88ms by removing the blur.
The FXAA Implementation is from Corey Richardson (cmr) (D2717). I had to modify it a bit to only filter the alpha channel.
This introduces some small artifacts on the border of edges, because pixels with very low opacity do not get discarded and then occlude the face rendered behind if it has not been drawn yet.
To fix this, I added an offset in the geometry shader for the edge fixup. This makes the artifact only visible on the border of the object if there is a very dense wire region. It's only visible in edge select mode since vertex and face center also hide the artifacts.
We can enable this only if AA is enabled but for now it's always enabled.
Introducing a new header using the Blender socket logo,
commit of the source file will follow soon.
Splash committee: Ton Roosendaal, Dalai Felinto, Pablo Vazquez.
Artwork is a screenshot of 'Wanderer', an Eevee sample file
by Daniel Bystedt, available on blender.org (license: CC-BY-SA)
Textures were bound once, but since they were never unbound their bind_num would not change and they were considered still bound the next time a shader needed them.
Fix T52866
Fix T52855
Partial revert of 9068c0743e.
This commit tried to do two things:
(1) Fix UBO binding logic [good]
(2) "Improve" texture binding logic [bad]
Don't ever mix different fixes and refactors in the same commit.
Iterate over invisible objects too, so lamps can still light the scene.
Also, now you can use a collection to set an object to invisible, not
only to visible.
For example:
Scene > Master collection > bedroom > furniture
Scene > View Layer > bedroom (visible)
                   > furniture (invisible)
The View Layer has two linked collections, bedroom and furniture.
This setup will make the furniture collection invisible.
Note: Unlike what was suggested on D2849, this does not make collection
visibility influence camera visibility. I will keep this as a separate
patch.
Reviewers: sergey
Subscribers: sergey, brecht, fclem
Differential Revision: https://developer.blender.org/D2849
- No more hardcoded python35/36 tokens in the scripts
- Disabled python module for boost, was not used
- Updated patches for python to support building with msvc2013
For the first bounce we now give each BSDF or BSSRDF a minimum sample weight,
which helps reduce noise for a typical case where you have a glossy BSDF with
a small weight due to Fresnel, but not necessarily small contribution relative
to a diffuse or transmission BSDF below.
We can probably find a better heuristic that also enables this on further
bounces, for example when looking through a perfect mirror, but I wasn't able
to find a robust one so far.
Similar to what we did for area lights previously, this should help
preserve stratification when using multiple BSDFs in theory. Improvements
are not easily noticeable in practice though, because the number of BSDFs
is usually low. Still nice to eliminate one sampling dimension.
Previously the Sobol pattern suffered from some correlation issues that
made the outline of objects like a smoke domain visible. This helps
simplify the code and also makes some other optimizations possible.
Right now this is exposed in the outliner, though all this
(visible/selectable/enable) should be moved to a new panel soon.
This removes objects from the depsgraph when the collection is disabled.
It allows you to "hide" lamps but still having them lighting the scene.
Same for light probes and other support objects.
Pending tasks:
* Have the depsgraph include invisible objects in DEG_OBJECTS_ITER, and
then have Eevee and other engines make a distinction between an
invisible and a visible object.
(for example, we probably want invisible objects to not show in the
viewport, but cast shadows and show up in light probes).
* Change how we evaluate collection settings so that an invisible
collection can force an object to be invisible.
Reviewers: campbellbarton
Subscribers: sergey
Differential Revision: https://developer.blender.org/D2848
Now we replace O(N^2) computational complexity with O(N) extra memory penalty.
Memory is much cheaper than CPU time. Keep in mind, the memory penalty is
about 4 megabytes per 1M vertices.
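As an illustration of the trade-off (the function and array names here are hypothetical, not the commit's actual code), 4 bytes per vertex buys a one-pass reverse index that replaces a per-vertex linear search:

```
#include <stdlib.h>

/* Build a reverse lookup table in O(N) instead of scanning all vertices
 * for every query (O(N^2)). Costs sizeof(int) per vertex: 4 MB per 1M. */
int *build_reverse_map(const int *orig_index, int totvert)
{
    int *map = malloc(sizeof(int) * (size_t)totvert);
    for (int i = 0; i < totvert; i++) {
        map[i] = -1; /* -1 means "no vertex maps here" */
    }
    for (int i = 0; i < totvert; i++) {
        if (orig_index[i] >= 0 && orig_index[i] < totvert) {
            map[orig_index[i]] = i; /* later queries become O(1) */
        }
    }
    return map;
}
```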
Tentative fix, since I cannot reproduce the issue for some reason here
on linux.
Core of the problem is pretty clear though, thanks to Germano Cavalcante
(@mano-wii): another thread could try to use looptris data after worker
one had allocated it, but before it had actually computed looptris.
So now, we use a temp 'wip' pointer to store looptris being computed
(since this is protected by a mutex, other threads will have to wait on
it, no possibility for them to double-compute the looptris here).
This should probably be backported to 2.79a if done.
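A minimal C11 sketch of the pattern from this fix and the barrier note above, with hypothetical type names standing in for the real DerivedMesh data:

```
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct LoopTri { int tri[3]; } LoopTri;
typedef struct MeshData {
    int totpoly;
    LoopTri *_Atomic looptris; /* NULL until fully computed */
} MeshData;

static pthread_mutex_t looptris_mutex = PTHREAD_MUTEX_INITIALIZER;

static LoopTri *compute_looptris(MeshData *md)
{
    return calloc((size_t)md->totpoly, sizeof(LoopTri)); /* placeholder */
}

LoopTri *mesh_ensure_looptris(MeshData *md)
{
    /* Acquire load: readers see either NULL or a fully computed array. */
    LoopTri *lt = atomic_load_explicit(&md->looptris, memory_order_acquire);
    if (lt) {
        return lt;
    }
    pthread_mutex_lock(&looptris_mutex);
    lt = atomic_load_explicit(&md->looptris, memory_order_relaxed);
    if (!lt) {
        LoopTri *wip = compute_looptris(md); /* others wait on the mutex */
        /* Release store: all writes to *wip become visible before the
         * pointer itself does -- the "memory barrier" mentioned above. */
        atomic_store_explicit(&md->looptris, wip, memory_order_release);
        lt = wip;
    }
    pthread_mutex_unlock(&looptris_mutex);
    return lt;
}
```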
The issue was caused by threading conflict around looptris: it was possible
that DM will return non-NULL but non-initialized array of looptris.
Thanks Campbell for second pair of eyes!
Use triple buffer by default now on all platforms; the remaining ones were:
* Mesa: seems to have been working well for a long time now, and not using
it gives issues with the latest Mesa 17.2.0.
* Windows software OpenGL: no longer supported since OpenGL 2.1 requirement
was introduced.
* OS X with thousands of colors: this option was removed in OS X 10.6, and
that's our minimum requirement.
UBOs need to be rebound every time the shader changes.
This simplifies the logic a bit.
Also modified the texture binding logic to potentially reuse more already-bound textures.
When 2x loops have different number of vertices,
the distribution for vertices fan-fill depended on the loop order
and was often lop-sided.
This caused noticeable inconsistencies depending on the input
since edge-loops are flipped to match each other's winding order.
This fixes the crappy binding logic.
Note the current method does a lot of useless binding. We should order the textures so that reused textures are already bound most of the time.
Use more watertight and robust intersection test.
It now uses ray-to-triangle intersection, but that's all fine because the
segment was covering the whole bounding box anyway.
Mainly when object origin is not at the geometry bounding box center.
Seems to be straightforward to fix, hopefully it doesn't break some obscure case
where this was a desired behavior.
The issue here was that removing a datablock from the main database pokes editors
to update, which includes the buttons context freeing users of the texture. Since
Cycles frees datablocks from a job thread, it might crash Blender since the main
thread might be in the middle of drawing.
Solved by exposing extra arguments to bpy.data.foo.remove() which indicate
whether we want to perform ID user count and interface updates. While scripts
shouldn't be using those normally, this is the only way to allow Cycles to skip
interface update when removing datablock.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2840
The issue was caused by the render result identifier only consisting of the scene
name, which could indeed cause conflicts.
On the one hand, there are quite some areas in Blender where we need an identifier
to be unique to properly address things. Usually this is required for sub-data
of IDs, like bones. On the other hand, it's not that hard to support this
particular case and avoid possible frustration.
The idea is, we add library name to render identifier for linked scenes. We use
library name and not pointer so we preserve render results through undo stack.
Reviewers: campbellbarton, mont29, brecht
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2836
Add sanitizer. I wanted to stay away from this because I think we should fix what causes NaNs in the first place. But there can be too many different factors causing NaNs, and they can come from user inputs.
Was caused by numeric overflow when calculating preview dimensions.
Now we try to avoid really insane preview resolutions by fitting the
aspect into a square.
The branching introduced by the uniform caused problems on mesa + AMD in the resolve stage.
This patch creates one shader per sample count, without branching.
This improves performance of the single-ray-per-pixel case (3.0ms against 3.6ms in my testing).
The issue was caused by operator redo which frees all object's evaluated data,
including bounding box. This bounding box can not be reconstructed properly
without full curve evaluation (need to at least convert font to nurbs, which is
not cheap already).
There is absolutely no reason to have such an indentation level, it only causes
readability and maintainability issues. It is really simple to make code more
"streamlined".
This was caused by a fairly funky uninitialized buffer (last frame) that was causing NaNs during the SSR resolve stage.
They were then propagated to the whole image during the next swap.
Bypassing the SSR completely if no valid history exists fixes the problem. Also disabling SSR data output in this case so we can have correct reflections in the 1st history buffer.
Rather than treating all ray types equally, we now always render 1 glossy
bounce and unlimited transmission bounces. This makes it possible to get
good looking results with low AO bounces settings, making it useful to
speed up interior renders for example.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2818
You can now use a transparent shader as a completely transparent BSDF, and use any alpha mask in a mix shader between a transparent BSDF and another BSDF.
In fact, any type of baking might have caused holes in the mesh.
The issue was caused by zspan_scanconvert() attempting to get order of traversal
'a-priori', which might have failed if check happens at the "tip" of span where
`zspan->span1[sn1] == zspan->span2[sn1]`.
Didn't see anything bad in making it a check when iterating over scanlines and
picking the minimal span based on the current scanline. It's slower, but unlikely
to cause a measurable difference. Quality should stay the same unless I'm missing
something.
Reviewers: brecht, dfelinto
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2837
Previously we used a 1D sequence to select a light, and another 2D sequence
to sample a point on the light. For multiple lights this meant each light
would get a random subset of a 2D stratified sequence, which is not
guaranteed to be stratified anymore.
Now we use only a 2D sequence, split into segments along the X axis, one for
each light. The samples that fall within a segment then form a stratified
sequence, at least in the limit. So for example for two lights, we split up
the unit square into two segments [0,0.5[ x [0,1[ and [0.5,1[ x [0,1[.
This doesn't make much difference in most scenes, mainly helps if you have a
few large area lights or some types of HDR backgrounds.
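A minimal sketch of the remapping (the function name is illustrative): the X coordinate both selects the light and, after rescaling, samples it.

```
/* Split the unit square along X into one segment per light. */
static void light_sample_remap(float u, float v, int num_lights,
                               int *r_light, float *r_u, float *r_v)
{
    int index = (int)(u * (float)num_lights);
    if (index > num_lights - 1) {
        index = num_lights - 1; /* guard against u == 1.0 */
    }
    *r_light = index;
    /* Rescale the segment [i/N, (i+1)/N[ back to [0, 1[. */
    *r_u = u * (float)num_lights - (float)index;
    *r_v = v;
}
```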
This causes render differences in some scenes, for example fishy_cat
and pabellon scenes render brighter in a few spots. This is an old
bug, not due to recent RR changes.
This is in order to use the same texture on multiple samplers.
Also the texture counter is reset after each shading group. This mimics the previous behaviour.
Just using the same distribution code for faces/volume as the one
changed/used for vertices some months ago.
Note that this change breaks compatibility, in that the distribution
of particles over faces/volume may not be exactly the same as
before.
Previously only the active object was used.
Use coroutines to support baking frames for multiple objects at once,
without having to play back the animation multiple times.
- Replace Poisson with concentric samples: less variance. They are sorted by radius, then by angle.
- Separate filtering into 2 blurs. The first blur is a 3x3 box blur. The second is user-dependent.
- Group fetches in groups of 4.
This brings some data structure changes.
Shared shadow data are stored in ShadowData (in glsl) (aka EEVEE_Shadow in C).
This structure contains the array indices of the first shadow element of this shadow "object".
It also contains how many shadows to evaluate (to be used for multiple shadow maps).
The filtering is noisy and needs improvement.
- Use only one 2d texture array to store all shadowmaps.
- Allow to change shadow maps resolution.
- Do not output radial distance when rendering shadowmaps. This will allow fast rendering of shadowmaps when we drop the use of geometry shaders.
- Use indices instead of character args.
- Use numbered macros instead of variadic args.
Parsing using rtags used over 11gb of memory. While this should be
resolved upstream (reported as #1053), the extra complexity didn't give
any real advantage.
Disabled forceinline for those architectures, which seems to be compiling
successfully more often.
There might be ~3% slowdown based on quick tests, but better to be rendering
something than failing to compile kernels again and again.
Those architectures are doomed to be abandoned once we switch to toolkit 9.
Empty BVH nodes are set to NaN which must be preserved all the way to the
tnear <= tfar test which can then give false for empty nodes. This needs
strict semantics and careful argument ordering for min() and max(), so
the second argument is used if either of the arguments is NaN.
Fixes T52635: crash in BVH traversal with SSE4.1.
Differential Revision: https://developer.blender.org/D2828
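A scalar illustration of the ordering rule (the real code uses the SSE intrinsics, whose MINPS/MAXPS return the second operand when either input is NaN):

```
/* (a < b) is false when either operand is NaN, so b is returned: keeping
 * the possibly-NaN slab distance in the second argument preserves the NaN,
 * and `tnear <= tfar` then evaluates false for empty nodes. */
static inline float min_nan_in_second(float a, float b)
{
    return (a < b) ? a : b;
}

static inline float max_nan_in_second(float a, float b)
{
    return (a > b) ? a : b;
}
```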
There was some invalid state in the screen here, some areas had
sa->full set even though no screen was maximized, which then caused
a restore from the wrong (empty) area, which then led to spacedata
being empty and a crash.
This fix properly clears the sa->full after restore, and also fixes
existing .blend files in such an invalid state.
Only the camera from View3D.localvd is used,
other pointers may be invalid.
Longer term we should probably clear these to ensure no accidents.
For now just follow the rest of Blender's code and don't access.
Baking rigid body cache was broken if some cached frames already
existed.
This is just a band-aid for the release; the logic needs to be looked into
further.
Since the change to prevent shader recompilation at every update, we got
a regression when clearcoat was used.
Basically at the shader build time we would determine if the shader
needed clear coat, and if it didn't, it would build a different GLSL
program.
However if later the user updated the clearcoat value so that it would
then require the full clearcoat shader, the user wouldn't get it until
manually forcing the shader to recompile, or reopening the file.
We now handle the optimization in the GLSL code. That adds a minimum
overhead due to branching. But the overall performance seems unchanged
(tested on Linux with AMD and NVidia).
Reviewers: pascal, brecht, fclem
Differential Revision: https://developer.blender.org/D2822
Operators and their properties are two different types.
Previously both operators and their properties were added,
causing C operators to access the properties and Python the classes.
Favor consistency in this case, so only Python classes are added.
A single diagonal axis was used for sorting coordinates,
the algorithm relied on users not having vertices axis aligned.
Use BLI_kdtree to remove doubles instead.
Overall speed varies, it's more predictable than the previous method.
Some typical tests gave speedup of ~1.4x - 1.7x.
For example, if you have two keyframes:
k1 = 1px, k2 = 10px
it was doing:
1px, 9px, 8px, ..., 3px, 2px, 10px
instead of:
1px, 2px, 3px, ..., 8px, 9px, 10px
This reverts commit 134e927965.
Writing into a const event is very bad,
but this change broke compositor manipulators.
Will look into a better solution eventually.
For some specific pipelines (e.g., holographic rendering) you can easily
need over a million frames (1k * 1k view angles).
It seems a corner case, but there is no real reason not to allow users
to do that.
That said, we do lose subframe precision in the highest frame range, which can
affect motion blur. The current maximum sub-frame precision we have is 16,
while the previous limit of 500k frames had a precision of 32.
Thanks to Campbell Barton for the help here.
To be backported to 2.79
Own error in recent type checks, in many cases the 'idname'
is used for the struct identifier, not the 'identifier'
which is the Python class name in this context.
Mostly internal changes, keeping both manipulators
could have worked but there was no point long term.
There are still some glitches to resolve, will work on those next.
Basically since g_data was malloc'ed (instead of calloc'ed)
g_data->minzbuffer was never initialized.
So when running DRW_framebuffer_init after EEVEE_effects_init, the test
on *g_data->minzbuffer would lead to unpredictable results.
This was caught by valgrind, reported by Sergey Sharybin.
Previous version was trying to do a quick and simple process in the case
we were only considering smooth/flat status of faces.
Thing is, even then, the algorithm was not actually working in all
possible situations, e.g. two smooth faces having a single vertex in
common, but no common edges, would not have split that vertex, leading
to incorrect shading etc.
So now, tweaked slightly our split normals code to be able to generate
lnor spaces even when autosmooth is disabled, and we always go that way
when splitting faces.
Using smooth fans from clnor spaces is not only the only way to get 100%
correct results, it also makes face split code simpler.
One problem is that it was always using the _mm_blendv_ps emulation even when the
instruction was supported. The other was that the emulation function was wrong.
Thanks a lot to Ray Molenkamp for tracking this one down.
Apparently with Maya in a certain configuration, it's possible to have an
Alembic object without schema in the Alembic file. This is now handled
properly, instead of crashing on a null pointer.
This affects the curve display color setting, but is really intended
for future per-curve options.
The id_data reference in the created rna pointers refers to the object
even if the curve is actually owned by its action, which is somewhat
inconsistent, but the same problem can be found in existing code.
Fixing it requires changes in animdata filter API.
The idea is the following: split the copy-on-write update for node trees, and if
we are only tagging for uniform buffer update we skip the whole datablock copy
and only copy default_values from the original node tree to the copied one.
The thing I'm not sure about is whether we need to use different branches in the
graph itself to control such conditional behavior, or whether we need to store
the tag somewhere in the dependency graph. There are obviously cons and pros to
both approaches, and I need to think about this. Maybe with more examples it
becomes more obvious which way is better.
This only fixes manual tweaks for now, animation support is coming.
Adding alongside the existing one for now,
but it should eventually replace it.
Uses a matrix instead of (position + scale),
written so rotation can be done more easily.
Currently has a primitive handle for rotation, supports corner scaling.
There were two issues here:
1. material_update did not do anything, because DEG_id_tag_update was storing
update tags in original IDs, which had nothing evaluated. Even more, material
update should have been called with the evaluated version of the material. Solved
this by copying the update tag from the original ID to the copied one.
However, perhaps DEG_id_tag_update should tag both the original and copied ID,
so updates never get lost if some depsgraph is not visible.
2. Tagging a material for update should ensure its copied version of the node tree
is up to date, otherwise the material will still use the old node tree.
This solves missing material updates when changing topology. Tweaking values is
still broken, because of GPUMaterial using pointer to original node's socket
value, which gets broken after copy-on-write of the node tree (pointers of nodes
are changing).
Previously it was returning short, which made it really easy to (a) compare
against a non-ID type value and (b) forget to handle some specific value in a
switch statement.
Both issues happened in the recent past, so it's time to tighten some nuts
here.
Most of the change relates to silencing strict compiler warnings now, but there
is also one tricky aspect: ID_NLA is not in the IDType enum. So there is still a
cast to short to handle that switch. If someone has better ideas how to deal
with this please go ahead :)
A single manipulator could only assign a single operator to each part.
Now each part can have its own.
Also modify 2D selection callback, 2D started at 1, 3D at 0.
Now use -1 for unset value, start both at 0.
- Vertex only meshes never restored their selection history.
- Select history was cleared on the source instead of the target.
Simple Optimizations:
- Avoid O(n^2) linked list looping that checked the entire list before
adding elements (NULL values in the source array to prevent dupes).
- Re-use vert & edge lookup tables instead of allocating new ones.
Issue was a nasty hidden one, the dual status (mix of local and linked)
of proxies striking again.
Here, remapping process was considering obdata pointer of proxies as
indirect usage, hence clearing the 'LIB_TAG_EXTERN' of obdata pointer.
That would make savetoblend code not store any 'lib placeholder' for
obdata data-block, which was hence lost on next file read.
Another (probably better) solution here would be to actually consider
the obdata of proxies as fully indirect usage, and simply reassign proxies
to their linked object's obdata on file read...
However, this change shall be safer for now, probably good for 2.79 too.
This reverts commit 3888227a7b.
This "Fix" was made while ORCO was broken. Now that orco itself is fixed
this is no longer required, otherwise Tangent node produces different
results in Cycles and Eevee.
Don't use quick sort for small arrays, bubble sort works way faster for small
arrays due to cache coherency. This is what qsort() from libc is doing actually.
We can also experiment with unrolling for some extra-small arrays, for example
3- and 4-element arrays.
This reduces tangent space calculation for dragon from 3.1sec to 2.9sec.
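A sketch of the idea under illustrative types and threshold (the actual code and cutoff may differ):

```
#include <stdlib.h>

static int cmp_float(const void *a, const void *b)
{
    const float fa = *(const float *)a, fb = *(const float *)b;
    return (fa > fb) - (fa < fb);
}

/* Below a small threshold an O(n^2) sort beats qsort(): the whole array
 * stays in cache and there is no comparator function-pointer overhead. */
#define SMALL_SORT_THRESHOLD 16

void sort_floats(float *arr, size_t n)
{
    if (n < SMALL_SORT_THRESHOLD) {
        for (size_t i = 0; i + 1 < n; i++) {       /* bubble sort */
            for (size_t j = 0; j + 1 < n - i; j++) {
                if (arr[j] > arr[j + 1]) {
                    const float t = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = t;
                }
            }
        }
    }
    else {
        qsort(arr, n, sizeof(float), cmp_float);
    }
}
```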
Brings tangent space calculation from 4.6sec to 3.1sec for dragon model in BI.
Cycles is also somewhat faster, but it has other bottlenecks.
Funny thing, using simple `static inline` already gives a lot of speedup here.
That's just answering the question of whether it's OK to leave the decision on
what to inline up to the compiler.
Would be nice to be able to catch this with an assert as well, will see what
would be the best way to do this.
Need to verify with Mai that this solves crash for her and maybe consider
porting this to 2.79.
Orco should behave the same if it comes from unconnected vec inputs, or
from the Texture Coordinate -> Generated node output.
Fixup for 11e7e0769a.
This is related to T52528. The coordinates are still different between
Eevee and Cycles, but at least it behaves consistent within itself.
In fact the shader should now be correct, but the orco attributes we are
passing the shader seem to be where the problem is. But that's to be
tackled separately.
scripts being "Artwork" which is your sole property and free to license.
I've removed the reference to scripts in this text.
This was from 2002! With our Python scripts becoming part of how Blender runs,
such scripts now are officially required to be compliant with GNU GPL.
For more information, check the FAQ or consult foundation@blender.org: https://www.blender.org/support/faq/
Some manipulators are used like on-screen buttons,
in this case it doesn't make sense to keep track of their state,
so zero the offset when it's unused.
Needed for lamp-target manipulator.
We have a hardcoded limit of 1000 images to be baked.
However anything above 100 would lead to overflow in the code.
Caught by warning from builder bot (my compiler doesn't even complain
about this, but it should).
Re-use operator return flags for manipulator modal & invoke,
this means manipulators can allow navigation or other events to be
handled as they run - see T52499
Fishy cat benchmark was rendering with wrong shadows. Cause is unclear,
adding printf or rearranging code seems to avoid this issue, possibly a
compiler bug. This reverts the fix and solves the OSL bug elsewhere.
This was needed when we accessed OSL closure memory after shader evaluation,
which could get overwritten by another shader evaluation. But all closures
are immediately converted to ShaderClosure now, so no longer needed.
While unlikely to have had any serious effects because of limited use, the
previous implementation was not actually atomic due to a data race and
incorrectly coded CAS loop. We also had duplicates of this code in a few
places, it's now been moved to a single location with all other atomic
operations.
We need to make sure we can store all volume closures for all objects in volume
stack. It is a bit tricky to detect what the "nesting" level of volumes would
be, so for now use the maximum possible stack depth. Might cause some slowdown,
but better to give reliable render output than to fail quickly.
Should be safe for 2.79 after extra eyes.
We were showing "search for unknown menutype WM_MT_button_context" messages in terminal which were not helpful for users, so now they are disabled.
To be backported to 2.79
It is possible to have the same image used multiple times at different frames,
which means we can not free its buffers without any guard. From quick tests
this seems to be doing what it is supposed to.
Need more testing and port this to 2.79.
Although the problem was exposed in 9457715d9a, it was in the
original code that was copied over. To have:
```
} else { /* EXPECTED_VALUE */
```
Without a BLI_assert(value == EXPECTED_VALUE); is asking for trouble.
Yet another reason to favour switch statements with:
```
default:
BLI_assert(!"value not implemented or supported");
```
Instead of chained if/else if/else /* expected_value */.
- NOCHECK -> ALL
- ALL -> MAYBE_ALL
Where 'MAYBE_ALL' checks to see if the mesh has changed.
This makes it clearer that `BKE_MESH_BATCH_DIRTY_ALL` means dirty and
going to be updated without any guess-work.
This was meant to be generic but introduced possible type errors
and unnecessary complication.
Replace with typed PyC_Tuple_PackArray_* functions.
Also add PyC_Tuple_Pack_* macro which replaces some uses of
Py_BuildValue, with the advantage of not having to parse a string.
- WM_manipulatorgrouptype_remove -> free
- WM_manipulator_group -> WM_manipulator_group_type
Naming here is still a bit confusing,
now at least free/remove are differentiated.
This means we have less overall noise in the rendered image.
SSR, AO, and Refraction are affected by this change.
SSR still exhibits artifacts because the reconstruction pattern needs to change every frame (TODO).
Regression from rBfed853ea78221; calling this inside a thread worker was
not really a good idea anyway, and we already have all the code we need in
the pre-threading init function, it was just disabled for vertex particles
before.
To be backported to 2.79.
This exposes 2 methods for manipulators:
- new_custom_shape
- draw_custom_shape
This can be used for script authors to create and re-use shapes
without dealing with lower level API's.
Python's C-API doesn't provide functions to get
ints at specific integer sizes,
leaving the caller to check for overflow,
which ended up being ignored in practice.
Add API functions that convert int/uint 8/16/32/64, also bool.
Raising overflow exception for unsupported ranges.
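A sketch of one such conversion using only standard CPython calls (the function name mirrors the style of the API but is written here as an assumption):

```
#include <Python.h>
#include <limits.h>

/* Convert a Python int to int8, raising OverflowError when out of range
 * instead of silently truncating. */
int PyC_Long_AsI8(PyObject *value, signed char *r_value)
{
    const long v = PyLong_AsLong(value);
    if (v == -1 && PyErr_Occurred()) {
        return -1; /* not an int, or out of `long` range */
    }
    if (v < SCHAR_MIN || v > SCHAR_MAX) {
        PyErr_SetString(PyExc_OverflowError, "value out of range for int8");
        return -1;
    }
    *r_value = (signed char)v;
    return 0;
}
```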
The thing that most often still goes wrong for new users building Blender on Windows is checking out the libraries: some skip over the wiki, some check out to the wrong folder. In an effort to reduce the time I spend on this, I added detection of svn and missing libs to make.bat.
When the user has svn installed and the libdir is missing, he'll be asked if he wants to download the libraries.
If svn is not installed, or the user chooses 'no', the current error message is shown.
Reviewers: Blendify, sergey, juicyfruit
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D2782
We should only early out with any hit in BVH traversal if the only visibility
bits used are opaque shadow. Not when opaque shadow is one of multiple bits.
Also pass by value and don't write back now that it is just a hash for seeding
and no longer an LCG state. Together this makes CUDA a tiny bit faster in my
tests, but mainly simplifies code.
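A tiny sketch of the corrected test (PATH_RAY_SHADOW_OPAQUE is the Cycles flag; the enum here is reduced for illustration):

```
#include <stdbool.h>

enum { PATH_RAY_SHADOW_OPAQUE = (1 << 0) /* other ray-type bits elided */ };

/* Early-out on any hit is only valid when opaque shadow is the *only*
 * visibility bit in use -- equality, not a bitwise & test. */
static bool bvh_can_early_out(unsigned int visibility)
{
    return visibility == PATH_RAY_SHADOW_OPAQUE;
}
```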
Its purpose is to limit the amount of light that spreads across the screen.
Not entirely sure if it's very useful, but it sure helps avoid drowning the screen in bloom.
This function was called to recreate the lower mip levels of the probe texture. But this is not its purpose and it introduced a stall.
This patch adds cubemap mipmap level regeneration in eevee_effects.c.
Since we started supporting the (Cycles) Material Output old files
stopped working. There is no reason to keep the original Eevee material
output anymore.
It includes doversion for old files.
Move floats around when needed to accommodate vec3 arrays efficiently.
With this we use slightly less memory when possible. Basically vec3s are not
treated as vec4s unless we have no float to use for padding.
Reviewers: fclem, sergey
Differential Revision: https://developer.blender.org/D2800
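An illustrative std140-style layout with hypothetical field names, showing when a vec3 does and does not cost a full vec4 slot:

```
/* A vec3 is 16-byte aligned but only 12 bytes large, so a lone float can
 * ride in its padding; otherwise the rest of the slot is wasted. */
typedef struct LightDataPacked {
    float position[3];  /* bytes  0..11 */
    float radius;       /* bytes 12..15: fills the vec3's padding */
    float color[3];     /* bytes 16..27 */
    float _pad0;        /* bytes 28..31: explicit pad, no float available */
} LightDataPacked;
```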
This includes big improvement:
- The horizon search is decoupled from the BSDF evaluation. This means using multiple BSDF nodes has a much lower impact when enabling AO.
- The horizon search is optimized by splitting the search into 4 corners searching similar directions, to help GPU cache coherence.
- The AO options are now uniforms and do not trigger shader recompilation (aka. freeze UI).
- Include a quality slider similar to the SSR one.
- Add a switch for disabling bounce light approximation.
- Fix problem with Bent Normals when occlusion get very dark.
- Add a denoise option that takes the neighbor pixel values via GLSL derivatives. This reduces noise but exhibits 2x2 blocky artifacts.
The downside: separating the horizon search uses more memory (~3MB for each sample on an HD viewport). We could lower the bit depth to 4 bits per horizon but that produces noticeable banding (might be fixed with some dithering).
Diffuse was not outputting the right normal. (This is not a problem with SSR actually.)
Glass did not have proper ssr_id and was receiving environment lighting twice.
Also it did not have proper fresnel on lamps.
This fixes the Principled shader in Eevee, among other nodes.
Basically, before we were treating all vec3s as vec4s as far as memory
goes. We now only do so when required (i.e., when the vec3 is not
followed by a float).
We can be even smarter about this and move the floats around to provide
We can be even smarter about that and move the floats around to provide
padding for the vec3s. However this is for a separate patch.
That said, there seems to be some strong consensus in corners of the
internet against using vec3 at all [1]. Basically even if we get all the
padding correct, we may still suffer from poor driver implementations in
some consumer graphic cards.
It's not hard to move to vec4, but I think we can avoid doing it as much
as possible. By the time 2.8 is out hopefully most drivers will be
implementing things correctly.
[1] - https://stackoverflow.com/questions/38172696
Deleting the old internal audaspace.
Major changes from there are:
- The whole library was refactored to use C++11.
- Many stability and performance improvements.
- Major Python API refactor:
- Most requested: Play self generated sounds using numpy arrays.
- For games: Sound list, random sounds and dynamic music.
- Writing sounds to files.
- Sequencing API.
- Opening sound devices, eg. Jack.
- Ability to choose different OpenAL devices in the user settings.
This implements Arvo's "Stratified sampling of spherical triangles". Similar to how we sample rectangular area lights, this is sampling triangles over their solid angle. It does significantly improve sampling close to the triangle, but doesn't do much for more distant triangles. So I added a simple heuristic to switch between the two methods. Unfortunately, I expect this to add render time in any case, even when it does not make any difference whatsoever. It'll take some benchmarking with various scenes and hardware to estimate how severe the impact is and if it is worth the change.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: Vega-core, brecht, SteffenD
Tags: #cycles
Differential Revision: https://developer.blender.org/D2730
Use mesh batch cache for mesh selection.
Note that we could create the batches and free immediately
so they don't take up memory.
This resolves a problem where selection was limited
to immediate-mode buffer size.
Since the paste object is pasted in the active collection, and not on
its original one, we need to flush/calculate the new collection base
settings (visibility, selectability, ...).
We do DEG_id_tag_update() for the scene now, though it may be better to tag
only the object-specific IDs in the future.
Caused by own recent changes in handling of verts/edges/etc. arrays storage
for raycasting (rBe324172d9ca6690e8).
Issue was actually even weirder - there is absolutely no reason at all to
release DM here, those finaldm are stored in Object or EditMesh structs and
handled by general update system, other code shall never try to release them!
Originally we were not respecting the original visibility flags of the
collections. However this is required for Copy-on-write (CoW).
Remember to update the svn lib tests folder. I had to update some of the
json files there.
Also adding a new unittest for this particular issue:
Test render_layer_scene_copy_f
Flag ownership for each index array & vbo's
so we don't have to manually keep track of this and use the right free call.
Instead this can be passed on creation.
See D2676
2.8x branch added bContext arg in many places,
pass an eval-context instead, since it's not simple to reason about
what nested functions do when they can access and change almost anything.
Also use const to prevent unexpected modifications.
This fixes crash loading files with shadows,
since off-screen buffers use a NULL context for rendering.
I couldn't reproduce either, but calling min() with different argument
data types and indexing vectors with an index not known at compile time
seem likely to cause problems.
Ref T52404.
We stop using the .zip file and just have all files now in
lib/darwin/python/lib, along with numpy, numpy headers and requests.
This makes it consistent with Linux and simplifies code.
For old libraries the .zip stays, code for that gets removed when we
fully switch to new libraries.
We now have to support more complex copying types, which are controlled
by flags, so all copying logic will need to take those at some point (at
least, all potentially dealing with IDs).
_copy_data() functions shall not do that at all anymore. Kept as option
for now even though that helper is only called from here...
Also more variable renaming to standard _src/_dst suffixes.
We do have a history of those pieces of evil in our code, and it would be nice
to get fully rid of them, but at the very least let's not add more of them
in new code. :)
This can happen with Alembic files exported from Maya. I'm unsure as to the
root cause, but at least this fixes the crash itself.
Thanks to @looch for reporting this with a test file. The test file has to
remain confidential, though, so it's on my workstation only.
FFMPEG & VPX don't handle the target with the --build parameter, so we need to make sure to use the plain configure command.
Reviewed by: Brecht Van Lommel
Differential Revision: http://developer.blender.org/D2791
Was complicating the general use case; also adds support for transforming with matrix_space set.
Add matrix_space support for manipulator_window_project_2d too.
This patch adds "Pixel Size" to the performance options, which allows to render
in a smaller resolution, which is especially useful for displays with high DPI.
Reviewers: Severin, dingto, sergey, brecht
Reviewed By: brecht
Subscribers: Severin, venomgfx, eyecandy, brecht
Differential Revision: https://developer.blender.org/D1619
The only reason shutter time was marked as non-animatable is because Blender
Internal render does not support such animation. But this is something
users keep asking for, and now Blender Internal is on its way out.
Enabled animation of this property, but noted in the tooltip that Blender
Internal does not support it.
Bug in new ID copying code, thanks once again to stupid nodetrees, we
ended up wrongly remapping MA node->id pointers to NodeTree when copying
materials using node trees...
Basically, do the re-alloc and memcpy under the same lock, otherwise one
thread might be re-allocating the array while another one is trying to
copy data into it.
Reported by Mohamed Sakr in IRC, thanks!
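A minimal sketch of the fixed pattern with a hypothetical buffer (pthreads used for illustration):

```
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Both the realloc and the memcpy happen under the same lock, so no thread
 * copies into a buffer that another thread is re-allocating. */
void cache_append(float **buffer, size_t *alloc, size_t *len,
                  const float *data, size_t n)
{
    pthread_mutex_lock(&cache_mutex);
    if (*len + n > *alloc) {
        *alloc = (*len + n) * 2;
        *buffer = realloc(*buffer, *alloc * sizeof(float));
    }
    memcpy(*buffer + *len, data, n * sizeof(float));
    *len += n;
    pthread_mutex_unlock(&cache_mutex);
}
```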
Enabled cache for frame accessor and tweaked policy so we guarantee keyframed
images to be always in the cache. The logic might fail in some real corner case
(for example, when doing multiple tracks at once on a system where we can not
fit 2 clip frames in cache) but things are much better now for regular use.
This fixes a bug where occluders on the edge of the screen occlude more than they should.
Grouped the texture fetches together and clamped the ray at the border of the screen.
Also add a few util functions.
This should fix the error message that a stall occurred because of a busy mipmap.
This happened on the minmax buffer generation and introduced a random 0.2ms latency.
I'm not sure what was happening though.
Creating ngons with multiple axis aligned shapes in the middle of a
single face would fail in some cases.
This exposed multiple problems in BM_face_split_edgenet_connect_islands
- Islands needed to be sorted on Y axis when X was aligned.
- Checking edge intersections needed increased endpoint bias.
- BVH epsilon needed to be increased.
Note: this commit seems to work as expected (also with transform
snapping etc.). However, it is rather unsafe - not enough for 2.79 at
least, unless we get much more testing on it. It also depends on three
previous ones.
Note that using a global lock here is far from ideal, we should rather
have a lock per DM, but that will do for now, whole DM thing is doomed
to oblivion anyway in 2.8.
Also, we may need a `DM_DIRTY_LOOPTRIS` dirty flag at some point. Looks
like we can survive without it for now though... Probably because cached
looptris are never copied across DMs?
This was... horribly wrong, CDDM will often *not* need to allocate
anything to return arrays of mesh items! Just check whether array
pointer is NULL.
Also, remove `DM_get_looptri_array`, that one is useless currently,
`dm->getLoopTriArray` will always return cached array (computing it if
needed).
The tweakmode flag and the selected-channels flag accidentally
used the same value, due to confusion over where these flags were
supposed to be set. The selected-channels flag has now been moved
to use a different value, so that there shouldn't be any further
conflicts.
To be ported to 2.79.
Old bevel 'Clamp overlap' code was very naive: just limit amount
to half edge length. This uses more accurate (but not perfect)
calculations for the max amount before (many) geometry collisions
happen. This is not a backward compatible change - meshes that
have modifiers with 'Clamp overlap' will likely have larger allowed
bevel widths now. But that can be fixed by turning off clamp overlap
and setting the amount to the desired value.
Own previous fix (rBd5d626df236b) was not valid, curves are actually
supported by SoftBodies. It was rather a mere UI bug, which was not
including Surface and Font object types in those valid for softbody UI.
Thanks to @brecht for the heads up!
Also, fix safe for 2.79, btw.
* 255 maximum seems excessive for F-Curve handle vertices; now reduced to 100
* Vertex Size is no longer restricted to the old 10px maximum size limit
(used because Windows limited the maximum vertex size drivers needed to
support)
It's more important that there is some form of feedback that the strips
are muted (i.e. dotted borders) than the fact that those dotted borders
may have slightly rounded corners. So, just use a regular sharp-cornered
rect when the strips need to be muted.
There's no reason to manually iterate over items in a DLRBT_Tree,
as the structure is designed to be able to be safely cast down
to a ListBase and ListBase-like nodes.
Adding structs checked for duplicates,
causing approx 75k string comparisons on startup.
While overall speedup is minimal,
Python access to `bpy.types` will now use a hash lookup
instead of a full linked list search.
See D2774
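A sketch of the registration side, assuming Blender's usual BLI_ghash string-hash API (the exact call sites are not from the commit):

```
#include "BLI_ghash.h"   /* Blender's hash table API */
#include "RNA_access.h"  /* StructRNA, RNA_struct_identifier() */

static GHash *srna_map = NULL;

/* Register each struct once in a string-keyed hash. */
void srna_register(StructRNA *srna)
{
    if (srna_map == NULL) {
        srna_map = BLI_ghash_str_new(__func__);
    }
    BLI_ghash_insert(srna_map, (void *)RNA_struct_identifier(srna), srna);
}

/* Lookup: O(1) average instead of a full linked-list scan. */
StructRNA *srna_lookup(const char *identifier)
{
    return BLI_ghash_lookup(srna_map, identifier);
}
```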
Changes for 2.8x to use EvaluationContext caused some confusion
- Would use scene layer passed from snap context.
- Would generate duplis from Main eval context.
- Would take context argument and use it to create another eval context.
Adding context args all over and filling in a new eval-context
for every ray-cast test isn't ideal either.
Remove the context argument since the purpose of
SnapObjectContext is to avoid this kind of confusion.
Store the EvaluationContext once and re-use.
This effectively reduces firefly bleeding all over the place.
We still need the clamp in the resolve pass because level 0 has not been clamped.
NOTE: I did not clamp each sample individually, for performance, BUT I did not profile it to know how much it costs.
This was surely caused by float overflow. Limit roughness in this case to limit the BRDF intensity.
Also compute VH faster.
Add a sanitizer to the SSR pass for investigating where NaNs come from. Play with the roughness until you see where the black pixel is / comes from.
It doesn't seem that useful in practice; it was mostly added to match some
other renderers, but also seems to be causing user confusion and accidental
long render times. So let's just keep the UI simple and remove this.
Differential Revision: https://developer.blender.org/D2768
We're adding some bias by default, which now I think is the right thing
to do from a usability point of view since you really need to use those
options anyway to get clean renders in a practical time.
Differential Revision: https://developer.blender.org/D2769
Adds thin/default/thick modes to add -1/0/1 to the auto detected line width,
while leaving the overall UI scale unchanged.
Also tweaks the default line width threshold, so thicker lines start from
slightly higher UI scales.
Differential Revision: https://developer.blender.org/D2778
Group texture fetches to hide latency. 3.2ms -> 2.2ms (constant time improvement, not depending on scene complexity)
Could optimize further with textureGather (require OpenGL 4.0).
These materials are rendered after the SSR pass.
The only difference with previous method is that they have a depth prepass (less overdraw) and are not sorted.
For the moment the only way to enable this is to:
- enable Screen Space REFLECTIONS.
- enable Screen Space Refraction in the SSR parameters.
- enable Screen Space Refraction in the material tab.
We track the previous ray position offset by the thickness. If the sampled depth is between this value and the current ray position then we have a hit.
This fixes rays that are almost collinear with the view vector. Thickness is now only important for rays that are coming back to the camera.
As a consequence, this simplifies a lot of things.
Also include some refactor.
Since we are working with non power of 2 textures, the mipmap level UV does not line up perfectly.
This resulted in skewed filtering and bad sampling of the min/max depth buffer.
We generate a 3D LUT to precompute the BTDF intensity.
I decided to use 64*64*16 (N dot V, IOR, roughness) because the BTDF varies less with roughness than with IOR.
We also remap the IOR to better use the space in the LUT.
* Changed categories of some keywords
* Reordered some longer keywords that didn't appear
* Activated another color (reserved builtins) by Leonid
* Added some missing HGPOV and UberPOV keywords
Patch by Maurice Raybaud (@mauriceraybaud). Thanks to Leonid for additions, feedback and Linux testing.
Related diffs: D2754 and D2755.
While not a regression, this is a new feature and it would be nice to have it
backported to final 2.79.
Bug introduced in recent ID copying refactor.
This commit basically sanitizes seq strip copying behavior, by making
destination scene pointer mandatory (and source one a const one).
Nothing then prevents you from using the same pointer as source and
destination!
This is a bit confusing, especially when one mixes OpenCL code, where ulong
equals uint64_t, with CPU-side code, where from the naming ulong is expected
to be something else.
This commit makes it so we use explicit name, common on all platforms.
When a mesh changes its number of vertices during the animation,
Blender rebuilds the DerivedMesh, after which the materials weren't
applied any more (causing a fallback to the first material slot).
Commit b6d7cdd3ce changed how the mesh data
is deformed, which wasn't taken into account yet in this unit test.
Instead of directly reading the mesh vertices (which aren't animated any
more), we convert the modified mesh to a new one, and inspect those
vertices instead.
Problem was that some code checks to see if device_pointer is null or
not and the new allocator wasn't even setting the pointer to anything
as it tracks memory location separately. Setting the pointer to non-null
keeps all users of device_pointer happy.
Also internal changes so arrow3d matches grab3d's behavior.
Needed to add WM_MANIPULATOR_DRAW_OFFSET_SCALE flag so
we can optionally apply offset in worldspace or screen scaled values.
This could make output really polluted, where it'll be hard to see actual
issues.
It is still possible to have all backtraces printed using BLENDER_VERBOSE
environment variable.
As part of the fix for T51587, I removed the Depth output for non-Multilayer
images since it seemed weird that PNGs etc. that don't have a Z pass still get
a socket for it.
However, I forgot about non-multilayer EXRs, which are a special case that can
actually have a Z pass.
Therefore, this commit brings back the Depth output for non-multilayer images
just like it was in 2.78.
- Apparently MSVC does not support compound literals
in C++ (at least by the looks of it).
- Not sure how opencl_device_assert was managing to
set a protected property of the Device class.
We don't enable global SSE optimizations in the regular kernel, and we
keep those disabled on Linux 32bit.
One possible workaround would be to pass arguments by ccl_ref, but
that is quite a bit of code which had better be done accurately.
Steps to reproduce:
- Create shader Image texture -> Diffuse BSDF -> Output. Do NOT select image yet!
- Start viewport render.
- Select image from the ID browser of Image Texture node.
Thing is: with the memory manager we always need to inform the device that
memory was freed.
Come on folks, nobody considered master a C++11-only branch. Such a decision is
to be made officially and will involve changes in quite a few
infrastructure-related areas.
New code was correctly handling an ID's internal references to itself, but
not references between different 'made real' objects...
Regression, to be backported in 2.79.
It is defined to & for CPU-side compilation, and defined to empty for any GPU
platform. The idea here is to use this macro instead of an #ifdef block with a
bunch of duplicated lines, just to make it so CPU code is efficient.
Eventually we might switch to references on CUDA as well, but that would require
some intensive testing.
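A condensed sketch of the macro (the guard name is approximated; Cycles' actual guards may differ), with a stand-in float4 so the snippet is self-contained:

```
typedef struct float4 { float x, y, z, w; } float4;

#ifdef __KERNEL_GPU__
#  define ccl_ref            /* GPU: plain pass-by-value */
#else
#  define ccl_ref &          /* CPU (compiled as C++): pass by reference */
#endif

static inline float average4(const float4 ccl_ref a)
{
    return (a.x + a.y + a.z + a.w) * 0.25f;
}
```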
The documentation for the bpy.app.handlers.scene_update_{pre,post}
handlers states that they're called "on updating the scenes data".
However, they're called even when the data hasn't changed. Of course
such handlers are useful, but the documentation should reflect the
current behaviour.
Reviewers: mont29, sergey
Subscribers: Blendify
Maniphest Tasks: T46329
Differential Revision: https://developer.blender.org/D1535
Image textures were being packed into a single buffer for OpenCL, which
limited the amount of memory available for images to the size of one
buffer (usually 4gb on AMD hardware). By packing textures into multiple
buffers that limit is removed, while simultaneously reducing the number
of buffers that need to be passed to each kernel.
Benchmarks were within 2%.
Fixes T51554.
Differential Revision: https://developer.blender.org/D2745
Simply disabled python tests, they can't be run anyway (since blender target is
not enabled) and we don't have any player-related tests in that folder.
This will allow much finer control over how we copy data-blocks, from
full copy in Main database, to "lighter" ones (out of Main, inside an
already allocated datablock, etc.).
This commit also transfers a lot of what was previously handled by
per-ID-type custom code to generic ID handling code in BKE_library.
Hopefully this will avoid future inconsistencies and missing bits we had
all over the codebase in the past.
It also adds missing copying handling for a few types, most notably
Scene (which was using a fully customized handling previously).
Note that the type of allocation used during copying (regular in Main,
allocated but outside of Main, or not allocated by ID handling code at
all) is stored in ID's, which allows to handle them correctly when
freeing. This needs to be taken care of with caution when doing 'weird'
unusual things with ID copying and/or allocation!
As a final note, while rather noisy, this commit will hopefully not
break too much existing branches, old 'API' has been kept for the main
part, as a wrapper around new code. Cleaning it up will happen later.
Design task : T51804
Phab Diff: D2714
Note these are intended for platform maintainers, we do not intend to
support users making their own builds with these. For that precompiled
libraries from lib/ should be used.
Implemented by Martijn Berger, Ray Molenkamp and Brecht Van Lommel.
Differential Revision: https://developer.blender.org/D2753
Since all the shadow catchers are already assumed to be in the footage,
the shadows they cast on each other are already in the footage too. So
don't just let shadow catchers skip self, but all shadow catchers.
Another justification is that it should not matter if the shadow catcher
is modeled as one object or multiple separate objects, the resulting
render should be the same.
Differential Revision: https://developer.blender.org/D2763
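A rough sketch of the rule change (illustrative names, not Cycles' actual code):

```cpp
struct Object {
  bool is_shadow_catcher;
};

/* Before: a catcher only skipped shadows cast by itself.
 * After: a catcher skips shadows cast by any shadow catcher, since
 * those shadows are assumed to be in the footage already. */
static bool shadow_is_ignored(const Object &receiver, const Object &caster)
{
  return receiver.is_shadow_catcher && caster.is_shadow_catcher;
}
```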
Shows new, reference and diff renders, with mouse hover to flip between
new and ref for easy comparison. This generates a report.html in
build_dir/tests/cycles, stored along with the new and diff images.
Differential Revision: https://developer.blender.org/D2770
* Remove some unnecessary SSE emulation defines.
* Use full precision float division so we can enable it.
* Add sqrt(), sqr(), fabs(), shuffle variations, mask().
* Optimize reduce_add(), select().
Differential Revision: https://developer.blender.org/D2764
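For example, a select() along those lines could look like this (a sketch assuming masks are full-width comparison results, not the actual Cycles implementation):

```cpp
#include <emmintrin.h> /* SSE2 */
#ifdef __SSE4_1__
#  include <smmintrin.h> /* _mm_blendv_ps */
#endif

/* Per-lane select: returns a where the mask is set, b elsewhere. */
static inline __m128 select_ps(__m128 mask, __m128 a, __m128 b)
{
#ifdef __SSE4_1__
  return _mm_blendv_ps(b, a, mask);
#else
  /* SSE2 fallback: (mask & a) | (~mask & b) */
  return _mm_or_ps(_mm_and_ps(mask, a), _mm_andnot_ps(mask, b));
#endif
}
```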
I need to use some macros defined in util_simd.h for float3/float4, to emulate
SSE4 instructions on SSE2. But due to issues with the order of header includes
this was not possible; this does some refactoring to make it work.
Differential Revision: https://developer.blender.org/D2764
We already detect this automatically based on shading nodes and per-shader
settings, and performance of this option is OK now on all devices.
Differential Revision: https://developer.blender.org/D2767
- New manipulator tracks lamps to the position under the cursor.
- Works with multiple lamps, keeping relative offsets.
- Holding Ctrl moves the lamp.
- Access via manipulator or Shift-T.
Code could be improved, but I'd like to get feedback from users.
Render-border & crop-node 2d-cage manipulators were unreasonably
complicated to implement because there was no good way to define
the sub-region the manipulator was transforming in
(render border within the camera's frame for example).
Add matrix-space variable,
remove scale property from cage2d manipulator, use matrix instead.
Was doing this with property get/set but this made view operations
require refreshing manipulator properties.
Simplify by operating on properties in their own space.
Also disable clamping for now since it assumes pixel-space.
Two main things here:
1. Replace all characters unsafe for the #line directive in a single loop,
avoiding multiple iterations and multiple temporary strings created.
2. Don't merge tokens char by char, but calculate start and end points and
then copy the whole substring at once.
This gives about 15% speedup of source processing time. At this point
(with all previous commits from today) we've shrunk the compiled
sources size from 108 MB down to ~5.5 MB and lowered processing time
from 4.5 sec down to 0.047 sec on my laptop running Linux (this was a
constant time which Blender would always spend the first time loading the
kernel, even if we've got a compiled clbin).
Add a safe version of normalize, since all uses of normalize
did zero-length checks; move this into a function.
Also avoid unnecessary conversion.
Gives minor speedup here (approx 3-5%).
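In spirit, the helper is something like this (a sketch; the actual float3 type and function name differ):

```cpp
#include <cmath>

struct float3 { float x, y, z; };

/* Normalize, returning the input unchanged when its length is zero,
 * so callers no longer need their own zero-length checks. */
static float3 safe_normalize(const float3 &a)
{
  const float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
  if (len == 0.0f) {
    return a; /* a zero vector stays a zero vector */
  }
  const float inv = 1.0f / len;
  return {a.x * inv, a.y * inv, a.z * inv};
}
```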
Basically gather lines as-is during traversal, avoiding allocating
memory for all the lines in headers.
Brings an additional performance improvement of about 20%.
The idea here is that it is possible to mark certain include statements
as "precompiled" which means all subsequent includes of that file will
be replaced with an empty string.
This is a way to deal with a tricky include pattern happening in the
single-program OpenCL split kernel, which was including a bunch of
headers about 10 times.
This brings preprocessing time from ~1sec to ~0.1sec on my laptop.
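Conceptually the substitution works like this (a hypothetical sketch that treats every header as marked, where the real code only does this for explicitly marked ones):

```cpp
#include <set>
#include <string>

static std::set<std::string> seen_includes;

/* First include of a file returns its contents; every subsequent
 * include of the same file expands to an empty string. */
static std::string resolve_include(const std::string &filename,
                                   const std::string &contents)
{
  if (!seen_includes.insert(filename).second) {
    return "";
  }
  return contents;
}
```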
The idea is to re-use files which were already processed. Gives about 4x speedup
of processing time (~4.5sec vs 1.0sec) on my laptop for the whole OpenCL kernel.
For users it will mean lower delay before OpenCL rendering might start.
Keyframe handle vertices (the circles on the ends of the handles)
should always be larger than the central vertex. This brings back the
"outer" radius value from the old gluDisk(), and doubles it to get the
necessary diameter, to scale it properly.
TODO's:
- Get rid of all fills inside these circles
- Make the central vertex square-shaped again
* Outlines of keyframes were too thick and ugly
* Size differences between keyframe types were being swallowed
by the pixel-fudge factor, leaving colour as the only distinguishing
factor (bad!)
This changes the Cycles exporting and Cycles/Eevee UI code to support both
output material nodes, giving priority to the renderer native one. Still
missing is Eevee code to prefer the Eevee output node.
This is still far from perfect, but yet much better than what we had so
far (more consistent with the inherent precision available in floats).
Note that this fixes some (currently commented out) units unittests, and
requires adjusting some others, will be done in next commit.
The bug was in RNA nodes code actually; itemf functions shall never, ever
return NULL!
Note that there were other itemf functions there that were potentially
buggy. Also harmonized their code a bit.
* Numbers with units (especially angles) were not handled correctly
regarding the number of significant digits (spotted by @brecht in T52222
comment, thanks).
* Zero value has no valid log, need to take that into account!
It now uses a quality slider instead of stride.
Lower quality takes larger strides between samples and uses lower mips when tracing rough rays.
Raytracing is now done entirely in homogeneous coordinate space. This runs much faster.
Should be fairly optimized. We are still bandwidth-bound.
Add a line-line intersection refine.
Add a ray jitter between the multiple rays per pixel to fill in some undersampling in mirror reflections.
The tracing now stops if it goes behind an object. This needs some work to allow it to continue even behind objects.
Running undo would notify manipulators to refresh,
but this still allowed for events in the queue to be handled,
where manipulators could be drawn for selection before
their refresh callback runs.
This made Python manipulators raise exceptions
about referencing invalid data (or crash).
Now tag manipulator update on file load (including undo)
and ensure the refresh callback runs
before drawing manipulator selection.
Also split manipulator map refresh flag in two since selection doesn't
perform the same operations as regular drawing.
This will only be noticeable for drawing many instances.
In a contrived use-case with many instances and `USE_PROFILE` disabled,
this can close to double playback FPS.
The option to disable this is left in the code in case we want to
debug memory use.
See D2756 for details.
- Each allocation can be a different size
(but should be smaller than the chunk size).
- Result can be looped over in order of allocation.
- Allocations are aligned to pointer size to avoid unaligned reads.
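A rough sketch of such an allocator (illustrative only, not the actual BLI API):

```cpp
#include <cstddef>
#include <cstdlib>

struct Chunk {
  Chunk *next;
  size_t used;
  char data[4096]; /* fixed chunk size */
};

struct MemIter {
  Chunk *first = nullptr, *last = nullptr; /* list order = allocation order */
};

/* Bump-allocate 'size' bytes (assumed <= chunk size), pointer-aligned. */
static void *memiter_alloc(MemIter *mi, size_t size)
{
  /* Round up to pointer size to avoid unaligned reads. */
  size = (size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
  if (mi->last == nullptr || mi->last->used + size > sizeof(mi->last->data)) {
    Chunk *c = static_cast<Chunk *>(std::calloc(1, sizeof(Chunk)));
    if (mi->last) { mi->last->next = c; } else { mi->first = c; }
    mi->last = c;
  }
  void *p = mi->last->data + mi->last->used;
  mi->last->used += size;
  return p;
}
```

Iterating the chunk list front to back then visits allocations in the order they were made.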
When calling sculpt from Python,
setting 3D 'location' but not 2D 'mouse' stopped working in 2.78.
Now check if the operator is running non-interactively and
skip the mouse-over check.
Regression in D1812: PyDriver variables as Objects
Taking the Python representation is nice in general
but for enums it would convert them into strings,
breaking some existing drivers.
Some users really liked the previous behavior, so it's now an option.
Cursor Lock Adjustment can be disabled to give something close to the
2.4x behavior of cursor locking.
When lock-adjustment is disabled, placing the cursor moves the view.
This avoids the issue reported in T40353
where the cursor could get *lost*.
Even strands that were excluded by the density texture were being added
to the DM passed to cloth, but these ended up having some invalid data,
because they were not fully constructed.
This simply excludes `UNEXISTED` particles from the DM generation, as
would be expected.
Note that the fix is not perfect: it systematically refcounts all IDs
assigned to a node's id pointer, which breaks the 'do not refcount
scene/object/text datablocks' principle...
But besides that principle being far from ideal in general, it becomes
pretty much impossible to apply when using a //generic// ID pointer,
unless we add some kind of type data to that pointer somehow.
So for now, better to live with that than having a broken usercount.
This commit is a step towards having fewer updates during playback, which
speeds things up a lot here. The idea is simple: stop updating all
copy-on-write datablocks (which implies full re-evaluation actually) on
frame change and re-use existing evaluated meshes as much as possible.
This brings playback speed to 24 fps on the dino test scene here. Performance
drops a lot when an armature is animated though, but that's because of the
need for tangent-space calculation, which we can't do much about from just a
dependency graph.
Hopefully this doesn't make copy-on-write too unstable, quick tests here are
surviving fine.
Old performance debug was doing queries for every frame even if not debugging performance.
Also, it did not record when a pass was drawn multiple times, leading to incorrect measurements.
The new module also allows grouping the timers to limit the info displayed.
Also fix the background CPU draw timer.
The purpose of the keymap strings is probably for un-embossed menu items
like seen in most pulldowns. I can't see a reason for also adding that
string for regularly drawn buttons within popups, we don't add it
anywhere else in the UI either. So this commit makes sure shortcut
strings are only added to buttons that are drawn like pulldown-menu
items.
The Blender text editor's built-in python template "Gamelogic" has a reference near the bottom to "objectHitList" as an alleged attribute of the KX_TouchSensor. This name is incorrect; its correct name is "hitObjectList".
Attempting to access the suggested objectHitList raises an error:
```
AttributeError: 'KX_TouchSensor' object has no attribute 'objectHitList'
```
The provided diff corrects this minor error.
Reviewers: kupoman, moguri, campbellbarton, Blendify
Reviewed By: Blendify
Tags: #game_engine, #game_python
Differential Revision: https://developer.blender.org/D2748
`defvert_array_find_weight_safe()` was confusing 'invalid vgroup' and
'valid but totally empty vgroup' cases.
Note that this also affected at least ShrinkWrap and SimpleDeform
modifiers.
This means once an ID is created,
it will keep using the same PyObject instance.
This has some advantages:
- Avoids unnecessary re-creation of instances on UI poll / redraw.
- Accessing freed IDs gives an exception instead of crashing.
(a long-standing annoyance!, though this only applies to IDs
and not yet to other data that uses the IDs - vertices, for example).
- Allows using instance comparison (a little faster).
Note that the instances won't be kept between undo.
Was doing 2x lookups which is OK for click-select
but this runs on mouse-move and can become slow.
May enable this again if highlighting logic changes.
Also scale hotspot by pixelsize.
- Cleanup array access, move into functions.
- Store allocated size to avoid realloc's on every add/remove.
- Make select editable from Python.
- Rename select callback to select_refresh
(collided with select boolean).
- Call select_refresh when de-selecting as well as selection.
This was caused by some post-processes not always sampling the highest mipmap.
But if there is no need for mipmapping (i.e. no SSR) these levels will be undefined.
So forcing all post-FX shaders to sample level 0 fixes this.
This adds the possibility to use planar probe information to create SSR.
This has 2 advantages:
- Tracing is less expensive since the hit is found much quicker.
- We have far fewer artifacts due to missing information.
There is still room for improvement.
BKE_scene_copy explicitly ignores visibility of "source" collections, making
all collections visible. This is also tested by regression tests.
While it seems more logical to simply preserve all possible visibility flags
and overrides, I don't feel like committing a behavior change without talking
to the author of those guards first.
This commit fixes cycles material preview.
If an object is only listed in a collection but not added to any of the
layers, we shouldn't create a placeholder for it, because otherwise we'll
leave lots of placeholder ID nodes.
Question: can we make this exception more reliable?
This commit separates the depth texture into another texture array.
This removes the need to output radial depth into alpha.
Unfortunately it's difficult to recover position from the non-linear depth buffer when applying reflections without adding a bunch of stuff.
This is in preparation of SSR planar reflections.
- Encode normals for other opaque BSDFs so they are not rejected by the normal-facing test.
- Early out on non-reflective surfaces.
- Add small offset to raytrace to avoid self-intersection.
- Fix fallback probes not appearing.
Output normals, specular color and roughness into 2 buffers.
This way we can raytrace in a deferred fashion and blend the exact contribution of the specular lobe on top of the opaque pass.
Ideally we need a clean, sane, and impossible-to-break way of making sure the
evaluation context is fully initialized, but that would need some thought
and experimentation.
Otherwise we'll have a confused dependency graph builder, which wouldn't be
able to build a proper graph.
Didn't find a way to avoid the world copy here; we can probably get away with
some shallow copy, but that would currently complicate the code a lot.
Ideas to consider here:
- Use shallow copy of existing world after new ID management API is in place.
Downside would be thread safety, kind of nice to have everything local.
- Switch depsgraph away from ID_TAG and do hash lookup or so.
This will slow down depsgraph builder, but will make code more reliable.
The order of evaluation of function arguments is undefined, and the order
was reversed between these compilers. This was causing regressions tests
to give different results between Linux and macOS.
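The classic shape of the problem, where two compilers may legitimately disagree:

```cpp
#include <cstdio>

static int counter = 0;
static int f() { return ++counter; } /* returns 1 or 2... */
static int g() { return ++counter; } /* ...depending on which runs first */

int main()
{
  /* The evaluation order of f() and g() is unspecified, so both
   * "1 2" and "2 1" are valid outputs. */
  std::printf("%d %d\n", f(), g());
  return 0;
}
```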
In the future we should make these two buttons on one line
However because we need `gen_context = 'PAINT_STENCIL'`
this is a little hard and we need to find a proper solution.
One might be using `context_pointer_set`
Patch by @craig_jones with edits by @blendify
Differential Revision: https://developer.blender.org/D2710
Moving the ray_start_local to the new position does not lose as much precision as moving the ray_org_local to the corresponding position.
The problem of inaccuracy is within the functions `bvhtree_ray_cast_data_precalc` and `fast_ray_nearest_hit`, and not directly in the values of the rays.
Those functions did not use the evaluation context.
Also fixed lots of unused-variable warnings caused by commented-out code which
needs to be ported away from DerivedMesh and to the evaluation context.
Note that some small parts of the code have been disabled because eval_ctx
was not available there. This should be resolved once DerivedMesh is
replaced.
The way we use it, UI_PRECISION_FLOAT_MAX actually gets + 1 added to give the
total number of digits, and a float only has 7 meaningful digits, so that
define should be 6.
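A quick way to see why 6 is the right value:

```cpp
#include <cfloat>
#include <cstdio>

int main()
{
  /* FLT_DIG == 6: only 6 decimal digits are guaranteed to survive a
   * round-trip through a 32-bit float, so precision + 1 digits must
   * stay within the ~7 that are meaningful. */
  std::printf("FLT_DIG = %d\n", FLT_DIG);
  std::printf("%.*g\n", FLT_DIG + 1, (double)0.123456789f);
  return 0;
}
```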
GCC seems to detect uninitialized variables passed into function calls now,
but then isn't always smart enough to see that they are actually initialized.
Disabling this warning entirely seems a bit too much, so initialize a bit
more now.
The remapping code was creating placeholders for objects coming from legacy
bases, but since those objects were never created by dependency graph (since
they are supposed to be ignored) the copy on write relations creation was
confused.
Now we do some special trickery to clear legacy bases on copy on write.
Currently RNA doesn't give us a good way of accessing singleton layers,
for now expose as a layer list (skin & paint-mask do this too).
Noted in T47811 that this should be changed.
This commit unifies the flattened texture slot names for bindless and regular CUDA textures. Texture indices are now identical across all CUDA architectures, where before Fermi used different indices, which led to problems when rendering on multi-GPU setups mixing Fermi with newer hardware.
Those checks are not always helpful, since ID remapping doesn't want to
worry about which components to tag for update. Perhaps in the future we
will introduce a special flag which would mean "tag everything possible".
We already get enough reports of people complaining about crashes in
edit mode, unaware that they are in the non-supported Blender Render
engine.
Blender Render is going away, no reason to keep it around. Once we have
a nice fallback on Eevee and fast file loading we can default to Eevee
instead.
This is the fake ID nature of the compositor again. Need to discard such
pointers before freeing the datablock, even for scenes (before, it was done
for objects only).
Previously it was possible to run into a situation where the armature is
constructed prior to the objects which are used for its constraints. This was
causing wrong scene evaluation.
Now we create placeholders for objects used by the armature in case they
don't have an ID node yet, which ensures we have a proper mapping from
original to copy-on-write ID pointer.
This shows the bug where the IK solver doesn't update reliably when targeting
an external object and when that object is handled by build_object() after
the armature.
The issue was caused by id_copy_no_main() changing pointers of constraints used
in the pose to a newly allocated ID. This is correct, but caused confusion in
our copy-on-write remapping, because we are mimicking in-place duplication by
copying memory over from a temporarily duplicated ID to a proper placeholder.
This was causing dangling pointers in the pose to a temporarily allocated ID.
Now we add special code to the remapping callback which replaces the temporary
ID with a proper one.
Couple of main things here:
- Properly handle PSYS_UPDATE_* flags from DEG_id_tag_update.
There are still some possible issues here related to the fact
that we don't differentiate different PSYS_UPDATE_* flags here
and handle them all the same.
Another possibility here is that object-level particle settings
evaluation might be forced when particle system evaluation is
tagged for update. Didn't see an actual issue here yet, but it
needs a closer look.
- Don't tag non-object datablocks on visibility changes.
Those don't depend on visibility anyway. This prevents particle
settings IDs from flushing updates to all objects, causing all
cached particles to be lost.
- Only update translation and geometry components on visibility
changes.
Once again, this prevents particle cache from being invalidated.
We might need to tag material components here still, though.
This commit makes it so that only ID components which correspond to the tag
flag are tagged for update (previously the whole ID would have been updated
in most cases).
This allows us to have more granular tag flags and prevent tagging of things
we don't want to be tagged.
Previously, tagging particle settings for update would iterate over all
objects and all their particle systems to see whether something needs an
update or not. Now we put ParticleSettings as an ID into the dependency
graph, so tagging it for update will nicely flush updates to all dependent
particle systems.
The current downside of this is that due to a limitation of the flush
routines it will cause some extra particle system re-evaluation when it's
technically not needed, and, more annoyingly, it will currently discard
point caches more often.
However, this is a good and simple demonstration case for improving the
tagging/flushing system to accommodate such cases (similar issues happen
with CoW and shading components). So let's try to find some generic
solution to the problem!
This replaces usage of a generic PLACEHOLDER with string lookup with a more
explicit opcode. This should make it faster to build the dependency graph by
avoiding string comparisons when they're not needed.
There should be no user-measurable difference.
Hopefully this actually fixes it; at least I could not reproduce it anymore
with that change, but it was already quite hard to trigger before.
We need a memory barrier at this allocation, otherwise it might happen
after the preview gets added to the done queue, so the preview could end up
being freed twice, leading to a crash.
Change the implementation so it no longer takes over the mouse cursor motion
from the OS, instead only move it when warping, similar to Windows and X11.
Probably the reason it was not done this way originally is that you then get
a 500ms delay after warping, but we can use a trick to avoid that and get much
smoother mouse motion than before.
While drawing nice 'rounded' values is OK for 'low precision'
editing like dragging and such, it's quite an issue when you type in a
precise value, validate, edit the value again, and find a rounded
version of it instead of what you typed in!
So now, *only when entering textedit of num buttons*, we always get the
highest reasonable precision for floats (and use exponential notation when
values are too low or too high, to avoid tremendous amounts of zeros).