D1713 from @angavrilov (with own edits)
The way the current weight paint code works is that, instead of making normalization lock-aware, a separate `enforce_locks` function is called to do a different kind of normalization, hoping that by the time the real normalize is called, there is nothing left for it to do. The problem is that:
- That different normalization is based on adding the same amount to all unlocked groups, whereas true normalize uses multiplication to preserve the ratio between them. The multiplicative approach should better match the way weights operate.
- If `enforce_locks` fails to achieve perfect normalization, true normalize will change locked groups.
Try to fix this by replacing `enforce_locks` with true lock-aware normalization support.
Supporting locked groups in normalize means that freezing the active group can make full normalization impossible if too much weight was added or removed, so it may be necessary to do two normalize passes. This is similar to how `enforce_locks` operates.
Also, now there is no need to go through the multi-paint branch just because of the locks. In the actual multi-paint case, the same normalize code can be used via a temporary flag array that represents the union of selected and locked groups.
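For illustration, here is a minimal sketch of what such lock-aware normalization could look like, using hypothetical names and a plain float array (the real patch operates on Blender's vertex group data):

```c
/* Minimal sketch of lock-aware normalization (hypothetical names).
 * Locked groups are never touched; unlocked groups are scaled
 * multiplicatively so the ratios between them are preserved. */
static void normalize_lock_aware(float *weights, const bool *locked, int num)
{
	float locked_sum = 0.0f, unlocked_sum = 0.0f;
	int i;

	for (i = 0; i < num; i++) {
		if (locked[i]) locked_sum += weights[i];
		else unlocked_sum += weights[i];
	}

	/* Budget left over for the unlocked groups. */
	const float target = 1.0f - locked_sum;

	if (target <= 0.0f) {
		/* Locked groups already consume the full budget; the best we can
		 * do without touching them is zero the unlocked groups. */
		for (i = 0; i < num; i++) {
			if (!locked[i]) weights[i] = 0.0f;
		}
	}
	else if (unlocked_sum > 0.0f) {
		/* Multiplicative scaling preserves ratios between unlocked groups. */
		const float fac = target / unlocked_sum;
		for (i = 0; i < num; i++) {
			if (!locked[i]) weights[i] *= fac;
		}
	}
	/* If unlocked_sum is 0 but target > 0, full normalization is impossible
	 * without adding new groups, which this design deliberately avoids. */
}
```

Freezing the active group would then just mean treating it as locked for a first pass; if that makes full normalization impossible, a second pass can presumably run without the freeze, much like `enforce_locks` did.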
User-visible effect should be:
- Auto-Normalize doesn't change behavior from vertex to vertex depending on whether the vertex has any locked groups.
- It never changes locked groups; if you lock some groups and start painting with seriously non-normalized weights, it's assumed you intended that.
- It never adds vertices to new groups, since the computer can't do that intelligently anyway - it was especially broken in the case of mirroring.
- It always acts to preserve the ratio between groups it changes, instead of (sometimes, see point 1) adding or subtracting the same amount.
Checking for 'Closest' here isn't needed since the
TransSnap.snapTarget callback already ensures the selected target is the closest.
Also, don't reuse the pre-calculated distance,
since it's only valid to do this when there are no snap points,
and avoiding the extra calculation isn't a significant gain - it runs once per update.
This patch adds a new API which allows us to access light shadow settings from Python. The new API can be used to write custom GLSL materials with shadows.
Reviewers: brecht, kupoman, agoose77, panzergame, campbellbarton, moguri, hg1
Reviewed By: agoose77, panzergame, campbellbarton, moguri, hg1
Projects: #game_engine
Differential Revision: https://developer.blender.org/D1690
This is how the old code used to initialize it. The current value was
failing big time when baking all caches (always set to MAXFRAME), and it
was added right before commit to quiet a warning... (I know, I know...)
Issue was with very thin domains along one or two axes; these could lead to a simulation
only one cell wide - and the smoke code assumes we have at least 4 cells in each direction.
So now, we clamp the resolution to a minimum of 4 in smoke_set_domain_from_derivedmesh().
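A minimal sketch of the clamp, with a hypothetical helper name; the real smoke_set_domain_from_derivedmesh() derives the per-axis resolution from the domain's bounding box and its resolution setting:

```c
/* Hypothetical sketch of the fix: clamp the derived per-axis resolution. */
static void clamp_smoke_res(int res[3])
{
	for (int i = 0; i < 3; i++) {
		/* Smoke code assumes at least 4 cells in each direction;
		 * a very thin axis could otherwise end up only one cell wide. */
		if (res[i] < 4) {
			res[i] = 4;
		}
	}
}
```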
Note: in extreme cases like this report, this will generate very un-cubic cells;
checking that this still works OK in the 3DView is needed here.
Thanks to @genscher and @kevindietrich for help on this issue. :)
Based on usages so far:
- Split the callback worker function in two, 'basic' and 'extended' versions. The former goes back
to the simplest version, while the latter keeps the 'userdata_chunk' and also gets the thread_id.
- Add use_threading to the simple BLI_task_parallel_range(); it turns out we need this pretty much
systematically, and it allows getting rid of most usages of BLI_task_parallel_range_ex().
- Now BLI_task_parallel_range() expects the 'basic' version of the callback, while BLI_task_parallel_range_ex()
expects the 'extended' version.
All in all, this should make common usage of BLI_task_parallel_range simpler (less verbose), and gives
the 'extended' callback access to the thread id, which is mandatory in some (future) cases.
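For reference, the resulting shape of the API, roughly (reconstructed from the description above rather than copied from the patch, so details may differ):

```c
/* 'Basic' callback: just userdata and the iteration index. */
typedef void (*TaskParallelRangeFunc)(void *userdata, const int iter);

/* 'Extended' callback: also gets the per-thread userdata_chunk and thread_id. */
typedef void (*TaskParallelRangeFuncEx)(
        void *userdata, void *userdata_chunk, const int iter, const int thread_id);

/* Simple variant: now takes use_threading directly, expects the 'basic' callback. */
void BLI_task_parallel_range(
        int start, int stop,
        void *userdata,
        TaskParallelRangeFunc func,
        const bool use_threading);

/* Advanced variant: keeps userdata_chunk, expects the 'extended' callback. */
void BLI_task_parallel_range_ex(
        int start, int stop,
        void *userdata,
        void *userdata_chunk, const size_t userdata_chunk_size,
        TaskParallelRangeFuncEx func_ex,
        const bool use_threading,
        const bool use_dynamic_scheduling);
```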
Our RenderSettings struct may have a few too many levels of nesting, which makes
it hard to track all 'pointer' data during copy...
Fixed several issues here, but not sure I found all existing ones. :/
I put all usage of GL_POINTS under the microscope. Fixed problems &
optimized a couple of spots.
- reduce calls to glPointSize by about 50%
- draw selected & unselected vertices together for UV editor & EditMesh
- draw initial gpencil stroke point at the proper size
- a few other smaller fixes
New policy: each GL_POINTS draw call needs to set its desired point
size. This eliminates half our calls to glPointSize (no more setting it
back to its 1.0 default after every draw).
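A small illustration of the policy under legacy GL (point size and vertex values are arbitrary):

```c
/* Each GL_POINTS draw sets the size it wants right before drawing... */
const float co[3] = {0.0f, 0.0f, 0.0f};

glPointSize(4.0f);
glBegin(GL_POINTS);
glVertex3fv(co);
glEnd();

/* ...and no longer restores the default afterwards, so the paired
 * glPointSize(1.0f) call after every draw goes away. */
```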
The combined pass is built from whichever contributions the user sees fit.
It is useful for lightmap baking, as well as for baking non-view-dependent
effects.
The manual will be updated once we get closer to the 2.77 release.
Meanwhile the new page can be found here:
http://dalaifelinto.com/blender-manual/render/cycles/baking.html
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D1674
- Only do this if the PBVH is not used for display; this should correspond
to what's happening in sculpt.c now.
- No need to do such synchronization for solid drawing, which supports
optimal display from the PBVH.
In fact, such synchronization is only needed in the following case: the
drawing callback does not support PBVH draw, while the sculpt session is
configured to use the PBVH for display and related operations.
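Restating that rule with invented names, just to pin down the condition:

```c
/* Hypothetical helper: sync mesh data from the PBVH only when we must
 * draw from mesh data while sculpting writes into the PBVH. */
static bool need_pbvh_to_mesh_sync(const bool drawcall_supports_pbvh,
                                   const bool session_displays_from_pbvh)
{
	return !drawcall_supports_pbvh && session_displays_from_pbvh;
}
```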
Previously the logic was simpler, only checking whether the mouse was
inside some area around a sliding zone, which resulted in the wrong
sliding zone being used when zoomed out.
924c626 broke loading on systems with a different endianness.
We could support reading these values from SDNA, but it isn't really worth the extra hassle.
Long support is still removed; we just keep the enum values the same.