interp_weights_face_v3 required a length-four array for weights even when
calculating weights for a tri; otherwise, it would access unknown memory.
This fix allows a weight array of size three to be passed when only
calculating tri weights.
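A minimal sketch of the fixed calling convention (assuming the usual BLI_math_geom.h signature, where a NULL `v4` selects the tri path):

```c
#include "BLI_math_geom.h"

void weights_example(const float v1[3], const float v2[3], const float v3[3],
                     const float v4[3], const float co[3])
{
  float w_tri[3]; /* with this fix, three floats suffice when v4 is NULL */
  interp_weights_face_v3(w_tri, v1, v2, v3, NULL, co);

  float w_quad[4]; /* a quad still requires all four weights */
  interp_weights_face_v3(w_quad, v1, v2, v3, v4, co);
}
```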
In ccgDM and emDM, looptri array recalculation was being handled
directly by `*DM_getLoopTriArray` (`getLoopTriArray` callback), while
`*DM_recalcLoopTri` (`recalcLoopTri` callback) was doing nothing.
This resulted in the array not being recalculated when other functions
that depend on the array data are called. These functions, such as
`getNumLoopTris`, call `recalcLoopTri` to ensure the data is up to date,
but in the case of CCGDerivedMesh that was doing nothing.
This moves all the recalculation code to `ccgDM_recalcLoopTri` and makes
`ccgDM_getLoopTriArray` call that.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2375
All objects were being parented to a single instance of each parent
object, instead of their respective instances, when using dupliverts or
dupligroups.
Behavior was caused by the `persistent_id[0]` (vertex/face id) being
ignored when computing `parent_gh` hash, which caused all instances to
have the same hash, and thus only the first one was included.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2370
Was affecting armatures' pose drawing code, which could try to draw with a
non-updated pose that may contain NULL bone pointers (e.g. after some
data-block management tool execution, like make local, remapping, etc.).
Main intention is to give a quick way to control a scene's memory
usage by clamping textures which are too big. This is really handy
in the early production stages, when you first create really nice
looking hi-res textures, and only once it all works and is approved
start investing time in optimizing your scene.
This is a new option in the Scene Simplify panel and it acts as
follows: when a texture is bigger than the given value, it is
scaled down by half until it fits into the given limit.
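A sketch of the clamping rule described above (not the actual implementation):

```c
/* Halve both dimensions until the texture fits the user-given limit. */
static void clamp_texture_size(int *width, int *height, const int limit)
{
  while (*width > limit || *height > limit) {
    if (*width == 1 && *height == 1) {
      break; /* can't shrink any further */
    }
    *width = (*width > 1) ? *width / 2 : 1;
    *height = (*height > 1) ? *height / 2 : 1;
  }
}
```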
There are various possible improvements, such as:
- Use threaded scaling using our own task manager.
This is actually one of the main reasons why the image resize is
implemented manually instead of using OIIO's resize. The other
reason is that the API seems too limited to easily construct a 3D
texture description.
- Vectorization of uchar4/float4/half4 textures.
- Use something smarter than box filter.
Was playing with some other filters, but not sure they are
really better: they tend to cause fuzzier edges.
Even with such TODOs in the code the option is already quite
useful.
Reviewers: brecht
Reviewed By: brecht
Subscribers: jtheninja, Blendify, gregzaal, venomgfx
Differential Revision: https://developer.blender.org/D2362
This will make triple buffer used by default for such configuration.
Ideally we would switch to triple buffer on all platforms, but let's
do it in the 2.8 branch and not open a can of worms in master now.
This should solve issues like T49945.
This is very confusing, in fact, and the RNA tooltip was wrong:
BKE_object_make_local_ex actually ensures we never have several proxies
of same object, since it always clears proxy when it has to copy object
to make it local...
What that RNA function is probably missing, though, is same logic as in
BKE_library_make_local to actually remap proxy from old linked object to
new local one.
I) `clear_proxy` parameter was not assigned to parm in RNA define code,
so 'pyfunc optional' flag was set to `new_id` parameter of `user_remap`
func - super ugly!
II) `clear_proxy` parameter itself, when set to False, would allow
leaving a .blend file in an invalid state (more than one proxy of the same
object), which should never, ever be allowed in the RNA API imho. Left the API
untouched for now, just disabled any effect from this parameter (hence
always clearing proxy when copying).
There is some define conflict between system headers and clew,
so delay include of clew.h as much as possible.
This is something which needed to be done in the code before
the refactor, hopefully such change will still work.
For the multi-GPU case users still have to reconfigure the devices they want to use.
Based on patch from Lukas Stockner.
Differential Revision: https://developer.blender.org/D2347
This can be used together with camera culling to keep nearby objects visible in
reflections, using a minimum distance within which objects are visible. It is
also useful to cull small objects far from the camera.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2332
When the parent object matrix change after the layer was parented, the
inverse matrix for strokes must be updated when editing strokes or the
transformations will be wrong.
We need to check node tree links are still valid, after we remapped
some NodeGroup.
Note: In fact, we have to run that for *all* ID types, since nodes may
use any kind of data-block (in theory)... :/
Forward compatibility code should never, ever be run during undo saving.
Note: related to T49991 (but does not fix it either, crash now happens
when doing a real file save...).
Adding a torus in edit-mode, with 'Generate UVs'
for example would either create another UV layer with the default name or
switch to the default UV layer name if it exists.
Now use the existing UV layer if present.
A '1' threshold value would only allow access to a third of the basic
'color space' (from black to white, from 0.0 to 1.0 component values),
when you expect it to access the whole range.
Unfortunately, this needs a subversion bump to allow already defined
brushes to keep exact same behavior!
Also, did not change default value (0.2) for new brushes, think here
keeping current one makes more sense.
Thanks to @LucaRood for confirming the issue.
Empty images were implemented to expand (and eventually replace)
the background images functionalities. If we are ever to drop
background images "image empties" should support stereo/multi-view as well.
The renderpasses for grease pencil are not necessary when rendering
from the sequencer.
This fix solves the GPF, but we need to rethink the complete render
process for grease pencil and integrate it better into the render and
compositing workflow.
Thanks to Dalai Felinto for helping in the debugging and fixing of the
problem.
It seems to me the icons are unused:
- VICO_VIEW3D_VEC
- VICO_EDIT_VEC
- VICO_EDITMODE_VEC_DEHLT
- VICO_EDITMODE_VEC_HLT
- VICO_DISCLOSURE_TRI_RIGHT_VEC
- VICO_DISCLOSURE_TRI_DOWN_VEC
- VICO_MOVE_UP_VEC
- VICO_MOVE_DOWN_VEC
- VICO_X_VEC
Since their code contains immediate mode GL calls and they seem to be unused, I thought we could remove them.
Reviewers: mont29
Reviewed By: mont29
Subscribers: merwin
Tags: #bf_blender_2.8
Differential Revision: https://developer.blender.org/D2356
On second and third thoughts, this should have been done that way since
the beginning: cases where you just delete a few data-blocks without any
serious knowledge of their usages are much, much more frequent than
cases where you are deleting thousands of data-blocks and are sure they
are not used anywhere anymore...
Own fault, but really frustrated that this topic was only raised the day
after 2.78a was released. :(
This has nothing to do here (freeing is not unlinking/remapping!), and
was actually redoing something already taken care of by
`BKE_libblock_relink_ex()` call in `BKE_libblock_free_ex()`.
Also, gives some noticeable speedup when removing datablocks with
do_unlink=True, about 5 to 10% quicker e.g. when deleting all objects
from a py console, in a big production file...
This commit reverts part of a fix for T33275, but things are:
- I can not reproduce the original issue at all, so doesn't seem to
cause any regressions.
- It is a really bad idea to do delayed initialization in a threaded
environment, it's a straight way to some nasty issues.
- We can't do things like this anyway because we go more granular,
meaning such a delayed initialization will fail in the case of
having several IK solvers (unless they properly accommodate to
changed bone head).
- Verified the fix with various files from the Mango project and all of
them seem to work nicely with the new dependency graph now (the old depsgraph
has some flickering, but it's not related to the DEG itself, rather to
an environment with lots of proxies and threaded evaluation, and it
is not a new behavior).
This reverts commit 9b5a32cbfb.
Apparently it is possible to have another thread mucking around with the hash.
Needs deeper investigation; for the time being, reverting to prevent crashes.
This commit fixes two issues:
- UV/Image editor uvs menu did not match the 3D View's which was changed in rB2b240b043078
- Circle select tool was missing in particle edit mode
Reviewers: Severin
Differential Revision: https://developer.blender.org/D2329
Added BKE_libblock_free_data_ex(), which takes a special do_id_user
argument that basically indicates whether the main database has already
been taken care of regarding "dead" pointers.
Gives about 40% speedup of main database free with quadbot scene
(3.4sec vs. 5.4 sec on quite powerful desktop).
Just fixing the crash itself. Actually the operator shouldn't run in most editors (not in the dopesheet either, I guess), but I don't want to spend time on that right now.
This option makes an operator not push a task to the undo stack if the previously stored element is the same operator or part of the same undo group.
The main usage is for animation, so you can change frames to inspect the
poses, and revert the previous pose without having to roll back tons of
"change frame" operator, or even see the undo stack full.
This complements rB13ee9b8e
Design with help by Sergey Sharybin.
Reviewers: sergey, mont29
Reviewed By: mont29, sergey
Subscribers: pyc0d3r, hjalti, Severin, lowercase, brecht, monio, aligorith, hadrien, jbakker
Differential Revision: https://developer.blender.org/D2330
- `bmesh_radial_faceloop_find_first` & `bmesh_disk_faceedge_find_first`
can be replaced with a single call to a new function:
`bmesh_disk_faceloop_find_first`
- `bmesh_disk_faceedge_find_first` called `bmesh_radial_facevert_check`
which isn't needed, since either the current or next loop in the
cycle is attached to the edge we're looking for.
The code was templated already, so I don't see a big reason to have
3 versions of templated functions. It was giving some extra code
to maintain, and in fact already had divergence in the support of huge
image resolutions (missing size_t cast in byte image loading).
There should be no changes visible to artists.
Just return the face or NULL, like BM_edge_exists() does.
Also for BM_face_exists_overlap & bm_face_exists_tri_from_loop_vert.
No functional changes.
Old code did some partial overlap checks where this made some sense,
but it's since been removed.
Object freeing may in some cases access its obdata (in case it has some
caches, e.g.); since here obdata may have already been freed, let's set
the object's data pointer to NULL (probably not an ideal solution, but we don't
care much, those form archipelagos of unused linked datablocks,
we nuke 'em all anyway).
Also fix stupid mistake in one of own recent commits (using ID we just
freed, tsst...).
This should make it easier to sculpt in high resolutions; the downside is that the new way to calculate maximum edge length is a bit less intuitive. Maximum edge length used to be calculated as blender_unit * percentage_value, now it's blender_unit / value.
Reused an old DNA struct member, but had to bump subversion to ensure correct compatibility conversion. Also changed the default value slightly (would have had to set it to 3.333... otherwise).
Was requested by @monio (see https://rightclickselect.com/p/sculpting/zpbbbc/dyntopo-better-scale-input-in-constant-detail-mode) and I think it's worth testing.
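For illustration, the two formulas side by side (a sketch, not the actual code):

```c
/* Old: detail expressed as a percentage-style multiplier. */
float max_edge_len_old(float blender_unit, float percentage_value)
{
  return blender_unit * percentage_value;
}

/* New: detail expressed as a divisor; e.g. a constant detail of 4.0
 * gives a maximum edge length of 0.25 Blender units. */
float max_edge_len_new(float blender_unit, float value)
{
  return blender_unit / value;
}
```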
Do not set 'real user' to groups every time we run the first clearing loop.
Also, do fully and properly clear LIB_TAG_DOIT (this is not yet enforced in
existing code, but would love to get to that stage in the future, so let's
do it at least with new code!).
Basic idea is to split first loop in two, and run checks before making
anything actually local, to detect data-blocks that we can directly make
local (because we are sure they are only used by already/future local
datablocks).
This allows us to avoid a lot of overhead in the later 'cleanup' steps of this
function; here with the barbershop shot it's four times quicker (from 190s to 48s).
We are still far from the instantaneous results of MakeLocal in 2.77,
but in that version main characters lose their connection to their
armature and remain static after makelocal, so guess new code is still
better. ;)
There are probably more optimizations possible here, but would rather
polish this area of code once we get rid of proxies, those really
make it a nightmare to work on.
If the OpenGL render with grease pencil is run from the VSE with the current
frame outside the visible frames, the render pass is wrong and the render
must be canceled because there is nothing to render. Related to T49975.
Before this commit, the brush set was created with the first stroke
drawing, but if the user creates the datablock or the layer manually
(not drawing) the brush list was empty.
This commit complements the Python fix by Sergey:
https://developer.blender.org/rB89c1f9db37cc1becdd437fcfdb1877306cc2b329
Culprit here was once more proxies. Think what was happening here was:
1) Both proxy and proxified armatures' PoseChannels were cleared
(needed after remapping due to Bone pointers being stored in pchans).
2) Proxy PoseChannels got rebuilt in `BKE_pose_rebuild_ex()`, which ends,
in proxy cases, by actually replacing rebuilt pchans by those from
the proxified object... which has not yet been rebuilt.
Fixed the issue by merely adding the bone pointer to the data copied from
the original pchan into the new 'from proxy' one... Sounds much, much safer and
saner anyway; that way we can be sure the bone pointer is actually pointing
to a bone of the object's armature (this is supposed to be the same
Armature datablock between proxy and proxified objects, but that may not
always be true, especially during the makelocal process).
Did similar trick to old dependency graph: tag invisible relations for update.
Might need some re-consideration, see the comment.
This should solve our issues with powerlib addon here in the studio.
New dependency graph expects strict separation between nodes and relations builder,
meaning, if we try to create relation with an object which is not in the graph yet
we'll have an error in depsgraph.
Now, so far object nodes were created from the bases of the current scene, which caused
missing objects in the graph in certain cases.
Didn't find a better approach than to simply ensure object nodes exist when we know
they'll be used by the relation builder.
`kernel_path.h` and `kernel_path_branched.h` have a lot of conditional code and
it was kind of hard to tell what code belonged to which directive. Should be
easier to read now.
Proxified objects can never be local, we can totally ignore them here.
This 'fixes' the asserts related to usercount when trying to remap the poselib
of localized proxified objects (not sure what exactly was going wrong here,
but proxies are a giant can of worms for sane data-block handling anyway :/).
This new `bpy.types.ID.make_local(clear_proxies=True)` allows Python
code to press the "Make Local" button on any ID block. I chose
`clear_proxies=True` as the default, since it's the default behaviour
of `id_make_local()` (defined in `library.c`).
The caller does need to take care of ensuring that linked-in objects
don't refer to local data, and that proxies aren't broken.
Reviewers: sergey, mont29
Reviewed By: mont29
Subscribers: dfelinto
Differential Revision: https://developer.blender.org/D2346
Add subframe to the animated seed hash calculation.
Should be no difference for regular files, only for cases when the scene is
rendered from the sequencer with a speed effect, which is not really a common thing.
This is a late follow-up commit to the light sample threshold changes which
caused difference in rendering all existing .blend files which is not something
we are happy about: it is fine to use new optimized defaults for new files, but
existing ones should always be rendering in the same way as they used to be.
Sorry for the inconvenience, but such a thing should have been done to begin with.
If this setting was modified it will not be reset to zero.
Now all render tests should be passing again.
P.S. Also really annoying to bump subversion for such reasons, but currently we
don't have better way to achieve what we want.
Lamp Data node requires shadow sample array which is only enabled when
Shadows are enabled in the shading settings.
This commit prevents crash but might not give expected render results
in such a configuration.
After the CUDA dynload changes, having the CUDA toolkit became required
in order to compile Cycles. This only happened due to a wrong
default value for the option.
The idea is simple: when falling back to one of the nodes which was partially
handled, we "resume" checking outgoing relations from the index at which we stopped.
This gives about 15-20% depsgraph construction time save.
This is the more proper way to go:
- Avoids re-compilation of all dependent files when implementation changes
without changed API,
- Linker should have a much simpler time now de-duplicating and getting rid
of redundant implementations.
The idea here is to address the issue that a name on its own is not
always unique: for example, when adding driver operations the
name used for nodes is the RNA path (and multiple drivers can
write to different array indices of the path). Basically, now
it's possible to pass extra integer value to distinguish
operations in such cases.
So we've now switched from using sprintf() to construct a unique
operation name to passing the RNA path and array index.
There should be no functional changes yet, but this work is
required for further work about replacing string with const
char*.
There is no real reason to have nodes storing heap-allocated name
and description. Doing this increases the number of allocations during
dependency graph building, which usually means some slowdown.
We're temporarily losing some eyecandy in the graphviz visualizer,
but that we can bring back as a part of the graphviz dump (which happens
much less often than a depsgraph build).
This will happen in multiple commits for the ease of bisect in the
future just in case this causes any regression. This commit contains
ID creation API changes.
Bullet spring constraint already supports rotational springs, but
they are not exposed in blender UI, likely due to a simple oversight.
Supporting them is as simple as adding a few DNA/RNA properties
with appropriate UI and passing them on to Bullet.
Reviewers: sergof
Reviewed By: sergof
Differential Revision: https://developer.blender.org/D2331
Previously, it was only possible to choose a single GPU or all of that type (CUDA or OpenCL).
Now, a toggle button is displayed for every device.
These settings are tied to the PCI Bus ID of the devices, so they're consistent across hardware addition and removal (but not when swapping/moving cards).
From the code perspective, the more important change is that now, the compute device properties are stored in the Addon preferences of the Cycles addon, instead of directly in the User Preferences.
This allows for a cleaner implementation, removing the Cycles C API functions that were called by the RNA code to specify the enum items.
Note that this change is neither backwards- nor forwards-compatible, but since it's only a User Preference no existing files are broken.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: brecht, juicyfruit, mib2berlin, Blendify
Differential Revision: https://developer.blender.org/D2338
With this fix, using a MIS map resolution equal to the image size for closest interpolation, or twice the size for linear interpolation, gets rid of all fireflies.
Previously, a much higher resolution was needed to get acceptable noise levels.
There are more DLLs hanging out in the ucrt folder, but I just grabbed the ones blender requested (not sure if that's a wise idea, but it seems to work).
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Differential Revision: https://developer.blender.org/D2335
We have to clear `newid` of all datablocks, not only object ones.
Note that this whole stuff is still using some kind of older, primitive
'ID remapping', would like to see whether we can replace it with new,
more generic one, but that's for another day.
Code would try to add the same key multiple times in `parent_gh` (for this
ghash a lot of dupliobjects may generate the same key).
Was making the tool unusable in debug builds.
Also optimize things a bit by avoiding creating parent_gh when only
`use_base_parent` is set.
New code dealing with getting rid of lib-only cycles of data-blocks
could add the same datablock several times to the list of candidates. Now
this is avoided, and pointers are further cleaned up as a double-safety
measure.
Feature request during bconf; makes sense to have it even as a hack for
now, since this is probably one of the most common use cases. This should
be redone in bmesh once we have proper custom normals handling in edit mode...
The issue was caused by image ID nodes not being in the depsgraph.
Now, tricky part: we only add nodes but do not add relations yet. Reasoning:
- It's currently important to only call editor's ID update callback to solve
the issue, without need to flush changes somewhere deeper.
- Adding relations might cause some unwanted updates, so will leave that for
a later investigation.
Basically, the problem here was that the transform that's used to bring texture coordinates
to world space is either fetched while setting up the shader (when Object Motion is enabled) or
fetched when needed (otherwise). That helps to save ShaderData memory on OpenCL when Object Motion isn't needed.
Now, if OM is enabled, the Lamp transform can just be stored inside the ShaderData as well. The original commit just assumed it is.
However, when it's not (on OpenCL by default, for example), there is no easy way to fetch it when needed, since the ShaderData doesn't
store the Lamp index.
So, for now the lamps just don't support local texture coordinates anymore when Object Motion is disabled.
To fix and support this properly, one of the following could be done:
- Just always pre-fetch the transform. Downside: Memory Usage increases when not using OM on OpenCL
- Add a variable to ShaderData that stores the Lamp ID to allow fetching it when needed
- Store the Lamp ID inside prim or object. Problem: Cycles currently checks these for whether an object was hit - these checks would need to be changed.
- Enable OM whenever a Texture Coordinate's Normal output is used. Downside: Might not actually be needed.
The animation system has separate fcurves for each array element and
the dependency graph creates separate nodes for each fcurve. This is
needed to keep granularity of updates, but causes issues because
the animation system will actually write the whole array to the property when
modifying a single value (this is a limitation of the RNA API).
Worked around by adding an operation relation between array drivers
so we never write to the same array from multiple threads.
It was possible to have synchronization issues when accumulating smooth
normals to a vertex, causing shading artifacts during playback.
Bug found by Dalai, thanks!
They were not real issues, it's just some areas of code tried to create
relations between non-existing nodes without checking whether such
relations are really needed.
Now it should be easier to see real bugs printed.
Hopefully should be no regressions here.
Some platforms are having a hard time using this linker, so added an option
to not use it. The option is an advanced one and is enabled by default, so it
should not cause any changes for current users.
Request from Hjalti Hjalmarsson for the animation work.
Basically a common part of the workflow of animation is to change the pose, scrub back and forth a few times and roll back the changes when unsatisfied.
However if you go back and forth too many times the UNDO stack would be full, and it would not be possible to bring back the previous pose.
I'm leaving clip_editor change frames as it is for now. But we can
probably change the behaviour there as well.
Issue here was that the py API code was keeping references (pointers) to the
linked data-blocks, which can actually be duplicated and then deleted
during the 'make local' process...
Would have liked to find a better way than passing an optional GHash to get
the oldid->newid mapping, but could not think of a better idea.
Radial append/remove had swapped args and *slightly* different behavior.
- bmesh_radial_append(edge, loop)
- bmesh_radial_loop_remove(loop, edge)
Match logic for append/remove,
Logic for the one case where the edge needs to be left untouched
has been moved to: `bmesh_radial_loop_unlink`.
This is yet another debug option that allows rendering an arbitrary
simulation field by using a color ramp to inspect its voxel values.
Note that when using this, fire rendering is turned off.
Reviewers: plasmasolutions, gottfried
Differential Revision: https://developer.blender.org/D1733
This allows saving a memory copy, which will be particularly useful for network rendering.
Reviewers: sergey, brecht, dingto, juicyfruit, maiself
Differential Revision: https://developer.blender.org/D2323
In scenes with many lights, some of them might have a very small contribution to some pixels, but the shadow rays are traced anyways.
To avoid that, this patch adds probabilistic termination to light samples - if the contribution before checking for shadowing is below a user-defined threshold, the sample will be discarded with probability (1 - (contribution / threshold)) and otherwise kept, but weighted more to remain unbiased.
This is the same approach that's also used in path termination based on length.
Note that the rendering remains unbiased with this option, it just adds a bit of noise - but if the setting is used moderately, the speedup gained easily outweighs the additional noise.
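A sketch of the termination rule described above (names hypothetical, not the actual kernel code):

```c
#include <stdbool.h>

/* Returns false if the light sample is discarded; otherwise reweights
 * the contribution so the estimate stays unbiased. `rand` is a uniform
 * random number in [0, 1). */
static bool light_sample_keep(float *contribution, const float threshold,
                              const float rand)
{
  if (threshold <= 0.0f || *contribution >= threshold) {
    return true; /* strong enough contribution: always keep as-is */
  }
  const float keep_probability = *contribution / threshold;
  if (rand >= keep_probability) {
    return false; /* discarded with probability 1 - contribution/threshold */
  }
  *contribution /= keep_probability; /* kept: weighted up to stay unbiased */
  return true;
}
```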
Reviewers: #cycles
Subscribers: sergey, brecht
Differential Revision: https://developer.blender.org/D2217
This option allows creating a smoother transition between Bricks and Mortar - 0 applies no smoothing, and 1 smooths across the whole mortar width.
Mainly useful for displacement textures.
The new default value for the smoothing option is 0.1 to give some smoothing that helps with antialiasing, but existing nodes are loaded with smoothing 0 to preserve compatibility.
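A sketch of the smoothing idea under those semantics (hypothetical helper, not Cycles' exact formulation):

```c
#include <math.h>

static float smoothstepf(const float edge0, const float edge1, const float x)
{
  float t = (x - edge0) / (edge1 - edge0);
  t = fminf(fmaxf(t, 0.0f), 1.0f);
  return t * t * (3.0f - 2.0f * t);
}

/* dist: signed distance from the brick/mortar boundary into the brick.
 * Returns the mortar factor: 1.0 inside the mortar, fading to 0.0
 * across `smooth * mortar_size`. */
static float mortar_factor(const float dist, const float mortar_size,
                           const float smooth)
{
  if (smooth <= 0.0f || mortar_size <= 0.0f) {
    return (dist < 0.0f) ? 1.0f : 0.0f; /* smooth == 0: hard edge */
  }
  return 1.0f - smoothstepf(0.0f, smooth * mortar_size, dist);
}
```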
Reviewers: sergey, dingto, juicyfruit, brecht
Reviewed By: brecht
Subscribers: Blendify, nutel
Differential Revision: https://developer.blender.org/D2230
When using the Normal output of the Texture Coordinate node on Point and Spot lamps, the coordinates now depend on the rotation of the lamp.
On Area lamps, the Parametric output of the Geometry node now returns UV coordinates on the area lamp.
Credit for the Area lamp part goes to Stefan Werner (from D1995).
This avoids traversing the archive every time object data is needed and
gives an overall consistent ~2x speedup here with files containing
between 136 and 500 Alembic objects. Also this somewhat nicely
deduplicates code between data creation (upon import) and data streaming
(modifiers and constraints).
The only worrying part is what happens when a CacheFile is deleted and/or
has its path changed. For now, we traverse the whole scene and for each
object using the CacheFile we free the pointer and NULL-ify it (see
BKE_cachefile_clean), but at some point this should be re-considered and
make use of the dependency graph.
The 'local' layers were not correctly set when redoing 'add object'
addons using object_utils.py helper (we always want to restore layers
from view in local view, even if we set 'real' layers from operator
afterwards).
Issue was happening when removal of custom icons was done while they
were still being rendered by preview job.
Now add a 'deferred deletion' system, to prevent the main thread from deleting
preview images until the loading thread is done with them.
Note that ideally, calling `ED_preview_kill_jobs()` on custom icon
removal would have been simpler, but we don't have easy access to
context here...
Oh man, is it a compiler bug? Is it something we do stupid?
For now more crap to prevent crashes. During the conference will talk to
Maxyn about how can we troubleshoot such weird issues.
Basically, don't use rcp() in areas which seem to be critical after a
second look. Also disabled some multiplication operators, not sure
yet why they might be a problem.
Tomorrow will be setting up a full test with all cases which were
buggy in our farm to see if this fix is complete.
There are some precision issues for big magnitude coordinates which started
to give weird behavior of release builds. Some weird memory usage in BVH
which is tricky to nail down because only happens in release builds and GDB
reports all variables as optimized out when trying to use RelWithDebInfo.
There are two things in this commit:
- Attempt to make vectorized code closer to original one, hoping that it'll
eliminate precision issue.
This seems to work for transform_point().
- A similar trick did not work for transform_direction() even though the absolute
error here is much smaller. For now disabled that function, needs a more
careful look here.
Rewrite the current range-tree API used by dyn-topo undo
to avoid inefficiencies from stdc++'s set use.
- every call to `take_any` (called for all verts & faces)
removed an item from the set and re-added it.
- further range adjustment also took 2x btree edits.
This patch inlines a btree which is modified in-place,
so common resizing operations don't need to perform a remove & insert.
Ranges are stored in a list so `take_any` can access the first item
without a btree lookup.
Since range-tree isn't a bottleneck in sculpting, this only gives minor speedups.
Measured approx ~15% overall faster calculation for sculpting,
although this timing doesn't include GPU updates and depends on how
much the edits fragment the range-tree.
The existing method was fine for basic polygons but didn't scale well
because it was checking all coordinates for every y-pixel.
Here's an optimized version.
The basic logic remains the same; this just maintains an ordered list of intersections,
tracking in-out points, to avoid re-computing every row.
This means sorting is only done once, when out-of-order segments are found,
and the segments only need to be re-ordered if they cross each other.
Speedup isn't linear, test with full-screen complex lasso gave 11x speedup.
Do this for the new dependency graph: was missing handle of OB_UPDATE_TIME in tag update.
Hopefully it's all correct still.
The old dependency graph needs work, but I'm tempted to call it unsupported and move on
to the 2.8 branch.
Several ideas here:
- Optimize calculation of near_{x,y,z} in a way that does not require
3 if() statements per update, which avoids the negative effect of wrong
branch prediction (see the sketch after this list).
- Optimization of direction clamping for BVH.
- Optimization of point/direction transform.
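One plausible branchless formulation of the near-index calculation (a sketch, not necessarily the committed code):

```c
/* The near child index per axis is just the sign bit of the ray
 * direction: 0 if the component is >= 0, 1 if it is negative. This
 * replaces three if() statements with straight-line code. */
static inline void bvh_near_indices(const float dir[3], int near[3])
{
  for (int i = 0; i < 3; i++) {
    union { float f; unsigned int u; } c;
    c.f = dir[i];
    near[i] = (int)(c.u >> 31);
  }
}
```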
Brings ~1.5% speedup again depending on the scene (unfortunately, this
speedup can't be summed across all previous commits because the speedup of
each of the changes varies from scene to scene, but it still seems to
be a nice solid speedup of a few percent on Linux, and a bigger speedup was
reported on Windows).
Once again, thanks Maxym for the inspiration!
Still TODO: We have multiple places where we need to calculate near
x,y,z indices in BVH; for now it's only done for the main BVH traversal.
Will try to move this calculation to a utility function and see if
it can be easily re-used across all the BVH flavors.
Similar to the previous commit, avoid negative effect of bad branch prediction.
Gives measurable performance up to ~2% in tests here.
Once again, thanks to Maxym Dmytrychenko!
The idea here is to avoid if statements which could cause wrong
branch prediction.
Gives a bit of measurable speedup up to ~1%. Still nice :)
Inspired by Maxym Dmytrychenko, thanks!
Seems CMake will rearrange and copy libraries which are passed to the linker
when one of the libraries is listed twice (for example, -lz from the png libraries
and -lz for blender itself). This was causing libopenimageio to be added somewhere
at the end of the linking flags without -ldl following it, which was causing linking
issues.
Similar to the regular triangle intersection case. Gives about 3% speedup rendering
an SSS object on my desktop.
Question: how to avoid such a code duplication in a nice way without speed loss?
This will confuse the hell out of guarded allocators, because it is possible
for an allocation to happen before Blender's guarded allocator is
fully initialized.
This was causing crashes and assert failures when running blender
with fully guarded memory allocator.
Initialization order of global stats and node types was not strictly
defined and it was possible to have node types initialized first and
stats after that. This will zero out memory which was allocated from
the statistics causing assert failure when de-initializing node types.
It was possible to have non-initialized unaligned BVH split
to be used when regular BVH split SAH was inf. Now we ensure
that unaligned splitter is only used when it's really initialized.
It's a regression and should be in 2.78a.
Material linking might and does change the way the drawObject is calculated,
but does not tag the drawObject for recalculation in any way.
Now use the dependency graph to tag the draw object for recalculation. Currently do
this using the OB_RECALC_DATA tag since tagging is not very granular yet. In the
future we can introduce more granular tagging in the new dependency graph
easily.
Simple and safe for 2.78a.
Using context manager for output file itself, and whole try/except block
to at least catch and print error in file.
Also some minor tweaks to previous 'list add-ons' commit.
Note that volume rendering is not supported yet, this is a step towards that.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2299
Now, the strokes can be locked to a plane set at the cursor location.
This option allows the artist to rotate the view and draw while keeping the
strokes flat over the surface. This option is similar to the surface option,
but doesn't need an object.
The option is only valid for the 3D view and strokes in CURSOR mode.
When ED_screen_animation_play is called from wm_event_do_handlers, `ScrArea *sa = CTX_wm_area(C);` is NULL in ED_screen_animation_timer.
Informing the audio system in CTX_data_main_set, that a new Main has been set.
Not really possible to precisely detect all cases in which they should or
should not be active, but at least now it won't show as disabled when it
actually has some effects.
Previously an error message would be printed whenever the OpenCL build produced output.
However, some frameworks seem to print extra information even if the build succeeded, so now the actual returned error is checked as well.
When --debug-cycles is activated, the build output will always be printed, otherwise it only gets printed if there was an error.
This would cause Alembic to throw an exception and fail exporting
animations because it was trying to recreate and overwrite the
attributes for each frame.
Also use the operator as part of the UI keymap now, to deduplicate code and let
users configure a custom shortcut.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2303
This patch resolves the following warnings:
```
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 764
Warning C4098 'attach_stabilization_baseline_data': 'void' function returning a value blenkernel\intern\tracking_stabilize.c 139
Warning C4028 formal parameter 3 different from declaration blenkernel\intern\cachefile.c 148
Warning C4028 formal parameter 3 different from declaration blenkernel\intern\paint.c 413
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\editderivedmesh.c 591
Warning C4028 formal parameter 3 different from declaration blenkernel\intern\library_remap.c 709
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 754
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 758
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 759
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 763
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 764
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 765
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 769
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 770
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\DerivedMesh.c 3458
```
It's mostly things where the signature in the .h and the actual implementation in the .c do not match, and a bunch of functions that do not match the TaskRunFunction declaration because they leave out the __restrict keyword.
Reviewers: brecht, juicyfruit, sergey
Reviewed By: sergey
Subscribers: Blendify
Differential Revision: https://developer.blender.org/D2268
Blender doesn't necessarily crash when Python doesn't keep references to
the returned strings. As a result, someone that implements this incorrectly
could be lulled into a false sense of correctness by Blender not crashing.
Previously the editor would always try to only show UV faces with the same exact active
image or image texture, which is quite difficult to control on production shaders, where
each material can have multiple objects assigned.
The idea of this commit is to bring an option which allows easily controlling what to display
when "Draw Other Objects" is enabled, so currently we can have the old behavior ("Same Image")
or tell the editor to show everything ("All"). In the future we can extend it with such filters
as "Same Material" and things like that.
Hopefully this will help @eyecandy's workflow of texturing.
The issue was caused by a wrong default value for the brush particle count,
which was clamped on display from 0 to 1. This is technically a regression,
but how to port this to 2.78a?
The problem here was, as the title says, that the two kernels were swapped.
Since shader evaluation is only used for building the sampling map when World MIS is enabled, rendering without it would still work fine, although baking was also broken.
The previous refactor changed the code to use a separate logging mechanism to support multithreaded compilation.
However, since that's not supported by any frameworks yet, it just resulted in bad logging behaviour.
So, this commit changes the logging to go directly to stdout/stderr once again by default.
Couple of issues here:
* Missing initialization for 3D view keyframe options for "Reset to Default Theme"
* Alpha values not reset correctly on "Reset to Default Theme"
* Alpha values of timeline keyframe options not reset correctly for old files
Also corrected old version patches even though they're overridden later, to avoid more issues in case people copy this code.
Corrections to d7af7a1e04 and 8d573aa0ec
This was giving some speedup but made intersection tests fail
from the watertight point of view.
Needs deeper investigation, but need to quickly get it fixed for
the studio.
Regression from rB69b66d549bcc8; this was supposed to be a non-functional
change, so not sure why the search menu was reduced here? For now, restore
to the 2.77 width.
Seems to be a bug in the original implementation of a830280: the code was always
using tangent space instead of the UV map because it had the same name. Now
prefer UVMap over tangent, because this is how Cycles works. At least it's
closer to it.
Not sure if the save+reload issue is still relevant after this fix, that
needs to be double-checked.
Thanks @dfelinto for looking into the report and simplifying the case.
Should be included into 2.78a.
This allows appending of an entire scene from another blend file into this one,
even when that blend file contains proxified armatures.
This replaces the approach from commit 1cdc54dc7d.
Thanks @sergey for the help.
Column flow layout was abusing ui_item_fit in a weird way, which was
broken for last-column items.
Now rather use our own code, which basically spreads the available width as
equally as possible between all columns.
Seems to be a rounding error. Hopefully the new code handles the error fixed back in
SVN revision 28901 and still has the proper frame number for Hjalti.
What could possibly go wrong here..
This gives about 5% speedup on AVX2 kernels (other kernels still
have SSE disabled for math operations) and solves the slowdown
of the koro scene mentioned in the previous commit.
The title says it all actually. This commit also contains
changes to pass float3 as const reference in affected functions.
This should make MSVC happier without breaking OpenCL because it's
only done in areas which are ifdef-ed for non-OpenCL.
Another patch based on inspiration from Maxym Dmytrychenko, thanks!
This commit basically vectorizes existing code using AVX2 instructions
(without modifying algorithm itself). This gives quite nice speedups:
BMW: -8%
Classroom: -5%
Cat: -5%
Koro: +1%
Barcelona: -8%
That's on Linux machine, reported performance improvement on Windows
goes up to 20%.
Not currently sure why Koro is somewhat slower: it mainly uses
curve intersection tests, so it could be timing noise? Or something with the
cache utilization perhaps? In any case the speedup in other scenes makes
me think the current state is acceptable for an initial implementation.
This is again inspired by Maxym Dmytrychenko.
Based on existing ssef data type and to my knowledge it's also what happens in
Embree nowadays.
Inspired by Maxym Dmytrychenko and required for the upcoming triangle
intersection commit.
Hopefully the copyright message is correct.
When ray hits curve segment with SSS shader it was possible to have
uninitialized hit_P variable used for sampling.
Seems that was the reason for our headache about the difference between AVX2
and SSE4 render results here, so now we can revert all the nasty
ifdef-ed inline policies.
Original fix in this area was not really complete (but was the safest at
the release time). Now all the crazy configurations of slots going out
of sync should be handled here.
For now, we merely add an option that sets CXXFLAGS envvar with
'--std=c++11' option.
There is no check done to ensure compatibility with the system
libraries, mainly because:
- It is all but trivial to get this information in a generic and
reliable way.
- Currently even cutting edge distributions may still distribute some c++98
libraries.
- With recent stdlibc++, both ABIs are supported together, which means
that incompatibilities are rather unlikely.
To summarize: if your system is recent and built with gcc-5.1 or newer,
you should not experience too much trouble with c++11.
It was possible to have two viewports opened and start using Ctrl-0
to make different objects an active camera for the viewport. This
worked fine for viewports which had decoupled camera from the scene,
but if a viewport was locked to the scene camera it was possible to run into a
situation where two different viewports were locked to the scene camera but
had different v3d->camera pointers.
Apparently, the whole G.is_break is not used by the OpenGL render, meaning
this flag will not be cleared before running the operator. This was
causing missing file output after pressing Esc once, for the rest of the
Blender session.
Reworked logic in the few places that still called this. Deleted the "GLSL not supported" fallbacks.
Also removed some nearby checks for ARB_multitexture and OpenGL 1.1. Blender 2.77 removed checks like this, but game engine still has some.
We were checking the number of tasks from a given pool already active, and
then atomically increasing it if allowed - this is not correct, the number
could be increased by another thread between the check and the atomic op!
Atomic primitives are nice, but you must be very careful with *how* you
use them... Now we atomically increase the counter, check the result, and if we
end up over the max value, abort and decrease the counter again.
Spotted by Sergey, thanks!
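A sketch of the corrected pattern in C11 atomics (names hypothetical):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Increment first, then verify, and roll back on overshoot. The old
 * check-then-increment version raced between the two steps. */
static bool task_pool_try_acquire(atomic_int *num_active, const int max_active)
{
  if (atomic_fetch_add(num_active, 1) + 1 > max_active) {
    atomic_fetch_sub(num_active, 1); /* over the limit: undo and abort */
    return false;
  }
  return true;
}
```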
Since the collision modifier cannot be disabled, it causes a constant
hit on the viewport animation playback FPS. Most of this overhead can
be automatically removed in the case when the collider is static.
The updates are only skipped when the collider was stationary during
the preceding update as well, so the state is stored in a field.
Knowing that the collider is static can also be used to disable similar
BVH updates for substeps in the actual cloth simulation code.
Differential Revision: https://developer.blender.org/D2277
Previously if the rendering is much faster than saving (for example,
when transcoding stuff via VSE) it was possible to have 100s of frames
in memory.
This isn't ideal because of limited amount of RAM, so need to have
some sort of limit. This is exactly what is implemented in this commit.
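A minimal sketch of such a limit (names hypothetical), where the render thread blocks once too many frames are waiting to be written:

```c
#include <pthread.h>

typedef struct FrameLimiter {
  pthread_mutex_t mutex;
  pthread_cond_t cond;
  int pending, max_pending;
} FrameLimiter;

/* Called by the render thread before queuing a new frame. */
void limiter_frame_rendered(FrameLimiter *limiter)
{
  pthread_mutex_lock(&limiter->mutex);
  while (limiter->pending >= limiter->max_pending) {
    pthread_cond_wait(&limiter->cond, &limiter->mutex); /* wait for writer */
  }
  limiter->pending++;
  pthread_mutex_unlock(&limiter->mutex);
}

/* Called by the writer thread after a frame hits the disk. */
void limiter_frame_saved(FrameLimiter *limiter)
{
  pthread_mutex_lock(&limiter->mutex);
  limiter->pending--;
  pthread_cond_signal(&limiter->cond);
  pthread_mutex_unlock(&limiter->mutex);
}
```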
By the design of the task scheduler it was possible that tasks from somewhere
in the middle of the scheduled list would be handled first.
For example, one thread might be iterating over the scheduled list and
ignore tasks because another thread is working on a task from the
same pool. However, if that other thread finishes its task before the iteration
is over, the current thread will pick up a task from somewhere in the middle
of the list.
This isn't a problem in the general case, but for movie rendering we do need
to have a strict order of frames.
This allows appending of an entire scene from another blend file into
this one, even when that blend file contains proxified armatures.
Since the proxified object needs to be linked (not local), this will
only work when the "Localize all" checkbox is disabled. The appended
proxy object should also not be referenced from anything in a library
(for example in a constraint). Referencing it from the appended data
should be fine.
Fixes T49495.
Pretty much same reason as for the 'from' pointer of shapekeys - runtime
data creating loops and 'ghost' dependencies between datablocks.
We need to handle them in cases like remapping, but shall not take them
into account when checking dependencies between datablocks... :/
The initial idea was to use Ctrl+E to interpolate stroke because this is
similar to Pose breakdown, but the Ctrl+E keymap is used to inverse
grease pencil sculpt effect.
The new keymap is Ctrl+Alt+E in order to fix the conflict
New recursive iteration over IDs in BKE_library_foreach_ID_link() was
broken by the infamous nodetree case. We cannot really recursively call
this function in that case, so better to defer handling of
non-datablock NodeTrees as if they were real IDs here.
Also fixed initial ID not being stored as handled, in rare cases this
could also lead to infinite looping.
To be backported to 2.78a.
Mostly this is making inlining match CUDA 7.5 in a few performance critical
places. The end result is that performance is now better than before, possibly
due to less register spilling or other CUDA 8.0 compiler improvements.
On benchmarks scenes, there are 3% to 35% render time reductions. Stack memory
usage is reduced a little too.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D2269
We *always* want to increase mat user count when from Object (and not
Data), because in that case we are moving mat from object to temp
generated mesh, material can never be 'borrowed' in that case.
To be backported to 2.78a
If I didn't miss anything these are indeed not used. Old themes should still work (will only print info on redundant theme defines into console), but updated non-contrib themes already.
Fixes for clamp-omp, fewer shared variables, fix some cases of threads writing
to the same memory location. Issue found by Jens Verwiebe, who reports 30%
speedup with 16 core CPU, when using this with a recent clang-omp version.
- Explicitly specify the platform for msbuild, to facilitate builds with just the Visual C++ Build Tools installed.
- When vs2013 is not found, try looking for 2015 as a fallback
- Clear up any batch variables that might have been set from previous runs
This is also an important mathematical operation that can be folded
if it is known that one argument is a certain constant. For colors
the operation is provided as a Gamma node.
The SVM Gamma node needs a small fix to make it follow the 0 ^ 0 == 1
rule, same as the Power node, or the Gamma node itself in OSL mode.
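A sketch of the folding rule (not the actual SVM code):

```c
#include <math.h>

/* When both inputs are known constants, gamma can be folded at compile
 * time, following the 0 ^ 0 == 1 convention mentioned above. */
static float fold_gamma(const float color, const float gamma)
{
  if (color == 0.0f && gamma == 0.0f) {
    return 1.0f; /* 0 ^ 0 == 1, matching the Power node and OSL */
  }
  return powf(color, gamma);
}
```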
Reviewers: #cycles
Differential Revision: https://developer.blender.org/D2263
'camera' Object pointer of TimeMarkers is a 'temp' hack since Durian project...
Would need to be either made definitive now, or removed/reworked/whatever.
But since we intend to use that object pointer for other needs, and current code
could lead to crashing .blend files, for now let's fix that mess (was missing
some bits in read code, and also totally ignored in libquery code).
Should be safe for 2.78a.
It will discard the whole tile, but it's still kind of more friendly than
a fully locked interface (sort of) until the tile is fully sampled.
Sorry if it causes a PITA to merge for the opencl split work, but this issue
was bothering me a lot when collecting benchmarks.
This change makes it so that when the sequences within a Scene strip are
evaluated, they use the Scene that they come from as the context, as opposed
to the Scene that the Scene strip is in. This is necessary, for example, in the
case of the MulticamSelector where it needs to reference strips in the original
Scene as opposed to the Scene where the Scene strip is located.
Patch by @Matt (HyperSphere), thanks!
Previously it was falling back to just a path after #include
statement was finished. Now we fall back to a proper current
file name after dealing with the preprocessor statement.
Some of the files were wrongly attributing code to some other
organizations and in few places proper attribution was missing.
This is mainly either a copy-paste error (when new file was
created from an existing one and header wasn't updated) or due
to some refactor which split non-original-BF code with purely
BF code.
Should solve some confusion around.
- The build folder name used to depend on the order of the parameters; this is now normalized to
"build_windows_[Release/Full/Lite/Headless/Cycles/Bpy]_[x86/x64]_vc[12/14]_[Release/Debug]" regardless of the order of the parameters.
- Use CUDA 8 for all kernels when building the release convenience target with Visual Studio 2015
In Windows, the event dispatching code was throwing out the wheel scroll count value.
No matter how fast you move the wheel, it would only make a one-notch scroll event.
This patch converts a wheel event into multiple 1-notch wheel events.
This also corrects the handling of smooth-scroll mouse wheels (which can report smaller than 1-notch wheel movement) by accumulating the small wheel delta values.
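A sketch of the accumulation logic (the dispatch callback is hypothetical):

```c
#include <stdlib.h>

#define WHEEL_DELTA 120 /* one notch, as reported by Windows */

/* Sum raw deltas and emit one event per accumulated full notch, so both
 * fast scrolling and sub-notch smooth-scroll deltas are handled. */
void handle_wheel_delta(const int raw_delta, int *accum,
                        void (*emit_one_notch)(int direction))
{
  *accum += raw_delta;
  while (abs(*accum) >= WHEEL_DELTA) {
    const int direction = (*accum > 0) ? 1 : -1;
    emit_one_notch(direction);
    *accum -= direction * WHEEL_DELTA;
  }
}
```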
Reviewers: djnz, shadowrom, elubie, #platform:_windows, sergey, juicyfruit, brecht
Reviewed By: shadowrom, elubie, #platform:_windows, brecht
Subscribers: dingto, elubie, brachi, brecht
Differential Revision: https://developer.blender.org/D143
Another case of float imprecision leading to an endless loop. Increasing the 'noise threshold' a bit seems to work OK.
Not a regression, but might be nice to have in 2.78a.
Not sure where this comes from, but code was converting BMEdge* to BMVert* to check oflags,
i.e. not accessing correct memory.
Regression, to be backported to 2.78a.
Apparently the keying sets system doesn't support subclassing
KeyingSetInfo subclasses. I have added a note to the top of the file to
indicate this to future developers.
Regression caused by rBbcc863993ad; write code was assuming dw->ob was always valid,
which is no longer the case right after reading a file, e.g.
Another good example of how bad it is to use 'hidden' dependencies between datablocks. :(
And another fix to be backported to 2.78a. :(((
Previously the pose library used the WholeCharacter key set, which ignores
selection and adds keys for almost all bones in the rig. This is a very
slow operation on complex rigs. With this patch, only selected bones are
keyed, defaulting to keying all bones when none are selected.
Note that this fixes the FIXME previously mentioned in the source.
Actually two errors here:
* Properties editor wasn't refreshing on (NC_SCENE | ND_RENDER_OPTIONS) notifiers
* Was using notifier info bits wrongly, needs to send two separate notifiers
Decided to remove ND_RENDER_OPTIONS rather than adding properties editor scene context refresh for it, this is more than a render option change.
Duplicates can happen at UV seams in case of resolution mismatch
or other complications. It's better not to store them in case it
confuses some math later on.
Reviewers: mont29
Differential Revision: https://developer.blender.org/D2261
1. When adding one pixel border to UV islands prefer grid directions.
This is more logical as it means neighbor pixels will commonly
share a border, opposed to just corners.
2. Don't subtract 0.5 when converting float UVs to int in computing
adjacency across a UV seam. Pixels cover a square with corners
at int positions and adding/subtracting 0.5 is only for dealing
with the center point of a pixel.
3. Use the neighbour_pixel field from the correct pixel, and check
it's not back to the origin point.
4. In the connected UV case, ensure that the returned index actually
refers to a valid active pixel.
This fixes paint spread not traversing some UV seams, while at the
same time spreading to random unrelated points on other seams.
The first problem is primarily fixed by 2, while the second one
is addressed by item 4.
Differential Revision: https://developer.blender.org/D2261
Moved that code forth and back a few times while creating the rB776a8548f03a fix,
and ended up forgetting to update it correctly for the function where it finally landed.
Last-minute fixes are never a good thing... Now we already have a real reason for a 2.78 'a' release :(
Code was not getting the correct boundbox in some cases (a group only instancing other groups, e.g.); now
compute our own bbox in those cases.
Based on patch by @lichtwerk, but extended the fix to include linked datablocks in some cases
(linked objects in local groups, linked objects in local scene, etc.), this was also broken in existing code.
Reviewers: mont29
Subscribers: duarteframos
Differential Revision: https://developer.blender.org/D2257
Regression from rBa4a968f: we would adjust the current point's wetness without actually protecting it
in the new multi-threaded context, leading to a concurrent access mess.
Now delay applying the wetness reduction to the current point to the end of the function, which allows us
to avoid having to lock the current point twice together with the neighbor one (and reduces spinlock waiting too).
To be backported to 2.78a.
Never call a function that might recompute a DM in an RNA itemf callback (or any UI-related func in general)!
There was an XXX comment asking if this was OK - well, no, it was not. :P
Could be ported back to some 2.78 flavour should we need it.
Was only happening with new dependency graph.
The issue here is that scene's depsgraph layers will be 0 unless
it was ever visible. Worked around by checking for 0 layer in the
update_tagged of new depsgraph. This currently kind of following
logic of visible_layers, but is weak.
Committing so the studio is unlocked here, will re-evaluate this later.
Now using new system dedicated to that kind of cases, id_ensure_real_user(), instead.
That way, usercount of Scenes is handled correctly at deletion time.
Reported by @sergey over IRC, thanks.
The light sampling functions calculate light sampling PDF for the case that the light has been randomly selected out of all lights.
However, since BPT handles lamps and meshlights separately, this isn't the case. So, to avoid a wrong result, the code just included the 0.5 factor in the throughput.
In theory, however, the correction should be made to the sampling probability, which needs to be doubled. Now, for the regular calculation, that's no real difference since the throughput is divided by the pdf.
However, it does matter for the MIS calculation - it's unbiased both ways, but including the factor in the PDF instead of the throughput should give slightly better results.
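A sketch of the two variants (names hypothetical):

```c
typedef struct LightSample {
  float pdf;        /* probability of having picked this light sample */
  float throughput; /* path throughput carried into the estimate */
} LightSample;

/* The plain estimate throughput * contribution / pdf is identical
 * either way, but MIS weights are computed from the pdf, so the pdf
 * must carry the lamp/meshlight split factor. */
static void apply_light_split_old(LightSample *ls)
{
  ls->throughput *= 0.5f; /* old: halve the throughput */
}

static void apply_light_split_new(LightSample *ls)
{
  ls->pdf *= 2.0f; /* new: double the selection pdf instead */
}
```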
Reviewers: sergey, brecht, dingto, juicyfruit
Differential Revision: https://developer.blender.org/D2258
- WITH_SMOKE macro was not defined, so some code was not compiled, though
it was still accessible from the UI
- some UI elements were disappearing due to bad indentation; also reworked
the UI code to not hide but rather disable/grey out buttons in the UI
- Display thickness was not used due to a bad manual merge of the code
from the patch.
This basically exposes to the UI a function that was only available
through a debug macro ; the purpose is obviously to help debugging
simulations. It adds ways to draw the vectors either as colored needles
or as arrows showing the direction of the vectors. The colors are based
on the magnitude of the underlying vectors.
Reviewers: plasmasolutions, gottfried
Differential Revision: https://developer.blender.org/D1733
Current approach uses view aligned slicing to generate polygons for GL
texturing such that the generated polygons are always facing the view
plane. Now it is also possible to use object aligned slicing, which
creates polygons by slicing the object perpendicular to whichever axis
is facing the view plane the most. It is also possible to create a
single slice for inspecting the volume, or for 2D rendering effects.
Settings for this, along with a density multiplier setting, are to be
found in a newly added "Smoke Display Settings" panel in the smoke
domain properties tab.
Reviewers: plasmasolutions, gottfried
Differential Revision: https://developer.blender.org/D1733
Using context.active_gpencil_brush to access the active Grease Pencil brush
would result in a crash if trying to rename the brush, because the "ID" pointer
was not set.
To be backported to 2.78
This is something that was guaranteed in give_current_material(), just
copied some range checking logic from there.
Not sure what would be a proper fix here though.
In fact, it was the whole remapping process that was broken in the logic bricks area,
due to the terrible design of links between those bricks...
Object copying was also broken in that case, fixed as well.
To be backported to 2.78.
Note that the issue had probably been there for ages, hidden behind dirty hacks
used in the previous append code (though likely visible in some corner cases).
Listen kids: do not, never, ever, do what has been done for links between logic bricks. Never. Ever.
Even as pure runtime data it would have been bad, but as stored data...
Optimization attempt with BKE_library_idtype_can_use_idtype() was not taking into account
the fact that drivers may link virtually against any datablock...
Has to be rethought, but that's for after the 2.78 release; this commit is safe to backport.
Regression caused by rBb27ba26, we would always tag datablocks to update in G.main,
ignoring given bmain, now always use this one instead.
To be backported to 2.78.
This commit allows RNA properties to return additional info on their editable state which may then be displayed in tooltips. To show how it works, it also adds some info for the editable check of proxies. For generally un-editable properties or properties of a linked data-block, RNA returns default strings.
| {F362785} | {F362786} | {F362787} |
Reviewed by brecht, thanks!
Differential Revision: https://developer.blender.org/D2243
Drawing used colors for select (TH_EDGE_SELECT/TH_VERTEX_SELECT) which was inconsistent with crease, seam, sharp, .. (which all had their own theme color -- also was a bit hard to read).
NOTE: UI team usually doesn't allow adding more theme options, this is an exception.
Differential Revision: https://developer.blender.org/D2234
It's now possible to change the shortcut for invoking the eyedropper while hovering a button (E by default). Also removed the keymap editor entry for the modal eyedropper keymap, it's now automatically appended to the eyedropper shortcut.
This patch changes a couple of things in the video output encoding.
{F362527}
- Clearer separation between container and codec. No more "format", as this is
too ambiguous. As a result, codecs were removed from the container list.
- Added FFmpeg speed presets, so the user can choose from the range "Very
  slow" to "Ultra fast". By default no preset is used.
- Added Constant Rate Factor (CRF) mode, which allows changing the bit-rate
depending on the desired quality and the input. This generally produces the
best quality videos, at the expense of not knowing the exact bit-rate and
file size.
- Added optional maximum of non-B-frames between B-frames (`max_b_frames`).
- Presets were adjusted for these changes, and new presets added. One of the
new presets is [recommended](https://trac.ffmpeg.org/wiki/Encode/VFX#H.264)
for reviewing videos, as it allows players to scrub through it easily. Might
be nice in weeklies. This preset also requires control over the
`max_b_frames` setting.
GUI-only changes:
- Renamed "MPEG" in the output file format menu with "FFmpeg", as this is more
accurate. After all, FFmpeg is used when this option is chosen, which can
also output non-MPEG files.
- Certain parts of the GUI are disabled when not in use:
- bit rate options are not used when a constant rate factor is given.
- audio bitrate & volume are not used when no audio is exported.
Note that I did not touch `BKE_ffmpeg_preset_set()`. There are currently two
preset systems for FFmpeg (`BKE_ffmpeg_preset_set()` and the Python preset
system). Before we do more work on `BKE_ffmpeg_preset_set()`, I think it's a
good idea to determine whether we want to keep it at all.
After this patch has been accepted, I'd be happy to go through the code and
remove any then-obsolete bits, such as the handling of "XVID" as a container
format.
Reviewers: sergey, mont29, brecht
Subscribers: mpan3, Blendify, brecht, fsiddi
Tags: #bf_blender
Differential Revision: https://developer.blender.org/D2242
For some reason (which I can't recall), baking was doing backface
culling. Since Cycles itself doesn't ignore backfaces (nor does Blender
Internal), they should be visible.
Steps to reproduce:
* Go to modifier context in properties editor
* Add modifier, collapse it
* Press down LMB over collapse button of modifier, hold it
* Drag over pin-icon in properties editor (to keep fixed data-block displayed)
* Drag outside of window bounds (should crash)
Also could've been solved by getting space data from callback arguments instead of context, but this fix is much nicer (though not totally un-risky).
Do not close and re-open the file in case it's compressed, the gzip module can now directly take a file object as a parameter.
Differential Revision: https://developer.blender.org/D2235
There might be some extra missing points here, but it's all rather
a TODO than a real bug and can be tweaked further once issues are
actually discovered.
We raised the minimum to GL 2.1 in Blender 2.77, and dropped support for older GPUs (pre-2012 Intel mostly). On Windows you get a popup message, but on Mac we simply crashed. Every Mac has a builtin software renderer for GL 2.1 so let's use that when the GPU is not capable!
Run blender --debug-gpu to see version detection & software fallback.
Quick fix for now, need to unlock the studio here as well.
Proper fix would be to modify the API a bit and pass flags which will
prevent expand being called on bmain perhaps. But this we should discuss
a bit.
We were calling BLI_remlink and then BLI_insertlinkbefore/after quite often. BLI_listbase_link_move simplifies code a bit and makes it easier to follow. It also returns whether the link position has changed, which can be used to avoid unnecessary updates.
Added it to a number of list reorder operators for now and made use of return value. Behavior shouldn't be changed.
Also some minor cleanup.
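Hedged sketch of the call-site pattern (assumed context, step direction from memory; the real signature lives in BLI_listbase.h):
    /* Move the active vertex group up one slot, and only tag for update
     * when the position actually changed. */
    static void defgroup_move_up(Object *ob, bDeformGroup *dg)
    {
        /* Replaces a manual BLI_remlink() + BLI_insertlinkbefore() pair
         * followed by an unconditional update. */
        if (BLI_listbase_link_move(&ob->defbase, dg, -1)) {
            DAG_id_tag_update(&ob->id, OB_RECALC_DATA);
        }
    }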
Was spawning an error popup each time the user tried to move a stroke higher or lower than the list allowed. We don't do that anywhere else and it's not really useful info for the user. So rather not bother her.
Idea here is to select the lowest isolation level that won't compromise quality.
By using the lowest level we save memory and processing time. This will also
help avoid precision issues that have been showing up from using the highest
level (T49179, T49257).
This is a pretty simple heuristic that gives ok results. There's more we could
do here, such as filtering for vertices/edges adjacent to geometric features that
need isolation instead of checking them all, but the logic there could get a
bit involved.
There's potential for slight popping of edges during animation if the dice
rate is low, but I don't think this should be a problem since low dice rates
really shouldn't be used in animation anyways.
Reviewed By: brecht, sergey
Differential Revision: https://developer.blender.org/D2240
Crashes occurred immediately when clicking on "OpenGL render image" because previously a task pool was only created when it was an animation. Solved it by introducing a variable is_animation to the openglrender and omitting the task_pool call when it's not an animation.
@sergey: Please check my changes, moved the pool_ok and the lock into the is_animation clause.
Buildbot machine was updated to the new SDK which seems to have
QTKit removed.
Until we've installed an older SDK or ported our code to a new
AVFramework, QuickTime is disabled.
These may be exposed in UI (keymap editor & redo panel), so better avoid using identifiers like "UP" "DOWN". They are redundant anyway (already displayed).
Replace the W shortcut for subdivision with a new menu for edit specials
in order to keep consistency in the UI.
Subdivision is not used all the time, so it's better to assign this
shortcut to a menu.
In some situations the artist needs to subdivide a stroke created with
few points before, especially for sculpting.
The subdivision is done for any pair of continuous selected points in
the same stroke.
The operator can be activated in edit mode with W key and has a
parameter for number of cuts.
The idea is to have a dedicated thread which is responsible for all the
file writing, so a slow disk will not slow down
OpenGL itself.
Gives a really nice speedup of around 1.5x when exporting the barber shop
layout file to h264 video.
There were a couple of crashes caused by stupid typos in
rB631af9f930d2fd2c76751204ff22239aa95f761d and
rB78ea06fea4a74181c25254ed72d50d8a743b6954, as well as a shameful lack
of 'testing before committing', which only affects exporting.
One crash was due to using RNA_boolean_get instead of RNA_enum_get, the
other one was a tricky case of order of deletion happening in the
destructors of AbcExporter and ArchiveWriter.
Should not affect RC or release.
It was annoyingly slow to do a roundtrip from byte OpenGL render to
float render result and back to byte image format (which is used
in 99% of cases for the OpenGL previews).
Now we use render result's rect32 to store render result which is
already supposed to be in the display space.
Gives about 30% speed improvement for OpenGL previews here.
Previously converting from linear space to SRGB was doing rather
slow inverted 1D lookup. Adding explicit inverse LUT gives 20%
speedup of OpenGL render.
Next question is: why do we even bother with sRGB conversion here,
OpenGL is already in the proper space so in theory we can avoid
quite some color space conversions. In any case, having this case
optimized is nice anyway.
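For reference, the LUT idea roughly looks like this (table size and rounding picked arbitrarily for illustration):
    #include <math.h>

    #define LUT_SIZE 4096
    static unsigned char lut_linear_to_srgb[LUT_SIZE];

    /* Build an explicit linear -> sRGB table once, so per-pixel conversion
     * becomes a single indexed load instead of an inverted 1D lookup. */
    static void build_linear_to_srgb_lut(void)
    {
        for (int i = 0; i < LUT_SIZE; i++) {
            float x = (float)i / (float)(LUT_SIZE - 1);
            float s = (x <= 0.0031308f) ?
                          x * 12.92f :
                          1.055f * powf(x, 1.0f / 2.4f) - 0.055f;
            lut_linear_to_srgb[i] = (unsigned char)(s * 255.0f + 0.5f);
        }
    }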
New features:
1) Release target that checks for both cuda 7.5 and 8 with WITH_CYCLES_CUDA_BINARIES=ON and CYCLES_CUDA_BINARIES_ARCH=sm_20;sm_21;sm_30;sm_35;sm_37;sm_50;sm_52;sm_60;sm_61 options set.
2) Option to switch between x86 and x64 builds, the default remains (auto detect the architecture) but can be overridden.
3) Option to switch between vs12(2013) and vs14(2015) default is 2013.
Reviewers: juicyfruit, sergey
Reviewed By: sergey
Tags: #platform:_windows
Differential Revision: https://developer.blender.org/D2180
It was quite weak to consider all scripted expressions to be time-dependent.
Current solution is somewhat better but still crappy. Not sure how we can make
it really nice.
Stupid mistake wrapping path validation code inside a BLI_assert, which means it was
only called in Debug builds...
Found by Sergey, thanks.
Should be backported to 2.78.
Those 'never null' ID pointers are really a PITA to handle... luckily we don't have much of those around!
Found by Sybren, thanks.
Should be backported to 2.78.
This is internal pointer helper for scene evaluation and tools, though exposed to bpy API,
it can give false 'dependency cycles' in bpy.data.user_map() results.
That's followup to rBe007552442634 really, both should be backported to 2.78
This is internal pointer helper for scene evaluation and tools, it's not exposed to bpy API anyway,
and can give false 'dependency cycles' in bpy.data.user_map() results.
Found by sybren in his Splode work.
Uses similar way of storing temp data as object copy paste, just
uses different read entrypoint which does not modify current bmain.
This gives ability to easily copy-paste poses from one blender to
another one.
Hopefully doesn't introduce user-measurable differences.
Request from Peer here in the studio.
Reviewers: mont29
Reviewed By: mont29
Subscribers: hjalti, fsiddi
Differential Revision: https://developer.blender.org/D2229
This reverts commit ecbfa31caa.
Original commit broke logic in nodes re-fitting. That area can
access non-existing children momentarily. Not sure what would
be the best solution here, for now simply reverting the change.
Problem was zero length normal caused by a precision issue in patch evaluation.
This is somewhat of a quick fix, but is better than allowing possible NaNs to
occur and cause problems elsewhere.
Both spot and area lights have large areas where they're not visible.
Therefore, this patch stops the light sampling code when one of these cases (outside of the spotlight cone or behind the area light) occurs, before the lamp shader is evaluated.
In the case of the area light, the solid angle sampling can also be skipped.
In a test scene with Sample All Lights and 18 Area lamps and 9 Spot lamps that all point away from the area that the camera sees, render time drops from 12sec to 5sec.
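Hedged sketch of the early-out (types and fields hypothetical, not the actual Cycles kernel code):
    #include <stdbool.h>

    typedef struct Lamp {
        int type;
        float axis[3];         /* spot direction */
        float normal[3];       /* area light plane normal */
        float cos_half_angle;  /* spot cone cutoff */
    } Lamp;

    enum { LAMP_SPOT, LAMP_AREA, LAMP_POINT };

    static float dot3(const float a[3], const float b[3])
    {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    /* d is the normalized direction from the lamp towards the sample point;
     * reject samples that cannot see the lamp before evaluating its shader. */
    static bool lamp_possibly_visible(const Lamp *lamp, const float d[3])
    {
        if (lamp->type == LAMP_SPOT) {
            /* Outside the cone when the cosine of the angle to the spot
             * axis drops below the cutoff. */
            return dot3(lamp->axis, d) >= lamp->cos_half_angle;
        }
        if (lamp->type == LAMP_AREA) {
            /* One-sided area light: invisible from behind its plane. */
            return dot3(lamp->normal, d) > 0.0f;
        }
        return true; /* other lamp types: evaluate the shader as before */
    }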
Reviewers: brecht, sergey, dingto, juicyfruit
Differential Revision: https://developer.blender.org/D2216
Small issues in GHOST
- use NSApplicationDelegate protocol for our app delegate
- make sure NSApp is initialized before using
(cherry picked from commit df7be04ca6)
Regression from rB036c006cefe471. We can't use self here, self is bpy.app, not the pydescriptor of the python path getsetter...
So for now, do not try to replace getsetter by actual value in bpy.app's dict,
just return static var generated on first run.
Should be safe for 2.78.
I) Filename was not put in temp Main generated to save selected data only,
this was breaking readcode when trying to open partial file, leading to missing
filename in final loaded Main data.
II) Read code would confuse partial .blend files with Undo ones, when they had no screen in them
(which happens to 99.999% of partial .blend files I guess).
Reported by @sybren, thanks.
Should be safe enough for 2.78 release.
The title says it all actually. From tests with the barber shop scene here it
gives a 2-3x speedup for shader compilation on my oldie i7 machine. The
gain is mainly due to the textures metadata query from jpeg files (which
seems to require de-compression before metadata can be read). But in
theory it could give nice improvements for scenes with huge node trees
as well (I'm talking about node trees of the complexity of the fractal ones
which we had reports about in the past).
Reviewers: juicyfruit, dingto, lukasstockner97, brecht
Reviewed By: brecht
Subscribers: monio, Blendify
Differential Revision: https://developer.blender.org/D2215
The issue was caused by some false-positive empty non-AABB intersection.
Tried to tweak it a bit so it does not record intersection anymore.
Hopefully will work for all platforms. Tested here on iMac and Debian.
The idea is to allow certain animation channels to be always visible in
animation editors. So, for example, one can pin Camera animation to the
editor so it is always possible to refine/tweak camera animation when
animating something else in the scene.
There is probably some more polishing required, and some current
limitations could be solved in the future but should be a good starting
point already.
Currently only works for objects, without recursing into deeper datablocks
(so for example, it's not possible to pin object material animation).
Studio request by Colin Levy.
Basically just moves cached kernels from ~/.config/blender/BLENDER_VERSION to
~/.cache/cycles/kernels. This has following benefits:
- Follows XDG specification more closely,
not as if it's totally crucial or measurable by users, but still nice.
- Prevents unexpected sizes of the config folder, makes disk space usage more
  predictable for users.
- Allows sharing kernels across multiple Blender versions,
  which makes debugging easier at times close to release.
- "Copy Previous Settings" operator will no longer be copying possibly
  gigabytes of cached kernels, which used to lead to really nasty disk usage
  and annoying delays of copying settings.
- In the future we can have some smart logic to clear old unused cached
kernels.
Currently only done for Linux and OSX. Windows still follows old "cache"
folder logic, but it's not really important for now because we don't
support kernel compilation on this platform yet.
Reviewers: dingto, juicyfruit, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2197
Constant folding was removing all nodes connected to the displacement output
if they evaluated to a constant, causing there to be no valid graph for
displacement even when there was displacement to be applied, and sometimes
caused crashes.
Using one's complement for detecting if a transform has been applied was confusing
and led to several bugs. With this, proper checks are made.
Also added a few transforms where they were missing, mostly affecting baking
and displacement when `P` is used in the shader (previously `P` was in the
wrong space for these shaders).
Also removed `TIME_INVALID` as this may have resulted in incorrect
transforms in some cases.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2192
Bump mapping was happening in world space while displacement happens in object
space, causing shading errors when displacement type was used with bump mapping.
To fix this the proper transforms are added to bump nodes. This is only done
for automatic bump mapping however, to avoid visual changes from other uses of
bump mapping. It would be nice to do this for all bump mapping to be consistent
but that will have to wait till we can break compatibility.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2191
Now the factor works similar to other Blender areas to make the factor
more consistent for artists. The value 0% means equal to original
stroke, 100% equal to final stroke (50% means half way). Any value below
0% or greater than 100% create an overshoot of the stroke.
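In effect the factor is an unclamped linear interpolation, so values outside 0-100% extrapolate (a minimal sketch, function name hypothetical):
    /* factor = 0.0f -> original stroke, 1.0f -> final stroke,
     * 0.5f -> halfway; values outside [0, 1] overshoot. */
    static float gp_interp_coord(float from, float to, float factor)
    {
        return from + factor * (to - from);
    }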
The existing code uses the input value count of the first channel
for all of them. If the first channel is the largest, it leads to
a crash-causing buffer overrun in memcpy below. Likely this was
left since the time when only one channel was supported.
As a crash fix, probably should go into 2.78
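The bug shape, generically (hypothetical types, not the actual code):
    #include <string.h>

    typedef struct Channel { float *values; int count; } Channel;

    static void copy_channels(Channel *dst, const Channel *src, int num)
    {
        for (int i = 0; i < num; i++) {
            /* Was effectively src[0].count for every channel, which
             * overruns dst[i] whenever channel 0 is the largest. */
            memcpy(dst[i].values, src[i].values, sizeof(float) * src[i].count);
        }
    }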
This commit changes two things:
* It adds more keysyms preferably taken from XLookupKeysym rather than XLookupString (namely, all numpad ones).
* It falls back to keysyms from XLookupKeysym in other cases, when XLookupString does not produce anything we know of.
Finding the correct balance here is far from easy, but I think we are coming rather close to it now...
Root of the issue is that active render index became wrong. This is the actual
thing to be fixed, but as usual this is quite tricky to reproduce. Since such
a bad situation might happen more often, and the fix isn't really difficult or
intrusive, let's avoid the crash for now.
Can be revisited once we figure out root of the issue.
Nice for 2.78 release.
Own fault in new ID management work, thought rebuilding the DAG itself was
enough to actually update the whole scene, but we actually need to tag datablocks
for update as well, when we change (or remove) one of their ID pointers...
OCD commit, but cleans the code a bit:
- the first `if 0` block was supposed to draw collision objects but is
vastly outdated as most of the SmokeCollisionSettings member variables
were removed a few years ago and collision objects are drawn like other
objects anyway. Also it was committed already commented out back in
2009.
- the second `if 0` block was doing pretty much the same thing as the
few lines above it.
Most of the time, Lamps in Cycles are just a constant emission closure, no texturing etc. Therefore, running a full shader evaluation is wasteful.
To avoid that, Cycles now detects these constant emission shaders and stores their value in the lamp data along with a flag in the shader.
Then, at runtime, if this flag is set, the lamp code just uses this value and only runs the full shader evaluation if it is necessary.
In scenes with a lot of lamps and with "Sample all direct/indirect" enabled, this saves up to 20% of rendering time in my tests.
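Roughly the shape of the runtime check (names hypothetical, not the actual kernel code):
    #include <stdbool.h>

    typedef struct { float x, y, z; } float3;

    typedef struct KernelLamp {
        bool has_constant_emission; /* detected once at scene sync */
        float3 constant_emission;   /* value cached at sync time */
    } KernelLamp;

    float3 full_shader_eval(const struct ShaderData *sd); /* expensive path */

    static float3 lamp_emission(const KernelLamp *lamp, const struct ShaderData *sd)
    {
        if (lamp->has_constant_emission) {
            return lamp->constant_emission; /* skip the shader entirely */
        }
        return full_shader_eval(sd); /* only when texturing etc. is involved */
    }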
Reviewers: #cycles
Differential Revision: https://developer.blender.org/D2193
For proper indexing to work we need to use unaligned node with
identity transform instead of aligned nodes when doing refit.
To be backported to 2.78 release.
This is really a hack-fix actually, not sure why `get_pointcache_keys_for_time()` seems to assume
it will always find key for given part index at least for current frame, and whether this assumption
is wrong or whether bug happens elsewhere...
Anyway, this is to be wiped out in 2.8, so no point losing too much time on it, for now merely
returning unchanged (i.e. zero'ed) ParticleKeys in case index2 is invalid. Won't hurt anyway,
even if this did not crash in release builds, it would be returning gibberish values.
Weirdly enough, this version of XCode seems to have static_assert()
even when NOT using C++11. This is totally weird and counter-intuitive
since static_assert() is supposed to be a C++11-only feature.
Can XCode stop using the future, please? :)
The two SVM nodes added with e7ea1ae78c caused a slowdown on AMD cards when rendering with OpenCL, whether displacement was used or not.
In the Barcelona Pavillon scene on a RX480, this would cause a 12% slowdown.
Therefore, this commit adds an additional flag for feature-adaptive compilation so that the new SVM nodes are only enabled when they are needed (node tree connected to the Displacement output and Displacement type set to Both).
Also, the nodes were added to shaders even when the Displacement Type was set to Bump (the default), which was unnecessary and is fixed now.
Thanks to linda2 on IRC for reporting and testing and to maiself for help with the displacement shader code.
This fix might be relevant for 2.78, but it should be tested further before including it.
When drawing with Grease Pencil "continuous drawing" for a long time
(i.e. basically, drawing a very large number of strokes), it could be
possible to cause lower-specced machines to run out of RAM and start
swapping. This was because there was no limit on the number of undo
states that the GP undo code was storing; since the undo states grow
exponentially on each stroke (i.e. each stroke results in another undo
state which contains all the existing strokes AND the newest stroke), this
could cause issues when taken to the extreme.
See commit's comments for details, but this boils down to: do not try to use
purely runtime cache data as a 'real' ID pointer in readcode, it's likely
doomed to fail in some cases, and is bad practice in any case!
This fix implies dupliweight's object will be invalid until the first scene update
(i.e. first particles evaluation).
Two new modal operators to create a grease pencil interpolate drawing
for one frame or a complete sequence between two frames. For drawing
the temporary strokes in the viewport, two drawing handlers have been
added to manage 3D and 2D stuff.
Video: https://youtu.be/qxYwO5sSg5Y
The operator shortcuts are Ctrl+E and Ctrl+Shift+E. During the modal
operator, the interpolation can be adjusted using the mouse (moving
left/right) or the mouse wheel.
Could not reproduce it here, but since users having the issue claim it comes from
rB16cb9391634dcc50e, let's try to use the ugly `XLookupKeysym()` again for those modifier keys too...
Cpack generates a standard filename with git information in it, which might not always be wanted for release builds, this patch adds an option to override that default filename.
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Differential Revision: https://developer.blender.org/D2199
Amended to fix: wrong variable name in main CMakeLists.txt
This reverts commit 9444cd56db.
This commit caused some flickering in the bones when swapping IK to FK.
While it's unclear why such change caused any regressions, let's revert
it to unlock the studio.
Cpack generates a standard filename with git information in it, which might not always be wanted for release builds, this patch adds an option to override that default filename.
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Differential Revision: https://developer.blender.org/D2199
Based on reading documentation around. This particular version
is based on the ImageMagick documentation, which can be found
here:
http://www.imagemagick.org/Usage/filter/
http://www.imagemagick.org/Usage/filter/nicolas/
The current filter is based on measuring the mean error against the
current splash screen and choosing the combination of parameters
which gives the minimal mean error.
Fix cast shadows options (in material tab) not working in the viewport.
An off-by-one error. See D2194 for more.
Committing for Ulysse Martin (youle) who found & fixed this.
In User Preferences, Properties Editor and toolshelf, Ctrl+Tab and Ctrl+Shift+Tab now activates the next or previous space context (or category in case of toolshelf tabs), respectively.
For Properties Editor such functionality was completely missing, only toolshelf allowed cycling using ctrl+mousewheel (or only mousewheel while hovering tab region). Ctrl+Tab and Ctrl+Shift+Tab are common web browser shortcuts, so they're a reasonable choice to go with.
Reaching the first/last item doesn't cause the cycling to stop, we continue at the other end of the list then. (I didn't add this to Ctrl+Mousewheel toggling in toolshelf since I wanted to keep its behavior unchanged.)
We could get rid of (Ctrl+)Mousewheel cycling in toolshelf, but this may break user habits.
The cycling happens using a new operator, UI_OT_space_context_cycle, for toolshelf tabs it's hardcoded in panel handling code though.
Generalized rna_property_enum_step a bit and moved it to rna_access.c to allow external reuse.
Reviewed By: venomgfx
Differential Revision: https://developer.blender.org/D2189
This is a bug in the multithreaded task manager when the value range is negative.
The problem here is that if previter is unsigned, the comparison in the
return statement is unsigned, and works incorrectly if stop < 0 &&
iter >= 0. This in turn can happen if stop is close to 0, because this
code is designed to overrun the stop by chunk_size*num_threads as
the threads terminate.
This probably should go into 2.78 as it prevents a crash.
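Reduced illustration of the failure mode (not the actual scheduler code):
    #include <stdbool.h>

    /* With unsigned previter, stop is converted to unsigned for the
     * comparison; a negative stop becomes a huge value, so the thread
     * believes there is always more work. */
    static bool more_work_buggy(unsigned int previter, int stop)
    {
        return previter < stop; /* unsigned compare: wrong for stop < 0 */
    }

    static bool more_work_fixed(int previter, int stop)
    {
        return previter < stop; /* signed compare: correct */
    }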
Based on patch by @codemanx, but with slightly less verbose descriptions.
More detailed behavior etc. rather belongs to doc/python_api/examples/bpy.ops.x.py imho.
Was incorrect indexing done in the array. Caused by 5abae51.
Not sure why it needed to be changed here, but the array here is supposed to be
loop data, so bringing back the loop index as it originally was. The shading was
wrong in edit mode with BI active as well (so it's not like it's needed for
BI only).
Patch in collaboration with Alexander Gavrilov (angavrilov), thanks!
Should be double-checked and ported to 2.78.
Crash was due to a name collision in Alembic objects caused by the fact
that names derive from the one of the Blender object. An object having
multiple particle systems would thus give its name to various
subobjects.
Now use the name of the particle system for the Alembic object.
Image of the result of this patch:
{F352849}
The only thing different from the above image and this patch is I added a colon after "Falloff" after I took the screen shot.
Reviewers: Severin, meta-androcto
Subscribers: plyczkowski
Tags: #bf_blender, #user_interface
Maniphest Tasks: T33436
Differential Revision: https://developer.blender.org/D2195
Another great example of inconsistency in usercount handling - dupli_group was considered
as refcounted by readfile.c code (and hence by library_query.c one, which is based on it),
but not by editor/BKE_object code, which never increased group's usercount when creating
an instance of it etc.
To be backported to 2.78.
We did not update them for really long time and the currently used
hashes are quite old and probably wouldn't work without manually
updating all submodules.
Not as if it's something totally crucial (we ask to update submodules
all the times and that's what `make update` does) but updating hashes
will save some cloning/checkout time.
Includes:
- Version bump to 2.78
- Doxy file update
- New splash screen
- Wrapped some do_versions with version check
- Updated template to use proper font
After poking around a lot it seems Droid Sans was used during the 2.7x series
(or at least using this font gives no visible difference when comparing to
previous splash screens).
There were two inconsistencies in how 'save image' op initiated its settings:
* It would always use ImBuf->planes value.
* It would always use Image views settings.
Both of those settings should come from scene->r.im_format (as everything else)
when saving render/compo result...
Object coordinates can now be used in the displacement shader and will give
correct results, whereas before bump mapping was calculated from the displaced
positions and resulted in incorrect shading.
This works by evaluating the shader in two parts, first bump then surface, and
setting the shader state to match what it would be if the surface was
undisplaced for the bump shader evaluation. Currently only `P` is set as if
undisplaced, but other shader variables could be set as well, such as `I` or
`time`. Since these aren't set to anything meaningful for displacement I left
them out of this patch, we can decide what to do with them separately.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2156
Storing multiple copies of a shader was needed when the displacement method was
a mesh option and could be different for each mesh. Now that it's a shader option
this is unnecessary.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2156
Didn't do any bisecting, but guess it's caused by rBb54e95a5c8dcb7 (2.74 is fine, 2.75 isn't).
I think this fix makes some other hacks redundant but need to check details (will do when we're back to bcon1 to avoid regressions).
Issue was that the wm.open_mainfile OP caused all handlers to be removed and, since rB45592291, cancelled (which is correct in general); the menu that triggered the OP should not be cancelled though.
Not sure if this is a nice fix or not, it's however the safest fix I found. A different fix would be to call UI_popup_block_close before WM_operator_call_ex (in dialog_exec_cb), but not sure how safe this is and want to further investigate if it makes other hacks/fixes redundant.
There's still a crash with --debug-memory that confused the heck out of me (since I always have --debug-memory enabled), but I'll commit fix for that separately.
When using metadata stamping, it's often handy to have "Camera" in
front of the camera name, "Marker" in front of the marker text, etc.,
but sometimes those get in the way. This patch allows an artist to
turn those labels on/off.
Reviewed by: sergey, mont29, venomgfx
Our usercount handling was really... infuriating :|
Here, localization (i.e. 'shallow' copy that should not touch usercounts) was incrementing
the usercounts of the sole Texture IDs of lamps and worlds (on the weak and fallacious pretext
that related BKE_free... functions would decrement those counts)... Seriously...
So now, localize funcs do not increment any usercount anymore (since the matching BKE_free... ones do
not decrement any either), and we no longer call that stupid unlink when freeing temp
localized copies of lamps/materials at the end of preview generation.
Note that we probably still have a lot to do to clean up that copy/localize code, pretty sure
we can deduplicate a lot more.
The option is controlled with the WITH_WINDOWS_CODESIGN option and needs:
- Signtool must be found on the system, the standard windows sdk folders will be searched for it.
- The path to the pfx file (WINDOWS_CODESIGN_PFX)
- The password for the pfx; this can be set via the WINDOWS_CODESIGN_PFX_PASSWORD variable, but given that it ends up in CMakeCache.txt (which might be undesirable) there is a backup option of setting the PFXPASSWORD environment variable on the system.
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Tags: #bf_blender, #platform:_windows
Differential Revision: https://developer.blender.org/D2182
Curves and meshes (when no modifier application required) would increase their material usercount twice.
Not sure how/why it worked in previous code, but with the new, stricter ID handling we need a more
careful check of ID 'ownership'.
Reported by Sergey over IRC, thanks.
There basically are two issues here: in smooth mode (and all non-tangent
normal map types) it doesn't invert the normal for backfacing polys;
on the other hand for flat shaded tangent type it is inverted too soon.
This fix does a brute force correction by checking the backfacing flag.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Differential Revision: https://developer.blender.org/D2181
When undoing in the UV/Image editor and pressing the ESC key, there was a segfault
in Toolsettings because the reference was missing. Now the toolsettings
are loaded from context and not from local operator data.
Was kinda split in two different places (one allowed to modify the given path to always get a valid one,
the other only checked validity of the given path), not nice - and broken in the asset branch case.
So rather extended FileList->checkdirf a bit to handle both cases (modifying and non-modifying path).
We usually don't silence might-be-uninitialized warnings (which is the only
thing which could explain setting the matrix to all zeroes) so we can catch
such errors when using tools like Valgrind.
I don't get the warning here and the initializer was wrong, so removing it.
If it's _REALLY_ needed please do a proper initialization.
Previously, they were in a column alongside the list, but because the lists were
rarely that long, there would always be a large gap left below the list.
This was because the poll callback was checking for the presence of an active layer.
If you just create an empty datablock and try to paste, nothing would happen.
However, this check was kind of redundant anyway, as the operator would add a layer for
you if it didn't find one.
(Later this calculation should be moved into the iteration macro instead, since
it only needs to be applied once per layer along with the diff_mat calculation)
A common problem encountered by artists was that they would accidentally move
the 3D cursor while drawing, causing their strokes to end up in weird places in
3D space when viewing the drawing again from other perspectives.
This operator helps fix up this mess by taking the selected strokes, projecting them
to screenspace, and then back to 3D space again. As a result, it should be as if
you had directly drawn the whole thing again, but from the current viewpoint instead.
Unfortunately, if there was originally some depth information present (i.e. you already
started reshaping the sketch in 3D), then that will get lost during this process.
But so far, my tests indicate that this seems to work well enough.
After the GP v2 changes, it wasn't possible to easily set the thickness of strokes
if you didn't know about the pie menus already. This just exposes the same set of
settings.
Parenting options are not visible there (i.e. they're only for the 3D view),
so reserving space that isn't going to be used in those editors doesn't really
make much sense. Furthermore, those property regions are often quite narrow
too, so it doesn't help too much to keep these settings so narrow there.
The issue is that we are rendering to a 0..1 clamped sRGB buffer with
unpremultiplied alpha, where the correct thing to do would be to render
to an unclamped linear premultiplied alpha buffer. Then we would just
make fire purely emissive without affecting the alpha channel at all,
but that doesn't work here.
So for now, draw fire and smoke separately using different shaders and
blend modes, like it used to before the smoke programs were rewritten
(see rB0372b642).
Meshes with Cycles subdivision were being transformed to world space, leading to
normals sometimes being calculated in that space, while they should be in
object space. Also caused dicing to happen at the wrong rate for scaled meshes.
General reshuffling of defines and spacing/brace usage for consistency.
In particular:
* When defining types, don't mix pointers and non-pointer types on same line
to avoid confusion
* As much as possible, have all defines at the top of each block instead of
scattered haphazardly throughout the code
Users have been getting a bit confused by the way things are worded/arranged in
the UI. This patch makes a few changes to the UI to make it more clear how to
use subdivision:
- make Subdivide UVs option inactive when adaptive subdivision is enabled as UV
subdivision is currently unsupported
- add "px" to dicing rates in the Geometry Panel
- display the final dicing rate in the modifier
- reworded "Dicing Rate" in the modifier to "Dicing Scale" to make more clear
that this is a multiplier for the scene dicing rate and added a note the the
tooltip pointing the user to that setting in the Geometry Panel
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2174
Assigning NULL to scopes' data pointer in 'non-UI readfile' context was terribly wrong for sure;
if we really wanted to reset them here we should free them first.
But we can rather just ignore them here, those are purely runtime data managed by image editor,
no need to touch them here.
* Stroke editing functions should be in gpencil_edit.c not gpencil_data.c
(the latter is only for handling "CRUD" operations on things like
layers, brushes, and palettes)
* Deduplicate the GP_STROKE_BUFFER_MAX define
Added a way to select all the currently visible strokes that use the same
color as the selected stroke. This can be accessed via the Select Grouped (Shift-G)
operator as an alternative to selecting by layer.
This commit adds optional "pressure" and "strength" arguments to the
stroke.points.add() method. These are given default values of 1.0,
so that old scripts can be ported over to the new API with less effort
while reducing confusion about why auto generated strokes won't appear.
The idea of the change is to avoid the queue growing too long
and handle all the operations as quickly as possible.
Gives about 3% speedup on one of the barber shop shots here.
Now we pass streams to Alembic instead of passing the filename string.
That way we can open the stream ourselves with the proper unicode
encoding.
Note that this only applies to Ogawa archive, as HDF5 does not support
streams.
Differential Revision: https://developer.blender.org/D2160
Changes from microdisplacement work broke previous support for subdivision
meshes, sometimes leading to crashes; this makes things work again. Files
that contain "patch" nodes will need to be updated to use meshes instead, as
specifying patches was both inefficient and completely unsupported by the new
subdivision code.
Auto-scale is expected to work just fine now.
Only thing changed now is the pivot point for the scale: it is now
the same as rotation pivot, so scaling happens around weighted median
of the translation tracks. This seems to be what is actually required
for the VFX workflow.
Previously, this extension used the translation compensated image centre
as reference point for rotation measurement and compensation. During
user tests, it turned out that this setup tends to give poor results
with very simple track configurations.
This can be improved by using the weighted average of the location
tracks for each frame as pivot point. But there is a technical problem:
the existing public API functions do not allow passing the pivot point
for each frame alongside the stabilisation data. Thus this
change implements a trick to package a compensation shift into
the translation offset, so the rotation can be performed around
a fixed point (center of frame). The compensation shift will then shift
the image as if it had been rotated around the desired pivot point.
It is common in Blender to use 1-based counting for
frame sequences (while 0-based is allowed). Thus
initializing to use frame 1 as reference for stabilization
is likely to produce smooth start values in most cases.
Values > 1 will zoom in and values < 1 zoom out.
Rationale: the changed orientation is more natural
from a user POV and doing it this way is also more
consistent with the calculation of the other
target_* parameters.
Compatibility: This will break *.blend files saved
with the previous version of this patch from the
last days (test period). It will *not* break any
old/migrated files: Previously, the DNA field "scale"
was only used to cache autoscale. Only with the
stabilizer rework does "scale" become a first-class
persistent DNA field. There is migration code to
init this field to 1.0.
We should treat all three "target" ("expected") parameters in a similar way:
The "influence" control should only work on the measurement part of stabilisation,
i.e. it should only control the automatic part of stabilisation, while
the target parameters are deliberately set by the user and thus should
even be in effect when the automatic stabilsation is turned down.
It used to be so for location and rotation, but for the scale part,
I re-used the existing code for autoscale, which also had the scale influence
work on the autoscale factor. This was sensible in the old version,
since scale_influence was the only way to control the result. But now,
the user has always total control trough the "target_*" parameters
and thus we should prefer to treat all similar.
The if branches were reordered when the original patch was
committed, which broke the implicit non-NULL guarantee on link.
To prevent re-occurrence, add a couple of unit tests.
For best results use the latest 3Dconnexion driver. But latest is only
supported on Mac OS 10.9+. We go all the way back to Mac OS 10.6 so
have to deal with older driver versions.
See the original dlclose line for my faulty assumption. Waiting to
unload the driver later fixes the crash. Newer drivers don’t seem to
have this issue.
Also removed WITH_INPUT_NDOF guards as NDOFManager.h takes care of
this. Follow-up to b10d005 a few days ago.
Was a concurrent access of pointcache from both particle system and UI (time space).
Pointcache not being threadsafe is really an issue to be addressed for its next version,
for now simply locking spacetime (like we already do with 3DView), not ideal fix
but it's working and safe for release.
This code obviously should also use the cache_fields flag variable,
like the code for reading the lowres data in the same function.
This is because fluid_fields actually represents the old state before
smoke was reallocated to match cache_fields read from the file, and if
it has some fields enabled that aren't allocated any more, it crashes.
This also fixes a reverse glitch: when a file was loaded with
the current frame in the middle of a baked smoke+fire simulation,
smoke appeared immediately, but the fire didn't until the frame
was changed. The reason is the same: after file load no fields
are initially allocated and thus fluid_fields is 0.
ccgDM_drawMappedFacesMat was missing a smooth shade model restore, some other
functions were redundantly setting it since we can assume it to be the default
state already.
ifdef'ing out defines in DNA/RNA is not a good idea; it was breaking alternative keymaps loading
from the splash screen e.g. (reported by Sergey over IRC, thanks).
Joins some things to the same row and uses aligned columns to get
minimum use of vertical space.
Probably still some tweaks required, but getting there :)
This aligns the behaviour of the file selection with the other
exporters. The Alembic case would fail if the filepath did not have an
extension set.
Also set a default file name for the Alembic export operator in case the
Blender file was not saved before exporting.
This function is only really secure in a very limited amount of cases,
and can especially bite you later if you change some buffer sizes...
So not worth bothering with it, just always use BLI_strncpy instead.
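For illustration (buffer size arbitrary; BLI_strncpy is declared in BLI_string.h):
    /* Was: strcpy(name, source); -- only safe if source is provably
     * short enough, and it breaks silently if the buffer shrinks later. */
    static void set_name(char name[64], const char *source)
    {
        BLI_strncpy(name, source, 64); /* bounded copy, always NUL-terminated */
    }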
Error: `origin` and `edge` args were swapped in final `FindOccludee()` call of `ViewMapBuilder::ComputeRayCastingVisibility()`
Cleanup: main for loop in `Strip::createStrip()` was really confusing (though correct),
generated a false positive in coverity scan, now should be cleaner how it loops over its vprev/v/v2
triplet of consecutive items.
When WITH_INPUT_NDOF is disabled, 3D mouse handling code is removed
from:
- GHOST (was mostly done, finished the job)
- window manager
- various editors
- RNA
- keymaps
The input tab of user prefs does not show 3D mouse settings. Key map
editor does not show NDOF mappings.
DNA does not change.
On my Mac the compiled binary is 42KB smaller after this change. It
runs fine WITH_INPUT_NDOF on or off.
These latter can cause MSVC debug asserts if the array is empty. With C++11
we'll be able to do this for std::vector later. This hopefully fixes an assert
in the Cycles subdivision code.
Basically title says it all.
The goal is to make platform maintenance easier, so you don't have
to constantly scroll back and forth looking for if() branches to
check which exact platform you're currently working on.
Ideally we would also move option defaults to the platform files,
but I'm not sure how to implement that in a nice way yet.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2148
If the ruler is saved, a new color was created for each ruler. With this
change, a color is created for the first instance and reused in the
following instances. The default color is the current default color for
GP.
It's becoming annoying to have the public API depend on build type
and everything. Let's just always have the API defined and do stubs
in the function implementation instead.
Current implementation more or less indiscriminately links physics
objects to colliders and forces, ignoring precise details of layer
checks and collider groups. The new depsgraph seemed to lack some
such links at all. The relevant code in modifiers suffers from a
lot of duplication.
Different physics simulations use independent implementations of
collision and similar things, which results in a lot of variance:
* Cloth collides with objects on same or visible layer with dupli.
* Softbody collides with objects on same layer without dupli.
* Non-hair particles collide on same layer with dupli.
* Smoke uses same code as cloth, but needs different modifier.
* Dynamic paint "collides" with brushes on any layer without dupli.
Force fields with absorption also imply dependency on colliders:
* For most systems, colliders are selected from same layer as field.
* For non-hair particles, it uses the same exact set as the particles.
As a special quirk, smoke ignores smoke flow force fields; on the other
hand dependency on such field implies dependency on the smoke domain.
This introduces two utility functions each for old and new depsgraph
that are flexible enough to handle all these variations, and uses them
to handle particles, cloth, smoke, softbody and dynpaint.
One thing to watch out for is that depsgraph code shouldn't rely on
any properties that don't cause a graph rebuild when changed. This
was violated in the original code that was building force field links,
while taking zero field weights into account.
This change may cause new dependency cycles in cases where necessary
dependencies were missing, but may also remove cycles in situations
where unnecessary links were previously created. It's also now possible
to solve some cycles by switching to explicit groups, since they are
now properly taken into account for dependencies.
Differential Revision: https://developer.blender.org/D2141
For now simply reshuffle options so they keep proper dependency flow.
Benefits:
- Has the ability to hide tracks lists to work with other sliders around.
  Could be really handy to quickly get rid of lengthy lists.
- From feedback, it seems to fit the workflow better.
Things to doublecheck on:
- Feels a bit misordered: first you define whether you want to have
  rotation stabilized, then have tracks, then scale options.
  While this follows the dependency flow (which is really good and which
  we should not violate), there is a weird feeling about whether things
  are really where they have to be.
- Autoscale controls visibility of max-scale, can we just make it
active/inactive instead?
- Autoscale replaces the slider with a label. Can it be a disabled slider
  instead, to reduce visual jumping (a disabled slider prevents user
  input)?
Hopefully we'll still want to have a collapsible box after re-iterating
over these points, so we don't waste bits in DNA.
See this page for motivation and description of concepts:
https://github.com/Ichthyostega/blender/wiki
See this video for UI explanation and demonstration of usage
http://vimeo.com/blenderHack/stabilizerdemo
This proposal attempts to improve usability of Blender's image stabilization
feature for real-world footage esp. with moving and panning camera. It builds
upon the feature tracking to get a measurement of 2D image movement.
- Use a weighted average of movement contributions (instead of a median).
- Allow for rotation compensation and zoom (image scale) compensation.
- Allow to pick a different set of tracks for translation and for
rotation/zoom.
- Treat translation / rotation / zoom contributions systematically in a
similar way.
- Improve handling of partial tracking data with gaps and varying
start / end points.
- Have a user definable anchor frame and interpolate / extrapolate data to
avoid jumping back to "neutral" position when no tracking data is available.
- Support for travelling and panning shots by including an //intended//
position/rotation/zoom ("target position"). The idea is for these parameters
to be //animated// by the user, in order to supply a smooth, intended
camera movement. This way, we can keep the image content roughly in frame
even when moving completely away from the initial view.
A known shortcoming is that the pivot point for rotation compensation is set to
the translation compensated image center. This can produce spurious rotation on
travelling shots, which needs to be compensated manually (by animating the
target rotation parameter). There are several possible ways to address that
problem, yet all of them are considered beyond the scope of this improvement
proposal for now.
Own modifications:
- Restrict line length, it's really handy for split-view editing
- In motion tracking we prefer fully human-readable comments, meaning we
  don't use doxygen with its weird markup, and comments are supposed to
  start with a capital and end with a full stop.
- Add explicit comparison of pointer to NULL.
Reviewers: sergey
Subscribers: kusi, kdawg, forest-house, mardy, Samoth, plasmasolutions, willolis, sebastian_k, hype, enetheru, sunboy, jta, leon_cheung
Maniphest Tasks: T49036
Differential Revision: https://developer.blender.org/D583
Reduces noise from --debug-gpu so we can spot serious errors.
Blender 2.7x uses OpenGL 2.1, we don't care if features are deprecated.
I'll re-enable these warnings for blender2.8 after next merge.
Kernels can now be built without patch evaluation when not needed by the
scene (Catmull-Clark subdivision not in use), giving a performance boost
for some devices.
Enable based on --debug-gpu at the command line. Linux already works
this way.
This commit in master (2.78 timeframe) is the smallest change that gets
the desired result. I did a similar commit in blender2.8. Most ongoing
GL debug work will go into 2.8, not 2.7x.
In support of T49089
When running blender --debug-gpu
Display which debug facilities are available. One of these, in order of preference:
- OpenGL 4.3
- KHR_debug
- ARB_debug_output
- AMD_debug_output
All messages are logged now, not just errors. Will probably turn some of these off later.
GL_DEBUG_OUTPUT_SYNCHRONOUS lets us break on errors and backtrace to the exact trouble spot.
Callers of GPU_string_marker no longer pass in a message length, just the message itself (null terminated).
Apple provides no GL debug logging features.
By calling `tessellate()` from the mesh manager in Cycles we can do pre/post
processing or even threaded tessellation without concerning client side code
with the details.
This way OpenCL devices can also benefit from a smaller memory footprint, when using e.g. bumpmaps (greyscale, 1 channel).
Additional target for my GSoC 2016.
When updating the max values under stiffness scaling, they clip at the normal stiffness values
as expected, however when updating stiffness values, you could set them higher than the max
values, and the max values weren't updated accordingly. As the stiffness scaling computes using
the absolute difference between the max values and the stiffness values, you got higher
stiffnesses in scaled areas even though your max is actually lower than the normal stiffness.
This diff fixes that behaviour, by updating the max values to be equal to the stiffness whenever
you set a higher stiffness than the max value.
Also, I have initialized the max values to the same as the stiffnesses, as they were previously
just set to zero, and caused the same problem described above.
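The clamping logic boils down to something like this (property and field names approximate, see DNA_cloth_types.h):
    /* Setter sketch: raising a stiffness drags its max up with it, so the
     * invariant max >= stiffness holds after every user edit and the
     * scaled stiffness can never exceed what the user sees as the max. */
    static void cloth_stiffness_set(ClothSimSettings *sim, float value)
    {
        sim->structural = value;
        if (sim->max_struct < value) {
            sim->max_struct = value;
        }
    }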
Reviewers: lukastoenne
Reviewed By: lukastoenne
Tags: #physics
Differential Revision: https://developer.blender.org/D2147
Node tree update calls in the middle of a socket loop are dangerous, they can change sockets
on group nodes and link instances in particular. Updates should only happen after the operator
has finished.
Simply removed the extra convenience check for validity now. Worst case an invalid (red) link
is created which can be removed by the user as well and should simply be ignored by node systems.
The update system in nodes needs a complete rewrite to handle complex cases like this, where an
operator may need to react to changes during its execution.
GPencil conversion would just always run for file version 2.77.3. This wasn't an issue in master, but possibly for other branches that used the 2.77.3 block.
Wasn't aware that you have to add the asterisk for pointers either, this is kinda weird. Anyway, it's running correctly now.
New dependency graph puts materials to the graph in order to deal with animation
assigned to them and things like that. This leads us to a requirement to update
relations when slots change.
This fixes: T49075 Assignment of a keyframed material using the frame_change_pre handler
doesn't update the keyframe using the new dependency graph
This is mainly required for the new dependency graph where non-object
datablocks are a part of dependency graph.
This solves issue when making mesh shared by multiple objects a single
user one.
Now we have the 4 component ones first (float4, byte4, half4) followed by the 1 component ones (float, byte, half).
Makes code a bit more consistent and also reduces code a bit when enabling half support on GPU in next commit.
This also exposed a typo in half CPU images for 3D textures, which wasn't used yet, but good to have that one fixed anyway.
Point cache read code contains checks designed to prevent it reading
stale data when the relevant simulation code should instead compute
the next frame from the previous one. However in some situations like
motion blur subframes the simulation can't possibly do it and just
exits. This causes completely incorrect motion blur at or after the
last cached frame.
To fix, add a parameter that tells the cache code whether it should
apply the checks and exit, or read what it can even if stale (true
means exactly same as old behavior).
Doing this in cache rather than clamping the frame number better in
the caller lets it handle the case of incomplete cache that stops
before the official last frame.
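The parameter in a nutshell (signature and name approximate, see the patch for the real one):
    /* exact == true:  old behavior, bail out on stale data so the
     *                  simulation can step from the previous frame itself.
     * exact == false: read whatever the cache has, even if stale --
     *                  needed e.g. for motion blur subframes where stepping
     *                  the simulation is impossible. */
    int BKE_ptcache_read(PTCacheID *pid, float cfra, bool exact);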
Reviewed By: mont29, lukastoenne
Maniphest Tasks: T49004
Differential Revision: https://developer.blender.org/D2144
Using unit tests is the wrong way to control static behavior of the
application. They should only be used for checking dynamic behavior;
all the rest is easily controllable at compile time.
Doing tests at compile time is actually a more robust approach, since
we don't have a strict policy of running unit tests before accepting
any change.
Proper alignment control is coming shortly.
This reverts commit 7c3a06c349.
Since the optimal values depend on the device used, this option doesn't make much sense in the XML.
Therefore, it's now specified via the command line, just like the device itself.
message(FATAL_ERROR"WITH_WINDOWS_CODESIGN is on but WINDOWS_CODESIGN_PFX_PASSWORD not set, and environment variable PFXPASSWORD not found, unable to sign code.")
@@ -187,7 +187,7 @@ The next table describes the information in the file-header.
</table>
<p>
<ahref="http://en.wikipedia.org/wiki/Endianness">Endianness</a> addresses the way values are ordered in a sequence of bytes(see the <ahref="#example-endianess">example</a> below):
<ahref="https://en.wikipedia.org/wiki/Endianness">Endianness</a> addresses the way values are ordered in a sequence of bytes(see the <ahref="#example-endianess">example</a> below):
Some files were not shown because too many files have changed in this diff
Show More
Reference in New Issue
Block a user
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.