This adds an initial (and already working) generic API for defining, handling and drawing table UIs.
It supports two types of tables:
* Vertical flow: Rows are all in a vertical stack.
* Horizontal flow: Rows are stacked vertically, until some threshold height is reached. The table is then split into another vertical stack that is drawn next to the former one. Such drawing could be used for the file browser in non-thumbnail mode.
A table is built out of - guess what - rows and columns. The API allows defining new columns (identified by an id-name) and inserting new rows at any point. When drawing, we draw a cell for each column/row combination, using a custom cell-drawing callback.
The uiTable API will probably get some support for drawing buttons (uiBut) into custom layouts (uiLayout).
The API is written with big data sets in mind. In the future we may get a spreadsheet view where data like all vertices of a mesh is displayed. That would result in thousands of rows; the uiTable API should be ready for this.
This includes a few fixes in the MBC_ API.
The idea here is for this to be the only interface the render engines
will deal with for the meshes.
If we need to expose special options for the sculpting engine we can refactor
this accordingly. But for now we are shaping this on a per-case basis.
Note:
* We still need to hook up to the depsgraph to force clear/update of
batch_cache when mesh changes
(I'm waiting for Sergey Sharybin's depsgraph update for this though)
* Also ideally we could/should use BMesh directly instead of
DerivedMesh, but this will do for now.
Note 2:
In the end I renamed the `BKE_mesh_render` functions to `static
mesh_render`. We can re-expose them as BKE_* later once we need it.
Reviewers: merwin
Subscribers: fclem
Differential Revision: https://developer.blender.org/D2476
We started to run out of bits there, so now we separate flags
which came from __object_flags and which are either runtime or
coming from __shader_flags.
Rule now is: SD_OBJECT_* flags are to be tested against new
object_flags field of ShaderData, all the rest flags are to
be tested against flags field of ShaderData.
There should be no user-visible changes, and time difference
should be minimal. In fact, from tests here I can only see a hardly
measurable difference, and sometimes the new code is somewhat
faster (all within the noise floor, so hard to tell for sure).
Reviewers: brecht, dingto, juicyfruit, lukasstockner97, maiself
Differential Revision: https://developer.blender.org/D2428
Cycles add-on did not actually support reloading correctly.
When you want to correctly reload sub-modules (i.e. modules of an add-on
which is a package), you need to use importlib, a mere import will do
nothing with already loaded modules (RNA classes are sort of
pre-registered when they are evaluated, through the meta-class system).
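For reference, a minimal sketch of the reload pattern described above (the sub-module names `properties` and `ui` are hypothetical placeholders, not the actual Cycles add-on layout):
```
# Sketch of the importlib-based reload idiom for an add-on package.
# "properties" and "ui" are placeholder sub-module names.
if "bpy" in locals():
    # The add-on is being reloaded: a mere "import" would keep the old,
    # already-loaded sub-modules, so force-reload them with importlib.
    import importlib
    properties = importlib.reload(properties)
    ui = importlib.reload(ui)
else:
    from . import properties, ui

import bpy
```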
New options to define the style of the animation paths in order to get
better visibility in complex scenes.
Now it is possible to define the color, thickness and several options related
to the style of the lines used to draw the motion path.
This way we can stop traversing BVH nodes early on.
Gives about a 2-2.5x render time improvement with 3 BVH steps.
Hopefully this gives no measurable performance loss for scenes with
single BVH step.
Traversal is currently only implemented for QBVH, meaning old CPUs
and GPUs do not benefit from this change.
Similar to the previous commit, the statistics go as follows:
BVH Steps    Render time (sec)    Memory usage (MB)
        0                   46                  260
        1                   27                  373
        2                   18                  598
        3                   15                  826
Scene used for the tests is the agent's body from one of the barber
shop scenes (no textures or anything, just a diffuse material).
Once again this is limited to regular (non-spatial split) BVH;
support of spatial split for this feature will come later.
The idea is to create several smaller BVH nodes for each of the motion
curve primitives. This acts as a forced spatial split for the single
primitive.
This gives a render time speedup for motion blurred hair at the cost
of extra memory usage. The numbers go as follows:
BVH Steps    Render time (sec)    Memory usage (MB)
        0                  258                  191
        1                  123                  278
        2                   69                  453
        3                   43                  627
Scene used for the tests is the agent's hair from one of the barber
shop scenes.
Currently this is limited to scenes without spatial split enabled,
since the spatial split builder requires some changes to work properly
with motion steps coordinates.
Also fixed some issues with motion keys calculation:
- Clamp lower and upper limits of curves so we can safely call those
functions for the very first and very last curve segment.
- Fixed wrong indexing for the curve radius array.
- Fixed wrong motion attribute offset calculation.
Mimics how regular triangles work and makes it clearer where
the stuff is located in the kernel.
Needed to have some forward declarations because of the current placement
of things in the kernel.
Following @AlonDan's feature request and @hjalti's screenshot yesterday,
I've decided to implement support for this to make it easier to scan which
keyframes correspond with which set of controls, especially when faced with
a large wall of keyframes.
In retrospect, I should've done this a long time ago!
Was a bit confusing to have transparent and translucent depth
exposed but no diffuse or glossy.
Reviewers: brecht
Subscribers: eyecandy
Differential Revision: https://developer.blender.org/D2399
This is important for the reliable behavior of isnan/isfinite/min/max
functions to work with NaN and non-finite values. Some of the issues
with fast math are possible to work around, but we didn't find a way to
have a reliable min/max implementation yet.
Please NEVER EVER use such a statement, it's only causing HUGE
issues. What is even worse: it's not always possible to immediately
see what hell is coming from such a statement.
There are still some statements in the existing code, will leave
those for a later cleanup.
- flushing hidden state ran when it didn't need to.
- flushing checks didn't early exit when the first visible element was found.
- low level BM_*_hide API calls like this can skip iterators
and loop over struct members directly.
No user-visible changes.
Took pieces from D2316 and D2359, changed a few more things.
- use new immediate mode
- use new matrix stack
- remove state push/pop
Part of T49043 and T49450
- face-create-extend option could add hidden verts and edges into
the selection history (invalid state).
- faces could be created that included existing hidden edges
that remained hidden (invalid state too).
- newly created faces could copy hidden flag from surrounding faces,
giving very confusing results (looks as if face creation failed).
Surprising nobody noticed these years-old bugs!
Experimental option for the Reproject Strokes operator to project strokes on to
geometry, instead of only doing this in a planar (i.e. parallel to viewplane) way.
The current implementation is quite rough, and may need to be improved before it
is really ready for use. Potential issues:
* Loss of precision (i.e. stairstepping artifacts) from the 3D -> 2D -> 3D conversion
as we don't have a float version of one of the projection funcs
* Jagged depth if there are gaps, since it will default back to the 3d-cursor plane
if no geometry was found (instead of doing some fancy interpolation scheme)
* I'm not sure if it's that useful for adapting GP strokes to deforming geometry yet...
Now the eraser checks if there's an active frame with some strokes in it
before creating a new frame. There's no point in creating a new frame if
there are no strokes in the active frame (if one exists).
This still doesn't help much if there were strokes but they weren't touched though...
This operator adds a new frame with nothing in it on the current frame.
If there is already a frame there, all existing frames are shifted one frame later.
Quite often when animating, you may want a quick way to get a blank frame,
ready to start drawing something new. Or maybe you just need a quick way to
add a "placeholder" frame so that a suddenly-appearing element does not show
up before its time.
If the layers or the colors were renamed, the animation data was wrong
because the data path was not updated.
I have also fixed a possible stroke color name update issue when the name was duplicated, by moving
the rename function call after the unique-name check.
Avoids possible jumps when one is trying to do some really precise tweak.
Quite straightforward change for mouse input initialization: take Shift
state into account. However, this will interfere with the axis exclusion
feature, which currently also uses Shift (the feature to move something in a
plane which doesn't have the selected axis). This is probably not such a commonly
used feature (nobody in the studio even knew of it) and the only downside
now would be that such a constrained movement will become accurate by
default. That's easy to deal with from the user side by just releasing the Shift key.
Reviewers: brecht, mont29, Severin
Differential Revision: https://developer.blender.org/D2418
To make it faster to try different interpolation curves, there's a new operator
"Remove Breakdowns" which will delete all breakdowns sandwiched by normal
keyframes (i.e. all the ones that the previous run of the Interpolation op created)
This commit introduces the ability to use the Robert Penner easing equations
or a Custom Curve to control the way that the "Interpolate Sequence" operator
interpolates between keyframes. Previously, it was only possible to get linear
interpolation between the gp frames.
Workflow:
1) Place current frame between a pair of GP keyframes
2) Open the "Interpolate" panel in the Toolshelf
3) Choose the interpolation type (under "Sequence Options")
4) Adjust settings (e.g. if you're using "Custom Curve", use the curvemap widget
to define the way that the interpolation proceeds)
5) Click "Sequence" to interpolate
6) Play back/scrub the animation to see if you've got the result you want
7) If you need to make some tweaks, undo, or delete the generated keyframes,
then repeat the process again from step 4 until you've got the desired result.
The "gp_sculpt" settings should be strictly for stroke sculpting, and not abused by
other tools. (Similarly, if other general GP tools need one-off options, those should
go into the normal toolsettings->gpencil_flag)
Furthermore, this paves the way for introducing new settings for controlling the way
that GP interpolation takes place (e.g. with easing equations, or a custom curvemap)
* Reshuffled some blocks of code for better ease of navigation/flow in the file
* Improved some tooltips
* Removed "Helper" tag from some functions that serve bigger roles
* Fixed some errant formatting
The interpolation operators (and their associated code) occupied a significant
portion of gpencil_edit.c (which was getting a bit heavy). So, it's best to split
these out into a separate file to make things easier to handle, in preparation
for some further dev work.
After reverting commit rB4b99958ca12642, the line added at the end of the enum is not necessary anymore because it is replaced by the corresponding element in the list in the right position.
Things like `BLI_uniquename` had nothing, but really nothing to do in
BLI_path_util files!
Also, got rid of the length limitation in `BLI_uniquename_cb`; we can use
alloca here to avoid the overhead of malloc while keeping the size unrestricted
(within reasonable limits of course).
Just store bones that could not get renamed to desired flipped name on the
first try into a temp list, and try to rename them a second time.
This is a rather simple solution, it will induce 'over numbering' in case you
flip a bone to another unselected bone's name (since the number will be
incremented in both rename attempts), but I think this is an acceptable minor
glitch, for a corner case situation that does not have any good
resolution anyway.
Also, set `strip_numbers` option of `BKE_deform_flip_side_name` to
false, otherwise chains of bones with the same names would get their numbers
completely messed up after name flipping.
Based on work by @dfelinto in D2456 (https://developer.blender.org/D2456), thanks.
This is part of T49043
fixed up some color/rect calls
fixed up ANIM_channel_draw()
Reviewers: krash, merwin
Reviewed By: merwin
Tags: #bf_blender_2.8
Maniphest Tasks: T49043
Differential Revision: https://developer.blender.org/D2377
Had to add a few utility functions to replace existing functions. Let me know if these are duplicates.
Reviewers: merwin
Reviewed By: merwin
Tags: #bf_blender_2.8
Maniphest Tasks: T49043
Differential Revision: https://developer.blender.org/D2434
This adds two functions to project 3d coordinates onto a 3d plane,
to get 2d coordinates, essentially eliminating the plane's normal axis
from the coordinates.
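A minimal Python sketch of the idea, assuming an arbitrary plane given by a point and a normal (the helper name is made up for illustration; it is not the new math_geom function):
```
from mathutils import Vector

def point_on_plane_to_2d(point, plane_co, plane_no):
    # Build two orthonormal axes spanning the plane, then express the
    # point in those axes; the component along the normal is dropped.
    no = Vector(plane_no).normalized()
    axis_x = no.orthogonal().normalized()
    axis_y = no.cross(axis_x)
    rel = Vector(point) - Vector(plane_co)
    return rel.dot(axis_x), rel.dot(axis_y)
```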
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2460
It is quite likely in a triangulated mesh that the actual island edge
belongs to a different triangle than the current pixel; for example
consider corners of a triangulated axis aligned rectangle face that
have the additional edge: a pixel there will have to be assigned to
one of the triangles, but one of the edges of the original rectangle
can only be accessed through the other triangle.
Thus for robust operation it is necessary to do a recursive search.
The search is limited by requiring that it only goes through edges
that bring it closer to the target point, and also by depth as a
safeguard.
Differential Revision: https://developer.blender.org/D2409
The code requires the pixel on the other side of the seam to be assigned
precisely to the expected triangle. This can cause false negatives around
vertices, where a pixel is likely to touch multiple triangles and thus
cannot be said to unambiguously belong to any one of them, so check
distance to the intended triangle and accept the result if it's close.
1. Forcibly symmetrize the neighbor relations, so that if A is neighbor
of B, B is neighbor of A. The existing code is guaranteed to violate
this if texture resolution is different between the sides of a seam.
2. In texture mode dynamic paint adds a 1 pixel wide border around the
islands. These pixels aren't really part of the dynamic paint domain
and thus by design can't have symmetrical neighbor relations. This
means they can't be treated by effects like normal pixels.
The simplest way to handle it in a consistent way is to exclude
them from effects, but add an additional pass that recomputes them
as average of their non-border neighbors, located on both sides of
the seam.
This avoids intersecting AABBs of different curve primitives,
which results in fewer ray-to-primitive intersections.
This gives about 30% speedup of hair rendering in the barber
shop scenes here. There is still some work to be done on those
files to solve major speed issues on certain frames.
This way we can have different limits for regular and motion curves
which we'll do in one of the upcoming commits in order to gain some
percent of speedup.
The reasoning here is that motion curves are usually intersecting
lots of other bounding boxes, which makes it inefficient to have
a single primitive in the leaf node.
The maximal number of elements is supposed to be inclusive. That is what
was always meant in this file and what @brecht considered still
the case in 6974b69c61.
In fact, the commit message to that change mentions that we allowed
up to 2 curve primitives per leaf while in fact it was doing up to 1
curve primitive.
Making it really 2 primitives at most gives about a 5% slowdown for the
koro.blend scene. This is the reason why BVHParams.max_curve_leaf_size
was changed to 1 by this change.
Since the beginning of time, hair settings in Cycles were global for
the whole scene but were located in the particle context. This caused
quite some trickery to get shots set up for the movies here in the
studio, forcing artists to create a dummy particle system to change
hair settings on the shot.
While ideally these settings should properly become per-particle-system,
for the time being it will save sweat and blood to move the
settings to the scene context.
Reviewers: brecht
Subscribers: jtheninja, eyecandy, venomgfx, Blendify
Differential Revision: https://developer.blender.org/D2287
Made them closer to how GTest shows the output, so reading test logs
is easier now (at least feels more uniform).
Additionally now we know how much time tests are taking so can tweak
samples/resolution to reduce render time of slow tests.
It is now also possible to enable colored messages using magic
CYCLESTEST_COLOR environment variable. This makes it even easier to
visually grep failed/passed tests using `ctest -R cycles -V`.
Reusing PROP_TEXTEDIT_UPDATE instead of adding a new property flag just for search strings. Currently it's only used for search strings anyway so seems fine for now.
Fixes T50336.
This splits `interp_weights_face_v3` into `interp_weights_tri_v3` and
`interp_weights_quad_v3`, in order to properly handle three sided polygons
without needing a useless extra index in your weight array. This also
improves clarity and consistency with other math_geom functions, thus
reducing potential future errors.
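As a rough illustration of what the tri variant computes, here is a Python sketch of area-based barycentric weights (assuming the point lies on the triangle, and using made-up names rather than the actual C implementation):
```
from mathutils import Vector

def interp_weights_tri(p, v1, v2, v3):
    # Barycentric weights from sub-triangle areas; only three weights,
    # so no useless fourth index as with the old quad-sized array.
    p, v1, v2, v3 = Vector(p), Vector(v1), Vector(v2), Vector(v3)

    def area(a, b, c):
        return (b - a).cross(c - a).length * 0.5

    total = area(v1, v2, v3)
    return (area(p, v2, v3) / total,
            area(v1, p, v3) / total,
            area(v1, v2, p) / total)
```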
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2461
The issue was that we used to compare the number of vertices for the mesh after auto
smooth was applied (at the center of the shutter time) with the number of vertices
prior to auto smooth being applied. This caused false-positive consideration of a
mesh as changing topology.
Now we do the autosplit as early as possible and do it from the Blender side, so Cycles
does not need to re-implement splitting on its side.
This way the render engine can request the mesh to be auto-split and not
worry about implementing this functionality on its own.
Please note that this split is to be performed prior to tessellation.
Other than implementing a `mid_v3_v3_array` function, this removes
`cent_tri_v3` and `cent_quad_v3` in favor of `mid_v3_v3v3v3` and
`mid_v3_v3v3v3v3` respectively.
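A tiny Python sketch of what a `mid_v3_v3_array`-style helper boils down to (illustrative only, not the C code):
```
def mid_v3_array(points):
    # Midpoint (average) of an arbitrary number of 3D points; with 3 or 4
    # points this covers what cent_tri_v3/cent_quad_v3 used to do.
    n = len(points)
    return (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n,
            sum(p[2] for p in points) / n)

# e.g. mid_v3_array([(0, 0, 0), (1, 0, 0), (0, 1, 0)]) -> (0.333..., 0.333..., 0.0)
```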
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2459
When layout has only small buttons (buttons with icon and without label)
its size should be fixed. Code was modified to be able to add a new UI_ITEM_MIN
flag which indicates that the layout has only small fixed-width buttons.
Patch by @raa, with minor style edits by @mont29.
Reviewers: Severin, mont29
Reviewed By: mont29
Tags: #bf_blender, #user_interface
Differential Revision: https://developer.blender.org/D2423
This does not address the stippling shader in 2.8, though the solution can be
similar (own shader, not polluting the interlace shader).
part of T49043
Reviewers: merwin
Differential Revision: https://developer.blender.org/D2440
Am pretty sure node update should not touch the Main database like that,
but for now let's allow it, I guess the hack is needed for things like
Sverchok. ;)
If the active object is in weight paint mode, but some armatures in pose mode, 'manipulate center points' still affects the transformation. See bd2034a749.
Also removed redundant check, we basically did the same check for paint modes twice.
If a very low wetness absolute alpha brush is used with spread and
drying effects enabled, some pixels will rapidly accumulate paint.
This happens because paint drying code applies a minimal wetness
threshold that causes the paint to instantly dry out.
Specifically, every frame the brush adds paint at the specified
absolute alpha and wetness set to the minimal threshold, spread
drops it below threshold, and finally drying moves all paint to
the dry layer. This drastically accelerates the rate of flow of
paint into the affected pixels.
Fortunately, the reason paint spread actually ends up decreasing
wetness turns out to be a simple floating point precision problem,
which can be easily fixed by restructuring the affected expression.
Reported on IRC by dfelinto, thanks.
Root of the issue was that opening a new text file would create
datablock with one user, when Text editor is actually a 'user one' user.
This was leaving Text datablocks in inconsistent user count, and
generating asserts in BKE_library area.
Also changed a weird piece of code related to that extra user thing in
main remapping func.
Main issue here was that in the old usercount system 'user_real' simply did
not allow that kind of thing to work. With the new pair of 'USER_EXTRA'
tags, it becomes possible to handle the case correctly, by merely refining
checks about indirectly used objects when removing them from a scene.
Incidentally, found another related bug, 'link group objects to scene' was not
incrementing objects' usercount - bad, very very bad!
The settings.frame_start RNA was clamping frame start to frame end when frame start was bigger than frame end.
The fix is simply to set frame end first.
This is a hacky fix for a regression introduced sometime after 2.76.
The "Strip Time" setting on NLA Strips could not be edited without the
value immediately jumping back to the current FCurve value (or 0.0 if no
keyframes existed); even enabling autokey wouldn't let you key the property.
Until we have proper overrides (that only lose their values on frame change),
it's best that this setting is editable, even if it does mean you have to
manually change the frame to see the updated values.
Sometimes it can be useful to be able to keep onion skins visible in the
OpenGL renders and/or when doing animation playback. In particular, there
are two use cases where this is quite useful:
1) For creating a cheap motion-blur effect, especially when the before/after
values are also animated.
2) If you've animated a shot with onion skinning enabled, the poses may end
up looking odd if the ghosts are not shown (as you may have been accounting
for the ghosts when making the compositions).
This option can be found as the small "camera" toggle between the "Use Onion Skinning"
and "Use Custom Colors" options.
This is a regression introduced in rB5bd9e832
It looks more like a hack than a proper fix, but the shader logic
changed a lot for blender2.8, so I would rather do the elegant fix
there, while leaving master working.
If we ever do a 2.78b (or 2.79) this should get in.
That code was a joke, letting some invalid utf8 bytes pass, returning
wrong offset for some invalid sequences, not to mention length and
pointer easily going out of sync, NULL final byte being 'forgotten' by
memcpy, etc. etc.
The miracle here is that we could survive using this for so long!
Probably because we do not use utf-8 sanitizing enough in Blender,
actually... :/
This test should ensure we correctly detect all invalid utf-8 sequences in a given string.
DISCLAIMER:
Do not run this with current code - you'll either laugh or cry, nearly *all* checks fail!
Based on utf-8 decoder stress-test (https://www.cl.cam.ac.uk/~mgk25/ucs/examples/UTF-8-test.txt)
by Markus Kuhn <http://www.cl.cam.ac.uk/~mgk25/> - 2015-08-28 - CC BY 4.0
Please **DO NOT** add changes from master when it's totally unneeded!
Changes to BLI_ area most certainly shall *always* be done in master,
there is absolutely no point in adding more diff between the two
branches than needed, it will only make merging more cumbersome!
Conflicts:
CMakeLists.txt
source/blender/blenlib/intern/math_vector_inline.c
This is the same issue as was fixed with T39486: the adjustment pass
that tries to equalize different widths at either end of an edge
sometimes causes the widths to get bigger and bigger.
The previous fix was to let "clamp_overlap" do double duty as a way
to limit this behavior. But clearly this is undiscoverable, as the
current bug report shows. So I put in an "auto-limiting" mode that
detects when adjustments are going crazy and then acts as if
clamp_overlap were set.
The reason we can't always act as if clamp_overlap is set is that
certain models (e.g., Bent_test in regression tests) look bad if
that is enabled.
This reverts commit 5aa19be912 and b4a721af69.
Due to postponement of particle system rewrite it was decided to put particle code
back into the 2.8 branch for the time being.
Include idea that Blender may fail to launch it even if path is correct,
in some cases (dear Windows...).
Based on idea from @lijenstina and @blendify (D2349), thanks.
Over time roll and orbit would scale the quaternion
which is documented as unit length.
In practice any errors would be subtle,
but better normalize as other operators do.
Basic idea is to store fileversion in Library datablock, and split again
Main by libraries after lib linking, do_versions_after_liblink on
those separated Mains, and merge again.
This allows to still have correct versions for each data-block in that
second do_versions step.
Note that this is not used currently in master (might be soon, though),
but is needed for 2.8 work.
The main scheduler would be created way before the `-t` argument would be
parsed, since it was on the fourth pass! Moved it to the first pass of argparse;
that kind of stuff should be initialized asap on startup.
When linking data-blocks from same library in several steps, the already
linked data-blocks of the same lib would go again through versioning code...
Note: only fixed for libraries, I can't imagine how this could happen
with local data...
Data transfer was not checking if the required geometry existed, thus
causing a segfault when it didn't. This adds the required checks, and
reports errors if geometry is missing.
This also replaces instances of the words "polygon" and "loop" in error
messages with "face" and "corner" respectively, to be consistent with
the rest of the existing UI.
Reviewed By: mont29
Differential Revision: http://developer.blender.org/D2410
When appending a datablock, the default brushes were not created; they
were only created when drawing new strokes. Now the default brushes are created
when drawing strokes if necessary.
It's now possible to change the shortcut that enables planar transformation with the transform manipulators (shift+LMB on axis).
This actually fixes the workaround added in rB20681f49801fd. Thing is that we needed to allow using the manipulators, even if a modifier key is held so things like snapping work right away. That's why normal LMB behavior uses KM_ANY. However, event handling would always execute the KM_ANY keymap handler because it's iterated over first. Simply solved this by registering the KM_SHIFT keymap item first, so it has priority over the KM_ANY one.
Enum properties with icon only flag should use minimum/fixed width in expanded layouts (alignment=UI_LAYOUT_ALIGN_EXPAND).
Differential Revision: https://developer.blender.org/D2415 by @raa (only made some really minor corrections)
This matches behavior of Multiscatter GGX and could become handy later on
when/if we decide it would be beneficial to replace one closure with another.
Reviewers: lukasstockner97, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2413
There were 16 bits reserved for the primitive type, while we only need 4.
Reviewers: brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2401
There was fully wrong logic in the comparison: it was actually accessing memory
past the array boundary. Ran the test manually and the figure seems correct
to me now.
Spotted by @LazyDodo, thanks!
This code has not been used for a long time if not ever.
Most of the code was removed in rB1d3609262704f88c9e30b2cebdb236110b25cdc9
however, this was forgotten.
Adds two buttons to context (RMB) menu of path buttons:
* "Open File Externally" to open a file in an external app (only visible if path contains a filename)
* "Open Location Externally" to open a path in an external file browser
The functionality for this was already there, just hidden behind Shift/Alt click of the file_browse button (folder icon next to path button).
This gives us 9 flags available again for properties (we had none anymore),
and also makes things slightly cleaner.
To simplify (and make more clear the differences between mere properties
and function parameters), also added RNA_def_parameter_flags function (and
its clear counterpart), to be used instead of RNA_def_property_flag for
function parameters.
This patch is also a big cleanup (some RNA function definitions were
still using 'prop' PropertyRNA pointer, etc.).
And yes, am aware this will be annoying for all branches, but we really need
to get new flags available for properties (will need at least one for override, etc.).
Reviewers: sergey, Severin
Subscribers: dfelinto, brecht
Differential Revision: https://developer.blender.org/D2400
Was a waaaaayyyyy too generic name for such a specific func, renamed
to the much more descriptive BKE_libblock_relink_to_newid().
In near future (few weeks, to limit as much as possible silent mismatch
in branches), will rename BKE_libblock_relink_ex to BKE_libblock_relink,
this is the real generic data-block relinking func!
Discard the whole volume stack on the last bounce (but keep
world volume if present).
Volumes are expected to be closed manifold meshes, meaning if a
ray entered the volume there should be an intersection event
of the ray exiting the volume. The case when a ray hits nothing but
there are still non-world volumes in the stack can happen in
any of these cases:
1. Mesh is not closed manifold.
Such configurations are not really supported anyway and should
not be used.
Previous code would have considered the infinite length of the
ray to sample across, so render result wasn't really correct
anyway.
2. The exit intersection is farther away than the camera far
clip distance.
This case also will behave differently now, but previously it
wasn't really correct either, so it's not like we're breaking
something which was working as expected.
3. We missed exit event due to intersection precision issues.
This is exactly the case which this patch fixes, avoiding
fireflies.
4. Volume has Camera only visibility (all the rest visibility
is set to off)
This is what could be considered a regression but could be
solved quite easily by checking the volume stack's object flags
and keeping entries which don't have Volume Scatter visibility
(or even better: ensure Volume Scatter visibility for objects
with volume closure).
Fixes T46108: Cycles - Overlapping emissive volumes generates unexpected bright hotspots around the intersection
Also fixes fireflies appearing on the edges of a cube with an
emissive volume.
Reviewers: juicyfruit, brecht
Reviewed By: brecht
Maniphest Tasks: T46108
Differential Revision: https://developer.blender.org/D2212
The //ui_item_enum_expand// function replaces all of a pie menu's sub-layouts with a radial layout. It should replace only the root layout.
To reproduce the issue paste the code in Blender's text editor and press Run Script button.
```
import bpy


class VIEW3D_PIE_template(bpy.types.Menu):
    bl_label = "Select Mode"

    def draw(self, context):
        layout = self.layout.menu_pie()
        layout.column().prop(
            context.scene.render.image_settings, "color_mode", expand=True)


def register():
    bpy.utils.register_class(VIEW3D_PIE_template)


def unregister():
    bpy.utils.unregister_class(VIEW3D_PIE_template)


if __name__ == "__main__":
    register()
    bpy.ops.wm.call_menu_pie(name="VIEW3D_PIE_template")
```
Differential Revision: https://developer.blender.org/D2394 by @raa
Reading the rest of the code, it's obvious we want to start at YOFF lines
from the start of rect2i, so we have to also multiply by the number of
components.
Also did some minor cleanup.
Code was not accounting for the possibility that the width or height of given
buffers may be smaller than XOFF/YOFF...
Note that I seriously doubt that drop code actually works (as in, gives
expected results) when applied to tiles like it seems to be done
currently, but this is a much more complex (and involved) topic.
This adds a short message to the smoke, remesh and boolean modifiers' UI
when trying to use them when their compilation was turned off. This was
already implemented for the fluid and ocean simulation modifiers.
This also makes the 'quick fluid' and 'quick smoke' operators abort and
report when trying to use them when unavailable.
This kind of keeps threads "warmer" and should in theory give better
cache coherency, bringing some % of speedup. It was already tested
a few months ago and it gave a few % speedup in the barber shop scene, but was
reverted due to some bone popping. The popping is now fixed so it
should be fine to use the new scheduling policy.
Shuffle existing code, hook it up to the new (& old) viewport.
Also the 3D mouse rotation guide for NEW viewport only. Minor feature not worth enabling for legacy 3D view.
This sets forces to zero when Nabla is zero and a grayscale texture is
used or the texture mode is Gradient or Curl.
Nabla equal to zero was causing a zero division, and forces ended up
being set to `nan`.
Reviewed By: mont29
Differential Revision: http://developer.blender.org/D2393
Own error when changing order,
moving experimental features last made some sense,
but causes them to be listed twice.
Reorder and comment to avoid it happening again.
- Expand overly dense & confusing delta assignments.
- Replace bit shift with multiply.
Also link to 'clipped' version of this function
which may be useful to add later.
The Progress system in Cycles had two limitations so far:
- It just counted tiles, but ignored their size. For example, when rendering a 600x500 image with 512x512 tiles, the right 88x500 tile would count for 50% of the progress, although it only covers 15% of the image.
- Scene update time was incorrectly counted as rendering time - therefore, the remaining time started very long and gradually decreased.
This patch fixes both problems:
First of all, the Progress now has a function to ignore time spans, and that is used to ignore scene update time.
The larger change is the tile size: Instead of counting samples per tile, so that the final value is num_samples*num_tiles, the code now counts every sample for every pixel, so that the final value is num_samples*num_pixels.
Along with that, some unused variables were removed from the Progress and Session classes.
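In terms of the reported fraction, the change roughly amounts to the following sketch (variable names are made up, this is not the actual Progress API):
```
def progress_old(finished_tiles, num_tiles):
    # Old: every tile weighs the same, regardless of how many pixels it has.
    return finished_tiles / num_tiles

def progress_new(finished_pixel_samples, num_pixels, num_samples):
    # New: count every sample of every pixel, so a narrow border tile only
    # contributes in proportion to its actual area.
    return finished_pixel_samples / (num_pixels * num_samples)
```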
Reviewers: brecht, sergey, #cycles
Subscribers: brecht, candreacchio, sergey
Differential Revision: https://developer.blender.org/D2214
Quite handy for debugging.
Unfortunately, this doesn't support viewport tweaks yet since those
require GLSL for colorspace conversion. Maybe this will be implemented
as well one day in the future.
They are defined for MSVC but seem to be missing in GCC and Clang-3.8.
Maybe some further tweaks to the policy of when to define those functions are
needed, but it should be fine for now.
I can no longer reproduce crash with neither of the files where
the crash was originally visible. This is something where other
changes (light threshold, sampling) had an effect and made the code
work as it is supposed to. Could have been an optimizer issue
or something like that.
Let's see if we hit same issue again.
`IMB_remakemipmap` may 'shrink' the mipmap list without actually freeing
anything, so we need to check all possible levels in `imb_freemipmapImBuf`
to avoid memory leaks, not only those currently used.
Basically all this does is drawing layout previews into the opened layout search menu.
https://youtu.be/RHYWtZP7pyA
The previews are drawn using offscreen rendering so they can't use multi-threading (yet!). But that shouldn't be an issue since only a handful of previews are drawn at the same time. Normally we only need to redraw the preview if a screen layout was changed. Would be nice if PreviewImage could store if it supports threaded rendering.
Previews are saved in files, might be useful if you later want to support appending layouts.
Adds a new file screen_draw.c.
In ccgDM and emDM, looptri array recalculation was being handled
directly by `*DM_getLoopTriArray` (`getLoopTriArray` callback), while
`*DM_recalcLoopTri` (`recalcLoopTri` callback) was doing nothing.
This results in the array not being recalculated when other functions
that depend on the array data called the recalc function.
This moves all the recalculation code to `*DM_recalcLoopTri` and makes
`*DM_getLoopTriArray` call that.
This commit also makes a minor change to the `getNumLoopTri` function,
so that it returns the correct number without having to recalculate the
looptri array.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2375
There were two cases where correlation issues were obvious:
- File from T38710 was giving issues in 2.78a again
- File from T50116 was having totally different shadow between
sample 1 and sample 32.
Use some more simplified version of CMJ hash which seems to give
nice randomized value which solves the correlation.
This commit will break all unit test files, but it's a bug fix
so perhaps OK to commit this.
This also fixes T41143: Sobol gives nonuniform noise
Proper science paper about hash function is coming.
Reviewers: brecht
Reviewed By: brecht
Subscribers: lukasstockner97
Differential Revision: https://developer.blender.org/D2385
Most of them are harmless implicit conversions (e.g. Alembic deals with
doubles for storing time information when Blender uses both ints and
floats/doubles) or class/struct mismatch on forward declarations.
This aims at always ensuring that ID.newid (and relevant LIB_TAG_NEW)
stay in clean (i.e. cleared) state by default.
To achieve this, instead of clearing after all id copy call (would be
horribly noisy, and bad for performances), we try to completely remove
the setting of id->newid by default when copying a new ID.
This implies that areas actually needing that info (mainly, object editing
area (make single user...) and make local area) have to ensure they set
it themselves as needed.
This is far from simple change, many complex code paths to consider, so
will need some serious testing. :/
Was a ground work for some more improvements here, but got dragged
to some other studio maintenance job here.
The plan would be to enable exposure/gamma control for fallback mode
which will definitely be really handy for development and might be
handy for cases when OCIO config can not be read.
Crash is due to mismatching loop and face counts between the Alembic
data and the Blender derivedmesh, which does not appear so
straightforward to fix (the crash happens deep in the derivedmesh code).
So for now, try to detect if the topology has changed and if so, both
only read vertices (vertex colors and UVs won't be read, as tied to face
loops) and add a warning message in the modifier's UI to let the user
know.
The idea is simple: cache PD resolution from cache_point_density() RNA
function because that one is supposed to be called while database is
locked for original synchronization.
Ideally we would also pass array size to the sampling function, but
it turned out to be quite problematic because API only accepts int type
and passing size_t might cause some weird behavior.
This is a way to avoid possible memory corruption when render threads work
in parallel with the UI thread.
This does not guarantee complete safety, but makes things easier to check anyway.
This fixes multiple problems on latest Mac OS + Xcode. Hopefully does not cause any on other platforms.
The Xcode detection logic could use further cleanup. It's checking several old versions that are unsupported for Blender 2.8+ development.
Needed because deps graph can destroy objects from any thread. We ran into the same problem & solved it in GPU_buffers.
Implemented in C++11 since it provides the needed machinery. The interface is in C like the rest of Gawain.
This means editing a property will now always affect all selected objects, bones or sequencer strips. Support for this was added in rBdfbb876d4660 but you had to hold the Alt-key to use it. The old behavior of only editing the active object will not be kept, as decided in the 2.8 workflow meeting (reports coming). If you only want to edit the active object, you have to deselect others.
There are still a couple of issues to be resolved (listed below), but having it enabled by default helps testing and getting used to it and should motivate us to fix them ;)
To be fixed:
* Give users hint when edits are applied to all objects/bones/strips ("Applying to x objects") - there are ideas but we need to finalize and implement them
* Make it work better in corner cases (material editing, modifier property editing, etc)
Note: Values usually override the initial value of the object/bones/strips, except of number buttons where it depends if you enter the value (absolute override) or drag the value (add value change). This behavior is consistent with multi-button editing.
To bring the API more into line with the UI (and the general expected behaviour of
Blender when it comes to adding stuff), newly created layers and palettes will be
made the active ones by default. It's possible to override this behaviour still
(e.g. in cases where you're auto-generating a large number of them), but otherwise,
this change will help prevent errors like T50123.
When there were no prior palettes, creating a new one didn't automatically make it active.
This caused problems when trying to rename the color, as the RNA code assumed that if there's
a color, it must come from the active palette.
This commit partially fixes the problem by ensuring that if there are no palettes, the first
one will always be made active.
- Remove 'rotate_m2', unlike 'rotate_m4' it created a new matrix
duplicating 'angle_to_mat2' - now used instead
(better to avoid matching functions having different behavior); a sketch of the 2D rotation follows this list.
- Add 'axis_angle_to_mat4_single',
convenience wrapper for 'axis_angle_to_mat3_single'.
- Replace 'unit_m4(), rotate_m4()' with a single call to 'axis_angle_to_mat4_single'.
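A sketch of the 2D rotation that 'angle_to_mat2' represents (illustrative Python, conventions simplified, not the actual BLI code):
```
import math

def angle_to_mat2(angle):
    # 2x2 rotation matrix for 'angle' (radians); this is the matrix that
    # 'rotate_m2' effectively duplicated before its removal.
    c, s = math.cos(angle), math.sin(angle)
    return ((c, -s),
            (s,  c))

# Rotating (1, 0) by pi/2 gives approximately (0, 1).
```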
This is no longer needed since moving to MPoly/MLoop data structure.
Also use 3x3 matrix for transforming instead of quaternion
(slightly better performance).
Was giving huge artifacts in the barber shop file here in the studio.
Maybe not a fully optimal solution, but committing it for now to have a
closer look later.
All objects were being parented to a single instance of each parent
object, instead of their respective instances, when using dupliverts or
dupligroups.
Behavior was caused by the `persistent_id[0]` (vertex/face id) being
ignored when computing `parent_gh` hash, which caused all instances to
have the same hash, and thus only the first one was included.
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2370
Was affecting armatures' pose drawing code, could try to draw with
non-updated pose, which may contain NULL bone pointers (e.g. after some
data-block management tool execution, like make local, remapping, etc.).
Main intention is to give some quick way to control scene's memory
usage by clamping textures which are too big. This is really handy
on the early production stages when you first create really nice
looking hi-res textures, and only once it all works and is approved
do you start investing time in optimizing your scene.
This is a new option in the Scene Simplify panel and it acts as
follows: when the texture size is bigger than the given value it'll
be scaled down by half until it fits into the given limit.
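A small Python sketch of that clamping rule (hypothetical helper, not the actual implementation):
```
def clamp_texture_size(width, height, limit):
    # Halve the dimensions until the largest side fits the Simplify limit.
    while max(width, height) > limit:
        width = max(1, width // 2)
        height = max(1, height // 2)
    return width, height

# e.g. clamp_texture_size(8192, 4096, 2048) -> (2048, 1024)
```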
There are various possible improvements, such as:
- Use threaded scaling using our own task manager.
This is actually one of the main reasons why image resize is
manually-implemented instead of using OIIO's resize. Other
reason here is that API seems limited to construct 3D texture
description easily.
- Vectorization of uchar4/float4/half4 textures.
- Use something smarter than box filter.
Was playing with some other filters, but not sure they are
really better: they kind of cause more fuzzy edges.
Even with such TODOs in the code the option is already quite
useful.
Reviewers: brecht
Reviewed By: brecht
Subscribers: jtheninja, Blendify, gregzaal, venomgfx
Differential Revision: https://developer.blender.org/D2362
This will make triple buffer used by default for such configuration.
Ideally we would switch to triple buffer on all platforms, but let's
do it in 2.8 branch and don't open can of worms in master now.
This should solve issues like T49945.
This is very confusing, in fact, and the RNA tooltip was wrong;
BKE_object_make_local_ex actually ensures we never have several proxies
of the same object, since it always clears the proxy when it has to copy the object
to make it local...
What that RNA function is probably missing, though, is same logic as in
BKE_library_make_local to actually remap proxy from old linked object to
new local one.
I) `clear_proxy` parameter was not assigned to parm in RNA define code,
so 'pyfunc optional' flag was set to `new_id` parameter of `user_remap`
func - super ugly!
II) `clear_proxy` parameter itself, when set to False, would allow to
leave .blend file in invalid state (more than one proxy of same object),
this should never, ever be allowed in the RNA API imho. Left the API
untouched for now, just disabled any effect from this parameter (hence
always clearing the proxy when copying).
There is some define conflict between system headers and clew,
so delay include of clew.h as much as possible.
This is something which needed to be done in the code before
the refactor, hopefully such change will still work.
For the multi-GPU case users still have to reconfigure the devices they want to use.
Based on patch from Lukas Stockner.
Differential Revision: https://developer.blender.org/D2347
This can be used together with camera culling to keep nearby objects visible in
reflections, using a minimum distance within which objects are visible. It is
also useful to cull small objects far from the camera.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2332
When the parent object matrix change after the layer was parented, the
inverse matrix for strokes must be updated when editing strokes or the
transformations will be wrong.
We need to check node tree links are still valid, after we remapped
some NodeGroup.
Note: In fact, we have to run that for *all* ID types, since nodes may
use any kind of data-block (in theory)... :/
Forward compatibility code should never, ever be run during undo saving.
Note: related to T49991 (but does not fix it either, crash now happens
when doing a real file save...).
Adding a torus in edit-mode with 'Generate UVs', for example,
would either create another UV layer with the default name or
switch to the default UV layer name if it exists.
Now use the existing UV layer if present.
A threshold value of '1' would only allow access to a third of the basic
'color space' (from black to white, from 0.0 to 1.0 component values),
when you expect it to access the whole range.
Unfortunately, this needs a subversion bump to allow already defined
brushes to keep exact same behavior!
Also, did not change default value (0.2) for new brushes, think here
keeping current one makes more sense.
Thanks to @LucaRood for confirming the issue.
Empty images were implemented to expand (and eventually replace)
the background images functionalities. If we are ever to drop
background images "image empties" should support stereo/multi-view as well.
Updated the GL calls to the new immediate mode.
I left some glColor calls which I'm not sure whether that's right?
Part of T49043
warm regards,
Sebastian Witt
Reviewers: merwin
Reviewed By: merwin
Tags: #bf_blender_2.8
Maniphest Tasks: T49043
Differential Revision: https://developer.blender.org/D2305
The renderpasses for grease pencil are not necessary when rendering from
the sequencer.
This fix solves the GPF but we need to rethink the complete render
process for grease pencil and integrate better in the render and
composition workflow.
Thanks to Dalai Felinto for helping debug and fix the
problem.
Just as mentioned in title. Need new functions for calls found in `transform.c`
T49043
Reviewers: merwin
Tags: #bf_blender_2.8
Differential Revision: https://developer.blender.org/D2358
it seems to me the icons are unused:
- VICO_VIEW3D_VEC
- VICO_EDIT_VEC
- VICO_EDITMODE_VEC_DEHLT
- VICO_EDITMODE_VEC_HLT
- VICO_DISCLOSURE_TRI_RIGHT_VEC
- VICO_DISCLOSURE_TRI_DOWN_VEC
- VICO_MOVE_UP_VEC
- VICO_MOVE_DOWN_VEC
- VICO_X_VEC
Since their code contains immediate mode GL calls and they seem to be unused, I thought we could remove them.
Reviewers: mont29
Reviewed By: mont29
Subscribers: merwin
Tags: #bf_blender_2.8
Differential Revision: https://developer.blender.org/D2356
This conversion is pretty straightforward.
The code for debug drawing is not great, but it does the job.
Rewriting it is for another day, if it becomes more widely used.
Convert UI_view2d_constant_grid_draw to new immediate mode.
Part of T49043.
Reviewers: merwin
Reviewed By: merwin
Tags: #bf_blender_2.8
Differential Revision: https://developer.blender.org/D2298
Convert ED_region_grid_draw to new immediate mode.
Part of T49043
Reviewers: merwin
Reviewed By: merwin
Tags: #bf_blender_2.8
Differential Revision: https://developer.blender.org/D2289
On second and third thoughts, this should have been done that way since
the beginning, cases where you just delete a few data-blocks without any
serious knowledge of their usages are much, much more frequent than
cases where you are deleting thousands of data-blocks and are sure they
are not used anywhere anymore...
Own fault, but really frustrated that this topic was only raised the day
after 2.78a was released. :(
This has nothing to do here (freeing is not unlinking/remapping!), and
was actually redoing something already taken care of by
`BKE_libblock_relink_ex()` call in `BKE_libblock_free_ex()`.
Also, gives some noticeable speedup when removing datablocks with
do_unlink=True, about 5 to 10% quicker e.g. when deleting all objects
from a py console, in a big production file...
This commit reverts part of a fix for T33275, but things are:
- I can not reproduce the original issue at all, so doesn't seem to
cause any regressions.
- It is really bad idea to do delayed initialization in the threaded
environment, it's a straight way to some nasty issues.
- We can't do things like this anyway because we go more granular,
meaning such a delayed initialization will fail in the case of
having several IK solvers (unless they properly accommodate to
changed bone head).
- Verified the fix with various files from Mango project and all of
them seem to work nicely with the new dependency graph now (old depsgraph
has some flickering, but it's not related on DEG itself, but on
an environment with lots of proxies and threaded evaluation and it
is not a new behavior).
This reverts commit 9b5a32cbfb.
Apparently it is possible to have another thread mucking around with the hash.
Needs deeper investigation, for the time being reverting to prevent crashes.
This commit fixes two issues:
- UV/Image editor uvs menu did not match the 3D View's which was changed in rB2b240b043078
- Circle select tool was missing in particle edit mode
Reviewers: Severin
Differential Revision: https://developer.blender.org/D2329
Added BKE_libblock_free_data_ex() which takes a special do_id_user
argument which basically indicates whether the main database was already
taken care of, so that it has no "dead" pointers.
Gives about 40% speedup of main database free with quadbot scene
(3.4sec vs. 5.4 sec on quite powerful desktop).
Just fixing crash itself. Actually operator shouldn't run in most editors (not in dopesheet either I guess), but don't want to spend time on that right now.
This option makes an operator not push a task to the undo stack if the previously stored element is the same operator or part of the same undo group.
The main usage is for animation, so you can change frames to inspect the
poses, and revert the previous pose without having to roll back tons of
"change frame" operator, or even see the undo stack full.
This complements rB13ee9b8e
Design with help by Sergey Sharybin.
Reviewers: sergey, mont29
Reviewed By: mont29, sergey
Subscribers: pyc0d3r, hjalti, Severin, lowercase, brecht, monio, aligorith, hadrien, jbakker
Differential Revision: https://developer.blender.org/D2330
all is in the title too.
Reviewers: merwin
Reviewed By: merwin
Subscribers: Blendify, Severin
Tags: #bf_blender_2.8, #opengl_gfx
Maniphest Tasks: T49043
Differential Revision: https://developer.blender.org/D2337
- `bmesh_radial_faceloop_find_first` & `bmesh_disk_faceedge_find_first`
can be replaced with a single call to a new function:
`bmesh_disk_faceloop_find_first`
- `bmesh_disk_faceedge_find_first` called `bmesh_radial_facevert_check`
which isn't needed, since either the current or next loop in the
cycle is attached to the edge we're looking for.
The code was templated already, so I don't see a big reason to have
3 versions of templated functions. It was giving some extra code
to maintain and in fact already had divergence in support of huge
image resolutions (missing size_t cast in byte image loading).
There should be no changes visible by artists.
Consumes much less memory (1/3 for both normals = 32 bytes less per edge). Same visual result.
We can pack normals for other draw modes to get similar savings.
Part of T49165
Most useful for packed normals, which take 1/3 the space of float32 normals.
2-bit alpha|w component is ignored for now.
Batch API can use these now, will add support to immediate mode API if desired.
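As a rough idea of what packing a normal into such a format involves, here is a Python sketch (a guess at a 2_10_10_10-style layout, not the actual Gawain code):
```
def pack_normal_10_10_10(x, y, z):
    # Pack a unit normal into a 32-bit value with three signed 10-bit
    # components; the 2-bit w/alpha field is left at zero (ignored for now).
    def to_snorm10(f):
        i = int(round(max(-1.0, min(1.0, f)) * 511.0))
        return i & 0x3FF  # two's complement within 10 bits
    return (to_snorm10(z) << 20) | (to_snorm10(y) << 10) | to_snorm10(x)
```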
Enabling on Windows first. Will enable on all platforms after we switch Blender to core profile.
Just return the face or NULL, like BM_edge_exists(),
Also for BM_face_exists_overlap & bm_face_exists_tri_from_loop_vert.
No functional changes.
Old code did some partial overlap checks where this made some sense,
but it's since been removed.
Object freeing may in some cases access its obdata (in case it has some
caches e.g.); since here obdata may have already been freed, let's set the
object's data pointer to NULL (probably not an ideal solution, but we don't
care much, those form archipelagos of unused linked datablocks,
we nuke'em all anyway).
Also fix stupid mistake in one of own recent commits (using ID we just
freed, tsst...).
Drop SciTech support & workarounds for WinCE and OpenGL ES.
AMD_debug_output is still in the code but disabled. Once I verify the newer extensions are available on all the GPU + OS combos we support we can delete this disabled code.
This should make it easier to sculpt in high resolutions, downside is that the new way to calculate maximum edge length is a bit less intuitive. Maximum edge length used to be calculated as blender_unit * percentage_value, now it's blender_unit / value.
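Spelled out, the formula change is roughly (sketch with made-up names):
```
def max_edge_length_old(blender_unit, percentage_value):
    # Old: a percentage of the base unit.
    return blender_unit * percentage_value

def max_edge_length_new(blender_unit, constant_detail):
    # New: divide by the detail value, so higher detail means shorter edges.
    return blender_unit / constant_detail
```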
Reused old DNA struct member, but had to bump subversion to ensure correct compatibility conversion. Also changed default value slightly (would have had to set to 3.333... otherwise).
Was Requested by @monio (see https://rightclickselect.com/p/sculpting/zpbbbc/dyntopo-better-scale-input-in-constant-detail-mode) and I think it's worth testing.
Motivations:
1) GLenum is too broad; tightly-defined enum just for this is safer.
2) enable a Vulkan future
New code should use these instead of GL_FLOAT etc. When all existing code has been updated to use new enum, we can drop compatibility with GLenum values.
Early work towards 10_10_10 format, more to come soon.
Do not set 'real user' to groups every time we run the first clearing loop.
And do fully clear properly LIB_TAG_DOIT (this is not yet enforced in
existing code, but would love to get to that stage in future, so let's
do it at least with new code!).
Basic idea is to split first loop in two, and run checks before making
anything actually local, to detect data-blocks that we can directly make
local (because we are sure they are only used by already/future local
datablocks).
This allows avoiding a lot of overhead in the later 'cleanup' steps of this
function; here with the barbershop shot it's four times quicker (from 190s to 48s).
We are still far from the instantaneous results of MakeLocal in 2.77,
but in that version main characters lose their connection to their
armature and remain static after makelocal, so guess new code is still
better. ;)
There are probably more optimizations possible here, but would rather
polish this area of code once we get rid of proxies, those really
make it a nightmare to work on.
If the OpenGL render with grease pencil is run from the VSE with the current
frame outside the visible frames, the render pass is wrong and the render
must be canceled because there is nothing to render. Related to #T49975
Before this commit, the brush set was created when drawing the first stroke,
but if the user created the datablock or the layer manually
(not by drawing) the brush list was empty.
This commit complement the python fix by Sergey:
https://developer.blender.org/rB89c1f9db37cc1becdd437fcfdb1877306cc2b329
Culprit here was once more proxies. Think what was happening here was:
1) Both proxy and proxified armatures' PoseChannels were cleared
(needed after remapping due to Bone pointers being stored in pchans).
2) Proxy PoseChannels got rebuilt in `BKE_pose_rebuild_ex()`, which ends,
in proxy cases, by actually replacing rebuilt pchans by those from
the proxified object... which has not yet been rebuilt.
Fixed the issue by merely adding the bone pointer to the data copied from
the original pchan into the new 'from proxy' one... Sounds much, much safer and
saner anyway, that way we can be sure the bone pointer is actually pointing
to a bone of the object's armature (this is supposed to be the same
Armature datablock between proxy and proxified objects, but that may not
always be true, especially during the makelocal process).
Did similar trick to old dependency graph: tag invisible relations for update.
Might need some re-consideration, see the comment.
This should solve our issues with powerlib addon here in the studio.
New dependency graph expects strict separation between nodes and relations builder,
meaning, if we try to create relation with an object which is not in the graph yet
we'll have an error in depsgraph.
Now, so far object nodes were created from bases of the current scene, which caused
missing objects in graph in certain cases.
Didn't find a better approach than to simply ensure object nodes exist when we know
they'll be used by the relation builder.
`kernel_path.h` and `kernel_path_branched.h` have a lot of conditional code and
it was kind of hard to tell what code belonged to which directive. Should be
easier to read now.
Caused by 4811b2d356 which caused the event handler hack that is used to fire up the file browser from other operators to fail. Basically the context from before the file browser is opened gets stored and used later for executing the actual file read/write operation (in this case, saving image). This context storage is cleared when exiting an editor since 4811b2d356, which is technically correct, but causes usage of NULLed context data in this case, because the file browser is exited before the file read/write operation is executed.
For now I solved this by moving the fileselect handler to list of normal handlers, instead of modal ones. 4811b2d356 only touches list of modal handlers so we avoid the crash. Ideally we'd completely refactor how the file browser opening works to get rid of these event handler hacks.
Note that I wouldn't be surprised if this causes other regressions, but I couldn't find one so it's worth a try.
Proxified objects can never be local, we can totally ignore them here.
This 'fixes' the asserts related to usercount when trying to remap poselib
of localized proxified objects (not sure what exactly was going on wrong here,
but proxies are a giant can of worms for sane data-blocks handling anyway :/).
This new `bpy.types.ID.make_local(clear_proxies=True)` allows Python
code to press the "Make Local" button on any ID block. I chose
`clear_proxies=True` as the default, since it's the default behaviour
of `id_make_local()` (defined in `library.c`).
The caller does need to take care of ensuring that linked-in objects
don't refer to local data, and that proxies aren't broken.
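For illustration, a minimal usage sketch (the datablock/object names are made up, and the assumption that make_local() returns the localized datablock is mine):
```
import bpy

# A linked-in datablock (hypothetical name).
linked_action = bpy.data.actions["walk_cycle"]

# Press the "Make Local" button from Python; clear_proxies=True is the default.
local_action = linked_action.make_local(clear_proxies=True)

# References held elsewhere may still point at the linked original,
# so re-assign them explicitly (hypothetical object name).
bpy.data.objects["rig"].animation_data.action = local_action
```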
Reviewers: sergey, mont29
Reviewed By: mont29
Subscribers: dfelinto
Differential Revision: https://developer.blender.org/D2346
As our library of built-in shaders grows, it's important to create, access, and discard them efficiently.
Lookup via GPU_shader_get_builtin is now constant time instead of linear (# of built-in shaders). This is called very often with our new immediate mode.
Creation and discard are unified.
Adding a new shader requires fewer steps.
365 lines shorter :D
All is in the title.
Reviewers: merwin
Tags: #bf_blender_2.8, #opengl_gfx
Maniphest Tasks: T49043
Differential Revision: https://developer.blender.org/D2336
This one is for the straight line (white with width 2.0 over a black with width 4.0) drawn when you use the gradient tool.
To test: Image editor, create / open an image, choose image paint mode and on the tool shelf: choose the Fill brush and enable "Use Gradient" for it. Then click and drag on the image.
From what I checked, calls to glLineWidth are not being removed yet, so I kept them.
Reviewers: dfelinto, Severin, merwin
Reviewed By: merwin
Tags: #bf_blender_2.8, #opengl_gfx
Maniphest Tasks: T49043
Differential Revision: https://developer.blender.org/D2312
D2311 by @ianwill
This is the radial control that appears when we change the size of a brush in sculpt, vertex paint and texture paint modes, by pressing "f".
Also includes a new built-in shader that can be useful in other places.
Part of T49043
Add subframe to the animated seed hash calculation.
Should be no difference for the regular files, only for cases when scene is
rendered from sequencer with a speed effect, which is not really a common thing.
This is a late follow-up commit to the light sample threshold changes which
caused difference in rendering all existing .blend files which is not something
we are happy about: it is fine to use new optimized defaults for new files, but
existing ones should always be rendering in the same way as they used to be.
Sorry for the inconvenience, but such a thing should have been done to begin with.
If this setting was modified it will not be reset to zero.
Now all render tests should be passing again.
P.S. It is also really annoying to bump subversion for such reasons, but currently we
don't have a better way to achieve what we want.
Lamp Data node requires shadow sample array which is only enabled when
Shadows are enabled in the shading settings.
This commit prevents crash but might not give expected render results
in such a configuration.
Gawain does very strict runtime checking to help us catch coding errors. Final release should disable most of these checks, so I'm disabling now for all non-debug builds.
When writing Blender code that uses Gawain, always make debug builds and test there! "make lite debug" is my favorite.
Right now this only affects other objects in wireframe. The idea is to do something similar for other draw modes, and keep focus on the edit object.
As seen at #bcon16
Most of this was already in place, just enabling & adding comments.
One fix was needed to make batch uniforms stick between multiple draws.
Added comments to selection outline; no functional changes there.
After the CUDA dynload changes, having the CUDA toolkit installed became required
in order to compile Cycles. This only happened due to a wrong
default value for the option.
The idea is simple: when falling back to one of the nodes which was partially
handled we "resume" checking outgoing relations from the index at which we stopped.
This saves about 15-20% of depsgraph construction time.
This is a more proper way to go:
- Avoids re-compilation of all dependent files when the implementation changes
without a changed API,
- The linker should have a much simpler time now de-duplicating and getting rid
of redundant implementations.
The idea here is to address the issue that a name on its own is not
always unique: for example, when adding driver operations the
name used for nodes is the RNA path (and multiple drivers can
write to different array indices of the path). Basically, now
it's possible to pass an extra integer value to distinguish
operations in such cases.
So now we've switched from using sprintf() to construct a unique
operation name to passing the RNA path and array index.
There should be no functional changes yet, but this work is
required for further work about replacing string with const
char*.
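A minimal sketch of the idea in plain Python (the registry and names here are made up, not the actual depsgraph code):
```
# Operations are identified by (name, extra index) instead of a
# sprintf()-built unique string.
operations = {}

def add_operation(rna_path, array_index, evaluate):
    key = (rna_path, array_index)
    if key in operations:
        raise KeyError("operation already exists: %r" % (key,))
    operations[key] = evaluate

# Two drivers writing to different array indices of the same RNA path
# no longer collide.
add_operation('pose.bones["hand"].location', 0, lambda: None)
add_operation('pose.bones["hand"].location', 1, lambda: None)
```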
There is no real reason to have nodes storing heap-allocated name
and description. Doing this increases the number of allocations during
dependency graph building, which usually means some slowness.
We're temporarily losing some eyecandy in the graphviz visualizer,
but we can bring those back as a part of the graphviz dump (which happens
much less often than a depsgraph build).
This will happen in multiple commits for the ease of bisect in the
future just in case this causes any regression. This commit contains
ID creation API changes.
Bullet spring constraint already supports rotational springs, but
they are not exposed in blender UI, likely due to a simple oversight.
Supporting them is as simple as adding a few DNA/RNA properties
with appropriate UI and passing them on to Bullet.
Reviewers: sergof
Reviewed By: sergof
Differential Revision: https://developer.blender.org/D2331
Previously, it was only possible to choose a single GPU or all of that type (CUDA or OpenCL).
Now, a toggle button is displayed for every device.
These settings are tied to the PCI Bus ID of the devices, so they're consistent across hardware addition and removal (but not when swapping/moving cards).
From the code perspective, the more important change is that now, the compute device properties are stored in the Addon preferences of the Cycles addon, instead of directly in the User Preferences.
This allows for a cleaner implementation, removing the Cycles C API functions that were called by the RNA code to specify the enum items.
Note that this change is neither backwards- nor forwards-compatible, but since it's only a User Preference no existing files are broken.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Subscribers: brecht, juicyfruit, mib2berlin, Blendify
Differential Revision: https://developer.blender.org/D2338
With this fix, using a MIS map resolution equal to the image size for closest interpolation, or twice the size for linear interpolation, gets rid of all fireflies.
Previously, a much higher resolution was needed to get acceptable noise levels.
There are more DLLs hanging out in the ucrt folder, but I just grabbed the ones Blender requested (not sure if that's a wise idea, but it seems to work).
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Differential Revision: https://developer.blender.org/D2335
As seen at #bcon16
Geometry shader version is automatically used on modern GL runtimes. Legacy version is used on pre-3.2 systems (Mac, Mesa compat profile). They have the same inputs and visual result.
TODO: specialized versions that are less flexible -- draw ALL edges or draw JUST silhouette edges.
Part of T49165
We have to clear `newid` of all datablocks, not only object ones.
Note that this whole stuff is still using some kind of older, primitive
'ID remapping', would like to see whether we can replace it with new,
more generic one, but that's for another day.
Code would try to add the same key multiple times to `parent_gh` (for this
ghash a lot of dupliobjects may generate the same key).
Was making the tool unusable in debug builds.
Also optimise things a bit by avoiding creating parent_gh when only
`use_base_parent` is set.
New code dealing with getting rid of lib-only cycles of data-blocks
could add the same datablock several times to the list of candidates. Now
this is avoided, and pointers are further cleaned up as a double-safety
measure.
Feature request during bconf; makes sense to have it even as a hack for
now, since this is probably one of the most common use cases. This should
be redone in bmesh once we have proper custom normals handling in edit mode...
The issue was caused by image ID nodes not being in the depsgraph.
Now, tricky part: we only add nodes but do not add relations yet. Reasoning:
- It's currently important to only call editor's ID update callback to solve
the issue, without need to flush changes somewhere deeper.
- Adding relations might cause some unwanted updates, so will leave that for
a later investigation.
Basically, the problem here was that the transform that's used to bring texture coordinates
to world space is either fetched while setting up the shader (when Object Motion is enabled) or
fetched when needed (otherwise). That helps to save ShaderData memory on OpenCL when Object Motion isn't needed.
Now, if OM is enabled, the Lamp transform can just be stored inside the ShaderData as well. The original commit just assumed it is.
However, when it's not (on OpenCL by default, for example), there is no easy way to fetch it when needed, since the ShaderData doesn't
store the Lamp index.
So, for now the lamps just don't support local texture coordinates anymore when Object Motion is disabled.
To fix and support this properly, one of the following could be done:
- Just always pre-fetch the transform. Downside: Memory Usage increases when not using OM on OpenCL
- Add a variable to ShaderData that stores the Lamp ID to allow fetching it when needed
- Store the Lamp ID inside prim or object. Problem: Cycles currently checks these for whether an object was hit - these checks would need to be changed.
- Enable OM whenever a Texture Coordinate's Normal output is used. Downside: Might not actually be needed.
The animation system has separate fcurves for each array element and
the dependency graph creates separate nodes for each fcurve. This is
needed to keep granularity of updates, but causes issues because
the animation system will actually write the whole array to the property when
modifying a single value (this is a limitation of the RNA API).
Worked around by adding an operation relation between array drivers
so we never write the same array from multiple threads.
It was possible to have synchronization issues when accumulating smooth
normals to a vertex, causing shading artifacts during playback.
Bug found by Dalai, thanks!
They were not real issues, it's just some areas of code tried to create
relations between non-existing nodes without checking whether such
relations are really needed.
Now it should be easier to see real bugs printed.
Hopefully should be no regressions here.
Some platforms are having a hard time using this linker, so added an option
to not use it. The option is an advanced one and enabled by default, so it
should not cause any changes for current users.
Request from Hjalti Hjalmarsson for the animation work.
Basically a common part of the workflow of animation is to change the pose, scrub back and forth a few times and roll back the changes when unsatisfied.
However if you go back and forth too many times the UNDO stack would be full, and it would not be possible to bring back the previous pose.
I'm leaving clip_editor change frames as it is for now. But we can
probably change the behaviour there as well.
Issue here was that py API code was keeping references (pointers) to the
linked data-blocks, which can actually be duplicated and then deleted
during the 'make local' process...
Would have liked to find a better way than passing an optional GHash to get
the oldid->newid mapping, but could not think of a better idea.
Radial append/remove had swapped args and *slightly* different behavior.
- bmesh_radial_append(edge, loop)
- bmesh_radial_loop_remove(loop, edge)
Match logic for append/remove,
Logic for the one case where the edge needs to be left untouched
has been moved to: `bmesh_radial_loop_unlink`.
This is yet another debug option that allows rendering an arbitrary
simulation field by using a color ramp to inspect its voxel values.
Note that when using this, fire rendering is turned off.
Reviewers: plasmasolutions, gottfried
Differential Revision: https://developer.blender.org/D1733
This allows saving a memory copy, which will be particularly useful for network rendering.
Reviewers: sergey, brecht, dingto, juicyfruit, maiself
Differential Revision: https://developer.blender.org/D2323
In scenes with many lights, some of them might have a very small contribution to some pixels, but the shadow rays are traced anyways.
To avoid that, this patch adds probabilistic termination to light samples - if the contribution before checking for shadowing is below a user-defined threshold, the sample will be discarded with probability (1 - (contribution / threshold)) and otherwise kept, but weighted more to remain unbiased.
This is the same approach that's also used in path termination based on length.
Note that the rendering remains unbiased with this option, it just adds a bit of noise - but if the setting is used moderately, the speedup gained easily outweighs the additional noise.
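A minimal sketch of the termination rule described above (plain Python, not the actual kernel code):
```
import random

def maybe_keep_light_sample(contribution, threshold, rng=random.random):
    """Return the (re-weighted) contribution, or 0.0 if the sample is discarded."""
    if threshold <= 0.0 or contribution >= threshold:
        return contribution  # bright enough: always trace the shadow ray
    keep_probability = contribution / threshold
    if rng() < keep_probability:
        # Kept, but weighted up so the estimator stays unbiased:
        # E[result] = p * (c / p) + (1 - p) * 0 = c
        return contribution / keep_probability
    return 0.0  # discarded: shadow ray skipped
```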
Reviewers: #cycles
Subscribers: sergey, brecht
Differential Revision: https://developer.blender.org/D2217
This option allows to create a smoother transition between Bricks and Mortar - 0 applies no smoothing, and 1 smooths across the whole mortar width.
Mainly useful for displacement textures.
The new default value for the smoothing option is 0.1 to give some smoothing that helps with antialiasing, but existing nodes are loaded with smoothing 0 to preserve compatibility.
Reviewers: sergey, dingto, juicyfruit, brecht
Reviewed By: brecht
Subscribers: Blendify, nutel
Differential Revision: https://developer.blender.org/D2230
When using the Normal output of the Texture Coordinate node on Point and Spot lamps, the coordinates now depend on the rotation of the lamp.
On Area lamps, the Parametric output of the Geometry node now returns UV coordinates on the area lamp.
Credit for the Area lamp part goes to Stefan Werner (from D1995).
constraints.
This avoids traversing the archive every time object data is needed and
gives an overall consistent ~2x speedup here with files containing
between 136 and 500 Alembic objects. Also this somewhat nicely
de-duplicates code between data creation (upon import) and data streaming
(modifiers and constraints).
The only worrying part is what happens when a CacheFile is deleted and/or
has its path changed. For now, we traverse the whole scene and for each
object using the CacheFile we free the pointer and NULL-ify it (see
BKE_cachefile_clean), but at some point this should be re-considered and
make use of the dependency graph.
The 'local' layers were not correctly set when redoing 'add object'
addons using object_utils.py helper (we always want to restore layers
from view in local view, even if we set 'real' layers from operator
afterwards).
Issue was happening when removal of custom icons was done while they
were still being rendered by the preview job.
Now add a 'deferred deletion' system, to prevent the main thread from deleting
preview images until the loading thread is done with them.
Note that ideally, calling `ED_preview_kill_jobs()` on custom icon
removal would have been simpler, but we don't have easy access to
the context here...
Oh man, is it a compiler bug? Is it something we do stupid?
For now more crap to prevent crashes. During the conference will talk to
Maxyn about how can we troubleshoot such weird issues.
Basically don't use rcp() in areas which seem to be critical after
a second look. Also disabled some multiplication operators, not sure
yet why they might be a problem.
Tomorrow will be setting up a full test with all cases which were
buggy in our farm to see if this fix is complete.
(also, fix warning regarding const float being written)
You only see the color if you use the "modern" viewport option
(otherwise I believe Blender is drawing the old on top of the new outline).
That said, in the "modern" viewport we have unfreed mem. To be
investigated separately.
There are some precision issues for big magnitude coordinates which started
to give weird behavior in release builds. Some weird memory usage in BVH
which is tricky to nail down because it only happens in release builds and GDB
reports all variables as optimized out when trying to use RelWithDebInfo.
There are two things in this commit:
- Attempt to make vectorized code closer to original one, hoping that it'll
eliminate precision issue.
This seems to work for transform_point().
- A similar trick did not work for transform_direction() even though the absolute
error here is much smaller. For now disabled that function, needs a more
careful look.
Rewrite the current range-tree API used by dyn-topo undo
to avoid inefficiencies from stdc++'s set use.
- every call to `take_any` (called for all verts & faces)
removed an item from and re-added it to the set.
- further range adjustment also took 2x btree edits.
This patch inlines a btree which is modified in-place,
so common resizing operations don't need to perform a remove & insert.
Ranges are stored in a list so `take_any` can access the first item
without a btree lookup.
Since range-tree isn't a bottleneck in sculpting, this only gives minor speedups.
Measured approx ~15% overall faster calculation for sculpting,
although this timing doesn't include GPU updates and depends on how
much the edits fragment the range-tree.
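To illustrate the core idea (ranges kept in an ordered list, with `take_any` touching only the first one), here is a toy Python sketch; the real code inlines a btree in C and is more involved:
```
import bisect

class RangeSet:
    """Toy set of ints stored as sorted, disjoint [start, end] ranges."""

    def __init__(self, start, end):
        self.ranges = [[start, end]]

    def take_any(self):
        first = self.ranges[0]          # no tree lookup needed
        value = first[0]
        if first[0] == first[1]:
            self.ranges.pop(0)          # range became empty
        else:
            first[0] += 1               # shrink in place: no remove + insert
        return value

    def release(self, value):
        i = bisect.bisect_left(self.ranges, [value, value])
        if i > 0 and self.ranges[i - 1][1] + 1 == value:
            self.ranges[i - 1][1] = value
            if i < len(self.ranges) and self.ranges[i][0] == value + 1:
                self.ranges[i - 1][1] = self.ranges[i][1]   # merge touching neighbors
                self.ranges.pop(i)
        elif i < len(self.ranges) and self.ranges[i][0] - 1 == value:
            self.ranges[i][0] = value
        else:
            self.ranges.insert(i, [value, value])
```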
The existing method was fine for basic polygons but didn't scale well
because it was checking all coordinates for every y-pixel.
Here's an optimized version.
The basic logic remains the same; this just maintains an ordered list of intersections,
tracking in-out points, to avoid re-computing every row.
This means sorting is only done when out-of-order segments are found;
the segments only need to be re-ordered if they cross each other.
The speedup isn't linear; a test with a full-screen complex lasso gave an 11x speedup.
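For reference, a generic Python sketch of this kind of scanline fill (not the Blender implementation; helper names are made up):
```
def scanline_fill(polygon, height, set_pixel):
    # Edge records: (y_min, y_max, x_at_ymin, dx_per_scanline); horizontal edges skipped.
    edges = []
    for i in range(len(polygon)):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % len(polygon)]
        if y0 == y1:
            continue
        if y0 > y1:
            (x0, y0), (x1, y1) = (x1, y1), (x0, y0)
        edges.append((y0, y1, x0, (x1 - x0) / (y1 - y0)))
    edges.sort()

    active = []      # per active edge: [current_x, dx_per_scanline, y_max]
    next_edge = 0
    for y in range(height):
        while next_edge < len(edges) and edges[next_edge][0] <= y:
            y_min, y_max, x, dxdy = edges[next_edge]
            if y_max > y:
                active.append([x + (y - y_min) * dxdy, dxdy, y_max])
            next_edge += 1
        active = [e for e in active if e[2] > y]
        # Re-sort only when two intersections swapped order (segments crossed).
        if any(active[i][0] > active[i + 1][0] for i in range(len(active) - 1)):
            active.sort(key=lambda e: e[0])
        # Fill between in/out pairs of intersections.
        for i in range(0, len(active) - 1, 2):
            for x in range(int(active[i][0]), int(active[i + 1][0]) + 1):
                set_pixel(x, y)
        # Advance intersections incrementally instead of recomputing the row.
        for e in active:
            e[0] += e[1]
```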
Totally WIP.
Started with copies of legacy routines, modified to use the new shaders & batch cache. Not all features are implemented; this is why we keep legacy viewport around during development!
Do this for the new dependency graph: handling of OB_UPDATE_TIME in tag update was missing.
Hopefully it's all still correct.
The old dependency graph needs work, but I'm tempted to call it unsupported and move on
to the 2.8 branch.
Several ideas here:
- Optimize calculation of near_{x,y,z} in a way that does not require
3 if() statements per update, which avoids negative effect of wrong
branch prediction.
- Optimization of direction clamping for BVH.
- Optimization of point/direction transform.
Brings ~1.5% speedup again depending on the scene (unfortunately, this
speedup can't be summed across all previous commits because the speedup of
each of the changes varies from scene to scene, but it still seems to
be a nice solid speedup of a few percent on Linux, and a bigger speedup was
reported on Windows).
Once again, thanks Maxym for the inspiration!
Still TODO: We have multiple places where we need to calculate near
x,y,z indices in BVH, for now it's only done for main BVH traversal.
Will try to move this calculation to a utility function and see if
that can be easily re-used across all the BVH flavors.
Similar to the previous commit, avoid negative effect of bad branch prediction.
Gives measurable performance up to ~2% in tests here.
Once again, thanks to Maxym Dmytrychenko!
The idea here is to avoid if statements which could cause wrong
branch prediction.
Gives a bit of measurable speedup up to ~1%. Still nice :)
Inspired by Maxym Dmytrychenko, thanks!
Gawain batches are built on demand while drawing, then kept in this per-DerivedMesh cache.
A mesh's batches try to share vertex buffers as much as possible.
Not sure if this file is the best home for this code, but functions in this file are the only users of the cache. So maybe.
Big part of T49165
Surveying buffer usage & clears for new viewport. Not yet perfect, but closer. Committing from Mac so I can test this on Windows.
Using new matrix API (T49450) for gradient background.
Seems CMake will rearrange and copy libraries which are passed to the linker
when some of the libraries are listed twice (for example, -lz from png libraries
and -l for blender itself). This was causing libopenimageio to be added somewhere
at the end of the linking flags without -ldl following it, which was causing linking
issues.
Removed some of my earlier glActiveTexture calls. After reviewing the
code I now trust that GL_TEXTURE0 is active by default. Fewer GL calls,
same results.
Fixed some misuse of glActiveTexture & glUniformi, mostly my fault.
Caught by --debug-gpu on Windows. Don't know why this appeared to be
working previously!
Plus some easy cleanup nearby.
Similar to the regular triangle intersection case. Gives about 3% speedup rendering
an SSS object on my desktop.
Question: how to avoid such a code duplication in a nice way without speed loss?
This would confuse the hell out of guarded allocators, because it is possible
to have an allocation happen before Blender's guarded allocator is
fully initialized.
This was causing crashes and assert failures when running blender
with fully guarded memory allocator.
Initialization order of global stats and node types was not strictly
defined and it was possible to have node types initialized first and
stats after that. This will zero out memory which was allocated from
the statistics causing assert failure when de-initializing node types.
It was possible to have non-initialized unaligned BVH split
to be used when regular BVH split SAH was inf. Now we ensure
that unaligned splitter is only used when it's really initialized.
It's a regression and should be in 2.78a.
Works great on Mac now. Will test on Windows & Linux (Mesa) tomorrow. Related to T49505
Main fix is glActiveTexture and immUniform1i.
TEXTURE_2D vs TEXTURE_RECTANGLE is now a compile-time option. Both are available starting in GL 3.1 so there's no need for a run-time check.
Removed glClears that I don't think are necessary.
Prevent TEXTURE_2D from creating extra mipmap levels. We only need level 0.
Some minor cleanup: booleans and variable declarations.
Material linking might and does change the way drawObject is calculated,
but does not tag drawObject for recalculation in any way.
Now use the dependency graph to tag the draw object for recalculation. Currently do
this using the OB_RECALC_DATA tag since tagging is not very granular yet. In the
future we can introduce more granular tagging in the new dependency graph
easily.
Simple and safe for 2.78a.
Using a context manager for the output file itself, and a whole try/except block
to at least catch and print errors into the file.
Also some minor tweaks to the previous 'list add-ons' commit.
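Roughly the pattern in question (a hedged sketch; the function name and output format here are made up):
```
import traceback
import addon_utils

def write_addon_list(filepath):
    # The context manager closes the file even when an error occurs.
    with open(filepath, "w", encoding="utf-8") as fh:
        try:
            for mod in sorted(addon_utils.modules(), key=lambda m: m.__name__):
                fh.write("%s\n" % mod.__name__)
        except Exception:
            # At least record the error in the file instead of failing silently.
            fh.write(traceback.format_exc())
```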
Note that volume rendering is not supported yet, this is a step towards that.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2299
Now, the strokes can be locked to a plane set at the cursor location.
This option allows the artist to rotate the view and draw while keeping the
strokes flat over the surface. This option is similar to the surface option
but doesn't need an object.
The option is only valid for the 3D view and strokes in CURSOR mode.
When ED_screen_animation_play is called from wm_event_do_handlers, ScrArea *sa = CTX_wm_area(C); is NULL in ED_screen_animation_timer.
Informing the audio system in CTX_data_main_set, that a new Main has been set.
At the moment this already shows that the depth is the same after the solid plates and in the very end of drawing, while they should be different. Later on we can adapt this to show different buffers we want to debug.
I am using near=0.1, far=2.0 for my tests. I decided not to make a doversion for near/far because this is for debugging only
The active camera has a solid "up" triangle instead of the usual outline. We want to see both sides of this triangle. Disable face culling only when drawing the active camera, not for every camera.
Not really possible to precisely detect all cases in which they should or
should not be active, but at least now it won't show as disabled when it
actually has some effects.
Previously an error message would be printed whenever the OpenCL build produced output.
However, some frameworks seem to print extra information even if the build succeeded, so now the actual returned error is checked as well.
When --debug-cycles is activated, the build output will always be printed, otherwise it only gets printed if there was an error.
- any shader program can use matrix state (not only built-in shaders)
- you can mix matrix & begin/end calls, and the bound shader will use the latest matrix state
Part of T49450 & T49043
This would cause Alembic to throw an exception and fail exporting
animations because it was trying to recreate and overwrite the
attributes for each frame.
Also use the operator as part of the UI keymap now, to deduplicate code and let
users configure a custom shortcut.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2303
Mostly the same as before. Except:
- avoid drawing same lines multiple times
- helper functions take "bool filled" argument instead of GLenum
- drawcamera_volume draws its own near & far planes
widget_draw_text was calculating wrong display length when field is too narrow to show entire input string. Gawain assert caught this 11 function calls away!
Thanks to @ianwill for reporting.
Fixes an assert in drawing code.
Might need further work to support variable-thickness strokes (from pressure-sensitive stylus). This all is due for geometry shader overhaul anyway.
This patch resolves the following warnings:
```
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 764
Warning C4098 'attach_stabilization_baseline_data': 'void' function returning a value blenkernel\intern\tracking_stabilize.c 139
Warning C4028 formal parameter 3 different from declaration blenkernel\intern\cachefile.c 148
Warning C4028 formal parameter 3 different from declaration blenkernel\intern\paint.c 413
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\editderivedmesh.c 591
Warning C4028 formal parameter 3 different from declaration blenkernel\intern\library_remap.c 709
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 754
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 758
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 759
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 763
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 764
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 765
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 769
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\ocean.c 770
Warning C4028 formal parameter 1 different from declaration blenkernel\intern\DerivedMesh.c 3458
```
It's mostly things where the signature in the .h and the actual implementation in the .c do not match, and a bunch of functions which do not match the TaskRunFunction declaration because they leave out the __restrict keyword.
Reviewers: brecht, juicyfruit, sergey
Reviewed By: sergey
Subscribers: Blendify
Differential Revision: https://developer.blender.org/D2268
Blender doesn't necessarily crash when Python doesn't keep references to
the returned strings. As a result, someone that implements this incorrectly
could be lulled into a false sense of correctness by Blender not crashing.
- rename image shaders to describe exactly what they do
- rename inputs to match other built-in shaders
- set & use active texture unit
- no need to enable/disable textures with GLSL
- pull vertex format setup out of loops
Turns out CTX_wm_region returns mostly NULL in wm_manipulatormaps_handled_modal_update. Now properly unsetting area/region data of handlers when deleting an area/region.
Previously the editor would always try to only show UV faces with the exact same active
image or image texture, which is quite difficult to control on production shaders, where
each material can have multiple objects assigned.
The idea of this commit is to bring an option which allows easily controlling what to display
when "Draw Other Objects" is enabled, so currently we can have the old behavior ("Same Image")
or tell the editor to show everything ("All"). In the future we can extend it with such filters
as "Same Material" and things like that.
Hopefully this will help @eyecandy's workflow of texturing.
The issue was caused by a wrong default value for brush particle count,
which was clamped on display from 0 to 1. This is technically a regression,
but how to port this to 2.78a?
The problem here was, as the title says, that the two kernels were swapped.
Since shader evaluation is only used for building the sampling map when World MIS is enabled, rendering without it would still work fine, although baking was also broken.
The previous refactor changed the code to use a separate logging mechanism to support multithreaded compilation.
However, since that's not supported by any frameworks yet, it just resulted in bad logging behaviour.
So, this commit changes the logging to go directly to stdout/stderr once again by default.
Non-power-of-two textures are always allowed. Keeping the disabled checks in the code in case we support OpenGL ES in the future. Even then it should be a compile-time check, not at run-time.
Own mistake in 9a9a663f40. Guessed there is a case where we have to rebuild the tree but everything seemed fine... It didn't work in display modes like "Data-Blocks".
Couple of issues here:
* Missing initialization for 3D view keyframe options for "Reset to Default Theme"
* Alpha values not reset correctly on "Reset to Default Theme"
* Alpha values of timeline keyframe options not reset correctly for old files
Also corrected old version patches even though they're overridden later, to avoid more issues in case people copy this code.
Corrections to d7af7a1e04 and 8d573aa0ec
* LMB now replaces selection instead of adding to it. Shift+LMB adds to selection (or removes if already selected). This is the usual selection behavior in Blender.
* Outliner selection isn't completely separate from object/sequencer-strip/render-layer/... selection anymore, when selecting an outliner item we now always try to select (and activate) the object it belongs to. Previously you had to click the name or icon of an item to select the object (or whatever) and on empty space within the row to set outliner selection.
* Collapsed items may show clickable icons for their children (nothing new). Clicking on such an icon will now also select the hidden item it represents; you'll notice after opening the parent. This is valid from a technical POV, but I'm not sure if it's wanted from a user POV. Changing it would be easy, feedback welcome!
* Code cleanup.
Part of T37430.
GL 3.3 is the new minimum. Compatibility profile for now, core profile eventually. During development, GL 3.0 (on Mesa) and 2.1 (on Mac) will still work.
Part of T49012
Make color values compact. Set color once per primitive. Use new immSkipAttrib to avoid useless color copies.
All of this should make text drawing less CPU hungry.
Now you can explicitly skip a vertex attribute -- you don't give it a value and it won't get a copy of the previous vert's value. Useful for flat interpolated per-primitive values.
This is an advanced feature. Expect garbage in the empty spaces, and copies of garbage if you rely on the attrib copy behavior after skipping.
This was already fast on Apple, but @Severin and @dfelinto noticed slowdowns in user prefs, which is text heavy.
The problem was immBeginAtMost not being smart about VBO write flushing. immBeginAtMost can use all of its allocated range or only a subrange. The previous code was forcing back-to-back draw calls and buffer writes to serialize. This commit lets OpenGL know that our VBO writes never overlap, so there's no need to wait.
Should be much faster now!
1 or 2 draw calls per node instead of 1 per socket (inputs + outputs).
Rearranged draw order so we set uniforms less frequently.
Some style & dead code cleanup.
Part of T49043
Smooth round point with outline (uniform color) and fill (varying color).
Updated shader naming scheme: a shader that doesn't deal with color does not have to say "no color". Vertex shaders do not have to say "uniform color" since their frag counterpart actually has the uniform. Each name should describe what that shader *does*, not what it *doesn't do*.
I use your new point shader to draw the node's socket.
Reviewers: Severin, merwin
Maniphest Tasks: T49043
Differential Revision: https://developer.blender.org/D2286
Some little UI polish to get familiar with outliner code (but also because it's a useful feature). Committing to blender2.8 branch but can also port to master (2.7) if wanted.
This basically causes the mouse hovered element to be highlighted. Contrast of the highlight should be fine, even with a non-default theme. Also did some minor cleanup.
The problem was the function tried to draw a line with only one point. This fix will be replaced by new geometry shaders, but we need it while that change is not ready.
This was giving some speedup but made intersection tests fail
from a watertightness point of view.
Needs deeper investigation, but need to quickly get it fixed for
the studio.
Before this change, the stroke was filled only after the stroke drawing was complete. For the artist it is better to get feedback on the area being filled while drawing, so this commit draws the filling area while drawing.
The triangulation of the stroke is recalculated every time the function is called; using a cache is not useful because the point information is changing all the time while the stroke is being drawn.
Regression from rB69b66d549bcc8, which was supposed to be a non-functional
change, so not sure why the search menu was reduced here? For now, restore
the 2.77 width.
API stays exactly the same.
Attribute names can still be of variable length, as long as the average length does not exceed AVG_VERTEX_ATTRIB_NAME_LEN. Since this includes unused attributes (length = 0) the current avg of 5 might even be too high.
Seems to be a bug in the original implementation of a830280: code was always
using tangent space instead of the UV map because it had the same name. Now
prefer UVMap over tangent because this is how Cycles works. At least it's
closer to it.
Not sure if the save+reload issue is still relevant after this fix, that
needs to be double-checked.
Thanks @dfelinto for looking into the report and simplifying the case.
Should be included into 2.78a.
This allows appending of an entire scene from another blend file into this one,
even when that blend file contains proxified armatures.
This replaces the approach from commit 1cdc54dc7d.
Thanks @sergey for the help.
Column flow layout was abusing ui_item_fit in a weird way, which was
broken for last column items.
Now rather use our own code, which basically spreads the available width as
equally as possible between all columns.
New functions activate & deactivate immediate mode. Call these when switching context and the internal VAO will be handled properly. VAOs are one of the few things *not* shared between OpenGL contexts.
Seems to be rounding error. Hopefully new code handles the error fixed back in
SVN revision 28901 and still have proper frame number for Hjalti.
What could possibly go wrong here..
This gives about 5% speedup on AVX2 kernels (other kernels still
have SSE disabled for math operations) and solves the slowdown
of the koro scene mentioned in the previous commit.
The title says it all actually. This commit also contains
changes to pass float3 as const reference in affected functions.
This should make MSVC happier without breaking OpenCL because it's
only done in areas which are ifdef-ed for non-OpenCL.
Another patch based on inspiration from Maxym Dmytrychenko, thanks!
This commit basically vectorizes existing code using AVX2 instructions
(without modifying algorithm itself). This gives quite nice speedups:
BMW: -8%
Classroom: -5%
Cat: -5%
Koro: +1%
Barcelona: -8%
That's on Linux machine, reported performance improvement on Windows
goes up to 20%.
Not currently sure why Koro is somewhat slower, because it mainly uses
curve intersection tests; could it be timing noise? Or something with the
cache utilization perhaps? In any case the speedup in other scenes makes
me think that the current state is acceptable for an initial implementation.
This is again inspired by Maxym Dmytrychenko.
Based on existing ssef data type and to my knowledge it's also what happens in
Embree nowadays.
Inspired by Maxym Dmytrychenko and required for the upcoming triangle
intersection commit.
Hopefully the copyright message is correct.
When a ray hits a curve segment with an SSS shader it was possible to have
an uninitialized hit_P variable used for sampling.
Seems that was the reason for our headache about the difference between AVX2
and SSE4 render results here, so now we can revert all the nasty
ifdef-ed inline policies.
Was multiplying matrices backward, so concatenation was broken. Fixed!
Also a way to mix legacy matrix stacks with the new library. Just during the transition! Anything within SUPPORT_LEGACY_MATRIX will go away after we switch to core profile.
Part of T49450
Original fix in this area was not really complete (but was the safest at
the release time). Now all the crazy configurations of slots going out
of sync should be handled here.
For now, we merely add an option that sets CXXFLAGS envvar with
'--std=c++11' option.
There is no check done to ensure compatibility with the system
libraries, mainly because:
- It is all but trivial to get this information in a generic and
reliable way.
- Currently even cutting edge distributions may still distribute some c++98
libraries.
- With recent stdlibc++, both ABIs are supported together, which means
that incompatibilities are rather unlikely.
To summarize: if your system is recent and built with gcc-5.1 or newer,
you should not experience too much trouble with c++11.
It was possible to have two viewports opened and start using Ctrl-0
to make different objects an active camera for the viewport. This
worked fine for viewports which had decoupled camera from the scene,
but if the viewport was locked to the scene camera it was possible to run into a
situation where two different viewports were locked to the scene camera but
had different v3d->camera pointers.
Apparently, the whole G.is_break is not used by the OpenGL render, meaning
this flag will not be cleared before running the operator. This was
causing missing file output after pressing Esc once for the rest of the
Blender session.
Reworked logic in the few places that still called this. Deleted the "GLSL not supported" fallbacks.
Also removed some nearby checks for ARB_multitexture and OpenGL 1.1. Blender 2.77 removed checks like this, but game engine still has some.
Built-in shaders now use uniforms instead of legacy built-in matrices. So far I only hooked this up for new immediate mode.
We use the same matrix naming convention as OpenGL, but without the gl_ prefix, e.g. gl_ModelView becomes ModelView.
Right now it can skip the new matrix stack and use the legacy built-in matrices app-side. This will help us transition gradually from glMatrix functions to gpuMatrix functions.
Still some work to do in gpuBindMatrices. See TODO comments in gpu_matrix.c for specifics.
@zeauro reported this issue:
texture2DRect needs the ARB_texture_rectangle extension.
But isn't that an OpenGL 2.1 feature and should be part of GLSL 1.2+?
This should fix it, and future shaders should do something similar.
We were checking the number of tasks from a given pool that were already active, and
then atomically increasing it if allowed - this is not correct, the number
could be increased by another thread between the check and the atomic op!
Atomic primitives are nice, but you must be very careful with *how* you
use them... Now we atomically increase counter, check result, and if we
end up over max value, abort and decrease counter again.
Spotted by Sergey, thanks!
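In pseudo-Python the corrected ordering looks roughly like this (a lock stands in for the hardware atomic here; the point is only the increment-then-check-then-rollback order):
```
import threading

class AtomicCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def add(self, delta):
        with self._lock:          # real code uses an atomic add, not a lock
            self._value += delta
            return self._value

def try_claim_task_slot(counter, max_threads):
    # Increment first, then check the *result* of that increment.
    if counter.add(1) > max_threads:
        counter.add(-1)           # over the limit: roll back and report failure
        return False
    return True
```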
Got rid of GLU and some matrix manipulation. Everything is shader driven now, drawn with point sprites.
Still plenty to do in this file...
Part of T49042 and T49043
Simple convert of drawing emboss lines to new immediate mode.
Part of T49043
Reviewers: merwin
Reviewed By: merwin
Subscribers: dfelinto
Tags: #bf_blender_2.8
Differential Revision: https://developer.blender.org/D2271
Made function categories more clear & added more notes about how to use this API.
immEndVertex is no longer part of the public API.
Minor cleanup & organizing of recent additions.
I changed UI_draw_roundbox_gl_mode to use the immediate API.
The rest of the change is updating the calls to the function.
I also made some changes in UI_ThemeColor4(int colorid), e.g. to make it more convenient to use.
I would really like to know if this is the right way to do it; if yes I will make all the changes in node_draw.c afterwards, otherwise tell me what's wrong and how to deal with the colors instead.
Reviewers: merwin, dfelinto, Severin
Reviewed By: merwin
Subscribers: fablefox, Severin
Tags: #bf_blender_2.8, #opengl_gfx
Maniphest Tasks: T49043
Differential Revision: https://developer.blender.org/D2274
Since the collision modifier cannot be disabled, it causes a constant
hit on the viewport animation playback FPS. Most of this overhead can
be automatically removed in the case when the collider is static.
The updates are only skipped when the collider was stationary during
the preceding update as well, so the state is stored in a field.
Knowing that the collider is static can also be used to disable similar
BVH updates for substeps in the actual cloth simulation code.
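A minimal sketch of the skip logic (illustrative only, with made-up names; the real code works on collision modifier data in C):
```
class ColliderState:
    def __init__(self):
        self.prev_verts = None
        self.was_static = False   # was the collider also static during the previous update?

def collider_update(state, verts, rebuild_bvh):
    is_static = (state.prev_verts is not None and verts == state.prev_verts)
    # Only skip the rebuild when the collider was stationary for two updates in a row.
    if not (is_static and state.was_static):
        rebuild_bvh(verts)
    state.prev_verts = list(verts)
    state.was_static = is_static
```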
Differential Revision: https://developer.blender.org/D2277
@dfelinto reported crash when setting grid subdivisions too low.
Code was setting color twice and Gawain was catching this. Fix is to only set regular grid color when we have regular grid lines to draw. Then emphasized grid lines are free to set their own color further down.
Previously, if the rendering is much faster than saving (for example,
when transcoding stuff via the VSE) it was possible to have 100s of frames
in memory.
This isn't ideal because of the limited amount of RAM, so we need to have
some sort of limit. This is exactly what is implemented in this commit.
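Conceptually it behaves like a bounded producer/consumer queue; a small Python analogy (the limit of 8 frames is an arbitrary example, not the actual value):
```
import queue

frames = queue.Queue(maxsize=8)   # renderer blocks once 8 frames are waiting to be saved

def render_thread(render_frame, frame_range):
    for f in frame_range:
        frames.put(render_frame(f))   # blocks while the queue is full
    frames.put(None)                  # sentinel: no more frames

def save_thread(save_frame):
    while True:
        frame = frames.get()
        if frame is None:
            break
        save_frame(frame)
```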
By the design of the task scheduler it was possible that tasks from somewhere
in the middle of the scheduled list would be handled first.
For example, one thread might be iterating over the scheduled list and
ignore tasks because another thread is working on a task from the
same pool. However, if that other thread finishes its task before the iteration
is over, the current thread will pick up a task from somewhere in the middle
of the list.
This isn't a problem in general case, but for movie rendering we do need
to have strict order of frames.
This commit lands the core backend of the Custom Manipulators project onto the blender2.8 branch. It is a generic backend for managing interactive on-screen controls that can be integrated into any 2D or 3D editor. It's also already integrated into the window-manager and editor code where needed.
NOTE: The changes here should not be visible for users at all. It's really just a back-end patch. Neither does this include any RNA or Python integration.
Of course, there's still lots of work ahead for custom manipulators, but this is a big milestone. WIP code that actually uses this backend can be found in the 'custom-manipulators' branch (previously called 'wiggly-widgets').
The work here isn't completely my own, all the initial work was done by @Antony Riakiotakis (psy-fi) and - although it has changed a lot since then - it's still the same in essence. He definitely deserves a big credit! Some changes in this patch were also done by @Campbell Barton (campbellbarton). Thank you guys!
Merge accepted by @brecht and @merwin.
Patch: https://developer.blender.org/D2232
Code documentation: https://wiki.blender.org/index.php/Dev:2.8/Source/Custom_Manipulator
Main task: https://developer.blender.org/T47343
More info: https://code.blender.org/2015/09/the-custom-manipulator-project-widget-project/
This allows appending of an entire scene from another blend file into
this one, even when that blend file contains proxified armatures.
Since the proxified object needs to be linked (not local), this will
only work when the "Localize all" checkbox is disabled. The appended
proxy object should also not be referenced from anything in a library
(for example in a constraint). Referencing it from the appended data
should be fine.
Fixes T49495.
Pretty much same reason as for the 'from' pointer of shapekeys - runtime
data creating loops and 'ghost' dependencies between datablocks.
We need to handle them in cases like remapping, but shall not take them
into account when checking dependencies between datablocks... :/
Want to avoid updating code we no longer use anyway.
Comments for areas to investigate or deadlines for deletion.
also some minor bool cleanup
Part of T49165
We can't prevent users from using this branch, so I'd say it's reasonable to be a bit careful about what we store to files. In this concrete case we were storing a bit-flag for temporary use (only during early viewport transition) in a bit-field that's saved in files. Doing so would mean we either can't reuse this bit later or we risk breaking files (admittedly, likely in a pretty minor way). Moved the bit-flag to a new bit-field which can be removed later.
Makes it easier to enable it and avoids jumping of the panel when activating/deactivating it (because some panels disappear then). Also changed how panel title is drawn to make it behave like other panels.
We will keep the old system working as long as we can. At the moment even the visibility flags are coming from the old system. That will continue like this until we have decided on the new UI.
The initial idea was to use Ctrl+E to interpolate stroke because this is
similar to Pose breakdown, but the Ctrl+E keymap is used to inverse
grease pencil sculpt effect.
The new keymap is Ctrl+Alt+E in order to fix the conflict
A new option (set in the properties region) allows the user to pick the
"new viewport" for the rendering (in the UI: Modern Viewport).
For now we have a semi-blank file (view3d_draw.c) that can start to take
over the drawing pipeline.
I can't guarantee we will be able to keep both drawing systems working
through the entire 2.8 development, but it should do for now.
also, we can use branches for some of the viewport development, but it's
better to keep things in 2.8 whenever we can, so people can test it.
New recursive iteration over IDs in BKE_library_foreach_ID_link() was
broken by the infamous nodetree case. We cannot really recursively call
this function in that case, so better to defer handling of
non-datablock NodeTrees as if they were real IDs here.
Also fixed initial ID not being stored as handled, in rare cases this
could also lead to infinite looping.
To be backported to 2.78a.
Mostly this is making inlining match CUDA 7.5 in a few performance critical
places. The end result is that performance is now better than before, possibly
due to less register spilling or other CUDA 8.0 compiler improvements.
On benchmarks scenes, there are 3% to 35% render time reductions. Stack memory
usage is reduced a little too.
Reviewed By: sergey
Differential Revision: https://developer.blender.org/D2269
We *always* want to increase mat user count when from Object (and not
Data), because in that case we are moving mat from object to temp
generated mesh, material can never be 'borrowed' in that case.
To be backported to 2.78a
If I didn't miss anything these are indeed not used. Old themes should still work (will only print info on redundant theme defines into console), but updated non-contrib themes already.
Fixes for clamp-omp, fewer shared variables, fix some cases of threads writing
to the same memory location. Issue found by Jens Verwiebe, who reports 30%
speedup with 16 core CPU, when using this with a recent clang-omp version.
Transition away from GLU and legacy matrix stack. Using point sprites eliminated the need for most of the matrix math!
Depends on decent support of large aliased points. NVIDIA is good at this, must test limits on AMD & Intel systems.
Still needs proper scaling based on view zoom.
Part of T49042, touches on T49043 and T49450.
Apparently GL_POINT_SPRITE is important to GL 3.2+ compatibility profile, not just to GL 2.1 as thought.
We'll remove this during the core profile transition.
- Explicitly specify the platform for msbuild, to facilitate builds with just the Visual C++ Build Tools installed.
- When vs2013 is not found, try looking for 2015 as a fallback
- Clear up any batch variables that might have been set from previous runs
This is also an important mathematical operation that can be folded
if it is known that one argument is a certain constant. For colors
the operation is provided as a Gamma node.
The SVM Gamma node needs a small fix to make it follow the 0 ^ 0 == 1
rule, same as the Power node, or the Gamma node itself in OSL mode.
Reviewers: #cycles
Differential Revision: https://developer.blender.org/D2263
At context startup, make sure our assumptions about the OpenGL version are true. Should match since we set up the contexts... but this is what asserts are for, to check "should"s!
Part of T49012
'camera' Object pointer of TimeMarkers is a 'temp' hack since Durian project...
Would need to be either made definitive now, or removed/reworked/whatever.
But since we intend to use that object pointer for other needs, and current code
could lead to crashing .blend files, for now let's fix that mess (was missing
some bits in read code, and also totally ignored in libquery code).
Should be safe for 2.78a.
patch P397 by @lichtwert + minor const by @merwin
Notes from drawvertsN function:
this used to be called twice (once for selected/active, once for unselected -- guess: to avoid state switches[color]?)
this used to be called in a loop, too (subcurves), moved the loop here to avoid multiple init stuff
Part of T49043
It will discard the whole tile, but it's still kind of more friendly than
a fully locked interface (sort of) until the tile is fully sampled.
Sorry if it causes PITA to merge for the OpenCL split work, but this issue
was bothering me a lot when collecting benchmarks.
This change makes it so that when the sequences within a Scene strip are
evaluated, they use the Scene that they come from as the context, as opposed to
the Scene that the Scene strip is in. This is necessary, for example, in the
case of the MulticamSelector where it needs to reference strips in the original
Scene as opposed to the Scene where the Scene strip is located.
Patch by @Matt (HyperSphere), thanks!
Previously it was falling back to just a path after #include
statement was finished. Now we fall back to a proper current
file name after dealing with the preprocessor statement.
Some of the files were wrongly attributing code to some other
organizations and in few places proper attribution was missing.
This is mainly either a copy-paste error (when new file was
created from an existing one and header wasn't updated) or due
to some refactor which split non-original-BF code with purely
BF code.
Should resolve some confusion around this.
- The build folder name used to depend on the order of the parameters; this is now normalized to
"build_windows_[Release/Full/Lite/Headless/Cycles/Bpy]_[x86/x64]_vc[12/14]_[Release/Debug]" regardless of the order of the parameters.
- Use CUDA8 for all kernels when building the release convenience target with Visual Studio 2015
In Windows, the event dispatching code was throwing out the wheel scroll count value.
No matter how fast you move the wheel, it only made a one-notch scroll event.
This patch converts the wheel event into multiple 1-notch wheel events.
This also corrects the handling of smooth-scroll mouse wheels (which can report smaller than 1-notch wheel movement) by accumulating the small wheel delta values.
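The accumulation logic is roughly as follows (a sketch in Python rather than the actual GHOST C++ code; WHEEL_DELTA is the usual Windows value of 120):
```
WHEEL_DELTA = 120   # one notch, as reported by Windows

class WheelState:
    def __init__(self):
        self.accumulated = 0

def handle_wheel(state, raw_delta, send_one_notch_event):
    state.accumulated += raw_delta
    notches = int(state.accumulated / WHEEL_DELTA)   # truncate toward zero
    state.accumulated -= notches * WHEEL_DELTA       # keep the sub-notch remainder
    # A fast flick produces several 1-notch events; smooth-scroll deltas
    # accumulate until they add up to a full notch.
    for _ in range(abs(notches)):
        send_one_notch_event(1 if notches > 0 else -1)
```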
Reviewers: djnz, shadowrom, elubie, #platform:_windows, sergey, juicyfruit, brecht
Reviewed By: shadowrom, elubie, #platform:_windows, brecht
Subscribers: dingto, elubie, brachi, brecht
Differential Revision: https://developer.blender.org/D143
Another case of float imprecision leading to an endless loop. Increasing the 'noise threshold' a bit seems to work OK.
Not a regression, but might be nice to have in 2.78a.
Not sure where this comes from, but code was converting BMEdge* to BMVert* to check oflags,
i.e. not accessing correct memory.
Regression, to be backported to 2.78a.
Apparently the keying sets system doesn't support subclassing
KeyingSetInfo subclasses. I have added a note to the top of the file to
indicate this to future developers.
Regression caused by rBbcc863993ad; write code was assuming dw->ob was always valid,
which is no longer the case e.g. right after reading a file.
Another good example of how bad it is to use 'hidden' dependencies between datablocks. :(
And another fix to be backported to 2.78a. :(((
Previously the pose library used the WholeCharacter key set, which ignores
selection and add keys for almost all bones in the rig. This is a very
slow operation on complex rigs. With this patch, only selected bones are
keyed, defaulting to keying all bones when none are selected.
Note that this fixes the FIXME previously mentioned in the source.
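In bpy terms the behaviour is roughly this (a sketch, not the actual pose-library code; which channels get keyed depends on the rig):
```
import bpy

def bones_to_key(obj):
    selected = [pb for pb in obj.pose.bones if pb.bone.select]
    # Fall back to keying all bones only when none are selected.
    return selected if selected else list(obj.pose.bones)

obj = bpy.context.object
for pbone in bones_to_key(obj):
    pbone.keyframe_insert(data_path="location")
    pbone.keyframe_insert(data_path="scale")
```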
Actually two errors here:
* Properties editor wasn't refreshing on (NC_SCENE | ND_RENDER_OPTIONS) notifiers
* Was using notifier info bits wrongly, needs to send two separate notifiers
Decided to remove ND_RENDER_OPTIONS rather than adding properties editor scene context refresh for it, this is more than a render option change.
Duplicates can happen at UV seams in case of resolution mismatch
or other complications. It's better not to store them in case it
confuses some math later on.
Reviewers: mont29
Differential Revision: https://developer.blender.org/D2261
1. When adding one pixel border to UV islands prefer grid directions.
This is more logical as it means neighbor pixels will commonly
share a border, as opposed to just corners.
2. Don't subtract 0.5 when converting float UVs to int in computing
adjacency across a UV seam. Pixels cover a square with corners
at int positions and adding/subtracting 0.5 is only for dealing
with the center point of a pixel.
3. Use the neighbour_pixel field from the correct pixel, and check
it's not back to the origin point.
4. In the connected UV case, ensure that the returned index actually
refers to a valid active pixel.
This fixes paint spread not traversing some UV seams, while at the
same time spreading to random unrelated points on other seams.
The first problem is primarily fixed by 2, while the second one
is addressed by item 4.
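For item 2, a minimal sketch of the intended conversion (my own illustration, not code from the patch): pixels cover the square [x, x + 1) x [y, y + 1), so a float UV maps to a pixel index with a plain floor, without the 0.5 offset that only belongs to pixel-center math.

```
#include <math.h>

/* Convert a float UV coordinate to the integer index of the pixel that covers it. */
static void uv_to_pixel_index(float u, float v, int width, int height, int *x, int *y)
{
	*x = (int)floorf(u * (float)width);
	*y = (int)floorf(v * (float)height);
}
```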
Differential Revision: https://developer.blender.org/D2261
Moved that code back and forth a few times while creating the rB776a8548f03a fix,
and ended up forgetting to update it correctly for the function where it finally landed.
Last-minute fixes are never a good thing... Now we already have a real reason for a 2.78 'a' release :(
Code was not getting a correct boundbox in some cases (e.g. a group only instancing other groups), now
we compute our own bbox in those cases.
Based on a patch by @lichtwerk, but extended the fix to include linked datablocks in some cases
(linked objects in local groups, linked objects in a local scene, etc.), this was also broken in existing code.
Reviewers: mont29
Subscribers: duarteframos
Differential Revision: https://developer.blender.org/D2257
Regression from rBa4a968f: we would adjust the current point's wetness without actually protecting it
in the new multi-threaded context, leading to a concurrent access mess.
Now delay applying the wetness reduction to the current point until the end of the function, which allows us to avoid
having to lock the current point together with its neighbor (and reduces spinlock waiting too).
To be backported to 2.78a.
Never call a function that might recompute a DM in an RNA itemf callback (or any UI-related func in general)!
There was an XXX comment asking if this was OK - well, no, it was not. :P
Could be ported back to some 2.78 flavour should we need it.
WARNING! Full build is broken, alembic has not been merged in correctly and has some references to particle stuff.
Don't have time to tackle this now (and probably would be better if someone knowing what he's doing does it anyway).
Conflicts:
release/scripts/startup/bl_ui/properties_particle.py
source/blender/blenkernel/intern/library_remap.c
source/blender/blenkernel/intern/smoke.c
source/blender/editors/physics/particle_object.c
source/blender/editors/physics/physics_intern.h
source/blender/editors/physics/physics_ops.c
source/blender/editors/space_outliner/outliner_intern.h
source/blender/editors/space_view3d/drawvolume.c
source/blender/makesrna/intern/rna_smoke.c
Was only happening with the new dependency graph.
The issue here is that the scene's depsgraph layers will be 0 unless
it was ever visible. Worked around by checking for a 0 layer in the
update_tagged of the new depsgraph. This currently kind of follows the
logic of visible_layers, but is weak.
Committing so the studio is unlocked here, will re-evaluate this later.
Now using the new system dedicated to that kind of case, id_ensure_real_user(), instead.
That way, the usercount of Scenes is handled correctly at deletion time.
Reported by @sergey over IRC, thanks.
The light sampling functions calculate light sampling PDF for the case that the light has been randomly selected out of all lights.
However, since BPT handles lamps and meshlights separately, this isn't the case. So, to avoid a wrong result, the code just included the 0.5 factor in the throughput.
In theory, however, the correction should be made to the sampling probability, which needs to be doubled. Now, for the regular calculation, that's no real difference since the throughput is divided by the pdf.
However, it does matter for the MIS calculation - it's unbiased both ways, but including the factor in the PDF instead of the throughput should give slightly better results.
Reviewers: sergey, brecht, dingto, juicyfruit
Differential Revision: https://developer.blender.org/D2258
Complete (for our needs) 2D & 3D transformation API. Should be easy to port legacy OpenGL matrix stack-based code to this. Still needs testing.
Ported ortho, frustum, lookAt functions from Viewport FX (rB194998766c65). Kept plenty of Viewport FX code from previous commit.
Stack API and 2D routines ported from Gawain. This version uses BLI_math library so everything is licensed under GPL instead of the usual MPL.
Part of T49450
- WITH_SMOKE macro was not defined so some code was not compiled, though
it was still accessible from the UI
- some UI elements were disappearing due to bad indentation; also reworked
the UI code to not hide but rather disable/grey out buttons in the UI
- Display thickness was not used due to bad manual merge of the code
from the patch.
mul_m4_m4m4(R, A, B) gives us R = AB in general. The existing code assumed the worst: that A and B both alias the output R. For safety it made internal copies of A and B before calculating & writing R.
This is the least common case. Usually all 3 matrices differ. Often we see M = AM or M = MB, but never M = MM.
With this revision mul_m4_m4m4 is called in exactly the same way, but it copies inputs only when needed. If you know the inputs are independent of the output, use the "uniq" variant to skip the safety checks.
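A minimal sketch of the idea (illustrative only; const qualifiers and exact details differ from the real BLI_math code):

```
#include <string.h>

void mul_m4_m4m4_uniq(float R[4][4], float A[4][4], float B[4][4]);  /* no aliasing allowed */

void mul_m4_m4m4(float R[4][4], float A[4][4], float B[4][4])
{
	float A_tmp[4][4], B_tmp[4][4];

	/* Only pay for a copy when an input actually aliases the output. */
	if (A == R) {
		memcpy(A_tmp, A, sizeof(A_tmp));
		A = A_tmp;
	}
	if (B == R) {
		memcpy(B_tmp, B, sizeof(B_tmp));
		B = B_tmp;
	}
	mul_m4_m4m4_uniq(R, A, B);  /* inputs are now independent of R */
}
```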
This basically exposes to the UI a function that was only available
through a debug macro; the purpose is obviously to help debugging
simulations. It adds ways to draw the vectors either as colored needles
or as arrows showing the direction of the vectors. The colors are based
on the magnitude of the underlying vectors.
Reviewers: plasmasolutions, gottfried
Differential Revision: https://developer.blender.org/D1733
Current approach uses view aligned slicing to generate polygons for GL
texturing such that the generated polygons are always facing the view
plane. Now it is also possible to use object aligned slicing, which
creates polygons by slicing the object perpendicular to whichever axis
is facing the view plane the most. It is also possible to create a
single slice for inspecting the volume, or for 2D rendering effects.
Settings for this, along with a density multiplier setting, are to be
found in a newly added "Smoke Display Settings" panel in the smoke
domain properties tab.
Reviewers: plasmasolutions, gottfried
Differential Revision: https://developer.blender.org/D1733
Using context.active_gpencil_brush to access the active Grease Pencil brush
would result in a crash if trying to rename the brush, because the "ID" pointer
was not set.
To be backported to 2.78
This is something that was guaranteed in give_current_material(), just
copied some range checking logic from there.
Not sure what a proper fix would be here though.
In fact, it was the whole remapping process that was broken in logic bricks area,
due to terrible design of links between those bricks...
Object copying was also broken in that case, fixed as well.
To be backported to 2.78.
Note that the issue was actually probably there for ages, hidden behind dirty hacks
used in the previous append code (though likely visible in some corner cases).
Listen kids: do not, never, ever, do what has been done for links between logic bricks. Never. Ever.
Even as pure runtime data it would have been bad, but as stored data...
Optimization attempt with BKE_library_idtype_can_use_idtype() was not taking into account
the fact that drivers may link virtually against any datablock...
Has to be rethought, but that's for after the 2.78 release; this commit is safe to backport.
Regression caused by rBb27ba26: we would always tag datablocks for update in G.main,
ignoring the given bmain; now always use the latter instead.
To be backported to 2.78.
This commit allows RNA properties to return additional info on their editable state which may then be displayed in tooltips. To show how it works, it also adds some info for the editable check of proxies. For generally un-editable properties or properties of a linked data-block, RNA returns default strings.
| {F362785} | {F362786} | {F362787} |
Reviewed by brecht, thanks!
Differential Revision: https://developer.blender.org/D2243
Drawing used the colors for select (TH_EDGE_SELECT/TH_VERTEX_SELECT), which was inconsistent with crease, seam, sharp, ... (which all have their own theme color), and was also a bit hard to read.
NOTE: UI team usually doesn't allow adding more theme options, this is an exception.
Differential Revision: https://developer.blender.org/D2234
It's now possible to change the shortcut for invoking the eyedropper while hovering a button (E by default). Also removed the keymap editor entry for the modal eyedropper keymap, it's now automatically appended to the eyedropper shortcut.
This patch changes a couple of things in the video output encoding.
{F362527}
- Clearer separation between container and codec. No more "format", as this is
too ambiguous. As a result, codecs were removed from the container list.
- Added FFmpeg speed presets, so the user can choose from the range "Very
slow" to "Ultra fast". By default no preset is used.
- Added Constant Rate Factor (CRF) mode, which allows changing the bit-rate
depending on the desired quality and the input. This generally produces the
best quality videos, at the expense of not knowing the exact bit-rate and
file size.
- Added optional maximum of non-B-frames between B-frames (`max_b_frames`).
- Presets were adjusted for these changes, and new presets added. One of the
new presets is [recommended](https://trac.ffmpeg.org/wiki/Encode/VFX#H.264)
for reviewing videos, as it allows players to scrub through it easily. Might
be nice in weeklies. This preset also requires control over the
`max_b_frames` setting.
GUI-only changes:
- Renamed "MPEG" in the output file format menu with "FFmpeg", as this is more
accurate. After all, FFmpeg is used when this option is chosen, which can
also output non-MPEG files.
- Certain parts of the GUI are disabled when not in use:
- bit rate options are not used when a constant rate factor is given.
- audio bitrate & volume are not used when no audio is exported.
Note that I did not touch `BKE_ffmpeg_preset_set()`. There are currently two
preset systems for FFmpeg (`BKE_ffmpeg_preset_set()` and the Python preset
system). Before we do more work on `BKE_ffmpeg_preset_set()`, I think it's a
good idea to determine whether we want to keep it at all.
After this patch has been accepted, I'd be happy to go through the code and
remove any then-obsolete bits, such as the handling of "XVID" as a container
format.
Reviewers: sergey, mont29, brecht
Subscribers: mpan3, Blendify, brecht, fsiddi
Tags: #bf_blender
Differential Revision: https://developer.blender.org/D2242
For some reason (which I can't recall), baking was doing backface
culling. Since Cycles itself doesn't ignore them (nor does Blender
Internal), they should be visible.
Steps to reproduce:
* Go to modifier context in properties editor
* Add modifier, collapse it
* Press down LMB over collapse button of modifier, hold it
* Drag over pin-icon in properties editor (to keep fixed data-block displayed)
* Drag outside of window bounds (should crash)
Also could've been solved by getting the space data from callback arguments instead of context, but this fix is much nicer (though not totally un-risky).
It is actually a redundant cast since Blender uses -funsigned-char, however I think it's fine to be explicit about it in new code, so the cast is required to make the compiler happy. Am not a fan of -funsigned-char anyway...
Do not close and re-open the file in case it's compressed; the gzip module can now directly take a file object as a parameter.
Differential Revision: https://developer.blender.org/D2235
There might be some extra missing points here, but it's all rather
a TODO than a real bug and can be tweaked further once issues are
actually discovered.
We raised the minimum to GL 2.1 in Blender 2.77, and dropped support for older GPUs (pre-2012 Intel mostly). On Windows you get a popup message, but on Mac we simply crashed. Every Mac has a builtin software renderer for GL 2.1 so let's use that when the GPU is not capable!
Run blender --debug-gpu to see version detection & software fallback.
Quick fix for now, need to unlock the studio here as well.
Proper fix would be to modify the API a bit and pass flags which would
prevent expand being called on bmain, perhaps. But this we should discuss
a bit first.
We were calling BLI_remlink and then BLI_insertlinkbefore/after quite often. BLI_listbase_link_move simplifies code a bit and makes it easier to follow. It also returns if link position has changed which can be used to avoid unnecessary updates.
Added it to a number of list reorder operators for now and made use of return value. Behavior shouldn't be changed.
Also some minor cleanup.
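A hedged usage sketch (the headers and notifier chosen here are only illustrative):

```
#include "BLI_listbase.h"
#include "WM_api.h"
#include "WM_types.h"

/* Move 'link' one step up (-1) or down (+1) in 'list'; only send an
 * update when BLI_listbase_link_move reports that the position changed. */
static bool reorder_link(struct bContext *C, ListBase *list, void *link, int direction)
{
	if (!BLI_listbase_link_move(list, link, direction)) {
		return false;  /* already at the start/end of the list, nothing to do */
	}
	WM_event_add_notifier(C, NC_WINDOW, NULL);  /* placeholder notifier */
	return true;
}
```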
Was spawning error popup each time user tried to move a stroke higher or lower than the list allowed. We don't do that anywhere else and it's not really useful info for the user. So rather not bother her.
The idea here is to select the lowest isolation level that won't compromise quality.
By using the lowest level we save memory and processing time. This will also
help avoid precision issues that have been showing up from using the highest
level (T49179, T49257).
This is a pretty simple heuristic that gives ok results. There's more we could
do here, such as filtering for vertices/edges adjacent to geometric features that
need isolation instead of checking them all, but the logic there could get a
bit involved.
There's potential for slight popping of edges during animation if the dice
rate is low, but I don't think this should be a problem since low dice rates
really shouldn't be used in animation anyways.
Reviewed By: brecht, sergey
Differential Revision: https://developer.blender.org/D2240
Changed drawing to use smooth lines, and to fade away when axis points toward / away from screen. (transform manipulators do this already)
Also fixed a nearby (but unrelated) missing immUnbindProgram.
Part of T49043
Ignore texture matrix in the shader, stop messing with texture matrix in BLF code.
Use linear screen-space interpolation instead of perspective.
Avoid redundant call to glMatrixMode.
With USE_GLSL enabled, GPU_basic_shader(TEXTURE|COLOR) always rendered black. New shader uses a solid color + alpha channel of texture (which in our case is a font glyph). See fragment shader for details.
I prefer this approach -- multiple shaders that each do one thing well (and are easy to read/write/understand), instead of one shader that can do many things given the right options.
Crashes occurred immediately when clicking on "OpenGL render image" because previously a task pool was only created when rendering an animation. Solved it by introducing an is_animation variable in the OpenGL render and omitting the task_pool call when it's not an animation.
@sergey: Please check my changes, moved the pool_ok and the lock into the is_animation clause.
Buildbot machine was updated to the new SDK which seems to have
QTKit removed.
Until we've installed an older SDK or ported our code to a new
AVFramework, QuickTime is disabled.
These may be exposed in UI (keymap editor & redo panel), so better avoid using identifiers like "UP" "DOWN". They are redundant anyway (already displayed).
Replace the W shortcut for subdivision by a new menu for edit specials
in order to keep consistency in UI.
Subdivision is not used all the time, so it's better to assign this
shortcut to the menu.
In some situations the artist needs to subdivide a stroke that was created with
few points, especially for sculpting.
The subdivision is done for any pair of continuous selected points in
the same stroke.
The operator can be activated in edit mode with W key and has a
parameter for number of cuts.
The idea is to have a dedicated thread which is responsible for all the
file writing, so a slow disk will not slow down
OpenGL itself.
Gives really nice speedup around 1.5x when exporting barber shop layout
file to h264 video.
any object
There were a couple of crashes caused by stupid typos in
rB631af9f930d2fd2c76751204ff22239aa95f761d and
rB78ea06fea4a74181c25254ed72d50d8a743b6954, as well as a shameful lack
of 'testing before committing', which only affects exporting.
One crash was due to using RNA_boolean_get instead of RNA_enum_get, the
other one was a tricky case of order of deletion happening in the
destructors of AbcExporter and ArchiveWriter.
Should not affect RC or release.
Fixed compile error in debug build (thanks mont29)
Renamed some functions for consistency.
New features:
Create a Batch with immediate mode! Just use immBeginBatch instead of immBegin. You can keep the result and draw it as many times as you like. This partially replaces the need for display lists.
Copy a VertexFormat, and create a VertexBuffer using an existing format.
Resize a VertexBuffer to a different number of vertices. (can only resize BEFORE using it to draw)
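A hedged usage sketch of the immediate-mode Batch idea (exact signatures and the final draw call are assumptions on my part):

```
/* Record the geometry once, with the usual immediate-mode calls... */
Batch *geom = immBeginBatch(PRIM_TRIANGLES, vertex_ct);
/* ... immAttrib* / immVertex* calls, exactly as after a regular immBegin ... */
immEnd();

/* ...then keep 'geom' around and draw it as many times as needed via the
 * Batch drawing routine, instead of re-submitting vertices every frame. */
```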
Was already done for immediate mode, but rearranged code to make a clean separation. Cleaned up #includes for code that uses this feature.
Added same for batched rendering.
Follow-up to rBddb1d5648dbd
API is nearly complete but untested.
1) create batch with vertex buffer & optional index buffer
2) choose shader program
3) draw!
It was annoyingly slow to do a roundtrip from a byte OpenGL render to
a float render result and back to a byte image format (which is used
in 99% of cases for the OpenGL previews).
Now we use the render result's rect32 to store the render result, which is
already supposed to be in the display space.
Gives about 30% speed improvement for OpenGL previews here.
Previously, converting from linear space to sRGB was doing a rather
slow inverted 1D lookup. Adding an explicit inverse LUT gives a 20%
speedup of the OpenGL render.
Next question is: why do we even bother with sRGB conversion here?
OpenGL is already in the proper space, so in theory we can avoid
quite some color space conversions. In any case, having this case
optimized is nice anyway.
New features:
1) Release target that checks for both cuda 7.5 and 8 with WITH_CYCLES_CUDA_BINARIES=ON and CYCLES_CUDA_BINARIES_ARCH=sm_20;sm_21;sm_30;sm_35;sm_37;sm_50;sm_52;sm_60;sm_61 options set.
2) Option to switch between x86 and x64 builds, the default remains (auto detect the architecture) but can be overridden.
3) Option to switch between vs12(2013) and vs14(2015) default is 2013.
Reviewers: juicyfruit, sergey
Reviewed By: sergey
Tags: #platform:_windows
Differential Revision: https://developer.blender.org/D2180
This function modifies the GL program object, which reduces our ability to share a shader among meshes with different vertex formats. Recommended approach is to use get_attrib_locations.
It was quite weak to consider all scripted expressions to be time-dependent.
The current solution is somewhat better but still crappy. Not sure how we can make
it really nice.
Stupid mistake wrapping path validation code inside a BLI_assert, which means it was
only called in Debug builds...
Found by Sergey, thanks.
Should be backported to 2.78.
Those 'never null' ID pointers are really a PITA to handle... luckily we don't have much of those around!
Found by Sybren, thanks.
Should be backported to 2.78.
This is an internal pointer helper for scene evaluation and tools; though exposed to the bpy API,
it can give false 'dependency cycles' in bpy.data.user_map() results.
That's a followup to rBe007552442634 really; both should be backported to 2.78.
This is an internal pointer helper for scene evaluation and tools; it's not exposed to the bpy API anyway,
and can give false 'dependency cycles' in bpy.data.user_map() results.
Found by sybren in his Splode work.
Uses a similar way of storing temp data as object copy/paste, just
uses a different read entrypoint which does not modify the current bmain.
This gives the ability to easily copy-paste poses from one Blender instance to
another one.
Hopefully doesn't introduce user-measurable differences.
Request from Peer here in the studio.
Reviewers: mont29
Reviewed By: mont29
Subscribers: hjalti, fsiddi
Differential Revision: https://developer.blender.org/D2229
This reverts commit ecbfa31caa.
Original commit broke logic in nodes re-fitting. That area can
access non-existing children momentarily. Not sure what the best
solution would be here, for now simply reverting the change.
Problem was zero length normal caused by a precision issue in patch evaluation.
This is somewhat of a quick fix, but is better than allowing possible NaNs to
occur and cause problems elsewhere.
Both spot and area lights have large areas where they're not visible.
Therefore, this patch stops the light sampling code when one of these cases (outside of the spotlight cone or behind the area light) occurs, before the lamp shader is evaluated.
In the case of the area light, the solid angle sampling can also be skipped.
In a test scene with Sample All Lights and 18 Area lamps and 9 Spot lamps that all point away from the area that the camera sees, render time drops from 12sec to 5sec.
Reviewers: brecht, sergey, dingto, juicyfruit
Differential Revision: https://developer.blender.org/D2216
Small issues in GHOST
- use NSApplicationDelegate protocol for our app delegate
- make sure NSApp is initialized before using
(cherry picked from commit df7be04ca6)
Regression from rB036c006cefe471. We can't use self here, self is bpy.app, not pydescriptor of python path getsetter...
So for now, do not try to replace getsetter by actual value in bpy.app's dict,
just return static var generated on first run.
Should be safe for 2.78.
I) The filename was not put into the temp Main generated to save selected data only;
this was breaking readcode when trying to open the partial file, leading to a missing
filename in the final loaded Main data.
II) Read code would confuse partial .blend files with Undo ones, when they had no screen in them
(which happens to 99.999% of partial .blend files I guess).
Reported by @sybren, thanks.
Should be safe enough for 2.78 release.
The title says it all actually. From tests with the barber shop scene here it
gives a 2-3x speedup for shader compilation on my oldie i7 machine. The
gain is mainly due to the textures metadata query from jpeg files (which
seems to require de-compression before metadata can be read). But in
theory it could give nice improvements for scenes with huge node trees
as well (I'm talking about node trees of the complexity of fractals, which
we had reports about in the past).
Reviewers: juicyfruit, dingto, lukasstockner97, brecht
Reviewed By: brecht
Subscribers: monio, Blendify
Differential Revision: https://developer.blender.org/D2215
The issue was caused by some false-positive empty non-AABB intersection.
Tried to tweak it a bit so it does not record intersection anymore.
Hopefully will work for all platforms. Tested here on iMac and Debian.
Vertex Buffer to store vertex attribute data.
Element List (AKA Index Buffer) to select which vertices to use.
Batch combines these into an object that can be built once then drawn
many times.
Porting over from the C++ version… Most of this C code is compiled but
unused. Some of it is not even compiled. Committing now in case I’m
lost at sea.
Put Gawain source code in a subfolder to make the boundary between the
library and the rest of Blender clear.
Changed Gawain’s license from Apache to Mozilla Public License. Has
more essence of copyleft — closer to GPL but not as restrictive.
Split immediate.c into several files so parts can be reused (adding
more files soon…)
The idea is to allow certain animation channels to be always visible in
animation editors. So, for example, one can pin Camera animation to the
editor so it is always possible to refine/tweak camera animation when
animating something else in the scene.
There is probably some more polishing required, and some current
limitations could be solved in the future but should be a good starting
point already.
Currently this only works for objects, without recursing into deeper datablocks
(so for example, it's not possible to pin object material animation).
Studio request by Colin Levy.
Basically just moves cached kernels from ~/.config/blender/BLENDER_VERSION to
~/.cache/cycles/kernels. This has following benefits:
- Follows XDG specification more closely,
not as if it's totally crucial or measurable by users, but still nice.
- Prevents unexpected sizes of the config folder, makes the disk space used more
predictable for users.
- Allows sharing kernels across multiple Blender versions,
which makes debugging easier at the times close to release.
- The "Copy Previous Settings" operator will no longer copy possibly
gigabytes of cached kernels, which used to lead to really nasty disk usage
and annoying delays when copying settings.
- In the future we can have some smart logic to clear old unused cached
kernels.
Currently only done for Linux and OSX. Windows still follows old "cache"
folder logic, but it's not really important for now because we don't
support kernel compilation on this platform yet.
Reviewers: dingto, juicyfruit, brecht
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2197
Constant folding was removing all nodes connected to the displacement output
if they evaluated to a constant, causing there to be no valid graph for
displacement even when there was displacement to be applied, and sometimes
caused crashes.
Using one's complement for detecting if a transform has been applied was confusing
and led to several bugs. With this proper checks are made.
Also added a few transforms where they were missing, mostly affecting baking
and displacement when `P` is used in the shader (previously `P` was in the
wrong space for these shaders).
Also removed `TIME_INVALID` as this may have resulted in incorrect
transforms in some cases.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2192
Bump mapping was happening in world space while displacement happens in object
space, causing shading errors when displacement type was used with bump mapping.
To fix this the proper transforms are added to bump nodes. This is only done
for automatic bump mapping however, to avoid visual changes from other uses of
bump mapping. It would be nice to do this for all bump mapping to be consistent
but that will have to wait till we can break compatibility.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2191
Now the factor works similarly to other Blender areas, to make the factor
more consistent for artists. The value 0% means equal to the original
stroke, 100% equal to the final stroke (50% means half way). Any value below
0% or greater than 100% creates an overshoot of the stroke.
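A minimal sketch of that mapping (my own illustration): a plain linear interpolation, which naturally overshoots outside the 0..1 range.

```
/* factor 0.0 -> original stroke, 1.0 -> final stroke, 0.5 -> half way;
 * values outside [0.0, 1.0] extrapolate, i.e. overshoot the stroke. */
static float interp_stroke_value(float value_orig, float value_final, float factor)
{
	return value_orig + factor * (value_final - value_orig);
}
```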
The existing code uses the input value count of the first channel
for all of them. If the first channel is the largest, it leads to
a crash-causing buffer overrun in memcpy below. Likely this was
left since the time when only one channel was supported.
As a crash fix, probably should go into 2.78
This commit changes two things:
* It adds more keysyms preferably taken from XLookupKeysym rather than XLookupString (namely, all numpad ones).
* It falls back to keysyms from XLookupKeysym in other cases, when XLookupString does not produce anything we know of.
Finding the correct balance here is far from easy, but I think we are coming rather close to it now...
Root of the issue is that the active render index became wrong. This is the actual
thing to be fixed, but as usual this is quite tricky to reproduce. Since such a
bad situation might happen again and the fix isn't really difficult or
intrusive, let's avoid the crash for now.
Can be revisited once we figure out root of the issue.
Nice for 2.78 release.
Own fault in the new ID management work: thought rebuilding the DAG itself was
enough to actually update the whole scene, but we actually need to tag datablocks
for update as well, when we change (or remove) one of their ID pointers...
OCD commit, but cleans the code a bit:
- the first `if 0` block was supposed to draw collision objects but is
vastly outdated as most of the SmokeCollisionSettings member variables
were removed a few years ago and collision objects are drawn like other
objects anyway. Also it was committed already commented out back in
2009.
- the second `if 0` block was doing pretty much the same thing as the
few lines above it.
Most of the time, Lamps in Cycles are just a constant emission closure, no texturing etc. Therefore, running a full shader evaluation is wasteful.
To avoid that, Cycles now detects these constant emission shaders and stores their value in the lamp data along with a flag in the shader.
Then, at runtime, if this flag is set, the lamp code just uses this value and only runs the full shader evaluation if it is necessary.
In scenes with a lot of lamps and with "Sample all direct/indirect" enabled, this saves up to 20% of rendering time in my tests.
Reviewers: #cycles
Differential Revision: https://developer.blender.org/D2193
For proper indexing to work we need to use unaligned node with
identity transform instead of aligned nodes when doing refit.
To be backported to 2.78 release.
This is really a hack-fix actually, not sure why `get_pointcache_keys_for_time()` seems to assume
it will always find a key for the given part index, at least for the current frame, and whether this assumption
is wrong or whether the bug happens elsewhere...
Anyway, this is to be wiped out in 2.8, so no point losing too much time on it, for now merely
returning unchanged (i.e. zero'ed) ParticleKeys in case index2 is invalid. Won't hurt anyway:
even if this did not crash in release builds, it would be returning gibberish values.
Weirdly enough, this version of XCode seems to have static_assert()
even when NOT using C++11. This is totally weird and counter-intuitive
since static_assert() is supposed to be a C++11-only feature.
Can XCode stop using future, please? :)
The two SVM nodes added with e7ea1ae78c caused a slowdown on AMD cards when rendering with OpenCL, whether displacement was used or not.
In the Barcelona Pavillon scene on a RX480, this would cause a 12% slowdown.
Therefore, this commit adds an additional flag for feature-adaptive compilation so that the new SVM nodes are only enabled when they are needed (node tree connected to the Displacement output and Displacement type set to Both).
Also, the nodes were added to shaders when the Displacement Type was set to Bump (the default), which was unnecessary and is fixed now.
Thanks to linda2 on IRC for reporting and testing and to maiself for help with the displacement shader code.
This fix might be relevant for 2.78, but it should be tested further before including it.
When drawing with Grease Pencil "continuous drawing" for a long time
(i.e. basically, drawing a very large number of strokes), it could be
possible to cause lower-specced machines to run out of RAM and start
swapping. This was because there was no limit on the number of undo
states that the GP undo code was storing; since the undo states grow
exponentially on each stroke (i.e. each stroke results in another undo
state which contains all the existing strokes AND the newest stroke), this
could cause issues when taken to the extreme.
See commit's comments for details, but this boils down to: do not try to use
purely runtime cache data as a 'real' ID pointer in readcode, it's likely
doomed to fail in some cases, and is bad practice in any case!
This fix implies the dupliweight's object will be invalid until the first scene update
(i.e. first particles evaluation).
Two new modal operators to create a grease pencil interpolate drawing
for one frame or a complete sequence between two frames. For drawing
the temporary strokes in the viewport, two drawing handlers have been
added to manage 3D and 2D stuff.
Video: https://youtu.be/qxYwO5sSg5Y
The operator shortcuts are Ctrl+E and Ctrl+Shift+E. During the modal
operator, the interpolation can be adjusted using the mouse (moving
left/right) or the mouse wheel.
immBegin requires us to know how many vertices will be drawn. Most times this is fine, but sometimes it can be tricky. Do we make the effort to count everything in one pass, then draw it in a second?
immBeginAtMost makes this simple. Example: I'll draw at most 100 vertices. Supply only 6 verts and it draws only 6.
Any unused space is reclaimed and given to the next immBegin.
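A hedged usage sketch (attribute setup omitted; the helper names around the imm* calls are made up):

```
/* Reserve room for up to 'max_points' vertices, emit only the visible ones. */
immBeginAtMost(PRIM_POINTS, max_points);
for (int i = 0; i < max_points; i++) {
	if (point_is_hidden(i)) {
		continue;  /* emitting fewer vertices than reserved is fine */
	}
	immVertex2f(pos, points[i][0], points[i][1]);
}
immEnd();  /* unused space is reclaimed and given to the next immBegin */
```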
Was using GL_NONE to mean "no primitive" but GL_NONE and GL_POINTS are both defined as 0x0000.
Introducing PRIM_NONE = 0xF which does not clash with any primitive types.
Could not reproduce it here, but since users having the issue claim it comes from
rB16cb9391634dcc50e, let's try to use the ugly `XLookupKeysym()` again for those modifier keys too...
Cpack generates a standard filename with git information in it, which might not always be wanted for release builds, this patch adds an option to override that default filename.
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Differential Revision: https://developer.blender.org/D2199
Amended to fix: wrong variable name in main CMakeLists.txt
This reverts commit 9444cd56db.
This commit caused some flickering in the bones when swapping IK to Fk.
While it's unclear why such change caused any regressions, let's revert
it to unlock the studio.
Cpack generates a standard filename with git information in it, which might not always be wanted for release builds, this patch adds an option to override that default filename.
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Differential Revision: https://developer.blender.org/D2199
Based on reading documentation around. This particular version
is based on the ImageMagick documentation, which can be found
here:
http://www.imagemagick.org/Usage/filter/
http://www.imagemagick.org/Usage/filter/nicolas/
Current filter is based on measuring mean error with the current
splash screen and choosing combination of parameters which
gives minimal mean error.
Fix cast shadows options (in material tab) not working in the viewport.
An off-by-one error. See D2194 for more.
Committing for Ulysse Martin (youle) who found & fixed this.
In User Preferences, Properties Editor and toolshelf, Ctrl+Tab and Ctrl+Shift+Tab now activates the next or previous space context (or category in case of toolshelf tabs), respectively.
For Properties Editor such functionality was completely missing, only toolshelf allowed cycling using ctrl+mousewheel (or only mousewheel while hovering tab region). Ctrl+Tab and Ctrl+Shift+Tab are common web browser shortcuts, so they're a reasonable choice to go with.
Reaching the first/last item doesn't cause the cycling to stop, we continue at the other end of the list then. (I didn't add this to Ctrl+Mousewheel toggling in toolshelf since I wanted to keep its behavior unchanged.)
We could get rid of (Ctrl+)Mousewheel cycling in toolshelf, but this may break user habits.
The cycling happens using a new operator, UI_OT_space_context_cycle, for toolshelf tabs it's hardcoded in panel handling code though.
Generalized rna_property_enum_step a bit and moved it to rna_access.c to allow external reuse.
Reviewed By: venomgfx
Differential Revision: https://developer.blender.org/D2189
This is a bug in the multithreaded task manager when used with a negative value range.
The problem here is that if previter is unsigned, the comparison in the
return statement is unsigned, and works incorrectly if stop < 0 &&
iter >= 0. This in turn can happen if stop is close to 0, because this
code is designed to overrun the stop by chunk_size*num_threads as
the threads terminate.
This probably should go into 2.78 as it prevents a crash.
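A self-contained illustration of the pitfall (not the scheduler code itself):

```
#include <stdbool.h>

/* Correct: the signed comparison still terminates when 'stop' is negative. */
static bool keep_going_signed(int previter, int stop)
{
	return previter < stop;
}

/* Broken: with an unsigned 'previter' the comparison is performed unsigned,
 * so a negative 'stop' wraps around to a huge value and the loop never ends. */
static bool keep_going_unsigned(unsigned int previter, int stop)
{
	return previter < (unsigned int)stop;
}
```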
Based on patch by @codemanx, but with slightly less verbose descriptions.
More detailed behavior etc. rather belongs to doc/python_api/examples/bpy.ops.x.py imho.
There was incorrect indexing done in the array. Caused by 5abae51.
Not sure why it needed to be changed here, but array here is supposed to be
a loop data, so bringing back loop index as it originally was. The shading was
wrong in edit mode with BI active as well (so it's not like it's needed for
BI only).
Patch in collaboration with Alexander Gavrilov (angavrilov), thanks!
Should be double-checked and ported to 2.78.
Crash was due to a name collision in Alembic objects caused by the fact
that names derive from the one of the Blender object. An object having
multiple particle systems would thus give its name to various
subobjects.
Now use the name of the particle system for the Alembic object.
Image of the result of this patch:
{F352849}
The only thing different from the above image and this patch is I added a colon after "Falloff" after I took the screen shot.
Reviewers: Severin, meta-androcto
Subscribers: plyczkowski
Tags: #bf_blender, #user_interface
Maniphest Tasks: T33436
Differential Revision: https://developer.blender.org/D2195
Another great example of inconsistency in usercount handling - dupli_group was considered
as refcounted by readfile.c code (and hence by library_query.c one, which is based on it),
but not by editor/BKE_object code, which never increased group's usercount when creating
an instance of it etc.
To be backported to 2.78.
We did not update them for really long time and the currently used
hashes are quite old and probably wouldn't work without manually
updating all submodules.
Not as if it's something totally crucial (we ask to update submodules
all the times and that's what `make update` does) but updating hashes
will save some cloning/checkout time.
Includes:
- Version bump to 2.78
- Doxy file update
- New splash screen
- Wrapped some do_versions with version check
- Updated template to use proper font
After poking around a lot it seems Droid Sans was used during 2.7x series.
(or at least using this font and comparing to previous
splash screens gives no visible difference).
There were two inconsistencies in how the 'save image' op initialized its settings:
* It would always use ImBuf->planes value.
* It would always use Image views settings.
Both of those settings should come from scene->r.im_format (as everything else)
when saving render/compo result...
Object coordinates can now be used in the displacement shader and will give
correct results, whereas before bump mapping was calculated from the displaced
positions and resulted in incorrect shading.
This works by evaluating the shader in two parts, first bump then surface, and
setting the shader state to match what it would be if the surface was
undisplaced for the bump shader evaluation. Currently only `P` is set as if
undisplaced, but other shader variables could be set as well, such as `I` or
`time`. Since these aren't set to anything meaningful for displacement I left
them out of this patch, we can decide what to do with them separately.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2156
Storing multiple copies of a shader was needed when the displacement method was
a mesh option and could be different for each mesh. Now that it's a shader option,
this is unnecessary.
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2156
Didn't do any bisecting, but my guess is it's caused by rBb54e95a5c8dcb7 (2.74 is fine, 2.75 isn't).
I think this fix makes some other hacks redundant but need to check details (will do when we're back to bcon1 to avoid regressions).
Issue was that the wm.open_mainfile OP caused all handlers to be removed and, since rB45592291, cancelled (which is correct in general); the menu that triggered the OP should not be cancelled though.
Not sure if this is a nice fix or not, it's however the safest fix I found. A different fix would be to call UI_popup_block_close before WM_operator_call_ex (in dialog_exec_cb), but not sure how safe this is and want to further investigate if it makes other hacks/fixes redundant.
There's still a crash with --debug-memory that confused the heck out of me (since I always have --debug-memory enabled), but I'll commit fix for that separately.
When using metadata stamping, it's often handy to have "Camera" in
front of the camera name, "Marker" in front of the marker text, etc.,
but sometimes those get in the way. This patch allows an artist to
turn those labels on/off.
Reviewed by: sergey, mont29, venomgfx
Our usercount handling was really... infuriating :|
Here, localization (i.e. a 'shallow' copy that should not touch usercounts) was incrementing
the usercounts of the sole Texture IDs of lamps and worlds (on the weak and fallacious pretext
that the related BKE_free... functions would decrement those counts)... Seriously...
So now, localize funcs do not increment any usercount anymore (since matching BKE_free... ones do
not decrement any either), and we do not call anymore that stupid unlink when freeing temp
localized copies of lamps/materials at end of preview generation.
Note that we probably still have a lot to do to clean up that copy/localize code, pretty sure
we can deduplicate a lot more.
The option is controlled with the WITH_WINDOWS_CODESIGN option and needs:
- Signtool must be found on the system, the standard windows sdk folders will be searched for it.
- The path to the pfx file (WINDOWS_CODESIGN_PFX)
- The password for the pfx; this can either be set by the WINDOWS_CODESIGN_PFX_PASSWORD variable, but given that it ends up in CMakeCache.txt (which might be undesirable) there is a backup option of setting the PFXPASSWORD environment variable on the system.
Reviewers: sergey, juicyfruit
Reviewed By: juicyfruit
Tags: #bf_blender, #platform:_windows
Differential Revision: https://developer.blender.org/D2182
Curves and meshes (when no modifier application required) would increase their material usercount twice.
Not sure how/why it worked in the previous code, but with the new, stricter ID handling we need a more
careful check of ID 'ownership' handling.
Reported by Sergey over IRC, thanks.
There basically are two issues here: in smooth mode (and all non-tangent
normal map types) it doesn't invert the normal for backfacing polys;
on the other hand for flat shaded tangent type it is inverted too soon.
This fix does a brute force correction by checking the backfacing flag.
Reviewers: #cycles, brecht
Reviewed By: #cycles, brecht
Differential Revision: https://developer.blender.org/D2181
When undoing in the UV/Image editor and pressing the ESC key, there was a segfault
in Toolsettings because the reference was missing. Now the toolsettings
are loaded from the context and not from local operator data.
It was kinda split in two different places (one allowed to modify the given path to always get a valid one,
the other only checked for validity of the given path), not nice - and broken in the asset branch case.
So instead, extended FileList->checkdirf a bit to handle both cases (modifying and non-modifying path).
We usually don't silence might-be-uninitialized warnings (which is the only
thing which could explain setting the matrix to all zeroes), so we can catch
such errors when using tools like Valgrind.
I don't get the warning here and the initializer was wrong, so removing it.
If it's _REALLY_ needed, please do a proper initialization.
Previously, they were in a column alongside the list, but because the lists were
rarely that long, there would always be a large gap left below the list.
This was because the poll callback was checking for the presence of an active layer.
If you just create an empty datablock and try to paste, nothing would happen.
However, this check was kind of redundant anyway, as the operator would add a layer for
you if it didn't find one.
(Later this calculation should be moved into the iteration macro instead, since
it only needs to be applied once per layer along with the diff_mat calculation)
A common problem encountered by artists was that they would accidentally move
the 3D cursor while drawing, causing their strokes to end up in weird places in
3D space when viewing the drawing again from other perspectives.
This operator helps fix up this mess by taking the selected strokes, projecting them
to screenspace, and then back to 3D space again. As a result, it should be as if
you had directly drawn the whole thing again, but from the current viewpoint instead.
Unfortunately, if there was originally some depth information present (i.e. you already
started reshaping the sketch in 3D), then that will get lost during this process.
But so far, my tests indicate that this seems to work well enough.
After the GP v2 changes, it wasn't possible to easily set the thickness of strokes
if you didn't know about the pie menus already. This just exposes the same set of
settings.
Parenting options are not visible there (i.e. they're only for the 3D view),
so reserving space that isn't going to be used in those editors doesn't really
make much sense. Furthermore, those property regions are often quite narrow
too, so it doesn't help too much to keep these settings so narrow there.
transparency.
The issue is that we are rendering to a 0..1 clamped sRGB buffer with
unpremultiplied alpha, where the correct thing to do would be to render
to an unclamped linear premultiplied alpha buffer. Then we would just
make fire purely emissive without affecting the alpha channel at all,
but that doesn't work here.
So for now, draw fire and smoke separately using different shaders and
blend modes, like it used to before the smoke programs were rewritten
(see rB0372b642).
Meshes with Cycles subdivision were being transformed to world space leading to
normals to sometimes be calculated in that space, while they should be in
object space. Also caused dicing to happen at the wrong rate for scaled meshes.
Application code can pass ubytes, Gawain converts to float vec4 expected by shader.
For now the conversion is simple linear. We can add sRGB support later if needed.
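A minimal sketch of the conversion (illustrative, not the exact Gawain internals):

```
/* Plain linear mapping of an 8-bit color to the float vec4 the shader
 * expects; no sRGB decoding is applied at this stage. */
static void convert_ubyte4_to_float4(const unsigned char in[4], float out[4])
{
	for (int i = 0; i < 4; i++) {
		out[i] = (float)in[i] * (1.0f / 255.0f);
	}
}
```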
General reshuffling of defines and spacing/brace usage for consistency.
In particular:
* When defining types, don't mix pointers and non-pointer types on same line
to avoid confusion
* As much as possible, have all defines at the top of each block instead of
scattered haphazardly throughout the code
Users have been getting a bit confused by the way things are worded/arranged in
the UI. This patch makes a few changes to the UI to make it more clear how to
use subdivision:
- make Subdivide UVs option inactive when adaptive subdivision is enabled as UV
subdivision is currently unsupported
- add "px" to dicing rates in the Geometry Panel
- display the final dicing rate in the modifier
- reworded "Dicing Rate" in the modifier to "Dicing Scale" to make more clear
that this is a multiplier for the scene dicing rate, and added a note to the
tooltip pointing the user to that setting in the Geometry Panel
Reviewed By: brecht
Differential Revision: https://developer.blender.org/D2174
Assigning NULL to scopes' data pointer in a 'non-UI readfile' context was terribly wrong for sure;
if we really wanted to reset them here we should free them first.
But we can rather just ignore them here, those are purely runtime data managed by image editor,
no need to touch them here.
Significant rewrite with some improvements.
Maintain visual hierarchy of the grid:
- emphasized lines draw atop normal lines
- axes draw atop all lines (same as before)
Draw axes only once, not twice.
Return early if nothing to draw.
Single draw call for the default case (grid floor with X and Y axes).
Z axis needs a second draw call because it uses 3D coordinates.
Part of T49043
* Stroke editing functions should be in gpencil_edit.c not gpencil_data.c
(the latter is only for handling "CRUD" operations on things like
layers, brushes, and palettes)
* Deduplicate the GP_STROKE_BUFFER_MAX define
Added a way to select all the currently visible strokes that use the same
color as the selected stroke. This can be accessed via the Select Grouped (Shift-G)
operator as an alternative to selecting by layer.
This commit adds optional "pressure" and "strength" arguments to the
stroke.points.add() method. These are given default values of 1.0,
so that old scripts can be ported over to the new API with less effort
while reducing confusion about why auto generated strokes won't appear.
Also reduced number of matrix ops by generating final positions directly.
Also removed a display list (deprecated in modern GL).
Tried to reuse sinval & cosval tables but those values are skewed (last value repeats first value, middle values are squished to compensate). Went with sinf & cosf instead.
Part of T49042
The idea of the change is to avoid queue growing too long
and handle all the operations as quick as possible.
Gives about 3% speedup on one of the barber shots here.
Now we pass streams to Alembic instead of passing the filename string.
That way we can open the stream ourselves with the proper unicode
encoding.
Note that this only applies to Ogawa archive, as HDF5 does not support
streams.
Differential Revision: https://developer.blender.org/D2160
Changes from microdisplacement work broke previous support for subdivision
meshes, sometimes leading to crashes; this makes things work again. Files
that contain "patch" nodes will need to be updated to use meshes instead, as
specifying patches was both inefficient and completely unsupported by the new
subdivision code.
Auto-scale is expected to work just fine now.
Only thing changed now is the pivot point for the scale: it is now
the same as rotation pivot, so scaling happens around weighted median
of the translation tracks. This seems to be what is actually required
for the VFX workflow.
Previously, this extension used the translation compensated image centre
as reference point for rotation measurement and compensation. During
user tests, it turned out that this setup tends to give poor results
with very simple track configurations.
This can be improved by using the weighted average of the location
tracks for each frame as pivot point. But there is a technical problem:
the existing public API functions do not allow to pass the pivot point
for each frame alongside with the stabilisation data. Thus this
change implements a trick to package a compensation shift into
the translation offset, so the rotation can be performed around
a fixed point (center of frame). The compensation shift will then shift
the image as if it had been rotated around the desired pivot point.
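A short formulation of the trick (my own notation, not taken from the patch): rotating a point x by R around a pivot p can be rewritten so the rotation itself stays centered on the frame, with the pivot folded into a translation term:

```
x' = R\,(x - p) + p = R\,x + (p - R\,p)
```

The term (p - R p) is exactly the compensation shift that gets packaged into the translation offset.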
It is common in blender to use 1-based counting for
frame sequences (while 0-based is allowed). Thus
initializing to use frame 1 as reference for stabilization
is likely to produce smooth start values in most cases
values > 1 will zoom in and values < 1 zoom out
Rationale: the changed orientation is more natural
from a user POV and doing it this way is also more
consistent with the calculation of the other
target_* parameters.
Compatibility: This will break *.blend files saved
with the previous version of this patch from the
last days (test period). It will *not* break any
old/migrated files: Previously, the DNA field "scale"
was only used to cache autoscale. Only with the
Stabilisator rework, "scale" becomes a first class
persistent DNA field. There is migration code to
init this field to 1.0
We should treat all three "target" ("expected") parameters in a similar way:
The "influence" control should only work on the measurement part of stabilisation,
i.e. it should only control the automatic part of stabilisation, while
the target parameters are deliberately set by the user and thus should
even be in effect when the automatic stabilisation is turned down.
It used to be so for location and rotation, but for the scale part,
I re-used the existing code for autoscale, which also had the scale influence
work on the autoscale factor. This was sensible in the old version,
since scale_influence was the only way to control the result. But now,
the user always has total control through the "target_*" parameters
and thus we should prefer to treat all of them similarly.
The little grabby handle in the corner of an area. Now uses 1 draw call
instead of 6.
Also one version of the (+) icon to show a hidden region. Why do we
have multiple versions of this?
Fixed a harmless signed/unsigned error.
Fixed a GL state error that prematurely disabled blending.
Added imm_draw_filled_circle function, which can be used for drawing
other widgets.
Work toward T49042 and T49043
The if branches were reordered when the original patch was
committed, which broke the implicit non-NULL guarantee on link.
To prevent re-occurrence, add a couple of unit tests.
For best results use the latest 3Dconnexion driver. But latest is only
supported on Mac OS 10.9+. We go all the way back to Mac OS 10.6 so
have to deal with older driver versions.
See the original dlclose line for my faulty assumption. Waiting to
unload the driver later fixes the crash. Newer drivers don’t seem to
have this issue.
Also removed WITH_INPUT_NDOF guards as NDOFManager.h takes care of
this. Follow-up to b10d005 a few days ago.
In addition to pack of conflicts listed below, also had to comment out particle part of new Alembic code... :/
Conflicts:
intern/ghost/intern/GHOST_WindowWin32.cpp
source/blender/blenkernel/BKE_effect.h
source/blender/blenkernel/BKE_pointcache.h
source/blender/blenkernel/intern/cloth.c
source/blender/blenkernel/intern/depsgraph.c
source/blender/blenkernel/intern/dynamicpaint.c
source/blender/blenkernel/intern/effect.c
source/blender/blenkernel/intern/particle_system.c
source/blender/blenkernel/intern/pointcache.c
source/blender/blenkernel/intern/rigidbody.c
source/blender/blenkernel/intern/smoke.c
source/blender/blenkernel/intern/softbody.c
source/blender/depsgraph/intern/builder/deg_builder_relations.cc
source/blender/gpu/intern/gpu_debug.c
source/blender/makesdna/DNA_object_types.h
source/blender/makesrna/intern/rna_particle.c
Scanned Blender code for commonly used glVertex, glColor functions.
Implemented immVertex, immAttrib versions of these to ease transition
away from legacy OpenGL.
Keeping unused gla2D code because it might be useful, or inspire
something useful, for Blender 2.8 development.
Also removed an old Mac driver bug workaround. Disabled this before the
2.77 release and nobody has complained.
Was a concurrent access of pointcache from both particle system and UI (time space).
Pointcache not being threadsafe is really an issue to be addressed for its next version,
for now simply locking spacetime (like we already do with 3DView), not ideal fix
but it's working and safe for release.
This code obviously should also use the cache_fields flag variable,
like the code for reading the lowres data in the same function.
This is because fluid_fields actually represents the old state before
smoke was reallocated to match cache_fields read from the file, and if
it has some fields enabled that aren't allocated any more, it crashes.
This also fixes a reverse glitch: when a file was loaded with
the current frame in the middle of a baked smoke+fire simulation,
smoke appeared immediately, but the fire didn't until the frame
was changed. The reason is the same: after file load no fields
are initially allocated and thus fluid_fields is 0.
ccgDM_drawMappedFacesMat was missing a smooth shade model restore, some other
functions were redundantly setting it since we can assume it to be the default
state already.
ifdef’ing out defines in DNA/RNA is not a good idea, was breaking alternative keymaps loading
from splash screen e.g. (reported by Sergey over IRC, thanks).
Joins some things to the same row and uses aligned columns to get
minimum use of vertical space.
Probably still some tweaks required, but getting there :)
Errors are caught & reported by our GL debug callback. This gives us way more useful information than sporadic calls to glGetError.
I removed almost all use of glGetError, including our own GPU_ASSERT_NO_GL_ERRORS and GPU_CHECK_ERRORS_AROUND macros.
Still used in rna_Image_gl_load because it passes unvalidated input to OpenGL functions.
Still used in gpu_state_print_fl_ex as an exception handling hack -- will rewrite this soon.
The optimism embodied by this commit will not prevent OpenGL errors. We need to analyze what would cause GL to fail at certain points and proactively intercept these failures. Or guarantee they can't happen.
export.
This aligns the behaviour of the file selection with the other
exporters. The Alembic case would fail if the filepath did not have an
extension set.
Also set a default file name for the Alembic export operator in case the
Blender file was not saved before exporting.
This function is only really secure in a very limited number of cases,
and can especially bite you later if you change some buffer sizes...
So it's not worth bothering with; just always use BLI_strncpy instead.
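A minimal sketch of the preferred pattern (the buffer and source string here are illustrative):
```
char name[64];  /* illustrative fixed-size destination buffer */

/* Always bounded by the destination size, so later buffer-size changes stay safe. */
BLI_strncpy(name, src, sizeof(name));
```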
Error: `origin` and `edge` args were swapped in final `FindOccludee()` call of `ViewMapBuilder::ComputeRayCastingVisibility()`
Cleanup: the main for loop in `Strip::createStrip()` was really confusing (though correct)
and generated a false positive in the Coverity scan; it should now be clearer how it loops over its vprev/v/v2
triplet of consecutive items.
When WITH_INPUT_NDOF is disabled, 3D mouse handling code is removed
from:
- GHOST (was mostly done, finished the job)
- window manager
- various editors
- RNA
- keymaps
The input tab of user prefs does not show 3D mouse settings. Key map
editor does not show NDOF mappings.
DNA does not change.
On my Mac the compiled binary is 42KB smaller after this change. It
runs fine with WITH_INPUT_NDOF on or off.
These latter can cause MSVC debug asserts if the array is empty. With C++11
we'll be able to do this for std::vector later. This hopefully fixes an assert
in the Cycles subdivision code.
Basically title says it all.
The goal is to make platform maintenance easier, so you don't have
to constantly scroll back and forth looking for if() branches to
check which exact platform you're currently working on.
Ideally we would also move option defaults to the platform files,
but I'm not sure how to implement that in a nice way yet.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D2148
Legacy OpenGL has a matching Vertex3fv for every Vertex3f, and so on. Add something similar to Gawain, just for a few common functions. Might add more as the need arises.
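A minimal sketch of what such a *v wrapper can look like, assuming an existing immVertex3f(attrib_id, x, y, z) entry point (exact Gawain signatures may differ):
```
void immVertex3fv(unsigned attrib_id, const float data[3])
{
	immVertex3f(attrib_id, data[0], data[1], data[2]);
}
```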
If the ruler is saved, a new color was created for each ruler. With this
change, a color is created for the first instance and reused in the
following instances. The default color is the current default color for
GP.
These are intended for very simple drawing. No lighting etc.
Shares some fragment code with the 2D shaders.
Similar to their 2D counterparts, but are not combined because of
future plans for separate 2D & 3D matrix stacks.
It's becoming annoying to have public API dependent on build type
and everything. Let's just always have API defined and do stubs
in the function implementation instead.
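A hedged sketch of that pattern, with made-up names: the declaration stays unconditional and only the body depends on the build option.
```
/* some_api.h -- always declared, regardless of build configuration */
void some_api_do_thing(int value);

/* some_api.c */
void some_api_do_thing(int value)
{
#ifdef WITH_SOME_FEATURE
	/* ... real implementation ... */
	(void)value;
#else
	(void)value;  /* stub: feature disabled at build time */
#endif
}
```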
The current implementation more or less indiscriminately links physics
objects to colliders and forces, ignoring the precise details of layer
checks and collider groups. The new depsgraph seemed to lack some of
these links entirely. The relevant code in the modifiers suffers from a
lot of duplication.
Different physics simulations use independent implementations of
collision and similar things, which results in a lot of variance:
* Cloth collides with objects on same or visible layer with dupli.
* Softbody collides with objects on same layer without dupli.
* Non-hair particles collide on same layer with dupli.
* Smoke uses same code as cloth, but needs different modifier.
* Dynamic paint "collides" with brushes on any layer without dupli.
Force fields with absorption also imply dependency on colliders:
* For most systems, colliders are selected from same layer as field.
* For non-hair particles, it uses the same exact set as the particles.
As a special quirk, smoke ignores smoke flow force fields; on the other
hand dependency on such field implies dependency on the smoke domain.
This introduces two utility functions each for old and new depsgraph
that are flexible enough to handle all these variations, and uses them
to handle particles, cloth, smoke, softbody and dynpaint.
One thing to watch out for is that depsgraph code shouldn't rely on
any properties that don't cause a graph rebuild when changed. This
was violated in the original code that was building force field links,
while taking zero field weights into account.
This change may cause new dependency cycles in cases where necessary
dependencies were missing, but may also remove cycles in situations
where unnecessary links were previously created. It's also now possible
to solve some cycles by switching to explicit groups, since they are
now properly taken into account for dependencies.
Differential Revision: https://developer.blender.org/D2141
For now, simply reshuffle the options so they keep a proper dependency flow.
Benefits:
- Has the ability to hide the tracks lists to work with other sliders around.
Could be really handy to quickly get rid of lengthy lists.
- From feedback, it seems to fit the workflow better.
Things to double-check:
- Feels a bit misordered: first you define whether you want rotation
stabilized, then come the tracks, then the scale options.
While this follows the dependency flow (which is really good and which
we should not violate), it feels a bit odd whether things are
really where they have to be.
- Autoscale controls visibility of max-scale; can we just make it
active/inactive instead?
- Autoscale replaces the slider with a label. Could it be a disabled slider
instead, to reduce visual jumping (a disabled slider prevents user
input)?
Hopefully we'll still want to have the collapsible box after iterating
over these points, so we don't waste bits in DNA.
See this page for motivation and description of concepts:
https://github.com/Ichthyostega/blender/wiki
See this video for UI explanation and demonstration of usage
http://vimeo.com/blenderHack/stabilizerdemo
This proposal attempts to improve usability of Blender's image stabilization
feature for real-world footage esp. with moving and panning camera. It builds
upon the feature tracking to get a measurement of 2D image movement.
- Use a weighted average of movement contributions (instead of a median).
- Allow for rotation compensation and zoom (image scale) compensation.
- Allow to pick a different set of tracks for translation and for
rotation/zoom.
- Treat translation / rotation / zoom contributions systematically in a
similar way.
- Improve handling of partial tracking data with gaps and varying
start / end points.
- Have a user definable anchor frame and interpolate / extrapolate data to
avoid jumping back to "neutral" position when no tracking data is available.
- Support for travelling and panning shots by including an //intended//
position/rotation/zoom ("target position"). The idea is for these parameters
to be //animated// by the user, in order to supply a smooth, intended
camera movement. This way, we can keep the image content roughly in frame
even when moving completely away from the initial view.
A known shortcoming is that the pivot point for rotation compensation is set to
the translation compensated image center. This can produce spurious rotation on
travelling shots, which needs to be compensated manually (by animating the
target rotation parameter). There are several possible ways to address that
problem, yet all of them are considered beyond the scope of this improvement
proposal for now.
Own modifications:
- Restrict line length, it's really handy for split-view editing
- In motion tracking we prefer fully human-readable comments, meaning we
don't use doxygen with its weird markup, and comments are supposed to
start with a capital and end with a full stop.
- Add explicit comparison of pointer to NULL.
Reviewers: sergey
Subscribers: kusi, kdawg, forest-house, mardy, Samoth, plasmasolutions, willolis, sebastian_k, hype, enetheru, sunboy, jta, leon_cheung
Maniphest Tasks: T49036
Differential Revision: https://developer.blender.org/D583
EXT_gpu_shader4 lets us say “noperspective” in GLSL #version 120 just
like in later GLSL.
Mac shader now matches modern GLSL available on other platforms.
Reduces noise from --debug-gpu so we can spot serious errors.
Blender 2.7x uses OpenGL 2.1; we don't care if features are deprecated.
I'll re-enable these warnings for blender2.8 after next merge.
Kernels can now be built without patch evaluation when not needed by the
scene (Catmull-Clark subdivision not in use), giving a performance boost
for some devices.
Enable based on --debug-gpu at the command line. Linux already works
this way.
This commit in master (2.78 timeframe) is the smallest change that gets
the desired result. I did a similar commit in blender2.8. Most ongoing
GL debug work will go into 2.8, not 2.7x.
In support of T49089
When running blender --debug-gpu
Display which debug facilities are available. One of these, in order of preference:
- OpenGL 4.3
- KHR_debug
- ARB_debug_output
- AMD_debug_output
All messages are logged now, not just errors. Will probably turn some of these off later.
GL_DEBUG_OUTPUT_SYNCHRONOUS lets us break on errors and backtrace to the exact trouble spot.
Callers of GPU_string_marker no longer pass in a message length, just the message itself (null terminated).
Apple provides no GL debug logging features.
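A minimal sketch of the GL 4.3 / KHR_debug path described above (the wrapper names and message formatting are illustrative; the ARB/AMD fallbacks are omitted):
```
#include <stdio.h>
#include <GL/glew.h>

static void GLAPIENTRY gpu_debug_proc(GLenum source, GLenum type, GLuint id,
                                      GLenum severity, GLsizei length,
                                      const GLchar *message, const void *user_data)
{
	(void)source; (void)type; (void)id; (void)severity; (void)length; (void)user_data;
	fprintf(stderr, "GL: %s\n", message);
}

void gpu_debug_init(void)
{
	if (GLEW_VERSION_4_3 || GLEW_KHR_debug) {
		glEnable(GL_DEBUG_OUTPUT);
		/* break on errors and backtrace to the exact trouble spot */
		glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
		glDebugMessageCallback(gpu_debug_proc, NULL);
	}
	/* else: fall back to the ARB_debug_output / AMD_debug_output variants */
}
```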
By calling `tessellate()` from the mesh manager in Cycles we can do pre/post
processing or even threaded tessellation without concerning client side code
with the details.
This way OpenCL devices can also benefit from a smaller memory footprint, when using e.g. bumpmaps (greyscale, 1 channel).
Additional target for my GSoC 2016.
When we ask for GL 3.2 compatibility profile:
- AMD (Radeon HD 6970) gives us exactly this version
- NVIDIA (Quadro K600) gives at least this version
- Still need to check Intel behavior
We want *at least* the version requested, plus more recent features if
available.
Both GPUs tested & mentioned above are capable of GL 4.5. With this
commit they both give 4.5 to Blender.
Helps most when real-world units are used.
Previous code started at the smallest visible unit (e.g. Inches) then
followed to Feet, Yards, Chains, Furlongs, Miles. Always to the largest
unit of the set, even though most would be way off screen.
New code knows whether it skipped any grid lines for the next unit to
fill in, and can stop once all lines are on screen.
Previous commit works on Windows, found some issues after trying on Mac.
- benign warnings about && within ||
- replaced nearbyint() with round() to avoid floating point environment
surprises
- the remquo function appears to be broken on Mac (!); results were way, way
off. Replaced with simple division.
- minor tweaks to debug output
Work toward T49043, with a side of client vertex arrays.
Not a straightforward port from glVertex to immVertex since Gawain needs
to know how many vertices we'll be drawing *before* we start drawing.
Fixed these not-so-great aspects of grid drawing:
- coarse grids would draw atop some lines from the finer grids
- visible axes would draw atop lines from coarse grid
- axes were drawn even if they weren't in view
- terrible misuse of vertex arrays
- each line issued its own draw call
New code draws each line exactly once. The entire grid is one draw call.
Bonus: I had to / got to learn how the units system works!
New value of 4MB should handle our needs without taking up too many GPU
resources.
Old value of 1KB was for observing what happens when the buffer fills up
and we need to flush and start a new one.
When updating the max values under stiffness scaling, they clip at the normal stiffness values
as expected. However, when updating the stiffness values, you could set them higher than the max
values, and the max values weren't updated accordingly. As the stiffness scaling computes using
the absolute difference between the max values and the stiffness values, you got higher
stiffnesses in scaled areas even though your max was actually lower than the normal stiffness.
This diff fixes that behaviour by updating the max values to be equal to the stiffness whenever
you set a higher stiffness than the max value.
Also, I have initialized the max values to the same as the stiffnesses, as they were previously
just set to zero, which caused the same problem described above.
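A hedged sketch of the clamping rule described above (the struct and field names are illustrative stand-ins for the real DNA data):
```
typedef struct ClothSettings {  /* illustrative stand-in for the real settings struct */
	float stiffness;
	float max_stiffness;
} ClothSettings;

static void cloth_stiffness_set(ClothSettings *settings, float value)
{
	settings->stiffness = value;
	/* Keep max >= stiffness so the (max - stiffness) difference used by
	 * stiffness scaling never goes negative. */
	if (settings->max_stiffness < value) {
		settings->max_stiffness = value;
	}
}
```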
Reviewers: lukastoenne
Reviewed By: lukastoenne
Tags: #physics
Differential Revision: https://developer.blender.org/D2147
Node tree update calls in the middle of a socket loop are dangerous, they can change sockets
on group nodes and link instances in particular. Updates should only happen after the operator
has finished.
Simply removed the extra convenience check for validity now. Worst case an invalid (red) link
is created which can be removed by the user as well and should simply be ignored by node systems.
The update system in nodes needs a complete rewrite to handle complex cases like this, where an
operator may need to react to changes during its execution.
GPencil conversion would just always run for file version 2.77.3. This wasn't an issue in master, but possibly for other branches that used the 2.77.3 block.
I wasn't aware that you have to add the asterisk for pointers either; this is kinda weird. Anyway, it's running correctly now.
Getting this ready for Gawain treatment.
Removed setlinestyle(0) -- solid lines are the default, hope this isn't
really needed.
Eliminated redundant math.
Arithmetic is still double precision, passed to OpenGL as single
precision. Even though it said GL_DOUBLE before, values were converted
to GL_FLOAT internally.
Use C99-isms for declaring variables close to where they're used.
Minor whitespace tweaks.
The new dependency graph puts materials into the graph in order to deal with animation
assigned to them and things like that. This means we need to update
relations when the slots change.
This fixes: T49075 Assignment of a keyframed material using the frame_change_pre handler
doesn't update the keyframe using the new dependency graph
This is mainly required for the new dependency graph, where non-object
datablocks are a part of the dependency graph.
This solves an issue when making a mesh shared by multiple objects
single-user.
Now we have the 4 component ones first (float4, byte4, half4) followed by the 1 component ones (float, byte, half).
Makes code a bit more consistent and also reduces code a bit when enabling half support on GPU in next commit.
This also exposed a typo in half CPU images for 3D textures, which wasn't used yet, but good to have that one fixed anyway.
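The described ordering would look roughly like this (the enum and constant names are assumptions for illustration):
```
typedef enum ImageDataType {
	/* 4-component types first ... */
	IMAGE_DATA_TYPE_FLOAT4 = 0,
	IMAGE_DATA_TYPE_BYTE4  = 1,
	IMAGE_DATA_TYPE_HALF4  = 2,
	/* ... then the 1-component ones. */
	IMAGE_DATA_TYPE_FLOAT  = 3,
	IMAGE_DATA_TYPE_BYTE   = 4,
	IMAGE_DATA_TYPE_HALF   = 5,
} ImageDataType;
```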
Point cache read code contains checks designed to prevent it reading
stale data when the relevant simulation code should instead compute
the next frame from the previous one. However in some situations like
motion blur subframes the simulation can't possibly do it and just
exits. This causes completely incorrect motion blur at or after the
last cached frame.
To fix, add a parameter that tells the cache code whether it should
apply the checks and exit, or read what it can even if stale (true
means exactly the same as the old behavior).
Doing this in cache rather than clamping the frame number better in
the caller lets it handle the case of incomplete cache that stops
before the official last frame.
Reviewed By: mont29, lukastoenne
Maniphest Tasks: T49004
Differential Revision: https://developer.blender.org/D2144
Using unit tests is the wrong way to control static behavior of the
application. They should only be used for checking dynamic behavior;
all the rest is easily controllable at compile time.
Doing tests at compile time is actually a more robust approach, since
we don't have a strict policy of running unit tests before accepting
any change.
Proper alignment control is coming shortly.
This reverts commit 7c3a06c349.
This serves as a good example of the Gawain API. (I’ve thought of a
better way to draw drop shadows, but that can wait!)
Part of T49043.
This is what I had in mind for D1753.
If you don’t specify a vertex’s color, it will use the color of the
previous vertex. Similar for all other attributes.
This matches the legacy behavior of glColor, glNormal, etc. *except* in
Gawain the first vertex of each immBegin must be fully specified. There
is no “current” color in the new system.
Should be simpler to use now.
Made vertex format structure private. New immVertexFormat() function
clears and returns the format. Devs can start with add_attrib(format...)
and not have to clear it first.
immBindProgram automatically packs the vertex format if needed.
Updated 3D cursor drawing to use new API.
Since the optimal values depend on the device used, this option doesn't make much sense in the XML.
Therefore, it's now specified via the command line, just like the device itself.
glBindAttribLocation does not take effect until the program is
re-linked. In other words I was doing it wrong!
New code gets attrib locations from program, then remembers the attrib
-> location mapping for subsequent draw calls.
The program and VertexFormat are not modified (makes threading and reuse
easier).
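A minimal sketch of the corrected approach (the attrib bookkeeping arrays are illustrative):
```
/* Illustrative: query locations from the already-linked program instead of
 * assigning them with glBindAttribLocation (which only applies on re-link). */
static void remember_attrib_locations(GLuint program, unsigned attrib_count,
                                       const char **attrib_names, GLint *attrib_locations)
{
	for (unsigned a = 0; a < attrib_count; ++a) {
		attrib_locations[a] = glGetAttribLocation(program, attrib_names[a]);
	}
	/* The mapping is kept for subsequent draw calls; neither the program
	 * nor the VertexFormat is modified. */
}
```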
There are older ways to give OpenGL hints about buffer invalidation, but
glInvalidateBufferData does exactly what we want. Use this function when
OpenGL 4.3 is available (Windows and proprietary Linux drivers).
Part of Gawain immediate mode.
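A hedged sketch of the intent (the GLEW version check and buffer names are illustrative):
```
static void discard_vbo_contents(GLuint vbo_id, GLsizeiptr buffer_size)
{
	if (GLEW_VERSION_4_3) {
		/* direct, precise invalidation: we no longer need the current contents */
		glInvalidateBufferData(vbo_id);
	}
	else {
		/* older hint: orphan the storage by re-specifying it with NULL data */
		glBindBuffer(GL_ARRAY_BUFFER, vbo_id);
		glBufferData(GL_ARRAY_BUFFER, buffer_size, NULL, GL_DYNAMIC_DRAW);
	}
}
```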
Old code had a mix of framebuffer error codes from OpenGL ES, EXT_framebuffer_object, and desktop GL. We use desktop GL (or ARB_framebuffer_object which acts just like GL 3.x) so I made it compatible with that.
Changed messages to the actual GL_FRAMEBUFFER_XXX symbols. These are less friendly and more accurate. Can easily look up what an error means, unfiltered by what a Blender dev thinks it means.
Kept ES error codes around in case we support that one day. Just flip the #if or use a compile-time option.
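A minimal sketch of the reporting style this implies, using the raw desktop-GL symbols:
```
#include <stdio.h>

static void report_framebuffer_status(void)
{
	const GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
	switch (status) {
		case GL_FRAMEBUFFER_COMPLETE:
			break;  /* all good */
		case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
			fprintf(stderr, "GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT\n");
			break;
		case GL_FRAMEBUFFER_UNSUPPORTED:
			fprintf(stderr, "GL_FRAMEBUFFER_UNSUPPORTED\n");
			break;
		default:
			fprintf(stderr, "framebuffer status 0x%x\n", status);
			break;
	}
}
```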
Replace legacy OpenGL with Gawain. Use shaders instead of fixed
function pipeline.
This simple UI element is shown at startup so is easy to verify things
are working. It also serves as a good example for people converting
other parts of the code.
Part of T49043
Use simple alternating colored lines instead of stippled overdraw.
Reimplement circ function to not use deprecated GLU (T49042). It also
leaves matrix stack untouched.
Remove unused circf function.
immBindBuiltinProgram extends Gawain’s immBindProgram to use Blender’s
library of built-in shader programs.
It uses imm prefix instead of GPU_ so people won’t be tempted to call
GPU_unbind_program() afterward.
From my understanding, Apache code is not allowed to call GPL code, so
this function needs to be in the GPU lib.
How to use:
1) set up vertex format
2) bind a shader
3) draw with immBegin … immEnd
4) unbind shader
TODO: expand this a little, so we can send uniform values to the bound
shader.
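Putting the steps above together, a usage sketch might look like the following (function names follow the commits in this log, but the exact signatures and the built-in shader id are assumptions):
```
/* 1) set up vertex format */
VertexFormat *format = immVertexFormat();
unsigned pos = add_attrib(format, "pos", GL_FLOAT, 2, KEEP_FLOAT);

/* 2) bind a shader */
immBindBuiltinProgram(GPU_SHADER_2D_UNIFORM_COLOR);  /* shader id is illustrative */

/* 3) draw with immBegin ... immEnd (vertex count must be known up front) */
immBegin(GL_TRIANGLES, 3);
immVertex2f(pos, 0.0f, 0.0f);
immVertex2f(pos, 1.0f, 0.0f);
immVertex2f(pos, 0.5f, 1.0f);
immEnd();

/* 4) unbind shader */
immUnbindProgram();
```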
glMapBufferRange is a wonderful function that doesn’t exist on GL < 3.0.
Use the APPLE_flush_buffer_range extension on Mac. It offers several of
glMapBufferRange’s benefits.
Use older “black arts” method to orphan VBOs when we are done with
them. In modern OpenGL this behavior is more obvious.
Add APPLE_flush_buffer_range to Mac requirements. Every GPU is
supported. T49012
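A rough sketch of the kind of fallback being described (how these paths fit together here is an assumption, not the actual implementation):
```
/* Map a range of the bound array buffer for writing: glMapBufferRange on
 * GL >= 3.0, otherwise plain glMapBuffer, with APPLE_flush_buffer_range
 * behavior enabled on Mac when available. */
static void *map_for_write(GLintptr offset, GLsizeiptr length)
{
	if (GLEW_VERSION_3_0) {
		return glMapBufferRange(GL_ARRAY_BUFFER, offset, length,
		                        GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
	}
#ifdef GL_APPLE_flush_buffer_range
	if (GLEW_APPLE_flush_buffer_range) {
		/* disable automatic flush on unmap; caller flushes modified ranges explicitly */
		glBufferParameteriAPPLE(GL_ARRAY_BUFFER, GL_BUFFER_FLUSHING_UNMAP_APPLE, GL_FALSE);
	}
#endif
	/* legacy path: map the whole buffer and offset into it */
	return (char *)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY) + offset;
}
```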
Apple invented VAOs and exposes them via an extension in legacy GL.
Other platforms use at least GL 3.0 which has VAOs built in.
QUADS were removed from core profile but are useful for immediate-mode
drawing. We’ll have to implement our own QUAD drawing before switching
to core profile.
Immediate mode no longer leaves its internals bound after use. Part of
transition from a simple prototype app to non-simple Blender, which has
lots of other parts using OpenGL.
More ways to send values via immAttrib:
- 2D float vectors
- 3 & 4 component ubytes (for colors mostly)
New immVertex functions that act more like familiar glVertex. We’ll
find a balance between making this API convenient and keeping it small.
2f and 3f are enough for now.
ARB_framebuffer_object replaces several related EXT extensions. The ARB
version pulls GL 3 FBO features into GL 2.1, useful for Mac platform.
Its functions and enums have no ARB suffix so transition to modern GL
will be seamless!
Extension is checked at startup, so is guaranteed to be true at runtime.
Part of T49012
Mac’s OpenGL version is furthest away from our target of GL 3.2. This
commit brings Mac closer to other platforms, so that our shaders and
other code don’t diverge too much during development.
According to Apple’s OpenGL matrix these useful extensions are
available on all GPUs that will be able to run Blender 2.8.
Only checked in debug builds; we might need something more forceful.
Part of T49012
The first two of several new simple built-in shaders (will test these
before adding more). These are intended for the new immediate mode API,
but you can use them just like any built-in GPUShader.
Due to limitations on different platforms, shaders need to work with
GLSL versions 120, 130 and 150. Final Blender 2.8 will be pure #version
150.
Introducing an immediate mode drawing API that works with modern GL 3.2
core profile. I wrote and tested this using a core context on Mac.
This is part of the Gawain library which is Apache 2 licensed. Be very
careful not to pull other Blender code into these files.
Modifications for the Blender integration:
- prefix filenames to match rest of Blender’s GPU libs
- include GPU_glew.h instead of <OpenGL/gl3.h>
- disable thread-local vars until we figure out how best to do this
This implements Mac part of T49012.
Removed options for EGL, ES2, compatibility profile. None of these
exist on Mac platform.
Create a GL 3.2 core context when requested at build time. Old code
just pretended to support core profile.
Implements the Linux part of T49012.
Simplify the options for context creation. No options for legacy GL or EGL or ES2. Select 3.2 CORE or COMPATIBILITY profile at build time.
If that fails, use a GL 3.0 context. This keeps Mesa supported while we work on full 3.2 core elsewhere in the code.
This greatly simplifies the options for context creation. No options for
legacy GL or EGL or ES2. Select CORE or COMPATIBILITY profile at build
time.
OpenGL 3.2 core profile will be our final target on all platforms. Until
all our code is ready we can use 3.2 compatibility profile or "legacy"
GL 2.1 on platforms that don't support compatibility profile.
At the moment light shading in Blender is produced in viewspace. Apparently, that's why
shader nodes work with normals in camera space. But it is not convenient for artists.
The more convenient approach is implemented in Cycles where normals are represented in world space.
The Blend4Web team designed the engine with shader parameter readability in mind,
so normals are interpreted in world space as well. And now our users have to use some tweaks, like an
empty node group with the name "Replace", which replaces one input with another on the engine side
(replacing the working configuration in the Blender Viewport with a configuration that has the same behavior in the engine).
This patch adds the ability to switch to world space for normals and lamp vector in BI and Viewport.
This patch is very important to us and we crave to see this patch in Blender 2.7 because
it will significantly simplify Blend4Web material creation workflow.
{F315547}
{F315548}
Reviewers: campbellbarton, brecht
Reviewed By: brecht
Subscribers: homyachetser, Evgeny_Rodygin, AlexKowel, yurikovelenov
Differential Revision: https://developer.blender.org/D2046
The purpose of the patch is to replace the deprecated glShadeModel.
To decrease the number of glShadeModel calls, I've set GL_SMOOTH as the default.
Reviewers: merwin, brecht
Reviewed By: brecht
Subscribers: blueprintrandom, Evgeny_Rodygin, AlexKowel, yurikovelenov
Differential Revision: https://developer.blender.org/D1958
Note that this only removes the actual dependencies of Cycles on the
particle code in Blender, but not the internal "particle" definition
or the curve type handling inside Cycles. These structures may be in need
of some improvement themselves, but that is out of scope here.
There are a lot of cases here where deciding for removal is a bit tricky.
Many features have options for "use_particles" and similar settings. Only
features which actually store a particle object reference or work on actual
particle data have been removed.