Since area lights are affected by object scaling, it only makes sense to support applying the scale to the lamp size.
Applying location or rotation does not work, of course.
If a scaling that changes the aspect ratio is applied to a square lamp, the mode is automatically changed to Rectangle.
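A minimal sketch of the idea with hypothetical names (not Blender's actual API): the object scale gets baked into the lamp size, and a non-uniform result switches the lamp to Rectangle mode.

    #include <stdbool.h>

    /* Hypothetical helper: bake the object scale into the area lamp size. */
    void lamp_apply_scale(float *size_x, float *size_y, bool *is_square,
                          float scale_x, float scale_y)
    {
      *size_x *= scale_x;
      *size_y *= scale_y;
      /* A scale that changes the aspect ratio of a square lamp
       * switches it to Rectangle mode. */
      if (*size_x != *size_y) {
        *is_square = false;
      }
    }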
This commit does two things:
- Adds an option to do the calculation in different color spaces (BT601
or BT709).
- Changes the default calculation from legacy BT601 to BT709 (coefficients sketched below).
This affects several areas:
- UI areas (mainly scopes)
- ViewLevelsNode
- Several other nodes that use `COM_ConvertOperation.h`
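For reference, the two modes differ only in the luma weighting coefficients; a minimal standalone sketch (not the actual compositor code):

    /* ITU-R BT.601 luma weights (the legacy default). */
    float luma_bt601(float r, float g, float b)
    {
      return 0.299f * r + 0.587f * g + 0.114f * b;
    }

    /* ITU-R BT.709 luma weights (the new default). */
    float luma_bt709(float r, float g, float b)
    {
      return 0.2126f * r + 0.7152f * g + 0.0722f * b;
    }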
This caused too much trouble; it's also possible that users run Blender with
a 'release' directory in their CWD, causing issues.
Developers can symlink "release/" to "bin/2.79".
There were actually two issues here:
* The hack to allow running Blender directly from the source directory
would just check for a 'release' directory, without ensuring that it
is the release dir from the Blender source tree and not some other random
folder.
* GHOST_getSystemDir returns nothing for portable installations; in that
case we now check directly in the Blender binary dir.
This fix is more critical in the 2.8 branch, where that system path is used
to retrieve the new '3D' icons...
Each parameter of the function was being copied onto the stack.
This also brought a performance improvement of between 5% and 12% in the snapping functions in my tests.
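One common way to avoid that per-call copying is to group the arguments into a struct and pass it by const pointer instead of by value; a generic sketch with hypothetical names, not the actual snapping code:

    typedef struct SnapArgs {
      float ray_origin[3];
      float ray_direction[3];
      float ray_depth;
      int snap_mode;
    } SnapArgs;

    /* By value: the whole struct is copied onto the stack on every call. */
    float snap_by_value(SnapArgs args);

    /* By const pointer: only a pointer is copied, the data stays in place. */
    float snap_by_pointer(const SnapArgs *args);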
Applied a fix similar to the one for T54233 to get the "Record with NLA" feature working with
active action blending + influence settings. Extrapolation is explicitly ignored
though, as it shouldn't be used with this feature (i.e. it is already disabled
with the new strips and also on the animdata by default)
It is hidden behind the --debug-io flag for now.
The idea is to try to catch broken library state in the current Main before we
actually write the file to disk; this should help catch and understand
what happens in the Spring corruption cases.
This is still broken; I can't tell whether the in_band
function does not work properly, whether there is an issue in the box
algorithm, or both.
It also seems like the calculation of the size of the box while rotated
needs to be fixed.
The work is mainly from Lukas Toenne, with some modifications from myself.
Includes the following obvious changes:
- Particle system selection is now name-based, with a lookup menu.
- Lots of new options to control variation.
Changes compared to the Gooseberry branch:
- Default values and versioning code ensures same behavior as the
old modifier.
- Custom data layers now come from vertex colors; the modifier
does not create arbitrary layers anymore. The hope is to keep data
more manageable, and maybe make it easier to select in the shader
later on.
This means values are quantized to 256 levels, but that should be
enough to get variation in practice.
Reviewers: brecht, campbellbarton
Reviewed By: brecht
Subscribers: eyecandy
Differential Revision: https://developer.blender.org/D3157
This statement is only relevant in 2.8, but causes confusion in master.
I kept the 'default' label to prevent compiler warnings about unhandled
cases. The break is needed because there should be at least one statement
after 'default'.
This finally allows us to use the Random factor to add variation to the
interpolated children. This feature never worked: since 2007 there has been a
random factor slider in the interface, but it was only used by simple
children. Now it affects interpolated children as well.
Technically, this will break compatibility if an older file had the random
factor set to something other than 0 (the default value is 0, though). But
we are leaving the 2.7 series, so we can accept such breakage in the name
of supported features.
A tab after a C++ comment broke parsing and didn't remove the line at all.
It was there since 2002 at least, and probably confused some people.
This means commented-out code was actually written to SDNA.
Why exactly this happens remains unclear; found it in the
autumn.blenrig file of the Spring production while working on static
overrides... Tons of ugly IDProps in that rig. xD
Added a lock-free deferred queue for deletion. Now if an ID icon
is requested to be freed from a non-main thread, it will be added
to the deferred list. The actual deletion will happen later from the main
thread.
Currently the actual deletion only happens the next time BKE_icon_id_delete()
is called, which might not be enough. But it's easy to enforce
deferred deletion.
Icons for preview images are not covered by deferred deletion yet.
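A minimal sketch of such a lock-free deferred-deletion queue (lock-free push from any thread, flushed from the main thread); hypothetical names, not Blender's actual implementation:

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct DeferredIcon {
      struct DeferredIcon *next;
      int icon_id;
    } DeferredIcon;

    static _Atomic(DeferredIcon *) deferred_head = NULL;

    /* Called from any thread: queue the icon for later deletion. */
    void icon_delete_deferred(int icon_id)
    {
      DeferredIcon *node = malloc(sizeof(*node));
      node->icon_id = icon_id;
      node->next = atomic_load(&deferred_head);
      /* On failure the CAS reloads the current head into node->next; retry. */
      while (!atomic_compare_exchange_weak(&deferred_head, &node->next, node)) {
      }
    }

    /* Called from the main thread: actually free everything queued so far. */
    void icon_flush_deferred(void (*free_icon)(int icon_id))
    {
      DeferredIcon *node = atomic_exchange(&deferred_head, NULL);
      while (node) {
        DeferredIcon *next = node->next;
        free_icon(node->icon_id);
        free(node);
        node = next;
      }
    }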
Reviewers: mont29
Differential Revision: https://developer.blender.org/D3146
The tests as they are now do string comparisons. This only works
on Windows, because the reference files look different on different
operating systems due to different number formatting.
The Collada tests need a complete rework (WIP).
- Wasn't clear which functions handle edit-bones.
- Mixed both ebone and edit_bone in names.
- Didn't use ED_armature_* prefix for public API.
See P655 to apply to branches.
- Move the static undo variable into 'WriteData';
'memfile_chunk_add' used arguments in a confusing way,
sometimes to set/clear the static var.
- Replace checks for 'wd->current' with 'wd->use_memfile',
and move the memfile vars into the 'wd->mem' struct.
Having that one when opening a file or loading some lib makes absolutely
no sense, and switching that 'temp' editor to some other type can
trigger all kinds of funny bugs...
Note that using the shortcut keys (Shift-F5 etc.) is still possible;
removing those seems a bit more involved. :/
It's not just the Graph Editor that needed this - the NLA also uses similar code
and thus suffers from a similar problem.
(My first commit from the Blender Institute v2.0 - Just testing that everything works)
In preparation for the removal of Blender Internal render, we
moved the vector blur code that was placed in the render package
(legacy) to the compositor. The compositor is the only user of this
code; even the Blender Internal renderer did not use it at all.
The previous assert assumed '..' is always there, which isn't necessarily
true (for example when in the root of an Asset Engine repository).
The new code asserts that if '..' is present it should be the first entry
(rather than forcing the first entry to be '..').
The Vector Transform node was added to the "Vector" category in
nodeitems_builtins.py,
but was using "NODE_CLASS_CONVERTOR" internally (thus using e.g. the
'wrong' theme color).
Thanks @dingto for the review.
Differential Revision: https://developer.blender.org/D3138
Fixes the "emtpy scrolling" glitch by clamping the scroller offset to
the boundary of the view when it's smaller than the previous.
Fixes T45197. Patch by @januz.
Differential Revision: D1580
The new constraint is slower and not backward compatible, but should
be better, especially on the damping side. The new constraint also
has a different valid range of the damping coefficient, and a limit
implementation that bounces instead of making the object stationary.
Reviewers: sergof
Differential Revision: https://developer.blender.org/D3125
WEBM is the name of the container format, and VP9 is the video codec (the
older codec "VP8" is less efficient than VP9).
WEBM/VP9 and h.264 both have options to control the file size versus
compression time (e.g. fast but big, or slow and small, for the same
output quality). Since WEBM/VP9 only has three choices, I've chosen to
map those to 3 of the 9 possible choices of h.264:
- BEST → SLOWER
- GOOD → MEDIUM
- REALTIME → SUPERFAST
The VERYSLOW and ULTRAFAST options give very little extra benefit.
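A sketch of that mapping as a plain lookup, with a hypothetical helper name (libvpx exposes the three levels as "best", "good" and "realtime"):

    #include <string.h>

    /* Map an x264-style preset name to the closest libvpx-vp9 quality level. */
    const char *vp9_quality_from_preset(const char *x264_preset)
    {
      if (strcmp(x264_preset, "slower") == 0)    return "best";
      if (strcmp(x264_preset, "medium") == 0)    return "good";
      if (strcmp(x264_preset, "superfast") == 0) return "realtime";
      return "good"; /* Reasonable default for everything in between. */
    }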
Reviewed by: @Severin
The encoding panel mentions "None" in a few places, which is confusing.
- "Codec: None" now reads "No Video"
- "Audio Codec: None" now reads "No Audio"
- "Output Quality: None; ..." now reads "Constant Bitrate"
When selecting "No Video" the remaining video encoding options are
hidden, making it even more explicit that there will not be video in the
output file.
The label "Codec" now reads "Video Codec" for symmetry with "Audio
Codec".
When importing multiple materials for one object,
the imported material animation curves were all
assigned to the first material of the object.
This fix also improves the console logging whenever the importer
finds a consistency problem with the imported animation data.
The MovieSequence and MovieClip classes now have a metadata() function
that exposes the `IDProperty *` holding the video metadata.
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
This is useful to create a mapping from the frame range in the video to
frame index in the blend file.
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
This is currently only supported by FFmpeg (so not frameserver, AVI RAW,
or AVI JPEG), and only seems to work when using Matroska or Ogg Theora
containers.
Only metadata that doesn't change from frame to frame is written to
video files. This distinction is visible in the UI by looking at the
stamp checkbox tooltips (they either mention "image" or "image/video").
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
- Metadata handling is now separate from `ImBuf *`, allowing it to be
used with a generic `IDProperty *`.
- Merged `IMB_metadata_add_field()` and `IMB_metadata_change_field()`
into a more robust `IMB_metadata_set_field()`. This new function
doesn't return any status (it now always succeeds, and the previously
existing return value was never checked anyway).
- Removed `IMB_metadata_del_field()` as it was never actually used
anywhere.
- Use `IMB_metadata_ensure()` instead of having
`IMB_metadata_set_field()` create the containing `IDProperty` for
you.
- Deduplicated function declarations, moved `intern/IMB_metadata.h` out
of `intern/`. Note that this does mean that we have some extra
`#include "IMB_metadata.h"` lines now, as the metadata functions are
no longer declared in `IMB_imbuf.h`.
- Deduplicated function declarations, all metadata-related declarations
are now in imbuf/IMB_metadata.h.
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
Free code should not handle ID refcounting at all. This has to be done
at a higher level, since in some cases we want to free (temp) data that
did not refcount its IDs at all.
This change seems to be working OK, but as usual in that area, only
lots of testing in real-case situation will say whether there are some
hidden bugs or not.
The issue was that *some* IDs (like the infamous nodetrees from materials etc.)
would not go through the 'main' read_libblock() func, so their tags were
never reset.
So now we ensure direct_link_id() always clears the tags, and move
setting them in read_libblock() to after the call to direct_link_id().
Needed for the depsgraph, but a generally healthier fix actually.
This is a part of copy-on-write sanitization, to avoid all the checks
which were attempting to keep sub-data pointers intact.
Point is: ID pointers never change for CoW datablocks, but nested
data pointers might change when updating existing copy.
Solution: Only bind ID data pointers and index of sub-data.
This will make the CoW datablock update function easier in 2.8.
In master we were only using pose channel pointers in callbacks,
this is exactly what this commit addresses. A linear lookup array
is created on pose evaluation init and is thrown away afterwards.
One thing we might consider doing is to keep an indexed array of
poses, similar to chanhash.
Reviewers: campbellbarton
Reviewed By: campbellbarton
Subscribers: dfelinto
Differential Revision: https://developer.blender.org/D3124
Back in the day (2.4x and before), it was rather easy to get some
invalid utf-8 strings into Blender. This totally breaks modern code,
so this commit adds a simple 'check & fix strings' operator, available
from the main File menu.
E.g. typing `bpy.data.bl_rna.properties[8].<tab>` in console would hard-crash
trying to dereference NULL pointer. Was a missing check in rna_Property_tags_itemf().
Better fix for T54457. It seems Debian compiles OpenVDB without ABI 3
compatibility, while Arch enables it, as is the default in the OpenVDB
CMake build system.
So now there's an option that the distribution can set depending on how
they compile their OpenVDB package.
Some functions always returned the input argument,
which was never used.
This made the code read as if there might be a leak.
Now they return a boolean (true if the ImBuf was modified).
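Illustrated with a hypothetical function (not an actual ImBuf API name):

    #include <stdbool.h>

    struct ImBuf;

    /* Before: the input was returned, which read as if a new ImBuf might be
     * allocated and ownership handed back to the caller:
     *   struct ImBuf *imbuf_do_something(struct ImBuf *ibuf);
     * After: the ImBuf is always modified in place; the return value simply
     * says whether anything changed. */
    bool imbuf_do_something(struct ImBuf *ibuf);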
- Use a single undo history for all operations.
- UndoType's are registered and poll the context to check if they
should be used when performing an undo push.
- Mode switching is used to ensure the state is correct before
undo data is restored.
- Some undo types accumulate changes (image & text editing)
others store the state multiple times (with de-duplication).
This is supported by checking UndoStack.mode `ACCUMULATE` / `STORE`.
- Each undo step stores ID datablocks they use with utilities to help
manage restoring correct ID's.
Needed since global undo is now mixed with other modes undo.
- Currently performs each undo step when going up/down the history.
Previously this wasn't done, making history fail in some cases.
This can be optimized to skip some combinations of undo steps.
Grease-pencil is an exception which has not been updated,
since it integrates undo into the draw-session.
See D3113
- See `--log` help message for usage.
- Supports enabling categories.
- Color severity.
- Optionally logs to a file.
- Currently used to replace printf calls in the wm module.
See D3120 for details.
Two issues are fixed with this commit:
1) When we build OIIO (on unixoid build environments) and no /src/doc/oiiotool was present, we had no build target for it (which led to a build error). As we don't need docs for OIIO, we disable them now.
2) We specified a var that OIIO does not recognize (was removed upstream a long time ago): ILMBASE_VERSION.
This update adds a link to the Wikipedia article "Euler angles" to the description of the mathutils.Euler class.
I initially was not sure what a "Euler" represented in Blender API, but found the Wikipedia article helpful. I believe others will find the link helpful too if it appears in the class documentation.
This is similar to the Wikipedia links that appear in the mathutils.Matrix class, e.g: https://docs.blender.org/api/blender_python_api_current/mathutils.html?highlight=euler#mathutils.Matrix.adjugate
Author: @justasb
Reviewers: campbellbarton, trumanblending, Blendify
Reviewed By: Blendify
Subscribers: Blendify
Tags: #bf_blender
Differential Revision: https://developer.blender.org/D3077
This is required for T54437 (sequencer preview uses last updated scene).
Although the fix itself needs to be in 2.8, for the 2.8-specific
initialization code.
Roughness baking previously defaulted to 1.0 for all diffuse materials,
now we also bake roughness values of Oren-Nayar and Principled Diffuse.
Differential Revision: https://developer.blender.org/D3115
When importing an xfov curve, we must transform the data to
lens opening angles in degrees. While the curve value itself is
correctly transformed, the transformation of the tangents had been
forgotten. This is fixed now.
Use C++11 threads when available, and native critical section on Windows.
Later on we can remove the pthread code when C++11 becomes required.
Differential Revision: https://developer.blender.org/D3116
- Get memory usage from MemFile instead of the MEM API,
which avoids possibly invalid values when threads allocate memory.
- Use size_t instead of uint and uintptr_t to store size.
- Rename UndoElem.str -> filename
- Rename MemFileChunk.ident -> is_identical
Random numbers for the step offset were correlated; now we use stratified
samples, which also reduces noise for some types of volumes, mainly procedural
ones where the step size is bigger than the volume features.
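The difference between purely random and stratified step offsets, as a generic sketch (not the actual kernel code):

    /* Purely random offsets: samples can clump and leave gaps, which shows
     * up as correlated noise in the ray-marched volume. */
    float offset_random(float rand01)
    {
      return rand01;
    }

    /* Stratified offsets: sample i of n is jittered within its own
     * sub-interval, so the offsets cover [0, 1) evenly. */
    float offset_stratified(int i, int n, float rand01)
    {
      return ((float)i + rand01) / (float)n;
    }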
Undo sometimes reserved too much space in the buffer;
now assert when this happens and allocate the exact size needed.
Note: this prepares for moving text editor undo out of the text block (D3113),
which will split the undo buffer into a list of undo steps.
With a better directory layout and more proper include
statements we can avoid several local modifications,
such as changing config.h for Windows Glog and the
ones related to pass-through statements in the logging
headers in Glog.
This commit also makes unused functions not-a-warning
for external code.
The list of editor-types is rather long by now, so better to arrange them into
sections.
Original patch by @jeske with updates by @Blendify and myself.
Design Task: T36028
Patch: D3112
* Pressing "OK" wouldn't close Blender anymore
* Using File -> Quit would use popup version, not OS native window
Cleaned up code a bit to avoid duplicated logic.
Steps to reproduce were:
* Open Blender (no need for factory settings, "Prompt Quit" needs to be enabled)
* Edit the file (e.g. translate some object)
* Quit Blender but don't skip the quit prompt
* Press "Save & Quit"
* Save the file
Not sure if Windows supports the "Save & Quit" behavior, so this may not have
applied to Windows.
You only had to close Blender through File -> Quit.
Leaks happened because WM_exit() was called from within the operator, so the UI
wasn't able to free some of its heap data then. This data was the handler added
in uiTemplateRunningJobs() and the IDProperty group added in uiItemFullO_ptr_ex().
There was obviously a general design issue which only became visible in this
specific case.
We now delay the WM_exit call by wrapping it into a handler that gets registered
as usual. I didn't see a better way to do this, all tricks done in
ui_apply_but_funcs_after() to prevent leaks didn't work here. In fact they may
be redundant now, but I am not brave enough to try ;)
Seems we cannot use the include-directories-order trick, since
files are included from inside a ".." style include, which forces the current
directory to be checked first.
MSVC still defines __cplusplus as 199711L until it's in full conformance with the newer C++ standards; however, the things we need from the standard are fully supported, hence a check for the MSVC version was needed.
Without this a "Clearcoat" link could be moved to "Clearcoat Normal"
for example, which doesn't make much sense.
Differential Revision: https://developer.blender.org/D3105
Increasing the sampling dimensions like this is not optimal; I'm looking
into some deeper changes to reuse the random number and change the RR
probabilities, but this should fix the bug for now.
PointCache was having a collection of items of PointCache type, having a
collection of items of PointCache type, having...
Nuff said.
For now, I chose the 'ugly' way to fix it, that is, the one that changes
nothing in the API and scripts using it: we define another 'PointCacheItem'
RNA type for items of our point cache collection, which has the exact same
interface as PointCache except for the collection.
This is doomed to be rewritten at some point anyway, not worth spending
time trying to define a really correct data layout for now.
* In the Collada Module parameters are typically ordered
in a similar way. I changed this to:
extern std::string get_joint_id(Object *ob, Bone *bone);
* The Object parameter was not used in get_joint_sid().
I changed this to:
extern std::string get_joint_sid(Bone *bone);
We had a mix of two issues here actually:
* First, Brushes currently use their own code for custom previews;
this is not great, but moving them to use the common ImagePreview system of
IDs is a low-priority TODO. For now, they should totally ignore their
own ImagePreview.
* Second, BKE_icon_changed() would systematically create a PreviewImage
for ID types supporting it, which does not really make sense; this
function is merely here to 'tag' previews as outdated. Actual creation
of previews is deferred to later, when we actually need them.
They are used to start and end colored output in the console.
Use with care; it is up to you to check that the console actually
supports Truecolor ANSI.
In the future we can extend this to other consoles and platforms.
Unity itself is deprecated, but the API is also supported by KDE and the GNOME Dock extension,
which means that it will be useful for a wide variety of distributions.
To get a progress bar, the system must have a blender.desktop file and libunity installed.
The need for libunity is annoying, but the only alternative would be to integrate a DBus library...
Reviewers: campbellbarton, brecht
Differential Revision: https://developer.blender.org/D3106
Requires BLI_utildefines.h to be included first,
(already noted in other inline code).
Possible alternative could be to move BLI_assert into own header.
For IDProp IDArrays, IDP_EqualsProperties was called for each item
instead of IDP_EqualsProperties_ex, discarding the value of the `is_strict`
option.
Probably not an issue with current code, though.
- The common names in computer science are 'getters' and 'setters', so by
adding these names to the documentation (while 'get' and 'set' are still
also mentioned) we improve findability. Having 'Getters/Setters' as a
title also makes it clearer that this example is not just about
getting or setting the property value.
- Added a little prefix to each printed value, so that print statement,
expected output, and real output can be matched more easily.
The old example had two downsides:
- It promoted a blocking UI design, where the user is shown a popup
before actually executing the operator.
- It didn't show how to actually use the property values.
The new code avoids these mistakes. The properties are also shown in the
redo panel in the 3D view.
Note that I also changed the bl_idname, as this is an example about
properties, not about dialogue boxes, and changed the class name to use
the standard operator naming convention.
I also extended the example to include a panel that sets multiple
properties of the operator, since I see questions about this relatively
frequently.
When adding scene strips to the sequencer, the wrong scenes were
getting added if some were skipped. For example:
Given 4 scenes (A, B, C, D), if you're trying to add the last 3 scenes
(B, C, D) as strips to the first scene (A), it would end up adding
"A, B, C" instead of "B, C, D" as expected.
Fix provided by Andrew (signal9).
Need to clear the ID recalc flag on load. Otherwise it's possible to have
some IDs considered always updated by Cycles, when they were saved
in a tagged-for-update state.
Thanks Bastien for feedback and review!
Nothing user visible, only things needed for multi-object support,
making picking functions more flexible too.
- Support passing in an initialized hit-struct,
so it's possible to do multiple nearest calls on the same hit data.
- Replace manhattan distance w/ squared distance
so they can be compared.
- Return success to detect changes to a hit-data
which might already be initialized (also more readable).
* Suspicious usage of a pointer (see the before/after sketch below):
short *type = 0; // this creates a null pointer
When this is later used for anything, Blender would crash.
After following the code and checking what happens, I strongly believe
the author wanted to use a short and not a pointer to a short here.
* A local variable was reused later in the same function.
While this did no harm, I still felt it was better to use a different
name here to keep things more separated:
- moved variable declarations into the loop (for int a=0; ...)
- renamed uv_images to uv_image_set
- renamed the index variable from i to j in the inner loop that
reused the same index name as the outer loop
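The before/after for the pointer issue mentioned in the first point:

    /* Before: this declared a null pointer; any later dereference crashes.
     *   short *type = 0;
     * After: a plain short initialized to zero, which is what was intended. */
    short type = 0;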
The iterator was redeclared 3 times. I fixed this to avoid future issues.
I commit this separately so the changes are less cluttered all over
the place.
The variable child was redeclared multiple times in the same function.
While this has not created any issues, I still changed it to avoid
confusion and keep the usage of the variables more local.
The function validateConstraints() potentially causes a null pointer
exception. I changed this so that the function returns a failure as soon
as the validation fails. This avoids falling into the null pointer trap.
The 2 methods add_bezt() and create_bezt() do almost the same thing.
I combined them into add_bezt() and added the optional parameter
eBezTriple_Interpolation ipo.
Don't write the multichannel metadata when there is only a single layer,
and don't unnecessarily consider single layer images with Blender metadata
as multi layer.
This saves a little memory and copying in the kernel by storing only a 4x3
matrix instead of a 4x4 matrix. We already did this in a few places, and
those don't need to be special exceptions anymore now.
This is in preparation of making Transform affine only, and also gives us
a little extra type safety so we don't accidentally treat it as a regular
4x4 matrix.
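A sketch of the idea (not Cycles' actual declaration): an affine transform only needs three rows, since the fourth row of an affine 4x4 matrix is always (0, 0, 0, 1).

    typedef struct float4 {
      float x, y, z, w;
    } float4;

    /* Affine transform stored as 4x3: three rows of four floats, with the
     * translation in the .w components. The implicit fourth row is
     * (0, 0, 0, 1), so it never needs to be stored or copied. */
    typedef struct Transform {
      float4 x, y, z;
    } Transform;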
The purpose of the previous code refactoring was to make the code more readable,
but combined with this change benchmarks also render about 2-3% faster with an
NVIDIA Titan Xp.
If no custom URL was set, add-ons would get a "Report a Bug" button opening
the default developer.blender.org bug tracker. Now we only add this default
button if the add-on is bundled and not installed by the user.
Premise: When pose bones are selected, applying a pose library should
only affect the selected bones.
This commit fixes a bug where the pose was also applied when there was
no overlap between the selected bones and the bones in the pose. For
example, applying a pose which contains only keyframes for the left
hand, while only right-hand bones are selected, would apply the pose
to the left hand anyway.
The code is now also slightly more efficient; the removed 'selcount'
counter was only used as a binary (i.e. zero or non-zero). It's now
stored as a bitflag instead.
Currently only covering a handful of files from reports about wrongly detected FPS.
It will need D3083 applied first to get the tests to pass; also, the tests themselves
are to be committed to svn.
But there is some Python code which needs to be reviewed, like the blendfile
passed to run_blender().
Reviewers: sybren, mont29
Reviewed By: sybren, mont29
Subscribers: mont29
Differential Revision: https://developer.blender.org/D3096
This is an issue of which value to trust: fps vs. tbr. They both can be
somewhat broken. Currently the idea is:
- If file was saved with FFmpeg AND we are decoding with FFmpeg we trust tbr.
- If we are decoding with Libav we use fps (there does not seem to be tbr in
Libav, unless I'm missing something).
- All other cases we use fps.
Seems to work fine for files from T53857, T54148 and T51153. Ideally we
would need to collect some amount of regression files to make further tweaks
more scientific.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D3083
Each AnimData block has a set of Blend/Extrapolation/Influence settings
that can be used to control how the active action is blended with the
NLA stack. However, these settings were not getting copied over to the
newly created strips (as the push-down code existed long before these
settings were added).
This commit solves this in several ways:
* Active Action Blend/Extrapolation/Influence settings now get copied
to the new strips when adding them to the NLA stack via Push Down.
Note: This doesn't happen when there are no existing NLA tracks,
as these settings don't get used in that case.
* Strip Influence will be copied across when inf < 1.0 (i.e. when a
non-default value is used), to maintain the effect. To make this work,
the influence value will get added as a keyframe to the strip's
"Influence" Control FCurve.
- See code comments for an alternative approach and why that was not chosen
- Strip Time still doesn't get keyframes added automatically yet.
* To ensure the "extrapolation mode" settings don't get always overwritten,
I've put in place a compromise: the extrapolation will only get changed
if the chosen setting will cause problmes (i.e. hold forward & back -> hold forward
if there are other tracks before it already).
Not safe for backporting to 2.79[x] stable releases.
Now repeating the operator will use the previously chosen offset, either with
the modal operator or typed in. The modal operator will still start at zero.
Vertex group remapping utility function,
now shared between object join and array modifier cap-ends.
Weights which don't exist are removed.
D3092 by @Foaly
Main purpose is to make it possible to cover FPS detection with regression test.
But it might also be handy for some other scripters.
Thanks Campbell for review!
The issue was happening with fast Gaussian blur, and was caused by NaN-valued
pixels in the input buffer.
Now made it so the Map Range output does not produce NaN, by returning an
arbitrary value of 0. Still better than NaN!
Previous fix for T53430 caused T54200.
The edge case for soft & hard cuts weren't working,
where the strip used start/end-still & the frame was placed exactly on
the start/end of of the sequence content.
T54200 fixed the end-still case but broke hard-cuts for all other cases.
This fixes the case for soft/hard cuts with/without start/end-still.
Introduced an explicit ID property node for drivers in the depsgraph,
so it is clear what the input for the driver is, and what the
output is.
This also solves the relations builder throwing lots of errors
due to the ID property not being found.
It was possible to have relations like A -> B -> C -> A (the important thing is
that no other operations point into this cluster) which were not detected
or reported by the dependency cycle solver.
Now this is solved by ensuring we don't leave unvisited nodes behind.
This is probably a better way to handle it: instead of totally
discarding scaling of non-free axes, keep the ratio between them.
Basically the logic of the constraint is now that it rescales the
object uniformly in the non-free axis plane in order to force the
total volume change to the desired value.
It seems the reason the old version of the constraint overcompensates
as reported in T48079 is to allow the constraint to work with uniform
scaling on all axes. However the way it did that actually _requires_
uniform scaling for the constraint to work correctly, and breaks if
only the free scaling axis is used to avoid redundant channels.
This version attempts to allow both by discarding scaling in the non-
free directions instead of applying the correction on top of it.
This merges runtime-only internal changes to the existing custom
normals code, which make sense on their own, and will make the diff of the
SoC branch easier/lighter to review.
In the details, it mostly changes two things:
* Now, smooth fans (aka MLoopNorSpaceArray) can store either loop
indices, or pointers to the BMLoops themselves. This makes sense since in
BMesh it's relatively easy to get an index from a BMElement, but nearly
impractical to go the other way around.
* The first change forces another: now we cannot rely anymore on `loops`
being NULL in MLoopNorSpace to detect single-loop fans, so we instead
store that info in a new flag.
Again, these are expected to be totally non-functional changes.
around the volume.
We generate a tight mesh around the active voxels of the volume in order
to effectively skip empty space, and start volume ray marching as close
to interesting volume data as possible. See code comments for details on
how the mesh generation algorithm works.
This gives up to 2x speedups in some scenes.
Reviewed by: brecht, dingto
Reviewers: #cycles
Subscribers: lvxejay, jtheninja, brecht
Differential Revision: https://developer.blender.org/D3038
This is more important now that we will have tighter volume bounds that
we hit multiple times. It also avoids some noise due to RR previously
affecting these surfaces, which shouldn't have been the case and should
eventually be fixed for transparent BSDFs as well.
For non-volume scenes I found no performance impact on NVIDIA or AMD.
For volume scenes the noise decrease and fixed artifacts are worth the
little extra render time, when there is any.
* Use a subsurface color equal to the base color, and give the subsurface
radius skin-like values by default. This is how the parameter should
typically be used.
* Use GGX by default, multiscatter GGX is still quite noisy and has some
fireflies so let's keep it optional for now.
While doing so with Bone_R.001, Bone_R.002, Bone_R.003 etc. is doomed to
cause issues, doing it on duplicates of actually correctly named bones can
be handy, and safe.
So adding it back as an option (it was removed in rB702bc5ba26d5).
The Flip Names operator changed in rB702bc5ba26d5, to some sensible
behavior. But this breaks the common workflow of 'duplicate part of the
bones, scale-mirror the new ones, and flip their names'.
So now, instead of doing this in two steps, trying to guesstimate which
bones should get which name, we just add an option to flip names to the
duplicate operator itself. Simpler, safer, and much, much more consistent
behavior and predictable results.
This solves an issue with tweaking the brush size when interleaving particle
edit and texture paint modes. The issue was caused by texture paint setting
more operator properties than is done for particle edit mode, which made the
window manager use saved properties for the "missing" ones.
I don't see any reason why we would want to save any of those properties.
This is a regression since rB83b60dac57a1.
Note that setting `glDepthFunc` isn't important,
since 2.8 branch changes this value it might seem like an error
however it's harmless in this case - so better make note of this.
The exporter exports matrix data (4*4 transformation matrix) only for skeletal animation. For object animation, only exporting to trans/rot/loc is implemented.
This task implements matrix export also for simple object animation.
Differential Revision: https://developer.blender.org/D3082
This started with a fix for an animated object hierarchy. Then I decided to clean up and optimize a bit. But in the end this has become a more or less full rewrite of the Animation Exporter. All of this happened in a separate local branch and I have retained all my local commits to better see what I have done.
Brief description:
* I fixed a few issues with exporting keyframed animations of object hierarchies where the objects have parent inverse matrices which differ from the Identity matrix.
* I added the option to export sampled animations with a user defined sampling rate (new user interface option)
* I briefly tested Object Animations and Rig Animations.
What is still needed:
* Cleanup the code
* Optimize the user interface
* Do the Documentation
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D3070
This is used to determine which voxels are to be considered empty space.
Previously it was hardcoded for converting dense grids to OpenVDB grids
to reduce disk space usage.
This value is also useful for rendering engines to know, i.e. to
optimize ray marching.
Some other software cannot handle grid names with spaces in them. We still check for names with spaces so as to not break old
files.
This fixes T53802.
Similar to the Principled BSDF, this should make it easier to set up volume
materials. Smoke and fire can be rendered with just a single principled
volume node; the appropriate attributes will be used when available. The node
also works for simpler homogeneous volumes like water or mist.
Differential Revision: https://developer.blender.org/D3033
When you were using autosmooth to generate some custom normals, and
created empty custom loop normal data, you would go back to an 'all
smooth' shading, cancelling some sharp edges generated by the mesh's
smooth threshold.
Now we will first tag such edges as sharp, such that shading remains the
same. This is not crucial in current master, but it is for clnors
editing gsoc branch!
Regression caused by earlier commits to improve the automerge behaviour.
In this case, the problems only occurred when moving a selected keyframe
forwards in time to overlap an unselected keyframe.
While it is necessary to ignore duplicates when doing Deselect/Column Select
(where double-updates may result in nothing being selected), for borderselect,
not including the duplicates meant that sometimes, nothing would happen
if you were trying to borderselect keyframes originating from hidden channels.
This was first noticed in the greasepencil-object branch, but affects all
animation channel types.
This allows us to:
- Not mess around with tags stored in a global space,
and not iterate over all datablocks in the database
to clear the tags.
- Properly deal with datablocks which might not be in the main database.
While it sounds crazy, it might be handy when dealing with previews,
or some partial scene updates, such as motion paths.
- Avoid the majority of places where depsgraph construction needed bmain.
This is something which could help in the blender2.8 branch.
From tests with a production file here, I did not see any measurable slowdown.
Hopefully there are no functional changes :)
We now continue transparent paths after diffuse/glossy/transmission/volume
bounces are exceeded. This avoids unexpected boundaries in volumes with
transparent boundaries. It is also required for MIS to work correctly with
transparent surfaces, as we also continue through these in shadow rays.
The main visible changes is that volumes will now be lit by the background
even at volume bounces 0, same as surfaces.
Fixes T53914 and T54103.
The other approach was causing too much error in some cases (e.g. favouring
the lower-valued keyframes). This fix should make the resulting curves less
bumpy/jagged.
This commit removes an earlier attempt at optimising the lookups
for duplicates of a particular tRetainedKeyframe once we'd already
deleted all the selected copies. The problem was that now, instead
of getting rid of the unselected keys (i.e. the basic function here),
we were only getting rid of the selected duplicates.
With this fix, unselected keyframes will now get removed (as expected)
again. However, we currently don't take their values into account
when merging keyframes, since it is assumed that we don't care so much
about their values when overriding.
that end up on the same frame
Currently, when scaling keyframes in the Dopesheet, if multiple
selected keyframes end up on the same frame post-scaling, they
would not get removed by the "Automerge" setting that normally
removes duplicates on the same frame.
This commit changes the behaviour so that when multiple selected
keyframes end up on the same frame, instead of keeping all these
around on the same frame (e.g. resulting in a column of keyframes
on different values), we will instead merge them into a single
keyframe (by averaging the values). This should result in a
smoother F-Curve with fewer "stair-steps" that need to be carefully
cleaned out afterwards.
Requested by @hjalti
This bug took a while to track down. In the test file with this report,
the Nla-Strip Control Curve for strip time would get disabled if you
changed the NLA Editor to a second Graph Editor instance.
It turns out that because this second Graph Editor would have the
"filter fcurves by name" option enabled, this would trigger a lookup
of the referenced property's name (in order to test whether it matched
the filtering criteria). However, since that filtering code was written
before the introduction of these curves, it still assumed that the names
for these Control Curves should be handled the same as for standard FCurves.
Unfortunately, that doesn't work, as the property lookups fail if the standard
method is used - when the lookups fail, the F-Curves get tagged as being
invalid/disabled (and need to be reset using the "Revive Disabled FCurves"
operator).
Note: The changes in this patch look complicated, as I've had to shuffle
a bit of code around so that the name-filtering check can have access to
the additional info it needs. In the process, I've also removed the earlier
(hacky) approach where the control curves were getting added to a temp
buffer to get changed from normal FCurves to special ANIMTYPE_NLACURVES.
Not sure why we need a relation from the solver to a tip's local transform;
this will be handled via the parent relation.
Fixes remaining dependency cycles reported in T54083.
It is not possible to address a transform at a particular position of the
constraint stack, and when a constraint is being addressed it is usually from
a driver variable.
This fixes some of dependency cycles reported in T54083.
This is a regression in rB4f1c0a1 which only allowed cutting hair at the
second segment, while there is nothing wrong with cutting hair at the
first segment.
Don't use dm->get*Array for DM you don't own. This call can allocate temporary
CD layer, which is not thread safe at all.
Also removed hard-coded logic around CDDM check. new functions will do same
logic, but are mode DM-type-=independent.
We shouldn't mix image pool acquisition with and without a user provided;
the fact that internally image.c uses the last frame from the Image datablock
confuses the logic.
Optionally don't remap indices for objects.
Checking all objects' parents would reference a freed pointer
while freeing all objects.
In the case of dynamic topology there is no use in keeping track
of hook/vertex-parent indices.
Also disable this when creating meshes for undo storage
since adding an undo step shouldn't be modifying other objects.
This reuses the Cycles regression test code to also work for OpenGL UI drawing.
We launch Blender with a bunch of .blend files, take a screenshot and compare
it with a reference screenshot, and generate an HTML report showing the failed
tests and their differences.
For Cycles we keep small reference renders to compare against in svn, but for
OpenGL, developers currently have to generate the references manually. How to use:
* WITH_OPENGL_DRAW_TESTS=ON in CMake
* BLENDER_TEST_UPDATE=1 ctest -R opengl_draw
* .. make code changes ..
* ctest -R opengl_draw
* open build_dir/tests/opengl_draw/report.html
Differential Revision: https://developer.blender.org/D3064
Once the 'losing lib' issue is fixed (in the previous commit), we have a new
issue: this could lead to several copies of the same linked data-block in the
.blend file. Which is not good. At all.
So I had to add a GHash-based check in the library reading code to ensure we
only load the same ID from the same lib once.
The issue was that when the same lib was found several times in the loaded
.blend, we'd only keep the first occurrence. But since Blender expects
the next data-blocks to belong to the last found library, we could actually
be adding data-blocks assigned to copies of the duplicated lib to
another, totally unrelated lib.
Those data-blocks were then obviously not found when actually loading
the libs' content, and lost.
Note that this only fixes one part of the issue; the current code can
generate several copies of the same linked data-block now, will fix that in
another commit.
While the script should be using INVOKE_PREVIEW for operators in the clip view,
the window manager was lacking some switch statements.
Thanks Brecht for the review!
- Use BLI_threadpool_ prefix for (deprecated)
thread/listbase API.
- Use BLI_thread as prefix for other functions.
See P614 to apply instead of manually resolving conflicts.
- When returning the number of items in a collection use BLI_*_len()
- Keep _size() for size in bytes.
- Keep _count() for data structures that don't store length
(hint this isn't a simple getter).
See P611 to apply instead of manually resolving conflicts.
It kind of doesn't matter where the macro itself is defined.
We should stick to the following (examples below):
- If some macro is actually more of an inline function, follow regular
function name conventions.
- If a macro is a macro, type it in capitals. Use a module prefix if that
helps readability or if it helps avoid accidents.
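For example (hypothetical names, purely to illustrate the two conventions):

    /* Behaves like an inline function: keep regular function naming. */
    #define square_f(a) ((a) * (a))

    /* A 'real' macro: capitals, with a module prefix to avoid accidents. */
    #define MYMODULE_ARRAY_LEN(arr) (sizeof(arr) / sizeof((arr)[0]))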
This completes the twist feature, which can now also be controlled by a
texture. Since textures cannot easily contain negative values either, the
same 0.5-neutral trick as for vertex groups is used.
All in all, the twist feature allows doing the following things.
Original hair:
{F2287535}
Hair with scientifically calculated twist value of 0.5:
{F2287540}
And we can also twist braids in opposite directions depending on the
left/right side:
{F2287548}
The idea is to give control over the direction of twist, and maybe the amount
of twist as well. A more concrete example: make braids on the left and right
sides of a character's head twist in opposite directions.
Now, the tricky part: we need negative values to flip the direction, but
weights cannot be negative. So we use the same trick as displacement maps and
tangent-space normal maps, where 0.5 is neutral, values below 0.5 are
considered negative and values above 0.5 are considered positive.
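The remapping itself is trivial; a minimal sketch with a hypothetical helper name:

    /* Map a texture/weight value in [0, 1] to a signed factor in [-1, 1],
     * with 0.5 as the neutral value (same trick as tangent-space normal
     * maps): 0.0 is full negative twist, 1.0 is full positive twist. */
    float twist_factor_from_weight(float weight)
    {
      return (weight - 0.5f) * 2.0f;
    }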
It allows child hairs to be twisted around the parent curve, which is
quite an essential feature when creating hair braids.
There are currently two controls:
- Number of turns around the parent curve.
- Influence curve, which allows modifying the "twistness" along the strand.
This isn't supported since there are subsequent reads of all point coordinates
after the modification started.
Probably we need to create a temp copy of the point, but that's extra CPU
ticks.
It seems to be useful still in cases where the particles are distributed in
a particular order or pattern, to colorize them along with that. This isn't
really well defined, but might as well avoid breaking backwards compatibility
for now.
This is basically the only way to add variety to hair which is created
using simple children. Used here for the hair.
Maybe not ideal, but time will tell.
Burley SSS uses a bit of a strange setup where the albedo and closure weight are
different, which makes the subsurface color act a bit like a subsurface radius
indirectly by the way the Burley SSS profile works.
This can't work for random walk SSS though, and it's not clear to me that this
is actually a good idea since it's really the subsurface radius that is supposed
to control this. For now I'll leave Burley SSS working the same to not break
backwards compatibility.
This can be very slow if it contains a big texture, and it's not
necessarily setup in a useful way anyway, and materials can be used
in multiple scenes.
It is basically brute force volume scattering within the mesh, but part
of the SSS code for faster performance. The main difference with actual
volume scattering is that we assume the boundaries are diffuse and that
all lighting is coming through this boundary from outside the volume.
This gives much more accurate results for thin features and low density.
Some challenges remain however:
* Significantly more noisy than BSSRDF. Adding Dwivedi sampling may help
here, but it's unclear still how much it helps in real world cases.
* Due to this being a volumetric method, geometry like eyes or mouth can
darken the skin on the outside. We may be able to reduce this effect,
or users can compensate for it by reducing the scattering radius in
such areas.
* Sharp corners are quite bright. This matches actual volume rendering
and results in some other renderers, but maybe not so much real world
objects.
Differential Revision: https://developer.blender.org/D3054
Looks like there was no way to avoid that so far: since
WM_event_add_timer_notifier can store a mere int-in-pointer there, this can
cause issues. So added a simple flags system to wmTimer to allow
controlling this.
Instead of calling an operator I just call `collection.new()`. Moving the
code into a separate function also simplifies it. In its new form there is
also no undefined behaviour when me.vertex_colors is non-empty but without an
active layer.
- normalize → average the vector: the vector isn't normalized here, because
it doesn't necessarily become unit length. Instead, the sum is converted
to an average vector.
- angle is the acos()…: the dot product between the vertex normal and the
average direction of the connected vertices is computed, and not the
opposite.
- The initial `con` list was discarded immediately and replaced by a new
list.
- File didn't end with a newline.
We've got a quite comprehensive BMesh-based implementation, which is way easier
to maintain than the abandoned Carve library.
After all this time, the BMesh implementation was working with the same level
of limitations regarding manifold meshes and touching edges as Carve. It is
better to focus on maintaining one boolean implementation now.
Reviewers: campbellbarton
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D3050
Previously quads were always split along the first-third vertices.
This is still the default, to avoid flickering with animated deformation;
however, concave quads that would create two opposing triangles now use the
second-fourth split.
Reported as T53999, although this issue has been a known limitation
for a long time.
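A minimal 2D sketch of the diagonal choice (not the actual kernel test): keep the first-third split unless it would produce two triangles with opposite winding, i.e. the quad is concave across that diagonal.

    #include <stdbool.h>

    /* Twice the signed area of triangle (a, b, c). */
    static float cross2d(const float a[2], const float b[2], const float c[2])
    {
      return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
    }

    /* True if splitting the quad along v0-v2 keeps both triangles
     * winding the same way; otherwise split along v1-v3 instead. */
    bool split_first_third(const float v0[2], const float v1[2],
                           const float v2[2], const float v3[2])
    {
      return cross2d(v0, v1, v2) * cross2d(v0, v2, v3) >= 0.0f;
    }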
commits:
Category: UV Unwrapping SLIM Algorithm Integration
Added SLIM Subfolder
Category: UV Unwrapping SLIM Algorithm Integration
added subfolder SLIM to CMakeFile of /intern
Category: UV Unwrapping SLIM Algorithm Integration
integrating SLIM including data gathering and transfer from Blender to SLIM
This commit is huge, because I copied over the code from a different repository, not commit-by-commit.
The algorithm can be invoked either by choosing SLIM from the dropdown in the unwrapping settings or by
hitting Ctrl+M in the UV editor for relaxation. I tried adding it to the menu the same way as Minimize Stretch, but failed.
Category: UV Unwrapping SLIM Algorithm Integration
preserving vertex ids and gathering weights
Category: UV Unwrapping SLIM Algorithm Integration
adding more members to phandle and creating param_begin
Category: UV Unwrapping SLIM Algorithm Integration
fixing bug that causes weight-per-vertex mapping to be wrong
Category: UV Unwrapping SLIM Algorithm Integration
adjustments to the UI parameters
Category: UV Unwrapping SLIM Algorithm Integration
adding weightinfluence
Category: UV Unwrapping SLIM Algorithm Integration
slim interactive exec now only for changing parameters
Category: UV Unwrapping SLIM Algorithm Integration
taking care of memory leaks
Category: UV Unwrapping SLIM Algorithm Integration
adding relative scale, reflection mode and Vertex group input
Category: UV Unwrapping SLIM Algorithm Integration
correcting wrong comment on SLIM phases
Category: UV Unwrapping SLIM Algorithm Integration
Adding SLIM code by means of git read-tree
Category: UV Unwrapping SLIM Algorithm Integration
freeing matrix_transfer properly
Category: UV Unwrapping SLIM Algorithm Integration
adding (unsupported-) eigen files
Reviewers: brecht, sergey
Differential Revision: https://developer.blender.org/D2530