In some situations the artist needs to subdivide a stroke that was
created with few points, especially for sculpting.
The subdivision is applied between any pair of consecutive selected
points in the same stroke.
The operator can be activated in edit mode with the W key and has a
parameter for the number of cuts.
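A minimal sketch of that logic (a hypothetical pure-Python helper, not
the actual operator code): new points are interpolated between each
pair of consecutive selected points.

    def subdivide_stroke(points, selected, cuts=1):
        """points: list of (x, y, z) tuples; selected: per-point bools."""
        result = [points[0]]
        for i in range(1, len(points)):
            a, b = points[i - 1], points[i]
            if selected[i - 1] and selected[i]:
                # insert `cuts` evenly spaced new points between a and b
                for c in range(1, cuts + 1):
                    t = c / float(cuts + 1)
                    result.append(tuple(a[k] + (b[k] - a[k]) * t
                                        for k in range(3)))
            result.append(b)
        return result

    # one cut on a fully selected two-point stroke inserts the midpoint:
    print(subdivide_stroke([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
                           [True, True], cuts=1))
    # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]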
The idea is to have a dedicated thread which is responsible for all the
file writing, so a slow disk will not slow down OpenGL itself.
Gives a really nice speedup of around 1.5x when exporting the barber
shop layout file to h264 video.
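Roughly, the setup looks like this (an illustrative Python sketch with
made-up names, not the actual Blender code):

    import queue
    import threading

    frames = queue.Queue(maxsize=8)  # bounded, so RAM use stays capped

    def writer(path_pattern):
        while True:
            item = frames.get()
            if item is None:        # sentinel: export is finished
                break
            index, pixels = item
            with open(path_pattern % index, "wb") as f:
                f.write(pixels)     # slow disk I/O, off the OpenGL thread

    t = threading.Thread(target=writer, args=("/tmp/frame_%04d.raw",))
    t.start()
    for i in range(4):              # stand-in for the OpenGL render loop
        frames.put((i, bytes(64)))  # enqueue pixels, keep rendering
    frames.put(None)
    t.join()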
There were a couple of crashes caused by stupid typos in
rB631af9f930d2fd2c76751204ff22239aa95f761d and
rB78ea06fea4a74181c25254ed72d50d8a743b6954, as well as a shameful lack
of 'testing before committing'. These only affect exporting.
One crash was due to using RNA_boolean_get instead of RNA_enum_get; the
other was a tricky case of deletion order in the destructors of
AbcExporter and ArchiveWriter.
Should not affect RC or release.
It was annoyingly slow to do a roundtrip from the byte OpenGL render to
a float render result and back to a byte image format (which is used
in 99% of cases for OpenGL previews).
Now we use the render result's rect32 to store the render, which is
already supposed to be in display space.
Gives about 30% speed improvement for OpenGL previews here.
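Schematically the change amounts to this (a numpy illustration of the
data flow, not the actual C code):

    import numpy as np

    h, w = 1080, 1920
    byte_render = np.random.randint(0, 256, (h, w, 4), dtype=np.uint8)

    # Old path: byte OpenGL render -> float render result -> byte image,
    # i.e. two full-frame conversions per preview.
    float_result = byte_render.astype(np.float32) / 255.0
    byte_image = (float_result * 255.0).astype(np.uint8)

    # New path: store the byte buffer directly in the render result's
    # rect32 (already in display space); both conversions disappear.
    rect32 = byte_render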
It was quite weak to consider all scripted expressions to be time-dependent.
The current solution is somewhat better but still crappy. Not sure how we can
make it really nice.
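To illustrate the kind of heuristic this implies (a hypothetical
sketch, not the real implementation):

    def expression_is_time_dependent(expression):
        # Crude textual check: only treat expressions that mention the
        # current frame as time-dependent. Still imperfect: a variable
        # named 'framerate' would false-positive, and indirect access
        # to the time would be missed entirely.
        return "frame" in expression

    print(expression_is_time_dependent("sin(frame / 10.0)"))  # True
    print(expression_is_time_dependent("var_a * 2.0"))        # False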
Stupid mistake: wrapping the path validation code inside a BLI_assert means
it was only called in Debug builds...
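The same failure mode is easy to reproduce in Python, where asserts are
stripped when running with -O (a minimal sketch):

    def validate_path(path):
        print("validating", path)  # stand-in for the real validation work
        return bool(path)

    # Wrong: under `python -O` the assert, and thus the validation
    # itself, silently vanishes.
    assert validate_path("/tmp/untitled.blend")

    # Right: always run the check, only assert (or handle) the result.
    ok = validate_path("/tmp/untitled.blend")
    assert ok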
Found by Sergey, thanks.
Should be backported to 2.78.
Those 'never null' ID pointers are really a PITA to handle... luckily we don't have many of those around!
Found by Sybren, thanks.
Should be backported to 2.78.
This is an internal pointer helper for scene evaluation and tools; though it
is exposed to the bpy API, it can give false 'dependency cycles' in
bpy.data.user_map() results.
That's a followup to rBe007552442634 really; both should be backported to 2.78.
This is an internal pointer helper for scene evaluation and tools; it's not
exposed to the bpy API anyway, and can give false 'dependency cycles' in
bpy.data.user_map() results.
Found by sybren in his Splode work.
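For reference, a usage sketch of the API in question (runs inside
Blender only):

    import bpy

    # user_map() maps each datablock to the set of datablocks using it.
    # A purely-runtime helper pointer like the one above can make an ID
    # show up among its own users, i.e. a false 'dependency cycle'.
    for datablock, users in bpy.data.user_map().items():
        if datablock in users:
            print("self-reference reported for:", datablock)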
Uses a similar way of storing temp data as object copy-paste, just with
a different read entrypoint which does not modify the current bmain.
This gives the ability to easily copy-paste poses from one Blender
instance to another.
Hopefully doesn't introduce user-measurable differences.
Request from Peer here in the studio.
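Usage is unchanged (sketch, to be run inside Blender with an armature
in Pose Mode); the difference is that copy and paste may now happen in
two separate Blender processes:

    import bpy

    # In the first Blender instance: select the bones to transfer, then
    bpy.ops.pose.copy()

    # In the second Blender instance, with a compatible armature in
    # Pose Mode: the buffer is read back from the temp file on disk.
    bpy.ops.pose.paste()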
Reviewers: mont29
Reviewed By: mont29
Subscribers: hjalti, fsiddi
Differential Revision: https://developer.blender.org/D2229
Regression from rB036c006cefe471. We can't use self here; self is bpy.app,
not the pydescriptor of the Python path getsetter...
So for now, do not try to replace the getsetter with the actual value in
bpy.app's dict; just return a static var generated on first run.
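In Python terms the behaviour is roughly this (an analogy with made-up
names, not the actual C code):

    import sys

    def compute_python_paths():  # hypothetical stand-in for the real lookup
        return tuple(sys.path)

    class CachedPathGetter:
        _value = None  # the "static var generated on first run"

        def __get__(self, instance, owner):
            if CachedPathGetter._value is None:
                CachedPathGetter._value = compute_python_paths()
            return CachedPathGetter._value

    class App:  # stand-in for bpy.app
        python_paths = CachedPathGetter()

    print(App().python_paths[:2])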
Should be safe for 2.78.
I) The filename was not set in the temp Main generated to save selected data
only; this was breaking the read code when trying to open the partial file,
leading to a missing filename in the final loaded Main data.
II) The read code would confuse partial .blend files with Undo ones when they
had no screen in them (which happens to 99.999% of partial .blend files, I
guess).
Reported by @sybren, thanks.
Should be safe enough for 2.78 release.
The idea is to allow certain animation channels to be always visible in
animation editors. So, for example, one can pin Camera animation to the
editor so it is always possible to refine/tweak camera animation when
animating something else in the scene.
There is probably some more polishing required, and some current
limitations could be solved in the future, but this should already be a
good starting point.
Currently this only works for objects, without recursing into deeper
datablocks (so, for example, it's not possible to pin an object's
material animation).
Studio request by Colin Levy.
Now the factor works similarly to other Blender areas, to make it more
consistent for artists. A value of 0% means equal to the original
stroke, 100% equal to the final stroke (50% means halfway). Any value
below 0% or greater than 100% creates an overshoot of the stroke.
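The mapping itself is just a linear interpolation that is allowed to
extrapolate (hypothetical helper, for illustration):

    def interpolate(original, final, factor):
        # factor is the percentage as 0.0-1.0; values outside that
        # range extrapolate, giving the overshoot described above
        return original + (final - original) * factor

    print(interpolate(0.0, 10.0, 0.0))  # 0.0  -> original stroke
    print(interpolate(0.0, 10.0, 0.5))  # 5.0  -> halfway
    print(interpolate(0.0, 10.0, 1.0))  # 10.0 -> final stroke
    print(interpolate(0.0, 10.0, 1.5))  # 15.0 -> overshoot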
The root of the issue is that the active render index became wrong. This is
the actual thing to be fixed, but as usual it is quite tricky to reproduce.
Since such a bad situation might have happened more often, and the fix isn't
really difficult or intrusive, let's avoid the crash for now.
Can be revisited once we figure out root of the issue.
Nice for 2.78 release.
Own fault in the new ID management work: I thought rebuilding the DAG itself
was enough to actually update the whole scene, but we actually need to tag
datablocks for update as well, when we change (or remove) one of their ID
pointers...
OCD commit, but cleans the code a bit:
- the first `if 0` block was supposed to draw collision objects, but is
  vastly outdated: most of the SmokeCollisionSettings member variables
  were removed a few years ago, and collision objects are drawn like
  other objects anyway. Also, it was already commented out when it was
  committed back in 2009.
- the second `if 0` block was doing pretty much the same thing as the
few lines above it.
This is really a hack-fix actually; not sure why `get_pointcache_keys_for_time()`
seems to assume it will always find a key for the given part index, at least
for the current frame, and whether this assumption is wrong or whether the
bug happens elsewhere...
Anyway, this is to be wiped out in 2.8, so no point losing too much time on
it; for now we merely return unchanged (i.e. zeroed) ParticleKeys when index2
is invalid. Won't hurt anyway: even if this did not crash in release builds,
it would be returning gibberish values.
When drawing with Grease Pencil "continuous drawing" for a long time
(i.e. basically, drawing a very large number of strokes), it could be
possible to cause lower-specced machines to run out of RAM and start
swapping. This was because there was no limit on the number of undo
states that the GP undo code was storing; since each undo state
contains all the existing strokes AND the newest stroke, the total
memory used by undo grows quadratically with the number of strokes,
which could cause issues when taken to the extreme.
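The cure is simply a cap on the number of stored states; a minimal
sketch of the strategy (illustrative, not the actual GP undo code):

    from collections import deque

    MAX_UNDO_STATES = 64  # hypothetical cap, for illustration
    undo_stack = deque(maxlen=MAX_UNDO_STATES)

    def push_undo_state(strokes):
        # each state snapshots every stroke drawn so far, which is what
        # makes an unbounded stack so expensive
        undo_stack.append(list(strokes))

    strokes = []
    for i in range(1000):
        strokes.append("stroke_%d" % i)
        push_undo_state(strokes)
    print(len(undo_stack))  # 64 -- the oldest states were discarded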
See the commit's comments for details, but this boils down to: do not try to
use purely runtime cache data as a 'real' ID pointer in read code; it's
likely doomed to fail in some cases, and is bad practice in any case!
This fix implies the dupliweight's object will be invalid until the first
scene update (i.e. the first particles evaluation).