This is actually a rather slow process; commenting it out gives us another
10% speedup... on a non-animated character! I don't even want to know how much
this would have cost on a rig with hundreds of fcurves!
And checking this is not really critical for us anyway: once a property is
animated, you do not really care whether it is also statically overridden or
not.
Applied a similar fix to T54233 to get the "Record with NLA" feature working with
active action blending + influence settings. Extrapolation is explicitly ignored
though, as it shouldn't be used with this feature (i.e. it is already disabled
with the new strips and also on the animdata by default)
While the window manager is supposed to exist after the file has been fully
read and do-versioned, we cannot rely on it to exist while reading the file
and setting up the environment.
* Fixes crashes in versioning code for the removed editor types (see d8c719d8d8)
* Removes 'Game Logic' workspace.
It still only contains workspaces converted from old screen-layouts; proper
defaults are still a TODO.
Timelines and Logic Editors are gone. So far they were simply replaced by broken
Info Editors; now they are replaced by Dopesheets in the new Timeline mode.
We reuse ScrArea.butspacetype to temporarily store the space-type identifier of
the deprecated editor (see 9db492de6d). That way we can identify it in
versioning code and replace it nicely.
Action editor creation needs a scene to set the scrolling based on frame range.
Active screen-layouts use the active scene of the window they are displayed in.
Inactive screens simply use the first scene in the main database.
Note that inactive editors don't need version patching, readfile.c converts them
to SPACE_EMPTY already, so users can't activate them.
Files saved since the editors were removed will still be broken.
Workspace config files saved before this will also crash (will update the
default one in a follow-up commit).
The only real reason we need `butspacetype` is while switching areas, where we
need to delay the actual switch to the RNA _update callback, since only there
can we access the context.
So instead of trying to sync it with `spacetype`, only set it while needed and
unset it afterwards (as in, set it to `SPACE_EMPTY`).
This should also allow us to re-use `butspacetype` in versioning code when
trying to read removed editors. It'll store the space type value of the removed
editor, which we can then use during versioning.
For backwards compatibility, we store `butspacetype` with the value of
`spacetype`.
This makes it easier to distinguish between different editors
(as opposed to an editor and its regions).
Note that action zones look a bit strange with this. I recommend we do the following next:
* Make sure all 4 corners can be used as action zones.
* Remove their drawing code (or show them only on mouse hover).
- UILayout.popover(.. panel_type ..)
A single panel
- UILayout.popover_group(.. panel categories ..)
Expands all panels matching args.
Currently used in the topbar for redo and paint options.
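A minimal Python sketch of how these might be called (the panel name, space/region
values and keyword names here are assumptions for illustration; the exact signatures
may differ):

    import bpy

    class TOPBAR_PT_example(bpy.types.Panel):
        """Hypothetical panel meant to be shown inside a popover."""
        bl_label = "Example"
        bl_space_type = 'TOPBAR'
        bl_region_type = 'WINDOW'

        def draw(self, context):
            self.layout.label(text="Popover contents")

    def draw_popovers(layout):
        # A single panel, shown as a popover button.
        layout.popover(panel="TOPBAR_PT_example")
        # Expand all panels matching the given space/region/category args.
        layout.popover_group(space_type='TOPBAR', region_type='WINDOW',
                             context="", category="")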
Requiring context means we can't easily create new editors to replace deprecated
ones in versioning code.
Think it's reasonable to give editors access to scene and area data for their
initial setup though. They mostly need it for setting "the view", as in,
scrolling values.
Also did minor cleanup in top-bar creation function.
Steps to recreate were:
* Create custom transform orientation.
* Change properties editor into 3D View.
* Trigger refresh of 3D view header by mouse hovering it.
Mistake in rB7d055da327b9555f.
Dither the output color for 8-bit precision framebuffers using a Bayer matrix.
In my tests the Bayer matrix patterns are not noticeable at all.
Note that it also does this in OpenGL rendered mode, which can use a much
higher bit depth. We can fix that if it becomes a problem in the future, but I
doubt it will.
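For reference, a small NumPy sketch (not the actual GLSL implementation) of what
ordered dithering with a 4x4 Bayer matrix does when quantizing a float color down
to 8 bits:

    import numpy as np

    # 4x4 Bayer matrix, normalized: the standard ordered-dither threshold pattern.
    BAYER_4X4 = np.array([
        [ 0,  8,  2, 10],
        [12,  4, 14,  6],
        [ 3, 11,  1,  9],
        [15,  7, 13,  5],
    ], dtype=np.float32) / 16.0

    def dither_to_8bit(color, x, y):
        """Quantize a float color in [0, 1] to 8 bits, adding a per-pixel Bayer
        offset so banding is broken up instead of showing hard steps."""
        threshold = BAYER_4X4[y % 4, x % 4] / 255.0
        return np.clip(np.floor((color + threshold) * 255.0), 0, 255).astype(np.uint8)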
* Make bottom half of topbar a bit higher
* Make tabs higher and put them closer together
* Remove screen layouts dropdown, we'll have one layout per window
* Hide action zones from topbar
* Don't change topbar background color when activating
This will never get run, because direct_link_area() already flags/resets
every space type that isn't registered, meaning that we don't have any
opportunity to apply our patching.
This commit removes all references to the old timeline editor.
Unfortunately, the removal of the Timeline spacetype defining
functions has ended up breaking the version patching code I'd
been working on earlier (as now, the editor gets marked as
"unknown/info" before we get a chance to patch it!)
After a lot of failed attempts and head banging trying to find a way to reuse
the standard editor-switching/creation code, I've just hacked in a temporary solution
here so that users can load old files and see the old timeline instances replaced
with Dopesheet-Timelines.
Note: This is not nice code, and copies a lot of the standard initialisation code,
but it works well enough for now. We can revisit this later when the other mode changes
come along.
These were getting set in the init() callback instead of new(). As a result,
the settings would get reset every time you resized the window/area - not quite
something you'd really want happening every time the size changes!
instead of using "black" curtains
With most editors now showing the start/end range by default, we need a way of
easily distinguishing when a preview range is enabled. By using a different color
(the exact color used is something we can change/adjust later), there is a more distinct
visual difference between them, making it easier to see what's happening.
This uses the global scene range, with styling matching the sequencer's start/end
frame drawing.
(The graph editor's "drivers" mode is exempt, as that doesn't really display time
in a linear way, so the start/end frames don't apply)
Eventually the idea is that they'll get remapped to some more global/generic hotkey
that can get used across all animation editors (see T54728). However, to facilitate
the removal of the timeline editor, it's better we do this now.
These were runtime-only data, used in pre-2.8 Blender to make use of GL vertex arrays
to draw these more efficiently. Maybe we might restore these sometime as an optimisation
step, but for now they're not needed and were confusing.
These now live in the action editor/dopesheet related files.
Apart from these, the timeline didn't actually have other settings
of its own that were of any interest to anyone.
For many years, animators have been requesting the ability to edit keyframes in the
timeline. However, implementing such tools in the timeline quickly becomes a slippery
slope, where we'll eventually end up having to duplicate all the functionality from the
dopesheet editor.
Discussing with William and Pablo this morning, we realised that perhaps it might be possible
to just make the Timeline a mode of the Dopesheet Editor (and kill off the old standalone
Timeline), meaning that we essentially get all the Dopesheet Editor goodness for free!
Also, with some proposed UI updates (i.e. allowing "submodes" of editors to be part of
the main editors selector), it might not even matter that there isn't an "actual" timeline
editor anymore.
This commit implements the following changes (which are actually sufficient for supporting
most basic workflows):
* Timeline mode in Dopesheet Editor
* Tweaks to UI code to make the Timeline header/menus show up in Dopesheet editor
TODO:
* Hide channels list when switching to timeline mode
* Port over cache-file indicators
* Add missing timeline-only settings that need a new home in the dopesheet
* Go through fixing all timeline editor operators (e.g. Bind to camera)
* Port over start/end frame shading (and adjust preview range rendering to make the
distinction between these clear)
* Remove old timeline editor, and transfer over any leftover code
This "improve" the viewport experience by reducing the noise from random
sampling effects (SSAO, Contact Shadows, SSR) when moving the viewport or
during playback.
This does not do Anti Aliasing because this would conflict with the outline
pass. We could enable AA jittering in "only render" mode though.
There are many things to improve but this is a solid basis to build upon.
This pass creates a velocity buffer, which is basically a 2D motion vector
texture. This is not yet used for rendering but will be useful for motion
blur and temporal reprojection.
== Main Features/Changes for Users
* Add horizontal bar at top of all non-temp windows, consisting of two horizontal sub-bars.
* Upper sub-bar contains global menus (File, Render, etc.), tabs for workspaces and scene selector.
* Lower sub-bar contains object mode selector, screen-layout and render-layer selector. Later operator and/or tool settings will be placed here.
* Individual sections of the topbar are individually scrollable.
* Workspace tabs can be double- or ctrl-clicked for renaming and contain 'x' icon for deleting.
* Top-bar should scale nicely with DPI.
* The lower half of the top-bar can be hidden by dragging the lower top-bar edge up. Better hiding options are planned (e.g. hide in fullscreen modes).
* Info editors at the top of the window and using the full window width will be replaced by the top-bar.
* In fullscreen modes, no more info editor is added on top, the top-bar replaces it.
== Technical Features/Changes
* Adds initial support for global areas
A global area is part of the window, not part of the regular screen-layout.
I've added a macro iterator to iterate over both, global and screen-layout level areas. When iterating over areas, from now on developers should always consider if they have to include global areas.
* Adds a TOPBAR editor type
The editor type is hidden in the UI editor type menu.
* Adds a variation of the ID template to display IDs as tab buttons (template_ID_tabs in BPY)
* Does various changes to RNA button creation code to improve their appearance in the horizontal top-bar.
* Adds support for dynamically sized regions. That is, regions that scale automatically to the layout bounds.
The code for this is currently a big hack (it's based on drawing the UI multiple times). This should definitely be improved.
* Adds a template for displaying operator properties optimized for the top-bar. This will probably change a lot still and is in fact disabled in code.
Since the final top-bar design depends a lot on other 2.8 designs (mainly tool-system and workspaces), we decided to not show the operator or tool settings in the top-bar for now. That means most of the lower sub-bar is empty for the time being.
NOTE: Top-bar or global area data is not written to files or SDNA. They are simply added to the window when opening Blender or reading a file. This allows us doing changes to the top-bar without having to care for compatibility.
== ToDo's
It's a bit hard to predict all the ToDo's; here are the known main ones:
* Add options for the new active-tool system and for operator redo to the topbar.
* Automatically hide the top-bar in fullscreen modes.
* General visual polish.
* Top-bar drag & drop support (WIP in temp-tab_drag_drop).
* Improve dynamic regions (should also fix some layout glitches).
* Make internal terminology consistent.
* Enable topbar file writing once design is more advanced.
* Address TODO's and XXX's in code :)
Thanks @brecht for the review! And @sergey for the complaining ;)
Differential Revision: D2758
- Need a draw mode in the workbench engine.
- Reorganized render engine retrieval in the 3D view. There are 2 places
  where this happens: 1. 3D view draw code and 2. draw manager.
  The draw manager code is not used for external engines; currently added
  an exception for Cycles. Will need to have a better solution in
  place.
The issue was the factoring out of the workspace engine that was removed. The logic implied that Clay could not be rendered. As Clay will be a draw mode, we already placed it there so it is accessible in any engine. Should eventually fix the Clay engine by migrating it to the workbench engine.
- Removed the depth pass as it will reuse the depth pass of the render
engine
- Used gl_FrontFacing to determine the facing
- Blend the result with the render engine result
Use the stack instead of always allocating memory for RNA paths of checked
properties! From an average of 167ms to 118ms here with the Autumn rig... Still a
lot to improve, but that's already much better.
It is hidden behind the --debug-io flag for now.
The idea is to try to catch broken library states in the current Main before we
actually write the file to disk; should help catch and understand
what happens in the Spring corruption cases.
Implemented the face orientation overlay for testing.
Overlay mode is only drawn when there are overlays to be rendered.
The overlay mode is rendered before the object mode.
While the feature is interesting, it's not used much from what we can tell.
Retargeting is an important feature but needs
to fit in better with typical animation work-flows.
See: T52809
It is rather uncommon for an operator to operate on a non-active view layer,
so there is no need to do a full scene update.
This change solves the lag when first using the Extrude operator in edit mode.
This is still broken; I can't tell if it is the fact that the in_band
function does not work properly, or an issue in the box algorithm, or
both.
It seems like the calculation of the size of the box while rotated
needs to be fixed as well.
@campbellbarton: This operator works (as intended) on an object level, which means that it won't remove doubles for vertices that are close to each other but contained in different objects - is that really helpful?
Brecht authored this commit, but he gave me the honours to actually
do it. Here it goes; Blender Internal. Bye bye, you did great!
* Point density, voxel data, ocean, environment map textures were removed,
as these only worked within BI rendering. Note that the ocean modifier
and the Cycles point density shader node continue to work.
* Dynamic paint using material shading was removed, as this only worked
with BI. If we ever wanted to support this again probably it should go
through the baking API.
* GPU shader export through the Python API was removed. This only worked
  for the old BI GLSL shaders, which no longer exist. Doing something
  similar for Eevee would be significantly more complicated because it
  uses a lot of multipass rendering and logic outside the shader, so it's
  probably impractical.
* Collada material import / export code is mostly gone, as it only worked
for BI materials. We need to add Cycles / Eevee material support at some
point.
* The mesh noise operator was removed since it only worked with BI
material texture slots. A displacement modifier can be used instead.
* The delete texture paint slot operator was removed since it only worked
for BI material texture slots. Could be added back with node support.
* Not all legacy viewport features are supported in the new viewport, but
their code was removed. If we need to bring anything back we can look at
older git revisions.
* There is some legacy viewport code that I could not remove yet, and some
that I probably missed.
* Shader node execution code was left mostly intact, even though it is not
used anywhere now. We may eventually use this to replace the texture
nodes with Cycles / Eevee shader nodes.
* The Cycles Bake panel now includes settings for baking multires normal
and displacement maps. The underlying code needs to be merged properly,
and we plan to add back support for multires AO baking and add support
to Cycles baking for features like vertex color, displacement, and other
missing baking features.
* This commit removes DNA and the Python API for BI material, lamp, world
and scene settings. This breaks a lot of addons.
* There is more DNA that can be removed or renamed, where Cycles or Eevee
are reusing some old BI properties but the names are not really correct
anymore.
* Texture slots for materials, lamps and world were removed. They remain
for brushes, particles and freestyle linestyles.
* 'BLENDER_RENDER' remains in the COMPAT_ENGINES of UI panels. Cycles and
other renderers use this to find all panels to show, minus a few panels
that they have their own replacement for.
Will be part of the collection manager where per collection the
ob->col can be set. This currently depends on DepsGraph +
CollectionManager.
I removed it for now so the code won't influence development.
Was caused by ec0756af6c; once again, we can't pass the view layer, we
need to pass an index.
The sad part is that currently we don't have a quick way to look up a
view layer by index. We can do a similar thing as we do for bones and
bases.
The work is mainly from Lukas Toenne, with some modifications from myself.
Includes the following obvious changes:
- Particle system selection is now name-based, with lookup menu.
- Lots of new options to control varieties.
Changes compared to the Gooseberry branch:
- Default values and versioning code ensures same behavior as the
old modifier.
- Custom data layers now come from vertex colors; the modifier
  does not create arbitrary layers anymore. The hope is to keep data
  more manageable, and maybe make it easier to select in the shader
  later on.
  This means values are quantized to 256 levels, but it should be
  enough to get variation in practice.
Reviewers: brecht, campbellbarton
Reviewed By: brecht
Subscribers: eyecandy
Differential Revision: https://developer.blender.org/D3157
- added `object_color_type` where the user can set whether the collection
  determines the color, or the object will be used for the color.
  Implemented it as an enum, as later this can have a random color option.
- moved OB_LIGHTING_* to DNA_view3d_types and renamed it.
- Fixed some code duplication (DRY) in workbench_materials.c. More duplication
  can be removed, but we will need to discuss the responsibility of the workbench
  engine as it might become part of the Eevee renderer.
ViewRender was removed, which means we can't get the render engine for files
saved in 2.8. We assume that any files saved in 2.8 were intended to use Eevee
and set the engine to that.
A fix included with this is that .blend thumbnails now draw with Clay mode,
and never Eevee or Cycles. These were drawn with solid mode in 2.7, and should
be very fast and not e.g. load heavy image textures.
Differential Revision: https://developer.blender.org/D3156
Assume files saved in 2.8 were intended for Eevee and set them to material
viewport shading. In Eevee this is equal to rendered draw mode, in Cycles
this will draw with Eevee. This way Eevee demo files still show something
interesting when opened.
Similar materials will reuse the same shadergroup. Currently using
a custom hash function that might map too-similar colors to the
same material.
Reintroduced workbench_materials.c; this file will be responsible for
material lookup/creation and shader compilation.
Also fixed a GPUShader memory leak.
This statement is only relevant in 2.8, but causes confusion in master.
I kept the 'default' label to prevent compiler warnings about unhandled
cases. The break is needed because there should be at least one statement
after 'default'.
I.e. only enable auto-override for the 'active' selected object when making
an override of a linked group. This will ease auto-override creation,
and you typically do not want to auto-override most objects in the group
anyway (in the proxy system, you could only proxyfy one object of the group
anyway!).
This is the default now. We can still enable the thicker outlines for high-DPI
screens or personal preference, but it's not used atm. This also improves
performance, removing 1/3 of the outline cost.
This finally allows us to use the Random factor to add variation to the
interpolated children. This feature never worked: since 2007 there was a
random factor slider in the interface, but it was only used by simple
children. Now it affects interpolated children as well.
Technically, this will break compatibility if an older file had the random
factor set to something other than 0 (the default value is 0 though). But
we are leaving the 2.7 series, so we can accept such breakage in the name
of supported features.
In more complex models the object color uniform data is freed before rendering.
We should copy it to local data, but for now we redirect it to a
constant.
Currently uses static lighting. Will become HDRI lighting.
Added do_versions to set default drawtype_solid and drawtype_texture to
OB_LIGHTING_STUDIO. When View3D space is created drawtype_solid and
drawtype_texture are also set to OB_LIGHTING_STUDIO.
Current studio lighting uses a dot product to simulate static lighting.
Will need to be changed in the future with different lighting models.
A tab after a C++ comment broke parsing and didn't remove the line at all.
Was there since 2002 at least, probably confused some peeps.
This means commented-out code was actually written to SDNA.
- added the drawtype_solid, drawtype_wireframe, drawtype_texture to
View3D
- enabled workbench panels for important render engines
- merged workbench_materials into solid_flat_mode. All draw modes will get
  their own fast implementation in the workbench
Folders removed entirely:
* //extern/recastnavigation
* //intern/decklink
* //intern/moto
* //source/blender/editors/space_logic
* //source/blenderplayer
* //source/gameengine
This includes DNA data and any reference to the BGE code in Blender itself.
We are bumping the subversion.
Pending tasks:
* Tile/clamp code in image editor draw code.
* Viewport drawing code (so much of this will go away because of the BI removal
  that we can wait until then to remove this).
This is a way to deal with animated properties in the evaluated version
of a datablock. Previously, running Blender with copy-on-write enabled
would show the original values. Now we can see the proper properties, while
typing values in still goes to the original datablock.
Thanks Brecht for the review!
Was caused by two things from the past:
- Tagging would set id->recalc to COW update flag.
This one is to be ignored.
- Particle tagging will use psys recalc flags on id->recalc,
  but we only need to use flags from particles. Otherwise
  there will be some collisions in the bit masks.
Why exactly this happens remains unclear, found that in the
autumn.blenrig file of Spring production while working on static
overrides... Tons of ugly IDProps in that rig. xD
This is just to have hair rendering and editing mostly working as in
master. A better fix is probably needed, there seems to be some
missing depsgraph relations for particle edit settings, and particle
edit code doesn't rebuild caches after applying edits. But at least
you can see and interact with hair now until those things can be
sorted out.
The depsgraph was always created within a fixed evaluation context. Passing
both risks the depsgraph and evaluation context not matching, and it
complicates the Python API where we'd have to expose both which is not so
easy to understand.
This also removes the global evaluation context in main, which assumed there
to be a single active scene and view layer.
Differential Revision: https://developer.blender.org/D3152
This changes quite a few things.
- Outline is now per object.
- No more outline at object intersection (fix hairs problem).
- Simplify the code quite a bit.
We use an R16UI buffer to save one id per object outline. We convert this id
to color when detecting the outline.
Added textureGatherOffsets to the code but could not test on current hardware
so leaving it commented for now.
These uniforms can be used to have a unique id for each drawcall of a shgrp.
This only works for standard shgroups and is an exception for the outline
drawing.
This adds initial multi-object editing support.
- Selected objects are used when entering edit & pose modes.
- Selection & tools work on all objects however many tools need porting
See: T54641 for remaining tasks.
Indentation will be done separately.
See patch: D3101
Blender allows this.
The Cube in the file in the report would always disappear when not in the camera view.
The clip_end was too small.
The correction here is only to the assert.
Note: tried with complex production file characters; this is currently
totally non-functional (even crashing in an infinite recursion loop).
IDProps and overrides need some love :(
Although somewhat less micro-efficient, I decided to separate the `viewinv` matrix to calculate the world position separately.
This makes it easier to understand the code.
While the 3D cursor is mainly a UI thing and isn't needed for scene evaluation,
it is stored in the scene DNA. This means the operator is to inform the depsgraph
that data has changed, so all copies of that scene can copy the new values.
Fixes lack of visual feedback when changing cursor location in viewport
with copy-on-write enabled.
Added a lock-free deferred queue for deletion. Now if an ID icon
is requested to be freed from a non-main thread, it will be added
to the deferred list. The actual deletion will happen later from the
main thread.
Currently the actual deletion only happens the next time BKE_icon_id_delete()
is called, which might not be enough. But it's easy to enforce
deferred deletion.
Icons for preview images are not covered by deferred deletion yet.
Reviewers: mont29
Differential Revision: https://developer.blender.org/D3146
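A conceptual Python sketch of the deferral pattern described above (the real
implementation is a lock-free queue in the C icon code; all names here are made up):

    import queue
    import threading

    _deferred = queue.SimpleQueue()   # stands in for the lock-free deferred list
    _icons = {}                       # icon_id -> icon data

    def icon_id_delete(icon_id):
        """Free an icon, deferring the actual deletion when called off the main thread."""
        if threading.current_thread() is not threading.main_thread():
            _deferred.put(icon_id)    # not safe to free here, defer to the main thread
            return
        # Main thread: drain previously deferred deletions first, then free this one.
        while not _deferred.empty():
            _icons.pop(_deferred.get(), None)
        _icons.pop(icon_id, None)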
The tests as they are now make string comparisons. This only works
on Windows, because the reference files look different on different
operating systems due to different number formatting.
The Collada tests need a complete rework (WIP).
This gets rid of the need for a geometry shader and instancing.
Both are pretty slow compared to the new method.
The only case where the old method could be better is when the scene is filled
with lots of objects and most of the objects in the shadow map appear
on every layer.
But even then, we could optimize the culling and minimize the overhead.
E.g. the vertices created for each of the defines would require a
certain offset. If you don't know what to look for, finding out about
this is pretty difficult. Make them easily searchable instead.
There is quite some mess going on, in that most of the old triangle
drawing code is still there but effectively does almost nothing.
Instead, values are hardcoded in the shader; however, it doesn't support
the drawing options the triangle functions expose.
E.g. the 'where' variable to set triangle direction doesn't work.
- Wasn't clear which functions handle edit-bones.
- Mixed both ebone and edit_bone in names.
- Didn't use ED_armature_* prefix for public API.
See P655 to apply to branches.
- Move static undo variable into 'WriteData',
'memfile_chunk_add' used arguments in a confusing way,
sometimes to set/clear static var.
- Replace checks for 'wd->current' with 'wd->use_memfile'
move memfile vars into 'wd->mem' struct.
For correct results these must have been set already when the depsgraph was
created and evaluated, so all dependencies have appropriate resolutions too.
For particles we no longer back up and restore the viewport particles to avoid
overwriting them during render, as copy-on-write solves this for us. Even
without COW, particles seem to work OK.
This also removes the particle simplification options based on camera. This
was never used much and only available in Blender Internal.
Differential Revision: https://developer.blender.org/D3148
This was only used for viewport rendering, where we can just pass the engine
type directly. There is no technical reason why we can't draw the same depsgraph
with different render engines.
It also led to some weird things like requiring a render engine for snapping
and raycast API functions.
Differential Revision: https://developer.blender.org/D3145
Scene, view layer and mode are now set in the constructor and never changed.
Time is updated on frame changes to indicate which frame is being or has been
evaluated last.
This is a step towards making EvaluationContext obsolete.
Differential Revision: https://developer.blender.org/D3144
Cycles is no longer using this. There are still addons using it but for
correct results with the new depsgraph this API should not be used.
Differential Revision: https://developer.blender.org/D3143
Having that one when opening a file or loading some lib makes absolutely
no sense, and switching that 'temp' editor to some other type can
trigger all kinds of funny bugs...
Note that using the shortcut keys (Shift-F5 etc.) is still possible,
removing those seems a bit more involved. :/
We now only look into dupli groups to find point caches to edit. This
feature is a leftover from the old proxy system, and evaluating the
full dupli list and all transforms was overkill. With static overrides
we may want to get rid of using duplis entirely, and just let users
select the objects directly.
I can see how it's slowing things down: glFinish makes sure that every query
is finished, but the first query may have been finished a long time ago.
This might create bubbles because of the PIL_sleep_ms.
It's not just the Graph Editor that needed this - the NLA also uses similar code
and thus suffers from a similar problem.
(My first commit from the Blender Institute v2.0 - Just testing that everything works)
Unless there is an external action from a user, there should not
be a need to re-copy the original datablock to a copied one.
This brings performance up from 5fps to 11fps with Spring runcycle
(performance in master is 14fps).
Quite straightforward implementation; allows us to remove all the cherry-picking
updates of specific scene/view layer/collection fields. Makes it possible to use
a generic function to update the scene.
The tricky part is that we need to know the view layer pointer before the whole
evaluation starts. So we actually expand the scene at initialization of the evaluation
context. This is still a bit of an exceptional case, but at least we still avoid
dangerous cherry-picking updates.
For performance, we convert the object bases list to an array
during view layer evaluation. This makes it possible to have
very cheap index-based base lookup.
The goal of this change is to get rid of bases used for function
binding, and avoid scene datablock expansion at depsgraph
construction time.
Use a single function to evaluate all the collections for the given view layer.
This way we avoid the need to get scene ID sub-data. Similar to the pchan index, this
allows us to avoid build-time scene expansion, which also simplifies updates of
the scene datablock.
Well, sort of. There is still work to be done to get rid of build-time scene
datablock expansion, which includes:
- Need to pass view layer by index.
Annoying part would be to get the actual view layer for that index. In practice
doing a list lookup might not be such a bad idea, since such a lookup will not
happen very often, and it is unlikely to have more than a handful of view
layers anyway.
Other idea could be to use view layer from evaluation context.
Or maybe from depsgraph, which is supposed to be in the context. Can have
some assert statements to make sure everything is good.
- Need to get rid of base binding for flag flushing.
  We can replace that with an index-based lookup from an array created by view
  layer evaluation.
Reviewers: dfelinto
Differential Revision: https://developer.blender.org/D3141
In preparation for the removal of the Blender Internal renderer, we
moved the vector blur code that was placed in the render package
(legacy) to the compositor. Only the compositor is using this
code; even the Blender Internal renderer did not use the code at
all.
Got lost in the big undo refactor.
Note that this is probably (maybe) not how we want to have it in the
end; things like EditMode undo should probably not trigger this check?
The previous assert assumed '..' is always there, which isn't necessarily
true (for example when in the root of an Asset Engine repository).
The new code asserts that if '..' is present it should be the first entry
(rather than forcing the first entry to be '..').
The Vector Transform node was added to the "Vector" category in
nodeitems_builtins.py,
but was using "NODE_CLASS_CONVERTOR" internally (thus using e.g. the
'wrong' theme color).
Thanks @dingto for the review
Differential Revision: https://developer.blender.org/D3138
Only recreate the OpenGL context if we cannot reuse the one of the previous thread.
Adding lots of shaders was recreating as many OpenGL contexts, which was very
slow.
This happened when creating a window with the cursor over the timeline area.
I still don't know exactly what happened, but for some reason batches were not
reset in this case.
This means fewer indices to store. That being said, it seems to be a little
slower because of the restart index. But that's only in the case where we would be
vertex bound, which is almost never going to happen.
Fixes the "emtpy scrolling" glitch by clamping the scroller offset to
the boundary of the view when it's smaller than the previous.
Fixes T45197. Patch by @januz.
Differential Revision: D1580
The new constraint is slower and not backward compatible, but should
be better, especially in the damping side. The new constraint also
has a different valid range of the damping coefficient, and a limit
implementation that bounces instead of making the object stationary.
Reviewers: sergof
Differential Revision: https://developer.blender.org/D3125
WEBM is the codec name, and VP9 is the encoder (the older encoder "VP8"
is less efficient than VP9).
WEBM/VP9 and h.264 both have options to control the file size versus
compression time (e.g. fast but big, or slow and small, for the same
output quality). Since WEBM/VP9 only has three choices, I've chosen to
map those to 3 of the 9 possible choices of h.264:
- BEST → SLOWER
- GOOD → MEDIUM
- REALTIME → SUPERFAST
The VERYSLOW and ULTRAFAST options give very little extra benefit.
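Expressed as a simple mapping (illustrative only, mirroring the list above):

    # The three VP9 deadline/quality choices mapped onto three of libx264's
    # nine speed presets, as described above.
    VP9_TO_H264_PRESET = {
        'BEST': 'SLOWER',
        'GOOD': 'MEDIUM',
        'REALTIME': 'SUPERFAST',
    }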
Reviewed by: @Severin
Some of the code is simpler because we use Blender's triangulation directly
instead of dealing with quads. Also some progress printing code was removed
because the depsgraph can not tell us the number of objects ahead of time.
Differential Revision: https://developer.blender.org/D3127
- Disable scissor test for fast clear. This could lead to some issues but
I cannot think of one and could not find one either.
- Manually wait for queries to be available instead of making the driver
wait and issue warnings.
The encoding panel mentions "None" in a few places, which is confusing.
- "Codec: None" now reads "No Video"
- "Audio Codec: None" now reads "No Audio"
- "Output Quality: None; ..." now reads "Constant Bitrate"
When selecting "No Video" the remaining video encoding options are
hidden, making it even more explicit that there will not be video in the
output file.
The label "Codec" now reads "Video Codec" for symmetry with "Audio
Codec".
The previous code assumed that the glyph texture would remain bound to
GL_TEXTURE0 until the cache was drawn. This is not always the case,
so it's better to save the texture and rebind it before drawing.
This ports the blurring of BLF fonts to the final drawing shader.
We add a bit of extra padding to each glyph so that jittering the texture
coord does not sample the neighboring glyphs.
Overall 10% more performance on general UI drawing time.
This commit can introduce ordering problems on some elements.
In this case you need to flush the widget cache to ensure the element that
is going to be drawn is drawn on top of any widget base.
To flush the cache use UI_widgetbase_draw_cache_flush.
This is already done for BLF and Icons.
When importing multiple materials for one object,
the imported material animation curves were all
assigned to the first material of the object.
This fix also improves the console logging whenever the importer
finds a consistency problem with the imported animation data.
Replace the 12 iterations of UI_draw_roundbox_4fv with only one batch.
This means less overdraw and fewer drawcalls.
I had to hack the opacity falloff curve manually to get approximately the
same result as the previous technique. I'm sure with a bit more brain power
somebody could find the perfect function.
The MovieSequence and MovieClip classes now have a metadata() function
that exposes the `IDProperty *` holding the video metadata.
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
This is useful to create a mapping from the frame range in the video to
frame index in the blend file.
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
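A hedged usage sketch of the new accessor from Python (assuming the function is
exposed on movie strips as described; the returned IDProperty group behaves like
a dict on the Python side):

    import bpy

    scene = bpy.context.scene
    se = scene.sequence_editor
    if se:
        for strip in se.sequences_all:
            if strip.type == 'MOVIE':
                meta = strip.metadata()  # IDProperty group with the video metadata
                for key, value in meta.items():
                    print("{}: {} = {}".format(strip.name, key, value))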
This is currently only supported by FFmpeg (so not frameserver, AVI RAW,
or AVI JPEG), and only seems to work when using Matroska or Ogg Theora
containers.
Only metadata that doesn't change from frame to frame is written to
video files. This distinction is visible in the UI by looking at the
stamp checkbox tooltips (they either mention "image" or "image/video").
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
- Metadata handling is now separate from `ImBuf *`, allowing it to be
used with a generic `IDProperty *`.
- Merged `IMB_metadata_add_field()` and `IMB_metadata_change_field()`
into a more robust `IMB_metadata_set_field()`. This new function
doesn't return any status (it now always succeeds, and the previously
existing return value was never checked anyway).
- Removed `IMB_metadata_del_field()` as it was never actually used
anywhere.
- Use `IMB_metadata_ensure()` instead of having
`IMB_metadata_set_field()` create the containing `IDProperty` for
you.
- Deduplicated function declarations, moved `intern/IMB_metadata.h` out
of `intern/`. Note that this does mean that we have some extra
`#include "IMB_metadata.h"` lines now, as the metadata functions are
no longer declared in `IMB_imbuf.h`.
- Deduplicated function declarations, all metadata-related declarations
are now in imbuf/IMB_metadata.h.
Part of: https://developer.blender.org/D2273
Reviewed by: @campbellbarton
Use the new GPU_SHADER_2D_NODELINK and GPU_SHADER_2D_NODELINK_INST to
accelerate nodelink drawing.
This commit does not include the batching functionality, so this should
not make a lot of difference.
Now use a list of preset batches with a function to add new ones to this
list.
This removes the need of new functions all over the place to reset/exit.
Special shader to draw nodelinks for the node editor.
We only pass bezier points to the GPU and vertex position is handled inside
the vertex shader.
The arrow is also part of the batch to avoid separate drawcalls for it.
We still draw 2 passes: one for the shadow and one for the link color on top.
There is one variation to draw instances of these links so that we only do one
drawcall.
The issue was caused by the component tag forcing the CoW component to be run,
without actually flushing the changes down the road from the CoW operation.
In a way, this is what we want: we do want CoW to run on changes, but
we don't want a tiny change forcing a full datablock update.
This commit makes it so order of updates is all correct, but the bigger
issue is still open: what parts of datablock CoW should be updating?
Now it's possible to open the Spring file and play around.
The issue was mainly visible when copy-on-write was enabled. This was forcing
lots of meshes to be freed from multiple threads, causing all sorts of race
conditions in Gawain's VAO code.
OpenGL resources already seem to be doing deferred deletion; we need to do the
same for CPU-side arrays.
Free code should not handle ID refcounting at all. This has to be done
at a higher level, since in some cases we want to free (temp) data that
actually did not refcount its IDs at all.
This change seems to be working OK, but as usual in that area, only
lots of testing in real-case situation will say whether there are some
hidden bugs or not.
Issue was, *some* IDs (like the infamous nodetrees from materials etc.)
would not go through the 'main' read_libblock() func, so their tags were
never reset.
So now, we ensure direct_link_id() always clears the tags, and move
setting them in read_libblock() to after the call to direct_link_id().
Needed for the depsgraph, but a generally healthier fix actually.
The problem was that textures were assigned to different slots on different draw
calls, which caused shader specialization/patching by the driver. So the shader
would be compiled over and over until all possible assignments were used.
This is a part of copy-on-write sanitization, to avoid all the checks
which were attempting to keep sub-data pointers intact.
Point is: ID pointers never change for CoW datablocks, but nested
data pointers might change when updating an existing copy.
Solution: Only bind ID data pointers and the index of sub-data.
This will make the CoW datablock update function easier in 2.8.
In master we were only using pose channel pointers in callbacks;
this is exactly what this commit addresses. A linear lookup array
is created on pose evaluation init and is thrown away afterwards.
One thing we might consider doing is to keep an indexed array of
poses, similar to chanhash.
Reviewers: campbellbarton
Reviewed By: campbellbarton
Subscribers: dfelinto
Differential Revision: https://developer.blender.org/D3124
Back in the day (2.4x and before), it was rather easy to get some
invalid utf-8 strings in Blender. This totally breaks modern code,
so this commit adds a simple 'check & fix strings' operator, available
from the main File menu.
E.g. typing `bpy.data.bl_rna.properties[8].<tab>` in console would hard-crash
trying to dereference NULL pointer. Was a missing check in rna_Property_tags_itemf().
This is really minor, but anyway, now it will only leak if you cancel the menu.
And only if this is the last time you called this operator before closing
Blender.
Better fix for T54457. It seems Debian compiles OpenVDB without ABI 3
compatibility, while Arch does enable it, as is the default in the OpenVDB
CMake build system.
So now there's an option that the distribution can set depending on how
they compile their OpenVDB package.
Some functions always returned the input argument,
which was never used.
This made the code read as if there might be a leak.
Now return a boolean (true if the imbuf is modified).
- Undo that changes modes currently asserts,
since undo is now screen data.
Most likely we will change how object mode and workspaces work
since it's not practical/maintainable at the moment.
- Removed view_layer from particle settings
(wasn't needed and complicated undo).
- Use a single undo history for all operations.
- UndoType's are registered and poll the context to check if they
should be used when performing an undo push.
- Mode switching is used to ensure the state is correct before
undo data is restored.
- Some undo types accumulate changes (image & text editing)
others store the state multiple times (with de-duplication).
This is supported by checking UndoStack.mode `ACCUMULATE` / `STORE`.
- Each undo step stores ID datablocks they use with utilities to help
manage restoring correct ID's.
Needed since global undo is now mixed with other modes undo.
- Currently performs each undo step when going up/down history
Previously this wasn't done, making history fail in some cases.
This can be optimized to skip some combinations of undo steps.
grease-pencil is an exception which has not been updated
since it integrates undo into the draw-session.
See D3113
For this we use a new shader that gets its data from a uniform array.
The vertex shader positions the vertices using this data.
Using glUniform is way faster than using imm for that matter.
Like BLF rendering, UI icons are always (as far as I know) non-occluded and
displayed above everything else. They also do not overlap with text, so
they can be batched at the same time.
This adds less than a megabyte of mem usage.
FT_Get_Kerning was the 2nd hotspot when profiling. This commit completely
removes this cost.
One concern though: I don't know if the kerning data is constant for every
size, but it seems to be the case. I tested different fonts at different
DPI scalings and saw no differences.
I left a flag to quickly debug if something is wrong.
But now that everything uses shaders, it seems to be alright since a shader
is always set active before drawing.
I've made a separate version of the geometry shader that works with the full
3D modelviewmat.
This commit also includes some fixes inside blf_batching_start().
You can now use BLF_batching_start and BLF_batching_end to batch every
drawcall to BLF together, minimizing the overhead introduced by BLF and the
OpenGL driver.
These calls cannot be nested (for now).
If the modelview matrix changes, previously batched calls are issued and
the process resumes with the new matrix.
However, the projection matrix and GL scissors MUST not change.
This is not a perfect win just yet. It's now calling glBufferSubData for
every call (instead of using glMapBufferRange which is almost faster), but
with this system we will be able to batch drawcalls together.
See next commit.
This allows us to specify the number of vertices to upload to the GPU.
This is to keep the same allocation in system memory but send the least
amount of data to the GPU/driver.
- See `--log` help message for usage.
- Supports enabling categories.
- Color severity.
- Optionally logs to a file.
- Currently used to replace printf calls in the wm module.
See D3120 for details.
Two issues are fixed with this commit:
1) When we build OIIO (on unixoid build environments) and no /src/doc/oiiotool was present we had no build target for it (which led to a build error). As we don't need docs for OIIO, we disable it now.
2) We specified a var that OIIO does not recognize (was removed upstream a long time ago): ILMBASE_VERSION.
Select not only the objects directly linked to the selected collection,
but also the objects in the nested collections, just as we can
do from the outliner.
Use Shift+G > Collection. If there is only one collection, it just selects it,
if there are multiple ones user get to pick which one to select.
This is the same behaviour we have for groups. Note, we only select objects
directly in the collection, not the ones in any nested collection.
Feature suggested by Pablo Vazquez (venomgfx)
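From Python this would presumably map to the Select Grouped operator with a
collection type (the operator/enum names here are an assumption):

    import bpy

    # Equivalent of Shift+G > Collection on the active object.
    bpy.ops.object.select_grouped(type='COLLECTION')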
This means smaller imm buffer usage.
This does not reduce the number of drawcalls.
This uses a geometry shader, which is slow for the GPU, but given we are really
CPU bound in this case, it should not matter.
A perfect implementation would:
- Set the glyph coord in a bufferTexture and just send the glyph ID to the
GPU to read the bufferTexture.
- Use GWN_draw_primitive and draw 2*strllen triangle and just retrieve the
glyph ID and color based on gl_VertexID / 6.
- Stream fixed size buffer that the Driver can discard quickly but this is
the same as improving IMM directly.
This update adds a link to the Wikipedia article "Euler angles" to the description of the mathutils.Euler class.
I initially was not sure what a "Euler" represented in the Blender API, but found the Wikipedia article helpful. I believe others will find the link helpful too if it appears in the class documentation.
This is similar to the Wikipedia links that appear in the mathutils.Matrix class, e.g: https://docs.blender.org/api/blender_python_api_current/mathutils.html?highlight=euler#mathutils.Matrix.adjugate
Author: @justasb
Reviewers: campbellbarton, trumanblending, Blendify
Reviewed By: Blendify
Subscribers: Blendify
Tags: #bf_blender
Differential Revision: https://developer.blender.org/D3077
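For context, a tiny example of the class being documented:

    import math
    from mathutils import Euler

    # A rotation of 90 degrees around Z stored as Euler angles (XYZ order),
    # then converted to a 3x3 rotation matrix.
    rot = Euler((0.0, 0.0, math.radians(90.0)), 'XYZ')
    mat = rot.to_matrix()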
Fix for T54437: Sequencer preview uses last updated scene
The fix started in master, moving EvaluationContext initialization
before we leave `deg_evaluate_on_refresh()`.
Upon merging master we can fix the actual issue, which was to set
the EvaluationContext depsgraph even if the depsgraph was already updated.
This is required for T54437 (sequencer preview uses last updated scene),
although the fix itself needs to be in 2.8, for the 2.8-specific
initialization code.
Introduce a UI batch cache. For the moment it's only used by widgetbase, so
it's left in interface_widgets.c. If it grows, it can have its own file.
Like all preset batches (batches used by the UI context), VAOs must be refreshed
each time a new window context is bound.
This still does 3 GWN_batch_draw calls in the worst case, but at least it does
not use the IMM API.
I will continue and batch the 3 calls together, since we are really CPU
bound, so shader complexity does not really matter.
I cannot spot any difference on all the widgets I could test. I did not use
any unit tests so I cannot tell if there are really any defects.
This is not a complete rewrite, but it addresses the top bottleneck found
after a profiling session.
Use the more generic id->recalc flag.
Also sanitize flag flushing from settings to the particle system.
We need to do such a flush before triggering a point cache reset, since
the point cache reset will do some logic based on which flags are set.
This will solve a crash caused by threaded update setting
some bitflags while the point cache reset is in progress.
The way the particle state is to be accessed or used did not change
in Blender 2.8, so the drawing code should follow the old design.
This code is somewhat duplicated from drawobject.c, but the old draw
code is on its way to being removed anyway.
This fixes an issue with disappearing particles when tweaking the number
of particles.
Tagging based on components might not be granular enough.
For example, for particles we would want to know what part
of particles was changed exactly. For the flushing we wouldn't
worry too much, because we will want less granular updates
there anyway.
Roughness baking previously defaulted to 1.0 for all diffuse materials;
now we also bake roughness values of Oren-Nayar and Principled Diffuse.
Differential Revision: https://developer.blender.org/D3115
How to use: Select a few objects, and press "M" in the viewport.
If you hold ctrl the objects will be added to the selected collection.
Otherwise they are removed from all their original collections and moved
to the selected one instead.
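The same can presumably be driven from Python via the operator (the property name
here is an assumption):

    import bpy

    # Move the selected objects into the collection picked by its index in the menu.
    bpy.ops.object.move_to_collection(collection_index=1)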
Development Notes
=================
The ideal solution would be to implement an elegant generic multi-level
menu system similar to toolbox_generic() in 2.49.
Instead I used `uiItemMenuF` to achieve the required nesting of the menus.
The downside is that `uiItemMenuF` requires the data its callback uses to
always be valid until the menu is discarded. But there is no callback we
can call when the menu is discarded for operators that exited with
`OPERATOR_INTERFACE`.
That means we are using statically allocated data that is only freed the next time
the operator is called. Which also means there will always be some
memory leakage.
Reviewers: campbellbarton
Differential Revision: https://developer.blender.org/D3117
When importing an xfov curve, we must transform the data to
lens opening angles in degrees. While the curve value itself is
correctly transformed, the transformation of the tangents had been
forgotten. This is fixed now.
Use C++11 threads when available, and native critical sections on Windows.
Later on we can remove the pthread code when C++11 becomes required.
Differential Revision: https://developer.blender.org/D3116
Drawcalls per window redraw on the default layout:
- 4100+ without patch
- 1270 with patch
These drawcalls meant a lot of driver overhead, since each corresponds
to one glMapBuffer, which is slow.
- Get memory usage from MemFile instead of the MEM API;
  avoids possibly invalid values when threads allocate memory.
- Use size_t instead of uint and uintptr_t to store size.
- Rename UndoElem.str -> filename
- Rename MemFileChunk.ident -> is_identical
Random numbers for step offset were correlated, now use stratified samples
which reduces noise as well for some types of volumes, mainly procedural
ones where the step size is bigger than the volume features.
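A small Python illustration of the difference (not the Cycles kernel code):
stratified step offsets cover the [0, 1) range evenly, instead of clumping the
way independent uniform samples can.

    import random

    def uniform_offsets(n):
        # Independent uniform samples: can clump and leave gaps.
        return [random.random() for _ in range(n)]

    def stratified_offsets(n):
        # One jittered sample per stratum: every 1/n-wide interval gets exactly one sample.
        return [(i + random.random()) / n for i in range(n)]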
Remain convinced that it should not be possible for undo code to run in
parallel with DEG eval... But for now, this should prevent static
override code from diving into this collection.
This refactor modernises the use of framebuffers.
It also touches a lot of files so breaking down changes we have:
- GPUTexture: Allow textures to be attached to more than one GPUFrameBuffer.
  This allows creating and configuring more FBOs without the need to attach
  and detach textures at drawing time.
- GPUFrameBuffer: The wrapper starts to mimic OpenGL a bit more closely. This
  allows configuring the framebuffer inside a context other than the one
  that will be rendering the framebuffer. We do the actual configuration
  when binding the FBO. We also keep track of config validity and save
  the drawbuffers state in the FBO. We remove the different bind/unbind
  functions. These make little sense now that we have separate contexts.
- DRWFrameBuffer: We replace DRW_framebuffer functions by GPU_framebuffer
ones to avoid another layer of abstraction. We move the DRW convenience
functions to GPUFramebuffer instead and even add new ones. The MACRO
GPU_framebuffer_ensure_config is pretty much all you need to create and
config a GPUFramebuffer.
- DRWTexture: Due to the removal of DRWFrameBuffer, we needed to create
  functions to create textures for those framebuffers. Pool textures are
  now using default texture parameters for the texture type asked.
- DRWManager: Make sure no framebuffer object is bound when doing cache
filling.
- GPUViewport: Add new color_only_fb and depth_only_fb along with FB API
  usage update. This lets draw engines render to color/depth-only targets
  without the need to attach/detach textures.
- WM_window: Assert when a framebuffer is bound when changing context.
  This balances the fact that we do not track the OpenGL context inside GPUFrameBuffer.
- Eevee, Clay, Mode engines: Update to new API. This comes with a lot of
code simplification.
This also comes with some cleanups in some engine code.
This is a bit useless because GPU lamps are only used by the game engine,
and it is planned to be "removed" in some way.
Doing this to clean gpu_framebuffer.c.
This includes a few modifications:
- The biggest one is calling glActiveTexture before doing any call to
  glBindTexture for rendering purposes (the uniform value depends on it).
  This is also better to know what's going on when rendering the UI. So if
  there are missing UI elements because of this commit, look for this first.
  This allows us to have "fewer calls" to glActiveTexture (I did not
  measure the final count) and fewer checks inside GPU_texture.
- Remove use of GL_TEXTURE0 as a uniform value in a few places.
- Be more strict and use BLI_assert for bad usage of GPU_texture functions.
- Disable filtering for integer and stencil textures (not supported by
OGL specs).
- Replace bools inside GPUTexture by a bitflag supporting more options to
identify texture types.
Undo sometimes reserved too much space in the buffer; now assert when this
happens and allocate the exact size needed.
Note: this prepares for moving text editor undo out of the text block (D3113),
which will split the undo buffer into a list of undo steps.
This commit essentially introduces a new RNA property flag, which when
set prevents the affected property from being processed at all in comparison
code (also used to automatically generate static override rules).
The idea is to use it on very low-level data in RNA, like e.g. mesh's
geometry or psys' particles collections.
For now it is only applied to psys' particle collections; on the main mesh of
the Agent 327 pigeon, it goes from 100ms to 0.5ms for a full
auto-override-generating comparison...
Also added some new RNA property helper funcs to check on comparable and
overridable status.
With better directory layout and more proper include
statements we can avoid several local modifications,
such as changing config.h for Windows Glog and the
ones related to pass-through statements in logging
headers in Glog.
This commit also makes unused functions not-a-warning
for external code.
The list of editor-types is rather long by now, so better to arrange them into
sections.
Original patch by @jeske with updates by @Blendify and myself.
Design Task: T36028
Patch: D3112
* Pressing "OK" wouldn't close Blender anymore
* Using File -> Quit would use popup version, not OS native window
Cleaned up code a bit to avoid duplicated logic.
Steps to reproduce were:
* Open Blender (no need for factory settings, "Prompt Quit" needs to be enabled)
* Edit the file (e.g. translate some object)
* Quit Blender but don't skip the quit prompt
* Press "Save & Quit"
* Save the file
Not sure if Windows supports the "Save & Quit" behavior, so this may not have
applied to Windows.
You only had to close Blender through File -> Quit.
Leaks happened because WM_exit() was called from within an operator; the UI wasn't able
to free some of its heap data then. This data was the handler added in
uiTemplateRunningJobs() and the IDProperty group added in uiItemFullO_ptr_ex().
There was obviously a general design issue which only became visible in this
specific case.
We now delay the WM_exit call by wrapping it into a handler that gets registered
as usual. I didn't see a better way to do this, all tricks done in
ui_apply_but_funcs_after() to prevent leaks didn't work here. In fact they may
be redundant now, but I'm not brave enough to try ;)
Seems we can not use the include-directories-order trick, since
files are included from inside a ".." string, which forces the current
directory to be checked first.
This module has no use now with the new DrawManager and DrawEngines and it
is using deprecated paths.
Moving gpu_shader_fullscreen_vert.glsl
to draw/modes/shaders/common_fullscreen_vert.glsl
Steps to reproduce were:
* Append a workspace (via '+' icon) - make sure its from the default workspaces.blend
* Activate it
* Should crash
Was accessing data from view-layer which wasn't updated yet (and thus could be
NULL). Crash occurred after rB8153f89518b4a.
@campbellbarton, you may want to check if all object-mode stuff still works as
expected, not sure what's the state of it.
It is possible that datablock will not be re-used for the new
dependency graph building. Freeing function was freeing all
the nested pointers of the datablock, but not the datablock memory
itself.
MSVC still defines __cplusplus as 199711L until it is in full conformance with the newer C++ standards; however, the things we need from the standard are fully supported, hence a check for the MSVC version was needed.
With the API recently added to gawain, it is now possible to update the vbos linked to the batch.
So the batch does not have to be destroyed.
The optimization is more noticeable when sculpting on low-poly meshes
Without this a "Clearcoat" link could be moved to "Clearcoat Normal"
for example, which doesn't make much sense.
Differential Revision: https://developer.blender.org/D3105
Increasing the sampling dimensions like this is not optimal, I'm looking
into some deeper changes to reuse the random number and change the RR
probabilities, but this should fix the bug for now.
We revert to malloc/realloc and manually manage the upload.
There seems to be a performance penalty from using glMapBuffer on some
hardware; the preferred way is glBufferData(NULL) followed by glBufferSubData.
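For reference, a minimal sketch of the orphan-and-upload pattern mentioned above (assumed usage of the GL calls, not the exact gawain code):
```
/* Re-specify the buffer store with NULL to orphan the old storage, then
 * upload the CPU-side copy. Assumes GL function pointers are loaded. */
#include <GL/glew.h>

static void vbo_upload(GLuint vbo, const void *data, GLsizeiptr size)
{
	glBindBuffer(GL_ARRAY_BUFFER, vbo);
	glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_DYNAMIC_DRAW); /* orphan old store */
	glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);            /* upload new data */
}
```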
PointCache was having a collection of items of PointCache type, having a
collection of items of PointCache type, having...
Nuff said.
For now, I chose the 'ugly' way to fix it, that is, the one that changes
nothing in the API and scripts using it: we define another 'PointCacheItem'
RNA type for items of our point cache collection, which has the exact same
interface as PointCache except for the collection.
This is doomed to be rewritten at some point anyway, not worth spending
time trying to define a really correct data layout for now.
- Upload the data to the GPU directly when creating the element buffer in
GWN_indexbuf_build_in_place().
- Convert data in place when squeezing the indices and removing the need
for another allocation.
- GWN_indexbuf_build_in_place() can be used with already-used element
buffers to re-upload their data without changing the vbo id (keeping vaos
up to date).
We now alloc a vbo id on creation and let OpenGL manage its memory directly.
We use glMapBuffer to get this memory location.
This enables us to reuse and modify any vertex buffer directly without
destroying it with its associated Batches.
This commit does not really improve performance but will let us implement
more optimizations in the future.
We can also resize the buffer, though this can be slow if we need to keep
the existing data.
The addition of the usage hint makes dynamic buffers not a special case
anymore, simplifying things a bit.
* In the Collada Module parameters are typically ordered
in a similar way. I changed this to:
extern std::string get_joint_id(Object *ob, Bone *bone);
* The Object parameter was not used in get_joint_sid().
I changed this to:
extern std::string get_joint_sid(Bone *bone);
When a name property is defined for collection's struct, but no name is
actually set, we want to also fallback to index case. We cannot handle
empty names to address items of a collection!
We had a mix of two issues here actually:
* First, Brushes are currently using their own sauce for custom previews,
this is not great, but moving them to use common ImagePreview system of
IDs is a low-priority TODO. For now, they should totally ignore their
own ImagePreview.
* Second, BKE_icon_changed() would systematically create a PreviewImage
for ID types supporting it, which does not really make sense; this
function is merely here to 'tag' previews as outdated. Actual creation
of previews is deferred to later, when we actually need them.
They are used to start and end colored output in the console.
Use with care, it is up to you to check that the console actually
supports truecolor ANSI.
In the future we can extend this to other consoles and platforms.
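A minimal example of the escape sequences involved (assuming the terminal really supports 24-bit color; the helper name is made up):
```
#include <stdio.h>

/* Wrap a message in a 24-bit ("truecolor") ANSI foreground color,
 * then reset the attributes. */
static void print_truecolor(const char *msg,
                            unsigned char r, unsigned char g, unsigned char b)
{
	printf("\x1b[38;2;%d;%d;%dm%s\x1b[0m\n", r, g, b, msg);
}
```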
Previous approach was not clear enough and caused problems.
UBOs were taking slots and not releasing them after a shading group, even
if the UBO was only for that Shading Group (notably the nodetree UBO,
since we now share the same GPUShader for identical trees).
So I choose to have a better defined approach:
- Standard texture and ubo calls are assured to be valid for the shgrp
they are called from.
- (new) Persistent texture and ubo calls are assured to be valid across
shgrps unless the shader changes.
The standard calls are still valid for the next shgrp but are not assured
to be so if this new shgrp binds a new texture.
This enables some optimisations by not adding redundant texture and ubo
binds.
Unity itself is deprecated, but the API is also supported by KDE and the GNOME Dock extension,
which means that it will be useful for a wide variety of distributions.
To get a progress bar, the system must have a blender.desktop file and libunity installed.
The need for libunity is annoying, but the only alternative would be to integrate a DBus library...
Reviewers: campbellbarton, brecht
Differential Revision: https://developer.blender.org/D3106
Basically, don't do full in-place copy of node tree datablock if it's
already expanded. Current way how node tree is evaluated is fully
built around the idea that evaluation copies values from original
to copied datablocks.
Changing links is handled on another level.
Old solution was to create a new vbo and copy it to the location of
the old vbo hoping for the batches to update their vaos before drawing.
The issue is that the new VAO caching is not updating the VAOs at all
unless the shader interface changes.
So unless we expand gawain to support updating of vertex buffers (in a
better way than the current "dynamic" buffer) we need to delete every batch
linked to the vbo we want to recreate.
This solution might have performance implications.
Requires BLI_utildefines.h to be included first,
(already noted in other inline code).
Possible alternative could be to move BLI_assert into own header.
Even if they are there for safety, they are not free to use!
On my system (Mesa + AMD Vega GPU) calling:
glBindVertexArray(1);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
in a loop shows the same overhead as a full vao switch (which is more
or less 10 times slower than just calling glDrawArrays).
Moreover, now that we use OpenGL 3.3, binding a VAO is REQUIRED to issue a
drawcall, so it is guaranteed to be overwritten before the next drawcall.
Problem can only happen if someone draws directly with opengl commands.
This allows drawing multiple primitives of the types
GWN_PRIM_LINE_STRIP
GWN_PRIM_LINE_LOOP
GWN_PRIM_TRI_STRIP
GWN_PRIM_TRI_FAN
GWN_PRIM_LINE_STRIP_ADJ
with only one drawcall. This should speed up some areas that are really
sensitive to drawcall counts: UV drawing, hair drawing...
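A small sketch of how several strips can share one draw call using a primitive restart index (an assumed mechanism shown for illustration; gawain's internals may differ):
```
#include <GL/glew.h>

#define RESTART_INDEX 0xFFFFFFFFu

/* Draw every strip stored in a single element buffer with one call;
 * RESTART_INDEX entries in the index buffer start a new strip. */
static void draw_all_strips(GLuint vao, GLsizei index_count)
{
	glEnable(GL_PRIMITIVE_RESTART);
	glPrimitiveRestartIndex(RESTART_INDEX);
	glBindVertexArray(vao);
	glDrawElements(GL_LINE_STRIP, index_count, GL_UNSIGNED_INT, NULL);
}
```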
For IDProps IDarray, IDP_EqualsProperties was called for each item,
instead of IDP_EqualsProperties_ex, discarding value of `is_strict`
option.
Probably not an issue with current code, though.
The issue was only visible with copy-on-write enabled, and related to the
fact, that dependency graph builder binds original FCurves.
For now use smallest patch possible to make things to work and to make
draguu happy.
Need to think of a smarter way to deal with drivers, bones and view layers.
I tried to use real multisampling but the main problem is the outline detection that needs to have matching depth samples.
So adding FXAA instead. Always on for now, may add a parameter for it later.
One thing to note is that we need to copy the final output once again to the main color buffer because we cannot swap the dtxl textures (they can be referenced elsewhere like GPUOffscreen).
We could improve upon this and add TAA on top if viewport is still.
This means that rendering clay with AO only needs 1 geometry pass.
Thus greatly improving performance of poly-heavy scenes.
This also fixes a self-shadow issue in the AO, making low sample counts
look way better.
We also do not need to blit the depth anymore since we
are doing a fullscreen shading pass.
The constant cost of running a deferred shading pass is negligible.
This includes quite a bit of code cleanup inside clay_engine.c.
The deferred pipeline is only enabled if at least one material needs it.
Multisampling is not supported yet.
Small hacks when doing deferred:
- We invert the normal before encoding it for precision.
- We put the facing direction into the sign of the mat_id.
- We dither the normal to fight the low bitdepth artifacts of the normal
buffer (which is 8bits per channel to reduce bandwidth usage).
- The common names in computer science are 'getters' and 'setters', so by
adding these names to the documentation (while 'get' and 'set' are still
also mentioned) we improve findability. Having 'Getters/Setters' as a
title also makes it clearer that this example is not just about
getting or setting the property value.
- Added a little prefix to each printed value, so that the print statement,
expected output, and real output can be matched more easily.
The old example had two downsides:
- It promoted a blocking UI design, where the user is shown a popup
before actually executing the operator.
- It didn't show how to actually use the property values.
The new code avoids these mistakes. The properties are also shown in the
redo panel in the 3D view.
Note that I also changed the bl_idname, as this is an example about
properties, not about dialogue boxes, and changed the class name to use
the standard operator naming convention.
I also extended the example to include a panel that sets multiple
properties of the operator, since I see questions about this relatively
frequently.
Sequencer rendering can use multisample render targets. Be sure to sync
those after rendering.
Also disable the sample loop when not needed.
Do note that currently the color correction is broken with the sequencer.
Looks like someone changed the signature of some RNA update callback,
and for some reason that 'change skey' update function was not updated
(or later got merged from master)...
We'll need RNA to check for its func signatures, some day...
When adding scene strips to the sequencer, the wrong scenes were
getting added if some were skipped. For example:
given 4 scenes (A, B, C, D), if you're trying to add the last 3 scenes
(B, C, D) as strips to the first scene (A), it would end up adding
"A, B, C" instead of "B, C, D" as expected.
Fix provided by Andrew (signal9).
Calling the rendering operator seems to kill any other WM_job running, leaving
uncompiled materials in a GPU_MAT_QUEUED state. This then made the probe update
loop indefinitely (all_materials_updated remaining false).
To fix this, we resume compilation for materials that are in this state.
Cancelling the render before all materials are compiled could leave certain
materials uncompiled. Fortunately, this is not allowed as of now.
Need to clear the ID recalc flag on load. Otherwise it's possible to have
some IDs considered always updated by Cycles, when they were saved
in a tagged-for-update state.
Thanks Bastien for feedback and review!
Nothing user visible, only things needed for multi-object support,
making picking functions more flexible too.
- Support passing in an initialized hit-struct,
so it's possible to do multiple nearest calls on the same hit data.
- Replace manhattan distance w/ squared distance
so they can be compared.
- Return success to detect changes to a hit-data
which might already be initialized (also more readable).
This is mostly to avoid re-compilation when using undo/redo operators.
This also has the benefit of reusing the same GPUShader for multiple materials using the same nodetree configuration.
The cache stores GPUPasses that already contains the shader code and a hash to test for matches.
We use refcounts to know when a GPUPass is not used anymore.
I had to move the GPUInput list from GPUPass to GPUMaterial because it contains references to the material nodetree and cannot be reused.
A garbage collection is hardcoded to run every 60 seconds to free every unused GPUPass.
* Suspicious usage of pointer:
short *type = 0; // this creates a null pointer
When this is later used for anything, Blender would crash.
After following the code and checking what happens, I strongly believe
the author wanted to use a short and not a pointer to a short here.
* A local variable was reused later in the same function.
While this did no harm, I still felt it was better to use a different
name here to keep things more separated:
- moved variable declarations into the loop (for int a=0; ...)
- renamed uv_images to uv_image_set
- renamed index variable from i to j in inner loop that
reused same index name from outer loop
The iterator was redeclared 3 times. I fixed this to avoid future issues.
I commit separately so the changes are less cluttered all over
the place.
The variable child was redeclared multiple times in the same function.
While this has not created any issues, I still changed this to avoid
confusion and keep the usage of the variables more local.
The function validateConstraints() potentially causes a null pointer
exception. I changed this so that the function returns a failure as soon
as the validation fails. This avoids falling into the null pointer trap.
The 2 methods add_bezt() and create_bezt() do almost the same thing.
I combined them both into add_bezt() and added the optional parameter
eBezTriple_Interpolation ipo
Don't write the multichannel metadata when there is only a single layer,
and don't unnecessarily consider single layer images with Blender metadata
as multi layer.
This saves a little memory and copying in the kernel by storing only a 4x3
matrix instead of a 4x4 matrix. We already did this in a few places, and
those don't need to be special exceptions anymore now.
This is in preparation of making Transform affine only, and also gives us
a little extra type safety so we don't accidentally treat it as a regular
4x4 matrix.
The purpose of the previous code refactoring is to make the code more readable,
but combined with this change benchmarks also render about 2-3% faster with an
NVIDIA Titan Xp.
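As a rough sketch of the 4x3 storage described above (illustrative types only, not Cycles' actual Transform declaration):
```
/* An affine transform only needs the upper 3 rows; the last row of the
 * equivalent 4x4 matrix is implicitly (0, 0, 0, 1). */
typedef struct Transform4x3 {
	float m[3][4];  /* rotation/scale in columns 0-2, translation in column 3 */
} Transform4x3;

static void transform_point(const Transform4x3 *t, const float p[3], float r[3])
{
	for (int i = 0; i < 3; i++) {
		r[i] = t->m[i][0] * p[0] + t->m[i][1] * p[1] + t->m[i][2] * p[2] + t->m[i][3];
	}
}
```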
This leads to fewer lookups into the GWNShaderInterface and fewer uniform uploads.
We still keep a legacy path so that Builtin uniforms can still work. We might restrict this path to Builtin shader only in the future.
You can now cancel your renders that are too long. This will still output the current status of the render. For example if you cancel at 50% rendering progress, you will have a render result with only half the render samples.
This also fixes a bug with the probe debug display when there were more than 2 probes. ped->probe_id was equal to 0 for all planar probes until the next frame, resulting in the planar probe debug showing probe 0 for all of them.
If no custom URL was set, add-ons would get a "Report a Bug" button opening
the default developer.blender.org bug tracker. Now we only add this default
button if the add-on is bundled and not installed by the user.
Premise: When pose bones are selected, applying a pose library should
only affect the selected bones.
This commit fixes a bug where the pose was also applied when there was
no overlap between the selected bones and the bones in the pose. For
example, applying a pose which contains only keyframes for the left
hand, while only right-hand bones are selected, would apply the pose
to the left hand anyway.
The code is now also slightly more efficient; the removed 'selcount'
counter was only used as a binary (i.e. zero or non-zero). It's now
stored as a bitflag instead.
Currently only covering handful of files from reports about wrong fps detected.
It will need D3083 applied first to get tests passed, also tests themselves
are to be committed to svn.
But there are some python code which needs to be reviewed, like blendfile
passed to run_blender().
Reviewers: sybren, mont29
Reviewed By: sybren, mont29
Subscribers: mont29
Differential Revision: https://developer.blender.org/D3096
This is an issue with which value to trust: fps vs. tbr. They both can be
somewhat broken. Currently the idea is:
- If file was saved with FFmpeg AND we are decoding with FFmpeg we trust tbr.
- If we are decoding with Libav we use fps (there does not seem to be tbr in
Libav, unless I'm missing something).
- All other cases we use fps.
Seems to work all good for files from T53857, T54148 and T51153. Ideally we
would need to collect some amount of regression files to make further tweaks
more scientific.
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D3083
- put render iterator in own scope
(it would shadow its own variable if used multiple times).
- enforce semicolon at end of iterator macros.
- no need to typedef one-off macro structs.
Each AnimData block has a set of Blend/Extrapolation/Influence settings
that can be used to control how the active action is blended with the
NLA stack. However, these settings were not getting copied over to the
newly created strips (as the push-down code existed long before these
settings were added).
This commit solves this in several ways:
* Active Action Blend/Extrapolation/Influence settings now get copied
to the new strips when adding them to the NLA stack via Push Down.
Note: This doesn't happen when there are no existing NLA tracks,
as these settings don't get used in that case.
* Strip Influence will be copied across when inf < 1.0 (i.e. when a
non-default value is used), to maintain the effect. To make this work,
the influence value will get added as a keyframe to the strip's
"Influence" Control FCurve.
- See code comments for an alternative approach and why that was not chosen
- Strip Time still doesn't get keyframes added automatically yet.
* To ensure the "extrapolation mode" settings don't get always overwritten,
I've put in place a compromise: the extrapolation will only get changed
if the chosen setting will cause problems (i.e. hold forward & back -> hold forward
if there are other tracks before it already).
Not safe for backporting to 2.79[x] stable releases.
Now repeating the operator will use the previously chosen offset, either with
the modal operator or typed in. The modal operator will still start at zero.
Reverts rBb9ae517794765d6a1660 and fixes the issue properly. Old fix could cause
NULL to be passed to functions that expect all arguments to be non-NULL.
The idea is to handle the most common case, the symmetric frustum, separately, and to make a simple but efficient calculation for it.
The new radius is usually 98% of the radius of the asymmetric solution.
Thanks to @fclem for reviewing the patch on IRC
Also get rid of the static var and initialization.
This enables the user to see the progress on the info header.
Closing Blender or reading a file also kills the job, which is good.
Unfortunately, this job cannot be interrupted by users directly. We could make it interruptible, but we need a way to resume the compilation.
On GPUs like `AMD Radeon HD 7570M` and `Intel(R) HD Graphics 4000` this solution improves performance by hundreds or even thousands of times, depending on the resolution.
Reviewed By: @brecht and @fclem
Differential Revision: https://developer.blender.org/D3095
Vertex group remapping utility function,
now shared between object join and array modifier cap-ends.
Weights which don't exist are removed.
D3092 by @Foaly
Main purpose is to make it possible to cover FPS detection with regression test.
But it might also be handy for some other scripters.
Thanks Campbell for review!
The issue was happening with fast Gaussian blur, and was caused by NaN pixels
in the input buffer.
Now made it so the Map Range output does not produce NaN, by returning an arbitrary
value of 0. Still better than NaN!
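A minimal sketch of the kind of guard described above (illustrative, not the exact compositor code):
```
/* When the input range collapses, the division would produce NaN/inf,
 * so return 0 instead: arbitrary, but finite. */
static float map_range(float value, float from_min, float from_max,
                       float to_min, float to_max)
{
	const float range = from_max - from_min;
	if (range == 0.0f) {
		return 0.0f;
	}
	return to_min + ((value - from_min) / range) * (to_max - to_min);
}
```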
Previous fix for T53430 caused T54200.
The edge case for soft & hard cuts weren't working,
where the strip used start/end-still & the frame was placed exactly on
the start/end of the sequence content.
T54200 fixed the end-still case but broke hard-cuts for all other cases.
This fixes the case for soft/hard cuts with/without start/end-still.
I keep reading that texture painting is not working yet. However it is fully
working. We even have a "Full Shading" option in the viewport display panel.
Clay/EEVEE still need their UI figured out. But the context itself is doing
its part after this patch, and at least for Cycles it's working like 2.79.
Instead of creating a new instancing shading group without attribs, we now have instancing calls. The benefit is that they can be culled.
They can be used in conjunction with the standard and generate calls, but the shader must support it (which is generally not the case).
We store a pointer to the actual count so that the number can be tweaked between redraws.
This will make multi-layer rendering more efficient.
Introduced an explicit ID property node for drivers in the depsgraph,
so it is clear what is the input for driver, and what is the
output.
This also solved relations builder throwing lots of errors
due to ID property not being found.
It was too tricky to know ahead of time if an object would still
be visible in the new window/workspace/scene/layer combination,
especially since other windows may share some of these data-blocks.
So store the context, make the change, then check if the object is
still visible, freeing mode data if it's not.
It was possible to have relations like A -> B -> C -> A (the important thing is
that no other operations point into this cluster) which were not detected
or reported by dependency cycle solver.
Now this is solved by ensuring we don't leave unvisited nodes behind.
This is probably a better way to handle it: instead of totally
discarding scaling of non-free axes, keep the ratio between them.
Basically the logic of the constraint is now that it rescales the
object uniformly in the non-free axis plane in order to force the
total volume change to the desired value.
Note that this code will likely be generalized,
currently each new case is a little different though
so it's too early to move them into general functions.
This (now removed) code used deprecated gl_Vertex draws. It was doing
background drawing (color gradient, flat background) which is not used
by any engine.
It seems the reason the old version of the constraint overcompensates
as reported in T48079 is to allow the constraint to work with uniform
scaling on all axes. However the way it did that actually _requires_
uniform scaling for the constraint to work correctly, and breaks if
only the free scaling axis is used to avoid redundant channels.
This version attempts to allow both by discarding scaling in the non-
free directions instead of applying the correction on top of it.
This merges changes in internals, runtime-only of existing custom
normals code, which make sense as of themselves, and will make diff of
soc branch easier/lighter to review.
In the details, it mostly changes two things:
* Now, smooth fans (aka MLoopNorSpaceArray) can store either loop
indices, or pointers to BMLoop themselves. This makes sense since in
BMesh, it's relatively easy to get index from a BMElement, but nearly
impracticable to go the other way around.
* First change enforces another, now we cannot rely anymore on `loops`
being NULL in MLoopNorSpace to detect single-loop fans, so we instead
store that info in a new flag.
Again, these are expected to be totally non-functional changes.
around the volume.
We generate a tight mesh around the active voxels of the volume in order
to effectively skip empty space, and start volume ray marching as close
to interesting volume data as possible. See code comments for details on
how the mesh generation algorithm works.
This gives up to 2x speedups in some scenes.
Reviewed by: brecht, dingto
Reviewers: #cycles
Subscribers: lvxejay, jtheninja, brecht
Differential Revision: https://developer.blender.org/D3038
Selection code relies on being able to set the depth functions
however passes have their own depth settings.
Add DRW_state_lock to ignore passes settings for particular flags.
This fixes occlusion queries cycling through objects under the cursor.
By default select wasn't picking the nearest object,
this could have been fixed by not clearing the depth buffer,
but calling GPU_select_(begin/end) without the bound frame-buffer
caused issues for depth-picking. So move GPU_select begin/end to a
callback.
This also has the advantage that only needs to populate the engines once
to draw two passes.
Note that cycling through objects fails with occlusion queries still,
will fix shortly.
This is very efficient and adds a pretty low overhead (0.1ms of drawing time for 10K objects passing through all tests, on my i3-4100M).
Like the rest of the DRWCallState, the test is "cached" until the view matrices change.
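A hypothetical shape of that caching, purely for illustration (names and layout made up, not the actual DRWCallState):
```
#include <stdbool.h>

/* The per-call state remembers for which "view version" its culling result
 * was computed, so the test only reruns when the view matrices change. */
typedef struct CallCullState {
	unsigned int view_version;
	bool visible;
} CallCullState;

static bool call_is_visible(CallCullState *state, unsigned int current_view_version,
                            bool (*cull_test)(const void *user_data), const void *user_data)
{
	if (state->view_version != current_view_version) {
		state->visible = cull_test(user_data);
		state->view_version = current_view_version;
	}
	return state->visible;
}
```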
In some cases it doesn't make sense for add-ons to be listed for hiding.
Especially for import/export which use minimal UI space.
This adds `bl_info["use_owner"]` to add-ons,
currently defaulting to True for all non Import-Export add-ons.
This is more important now that we will have tighter volume bounds that
we hit multiple times. It also avoids some noise due to RR previously
affecting these surfaces, which shouldn't have been the case and should
eventually be fixed for transparent BSDFs as well.
For non-volume scenes I found no performance impact on NVIDIA or AMD.
For volume scenes the noise decrease and fixed artifacts are worth the
little extra render time, when there is any.
* Use a subsurface color equal to the base color, and give the subsurface
radius skin like values by default. This is how the parameter should
typically be used.
* Use GGX by default, multiscatter GGX is still quite noisy and has some
fireflies so let's keep it optional for now.
Now the only missing bit seems to be in Cycles to pass depsgraph to
builtin_image_float_pixels().
Ideally we could get evaluation context instead of using depsgraph + settings.
But for the other RNA EvaluationContext functions this is how we are doing it.
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D3087
While doing so with Bone_R.001, Bone_R.003, Bone_R.003 etc. is doomed to
issues, doing that on duplicates of actually correctly named bones can
be handy, and safe.
So adding back as an option (was removed in rB702bc5ba26d5).
Flip names operator changed in rB702bc5ba26d5, to some sensible
behavior. But this breaks common workflow of 'duplicate part of the
bones, scale-mirror new ones, and flip their names'.
So now, instead of doing this in two steps, trying to guesstimate which
bones should get which name, just add option to flip names to duplicate
operator itself. Simpler, safer, and much, much more consistent behavior
and predictable results.
This solves an issue with tweaking brush size when interleaving particle edit
and texture paint modes. The issue was caused by texture paint setting more
operator properties than particle edit mode does, which made the window
manager use saved properties for the "missing" ones.
Don't see any reason why we would want to save any of those properties.
This is a regression since rB83b60dac57a1.
Allows each workspace to have its own add-ons on display.
Filtering for: Panels, Menus, Keymaps & Manipulators.
Automatically applies to add-ons at the moment.
Access from the workspace, toggled off by default;
once enabled, add-ons can be white-listed.
See D3076
When changing the mode of an object, apply this to all other
workspaces that share the same active object.
Also copy the object-mode when duplicating workspaces.
Note that setting `glDepthFunc` isn't important;
since the 2.8 branch changes this value it might seem like an error,
however it's harmless in this case - so better make a note of this.
Refactor include:
- Removal of DRWInterface. (was useless)
- Split DRWCallHeader into a new struct DRWCallState that will be reused in the future.
- Use BLI_link_utils for APPEND/PREPEND.
- Creation of the new DRWManager struct type. This will enable us to create more than one manager in the future.
- Removal of some dead code.
This still does not make point density work in Cycles, but at least it passes
the depsgraph down the line.
Note this was working fine before the depsgraph/render refactor to pass
evaluated depsgraph to the engines.
User notes
----------
Compositing, rendering of multi-layers in Eevee should be fully working now.
Development notes
-----------------
Up until now we were still using the same depsgraph for rendering and viewport
evaluation. And we had to go out of our way to be sure the depsgraphs were
updated.
Now we iterate over the (to be rendered) view layers and create a depsgraph to
each one, fully evaluated and call the render engines (Cycles, Eevee, ...) with
this viewlayer/depsgraph/evaluation context.
At this time we are not handling data persistency, Depsgraph is created from
scratch prior to rendering each frame. So I got rid of most of the partial
update calls we had during the render pipeline.
Cycles: Brecht Van Lommel did a patch to tackle some of the required Cycles
changes, but this commit marks these changes as TODOs. Basically Cycles needs to
render one layer at a time.
Reviewers: sergey, brecht
Differential Revision: https://developer.blender.org/D3073
-Make the view and object dependent matrix calculations isolated and separated, avoiding unneeded calculations.
-Add a per-drawcall matrix cache so that we can precompute these in advance in the future.
-Replaced the integer uniform locations of view-dependent-only builtins by DRWUniforms that are only updated once per shgroup.
Now that Eevee has support for offline rendering (F12) we can use it for
the Material previews.
Note: This makes the duplicated UI issue one panel worse. That happens when
Cycles is your scene engine, and Eevee is your workspace engine.
Depth picking needs to read the depth buffer after drawing;
since GPU_select_end runs in a different OpenGL context,
reading the depth buffer there wasn't working.
This caused the last object to be unselectable.
This patch adds F12 offline Freestyle rendering support to Eevee.
Most functionalities are identical with those found in Cycles.
The only major difference is that the per-view layer "use Freestyle" toggle
option is currently placed in the "Passes" panel of the "View Layers"
properties window instead of a "Layer" panel as in Cycles. Since Freestyle
is a post-processed overlay and not a pass, the present option location is
a compromise. To describe this fact, the per-layer "use Freestyle" option
is in a subsection labeled as "Layer".
Reviewers: fclem, brecht, campbellbarton
Reviewed By: fclem, brecht
Subscribers: dfelinto
Differential Revision: https://developer.blender.org/D3084
This separate context allows two things:
- It allows viewports in multi-windows configuration.
- F12 render can use this context in a separate thread and do a non-blocking render.
The downside is that the context cannot be used while rendering, so a request to refresh a viewport will lock the UI. This is something that will be addressed in the future.
Under the hood what does that mean:
- Not adding more mess with VAOs management in gawain.
- Doing depth only draw for operators / selection needs to be done in an offscreen buffer.
- The 3D cursor "autodis" operator is still reading the backbuffer so we need to copy the result to it.
- All FBOs needed by the drawmanager must be created/destroyed with its context active.
- We cannot use batches created for UI in the DRW context and vice-versa. There is a clear separation of resources that enables the use of safe multi-threading.
Offscreen contexts are not attached to a window and can only be used for rendering to framebuffer objects.
CGL implementation : Brecht Van Lommel (brecht)
GLX implementation : Clément Foucault (fclem)
WGL implementation : Germano Cavalcante (mano-wii)
Other implementations are just placeholders for now.
The exporter does export matrix data (4*4 Transformation matrix) only for Skeletal animation. For object animation only exporting to trans/rot/loc is implemented.
This task implements Matrix export also for simple Object animation.
Differential Revision: https://developer.blender.org/D3082
Turns out to be the call that was destroying performance.
I get 18ms->6ms improvement of drawing time with 10 000 unique objects.
And we can still improve upon this!
This started with a fix for an animated Object Hierarchy. Then I decided to clean up and optimize a bit. But in the end this has become a more or less full rewrite of the Animation Exporter. All of this happened in a separate local branch and I have retained all my local commits to better see what I have done.
Brief description:
* I fixed a few issues with exporting keyframed animations of object hierarchies where the objects have parent inverse matrices which differ from the Identity matrix.
* I added the option to export sampled animations with a user defined sampling rate (new user interface option)
* I briefly tested Object Animations and Rig Animations.
What is still needed:
* Cleanup the code
* Optimize the user interface
* Do the Documentation
Reviewers: mont29
Reviewed By: mont29
Differential Revision: https://developer.blender.org/D3070
This is used to determine which voxels are to be considered empty space.
Previously it was hardcoded for converting dense grids to OpenVDB grids
to reduce disk space usage.
This value is also useful for rendering engines to know, i.e. to
optimize ray marching.
Some other software cannot handle grid names with spaces in them. We still check for names with spaces so as to not break old
files.
This fixes T53802.
This replaces the blackbody to RGB code with the simpler and faster one from
Cycles. It's a little different, but the other place using this is the legacy
volume drawing, so no need to stay compatible with that.
Similar to the Principled BSDF, this should make it easier to set up volume
materials. Smoke and fire can be rendered with just a single principled
volume node, the appropriate attributes will be used when available. The node
also works for simpler homogeneous volumes like water or mist.
Differential Revision: https://developer.blender.org/D3033
Technically the original issue is that xof/yof in render result is calculated
for drawing border render. So a simpler patch could be:
```
- rr->xof = re->disprect.xmin;
+ rr->xof = re->disprect.xmin + BLI_rcti_cent_x(&re->disprect) - (re->winx / 2);
```
However everywhere in the code we are getting border directly from re->disprect
which we may as well do here too.
Besides I'm taking this as a chance to get rid of RenderResult in the internal
loop of eevee, to help prepare the code to the upcoming rendering pipeline
changes.
When you were using autosmooth to generate some custom normals, and
created empty custom loop normal data, you would go back to an 'all
smooth' shading, cancelling some sharp edges generated by the mesh's
smooth threshold.
Now we will first tag such edges as sharp, such that shading remains the
same. This is not crucial in current master, but it is for clnors
editing gsoc branch!
Regression caused by earlier commits to improve the automerge behaviour.
In this case, the problems only occurred when moving a selected keyframe
forwards in time to overlap an unselected keyframe.
While it is necessary to ignore duplicates when doing Deselect/Column Select
(where double-updates may result in nothing being selected), for borderselect,
not including the duplicates meant that sometimes, nothing would happen
if you were trying to borderselect keyframes originating from hidden channels.
This was first noticed in the greasepencil-object branch, but affects all
animation channel types.
This is not an ideal solution, but Blender's freeing system is already quite tangled.
So tracking and clearing vao caches when destroying contexts does prevent bad behaviour.
This allows us to:
- Not mock around with tags stored in a global space,
and not to iterate over all datablocks in the database
to clear the tags.
- Properly deal with datablocks which might not be in main database.
While it sounds crazy, it might be handy when dealing with preview,
or some partial scene updates, such as motion paths.
- Avoids the majority of places where depsgraph construction needed bmain.
This is something that could help in the blender2.8 branch.
From tests with a production file here I did not see any measurable slowdown.
Hopefully, there is no functional changes :)
Was temporarily removed when moving object mode to workspace.
Note: there is an issue where eval_ctx->view_layer is NULL on load,
for now pass a view layer argument, we might want to set the value
instead.
This reverts commit 87c72a7d27.
Caused T54121 which breaks blend file saving.
For now crash on exit is preferable.
Possible solution is to free screen-manipulator batches in a separate
loop.
We now continue transparent paths after diffuse/glossy/transmission/volume
bounces are exceeded. This avoids unexpected boundaries in volumes with
transparent boundaries. It is also required for MIS to work correctly with
transparent surfaces, as we also continue through these in shadow rays.
The main visible changes is that volumes will now be lit by the background
even at volume bounces 0, same as surfaces.
Fixes T53914 and T54103.
A major bottleneck of the current implementation is the call to create_bindings() for basically every drawcall.
This is due to the VAO being tagged dirty when assigning a new shader to the Batch, defeating the purpose of the Batch (reuse it for drawing).
Since managing hundreds of batches in DrawManager and DrawCache seems not fun enough to me, I preferred rewriting the batches themselves.
--- Batch changes ---
For this to happen I needed to change the Instancing to be part of the Batch rather than being another batch supplied at drawtime.
The Gwn_VertBuffers are copied from the batch to be instanced and a new Gwn_VertBuffer is supplied for instancing attribs.
This means a VAO can be generated and cached for this instancing case.
A Batch can be rendered with instancing, without instancing attribs and without the need for a new VAO using the GWN_batch_draw_range_ex with the force_instance parameter set to true.
--- Draw manager changes ---
The downside with this approach is that we must track the validity of the instanced batch (the original one). For this the only way (I could think of) is to set a callback for when the batch is getting freed.
This means a bit of refactor in the DrawManager with the separation of batching and instancing Batches.
--- VAO cache ---
Each VAO is generated for a given ShaderInterface. This means we can keep it alive as long as the shader interface lives.
If a ShaderInterface is discarded, it needs to destroy every VAO associated with it. Otherwise, a new ShaderInterface with the same address could be generated and reuse the same VAO with incorrect bindings.
The VAO cache itself is a mix between a static array of VAOs and a dynamic array used when there is not enough space in the static one.
Using this hybrid approach is a bit more performant than the dynamic array alone.
The array will not resize down but empty entries will be filled up again. It's unlikely we get a buffer overflow from this. Resizing could be done on next allocation if needed.
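A very small sketch of the hybrid lookup idea (hypothetical names and layout, not the actual gawain code):
```
#define VAO_STATIC_LEN 4

/* The ShaderInterface pointer is the cache key; the fixed inline array is
 * searched first, the heap array only when the inline part overflowed. */
typedef struct VaoCache {
	const void *interfaces[VAO_STATIC_LEN];
	unsigned int vaos[VAO_STATIC_LEN];
	const void **dyn_interfaces;
	unsigned int *dyn_vaos;
	int dyn_len;
} VaoCache;

static unsigned int vao_cache_lookup(const VaoCache *cache, const void *shader_interface)
{
	for (int i = 0; i < VAO_STATIC_LEN; i++) {
		if (cache->interfaces[i] == shader_interface) {
			return cache->vaos[i];
		}
	}
	for (int i = 0; i < cache->dyn_len; i++) {
		if (cache->dyn_interfaces[i] == shader_interface) {
			return cache->dyn_vaos[i];
		}
	}
	return 0;  /* not cached yet, the caller creates and stores a new VAO */
}
```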
--- Results ---
Using Cached VAOs means that we are not querying each vertex attrib for each vbo for each drawcall, every redraw!
In a CPU limited test scene (10000 cubes in Clay engine) I get a reduction of CPU drawing time from ~20ms to 13ms.
The only area that is not caching VAOs is the instancing from particles (see comment DRW_shgroup_instance_batch).
This allows allocation of VAOs from different opengl contexts and thread as long as the drawing happens in the same context.
Allocation is thread safe as long as we abide by the "one opengl context per thread" rule.
We can still free from any thread and actual freeing will occur at new vao allocation or next context binding.
The other approach was causing too much error in some cases (e.g. favouring
the lower-valued keyframes). This fix should make the resulting curves less
bumpy/jagged.
This commit removes an earlier attempt at optimising the lookups
for duplicates of a particular tRetainedKeyframe once we'd already
deleted all the selected copies. The problem was that now, instead
of getting rid of the unselected keys (i.e. the basic function here),
we were only getting rid of the selected duplicates.
With this fix, unselected keyframes will now get removed (as expected)
again. However, we currently don't take their values into account
when merging keyframes, since it is assumed that we don't care so much
about their values when overriding.
that end up on the same frame
Currently, when scaling keyframes in the Dopesheet, if multiple
selected keyframes end up on the same frame post-scaling, they
would not get removed by the "Automerge" setting that normally
removes duplicates on the same frame.
This commit changes the behaviour so that when multiple selected
keyframes end up on the same frame, instead of keeping all these
around on the same frame (e.g. resulting in a column of keyframes
on different values), we will instead merge them into a single
keyframe (by averaging the values). This should result in a
smoother F-Curve with fewer "stair-steps" that need to be carefully
cleaned out afterwards.
Requested by @hjalti
This bug took a while to track down. In the test file with this report,
the Nla-Strip Control Curve for strip time would get disabled if you
changed the NLA Editor to a second Graph Editor instance.
It turns out that because this second Graph Editor would have the
"filter fcurves by name" option enabled, this would trigger a lookup
of the referenced property's name (in order to test whether it matched
the filtering criteria). However, since that filtering code was written
before the introduction of these curves, it still assumed that the names
for these Control Curves should be handled the same as for standard FCurves.
Unfortunately, that doesn't work, as the property lookups fail if the standard
method is used - when the lookups fail, the F-Curves get tagged as being
invalid/disabled (and need to be reset using the "Revive Disabled FCurves"
operator).
Note: The changes in this patch look complicated, as I've had to shuffle
a bit of code around so that the name-filtering check can have access to
the additional info it needs. In the process, I've also removed the earlier
(hacky) approach where the control curves were getting added to a temp
buffer to get changed from normal FCurves to special ANIMTYPE_NLACURVES.
Not sure why we need a relation from solver to a tip local transform, this
will be handled via parent relation.
Fixes remaining dependency cycles reported in T54083.
It is not possible to address transform at particular position of constraint
stack, and when constraint is being addressed is usually from driver variable.
This fixes some of dependency cycles reported in T54083.
We need to move the render result logic outside the render engine code.
It makes no sense for Eevee/Clay/... to have to re-implement the render result
creation logic. Besides, the original implementation really got it wrong, by
ignoring the different render layers needed for the final render.
Finally, there is no need to re-create the logic for views. So this was also
fixed.
Note 1: This will break still if the depsgraph of the needed view layers is not
updated / created. We need to address this separately. For now if users want
to test this, just show each view layer in the viewport at least once.
Note 2: We are still getting depsgraph from scene and creating if needed.
`BKE_scene_get_depsgraph(scene, view_layer, true);` according to Sergey we need
to move the render depsgraph for the Render struct instead. I will do it
separately as well.
This is a regression in rB4f1c0a1 which only allowed cutting hair at the
second segment, while there is nothing wrong with cutting hair at the
first segment.
Don't use dm->get*Array for DM you don't own. This call can allocate temporary
CD layer, which is not thread safe at all.
Also removed hard-coded logic around the CDDM check. New functions do the same
logic, but are DM-type-independent.
We shouldn't mix image pool acquisition with and without a user provided;
the fact that internally image.c uses the last frame from the Image datablock
confuses the logic.
Optionally don't remap indices for objects.
Checking all objects parent's would reference a freed pointer
while freeing all objects.
In the case of dynamic topology there is no use in keeping track
of hook/vertex-parent indices.
Also disable this when creating meshes for undo storage
since adding an undo step shouldn't be modifying other objects.
This reuses the Cycles regression test code to also work for OpenGL UI drawing.
We launch Blender with a bunch of .blend files, take a screenshot and compare
it with a reference screenshot, and generate an HTML report showing the failed
tests and their differences.
For Cycles we keep small reference renders to compare to in svn, but for OpenGL
developers currently have to generate the references manually. How to use:
* WITH_OPENGL_DRAW_TESTS=ON in CMake
* BLENDER_TEST_UPDATE=1 ctest -R opengl_draw
* .. make code changes ..
* ctest -R opengl_draw
* open build_dir/tests/opengl_draw/report.html
Differential Revision: https://developer.blender.org/D3064
Once 'losing lib' issue is fixed (in previous commit), we have new issue
that this could lead to several copies of the same linked data-block in
.blend file. Which is not good. At all.
So had to add a GHash-based check in libraries reading code to ensure we
only load a same ID from a same lib once.
The issue was that when a same lib was found several times in loaded
.blend, we'd only keep the first occurrence. But since Blender expects
next data-blocks to belong to last found library, we could actually
be adding data-blocks assigned to copies of the duplicated lib to
another, totally unrelated lib.
Those data-blocks were then obviously not found when actually loading
libs content, and lost.
Note that this only fix one part of the issue, current code can
generate several copies of same linked data-block now, will fix in
another commit.
While the script should be using INVOKE_PREVIEW for operators in clip view,
window manager was lacking some switch statements.
Thanks Brecht for the review!
- Use BLI_threadpool_ prefix for (deprecated)
thread/listbase API.
- Use BLI_thread as prefix for other functions.
See P614 to apply instead of manually resolving conflicts.
- When returning the number of items in a collection use BLI_*_len()
- Keep _size() for size in bytes.
- Keep _count() for data structures that don't store length
(hint this isn't a simple getter).
See P611 to apply instead of manually resolving conflicts.
It kind of doesn't matter where the macro itself is defined.
We should stick to the following:
- If some macro is actually more an inline function, follow regular
function name conventions.
- If macro is a macro, type it in capitals. Use module prefix if that
helps readability or it if helps avoiding accidents.
This completes the twist feature, which is now also possible to control by
texture. Since textures can not easily contain negative values either, the
same trick with a 0.5 neutral value as for vertex groups is used.
All in all, this twist feature allows doing the following things.
Original hair:
{F2287535}
Hair with scientifically calculated twist value of 0.5:
{F2287540}
And we can also twist braids in opposite directions dependent on left/right
side:
{F2287548}
The idea is to give control over the direction of twist, and maybe the amount of
twist as well. A more concrete example: make braids on the left and right side of
a character's head twist in opposite directions.
Now, the tricky part: we need some negative values to flip the direction, but weights
can not be negative. So we use the same trick as displacement maps and tangent normal
maps, where 0.5 is neutral, values below 0.5 are considered negative and values
above 0.5 are considered positive.
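A tiny sketch of that remapping (assumed form, for illustration only):
```
/* Weights and texture values live in [0, 1], with 0.5 meaning "no twist";
 * shift and scale them into a signed factor in [-1, 1]. */
static float twist_factor_from_weight(float weight)
{
	return (weight - 0.5f) * 2.0f;  /* 0.0 -> -1, 0.5 -> 0, 1.0 -> +1 */
}
```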
It allows children hair to be twisted around the parent curve, which is
quite an essential feature when creating hair braids.
There are currently two controls:
- Number of turns around the parent curve.
- Influence curve, which allows modifying "twistness" along the strand.
This isn't supported since there are subsequent reads to all point coordinates
after modification started.
Probably we need to create a temp copy of point, but that's like extra CPU
ticks.
view_layer is NULL when the render engine is created, this gets passed
around and ends up in this code causing a crash. This should be reverted
after the render engine api is updated to set view_layer.
The conditionals in particle code are... some sort of madness... I'm not
even sure what the correct behavior should be from looking at it.
In this case the path cache generation was being skipped in edit mode.
Due to changes in draw code particles from old files that may have had a
default draw size of 0 will not be visible. Old draw code would check
for this and adjust the size, however the unit for draw size has changed
from pixels to BU, and it no longer makes sense to have such checks.
This patch is to ensure particles from such files remain visible.
It seems to be useful still in cases where the particle are distributed in
a particular order or pattern, to colorize them along with that. This isn't
really well defined, but might as well avoid breaking backwards compatibility
for now.
This removes the need of custom attribs for instancing.
Instancing works fully with dynamic batches & Gwn_VertFormat now.
This is in preparation for the VAO manager patch.
This manager allows distributing existing batches for instancing
attributes. This reduces the number of batch creations.
Querying a batch is done with a vertex format. This format should
be static so that its pointer never changes (because we are using
this pointer as identifier [we don't want to check the full format
that would be too slow]).
This might make the original Instance Data manager useless but it's currently used by DRW_object_engine_data_ensure().
These batches keep their memory chunk allocated after transfer so they can be reused and updated very often.
NOTE: This commit breaks instancing in DRW (it's fixed in the next commit).
Now that the new 3D viewport draws to a multisample offscreen buffer, there is
no good reason anymore to create an entire multisample window and pay the
performance/memory cost for other regions that don't need it.
GL_MULTISAMPLE now only gets enabled for offscreen buffers, so we don't need
to check for it throughout the UI code anymore.
Differential Revision: https://developer.blender.org/D3062
This is like the only way to add variety to hair which is created
using simple children. Used here for the hair.
Maybe not ideal, but time will tell.
Burley SSS uses a bit of a strange approach where the albedo and closure weight are
different, which makes the subsurface color act a bit like a subsurface radius
indirectly by the way the Burley SSS profile works.
This can't work for random walk SSS though, and it's not clear to me that this
is actually a good idea since it's really the subsurface radius that is supposed
to control this. For now I'll leave Burley SSS working the same to not break
backwards compatibility.
This can be very slow if it contains a big texture, and it's not
necessarily setup in a useful way anyway, and materials can be used
in multiple scenes.
Instead of cloning the window to create dummyHWNDs and dummyHDCs to avoid calling the SetPixelFormat more than once in the same window, use the original window and HDC and do not call the SetPixelFormat again.
In addition to avoiding a lot of unnecessary calls, it simplifies the code and makes it match the other OSs.
Cycles already uses 1.5 as default. BI original 1.0 filter doesn't look good for
Eevee. The ideal scenario would be for both Cycles AND Eevee to use the same DNA
setting.
But for now it is nice to at least have Eevee renders to look better by default.
Note: This handles doversion for 2.7x files only. Files previously created in
2.8 need to be manually corrected.
It is basically brute force volume scattering within the mesh, but part
of the SSS code for faster performance. The main difference with actual
volume scattering is that we assume the boundaries are diffuse and that
all lighting is coming through this boundary from outside the volume.
This gives much more accurate results for thin features and low density.
Some challenges remain however:
* Significantly more noisy than BSSRDF. Adding Dwivedi sampling may help
here, but it's unclear still how much it helps in real world cases.
* Due to this being a volumetric method, geometry like eyes or mouth can
darken the skin on the outside. We may be able to reduce this effect,
or users can compensate for it by reducing the scattering radius in
such areas.
* Sharp corners are quite bright. This matches actual volume rendering
and results in some other renderers, but maybe not so much real world
objects.
Differential Revision: https://developer.blender.org/D3054
Looks like there was no way to avoid that so far; since
WM_event_add_timer_notifier can store a mere int-in-pointer there, this can
cause issues. So added a simple flags system to wmTimer to allow
controlling this.
Instead of calling an operator I just call `collection.new()`. Moving the
code into a separate function also simplifies it. In its new form there is
also no undefined behaviour when me.vertex_colors is non-empty but without
active layer.
We were not passing a scene collection parent to the BKE_collection_add
function, which in turn made syncing not work.
Right now we:
* Explicitly pass the master collection in this case
* Fallback to the master collection in other cases
With unittest.
- normalize → average the vector: the vector isn't normalized here, because
it doesn't necessarily become unit length. Instead, the sum is converted
to an average vector.
- angle is the acos()…: the dot product between the vertex normal and the
average direction of the connected vertices is computed, and not the
opposite.
- The initial `con` list was discarded immediately and replaced by a new
list.
- File didn't end with a newline.
We've got quite comprehensive BMesh based implementation, which is way easier
for maintenance than abandoned Carve library.
After all this time, the BMesh implementation works with the same level of
limitations regarding manifold meshes and touching edges as Carve. It is better
to focus on maintaining one boolean implementation now.
Reviewers: campbellbarton
Reviewed By: campbellbarton
Differential Revision: https://developer.blender.org/D3050
Previously quads always split along first-third vertices.
This is still the default, to avoid flickering with animated deformation
however concave quads that would create two opposing triangles now use
second-fourth split.
Reported as T53999 although this issue has been known limitation
for a long time.
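A sketch of the diagonal choice described above (illustrative only, not the actual tessellation code):
```
#include <stdbool.h>

static void tri_normal(const float a[3], const float b[3], const float c[3], float n[3])
{
	const float u[3] = {b[0] - a[0], b[1] - a[1], b[2] - a[2]};
	const float v[3] = {c[0] - a[0], c[1] - a[1], c[2] - a[2]};
	n[0] = u[1] * v[2] - u[2] * v[1];
	n[1] = u[2] * v[0] - u[0] * v[2];
	n[2] = u[0] * v[1] - u[1] * v[0];
}

/* Keep the default first-third split unless the two resulting triangles
 * would face opposite directions (a concave quad); then the caller uses
 * the second-fourth diagonal instead. */
static bool quad_split_first_third(const float v1[3], const float v2[3],
                                   const float v3[3], const float v4[3])
{
	float n1[3], n2[3];
	tri_normal(v1, v2, v3, n1);
	tri_normal(v1, v3, v4, n2);
	return (n1[0] * n2[0] + n1[1] * n2[1] + n1[2] * n2[2]) >= 0.0f;
}
```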
- Read-only access can often use EvaluationContext.object_mode
- Write access to go to WorkSpace.object_mode.
- Some TODO's remain (marked as "TODO/OBMODE")
- Add-ons will need updating
(context.active_object.mode -> context.workspace.object_mode)
- There will be small/medium issues that still need resolving
this does work on a basic level though.
See D3037
Proxy to an armature pose channel. Allows reading and setting the armature pose.
The attributes are identical to RNA attributes, but mostly in read-only mode.
..attribute:: name
channel name (=bone name), read-only.
:type:string
..attribute:: bone
return the bone object corresponding to this pose channel, read-only.
:type::class:`BL_ArmatureBone`
..attribute:: parent
return the parent channel object, None if root channel, read-only.
:type::class:`BL_ArmatureChannel`
..attribute:: has_ik
true if the bone is part of an active IK chain, read-only.
This flag is not set when an IK constraint is defined but not enabled (missing target information, for example).
:type:boolean
..attribute:: ik_dof_x
true if the bone is free to rotate around the X axis, read-only.
:type:boolean
..attribute:: ik_dof_y
true if the bone is free to rotation in the Y axis, read-only.
:type:boolean
..attribute:: ik_dof_z
true if the bone is free to rotation in the Z axis, read-only.
:type:boolean
..attribute:: ik_limit_x
true if a limit is imposed on X rotation, read-only.
:type:boolean
..attribute:: ik_limit_y
true if a limit is imposed on Y rotation, read-only.
:type:boolean
..attribute:: ik_limit_z
true if a limit is imposed on Z rotation, read-only.
:type:boolean
..attribute:: ik_rot_control
true if channel rotation should applied as IK constraint, read-only.
:type:boolean
..attribute:: ik_lin_control
true if channel size should applied as IK constraint, read-only.
:type:boolean
.. attribute:: location
   displacement of the bone head in armature local space, read-write.
   :type: vector [X, Y, Z]
   .. note::
      You can only move a bone if it is unconnected to its parent. An action playing on the armature may change the value. An IK chain does not update this value, see joint_rotation.
   .. note::
      Changing this field has no immediate effect, the pose is updated when the armature is updated during the graphic render (see :data:`BL_ArmatureObject.update`).
.. attribute:: scale
   scale of the bone relative to its parent, read-write.
   :type: vector [sizeX, sizeY, sizeZ]
   .. note::
      An action playing on the armature may change the value. An IK chain does not update this value, see joint_rotation.
   .. note::
      Changing this field has no immediate effect, the pose is updated when the armature is updated during the graphic render (see :data:`BL_ArmatureObject.update`).
.. attribute:: rotation_quaternion
   rotation of the bone relative to its parent expressed as a quaternion, read-write.
   :type: vector [qr, qi, qj, qk]
   .. note::
      This field is only used if rotation_mode is 0. An action playing on the armature may change the value. An IK chain does not update this value, see joint_rotation.
   .. note::
      Changing this field has no immediate effect, the pose is updated when the armature is updated during the graphic render (see :data:`BL_ArmatureObject.update`).
.. attribute:: rotation_euler
   rotation of the bone relative to its parent expressed as a set of euler angles, read-write.
   :type: vector [X, Y, Z]
   .. note::
      This field is only used if rotation_mode is > 0. You must always pass the angles in [X, Y, Z] order; the order of applying the angles to the bone depends on rotation_mode. An action playing on the armature may change this field. An IK chain does not update this value, see joint_rotation.
   .. note::
      Changing this field has no immediate effect, the pose is updated when the armature is updated during the graphic render (see :data:`BL_ArmatureObject.update`).
.. attribute:: rotation_mode
   Method of updating the bone rotation, read-write.
   :type: integer (one of :ref:`these constants <armaturechannel-constants-rotation-mode>`)
.. attribute:: channel_matrix
   pose matrix in bone space (deformation of the bone due to action, constraint, etc.), read-only.
   This field is updated after the graphic render, it represents the current pose.
   :type: matrix [4][4]
.. attribute:: pose_matrix
   pose matrix in armature space, read-only.
   This field is updated after the graphic render, it represents the current pose.
   :type: matrix [4][4]
.. attribute:: pose_head
   position of the bone head in armature space, read-only.
   :type: vector [x, y, z]
.. attribute:: pose_tail
   position of the bone tail in armature space, read-only.
   :type: vector [x, y, z]
.. attribute:: ik_min_x
   minimum value of X rotation in degrees (<= 0) when X rotation is limited (see ik_limit_x), read-only.
   :type: float
.. attribute:: ik_max_x
   maximum value of X rotation in degrees (>= 0) when X rotation is limited (see ik_limit_x), read-only.
   :type: float
.. attribute:: ik_min_y
   minimum value of Y rotation in degrees (<= 0) when Y rotation is limited (see ik_limit_y), read-only.
   :type: float
.. attribute:: ik_max_y
   maximum value of Y rotation in degrees (>= 0) when Y rotation is limited (see ik_limit_y), read-only.
   :type: float
.. attribute:: ik_min_z
   minimum value of Z rotation in degrees (<= 0) when Z rotation is limited (see ik_limit_z), read-only.
   :type: float
.. attribute:: ik_max_z
   maximum value of Z rotation in degrees (>= 0) when Z rotation is limited (see ik_limit_z), read-only.
   :type: float
.. attribute:: ik_stiffness_x
   bone rotation stiffness around the X axis, read-only.
   :type: float between 0 and 1
.. attribute:: ik_stiffness_y
   bone rotation stiffness around the Y axis, read-only.
   :type: float between 0 and 1
.. attribute:: ik_stiffness_z
   bone rotation stiffness around the Z axis, read-only.
   :type: float between 0 and 1
.. attribute:: ik_stretch
   ratio of scale change that is allowed, 0 = the bone can't change size, read-only.
   :type: float
.. attribute:: ik_rot_weight
   weight of the rotation constraint when ik_rot_control is set, read-write.
   :type: float between 0 and 1
.. attribute:: ik_lin_weight
   weight of the size constraint when ik_lin_control is set, read-write.
   :type: float between 0 and 1
.. attribute:: joint_rotation
   Control bone rotation in terms of joint angles (for robotic applications), read-write.
   When writing to this attribute, you pass an [x, y, z] vector and an appropriate set of euler angles or a quaternion is calculated according to the rotation_mode.
   When you read this attribute, the current pose matrix is converted into an [x, y, z] vector representing the joint angles.
   The value and the meaning of the x, y, z depend on the ik_dof_x/ik_dof_y/ik_dof_z attributes:
   * 1DoF joint X, Y or Z: the corresponding x, y, or z value is used as a joint angle in radians.
   * 2DoF joint X+Y or Z+Y: treated as 2 successive 1DoF joints: first X or Z, then Y. The x or z value is used as a joint angle in radians along the X or Z axis, followed by a rotation along the new Y axis of y radians.
   * 2DoF joint X+Z: treated as a 2DoF joint with rotation axis on the X/Z plane. The x and z values are used as the coordinates of the rotation vector in the X/Z plane.
   * 3DoF joint X+Y+Z: treated as a revolute joint. The [x, y, z] vector represents the equivalent rotation vector to bring the joint from the rest pose to the new pose.
   :type: vector [x, y, z]
   .. note::
      The bone must be part of an IK chain if you want to set the ik_dof_x/ik_dof_y/ik_dof_z attributes via the UI, but this will interfere with this attribute since the IK solver will overwrite the pose. You can stay in control of the armature if you create an IK constraint but do not finalize it (e.g. don't set a target): the IK solver will not run but the IK panel will show up in the UI for each bone in the chain.
   .. note::
      [0, 0, 0] always corresponds to the rest pose.
   .. note::
      You must request the armature pose to update and wait for the next graphic frame to see the effect of setting this attribute (see :data:`BL_ArmatureObject.update`).
   .. note::
      You can read the result of the calculation in the rotation or euler_rotation attributes after setting this attribute.
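A hedged usage sketch (the armature setup and the channel name "forearm" are assumptions) showing how joint_rotation might be driven from a Python controller:
.. code-block:: python
   import math
   from bge import logic
   cont = logic.getCurrentController()
   armature = cont.owner                   # a BL_ArmatureObject
   channel = armature.channels["forearm"]  # a BL_ArmatureChannel
   # Rotate the joint 45 degrees around its X axis (assumes a 1DoF X joint).
   channel.joint_rotation = [math.radians(45.0), 0.0, 0.0]
   # The new pose only becomes visible after the armature update runs.
   armature.update()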
This is a list-like object used internally by the game engine that behaves similarly to a Python list in most ways.
As well as normal index lookup (``val = clist[i]``), CListValue supports string lookups (``val = scene.objects["Cube"]``).
Other operations such as ``len(clist)``, ``list(clist)`` and ``clist[0:10]`` are also supported, as in the example below.
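A short illustrative example (the object name "Cube" is an assumption about the scene contents):
.. code-block:: python
   from bge import logic
   scene = logic.getCurrentScene()
   clist = scene.objects            # a CListValue of KX_GameObject
   ob = clist["Cube"]               # string lookup
   first = clist[0]                 # index lookup
   some = clist[0:10]               # slicing
   print(len(clist), [o.name for o in clist])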
.. method:: append(val)
   Add an item to the list (like Python's append).
   .. warning::
      Appending values to the list can cause crashes when the list is used internally by the game engine.
.. method:: count(val)
   Count the number of instances of a value in the list.
   :return: number of instances
   :rtype: integer
.. method:: index(val)
   Return the index of a value in the list.
   :return: The index of the value in the list.
   :rtype: integer
.. method:: reverse()
   Reverse the order of the list.
.. method:: get(key, default=None)
   Return the value matching key, or the default value if it's not found.
   :return: The key value or a default.
.. method:: from_id(id)
   This is a function specifically for the game engine, to return a value with a specific id.
   Since object names are not always unique, the id of an object can be used to get an object from the CValueList.
   Example:
   .. code-block:: python
      myObID = id(gameObject)
      ob = scene.objects.from_id(myObID)
   Where ``myObID`` is an int or long from the id function.
   This has the advantage that you can store the id in places you could not store a gameObject.
   .. warning::
      The id is derived from a memory location and will be different each time the game engine starts.
   .. warning::
      The id can't be stored as an integer in game object properties, as those only have a limited range that the id may not be contained in. Instead an id can be stored as a string game property and converted back to an integer for use in from_id lookups.
The type of measurement that the sensor makes when it is active.
Can be one of :ref:`these constants <armaturesensor-type>`.
:type: integer
.. attribute:: constraint
   The constraint object this sensor is watching.
   :type: :class:`BL_ArmatureConstraint`
.. attribute:: value
   The threshold used in the comparison with the constraint error.
   The linear error is only updated on CopyPose/Distance IK constraints with the iTaSC solver.
   The rotation error is only updated on CopyPose+rotation IK constraints with the iTaSC solver.
   The linear error on CopyPose is always >= 0: it is the norm of the distance between the target and the bone.
   The rotation error on CopyPose is always >= 0: it is the norm of the equivalent rotation vector between the bone and the target orientations.
   The linear error on Distance can be positive if the distance between the bone and the target is greater than the desired distance, and negative if the distance is smaller.
See :data:`sphereInsideFrustum` and :data:`boxInsideFrustum`.
.. data:: INTERSECT
   See :data:`sphereInsideFrustum` and :data:`boxInsideFrustum`.
.. data:: OUTSIDE
   See :data:`sphereInsideFrustum` and :data:`boxInsideFrustum`.
.. attribute:: lens
   The camera's lens value.
   :type: float
.. attribute:: fov
   The camera's field of view value.
   :type: float
.. attribute:: ortho_scale
   The camera's view scale when in orthographic mode.
   :type: float
.. attribute:: near
   The camera's near clip distance.
   :type: float
.. attribute:: far
   The camera's far clip distance.
   :type: float
.. attribute:: shift_x
   The camera's horizontal shift.
   :type: float
.. attribute:: shift_y
   The camera's vertical shift.
   :type: float
.. attribute:: perspective
   True if this camera has a perspective transform, False for an orthographic projection.
   :type: boolean
.. attribute:: frustum_culling
   True if this camera is frustum culling.
   :type: boolean
.. attribute:: projection_matrix
   This camera's 4x4 projection matrix.
   :type: 4x4 Matrix [[float]]
   .. note::
      This is the identity matrix prior to rendering the first frame (any Python done on frame 1).
.. attribute:: modelview_matrix
   This camera's 4x4 model view matrix (read-only).
   :type: 4x4 Matrix [[float]]
   .. note::
      This matrix is regenerated every frame from the camera's position and orientation. Also, this is the identity matrix prior to rendering the first frame (any Python done on frame 1).
.. attribute:: camera_to_world
   This camera's camera-to-world transform (read-only).
   :type: 4x4 Matrix [[float]]
   .. note::
      This matrix is regenerated every frame from the camera's position and orientation.
.. attribute:: world_to_camera
   This camera's world-to-camera transform (read-only).
   :type: 4x4 Matrix [[float]]
   .. note::
      Regenerated every frame from the camera's position and orientation.
   .. note::
      This is camera_to_world inverted.
.. attribute:: useViewport
   True when the camera is used as a viewport; set True to enable a viewport for this camera.
   :type: boolean
.. method:: sphereInsideFrustum(centre, radius)
   Tests the given sphere against the view frustum.
   :arg centre: The centre of the sphere (in world coordinates).
   :type centre: list [x, y, z]
   :arg radius: the radius of the sphere
   :type radius: float
   :return: :data:`~bge.types.KX_Camera.INSIDE`, :data:`~bge.types.KX_Camera.OUTSIDE` or :data:`~bge.types.KX_Camera.INTERSECT`
   :rtype: integer
   .. note::
      When the camera is first initialized the result will be invalid because the projection matrix has not been set.
   .. code-block:: python
      from bge import logic
      cont = logic.getCurrentController()
      cam = cont.owner
      # A sphere of radius 4.0 located at [x, y, z] = [1.0, 1.0, 1.0]
      if cam.sphereInsideFrustum([1.0, 1.0, 1.0], 4.0) != cam.OUTSIDE:
          # The sphere is at least partially inside the view frustum.
          pass
Whether or not the character is on the ground (read-only).
:type: boolean
.. attribute:: gravity
   The gravity value used for the character.
   :type: float
.. attribute:: maxJumps
   The maximum number of jumps a character can perform before having to touch the ground. By default this is set to 1. 2 allows for a double jump, etc.
   :type: int in [0, 255], default 1
.. attribute:: jumpCount
   The current jump count. This can be used to have different logic for a single jump versus a double jump. For example, a different animation for the second jump.
   :type: int
.. attribute:: walkDirection
   The speed and direction the character is traveling in, using world coordinates. This should be used instead of applyMovement() to properly move the character, as in the example below.
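A hedged example (assumes the controller's owner is set up as a character physics object):
.. code-block:: python
   import bge
   from mathutils import Vector
   cont = bge.logic.getCurrentController()
   own = cont.owner
   character = bge.constraints.getCharacter(own)
   # Walk forward along the object's local Y axis at 0.1 units per frame.
   character.walkDirection = own.worldOrientation * Vector((0.0, 0.1, 0.0))
   if character.onGround:
       character.maxJumps = 2  # allow a double jump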
Servo control allows regulating the force to achieve a given speed target (a configuration sketch follows the attribute list below).
.. attribute:: force
   The force applied by the actuator.
   :type: Vector((x, y, z))
.. attribute:: useLocalForce
   A flag specifying if the force is local.
   :type: boolean
.. attribute:: torque
   The torque applied by the actuator.
   :type: Vector((x, y, z))
.. attribute:: useLocalTorque
   A flag specifying if the torque is local.
   :type: boolean
.. attribute:: dLoc
   The displacement vector applied by the actuator.
   :type: Vector((x, y, z))
.. attribute:: useLocalDLoc
   A flag specifying if the dLoc is local.
   :type: boolean
.. attribute:: dRot
   The angular displacement vector applied by the actuator.
   :type: Vector((x, y, z))
   .. note::
      Since the displacement is applied every frame, you must adjust the displacement based on the frame rate, or your game experience will depend on the player's computer speed.
.. attribute:: useLocalDRot
   A flag specifying if the dRot is local.
   :type: boolean
.. attribute:: linV
   The linear velocity applied by the actuator.
   :type: Vector((x, y, z))
.. attribute:: useLocalLinV
   A flag specifying if the linear velocity is local.
   :type: boolean
   .. note::
      This is the target speed for servo controllers.
.. attribute:: angV
   The angular velocity applied by the actuator.
   :type: Vector((x, y, z))
.. attribute:: useLocalAngV
   A flag specifying if the angular velocity is local.
   :type: boolean
.. attribute:: damping
   The damping parameter of the servo controller.
   :type: short
.. attribute:: forceLimitX
   The min/max force limit along the X axis; also activates or deactivates the limits in the servo controller.
   :type: list [min(float), max(float), bool]
.. attribute:: forceLimitY
   The min/max force limit along the Y axis; also activates or deactivates the limits in the servo controller.
   :type: list [min(float), max(float), bool]
.. attribute:: forceLimitZ
   The min/max force limit along the Z axis; also activates or deactivates the limits in the servo controller.
   :type: list [min(float), max(float), bool]
.. attribute:: pid
   The PID coefficients of the servo controller.
   :type: list of floats [proportional, integral, derivative]
.. attribute:: reference
   The object that is used as a reference to compute the velocity for the servo controller.
   :type: :class:`KX_GameObject` or None
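A hedged configuration sketch (the actuator name "Motion" is an assumption; servo-control mode itself is chosen in the actuator's UI settings):
.. code-block:: python
   from bge import logic
   cont = logic.getCurrentController()
   act = cont.actuators["Motion"]
   act.linV = (0.0, 5.0, 0.0)             # target speed along local Y
   act.useLocalLinV = True
   act.forceLimitY = [-50.0, 50.0, True]  # clamp the regulated force
   act.pid = [30.0, 0.5, 0.0]             # proportional, integral, derivative
   cont.activate(act)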
.. attribute:: objectsInactive
   A list of objects on background layers (used for the addObject actuator), (read-only).
   :type: :class:`CListValue` of :class:`KX_GameObject`
.. attribute:: lights
   A list of lights in the scene (read-only).
   :type: :class:`CListValue` of :class:`KX_LightObject`
.. attribute:: cameras
   A list of cameras in the scene (read-only).
   :type: :class:`CListValue` of :class:`KX_Camera`
.. attribute:: active_camera
   The current active camera.
   :type: :class:`KX_Camera`
   .. note::
      This can be set directly from Python to avoid using the :class:`KX_SceneActuator`.
.. attribute:: world
   The current active world (read-only).
   :type: :class:`KX_WorldInfo`
.. attribute:: suspended
   True if the scene is suspended (read-only).
   :type: boolean
.. attribute:: activity_culling
   True if the scene is activity culling.
   :type: boolean
.. attribute:: activity_culling_radius
   The distance outside which to do activity culling. Measured in Manhattan distance.
   :type: float
.. attribute:: dbvt_culling
   True when Dynamic Bounding box Volume Tree culling is set (read-only).
   :type: boolean
.. attribute:: pre_draw
   A list of callables to be run before the render step.
   :type: list
.. attribute:: post_draw
   A list of callables to be run after the render step (see the example below).
   :type: list
.. attribute:: pre_draw_setup
   A list of callables to be run before the drawing setup (i.e., before the model view and projection matrices are computed).
   :type: list
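A hedged example of registering a draw callback:
.. code-block:: python
   from bge import logic

   def on_post_draw():
       # Runs every frame, after the scene has been rendered.
       pass

   scene = logic.getCurrentScene()
   if on_post_draw not in scene.post_draw:
       scene.post_draw.append(on_post_draw)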
.. attribute:: gravity
   The scene gravity using the world x, y and z axes.
   :type: Vector((gx, gy, gz))
.. method:: addObject(object, reference, time=0)
   Adds an object to the scene like the Add Object Actuator would (see the example below).
   :arg object: The (name of the) object to add.
   :type object: :class:`KX_GameObject` or string
   :arg reference: The (name of the) object whose position, orientation, and scale to copy (optional). If the object to add is a light and there is no reference, the light's layer will be the same as the active layer in the Blender scene.
   :type reference: :class:`KX_GameObject` or string
   :arg time: The lifetime of the added object, in frames. A time of 0 means the object will last forever (optional).
   :type time: integer
   :return: The newly added object.
   :rtype: :class:`KX_GameObject`
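A hedged example (assumes an object named "Bullet" exists on an inactive layer of the current scene):
.. code-block:: python
   from bge import logic
   cont = logic.getCurrentController()
   own = cont.owner
   scene = logic.getCurrentScene()
   # Spawn at the owner's position/orientation, lasting 100 frames.
   bullet = scene.addObject("Bullet", own, 100)
   bullet.worldLinearVelocity = [0.0, 20.0, 0.0]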
.. method:: end()
   Removes the scene from the game.
.. method:: restart()
   Restarts the scene.
.. method:: replace(scene)
   Replaces this scene with another one.
   :arg scene: The name of the scene to replace this scene with.
   :type scene: string
   :return: True if the scene exists and was scheduled for addition, False otherwise.
   :rtype: boolean
.. method:: suspend()
   Suspends this scene.
.. method:: resume()
   Resumes this scene.
.. method:: get(key, default=None)
   Return the value matching key, or the default value if it's not found.
The :data:`startSound`, :data:`pauseSound` and :data:`stopSound` methods do not require the actuator to be activated - they act instantly, provided that the actuator has been activated at least once (see the example below).
.. attribute:: volume
   The volume (gain) of the sound.
   :type: float
.. attribute:: time
   The current position in the audio stream (in seconds).
   :type: float
.. attribute:: pitch
   The pitch of the sound.
   :type: float
.. attribute:: mode
   The operation mode of the actuator. Can be one of :ref:`these constants <logic-sound-actuator>`.
   :type: integer
.. attribute:: sound
   The sound the actuator should play.
   :type: Audaspace factory
.. attribute:: is3D
   Whether or not the actuator should be using 3D sound (read-only).
   :type: boolean
.. attribute:: volume_maximum
   The maximum gain of the sound, no matter how near it is.
   :type: float
.. attribute:: volume_minimum
   The minimum gain of the sound, no matter how far away it is.
   :type: float
.. attribute:: distance_reference
   The distance where the sound has a gain of 1.0.
   :type: float
.. attribute:: distance_maximum
   The maximum distance at which you can hear the sound.
   :type: float
.. attribute:: attenuation
   The influence factor on volume depending on distance.
   :type: float
.. attribute:: cone_angle_inner
   The angle of the inner cone.
   :type: float
.. attribute:: cone_angle_outer
   The angle of the outer cone.
   :type: float
.. attribute:: cone_volume_outer
   The gain outside the outer cone (the gain in the outer cone will be interpolated between this value and the normal gain in the inner cone).
   :type: float
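A hedged example (the actuator name "Sound" is an assumption):
.. code-block:: python
   from bge import logic
   cont = logic.getCurrentController()
   act = cont.actuators["Sound"]
   act.volume = 0.8
   act.pitch = 1.2
   cont.activate(act)   # the actuator must have been activated at least once
   act.startSound()     # afterwards, start/pause/stop act instantly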
The Delay sensor generates positive and negative triggers at precise times,
expressed in number of frames. The delay parameter defines the length of the initial OFF period. A positive trigger is generated at the end of this period.
The duration parameter defines the length of the ON period following the OFF period.
There is a negative trigger at the end of the ON period. If duration is 0, the sensor stays ON and there is no negative trigger.
The sensor runs the OFF-ON cycle once unless the repeat option is set: the OFF-ON cycle then repeats indefinitely (or just the OFF cycle if duration is 0).
Use :class:`SCA_ISensor.reset` at any time to restart the sensor (see the example below).
.. attribute:: delay
   length of the initial OFF period in number of frames, 0 for immediate trigger.
   :type: integer
.. attribute:: duration
   length of the ON period in number of frames after the initial OFF period.
   If duration is greater than 0, a negative trigger is sent at the end of the ON pulse.
   :type: integer
.. attribute:: repeat
   1 if the OFF-ON cycle should be repeated indefinitely, 0 if it should run once.
   :type: integer
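A hedged example (the sensor name "Delay" is an assumption):
.. code-block:: python
   from bge import logic
   cont = logic.getCurrentController()
   delay = cont.sensors["Delay"]
   if delay.positive:
       # The initial OFF period has elapsed; restart the cycle from scratch.
       delay.reset()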