The main goal is to make the API simpler to follow (at least in terms
of what is defined/declared where, as opposed to a handful of big
headers which include all the declarations), and also to avoid a big
set of long and obscure functions.
The C-API files are now split into smaller ones, following OpenSubdiv
behavior more closely, and function pointers in structures are used a
lot more, which shortens function names.
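As a rough illustration of the function-pointer pattern, here is a
minimal sketch; the struct layout, member names and signatures below
are an approximation, not the exact C-API:

    typedef struct OpenSubdiv_Evaluator {
      /* Short members on the struct replace long free functions, so a
       * call becomes evaluator->refine(evaluator) instead of something
       * like openSubdiv_evaluatorRefine(evaluator). */
      void (*setCoarsePositions)(struct OpenSubdiv_Evaluator *evaluator,
                                 const float *positions,
                                 const int start_vertex_index,
                                 const int num_vertices);
      void (*refine)(struct OpenSubdiv_Evaluator *evaluator);
      void (*evaluateLimit)(struct OpenSubdiv_Evaluator *evaluator,
                            const int ptex_face_index,
                            const float face_u,
                            const float face_v,
                            float P[3],
                            float dPdu[3],
                            float dPdv[3]);
      /* Opaque implementation details live behind this pointer. */
      struct OpenSubdiv_EvaluatorInternal *internal;
    } OpenSubdiv_Evaluator;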
The UV integration part in the GL Mesh is mostly stripped away; it
needs to be done differently. On a related topic, the UV coordinates
API in the converter needs to be removed as well: we do not need
coordinates there, only island connectivity information.
Additional changes:
- Varying interpolation in the evaluator API is temporarily disabled;
the API needs to be extended somewhere (probably on the evaluator
side) to communicate the layout of vertex data (whether it contains
varying data, its width, stride and such).
- The evaluator can now interpolate face-varying data (see the sketch
after this list).
This only works with the adaptive refiner, due to some issues in
OpenSubdiv itself.
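A hypothetical usage sketch for the face-varying evaluation; the
function-pointer name and argument order here are assumptions:

    /* Sample the first UV layer at (u, v) on a given ptex face. */
    static void sample_uv(struct OpenSubdiv_Evaluator *evaluator,
                          const int ptex_face_index,
                          const float u,
                          const float v,
                          float r_uv[2])
    {
      const int face_varying_channel = 0;
      evaluator->evaluateFaceVarying(
          evaluator, face_varying_channel, ptex_face_index, u, v, r_uv);
    }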
Planned changes:
- Remove UV coordinates from TopologyConverter.
- Support evaluation of patches (as opposed to the individual
coordinates evaluated currently).
- Support a more flexible layout of varying and face-varying data.
It is too limiting to assume varying is 3 floats and face-varying is
2 floats.
- Support second order derivatives.
- Everything else that is missing from this list.
- Made OpenSubdiv_GLMesh private.
Previously it was still accessible from C++ code via the C-API.
- Don't implicitly refine the evaluator when updating coarse positions;
there is now an explicit call to do this.
This allows applying all changes to the coarse mesh first and then
refining once.
- Added an update of coarse positions from a contiguous buffer with a
given start offset and stride.
This allows updating coarse positions directly from an MVert array
(see the sketch after this list).
- The refiner is no longer freed when the CPU evaluator is created.
This allows re-using the refiner for multiple purposes.
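A sketch of the intended coarse-position update flow from an MVert
array; the function-pointer names and argument order are assumptions:

    #include <stddef.h>  /* offsetof */

    /* MVert comes from DNA_meshdata_types.h; vertex coordinates sit in
     * its `co` field, one full MVert apart from each other. */
    static void update_coarse_from_mverts(
        struct OpenSubdiv_Evaluator *evaluator,
        const struct MVert *mverts,
        const int num_verts)
    {
      evaluator->setCoarsePositionsFromBuffer(evaluator,
                                              mverts,
                                              offsetof(struct MVert, co),
                                              sizeof(struct MVert),
                                              0, /* start_vertex_index */
                                              num_verts);
      /* Refinement is no longer implicit: do it once, after all coarse
       * edits have been applied. */
      evaluator->refine(evaluator);
    }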
Remove support for loading interlaced image sequences, since recording
interlaced video is less common now.
The option to de-interlace video on load remains.
This separates probe rendering from viewport rendering, making it
possible to run the baking in another thread (non-blocking and faster).
The baked lighting is saved in the blend file. Nothing needs to be
recomputed on load.
There are a few missing bits / bugs:
- The cache cannot be saved to disk as a separate file; for now it is
saved in the DNA, making the file larger and memory usage higher.
- Auto-updating only the cubemaps still updates the grids (bug).
- Probes cannot be updated individually (considered as dynamic).
- Light Cache cannot be (re)generated during render.
- Texture creation now requires an explicit data type.
- GPU_texture_add_mipmap enables explicit mipmap upload.
- GPU_texture_get_mipmap_size can be used to get the size of a mipmap
level of an existing GPUTexture.
- GPU_texture_read lets you read back data from a GPU texture.
A rough usage sketch of these calls follows after this list.
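The signatures in this sketch are assumptions made for illustration,
not the final API:

    /* Create a texture with an explicit data type, upload one extra
     * mipmap level, query its size, then read the data back. */
    static float *create_and_read_back(const int width,
                                       const int height,
                                       const float *pixels_mip0,
                                       const float *pixels_mip1)
    {
      GPUTexture *tex = GPU_texture_create_2D(
          width, height, GPU_RGBA16F, pixels_mip0, NULL);

      GPU_texture_add_mipmap(tex, GPU_DATA_FLOAT, 1, pixels_mip1);

      int size[3];
      GPU_texture_get_mipmap_size(tex, 1, size);

      float *data = GPU_texture_read(tex, GPU_DATA_FLOAT, 0);
      GPU_texture_free(tex);
      return data;
    }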
The approach of setting 'refresh' flags on the modifier, and performing
the associated actions when the modifier is being evaluated, is a bad
one. Instead, we use the separation of the original and the evaluated
copy to 'refresh' certain things (because they simply aren't set at all
on the original). Other actions are now done directly with BKE_ocean_xxx
functions on the original data, instead of during evaluation.
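A hypothetical sketch of the difference; the flag, field and function
names are approximations of the ocean modifier code, not necessarily
the exact ones:

    /* Old approach (removed): an operator or RNA update sets a flag and
     * the real work only happens later, during modifier evaluation. */
    static void request_resim_old(OceanModifierData *omd)
    {
      omd->refresh |= MOD_OCEAN_REFRESH_RESET;
    }

    /* New approach: act on the original data right away. */
    static void request_resim_new(OceanModifierData *omd_original)
    {
      BKE_ocean_free_data(omd_original->ocean);
      BKE_ocean_init_from_modifier(omd_original->ocean, omd_original);
    }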
Now we light with just a user-defined HDRI by default, which is useful
for material setup, texture painting and lighting without having to
set up any scene lights.
Previously it would use the scene world without lights by default, which
in some files is just black.
This makes it possible to have a single shading nodetree that contains
separate Cycles and Eevee shaders. By default the target is set to All
so shaders are shared.
This will help entering sculpt mode on file load by making it possible
to fully initialize the sculpt session. The goal is to make sure the
PBVH exists from the very beginning of file open (a missing PBVH is
the reason why the object is not visible before the first stroke).
This is not yet enough to fully solve the issue, since entering
sculpt mode tags the object for a Copy-on-Write update, which frees
the PBVH.
It was a bit odd that the scene was stored per window but not the view
layer. The reasoning was that you would use different view layers for
different tasks. This is still possible, but it's more predictable to
switch them both explicitly, and with child window support, manually
syncing the view layers between multiple windows is no longer needed
as often.
* Main windows show a topbar and statusbar, and select a workspace and
scene. They are created with Window > New Main Window.
* Child windows do not show a topbar or statusbar. These follow the
workspace and scene of their parent main window. Created with Window >
New Window or View > Duplicate Area into New Window.
* The purpose of this change is to support multi-monitor setups where
you just want to put more editors on the other monitors, without
multiple topbars and statusbars, working within a single workspace
and scene. Creating multiple main windows is intended to be a
conscious choice to do different tasks in different workspaces and
scenes.
* Note these changes do not currently affect how the operating system
treats the windows.
* When changing the workspace, the layout in all child windows changes.
This makes sense if we consider child windows to be just a way to
extend the main window across more monitors. In some cases it may be
useful to keep the same layout though; we can add an option for this
depending on user feedback.