This mimics the behaviour we have in the Clip Editor.
I personally would prefer if we had no border once in fullscreen
(current border is 5 pixels).
I will consult Sergey Sharybin first to see if we can change that in the clip editor as well
(though there I believe the border is useful - the bottom of the editor is used to indicate 'tracked' frames).
Issue was caused by operator redo saving the values of the previously
inverted channels, meaning the same channels would be inverted again
the next time the operator runs.
I don't think it's useful to save operator values here, since
you don't have visual feedback about which channels were
inverted. So marked all these properties as SKIP_SAVE, which gives
much more predictable results.
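For reference, the Python-level equivalent of this flag is the SKIP_SAVE
property option; a minimal hypothetical sketch (the real fix marks the
existing C-defined operator properties, not a new operator):

    import bpy


    class IMAGE_OT_invert_channels_example(bpy.types.Operator):
        """Hypothetical example: invert selected channels of the active image"""
        bl_idname = "image.invert_channels_example"
        bl_label = "Invert Channels (Example)"
        bl_options = {'REGISTER', 'UNDO'}

        # SKIP_SAVE keeps operator redo from re-using the flags of the
        # previous run, so each invocation starts from the defaults again.
        invert_r = bpy.props.BoolProperty(name="Red", options={'SKIP_SAVE'})
        invert_g = bpy.props.BoolProperty(name="Green", options={'SKIP_SAVE'})
        invert_b = bpy.props.BoolProperty(name="Blue", options={'SKIP_SAVE'})
        invert_a = bpy.props.BoolProperty(name="Alpha", options={'SKIP_SAVE'})

        def execute(self, context):
            # ... invert the chosen channels of context.edit_image here ...
            return {'FINISHED'}


    bpy.utils.register_class(IMAGE_OT_invert_channels_example)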
In fact the problem was caused by my own previous fix/improvement for a different case; now this works as follows...
- Render uses the last-saved name, falling back to 'untitled' in the blend file's path.
- Non-render always uses the ID name, saves into the directory of the last-saved image, and falls back to the blend file's path.
Was caused by svn rev53181, and it worked before because the
image buffer didn't have quality set, in which case the
fallback to the scene settings happened.
Now, for render results, the quality from the scene settings is always
used and the image buffer's setting is ignored.
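For reference, the scene-level setting that now always wins for the render
result is the one exposed in the Python API as below (a small illustrative
snippet, not part of the fix):

    import bpy

    settings = bpy.context.scene.render.image_settings
    settings.file_format = 'JPEG'
    # This quality value is what gets used when saving the render result;
    # any quality stored on the image buffer itself is now ignored.
    settings.quality = 90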
This makes the sample line (in the image editor, sequencer and compositor alike)
display the color-managed color for byte buffers as well. It was simply not
implemented before.
from Lawrence D'Oliveiro (ldo)
Get rid of BLI_splitdirstring, replace with calls to BLI_split_dirfile, BLI_split_dir_part and BLI_split_file_part as appropriate.
Issue was caused by a couple of circumstances:
- The Normal Map node requires tessellated faces to compute tangent space
- All temporary meshes needed for Cycles export were being added to G.main
- Undo pushes would temporarily set mesh tessfaces to NULL
- Moving a node causes an undo push and tree re-evaluation for the preview
All this leads to a threading conflict between the preview render and the undo
system.
Solved it so that all temporary meshes are added to the exact
Main which was passed to Cycles via BlendData. This required a couple
of mechanical changes, like adding an extra parameter to the *_add() functions
and adding some *_ex() functions, so that RNA can add objects
to the Main passed to the new() RNA function.
It was tricky to pass a Main to the RNA function, and IMO it's not so
nice to pass a Main to a function anyway, so ended up with the following:
- Object.to_mesh() keeps adding the temporary mesh to G.main
- Added Main.meshes.new_from_object(), which does the same as to_mesh(),
  but adds the temporary mesh to the specified Main.
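A rough Python sketch of the difference between the two calls (keyword names
follow the 2.6x API and may differ slightly):

    import bpy

    scene = bpy.context.scene
    ob = bpy.context.object

    # Old call: the temporary mesh is always created in G.main (bpy.data).
    me_old = ob.to_mesh(scene, apply_modifiers=True, settings='PREVIEW')

    # New call: the mesh is created in the Main that owns the collection,
    # so a preview render can use its own preview_main instead of G.main.
    me_new = bpy.data.meshes.new_from_object(scene, ob, apply_modifiers=True,
                                             settings='PREVIEW')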
So now all temporary meshes needed for the preview render are added
to preview_main, which does not conflict with undo pushes.
Viewport render should not be an issue because object sync happens from
the main thread in that case.
There could be some issues with final render, but that's not very
likely to happen, so it should be fine.
Thanks to Brecht for review!
Use the COLOR_GAMMA subtype for the new image color, since this color is
actually being color managed. Also made it so byte and float
buffers have the exact same display color after creation
with the same color value.
Also made the color strip's color use the COLOR_GAMMA subtype,
otherwise the swatch color wouldn't match the render result, which is
not nice at all.
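From the Python side the same subtype is available for add-on properties; a
hypothetical sketch of the distinction:

    import bpy


    class ExampleColorSettings(bpy.types.PropertyGroup):
        # COLOR_GAMMA: the swatch is edited/displayed as a gamma-corrected
        # (display space) color, so it matches what ends up on screen.
        color = bpy.props.FloatVectorProperty(
            name="Color",
            subtype='COLOR_GAMMA',
            size=3,
            min=0.0, max=1.0,
            default=(0.0, 0.0, 0.0),
        )


    bpy.utils.register_class(ExampleColorSettings)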
Trackpad zoom (swipe + CTRL) direction was inverted compared to MMB-drag
or scrollwheel usage. In the 3D viewport it was OK, in all others it was not.
Now the same physical gesture maps identically to zooming everywhere. To
recap (with Blender factory settings):
Zooming in:
- MMB-drag: move mouse towards screen
- Scroll wheel: move finger towards screen
- Magic Mouse: move finger towards screen
- Trackpad 2-finger swipe: move fingers towards screen.
To make this extra confusing: this is only consistent if you set your system
to interpret trackpad swipes as "inverted" (pan view left = swipe to right).
This is a typical default, although Apple wants you to call this "Unnatural" :)
Next commit will be testing on a laptop whether all pinch gestures zoom consistently.
And following that, a sensible user preference to map trackpad use for
Blender yourself, to invert the system defaults again. :)
Blame and thanks go to Sebastian Koenig for his perseverance in getting this
solved :)
- UV Image editor and other 2D views didn't zoom on CTRL+swipe yet
  (2-finger trackpad, 1-finger Mighty Mouse).
- Switched defaults for 3D window swiping...
  - default: rotate view
  - SHIFT: translate
  - CTRL: zoom
This makes all editors use 'swipe' like 'middle mouse', and not
like the scrollwheel (as in releases).
This is nice for consistency, but it still feels a bit weird...
Of course users can configure this in the keymaps (see the sketch below).
We need a sensible default though, and making a 2D input device behave like
the middle mouse seems more sensible than like a 1D wheel...
Proposal therefore for defaults:
- 1D scrollwheels: zoom in 3D, zoom in 2D, but scroll for list views.
- 2D trackpads: pan for all 2D views, rotate for 3D.
I'll check with frequent trackpad users about this and we can freeze it
before release. Give it a try :)
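For anyone who wants to experiment before the defaults are frozen, a hedged
sketch of overriding the 3D View trackpad entries from Python (treat the
exact operator/event names as assumptions):

    import bpy

    wm = bpy.context.window_manager
    km = wm.keyconfigs.addon.keymaps.new(name='3D View', space_type='VIEW_3D')

    # 2-finger swipe behaves like middle mouse drag:
    km.keymap_items.new('view3d.rotate', 'TRACKPADPAN', 'ANY')
    km.keymap_items.new('view3d.move', 'TRACKPADPAN', 'ANY', shift=True)
    km.keymap_items.new('view3d.zoom', 'TRACKPADPAN', 'ANY', ctrl=True)

    # Pinch always zooms:
    km.keymap_items.new('view3d.zoom', 'TRACKPADZOOM', 'ANY')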
This codec is absolutely needed to generate DCPs using OpenDCP;
before, an external application was used to convert JP2 to J2K,
which slowed down export a lot.
The new codec is exposed in the image format settings panel as
Codec. The default is JP2, which creates files with the .jp2 extension;
the new one is called J2K and creates files with the .j2c extension.
Other changes:
- Fixed AVI JPEG warning which was being treated as an error here.
- Made it so the extension is detected from ImageFormatData instead
  of the image file type, which makes it possible to have different
  extensions for the same file type depending on its settings.
  The IRIS format should still be changed (depending on the number of
  channels it'll be the .bw, .rgb or .rgba extension).
- Default image format settings are now set from the image buffer
  when re-saving it. This makes it possible to easily open a .j2c file
  and save it using the J2K codec (without this change it would save as
  .jp2 using the JP2 codec).
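From the Python side the new option ends up roughly like this (the property
and enum identifiers are assumptions based on the UI naming above):

    import bpy

    settings = bpy.context.scene.render.image_settings
    settings.file_format = 'JPEG2000'
    # JP2 -> .jp2 (default), J2K -> .j2c (what OpenDCP expects)
    settings.jpeg2k_codec = 'J2K'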
This commit makes BKE_image_acquire_ibuf reference its result, which means that once
some area has requested the image buffer, it is guaranteed this buffer won't
be freed by an image signal.
To de-reference the buffer, BKE_image_release_ibuf should now always be used.
To make referencing work correctly we cannot rely on the result of
image_get_ibuf_threadsafe called outside the thread lock. This is because
we need to guarantee that getting the image buffer from the list of loaded buffers
and referencing it happen atomically. Without the lock it is possible that, between
the call to image_get_ibuf_threadsafe and referencing the buffer, IMA_SIGNAL_FREE
is called. Image signal handling is also blocking now to prevent such a
situation.
Threads are locked by a spinlock, which is faster than a mutex. There were some
reports in the past about render slowdowns when using OSX on Xeon CPUs.
That shouldn't happen with spin locks, but more tests on different hardware would
be really welcome. So far I cannot see speed regressions on my own computers.
This commit also removes BKE_image_get_ibuf, because it was not so intuitive
when get_ibuf vs. acquire_ibuf should be used.
Thanks to Ton and Brecht for discussion/review :)
Oldie: Texture buttons - "Add New Image" - crashes on changing X or Y resolution.
I've greyed out these buttons now; changing image memory that's in use by the
preview render is not supported.
A real fix I did make was assigning the new image to the texture, which was missing.
Image save now polls for rendering while saving an image; this can't easily work in a reliable way (buffers are being written to), so disable it and set the poll-fail message so the tooltip explains why this tool is disabled.
Color management is now applied to both float and byte buffers on image
save in cases where the file format doesn't require a linear float buffer and
the image is being saved as a render result.
This solves both initial report issue and TODO marked in previous fix.
Also de-duplicated the image buffer color-managing code and gave more
meaningful names to a few functions, and wrote documentation around these
functions, so the current assumptions about spaces should be clear enough.
Made regression tests by saving EXR/PNG images to all supported formats and
rendering OpenGL/normal animations; in all cases everything seems to be fine,
but more tests would certainly be welcome.
Replace the old color pipeline, which supported only linear/sRGB color
spaces, with an OpenColorIO-based pipeline.
This introduces two configurable color spaces:
- Input color space for images and movie clips. This space is used to convert
  images/movies from the color space in which the file is saved to Blender's linear
  space (for float images; byte images are not converted internally, only the input
  space is stored for such images and used later).
  This setting can be found in the image/clip data block settings.
- Display color space, which defines the space in which a particular display works.
  This setting can be found in the scene's Color Management panel.
When the render result is being displayed on the screen, apart from converting the
image to the display space, some additional conversions can happen.
These conversions are:
- View, which defines a tone curve applied before the display transformation.
  These are different ways to view the image on the same display device.
  For example, it can be used to emulate a film view on an sRGB display.
- Exposure affects the image exposure before the tone map is applied.
- Gamma is a post-display gamma correction, which can be used to match a particular
  display's gamma.
- RGB curves are user-defined curves which are applied before the display
  transformation and can be used for different purposes.
By default, all these settings apply only to the render result and do not
affect other images. If some particular image needs to be affected by these
transformations, the "View as Render" setting of the image data block should be
enabled. Movie clips are always affected by all display transformations.
This commit also introduces a configurable color space in which the sequencer
works. This setting can be found in the scene's Color Management panel and
it should be used if things like grading need to be done in a color space
different from sRGB (e.g. when a Film view on an sRGB display is used, using the
VD16 space as the sequencer's internal space makes grading happen in a space
which is close to the space used for display).
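Expressed through the Python API the new settings live roughly here (the exact
view/space names such as 'Film' and 'VD16' depend on the OpenColorIO
configuration shipped with Blender):

    import bpy

    scene = bpy.context.scene

    # Display device and the extra conversions applied on top of it.
    scene.display_settings.display_device = 'sRGB'
    scene.view_settings.view_transform = 'Film'   # the "View" tone curve
    scene.view_settings.exposure = 0.0
    scene.view_settings.gamma = 1.0

    # Sequencer working space, for grading outside plain sRGB.
    scene.sequencer_colorspace_settings.name = 'VD16'

    # Per-image input space, and opting an image into the display transform.
    img = bpy.data.images.load('//textures/example.png')  # hypothetical file
    img.colorspace_settings.name = 'sRGB'
    img.use_view_as_render = True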
Some technical notes:
- The image buffer's float buffer is now always in linear space, even if it was
  created from 16-bit byte images.
- The byte buffer's space is stored in the image buffer's rect_colorspace property.
- The image buffer's profile was removed since it's no longer meaningful.
- OpenGL and GLSL are supposed to always work in sRGB space. It is possible
  to support other spaces, but that's quite a large project which isn't so
  important right now.
- The legacy 'Color Management disabled' option is emulated by using the None display.
  It could have some regressions, but there's no clear way to avoid them.
- If OpenColorIO is disabled at build time, Blender should behave
  in the same way as the previous release with color management enabled.
More details can be found at this page (more details will be added soon):
http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Color_Management
--
Thanks to Xavier Thomas and Lukas Toene for the initial work on OpenColorIO
integration, and to Brecht van Lommel for some further development and
code/use case review!