This commit makes BKE_image_acquire_ibuf reference its result, which means
that once some area has requested an image buffer, that buffer is guaranteed
not to be freed by an image signal.
To de-reference the buffer, BKE_image_release_ibuf should now always be used.
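A minimal sketch of the resulting acquire/release pattern (the lock argument
and the exact signatures are assumptions based on the description above, not
copied from the commit):

    void draw_image(Image *ima, ImageUser *iuser)
    {
        void *lock;
        ImBuf *ibuf = BKE_image_acquire_ibuf(ima, iuser, &lock);

        if (ibuf) {
            /* the buffer is referenced now, so an image signal
             * (e.g. IMA_SIGNAL_FREE) can not free it while it is in use */
            /* ... use ibuf ... */
        }

        /* every acquire must be paired with a release */
        BKE_image_release_ibuf(ima, ibuf, lock);
    }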
To make referencing work correctly we can not rely on the result of
image_get_ibuf_threadsafe called outside of the thread lock. This is because
getting the image buffer from the list of loaded buffers and referencing it
needs to happen atomically. Without the lock it would be possible for
IMA_SIGNAL_FREE to run between the call to image_get_ibuf_threadsafe and the
referencing of the buffer. Image signal handling is now blocking too, to
prevent such a situation.
Thread locking is done with a spin lock, which is faster than a mutex. There
were some reports in the past about render slowdown on OS X with Xeon CPUs;
this shouldn't happen with spin locks, but more tests on different hardware
would be really welcome. So far I can not see speed regressions on my own
computers.
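A rough sketch of why the lookup and the reference count increment have to
happen under the same spin lock (the lock variable and the internal helpers
shown here are assumptions for illustration):

    static SpinLock image_spin;

    ImBuf *image_acquire_ibuf_locked(Image *ima, ImageUser *iuser)
    {
        ImBuf *ibuf;

        BLI_spin_lock(&image_spin);

        /* lookup and referencing happen atomically: IMA_SIGNAL_FREE can not
         * free the buffer between these two steps */
        ibuf = image_get_ibuf_threadsafe(ima, iuser, NULL, NULL);
        if (ibuf)
            IB_refImBuf(ibuf);

        BLI_spin_unlock(&image_spin);

        return ibuf;
    }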
This commit also removes BKE_image_get_ibuf, because it was not obvious when
get_ibuf versus acquire_ibuf should be used.
Thanks to Ton and Brecht for discussion/review :)
Screencast recording stopped on an undo/redo. This was because all thread
jobs were killed then. Now screen jobs (such as the screencast) are left
running, since they work on data that doesn't change on undo.
Also renamed jobs_stop_all() to jobs_kill_all(), since it terminates threads.
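A hypothetical sketch of the idea (the job list, the screen-job check and the
kill helper are made-up names for illustration, not the actual wm_jobs code):

    /* on undo/redo: terminate every running job except the ones that
     * operate on screen data, e.g. the screencast job */
    static void jobs_kill_all_except_screen(wmWindowManager *wm)
    {
        wmJob *wm_job, *next_job;

        for (wm_job = wm->jobs.first; wm_job; wm_job = next_job) {
            next_job = wm_job->next;

            if (!job_is_screen_job(wm_job))  /* assumed helper */
                wm_jobs_kill_job(wm, wm_job);
        }
    }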
Issue was caused by buffer shadows binding their buffer after the offscreen
buffer was bound, which led to some unpredictable results.
Made it so ED_view3d_draw_offscreen doesn't bind any buffers; for proper
shadows ED_view3d_draw_offscreen_init should now be called manually before
drawing to an offscreen buffer.
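A minimal sketch of the intended call order (argument lists are trimmed and
partly assumed, this only illustrates the ordering):

    /* update shadow buffers first; this binds and releases its own buffers */
    ED_view3d_draw_offscreen_init(scene, v3d);

    /* only now bind the offscreen target and draw into it */
    GPU_offscreen_bind(ofs);
    ED_view3d_draw_offscreen(scene, v3d, ar, sizex, sizey, viewmat, winmat);
    GPU_offscreen_unbind(ofs);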
This should also make OpenGL render with AA enabled a bit faster.
Also fixed missing sequencer cache invalidation when the OpenGL render type
changes.
Material and Rendered modes are still a TODO for the sequencer.
Color management is now applied to both float and byte buffers on image save,
in the cases where the file format doesn't require a linear float buffer and
the image is being saved as a render result.
This solves both the issue from the initial report and the TODO marked in the
previous fix.
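A simplified sketch of this decision (the function and flag names here are
assumptions for illustration, not the actual API):

    /* apply the display transform on save only when the target format
     * expects display-referred data and the image comes from the render
     * result */
    if (save_as_render && !format_requires_linear_float(imf)) {
        colormanaged_ibuf = apply_display_transform(ibuf, view_settings,
                                                    display_settings);
    }
    else {
        colormanaged_ibuf = ibuf;
    }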
Also de-duplicated the image buffer color management code and gave a few
functions more meaningful names. Also wrote documentation around this code,
so the current assumptions about color spaces should be clear enough.
Did regression tests by saving EXR/PNG images to all supported formats and
rendering OpenGL/normal animations; in all cases everything seems to be fine,
but more tests would for sure be welcome.
Replace the old color pipeline, which supported linear and sRGB color spaces
only, with an OpenColorIO-based pipeline.
This introduces two configurable color spaces:
- Input color space for images and movie clips. This space is used to convert
  images/movies from the color space in which the file is saved to Blender's
  linear space (for float images; byte images are not converted internally,
  only their input space is stored and used later). A sketch of this is given
  after this list.
  This setting can be found in the image/clip data block settings.
- Display color space, which defines the space a particular display works in.
  This setting can be found in the scene's Color Management panel.
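A rough sketch of the input space handling for float images (the transform
helper and the way the scene linear space is referenced are assumptions based
on the description above):

    if (ibuf->rect_float) {
        /* float buffers are converted from the configured input space to
         * Blender's linear space right away */
        IMB_colormanagement_transform(ibuf->rect_float, ibuf->x, ibuf->y,
                                      ibuf->channels,
                                      ima->colorspace_settings.name,
                                      "Linear", TRUE);
    }
    else {
        /* byte buffers only remember their input space, conversion
         * happens later on demand (assumed helper below) */
        ibuf->rect_colorspace = colorspace_get(ima->colorspace_settings.name);
    }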
When the render result is being displayed on the screen, some additional
conversions can happen apart from converting the image to display space.
These conversions are (a simplified sketch of their order follows this list):
- View, which defines a tone curve applied before the display transformation.
  These are different ways to view the image on the same display device.
  For example it can be used to emulate a film view on an sRGB display.
- Exposure affects the image exposure before the tone map is applied.
- Gamma is a post-display gamma correction, which can be used to match the
  gamma of a particular display.
- RGB curves are user-defined curves which are applied before the display
  transformation and can be used for different purposes.
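A self-contained, heavily simplified sketch of the order in which these
operations act on a single channel value (the view transform is reduced to a
placeholder and sRGB is approximated with a plain 2.2 gamma; this only
illustrates the ordering, not the actual OCIO processing):

    #include <math.h>

    /* placeholder for the view tone curve (identity here) */
    static float apply_view_transform(float v) { return v; }

    /* placeholder display transform, roughly sRGB-like */
    static float linear_to_display(float v) { return powf(v, 1.0f / 2.2f); }

    float display_pixel(float scene_linear, float exposure, float gamma)
    {
        float v = scene_linear * powf(2.0f, exposure);  /* exposure, in stops */
        /* user-defined RGB curves would also be applied here, before the view */
        v = apply_view_transform(v);   /* view / tone curve */
        v = linear_to_display(v);      /* display transform */
        return powf(v, 1.0f / gamma);  /* post-display gamma correction */
    }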
All these settings apply only to the render result by default and do not
affect other images. If some particular image needs to be affected by these
transformations, the "View as Render" setting of the image data block should
be enabled. Movie clips are always affected by all display transformations.
This commit also introduces a configurable color space in which the sequencer
works. This setting can be found in the scene's Color Management panel and
should be used if things like grading need to be done in a color space
different from sRGB (i.e. when the Film view is used on an sRGB display,
using the VD16 space as the sequencer's internal space would make grading
happen in a space close to the one used for display).
Some technical notes:
- The image buffer's float buffer is now always in linear space, even if it
  was created from a 16 bit byte image.
- The space of the byte buffer is stored in the image buffer's rect_colorspace
  property (see the sketch after these notes).
- The profile of the image buffer was removed since it's no longer meaningful.
- OpenGL and GLSL are assumed to always work in sRGB space. It is possible to
  support other spaces, but that is quite a large project which isn't so
  important right now.
- The old behaviour with the Color Management option disabled is emulated by
  using the None display. It could have some regressions, but there's no clear
  way to avoid them.
- If OpenColorIO is disabled at build time, Blender should behave the same way
  as the previous release with color management enabled.
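A small sketch of how the stored byte buffer space is meant to be used once a
linear float buffer is needed (whether IMB_float_from_rect picks up
rect_colorspace exactly like this is an assumption):

    if (ibuf->rect_float == NULL && ibuf->rect) {
        /* convert the byte buffer to float using the color space stored in
         * ibuf->rect_colorspace; the resulting float buffer is linear */
        IMB_float_from_rect(ibuf);
    }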
More details can be found on this page (more details will be added soon):
http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Color_Management
--
Thanks to Xavier Thomas and Lukas Toene for the initial work on OpenColorIO
integration, and to Brecht van Lommel for further development and code/use
case review!
Issue was caused by the fact that the sequencer works in sRGB space: when
there are only image input strips, we need to make sure the conversion from
byte buffer to float buffer keeps the float buffer in sRGB space and does not
make it linear, as it is supposed to be in other areas.
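A self-contained illustration of the difference (sRGB approximated with a
plain 2.2 gamma instead of the exact piecewise curve; the function names are
mine, not Blender's):

    #include <math.h>

    /* byte -> float keeping the sRGB encoding, which is what the sequencer
     * expects for its working space */
    float byte_to_float_srgb(unsigned char b)
    {
        return b * (1.0f / 255.0f);
    }

    /* byte -> float with linearization, which is what the rest of Blender
     * expects from float buffers */
    float byte_to_float_linear(unsigned char b)
    {
        return powf(b * (1.0f / 255.0f), 2.2f);
    }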
While OpenGL could be used for display, you couldn't output it to a file.
Extended the existing OpenGL render operator to optionally take its input
from the sequencer.
Notes:
- doesn't redraw in the viewport yet (only outputs to the terminal)
- doesn't do OSA
Issue was caused by an incorrectly set PTS value for frames coming from the
Scene strip renderer. This value used to be calculated from the RenderData
current and start frame, which led to non-uniform frame numbering and totally
confused the encoder.
Switched append_avi and append_ffmpeg to use the current frame of the
rendering scene (which was already being passed to these functions and was
used mostly for logging) and the start frame of the rendering scene (a newly
added parameter). This makes it possible to calculate the correct PTS value
easily and to get rid of the global static sframe variable in writeavi.c.
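A minimal sketch of the resulting PTS calculation (names and the exact place
where this happens are assumptions for illustration):

    #include <libavcodec/avcodec.h>

    /* with the rendering scene's current frame (cfra) and start frame (sfra)
     * both passed in, the PTS becomes a uniform counter that no longer needs
     * any global state */
    static void set_frame_pts(AVFrame *frame, int cfra, int sfra)
    {
        frame->pts = cfra - sfra;  /* 0, 1, 2, ... for every appended frame */
    }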
byte => float, float => float and byte => byte conversions with profile,
dither and predivide are now centralized; previously the code for this was
spread out too much.
There should be no functional changes; this is so the predivide/table/dither
patches can work correctly.
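For reference, a self-contained sketch of what the predivide option means for
such conversions: color channels of a premultiplied-alpha pixel are divided
by alpha before the nonlinear transform and multiplied back afterwards (a
plain 2.2 gamma stands in for the real profile conversion here):

    #include <math.h>

    /* convert one premultiplied linear RGBA pixel to a gamma-encoded one,
     * un-premultiplying first so the nonlinear transform only ever sees
     * straight color values (the "predivide" behaviour) */
    void linear_to_gamma_predivide(float pixel[4])
    {
        const float alpha = pixel[3];
        int i;

        for (i = 0; i < 3; i++) {
            if (alpha > 0.0f && alpha < 1.0f)
                pixel[i] = powf(pixel[i] / alpha, 1.0f / 2.2f) * alpha;
            else
                pixel[i] = powf(pixel[i], 1.0f / 2.2f);
        }
    }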
This also revealed an issue where the OpenGL render float buffer was not
linear, and toggling back to a render slot would show wrong colors. Now the
float buffer is converted to linear so that this works correctly; the
disadvantage is that it's slower.