#!/usr/bin/env python
#
# ***** BEGIN GPL LICENSE BLOCK *****
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# The Original Code is Copyright (C) 2006, Blender Foundation
# All rights reserved.
#
# The Original Code is: all of this file.
#
# Contributor(s): Nathan Letwory.
#
# ***** END GPL LICENSE BLOCK *****
Import ('env')
import os
sources = env.Glob('intern/*.c')
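# Mask sources are built into a separate bf_blenkernel_mask library (see the
# BlenderLib calls at the end of this file), so drop them from the main source
# list and collect them separately.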
sources.remove('intern' + os.sep + 'mask_rasterize.c')
sources.remove('intern' + os.sep + 'mask_evaluate.c')
sources.remove('intern' + os.sep + 'mask.c')
sources_mask = env.Glob('intern/mask*.c')
incs = [
    '.',
    '#/extern/libmv',
    '#/intern/ffmpeg',
    '#/intern/guardedalloc',
    '#/intern/memutil',
    '#/intern/mikktspace',
    '#/intern/raskter',
    '#/intern/rigidbody',
    '#/extern/bullet2/src',
    env['BF_GLEW_INC'],
    '#/intern/ghost',
    '#/intern/glew-mx',
    '#/intern/elbeem/extern',
    '#/intern/iksolver/extern',
    '#/intern/smoke/extern',
    '#/intern/atomic',
    '../avi',
    '../blenfont',
    '../blenlib',
    '../blenloader',
    '../bmesh',
    '../depsgraph',
    '../gpu',
    '../ikplugin',
    '../imbuf',
    '../makesdna',
    '../makesrna',
    '../modifiers',
    '../nodes',
    '../physics',
    '../render/extern/include',
    '../windowmanager',
    env['BF_ZLIB_INC'],
    ]
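
# The include list is flattened into a space-separated string; the optional
# blocks below extend it with plain string concatenation (incs += ' ...').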
incs = ' '.join(incs)
defs = env['BF_GL_DEFINITIONS']
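
# Optional features: each block below appends a preprocessor define (and,
# where needed, extra include paths) only when the matching build option is set.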
if env['WITH_BF_SMOKE']:
    defs.append('WITH_SMOKE')

if env['WITH_BF_FRAMESERVER']:
    defs.append('WITH_FRAMESERVER')

if env['WITH_BF_PYTHON']:
    incs += ' ../python'
    defs.append('WITH_PYTHON')

if env['BF_DEBUG']:
    defs.append('DEBUG')

'''
if env['WITH_BF_ELTOPO']:
    incs += ' #/extern/eltopo'
    incs += ' #/extern/eltopo/eltopo3d'
    defs.append('WITH_ELTOPO')
'''

if env['WITH_BF_QUICKTIME']:
    incs += ' ../quicktime'

if env['WITH_BF_SDL']:
    incs += ' ' + env['BF_SDL_INC']
    defs.append('WITH_SDL')

if env['WITH_BF_OIIO']:
    defs.append('WITH_OPENIMAGEIO')

if env['WITH_BF_OPENEXR']:
    defs.append('WITH_OPENEXR')

if env['WITH_BF_TIFF']:
    defs.append('WITH_TIFF')

if env['WITH_BF_OPENJPEG']:
    defs.append('WITH_OPENJPEG')

if env['WITH_BF_DDS']:
    defs.append('WITH_DDS')

if env['WITH_BF_CINEON']:
    defs.append('WITH_CINEON')

if env['WITH_BF_HDR']:
    defs.append('WITH_HDR')

if env['WITH_BF_AUDASPACE']:
    defs += env['BF_AUDASPACE_DEF']
    incs += ' ' + env['BF_AUDASPACE_C_INC']

if env['WITH_BF_JACK']:
    defs.append('WITH_JACK')

if env['WITH_BF_FFMPEG']:
    defs.append('WITH_FFMPEG')
    incs += ' ' + env['BF_FFMPEG_INC']

if env['WITH_BF_QUICKTIME']:
    defs.append('WITH_QUICKTIME')
    incs += ' ' + env['BF_QUICKTIME_INC']

if env['WITH_BF_BULLET']:
    defs.append('WITH_BULLET')

if env['WITH_BF_FLUID']:
    defs.append('WITH_MOD_FLUID')

if env['WITH_BF_OCEANSIM']:
    defs.append('WITH_OCEANSIM')

if env['WITH_BF_LZO']:
    incs += ' #/extern/lzo/minilzo'
    defs.append('WITH_LZO')

if env['WITH_BF_LZMA']:
    incs += ' #/extern/lzma'
    defs.append('WITH_LZMA')

if env['WITH_BF_GAMEENGINE']:
    incs += ' #/extern/recastnavigation'
    defs.append('WITH_GAMEENGINE')
else:
    sources.remove('intern' + os.sep + 'navmesh_conversion.c')

if env['WITH_BF_LIBMV']:
    defs.append('WITH_LIBMV')

if env['WITH_BF_FFTW3']:
    defs.append('FFTW3=1')
    incs += ' ' + env['BF_FFTW3_INC']

if env['WITH_BF_INTERNATIONAL']:
    defs.append('WITH_INTERNATIONAL')

if env['WITH_BF_FREESTYLE']:
    defs.append('WITH_FREESTYLE')

if env['WITH_BF_OPENSUBDIV']:
    defs.append('WITH_OPENSUBDIV')
    incs += ' #intern/opensubdiv'
    incs += ' ' + env['BF_OPENSUBDIV_INC']

if env['OURPLATFORM'] in ('win32-vc', 'win32-mingw', 'linuxcross', 'win64-vc', 'win64-mingw'):
    incs += ' ' + env['BF_PTHREADS_INC']
    incs += ' ../../../intern/utfconv'

if env['WITH_BF_BINRELOC']:
    incs += ' #extern/binreloc/include'
    defs.append('WITH_BINRELOC')

if env['WITH_BF_LEGACY_DEPSGRAPH']:
    defs.append('WITH_LEGACY_DEPSGRAPH')
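
# Register the libraries: bf_blenkernel from the main sources and
# bf_blenkernel_mask from the mask sources. On MSVC platforms bf_blenkernel
# omits the 'player2' link group; other platforms build 'core', 'player'
# and 'player2'.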
if env['OURPLATFORM'] in ('win32-vc', 'win64-vc'):
    env.BlenderLib ( libname = 'bf_blenkernel', sources = sources, includes = Split(incs), defines = defs, libtype=['core','player'], priority = [166,25]) #, cc_compileflags = env['CCFLAGS'].append('/WX') )
else:
    env.BlenderLib ( libname = 'bf_blenkernel', sources = sources, includes = Split(incs), defines = defs, libtype=['core','player', 'player2'], priority = [166,25,0] )
env.BlenderLib ( libname = 'bf_blenkernel_mask', sources = sources_mask, includes = Split(incs), defines = defs, libtype=['core','player', 'player2'], priority = [200,25,0] )
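
# Illustrative sketch only (not part of this build): a hypothetical optional
# dependency would be wired in following the same pattern as the blocks above.
# 'WITH_BF_FOO' and 'BF_FOO_INC' are made-up option names and do not exist in
# the SCons configuration.
#
# if env['WITH_BF_FOO']:
#     defs.append('WITH_FOO')
#     incs += ' ' + env['BF_FOO_INC']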