Compare commits

...

276 Commits

Author SHA1 Message Date
fe226dc5b3 Do fewer nanosleep loops.
Tested that already without much change yesterday, but for some reason
today it gives me another 2-3% speedup in both test files.

And it should also mitigate the (supposed) almost-starving situation,
hopefully.
2017-03-03 17:42:33 +01:00
9dab894e64 Inline all task-pushing helpers.
Those are small enough to be worth it, and it does give me ~2% speedup here...
2017-03-03 17:42:33 +01:00
08b0b1c439 Revert "Attempt to address nearly-starving cases."
This reverts commit 32959917ee112200125e3e742afb528fc2196072.

Definitely gives worse performance. Looks like any overhead we add
to task management costs more than the potentially better scheduling it
might give us...
2017-03-03 17:42:33 +01:00
0c451bc530 Attempt to address nearly-starving cases.
The idea here is to reduce the number of threads a pool is allowed to use,
in case it does not get tasks quickly enough.

This does not seem to give a really great result (the checks have to be done
only once every 200 pushed tasks to avoid too much overhead), but I cannot
reproduce that nearly-starving case here so far. @sergey, curious whether it
makes any difference on your 12 cores with 14_03_G?
2017-03-03 17:42:33 +01:00
1ea1b04e54 Never starve main thread (run_and_wait) from tasks! 2017-03-03 17:42:33 +01:00
1659d42bf7 Fix use-after-free concurrent issues. 2017-03-03 17:42:33 +01:00
fbe84185de Some minor changes from review. 2017-03-03 17:42:33 +01:00
599f314f1d Cleanup, factorization, comments, and some fixes for potential issues. 2017-03-03 17:42:33 +01:00
e0d486eb36 Attempt to address performance issues of the task scheduler with lots of very small tasks.
This is partially based on Sergey's work from D2421, but pushes things
a bit further. Basically:
- We keep a scheduler counter of TODO tasks, which lets workers avoid any
locking (even of the spinlock) when the queue is empty.
- We spin/nanosleep a bit (less than a ms) when we cannot find a task,
before going into real condition-waiting sleep.
- We keep a counter of condition-sleeping threads, and only use
condition notifications when we know some threads are waiting on them.

In other words, when no tasks are available, we spend a bit of time in a
rather high-activity but very cheap and totally lock-free loop, before
going into the more expensive real condition-waiting sleep.

No noticeable speedup in a complex production scene (the barbershop one); here
master, D2421 and this code give roughly the same performance (about 30%
slower in the new depsgraph than in the old one).

But with the test file from T50027 and the new depsgraph, after the initial bake,
with master I get ~14fps, with D2421 ~14.5fps, and with this code ~19.5fps.

Note that in theory we could get rid of the condition completely and stay
in the nanosleep loop, but this implies some rather high 'noise' (about
3% of CPU usage here with 8 cores), and going into the condition-waiting
state after a few hundred microseconds does not give any measurable
slowdown for me.

Also note that this code only works on POSIX systems (so no Windows; not
sure how to do our nanosleeps on that OS :/ ).

Reviewers: sergey

Differential Revision: https://developer.blender.org/D2426
2017-03-03 17:42:33 +01:00
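For reference, a minimal hypothetical C sketch of the three ideas above (atomic TODO counter, bounded nanosleep spin, a counter of condition sleepers); this is not Blender's actual BLI_task implementation and all names are made up:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <time.h>

static atomic_int todo_count;          /* tasks waiting in the queue */
static atomic_int sleeping_workers;    /* workers blocked on the condition */
static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;

static void worker_wait_for_work(void)
{
	/* Cheap, lock-free spin: poll the counter with tiny nanosleeps for a
	 * bounded amount of time (well under a millisecond in total). */
	for (int i = 0; i < 100; i++) {
		if (atomic_load(&todo_count) > 0) {
			return;
		}
		struct timespec ts = {0, 1000};  /* 1 microsecond */
		nanosleep(&ts, NULL);
	}

	/* Fall back to a real condition-waiting sleep. */
	pthread_mutex_lock(&queue_mutex);
	atomic_fetch_add(&sleeping_workers, 1);
	while (atomic_load(&todo_count) == 0) {
		pthread_cond_wait(&queue_cond, &queue_mutex);
	}
	atomic_fetch_sub(&sleeping_workers, 1);
	pthread_mutex_unlock(&queue_mutex);
}

static void scheduler_push_task(void)
{
	atomic_fetch_add(&todo_count, 1);
	/* Only pay for a notification when someone is actually sleeping. */
	if (atomic_load(&sleeping_workers) > 0) {
		pthread_mutex_lock(&queue_mutex);
		pthread_cond_signal(&queue_cond);
		pthread_mutex_unlock(&queue_mutex);
	}
}
```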
7b92b64742 Fix own previous commit, sorry about that :( 2017-03-03 17:23:22 +01:00
2e8398c095 Get rid of BLI_task_pool_stop().
Comments said that this function was supposed to 'stop worker threads', but
it absolutely did not do anything like that; it was merely wiping out the TODO
queue of tasks from the given pool (kind of a subset of what
`BLI_task_pool_cancel()` does).

Misleading, and currently useless; we can always add it back if we need
it some day, but for now we try to simplify that area.
2017-03-03 17:16:39 +01:00
18c2a44333 Fix ugly mistake in BLI_task - freeing while some tasks are still being processed.
Freeing a pool was calling `BLI_task_pool_stop()`, which only clears the
pool's tasks that are in the TODO queue, without ensuring that no more tasks
from that pool are being processed in worker threads.

This could lead to random (and seldom) use-after-free crashes.

Now use `BLI_task_pool_cancel()` instead, which does wait for all tasks
being processed to finish before returning.
2017-03-03 17:12:03 +01:00
5f05dac28f Update comment which remained in an old place 2017-03-03 16:36:21 +01:00
17cf423f30 Cleanup: Indentation 2017-03-03 15:53:55 +01:00
91ce13e90d Fix T50842: NLA Influence Curve draws out of bounds when it exceeds the 0-1 range 2017-03-04 01:24:21 +13:00
c0d0ef142f Cleanup: GPU_select never took NULL rect 2017-03-03 22:24:08 +11:00
25de610876 Cleanup: redundant header, use const, short -> bool 2017-03-03 22:24:08 +11:00
cdfae957f2 When creating texture/image in Texture Paint mode, both datablocks should get the same name
The paint slot name was not the same as what is displayed on the texture properties panel.
Instead, the slot type (e.g. "Diffuse Color") was used as the name.

Patch by Suchaaver (@minifigmaster125) with minor changes from @mont29.

Reviewers: mont29, sergey

Maniphest Tasks: T50704

Differential Revision: https://developer.blender.org/D2523
2017-03-03 10:50:01 +01:00
810d7d4694 Cycles: Fix possibly uninitialized variable
Hopefully this was the reason for randomly disappearing textures in our renders.
2017-03-03 10:10:26 +01:00
df88d54284 Fix T49655: Reloading library breaks proxies.
Can't say enough how much I hate those proxies... their duality (sharing
some aspects of both direct *and* indirect users) is a nightmare to handle. :(
2017-03-03 08:52:19 +01:00
42cb93205c Fix own stupid mistake in recent mesh 'split_faces' rework.
Was assigning the new edge index to ml_prev->e, and then assigning ml_prev->e
to orig_index...
2017-03-02 17:22:03 +01:00
Julian Eisel
a78717a72d Fix duplicated 'Accurate' property for manipulator keymap item
Is already added through Transform_Properties
2017-03-02 13:39:01 +01:00
Julian Eisel
e7dc46d278 Fix weird "use_planar_constraint" button in redo panel
The issue was that the VIEW_OT_manipulator operator calls the transform
operators and passes them its own operator properties. That means the
transform operators got properties passed that they don't have.
2017-03-02 13:37:42 +01:00
a83a68b9b6 Threads: Use atomics instead of spin when entering threaded malloc 2017-03-02 12:42:34 +01:00
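The change boils down to replacing a lock-guarded counter with an atomic one; a hypothetical sketch of the pattern (not the actual MEM_*/BLI_threads code, names made up):

```c
#include <stdatomic.h>

/* Before: a spinlock guarded a plain counter of threads inside the
 * thread-safe allocator.  After: a single atomic add/sub, no lock at all. */
static atomic_int num_threads_in_malloc;

void threaded_malloc_begin(void)
{
	if (atomic_fetch_add(&num_threads_in_malloc, 1) == 0) {
		/* First thread in: switch the allocator to its thread-safe mode. */
	}
}

void threaded_malloc_end(void)
{
	if (atomic_fetch_sub(&num_threads_in_malloc, 1) == 1) {
		/* Last thread out: switch back to the cheaper single-threaded mode. */
	}
}
```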
87f8bb8d1d Fix another part of T50565: Planar constraints were always initialized to accurate transform
Now it is defined by keymap.
2017-03-02 12:18:07 +01:00
499faa8b11 Fix second part T50565: Using planar transform once makes it enabled by default
Was caused by property being saved by the operator manager.
2017-03-02 11:20:57 +01:00
856077618a Fix T50830: Wrong context when calling surfacedeform_bind
The custom poll function for surfacedeform_bind seems to have caused
issues when calling it from Python. Fixed by using the generic modifier
poll function, and setting the button to be active or not in the
Python UI code instead. (there might be a better way, but for now this
works fine)
2017-03-01 17:56:10 -03:00
193827e59b Correct comment
Thanks to @dingto for noticing.
2017-03-01 14:12:03 -05:00
49c99549eb Cleanup: Use .enabled instead of .active 2017-03-01 13:06:19 -05:00
278fce1170 Fix T50565: Planar constraints don't work properly with non-Blender key configurations
The issue was introduced by 4df75e5; it seems we just need to explicitly
add a new keymap item now.

There is still some difference from the old behavior: planar transform
has been using precision movement since e138cde, and here I don't see a nice
solution currently. The change was requested here in the studio and it's just a
conflict in picking the shift key for something which is not supposed to be
accurate.

At least now it's possible to invoke the planar constraint and simply release
shift.
2017-03-01 18:00:54 +01:00
7fcae7ba60 Task scheduler: Remove query for the pool's number of threads
Not really happy with the per-pool threads limit; need to find a better
approach to that. But at least it's possible to get rid of half
of the nastiness here by removing the getter, which was only used in
an assert statement.

That piece of code was already well tested, becomes obsolete in the new
depsgraph, and no longer exists in the blender2.8 branch.
2017-03-01 18:00:54 +01:00
ecee40e919 All drop-down buttons should use the same width 2017-03-01 19:30:18 +03:00
714e85b534 Cleanup: code-style, duplicate header 2017-03-02 00:16:36 +11:00
32c5f3d772 Fix text and icon positioning issues 2017-03-01 16:11:21 +03:00
f0cf15b5c6 Task scheduler: Remove counter of done tasks
This was only used for progress reports, and it's wrong because:

- A pool might in theory be re-used by different tasks.
- We should not make any decisions based on scheduling stats.

The proper way is for the task itself to take care of progress.
2017-03-01 12:45:51 +01:00
351c9239ed Cleanup: Use explicit unsigned int in atomics 2017-03-01 12:01:19 +01:00
c1012c6c3a Cleanup: update copyright and Blender description 2017-02-28 12:04:43 -05:00
87f236cd10 Cycles: Fix division by zero in volume code which was producing -nan 2017-02-28 17:33:06 +01:00
efe78d824e Fix/workaround T48549: Crash baking high-to-low-poly normal map in cycles
For now only prevent crash.
2017-02-28 14:08:33 +01:00
a581b65822 Fix T49936: Cycles point density gets its bounding box from basis shape key 2017-02-28 12:41:56 +01:00
6d1ac79514 Cleanup: Grey --> Gray 2017-02-27 19:33:57 -05:00
4fa4132e45 Surface Deform Modifier (SDef)
Implementation of the SDef modifier, which allows meshes to be bound by
surface, thus allowing things such as cloth simulation proxies.

User documentation: https://wiki.blender.org/index.php/User:Lucarood/SurfaceDeform

Reviewers: mont29, sergey

Subscribers: Severin, dfelinto, plasmasolutions, kjym3

Differential Revision: https://developer.blender.org/D2462
2017-02-27 13:49:14 -03:00
cd5c853307 Fix memory leak when making duplicates real and parent had constraints
Thanks Bastien for help!
2017-02-27 17:46:41 +01:00
2342cd0a0f Fix/workaround T50677: Shrinkwrap constraint doesn't get updated when target mesh gets modified
Do a "full" update on leaving sculpt mode, so we are sure the scene will be brought
to a consistent state.

Ideally we'd only do that when there are objects which depend on geometry
without re-calculating their own geometry, but that's a bit tricky currently.
2017-02-27 16:27:53 +01:00
691ffb60b9 Similar to previous commit, but for object constraints 2017-02-27 16:19:52 +01:00
bf7006c15a Depsgraph: Shrinkwrap constraint actually depends on geometry 2017-02-27 16:00:39 +01:00
5acac13eb4 Cycles: Fix compilation error on vanilla Ubuntu 16.10
Patch by @swerner, thanks!
2017-02-27 15:22:51 +01:00
f1b21d5960 Fix T50634: Hair Primitive as Triangles + Hair shader with a texture = crash
Attributes were not resized after pushing new triangles to the mesh.
2017-02-27 15:21:14 +01:00
209a64111e Fix part of T50634: Hair Primitive as Triangles + Hair shader with a texture = crash
A wrong formula was used to calculate the number of verts and tris to be reserved.
2017-02-27 15:21:14 +01:00
00ceb6d2f4 Cycles: Make it more clear values never change by using const qualifier 2017-02-27 15:21:14 +01:00
f7d67835e9 Cleanup: typo in struct name 2017-02-28 00:38:33 +11:00
cc78690be3 Cycles: Forgot this in previous commit 2017-02-27 12:54:35 +01:00
238db604c5 Cycles: Add more logs about what's going on in shader optimization 2017-02-27 12:38:24 +01:00
845ba1a6fb Cycles: Experiment with replacing Sharp Glossy with GGX when Filter Glossy is used
The idea is to make it simpler to remove noise from scenes when some prop uses
Sharp glossy closure and causes noise in certain cases. Previously Sharp Glossy
was not affected by Filter Glossy at all, which was quite confusing.

Here is a file which demonstrates the issue: {F417797}

After applying the patch all the noise from the scene is gone.

This change also solves fireflies reported in T50700.

Reviewers: brecht, lukasstockner97

Differential Revision: https://developer.blender.org/D2416
2017-02-27 12:33:59 +01:00
406398213c Fix missing break setting curve auto-handles 2017-02-27 13:35:03 +11:00
631ecbc4ca Fix unreported bug: Ensure you have the correct array directory even after the dm->release(dm) 2017-02-26 14:16:54 -03:00
112e4de885 Improve add-on UI error message
Show the paths of the duplicate addons

D791 by @gregzaal
2017-02-27 03:57:11 +11:00
0561aa771b Cleanup: minor changes to array_store
- remove unused struct member.
- misleading variable name.
2017-02-26 15:29:09 +11:00
5c3216e233 Fix compiling after a0b8a9f 2017-02-25 14:58:08 +01:00
d66d5790e9 Fix (unreported) missing update when adding constraint from RNA. 2017-02-25 11:38:02 +01:00
94ca09e01c Fix rows with fixed last item (D2524) 2017-02-25 13:18:41 +03:00
2c4564b044 Alembic: avoid crashing when reading non-indexed UV params. 2017-02-25 07:08:42 +01:00
a0b8a9fe68 Alembic: addition of a scope timer to perform basic profiling. 2017-02-25 07:08:42 +01:00
8c5826f59a Fix T50698: Cycles baking artifacts with transparent surfaces. 2017-02-25 03:12:53 +01:00
15f1072ee2 Fix build error with macOS / clang / c++11. 2017-02-25 03:12:53 +01:00
caaf5f0a09 Fix T50757: Alembic, assign imported materials to the object data
instead of to the object itself.
2017-02-24 21:19:52 +01:00
9062c086b4 Fix T50676: Crash on closing while frameserver rendering.
Can't see any reason to call the AUD exit early in WM_exit; that's a
low-level module that has no dependency on anything else in Blender, but
is a dependency of some other parts of Blender, so it should rather be
exited late in the process!
2017-02-24 14:58:38 +01:00
1e29286c8c Cycles: Fix compilation warning with CUDA on OSX 2017-02-24 14:33:10 +01:00
f49e28bae7 Cycles: Fix non-zero exit status when rendering animation from CLI and running out of memory 2017-02-24 14:25:38 +01:00
4c164487bc Add "Gravitation" option to "Force" type force fields
This adds an option to force fields of type "Force", which enables the
simulation of gravitational behavior (dist^-2 falloff).

Patch by @AndreasE

Reviewers: #physics, LucaRood, mont29

Reviewed By: #physics, LucaRood, mont29

Tags: #physics

Differential Revision: https://developer.blender.org/D2389
2017-02-23 19:23:39 -03:00
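For illustration, a minimal sketch of what a dist^-2 ("gravitational") falloff amounts to; this is only a stand-in for the idea, not the actual effector code, and all names are hypothetical:

```c
#include <math.h>

/* Hypothetical point force field with inverse-square falloff:
 * F = strength / r^2, directed towards the field origin. */
static void force_gravitation(const float field_co[3], const float point_co[3],
                              float strength, float r_force[3])
{
	float dir[3] = {
		field_co[0] - point_co[0],
		field_co[1] - point_co[1],
		field_co[2] - point_co[2],
	};
	float dist_sq = dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2];

	if (dist_sq < 1e-8f) {
		/* Avoid division by zero right at the field origin. */
		r_force[0] = r_force[1] = r_force[2] = 0.0f;
		return;
	}

	float dist = sqrtf(dist_sq);
	float f = strength / dist_sq;
	r_force[0] = f * dir[0] / dist;
	r_force[1] = f * dir[1] / dist;
	r_force[2] = f * dir[2] / dist;
}
```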
29859d0d5e Fix some more minor issue with updated py doc generation. 2017-02-23 22:31:21 +01:00
c067f1d0d4 Fix stupid mistake in previous commit for release builds of API doc. 2017-02-23 22:08:01 +01:00
c7ad27fc07 Update py API doc generation tools to comply with the new name scheme on the server.
- for rc/release: /api/2.79c/, zip file named blender_python_reference_2.79c_release.zip
- for dev: /api/master/, zip file named blender_python_reference_2_79_4.zip
2017-02-23 21:45:20 +01:00
6a249bb000 Usual UI messages fixes... 2017-02-23 21:10:43 +01:00
50328b41a7 Cycles: Fix compilation error on 32bit Linux 2017-02-23 17:30:26 +01:00
4e12113bea Cycles: Fix wrong render results with texture limit and half-float textures 2017-02-23 14:46:22 +01:00
13e075600a Cycles: Add utility function to convert float to half
Handles overflow and underflow, but not NaN/Inf.
2017-02-23 14:42:06 +01:00
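The kind of conversion involved looks roughly like the bit-level sketch below (clamp overflow to the largest finite half, flush underflow to zero, no NaN/Inf handling); this is an illustration, not the actual Cycles utility:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical float -> half conversion without NaN/Inf handling. */
static uint16_t float_to_half_sketch(float f)
{
	uint32_t bits;
	memcpy(&bits, &f, sizeof(bits));

	uint16_t sign = (uint16_t)((bits >> 16) & 0x8000);
	int32_t exponent = (int32_t)((bits >> 23) & 0xFF) - 127 + 15;
	uint32_t mantissa = bits & 0x7FFFFF;

	if (exponent >= 31) {
		/* Overflow: clamp to the largest finite half (65504). */
		return sign | 0x7BFF;
	}
	if (exponent <= 0) {
		/* Underflow: flush to (signed) zero, ignoring denormals. */
		return sign;
	}
	/* Truncate the mantissa to 10 bits (no rounding in this sketch). */
	return sign | (uint16_t)(exponent << 10) | (uint16_t)(mantissa >> 13);
}
```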
9eb647f1c8 Fix T50656: Compositing node editor is empty, no nodes can be added 2017-02-23 11:23:49 +01:00
60592f6778 Fix T50748: Render Time incorrect when refreshing rendered preview in GPU mode 2017-02-23 10:51:06 +01:00
9dd194716b Fix T50736: Zero streaks in Glare node.
Please never, ever use the same DNA var for two different things. Even worse
if they do not have the same type and ranges!

This only ensures issues (as described in the report, but also when
animating both RNA props using the same DNA var... yuck).

And we were not even saving any bytes in DNA; we could reuse some padding
there to store the two new needed vars (yes, two, since we cannot re-use the
existing one if we want to keep backward *and* forward compatibility).
2017-02-23 10:39:51 +01:00
Julian Eisel
7359cc1060 Fix possible crash in various 3D View operators
Was actually harmless and not crashing, but I'd say more or less only
by luck: the NULL-check for region data would only evaluate to true for
the correct 3D View region. However, if we were to add region data to a
different region type in the future, this would lead to undefined behavior
if executed in the wrong region.
2017-02-23 02:14:27 +01:00
43299f9465 Columns should be expandable by default 2017-02-23 00:06:54 +03:00
5e1d4714fe Fix T50745: Shape key editing on bezier objects broken with Rendered Viewport Shading
So... Curve+shapekey was even more broken than it looked, this report was
actually a nice crasher (immediate crash in an ASAN build when trying to
edit a curve shapekey with some viewport rendering enabled).

There were actually two different issues here.

I) The less critical: rB6f1493f68fe was not fully fixing issues from
T50614. More specifically, if you updated obdata from editnurb
*without* freeing editnurb afterwards, you had a 'restored' (to
original curve) editnurb, without the edited shapekey modifications
anymore. This was fixed by tweaking again `calc_shapeKeys()` behavior in
`ED_curve_editnurb_load()`.

II) The crasher: in `ED_curve_editnurb_make()`, the call to
`init_editNurb_keyIndex()` was directly storing pointers of obdata
nurbs. Since those get freed every time `ED_curve_editnurb_load()` is
executed, it easily ended up being pointers to freed memory. This was
fixed by copying those data, which implied more complex handling code
for editnurbs->keyindex, and some reshuffling of a few functions to
avoid duplicating things between editor's editcurve.c and BKE's curve.c

Note that the separation of functions between the editors and BKE areas for
curves could use a serious update; it's currently messy to say the least.
Then again, that area has been due for a rework for a long time now... :/

Finally, aligned 'for_render' curve evaluation with the mesh one - now
editing a shapekey will show in rendered viewports, if it does have some
weight (exactly as with shapekeys of meshes).
2017-02-22 21:56:49 +01:00
b637db2a7a Cleanup: remove unused orig_nu from keyIndex ghash of editcurves. 2017-02-22 21:56:49 +01:00
99947e2943 Use new api doc links
Differential Revision: https://developer.blender.org/D2522
2017-02-22 11:19:30 -05:00
75cc33fa20 Fix Cycles still saving render output when error happened
This was fixed ages ago for the interface case but not for the
command line. The thing here is that currently external engines
rely on the reports system to indicate that an error happened,
so suppressing report storage in background mode prevented the
render pipeline from detecting that errors happened.

This is all weak and I don't like it, but this is better than
delivering black frames from the farm.
2017-02-22 13:06:24 +01:00
36c4fc1ea9 Cycles: Fix shading with autosmooth and custom normals
The new logic of split_faces was leaving the mesh in a proper state
from Blender's point of view, but Cycles wanted loop normals
to be "flushed" to vertex normals.

Now we do such a flush from the Cycles side again, so we don't
leave bad meshes behind.

Thanks Bastien for assistance here!
2017-02-22 10:54:36 +01:00
2c30fd83f1 Cycles: Additionally report all OpenCL cflags
This way we can control the exact spaces and such added to the cflags,
which is crucial for troubleshooting certain drivers.
2017-02-22 10:06:02 +01:00
ae1c1cd8c0 Refactor Mesh split_faces() code to use loop normal spaces.
Finding which loop should share its vertex with which others is not easy
with regular Mesh data (mostly due to lack of advanced topology info, as
opposed to the BMesh case).

Custom loop normals computing already does that - and can return 'loop
normal spaces', which among other things contain definitions of 'smooth
fans' of loops around vertices.

Using those makes it easy to find vertices (and then edges) that need
splitting.

This commit also adds support for non-autosmooth meshes, where we want to
split out flat faces from smooth ones.
2017-02-22 09:40:46 +01:00
3622074bf7 Fix Drawing nested box layouts (D2508) 2017-02-21 21:02:56 +03:00
4e9b17da4c Cycles: Speedup by avoiding extra calculations in noise texture when unneeded
Noise texture is now faster when the color socket is unused. Potential for
speedup spotted by @nutel.

Some performance results:

                     Render Time Before    After    Difference
Gooseberry benchmark         47:51.34    45:55.57       -4%
Koro                         12:24.92    12:18.46     -0.8%
Simple cube (Color socket)      48.53       48.72     +0.3%
Simple cube (Fac socket)        48.74       32.78    -32.7%
Goethe displacement           1:21.18     1:08.47    -15.6%
Cycles brick displacement     3:02.38     2:16.76    -25.0%
Large displacement scene     23:54.12    20:09.62    -15.6%

Reviewed By: sergey

Differential Revision: https://developer.blender.org/D2513
2017-02-21 07:24:33 -05:00
34a502c16a Cleanup: use proper link to the api 2017-02-20 20:21:57 -05:00
696836af1d Fix T50718: Regression: Split Normals Render Problem with Cycles
The issue seems to be caused by the vertex normal being re-calculated
to something other than the loop normal, which also caused wrong loop
normals after re-calculation.

For now the issue is solved by preserving CD_NORMAL for loops after
split_faces() is finished, so the render engine can access the original
proper value.
2017-02-20 11:56:02 +01:00
75ce4ebc12 Mesh faces split: Add missing vertex normal copy 2017-02-20 11:47:43 +01:00
333dc8d60f Fix T50719: Memory usage won't reset to zero while re-rendering on two video cards
Was only visible with Persistent Images option ON.
2017-02-20 11:02:19 +01:00
9992e6a169 Fix a few compiler warnings with macOS / clang. 2017-02-18 23:59:34 +01:00
3f5b2e2682 Fix T50564: 3D view panning with scroll wheel inconsistent with dragging. 2017-02-18 22:41:56 +01:00
6f1493f68f Fix T50614: Curve doesn't restore initial form after deleting all its shapekeys
The logic of handling shapekeys when entering and leaving edit mode for
curves was... utterly broken.

It was leaving the actual curve data with the edited shapekey applied to it.
2017-02-17 18:55:52 +01:00
31123f09cd Remove unused functions related to distance between BoundBox and ray 2017-02-17 09:49:20 -03:00
d41451a0ca Forgotten in last commit: Check the allocation 2017-02-16 23:41:38 -03:00
6c59a3b37a Do not release the arrays used in the parameters of the expanded functions of bvhutils
Releasing these arrays should be at the programmer's discretion, since they can continue to be used.

Only the expanded functions `bvhtree_from_mesh_edges_ex` and `bvhtree_from_mesh_looptri_ex` are currently used in Blender (in mesh_remap.c), and from what I could analyze, these changes can prevent a crash.
2017-02-16 22:55:01 -03:00
7819d36d4e Make File: Print 'blender.exe' at the end of the path to run from 2017-02-16 17:08:33 -05:00
99a6bbf7dd Cleanup: Spelling, Spaces --> Tabs, Whitespace 2017-02-16 17:06:03 -05:00
21eae869ad UI: Move 'relations extras' right below 'relations'
Differential Revision: https://developer.blender.org/D2218
2017-02-16 12:02:32 -05:00
306acb7dda Fix T50687: Cycles baking time estimate and progress bar doesn't work / progress when baking with high samples 2017-02-16 17:15:08 +01:00
26c8d559fe Register test for mesh.split_faces() 2017-02-16 15:36:00 +01:00
6468cb5f9c Faces split: Don't leave CD_NORMAL after split
This is supposed to be a temporary layer.

If someone needs loop normals after the split, they should explicitly
ask for them.
2017-02-16 11:00:17 +01:00
5cbaf56b26 Cycles tests: Commit blender.git side changes 2017-02-16 10:36:22 +01:00
fc185fb1d2 CDDM Copy: Only tag data layers dirty if we ignored tessellation data
This solves an assert failure in CustomData_from_bmeshpoly() happening with
the broom.blend file from the barber shop SVN.
2017-02-16 09:55:44 +01:00
809ed38075 Cleanup: Indentation 2017-02-16 09:16:20 +01:00
781507d2dd Freestyle: Feature edge selection by nested object groups.
A group of object groups can be formed by means of the dupli_group option in
the Object properties window.  The present revision extends the Selection by
Group option in the Freestyle Line Set so as to support not only flat object
groups but also nested groups.
2017-02-16 10:53:11 +09:00
9b3d415f6a Fix more corner cases failing in mesh faces split
Now we properly handle the case of edge-fan meshes, which should
fix the bad topology calculated for the cash register which was causing
crashes in the studio.
2017-02-15 23:09:31 +01:00
40e5bc15e9 Fix wrong edges created by split faces
We need to first split all vertices before we can reliably
check whether an edge can be reused or not.

There is still a known issue happening with an edge-fan mesh
with some faces being on the same plane.
2017-02-15 21:41:25 +01:00
41e0085fd3 [Alembic] Fix msvc warning - C4138 '*/' found outside of comment 2017-02-15 12:40:41 -07:00
e22d4699cb Cycles: Cleanup, style 2017-02-15 20:33:49 +01:00
13d31b1604 Fix T50542: Wrong metadata frame when using OpenGL render 2017-02-15 17:09:49 +01:00
3e628eefa9 Motion blur investigation feature
This commit adds a way to debug Cycles motion blur issues which
are usually happening due to something crazy happening in between
frames. The biggest trouble was that artists had no clue about
what's happening in subframes before they render. This is at
least an inefficient workflow when dealing with motion blur shots
with complex animation.

Now there is an option in the Timeline Editor which can be found
in View -> Show Subframe. This option exposes the current frame
with its subframe in the timeline editor header, and allows
scrubbing with subframe precision in the timeline editor.

Please note that none of the tools in Blender are aware of
subframes, so they'll likely still use the current integer frame.

This is something we don't consider a bug for now; the whole
purpose for now is to give a tool for investigation. Eventually
we'll likely tweak all tools to be aware of subframes.

Hopefully now we can finish the movie here in the studio..
2017-02-15 16:19:05 +01:00
efbe47f9cd Fix T50662: Auto-split affects smooth mesh when it shouldn't
Seems to be a precision error comparing a proper floating point
normal with the one coming from a short.
2017-02-15 15:21:15 +01:00
fe47163a1e Cycles: Fix CUDA compilation error after recent changes 2017-02-15 15:01:08 +01:00
20283bfa0b Fix wrong loop normals left after face splitting
Let's keep all data in a consistent state, so we don't have any
issues later on.

This solves rendering artifacts mentioned in the previous commit.
2017-02-15 14:58:49 +01:00
dd79f907a7 Mesh: Re-implement face split solving issue mentioned earlier
Now new edges will be properly created between original and
new split vertices.

Now the topology is correct, but shading is still not quite right in
some special cases.
2017-02-15 14:49:42 +01:00
8b8c0d0049 Cycles: Don't calculate primitive time if BVH motion steps are not used
Solves the memory regression in the default configuration.
2017-02-15 12:59:31 +01:00
6cdc954e8c Cycles: Pass special flag whether BVH motion steps are used
Doesn't currently change anything, but will be needed for some future
work here.

It uses existing padding in the kernel BVH structure, so nothing
changes memory-wise.
2017-02-15 12:45:06 +01:00
dc7bbd731a Cycles: Fix wrong hair render results when using BVH motion steps
The issue here was mainly coming from the minimal pixel width feature,
which is quite commonly enabled in production shots.

This feature uses a probabilistic heuristic in the curve
intersection function to check whether we need to return an intersection
or not. This probability is calculated for every intersection check.
Now, when we use multiple BVH nodes for curve primitives we increase the
probability of that primitive being considered a good intersection
for us. This is similar to increasing the minimal width of the curve.

What is worse here is that the change in intersection probability
fully depends on the exact layout of the BVH, meaning the probability might
change differently depending on the view angle, the way the builder
binned the primitives and such. This makes it impossible to do a
simple check like dividing the probability by the number of BVH steps.

Another solution might have been to split the BVH into fully independent
trees, but that would increase memory usage of all the static
objects in the scenes, which is also not something desirable.

For now we use the most simple but robust approach: store the BVH primitive's
time and test it in the curve intersection functions. This solves the
regression, but has two downsides:

- Uses more memory.

  This isn't surprising, and ANY solution to this problem will
  use more memory.

  What we still have to do is to avoid this memory increase for
  cases when we don't use BVH motion steps.

- Reduces the maximum number of available textures on pre-Kepler cards.

  There is not much we can do here; hardware gets old but we need
  to move forward on more modern hardware..
2017-02-15 12:45:04 +01:00
088c6a17ba Cycles: Fix missing initialization of triangle BVH steps
Likely was harmless for Blender, but better be safe here.
2017-02-15 12:44:52 +01:00
5723aa8c02 Cycles: Fix wrong pointiness caused by precision issues 2017-02-15 12:40:13 +01:00
b36e26bbce Revert "Mesh: Solve incorrect result of mesh.split_faces()"
The change was delivering broken topology in certain cases.
The assumption that a new edge only connects new vertices was
wrong.

Reverting to a commit which was giving correct render results
but was using more memory.

This reverts commit af1e48e8ab.
2017-02-15 12:40:13 +01:00
384b7e18f1 UI: Wireframe modifier- make crease grayed out when disabled 2017-02-14 23:48:35 -05:00
402b0aa59b Comments: notes on polyfill2d, minor corrections 2017-02-15 14:17:06 +11:00
af1e48e8ab Mesh: Solve incorrect result of mesh.split_faces()
This function was keeping the original edges and was creating some
extra vertices, which is not something we are really looking
forward to.
2017-02-14 17:02:22 +01:00
737a3b8a0a Mesh: Cleanup, use shorter version of loop 2017-02-14 16:27:09 +01:00
324d057b25 Mesh: Use faster calculation of previous loop 2017-02-14 16:27:09 +01:00
4d325693e1 BKE_boundbox_ensure_minimum_dimensions is no longer necessary
The bug T46099 no longer applies since the addition of `dist_squared_to_projected_aabb_simple`.
Comments have also been added that relate to an occlusion bug with the ruler. I'll investigate this.
2017-02-14 10:25:00 -03:00
6c104f62b9 transform_snap_object: Remove do_bb parameter. It is always true 2017-02-14 09:38:20 -03:00
54102ab36e Alembic: fix naming of imported transforms.
When importing an Alembic file with grouped transforms, it would badly name the transforms, taking the name of the parent instead of its own.

Patch by @maxime.robinot

Differential Revision: https://developer.blender.org/D2507
2017-02-14 08:15:13 +01:00
930186d3df Cycles: Optimize sorting of transparent intersections on CUDA 2017-02-13 18:24:45 +01:00
21dbfb7828 Cycles: Fix wrong transparent shadows with CUDA
Was a bug in recent optimization commit.
2017-02-13 18:22:10 +01:00
581c819013 Cycles: Fix wrong shading on GPU when background has NaN pixels and MIS enabled
Quite a simple fix for now which only deals with this case. Maybe we want to do
some "clipping" at image load time so regular textures wouldn't give NaN
either.
2017-02-13 16:32:55 +01:00
81eee0f536 Cycles: Use fast math without finite optimization
This allows us to use faster math and still have reliable
isnan/isfinite tests.

Only do it for the host side; kernels stay unchanged.

Thanks Lukas Stockner for the tip!
2017-02-13 16:25:35 +01:00
37afa965a4 Fix T50655: Pointiness is too slow to calculate
Optimize vertex de-duplication the same way as we do for Remove Doubles.
2017-02-13 12:00:10 +01:00
594015fb7e Cycles: Use Cycles-side mesh instead of C++ RNA
Those are now matching and it's faster to skip C++ RNA to
calculate pointiness.
2017-02-13 10:40:05 +01:00
9148ce9f3c F-Curve normalization: Do proper curve min/max instead of handle min/max
Would be cool to find some way to cache the results.
2017-02-13 10:02:04 +01:00
5552e83b53 Cycles: Don't use built-in API for image sequences in preview mode
Our Python API is not ready for such things at all. Better to be slower
but more correct until we improve our API.
2017-02-11 22:24:59 +01:00
e76364adcd Image: Fix non-deterministic behavior of image sequence loading
The issue was caused by usage of a non-initialized image user, which
could have different settings, causing some random image to be loaded
or not loaded at all.

This caused non-deterministic behavior of Cycles image loading because
it was querying image information from several places.

This fixes the crash reported in T50616, but it's not a complete fix
because preview rendering of the material is wrong (the same wrong as in
the 2.78a release).
2017-02-11 22:19:49 +01:00
1ac6e4c7a2 UI: Redesign the VSE multicam strip
Idea from https://rightclickselect.com/p/sequencer/zfbbbc/sequencer-panels-update by @pauloup

|{F434631}|{F434624}|
|Before |After|

Test file:
{F434643}
2017-02-11 11:35:02 -05:00
3ede515b5b Use dummy versioning numbers for missing libraries.
We now assert that we know the file version of libraries (needed for
do_version after the linking step), so for missing libraries, set dummy
numbers (actually using the version of the main .blend file).
2017-02-10 22:50:45 +01:00
9d8a9cacc3 De-duplicate min/max calculation in F-Curve normalization 2017-02-10 18:10:26 +01:00
e33e58bf23 CTests: Initial work to cover Cycles nodes with OpenGL tests
Works similarly to regular Cycles tests, just does an OpenGL render to
get the output image.

Seems to work fine, with one funny effect: a Blender window will
pop up for each of the tests. This is a current limitation of our
OpenGL context. Might be changed in the future.
2017-02-10 14:52:54 +01:00
e991af0934 Cleanup: Trailing whitespace 2017-02-10 14:08:12 +01:00
cd4309ced0 Cycles: Cleanup, move EdgeMap to blender_util
It's a better place for such a utility structure. Still not fully ideal though.
2017-02-10 13:34:10 +01:00
0178915ce9 Cycles: Make a utility class for edge map
Simplifies some logic.
2017-02-10 13:34:09 +01:00
fd7e9f7974 Cycles: Fix pointiness attribute giving wrong results with autosplit
Basically made the algorithm handle vertices with the same coordinates
as a single vertex.
2017-02-10 13:34:09 +01:00
d395d81bfc Cycles: Cleanup: Use less indentation by inverting condition 2017-02-10 13:34:09 +01:00
0b65b889ef Cycles: Calculate all vertex attributes after face generation
This way the calculation is not spread over multiple places.
2017-02-10 13:34:09 +01:00
b26da8b467 Cycles: Cleanup: use vector instead of bare malloc
This way memory is more "manageable" and easier to follow.
2017-02-10 13:34:09 +01:00
b929eef8c5 Alembic: fixed mistake in bounding box computation
By performing the Z-up to Y-up conversion, the change in sign of the
Z-coordinate swaps "minimum" and "maximum".
2017-02-10 11:54:00 +01:00
38155c7d3c Do not override text 2017-02-09 16:25:04 -05:00
bb1367cdaf Fix T50629 -- Add remove doubles to the cleanup menu
Also move it up in the vertices menu
2017-02-09 16:18:33 -05:00
e523cde574 Cleanup: Remove commented code
Code had been commented out since before 2010 and relates to the old background image code.
2017-02-09 09:26:57 -05:00
d2f4900d1a Use a smaller cross icon for clearing search box contents 2017-02-09 19:08:58 +13:00
351eb4fad1 More tweaks to Normalisation options in Graph Editor
* Added a new dedicated icon for normalize
* Only use an icon for "Auto"
2017-02-09 18:59:51 +13:00
316d23f2ba Graph Editor: Replace Normalise/Auto checkboxes with toggle buttons
These take less space, fit in better with the rest of the UI, and make their relationship clearer
2017-02-09 17:10:49 +13:00
117d90b3da Fix: GPencil delete operators did not respect color locking 2017-02-09 17:10:48 +13:00
b16fd22018 Cycles: Fix regression with transparent shadows in volume 2017-02-08 14:00:48 +01:00
da31a82832 Cycles: Solve speed regression by casting opaque ray first 2017-02-08 14:00:48 +01:00
04cf1538b5 Cycles: Fix compilation error on OpenCL 2017-02-08 14:00:48 +01:00
31a025f51e Cycles: Split shadow functions to avoid some duplicated calculations 2017-02-08 14:00:48 +01:00
dde40989f3 Cycles: Store shadow intersections in the kernel globals
Seems CUDA failed to de-duplicate the array across multiple inlined
versions of shadow_blocked(). Helped it a bit with that now.

Gives about a 100MB memory improvement on scenes after the previous
commit and brings the memory "regression" down to only 100MB compared to
the master branch now.
2017-02-08 14:00:48 +01:00
7447950bc3 Cycles: Speedup transparent shadows on CUDA
This commit enables the record-all behavior of transparent shadow
rays.

Render time differences are as follows:

               GTX 1080 render time
BMW                  -0.5%
Fishy Cat            -0.0%
Pabellon Barcelona   -11.6%
Classroom            +1.2%
Koro                 -58.6%

The kernel will now use some extra VRAM to store the intersection
array (200MB on my configuration). We can optimize this out with some
further commits.
2017-02-08 14:00:48 +01:00
9830eeb44b Cycles: Implement record-all transparent shadow function for GPU
The idea is to record all possible transparent intersections when
shooting a transparent ray on the GPU (similar to what we were already
doing on the CPU).

This avoids the need to do whole ray-to-scene intersection queries
for each intersection, and speeds up a lot of cases like transparent
hair, at the cost of extra memory.

This commit is groundwork for now and the feature is kept
disabled until some further tweaks.
2017-02-08 14:00:48 +01:00
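A rough, hypothetical sketch of the record-all idea (one traversal records every transparent hit, which is then sorted and attenuated in order); types and names are made up and this is not the actual Cycles kernel code:

```c
#include <stdlib.h>

/* Hypothetical recorded hit: distance along the ray and an opacity. */
typedef struct TransparentHit {
	float t;
	float transparency;  /* 0 = fully opaque, 1 = fully transparent */
} TransparentHit;

static int cmp_hit_t(const void *a, const void *b)
{
	const TransparentHit *ha = a, *hb = b;
	return (ha->t > hb->t) - (ha->t < hb->t);
}

/* Accumulate shadow attenuation from all recorded hits in a single pass,
 * instead of re-traversing the scene after every transparent surface. */
static float shadow_attenuation_record_all(TransparentHit *hits, int num_hits)
{
	qsort(hits, (size_t)num_hits, sizeof(TransparentHit), cmp_hit_t);

	float throughput = 1.0f;
	for (int i = 0; i < num_hits; i++) {
		throughput *= hits[i].transparency;
		if (throughput == 0.0f) {
			break;  /* fully blocked, no light reaches the shading point */
		}
	}
	return throughput;
}
```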
9c3d202e56 Cycles: Use an utility function to sort intersections array 2017-02-08 14:00:48 +01:00
58a10122d0 Cycles: Make GPU version of shadow_blocked() closer to CPU
Now we break the traversal cycle and then perform volume attenuation
and the check for zero throughput. Not sure it makes any measurable difference
at this moment, but in the future it might help de-duplicate some
extra logic here.
2017-02-08 14:00:48 +01:00
98a1855803 Cycles: De-duplicate transparent shadows attenuation
A fair amount of code was duplicated between CPU and GPU; now we are
using an inlined function to avoid such duplication.
2017-02-08 14:00:48 +01:00
8cda364d6f Fix T49249: Alembic export with multiple hair systems crash blender
Removed unnecessary call to DM_update_tessface_data(). This call is
already performed by DM_ensure_tessface(dm). The call being performed
twice caused a failing BLI_assert().

Reviewed by: Kévin Dietrich
2017-02-08 12:26:36 +01:00
ac38d5652b Alembic export: avoid infinite loops trying to find parent objects.
Also added some assertions for debugging purposes

Reviewed by: Kévin Dietrich
2017-02-08 12:26:36 +01:00
95e7f93fa2 Alembic export: only create transform writer if the object should be exported
Reviewed by: Kévin Dietrich
2017-02-08 12:26:36 +01:00
b320873382 Alembic: #undef'ed the correct macro
TEST_RET is not defined anywhere in Blender's sources, and LAYER_CMP
is no longer used after this function ends.
2017-02-08 12:26:36 +01:00
ce9df09067 Alembic: Use getXForm() in check, because it's used in rest of the function too
This makes the code within the function consistent.
2017-02-08 12:26:36 +01:00
82df7100c8 Alembic: Renamed copy_zup_yup to copy_yup_from_zup (and same for zup_from_yup)
With the new names the arguments (yup, zup) are in the same order as
they appear in the function name. The old names used copy_src_dst(dst,
src), which I found very confusing. Furthermore, now it is clear from
where to where the copy is made.

This makes the function names a little bit longer, though. If that is
a real issue, we can just name them zup_from_yup(zup, yup).

Reviewed by: Kévin Dietrich
2017-02-08 12:26:36 +01:00
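For illustration, a minimal sketch of what such a conversion helper looks like under the new naming (destination first); the exact sign convention should be checked against Blender's Alembic sources:

```c
/* Hypothetical Z-up -> Y-up conversion; the destination (Y-up) comes first,
 * matching the argument order implied by the new function name. */
static void copy_yup_from_zup(float yup[3], const float zup[3])
{
	const float old_zup1 = zup[1];  /* saved so an in-place call stays correct */
	yup[0] = zup[0];
	yup[1] = zup[2];
	yup[2] = -old_zup1;
}
```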
69dbeeca48 Cleanup: Use const qualifier in some of color management code 2017-02-07 17:49:54 +01:00
b641d016e1 Sequencer: Some extra speedup in color space conversion
Use the new utility from color management which multi-threads byte to
float conversion.

Gives an extra 10% speedup in quick tests.
2017-02-07 17:49:54 +01:00
ce629c5dd9 Color management: Add utility function to convert byte to float with processor applied 2017-02-07 17:49:54 +01:00
e5bb005369 Sequencer: Speedup conversion to sequencer space
The speedup is mainly gained by multi-threading. Gives about a 3x
fps gain on an edit shot file.

There is still some room for improvement, which will happen in one
of the upcoming commits.
2017-02-07 17:49:54 +01:00
5d6177111d Color management: Implement threaded byte buffer conversion
The title says it all actually: now we can convert a byte buffer
directly, without the need for a temporary float buffer.
2017-02-07 17:49:54 +01:00
03be3102c7 Param is_cached was not being used in bvhtree_from_mesh_edges_setup_data
This could cause bugs in the memory release.
2017-02-07 11:03:10 -03:00
03544eccb4 Fix missing hair after rendering with different viewport/render settings
The derived mesh for particles did not include tessellated faces when it
was expected to. Now added an explicit function to copy a CDDM with tess
faces without the need to re-tessellate the result.
2017-02-07 14:21:29 +01:00
53896d4235 Fix T49253: Cycles blackbody is wrong on AVX2 CPU on Windows
Seems to be a bug in the optimizer, but managed to reshuffle things in a way
which should also give some speedup.
2017-02-07 13:05:19 +01:00
1158800d1b PIL_time_utildefines: also show total time in TIMEIT_AVERAGED. 2017-02-07 10:14:46 +01:00
f7eaaf35b4 Fix (unreported) Object previews being written even for skipped objects. 2017-02-06 20:58:18 +01:00
e217839fd3 Cleanup writefile code a bit.
Modernize some of it a bit, saves quite some lines of blabla (using
while instead of for loops... tsssts...).
2017-02-06 20:43:14 +01:00
dbdc346e9f CMake: Remove MOTO library dependency when it is not needed
It is not necessary to add the MOTO library dependency when we use
WITH_IK_SOLVER (it now uses Eigen) or WITH_MOD_BOOLEAN (it was
used by the bsp intern library some time ago but is not present in the
code anymore).

Reviewers: mont29, sergey

Subscribers: mont29, sergey

Differential Revision: https://developer.blender.org/D2477
2017-02-06 19:29:42 +01:00
0170c682fe Specify the correct size of the BVHTree of edges
~edge_num~ edges_num_active
Not all edges always enter the build
2017-02-06 14:59:31 -03:00
e3f99329d8 Standardization and style for BKE_bvhutils
Add `bvhtree_from_mesh_edges_ex` and callbacks to nearest_to_ray (Similar to the other functions of this code)
2017-02-06 14:11:06 -03:00
ac8348d033 Fix 'public' global 'g_atexit' var in Blender.
No reason not to make this private to this file, and it gave a conflict
when using bpy as a module and loading it in a GLib application (which
also has a g_atexit var).
2017-02-06 17:42:30 +01:00
9e97b00873 Fix compilation error after recent change 2017-02-06 15:29:13 +01:00
c7f40caa2c Add shortcuts for unsigned int, short, long and char
Feel free to use those in new code.

And stay away from plain "unsigned".
2017-02-06 15:04:13 +01:00
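The shortcuts presumably boil down to plain typedefs along these lines (check BLI_sys_types.h for the actual definitions):

```c
typedef unsigned int uint;
typedef unsigned short ushort;
typedef unsigned long ulong;
typedef unsigned char uchar;
```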
c5cc9e046d Use hash instead of linear lookup in armature deform
This avoids calling the linear lookup hundreds of times when dealing with
a real-life character.

Still some tweaks possible.
2017-02-06 14:47:36 +01:00
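A hypothetical sketch of the idea: build a name -> channel hash once, then answer each per-vertex-group lookup in near-constant time instead of scanning the channel list every time. This is not Blender's GHash API, just an illustration:

```c
#include <stdlib.h>
#include <string.h>

typedef struct Channel {
	const char *name;
	int index;
	struct Channel *next_bucket;  /* chaining for hash collisions */
} Channel;

#define NUM_BUCKETS 256

static unsigned int name_hash(const char *s)
{
	unsigned int h = 5381;
	while (*s) {
		h = h * 33 + (unsigned char)*s++;
	}
	return h % NUM_BUCKETS;
}

/* Build the hash once, before deforming all vertices. */
static void hash_build(Channel **buckets, Channel *channels, int num_channels)
{
	memset(buckets, 0, sizeof(Channel *) * NUM_BUCKETS);
	for (int i = 0; i < num_channels; i++) {
		unsigned int h = name_hash(channels[i].name);
		channels[i].next_bucket = buckets[h];
		buckets[h] = &channels[i];
	}
}

/* Per-lookup cost is now independent of the number of channels. */
static Channel *hash_lookup(Channel **buckets, const char *name)
{
	for (Channel *ch = buckets[name_hash(name)]; ch; ch = ch->next_bucket) {
		if (strcmp(ch->name, name) == 0) {
			return ch;
		}
	}
	return NULL;  /* no channel with this name */
}
```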
d0015cba02 Multi-thread displace modifier
The title says it all actually. Use BLI task to loop over vertices
and displace their locations. Gives a 2x FPS increase in a file with
just a time-dependent displace modifier on my desktop.
2017-02-06 14:21:29 +01:00
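A conceptual sketch of splitting the per-vertex displace loop over threads; Blender uses its own BLI_task API for this, OpenMP is used here only to keep the illustration short, and sample_displacement() is a made-up stand-in for the per-vertex texture sampling:

```c
#include <stddef.h>

/* Hypothetical placeholder for the per-vertex texture sampling. */
static float sample_displacement(const float co[3])
{
	return 0.1f * (co[0] + co[1] + co[2]);
}

/* Each vertex is independent, so the loop can be split across threads. */
void displace_verts_threaded(float (*verts)[3], size_t num_verts,
                             const float (*normals)[3], float strength)
{
#pragma omp parallel for schedule(static)
	for (ptrdiff_t i = 0; i < (ptrdiff_t)num_verts; i++) {
		const float delta = strength * sample_displacement(verts[i]);
		verts[i][0] += delta * normals[i][0];
		verts[i][1] += delta * normals[i][1];
		verts[i][2] += delta * normals[i][2];
	}
}
```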
89f3837d68 Displace modifier: Use special version of texture sampling
This version causes fewer spin locks and is by now well tested by render engines.

This should reduce the amount of threading overhead when having multiple objects
with a displace modifier enabled.

In the future this will also help us thread the modifier.

There are more modifiers which could benefit from this, but let's first
investigate the new behavior with one of them.
2017-02-06 12:37:08 +01:00
385fe4f0ce Add special texture sampling function which takes image pool argument
Using an image pool reduces the number of thread locks when acquiring the image.
Useful when a texture needs to be sampled a fewzillion times a second.
2017-02-06 12:23:03 +01:00
223aff987a Fix memory leak when building without audaspace 2017-02-06 11:18:20 +01:00
Phil Christensen
351c409317 C++ conformance fixes (MSVC /permissive-)
We (the Microsoft C++ team) use the Blender project as part of our "Real world code" tests.
I noticed a place in WIN32 specific code (dvpapi.cpp:85) where a string literal is losing
its const-ness when being passed to BLI_dynlib_open().  This is not permitted when using the
/permissive- conformance compiler switch (see our blog
https://blogs.msdn.microsoft.com/vcblog/2016/11/16/permissive-switch/)

My suggested fix is to add const and propagate it where needed.  Another possible fix would be
to explicitly cast away the const.

Reviewers: mont29, sergey, LazyDodo

Subscribers: Blendify, sergey, mont29, LazyDodo

Tags: #platform:_windows

Differential Revision: https://developer.blender.org/D2495
2017-02-06 10:44:56 +01:00
22156d951d fix T50602: Avoid crash when executing transform_snap_context_project_view3d_mixed with dist_px NULL 2017-02-06 01:01:39 -03:00
da08aa4b96 Cleanup of the last commit: lack of attention while debugging timings X(
This was a stupid mistake
2017-02-04 19:06:41 -03:00
75aa866211 Optimize BVHTree creation of vertices that have BLI_bitmap test
Instead of referencing the vertex first and testing the bitmap afterwards, test the bitmap first and reference the vertex after.

In a mesh with 31146 vertices and the entire bitmap disabled, the loop is 243% faster.
With the whole bitmap enabled, it becomes 463473% faster!!!

One possible reason for this huge difference in performance is that maybe the compiler is not inlining the function "BM_vert_at_index" (I don't know if the buildbot does this, but it's good to investigate).
2017-02-04 19:01:29 -03:00
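An illustration of the reordering described above (test the cheap bitmap bit before fetching the vertex); names are simplified and hypothetical, not the actual bvhutils code:

```c
#include <limits.h>

#define BITMAP_TEST(bm, i) ((bm)[(i) / CHAR_BIT] & (1 << ((i) % CHAR_BIT)))

typedef struct Vert { float co[3]; } Vert;
typedef struct Tree { int num_points; } Tree;

/* Stand-in for the real tree insertion call. */
static void tree_insert(Tree *tree, int index, const float co[3])
{
	(void)index; (void)co;
	tree->num_points++;
}

void build_tree_from_masked_verts(Tree *tree, const Vert *verts, int num_verts,
                                  const unsigned char *mask)
{
	for (int i = 0; i < num_verts; i++) {
		/* Before: fetch the vertex, then test the mask.
		 * After: test the mask first and skip masked-out vertices early. */
		if (!BITMAP_TEST(mask, i)) {
			continue;
		}
		tree_insert(tree, i, verts[i].co);
	}
}
```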
47caf343c0 fix T50592: Scene.raycast not working
Ray_start and ray_normal values were being ignored
2017-02-04 18:17:15 -03:00
a2c469edc2 Fix (unreported) crash in new snap code.
Looks like `object_map` and `mem_arena` may be NULL sometimes...

Also, cleaned up the function pointer declarations of Nearest2dUserData;
those were warning in gcc. Please *always* use typedef-defined
prototypes for function pointers, it is sooooo much cleaner and clearer
that way. And easy to convert from compatible functions too.
2017-02-04 21:51:27 +01:00
6663099810 Fix T50590: BI lamp doesn't hold a texture in this case.
BKE_lamp_free was somehow missing the refactor of datablocks handling
(which, among other things, completely separated ID refcounting and
linking management from ID freeing itself).

Either forgotten during development, or lost during a merge...
2017-02-04 21:31:52 +01:00
c367e23d46 Snap System: Use callbacks to differentiate how vertices of DerivedMeshes and BMeshes are referenced
Before, the object type was stored in the `userdata`, and the same function handled both types to obtain the coordinates of the vertices
2017-02-03 20:08:57 -03:00
a0561a05ef Remove flag: SNAP_OBJECT_USE_CACHE from snap_context
Since the cache is created in one way or another, this flag is not really making a difference
More details here: D2496
2017-02-03 19:03:31 -03:00
21f3767809 fix T46892: snap to closest point now works with Individual Origins
The code looks for the closest element among the centers. In the case of islands, the center of each vertex is the center of the island.
The solution here is to skip the island search when the operation is a translation
2017-02-03 13:15:44 -03:00
0b4a9caf51 Forgotten in commit ddf99214dc
In object mode, the rotation matrix needs to be restored to its initial value if a snap point is not found
2017-02-03 12:57:02 -03:00
0e459ad1a3 Buildbot: Re-enable cuda support for OSX 2017-02-03 16:11:05 +01:00
52696a0d3f Fix T50125: Shortcut keys missing in menus for Clear Location, Rotation, and Scale.
Menu entries and shortcuts did not have exactly the same behavior; now they do
(using the shortcuts' behavior).
2017-02-03 16:10:00 +01:00
f3a7104adb Fix T49860: Copying vgroups between objects sharing the same obdata was not possible.
Pretty straightforward actually: just do not bother about the obdata part
of vgroups in that case, only copy the object part of it.

And let's curse once again that stuff spread across several types of
data-blocks...
2017-02-03 15:47:44 +01:00
a1820afa30 Depsgraph: Add some extra debug prints on eval 2017-02-03 14:05:59 +01:00
030e99588d Tests: Use proper order for EXPECT_EQ() 2017-02-03 12:03:59 +01:00
aea17a612d Tests: Use EXPECT_FALSE() instead of EXPECT_EQ(foo, false) 2017-02-03 11:52:47 +01:00
dc1b45ff1a Tests: Use EXPECT_TRUE() instead of EXPECT_EQ(foo, true) 2017-02-03 11:52:29 +01:00
e1e85454ea Cycles: Cleanup, order of arguments to EXPECT_EQ
The order was wrong from the semantic point of view, caused
by some legacy workarounds in Libmv. Didn't realize it was
not how things were expected to be used.
2017-02-03 11:35:34 +01:00
103f2655ab Explode modifier: Don't tessellate DM if we are not going to apply modifier 2017-02-03 11:03:47 +01:00
ddf99214dc fix T49494: snap_align_rotation should use a local pivot to make the transformation
The problem was simple, just transform the global coordinates of t->tsnap.snapTarget to local coordinates.
(Some comments were added to the code)
2017-02-03 02:27:57 -03:00
9f67367f0a Fix T50084: Adding torus re-orders UV layers.
The issue was indeed in the join operation: the mesh into which we join all others
could be re-added to the final data after the others, leading to undesired
re-ordering of CD layers, existing vertices etc. being shifted away
from their original indices, and so on.

All kinds of more or less bad and undesired changes, fixed by always
re-inserting the destination mesh first.

Also cleaned up that code a bit; it was doing some rather
non-recommended things (like allocating zero-sized memory, doing its own
cooking to remove a data-block from main, etc.).
2017-02-02 21:42:00 +01:00
33e456b0ce install_deps.sh: don't use backticks
The script complained that it could not find the executable "--build-all".
2017-02-02 17:49:24 +01:00
9e2dca933e Fix T50524: Basis shapekey editing while rendering bug.
Root of the issue was BM_mesh_bm_to_me() breaking application of basis
offset to 'child' shapekeys, when called more than once from same BMesh.
2017-02-02 17:05:48 +01:00
feb588060a Fix T50497: prop_search not correctly drawn in UI (D2473) 2017-02-02 17:30:50 +03:00
86747ff180 Fix T50535: Cycles render segfault when Explode modifier before hair particle modifier + UV material
Tricky issue caused by CDDM_copy() copying the MFACE array but not MTFACE, which
confused logic later on.

Now we don't copy ANY tessellation data unless it is requested.

Thanks Bastien for help and review!
2017-02-02 14:36:30 +01:00
6edfa73f5c Revert the change of a default in a recent commit
This was my own mistake
2017-02-01 23:46:53 -05:00
Michael Stahre
7f10a889e3 Fix incorrect spot lamp blend in python GPU uniform export.
Reviewed By: brecht

Differential Revision: https://developer.blender.org/D2378
2017-02-02 04:03:26 +01:00
Michael Stahre
d4e0557cf1 Fix missing uniform type for python GPU uniform export.
Reviewed By: brecht

Differential Revision: https://developer.blender.org/D2379
2017-02-02 04:03:26 +01:00
Aaron
3ab8895610 UI: Add missing colon 2017-02-01 14:56:11 -05:00
fb61711b1a Fix T50570: pressing pgup or pgdn in any scrollable area irreversably alters scrolling speed.
The 'page' prop of the scroll up/down operators would get stuck once set by
the pageup/down keys... Now only take this prop into account if explicitly
set, not when its value is inherited from a previous run.
2017-02-01 12:52:26 +01:00
c231c29afa Cycles tests: Allow python auto-exec 2017-02-01 10:13:40 +01:00
fa19940dc6 Cycles: Fix rng_state initialization when using resumable rendering 2017-02-01 05:43:17 +01:00
92258f3678 Snap System: BVH: Ignore calculations, in parent nodes, used only in perspective view
Strangely this change does not affect performance very much.
Suzanne subdivided 6x (ortho view):
Before: 0.00013983
After:  0.00013920

But it makes the code easier to read
2017-01-31 21:43:44 -03:00
75a4c836d6 Snap System: Invert the test order of the elements to snap (useful for ruler)
When the function that tests snap on multiple elements starts from the face and ends at the vertex, the transition between elements becomes much smoother.
2017-01-31 17:10:34 -03:00
a90622ce93 Fix T50331: New Dependency Graph - "frame" python driver expression not working 2017-01-31 12:17:55 +01:00
326516c9d7 Cycles: Fix spelling in comment 2017-01-31 12:08:19 +01:00
b3690cdecd Fix variable shadow and avoid calculating same value twice 2017-01-31 11:55:29 +01:00
Dalai Felinto
887fc0ff6d Silence unused var warnings after rBac58a7fa 2017-01-31 11:00:50 +01:00
b5682a6fdd Cleanup: use 'cb_flag', not 'cd_flag' for library_query callbacks.
`cd_flag` tends to be used for CustomData flags in mesh area, while for
library_query those are rather callback flags...
2017-01-31 10:41:25 +01:00
60e387f5e3 Cleanup: Rename callback flags from library_query to IDWALK_CB_...
Better to have clear way to tell whether flag is parameter for
BKE_library_foreach_ID_link(), parameter for its callback function, or
return value from this callback function.
2017-01-31 09:47:59 +01:00
a928a9c1e1 Fix compilation error: too few arguments to function call.
D2492 by @tomjpsun.
2017-01-31 07:00:31 +01:00
d07e2416db Fix unreported bug: Ruler/Protractor: Snap to vertices and edges was not considering the depth variation
Taking advantage of the area, the depth is decreased by 0.01 BU on each loop to give priority to elements in the order: Vertex > Edge > Face. This increases the threshold and improves snapping to multiple elements
2017-01-30 23:49:09 -03:00
b841d60bf8 Snap System: Return depth when snapping to edges and vertices, because the Ruler only works right this way
The Ruler snaps to the element with the lowest depth.
2017-01-30 22:49:44 -03:00
a50b173952 Use the same solution to test the pixel distance to the AABB, with BoundBox
The previous solution took arbitrary values to determine whether or not the mouse was near the Bound Box (it simply scaled the Bound Box).

Now the same function that detects the distance from the BVHTree nodes to the mouse is used for the Bound Box
2017-01-30 22:27:38 -03:00
4e1025376e Freestyle: Use of the Fill Range by Selection operator in the mesh edit mode.
This revision extends the functionality of the "Fill Range by Selection" button in
the "Distance from Camera/Object" modifiers so that only selected mesh vertices
in edit mode are taken into account (instead of considering all vertices when
in object mode) to compute the min & max distances from the reference.
This will give users much finer control over the range values.
2017-01-31 09:08:10 +09:00
bc4aeefe82 Make 'make local' twice quicker.
Use new Main->relations ID usages mapping in BKE_library_make_local().

This allows a noticeable simplification of the code, and can be up to twice
as quick as the previous code (Make Local: All goes from 2 to 1 minute e.g. in a
huge production file with thousands of linked data-blocks).

Note that the new code has been successfully tested with several complex cases
(production files from Agent327), as well as some test cases from recent
bug reports related to that function. But as always, nothing beats real
usage by real users, so please check this before we release 2.79. ;)

Main areas that would be affected: Make Local operations (L shortcut in the
3D View), and appending from libraries.
2017-01-30 22:33:20 +01:00
eadfd901ad Optimization: pass Main to BKE_library_foreach_ID_link() and use its relations.
Use Main->relations in BKE_library_foreach_ID_link(), when possible
(i.e. IDWALK_READONLY is set), and if the data is available of course.

This is a quite minor optimization and no noticeable improvements are expected,
but it does not hurt either to avoid potentially tens of loops over e.g.
object constraints and modifiers, or heaps of drivers...
2017-01-30 22:33:20 +01:00
fbd28d375a Fix missing non-ID nodetrees in ID relationships built from library_query.c
This shall fix both existing code (bpy mapping, and local/lib usages
checks), and new Main->relations generation.
2017-01-30 22:33:20 +01:00
4443bad30a Add optional, free-after-use usages mapping of IDs to Main.
The new MainIDRelations stores two mappings, one from ID users to IDs
used, the other vice-versa.

That data is assumed to be short-lived runtime data; code creating it is
responsible for clearing it asap. It will be very useful in places where we
handle relations between a lot of IDs at once.

Note: This commit is not fully functional, that is, the infamous, ugly,
PoS non-ID nodetrees will not be handled correctly when building relations.
The fix needed here is a bit noisy, so it will be done in its own next commit.
2017-01-30 22:33:20 +01:00
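A rough sketch of that kind of two-way relations data, with illustrative names only (not the actual DNA/BKE structures):

#include <unordered_map>
#include <vector>

struct ID;  /* stand-in for Blender's ID type */

struct MainIDRelationsSketch {
  std::unordered_map<ID *, std::vector<ID *> > id_user_to_used;  /* user ID -> IDs it uses */
  std::unordered_map<ID *, std::vector<ID *> > id_used_to_user;  /* used ID -> IDs using it */

  void add_relation(ID *user, ID *used)
  {
    id_user_to_used[user].push_back(used);
    id_used_to_user[used].push_back(user);
  }
};

With a mapping of this shape, code such as BKE_library_make_local() can look up the users of an ID directly instead of rescanning every data-block, which is where the speedup mentioned in the commit above comes from.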
997a210b08 Fix T49632: Grease pencil in "Edit Strokes" mode: Snap tool did not snap points to active object
A simple confusion between enums: ~SNAP_NOT_ACTIVE~
2017-01-30 18:30:19 -03:00
4580ace4c1 Alembic/CacheFile: fix crash de-referencing NULL pointer. 2017-01-30 10:46:24 +01:00
62f2c44ffb Cleanup: Unused function and variables in snap code
Please double-check that ED_transform_snap_object_project_ray_ex() is really
valid, it's weird to have extra unused arguments here.
2017-01-30 09:26:33 +01:00
505ff16dbf Snap System: BVH: ignore AABBs behind ray
This provides a slight performance improvement in specific cases, such as when the observer is inside a high-poly object and snaps to an edge or vertex.
2017-01-30 02:49:41 -03:00
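The idea, sketched with hypothetical names (the actual test lives in the BVH walk callbacks):

static bool aabb_is_behind_ray(float t_exit)
{
  /* t_exit is the distance along the ray at which it leaves the AABB;
   * a negative value means the whole box lies behind the ray origin
   * and can be skipped. */
  return t_exit < 0.0f;
}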
318ee2e8c1 Fix unreported bug: parameter ray_start repeated
The bug was only noticeable in terms of performance.
2017-01-30 02:26:00 -03:00
167ab03f36 Solve compilation error: Field has incomplete type 'enum eViewProj'
Error reported by @tomjpsun
Patch D2491
2017-01-30 02:19:18 -03:00
cdff659036 Freestyle: Fix (unreported) wrong distance calculation in the Fill Range by Selection operator.
Distance calculation performed by the "Fill Range by Selection" button of the
"Distance from Camera" color, alpha and thickness modifiers was incorrect,
limiting the usefulness of the functionality.

The problem was that the distance between the camera and individual vertex
locations was calculated in the world space, which was inconsistent with the
distance calculation done by the modifiers in the camera space.
2017-01-30 12:18:39 +09:00
6c23a1b8b9 Remove BKE_boundbox_ray_hit_check
Remove `BKE_boundbox_ray_hit_check` since it is no longer being used and can be easily replaced by `isect_ray_aabb_v3_simple`
2017-01-29 14:19:58 -03:00
b99491caf7 [msvc] Set proper OpenSubdiv flags when not using find_package to find opensubdiv. Fixes T50548 2017-01-29 10:00:38 -07:00
cf6ca226fa New math_geom function isect_ray_aabb_v3_simple
The new `isect_ray_aabb_v3_simple` function replaces `BKE_boundbox_ray_hit_check` and can also be used on the BVHTree root (first AABB), so it is much more efficient.
2017-01-29 13:56:58 -03:00
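For illustration, a minimal slab-style ray/AABB test of the kind described above (a sketch, not the actual math_geom implementation; it assumes dir_inv holds the per-axis reciprocals of the ray direction):

#include <algorithm>

static bool ray_aabb_hit_sketch(const float origin[3], const float dir_inv[3],
                                const float bb_min[3], const float bb_max[3],
                                float *r_tmin)
{
  float tmin = -1.0e30f, tmax = 1.0e30f;
  for (int i = 0; i < 3; i++) {
    const float t1 = (bb_min[i] - origin[i]) * dir_inv[i];
    const float t2 = (bb_max[i] - origin[i]) * dir_inv[i];
    tmin = std::max(tmin, std::min(t1, t2));
    tmax = std::min(tmax, std::max(t1, t2));
  }
  if (r_tmin) {
    *r_tmin = tmin;
  }
  /* Clamping tmin to 0 also rejects boxes lying entirely behind the ray. */
  return tmax >= std::max(tmin, 0.0f);
}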
dead79a16a Rename func set_SnapData to snap_data_set
Don't use CamelCase in function names, and try to keep the affected area first and the action last in names
2017-01-29 13:13:14 -03:00
88b0b22914 fix T50486: Don't always do the ray_start_correction in the ortho view
You need to make sure that ray_start is really far away, because even in the Orthographic view, in some cases, the ray can start inside the object
2017-01-29 12:26:15 -03:00
cd596fa1c7 Remove struct PreDefProject and store all immutable parameters within the new struct SnapData
In order to make these functions easier to read, the parameters `snap_to`, `mval`, `ray_start`, `ray_dir`, `view_proj` and `depth_range` are now stored in the struct `SnapData`
2017-01-29 12:07:14 -03:00
d6f965b99c Fix T50550: GPUShader: compile error - Background image not showing in
viewport.

Caused by rBd6cf28c5e15739f864fbf04614c2a50708b4b152, which forgot to
update the GLSL code for the "Light Path" node.
2017-01-29 16:00:25 +01:00
15b253c082 Fix blurry icons 2017-01-29 17:21:57 +03:00
ba116c8e9c fix D2489: Collada exporter broke edit data when exporting Armature while in Armature edit mode 2017-01-28 22:10:20 +01:00
c64c901535 fix D2489: Collada exporter broke edit data when exporting Armature while in Armature edit mode 2017-01-28 21:51:18 +01:00
e4e1900012 Fix (IRC reported) DataTransfer modifier affecting base mesh in some cases.
Checking only whether mverts is the same as the base mesh's is not enough in
all cases; some modifiers (deform ones) can generate only new mvert
data, while keeping the rest from the original mesh.

Now checking both mvert and medge, hopefully this will be enough to catch
all problematic cases this time.

Thanks @gaia for finding that problem. :)
2017-01-27 19:27:07 +01:00
11abb13483 Fix T50534, Part II: warn user when DataTransfer mod affects custom normals.
Custom normals need Autosmooth setting to be enabled, always!
2017-01-27 19:07:29 +01:00
fb2f95c91a Fix T50534: Part I, cleanup loop normals generated during modifier stack evaluation.
Those could stay around, and be displayed in 3DView even when autosmooth
was disabled (but would not be 'active').
2017-01-27 19:07:29 +01:00
bfe3b967fa UI: Move Scene Game Properties to the Scene Tab (was in world) 2017-01-27 11:36:53 -05:00
436f1e3345 Snap Functions: Remove the use of the function 'BLI_bvhtree_find_nearest_to_ray' in transform_snap_object
Although the "BLI_bvhtree_find_nearest_to_ray" function is more practical than the generic "BLI_bvhtree_walk_dfs", it does not work to snap in perspective view. This makes it necessary to add "ifs" and functions that make the code difficult to understand

patch: D2474
2017-01-27 13:02:05 -03:00
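A generic sketch of what a DFS walk with user callbacks looks like (hypothetical types, not the BLI_bvhtree_walk_dfs API); the caller decides per AABB whether to descend and handles each leaf itself, which leaves the distance metric entirely up to the caller:

struct BVHNodeSketch {
  bool is_leaf;
  const BVHNodeSketch *children[2];
  int prim_index;
};

template<typename EnterAABB, typename VisitLeaf>
static void bvh_walk_dfs_sketch(const BVHNodeSketch *node,
                                const EnterAABB &enter_aabb,
                                const VisitLeaf &visit_leaf)
{
  if (node == nullptr || !enter_aabb(node)) {
    return;  /* the caller decides whether this AABB is worth descending into */
  }
  if (node->is_leaf) {
    visit_leaf(node->prim_index);
    return;
  }
  for (const BVHNodeSketch *child : node->children) {
    bvh_walk_dfs_sketch(child, enter_aabb, visit_leaf);
  }
}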
0330741548 Cycles: Add option to replace GI with AO approximation after certain amount of bounces
This is a speed-up option which is mainly useful for the viewport. It gives a nice
2x speedup in the barbershop scene when replacing GI with AO after the 2nd bounce,
without losing too much detail.

Reviewers: brecht

Subscribers: eyecandy, venomgfx

Differential Revision: https://developer.blender.org/D2383
2017-01-27 14:21:49 +01:00
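A compact sketch of the approximation (it mirrors the kernel changes further down in this diff, wrapped into a standalone helper for illustration):

static void clamp_indirect_ray_to_ao(int bounce, int ao_bounces, float ao_distance,
                                     unsigned int path_ray_shadow, unsigned int *visibility,
                                     float *ray_t)
{
  /* After 'ao_bounces' indirect bounces, restrict the next ray to shadow
   * visibility and clamp its length to the AO distance, so the remaining GI
   * becomes a background-tinted ambient occlusion term. */
  if (bounce > ao_bounces) {
    *visibility = path_ray_shadow;
    *ray_t = ao_distance;
  }
}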
Dalai Felinto
84b18162cf Fixup for rBac58a7fa (HSV doversion)
We are not bumping file version, but we cannot have the doversion code running twice.
In this particular case it was crashing files, since we were setting node->storage to NULL, and later on accessing it.
2017-01-27 11:24:23 +01:00
bce9e80d82 CMake: Fix typo 2017-01-27 05:24:58 +01:00
315 changed files with 7237 additions and 4729 deletions

View File

@@ -445,6 +445,7 @@ option(WITH_BOOST "Enable features depending on boost" ON)
# Unit testsing
option(WITH_GTESTS "Enable GTest unit testing" OFF)
option(WITH_OPENGL_TESTS "Enable OpenGL related unit testing (Experimental)" OFF)
# Documentation
@@ -723,7 +724,7 @@ if(NOT WITH_BOOST)
macro(set_and_warn
_setting _val)
if(${${_setting}})
message(STATUS "'WITH_BOOST' is disabled: forceing 'set(${_setting} ${_val})'")
message(STATUS "'WITH_BOOST' is disabled: forcing 'set(${_setting} ${_val})'")
endif()
set(${_setting} ${_val})
endmacro()

View File

@@ -795,7 +795,7 @@ CXXFLAGS_BACK=$CXXFLAGS
if [ "$USE_CXX11" = true ]; then
WARNING "You are trying to use c++11, this *should* go smoothely with any very recent distribution
However, if you are experiencing linking errors (also when building Blender itself), please try the following:
* Re-run this script with `--build-all --force-all` options.
* Re-run this script with '--build-all --force-all' options.
* Ensure your gcc version is at the very least 4.8, if possible you should really rather use gcc-5.1 or above.
Please note that until the transition to C++11-built libraries if completed in your distribution, situation will

View File

@@ -72,10 +72,8 @@ if 'cmake' in builder:
# Set up OSX architecture
if builder.endswith('x86_64_10_6_cmake'):
cmake_extra_options.append('-DCMAKE_OSX_ARCHITECTURES:STRING=x86_64')
cmake_extra_options.append('-DCUDA_NVCC_EXECUTABLE=/usr/local/cuda8-hack/bin/nvcc')
cmake_extra_options.append('-DWITH_CODEC_QUICKTIME=OFF')
cmake_extra_options.append('-DCMAKE_OSX_DEPLOYMENT_TARGET=10.6')
build_cubins = False
elif builder.startswith('win'):

View File

@@ -1,5 +1,7 @@
set(PROJECT_DESCRIPTION "Blender is a very fast and versatile 3D modeller/renderer.")
set(PROJECT_COPYRIGHT "Copyright (C) 2001-2012 Blender Foundation")
string(TIMESTAMP CURRENT_YEAR "%Y")
set(PROJECT_DESCRIPTION "Blender is the free and open source 3D creation suite software.")
set(PROJECT_COPYRIGHT "Copyright (C) 2001-${CURRENT_YEAR} Blender Foundation")
set(PROJECT_CONTACT "foundation@blender.org")
set(PROJECT_VENDOR "Blender Foundation")
@@ -135,4 +137,3 @@ unset(MINOR_VERSION)
unset(PATCH_VERSION)
unset(BUILD_REV)

View File

@@ -453,6 +453,12 @@ if(WITH_OPENSUBDIV OR WITH_CYCLES_OPENSUBDIV)
debug ${OPENSUBDIV_LIBPATH}/osdCPU_d.lib
debug ${OPENSUBDIV_LIBPATH}/osdGPU_d.lib
)
set(OPENSUBDIV_HAS_OPENMP TRUE)
set(OPENSUBDIV_HAS_TBB FALSE)
set(OPENSUBDIV_HAS_OPENCL TRUE)
set(OPENSUBDIV_HAS_CUDA FALSE)
set(OPENSUBDIV_HAS_GLSL_TRANSFORM_FEEDBACK TRUE)
set(OPENSUBDIV_HAS_GLSL_COMPUTE TRUE)
windows_find_package(OpenSubdiv)
endif()

View File

@@ -681,7 +681,7 @@ Image classes
.. attribute:: zbuff
Use depth component of render as grey scale color - suitable for texture source.
Use depth component of render as grayscale color - suitable for texture source.
:type: bool
@@ -817,7 +817,7 @@ Image classes
.. attribute:: zbuff
Use depth component of viewport as grey scale color - suitable for texture source.
Use depth component of viewport as grayscale color - suitable for texture source.
:type: bool
@@ -1260,8 +1260,8 @@ Filter classes
.. class:: FilterGray
Filter for gray scale effect.
Proportions of R, G and B contributions in the output gray scale are 28:151:77.
Filter for grayscale effect.
Proportions of R, G and B contributions in the output grayscale are 28:151:77.
.. attribute:: previous

View File

@@ -427,9 +427,9 @@ if BLENDER_REVISION != "Unknown":
BLENDER_VERSION_DOTS += " " + BLENDER_REVISION # '2.62.1 SHA1'
BLENDER_VERSION_PATH = "_".join(blender_version_strings) # '2_62_1'
if bpy.app.version_cycle == "release":
BLENDER_VERSION_PATH = "%s%s_release" % ("_".join(blender_version_strings[:2]),
bpy.app.version_char) # '2_62_release'
if bpy.app.version_cycle in {"rc", "release"}:
# '2_62a_release'
BLENDER_VERSION_PATH = "%s%s_release" % ("_".join(blender_version_strings[:2]), bpy.app.version_char)
# --------------------------DOWNLOADABLE FILES----------------------------------

View File

@@ -96,6 +96,11 @@ def main():
rsync_base = "rsync://%s@%s:%s" % (args.user, args.rsync_server, args.rsync_root)
blenver = blenver_zip = ""
api_name = ""
branch = ""
is_release = False
# I) Update local mirror using rsync.
rsync_mirror_cmd = ("rsync", "--delete-after", "-avzz", rsync_base, args.mirror_dir)
subprocess.run(rsync_mirror_cmd, env=dict(os.environ, RSYNC_PASSWORD=args.password))
@@ -108,19 +113,24 @@ def main():
subprocess.run(doc_gen_cmd)
# III) Get Blender version info.
blenver = blenver_zip = ""
getver_file = os.path.join(tmp_dir, "blendver.txt")
getver_script = (""
"import sys, bpy\n"
"with open(sys.argv[-1], 'w') as f:\n"
" f.write('%d_%d%s_release\\n' % (bpy.app.version[0], bpy.app.version[1], bpy.app.version_char)\n"
" if bpy.app.version_cycle in {'rc', 'release'} else '%d_%d_%d\\n' % bpy.app.version)\n"
" f.write('%d_%d_%d' % bpy.app.version)\n")
" is_release = bpy.app.version_cycle in {'rc', 'release'}\n"
" branch = bpy.app.build_branch.split()[0].decode()\n"
" f.write('%d\\n' % is_release)\n"
" f.write('%s\\n' % branch)\n"
" f.write('%d.%d%s\\n' % (bpy.app.version[0], bpy.app.version[1], bpy.app.version_char)\n"
" if is_release else '%s\\n' % branch)\n"
" f.write('%d_%d%s_release' % (bpy.app.version[0], bpy.app.version[1], bpy.app.version_char)\n"
" if is_release else '%d_%d_%d' % bpy.app.version)\n")
get_ver_cmd = (args.blender, "--background", "-noaudio", "--factory-startup", "--python-exit-code", "1",
"--python-expr", getver_script, "--", getver_file)
subprocess.run(get_ver_cmd)
with open(getver_file) as f:
blenver, blenver_zip = f.read().split("\n")
is_release, branch, blenver, blenver_zip = f.read().split("\n")
is_release = bool(int(is_release))
os.remove(getver_file)
# IV) Build doc.
@@ -132,7 +142,7 @@ def main():
os.chdir(curr_dir)
# V) Cleanup existing matching dir in server mirror (if any), and copy new doc.
api_name = "blender_python_api_%s" % blenver
api_name = blenver
api_dir = os.path.join(args.mirror_dir, api_name)
if os.path.exists(api_dir):
shutil.rmtree(api_dir)
@@ -150,19 +160,15 @@ def main():
os.rename(zip_path, os.path.join(api_dir, "%s.zip" % zip_name))
# VII) Create symlinks and html redirects.
#~ os.symlink(os.path.join(DEFAULT_SYMLINK_ROOT, api_name, "contents.html"), os.path.join(api_dir, "index.html"))
os.symlink("./contents.html", os.path.join(api_dir, "index.html"))
if blenver.endswith("release"):
symlink = os.path.join(args.mirror_dir, "blender_python_api_current")
if is_release:
symlink = os.path.join(args.mirror_dir, "current")
os.remove(symlink)
os.symlink("./%s" % api_name, symlink)
with open(os.path.join(args.mirror_dir, "250PythonDoc/index.html"), 'w') as f:
f.write("<html><head><title>Redirecting...</title><meta http-equiv=\"REFRESH\""
"content=\"0;url=../%s/\"></head><body>Redirecting...</body></html>" % api_name)
else:
symlink = os.path.join(args.mirror_dir, "blender_python_api_master")
os.remove(symlink)
os.symlink("./%s" % api_name, symlink)
elif branch == "master":
with open(os.path.join(args.mirror_dir, "blender_python_api/index.html"), 'w') as f:
f.write("<html><head><title>Redirecting...</title><meta http-equiv=\"REFRESH\""
"content=\"0;url=../%s/\"></head><body>Redirecting...</body></html>" % api_name)

View File

@@ -62,7 +62,7 @@ if(WITH_IK_ITASC)
add_subdirectory(itasc)
endif()
if(WITH_IK_SOLVER OR WITH_GAMEENGINE OR WITH_MOD_BOOLEAN)
if(WITH_GAMEENGINE)
add_subdirectory(moto)
endif()

View File

@@ -101,11 +101,11 @@ ATOMIC_INLINE size_t atomic_fetch_and_add_z(size_t *p, size_t x);
ATOMIC_INLINE size_t atomic_fetch_and_sub_z(size_t *p, size_t x);
ATOMIC_INLINE size_t atomic_cas_z(size_t *v, size_t old, size_t _new);
ATOMIC_INLINE unsigned atomic_add_and_fetch_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_sub_and_fetch_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_fetch_and_add_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_fetch_and_sub_u(unsigned *p, unsigned x);
ATOMIC_INLINE unsigned atomic_cas_u(unsigned *v, unsigned old, unsigned _new);
ATOMIC_INLINE unsigned int atomic_add_and_fetch_u(unsigned int *p, unsigned int x);
ATOMIC_INLINE unsigned int atomic_sub_and_fetch_u(unsigned int *p, unsigned int x);
ATOMIC_INLINE unsigned int atomic_fetch_and_add_u(unsigned int *p, unsigned int x);
ATOMIC_INLINE unsigned int atomic_fetch_and_sub_u(unsigned int *p, unsigned int x);
ATOMIC_INLINE unsigned int atomic_cas_u(unsigned int *v, unsigned int old, unsigned int _new);
/* WARNING! Float 'atomics' are really faked ones, those are actually closer to some kind of spinlock-sync'ed operation,
* which means they are only efficient if collisions are highly unlikely (i.e. if probability of two threads

View File

@@ -113,58 +113,58 @@ ATOMIC_INLINE size_t atomic_cas_z(size_t *v, size_t old, size_t _new)
/******************************************************************************/
/* unsigned operations. */
ATOMIC_INLINE unsigned atomic_add_and_fetch_u(unsigned *p, unsigned x)
ATOMIC_INLINE unsigned int atomic_add_and_fetch_u(unsigned int *p, unsigned int x)
{
assert(sizeof(unsigned) == LG_SIZEOF_INT);
assert(sizeof(unsigned int) == LG_SIZEOF_INT);
#if (LG_SIZEOF_INT == 8)
return (unsigned)atomic_add_and_fetch_uint64((uint64_t *)p, (uint64_t)x);
return (unsigned int)atomic_add_and_fetch_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_INT == 4)
return (unsigned)atomic_add_and_fetch_uint32((uint32_t *)p, (uint32_t)x);
return (unsigned int)atomic_add_and_fetch_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
ATOMIC_INLINE unsigned atomic_sub_and_fetch_u(unsigned *p, unsigned x)
ATOMIC_INLINE unsigned int atomic_sub_and_fetch_u(unsigned int *p, unsigned int x)
{
assert(sizeof(unsigned) == LG_SIZEOF_INT);
assert(sizeof(unsigned int) == LG_SIZEOF_INT);
#if (LG_SIZEOF_INT == 8)
return (unsigned)atomic_add_and_fetch_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
return (unsigned int)atomic_add_and_fetch_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
#elif (LG_SIZEOF_INT == 4)
return (unsigned)atomic_add_and_fetch_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
return (unsigned int)atomic_add_and_fetch_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
#endif
}
ATOMIC_INLINE unsigned atomic_fetch_and_add_u(unsigned *p, unsigned x)
ATOMIC_INLINE unsigned int atomic_fetch_and_add_u(unsigned int *p, unsigned int x)
{
assert(sizeof(unsigned) == LG_SIZEOF_INT);
assert(sizeof(unsigned int) == LG_SIZEOF_INT);
#if (LG_SIZEOF_INT == 8)
return (unsigned)atomic_fetch_and_add_uint64((uint64_t *)p, (uint64_t)x);
return (unsigned int)atomic_fetch_and_add_uint64((uint64_t *)p, (uint64_t)x);
#elif (LG_SIZEOF_INT == 4)
return (unsigned)atomic_fetch_and_add_uint32((uint32_t *)p, (uint32_t)x);
return (unsigned int)atomic_fetch_and_add_uint32((uint32_t *)p, (uint32_t)x);
#endif
}
ATOMIC_INLINE unsigned atomic_fetch_and_sub_u(unsigned *p, unsigned x)
ATOMIC_INLINE unsigned int atomic_fetch_and_sub_u(unsigned int *p, unsigned int x)
{
assert(sizeof(unsigned) == LG_SIZEOF_INT);
assert(sizeof(unsigned int) == LG_SIZEOF_INT);
#if (LG_SIZEOF_INT == 8)
return (unsigned)atomic_fetch_and_add_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
return (unsigned int)atomic_fetch_and_add_uint64((uint64_t *)p, (uint64_t)-((int64_t)x));
#elif (LG_SIZEOF_INT == 4)
return (unsigned)atomic_fetch_and_add_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
return (unsigned int)atomic_fetch_and_add_uint32((uint32_t *)p, (uint32_t)-((int32_t)x));
#endif
}
ATOMIC_INLINE unsigned atomic_cas_u(unsigned *v, unsigned old, unsigned _new)
ATOMIC_INLINE unsigned int atomic_cas_u(unsigned int *v, unsigned int old, unsigned int _new)
{
assert(sizeof(unsigned) == LG_SIZEOF_INT);
assert(sizeof(unsigned int) == LG_SIZEOF_INT);
#if (LG_SIZEOF_INT == 8)
return (unsigned)atomic_cas_uint64((uint64_t *)v, (uint64_t)old, (uint64_t)_new);
return (unsigned int)atomic_cas_uint64((uint64_t *)v, (uint64_t)old, (uint64_t)_new);
#elif (LG_SIZEOF_INT == 4)
return (unsigned)atomic_cas_uint32((uint32_t *)v, (uint32_t)old, (uint32_t)_new);
return (unsigned int)atomic_cas_uint32((uint32_t *)v, (uint32_t)old, (uint32_t)_new);
#endif
}

View File

@@ -74,6 +74,7 @@ elseif(CMAKE_COMPILER_IS_GNUCC)
if(CXX_HAS_AVX2)
set(CYCLES_AVX2_KERNEL_FLAGS "-ffast-math -msse -msse2 -msse3 -mssse3 -msse4.1 -mavx -mavx2 -mfma -mlzcnt -mbmi -mbmi2 -mf16c -mfpmath=sse")
endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -ffast-math -fno-finite-math-only")
elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
check_cxx_compiler_flag(-msse CXX_HAS_SSE)
check_cxx_compiler_flag(-mavx CXX_HAS_AVX)
@@ -89,6 +90,7 @@ elseif(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
if(CXX_HAS_AVX2)
set(CYCLES_AVX2_KERNEL_FLAGS "-ffast-math -msse -msse2 -msse3 -mssse3 -msse4.1 -mavx -mavx2 -mfma -mlzcnt -mbmi -mbmi2 -mf16c")
endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -ffast-math -fno-finite-math-only")
endif()
if(CXX_HAS_SSE)

View File

@@ -62,7 +62,7 @@ def _parse_command_line():
num_resumable_chunks = None
current_resumable_chunk = None
# TODO(sergey): Add some nice error ptins if argument is not used properly.
# TODO(sergey): Add some nice error prints if argument is not used properly.
idx = 0
while idx < len(argv) - 1:
arg = argv[idx]

View File

@@ -638,6 +638,20 @@ class CyclesRenderSettings(bpy.types.PropertyGroup):
items=enum_texture_limit
)
cls.ao_bounces = IntProperty(
name="AO Bounces",
default=0,
description="Approximate indirect light with background tinted ambient occlusion at the specified bounce, 0 disables this feature",
min=0, max=1024,
)
cls.ao_bounces_render = IntProperty(
name="AO Bounces Render",
default=0,
description="Approximate indirect light with background tinted ambient occlusion at the specified bounce, 0 disables this feature",
min=0, max=1024,
)
# Various fine-tuning debug flags
def devices_update_callback(self, context):

View File

@@ -1038,10 +1038,11 @@ class CyclesWorld_PT_ambient_occlusion(CyclesButtonsPanel, Panel):
layout = self.layout
light = context.world.light_settings
scene = context.scene
row = layout.row()
sub = row.row()
sub.active = light.use_ambient_occlusion
sub.active = light.use_ambient_occlusion or scene.render.use_simplify
sub.prop(light, "ao_factor", text="Factor")
row.prop(light, "distance", text="Distance")
@@ -1612,6 +1613,13 @@ class CyclesScene_PT_simplify(CyclesButtonsPanel, Panel):
row.active = cscene.use_distance_cull
row.prop(cscene, "distance_cull_margin", text="Distance")
split = layout.split()
col = split.column()
col.prop(cscene, "ao_bounces")
col = split.column()
col.prop(cscene, "ao_bounces_render")
def draw_device(self, context):
scene = context.scene
layout = self.layout

View File

@@ -411,6 +411,7 @@ static void ExportCurveTrianglePlanes(Mesh *mesh, ParticleCurveData *CData,
}
}
mesh->resize_mesh(mesh->verts.size(), mesh->triangles.size());
mesh->attributes.remove(ATTR_STD_VERTEX_NORMAL);
mesh->attributes.remove(ATTR_STD_FACE_NORMAL);
mesh->add_face_normals();
@@ -434,8 +435,8 @@ static void ExportCurveTriangleGeometry(Mesh *mesh,
if(CData->curve_keynum[curve] <= 1 || CData->curve_length[curve] == 0.0f)
continue;
numverts += (CData->curve_keynum[curve] - 2)*2*resolution + resolution;
numtris += (CData->curve_keynum[curve] - 2)*resolution;
numverts += (CData->curve_keynum[curve] - 1)*resolution + resolution;
numtris += (CData->curve_keynum[curve] - 1)*2*resolution;
}
}
@@ -545,6 +546,7 @@ static void ExportCurveTriangleGeometry(Mesh *mesh,
}
}
mesh->resize_mesh(mesh->verts.size(), mesh->triangles.size());
mesh->attributes.remove(ATTR_STD_VERTEX_NORMAL);
mesh->attributes.remove(ATTR_STD_FACE_NORMAL);
mesh->add_face_normals();
@@ -890,7 +892,7 @@ void BlenderSync::sync_curves(Mesh *mesh,
}
/* obtain general settings */
bool use_curves = scene->curve_system_manager->use_curves;
const bool use_curves = scene->curve_system_manager->use_curves;
if(!(use_curves && b_ob.mode() != b_ob.mode_PARTICLE_EDIT)) {
if(!motion)
@@ -898,11 +900,11 @@ void BlenderSync::sync_curves(Mesh *mesh,
return;
}
int primitive = scene->curve_system_manager->primitive;
int triangle_method = scene->curve_system_manager->triangle_method;
int resolution = scene->curve_system_manager->resolution;
size_t vert_num = mesh->verts.size();
size_t tri_num = mesh->num_triangles();
const int primitive = scene->curve_system_manager->primitive;
const int triangle_method = scene->curve_system_manager->triangle_method;
const int resolution = scene->curve_system_manager->resolution;
const size_t vert_num = mesh->verts.size();
const size_t tri_num = mesh->num_triangles();
int used_res = 1;
/* extract particle hair data - should be combined with connecting to mesh later*/

View File

@@ -27,6 +27,7 @@
#include "subd_patch.h"
#include "subd_split.h"
#include "util_algorithm.h"
#include "util_foreach.h"
#include "util_logging.h"
#include "util_math.h"
@@ -525,69 +526,177 @@ static void attr_create_uv_map(Scene *scene,
}
/* Create vertex pointiness attributes. */
/* Compare vertices by sum of their coordinates. */
class VertexAverageComparator {
public:
VertexAverageComparator(const array<float3>& verts)
: verts_(verts) {
}
bool operator()(const int& vert_idx_a, const int& vert_idx_b)
{
const float3 &vert_a = verts_[vert_idx_a];
const float3 &vert_b = verts_[vert_idx_b];
if(vert_a == vert_b) {
/* Special case for doubles, so we ensure ordering. */
return vert_idx_a > vert_idx_b;
}
const float x1 = vert_a.x + vert_a.y + vert_a.z;
const float x2 = vert_b.x + vert_b.y + vert_b.z;
return x1 < x2;
}
protected:
const array<float3>& verts_;
};
static void attr_create_pointiness(Scene *scene,
Mesh *mesh,
BL::Mesh& b_mesh,
bool subdivision)
{
if(mesh->need_attribute(scene, ATTR_STD_POINTINESS)) {
const int numverts = b_mesh.vertices.length();
AttributeSet& attributes = (subdivision)? mesh->subd_attributes: mesh->attributes;
Attribute *attr = attributes.add(ATTR_STD_POINTINESS);
float *data = attr->data_float();
int *counter = new int[numverts];
float *raw_data = new float[numverts];
float3 *edge_accum = new float3[numverts];
/* Calculate pointiness using single ring neighborhood. */
memset(counter, 0, sizeof(int) * numverts);
memset(raw_data, 0, sizeof(float) * numverts);
memset(edge_accum, 0, sizeof(float3) * numverts);
BL::Mesh::edges_iterator e;
int i = 0;
for(b_mesh.edges.begin(e); e != b_mesh.edges.end(); ++e, ++i) {
int v0 = b_mesh.edges[i].vertices()[0],
v1 = b_mesh.edges[i].vertices()[1];
float3 co0 = get_float3(b_mesh.vertices[v0].co()),
co1 = get_float3(b_mesh.vertices[v1].co());
float3 edge = normalize(co1 - co0);
edge_accum[v0] += edge;
edge_accum[v1] += -edge;
++counter[v0];
++counter[v1];
}
i = 0;
BL::Mesh::vertices_iterator v;
for(b_mesh.vertices.begin(v); v != b_mesh.vertices.end(); ++v, ++i) {
if(counter[i] > 0) {
float3 normal = get_float3(b_mesh.vertices[i].normal());
float angle = safe_acosf(dot(normal, edge_accum[i] / counter[i]));
raw_data[i] = angle * M_1_PI_F;
if(!mesh->need_attribute(scene, ATTR_STD_POINTINESS)) {
return;
}
const int num_verts = b_mesh.vertices.length();
/* STEP 1: Find out duplicated vertices and point duplicates to a single
* original vertex.
*/
vector<int> sorted_vert_indeices(num_verts);
for(int vert_index = 0; vert_index < num_verts; ++vert_index) {
sorted_vert_indeices[vert_index] = vert_index;
}
VertexAverageComparator compare(mesh->verts);
sort(sorted_vert_indeices.begin(), sorted_vert_indeices.end(), compare);
/* This array stores index of the original vertex for the given vertex
* index.
*/
vector<int> vert_orig_index(num_verts);
for(int sorted_vert_index = 0;
sorted_vert_index < num_verts;
++sorted_vert_index)
{
const int vert_index = sorted_vert_indeices[sorted_vert_index];
const float3 &vert_co = mesh->verts[vert_index];
bool found = false;
for(int other_sorted_vert_index = sorted_vert_index + 1;
other_sorted_vert_index < num_verts;
++other_sorted_vert_index)
{
const int other_vert_index =
sorted_vert_indeices[other_sorted_vert_index];
const float3 &other_vert_co = mesh->verts[other_vert_index];
/* We are too far away now, we wouldn't have duplicate. */
if ((other_vert_co.x + other_vert_co.y + other_vert_co.z) -
(vert_co.x + vert_co.y + vert_co.z) > 3 * FLT_EPSILON)
{
break;
}
else {
raw_data[i] = 0.0f;
/* Found duplicate. */
if(len_squared(other_vert_co - vert_co) < FLT_EPSILON) {
found = true;
vert_orig_index[vert_index] = other_vert_index;
break;
}
}
/* Blur vertices to approximate 2 ring neighborhood. */
memset(counter, 0, sizeof(int) * numverts);
memcpy(data, raw_data, sizeof(float) * numverts);
i = 0;
for(b_mesh.edges.begin(e); e != b_mesh.edges.end(); ++e, ++i) {
int v0 = b_mesh.edges[i].vertices()[0],
v1 = b_mesh.edges[i].vertices()[1];
data[v0] += raw_data[v1];
data[v1] += raw_data[v0];
++counter[v0];
++counter[v1];
if(!found) {
vert_orig_index[vert_index] = vert_index;
}
for(i = 0; i < numverts; ++i) {
data[i] /= counter[i] + 1;
}
/* Make sure we always points to the very first orig vertex. */
for(int vert_index = 0; vert_index < num_verts; ++vert_index) {
int orig_index = vert_orig_index[vert_index];
while(orig_index != vert_orig_index[orig_index]) {
orig_index = vert_orig_index[orig_index];
}
delete [] counter;
delete [] raw_data;
delete [] edge_accum;
vert_orig_index[vert_index] = orig_index;
}
sorted_vert_indeices.free_memory();
/* STEP 2: Calculate vertex normals taking into account their possible
* duplicates which gets "welded" together.
*/
vector<float3> vert_normal(num_verts, make_float3(0.0f, 0.0f, 0.0f));
/* First we accumulate all vertex normals in the original index. */
for(int vert_index = 0; vert_index < num_verts; ++vert_index) {
const float3 normal = get_float3(b_mesh.vertices[vert_index].normal());
const int orig_index = vert_orig_index[vert_index];
vert_normal[orig_index] += normal;
}
/* Then we normalize the accumulated result and flush it to all duplicates
* as well.
*/
for(int vert_index = 0; vert_index < num_verts; ++vert_index) {
const int orig_index = vert_orig_index[vert_index];
vert_normal[vert_index] = normalize(vert_normal[orig_index]);
}
/* STEP 3: Calculate pointiness using single ring neighborhood. */
vector<int> counter(num_verts, 0);
vector<float> raw_data(num_verts, 0.0f);
vector<float3> edge_accum(num_verts, make_float3(0.0f, 0.0f, 0.0f));
BL::Mesh::edges_iterator e;
EdgeMap visited_edges;
int edge_index = 0;
memset(&counter[0], 0, sizeof(int) * counter.size());
for(b_mesh.edges.begin(e); e != b_mesh.edges.end(); ++e, ++edge_index) {
const int v0 = vert_orig_index[b_mesh.edges[edge_index].vertices()[0]],
v1 = vert_orig_index[b_mesh.edges[edge_index].vertices()[1]];
if(visited_edges.exists(v0, v1)) {
continue;
}
visited_edges.insert(v0, v1);
float3 co0 = get_float3(b_mesh.vertices[v0].co()),
co1 = get_float3(b_mesh.vertices[v1].co());
float3 edge = normalize(co1 - co0);
edge_accum[v0] += edge;
edge_accum[v1] += -edge;
++counter[v0];
++counter[v1];
}
for(int vert_index = 0; vert_index < num_verts; ++vert_index) {
const int orig_index = vert_orig_index[vert_index];
if(orig_index != vert_index) {
/* Skip duplicates, they'll be overwritten later on. */
continue;
}
if(counter[vert_index] > 0) {
const float3 normal = vert_normal[vert_index];
const float angle =
safe_acosf(dot(normal,
edge_accum[vert_index] / counter[vert_index]));
raw_data[vert_index] = angle * M_1_PI_F;
}
else {
raw_data[vert_index] = 0.0f;
}
}
/* STEP 3: Blur vertices to approximate 2 ring neighborhood. */
AttributeSet& attributes = (subdivision)? mesh->subd_attributes: mesh->attributes;
Attribute *attr = attributes.add(ATTR_STD_POINTINESS);
float *data = attr->data_float();
memcpy(data, &raw_data[0], sizeof(float) * raw_data.size());
memset(&counter[0], 0, sizeof(int) * counter.size());
edge_index = 0;
visited_edges.clear();
for(b_mesh.edges.begin(e); e != b_mesh.edges.end(); ++e, ++edge_index) {
const int v0 = vert_orig_index[b_mesh.edges[edge_index].vertices()[0]],
v1 = vert_orig_index[b_mesh.edges[edge_index].vertices()[1]];
if(visited_edges.exists(v0, v1)) {
continue;
}
visited_edges.insert(v0, v1);
data[v0] += raw_data[v1];
data[v1] += raw_data[v0];
++counter[v0];
++counter[v1];
}
for(int vert_index = 0; vert_index < num_verts; ++vert_index) {
data[vert_index] /= counter[vert_index] + 1;
}
/* STEP 4: Copy attribute to the duplicated vertices. */
for(int vert_index = 0; vert_index < num_verts; ++vert_index) {
const int orig_index = vert_orig_index[vert_index];
data[vert_index] = data[orig_index];
}
}
@@ -656,9 +765,6 @@ static void create_mesh(Scene *scene,
generated[i++] = get_float3(v->undeformed_co())*size - loc;
}
/* Create needed vertex attributes. */
attr_create_pointiness(scene, mesh, b_mesh, subdivision);
/* create faces */
vector<int> nverts(numfaces);
vector<int> face_flags(numfaces, FACE_FLAG_NONE);
@@ -671,6 +777,15 @@ static void create_mesh(Scene *scene,
int shader = clamp(f->material_index(), 0, used_shaders.size()-1);
bool smooth = f->use_smooth() || use_loop_normals;
if(use_loop_normals) {
BL::Array<float, 12> loop_normals = f->split_normals();
for(int i = 0; i < n; i++) {
N[vi[i]] = make_float3(loop_normals[i * 3],
loop_normals[i * 3 + 1],
loop_normals[i * 3 + 2]);
}
}
/* Create triangles.
*
* NOTE: Autosmooth is already taken care about.
@@ -718,6 +833,7 @@ static void create_mesh(Scene *scene,
/* Create all needed attributes.
* The calculate functions will check whether they're needed or not.
*/
attr_create_pointiness(scene, mesh, b_mesh, subdivision);
attr_create_vertex_color(scene, mesh, b_mesh, nverts, face_flags, subdivision);
attr_create_uv_map(scene, mesh, b_mesh, nverts, face_flags, subdivision, subdivide_uvs);
@@ -1178,4 +1294,3 @@ void BlenderSync::sync_mesh_motion(BL::Object& b_ob,
}
CCL_NAMESPACE_END

View File

@@ -1352,6 +1352,9 @@ void BlenderSession::update_resumable_tile_manager(int num_samples)
VLOG(1) << "Samples range start is " << range_start_sample << ", "
<< "number of samples to render is " << range_num_samples;
scene->integrator->start_sample = range_start_sample;
scene->integrator->tag_update(scene);
session->tile_manager.range_start_sample = range_start_sample;
session->tile_manager.range_num_samples = range_num_samples;
}

View File

@@ -609,7 +609,8 @@ static ShaderNode *add_node(Scene *scene,
bool is_builtin = b_image.packed_file() ||
b_image.source() == BL::Image::source_GENERATED ||
b_image.source() == BL::Image::source_MOVIE ||
b_engine.is_preview();
(b_engine.is_preview() &&
b_image.source() != BL::Image::source_SEQUENCE);
if(is_builtin) {
/* for builtin images we're using image datablock name to find an image to
@@ -662,7 +663,8 @@ static ShaderNode *add_node(Scene *scene,
bool is_builtin = b_image.packed_file() ||
b_image.source() == BL::Image::source_GENERATED ||
b_image.source() == BL::Image::source_MOVIE ||
b_engine.is_preview();
(b_engine.is_preview() &&
b_image.source() != BL::Image::source_SEQUENCE);
if(is_builtin) {
int scene_frame = b_scene.frame_current();

View File

@@ -322,6 +322,15 @@ void BlenderSync::sync_integrator()
integrator->volume_samples = volume_samples;
}
if(b_scene.render().use_simplify()) {
if(preview) {
integrator->ao_bounces = get_int(cscene, "ao_bounces");
}
else {
integrator->ao_bounces = get_int(cscene, "ao_bounces_render");
}
}
if(integrator->modified(previntegrator))
integrator->tag_update(scene);
}

View File

@@ -19,6 +19,7 @@
#include "mesh.h"
#include "util_algorithm.h"
#include "util_map.h"
#include "util_path.h"
#include "util_set.h"
@@ -78,7 +79,7 @@ static inline BL::Mesh object_to_mesh(BL::BlendData& data,
me.calc_normals_split();
}
else {
me.split_faces();
me.split_faces(false);
}
}
if(subdivision_type == Mesh::SUBDIVISION_NONE) {
@@ -786,6 +787,35 @@ struct ParticleSystemKey {
}
};
class EdgeMap {
public:
EdgeMap() {
}
void clear() {
edges_.clear();
}
void insert(int v0, int v1) {
get_sorted_verts(v0, v1);
edges_.insert(std::pair<int, int>(v0, v1));
}
bool exists(int v0, int v1) {
get_sorted_verts(v0, v1);
return edges_.find(std::pair<int, int>(v0, v1)) != edges_.end();
}
protected:
void get_sorted_verts(int& v0, int& v1) {
if(v0 > v1) {
swap(v0, v1);
}
}
set< std::pair<int, int> > edges_;
};
CCL_NAMESPACE_END
#endif /* __BLENDER_UTIL_H__ */

View File

@@ -81,6 +81,7 @@ void BVH::build(Progress& progress)
pack.prim_type,
pack.prim_index,
pack.prim_object,
pack.prim_time,
params,
progress);
BVHNode *root = bvh_build.run();
@@ -256,6 +257,10 @@ void BVH::pack_instances(size_t nodes_size, size_t leaf_nodes_size)
pack.leaf_nodes.resize(leaf_nodes_size);
pack.object_node.resize(objects.size());
if(params.num_motion_curve_steps > 0 || params.num_motion_triangle_steps > 0) {
pack.prim_time.resize(prim_index_size);
}
int *pack_prim_index = (pack.prim_index.size())? &pack.prim_index[0]: NULL;
int *pack_prim_type = (pack.prim_type.size())? &pack.prim_type[0]: NULL;
int *pack_prim_object = (pack.prim_object.size())? &pack.prim_object[0]: NULL;
@@ -264,6 +269,7 @@ void BVH::pack_instances(size_t nodes_size, size_t leaf_nodes_size)
uint *pack_prim_tri_index = (pack.prim_tri_index.size())? &pack.prim_tri_index[0]: NULL;
int4 *pack_nodes = (pack.nodes.size())? &pack.nodes[0]: NULL;
int4 *pack_leaf_nodes = (pack.leaf_nodes.size())? &pack.leaf_nodes[0]: NULL;
float2 *pack_prim_time = (pack.prim_time.size())? &pack.prim_time[0]: NULL;
/* merge */
foreach(Object *ob, objects) {
@@ -309,6 +315,7 @@ void BVH::pack_instances(size_t nodes_size, size_t leaf_nodes_size)
int *bvh_prim_type = &bvh->pack.prim_type[0];
uint *bvh_prim_visibility = &bvh->pack.prim_visibility[0];
uint *bvh_prim_tri_index = &bvh->pack.prim_tri_index[0];
float2 *bvh_prim_time = bvh->pack.prim_time.size()? &bvh->pack.prim_time[0]: NULL;
for(size_t i = 0; i < bvh_prim_index_size; i++) {
if(bvh->pack.prim_type[i] & PRIMITIVE_ALL_CURVE) {
@@ -324,6 +331,9 @@ void BVH::pack_instances(size_t nodes_size, size_t leaf_nodes_size)
pack_prim_type[pack_prim_index_offset] = bvh_prim_type[i];
pack_prim_visibility[pack_prim_index_offset] = bvh_prim_visibility[i];
pack_prim_object[pack_prim_index_offset] = 0; // unused for instances
if(bvh_prim_time != NULL) {
pack_prim_time[pack_prim_index_offset] = bvh_prim_time[i];
}
pack_prim_index_offset++;
}
}

View File

@@ -68,6 +68,8 @@ struct PackedBVH {
array<int> prim_index;
/* mapping from BVH primitive index, to the object id of that primitive. */
array<int> prim_object;
/* Time range of BVH primitive. */
array<float2> prim_time;
/* index of the root node. */
int root_index;

View File

@@ -93,12 +93,14 @@ BVHBuild::BVHBuild(const vector<Object*>& objects_,
array<int>& prim_type_,
array<int>& prim_index_,
array<int>& prim_object_,
array<float2>& prim_time_,
const BVHParams& params_,
Progress& progress_)
: objects(objects_),
prim_type(prim_type_),
prim_index(prim_index_),
prim_object(prim_object_),
prim_time(prim_time_),
params(params_),
progress(progress_),
progress_start_time(0.0),
@@ -465,6 +467,9 @@ BVHNode* BVHBuild::run()
}
spatial_free_index = 0;
need_prim_time = params.num_motion_curve_steps > 0 ||
params.num_motion_triangle_steps > 0;
/* init progress updates */
double build_start_time;
build_start_time = progress_start_time = time_dt();
@@ -475,6 +480,12 @@ BVHNode* BVHBuild::run()
prim_type.resize(references.size());
prim_index.resize(references.size());
prim_object.resize(references.size());
if(need_prim_time) {
prim_time.resize(references.size());
}
else {
prim_time.resize(0);
}
/* build recursively */
BVHNode *rootnode;
@@ -849,6 +860,9 @@ BVHNode *BVHBuild::create_object_leaf_nodes(const BVHReference *ref, int start,
prim_type[start] = ref->prim_type();
prim_index[start] = ref->prim_index();
prim_object[start] = ref->prim_object();
if(need_prim_time) {
prim_time[start] = make_float2(ref->time_from(), ref->time_to());
}
uint visibility = objects[ref->prim_object()]->visibility;
BVHNode *leaf_node = new LeafNode(ref->bounds(), visibility, start, start+1);
@@ -891,11 +905,13 @@ BVHNode* BVHBuild::create_leaf_node(const BVHRange& range,
* can not control.
*/
typedef StackAllocator<256, int> LeafStackAllocator;
typedef StackAllocator<256, float2> LeafTimeStackAllocator;
typedef StackAllocator<256, BVHReference> LeafReferenceStackAllocator;
vector<int, LeafStackAllocator> p_type[PRIMITIVE_NUM_TOTAL];
vector<int, LeafStackAllocator> p_index[PRIMITIVE_NUM_TOTAL];
vector<int, LeafStackAllocator> p_object[PRIMITIVE_NUM_TOTAL];
vector<float2, LeafTimeStackAllocator> p_time[PRIMITIVE_NUM_TOTAL];
vector<BVHReference, LeafReferenceStackAllocator> p_ref[PRIMITIVE_NUM_TOTAL];
/* TODO(sergey): In theory we should be able to store references. */
@@ -918,6 +934,8 @@ BVHNode* BVHBuild::create_leaf_node(const BVHRange& range,
p_type[type_index].push_back(ref.prim_type());
p_index[type_index].push_back(ref.prim_index());
p_object[type_index].push_back(ref.prim_object());
p_time[type_index].push_back(make_float2(ref.time_from(),
ref.time_to()));
bounds[type_index].grow(ref.bounds());
visibility[type_index] |= objects[ref.prim_object()]->visibility;
@@ -947,9 +965,13 @@ BVHNode* BVHBuild::create_leaf_node(const BVHRange& range,
vector<int, LeafStackAllocator> local_prim_type,
local_prim_index,
local_prim_object;
vector<float2, LeafTimeStackAllocator> local_prim_time;
local_prim_type.resize(num_new_prims);
local_prim_index.resize(num_new_prims);
local_prim_object.resize(num_new_prims);
if(need_prim_time) {
local_prim_time.resize(num_new_prims);
}
for(int i = 0; i < PRIMITIVE_NUM_TOTAL; ++i) {
int num = (int)p_type[i].size();
if(num != 0) {
@@ -962,6 +984,9 @@ BVHNode* BVHBuild::create_leaf_node(const BVHRange& range,
local_prim_type[index] = p_type[i][j];
local_prim_index[index] = p_index[i][j];
local_prim_object[index] = p_object[i][j];
if(need_prim_time) {
local_prim_time[index] = p_time[i][j];
}
if(params.use_unaligned_nodes && !alignment_found) {
alignment_found =
unaligned_heuristic.compute_aligned_space(p_ref[i][j],
@@ -1028,11 +1053,17 @@ BVHNode* BVHBuild::create_leaf_node(const BVHRange& range,
prim_type.reserve(reserve);
prim_index.reserve(reserve);
prim_object.reserve(reserve);
if(need_prim_time) {
prim_time.reserve(reserve);
}
}
prim_type.resize(range_end);
prim_index.resize(range_end);
prim_object.resize(range_end);
if(need_prim_time) {
prim_time.resize(range_end);
}
}
spatial_spin_lock.unlock();
@@ -1041,6 +1072,9 @@ BVHNode* BVHBuild::create_leaf_node(const BVHRange& range,
memcpy(&prim_type[start_index], &local_prim_type[0], new_leaf_data_size);
memcpy(&prim_index[start_index], &local_prim_index[0], new_leaf_data_size);
memcpy(&prim_object[start_index], &local_prim_object[0], new_leaf_data_size);
if(need_prim_time) {
memcpy(&prim_time[start_index], &local_prim_time[0], sizeof(float2)*num_new_leaf_data);
}
}
}
else {
@@ -1053,6 +1087,9 @@ BVHNode* BVHBuild::create_leaf_node(const BVHRange& range,
memcpy(&prim_type[start_index], &local_prim_type[0], new_leaf_data_size);
memcpy(&prim_index[start_index], &local_prim_index[0], new_leaf_data_size);
memcpy(&prim_object[start_index], &local_prim_object[0], new_leaf_data_size);
if(need_prim_time) {
memcpy(&prim_time[start_index], &local_prim_time[0], sizeof(float2)*num_new_leaf_data);
}
}
}

View File

@@ -48,6 +48,7 @@ public:
array<int>& prim_type,
array<int>& prim_index,
array<int>& prim_object,
array<float2>& prim_time,
const BVHParams& params,
Progress& progress);
~BVHBuild();
@@ -112,6 +113,9 @@ protected:
array<int>& prim_type;
array<int>& prim_index;
array<int>& prim_object;
array<float2>& prim_time;
bool need_prim_time;
/* Build parameters. */
BVHParams params;

View File

@@ -104,6 +104,7 @@ public:
primitive_mask = PRIMITIVE_ALL;
num_motion_curve_steps = 0;
num_motion_triangle_steps = 0;
}
/* SAH costs */

View File

@@ -15,6 +15,7 @@
*/
#include <climits>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

View File

@@ -162,6 +162,7 @@ public:
void mem_free(device_memory& mem)
{
device_ptr tmp = mem.device_pointer;
stats.mem_free(mem.device_size);
foreach(SubDevice& sub, devices) {
mem.device_pointer = sub.ptr_map[tmp];
@@ -170,7 +171,6 @@ public:
}
mem.device_pointer = 0;
stats.mem_free(mem.device_size);
}
void const_copy_to(const char *name, void *host, size_t size)
@@ -202,6 +202,7 @@ public:
void tex_free(device_memory& mem)
{
device_ptr tmp = mem.device_pointer;
stats.mem_free(mem.device_size);
foreach(SubDevice& sub, devices) {
mem.device_pointer = sub.ptr_map[tmp];
@@ -210,7 +211,6 @@ public:
}
mem.device_pointer = 0;
stats.mem_free(mem.device_size);
}
void pixels_alloc(device_memory& mem)

View File

@@ -309,6 +309,8 @@ bool OpenCLDeviceBase::OpenCLProgram::build_kernel(const string *debug_src)
string build_options;
build_options = device->kernel_build_options(debug_src) + kernel_build_options;
VLOG(1) << "Build options passed to clBuildProgram: '"
<< build_options << "'.";
cl_int ciErr = clBuildProgram(program, 0, NULL, build_options.c_str(), NULL, NULL);
/* show warnings even if build is successful */

View File

@@ -357,7 +357,7 @@ ccl_device_inline float3 ray_offset(float3 P, float3 Ng)
#endif
}
#if defined(__SHADOW_RECORD_ALL__) || defined (__VOLUME_RECORD_ALL__)
#if defined(__VOLUME_RECORD_ALL__) || (defined(__SHADOW_RECORD_ALL__) && defined(__KERNEL_CPU__))
/* ToDo: Move to another file? */
ccl_device int intersections_compare(const void *a, const void *b)
{
@@ -373,5 +373,28 @@ ccl_device int intersections_compare(const void *a, const void *b)
}
#endif
CCL_NAMESPACE_END
#if defined(__SHADOW_RECORD_ALL__)
ccl_device_inline void sort_intersections(Intersection *hits, uint num_hits)
{
#ifdef __KERNEL_GPU__
/* Use bubble sort which has more friendly memory pattern on GPU. */
bool swapped;
do {
swapped = false;
for(int j = 0; j < num_hits - 1; ++j) {
if(hits[j].t > hits[j + 1].t) {
struct Intersection tmp = hits[j];
hits[j] = hits[j + 1];
hits[j + 1] = tmp;
swapped = true;
}
}
--num_hits;
} while(swapped);
#else
qsort(hits, num_hits, sizeof(Intersection), intersections_compare);
#endif
}
#endif /* __SHADOW_RECORD_ALL__ | __VOLUME_RECORD_ALL__ */
CCL_NAMESPACE_END

View File

@@ -229,6 +229,15 @@ ccl_device_forceinline bool bvh_cardinal_curve_intersect(KernelGlobals *kg, Inte
float3 P, float3 dir, uint visibility, int object, int curveAddr, float time,int type, uint *lcg_state, float difl, float extmax)
#endif
{
const bool is_curve_primitive = (type & PRIMITIVE_CURVE);
if(!is_curve_primitive && kernel_data.bvh.use_bvh_steps) {
const float2 prim_time = kernel_tex_fetch(__prim_time, curveAddr);
if(time < prim_time.x || time > prim_time.y) {
return false;
}
}
int segment = PRIMITIVE_UNPACK_SEGMENT(type);
float epsilon = 0.0f;
float r_st, r_en;
@@ -257,7 +266,7 @@ ccl_device_forceinline bool bvh_cardinal_curve_intersect(KernelGlobals *kg, Inte
#ifdef __KERNEL_AVX2__
avxf P_curve_0_1, P_curve_2_3;
if(type & PRIMITIVE_CURVE) {
if(is_curve_primitive) {
P_curve_0_1 = _mm256_loadu2_m128(&kg->__curve_keys.data[k0].x, &kg->__curve_keys.data[ka].x);
P_curve_2_3 = _mm256_loadu2_m128(&kg->__curve_keys.data[kb].x, &kg->__curve_keys.data[k1].x);
}
@@ -268,7 +277,7 @@ ccl_device_forceinline bool bvh_cardinal_curve_intersect(KernelGlobals *kg, Inte
#else /* __KERNEL_AVX2__ */
ssef P_curve[4];
if(type & PRIMITIVE_CURVE) {
if(is_curve_primitive) {
P_curve[0] = load4f(&kg->__curve_keys.data[ka].x);
P_curve[1] = load4f(&kg->__curve_keys.data[k0].x);
P_curve[2] = load4f(&kg->__curve_keys.data[k1].x);
@@ -363,7 +372,7 @@ ccl_device_forceinline bool bvh_cardinal_curve_intersect(KernelGlobals *kg, Inte
float4 P_curve[4];
if(type & PRIMITIVE_CURVE) {
if(is_curve_primitive) {
P_curve[0] = kernel_tex_fetch(__curve_keys, ka);
P_curve[1] = kernel_tex_fetch(__curve_keys, k0);
P_curve[2] = kernel_tex_fetch(__curve_keys, k1);
@@ -689,6 +698,15 @@ ccl_device_forceinline bool bvh_curve_intersect(KernelGlobals *kg, Intersection
# define dot3(x, y) dot(x, y)
#endif
const bool is_curve_primitive = (type & PRIMITIVE_CURVE);
if(!is_curve_primitive && kernel_data.bvh.use_bvh_steps) {
const float2 prim_time = kernel_tex_fetch(__prim_time, curveAddr);
if(time < prim_time.x || time > prim_time.y) {
return false;
}
}
int segment = PRIMITIVE_UNPACK_SEGMENT(type);
/* curve Intersection check */
int flags = kernel_data.curve.curveflags;
@@ -703,7 +721,7 @@ ccl_device_forceinline bool bvh_curve_intersect(KernelGlobals *kg, Intersection
#ifndef __KERNEL_SSE2__
float4 P_curve[2];
if(type & PRIMITIVE_CURVE) {
if(is_curve_primitive) {
P_curve[0] = kernel_tex_fetch(__curve_keys, k0);
P_curve[1] = kernel_tex_fetch(__curve_keys, k1);
}
@@ -738,7 +756,7 @@ ccl_device_forceinline bool bvh_curve_intersect(KernelGlobals *kg, Intersection
#else
ssef P_curve[2];
if(type & PRIMITIVE_CURVE) {
if(is_curve_primitive) {
P_curve[0] = load4f(&kg->__curve_keys.data[k0].x);
P_curve[1] = load4f(&kg->__curve_keys.data[k1].x);
}

View File

@@ -54,7 +54,8 @@ ccl_device_inline void compute_light_pass(KernelGlobals *kg,
float rbsdf = path_state_rng_1D(kg, &rng, &state, PRNG_BSDF);
shader_eval_surface(kg, sd, &rng, &state, rbsdf, state.flag, SHADER_CONTEXT_MAIN);
/* TODO, disable the closures we won't need */
/* TODO, disable more closures we don't need besides transparent */
shader_bsdf_disable_transparency(kg, sd);
#ifdef __BRANCHED_PATH__
if(!kernel_data.integrator.branched) {

View File

@@ -76,7 +76,10 @@ typedef struct KernelGlobals {
#ifdef __KERNEL_CUDA__
__constant__ KernelData __data;
typedef struct KernelGlobals {} KernelGlobals;
typedef struct KernelGlobals {
/* NOTE: Keep the size in sync with SHADOW_STACK_MAX_HITS. */
Intersection hits_stack[64];
} KernelGlobals;
# ifdef __KERNEL_CUDA_TEX_STORAGE__
# define KERNEL_TEX(type, ttype, name) ttype name;

View File

@@ -109,6 +109,10 @@ ccl_device void kernel_path_indirect(KernelGlobals *kg,
/* intersect scene */
Intersection isect;
uint visibility = path_state_ray_visibility(kg, state);
if(state->bounce > kernel_data.integrator.ao_bounces) {
visibility = PATH_RAY_SHADOW;
ray->t = kernel_data.background.ao_distance;
}
bool hit = scene_intersect(kg,
*ray,
visibility,
@@ -292,6 +296,9 @@ ccl_device void kernel_path_indirect(KernelGlobals *kg,
break;
}
else if(state->bounce > kernel_data.integrator.ao_bounces) {
break;
}
/* setup shading */
shader_setup_from_ray(kg,
@@ -627,6 +634,11 @@ ccl_device_inline float4 kernel_path_integrate(KernelGlobals *kg,
lcg_state = lcg_state_init(rng, &state, 0x51633e2d);
}
if(state.bounce > kernel_data.integrator.ao_bounces) {
visibility = PATH_RAY_SHADOW;
ray.t = kernel_data.background.ao_distance;
}
bool hit = scene_intersect(kg, ray, visibility, &isect, &lcg_state, difl, extmax);
#else
bool hit = scene_intersect(kg, ray, visibility, &isect, NULL, 0.0f, 0.0f);
@@ -769,6 +781,9 @@ ccl_device_inline float4 kernel_path_integrate(KernelGlobals *kg,
break;
}
else if(state.bounce > kernel_data.integrator.ao_bounces) {
break;
}
/* setup shading */
shader_setup_from_ray(kg, &sd, &isect, &ray);

View File

@@ -30,7 +30,7 @@ ccl_device_inline void kernel_path_trace_setup(KernelGlobals *kg,
int num_samples = kernel_data.integrator.aa_samples;
if(sample == 0) {
if(sample == kernel_data.integrator.start_sample) {
*rng_state = hash_int_2d(x, y);
}

View File

@@ -685,6 +685,18 @@ ccl_device float3 shader_bsdf_transparency(KernelGlobals *kg, ShaderData *sd)
return eval;
}
ccl_device void shader_bsdf_disable_transparency(KernelGlobals *kg, ShaderData *sd)
{
for(int i = 0; i < ccl_fetch(sd, num_closure); i++) {
ShaderClosure *sc = ccl_fetch_array(sd, closure, i);
if(sc->type == CLOSURE_BSDF_TRANSPARENT_ID) {
sc->sample_weight = 0.0f;
sc->weight = make_float3(0.0f, 0.0f, 0.0f);
}
}
}
ccl_device float3 shader_bsdf_alpha(KernelGlobals *kg, ShaderData *sd)
{
float3 alpha = make_float3(1.0f, 1.0f, 1.0f) - shader_bsdf_transparency(kg, sd);

View File

@@ -16,9 +16,84 @@
CCL_NAMESPACE_BEGIN
#ifdef __SHADOW_RECORD_ALL__
/* Attenuate throughput accordingly to the given intersection event.
* Returns true if the throughput is zero and traversal can be aborted.
*/
ccl_device_forceinline bool shadow_handle_transparent_isect(
KernelGlobals *kg,
ShaderData *shadow_sd,
ccl_addr_space PathState *state,
# ifdef __VOLUME__
struct PathState *volume_state,
# endif
Intersection *isect,
Ray *ray,
float3 *throughput)
{
#ifdef __VOLUME__
/* Attenuation between last surface and next surface. */
if(volume_state->volume_stack[0].shader != SHADER_NONE) {
Ray segment_ray = *ray;
segment_ray.t = isect->t;
kernel_volume_shadow(kg,
shadow_sd,
volume_state,
&segment_ray,
throughput);
}
#endif
/* Setup shader data at surface. */
shader_setup_from_ray(kg, shadow_sd, isect, ray);
/* Attenuation from transparent surface. */
if(!(ccl_fetch(shadow_sd, flag) & SD_HAS_ONLY_VOLUME)) {
path_state_modify_bounce(state, true);
shader_eval_surface(kg,
shadow_sd,
NULL,
state,
0.0f,
PATH_RAY_SHADOW,
SHADER_CONTEXT_SHADOW);
path_state_modify_bounce(state, false);
*throughput *= shader_bsdf_transparency(kg, shadow_sd);
}
/* Stop if all light is blocked. */
if(is_zero(*throughput)) {
return true;
}
#ifdef __VOLUME__
/* Exit/enter volume. */
kernel_volume_stack_enter_exit(kg, shadow_sd, volume_state->volume_stack);
#endif
return false;
}
/* Shadow function to compute how much light is blocked, CPU variation.
/* Special version which only handles opaque shadows. */
ccl_device bool shadow_blocked_opaque(KernelGlobals *kg,
ShaderData *shadow_sd,
ccl_addr_space PathState *state,
Ray *ray,
Intersection *isect,
float3 *shadow)
{
const bool blocked = scene_intersect(kg,
*ray,
PATH_RAY_SHADOW_OPAQUE,
isect,
NULL,
0.0f, 0.0f);
#ifdef __VOLUME__
if(!blocked && state->volume_stack[0].shader != SHADER_NONE) {
/* Apply attenuation from current volume shader. */
kernel_volume_shadow(kg, shadow_sd, state, ray, shadow);
}
#endif
return blocked;
}
#ifdef __TRANSPARENT_SHADOWS__
# ifdef __SHADOW_RECORD_ALL__
/* Shadow function to compute how much light is blocked,
*
* We trace a single ray. If it hits any opaque surface, or more than a given
* number of transparent surfaces is hit, then we consider the geometry to be
@@ -36,261 +111,355 @@ CCL_NAMESPACE_BEGIN
* or there is a performance increase anyway due to avoiding the need to send
* two rays with transparent shadows.
*
* This is CPU only because of qsort, and malloc or high stack space usage to
* record all these intersections. */
* On CPU it'll handle all transparent bounces (by allocating storage for
* intersections when they don't fit into the stack storage).
*
* On GPU it'll only handle SHADOW_STACK_MAX_HITS-1 intersections, so this
* is something to be kept an eye on.
*/
#define STACK_MAX_HITS 64
# define SHADOW_STACK_MAX_HITS 64
ccl_device_inline bool shadow_blocked(KernelGlobals *kg, ShaderData *shadow_sd, PathState *state, Ray *ray, float3 *shadow)
/* Actual logic with traversal loop implementation which is free from device
* specific tweaks.
*
* Note that hits array should be as big as max_hits+1.
*/
ccl_device bool shadow_blocked_transparent_all_loop(KernelGlobals *kg,
ShaderData *shadow_sd,
ccl_addr_space PathState *state,
Ray *ray,
Intersection *hits,
uint max_hits,
float3 *shadow)
{
*shadow = make_float3(1.0f, 1.0f, 1.0f);
if(ray->t == 0.0f)
return false;
bool blocked;
if(kernel_data.integrator.transparent_shadows) {
/* check transparent bounces here, for volume scatter which can do
* lighting before surface path termination is checked */
if(state->transparent_bounce >= kernel_data.integrator.transparent_max_bounce)
return true;
/* intersect to find an opaque surface, or record all transparent surface hits */
Intersection hits_stack[STACK_MAX_HITS];
Intersection *hits = hits_stack;
const int transparent_max_bounce = kernel_data.integrator.transparent_max_bounce;
uint max_hits = transparent_max_bounce - state->transparent_bounce - 1;
/* prefer to use stack but use dynamic allocation if too deep max hits
* we need max_hits + 1 storage space due to the logic in
* scene_intersect_shadow_all which will first store and then check if
* the limit is exceeded */
if(max_hits + 1 > STACK_MAX_HITS) {
if(kg->transparent_shadow_intersections == NULL) {
kg->transparent_shadow_intersections =
(Intersection*)malloc(sizeof(Intersection)*(transparent_max_bounce + 1));
}
hits = kg->transparent_shadow_intersections;
}
/* Intersect to find an opaque surface, or record all transparent
* surface hits.
*/
uint num_hits;
const bool blocked = scene_intersect_shadow_all(kg,
ray,
hits,
max_hits,
&num_hits);
/* If no opaque surface found but we did find transparent hits,
* shade them.
*/
if(!blocked && num_hits > 0) {
float3 throughput = make_float3(1.0f, 1.0f, 1.0f);
float3 Pend = ray->P + ray->D*ray->t;
float last_t = 0.0f;
int bounce = state->transparent_bounce;
Intersection *isect = hits;
# ifdef __VOLUME__
PathState ps = *state;
# endif
sort_intersections(hits, num_hits);
for(int hit = 0; hit < num_hits; hit++, isect++) {
/* Adjust intersection distance for moving ray forward. */
float new_t = isect->t;
isect->t -= last_t;
/* Skip hit if we did not move forward, step by step raytracing
* would have skipped it as well then.
*/
if(last_t == new_t) {
continue;
}
last_t = new_t;
/* Attenuate the throughput. */
if(shadow_handle_transparent_isect(kg,
shadow_sd,
state,
# ifdef __VOLUME__
&ps,
# endif
isect,
ray,
&throughput))
{
return true;
}
/* Move ray forward. */
ray->P = ccl_fetch(shadow_sd, P);
if(ray->t != FLT_MAX) {
ray->D = normalize_len(Pend - ray->P, &ray->t);
}
bounce++;
}
# ifdef __VOLUME__
/* Attenuation for last line segment towards light. */
if(ps.volume_stack[0].shader != SHADER_NONE) {
kernel_volume_shadow(kg, shadow_sd, &ps, ray, &throughput);
}
# endif
*shadow = throughput;
return is_zero(throughput);
}
# ifdef __VOLUME__
if(!blocked && state->volume_stack[0].shader != SHADER_NONE) {
/* Apply attenuation from current volume shader. */
kernel_volume_shadow(kg, shadow_sd, state, ray, shadow);
}
# endif
return blocked;
}
#undef STACK_MAX_HITS
/* Here we do all device-specific trickery before invoking the actual traversal
* loop, to keep the core logic readable.
*/
ccl_device bool shadow_blocked_transparent_all(KernelGlobals *kg,
ShaderData *shadow_sd,
ccl_addr_space PathState *state,
Ray *ray,
uint max_hits,
float3 *shadow)
{
# ifdef __KERNEL_CUDA__
Intersection *hits = kg->hits_stack;
# else
Intersection hits_stack[SHADOW_STACK_MAX_HITS];
Intersection *hits = hits_stack;
# endif
# ifndef __KERNEL_GPU__
/* Prefer to use the stack, but fall back to dynamic allocation when max_hits
* is too deep. We need max_hits + 1 storage slots due to the logic in
* scene_intersect_shadow_all, which first stores a hit and only then checks
* whether the limit is exceeded.
*
* Ignore this on GPU because of slow/unavailable malloc().
*/
if(max_hits + 1 > SHADOW_STACK_MAX_HITS) {
if(kg->transparent_shadow_intersections == NULL) {
const int transparent_max_bounce = kernel_data.integrator.transparent_max_bounce;
kg->transparent_shadow_intersections =
(Intersection*)malloc(sizeof(Intersection)*(transparent_max_bounce + 1));
}
hits = kg->transparent_shadow_intersections;
}
# endif /* __KERNEL_GPU__ */
/* Invoke actual traversal. */
return shadow_blocked_transparent_all_loop(kg,
shadow_sd,
state,
ray,
hits,
max_hits,
shadow);
}
# endif /* __SHADOW_RECORD_ALL__ */
# ifdef __KERNEL_GPU__
/* Shadow function to compute how much light is blocked,
*
* Here we raytrace from one transparent surface to the next step by step.
* To minimize overhead in cases where we don't need transparent shadows, we
* first trace a regular shadow ray. We check if the hit primitive was
* potentially transparent, and only in that case start marching. This gives
* one extra ray cast for the cases where we do want transparency.
*/
/* This function is only implementing device-independent traversal logic
* which requires some precalculation done.
*/
ccl_device bool shadow_blocked_transparent_stepped_loop(
KernelGlobals *kg,
ShaderData *shadow_sd,
ccl_addr_space PathState *state,
Ray *ray,
Intersection *isect,
const bool blocked,
const bool is_transparent_isect,
float3 *shadow)
{
if(blocked && is_transparent_isect) {
float3 throughput = make_float3(1.0f, 1.0f, 1.0f);
float3 Pend = ray->P + ray->D*ray->t;
int bounce = state->transparent_bounce;
# ifdef __VOLUME__
PathState ps = *state;
# endif
for(;;) {
if(bounce >= kernel_data.integrator.transparent_max_bounce) {
return true;
}
if(!scene_intersect(kg,
*ray,
PATH_RAY_SHADOW_TRANSPARENT,
isect,
NULL,
0.0f, 0.0f))
{
break;
}
if(!shader_transparent_shadow(kg, isect)) {
return true;
}
/* Attenuate the throughput. */
if(shadow_handle_transparent_isect(kg,
shadow_sd,
state,
# ifdef __VOLUME__
&ps,
# endif
isect,
ray,
&throughput))
{
return true;
}
/* Move ray forward. */
ray->P = ray_offset(ccl_fetch(shadow_sd, P), -ccl_fetch(shadow_sd, Ng));
if(ray->t != FLT_MAX) {
ray->D = normalize_len(Pend - ray->P, &ray->t);
}
bounce++;
}
# ifdef __VOLUME__
/* Attenuation for last line segment towards light. */
if(ps.volume_stack[0].shader != SHADER_NONE) {
kernel_volume_shadow(kg, shadow_sd, &ps, ray, &throughput);
}
# endif
*shadow *= throughput;
return is_zero(throughput);
}
# ifdef __VOLUME__
if(!blocked && state->volume_stack[0].shader != SHADER_NONE) {
/* Apply attenuation from current volume shader. */
kernel_volume_shadow(kg, shadow_sd, state, ray, shadow);
}
# endif
return blocked;
}
ccl_device bool shadow_blocked_transparent_stepped(
KernelGlobals *kg,
ShaderData *shadow_sd,
ccl_addr_space PathState *state,
Ray *ray,
Intersection *isect,
float3 *shadow)
{
const bool blocked = scene_intersect(kg,
*ray,
PATH_RAY_SHADOW_OPAQUE,
isect,
NULL,
0.0f, 0.0f);
const bool is_transparent_isect = blocked
? shader_transparent_shadow(kg, isect)
: false;
return shadow_blocked_transparent_stepped_loop(kg,
shadow_sd,
state,
ray,
isect,
blocked,
is_transparent_isect,
shadow);
}
# endif /* __KERNEL_GPU__ */
#endif /* __TRANSPARENT_SHADOWS__ */
ccl_device_inline bool shadow_blocked(KernelGlobals *kg,
ShaderData *shadow_sd,
ccl_addr_space PathState *state,
ccl_addr_space Ray *ray_input,
float3 *shadow)
{
/* Special trickery for split kernel: some data is coming from the
* global memory.
*/
#ifdef __SPLIT_KERNEL__
Ray private_ray = *ray_input;
Ray *ray = &private_ray;
Intersection *isect = &kg->isect_shadow[SD_THREAD];
#else /* __SPLIT_KERNEL__ */
Ray *ray = ray_input;
Intersection isect_object;
Intersection *isect = &isect_object;
#endif /* __SPLIT_KERNEL__ */
/* Some common early checks. */
*shadow = make_float3(1.0f, 1.0f, 1.0f);
if(ray->t == 0.0f) {
return false;
}
/* Do actual shadow shading. */
/* First of all, we check if the integrator requires transparent shadows.
* If not, we use the simplest and fastest way to calculate occlusion.
*/
#ifdef __TRANSPARENT_SHADOWS__
if(!kernel_data.integrator.transparent_shadows)
#endif
{
return shadow_blocked_opaque(kg,
shadow_sd,
state,
ray,
isect,
shadow);
}
#ifdef __TRANSPARENT_SHADOWS__
# ifdef __SHADOW_RECORD_ALL__
/* For transparent shadows we try to use the record-all logic on the
* devices which support it.
*/
const int transparent_max_bounce = kernel_data.integrator.transparent_max_bounce;
/* Check transparent bounces here, for volume scatter which can do
* lighting before surface path termination is checked.
*/
if(state->transparent_bounce >= transparent_max_bounce) {
return true;
}
const uint max_hits = transparent_max_bounce - state->transparent_bounce - 1;
# ifdef __KERNEL_GPU__
/* On GPU we do a trick of tracing an opaque ray first; this avoids speed
* regressions in some files.
*
* TODO(sergey): Check why using record-all behavior causes slowdown in such
* cases. Could that be caused by a higher spill pressure?
*/
const bool blocked = scene_intersect(kg,
*ray,
PATH_RAY_SHADOW_OPAQUE,
isect,
NULL,
0.0f, 0.0f);
const bool is_transparent_isect = blocked
? shader_transparent_shadow(kg, isect)
: false;
if(!blocked || !is_transparent_isect ||
max_hits + 1 >= SHADOW_STACK_MAX_HITS)
{
return shadow_blocked_transparent_stepped_loop(kg,
shadow_sd,
state,
ray,
isect,
blocked,
is_transparent_isect,
shadow);
}
# endif /* __KERNEL_GPU__ */
return shadow_blocked_transparent_all(kg,
shadow_sd,
state,
ray,
max_hits,
shadow);
# else /* __SHADOW_RECORD_ALL__ */
/* Fall back to the slowest version, which works on all devices. */
return shadow_blocked_transparent_stepped(kg,
shadow_sd,
state,
ray,
isect,
shadow);
# endif /* __SHADOW_RECORD_ALL__ */
#endif /* __TRANSPARENT_SHADOWS__ */
}
#undef SHADOW_STACK_MAX_HITS
CCL_NAMESPACE_END
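To put the dispatcher in context: shadow_blocked() above is the single entry point the path kernels use after sampling a light, with the opaque, record-all and stepped variants hidden behind it. A minimal call-site sketch follows — it is not taken from this diff, and light_ray, L_light, throughput, L and emission_sd are placeholder names for the usual direct-lighting variables:
/* Sketch only: attenuate the sampled light contribution by the per-channel
* transparency returned through 'shadow'. */
float3 shadow;
if(!shadow_blocked(kg, emission_sd, state, &light_ray, &shadow)) {
L += throughput * L_light * shadow;
}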

View File

@@ -32,6 +32,7 @@ KERNEL_TEX(uint, texture_uint, __prim_visibility)
KERNEL_TEX(uint, texture_uint, __prim_index)
KERNEL_TEX(uint, texture_uint, __prim_object)
KERNEL_TEX(uint, texture_uint, __object_node)
KERNEL_TEX(float2, texture_float2, __prim_time)
/* objects */
KERNEL_TEX(float4, texture_float4, __objects)
@@ -177,7 +178,6 @@ KERNEL_IMAGE_TEX(uchar4, texture_image_uchar4, __tex_image_byte4_085)
KERNEL_IMAGE_TEX(uchar4, texture_image_uchar4, __tex_image_byte4_086)
KERNEL_IMAGE_TEX(uchar4, texture_image_uchar4, __tex_image_byte4_087)
KERNEL_IMAGE_TEX(uchar4, texture_image_uchar4, __tex_image_byte4_088)
KERNEL_IMAGE_TEX(uchar4, texture_image_uchar4, __tex_image_byte4_089)
# else
/* bindless textures */

View File

@@ -84,6 +84,7 @@ CCL_NAMESPACE_BEGIN
# define __VOLUME_SCATTER__
# define __SUBSURFACE__
# define __CMJ__
# define __SHADOW_RECORD_ALL__
#endif /* __KERNEL_CUDA__ */
#ifdef __KERNEL_OPENCL__
@@ -1143,6 +1144,8 @@ typedef struct KernelIntegrator {
int max_transmission_bounce;
int max_volume_bounce;
int ao_bounces;
/* transparent */
int transparent_min_bounce;
int transparent_max_bounce;
@@ -1186,7 +1189,8 @@ typedef struct KernelIntegrator {
float light_inv_rr_threshold;
int pad1;
int start_sample;
int pad1, pad2, pad3;
} KernelIntegrator;
static_assert_align(KernelIntegrator, 16);
@@ -1198,7 +1202,8 @@ typedef struct KernelBVH {
int have_curves;
int have_instancing;
int use_qbvh;
int pad1, pad2;
int use_bvh_steps;
int pad1;
} KernelBVH;
static_assert_align(KernelBVH, 16);

View File

@@ -966,7 +966,7 @@ ccl_device VolumeIntegrateResult kernel_volume_decoupled_scatter(
mis_weight = 2.0f*power_heuristic(pdf, distance_pdf);
}
}
if(sample_t < 1e-6f) {
if(sample_t < 1e-6f || pdf == 0.0f) {
return VOLUME_PATH_SCATTERED;
}

View File

@@ -130,8 +130,10 @@ kernel_cuda_path_trace(float *buffer, uint *rng_state, int sample, int sx, int s
int x = sx + blockDim.x*blockIdx.x + threadIdx.x;
int y = sy + blockDim.y*blockIdx.y + threadIdx.y;
if(x < sx + sw && y < sy + sh)
kernel_path_trace(NULL, buffer, rng_state, sample, x, y, offset, stride);
if(x < sx + sw && y < sy + sh) {
KernelGlobals kg;
kernel_path_trace(&kg, buffer, rng_state, sample, x, y, offset, stride);
}
}
#ifdef __BRANCHED_PATH__
@@ -142,8 +144,10 @@ kernel_cuda_branched_path_trace(float *buffer, uint *rng_state, int sample, int
int x = sx + blockDim.x*blockIdx.x + threadIdx.x;
int y = sy + blockDim.y*blockIdx.y + threadIdx.y;
if(x < sx + sw && y < sy + sh)
kernel_branched_path_trace(NULL, buffer, rng_state, sample, x, y, offset, stride);
if(x < sx + sw && y < sy + sh) {
KernelGlobals kg;
kernel_branched_path_trace(&kg, buffer, rng_state, sample, x, y, offset, stride);
}
}
#endif
@@ -154,8 +158,9 @@ kernel_cuda_convert_to_byte(uchar4 *rgba, float *buffer, float sample_scale, int
int x = sx + blockDim.x*blockIdx.x + threadIdx.x;
int y = sy + blockDim.y*blockIdx.y + threadIdx.y;
if(x < sx + sw && y < sy + sh)
if(x < sx + sw && y < sy + sh) {
kernel_film_convert_to_byte(NULL, rgba, buffer, sample_scale, x, y, offset, stride);
}
}
extern "C" __global__ void
@@ -165,8 +170,9 @@ kernel_cuda_convert_to_half_float(uchar4 *rgba, float *buffer, float sample_scal
int x = sx + blockDim.x*blockIdx.x + threadIdx.x;
int y = sy + blockDim.y*blockIdx.y + threadIdx.y;
if(x < sx + sw && y < sy + sh)
if(x < sx + sw && y < sy + sh) {
kernel_film_convert_to_half_float(NULL, rgba, buffer, sample_scale, x, y, offset, stride);
}
}
extern "C" __global__ void
@@ -183,7 +189,8 @@ kernel_cuda_shader(uint4 *input,
int x = sx + blockDim.x*blockIdx.x + threadIdx.x;
if(x < sx + sw) {
kernel_shader_evaluate(NULL,
KernelGlobals kg;
kernel_shader_evaluate(&kg,
input,
output,
output_luma,
@@ -200,8 +207,10 @@ kernel_cuda_bake(uint4 *input, float4 *output, int type, int filter, int sx, int
{
int x = sx + blockDim.x*blockIdx.x + threadIdx.x;
if(x < sx + sw)
kernel_bake_evaluate(NULL, input, output, (ShaderEvalType)type, filter, x, offset, sample);
if(x < sx + sw) {
KernelGlobals kg;
kernel_bake_evaluate(&kg, input, output, (ShaderEvalType)type, filter, x, offset, sample);
}
}
#endif

View File

@@ -144,7 +144,6 @@ ccl_device float4 svm_image_texture(KernelGlobals *kg, int id, float x, float y,
case 86: r = kernel_tex_image_interp(__tex_image_byte4_086, x, y); break;
case 87: r = kernel_tex_image_interp(__tex_image_byte4_087, x, y); break;
case 88: r = kernel_tex_image_interp(__tex_image_byte4_088, x, y); break;
case 89: r = kernel_tex_image_interp(__tex_image_byte4_089, x, y); break;
default:
kernel_assert(0);
return make_float4(0.0f, 0.0f, 0.0f, 0.0f);

View File

@@ -134,32 +134,37 @@ ccl_device float3 svm_math_blackbody_color(float t) {
{ 6.72595954e-13f, -2.73059993e-08f, 4.24068546e-04f, -7.52204323e-01f },
};
if(t >= 12000.0f)
int i;
if(t >= 12000.0f) {
return make_float3(0.826270103f, 0.994478524f, 1.56626022f);
}
else if(t >= 6365.0f) {
i = 5;
}
else if(t >= 3315.0f) {
i = 4;
}
else if(t >= 1902.0f) {
i = 3;
}
else if(t >= 1449.0f) {
i = 2;
}
else if(t >= 1167.0f) {
i = 1;
}
else if(t >= 965.0f) {
i = 0;
}
else {
/* For 800 <= t < 965 color does not change in OSL implementation, so keep color the same */
return make_float3(4.70366907f, 0.0f, 0.0f);
}
/* Define a macro to reduce stack usage for nvcc */
#define MAKE_BB_RGB(i) make_float3(\
rc[i][0] / t + rc[i][1] * t + rc[i][2],\
gc[i][0] / t + gc[i][1] * t + gc[i][2],\
((bc[i][0] * t + bc[i][1]) * t + bc[i][2]) * t + bc[i][3])
if(t >= 6365.0f)
return MAKE_BB_RGB(5);
if(t >= 3315.0f)
return MAKE_BB_RGB(4);
if(t >= 1902.0f)
return MAKE_BB_RGB(3);
if(t >= 1449.0f)
return MAKE_BB_RGB(2);
if(t >= 1167.0f)
return MAKE_BB_RGB(1);
if(t >= 965.0f)
return MAKE_BB_RGB(0);
#undef MAKE_BB_RGB
/* For 800 <= t < 965 color does not change in OSL implementation, so keep color the same */
return make_float3(4.70366907f, 0.0f, 0.0f);
const float t_inv = 1.0f / t;
return make_float3(rc[i][0] * t_inv + rc[i][1] * t + rc[i][2],
gc[i][0] * t_inv + gc[i][1] * t + gc[i][2],
((bc[i][0] * t + bc[i][1]) * t + bc[i][2]) * t + bc[i][3]);
}
ccl_device_inline float3 svm_math_gamma_color(float3 color, float gamma)

View File

@@ -18,8 +18,19 @@ CCL_NAMESPACE_BEGIN
/* Noise */
ccl_device_inline void svm_noise(float3 p, float detail, float distortion, float *fac, float3 *color)
ccl_device void svm_node_tex_noise(KernelGlobals *kg, ShaderData *sd, float *stack, uint4 node, int *offset)
{
uint co_offset, scale_offset, detail_offset, distortion_offset, fac_offset, color_offset;
decode_node_uchar4(node.y, &co_offset, &scale_offset, &detail_offset, &distortion_offset);
decode_node_uchar4(node.z, &color_offset, &fac_offset, NULL, NULL);
uint4 node2 = read_node(kg, offset);
float scale = stack_load_float_default(stack, scale_offset, node2.x);
float detail = stack_load_float_default(stack, detail_offset, node2.y);
float distortion = stack_load_float_default(stack, distortion_offset, node2.z);
float3 p = stack_load_float3(stack, co_offset) * scale;
int hard = 0;
if(distortion != 0.0f) {
@@ -32,36 +43,17 @@ ccl_device_inline void svm_noise(float3 p, float detail, float distortion, float
p += r;
}
*fac = noise_turbulence(p, detail, hard);
*color = make_float3(*fac,
noise_turbulence(make_float3(p.y, p.x, p.z), detail, hard),
noise_turbulence(make_float3(p.y, p.z, p.x), detail, hard));
}
float f = noise_turbulence(p, detail, hard);
ccl_device void svm_node_tex_noise(KernelGlobals *kg, ShaderData *sd, float *stack, uint4 node, int *offset)
{
uint co_offset, scale_offset, detail_offset, distortion_offset, fac_offset, color_offset;
decode_node_uchar4(node.y, &co_offset, &scale_offset, &detail_offset, &distortion_offset);
uint4 node2 = read_node(kg, offset);
float scale = stack_load_float_default(stack, scale_offset, node2.x);
float detail = stack_load_float_default(stack, detail_offset, node2.y);
float distortion = stack_load_float_default(stack, distortion_offset, node2.z);
float3 co = stack_load_float3(stack, co_offset);
float3 color;
float f;
svm_noise(co*scale, detail, distortion, &f, &color);
decode_node_uchar4(node.z, &color_offset, &fac_offset, NULL, NULL);
if(stack_valid(fac_offset))
if(stack_valid(fac_offset)) {
stack_store_float(stack, fac_offset, f);
if(stack_valid(color_offset))
}
if(stack_valid(color_offset)) {
float3 color = make_float3(f,
noise_turbulence(make_float3(p.y, p.x, p.z), detail, hard),
noise_turbulence(make_float3(p.y, p.z, p.x), detail, hard));
stack_store_float3(stack, color_offset, color);
}
}
CCL_NAMESPACE_END

View File

@@ -73,7 +73,7 @@ public:
bool need_update;
int total_pixel_samples;
size_t total_pixel_samples;
private:
BakeData *m_bake_data;

View File

@@ -285,9 +285,8 @@ int ImageManager::add_image(const string& filename,
thread_scoped_lock device_lock(device_mutex);
/* Do we have a float? */
if(type == IMAGE_DATA_TYPE_FLOAT || type == IMAGE_DATA_TYPE_FLOAT4)
is_float = true;
/* Check whether it's a float texture. */
is_float = (type == IMAGE_DATA_TYPE_FLOAT || type == IMAGE_DATA_TYPE_FLOAT4);
/* No single channel and half textures on CUDA (Fermi) and no half on OpenCL, use available slots */
if((type == IMAGE_DATA_TYPE_FLOAT ||

View File

@@ -43,6 +43,8 @@ NODE_DEFINE(Integrator)
SOCKET_INT(transparent_max_bounce, "Transparent Max Bounce", 7);
SOCKET_BOOLEAN(transparent_shadows, "Transparent Shadows", false);
SOCKET_INT(ao_bounces, "AO Bounces", 0);
SOCKET_INT(volume_max_steps, "Volume Max Steps", 1024);
SOCKET_FLOAT(volume_step_size, "Volume Step Size", 0.1f);
@@ -62,6 +64,7 @@ NODE_DEFINE(Integrator)
SOCKET_INT(mesh_light_samples, "Mesh Light Samples", 1);
SOCKET_INT(subsurface_samples, "Subsurface Samples", 1);
SOCKET_INT(volume_samples, "Volume Samples", 1);
SOCKET_INT(start_sample, "Start Sample", 0);
SOCKET_BOOLEAN(sample_all_lights_direct, "Sample All Lights Direct", true);
SOCKET_BOOLEAN(sample_all_lights_indirect, "Sample All Lights Indirect", true);
@@ -111,6 +114,13 @@ void Integrator::device_update(Device *device, DeviceScene *dscene, Scene *scene
kintegrator->transparent_max_bounce = transparent_max_bounce + 1;
kintegrator->transparent_min_bounce = transparent_min_bounce + 1;
if(ao_bounces == 0) {
kintegrator->ao_bounces = INT_MAX;
}
else {
kintegrator->ao_bounces = ao_bounces - 1;
}
/* Transparent Shadows
* We only need to enable transparent shadows, if we actually have
* transparent shaders in the scene. Otherwise we can disable it
@@ -152,6 +162,7 @@ void Integrator::device_update(Device *device, DeviceScene *dscene, Scene *scene
kintegrator->mesh_light_samples = mesh_light_samples;
kintegrator->subsurface_samples = subsurface_samples;
kintegrator->volume_samples = volume_samples;
kintegrator->start_sample = start_sample;
if(method == BRANCHED_PATH) {
kintegrator->sample_all_lights_direct = sample_all_lights_direct;

View File

@@ -43,6 +43,8 @@ public:
int transparent_max_bounce;
bool transparent_shadows;
int ao_bounces;
int volume_max_steps;
float volume_step_size;
@@ -64,6 +66,7 @@ public:
int mesh_light_samples;
int subsurface_samples;
int volume_samples;
int start_sample;
bool sample_all_lights_direct;
bool sample_all_lights_indirect;

View File

@@ -486,10 +486,18 @@ static void background_cdf(int start,
float2 *cond_cdf)
{
/* Conditional CDFs (rows, U direction). */
/* NOTE: It is possible to have some NaN pixels on background
* which will ruin CDF causing wrong shading. We replace such
* pixels with black.
*/
for(int i = start; i < end; i++) {
float sin_theta = sinf(M_PI_F * (i + 0.5f) / res);
float3 env_color = (*pixels)[i * res];
float ave_luminance = average(env_color);
/* TODO(sergey): Consider adding average_safe(). */
if(!isfinite(ave_luminance)) {
ave_luminance = 0.0f;
}
cond_cdf[i * cdf_count].x = ave_luminance * sin_theta;
cond_cdf[i * cdf_count].y = 0.0f;
@@ -497,6 +505,9 @@ static void background_cdf(int start,
for(int j = 1; j < res; j++) {
env_color = (*pixels)[i * res + j];
ave_luminance = average(env_color);
if(!isfinite(ave_luminance)) {
ave_luminance = 0.0f;
}
cond_cdf[i * cdf_count + j].x = ave_luminance * sin_theta;
cond_cdf[i * cdf_count + j].y = cond_cdf[i * cdf_count + j - 1].y + cond_cdf[i * cdf_count + j - 1].x / res;

View File

@@ -1873,9 +1873,14 @@ void MeshManager::device_update_bvh(Device *device, DeviceScene *dscene, Scene *
dscene->prim_object.reference((uint*)&pack.prim_object[0], pack.prim_object.size());
device->tex_alloc("__prim_object", dscene->prim_object);
}
if(pack.prim_time.size()) {
dscene->prim_time.reference((float2*)&pack.prim_time[0], pack.prim_time.size());
device->tex_alloc("__prim_time", dscene->prim_time);
}
dscene->data.bvh.root = pack.root_index;
dscene->data.bvh.use_qbvh = scene->params.use_qbvh;
dscene->data.bvh.use_bvh_steps = (scene->params.num_bvh_time_steps != 0);
}
void MeshManager::device_update_flags(Device * /*device*/,
@@ -2152,6 +2157,7 @@ void MeshManager::device_free(Device *device, DeviceScene *dscene)
device->tex_free(dscene->prim_visibility);
device->tex_free(dscene->prim_index);
device->tex_free(dscene->prim_object);
device->tex_free(dscene->prim_time);
device->tex_free(dscene->tri_shader);
device->tex_free(dscene->tri_vnormal);
device->tex_free(dscene->tri_vindex);
@@ -2173,6 +2179,7 @@ void MeshManager::device_free(Device *device, DeviceScene *dscene)
dscene->prim_visibility.clear();
dscene->prim_index.clear();
dscene->prim_object.clear();
dscene->prim_time.clear();
dscene->tri_shader.clear();
dscene->tri_vnormal.clear();
dscene->tri_vindex.clear();

View File

@@ -27,6 +27,7 @@
#include "util_sky_model.h"
#include "util_foreach.h"
#include "util_logging.h"
#include "util_transform.h"
CCL_NAMESPACE_BEGIN
@@ -1931,21 +1932,38 @@ GlossyBsdfNode::GlossyBsdfNode()
void GlossyBsdfNode::simplify_settings(Scene *scene)
{
if(distribution_orig == NBUILTIN_CLOSURES) {
roughness_orig = roughness;
distribution_orig = distribution;
}
else {
/* By default we use original values, so we don't worry about restoring
* defaults later one and can only do override when needed.
*/
roughness = roughness_orig;
distribution = distribution_orig;
}
Integrator *integrator = scene->integrator;
ShaderInput *roughness_input = input("Roughness");
if(integrator->filter_glossy == 0.0f) {
/* Fallback to Sharp closure for Roughness close to 0.
* Note: Keep the epsilon in sync with kernel!
*/
ShaderInput *roughness_input = input("Roughness");
if(!roughness_input->link && roughness <= 1e-4f) {
VLOG(1) << "Using sharp glossy BSDF.";
distribution = CLOSURE_BSDF_REFLECTION_ID;
}
}
else {
/* Rollback to original distribution when filter glossy is used. */
distribution = distribution_orig;
/* If filter glossy is used we replace Sharp glossy with GGX so we can
* benefit from closure blur to remove unwanted noise.
*/
if(roughness_input->link == NULL &&
distribution == CLOSURE_BSDF_REFLECTION_ID)
{
VLOG(1) << "Using GGX glossy with filter glossy.";
distribution = CLOSURE_BSDF_MICROFACET_GGX_ID;
roughness = 0.0f;
}
}
closure = distribution;
}
@@ -1953,7 +1971,8 @@ void GlossyBsdfNode::simplify_settings(Scene *scene)
bool GlossyBsdfNode::has_integrator_dependency()
{
ShaderInput *roughness_input = input("Roughness");
return !roughness_input->link && roughness <= 1e-4f;
return !roughness_input->link &&
(distribution == CLOSURE_BSDF_REFLECTION_ID || roughness <= 1e-4f);
}
void GlossyBsdfNode::compile(SVMCompiler& compiler)
@@ -2008,21 +2027,38 @@ GlassBsdfNode::GlassBsdfNode()
void GlassBsdfNode::simplify_settings(Scene *scene)
{
if(distribution_orig == NBUILTIN_CLOSURES) {
roughness_orig = roughness;
distribution_orig = distribution;
}
else {
/* By default we use original values, so we don't worry about restoring
* defaults later one and can only do override when needed.
*/
roughness = roughness_orig;
distribution = distribution_orig;
}
Integrator *integrator = scene->integrator;
ShaderInput *roughness_input = input("Roughness");
if(integrator->filter_glossy == 0.0f) {
/* Fallback to Sharp closure for Roughness close to 0.
* Note: Keep the epsilon in sync with kernel!
*/
ShaderInput *roughness_input = input("Roughness");
if(!roughness_input->link && roughness <= 1e-4f) {
VLOG(1) << "Using sharp glass BSDF.";
distribution = CLOSURE_BSDF_SHARP_GLASS_ID;
}
}
else {
/* Rollback to original distribution when filter glossy is used. */
distribution = distribution_orig;
/* If filter glossy is used we replace Sharp glossy with GGX so we can
* benefit from closure blur to remove unwanted noise.
*/
if(roughness_input->link == NULL &&
distribution == CLOSURE_BSDF_SHARP_GLASS_ID)
{
VLOG(1) << "Using GGX glass with filter glossy.";
distribution = CLOSURE_BSDF_MICROFACET_GGX_GLASS_ID;
roughness = 0.0f;
}
}
closure = distribution;
}
@@ -2030,7 +2066,8 @@ void GlassBsdfNode::simplify_settings(Scene *scene)
bool GlassBsdfNode::has_integrator_dependency()
{
ShaderInput *roughness_input = input("Roughness");
return !roughness_input->link && roughness <= 1e-4f;
return !roughness_input->link &&
(distribution == CLOSURE_BSDF_SHARP_GLASS_ID || roughness <= 1e-4f);
}
void GlassBsdfNode::compile(SVMCompiler& compiler)
@@ -2085,21 +2122,38 @@ RefractionBsdfNode::RefractionBsdfNode()
void RefractionBsdfNode::simplify_settings(Scene *scene)
{
if(distribution_orig == NBUILTIN_CLOSURES) {
roughness_orig = roughness;
distribution_orig = distribution;
}
else {
/* By default we use original values, so we don't worry about restoring
* defaults later one and can only do override when needed.
*/
roughness = roughness_orig;
distribution = distribution_orig;
}
Integrator *integrator = scene->integrator;
ShaderInput *roughness_input = input("Roughness");
if(integrator->filter_glossy == 0.0f) {
/* Fallback to Sharp closure for Roughness close to 0.
* Note: Keep the epsilon in sync with kernel!
*/
ShaderInput *roughness_input = input("Roughness");
if(!roughness_input->link && roughness <= 1e-4f) {
VLOG(1) << "Using sharp refraction BSDF.";
distribution = CLOSURE_BSDF_REFRACTION_ID;
}
}
else {
/* Rollback to original distribution when filter glossy is used. */
distribution = distribution_orig;
/* If filter glossy is used we replace Sharp glossy with GGX so we can
* benefit from closure blur to remove unwanted noise.
*/
if(roughness_input->link == NULL &&
distribution == CLOSURE_BSDF_REFRACTION_ID)
{
VLOG(1) << "Using GGX refraction with filter glossy.";
distribution = CLOSURE_BSDF_MICROFACET_GGX_REFRACTION_ID;
roughness = 0.0f;
}
}
closure = distribution;
}
@@ -2107,7 +2161,8 @@ void RefractionBsdfNode::simplify_settings(Scene *scene)
bool RefractionBsdfNode::has_integrator_dependency()
{
ShaderInput *roughness_input = input("Roughness");
return !roughness_input->link && roughness <= 1e-4f;
return !roughness_input->link &&
(distribution == CLOSURE_BSDF_REFRACTION_ID || roughness <= 1e-4f);
}
void RefractionBsdfNode::compile(SVMCompiler& compiler)

View File

@@ -388,7 +388,7 @@ public:
bool has_integrator_dependency();
ClosureType get_closure_type() { return distribution; }
float roughness;
float roughness, roughness_orig;
ClosureType distribution, distribution_orig;
};
@@ -400,7 +400,7 @@ public:
bool has_integrator_dependency();
ClosureType get_closure_type() { return distribution; }
float roughness, IOR;
float roughness, roughness_orig, IOR;
ClosureType distribution, distribution_orig;
};
@@ -412,7 +412,7 @@ public:
bool has_integrator_dependency();
ClosureType get_closure_type() { return distribution; }
float roughness, IOR;
float roughness, roughness_orig, IOR;
ClosureType distribution, distribution_orig;
};

View File

@@ -69,6 +69,7 @@ public:
device_vector<uint> prim_visibility;
device_vector<uint> prim_index;
device_vector<uint> prim_object;
device_vector<float2> prim_time;
/* mesh */
device_vector<uint> tri_shader;

View File

@@ -230,7 +230,9 @@ void Session::run_gpu()
while(1) {
scoped_timer pause_timer;
pause_cond.wait(pause_lock);
progress.add_skip_time(pause_timer, params.background);
if(pause) {
progress.add_skip_time(pause_timer, params.background);
}
update_status_time(pause, no_tiles);
progress.set_update();
@@ -520,7 +522,9 @@ void Session::run_cpu()
while(1) {
scoped_timer pause_timer;
pause_cond.wait(pause_lock);
progress.add_skip_time(pause_timer, params.background);
if(pause) {
progress.add_skip_time(pause_timer, params.background);
}
update_status_time(pause, no_tiles);
progress.set_update();

View File

@@ -92,7 +92,7 @@ public:
template<typename T>
ShaderGraphBuilder& add_node(const T& node)
{
EXPECT_EQ(NULL, find_node(node.name()));
EXPECT_EQ(find_node(node.name()), (void*)NULL);
graph_->add(node.node());
node_map_[node.name()] = node.node();
return *this;
@@ -104,8 +104,8 @@ public:
vector<string> tokens_from, tokens_to;
string_split(tokens_from, from, "::");
string_split(tokens_to, to, "::");
EXPECT_EQ(2, tokens_from.size());
EXPECT_EQ(2, tokens_to.size());
EXPECT_EQ(tokens_from.size(), 2);
EXPECT_EQ(tokens_to.size(), 2);
ShaderNode *node_from = find_node(tokens_from[0]),
*node_to = find_node(tokens_to[0]);
EXPECT_NE((void*)NULL, node_from);

View File

@@ -18,7 +18,7 @@
#include "util/util_aligned_malloc.h"
#define CHECK_ALIGNMENT(ptr, align) EXPECT_EQ(0, (size_t)ptr % align)
#define CHECK_ALIGNMENT(ptr, align) EXPECT_EQ((size_t)ptr % align, 0)
CCL_NAMESPACE_BEGIN

View File

@@ -26,63 +26,63 @@ CCL_NAMESPACE_BEGIN
TEST(util_path_filename, simple_unix)
{
string str = path_filename("/tmp/foo.txt");
EXPECT_EQ("foo.txt", str);
EXPECT_EQ(str, "foo.txt");
}
TEST(util_path_filename, root_unix)
{
string str = path_filename("/");
EXPECT_EQ("/", str);
EXPECT_EQ(str, "/");
}
TEST(util_path_filename, last_slash_unix)
{
string str = path_filename("/tmp/foo.txt/");
EXPECT_EQ(".", str);
EXPECT_EQ(str, ".");
}
TEST(util_path_filename, alternate_slash_unix)
{
string str = path_filename("/tmp\\foo.txt");
EXPECT_EQ("tmp\\foo.txt", str);
EXPECT_EQ(str, "tmp\\foo.txt");
}
#endif /* !_WIN32 */
TEST(util_path_filename, file_only)
{
string str = path_filename("foo.txt");
EXPECT_EQ("foo.txt", str);
EXPECT_EQ(str, "foo.txt");
}
TEST(util_path_filename, empty)
{
string str = path_filename("");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
#ifdef _WIN32
TEST(util_path_filename, simple_windows)
{
string str = path_filename("C:\\tmp\\foo.txt");
EXPECT_EQ("foo.txt", str);
EXPECT_EQ(str, "foo.txt");
}
TEST(util_path_filename, root_windows)
{
string str = path_filename("C:\\");
EXPECT_EQ("\\", str);
EXPECT_EQ(str, "\\");
}
TEST(util_path_filename, last_slash_windows)
{
string str = path_filename("C:\\tmp\\foo.txt\\");
EXPECT_EQ(".", str);
EXPECT_EQ(str, ".");
}
TEST(util_path_filename, alternate_slash_windows)
{
string str = path_filename("C:\\tmp/foo.txt");
EXPECT_EQ("foo.txt", str);
EXPECT_EQ(str, "foo.txt");
}
#endif /* _WIN32 */
@@ -92,63 +92,63 @@ TEST(util_path_filename, alternate_slash_windows)
TEST(util_path_dirname, simple_unix)
{
string str = path_dirname("/tmp/foo.txt");
EXPECT_EQ("/tmp", str);
EXPECT_EQ(str, "/tmp");
}
TEST(util_path_dirname, root_unix)
{
string str = path_dirname("/");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
TEST(util_path_dirname, last_slash_unix)
{
string str = path_dirname("/tmp/foo.txt/");
EXPECT_EQ("/tmp/foo.txt", str);
EXPECT_EQ(str, "/tmp/foo.txt");
}
TEST(util_path_dirname, alternate_slash_unix)
{
string str = path_dirname("/tmp\\foo.txt");
EXPECT_EQ("/", str);
EXPECT_EQ(str, "/");
}
#endif /* !_WIN32 */
TEST(util_path_dirname, file_only)
{
string str = path_dirname("foo.txt");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
TEST(util_path_dirname, empty)
{
string str = path_dirname("");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
#ifdef _WIN32
TEST(util_path_dirname, simple_windows)
{
string str = path_dirname("C:\\tmp\\foo.txt");
EXPECT_EQ("C:\\tmp", str);
EXPECT_EQ(str, "C:\\tmp");
}
TEST(util_path_dirname, root_windows)
{
string str = path_dirname("C:\\");
EXPECT_EQ("C:", str);
EXPECT_EQ(str, "C:");
}
TEST(util_path_dirname, last_slash_windows)
{
string str = path_dirname("C:\\tmp\\foo.txt\\");
EXPECT_EQ("C:\\tmp\\foo.txt", str);
EXPECT_EQ(str, "C:\\tmp\\foo.txt");
}
TEST(util_path_dirname, alternate_slash_windows)
{
string str = path_dirname("C:\\tmp/foo.txt");
EXPECT_EQ("C:\\tmp", str);
EXPECT_EQ(str, "C:\\tmp");
}
#endif /* _WIN32 */
@@ -157,152 +157,152 @@ TEST(util_path_dirname, alternate_slash_windows)
TEST(util_path_join, empty_both)
{
string str = path_join("", "");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
TEST(util_path_join, empty_directory)
{
string str = path_join("", "foo.txt");
EXPECT_EQ("foo.txt", str);
EXPECT_EQ(str, "foo.txt");
}
TEST(util_path_join, empty_filename)
{
string str = path_join("foo", "");
EXPECT_EQ("foo", str);
EXPECT_EQ(str, "foo");
}
#ifndef _WIN32
TEST(util_path_join, simple_unix)
{
string str = path_join("foo", "bar");
EXPECT_EQ("foo/bar", str);
EXPECT_EQ(str, "foo/bar");
}
TEST(util_path_join, directory_slash_unix)
{
string str = path_join("foo/", "bar");
EXPECT_EQ("foo/bar", str);
EXPECT_EQ(str, "foo/bar");
}
TEST(util_path_join, filename_slash_unix)
{
string str = path_join("foo", "/bar");
EXPECT_EQ("foo/bar", str);
EXPECT_EQ(str, "foo/bar");
}
TEST(util_path_join, both_slash_unix)
{
string str = path_join("foo/", "/bar");
EXPECT_EQ("foo//bar", str);
EXPECT_EQ(str, "foo//bar");
}
TEST(util_path_join, directory_alternate_slash_unix)
{
string str = path_join("foo\\", "bar");
EXPECT_EQ("foo\\/bar", str);
EXPECT_EQ(str, "foo\\/bar");
}
TEST(util_path_join, filename_alternate_slash_unix)
{
string str = path_join("foo", "\\bar");
EXPECT_EQ("foo/\\bar", str);
EXPECT_EQ(str, "foo/\\bar");
}
TEST(util_path_join, both_alternate_slash_unix)
{
string str = path_join("foo", "\\bar");
EXPECT_EQ("foo/\\bar", str);
EXPECT_EQ(str, "foo/\\bar");
}
TEST(util_path_join, empty_dir_filename_slash_unix)
{
string str = path_join("", "/foo.txt");
EXPECT_EQ("/foo.txt", str);
EXPECT_EQ(str, "/foo.txt");
}
TEST(util_path_join, empty_dir_filename_alternate_slash_unix)
{
string str = path_join("", "\\foo.txt");
EXPECT_EQ("\\foo.txt", str);
EXPECT_EQ(str, "\\foo.txt");
}
TEST(util_path_join, empty_filename_dir_slash_unix)
{
string str = path_join("foo/", "");
EXPECT_EQ("foo/", str);
EXPECT_EQ(str, "foo/");
}
TEST(util_path_join, empty_filename_dir_alternate_slash_unix)
{
string str = path_join("foo\\", "");
EXPECT_EQ("foo\\", str);
EXPECT_EQ(str, "foo\\");
}
#else /* !_WIN32 */
TEST(util_path_join, simple_windows)
{
string str = path_join("foo", "bar");
EXPECT_EQ("foo\\bar", str);
EXPECT_EQ(str, "foo\\bar");
}
TEST(util_path_join, directory_slash_windows)
{
string str = path_join("foo\\", "bar");
EXPECT_EQ("foo\\bar", str);
EXPECT_EQ(str, "foo\\bar");
}
TEST(util_path_join, filename_slash_windows)
{
string str = path_join("foo", "\\bar");
EXPECT_EQ("foo\\bar", str);
EXPECT_EQ(str, "foo\\bar");
}
TEST(util_path_join, both_slash_windows)
{
string str = path_join("foo\\", "\\bar");
EXPECT_EQ("foo\\\\bar", str);
EXPECT_EQ(str, "foo\\\\bar");
}
TEST(util_path_join, directory_alternate_slash_windows)
{
string str = path_join("foo/", "bar");
EXPECT_EQ("foo/bar", str);
EXPECT_EQ(str, "foo/bar");
}
TEST(util_path_join, filename_alternate_slash_windows)
{
string str = path_join("foo", "/bar");
EXPECT_EQ("foo/bar", str);
EXPECT_EQ(str, "foo/bar");
}
TEST(util_path_join, both_alternate_slash_windows)
{
string str = path_join("foo/", "/bar");
EXPECT_EQ("foo//bar", str);
EXPECT_EQ(str, "foo//bar");
}
TEST(util_path_join, empty_dir_filename_slash_windows)
{
string str = path_join("", "\\foo.txt");
EXPECT_EQ("\\foo.txt", str);
EXPECT_EQ(str, "\\foo.txt");
}
TEST(util_path_join, empty_dir_filename_alternate_slash_windows)
{
string str = path_join("", "/foo.txt");
EXPECT_EQ("/foo.txt", str);
EXPECT_EQ(str, "/foo.txt");
}
TEST(util_path_join, empty_filename_dir_slash_windows)
{
string str = path_join("foo\\", "");
EXPECT_EQ("foo\\", str);
EXPECT_EQ(str, "foo\\");
}
TEST(util_path_join, empty_filename_dir_alternate_slash_windows)
{
string str = path_join("foo/", "");
EXPECT_EQ("foo/", str);
EXPECT_EQ(str, "foo/");
}
#endif /* !_WIN32 */
@@ -311,31 +311,31 @@ TEST(util_path_join, empty_filename_dir_alternate_slash_windows)
TEST(util_path_escape, no_escape_chars)
{
string str = path_escape("/tmp/foo/bar");
EXPECT_EQ("/tmp/foo/bar", str);
EXPECT_EQ(str, "/tmp/foo/bar");
}
TEST(util_path_escape, simple)
{
string str = path_escape("/tmp/foo bar");
EXPECT_EQ("/tmp/foo\\ bar", str);
EXPECT_EQ(str, "/tmp/foo\\ bar");
}
TEST(util_path_escape, simple_end)
{
string str = path_escape("/tmp/foo/bar ");
EXPECT_EQ("/tmp/foo/bar\\ ", str);
EXPECT_EQ(str, "/tmp/foo/bar\\ ");
}
TEST(util_path_escape, multiple)
{
string str = path_escape("/tmp/foo bar");
EXPECT_EQ("/tmp/foo\\ \\ bar", str);
EXPECT_EQ(str, "/tmp/foo\\ \\ bar");
}
TEST(util_path_escape, simple_multiple_end)
{
string str = path_escape("/tmp/foo/bar ");
EXPECT_EQ("/tmp/foo/bar\\ \\ ", str);
EXPECT_EQ(str, "/tmp/foo/bar\\ \\ ");
}
/* ******** Tests for path_is_relative() ******** */

View File

@@ -25,25 +25,25 @@ CCL_NAMESPACE_BEGIN
TEST(util_string_printf, no_format)
{
string str = string_printf("foo bar");
EXPECT_EQ(str, "foo bar");
EXPECT_EQ("foo bar", str);
}
TEST(util_string_printf, int_number)
{
string str = string_printf("foo %d bar", 314);
EXPECT_EQ(str, "foo 314 bar");
EXPECT_EQ("foo 314 bar", str);
}
TEST(util_string_printf, float_number_default_precision)
{
string str = string_printf("foo %f bar", 3.1415);
EXPECT_EQ(str, "foo 3.141500 bar");
EXPECT_EQ("foo 3.141500 bar", str);
}
TEST(util_string_printf, float_number_custom_precision)
{
string str = string_printf("foo %.1f bar", 3.1415);
EXPECT_EQ(str, "foo 3.1 bar");
EXPECT_EQ("foo 3.1 bar", str);
}
/* ******** Tests for string_printf() ******** */
@@ -78,44 +78,44 @@ TEST(util_string_split, empty)
{
vector<string> tokens;
string_split(tokens, "");
EXPECT_EQ(0, tokens.size());
EXPECT_EQ(tokens.size(), 0);
}
TEST(util_string_split, only_spaces)
{
vector<string> tokens;
string_split(tokens, " \t\t \t");
EXPECT_EQ(0, tokens.size());
EXPECT_EQ(tokens.size(), 0);
}
TEST(util_string_split, single)
{
vector<string> tokens;
string_split(tokens, "foo");
EXPECT_EQ(1, tokens.size());
EXPECT_EQ("foo", tokens[0]);
EXPECT_EQ(tokens.size(), 1);
EXPECT_EQ(tokens[0], "foo");
}
TEST(util_string_split, simple)
{
vector<string> tokens;
string_split(tokens, "foo a bar b");
EXPECT_EQ(4, tokens.size());
EXPECT_EQ("foo", tokens[0]);
EXPECT_EQ("a", tokens[1]);
EXPECT_EQ("bar", tokens[2]);
EXPECT_EQ("b", tokens[3]);
EXPECT_EQ(tokens.size(), 4);
EXPECT_EQ(tokens[0], "foo");
EXPECT_EQ(tokens[1], "a");
EXPECT_EQ(tokens[2], "bar");
EXPECT_EQ(tokens[3], "b");
}
TEST(util_string_split, multiple_spaces)
{
vector<string> tokens;
string_split(tokens, " \t foo \ta bar b\t ");
EXPECT_EQ(4, tokens.size());
EXPECT_EQ("foo", tokens[0]);
EXPECT_EQ("a", tokens[1]);
EXPECT_EQ("bar", tokens[2]);
EXPECT_EQ("b", tokens[3]);
EXPECT_EQ(tokens.size(), 4);
EXPECT_EQ(tokens[0], "foo");
EXPECT_EQ(tokens[1], "a");
EXPECT_EQ(tokens[2], "bar");
EXPECT_EQ(tokens[3], "b");
}
/* ******** Tests for string_replace() ******** */
@@ -124,35 +124,35 @@ TEST(util_string_replace, empty_haystack_and_other)
{
string str = "";
string_replace(str, "x", "");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
TEST(util_string_replace, empty_haystack)
{
string str = "";
string_replace(str, "x", "y");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
TEST(util_string_replace, empty_other)
{
string str = "x";
string_replace(str, "x", "");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
TEST(util_string_replace, long_haystack_empty_other)
{
string str = "a x b xxc";
string_replace(str, "x", "");
EXPECT_EQ("a b c", str);
EXPECT_EQ(str, "a b c");
}
TEST(util_string_replace, long_haystack)
{
string str = "a x b xxc";
string_replace(str, "x", "FOO");
EXPECT_EQ("a FOO b FOOFOOc", str);
EXPECT_EQ(str, "a FOO b FOOFOOc");
}
/* ******** Tests for string_endswith() ******** */
@@ -192,25 +192,25 @@ TEST(util_string_endswith, simple_false)
TEST(util_string_strip, empty)
{
string str = string_strip("");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
TEST(util_string_strip, only_spaces)
{
string str = string_strip(" ");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
TEST(util_string_strip, no_spaces)
{
string str = string_strip("foo bar");
EXPECT_EQ("foo bar", str);
EXPECT_EQ(str, "foo bar");
}
TEST(util_string_strip, with_spaces)
{
string str = string_strip(" foo bar ");
EXPECT_EQ("foo bar", str);
EXPECT_EQ(str, "foo bar");
}
/* ******** Tests for string_remove_trademark() ******** */
@@ -218,31 +218,31 @@ TEST(util_string_strip, with_spaces)
TEST(util_string_remove_trademark, empty)
{
string str = string_remove_trademark("");
EXPECT_EQ("", str);
EXPECT_EQ(str, "");
}
TEST(util_string_remove_trademark, no_trademark)
{
string str = string_remove_trademark("foo bar");
EXPECT_EQ("foo bar", str);
EXPECT_EQ(str, "foo bar");
}
TEST(util_string_remove_trademark, only_tm)
{
string str = string_remove_trademark("foo bar(TM) zzz");
EXPECT_EQ("foo bar zzz", str);
EXPECT_EQ(str, "foo bar zzz");
}
TEST(util_string_remove_trademark, only_r)
{
string str = string_remove_trademark("foo bar(R) zzz");
EXPECT_EQ("foo bar zzz", str);
EXPECT_EQ(str, "foo bar zzz");
}
TEST(util_string_remove_trademark, both)
{
string str = string_remove_trademark("foo bar(TM)(R) zzz");
EXPECT_EQ("foo bar zzz", str);
EXPECT_EQ(str, "foo bar zzz");
}
CCL_NAMESPACE_END

View File

@@ -18,6 +18,7 @@
#define __UTIL_HALF_H__
#include "util_types.h"
#include "util_math.h"
#ifdef __KERNEL_SSE2__
#include "util_simd.h"
@@ -110,6 +111,28 @@ ccl_device_inline float4 half4_to_float4(half4 h)
return f;
}
ccl_device_inline half float_to_half(float f)
{
const uint u = __float_as_uint(f);
/* Sign bit, shifted to its position. */
uint sign_bit = u & 0x80000000;
sign_bit >>= 16;
/* Exponent. */
uint exponent_bits = u & 0x7f800000;
/* Non-sign bits. */
uint value_bits = u & 0x7fffffff;
value_bits >>= 13; /* Align mantissa on MSB. */
value_bits -= 0x1c000; /* Adjust bias. */
/* Flush-to-zero. */
value_bits = (exponent_bits < 0x38800000) ? 0 : value_bits;
/* Clamp-to-max. */
value_bits = (exponent_bits > 0x47000000) ? 0x7bff : value_bits;
/* Denormals-as-zero. */
value_bits = (exponent_bits == 0 ? 0 : value_bits);
/* Re-insert sign bit and return. */
return (value_bits | sign_bit);
}
#endif
#endif
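As a quick check of the bit arithmetic in the new float_to_half(), this is how it maps 1.0f; a worked sketch only, not part of the patch:
/* 1.0f has the IEEE-754 bit pattern 0x3F800000. */
uint u = 0x3F800000;
uint sign_bit = (u & 0x80000000) >> 16; /* 0 */
uint exponent_bits = u & 0x7f800000; /* 0x3F800000 */
uint value_bits = (u & 0x7fffffff) >> 13; /* 0x1FC00, mantissa aligned on MSB */
value_bits -= 0x1c000; /* 0x3C00, bias adjusted */
/* No flush-to-zero (exponent >= 0x38800000), no clamp (exponent <= 0x47000000),
* not a denormal, so the result is 0x3C00 | 0 = 0x3C00 == half(1.0f). */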

View File

@@ -19,6 +19,7 @@
#include "util_algorithm.h"
#include "util_debug.h"
#include "util_half.h"
#include "util_image.h"
CCL_NAMESPACE_BEGIN
@@ -38,6 +39,52 @@ const T *util_image_read(const vector<T>& pixels,
return &pixels[index];
}
/* Cast input pixel from unknown storage to float. */
template<typename T>
inline float cast_to_float(T value);
template<>
inline float cast_to_float(float value)
{
return value;
}
template<>
inline float cast_to_float(uchar value)
{
return (float)value / 255.0f;
}
template<>
inline float cast_to_float(half value)
{
return half_to_float(value);
}
/* Cast float value to output pixel type. */
template<typename T>
inline T cast_from_float(float value);
template<>
inline float cast_from_float(float value)
{
return value;
}
template<>
inline uchar cast_from_float(float value)
{
if(value < 0.0f) {
return 0;
}
else if(value > (1.0f - 0.5f / 255.0f)) {
return 255;
}
return (uchar)((255.0f * value) + 0.5f);
}
template<>
inline half cast_from_float(float value)
{
return float_to_half(value);
}
template<typename T>
void util_image_downscale_sample(const vector<T>& pixels,
const size_t width,
@@ -71,15 +118,22 @@ void util_image_downscale_sample(const vector<T>& pixels,
components,
nx, ny, nz);
for(size_t k = 0; k < components; ++k) {
accum[k] += pixel[k];
accum[k] += cast_to_float(pixel[k]);
}
++count;
}
}
}
const float inv_count = 1.0f / (float)count;
for(size_t k = 0; k < components; ++k) {
result[k] = T(accum[k] * inv_count);
if(count != 0) {
const float inv_count = 1.0f / (float)count;
for(size_t k = 0; k < components; ++k) {
result[k] = cast_from_float<T>(accum[k] * inv_count);
}
}
else {
for(size_t k = 0; k < components; ++k) {
result[k] = T(0.0f);
}
}
}

View File

@@ -1329,7 +1329,7 @@ ccl_device_inline float3 safe_divide_even_color(float3 a, float3 b)
y = (b.y != 0.0f)? a.y/b.y: 0.0f;
z = (b.z != 0.0f)? a.z/b.z: 0.0f;
/* try to get grey even if b is zero */
/* try to get gray even if b is zero */
if(b.x == 0.0f) {
if(b.y == 0.0f) {
x = z;

View File

@@ -43,7 +43,9 @@ template <> class StaticAssertFailure<true> {};
# endif /* __COUNTER__ */
# endif /* C++11 or MSVC2015 */
#else /* __KERNEL_GPU__ */
# define static_assert(statement, message)
# ifndef static_assert
# define static_assert(statement, message)
# endif
#endif /* __KERNEL_GPU__ */
/* TODO(sergey): For until C++11 is a bare minimum for us,

View File

@@ -36,7 +36,7 @@
#if !defined(__MINGW64__)
# if defined(_WIN32) || defined(__APPLE__) || \
defined(__FreeBSD__) || defined(__NetBSD__)
static void sincos(double x, double *sinx, double *cosx) {
inline void sincos(double x, double *sinx, double *cosx) {
*sinx = sin(x);
*cosx = cos(x);
}

View File

@@ -93,7 +93,7 @@ wchar_t *alloc_utf16_from_8(const char *in8, size_t add);
/* Easy allocation and conversion of new utf-16 string. New string has _16 suffix. Must be deallocated with UTF16_UN_ENCODE in right order*/
#define UTF16_ENCODE(in8str) if (1) { \
wchar_t *in8str ## _16 = alloc_utf16_from_8((char *)in8str, 0)
wchar_t *in8str ## _16 = alloc_utf16_from_8((const char *)in8str, 0)
#define UTF16_UN_ENCODE(in8str) \
free(in8str ## _16); } (void)0

View File

@@ -5,8 +5,8 @@ REM This is for users who like to configure & build Blender with a single comman
setlocal ENABLEEXTENSIONS
set BLENDER_DIR=%~dp0
set BLENDER_DIR_NOSPACES=%BLENDER_DIR: =%
if not "%BLENDER_DIR%"=="%BLENDER_DIR_NOSPACES%" (
echo There are spaces detected in the build path "%BLENDER_DIR%", this is currently not supported, exiting....
if not "%BLENDER_DIR%"=="%BLENDER_DIR_NOSPACES%" (
echo There are spaces detected in the build path "%BLENDER_DIR%", this is currently not supported, exiting....
goto EOF
)
set BUILD_DIR=%BLENDER_DIR%..\build_windows
@@ -79,7 +79,7 @@ if NOT "%1" == "" (
set NOBUILD=1
) else if "%1" == "showhash" (
for /f "delims=" %%i in ('git rev-parse HEAD') do echo Branch_hash=%%i
cd release/datafiles/locale
cd release/datafiles/locale
for /f "delims=" %%i in ('git rev-parse HEAD') do echo Locale_hash=%%i
cd %~dp0
cd release/scripts/addons
@@ -132,13 +132,13 @@ if "%BUILD_ARCH%"=="x64" (
if "%target%"=="Release" (
rem for vc12 check for both cuda 7.5 and 8
rem for vc12 check for both cuda 7.5 and 8
if "%CUDA_PATH%"=="" (
echo Cuda Not found, aborting!
goto EOF
)
set BUILD_CMAKE_ARGS=%BUILD_CMAKE_ARGS% ^
-C"%BLENDER_DIR%\build_files\cmake\config\blender_release.cmake"
-C"%BLENDER_DIR%\build_files\cmake\config\blender_release.cmake"
)
:DetectMSVC
@@ -157,7 +157,7 @@ if DEFINED MSVC_VC_DIR goto msvc_detect_finally
if DEFINED MSVC_VC_DIR call "%MSVC_VC_DIR%\vcvarsall.bat"
if DEFINED MSVC_VC_DIR goto sanity_checks
rem MSVC Build environment 2017 and up.
rem MSVC Build environment 2017 and up.
for /F "usebackq skip=2 tokens=1-2*" %%A IN (`REG QUERY "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\SXS\VS7" /v %BUILD_VS_VER%.0 2^>nul`) DO set MSVC_VS_DIR=%%C
if DEFINED MSVC_VS_DIR goto msvc_detect_finally_2017
REM Check 32 bits
@@ -202,7 +202,7 @@ if NOT EXIST %BLENDER_DIR%..\lib\nul (
if "%TARGET%"=="" (
echo Error: Convenience target not set
echo This is required for building, aborting!
echo .
echo .
goto HELP
)
@@ -266,15 +266,15 @@ echo.
echo At any point you can optionally modify your build configuration by editing:
echo "%BUILD_DIR%\CMakeCache.txt", then run "make" again to build with the changes applied.
echo.
echo Blender successfully built, run from: "%BUILD_DIR%\bin\%BUILD_TYPE%"
echo Blender successfully built, run from: "%BUILD_DIR%\bin\%BUILD_TYPE%\blender.exe"
echo.
goto EOF
:HELP
echo.
echo Convenience targets
echo - release ^(identical to the offical blender.org builds^)
echo - release ^(identical to the official blender.org builds^)
echo - full ^(same as release minus the cuda kernels^)
echo - lite
echo - lite
echo - headless
echo - cycles
echo - bpy
@@ -289,11 +289,10 @@ goto EOF
echo - with_tests ^(enable building unit tests^)
echo - debug ^(Build an unoptimized debuggable build^)
echo - packagename [newname] ^(override default cpack package name^)
echo - x86 ^(override host autodetect and build 32 bit code^)
echo - x64 ^(override host autodetect and build 64 bit code^)
echo - x86 ^(override host auto-detect and build 32 bit code^)
echo - x64 ^(override host auto-detect and build 64 bit code^)
echo - 2013 ^(build with visual studio 2013^)
echo - 2015 ^(build with visual studio 2015^) [EXPERIMENTAL]
echo.
:EOF

View File

@@ -28201,7 +28201,7 @@
xlink:href="#linearGradient37542-29"
id="linearGradient17610"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(-1,0,0,1,461.01011,-167)"
gradientTransform="matrix(-1,0,0,1,865.01833,131.0342)"
x1="392.0101"
y1="224.99998"
x2="392.0101"
@@ -28263,7 +28263,7 @@
xlink:href="#linearGradient37542-29-7"
id="linearGradient17610-0"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(-1,0,0,1,461.01011,-167)"
gradientTransform="translate(128.00098,130.98191)"
x1="392.0101"
y1="224.99998"
x2="392.0101"
@@ -28402,7 +28402,7 @@
x2="228.5468"
y1="118.91647"
x1="228.5468"
gradientTransform="matrix(1.180548,0,0,0.90042534,265.27784,265.13062)"
gradientTransform="matrix(1.180548,0,0,0.90042534,265.83288,265.61628)"
gradientUnits="userSpaceOnUse"
id="linearGradient17838"
xlink:href="#linearGradient319-36-40-2"
@@ -28433,7 +28433,7 @@
x2="228.5468"
y1="118.91647"
x1="228.5468"
gradientTransform="matrix(1.180548,0,0,0.90042534,223.26222,270.47438)"
gradientTransform="matrix(1.180548,0,0,0.90042534,223.81726,270.99473)"
gradientUnits="userSpaceOnUse"
id="linearGradient17872"
xlink:href="#linearGradient319-36-40-2-4"
@@ -29073,7 +29073,7 @@
xlink:href="#linearGradient27854-0-6-9"
id="linearGradient17162"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0,1,-1,0,782.48614,-14.46331)"
gradientTransform="matrix(0,1,-1,0,783.04118,-13.977664)"
x1="388.86502"
y1="244.02"
x2="391.43173"
@@ -29083,7 +29083,7 @@
xlink:href="#linearGradient37542-29-7-8"
id="linearGradient17165"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0,1,-1,0,782.48614,-14.46331)"
gradientTransform="matrix(0,1,-1,0,783.04118,-13.977664)"
x1="368.97806"
y1="249.99998"
x2="393.85385"
@@ -29113,7 +29113,7 @@
xlink:href="#linearGradient37542-29-1"
id="linearGradient17185"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0,-1,-1,0,740.48614,764.46331)"
gradientTransform="matrix(0,-1,-1,0,741.04118,764.98366)"
x1="409.93588"
y1="249.99998"
x2="385.11514"
@@ -31338,6 +31338,26 @@
d="m 125.5,433.5 23,0 0,41 -33,0 0,-31 10,-10 z"
style="display:inline;fill:url(#linearGradient13110);fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:1;marker:none" />
</clipPath>
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient1610-6"
id="linearGradient18199"
gradientUnits="userSpaceOnUse"
x1="189.76083"
y1="248.13905"
x2="116.05637"
y2="183.6826" />
<radialGradient
inkscape:collect="always"
xlink:href="#linearGradient22562"
id="radialGradient23167-6"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(0.99220964,-0.12457927,0.11585516,0.92272644,-34.13325,22.766225)"
cx="-0.78262758"
cy="294.63174"
fx="-0.78262758"
fy="294.63174"
r="6.6750002" />
</defs>
<sodipodi:namedview
id="base"
@@ -31349,17 +31369,17 @@
objecttolerance="10000"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="1.274018"
inkscape:cx="519.70993"
inkscape:cy="325.90484"
inkscape:zoom="19.997864"
inkscape:cx="462.52244"
inkscape:cy="435.14241"
inkscape:document-units="px"
inkscape:current-layer="layer1"
inkscape:current-layer="g23149-4"
showgrid="true"
inkscape:window-width="1920"
inkscape:window-height="1005"
inkscape:window-x="-2"
inkscape:window-y="27"
inkscape:snap-nodes="false"
inkscape:window-height="1025"
inkscape:window-x="-8"
inkscape:window-y="-8"
inkscape:snap-nodes="true"
inkscape:snap-bbox="true"
showguides="true"
inkscape:guide-bbox="true"
@@ -31369,14 +31389,16 @@
inkscape:snap-intersection-grid-guide="false"
inkscape:window-maximized="1"
inkscape:bbox-paths="false"
inkscape:snap-global="false"
inkscape:snap-global="true"
inkscape:snap-bbox-midpoints="false"
inkscape:snap-grids="true"
inkscape:snap-grids="false"
inkscape:snap-to-guides="false"
inkscape:snap-page="false"
units="pt"
inkscape:snap-center="false"
inkscape:snap-object-midpoints="true">
inkscape:snap-object-midpoints="true"
inkscape:snap-midpoints="false"
inkscape:snap-others="false">
<inkscape:grid
type="xygrid"
id="grid17394"
@@ -88001,37 +88023,37 @@
<g
style="display:inline;enable-background:new"
id="ICON_COLLAPSEMENU"
transform="translate(279.8665,506.92392)">
transform="translate(280,508)">
<rect
y="111"
x="103"
height="16"
width="16"
id="rect24489-7-4"
style="opacity:0;fill:#b3b3b3;fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:2.79999995;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate" />
style="display:inline;overflow:visible;visibility:visible;opacity:0;fill:#b3b3b3;fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:2.79999995;marker:none;enable-background:accumulate" />
<rect
style="fill:#ececec;fill-opacity:1;stroke:#141414;stroke-width:0.79452544;stroke-opacity:1"
id="rect29842"
width="11.816368"
height="2.1883197"
x="105.18671"
y="-116.88043"
width="11.209318"
height="2.1883163"
x="105.39484"
y="-116.60292"
transform="scale(1,-1)" />
<rect
style="fill:#ececec;fill-opacity:1;stroke:#141414;stroke-width:0.79452544;stroke-opacity:1;display:inline;enable-background:new"
style="display:inline;fill:#ececec;fill-opacity:1;stroke:#141414;stroke-width:0.79452544;stroke-opacity:1;enable-background:new"
id="rect29842-4"
width="11.816368"
height="2.1883197"
x="105.31538"
y="-120.80865"
width="11.191971"
height="2.2056611"
x="105.41944"
y="-120.61786"
transform="scale(1,-1)" />
<rect
style="fill:#ececec;fill-opacity:1;stroke:#141414;stroke-width:0.79452544;stroke-opacity:1;display:inline;enable-background:new"
style="display:inline;fill:#ececec;fill-opacity:1;stroke:#141414;stroke-width:0.79500002;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;enable-background:new"
id="rect29842-4-5"
width="11.816368"
height="2.1883197"
x="105.41832"
y="-124.71391"
width="11.22666"
height="2.2056642"
x="105.38363"
y="-124.60985"
transform="scale(1,-1)" />
</g>
<g
@@ -89360,17 +89382,18 @@
y="69" />
<g
id="g17605"
transform="translate(-1.5467961,-0.48613592)">
transform="translate(-0.99177519,0.03419629)">
<path
sodipodi:nodetypes="ccccccccccc"
style="fill:url(#linearGradient17610);fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.80000001;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 472.51758,370.53516 -1.04688,0.0312 0.0371,9.96875 1.00977,0 3.95117,-3.70704 -5.3e-4,3.72852 3.1119,0 0,-10.01758 -3.1119,0 5.3e-4,3.73242 z"
transform="translate(-404.00822,-298.0342)"
id="path11011-6"
inkscape:connector-curvature="0"
d="m 72.839729,82.521675 2.731705,0 0,-10.016275 -2.731705,0 L 72.84,76.59374 68.51011,72.5 67.4632,72.53125 67.5,82.49999 68.51011,82.5 72.84,78.43749 z"
style="fill:url(#linearGradient17610);fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.80000001;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
id="path11011-6" />
sodipodi:nodetypes="ccccccccccc" />
<path
sodipodi:nodetypes="ccc"
style="fill:none;stroke:url(#linearGradient17612);stroke-width:0.91056824px;stroke-linecap:round;stroke-linejoin:round;stroke-opacity:1"
d="m 73.7503,81.408784 0,-7.99274 0.910568,-6.3e-5"
d="m 73.368723,81.408784 0,-7.99274 1.292145,-6.3e-5"
id="path10830-6"
inkscape:export-filename="C:\Documents and Settings\Tata\Pulpit\BLENDER ICONSET\Kopia blender\.blender\icons\jendrzych's iconset.png"
inkscape:export-xdpi="90"
@@ -89397,17 +89420,18 @@
y="69" />
<g
id="g17605-3"
transform="translate(-1.5467961,-0.48613592)">
transform="translate(-1.9392553,-0.11820549)">
<path
sodipodi:nodetypes="ccccccccccc"
style="fill:url(#linearGradient17610-0);fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.80000001;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 520.5,370.48242 -4.01758,3.80078 -6.6e-4,-3.79492 -3.04231,0 0,10.01563 3.04231,0 6.6e-4,-3.79297 4.01758,3.77148 1.01172,0 0.0371,-9.96875 z"
transform="matrix(-1,0,0,1,589.01109,-297.98191)"
id="path11011-6-2"
inkscape:connector-curvature="0"
d="m 72.839729,82.521675 2.731705,0 0,-10.016275 -2.731705,0 L 72.84,76.59374 68.51011,72.5 67.4632,72.53125 67.5,82.49999 68.51011,82.5 72.84,78.43749 z"
style="fill:url(#linearGradient17610-0);fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.80000001;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
id="path11011-6-2" />
sodipodi:nodetypes="ccccccccccc" />
<path
sodipodi:nodetypes="ccc"
style="fill:none;stroke:url(#linearGradient17612-5);stroke-width:0.91056824px;stroke-linecap:round;stroke-linejoin:round;stroke-opacity:1"
d="m 74.660868,81.408784 0,-7.99274 -0.910568,-6.3e-5"
d="m 74.660868,81.408784 0,-7.99274 -1.220453,-6.3e-5"
id="path10830-6-2"
inkscape:export-filename="C:\Documents and Settings\Tata\Pulpit\BLENDER ICONSET\Kopia blender\.blender\icons\jendrzych's iconset.png"
inkscape:export-xdpi="90"
@@ -89432,16 +89456,16 @@
y="-546"
transform="matrix(0,-1,-1,0,0,0)" />
<path
style="fill:url(#linearGradient17165);fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.80000001;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 533.51953,371.46094 0,3.04042 3.79492,5.9e-4 -3.77343,4.01953 0,1.01172 9.96875,0.0352 0.0312,-1.04688 -3.80274,-4.01953 3.79688,-5.9e-4 0,-3.04042 z"
id="path11011-6-2-1"
style="fill:url(#linearGradient17165);fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.80000001;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
d="m 532.96447,373.70707 0,-2.7317 10.01627,0 0,2.7317 -4.08834,-2.7e-4 4.09374,4.32989 -0.0312,1.04691 -9.96874,-0.0368 -1e-5,-1.01011 4.06251,-4.32989 z"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccccccccccc" />
<path
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccc"
id="path11013-5-7-1"
d="m 541.73615,378.0468 -3.75,-4 -3.75,4"
d="m 542.29119,378.53246 -3.75,-4 -3.75,4"
style="fill:none;stroke:url(#linearGradient17162);stroke-width:1px;stroke-linecap:round;stroke-linejoin:round;stroke-opacity:1" />
<path
inkscape:connector-curvature="0"
@@ -89449,8 +89473,8 @@
inkscape:export-xdpi="90"
inkscape:export-filename="C:\Documents and Settings\Tata\Pulpit\BLENDER ICONSET\Kopia blender\.blender\icons\jendrzych's iconset.png"
id="path10830-6-2-8"
d="m 542.11634,371.83103 -8.26386,0 -6e-5,0.90043"
style="fill:none;stroke:url(#linearGradient17838);stroke-width:0.92071104px;stroke-linecap:round;stroke-linejoin:round;stroke-opacity:1;display:inline;enable-background:new"
d="m 542.67138,372.31669 -8.26386,0 -6e-5,1.20843"
style="display:inline;fill:none;stroke:url(#linearGradient17838);stroke-width:0.92071104px;stroke-linecap:round;stroke-linejoin:round;stroke-opacity:1;enable-background:new"
sodipodi:nodetypes="ccc" />
</g>
<g
@@ -89464,16 +89488,16 @@
y="-504"
transform="matrix(0,1,-1,0,0,0)" />
<path
style="fill:url(#linearGradient17185);fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.80000001;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 501.50977,371.4375 -9.96875,0.0352 0,1.01172 3.69336,3.93554 -3.71485,-0.005 0,3.13033 10.01563,0 0,-3.13033 -3.7168,0.007 3.72266,-3.9375 z"
id="path11011-6-8"
style="fill:url(#linearGradient17185);fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:0.80000001;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none"
d="m 490.96446,376.29293 0,2.7317 10.01628,0 0,-2.7317 -4.08834,2.7e-4 4.09374,-4.32989 -0.0312,-1.04691 -9.96874,0.0368 -10e-6,1.01011 4.06251,4.32989 z"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccccccccccc" />
<path
inkscape:connector-curvature="0"
sodipodi:nodetypes="cc"
id="path11013-5-6"
d="m 492.23615,371.9532 7.5,0"
d="m 492.79119,372.47355 7.5,0"
style="fill:none;stroke:#ffffff;stroke-width:1px;stroke-linecap:round;stroke-linejoin:round;stroke-opacity:1" />
<path
inkscape:connector-curvature="0"
@@ -89481,8 +89505,8 @@
inkscape:export-xdpi="90"
inkscape:export-filename="C:\Documents and Settings\Tata\Pulpit\BLENDER ICONSET\Kopia blender\.blender\icons\jendrzych's iconset.png"
id="path10830-6-2-8-8"
d="m 500.10071,377.17478 -8.26386,0 -6e-5,0.90043"
style="fill:none;stroke:url(#linearGradient17872);stroke-width:0.92071104px;stroke-linecap:round;stroke-linejoin:round;stroke-opacity:1;display:inline;enable-background:new"
d="m 500.65575,377.29722 -8.26386,0 -6e-5,1.29834"
style="display:inline;fill:none;stroke:url(#linearGradient17872);stroke-width:0.92071104px;stroke-linecap:round;stroke-linejoin:round;stroke-opacity:1;enable-background:new"
sodipodi:nodetypes="ccc" />
</g>
<g
@@ -92656,6 +92680,56 @@
style="opacity:0.51999996;fill:url(#radialGradient21448-8-143);fill-opacity:1;fill-rule:evenodd;stroke:none" />
</g>
</g>
<g
transform="translate(335.99871,21.048284)"
style="display:inline;enable-background:new"
id="ICON_ROTATE-7">
<rect
y="178"
x="110"
height="16"
width="16"
id="rect37989-8"
style="display:inline;overflow:visible;visibility:visible;opacity:0;fill:#b3b3b3;fill-opacity:1;fill-rule:evenodd;stroke:none;stroke-width:2.79999995;marker:none;enable-background:accumulate" />
<path
style="display:inline;overflow:visible;visibility:visible;fill:none;stroke:#000000;stroke-width:2.4000001;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;enable-background:new"
d="m 114.5,192.5 -3,0 0,-3 m 13,0 0,3 -3,0 m -0.25,-13 3.25,0 0,3 m -13,0 0,-3 3,0"
id="path37498-7"
sodipodi:nodetypes="cccccccccccc"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="cccccccccccc"
id="rect38140-7"
d="m 114.5,192.5 -3,0 0,-3 m 13,0 0,3 -3,0 m -0.25,-13 3.25,0 0,3 m -13,0 0,-3 3,0"
style="display:inline;overflow:visible;visibility:visible;fill:none;stroke:url(#linearGradient18199);stroke-width:1;stroke-linecap:square;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dashoffset:0;stroke-opacity:1;marker:none;enable-background:new"
inkscape:connector-curvature="0" />
<g
transform="matrix(0.59971056,0,0,0.59971056,116.78278,9.7425599)"
style="display:inline;enable-background:new"
id="g23145-9">
<g
id="g23149-4">
<path
id="path39832-9"
style="display:inline;overflow:visible;visibility:visible;fill:none;stroke:#1a1a1a;stroke-width:4.66725159;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;enable-background:accumulate"
d="m -4.3682386,287.81345 1.5,0 c 0.999089,0 2.07885534,1.30514 2.50490386,2.78207 1.06592652,3.69512 2.80867074,9.82446 5.88525404,9.96406 2.6782554,0 1.6181317,-5.11535 3.1736046,-5.26275 l 0.25,0"
sodipodi:nodetypes="cssccc"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ccscc"
d="m 9.3647983,295.22328 -0.4793018,0 c -2.2335161,0 0.1796731,4.94901 -3.4398065,5.09984 -4.44796752,0.18536 -5.37272213,-12.59185 -8.0767581,-12.56237 l -2,0"
style="display:inline;overflow:visible;visibility:visible;fill:none;stroke:#a8df84;stroke-width:2.93474906;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;enable-background:accumulate"
id="path39834-2"
inkscape:connector-curvature="0" />
</g>
<path
id="path39836-9"
style="display:inline;overflow:visible;visibility:visible;opacity:0.35;fill:none;stroke:url(#radialGradient23167-6);stroke-width:3.53503864;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;enable-background:accumulate"
d="M 5.6770841,300.48165 C 0.7393262,300.21066 0.54777814,287.99792 -2.9522219,287.99792"
sodipodi:nodetypes="cc"
inkscape:connector-curvature="0" />
</g>
</g>
</g>
<g
inkscape:groupmode="layer"

[Image diff preview for the SVG file: Before/After, Size: 4.4 MiB each]

View File

@@ -568,7 +568,7 @@ class pyRandomColorShader(StrokeShader):
class py2DCurvatureColorShader(StrokeShader):
"""
Assigns a color (greyscale) to the stroke based on the curvature.
Assigns a color (grayscale) to the stroke based on the curvature.
A higher curvature will yield a brighter color.
"""
def shade(self, stroke):
@@ -584,7 +584,7 @@ class py2DCurvatureColorShader(StrokeShader):
class pyTimeColorShader(StrokeShader):
"""
Assigns a greyscale value that increases for every vertex.
Assigns a grayscale value that increases for every vertex.
The brightness will increase along the stroke.
"""
def __init__(self, step=0.01):

View File

@@ -1170,6 +1170,7 @@ class Seed:
_seed = Seed()
def get_dashed_pattern(linestyle):
"""Extracts the dashed pattern from the various UI options """
pattern = []
@@ -1185,6 +1186,15 @@ def get_dashed_pattern(linestyle):
return pattern
def get_grouped_objects(group):
for ob in group.objects:
if ob.dupli_type == 'GROUP' and ob.dupli_group is not None:
for dupli in get_grouped_objects(ob.dupli_group):
yield dupli
else:
yield ob
integration_types = {
'MEAN': IntegrationType.MEAN,
'MIN': IntegrationType.MIN,
@@ -1267,7 +1277,7 @@ def process(layer_name, lineset_name):
# prepare selection criteria by group of objects
if lineset.select_by_group:
if lineset.group is not None:
names = {getQualifiedObjectName(ob): True for ob in lineset.group.objects}
names = {getQualifiedObjectName(ob): True for ob in get_grouped_objects(lineset.group)}
upred = ObjectNamesUP1D(names, lineset.group_negation == 'EXCLUSIVE')
selection_criteria.append(upred)
# prepare selection criteria by image border

View File

@@ -31,8 +31,9 @@ __all__ = (
import bpy as _bpy
_user_preferences = _bpy.context.user_preferences
error_duplicates = False
error_encoding = False
# (name, file, path)
error_duplicates = []
addons_fake_modules = {}
@@ -57,12 +58,11 @@ def paths():
def modules_refresh(module_cache=addons_fake_modules):
global error_duplicates
global error_encoding
import os
error_duplicates = False
error_encoding = False
error_duplicates.clear()
path_list = paths()
@@ -168,7 +168,7 @@ def modules_refresh(module_cache=addons_fake_modules):
if mod.__file__ != mod_path:
print("multiple addons with the same name:\n %r\n %r" %
(mod.__file__, mod_path))
error_duplicates = True
error_duplicates.append((mod.bl_info["name"], mod.__file__, mod_path))
elif mod.__time__ != os.path.getmtime(mod_path):
print("reloading addon:",

View File

@@ -933,6 +933,9 @@ km = kc.keymaps.new('3D View', space_type='VIEW_3D', region_type='WINDOW', modal
kmi = km.keymap_items.new('view3d.cursor3d', 'ACTIONMOUSE', 'PRESS')
kmi = km.keymap_items.new('view3d.rotate', 'LEFTMOUSE', 'PRESS', alt=True)
kmi = km.keymap_items.new('view3d.manipulator', 'LEFTMOUSE', 'PRESS', shift=True)
kmi.properties.release_confirm = True
kmi.properties.use_planar_constraint = True
kmi = km.keymap_items.new('view3d.manipulator', 'LEFTMOUSE', 'PRESS', any=True)
kmi.properties.release_confirm = True
kmi = km.keymap_items.new('view3d.move', 'MIDDLEMOUSE', 'PRESS', alt=True)

View File

@@ -62,28 +62,66 @@ class SCENE_OT_freestyle_fill_range_by_selection(bpy.types.Operator):
m = linestyle.alpha_modifiers[self.name]
else:
m = linestyle.thickness_modifiers[self.name]
# Find the source object
# Find the reference object
if m.type == 'DISTANCE_FROM_CAMERA':
source = scene.camera
ref = scene.camera
matrix_to_camera = ref.matrix_world.inverted()
elif m.type == 'DISTANCE_FROM_OBJECT':
if m.target is None:
self.report({'ERROR'}, "Target object not specified")
return {'CANCELLED'}
source = m.target
ref = m.target
target_location = ref.location
else:
self.report({'ERROR'}, "Unexpected modifier type: " + m.type)
return {'CANCELLED'}
# Find selected mesh objects
selection = [ob for ob in scene.objects if ob.select and ob.type == 'MESH' and ob.name != source.name]
if selection:
# Compute the min/max distance between selected mesh objects and the source
# Find selected vertices in editmesh
ob = bpy.context.active_object
if ob.type == 'MESH' and ob.mode == 'EDIT' and ob.name != ref.name:
bpy.ops.object.mode_set(mode='OBJECT')
selected_verts = [v for v in bpy.context.active_object.data.vertices if v.select]
bpy.ops.object.mode_set(mode='EDIT')
# Compute the min/max distance from the reference to mesh vertices
min_dist = sys.float_info.max
max_dist = -min_dist
for ob in selection:
for vert in ob.data.vertices:
dist = (ob.matrix_world * vert.co - source.location).length
if m.type == 'DISTANCE_FROM_CAMERA':
ob_to_cam = matrix_to_camera * ob.matrix_world
for vert in selected_verts:
# dist in the camera space
dist = (ob_to_cam * vert.co).length
min_dist = min(dist, min_dist)
max_dist = max(dist, max_dist)
elif m.type == 'DISTANCE_FROM_OBJECT':
for vert in selected_verts:
# dist in the world space
dist = (ob.matrix_world * vert.co - target_location).length
min_dist = min(dist, min_dist)
max_dist = max(dist, max_dist)
# Fill the Range Min/Max entries with the computed distances
m.range_min = min_dist
m.range_max = max_dist
return {'FINISHED'}
# Find selected mesh objects
selection = [ob for ob in scene.objects if ob.select and ob.type == 'MESH' and ob.name != ref.name]
if selection:
# Compute the min/max distance from the reference to mesh vertices
min_dist = sys.float_info.max
max_dist = -min_dist
if m.type == 'DISTANCE_FROM_CAMERA':
for ob in selection:
ob_to_cam = matrix_to_camera * ob.matrix_world
for vert in ob.data.vertices:
# dist in the camera space
dist = (ob_to_cam * vert.co).length
min_dist = min(dist, min_dist)
max_dist = max(dist, max_dist)
elif m.type == 'DISTANCE_FROM_OBJECT':
for ob in selection:
for vert in ob.data.vertices:
# dist in the world space
dist = (ob.matrix_world * vert.co - target_location).length
min_dist = min(dist, min_dist)
max_dist = max(dist, max_dist)
# Fill the Range Min/Max entries with the computed distances
m.range_min = min_dist
m.range_max = max_dist

View File

@@ -1008,11 +1008,9 @@ class WM_OT_doc_view(Operator):
doc_id = doc_id
if bpy.app.version_cycle == "release":
_prefix = ("https://www.blender.org/api/blender_python_api_%s%s_release" %
("_".join(str(v) for v in bpy.app.version[:2]), bpy.app.version_char))
_prefix = ("https://docs.blender.org/api/blender_python_api_current")
else:
_prefix = ("https://www.blender.org/api/blender_python_api_%s" %
"_".join(str(v) for v in bpy.app.version))
_prefix = ("https://docs.blender.org/api/blender_python_api_master")
def execute(self, context):
url = _wm_doc_get_id(self.doc_id, do_url=True, url_prefix=self._prefix)

View File

@@ -951,6 +951,23 @@ class DATA_PT_modifiers(ModifierButtonsPanel, Panel):
def SURFACE(self, layout, ob, md):
layout.label(text="Settings are inside the Physics tab")
def SURFACE_DEFORM(self, layout, ob, md):
col = layout.column()
col.active = not md.is_bound
col.prop(md, "target")
col.prop(md, "falloff")
layout.separator()
col = layout.column()
col.active = md.target is not None
if md.is_bound:
col.operator("object.surfacedeform_bind", text="Unbind")
else:
col.operator("object.surfacedeform_bind", text="Bind")
def UV_PROJECT(self, layout, ob, md):
split = layout.split()
@@ -1320,7 +1337,9 @@ class DATA_PT_modifiers(ModifierButtonsPanel, Panel):
row.prop(md, "thickness_vertex_group", text="Factor")
col.prop(md, "use_crease", text="Crease Edges")
col.prop(md, "crease_weight", text="Crease Weight")
row = col.row()
row.active = md.use_crease
row.prop(md, "crease_weight", text="Crease Weight")
col = split.column()

View File

@@ -469,8 +469,86 @@ class SceneButtonsPanel:
bl_context = "scene"
class SCENE_PT_game_physics(SceneButtonsPanel, Panel):
bl_label = "Physics"
COMPAT_ENGINES = {'BLENDER_GAME'}
@classmethod
def poll(cls, context):
scene = context.scene
return (scene.render.engine in cls.COMPAT_ENGINES)
def draw(self, context):
layout = self.layout
gs = context.scene.game_settings
layout.prop(gs, "physics_engine", text="Engine")
if gs.physics_engine != 'NONE':
layout.prop(gs, "physics_gravity", text="Gravity")
split = layout.split()
col = split.column()
col.label(text="Physics Steps:")
sub = col.column(align=True)
sub.prop(gs, "physics_step_max", text="Max")
sub.prop(gs, "physics_step_sub", text="Substeps")
col.prop(gs, "fps", text="FPS")
col = split.column()
col.label(text="Logic Steps:")
col.prop(gs, "logic_step_max", text="Max")
col = layout.column()
col.label(text="Physics Deactivation:")
sub = col.row(align=True)
sub.prop(gs, "deactivation_linear_threshold", text="Linear Threshold")
sub.prop(gs, "deactivation_angular_threshold", text="Angular Threshold")
sub = col.row()
sub.prop(gs, "deactivation_time", text="Time")
col = layout.column()
col.prop(gs, "use_occlusion_culling", text="Occlusion Culling")
sub = col.column()
sub.active = gs.use_occlusion_culling
sub.prop(gs, "occlusion_culling_resolution", text="Resolution")
else:
split = layout.split()
col = split.column()
col.label(text="Physics Steps:")
col.prop(gs, "fps", text="FPS")
col = split.column()
col.label(text="Logic Steps:")
col.prop(gs, "logic_step_max", text="Max")
class SCENE_PT_game_physics_obstacles(SceneButtonsPanel, Panel):
bl_label = "Obstacle Simulation"
bl_options = {'DEFAULT_CLOSED'}
COMPAT_ENGINES = {'BLENDER_GAME'}
@classmethod
def poll(cls, context):
scene = context.scene
return (scene.render.engine in cls.COMPAT_ENGINES)
def draw(self, context):
layout = self.layout
gs = context.scene.game_settings
layout.prop(gs, "obstacle_simulation", text="Type")
if gs.obstacle_simulation != 'NONE':
layout.prop(gs, "level_height")
layout.prop(gs, "show_obstacle_simulation")
class SCENE_PT_game_navmesh(SceneButtonsPanel, Panel):
bl_label = "Navigation mesh"
bl_label = "Navigation Mesh"
bl_options = {'DEFAULT_CLOSED'}
COMPAT_ENGINES = {'BLENDER_GAME'}
@@ -484,7 +562,7 @@ class SCENE_PT_game_navmesh(SceneButtonsPanel, Panel):
rd = context.scene.game_settings.recast_data
layout.operator("mesh.navmesh_make", text="Build navigation mesh")
layout.operator("mesh.navmesh_make", text="Build Navigation Mesh")
col = layout.column()
col.label(text="Rasterization:")
@@ -656,83 +734,6 @@ class WORLD_PT_game_mist(WorldButtonsPanel, Panel):
layout.prop(world.mist_settings, "intensity", text="Minimum Intensity")
class WORLD_PT_game_physics(WorldButtonsPanel, Panel):
bl_label = "Physics"
COMPAT_ENGINES = {'BLENDER_GAME'}
@classmethod
def poll(cls, context):
scene = context.scene
return (scene.world and scene.render.engine in cls.COMPAT_ENGINES)
def draw(self, context):
layout = self.layout
gs = context.scene.game_settings
layout.prop(gs, "physics_engine", text="Engine")
if gs.physics_engine != 'NONE':
layout.prop(gs, "physics_gravity", text="Gravity")
split = layout.split()
col = split.column()
col.label(text="Physics Steps:")
sub = col.column(align=True)
sub.prop(gs, "physics_step_max", text="Max")
sub.prop(gs, "physics_step_sub", text="Substeps")
col.prop(gs, "fps", text="FPS")
col = split.column()
col.label(text="Logic Steps:")
col.prop(gs, "logic_step_max", text="Max")
col = layout.column()
col.label(text="Physics Deactivation:")
sub = col.row(align=True)
sub.prop(gs, "deactivation_linear_threshold", text="Linear Threshold")
sub.prop(gs, "deactivation_angular_threshold", text="Angular Threshold")
sub = col.row()
sub.prop(gs, "deactivation_time", text="Time")
col = layout.column()
col.prop(gs, "use_occlusion_culling", text="Occlusion Culling")
sub = col.column()
sub.active = gs.use_occlusion_culling
sub.prop(gs, "occlusion_culling_resolution", text="Resolution")
else:
split = layout.split()
col = split.column()
col.label(text="Physics Steps:")
col.prop(gs, "fps", text="FPS")
col = split.column()
col.label(text="Logic Steps:")
col.prop(gs, "logic_step_max", text="Max")
class WORLD_PT_game_physics_obstacles(WorldButtonsPanel, Panel):
bl_label = "Obstacle Simulation"
COMPAT_ENGINES = {'BLENDER_GAME'}
@classmethod
def poll(cls, context):
scene = context.scene
return (scene.world and scene.render.engine in cls.COMPAT_ENGINES)
def draw(self, context):
layout = self.layout
gs = context.scene.game_settings
layout.prop(gs, "obstacle_simulation", text="Type")
if gs.obstacle_simulation != 'NONE':
layout.prop(gs, "level_height")
layout.prop(gs, "show_obstacle_simulation")
class DataButtonsPanel:
bl_space_type = 'PROPERTIES'
bl_region_type = 'WINDOW'

View File

@@ -152,6 +152,33 @@ class OBJECT_PT_relations(ObjectButtonsPanel, Panel):
sub.active = (parent is not None)
class OBJECT_PT_relations_extras(ObjectButtonsPanel, Panel):
bl_label = "Relations Extras"
bl_options = {'DEFAULT_CLOSED'}
def draw(self, context):
layout = self.layout
ob = context.object
split = layout.split()
if context.scene.render.engine != 'BLENDER_GAME':
col = split.column()
col.label(text="Tracking Axes:")
col.prop(ob, "track_axis", text="Axis")
col.prop(ob, "up_axis", text="Up Axis")
col = split.column()
col.prop(ob, "use_slow_parent")
row = col.row()
row.active = ((ob.parent is not None) and (ob.use_slow_parent))
row.prop(ob, "slow_parent_offset", text="Offset")
layout.prop(ob, "use_extra_recalc_object")
layout.prop(ob, "use_extra_recalc_data")
class GROUP_MT_specials(Menu):
bl_label = "Group Specials"
@@ -296,33 +323,6 @@ class OBJECT_PT_duplication(ObjectButtonsPanel, Panel):
layout.prop(ob, "dupli_group", text="Group")
class OBJECT_PT_relations_extras(ObjectButtonsPanel, Panel):
bl_label = "Relations Extras"
bl_options = {'DEFAULT_CLOSED'}
def draw(self, context):
layout = self.layout
ob = context.object
split = layout.split()
if context.scene.render.engine != 'BLENDER_GAME':
col = split.column()
col.label(text="Tracking Axes:")
col.prop(ob, "track_axis", text="Axis")
col.prop(ob, "up_axis", text="Up Axis")
col = split.column()
col.prop(ob, "use_slow_parent")
row = col.row()
row.active = ((ob.parent is not None) and (ob.use_slow_parent))
row.prop(ob, "slow_parent_offset", text="Offset")
layout.prop(ob, "use_extra_recalc_object")
layout.prop(ob, "use_extra_recalc_data")
from bl_ui.properties_animviz import (
MotionPathButtonsPanel,
OnionSkinButtonsPanel,

View File

@@ -274,6 +274,8 @@ def basic_force_field_settings_ui(self, context, field):
col.prop(field, "use_global_coords", text="Global")
elif field.type == 'HARMONIC':
col.prop(field, "use_multiple_springs")
if field.type == 'FORCE':
col.prop(field, "use_gravity_falloff", text="Gravitation")
split = layout.split()

View File

@@ -377,7 +377,7 @@ class RENDER_PT_stamp(RenderButtonsPanel, Panel):
sub.active = rd.use_stamp_note
sub.prop(rd, "stamp_note_text", text="")
if rd.use_sequencer:
layout.label("Sequencer")
layout.label("Sequencer:")
layout.prop(rd, "use_stamp_strip_meta")

View File

@@ -42,10 +42,11 @@ class GRAPH_HT_header(Header):
dopesheet_filter(layout, context)
layout.prop(st, "use_normalization", text="Normalize")
row = layout.row()
row.active = st.use_normalization
row.prop(st, "use_auto_normalization", text="Auto")
row = layout.row(align=True)
row.prop(st, "use_normalization", icon='NORMALIZE_FCURVES', text="Normalize", toggle=True)
sub = row.row(align=True)
sub.active = st.use_normalization
sub.prop(st, "use_auto_normalization", icon='FILE_REFRESH', text="", toggle=True)
row = layout.row(align=True)

View File

@@ -652,17 +652,39 @@ class SEQUENCER_PT_effect(SequencerButtonsPanel, Panel):
col.prop(strip, "rotation_start", text="Rotation")
elif strip.type == 'MULTICAM':
layout.prop(strip, "multicam_source")
col = layout.column(align=True)
strip_channel = strip.channel
row = layout.row(align=True)
sub = row.row(align=True)
sub.scale_x = 2.0
col.prop(strip, "multicam_source", text="Source Channel")
sub.operator("screen.animation_play", text="", icon='PAUSE' if context.screen.is_animation_playing else 'PLAY')
# The multicam strip needs at least 2 strips to be useful
if strip_channel > 2:
BT_ROW = 4
col.label("Cut To:")
row = col.row()
for i in range(1, strip_channel):
if (i % BT_ROW) == 1:
row = col.row(align=True)
# Workaround - .enabled has to have a separate UI block to work
if i == strip.multicam_source:
sub = row.row(align=True)
sub.enabled = False
sub.operator("sequencer.cut_multicam", text="%d" % i).camera = i
else:
sub_1 = row.row(align=True)
sub_1.enabled = True
sub_1.operator("sequencer.cut_multicam", text="%d" % i).camera = i
if strip.channel > BT_ROW and (strip_channel - 1) % BT_ROW:
for i in range(strip.channel, strip_channel + ((BT_ROW + 1 - strip_channel) % BT_ROW)):
row.label("")
else:
col.separator()
col.label(text="Two or more channels are needed below this strip", icon="INFO")
row.label("Cut To")
for i in range(1, strip.channel):
row.operator("sequencer.cut_multicam", text="%d" % i).camera = i
elif strip.type == 'TEXT':
col = layout.column()

View File

@@ -49,7 +49,10 @@ class TIME_HT_header(Header):
row.prop(scene, "frame_preview_start", text="Start")
row.prop(scene, "frame_preview_end", text="End")
layout.prop(scene, "frame_current", text="")
if scene.show_subframe:
layout.prop(scene, "frame_float", text="")
else:
layout.prop(scene, "frame_current", text="")
layout.separator()
@@ -135,6 +138,7 @@ class TIME_MT_view(Menu):
layout.prop(st, "show_frame_indicator")
layout.prop(scene, "show_keys_from_selected_only")
layout.prop(scene, "show_subframe")
layout.separator()

View File

@@ -1243,7 +1243,7 @@ class USERPREF_MT_addons_online_resources(Menu):
"wm.url_open", text="API Concepts", icon='URL',
).url = bpy.types.WM_OT_doc_view._prefix + "/info_quickstart.html"
layout.operator("wm.url_open", text="Add-on Tutorial", icon='URL',
).url = "http://www.blender.org/api/blender_python_api_current/info_tutorial_addon.html"
).url = bpy.types.WM_OT_doc_view._prefix + "/info_tutorial_addon.html"
class USERPREF_PT_addons(Panel):
@@ -1317,11 +1317,18 @@ class USERPREF_PT_addons(Panel):
# set in addon_utils.modules_refresh()
if addon_utils.error_duplicates:
self.draw_error(col,
"Multiple addons using the same name found!\n"
"likely a problem with the script search path.\n"
"(see console for details)",
)
box = col.box()
row = box.row()
row.label("Multiple addons with the same name found!")
row.label(icon='ERROR')
box.label("Please delete one of each pair:")
for (addon_name, addon_file, addon_path) in addon_utils.error_duplicates:
box.separator()
sub_col = box.column(align=True)
sub_col.label(addon_name + ":")
sub_col.label(" " + addon_file)
sub_col.label(" " + addon_path)
if addon_utils.error_encoding:
self.draw_error(col,

View File

@@ -1346,9 +1346,9 @@ class VIEW3D_MT_object_clear(Menu):
def draw(self, context):
layout = self.layout
layout.operator("object.location_clear", text="Location")
layout.operator("object.rotation_clear", text="Rotation")
layout.operator("object.scale_clear", text="Scale")
layout.operator("object.location_clear", text="Location").clear_delta = False
layout.operator("object.rotation_clear", text="Rotation").clear_delta = False
layout.operator("object.scale_clear", text="Scale").clear_delta = False
layout.operator("object.origin_clear", text="Origin")
@@ -1744,6 +1744,7 @@ class VIEW3D_MT_brush_paint_modes(Menu):
layout.prop(brush, "use_paint_weight", text="Weight Paint")
layout.prop(brush, "use_paint_image", text="Texture Paint")
# ********** Vertex paint menu **********
@@ -1813,6 +1814,7 @@ class VIEW3D_MT_vertex_group(Menu):
layout.operator("object.vertex_group_remove", text="Remove Active Group").all = False
layout.operator("object.vertex_group_remove", text="Remove All Groups").all = True
# ********** Weight paint menu **********
@@ -1851,6 +1853,7 @@ class VIEW3D_MT_paint_weight(Menu):
layout.operator("paint.weight_set")
# ********** Sculpt menu **********
@@ -2004,6 +2007,7 @@ class VIEW3D_MT_particle_specials(Menu):
class VIEW3D_MT_particle_showhide(ShowHideMenu, Menu):
_operator_name = "particle"
# ********** Pose Menu **********
@@ -2277,6 +2281,7 @@ class VIEW3D_MT_bone_options_disable(Menu, BoneOptions):
bl_label = "Disable Bone Options"
type = 'DISABLE'
# ********** Edit Menus, suffix from ob.type **********
@@ -2444,6 +2449,7 @@ class VIEW3D_MT_edit_mesh_vertices(Menu):
with_bullet = bpy.app.build_options.bullet
layout.operator("mesh.merge")
layout.operator("mesh.remove_doubles")
layout.operator("mesh.rip_move")
layout.operator("mesh.rip_move_fill")
layout.operator("mesh.rip_edge_move")
@@ -2466,7 +2472,6 @@ class VIEW3D_MT_edit_mesh_vertices(Menu):
if with_bullet:
layout.operator("mesh.convex_hull")
layout.operator("mesh.vertices_smooth")
layout.operator("mesh.remove_doubles")
layout.operator("mesh.blend_from_shape")
@@ -2623,6 +2628,7 @@ class VIEW3D_MT_edit_mesh_clean(Menu):
layout.operator("mesh.face_make_planar")
layout.operator("mesh.vert_connect_nonplanar")
layout.operator("mesh.vert_connect_concave")
layout.operator("mesh.remove_doubles")
layout.operator("mesh.fill_holes")

View File

@@ -113,25 +113,25 @@ static OArchive create_archive(std::ostream *ostream,
Alembic::Abc::MetaData &md,
bool ogawa)
{
md.set(Alembic::Abc::kApplicationNameKey, "Blender");
md.set(Alembic::Abc::kApplicationNameKey, "Blender");
md.set(Alembic::Abc::kUserDescriptionKey, scene_name);
time_t raw_time;
time(&raw_time);
char buffer[128];
time_t raw_time;
time(&raw_time);
char buffer[128];
#if defined _WIN32 || defined _WIN64
ctime_s(buffer, 128, &raw_time);
ctime_s(buffer, 128, &raw_time);
#else
ctime_r(&raw_time, buffer);
ctime_r(&raw_time, buffer);
#endif
const std::size_t buffer_len = strlen(buffer);
if (buffer_len > 0 && buffer[buffer_len - 1] == '\n') {
buffer[buffer_len - 1] = '\0';
}
const std::size_t buffer_len = strlen(buffer);
if (buffer_len > 0 && buffer[buffer_len - 1] == '\n') {
buffer[buffer_len - 1] = '\0';
}
md.set(Alembic::Abc::kDateWrittenKey, buffer);
md.set(Alembic::Abc::kDateWrittenKey, buffer);
ErrorHandler::Policy policy = ErrorHandler::kThrowPolicy;

View File

@@ -102,7 +102,7 @@ void AbcCurveWriter::do_write()
const BPoint *point = nurbs->bp;
for (int i = 0; i < totpoint; ++i, ++point) {
copy_zup_yup(temp_vert.getValue(), point->vec);
copy_yup_from_zup(temp_vert.getValue(), point->vec);
verts.push_back(temp_vert);
weights.push_back(point->vec[3]);
widths.push_back(point->radius);
@@ -118,7 +118,7 @@ void AbcCurveWriter::do_write()
/* TODO(kevin): store info about handles, Alembic doesn't have this. */
for (int i = 0; i < totpoint; ++i, ++bezier) {
copy_zup_yup(temp_vert.getValue(), bezier->vec[1]);
copy_yup_from_zup(temp_vert.getValue(), bezier->vec[1]);
verts.push_back(temp_vert);
widths.push_back(bezier->radius);
}
@@ -322,7 +322,7 @@ void read_curve_sample(Curve *cu, const ICurvesSchema &schema, const float time)
weight = (*weights)[idx];
}
copy_yup_zup(bp->vec, pos.getValue());
copy_zup_from_yup(bp->vec, pos.getValue());
bp->vec[3] = weight;
bp->f1 = SELECT;
bp->radius = radius;
@@ -361,7 +361,7 @@ void read_curve_sample(Curve *cu, const ICurvesSchema &schema, const float time)
* object directly and create a new DerivedMesh from that. Also we might need to
* create new or delete existing NURBS in the curve.
*/
DerivedMesh *AbcCurveReader::read_derivedmesh(DerivedMesh */*dm*/, const float time, int /*read_flag*/, const char **/*err_str*/)
DerivedMesh *AbcCurveReader::read_derivedmesh(DerivedMesh * /*dm*/, const float time, int /*read_flag*/, const char ** /*err_str*/)
{
ISampleSelector sample_sel(time);
const ICurvesSchema::Sample sample = m_curves_schema.getValue(sample_sel);
@@ -389,7 +389,7 @@ DerivedMesh *AbcCurveReader::read_derivedmesh(DerivedMesh */*dm*/, const float t
for (int i = 0; i < totpoint; ++i, ++point, ++vertex_idx) {
const Imath::V3f &pos = (*positions)[vertex_idx];
copy_yup_zup(point->vec, pos.getValue());
copy_zup_from_yup(point->vec, pos.getValue());
}
}
else if (nurbs->bezt) {
@@ -397,7 +397,7 @@ DerivedMesh *AbcCurveReader::read_derivedmesh(DerivedMesh */*dm*/, const float t
for (int i = 0; i < totpoint; ++i, ++bezier, ++vertex_idx) {
const Imath::V3f &pos = (*positions)[vertex_idx];
copy_yup_zup(bezier->vec[1], pos.getValue());
copy_zup_from_yup(bezier->vec[1], pos.getValue());
}
}
}

View File

@@ -327,6 +327,11 @@ static void read_custom_data_ex(const ICompoundProperty &prop,
}
else if (data_type == CD_MLOOPUV) {
IV2fGeomParam uv_param(prop, prop_header.getName());
if (!uv_param.isIndexed()) {
return;
}
IV2fGeomParam::Sample sample;
uv_param.getIndexed(sample, iss);

Some files were not shown because too many files have changed in this diff.