Line Art further improvement list #87739

Open
opened 2021-04-23 09:40:55 +02:00 by YimingWu · 70 comments
Member

Removed achieved goals from the list as of 2021/03/01. Added new items as development requires.


Line Art Further Improvement Proposal

Current major problems

The Line Art GPencil modifier has been useful for creating feature lines for 3D scenes. However, since its development and integration, users have found some problems with the implementation, and it would be good to address them in further updates. The main problems are:

Object Loading

Currently line art is inefficient in multiple aspects. First of all, it needs to load geometry with adjacency information, but the implementation right now uses BMesh conversion during loading, which is very inefficient (it's single-threaded and does a lot of extra work that line art doesn't end up using). A specialized line art loading path is needed to speed up this stage.

Sebastian provided flame graphs showing the major performance bottlenecks in geometry loading and intersection calculation in the current line art (master) code. The time range shown in these graphs represents one full line art calculation for the test scene "Mr. Elephant", which has imbalanced mesh density across the object list.

Seen below in blue is the object loader's execution time. Here 10s-23s is the line art calculation; one thread in the object loader ran far too long and took more than half the total time in that stage.

图片.png

Overall calculation performance

The 2D acceleration structure I developed for line art couldn't utilize the machine's full capacity and ended up doing far too much duplicated iteration during calculation, which needs improvement. Because of this design, it's also relatively hard to thread triangle intersection detection efficiently (one of line art's major advantages over Freestyle), which slowed everything down further.

Seen below in blue is the 2D acceleration structure build time, for the same file and time range as the example above. As it indicates, roughly one third of the time (~4s) is spent building the acceleration structure, while the subsequent occlusion query took less than half a second.

图片.png

Seen below in blue, with intersection calculation turned on, is the combined intersection + acceleration structure build time. A considerable performance bottleneck (17 seconds for this stage alone) is clearly visible.

图片.png

In the temp-lineart-contained branch, an ad-hoc threading solution is in place on top of the existing code. It does improve performance to an extent, but the result is still far from ideal.
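To make the duplicated-iteration problem concrete, here is a minimal sketch of a screen-space tile grid of the kind described above: triangles are binned into every tile their 2D bounding box overlaps, so a triangle spanning many tiles is visited repeatedly unless the query deduplicates. All class and method names here are illustrative, not Blender's actual code.

```python
class TileGrid:
    """Toy 2D acceleration structure: bin triangles into screen-space tiles."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.tiles = [[[] for _ in range(cols)] for _ in range(rows)]

    def _tile_range(self, xmin, ymin, xmax, ymax):
        # Map a normalized [0, 1] screen-space bounding box to tile indices.
        c0 = max(0, min(self.cols - 1, int(xmin * self.cols)))
        c1 = max(0, min(self.cols - 1, int(xmax * self.cols)))
        r0 = max(0, min(self.rows - 1, int(ymin * self.rows)))
        r1 = max(0, min(self.rows - 1, int(ymax * self.rows)))
        return r0, r1, c0, c1

    def insert_triangle(self, tri_id, bbox):
        r0, r1, c0, c1 = self._tile_range(*bbox)
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                self.tiles[r][c].append(tri_id)  # one entry per covered tile

    def query(self, bbox):
        # A set is needed because large triangles appear in many tiles --
        # this duplication is exactly the overhead described above.
        r0, r1, c0, c1 = self._tile_range(*bbox)
        out = set()
        for r in range(r0, r1 + 1):
            for c in range(c0, c1 + 1):
                out.update(self.tiles[r][c])
        return out

grid = TileGrid(4, 4)
grid.insert_triangle(1, (0.10, 0.10, 0.20, 0.20))  # upper-left area
grid.insert_triangle(2, (0.80, 0.80, 0.95, 0.95))  # lower-right area
print(grid.query((0.05, 0.05, 0.25, 0.25)))  # only triangle 1 is a candidate
```

An occlusion query for a small screen region then only needs to test the triangles returned by `query`, rather than the whole scene.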

Silhouette/Shadow support

Due to the unique way line art calculates and registers edge visibility, adding silhouette support is rather complicated (even though silhouettes are probably one of the easiest line types to detect in most feature line renderers), yet silhouettes are vital in a lot of line-based artwork. Some time ago I researched a method line art could use to calculate silhouettes and projected shadows arithmetically in a similar fashion, and the lineart-shadow branch has an implementation of it. However, it suffers from the same performance problems mentioned above.

Stroke smoothness

Stroke smoothness has always been a problem in geometry-based feature line rendering techniques. Line art already has a few ways to let users smooth out the output, but stroke quality is still not ideal in many cases, especially when rendering subdivided meshes. Jaggy and overlapping strokes are still not tackled at the root cause.

Current main goals

To fix these problems, there are some improvements we can make:

  • Implement embree core (use embree instead of my own acceleration algorithm).
    • The embree core is not as fast as I expected, but using embree only for intersection did speed things up in general compared to the legacy line art algorithm. This needs to be settled.
    • Eventually settled on an atomic compare-and-swap method for adding triangles; this is the fastest method so far.
  • Implement faster object loading (with help from Sebastian's code).
  • Shadow casting support (if the embree code is a success, this will also use embree).
    • Need different types for cast shadow lines and the light/shadow separation line (light contour).
    • Silhouette/non-silhouette feature line types (needs to reuse the shadow code to do front-to-back casting and thereby determine whether a stroke is in front of any triangle; see below).
  • Stroke quality improvements:
    • Smooth contour modifier (for mesh).
    • Overlapping stroke reduction.

Updates as of 2021/05/16:

(All updates are in the weekly notes on the wiki.)

  • The faster object loading code is in master now (yay!).
  • For now we've settled on an acceleration method called "atomic compare and swap" for add_triangles, which provided a large performance boost over the legacy acceleration algorithm. Embree turned out to be "not that helpful". Here's the performance result as of now:

图片.png
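The "atomic compare and swap" idea above is a lock-free slot reservation: each thread claims the next index in a shared triangle array via CAS, retrying on contention. The real code would use C atomics (e.g. Blender's atomic_ops); Python has no native CAS, so the sketch below emulates one with a lock purely to show the pattern, not the performance. All names are illustrative.

```python
import threading

class AtomicInt:
    """Emulated atomic integer; in C this would be a plain atomic word."""

    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        # Atomically: if value == expected, set it to new and report success.
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

def add_triangle(counter, storage, tri):
    """Reserve a unique slot via CAS, then write without further locking."""
    while True:                      # retry loop, as with real CAS
        old = counter.value
        if counter.compare_and_swap(old, old + 1):
            storage[old] = tri       # this thread now owns slot `old`
            return old

counter = AtomicInt(0)
storage = [None] * 1000
threads = [
    threading.Thread(
        target=lambda i=i: [add_triangle(counter, storage, (i, k))
                            for k in range(100)])
    for i in range(10)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 1000: every triangle got its own slot, none lost
```

The key property is that the expensive work (writing the triangle) happens outside any critical section; only the tiny index bump is contended, which is why this beat coarser locking in the measurements above.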


Development plan

Embree core

Time: 2 weeks for fully working core. More time for geometry checking and debugging

Rewrite the line art triangle intersection and occlusion code with embree acceleration. The goal is for line art to run much faster, and potentially to eliminate the long hangs that occur when dense triangle concentrations appear at one point in image space.

Embree should be better accelerated than my own structure, but the two methods handle "potential collisions" differently: embree works in 3D, while the original line art works in 2D. Depending on scene structure and camera angle, performance may vary, so it needs testing once the code is done to see how much it really improves. A preliminary embree-based tri-tri intersection implementation is done, and it showed reasonable acceleration in the intersection detection stage, so it shows promise.

Current embree (tri-tri intersection only) performance improvements:

图片.png

Another great benefit of using embree is that the code could more easily be moved to the GPU for further speedups (GPU embree line art is not in the scope of this project).

A fully working new core should take no more than 2 weeks to code from the current state (maybe a little extra time for the math). I can picture precision-related bugs that may need (way?) more time to track down, but there could also be very few or none. In my experience with the legacy line art code, the image-space cutting function was pretty robust, but the triangle/line geometry side is always troublesome; I'll need to assess the situation after the initial implementation.
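Whether the broad phase is the legacy 2D grid or an embree BVH, its job in intersection detection is the same: cheaply reject triangle pairs whose bounding boxes cannot overlap, before any exact tri-tri test. A brute-force sketch of that filtering idea (illustrative only, not how embree builds its BVH):

```python
def aabb(tri):
    """Axis-aligned bounding box of a triangle given as 3 (x, y, z) tuples."""
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabb_overlap(a, b):
    """Two boxes overlap iff they overlap on every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def candidate_pairs(tris):
    """All triangle pairs surviving the broad phase; a BVH does this in
    O(n log n) instead of this O(n^2) scan."""
    boxes = [aabb(t) for t in tris]
    return [(i, j)
            for i in range(len(tris))
            for j in range(i + 1, len(tris))
            if aabb_overlap(boxes[i], boxes[j])]

t0 = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
t1 = [(0.5, 0.5, -0.5), (0.5, 0.5, 0.5), (1.5, 0.5, 0)]   # near t0
t2 = [(10, 10, 10), (11, 10, 10), (10, 11, 10)]           # far away
print(candidate_pairs([t0, t1, t2]))  # only (0, 1) survives the broad phase
```

Only the surviving pairs then go through the exact (and much more expensive) triangle-triangle intersection math.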

Faster object loading

Time: 1 week or less for it to work.

Again, this is just a performance goal. @ZedDB did some work on this, and it's also promising. The main principle is to avoid converting to BMesh to get the adjacency info. This should be quick to integrate into lineart embree.
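Conceptually, the loader needs per-edge face adjacency (to detect creases, contours, and other surface features) straight from a triangle index list, without round-tripping through BMesh. A freestanding sketch of that idea (not Blender's actual loader code):

```python
from collections import defaultdict

def edge_adjacency(triangles):
    """Map each undirected edge (v0, v1) to the list of faces using it."""
    adj = defaultdict(list)
    for face_index, (a, b, c) in enumerate(triangles):
        for v0, v1 in ((a, b), (b, c), (c, a)):
            # Canonicalize so (1, 2) and (2, 1) are the same edge.
            adj[(min(v0, v1), max(v0, v1))].append(face_index)
    return adj

# Two triangles sharing edge (1, 2):
tris = [(0, 1, 2), (1, 3, 2)]
adj = edge_adjacency(tris)
print(adj[(1, 2)])  # [0, 1] -> shared edge, candidate crease/contour
print(adj[(0, 1)])  # [0]    -> boundary edge
```

Because each face's edges are processed independently, this kind of pass is straightforward to chunk across threads, unlike the single-threaded BMesh conversion.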

Shadow casting and silhouette support

Time: 2.5(?) weeks or less for shadows to work, then about 2 weeks for the silhouette work.

This is to use embree to support shadow casting. If embree turns out not to be beneficial, I'll multi-thread the current shadow code instead. There's already a lineart-shadow branch that uses the legacy line art algorithm to cast shadows. It works accurately (I found only very few missing lines in a cityscape scene with many buildings and overlapping edges), but it isn't multi-threaded, so it's slow.

Silhouette calculation in principle shares the same "shadow result" as if the light were at the camera position, so it has to be done after shadow casting support works correctly.
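The classic local silhouette criterion makes the light-at-camera equivalence intuitive: an edge lies on the silhouette when its two adjacent faces face opposite sides of the camera, i.e. one is "lit" and one is "shadowed" by a light placed at the camera position. An illustrative sketch (not the branch's actual code):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def is_silhouette_edge(n1, n2, point_on_edge, camera_pos):
    """True when adjacent face normals n1, n2 face opposite sides of the
    camera -- one front-facing, one back-facing."""
    view = sub(camera_pos, point_on_edge)
    return dot(n1, view) * dot(n2, view) < 0.0

camera = (0.0, 0.0, 5.0)
edge_point = (0.0, 0.0, 0.0)
print(is_silhouette_edge((0, 0, 1), (0, 0, -1), edge_point, camera))   # True
print(is_silhouette_edge((0, 0, 1), (0.1, 0, 1), edge_point, camera))  # False
```

The shadow-based approach generalizes this per-edge sign test, which is why silhouette support naturally reuses the shadow casting machinery.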

Example of current shadow implementation, showing accurate projection result:

图片.png 图片.png

There's one problem with shadow/silhouette support: line art has to run twice, first to project shadows (much like creating a shadow buffer in real-time graphics), then again from the actual camera. So the UI representation needs some additional thought.

Maybe 4 weeks for these; likely less, depending on how robust the code turns out to be.

Update on 3/31:

It's possible to do an overall selection of "inside/outside shadow region", which means we can erase lines in lit/dark regions selectively. This can be done once I figure out a way to make the indices match between the two line art runs.

Stroke quality improvements

Sebastian implemented a smooth contour modifier (for meshes) some time ago, based on Pierre Bénard et al.

If we successfully integrate this modifier, we can remove some of the chain smoothing options inside the line art modifier, because those are more of a hack and don't solve the real problem in the geometry. The smooth contour modifier can also help generate a smooth light/shadow separation line.

The actual algorithm for overlapping stroke reduction needs further research. Whether we implement one depends on the results of the smooth contour modifier: if it gives good stroke quality with basic line art chaining, an overlap reduction algorithm may not be needed.


Some "nice to have" features

These features are not high priority, but they will still benefit some special use cases. I'll update the notes below as I go.

  • Normal controlled thickness (together with the light selection in the line type section).
    • Now uses the bGPDspoint::surface_normal approach. (Since dropped.)
    • Should use a custom data approach instead.
  • Generate flattened 2D strokes directly from Line Art to avoid re-projection later on. (Dropped; see below.)
  • Suggestive contours (involves solving curvatures and rendering lines on top of triangles; may be more complex).
  • Manga camera distortion (probably will come with a script or something).

Normal controlled thickness

Normal controlled thickness can be a great way to convey lighting direction in NPR renderings. (Sorry, I don't have a screenshot right now, but here's a mock-up.) It should look like this:

图片.png
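One plausible mapping for this effect (a hypothetical sketch, not the actual modifier design): derive a shading factor from the angle between the stroke's surface normal and the light direction, then interpolate stroke thickness so lines on the dark side of a form get heavier.

```python
def stroke_thickness(normal, light_dir, thin=1.0, thick=4.0):
    """Interpolate thickness from the normal/light angle.
    Assumes unit-length vectors; thin/thick are illustrative bounds."""
    d = sum(a * b for a, b in zip(normal, light_dir))
    shade = (1.0 - d) * 0.5        # 0.0 = fully lit, 1.0 = fully in shadow
    return thin + (thick - thin) * shade

light = (0.0, 0.0, 1.0)
print(stroke_thickness((0.0, 0.0, 1.0), light))   # 1.0 -> lit side, thin
print(stroke_thickness((0.0, 0.0, -1.0), light))  # 4.0 -> dark side, thick
```

Storing the normal per point (the custom-data approach mentioned above) is what makes this evaluation possible after the strokes have been generated.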

Directly output 2D strokes [?]

After some further discussion with Sebastian, we settled on not handling 2D output and using a dedicated reproject modifier for now. The main reason is that the line art modifier itself is already very cluttered; adding more features would make it even more complex.

Original description (kept here so I don't forget about it):

A lot of use cases won't use 3D strokes, and Line Art itself has 2D data as its final result, so it would be best to generate the 2D result directly rather than reprojecting, since reprojection loses some precision while the 2D result inside Line Art is the most precise data. It would also be more convenient if the user needs a 2D output that can be placed physically at different depths in front of the camera, which gives more flexibility for directly compositing multiple Line Art results as well as other GP/mesh objects.

Suggestive contour

Freestyle has this implemented, but it could make the current line art core algorithm unnecessarily complex (due to handling the line-on-surface situation), and it's hardly useful on meshes with medium to sparse triangle density. So it's probably not needed at the moment.

Manga camera distortion

The manga camera distortion feature would use multiple frustum stages with different FOVs to distort perspective at different depths. In principle it should work like this:

图片.png

A manually composited result (Link1 Link2) showing a two-segment FOV configuration, where the red (closer) segment has a larger FOV and the green segment has a smaller FOV, ending up stretching the geometry closest to the camera:

图片.png

Manual camera setup, with near/far clipping tuned to align with one another:

图片.png

Note: this feature would be best implemented at the render engine level, but line art can do it efficiently in one pass.
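The multi-frustum idea can be sketched as simple math (hypothetical, not an existing Blender feature): pick a FOV based on which depth band a point falls into, then do an ordinary perspective divide with the focal length implied by that FOV. The band boundaries would need to line up (the near/far clip alignment shown above) for the image to stay continuous.

```python
import math

def fov_for_depth(z, bands):
    """bands: list of (max_depth, fov_degrees) sorted by increasing depth."""
    for max_depth, fov in bands:
        if z <= max_depth:
            return fov
    return bands[-1][1]

def project_x(x, z, bands):
    """Perspective-project x at depth z using the band's FOV."""
    fov = math.radians(fov_for_depth(z, bands))
    focal = 1.0 / math.tan(fov / 2.0)  # screen half-width normalized to 1
    return x * focal / z

# Near band gets a wide FOV, far band a narrow one (illustrative values).
bands = [(5.0, 90.0), (float("inf"), 40.0)]
near = project_x(1.0, 2.0, bands)   # projected with the 90-degree band
far = project_x(1.0, 10.0, bands)   # projected with the 40-degree band
print(near, far)
```

Applying a different effective focal length per depth band is what produces the exaggerated near-field perspective seen in the composited example.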

YimingWu self-assigned this 2021-04-23 09:40:55 +02:00
Author
Member

Added subscriber: @ChengduLittleA

Added subscriber: @antoniov

Not sure if you have included improving the In Front problem in any of the points. If not, please add it to the list, because this is a critical feature.

Added subscriber: @laurelkeys

Added subscriber: @GeorgiaPacific

Added subscriber: @bunny

Often use nested Collections to keep characters/scenes organized.
It would be great if Collections behaved the same as Objects with regard to the Line Art Usage parameter and inheritance, so that child Collections could override their parent Collection's lineart_usage setting the same way individual Objects can.
Author
Member

In #87739#1156767, @bunny wrote:
Often use nested Collections to keep characters/scenes organized.
It would be great if Collections behaved the same as Objects with regard to the Line Art Usage parameter and inheritance, so that child Collections could override their parent Collection's lineart_usage setting the same way individual Objects can.

Humm... interesting, that's indeed a valid use case. I'll see if I can make that happen.
newlifefoundationss commented 2021-05-13 09:53:41 +02:00 (Migrated from localhost:3001)

Added subscriber: @newlifefoundationss
newlifefoundationss commented 2021-05-13 09:53:41 +02:00 (Migrated from localhost:3001)

This comment was removed by @newlifefoundationss
newlifefoundationss commented 2021-05-13 09:57:55 +02:00 (Migrated from localhost:3001)

This comment was removed by @newlifefoundationss

Added subscriber: @dr.sybren

Added subscriber: @zzt

This comment was removed by @zzt
Author
Member

Humm... I'll take a look

Removed subscribers: @dr.sybren, @newlifefoundationss

Removed subscriber: @laurelkeys

This issue was referenced by 6ad4b8b764a80b9deccd8e53b8c754829dda5e92
Member

Changed status from 'Needs Triage' to: 'Confirmed'

Removed subscriber: @zzt

This issue was referenced by 1b07b7a068e9c81274d36ff5b21703b25fe68bfa

This issue was referenced by 247abdbf4148843daf469285a6a63ab9cd0aeef9

This issue was referenced by 841df831e89dfc4011c323203c2efb8265dc1878

This issue was referenced by 3558bb8eae758aa7e6483d0a6fc00dc83407d4cf

This issue was referenced by df7db41e1b6a349230870db3131bc954533af8f9

This issue was referenced by d1e0059eac99654624edee2a2390a3e2fdc4c7cb

This issue was referenced by 8e9d06f5a0425255ce526e9c1aa7f852165749f0

This issue was referenced by c1cf66bff3c0753512a2d1f2f8c03430bdd1f045

This issue was referenced by 80f7bc6d8e7e7a5e543df5418313c04df5140c43

This issue was referenced by c3ef1c15f5653ac323f4d077c7faf7b46f3eee0d

This issue was referenced by efbd36429a0c381a972f7da97bc9fbc9096e5f20

This issue was referenced by ec831ce5df86cbdbed8030d8a056f8a9b4eb0273

Added subscriber: @brecht

Detaching task from a specific Blender release. In general this should only be used when the feature is being worked on and targeted for a specific release, rather than a general module task.

Added subscriber: @Garek

This issue was referenced by 40c8e23d481cbee683ea890a6bf49129e7fcf5df

This issue was referenced by 5ae76fae90da795891e7edccc14bccf36f93da54

This issue was referenced by 579e8ebe79a1fd5ebd2fb4562c4d4b3f8c22f47d

This issue was referenced by dde997086ce24482e731fe6d1b779cdbfd125ffa
Author
Member

Added subscriber: @ZedDB

Added subscriber: @Sergey

The goals of the project and the overall proposal in the development plan sound good to me! There are a few things related to Embree which would be very nice to clarify.

Another great benefit of using embree is that the code can be more easily put on GPU, further speed up calculation (GPU embree line art not yet in the scope of this project).

Currently Embree does not support GPU acceleration. Even if it comes there the timeframe and supported hardware and backend is unknown. To me it does not seem easy at all to put Embree onto GPU.

The main principle is to avoid converting to BMesh to get the adjacent info. This should be quick to integrate into lineart embree.

Embree does not need adjacency, so I am not really sure what you mean by integrating adjacency into lineart Embree.


> Currently Embree does not support GPU acceleration. Even if it comes, the timeframe and the supported hardware and backends are unknown. To me it does not seem easy at all to put Embree onto the GPU.

This is my fault; I thought Embree already had a GPU backend, so this was misinformation on my part.
However, if we rewrite the core occlusion checks in line art to use a BVH structure and do standard triangle-triangle intersection checks, then it would probably be easier to port that code to run on the GPU than the current implementation.
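To illustrate the kind of two-stage check such a rewrite implies — a cheap broad phase that prunes triangle pairs by bounding box, leaving only candidates for the exact triangle-triangle test — here is a minimal, hypothetical Python sketch using sweep-and-prune. All names are made up for illustration; the real line art code is C and would use a proper BVH rather than a sorted sweep:

```python
def aabb(tri):
    """Axis-aligned bounding box of a triangle given as 3 (x, y, z) tuples."""
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabb_overlap(a, b):
    """True if two boxes overlap on all three axes."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

def candidate_pairs(tris):
    """Broad phase: sort triangles by min-x and sweep. Once a later box
    starts past the current box's max-x, no further pair on this sweep
    can overlap, so we break early. Survivors still need the exact
    (narrow-phase) triangle-triangle intersection test."""
    boxes = [aabb(t) for t in tris]
    order = sorted(range(len(tris)), key=lambda i: boxes[i][0][0])
    pairs = []
    for pos, i in enumerate(order):
        for j in order[pos + 1:]:
            if boxes[j][0][0] > boxes[i][1][0]:
                break  # sorted by min-x: nothing later can overlap i
            if aabb_overlap(boxes[i], boxes[j]):
                pairs.append((min(i, j), max(i, j)))
    return pairs
```

The point of the sketch is only the structure: each pair test is independent, which is what makes both threading and a potential GPU port easier than the current 2D-tile iteration.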

> Embree does not need adjacency, so I am not really sure what you mean by integrating adjacency into lineart Embree.

That sentence could be clarified.
What it means is that the main goal of the object geometry loader rewrite is to skip the conversion to BMesh.
We used that conversion to get the adjacency info that is needed to detect creases and other surface features.

The "This should be quick to integrate into lineart embree." part is more of an offhand comment: if we do the Embree rewrite, this change shouldn't need any special adaptation or rework.

Very interesting proposal. There are some points I would like to know whether it's possible to implement or not.

When you bake a Line Art modifier, the real strokes are created, and there are two possible improvements here:

1. Be able to keep the whole drawing flat (reprojected) in a plane. Sometimes the final use is a 2D flat drawing, and having 3D strokes is not practical.

2. Related to point 1) is the option to determine and apply the thickness of the line relative to the distance from the camera in the reprojected flat drawing.

For example, if the drawing in 3D is far from the camera, the line would be thinner due to the perspective effect, but if we convert it to flat, the thickness of the line would stay the same and the perspective effect would be ruined.

This thinner-line logic for flat reprojection could be implemented in the reproject operator too.
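A minimal sketch of the thickness idea, assuming flattening projects every point onto a plane at a fixed depth in front of the camera (illustrative Python, not Blender's API; `flattened_thickness` is a hypothetical helper):

```python
def flattened_thickness(thickness, point_depth, plane_depth):
    """Under perspective, apparent size is proportional to 1/depth.
    After flattening, every point sits at plane_depth, so bake the
    original depth ratio into the stored thickness to preserve the
    perspective look."""
    return thickness * plane_depth / point_depth
```

So a stroke point originally twice as far away as the target plane keeps half its thickness after flattening, instead of suddenly rendering at full size.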

Added subscriber: @frogstomp-4

@antoniov I see you are addressing the long-standing request about stroke thickness when camera reproject is used, thank you :)

However, I think this would better be solved in an existing operator like "Bake grease pencil animation to strokes".

The reasoning is that this would allow the same logic to apply in a broader sense, not just with line art. You might want to block out a town, use grease pencil for line generation and also draw on the surface of those blocks, then bake, join the objects and reproject. There you'd want the thickness applied on reprojection, be it line-art generated or hand drawn.

The missing part in stroke reprojection with the bake operator is still that, in order to produce line art for storyboarding from multiple angles, we would like to reproject to a single plane instead of a reprojection plane that moves with the camera.

So, long story short: I would rather see this functionality in an operator to cover more cases.

> Be able to keep the whole drawing flat (reprojected) in a plane. Sometimes the final use is a 2D flat drawing, and having 3D strokes is not practical.

This was discussed when line art was first implemented as a modifier.
The conclusion then was that we already have a "flatten strokes" operator that we can reuse as a new modifier.
By having it as a modifier, it can be reused in other non-line-art tasks and would allow many more options for how to flatten the strokes.

To me this is not really line-art specific, as it is not the only case where you would want to flatten already drawn GP strokes.

> Related to point 1) is the option to determine and apply the thickness of the line relative to the distance from the camera in the reprojected flat drawing.

I think this is another feature for the "Flatten strokes" modifier/operator, because it is up to the flattening operator to handle any "data loss", like the actual depth of the strokes.

> So, long story short: I would rather see this functionality in an operator to cover more cases.

Ditto (but also as a modifier, as I stated). :)

Having these features as a modifier would be great, but we should also design it so that the same `BKE_gpencil....` function can be reused in the reproject operator... I mean, have both the modifier and the bake operator options.

Author
Member

Ahh, I haven't been looking at this thread...

> The conclusion then was that we already have a "flatten strokes" operator that we can reuse as a new modifier.

Yes, a reproject modifier sounds most natural to me. Line art actually already has 2D line results; they're just not generated into strokes, because there are only XY framebuffer coordinates. So technically we don't really need a reproject modifier [if its sole purpose is flattening line art]; maybe in line art we could specify a projection depth, and those strokes would then appear on a plane at that distance from the camera.
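A hypothetical sketch of that "specify a projection depth" idea, assuming a simple perspective camera looking down -Z and the stored XY being NDC in [-1, 1] (illustrative Python; `ndc_to_camera_plane` is not an existing function in Blender):

```python
import math

def ndc_to_camera_plane(ndc_x, ndc_y, depth, fov_y, aspect):
    """Unproject a framebuffer (NDC) coordinate to a camera-space point
    on a plane at the given depth in front of the camera.
    Camera convention: looking down -Z, fov_y in radians."""
    half_h = depth * math.tan(fov_y / 2.0)  # half-height of the view at `depth`
    half_w = half_h * aspect                # half-width from the aspect ratio
    return (ndc_x * half_w, ndc_y * half_h, -depth)
```

Since line art already has the 2D results, emitting flat strokes would just mean running every point through something like this with a user-chosen `depth`, instead of keeping the original 3D positions.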

> This thinner-line logic for flat reprojection could be implemented in the reproject operator too.

Right, that will have to be taken into consideration.

For Embree & GPU: even if it doesn't run on the GPU at all, we at least get the CPU-side speed-up. Then maybe we can port some long iterations onto the GPU after Embree has generated the pairs; that could also work (or maybe not, due to the transfer bottleneck). The entire Embree thing for line art is still experimental, so I don't know exactly what the performance is going to look like.

Added subscriber: @JSM

Added subscriber: @hzuika

Added subscriber: @okuma_10

Since there will be work on GP, can I propose exposing or implementing a way to assign a view-bound texture coordinate for the GP material's fill → texture feature?
At the moment each GP vertex has UV coordinates that are generated at creation time, so each new "shape" created with "fill" gets a differently oriented texture.
What I'm suggesting is the ability to also use UV coordinates that are based on the view, something like gl_FragCoord.xy in fragment shaders, which would make the shapes appear as cutouts from a homogeneous texture.
Here are examples of what I mean:
BlenderGPTextureCoordSuggestion.png

Krita's Fill Layer uses Pixar's SeExpr. The shader in the Krita example is out of the box, and oddly it uses the document size for the dot generation, so it's more of an ellipse than a circle. Still, it's a good example of what I am proposing.
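For illustration, the view-bound UV being proposed is essentially a normalized screen position, i.e. the CPU-side equivalent of `gl_FragCoord.xy / resolution`. A hypothetical Python sketch (`view_uv` is a made-up helper, not an existing API):

```python
def view_uv(clip_pos):
    """Map a clip-space position (x, y, z, w) after projection to a
    screen-aligned UV in [0, 1]: perspective divide, then remap NDC
    from [-1, 1] to [0, 1]. Every fill sampled this way reads from
    one homogeneous screen-space texture, regardless of stroke shape."""
    x, y, _, w = clip_pos
    return ((x / w) * 0.5 + 0.5, (y / w) * 0.5 + 0.5)
```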

Author
Member

Added subscriber: @mendio

Author
Member

@okuma_10 Hi! This is a quite interesting use case. I'd suggest @antoniov or @mendio take a look, or maybe we move this to a generic gpencil/rendering task?

Added subscriber: @fclem

> In #87739#1324485, @ChengduLittleA wrote:
> @okuma_10 Hi! This is a quite interesting use case. I'd suggest @antoniov or @mendio take a look, or maybe we move this to a generic gpencil/rendering task?

I guess this is more for @fclem and the new Eevee implementation.

Added subscriber: @TinyNick

This issue was referenced by 03aba8046e07d956e43073502383579b7dfbb284

Added subscriber: @Hologram

Added subscriber: @Yuro

This issue was referenced by 2719869a2a98e8a0cb8d229f0efe7b9ec5138720

This issue was referenced by 369f652c8046955ea83436a138d61a5b130c04c4

This issue was referenced by 9f8254fd34ac397147b531747122d8853014c89c

This issue was referenced by 6f00b1500cbd162dfe2fa7f8de2c46f156f75da4

This issue was referenced by 14a5a91e0e033d712134c112a4778b495bd73ba1

Added subscriber: @RobertS

Is it possible to test the lineart-shadow branch without having to build it yourself? It apparently has performance issues, but it surely can't be worse than #102913?

The lineart shadow feature is already merged into master.
You don't need to use any special branch to use it.

Thank you for the information. What about the silhouette rendering of individual objects (including masking of partial occlusion)?

Is there updated documentation for this? I did not get any shadow rendering when I tried grease pencil line art on a collection.

Sergey Sharybin added the "Module: Grease Pencil" label and removed the "Module: VFX & Video" label, 2023-02-13 18:10:30 +01:00
Reference: blender/blender#87739