API Read/Write Vertex Group with one command #71390

Closed
opened 2019-11-06 19:30:33 +01:00 by Alessandro Zomparelli · 8 comments

Dear developers,
as suggested by one of you, I'm reporting this as a bug, even though it is actually a request for an improvement.
I'm the developer of Tissue, and during the Reaction-Diffusion simulation based on vertex groups I'm realising that the slowest part is not the simulation itself, but reading and writing the vertex groups each frame. This is a short example of the computing time for one frame, using 50 iterations (reading and writing occur only once), of the simulation on a geometry with 55298 vertices:

RD - Read Vertex Groups: 0.09177565574645996
RD - Preparation Time: 0.0059850215911865234
RD - Simulation Time: 0.08377623558044434
RD - Closing Time: 0.19245696067810059

This is what I do in "RD - Read Vertex Groups":

    for i in range(n_verts):
        try:
            a[i] = ob.vertex_groups["A"].weight(i)
        except:
            pass
        try:
            b[i] = ob.vertex_groups["B"].weight(i)
        except:
            pass

I'm using a "try" in case some vertices are not in the group,

while this is what I do in "RD - Closing Time":

    for i in range(n_verts):
        ob.vertex_groups['A'].add([i], a[i], 'REPLACE')
        ob.vertex_groups['B'].add([i], b[i], 'REPLACE')

It would be great to have an easy and fast method for getting a specific vertex group as a list of values, one per vertex, maybe with a null or just zero for vertices that are not in the group, and the same thing for writing the vertex group back.
Is it possible?
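
For reference, one way to build such per-vertex lists with the current API is to walk `ob.data.vertices` once and read the weights from each vertex's `groups` collection. This is only a minimal sketch, not the Tissue code; the helper name `vgroup_to_list` and the use of plain Python lists are assumptions:

    # Hypothetical helper: returns one weight per vertex for the given group,
    # with 0.0 for vertices that are not in the group.
    def vgroup_to_list(ob, name):
        vg_index = ob.vertex_groups[name].index        # index used by the v.groups entries
        weights = [0.0] * len(ob.data.vertices)
        for v in ob.data.vertices:
            for g in v.groups:                         # VertexGroupElement entries
                if g.group == vg_index:
                    weights[v.index] = g.weight
                    break
        return weights

    a = vgroup_to_list(ob, "A")
    b = vgroup_to_list(ob, "B")

This avoids the per-vertex name lookups and the exception handling, at the cost of touching every (vertex, group) pair once.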

Thank you for your work,
Alessandro

Author
Member

Added subscriber: @AlessandroZomparelli

Member

Added subscriber: @JacquesLucke

Member

Changed status from 'Needs Triage' to: 'Archived'

Jacques Lucke self-assigned this 2020-01-09 16:52:04 +01:00
Member

I think this can be closed because we still have the patch open. Generally I think features like this are good, something similar was actually my first patch for Blender.

You say these loops are a major bottleneck in your code, so you should try to optimize them more. Even just moving `ob.vertex_groups['A'].add` out of the loop might result in a noticeable performance improvement.
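
A minimal sketch of that change, assuming the same `a`, `b` and `n_verts` from the report; only the lookups are hoisted, the behaviour is unchanged:

    # Look up the groups and their bound add() methods once, outside the hot loop.
    add_a = ob.vertex_groups['A'].add
    add_b = ob.vertex_groups['B'].add

    for i in range(n_verts):
        add_a([i], a[i], 'REPLACE')
        add_b([i], b[i], 'REPLACE')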

Author
Member

Hi @JacquesLucke, does the patch cover both read and write operations?
Regarding your suggestion, the loop is actually needed for adding the weight value to ALL the vertices, starting from two lists of values "a" and "b". Or is there another way?

Thanks
Alessandro

Member

Hm, no, I think it's only for one of the two. In any case, it is a bit weird to have this as a bug report... The situation is similar to when image pixels have to be accessed; I think we closed the related reports as well.

One inefficiency in this loop is that you are doing many lookups much more often than necessary. You look up `ob.vertex_groups` in every iteration. You look up `vertex_groups["name"]` in every iteration. You look up `vertex_group.add`/`vertex_group.weight` in every iteration. These lookups should be done before the loop in cases like this one.

A hot loop should contain as little code as possible.
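
As a hedged sketch, the same hoisting applied to the read loop, keeping the original behaviour for vertices that are not in the group (where `weight()` raises `RuntimeError`):

    # Look up the groups and their bound weight() methods once, outside the loop.
    weight_a = ob.vertex_groups["A"].weight
    weight_b = ob.vertex_groups["B"].weight

    for i in range(n_verts):
        try:
            a[i] = weight_a(i)
        except RuntimeError:    # vertex i is not in group "A"
            pass
        try:
            b[i] = weight_b(i)
        except RuntimeError:    # vertex i is not in group "B"
            pass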

Author
Member

Thank you @JacquesLucke, I will definitely try to minimize the amount of code in the loop. I didn't know that those calls would affect the performance.
I wasn't actually sure whether to report this as a bug; I just followed the suggestion given by a developer I met at the Blender Conference. She told me that this would be considered a bug, even if it's not technically one.

Member

Also see this: https://github.com/JacquesLucke/animation_nodes/blob/master/animation_nodes/nodes/mesh/vertex_group_input.py#L88-L90
There I'm doing something similar. It's probably not the fastest approach, but it looks like I put some effort into it back in the day. Not sure if I'd do it the same way nowadays. Let me know if you can measure any speedup.
