The SLIM UV unwrap algorithm implementation for Blender 4.x #114545
Open
Lukasz Czyz
wants to merge 197 commits from
glukoz/blender:uv_unwrapping_slim_algorithm_v5_update
into main
SUMMARY
The patch integrates an existing implementation of the SLIM unwrapping algorithm into Blender. More info about SLIM here: https://igl.ethz.ch/projects/slim/. The patch is based on the integration code written by Aurel Gruber for Blender 2.7x (not finished and never actually merged into the master branch). To learn more about SLIM, watch the following presentation by Aurel: https://www.youtube.com/watch?v=PLUoLQUK3-s
In this PR, Aurel's original code was rebased onto a recent main commit, and a number of further improvements were introduced.
HANDLING DOUBLED VERTICES AND ZERO TRIANGLES
First of all, I used the original disabled `p_chart_simplify` code to implement a new function: `p_chart_collapse_doubles`. The collapsing code used by `p_chart_collapse_doubles` is almost identical to `p_chart_simplify`; the collapse criterion is different though: as the name suggests, it collapses edges only to remove doubled vertices.

Thanks to the simplified collapse criterion, it seems that we can avoid explicit uncollapsing after the operation is done: instead we can easily flush UV coords to the collapsed vertices just before applying the result. This is the approach the current code uses (check the `p_chart_flush_collapsed_uvs` function).

To sum up (together with changes from the previous update): currently the mesh preprocessing before a SLIM unwrap consists of two phases:

`p_chart_collapse_doubles`: removing doubled vertices
`p_chart_correct_zero_angles`: correcting zero-area triangles

WARNING: at the current stage it is not guaranteed that the above functions will work properly together with the live unwrap functionality. They may require some improvements. The point is that I cannot risk spending much time on improving and polishing these solutions without a confirmation that such an approach is acceptable to the Blender devs on the general level.
So at this moment I need a confirmation (or rejection) of this general approach in order to continue work on this task.
Changed title from "The SLIM UV unwrap algorithm implementation for Blender 4.x" to "WIP: The SLIM UV unwrap algorithm implementation for Blender 4.x".

Thanks, I will find some time to look into this again. Hopefully next week.
I'm still working my way through the code, but pushed a bunch of changes to cut the size of this PR in half. And to use proper SPDX headers and align closer to our code style.
I see various warnings when building this, should show on the buildbot too:
@blender-bot build
I pulled your changes and after that building fails. Is that expected, or is it an issue with my environment?
It's not expected. Maybe try to fix the errors, and if you can't, show the error log?
Ok, I fixed compilation and all warnings related to this PR.
There were more warnings in the buildbot logs. I fixed them, except these:
I guess certain charts need to be skipped based on these, similar to `p_chart_lscm_begin`.

@blender-bot build
@ -222,0 +263,4 @@
}
else {
/* We use the properties from the last unwrap operator for subsequent
 * live unwrap and minimize stretch operators. */
This comment seems to be incorrect, and perhaps this code and `WM_operator_current_or_default_properties_alloc` can be removed?

`ED_uvedit_live_unwrap` passes in tool settings, so this code does not run.
`unwrap_exec` does not pass in tool settings, but the operator should already remember its properties.
`pack_islands_exec` and `average_islands_scale_exec` do not pass in tool settings, but these properties seem irrelevant to them.

The main rationale of this code was to make the minimize stretch operator use the latest options used by the unwrap operator.
The rationale is not reflected in the current state of the implementation because the minimize stretch operator hasn't been ported to use SLIM yet. But it will probably be ported sooner or later so this code may be useful in the future.
The last used unwrap settings are already stored in the scene tool settings, so minimize stretch should get them from there instead.
I see. So I removed that call.
@ -2605,0 +2826,4 @@
else {
uiDefAutoButsRNA(
layout, &ptr, unwrap_draw_check_prop_abf, NULL, NULL, UI_BUT_LABEL_ALIGN_NONE, false);
}
I think this would benefit from a manual layout now, using `uiItemR` for individual properties instead of `uiDefAutoButsRNA`. That way properties can be grouped better.

For SLIM I think it could be grouped like this:
Done, though I wasn't able to add separators between the prop groups you specified. I was looking for a C++ counterpart of the Python `layout.separator()` call, but with no luck.

It's `uiItemS()`.

@ -2610,3 +2838,4 @@
{0, nullptr, 0, nullptr, nullptr},
};
static const EnumPropertyItem reflection_items[] = {{0, "ALLOW", 0, "Allow Flips", ""},
Could this be made into an `allow_flips` boolean? Doesn't seem like it should be an enum.

Done.
@ -4286,0 +4961,4 @@
slim::MatrixTransferChart &mt_chart = mt->mt_charts[i];
bool select = false, deselect = false;
int npins = 0;
`select`, `deselect` and `npins` are not used. Does this need similar logic to `p_chart_lscm_begin`, skipping some charts based on these?

Right now when you use live unwrap, select a pin on one island and move it, it will continue solving other islands. I think this is unexpected, and it would cause poor performance when there are many islands but you only want to work on one.
Done.
Is this code in the pull request? I do not see functions with these names.
The approach sounds good to me.
I was going to open a devtalk topic for user feedback, but it seems the code for doubled vertices and zero triangles is missing? If so I can wait for that. Text was going to be:
Indeed, I forgot to rebase the collapse_doubles part of the original change. Now it's fixed. I think you can open the devtalk topic now :)
I will look into other change requests in the next week.
@blender-bot package
Package build started. Download here when ready.
The double vertex collapsing code looks fine to me. After deduplicating it with the simplify implementation, it's also not much extra code.
User feedback topic:
https://devtalk.blender.org/t/slim-uv-unwrapping/32192
@glukoz are you following the feedback topic? Would be good if you can reply there too.
I think the most obvious improvements to make are to have two separate entries in the Unwrap menu, and to rename Vertex Group to indicate its purpose. For both the trickiest part is finding good names. We should also consider renaming SLIM to something that's easier to understand along with that, as we did for ABF and LSCM in the past.
There was one example of an unwrap failing, maybe possible to solve by tweaking the degenerate triangle handling?
https://devtalk.blender.org/t/slim-uv-unwrapping/32192/26
Report about pins not working well:
https://devtalk.blender.org/t/slim-uv-unwrapping/32192/27
@blender-bot package
Package build started. Download here when ready.
Sure, I will take a look at it.
Sure, I can tweak the algorithm so that the given mesh is processed, but keep in mind that even after tweaking there may be another mesh which will not pass the test. I mean, the algorithm does its best to fix a mesh, but it cannot guarantee that it will work for every degenerate example. The question is whether you can accept such an approach in the long term.
What I did for ABF and LSCM is to keep investigating and fixing user reports until failures become rare. There's no well-defined cut-off point for that. But it's important to look at some user-reported ones, since they tend to catch things a developer can't find in testing.
If it turns out there is a certain class of problems we don't know how to solve, and it's common, then we can decide if some other algorithm is needed to clean up the geometry or if we accept it. But I would expect simple adjustments could go a long way. For example, in `uv_initializer.cpp`, if that step fails, perhaps the angles could be clamped to some minimum value. Something similar really helped for ABF. If some other step fails, maybe similar clamping is possible in the solver.

What do you think about naming like this?
In the operator properties:
SLIM -> Minimum Stretch
Vertex Group -> Importance Weights
In the unwrap menu add two separate operators:
It ties in with the same algorithm being used for minimize stretch later.
I ran ABF and SLIM on a random subset of the dataset here. See attachment for script and results.
http://staff.ustc.edu.cn/~fuxm/projects/ProgressivePara/dataset.html
There are a fair number of models where SLIM fails. Some of them actually succeed when removing the minimum angle and double vertex thresholds, though I have not checked if the results are good in general.
The naming seems OK to me.
I will take a look at the failing examples soon.
In this update I worked on improving the function for correcting degenerate triangles. The function name is now `p_chart_correct_degenerate_triangles`. I reworked the function so it now checks two triangle parameters: area and minimum angle. The function tries to correct both parameters so they are above specified thresholds (note that both parameters are independent from the mathematical point of view).

The `p_chart_correct_degenerate_triangles` function is now able to deal with doubled vertices, so the question is whether we still want to use the `p_chart_collapse_doubles` function.

I ran a test on the entire D2 mesh set and it turned out that the correction function in its current state lowers the failing mesh count from 28 to 18. Note that the entire D2 set consists of 6190 meshes.
I investigated some of the 18 failing meshes and it turns out that SLIM may fail even when the correction function is able to correct all triangles in the mesh (by min area and min angle). Note that I double-check that all triangles are actually corrected before the function returns (using asserts). So it looks like the failures may be caused by bugs in the SLIM implementation itself.
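For illustration, the two independent degeneracy criteria can be checked like this. This is a self-contained sketch under assumed names: `triangle_is_degenerate` is not the patch's function, and the real correction code also moves vertices to fix violations rather than just detecting them. A needle triangle shows why the criteria are independent: it can have a tiny corner angle while its area is still above the threshold.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using float3 = std::array<float, 3>;

static float3 sub(const float3 &a, const float3 &b)
{
  return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}
static float dot(const float3 &a, const float3 &b)
{
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
static float3 cross(const float3 &a, const float3 &b)
{
  return {a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]};
}
static float len(const float3 &a)
{
  return std::sqrt(dot(a, a));
}

/* A triangle is degenerate when its area is below `min_area` OR any corner
 * angle is below `min_angle` (radians). The two tests are independent:
 * neither implies the other. */
bool triangle_is_degenerate(
    const float3 &a, const float3 &b, const float3 &c, float min_area, float min_angle)
{
  const float area = 0.5f * len(cross(sub(b, a), sub(c, a)));
  if (area < min_area) {
    return true;
  }
  const float3 pts[3] = {a, b, c};
  for (int i = 0; i < 3; i++) {
    const float3 u = sub(pts[(i + 1) % 3], pts[i]);
    const float3 v = sub(pts[(i + 2) % 3], pts[i]);
    const float denom = len(u) * len(v);
    if (denom == 0.0f) {
      return true; /* Doubled vertex: zero-length edge. */
    }
    const float cos_angle = std::clamp(dot(u, v) / denom, -1.0f, 1.0f);
    if (std::acos(cos_angle) < min_angle) {
      return true;
    }
  }
  return false;
}
```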
Now I will take a look at the issue with pins not being respected by SLIM.
@blender-bot package
Package build started. Download here when ready.
I solved the problem with pins not being respected.
Now the main question is whether you still want to merge this algorithm given the fact that SLIM may fail even when the correction function is able to correct all triangles in the mesh (by min area and min angle) - as I explained above.
@brecht Could you start a build with the recent changes? I am not sure whether I can do it on my own.
@blender-bot package
Package build started. Download here when ready.
I'd like to have another look to see if I can find some solution for those failures. I don't think it's a blocker if we can't, but would be nice.
Then I guess this is nearly ready to merge for Blender 4.2? Still need to check all the latest changes.
@brecht Could you ask people for feedback on the latest build in the dev thread? Unfortunately I cannot add 3 messages in a row there.
@glukoz I've changed your trust level on devtalk to remove that restriction.
Very happy to see more updates on this patch! Really looking forward to it! Thanks for all the work Lukasz and Brecht! :)
@blender-bot package
Package build started. Download here when ready.
@ -0,0 +1,89 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,17 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,94 @@
/* SPDX-FileCopyrightText: 2013 Alec Jacobson
Add `Blender Authors 2024`.
I did not consider there to be significant changes that required claiming copyright from Blender authors.
@ -0,0 +1,40 @@
/* SPDX-FileCopyrightText: 2014 Alec Jacobson, 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,196 @@
/* SPDX-FileCopyrightText: 2013 Alec Jacobson
Add `Blender Authors 2024`.
I did not consider there to be significant changes that required claiming copyright from Blender authors.
@ -0,0 +1,48 @@
/* SPDX-FileCopyrightText: 2013 Alec Jacobson, 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,43 @@
/* SPDX-FileCopyrightText: 2013 Alec Jacobson, 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,58 @@
/* SPDX-FileCopyrightText: 2013 Alec Jacobson, 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,165 @@
/* SPDX-FileCopyrightText: 2016 Michael Rabinovich, 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,42 @@
/* SPDX-FileCopyrightText: 2016 Michael Rabinovich, 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,222 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,65 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,235 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,14 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,803 @@
/* SPDX-FileCopyrightText: 2016 Michael Rabinovich, 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,118 @@
/* SPDX-FileCopyrightText: 2016 Michael Rabinovich, 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,47 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,162 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,288 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,59 @@
/* SPDX-FileCopyrightText: 2023 Blender Authors
Change `2023` to `2024`.
@ -0,0 +1,95 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
Add `/* SPDX-FileCopyrightText: 2024 Blender Authors` and delete the `/` before `* SPDX-License-Identifier: GPL-2.0-or-later`.
It's just some license information changes
Has a final decision to change the naming like this already been made? Should I implement it?
Hi @glukoz ,
Thank you for your work implementing SLIM for Blender. I've been implementing SLIM privately and noticed some differences between our implementations, as well as the original paper repo. Just some thoughts/qns:
Intersections. It is possible that SLIM creates new intersections between triangles in the unwrap of a chart. This can be fixed by implementing the second check in Section 3.3 of the Smith and Schaefer paper (i.e. solving the quadratic equation for when the determinant is zero between all boundary edges and boundary vertices). While the original repo doesn't seem to address this, having no new unwrap intersections might be something Blender users want.
Line search. The original repo performs a bisection line search until Wolfe conditions are satisfied, whereas this repo has a more lightweight approach. As I am not an expert in this area, do you have any insight into the difference here?
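For reference, the "flip step" that both line-search variants must respect can be found by locating the smallest positive root of the signed-area polynomial along the search direction, which is quadratic in the step size. This is a minimal, self-contained sketch of that root-finding, not the patch's code; the real implementation evaluates it per-triangle over the whole chart and takes the minimum:

```cpp
#include <algorithm>
#include <cmath>
#include <initializer_list>
#include <limits>

/* Signed double-area of a 2D triangle. */
static double signed_area2(const double a[2], const double b[2], const double c[2])
{
  return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
}

/* Smallest positive step t at which the triangle (x + t*d) becomes
 * degenerate (its signed area hits zero). Returns +infinity when no
 * positive root exists. The signed area is quadratic in t, so we fit the
 * quadratic from three samples and solve it exactly. */
double min_step_to_flip(const double x[3][2], const double d[3][2])
{
  auto area_at = [&](double t) {
    double p[3][2];
    for (int i = 0; i < 3; i++) {
      p[i][0] = x[i][0] + t * d[i][0];
      p[i][1] = x[i][1] + t * d[i][1];
    }
    return signed_area2(p[0], p[1], p[2]);
  };
  /* area(t) = a*t^2 + b*t + c, recovered from samples at t = 0, 1, 2. */
  const double c = area_at(0.0);
  const double f1 = area_at(1.0), f2 = area_at(2.0);
  const double a = (f2 - 2.0 * f1 + c) / 2.0;
  const double b = f1 - c - a;

  const double inf = std::numeric_limits<double>::infinity();
  double best = inf;
  if (std::abs(a) < 1e-15) {
    if (std::abs(b) > 1e-15) {
      const double t = -c / b;
      if (t > 0.0) best = t;
    }
  }
  else {
    const double disc = b * b - 4.0 * a * c;
    if (disc >= 0.0) {
      const double s = std::sqrt(disc);
      for (double t : {(-b - s) / (2.0 * a), (-b + s) / (2.0 * a)}) {
        if (t > 0.0) best = std::min(best, t);
      }
    }
  }
  return best;
}
```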
Anyways, first time posting on the Blender repo. Do let me know if there's anything I can do to help. Thanks once again for your efforts here.
Yes, you can make the changes.
@Gangneron Copyright years do not have to be updated to the latest. Adding some missing Blender Authors is good though.
https://developer.blender.org/docs/handbook/guidelines/license_headers/
Thank you for your input. Unfortunately I am not familiar with the mathematical side of the Blender-SLIM integration, as it was originally implemented by Aurel Gruber. I am only taking care of the integration from the general architecture point of view.
That being said, I noticed a strange SLIM behaviour: even if the input passed to SLIM contains no flipped triangles, it looks like SLIM may still generate an output with flips. Could you confirm that this may indeed be the case?
One question about the optimal way to implement this: what about moving the `method` prop from `wmOperatorType::srna` to `wmOperatorType::prop`, and renaming the method values to the following?

Angle Based
Angle Based (Conformal)
Minimum Stretch
Or find some other name for Conformal?
I would just keep it "Angle Based", "Conformal" and "Minimum Stretch". I don't think potential new names are going to be so much clearer that it's worth users re-learning them (and tutorials going out of date).
I didn't understand what you meant by moving from `wmOperatorType::srna` to `wmOperatorType::prop`. All operator properties are in `srna`, and `prop` is used for the "primary" property in the user interface in some places. Though I don't think such a primary property is needed here.

Correct me if I am wrong, but wouldn't such a move result in Blender showing the method choices explicitly, for example when searching for the Unwrap operator in the F3 menu? Like it is the case for the Merge operator, for example:
Wouldn't that be convenient for the user?
Good point, it seems useful to do it for that case.
Recent changes:
renaming:
SLIM -> Minimum Stretch
Vertex Group -> Importance Weights
Factor -> Weights Factor
expanding the Unwrap operator in the UV editor menu using methods:
reusing the UV editor unwrap menu in the 3D viewport UV menu (to make the UI consistent):
One issue: the 'Importance Weights' label is truncated in the operator popup because the popup is too narrow:
Can the width of the popup be increased somehow?
Not possible to my knowledge, at least not in Python, and I cannot think of any operation that has more width than the default in Blender either.

So I guess it could just be shortened to Weight or Importance, and just have a tooltip with the full name and what it does, for now.

In the future it would be good if it was supported and exposed in the Python API, as it adds some extra flexibility. Like how width is currently supported in, for example, `invoke_props_dialog` menus.
If the popup cannot be widened, then my proposal is to shorten the label to "Imp. Weights".
One way to get around it would be to add a boolean:
Maybe by having a default attribute name, it encourages users to use the same one making it easier to re-run the operator on different objects.
I've implemented it.
One suggestion: what about such names:
?
It would be an indication for the user that the given props are logically related.
Sure, those names are fine. You can also add a few separators (uiItemS) to better group properties.
@blender-bot package
Package build started. Download here when ready.
@glukoz I'll be on a 6 month sabbatical starting next week, so someone else will have to land this. I think this is in a good state now and my review comments have been addressed. It would be nice if somehow this could be made more numerically robust, though it's also not a blocker I think.
For the release notes we'll need some text, explaining advantages and limitations on a user level. And I think images that demonstrate the results on 2 or 3 models.
CC @howardt @ideasman42, maybe a topic for the modeling module meeting?
FYI @Sergey
I am currently on a short leave. I will update the names and the prop layout after this weekend.
Improvements done.
Can we merge this change now?
@glukoz Thanks for the updates.
It was discussed during the Modelling Meeting. Howard and Campbell will pick up the review.
Hi again @glukoz, apologies for the late response. I have not experienced this, but then again I have been running SLIM on small datasets. Will let you know if it crops up in more rigorous testing.
Just a thought: perhaps this is an issue in `flip_avoiding_line_search.cpp`. As an experiment, maybe try changing lines 161-162 to the following: `min_step_to_singularity` should be the exact update value at which a flip occurs. If it is a smaller value than our intended update (1 * d), then we should use it instead. If this fixes the issue, then we know it is a problem inside the `line_search` function.

I tried this out and it seems to work well. @glukoz, are you still working on things, or do you think it is ready at this point?
I think it is ready.
I'm getting crashes when using "average island scale" in this build. Blender freezes and crashes.
Other than the average island scale crash (#114545 (comment)), I think this build works really nicely. 👍 ❤️
@ideasman42 mentioned in the module meeting 240715 that he got a different result when using live unwrap vs. just using the minimize stretch unwrap with default settings. It seems to me that this is because live unwrap generates more iterations when moving vertices around. If the operator's `iterations` parameter is changed from 10 to 100, you get a result that more closely resembles what you get when using live unwrap.

Fixed it with the latest commit. Thank you for the thorough testing!
If it's considered ready from the side of the PR author, I'll remove the WIP prefix.
For live unwrap there is a trade-off. If it starts from scratch after every mouse motion, it will give the same results as regular unwrap, but it will be much slower and it won't be very interactive. This is why it works incrementally.
For LSCM/ABF there is a trick with matrix factorization to make it fast, but that's not applicable to SLIM unfortunately.
Changed title from "WIP: The SLIM UV unwrap algorithm implementation for Blender 4.x" to "The SLIM UV unwrap algorithm implementation for Blender 4.x".

Thank you. What would be the next step for this?
@ideasman42 @howardt Is this something on your agenda to review?
@ -99,0 +114,4 @@
*/
struct ParamSlimOptions {
public:
`public:` for a structure can be omitted.

@ -99,0 +121,4 @@
bool skip_initialization = false;
};
void uv_parametrizer_slim_solve(ParamHandle *handle,
Pass const arguments first.
No, I think it's better to keep the API consistent and always put ParamHandle first.
@ -99,0 +126,4 @@
int *count_changed,
int *count_failed);
void uv_parametrizer_slim_live_begin(ParamHandle *handle, const ParamSlimOptions *mt_options);
Use references where possible.
I would not expect changes like this to an existing file to start using newer conventions. That kind of refactoring can be done separately.
@ -7,2 +7,4 @@
*/
#include <functional>
#include <vector>
Use Blender containers instead of std where possible (`BLI_vector.hh`).

It's not obviously the right thing here when this is interfacing with code in intern/, where we generally don't use these Blender containers.
I agree that is appropriate for interfacing with code in intern/
@ -344,6 +354,31 @@ static void p_face_angles(PFace *f, double *r_a1, double *r_a2, double *r_a3)
p_triangle_angles(v1->co, v2->co, v3->co, r_a1, r_a2, r_a3);
}
static float p_vec_cos(const float v1[3], const float v2[3], const float v3[3])
Use `BLI_math_vector_types.hh` instead of C math for new code.

I would not expect changes like this to an existing file to start using newer conventions. That kind of refactoring can be done separately.
@ -645,2 +703,3 @@
static PVert *p_vert_add(ParamHandle *handle, PHashKey key, const float co[3], PEdge *e)
static PVert *p_vert_add(
ParamHandle *handle, PHashKey key, const float co[3], const float weight, PEdge *e)
Use the `r_` prefix for destination arguments, like `PEdge *r_edge`.

This is just adding a weight argument to an existing function; this PR is not the place to start refactoring existing code.
@ -1026,3 +1088,1 @@
e1->vert = p_vert_lookup(handle, vkeys[i1], co[i1], e1);
e2->vert = p_vert_lookup(handle, vkeys[i2], co[i2], e2);
e3->vert = p_vert_lookup(handle, vkeys[i3], co[i3], e3);
float weight1, weight2, weight3;
Do not declare multiple variables on a single line.

I don't see this in our C/C++ guidelines; it does not make sense to me either.
@ -1680,6 +1755,7 @@ static void p_collapse_edge(PEdge *edge, PEdge *pair)
}
}
#if 0
Is this code unused? Can it be deleted? Or at least add a comment explaining why it's kept.
It should be kept, it's part of previously uncommented code that may still get used again.
@ -2201,0 +2250,4 @@
{
/* Move from `collapsed_*`. */
PVert *v, *nextv = nullptr;
`next_vert`, `vert_iter`, `next_edge`, `edge_iter`, `next_face`, ...

This is existing code that was moved; this is not the place to start refactoring it.
@ -2201,0 +2254,4 @@
PEdge *e, *nexte = nullptr;
PFace *f, *nextf = nullptr;
for (v = chart->collapsed_verts; v; v = nextv) {
for (PVert *vert_iter = chart->collapsed_verts; vert_iter != nullptr; vert_iter = next_vert)
This is existing code that was moved, not the place here to start refactoring that.
I do not see where this code is moved from.
It was moved out of p_chart_simplify_compute in the same file.
I do not see the change to this file in this patch. I do not think a copy-paste of a function from somewhere else as part of a new feature is such a trivial cleanup that it isn't worth being cleaned up further.

Ah, this was a function; I should have asked my first question about some other part of the code.
@ -3802,1 +3883,4 @@
float *tri_uv[3] = {uv[v0], uv[v1], uv[v2]};
float *tri_weight = nullptr;
float tri_weight_tmp[3];
Use `float3`.

That kind of refactoring should be done for the file as a whole; doing it for individual places just makes things more messy.
@ -4286,0 +4389,4 @@
p_collapsing_verts(edge, pair, &oldv, &keepv);
/* do not collapse a pinned vertex unless the target vertex
* is also pinned */
Add a period at the end of the sentence.
@ -4286,0 +4408,4 @@
return p_edge_length_squared(collapse_e);
}
static void p_chart_collapse_doubles(PChart *chart, float threshold)
const float threshold
@ -4286,0 +4443,4 @@
static bool p_validate_corrected_coords_point(const PEdge *corr_e,
const float corr_co1[3],
const float corr_co2[3],
float min_area,
const float min_area
@ -4286,0 +4520,4 @@
const PVert *other_v1 = e->next->vert;
const PVert *other_v2 = e->next->next->vert;
float area = area_tri_v3(corr_co, other_v1->co, other_v2->co);
const float area
@ -4286,0 +4542,4 @@
static bool p_edge_matrix(float R[3][3], const float edge_dir[3])
{
static const float eps = 1.0e-5;
constexpr float eps =
@ -4286,0 +4603,4 @@
float min_area,
float min_angle_cos)
{
static const float ref_edges[][3] = {{1.0f, 0.0f, 0.0f},
std::array<float3, ...>
@ -4286,0 +4635,4 @@
PEdge *e = f->edge;
for (int i = 0; i < 3; i++) {
float angle = (float)i / 3.0f * 2.0f * M_PI;
Use a function-style cast. Also, `math::numbers::pi`.

That kind of refactoring should be done for the file as a whole; doing it for individual places just makes things more inconsistent.
@ -4286,0 +4697,4 @@
float face_area = p_face_area(f);
PEdge *max_edge = nullptr;
float max_edge_len = -std::numeric_limits<float>::infinity();
::lowest()
-infinity and lowest are not the same.
@ -4286,0 +4700,4 @@
float max_edge_len = -std::numeric_limits<float>::infinity();
PEdge *min_edge = nullptr;
float min_edge_len = std::numeric_limits<float>::infinity();
::max()
infinity and max are not the same.
I asked to change this not because they are the same.

I still see no reason to change this code.

`max`/`lowest` initial values seem more convenient when looking for the min/max.

@ -4286,0 +4771,4 @@
PVert *corr_v = corr_e->vert;
/* check 4 distinct directions */
static const int DIR_COUNT = 16;
Fix the name style for the variables; use `constexpr int`.
@ -4286,0 +5168,4 @@
slim::MatrixTransferChart &mt_chart,
slim::PinnedVertexData &pinned_vertex_data)
{
auto &pinned_vertex_indices = pinned_vertex_data.pinned_vertex_indices;
Always write the type instead of `auto`, except in cases with complicated templates (like map iterators).