SIAM International Meshing Roundtable 2023 (Lecture Notes in Computational Science and Engineering, 147) [2024 ed.] 3031405935, 9783031405938

This volume comprises selected papers from the SIAM International Meshing Roundtable Workshop 2023 (SIAM IMR 2023), held on March 6–9, 2023, in Amsterdam, the Netherlands.


English · Pages: 472 [456] · Year: 2024



Table of contents:
SIAM International Meshing Roundtable Workshop 2023 Organization
Reviewers
Preface
Contents
Data Structures and Management
Generation of Polygonal Meshes in Compact Space
1 Introduction
2 Background
2.1 Compact Representation of a Planar Graph
2.2 Half-Edge Data Structure
2.3 pemb Data Structure
2.4 The Polylla Algorithm
3 Half-Edge Data Structure Implementation
3.1 Non-compact Half-Edge: AoS half-edge
3.2 Compact Half-Edge
3.3 Additional Data Structures
4 Half-Edge Polylla Algorithm
4.1 Label Phase
4.2 Traversal Phase
4.3 Repair Phase
5 Experiments
5.1 Implementation
5.2 Datasets
5.3 Experimental Setup
5.4 Results
6 Conclusions and Future Work
References
Efficient KD-Tree Based Mesh Redistribution for Data Remapping Algorithms
1 Introduction
2 Bounding Box Algorithm
3 Mesh Redistribution Using a KD-Tree
3.1 Target Mesh Shape Approximation
3.2 Overlap Detection
3.3 Mesh Migration
4 Numerical Results
4.1 Sphere Shell Mesh
4.2 Tesseract Mesh
5 Conclusion
References
Coupe: A Mesh Partitioning Platform
1 Introduction
2 Mesh Partitioning and Load Balancing
2.1 Balance Oriented Partitioning
2.2 Geometric Partitioning
2.3 "Communication" Optimized Partitioning
2.4 "Memory" Optimized Partitioning
2.5 Other Specificities
3 Algorithms
3.1 Direct Algorithms
3.2 Refinement Algorithms
3.3 Algorithms Composition
4 Coupe: A Platform Dedicated to Mesh Partitioning
4.1 Rust as Primary Development Language
4.2 Integration with Other Languages
4.3 The Coupe Toolkit
5 Experiments
6 Conclusion and Perspectives
References
Formal Definition of Hexahedral Blocking Operations Using n-G-Maps
1 Introduction
1.1 State of the Art
1.2 Main Contributions and Outlook
2 Representing Block Structures with n-G-Maps
2.1 n-G-Map to Represent Topology
2.2 Orbits and Geometry Classification
2.3 Atomic Modification Operations
3 Hexahedral Blocking Operations
3.1 Sheet Selection
3.2 Sheet Collapse
3.3 Sheet Insertion
4 Conclusion and Future Works
References
Machine Learning
Machine Learning Classification and Reduction of CAD Parts
1 Introduction
2 Background
3 Overview
4 Features
5 Ground Truth
6 Machine Learning Methods
6.1 Neural Network
6.2 Ensemble of Decision Trees
7 In-Situ Classification
8 Results
9 Comparison
10 Implementation
10.1 Training
10.2 Prediction
11 Reduction of CAD Parts
11.1 Fastener Reduction
11.2 Spring Reduction
12 Conclusion
References
Predicting the Near-Optimal Mesh Spacing for a Simulation Using Machine Learning
1 Introduction
2 Mesh Spacing and Control
2.1 Mesh Spacing Controlled by Sources
2.2 Mesh Spacing Controlled by a Background Mesh
3 Target Spacing
4 Spacing Description Using Sources
4.1 Generating Point Sources for One Solution
4.2 Generating Global Sources for a Set of Solutions
5 Spacing Description Using a Background Mesh
5.1 Interpolating the Spacing on a Background Mesh
6 Using a Neural Network to Predict the Spacing
6.1 Spacing Prediction Using Sources
6.2 Spacing Prediction Using a Background Mesh
7 Numerical Examples
7.1 Near-Optimal Mesh Predictions on the ONERA M6 Wing
7.2 Near-Optimal Mesh Predictions on the Falcon Aircraft
8 Concluding Remarks
References
Mesh Generation for Fluid Applications
Block-Structured Quad Meshing for Supersonic Flow Simulations
1 Introduction
1.1 State of the Art
1.2 Main Contributions
2 Terminology and Problem Statement
2.1 Supersonic Vehicle and Environment
2.2 Approach Overview
3 Block-Structured Mesh Generation Algorithm
3.1 Vehicle Wall Block Discretization
3.2 Fields Computation
3.3 Blocking Generation
3.4 From Blocks to Quadrilaterals
4 Results and Applications
4.1 Mesh Quality
4.2 Navier–Stokes Equations
4.3 Subsonic NACA 0012 Airfoil
4.4 Supersonic Diamond Airfoil
5 Conclusion
References
Robust Generation of Quadrilateral/Prismatic Boundary Layer Meshes Based on Rigid Mapping
1 Introduction
1.1 Prismatic Mesh Generation
1.2 Rigid Transformation
1.3 Contribution
2 Methods Overview
3 Initial Mesh and Target Mesh Generation
3.1 Initial Mesh Generation
3.2 Target Mesh Generation
3.3 Multiple Normals Configuration
4 Rigid Mapping
4.1 Problem Statement
4.2 Energy Definition
4.3 Positive Volume Guarantee
5 Post Process
5.1 Retention Layer
5.2 Mesh Refinement
6 Result
6.1 IMR
6.2 30P–30N Airfoil
6.3 U-Shape
6.4 DLR F6 (One Layer)
7 Conclusion and Limitation
References
Explicit Interpolation-Based CFD Mesh Morphing
1 Introduction
2 Mesh Morphing Environment
2.1 Elements of Morphing Process
2.2 Preprocessing Environment Interaction
2.3 Morphing Application Space
3 Structured Mesh Morphing
3.1 Morphing Workflow Example
3.2 Method Performance
4 Unstructured Mesh Morphing
5 Concluding Remarks
References
Mesh Adaption and Refinement
A Method for Adaptive Anisotropic Refinement and Coarsening of Prismatic Polyhedra
1 Introduction
2 Methodology
2.1 Isotropic PUMA Terminology
2.2 Anisotropic PUMA Terminology
2.3 Coarsening with PUMA
2.4 Distributed Parallel PUMA
3 Examples
3.1 Refinement of a Tetrahedral Mesh with Boundary Layers
3.2 Isotropic PUMA for the Dam Break Problem
3.3 Anisotropic PUMA for Fuselage, Wing Configuration
3.4 Combined Isotropic and Anisotropic PUMA for Space Capsule Re-Entry
4 Conclusions
References
Tetrahedralization of Hexahedral Mesh
1 Introduction
2 Background
2.1 Hexahedral Triangulation
2.2 Prism Decomposition
3 General Hex-to-Tet: A General Algorithm for Tetrahedralizing a Hexahedral Complex
3.1 Generalizing Prism Decomposition to Cubes
3.2 Decomposition into Five Tetrahedra
3.3 Solving the Degenerate Cases
3.4 The Main Algorithm
4 Conclusion and Future Work
References
Combinatorial Methods in Grid Based Meshing
1 Introduction
2 Background
3 Related Work
4 Algorithm
5 Super Element Generation
6 Super Element Assignment
7 Entity Mapping
8 Results on Examples
9 Conclusion and Future Work
References
Estimating the Number of Similarity Classes for Marked Bisection in General Dimensions
1 Introduction
2 Preliminaries
2.1 Simplicial Meshes, Conformity, and Bisection
2.2 Marked Bisection
2.3 Unique Mid-Vertex Identifiers
2.4 Consistent Bisection Edge
2.5 Similarity Classes
3 Marked Bisection in General Dimensions
3.1 Co-Dimensional Marking Process
3.2 First Bisection Stage: Tree Simplices
3.3 Second Bisection Stage: Casting to Maubach
3.4 Third Stage: Maubach's Bisection
4 Estimation of the Number of Similarity Classes
5 Number of Uniform Refinements to Obtain All the Similarity Classes
6 Examples
6.1 Number of Similarity Classes
7 Concluding Remarks
8 Algorithms
References
Cross Field Mesh Generation
Quadrilateral Mesh of Non-simply Connected Domain and Non-planar Surfaces From a Given Cross-Field
1 Introduction and Related Work
2 Quadrilateral Mesh From a Cross-Field
2.1 Cross-Fields Definition
2.2 Index
2.3 Compatibility Constraint on the Cross-Field
2.4 Alignment of Cross-Field
2.5 Boundary Singularities
3 Non-simply Connected Domains
4 Case of Non-planar Surfaces
5 Conclusion
References
Ground Truth Crossfield Guided Mesher-Native Box Imprinting for Automotive Crash Analysis
1 Problem Definition
2 Previous Work
3 Mesher-Native Imprinting Strategy
3.1 Design and Architecture for Mesher-Native Shape Imprinting
3.2 Mesh Imprinting Box-With-Hole Shape
3.3 Shape Imprint Driven Multiblocking
4 Mesh Direction Fields
4.1 Method I: Minimum Oriented Bounding Box Based (MOBB)
4.2 Method II: Ground Truth Frame/Cross Field Based (GTFF/GTCF)
5 Box-With-Hole Orientation
6 Hole Orientation Inside Box
7 Meshing Algorithms for the Virtual Faces
7.1 Washer Mesh Control
7.2 Templatized Meshers for BWH Face
7.3 Box Sizing
7.4 Mesher Selection Algorithm
8 Mesh Quality for Crash Analysis
9 Conclusion
Appendix I
A UML Sequence Diagram of the Proposed Architecture for Mesher-Native Shape Imprinting
Appendix II
Performance Analysis
References
Integrable Cross-Field Generation Based on Imposed Singularity Configuration—The 2D Manifold Case
1 Introduction and Related Work
2 Cross-Field Computation on Prescribed Singularity Configuration
2.1 Curvature and Levi-Civita Connection on the 2D Manifold
2.2 Conformal Mapping
3 Integrability Condition with Isotropic Scaling
3.1 H PDE on the Boundary
3.2 H PDE in the Smooth Region on the Interior of M
3.3 H PDE at Singular Points
3.4 Boundary Value Problem for H
3.5 Retrieving Crosses Orientation From H
4 Preliminary Results
4.1 Valid Singularity Configurations for Conformal Quad Meshing
4.2 Dealing with Suboptimal Distribution of Singularities
5 Integrability Condition with Anisotropic Scaling
5.1 Local Manifold Basis Generation and θ Initialization
5.2 Computing (H1, H2) From Imposed θ̄
5.3 Computing θ From (H̄1, H̄2)
5.4 Minimizing Integrability Error E Regarding (θ,H1,H2)
6 Conclusion and Future Work
References
Element Design
Optimally Convergent Isoparametric P² Mesh Generation
1 Introduction
2 Interpolation Error Model and Metric Tensor
2.1 Curve Parameterizations
2.2 Interpolation Error Estimate
2.3 Optimal Metric
3 Mesh Generation
3.1 Principal Directions of the Mesh
3.2 Vertices Generation and Triangulation
3.3 Curving the Edges
3.4 Making the Mesh Valid
3.5 Edge Swaps
4 Numerical Results
5 Conclusion and Future Work
References
Towards a Volume Mesh Generator Tailored for NEFEM
1 Introduction
2 NEFEM Fundamentals
2.1 NEFEM Rationale
2.2 Geometric Mapping of NEFEM Elements
3 NEFEM Surface Mesh Generation
3.1 Surface Meshing Strategy
3.2 GS-Points
3.3 The Sub-Mesh
3.4 Validity Check
4 NEFEM Volume Mesh Generation
4.1 Volume Meshing Strategy
4.2 Growing Volume Elements
4.3 Self-intersection Check
5 Examples
5.1 A Flat Plate with Two Cylinders
5.2 A Wing with a Blunt Trailing Edge
5.3 Falcon Aircraft
6 Concluding Remarks
References
Curvilinear Mesh Generation for the High-Order Virtual Element Method (VEM)
1 Introduction
2 High-Order VEM Basics
2.1 Extension to Curved Edges
3 "A posteriori" High-Order VEM Mesh Generation
3.1 Generation of the Straight-Sided Polygonal Mesh
3.2 API to a CAD Engine for Geometrical Queries
3.3 CAD Projection of Additional Points
3.4 Ensuring Mesh Validity
3.5 Implementation
4 Verification and Example of Application
4.1 VEM Verification
4.2 A Practical 2D Geometry
5 Conclusions and Further Work
References
Refining Simplex Points for Scalable Estimation of the Lebesgue Constant
1 Introduction
2 Related Work
3 Neighbor-Aware Coordinates for Point Refinement
3.1 Outline
3.2 Neighbor-Aware Coordinates
3.3 Point Refinement
3.4 Smooth Gradation
4 Adaptive Point Refinement
4.1 Algorithm
4.2 Stopping Criterion
5 Results: Estimation of the Lebesgue Constant
5.1 Verification in 2D and 3D
5.2 Performance Comparison in 2D
5.3 Results in 4D, 5D, and 6D
6 Concluding Remarks
References


Lecture Notes in Computational Science and Engineering Volume 147

Series Editors:
Timothy J. Barth, NASA Ames Research Center, Moffett Field, CA, USA
Michael Griebel, Institut für Numerische Simulation, Universität Bonn, Bonn, Germany
David E. Keyes, Applied Mathematics and Computational Science, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
Risto M. Nieminen, Department of Applied Physics, Aalto University School of Science & Technology, Aalto, Finland
Dirk Roose, Department of Computer Science, Katholieke Universiteit Leuven, Leuven, Belgium
Tamar Schlick, Courant Institute of Mathematical Sciences, New York University, New York, NY, USA

This series contains monographs of lecture notes type, lecture course material, and high-quality proceedings on topics described by the term “computational science and engineering”. This includes theoretical aspects of scientific computing such as mathematical modeling, optimization methods, discretization techniques, multiscale approaches, fast solution algorithms, parallelization, and visualization methods as well as the application of these approaches throughout the disciplines of biology, chemistry, physics, engineering, earth sciences, and economics.

Eloi Ruiz-Gironés · Rubén Sevilla · David Moxey Editors

SIAM International Meshing Roundtable 2023

Editors:
Eloi Ruiz-Gironés, Barcelona Supercomputing Center, Barcelona, Spain
Rubén Sevilla, College of Engineering, Swansea University, Swansea, UK
David Moxey, King's College London, London, UK

ISSN 1439-7358 · ISSN 2197-7100 (electronic)
Lecture Notes in Computational Science and Engineering
ISBN 978-3-031-40593-8 · ISBN 978-3-031-40594-5 (eBook)
https://doi.org/10.1007/978-3-031-40594-5

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

SIAM International Meshing Roundtable Workshop 2023 Organization

Organizing Committee

David Moxey (Committee Chair), King's College London, [email protected]
Eloi Ruiz-Gironés (Papers Chair), Barcelona Supercomputing Center—BSC, [email protected]
Rubén Sevilla (Papers Chair), Swansea University, [email protected]
Ketan Mittal (Research Notes Chair), Lawrence Livermore National Laboratory, [email protected]
Jonathan Makem (Technical Posters & Meshing Contest Chair), Siemens, [email protected]
Na Lei (Plenary Speakers Chair), Dalian University of Technology, [email protected]
Franck Ledoux (Short Courses Chair), CEA, [email protected]
Scott Mitchell (Discussion Panels Chair), Sandia National Laboratories, [email protected]
Carolyn Woeber (Sponsorship Chair), Cadence, [email protected]
Julian Marcon (Communications & Website Chair), Luminary Cloud, [email protected]
Jannis Teunissen (Local Organizing Chair), Centrum Wiskunde & Informatica, [email protected]

Steering Committee

Scott Canann (Committee Chair), CD-adapco, Siemens, [email protected]
Suzanne Shontz (SIAM Liaison), University of Kansas, [email protected]
Trevor Robinson, Queen's University Belfast, [email protected]


John Verdicchio, Siemens Digital Industries Software, [email protected]
John Chawner, Cadence, [email protected]

Reviewers

Cecil Armstrong, Queen's University Belfast
Pierre-Alexandre Beaufort, Universität Bern
David Bommes, University of Bern
Chris Budd, University of Bath
Jean Cabello, Siemens
Marcel Campen, Osnabrück University
Jean-Christophe Cuilliere, Université du Québec à Trois-Rivières
Franco Dassi, University of Milano-Bicocca
Nicola Ferro, Politecnico di Milano
Harry Fogg, Siemens
W. Randolph Franklin, Rensselaer Polytechnic Institute
Abel Gargallo-Peiró, Barcelona Supercomputing Center—BSC
David Xianfeng Gu, State University of New York at Stony Brook
Oubay Hassan, Swansea University
Nancy Hitschfeld, Universidad de Chile
Weiyang Lin, Siemens Digital Industries Software
Ahmed Mahmoud, Autodesk Research and UC Davis
Ivan Malcevic, General Electric Research Center
Loic Marechal, Dassault-Systemes/INRIA
Mohammad Al Bukhari Marzuki, Sultan Azlan Shah Polytechnic
Erik Melin, COMSOL AB
Nilanjan Mukherjee, Siemens Digital Industries Software
Walter Nissen, Lawrence Livermore National Laboratory
Mike Park, NASA
Joaquim Peiró, Imperial College London
Per-Olof Persson, UC Berkeley
Serge Prudhomme, Polytechnique Montréal
Alexander Rand, Siemens Digital Industries Software
Navamita Ray, Los Alamos National Laboratory
Jean-Francois Remacle, Université catholique de Louvain
Xevi Roca, Barcelona Supercomputing Center—BSC


Sergio Salinas-Fernández, University of Chile
Robert Schneiders, Magma Giessereitechnologie GmbH
Jose Pablo Suarez Rivero, Universidad de Las Palmas de Gran Canaria
Vijai Kumar Suriyababu, Technical University of Delft
Vladimir Tomov, Lawrence Livermore National Laboratory
John Verdicchio, Siemens
Chaman Singh Verma, Avail MedySystems
Nicholas Vining, NVIDIA
Jeroen Wackers, École Centrale de Nantes/CNRS
Rui Wang, Ningbo University
Hongfei Ye, Zhejiang University
Xi Zou, Swansea University


Preface

The papers in this volume were selected for presentation at the SIAM International Meshing Roundtable Workshop 2023 (SIAM IMR 2023), held on March 6–9, 2023, in Amsterdam, the Netherlands. The IMR was started by Sandia National Laboratories in 1992 as a small meeting of organizations striving to establish a common focus for research and development in the field of mesh generation. Since 2021, the IMR has been held under the umbrella of the Society for Industrial and Applied Mathematics (SIAM) and, for two years, was held online because of the COVID-19 pandemic. Thus, SIAM IMR 2023 was our first in-person conference since 2019. We thank David Moxey and Scott Canann, as chairs of the organizing and steering committees, for their efforts in organizing the conference.

The SIAM International Meshing Roundtable 2023 consisted of short courses, technical presentations from keynotes, contributed talks and research notes, a poster session and meshing contest, and discussion panels. The Steering & Organizing Committee would like to express its appreciation to all the participants who made SIAM IMR 2023 a successful and enriching experience. In particular, we extend our appreciation to the plenary speakers, the course instructors, and the panelists for their time and effort.

The papers in these proceedings present novel contributions that range from theory to technical applications. The committee selected these papers based on the input of peer reviewers regarding quality, originality, and appropriateness to the theme of the SIAM IMR. We would like to thank all the people who submitted a paper. We also extend our appreciation to the colleagues who reviewed the submitted manuscripts. We acknowledge the names of these reviewers on the preceding pages.


The conference received travel support from SIAM for student attendees. We deeply acknowledge the support of Cadence, Los Alamos National Laboratory, Siemens, and DesignFOIL. We would also like to thank Nada Mitrovic and all the staff of the Centrum Wiskunde & Informatica (CWI) for their support in holding the in-person conference and all the help they provided. Finally, we also explicitly thank Wil Schilders (TU/e) as the CSE23 co-chair and Richard Moore for the SIAM support.

March 2023

SIAM IMR 2023 Steering & Organizing Committee

Contents

Data Structures and Management

Generation of Polygonal Meshes in Compact Space ............ 3
Sergio Salinas-Fernández, José Fuentes-Sepúlveda, and Nancy Hitschfeld-Kahler

Efficient KD-Tree Based Mesh Redistribution for Data Remapping Algorithms ............ 25
Navamita Ray, Daniel Shevitz, Yipeng Li, Rao Garimella, Angela Herring, Evgeny Kikinzon, Konstantin Lipnikov, Hoby Rakotoarivelo, and Jan Velechovsky

Coupe: A Mesh Partitioning Platform ............ 43
Cédric Chevalier, Hubert Hirtz, Franck Ledoux, and Sébastien Morais

Formal Definition of Hexahedral Blocking Operations Using n-G-Maps ............ 65
Valentin Postat, Nicolas Le Goff, Simon Calderan, Franck Ledoux, and Guillaume Hutzler

Machine Learning

Machine Learning Classification and Reduction of CAD Parts ............ 93
Steven J. Owen, Armida J. Carbajal, Matthew G. Peterson, and Corey D. Ernst

Predicting the Near-Optimal Mesh Spacing for a Simulation Using Machine Learning ............ 115
Callum Lock, Oubay Hassan, Ruben Sevilla, and Jason Jones

Mesh Generation for Fluid Applications

Block-Structured Quad Meshing for Supersonic Flow Simulations ............ 139
Claire Roche, Jérôme Breil, Thierry Hocquellet, and Franck Ledoux

Robust Generation of Quadrilateral/Prismatic Boundary Layer Meshes Based on Rigid Mapping ............ 167
Hongfei Ye, Taoran Liu, Jianjun Chen, and Yao Zheng

Explicit Interpolation-Based CFD Mesh Morphing ............ 189
Ivan Malcevic and Arash Mousavi

Mesh Adaption and Refinement

A Method for Adaptive Anisotropic Refinement and Coarsening of Prismatic Polyhedra ............ 219
Sandeep Menon and Thomas Gessner

Tetrahedralization of Hexahedral Mesh ............ 239
Aman Timalsina and Matthew Knepley

Combinatorial Methods in Grid Based Meshing ............ 253
Henrik Stromberg, Valentin Mayer-Eichberger, and Armin Lohrengel

Estimating the Number of Similarity Classes for Marked Bisection in General Dimensions ............ 271
Guillem Belda-Ferrín, Eloi Ruiz-Gironés, and Xevi Roca

Cross Field Mesh Generation

Quadrilateral Mesh of Non-simply Connected Domain and Non-planar Surfaces From a Given Cross-Field ............ 293
Kokou M. Dotse, Vincent Mouysset, and Sébastien Pernet

Ground Truth Crossfield Guided Mesher-Native Box Imprinting for Automotive Crash Analysis ............ 313
Nilanjan Mukherjee

Integrable Cross-Field Generation Based on Imposed Singularity Configuration—The 2D Manifold Case ............ 343
Jovana Jezdimirović, Alexandre Chemin, and Jean-François Remacle

Element Design

Optimally Convergent Isoparametric P² Mesh Generation ............ 373
Arthur Bawin, André Garon, and Jean-François Remacle

Towards a Volume Mesh Generator Tailored for NEFEM ............ 397
Xi Zou, Sui Bun Lo, Ruben Sevilla, Oubay Hassan, and Kenneth Morgan

Curvilinear Mesh Generation for the High-Order Virtual Element Method (VEM) ............ 419
Kaloyan Kirilov, Joaquim Peiró, Mashy Green, David Moxey, Lourenço Beirão da Veiga, Franco Dassi, and Alessandro Russo

Refining Simplex Points for Scalable Estimation of the Lebesgue Constant ............ 441
Albert Jiménez-Ramos, Abel Gargallo-Peiró, and Xevi Roca

Data Structures and Management

Generation of Polygonal Meshes in Compact Space

Sergio Salinas-Fernández, José Fuentes-Sepúlveda, and Nancy Hitschfeld-Kahler

1 Introduction

Polygonal mesh generation is a broadly studied research area, with applications in many fields such as computer graphics [1], geographic information systems [2], and Finite Element Methods (FEM) [3], among others. In the particular case of FEM, the polygons composing a mesh have to fulfill certain shape quality criteria. Typical meshes tend to contain only triangles or quadrilaterals, except for Voronoi meshes, which contain convex polygons as basic cells [4]. In recent years, the Virtual Element Method (VEM) [5] has shown that mesh generation can be based not only on convex but also on non-convex polygons [6, 7], opening a new research line in the generation of quality meshes for VEM [8, 9].

There are several approaches to generating spatial discretizations, usually composed of triangles, quadrilaterals, or both cell types [10, 11]. In general, meshing algorithms can be classified into two groups [12, 13]: (i) direct algorithms, where meshes are generated from the input geometry, and (ii) indirect algorithms, where meshes are generated starting from an input mesh, typically an initial triangle mesh. By joining triangles, several algorithms have been developed to generate quad meshes [14–16]. Such mesh generators are also known as tri-to-polygon mesh generators. An advantage of using indirect methods is that the automatic generation of triangular meshes is a


well-studied problem, and several efficient and robust tools are freely available to generate triangulations [17–19].

Huge simulations such as hydrological modeling of the earth's surface, earthquakes, and climate modeling, among other applications, require solving numerical methods on meshes with millions of points and faces. A way to handle these kinds of applications is GPU parallel programming, but GPU solutions are more limited in memory than CPU solutions. One approach to this memory problem is to use compact data structures. Compact data structures store information using a compact representation while supporting operations directly over that representation. Examples of compact data structures include integer vectors, trees, graphs, text indexes, etc. (see [20] for a thorough list). Of particular interest for this work are the results of Ferres et al. [21] for representing planar graph embeddings in compact space. From now on, we will refer to their compact data structure as pemb. Interestingly, pemb can be seen as a compact version of the half-edge data structure [22], using around 5 bits per edge. Given a planar graph τ and an arbitrary edge e and face f of τ, pemb ensures that:

Edge .e has an orientation Edge .e has a twin edge with opposite orientation Edge .e is accessible by random access All edges of face . f have the same orientation

In this work we show how to use pemb as a compact half-edge data structure to implement compact Polylla, a compact version of the polygonal mesh generator Polylla [23], a mesh generator based on terminal-edge regions. The original version of Polylla does not use the half-edge data structure, so as a byproduct we will provide a new non-compact version based on the half-edge data structure. An example of a Polylla mesh is shown in Fig. 1. Both pemb and Polylla are aimed to work on planar 2D polygonal meshes. Thus, all the results of this work are limited to 2d geometries defined by PSLGs. The paper is organized as follows: Sect. 2 explains the concepts necessary to understand this paper. Section 3 explains the implementation of the compact and the non-compact half-edge data structure. Section 4 shows a half-edge version of Polylla. Section 5 shows the experiments of time and memory, and Sect. 6 shows the conclusions and future work for Polylla and pemb.

2 Background This section explains the half-edge data structure, how the pemb data structure works, and the Polylla algorithm.

Generation of Polygonal Meshes in Compact Space

5

Fig. 1 Polylla mesh of the football team Club Universidad de Chile’s logo. The mesh contains 410 polygons and 1039 edges. White spaces represent holes

2.1 Compact Representation of a Planar Graph A way to address the memory problem of representing large data sets is using lossless compression, that is, reducing the number of bits as much as possible without losing information. However, in general, a disadvantage of this alternative is that it only allows fast navigation with decompressing the data. An alternative is to represent the data using a compact representation and build data structures on top of it to support fast operations. For instance, Tutte [24] showed that 3.58 m bits suffice to represent a planar graph embedding with m edges in the worst case. In the case of triangular meshes, works exist that try to reach such a bound. For example, the compact data structure showed in [25] reduces the representation of a graph by assigning a unique id to each half-edge and stores only the correspondence between adjacent half-edges and a mapping from each vertex to any of its incident half-edges. Catalog representation [26] gathers the triangles of a triangulation into patches to reduce the number of references to the elements in the triangulation. The Star-Vertex data structure [27] stores the geometrical position of the vertices and a list of their neighbor’s vertices to represent a planar mesh. More compact data structures to represent planar graphs can be seen in [28, 29]. The space consumption of the previous works is . O(m) references, equivalent to . O(m log m) bits. For a more detailed analysis, see [28, Sect. I]. The space consumption can be improved to . O(m) bits using succinct data structures. In this paper, we use the work of Ferres et al. [30], a succinct data structure to represent planar embeddings (see Sect. 2.3). A succinct data structure [20, Foreword] is a more restricted version of a compact data structure, where a

6

S. Salinas-Fernández et al.

Fig. 2 Visual representation of the queries that can be applied to the half-edge .e

combinatorial object, such as a graph or a tree, is represented using space closed to its information theory lower bound and add only a lower-order term to support fast queries. Aleardi and Devillers showed a similar result [31] that uses the succinct representation of the Schnyder wood triangulation to get similar queries of the winged-edge data structure [32]. However, their solution is limited to triangulations, while Ferres et al. works for any planar graph embedding.

2.2 Half-Edge Data Structure The half-edge data structure, also known as doubly connected edge list (DCEL) [22], is an edge-based data structure where each edge of a polygonal mesh is represented as two half-edges of opposite orientation. Each half-edge contains information about its orientation and adjacent elements, allowing easy navigation inside the mesh. Given a half-edge .e, the primitive queries [33, Chap. 2] supported by the data structure are the following (see Fig. 2): • twin(.e): return the opposite half-edge of .e, sharing the same endpoints. • next(.e): return the half-edge next to .e inside the same face in counter-clockwise order. • prev(.e): return the half-edge previous to .e inside the same face in counterclockwise order. • origin(.e): return the source vertex of the half-edge .e. • target(.e): return the target vertex of the half-edge .e. • face(.e): return the index of the incident face to the half-edge .e. Based on the primitive queries, more complex queries can be defined. Given a half-edge .e, vertex .v, and face . f of a mesh, we define the following complex queries that will be used later: • CCWvertexEdge(.e): return the half-edge with source vertex .origin(e) and next to .e in counter-clockwise. • CWvertexEdge(.e): return the half-edge with source vertex .origin(e) and previous to .e in clockwise.

Generation of Polygonal Meshes in Compact Space

• • • • •

7

edgeOfVertex(.v): return an arbitrary half-edge with source vertex .v. incidentHalfEdge(. f ): return an arbitrary half-edge delimiting face . f . isBorder(.e): return true if half-edge .e is incident to the outer face of the mesh. length(.e): return the length of the half-edge .e. degree(.v): return the number of the edges incident to with source vertex .v.

2.3 pemb Data Sructure pemb [30] is a compact data structure designed to represent planar graph embeddings. Given a planar graph embedding .τ = (V, E), pemb represents .τ in .4|E| + o(|E|) bits and support navigational operations in near-optimal time [34]. To construct pemb, .τ is decomposed into two spanning trees: an arbitrary spanning tree .T for .τ and the complementary spanning tree .T ' of the dual graph of .τ . Thus, navigational operations over .τ are mapped to navigational operations over the spanning trees. .T is traversed in counter-clockwise order in a depth-first manner, starting at an edge incident to the outer face, generating a balanced parenthesis representation of .T , where open/close parentheses are represented with .0/.1 bits, respectively. During the traversal of.T , a clockwise depth-first traversal of.T ' is induced, generating a balanced parenthesis representation of.T ' and a bitvector representing how both spanning trees are intertwined. The balanced parenthesis representations of both trees are stored as compact trees [20, Chap. 8], while the bitvector is stored as a compact bitvector [20, Chap. 4]. After the construction of pemb, the vertices are referred to by their rank in the depth-first traversal of .T . Thus, the first visited vertex in the traversal has id .0 and the last id .|V | − 1. A planar graph .τ = (V, E) is represented as three bitvectors: • a bitvector . A[1..2|E|] in which . A[i] = 1 if and only if the .ith edge we process in the traversal of .τ is in .T , and . A[i] = 0 otherwise. • a bitvector . B[1..2(|V | − 1)] in which . B[i] = 0 if and only if the .ith time we process an edge in .T during the traversal, is the first time we process that edge, and . B[i] = 1 otherwise. • a bitvector . B ∗ [1..2(|E| − |V | + 1)] in which . B ∗ [i] = 0 if and only if the .ith time we process an edge in .T ' during the traversal, is the first time we process that edge, and . B ∗ [i] = 1 otherwise. Figure 3 shows an example of the decomposition of a triangulation into two intertwined spanning trees. Its representation is stored as: A[0..48] = 011000110001100000110101010000001101010011111110 B[0..22] = 0010101000010001111111 B ∗ [0..26] = 00001000000011111100011111

8

S. Salinas-Fernández et al.

Fig. 3 Representation of a triangulation as the decomposition into two spanning trees. Thick edges represent the edges of the spanning trees, red for the spanning tree of the triangulation, .T , and green for the spanning tree of the dual, .T ' . For each edge of the triangulation, its orientation and rank after the traversal .T are shown

Some of the operations supported by pemb that we will use in our compact halfedge representation, are: • pemb_vertex(.i): return the id of source vertex of the .ith visited edge. • pemb_first(.v): return .i such that when visiting the .ith edge during the traversal of .T , it is the first edge whose source vertex is .v. • pemb_last(.v): return .i such that when visiting the .ith edge during the traversal of .T , it is the last edge whose source vertex is .v. • pemb_next(.i): return . j such that the . jth visited edge is next to the .ith edge, in counter-clockwise, of the visited edges of pemb_vertex(.i) during the traversal of . T . If the .ith edge corresponds to the last visited edge of pemb_vertex(.i), then return pemb_first(.v). • pemb_prev(.i): return . j such that the . jth visited edge is previous to the .ith edge, in counter-clockwise, of the visited edges of pemb_vertex(.i) during the traversal of .T . If the .ith edge corresponds to the first visited edge of pemb_vertex(.i), then return pemb_last(.v). • pemb_mate(.i): return . j such that we process the same edge .ith and . jth during the traversal of .T ; • pemb_degree(.v): return the number of edges incident to vertex .v. • pemb_first_dual(. f ): return the position of the first visited edge incident to face . f during the traversal of . T . • pemb_get_face(.e): return the id of the face incident to edge .e.

Generation of Polygonal Meshes in Compact Space

9

Fig. 4 Example of a longest-edge propagation path of a triangle and a terminal-edge region. Dashed edges are the terminal-edge. The marked polygon is a terminal-edge region formed by the union of the triangles belonging to the Lepp(.ta ), Lepp(.tb ), and Lepp(.tc ). The triangles with the line pattern correspond to Lepp(.tc )

2.4 The Polylla Algorithm The Polylla mesh generator [23] takes an initial triangulation as input .τ = (V, E) to generate a polygonal mesh .τ ' = (V, E ' ). Any triangulation works. The algorithm merges triangles to generate polygons of arbitrary shape (convex and non-convex shapes). To understand how the algorithm works, we must introduce the longestedge propagation path and terminal-edge regions. Definition 1 (Longest-edge propagation path [35]) For any triangle .t0 of any conforming triangulation .τ , the Longest-Edge Propagation Path of .t0 (. Lepp(t0 )) is the ordered list of all the triangles .t0 , t1 , t2 , ..., tn−1 , such that .ti is the neighbor triangle of .ti−1 by the longest edge of .ti−1 , for .i = 1, 2, ..., n. The longest-edge adjacent to .tn and .tn−1 is called terminal-edge. Definition 2 (Terminal-edge region [36]) A terminal-edge region . R is a region formed by the union of all triangles .ti such that Lepp(.ti ) has the same terminaledge. An example of both concepts is shown in Fig. 4. To convert terminal-edge regions into polygons, the Polylla algorithm works in three main phases: (i) Label phase: Each edge.e ∈ E, adjacent to triangles.t1 and.t2 , is labelled according its length as terminal-edge, internal-edge or frontier-edge: • Internal-edge: .e is the longest edge of .t1 or .t2 , but not of both. • Frontier-edge [37]: .e is neither the longest-edge of .t1 nor .t2 . If .t2 = null, .e is also a frontier-edge. Frontier-edges are the border of terminal-edge regions and so the edges of the polygons in the final mesh. A particular case of frontier-edges is barrier-edges

10

S. Salinas-Fernández et al.

Fig. 5 The output of the label phase to generate terminal-edge regions. Black lines are frontier-edges, and dotted gray lines are internal-edges. Terminal-edges are red dashed lines. Since terminal-edges can be inside or at the boundary of the geometric domain, dashed lines are border terminal-edges, and dotted dashed lines are internal terminal-edges. Barrier-edge tips are green squared vertices and seed triangles with a blue cross

where .t1 and .t2 belong to the same terminal-edge region. An endpoint of a barrier-edge belonging to only one frontier-edge is called a barrier-edge tip. Figure 5 shows a triangulation with labeled edges and triangles. The labeled triangles are terminal triangles, i.e., triangles that share a terminal-edge. In the next phase, one terminal triangle per each terminal-edge is labeled as seed triangle. (ii) Traversal phase: In this phase, polygons are generated from seed triangles.For each seed triangle, the vertices of frontier-edges are traversed and stored in counter-clockwise order, delimiting the frontier of the terminal-edge region. During the traversal, some non-simple polygons with barrier-edges can be generated. Those polygons are processed later in the next phase. An example of this phase is shown in Fig. 6. (iii) Repair phase: Non-simple polygons with barrier-edges (a polygon with dangling interior edges) are partitioned into simple polygons. Interior edges with a barrier-edge tip as an endpoint are used to split it into two new polygons, and per each new polygon, a triangle is labeled as a seed. The final output is a polygonal mesh composed of simple polygons after applying the Traversal phase to the new polygons. An example of this phase is shown in Fig. 7.

Generation of Polygonal Meshes in Compact Space

11

Fig. 6 Traversal phase example: arrows inside terminal-regions are the paths of the algorithm during the conversion from a terminal-edge region to a polygon. The path starts at a triangle labeled as a seed triangle. Each terminal-edge region has only one seed triangle

Fig. 7 Example of a non-simple polygon split using interior edges with barrier-edge tips as endpoints. a Non-simple polygon. b Middle interior edges incident to barrier-edge tips are labeled as frontier-edges (solid lines), and cross-labelled triangles are stored as seed triangles. c The algorithm repeats the travel phase using a new seed triangle but avoiding generating the same polygon again. Source [23]

3 Half-Edge Data Structure Implementation This section shows how to implement the non-compact and compact versions of the half-edge data structure. Additionally, we introduce some extra data structures needed for the Polylla algorithm.

12

S. Salinas-Fernández et al.

3.1 Non-compact Half-Edge: AoS half-edge To store the initial triangulation .τ = (V, E) as a set of half-edges, the half-edge data structure is implemented as an array-based adjacency list, storing the vertices in an array of length .|V | and the half-edges in an array of length .2|E|. Each vertex .v stores its coordinate (.v.coor d), the index of an arbitrary incident half-edge (.v.hedge), and a boolean indicating if .v is incident to the outer face (.v.is_bor der ). For each halfedge .e its source .(e.sr c) and target .(e.tgt) vertices, twin .(e.twin), next .(e.next) and previous .(e. pr ev) half-edges, incident face .(e. f ace) and a boolean indicating if .e is incident to the outer .(e.is_bor der )face are stored. The three half-edges bordering a face are stored consecutively in the array of half-edges, i.e., the half-edges of face .i are stored in the indices .3i, .3i + 1 and .3i + 2. Thus, most of the half-edge primitive queries are supported in constant time by returning the corresponding field (e.g. edgeOfVertex(.v) returns .v.hedge and isBorder(.e) returns .e.is_bor der ). More complex queries are supported as follows: • • • • •

CCWvertexEdge(.e): twin(next(.e)). CWvertexEdge(.e): twin(prev(.e)). incidentHalfEdge(. f ): Half-edge at index .3 f in the array of half-edges. length(.e): Euclidean distance of the coordinates of origin(.e) and target(.e). degree(.e): Using the query CCWvertexEdge(.e), iterate over the neighbors of origin(.e) until reaching .e.

3.2 Compact Half-Edge

Our compact representation of the half-edge data structure has two components: (1) a compact representation of the initial triangulation, using pemb, and (2) a non-compact vector with the coordinates of the vertices. The vertex identifiers in pemb are not necessarily the same as in the input triangulation, since pemb assigns new identifiers according to the traversal of the trees. To simplify the mapping between components (1) and (2), the coordinates of the vertex with id i in pemb are stored at entry i of the vector of coordinates. Notice that no extra data structures are needed, since pemb provides all the navigational queries required to implement the half-edge data structure. Thus, the compact half-edge data structure uses 4|E| + o(|E|) bits for the first component and O(|V| log |V|) bits for the second component, where the log |V| term comes from the fact that at least O(log |V|) bits are necessary to represent O(|V|) coordinates.

In pemb, the edges of a face are oriented clockwise, the opposite orientation to that of the half-edge data structure (see Fig. 3). Additionally, the queries pemb_next and pemb_prev have a different meaning than the queries next and prev of the half-edge data structure: the former refer to edges incident to a vertex, while the latter refer to edges incident to a face. It is possible to orient the faces of pemb counter-clockwise by traversing the primal spanning tree clockwise. In what follows, we show how to support the half-edge queries with pemb:

• twin(e): pemb_mate(e)
• next(e): pemb_prev(pemb_mate(e))
• prev(e): pemb_mate(pemb_next(e))
• origin(e): pemb_vertex(pemb_mate(e))
• target(e): pemb_vertex(e)
• face(e): pemb_get_face(e)

The additional queries are implemented as follows:

• CCWvertexEdge(e): pemb_next(e)
• CWvertexEdge(e): pemb_prev(e)
• edgeOfVertex(v): pemb_mate(pemb_first(v))
• incidentHalfEdge(f): pemb_first_dual(f)
• isBorder(e): return true if pemb_get_face(e) returns the id of the outer face; otherwise, return false
• length(e): the Euclidean distance between the coordinates, stored in component (2) of the compact half-edge, of origin(e) and target(e)
• degree(v): pemb_degree(v)
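The mapping above can be hidden behind the same half-edge interface as in Sect. 3.1. The sketch below is illustrative, not the authors' CompactTriangulation class; Pemb stands for any object exposing the pemb operations of Sect. 2.3 (abbreviated here as member functions), and the coordinate vectors form component (2).

```cpp
// Illustrative wrapper: the half-edge interface backed by pemb.
#include <vector>

template <typename Pemb>
struct CompactHalfEdge {
  Pemb pemb;                 // component (1): topology in 4|E| + o(|E|) bits
  std::vector<double> x, y;  // component (2): coordinates, indexed by pemb vertex id

  long twin(long e) const   { return pemb.mate(e); }
  long next(long e) const   { return pemb.prev(pemb.mate(e)); }
  long prev(long e) const   { return pemb.mate(pemb.next(e)); }
  long origin(long e) const { return pemb.vertex(pemb.mate(e)); }
  long target(long e) const { return pemb.vertex(e); }
  long face(long e) const   { return pemb.get_face(e); }
  long CCWvertexEdge(long e) const { return pemb.next(e); }
  long CWvertexEdge(long e) const  { return pemb.prev(e); }
  long edgeOfVertex(long v) const  { return pemb.mate(pemb.first(v)); }
  long degree(long v) const        { return pemb.degree(v); }
};
```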

3.3 Additional Data Structures

Before implementing the Polylla algorithm on top of the half-edge data structure, we need some additional temporary data structures. To label the edges of the triangulation, we use two bitvectors, max-edge and frontier-edge, to mark the longest edge of each triangle and the frontier edges, respectively. Both bitvectors are of length 2|E|, the number of half-edges. For the seed triangles, a vector called seed-list stores the indices of the incident terminal-edges. For the repair phase, we use two auxiliary structures to avoid duplicating polygons: subseed-list, which is initially empty, and the usage bitvector, of length |E|. Finally, the output mesh is stored as a 2-dimensional array called mesh array, where each row stores the set of vertices representing a polygon. Notice that we do not return a compact version of the output mesh directly. Instead, after the generation of the mesh array, we can store it in compact space by constructing its compact half-edge representation.


4 Half-Edge Polylla Algorithm

This section explains how to implement the Polylla algorithm using a half-edge data structure. The algorithm takes a triangulation τ = (V, E) as input and generates a polygonal mesh as output. All the phases of the Polylla algorithm run in O(|V|) time.

4.1 Label Phase

This phase labels each edge e ∈ E as a frontier-edge, as the longest edge of a face, and/or as a seed edge incident to a seed triangle. The pseudo-code of this process is shown in Algorithm 1. The algorithm iterates over each triangle t ∈ τ, where the edges delimiting t are obtained with the queries e = incidentHalfEdge(t), next(e), and prev(e). The edges of each triangle t are compared, and the id of the longest one is marked in the max-edge bitvector (lines 1–3). Afterward, the algorithm iterates over all the half-edges of τ. If a half-edge e or its twin twin(e) is at the geometric boundary, i.e., is_border(e) = true or is_border(twin(e)) = true, or if neither half-edge is marked in max-edge, then e is labelled as a frontier-edge (lines 4–9). Alongside, the algorithm searches for seed edges: if a half-edge e and its twin(e) form a terminal-edge or a border terminal-edge incident to an interior face, then the algorithm labels one of the two half-edges as a seed (lines 10–12).

Algorithm 1 Label phase
Input: Half-edge data structure HalfEdge
Output: Bitvectors frontier-edge and max-edge, and vector seed-list
 1: for all triangles t in HalfEdge do
 2:   Mark the longest edge of t in max-edge
 3: end for
 4: for all half-edges e in HalfEdge do
 5:   if neither e nor twin(e) is marked in max-edge then
 6:     Mark e in frontier-edge
 7:   else if e or twin(e) is a border edge then
 8:     Mark e in frontier-edge
 9:   end if
10:   if e is a terminal-edge or a border terminal-edge then
11:     Store the id of e or twin(e) in seed-list
12:   end if
13: end for


4.2 Traversal Phase

In the second phase, the algorithm uses the seed edges generated in the previous phase to build terminal-edge regions. For each generated region R, its vertices are stored in counter-clockwise order in a set P. For each seed half-edge e in seed-list, Algorithm 2 is called. The algorithm iterates in clockwise order around origin(e) until it finds a frontier-edge e_init, an edge that will be part of the final polygonal mesh (lines 1–7). Once a frontier-edge is found, the algorithm iterates, using the query CWvertexEdge(·), over the edges of the region R until reaching the next frontier-edge in counter-clockwise order (lines 8–14). The source vertex of each discovered frontier-edge is added to the output polygon (lines 7 and 13). This process ends when all boundary vertices of R are stored in P.

Each polygon P is then checked to determine whether it is simple. The algorithm iterates over all vertices in P, looking for three consecutive vertices v_i, v_j, and v_k with v_i = v_k. If such vertices are found, then v_j is a barrier-edge tip, and the polygon is non-simple. If the polygon is simple, it is stored in the mesh array; if not, it is sent to the repair phase.

Algorithm 2 Polygon construction
Input: Seed edge e of a terminal-edge region
Output: Arbitrary shape polygon P
 1: P ← ∅
 2: while e is not a frontier-edge do
 3:   e ← CWvertexEdge(e)
 4: end while
 5: e_init ← e
 6: e_curr ← next(e)
 7: P ← P ∪ origin(e)
 8: while e_init ≠ e_curr do
 9:   while e_curr is not a frontier-edge do
10:     e_curr ← CWvertexEdge(e_curr)
11:   end while
12:   e_curr ← next(e_curr)
13:   P ← P ∪ origin(e_curr)
14: end while
15: return P
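The following C++ sketch restates Algorithm 2 with the loop slightly restructured so that the source vertex of each frontier-edge is recorded at the moment the edge is discovered; it is illustrative only, not the authors' implementation. Here frontier is the frontier-edge bitvector produced by the label phase, indexed by half-edge id.

```cpp
// Illustrative restatement of Algorithm 2 (not the authors' code).
#include <vector>

template <typename Mesh>
std::vector<long> buildPolygon(const Mesh& mesh,
                               const std::vector<bool>& frontier,
                               long seed) {
  long e = seed;
  while (!frontier[e]) e = mesh.CWvertexEdge(e);  // find a first frontier-edge
  const long e_init = e;

  std::vector<long> P;  // boundary vertices of the terminal-edge region
  do {
    P.push_back(mesh.origin(e));  // source vertex of the current frontier-edge
    e = mesh.next(e);             // step past it ...
    while (!frontier[e]) e = mesh.CWvertexEdge(e);  // ... rotate to the next one
  } while (e != e_init);
  return P;
}
```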

4.3 Repair Phase

The repair phase works similarly to the label and traversal phases, but is limited to the triangles of a non-simple terminal-edge region. In summary, the algorithm labels an internal-edge e incident to each barrier-edge tip as a frontier-edge and repeats the traversal phase using the triangles adjacent to e to generate two new polygons (see Algorithm 3).


Given a non-simple polygon P, for each barrier-edge tip b ∈ P, the algorithm searches for the barrier-edge incident to b (lines 4–7). To do that, the algorithm uses the query edgeOfVertex(b) to get a starting half-edge e incident to b, from which the half-edges incident to b are traversed using CWvertexEdge(e) until a frontier-edge of b is found. Afterward, the algorithm chooses one of the internal-edges incident to b to split the polygon in two (lines 8–10). To choose an internal-edge, the algorithm calculates the number of incident edges as degree(b) − 1 (−1 because of the frontier-edge incident to b) and circles around b (degree(b) − 1)/2 times to split the polygon evenly. The chosen internal-edge e is labelled as a frontier-edge by marking its two half-edges in the frontier-edge bitvector; both half-edges are set to True in the usage bitvector, to mark them as visited during this phase, and are stored in subseed-list to use them later as seed edges to generate new polygons (lines 11–13). For each half-edge h in subseed-list with usage bitvector[h] = True, the algorithm repeats the traversal phase (line 18) to build a new polygon, setting usage bitvector[h] = False afterwards to avoid generating the same polygon more than once. The final set of simple polygons is returned and stored as part of the mesh in the mesh array.

Algorithm 3 Non-simple polygon reparation
Input: Non-simple polygon P
Output: Set of simple polygons S
 1: subseed-list L_p and usage bitvector A
 2: S ← ∅
 3: for all barrier-edge tips b in P do
 4:   e ← edgeOfVertex(b)
 5:   while e is not a frontier-edge do
 6:     e ← CWvertexEdge(e)
 7:   end while
 8:   for 0 to (degree(b) − 1)/2 do
 9:     e ← CWvertexEdge(e)
10:   end for
11:   Label e as a frontier-edge
12:   Save the half-edges h1 and h2 of e in L_p
13:   A[h1] ← True, A[h2] ← True
14: end for
15: for all half-edges h in L_p do
16:   if A[h] is True then
17:     A[h] ← False
18:     Generate a new polygon P' starting from h using Algorithm 2
19:     Set to False in A all half-edges used to generate P'
20:     S ← S ∪ P'
21:   end if
22: end for
23: return S


5 Experiments

5.1 Implementation

The Polylla algorithm and the compact data structures were implemented in C++.¹ The algorithm described in Sect. 4 was implemented as a class that calls virtual methods of the abstract class Mesh. This abstract class contains all the methods of the half-edge data structure shown in Sect. 2.2. Those methods were implemented in two child classes: Triangulation, which contains the implementations of the functions shown in Sect. 3.1, and CompactTriangulation, which contains the implementations of the functions shown in Sect. 3.2. The CompactTriangulation class encapsulates the class Pemb, which contains the implementation of pemb using the SDSL library [38].²

5.2 Datasets

To test our implementations, we generated several Delaunay triangulations from random point sets inside a square of dimensions 10,000 × 10,000. To see the behaviour of Polylla meshes on other datasets, see [23]. For the generation of the triangulations, we used the 2D triangulation package of the software CGAL [39]. An example of the generated meshes is shown in Fig. 8.

5.3 Experimental Setup

To run the experiments, we used a machine with an Intel(R) Xeon(R) E5-2640 v3 @ 2.60 GHz CPU and 126 GB of main memory. To measure memory consumption, we used the malloc_count library.³ From this library, we use the function malloc_count_peak() to obtain the peak memory consumption and malloc_count_current() to obtain the memory used to store the generated polygonal mesh. The size of pemb was obtained with the support of the SDSL library. The execution time was measured with the chrono library, without considering the time needed to load the input triangulations. Each experiment was run five times, and the average is reported. We generated meshes of 10 million to 40 million vertices.
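For reference, the two malloc_count functions named above are used roughly as follows (a minimal sketch, assuming the malloc_count library is compiled and linked into the binary):

```cpp
// Minimal usage sketch of the malloc_count hooks (illustrative).
#include <cstdio>
#include "malloc_count.h"  // from https://panthema.net/2013/malloc_count/

int main() {
  // ... build the half-edge structure and run Polylla here ...
  std::printf("current heap: %zu bytes\n", malloc_count_current());
  std::printf("peak heap:    %zu bytes\n", malloc_count_peak());
  return 0;
}
```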

¹ The source code of our implementations is available at https://github.com/ssalinasfe/Compact-Polylla-Mesh.
² The original implementation of pemb is available at https://github.com/jfuentess/sdsl-lite.
³ https://panthema.net/2013/malloc_count/.


Fig. 8 Example of a Polylla mesh generated from a Delaunay triangulation over 10k random vertices

5.4 Results

Half-edge data structure construction. Table 1 and Fig. 9 show the time needed to construct the half-edge data structure (GHF) and to generate the polygonal mesh using the Polylla algorithm (GP). The construction of the AoS half-edge data structure is 1.95x faster than the construction of the compact half-edge. Similarly, the generation of a polygonal mesh using the AoS half-edge is 42.7x faster than the generation using the compact half-edge. Additionally, as a reference, we include the time needed by CGAL to generate a Delaunay mesh from a random point set.

Table 1 Time comparison in minutes of the Delaunay triangulation generation (GDT), the half-edge data structure generation (GHF), and the Polylla mesh generation (GP)

        |        |      AoS      |    Compact
  #V    |  GDT   |  GHF    GP    |  GHF     GP
  10 M  |  4.67  |  0.14   0.58  |  0.26   23.86
  15 M  |  7.19  |  0.20   0.87  |  0.39   36.31
  20 M  |  9.58  |  0.27   1.15  |  0.52   48.82
  25 M  | 11.92  |  0.33   1.43  |  0.65   60.90
  30 M  | 14.70  |  0.40   1.73  |  0.78   74.06
  35 M  | 16.08  |  0.47   2.01  |  0.92   87.77
  40 M  | 19.17  |  0.52   2.25  |  1.05   98.67


Fig. 9 (Log-log plot) Running time, in minutes, to generate the data structures. The continuous lines show the time to generate the Polylla mesh using the compact half-edge and the AoS half-edge (GP compact and GP AoS, respectively), and the Delaunay triangulations (GDT) using CGAL. The dashed lines correspond to the time to generate the compact half-edge and the AoS half-edge (GHF compact and GHF AoS, respectively)

Fig. 10 (LogLog plot) Running time, in minutes, of each phase of the Polylla algorithm using the AoS half-edge and the compact half-edge. The continuous line is the total time of the algorithm, while the dashed lines show the time for each phase

Phases of the Polylla algorithm. Figure 10 shows the running time to generate polygonal meshes with Polylla. Regardless of the data structure used to generate the Polylla meshes, all phases of Polylla show the same growth.


Table 2 Memory usage in gigabytes by the algorithm. HF is the memory used to store the half-edge data structure, GHF is the memory cost to generate the triangulation, and GP is the memory cost to generate the Polylla mesh. In the case of the compact HF, HF is also the memory used to store the vertex coordinates (Coord) and the Pemb data structure. The Polylla column is the memory used to store the Polylla mesh

#V     AoS HF  AoS GHF  AoS GP  Compact HF  Pemb  Coord  Compact GHF  Compact GP  Polylla
10 M   1.79    3.02     2.25    0.17        0.02  0.15   1.96         0.63        0.60
15 M   2.68    4.53     3.49    0.25        0.03  0.23   2.95         1.06        0.95
20 M   3.58    6.03     4.50    0.34        0.04  0.30   3.93         1.26        1.20
25 M   4.47    7.54     5.54    0.42        0.05  0.38   4.91         1.49        1.44
30 M   5.36    9.05     6.97    0.51        0.06  0.45   5.90         2.12        1.91
35 M   6.26    10.56    7.99    0.59        0.07  0.53   6.88         2.32        2.15
40 M   7.15    12.07    9.01    0.68        0.08  0.60   7.86         2.53        2.40

Recall that all the phases of Polylla have a complexity of O(|V|). Notice that each phase uses different queries. The most costly phase is the label phase, which visits all faces in the triangulation, calculates the length of the edges using the queries next(·) and prev(·), and then labels the edges using the queries is_border(·) and twin(·); a sketch of this phase is given after the footnote below. The second most costly phase is the traversal phase. This phase uses the queries origin(·), next(·) and CWvertexEdge(·) to generate each polygon. During this phase, all the edges of the triangulation are revisited. The repair phase is the fastest, as it is applied to only 1% of the polygons [23] generated in the traversal phase. One particular query used during the repair phase is degree(·), which is used to calculate the middle internal edge.

Memory usage. The results of the memory usage are shown in Table 2. To calculate the memory usage to generate the data structures, we compute the memory peak of the algorithm (the columns with the prefix "G"). The memory usage for the triangulation once the half-edge data structure was created is shown in the columns without the "G" prefix. It can be observed that generating the polygonal meshes using the AoS version (AoS GP) requires 3.49x more memory than the compact version (compact GP). In the case of the generation of the data structures (GHF), the AoS half-edge generation (AoS GHF) takes 3.49x more memory than the compact version (compact GHF). The peak of memory usage is shown in Fig. 12. After the half-edge data structure (compact or non-compact) was initialized, the memory usage decreased, because several pieces of temporary information were no longer needed. The memory usage during the application of the Polylla phases is shown in Fig. 11. The topological information of a triangulation can be compacted by 99%⁴ with respect to the AoS half-edge, that is, without considering the memory used to store the coordinates.

⁴ Obtained by dividing the Pemb and AoS HF columns in Table 2.
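To make the role of these queries concrete, the fragment below sketches the core of the label phase against the Mesh interface sketched in Sect. 5.1. The length() helper and the overall shape are our illustration, not the actual Polylla source.

```cpp
#include <cstddef>
#include <vector>

// Label, for every half-edge, whether it is the longest edge of its
// triangle (max_edge must be presized to n_halfedges). Frontier-edge
// labelling via is_border()/twin() would follow the same pattern.
void label_max_edges(const Mesh& m, std::size_t n_halfedges,
                     double (*length)(const Mesh&, std::size_t),
                     std::vector<bool>& max_edge) {
  for (std::size_t e = 0; e < n_halfedges; ++e) {
    double le = length(m, e);
    if (le >= length(m, m.next(e)) && le >= length(m, m.prev(e)))
      max_edge[e] = true;   // longest edge of the triangle containing e
  }
}
```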


Fig. 11 Memory, in gigabytes, used to store the data structures

Fig. 12 Peaks of memory achieved, in gigabytes, during the generation of the data structures

Most of the memory used by the compact and non-compact half-edge data structures is related to the coordinates of each vertex of the triangulation. The memory to store the compact triangulation is distributed as 88.67% for the point coordinates and 11.33% for the half-edge data structure. As the vertex positions are floating-point values, they cannot be compacted easily.


Despite the fact that the compact half-edge is slower than the AoS half-edge, we argue that in scenarios where the non-compact representation of the half-edge does not fit in main memory while the compact half-edge does, the latter will be faster due to the memory hierarchy effect. Empirical evidence for this scenario, applied to general planar embeddings, can be found in [21].

6 Conclusions and Future Work

We have shown that the succinct data structure known as pemb is useful for representing polygon meshes and generating tri-to-polygon meshes. Using pemb, the space usage of a mesh is largely reduced, allowing the processing of huge meshes. One of the advantages of pemb is that its queries reduce to simple and fast operations over three static bitvectors. We expect a future development where those operations work on a GPU architecture, as GPU parallelization could take advantage of the low memory usage of pemb. Additionally, in this work, we used only 7 of the 17 queries supported by pemb [34]. As future work, we will explore pemb operations to study the possibility of implementing more queries, such as vertex insertion and edge flipping. In the case of the Polylla algorithm, we showed a new half-edge version that is easier to read and implement in any language. Future work involves taking advantage of this implementation to develop a parallel version of Polylla and extending this work to 3D using an extension of the half-edge data structure.

Acknowledgements This work was partially funded by ANID doctoral scholarship 21202379 (first author), ANID FONDECYT grant 11220545 (second author) and ANID FONDECYT grant 1211484 (third author).

References

1. Marco Attene, Marcel Campen, and Leif Kobbelt. Polygon mesh repairing: An application perspective. ACM Comput. Surv., 45(2), March 2013.
2. Otto Huisman and Rolf de By. Principles of geographic information systems: an introductory textbook. Oxford University Press, 01 2009.
3. K. Ho-Le. Finite element mesh generation methods: a review and classification. Computer-Aided Design, 20(1):27–38, 1988.
4. S. Ghosh and R.L. Mallett. Voronoi cell finite elements. Computers & Structures, 50(1):33–46, 1994.
5. L. Beirão da Veiga, F. Brezzi, A. Cangiani, G. Manzini, L.D. Marini, and A. Russo. Basic principles of virtual element methods. Mathematical Models and Methods in Applied Sciences, 23:199–214, 2013.
6. H. Chi, L. Beirão da Veiga, and G.H. Paulino. Some basic formulations of the virtual element method (VEM) for finite deformations. Computer Methods in Applied Mechanics and Engineering, 318:148–192, 2017.


7. Kyoungsoo Park, Heng Chi, and Glaucio H. Paulino. On nonconvex meshes for elastodynamics using virtual element methods with explicit time integration. Computer Methods in Applied Mechanics and Engineering, 356:669–684, 2019.
8. Marco Attene, Silvia Biasotti, Silvia Bertoluzza, Daniela Cabiddu, Marco Livesu, Giuseppe Patanè, Micol Pennacchio, Daniele Prada, and Michela Spagnuolo. Benchmarking the geometrical robustness of a virtual element Poisson solver. Mathematics and Computers in Simulation, 190:1392–1414, 2021.
9. Tommaso Sorgente, Daniele Prada, Daniela Cabiddu, Silvia Biasotti, Giuseppe Patanè, Micol Pennacchio, Silvia Bertoluzza, Gianmarco Manzini, and Michela Spagnuolo. VEM and the mesh. CoRR, abs/2103.01614, 2021.
10. David Bommes, Bruno Lévy, Nico Pietroni, Enrico Puppo, Claudio Silva, Marco Tarini, and Denis Zorin. Quad-mesh generation and processing: A survey. In Computer Graphics Forum, volume 32, pages 51–76, 2013.
11. Steven J Owen, Matthew L Staten, Scott A Canann, and Sunil Saigal. Q-morph: an indirect approach to advancing front quad meshing. International Journal for Numerical Methods in Engineering, 44(9):1317–1340, 1999.
12. Steven J Owen. A survey of unstructured mesh generation technology. IMR, 239:267, 1998.
13. Amaury Johnen. Indirect quadrangular mesh generation and validation of curved finite elements. PhD thesis, Université de Liège, Liège, Belgique, 2016.
14. C.K. Lee and S.H. Lo. A new scheme for the generation of a graded quadrilateral mesh. Computers & Structures, 52(5):847–857, 1994.
15. J.-F. Remacle, J. Lambrechts, B. Seny, E. Marchandise, A. Johnen, and C. Geuzaine. Blossom-quad: A non-uniform quadrilateral mesh generator using a minimum-cost perfect-matching algorithm. International Journal for Numerical Methods in Engineering, 89(9):1102–1119, 2012.
16. Dorit Merhof, Roberto Grosso, Udo Tremel, and Günther Greiner. Anisotropic quadrilateral mesh generation: an indirect approach. Advances in Engineering Software, 38(11/12):860–867, 2007.
17. C. Bradford Barber, David P. Dobkin, and Hannu Huhdanpaa. The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software, 22(4):469–483, 1996.
18. Jonathan Richard Shewchuk. Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator. In Ming C. Lin and Dinesh Manocha, editors, Applied Computational Geometry: Towards Geometric Engineering, pages 203–222, Berlin, Heidelberg, 1996. Springer Berlin Heidelberg.
19. Hang Si. An introduction to unstructured mesh generation methods and softwares for scientific computing. Course, 7 2019.
20. Gonzalo Navarro. Compact Data Structures – A Practical Approach. Cambridge University Press, 2016.
21. Leo Ferres, José Fuentes-Sepúlveda, Travis Gagie, Meng He, and Gonzalo Navarro. Fast and compact planar embeddings. Computational Geometry, 89:101630, 2020.
22. D.E. Muller and F.P. Preparata. Finding the intersection of two convex polyhedra. Theoretical Computer Science, 7(2):217–236, 1978.
23. Sergio Salinas-Fernández, Nancy Hitschfeld-Kahler, Alejandro Ortiz-Bernardin, and Hang Si. Polylla: polygonal meshing algorithm based on terminal-edge regions. Engineering with Computers, 2022.
24. W. T. Tutte. A census of planar maps. Canadian Journal of Mathematics, 15:249–271, 1963.
25. Tyler J. Alumbaugh and Xiangmin Jiao. Compact array-based mesh data structures. In Byron W. Hanks, editor, Proceedings of the 14th International Meshing Roundtable, pages 485–503, Berlin, Heidelberg, 2005. Springer Berlin Heidelberg.
26. Luca Castelli Aleardi, Olivier Devillers, and Abdelkrim Mebarki. Catalog-based representation of 2D triangulations. International Journal of Computational Geometry & Applications, 21(04):393–402, 2011.
27. Marcelo Kallmann and Daniel Thalmann. Star-vertices: A compact representation for planar meshes with adjacency information. Journal of Graphics Tools, 6(1):7–18, 2001.


28. Luca Castelli Aleardi, Olivier Devillers, and Jarek Rossignac. ESQ: Editable SQuad representation for triangle meshes. In 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images, pages 110–117, 2012.
29. Topraj Gurung and Jarek Rossignac. SOT: Compact representation for tetrahedral meshes. In 2009 SIAM/ACM Joint Conference on Geometric and Physical Modeling, SPM '09, pages 79–88, New York, NY, USA, 2009. Association for Computing Machinery.
30. Leo Ferres, José Fuentes-Sepúlveda, Travis Gagie, Meng He, and Gonzalo Navarro. Fast and compact planar embeddings. In WADS, 2017.
31. Luca Castelli Aleardi and Olivier Devillers. Array-based compact data structures for triangulations: Practical solutions with theoretical guarantees. Journal of Computational Geometry, 9(1):247–289, 2018.
32. Bruce G. Baumgart. A polyhedron representation for computer vision. In Proceedings of the May 19-22, 1975, National Computer Conference and Exposition, AFIPS '75, pages 589–596, New York, NY, USA, 1975.
33. Mark de Berg, Otfried Cheong, Marc van Kreveld, and Mark Overmars. Computational Geometry: Algorithms and Applications. Springer-Verlag TELOS, Santa Clara, CA, USA, 3rd edition, 2008.
34. José Fuentes-Sepúlveda, Gonzalo Navarro, and Diego Seco. Navigating planar topologies in near-optimal space and time. Computational Geometry, 109:101922, 2023.
35. María-Cecilia Rivara. New longest-edge algorithms for the refinement and/or improvement of unstructured triangulations. International Journal for Numerical Methods in Engineering, 40(18):3313–3324, 1997.
36. R. Alonso, J. Ojeda, N. Hitschfeld, C. Hervías, and L.E. Campusano. Delaunay based algorithm for finding polygonal voids in planar point sets. Astronomy and Computing, 22:48–62, 2018.
37. Carlos Hervías, Nancy Hitschfeld-Kahler, Luis E. Campusano, and Giselle Font. On finding large polygonal voids using Delaunay triangulation: The case of planar point sets. In Proceedings of the 22nd International Meshing Roundtable, pages 275–292, 2013.
38. Simon Gog, Timo Beller, Alistair Moffat, and Matthias Petri. From theory to practice: Plug and play with succinct data structures. In 13th International Symposium on Experimental Algorithms (SEA 2014), pages 326–337, 2014.
39. Mariette Yvinec. 2D triangulations. In CGAL User and Reference Manual. CGAL Editorial Board, 5.3.1 edition, 2021.

Efficient KD-Tree Based Mesh Redistribution for Data Remapping Algorithms

Navamita Ray, Daniel Shevitz, Yipeng Li, Rao Garimella, Angela Herring, Evgeny Kikinzon, Konstantin Lipnikov, Hoby Rakotoarivelo, and Jan Velechovsky

1 Introduction

Data remapping is used in many multi-physics applications to transfer numerical fields from a source mesh to a target mesh. For example, in Arbitrary Lagrangian-Eulerian (ALE) methods [1, 5, 7, 8] for hydrodynamics applications,


Fig. 1 Example source and target mesh partitions on four ranks: (a) source mesh partitions; (b) target mesh partitions. The colors correspond to the rank of the mesh partition

the Lagrangian mesh is moved along with the fluid flow for some time steps before the cells distort excessively. Then, the mesh nodes are rezoned or smoothed to yield a better quality mesh, and finally, the fields on the Lagrangian source mesh are interpolated to the improved target mesh. In other multi-physics applications [3, 9, 11], where different physics domains depend on each other through shared domain boundaries, there is a need to transfer fields along the domain boundary to solve the governing equations of each component.

In order to remap data in parallel onto the target mesh, the target mesh partition on any Message Passing Interface (MPI) rank should have all the source mesh cells covering it available on the same rank. This is a requirement of many remap methods, particularly of conservative field remap methods, where quantities like intersection volumes, field gradients, etc., are needed for data interpolation. For parallel remapping on distributed systems, the source and target meshes are generally partitioned independently of each other. This can lead to scenarios where the target mesh partition is only partially (or not at all) covered by the source mesh partition on the same MPI rank. For example, Fig. 1 shows a partitioning of a simple source and target mesh on four MPI ranks where the partitions are color-coded, so that source and target mesh partitions on the same rank have the same color. The yellow source mesh partition only covers part of the yellow target mesh partition, whereas the green target mesh partition is not covered at all by the green source mesh partition. To perform the remapping correctly, we must perform a mesh redistribution, i.e. bring the necessary source mesh information from all other MPI ranks to each target rank.

Portage [4] is a numerical library which provides a suite of numerical algorithms for remapping fields from a source mesh to a target mesh. Currently Portage uses a coarse-grained, bounding-box-based overlap detection algorithm to redistribute the meshes. While this method is failsafe, it also frequently sends unnecessary source data, with increased execution time and memory usage. To detect whether the target mesh partitions on other ranks overlap with the source mesh partition on the current rank, we need to have sufficient information about the shape of the target mesh partitions. We also need to figure out how much the source mesh partition on the current rank overlaps with the target mesh partitions on other ranks.


Ideally, the redistribution process should send only as much information as necessary to other partitions. In this paper, we present a new method to perform better overlap detection and more precisely control the information copied across ranks.

In [10], two mesh redistribution algorithms suitable for distributed systems are described. They use a rendezvous technique wherein a third decomposition is computed so that both the source and target mesh partitions overlap completely on this third decomposition. The recursive coordinate bisectioning (RCB) partitioning strategy is used to obtain this third decomposition. The decomposition is primarily for nodal remap, where one needs to know the source cell containing each target node, so that the nodal values from the source cell are interpolated at the target node. The Data Transfer Kit [12] is a software library designed to provide parallel services for mesh and geometry searching and data transfer. The algorithms implemented in the Data Transfer Kit are based on the rendezvous algorithm described in [10]. In [13], a dynamic load-balancing algorithm for parallel particle redistribution using KD-trees for particle tracing applications is described. In that algorithm, each process starts with a statically partitioned axis-aligned data block that partially overlaps with neighboring blocks in other processes, along with a dynamically determined KD-tree leaf node that bounds the active particles for computation. The particles are periodically redistributed based on a constrained KD-tree decomposition, which is limited to the expanded overlapping layers.

Our method differs from these approaches in multiple aspects. Portage is more general, and supports nodal and cell-value remapping algorithms as well as other remapping algorithms; the mesh redistribution needs to satisfy the conditions of all such remapping algorithms. Also, computing a new partitioning of the source and target meshes might be computationally expensive, as the library might be used as part of a multi-physics application where a remap needs to happen at every time step of the simulation.

The K-dimensional tree (KD-tree, [2]) is a data structure that splits K-dimensional data for efficient range queries and K-neighbor queries. Our method uses the KD-tree data structure to capture the general shape of the target mesh partition, which is subsequently used to detect the specific source cells that intersect with this target mesh partition shape approximation. Based on this refined overlap detection, we send only part of the mesh from an overlapping source mesh partition to the target mesh partition rank. We control the amount of information copied (sent) across partitions by controlling the depth of the KD-tree on the target mesh partitions. We performed numerical studies to show the improvements in both memory and time of the new method in comparison to the current approach.

In Sect. 2, we start with a brief overview of the default bounding box algorithm. Section 3 describes the new approach to mesh redistribution. In Sect. 4, we present numerical studies comparing the new algorithm with the bounding box method.


2 Bounding Box Algorithm

The coarsest geometric representation of a general shape is its axis-aligned bounding box. The bounding box algorithm implemented in Portage utilizes this description, and is a simple rendezvous algorithm. Bounding boxes of both the target and source mesh partitions are used to detect overlaps. The key steps in the algorithm are as follows:

1. On each rank, the axis-aligned bounding boxes of the target and source mesh partitions are constructed.
2. Each rank broadcasts the bounding box description of its target mesh partition to all ranks, so that each rank has an approximated shape of the global target mesh.
3. On each rank, if the source bounding box intersects with any received target bounding box, then all the cells in the source mesh partition are sent to the target rank.

By design, this method is conservative in its approach. As a result, it almost always overestimates the number of cells that must be copied over to the target partitions. For example, even when the source mesh partition bounding box only slightly intersects any of the received target mesh partition bounding boxes, the overlap detection deduces that they intersect and sends the whole source mesh partition to the target rank. Due to this conservative overlap detection, multiple source mesh partitions can be migrated to a target rank, which can lead to scenarios where almost the whole global source mesh resides on a target rank after redistribution, resulting in a significant increase in memory usage. In the worst cases, the remap code can fail at runtime due to the large memory overload.
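As an illustration of steps 2 and 3, the following sketch gathers all target boxes and tests the local source box against them. The Box type and function names are ours, not Portage's interfaces.

```cpp
#include <mpi.h>
#include <array>
#include <vector>

struct Box {                        // axis-aligned bounding box in 3D
  std::array<double, 3> lo, hi;
  bool intersects(const Box& o) const {
    for (int d = 0; d < 3; ++d)
      if (hi[d] < o.lo[d] || o.hi[d] < lo[d]) return false;
    return true;
  }
};

// Returns the ranks whose target bounding box overlaps our source box.
std::vector<int> overlapping_target_ranks(const Box& src_box,
                                          const Box& tgt_box,
                                          MPI_Comm comm) {
  int nranks, rank;
  MPI_Comm_size(comm, &nranks);
  MPI_Comm_rank(comm, &rank);

  // Step 2: every rank shares its target partition's bounding box.
  std::vector<Box> all_tgt(nranks);
  MPI_Allgather(&tgt_box, (int)sizeof(Box), MPI_BYTE,
                all_tgt.data(), (int)sizeof(Box), MPI_BYTE, comm);

  // Step 3: if our source box intersects a target box, the whole
  // source partition would be sent to that rank.
  std::vector<int> dests;
  for (int r = 0; r < nranks; ++r)
    if (r != rank && src_box.intersects(all_tgt[r])) dests.push_back(r);
  return dests;
}
```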

3 Mesh Redistribution Using a KD-Tree

In the new approach, we focus on improving all components of the overlap detection process. First, we use a KD-tree data structure to generate a better and controllable description of the target mesh shape. The representation is tunable, ranging from the coarsest description (one bounding box covering the whole target partition) to the finest (a bounding box for each target cell at full tree depth). Second, we use an efficient search on the source mesh partition, again using a KD-tree data structure, to obtain the list of candidate cells that intersect with the target bounding boxes. Finally, we migrate only the relevant part of the mesh from an overlapping source mesh partition to the target mesh partition rank.

Fig. 2 Target boxes constructed using a KD-tree at depth 2, on ranks 0–3 (a–d). Note the 4 bounding boxes per partition

In this section, we describe the key steps (listed below) of the new algorithm in more detail.

1. Target Mesh Shape Approximation: On each rank, generate an approximation of the target mesh partition shape using a KD-tree data structure and broadcast the approximate target shape to all ranks. The approximated target mesh partition shape is a list of target bounding boxes depending on the depth of the KD-tree representation.
2. Overlap Detection: On each rank, find the cells in the source mesh partition that overlap the target mesh partition shapes received from other ranks. This step results in lists of candidate source cells, one list for each target rank an overlap is detected with.
3. Mesh Migration: Each rank sends the overlapping source cells along with their field data to the target ranks.

Figures 1, 2 and 3 show an example of the overlap detection process. Figure 1a, b show a source and target mesh partitioned onto four ranks, where the source and target parts on the same rank have the same color. The bounding boxes of a depth-2 KD-tree over the target mesh partitions are shown in Fig. 2a–d. Note that, since a KD-tree is a binary tree, at any fixed depth there will be a power-of-two number of bounding boxes, ignoring incomplete filling due to an unbalanced tree. In Fig. 3a, the aggregation of target bounding boxes from all target ranks on each source rank is shown. During the overlap detection, these bounding boxes are used to find lists of source cells that need to be sent to target partitions. For example, in Fig. 3b, the source mesh partitions on the green, blue and red ranks will detect the cells intersecting with the target bounding boxes from the yellow rank and select only the subset of the meshes that overlap these boxes.

3.1 Target Mesh Shape Approximation

The K-dimensional tree (KD-tree) is a well-known space-partitioning data structure for organizing points in a k-dimensional space and is used for efficient searching.


Fig. 3 (a) The target box description of the global target mesh; (b) overlap of the target bounding boxes from rank 3 with the other source mesh partitions

We use KD-trees for two purposes. First, we use the data structure to create a finer approximation of the target geometry as a collection of bounding boxes. Second, we use it to perform efficient searches for overlap detection between the target shape approximation and the source mesh partition, as described in Sect. 3.2. The KD-tree construction is agnostic to which mesh it is created on, so the description of the construction uses the term mesh instead of target mesh. Indeed, we compute KD-trees on both the source and target mesh partitions, albeit for different purposes.

In our KD-tree construction, each node at any depth is a bounding box. We begin by computing the axis-aligned bounding box of each cell, using the minimum and maximum coordinates in each dimension. For each such box, we next compute its centroid. The construction algorithm takes as input the set of bounding boxes, the point set consisting of the bounding box centroids, and the depth up to which the tree is to be constructed. The space partitioning uses the point set, whereas the bounding boxes are used to construct the nodes of the tree. At any depth in the tree, the parent node is the encapsulating bounding box of a set of cells. We next find which axis or direction (x/y/z) should be used to partition the point space (consisting of the bounding box centroids). We choose the axis with the longest side of the bounding box of the current node as the cutting direction. Once a direction has been chosen, we group the cells under the node into a left and a right set, where the left set has cells with centroid values less than the median along the cutting direction. The left and right children are then constructed out of the cells in the left and right sets. The tree construction is either stopped at the depth provided, or continues until the full depth of the tree possible for the input set.

In Algorithm 1, we present the pseudocode for the KD-tree construction. The algorithm uses a stack data structure to construct the nodes of the tree, where the root node is the bounding box of the entire mesh. We also maintain a permutation array of the cell ids, which is used to store the partitioning of the space as the tree construction progresses. We begin by finding the longest side of the root node bounding box. The axis corresponding to the longest side is chosen as the cutting direction. We next permute the cell ids (as stored in the permutation array), so that the median of the array is the cell id whose centroid has the median coordinate value along the cutting direction.
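The split step described so far can also be written recursively, as in the sketch below (the paper's Algorithm 1 is the stack-based equivalent). The Box type is reused from the Sect. 2 sketch; the helper names and the use of std::nth_element for the median split are our assumptions, not the authors' code. The description of the construction continues after the figure.

```cpp
#include <algorithm>
#include <array>
#include <vector>

// Bounding box of the cells referenced by ids[lo, hi).
static Box enclose(const std::vector<Box>& boxes,
                   const std::vector<int>& ids, int lo, int hi) {
  Box b = boxes[ids[lo]];
  for (int i = lo + 1; i < hi; ++i)
    for (int d = 0; d < 3; ++d) {
      b.lo[d] = std::min(b.lo[d], boxes[ids[i]].lo[d]);
      b.hi[d] = std::max(b.hi[d], boxes[ids[i]].hi[d]);
    }
  return b;
}

// Collect the leaf bounding boxes of a KD-tree of the given depth.
// ids is the permutation array over [lo, hi); centroids drive the split.
void kdtree_leaves(const std::vector<Box>& boxes,
                   const std::vector<std::array<double, 3>>& centroids,
                   std::vector<int>& ids, int lo, int hi,
                   int depth, std::vector<Box>& leaves) {
  Box node = enclose(boxes, ids, lo, hi);
  if (depth == 0 || hi - lo <= 1) { leaves.push_back(node); return; }

  // Cut along the longest side of this node's bounding box.
  int cut = 0;
  for (int d = 1; d < 3; ++d)
    if (node.hi[d] - node.lo[d] > node.hi[cut] - node.lo[cut]) cut = d;

  // Partition the cell ids so the median centroid splits left/right sets.
  int mid = lo + (hi - lo) / 2;
  std::nth_element(ids.begin() + lo, ids.begin() + mid, ids.begin() + hi,
                   [&](int a, int b) {
                     return centroids[a][cut] < centroids[b][cut];
                   });

  kdtree_leaves(boxes, centroids, ids, lo, mid, depth - 1, leaves);
  kdtree_leaves(boxes, centroids, ids, mid, hi, depth - 1, leaves);
}
```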

Fig. 4 The KD-tree based representation with increasing depths: (a) depth 0; (b) depth 1; (c) depth 2; (d) depth 3

This groups the list of cell ids into a left and a right part, where the left part has cell ids with centroid coordinate less than the median along the cutting direction. Similarly, the right part consists of cell ids with centroid coordinate greater than or equal to the median along the cutting direction. The left and right child node bounding boxes are then computed by gathering all the bounding boxes of the cells making up each child. For each child, pointers to the minimum and maximum of the permutation array are stored, so that at the next level the median is found only within that part. These steps are followed until the desired depth of the tree is reached or the full tree is constructed.

Figure 4a–d show four depths of the KD-tree based shape approximation. As we can see, with increasing depth the tree leaf bounding boxes better capture the shape of the mesh, which leads to better overlap detection.

Using the above process, we construct a KD-tree over the target mesh partition. We construct the tree over the owned cells of the target mesh partition, as there may be ghost layers from the initial partitioning which are owned by other ranks. The output of the construction is a list of leaf bounding boxes at a fixed depth in the tree. This list is then broadcast across all ranks, at the end of which each rank has an approximate description of the global target shape.
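Because the number of leaf boxes can differ per rank, the broadcast can be realized with a variable-count all-gather, as sketched below. This is only an assumed implementation (the paper does not show the communication code), reusing the Box type from the Sect. 2 sketch.

```cpp
#include <mpi.h>
#include <vector>

// Gather every rank's leaf boxes on all ranks; Box is sent as raw bytes,
// assuming it is a plain trivially-copyable struct.
std::vector<Box> gather_all_leaf_boxes(const std::vector<Box>& mine,
                                       MPI_Comm comm) {
  int nranks;
  MPI_Comm_size(comm, &nranks);

  int nbytes = (int)(mine.size() * sizeof(Box));
  std::vector<int> counts(nranks), displs(nranks);
  MPI_Allgather(&nbytes, 1, MPI_INT, counts.data(), 1, MPI_INT, comm);

  int total = 0;
  for (int r = 0; r < nranks; ++r) { displs[r] = total; total += counts[r]; }

  std::vector<Box> all(total / sizeof(Box));
  MPI_Allgatherv(mine.data(), nbytes, MPI_BYTE, all.data(),
                 counts.data(), displs.data(), MPI_BYTE, comm);
  return all;   // ownership per rank can be recovered from counts/displs
}
```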

3.2 Overlap Detection

After the broadcast of the bounding boxes of the target mesh partitions across ranks, each rank has an approximate description of the whole target mesh shape, in the form of bounding boxes and the target partitions they belong to. We next want to detect the list of source cells that intersect the target bounding boxes received from a given rank. Instead of nested linear loops over all received target boxes and over the source cells, we use another KD-tree, this time for efficient searching. We create the full tree on the source mesh partition (so that the leaf nodes are the bounding boxes of the source cells), and perform the search between the source tree and the target bounding boxes. Since the average cost of a lookup is O(log N), we avoid a linear search over the source cells. The pseudocode for the overlap detection algorithm is shown in Algorithm 2.

After the search, we end up with candidate lists of source cells that need to be sent to specific ranks.


Algorithm 1 KD-tree Construction
Input: B: N bounding boxes
Input: C: N centroids of the bounding boxes
Input: L: depth of the tree
Output: leaves: leaf bounding boxes
P ← permutation array of size N
root ← bounding box encompassing all input boxes
if L = 0 then
    return root
else
    stack: array storing tree node ids
    min_idx: array storing the minimum index into the permutation array for a node
    max_idx: array storing the maximum index into the permutation array for a node
    leaves: array storing leaf boxes
    current_depth = 0
    top ← 0
    stack[top] ← 0
    nextp ← 1
    min_idx[top] ← 0
    max_idx[top] ← N
    while top ≥ 0 do
        current_depth = current_depth + 1
        min = min_idx[top]
        max = max_idx[top]
        top--
        cut_dir = cutting direction based on the longest side of the node
        mid = (min + max)/2
        Reorder part of P such that ∀i ≤ mid : C[i][cut_dir] ≤ C[mid][cut_dir]
        if mid = min || current_depth = L then
            box = bounding box encompassing boxes from min to mid
            Add box to leaves
        else
            box = bounding box encompassing boxes from min to mid
            Store box as the bounding box of node nextp
            top++
            stack[top] = nextp
            min_idx[top] = min
            max_idx[top] = mid
            nextp++
        end if
        if mid + 1 = max || current_depth = L then
            box = bounding box encompassing boxes from mid+1 to max
            Add box to leaves
        else
            box = bounding box encompassing boxes from mid+1 to max
            Store box as the bounding box of node nextp
            top++
            stack[top] = nextp
            min_idx[top] = mid+1
            max_idx[top] = max
            nextp++
        end if
    end while
end if


The candidate list to be sent to a specific rank is conservative, because any given source cell may not actually intersect any target cell, due to our use of bounding boxes of a chosen granularity to represent the target shape. Importantly, we also add cells that are neighbors of the cells in this list, based on the requirements of second- or higher-order remapping algorithms. Such methods need to construct gradients of the numerical field over the source mesh partition and require a complete stencil (the set of cells surrounding a cell) for any source cell. Adding the neighbors completes the stencils of the source cells that intersect a target cell on its partition boundary.

Algorithm 2 Overlap Detection
Input: TB: target bounding boxes from all ranks
Output: candidates: lists of candidate cells
src_tree ← construct the full KD-tree on the source mesh partition
for r : target ranks do
    for b : TB[r] do
        cells = intersect target bounding box b with src_tree
        for c : cells do
            Add c to candidates[r]
            ngbs = find the node-connected cell neighbors of cell c
            Add ngbs to candidates[r]
        end for
    end for
end for
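The step "intersect target bounding box b with src_tree" can be realized by pruning subtrees whose boxes miss b, which is where the O(log N) average lookup comes from. The node layout below is our assumption for illustration, reusing the Box type from the Sect. 2 sketch.

```cpp
#include <vector>

struct Node {
  Box box;                    // bounding box of this subtree
  int left = -1, right = -1;  // child indices, -1 for leaves
  int cell = -1;              // source cell id stored at a leaf
};

// Collect all source cells whose leaf boxes intersect b.
void query(const std::vector<Node>& tree, int n, const Box& b,
           std::vector<int>& hits) {
  if (n < 0 || !tree[n].box.intersects(b)) return;  // prune whole subtree
  if (tree[n].cell >= 0) { hits.push_back(tree[n].cell); return; }
  query(tree, tree[n].left, b, hits);
  query(tree, tree[n].right, b, hits);
}
```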

3.3 Mesh Migration

After overlap detection, we finally perform a mesh migration to send the partial source mesh partitions to the required ranks. During overlap detection, the candidate lists can include both owned and ghost cells, and we ensure uniqueness of entities on a particular rank after mesh migration. Our mesh migration algorithm is based on a two-pass communication strategy.

1. First pass: We send the number of total counts (owned plus ghost entities) and the number of ghost counts to all ranks using an all-to-all communication mechanism. At the end of the first pass, all ranks know the total number of new cells (as well as nodes, topology and numerical fields) they are going to receive. Based on this information, the receiving data buffers are set to the correct size.
2. Second pass: In this round of communication, we perform point-to-point blocking sends to transmit the actual data, and non-blocking receives to receive the data from other ranks.


We start by sending the global ids of the candidate source cells on the current rank. After this round of communication, each rank might have cells with the same global ids, requiring de-duplication. We perform a de-duplication based on the unique global ids, so that each entity has only one instance and no duplicate data is stored. We do the same for node global ids as well as for other auxiliary mesh entities like edges, faces, etc. We then continue by communicating all the necessary mesh information, such as node coordinates, adjacencies (cell-to-node connectivity, node-to-cell connectivity, etc.), as well as the numerical fields.
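A condensed sketch of the two passes follows, restricted to the candidate-cell global ids; the surrounding types and this restriction to a single payload are our simplifications, not the Portage implementation.

```cpp
#include <mpi.h>
#include <vector>

void migrate_candidate_ids(const std::vector<std::vector<long>>& send_ids,
                           MPI_Comm comm) {
  int nranks;
  MPI_Comm_size(comm, &nranks);

  // First pass: exchange how many entities each rank will receive, so
  // receive buffers can be sized before any data moves.
  std::vector<int> send_counts(nranks), recv_counts(nranks);
  for (int r = 0; r < nranks; ++r) send_counts[r] = (int)send_ids[r].size();
  MPI_Alltoall(send_counts.data(), 1, MPI_INT,
               recv_counts.data(), 1, MPI_INT, comm);

  // Second pass: post non-blocking receives first, then blocking
  // point-to-point sends of the actual global ids.
  std::vector<std::vector<long>> recv_ids(nranks);
  std::vector<MPI_Request> reqs;
  for (int r = 0; r < nranks; ++r)
    if (recv_counts[r] > 0) {
      recv_ids[r].resize(recv_counts[r]);
      reqs.emplace_back();
      MPI_Irecv(recv_ids[r].data(), recv_counts[r], MPI_LONG, r, 0, comm,
                &reqs.back());
    }
  for (int r = 0; r < nranks; ++r)
    if (!send_ids[r].empty())
      MPI_Send(send_ids[r].data(), (int)send_ids[r].size(), MPI_LONG,
               r, 0, comm);
  MPI_Waitall((int)reqs.size(), reqs.data(), MPI_STATUSES_IGNORE);
  // De-duplication by unique global id would follow here.
}
```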

4 Numerical Results

For our numerical studies, we use two sets of geometries. The first shape, shown in Fig. 5a, is part of a spherical shell. The second shape, shown in Fig. 5b, is a notional tesseract with six pyramids covering a cube. We chose this shape because, while the exterior is a cube, the bounding boxes of the pyramids are highly intersecting; it is intended to represent a worst-case example for intersecting partitions. The mesh details for these geometries are provided in Table 1.

We compare the KD-tree method with the bounding box method. Our parameter space for the study consists of:

1. the depth of the KD-tree representation of the target mesh partition, and
2. the total number of MPI ranks (from 2 to 36).

For each point in the parameter space, we collect two pieces of data:

1. the number of new cells received on a rank after redistribution, and
2. the total time to perform the redistribution.

Fig. 5 The geometry of the two test cases used for the numerical studies: (a) spherical shell; (b) tesseract with six pyramids covering a cube

Table 1 Mesh details

Mesh        Source        Target
Sphere      Tetrahedral   Hexahedral
  #Cells    1523150       44160
  #Points   283924        50952
Tesseract   Tetrahedral   Tetrahedral
  #Cells    231828        695805
  #Points   45742         130467

The count of only the new cells provides an approximate estimate of how much extra memory would need to be stored after redistribution. By varying the number of MPI ranks, we can get very different qualities of partitioning. If the number of ranks respects the symmetries of a mesh, we can get a quite good partitioning; if this is not the case, we can get a poor partitioning, because cells can be "just stuffed" anywhere. Our study is intended to evaluate all possibilities, not just the best case. We use the ParMetis partitioner [6] for the initial partitioning of both the source and target meshes.

4.1 Sphere Shell Mesh

Figure 6a, b show the number of migrated cells, which is a proxy for the memory estimates of each method after redistribution. The x-axis is the number of ranks on which the test is run, and the y-axis is the depth of the tree. In the waterfall plots, the depth axis has no meaning for the bounding box distributor, but we keep it in the figures to make comparisons easier. At each x and y point, we plot the maximum among all the ranks, corresponding to the worst-case rank. The color map in each plot is a monochromatic palette, with a deeper color representing a higher value.

In comparison to the bounding box algorithm, the new method copies significantly fewer source cells to the target partitions, resulting in a substantial reduction in memory usage and network traffic. With increasing depths of the target KD-tree, the target mesh partition representation becomes finer, and as a result the overlap detection improves until it reaches the point where the optimal overlap is detected. We see this behavior in the plot. Each KD-tree representation is a subset of the representation at any coarser depth. Due to this fact, the number of migrated cells is monotonically decreasing with increasing depth. Also note in the figures that there is no data for higher depths at higher rank counts. This is because, as the number of ranks increases, the average number of cells per rank decreases, and the maximum depth of the KD-tree on the smaller partitions is less than the maximum depth of partitions on a lower number of ranks, so we do not run those cases. Clearly, for a particular number of ranks, increasing the KD-tree depth leads to better overlap detection in comparison to the bounding box algorithm.


Fig. 6 The maximum count of new cells received among all ranks, at each number of ranks and for each KD-tree depth, for the sphere mesh: (a) counts using the bounding box redistributor; (b) counts using the KD-tree redistributor

Fig. 7 The maximum time across ranks for each KD-tree depth for the sphere mesh: (a) timing of the bounding box redistributor; (b) timing of the KD-tree redistributor

We also observe that, as the number of ranks increases, KD-tree based overlap detection improves the cell counts compared to the bounding box algorithm, which does not improve as it is too conservative. With increasing ranks and depths, we see savings of around one order of magnitude with the KD-tree algorithm.

We observe a similar pattern in the time taken by the redistributors, as shown in Fig. 7a, b. The KD-tree algorithm outperforms the bounding box algorithm both in terms of memory savings and time as the number of ranks and depths increase. We also observe a slightly concave pattern with respect to the KD-tree depth, especially on lower rank counts and higher depths, due to the increased computation needed for overlap detection.

Fig. 8 The maximum count of new cells received across ranks for each KD-tree depth of the tesseract mesh, shown for both the bounding box redistributor (a) and the KD-tree redistributor (b). The depth has no meaning for the bounding box algorithm

Fig. 9 The maximum count of new cells received across ranks for each KD-tree depth of the tesseract mesh: (a) counts using the bounding box redistributor; (b) counts using the KD-tree redistributor

4.2 Tesseract Mesh

Figure 8a, b show the surface plots of the number of migrated cells, which is a proxy for the memory estimates and network traffic of each method after redistribution. The surface plot shows a more complex landscape. Repeating what was stated earlier, the tesseract is designed to be representative of a worst-case scenario, because of the highly intersecting nature of the bounding boxes by construction. Figure 9a, b show another view of the same data; here, we plot the values for all depths at each point on the x-axis.

In comparison to the bounding box algorithm, the new method performs significantly better in terms of memory savings. The target mesh partition representation becomes better resolved with increasing target KD-tree depths, and subsequently the overlap detection becomes optimal after a certain depth. Note both the monotonically decreasing number of migrated cells with increasing depth and the generally improved performance of both algorithms when the number of partitions is a multiple of 6, which is a natural symmetry of the mesh, giving "nicer" partitions.


Fig. 10 Target and source mesh partitions on rank three of a four-rank partition: (a) target mesh partition on rank 3; (b) source mesh partition (in grey) on rank 3

We see this behavior in both plots. The bounding box algorithm, on the other hand, does not improve even when the number of ranks is increased, as the bounding box based overlap detection is too conservative. (Again, depth has no meaning for the bounding box redistributor.) This results in entire source mesh partitions being received on many ranks where only small pieces are needed. This behavior is due to how the tesseract mesh is partitioned. For example, Fig. 10a shows the target part on rank 3 of a four-rank run; the source partition, shown in Fig. 10b, does not cover the target part at all. This disconnected partitioning is generated by the ParMetis partitioner and is not a pathological construction. As the bounding box (shown in Fig. 11a) is the whole cube, clearly all the other mesh parts would be migrated to this rank. We also observe a similar pattern for the KD-tree algorithm when the depth is zero.

Fig. 11 Difference in target description based on KD-tree depths: (a) bounding box corresponding to depth 0; (b) bounding boxes corresponding to depth 3


Fig. 12 The maximum time across ranks for each KD-tree depth: (a) timing of the bounding box redistributor; (b) timing of the KD-tree redistributor

But as we increase the depth, for example as shown in Fig. 11b, where the bounding boxes of the depth-3 tree are overlaid on the target mesh partition, we see significant gains due to the better capture of the target mesh partition shape and the finer overlap detection. As we increase the number of ranks and the depth, we obtain from a 50% reduction up to an order of magnitude in savings.

Figure 12a, b show the surface plots of the timings of each method after redistribution. Here the timing landscape is complex and shows concavity. In Fig. 13a, b, we plot another view of the same data. The bounding box algorithm takes roughly the same amount of time independently of the number of ranks it is run on; this is consistent with the number of new cells received. However, the KD-tree algorithm's performance shows greater variability. Overall, the KD-tree algorithm outperforms the bounding box algorithm both in terms of memory savings and time as the number of ranks and depths increase.

Fig. 13 The maximum time across ranks for each KD-tree depth: (a) timing of the bounding box redistributor; (b) timing of the KD-tree redistributor


On lower numbers of ranks, the higher the depth, the longer the redistribution takes in comparison to the bounding box algorithm. Overall, we observe a concave pattern along the y-axis (the depth). Upon investigation, we found the overlap detection to be the biggest contributor to the increased timing. As the depth increases, the granularity of the target mesh partition representation also increases. Because the global target mesh has around 700 k cells, the overlap detection works with almost that many boxes on each rank at higher depths, and thus takes more time. This effect is more prominent on lower rank counts, as they also have a significant number of source cells for which to compute intersections. We also observed this pattern for the sphere mesh, though not as pronounced, as the global target mesh is comparatively small, at around 44 k cells. Finally, as the number of ranks is increased, the KD-tree algorithm becomes comparable to or faster than the bounding box algorithm.

5 Conclusion

We present a new approach to mesh redistribution for data remapping algorithms. Our method utilizes the KD-tree data structure to improve overlap detection between source and target partitions. We demonstrate the significant savings, both in terms of memory and timing, of the new algorithm in comparison to the default bounding box algorithm. We observe that, in general, sending the full tree of the target mesh partitions leads to both optimal memory savings and optimal total time to redistribute the mesh. For a coarse target mesh, the mesh on an individual rank becomes regular in shape and has only a few elements as the number of ranks becomes large; in that case, there is probably not much to gain from describing the geometry beyond depth zero or with the full tree. However, if the target mesh is fine enough that, even on large rank counts, each mesh part consists of hundreds to thousands of elements, the overestimation of the overlap can be significantly reduced by using higher depths. On the other hand, if the global target mesh is large, it might take much more time to compute the optimal overlap. In such scenarios, an intermediate depth performs decently both in terms of memory savings and time.

Acknowledgements This work is supported by the U.S. Department of Energy for Los Alamos National Laboratory under contract 89233218CNA000001. We thank ASC NGC Ristra and Portage for support. LA-UR-23-21719.

References

1. Barlow, A.J., Maire, P.H., Rider, W.J., Rieben, R.N., Shashkov, M.J.: Arbitrary Lagrangian-Eulerian methods for modeling high-speed compressible multimaterial flows. Journal of Computational Physics 322, 603–665 (2016). https://doi.org/10.1016/j.jcp.2016.07.001
2. Bentley, J.L.: Multidimensional binary search trees used for associative searching. Commun. ACM 18(9), 509–517 (1975). https://doi.org/10.1145/361002.361007


3. Burton, D.E.: Lagrangian hydrodynamics in the FLAG code. Los Alamos National Laboratory, Los Alamos, NM, Technical Report No. LA-UR-07-7547 (2007)
4. Herring, A., Ferenbaugh, C., Malone, C., Shevitz, D., Kikinzon, E., Dilts, G., Rakotoarivelo, H., Velechovsky, J., Lipnikov, K., Ray, N., et al.: Portage: A modular data remap library for multiphysics applications on advanced architectures. Journal of Open Research Software 9(1) (2021)
5. Hirt, C., Amsden, A., Cook, J.: An arbitrary Lagrangian-Eulerian computing method for all flow speeds. Journal of Computational Physics 14(3), 227–253 (1974). https://doi.org/10.1016/0021-9991(74)90051-5
6. Karypis, G.: Encyclopedia of Parallel Computing, chap. METIS and ParMETIS, pp. 1117–1124. Springer US, Boston, MA (2011). https://doi.org/10.1007/978-0-387-09766-4-500
7. Kucharik, M., Breil, J., Galera, S., Maire, P.H., Berndt, M., Shashkov, M.: Hybrid remap for multi-material ALE. Computers & Fluids 46(1), 293–297 (2011)
8. Margolin, L., Shashkov, M.: Second-order sign-preserving conservative interpolation (remapping) on general grids. Journal of Computational Physics 184(1), 266–298 (2003)
9. Painter, S.L., Coon, E.T., Atchley, A.L., Berndt, M., Garimella, R., Moulton, J.D., Svyatskiy, D., Wilson, C.J.: Integrated surface/subsurface permafrost thermal hydrology: Model formulation and proof-of-concept simulations. Water Resources Research 52(8), 6062–6077 (2016). https://doi.org/10.1002/2015WR018427
10. Plimpton, S.J., Hendrickson, B., Stewart, J.R.: A parallel rendezvous algorithm for interpolation between multiple grids. Journal of Parallel and Distributed Computing 64(2), 266–276 (2004). https://doi.org/10.1016/j.jpdc.2003.11.006
11. Robinson, A., Brunner, T., Carroll, S., Drake, R., Garasi, C., Gardiner, T., Haill, T., Hanshaw, H., Hensinger, D., Labreche, D., et al.: Alegra: An arbitrary Lagrangian-Eulerian multimaterial, multiphysics code. In: 46th AIAA Aerospace Sciences Meeting and Exhibit, p. 1235 (2008)
12. Slattery, S.R., Wilson, P.P.H., Pawlowski, R.P.: The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer. American Nuclear Society (2013). URL https://www.osti.gov/biblio/22212795
13. Zhang, J., Guo, H., Hong, F., Yuan, X., Peterka, T.: Dynamic load balancing based on constrained k-d tree decomposition for parallel particle tracing. IEEE Transactions on Visualization and Computer Graphics 24(1), 954–963 (2018). https://doi.org/10.1109/TVCG.2017.2744059

Coupe: A Mesh Partitioning Platform

Cédric Chevalier, Hubert Hirtz, Franck Ledoux, and Sébastien Morais

1 Introduction

For numerical simulation-based analysis, High-Performance Computing (HPC) solutions are nowadays a standard. Numerous solvers run in parallel, and large multiphysics codes are built to take advantage of HPC cluster architectures. Large-scale numerical simulations that run on large-scale parallel computers require the simulation data to be distributed across the computing units (GPU, CPU, or any type of core) to exploit these architectures efficiently. Each unit must process a fair share of the work according to its computing capability. The straightforward way to achieve a "good" load balance is to model each job by its cost and to partition all the jobs between the computing units accordingly. With two identical computing units, this is the classical number partitioning problem [1].

A large category of simulation codes is based on discrete numerical models that rely on Finite Element Methods or Finite Volume Methods. In both cases, the geometrical study domain, noted Ω, must be spatially split into a set of simple atomic elements, called cells, that geometrically partition Ω. This set of cells is called a mesh. Depending on the numerical methods, those meshes will be very structured,


like an entirely regular grid of cubes, mainly structured, or fully unstructured. In the latter case, cells are generally simplices (triangles in 2D and tetrahedra in 3D) or generic polyhedra. The mesh and the numerical and physical data attached to the mesh cells must be partitioned between the different computing units.

The goal of the partitioning stage can drastically differ between applications. For load balancing simulations, several tools exist [2–5] that solve complex graph or hypergraph partitioning problems [6–8]. Among the tools mentioned, many have a resolution approach based on topological algorithms and do not take advantage of the geometric information associated with the mesh. Moreover, these tools mainly focus on minimizing the communication costs while keeping the imbalance of the solution below a given maximal imbalance. They do not minimize the load imbalance as an objective, and do not take the memory load and memory capacity of the computing units into account.

In this paper, we present Coupe, a mesh partitioning platform that aims to fill the gaps mentioned above. For this purpose, algorithms under development as well as well-known algorithms for geometrical, topological and number partitioning are available. These algorithms are either direct or refinement algorithms and can easily be chained together.

The remainder of this paper is structured as follows. In Sect. 2, we define the Mesh Partitioning Problem and more precise sub-problems that can be of interest for large-scale applications, depending on which objectives have higher priority. Then, in Sect. 3, we concisely present the main practical algorithms for each partitioning problem. On top of those algorithms, we mention two refinement algorithms under development that are used in our experiments. Section 4 focuses on Coupe and discusses its software choices, architecture, and technical differences with other partitioning tools. Moreover, we also present the multiple tools provided to easily experiment with partitioning approaches using Coupe and other partitioners. Finally, in Sect. 5, we experimentally evaluate our tool by combining multiple kinds of algorithms and compare the results to Scotch and Metis.

2 Mesh Partitioning and Load Balancing

Previously, we briefly introduced the need to break down the elements of the mesh into several subsets that different computing units will handle. Load balancing is paramount for any large-scale application: the higher the number of computing units, the more likely it is that one computing unit will have to wait for another, locally stopping the computation. This paper focuses on mesh-based applications and how solving a Mesh Partitioning Problem can enable high scalability. Mesh partitioning works with numerical simulation solvers [9] as well as mesh generation software [10].

Giving a precise definition of the mesh partitioning problem is difficult due to the number of constraints and objectives one wants to satisfy.
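As a concrete reference point, the load imbalance of a partition is commonly measured as the ratio of the heaviest part to the average part weight. The sketch below encodes this textbook definition; it is not necessarily the exact metric Coupe optimizes.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Standard imbalance measure: 0 means perfectly balanced, 0.05 means the
// most loaded unit carries 5% more than the average load.
double imbalance(const std::vector<double>& part_weights) {
  double maxw = *std::max_element(part_weights.begin(), part_weights.end());
  double avg  = std::accumulate(part_weights.begin(), part_weights.end(), 0.0)
                / part_weights.size();
  return maxw / avg - 1.0;
}
```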


It depends on how the application uses the mesh, the programming paradigm, the data structure layouts, and what kind of hardware performs the computations. We can start from the skeleton problem given in Definition 1.

Definition 1 (Mesh Partitioning Problem) Given a mesh M for which each cell c has a computation cost wc, find a family Π = (Ci)0≤i …

… if dim(gci(d)) > dim(gci(d')), then for all d'' ∈ cif, gci(d'') = gci(d') and e(d'') = e(d'). Otherwise, it means that the two i-cells are classified on distinct geometrical entities of the same dimension, and the sewing operation is thus impossible.

3 Hexahedral Blocking Operations

The block structure that we handle is fully hexahedral. As a consequence, updating the block structure consists in modifying the topology and geometry of a hexahedral mesh. We consider here two types of operations, sheet removal and sheet insertion [27]. In this section, we present each operation, its definition in the n-G-map model, and the corresponding pseudo-code algorithm. We begin with the sheet selection, which gives us all the cells that belong to a sheet.

3.1 Sheet Selection

We consider equally a 2D quad block structure or a 3D hexahedral block structure. We define a sheet S, or layer of cells, as a subset of cells (quads in 2D, hexes in 3D). Starting from an edge e of the mesh, we define E_e as the smallest subset of E that verifies:

Fig. 4 A 2-G-map classified onto a geometric model. On the top, darts of 0-cells that are classified on geometric points are colored in red, while those classified on curves are colored in green. On the bottom, darts d and d' are 2-sewed and their 0-cells are fused


Fig. 5 Example of 3D sheets in a hexahedral mesh. In (a), the full mesh; in (b), three sheets are represented: a regular sheet (yellow), a self-intersecting sheet (red) and a self-touching sheet (green)

Fig. 6 Building the opposite-sheet dart set S_d (see Algorithm 1). Starting from a first marked dart (in red), the front F is propagated via the alpha-links described in lines 5, 6 and 8, and the darts are then added to S_d. (a, b and c) represent S_d after several iterations

e ∈ E_e and ∀ e_i ∈ E_e ⇒ E_{e_i}^{//} ⊆ E_e,

with E_{e_i}^{//} the set of edges opposed to e_i in the cells that are incident to e_i. The sheet S_H is the set of cells that are incident to at least one edge of E_e. Examples of 3D sheets are given in Fig. 5, where we can see three types of sheets: in (a), a simple sheet is depicted; in (b), the sheet intersects itself along a complete chord of hexahedral cells; in (c), the sheet touches itself along several faces. The second and third sheets are respectively qualified as self-intersecting and self-touching (Fig. 6).

In the n-G-map model, the edge selection process consists in picking a dart d, which defines an edge, and then the set of edges that are topologically "opposite" to that edge. The expected set of darts, called S_d, is given by Definition 4. … 1. D' = D − … ∃k1 > 0 / d(αn−1 αn')^k1 = d and ∃k2 > 0 / d(αn−2 αn−1)^k2 = d.
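For intuition, the fixpoint defining E_e can be computed by a simple front propagation. The sketch below is ours, written over a plain edge/cell incidence for a 2D quad mesh rather than the paper's dart-based Algorithm 1, and opposite_in_quad() is an assumed helper.

```cpp
#include <queue>
#include <set>
#include <vector>

// Compute E_e for a 2D quad mesh: starting from edge e, keep adding the
// edge opposite each collected edge in every quad incident to it.
std::set<int> sheet_edges(int e,
                          const std::vector<std::vector<int>>& quads_of_edge,
                          int (*opposite_in_quad)(int quad, int edge)) {
  std::set<int> E_e = {e};
  std::queue<int> front;
  front.push(e);
  while (!front.empty()) {
    int cur = front.front();
    front.pop();
    for (int q : quads_of_edge[cur]) {
      int opp = opposite_in_quad(q, cur);
      if (E_e.insert(opp).second)   // newly discovered edge: extend the front
        front.push(opp);
    }
  }
  return E_e;  // the sheet is the set of quads incident to these edges
}
```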

In order to explain this definition, let us consider the 2D case shown in Fig. 13. Starting from the hyperplane H (red darts) given as an input in Fig. 13a, we show the ghost-layer extension in (b) and the ghosted hyperplane Hg. Then, for each dart of Hg, we show the inserted 2D pattern (see Definition 9) in Fig. 13c. We have here all the darts of D' (item 1 of Definition 11). Item 2 indicates that we preserve all the αi links for the darts of G, for all i ∈ [[0; n − 1]]. The third item indicates that we also preserve the α2 links for the darts that are not involved in the sheet insertion process (those that are not connected to a dart of Hg). We also reconnect the inserted patterns via p(d).l and p(d).f (see Fig. 13d). As G' is a 2-G-map, some α0 links are implicitly performed to ensure that α0α2 is an involution (see Fig. 13e). The fourth item closes some open cells: in 2D, the darts p3(d) and p4(d) are 1-linked with the first 1-free dart m of their orbit (around a vertex in 2D and around an edge in 3D), with d ≠ m (see Fig. 13d). This gives us the result of Fig. 13e. It then remains to remove⁴ some flat or compressed (n − 1)-cells to get from Fig. 13e to f. In fact, Definition 11 allows us to define the result of Fig. 13f without the ghost darts.

From Definition 11, we derive Algorithm 3, which defines sheet insertion in both 2D and 3D. Only the link stage differs between 2D and 3D (line 6). We keep using Fig. 13 to explain the algorithm. Given a selection set De that is an admissible hyperplane (see Definition 7), we first insert a ghost layer (line 1), as defined in Definition 6, to get from Fig. 13a to b. To ease the final suppression of the ghost layer, we mark the darts of a pattern generated for the ghost layer as to be removed later. The use of the ghost layer and the ghosted hyperplane (line 2) allows us to write a generic algorithm without having to consider specific cases for boundary darts.

⁴ We do not formally define this operation, which is quite general, for lack of space.


Fig. 13 Pipeline of the 2-insertion for selected darts in red (see Algorithm 3). Links added are in pink. (a) Selection in red. (b) Add the ghost layer in blue; extend the selection in red with darts of the ghost layer. (c) Memorize αn^p; 2-unsew the selected darts. (d) 2-link p(d).f, p(d).l and 1-link p3, p4. (e) 0-link p(d).l and p(d).f. (f) Result with collapsed 2-compressed faces and removed ghost layer

Fig. 14 a Marked darts; b after an insertion operation and collapse of the central inserted face (see Fig. 15)

Note that unlike Definition 11, we incrementally update the n-G-map in the algorithm.⁵ So we store the initial function αn, noted αn^p, to be used later to insert the patterns (line 3). After that, we unsew all the darts of De for αn. Then the pattern is inserted for every dart d ∈ De, but it is not yet n-sewn to the initial n-G-map (see

⁵ A side effect is that the n-G-map properties are not necessarily verified during the algorithm, but only at the end.


Algorithm 3: n-Insertion
Data: A n-G-map H = (D, α0, ..., αn), a set of selected darts De
1  Add a ghost-layer g on the boundary of H;
2  De ← De + darts selected in g (see Definition 8);    /* Fig. 13b */
3  Memorise αn^p, ∀d ∈ De;
4  n-unsew all darts ∀d ∈ De;
5  Generate local nD-pattern ∀d ∈ De;                   /* Figs. 9, 10 and 13c */
6  Link patterns;                                       /* Fig. 13d–e */
7  Collapse compressed n-cells;
8  Remove g;                                            /* Fig. 13f */

Fig. 13c). Line 6 of Algorithm 3 differs in 2D and 3D. It corresponds to the fourth item of Definition 11. We give some details about their implementation afterward.

At line 7, we collapse the compressed n-cells. The 2-insertion pattern given in Definition 9 generates compressed 2-cells (see Fig. 16), and we get the same kind of compressed 3-cells in 3D. In both dimensions, we detect such cells as they own at least one dart d such that dα0α1α0α1 = d. Once detected, every compressed n-cell is removed: in 2D, a compressed 2-cell C2 is removed; in 3D, a compressed 3-cell C3 is removed and we 3-sew α23(d) with α123(d) for d a dart of C3. Finally, the ghost layer is removed (line 8).

Links in 3D. We first 0-link the darts p0,1,4,5,8,11(d) of a selected dart d with darts of p(dα0). For example, dart p0(d) is 0-linked with p0(dα0). After that, the darts p(d).f, p(d).l and p2,3(d) are respectively 1- and 2-linked with darts of p(dα1) (see Fig. 12c). We then link the darts p5,9(d) to the pattern spawned by the marked dart d' found in the orbit ⟨…⟩(d), d' ≠ d. Dart p5(d) is 2-linked to p5(d'), while p9(d) is 1-linked to p9(d'); if there is no such d', nothing is done. We then look for the marked dart m (m ≠ d) in the orbit ⟨…⟩(dα3). If m is found, we 1-link p10(d) with p9(m) and 2-link p8(d) with p5(m). We proceed similarly with d'. If there is no such dart m, we are in the usual case illustrated in Fig. 17e, where we form a flat 3-cell between p(d) and p(d'). Figure 17f shows the self-intersecting case, where the patterns open up to form a hexahedron (see Fig. 17a, b).
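As a small illustration of the detection criterion above, the following hedged Python sketch tests a cell for compression; the alpha(d, i) accessor is a hypothetical stand-in for the dart-link query of an n-G-map implementation.

    # A cell is flagged as compressed when it owns at least one dart d
    # such that d(alpha0 alpha1 alpha0 alpha1) = d.
    def is_compressed(cell_darts, alpha):
        def walk(d):
            for i in (0, 1, 0, 1):
                d = alpha(d, i)  # follow the alpha_i link
            return d
        return any(walk(d) == d for d in cell_darts)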


Fig. 15 Collapse a 2-cell to generate a self-touching pattern

Fig. 16 Simple insertion of two 2-cells (green). Darts in pink form three compressed 2-cells

We then close the orbit ⟨…⟩(p10(d)) by 1-linking together the two α1-free darts f and f', and we 2-link fα21 and f'α21, which closes the flat 3-cells and the hexahedra. After the links are done in nD, we have to remove the compressed n-cells; in 3D, we remove the compressed 3-cells and 3-sew into a chord the hexahedra created on the self-intersection. For a 3-free dart d of such a hexahedron, we 3-sew it with the other 3-free dart of ⟨…⟩(d). To address the case of a self-touching self-intersection (see Fig. 14a) in 2D, we define an operation which collapses a 2-cell when the two newly inserted 2-cells are 2-linked (Fig. 15b).

4 Conclusion and Future Works

In this work, we formally defined and implemented hexahedral blocking operations using the n-G-map model. Using this model brings many benefits: (1) the ability to get a unique definition in 2D and 3D for our operations; (2) a clear separation of concerns between topology and geometry; (3) formal pre- and post-conditions to validate the block structure during blocking operations; (4) the usage of orbits, which are much more general than regular cells.

Formal Definition of Hexahedral Blocking operations Using n-G-Maps Fig. 17 Marked darts of a grid mesh along which an auto-intersected sheet will be inserted (a) and the resulting mesh with the highlighted sheet (b). In (c) a more complex case with (d) a cross-section to show the inserted sheet. The two highlighted 12-darts patterns spawned by the two red darts (see Fig. 12) form a flat 3-cell in (e) (the same is illustrated in 2D in Fig. 16). In (f) where the sheet auto-intersects the four patterns form an additional hexahedron 3-cell


The first three items were very important to obtain a clean implementation of the sheet operations, especially sheet insertion. We are able to insert self-intersecting and self-touching sheets in a robust manner. The merging rules, introduced to handle geometrical classification and vertex location, coupled with the sewing and unsewing operations, helped us guarantee the robustness of the operations. To go further, we expect to allow more complex sheet insertion patterns. We also plan to formally prove the robustness of our approach by deriving, from the definitions we proposed in Sect. 3, a system of rules using, for instance, the Jerboa framework [28], to ensure our definitions and algorithms are correct.

References

1. N. Pietroni, M. Campen, A. Sheffer, G. Cherchi, D. Bommes, X. Gao, R. Scateni, F. Ledoux, J.-F. Remacle, and M. Livesu, "Hex-mesh generation and processing: A survey," ACM Trans. Graph., July 2022. Just Accepted.
2. Cubit, "Sandia National Laboratories: CUBIT geometry and mesh generation toolkit."
3. M. Smith, ABAQUS/Standard User's Manual, Version 6.9. United States: Dassault Systèmes Simulia Corp, 2009.
4. ANSYS, "Ansys Fluent - CFD software | Ansys," 2016.
5. M. Mäntylä, An Introduction to Solid Modeling. USA: Computer Science Press, Inc., 1987.
6. P. Lienhardt, "N-dimensional generalized combinatorial maps and cellular quasi-manifolds," Int. J. Comput. Geom. Appl., vol. 4, pp. 275–324, 1994.
7. J.-F. Remacle and M. Shephard, "An algorithm oriented mesh database," International Journal for Numerical Methods in Engineering, vol. 58, no. 2, 2003.
8. E. S. Seol, FMDB: Flexible Distributed Mesh Database for Parallel Automated Adaptive Analysis. PhD thesis, Rensselaer Polytechnic Institute, 2005.
9. R. V. Garimella, "Mesh data structure selection for mesh generation and FEA applications," International Journal for Numerical Methods in Engineering, vol. 55, pp. 451–478, 2002.
10. R. V. Garimella, MSTK: MeSh ToolKit, v1.3 User's Manual. Los Alamos National Laboratory, 2012. LA-UR-04-0878.
11. T. Tautges, C. Ernst, K. Merkley, R. Meyers, and C. Stimpson, "Mesh oriented database (MOAB)," 2005. http://cubit.sandia.gov/cubit
12. F. Ledoux, Y. Bertrand, and J.-C. Weill, Generic Mesh Data Structure in HPC Context, vol. 26 of Computation Technologies and Innovation Series, ch. 3, pp. 49–80. Stirlingshire: Saxe-Coburg Publications, 2010.
13. H. Edelsbrunner, Algorithms in Combinatorial Geometry. New York: Springer-Verlag, 1987.
14. B. G. Baumgart, "A polyhedron representation for computer vision," in Proceedings of the May 19–22, 1975, National Computer Conference and Exposition, AFIPS '75, (New York, NY, USA), pp. 589–596, Association for Computing Machinery, 1975.
15. D. E. Muller and F. P. Preparata, "Finding the intersection of two convex polyhedra," Theor. Comput. Sci., vol. 7, pp. 217–236, 1978.
16. K. Weiler, "Edge-based data structures for solid modeling in curved-surface environments," IEEE Computer Graphics and Applications, vol. 5, no. 1, pp. 21–40, 1985.
17. D. Sieger and M. Botsch, "Design, implementation, and evaluation of the surface_mesh data structure," in IMR, 2011.
18. G. Damiand and P. Lienhardt, Combinatorial Maps: Efficient Data Structures for Computer Graphics and Image Processing. A K Peters/CRC Press, September 2014.
19. J. Rossignac, "3D compression made simple: Edgebreaker with ZipandWrap on a corner-table," in Proceedings International Conference on Shape Modeling and Applications, pp. 278–283, 2001.


20. T. J. Tautges, "Local topological modifications of hexahedral meshes using dual-based operations," in 8th U.S. National Conference on Computational Mathematics, July 2005.
21. J. F. Shepherd and C. R. Johnson, "Hexahedral mesh generation constraints," Engineering with Computers, vol. 24, no. 3, pp. 195–213, 2008.
22. F. Ledoux and J. F. Shepherd, "Topological and geometrical properties of hexahedral meshes," Engineering with Computers, vol. 26, no. 4, pp. 419–432, 2010.
23. F. Ledoux and J. F. Shepherd, "Topological modifications of hexahedral meshes via sheet operations: a theoretical study," Engineering with Computers, vol. 26, no. 4, pp. 433–447, 2010.
24. E. Brisson, "Representing geometric structures in d dimensions: Topology and order," in Symposium on Computational Geometry, pp. 218–227, 1989.
25. P. Lienhardt, "Subdivisions of n-dimensional spaces and n-dimensional generalized maps," in Annual ACM Symposium on Computational Geometry, pp. 228–236, 1989.
26. P. Lienhardt, "Topological models for boundary representation: a comparison with n-dimensional generalized maps," Computer Aided Design, vol. 23, no. 1, pp. 59–82, 1991.
27. M. L. Staten, J. F. Shepherd, F. Ledoux, and K. Shimada, "Hexahedral mesh matching: Converting non-conforming hexahedral-to-hexahedral interfaces into conforming interfaces," International Journal for Numerical Methods in Engineering, vol. 82, no. 12, pp. 1475–1509, 2010.
28. H. Belhaouari, A. Arnould, P. Le Gall, and T. Bellet, "JERBOA: A graph transformation library for topology-based geometric modeling," in 7th International Conference on Graph Transformation (ICGT 2014) (H. Giese and B. König, eds.), vol. 8571, (York, United Kingdom), Springer, July 2014.

Machine Learning

Machine Learning Classification and Reduction of CAD Parts Steven J. Owen, Armida J. Carbajal, Matthew G. Peterson, and Corey D. Ernst

1 Introduction

Complex assemblies frequently include many common mechanisms such as bolts, screws, springs, bearings, and so forth. In practice, analysts will spend extensive time identifying and then transforming each mechanism to prepare for analysis. For example, bolted connections may require specific geometric simplifications, specialized meshing, and boundary condition assignment. For assemblies with hundreds of bolts, model preparation can be tedious and often error-prone. This work uses machine learning methods to rapidly classify CAD parts into categories of mechanisms. Once classified, the analyst is able to preview and apply category-specific solutions to quickly transform them to a simulation-ready form.

The new environment, as shown in Figs. 1 and 2, enables the real-time grouping of volumes in a CAD assembly using our proposed classification procedure. In this example, volumes classified as bolts can be efficiently converted into a simulation-ready form using a single operation that may include automatic defeaturing, meshing, and boundary condition assignment. The user can preview the reduced form from a variety of options and apply the reduction operation to multiple bolts simultaneously. Additional reduction operations are being developed for other part categories based on user-driven use cases.

This work aims to identify a machine learning model that can predict specific categories of mechanisms in real time from a set of parts in a complex CAD assembly. Our objective is to enable rapid category-specific reduction operations and significantly reduce the amount of time required by the user to prepare models for analysis.

S. J. Owen (B) · A. J. Carbajal · M. G. Peterson · C. D. Ernst Sandia National Laboratories, Albuquerque, New Mexico, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Ruiz-Gironés et al. (eds.), SIAM International Meshing Roundtable 2023, Lecture Notes in Computational Science and Engineering 147, https://doi.org/10.1007/978-3-031-40594-5_5


Fig. 1 Parts of a complex CAD assembly are categorized and displayed as expanding lists in a graphical user interface

Fig. 2 Examples of fastener reduction operations that can be quickly performed to prepare for analysis input

2 Background

While machine learning has been widely applied to text, image, audio, and video analysis, there has been limited research on its use in model preparation for simulation. One notable example is the work of Danglade et al. in [1], which describes a limited environment for defeaturing CAD models using machine learning driven by heuristic rule-based outcomes. While they propose several new criteria for evaluating the results of trained models, they rely on human interaction to judge the quality of the results, which makes the approach difficult to scale.

ML-based part classification is frequently used for rapid sorting of mechanisms in industrial manufacturing processes. Recent work that has demonstrated the usefulness of machine learning methods for shape recognition and classification of CAD models includes [2–4, 24–26]. However, these methods do not extend to driving modifications to the CAD model, such as those required for mesh generation and simulation. Lambourne et al. [5] propose sorting part classification models into one of four groups: point cloud, volumetric, image-based, and graph-based approaches.


They provide a brief review of each of these methods, citing several examples along with their benefits and drawbacks. In our application, complex CAD assemblies are typically produced by advanced 3D design tools such as Solidworks [8] or PTC Creo [7] for design and manufacturing purposes. Analysts usually use a modified form of the original CAD assembly as the basis for a computational simulation model. The assembly data consists of multiple parts described in file formats such as .step or .sat. These formats describe a hierarchical arrangement of entities, including vertices, curves, surfaces, and volumes, or boundary representation (BREP), and each entity has an underlying numerical description [9]. The metadata conventions in these formats can often identify a name or other attribute that can assist in part classification. However, as we often encounter data from a variety of sources, including legacy CAD assemblies, we cannot assume a consistent metadata convention and must use other means for classification.

3 Overview

Supervised machine learning is a problem where, given a training dataset (x1, y1), ..., (xn, yn) with vector input features x and vector output features y (referred to as labels or ground truth), it is assumed that there exists an unknown function y = f(x) that maps input features to output features. A learning algorithm can be used to train a model (or fit it) to the data, such that the model approximates f. Once the model has been trained, it can be used to evaluate new, previously unseen input vectors to estimate (or predict) the corresponding output vectors. To apply supervised machine learning in a new problem area, the researcher must determine the domain-specific outputs, identify the available domain-specific input features that can be used to predict them, and create a training dataset containing enough examples of each to adequately represent their distributions.

For this work, our first decision was to limit the scope to the classification of individual CAD parts. Next, we needed to define our machine learning model outputs, or labels. Since our goal is to classify geometric volumes based on a mechanism's function, we selected a few common categories, including: bolt, nut, washer, spring, ball, race, pin, and gear. Similarly, the input features x for each model are chosen to characterize the local CAD model geometry and topology that we believed would drive those outcomes. With a machine learning model that can predict the classification category of a geometric volume, we can use the predicted classification to present users with a categorized list of parts based on their mechanism function. This can help users quickly identify and select specific parts for further analysis or processing.


4 Features

To predict mechanism categories based on a geometric volume, we need to characterize the geometry and topology of the CAD part. For each volume G3 composed of vertices, curves, and surfaces, we defined a characteristic feature vector xG3. The selected features that characterize G3 are based on a fixed-length set of numerical values that describe the geometric volume. Table 1 describes the attributes used for the features of G3. These attributes are queried from a geometry engine for each volume and used to construct xG3. For this work, we selected 48 features based on common characteristics of curves, surfaces, and volumes frequently used for mesh generation. Each feature can be easily computed or derived from common query functions of a 3D geometric modeling kernel [10]. Table 1 includes a representative sample of these features, along with a brief description of each.

Table 1 Representative sample of 48 features computed for each CAD volume and used for training data

ID  Feature                Description
0   genus*                 Number of through holes
1   min_aspect             Tight bbox min l/w
2   max_aspect             Tight bbox max l/w
3   volume_bbox_ratio*     Volume / vol. tight bbox
4   princ_moments[0]*      Principal moment
5   princ_moments[1]       Moment of inertia
6   princ_moments[2]*      Smallest moment
7   dist_ctr_to_bbox_ctr   Distance vol. centroid to bbox centroid
9   min_area_ratio         Min area / tot. surf. area
10  max_area_ratio         Max area / tot. surf. area
19  area_ratio_end         Area w/ curves 225° > θ > 360°
20  area_ratio_interior*   Area w/ curves 0° > θ > 135°
21  area_ratio_side        Area w/ curves 135° > θ > 225°
23  area_no_curvature      Area surfs with no curvature (planar)
24  area_low_curvature     Area surfs with rad > 100 * small_curve
25  area_med_curvature     Area surfs with rad > 10 * small_curve
26  area_high_curvature*   Area surfs with rad > small_curve
27  curve_length           Len. all curves / bbox diagonal
28  curve_to_area_ratio    Len. all curves * bbox diagonal / tot. area
32  len_straight_ratio*    Len. linear curves / len. all curves
38  reversal_angles*       Len. curves w/ angle 315° > θ > 360°
39  corner_angles          Len. curves w/ angle 225° > θ > 315°
40  side_angles*           Len. curves w/ angle 135° > θ > 225°
41  end_angles_ratio       Len. curves w/ angle 0° > θ > 135°

* Indicates features used in reduced set
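As a flavor of how such attributes can be derived, the following hedged Python sketch computes two of the simpler features; the part object and its query methods are hypothetical stand-ins for geometry-kernel calls, and an axis-aligned bounding box is used for simplicity where Table 1 refers to a tight box.

    # Feature 3: ratio of part volume to bounding-box volume.
    def volume_bbox_ratio(part):
        xmin, ymin, zmin, xmax, ymax, zmax = part.bounding_box()
        bbox_volume = (xmax - xmin) * (ymax - ymin) * (zmax - zmin)
        return part.volume() / bbox_volume

    # Feature 0: number of through holes, from Euler's formula for a
    # closed orientable surface, V - E + F = 2 - 2g.
    def genus(part):
        v, e, f = part.num_vertices(), part.num_edges(), part.num_faces()
        return (2 - (v - e + f)) // 2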


5 Ground Truth

For our supervised machine learning model associated with each volume G3, we needed to provide a ground-truth classification. This was initially done by developing a Python script that reads a CAD part and presents the operator with an isometric image of the volume. To evaluate our methods, we used 5035 single-part ACIS files that were gathered from internal proprietary sources and external sources, including GrabCAD [11]. GrabCAD is a free subscription service that provides a large database of CAD models in a wide variety of formats, contributed by sources from multiple industries, including aerospace, transportation, animation, and many others.

The selected CAD assemblies were processed by our Python script and separated into individual parts. The operator then chose from the predefined set of 9 mechanism categories for each CAD part. At that time, a feature vector xG3 was generated and appended to one of 9 .csv files named for its classification category. For example, if the operator identifies the part as a gear, features are computed for the volume and appended to a file named gear.csv. While any CAD kernel with the relevant evaluators could be used, we developed our tool using both the Spatial ACIS kernel [10] and an internally developed geometry kernel. In Sect. 7, we describe how our approach allows for the dynamic establishment and enrichment of categories within the CAD tool environment, providing a more comprehensive and up-to-date specification of ground truth (Fig. 3).

Fig. 3 Examples of CAD parts used to create ground truth categories for mechanism classification


6 Machine Learning Methods

In this work, we evaluated several existing machine learning classification methods, including random forests and neural networks. These methods are commonly used in the literature for classification problems, and we chose to use them in our work to compare their performance for CAD component classification. Specifically, we used ensemble decision tree (EDT) algorithms from the Scikit-learn (sklearn) library [13] and deep learning techniques with neural network architectures in PyTorch [14]. We found that the EDT method outperformed the NN approach for this task. We were able to utilize these open-source tools without the need to develop new ML technology.

6.1 Neural Network

Neural networks (NNs) are a type of machine learning algorithm that are inspired by the structure and function of the human brain. They consist of multiple interconnected nodes or "neurons" that are activated based on input criteria. The input layer of an NN typically consists of a set of characteristics or "features" that describe the data being processed, and the output layer provides a predicted value or classification. NNs are commonly used in image recognition tasks, where the input features are the pixel values of the image, and the output layer predicts the probability of the image belonging to a certain category.

A neural network is trained by providing it with a large number of examples of the input data along with their corresponding correct outputs or "labels". As the network processes each example, it adjusts the values of its internal parameters, known as "weights", in order to produce the correct output for each example. This process continues until the network reaches a satisfactory level of accuracy on the training data. Once trained, the network should be able to predict the correct output for new, unseen examples of the input data.

PyTorch [14] is a popular open-source tool for training and managing neural network models, and was used to implement our classification method. Our application involves a traditional classification problem, where the input layer consists of 48 features computed from the characteristics of the CAD part, and the output layer consists of 9 nodes representing our 9 classification categories. After experimenting with different configurations, we found that a single hidden layer with a batch size of 128 and a Sigmoid activation function provided the best performance. In a neural network, the activation function determines the threshold at which a neuron will "fire" or adjust its weight, and the Sigmoid function is a common choice for classification tasks. By doubling the size of the hidden layer between Sigmoid activations, we were able to obtain the desired 9-category output.

Each of the 9 output nodes is a floating-point value which roughly approximates a probability score of whether the CAD part, represented by the 48 input features,


can be categorized by one of the 9 categories: each position of the output vector corresponds to one of the categories, and the value at that position approximates the likelihood that the part belongs to that category.
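The following is a minimal PyTorch sketch of the architecture as described (48 inputs, one Sigmoid-activated hidden layer, 9 outputs, batches of 128); the hidden-layer width, optimizer, and loss function are illustrative assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    # 48 CAD features in, 9 mechanism-category scores out.
    model = nn.Sequential(
        nn.Linear(48, 96),   # hidden width is an assumption
        nn.Sigmoid(),
        nn.Linear(96, 9),
    )
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters())

    # One training step on a batch of 128 placeholder feature vectors.
    features = torch.rand(128, 48)
    labels = torch.randint(0, 9, (128,))
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()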

Feature Correlation

Our choice of features was mostly based on intuition, and we selected those that we believed to be unique to each CAD part. However, this approach has the potential weakness of introducing correlated features, which are features that are strongly related to each other and provide redundant information. Techniques for measuring feature correlation have been well studied in the machine learning literature [16], and using these methods can help identify and eliminate correlated features, leading to improved performance of the classification model.

We found that many of the features we selected were highly correlated, and we suspected that reducing the number of features would improve the efficiency of the training process. To identify correlated features, we used Spearman's rank correlation coefficient [15], which measures the monotonic relationship between two features. After applying this method, we were able to reduce the number of features without significant loss of performance. We performed a stepwise removal of features by iteratively eliminating the most correlated feature until the remaining features had a Spearman's rank correlation coefficient of 0.29 or less. This process was done one feature at a time to ensure that the removal of a highly correlated feature did not introduce new correlations among the remaining features. The 9 features that were retained after this procedure are indicated with an asterisk (*) in Table 1. These features can be considered composite features that capture most of the information needed for the classification task.

By using the reduced set of features, we were able to significantly reduce the training time of the neural network without significantly affecting the overall accuracy of the models (see Table 2). On a MacOS machine, the training time was reduced from days to 15 minutes, and on a Linux machine with an NVIDIA Quadro RTX 6000 24 GB GPU, it was reduced to about 9 minutes. Although this improvement was significant, the training time of the neural network still could not compete with the one-second or less training time of the ensemble of decision trees (EDT) method. However, further modifications to the neural network parameters have improved the performance of the 48- and 9-feature models, as shown in Table 3. Despite these improvements, the EDT method remains more efficient for this task.
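A hedged sketch of this stepwise pruning using pandas, with the 0.29 threshold from the text; the authors' exact tie-breaking and drop rule is not specified, so this is only an approximation.

    import pandas as pd

    # Iteratively drop one feature of the most correlated pair until every
    # remaining pairwise Spearman coefficient is at or below the threshold.
    def prune_correlated(df: pd.DataFrame, threshold: float = 0.29) -> pd.DataFrame:
        features = df.copy()
        while features.shape[1] > 1:
            corr = features.corr(method="spearman").abs()
            for col in corr.columns:
                corr.loc[col, col] = 0.0   # ignore self-correlation
            pair = corr.stack().idxmax()   # most correlated feature pair
            if corr.loc[pair] <= threshold:
                break
            features = features.drop(columns=[pair[0]])
        return features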


Table 2 Accuracy of EDT and NN models on 5035 CAD parts using 5 × 5 k-fold cross validation

         EDT, 48 features    EDT, 9 features     NN, 48 features     NN, 9 features
         Precision  Recall   Precision  Recall   Precision  Recall   Precision  Recall   Support
Bolt     100.0      99.0     97.5       98.0     98.5       99.0     95.0       95.6     998
Nut      100.0      100.0    100.0      96.2     97.5       86.8     84.0       73.2     114
Washer   97.6       97.6     97.4       90.5     94.8       96.2     79.3       76.4     204
Spring   100.0      100.0    100.0      91.3     97.2       93.2     89.3       77.6     110
Ball     100.0      100.0    100.0      100.0    99.7       100.0    99.9       100.0    543
Race     100.0      100.0    94.3       100.0    95.7       96.0     90.1       87.0     148
Pin      100.0      100.0    100.0      100.0    98.2       97.8     92.0       94.3     328
Gear     100.0      93.3     96.3       86.7     92.0       91.9     79.4       47.2     210
Other    99.0       99.8     97.6       98.8     97.8       98.1     89.7       93.9     2380
Total    99.4       99.3     97.9       97.7     97.7       95.5     91.0       83.5     5035

Table 3 Performance (CPU training time) of EDT and NN models, 5035 models with 5 × 5 k-fold cross validation

         EDT, 48 features   EDT, 9 features   NN, 48 features   NN, 9 features
         0.83 s             0.51 s            541 s             512 s

6.2 Ensemble of Decision Trees

An EDT is a collection of individual decision trees, each of which is trained on a subset of the full training data. At evaluation time, the EDT's prediction is a weighted sum of the predictions of each of its individual trees. In prior work [17, 18], the authors used a regression EDT to predict mesh quality outcomes based on local geometric features of a CAD model. This work uses a similar approach, where we extend EDT to use geometric features for classification.

As mentioned earlier, the features in our dataset are highly interdependent, which can sometimes lead to multicollinearity. However, this is not an issue for EDTs because they are able to trim and prune the decision trees during training, using a voting process to select the best output for each class. Furthermore, even with all 48 features included, the EDT can be trained quickly on most 64-bit MacOS and WindowsOS systems, typically taking only a few seconds.
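As a toy illustration of the evaluation rule just described, the ensemble probability can be written as an average of per-tree class probabilities (equal weights assumed here; this mirrors, but does not claim to reproduce, what sklearn's RandomForestClassifier does internally).

    import numpy as np

    # Average the class-probability vectors of each fitted tree.
    def ensemble_predict_proba(trees, X):
        return np.mean([tree.predict_proba(X) for tree in trees], axis=0)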

7 In-Situ Classification

Our initial classification methods were well received, but analysts requested an interactive method for enhancing their training data or adding custom categories. To address this need, we developed methods that allow for user input and customization.


Fig. 4 Dynamic supervised learning (SL) model for custom in-situ classification of CAD parts: (a) static SL model; (b) dynamic SL model

The "Normal" developer training scenario, shown in Fig. 4a, involves collecting examples of CAD parts that represent the initial 9 categories. The developer then assigns labels to each CAD part and computes the corresponding features, which are written to a .csv file. Once a sufficient number of labeled examples have been collected, the model can be trained using the sklearn RandomForestClassifier (EDT) functions in a separate Python script (see Sect. 10.1). The trained model is then serialized and saved as a pickle file. During prediction, the pickled model is loaded and used to classify new CAD parts into the initial set of 9 categories (see Sect. 10.2).

To make the supervised learning procedures more versatile and enable in-situ classification, our objectives included the following:

1. Custom categories: Allow the user to dynamically add additional classification categories from within the CAD tool.
2. User-defined training data: Allow the user to interactively add additional ground truth to their training models.
3. Sharable training data: Allow users to share user training data.
4. Reclassification: Allow users to modify the classification assignment.
5. In-situ training: Allow the user to update the classification model on demand.

Figure 4b shows how this was accomplished. Starting with the existing training data, the user can interactively select one or more parts and assign them to a category string. This category can be chosen from the existing categories, or the user can specify a new one. A feature vector (see Table 1) is then computed for each selected volume and written to a .csv file in a persistent user directory. This allows the user to add their own labeled data to the training set and customize the classification categories.

Whenever the user updates their training data, the current EDT model is discarded and a new one is generated using both the user-defined data and the existing training


set. This allows the model to incorporate the user’s custom labels and categories. Because EDT training is very efficient and typically takes less than one second, rebuilding the model after each update has minimal impact on performance. Once the new EDT model is loaded, consisting of both developer-defined and user-defined data, the user can make additional predictions using the standard procedure outlined in Sect. 10.2. In some cases, it may be necessary to reclassify a part by changing its ground truth or label. Our ML library allows for this situation by implementing a remove_data() function. Given a set of features for a CAD part, this function searches through the existing data and removes any rows that match the input features. The removed data is then added to the corrected class category, and the EDT model is retrained to incorporate the updated labels. This allows the user to easily modify the classification assignments and update the model as needed. When establishing a new category, it is ideal for the analyst to provide a large number of ground truth examples to avoid overfitting. Overfitting is a common problem in machine learning [23], where a model fits the training data too closely, leading to poor performance on unseen data. However, in some cases, it may not be possible to collect a large number of examples for a new category. In these situations, overfitting can be useful for initially establishing the category on known problems. As the analyst provides more diverse examples, the overall accuracy for unseen models will improve.
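A hedged sketch of this in-situ loop follows: user-labeled rows are appended to a per-category .csv file, and the EDT is rebuilt from developer plus user data. The directory layout and function names are assumptions for illustration; the tree_count and max_depth values come from Sect. 10.1.

    import csv, glob
    from pathlib import Path
    from sklearn.ensemble import RandomForestClassifier

    def add_ground_truth(features, category, user_dir="user_training"):
        # Append one 48-entry feature row to the category's CSV file.
        Path(user_dir).mkdir(exist_ok=True)
        with open(Path(user_dir) / f"{category}.csv", "a", newline="") as f:
            csv.writer(f).writerow(features)

    def retrain(dirs=("training", "user_training")):
        # Rebuild the EDT from all developer- and user-supplied CSV files.
        X, y = [], []
        for d in dirs:
            for path in glob.glob(f"{d}/*.csv"):
                label = Path(path).stem          # file name is the category
                with open(path) as f:
                    for row in csv.reader(f):
                        X.append([float(v) for v in row])
                        y.append(label)
        return RandomForestClassifier(n_estimators=5, max_depth=20).fit(X, y)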

8 Results We report initial results in Table 2 from both NN and EDT models using the full set of 48 features and a reduced set of 9 features. To evaluate our results, we use .k-fold cross-validation [27], a well-established technique in machine learning. We choose .k = 5 and .n = 5, where we randomly split the data into 80% training and 20% testing sets, and repeat the process for a total of 25 iterations. This allows us to assess the performance of our models on unseen data and avoid overfitting. Table 2 shows a slight decrease in accuracy when using a reduced set of 9 features compared to the full set. This decrease is more pronounced for NN than for EDT, indicating that NNs are more sensitive to the removal of features, particularly when it comes to recall. Although the results for both reduced feature sets may be sufficient for many applications, the time spent identifying correlations and reducing the feature sets did not result in a significant improvement in performance when compared to the training time of the two methods. In addition to the classification accuracy results, Table 3 also reports the CPU training time for each of our 4 models. These results are the average training time for one iteration of the .k-fold cross-validation procedure, and they provide insight into the efficiency of the different models. While time to tune hyper-parameters for each of the models was not included in the timing results, we note a significantly higher overhead for NN as compared to EDT to evaluate and reduce feature correlation.


These results show that EDT outperforms neural networks on our training set, with a training time that is about three orders of magnitude faster. While both models achieved precision and recall above 95% when using the full set of features, we observed a significant decrease in accuracy for the reduced set of features with the NN model. Although pruning the features slightly improved the performance of both models, the benefit was minimal. Overall, these results suggest that EDT is a more efficient and effective model for our dataset. After experimenting with various ML tools and approaches, including NN and EDT, we found that EDT was the best model for our purposes. As shown in Table 3, EDT was much faster to train than NN, which was critical for our objective of incorporating real-time in-situ training. We also observed that NN was more sensitive to feature interdependence, while EDT was not affected by this issue. This reduced the need for extensive feature selection and allowed us to use the full set of features without sacrificing performance. Additionally, we found that the accuracy of EDT was comparable to NN, with a slight advantage for EDT. Overall, these factors made EDT the preferred model for our use case.

9 Comparison

To evaluate our method against other machine learning techniques, we use the Mechanical Component Benchmark (MCB) [19]. This benchmark provides two large datasets of over 58,000 mechanical parts. While other public repositories of CAD parts exist [20, 21], MCB is particularly useful as it groups parts based on user-defined categories, providing clear ground truth. Additionally, several existing deep learning methods have published results based on MCB. The first dataset (A) is divided into 68 categories, and the second (B) contains about 18,000 objects divided into 25 categories.

Each object is in the form of an .obj file, which is a common format used in graphics applications. However, this format represents objects using only facets (triangles), which is not well-suited to a boundary representation (BREP)-based approach like ours. Nevertheless, we were able to adapt most of the training data for our EDT classification method. Since our features are dependent on topological entities, we used a mesh-based BREP [22] to represent the objects in the MCB datasets. This method breaks the surfaces and curves of the objects at angles exceeding 135 degrees. However, we also observed anomalies in the data that could not be represented using our current methods [22]. As a result, we discarded those objects that did not meet our criteria before evaluating the performance of our models.

To ensure consistency in the evaluation of different models, the MCB dataset includes separate training and testing sets for both datasets A and B. We tested our models on 5,713 objects with 68 classes in dataset A and 2,679 objects with 25 classes in dataset B. We compared our results with those of multiple published deep learning models reported in Kim et al. [19] on the same datasets. We replicate their results in Table 4 for Accuracy over Object and Average Precision for both datasets


A (68 classes) and B (25 classes), and we also include the results of our EDT model, named CubitEDT, for comparison.

Table 4 Comparison of 7 deep learning models to CubitEDT

              Accuracy (%)         Precision (%)
Method        A        B           A        B
PointCNN      93.89    93.67       90.13    93.86
PointNet++    87.45    93.91       73.45    91.33
SpiderCNN     93.59    89.31       86.64    82.47
MVCNN         64.67    79.17       77.69    79.82
RotationNet   97.35    94.73       87.58    84.87
DLAN          93.53    91.38       89.80    90.14
VRN           93.53    85.44       85.72    77.36
CubitEDT      97.04    92.90       91.79    85.81

Our analysis shows that the accuracy and precision of CubitEDT are on par with, or exceed, those of the majority of other deep learning methods reported on the MCB datasets. For instance, PointCNN has an accuracy of over 90% on both datasets. In comparison, CubitEDT demonstrates improved accuracy and precision when compared to PointCNN for dataset A, but slightly lower accuracy for dataset B. Overall, CubitEDT compares very favorably to the reported accuracy and precision of other deep learning models. Notably, Kim et al. [19] did not report performance (timing) metrics for comparison.

10 Implementation

To make the new part classification capabilities available to analysts, we implemented them in the Cubit™ Geometry and Meshing Toolkit [6]. The toolkit provides both a command-line interface and a graphical user interface, and it is built on top of a new machine learning library that can be accessed through an application programming interface (API) using C++ or Python. This allows analysts to easily use the classification tools within their existing workflow.

Our objective in developing a new ML library was to provide a common environment for external CAD-based applications to use these tools without the need to access the capabilities through a specific end-user meshing tool. This allows external applications to link with the ML libraries and include its headers as a third-party library. While the meshing tool served as the initial recipient and test case for the ML libraries, they were developed with the intent of including them in next-generation software.

Included in the ML libraries are functions to generate the standard set of 48 features given a single-part CAD model. This involves querying the CAD kernel


to compute each of the 48 features shown in Table 1. While initially the features were generated based on the ACIS [10] kernel, we have more recently developed a CAD abstraction interface that allows for other CAD kernels. For our purposes, we specifically targeted an internally-developed geometry kernel that is currently under development. The following is a general outline of the procedures used to train a set of CAD parts and generate predictions:

10.1 Training

The training process for building a serialized machine learning model consists of the following steps:

1. Generating training data: The procedure for generating training data is described in Sect. 5. It involves providing a fixed set of .csv files, where each row of a .csv file contains exactly 48 entries corresponding to the features of one CAD volume. Each .csv file is named according to its ground-truth category.
2. Importing training data: Standard Python tools are used to import each of the .csv files, and the features and labels are stored as vectors, X_train and Y_train respectively.
3. Executing EDT training: The sklearn RandomForestClassifier class is invoked directly using the following functions:

   model = sklearn.ensemble.RandomForestClassifier(
       n_estimators = tree_count, max_depth = max_depth)
   model = model.fit(X_train, Y_train)

The sklearn library also allows for optional arguments to customize the decision tree methods. The tree_count and max_depth arguments control the maximum number of decision trees and the maximum depth of branching for each individual tree, respectively. Experimentation revealed that tree_count = 5 and max_depth = 20 provided the optimal performance/accuracy tradeoff. Larger values for these arguments can potentially deliver more accurate results, but may result in longer prediction times and larger pickled models.

4. Serializing the EDT model: Once a successful EDT model is generated, it can be dumped to a pickle file. This will encode the model object as a byte stream on disk for later use when predicting classification categories.
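For illustration, step 4 can be as simple as the following sketch, assuming model is the classifier fitted above and a hypothetical file name:

   import pickle

   # Serialize the trained EDT model to disk for later prediction runs.
   with open("part_classifier.pkl", "wb") as f:
       pickle.dump(model, f)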


10.2 Prediction

The procedure for predicting the classification category of a CAD part using a serialized model is as follows:

1. Importing the serialized classification model: The serialized EDT model object is imported and stored. Once successfully imported, it can be queried to predict any classification given a set of features.
2. Identifying the CAD part: The user identifies one or more CAD parts for which a classification category is to be predicted.
3. Generating features: The 48 features described in Table 1 are computed for each CAD part.
4. Transforming/scaling the features: As the raw feature values cannot be used directly, a scaling pipeline is first applied to each of the features.
5. Predicting: The sklearn library is invoked and a result vector of probabilities is returned.

   Y_classify = model.predict_proba(X_classify)

In this function, X_classify is a 2-dimensional array of size n × 48, where 48 is the number of features and n is the number of CAD parts. The return value Y_classify is an array of size n × 9, where 9 is the number of classification categories.

6. Identifying the most likely category: The category with the highest probability is chosen as the classification category. However, it may be useful to provide the probability or confidence values to the user when the results are not clear-cut.
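Putting steps 1, 5, and 6 together, a hedged end-to-end sketch might look as follows; the feature array is a random placeholder for kernel-derived vectors, and the file and category names are illustrative assumptions.

   import pickle
   import numpy as np

   with open("part_classifier.pkl", "rb") as f:    # step 1: load serialized model
       model = pickle.load(f)

   CATEGORIES = ["bolt", "nut", "washer", "spring", "ball",
                 "race", "pin", "gear", "other"]

   X_classify = np.random.rand(3, 48)              # placeholder: 3 parts x 48 features
   probs = model.predict_proba(X_classify)         # step 5: shape (3, 9)
   best = probs.argmax(axis=1)                     # step 6: most likely category
   for i, idx in enumerate(best):
       print(f"part {i}: {CATEGORIES[idx]} (p = {probs[i, idx]:.2f})")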

11 Reduction of CAD Parts

In this study, we not only sought to identify common categories of mechanisms in design solid models, but we also aimed to develop simplified methods for quickly reducing the original solid model representation with minimal user interaction. As an example problem, we focused on the fastener reduction problem and also addressed the reduction of spring components. Other mechanism types will be considered as needed.

11.1 Fastener Reduction

Fasteners may require various representations depending on the physics and fidelity of the simulation [28, 29]. In some cases, the simplification, boundary condition


Fig. 5 Example before and after the Reduce operation. Also shows optional insert geometry at the bolt shaft

assignment and meshing of an individual fastener could take upwards of 30 minutes to an hour of user time. With many assemblies consisting of tens or hundreds of bolted connections, fastener preparation becomes a tedious, time-consuming, and potentially error-prone endeavor.

We outline one possible automatic recipe for reducing bolts for analysis. In this case, a diagram of a single bolt fastening two volumes is shown in Fig. 5, where an optional insert, or cylindrical band, is modeled surrounding the shaft of the bolt, which is often modeled physically overlapping its surrounding geometry. In this scenario, the user may choose from multiple options when reducing the fasteners, including removal of chamfers, rounds, and cavities; modification of the diameter of the hole or bolt; adjusting the alignment and fit of the bolt with the hole; separation into different volumes representing head, shaft, and plug components; hex meshing at a specific resolution; and automatic assignment of boundary conditions. In practice, the user will typically experiment with input options using the GUI panel illustrated in Fig. 1 and then apply the same reduction recipe to multiple bolts simultaneously. A few examples of options applied to the bolt pictured in Fig. 6a are shown with results displayed in Fig. 6b–e.

Bolt Reduction Algorithm

The following method illustrates the procedure used for reducing one or more fasteners and their surrounding geometry to a simulation-ready state.

Input: One or more volumes classified as "bolt". Optional corresponding volumes classified as "insert" may also be specified.

Output: A reduced set of bolt and insert geometry that is webcut and meshed, with boundary conditions applied. Depending on user options, the neighboring volumes may also be modified.

Method:

1. Identify Nearby Volumes: This step involves identifying at least one upper volume (dark grey volume in Fig. 5) and a lower volume (light grey volume in


Fig. 5). If not already provided by the user, an optional insert volume can also be determined based on proximity.
2. Identify dimensions, axis, and surfaces of the bolt: This step includes extracting the top and bottom surfaces, as well as the shaft and head, based on expected common characteristics of known bolt geometry.
3. Autosize: If a mesh size is not specified by the user, an autosize is computed, which is a mesh size based on the relative dimensions of the bolt volumes. This value is used both for meshing and for determining tolerances in the next step.
4. Identify surfaces to be removed: Geometric diagnostics are performed to determine whether the bolt's surfaces have certain traits, such as blends, chamfers, cavities, close loops, small faces, or conical surfaces.
5. Simplify bolt geometry: Successive CAD operations are performed to remove the surfaces identified in step 4. It is important to note that removal of a surface with one trait characteristic may introduce other surfaces that require removal. As a result, steps 4 and 5 are repeated until no further surface removal operations are possible.
6. Align bolt to hole axis: If the align bolt option is used, this step checks the alignment of the hole and bolt axes. If not properly aligned, the bolt geometry is transformed to match the hole, such that the bolt and hole axes are colinear.
7. Simplify insert geometry: If an insert is present, the procedure described in steps 4 and 5 is used to simplify the insert geometry.
8. Modify Bolt Diameter: If a diameter value is specified in the command, a CAD surface offset operation is used to adjust the diameter of the bolt shaft.

Fig. 6 Example of four different variations of syntax for the reduce bolt fit_volume command on a single bolt


9. Simplify Hole Geometry: If the simplify hole option is used, any chamfers or rounds decorating the hole geometry, as well as any conical surfaces at the bottom of the hole, are identified and removed.
10. Remove gaps and overlaps between shaft and lower volume: Utilize a boolean subtract operation to eliminate any overlap between the lower volume and the shaft geometry when the tight fit option is selected. This ensures a precise fit between the two components, eliminating any gaps or overlaps. Note that this option is not applicable if an insert geometry is present.
11. Remove Insert overlap: Use a boolean subtract operation to remove any overlap between the insert and the lower volume or the bolt shaft, if an insert is present.
12. Cut Geometry: Utilize a sheet extended from the base of the bolt head to split the head from the shaft when the cut option is selected. Use web-cutting with a sheet extended from the top surface of the lower volume to separate the shaft from the plug. Perform a merge operation on the three bolt components to ensure a contiguous mesh is generated.
13. Cut head for multisweep: When the key cavity remains in the bolt geometry and the mesh option is selected, cut the bolt head using a cylindrical surface extended from the bolt shaft to facilitate use of the pave-and-sweep many-to-one tool. This is done to ensure that only one target surface is required for many-to-one sweeping when the cavity remains in the bolt.
14. Create material blocks: Create material blocks for each bolt component, including the insert (if present), and name/number them according to the user input options. When multiple bolts are reduced in the same command, allow the user to specify consecutive numbering conventions for easy identification of the different bolt components.
15. Mesh: Invoke the internal meshing tools and use the input mesh size (or the autosize computed in step 3) followed by the pave and sweep tools to generate a hex mesh on each of the bolt components, as well as the insert (if present). Check mesh quality following meshing and report any potential element quality issues to the user.

The fastener reduction procedure outlined above is one of the many reduction methods developed in this work. We also considered other scenarios involving different physics, analysis codes, and resolution needs. Figure 2 illustrates some of the results obtained from these alternate reduction options.

Figure 7a shows an example of the use of the fastener reduction operators on an assembly containing many similar bolted connections. Here we illustrate one group of similar fasteners that all require similar analysis preparation. Traditional approaches would require hours of tedious geometry manipulation by an experienced engineer/analyst, as well as wearisome bookkeeping of boundary conditions. Figure 7b shows the result of a single reduction operation that utilizes the method described above. Once classification is complete, the user can select similar bolts and apply the same reduction recipe, including meshing and boundary condition assignment. For this example, the full reduction operation on the 16 bolts in Fig. 7 took approximately 17 seconds on a desktop machine running in serial.


Fig. 7 Efficient reduction of 16 bolts using the proposed method: (a) bolts prior to the reduce operations; (b) bolts after the reduce operations. The bolts are simplified, fit to the surrounding geometry, cut, merged, and meshed with a single operation


Fig. 8 Example of spring reduction from solid to beam representation

11.2 Spring Reduction

Another common issue faced by analysts is the preparation of spring components for analysis [30]. Using a full 3D solid representation of a spring can require a large number of hexahedra or tetrahedra to accurately capture its behavior, which can be computationally intensive and time-consuming to generate. To overcome this challenge, analysts often use a simplified, dimensionally reduced version of the spring in their analysis. This can be simpler to model and faster to compute, while still providing accurate results.

Figure 8 depicts the process of simplifying a 3D solid model of a spring to one or more geometric curves along the axis of its helical geometry. This dimensional reduction process is performed automatically by our tool, making it easy for analysts to prepare the spring for finite element analysis. The resulting curves can then be quickly meshed using internal meshing tools and assigned to a material block, greatly reducing the time and effort required for spring analysis.

Spring Reduction Algorithm

Input: One or more volumes classified as "spring".

Output: One or more connected curves following the mid-curve of the spring, optionally meshed with beam elements.


Method:

1. Heal surfaces: Check and merge surfaces that have blends or can be split into parts.
2. Identify tube-like surfaces: Identify surfaces such as cylinders, tori, NURBS with circular cross-sections, and helical sweeps that sweep a circle along a helix.
3. Extract mid-curves: From each identified surface, extract the curve at the middle of the cross-section.
4. Trim Curves: Remove any capping surfaces from the mid-curves.
5. Join Curves: Combine the mid-curves into a single wire body if desired.
6. Create Spline: Fit all mid-curves to a single NURBS curve if a single curve is desired.
7. Generate beam mesh: Generate beam elements and/or blocks based on user input.

12 Conclusion

In conclusion, we have successfully developed and demonstrated new classification and reduction methods that leverage AI and machine learning to improve the efficiency, accuracy, and reproducibility of preparing simulation-ready models from a design solid model. Our in-situ ML-based tool allows for on-the-fly custom classification and suitability predictions for certain types of geometric operators, and serves as a foundation for establishing a centralized knowledge base for CAD and model preparation operations. These capabilities can significantly reduce the time and effort required for common preparation tasks, and enable analysts to focus on more complex and critical tasks. We believe that our approach has the potential to greatly improve the productivity and effectiveness of engineering analysts in design and validation of critical assemblies.

Acknowledgements Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. SAND2022-12981 C.

References

1. F. Danglade, J.-P. Pernot, and P. Véron, "On the use of Machine Learning to Defeature CAD Models for Simulation," Computer-Aided Design and Applications, vol. 11(3), pp. –, 2013.
2. C. Y. Ip and W. C. Regli, "A 3D object classifier for discriminating manufacturing processes," Computers & Graphics, vol. 30, pp. 903–916, 2006.
3. Z. Niu, "Declarative CAD Feature Recognition - An Efficient Approach," PhD thesis, Cardiff University, 2015.


4. F. Qin, L. Li, S. Gao, X. Yang, and X. Chen, “A deep learning approach to the classification of 3D CAD models,” Journal of Zhejiang University-SCIENCE C, vol. 15(2), pp. 91–106, 2014.
5. J. G. Lambourne, K. D. D. Willis, P. K. Jayaraman, A. Sanghi, P. Meltzer, and H. Shayani, “BRepNet: A topological message passing system for solid models,” CoRR, vol. abs/2104.00706, 2021. [Online]. Available: https://arxiv.org/abs/2104.00706
6. Sandia National Laboratories, “Cubit Geometry and Meshing Toolkit,” 2022. [Online]. Available: https://cubit.sandia.gov. [Accessed: 2022-09-06].
7. PTC, “Creo Parametric 3D Modeling Software,” 2022. [Online]. Available: https://www.ptc.com/en/products/creo/parametric. [Accessed: 2022-01-04].
8. “MySolidworks,” 2022. [Online]. Available: https://my.solidworks.com. [Accessed: 2022-01-04].
9. A. R. Colligan, T. T. Robinson, D. C. Nolan, Y. Hua, and W. Cao, “Hierarchical CADNet: Learning from B-Reps for Machining Feature Recognition,” Computer-Aided Design, vol. 147, p. 103226, 2022.
10. Spatial Corporation, “3D ACIS Modeler,” 2022. [Online]. Available: https://www.spatial.com/products/3d-acis-modeling. [Accessed: 2022-09-06].
11. GrabCAD, “Making Additive Manufacturing at Scale Possible,” [Online]. Available: https://grabcad.com. [Accessed: 2022-09-12].
12. L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
13. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine Learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
14. A. Paszke, “PyTorch: An Imperative Style, High-Performance Deep Learning Library,” in H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett, eds., Advances in Neural Information Processing Systems 32, Curran Associates, Inc., 2019, pp. 8024–8035.
15. C. Xiao, J. Ye, R. Esteves, and C. Rong, “Using Spearman’s correlation coefficients for exploratory data analysis on big dataset,” Concurrency and Computation: Practice and Experience, vol. 28, no. 12, pp. 3448–3458, 2015, https://doi.org/10.1002/cpe.3745.
16. J. Gama, I. S. Pinto, and F. C. Pereira, “Identification of Highly Correlated Features in Data Streams,” in Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 2005, pp. 193–202.
17. S. Owen, T. Shead, and S. Martin, “CAD Defeaturing Using Machine Learning,” in 28th International Meshing Roundtable, Buffalo NY, Oct. 2019, https://doi.org/10.5281/zenodo.3653426.
18. S. J. Owen, T. Shead, S. Martin, and A. J. Carbajal, “Entity Modification of Models,” US Patent: 17/016,543, DOE NNSA, September 2020.
19. S. Kim, H. Chi, X. Hu, Q. Huang, and K. Ramani, “A Large-Scale Annotated Mechanical Components Benchmark for Classification and Retrieval Tasks with Deep Neural Networks,” in Computer Vision – ECCV 2020, A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds., Cham, 2020, pp. 175–191, Springer International Publishing.
20. K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su, “PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding,” CoRR, vol. abs/1812.02713, 2018.
21. Y. Xiang, W. Kim, W. Chen, J. Ji, C. B. Choy, H. Su, R. Mottaghi, L. J. Guibas, and S. Savarese, “ObjectNet3D: A Large Scale Database for 3D Object Recognition,” in European Conference on Computer Vision, 2016.
22. S. Owen and D. White, “Mesh-Based Geometry: A Systematic Approach To Constructing Geometry From A Finite Element Mesh,” in 10th International Meshing Roundtable, Newport Beach CA, November 2001, pp. 83–98.
23. X. Ying, “An Overview of Overfitting and its Solutions,” Journal of Physics: Conference Series, vol. 1168, no. 2, 2019.
24. G. Dong, D. Yan, and N. An, “A CAD-Based Method for Automated Classification of Mechanical Parts,” Computer-Aided Design, vol. 41, no. 5, pp. 489–500, 2009.


25. H. Kim, C. An, and H. Ko, “A Hybrid Machine Learning Approach for CAD Part Classification,” in Proceedings of the 2nd International Conference on Machine Learning and Computing, 2012, pp. 647–651.
26. M. J. Shafiee and A. H. Behzadan, “Automated Classification of 3D CAD Models Using Convolutional Neural Networks,” in Proceedings of the 5th International Conference on 3D Vision, 2017, pp. 583–592.
27. S. S. Keerthi and C. K. Shevade, “Improvements to Platt’s SMO Algorithm for SVM Regression,” Neural Computation, vol. 13, no. 3, pp. 637–649, 2001.
28. A. M. Ibrahim, “On the Effective Finite Element Simplification of Bolted Joints: Static and Modal Analyses,” PhD thesis, Rochester Institute of Technology, 2020.
29. M. Ross, A. Murphy, and B. Stevens, “Fastener Modeling Effects on Fatigue Predictions for Mock Hardware in a Random Vibration Environment,” in AIAA Scitech 2019 Forum, San Diego, California, 2019.
30. A. Yu and C. Yang, “Formulation and Evaluation of an Analytical Study for Cylindrical Helical Springs,” Acta Mechanica Solida Sinica, vol. 23, no. 1, pp. 45–54, 2010.

Predicting the Near-Optimal Mesh Spacing for a Simulation Using Machine Learning Callum Lock, Oubay Hassan, Ruben Sevilla, and Jason Jones

1 Introduction
The generation of unstructured meshes for complex geometric models is still one of the most time-consuming parts of the simulation pipeline [7, 13, 19]. This is due to the large amount of human intervention and expertise that is required to produce suitable meshes for simulation. Mesh generation techniques require the definition of a suitable spacing function that dictates the size of the elements to be generated. The objective is to produce a mesh that concentrates elements only in the regions where they are needed, i.e. regions with complex geometric features to be resolved or regions where complex solution features will be present.

The spacing function can be defined using multiple approaches. The more flexible approaches involve the use of point, line or triangular sources [16, 21] and a structured or unstructured background mesh [17]. Other popular approaches include refinement based on the curvature of the boundary [21] and the definition of the required spacing on selected geometric entities. These approaches can be used independently, but they are often combined to achieve greater control of the spacing. Despite the flexibility of the available tools to produce a suitable spacing function, setting up the required sources or defining an appropriate


spacing on a background mesh still requires a significant level of human intervention and expertise.

An alternative is found in mesh adaptive algorithms [9]. These methods start with a coarse mesh defined by the user and iteratively refine the mesh by identifying the regions where more elements are needed. The main advantage of this approach is the level of automation that can be achieved. However, the initial coarse mesh must be able to capture the solution features to some extent. Otherwise, even if many refinement loops are performed, the solution features will not be captured by the final mesh.

Approaches that utilise neural networks (NNs) to assist the mesh generation process have also been proposed. The earliest attempts to utilise NNs in a mesh generation framework date back to the 1990s and can be found in the field of magnetic device simulations [2, 5, 8]. In the last two years, NNs have been used to assist mesh adaptive algorithms [4, 6, 22] and to predict the spacing in terms of some characteristics of the problem, such as the partial differential equation, the boundary conditions and the geometry [23, 24].

In this work we propose a novel approach based on a NN to predict the spacing that is required on a background mesh to generate meshes suitable for simulations. The ultimate goal is to utilise the vast amount of data that is available in industry to accelerate the mesh generation stage. To obtain datasets that are suitable for training a NN, a new interpolation approach is presented to transfer the spacing from a fine mesh, where a solution is available, to a coarse background mesh. The interpolation approach is designed to ensure that the spacing on the coarse meshes is able to produce a mesh capable of capturing all the features of the original solution.

The proposed approach is also compared to a recently proposed strategy where a NN is used to predict the position and the spacing at a number of point sources [15]. The comparison is performed based on the time required to train the NNs, including the fine tuning of the hyperparameters, the size of the training dataset required to produce accurate predictions and the accuracy of the predicted spacing function. The comparison uses an example that involves the prediction of near-optimal meshes for a three dimensional wing configuration in the context of inviscid compressible flow simulations. The approach proposed in this work is finally applied to a more complex problem involving a full aircraft configuration. In this work, an optimal mesh is considered to be a mesh with the minimum number of elements that captures all the features of the solution. The proposed technique is aimed at producing near-optimal meshes in which the spacing function lies within 5% of the target spacing.

The remainder of the paper is organised as follows. In Sect. 2 a brief summary of the two strategies considered to control spacing is presented, namely the use of sources and a background mesh. Section 3 describes the strategy used to compute the required spacing to produce a mesh that captures all the features of a given solution. In Sect. 4 the strategy to compute a set of global sources that is capable of producing the required spacing to capture a number of given solutions is described. Similarly, Sect. 5 describes the approach to compute the spacing on a background mesh that is capable of representing a number of given solutions. The use of a NN to predict the


source characteristics or the spacing of a background mesh is presented in Sect. 6. Two examples are considered in Sect. 7. The first example is used to compare the approaches based on sources and the proposed approach based on a background mesh. The second example shows the potential of the proposed strategy on a large scale problem involving a full aircraft. Finally, Sect. 8 summarises the conclusions of the work that has been presented.

2 Mesh Spacing and Control
This section introduces the fundamental concepts of how mesh spacing is defined within a mesh, which are utilised when presenting the two proposed strategies to predict near-optimal meshes. Within the aerospace industry, there is a preference for using unstructured meshes for CFD simulations, owing to their ability to efficiently discretise complex geometric domains. This is due to the possibility of locally refining targeted regions without inducing a refinement in regions that are not of interest. Refinement techniques can be classified into automatic techniques, based for instance on mesh adaptivity, or techniques controlled manually based on the expertise of the user. Automatic adaptive algorithms can be used to localise the refinement only in the regions where high gradients of the solution are present. However, it is clear that if the initial mesh is not able to capture some solution features, they will not be captured even if a large number of refinement loops are undertaken. There are different methods available for the user to control the spacing function, which can be used independently or in combination with one another. These techniques include the use of point, line or triangular sources [16, 21], the use of a background structured or unstructured mesh [17, 21], the specification of the spacing at geometric entities and the refinement based on the curvature of the boundary [21]. The use of sources and a background unstructured mesh are considered here as the strategies that provide greater flexibility.

2.1 Mesh Spacing Controlled by Sources
Sources provide the ability to control the spacing desired at a localised region of the domain. A point source consists of a given location $\boldsymbol{x}$, a desired spacing $\delta_0$ and a radius of influence $r$. The spacing function induced by the source is constant, and equal to $\delta_0$, within the sphere of centre $\boldsymbol{x}$ and radius $r$. To ensure a smooth transition of the spacing outside the sphere of influence, an exponential increase of the spacing is defined by specifying a second radius, $R$, where the spacing doubles. The spacing at a distance $d$ from $\boldsymbol{x}$ is defined as


$$\delta(d) = \begin{cases} \delta_0 & \text{if } d < r, \\ \delta_0 \, e^{\ln(2)\,\frac{d-r}{R-r}} & \text{otherwise}. \end{cases} \qquad (1)$$

A line source is a natural extension of the concept of point sources. A line source is made of two point sources. To determine the spacing induced by a line source at a given point $\boldsymbol{p}$, the closest point to $\boldsymbol{p}$ on the line is found, namely $\hat{\boldsymbol{p}}$. A linear interpolation of the radii and spacings of the two points forming the line source is used to determine the radius and spacing associated with the projected point $\hat{\boldsymbol{p}}$. The spacing induced by the line source is computed by assuming that a point source is present at $\hat{\boldsymbol{p}}$ with the interpolated radius and spacing. Similarly, it is possible to extend this concept to other geometric entities, such as triangular sources. It is worth noting that when multiple sources are used to control the spacing function, the minimum of the spacings induced by all the sources is used to specify the required spacing at a given location.
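The spacing induced by point and line sources can be summarised in a short sketch. This is a minimal reading of Eq. (1) and of the projection rule described above; the data layout (tuples holding the centre, $\delta_0$, $r$ and $R$) is our own choice for illustration.

```python
import numpy as np

def point_source_spacing(p, x, delta0, r, R):
    """Spacing induced at point p by a point source (Eq. 1)."""
    d = np.linalg.norm(np.asarray(p) - np.asarray(x))
    if d < r:
        return delta0
    return delta0 * np.exp(np.log(2.0) * (d - r) / (R - r))

def line_source_spacing(p, x1, s1, x2, s2):
    """Spacing induced by a line source made of two point sources
    (positions x1, x2 with (delta0, r, R) tuples s1, s2): project p
    onto the segment, linearly interpolate the source parameters, and
    treat the projection as a point source."""
    p, x1, x2 = map(np.asarray, (p, x1, x2))
    seg = x2 - x1
    t = np.clip(np.dot(p - x1, seg) / np.dot(seg, seg), 0.0, 1.0)
    p_hat = x1 + t * seg
    delta0, r, R = (1 - t) * np.asarray(s1) + t * np.asarray(s2)
    return point_source_spacing(p, p_hat, delta0, r, R)

def spacing_at(p, point_sources, line_sources):
    """Combine sources: the minimum induced spacing wins."""
    values = [point_source_spacing(p, *s) for s in point_sources]
    values += [line_source_spacing(p, *s) for s in line_sources]
    return min(values)
```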

2.2 Mesh Spacing Controlled by a Background Mesh
Alternatively, the spacing can be controlled by using a background structured or unstructured mesh [17, 21]. In this scenario a coarse mesh that covers the whole computational domain is generated, and the spacing is specified at each node of the background mesh. To determine the spacing at any point of the domain, the element of the background mesh that contains the point is first identified. Then, a linear interpolation of the spacing values defined at the nodes of that element is employed.
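A minimal sketch of a background-mesh spacing query on a triangular mesh follows: locate the containing element and linearly interpolate the nodal spacing using barycentric coordinates. The brute-force element search is for illustration only; a spatial search structure would be used in practice.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p in triangle tri (3x2 array)."""
    a, b, c = tri
    T = np.column_stack((b - a, c - a))
    l1, l2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l1 - l2, l1, l2])

def background_spacing(p, nodes, elements, nodal_spacing):
    """Spacing at p from a triangular background mesh: find the
    containing element, then linearly interpolate the nodal values."""
    for elem in elements:
        lam = barycentric(np.asarray(p, float), nodes[elem])
        if np.all(lam >= -1e-12):   # p lies inside (or on) this triangle
            return lam @ nodal_spacing[elem]
    raise ValueError("point outside the background mesh")
```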

3 Target Spacing
It is assumed that a dataset of accurate solutions is available. The solutions might have been computed with different numerical schemes and, very often, on over-refined meshes. For this reason, this work proposes a learning procedure that is based on the solutions, rather than on the meshes that were used to compute them. However, the proposed technique can be modified to learn from existing meshes in cases where the meshes are considered to be optimal, i.e. obtained after an adaptive process or manually created by an expert, by obtaining the spacing distribution of said mesh. The first stage involves the computation of the spacing function that would provide a mesh capable of reproducing a given solution. This is done by borrowing concepts from error analysis and relating the desired spacing to the second-order derivatives of the solution, namely


$$\delta_\beta^2 \left( \sum_{i,j=1}^{N} H_{ij}\, \beta_i \beta_j \right) = K, \qquad (2)$$

where $\boldsymbol{\beta}$ is an arbitrary unit vector, $\delta_\beta$ is the spacing along the $\boldsymbol{\beta}$ direction, $\boldsymbol{H}$ is the Hessian matrix of a key variable $\sigma$ and $K$ is a user-defined constant. Here, a recovery process [25, 26] is employed to numerically evaluate the second derivatives of the selected key variable. Next, by evaluating in the direction of each eigenvector of $\boldsymbol{H}$, the optimal value of the spacing at a node is defined as

$$\delta = \min_{i=1,\dots,n} \left\{ \sqrt{\frac{K}{\lambda_i}} \right\}, \qquad (3)$$

where $\{\lambda_i\}_{i=1,\dots,n}$ denote the eigenvalues of $\boldsymbol{H}$. The discrete spacing is uniquely defined after the user specifies the scaling factor $K$. In regions where the solution is smooth, the scaling reflects the value of the mean square error that is considered acceptable. In practice, to account for the possibility of vanishing eigenvalues of $\boldsymbol{H}$, the spacing of Eq. (3) is bounded by a maximum allowable value. Similarly, to avoid excessive refinement near elements with very steep gradients (e.g., near shocks), a minimum value of the spacing is also defined by the user. At this stage a discrete representation of the spacing function is obtained; a sketch of this computation is given below. However, for each available solution the number of mesh nodes is generally different, so two strategies are considered to homogenise the data in such a way that it is suitable for training a NN. The first strategy, proposed in [15], consists of building a global set of sources that is capable of describing the spacing function of each case. The second approach, proposed here for the first time, consists of building a spacing function on a background mesh that is also suitable to describe the spacing function of each case. The two strategies are described in the next two sections.
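A sketch of the nodal spacing computation of Eqs. (2)–(3) follows. Taking the absolute value of the eigenvalues and the exact placement of the user-defined bounds are our assumptions; the text only states that minimum and maximum allowable values are enforced.

```python
import numpy as np

def target_spacing(H, K, delta_min, delta_max):
    """Optimal isotropic spacing at a node from the Hessian H of the
    key variable: delta = min_i sqrt(K / lambda_i), bounded to guard
    against vanishing eigenvalues and steep gradients (e.g. shocks)."""
    lam = np.abs(np.linalg.eigvalsh(H))      # eigenvalues of symmetric H
    lam = np.maximum(lam, K / delta_max**2)  # caps the spacing at delta_max
    delta = np.sqrt(K / lam).min()
    return max(delta, delta_min)             # floor near steep gradients
```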

4 Spacing Description Using Sources
The main idea is to construct a set of sources that induce a spacing function that closely represents the discrete spacing obtained using the strategy described in the previous section. The full details and the algorithmic implementation are given in [15].


4.1 Generating Point Sources for One Solution
The process starts by grouping points based on the associated spacing. Point sources are created at the centre of a group of points, with a radius that covers all the points in the group. The strategy developed guarantees that the spacing required at every node of the given mesh is represented by at least one point source. To simplify the implementation, this work assumes that the second radius of influence of a source is always double the first radius, namely $R = 2r$. Two values of the spacing are considered close enough if they differ by at most 5% of the spacing at the node of interest. The process for creating a point source ends when the spacing at a surrounding layer is larger than the spacing that the point source induces at a distance equal to $R$. A simplified sketch of this grouping is given after this paragraph. Figure 1 shows the result of creating sources to represent the spacing required to capture a given solution. The solution corresponds to an actual two dimensional inviscid transonic flow simulation. It can be observed how the sources with smaller spacing (blue colour) are concentrated near the regions with steep gradients.
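The following is a deliberately simplified greedy sketch of this grouping; the actual layer-by-layer algorithm is detailed in [15], and the proximity cut-off used here is purely illustrative.

```python
import numpy as np

def build_point_sources(points, spacing, tol=0.05):
    """Greedy sketch: starting from the finest unclaimed node, group
    nearby nodes whose spacing agrees within tol, seed a source at the
    group centroid with a radius covering the group, and set R = 2r."""
    order = np.argsort(spacing)          # process finest spacings first
    claimed = np.zeros(len(points), bool)
    sources = []
    for i in order:
        if claimed[i]:
            continue
        close = (np.abs(spacing - spacing[i]) <= tol * spacing[i]) & ~claimed
        d = np.linalg.norm(points - points[i], axis=1)
        # Illustrative proximity cut-off: keep only nodes near the seed.
        group = close & (d <= 10.0 * spacing[i])
        centre = points[group].mean(axis=0)
        r = np.linalg.norm(points[group] - centre, axis=1).max() + spacing[i]
        sources.append((centre, spacing[i], r, 2.0 * r))
        claimed |= group
    return sources
```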

4.2 Generating Global Sources for a Set of Solutions
When the process described in the previous section is applied to a set of different solutions, the number of sources obtained is, in general, very different. As the objective is to utilise this data to train a NN, a procedure to obtain the same number of sources for the whole set of solutions is devised. The process starts by initialising the set of global sources to be the set of sources of the first case and creating a mapping that relates global sources to the local sources of each case. To ensure an efficient implementation, the sources of all cases are inserted in an alternating digital tree (ADT). The process then considers each one of the remaining cases sequentially.

Fig. 1 Transonic flow CFD solution (a) and point sources (b) to create a spacing function capable of capturing the solution for a NACA1206


Fig. 2 Transonic flow CFD solution (a) and point sources (b) to create a spacing function capable of capturing the solution for a NACA4324

Fig. 3 Global sources for (a) the case of Fig. 1 and for (b) the case of Fig. 2

The ADT is employed to identify global sources that are in close proximity to the unprocessed local sources. When no global source is found in close proximity to a local source, a new source is added to the global list and the mapping between local and global sources is updated. In contrast, when a global source is in close proximity to a local source, no new source is added and only the mapping is updated. A sketch of this merging step is given below. After the set of global sources is produced, it is customised to accurately represent the spacing function associated with each solution. Figure 2 shows the result of creating sources to represent the spacing required to capture a solution different to the one shown in Fig. 1, i.e. a different geometry and different flow conditions. It can be observed that the number of sources significantly differs depending on the flow conditions and geometry. By using the process briefly described in this section, a set of global sources is created for each case, as depicted in Fig. 3. Both sets of global sources are different, but they have the same number of sources, which is necessary to ensure that this data can be used to train a NN.
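The merging step can be sketched as follows, using scipy's cKDTree as a stand-in for the alternating digital tree of the paper; the matching distance `proximity` and the per-insertion tree rebuild are simplifications for clarity.

```python
from scipy.spatial import cKDTree  # stand-in for the ADT in the paper

def merge_into_global(global_sources, local_sources, proximity):
    """Map each local source (centre, delta0, r, R) onto a nearby
    global source if one exists within `proximity`; otherwise append
    it to the global list. Returns the global list and the mapping."""
    mapping = []
    for centre, delta0, r, R in local_sources:
        if global_sources:
            # Rebuilt per insertion for clarity; the paper inserts all
            # sources into the tree once for efficiency.
            tree = cKDTree([g[0] for g in global_sources])
            dist, idx = tree.query(centre)
            if dist <= proximity:
                mapping.append(idx)   # reuse the nearby global source
                continue
        global_sources.append((centre, delta0, r, R))
        mapping.append(len(global_sources) - 1)
    return global_sources, mapping
```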


5 Spacing Description Using a Background Mesh
A novel approach to build a spacing function that ensures uniformity of the data, and therefore its possible use for training a NN, is presented here. The process consists of creating a coarse background mesh and devising a strategy to compute the spacing at each node of the background mesh that induces the required spacing to capture a given solution. First, the spacing is computed on the mesh where the solution is provided. Figure 4 shows a solution and the spacing function that will provide the required mesh to capture the solution. The implementation of this strategy introduces several advantages when compared to the existing strategy of using sources. First, the computation of the spacing at the nodes of the background mesh is simpler than the computation of the sources, as it only requires interpolating the spacing from a fine mesh to a coarse mesh. This is described in detail in this section. Second, uniformity of the data is guaranteed if the topology of the background mesh is unchanged. For the examples considered in this work, with design parameters not affecting the geometry of the domain, a fixed background mesh can be used for all cases. For more complex scenarios, a mesh morphing algorithm would be required to ensure that the same background mesh can be used for all cases. This is out of the scope of the current work. The fixed background mesh is produced using a combination of curvature control and minimum spacing defined at each individual surface of the geometry; an octree is then used to propagate the spacing into the domain.

Fig. 4 A solution (a) and its corresponding calculated spacing function (b) that describes the optimal spacing suitable for capturing the solution


5.1 Interpolating the Spacing on a Background Mesh
As mentioned above, the proposed strategy to use a background mesh requires interpolating the discrete spacing from a mesh where a solution is available to a coarse background mesh. Interpolating a field from one mesh to another is a relatively easy task, but special care must be taken when the quantity to be interpolated is the spacing. If a naïve interpolation is employed, many features of the solution can be left unresolved by the spacing function embedded in the background mesh. To illustrate the proposed strategy, let us consider the scenario of Fig. 5. The extract of the mesh with continuous red edges corresponds to the mesh where the solution is available and where the spacing required at the nodes has been computed, as described in Sect. 3. The extract of the mesh with dashed black edges corresponds to the background mesh. The objective is to obtain the spacing at the nodes of the background mesh, $\boldsymbol{x}_a$, $\boldsymbol{x}_b$ and $\boldsymbol{x}_c$ in Fig. 5, so that the spacing at the nodes of the fine mesh can be accurately reproduced. A naïve interpolation approach would consider the element of the fine mesh containing each node of the coarse mesh and perform a linear interpolation of the nodal values. However, this makes very limited use of the rich information available in the fine mesh. If values of the spacing are interpolated in this way, it is possible to obtain very large values of the spacing at the nodes of the background mesh even if very small values are present in the vicinity of those nodes. Referring to the example of Fig. 5, if for instance the spacing is very small at nodes $\boldsymbol{x}_6$, $\boldsymbol{x}_7$, $\boldsymbol{x}_8$ and $\boldsymbol{x}_9$, but very large at the remaining nodes, a naïve interpolation will compute a large value of the spacing at the nodes $\boldsymbol{x}_a$, $\boldsymbol{x}_b$ and $\boldsymbol{x}_c$ of the background mesh. This will induce a spacing function that is not suitable to capture the initial solution. To avoid this problem, a different strategy is proposed to interpolate the spacing. For each element of the background mesh, the list of nodes of the fine mesh that are contained in the background element is identified. In the example of Fig. 5 all


Fig. 5 Detail of two triangular meshes: the fine mesh where the solution is computed, denoted by continuous red edges, and a coarser background mesh, denoted by dashed black edges. The green circles denote the nodes of the fine mesh contained in one element of the background mesh. The blue triangles denote the nodes of the background mesh where the interpolated element spacing is to be computed


Fig. 6 Illustrative example of two possible interpolations of the spacing onto a background mesh: (a) original spacing, (b) naïve interpolation, (c) proposed interpolation

Fig. 7 Spacing function on a background mesh after interpolating the spacing of Fig. 4b

the numbered nodes, from $\boldsymbol{x}_1$ to $\boldsymbol{x}_{13}$, are identified. A very conservative approach is then adopted in this work, which is to define the spacing at the element nodes of the background mesh as the minimum of the spacing of all the nodes of the fine mesh contained in the element. This strategy ensures that the resulting spacing is certainly able to capture the required solution. Other strategies that could be explored include the use of the arithmetic mean, the harmonic mean, or a weighted arithmetic mean. The process is finalised by assigning to each node of the background mesh the minimum of the spacings computed for each element sharing this node. A sketch of this procedure is given below. An example is shown in Fig. 6 to illustrate the process. The spacing function obtained on a reference mesh is transferred to a background mesh using a naïve interpolation approach and the proposed approach. It can be clearly observed that the naïve approach does not produce an accurate representation of the original spacing function. When used for mesh generation, this background spacing will lead to a mesh that is not capable of representing the features of the target solution. In contrast, with the proposed interpolation, a conservative approach is favoured and the resulting spacing will lead to a finer mesh, ensuring that all the features of the target solution are captured when a mesh is generated with this spacing. Figure 7 shows the interpolated spacing on a background mesh for the example of Fig. 4. The example clearly illustrates the ability of the proposed interpolation strategy to capture the required spacing on a coarse background mesh.
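A sketch of the conservative interpolation follows. The `locate` callback, which returns the background element containing a fine-mesh node, is assumed to exist (e.g., backed by a tree search); elements that contain no fine node keep an infinite value and would receive the maximum allowable spacing in practice.

```python
import numpy as np

def conservative_spacing(bg_nodes, bg_elems, fine_pts, fine_spacing,
                         locate):
    """Transfer spacing from a fine mesh to a coarse background mesh.
    Each background element takes the minimum spacing over the fine
    nodes it contains; each background node then takes the minimum
    over its adjacent elements."""
    elem_min = np.full(len(bg_elems), np.inf)
    for p, s in zip(fine_pts, fine_spacing):
        e = locate(p)                      # containing background element
        elem_min[e] = min(elem_min[e], s)
    node_spacing = np.full(len(bg_nodes), np.inf)
    for e, elem in enumerate(bg_elems):
        node_spacing[elem] = np.minimum(node_spacing[elem], elem_min[e])
    return node_spacing
```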


It is worth noting that the proposed approach is designed to produce a spacing capable of capturing all the required solution features. However, when the background mesh is excessively coarse, it can produce a spacing function that leads to an over-refined mesh.

6 Using a Neural Network to Predict the Spacing
The two strategies presented above are designed to preprocess a dataset of available solutions and produce a dataset suitable for training a neural network. The inputs of the neural network are design parameters (e.g., boundary conditions, geometry). The examples considered in this work involve inviscid compressible flows in three dimensions, and the design parameters are the flow conditions, namely the free-stream Mach number and the angle of attack. For the strategy based on sources, the output consists of the position (three coordinates), the spacing and the radius of the global set of sources. For the second approach, based on a background mesh, the output is simply the spacing at the nodes of the background mesh. It is worth noting that the use of a background mesh implies a reduction of the number of outputs by a factor of five, when compared to the strategy based on sources. In general, the values of the spacing, in both approaches, and the radius, in the first approach, vary by more than two orders of magnitude. To facilitate the training of the NN, these outputs are scaled logarithmically. The scaling not only prevents a bias towards larger values but also prevents the prediction of unrealistic negative values. The type of NN employed in this work is a standard multi-layer perceptron, extensively described in the literature [3, 11]. In terms of the implementation, TensorFlow 2.7.0 [1] is employed to construct the NNs. To minimise the influence of the random initialisation of the weights, each training is performed five times by varying the initial guess used in the optimisation. The maximum number of iterations allowed for the optimisation is 500, and the process is stopped either when this number of iterations is reached or when the objective cost function does not decrease during 50 consecutive iterations. Preliminary numerical experimentation on the influence of the activation function on the accuracy of the NN showed that the sigmoid activation function tended to produce more accurate results than other classical activation functions. Therefore, for each NN produced, the sigmoid function was employed for all the hidden layers, with a linear function being used on the output layer. Respectively, these activation functions are given by

$$S(x) = \frac{1}{1 + e^{-x}} \quad \text{and} \quad L(x) = x. \qquad (4)$$


To train the NNs, the cost function used is the mean square error (MSE), and the optimisation function used to minimise the cost is the ADAM optimiser [14], with a learning rate of 0.001. As usual in this context, the hyperparameters of the NN are tuned to ensure that the best architecture is employed. In [15] the authors demonstrate that, even when performing a fine tuning of the hyperparameters and repeating the training five times, the resulting approach is more efficient than the usual practice in industry of generating an over-refined mesh to perform the simulations for varying flow conditions using a fixed grid. The design of the NNs considered in this work requires selecting an appropriate number of layers, number of neurons per layer and activation functions. For each numerical example in Sect. 7, the number of layers $N_l$ and the number of neurons in each layer $N_n$ are varied in the pursuit of finding the optimal hyperparameter configuration. The hyperparameter variation is defined by a grid using the ranges $N_l = [1, 2, \dots, 5, 6]$ and $N_n = [25, 50, \dots, 225, 250]$. The accuracy of the predictions is measured using the statistical R$^2$ measure [10]. To better analyse the results, when the approach using sources is considered, the R$^2$ measure is reported independently for the five source characteristics (i.e., the three coordinates of the source, the spacing and the radius). A sketch of this training setup is given below.
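The description above maps onto a short TensorFlow sketch. The synthetic arrays stand in for the real datasets, and the layer counts shown are just one point of the hyperparameter grid; only the ingredients explicitly stated in the text (sigmoid hidden layers, linear output, MSE loss, Adam with learning rate 0.001, at most 500 iterations with a patience of 50, logarithmic scaling of the outputs) are taken from the paper.

```python
import numpy as np
import tensorflow as tf

def build_mlp(n_inputs, n_outputs, n_layers, n_neurons):
    """MLP as described in the text: sigmoid hidden layers, a linear
    output layer, MSE loss and the Adam optimiser (learning rate 0.001)."""
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(n_neurons, activation="sigmoid",
                                    input_shape=(n_inputs,)))
    for _ in range(n_layers - 1):
        model.add(tf.keras.layers.Dense(n_neurons, activation="sigmoid"))
    model.add(tf.keras.layers.Dense(n_outputs, activation="linear"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="mse")
    return model

# Synthetic stand-ins for the real data: inputs are (Mach, alpha) pairs,
# outputs are spacing values at the 14,179 background-mesh nodes of
# the first example in Sect. 7.
x_train = np.random.rand(160, 2)
y_train = 10.0 ** np.random.uniform(-2.0, 1.0, size=(160, 14179))

# The spacing spans more than two orders of magnitude: scale it
# logarithmically before training, as described in the text.
model = build_mlp(n_inputs=2, n_outputs=14179, n_layers=3, n_neurons=100)
early = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=50)
model.fit(x_train, np.log10(y_train), epochs=500,
          callbacks=[early], verbose=0)

# Predictions are inverted back to physical spacing values.
delta_pred = 10.0 ** model.predict(np.array([[0.6, 5.0]]))
```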

6.1 Spacing Prediction Using Sources
After the NN is trained, it is used to predict the characteristics of the global sources for cases not seen during the training stage. It is possible to directly use the predicted global sources to define the mesh spacing function that is required to generate a near-optimal mesh. However, due to the use of a global set of sources, it is expected that the predicted sources for a new case contain redundant information. For this reason an extra step is required with this technique to minimise the number of queries that the mesh generator requires to define the spacing at a given point. The process, described in detail in [15], involves removing sources whose associated spacing function can be described by other sources. In addition, an attempt is made to reduce the number of sources by merging point sources into line sources when possible. Figure 8 shows the result of the process used to reduce a predicted global set of sources.

6.2 Spacing Prediction Using a Background Mesh In this case, once the NN is trained it can be used to predict the spacing at the nodes of the background mesh. With this approach, there is no need to perform any further processing of the predicted data and it can be directly used by a mesh generator to obtain the near-optimal mesh for an unseen case.


Fig. 8 Predicted global sources for an unseen case (a) and the resulting sources after removing redundant sources (b)

7 Numerical Examples
This section presents a numerical example to demonstrate the potential of both approaches and to compare their performance. The example involves the prediction of near-optimal meshes for three dimensional inviscid compressible flow simulations over a wing for varying flow conditions. A second numerical example is presented to show the ability of the best-performing approach from the first example in a more realistic scenario involving the inviscid compressible flow around a full aircraft configuration. In both examples the variation of the flow conditions considered induces a significant variation of the solution and includes subsonic and transonic flows. All the CFD simulations used in this work were conducted using the in-house flow solver FLITE [20].

7.1 Near-Optimal Mesh Predictions on the ONERA M6 Wing
The ONERA M6 wing [18] is considered for this example, and the flow conditions are described by the free-stream Mach number, $M_\infty$, and the angle of attack, $\alpha$. The range used for the two design parameters, $M_\infty \in [0.3, 0.9]$ and $\alpha \in [0^\circ, 12^\circ]$, leads to subsonic and transonic flows. Therefore, the mesh requirements for different cases are substantially different, posing a challenge in the prediction of the near-optimal mesh for a given set of parameters. The variation of the solution that is induced by the variation of the parameters is illustrated in Fig. 9, showing the pressure coefficient, $C_p$, for two flow conditions. For the subsonic case, with $M_\infty = 0.41$ and $\alpha = 8.90^\circ$, the solution requires refinement only near the leading and trailing edges. In contrast, for the transonic case, with $M_\infty = 0.79$ and $\alpha = 5.39^\circ$, the mesh should also be refined to capture the $\lambda$-shock on the top surface. The simulations were conducted using tetrahedral meshes with approximately 1.3 M elements and 230 K nodes.


Fig. 9 Pressure coefficient, $C_p$, for the ONERA M6 wing and for two flow conditions: (a) $M_\infty = 0.41$, $\alpha = 8.90^\circ$; (b) $M_\infty = 0.79$, $\alpha = 5.39^\circ$

For the purpose of this study, training and testing data sets were generated by employing Halton sampling [12] in the parametric space, as sketched below. The training set comprises $N_{tr} = 160$ cases, whereas the test set is made of $N_{tst} = 90$ cases. To ensure that the conclusions are not biased by an incorrect use of the NN for extrapolation, the range of values used to generate the test set is slightly smaller than the range used to generate the training data. The approach using sources required between 2,142 and 5,593 sources to represent the spacing of each training case. When combined, the resulting number of global sources is 19,345. This means that the number of outputs of the NN to be trained is almost 100 K. For the second approach a background mesh with 14,179 nodes is employed, meaning that the NN to be trained has almost seven times fewer outputs when compared to the approach that uses sources. After tuning the NN that best predicts the spacing, both approaches can be compared. For one of the 90 unseen cases, Fig. 10 shows the regression plot for the spacing for both approaches. The results indicate a better performance of the approach using a background mesh for this particular unseen case. To better compare the accuracy, the minimum R$^2$ over each of the 90 unseen test cases is taken and compared in Fig. 11, for an increasing number of training cases. The results show that the strategy that uses sources leads to a very accurate prediction of the location of the sources. However, predicting the spacing and the radius of influence is much more challenging. To achieve an R$^2$ of 90 in all the outputs, the whole training data set, with 160 cases, must be considered. In contrast, for the strategy that uses a background mesh, 10 training cases are enough to provide an R$^2$ above 90. By comparing the results, it is clear that the approach that uses a background mesh is significantly more efficient, as with 10 training cases the results are as accurate as with the approach that uses sources employing 160 training cases. It is also worth remarking that the approach that uses sources requires the training of multiple NNs, whereas only one NN is to be trained with the approach proposed here. In this example the tuning and training of the NN for the proposed approach is almost four times faster than for the approach using sources.
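For reference, generating such a parameter sample is straightforward with scipy's quasi-Monte Carlo module. This sketch simply splits one Halton sequence, whereas the paper draws the test set from a slightly narrower range to avoid extrapolation.

```python
from scipy.stats import qmc

# Halton sampling of the (Mach, alpha) space; the bounds follow the
# ONERA M6 example above.
sampler = qmc.Halton(d=2, scramble=False)
cases = qmc.scale(sampler.random(n=160 + 90),
                  l_bounds=[0.3, 0.0], u_bounds=[0.9, 12.0])
train_cases, test_cases = cases[:160], cases[160:]
```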


Fig. 10 The regression plots for the spacing, $\delta_0$, for the approach using sources (a) and the approach using a background mesh (b)

Fig. 11 ONERA M6: Minimum R$^2$ for the predicted outputs as a function of the number of training cases for the two methods: (a) sources, (b) background mesh

To further analyse the performance of the two approaches, the predicted spacing function through the domain is compared against the target spacing function for the two methods. At the centroid of each element of a target mesh, and for all test cases, the spacing induced by the two strategies is compared to the target spacing. Figure 12 shows a histogram of the ratio between the predicted and target spacing for both methods. The results correspond to both approaches using all the available training data. Red bars are used to depict the minimum and maximum values for each bin in the histogram, and the standard deviation from the mean is represented by the orange bars. A value of the ratio of spacings between 1/1.05 and 1.05 is considered accurate enough to generate a mesh that is capable of resolving all the required flow features. Values higher than 1.05 indicate regions where the predicted spacing is larger than the target spacing and, analogously, values below 1/1.05 indicate regions where the NN prediction will induce more refinement than required. A sketch of this accuracy measure is given below.
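The accuracy measure used in the histograms reduces to a few lines; this sketch reports the fraction of evaluation points inside the accurate band and the fractions on either side of it.

```python
import numpy as np

def spacing_accuracy(predicted, target, band=1.05):
    """Fraction of evaluation points whose predicted/target spacing
    ratio falls inside the accurate band [1/band, band], plus the
    fractions that are too coarse and too fine."""
    ratio = np.asarray(predicted) / np.asarray(target)
    accurate = np.mean((ratio >= 1.0 / band) & (ratio <= band))
    coarser = np.mean(ratio > band)       # predicted spacing too large
    finer = np.mean(ratio < 1.0 / band)   # over-refinement (conservative)
    return accurate, coarser, finer
```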


Fig. 12 ONERA M6: Histogram of the ratio between the predicted and target spacing for the two strategies

The results in Fig. 12 clearly illustrate the superiority of the strategy proposed in this work, based on a background mesh. The strategy based on sources provides approximately 70% of the elements with an appropriate spacing, whereas the approach based on a background grid accurately predicts the spacing for almost 95% of the elements. In addition, the worst performing case for the approach using sources is less accurate than the worst case for the approach that uses a background mesh. Finally, it is worth mentioning that when the background mesh approach is less accurate, it tends to produce a smaller spacing, which is preferred to a larger spacing, as this will ensure that all solution features are resolved with the predicted near-optimal mesh. This tendency to over-refine can be explained by the conservative interpolation scheme that was introduced in Sect. 5.1. Given the high accuracy observed in Fig. 11 for the approach that uses a background mesh with very few training cases, Fig. 13 shows the histogram of the ratio between predicted and target spacing for an increasing number of training cases. The results corroborate the conclusions obtained from the R$^2$ measure in Fig. 11 and show that, with a significantly smaller number of training cases, the approach using a background mesh not only produces an R$^2$ comparable to the approach with sources using all training data, but the predicted spacing is also as accurate. To illustrate the potential of the strategies being compared, the trained NNs are used to predict the spacing function for unseen cases, and near-optimal meshes are generated and compared to the target meshes. It is worth remarking that the approach that uses sources undergoes the extra processing step to reduce and merge sources, as mentioned in Sect. 6.1. Figure 14 shows two target meshes and the near-optimal mesh prediction obtained with the strategy based on sources for two test cases not seen during the training of the NN. The comparison between target and predicted meshes using the strategy


Fig. 13 ONERA M6: Histogram of the ratio between the predicted and target spacing for the strategy using a background mesh for an increasing number of training cases

Fig. 14 Target (top row) and predicted (bottom row) meshes using the strategy based on sources: (a), (c) $M_\infty = 0.41$, $\alpha = 8.90^\circ$; (b), (d) $M_\infty = 0.79$, $\alpha = 5.39^\circ$

based on a background mesh is shown in Fig. 15. It is worth noting that the target meshes for the two strategies considered are slightly different due to the different definition of the target spacing function. The results visually show the superior accuracy of the proposed approach, based on a background mesh. Not only do the meshes obtained with the predicted spacing functions resemble the target more than the meshes predicted with sources, but the spacing gradation is also visually smoother with the approach based on a background mesh. Further numerical experiments, not reported here for brevity, demonstrate that the CFD calculations on the near-optimal predicted meshes result in accurate CFD simulations. More precisely, the aerodynamic quantities of interest (e.g., lift and drag) are obtained with the accuracy required by the aerospace industry.


Fig. 15 Target (top row) and predicted (bottom row) meshes using the strategy based on a background mesh: (a), (c) $M_\infty = 0.41$, $\alpha = 8.90^\circ$; (b), (d) $M_\infty = 0.79$, $\alpha = 5.39^\circ$

7.2 Near-Optimal Mesh Predictions on the Falcon Aircraft
After demonstrating the superiority of the approach based on a background mesh, this section considers an example with a more complex and realistic geometry to show the potential of this approach. Halton sequencing of the two input parameters is used to generate a dataset consisting of $N_{tr} = 56$ training cases and $N_{tst} = 14$ testing cases. The range used for the parameters is $M_\infty \in [0.35, 0.8]$ and $\alpha \in [-4^\circ, 10^\circ]$, leading, again, to subsonic and transonic flow regimes. For each training and test case, the CFD solution is obtained using FLITE [20] on an unstructured tetrahedral mesh consisting of 6 M elements and 1 M nodes. The distribution of the pressure coefficient for two test cases is shown in Fig. 16. The figure shows the different flow features that are induced by a change in the design parameters. To represent the spacing function, the spacing is first determined at each node of the mesh where the solution was computed. A coarse unstructured background

Fig. 16 Falcon aircraft: Pressure coefficient, $C_p$, for two different flow conditions: (a) $M_\infty = 0.41$, $\alpha = 4.50^\circ$; (b) $M_\infty = 0.71$, $\alpha = 8.00^\circ$

Fig. 17 Minimum R$^2$ for the characteristics as a function of the number of training cases

mesh is then generated, containing approximately 30 K tetrahedral elements, and the spacing is interpolated onto the background mesh using the technique described in Sect. 5.1. A NN is then trained and the hyperparameters are tuned, following the procedure described in the previous example. After the training is performed, the spacing is predicted for the 14 unseen test cases and the accuracy of the predictions is evaluated using the R$^2$ measure. Figure 17 shows the minimum R$^2$ as a function of the number of training cases. The results show that, even for this more complex example, the behaviour is almost identical to the one observed for the previous geometry. With fewer than 10 training cases the predicted spacing achieves an excellent accuracy, with the value of R$^2$ above 96%. If the total set of available training cases is considered, the value of R$^2$ reaches almost 100. To further assess the accuracy of the predictions, the ratio between predicted and target spacing is evaluated to quantify the performance of the NN in producing new meshes for unseen flight conditions. Figure 18 shows the histogram of the ratio between predicted and target spacing at the nodes of the background mesh. The minimum and maximum values for each bin in the histogram are depicted with red error bars, whereas the orange bar represents the standard deviation from the mean. A value of the ratio between 1/1.05 and 1.05 is considered sufficiently accurate to produce a mesh able to capture the targeted flow features. The histogram confirms the accuracy of the predictions, with the middle bin containing more than 90% of the elements. The trained NN is next used to predict the spacing on the background mesh, from which the corresponding near-optimal mesh is generated and compared with the target mesh. Figure 19 displays the target and ML-produced meshes for the two unseen examples outlined in Fig. 16. The results clearly show the ability of the proposed technique, based on a background mesh, to automatically produce


Fig. 18 Falcon aircraft: Histogram of the ratio between the predicted and target spacing

Fig. 19 Falcon aircraft: Target (top row) and predicted (bottom row) meshes for two flow conditions: (a), (c) $M_\infty = 0.41$, $\alpha = 4.50^\circ$; (b), (d) $M_\infty = 0.71$, $\alpha = 8.00^\circ$

meshes that are locally refined near the relevant regions. For the subsonic case, the NN has appropriately refined the leading and trailing edges of the main wing, the vertical and horizontal stabilisers, as well as the entry and exit of the jet engine. Similarly, for the transonic case, those features are also appropriately captured; in addition, the NN has successfully predicted the presence and location of a shock along the main wing and consequently appropriately refined this region.


8 Concluding Remarks
A novel technique to predict the required spacing for a simulation has been presented. The approach is based on the use of a background mesh and a NN to predict the required spacing at each node of the background mesh. Using available data from previous simulations, the required spacing to capture a given solution is computed on the available mesh. Then, a method to interpolate the spacing onto the background mesh is devised. The approach is conservative and avoids the problems that a naïve interpolation would induce. Once the available data is processed, a NN is trained where the inputs are design parameters (i.e., flow conditions in the examples considered here) and the output is the required spacing on the background mesh. When the spacing is available, a standard mesh generator can be used to obtain the near-optimal mesh suitable for a new simulation. The strategy has been compared to a recently proposed approach in which a NN is used to predict the position, strength and radius of influence of a set of sources. The results show that the proposed approach is much more efficient. First, it requires significantly less training data to provide the same accuracy. Second, the NNs to be trained are significantly smaller, due to the need to predict only the spacing at the nodes of the background mesh. In addition, it does not require a complex processing of the available data to create a set of global sources. The proposed approach has been applied to two examples relevant to the aerospace industry. Flow conditions were considered as the design parameters, and three dimensional examples showed the potential of the proposed approach in dealing with large scale problems. Future work will include the extension of this approach to deal with geometric parameters and the ability to predict anisotropy in the near-optimal meshes.

References
1. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al.: TensorFlow: a system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283 (2016)
2. Alfonzetti, S., Coco, S., Cavalieri, S., Malgeri, M.: Automatic mesh generation by the let-it-grow neural network. IEEE Transactions on Magnetics 32(3), 1349–1352 (1996)
3. Balla, K., Sevilla, R., Hassan, O., Morgan, K.: An application of neural networks to the prediction of aerodynamic coefficients of aerofoils and wings. Applied Mathematical Modelling 96, 456–479 (2021)
4. Bohn, J., Feischl, M.: Recurrent neural networks as optimal mesh refinement strategies. Computers & Mathematics with Applications 97, 61–76 (2021)
5. Chedid, R., Najjar, N.: Automatic finite-element mesh generation using artificial neural networks—part I: Prediction of mesh density. IEEE Transactions on Magnetics 32(5), 5173–5178 (1996)
6. Chen, G., Fidkowski, K.: Output-based error estimation and mesh adaptation using convolutional neural networks: Application to a scalar advection-diffusion problem. In: AIAA Scitech 2020 Forum, p. 1143 (2020)


7. Dawes, W., Dhanasekaran, P., Demargne, A., Kellar, W., Savill, A.: Reducing bottlenecks in the CAD-to-mesh-to-solution cycle time to allow CFD to participate in design. Journal of Turbomachinery 123(3), 552–557 (2001)
8. Dyck, D., Lowther, D., McFee, S.: Determining an approximate finite element mesh density using neural network techniques. IEEE Transactions on Magnetics 28(2), 1767–1770 (1992)
9. George, P.L., Borouchaki, H., Alauzet, F., Laug, P., Loseille, A., Marcum, D., Maréchal, L.: Mesh generation and mesh adaptivity: Theory and techniques. In: E. Stein, R. de Borst, T.J.R. Hughes (eds.) Encyclopedia of Computational Mechanics Second Edition, vol. Part 1 Fundamentals, chap. 7. John Wiley & Sons, Ltd., Chichester (2017)
10. Glantz, S.A., Slinker, B.K.: Primer of Applied Regression & Analysis of Variance. McGraw-Hill, Inc., New York (2001)
11. Hagan, M.T., Demuth, H.B., Beale, M.: Neural Network Design. PWS Publishing Co. (1997)
12. Halton, J.H.: Algorithm 247: Radical-inverse quasi-random point sequence. Communications of the ACM 7(12), 701–702 (1964)
13. Karman, S.L., Wyman, N., Steinbrenner, J.P.: Mesh generation challenges: A commercial software perspective. In: 23rd AIAA Computational Fluid Dynamics Conference, p. 3790 (2017)
14. Kingma, D.P., Ba, J.: ADAM: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
15. Lock, C., Hassan, O., Sevilla, R., Jones, J.: Meshing using neural networks for improving the efficiency of computer modelling. Engineering with Computers (2023). https://doi.org/10.1007/s00366-023-01812-z
16. Löhner, R.: Applied Computational Fluid Dynamics Techniques: An Introduction Based on Finite Element Methods. John Wiley & Sons (2008)
17. Peraire, J., Peiro, J., Morgan, K.: Adaptive remeshing for three-dimensional compressible flow computations. Journal of Computational Physics 103(2), 269–285 (1992)
18. Schmitt, V.: Pressure distributions on the ONERA M6-wing at transonic Mach numbers, experimental data base for computer program assessment. AGARD AR-138 (1979)
19. Slotnick, J.P., Khodadoust, A., Alonso, J., Darmofal, D., Gropp, W., Lurie, E., Mavriplis, D.J.: CFD vision 2030 study: a path to revolutionary computational aerosciences. Tech. rep. (2014)
20. Sørensen, K., Hassan, O., Morgan, K., Weatherill, N.: A multigrid accelerated hybrid unstructured mesh method for 3D compressible turbulent flow. Computational Mechanics 31(1–2), 101–114 (2003)
21. Thompson, J.F., Soni, B.K., Weatherill, N.P.: Handbook of Grid Generation. CRC Press (1998)
22. Yang, J., Dzanic, T., Petersen, B., Kudo, J., Mittal, K., Tomov, V., Camier, J.S., Zhao, T., Zha, H., Kolev, T., et al.: Reinforcement learning for adaptive mesh refinement. In: International Conference on Learning Representations (2022)
23. Zhang, Z., Jimack, P.K., Wang, H.: MeshingNet3D: Efficient generation of adapted tetrahedral meshes for computational mechanics. Advances in Engineering Software 157, 103021 (2021)
24. Zhang, Z., Wang, Y., Jimack, P.K., Wang, H.: MeshingNet: A new mesh generation method based on deep learning. In: International Conference on Computational Science, pp. 186–198. Springer (2020)
25. Zienkiewicz, O.C., Zhu, J.Z.: The superconvergent patch recovery and a posteriori error estimates. Part 1: The recovery technique. International Journal for Numerical Methods in Engineering 33(7), 1331–1364 (1992)
26. Zienkiewicz, O.C., Zhu, J.Z.: The superconvergent patch recovery and a posteriori error estimates. Part 2: Error estimates and adaptivity. International Journal for Numerical Methods in Engineering 33(7), 1365–1382 (1992)

Mesh Generation for Fluid Applications

Block-Structured Quad Meshing for Supersonic Flow Simulations Claire Roche, Jérôme Breil, Thierry Hocquellet, and Franck Ledoux

1 Introduction Mesh generation is a critical component of a computational physics based analysis process. The mesh used for a simulation has a considerable impact on the quality of the solution, the stability, and the resources expended to complete the simulations. In this work, we consider the specific field of Computational Fluid Dynamics (CFD) and more precisely supersonic flow simulations. According to Chawner et al. [1], multiblock structured meshes provide the most accurate solutions for CFD. This is among the most popular meshing techniques for flow simulation [2]. But the generation of such meshes is very challenging and time-consuming for high-skilled engineers who can spend weeks or months to generate the adequate mesh using complex interactive tools. It is considered as one of the most time consuming step in the CFD process [1, 3]. The context of our work is the atmospheric (re)entry of a vehicle that can be a spacecraft (see Fig. 1 for an example). The geometric domain . we consider here is a sphere that surrounds the vehicle and our final goal is to pave the path C. Roche (B) · J. Breil · T. Hocquellet CEA-CESTA, Le Barp, France e-mail: [email protected] J. Breil e-mail: [email protected] T. Hocquellet e-mail: [email protected] C. Roche · F. Ledoux LiHPC, CEA, Paris-Saclay University, Paris, France e-mail: [email protected] F. Ledoux CEA, DAM, DIF, 91297 Arpajon, France © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Ruiz-Gironés et al. (eds.), SIAM International Meshing Roundtable 2023, Lecture Notes in Computational Science and Engineering 147, https://doi.org/10.1007/978-3-031-40594-5_7

139

140

C. Roche et al.

(a) Triangulation

(b) Distance Field

(c) Vector Field

(d) Blocks

(e) Mesh

(f) Results

Fig. 1 The main stages of our approach. Starting from a triangulation of the domain (a), we first generate and combine distance fields (b) and build a vector field that ensure wall orthogonality and the alignment with the angle of attack (c). Using those fields, we generate curved blocks (d) and a final quad mesh where the element size is carefully controlled in the boundary layer (e). Numerical simulation can then be launched (f)

to automatically generate adequate block-structured meshes for supersonic flow simulations in 2D and 3D. In this paper we focus on the 2D case, with the constraint that the different choices of the proposed solution carry no specific restriction preventing an extension to 3D. Dealing with supersonic flow simulation requires considering in the meshing process the geometrical shape of Ω, but most importantly several simulation parameters (difference between the vehicle front and back, boundary layers, angle of attack) that have a strong impact on the simulation results. With these constraints in mind, we adapt current quadrilateral meshing techniques. Quad meshing has been a well-studied domain for many years. While the problem can be globally considered as solved in view of recent results [4–7], many methods do not provide suitable inputs for supersonic flow simulations. In our case, we require a block-structured mesh and control over the size, boundary orthogonality and cell direction in some areas. Most of the time, the mesh size can be controlled at the price of losing or degrading the mesh structure, while boundary orthogonality is typically ensured with interactive software. In this work, we focus on a very demanding field, supersonic flow simulation codes. Such aerodynamic applications require handling thin boundary layers around the re-entry vehicle, controlling the cell size and orientation, and capturing some shock areas.


1.1 State of the Art

Due to the complexity of supersonic flow simulations, the grid density required to resolve the flow field gradients is unknown a priori [8]. Therefore, some researchers concentrate on using mesh adaptation during the simulation [9]. Currently, unstructured meshes with high cell quality can be generated on complex geometries in a fully automatic way, which saves time, and unstructured meshes are also easier to adapt to specific metrics. But, in CFD, solvers may be less efficient in terms of memory, execution speed, and numerical convergence on this type of mesh topology [10]. Since multi-block structured meshes provide the most accurate solutions for CFD [1, 2], those meshes are preferred. However, the generation of such meshes is very challenging and time-consuming for highly skilled engineers, especially in 3D. Fully automatic 3D multi-block structured mesh generation is a complex problem and currently no algorithm is able to generate an ideal block topology. Other types of meshes may then be used, such as over-set grids [11], which ease mesh generation on complex multi-component geometries. Even if these meshes provide a solution as accurate as structured ones, they require specific solvers with complex interpolation. Hybrid meshes for CFD (a thin layer of hexahedral elements near the wall and tetrahedral cells in the far field) are easier and faster to generate; nevertheless, there is no proof that hybrid meshes provide a solution as accurate as block-structured or over-set meshes. In practice, mesh quality is strongly linked to solver algorithms: even if the same physics is solved, each solver has its own quality criteria [3]. As explained by Chawner et al. [1], the mesh quality criteria for CFD simulations are always stated in a non-quantitative way. For instance, terms like "nearly orthogonal", "spacing should not be allowed to change too rapidly", "give consideration to skewness", "sufficiently refined", "adequate resolution" and "use high aspect ratios" are used frequently and casually. Mesh generation relies on engineering experience. Thus, it is easier to check that a mesh is not "bad" than that it is "good". Indeed, an a priori mesh must at least pass the 'validity' requirements of the utilized flow solver (no negative-volume cells, no overlapping cells, no voids between cells, ...). The VERDICT library [12] is a reference software package for this type of mesh quality evaluation. In fact, the ultimate quality measure for a mesh is the global error on the quantity of interest after the simulation. In order to find a way to generate a 2D quad block structure with an approach that extends to 3D, we can look at [4, 13, 14], which provide complete surveys of existing techniques in 2D, on 3D surfaces and in 3D. Considering that we expect a block-structured mesh, polycube-based methods and frame fields seem the most relevant. Polycubes were first used in computer graphics for seamless texturing of triangulated surfaces [15], and many techniques [16–21] improved on the first results. But the orientation sensitivity and the simple structure of a final coarse polycube do not fit our requirements. For several years now, frame fields have offered a promising solution for both quadrilateral and hexahedral mesh generation. They are computed


as a continuous relaxation of an integer-grid map with internal singularities (which overcome some limitations of polycubes). The majority of frame field methods have three major steps: first they create and optimize a boundary-aligned frame field; then they generate an integer-grid map aligned with the previously defined frame field [22]; and finally, they extract integer isolines (in 2D) or isosurfaces (in 3D) to form an explicit block-structured mesh [23]. To the best of our knowledge, generating a 3D frame field remains challenging and state-of-the-art methods still fail to produce a hex-compatible frame field in 3D. Considering that our application field is limited to the outer space surrounding a single vehicle with a zone of interest near the vehicle wall, we can adopt the strategy proposed in [24], where an advancing-front approach is used to mesh such configurations in 3D. Such algorithms, like the paving algorithm in 2D [25], are relevant for our purpose. In the paving method, each boundary is meshed beforehand; in this work, since we have no constraints far from the vehicle, only the vehicle wall is pre-meshed. Moreover, we differ from the original paving algorithm in how new points are created: starting from a front point p, we transport p along a flow (defined by a vector field) to get the next point.

1.2 Main Contributions

Generating an adequate quadrilateral mesh for supersonic flow simulation requires considering both the geometrical shape of the domain Ω and some simulation parameters like the angle of attack, the thickness of the boundary layer, the distinct behaviour required at the front or the back of the vehicle, and so on. Such conditions can be achieved manually in 2D; we propose to do it automatically in this work, with the aim of extending it to 3D afterwards. That is why our approach relies on the work of Roca et al. [24] and extends it to our special case by considering:
1. the boundary layer around the vehicle wall as a special area where we apply specific smoothing and discretization algorithms;
2. several geometry- and physics-based scalar fields that are mixed to control the mesh generation process;
3. test cases that allow comparison with other algorithms on given mesh quality criteria.

2 Terminology and Problem Statement This work aims to propose an algorithm to automatically generate block-structured quadrilateral meshes for supersonic computational fluid dynamics.


Fig. 2 Flow around a supersonic vehicle: inflow u∞ with angle of attack α, vehicle wall ∂Ω_V, boundary layer flow, shock, and far field ∂Ω_FF

2.1 Supersonic Vehicle and Environment

Figure 2 briefly shows the traditional flow topology observed during a supersonic flow simulation. The direction of the inflow is represented by the black vectors u∞ and the angle of attack (AoA) α. Due to the effect of viscosity, a very thin boundary layer (orange in Fig. 2) develops on the wall. This region is characterized by very strong gradients of velocity and temperature. To compute an accurate solution of the Navier-Stokes (NS) equations, very thin and regular cells are needed in the wall-normal direction; thus, globally structured meshes are well suited for this area. As few singular nodes as possible (nodes that are not of valence four) are admitted in this part of the mesh. In general, for CFD, gradients are calculated more accurately if the cells are aligned with the streamlines, particularly in the boundary layer, and also along the shock (red in Fig. 2). However, unlike for boundary layers, mesh refinement near the shock is less restrictive for computing it accurately. In this work, supersonic bodies are completely immersed in the fluid and a single wall is considered. The far field (blue in Fig. 2) is a smooth boundary (circle, ellipse), far from the physical phenomena to simulate; in this way, the flow structures around the vehicle do not impact the far-field boundary conditions. As simulation accuracy is not needed in this area, there is no hard constraint on cell quality near the far-field boundary. The thin region in front of the vehicle (on the left side of the vehicle in Fig. 2) is the key part that governs the simulation. In this very specific zone, the mesh has to be as regular as possible, and singular nodes are not admitted.


2.2 Approach Overview

Let Ω be a 2D domain bounded by an inner boundary, the vehicle wall ∂Ω_V, and an outer spherical boundary, the far-field boundary ∂Ω_FF. Let α be the angle of attack of the vehicle. The aim of our approach is to automatically generate a quadrilateral block-structured mesh Q of Ω that captures the flow around the vehicle and the main flow direction defined by α. Other user parameters are the boundary layer thickness δ_BL along ∂Ω_V, an edge size s_w on the wall ∂Ω_V, a size s_w⊥ of the first edge in the wall-normal direction, and a global edge size s_G. To this purpose we propose the following method (see Fig. 1):
1. We first discretize the boundary curve ∂Ω_V (see Sect. 3.1). This stage requires preserving geometric corners and a maximum block edge size given as an input parameter.
2. Then we build several distance fields in order to drive the advancing-front creation of block layers. Those fields are fused into a single one, called d, which has the property that any point p ∈ ∂Ω_V verifies d(p) = 0 and any point p ∈ ∂Ω_FF verifies d(p) = 1 (see Fig. 1b and Sect. 3.2.1).
3. We extract a gradient field ∇d from one of the previously computed distance fields and combine it with a constant vector field that represents the flow direction, to produce the vector field v that captures both wall orthogonality and the flow direction (see Fig. 1c and Sect. 3.2.2).
4. The scalar field d and the vector field v drive the creation of a quadrilateral block structure B, where each block node is created in an advancing-front manner. Then curved blocks are created (see Fig. 1d and Sect. 3.3).
5. We eventually generate the cells of Q by distinguishing the first block layer, where we control size transitions and wall orthogonality, from the remaining blocks, which we discretize using a transfinite interpolation scheme in each block. To ensure edges of size about s_G, we apply a simple interval assignment algorithm along non-constrained block edges (see Fig. 1e and Sect. 3.4).

3 Block-Structured Mesh Generation Algorithm

Our approach is inspired by [24], where an advancing-front algorithm is proposed to mesh the outer space around an object. Starting from a set of block corners and edges on the wall of the vehicle, the algorithm uses distance fields and a vector field to control the layer extrusion process. The input of our algorithm is a triangular mesh T of Ω. The first part of the algorithm builds the unstructured quadrilateral block topology, while the second part produces the final mesh.


3.1 Vehicle Wall Block Discretization

The first stage consists in discretizing ∂Ω_V. To do so, we traverse the vertices v_0, ..., v_m of T located on ∂Ω_V and we select a vertex v_i if and only if it satisfies one of the following conditions:
• v_i is located on an extremum of the boundary profile, i.e., a vertex of ∂Ω_V that minimizes or maximizes the x or y coordinate;
• v_i is a geometric corner of ∂Ω_V;
• the curvilinear distance from the previously selected vertex to v_i is greater than the limit length given as an input parameter.
Let us note that this approach does not guarantee that the boundary edges will have the same size. This is not an issue for our process, since those edges will be refined later to get the final mesh.
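For illustration, a minimal C++ sketch of this selection rule is given below. It assumes the wall vertices of T are already ordered along ∂Ω_V and that corner and extremum flags have been precomputed; all names are hypothetical placeholders and not part of GMDS.

// Select block corners on the wall according to the three rules of Sect. 3.1.
#include <vector>
#include <cmath>

struct Vec2 { double x, y; };

static double dist(const Vec2& a, const Vec2& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

std::vector<int> selectWallCorners(const std::vector<Vec2>& wall,
                                   const std::vector<bool>& isCorner,
                                   const std::vector<bool>& isExtremum,
                                   double maxEdgeLength) {
    std::vector<int> selected;
    double sinceLast = 0.0; // curvilinear distance from the last selected vertex
    for (std::size_t i = 0; i < wall.size(); ++i) {
        if (i > 0) sinceLast += dist(wall[i - 1], wall[i]);
        // The first vertex is always kept in this sketch (an assumption).
        if (i == 0 || isExtremum[i] || isCorner[i] || sinceLast > maxEdgeLength) {
            selected.push_back(static_cast<int>(i));
            sinceLast = 0.0;
        }
    }
    return selected;
}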

3.2 Fields Computation

Distance and vector fields are the core components of our approach to drive the layer extrusion. The idea, inspired by [24], is to mix several fields to determine where to insert block nodes during the layer creation process. In practice, those fields are discrete and defined at the vertices of T.

Distance Fields Computation

As in [24], we compute a distance field d by merging two distance fields: the first one is the distance from the vehicle boundary ∂Ω_V; the second one is the distance from the far-field boundary ∂Ω_FF. To compute those fields we solve the Eikonal equation [24] given by

    ‖∇d‖ = f in Ω,  d|_F = 0.    (1)

In this equation, Ω ⊂ Rⁿ is the physical domain, f is a known function, ‖·‖ is the Euclidean norm, F is the front and d is the distance to this front. In this work, f is considered constant and equal to 1, and the problem is solved on T. The first field d_V in Fig. 3a is the distance field from the vehicle boundary ∂Ω_V:

    ‖∇d_V‖ = 1 in Ω,  d_V|_{∂Ω_V} = 0.    (2)

The second field d_FF in Fig. 3b is the distance from the far-field boundary ∂Ω_FF of Fig. 2:

    ‖∇d_FF‖ = 1 in Ω,  d_FF|_{∂Ω_FF} = 0.    (3)

Fig. 3 The distance fields computed on the NACA 0012 airfoil geometry [26]

The third field d, represented in Fig. 3c, is a combination of the two fields d_V and d_FF:

    d = d_V / (d_V + d_FF).    (4)

This combination yields a field d normalized between [0, 1] on the domain: d verifies 0 ≤ d(x) ≤ 1 for all x ∈ Ω, with the boundary conditions d|_{∂Ω_V} = 0 and d|_{∂Ω_FF} = 1. This mixed field ensures that the whole front reaches the far field at the same layer during the extrusion; it prevents the front from splitting.
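Once the two Eikonal solutions are available, the combination (4) is a simple per-vertex operation. A minimal sketch, assuming d_V and d_FF are stored as arrays indexed by the vertices of T (the Eikonal solver itself is not shown):

// Combine the two per-vertex distance fields following Eq. (4).
#include <vector>

std::vector<double> combineDistanceFields(const std::vector<double>& dV,
                                          const std::vector<double>& dFF) {
    std::vector<double> d(dV.size());
    for (std::size_t i = 0; i < dV.size(); ++i) {
        // d = 0 on the wall (dV = 0) and d = 1 on the far field (dFF = 0).
        d[i] = dV[i] / (dV[i] + dFF[i]);
    }
    return d;
}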

Vector Field Computation

In combination with the previously defined distance field d, we compute several vector fields to drive the layer creation. In supersonic flow simulation, we must pay particular attention to the front of the vehicle, its near boundary, and the global mesh direction at the back of the vehicle. We drive the mesh behaviour in the back area with the angle of attack α, which is flow-related information. To this purpose we define the vector field u∞ as being constant on Ω and equal to the far-field flow direction, u∞ = (cos α, sin α)ᵀ. For the front of the vehicle, we consider two possible options based on the previously computed distance fields. The first vector field we use is the gradient of the distance field d_V, noted ∇d_V, and the second one is the gradient of the mixed distance field d, noted ∇d. We compute these vector fields at the vertices of T with the Least Squares fit of Directional Derivatives (LSDD) method [27]. Let us note v_front the vector field selected between ∇d_V and ∇d.


Fig. 4 Linear transition between the front (v_front) and back (v_back) vector fields in the zone between x_front and x_back

Fig. 5 Vector fields computed on the NACA 0012 geometry: ∇d_V (a), u∞ (b) and v (c). The vector field v (c) is a mix of ∇d_V (a) for x < 1.5 and of u∞ (b) for x > 5.0; the damping zone is x ∈ [1.5, 5.0] and the angle of attack α equals 15°

In order to consider both the front and back vector fields, we eventually compute the vector field v as a linear combination of those two vector fields in a transition area (see Fig. 4). Two physical limits, x_front and x_back, are set by the user in Ω. For each node n_i ∈ T at point p = (x_i, y_i, z_i):
• if x_i < x_front, then v_i = v_front,i ;
• if x_i > x_back, then v_i = v_back,i ;
• if x_front ≤ x_i ≤ x_back, then v_i = (1 − θ) v_front,i + θ v_back,i ,
where θ = (x_i − x_front)/(x_back − x_front) is a damping parameter between 0 and 1. Figure 5 illustrates some of the different vector fields we use. Note that, by default, we normalize all the vector fields, as we only use the field direction and not its magnitude.
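A minimal sketch of this blend follows, assuming the per-vertex fields v_front and v_back have already been computed; std::clamp extends θ to the two pure regimes outside the transition zone, which merges the three cases above into one expression.

// Blend the front and back vector fields in the transition area (Fig. 4).
#include <vector>
#include <cmath>
#include <algorithm>

struct Vec2 { double x, y; };

static Vec2 normalize(Vec2 v) {
    double n = std::hypot(v.x, v.y);
    return (n > 0.0) ? Vec2{v.x / n, v.y / n} : v;
}

std::vector<Vec2> blendVectorFields(const std::vector<Vec2>& pos,
                                    const std::vector<Vec2>& vFront,
                                    const std::vector<Vec2>& vBack,
                                    double xFront, double xBack) {
    std::vector<Vec2> v(pos.size());
    for (std::size_t i = 0; i < pos.size(); ++i) {
        double t = (pos[i].x - xFront) / (xBack - xFront);
        double theta = std::clamp(t, 0.0, 1.0); // damping parameter
        Vec2 blend{(1.0 - theta) * vFront[i].x + theta * vBack[i].x,
                   (1.0 - theta) * vFront[i].y + theta * vBack[i].y};
        v[i] = normalize(blend); // only the direction is used
    }
    return v;
}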


3.3 Blocking Generation

We build the block structure using an advancing-front approach (see Algorithm 1). We know a priori the number of layers N_L that will be generated. At each step i, we build a complete layer of quadrilateral blocks, denoted L_i. The newly inserted nodes and edges define the extrusion front, noted F_i, built from the nodes of the front F_{i−1}. They share common properties:
• All the nodes of F_i are at the same distance d_{F_i} considering the distance fields computed in Sect. 3.2.1. We use the distance field d_V for F_1 and the distance field d for F_i with i > 1. This way, we ensure the nodes of the front F_1 are all at a distance greater than the boundary layer thickness imposed by the user. Moreover, all the nodes of F_i are on the same level set and the front cannot split.
• Each node of F_i is connected by edges to two other nodes on the front of L_i;
• The front F_i forms a single loop.
We generate the blocking structure by inserting one layer at a time in an independent way. The process is the same for all the layers (line 4 of Algorithm 1 and Sect. 3.3.1) except for the first boundary layer (line 1 of Algorithm 1 and Sect. 3.3.2).

Algorithm 1 Extrusion algorithm
Input: T, F0 nodes, distance field d, distance field dV, vector field v, boundary layer thickness δBL
Output: Blocking B
1: L1 ← compute1stLayer(L0, dV, v, δBL)
2: layer_step ← 1/NL
3: for all i ∈ 2, ..., NL do
4:   Li ← computeLayer(Li−1, d, v, i · layer_step)
5: end for

Block Layer Generation

We generate a complete layer of quadrilateral cells following Algorithm 2. The algorithm starts from the first front of nodes and edges F_0, which corresponds to the discretization of ∂Ω_V.

Front node location
Let us consider the generation of F_i from F_{i−1}. For each node n_{i−1}^j of F_{i−1}, we compute the ideal location of the next node n_i^j on F_i (line 2 of Algorithm 2) by solving the advection equation

    ∂OM/∂t = v    (5)


using a 4th-order Runge-Kutta method (a minimal code sketch of this advection step is given after Algorithm 2). The origin node O = n_{i−1}^j is advected along the direction of the vector field v until its value in the distance field d reaches d_{F_i}; we then obtain the point M = n_i^j. Unlike [24], we define the position of a new node by decoupling the distance to be covered, provided by the distance field d, from the direction to be followed, provided by the vector field v. This way, characteristics of the flow such as the angle of attack α are taken into account in the vector field built for the extrusion. Once those positions are computed, we check some validity rules to ensure that the quad blocks of the layer L_i have an adequate shape. Those rules are similar to the ones introduced by Blacker et al. [25], but they use both geometric and physical criteria to classify the nodes of F_{i−1}: we consider the geometrical shape of the quadrilaterals and the alignment with the vector field v. According to this classification, we insert or erase some nodes in F_i. This process ensures a strong property of the computed layer: all the nodes of F_i are at the same distance d_{F_i} along the input distance field. This property ensures that the front cannot split and that all the nodes reach the outer boundary at the same time on the last layer.

Algorithm 2 ComputeLayer
Input: T, Fi nodes, distance field d, vector field v, distance dFi of the nodes in the distance field d
Output: Quad blocking, and a set of nodes and edges of the layer Fi+1
1: for all node ni^j ∈ Fi do
2:   ni+1^j ← ComputeIdealPosition(ni^j, dFi, d, v)
3: end for
4: while there is a singular node ni^k ∈ Fi do
5:   ni^k ← getSingularNode(Fi, singu_type)
6:   if singu_type is 0 then
7:     Fi+1 ← Fi+1 + {insertQuadAtPoint(ni^k)}
8:   else if singu_type is 1 then
9:     Fi+1 ← Fi+1 + {contractQuadAtPoint(ni^k)}
10:  end if
11: end while
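As referenced above, a minimal sketch of the advection step of Eq. (5) could look as follows. The field samplers d and v are assumed to interpolate the discrete fields stored on T, and the step-size handling is deliberately simplified.

// Advect a front node along v with RK4 steps until d(p) reaches dTarget.
#include <functional>

struct Vec2 { double x, y; };

Vec2 advectToLevel(Vec2 p,
                   const std::function<double(Vec2)>& d, // distance field
                   const std::function<Vec2(Vec2)>& v,   // unit vector field
                   double dTarget, double h /* step size */) {
    while (d(p) < dTarget) {
        // One classical RK4 step of dp/dt = v(p).
        Vec2 k1 = v(p);
        Vec2 k2 = v({p.x + 0.5 * h * k1.x, p.y + 0.5 * h * k1.y});
        Vec2 k3 = v({p.x + 0.5 * h * k2.x, p.y + 0.5 * h * k2.y});
        Vec2 k4 = v({p.x + h * k3.x, p.y + h * k3.y});
        p.x += h / 6.0 * (k1.x + 2.0 * k2.x + 2.0 * k3.x + k4.x);
        p.y += h / 6.0 * (k1.y + 2.0 * k2.y + 2.0 * k3.y + k4.y);
    }
    return p; // in practice, the last step would be shortened to land on dTarget
}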

Figure 6 represents the extrusion of regular blocks on a layer. In Fig. 6a, a small part of F_{i−1} (where i = 3) is plotted in red and previously generated blocks (previous layers) in light blue. If there is no conflict on the layer due to block expansion or shrinking, all the blocks are built in a regular way, and the front nodes of layer L_3 become the input for another step of Algorithm 2 (see Fig. 6b).

Block insertion
To avoid blocks of poor quality, we allow the insertion of blocks in areas specified by the user. As explained before, to create a layer L_i, each node of F_{i−1} generates a node of F_i at the distance d_i in the distance field d, following the vector field v. Let us consider two nodes n_{i−1}^j and n_{i−1}^k of F_{i−1} sharing an edge. They are respectively going to generate the nodes n_i^j and n_i^k of F_i, and we may insert the block defined


Fig. 6 Regular layer computation

Fig. 7 Block insertion

by (n_{i−1}^j, n_i^j, n_i^k, n_{i−1}^k). Depending on the angle quality, we can reject this block. For instance, in Fig. 7a, the node n_2^0 would generate the node n_3^0 and the two adjacent blocks would not respect our angle quality. As a consequence, we instead generate two nodes from n_2^0 and create an extra block (see Fig. 7b). In this work, an extra block is inserted if the four following criteria are respected (using the notations given in Fig. 7):

1. The node is in an area where the insertion is allowed by the user;
2. The adjacent nodes on the same red layer L_2 have not already inserted elements;
3. π/4 − σ1 < arccos( w·v_n / (‖w‖ ‖v_n‖) ) < π/4 + σ1, where v_n is the value of the vector field at the position of the node;
4. arccos( w·a / (‖w‖ ‖a‖) ) + arccos( w·b / (‖w‖ ‖b‖) ) > 3π/2,
where σ1 = 0.174 is an arbitrary tolerance corresponding to 10° (these tests are sketched in code after Fig. 9). It is common to take the aspect ratio between two opposite edges as an insertion or shrinking criterion; however, for the applications of this work, there is no constraint on this specific ratio. To compute the position of one of the two new nodes used to create the inserted block, we proceed as follows. To build the node n_3^3, the point n_2^0 in Fig. 7 is advected following the constant vector w/‖w‖ + a/‖a‖ until reaching the distance d_{F_i} in the distance field d. We do the same for the second point.

Block shrinking
The block shrinking operation is the opposite of the block insertion. Considering three consecutive nodes n_{i−1}^j, n_{i−1}^k and n_{i−1}^l of F_{i−1} that respectively generate the nodes n_i^j, n_i^k and n_i^l of F_i, we apply the shrinking process when we meet the configuration of Fig. 8a, where the three generated nodes are geometrically close. More specifically, we fuse the three generated nodes into a single one. To detect the places where the operation is necessary, the proximity of the ideal positions of the nodes of the next

layer is controlled with a tolerance. After the fusion, the adjacent nodes on the layer (connected by an edge) are not allowed to perform an insertion or fusion operation. Figure 9 illustrates how blocks can be inserted and shrunk in a whole domain on the Mars spacecraft geometry [28].

Fig. 8 Block shrinking

Fig. 9 Block shrinking (orange) and insertion (blue) on the second layer
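For illustration, the four insertion tests of Sect. 3.3.1 can be sketched as follows; w, a and b are the edge vectors of Fig. 7, v_n the vector field at the node, and the user-area and neighbour checks are passed in as precomputed booleans (an assumption about how these flags are maintained).

// Decide whether an extra block may be inserted at a front node.
#include <cmath>
#include <algorithm>

struct Vec2 { double x, y; };

static double angleBetween(Vec2 u, Vec2 v) {
    double c = (u.x * v.x + u.y * v.y) /
               (std::hypot(u.x, u.y) * std::hypot(v.x, v.y));
    return std::acos(std::max(-1.0, std::min(1.0, c))); // clamp for safety
}

bool allowBlockInsertion(Vec2 w, Vec2 a, Vec2 b, Vec2 vn,
                         bool inUserArea, bool neighbourInserted) {
    const double pi = 3.14159265358979323846;
    const double sigma1 = 0.174; // tolerance, about 10 degrees
    if (!inUserArea || neighbourInserted) return false;        // criteria 1-2
    double aw = angleBetween(w, vn);
    if (aw <= pi / 4.0 - sigma1 || aw >= pi / 4.0 + sigma1)    // criterion 3
        return false;
    return angleBetween(w, a) + angleBetween(w, b) > 1.5 * pi; // criterion 4
}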

Boundary Layer Extrusion

The boundary layer is a thin layer close to the wall where the fluid flow is dominated by viscosity effects. We take particular care of this area, which we manage with Algorithm 3. The distance field considered is d_V, and the distance of the layer is supposed to be higher than the thickness of the boundary layer δ_BL.

Boundary layer insertion
As explained before, insertions may be performed in the boundary layer. This operation must remain as occasional as possible so as not to introduce many singularities in the near-boundary layer. The insertion is performed only in the


Algorithm 3 Compute1stLayer
Input: T, F0, expected boundary layer thickness δBL, distance field dV, vector field v
Output: Quad blocks of the layer L1
1: for all node n0^j ∈ F0 do
2:   n1^j ← ComputeIdealPosition(n0^j, δBL, dV, v)
3: end for
4: B ← ComputeBlocks(L1)

case there is a very sharp angle on the geometry (for example the NACA 0012 airfoil in Fig. 3). This insertion is always a two-block insertion and is performed as shown in Fig. 10. In Fig. 10a, the front considered for the extrusion is F_0, and the nodes of this front are plotted in red. Let us remember that F_0 is on the wall geometry. Each node of F_0 computes the ideal position of the next node. At the position of the node n_0^0, a sharp angle is detected on the geometry surface; then, two blocks are inserted. If the inserted upper-right block of Fig. 10 is considered, the two new block corners are placed as follows. The first one, connected to n_1^1 by a block edge, is the position p_{n_0^0} of the node n_0^0 advected at the distance δ_BL in the distance field d_V following a constant vector field equal to the vector c_1 in Fig. 10a. The second block corner is placed at the position p = p_{n_0^0} + l_1 (c_1 + w)/‖c_1 + w‖. The second block of this insertion is built the same way, from the node n_0^2 on the other side of n_0^0. Figure 11 illustrates how this two-block insertion is performed on the boundary layer blocking.

Fig. 10 Block insertion on the boundary layer

Fig. 11 Insertion of two blocks on the boundary layer for the diamond airfoil. The insertion occurs at the front of the airfoil to ensure good-quality elements in this sensitive area


Fig. 12 Topological chord C (purple) composed of the blue edges of the blocking

3.4 From Blocks to Quadrilaterals

Once the block structure is built, we generate the final quad mesh. This requires assigning the right discretization to every block edge and discretizing each block with the appropriate regular grid. The boundary layer is meshed considering strong constraints on wall orthogonality and aspect ratio.

Interval Assignment

The interval assignment algorithm aims to select the number of mesh edges for each block edge. This is fundamentally an integer-valued optimization problem that was tackled in several works [29–31]. In particular, an incremental interval assignment using integer linear algebra is proposed in [31] and gives very satisfying results in terms of target-size compliance and speed. In this work, we follow the simple procedure described thereafter. Even if the problem is initially composed of N integer unknowns, with N the number of block edges, it can be reduced by considering the topological chords of the blocking. A topological chord C is defined as a set of opposite edges [32] (see Fig. 12). As a conformal mesh is expected, all the edges {e_i}_{i=1..n_c} of the same chord C need to have the same discretization; otherwise, the blocking discretization is not valid. Then, starting from a block structure composed of N edges, the problem can be reduced to n integer variables, where n is the number of topological chords in the block structure. For instance, in the case of Fig. 12, there are eight chords, hence eight integer unknowns. This number of unknowns decreases again in our application case, where the thin boundary layer along the vehicle wall is handled specifically. Let us consider Fig. 13, where the blue edges are on the vehicle wall and the green ones in the boundary layer. The discretization of the blue edges is controlled by an input parameter s_w that fixes a target length for each blue edge, and this constraint propagates along the corresponding chords. The discretization of the green edges helps capture the boundary layer flow; this discretization is again an input parameter, which strongly depends on the simulation. Again, some unknowns are thus removed.
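The chord reduction can be implemented with a union-find over opposite block edges. A minimal sketch, assuming a simplified blocking representation in which each quad block stores its four edge identifiers in cyclic order (this layout is an assumption, not the GMDS data structure):

// Group block edges into topological chords: opposite edges of each quad
// must share one subdivision count, so they belong to the same chord.
#include <vector>
#include <array>
#include <numeric>

struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Returns, for each edge, the id of its topological chord.
std::vector<int> computeChords(int nEdges,
                               const std::vector<std::array<int, 4>>& blocks) {
    UnionFind uf(nEdges);
    for (const auto& q : blocks) {
        uf.unite(q[0], q[2]); // opposite edges of the quad
        uf.unite(q[1], q[3]);
    }
    std::vector<int> chord(nEdges);
    for (int e = 0; e < nEdges; ++e) chord[e] = uf.find(e);
    return chord;
}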


Fig. 13 The block edges hard-constrained in our algorithm are shown in blue and green

Fig. 14 Boundary layer offset

It is important to notice that if two edges of a chord C_i have different hard constraints, the problem cannot be solved and the mesh is not generated. In our case, this should not happen: the simple structure of our problem (fully conformal block structure) and the small number of hard constraints allow us in practice to avoid building over-constrained systems.

Boundary Layer Meshing

Boundary layer discretization
This part deals with the discretization of the topological chords constrained by the blue and green edges in Fig. 13. At this stage, the block edges are linear. The objective of the blocking is to split the domain into a small set of blocks; as a direct consequence, when the geometry is curved (as for the NACA 0012 airfoil), we do not obtain a good discretization of the vehicle wall and some parts of the boundary layer can lie totally outside of Ω. To solve this, we first create the mesh edges corresponding to the blue block edges in Fig. 13: each block edge is linearly split by inserting k points {p_i}_{i=1,...,k}, which are projected onto ∂Ω_V afterwards. For each point p_i, we keep the offset vector v_i^proj used to project p_i onto ∂Ω_V. As the boundary layer is very thin, we apply the same offset vector v_i^proj to the mesh points used to linearly discretize the opposite block edge (see Fig. 14). This way, we avoid generating tangled meshes. For the discretization of the boundary layer, we require three user input parameters: s_w, the size of the final mesh edges on the vehicle wall; n_w⊥, the number of mesh edges in the wall-normal direction; and s_w⊥, the size of the first mesh edge in


the wall-normal direction. With these parameters, the edges are set uniformly in the streamwise direction.

Boundary layer smoothing
Even if the block edges were placed in an orthogonal way, the computation remains local for each node. As a consequence, there is no reason for the resulting mesh to be orthogonal to the wall (see Fig. 16a). At this stage, we perform a smoothing algorithm on the boundary layer blocks that have an edge on ∂Ω_V. This smoothing aims to enhance the orthogonality of the first cells in the block. The smoothing algorithm [33] is performed on each block. It is a modification of the Line-Sweeping method introduced by Yao [34], which was specifically developed for structured meshes. The Line-Sweeping method is a geometric, local, iterative, and fully explicit method that aims to uniformize the cell sizes of a block. Let B be a block of size N_x × N_y, and n_{i,j} a node of B with 0 < i < N_x − 1 and 0 < j < N_y − 1 (i.e., n_{i,j} is not a boundary node). To compute the new position at iteration t + 1 with the Line-Sweeping, we consider the stencil made of the six black nodes of Fig. 15. From this stencil, six points are computed: three plotted in red (V_{j−1}, V_j, V_{j+1}) and three in green (H_{i−1}, H_i, H_{i+1}). Red points are placed at the middle of each vertical branch; for instance, V_{j+1} is at the middle of the branch made up of the three nodes n_{i−1,j+1}, n_{i,j+1}, n_{i+1,j+1}. In the same way, the three green points are placed at the middle of each horizontal branch; for instance, H_{i−1} is at the middle of the branch n_{i−1,j−1}, n_{i−1,j}, n_{i−1,j+1}. From these six points, two branches of two segments each are built, the red one (V_{j−1}, V_j, V_{j+1}) and the green one (H_{i−1}, H_i, H_{i+1}). The Line-Sweeping places the new position of the node n_{i,j} at iteration t + 1 at the intersection of these two branches, represented by the orange point X_2. A damping coefficient θ_d ∈ [0, 1] chosen by the user can be added to enhance the convergence. As the Line-Sweeping does not provide the near-wall orthogonality needed for this work, a modification was introduced in [33]. Assume the block edge on the wall is at index j = 0. For each node n_{i,j} such as the one in Fig. 15, we compute the position X_2 with the Line-Sweeping method, and another orange point X_1 is placed. Two vectors n_1 and n_2 are computed, normal to the respective segments [n_{i−1,j−1}, n_{i,j−1}] and [n_{i,j−1}, n_{i+1,j−1}]. Then, the sum n = n_1 + n_2 is used to place the point X_1: this new point is at the intersection of the line passing through the point n_{i,j−1} and carried by the vector n, and the green branch. A new branch X_1, n_{i,j}^t, X_2 is then considered. According to the index j of the node in the block, the new point n_{i,j}^{t+1} is placed on this branch at the position p_{i,j}^{t+1} = γ X_1 + (1 − γ) X_2, where in this work γ = ((j − 1)/(6(N_y − 1)))^{0.01}. This way, the closer the node is to the wall, the stronger the orthogonality. Figure 16 illustrates how this smoothing stage improves the wall orthogonality. A simplified sketch of one such update is given below.
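The following sketch is a simplified version of one damped Line-Sweeping update on an internal node. For brevity, the two three-point branches are intersected as straight lines through their end points, which only approximates the two-segment branches of Fig. 15, and the near-wall X_1 correction is omitted.

// One simplified, damped Line-Sweeping update for the internal node (i, j)
// of a structured block stored as grid[i][j].
#include <vector>
#include <cmath>

struct Vec2 { double x, y; };
using Grid = std::vector<std::vector<Vec2>>;

static Vec2 lerp(Vec2 a, Vec2 b, double t) {
    return {a.x + t * (b.x - a.x), a.y + t * (b.y - a.y)};
}

// Point at half the arc length of the polyline a-b-c.
static Vec2 branchMid(Vec2 a, Vec2 b, Vec2 c) {
    double l1 = std::hypot(b.x - a.x, b.y - a.y);
    double l2 = std::hypot(c.x - b.x, c.y - b.y);
    double half = 0.5 * (l1 + l2);
    return (half <= l1) ? lerp(a, b, half / l1) : lerp(b, c, (half - l1) / l2);
}

// Intersection of lines (p1,p2) and (p3,p4); assumes they are not parallel.
static Vec2 lineIntersect(Vec2 p1, Vec2 p2, Vec2 p3, Vec2 p4) {
    double d1x = p2.x - p1.x, d1y = p2.y - p1.y;
    double d2x = p4.x - p3.x, d2y = p4.y - p3.y;
    double den = d1x * d2y - d1y * d2x;
    double t = ((p3.x - p1.x) * d2y - (p3.y - p1.y) * d2x) / den;
    return {p1.x + t * d1x, p1.y + t * d1y};
}

Vec2 lineSweepUpdate(const Grid& g, int i, int j, double thetaD) {
    // Branch end points V_{j-1}, V_{j+1} and H_{i-1}, H_{i+1}; the middle
    // points V_j and H_i are dropped by the straight-line simplification.
    Vec2 vjm = branchMid(g[i-1][j-1], g[i][j-1], g[i+1][j-1]);
    Vec2 vjp = branchMid(g[i-1][j+1], g[i][j+1], g[i+1][j+1]);
    Vec2 him = branchMid(g[i-1][j-1], g[i-1][j], g[i-1][j+1]);
    Vec2 hip = branchMid(g[i+1][j-1], g[i+1][j], g[i+1][j+1]);
    Vec2 x2 = lineIntersect(vjm, vjp, him, hip);
    return lerp(g[i][j], x2, thetaD); // damped move towards X2
}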

Fig. 15 Modified Line-Sweeping method on an internal node in a block

Fig. 16 Comparison of the near-wall mesh without (a) and with (b) boundary layer smoothing

Boundary layer refinement

In the wall-normal direction, a refinement law is applied to any chord containing a block with a block edge on the geometry (the green edges of Fig. 13). This implies the cells can be very anisotropic, which is not a problem since the gradients are in the wall-normal direction. Consider a vector of adjacent nodes n_1, ..., n_{N+1}, where each node n_i is a 1D point at position l_i. According to the refinement law used, the new position of the node n_i is given by

    l_i = l_1 + f_n (l_{N+1} − l_1),    (6)

where f_n = 1 + β (1 − e^p)/(1 + e^p), p = z (1 − (i − 1)/N), z = log(r) and r = (β + 1)/(β − 1). From this law and a set of adjacent edges, the β parameter can be computed using a Newton method and three values: the total length of the edges, the length of the first edge s_w⊥, and the number of nodes n_w⊥. This refinement law is particularly adapted to the boundary layer, where the size of the first cell can be very small; it avoids generating overly large cells far from the boundary layer.


Block Discretization

In the case there is no hard constraint on a given chord C composed of the edges e_0, ..., e_{n_c}, we get the number of mesh edges for the chord by minimizing

    F(t) = Σ_{e_i} ω_i (t − T_i)²,    (7)

where ω_i is the weight of the edge e_i and T_i is the ideal discretization of the edge e_i. To compute T_i, we use a target parameter s_G corresponding to the ideal default size of the edges of the final mesh. F is a second-degree polynomial in t made of positive terms; it reaches its minimum where ∂F/∂t = 0. We have

    ∂F/∂t (t) = 2 Σ_{e_i} ω_i (t − T_i).    (8)

So the minimum is reached at

    t_0 = Σ_{e_i} ω_i T_i / Σ_{e_i} ω_i.    (9)

We then choose the closest integer to t_0 as the discretization of the edges of the chord C.
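For illustration, with the natural choice T_i = length(e_i)/s_G (an assumption consistent with the role of s_G above, not an explicit formula from this section), the chord subdivision of Eq. (9) reduces to a weighted mean followed by rounding:

// Weighted-mean subdivision of a chord, rounded to the nearest integer.
#include <vector>
#include <cmath>
#include <algorithm>

int chordSubdivision(const std::vector<double>& edgeLengths,
                     const std::vector<double>& weights, double sG) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < edgeLengths.size(); ++i) {
        double Ti = edgeLengths[i] / sG; // ideal number of mesh edges
        num += weights[i] * Ti;
        den += weights[i];
    }
    double t0 = num / den;
    return std::max(1, static_cast<int>(std::lround(t0))); // at least one edge
}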

Final Curved Blocking

It finally remains to mesh the blocks that are not in the boundary layer. All the block nodes are created and located using the advancing-front algorithm described in Sect. 3.3. To avoid discontinuities and low-quality cells, some block edges are not discretized linearly between two block nodes. In fact, we curve every block edge whose two end points are located on the same front F_i with i > 1. To do so, we build a control point p_C and the edge is represented by a quadratic Bézier curve. Let us consider the example of Fig. 17: the block corners n_3^1 and n_3^2 are on the same front F_i with i > 1, so the block edge between them can be curved. To build the quadratic Bézier curve, we choose to insert the control point p_C at the intersection of two lines, plotted in blue and orange in Fig. 17. The blue line is defined by the block corner n_3^1 and the vector normal to the edge n_2^1 n_3^1. Then, the Bézier curve is controlled by (n_3^1, p_C, n_3^2). Figure 18 shows how the mesh blocks are curved with this procedure. After that, every edge e, including curved ones, is subdivided according to the number of subdivisions assigned to its chord (see Sect. 3.4.3). We finally apply a transfinite interpolation scheme to generate the structured grid mesh in each block that is not located in the boundary layer.
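A minimal sketch of the curved-edge discretization follows, assuming the control point p_C has already been computed as the line intersection described above. Note that the uniform parameter sampling used here only approximates uniform arc-length spacing along the edge.

// Evaluate and subdivide a quadratic Bezier block edge (n1, pC, n2).
#include <vector>

struct Vec2 { double x, y; };

Vec2 quadBezier(Vec2 n1, Vec2 pC, Vec2 n2, double t) {
    double u = 1.0 - t;
    return {u * u * n1.x + 2.0 * u * t * pC.x + t * t * n2.x,
            u * u * n1.y + 2.0 * u * t * pC.y + t * t * n2.y};
}

// Subdivide the curved edge into k mesh edges (k + 1 points).
std::vector<Vec2> subdivideEdge(Vec2 n1, Vec2 pC, Vec2 n2, int k) {
    std::vector<Vec2> pts(k + 1);
    for (int s = 0; s <= k; ++s)
        pts[s] = quadBezier(n1, pC, n2, static_cast<double>(s) / k);
    return pts;
}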


Fig. 17 Curving a block edge as a quadratic Bézier curve

Fig. 18 From linear (a) to curved (b) blocks

4 Results and Applications

To demonstrate the good behaviour of our approach, we tested it on different types of vehicles. Here we focus on a selected set of samples, but our heuristic was evaluated on a larger data set. We first check the mesh quality and the impact of some key parameters, such as the angle of attack and the ability to insert/contract quadrilateral blocks in each layer. Then we consider two validation cases. The first one is the well-studied NACA 0012 airfoil [26]. The second one is a two-dimensional supersonic flow around a diamond-shaped airfoil, for which analytical solutions are available [35, 36]. The simulations are run using the open multiphysics simulation software SU2 [37], and our meshing algorithm is freely available in the open-source C++ meshing framework GMDS [38] (https://github.com/LIHPC-Computational-Geometry/gmds).


4.1 Mesh Quality

Figure 19a, b illustrates the block structures of two meshes generated with our algorithm. To generate these meshes, the angle of attack is α = 0°. For the vector field computation, the transition area is set between x_front = 1.5 and x_back = 5.0. For the boundary layer meshing, we require a thickness of δ_BL = 4 × 10⁻² m, n_w⊥ = 100 cells in the wall-normal direction, and a first-cell size of s_w⊥ = 1 × 10⁻⁸ m. The minimum number of block corners on the wall is set to 33 and the number of layers to N_L = 4. The streamwise edge size on the wall is s_w = 1 × 10⁻³ m. In the rest of the domain, the edge size is set by default to s_G = 1.2 × 10⁻² m. The only difference between these two meshes is the permission to insert blocks during the layer extrusion process for the block-structure generation (see Fig. 19b).

Fig. 19 Mesh quality comparison between a block structure generated without (a) and with (b) block insertions; (c) and (d) show the scaled Jacobian distributions of the cells of (a) and (b), respectively

Fig. 20 Mesh generated with an angle of attack α = 15° (a) and the quality of its cells (b)

The number of cells with high scaled Jacobian (as defined in VERDICT [12]) increases for the mesh generated with inserted blocks (see Fig. 19c, d). Figure 20 represents the same mesh as in Fig. 19b but generated with an angle of attack α = 15°. Unlike the previous blocking, the block edges align with the flow behind the airfoil in Fig. 20a. Figure 20b shows the quality of the cells in the mesh: in comparison with Fig. 19d, the geometric quality of the cells is not degraded by taking this angle into account. Figure 21 shows a mesh generated on the diamond airfoil geometry. For this generation, the angle of attack is set to α = 0°. The vector field is computed with the parameters x_front = 1.5 m and x_back = 6.0 m. The number of layers is set to N_L = 4 and block insertions are allowed. For the boundary layer, a thickness of δ_BL = 5 × 10⁻² m is required, with n_w⊥ = 100 cells in the wall-normal direction, and the size of the first wall-normal edge is set to s_w⊥ = 1 × 10⁻⁹ m. The size of the edges on the wall is set to s_w = 4 × 10⁻³ m, and the default edge size in the whole domain is s_G = 1 × 10⁻² m. We require at least 4 blocks in the boundary layer, in which 200 iterations of the modified Line-Sweeping smoother are performed with a damping parameter θ_d = 0.2. This way, the computed mesh is orthogonal near the wall boundary (Fig. 21b). The block structure in Fig. 21a shows two blocks inserted at each end of the geometry, and two additional blocks inserted on the second layer, at the back of the airfoil. The algorithm provides good cell quality considering the scaled Jacobian plotted in Fig. 21c.


Fig. 21 Mesh generated on the diamond airfoil: blocking (a), near-wall cell orthogonality (b), and scaled Jacobian of the cells of the blocking (c)

4.2 Navier–Stokes Equations

The Navier-Stokes equations are nonlinear partial differential equations used in fluid mechanics to describe the flow of a viscous and compressible fluid. The first equation,

    ∂ρ/∂t + ∇·(ρV) = 0,    (10)

is the continuity equation. The momentum conservation equations are

    ∂(ρV)/∂t + ∇·(ρVV) = ∇·(τ − pI) + ρg.    (11)

Then, the energy equation is given by

    ∂(ρE)/∂t + ∇·(ρEV) = ∇·((τ − pI)·V) + ρg·V − ∇·q.    (12)

In these equations, t is the time (s), ρ is the fluid density (kg m⁻³), V is the fluid particle velocity vector (m s⁻¹), p is the pressure (Pa), τ is the viscous stress tensor, I is the unit tensor, g is the gravity vector (m s⁻²) and q is the heat flux vector (J m⁻² s⁻¹). In this work, we consider that the fluid is characterized by a perfect-gas equation of state. The simulations are performed with the Reynolds-Averaged Navier-Stokes (RANS) solver of SU2 [37]; thus, viscosity and turbulence are taken into account in the near-wall region. Here, the k–ω SST turbulence model is used [39].

4.3 Subsonic NACA 0012 Airfoil

A simulation of the NACA 0012 airfoil is performed with the data set in Table 1 to validate the accuracy of the results on our generated mesh. The angle of attack is α = 15°, the Mach number is M∞ = 0.3, the Reynolds number is set to Re = 3 × 10⁶ and the temperature to T∞ = 293 K. In Fig. 22, the pressure coefficients

    C_P = (p − p∞) / (½ ρ∞ u∞²),    (13)

where p is the pressure and ρ the density, are compared. For a Reynolds number of Re = 3 × 10⁶, the experimental data set of Gregory and O'Reilly [40] seems to be the most appropriate for CFD validation [26]. These experimental data, used as reference points for the surface pressure coefficients, are plotted with black dots (•). The pressure coefficient plotted with red crosses (+) is the result of the simulation of this configuration on the mesh generated by our algorithm with the same parameters as the one of Fig. 20. The two red curves are the results of the simulation plotted on both sides of the airfoil; the experimental data set gives the result on only one side. The results obtained with this configuration on our mesh are in good agreement with the experimental data.

Table 1 Simulation parameters for the subsonic NACA 0012 airfoil

M∞     AoA α    Re         T∞
0.3    15°      3 × 10⁶    293 K

Table 2 Simulation parameters for the supersonic diamond airfoil

M∞     AoA α    Re         T∞
1.5    3°       3 × 10⁶    293 K

4.4 Supersonic Diamond Airfoil

In this part, a two-dimensional supersonic flow around a diamond-shaped airfoil is simulated. Figure 23 represents the geometry of the airfoil and the different areas and angles of the supersonic flow. Here, the viscosity effects are taken into account; as a consequence, the velocity on the wall is zero. For the supersonic diamond airfoil, the analytical angles of the oblique shocks (red in Fig. 23) are given by Liepmann et al. [36]. The shock direction depends on the Mach number M∞, the angle θ of the geometry, and the angle of attack α. In this study, θ = 5° and the chord of the airfoil is 1 m. For the first simulation, the parameters of Table 2 are set. In Fig. 24, the Mach number distribution is plotted and compared to the analytical positions of the shocks. The value of the computed angle after the simulation is βu = 43.1°, and the value of the angle βb is 45.7°, which represents an error of 1° compared to the analytical values. For this configuration, Mach numbers are constant in the areas u1, b1, u2 and b2. In zone u1, we reach a constant Mach number of Mu1 = 1.42 in Fig. 24, and Mb1 = 1.19 in area b1. These results are consistent with those given by the tables [35]. Regarding these results, the mesh generated by our algorithm captures the expected physics.

Fig. 22 Pressure coefficient on the NACA 0012 airfoil for the simulation parameters in Table 1

Fig. 23 Scheme of the various zones and angles around the diamond airfoil [41]



Fig. 24 Mach-number distribution around a diamond-shaped airfoil immersed in a supersonic flow field at M∞ = 1.5, Re = 3 × 10⁶ and α = 3°

5 Conclusion

With this work, we propose a solution to automatically generate 2D quadrilateral block-structured meshes dedicated to flow simulations around a single vehicle. We take into account the geometrical shape of the domain and some simulation parameters like the angle of attack and the boundary layer thickness. The first results demonstrate that the generated meshes are usable for subsonic and supersonic flow simulations. A few minor adjustments remain for the near future, like improving the mesh size transition between the boundary layer and the other layers, or integrating further mesh smoothing techniques. But the main part of the future work is twofold: first, we will extend the method to 3D; we are already able to generate the driving distance and vector fields in 3D, and we will now extend the work of [24] to high-order blocks. Second, we intend to use this approach in a loosely coupled adaptive loop: we propose to iteratively adapt the distance and vector fields to encompass directional and size fields provided by a previous simulation run.

References

1. J. R. Chawner, J. Dannenhoffer, and N. J. Taylor, "Geometry, mesh generation, and the CFD 2030 vision," in 46th AIAA Fluid Dynamics Conference, p. 3485, 2016.
2. Z. Ali, P. G. Tucker, and S. Shahpar, "Optimal mesh topology generation for CFD," Computer Methods in Applied Mechanics and Engineering, vol. 317, pp. 431–457, 2017.
3. H. Thornburg, "Overview of the PETTT workshop on mesh quality/resolution, practice, current research, and future directions," in 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, p. 606, 2012.
4. D. Bommes, B. Lévy, N. Pietroni, E. Puppo, C. Silva, M. Tarini, and D. Zorin, "Quad-mesh generation and processing: A survey," Computer Graphics Forum, vol. 32, no. 6, pp. 51–76, 2013.
5. J. Jezdimirovic, A. Chemin, M. Reberol, F. Henrotte, and J. Remacle, "Quad layouts with high valence singularities for flexible quad meshing," CoRR, vol. abs/2103.02939, 2021.
6. M. Reberol, C. Georgiadis, and J. Remacle, "Quasi-structured quadrilateral meshing in Gmsh - a robust pipeline for complex CAD models," CoRR, vol. abs/2103.04652, 2021.


7. N. Pietroni, S. Nuvoli, T. Alderighi, P. Cignoni, and M. Tarini, "Reliable feature-line driven quad-remeshing," ACM Trans. Graph., vol. 40, Jul. 2021.
8. S. Alter, "A structured grid quality measure for simulated hypersonic flows," in 42nd AIAA Aerospace Sciences Meeting and Exhibit, p. 612, 2004.
9. P.-J. Frey and F. Alauzet, "Anisotropic mesh adaptation for CFD computations," Computer Methods in Applied Mechanics and Engineering, vol. 194, no. 48-49, pp. 5068–5082, 2005.
10. N. R. Secco, G. K. Kenway, P. He, C. Mader, and J. R. Martins, "Efficient mesh generation and deformation for aerodynamic shape optimization," AIAA Journal, vol. 59, no. 4, pp. 1151–1168, 2021.
11. W. M. Chan, "Overset grid technology development at NASA Ames Research Center," Computers & Fluids, vol. 38, no. 3, pp. 496–503, 2009.
12. P. M. Knupp, C. Ernst, D. C. Thompson, C. Stimpson, and P. P. Pebay, "The Verdict geometric quality library," tech. rep., Sandia National Laboratories (SNL), Albuquerque, NM, and Livermore, CA, 2006.
13. M. Campen, "Partitioning surfaces into quadrilateral patches: A survey," in Computer Graphics Forum, vol. 36, pp. 567–588, Wiley Online Library, 2017.
14. N. Pietroni, M. Campen, A. Sheffer, G. Cherchi, D. Bommes, X. Gao, R. Scateni, F. Ledoux, J.-F. Remacle, and M. Livesu, "Hex-mesh generation and processing: A survey," ACM Trans. Graph., Jul. 2022. Just Accepted.
15. M. Tarini, K. Hormann, P. Cignoni, and C. Montani, "PolyCube-maps," ACM Trans. Graph., vol. 23, no. 3, 2004.
16. J. Gregson, A. Sheffer, and E. Zhang, "All-hex mesh generation via volumetric polycube deformation," Computer Graphics Forum, vol. 30, no. 5, pp. 1407–1416, 2011.
17. M. Livesu, N. Vining, A. Sheffer, J. Gregson, and R. Scateni, "PolyCut: Monotone graph-cuts for polycube base-complex construction," ACM Trans. Graph., vol. 32, no. 6, pp. 171:1–171:12, 2013.
18. K. Hu and Y. J. Zhang, "Centroidal Voronoi tessellation based polycube construction for adaptive all-hexahedral mesh generation," Computer Methods in Applied Mechanics and Engineering, vol. 305, pp. 405–421, 2016.
19. J. Huang, T. Jiang, Z. Shi, Y. Tong, H. Bao, and M. Desbrun, "ℓ1-based construction of polycube maps from complex shapes," ACM Trans. Graph., vol. 33, no. 3, pp. 25:1–25:11, 2014.
20. X. Fang, W. Xu, H. Bao, and J. Huang, "All-hex meshing using closed-form induced polycube," ACM Trans. Graph., vol. 35, no. 4, pp. 124:1–124:9, 2016.
21. X.-M. Fu, C.-Y. Bai, and Y. Liu, "Efficient volumetric polycube-map construction," Computer Graphics Forum, vol. 35, no. 7, pp. 97–106, 2016.
22. M. Nieser, U. Reitebuch, and K. Polthier, "CubeCover - parameterization of 3D volumes," Computer Graphics Forum, vol. 30, no. 5, pp. 1397–1406, 2011.
23. M. Lyon, D. Bommes, and L. Kobbelt, "HexEx: robust hexahedral mesh extraction," ACM Trans. Graph., vol. 35, no. 4, p. 123, 2016.
24. E. Ruiz-Gironés, X. Roca, and J. Sarrate, "The receding front method applied to hexahedral mesh generation of exterior domains," Engineering with Computers, vol. 28, no. 4, pp. 391–408, 2012.
25. T. D. Blacker and M. B. Stephenson, "Paving: A new approach to automated quadrilateral mesh generation," International Journal for Numerical Methods in Engineering, vol. 32, no. 4, pp. 811–847, 1991.
26. C. Rumsey, "2DN00: 2D NACA 0012 Airfoil Validation Case," 2021. [Online; accessed 5-August-2022].
27. C. Mancinelli, M. Livesu, and E. Puppo, "A comparison of methods for gradient field estimation on simplicial meshes," Computers & Graphics, vol. 80, pp. 37–50, 2019.
28. L. C. Scalabrin, Numerical simulation of weakly ionized hypersonic flow over reentry capsules. PhD thesis, 2007.
29. K. Beatty and N. Mukherjee, "A transfinite meshing approach for body-in-white analyses," in Proceedings of the 19th International Meshing Roundtable, 2010.


30. J. Gould, D. Martineau, and R. Fairey, "Automated two-dimensional multiblock meshing using the medial object," in Proceedings of the 20th International Meshing Roundtable, Springer, 2011.
31. S. Mitchell, "Incremental interval assignment by integer linear algebra," in Proceedings of the International Meshing Roundtable, Oct. 2021.
32. M. L. Staten, J. F. Shepherd, and K. Shimada, "Mesh matching - creating conforming interfaces between hexahedral meshes," in Proceedings of the 17th International Meshing Roundtable, pp. 467–484, Springer, 2008.
33. C. Roche, J. Breil, and M. Olazabal, "Mesh regularization of ablating hypersonic vehicles," in 8th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2022), (Oslo, Norway), June 2022.
34. J. Yao, "A mesh relaxation study and other topics," tech. rep., Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States), 2013.
35. Ames Research Staff, "Report 1135: Equations, tables, and charts for compressible flow," tech. rep., Ames Aeronautical Laboratory, 1953.
36. H. W. Liepmann and A. Roshko, Elements of Gasdynamics. Courier Corporation, 2001.
37. T. D. Economon, F. Palacios, S. R. Copeland, T. W. Lukaczyk, and J. J. Alonso, "SU2: An open-source suite for multiphysics simulation and design," AIAA Journal, vol. 54, no. 3, pp. 828–846, 2016.
38. F. Ledoux, J.-C. Weill, and Y. Bertrand, "GMDS: A generic mesh data structure,"
39. F. R. Menter, "Two-equation eddy-viscosity turbulence models for engineering applications," AIAA Journal, vol. 32, no. 8, pp. 1598–1605, 1994.
40. N. Gregory and C. O'Reilly, "Low-speed aerodynamic characteristics of NACA 0012 aerofoil section, including the effects of upper-surface roughness simulating hoar frost," 1970.
41. N. Frapolli, S. S. Chikatamarla, and I. V. Karlin, "Entropic lattice Boltzmann model for gas dynamics: Theory, boundary conditions, and implementation," Physical Review E, vol. 93, no. 6, p. 063302, 2016.

Robust Generation of Quadrilateral/Prismatic Boundary Layer Meshes Based on Rigid Mapping Hongfei Ye, Taoran Liu, Jianjun Chen, and Yao Zheng

1 Introduction

1.1 Prismatic Mesh Generation

A boundary layer mesh is a semi-structured layered mesh around a given geometry. The early generation methods for boundary layer meshes were mainly PDE-based methods, which were widely used in early structured mesh methods [13, 19, 22, 23]. Later, as geometry models became more and more complex, semi-structured meshes gradually developed, and a separate boundary layer mesh concept gradually formed, along with the unstructured mesh filled in between the boundary layer mesh and the bounding box. Among the different mesh schemes for solving partial differential equations (PDEs) by numerical methods near the boundary, the generation of a layered prismatic (in 3D) or quadrilateral (in 2D) mesh combined with an isotropic mesh has gained popularity due to its good compromise between viscous accuracy and ease of use [3]. In such a mesh, layered elements are configured in the near field of viscous walls to resolve high flow gradients normal to the walls, while the remaining domain and the surface geometry are filled with unstructured meshes.


Fig. 1 The final 2D viscous mesh of the three-letter model generated by the proposed method. Full-layered boundary layer meshes are colored green

The most widely applicable method for generating layered meshes is the Advancing Layer Method (ALM). This method usually generates the mesh layer by layer, and premature stopping caused by global intersections may occur, requiring pyramid transition elements to handle mesh continuity. Generally, the pyramids used for transitions are highly twisted, and their exposed faces are not conducive to the subsequent isotropic mesh generation. Therefore, a full-layer boundary layer mesh without transition elements is more sought after. However, full-layer generation usually encounters problems, the most serious of which is the negative-volume cell. This paper proposes a global method with strictly positive volume guarantees for generating a full-layer boundary layer mesh under arbitrary input. One of the resulting meshes in 2D is shown in Fig. 1: the algorithm handles the narrow gap well and performs well in both boundary layer mesh completeness and normal orthogonality. Here, mesh completeness indicates the area/volume covered by the boundary layer mesh; usually, the larger the covered boundary layer region, the higher the accuracy of the solution. The global technique entails solving marching-normal information globally, typically via a set of linear equations or numerical methods. Some studies [9, 33] still rely, or partly rely, on the ALM framework, and some [26, 34] do not. The widely recognized advantage of the global method is that its normals are globally optimized. Practically, the shortcomings of this method are also pronounced: 1. the technique is usually time-consuming, whether solved explicitly or implicitly; 2. since the normals are globally optimized, unsuitable normals may be generated locally, such as singularities [34] or negative elements.

PDE-based One of the methods for normal smoothing is the PDE-governed approach, where the equation is often solved implicitly. The PDE-based method provides a new, global angle on the normal smoothing problem, such as the Laplacian equation [33], the Eikonal equation [25, 26] and the level-set equation [29], which models ALM as a hyperbolic differential equation. The computation of marching-normal directions is defined in the solution space of the adopted governing equation. Since the solution is smooth in the flow domain, the marching normal is naturally


For instance, the marching direction at a point can be defined as the gradient vector of the solution, as proposed by Wang et al. [27], based on a variant of the Eikonal equation solution involving the minimum Euclidean distance. Zheng et al. [33] proposed solving the Laplace equation for the three components of the marching normal separately by the boundary element method (BEM).

Variation-based Another global approach optimizes the regenerated elements globally based on the variational method, which relies on a valid initial mesh. Variation-based methods are usually solved explicitly. This approach relies on a partial [9] or full [6] background mesh and then achieves both mesh orthogonality and mesh untangling in a quality-optimized manner. In general, a weighted energy is defined, including orthogonality and normal-smoothing terms, and the normals are obtained by minimizing this energy. Two typical applications of this approach are the method of Dyedov et al. [6], which minimizes an energy controlling triangle shapes and side-edge orthogonality, and that of Garanzha et al. [9], which minimizes an objective function related to the Jacobian matrix of all prisms mapping from Lagrangian to Eulerian coordinates.

Local Method The local method means that no global function is solved during the normal calculation; smoothing is performed on each layer or each cell and is often tuned based on the authors' extensive experience [7, 18, 24, 31]. The biggest advantage of the local method is its high flexibility in the normal direction, which matches the locally greedy idea of the ALM perfectly. Moreover, the process is usually not time-consuming, since the smoothing is performed locally. However, since the smoothing problem is usually non-convex, the local optimum generally does not lead to the global optimum. Therefore, the final mesh may lack advantages of the global method, such as mesh completeness. Loseille et al. [12] proposed a 3D local operator that combines several local topology operations and used it to generate anisotropic boundary layer tetrahedral meshes.

1.2 Rigid Transformation

A rigid transformation (also called a Euclidean transformation or Euclidean isometry) is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points [28]. This concept is widely studied in computer graphics, especially in parameterization [5, 11, 17], mesh deformation [21], and shape interpolation [1, 30]. Similar to the boundary layer meshing application, the most difficult goal in the study of rigid transformations is a flip-free mapping with non-intersecting boundaries, known as bijectivity in the field of surface parameterization. However, boundary layer mesh generation is more complex than parameterization because the quality of the initial mesh usually needs improvement. In addition, the air mesh technique can handle the self-intersection of a rigid mapping at the free outer boundary [11]. The idea of the air mesh is straightforward: it is an isotropic tetrahedral mesh between the outermost triangle mesh of the boundary layer mesh and the bounding box. When the mesh is deformed, the fold-free isotropic mesh is equivalent to the self-intersection-free boundary layer mesh.


Later, Müller et al. [14] extended the technique of [32] by adding quality-based triangle flipping during the optimization, instead of re-triangulating the air mesh.

1.3 Contribution

This paper proposes a robust boundary layer mesh generation algorithm based on rigid mapping, validated in 2D and partly in 3D. The work is mainly inspired by surface parameterization. Our contributions are listed below:
1. We introduce rigid mapping into layered boundary layer mesh generation, together with the air mesh technique, which is used to prevent negative elements. These techniques make high-quality, full-layer boundary layer mesh generation with guaranteed positive volumes theoretically possible under arbitrary input.
2. We propose generation schemes for the target mesh and the initial mesh of the rigid transformation. In addition, an adaptive vertical target-mesh adjustment and a multiple-normals configuration significantly improve the quality of the boundary layer mesh.
3. The experimental 2D version of the algorithm is open-sourced on GitHub (https://github.com/HongviYe/2D-viscous-mesh-generation).

2 Methods Overview

Figure 2 presents the proposed workflow for 2D layered boundary layer mesh generation. Its input is a Planar Straight Line Graph (PSLG) and a few user parameters defining the preferred properties of the output mesh. A typical set of user parameters includes the height of the first layer and the ratio between the heights of neighboring layers. In addition, the loops in the PSLG should be properly oriented, e.g., via the winding number [4], to determine the direction of boundary layer growth.
1. Initial Mesh The iteration starts from the initial mesh $\mathcal{M} = (V, F)$, where $V$ is the set of vertex coordinates and $F$ is the set of connections between vertices. First, a marching normal is defined at each node of the PSLG as the average of the front normals of the two edges sharing that node, which ensures normal visibility. Second, an extremely small initial marching step length is defined; it must be small enough to keep the mesh free of global and local intersections, and its value can be given by the user or obtained by bisection. Third, a layered quadrilateral mesh with the prescribed number of layers is generated according to the user input parameters and the marching normals.


Fig. 2 Overview of the proposed method (2D)

2. Target Mesh The target mesh $\mathcal{M}' = (V', F')$ defines our "expectation" for the boundary layer mesh. For each segment of the PSLG, two normals orthogonal to the segment are generated; their lengths are identical and specified by the user. The two normals and the segment then form a rectangle. Layer by layer, the entire target mesh is generated according to the user input parameters. As illustrated in Fig. 2, the target mesh cells are all rectangular and are arranged only for easy visualization: target mesh cells carry no neighbor relationships, since the target mesh is used only to define the "expectation", i.e., the deformation target.
3. Air Mesh Regeneration Auxiliary 2-simplices/3-simplices $\mathcal{M}_A = (V_A, F_A)$ fill the domain between the bounding box and the boundary layer mesh. The primary purpose of the air mesh is to prevent folding and self-intersection of the boundary layer mesh during subsequent iterations; we only need to maintain the positivity of the air mesh cell volumes during the iteration process. The idea of auxiliary 2-/3-simplices comes from Air Meshes [14], which are widely used for collision handling. After each iteration, the air mesh may need to be regenerated to improve its quality.
4. Rigid Transform Iteration The quadrilateral/prismatic meshes in the initial mesh and the target mesh have the same number of simplices, $\|F\| = \|F'\|$. The purpose of the transformation is to minimize the rigid mapping energy between the initial and target meshes to make them "similar"; the word "rigid" means that the meshes are similar in size and shape.
5. Final Mesh with Refinement At the end of the process, the air mesh can be discarded, and an unstructured, high-quality isotropic mesh can be generated around the boundary layer mesh. The quality of the air mesh is limited, since it is only used to avoid intersections of the boundary layer mesh.
It is worth noting that, to simplify the algorithm, the authors use 2-simplex/3-simplex Jacobians in 2D and 3D, respectively, to compute the energy. Therefore, every quadrilateral in both the initial mesh and the target mesh is decomposed into $2 \times 2 = 4$ triangles, as shown in Fig. 3. Compared with a decomposition into only two triangles by a diagonal, this scheme has an advantage: we only need to preserve the positivity of the triangles' areas to ensure the convexity of the final quadrilateral mesh.


Fig. 3 Decomposition of a quadrilateral mesh

Fig. 4 One of the decomposition schemes of a prismatic mesh

Similarly, in 3D, a triangular prism can be decomposed into three tetrahedra, as shown in Fig. 4. Because there are six possible schemes for decomposing a triangular prism into tetrahedra, every triangular prism is decomposed into $6 \times 3 = 18$ tetrahedra with overlapping regions in the implementation.
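To make the convexity criterion concrete, here is a minimal Python sketch (with illustrative names, not the paper's code) that checks the four corner triangles of a quadrilateral; positivity of all four signed areas implies a convex, non-inverted element:

```python
def signed_area2(a, b, c):
    """Twice the signed area of triangle (a, b, c); positive if CCW."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def quad_is_convex(quad):
    """quad: four 2D vertices in CCW order.

    The 2 x 2 = 4 corner triangles are (v[i-1], v[i], v[i+1]) for each
    corner i; if all four signed areas are positive, the quadrilateral
    is convex and fold-free.
    """
    return all(
        signed_area2(quad[i - 1], quad[i], quad[(i + 1) % 4]) > 0
        for i in range(4)
    )
```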

3 Initial Mesh and Target Mesh Generation

3.1 Initial Mesh Generation

The existence of the initial mesh is the fundamental guarantee of robustness. As in the conventional ALM, the initial mesh depends on the marching normals and the marching distance. The critical point is that no self-intersection is allowed in the initial mesh. For the marching distance, we can prove that the mesh is free of folds and intersections as long as the marching step size is small enough. Algorithm 1 shows the generation procedure:


Algorithm 1 Initial Mesh Generation
  Calculate the initial normals.
  Generate a single thick one-layer boundary layer mesh following the fixed initial normals and the initial marching step length $H_{all}$.
  while there exists a fold or self-intersection in the outermost loop/surface in 2D/3D do
    $H_{all} \leftarrow 0.5 \cdot H_{all}$ and regenerate the one-layer mesh.
  end while
  Split the one-layer boundary layer mesh into the initial mesh.
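A minimal Python sketch of Algorithm 1, with the geometric kernels passed in as hypothetical callbacks (`extrude`, `intersects`, and `split` are stand-ins, not the paper's API):

```python
def generate_initial_mesh(pslg, normals, h_all, extrude, intersects, split, n_layers):
    """Halve the marching step length until the one-layer extrusion is
    free of folds and self-intersections, then split it into layers.

    extrude(pslg, normals, h) -> one-layer candidate mesh
    intersects(mesh)          -> True if the outer loop/surface folds
    split(mesh, n_layers)     -> layered initial mesh
    """
    layer = extrude(pslg, normals, h_all)
    while intersects(layer):
        h_all *= 0.5  # bisection on the marching step length
        layer = extrude(pslg, normals, h_all)
    return split(layer, n_layers)
```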

Fig. 5 Illustrative example of the adjustment of the vertical size of the target mesh and the final mesh. The subfigure connected by a solid line in the upper right corner of each figure shows a zoomed-in view of the triangles, and the subfigure connected by a dotted line in the lower right corner shows the corresponding target mesh. (a) The final mesh with vertical size adjustment. (b) The final mesh without vertical size adjustment

For the marching normal in 2D, a reasonable choice for a point normal is the average of the neighboring front normals. In 3D, the "most normal" direction [2] is used. Sometimes one normal is not enough for extremely complex corners, and a multiple-normals configuration must be introduced to guarantee the existence of the initial mesh.
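The 2D point-normal choice above can be sketched as follows; the CCW orientation convention and the function names are our assumptions, not from the paper:

```python
import numpy as np

def edge_normal(p, q):
    """Outward unit normal of segment p -> q for a CCW-oriented loop."""
    t = q - p
    n = np.array([t[1], -t[0]])  # tangent rotated by -90 degrees
    return n / np.linalg.norm(n)

def node_normals(loop):
    """Average front normal at each node of a closed CCW polygon.

    loop: (n, 2) array of node coordinates. Returns (n, 2) unit normals.
    Degenerate averages (near-180-degree folds) would need the
    multiple-normals treatment of Sec. 3.3.
    """
    n_pts = len(loop)
    normals = np.zeros_like(loop, dtype=float)
    for i in range(n_pts):
        prev_n = edge_normal(loop[i - 1], loop[i])            # incoming edge
        next_n = edge_normal(loop[i], loop[(i + 1) % n_pts])  # outgoing edge
        avg = prev_n + next_n
        normals[i] = avg / np.linalg.norm(avg)
    return normals
```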

3.2 Target Mesh Generation

The target mesh is the combination of ideal mesh elements that defines the target of the iteration. The design of the target mesh directly determines the quality of the final mesh. The intuitive idea is that the horizontal size is determined by the length of the corresponding input PSLG segment, while the vertical size is determined by the user input parameters, including the first layer's height, the height ratio, and the number of layers. This intuitive idea may lead to low-quality mesh in narrow gap areas. Figure 5 shows the target mesh of a model and its corresponding boundary layer mesh, and compares the result with and without adaptive target mesh adjustment. Because a large-area target mesh and a narrow gap are incompatible, the rigid transformation algorithm would otherwise have to trade mesh quality against area by increasing distortion. The target mesh in 2D is rectangular. Since the target mesh is rotationally invariant, it has only two degrees of freedom: the horizontal size $H$ and the vertical size $V$, i.e., $V' = (V, H)$.


Fig. 6 Illustrative example of the adjustment of the horizontal size of the target mesh

Horizontal The horizontal size is the side of the rectangular cell parallel to the boundary. In general, the horizontal size of segment/facet $e$ is decided by the initial size $H_e^0$ and the ideal size $H_e^{k_{\max}}$, where $k_{\max}$ is the maximum number of layers. $H_e^0$ equals the length of the corresponding segment in the input PSLG, while $H_e^{k_{\max}}$ is obtained by Laplace smoothing with $H_e^0$ as the initial value. Finally, the horizontal size of the $k$th layer is linearly defined as:

$$H_e^k = \frac{(k_{\max} - k)\,H_e^0}{k_{\max}} + \frac{k\,H_e^{k_{\max}}}{k_{\max}} \qquad (1)$$

Figure 6 shows an example of the target mesh after adjusting the horizontal size; a pronounced sawtooth can be observed between the adjoining edges of two rectangles.

Vertical The vertical size defines the height of the target mesh. Figure 5 compares the results with and without adaptive vertical size adjustment. Figure 5b shows the fixed target mesh and its corresponding final mesh after unlimited iterations: some twisted elements are generated due to the narrow-gap constraint. Figure 5a shows the target and final mesh after adjustment: the twisting is alleviated, and the corresponding target mesh is visibly compressed. The vertical size adjustment of the target mesh is usually achieved by shrinking the step length. An overly aggressive shrinking strategy usually results in slow convergence, while an overly loose one cannot achieve the desired goal. The authors propose a layer-shrinking strategy. For segment $e$ in each iteration, let the ideal vertical height computed from the user input be $V_e^{ideal} = \sum_{k=0}^{k_{\max}} \alpha \gamma^k$, where $\alpha$ is the height of the first layer and $\gamma$ is the growth ratio. Let the distance between the centers of the 0th and $k_{\max}$th layers of $e$ be $V_e^{current} = \frac{1}{\dim} \sum_{j=1}^{\dim} \| v_{e,j}^{k_{\max}} - v_{e,j}^{0} \|$, where $v_{e,j}$ denotes the coordinate of vertex $j$ of simplex $e$. The vertical size in the next iteration is then:

$$V_e^k = \sqrt{V_e^{current}\, V_e^{ideal}} \qquad (2)$$

Since Eq. (2) depends on the current height, the target mesh, including the gradient, must be recalculated after each iteration. Obviously, $V_e^k > V_e^{current}$ as long as $V_e^{current} < V_e^{ideal}$, so the height of the target mesh stays above the height of the current iterate. After several iterations, this value eventually converges.
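A small sketch of the target-size rules of Eqs. (1) and (2); the function names are illustrative:

```python
import math

def horizontal_size(k, k_max, h0, h_kmax):
    """Eq. (1): linear blend between the initial segment length h0 and
    the Laplace-smoothed ideal size h_kmax across the layers."""
    return ((k_max - k) * h0 + k * h_kmax) / k_max

def ideal_vertical_height(alpha, gamma, k_max):
    """Sum of layer heights alpha * gamma**k for k = 0..k_max."""
    return sum(alpha * gamma**k for k in range(k_max + 1))

def next_vertical_size(v_current, v_ideal):
    """Eq. (2): geometric-mean update, slower than jumping straight to
    the ideal height but faster than a fixed small increment."""
    return math.sqrt(v_current * v_ideal)
```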


Fig. 7 The final mesh and target mesh of a sharp convex example with and without the multiple-normals configuration. (a) The final mesh with multiple normals generates an extra normal at the convex point. (b) The final mesh without multiple normals. (c) The target mesh with multiple normals; the mesh size of the extra segment gradually increases with height. In particular, the first layer's target mesh has only one triangle

3.3 Multiple Normals Configuration

The multiple-normals configuration enhances initial mesh generation in both 2D and 3D. The configuration adopts the notion of a virtual input for the PSLG: at an extremely sharp convex point, a segment of zero length is inserted at the sharp corner, i.e., extra coincident points are generated there, so that the subsequent algorithm requires only slight modification. Figure 7 compares the meshes after unlimited iterations with and without the multiple-normals configuration. An obvious extra normal can be observed in Fig. 7a compared with Fig. 7b. Figure 7c shows the target mesh with multiple normals; three extra target mesh strips can be observed marching from the degenerate segment. It is worth noting that degenerate triangles are discarded in the first layer of the extra strips of the degenerate segments, and only one triangle is generated instead.

4 Rigid Mapping

4.1 Problem Statement

Suppose the rigid mapping is defined as $\phi : V' \to V$; our target is to minimize the mapping energy:

$$\min_{V}\; E(\phi) \quad \text{s.t. } \mathcal{M} \text{ is self-intersection free.} \qquad (3)$$


The air mesh [14] is widely used to solve the global intersection problem of $\mathcal{M}$, while $\mathcal{M}$ is locally intersection free if it contains no flipped (negatively oriented) triangles. $\mathcal{M}_A$ shares the nodes of the outermost layer of $\mathcal{M}$; therefore, an $\mathcal{M}_A$ without folds is equivalent to an $\mathcal{M}$ without global self-intersections. Formally, let $A(t)$ be the oriented area of a triangle (simplex) $t$; Problem (3) can then be rewritten as:

$$\min_{V}\; E(\phi) \quad \text{s.t. } \forall t \in \mathcal{M},\ A(t) > 0; \quad \forall t_A \in \mathcal{M}_A,\ A(t_A) > 0 \qquad (4)$$

The energy function determines the shape of the final mesh, and a proper energy function together with precise step-size control by line search [20], e.g., an energy function with a zero barrier, can prevent negative simplices.

4.2 Energy Definition

Rigid Mapping Energy Rigid distortion energy has been well studied in mesh deformation and surface parameterization. The Jacobian of the map $\phi$ is computed on each simplex $f \in F$:

$$J_f := \nabla \phi_f \qquad (5)$$

where $\phi_f$ is the restriction of $\phi$ to the simplex $f$, which is an affine map. We can then write the energy as:

$$E(\phi) = \sum_{f \in F} \mathcal{D}(J_f) \qquad (6)$$

where $\mathcal{D}(\cdot)$ is the distortion energy of each simplex. A slight difference from the application in parameterization or mesh deformation is that we do not need weight coefficients such as mesh area: the physical properties of the boundary layer mesh make the unweighted sum sufficient. Generally, rigid mappings comprise those preserving angles and those preserving scale, and under boundary constraints these two conditions conflict. The most popular form of rigid mapping energy, the "as-rigid-as-possible" (ARAP) energy, was proposed by Liu et al. [10]; it is a well-known formulation that balances the two conditions, and its distortion energy is defined as:

$$\mathcal{D}_{ARAP}(J_f) = \| J_f - R(J_f) \|_F^2 \qquad (7)$$


Here, $R(J_f)$ is the closest rotation to $J_f$, and $\|\cdot\|_F$ denotes the Frobenius norm. The idea for finding the closest rotation $R$ is that, if we decompose $J = U \Sigma V^T$ by the SVD, then $R(J) = U V^T$ is a rotation matrix, while $\Sigma$ stands for scaling. One advantage of this energy is that it can be minimized by the local/global method [10].
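A minimal sketch of the closest-rotation projection and the ARAP distortion of Eq. (7); the determinant fix-up for reflections is a standard detail our sketch adds, not something spelled out in the text:

```python
import numpy as np

def closest_rotation(J):
    """Closest rotation R(J) = U V^T from the SVD J = U S V^T.

    If U V^T is a reflection (det < 0), flip the last left-singular
    direction so that a proper rotation is returned.
    """
    U, _, Vt = np.linalg.svd(J)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1
        R = U @ Vt
    return R

def arap_energy(J):
    """ARAP distortion of Eq. (7): ||J - R(J)||_F^2."""
    return np.linalg.norm(J - closest_rotation(J), 'fro') ** 2
```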

Local/Global Iteration The local/global iteration method was first published by Liu et al. [10]. It decomposes the global deformation energy optimization into local closed-form and global linear computations, which makes it possible to optimize the global energy by solving only linear systems. The iteration is decomposed into two steps:
1. Local: compute the closest rotation to the Jacobian of each simplex $f$ in iteration $k$, $R_f^k := U V^T$ ($U$ and $V$ are the left and right factors of the SVD of the Jacobian).
2. Global: solve the global linear system minimizing the distortion energy:

$$\arg\min_{V^k} \sum_{f \in (F \cup F_A)} \| J_f(V^k) - R(J_f(V^{k-1})) \|_F^2 \qquad (8)$$

This is a simple quadratic energy when $R$ is known, since it can be rewritten using the most celebrated weight recipe, the so-called cotangent weights [16]. Thus the minimizer of the energy can be obtained by solving a linear system.
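Schematically, the local/global loop looks as follows; `jacobian` and `assemble_and_solve` are hypothetical callbacks, the latter standing in for the cotangent-weighted sparse linear solve, and `closest_rotation` is from the previous sketch:

```python
def local_global(V, faces, jacobian, assemble_and_solve, n_iter=100):
    """Alternate the local rotation fit and the global linear solve.

    jacobian(V, f)             -> per-simplex Jacobian J_f
    assemble_and_solve(R_list) -> vertex positions minimizing Eq. (8)
                                  for the fixed rotations R_list
    """
    for _ in range(n_iter):
        # local step: best-fit rotation per simplex
        R_list = [closest_rotation(jacobian(V, f)) for f in faces]
        # global step: quadratic minimization, i.e., a sparse linear solve
        V = assemble_and_solve(R_list)
    return V
```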

Symmetric Dirichlet Energy According to the authors' experiments, $\mathcal{D}_{ARAP}$ is not suitable for the boundary layer mesh. In this manuscript, we choose one of its variants: the symmetric Dirichlet energy proposed by Smith and Schaefer [20], which has been proved rotation invariant, is used to measure the distortion:

$$\mathcal{D}_{SDE}(J_f) = \|J_f\|_F^2 + \|J_f^{-1}\|_F^2 = \sum_{i}^{\dim} (\sigma_i^2 + \sigma_i^{-2}) \qquad (9)$$

where $\sigma_i$ are the eigenvalues of $\Sigma$, i.e., the singular values of $J$, and $\dim$ is the dimension of the problem. Clearly, Eq. (9) becomes singular as $\sigma \to 0$; geometrically, $\sigma = 0$ means the triangle has completely degenerated and its area is 0. To take advantage of the local/global method while using the symmetric Dirichlet energy, Rabinovich et al. [17] proposed weighted proxy functions, extending the method to anisotropic weights. We can rewrite the distortion measure as follows:


$$\mathcal{D}_{WSDE}(J_f) = \| W (J_f - R(J_f)) \|_F^2 \qquad (10)$$

where $W$ is the $2 \times 2$ proxy matrix. The proxy matrix for an arbitrary energy $\mathcal{D}(J)$ can be written as:

$$W = \left( \tfrac{1}{2} \nabla_J \mathcal{D}(J)\, (J - R)^{-1} \right)^{\frac{1}{2}} \qquad (11)$$

Suppose $J = U \Sigma V^T$ is the singular value decomposition of $J$ and $I$ is the identity matrix. Since the energy of Eq. (9) is rotation invariant, Eq. (11) can be rewritten as:

$$W = U \left( \tfrac{1}{2} \nabla_{\Sigma} \mathcal{D}(\Sigma)\, (\Sigma - I)^{-1} \right)^{\frac{1}{2}} U^T = U\, \Sigma_W\, U^T \qquad (12)$$

For the energy of Eq. (9), the proxy matrix is:

$$\Sigma_W = \left( \frac{\sigma_i - \sigma_i^{-3}}{\sigma_i - 1} \right)^{\frac{1}{2}} \qquad (13)$$

Coming back to Eq. (10), both $R$ and $W$ can be computed from the current state; thus minimizing Eq. (10) is again equivalent to solving a linear system, and the local/global method remains available: 1. Local: compute $R$ and $W$. 2. Global: solve a global linear system.
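The local step with proxy weights can be sketched as below; the guard for the removable singularity at $\sigma = 1$ (where the ratio in Eq. (13) tends to 4) is our addition:

```python
import numpy as np

def proxy_weight(J, eps=1e-8):
    """Proxy matrix W = U Sigma_W U^T of Eq. (12), with the diagonal
    Sigma_W of Eq. (13) for the symmetric Dirichlet energy."""
    U, sigma, _ = np.linalg.svd(J)
    ratio = np.empty_like(sigma)
    near_one = np.abs(sigma - 1.0) < eps
    ratio[near_one] = 4.0  # limit of (s - s**-3) / (s - 1) as s -> 1
    s = sigma[~near_one]
    ratio[~near_one] = (s - s**-3) / (s - 1.0)
    return U @ np.diag(np.sqrt(ratio)) @ U.T
```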

Air Mesh Energy Intersection detection for the air mesh is also handled via Eq. (9): no self-intersection occurs as long as each iteration keeps the energy away from its singularities without crossing them. Therefore, both constraints of Problem (4) are enforced by Eq. (9). The air mesh quality is not as important as that of the boundary layer mesh; thus, in each iteration, the air mesh of the previous round is chosen as the target mesh of the air mesh. Formally, let $\mathcal{M}_A^{k-1} = (V_A^{k-1}, F_A)$ be the air mesh in round $k-1$ of the iteration; the rigid mapping of the air mesh can then be defined as $\psi : V_A^{k-1} \to V_A^k$. Putting the air mesh and the boundary layer mesh into the same frame of consideration, the proposed method is implemented by optimizing:

$$\min_{V}\; E(\phi) + \lambda E(\psi) \quad \text{s.t. } \forall t \in F,\ A(t) > 0; \quad \forall t_A \in F_A,\ A(t_A) > 0 \qquad (14)$$


Because we only care about the singularities of $\psi$, $\lambda$ is chosen small enough to guarantee the functionality of the air mesh with little to no effect on the final result; in our experience, $\lambda = \frac{1}{10000\,\|F\|}$, where $\|F\|$ represents the number of simplices in $F$. The computation of $E(\psi)$ follows the same pattern as $E(\phi)$. Because the two meshes share points on the boundary, the energies are assembled together, the only difference being the weights, which are decided by the flexibility factor $\lambda$.
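A one-function sketch of the combined objective of Eq. (14) with the paper's empirical $\lambda$; `energy` is a hypothetical per-mesh evaluator of Eq. (9):

```python
def combined_energy(V, blayer_faces, air_faces, energy):
    """E(phi) + lambda * E(psi), with lambda = 1 / (10000 * ||F||)."""
    lam = 1.0 / (10000.0 * len(blayer_faces))
    return energy(V, blayer_faces) + lam * energy(V, air_faces)
```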

4.3 Positive Volume Guarantee

Since the solution of Eq. (14) may cause flips or self-intersections (a simplex $f$ with $\det(J_f) < 0$ may still have small energy according to Eq. (9)), simply applying the solution is not enough. As in the works cited above, the authors use a line search to avoid self-intersecting meshes. Line search is a widely used optimization technique, detailed in Nocedal and Wright [15]. Let $V^k$ and $V^d$ be the coordinates of the $k$th iterate and the proposed step, respectively. If the optimization direction is guaranteed to reduce the energy, then there must exist an $\alpha$ such that the energy of $V^{k+1} = \alpha V^k + (1 - \alpha) V^d$ is smaller than that of $V^k$ while ensuring that there is no flip or self-intersection. This guarantees the robustness of the whole algorithm; negative cells are fatal to the simulation. Note that algorithms using line search are usually sensitive to the choice of the optimization direction.
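A minimal flip-avoiding backtracking line search in the spirit of [20]; the shrink schedule and the callbacks are illustrative, not the paper's implementation:

```python
def line_search(V_k, V_d, energy, flips, shrink=0.5, max_steps=50):
    """Backtrack alpha in V = alpha * V_k + (1 - alpha) * V_d until the
    energy decreases and no element flips (alpha = 0 is the full step,
    alpha = 1 stays at the current iterate). V_k, V_d: numpy arrays.
    """
    E0 = energy(V_k)
    alpha = 0.0
    for _ in range(max_steps):
        V_new = alpha * V_k + (1.0 - alpha) * V_d
        if not flips(V_new) and energy(V_new) < E0:
            return V_new
        alpha = 1.0 - shrink * (1.0 - alpha)  # back off toward V_k
    return V_k  # give up: keep the current (valid) iterate
```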

5 Post Process

5.1 Retention Layer

One common criticism of boundary layer meshes is that the anisotropic layers leave too small a gap after generation, making high-quality isotropic mesh generation a hard problem. Therefore, the top layers of the boundary layer mesh are used as retention layers to avoid narrow gaps; boundary layers thus retain a gap height of at least two preserved layers. A control parameter $\beta$ controls the ratio between the height of the reserved layer and the default height. Figure 8 compares the retention layer and the final mesh at $\beta = 0.1$ and $\beta = 2$.


Fig. 8 The retained-layer schematic and final mesh at $\beta = 0.1$ and $\beta = 2$

5.2 Mesh Refinement

After removing the retention layer, the proposed algorithm fills the remaining domain with an isotropic mesh. Unlike for the air mesh, the size field here is driven by the boundaries (Fig. 1), and the number of mesh cells increases significantly, which may be more conducive to simulation.

6 Results

6.1 IMR

The model contains the three English letters "I", "M", and "R". A sharp concave corner can be found in "M", and there is a nested ring in "R". This example features both straight and curved turns as well as sharp concave corners. There is also a narrow gap at the bottom of the letter "M", making it difficult to generate high-quality full-layer meshes. Figure 9 shows the mesh together with the air mesh after different numbers of iterations. The mesh gradually expands from the initial mesh and, after about 100 iterations, reaches the ideal height. No extremely twisted elements appear in the narrow gap.


Fig. 9 The initial mesh and the air mesh of the IMR letter model after different numbers of iterations (0, 5, 10, 20, 30, 40, 100, and ∞)

6.2 30P–30N Airfoil

To further verify the method, a complex configuration, the 2D three-element 30P–30N airfoil, is tested. Figure 11 shows the initial and target meshes after different numbers of iterations. In this example, after 300 iterations, the mesh quality at the narrow gap increases as the height of the target mesh decreases. Figure 10 shows the final layered mesh; the details of the gaps are shown in the two subfigures at the bottom, and the algorithm in this manuscript handles the narrow gap at the crossing region of the different assemblies well.

6.3 U-Shape

An academic example named U-shape is introduced to demonstrate the preliminary results obtained by the program in 3D. The U-shape model is a small box obtained by the Boolean subtraction of two cubes of different sizes. As shown in Fig. 12, the input surface mesh contains 3,102 points and 6,200 triangles, and 45 layers of prismatic mesh are generated initially. Figure 12 shows the final prismatic mesh after different numbers of iterations.


Fig. 10 The final mesh of the 30P–30N airfoil

Fig. 11 The initial mesh and the air mesh of the 30P–30N airfoil model after different numbers of iterations. The subfigure connected by a solid line in the upper right corner of each figure shows the mesh, and the subfigure connected by a dotted line in the lower right corner shows the corresponding target mesh

The result indicates that at least 100 iterations are required for the program to obtain a high-quality mesh. To demonstrate the effectiveness of the proposed algorithm, the authors compare the final mesh with that of Pointwise, a prevalent commercial meshing package, using the same surface input. Figure 13 shows a cut-view comparison between the two meshes; we can see that Pointwise cannot generate a complete mesh near the concave corner. In addition, the equiangle skewness (https://www.pointwise.com/doc/user-manual/examine/functions/equiangle-skewness.html) is used to measure mesh quality. Figure 14 shows the quality distributions of the two meshes. Since lower-quality cells are more harmful to simulation, a logarithmic vertical axis is used in the comparison.



Fig. 12 The surface mesh and the layered boundary layer mesh of the U-shape model after different numbers of iterations
Fig. 13 Meshes generated by the proposed algorithm and Pointwise

It can be observed that the algorithm proposed in this manuscript is ahead of Pointwise on this indicator. Moreover, the worst element quality usually plays a decisive role in simulation convergence speed and accuracy. The authors compare the maximum prism equiangle skewness of the two meshes, and the result of the proposed algorithm (0.9914) outperforms that of Pointwise (0.9504).
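For reference, a sketch of the standard equiangle skewness definition used above (0 is ideal, 1 is fully degenerate; the ideal angle theta_e is 60 degrees for triangles):

```python
def equiangle_skewness(angles_deg, theta_e=60.0):
    """max((t_max - theta_e) / (180 - theta_e), (theta_e - t_min) / theta_e)
    for one element, given its interior angles in degrees."""
    t_max, t_min = max(angles_deg), min(angles_deg)
    return max((t_max - theta_e) / (180.0 - theta_e),
               (theta_e - t_min) / theta_e)
```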

6.4 DLR F6 (One Layer)

The method proposed in this manuscript is very time-consuming in 3D. Due to running-time limitations, we generate only a one-layer-thick boundary layer mesh on the F6 model. This is a challenging task for a highly curved surface [8]. Figure 15 shows a comparison of the input surface mesh (the green part) and the outermost surface mesh (the white part) of the final prismatic mesh.


Fig. 14 Comparison of the mesh quality of meshes generated by the proposed algorithm and Pointwise

Fig. 15 Comparison of the input surface mesh and the single-layer prismatic mesh

Figure 16 shows the detailed prismatic mesh around the connection point of the aircraft pylon from different views. It can be seen that the prismatic mesh is complete around the complex corner points. The generation of this example takes about 15.0 hours, and the mapping energy is reduced from $1.5 \times 10^{16}$ to $6.5 \times 10^8$. A single-layer mesh is meaningless for simulation, but it may open a door to large-scale full-layer prismatic mesh generation.

Fig. 16 Details of the single-layer prismatic mesh of the F6 model


7 Conclusion and Limitations

This article presents a novel robust method for full-layer boundary layer mesh generation. By defining a target mesh and an initial mesh and minimizing the symmetric Dirichlet mapping energy between them, we can gradually expand an arbitrarily thin initial mesh toward the ideal through iterations. In addition, the proposed algorithm dynamically adjusts the size of the target mesh for better boundary layer mesh quality. Together with the multiple-normals configuration, we achieve good results in 2D and preliminary results in 3D. However, the current 3D version of the proposed method has a severe time bottleneck, because $V^k$ in Eq. (8) must be obtained by solving a linear system whose sparse matrix has (number of prisms $\times$ 18) rows and (number of mesh points) columns. For boundary layer meshes in real industry, the mesh size often exceeds one million cells; whether an iterative or a direct solver is used, solving this linear system is very time-consuming. In addition, convergence is relatively slow: more than 500 iterations are needed to obtain good results.

Acknowledgements The authors would like to thank the Science and Technology on Scramjet Laboratory Fund, China (No. 2022-JCJQ-LB-020-05) for its support.

References
1. Alexa, M., Cohen-Or, D., Levin, D.: As-Rigid-As-Possible Shape Interpolation. In: Proceedings of SIGGRAPH 2000 (2000). https://doi.org/10.1145/344779.344859
2. Aubry, R., Löhner, R.: On the 'most normal' normal. Communications in Numerical Methods in Engineering 24(12), 1641–1652 (2008). https://doi.org/10.1002/cnm.1056
3. Baker, T.J.: Mesh generation: Art or science? Progress in Aerospace Sciences 41(1), 29–63 (2005). https://doi.org/10.1016/j.paerosci.2005.02.002
4. Barill, G., Dickson, N., Schmidt, R., Levin, D., Jacobson, A.: Fast winding numbers for soups and clouds. ACM Transactions on Graphics 37, 1–12 (2018). https://doi.org/10.1145/3197517.3201337
5. Botsch, M., Sorkine, O.: On Linear Variational Surface Deformation Methods. IEEE Transactions on Visualization and Computer Graphics 14, 213–230 (2008). https://doi.org/10.1109/TVCG.2007.1054
6. Dyedov, V., Einstein, D.R., Jiao, X., Kuprat, A.P., Carson, J.P., Del Pin, F.: Variational generation of prismatic boundary-layer meshes for biomedical computing. International Journal for Numerical Methods in Engineering 79(8), 907–945 (2009). https://doi.org/10.1002/nme.2583
7. Eller, D., Tomac, M.: Implementation and evaluation of automated tetrahedral–prismatic mesh generation software. Computer-Aided Design 72, 118–129 (2016). https://doi.org/10.1016/j.cad.2015.06.010
8. Garanzha, V., Kaporin, I., Kudryavtseva, L., Protais, F., Ray, N., Sokolov, D.: Foldover-free maps in 50 lines of code. ACM Transactions on Graphics 40, 1–16 (2021). https://doi.org/10.1145/3450626.3459847
9. Garanzha, V., Kudryavtseva, L., Belokrys-Fedotov, A.: Single and multiple springback technique for construction and control of thick prismatic mesh layers. Russian Journal of Numerical Analysis and Mathematical Modelling 36, 1–15 (2021). https://doi.org/10.1515/rnam-2021-0001


10. Gotsman, C., Liu, L., Zhang, L., Xu, Y., Gortler, S.: A Local/Global Approach to Mesh Parameterization. Computer Graphics Forum 27 (2008). https://doi.org/10.1111/j.1467-8659.2008.01290.x
11. Jiang, Z., Schaefer, S., Panozzo, D.: Simplicial complex augmentation framework for bijective maps. ACM Transactions on Graphics 36, 1–9 (2017). https://doi.org/10.1145/3130800.3130895
12. Loseille, A., Löhner, R.: Robust Boundary Layer Mesh Generation. In: Jiao, X., Weill, J.C. (eds.) Proceedings of the 21st International Meshing Roundtable, pp. 493–511. Springer, Berlin, Heidelberg (2013). https://doi.org/10.1007/978-3-642-33573-0_29
13. Middlecoff, J., Thomas, P.: Direct Control of the Grid Point Distribution in Meshes Generated by Elliptic Equations. AIAA Journal 18 (1979). https://doi.org/10.2514/3.50801
14. Müller, M., Chentanez, N., Kim, T., Macklin, M.: Air Meshes for Robust Collision Handling. ACM Transactions on Graphics 34 (2015). https://doi.org/10.1145/2766907
15. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (1999)
16. Pinkall, U., Polthier, K.: Computing Discrete Minimal Surfaces and Their Conjugates. Experimental Mathematics 2 (1993). https://doi.org/10.1080/10586458.1993.10504266
17. Rabinovich, M., Poranne, R., Panozzo, D., Sorkine-Hornung, O.: Scalable Locally Injective Mappings. ACM Transactions on Graphics 36, 1 (2017). https://doi.org/10.1145/3072959.2983621
18. Roget, B., Sitaraman, J., Lakshminarayan, V., Wissink, A.: Prismatic mesh generation using minimum distance fields. Computers & Fluids 200, 104429 (2020). https://doi.org/10.1016/j.compfluid.2020.104429
19. Sadrehaghighi, I.: Mesh Generation in CFD. No. Patch 1.86.7 in CFD Open Series. Annapolis (2020)
20. Smith, J., Schaefer, S.: Bijective Parameterization with Free Boundaries. ACM Transactions on Graphics 34, 70:1–70:9 (2015). https://doi.org/10.1145/2766947
21. Sorkine, O., Alexa, M.: As-Rigid-As-Possible Surface Modeling. Symposium on Geometry Processing 4, 109–116 (2007). https://doi.org/10.1145/1281991.1282006
22. Steger, J., Sorenson, R.: Use of Hyperbolic Partial Differential Equations to Generate Body Fitted Coordinates. In: Numerical Grid Generation Techniques. NASA Langley Research Center (1980)
23. Thompson, J.: A general 3D elliptic grid generation system on a composite block structure. Computer Methods in Applied Mechanics and Engineering 64, 377–411 (1987). https://doi.org/10.1016/0045-7825(87)90047-8
24. Wang, F., Mare, L.: Hybrid meshing using constrained Delaunay triangulation for viscous flow simulations. International Journal for Numerical Methods in Engineering 108(13), 1667–1685 (2016). https://doi.org/10.1002/nme.5272
25. Wang, Y.: Eikonal Equation Based Front Propagation Technique and its Applications. In: 47th AIAA Aerospace Sciences Meeting Including The New Horizons Forum and Aerospace Exposition. American Institute of Aeronautics and Astronautics (2009). https://doi.org/10.2514/6.2009-375
26. Wang, Y., Guibault, F., Camarero, R.: Eikonal equation-based front propagation for arbitrary complex configurations. International Journal for Numerical Methods in Engineering 73, 226–247 (2008). https://doi.org/10.1002/nme.2063
27. Wang, Y., Murgie, S.: Hybrid Mesh Generation for Viscous Flow Simulation. In: Pébay, P.P. (ed.) Proceedings of the 15th International Meshing Roundtable, pp. 109–126. Springer, Berlin, Heidelberg (2006)
28. Weinstein, A.: Theoretical Kinematics (O. Bottema and B. Roth). SIAM Review 22, 519–520 (1980). https://doi.org/10.1137/1022104
29. Xia, H., Tucker, P.G., Dawes, W.N.: Level sets for CFD in aerospace engineering. Progress in Aerospace Sciences 46(7), 274–283 (2010). https://doi.org/10.1016/j.paerosci.2010.03.001
30. Xu, D., Zhang, H., Wang, Q., Bao, H.: Poisson shape interpolation. Graphical Models 68, 268–281 (2006). https://doi.org/10.1016/j.gmod.2006.03.001
31. Ye, H., Liu, Y., Chen, B., Liu, Z., Zheng, J., Pang, Y., Chen, J.: Hybrid grid generation for viscous flow simulations in complex geometries. Advances in Aerodynamics 2(1) (2020). https://doi.org/10.1186/s42774-020-00042-x


32. Zhang, E., Mischaikow, K., Turk, G.: Feature-Based Surface Parameterization and Texture Mapping. ACM Transactions on Graphics 24 (2005). https://doi.org/10.1145/1037957.1037958
33. Zheng, Y., Xiao, Z., Chen, J., Zhang, J.: Novel Methodology for Viscous-Layer Meshing by the Boundary Element Method. AIAA Journal 56(1), 209–221 (2018). https://doi.org/10.2514/1.J056126
34. Zhu, Y., Wang, S., Zheng, X., Lei, N., Luo, Z., Chen, B.: Prismatic mesh generation based on anisotropic volume harmonic field. Advances in Aerodynamics 3(1) (2021). https://doi.org/10.1186/s42774-021-00065-y

Explicit Interpolation-Based CFD Mesh Morphing Ivan Malcevic and Arash Mousavi

1 Introduction

A CFD simulation cycle consists of pre-processing (geometry modeling and meshing), flow solution computation, and post-processing of the results. Traditionally, the most expensive part of the cycle is obtaining a converged flow solution. Improvements in numerical schemes and the emergence of massively parallel CPU and GPU solvers have drastically reduced flow solution wall-clock time; today, a well-converged solution can be obtained in just a few hours even for computationally demanding simulations. At the same time, progress in geometry modeling and meshing has been modest. Parallel scalability is limited to a small number of processing units. A typical run time to generate an unstructured CFD mesh of an aircraft model or a wind turbine with a few hundred million elements is at least several hours. In many cases, mesh generation is already the dominant component of a CFD simulation cycle, with no clear solution in sight. As a result, the importance of alternative paths grows rapidly, mesh morphing being one of the most popular.

Mesh morphing can be loosely described as a modification of a baseline mesh while preserving the mesh structure, also referred to as connectivity or topology. It should be noted that morphing cannot fully replace mesh generation, but it can provide a useful path to a CFD mesh if a prototype exists. Compared to mesh generation, mesh morphing offers several advantages but has its limits as well. A major advantage is a much shorter workflow cycle. A morphing input is a baseline mesh, which has the know-how of mesh generation built in.


190

I. Malcevic and A. Mousavi

If the existing mesh has already been used in a previous CFD run, it is reasonable to expect that it has proper surface and volume feature resolution (e.g., wake, leading- or trailing-edge resolution), wall-cell spacing (y+ distribution), a design-practice "approved" boundary layer, defeaturing parameters applied, etc. These aspects (and much more) are part of a full mesh generation workflow and are rarely fully automated; they come for "free" in a mesh morphing workflow. Because morphing "just" deforms the existing mesh, one can expect a much shorter turnaround time. And, since the mesh connectivity is unchanged, the uncertainty associated with differences in meshing two similar (but still different) models is eliminated, an important aspect when comparing CFD results in re-design and optimization applications. However, fixed connectivity is also the main constraint, since it limits the scope of a morphing workflow to similar models with limited geometry variations before the deformed mesh becomes unusable. As a result, morphing has mostly seen use in small-amplitude optimization and fluid–structure workflows. The success of an industrial morphing system is measured by how it addresses the major questions: (i) what its useful application space is, (ii) what displacement types and amplitudes it can handle, (iii) its parallel capabilities, and (iv) its scalability.

The history of research on mesh morphing is as long as that of mesh generation. Early methods targeting mapped meshes were succeeded by methods using elastic and spring analogies, followed by mesh-based PDE solutions, including Laplacian and other smoothing methods. Examples of review papers comparing various morphing methods can be found in [1, 2]. In recent years, the radial basis function (RBF) approach has become the method of choice; see [3–6] among others. RBF represents a meshless approach to morphing and offers an interpolation-like technique for distributing displacement fields into the domain. Much of the recent work on RBF focuses on the choice and support of the interpolation functions targeting a specific application space; for examples, see [7–9].

Most morphing methods, including the popular RBF morph, are implicit, which means solving a large system of equations (meshless or not) to compute the displacement field across a domain. The scalability and run time are directly affected by the size of this system. For mesh-based methods, the size of the system is typically the size of the background mesh used to discretize the domain (which may or may not be the original CFD mesh). For meshless methods, the system size is driven by the size of the displacement source array, which depends on the nature of the application and how the system is set up. In some applications, like fluid–structure interaction, it is driven by the number of surface mesh nodes, and the system size can easily grow into the tens or hundreds of thousands. In such cases, the performance of implicit methods can degrade significantly. Various approximation techniques have been proposed to improve the performance in these cases, and this is still an area of active research.

The other success criterion, arguably more important than time performance, is mesh quality. The morphed mesh needs to be usable for CFD simulation. This statement contains dual requirements. First, mesh quality needs to exceed minimum thresholds, usually specified in terms of element quality metrics like aspect ratio, minimum element angles, or surface and volume ratios.


However, equally important is the question of preserving mesh properties not necessarily expressed as element quality metrics. The baseline (nominal) mesh contains the built-in engineering design practice that qualifies it as the "golden standard". Mesh properties like wake resolution, local surface refinement, and boundary layer characteristics are important features of any CFD mesh and should be preserved during morphing. Remember, morphing serves as an alternative to full mesh generation, and a full mesh generation workflow would most likely account for all design changes in the deformed configuration, much as for the original mesh.

To date, the question of maintaining mesh quality during morphing has not been fully answered. Certain classes of methods are known to produce higher-quality elements than others; other methods are known to better tolerate lower initial mesh quality. Even within the same class of methods, the choice of underlying shape functions is very important, since it significantly influences the performance for specific problem categories. The requirement of preserving desired mesh properties adds to the complexity of implementation. Depending on the morphing method, these additions may be hard to implement or may reduce the smoothness of the resulting displacement field. For example, the requirement of preserving the properties of the boundary layer translates into additional terms in the system matrix such that, locally, the stiffness of the region near the walls becomes very high. This in turn can lead to a less numerically stable system and produce lower-quality elements than desired. Additional requirements, like re-positioning the wake region of an airfoil mesh when the airfoil turns, or moving the region of mesh refinement to keep up with an expected shock-location change due to an airfoil redesign, are even harder to implement.

The consequence of the above difficulties is the limited application space of morphing methods, usually expressed through small displacement amplitudes and limited practical mesh size in certain classes of applications like fluid–structure interaction. In addition, the morphing step is usually accompanied by subsequent corrective action, typically aimed at locally (and sometimes globally) improving mesh quality. Such actions include smoothing, local node repositioning, and in some cases other mesh-improving techniques like swapping, refinement, and coarsening. These actions complicate practical implementation and do not guarantee that the mesh quality issue will be resolved.

To address the runtime requirements and to better control mesh quality during morphing, we propose an explicit interpolation-based morphing method. The method was originally developed for structured-mesh turbomachinery CFD applications to handle large displacements in various applications, including airfoil design and re-design, optimization, and fluid–structure interaction, but also geometry feature additions and system applications. The method utilizes the structured nature of the CFD meshes to automatically determine the optimal domain decomposition and the order and direction of interpolation. The explicit nature of the method allows for a very fast run time and produces an algorithm that scales linearly with mesh size. Finally, the explicit method allows for easy control of local features like boundary layer properties, region-focused mesh control, local element quality checks, and local real-time adjustments during mesh movement (rather than post-factum correction).


The remainder of this paper is organized as follows. The next section focuses on the morphing environment and the interaction of morphing with other pre-processing (geometry and meshing) techniques; it is important to understand these elements and their interactions, since the performance of the proposed morphing methodology can be properly understood only within the larger framework where several techniques come together to form a fully functional system with capabilities exceeding those of each technique applied separately. The main section, describing the method to morph structured CFD meshes, then follows. Finally, the extensions of the method to unstructured CFD mesh morphing are detailed. The capabilities and usability of the proposed method are illustrated in the examples shown throughout this paper.
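Although the method itself is described later, the core idea of explicit interpolation-based morphing can be illustrated by a one-dimensional blend of boundary displacements along a structured grid line; this snippet is our illustrative reading, not the authors' implementation:

```python
import numpy as np

def morph_grid_line(points, disp_start, disp_end):
    """Explicitly blend two boundary displacements along one structured
    grid line, weighted by normalized arc length so that interior nodes
    follow the nearer boundary. No linear system is solved, so the cost
    is linear in the number of nodes.

    points: (n, 2) node coordinates; disp_*: (2,) end displacements.
    """
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = (s / s[-1])[:, None]  # 0 at the first node, 1 at the last
    return points + (1.0 - t) * disp_start + t * disp_end
```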

2 Mesh Morphing Environment To fully understand the method described below, it is important to learn how it fits within a mesh morphing environment and, more generally, how it interacts with various modules of a dynamic pre-processing environment.

2.1 Elements of Morphing Process

The crucial aspect for the success of any morphing application (and of pre-processing applications more generally) is a holistic approach. In the case of a morphing workflow this means the inclusion of three fundamental elements:
● Motion definition
● Surface mesh morphing
● Volume mesh morphing.
Morphing workflow buildup starts with the motion definition step. In this stage, information on surface and volume motion and related constraints should be gathered and organized. The result of this stage is a set of rules that describe the motion of the domain boundary and internal surfaces and, additionally, the motion of regions of special interest. The set of rules should describe a conformal, unambiguous, and well-defined motion field (particularly at intersection regions). In general, motion definition constraints are derived from two sources: the target application and the modeling system/tools environment. An example of motion definition for a turbomachinery CFD application is illustrated in Fig. 1. The domain consists of a single passage (a slice of a full wheel) around an airfoil. A cross-section of the model mesh is shown in the figure. Air flows from left (inlet surface) to right (outlet surface). Circumferential symmetry employed in the simulation allows the model to be reduced to a single


airfoil with bottom and top periodic surfaces, with a 1–1 node and element match. Radially, the CFD domain is bounded between the inner (hub) and outer (casing) surfaces of revolution. In addition to the airfoil shape change, the motion definition set of rules should specify how each of the six bounding surfaces moves for the CFD simulation intent. For some surfaces, the motion can be explicitly specified. For example, in the case of fluid–structure interaction, the motion of the airfoil surface mesh is specified as a displacement field obtained from a mechanical analysis of the model; in this case, the motion is specified directly on mesh nodes. In the case of an airfoil redesign, the surface motion might be specified in the form of a new CAD representation, while the mesh motion needs to be computed separately. Some of the surfaces might be stationary (hub and casing surfaces), while the associated mesh moves (slides) along the surface (slip motion condition). The motion of the inlet and exit surfaces might depend on the wider CFD setup. If there are additional blade rows forward and aft of this airfoil in the simulation, the axial (horizontal) location of the inlet and exit surfaces should be constrained. Depending on the capabilities of the CFD solver and the type of simulation, the inlet and exit surface meshes might be allowed to slide in the circumferential direction (top–bottom direction in the figure) or might need to be frozen. For the example in Fig. 1, the inlet mesh (left boundary) was allowed to slide and the exit mesh (right side) was kept frozen. The decision on periodic surface and mesh motion depends on solver capabilities and the type of simulation. In this case, the inlet surface mesh slides circumferentially; hence, the periodic surface mesh must move as well. At the top and bottom, the motion is constrained to the hub and casing surfaces. The motion in the middle of the periodic surface is decided based on simulation intent: one should choose to move the periodic mesh nodes to mimic the airfoil motion so as to maximize the quality of the morphed elements (Fig. 2). Using the above example as a guide, it can be observed that the sets of rules describing the target application and the motion definition are not the same. This is an important distinction when considering CFD versus, for example, structural analysis. Consider the target application of the airfoil re-design described above.

Fig. 1 Single airfoil passage mesh morphing


Fig. 2 Periodic surface mesh motion

For an engineer working on airfoil optimization, the airfoil shape change fully defines the problem. The same airfoil shape change is sufficient for a mechanical analyst as well, since the new airfoil shape fully defines the structural model. However, an airfoil is just one of the surfaces of the larger CFD domain. As described above, other considerations that stem from the larger hardware system (of which the airfoil is just one component), the type of CFD simulation, and the modeling system capabilities play a role in how the CFD domain changes its shape. Hence, while in both cases the target application is the airfoil re-design, one ends up with a significantly different (wider) set of motion rules in the case of a CFD workflow.

Once the motion definition is complete, the surface mesh morphing can commence. Depending on the nature of the surface motion, the computation of the new surface mesh can be done in a separate step (explicit surface mesh motion) or performed simultaneously with volume morphing (implicit surface mesh motion constraint). For the example shown in Fig. 1, the inlet surface motion could be precomputed by prescribing the amount of circumferential shift, or could be left to be computed at the time of volume mesh morphing as a fallout of the volume interpolation scheme, with the additional constraint that the axial motion of the inlet surface nodes is set to zero. The exact workflow depends on the capability of the morphing method used and on additional CFD constraints, like the need to impose a specified flow angle in the inlet region for a smooth transition to the upstream blade row. Note how the motion definition (circumferential slide), the surface mesh morphing (explicit or implicit, constrained, or left to be computed during the volume morph), and the volume mesh morphing elements of the morphing environment merge into a single workflow decision which has a direct consequence on the quality of the morphed mesh. Also note the flexibility of the modular environment: depending on the requirements, each of the steps can operate in a different mode. For example, we could freeze the inlet mesh or, in the case of a circumferential slide, precompute a new inlet mesh. The decision on how to operate should be derived from the motion definition constraints, the capabilities of the morphing, pre-processing, and simulation systems, and the projected morphed mesh quality for each of the possible workflows.

The surface mesh displacement (explicitly computed or implicitly constrained) serves as a boundary condition for the volume mesh morph. The role of volume morphing is


to distribute the surface mesh displacements into the domain volume in an "optimal" way. The definition of "optimal" depends on several factors, including the original mesh quality, the displacement amplitudes, and the local and global mesh quality criteria (e.g., boundary layer properties). When discussing the morphing environment, the emphasis is on modular structure: both the surface mesh and the volume mesh morphing parts of the environment should contain several different morphing modules instead of focusing solely on one method. Every modern commercial pre-processing package offers several meshing methods, and workflows for meshing complex models (manual, or automated based on feature recognition) break the process into several stages, each using the right meshing tool for the job. The same approach should be taken in mesh morphing: parts of the model might be best morphed using projection, others by parametric mapping, while the rest might be best served by interpolation.

2.2 Preprocessing Environment Interaction

The discussion now shifts one level up, to the interaction of the morphing environment with the wider set of pre-processing modules. A pre-processing environment can be loosely described as a collection of modules/capabilities, each performing a specific geometry or meshing task. The morphing environment is a subset of this pre-processing module set. Other subsets might include mesh generation modules (Delaunay, advancing front, overset, etc.) and geometry handling modules, but also capabilities like smoothing, domain decomposition, splicing, inflation, and grafting. In a dynamic environment, a set of such modules is pulled together into a workflow serving the target application space. The previous section illustrated how the interaction between morphing modules results in different system performance. Similarly, the interaction between the morphing environment and the wider pre-processing module set brings a new quality to modeling and enables morphing methods to work in their sweet zones. For example, this paper discusses the extension of a volume morphing method originally developed for the morphing of structured meshes: the method is grouped with shrink-wrap, automated blocking, and domain decomposition techniques to extend it to the morphing of unstructured meshes. The illustration in Fig. 3 shows how morphing and mesh inflation can be paired to transform a standard airfoil mesh (left) into a more complex model that contains an airfoil root fillet (right). In this case, the combination of inflation and morphing enables a transition between two geometrically and topologically different models, and a qualitatively new, higher-fidelity CFD simulation is enabled. Workflows like this are referred to as morphing-enabled systems, emphasizing the morphing role (although morphing is one of several technologies used to realize the application).


Fig. 3 Morphing + inflation workflow

2.3 Morphing Application Space

This section concludes with a few examples illustrating the CFD mesh morphing application space. The examples shown represent a small sample of what morphing can be used for. In each example, emphasis is placed on how the morphing capability was paired with other pre-processing modules to realize additional simulation benefits. It is worth noting that all the models shown were generated using the morphing method described in the next section.

When discussing the morphing application space, the usual association is re-design and optimization. A morphing-enabled workflow operates within the existing capability space, and its primary benefit is a short simulation cycle time. In a typical meshing workflow, time-to-mesh is anywhere from several minutes to an hour; an efficient morphing implementation can reduce time-to-mesh to just a few seconds. A redesign and optimization cycle usually requires hundreds of runs, which translates into large savings in computation and design time. In Fig. 4, several examples of airfoil redesigns are shown. In such workflows, morphing is usually paired with the underlying geometry (CAD) system used to design the component of interest, and knowledge of the CAD system is used to choose the best approach to surface mesh morphing. It is important to note that this is a morphing-to-target-geometry application, where airfoil displacements are given in the form of a new geometry model, as opposed to a point cloud model where a displacement field is explicitly specified on a point set surrounding the model. Note the ability to handle large changes in airfoil shapes, one of the features of the proposed morphing method.

The example shown in Fig. 5 represents the use of morphing for so-called cold-to-hot and hot-to-cold shape changes. Typical aero design is performed in the so-called hot state which, for aircraft engines, might correspond to airplane cruise speed. For manufacturing, these shapes need to be converted to so-called cold shapes (hot-to-cold transformation) and later transformed into shapes that represent different operating conditions (partial speed or overspeed). In these workflows, morphing is usually paired with mapping/interpolation used to map the displacements computed on a coarse structural mechanical model to a much finer CFD mesh. The primary benefit of a morphing-based workflow is the direct coupling of mechanical/thermal with aero modeling, compared to one-way and reduced-order implicit information sharing, leading to more accurate predictions.


Fig. 4 Airfoil re-design and optimization
Fig. 5 Hot–cold transformation

Morphing can also be used to extend the modeling capabilities of existing systems. Figures 3 and 6 show such uses of morphing. In the example in Fig. 6, morphing is used to study the secondary flow effects on airfoil thermal and aero performance by introducing a so-called leading edge bulb fillet. The primary benefit of morphing-based workflows in these cases is a much shorter lead time to modeling new capabilities, compared to modifying existing design systems to accept new shapes. It allows for fast evaluation and down-selection of new concepts; accepted concepts can then be added to production system modeling capabilities at a later stage. Finally, the example in Fig. 7 illustrates the use of morphing in a system modeling application. In this case, the morphing of a single airfoil passage mesh (like the one shown in Fig. 1) was paired with domain decomposition and splicing to enable the generation of a full-wheel turbomachinery mesh to study the effects of airfoil resequencing.


Fig. 6 Study of secondary flow effects on airfoil performance using leading edge bulb fillet
Fig. 7 Morphing for CFD system applications

3 Structured Mesh Morphing

In this section, the algorithm to distribute surface mesh displacements into the volume of a CFD domain is described. The discussion is restricted to structured CFD meshes. Structured CFD meshes (also known as body-fitted or mapped meshes) consist of one or more blocks, each with the topology of a cube. Block faces are associated with external surfaces of the domain or coincide with a face of another block (internal interfaces). Structured meshes do not have an explicit element structure: mesh nodes are accessed via a block number and local indices, and no explicit node connectivity data is stored in memory, since index values are sufficient to compute any "element"-related values (e.g. a stiffness matrix). Structured meshes are very efficient in discretizing high aspect ratio structures like aircraft wings and fuselages, wind turbines, leading and trailing edges of airfoils, and other regions where high aspect ratio elements are desirable (e.g. clearances). Their major drawback is the difficulty of discretizing complex domains, which limits their application space. However, when available, structured CFD meshes are preferred and used as the gold standard to benchmark solution quality.
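As a minimal illustration of this implicit-connectivity storage scheme (a sketch with hypothetical names, not the authors' implementation), a multi-block structured mesh can store only per-block coordinate arrays and recover cell connectivity purely from index arithmetic:

import numpy as np

class StructuredBlock:
    """One block of a multi-block structured mesh.

    Nodes are stored as a dense (ni, nj, nk, 3) coordinate array;
    "element" connectivity is implicit in the (i, j, k) indexing.
    """

    def __init__(self, ni, nj, nk):
        self.shape = (ni, nj, nk)
        self.coords = np.zeros((ni, nj, nk, 3))

    def node(self, i, j, k):
        return self.coords[i, j, k]

    def hex_cell_nodes(self, i, j, k):
        # The 8 corner nodes of cell (i, j, k) are recovered by index
        # arithmetic alone -- no stored element table is needed.
        return [self.coords[i + di, j + dj, k + dk]
                for dk in (0, 1) for dj in (0, 1) for di in (0, 1)]

class StructuredMesh:
    def __init__(self, blocks):
        self.blocks = blocks  # block number -> StructuredBlock

    def node(self, b, i, j, k):
        # Global node access: block number plus local indices.
        return self.blocks[b].node(i, j, k)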


Structured meshes also have properties that lend themselves to a rather straightforward morphing process. If one considers the domain as a collection of cuboids, the problem of domain deformation reduces to deforming each block so that it reasonably preserves a cube-like shape. In that case, we can expect the mesh inside each block to remain valid and conform to quality thresholds. The important message is that, instead of working at the element or vertex level, the process abstracts to the block level. This is a major benefit: even in the most complex structured meshes, the block count rarely exceeds a few hundred. Figures 8 and 9 illustrate blocking layouts for two model types at opposite ends of the spectrum. The airfoil passage mesh shown in Fig. 8 contains a very low number of blocks, often no more than 10. At the other end of the application space is a full aircraft model, for which a section detail is shown in Fig. 9. Such a mesh typically consists of several hundred blocks and is considered high-end in terms of complexity. Even for these models, the block count is negligible compared to the numbers of nodes and elements, which are in the millions. Hence, operations performed at the block level are computationally insignificant.

The second important property of structured meshes is their alignment with the main flow features. When looking at a cross-section of a structured mesh, one can follow a grid line in the boundary layer region that "flows" around the structure (e.g. a wing), a streamwise grid line direction aligned with the flow, and a cross grid line direction running from one domain boundary to another. These grid lines connect opposite surfaces of the domain, where the boundary conditions are specified in the form of surface displacements.

Fig. 8 Sample block layout: airfoil passage
Fig. 9 Sample block layout for aircraft model


The observations above can be used in an efficient two-step iterative scheme to interpolate boundary displacements into the domain. At the start of an iteration, a part of the domain is fully morphed. The rest of the domain is still in the baseline state and is active in the sense that morphing still needs to be performed on it. In the first step, a search is performed on the active domain to identify a block chain connecting opposite surface boundaries. Once the connecting block chain is established, the mesh in this connecting region is morphed using interpolation. This completes the iteration, after which the additional part of the domain (defined by the block chain) has been morphed. The iterative process repeats until the domain is fully morphed.

The two-step approach has important distinguishing features. First, the connectivity search step operates at the block level and naturally lends itself to built-in domain decomposition. Due to the low block count, even the most complex search and optimization algorithms are cheap. One has the freedom to collect all valid connectivity paths and then to use a variety of priority-driven decision algorithms to choose which ones to tackle first. The priority criteria can vary. "The most important region (to be morphed first)" could be defined by the application workflow. It might be the region where a shock is expected to form and where maintaining the morphed mesh quality is critical. It might also be the wake region, where tighter mesh resolution must be maintained. The order of morphing is important since the boundary constraints accumulate and the active domain gets smaller with each iteration. Having the opportunity to decide the morphing order increases the chances of obtaining a valid morphed mesh (for example, by first morphing the regions with the smallest elements or the region with the lowest mesh quality).

The second aspect in which this approach stands out is its explicit interpolation nature. The connectivity search step (with the associated priority decision) identifies the block chain to be morphed. This region has the topology of a cube, with opposite sides being surface mesh patches on domain surfaces (or on the surface of the previously morphed region). A set of topologically parallel grid lines connects these patches. For each connecting grid line, the displacements at both ends are known. To distribute them into the domain, one simply interpolates along the grid line using a chosen distribution law, the simplest being linear interpolation by grid line length. Irrespective of the interpolation function (a few choices are discussed at the end of this section), the computation of the displacement for each internal node is explicit and depends only on the displacements of the end nodes. There is no system of equations to solve and no matrix–vector multiplication involved. The computational cost is reduced to a single nodal sweep, with a single displacement calculation per node. Thus, the computational cost scales linearly with the mesh (nodal) count.

The iterative domain decomposition nature of the process allows the addition of steps to the morphing workflow to improve robustness and target CFD-specific features. First, note that in a typical CFD mesh, elements near surfaces have very small thicknesses and high aspect ratios (typically in the thousands) (Fig. 10). This region requires special treatment during mesh morphing since thin boundary layer elements cannot tolerate distortion. Relative vertex motion needs to be kept at a minimum,


particularly in the direction normal to the wall. In CFD workflows, the boundary layer mesh (region 0 in Fig. 12) is morphed first. It is morphed in a pointwise-rigid manner: each surface mesh point has an associated grid line emanating from the (airfoil) surface, and each node on the grid line is assigned the same displacement as the corresponding surface node. For most practical applications, this procedure generates a boundary layer mesh of the same quality as the original by preserving the orthogonality and the spacing at the wall. Figure 10 shows details of a boundary layer mesh for an airfoil re-design with an airfoil section rotated 10 degrees and a chord shortened by 5%. Note how the two meshes look almost identical near the airfoil surface.

The other important requirement is to ensure that boundary surface constraints are fully satisfied. For a CFD mesh morphing workflow, this typically includes a 1–1 periodic surface mesh match and the requirement that the bottom and top mesh layers lie on prescribed surfaces (hub or inner surface, and case or outer surface). To satisfy these requirements, one can modify the interpolation step described above to bi-directional (or tri-directional) interpolation for block chains that touch the surfaces in question. However, practical implementations typically use a simpler decoupled approach. During the iterative volume morphing stage, standard interpolation is performed along grid lines, and no special consideration is given to the block chains that touch the hub or case surfaces. As a result, the outer mesh layers may not lie on the designated surfaces. To rectify this, a post-interpolation step is added

Fig. 10 Boundary layer mesh preservation


in which surface mesh nodes are projected onto the designated surfaces, and the associated grid lines (emanating from the surface nodes) are appropriately stretched or contracted by interpolating the surface node displacement along the grid line (much like in the regular interpolation step of the iterative morphing process). To ensure a 1–1 periodic match, the surface mesh projection should be along the radial direction (passing through the axis of revolution).
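A minimal sketch of the explicit per-grid-line interpolation described above, assuming a grid line is given as an ordered array of node coordinates with known displacements at its two end nodes (linear-in-arc-length law; all names hypothetical):

import numpy as np

def morph_grid_line(coords, disp_start, disp_end):
    """Distribute end-node displacements along one grid line.

    coords     : (n, 3) node coordinates along the line
    disp_start : (3,) displacement of the first node
    disp_end   : (3,) displacement of the last node

    Each interior node receives a displacement interpolated linearly
    in arc length -- a single explicit formula per node, with no
    equation system to solve.
    """
    seg = np.linalg.norm(np.diff(coords, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    t = (s / s[-1])[:, None]          # normalized arc length in [0, 1]
    return (1.0 - t) * disp_start + t * disp_end

# Usage: morphed_coords = coords + morph_grid_line(coords, d0, d1)

One nodal sweep over all grid lines of a block chain therefore morphs the chain, which is what gives the method its linear scaling in node count.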

3.1 Morphing Workflow Example

The above method is embedded in a workflow to morph the structured CFD airfoil passage mesh shown in Fig. 12. The workflow follows the three-part approach to morphing discussed in Sect. 2, with the volume morph algorithm described above implementing part 3. The steps of the workflow are shown in Fig. 11.

The workflow begins with the surface motion definition. Whenever given a choice, the motion should be chosen to allow maximum flexibility during morphing. In this example of an airfoil redesign, the airfoil deformation is a user input. The redesign shown in Fig. 12 consists of a 10% chord reduction at 15% and 90% of the airfoil span and a 10% chord extension at 50% span. CFD workflow constraints fix the axial (flow-direction) locations of the inlet and exit surfaces, and the inner and outer diameter surfaces (hub and case) do not deform. Other than that, the external surfaces are allowed to move. To allow for maximum flexibility during morphing (and the best chance of good mesh quality after morphing), the inlet and exit surfaces are allowed to slide circumferentially, and the periodic surfaces are allowed to slide to replicate the airfoil motion in the middle of the domain. To ensure conformity, the periodic movement tapers from its value in the mid-domain to match the corner motions of the inlet and exit surfaces. While the hub and case surfaces do not deform, the surface mesh is allowed to slide along them.

In the surface morph part of the workflow, the motion definition is used to compute a morphed surface mesh where sufficient information exists, or to impose constraints on the volume morph where it does not. For this airfoil redesign, the morphed inlet and exit surface meshes are computed using angle shifts defined by the airfoil leading and trailing edge motion; a constant shift angle is applied to grid points on the same circumferential grid line. The airfoil surface motion and the morphed inlet and exit surface meshes, together with the periodic motion definition from part 1 of the workflow, provide sufficient information to compute the morphed periodic surface mesh (Fig. 2). The hub and case surface meshes are not computed at this time; the constraint that these meshes lie on their respective surfaces needs to be built into the volume morph part of the workflow.

The volume part of the workflow begins with the morphing of the boundary layer region (region 0 in Fig. 12). For every grid line emanating from the airfoil surface, and for every node on the grid line, the displacement of the corresponding surface node is applied. After completion of this step, several initialization steps are performed to allow the iterative part of the workflow to begin. The active domain and its boundary are defined (all of the domain except region 0). The interpolation law is set to linear along grid line length. A set of priority rules is defined to facilitate the priority


1 Motion Definition

Surface      Motion description
Airfoil      User given
Inlet/Exit   Circumferential slide
Periodics    Midchord: replicate airfoil mid-camber motion. Fwd/Aft: taper to conform to inlet/exit motion
Hub/Case     No surface deformation. Slip condition

2 Surface Morph

Surface      Surface mesh morphing details
Airfoil      User given
Inlet/Exit   Constant circumferential motion defined by airfoil LE/TE
Periodics    Midchord: replicate airfoil mid-camber motion. Fwd/Aft: linear taper to conform to inlet/exit motion
Hub/Case     Slip condition. To be enforced during volume morphing stage

3 Volume Morph

3.0 Boundary layer motion: constant displacement along grid lines off the airfoil
3.1 Initialize:
    Set active boundary to domain outer boundary and boundary layer outer surface
    Set active domain to full domain minus boundary layer
    Set interpolation law to linear along grid line arc length
    Set priority path criteria and initialize the priority path queue:
      1. By boundary condition: airfoil walls, periodic bc, inlet bc, exit bc
      2. By element quality: skewness (minimum angle)
      3. By path aspect ratio: from low to high aspect ratio
3.2 Iterative search and interpolate. While an active domain exists:
      1. Search all valid paths in the active domain
      2. Place paths in the priority queue
      3. Interpolate surface displacements into the highest-priority path block chain
      4. Update the active domain and the surface mesh displacements on its boundary
3.3 Finalize:
      1. Project surface mesh nodes of the first and last layers to hub/case
      2. Interpolate: distribute endwall displacements along spanwise grid lines

Fig. 11 Morphing workflow: airfoil passage

sorting of connectivity paths during the search step of an iteration. The rules are divided into subgroups, with the lower groups serving as tiebreakers when two or more equivalent paths exist. The order of the groups and the order of the boundary condition types are not arbitrary; they are driven by the application type. At the top of the priority are paths that connect parts of the model (for example, two airfoil surfaces in the case of a multi-airfoil model). The reason: the mesh quality near airfoil surfaces (or interfacing with the airfoil boundary layer) is of the highest importance for the quality


Fig. 12 Morphing process: iteration 1

of a CFD simulation. Similarly, the periodic boundary condition is placed in front of the inlet and exit since, in most airfoil meshes, the skewness of the mesh in the mid-passage (the region between the airfoil and the periodic surfaces) plays an important role in accurately resolving flow features like vortices and shocks. However, if the flow is expected to be subsonic (no shocks), one might prioritize wake mesh quality and place the exit surface in front. Similarly, if the simulation targets the interaction between blade rows in an aircraft engine, and this is the second airfoil in the setup, the quality of the inlet mesh region would be important to ensure proper flow feature transition to the front of this airfoil; in such cases, the inlet boundary condition should be prioritized in the workflow. (Note the importance of workflow flexibility: a one-size-fits-all approach does not apply here.)

With the priority rules and a priority queue in place, the iterative search-and-interpolate part of the process can begin. During the search step of an iteration, a set of paths (block chains) connecting the surfaces of the active domain is collected. For this example, the paths found in the first iteration are shown in the upper left part of Fig. 12 (labeled 1a through 1d). Using the priority rules, the paths labeled "1" are selected (upper right of Fig. 12). In the interpolate part of the iteration, a sweep is performed through the grid lines along direction "1" connecting the airfoil boundary layer


Fig. 13 Morphing process: iteration 2

and the periodic surface. For each grid line, the displacements of the end nodes are known (one end node on the periodic surface mesh, the other on the boundary layer region morphed in step 3.0). Using linear interpolation, the end node displacements are interpolated onto the nodes of the grid line. At the conclusion of the iteration, the intermediate mesh shown at the bottom right of Fig. 12 is obtained (labeled "1"). The process continues by iterating on the updated active domain (top left of Fig. 13), searching for and obtaining valid paths (labeled 2 and 3), selecting the paths labeled 2, and interpolating along the inlet grid lines to obtain the intermediate mesh stage "2" at the end of the second iteration (bottom right of Fig. 13). In the final iteration, the remaining exit region of the mesh is morphed in a similar way.

After the completion of the search-and-interpolate stage, the remaining task is to ensure that any surface constraints set in part 2 of the workflow are satisfied. In this example, surface mesh constraints are applied to the bottom and top layers of the mesh so that the final morphed mesh conforms to the target domain. Mesh nodes at the bottom layer are projected to the hub and nodes at the top layer are projected to the case surface. Radial projection (along a vector passing through the axis of rotation) is used to maintain rotational periodicity. In the final step, the end-node displacements are interpolated onto the nodes of the grid lines connecting the hub and case surfaces.
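A minimal sketch of the iterative search-and-interpolate loop (step 3.2 of the workflow), assuming a hypothetical active-domain API (empty, find_paths, update) and a user-supplied priority key; for simplicity the sketch morphs one chain per iteration, whereas an implementation may morph several equally ranked chains at once:

import heapq

def morph_volume(domain, priority_key, interpolate_chain):
    """Iterative search-and-interpolate volume morph (sketch).

    domain            : active-domain object exposing empty(),
                        find_paths() and update(chain) (hypothetical)
    priority_key      : maps a block chain to a sortable tuple, e.g.
                        (bc_rank, skewness_rank, aspect_ratio)
    interpolate_chain : distributes the known end displacements along
                        the topologically parallel grid lines of a chain
    """
    while not domain.empty():
        # 1. Collect all valid block chains connecting surfaces of the
        #    active domain (cheap: this operates on blocks, not nodes).
        queue = [(priority_key(c), i, c)
                 for i, c in enumerate(domain.find_paths())]
        heapq.heapify(queue)
        # 2. Morph the highest-priority chain; remaining chains are
        #    re-discovered next iteration on the shrunken active domain.
        _, _, chain = heapq.heappop(queue)
        interpolate_chain(chain)
        # 3. Remove the morphed chain from the active domain; its new
        #    boundary displacements constrain subsequent iterations.
        domain.update(chain)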

3.2 Method Performance

The discussion of method performance includes an assessment of runtime, scalability, and the ability to handle large displacements.


Fig. 14 Method performance: airfoil redesign

As previously noted, the bulk of the computational cost lies with the interpolation stage of the volume morph, which is implemented as a single nodal sweep. Hence, it is expected that the method scales linearly with the nodal count and that the run time is short. The graph in Fig. 14 shows the runtime performance of the method for the workflow example described in the preceding section. For the scalability test, the mesh count was varied from 0.1 to 40 million nodes. The method indeed achieves linear scalability with respect to the nodal count. From the same graph, one can see that a mesh with 25 million nodes morphs in about 2 s on a single CPU. (The reader should focus on the ballpark run-time performance rather than the exact value, which depends on the system hardware and the implementation of the workflow.)

The ability to handle large displacements is tied to the key feature of the method: the interpolation of surface displacements along grid lines connecting domain surfaces. The goal is to produce a morphed mesh that satisfies quality thresholds. In real-life applications, the quality of a mesh is determined using a set of metrics that depends on application specifics. For this presentation, aspect ratio and element determinant are chosen to demonstrate the method's performance. Aspect ratio plays an important role in characterizing the boundary layer mesh region and is used on a relative scale: for reliable CFD, the aspect ratios of the morphed and baseline meshes in this region should be very similar. The determinant, on the other hand, is used on an absolute scale: all elements should have a determinant larger than the threshold value set by the design practice. The quality of the mesh should be assessed separately for the boundary and far-field regions. (The CFD mesh has different properties in these regions, and the morphing method differs accordingly.)

As described above, the application of constant displacement to the nodes of grid lines emanating from boundary wall surfaces preserves element shape. Therefore, minimal changes are expected during morphing for this mesh region. The table in


Fig. 14 illustrates the performance of the method for the airfoil redesign example: aspect ratio and determinant in the near-field mesh region are essentially unchanged during morphing.

For the far-field region, during every iteration, end nodal displacements are interpolated onto all topologically parallel grid lines of the block chain that connects the domain surfaces. Critical to quality is the level of distortion added to an element during interpolation, which in turn is directly related to the relative displacements of the element vertices. Neighboring grid lines undergo the same interpolation law over similar arc lengths and with similar nodal distributions along the grid lines. If the displacements of neighboring end nodes on the surfaces of the domain are not drastically different, the interpolation process will result in an element distortion level that can be tolerated by the original element shape. A good quantitative measure to gauge whether a given displacement field can be handled by interpolation is the non-dimensional relative displacement metric. It is computed at the element level as the maximum nodal relative displacement divided by the average element length. (Note that this metric is directional: a separate value is computed for each of the three grid directions.) A value of 1 would mean that the maximum relative displacement of the nodes of the element equals the element size. This metric depends mainly on the slope of the surface displacements; the choice of the interpolation law has some influence, but to a much lesser degree. During morphing, this relative displacement is "added" to the pre-existing element distortion of the baseline mesh, so how large a displacement the morphing can handle also depends on the quality of the originating mesh. Experience with the proposed method shows that fields with a maximum non-dimensional relative displacement metric smaller than 0.25 are handled well during morphing, which spans many real-life CFD applications like optimization, airfoil flutter, and wind turbine vibrations, as illustrated earlier in this paper. The table in Fig. 14 illustrates the performance of the method for the airfoil redesign example: changes to both the determinant and aspect ratio metrics in the far-field mesh region are minimal.

As hinted above, the choice of interpolation law plays a role in the final mesh quality. More importantly, it determines the shape and smoothness of the displacement field in the mesh domain. The simplest choice is linear interpolation (by grid line length). It results in a C0 displacement field, with slope discontinuities occurring at the boundaries of the domains defined in each iterative step. Higher-order interpolation can be used to ensure the desired level of smoothness of the displacement field. Figure 15 illustrates a sample grid line in the morphing region "1" of the first iterative step of the airfoil redesign example. The grid line connects the boundary region (marked as "0") with the periodic surface. The straight blue line depicts the displacement field distribution along the grid lines "1" (in region 1) and "0" (in the boundary layer region) when linear interpolation is used, under the assumption of a frozen (no motion) periodic surface. Dashed lines correspond to the motion of the respective region on the opposite side of the airfoil. The displacement field exhibits slope discontinuities at the interface to the boundary layer region and at the location of the periodic boundary. For some applications, this


Fig. 15 Interpolation law choice

discontinuity may not pose a problem. The morphed mesh may have a slightly higher or lower element growth ratio at the locations of the discontinuities, but this change may be within the variations already present in the original mesh. However, the smoothness of the displacement field might be important for other applications (e.g. fluid–structure interaction) where the mesh motion plays a role in the numerical scheme. In such cases, a switch to higher-order interpolation (shown as the red curve in Fig. 15) might be needed.
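As a concrete reading of the relative-displacement screening metric defined in Sect. 3.2, here is a minimal per-element sketch; approximating the average element length by the average node-pair distance (rather than a per-grid-direction edge length) is an assumption of this sketch, not the authors' exact definition:

import numpy as np
from itertools import combinations

def relative_displacement_metric(coords, disp):
    """Non-dimensional relative displacement of one element (sketch).

    coords, disp : (n, 3) element node coordinates and displacements.
    Returns the maximum nodal relative displacement divided by an
    average element length; a directional variant would evaluate this
    separately for each of the three grid directions.
    """
    pairs = list(combinations(range(len(coords)), 2))
    rel = max(np.linalg.norm(disp[i] - disp[j]) for i, j in pairs)
    size = np.mean([np.linalg.norm(coords[i] - coords[j])
                    for i, j in pairs])
    return rel / size

# Screening rule reported in the paper: displacement fields with a
# metric below roughly 0.25 are handled well by the morphing.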

4 Unstructured Mesh Morphing

The morphing approach outlined in the previous section has several strong points: it robustly handles large mesh motion, is very fast, and scales linearly with mesh count. Its main drawback is that it is limited to structured CFD mesh applications. This section describes the extension of the above approach to the morphing of unstructured meshes. The goal is to retain the method's strengths while relaxing the requirement of a structured baseline mesh.

The main idea for the extension originates from the way the structured mesh is morphed. Special attention was paid to the boundary layer mesh region in the vicinity of walls, where relative mesh vertex motion is kept to a minimum and the prescribed displacement is diffused into the volume only as one moves away from the walls. This idea can be extended to unstructured meshes, provided the domain can be split into near-field and far-field regions. To realize such a split, two actions need to be performed: a boundary between the two regions needs to be defined, followed by a decomposition step.

Boundaries between the near and far-field mesh regions should be defined automatically (any user intervention defeats the purpose of fast morphing). The shape of the boundary should be simple and not dependent on surface details, yet it should resemble the global model features to stay reasonably close to the structure. This is precisely the set of criteria used in shrink-wrap technology, where the idea is to walk over small features and retain only the global shape of the structure. For the mesh split, the emphasis is on simplicity and automation; the length of the surface offset used during shrink wrap is not of primary concern. The previous section used simple airfoil models to showcase the structured morphing approach. Imagine now that those airfoil models come with all the complexity of real hardware: tip squealers, cooling holes, vortex generators, tip shrouds, and part-span shrouds, to name a few. Inevitably, such models would need to be discretized in an unstructured manner. However, from a


Fig. 16 Swinging wrecking ball model

10-foot distance, the model would still look like an airfoil. One can provisionally scan the baseline surface mesh model, remove all local features (for example, section by section), compute simplified cross-sections, and "wrap" all exotic dimples, undulations, and the like by offsetting sufficiently far outward. Shrink-wrap technology is not new, and it is readily available in many commercial meshing packages today. Additionally, many structures of interest can be shrink-wrapped with very simple actions: aircraft wings and fuselages, wind turbines, and turbomachinery blades can all be approximated with primitive cylindrical structures. Once the wrap surface is defined, domain decomposition can be used to separate the originating mesh into near-field (elements inside) and far-field (elements outside) regions.

A simple model of a swinging wrecking ball is used to illustrate the steps of the unstructured mesh morphing process. The baseline mesh model and the prescribed displacement field are shown in Fig. 16. The model consists of a ball hanging on a rope attached to a crane; the ball swings toward the crane as shown. Figure 17 illustrates the shrink-wrap model around the rope-ball model. Note that in a real application, the rope and ball could be replaced with, for example, an aircraft wing with an arbitrary level of geometric detail; the shrink-wrap model could still be a circular cylinder. The mesh decomposition into the near-field region around the rope (red), on-the-fence elements (white), and the far-field mesh region (light blue) is shown in Fig. 18.

The mesh morphing process splits into two stages. First, the nodes (elements) of the near-field mesh region are moved. As in the case of structured mesh morphing, the goal is to preserve the mesh characteristics as much as possible. The approach, again, is to distribute surface displacements within the near-field region in a pointwise-rigid way to minimize boundary layer mesh distortion. Surface displacements are propagated through the prism boundary layer in the same way as for structured meshes, by applying constant motion along the mesh line emanating from the surface and going through the thickness of the boundary layer. From there, displacements are propagated one element layer at a time until all elements of the inner mesh region are exhausted. In doing so, the vertices of the tetrahedral elements of the previous layer serve as the sources to compute the displacements of the vertices of the elements in the next layer. Various weighting schemes can be used to propagate the displacements in the outward direction, starting from simple


Fig. 17 Shrink wrap of the ball-crane model

Fig. 18 Mesh decomposition

unweighted averaging, but also considering local model curvature, element quality, distance proximity, etc. The layer structure and displacement propagation scheme for a cross-section of the rope in the ball model and for a simple airfoil model are shown in Fig. 19.

After all vertices of the inner mesh region have been assigned a displacement value, the morphing of the far field can start. Following the logic of the structured mesh morphing process, the outer boundary of the inner mesh region takes the role of the active region boundary. Unlike the structured mesh case, there is no underlying block structure or structured grid lines to serve as the interpolation medium. However, the shape of the domain boundary is simple (from the shrink wrap), and that boundary surface is readily discretized using a structured surface mesh (see Fig. 17 for the example). This is where another tool from the pre-processing suite, automated blocking generation, is used to generate a structured background mesh over the far-field mesh region with the shrink-wrap surface mesh at its boundary. Assuming such


Fig. 19 Layer progression for inner mesh displacement distribution

a mesh is generated, the following actions can be taken: (i) project the displacements of the outer layer of the inner mesh region onto the shrink-wrap surface, (ii) morph the structured background mesh to distribute the displacement field into the domain, and (iii) project the computed displacement field from the structured background mesh onto the nodes of the baseline unstructured mesh.

The steps listed above are now analyzed in more detail. At the heart of the process is the assumption that the structured background mesh can be readily generated. Indeed, the far-field mesh region has a simple shape (the shrink-wrap boundary). Automated block generation methods can be used to decompose the region and generate the background structured mesh. Such processes can be found in commercial packages that operate on turbomachinery CFD meshing; more sophisticated methods capable of discretizing complex domains are described in [10–13], among others. Using these methods, a background structured mesh can be automatically generated for the target class of applications (airfoil turbomachinery, fuselage-wing, wind turbine, etc.). The background mesh for the rope-ball model is shown in Fig. 20. Note that the structured mesh serves only as a background medium to compute and distribute the target displacement field into the far-field mesh region, where the elements are typically large (much larger than in the near-field region; see Figs. 18 and 19 for examples). These elements can tolerate larger relative nodal displacements, including numerical "imperfections" that might arise from using a coarser background mesh. The message here is that the background mesh does not need to be perfect in terms of resolution and smoothness. The resolution should be sufficient to allow for a reasonably smooth displacement field but can be much coarser than the original unstructured mesh.

Once the background mesh is available, the next step is to compute the displacements on its surface, the shrink-wrap surface. The shrink-wrap surface is fully covered by the outer layer of near-field region elements (the white elements in Fig. 18). Simple, element-wise projection can be used to compute the displacements for each of the nodes on the shrink-wrap surface. Note that this step is local (localized to each element of the interface) and explicit in nature. Displacements for each node on the


Fig. 20 Structured background mesh

shrink-wrap surface are computed by interpolation within the element of the original mesh that contains it. From there, the structured background mesh is morphed using the method described in the previous section. This step is the key element of the process, since it allows for explicit, fast, and scalable displacement field computation. By switching to a structured mesh to compute the displacement field, the time-consuming connectivity-related issues of unstructured meshes, which typically manifest as expensive implicit schemes for displacement computation, are bypassed. Finally, the displacement field computed on the nodes of the structured mesh needs to be projected onto the nodes of the original unstructured mesh. As in the forward projection step above, this process is also local and explicit in nature: there is no system of equations involved, just pure element-wise interpolation.

Figures 21, 22, and 23 illustrate the steps outlined above for the swinging ball model. Figure 21 shows a comparison of the baseline and morphed meshes for a few cross-sections of the inner mesh region. The displacement of the shrink-wrap surface is shown in Fig. 22, and the motion of the background mesh is shown in Fig. 23. Finally, the mesh motion for the far-field (outer) mesh region for a cross-section of the rope-ball model is shown in Fig. 24.
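A minimal sketch of the layer-by-layer near-field displacement propagation described in this section, assuming a precomputed layer decomposition and a node adjacency map (hypothetical data layout; the unweighted average is the simplest of the weighting schemes the text mentions):

import numpy as np

def propagate_displacements(layers, adjacency, disp):
    """Propagate near-field displacements layer by layer (sketch).

    layers    : list of node-index lists; layer 0 holds nodes whose
                displacements are already known (boundary layer outer
                surface)
    adjacency : node index -> iterable of neighboring node indices
    disp      : (n_nodes, 3) array with layer-0 rows already filled

    Each node in a new layer receives the unweighted average of its
    already-displaced neighbors; curvature-, quality- or distance-
    weighted averages could be substituted. Assumes every node in a
    layer has at least one displaced neighbor, which holds for an
    advancing-layer decomposition.
    """
    known = set(layers[0])
    for layer in layers[1:]:
        new_disp = {}
        for n in layer:
            sources = [m for m in adjacency[n] if m in known]
            new_disp[n] = np.mean([disp[m] for m in sources], axis=0)
        for n, d in new_disp.items():   # commit after the full sweep
            disp[n] = d
        known.update(layer)
    return disp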

5 Concluding Remarks

This presentation on mesh morphing has two goals. The first half of the paper discusses the elements of a mesh morphing environment and the interaction of the morphing system with a wider set of pre-processing techniques. Mesh morphing can reach its full potential only when approached holistically, and several examples are shown demonstrating how such interactions lead to qualitatively new simulation capabilities. In the second part, the focus is narrowed to volume mesh morphing. An explicit interpolation-based approach to the morphing of structured meshes is proposed. The method handles large displacements, has a fast run time, and scales linearly with


Fig. 21 Near-field mesh morphing

Fig. 22 Displacement of the shrink-wrap mesh


Fig. 23 Motion of the background mesh
Fig. 24 Baseline and morphed mesh comparison

mesh count. The proposed method is then extended to the morphing of unstructured meshes by combining it with several pre-processing techniques, demonstrating the benefit of the system-level approach.


References

1. M.L. Staten, S.J. Owen, S.M. Shontz, A.G. Salinger, T.S. Coffey, "A comparison of mesh morphing methods for 3D shape optimization", Proceedings of the 20th International Meshing Roundtable, pp. 293–311 (2011)
2. M. Alexa, "Recent Advances in Mesh Morphing", Computer Graphics Forum, Vol. 21(2), pp. 173–192 (2002)
3. A. de Boer, M.S. van der Schoot, H. Bijl, "Mesh Deformation Based on Radial Basis Function Interpolation", Computers & Structures, Vol. 85(11), pp. 784–795 (2007)
4. M.E. Biancolini, "Mesh Morphing and Smoothing by Means of Radial Basis Functions (RBF): A Practical Example Using Fluent and RBF Morph", Handbook of Research on Computational Science and Engineering: Theory and Practice, Vol. 2, pp. 347–380 (2011)
5. T. Rendall, C.B. Allen, "Efficient Mesh Motion Using Radial Basis Functions with Data Reduction Algorithms", Journal of Computational Physics, Vol. 228(17), pp. 6231–6249 (2009)
6. D. Sieger, S. Menzel, M. Botsch, "High Quality Mesh Morphing Using Triharmonic Radial Basis Functions", Proceedings of the 21st International Meshing Roundtable, pp. 1–15 (2012)
7. M.E. Biancolini, A. Chiappa, U. Cella, E. Costa, C. Groth, S. Porziani, "A Comparison Between the Bi-harmonic Spline and the Wendland C2 Radial Function", Proceedings of the 2020 International Conference on Computational Science, pp. 294–308 (2020)
8. M.E. Biancolini, A. Chiappa, F. Giorgetti, S. Porziani, M. Rochette, "Radial basis functions mesh morphing for the analysis of cracks propagation", Proceedings of AIAS 2017 International Conference on Stress Analysis, pp. 433–443 (2018)
9. M. Morelli, T. Bellosta, A. Guardone, "Efficient Radial Basis Function Mesh Deformation Methods for Aircraft Icing", Journal of Computational and Applied Mathematics, Vol. 392 (2021)
10. D.L. Rigby, "TopMaker: A Technique for Automatic Multi-Block Topology Generation Using the Medial Axis", NASA Technical Report NASA/CR-2004-213044 (2004)
11. D. Guoy, J. Erickson, "Automatic Blocking Scheme for Structured Meshing in 2D Multiphase Flow Simulation", Proceedings of the 13th International Meshing Roundtable, pp. 121–132 (2004)
12. I. Malcevic, "Automated Blocking for Structured CFD Gridding with an Application to Turbomachinery Secondary Flows", 20th AIAA Computational Fluid Dynamics Conference, AIAA 2011-3049 (2011)
13. H.J. Fogg, C.G. Armstrong, T.T. Robinson, "New techniques for enhanced medial axis based decompositions in 2-D", Proceedings of the 23rd International Meshing Roundtable, pp. 162–174 (2014)

Mesh Adaption and Refinement

A Method for Adaptive Anisotropic Refinement and Coarsening of Prismatic Polyhedra
Sandeep Menon and Thomas Gessner

1 Introduction

Polyhedral meshes have become established over the past two decades and are now widely supported in commercial and academic CFD codes [1, 2, 18]. For the finite volume method [7], polyhedral meshes combine the ease of use of unstructured tetrahedral mesh generation with the superior numerical properties [12, 14] of hexahedral or structured meshes, which are more challenging to generate automatically. The finite volume method does not impose any restrictions on the number of faces bounding a control volume in the mesh and, consequently, on the number of neighbors per cell. The larger number of neighboring cells enables an enriched stencil and therefore a better approximation of gradients. This results in improved solution accuracy and faster convergence with fewer cells when compared to tetrahedral meshes [12, 14], and an overall gain in computational efficiency.

Adaptive mesh refinement for numerical simulations has a rich history spanning nearly four decades [4, 13], with the intention of balancing numerical accuracy against computational cost. Traditional methods for mesh refinement employ templates based on cell type and, in the case of isotropic refinement, split these cells equally along all directions. Anisotropic methods have also been explored at various times as a means of reducing the computational cost further by preferring certain directions for refinement while ignoring others that are irrelevant to the physics being resolved. Metric-based anisotropic methods, introduced for three-dimensional meshes in [15], are most commonly used. On triangle/tetrahedral cells, these methods have the attractive ability to align closely with significant flow features such


as shocks and fluid interfaces, which significantly reduces computational cost while maintaining accuracy within the bulk of the domain [3, 5]. But since metric-based methods are generally tied to specific mesh cell types or grid hierarchies [6], they do not leverage the full flexibility of the polyhedral meshes that are widely used for finite volume discretizations. Furthermore, these methods are generally unable to recover the original mesh. Simplicial meshes also come with the drawback of excessively diffusive solutions when discretized with a second-order finite-volume scheme.

Due to the increased popularity of polyhedral meshes and of hybrid meshes that combine polyhedral boundary layer elements with size-field-based hexahedral meshes away from the boundaries [19], it becomes imperative to accommodate these types of meshes for adaptive mesh refinement when possible. This was the impetus behind the PUMA algorithm [9], which was devised for Ansys Fluent and is widely used for that purpose today. The PUMA method can be regarded as a generalization of the hexahedral refinement template to arbitrary polyhedra and can therefore accommodate all traditional cell types such as tetrahedra, hexahedra, prisms, and pyramids as well.

2 Methodology

Prior to describing anisotropic mesh refinement, it is necessary to define the terms used for isotropic refinement, since they become significant during the discussion of transitions from refined to un-refined areas of the mesh and of the compatibility between isotropic and anisotropic mesh adaptation. The definitions in this section are generalized for anisotropy at a later stage.

2.1 Isotropic PUMA Terminology

The concept of a "refinement level" is introduced for mesh entities (nodes, faces, and cells): a numerical value denoting the position in the refinement hierarchy (see Fig. 1). It is initialized to zero for an unrefined mesh and is incremented by one for each additional level of refinement. In general, the mesh adaptation is constrained such that adjacent cells do not differ by more than one level of refinement.

The isotropic refinement algorithm begins with the faces of the original polyhedral cell, referred to in this context as "parent" faces or cells, which are subsequently split into "child" faces and cells after refinement. For each face of the polyhedral cell, a mid-node is introduced. The coordinate of this mid-node is typically the face centroid, but it can be adjusted based on other conditions such as mesh quality. The quality metric used for polyhedral cells is based on the alignment between the vector pointing from one adjacent cell to the other and the face normal, as in [12]. Avoiding


Fig. 1 Isotropic refinement levels at nodes

large angles between these vectors is crucial to obtaining acceptable results for a finite-volume discretization. For each edge of the face under consideration, a new mid-node is introduced, typically at the edge centroid. For anisotropic refinement within boundary layers, it is sometimes convenient to choose a mid-edge location closer to one of the nodes. The fraction of the original edge length from the first node to the splitting point is defined as the split ratio; for the edge centroid, the split ratio would be 0.5. The refinement level of the new mid-edge node is set one increment above the current face refinement level. For each node in the parent face, a new quadrilateral child face is created by connecting the mid-edge nodes with the mid-face node and the original face node.

A node is then introduced at an appropriate location within the polyhedral cell. The cell centroid is a convenient choice, but this can be altered based on the requirements of mesh quality after refinement. For each node in the parent cell, a new child cell is created by connecting the mid-edge nodes with the mid-face nodes (X_f0, X_f1, X_f2), the mid-cell node (X_c), and the original cell node (X_n) (see Fig. 2). It is worthwhile to note that the isotropic refinement of arbitrary polyhedra typically results in a mesh that is largely hexahedral (see Fig. 3), so subsequent refinements can be optimized to deal with hexahedral cells.

At the transition between refined and non-refined cells, the connectivity of the non-refined cells must be updated to account for the additional nodes and faces arising from refinement. This can be done quite efficiently by tagging adjacent cells as they are being refined and then processing the tagged cells for updates after the refinement step is complete.
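A minimal sketch of the two node-placement rules just described (function names hypothetical); the split ratio is the fraction of the edge length from the first node to the splitting point:

import numpy as np

def mid_edge_node(p0, p1, split_ratio=0.5):
    """New mid-edge node introduced during refinement.

    A split ratio of 0.5 places the node at the edge centroid;
    boundary-layer refinement may prefer a value closer to the wall.
    """
    return (1.0 - split_ratio) * np.asarray(p0) + split_ratio * np.asarray(p1)

def mid_face_node(face_coords):
    """Default mid-face node: the face centroid. As described above,
    this location may be adjusted to improve mesh quality."""
    return np.mean(np.asarray(face_coords), axis=0)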


Fig. 2 Subtending a child to cell mid-point

Fig. 3 Exploded view of a refined polyhedral cell

2.2 Anisotropic PUMA Terminology

Anisotropic PUMA is a specialization of the isotropic method for prismatic polyhedral cells. These cells are typically encountered in the boundary layers of viscous polyhedral meshes and consist of polygonal lower/upper faces with an equal number of nodes, connected by quadrilateral faces on the sides. Once these cells have been identified within the original polyhedral mesh, two modes of anisotropic mesh adaptivity can be defined, namely tangent (Fig. 4) and normal (Fig. 5) refinement. It is immediately apparent that both modes of anisotropic refinement split a prismatic polyhedral cell in a specific direction while avoiding the other. This allows mesh refinement to be guided towards flow features that have a directional bias, while avoiding increased mesh resolution in directions that do not require it. Tangent refinement is well suited for turbulent flows with specific y+ requirements, because it increases mesh resolution in the wall-normal direction while keeping the span-wise resolution intact. Normal refinement is typically used to reduce the aspect ratio of


Fig. 4 Tangent refinement

Fig. 5 Normal refinement

Fig. 6 Sequential application of anisotropic modes for isotropic refinement

prismatic cells in a boundary layer. An example for which high aspect ratio prisms can be challenging is overlapping overset meshes [8], where local normal refinement can reduce cell size jumps between meshes, resulting in a robust mesh intersection without orphan cells [11]. Both modes can also be applied sequentially to a prismatic parent cell in order to achieve isotropic refinement within the boundary layer, as shown in Fig. 6. In the upper transition, normal refinement is applied to the parent cell, followed by tangent refinement on each of the child cells. For the lower transition, tangent refinement is applied first, followed by normal refinement on the two child cells. Both transition modes yield the same isotropic result.

The first step when performing tangent or normal refinement is to identify the prismatic cells in a given mesh. Using the cell type makes this trivial for wedge elements, but hexahedral and polyhedral prisms require additional flagging of top and bottom faces to define the normal or tangent direction. This is typically done by visiting the boundary faces of the mesh and checking whether adjacent cells are prismatic, i.e.,


Fig. 7 Separation of anisotropic refinement levels

they possess unique top and bottom faces with an equal number of nodes and have quadrilateral side faces. Thereafter, each subsequent layer of prismatic cells is detected and flagged by a face-cell walk through the top and bottom faces discovered in the previous sweep, until cells are no longer prismatic. In situations involving hexahedral cells adjacent to multiple mesh boundaries, the top and bottom faces are no longer unique, and such cells are therefore ineligible for anisotropic refinement.

While a scalar refinement level is sufficient to describe isotropic refinement, the anisotropic situation requires distinguishing between levels for each refinement mode. Every node, face, and cell in the prismatic boundary layer is now described with a normal and a tangent refinement level, with the isotropic refinement level set to the maximum of both. Taking the example of tangent refinement, both child cells will possess a tangent refinement level that is one level higher than the parent's, while the normal refinement level remains the same. The converse occurs for normal refinement. It follows naturally that the mid-node of a face matches the isotropic refinement level, and this characteristic can be used to transition seamlessly between isotropic and anisotropic regions of the mesh during refinement. This scheme is depicted in Fig. 7 for the anisotropic refinement of a wedge cell, but it extends to any arbitrary prismatic polyhedral shape. The first image shows the original wedge cell with both normal and tangent refinement levels at the nodes initialized to zero. The second image shows one level of normal refinement into three prismatic child cells, where the mid-nodes possess a normal refinement level of one while the tangent refinement levels remain at zero. The final image shows the tangent refinement of one of the child cells, where the tangent refinement level is incremented by one while the normal levels remain unchanged.
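A minimal sketch of this two-component level bookkeeping (a hypothetical data type, not the Fluent implementation); the isotropic level is derived as the maximum of the two directional levels, which is what allows the seamless isotropic/anisotropic transition described above:

from dataclasses import dataclass

@dataclass(frozen=True)
class RefinementLevel:
    """Per-entity refinement level for anisotropic PUMA (sketch)."""
    normal: int = 0
    tangent: int = 0

    @property
    def isotropic(self):
        # The isotropic level is the maximum of both directional levels.
        return max(self.normal, self.tangent)

    def refine_normal(self):
        return RefinementLevel(self.normal + 1, self.tangent)

    def refine_tangent(self):
        return RefinementLevel(self.normal, self.tangent + 1)

# Example following Fig. 7: one normal then one tangent refinement.
level = RefinementLevel().refine_normal().refine_tangent()
assert (level.normal, level.tangent, level.isotropic) == (1, 1, 1)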


Fig. 8 Normal refinement through the stack

Tangent refinement is typically the preferred mode in the boundary layer, as it aligns with the flow direction and captures viscous effects quite well. However, a common scenario is the introduction of isotropic refinement via the cell capping the prism layer. In this situation, it is important to refine the entire prism stack with normal refinement to maintain the one-level balance constraint and to avoid a local degradation of the cell quality as introduced above. Naturally, when cells in the boundary layer are already marked for tangent refinement, the introduction of normal refinement results in those cells being refined isotropically. This is depicted in Fig. 8.

2.3 Coarsening with PUMA

Coarsening is the process of reverting changes due to refinement in order to recover the original mesh. This typically requires maintaining some form of refinement history, which describes the relationship between parent and child entities. One possible approach is to retain all parent entities after refinement and reinstate them during coarsening after discarding their children. However, this leads to a significant increase in memory usage and is only applicable to the hanging-node style of adaptive mesh refinement [13, 16], where the cell type typically does not change. The coarsening step is also constrained such that the one-level balance between adjacent cells is maintained. The PUMA approach for maintaining refinement history described in this section is very lean in terms of memory usage and is applicable to both isotropic and anisotropic coarsening.

The first step is to recognize that parent faces and cells are not stored during the refinement process and must therefore be recovered by agglomerating children. This requires the identification of all children that belong to a given parent. A lean way to achieve this is to define a unique "parent index" that is assigned to each


Fig. 9 Refinement history for first level

Fig. 10 Refinement history for second level

child face or cell of the parent being refined. This can be any arbitrary integer, with the only constraint that all children of a parent must possess the same unique value. This is depicted in Fig. 9 for one level of refinement of an element (face or cell) into 4 children. During the refinement step, a unique parent index is generated (in this example, the integer 735) and assigned to each child. Elements with a parent index of zero denote the coarsest level of the mesh. For a second level of refinement in this example, element 2 is refined into 4 children, while the other elements are left unrefined. In this case, another unique index is generated (the integer 842 in this example) and assigned to each child, as shown in Fig. 10. At this point, however, element 2 no longer exists in the mesh (since parent elements are not stored), and therefore a separate "history map" is introduced, with a single entry mapping 842 to 735. During the coarsening stage, for all elements that share index 842, the parent is first created by agglomerating all children, followed by a lookup in the history map to determine the parent index for the new element (namely, index 735). This approach can be repeated for each additional level of refinement, and the only storage cost incurred for each new element is a single integer, along with one history map entry per refined parent.
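A minimal sketch of this bookkeeping (hypothetical class and names, not the Fluent implementation), reproducing the 735/842 example from the text; validity checks are omitted for brevity:

from itertools import count

class RefinementHistory:
    """Lean parent-index history for PUMA-style coarsening (sketch).

    All children of a refined element share one unique parent index;
    a parent index of 0 marks the coarsest level. The history map only
    gains an entry when the refined element itself had a parent.
    """

    def __init__(self):
        self._next = count(start=1)
        self.parent_index = {}   # element id -> its parent index
        self.history = {}        # parent index -> grandparent index

    def refine(self, element, children):
        pidx = next(self._next)
        old = self.parent_index.pop(element, 0)
        if old != 0:
            self.history[pidx] = old   # e.g. maps 842 to 735
        for child in children:
            self.parent_index[child] = pidx
        return pidx

    def coarsen(self, children, new_parent):
        # All children share the same parent index by construction.
        pidx = self.parent_index[children[0]]
        for child in children:
            del self.parent_index[child]
        # The recreated parent recovers its own parent index (0 if it
        # belongs to the coarsest level).
        self.parent_index[new_parent] = self.history.pop(pidx, 0)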


Fig. 11 Edge adjacency while coarsening faces

Once all children of a parent have been determined, the actual process of coarsening first involves the detection of common interior entities (i.e., interior edges for child faces and interior faces for child cells), which are subsequently marked for removal from the mesh. For groups of child cells, discovering common faces among them is a trivial step. Once these interior faces have been identified, they are removed from the mesh, leaving the bounding child faces on the parent cell. The next step is to identify whether child faces sharing the same parent index point to the same parent cells on each side (or just one side for boundary faces), which indicates that these child faces are candidates for coarsening into a parent face. Child faces that point to different cells are left in the refined state.

While coarsening child faces, there is the additional constraint that the resulting edges must form a counterclockwise chain around the parent face centroid/normal, according to the right-hand rule. This can be achieved by adding the directed edges of all child faces into an edge adjacency graph, as shown in Fig. 11. The next step is to loop through each node, check its adjacency list, and remove duplicate edges for each node on that list. For example, while testing node 0, the only entry in its adjacency list is node 1; node 1 has only nodes 8 and 2 on its list, not node 0, so the adjacency list is left unmodified. While checking node 1, it is seen that node 8 is on its adjacency list, and node 8 also has node 1 on its adjacency list, indicating a duplicate interior edge that can be removed. This process is repeated for subsequent nodes, and a single pass through the list is sufficient to remove the duplicates. The final graph contains a list of nodes with a single entry in each adjacency list, indicating the next node in the chain. At this point, constructing the parent face is as simple as picking a node and following its adjacent node, adding each one to a list until the first node is reached, thereby completing the chain.
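A minimal sketch of this duplicate-edge removal and chain walk (function name hypothetical), assuming each child face is given as a counterclockwise list of node indices; interior edges appear once in each direction, so both directions are dropped:

def agglomerate_face(child_faces):
    """Recover a parent face loop from its child faces (sketch)."""
    adj = {}
    for face in child_faces:
        for a, b in zip(face, face[1:] + face[:1]):
            adj.setdefault(a, []).append(b)
    # An edge is interior iff it appears in both directions.
    interior = {(a, b) for a, nbrs in adj.items() for b in nbrs
                if a in adj.get(b, [])}
    # After removal, every boundary node has exactly one successor.
    chain = {a: b for a, nbrs in adj.items() for b in nbrs
             if (a, b) not in interior}
    # Walk the chain to rebuild the counterclockwise parent loop;
    # fully interior nodes (like node 8 in Fig. 11) drop out naturally.
    start = next(iter(chain))
    loop, node = [start], chain[start]
    while node != start:
        loop.append(node)
        node = chain[node]
    return loop

# Example: two quads [0, 1, 2, 3] and [1, 4, 5, 2] sharing edge (1, 2)
# agglomerate into the hexagonal loop [0, 1, 4, 5, 2, 3].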


Table 1 History storage cost for multiple levels of uniform refinement on a sample polyhedral mesh

  Level   | Mesh size               | History map size (tuple count)
          | Face        Cell        | Face        Cell
  --------|-------------------------|-------------------------------
  Initial | 660         119         | 0           0
  1       | 5,944       1,851       | 0           0
  2       | 46,088      14,858      | 3,155       1,851
  3       | 362,848     118,964     | 26,931      16,709
  4       | 2,879,360   951,912     | 211,283     135,673
  5       | 22,941,146  7,615,677   | 1,662,673   1,087,582

Fig. 12 Coarsening of cells distributed across partitions

The size associated with this history storage approach for a sample polyhedral mesh that has been uniformly refined several times is shown in Table 1. It can be noted that the memory consumption is a single integer for each face and cell (for the parent index) and a tuple of integers for each entry in the face and cell history maps. The history maps are only required after the first level of refinement. For most simulations, the default of two levels of refinement is sufficient, as it provides significantly improved spatial resolution while constraining the total cell count over time for efficiency.

2.4 Distributed Parallel PUMA

Maintaining the scalability of Ansys Fluent [17], a distributed parallel flow solver, was a requirement for the PUMA algorithm. This was achieved by embedding load-balancing and migration into the mesh adaptation algorithm and by avoiding any constraints on the distribution of cells across partitions. The latter is easily achieved for cell refinement, which is entirely local to a partition and can proceed normally. However, cell coarsening can span several child cells distributed across multiple partitions, as shown in Fig. 12, which typically occurs as a result of a load-balancing step that maintains an equal distribution of cells. A convenient choice is to encapsulate all children of a parent cell on the same partition, which effectively restricts all partitioning methods to use the coarsest level of the mesh while distributing cells. While this simplifies coarsening behavior, it can


Fig. 13 Refinement example of a tetrahedral mesh with boundary layers

significantly affect flow solver performance, particularly for simulations that involve higher refinement levels. The chosen approach is to accept non-encapsulated cells in the coarsening algorithm by extending the parallel communication layer to include an additional layer of node-connected cells. The resulting parent cell after coarsening is assigned to one of the partitions after discarding all children. This removes all restrictions on mesh partitioning and significantly improves solver scalability, at the minor cost of ensuring that all parent indices are unique across all partitions, which is typically achieved with a few global communication calls during the mesh manipulation step.
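As a rough illustration of the uniqueness step (not the actual Fluent implementation), locally generated parent indices can be shifted into disjoint per-rank ranges with a single prefix-sum collective; mpi4py is used here for the sketch, and the element objects are the hypothetical ones from the earlier snippet.

```python
from mpi4py import MPI

def globalize_parent_indices(comm, elems, global_base):
    """Remap locally generated parent indices into disjoint per-rank
    ranges starting at global_base, using one scan collective."""
    local = sorted({e.parent_index for e in elems if e.parent_index != 0})
    n_local = len(local)
    offset = comm.scan(n_local) - n_local     # exclusive prefix sum over ranks
    remap = {old: global_base + offset + i for i, old in enumerate(local)}
    for e in elems:
        e.parent_index = remap.get(e.parent_index, e.parent_index)
```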

3 Examples

This section will demonstrate the use of the isotropic and anisotropic adaptation algorithms described in Sect. 2 with a few examples.

3.1 Refinement of a Tetrahedral Mesh with Boundary Layers

The first example depicts the refinement of a mixed tetrahedral mesh with layers extruded from one half of the bottom boundary that contains quadrilateral faces, as shown in Fig. 13. The first layer of hexahedral cells adjacent to the bottom boundary is subsequently refined in the tangent direction with a split ratio of 0.3, while two tetrahedral cells adjacent to the boundary layers are refined isotropically. The isotropic refinement of the tetrahedral cell at the top of the boundary layer forces normal refinement through the stack. The split ratio of 0.3 is also respected throughout the layer, except at the transition to the isotropic tetrahedral region at the


side, where a split ratio of 0.5 is maintained. This refinement scheme also shows the applicability of the method to situations involving stair-stepping within the boundary layer, where a non-uniform number of layers may be present adjacent to any mesh boundary.

3.2 Isotropic PUMA for the Dam Break Problem

This example is a computational fluid dynamics case depicting the dam break problem with an obstacle placed within the domain, where the gas-liquid interface is modeled using a Volume-of-Fluid approach, along with adaptive time-stepping. The fluid is allowed to evolve over time under the influence of gravity. Various stages of the simulation are shown in Fig. 14. The mesh has an initial count of 111,276 cells and is adaptively refined and coarsened at every other time-step, with a maximum of two refinement levels imposed throughout the course of the simulation. The criteria for refinement and coarsening are defined by the normalized gradient of the gas-liquid volume-fraction. Any cell is refined if the magnitude of this gradient is larger than a specified threshold value and coarsened if the gradient falls below a second (lower) threshold value. Two refinement levels applied globally would correspond to a mesh with a cell count of about 18.2 million. The dam break results match the fidelity obtained on a mesh of this size with a significantly lower cell count and computational cost. The mesh is automatically load-balanced during an adaptation step when the difference between maximum and minimum cell count per core exceeds 5% of the total cell count. The total wall-clock times of the flow solver and mesh adaptation for 500 time-steps are shown in Fig. 15 for various core counts. In this case, the count after 500 time-steps is roughly 4000 cells per core on 64 cores, far from ideal. Nevertheless, the adaptive mesh refinement step, along with load-balancing, maintains the scalability of the flow solver at these low cell counts per core, as shown by the blue bars. Since any form of mesh manipulation comes with a certain fixed cost related to migration, garbage collection and establishing a parallel communication layer, irrespective of the number of cells involved in the operation, the performance of the mesh adaptation shown in the orange bars only improves up to 32 cores, where its cost is comparable to the flow solve. It should also be noted that most simulations can proceed with a lower adaptation frequency of every 5 or 10 time-steps, as opposed to 2 in this case. The relative cost of each of these operations over 15,000 time-steps for this simulation, using two different configurations of computational cores and adaptation frequencies, is shown in Table 2. The flow solver dominates the simulation time, as anticipated, with the adaptation and load balancing steps consuming a relatively small fraction. The preparation phase involves the estimation of cell quality after refinement/coarsening. The cleanup phase encompasses steps related to garbage collection, solver array compaction and parallel communication layer setup. These phases of the adaptation process consume the bulk of the computation involved, while the actual refinement/coarsening steps

Fig. 14 Evolution of dam break problem with adaptive mesh refinement at time-steps: 0, 5000, 15000


Fig. 15 Comparison of flow solver and mesh adaptation at various core counts

Table 2 Relative cost of individual operations on 16 cores (frequency 2) and 24 cores (frequency 10)

  Operation    | 16 cores, frequency = 2 | 24 cores, frequency = 10
               | Time (s)    %           | Time (s)    %
  -------------|-------------------------|-------------------------
  Flow         | 111,989     75.8        | 47,709      87.9
  Adaptation   | 35,050      23.7        | 5,799       10.7
  - Prepare    | 14,846      10.1        | 2,038       3.75
  - Refinement | 3,233       2.18        | 687         1.26
  - Coarsening | 2,789       1.89        | 460         0.85
  - Cleanup    | 12,680      8.58        | 2,304       4.24
  Balance      | 680         0.46        | 771         1.42
  Total        | 147,719     100         | 54,279      100

are relatively cheap. The load balancing step is expensive, but it is called infrequently and therefore consumes a very small percentage of the overall simulation time.
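For illustration, the refine/coarsen marking rule described above for this case (refine above an upper threshold of the normalized volume-fraction gradient, coarsen below a lower one, capped at two levels) might look as follows; the function name and the threshold values are placeholders, not the settings used in the simulation.

```python
import numpy as np

def mark_cells(grad_alpha, level, refine_thresh=0.5, coarsen_thresh=0.05,
               max_level=2):
    """grad_alpha: (n_cells, 3) volume-fraction gradients; level: (n_cells,)
    current refinement levels. Returns boolean refine/coarsen masks."""
    g = np.linalg.norm(grad_alpha, axis=1)
    if g.max() > 0:
        g = g / g.max()                        # normalized gradient magnitude
    refine = (g > refine_thresh) & (level < max_level)
    coarsen = (g < coarsen_thresh) & (level > 0) & ~refine
    return refine, coarsen
```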

3.3 Anisotropic PUMA for Fuselage, Wing Configuration

The next example is an external aerodynamics simulation of a hypothetical aircraft with a wing and fuselage. The initial mesh consists of 340,223 cells and a single boundary layer defined throughout the body of the aircraft, as shown in Fig. 16. The inlet flow is defined as Mach 0.6 with a gauge pressure of 35,606 Pa and assumed to be steady state. The pressure-based flow solver is used with SIMPLE for pressure-velocity coupling and the k-ω SST turbulence model.


Fig. 16 Initial mesh of aircraft

Table 3 Comparison of min/max y+ versus cell count

  Level | Pressure (Pa)       | y+            | Cell count
        | Min       Max       | Min    Max    |
  ------|---------------------|---------------|----------
  0     | 12473.3   45153.3   | 0.52   240.8  | 340,223
  1     | 13110.6   45310.3   | 0.34   134.6  | 382,556
  2     | 15432.4   45481.3   | 0.12   73.8   | 424,479
  3     | 17580.6   46318.6   | 0.03   37.9   | 464,816
  4     | 17149.7   46927.4   | 0.06   18.5   | 495,199
  5     | 17819.9   46948.7   | 0.06   9.4    | 517,993
  6     | 17783.7   46888.0   | 0.07   4.6    | 532,775
  7     | 17775.8   46887.0   | 0.09   2.3    | 538,871
  8     | 17756.6   46887.5   | 0.08   1.1    | 542,032

Eight successive levels of anisotropic tangent refinement are applied on all surfaces, and the minimum/maximum y+ values are computed at each step to determine whether sufficient mesh resolution is achieved to compute a reasonably accurate solution within the boundary layer (see Table 3), demonstrating that the desired y+ ≈ 1 is achieved with the addition of only 201,809 cells. Achieving the same y+ goal with isotropic refinement would result in a significantly higher increase in the number of cells. A contour plot of the y+ distribution across the surface of the wing/fuselage after 8 levels of refinement is shown in Fig. 17. A uniform splitting ratio of 0.5 is used in this case, and the distribution is largely dictated by the single boundary layer on the initial mesh, but it is also possible to locally adjust the refinement ratio to account for a variable layer height at each cell. Details of tangent refinements at Level 8 near the front of the aircraft are shown in Fig. 18.
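A driver for such a study might look like the sketch below; `solve_flow`, `wall_yplus` and `refine_tangential` are hypothetical stand-ins for the solver and adaptation calls, not a real API.

```python
def refine_to_target_yplus(mesh, target=1.0, max_levels=8, ratio=0.5):
    """Apply tangential (anisotropic) refinement level by level until the
    maximum wall y+ reaches the target (cf. Table 3)."""
    for level in range(max_levels):
        solution = solve_flow(mesh)            # hypothetical solver call
        y_plus = wall_yplus(solution)          # y+ per wall-adjacent cell
        print(f"level {level}: y+ in [{y_plus.min():.2f}, {y_plus.max():.2f}]")
        if y_plus.max() <= target:
            break
        mesh = refine_tangential(mesh, cells=(y_plus > target), ratio=ratio)
    return mesh
```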


Fig. 17 Contour plot of y+ distribution across aircraft body
Fig. 18 Tangent refinement detail at Level 8

3.4 Combined Isotropic and Anisotropic PUMA for Space Capsule Re-Entry

The final example is the simulation of a space capsule under hypersonic re-entry conditions with an angle-of-attack of −25°. The trajectory, velocity and ambient fluid conditions represent the vehicle passing through the Earth's atmosphere at an altitude of 50 km. The initial mesh consists of 104,581 polyhedral cells, including 15 prismatic polyhedral boundary layers defined around the body of the capsule, as shown in Fig. 19. The inlet flow is defined as Mach 17 with a gauge pressure of 25 Pa and assumed to be steady state. The fluid is modeled as an ideal gas using the two-temperature model to account for compressibility and thermophysical variations. The steady-state density-based flow solver is used along with high-speed numerics. Turbulence is modelled with the k-ω turbulence model. The simulation is initially run for 500 iterations, after which the mesh is adapted periodically every 250 iterations. The error-based Hessian criterion [10] is used to identify cells in the domain for refinement and coarsening, along with anisotropic refinement in the boundary layers as needed. Snapshots of the adaptively


Fig. 19 Initial mesh of the space capsule

Fig. 20 Refined mesh after 750 iterations

refined mesh after 750 and 1500 iterations are shown in Figs. 20 and 21 respectively. The effect of tangent refinement in the prismatic boundary layers is immediately apparent. Additionally, normal refinement through the prism stack occurs due to isotropic refinement at the transition from boundary layers to regular polyhedral cells, resulting in the surface refinement of the capsule. A contour plot for the Mach number is shown in Fig. 22, showing details of the bow-shock captured by the local anisotropic refinement in the boundary layer.

236

S. Menon and T. Gessner

Fig. 21 Refined mesh after 1500 iterations

Fig. 22 Contour plot for Mach number

The simulation was repeated using identical parameters without anisotropic boundary layer refinement, and the comparison of cell counts at each adaptation cycle is shown in Table 4. To capture the details of the shock region, a significant number of cells in the boundary layer are marked for refinement, and the cost savings of directional anisotropic refinement are immediately apparent.

4 Conclusions

This paper demonstrates a new procedure to adaptively refine arbitrary polyhedral meshes, including the anisotropic refinement and coarsening of prismatic polyhedral cells in boundary layers. The refinement scheme defines a conformal template that seamlessly transitions between isotropic and anisotropic regions of the mesh. The

Table 4 Comparison of cell count at each adaptation cycle

  Iteration | Isotropic   | Anisotropic
  ----------|-------------|------------
  Initial   | 104,581     | 104,581
  500       | 447,834     | 350,392
  750       | 1,385,658   | 857,307
  1000      | 4,478,563   | 2,227,375
  1250      | 6,964,051   | 3,649,348
  1500      | 8,963,725   | 5,335,096

implementation is designed for a distributed parallel environment, where it maintains the scalability of the flow solver via load-balancing. The applicability of this adaptive refinement scheme is demonstrated using several computational fluid dynamics tests that involve polyhedral meshes, with the conclusion that reliably accurate solutions can be achieved with a modest increase in computational cost. The introduced mesh adaptation method can be used with any criterion that provides information about where refinement and coarsening should take place. Heuristic criteria, error indicators or estimators can all be applied, without the need for the respective criterion to provide the direction of anisotropic refinement.

References

1. Ansys Fluent. Ansys Inc (2022)
2. Simcenter STAR-CCM+. Siemens Industries Digital Software (2022)
3. Alauzet, F., Loseille, A.: A decade of progress on anisotropic mesh adaptation for computational fluid dynamics. Computer-Aided Design 72, 13–39 (2016)
4. Berger, M.J., Oliger, J.: Adaptive mesh refinement for hyperbolic partial differential equations. Journal of Computational Physics 53(3), 484–512 (1984)
5. Davies, D.R., Wilson, C.R., Kramer, S.C.: Fluidity: A fully unstructured anisotropic adaptive mesh computational modeling framework for geodynamics. Geochemistry, Geophysics, Geosystems 12(6) (2011)
6. Freret, L., Williamschen, M., Groth, C.P.T.: Enhanced anisotropic block-based adaptive mesh refinement for three-dimensional inviscid and viscous compressible flows. Journal of Computational Physics 458 (2022)
7. Hirsch, C.: Numerical Computation of Internal and External Flows (Second Edition). Butterworth-Heinemann, Oxford (2007). https://doi.org/10.1016/B978-075066594-0/50039-4
8. Meakin, R.L.: Composite Overset Structured Grids, Chapter 11. Handbook of Grid Generation. CRC Press (1999)
9. Menon, S., Gessner, T.: PUMA (Polyhedra Unstructured Mesh Adaption): A novel method to refine and coarsen convex polyhedra. 14th U.S. National Congress on Computational Mechanics, Montreal, Canada, July 17–20 (2017)
10. Norman, A., Viti, V., MacLean, K., Chitta, V.: Improved CFD methodology for compressible and hypersonic flows using a Hessian-based adaption criteria. In: AIAA SCITECH 2022 Forum (2022)


11. Parks, S., Buning, P., Chan, W., Steger, J.: Collar grids for intersecting geometric components within the chimera overlapped grid scheme. In: 10th Computational Fluid Dynamics Conference (1991)
12. Peric, M.: Flow simulation using control volumes of arbitrary polyhedral shape. ERCOFTAC Bulletin 62, 25–29 (2004)
13. Rivara, M.C.: Algorithms for refining triangular grids suitable for adaptive and multigrid techniques. International Journal for Numerical Methods in Engineering 20(4), 745–756 (1984)
14. Spiegel, M., Redel, T., Zhang, J., Struffert, T., Hornegger, J., Grossman, R.G., Doerfler, A., Karmonik, C.: Tetrahedral vs. polyhedral mesh size evaluation on flow velocity and wall shear stress for cerebral hemodynamic simulation. Computer Methods in Biomechanics and Biomedical Engineering 14(1), 9–22 (2011)
15. Tam, A., Ait-Ali-Yahia, D., Robichaud, M., Moore, M., Kozel, V., Habashi, W.: Anisotropic mesh adaptation for 3D flows on structured and unstructured grids. Comput. Methods Appl. Mech. Engrg. 189, 1205–1230 (2000)
16. Verfürth, R.: A posteriori error estimation and adaptive mesh-refinement techniques. Journal of Computational and Applied Mathematics 50(1), 67–83 (1994)
17. Wasserman, S.: Ansys Fluent sets record with 129,000 cores. http://engineering.com/story/ansys-fluent-sets-record-with-129000-cores (2015)
18. Weller, H.G., Tabor, G.R., Jasak, H., Fureby, C.: A tensorial approach to computational continuum mechanics using object-oriented techniques. Computers in Physics 12, 620–631 (1998)
19. Zore, K., Sasanapuri, B., Parkhi, G., Varghese, A.J.: Ansys Mosaic poly-hexcore mesh for high-lift aircraft configuration. In: 21st Annual CFD Symposium Conference (2019)

Tetrahedralization of Hexahedral Mesh
Aman Timalsina and Matthew Knepley

1 Introduction

Tetrahedralization of hexahedra has several applications: rendering engines may only process tetrahedra, discretization methods may require tetrahedra, and some geometric algorithms are only phrased over tetrahedra. Thus it would be advantageous to convert a hexahedral mesh into as few tetrahedra as possible. Several algorithms exist for this purpose, including the popular marching tetrahedra algorithm. These algorithms take advantage of the most natural subdivision of a hexahedron into tetrahedra, and thus have an inherent simplicity in terms of both understanding and implementation. However, these common algorithms, including marching tetrahedra, impose severe constraints on the input mesh, as they cannot guarantee a conforming division of an arbitrary hexahedral complex due to non-matching face splits [5]. This work aims to provide a general algorithm that works on any hexahedral mesh with arbitrary face divisions. The major contribution of this work is a clean and intuitive formulation of this problem and a generalization of several well-known triangulation algorithms [6, 11], allowing us to triangulate any hexahedron into five or six tetrahedra, except in an exceptional, degenerate case where we use twelve tetrahedra.

A. Timalsina (B) Purdue University, West Lafayette, Indiana, USA e-mail: [email protected] M. Knepley University at Buffalo, Buffalo, New York, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Ruiz-Gironés et al. (eds.), SIAM International Meshing Roundtable 2023, Lecture Notes in Computational Science and Engineering 147, https://doi.org/10.1007/978-3-031-40594-5_11


2 Background

2.1 Hexahedral Triangulation

The decomposition of polyhedra into other, simpler polyhedra has been studied for centuries. Despite this long history, the problem remains difficult: even in the case of decomposition into tetrahedra, it is known that tetrahedralization is NP-hard [13]. These decomposition problems for arbitrary geometric complexes have yielded a rich body of theoretical results that provide existence conditions on decompositions and bounds on the minimum number of required tetrahedra [3, 14]. On the practical side, several algorithms have been developed to perform these subdivisions, but all of them are oblivious to the orientations of the face splits. This, however, becomes a problem when the orientations do not match and merging affects split orientations. For instance, in the commonly used marching tetrahedra algorithm [5], each cube is split into six irregular tetrahedra by cutting the cube in half three times, where this division takes place by cutting diagonally through each of the three pairs of opposing faces. In this way, the resulting tetrahedra all share one of the main diagonals of the cube. An obvious limitation of this algorithm, however, is that the cuts are predetermined: that is, we are restricted to cuts with matching orientations on opposite pairs. It is evident that the case of non-matching orientations is significant. Indeed, the theoretical justification we provide for our algorithm has been explored in several other works. The most notable ones include the use of the region-face graph, or RF-graph, to study subdivisions of three-dimensional complexes [12, 15]. These works were primarily in the context of combining tetrahedral meshes into other polyhedra, and underscore the importance of arbitrary tetrahedral subdivisions. Incidentally, [12] mentions that when using the common algorithms with a predefined set of face cuts, the associated hexahedral triangulation fails to detect all the potential hexahedra in a tetrahedral mesh, and the percentage of missed potential hexahedra may be significant, even reaching 5% of the overall mesh. The graphs themselves have several nice properties, including the fact that the RF-graph corresponding to a hexahedra-tetrahedra decomposition is planar. Indeed, in the case of hexahedra-tetrahedra decompositions, these representations mostly match the arguments we develop, but the works themselves merely specified these subdivisions and did not explicitly provide an algorithm for generating them. In fact, the case of subdivisions with non-matching orientations was not known.

2.2 Prism Decomposition

In order to specify the decomposition of arbitrary hexahedra, we first discuss the prism decomposition procedure we employ in the first three of our cases. This decomposition is well known, and we choose the framework specified by [7]


Fig. 1 An instance of (left) a rising (R) cut and (right) a falling (F) cut

Fig. 2 The degenerate cases: RRR cuts (left) and FFF cuts (right)

Fig. 3 A tetrahedron T adjacent to a triangular face of the prism P

where they provide an algorithm for triangulating a prism by choosing face cuts carefully. In particular, we first define rising (R) and falling (F) cuts (Fig. 1). The labels R and F simply record whether the split edge is rising or falling as we travel along the extruded prism face in a counterclockwise manner. We now claim that the only degenerate cases correspond to instances where all the face cuts are assigned the same orientation, namely RRR or FFF (Fig. 2). Note that any other configuration, with at least one non-matching orientation, guarantees that the face cuts meet at some vertex. Thus these degenerate configurations definitively characterize the impossibility of triangulating the prism:

Proposition 1 In a tetrahedral decomposition of a prism, at least two (exterior) face cuts must meet at some vertex.

Proof We demonstrate this by examining the triangles of the prism. Consider a tetrahedral decomposition 𝒯 of a prism P. Let T be a tetrahedron in 𝒯, and let u, v, and x be the vertices of a triangular face of P (which must be part of a tetrahedron) that is adjacent to T. Let s, the summit of the tetrahedron, be the vertex of T that is opposite to this face, as shown in Fig. 3. Since x can have only three original incident edges, it follows that u and v must share the three (exterior) face cuts between them, corresponding to the three tetrahedra in the triangulation. This means that at least two of these face cuts must meet at a common vertex, which is what the proposition states.

For any of the cuts that are not degenerate (Fig. 4), a canonical division into three tetrahedra is possible. Combinatorially, this yields six different ways of triangulating a prism.
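The degeneracy test of Proposition 1 reduces to a one-line check on the three cut labels; a minimal sketch with our own naming:

```python
def prism_configuration_valid(cuts):
    """cuts: three labels from {'R', 'F'}, one per exterior quad face of the
    prism, taken in counterclockwise order."""
    assert len(cuts) == 3 and set(cuts) <= {"R", "F"}
    return len(set(cuts)) == 2      # RRR and FFF are the degenerate cases

assert not prism_configuration_valid("RRR")   # degenerate
assert prism_configuration_valid("FFR")       # a canonical 3-tet split exists
```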


Fig. 4 A valid configuration for prism decomposition

The issue due to Proposition 1 is “fixed” by [7] by looking at the neighboring prisms and changing their configurations to transform these into the non-degenerate cases. We consider a similar strategy for cubes as well, but as we will see, this may not be possible for some global mesh configurations.

3 General Hex-to-Tet: A General Algorithm for Tetrahedralizing a Hexahedral Complex

3.1 Generalizing Prism Decomposition to Cubes

Before discussing the generalization of prism decomposition to cubes, we need to clarify some terminology. Recall that the marching tetrahedra algorithm partitions a cube into six irregular tetrahedra by making three cuts along shared diagonals of opposing faces, dividing the cube in half three times [5]. We call this shared diagonal the main diagonal. Further, we extend the notion of rising and falling cuts to cubes as follows. As in the case of prisms, we label the orientation of an external face cut as rising (R) or falling (F) by traversing along the extruded face in a counterclockwise manner. Now, the trivial observation that any cube can be divided into two prisms by simply cutting across a diagonal plane allows us to partially reduce an arbitrary tetrahedral decomposition of a cube to a decomposition of prisms. Consequently, we can separately triangulate each prism, with the main diagonal split serving as a face cut for both prisms. Recall that our main goal is to allow the user to arbitrarily select the face cuts across the six faces; our only freedom is the choice of the main diagonal. We claim that this procedure always works for cases where up to three cuts have been predetermined. For more than three predetermined cuts, the procedure works if the cuts line up accordingly. For up to three predetermined cuts, we can use the prism decomposition method without running into the degenerate cases from Sect. 2.2, as we will always have at least two outside face cuts to choose from. We can choose these cuts in such a way that, along with the main diagonal, they guarantee two cuts meeting at a vertex in each of the prisms, in accordance with Proposition 1.


Obviously, solving this case by case does not necessarily mean that we obtain a general algorithm for the entire mesh, as the cases only correspond to a single hexahedron. However, for simplicity, we will first specify these cases below and present the main algorithm in Sect. 3.4. Further, for the sake of exposition, we represent an arbitrary hexahedral element as a cube, since they are topologically equivalent.

Zero or One Predetermined Cut

We simply run the marching tetrahedra algorithm here. Alternatively, we can choose an arbitrary main diagonal along with the face cuts in each of the prisms so that prism decomposition can be performed.

Two Predetermined Cuts

If the two cuts are not opposite to one another, then we can still run the marching tetrahedra algorithm. This is also possible if the two opposite cuts are both falling (F) or both rising (R). Crucially, even in the case where the two cuts have opposite orientations, we can choose the main diagonal so that its endpoints meet the endpoints of the predetermined cuts (Fig. 5).

Three Predetermined Cuts

Combinatorially, the orientations of these face cuts induce the following three cases, which we handle separately:

None of the cuts are opposite to one another. In this case, we are free to select the cut opposite each of the three predetermined cuts. Hence, we can apply marching tetrahedra to get the canonical subdivision.

Fig. 5 Two predetermined cuts (green): cutting with the opposite faces in the same orientation (blue) yields FFR for both prisms


Fig. 6 Three predetermined cuts: two of the three cuts form a pair with opposite orientation. (Left) We first choose a dividing plane. (Upper-Right) Then the red diagonal is chosen to configure the prism with two external predetermined cuts to FFR. (Lower-Right) Finally, the blue cut is then picked in the second prism to avoid the degenerate cases

A pair of opposite face cuts with the same orientation. This is again trivial: we can simply use marching tetrahedra, with one of the pairs having already been determined.

A pair of opposite face cuts with different orientations. This case employs the following procedure, where we resolve the problem of different orientations by carefully decomposing the cube into prisms (Fig. 6):
1. Choose the uncut pair and decompose the cube into prisms by cutting this pair to form the diagonal plane.
2. On whichever prism the third predetermined cut falls, the main diagonal is cut to avoid the prism degenerate case(s).
3. The second prism has an uncut external face, which is again used to avoid the prism degenerate case(s).

Four and Five Predetermined Cuts

Again, we will handle the easy cases first. Note that at least one of the pairs of cuts must be opposite to one another here (indeed, two of the pairs must be opposite to one another in the case of five predetermined cuts).

At least one pair opposite each other with the same orientation. This case is easy, as we can choose such a pair to create the diagonal plane for decomposing the cube into prisms. If two of the remaining cuts lie on the same prism, then we use the diagonal cut to avoid the degenerate case, which leaves us with two (or one) remaining cuts in the other prism. If only one cut is present in any one of the prisms, then again we can easily avoid the RRR/FFF cases.


Fig. 7 Decomposition of a single pair with different orientations: (Top) The side opposite to the green cut is chosen so that two of the predetermined cuts get isolated in one of the prisms. (Bottom-Left) The red diagonal is chosen to configure the prism to FFR. (Bottom-Right) The green cut is then picked to avoid the degenerate cases

Only one pair opposite each other with different orientations. We can choose one of the adjacent predetermined cuts and cut across its opposite face with the same orientation. This allows us to choose a diagonal plane with one of the resulting prisms containing two of the predetermined cuts. Here, we can again use the main diagonal cut to escape the degenerate case, while the remaining prism has an extra uncut external face (Fig. 7).

Both pairs opposite one another with different orientations. Meet at a vertex: If the pairs meet at a vertex while being in a different orientation to their opposite cuts, then we can simply decompose into prisms using the remaining uncut pair (in the five-cut case, we can cut in the same orientation as the remaining determined cut), and the resulting prisms are obviously non-degenerate, as the external faces meet at a vertex (Fig. 8).

None meet at a vertex: This is the degenerate case, where prism decomposition fails. Geometrically, it is equivalent to an RRR/FFF case for cubes, as in Proposition 1. Note that the remaining one or two cuts cannot save this from being degenerate (Fig. 9).


Fig. 8 Decomposition of both pairs meeting at a vertex with different orientations: This ensures that we have a pair of opposite uncut faces. (Right) Consequently, we can decompose into prisms and choose the red diagonal cut to avoid the degenerate case

Fig. 9 Degenerate case for a cube with four determined cuts. The degenerate case for cubes with more than four determined cuts is omitted since, once four of the exterior face cuts are degenerate as in the figure, the remaining face splits cannot save the cube from being degenerate

Six Predetermined Cuts

In this case, any two opposite cuts with the same orientation imply that prism decomposition works; a procedure can be carried out as in the above cases. Again, if any cut is isolated, then the cube cannot be triangulated, as Proposition 1 comes into play.

3.2 Decomposition into Five Tetrahedra

Interestingly, even if prism decomposition fails, if at least two opposite pairs have different orientations and all pairs meet at (some) vertices with one another, we have the following decomposition into five tetrahedra (Fig. 10). We recall that the RF-graphs from [12] also contained cases with a decomposition into five tetrahedra; this is the concrete manifestation of such a decomposition.
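For concreteness, here is one instance of this decomposition written out for the usual hexahedron numbering (bottom face 0-3 counterclockwise, top face 4-7 above it); this rendering is our own, and a simple volume check confirms that the five tetrahedra exactly fill the cube.

```python
import numpy as np

V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
              [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)

FIVE_TETS = [(1, 3, 4, 6),                                  # central tetrahedron
             (0, 1, 3, 4), (1, 2, 3, 6), (1, 4, 5, 6), (3, 4, 6, 7)]  # corners

def tet_volume(a, b, c, d):
    return abs(np.linalg.det(np.array([b - a, c - a, d - a]))) / 6.0

# the five tetrahedra exactly fill the unit cube
assert abs(sum(tet_volume(*V[list(t)]) for t in FIVE_TETS) - 1.0) < 1e-12
```

Note that the six face diagonals induced by these tetrahedra are exactly the edges of the central tetrahedron, so opposite faces carry diagonals of different orientations that meet pairwise at vertices, as required above.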


Fig. 10 The five-tetrahedron decomposition: (Right) An implementation of this decomposition can be done by simply removing the central tetrahedron; the remaining four tetrahedra are distributed along four corners

3.3 Solving the Degenerate Cases

As outlined above, a cube fails to be triangulated in the usual way only if one or more cuts are isolated. We claim that this cannot be resolved using the procedures above, and one of the following methods must be followed:

Flipping Neighboring Cubes

Recall that we had several degrees of freedom when choosing one of the cuts while decomposing, and later when avoiding the prism degenerate cases. Indeed, the procedure above is invariant with respect to opposite face cuts of the same orientation. That is, we have the following guarantee on the invariance of flipping cuts of neighboring cubes.

Proposition 2 Changing the orientations of a pair of opposite face cuts still yields a valid tetrahedral decomposition into six tetrahedra.

Proof We want to show that the triangulation is invariant under flipping opposite pairs with the same orientation. Flipping such a pair essentially changes the diagonal plane that yielded the prisms. Assume, however, that after flipping the orientation, one of the new prisms acquires a degenerate configuration (RRR/FFF). For this to happen, we must be constrained to cut the main diagonal in some orientation R (respectively F) for


Fig. 11 Degenerate case for cubes when flipping opposite face cuts of adjacent cubes: This is an instance when flipping fails as all four adjacent cubes’ opposite faces have cuts with different orientations

one of the prisms, with the other cuts having orientations FF (respectively RR). The main diagonal will then have orientation F (respectively R) for the other prism, yielding FFF (respectively RRR). But this implies that we started with degenerate FFFF/RRRR exterior face cuts, which is impossible, as this would not have yielded a valid decomposition before flipping.

In light of Proposition 2, note that in order to avoid the degenerate case with four predetermined cuts, changing the orientation of one of the external face cuts suffices. We note that recursively carrying out this flipping of face-cut orientations may lead to a "chain reaction" that ends up changing the entire mesh. Thus we only look for these flips in adjacent cubes, and we avoid employing this method if the neighbor of the adjacent cube has predetermined face cuts, in favor of both simplicity and efficiency.

Steiner Points

Sometimes even flipping fails (Fig. 11), and our last resort is introducing new vertices, called Steiner points. This presents an easy solution to the above problem, as any combination of predetermined cuts can be triangulated to form 12 tetrahedra [4]: all eight original vertices are connected to the Steiner point to decompose the cube into six pyramids (Fig. 12), and each face cut then splits its pyramid into two tetrahedra.
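A sketch of this construction with our own face numbering (the centre is added as vertex 8):

```python
# the six faces of the reference hexahedron (vertices 0-7 as usual)
HEX_FACES = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
             (3, 2, 6, 7), (1, 2, 6, 5), (0, 3, 7, 4)]

def steiner_tetrahedralization(diagonals, centre=8):
    """diagonals: for each face (a, b, c, d), 0 selects the a-c cut and 1 the
    b-d cut. Returns the 12 tetrahedra apexed at the Steiner point."""
    tets = []
    for (a, b, c, d), cut in zip(HEX_FACES, diagonals):
        if cut == 0:
            tets += [(a, b, c, centre), (a, c, d, centre)]
        else:
            tets += [(a, b, d, centre), (b, c, d, centre)]
    return tets

# any combination of predetermined face cuts works
assert len(steiner_tetrahedralization([0, 1, 0, 1, 1, 0])) == 12
```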


Fig. 12 Steiner points allow decomposition into twelve tetrahedra at the expense of an additional vertex: (Left) We first connect each of the vertices with the Steiner point. (Middle, Right) This yields six pyramids, which are then cut across the predetermined cuts (red) to obtain tetrahedra

3.4 The Main Algorithm

We have now handled all the cases with a given number of predetermined cuts. It is perhaps surprising that the methods used to do so all retain a certain level of simplicity; indeed, the key idea is knowing the right combination of methods to apply in the different cases. However, directly translating these mechanisms into an ad hoc implementation would not work. Moreover, ignoring the interplay between the methods may mean sacrificing both parallelizability and efficiency in terms of additional vertices. Nevertheless, any attempt at designing an algorithm has to face the degenerate cases; the simplest counterexample is a hexahedralized torus of four cubes where the face cuts are configured in a way that forces a configuration similar to the one in Fig. 11. This counterexample shows that it remains a non-trivial problem to arrange all these cases in a way that allows parallelizability while also ensuring that as few Steiner points as possible are used. We now present a succinct version of such an algorithm that achieves these goals (Algorithms 1-3).

4 Conclusion and Future Work

In this paper we presented a general triangulation algorithm for hexahedral meshes. Instead of imposing restrictions on the input mesh as other existing algorithms do, our algorithm does not depend on a predefined set of face cuts. Further, our algorithm identifies the number of predetermined face divisions and uses an extension of the prism decomposition algorithm, along with several other techniques, to decompose the hexahedra into tetrahedra. Crucially, we have ensured that our algorithm tries to find all the valid decompositions, without making any assumption on the orientations of the face splits, before employing additional vertices. Finally, contrary to previous works, the theoretical framework we inherited extends well to implementation, and


Algorithm 1 Hex-to-Tet (A hexahedral mesh M)
1:  while there exists a hexahedron H that is unmarked do
2:    N ← number of exterior cut faces of H
3:    if N ≥ 4 with two pairs of opposite face cuts with different orientation then
4:      if the two pairs meet at the same vertices then            ⊳ Sect. 3.2
5:        while there exists an uncut face do
6:          Cut the face so that the cut meets the predetermined cut(s) at some vertex
7:        end while
8:      else                                                       ⊳ Sect. 3.3
9:        Degenerate-Case(H)
10:     end if
11:   else                                                         ⊳ Sect. 3.1
12:     Prism-Decomposition(H)
13:   end if
14:   mark H
15: end while
16: return "Done"

Algorithm 2 Prism-Decomposition (Hexahedron H)
1: if there does not exist a pair of opposite cuts with the same orientation then
2:   Cut one of the uncut pairs in this manner
3: end if
4: Cut across such a pair to create a diagonal plane and two prisms
5: if there exists a prism with two of the exterior faces cut then
6:   Use the middle diagonal to avoid RRR/FFF
7: end if
8: Cut the remaining uncut faces of the prisms to get valid decompositions

Algorithm 3 Degenerate-Case (Hexahedron H)
1:  Get the adjacent cubes of the four faces with each pair having different opposite orientation
2:  if any of the four cuts C forms an opposite cut pair with the same orientation in its adjacent cube then                                              ⊳ Sect. 3.3
3:    H′ ← neighbor of H that shares the cut C
4:    C′ ← face cut in H′ opposite to C
5:    H″ ← neighbor of H′ that shares the cut C′
6:    if H″ has not been marked then
7:      Flip the orientation of the cut C′
8:    end if
9:  else                                                           ⊳ Sect. 3.3
10:   Introduce a Steiner point P in H
11:   Link each vertex of H with P using eight new interior edges
12:   while there exists an uncut exterior face do
13:     Cut the face so that the cut is in the same orientation as its opposite face
14:   end while
15: end if

in future work, we plan to implement the algorithm above in the PETSc [1, 2] libraries in order to convert meshes with tensor product cells to simplicial cells as part of its DMPlex mesh capabilities [8–10].


Acknowledgements We gratefully acknowledge the support of the Computational Infrastructure for Geodynamics project NSF EAR-0949446, as well as support from the Department of Energy Applied Math Research program under U.S. DOE Contract DE-AC02-06CH11357. We would also like to thank the anonymous reviewers for providing helpful comments on earlier drafts of the manuscript. We are particularly indebted to the reviewers for bringing to our attention an error present in the proof of Proposition 1, which has since been rectified.

References

1. S. Balay, S. Abhyankar, M. F. Adams, S. Benson, J. Brown, P. Brune, K. Buschelman, E. Constantinescu, L. Dalcin, A. Dener, V. Eijkhout, W. D. Gropp, V. Hapla, T. Isaac, P. Jolivet, D. Karpeev, D. Kaushik, M. G. Knepley, F. Kong, S. Kruger, D. A. May, L. C. McInnes, R. T. Mills, L. Mitchell, T. Munson, J. E. Roman, K. Rupp, P. Sanan, J. Sarich, B. F. Smith, S. Zampini, H. Zhang, H. Zhang, and J. Zhang, PETSc/TAO users manual, Tech. Rep. ANL-21/39 - Revision 3.17, Argonne National Laboratory, 2022.
2. S. Balay, S. Abhyankar, M. F. Adams, S. Benson, J. Brown, P. Brune, K. Buschelman, E. M. Constantinescu, L. Dalcin, A. Dener, V. Eijkhout, W. D. Gropp, V. Hapla, T. Isaac, P. Jolivet, D. Karpeev, D. Kaushik, M. G. Knepley, F. Kong, S. Kruger, D. A. May, L. C. McInnes, R. T. Mills, L. Mitchell, T. Munson, J. E. Roman, K. Rupp, P. Sanan, J. Sarich, B. F. Smith, S. Zampini, H. Zhang, H. Zhang, and J. Zhang, PETSc Web page. https://petsc.org/, 2022.
3. B. Chazelle and L. Palios, Triangulating a non-convex polytope, in Proceedings of the Fifth Annual Symposium on Computational Geometry, 1989, pp. 393–400.
4. M. T. De Berg, M. Van Kreveld, M. Overmars, and O. Schwarzkopf, Computational Geometry: Algorithms and Applications, Springer Science & Business Media, 2000.
5. A. Doi and A. Koide, An efficient method of triangulating equi-valued surfaces by using tetrahedral cells, IEICE Transactions on Information and Systems, 74 (1991), pp. 214–224.
6. H. Edelsbrunner, Geometry and Topology for Mesh Generation, Cambridge University Press, 2001.
7. K. Erleben, H. Dohlmann, and J. Sporring, The adaptive thin shell tetrahedral mesh, (2005).
8. M. G. Knepley and D. A. Karpeev, Mesh algorithms for PDE with Sieve I: Mesh distribution, Scientific Programming, 17 (2009), pp. 215–230. http://arxiv.org/abs/0908.4427.
9. M. G. Knepley, M. Lange, and G. J. Gorman, Unstructured overlapping mesh distribution in parallel, 2017.
10. M. Lange, L. Mitchell, M. G. Knepley, and G. J. Gorman, Efficient mesh management in Firedrake using PETSc-DMPlex, SIAM Journal on Scientific Computing, 38 (2016), pp. S143–S155.
11. C. W. Lee and F. Santos, Subdivisions and triangulations of polytopes, in Handbook of Discrete and Computational Geometry, Chapman and Hall/CRC, 2017, pp. 415–447.
12. S. Meshkat and D. Talmor, Generating a mixed mesh of hexahedra, pentahedra and tetrahedra from an underlying tetrahedral mesh, International Journal for Numerical Methods in Engineering, 49 (2000), pp. 17–30.
13. J. Ruppert and R. Seidel, On the difficulty of tetrahedralizing 3-dimensional non-convex polyhedra, in Proceedings of the Fifth Annual Symposium on Computational Geometry, 1989, pp. 380–392.
14. J. R. Shewchuk, A condition guaranteeing the existence of higher-dimensional constrained Delaunay triangulations, in Proceedings of the Fourteenth Annual Symposium on Computational Geometry, 1998, pp. 76–85.
15. D. Sokolov, N. Ray, L. Untereiner, and B. Lévy, Hexahedral-dominant meshing, ACM Transactions on Graphics (TOG), 35 (2016), pp. 1–23.

Combinatorial Methods in Grid Based Meshing
Henrik Stromberg, Valentin Mayer-Eichberger, and Armin Lohrengel

1 Introduction

Meshes containing elements of high quality are of utmost importance for mechanical engineering applications using the Finite Element Method (FEM). Current industry solutions require a huge amount of manual geometry preprocessing and mesher configuration for FEM solvers to produce accurate solutions with reasonable computational resources. Our work addresses this issue by proposing a grid-based meshing approach that produces good-quality hex-dominant meshes which also allow the element types prisms, pyramids and tetrahedra. Our novel method combines multiple algorithmic techniques, both heuristic and combinatorial. This paper explains our method, highlights the central concepts and demonstrates the method with examples. One key step of our algorithm uses decompositions of a coarser grid into subdivisions of the elements that are intersected by the target geometry. The combinatorial explosion of this step is overcome by pre-computing all possible subdivisions using Answer Set Programming (ASP). We are able to compute optimal solutions for all required cases. The paper is structured as follows. In Sect. 2 we define the terminology used throughout the paper and discuss mesh and element quality. In Sect. 3 we discuss how our work relates to state-of-the-art meshing algorithms. In Sect. 4 we describe our core method, and in Sect. 5 we explain the generation of subdivisions

H. Stromberg (B) · A. Lohrengel
Institut für Maschinenwesen, Robert-Koch-Straße 32, 38678 Clausthal-Zellerfeld, Germany
e-mail: [email protected]
A. Lohrengel
e-mail: [email protected]
V. Mayer-Eichberger
Institute of Computer Science, An der Bahn 2, 14476 Potsdam, Germany
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
E. Ruiz-Gironés et al. (eds.), SIAM International Meshing Roundtable 2023, Lecture Notes in Computational Science and Engineering 147, https://doi.org/10.1007/978-3-031-40594-5_12


and present statistics of their characteristics. Sections 6 and 7 go into detail on the main steps of the algorithm. More examples are analysed and evaluated in Sect. 8. Finally, we conclude and discuss future work in Sect. 9.

2 Background

We use the following terminology throughout this paper. The terms distinguish between entities of the mesh, its dual graph and the constraining geometry (Table 1). Our approach computes hex-dominant meshes for application in FEM. For structural applications, hex-dominant meshes can yield more accurate simulation results than fully hexahedral meshes, as their elements are of higher quality (see e.g. [19]). Fully hexahedral meshes often exhibit low-quality elements in regions with high stresses, which reduces the overall solution quality. Our work addresses the quality of mesh elements for FEM, and we discuss established quality metrics. Finite elements must be convex and inversion-free to be usable in this context. This property can be proved by showing that the Scaled Jacobian of all elements is positive [15]. Besides this requirement, a wide range of mesh quality metrics exist. Furthermore, generated meshes shall be conformal and must not exhibit hanging nodes or edges.
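As a quick illustration, the corner-wise Scaled Jacobian test for a hexahedron can be sketched as follows (our own minimal implementation, assuming the common vertex ordering with the bottom face 0-3 and the top face 4-7; the exact metric definitions used in the paper may differ):

```python
import numpy as np

# the three edge neighbours of each hexahedron corner, ordered so that the
# determinant is positive for a right-handed, convex element
HEX_CORNER_EDGES = [(0, (1, 3, 4)), (1, (2, 0, 5)), (2, (3, 1, 6)), (3, (0, 2, 7)),
                    (4, (7, 5, 0)), (5, (4, 6, 1)), (6, (5, 7, 2)), (7, (6, 4, 3))]

def scaled_jacobian(verts):
    """Minimum Scaled Jacobian over the 8 corners; verts is (8, 3).
    Positive values indicate a convex, inversion-free element."""
    sj = 1.0
    for c, (a, b, d) in HEX_CORNER_EDGES:
        e = np.array([verts[a] - verts[c], verts[b] - verts[c], verts[d] - verts[c]])
        sj = min(sj, np.linalg.det(e) / np.linalg.norm(e, axis=1).prod())
    return sj

unit_cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
assert abs(scaled_jacobian(unit_cube) - 1.0) < 1e-12
```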

Table 1 Term definitions

  Term            | Definition
  ----------------|------------------------------------------------------------
  Node            | Mesh node
  Graph node      | Node of a graph
  Edge            | Mesh edge
  Graph edge      | Edge of a graph
  Face            | Element face in the mesh
  Element         | Element in the mesh
  Corner          | Geometry node
  Curve           | Geometry edge (may be curved)
  NURBS           | Geometry surface
  Body            | Geometry volume enclosed by NURBS
  Precursor mesh  | Mesh from which the part can be made by subtraction
  Retrenched mesh | Mesh after cutting off all nodes outside of the target geometry
  Super element   | Element with twice the intended edge length, to be decomposed into smaller sub-elements later in the pipeline


3 Related Work

This paper relates to multiple fields in the meshing community. Several approaches are known for the generation of quad and hex meshes in FEM. While octree-based mesh generators can directly create volume meshes, Delaunay-triangulation-based meshers or whisker-weaving-based algorithms first create a surface mesh and then a volume mesh starting from the surface. Whisker weaving algorithms directly create hexahedral meshes. The hexahedral meshes created through octree approaches cannot be used for FEM if they contain hanging nodes, which result in non-conformal meshes. The hanging nodes can be removed by using so-called octant cutout templates, which decompose each hexahedral octant into multiple tetrahedra, thereby eliminating all hanging nodes; this was an inspiration for our approach working on hybrid meshes. The resulting mesh is tetrahedral [20]. There are also octree-based mesh generators which use all-hexahedral octants, such as the one by Borden et al. [2]. However, implementations which are available for productive use, such as snappyHexMesh, do not include techniques like the ones shown by [2]. Delaunay-based algorithms first create a triangulation and then try to generate hexahedra from there (e.g. [14]). Whisker weaving algorithms first create a volume mesh, based on a quadrilateral surface mesh, without defining node locations, thus only generating the mesh topology. Then node locations are set and faulty elements are resolved. These algorithms work well on simple box-like geometries, but struggle with more complex parts [17]. A field of active research is algorithms that generate structured meshes. There is no unique definition of structured meshes across the literature. In the context of our work, we use the definition of [11] and evaluate the structure of a mesh by examining its singularity graph or, more generally, the valence of mesh nodes. Thus, instead of a precise definition, the topic can be approached through traits expected from a structured mesh. These traits are: low maximum valence (number of edges at a given node) over the whole mesh, a grid-like structure, and conservation of part symmetries. The common way of acquiring structured meshes are sweeping algorithms. They create a volume mesh by sweeping a planar mesh through the volume (see e.g. [5]). Sweeping mesh generators yield meshes with very good structural properties but are very limited in terms of accepted geometry. A common use case is to have the user dissect the geometry into sweepable sections. However, sweepability of the complete geometry is not guaranteed by all sections being sweepable, and may also depend on the meshing order of the sections. This approach is widely used in commercial software such as ANSYS [1]. It requires experienced users and often causes unexpected results. A current development are quasi-structured quadrilateral meshing algorithms such as [13], which relax the stated requirements for structured meshes and can then achieve an automated tool chain.


There are publications describing subtractive operations on meshes, such as [2] or [21]. They perform a cutting operation on a given mesh by first altering node positions to obtain a closed loop of edges on the intersection curve between the initial mesh and the cutting tool. Then they remove elements and try to improve the resulting mesh. Even though this approach was a starting point for the work presented here, it proved unusable for the problem of cutting a complex geometry out of a mesh. The main issue is that it does not guarantee the preservation of part edges. A second issue is that the known method can create elements with critically low Scaled Jacobians. Zhu et al. [21] show an example where a chamfer is applied to a hex nut; the minimum Scaled Jacobian is low. Other publications on subtractive meshing techniques, such as [3], actually perform a remeshing, thus leaving the scope of this article. Grid-based meshing algorithms, such as the method described by Owen and Shelton in [9], use a grid of cubes which envelops the desired geometry. Then boundary surfaces of the geometry are mapped onto the mesh. Finally, all sections of the mesh which are outside of the target geometry are cut from the mesh. Such methods yield good results on parts where the generated grid is highly similar to the target geometry, but produce poor-quality meshes on parts with deviating geometry. Their results are also sensitive to the orientation of the target geometry in global coordinates [11]. Livesu et al. [7] propose a similar grid-based method using Super Elements and refined subdivisions. However, their work considers only hexahedral elements. The computational problem of finding subdivisions becomes easier in their case, but their method cannot guarantee a minimal quality of the resulting elements. Many approaches with properties similar to the ones discussed exist, but follow the same general concepts. All known implementations of meshing algorithms use a Boundary REPresentation (BREP) geometry interface, neglecting the Computer Aided Design (CAD) tree structure. Our method takes advantage of the expressiveness of the CAD tree, and to the best of our knowledge this has not been done before in the context of automatic hex-dominant meshing. Our study of subdivisions of the cube is related to other combinatorial approaches investigating similar problems: in [10], the computation of all possible decompositions of a single hexahedron into tetrahedra is done through Boolean satisfiability solving, which uses algorithmic techniques similar to ASP. Their method is limited to the 8 nodes of the hexahedron itself and introduces no further interior nodes. In a related combinatorial approach, the decomposition of pyramids into hexahedra is studied in [18]. The use of Integer Programming in meshing is demonstrated in [12] to optimize a balanced octree in a grid-based meshing algorithm.

4 Algorithm

Our meshing algorithm consists of four stages:
1. Generate Precursor Mesh
2. Assign Super Elements


Fig. 1 Stages of part production, typical of subtractive manufacturing in mechanical engineering

3. Map Mesh to Geometry
4. Optimize Mesh

A brief description of the steps is as follows: The first stage chooses a coarse Precursor Mesh from the target geometry as a starting point for the pipeline. Then, to each element in the Precursor Mesh a Super Element is assigned to roughly match the geometry restrictions. In the third step, each node in the finer mesh of the Super Elements is mapped back to the target geometry. Finally, the mesh is optimized to improve its quality. To generate conformal meshes, the algorithm requires a complete collection of Super Element decompositions for every subset of the 8 corner nodes of the unit cube. This collection constitutes a main contribution of our work, and its generation is explained in depth in Sect. 5. In the rest of this section we discuss the prerequisites for the steps of our algorithm and our reasoning behind the proposed method. The costly computation for Super Elements does not scale to larger meshes with more than 35 nodes. Due to the size requirements of real-world applications of up to 10^8 nodes, we have chosen an approach inspired by subtractive manufacturing. Many geometries meshed for simulations represent mechanical parts. Such parts are typically designed to be manufactured by removing material from bar stock such as an I-beam. As the bar stock is manufactured in a continuous rolling process, its geometry is easily meshable with sweeping methods. From this swept mesh, material has to be removed to carve the target geometry out of the bar stock, as shown in Fig. 1. The process stages are illustrated with a cube with a central cylindrical hole subtracted from it. Just removing elements from the mesh in order to emulate the cutting of material is insufficient, as doing so may create a rough surface which does not match the desired part contour. We have modeled the cutting of material by replacing all elements of


Fig. 2 Example of interfaces between Super Elements. Green nodes are retained Precursor Mesh nodes; dropped nodes are marked red. Possible additional nodes, be they used or unused, are drawn black

the Precursor Mesh with pre-computed Super Elements. All Super Elements we use are intended to replace elements of the Precursor Mesh, thereby increasing its granularity. Each Super Element is specified by removing some of the nodes from the base element geometry. For an eight-node hexahedron, 2^8 = 256 possible Super Elements exist. In order to assure compatibility between the used Super Elements, each node of the Precursor Mesh must be kept or removed consistently for all adjacent elements, and Super Elements must be chosen accordingly. Furthermore, we have globally defined the surface mesh of a Super Element face depending on the nodes present in the Super Element (see Fig. 2; a minimal sketch of this masking scheme follows below). In this approach, the assembly of a part from Super Elements can be seen as selecting which nodes of the Precursor Mesh are supposed to be present in the final mesh. We call the result of this process the Retrenched Mesh. The Retrenched Mesh for the running example is shown in Fig. 3, and the details of the process are described in Sect. 6. The coordinates of the nodes within all Super Elements are transformed such that the outer nodes of the Super Element coincide with the corresponding nodes of the Precursor Mesh. The resulting mesh is a rough approximation of the desired geometry. In order to make the surfaces match, all faces of the surface mesh are bound to entities of the target geometry, as shown for the example in Fig. 3. The details of the process are described in Sect. 7. Geometric entities which are not fully represented are deemed too small to be part of the mesh and are neglected as a consequence. In a last step, the mesh quality is improved by first optimizing the surface mesh and then the volume mesh. The surface mesh is optimized by moving surface nodes on their respective geometric entities. Nodes which are bound to corners cannot be moved. Currently, elements with a low Scaled Jacobian (SJ) are identified; then the node locations of these elements are optimized one element at a time. The resulting changes are minuscule in the provided example. The novel processes Super Element Assignment and Mesh Mapping can be implemented in O(n log n) in the number of elements, as shown in the respective sections.
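A minimal sketch of that masking scheme (our own names; the 256-entry table itself is the precomputed collection described in Sect. 5):

```python
def super_element_mask(corner_nodes, kept_nodes):
    """corner_nodes: the element's 8 global node ids in local order;
    kept_nodes: set of node ids retained in the Retrenched Mesh.
    Returns the index into the precomputed 256-entry Super Element table."""
    mask = 0
    for local, node in enumerate(corner_nodes):
        if node in kept_nodes:
            mask |= 1 << local
    return mask

# Two elements sharing a face see the same kept/dropped pattern on that face,
# so the globally defined face meshes of their Super Elements match.
assert super_element_mask(range(10, 18), {10, 11, 14, 15}) == 0b00110011
```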


Fig. 3 Top: mesh of Example 1 after assigning Super Elements. Bottom: mesh after being mapped to the geometry. This resulting mesh was automatically computed from the target geometry with our proposed method and did not require manual improvements

Given this complexity of the complete algorithm, we expect our approach to scale to typical industrial simulation applications with up to $10^8$ nodes.

5 Super Element Generation In this section we explain the application of Answer Set Programming (ASP) to generate optimal Super Elements from subdivisions of the cube. To generate the finer-grained internal mesh of the hexahedral Super Elements, we chose 27 additional nodes located proportionally in the mid-sections of the cube, for a total of 35 node locations. The additional nodes provide a refinement of the Super Element. The node locations are shown in Fig. 4. The main task is to compute the optimal hybrid mesh that can be created from all these nodes. A naive brute-force algorithm would not be able to solve this computational task with realistic resources, so more advanced methods are necessary. From these node locations, all possible tetrahedra, hexahedra, prisms, and pyramids are generated, and their Scaled Jacobian is computed according to [8]. We have specified a minimum SJ for each element type in order to exclude elements of poor quality and to speed up the solution process. Ultimately, the element portfolio listed in Table 2 is used to assemble mesh solutions. Note that SJ values for different element types are incomparable, so we selected different minimal thresholds. We have modeled a logically consistent mesh in terms of its dual graph, enforcing the topological constraints with the following rule set:


Fig. 4 The set of all additional node positions for a hexahedron that are available for the generation of the subdivision. We consider all possible elements that can be generated from subsets of these nodes (Table 2)

Table 2 Element portfolio

Element type   All elem. (SJ > 0)   Min. SJ   Count
Tetrahedrons   44850                0.16      22750
Hexahedrons    16333575             0.3       3809
Prisms         1351685              0.45      2751
Pyramids       264501               0.35      1626

• Each face of an element must connect to exactly one other element or be part of the outer hull.
• Any two neighboring faces on the outer hull are locally convex.
• All elements are connected (there are no disconnected sub-graphs).
• No edge of any face may intersect any other face, except at shared nodes (the mesh is not self-intersecting).

All these constraints were modelled in ASP, which is well suited for describing such graph-based constraints, and the resulting program is natural and human-readable. In particular, the connectivity constraint is difficult to formulate in related approaches such as Integer Programming or Boolean Satisfiability. We use the state-of-the-art ASP solver Clingo¹ by the Potassco group [4] to compute solutions to our problem. There are 256 instances for which meshes have to be generated in order to obtain a complete set of Super Elements. Only 22 problems (plus the trivial empty mesh problem) have to be solved, because many problems can be transformed into each other by rotating or mirroring the instance. Table 2 lists the number of elements with positive SJ for each type. Allowing all these elements in the optimization would make the problem instances too large, so we carefully determined a minimal threshold for each element type to keep them manageable (Fig. 5).

¹ https://potassco.org/clingo/.
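The reduction from 256 instances to 22 non-trivial cases can be checked independently: the corner-node subsets of the cube fall into 23 orbits (22 non-trivial ones plus the empty set) under the 24 proper rotations of the cube. The following sketch (our illustration, not the authors' code) counts these orbits by canonicalizing each 8-bit node-presence mask:

    # Count orbits of corner-node subsets of the cube under its rotations.
    from itertools import product

    # Corner i of the unit cube has coordinates (i & 1, (i >> 1) & 1, (i >> 2) & 1).
    corners = [(i & 1, (i >> 1) & 1, (i >> 2) & 1) for i in range(8)]
    index = {c: i for i, c in enumerate(corners)}

    EVEN = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}
    rotations = []
    for ax in ((0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)):
        for flips in product((0, 1), repeat=3):
            # A signed axis permutation is a proper rotation iff the permutation
            # parity matches the parity of the number of coordinate flips.
            if (ax in EVEN) == (sum(flips) % 2 == 0):
                rotations.append(tuple(index[tuple(c[ax[k]] ^ flips[k] for k in range(3))]
                                       for c in corners))

    def canonical(mask):
        # Smallest image of the node-presence bitmask under all 24 rotations.
        return min(sum(((mask >> i) & 1) << p[i] for i in range(8)) for p in rotations)

    print(len({canonical(m) for m in range(256)}))  # 23 = 22 non-trivial cases + empty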


Fig. 5 Graph representation of a mesh with triangular faces colored red and quadrangular faces colored green

Our implementation generates meshes for all cases in about one minute per problem with the element portfolio from Table 2 on 8 cores at 3.6 GHz. The solver proved optimality for all solutions. We are interested in various quality metrics of the Super Elements beyond maximizing the minimal SJ. To get a better idea of whether other criteria are better suited, we ran separate optimizations for the following goals: • maximize the minimum SJ • minimize the element count • minimize the maximum node valency. For the optimization of the minimum SJ, we list all 20 non-trivial Super Elements in Fig. 6. Most of these solutions would be very hard to find by hand, and impossible to prove optimal. Table 3 lists the optimization results for all 22 solved problems with values regarding the different goals. In the more complex instances, different goals lead to different meshes.

6 Super Element Assignment For each element of the Precursor Mesh a Super Element has to be assigned. Compatibility between the selected Super Elements is guaranteed by deciding which nodes


Fig. 6 All non-trivial subdivisions of a unit cube maximising the minimal SJ. The element types are indicated by the following color coding: Hexahedra in green, Tetrahedra in red, Prisms in blue and Pyramids in yellow


Table 3 Optimization results for all subdivisions, with the best results for the three different criteria, respectively. Usually, the three resulting meshes per case are different. A comprehensive list of the solutions for the column Min SJ, computed according to [8], is shown in Fig. 6. SJs for mixed meshes cannot be compared to hexahedral meshes

Case   # Inst.   Max. valence   Min SJ   Elem. count
1      8         3              0.35     1
2      12        4              0.46     2
3      12        8              0.26     20
4      24        9              0.21     18
5      6         5              1.00     4
6      4         6              0.24     6
7      24        13             0.21     36
8      12        12             0.21     41
9      8         8              0.29     14
10     8         13             0.25     36
11     12        13             0.21     41
12     24        12             0.21     36
13     24        9              0.21     28
14     6         6              0.46     8
15     24        12             0.35     29
16     12        6              0.46     8
17     2         22             0.35     52
18     8         19             0.35     46
19     12        16             0.35     40
20     4         18             0.35     50
21     8         12             0.35     29
22     1         6              1.00     8

on the Precursor Mesh shall be present in the final mesh and selecting the resulting Super Element based on the present nodes. The presence of each Precursor Mesh node in the final mesh can thus be described as a vector of Boolean variables $N$. For inserting each Super Element into each element of the Precursor Mesh, a fit quality is computed by approximating the residual volume $R$. Figure 7 illustrates this procedure, simplified to a planar problem. This is achieved by evaluating whether a set of integration points (see X in Fig. 7) is inside or outside the geometry and whether they are inside or outside of the Super Element. Then, an assignment of all $N$ is searched which minimizes the sum of all $R$ for the selected Super Elements in their respective Precursor Mesh elements. The integration points need to be tested for being inside the Super Elements only once, as the Super Elements are not problem dependent. We use a grid of $5 \times 5 \times 5$ integration points, as this is the smallest uniform grid which is able to differentiate all 256 Super Elements.


Fig. 7 Assembling a mesh by assigning Super Elements to Precursor Mesh elements

Computing $R$ as the sum of all deviating integration points leads to wavy mesh surfaces for plain geometries. We have found solutions to have a much higher quality when computing $R$ as $R = R_S^2 + R_P$, where $R_S$ is the number of integration points only occurring in the Super Element and $R_P$ is the number of integration points only occurring in the geometry. By doing so, cutting more material is favoured and the algorithm becomes more stable. We solve the minimization problem of $R(N)$ with the ASP solver Clingo to compute optimal solutions. We were able to solve the benchmark problem in Fig. 3 for up to 64 elements in the Precursor Mesh with proven optimality. For larger meshes, no optimum was found in reasonable computation time and the non-optimal results were not satisfactory. In order to scale the overall method to real-world instances, we have developed a heuristic that computes good approximations for $R(N)$. This heuristic works by iterating over all elements of the Precursor Mesh and finding the most suitable Super Element for each. All elements vote on whether they want adjacent nodes included in or excluded from the mesh. The execution time of this algorithm is linear in the element count. When comparing the ASP-based exact method to the heuristic, the two methods compute the same results on our set of sample instances. This indicates that our heuristic provides decent solutions for larger instances.
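As a minimal illustration (our sketch, not the authors' code; names are assumptions), the fit quality of one candidate Super Element against the target geometry reduces to two counts over the integration grid:

    # Hypothetical sketch of the fit quality R = R_S**2 + R_P for one candidate
    # Super Element placed in one Precursor Mesh element.
    def residual(inside_super, inside_geom):
        # inside_super, inside_geom: parallel booleans over the 125 integration points
        R_S = sum(s and not g for s, g in zip(inside_super, inside_geom))  # excess material
        R_P = sum(g and not s for s, g in zip(inside_super, inside_geom))  # missing material
        return R_S**2 + R_P  # squaring R_S favours cutting more material

    # Toy example: 3 points of excess material, 1 point of missing material.
    print(residual([1, 1, 1, 1, 0], [1, 0, 0, 0, 1]))  # 3**2 + 1 = 10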


7 Entity Mapping In the entity mapping phase of the algorithm, all surface nodes are assigned to geometric entities. This allows us to improve the geometric accuracy, elevate the mesh order, and insert finer-grained elements. Geometric entities are surfaces which enclose and define volumes. They are bounded by curves, which in turn are terminated by corners, as shown in Fig. 8. For each node of the surface mesh, one geometric entity has to be assigned. We achieve this assignment by first assigning a surface to each element face of the surface mesh. The surface is determined by projecting a line from the element face center in normal direction. The surface assigned to the element face is the first one intersecting this line, regardless of the sign of the line parameter at the intersection. In this process we obtain a colored mesh (see Fig. 9, left). We execute the intersection test with a fine triangular mesh of the target geometry because line intersection tests with NURBS geometry are complex and unreliable: the intersection of a NURBS surface and a line is a nonlinear problem, whereas the intersection of a line and a triangle is a linear problem. The unreliability stems from the fact that there is no way to securely test that an arbitrary nonlinear function has no roots. The triangular mesh is generated with $\frac{1}{4}$ of the edge length of the target mesh. When all element faces adjacent to a node have the same color, the node is assigned to this surface. In case the adjacent element faces have two different colors, the node is assigned to the curve splitting the two surfaces. If the two surfaces share multiple curves, the closest curve to the node is chosen. When more than two colors occur, the node must be assigned to a corner. If there is a corner with the same colors in its neighbouring surfaces, it is selected. Otherwise, the distance between the node and the potentially assigned corners is used as a tiebreaker. In this process, any geometry smaller than the element size of the mesh is automatically de-featured.
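The color-based classification rule condenses into a few lines. The following sketch (our illustration; the function name and return encoding are assumptions) maps the set of surface colors around a node to the kind of entity it is bound to:

    # Hypothetical sketch of the node classification rule described above.
    def classify_node(adjacent_face_colors):
        colors = sorted(set(adjacent_face_colors))
        if len(colors) == 1:
            return ("surface", colors[0])
        if len(colors) == 2:
            # Bound to the curve splitting the two surfaces; if they share
            # several curves, the closest one is chosen in a second step.
            return ("curve", tuple(colors))
        # More than two colors: the node must sit on a corner; ties are broken
        # by corner colors and then by distance.
        return ("corner", tuple(colors))

    print(classify_node([3, 3, 3, 3]))  # ('surface', 3)
    print(classify_node([3, 3, 5, 5]))  # ('curve', (3, 5))
    print(classify_node([1, 3, 5]))     # ('corner', (1, 3, 5))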

Fig. 8 Geometric entities


Fig. 9 Entity mapping process

A trivial implementation of the algorithm would require projecting a test line from each face of the surface mesh onto each triangle of the underlying tessellation. In the worst case, the number of surface mesh faces is of the same order of magnitude as the volume element count and the total node count. In this case, the time complexity of the algorithm with respect to the total number of mesh nodes $n$ is in $O(n^2)$. This run-time can be improved to $O(n \log n)$ by employing a more sophisticated ray casting method such as lazy sweep [16].

8 Results on Examples The method is tested on a subset of the MAMBO [6] data set. The Precursor Mesh for the geometries is selected automatically by analysing the CAD tree structure of the model. The results are presented in Table 4. The resulting meshes and their quality reflect the underlying model structure of the geometry. B5 is modeled as a revolution, whereas B14 is modeled as an extrusion. In the case of B11 and B14, the circular cross section is detected and a specialized surface mesher is used for the 2D mesh before sweeping. The elliptic cross section of B15 is not recognized, resulting in a lower quality mesh. The difference between B10 and our own example from Fig. 3 is caused by the model structure as well. The MAMBO example is modeled as a single


Table 4 The hex-dominant meshes automatically computed by our method from basic instances of the MAMBO geometry benchmark (shown models: B0, B2, B4, B5, B6, B7, B9, B10, B11, B12, B14, B15, B17, B19, B21, and B42)


extrusion, and our own example is modeled as an extruded cube with a subtracted cylinder. In general, our method shows good performance on locally convex geometries. For cases such as B2, which contain sharp non-convex features, the method fails to generate an adequate mesh. This issue is caused by the fact that the algorithm currently only uses convex Super Elements. The results can be improved by including non-convex Super Elements.

9 Conclusion and Future Work We have presented a novel approach for grid-based meshing using heuristic and combinatorial methods. For two steps of our pipeline, we applied Answer Set Programming to solve combinatorial sub-problems. The set of schemes for subdivisions of the cube may find application in other hybrid meshing algorithms. Our success in using ASP in the context of meshing may inspire other combinatorial analyses of related problems. Our method advocates using the underlying CAD model of the geometry. Especially in the context of FEM in mechanical engineering, we believe that this should receive more attention. Future algorithms could compare their solutions to ours using the same CAD tree from the MAMBO geometry benchmark. These CAD models, the ASP code we have used, and all the computed Super Elements are made available on GitHub: https://github.com/HenrikJStromberg/combinatorial_meshing. As next steps in our work, we see the following improvements. The current results of our mesh optimization seem to be inferior to those presented by [9]; we will combine their mesh optimization with our algorithm in the future. We plan to evaluate our method on more complex instances available in the MAMBO set. Finally, we aim to improve the efficiency of our implementation to tackle problem sizes common in industrial applications. Acknowledgements We would like to thank Jeff Erickson and Jean Christoph Jung for discussing our ideas with us and giving valuable feedback.

References
1. Ansys Inc., Ansys Mechanical 2022 R2, 2022.
2. M. J. Borden, J. F. Shepherd, and S. E. Benzley, Mesh cutting: Fitting simple all-hexahedral meshes to complex geometries, in Proceedings, 8th International Society of Grid Generation Conference, 2002.
3. G. Dhondt, A new automatic hexahedral mesher based on cutting, International Journal for Numerical Methods in Engineering, 50 (2001), pp. 2109–2126.
4. M. Gebser, R. Kaminski, B. Kaufmann, and T. Schaub, Multi-shot ASP solving with clingo, Theory and Practice of Logic Programming, 19 (2019), pp. 27–82.


5. S. R. Jankovich, S. E. Benzley, J. F. Shepherd, and S. A. Mitchell, The graft tool: An all-hexahedral transition algorithm for creating a multi-directional swept volume mesh, in Proceedings of the 8th International Meshing Roundtable, South Lake Tahoe, California, USA, October 10-13, 1999, K. Shimada, ed., 1999, pp. 387–392.
6. F. Ledoux, MAMBO, June 2022.
7. M. Livesu, L. Pitzalis, and G. Cherchi, Optimal dual schemes for adaptive grid based hexmeshing, ACM Transactions on Graphics (TOG), 41 (2021), pp. 1–14.
8. C. Lobos, Towards a unified measurement of quality for mixed-elements, Tech. Rep. 2015/01, (2015).
9. S. J. Owen and T. R. Shelton, Evaluation of grid-based hex meshes for solid mechanics, Engineering with Computers, 31 (2015), pp. 529–543.
10. J. Pellerin, K. Verhetsel, and J.-F. Remacle, There are 174 subdivisions of the hexahedron into tetrahedra, ACM Trans. Graph., 37 (2018).
11. N. Pietroni, M. Campen, A. Sheffer, G. Cherchi, D. Bommes, X. Gao, R. Scateni, F. Ledoux, J. Remacle, and M. Livesu, Hex-mesh generation and processing: a survey, ACM Transactions on Graphics, 42 (2022), pp. 1–44.
12. L. Pitzalis, M. Livesu, G. Cherchi, E. Gobbetti, and R. Scateni, Generalized adaptive refinement for grid-based hexahedral meshing, ACM Transactions on Graphics (TOG), 40 (2021), pp. 1–13.
13. M. Reberol, C. Georgiadis, and J.-F. Remacle, Quasi-structured quadrilateral meshing in gmsh - a robust pipeline for complex CAD models, arXiv preprint arXiv:2103.04652, (2021).
14. J. Remacle, F. Henrotte, T. C. Baudouin, C. Geuzaine, E. Béchet, T. Mouton, and E. Marchandise, A frontal Delaunay quad mesh generator using the L∞ norm, in Proceedings of the 20th International Meshing Roundtable, IMR 2011, October 23-26, 2011, Paris, France, W. R. Quadros, ed., Springer, 2011, pp. 455–472.
15. J. F. Shepherd, Topological and geometric constraint-based hexahedral mesh generation, PhD Thesis, University of Utah, USA, 2007.
16. C. T. Silva and J. S. Mitchell, The Lazy Sweep Ray Casting Algorithm for Rendering Irregular Grids, IEEE Transactions on Visualization and Computer Graphics, 3 (1997).
17. T. J. Tautges, T. D. Blacker, and S. A. Mitchell, The whisker-weaving algorithm: A connectivity-based method for constructing all-hexahedral finite element meshes, International Journal for Numerical Methods in Engineering, 39 (1996), pp. 3327–3349. Publisher: Wiley.
18. K. Verhetsel, J. Pellerin, and J.-F. Remacle, A 44-Element Mesh of Schneiders' Pyramid, in 27th International Meshing Roundtable, X. Roca and A. Loseille, eds., Lecture Notes in Computational Science and Engineering, Cham, 2019, Springer International Publishing, pp. 73–87.
19. C. Veyhl, I. Belova, G. Murch, A. Öchsner, and T. Fiedler, On the mesh dependence of non-linear mechanical finite element analysis, Finite Elements in Analysis and Design, 46 (2010), pp. 371–378.
20. M. A. Yerry and M. S. Shephard, Automatic three-dimensional mesh generation by the modified-octree technique, International Journal for Numerical Methods in Engineering, 20 (1984), pp. 1965–1990.
21. H. Zhu, S. Gao, and C. Xian, Hexahedral Mesh Cutting Using Geometric Model With New Boundaries Well Matched, American Society of Mechanical Engineers Digital Collection, (2013), pp. 147–156.

Estimating the Number of Similarity Classes for Marked Bisection in General Dimensions Guillem Belda-Ferrín, Eloi Ruiz-Gironés, and Xevi Roca

1 Introduction In adaptive $n$-dimensional refinement, conformal simplicial meshes must be locally modified. One systematic modification for arbitrary dimensions is to bisect a set of selected simplices. This operation splits each simplex by introducing a new vertex on a previously selected refinement edge. Then, this new vertex is connected to the original vertices to define two new simplices. To ensure that the mesh is still conformal, the bisection has to select additional refinement edges on a surrounding conformal closure. In $n$-dimensional bisection, the edge selection is commonly based on choosing the longest edge [1-4], the newest vertex [5-10], or using marked bisection [11-13]. Although all these edge selections are well-suited for adaptation, marked $n$-dimensional bisection has shown to be suitable for local refinement of unstructured simplicial meshes of three or more dimensions [11-13]. Moreover, on unstructured meshes, marked bisection enforces a key advantage for $n$-simplicial adaptation. That is, successive refinement leads to a fair number of simplex similarity classes [11-13]. Two simplices belong to the same similarity class if the first one can be transformed into the second one using uniform scalings, rotations, reflections, and translations. That is, both simplices have the same shape, but different sizes, alignments, orientations, and positions. As a consequence, both simplices have G. Belda-Ferrín · E. Ruiz-Gironés · X. Roca (B) Computer Applications in Science and Engineering, Barcelona Supercomputing Center - BSC, 08034 Barcelona, Spain e-mail: [email protected] G. Belda-Ferrín e-mail: [email protected] E. Ruiz-Gironés e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Ruiz-Gironés et al. (eds.), SIAM International Meshing Roundtable 2023, Lecture Notes in Computational Science and Engineering 147, https://doi.org/10.1007/978-3-031-40594-5_13


the same shape quality. Since the number of similarity classes is bounded, so is the minimum mesh quality, and therefore the bisection method is stable. To understand this stability advantage, we overview the structure of marked bisection. These methods feature a first stage performing a specific-purpose bisection for marked simplices. This marked bisection enforces that, after a few initial steps, one can switch independently on each element to another stage featuring Maubach's newest vertex bisection [7]. The number of initial bisection steps is comparable with the spatial dimension. Hence, these steps are responsible for a reasonable increase in the total number of similarity classes. Estimating the number of similarity classes is key to assessing the quality of a marked bisection method. This is so because this number indicates a priori the minimum mesh quality under successive mesh refinement. If the number of classes is smaller, the minimum quality tends to be higher. In the 3D case, there is a tight bound on the number of similarity classes [11]. However, an estimation of the number of similarity classes generated under successive refinement in arbitrary dimensions is still missing. Accordingly, the main contribution of this work is to estimate, in arbitrary dimensions, an upper bound on the number of similarity classes obtained with a conformal marked bisection method. Moreover, to understand the cyclic structure of the minimum quality, we estimate the number of uniform refinements required to generate all the similarity classes. In particular, we analyze the number of similarity classes and the number of iterations to obtain them when bisecting simplices with the algorithm presented in [13]. To estimate an upper bound on the number of similarity classes in the worst case, we consider a general simplex without any special symmetry. Note that simplices featuring specific symmetries yield a smaller number of similarity classes; thus, the corresponding estimates are not upper bounds for the worst case. To estimate the number of similarity classes, we use two ingredients. First, the maximum number of similarity classes for newest vertex bisection [11]. Second, for marked bisection, we deduce that switching after $n$ refinements to newest vertex bisection with Maubach tag equal to $n$ is equivalent to switching after $n-2$ refinements to tag equal to $2$. Using our estimate, we derive the number of uniform refinements required to generate all the similarity classes. The results are devised to check how tight the estimation of the number of similarity classes is. For marked bisection, the number of similarity classes has not been estimated for a general simplex in arbitrary dimensions. Alternatively, existing works estimate the number of similarity classes for longest-edge bisection [14, 15]. For general shapes, the work [14] bounds the number of similarity classes for longest-edge bisection in two dimensions. For general dimensions, the work [15] numerically computes the number of similarity classes for longest-edge bisection of the equilateral simplex. In our work, we estimate the number of similarity classes for marked bisection and a general simplex in arbitrary dimensions. The rest of the paper is structured as follows. In Sect. 2, we introduce the preliminary notation and concepts. In Sect. 3, we summarize the used marked bisection. In Sect. 4, we provide an upper bound on the number of similarity classes. In Sect. 5, we


deduce a lower bound on the number of uniform refinements to obtain all the similarity classes. In Sect. 6, we show several examples. Finally, in Sect. 7, we present the concluding remarks of this work.

2 Preliminaries We proceed to introduce the necessary notation and concepts. Specifically, we introduce the preliminaries related to simplicial meshes, conformity, and bisection methods. Then, to summarize the marked bisection algorithm [13], we introduce the notion of multi-id, which provides a unique identifier for the mid-vertices, and the selection of the bisection edge in a consistent manner.

2.1 Simplicial Meshes, Conformity, and Bisection A simplex is the convex hull of $n+1$ points $p_0, \dots, p_n \in \mathbb{R}^n$ that do not lie in the same hyperplane. We denote a simplex as $\sigma = \mathrm{conv}(p_0, \dots, p_n)$. We identify each point $p_i$ with a unique integer identifier $v_i$ that we refer to as a vertex. Thus, a simplex is composed of $n+1$ vertices and we denote it as $\sigma = (v_0, \dots, v_n)$, where $v_i$ is the identifier of point $p_i$. We have a map $\Pi$ that sends each identifier $v_i$ to the corresponding point $p_i$. Given a simplex $\sigma$, a $k$-entity is a sub-simplex composed of $k+1$ vertices of $\sigma$, for $0 \le k \le n-1$. We say that a 1-entity is an edge and an $(n-1)$-entity is a face. The number of $k$-entities contained in a simplex $\sigma$ is
$$\binom{n+1}{k+1}.$$
In particular, the number of edges and faces of $\sigma$ is
$$\binom{n+1}{2} = \frac{n(n+1)}{2}, \qquad \binom{n+1}{n} = n+1,$$
respectively. We associate each face of a simplex $\sigma$ to its opposite vertex in $\sigma$. Specifically, the opposite face to $v_i$ is
$$\kappa_i = (v_0, v_1, \dots, v_{i-1}, v_{i+1}, \dots, v_n).$$
We say that two simplices $\sigma_1$ and $\sigma_2$ are neighbors if they share a face. We define the bisection of a simplex as the operation that splits a simplex by introducing a new vertex on the selected refinement edge, see Fig. 1. Then, the


Fig. 1 Bisection of a tetrahedron

Algorithm 1 Refining a subset of a mesh.
input: Mesh T, SimplicesSet S ⊂ T
output: ConformalMarkedMesh T2
1: function refineMesh(T, S)
2:   T1 = markMesh(T)
3:   T2 = localRefine(T1, S)
4:   return T2
5: end function

vertices not lying on this refinement edge are connected to the new vertex. These connections determine two new simplices.

2.2 Marked Bisection To perform the bisection process, we adapt to the $n$-dimensional case the recursive refine-to-conformity scheme proposed in [11]. The marked bisection method, Algorithm 1, starts by marking the initial unstructured conformal mesh and then applies a local refinement procedure to a set of simplices of the marked mesh. To do so, we need to specify a conformal marking procedure for simplices to obtain a marked mesh T1. Using this marked mesh, the local refinement procedure, Algorithm 9, first refines a set of simplices, then calls a recursive refine-to-conformity strategy, and finally renumbers the mesh. The refine-to-conformity strategy, Algorithm 10, terminates when successive bisection leads to a conformal mesh. Both algorithms use marked bisection to refine a set of elements, see Algorithm 11. More details of the involved algorithms are given in Appendix 8.

2.3 Unique Mid-Vertex Identifiers We use multi-ids to uniquely identify the new vertices that are created during the bisection process. A multi-id is a sorted list of vertices, $v = [v_1, \dots, v_k]$, where $v_1 \le v_2 \le \dots \le v_k$. A simplex that contains multi-ids is denoted as $\sigma = (v_0, \dots, v_n)$.


When creating a new vertex after bisecting an edge, we generate a multi-id for the new vertex. The new multi-id is the combination of the multi-ids of the edge vertices, v0 and v1. In particular, the resulting multi-id is created by merging and sorting the multi-ids of v0 and v1. We remark that ids can appear more than once after generating a new multi-id.
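As a minimal sketch (our illustration, not the authors' Julia code), the merge operation is a one-liner:

    # Hypothetical sketch: a multi-id is a sorted list of vertex ids; the
    # mid-vertex of a bisected edge gets the merged, sorted endpoint multi-ids.
    def mid_vertex_id(v0, v1):
        return sorted(v0 + v1)

    print(mid_vertex_id([1], [3]))        # [1, 3]
    print(mid_vertex_id([1, 3], [1, 2]))  # [1, 1, 2, 3]  (ids may repeat)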

2.4 Consistent Bisection Edge For all mesh entities shared by different mesh elements, we must ensure that these entities have the same bisection edge on all those elements. To this end, we base this selection on a strict total order of the mesh edges. The main idea is to order the edges from the longest one to the shortest one and to use a tie-breaking rule for edges of the same length. Specifically, we define the consistent bisection edge of a simplex as the longest edge with the lowest global index. A shared edge between two simplices may have a different order of vertices, which can induce different results when computing the edge length from different elements. To avoid these discrepancies, we first order the edge vertices according to the vertex ordering. Then, we compute the length of the edge using the ordered vertices. To define a strict total order of edges, we need a tie-breaking rule when two edges have the same length. To this end, we use a lexicographic order for the global edges in terms of the order of the vertices.
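A minimal sketch of this total order (our illustration; the coordinate layout and names are assumptions) sorts candidate edges by decreasing length, with lexicographic vertex ids as tie-breaker:

    # Hypothetical sketch: pick the longest edge of a simplex, breaking
    # length ties lexicographically on the sorted global vertex ids.
    from itertools import combinations

    def consistent_bisection_edge(simplex_vertices, coords):
        def key(edge):
            a, b = edge
            # Squared length is monotone in length, so it preserves the order.
            sq_len = sum((coords[a][i] - coords[b][i]) ** 2 for i in range(len(coords[a])))
            return (-sq_len, edge)
        edges = [tuple(sorted(e)) for e in combinations(simplex_vertices, 2)]
        return min(edges, key=key)

    coords = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0)}
    print(consistent_bisection_edge((0, 1, 2), coords))  # (1, 2): the hypotenuse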

2.5 Similarity Classes Two simplices $\sigma_1$ and $\sigma_2$ are similar if there exists an affine mapping $F(x) = \lambda A x + b$, with $\lambda \in \mathbb{R}$ and $A$ an orthonormal matrix, such that $F(\sigma_1) = \sigma_2$. Because similarity is an equivalence relation, all the simplices that are similar to each other form a similarity class. Although similar simplices may have different sizes, alignments, orientations, or positions, they have the same shape. Therefore, all the simplices in a similarity class have the same shape quality. In Fig. 2, we show the similarity classes obtained by refining a triangle. The initial triangle defines the first similarity class, Fig. 2a. In the first bisection step, Fig. 2b, we obtain two additional similarity classes. The next uniform refinement produces one additional similarity class and repeats the initial similarity class, see Fig. 2c. Finally, when bisecting the triangles of the fourth similarity class, we obtain the second and third similarity classes again, see Fig. 2d.



Fig. 2 Similarity classes of a triangle obtained by bisection: a initial triangle; b one uniform refinement; c two uniform refinements; and d further refinements do not increase similarity classes

Algorithm 2 Mark a k-simplex.
input: k-Simplex σ
output: BisectionTree t
1: function stageOneTree(σ)
2:   e = consistentBisectionEdge(σ)
3:   if dim σ = 1 then
4:     t = tree(node = e)
5:   else
6:     ([v1], [v2]) = e
7:     κ1 = oppositeFace(σ, [v1])
8:     κ2 = oppositeFace(σ, [v2])
9:     t1 = stageOneTree(κ1)
10:    t2 = stageOneTree(κ2)
11:    t = tree(node = e, left = t1, right = t2)
12:  end if
13:  return t
14: end function

3 Marked Bisection in General Dimensions In the following, we summarize an $n$-dimensional marked bisection algorithm [13]. First, we detail the co-dimensional marking process, which is based on the consistent bisection edge of a simplex. Then, we define the three stages of the bisection process.

3.1 Co-Dimensional Marking Process We detail the co-dimensional marking process for a simplex, in which the resulting mark is a bisection tree. The bisection tree is computed by traversing the sub-entities of the simplex in a recursive manner and selecting the consistent bisection edge of each sub-simplex, see Algorithm 2. The resulting bisection tree has height $n$, and the tree nodes at level $i$ correspond to the consistent bisection edges of sub-simplices of co-dimension $i$ (dimension $n - i$). Next, we detail the co-dimensional marking process for a single simplex, Algorithm 2. Since the co-dimensional marking process is the first step of the mesh refinement algorithm, the length of the multi-ids of all simplices is one. The input


Algorithm 3 Bisection of a marked simplex ρ.
input: MarkedSimplex ρ
output: MarkedSimplex ρ1, MarkedSimplex ρ2
1: function bisectSimplex(ρ)
2:   l = level(ρ)
3:   if l < n − 1 then
4:     τ = TreeSimplex(ρ)
5:     τ1, τ2 = bisectStageOne(τ)
6:     ρ1, ρ2 = MarkedSimplex(τ1, τ2)
7:   else if l = n − 1 then
8:     τ = TreeSimplex(ρ)
9:     μ1, μ2 = bisectCastToMaubach(τ)
10:    ρ1, ρ2 = MarkedSimplex(μ1, μ2)
11:  else
12:    μ = MaubachSimplex(ρ)
13:    μ1, μ2 = bisectMaubach(μ)
14:    ρ1, ρ2 = MarkedSimplex(μ1, μ2)
15:  end if
16:  return ρ1, ρ2
17: end function

of the function is a simplex $\sigma = ([v_0], \dots, [v_n])$ and the output is the corresponding bisection tree. First, we obtain the consistent bisection edge, e, of the simplex, see Line 2. If σ is an edge, this corresponds to the base case of the recursion and we return a tree with only the root node. Otherwise, we obtain the opposite faces of the vertices of the bisection edge, see Lines 7–8. Then, we recursively call the marking process algorithm on the faces κ1 and κ2 and obtain the corresponding trees t1 and t2, see Lines 9–10. Finally, we build the bisection tree t with the bisection edge as root node and the trees t1 and t2 as left and right branches, see Line 11.

3.2 First Bisection Stage: Tree Simplices In the first stage, we bisect the simplices using the bisection trees computed with the co-dimensional marking process. The first stage is used in the first $n-2$ bisection steps. Moreover, during the refinement process we store the new mid-vertices in the list $\bar\kappa$. Thus, in the second stage we are able to reorder the generated simplices and, in the third stage, use newest vertex bisection. To this end, Algorithms 4 and 5 detail the bisection process of a tree simplex in the first stage.


Algorithm 4 Bisect a marked tree-simplex.
input: TreeSimplex τ
output: TreeSimplex τ1, TreeSimplex τ2
1: function bisectStageOne(τ)
2:   (σ, κ̄, t, l) = τ
3:   e = root(t)                                   ▷ Bisection edge
4:   σ1, κ̄1, σ2, κ̄2 = bisectTreeSimplex(σ, κ̄, e, l)
5:   t1 = left(t); t2 = right(t)                   ▷ Bisect tree
6:   l1 = l + 1; l2 = l + 1                        ▷ Bisect level
7:   τ1 = (σ1, κ̄1, t1, l1)
8:   τ2 = (σ2, κ̄2, t2, l2)
9:   return τ1, τ2
10: end function

Algorithm 5 Bisect a tree-simplex.
input: Simplex σ, l-List κ̄, Edge e, Level l
output: Simplex σ1, (l+1)-List κ̄1, Simplex σ2, (l+1)-List κ̄2
1: function bisectTreeSimplex(σ, κ̄, e, l)
2:   (v0, v1, ..., vn) = σ
3:   ([v1,l−1, v2,l−1], ..., [v1,0, v2,0]) = κ̄
4:   ([v1,l], [v2,l]) = e
5:   [v1,l, v2,l] = midVertex([v1,l], [v2,l])
6:   (i1, i2) = simplexVertices(σ, e)
7:   σ1 = (v0, ..., vi2−1, [v1,l, v2,l], vi2+1, ..., vn)
8:   σ2 = (v0, ..., vi1−1, [v1,l, v2,l], vi1+1, ..., vn)
9:   κ̄1 = ([v1,l, v2,l], [v1,l−1, v2,l−1], ..., [v1,0, v2,0])
10:  κ̄2 = ([v1,l, v2,l], [v1,l−1, v2,l−1], ..., [v1,0, v2,0])
11:  return σ1, κ̄1, σ2, κ̄2
12: end function

3.3 Second Bisection Stage: Casting to Maubach We next detail the second stage of the bisection method, which is used when the descendant level of a tree-simplex is $l = n-1$. In this stage, after bisecting a tree simplex, we reorder the vertices of the bisected simplices in order to apply newest vertex bisection in the third stage. This process is detailed in Algorithms 6 and 7, in which a tree simplex τ is bisected into two Maubach simplices, μ1 and μ2.

3.4 Third Stage: Maubach's Bisection Finally, we detail the third stage of the bisection method, which is used when the descendant level of a Maubach simplex μ is $l \ge n$. In this stage, we use Maubach's algorithm to favor the conformity, finiteness, stability, and locality


Algorithm 6 Bisect to Maubach.
input: TreeSimplex τ
output: MaubachSimplex μ1, MaubachSimplex μ2
1: function bisectToMaubach(τ)
2:   (σ, κ̄, t, l) = τ
3:   e = root(t)
4:   σ1, κ̄1, σ2, κ̄2 = bisectTreeSimplex(σ, κ̄, e, l)
5:   σ̄1, σ̄2 = castToMaubach(e, κ̄1, κ̄2)
6:   d1 = n; d2 = n
7:   l1 = l + 1; l2 = l + 1
8:   μ1 = (σ̄1, d1, l1)
9:   μ2 = (σ̄2, d2, l2)
10:  return μ1, μ2
11: end function

Algorithm 7 Cast to Maubach.
input: Edge e, n-List κ̄1, n-List κ̄2
output: n-Simplex σ̄1, n-Simplex σ̄2
1: function castToMaubach(e, κ̄1, κ̄2)
2:   ([v1,n−1], [v2,n−1]) = e
3:   ([v1,n−1, v2,n−1], ..., [v1,0, v2,0]) = κ̄1
4:   ([v1,n−1, v2,n−1], ..., [v1,0, v2,0]) = κ̄2
5:   σ̄1 = ([v1,n−1], [v1,n−1, v2,n−1], ..., [v1,0, v2,0])
6:   σ̄2 = ([v2,n−1], [v1,n−1, v2,n−1], ..., [v1,0, v2,0])
7:   return σ̄1, σ̄2
8: end function

Algorithm 8 Adapted Maubach's algorithm.
input: MaubachSimplex μ
output: MaubachSimplex μ1, MaubachSimplex μ2
1: function bisectMaubach(μ)
2:   ((v0, v1, ..., vn), d, l) = μ
3:   w = midVertex(v0, vd)
4:   σ̄1 = (v0, ..., vd−1, w, vd+1, ..., vn)
5:   σ̄2 = (v1, ..., vd, w, vd+1, ..., vn)
6:   Set d′ = d − 1 if d > 1, and d′ = n if d = 1
7:   d1 = d′; d2 = d′
8:   l1 = l + 1; l2 = l + 1
9:   μ1 = (σ̄1, d1, l1)
10:  μ2 = (σ̄2, d2, l2)
11:  return μ1, μ2
12: end function

properties. We reinterpret Maubach’s algorithm using tagged simplices and multi-ids in Algorithm 8.
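For concreteness, here is a minimal runnable sketch of Algorithm 8 (our Python transcription; the paper's implementation is in Julia, and the tuple encoding is an assumption):

    # Hypothetical Python transcription of Algorithm 8 (tagged Maubach bisection).
    def mid_vertex(v0, v1):
        # Multi-ids are sorted tuples of vertex ids (Sect. 2.3).
        return tuple(sorted(v0 + v1))

    def bisect_maubach(simplex, d, l):
        n = len(simplex) - 1
        w = mid_vertex(simplex[0], simplex[d])
        s1 = simplex[:d] + (w,) + simplex[d + 1:]       # (v0,...,v_{d-1}, w, v_{d+1},...)
        s2 = simplex[1:d + 1] + (w,) + simplex[d + 1:]  # (v1,...,v_d, w, v_{d+1},...)
        d_new = d - 1 if d > 1 else n
        return (s1, d_new, l + 1), (s2, d_new, l + 1)

    tet = ((0,), (1,), (2,), (3,))
    print(bisect_maubach(tet, 3, 0))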


Fig. 3 The three possible bisection trees $t_i$ corresponding to the consistent bisection edges: a $([v_{i_1}], [v_{i_2}])$, b $([v_{i_0}], [v_{i_2}])$, and c $([v_{i_0}], [v_{i_1}])$

4 Estimation of the Number of Similarity Classes We estimate an upper bound on the number of similarity classes obtained with marked bisection, and we show that this number is sub-optimal; that is, it is greater than the number of similarity classes obtained using newest vertex bisection.

Lemma 1 (Newest vertex bisection for triangular meshes) The marked bisection is equivalent to Maubach's bisection for 2-simplices.

Proof The co-dimensional marking process generates three possible bisection trees for 2-simplices. Their root edges are $([v_1], [v_2])$, $([v_0], [v_2])$, and $([v_0], [v_1])$, respectively, and in each tree the two children of the root are the remaining two edges of the triangle, that is, the opposite faces of the root-edge vertices.

These trees are equivalent to the ones obtained when applying Maubach's method to the triangles $([v_1], [v_0], [v_2])$, $([v_0], [v_1], [v_2])$, and $([v_0], [v_2], [v_1])$, with Maubach tag $d = 2$ and bisection level $l = 0$. Therefore, for triangular meshes the presented marked bisection algorithm is equivalent to Maubach's algorithm. □

Proposition 1 (Tagging with $d = 2$ after step $n-2$) Let σ be a simplex marked with the co-dimensional marking process, and consider the mesh $\mathcal{Q}^\sigma_{n-2}$ obtained after $n-2$ uniform refinements with marked bisection. Then, we can map a tree-simplex τ of $\mathcal{Q}^\sigma_{n-2}$ to a Maubach simplex μ with descendant level $l = n-2$ and tag $d = 2$.

Proof Let $\sigma_0$ be a simplex marked with the co-dimensional marking process and $\mathcal{Q}^{\sigma_0}_{n-2}$ be the mesh obtained after $n-2$ uniform marked bisection refinements. Let $\tau \in \mathcal{Q}^{\sigma_0}_{n-2}$ be a tree-simplex of the form $\tau = (\sigma, \bar\kappa, t, l = n-2)$. After applying $n-2$ uniform refinements with marked bisection, we know that σ is composed of 3 original vertices and $n-2$ multi-vertices. That is,
$$\sigma = \{[v_{i_0}], [v_{i_1}], [v_{i_2}], [v_{1,n-3}, v_{2,n-3}], \dots, [v_{1,0}, v_{2,0}]\}.$$
Since the bisection tree t of σ is composed of the vertices $[v_{i_0}]$, $[v_{i_1}]$, and $[v_{i_2}]$, which define a triangle, its bisection tree is equivalent to the bisection tree of a triangle. By Lemma 1, t is equivalent to the bisection tree of a tagged triangle. Thus, t is one of the bisection trees depicted in Fig. 3.


If we map the tree-simplex τ corresponding to the simplex σ to a Maubach simplex $\mu = (\bar\sigma, l = n-2, d = 2)$, there are three possible simplices $\bar\sigma$, illustrated in Eq. (1):
$$([v_{i_1}], [v_{i_0}], [v_{i_2}], [v_{1,n-3}, v_{2,n-3}], \dots, [v_{1,0}, v_{2,0}]), \qquad (1a)$$
$$([v_{i_0}], [v_{i_1}], [v_{i_2}], [v_{1,n-3}, v_{2,n-3}], \dots, [v_{1,0}, v_{2,0}]), \qquad (1b)$$
$$([v_{i_0}], [v_{i_2}], [v_{i_1}], [v_{1,n-3}, v_{2,n-3}], \dots, [v_{1,0}, v_{2,0}]). \qquad (1c)$$

We recall that we sorted the vertices $[v_{i_0}]$, $[v_{i_1}]$, and $[v_{i_2}]$ according to the tagged triangles of the proof of Lemma 1. Thus, the simplices of Eqs. (1a)–(1c) correspond to the bisection trees depicted in Fig. 3a–c. Then, it only remains to check that, if we apply two uniform tagged-bisection steps to any simplex of Eq. (1), the obtained Maubach simplices are equal to the Maubach simplices obtained after $n$ uniform refinements with marked bisection. For the sake of simplicity, we only perform the reasoning for the simplex in Eq. (1a). Thus, let $\mu = (\bar\sigma, l = n-2, d = 2)$ be a Maubach simplex, where $\bar\sigma$ is defined in Eq. (1a). Performing the first Maubach bisection step, we obtain two children $\mu_1 = (\bar\sigma_1, l = n-1, d = 1)$ and $\mu_2 = (\bar\sigma_2, l = n-1, d = 1)$, where
$$\bar\sigma_1 = ([v_{i_1}], [v_{i_0}], [v_{i_1}, v_{i_2}], [v_{1,n-3}, v_{2,n-3}], [v_{1,n-4}, v_{2,n-4}], \dots, [v_{1,0}, v_{2,0}]),$$
$$\bar\sigma_2 = ([v_{i_0}], [v_{i_2}], [v_{i_1}, v_{i_2}], [v_{1,n-3}, v_{2,n-3}], [v_{1,n-4}, v_{2,n-4}], \dots, [v_{1,0}, v_{2,0}]).$$

According to Maubach’s algorithm, the bisection edge of level .l = n − 2 is ([vi1 ], [vi2 ]). This bisection edge is the same as the consistent bisection edge according to the bisection tree in Fig. 3a. Therefore, we have that.[vi1 , vi2 ] = [v1,n−2 , v2,n−2 ] and that this tagged bisection step is the same as the one performed using marked bisection. We substitute the new multivertex in σ¯ 1 and σ¯ 2 to obtain σ¯ = ([vi1 ], [vi0 ], [v1,n−2 , v2,n−2 ], [v1,n−3 , v2,n−3 ], . . . , [v1,0 , v2,0 ]),

. 1

σ¯ = ([vi0 ], [vi2 ], [v1,n−2 , v2,n−2 ], [v1,n−3 , v2,n−3 ], . . . , [v1,0 , v2,0 ]).

. 2

Again, for the sake of simplicity, we only perform a tagged bisection on the simplex μ1, since the same argument can be applied to the simplex μ2. The result of Maubach's bisection on μ1 are the simplices $\mu_{1,1} = (\bar\sigma_{1,1}, l = n, d = n)$ and $\mu_{1,2} = (\bar\sigma_{1,2}, l = n, d = n)$, where
$$\bar\sigma_{1,1} = ([v_{i_1}], [v_{i_0}, v_{i_1}], [v_{1,n-2}, v_{2,n-2}], \dots, [v_{1,0}, v_{2,0}]),$$
$$\bar\sigma_{1,2} = ([v_{i_0}], [v_{i_0}, v_{i_1}], [v_{1,n-2}, v_{2,n-2}], \dots, [v_{1,0}, v_{2,0}]).$$
According to Maubach's tagged bisection, the bisection edge at level $l = n-1$ is $([v_{i_0}], [v_{i_1}])$. Again, this bisection edge is the same as the consistent bisection


edge according to the bisection tree in Fig. 3a. Therefore, we have that $[v_{i_0}, v_{i_1}] = [v_{1,n-1}, v_{2,n-1}]$. Analogously, the bisection edge at level $l = n-1$ is $([v_{i_0}], [v_{i_1}])$. Therefore, by substituting the new multi-vertex in both children, we obtain
$$\bar\sigma_{1,1} = ([v_{i_1}], [v_{1,n-1}, v_{2,n-1}], [v_{1,n-2}, v_{2,n-2}], \dots, [v_{1,0}, v_{2,0}]),$$
$$\bar\sigma_{1,2} = ([v_{i_0}], [v_{1,n-1}, v_{2,n-1}], [v_{1,n-2}, v_{2,n-2}], \dots, [v_{1,0}, v_{2,0}]).$$

After performing the mapping-to-Maubach process at the end of the second stage, we obtain that $[v_{i_0}] = [v_{1,n-1}]$ and $[v_{i_1}] = [v_{2,n-1}]$. Therefore, the obtained simplices are the same as the ones obtained after $n$ uniform refinements with marked bisection. That is, $\bar\sigma_{1,1}$ and $\bar\sigma_{1,2}$ are equal to the simplices obtained at Lines 5 and 6 of Algorithm 7, respectively. The same argument can be applied when bisecting the simplex μ2, and also to the rest of the simplices in Eq. (1). Therefore, we have proved that the simplices obtained at level $l = n-2$ can be mapped to Maubach simplices of tag $d = 2$. □

Now, we can state the theorem for the upper bound $S_n$ on the number of similarity classes generated by the marked bisection algorithm. To do so, we consider uniform refinements in order to generate the maximum number of simplices per iteration. Thus, let $\mathcal{Q}^\sigma_0 = \sigma$ and
$$\mathcal{Q}^\sigma_k = \mathrm{bisectSimplices}(\mathcal{Q}^\sigma_{k-1}, \mathcal{Q}^\sigma_{k-1})$$
be the mesh obtained after performing $k$ uniform refinements, a mesh $\mathcal{Q}^\sigma_i$ that is composed of $\#(\mathcal{Q}^\sigma_i) = 2^i$ simplices. Considering all the meshes $\mathcal{Q}^\sigma_0, \dots, \mathcal{Q}^\sigma_k$, we have at most
$$\sum_{i=0}^{k} \#(\mathcal{Q}^\sigma_i) = \sum_{i=0}^{k} 2^i = 2^{k+1} - 1 \qquad (2)$$
different simplices.

Theorem 1 (Number of similarity classes for marked bisection) Let σ be a simplex marked with the co-dimensional marking process. Assume that from iteration $k$ the bisection process is equivalent to Maubach's bisection. Then, the number of similarity classes generated by the marked bisection method is at most
$$S_n = (2^k - 1) + 2^k M_n,$$
where $M_n = n \, n! \, 2^{n-2}$ is the maximum number of similarity classes of newest vertex bisection.

Proof Let σ be a simplex marked with the co-dimensional marking process. Consider $k$ uniform refinements with marked bisection such that further refinements of marked bisection are equivalent to Maubach's bisection. By Eq. (2), the number of similarity


classes generated from iteration 0 to $k-1$ is at most $2^k - 1$. Since the number of simplices of $\mathcal{Q}^\sigma_k$ is $2^k$, the number of similarity classes generated using Maubach's algorithm is $2^k M_n$, where $M_n$ is a bound on the number of similarity classes of Maubach's algorithm, see Theorem 4.5 of [11]. Finally, summing the two values, we obtain that the number of similarity classes is at most $S_n = (2^k - 1) + 2^k M_n$, as we wanted to see. □

Corollary 1 In the presented marked bisection, $k$ is at most $n-2$, and the number of similarity classes is at most
$$S_n = (2^{n-2} - 1) + 2^{n-2} M_n = (2^{n-2} - 1) + 2^{n-2} n \, n! \, 2^{n-2}.$$

Proof By Proposition 1, at iteration $n-2$ we can map all the simplices of $\mathcal{Q}^\sigma_{n-2}$ to Maubach simplices with descendant level $l = n-2$ and tag $d = 2$. Therefore, by Theorem 1, the number of similarity classes generated by the marked bisection algorithm is at most $S_n = (2^{n-2} - 1) + 2^{n-2} M_n$. □

As a consequence of Corollary 1, the additional number of similarity classes obtained with marked bisection may grow exponentially with the dimension. Therefore, marked bisection is only suitable for lower dimensions, where the additional number of similarity classes is also small.

5 Number of Uniform Refinements to Obtain All the Similarity Classes To understand the cyclic structure of the similarity classes, we want to calculate the minimum number of uniform refinements required to generate all the similarity classes with the proposed marked bisection. To this end, we first compute the number of uniform refinements needed to obtain all similarity classes with newest vertex bisection. Newest vertex bisection generates at most $M_n = n \, n! \, 2^{n-2}$ similarity classes, see Theorem 4.5 of [11]. We remark that this number is an upper bound, and therefore, for some simplices we can obtain fewer similarity classes than $M_n$. Thus, in the case that the method generates $M_n$ similarity classes, it needs to refine uniformly at least $K_n$ times to generate all of them, where $K_n$ satisfies
$$2^{K_n + 1} - 1 \ge M_n = n \, n! \, 2^{n-2}.$$
Thus, $K_n$ has to fulfill
$$2^{K_n + 1} \ge 1 + n \, n! \, 2^{n-2}.$$
Applying logarithms on both sides, we obtain
$$K_n \ge \lceil \log_2(1 + n \, n! \, 2^{n-2}) \rceil - 1.$$


Theorem 2 (Number of uniform refinements to obtain all similarity classes) Let σ be a simplex marked with the co-dimensional marking process. Assume that from iteration $k$ the bisection process is equivalent to Maubach's bisection. Then, the minimum number of uniform refinements to obtain all the similarity classes in marked bisection is $I_n = k + K_n$.

Proof The first $k$ uniform refinements are performed using marked bisection. Then, the following refinements are performed using newest vertex bisection. Thus, from refinement $k+1$ onward, all the simplices are bisected using newest vertex bisection. To generate all the similarity classes of these simplices, we need at least $K_n$ uniform refinements. Therefore, to generate all the similarity classes of the initial simplex, we need at least $I_n = k + K_n$ uniform refinements. □

Corollary 2 In the presented marked bisection, $k$ is at most $n-2$, and therefore the minimum number of uniform refinements to obtain all the similarity classes is $I_n = n - 2 + K_n$.

Proof By Proposition 1, at bisection level $n-2$ we can map the obtained simplices to Maubach simplices with tag $d = 2$. Therefore, by Theorem 2, and using that $k$ is at most $n-2$, the minimum number of uniform refinements to obtain all the similarity classes is $I_n = n - 2 + K_n$. □

As a consequence of Corollary 2, we see that to obtain all the similarity classes, we first need to bisect the simplices until the bisection process is driven by newest vertex bisection. Then, we need to perform the required number of iterations to obtain all the similarity classes of newest vertex bisection. In the case of the presented marked bisection, the first stages are performed in the first $n-2$ iterations.
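As a quick check of these formulas (our sketch, not the authors' Julia code), the bounds can be tabulated directly:

    # Reproduce the bounds: M_n = n*n!*2^(n-2) for newest vertex bisection,
    # S_n = (2^(n-2)-1) + 2^(n-2)*M_n for marked bisection, and the
    # refinement counts K_n and I_n = (n-2) + K_n.
    from math import factorial, log2, ceil

    for n in range(2, 6):
        M = n * factorial(n) * 2**(n - 2)
        S = (2**(n - 2) - 1) + 2**(n - 2) * M
        K = ceil(log2(1 + M)) - 1
        I = (n - 2) + K
        print(f"n={n}: M_n={M}, S_n={S}, K_n={K}, I_n={I}")
    # n=2: M_n=4,    S_n=4,     K_n=2,  I_n=2
    # n=3: M_n=36,   S_n=73,    K_n=5,  I_n=6
    # n=4: M_n=384,  S_n=1539,  K_n=8,  I_n=10
    # n=5: M_n=4800, S_n=38407, K_n=12, I_n=15

These values match the columns $S_n$, $M_n$, $K_n$, and $I_n$ reported in Tables 1 and 2 below.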

6 Examples We present an example in which we compute the number of similarity classes of different simplices and compare the obtained number with our upper bound. We have computed the shape quality of the mesh using the expression
$$\frac{n \, \det(S)^{2/n}}{\operatorname{tr}(S^t S)},$$
where $S$ is the Jacobian of the affine mapping between the ideal equilateral simplex and the physical simplex [16, 17]. All the results have been obtained on a MacBook Pro with one dual-core Intel Core i5 CPU, with a clock frequency of 2.7 GHz, and a total memory of 16 GBytes.
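A minimal sketch of this quality measure (our illustration; the function names and vertex layout are assumptions, not the authors' code):

    # Shape quality n*det(S)^(2/n)/tr(S^t S), where S is the Jacobian of the
    # affine map from the ideal equilateral simplex to the physical one.
    import numpy as np

    def shape_quality(physical, ideal):
        n = len(physical) - 1
        A = np.column_stack([np.subtract(p, physical[0]) for p in physical[1:]])
        W = np.column_stack([np.subtract(p, ideal[0]) for p in ideal[1:]])
        S = A @ np.linalg.inv(W)
        return n * np.linalg.det(S) ** (2 / n) / np.trace(S.T @ S)

    ideal_tri = [(0, 0), (1, 0), (0.5, 3 ** 0.5 / 2)]
    print(shape_quality(ideal_tri, ideal_tri))                 # 1.0 for the ideal simplex
    print(shape_quality([(0, 0), (1, 0), (0, 1)], ideal_tri))  # ~0.866 for a right triangle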


Fig. 4 Three-dimensional simplices considered in Example 1: a equilateral; b Cartesian; c Kuhn; and d irregular

As a proof of concept, a mesh refiner has been fully developed in Julia 1.4. The Julia prototype code is sequential (one execution thread), corresponding to the implementation of the method summarized in this work.

6.1 Number of Similarity Classes In this example, we show the number of similarity classes obtained with the marked bisection algorithm for different simplices and dimensions. To this end, we uniformly refine an equilateral simplex, a Cartesian simplex, a Kuhn simplex, and an irregular simplex for dimensions two, three, four, and five. The equilateral simplex has all its edges of the same length; the Cartesian simplex has vertices determined by the origin and the canonical vectors; the Kuhn simplex is one of the simplices obtained after dividing a hypercube with the Coxeter-Freudenthal-Kuhn algorithm; and the irregular simplex has all of its edges of different lengths, see Fig. 4. We numerically predict the number of similarity classes using the quality of the simplices as a proxy. Specifically, we assign each obtained shape quality to a similarity class. With this idea, we uniformly refine the initial simplices and their descendants until the bisection process does not generate more similarity classes. Table 1 shows the number of obtained similarity classes for each case. The equilateral and Cartesian simplices have fewer similarity classes than $S_n$. That is because they have geometric symmetries, and thus marked bisection generates fewer similarity classes than $S_n$. On the other hand, the Kuhn simplex is the one that generates the minimum number of similarity classes.

Table 1 Number of generated similarity classes by marked bisection

Dimension   Equilateral   Cartesian   Kuhn   Irregular   S_n     M_n    S_n/M_n
2           3             1           1      4           4       4      1.00
3           17            17          3      69          73      36     2.02
4           52            45          4      1119        1539    384    4.01
5           185           301         5      32979       38407   4800   8.00


Table 2 Number of uniform refinements to generate all the similarity classes

Dimension   Equilateral   Cartesian   Kuhn   Irregular   I_n   K_n   I_n − K_n
2           2             2           2      2           2     2     0
3           7             7           2      7           6     5     1
4           10            10          3      13          10    8     2
5           15            18          4      18          15    12    3

That is because marked bisection achieves its optimal number of similarity classes with structured meshes, which are fully composed of Kuhn simplices. Finally, the irregular simplex generates the maximum number of similarity classes due to its lack of symmetry. That is, it generates the highest number of similarity classes in comparison with the other simplices, but it does not achieve the maximum number $S_n$. As the dimension increases, the number of similarity classes also increases in all the cases. Table 2 shows the number of uniform refinements performed to generate the similarity classes of Table 1, and a lower bound on the number of uniform refinements to generate $S_n$ similarity classes. We see that the equilateral, the Cartesian, and the irregular simplices equal or exceed the minimum number of uniform refinements to generate the similarity classes of Table 1. Moreover, the number of uniform refinements of the equilateral and Cartesian simplices is smaller than that of the irregular simplex for 4D and 5D. For the Kuhn simplex, we can see that the number of uniform refinements to achieve the generated number of similarity classes is $n$, except in the 2D case. This is so because the initial simplex is the unique similarity class. Generally, when the number of similarity classes becomes larger, we need to perform more uniform refinements to generate them. In all the cases, the estimated and the obtained numbers of similarity classes are similar. Note that the predicted number of similarity classes for marked bisection is larger than the number of similarity classes of newest vertex bisection. Moreover, the ratio of similarity classes between marked bisection and newest vertex bisection grows exponentially with the dimension. While we obtain the same number of similarity classes for dimension two, there is a factor of eight for dimension five. The difference between the number of bisection steps to obtain all the similarity classes in marked bisection and newest vertex bisection grows linearly with the dimension.

7 Concluding Remarks To measure the stability of marked bisection, we have estimated in general dimensions an upper bound on the number of obtained similarity classes. Moreover, to understand the cyclic structure of similarity, we have estimated the number of uniform refinements required to generate all the similarity classes. These estimates facilitate comparing marked bisection with newest vertex bisection.


This comparison is key because newest vertex bisection is the optimal reference for the number of generated similarity classes. First, we compare the ratio of the number of similarity classes between marked bisection and newest vertex bisection. This ratio grows exponentially with the dimension, as $O(2^{n-2})$. Second, we compare the difference between the corresponding numbers of uniform refinements required to generate all the classes. This difference grows linearly with the dimension, as $n-2$. According to these scalings, we conclude that the lower the dimension, the more suitable marked bisection is for local refinement of unstructured meshes. We also conclude that marked bisection is still the right choice for unstructured meshes. The scalings seem to favor newest vertex bisection, but it has not yet been guaranteed for unstructured meshes. Although the two estimates are not tight, our results show that they match the magnitudes and scalings with the dimension. To tighten the bounds, we only need to improve the similarity bound, because the bound on the number of iterations depends on the former. Accordingly, we have planned to improve the similarity bound by accounting for the possible simplicial symmetries that may arise during the first refinements of marked bisection. In perspective, the derived scalings further motivate the need to guarantee newest vertex bisection for local refinement on unstructured meshes. In high-dimensional applications, the reduced number of similarity classes of newest vertex bisection will lead to higher mesh quality and quicker starts of the similarity cycles.

8 Algorithms In this appendix, we declare the necessary algorithms to implement a marked bisection method, as seen in Sect. 2.2. Using the conformingly-marked mesh, the local refinement procedure, Algorithm 9, first refines a set of simplices, then calls a recursive refine-to-conformity strategy, and finally renumbers the mesh. The refine-toconformity strategy, Algorithm 10, terminates when successive bisection leads to a conformal mesh. Both algorithms use marked bisection to refine a set of elements, see Algorithm 11.


Algorithm 9 Local refinement of a marked mesh.
input: ConformalMarkedMesh T and SimplicesSet S ⊂ T
output: ConformalMarkedMesh T′
1: function localRefine(T, S)
2:   T̄ = bisectSimplices(T, S)
3:   T′ = refineToConformity(T̄)
4:   T0 = renumberMesh(T′)
5:   return T0
6: end function

Algorithm 10 Refine-to-conformity a marked mesh.
input: MarkedMesh T
output: MarkedMesh T′ without hanging vertices
1: function refineToConformity(T)
2:   S = getNonConformalSimplices(T)
3:   if S ≠ ∅ then
4:     T̄ = bisectSimplices(T, S)
5:     T′ = refineToConformity(T̄)
6:   else
7:     T′ = T
8:   end if
9:   return T′
10: end function

Algorithm 11 Bisect a set of simplices.
input: MarkedMesh T, SimplicesSet S
output: MarkedMesh T1
1: function bisectSimplices(T, S)
2:   T1 = ∅
3:   for ρ ∈ T do
4:     if ρ ∈ S then
5:       ρ1, ρ2 = bisectSimplex(ρ)
6:       T1 = T1 ∪ ρ1
7:       T1 = T1 ∪ ρ2
8:     else
9:       T1 = T1 ∪ ρ
10:    end if
11:  end for
12:  return T1
13: end function

References
1. María-Cecilia Rivara. Algorithms for refining triangular grids suitable for adaptive and multigrid techniques. International Journal for Numerical Methods in Engineering, 20(4):745–756, 1984.
2. María-Cecilia Rivara. Local modification of meshes for adaptive and/or multigrid finite-element methods. Journal of Computational and Applied Mathematics, 36(1):79–89, 1991. Special Issue on Adaptive Methods.


3. Ángel Plaza and Graham F. Carey. Local refinement of simplicial grids based on the skeleton. Applied Numerical Mathematics, 32(2):195–218, 2000.
4. Ángel Plaza and María-Cecilia Rivara. Mesh refinement based on the 8-tetrahedra longest-edge partition. In Proceedings of the 12th International Meshing Roundtable, pages 67–78, 2003.
5. William F. Mitchell. Adaptive refinement for arbitrary finite-element spaces with hierarchical bases. Journal of Computational and Applied Mathematics, 36(1):65–78, 1991. Special Issue on Adaptive Methods.
6. Igor Kossaczký. A recursive approach to local mesh refinement in two and three dimensions. Journal of Computational and Applied Mathematics, 55(3):275–288, 1994.
7. Joseph M. Maubach. Local bisection refinement for n-simplicial grids generated by reflection. SIAM Journal on Scientific Computing, 16(1):210–227, 1995.
8. Joseph M. Maubach. The efficient location of neighbors for locally refined n-simplicial grids. 5th International Meshing Roundtable, 4(6):137–153, 1996.
9. Christoph T. Traxler. An algorithm for adaptive mesh refinement in n dimensions. Computing, 59(2):115–137, 1997.
10. Guillem Belda-Ferrín, Eloi Ruiz-Gironés, and Xevi Roca. Bisecting with optimal similarity bound on 3D unstructured conformal meshes. In 2022 SIAM International Meshing Roundtable (IMR), Virtual Conference. Zenodo, 2021.
11. Douglas N. Arnold, Arup Mukherjee, and Luc Pouly. Locally adapted tetrahedral meshes using bisection. SIAM Journal on Scientific Computing, 22(2):431–448, 2000.
12. Guillem Belda-Ferrín, Abel Gargallo-Peiró, and Xevi Roca. Local Bisection for Conformal Refinement of Unstructured 4D Simplicial Meshes. In 27th International Meshing Roundtable, volume 127, pages 229–247. Springer International Publishing, 2019.
13. Guillem Belda-Ferrín, Eloi Ruiz-Gironés, Abel Gargallo-Peiró, and Xevi Roca. Conformal marked bisection for local refinement of n-dimensional unstructured simplicial meshes. Computer-Aided Design, page 103419, 2022.
14. Claudio Gutierrez, Flavio Gutierrez, and Maria-Cecilia Rivara. Complexity of the bisection method. Theoretical Computer Science, 382(2):131–138, 2007.
15. Guillermo Aparicio, Leocadio G. Casado, Eligius M. T. Hendrix, Boglárka G-Tóth, and Inmaculada Garcia. On the minimum number of simplex shapes in longest edge bisection refinement of a regular n-simplex. Informatica, 26(1):17–32, 2015.
16. Patrick M. Knupp. Algebraic mesh quality metrics. SIAM Journal on Scientific Computing, 23(1):193–218, 2001.
17. Anwei Liu and Barry Joe. On the shape of tetrahedra from bisection. Mathematics of Computation, 63(207):141–154, 1994.

Cross Field Mesh Generation

Quadrilateral Mesh of Non-simply Connected Domain and Non-planar Surfaces From a Given Cross-Field

Kokou M. Dotse, Vincent Mouysset, and Sébastien Pernet

1 Introduction and Related Work

Several numerical schemes used for numerical simulation rely on quadrilateral or hexahedral meshes, as these offer numerous advantages. In mechanics, quadrilaterals are interesting because results similar to those of quadratic simplices can be obtained with modified first-order quadrilaterals. Similarly, for the propagation of electromagnetic waves, quadrilaterals are very efficient due to their naturally tensorial structure [1]. In fluid mechanics, quadrilaterals provide a simple way to deal with anisotropic phenomena within boundary layers [1]. However, while the generation of simplicial meshes (triangles, tetrahedra) has long been well developed, the generation of quadrilateral or hexahedral meshes remains more problematic. Indeed, it is difficult to simultaneously respect every constraint required of a good hexahedral mesh: alignment of elements with the boundary of the domain, element size, and mesh quality (see [2]).

Among the methods developed to create quadrilateral meshes, such as tri-to-quad conversion [3], SQuad [4], Blossom-Quad [5], the Cartesian grid method [6], the advancing-front (also called paving) method [7, 8], and medial-axis based decomposition methods [9, 10], an interesting one is based on cross-field analysis [2]. The idea behind cross-fields is to simulate the orientation properties of quadrilaterals in order to derive a proper partitioning of the domain into four-sided subdomains.


Cross-fields were first used in computer graphics applications to control surface mappings for non-photorealistic rendering, texture synthesis and remapping, and global parameterization [11–13]. A cross-field is a field structure that binds each point of the domain to "a cross", i.e. a given vector and its rotations by an angle of kπ/2, where k ∈ [[0, 3]]. In [2], the authors generate such cross-fields by propagating, from the boundary, crosses aligned with the outward normal of the domain (see details in [2]). In [14], the authors address the construction of quad meshes on surface patches, and in [15], quad-mesh generation on manifold surfaces is discussed. One goal is to build a smooth cross-field that is aligned with the domain boundary. In the literature, the approach often used consists in minimizing an energy that characterizes the variations of the cross-field [16]:

$$\inf_u E(u) = \frac{1}{2}\int_\Omega |\nabla u|^2\,dA + \frac{1}{4\epsilon^2}\int_\Omega \left(|u|^2 - 1\right)^2 dA. \qquad (1)$$

The solution field of (1) is used to guide the generation of the quadrilateral mesh. To do this, field lines are integrated from the singular points of the cross-field. These field lines, called separatrices, partition the domain into four-sided regions that are then filled with quadrilateral meshes (see Fig. 1). The resulting mesh thus depends on the location of the singular points (corners of subdomains) in the cross-field. Unfortunately, this distribution can sometimes lead to invalid or unintended partitionings. The first notably occurs on ultra-stretched domains. The second refers to partition shapes leading to cell sizes that are widely inhomogeneous (see Fig. 2). In other words, we control neither the appearance of singular points nor their distribution.

To bypass these problems, one should aim at controlling the kind and location of singular points in the cross-field. To do this, one can observe that a quadrilateral mesh generator can be built by analyzing how cross-fields are related to the Ginzburg-Landau energy, as studied in [16, 17]. Jezdimirović et al. [18] propose an algorithm based on a user-defined singularity model as input, possibly with high-valence singularities. Macq et al. [19] give a formulation of the Ginzburg-Landau energy allowing the imposition of internal singularities by substituting them with small holes drilled in the domain. The Ginzburg-Landau energy of the cross-field on the drilled domain is then computed by solving a linear Neumann problem. They also propose a reformulation of the Ginzburg-Landau energy to handle boundary singularities. However, the complex framework of the Ginzburg-Landau theory does not easily accommodate non-planar geometry. Moreover, additional problems arising from the usual industrial meshing context, such as handling piecewise materials, piecewise inhomogeneous boundary conditions, or non-analytic geometry, are not suitably addressed by these methods.

The idea we propose to develop in this paper is to consider the cross-field as an independent input for the meshing method, thus allowing us to look for a better candidate. On one hand, we can expect different singular point distributions (and


Fig. 1 Quad mesh from Ginzburg-Landau energy

Fig. 2 Inhomogeneous mesh

thus more homogeneous meshes), an easier partitioning on non-planar surfaces, and a tool to address non-simply connected domains. On the other hand, some properties will have to be formulated to ensure that the cross-field leads to a full quad mesh, and


Fig. 3 A quadrilateral mesh constructed from eigenmode solutions

operations must be performed on the cross-field to satisfy some boundary conditions arising from partitioning and from the handling of non-simply connected domains.

To illustrate the purpose of this article, we consider an eigenmode of the Laplacian as an example in Fig. 3. The cross-field is constructed from the quarter angle of the gradient of this eigenmode (the same construction can be applied to isolines). We observe that the extrema of an eigenmode of the Laplacian, and therefore of the corresponding cross-field, are equidistant from the boundary and from one another. This results in a more homogeneous mesh with our method. Another example is given by Viertel et al. in [16], and in most Ginzburg-Landau based quad mesh papers, as the first step of the process. Cross-fields are obtained from representation fields given in the complex plane by:

$$z \in \mathbb{C} \;\longmapsto\; \prod_i \left(\frac{z - a_i}{|z - a_i|}\right)^{d_i} \in \mathbb{C}, \qquad (2)$$

where each a_i ∈ ℂ is a singular point of the field and d_i/4 its associated index. The same construction as before is performed, and the resulting cross-field and final mesh are depicted in Fig. 4. It can be noted that the fields in Eq. (2) are not straightforwardly used in [16], but a correction process is first applied to enforce a


Fig. 4 A quadrilateral mesh constructed from the cross-field of Eq. (2). Singular points a_i are plotted in red

prescribed alignment. Our method will thus propose an alignment phase that works similarly to this step.

The remainder of this paper is organized as follows. First, in Sect. 2, we introduce some mathematical notions used to express the constraints required on a field of crosses for the generation of quadrilateral meshes, discuss the alignment of a given field of crosses with respect to the domain boundaries for planar domains, and address the notion of singular border points. We then extend the method to non-simply connected domains in Sect. 3. Finally, in Sect. 4, we discuss the case of non-planar surfaces, whose main difficulty is the absence of a global reference frame.
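Before moving on, a small illustration of Eq. (2) may help. The Python sketch below evaluates the representation field at a point and extracts a cross angle by dividing the field's phase by four, following the cross convention used in this paper. The singular points a_i and integers d_i are arbitrary example inputs; this is a sketch of the construction, not the authors' implementation.

    import cmath

    def representation_field(z, singularities):
        """Evaluate the field of Eq. (2) at z for [(a_i, d_i), ...] pairs."""
        w = 1 + 0j
        for a, d in singularities:
            w *= ((z - a) / abs(z - a)) ** d
        return w

    def cross_angle(z, singularities):
        """Principal angle of the cross represented by the field (phase / 4)."""
        return cmath.phase(representation_field(z, singularities)) / 4.0

    # Example inputs (hypothetical): two singular points of index -1/4 each.
    sings = [(0.3 + 0.2j, -1), (-0.4 - 0.1j, -1)]
    print(cross_angle(1.0 + 1.0j, sings))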

2 Quadrilateral Mesh From a Cross-Field

For any given domain Ω, the idea is to find a subdivision of Ω into subdomains with four sides. Suppose that we can partition Ω using the field lines of a vector field defined on Ω. To establish regions, it is imperative to guarantee that the streamlines (defined in Sect. 2.1) can intersect each other. To this aim, we use a particular vector field called a cross-field. A cross-field is a map that associates four directions


orthogonal to each other to each point (see Sect. 2.1). By cutting Ω with streamlines whose origins are the singular points of the cross-field, we can create partitions that do not contain any singular points. Let D be a partition obtained in such a way. The only singular points of the field included in D then coincide with the corners of D. Applying the Poincaré-Hopf theorem (see [16]), it follows that D necessarily has four corners, i.e. it is a four-sided domain. Meshing a four-sided domain with quads is trivial: for instance, we can use transfinite interpolation (see [20]) to generate a regular mesh on a domain with four sides. Finally, we achieve a quad mesh of the whole domain (see Fig. 1).

As announced in the introduction, in this paper we would like the possibility to choose any cross-field and use it to generate a quadrilateral mesh. Contrary to the classical approach of directly generating a cross-field aligned with the edge of the domain, we would rather act on any cross-field provided on the domain. The advantage of such a choice is that the generated mesh can be aligned with the chosen field, thus making the mesh inherit the properties of the cross-field. Throughout this section, we will assume that Ω is a bounded, connected domain in ℝ² with a piecewise-smooth boundary.

2.1 Cross-Fields

Definition. A two-dimensional cross is defined by:

$$c(\theta) = \{(\cos(\theta + k\pi/2),\, \sin(\theta + k\pi/2)),\; k \in [[0, 3]]\} \qquad (3)$$

Let C = {c(θ), θ ∈ ℝ}. By associating to each point p of Ω an angle θ(p), we define the cross-field on Ω as a map u : p ∈ Ω ↦ c(θ(p)) ∈ C ∪ {0}, which may vanish at a finite number of points, called the singular points. Let c₀ be the cross formed by the x-axis and the y-axis of a local planar coordinate system. For a cross u(p) given at a point p ∈ Ω, its principal angle θ_u(p) is given by the minimal rotation between c₀ and u, i.e.:

$$\theta_u(p) = \min_\theta \{\theta \in \mathbb{R} \,/\, u = R(\theta)\,c_0\}, \qquad (4)$$

where R(θ) denotes the rotation of angle θ. Let γ(s), with s ∈ [0, 1], be a C¹ curve parametrized on the domain Ω, and let ∂γ(s) denote its derivative. We say that γ is a streamline of u if, for all s ∈ [0, 1], there exists k ∈ [[1, 4]] such that the cross-product ∂γ(s) ∧ u^k(γ(s)) = 0. Here, u^k(γ(s)), k ∈ [[1, 4]], refers to the branches of the cross-field u at point γ(s). A separatrix of a cross-field is a streamline that begins or ends at a singularity.
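These definitions translate directly into code. The sketch below, assuming a cross is sampled by its principal angle θ_u, enumerates the four branches u^k of c(θ) and tests the streamline condition ∂γ(s) ∧ u^k(γ(s)) = 0 up to a numerical tolerance; it only illustrates the definitions, not a streamline-integration scheme.

    import math

    def branches(theta):
        """The four unit branches u^k of the cross c(theta), k = 0..3."""
        return [(math.cos(theta + k * math.pi / 2),
                 math.sin(theta + k * math.pi / 2)) for k in range(4)]

    def is_streamline_direction(tangent, theta, tol=1e-9):
        """True if `tangent` is parallel to some branch of c(theta), i.e.
        the 2D cross-product tangent ∧ u^k vanishes for some k."""
        tx, ty = tangent
        return any(abs(tx * by - ty * bx) <= tol for bx, by in branches(theta))

    # A tangent aligned with theta itself is trivially a streamline direction.
    print(is_streamline_direction((math.cos(0.7), math.sin(0.7)), 0.7))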


2.2 Index

A number called the index, denoted id_u, is associated with each singular point. It quantifies the number of times the field turns onto itself around the point p. At any point p ∈ Ω\∂Ω, it is evaluated as (see [17]):

$$\mathrm{id}_u(p) = \frac{1}{2\pi}\int_\gamma d\theta_u, \qquad (5)$$

where γ : [0, 1] → Ω is a simple closed curve around p containing no other singular point. Note that if p is not singular, id_u(p) = 0. Applying Eq. (5) with respect to ∂Ω, we can define the following global quantity on u (see Sect. 2.3):

$$\deg(u, \partial\Omega) = \frac{1}{2\pi}\int_{\partial\Omega} d\theta_u, \qquad (6)$$

which will be used below to define a general constraint on the initial cross-field. In the literature, deg(u, ∂Ω) is commonly referred to as the Brouwer degree. The singular points are later chosen to be the origins of the streamlines used to partition the domain. Practically, as done for instance in [2], we build the separatrices from every singular point and apply a merging algorithm to avoid doubling of lines due to numerical errors. As exposed previously, singular points will be used as starting points for separatrices. Hence, to define how many separatrices start at each point p, we introduce the valence of p, denoted V(p). It is directly related to the index of the point in the cross-field and is given by:

$$V(p) = 4 - 4\,\mathrm{id}_u(p). \qquad (7)$$
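In a discrete setting, the integral of Eq. (5) is typically approximated by summing, along a closed loop of sample points, the differences of θ_u wrapped back to (−π/4, π/4] (a cross angle is defined modulo π/2). The sketch below is a minimal version of that idea and also returns the valence of Eq. (7); the sampling loop is assumed to enclose no other singular point.

    import math

    def discrete_index(thetas):
        """Approximate id_u from cross angles sampled along a closed loop.

        Each consecutive difference is wrapped to (-pi/4, pi/4] because a
        cross angle is defined modulo pi/2; the total turning divided by
        2*pi gives the index of Eq. (5).
        """
        total = 0.0
        n = len(thetas)
        for i in range(n):
            d = thetas[(i + 1) % n] - thetas[i]
            while d > math.pi / 4:        # wrap to the cross symmetry interval
                d -= math.pi / 2
            while d <= -math.pi / 4:
                d += math.pi / 2
            total += d
        return total / (2 * math.pi)

    def valence(index):
        """Number of separatrices emanating from a singular point, Eq. (7)."""
        return round(4 - 4 * index)

    # A loop around an index +1/4 singularity: theta turns by pi/2 overall.
    loop = [(math.pi / 2) * i / 16 for i in range(16)]
    idx = discrete_index(loop)
    print(idx, valence(idx))   # ~0.25 and 3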

2.3 Compatibility Constraint on the Cross-Field

The Poincaré-Hopf theorem allows us to relate a vector field to the topology of the domain. This theorem can be extended to cross-fields (see [21]). It states that, for a cross-field u defined on a domain and whose boundary crosses are aligned with the outgoing normal of the domain, we have the following relation:

$$\sum_{i=1}^{n} \mathrm{id}_u(p_i) = \chi(\Omega), \qquad (8)$$

where (p_i)_{i∈{1,…,n}} is the set of singular points of u and χ(Ω) = 2 − 2g − b (g being the genus of Ω and b its number of boundary components) is the Euler characteristic of Ω.


Taking into account both boundary and interior singular points, formula (8) becomes:

$$\sum_{i=1}^{n_s} \mathrm{id}_u(s_i) + \sum_{i=1}^{n_b} \mathrm{id}_u(b_i) = \chi(\Omega), \qquad (9)$$

where (s_i)_{i∈{1,…,n_s}} is the set of interior singular points of u and (b_i)_{i∈{1,…,n_b}} is the set of boundary singular points of u. From formula (6), it follows that:

$$\deg(u, \partial\Omega) = \sum_{i=1}^{n_s} \mathrm{id}_u(s_i). \qquad (10)$$

Finally, we deduce a compatibility constraint that our cross-field must respect on Ω:

$$\deg(u, \partial\Omega) = \chi(\Omega) - \sum_{i=1}^{n_b} \mathrm{id}_u(b_i). \qquad (11)$$
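Constraint (11) is straightforward to verify programmatically once the topology of Ω and the boundary indices are known. The following sanity-check sketch uses χ(Ω) = 2 − 2g − b as above; all inputs are assumed given.

    def euler_characteristic(genus, n_boundaries):
        """chi(Omega) = 2 - 2g - b for a surface with boundary."""
        return 2 - 2 * genus - n_boundaries

    def satisfies_compatibility(deg_u_boundary, genus, n_boundaries,
                                boundary_indices):
        """Check constraint (11):
        deg(u, dOmega) = chi(Omega) - sum of boundary singular indices."""
        chi = euler_characteristic(genus, n_boundaries)
        return abs(deg_u_boundary - (chi - sum(boundary_indices))) < 1e-12

    # Planar disk (g = 0, b = 1) with two boundary corners of index 1/4:
    # the interior indices must sum to 1 - 1/2 = 1/2.
    print(satisfies_compatibility(0.5, 0, 1, [0.25, 0.25]))   # True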

2.4 Alignment of Cross-Field

From the example pictured in Fig. 3, we see that the proposed cross-field has the correct properties. Indeed, the sum of the indexes of the field is equal to 1 and the domain has two singular boundary points of index 1/4 each, so relation (11) is verified. However, when using a cross-field method (summarized at the beginning of Sect. 2) to partition the domain, some subdomains deviate from the typical quadrilateral shape, as shown in Fig. 5. This deviation results from the cross-field not being properly aligned with the boundaries of the domain, which violates the assumptions of the Poincaré-Hopf theorem. As a result, some partitions do not have four sides. To address this issue, we adjust the cross-field to align it with the boundaries of the domain.

Given a cross-field satisfying formula (11), the task is to align it with the boundaries of the domain without altering its initial properties. This can be achieved by finding a scalar field φ of rotation angles to apply to the initial cross-field. We thus denote the new cross-field:

$$v = R(\varphi)\,u, \qquad (12)$$

and we want to obtain v = N on ∂Ω, where R(φ) is given by:


Fig. 5 Invalid partitioning obtained from a rough cross-field extraction of the modal solution plotted in the first picture of Fig. 3

$$R(\varphi) = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}, \qquad (13)$$

and N is the cross-field associated with the outgoing normal. Our determination of φ is based on the approach of [16]. It entails continuously propagating the angle difference between u and N throughout Ω via the following system:

$$\begin{cases} \Delta\varphi = 0 & \text{in } \Omega,\\ \varphi(\gamma(t)) = \tilde\varphi(\gamma(t)) + \displaystyle\sum_{i=1}^{n_b} \delta\theta(b_i)\,\mathbf{1}_{\gamma([0,t])}(b_i), & \forall t \in [0, 1]. \end{cases} \qquad (14)$$

In this equation:
• γ : [0, 1] → ∂Ω is a parameterization of ∂Ω such that γ(0) is not a boundary singular point and γ(0) = γ(1),
• φ̃ = θ_N − θ_u; this quantity is calculated continuously along ∂Ω,
• δθ(b_i) quantifies the presence of the singular point b_i (covered in depth in Sect. 2.5),
• 1_{γ([0,t])} denotes the indicator function.

It can be proven that φ(γ(0)) = φ(γ(1)) and that id_v(p) = id_u(p) for all p ∈ Ω̄. Applying this method to the cross-field of Fig. 5, we achieve the partitioning illustrated in Fig. 6. An additional illustration of this procedure on a different domain can be seen in Fig. 7.
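As a concrete reading of system (14), the sketch below first builds the Dirichlet data by accumulating the jumps δθ(b_i) along the boundary parameterization on top of φ̃ = θ_N − θ_u, then solves Δφ = 0 by plain Jacobi averaging over a mesh graph. The graph representation and iteration count are assumptions of this sketch; any Laplace solver with Dirichlet data would do.

    def boundary_phi(phi_tilde, jumps):
        """Dirichlet data of (14): phi_tilde plus the jumps delta_theta(b_i)
        crossed so far along the boundary loop.

        phi_tilde : list of angle differences theta_N - theta_u per node.
        jumps     : list of the same length, zero except at singular nodes.
        """
        acc, out = 0.0, []
        for pt, j in zip(phi_tilde, jumps):
            acc += j
            out.append(pt + acc)
        return out

    def solve_laplace(neighbors, fixed, n_iter=2000):
        """Jacobi iteration for Laplace(phi) = 0 with Dirichlet values.

        neighbors : dict node -> list of adjacent nodes (assumed input).
        fixed     : dict node -> prescribed value on the boundary.
        """
        phi = {v: fixed.get(v, 0.0) for v in neighbors}
        for _ in range(n_iter):
            new = {}
            for v, nbrs in neighbors.items():
                if v in fixed:
                    new[v] = fixed[v]
                else:
                    new[v] = sum(phi[w] for w in nbrs) / len(nbrs)
            phi = new
        return phi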


Fig. 6 Cross-field of Fig. 5 after the alignment process (same picture as the second one in Fig. 3)

2.5 Boundary Singularities

For numerical simulations it is sometimes mandatory to delimit portions of the domain boundary with physical nodes. This is especially the case when applying piecewise boundary conditions. When these nodes do not coincide with the geometrical ones (such as corners), one way to include this information is to treat them in the cross-field as boundary singular points. The presence of a boundary singularity indicates a local rotation of the cross associated with the outgoing normal. The quantification of this rotation gives the value of the index associated with the singularity. In the literature, the proposed (corner angle, index) associations tend to minimize the rotation of the field on the boundary in order to keep the cross-field inside the domain as smooth as possible. The conventional distribution corresponds to the values in Table 1, with different tolerances around the given angles (see details in [19]).


Fig. 7 Top: unaligned cross-field; middle: rectified cross-field; bottom: quad mesh

Table 1 Usual distribution of angles of boundary singularities [19]

Angle:  π/4    π    3π/2
Index:  1/4    0    −1/4

By applying (5) to a boundary singular point b, we get the following formula:

$$\delta\theta_u(b) = 2\pi I(b) - \pi + \hat{b}, \qquad (15)$$

where δθ_u(b) denotes the effective prescribed angular rotation of the cross-field at b corresponding to a given index I(b), and b̂ is the measure of the boundary open angle at corner b. We illustrate in Figs. 8 and 9 the impact on the resulting meshes of several choices of arbitrary boundary singularities.
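Combining Table 1 with Eq. (15), one can assign an index to a boundary corner from its open angle and recover the prescribed rotation δθ_u(b). The tolerance below is an assumed value; the text only states that implementations use different tolerances around the tabulated angles (see [19]).

    import math

    def corner_index(open_angle, tol=math.pi / 8):
        """Index of a boundary corner per Table 1 (pi/4 -> 1/4, pi -> 0,
        3*pi/2 -> -1/4); None when no tabulated angle is within `tol`."""
        table = [(math.pi / 4, 0.25), (math.pi, 0.0), (3 * math.pi / 2, -0.25)]
        for angle, index in table:
            if abs(open_angle - angle) <= tol:
                return index
        return None

    def prescribed_rotation(index, open_angle):
        """Effective angular rotation at the corner, Eq. (15)."""
        return 2 * math.pi * index - math.pi + open_angle

    # A right-angle convex corner with index 1/4 needs no extra rotation:
    print(corner_index(math.pi / 4 + 0.1), prescribed_rotation(0.25, math.pi / 2))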

3 Non-simply Connected Domains

In this section, we further explore the application of the previously developed alignment method to more complex domains, specifically those containing holes. Handling domains with holes is a crucial step towards achieving accurate and reliable results in multi-material domain treatment.


Fig. 8 Top: 3 border singular points of index 1/4 each and 1 internal singular point of index 1/4; Bottom: 2 singular points of index 1/4 and 1 singular point of index –1/4 and 3 singular points of index 1/4 inside the domain

Fig. 9 Top: 2 border singular points of index 1/4 each and 2 internal singular point of index 1/4; Bottom: 3 singular points of index 1/4 and 1 singular point of index 1/4 inside the domain

Let Ω be a non-simply connected domain such that $\partial\Omega = \bigcup_{i=1}^{n_\Gamma} \Gamma_i$, where Γ_i represents a simply connected component of the boundary ∂Ω and n_Γ is the number of such simply connected components. By applying formula (11), it is observed that the constraint on the initial cross-field becomes:

$$\deg(u, \partial\Omega) = \sum_{i=1}^{n_\Gamma} \deg(u, \Gamma_i) = \chi(\Omega) - \sum_{c\in\partial\Omega} \mathrm{id}_u(c). \qquad (16)$$

However, when dealing with non-simply connected domains, this condition is only necessary and not sufficient, as it does not ensure continuity of φ (the alignment angle developed in the previous section) on each boundary segment delimited by the singular points on each Γ_i, i ∈ [[1, n_Γ]]. To address this issue, we impose condition (11) on each Γ_i, leading to the following system:

$$\begin{cases} \deg(u, \Gamma_0) = 1 - \displaystyle\sum_{c\in\Gamma_0} \mathrm{id}_u(c),\\ \deg(u, \Gamma_i) = 1 + \displaystyle\sum_{c\in\Gamma_i} \mathrm{id}_u(c), & \forall i \in [[1, n_\Gamma]], \end{cases} \qquad (17)$$

where Γ_0 denotes the exterior boundary of Ω. According to our experiments, while it is easy to construct cross-fields that satisfy formula (16) (for example, by using formula (2)), generating cross-fields that conform to formula (17) in a simple


Fig. 10 Non-aligned cross-field

and efficient manner proved to be more challenging. As an example, the cross-field shown in Fig. 10 complies with condition (16), but not with (17). Therefore, we introduce an angle field ψ defined on Ω which will be used to correct the angular defect of the field u on the border of each connected component. Following Sect. 2.4, the final cross-field v is represented by:

$$v = R(\varphi)R(\psi)u, \qquad (18)$$

where R(φ) and R(ψ) are rotation matrices corresponding to the angles φ (defined in Sect. 2.4) and ψ. The computation of φ in Eq. (14) has to be adjusted as follows:

$$\tilde\varphi = \theta_N - \theta_u - \psi. \qquad (19)$$

The angle field ψ acts as a correction factor for the angular defects caused by the presence of holes within the domain. To define it, we evaluate an auxiliary ("subsidence") vector field h, whose angle θ_h is 4 times ψ. The vector field h is given by the following system:

$$\begin{cases} \Delta h = 0,\\ \dfrac{1}{2\pi}\displaystyle\int_{\Gamma_0} d\theta_h = 4\left(\deg(u, \Gamma_0) - 1 + \sum_{c\in\Gamma_0} \mathrm{id}_u(c)\right),\\ \dfrac{1}{2\pi}\displaystyle\int_{\Gamma_i} d\theta_h = 4\left(\deg(u, \Gamma_i) - 1 - \sum_{c\in\Gamma_i} \mathrm{id}_u(c)\right), & \forall i \in [[1, n_\Gamma]]. \end{cases} \qquad (20)$$
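The right-hand sides of system (20) are simply the per-component residuals of condition (17), which makes them easy to compute. A small sketch under assumed inputs (the degree of u on each boundary component and the boundary singular indices per component, with component 0 the exterior boundary):

    def theta_h_targets(degrees, corner_indices):
        """Circulation targets for the correction field h, per Eq. (20).

        degrees        : list, degrees[i] = deg(u, Gamma_i), i = 0 exterior.
        corner_indices : list of lists of boundary singular indices on Gamma_i.
        Returns the prescribed circulations (1/2pi) * integral of d(theta_h).
        """
        targets = []
        for i, (deg, idxs) in enumerate(zip(degrees, corner_indices)):
            if i == 0:   # exterior boundary, first line of (17)/(20)
                targets.append(4 * (deg - 1 + sum(idxs)))
            else:        # interior boundaries, second line
                targets.append(4 * (deg - 1 - sum(idxs)))
        return targets

    # Annulus whose cross-field already satisfies (17): all targets vanish.
    print(theta_h_targets([1.0, 1.0], [[], []]))   # [0.0, 0.0]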


Fig. 11 Aligned cross-field

Fig. 12 Quad mesh on a non-simply connected domain

In practice, we might need to modify h to remove possible singular points of h; we do this by using formula (2). The application of the method to the cross-field depicted in Fig. 10 results in an aligned cross-field, as demonstrated in Fig. 11. The resulting quadrilateral mesh can be seen in Fig. 12. Further visual representations of the results can be found in Fig. 13. As previously discussed in the introduction of Sect. 3, we demonstrate an example of a multi-material geometry in Fig. 14. The geometry is composed of a combination of two half-discs and a square plate with a circular hole. By utilizing the techniques developed earlier, specifically by addressing singular points on the boundary and treating domains with holes, we can produce a valid mesh.


Fig. 13 Mesh of the NACA0012 with two different configurations of singular points. The initial cross-fields were obtained using formula (2) by shifting the position of the internal singular points (a_i)_{i∈{1,2}} involved in the formula, with d_i = −1, ∀i

Fig. 14 An example with two connected components

4 Case of Non-planar Surfaces

In this section, we apply the method presented in the previous sections to non-planar surfaces. In particular, we address the lack of a global reference frame in which to compute the angles of the cross-field on non-planar surfaces. To overcome this challenge, we propose to utilize the heat method for diffusion as presented in [22]. This method propagates a given vector at a point according to the heat equation. This allows for the construction of a global reference frame for the surface, which enables accurate computation of the angles of the cross-field and alignment with the boundary of the


Fig. 15 Vector field obtained by the heat method diffusion [22]

Fig. 16 Global frame

non-planar surface. More specifically, we construct a vector field w on the surface using Eq. (21), with homogeneous Neumann boundary conditions. We begin the resolution by initializing the equation with a vector field that equals an arbitrary vector in the tangent space of an arbitrary point and is zero everywhere else on the surface. By solving this equation over a very short time period, we obtain a vector field without singularities. The next step is to generate a frame at each point of the surface using formula (3).

$$\frac{\partial w}{\partial t} = \nabla^2 w, \qquad (21)$$

Figure 15 shows an example of the solution of the equation and Fig. 16 illustrates the resulting global frame obtained.
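In its simplest graph form, Eq. (21) can be mimicked by seeding one tangent vector and repeatedly averaging neighbor values for a short "time". The sketch below deliberately ignores parallel transport between tangent planes, which the method of [22] handles properly; it is only meant to convey the initialize-then-diffuse structure.

    def diffuse_vector_field(neighbors, seed_vertex, seed_vec, steps=50, dt=0.5):
        """Crude explicit diffusion of a 2D tangent vector over a mesh graph.

        neighbors : dict vertex -> list of adjacent vertices (assumed input).
        Starts from `seed_vec` at `seed_vertex`, zero elsewhere (as in the
        text), and applies `steps` explicit heat steps w += dt * (avg - w).
        """
        w = {v: (0.0, 0.0) for v in neighbors}
        w[seed_vertex] = seed_vec
        for _ in range(steps):
            new = {}
            for v, nbrs in neighbors.items():
                ax = sum(w[u][0] for u in nbrs) / len(nbrs)
                ay = sum(w[u][1] for u in nbrs) / len(nbrs)
                new[v] = (w[v][0] + dt * (ax - w[v][0]),
                          w[v][1] + dt * (ay - w[v][1]))
            w = new
        return w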


Fig. 17 Invalid partitioning

Fig. 18 Rectified cross-field

In order to illustrate our method in the context of curved surfaces, we present an adaptation of the process explained in Sect. 2.4 on a quarter of a sphere. We begin by defining a cross-field on the surface, using the same approach as described in Sect. 2.4 and depicted in Fig. 5. The outcome is displayed in Fig. 17. However, similar to the scenario presented in Sect. 2.4, this cross-field may not be aligned with the boundary of the domain, leading to subdomains that are not four-sided. To overcome this problem, we adjust the cross-field as described in Sect. 2.4. The outcome of this adjustment on the example of a quarter of a sphere is illustrated in Fig. 18. Finally, the quadrilateral mesh is obtained by meshing each four-sided block, and is shown in Fig. 19.


Fig. 19 Quadrilateral mesh of the quarter-sphere

Other examples of the results obtained on various geometries are presented in Fig. 20.

5 Conclusion

We have presented a method that decouples the construction of the mesh from the generation of the cross-field. The cross-field is provided as an input respecting the constraints we have exhibited. It is then modified in order to adapt it to the topology of the domain. The advantage is that we benefit from a different distribution of singular


Fig. 20 Other examples of quadrilateral meshes on non-planar geometries

points, and that we can easily take into account non-planar surface manifolds. We have also implemented operations to take into account arbitrary indexes of boundary singular points and non-simply connected domains. In future work, we hope to extend the method to mesh adaptation with respect to the solution of a given equation.

Acknowledgements The authors are grateful to the anonymous referees for their helpful comments and suggestions.


References

1. M. Reberol, Maillages hex-dominants : génération, simulation et évaluation [Hex-dominant meshes: generation, simulation and evaluation]. Ph.D. thesis (2018)
2. N. Kowalski, F. Ledoux, P. Frey, in 21st Int. Meshing Roundtable (Sandia Natl. Labs., San Jose, United States, 2012). https://hal.sorbonne-universite.fr/hal-01076754
3. D. Bommes, B. Levy, N. Pietroni, E. Puppo, C. Silva, D. Zorin, Computer Graphics Forum 32 (2013). https://doi.org/10.1111/cgf.12014
4. T. Gurung, D. Laney, P. Lindstrom, J. Rossignac, Comput. Graph. Forum 30, 355 (2011). https://doi.org/10.1111/j.1467-8659.2011.01866.x
5. J.F. Remacle, J. Lambrechts, B. Seny, E. Marchandise, A. Johnen, C. Geuzaine, International Journal for Numerical Methods in Engineering 89, 1102 (2012). https://doi.org/10.1002/nme.3279
6. R. Schneiders, Engineering with Computers 12, 168 (2005)
7. T.D. Blacker, M.B. Stephenson, International Journal for Numerical Methods in Engineering 32(4), 811 (1991). https://doi.org/10.1002/nme.1620320410
8. M. Staten, R. Kerr, S. Owen, T. Blacker, M. Stupazzini, K. Shimada, International Journal for Numerical Methods in Engineering 81 (2009). https://doi.org/10.1002/nme.2679
9. D.L. Rigby, (2003)
10. T.K.H. Tam, C.G. Armstrong, Advances in Engineering Software and Workstations 13, 313 (1991)
11. N. Ray, B. Vallet, W.C. Li, B. Lévy, ACM Trans. Graph. 27(2) (2008). https://doi.org/10.1145/1356682.1356683
12. D. Bommes, H. Zimmer, L. Kobbelt, ACM Trans. Graph. 28(3) (2009). https://doi.org/10.1145/1531326.1531383
13. M. Nieser, U. Reitebuch, K. Polthier, Computer Graphics Forum 30(5), 1397 (2011). https://doi.org/10.1111/j.1467-8659.2011.02014.x
14. K.M. Shepherd, X.D. Gu, T.J. Hughes, Computer Methods in Applied Mechanics and Engineering 402, 115555 (2022)
15. N. Lei, X. Zheng, Z. Luo, F. Luo, X. Gu, Computer Methods in Applied Mechanics and Engineering 366, 112980 (2020)
16. R. Viertel, B. Osting, An approach to quad meshing based on harmonic cross-valued maps and the Ginzburg-Landau theory (2018)
17. P.A. Beaufort, J. Lambrechts, F. Henrotte, C. Geuzaine, J.F. Remacle, Procedia Engineering 203, 219 (2017). https://doi.org/10.1016/j.proeng.2017.09.799. 26th International Meshing Roundtable, IMR26, 18-21 September 2017, Barcelona, Spain
18. J. Jezdimirović, A. Chemin, M. Reberol, F. Henrotte, J.F. Remacle, arXiv e-prints arXiv:2103.02939 (2021)
19. A. Macq, M. Reberol, F. Henrotte, P.A. Beaufort, A. Chemin, J.F. Remacle, J. Van Schaftingen, Ginzburg-Landau energy and placement of singularities in generated cross fields (2020)
20. W.A. Cook, International Journal for Numerical Methods in Engineering 8(1), 27 (1974)
21. N. Ray, B. Vallet, W.C. Li, B. Lévy, ACM Trans. Graph. 27(2) (2008). https://doi.org/10.1145/1356682.1356683
22. N. Sharp, Y. Soliman, K. Crane, ACM Transactions on Graphics (TOG) 38(3), 1 (2019)

Ground Truth Crossfield Guided Mesher-Native Box Imprinting for Automotive Crash Analysis

Nilanjan Mukherjee

1 Problem Definition

Finite element analyses of engineering products have assumed prime importance in design validation for over four decades now. In the twenty-first century these analyses and mesh model preparation processes have become more specialized and automated. Geometry simplification for meshing and mesh generation technologies have also been challenged with scalability and variability requirements. As geometry cannot be over-simplified for analysis, a single meshing algorithm is not adequate to provide every edge and surface feature characteristic that analysis accuracy demands. Subdividing or zoning out mesh areas and trying unique meshers on them seems to hold great promise in this regard.

In the automotive industry, in particular, an important product design validation or engineering analysis function is crash/collision analysis. This analysis is usually done on a discretized finite element model of the entire car assembly, especially the subassembly that is called body-in-white (BIW). In the automotive industry, body-in-white refers to the fabricated (usually seam and/or tack welded) sheet-metal components that form the car's body. Body-in-white is a stage of the car body prior to painting and before the moving parts (doors, hoods, fenders etc.), the engine, chassis sub-assemblies, and trim (glass, seats, upholstery, electronics, etc.) have been mounted. Structured and regular quadrilateral-dominant meshes (with the majority of face interior nodes connected to four elements, i.e. possessing a valency of 4) are created on these body panels for a variety of finite element analyses. Such a BIW panel is shown in Fig. 1a with the quad-dominant surface mesh in Fig. 1b. The panel


Fig. 1 Quad-dominant mesh (b) on an automotive BIW panel (a) generated with the proposed meshing strategies

is meshed employing a plethora of meshing strategies, including the one proposed in this paper.

Crash analysis of the digital finite element model is usually a nonlinear, transient dynamic structural analysis under shock velocity and/or impact loading. It is performed in order to predict the stress, deflection and rupture of the automobile in a crash/collision situation. For results/predictions to be accurate, crash analysis requires the finite element quadrilateral-dominant mesh to have many distinct characteristics, namely high-quality quasi-structured meshes on features and around bolt/washer holes. Figure 2 shows two quad-dominant meshes: the mesh around the washer holes is unstructured and unpatterned in Fig. 2a and most ideally analysis-oriented in Fig. 2b.

This paper deals with the development of two-dimensional shape imprinting strategies inside the mesh generator, such that meshes can be given the distinct local characteristics required by the analysis type. The paper focuses on three classes of strategies/algorithms: (a) mesher-native shape imprinting and multiblocking strategies, (b) mesh direction field computation, and (c) box orientation and 2D meshing algorithm selection for the decomposed face subdomains, which are called virtual faces.

2 Previous Work

Shape-imprinting is classically seen as a CAD operator and is a well-researched and disseminated topic. When dealing with contemporary discrete geometry, shape imprinting research also finds interest in the field of computer graphics. However, when it comes to shape-imprinting tools embedded in finite element mesh generators, published research is quite limited. White and Saigal [1] and Clark et al. [2] report some of the earliest investigations of CAD imprinting for the purpose of conformal mesh generation. Blacker's [3] "Cooper Tool" introduced the idea of shape imprinting in the sweep meshing context, where source faces were internally


Fig. 2 Mesh characteristic comparison of two quadrilateral meshes around washer hole features

imprinted on volume steps to continue sweeping in the subvolumes. In a similar vein but with completely different approaches, Ruiz-Gironés et al. [4] used a least-squares approximation of affine mappings to decompose sweep meshing volumes with multiple source and target faces into subvolumes, while Cai and Tautges [5] used a novel edge-patch based imprinting technique on the sweep mesh layers to facilitate cage extractions. Lu et al. [6] used a sketch-based approach with a geometric reasoning process to determine the sweeping direction. Two types of sweepable regions are used, which provides visual clues to the user in developing decomposition solutions. Outside of submapping and sweep meshing, the author could not find any investigation on mesher-embedded, mesh-flow controlled shape imprinting for high quality quadrilateral mesh generation. Furthermore, no research paper or patent could be traced on the use of such techniques in carbody crash analysis.

3 Mesher-Native Imprinting Strategy

Shape imprinting is generally conceived as a CAD tool. Since geometry cannot be manipulated during meshing, it is impossible to alter it during mesh generation. In this paper an attempt is made to shape-imprint inside a 2D surface mesh generator.


3.1 Design and Architecture for Mesher-Native Shape Imprinting

It is naturally imperative that a flexible, innovative architecture and design are prerequisites to the development of such functionality. This paper proposes an architecture and object-oriented design for imprinting 2D planforms in the parameter space of the face to mesh [7]. Appendix I describes the sequence of operations leading to the decomposition of the surface into these "zones" or "virtual faces". The UML (Unified Modeling Language) diagram shows that the task begins with the creation of an Imprinter object for the face. The Imprinter creates an Imprint Shaper object. The latter has methods to either select a shape or use a user-driven shape; in the present context that user-driven shape is a square box. Next it orients the shape according to the face's local or global mesh flow axis. The Imprint Shaper then imprints the chosen shape at the selected location in the parameter space. According to user-driven mesh controls, it applies element intervals or counts to the four sides of the box based on the template selected. The Imprint Shaper decides on the many parameters automatically when the user specification is missing. This complex algorithm will be reported in a separate paper.

The Imprint Shaper uses a mesher-native domain decomposer and mesh topology operators to create virtual vertices, virtual edges and virtual faces using a virtual topology engine similar to the one reported by Makem et al. [8]. Thus, each imprinted shape becomes a virtual face, and the residual area of the face becomes another, boolean virtual face. These virtual geometries comprise points and lines; they do not involve surface tessellations. They are not geometries but rather lines and points marking out sub-regions of a face in its 2D parameter space, which we call "virtual faces". However, they follow a strict Eulerian topological system of definitions, connections and operations. Once the virtual geometries are formed, they are stored in the FEM (Finite Element Model) database. Each face has links to its virtual sub-geometries and is free to use or ignore them when needed.

The Shaper object also selects and builds mesh template objects, especially for the box shape around holes. The selection of a particular template for a particular box is also being developed and can become the content of another invention disclosure later. Presently, templates are auto-selected based on the number of elements the user asks for around each hole. Finally, the surface meshers access these virtual faces and generate 2D meshes on them, which are finally mapped/transformed back to the 3D surfaces [9, 10] after topological mesh cleanup [11] and smoothing [12]. The flowchart in Fig. 3 describes the overall mesher-native imprinting and mesh generation strategies.
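The object roles described above can be summarized in a skeletal sketch. All class and method names below are hypothetical placeholders reflecting the description (Imprinter, Imprint Shaper, virtual faces), not the product's actual API.

    # Hypothetical skeleton of the imprinting design described above.

    class VirtualFace:
        """A sub-region of a face's 2D parameter space, bounded by virtual
        vertices/edges; carries no surface tessellation of its own."""
        def __init__(self, boundary_2d):
            self.boundary_2d = boundary_2d

    class ImprintShaper:
        """Chooses, orients and imprints a 2D shape (here a square box)."""
        def __init__(self, face):
            self.face = face

        def imprint_box(self, center_uv, size, flow_axis, intervals):
            box = self._oriented_box(center_uv, size, flow_axis)
            # A real domain decomposer would split the face here; this
            # sketch just records the imprinted and residual regions.
            return VirtualFace(box), VirtualFace("residual-of-" + str(box))

        def _oriented_box(self, center, size, axis):
            return (center, size, axis)   # placeholder geometry

    class Imprinter:
        """Entry point: one Imprinter per face, creating its ImprintShaper."""
        def __init__(self, face):
            self.shaper = ImprintShaper(face)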


Fig. 3 Overall algorithm flowchart

3.2 Mesh Imprinting Box-With-Hole Shape

Face interior holes are regions of keen interest for structural analysts. Structure joining/fastening happens at these circular holes. Joints and fasteners like bolts, rivets, pins and locks pass through these holes. Many power- and energy-transferring and load-bearing members like shafts, rods etc. are lodged at these sites, making them potential areas of catastrophic stress, buckling, failure and long-term crack initiation. The finite element analyst always desires a model where the mesh around the holes is regular, structured and made up of finite elements which deviate minimally from their best


shapes (element included angle): 60° for triangles and 90° for quads. Creating a box shape around these holes and orienting it correctly with respect to a reference direction leads to a well-patterned mesh. Such functionality is of great interest and importance to almost all structural engineers, but most crucially to the automotive crash analyst. Weldments, fasteners, bolts etc. are the most susceptible elements during a car crash. Naturally, during finite element analysis great care is taken to reduce analysis error in these areas. Consequently, the shape and nature of the quadrilateral-dominant mesh in these localities assume paramount importance.

3.3 Shape Imprint Driven Multiblocking

Surface meshes on carbody panels require all-quad, two-layered, patterned meshes around washer pads. Consequently, a shape imprint based multiblocking or face decomposition strategy becomes a natural choice. Figure 4 depicts an example where a body panel face with washer holes is box-imprinted first. This mesher-native 2D decomposition technique breaks the face into three virtual faces or mesh areas: the two blue box-with-hole (BWH) virtual faces shown in Fig. 4a and a residual region or BWH boolean face (white). Figure 4b illustrates typical virtual topology elements like the virtual vertex and virtual edge, which are 2D points and lines used to define a virtual face following a traditional Eulerian topology framework.

Fig. 4 Face with washer holes multiblocked into box-with-hole virtual faces (a) detailed in (b)


4 Mesh Direction Fields

Establishing a mesh flow direction is important for most classes of industrial problems. The mesh generated on the parts needs to approximately follow a direction. Therefore, it is equally important, especially for quadrilateral meshes, to orient the imprinted shape in this direction such that the imprinted shape (and thus its local mesh) aligns with the global mesh. This paper proposes methods for boundary-aware direction vector computation for faces. It can be done in multiple ways. However, the three most common scenarios encountered by product users are automotive use cases, general mechanical/electronic use and aerospace applications. The most challenging amongst them is automotive BIW meshing.

4.1 Method I: Minimum Oriented Bounding Box Based (MOBB)

Non-feature faces in an automotive body panel cover 50-90% of the body panel surface area. These faces are meshed either with the Multizone mesher [13] or the CSALF-Q (loop-paver) mesher [14]. For this class of faces a user-driven crash analysis global direction vector might be specified. If the crash vector is not specified, a natural mesh flow direction vector is computed for each face based on its shape.

A method of determining a singular mesh flow direction vector in 2D is based on the natural shape of the face boundary in 2D. This technique is most general and robust and can serve most of the general mechanical and electronic industry finite element analysis requirements. A sample face is shown in Fig. 5a. The outer loop of the face in a flattened 2D domain is shown in Fig. 5b. Meshing happens in this 2D domain or parameter space of the face. The face outer loop is first discretized with nodes, thus making a discretized closed polygon (Fig. 5b). Next, a 2D convex hull is created (Fig. 5c) for the 2D face using the Gift-Wrap or Andrew algorithm [15]. The well-known rotating-caliper method [16] is used to compute the minimum oriented bounding box over all orientations of the convex hull (Fig. 6). The minimum oriented bounding box (MOBB), as shown in Fig. 7, is defined as the oriented bounding box which has the minimum area among all of them. This means this bounding box best encapsulates the 2D polygon representing the discretized face loop. The X-axis of this bounding box, described by the green 2D vector in Fig. 7, defines the constant mesh flow direction vector. Angle α is the angle between the MOBB X-axis VMOBBx (green vector) and the global 2D axis of the face; in this particular case α = 4.4°. In order to ensure the orientation axis is dependable, a heuristic is set in terms of the area ratio Ar, defined as the ratio of the face outer loop area to the MOBB area. This is defined in Eq. 1. An area factor threshold of 2.2 was obtained heuristically from the study of a complete carbody (Fig. 1b) analysis.


Fig. 5 Sample tessellated face (a), its 2D parameter space (b) and the convex hull of the face outerloop (c)

Fig. 6 A set of 2D oriented bounding boxes for different orientations of convex hull

$$A_r = \frac{A_{LP}}{A_{MOBB}} \le A_{FT} \qquad (1)$$

where A_LP is the 2D area of the face outer loop, A_MOBB is the area of the minimum oriented bounding box, and A_FT is the area factor threshold (= 2.2).
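A self-contained sketch of the MOBB computation follows: a convex hull (Andrew's monotone chain) and a sweep over hull-edge orientations, keeping the box of minimum area; the minimum-area box is always aligned with some hull edge, which is what the rotating-caliper argument exploits. This is illustrative plain Python, not the product code.

    import math

    def convex_hull(pts):
        """Andrew's monotone chain; returns hull vertices in CCW order."""
        pts = sorted(set(pts))
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        lower, upper = [], []
        for p in pts:
            while len(lower) > 1 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) > 1 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    def mobb_axis_angle(pts):
        """Angle of the X-axis of the minimum oriented bounding box.

        Tries each hull edge direction and keeps the minimum-area box."""
        hull = convex_hull(pts)
        best = (float("inf"), 0.0)   # (area, angle)
        n = len(hull)
        for i in range(n):
            x0, y0 = hull[i]
            x1, y1 = hull[(i + 1) % n]
            a = math.atan2(y1 - y0, x1 - x0)
            ca, sa = math.cos(-a), math.sin(-a)
            xs = [p[0]*ca - p[1]*sa for p in hull]
            ys = [p[0]*sa + p[1]*ca for p in hull]
            area = (max(xs) - min(xs)) * (max(ys) - min(ys))
            if area < best[0]:
                best = (area, a)
        return best[1]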


Fig. 7 The minimized oriented bounding box for the face with the corresponding axis vector VMOBBX

4.2 Method II: Ground Truth Frame/Cross Field Based (GTFF/GTCF)

The face axis based on the MOBB (Minimum Oriented Bounding Box) of the face in its 2D parameter space is not always reliable for every single face of the geometry meshed. When the area factor threshold defined by Eq. 1 is exceeded, a single face axis vector for shape imprinting and mesh generation is no longer reliable. To be able to orient the boxes reliably and locally on these axis-less faces, a novel method is thus proposed. Figure 8 shows a detailed section of a face with many washer holes for which a single reliable mesh flow axis cannot be established using a MOBB face axis. Accordingly, in the absence of any mesh flow axis, the boxes get created following the default uv-axes of the parameter space of the face. The red vector shows the V-axis of the parameter space, along which all boxes are placed by default. Accordingly, the local mesh around the washer holes does not align with the overall mesh on the face. Such meshes are unsuitable for crash analyses. If one is lucky, the uv-axes of the face might be orthonormal to the crash direction; in such cases these can be used for box-imprinting. It is obvious, however, from even a casual review of the final mesh, that the three boxes need to be oriented in different directions so they align with the mesh better. In order to achieve this, a novel algorithm (Algorithm I) is developed.

Fig. 8 Axis-less face in 2D parameter space (u, v) where the boxes are imprinted at incorrect angles, leading to a misoriented-flow quad mesh

Algorithm I: Local Axes Determination for Box Orientation


1. First, all face inner loops are suppressed and a coarse quad-dominant mesh (mostly quads with about 5% or fewer triangles) is generated.
2. A non-conforming Voxel mesh field is generated in the background for point tracking.
3. A 2D local frame field [17] is generated at the barycenter (centroid) of every element of that coarse mesh and transferred to element nodes.
4. The global field vector problem is addressed by solving the governing Laplace equation.
5. A global crossfield direction field is constructed from the global frame field direction vector field.
6. The imprintable boxes are placed atop the coarse quad-dominant mesh and the global crossfield.
7. The elements containing each corner of each box are found by searching the box-packed field.
8. From each element containing a box corner, the crossfield vector is interpolated at the corner point.
9. The box-corner crossfield vectors are averaged; each box is oriented to align with the average crossfield vector.

STEP 1: A quad-dominant coarse background mesh is first generated on the face using a modified version of the CSALF-Q [14] mesher, which will also be used to generate the final mesh. The idea is to determine an approximate mesh flow field for the "would-be" mesh on the entire face with inner features suppressed, so the boxes can be oriented in a reasonably accurate manner and align well with the final mesh. Figure 9 shows the background mesh on the face with inner loops suppressed; the only face loop considered is the outer loop.

STEP 2: A non-conforming Voxel mesh field is generated next in the face parameter space. This field, shown in Fig. 10, comprises non-conforming, unconnected and overlapping Voxel cells. Each cell represents the bounding box of one mesh element. Figure 10 illustrates a close-up view of the non-conforming Voxel mesh field near the upper side of the face. Equation 2 denotes, in a nutshell, the non-conforming Voxel mesh (VX) for a 2D mesh with N elements as a union of the bounding areas of each element.

Fig. 9 Background coarse quad mesh over the entire face represented by the outerloop only

$$VX = \bigcup_{i=1}^{N} B_i\big[O(x_i, y_i),\, \Omega(x_i, y_i)\big] \quad \forall x, y \in N \qquad (2)$$
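Equation (2) amounts to storing one axis-aligned bounding box per background element; locating a box corner then reduces to a cheap min/max filter followed by a point-in-polygon test (Step 7 of Algorithm I). A minimal sketch, with element connectivity and node coordinates as assumed inputs:

    def build_voxel_field(elements, xy):
        """Bounding box per element: the non-conforming Voxel field of Eq. (2).

        elements : list of node-index tuples; xy : dict node -> (x, y).
        """
        field = []
        for elem in elements:
            xs = [xy[n][0] for n in elem]
            ys = [xy[n][1] for n in elem]
            field.append((min(xs), min(ys), max(xs), max(ys)))
        return field

    def candidate_elements(field, px, py):
        """Band search: elements whose Voxel cell contains point (px, py)."""
        return [i for i, (x0, y0, x1, y1) in enumerate(field)
                if x0 <= px <= x1 and y0 <= py <= y1]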

STEP 3: Following a recent investigation on the ground-truth frame field vector for a quadrilateral mesh [17], the element local frame field vectors F_ui^e and F_vi^e are computed for the ith element (Fig. 11):

$$F_{u_i}^e = \frac{d_3}{(d_1 + d_3)}\cdot v_1 + \frac{d_1}{(d_1 + d_3)}\cdot v_3 \qquad (3a)$$

$$F_{v_i}^e = \frac{d_2}{(d_2 + d_4)}\cdot v_2 + \frac{d_4}{(d_2 + d_4)}\cdot v_4 \qquad (3b)$$

Equation family 3 describes the computation of the element local frame field vector at the element centroid C_p in terms of its opposite-pair edge directions and the perpendicular distances of the sides from the element centroid. A triangle is first split into 3 quads by joining the 3 corners of the triangle to the barycenter of the element; the same vector formulation is then used for the 3 quads and the results are averaged at the barycenter of the triangle. The element centroid frame field vectors are next transferred to its corner nodes. Each corner node thus gets the vector of each element it is connected to, and the vector at each node is averaged.

STEP 4: The global problem of the frame field can be expressed as Laplace's equation of steady-state heat conduction in two dimensions, as shown in Eq. 4a.

Fig. 10 Close-up of the non-conforming Voxel mesh field of the face


Fig. 11 Side vectors and distance parameters used to construct element frame field vector

Here u is the 2D vector field and F is a functional that needs to be minimized over the field. Dirichlet boundary conditions (4b) are applied and the equation is solved with a length-weighted iterative solver. The resultant global frame field of the mesh is shown at the mesh nodes in Fig. 12. This is representative of the flow field of the mesh in terms of a frame field. A close-up detail of the GT frame field is shown in Fig. 12b. Although the problem is still not globally solved, the nodal averaged vectors begin to indicate what the mesh flow field will be in the end.

STEP 5: From the smoothed GT frame field, a GT crossfield is constructed directly by orthogonalizing the nodal vectors. A crossfield vector (v_i) comprises a cross of four orthogonal vectors meeting at a point. Such a vector set can be described by Eq. 4c and is illustrated in Fig. 13.

$$F(u) = \Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \qquad (4a)$$

$$u(x, y) = u_o(x, y) \qquad (4b)$$

$$v_i = [\cos(i\theta), \sin(i\theta)]^T, \quad \theta = \frac{\pi}{2}, \quad 0 \le i \le 3 \qquad (4c)$$

(4a) (4b) (4c)

The crossfield is necessary here as it is a more simplistic representation of the mesh as its u and v axes are assumed to be orthogonal at each point in the field. This is not true for the frame field vector. Here, the problem at hand is that of orienting a box—an orthogonal shape—therefore a crossfield is more convenient than a frame field.


Fig. 12 Laplace solved final frame field of the mesh (a) on the face outerloop including a close-up detail (b)

Fig. 13 Solved GT crossfield of the mesh for the entire face

5 Box-With-Hole Orientation

The algorithm for orienting the imprintable box such that it aligns with the GT crossfield is discussed in steps 6-9 of Algorithm I.

STEP 6: The imprintable boxes around the washer holes are placed atop the coarse quad-dominant mesh and the global crossfield. These boxes and the elements of the background mesh that contain the box corners are shown in Fig. 14.


Fig. 14 Rotated box (in blue) to align with the mesh flow

STEP 7: The elements containing each corner of each box are found by searching the non-conforming voxel field described in Eq. 2. Each voxel cell corresponds to an element of the background quad mesh. A band search is performed, which is quite efficient as it scans through the voxel cells by comparing the box corner coordinates with the min/max x and y of the voxel cells. This reduces the search to a small subset of elements. A point-in-polygon check is finally performed on these elements; as soon as one element is found, the search stops. The background mesh element containing a box corner is thus found.

STEP 8: Each of the originally computed box corners must lie in one element of the background mesh. The task at this point is to compute the GT crossfield at each box corner. Each element of the background mesh stores the GT frame/cross field vectors at its corner nodes. Let us assume V_r and V_b are the horizontal axes of the original box (red) and the to-be-rotated box (black) respectively, as illustrated in Fig. 14. Red refers to the original face orientation in 2D, and therefore V_r is equivalent to the u-axis of the parameter face. In order to compute V_b, the unit crossfield vectors at the 4 corners of the original imprintable box shape (red) must be found first. The background mesh elements containing these corners (as illustrated in Fig. 14) are first determined by means of a linear search of the non-conforming voxel mesh field (Fig. 10). The crossfield vector V_cf,ck at any box corner C_k can then be interpolated from the nodal crossfield vectors of the element i containing the corner. If N_ij represents the four shape functions of a background isoparametric quadrilateral element i and V_cf,ij is the crossfield vector at its jth corner, the box corner field vector V_cf,ck is given by

$$V_{cf\,c_k} = \sum_{j=1}^{4} \frac{1}{4} N_{ij}(\xi_k, \eta_k) \cdot V_{cf\,ij}. \qquad (5)$$

However, in the above equation the isoparametric element coordinates ξ_k, η_k at the box corner C_k are unknown. If the global coordinates of the 4 corners of element


i are given by P_il(x, y), l = 1, 2, …, 4, box corner C_k can be expressed as

$$C_k = \sum_{l=1}^{4} N_l(\xi_k, \eta_k) \cdot P_l \qquad (6)$$

where C_k and P_l are known. Equation (6) poses an inverse problem of solving for ξ_k, η_k. This is solved by a traditional Newton-Raphson technique. Figure 15 shows a close-up of a box corner C_k inside a background element E. The coordinates of C_k are known. In order to determine the element natural coordinates ξ_k, η_k at this point, we start off with paired functions f(ξ, η) and g(ξ, η) for a bilinear isoparametric element. The following equation can be conventionally written:

$$\begin{bmatrix} J_{11} & J_{12} \\ J_{21} & J_{22} \end{bmatrix} \begin{Bmatrix} \Delta\xi \\ \Delta\eta \end{Bmatrix} + \begin{Bmatrix} f \\ g \end{Bmatrix} = 0 \qquad (7a)$$

where the Jacobian derivatives are

$$J_{11} = \frac{\partial f}{\partial \xi}, \quad J_{12} = \frac{\partial f}{\partial \eta}, \quad J_{21} = \frac{\partial g}{\partial \xi}, \quad J_{22} = \frac{\partial g}{\partial \eta} \qquad (7b)$$

Using a standard Newton-Raphson iterative procedure, and upon further simplification, the solution at iteration i for a point k can be written as

$$\xi_k^i = \xi_k^{i-1} - \left(J_{11}^{i-1} f^{i-1} + J_{12}^{i-1} g^{i-1}\right) \qquad (8a)$$

$$\eta_k^i = \eta_k^{i-1} - \left(J_{21}^{i-1} f^{i-1} + J_{22}^{i-1} g^{i-1}\right) \qquad (8b)$$

$$\varepsilon_n = \left(\frac{\xi_k^i - \xi_k^{i-1}}{\xi_k^{i-1}}\right)^2 + \left(\frac{\eta_k^i - \eta_k^{i-1}}{\eta_k^{i-1}}\right)^2 \qquad (8c)$$

$$\varepsilon_n < \varepsilon_{tol} \qquad (8d)$$

Fig. 15 Box corner C_k falling inside a background quadrilateral element E with nodes N1-N4


An initial guess of ξ_k = 0, η_k = 0 is reasonably good. An error norm ε_n, computed as shown in Eq. 8c, is used to test convergence (8d). The solution is assumed to converge when the error norm falls below the error tolerance ε_tol, usually in fewer than 10 iterations.

STEP 9: When ξ_k, η_k are inverse-solved, the GT crossfield vectors are computed at the four box corners C_k and the box centroid C_g using Eq. 5. The five crossfield vectors are finally averaged to determine V_b:

$$V_b = \left(\sum_{k=1}^{4} V_{cf\,c_k} + V_{cf\,g}\right)\Big/\,5 \qquad (9)$$

As explained before, V_r is constant over the face domain and is equivalent to the u-axis of the parameter space. The smallest angle ϕ between the red (original) and black (oriented) boxes can thus be computed as

$$\varphi = \cos^{-1}\left(\frac{V_r \cdot V_b}{|V_r| \cdot |V_b|}\right). \qquad (10)$$

The original box is now rotated about its center so as to align with this averaged crossfield axis. This can be expressed as

$$B_n(x, y) = T(\varphi) \cdot B_o(x, y) \qquad (11a)$$

where B_n denotes the new box coordinates, B_o the old box coordinates, and T the transformation matrix, a function of the angle of rotation ϕ given by

$$T(\varphi) = \begin{bmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{bmatrix} \qquad (11b)$$

Figure 16 shows the virtual faces or mesh areas the face is decomposed into after imprinting the oriented boxes. The box-with-hole faces (holes are not drawn inside the boxes for the sake of simplicity) are shaded in blue, while the residual area, called the boolean virtual face, is in pale green and red for the two comparison cases. Figure 16a illustrates the boxes not aligned with the GT crossfield; they are parallel to the v (or u) axis of the parameter space. In Fig. 16b the virtual faces are rotated to align with the local crossfield. It is evident how the box-with-hole virtual faces align with the mesh flow in Fig. 16b: all of them are nearly parallel to their nearest boundary tangent, and the turning vector for each box is unique as determined from the crossfield. A section of the final mesh after rotating all imprintable boxes in this process is shown in Fig. 17b in comparison to the original unrotated box (Fig. 17a). A comparison of the two meshes by standard quad element quality measures, and even by visual examination, clearly indicates the mesh no longer "twists" around the holes. Furthermore, the same studies show the mesh around the holes after proper rotation of


Fig. 16 Decomposed faces with oriented box-with-hole virtual faces and the boolean virtual face

the box is perfectly structured, meaning nearly all face-interior quad nodes are tetravalent, i.e. connected to 4 neighboring quads, and all element angles are fairly close to right angles. The number of resulting triangles is also smaller. It is to be noted that a small number of triangles result in these meshes for two reasons: firstly, the element size on the holes is user-controlled and typically smaller than the global mesh size, which requires mesh size transition; secondly, all-quad mesh transitions are not allowed in crash analysis meshes as they violate mesh flow. Hence, careful transitions need to be created in the mesh with a small number of triangles.
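The tail of the pipeline (Steps 8-9 and Eqs. 9-11b) reduces to averaging the interpolated vectors and rotating the box. A condensed sketch, assuming the per-corner crossfield vectors have already been interpolated via Eqs. 5-8:

    import math

    def average_box_vector(corner_vecs, centroid_vec):
        """V_b of Eq. (9): mean of the 4 corner vectors and the centroid one."""
        vecs = list(corner_vecs) + [centroid_vec]
        return (sum(v[0] for v in vecs) / 5.0, sum(v[1] for v in vecs) / 5.0)

    def rotation_angle(vr, vb):
        """Smallest angle between V_r and V_b, Eq. (10)."""
        dot = vr[0]*vb[0] + vr[1]*vb[1]
        nr = math.hypot(*vr) * math.hypot(*vb)
        return math.acos(max(-1.0, min(1.0, dot / nr)))

    def rotate_box(corners, center, phi):
        """Apply T(phi) of Eq. (11b) about the box center, Eq. (11a)."""
        c, s = math.cos(phi), math.sin(phi)
        out = []
        for x, y in corners:
            dx, dy = x - center[0], y - center[1]
            # T(phi) = [[cos, sin], [-sin, cos]] acting on (dx, dy)
            out.append((center[0] + c*dx + s*dy, center[1] - s*dx + c*dy))
        return out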

6 Hole Orientation Inside Box

While the box-with-hole virtual face is oriented according to the mesh direction field, the circular hole loop also needs to be oriented with respect to the box. This becomes necessary to ensure the paved layer of mesh around the hole has at least two element edges (diametrically opposite) parallel or perpendicular to the box edges. Their mutual orientation requirement is a property of the box-with-hole mesh template selected. For car crash analysis, the box needs to be first oriented along the face-local crash direction vector V_b as described in Eq. 9, Sect. 5. Transformation of each box with respect to the face outer boundary is described in Eqs. 11a and 11b. The washer rings should ideally be oriented parallel to the tangent vector at the closest point on the box boundary. In order to achieve correct ring orientation near


Fig. 17 Improvement in mesh directionality after correctly aligning boxes with mesh flow on the face in Fig. 8

the box boundary, so as to reduce stress computation errors, the paved ring (first layer of elements around the hole) needs to be appropriately rotated. This is achieved by clocking, or rotating, the geometry vertex of the circular face loop representing the hole by a parametric offset and creating a virtual vertex at the new location. This virtual vertex is a ghost representation of the real geometry vertex: during meshing, no node is created at the real vertex location but at the ghost vertex location; the node, however, is associated with the real vertex. Figure 18 explains the vertex rotation algorithm. P denotes the vertex on a circular edge-loop running clockwise inside a box imprinted around the hole. The box is first oriented in the face-local mesh flow direction V_1. This vector for a MOBB face is V_MOBBx, as explained in Sect. 4.1, or V_b for the crossfield, as explained in Eq. 9. In this particular case the user had applied a mesh control on this washer hole asking for 6 nodes. The blue hexagon inside the circle represents the default discretization on the hole, with one node at its original


Fig. 18 Schematic explaining loop vertex rotation to make paved element edge parallel to boundary

vertex P. Element edge PQ is the edge closest to the box boundary, with Q being the closest node. Q is projected to the nearest box boundary edge at point T. The tangent vector to the edge at T is V_1, as explained before. The direction of the element edge PQ most proximate to the box boundary is represented by vector V_2. For the paved ring to be parallel to the box boundary, vectors V_1 and V_2 must be parallel. In other words,

cos⁻¹( (V_1 · V_2) / (|V_1| · |V_2|) ) = θ        (12)

where ideally θ = 0° for the vectors to be parallel. Typically, they are not. So, in order that the closest paved element edge is parallel to the nearest boundary edge, vertex P needs to be relocated such that P_n becomes its new location; the green hexagon represents the new discretization on the hole. The green element edge P_n Q_n, nearest to the boundary, now ends at P_n instead of P, and the edge becomes parallel to V_1, thus satisfying Eq. 12. To ensure this, the radius vector at P needs to be clocked in a direction opposite to the circular edge-loop (counter-clockwise) by a parametric offset s_off. This required vertex offset is expressed as

s_off = r · cos⁻¹( (V_1 · V_2) / (|V_1| · |V_2|) ) / l_loop        (13)

where r is the radius of the hole and l_loop the perimeter of the circular edge-loop.
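The offset computation of Eqs. 12–13 reduces to a few vector operations. The following sketch illustrates it under the assumption that the two direction vectors, the hole radius and the loop perimeter are already available; the function name is hypothetical.

    import numpy as np

    def washer_vertex_offset(v1, v2, radius, loop_length):
        """Parametric offset s_off by which the hole loop's vertex is
        clocked counter-clockwise so the nearest paved element edge
        becomes parallel to the box boundary (Eqs. 12-13)."""
        v1 = v1 / np.linalg.norm(v1)
        v2 = v2 / np.linalg.norm(v2)
        theta = np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))  # Eq. (12)
        # Arc length subtended by theta, normalized by the loop perimeter.
        return radius * theta / loop_length                    # Eq. (13)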


Fig. 19 Orienting the hole with respect to the box (a) to a parallel configuration (b)

Once the boxes are oriented, the circular face loops (or holes) need to be clocked, as explained in Eqs. 12 and 13, such that the paved first layer of quad elements around them is parallel to the box. This is illustrated in Fig. 19. The decagonal discretization on the hole is initially not parallel to the box boundary (Fig. 19a). After offsetting the circular edge's vertex, the orientation is corrected and the hole is parallel to the box. The final mesh on the face will now flow in a unique direction both inside and outside the box, as shown in Fig. 1b. More examples of such meshes are shown in Figs. 20 and 23.

7 Meshing Algorithms for the Virtual Faces

As described before, faces with washer holes are decomposed into two types of mesh areas (virtual faces): the BWH (box-with-hole) virtual face and the BWH boolean virtual face. Carefully tailored, specialized mesh generators and meshing-process algorithms have been developed to mesh these virtual faces.

7.1 Washer Mesh Control

Crash mesh finite element analysts typically require finer control over the washer regions. The washer holes represent bolt, lug and pin loading areas where the highest stresses occur, making their locality a critical site of possible rupture during vehicle collision. According to Griffith's criterion of crack propagation, the speed of crack propagation is directly proportional to the length of an initial crack. Naturally, a lot of care is taken to design and analyze washer sites. User-controlled patterned meshes are thus imperative. A washer mesh control provides that desirable user control.

Fig. 20 Some meshing templates used for box-with-hole virtual faces for W_ec = 4 to 20, where W_ec is even (panels a–h: TM4H11, TM6221, TM8111, TM10221, TM12211, TM16111, TM18121, TM20211)

Washer mesh control is typically applied by hole radius range and is defined by three key parameters: the number or count of washer elements (W_ec), the number of washer element layers (W_nl) and the thickness of the washer layers (W_lthk). Usually, crash analysis meshes require a single layer, i.e. W_nl = 1, of even-numbered elements.

7.2 Templatized Meshers for BWH Face

The second layer of elements in a box-with-hole virtual face is determined by the templatized meshers. The element count W_ec on the inner loop, prescribed by the user via the washer mesh control, defines the template mesher used. These templatized meshers (TM) are illustrated in Fig. 20 and designated by

TM_NHPQ        (14)

where N = element count (4–20) on the hole; H = hole orientation (1 = perpendicular, 2 = parallel, 3 = other); P = parity of the element count (1 = divisible by 4, 2 = even, 3 = odd); and Q = quad dominance (1 = all-quad, 2 = quad-dominant, 3 = triangular). The shaded regions in each templatized mesher represent the washer mesh-controlled areas where the user decides the three key parameters W_ec, W_nl, W_lthk.
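As a concrete reading of Eq. (14), the following sketch assembles a template designation from the washer parameters. The function name and the assumption that the orientation and quad-dominance codes are supplied by the caller are illustrative only, not the mesher's API.

    def washer_template_designation(wec, orientation, quad_dominance):
        """Build the TM_NHPQ designation of Eq. (14): N = washer element
        count, H = hole orientation code, P = parity code derived from N,
        Q = quad-dominance code (all codes 1-3 as in the text)."""
        assert 4 <= wec <= 20
        if wec % 4 == 0:
            parity = 1      # divisible by 4
        elif wec % 2 == 0:
            parity = 2      # even
        else:
            parity = 3      # odd
        return f"TM{wec}{orientation}{parity}{quad_dominance}"

    # e.g. washer_template_designation(6, 2, 1) -> "TM6221" (cf. Fig. 20b)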

7.3 Box Sizing

Imprintable boxes are placed at the center of washer holes, and box orientation algorithms have been discussed above. This sub-section deals with the determination of box size. The generation of a quality mesh inside the box-with-hole imprinted virtual face encounters several challenges and conflicts. One of them is mesh quality, defined by a number of quality metrics whose thresholds the templatized mesh must meet. Over and above this, a significant challenge is posed by the conflict of three sizes, namely: (i) the size d_ec defined by the user-driven element count on the washer hole; (ii) the average size d_b defined by the element counts on the box boundary (determined by the template under consideration); and (iii) the global mesh size d_g. The minimum and maximum permissible mesh sizes are denoted by d_mel and d_max respectively. The following inequalities define the practical permissible ranges of sizes related to the imprintable box.

d_g ≥ d_b ≥ d_ec
d_g ≤ d_max
d_mel ≤ d_b ≤ d_max
d_ec > d_mel        (15)

If the user-driven total thickness or offset of the mapped hole is designated by d_Io, d_box is the side length of the box and d_h the diameter of the washer hole, the ideal box dimension range (assuming a square shape) can be expressed as

2 k_tol2 d_max > d_box > 1.25 k_tol1 d_mel        (16)

where k_tol1 = 1.1 and k_tol2 = 0.909 are tolerance factors.
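A small sketch of the size checks implied by Eqs. 15–16 follows; the function names are hypothetical and the tolerance defaults are the values quoted in the text.

    def box_side_range(d_mel, d_max, k_tol1=1.1, k_tol2=0.909):
        """Permissible side-length range for the imprinted square box,
        Eq. (16): 1.25*k_tol1*d_mel < d_box < 2*k_tol2*d_max."""
        return 1.25 * k_tol1 * d_mel, 2.0 * k_tol2 * d_max

    def sizes_consistent(d_g, d_b, d_ec, d_mel, d_max):
        """Check the practical size inequalities of Eq. (15)."""
        return (d_g >= d_b >= d_ec) and (d_g <= d_max) \
            and (d_mel <= d_b <= d_max) and (d_ec > d_mel)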

7.4 Mesher Selection Algorithm

As stated before, each face, as well as its decomposed virtual faces, is meshed by a different mesher. The mesher selection algorithm is illustrated in Fig. 21. As faces are cycled, a map-meshability check is run, and faces worthy of a transfinite mesh are sent to the transfinite mesher. For faces with washer holes, a MOBB is first generated. If the area ratio A_r is equal to or less than the area factor threshold, the orientation or box-turning axis V_ori for all holes i becomes the x- (or u-) component of the MOBB; boxes are oriented accordingly and imprinted. Templatized meshers are used for the box-with-hole virtual faces, while a multizone mesher [13] is used to mesh the box-with-hole boolean face. The multizone mesher is a hybrid quadrilateral mesher that combines paving, cartesian and subdivision algorithms in three distinctly different zones of the face. If A_r exceeds the area factor threshold, a GT crossfield is computed and the V_ori axes for all washer holes i are computed individually. The box-with-hole virtual faces are meshed with templatized meshers as before, while the boolean face is meshed with the CSALF-QD (Combined Subdivision And Loop-Front, Quad-Dominant version) mesher. The third category of faces, which are neither map-meshable nor have washer holes, are meshed with the CSALF-QD mesher. A sketch of this branching logic follows.
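The following sketch condenses the branching of Fig. 21 into code. All attribute and method names on `face` (map-meshability, washer holes, MOBB) are assumptions made for illustration; the mesher labels follow the text.

    def select_meshers(face, area_factor_threshold):
        """Sketch of the mesher-selection algorithm of Fig. 21."""
        if face.is_map_meshable():
            return {"face": "transfinite"}
        if face.washer_holes:
            mobb = face.compute_mobb()
            if mobb.area_ratio <= area_factor_threshold:
                # Orient all boxes along the MOBB x-axis, then imprint.
                boolean_mesher = "multizone"   # paving + cartesian + subdivision [13]
            else:
                # Compute a GT crossfield; each hole gets its own V_ori axis.
                boolean_mesher = "CSALF-QD"
            return {"box_with_hole": "templatized", "boolean": boolean_mesher}
        # Neither map-meshable nor containing washer holes.
        return {"face": "CSALF-QD"}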

Explaining Quad-Dominance

Linear quad-dominant meshes have been the norm in automotive BIW crash analysis for several decades. This choice needs to be explained, as it is not obvious. Given the extremely large number of mesh-pattern, mesh-flow and quality requirements discussed in this paper, it is virtually impossible to generate an all-quad mesh that honors all constraints, maintains a desirable mesh flow and meets all


Fig. 21 Mesher selection algorithm


of the 12–16 quality goals discussed later in Sect. 8. It becomes necessary to insert fewer than 5–8% triangles. However, care needs to be taken in the meshing algorithms to insert them so as to minimize the following mesh irregularities: (i) touching triangles, (ii) triangles on free geometry edges and critical feature lines, and (iii) triangles misaligned against the mesh flow direction. The meshing strategies and algorithms to achieve such orientations are complex and will be reported in a separate paper.

8 Mesh Quality for Crash Analysis

Several monometric mesh quality measures [18, 19] have been reported in the past, which use a single, preferably dimensionless, normalized metric to measure the overall quality of a quadrilateral(-dominant) shell mesh. All of these metrics have value in the sense that they can be used to compare quad-dominant meshes reasonably well. However, crash analysis requires a far more thorough and stringent system of polymetric mesh quality evaluation. This industry does not rely on monometrics, but rather on failure statistics. Table 1 lists all relevant mesh quality parameters that are tracked for BIW meshes, with permissible thresholds reported as an industry average. The generic mesh quality goal is to limit failed elements in all categories to 0.005% or less. One particular type of failure is not permissible at all: minimum element length (d_MEL). E_g denotes the global element size. Crash analysis is a transient dynamic analysis where the critical time step is important for achieving convergence. This critical time step depends on the speed of the longitudinal sound wave through the structural material. Equation 17 provides the relationship between the critical time step (∆t_c) and the minimum element length d_MEL:

∆t_c = d_MEL = (1 + α) A_e / c        (17)

where A_e is the area of the element and c denotes the characteristic length, which for a quadrilateral is the longest diagonal and for a triangular element the longest side length. The non-dimensional factor α = 1 for triangles and 0 for quadrilaterals. If the smallest element length in a panel mesh drops below d_MEL, solution convergence becomes uncertain. Thus, a crash mesh becomes acceptable when all element quality failures listed in Table 1 are within threshold and no element fails d_MEL. A monometric measure, however, is still necessary to compare meshes. Accordingly, for a mesh of N elements of which N_q are quadrilaterals, a metric σ_cn called the Mesh Condition Number is designed as

σ_cn = (w_1 + w_2 + w_3 + w_4) / ( w_1/M_ang + w_2/M_dist + w_3/M_s + w_4/M_qd )        (18a)


where the angle metric M_ang = N_ni/N, N_ni being the number of elements whose included angles deviate from the ideal by more than 10°; M_dist is the harmonic mean of the element scaled Jacobians; M_s = N_s/N, where N_s is the number of elements whose average size is greater than 90% of E_g; and

M_qd = N_q / N.        (18b)
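Equation (18a) is a weighted harmonic-style combination of the four sub-metrics, which a short sketch makes explicit. The uniform default weights are an assumption; the paper leaves the weights w_i free.

    def mesh_condition_number(m_ang, m_dist, m_s, m_qd,
                              w=(1.0, 1.0, 1.0, 1.0)):
        """Mesh Condition Number sigma_cn of Eq. (18a) from the four
        sub-metrics of Eqs. (18a)-(18b)."""
        num = sum(w)
        den = w[0] / m_ang + w[1] / m_dist + w[2] / m_s + w[3] / m_qd
        return num / den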

Figure 22 depicts how the character of a mesh on a single face with washer holes changes from a grossly disoriented form (22a) to a flow-oriented mesh (22c). The ground truth frame field vectors in the localities of the hole centers (marked with blue crosses) are shown in Fig. 22b. While all other quality measures pass in both, the monometric measure clearly separates the non-acceptable, disoriented mesh (where σ_cn = 0.895) from the oriented one (where σ_cn = 0.993). Figure 23 makes a similar comparison on a smaller panel with and without washer imprinting. While both meshes pass all quality metrics listed in Table 1, the mesh without washer imprinting has a mesh condition number of 0.879, while upon imprinting the number rises to 0.955. A performance study is included in Appendix II.

Fig. 22 Comparison of meshes on a panel face with two washer holes, with and without box imprinting


Fig. 23 Comparison of meshes without (a) and with (b) box imprinting

9 Conclusion

This paper addresses a crucial industrial quadrilateral mesh generation problem for which no known solutions are available in the open literature. For automotive car-body crash analyses, it proposes a meshing strategy and related algorithms to imprint box shapes around washer holes. Contrary to traditional methods of geometry modification, which are permanent and can be damaging, a two-dimensional, mesher-native imprinting algorithm is developed: a box-with-washer-layer shape is imprinted inside the 2D mesher. A minimum oriented bounding box and a ground-truth-crossfield-based box orientation control function are developed. Virtual face decomposition is used to multiblock surfaces into box-with-hole and boolean mesh areas. A detailed mesh quality comparison clearly justifies the strength of the present approach.

Table 1 Crash analysis mesh quality measures

Symbol      Quality metric                      Threshold
d_MEL       Minimum element length              ≥ 0.5 E_g
d_MAXEL     Maximum element length              ≤ 1.5 E_g
J_R         Scaled Jacobian (Jacobian ratio)    ≥ 0.48
O_w         Warp                                Solver dependent
σ_SK        Skew                                Solver dependent
σ_or        Aspect ratio                        ≤ 4.0
σ_T         Taper                               Solver dependent
θ_Qmax      Maximum quad angle                  ≤ 150°
θ_Qmin      Minimum quad angle                  ≥ 30°
θ_Tmax      Maximum triangle angle              ≤ 140°
θ_Tmin      Minimum triangle angle              ≥ 20°
T_p         Percentage of triangles             ≤ 5.0

Appendix I

A UML sequence diagram of the proposed architecture for mesher-native shape imprinting


Appendix II

Performance Analysis

In terms of performance, the entire processing time is a natural function of the number of holes on the face for which the user wants a patterned mesh. For a carbody panel with multiple faces, as shown in Fig. 23, the total mesh generation time (which includes parameter space generation, mesh data processing, size and direction map generation, meshing, topological cleaning, smoothing and mesh postprocessing) is 12.7471 CPU seconds. The mesher-native shape imprinting technique, as described in this paper, takes 0.9325 CPU seconds (i.e., 7.3154% of the total meshing time), which is a typical performance figure. For an entire body-in-white crash analysis model, shape-imprinting time around washer holes is typically less than 8% of the total meshing time. Timing data was measured on a Windows x64, 16-processor desktop with the following configuration: Intel(R) Xeon(R) W-2245 CPU @ 3.90 GHz.

References

1. D.W. White, S. Saigal, "Improved imprint and merge for conformal meshing", Proc. 12th International Meshing Roundtable, Ithaca, NY, pp. 793–800 (2002).
2. B.W. Clark, B.W. Hanks, C.D. Ernst, "Conformal Assembly Meshing with Tolerant Imprinting", Proc. 17th International Meshing Roundtable, Springer, Berlin, Heidelberg, pp. 267–281 (2008). https://doi.org/10.1007/978-3-540-87921-3_16
3. T.D. Blacker, "The Cooper Tool", Proc. 5th International Meshing Roundtable, pp. 13–29 (1996).
4. E. Ruiz-Gironés, X. Roca, J. Sarrate, "A new procedure to compute imprints in multi-sweeping algorithms", Proc. 18th International Meshing Roundtable, pp. 281–299 (2009).
5. S. Cai, T.J. Tautges, "Surface Mesh Generation based on Imprinting of S-T Edge Patches", 23rd International Meshing Roundtable, Procedia Engineering vol. 82, pp. 325–337 (2014).
6. J.H.C. Lu, I. Song, W.R. Quadros, "Geometric reasoning in sketch-based volumetric decomposition framework for hexahedral meshing", Engineering with Computers vol. 30, pp. 237–252 (2014). https://doi.org/10.1007/s00366-013-0332-z
7. N. Mukherjee, "Imprint-based Mesh Generation for Computer Aided Design (CAD) Objects", Siemens Digital Industry Software patent application 2021P03555WO (2021).
8. J.E. Makem, H.J. Fogg, N. Mukherjee, "Automatic Feature Recognition Using the Medial Axis for Structured Meshing of Automotive Body Panels", Computer-Aided Design, 120 (2020). https://doi.org/10.1016/j.cad.2020.102845
9. K. Beatty, N. Mukherjee, "Flattening 3D Triangulation for Quality Surface Mesh Generation", Proc. 17th International Meshing Roundtable, Springer, pp. 125–139 (2008).
10. K. Beatty, N. Mukherjee, "A Transfinite Meshing Approach for Body-In-White Analyses", Proc. 19th International Meshing Roundtable, Springer, pp. 49–65 (2010).


11. P. Kinney, "CleanUp: Improving Quadrilateral Finite Element Meshes", Proc. 6th International Meshing Roundtable, pp. 437–447 (1997).
12. N. Mukherjee, "A hybrid, variational 3D smoother for orphaned shell meshes", Proc. 11th International Meshing Roundtable, pp. 379–390 (2002).
13. N. Mukherjee, "Multizone Quadrilateral Mesh Generator for High Mesh Quality", Siemens Digital Industry Software patent (pending) WO2020060561A1 (2018).
14. N. Mukherjee, "CSALF-Q: A Bricolage Algorithm for Anisotropic Quad Mesh Generation", Proc. 20th International Meshing Roundtable, Paris, France, Springer, pp. 489–510 (2011).
15. A.M. Andrew, "Another Efficient Algorithm for Convex Hulls in Two Dimensions", Info. Proc. Letters 9, pp. 216–219 (1979).
16. G.T. Toussaint, "Solving geometric problems with the rotating calipers", Proc. MELECON '83, Athens (1983).
17. A. Dielen, I. Lim, M. Lyon, L. Kobbelt, "Learning Direction Fields for Quad Mesh Generation", Eurographics Symposium on Geometry Processing, Ed. K. Crane and J. Digne, Vol. 40(5) (2021).
18. S.H. Lo, "Generating quadrilateral elements on plane and over curved surfaces", Comput. Struct., 31, pp. 421–426 (1989).
19. S.A. Canann, J.R. Tristano, M.L. Staten, "An Approach to Combined Laplacian and Optimization-Based Smoothing for Triangular, Quadrilateral, and Quad-Dominant Meshes", Proc. 7th International Meshing Roundtable, pp. 309–323 (1998).

Integrable Cross-Field Generation Based on Imposed Singularity Configuration—The 2D Manifold Case

Jovana Jezdimirović, Alexandre Chemin, and Jean-François Remacle

1 Introduction and Related Work

The cross-field guided techniques represent a significant member of the quad meshing methods' family, accompanied by a noteworthy number of methods [2, 5, 33]. The cross-field drives the orientation and the size of the quadrilaterals of a quad mesh, and there exists a profound topological relationship between them [7]. It is important to note that integrability represents a crucial feature of a cross-field: it is used to obtain a conformal parameterization, e.g., [3, 20], and finite-length integral lines, and it even influences the number of singularities, e.g., [28]. Nevertheless, cross-fields are not integrable by default. Here, we mention the works closest to our approach and direct the reader to other prominent methods, e.g., [9, 12, 23, 24], for a more detailed overview. Computing an integrable cross-field can be achieved by, for instance, using the Hodge decomposition [20, 31], reducing the curl [28], obtaining a metric which is flat except at singularities [10, 22, 35], using the trivial connection [11], or computing a global conformal scaling from curvature prescription [3, 7]. Some of the techniques also consider a flat metric with cone singularities but do not consider the additional constraints needed for quadrangulation [26, 27], or obtain a conformal parametrization on the prescribed singularity set but do not take into account the holonomy signature, which may result in a parameterization that is not aligned with the given field


[8]. The work of [20] generates an integrable vector field from a given frame field relying on the Hodge decomposition. At the same time, using the Hodge decomposition as in [31] is computationally intensive and may not preserve the directions, while reducing the curl in a post-processing step as in [28] may not eliminate the curl entirely. A trivial connection [11], a flat metric with zero holonomy around non-contractible cycles, can indeed be used to obtain an integrable direction field with user-specified singularities, but the boundary alignment constraints may not be honored. We compute a metric resembling the one presented in the method relying on Abel-Jacobi theory [10, 22, 35], but without using meromorphic quartic differentials. The techniques of [3, 7] present a concept close to the one developed in this paper, in that the computed size field is obtained from the singularity set. The work of [3] uses an iterative process to identify the locations and curvatures of singularities and computes the target metric by solving linear systems of Poisson equations. Our approach uses an imposed singularity set, as an application-dependent matter, and exploits the commuting of vector fields under the Lie bracket to obtain the guiding size field by solving two linear systems. Unlike the previously mentioned methods, we develop the integrability formulation for both isotropic and anisotropic scaling. Further, our formulation offers a simple manner of computing the relevant size field and effortless imposition of the singularity set. Last but not least, the generated cross-field induces a per-partition bijective parametrization; more details in [17].

Although leaning on heterogeneous approaches, all quad-meshing methods share a common challenge: dealing with the inevitable singularity configuration. A singularity appears where a cross-field vanishes, and it represents an irregular vertex of a quad layout/quad mesh [2], i.e., a vertex which does not have exactly four adjacent quadrilaterals. The singularity configuration is constrained by the Euler characteristic χ, which is a topological invariant of a surface. Moreover, a suboptimal number or location of singularities can have severe consequences: undesirable thin partitions, large distortion, an inadequate number and/or tangential crossings of separatrices, as well as limit cycles (spiraling separatrices) [4, 29, 33]. Our integrable cross-field formulation, with mathematical foundations detailed in Sect. 2, exploits the concept of a user-imposed singularity configuration in order to gain direct control over the number, location, and valence (number of adjacent quadrilaterals) of singularities. The user is entitled to use naturally appearing singularities, obtained by solving a non-linear problem [15, 18, 32, 33] or by using globally optimal direction fields [21], or to impose their own singularity configuration, possibly with high valences, as illustrated in Fig. 1. It is important to note that the choice of singularity pattern is not arbitrary, though; it is under the direct constraint of Abel-Jacobi theory [10, 22, 35] for valid singularity configurations. Here, the singularity configuration is taken as an input and an integrable isotropic cross-field is computed by solving only two linear systems, Sect. 3. Finally, preliminary results of the developed cross-field formulation for isotropic block-structured quad mesh generation are outlined using the 3-step pipeline [17] in Sect. 4.
Computing only one scalar field H (a metric that is flat except at singularities) imposes a strict constraint on the singularities' placement, i.e., fulfilling all Abel-Jacobi


Fig. 1 Three quad layouts of a simple domain. Singularities of valence 3 are colored in blue, valence 5 in red, valence 6 in orange, and valence 8 in yellow

conditions. In practice, imposing a suboptimal distribution of singularities may prevent obtaining a boundary-aligned cross-field, and hence isotropic quad mesh generation, Sects. 4.1 and 4.2. To bypass this issue, we develop a new cross-field formulation on the imposed singularity configuration, which considers the integrability while relaxing the condition of isotropic scaling of the crosses' branches. Here, two independent metrics H_1 and H_2 are computed instead of only one as in the Abel-Jacobi framework, enabling integrable 2D cross-field generation with anisotropic scaling without modifying the singularity configuration imposed by the user, Sect. 5. Lastly, final remarks and some potential applications are discussed in Sect. 6.

2 Cross-Field Computation on Prescribed Singularity Configuration

We define a 2D cross c as a set of 2 unit coplanar orthogonal vectors and their opposites, i.e., c = {u, v, −u, −v} with u · v = 0 and |u| = |v| = 1. These vectors are called the cross' branches.


A 2D cross-field C_M on a 2D manifold M is a map C_M : X ∈ M → c(X), and the standard approach to compute a smooth boundary-aligned cross-field is to minimize the Dirichlet energy:

min_{C_M} ∫_M ||∇C_M||²        (1)

subject to the boundary condition c(X) = g(X) on ∂M, where g is a given function. The classical boundary condition for cross-field computation is that ∀P ∈ ∂M, with T(P) a unit tangent vector to M at P, one branch of c(P) has to be collinear to T(P). In the general case, there exists no smooth cross-field matching this boundary condition. The cross-field will present a finite number of singularities S_j, located at X_j and of index k_j, related to the concept of valence as k_j = 4 − valence(S_j). We define a singularity configuration as the set

S = {S_j, j ∈ [1, N], N ∈ Z}.

In the upcoming section, a method to compute a cross-field C_M matching a given singularity configuration S is developed. In other words, we are looking for C_M such that:

• if X belongs to ∂M, at least one branch of C_M(X) is tangent to ∂M,
• the singularities of C_M match the given S (the same number, locations, and indices).        (2)

Before developing the method to compute such a cross-field, a few operators on the 2D manifold have to be defined.

2.1 Curvature and Levi-Civita Connection on the 2D Manifold

Let E³ be the Euclidean space equipped with a Cartesian coordinate system {xⁱ, i = 1, 2, 3}, and M be an oriented two-dimensional manifold embedded in E³. We note n(X) the unit normal to M at X ∈ M. It is assumed that the normal field n is smooth and that the Gaussian curvature K is defined and smooth on M. If γ(s) is a curve on M parametrized by arc length, the Darboux frame is the orthonormal frame defined by

T(s) = γ′(s)        (3)
n(s) = n(γ(s))        (4)
t(s) = n(s) × T(s).        (5)

One then has the differential structure

d [T]   [  0     κ_g    κ_n ] [T]
  [t] = [ −κ_g    0     τ_r ] [t] ds        (6)
  [n]   [ −κ_n  −τ_r     0  ] [n]

where κ_g is the geodesic curvature of the curve, κ_n the normal curvature of the curve, and τ_r the relative torsion of the curve. T is the unit tangent, t the tangent normal and n the unit normal. Arbitrary vector fields V and W ∈ E³ can be expressed as

V = Vⁱ E_i,   W = Wⁱ E_i

in the natural basis vectors {E_i, i = 1, 2, 3} of this coordinate system, and we shall note

⟨V, W⟩ = Vⁱ Wʲ δ_ij,   ||V|| = √⟨V, V⟩

the Euclidean metric and the associated norm for vectors. The Levi-Civita connection on E³ in Cartesian coordinates is trivial (all Christoffel symbols vanish), and one has

∇ᴱ_V W = (∇_V Wⁱ) E_i.

The Levi-Civita connection on the Riemannian submanifold M, now, is not trivial. It is the orthogonal projection of ∇ᴱ_V onto the tangent bundle TM, so that one has

∇_V W = P_TM[∇ᴱ_V W] = (∇_V Wⁱ) P_TM[E_i]        (7)

where P_TM : E³ → TM is the orthogonal projection operator onto TM. An arbitrary orthonormal local basis (u_X, v_X, n) for every X ∈ M can be represented through the Euler angles (ψ, γ, φ), which are C¹ on M, with the shorthands s_φ ≡ sin φ and c_φ ≡ cos φ, as:

u_X = ( −s_φ s_ψ c_γ + c_φ c_ψ ,   s_φ c_ψ c_γ + s_ψ c_φ ,   s_φ s_γ )ᵀ
v_X = ( −s_φ c_ψ − s_ψ c_φ c_γ ,   −s_φ s_ψ + c_φ c_ψ c_γ ,   s_γ c_φ )ᵀ
n   = (  s_ψ s_γ ,   −s_γ c_ψ ,   c_γ )ᵀ        (8)

in the vector basis of E³.


2.2 Conformal Mapping

We are looking for a conformal mapping

F : P → M ⊂ E³,   P = (ξ, η) ↦ X = (x¹, x², x³)        (9)

where P is a parametric space. As finding F right away is a difficult problem, one focuses instead on finding the 3 × 2 Jacobian matrix of F,

J(P) = (∂_ξ F(P), ∂_η F(P)) ≡ (ũ(P), ṽ(P)),        (10)

where ũ, ṽ ∈ TM are the column vectors of J. The mapping F being conformal, the columns of J(P) have the same norm L(P) ≡ ||ũ(P)|| = ||ṽ(P)|| and are orthogonal to each other, ũ(P) · ṽ(P) = 0. We can also write:

J = L(u, v),   where   u = ũ/||ũ||,   v = ṽ/||ṽ||,   n = u ∧ v.        (11)

Recalling that finding a conformal transformation F is challenging, we will from now on be looking for the Jacobian J, i.e., the triplet (u, v, L). The triplet (u, v, n) forms a set of 3 orthonormal basis vectors and can be seen as a rotation of (u_X, v_X, n) about the direction n. Therefore, a 2D cross c(X), X ∈ M, can be defined with the help of a scalar field θ, where u = R_{θ,n}(u_X) and v = R_{θ,n}(v_X), and the local manifold basis (u_X, v_X, n), as:

u = c_θ u_X + s_θ v_X,   v = −s_θ u_X + c_θ v_X.        (12)

By using the Euler angles (ψ, γ, φ) and θ, the triplet (u, v, n) can also be expressed as:

u = ( −s_{θ+φ} s_ψ c_γ + c_{θ+φ} c_ψ ,   s_{θ+φ} c_ψ c_γ + s_ψ c_{θ+φ} ,   s_{θ+φ} s_γ )ᵀ
v = ( −s_{θ+φ} c_ψ − s_ψ c_{θ+φ} c_γ ,   −s_{θ+φ} s_ψ + c_{θ+φ} c_ψ c_γ ,   s_γ c_{θ+φ} )ᵀ
n = (  s_ψ s_γ ,   −s_γ c_ψ ,   c_γ )ᵀ        (13)


It is important to note that u and v are the two branches of the cross-field C_M we are looking for. The projection operator P_TM introduced in Eq. (7) then simply amounts to disregarding the component of vectors along n. For a vector field w defined on M, one can write by derivation of Eq. (13)

∇ᴱ_w u = v ∇_w(θ + φ) + s_{θ+φ} n ∇_w γ + (c_γ v − s_γ c_{θ+φ} n) ∇_w ψ
∇ᴱ_w v = −u ∇_w(θ + φ) + c_{θ+φ} n ∇_w γ + (−c_γ u + s_γ s_{θ+φ} n) ∇_w ψ        (14)
∇ᴱ_w n = −(s_{θ+φ} u + c_{θ+φ} v) ∇_w γ + s_γ (c_{θ+φ} u − s_{θ+φ} v) ∇_w ψ

and hence, using Eq. (7), the expression of the covariant derivatives on the submanifold M is:

∇_w u = v ∇_w(θ + φ) + c_γ v ∇_w ψ
∇_w v = −u ∇_w(θ + φ) − c_γ u ∇_w ψ.        (15)

.

= −(u∇u (θ + φ) + v∇v (θ + φ)) − cγ (u∇u ψ + v∇v ψ),

(16)

which will be used in the upcoming section.

3 Integrability Condition with Isotropic Scaling The mapping .F , now, defines a conformal parametrization of .M if the columns of J commute as vector fields, i.e., if the differential condition

.

˜ v˜ ] = ∇u˜ v˜ − ∇v˜ u˜ = [Lu, Lv] 0 = [u,

.

(17)

is verified. Developing the latter expression and posing for convenience . L = e H , it becomes .0 = v∇u H − u∇v H + [u, v], {

and then .

∇u H = − < v, [u, v] > ∇v H = < u, [u, v] >

(18)

which after the substitution of Eq. (16) gives { .

∇u H = ∇v θ + ∇v φ + cγ ∇v ψ −∇v H = ∇u θ + ∇u φ + cγ ∇u ψ.

(19)

350

J. Jezdimirovi´c et al.

In order to obtain the boundary value problem for. H , the partial differential equation (PDE) governing it will be expressed on .∂M as well as on the interior of .M.

3.1

.

H PDE on the Boundary

As the boundary.∂M is represented by curves on.M, it is possible to parametrize them by arc length and thus associate for each .X ∈ ∂M a Darboux frame .(T(X), t(X), .n(X)). As we are looking for a cross-field .CM fulfilling conditions (2), the triplet .(u, v, n) can be identified as .(T(X), t(X), n(X)). One then has: .

∂s T = κg t + κn n ≡ ∇u u = v∇u φ + sφ n∇u γ + (cγ v − sγ cφ n)∇u ψ {

where from follows .

κg = ∇u φ + cγ ∇u ψ κn = sφ ∇u γ − sγ cφ ∇u ψ.

(20)

Using Eq. (19) it becomes: ∇t H = −κg ,

(21)

.

the result that matches exactly the one found in the planar case [17].

3.2

.

H PDE in the Smooth Region on the Interior of . M

To find the PDE governing . H , let’s assume the Jacobian . J is smooth (and therefore H ) in a vicinity .V of .X ∈ M. We choose .U ⊂ V such as .X ∈ U, .∂U such as unit tangent vector .T0 to .∂U0 verifies .T0 = v, .T1 to .∂U1 verifies .T1 = u, .T2 to .∂U2 verifies .T2 = −v, .T3 to .∂U3 verifies .T3 = −u. Thus we have a submanifold .U ⊂ M on which . H is smooth, and such as .∂U = ∂U0 ∪ ∂U1 ∪ ∂U2 ∪ ∂U3 . Darboux frames of .∂U (Fig. 2) are:

.

⎧ (T, t, n) ⎪ ⎪ ⎨ (T, t, n) . ⎪ (T, t, n) ⎪ ⎩ (T, t, n)

= ( v, −u, n) = (−u, −v, n) = (−v, u, n) = ( u, v, n)

on ∂U0 on ∂U1 on ∂U2 on ∂U3

(22)

˜ v˜ ) to be a local coordinate system, we recall Eq. (21) demonstrated in For .(u, Sect. 3.1: .κg = −∇t H , with t = n ∧ T (23)

Integrable Cross-Field Generation Based on Imposed Singularity Configuration …

351

Fig. 2 Vicinity of .X considered

and the divergence theorem stating that: ∫

∫ .

∂U

∇t H = −

U

∆H.

(24)

Applying the Gauss-Bonnet theorem on .U leads to: ∫

∫ .

U

K dU +

∂U

κg dl + 4

π = 2πχ(U) 2

where . K and .χ(U) are respectively the Gaussian curvature and the Euler characteristic of .U. As .χ(U) = 1 and using Eqs. (23) and (24), it becomes: ∫

∫ .

U

K dU = −

U

∆H dU

(25)

which holds for any chosen .U. Hence, there is: ∆H = −K , if J is smooth.

.

(26)

In the general case, it is impossible for . J to be smooth everywhere. Indeed, let’s assume .M to be with smooth boundary .∂M (i.e. with no corners) and of the Euler characteristic .χ(M) = 1. If we assume . J is smooth everywhere, it becomes: { ∫ .

K dM + 2πχ(M) M

∫ ∂M

κg dl = 0 = 2π

(27)

which is not in accordance with the Gauss-Bonnet theorem. Therefore, . J has to be singular somewhere in .M. The goal is to build a usable parametrization of .M, i.e., being able to use this parametrization to build a quad mesh of .M. Therefore, we will allow . J to be singular on a finite number . N of points .S j , . j ∈ [|0, N − 1|] and show that this condition is sufficient for this problem to always have a unique solution.

352

J. Jezdimirovi´c et al.

Fig. 3 The disk with four singularities of index .1

3.3

.

H PDE at Singular Points

For now, we know boundary conditions for . H , Eq. (21), and the local equation in smooth regions, Eq. (26). The only thing left is to determine a local PDE governing . H at singular points .{S j }. We define .k j as the index of singularity .S j . For this, we are making two reasonable assumptions: { .

∆H (S j ) = −K (S j ) + α j δ(S j ) ki = k j ⇒ αi = α j ,

(28)

where .α j is a constant, and .δ is the Dirac distribution. We consider the disk .M represented in Fig. 3 with .4 singularities .S j , j ∈ [|0, 3|] of index .k j = 1. The Gauss-Bonnet theorem states that: ∫ ∫ . K dM + κg dl = 2πχ(M). M

∂M

Replacing . K and .κg by their values in Eqs. (21) and (26), and using the hypothesis (28) we get .α = 2π 14 . For the singularity of index .1 we have: 1 ∆H (S j ) = −K (S j ) + 2π δ(S j ). 4

.

Using the same idea, we can generalize the following: ∆H (S j ) = −K (S j ) + 2π

.

kj δ(S j ). 4

(29)

Integrable Cross-Field Generation Based on Imposed Singularity Configuration …

353

Fig. 4 . H function obtained on a closed manifold

3.4 Boundary Value Problem for . H To sum up, the equations governing . H on .M are: {

k

∆H = −K + 2π 4j δ(S j ) on M on ∂M. ∇t H = −κg

.

(30)

Equation (30) being a Laplace equation, with Neumann boundary conditions respecting divergence theorem, it admits a unique solution up to an arbitrary additive constant. A triangulation .MT of the manifold .M is generated and problem (30) is solved using a finite element formulation with order 1 Lagrange elements. Once . H is determined (illustrated in Fig. 4), the next step is to retrieve . J ’s orientation, detailed in the next section. The fact that . H is only known up to an additive constant is not harmful as only .∇ H will be needed to retrieve . J orientation.

3.5 Retrieving Crosses Orientation From H In order to get an orientation at a given point .X ∈ M, a local reference basis (uX , vX , n) in .X is recalled. Equation (19) imposes that:

.

{ .

∇u H = ∇v (φ + θ) + cγ ∇v ψ ∇v H = −∇u (φ + θ) − cγ ∇u ψ

(31)

∇uX H = ∇vX (φ + θ) + cγ ∇vX ψ ∇vX H = −∇uX (φ + θ) − cγ ∇uX ψ

(32)

which is equivalent to: { .

and eventually gives:


∇_{u_X} θ = −∇_{v_X} H − ∇_{u_X} φ − c_γ ∇_{u_X} ψ = P
∇_{v_X} θ =  ∇_{u_X} H − ∇_{v_X} φ − c_γ ∇_{v_X} ψ = Q        (33)

which is linear in θ. Using the Kelvin-Stokes theorem, it is possible to show that θ exists if and only if we have:

∇_{u_X} Q − ∇_{v_X} P = 0.        (34)

Using Eq. (33) we obtain:

∇_{u_X} Q − ∇_{v_X} P = ∆H + ∇_{v_X}(c_γ ∇_{u_X} ψ) − ∇_{u_X}(c_γ ∇_{v_X} ψ)
                      = −K + ∇_{v_X}(c_γ ∇_{u_X} ψ) − ∇_{u_X}(c_γ ∇_{v_X} ψ).        (35)

We know that, for 2D manifolds embedded in R³, the Gaussian curvature K is equal to the Jacobian of the Gauss map of the manifold [30]. We have:

∇_{u_X} n = s_γ ∇_{u_X} ψ (c_ψ, s_ψ, 0)ᵀ − ∇_{u_X} γ (−s_ψ c_γ, c_ψ c_γ, s_γ)ᵀ
∇_{v_X} n = s_γ ∇_{v_X} ψ (c_ψ, s_ψ, 0)ᵀ − ∇_{v_X} γ (−s_ψ c_γ, c_ψ c_γ, s_γ)ᵀ        (36)

Therefore we also have:

K = s_γ (∇_{v_X} ψ ∇_{u_X} γ − ∇_{u_X} ψ ∇_{v_X} γ).        (37)

Developing Eq. (35) and substituting K with the right-hand side of Eq. (37), we get:

−K + ∇_{v_X}(c_γ ∇_{u_X} ψ) − ∇_{u_X}(c_γ ∇_{v_X} ψ)
  = −K + c_γ ∇_{v_X} ∇_{u_X} ψ − s_γ ∇_{v_X} γ ∇_{u_X} ψ − c_γ ∇_{u_X} ∇_{v_X} ψ + s_γ ∇_{u_X} γ ∇_{v_X} ψ = 0.        (38)

As Eq. (34) is verified, we know that there exists a scalar field θ verifying Eq. (33), and therefore that our problem has a unique solution. In order to solve Eq. (33), we first need to obtain a smooth global basis (u_X, v_X, n) on M. This is possible by generating a branch cut L, as defined below, and computing a smooth global basis (u_X, v_X, n) on M allowing discontinuities across L. A branch cut is a set L of curves of a domain M that do not form any closed loop and that cut the domain in such a way that it is impossible to find any closed loop in M \ L that encloses one or several singularities, or an internal boundary. As we already have a triangulation of M, the branch cut L is in practice simply a set of edges of the triangulation.


Fig. 5 Edges of the branch cut L are represented in blue. There exists no closed loop in M \ L enclosing one or several singularities

The branch cut is generated with the method described in [17], which is based on [6]. An example of a generated branch cut is presented in Fig. 5. Once a branch cut L is available, the field θ can be computed by solving the linear equations (33). With equations (33), θ is known up to an additive constant. For the problem to be well-posed, the value of θ has to be imposed at one point of the domain M. The chosen boundary condition consists in fixing the angle θ at one arbitrary point X_BC ∈ ∂M so that C_M(X_BC) has one of its branches collinear with T(X). The problem can be rewritten as the well-posed Eq. (39) and is solved using the finite element method on the triangulation M_T with order one Crouzeix-Raviart elements; this kind of element has been shown to be more efficient for cross-field representation [18].

P_TM(∇θ) = P_TM(n × ∇H − ∇φ − c_γ ∇ψ)   in M
θ(X_BC) = θ_{X_BC}                       for an arbitrary X_BC ∈ ∂M        (39)
θ discontinuous on L

It is important to note that for Eq. (39) to be well-posed, the θ value can only be imposed at a single point. A consequence is that if M has more than one boundary (∂M = ∂M_1 ∪ ∂M_2 ∪ · · · ∪ ∂M_n), the resulting cross-field is guaranteed to be tangent to the boundary ∂M_i such that X_BC ∈ ∂M_i, which does not necessarily hold for the other boundaries ∂M_j, j ≠ i, as detailed in Sect. 4.1. Once the H and θ scalar fields are computed on M (illustrated respectively in Figs. 4 and 6), the cross-field C_M can be retrieved for all X ∈ M:

c(X) = {u_k = R_{θ + kπ/2, n}(u_X), k ∈ [0, 3]}.        (40)
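Equation (40) is a rotation of the reference branch about the normal; the following sketch evaluates it pointwise using Rodrigues' rotation formula. The function name and argument layout are assumptions for illustration.

    import numpy as np

    def cross_at(theta, u_X, n):
        """Retrieve the four branches of the cross at one point, Eq. (40):
        u_k = R_{theta + k*pi/2, n}(u_X), rotation about the unit normal n."""
        branches = []
        for k in range(4):
            a = theta + k * np.pi / 2.0
            # Rodrigues rotation of u_X about unit axis n by angle a
            # (the n-parallel term vanishes since u_X is tangent, n.u_X = 0).
            u = (u_X * np.cos(a) + np.cross(n, u_X) * np.sin(a)
                 + n * np.dot(n, u_X) * (1.0 - np.cos(a)))
            branches.append(u)
        return branches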

4 Preliminary Results

As a proof of concept, the cross-field computation based on the imposed singularity configuration is included in the 3-step quad meshing pipeline of [17] (illustrated in Figs. 7 and 8):


Fig. 6 Scalar field θ obtained from the scalar field H (represented in Fig. 4)

Fig. 7 Quad mesh on a 2-sphere with a natural singularity configuration forming an anticube. The singularity configuration comes from solving a non-linear problem, i.e., by using the MBO algorithm from [33]

Step 1: impose a singularity configuration, i.e., the positions and valences of singularities (see [17]).
Step 2: compute a cross-field with the prescribed singularity configuration of Step 1 on an adapted mesh (singularities are placed in refined regions), by solving only two linear systems (Sect. 3).
Step 3: compute a quad layout on the accurate cross-field of Step 2, and generate a fully block-structured isotropic quad mesh (see [17, 18]).

The presented pipeline includes an automatic check that the singularity configuration obeys the Euler characteristic of the surface, but it does not inspect all Abel-Jacobi conditions [10, 22, 35]. Further, models of industrial complexity would require a more robust quad layout generation technique than the one followed here [17, 18]. The final quad mesh is isotropic, obtained from the quad layout via a per-partition bijective parameterization aligned with the smooth cross-field (singularities can only be located at corners of the partitions) [17], and follows the size map implied by H, i.e., the element edge length is s = e^H. In case the application demands an anisotropic quad mesh, two sizing fields (H_1, H_2) for the cross-field must be computed; more details in Sect. 5.


Fig. 8 Quad mesh on a 2-sphere with an imposed singularity configuration forming a cube

4.1 Valid Singularity Configurations for Conformal Quad Meshing

The singularity configuration, including both positions and valences, plays a crucial role in the generation of conformal quad meshes [14]. It is essential to note that not all user-imposed singularity configurations matching the Euler characteristic of the surface will be valid for conformal quad meshing, Fig. 9. The central cause for this lies in the fact that the combination of choices of valences and holonomy is not arbitrary [25]. Relevant findings on the non-existence of certain quadrangulations can be found in [1, 16, 19]. The work of [2] presents a formula for determining the numbers and indices of singularities, and [13] presents their possible combinations in conforming quad meshes. The latter authors also show that the presented formula is necessary but not sufficient for quad meshes, but neither of these works proves rules for the placement of singularities. Recently, sufficient and necessary conditions for a valid singularity configuration of a conformal quad mesh were presented in the framework based on Abel-Jacobi theory [10, 22, 35]. The formulation developed here is under its direct constraint. In practice, imposing a singularity configuration fulfilling the Euler characteristic constraint ensures that the flat metric, i.e., the H field, can be obtained. If this singularity configuration also verifies the holonomy condition, the cross-field will be aligned with all boundaries and consistent across the cut graph. We recall here that our formulation entitles the user to impose their own singularity configuration, which in practice can contain a suboptimal distribution of singularities. As a consequence, the computed cross-field may not be aligned with all boundaries, Fig. 9d, preventing the generation of a final conformal isotropic quad mesh. To bypass this issue, the following section develops an integrable cross-field formulation with two independent metrics (which are flat except at singularities), instead of only one as presented for the Abel-Jacobi conditions.


Fig. 9 Imposing a 3–5 singularity configuration on a torus. a The boundary marked in blue. b The boundary and the cut graph marked in black. c Consistent cross-field across the cut graph. d Cross-field not aligned with the boundary

4.2 Dealing with Suboptimal Distribution of Singularities

The issue of a suboptimal distribution of singularities imposes the need to develop a new cross-field formulation on the imposed singularity configuration, one which considers the integrability while relaxing the condition of isotropic scaling of the crosses' branches. More specifically, the integrability condition, along with computing only one scaling field H, ||ũ|| = ||ṽ||, imposes a strict constraint on the valid singularity configurations, i.e., the need to fulfill the Abel-Jacobi theorem. Therefore, two sizing fields L_1 = ||ũ|| and L_2 = ||ṽ|| are introduced, and the upcoming section presents the mathematical foundations for the generation of an integrable cross-field with anisotropic scaling on 2D manifolds. As will be shown in the following, this setting presents promising results in generating an integrable and boundary-aligned cross-field on an imposed set of singularities, even when their distribution does not fulfill all Abel-Jacobi conditions. Only for the sake of visual comprehensiveness, the motivational examples presented in Figs. 10, 11, 12, 13, 14 and 15 are planar.

5 Integrability Condition with Anisotropic Scaling

As explained previously (Sect. 3), a cross-field C_M is integrable if and only if ũ and ṽ commute under the Lie bracket. In other words, the condition

0 = [ũ, ṽ] = ∇_ũ ṽ − ∇_ṽ ũ = [L_1 u, L_2 v]        (41)

Fig. 10 Obtained quad layouts on an imposed set of singularities that do not respect the location condition from the Abel-Jacobi theorem. a Quad layouts obtained using the integrable cross-field with isotropic scaling: not aligned with boundaries (marked with "!") and demonstrating the presence of t-junctions (marked with "T") generated by cutting the limit cycles upon their first orthogonal intersection. b Quad layouts obtained by imposing the θ value along the cut graph and boundary following the method presented in [6]: boundary aligned, but demonstrating the presence of t-junctions (marked with "T") generated by cutting the limit cycles upon their first orthogonal intersection. c Quad layouts obtained using the integrable cross-field with anisotropic scaling: boundary aligned and without t-junctions

Fig. 11 Planar square

where

L_1 = ||ũ||,   L_2 = ||ṽ||        (42)

and

u = ũ/||ũ||,   v = ṽ/||ṽ||,        (43)

has to be verified.


Fig. 12 Left: a quad layout obtained using the integrable cross-field with isotropic scaling: not aligned with all boundaries (marked with "!"). Right: a quad layout obtained by imposing the θ value along the cut graph and boundary following the method presented in [6]: boundary-aligned but with t-junctions (marked with "T") generated by cutting the limit cycles upon their first orthogonal intersection

Fig. 13 Left: a quad layout obtained at initialization: boundary-aligned but with t-junctions (marked with "T"). Right: the integration error density on Ω. The total integration error is E = 0.307898

Developing the latter expression and posing for convenience L_1 = e^{H_1} and L_2 = e^{H_2}, it becomes

0 = v ∇_u H_2 − u ∇_v H_1 + [u, v],

and then

∇_u H_2 = −⟨v, [u, v]⟩
∇_v H_1 =  ⟨u, [u, v]⟩        (44)

which after the substitution of Eq. (16) gives:

Fig. 14 Left: a quad layout obtained from the integrable cross-field with anisotropic scaling: orthogonal to all boundaries and without t-junctions. Right: the integration error density on Ω. The total integration error at convergence is E = 1.45639e−06

Fig. 15 Left: a quad layout obtained at initialization, with t-junctions (marked with "T"); the total integration error is E = 0.842169. Right: a quad layout obtained at convergence; the total integration error is E = 0.013597

 ∇_u H_2 = ∇_v θ + ∇_v φ + c_γ ∇_v ψ
−∇_v H_1 = ∇_u θ + ∇_u φ + c_γ ∇_u ψ.        (45)

It is important to note that the three scalar fields (θ, H_1, H_2) completely define the cross-field C_M, as (ψ, γ, φ) are known since they define the local manifold basis (t, T, n). From Eq. (45), we can define the cross-field integrability error E as:

E²(θ, H_1, H_2) = ∫_M (∇_u H_2 − ∇_v θ − ∇_v φ − c_γ ∇_v ψ)² + (∇_v H_1 + ∇_u θ + ∇_u φ + c_γ ∇_u ψ)² dM.        (46)
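In a discrete setting, Eq. (46) becomes a sum of per-element residuals. The following sketch shows one plausible evaluation, assuming the directional derivatives along the cross branches have been precomputed per triangle; the dictionary layout is an assumption made for illustration.

    import numpy as np

    def integrability_error(grads, areas, c_gamma):
        """Accumulate E^2 of Eq. (46) from per-triangle directional
        derivatives of H1, H2, theta, phi and psi along u and v."""
        r1 = (grads["u_H2"] - grads["v_theta"]
              - grads["v_phi"] - c_gamma * grads["v_psi"])
        r2 = (grads["v_H1"] + grads["u_theta"]
              + grads["u_phi"] + c_gamma * grads["u_psi"])
        return float(np.sum((r1**2 + r2**2) * areas))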


The problem of generating an integrable cross-field with anisotropic scaling can therefore be reduced to finding three scalar fields (θ, H_1, H_2) verifying E(θ, H_1, H_2) = 0. The process of solving this problem presents several difficulties. First, (θ, ψ, γ, φ) are multivalued functions. This kind of difficulty is commonly encountered in cross-field generation and is tackled here by cutting the domain M along a generated cut graph. Then, minimizing E with respect to (θ, H_1, H_2) is an ill-posed problem: indeed, there are no constraints on ∇_u H_1 and ∇_v H_2. This is the main obstacle to generating an integrable 2D cross-field with anisotropic scaling. A simple approach to solve this problem is proposed here. In order to do so, it is needed to:

• be able to generate a boundary-aligned cross-field matching the imposed singularity configuration,
• compute (H_1, H_2) minimizing E for an imposed θ̄,
• compute θ minimizing E for an imposed (H̄_1, H̄_2).

The final resolution solver (Algorithm 3), proposed in Sect. 5.4, allows finding a local minimum of E around an initialization (θ⁰, H_1⁰, H_2⁰).

5.1 Local Manifold Basis Generation and θ Initialization

As exposed earlier, in order to completely define a unitary cross-field C_M with a scalar field θ, it is necessary to define a smooth global basis (t, T, n) on M. This is possible by generating a branch cut L and computing a smooth global basis (t, T, n) on M allowing discontinuities across L. The branch cut is generated using the method described in [6]. A local basis (t, T, n) on M can be generated with any cross-field method. Such a local basis will be smooth and will not show any singularities, as discontinuities are allowed across the cut graph L and no boundary alignment is required. Once the cut graph L and the local basis (t, T, n) are generated, it is possible to compute θ only if:

• the θ values on ∂M are known,
• the θ jump values across L are known.

These can be found using methods described in [6], or can be deduced from a low-computational-cost cross-field generation detailed in [17].

5.2 Computing (H_1, H_2) From an Imposed θ̄

For a given θ̄, it is possible to find (H_1, H_2) minimizing E. It is important to note that, in general, there does not exist a couple (H_1, H_2) such that E = 0. Minimizing E with imposed θ̄ means finding the couple (H_1, H_2) for which the integrability error is minimal. The problem to solve is the following:

Find (H̄_1, H̄_2) such that
E(θ̄, H̄_1, H̄_2) = min over (H_1, H_2) ∈ (C¹(M))² of E(θ̄, H_1, H_2).        (47)

Let us define S as:

S = {(H̄_1, H̄_2) | (H̄_1, H̄_2) verifies Eq. (47)}.

For this problem to be well-posed, a necessary condition is to have 2 independent scalar equations involving ∇H_1, and the same for ∇H_2. We can note that in our case there are no constraints on ∇_u H_1 and ∇_v H_2. Therefore, there is only 1 scalar equation involving ∇H_1, and 1 scalar equation involving ∇H_2. As a consequence, the problem we are looking to solve is ill-defined, S will not be a singleton and, in the general case, there will be more than one solution to problem (47). To discuss this problem in detail, we will use the simple example of a planar domain Ω illustrated in Fig. 11. In this case, the unitary frame field C_Ω obtained with common methods is:

C_Ω = {c(X) = {x, y, −x, −y}, X ∈ Ω}        (48)

which is equivalent to:

θ̄ = 0.        (49)

As in this case the domain Ω is planar, we also have:

ψ = γ = φ = 0.        (50)

Equation (45) becomes:

 ∇_x H_2 = 0
−∇_y H_1 = 0        (51)

which gives:

H_1(x, y) = f(x), ∀(x, y) ∈ Ω, ∀f ∈ C¹(R)
H_2(x, y) = g(y), ∀(x, y) ∈ Ω, ∀g ∈ C¹(R).        (52)

Knowing this, we finally have S = (C¹(R))². There is an infinity of solutions, confirming the fact that problem (47) is ill-defined. The solution we could expect to obtain for quad meshing purposes would be:


S = {(H_1, H_2) = (0, 0)},        (53)

which is equivalent to (L_1, L_2) = (1, 1). Based on this simple example, we can deduce that problem (47) has to be regularized in order to reduce the solution space. One way to achieve this goal is to add a constraint on the (H_1, H_2) fields we are looking for. A natural one is to look for (H_1, H_2) verifying Eq. (47) and being as smooth as possible. With this constraint, the problem to solve becomes:

Find (H̄_1, H̄_2) ∈ S such that
∫_M ||∇H̄_1||² + ||∇H̄_2||² dM = min over (H_1, H_2) ∈ S of ∫_M ||∇H_1||² + ||∇H_2||² dM.        (54)

Adding this constraint transforms the linear problem (47) into a non-linear one (54). Algorithm 1 is used to solve Eq. (54), leading to a local minimum of E, (θ̄, H̄_1, H̄_2), close to (θ̄, H_1⁰, H_2⁰).

k = 0
initial guess H_1⁰, H_2⁰
compute ε⁰ = E(θ̄, H_1⁰, H_2⁰)
while ε^k < ε^(k−1) do
    k = k + 1
    find (H_1^k, H_2^k) minimizing:
        E(θ̄, f_1, f_2) + ∫_M ||∇f_1 − ∇H_1^(k−1)||² + ||∇f_2 − ∇H_2^(k−1)||² dM,   (f_1, f_2) ∈ (C¹(M))²
    compute ε^k = E(θ̄, H_1^k, H_2^k)
end

Algorithm 1: Regularized solver for (H_1, H_2)

5.3 Computing θ From an Imposed (H̄_1, H̄_2)

For an imposed couple (H̄_1, H̄_2), it is possible to find θ minimizing E. The problem to solve is formalized as:

Find θ̄ ∈ C¹(M) such that
E(θ̄, H̄_1, H̄_2) = min over θ ∈ C¹(M) of E(θ, H̄_1, H̄_2).        (55)

This problem is non-linear too, since ∇_v H_1 and ∇_u H_2 show a non-linear dependence on θ. Algorithm 2 is used to solve Eq. (55), leading to a local minimum of E, (θ̄, H̄_1, H̄_2), close to (θ⁰, H̄_1, H̄_2).


k = 0
initial guess θ⁰
deduce (u⁰, v⁰) from θ⁰
compute ε⁰ = E(θ⁰, H̄_1, H̄_2)
while ε^k < ε^(k−1) do
    k = k + 1
    find θ^k minimizing:
        E^k(f, H̄_1, H̄_2) = ∫_M (∇_{u^(k−1)} H̄_2 − ∇_{v^(k−1)} f − ∇_{v^(k−1)} φ − c_γ ∇_{v^(k−1)} ψ)²
                          + (∇_{v^(k−1)} H̄_1 + ∇_{u^(k−1)} f + ∇_{u^(k−1)} φ + c_γ ∇_{u^(k−1)} ψ)² dM,   f ∈ C¹(M)
    deduce (u^k, v^k) from θ^k
    compute ε^k = E(θ^k, H̄_1, H̄_2)
end

Algorithm 2: Solver for θ

5.4 Minimizing the Integrability Error E with Respect to (θ, H_1, H_2)

Using the three steps exposed previously, it is possible to find a local minimum in the vicinity of an initialization (θ⁰, H_1⁰, H_2⁰) following Algorithm 3.

k = 0
initial guess θ⁰ using the method presented in Sect. 5.1
compute (H_1⁰, H_2⁰) from θ⁰ using Alg. 1
compute ε⁰ = E(θ⁰, H_1⁰, H_2⁰)
while ε^k < ε^(k−1) do
    k = k + 1
    compute θ^k from (H_1^(k−1), H_2^(k−1)) using Alg. 2
    compute (H_1^k, H_2^k) from θ^k using Alg. 1
    compute ε^k = E(θ^k, H_1^k, H_2^k)
end

Algorithm 3: Solver for (θ, H_1, H_2)
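The alternating structure of Algorithm 3 can be captured in a short driver, shown below under the assumption that the two sub-solvers (playing the roles of Algorithms 1 and 2) and the error functional E are supplied by the application; all names are hypothetical.

    def solve_theta_H1_H2(theta0, solve_H_fields, solve_theta, energy):
        """Skeleton of Algorithm 3: alternate the two sub-solvers until
        the integrability error E stops decreasing."""
        theta = theta0
        H1, H2 = solve_H_fields(theta)          # Algorithm 1
        err = energy(theta, H1, H2)
        while True:
            theta_new = solve_theta(H1, H2)     # Algorithm 2
            H1_new, H2_new = solve_H_fields(theta_new)
            err_new = energy(theta_new, H1_new, H2_new)
            if err_new >= err:                  # E no longer decreases
                return theta, H1, H2, err
            theta, H1, H2, err = theta_new, H1_new, H2_new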

For the sake of simplicity, the motivational example presented in Fig. 12 is planar. A set of four singularities of index 1 and four of index −1, whose locations do not fulfill the Abel-Jacobi condition, is imposed. Consequently, a cross-field generated using the H function will not be boundary aligned, and a cross-field generated by imposing the θ value along the cut graph and boundary following the method presented in [6] will not be integrable and will therefore generate limit cycles. The method presented here is applied to compute an integrable boundary-aligned cross-field. Figure 13 represents the cross-field used as an initial guess and Fig. 14 the one obtained at convergence of Algorithm 3. Figure 13 demonstrates that the integrability error density is not concentrated in certain regions, but rather spread quite uniformly over the domain. This suggests that addressing the integrability issue cannot be performed via local modifications but

366

J. Jezdimirovi´c et al.

only via the global one, i.e., the convergence of the presented non-linear problem. Figure 14 shows that generating a limit cycle-free 2D cross-field can indeed be done by solving Eq. (45). Nevertheless, this problem is highly non-linear and ill-defined, and solving it turns out to be difficult. The method proposed here works well when initialization is not far from an integrable solution, i.e., when the imposed singularity set obeys Abel-Jacobi’s conditions. Otherwise, it does not converge up to the desired solution by reaching a local mini¯ H¯ 1 , H¯ 2 ) which does not satisfy . E(θ, ¯ H¯ 1 , H¯ 2 ) = 0, as illustrated in Fig. 15. mum .(θ, Although, it is interesting to note that, even without the presented method’s convergence, the number of t-junctions dramatically decreases and the valid solution, in the opinion of authors, can be “intuitively presumed”.

6 Conclusion and Future Work We presented the mathematical foundations for the generation of an integrable crossfield on 2D manifolds based on a user-imposed singularity configuration with both isotropic and anisotropic scaling. Here, the mathematical setting is constrained by the Abel-Jacobi conditions for a valid singularity pattern. With the automatic algorithms to check and optimize the singularity configuration (as recently presented in [10, 22, 35]), the developed framework can be used to effectively generate both an isotropic and an anisotropic block-structured quad mesh with prescribed singularity distribution. When it comes to computational costs of our cross-field generation, the formulation with isotropic scaling . H takes solving only two linear systems, and the anisotropic one .(H1 , H2 ) represents a non-linear problem. An attractive direction for future work includes, although it is not limited to, working with the user-imposed size map. By using the integrable cross-field formulation relying on two sizing fields . H1 and . H2 , it would be possible to take into account the anisotropic size field to guide the cross-field generation. The size field obtained from the generated cross-field would not precisely match the one prescribed by the user, but it would be as close as possible to the singularity configuration chosen for the cross-field generation. It is important to note that employing the presented framework in the .3D volumetric domain would be possible only for a limited number of cases, in which the geometric and topological characteristics of the volume (more details in [13, 34]) allow the use of cross-field guided surface quad mesh for generating a hex mesh.

Integrable Cross-Field Generation Based on Imposed Singularity Configuration …

367

Appendix See Figs. 16 and 17.

Fig. 16 A square with a squared hole rotated by . π4 . Left: Quad layout obtained for an empty singularity set. The corresponding cross-field is isotropic, boundary aligned and integrable. Right: Quad layout obtained for a singularity set composed of four valence .3 (in blue) and four valence .5 (in red) singularities. The corresponding cross-field is anisotropic, boundary aligned and integrable

Fig. 17 Nautilus with a hole. Left: Quad layout obtained for an empty singularity set. The corresponding cross-field is isotropic, boundary aligned and integrable. Midlle: Quad layout obtained for a singularity set composed of a valence 3 (in blue) and a valence 5 (in red) singularity. The corresponding cross-field is anisotropic, boundary aligned and integrable. Right: Quad layout obtained for a singularity set composed of two valence 3 (in blue) and two valence 5 (in red) singularity. The corresponding cross-field is anisotropic, boundary aligned and integrable

References 1. Barnette, D., Jucoviˇc, E., Trenkler, M.: Toroidal maps with prescribed types of vertices and faces. Mathematika 18(1), 82–90 (1971)

368

J. Jezdimirovi´c et al.

2. Beaufort, P.A., Lambrechts, J., Henrotte, F., Geuzaine, C., Remacle, J.F.: Computing cross fields a pde approach based on the ginzburg-landau theory. Procedia engineering 203, 219–231 (2017) 3. Ben-Chen, M., Gotsman, C., Bunin, G.: Conformal flattening by curvature prescription and metric scaling. In: Computer Graphics Forum, vol. 27, pp. 449–458. Wiley Online Library (2008) 4. Bommes, D., Campen, M., Ebke, H.C., Alliez, P., Kobbelt, L.: Integer-grid maps for reliable quad meshing. ACM Transactions on Graphics (TOG) 32(4), 1–12 (2013) 5. Bommes, D., Lévy, B., Pietroni, N., Puppo, E., Silva, C.T., Tarini, M., Zorin, D.: Quad meshing. In: Eurographics (STARs), pp. 159–182 (2012) 6. Bommes, D., Zimmer, H., Kobbelt, L.: Mixed-integer quadrangulation. ACM Transactions On Graphics (TOG) 28(3), 1–10 (2009) 7. Bunin, G.: A continuum theory for unstructured mesh generation in two dimensions. Computer Aided Geometric Design 25(1), 14–40 (2008) 8. Campen, M., Shen, H., Zhou, J., Zorin, D.: Seamless parametrization with arbitrary cones for arbitrary genus. ACM Transactions on Graphics (TOG) 39(1), 1–19 (2019) 9. Campen, M., Zorin, D.: Similarity maps and field-guided t-splines: a perfect couple. ACM Transactions on Graphics (TOG) 36(4), 1–16 (2017) 10. Chen, W., Zheng, X., Ke, J., Lei, N., Luo, Z., Gu, X.: Quadrilateral mesh generation i: Metric based method. Computer Methods in Applied Mechanics and Engineering 356, 652–668 (2019) 11. Crane, K., Desbrun, M., Schröder, P.: Trivial connections on discrete surfaces. In: Computer Graphics Forum, vol. 29, pp. 1525–1533. Wiley Online Library (2010) 12. Ebke, H.C., Schmidt, P., Campen, M., Kobbelt, L.: Interactively controlled quad remeshing of high resolution 3d models. ACM Transactions on Graphics (TOG) 35(6), 1–13 (2016) 13. Fogg, H.J., Sun, L., Makem, J.E., Armstrong, C.G., Robinson, T.T.: Singularities in structured meshes and cross-fields. Computer-Aided Design 105, 11–25 (2018) 14. Gu, X., Luo, F., Yau, S.T.: Computational conformal geometry behind modern technologies. Notices of the American Mathematical Society 67(10), 1509–1525 (2020) 15. Hertzmann, A., Zorin, D.: Illustrating smooth surfaces. In: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 517–526 (2000) 16. Izmestiev, I., Kusner, R.B., Rote, G., Springborn, B., Sullivan, J.M.: There is no triangulation of the torus with vertex degrees 5, 6,..., 6, 7 and related results: Geometric proofs for combinatorial theorems. Geometriae Dedicata 166(1), 15–29 (2013) 17. Jezdimirovi´c, J., Chemin, A., Reberol, M., Henrotte, F., Remacle, J.F.: Quad layouts with high valence singularities for flexible quad meshing. Proceedings of the 29th Meshing Roundtable (2021) 18. Jezdimirovi´c, J., Chemin, A., Remacle, J.F.: Multi-block decomposition and meshing of 2d domain using ginzburg-landau pde. Proceedings, 28th International Meshing Roundtable (2019) 19. Jucoviˇc, E., Trenkler, M.: A theorem on the structure of cell–decompositions of orientable 2–manifolds. Mathematika 20(1), 63–82 (1973) 20. Kälberer, F., Nieser, M., Polthier, K.: Quadcover-surface parameterization using branched coverings. In: Computer graphics forum, vol. 26, pp. 375–384. Wiley Online Library (2007) 21. Knöppel, F., Crane, K., Pinkall, U., Schröder, P.: Globally optimal direction fields. ACM Transactions on Graphics (TOG) 32(4), 1–10 (2013) 22. Lei, N., Zheng, X., Luo, Z., Luo, F., Gu, X.: Quadrilateral mesh generation ii: Meromorphic quartic differentials and abel–jacobi condition. 
Computer Methods in Applied Mechanics and Engineering 366, 112–980 (2020) 23. Lyon, M., Campen, M., Kobbelt, L.: Quad layouts via constrained t-mesh quantization. In: Computer Graphics Forum, vol. 40, pp. 305–314. Wiley Online Library (2021) 24. Lyon, M., Campen, M., Kobbelt, L.: Simpler quad layouts using relaxed singularities. In: Computer Graphics Forum, vol. 40, pp. 169–180. Wiley Online Library (2021) 25. Myles, A., Pietroni, N., Zorin, D.: Robust field-aligned global parametrization: Supplement 1, proofs and algorithmic details. Visual Computing Lab (2014)

Integrable Cross-Field Generation Based on Imposed Singularity Configuration …

369

26. Myles, A., Zorin, D.: Global parametrization by incremental flattening. ACM Transactions on Graphics (TOG) 31(4), 1–11 (2012) 27. Myles, A., Zorin, D.: Controlled-distortion constrained global parametrization. ACM Transactions on Graphics (TOG) 32(4), 1–14 (2013) 28. Ray, N., Li, W.C., Lévy, B., Sheffer, A., Alliez, P.: Periodic global parameterization. ACM Transactions on Graphics (TOG) 25(4), 1460–1485 (2006) 29. Shepherd, K.M., Hiemstra, R.R., Hughes, T.J.: The quad layout immersion: A mathematically equivalent representation of a surface quadrilateral layout. arXiv preprint arXiv:2012.09368 (2020) 30. Singer, I.M., Thorpe, J.A.: Lecture notes on elementary topology and geometry. Springer (2015) 31. Tong, Y., Lombeyda, S., Hirani, A.N., Desbrun, M.: Discrete multiscale vector field decomposition. ACM transactions on graphics (TOG) 22(3), 445–452 (2003) 32. Vaxman, A., Campen, M., Diamanti, O., Panozzo, D., Bommes, D., Hildebrandt, K., Ben-Chen, M.: Directional field synthesis, design, and processing. In: Computer Graphics Forum, vol. 35, pp. 545–572. Wiley Online Library (2016) 33. Viertel, R., Osting, B.: An approach to quad meshing based on harmonic cross-valued maps and the ginzburg–landau theory. SIAM Journal on Scientific Computing 41(1), A452–A479 (2019) 34. White, D.R., Tautges, T.J.: Automatic scheme selection for toolkit hex meshing. International Journal for Numerical Methods in Engineering 49(1-2), 127–144 (2000) 35. Zheng, X., Zhu, Y., Chen, W., Lei, N., Luo, Z., Gu, X.: Quadrilateral mesh generation iii: Optimizing singularity configuration based on abel–jacobi theory. Computer Methods in Applied Mechanics and Engineering 387, 114–146 (2021)

Element Design

Optimally Convergent Isoparametric P 2 Mesh Generation .

Arthur Bawin, André Garon, and Jean-François Remacle

1 Introduction The generation of curvilinear meshes was initially intended to improve the approximation of the boundary geometry of the domains to be modeled by finite elements [1, 2]. Recently, a new trend has emerged: the use of curved meshes inside the domain to approximate the solution as well as possible [3–7]. This problem is the one of curvilinear mesh adaptation, where anisotropic but also curvilinear elements are allowed. As for anisotropic mesh adaptation, the metric tensor plays a central role in curvilinear mesh adaptation. However, to the best of our knowledge, a solution-based metric tensor field tailored for curved elements is not yet available in the literature. In [7], the two- and three-dimensional metric is either induced from the curvature of the geometry, computed from solutions of PDE on straight-sided meshes, or a combination thereof. But the a priori error model uses straight line parameterizations to write local upper bounds on the interpolation error, and does not let the elements bend to follow the solution, yielding potentially shorter elements. In [4, 5], the target metric is chosen to align the mesh to specified curves or surfaces, but does not rely on the solution of a PDE.

A. Bawin (B) · A. Garon Polytechnique Montréal, Montreal, QC, Canada e-mail: [email protected] A. Garon e-mail: [email protected] A. Bawin · A. Garon · J.-F. Remacle iMMC, UCLouvain, Louvain-la-Neuve, Belgium e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Ruiz-Gironés et al. (eds.), SIAM International Meshing Roundtable 2023, Lecture Notes in Computational Science and Engineering 147, https://doi.org/10.1007/978-3-031-40594-5_17

373

374

A. Bawin et al.

The motivation to this paper is thus to describe the structure of the interpolation error if we allow to curve the mesh elements, by extending the framework proposed by Alauzet and Loseille [8, 9] to curvilinear meshes in two dimensions. Another question remains open: how does the curvature of an element influence the interpolation error? Papers dating from the beginning of the finite element era have tried to give an answer to this question. In Ciarlet and Raviart [10], the authors show that, to preserve the optimal convergence of finite elements, the radius of curvature of the edges of the elements should decrease as .h 2 if .h is the size of the elements. This means that in order to preserve the optimal convergence, it is necessary to curve the elements relatively less and less when the edges’ length is decreased. Here, we show, using numerical test cases, that this assumption is too strong and that homothetically refined elements i.e. whose relative curvature remains constant allow for optimal convergence. The paper is structured as follows. In Sect. 2, we derive an interpolation error estimate tailored for curved elements, which is the main contribution of this paper. From this anisotropic estimate, a metric tensor field is obtained, based on previous work on high-order straight-sided meshes [11–13]. In particular, the extension of the log-simplex algorithm proposed in [13] to non-homogeneous error polynomial is a new contribution. We then introduce the metric tensor induced by the graph of the target field .u and use it to establish the principal directions of the mesh. In Sect. 3, the mesh generation algorithm is described. In particular, we adapt the advancing front method introduced in [6] to use both the error metric and the induced metric to specify the target sizes and anisotropic directions of the mesh elements. An extension of the edge curving method proposed in [6] is also presented. Section 4 presents an application of this mesh adaptation framework to two simple test cases.

2 Interpolation Error Model and Metric Tensor The mesh generation methodology described in this paper is based on interpolation error. We follow the steps presented in similar work on higher order (.≥2) straightsided anisotropic mesh adaptation [11–13]. First, we build an a priori error estimate based on a Taylor expansion and higher-order derivatives of an unknown scalar field .u(x, y). As we consider quadratic edges, second and third order derivatives essentially determine the error estimator. Then, we seek the metric tensor field that best translates this estimate in the shape of the ideal elements. It is well-known that for elements with interpolation functions of order .k ≥ 2, the natural connection between the Hessian matrix of .u and the metric tensor driving the mesh adaptation process no longer holds. Indeed, the metric tensor is represented by an .n × n matrix, with .n the space dimension, whereas an error estimate based on a Taylor expansion features tensors of high order derivatives. Rather, a quadratic form represented by a symmetric positive-definite matrix which is a tight upper bound on the error estimate is sought. Finally, the obtained metric field is scaled to obtain a target number of vertices in the final mesh and perform convergence studies.

Optimally Convergent Isoparametric . P 2 Mesh Generation

375

2.1 Curve Parameterizations We start by defining the curves and parameterizations we consider in this paper. When writing an error estimate around a vertex .x0 , initial conditions such as the starting point, initial direction and curvature are prescribed, but the final point of the curve is unknown. Working with a parameter .s ≥ 0 (not necessarily the arclength) thus makes sense in this case. On the other hand, when curving the mesh edges, Sect. 3.3, both extremities are known and a parameterization in .t ∈ [0, 1] is used.

Parameterization of Paths for Error Estimation Let .u(x, y) represent a real-valued scalar field and .x0 = (x0 , y0 ) ∈ R2 be a point around which we define an anisotropic error estimate. To write an error estimator, we consider curves leaving.x0 and whose curvature is obtained from the derivatives of .u. Indeed, for a given unit direction .v, there is infinitely many curved paths leaving .x0 with arbitrary initial curvature .κ, each endowed with an amount of error. Two particular ways of leaving .x0 are to follow the gradient and the level curve of .u at .x0 , i.e. to consider curves .C1 ≡ r1 (s), C2 ≡ r2 (s) everywhere tangent to .∇u and its orthogonal direction .∇u ⊥ . As we aim to generate quadratic edges, we propose to approach those two curves by their quadratic Taylor expansion of the form: x (s) = ri (0) + sv +

. i

s2 ⊥ κv + o(s 3 ), i = 1, 2. 2

(1)

This expression is the projection of the local canonical form of the curve .ri (s) into its osculating plane, see e.g. Proposition 2.6 of [14]. Imposing the match up to order 2 between the curves.Ci and their Taylor expansion yields the following tangent vectors and curvatures (see e.g. [6] for the full computation): v = ∇u(x0 )/||∇u(x0 )||,

. 1

κ =

. 1

v2T H (x0 )v1 , ||∇u(x0 )||

v2 = ∇u ⊥ (x0 )/||∇u ⊥ (x0 )||

(2)

v2T H (x0 )v2 , ∇u(x0 ) · v1

(3)

κ2 = −

with . H the Hessian matrix of .u. It is worth noting that this approximation is .not an arclength parameterization. Indeed, we have: ||xi' (s)|| =

.

/

1 + κi2 s 2 ,

(4)

which is unit only at .s = 0. Hence the norm of the second derivative, ||xi'' (s)|| = κi ,

.

(5)

376

A. Bawin et al.

represents the curvature of .xi (s) only at .s = 0. In particular, approximation (1) does not have constant curvature. Computing the exact curvature of the parabola would require an arclength parameterization of (1), but the computation of ∫ s˜ ≜

s

.

0

yields

||xi' (σ)|| dσ

√ sinh−1 (κi (0)s) + ks 1 + κi (0)2 s 2 , .s ˜= 2κi (0)

(6)

(7)

which cannot be solved for .s(˜s ).

Parameterization of Parabolic Edges As we use isoparametric . P 2 finite elements, the mesh is made of 6-nodes triangles with parabolic edges and the reference-to-physical transformation .x(ξ) is given by: x(ξ) =

6 ∑

.

Xi φi (ξ),

(8)

i=1

where .Xi is the position of the vertices in the physical space and .φi (ξ) denote the quadratic Lagrange basis functions (Fig. 1). The Lagrange functions satisfy .φi (Ξ j ) = δi j , with .Ξ j the reference vertices and .δi j the Kronecker delta. On each edge, the middle vertex (or midpoint) is defined by the displacement vector .α ∈ R2 :

Fig. 1 Reference-tophysical transformation and displacement vector

Optimally Convergent Isoparametric . P 2 Mesh Generation

α = X12 −

.

X1 + X2 , 2

377

(9)

so that the edge parameterization writes for .t ∈ [0, 1]: x(t) = X1 + γt + L(t)α,

.

(10)

with .γ ≜ X2 − X1 and . L(t) ≜ 4t (1 − t).

2.2 Interpolation Error Estimate We now write an estimate of the interpolation error along curves.x(s, θ), θ ∈ [0, 2π]. These curves are obtained by sweeping the unit directions around.x0 and interpolating the curves .x1 (s) and .x2 (s) accordingly: s2 κ(θ)v⊥ (θ), 2 v(θ) = v1 cos θ + v2 sin θ,

x(s, θ) = x0 + sv(θ) + .

(11)

v⊥ (θ) = v1⊥ cos θ + v2⊥ sin θ, κ(θ) = κ1 cos2 θ + κ2 sin2 θ. In the following, we drop the .θ and write .x(s) to ease the notation. We want to find an expression for the interpolation error: e(s) = u(x(s)) − Π2 u(x(s)),

.

(12)

with .Π2 u is the interpolant of .u using quadratic Lagrange functions. We consider Taylor’s integral remainder for an interpolation of degree .k written for a parameterization .x(s), integrating until an arbitrary length .s = s¯ : 1 .e(¯ s) = k





(¯s − s)k D (k+1) u(x(s)) ds.

(13)

0

We examine the case .k = 2 of quadratic interpolation. Using the chain rule, the third derivative of the composition .(u ◦ x)(s) is given by: .

D (3) u = Ci jk (x(s))˙xi x˙ j x˙ k + 3Hi j (x(s))˙xi x¨ j ,

(14)

with . Hi j = ∂ 2 u/∂xi ∂x j , .Ci jk = ∂ 3 u/∂xi ∂x j ∂xk and Einstein’s summation convention on repeated indices. The term in gradient of .u is absent since the third derivative ... of the parameterization . x vanishes. Inserting this in the error estimate, we write:

378

A. Bawin et al.

∫ 1 s¯ e(¯s ) = (¯s − s)2 D (3) u(x(s)) ds 2 0 ( ∫ 1 s¯ 2 (¯s − s) Ci jk (x(s))(v + κv⊥ s)i, j,k = 2 0 ) ⊥ ⊥ + 3Hi j (x(s))(v + κv s)i (κv ) j ds .

∫ Ci jk (x0 ) s¯ = (¯s − s)2 (v + κv⊥ s)i, j,k ds 2 0 ∫ 3Hi j (x0 ) s¯ (¯s − s)2 (v + κv⊥ s)i (κv⊥ ) j ds + 2 0

(15)

≜ c1 s¯ 6 + c2 s¯ 5 + c3 s¯ 4 + c4 s¯ 3 . We have neglected the higher order derivatives (.>3) and approximated . Hi j (x(s)) and .Ci jk (x(s)) by their value at .x0 and took them outside of the integral. The result is a non-homogeneous polynomial of degree 6 in .s¯ : the explicit form of the coefficients .ci = ci (θ) is given in the appendix. For a linear (i.e. straight) parameterization.x(s) = x0 + vs, .x¨ = 0 and .x˙ = v and .e reduces to a homogeneous polynomial of order 3. Indeed, setting .κ = 0, one has: e(¯s ) =

.

s¯ 3 Ci jk (x0 )vi v j vk 6

(16)

or ( 1 C111 (¯s 3 v13 ) + (C112 + C121 + C211 )(¯s 3 v12 v2 ) . ( )( ) 6 ()() a

b

) + (C122 + C212 + C221 )(¯s 3 v1 v22 ) + C222 (¯s 3 v23 ) . ()() )( ) ( c

d

Defining the endpoints .x¯ ≜ s¯ v1 and . y¯ ≜ s¯ v2 , we write e(x, ¯ y¯ ) =

.

) ( 1 a x¯ 3 + b x¯ 2 y¯ + c x¯ y¯ 2 + d y¯ 3 6

(17)

which is the homogeneous error polynomial used for high-order straight-sided mesh adaptation in [11–13]. It is worth pointing out that for a curved parameterization, we ¯ y¯ ), since the path used to travel cannot write the error estimate as a polynomial in .(x, ¯ y¯ ) changes the total interpolation error, contrary to straight-sided from .(x0 , y0 ) to .(x, parameterization where only the endpoint matters.

Optimally Convergent Isoparametric . P 2 Mesh Generation

379

2.3 Optimal Metric Following the approach of [11, 13], we now wish to find the quadratic form, represented by a matrix . Q, that is the best upper bound for this error polynomial. More precisely, we seek the symmetric positive-definite matrix . Q such that d (x, x0 ) ≤ d Q (x, x0 ),

∀ x.

. e

(18)

In this expression, .d Q (., .) is the Riemannian distance induced by the metric tensor associated to . Q and .de (., .) is a distance function induced by the error polynomial. These distance functions are a normalized way to compare . Q and .e, see e.g. Sect. 3.2.2 of [11]. The distance .d Q (., .) is defined by the infimum of the metric-weighted length taken from all the (regular) curves joining two points .p and .q: d (p, q) = inf length(c),

. Q

(19)

where .c : [a, b] → R2 is a differentiable piecewise .C 1 curve with .c(a) = p and .c(b) = q and with ∫ b√ .length(c) = (c' (t), c' (t)) Q dt, (20) a

where .(u, v) Q = uT Qv is the dot product with respect to . Q. While computing the metric tensor at .x0 , we place ourselves in the tangent plane to .R2 at .x0 and hence consider a constant (unknown) metric . Q. This way, the geodesics of . Q are straight lines and the infimum is obtained by looking only at .x − x0 . This will obviously not be the case when we generate curved edges later on, since variations of the metric will determine the edges’ curvature. Thus, we can still write d (x, x0 ) =

. Q

√ (x − x0 )T Q(x − x0 )

(21)

as in the straight-sided case. The error-based distance function is defined by: 1

d (x, x0 ) = |e(¯s (x))| k+1 .

. e

(22)

We thus seek, in a frame centered at .x0 , the SPD matrix . Q such that: (x(¯s )T Qx(¯s ))

.

k+1 2

≥ |e(¯s )|, ∀ θ ∈ [0, 2π], s¯ > 0

(23)

and such that its unit ball has the maximum area, in order to minimize the interpolation error for a given number of mesh vertices, i.e. we seek . Q with minimum determinant. Minimizing .det Q subject to the constraints (23) is impractical though, since it requires an expensive discretization of both.θ and.s¯ to evaluate the constraints. For a linear parameterization .x(s), the error polynomial is homogeneous and a scal-

380

A. Bawin et al.

ing argument [11, 13] shows that it is sufficient to satisfy (23) on the level curve 1 of the error polynomial. The parameter .s¯ can then be obtained by .s¯ = e−1 (1), so the resolution requires a discretization of .θ only. Here, the error polynomial (15) is non-homogeneous in .s¯ and the scaling argument does not hold anymore. As .v and ⊥ .v are unit vectors, we can however write the following upper bound: |e(¯s )| ≤ s¯ 3 (1 + |κ|¯s ) × ⎞ ⎛ ∑ ∑ . 1 |κ| ⎝ |Ci jk (x0 )| + |Hi j (x0 )|⎠ , 6 i, j,k 2 i, j

(24)

whose limit for small curvature .κ is an homogeneous polynomial. We could thus use this bound in the definition of the error-based distance, which would justify the scaling argument for regions of low curvature. Using this bound might be too conservative, so we make the choice of discretizing only the level curve 1 of the non-homogeneous error polynomial (15) all the same. The impact of this trade-off between computational cost, practicality and accuracy is however hard to quantify, and better solutions might be possible. At each vertex of the background mesh, the metric tensor . Q is thus found by solving the optimization problem: min det Q

.

a,b,c ∈ R

xiT Qxi ≥ 1

(25)

for i = 1, ..., n,

where.a, b, c are the coefficients of. Q and the.xi are points lying on the level curve 1 of e(¯s ). As pointed out in [13], the problem is ill-posed since one can always fit an ellipse between constraint points whose determinant goes to 0. To solve this, we use the logsimplex algorithm proposed in [13] which consists of .(i) solving the optimization problem for .L = log Q = R log(Λ)R T , the matrix logarithm of . Q = RΛR T , and 1 for modified constraints, and .(ii) apply iteratively the transformation .x˜ = Q 2 x to converge to the initial constraints. We briefly recall the method: Starting with the identity matrix . Q 0 = I, we solve iteratively

.

min trace L j

.

a ' ,b' ,c' ∈ R

(26)

(yiT ) j L(yi ) j ≥ −||(yi ) j ||2 log(||(yi ) j ||2 ) 1

1

for the .i = 1, ..., n constraint points. At iteration . j, the metric . Q j+1 = Q j2 L j Q j2 is recovered and the constraint points are updated using the transformation .(yi ) j+1 = 1

2 xi . Sweeping the angles .θi ∈ [0, 2π] around the vertex .x0 , we write: Q j+1

e (¯s ) ≜ e(¯s , θi ) = 1 → s¯ = ei−1 (1),

. i

(27)

Optimally Convergent Isoparametric . P 2 Mesh Generation

381

so that the points .xi = x(ei−1 (1)) lie on the level curve 1 of the error polynomial. In [13], the error polynomial is homogeneous and finding the points.xi on the level curve 1 is trivial. Here, the error function is a non-homogeneous polynomial of degree 6, so unfortunately there is no closed-form solution available to solve .ei (¯s ) = ±1 and we must rely on a numerical root-finding algorithm to find the smallest positive real root, as .s¯ is a strictly positive length. The convergence theorem provided in [13] still holds even for a non-homogeneous function. Indeed, if the sequence .(Q) j converges to . Q as . j → ∞, then .(i) the objective function converges to a minimum (.L converges to 0, the proof is unchanged) and .(ii) the log-constraints converge to the initial set of constraints .xiT Qxi ≥ 1. Without relying on the homogeneous character of .e, the proof goes as follows: the 1

1

transformation .(yi ) j = Q j2 x(ei−1 (1)) = Q j2 xi converges to .yi = Q 2 xi as . j → ∞. Since .L → 0, constraint (26) converges to 1

0 ≥ −||yi ||2 log(||yi ||2 )

.

⇐⇒ 0 ≤ log(||yi ||2 ) ⇐⇒ 1 ≤ ||yi ||2 1

1

1

⇐⇒ 1 ≤ ||Q 2 xi ||2 = xiT Q 2 Q 2 xi = xiT Qxi , which concludes the proof. Solving the optimization problem at each vertex of the background mesh yields the metric field . Q(x). Following the continuous mesh theory presented in [8, 15, 16], the metrics are then scaled to obtain roughly . N vertices in the final mesh: −1

Me (x) = C (det Q(x)) p(k+1)+n Q(x),

.

(∫

with C=N

.

2 n

Ω

(det Q(x))

p(k+1) 2( p(k+1)+n)

(28)

) dx ,

(29)

with . p translates in which . L p norm the interpolation error should be minimized, .k = 2 is the polynomial degree of the interpolation and .n = 2 is the space dimension. In the following, we set . p = 2.

3 Mesh Generation With the metric field .Me (x) at hand, we now generate a mesh of . P 2 triangles. We follow the unit mesh [17] and aim at generating edges with metric√ approach √ weighted length in .[1/ 2, 2], to ensure some form of error equidistribution over mesh elements. In our approach, the straight mesh is created using an advancing

382

A. Bawin et al.

front of vertices, and is then curved one edge at a time. This is done in four main steps: 1. generate vertices at unit distance from one another and connect them; 2. curve the straight edges by moving the midpoint to minimize metric-weighted length; 3. make the curved mesh valid; 4. perform quality-enhancing topological operations (edge swaps).

3.1 Principal Directions of the Mesh In straight-sided anisotropic mesh generation, orientation of the elements is generally not controlled, since all triangles inscribed in the unit ball of the local metric are part of an equivalence class [9]. To generate curved elements however, working in the tangent space is not sufficient, and we should account for the variation of the metric and thus follow metric-imposed directions, such as the orthogonal directions given by the eigenvectors of the metric field, similarly to [18]. Here, instead of taking the eigenvectors of the metric field .Me (x), we introduce the induced metric .M I (x), defined on the graph of .u(x, y). The graph .G of .u, denoted by .p = (x, y, u(x, y)), is a surface in .R3 . The restriction of the euclidian metric, ds 2 = d x 2 + dy 2 + dz 2 ,

.

(30)

on the surface associated to the implicit function . f = z − u = 0 yields the induced metric (also known as the first fundamental form): ds 2 =

.

f ,y2 + f ,z2 2 f ,x2 + f ,z2 2 f ,x f ,y 2 d x + d xd y + dy , f ,z2 f ,z2 f ,z2

(31)

with the notation . f ,x = ∂ f /∂x. As . f ,z = 1, . f ,x = −u ,x and . f ,y = −u ,y , this writes: ds 2 = (1 + u 2,x ) d x 2 + 2u ,x u ,y d xd y + (1 + u 2,y ) dy 2 .

.

The matrix associated to this metric is: ( ) 1 + u 2,x u ,x u ,y I .M = , u ,x u ,y 1 + u 2,y

(32)

(33)

whose eigenvectors of the induced metric are ( v =

. 1

) u ,x ,1 , u ,y

( v2 =

) −u ,y ,1 . u ,x

(34)

Optimally Convergent Isoparametric . P 2 Mesh Generation

383

They are aligned respectively with .(u ,x , u ,y ) and .(−u ,y , u ,x ), the direction of the gradient and of the level curve of .u at .p. Hence, the curves obtained by integrating along .v1 and .v2 are approximations of the gradient curves and the level curves of .u. There are thus two metric fields of interest: the metric obtained from the interpolation error analysis .Me (x) (the exponent .e for error was added for emphasis) and the induced metric .M I (x) obtained from the geometry of the graph of .u. Contrary to the error metric, the induced metric is intrinsic to the solution .u and does not involved the interpolation scheme. For each metric field, two types of curves are of particular interest: • the geodesics are locally distance-minimizing curves. They are defined as parameterized curves .g(t) with zero acceleration everywhere on the curve, that is, with ' ' .∇g ' g = 0, where .∇g ' denotes the covariant derivative in the direction . g (t) [14]. For a given coordinate system, the components form of this relation writes: .

j k d 2 gi i dg dg + Γ = 0, jk dt 2 dt dt

(35)

which is as second order ODE in .g(t) and where .Γ i jk are the Christoffel symbols of the second kind, defined as: Γ i jk =

.

) 1 −1 ( Mim Mm j,k + Mmk, j − M jk,m . 2

(36)

Geodesics can be obtained by numerically integrating (35) using e.g. a 4th order explicit Runge-Kutta scheme, along with two initial conditions: .

g(0) = x0 ,

g ' (0) = v,

(37)

with .v a unit direction. • the integral curves tangent to either .v1 or .v2 , the eigenvectors of the matrix associated to the metric tensor. For each of the two metric fields .M I and .Me , one can make the following observations: • The geodesics of the induced metric .M I are the projection on the .x y−plane of the geodesics on the graph of .u. While they minimize the euclidian distance on the graph, they do not have obvious properties in terms of error minimization, i.e. minimizing the error-weighted distance. • The integral curves of the eigenvectors of .M I are approximations of the gradient and level curves of .u. Let .c(ξ) : [−1, 1] → R2 be a (perfectly represented) piece of a level curve, such that .u(c(ξ)) = C. The Lagrange interpolate of .u on .c of degree .k with .n k basis functions writes:

384

A. Bawin et al.

Πk u =

nk ∑

φi (ξ)u(c(Ξi ))

i=1 .

=C

(n k ∑

) φi (ξ)

(38)

i=1

=C since the basis functions sum to 1, with.Ξi the Lagrange nodes. Thus, the pointwise interpolation error .u − Πk u is zero on a level curve. • The geodesics of the error metric .Me minimize the error-weighted distance and are thus good candidates for curves on which generate the vertices. • The integral curves of the eigenvectors of .Me follow the directions of extreme error. They have been investigated e.g. in [18]. The integral curves of both metrics, as well as the geodesics of .Me , exhibit valuable properties in terms of error minimization. It is however not clear to us what is the link between these curves, if there is any. From a practical point of view, integrating along the geodesics of the error metric has not shown to be very robust, mostly because small perturbations in the metric field and its derivatives yield quite different geodesics. In this work, we chose to integrate along the eigenvectors of the induced metric. Integrating along the eigenvectors of .Me are currently also being investigated.

3.2 Vertices Generation and Triangulation We start by generating an anisotropic straight-sided mesh with respect to.Me (x) using mmg2d [19], discarding the inner vertices and keeping only the boundary vertices. These vertices form the initial front. New vertices are chosen from among the four potential neighbours of a vertex of the front. These neighbours lie at unit distance (measured with .Me ) from the vertex along the four directions given by moving forward or backward along the eigenvectors of .M I , Algorithm 1 with . L = 1. The principal sizes .h e1 , h e2 of .Me , as well as the angles .θ I , θe formed by the horizontal and the first eigenvector of .M I and .Me , are used to compute the size along the eigenvector of .M I in the error metric. To avoid abrupt variations in the direction field, the next direction is taken as the closest to the previous one. The neighbour.x j , j = 1, 2, 3, 4 is added to the front if it is not too close to another vertex of the front. To avoid computing distances to every existing vertex, an RTree of a list of vertices along with their exclusion zone of data structure [20], consisting √ characteristic size .1/ 2, is used. √ The exclusion zone of a vertex consists of its four neighbours at distance . L = 1/ 2, approximating the deformed unit ball of .Me (x) centered at the vertex and considering a varying metric (the true unit ball is not an ellipse anymore). A new vertex is added if it lies outside of the convex hull of the neighbouring vertices forming the exclusion zone.

Optimally Convergent Isoparametric . P 2 Mesh Generation

385

Input: Initial position x0 , direction j ∈ [1, 2, 3, 4], target length L, number of uniform steps N. Result: Neighbouring vertex x. x = x0 for i = 1 → N do v1 , v2 = eigenvectors(M I (x)) if i > 1 then v = arg max v · vprev ±v1 ,±v2

else V = [v1 , −v1 , v2 , −v2 ] v = Vj end h e1 h e2 h= / h e1 sin2 (θ I − θe ) + h e2 cos2 (θ I − θe ) h Lv x=x+ N vprev = v end

Algorithm 1: Compute neighbour to vertex x0 .

The set of accepted vertices is triangulated using isotropic Delaunay triangulation, then edge swaps are performed to enhance element quality based on .Me , yielding an anisotropic straight mesh.

3.3 Curving the Edges The edges are then curved by moving the midpoint. To curve the mesh in a single pass and not iteratively, the edges are curved to best approach the geodesics of the error metric .Me . For each parabolic edge .x(t) = x(t, α), we seek the displacement vector .α∗ ∈ R2 such that the error-weighted edge length: ∫

1

length(x) = 0

∫ .

1

= 0



1

=

||x' (t)||Me dt √ /

(x' (t), x' (t))Me dt

(39)

˙ γ + α L) ˙ Me dt (γ + α L,

0

is minimized. The minimization problem .

min length(x)

α∈R2

(40)

386

A. Bawin et al.

is solved with a quasi-Newton method. In [6], the edges are curved by restricting the movement of the midpoint along the orthogonal bisector: we compare the influence of this choice in the results section. More precisely, three strategies are compared: .(i) moving the midpoint along the bisector, .(ii) moving the midpoint anywhere in .R2 and .(iii) moving the midpoint in .R2 , then relocating it at half of the curved edge’s length, such that .x(t = 1/2) = X12 . This step is critical as several results show that curving the elements, i.e. using a non-affine transformation between the reference and the physical triangle, can have dramatic consequences on the interpolation quality, see e.g. [10, 21]. In particular, [10] shows that an asymptotic relation of the form: ||u − Π2 u|| L 2 = O(h 3 )

.

(41)

can be achieved on . P 2 isoparametric triangles with Lagrange basis functions if the displacement vector satisfies: straight

||X12 − X12

.

|| = O(h 2 ),

(42)

straight

where .X12 is the position of the . P 2 midpoint and .X12 = (X1 + X2 )/2. This result is valid on regular families of element, that is, elements for which there exists a constant .a0 such that ρh (43) .0 < a0 ≤ , ∀h, h where .h is the diameter of the element and .ρh is the diameter of the inscribed sphere. This result was observed on a sequence of six regular meshes, such as the ones shown on Fig. 2. For each of these meshes, the inner edges are curved by moving the midpoint along the unit orthogonal bisector .γ ⊥ : ( α=C

.

1 √ N

)m

γ ⊥ ∼ Ch m γ ⊥ ,

(44)

with .C a constant, . N the number of vertices of the mesh and .m an integer exponent. The observed convergence, Fig. 3, follows the results from [10]: curving the elements with straight .||α|| = ||X12 − X12 || = O(h) (45) lowers the convergence rate to 2, whereas the optimal rate is maintained for a higher .k. However, we have observed in our numerical tests (Sect. 4) that curved meshes with the midpoint moved according to (40) can exhibit an .O(h) evolution (or even lower) and still maintain the optimal convergence rate. This would suggest that the bounds from [10] are somewhat too conservative, and that curving the mesh along privileged directions can prevent this loss of convergence rate.

Optimally Convergent Isoparametric . P 2 Mesh Generation

387

Fig. 2 Structured meshes with edges curved along the orthogonal bisector

Fig. 3 Interpolation error in . L 2 -norm on curved structured meshes for the function .u(x, y) = r 4 (x, y) = x 4 + y 4 + 2x 2 y 2 for different curvature amplitudes

3.4 Making the Mesh Valid The curved mesh is not necessarily valid, that is, we do not have . Jmin = minξ J (ξ) > 0 for each curved element, where. J (ξ) = |∂x/∂ξ| is the determinant of the referenceto-physical transformation .x(ξ). To make the mesh valid, we compute a lower bound on . Jmin on each element using the Bézier-based sufficient condition in [22]. If the element is invalid (. Jmin ≤ 0), we backtrack on all three edges and simultaneously reduce the displacements .αi , i = 1, 2, 3 until . Jmin is positive. As the initial mesh is a valid straight mesh, the limit case is always valid.

388

A. Bawin et al.

3.5 Edge Swaps Finally, mesh quality is enhanced by performing edge swaps. As the vertices are supposed to be ideally placed, operations such as position smoothing, vertex insertions or edge collapses are not performed. The curvilinear quality indicator on a triangle . T used is: ∫ √ √ T det M d x , .qM = 4 3 ∑ (46) 3 i=1 LM (ei ) with .LM (ei ) the length of the edge .i. We select the error metric .Me to compute the quality.

4 Numerical Results We test our methodology on two simple test cases: 2 2 2 • .u 1 (x, y) = r 4 (x,(y) = [ (x( + y) ) , ]) 3π y • .u 2 (x, y) = atan 10 sin 2 − 2x .

To focus on metric computation and mesh generation only, we use analytic derivatives . Hi j (x) and .Ci jk (x) of .u. The log-simplex optimization problem (26) is solved using the SoPlex library [23] and the minimization problem (40) is solved using the Ceres library [24]. For each test case, we study the convergence of the Lagrange . P 2 interpolation on isoparametric elements by increasing the desired number of vertices . N in each mesh. The study is performed on sets of meshes with target complexity . N = [50, 100, 200, 400, 600, 800, 1000, 1200, 1600]. To generate a mesh of target complexity . Ni , we proceed iteratively and start from a coarse structured background mesh. We compute the metric field on the background mesh, generate a straight anisotropic mesh then use this mesh as a background mesh to have a more accurate representation of the metric field. We iterate this way five times before curving the mesh. The generated meshes feature curved elements where necessary and typically contain about 1.15 times the requested number of vertices (Fig. 4). Tests were performed sequentially on a laptop with an Intel Core i7 8750h CPU at 2.2 Ghz and 16 Gb of memory. Non-optimized timings are presented in Table 1 for the second test case and for target complexities . N = 200 and .1600. Despite some optimizations, the overall execution time remains very high. As expected, solving the minimization problem for . Q is the costliest part of the metrics computation. Due to the high number of metric evaluations when computing length integrals, most of the meshing step is currently spent interpolating the metric and its derivatives from the background mesh. As it is standard in anisotropic mesh adaptation, the error is reported as a function of the number of mesh vertices . N 1/n with .n the space dimension, here .2. For straight

Optimally Convergent Isoparametric . P 2 Mesh Generation

389

Table 1 Timings for the second test case for target complexity . N = 200 and .1600: computation of the metric tensor fields and mesh generation. All times are given in seconds. . N = 200 . N = 1600 Metric computations (5 passes): Solve .s¯ = e−1 (1) Minimize .det Q Others (metric scaling, etc.) Total metrics Mesh generation: Generate nodes Initial edges curving Edge swaps Others Total mesh Including metric interpolation Total

1.47 8.20 0.81 10.48 s

10.27 51.89 8.04 70.2 s

3.22 0.82 5.86 0.42 10.32 9.92 20.80 s

28.24 6.9 77.31 3.49 115.94 107.76 186.57 s

meshes with interpolation functions of degree .k in two dimensions, the continuous mesh theory [8, 15, 16] predicts an evolution of the error in the . L p norm of the form ||e|| L p ∼ C N −

.

k+1 2

√ ∼ C( N )−(k+1) ,

(47)

thus an asymptotic convergence rate of .k + 1 = 3. The optimal convergence rate in L 2 norm is observed for both test cases, Figs. 5 and 6, as well as an order of 2 in . H 1 norm. The graphs in Figs. 5 and 6 show the influence of the curving strategy on the interpolation error, and are associated to the three approaches discussed in Sect. 3.3. For both test cases, curving the edges along the bisector or relocating the midnode after curving without constraint yield very similar results, and both their error levels are slightly under those of the second curving strategy, which lets the midpoint move freely in .R2 . This suggests that curving along the orthogonal bisector is sufficient to generate optimally adapted meshes, in addition to being slightly faster (1 degree of freedom instead √ of 2). Finally, the evolution of the norm of the displacement .α as a function of . N is shown on Figs. 5 and 6. Notably, the evolution is linear or sublinear, while maintaining optimal convergence rates for the interpolation error.

.

390

A. Bawin et al.

Fig. 4 Top four figures: Surface plot and adapted meshes for .u 1 (x, y) = (x 2 + y 2 )2 and target complexity . N = 50, 100, 200. Vertices are generated along the eigenvectors of the induced metric I .M , which are the directions of the gradient and level curves of .u. The size along these curves is determined by the error metric .Me . The edges are curved by moving the midpoint freely to minimize the edge length in the error metric. Bottom four figures: Surface plot and adapted meshes for .u 2 (x, y) and target complexity . N = 100, 200, 400

Optimally Convergent Isoparametric . P 2 Mesh Generation

391

10 0

10 -1

10 -2

10 -3

10 -4

10 -5

10 -6

10

15

20

25

30

35

40

10

15

20

25

30

35

40

10-1

10-2

10-3

10-4

Fig. 5 Top: Interpolation error for.u 1 in. L 2 (squares),. L ∞ (diamonds) and. H 1 (dots) norms for three approaches to edges curving. The error curves obtained by moving the midnode along the orthogonal bisector (blue) and by relocating the optimal midnode at half-length (green) are mostly identical. straight || Bottom:. L 2 (squares) and. L ∞ (diamonds) norm of the displacement vector.||α|| = ||X12 − X12 √ as a function of . N (.∼ 1/ h)

392

A. Bawin et al. 10 1

10 0

10 -1

10 -2

10 -3

10 -4

10 -5 10

15

20

25

30

35

40

10

15

20

25

30

35

40

10 0

10 -1

10 -2

10 -3

Fig. 6 Top: interpolation error for.u 2 in. L 2 (squares),. L ∞ (diamonds) and. H 1 (dots) norms for three approaches to edges curving. Bottom: . L 2 (squares) and . L ∞ (diamonds) norm of the displacement √ straight vector .||α|| = ||X12 − X12 || as a function of . N (.∼ 1/ h)

Optimally Convergent Isoparametric . P 2 Mesh Generation

393

5 Conclusion and Future Work We have presented a methodology for two dimensional curvilinear mesh generation: interpolation error estimator for curved trajectories, generalization of the existing log-simplex algorithm for high-order metric tensor computation and frontal curved mesh generation. Adapted meshes exhibit mild to more pronounced curvature and reach the optimal third order convergence rate for . P 2 Lagrange elements in . L 2 norm. In particular, it was observed that edge curvature, represented by the displacement vector.α, does not necessarily need to decrease faster than the edge length to maintain optimal convergence rates. To keep the computation of the metric field tractable and to reduce the computational cost, the non-homogeneous character of the error polynomial was set aside to use existing techniques mostly as is. More tests are still necessary to assess the impact of this choice. Future work will be focused on this topic, as well as handling curved boundaries and tackling a three-dimensional extension of this framework. Acknowledgements This work was funded by FRIA grant FC29571 (FRS-FNRS). Financial support from the Simulation-based Engineering Science program funded through the CREATE program from the Natural Sciences and Engineering Research Council of Canada is also gratefully acknowledged.

Appendix Defining .ai ≜ κvi⊥ and C¯ 112 ≜ C112 + C121 + C211 C¯ 122 ≜ C122 + C212 + C221 ,

.

the coefficients of the error polynomial (15) are: ( ) 1 C111 a13 + C¯ 112 a12 a2 + C¯ 122 a1 a22 + C222 a23 , 120 ( 1 C111 a12 v1 + C¯ 112 (a12 v2 + 2a1 a2 v1 ) c2 = 20 ) + C¯ 122 (a22 v1 + 2a1 a2 v2 ) + C222 a22 v2 , . ( 1 C111 a1 v12 + C¯ 112 (a2 v12 + 2a1 v1 v2 ) c3 = 8 + C¯ 122 (a1 v22 + 2a2 v1 v2 ) + C222 a2 v22 ) + H11 a12 + (H12 + H21 )a1 a2 + H22 a22 , c1 =

(48)

394

A. Bawin et al.

c4 = .

) ( 1 C111 v13 + 3C¯ 112 v12 v2 + 3C¯ 122 v1 v22 + C222 v23 6 1 + (H11 a1 v1 + H12 a2 v1 + H21 a1 v2 + H22 a2 v2 ) . 2

(49)

References 1. Toulorge, T., Geuzaine, C., Remacle, J.F., Lambrechts, J.: Robust untangling of curvilinear meshes. Journal of Computational Physics 254, 8–26 (2013) 2. Fortunato, M., Persson, P.O.: High-order unstructured curved mesh generation using the winslow equations. Journal of Computational Physics 307, 1–14 (2016) 3. Zhang, R., Johnen, A., Remacle, J.F.: Curvilinear mesh adaptation. In: International Meshing Roundtable, pp. 57–69. Springer (2018) 4. Aparicio-Estrems, G., Gargallo-Peiró, A., Roca, X.: Defining a stretching and alignment aware quality measure for linear and curved 2d meshes. In: International Meshing Roundtable, pp. 37–55. Springer (2018) 5. Aparicio-Estrems, G., Gargallo-Peiró, A., Roca, X.: High-order metric interpolation for curved .r −adaptation by distortion minimization. In: Proceedings of the 2022 SIAM International Meshing Roundtable, pp. 11–22 (2022) 6. Zhang, R., Johnen, A., Remacle, J.F., Henrotte, F., Bawin, A.: The generation of unit. p 2 meshes: error estimation and mesh adaptation. In: International Meshing Roundtable (virtual), pp. 1–13 (2021) 7. Rochery, L., Loseille, A.: . p 2 cavity operator with metric-based volume and surface curvature. In: Proceedings of the 29th International Meshing Roundtable, pp. 193–210 (2021) 8. Alauzet, F., Loseille, A., Dervieux, A., Frey, P.: Multi-dimensional continuous metric for mesh adaptation. In: Proceedings of the 15th international meshing roundtable, pp. 191–214. Springer (2006) 9. Loseille, A.: Adaptation de maillage anisotrope 3d multi-échelles et ciblée à une fonctionnelle pour la mécanique des fluides. application à la prédiction haute-fidélité du bang sonique. Ph.D. thesis, Université Pierre et Marie Curie-Paris VI (2008) 10. Ciarlet, P.G., Raviart, P.A.: Interpolation theory over curved elements, with applications to finite element methods. Computer Methods in Applied Mechanics and Engineering 1(2), 217–249 (1972) 11. Mbinky, E.C.: Adaptation de maillages pour des schémas numériques d’ordre très élevé. Ph.D. thesis, Université Pierre et Marie Curie-Paris VI (2013) 12. Hecht, F., Kuate, R.: An approximation of anisotropic metrics from higher order interpolation error for triangular mesh adaptation. Journal of computational and applied mathematics 258, 99–115 (2014) 13. Coulaud, O., Loseille, A.: Very high order anisotropic metric-based mesh adaptation in 3d. Procedia engineering 163, 353–365 (2016) 14. Shifrin, T.: Differential geometry: a first course in curves and surfaces. University of Georgia (2015) 15. Loseille, A., Alauzet, F.: Continuous mesh framework part i: well-posed continuous interpolation error. SIAM Journal on Numerical Analysis 49(1), 38–60 (2011) 16. Loseille, A., Alauzet, F.: Continuous mesh framework part ii: validations and applications. SIAM Journal on Numerical Analysis 49(1), 61–86 (2011) 17. Frey, P.J., George, P.L.: Mesh generation: application to finite elements. Iste (2007) 18. Loseille, A.: Metric-orthogonal anisotropic mesh generation. Procedia Engineering 82, 403– 415 (2014) 19. Dobrzynski, C.: MMG3D: User guide (2012)

Optimally Convergent Isoparametric . P 2 Mesh Generation

395

20. Beckmann, N., Kriegel, H.P., Schneider, R., Seeger, B.: The r*-tree: An efficient and robust access method for points and rectangles. In: Proceedings of the 1990 ACM SIGMOD international conference on Management of data, pp. 322–331 (1990) 21. Botti, L.: Influence of reference-to-physical frame mappings on approximation properties of discontinuous piecewise polynomial spaces. Journal of Scientific Computing 52(3), 675–703 (2012) 22. Johnen, A., Remacle, J.F., Geuzaine, C.: Geometrical validity of curvilinear finite elements. Journal of Computational Physics 233, 359–372 (2013) 23. Gamrath, G., Anderson, D., Bestuzheva, K., et al.: The scip optimization suite 7.0. Tech. Rep. 20-10, ZIB, Takustr. 7, 14195 Berlin (2020) 24. Agarwal, S., Mierle, K., Team, T.C.S.: Ceres Solver (2022). URL https://github.com/ceressolver/ceres-solver

Towards a Volume Mesh Generator Tailored for NEFEM Xi Zou, Sui Bun Lo, Ruben Sevilla, Oubay Hassan, and Kenneth Morgan

1 Introduction Contemporary industrial design requires building computer aided engineering (CAE) models suitable for simulation. This task is known to be a major bottleneck due to the excessive human intervention required when processing the upstream computer aided design (CAD) model [16, 22]. This is mainly due to the fact that CAD models often contain excessive details, which prevent generating a mesh that leads to an efficient numerical simulation [6, 10]. In general, the generation of meshes from complex CAD models largely depends upon the type of simulation, because of the numerous multiscale features which may or may not be negligible for the physical problem of interest. Traditional mesh generators produce small, often distorted, elements, when the mesh size desired by the user significantly exceeds the dimension of the geometric features. Large research efforts have been made into methods of de-featuring complex CAD models [3, 8]. However, fully automatised de-featuring has not yet been achieved. Firstly, it is not always possible to make an accurate prediction of the effect of the de-featuring before X. Zou (B) · S. B. Lo · R. Sevilla · O. Hassan · K. Morgan Zienkiewicz Institute for Modelling, Data and AI Faculty of Science and Engineering, Swansea University, Swansea SA1 8EN, UK e-mail: [email protected] S. B. Lo e-mail: [email protected] R. Sevilla e-mail: [email protected] O. Hassan e-mail: [email protected] K. Morgan e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 E. Ruiz-Gironés et al. (eds.), SIAM International Meshing Roundtable 2023, Lecture Notes in Computational Science and Engineering 147, https://doi.org/10.1007/978-3-031-40594-5_18

397

398

X. Zou et al.

actually performing simulations. Secondly, the de-featuring requirements differ from problem to problem, due to their physical nature. For instance, a small feature can be negligible in a low frequency acoustic analysis, but could have a significant impact on the same problem at higher frequencies. Finally, de-featuring also relies on the desired approximation level, and this is highly dependent on the perspective of the analyst. This problem has been addressed by the virtual topology concept [17], that provides the capability to modify topological entities without changing the geometry. The strategy is particularly attractive for methods involving high-order interpolation, where coarse elements with a high-order polynomial approximation [23] are preferred to exploit the potential benefits. Nevertheless, the mesh has to be refined to guarantee accurate and reliable results at features involving abrupt geometric changes in terms of the normal to the boundary representation (B-rep) [2]. It is known that the commonly used isoparametric elements discretise the boundary of the computational domain as an approximation of the true B-rep. The accurate CAD data, typically the curves and surfaces parametrised by non-uniform rational B-splines (NURBS), are not used during the numerical simulation. This is also true for a simulation with high order methods. As a result, the polynomial approximated boundary representation will introduce unavoidable geometric error, which can be the dominating error in the simulation [1, 4, 11, 24]. Isogeometric methods replace the approximating polynomial functions with NURBS functions, trying to make use of the exact representation of the domain [7]. However, it requires a fundamental change in the way CAD models are prepared. Traditionally, geometry modelling kernels embedded in all industrial CAD platforms [21] focus on the B-rep of a CAD geometry, while isogeometric methods require a tri-variate NURBS description of the solid domain. Furthermore, in isogeometric methods, small elements are still required when small geometric features are present in the original CAD model. The NURBS-enhanced finite element method (NEFEM) [14] addresses this problem by a complete separation of the concepts of geometry and solution approximation. Within NEFEM, the NURBS parametrised B-rep, available from the CAD model, is used only for the geometric description of the domain boundary, whereas standard polynomials are used for the approximation of the solution. With such a separation, the error due to geometric approximation is completely removed. In addition, the introduced NEFEM elements are able to traverse curves and surfaces in the B-rep. This implies that the element sizes are not restricted by small geometric features, but are entirely controlled by the user specification, and the need for de-featuring is consequently avoided. Published solutions to electromagnetics, fluid dynamics, solid mechanics and heat transfer problems demonstrate the potential of the method [12, 13, 18, 20], but the lack of a dedicated mesh generator for NEFEM has hampered its application to complex problems. An automatic NEFEM mesh generator for two-dimensional simulations was introduced in [15]. The NEFEM surface mesh generation in three-dimensional space was recently presented for the first time in [25]. This paper presents the latest efforts made towards the development of a three-dimensional volume mesh generator tailored for

Towards a Volume Mesh Generator Tailored for NEFEM

399

NEFEM. The boundary discretisation is first performed by remeshing a standard surface mesh, allowing faces traversing multiple surfaces whilst maintaining the exact B-rep. The generation process for the NEFEM volume mesh is discussed in detail, including the new entities that have been devised to store the information required by NEFEM elements. Several illustrative examples will be presented to show the potential of the proposed technique.

2 NEFEM Fundamentals

Let us consider an open bounded domain $\Omega \subset \mathbb{R}^3$. The boundary of the domain, denoted by $\partial\Omega$, is described by a collection of NURBS curves $\mathcal{C} := \{C_i\}_{i=1}^{n_c}$ and surfaces $\mathcal{S} := \{S_j\}_{j=1}^{n_s}$. In particular, each boundary curve or surface can be parametrised as

$$C_i : [0,1] \longrightarrow C_i([0,1]) \subseteq \partial\Omega \subset \mathbb{R}^3; \qquad S_j : [0,1]^2 \longrightarrow S_j([0,1]^2) \subseteq \partial\Omega \subset \mathbb{R}^3.$$

A standard FEM mesh is typically generated in a bottom-up manner, following the point, curve, surface and volume hierarchy of the CAD model. In this process, the geometric entities are associated with the meshing entities. Specifically, points of the CAD model define mesh nodes, curves are discretised into edges, surfaces are discretised into facets such as triangles or quadrilaterals, and volumes are divided into elements such as tetrahedra, prisms, pyramids or hexahedra. This procedure naturally induces small elements if the CAD model contains short curves or small surfaces. NEFEM is dedicated to lifting the restriction that small geometric features induce small elements, and a new class of elements is introduced.

2.1 NEFEM Rationale

The key idea of NEFEM [14] is the separation of the geometric approximation and the functional approximation, which are tightly coupled in isoparametric finite elements and isogeometric methods. By decoupling these two concepts, NEFEM generalises the definition of a finite element: the geometry is exactly described by means of the NURBS parametrised B-rep that can be directly obtained from CAD models, whereas the functional approximation is defined using polynomials, as in standard FEM. As a result, NEFEM elements require new quadrature rules to ensure that the exact B-rep is accounted for by the solver. In two dimensions, a typical NEFEM element can be defined as a curved triangle where at least one edge is geometrically defined as a combination of trimmed NURBS curves. Similarly, in three dimensions, a typical NEFEM element can be defined as a tetrahedron where at least one edge or face is geometrically defined as a collection of trimmed NURBS curves or surfaces, respectively.


Fig. 1 Illustration of the generalisation introduced by the concept of NEFEM elements

The new concept of element design is illustrated in Fig. 1 and a detailed discussion can be found in [14]. In the illustrative example of Fig. 1, it can be observed that the exact B-rep is always preserved by the corresponding NEFEM element, regardless of the order of approximation used in the element. In particular, a NEFEM element with low-order interpolation nodes is capable of representing a curved boundary. In addition, Fig. 1 also shows that the face of a NEFEM element can comprise a collection of NURBS surface patches, even with abrupt changes of the normal within the face. It is worth noting that NEFEM elements are restricted to the layer of elements in contact with the boundary of the domain. The large majority of elements in a NEFEM mesh do not have any edge or face on the boundary, and the standard isoparametric FEM approach is used for them. This implies that NEFEM elements are only used near the boundary, and a negligible computational overhead is introduced when compared to the cost of standard finite elements.

2.2 Geometric Mapping of NEFEM Elements

In standard FEM, the shape functions and their derivatives are defined and evaluated at the integration points in the reference element. This information is stored and used to compute the elemental matrices and vectors required by the solver. For each element, the isoparametric mapping between the reference and physical element is used. In NEFEM, the shape functions and their derivatives are defined and evaluated at the integration points of each individual element, directly in the physical domain, and the elementwise matrices and vectors are computed in an ad hoc manner.


Fig. 2 Illustration of the NEFEM geometry mapping for a tetrahedral element with one face defined on three trimmed NURBS surfaces

Therefore, the incorporation of NEFEM elements into an existing solver can be easily achieved by creating a new element type that encloses the CAD data and is accompanied by quadratures tailored for NEFEM elements. To facilitate the quadrature scheme for volumetric NEFEM elements, a particular geometric mapping is devised to encapsulate the NURBS parametrisation of the geometric entities. For instance, in three-dimensional space, a mapping between a polygonal prism and a NEFEM tetrahedron is defined as

$$\psi : R \longrightarrow \Omega_e, \qquad (\lambda, \kappa, \vartheta) \longmapsto \psi(\lambda, \kappa, \vartheta) := (1 - \vartheta)\, S(\lambda, \kappa) + \vartheta\, \boldsymbol{x}_4, \tag{1}$$

where $S(\lambda, \kappa)$ is the parametrisation of the curved boundary face, which might be piecewise when involving multiple NURBS surfaces, and $\boldsymbol{x}_4$ denotes the node interior to the domain. This mapping is illustrated in Fig. 2 for a tetrahedral element with one face traversing three trimmed NURBS surfaces, rendered in distinct colours. In practice, the piecewise definition of a NEFEM element face, such as the bottom face shown in Fig. 2, is described by a subdivision based on the surfaces. This leads to a sub-mesh in which each constituent cell lies on one of the involved surfaces. This elementwise sub-mesh is further discussed in Sect. 3.3.
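For concreteness, the following minimal sketch evaluates the mapping (1), assuming the trimmed surface parametrisation is available as a callable; the function and argument names are hypothetical, not the authors' implementation:

```python
import numpy as np

def nefem_tet_map(surface, x4, lam, kap, theta):
    """Evaluate the NEFEM mapping (1) for a tetrahedron with one curved face:
    psi(lam, kap, theta) = (1 - theta) * S(lam, kap) + theta * x4.

    surface : callable (lam, kap) -> 3-vector on the (possibly piecewise) NURBS face
    x4      : 3-vector, the node interior to the domain
    """
    return (1.0 - theta) * np.asarray(surface(lam, kap), dtype=float) \
           + theta * np.asarray(x4, dtype=float)

# Example: a flat "surface" recovers the standard straight-sided tetrahedron map.
flat = lambda lam, kap: np.array([lam, kap, 0.0])
print(nefem_tet_map(flat, np.array([0.0, 0.0, 1.0]), 0.25, 0.25, 0.5))
```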

3 NEFEM Surface Mesh Generation

In this section, the generation of the NEFEM surface mesh is briefly recalled. In addition, newly developed checks, performed to ensure that a valid volume mesh can be generated from the surface mesh, are presented.


Triangle and tetrahedron elements are considered in this work. The NEFEM surface mesh is a prerequisite for generating the volume mesh, as it provides the boundary discretisation tailored for NEFEM solvers. The surface mesh should satisfy the following requirements:

1. The characteristic element size is dominated by the user-specified spacing, and it is not restricted by the size of geometric features in the CAD model;
2. The surface mesh must not introduce geometric discretisation error, as it must encapsulate the NURBS definition of the geometry;
3. The surface elements should pass visibility checks to enable the efficient creation of volume elements without self-intersections.

The first two requirements have been addressed in previous work [25]. The last requirement is posed to facilitate the volume mesh generation, and it is detailed in Sect. 4.3.

3.1 Surface Meshing Strategy

The NEFEM surface mesh generation starts from an initial surface mesh obtained by a standard mesh generator with a user-defined mesh size. Although this initial mesh is likely to contain numerous elements violating the user-specified spacing, it is required to be watertight and free of self-intersections. A remeshing is then performed on the initial surface mesh, with a dedicated process that allows creating elements traversing multiple surfaces around geometric features, so that the element sizes become compliant with the user specification.

3.2 GS-Points

To register the intersections between an element edge traversing multiple surfaces and a geometric curve, the so-called geometric supporting points, or GS-points, are introduced. The GS-points are associated with their parent elements and are used for mesh generation purposes. It is worth emphasising that they do not introduce any additional degrees of freedom in the solver. However, they are used when devising piecewise quadratures for numerical integration over the faces traversing multiple surfaces and the associated elements. During the surface mesh generation, GS-points are typically created by operations such as edge collapse, edge split or edge flip. In addition, the GS-points can slide along their parent intersection curve to improve element quality. In this paper, the convention is that all vertex nodes are rendered as black dots, while all GS-points are illustrated as green dots, as in Fig. 9.
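As an illustration of how GS-points might be stored, the following sketch shows one possible data layout; the field and method names are assumptions, not the authors' implementation:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class GSPoint:
    """Geometric supporting point registering an edge-curve intersection.

    GS-points carry no solver degrees of freedom; they only support meshing
    operations and the piecewise quadratures of traversing faces.
    """
    curve_id: int                 # intersection curve the point lies on
    t: float                      # parametric coordinate on that curve
    xyz: Tuple[float, float, float]  # Cartesian position, kept in sync with t
    parent_elements: List[int] = field(default_factory=list)

    def slide(self, curve_eval, new_t: float) -> None:
        """Slide along the parent curve (e.g. to improve element quality)."""
        self.t = new_t
        self.xyz = tuple(curve_eval(self.curve_id, new_t))
```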


3.3 The Sub-Mesh

As mentioned in Sect. 2.2, the sub-mesh is required for the definition of a surface element, or a face of a volume element, that traverses multiple surfaces. The sub-mesh, along with the GS-points, is used for quadrature, as it naturally forms the integration cells. The sub-mesh can also be used to represent physical interfaces inside a NEFEM element [18]. It is worth noting that an integration cell cannot traverse surfaces, so an element traversing multiple surfaces must contain at least two integration cells.

The sub-mesh also plays an important role during the NEFEM mesh generation. Operations like edge split and edge flip are common in a mesh generation process. Unlike in standard mesh generators, the sub-mesh is required during NEFEM mesh generation because these operations may involve multiple surfaces and their intersection curves. For instance, when flipping an edge between two NURBS-enhanced triangular elements, the subdivision of both elements is necessary for searching the new diagonal traversing multiple surfaces. In this procedure, the GS-points also serve as the nodes of the sub-mesh of each element. A typical example of the sub-mesh of a NEFEM surface element is shown in Fig. 3. The definition of the triangular surface element with three nodes $x_1$, $x_2$ and $x_3$, where edges $E(x_1, x_2)$ and $E(x_1, x_3)$ traverse a NURBS curve, requires two GS-points, $g_1$ and $g_2$, to register the edge-curve intersections. The two GS-points and three nodes define the vertices of three sub-cells that belong to two different surfaces.
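A possible representation of the sub-mesh, with each integration cell tied to a single parent surface, is sketched below; again, the names are illustrative assumptions rather than the paper's data structures:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class IntegrationCell:
    """Sub-cell of a NEFEM element; it must lie entirely on one surface."""
    surface_id: int                      # unique parent surface of the cell
    vertices: List[Tuple[float, ...]]    # mesh nodes and/or GS-points

@dataclass
class SubMesh:
    """Subdivision of a traversing element into integration cells."""
    cells: List[IntegrationCell]

    def surfaces(self) -> set:
        """Parent surfaces covered by the element; at least two if the
        element traverses multiple surfaces."""
        return {c.surface_id for c in self.cells}
```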

3.4 Validity Check

Validity checks are performed during the creation of NEFEM surface elements to facilitate the creation of volume elements in the next stage.

Fig. 3 Sub-mesh of a typical NEFEM surface element. Sub-cells belonging to different surfaces are filled with distinct colours. The intersection curve is coloured in blue


The first check is performed before collapsing an element edge that traverses multiple intersection curves or surfaces. This check is closely related to the visibility check of Sect. 4.3, and it aims to avoid possible self-intersections in the volume elements. In addition, a second check is performed after having created or updated the NEFEM elements; this check also tries to fix self-intersections by curving the sub-edges.

The first check is carried out when trying to create a new NURBS-enhanced edge. The angles between the normals to the surfaces at each node and each GS-point are computed and checked, as detailed in Algorithm 1. As illustrated in Fig. 4, a local feature in the form of a U-shaped channel involves five surfaces $\{S_i\}$ for $i = 1, \ldots, 5$. When trying to collapse the short edges inside the channel, all related surface normals at the involved nodes of the sub-mesh are compared with the normal at the target node. The criterion for the validity check is chosen as

$$\begin{cases} n_i \cdot n_j \ge -1/2 & \Rightarrow \text{pass;} \\ n_i \cdot n_j < -1/2 & \Rightarrow \text{fail,} \end{cases} \tag{2}$$

where $n_i$ is the normal to surface $S_i$, and the normals are computed locally at the corresponding nodes of the sub-mesh.

Algorithm 1 Validity check routine.
Input: Edge to collapse E(x_b, x_t), with base node x_b and target node x_t
Output: Boolean value isValid
1  Collect the set, S_t, of all parent surfaces of the target node;
2  Collect the set, S_c, of all traversed surfaces for the collapse;
3  for S_i ∈ S_t do
4      if S_i ∉ S_c then
5          Remove S_i from S_t;
6      end if
7  end for
8  Identify the involved normals at the target node, N_t;
9  Collect the set, N_c, of all normals at involved sub-nodes for the collapse;
10 Initialise isValid = true;
11 for n_i ∈ N_t do
12     for n_j ∈ N_c do
13         if n_i · n_j < −1/2 then
14             isValid = false;
15             return;
16         end if
17     end for
18 end for
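A direct transcription of Algorithm 1, using the criterion (2), might look as follows; the container types and helper names are hypothetical:

```python
import numpy as np

def validity_check(target_surfaces, collapse_surfaces,
                   target_normals, collapse_normals):
    """Check an edge collapse following Algorithm 1 and criterion (2).

    target_surfaces   : surfaces adjacent to the target node
    collapse_surfaces : surfaces traversed by the collapse
    target_normals    : dict {surface_id: unit normal at the target node}
    collapse_normals  : unit normals at the involved sub-mesh nodes
    """
    # Steps 1-7: keep only target-node surfaces that the collapse traverses.
    retained = [s for s in target_surfaces if s in collapse_surfaces]
    normals_t = [n for s, n in target_normals.items() if s in retained]

    # Steps 10-18: fail as soon as two normals differ by more than 120 degrees.
    for ni in normals_t:
        for nj in collapse_normals:
            if np.dot(ni, nj) < -0.5:   # criterion (2)
                return False
    return True
```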

Figure 4b presents two examples of testing possible new NURBS-enhanced edges before an edge collapse. The first candidate edge, $E(x_5, x_7)$, is obtained from collapsing node $x_6$ to $x_7$, and the normal $n_3$ at the target node $x_7$ is compared with all normals involved in this collapse, such as $n_4$ at GS-point $g_2$ and $n_5$ at $g_5$. This case passes the validity check.


Fig. 4 Illustration of the validity check at a U-channel feature. a Two edge collapsing scenarios: from $x_6$ to $x_7$, and from $x_2$ to $x_4$. b Proposed new NURBS-enhanced edges, showing selected normals for validity checks

The second option involves the candidate NURBS-enhanced edge $E(x_1, x_4)$, where the normal $n_4$ at the target node $x_4$ is opposite to $n_2$ at $x_1$. Therefore, this configuration fails the validity check, and the collapse of edge $E(x_2, x_4)$ is prevented.

In the rare case that a sub-edge intersects an intersection curve, as shown in Fig. 5a, the second validity check detects and fixes it by curving the sub-edge. The intersection between edge $E(x_a, x_b)$ and the intersection curve of surfaces $S_1$ and $S_3$ can easily be detected by seeding a number of sampling points along the intersection curve. This type of self-intersection occurs because surface $S_3$ is trimmed by a circle. The trimming circle is the image of a circle in the parameter space of the NURBS surface $S_3(\lambda, \kappa)$, as shown in Fig. 5c. A simple fix is obtained by replacing the straight edge $E(x_a, x_b)$ in the parametric space with a cubic curve, as shown in Fig. 5d. The fixed configuration after the second validity check is shown in Fig. 5b.
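The sampling-based detection can be sketched as follows, under the assumption that the intersection curve is available as a callable in the $(\lambda, \kappa)$ parameter space of the surface; the tolerance, sample count and names are illustrative:

```python
import numpy as np

def edge_crosses_curve(pa, pb, curve, n_samples=64, tol=1e-8):
    """Detect whether the straight segment (pa, pb), expressed in the 2D
    parameter space of a surface, crosses a trimming curve sampled at
    n_samples points. Returns True at the first detected sign change."""
    pa, pb = np.asarray(pa, float), np.asarray(pb, float)
    d = pb - pa
    # Normal of the segment's supporting line in 2D parameter space.
    n = np.array([-d[1], d[0]])
    n /= np.linalg.norm(n)
    prev_side = None
    for t in np.linspace(0.0, 1.0, n_samples):
        c = np.asarray(curve(t), float) - pa   # curve sample relative to edge
        side = np.dot(c, n)                    # signed distance to the line
        along = np.dot(c, d) / np.dot(d, d)    # position along the segment
        # Consecutive samples straddling the line, within the segment extent,
        # flag a crossing.
        if prev_side is not None and side * prev_side < -tol and 0.0 <= along <= 1.0:
            return True
        prev_side = side
    return False
```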

4 NEFEM Volume Mesh Generation

This section presents the latest efforts made towards the generation of NEFEM volume meshes. According to the geometric entities that are part of the B-rep, a tetrahedral element of interest falls into one of two types:

• An element with at least one face located on the boundary.
• An element with at least one edge, but with no faces, located on the boundary.

An element of each type is further classified in terms of the number of CAD surfaces that it traverses.


Fig. 5 Illustration of the self-intersection fix at the bottom of a cylindrical feature. a Edge $E(x_a, x_b)$ intersecting an intersection curve at the red arrow. b The intersection fixed by curving $E(x_a, x_b)$ within surface $S_3$. c The parameter space of surface $S_3$, featuring the intersection with the trimming circle. d The cubic curve used to fix the intersection

Elements with faces or edges not traversing multiple surfaces are grown using the same technique available in standard mesh generators. They are still flagged so that the solver can account for the NURBS boundary representation. However, special care must be taken for elements with faces or edges traversing multiple surfaces.

4.1 Volume Meshing Strategy

The volume mesh generation starts from a valid NEFEM surface mesh, which already encapsulates the GS-points as well as the integration cells. The strategy for the volume meshing is to first generate a layer of NURBS-enhanced volume elements that covers the featured surfaces. Next, the exterior facets of this first layer of volume elements are extracted to form a new surface mesh. This extracted surface mesh contains only standard elements, so it can be sent to a standard volume mesh generator to obtain the volume elements of the remaining part of the domain.

Fig. 6 NEFEM volume mesh generation procedure: NEFEM surface mesh → NEFEM volume layer → extracted surface mesh → interior volume mesh (FLITE) → stitch → NEFEM volume mesh

The NEFEM volume mesh is finally obtained by stitching the NEFEM layer and the standard interior elements. In this work, the FLITE mesh generator [19] is used to create the standard meshes, and the procedure is illustrated in Fig. 6.

Remark 1 The stitching of a boundary layer mesh and an interior mesh, as discussed in [5, 9], is an established procedure in standard mesh generation. The presented strategy is dedicated to generating the geometry-persistent mesh layer that is valid for NEFEM solvers. As the interior mesh is generated after the boundary layer mesh, the stitching of the two meshes is achieved by merely renumbering the corresponding nodes.
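Since the interior mesh is generated from the extracted faces, the stitch reduces to identifying coincident nodes and renumbering. A minimal sketch, assuming simple coordinate hashing (the function and tolerance are illustrative, not the FLITE interface):

```python
def stitch(layer_nodes, layer_tets, interior_nodes, interior_tets, tol=1e-10):
    """Merge the NEFEM layer and the interior mesh by renumbering shared nodes."""
    key = lambda p: tuple(round(c / tol) for c in p)
    index = {key(p): i for i, p in enumerate(layer_nodes)}
    nodes = list(layer_nodes)
    remap = {}
    for j, p in enumerate(interior_nodes):
        k = key(p)
        if k in index:                 # node shared with the layer: reuse it
            remap[j] = index[k]
        else:                          # genuinely interior node: append it
            remap[j] = len(nodes)
            nodes.append(p)
    tets = list(layer_tets) + [tuple(remap[v] for v in t) for t in interior_tets]
    return nodes, tets
```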

4.2 Growing Volume Elements

To guide the growth of volume elements into the three-dimensional domain, normal vectors are first computed based on the surface mesh. Unlike a standard triangular element, a NEFEM triangular element can have a non-unique definition of its normal, as it can traverse multiple surfaces. Thus, it is not trivial to evaluate the normal for a face or its edges. As mentioned in Sect. 3.3, each integration cell is associated with a unique parent surface. Therefore, for each integration cell, the normal vector is unique to that parent surface, and a smoothed normal can be obtained at each node of the sub-mesh, which may be a node of the mesh or a GS-point. This also implies that a sequence of normal vectors can be extracted along an element edge as it traverses multiple surfaces. Several smoothing options have been tested, such as surface-based averaging, weighted averaging and Laplacian smoothing. It was found that surface-based averaging provides satisfactory normal vectors for the tested geometries. The possible choices of the normal to grow a volume element from a typical NEFEM surface element are illustrated in Fig. 7.


Fig. 7 Choices of normal vectors to grow a tetrahedral element. a Normal vectors at GS-points. b Normal vector at plane element centroid. c Normal vectors at integration cell centroids


The surface element with vertices $\{x_a, x_b, x_c\}$ traverses surfaces $\{S_1, S_4, S_7\}$. The smoothed normals at GS-points $g_1$ and $g_2$ are shown in Fig. 7a, and the naive normal, evaluated using only the vertices, is plotted in Fig. 7b at the apparent centroid. In contrast, Fig. 7c shows the four normals associated with the integration cells, each depicted at the centroid of its cell.

The main step in growing volume elements from the surface mesh is finding a suitable normal. The first attempt loops through all edges in the surface mesh and checks the dihedral angle $\theta$. A tetrahedron is created by linking the two opposite vertices when $\theta < 2\pi/3$, closing the two triangles joined by the edge. In most cases, it is necessary to find a normal on the edge to create a top node above the edge, and to try to link it with all vertices and GS-points of the two triangles sharing the edge. During this linking process, self-intersection checks are performed by evaluating the volume of the newly formed sub-cell tetrahedra. If a self-intersection is identified, the normal vector is tuned by scaling, and its base point is changed by sliding along the edge to find another suitable location that is free from self-intersection.

A typical scenario for volume element growth is illustrated in Fig. 8. Three surface elements traverse surfaces $S_1$ and $S_2$, rendered in red and yellow. During an edge-based loop, a suitable normal vector $n_{g_7}$ is found at GS-point $g_7$ of edge $E(x_2, x_4)$, as depicted in Fig. 8a. Two tetrahedral elements, coloured in blue and green, are grown with the guidance of $n_{g_7}$, sharing the same new vertex $x_6$, as shown in Fig. 8b. Other tetrahedra are grown during this edge-based loop, including the one with a new vertex $x_7$. A second edge loop is performed to close the edges between two grown tetrahedra, where the dihedral angles between element faces are checked.


Fig. 8 Typical scenario to grow tetrahedral elements. a Normal vector at a GS-point to guide the growth. b Two grown tetrahedra. c Other grown tetrahedra; vertices $x_6$ and $x_7$ to be linked to form a new tetrahedron

Fig. 9 Sub-mesh of a typical NEFEM volume element grown from a NEFEM surface element

In the scenario presented in Fig. 8c, a new tetrahedron is created by simply linking the existing vertices $x_6$ and $x_7$. As a tetrahedral element is grown from a surface element, such as the one shown in Fig. 3, it inherits the subdivision of the surface element, and three sub-tetrahedra are grown to form the integration cells for computing quadrature over the NURBS-enhanced tetrahedron. This can also be viewed as a straightforward subdivision of the tetrahedral element, guided by the sub-cells on a traversing face, as illustrated in Fig. 9.
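The self-intersection test during linking amounts to checking the signed volume of every candidate sub-cell tetrahedron; a minimal sketch, assuming consistently oriented base sub-cells and an illustrative tolerance:

```python
import numpy as np

def signed_volume(a, b, c, d):
    """Signed volume of tetrahedron (a, b, c, d): positive when d lies on the
    side of the oriented face (a, b, c) given by the right-hand rule."""
    return np.dot(np.cross(np.asarray(b) - a, np.asarray(c) - a),
                  np.asarray(d) - a) / 6.0

def links_without_intersection(sub_cells, top, tol=1e-12):
    """Accept the candidate top node only if every sub-cell tetrahedron formed
    with it has positive volume; otherwise the normal is rescaled or slid."""
    return all(signed_volume(*cell, top) > tol for cell in sub_cells)
```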



Fig. 10 Illustration of the visibility issue for a NEFEM volume element and its base surface element at a step feature. a Volume element based on the surface element in (c). b A volume sub-cell based on (d), exhibiting self-intersections within the volume element. c Surface element. d A surface sub-cell

4.3 Self-intersection Check

Taking the creation of a tetrahedron from a triangle as the example, the objective is to ensure that the top vertex of the tetrahedron is visible from any point in the base triangle. When the visibility requirement is met, all ridges of the tetrahedron, excluding the ones corresponding to edges of the base triangle, are straight, and this enables an efficient subdivision of the volume element into volumetric integration cells. It is worth noting that the visibility requirement is not mandatory for a valid NEFEM element: a self-intersecting element can be fixed by curving the interior edges to maintain validity. However, the strategy presented here tries to keep the maximum number of interior edges straight, with the objective of accelerating the solver.

At some convex geometric features, special care has to be taken to ensure the visibility from the top node to the bottom sub-nodes. The sharp step feature in Fig. 10 presents a scenario in which a violation occurs, resulting in a self-intersecting NEFEM volume element. The volume element shown in Fig. 10a is based on the surface element in Fig. 10c, which traverses surfaces $\{S_1, S_2, S_3\}$ as well as intersection curves $\{C_1, C_2\}$, and four GS-points have been included. It can be seen that the dihedral angles at the intersection curves are considerably sharp and include both convex and concave instances. Moreover, surface $S_2$ is a narrow strip folding between surfaces $S_1$ and $S_3$.


Fig. 11 NURBS surfaces in the CAD model of a flat plate with two cylinders

Table 1 Geometric data of the flat plate with cylinders model
Number of NURBS surfaces: 12
Number of NURBS curves: 24
Minimum curve length: 0.019
Maximum curve length: 2.000

A surface integration cell with nodes $\{g_1, x_2, g_2\}$, as shown in Fig. 10d, forms the bottom face of the volume integration cell illustrated in Fig. 10b. As highlighted by the red dashed lines, the edges $E(x_4, g_1)$ and $E(x_4, g_2)$ penetrate both $S_1$ and $S_2$. In other words, the top node $x_4$ lacks visibility to the bottom sub-mesh nodes $g_1$ and $g_2$, which results in a self-intersecting subdivision of the volume element. This self-intersecting volume element can be fixed by curving the edges $E(x_4, g_1)$ and $E(x_4, g_2)$, and potentially $E(x_4, x_2)$ and $E(x_4, x_3)$.

5 Examples

This section presents some examples that demonstrate the strategy described in the previous sections for generating NEFEM volume meshes.

5.1 A Flat Plate with Two Cylinders

The first example considers a flat plate with two cylinders, as shown in Fig. 11. The geometric data is listed in Table 1. The original FEM mesh, which does not comply with the user-defined spacing, is shown in Fig. 12. The NEFEM surface mesh, obtained using the strategy proposed in [25], is generated with a uniform mesh size that is independent of the thickness of the plate and the heights or diameters of the cylinders, as presented in Fig. 13a.


Fig. 12 FEM surface mesh of the flat plate intersected by two cylinders, showing elements not complying with the user-defined spacing

Fig. 13 NEFEM meshing process for the flat plate intersected by two cylinders. a NEFEM surface mesh. b NEFEM volume element layer rendered in green. c Clipped NEFEM volume mesh including the interior volumetric elements

Based on this boundary discretisation, with the desired element size, the first layer of NEFEM volume elements is generated with the strategy described in Sect. 4, as shown in Fig. 13b, where the volumetric elements are rendered in green. The interior volumetric elements are generated, using a Delaunay method for tetrahedral meshing, from the standard triangulation formed by the extracted exterior faces of the NEFEM volume layer and the surface mesh. For this model, there are 121 tetrahedra in the NEFEM volume layer and 50 815 interior volumetric elements. As can be seen in Fig. 13c, there are far fewer NEFEM volume elements than standard elements, so the computational cost introduced by the NEFEM elements has a minor impact on the solver, while the minimum element size is significantly improved. As listed in Table 2, the minimum element edge length, normalised


Table 2 Normalised edge lengths for the flat plate intersected by two cylinders
Minimum edge length in FEM mesh: 0.045
Minimum edge length in NEFEM mesh: 0.458
Increase factor: 10.18

Fig. 14 NURBS surfaces in the CAD model of a wing with blunt trailing edge

Table 3 Geometric data of the wing model
Number of NURBS surfaces: 5
Number of NURBS curves: 9
Minimum curve length: 7.27
Maximum curve length: 1 381.12

with the user-specified spacing, has increased by more than a factor of 10 in the NEFEM mesh. This is considered crucial when transient simulations using explicit time marching are of interest.

5.2 A Wing with a Blunt Trailing Edge

This example considers the generation of the NEFEM mesh for a wing with a blunt trailing edge. In this example, the ability to handle a user-specified non-uniform mesh spacing is also demonstrated. The NURBS surfaces defining the wing are presented in Fig. 14 and the geometric data for this model is summarised in Table 3. A non-uniform mesh spacing has been specified using two line sources, at the leading and trailing edges, with a stretching ratio equal to five. Although refinement is introduced by the line sources, the prescribed mesh size is greater than the length of the shortest curve. The resulting FEM mesh, shown in Fig. 15, contains numerous small elements at the blunt trailing edge as well as at the wing tip. The NEFEM surface mesh, shown in Fig. 16a, has eliminated the small elements, and the NEFEM volume layer is created at the blunt trailing edge feature, as presented in Fig. 16b. The number of NURBS-enhanced tetrahedral elements is 179, while the number of interior volumetric elements is 1 458 208. From Table 4 it can be seen that the normalised minimum edge length in the NEFEM mesh exceeds that of the FEM mesh by more than 7 times. This again demonstrates that the NEFEM elements take up only a negligible portion of the whole mesh, but can improve the minimum element size to enable larger time steps in explicit solvers.


Fig. 15 FEM surface mesh of the wing, showing elements not complying with the user-defined spacing

Fig. 16 NEFEM mesh process for the wing with a blunt trailing edge. a NEFEM surface mesh. b NEFEM volume element layer rendered in green. c Clipped NEFEM volume mesh including the interior volumetric elements

Table 4 Normalised edge lengths for the wing
Minimum edge length in FEM mesh: 0.06
Minimum edge length in NEFEM mesh: 0.46
Increase factor: 7.67


5.3 Falcon Aircraft

In this example, a full aircraft model is considered to demonstrate the capability of handling complex geometries. A variety of geometric features are present in the CAD geometry, such as very short curves and small surfaces, particularly at the wing tips.


Fig. 17 NURBS surfaces in the CAD model of the Falcon aircraft

Table 5 Geometric data of the Falcon model
Number of NURBS surfaces: 48
Number of NURBS curves: 100
Minimum curve length: 0.37
Maximum curve length: 10.61

Fig. 18 FEM surface mesh of the Falcon, showing elements not complying with the user-defined spacing

The characteristic thickness of the wing is about 0.2, which is smaller than the minimum curve length, and this poses a particular challenge for the surface mesh generation. The NURBS surfaces of the CAD model are presented in Fig. 17 and the geometric data is summarised in Table 5. The original FEM mesh, which does not comply with the user-defined spacing, is shown in Fig. 18. Although all surface elements are NURBS-enhanced, as they discretise the complex B-rep, only a few elements traverse multiple surfaces. Therefore, the first layer of NEFEM volume elements, which focuses on the growth of traversing volume elements, contains a small number of tetrahedral elements, as presented in Fig. 19. The total number of tetrahedral elements in the volume mesh is 229 693, of which 28 are NEFEM tetrahedra involving multiple surface definitions during integration. Table 6 shows that the normalised minimum edge length in the NEFEM mesh has improved by a factor of 4.5 over the FEM mesh. It is worth noting that a very small number of elements with a spacing well below the user-defined spacing is enough to make the solution of a transient problem with explicit time marching unfeasible. Therefore, the ability to lift this restriction has significant implications for the solver.


Fig. 19 NEFEM mesh process for the Falcon. a NEFEM surface mesh. b NEFEM volume element layer rendered in green. c Clipped NEFEM volume mesh including the interior volumetric elements

Table 6 Normalised edge lengths for the Falcon model
Minimum edge length in FEM mesh: 0.06
Minimum edge length in NEFEM mesh: 0.27
Increase factor: 4.50

6 Concluding Remarks

A method dedicated to generating volume meshes tailored for NEFEM has been presented for the first time. The technique is capable of generating volume meshes where the exact boundary representation, provided by the NURBS parametrisation from the CAD model, is encapsulated in the geometric definition of NEFEM elements. As a result, small geometric features present in the CAD model no longer restrict the element size in NEFEM meshes. This completely removes the need for the time-consuming de-featuring process on complex CAD models and, at the same time, eliminates the geometric error introduced by the de-featuring process or by traditional mesh generators.

Given a CAD geometry in the form of a B-rep, the proposed strategy starts by generating an initial surface mesh using a standard mesh generator. Guided by the user-defined spacing, elements near undersized geometric features are remeshed, and the new elements are allowed to traverse multiple surfaces, provided they pass the dedicated validity check. This process results in a NEFEM surface mesh suitable for the volume mesh generation stage. Various normal vectors are defined and computed on the NEFEM surface elements to guide the growth of volume elements.


During the growth of each volume element, self-intersection checks are performed to ensure element validity. The volume elements grown from traversing surface elements form the first layer of NEFEM volume elements, whose exterior faces are extracted and merged with the NEFEM surface mesh, so that a standard volume mesh generator can be used to obtain the remainder of the interior volume mesh. Examples have been presented to demonstrate the applicability and potential of the proposed method. For completeness, the CAD model, the initial FEM surface mesh, and the NEFEM surface and volume meshes are shown. The examples involve geometries where the CAD model contains very small edges, such as the wing with a blunt trailing edge. The resulting NEFEM meshes exhibit a spacing closely matching the user specification, even when the CAD model contains small features. Future work will involve a new definition and improvement of element quality, the extension to high-order interpolations, and the integration with a NEFEM solver for practical applications.

References

1. Bassi, F., Rebay, S.: High-order accurate discontinuous finite element solution of the 2D Euler equations. Journal of Computational Physics 138(2), 251–285 (1997)
2. Blacker, T.D., Owen, S.J., Staten, M.L., et al.: CUBIT geometry and mesh generation toolkit 15.1 user documentation. Tech. rep., Sandia National Laboratories (SNL-NM) (2016)
3. Danglade, F., Pernot, J.P., Véron, P.: On the use of machine learning to defeature CAD models for simulation. Computer-Aided Design and Applications 11(3), 358–368 (2014)
4. Dawson, M., Sevilla, R., Morgan, K.: The application of a high-order discontinuous Galerkin time-domain method for the computation of electromagnetic resonant modes. Applied Mathematical Modelling 55, 94–108 (2018)
5. Field, D.A.: Automatic generation of transitional meshes. International Journal for Numerical Methods in Engineering 50(8), 1861–1876 (2001). https://doi.org/10.1002/nme.98
6. Gammon, M., Bucklow, H., Fairey, R.: A review of common geometry issues affecting mesh generation. AIAA Aerospace Sciences Meeting (2018). https://doi.org/10.2514/6.2018-1402
7. Hughes, T.J.R., Cottrell, J.A., Bazilevs, Y.: Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Computer Methods in Applied Mechanics and Engineering 194(39–41), 4135–4195 (2005)
8. Mobley, A.V., Carroll, M.P., Canann, S.A.: An object oriented approach to geometry defeaturing for finite element meshing. In: 7th International Meshing Roundtable (IMR), pp. 547–563 (1998)
9. Owen, S.J., Saigal, S.: Formation of pyramid elements for hexahedra to tetrahedra transitions. Computer Methods in Applied Mechanics and Engineering 190(34), 4505–4518 (2001). https://doi.org/10.1016/S0045-7825(00)00330-3
10. Park, M.A., Kleb, W.L., Jones, W.T., Krakos, J.A., Michal, T.R., Loseille, A., Haimes, R., Dannenhoffer, J.: Geometry modeling for unstructured mesh adaptation. In: AIAA Aviation 2019 Forum, p. 2946 (2019)
11. Sevilla, R.: HDG-NEFEM for two dimensional linear elasticity. Computers & Structures 220, 69–80 (2019)
12. Sevilla, R., Fernández-Méndez, S., Huerta, A.: NURBS-enhanced finite element method for Euler equations. International Journal for Numerical Methods in Fluids 57(9), 1051–1069 (2008)
13. Sevilla, R., Fernández-Méndez, S., Huerta, A.: NURBS-enhanced finite element method (NEFEM). International Journal for Numerical Methods in Engineering 76(1), 56–83 (2008)


14. Sevilla, R., Fernández-Méndez, S., Huerta, A.: NURBS-enhanced finite element method (NEFEM): A seamless bridge between CAD and FEM. Archives of Computational Methods in Engineering 18(4), 441–484 (2011)
15. Sevilla, R., Rees, L., Hassan, O.: The generation of triangular meshes for NURBS-enhanced FEM. International Journal for Numerical Methods in Engineering 108(8), 941–968 (2016)
16. Shapiro, V., Tsukanov, I., Grishin, A.: Geometric issues in computer aided design/computer aided engineering integration. Journal of Computing and Information Science in Engineering 11(2) (2011)
17. Sheffer, A., Bercovier, M., Blacker, T., Clements, J.: Virtual topology operators for meshing. International Journal of Computational Geometry & Applications 10(03), 309–331 (2000)
18. Soghrati, S., Merel, R.A.: NURBS enhanced HIFEM: A fully mesh-independent method with zero geometric discretization error. Finite Elements in Analysis and Design 120, 68–79 (2016). https://doi.org/10.1016/j.finel.2016.06.007
19. Sørensen, K., Hassan, O., Morgan, K., Weatherill, N.: A multigrid accelerated hybrid unstructured mesh method for 3D compressible turbulent flow. Computational Mechanics 31(1–2), 101–114 (2003)
20. Tan, M.H., Safdari, M., Najafi, A.R., Geubelle, P.H.: A NURBS-based interface-enriched generalized finite element scheme for the thermal analysis and design of microvascular composites. Computer Methods in Applied Mechanics and Engineering 283, 1382–1400 (2015). https://doi.org/10.1016/j.cma.2014.09.008
21. Taylor, N.J., Haimes, R.: Geometry modelling: Underlying concepts and requirements for computational simulation. In: 2018 Fluid Dynamics Conference, p. 3402 (2018)
22. Thakur, A., Banerjee, A.G., Gupta, S.K.: A survey of CAD model simplification techniques for physics-based simulation applications. Computer-Aided Design 41(2), 65–80 (2009)
23. Wang, Z.J., Fidkowski, K., Abgrall, R., Bassi, F., Caraeni, D., Cary, A., Deconinck, H., Hartmann, R., Hillewaert, K., Huynh, H.T., et al.: High-order CFD methods: current status and perspective. International Journal for Numerical Methods in Fluids 72(8), 811–845 (2013)
24. Xue, D., Demkowicz, L.: Control of geometry induced error in $hp$ finite element (FE) simulations. I. Evaluation of FE error for curvilinear geometries. International Journal of Numerical Analysis and Modeling 2(3), 283–300 (2005)
25. Zou, X., Sevilla, R., Hassan, O., Morgan, K.: Towards a surface mesh generator tailored for NEFEM. In: 29th International Meshing Roundtable (IMR), Virtual Conference (2021). https://doi.org/10.5281/zenodo.5559148

Curvilinear Mesh Generation for the High-Order Virtual Element Method (VEM)

Kaloyan Kirilov, Joaquim Peiró, Mashy Green, David Moxey, Lourenço Beirão da Veiga, Franco Dassi, and Alessandro Russo

1 Introduction

Polytopal meshes, with arbitrarily shaped 2D polygonal or 3D polyhedral computational cells, have been routinely used for the discretization of partial differential equations (PDEs) by finite volume methods [24]. The main advantages of using unstructured polytopal meshes are their ability to discretize complex computational domains and their potential to reduce the computational complexity of the PDE solver. Recently, there has been growing interest in developing discretisation methods that support polygonal/polyhedral cell shapes with low and high approximation orders. A literature review of the large variety of polytopal methods is given in reference [9].


The majority of these methods make use of polytopal meshes with straight edges and faces which, especially for high-order methods, can deteriorate the accuracy of the solution in the presence of curved boundaries or interfaces. As is well known from the finite element method literature, the representation of the domain geometry with planar facets introduces an error that can stagnate convergence when the order of the approximation is increased. It is therefore important to ensure that curved interfaces and boundaries are accurately approximated to guarantee the expected order of convergence. To achieve this, one should aim at defining discrete spaces on curved elements in such a way that the domain geometry is accurately represented. Examples of this are the high-order polynomial maps employed in isoparametric finite elements [14], and the use of a CAD representation of the computational domain in isogeometric analysis [7]. The Virtual Element Method (VEM) [23] is arguably one of the very few approaches that permits the definition of such discrete spaces in the context of curvilinear polytopal meshes.

To the best of our knowledge, very few methods exist for the generation of boundary-conforming curvilinear polytopal meshes. One exception is reference [10], which adopts a NURBS-enhanced VEM strategy where the edges on boundaries and interfaces are NURBS curves, each defined to exactly match the CAD description of the boundary. The approach that we propose here differs from that in reference [10] in that the geometrical information required by the VEM discretization is obtained through an application programming interface (API) that performs all the required geometrical enquiries on a standard CAD representation of the boundary. This eliminates the need to define an individual NURBS representation of each edge that matches the CAD definition. It is more general, employs procedures similar to those used by current state-of-the-art curvilinear high-order mesh generators [16, 20], and thus facilitates the extension of the methodology to 3D problems. However, we will present the main ideas using a 2D proof-of-concept in the following sections.

2 High-Order VEM Basics

This section describes the basics of the virtual element method for the discretization of PDEs in two-dimensional domains with curved boundaries or interfaces. More specifically, we focus on the information required to proceed with the workflow of the VEM curvilinear mesh generation pipeline, see Fig. 8. A more detailed description of the VEM with curved edges can be found in [8, 23]. In the remainder of the paper we use the Laplace equation with Dirichlet boundary conditions, i.e. find $u(x, y)$ such that

$$-\Delta u = f \ \text{in } \Omega; \qquad u = u^* \ \text{on } \partial\Omega, \tag{1}$$

as the model problem to illustrate the main features of the VEM discretization and identify the geometrical operations required to formulate it.


Fig. 1 Degrees of freedom for the straight case (a) and for the curved case (b) with polynomial order $k = 2$. The mapping $\gamma(t)$ ($0 \le t \le 1$) is used to define the curved edge

We first recall how to formulate a VEM discretization on a straight-sided polygonal mesh [21] and then describe how this approach can be modified to deal with curved edges. Let $E$ be a polygon with all straight edges; we define the space

$$V_h^k(E) := \left\{ v \in H^1(E) \ \text{s.t.} \ v|_{\partial E} \in C^0(\partial E),\ \Delta v \in \mathbb{P}_{k-2}(E),\ v|_e \in \mathbb{P}_k(e)\ \forall e \subset \partial E \right\}, \tag{2}$$

where $\mathbb{P}_s(\mathcal{O})$ denotes the set of polynomials of order $s$ on the set $\mathcal{O}$. A function $v \in V_h^k(E)$ is uniquely determined by the following degrees of freedom, following the notation of Fig. 1a:

D1: the value of $v$ at the vertices;
D2: the $k - 1$ values of $v$ at the edge nodes;
D3: the $k(k+1)/2$ moments $\int_E v\, p_{k-2}\, \mathrm{d}E$.

Such values are the only ingredients one needs to set up the virtual element method, that is, to create the projection operators, assemble the global matrix and compute the solution [22]. Note that the function $v$ is not known a priori; indeed, there is no need to have the explicit expression of $v \in V_h^k(E)$. For this reason we call $v$ virtual: it is never computed explicitly, and it is known only via its degrees of freedom.

Before describing how the VEM deals with curved elements, we recall the definition of the $\Pi_k^\nabla$ projection operator, which is an essential tool to assemble the global linear system arising from a VEM discretization. Given a function $v \in V_h^k(E)$, we define $\Pi_k^\nabla : V_h^k(E) \to \mathbb{P}_k(E)$ as

$$\begin{cases} \displaystyle \int_E \nabla \Pi_k^\nabla v \cdot \nabla p_k \,\mathrm{d}E = \int_E \nabla v \cdot \nabla p_k \,\mathrm{d}E, \\[2mm] \displaystyle \int_{\partial E} \Pi_k^\nabla v \,\mathrm{d}s = \int_{\partial E} v \,\mathrm{d}s, \end{cases} \tag{3}$$


where $p_k$ denotes any polynomial of order $k$. The left-hand sides are polynomials so, if an integration quadrature rule for polygons is available, they are computable. The right-hand side of the second equation is an edge-wise continuous polynomial, since $v|_e \in \mathbb{P}_k(e)\ \forall e \subset \partial E$. Such polynomials are uniquely determined by the degrees of freedom D1 and D2, so they are also computable. In this framework we are also able to compute the right-hand side of the first equation as follows. Integrating by parts, we get

$$\int_E \nabla v \cdot \nabla p_k \,\mathrm{d}E = -\int_E v\, \Delta p_k \,\mathrm{d}E + \int_{\partial E} (n \cdot \nabla p_k)\, v \,\mathrm{d}s,$$

where $n$ is the outward normal. The value of the bulk integral is known since it uses the internal degrees of freedom of $v$, i.e., D3. Finally, we can compute the boundary contribution. As a consequence, we observe that although $v$ is not explicitly known, we are able to obtain its $\Pi_k^\nabla$-projection directly from the degrees of freedom.

2.1 Extension to Curved Edges

To deal with polygons characterized by curved edges, the idea is to put the geometry information within $V_h^k(E)$. Given a polygon $E$, we denote by $\partial E$ and $\widetilde{\partial} E$ the sets of straight and curved edges, respectively. Then, we define a new space

$$\widetilde{V}_h^k(E) := \left\{ v \in H^1(E) \ \text{s.t.} \ v|_{\partial E \cup \widetilde{\partial} E} \in C^0(\partial E \cup \widetilde{\partial} E),\ \Delta v \in \mathbb{P}_{k-2}(E),\ v|_e \in \mathbb{P}_k(e)\ \forall e \subset \partial E,\ v|_e \in \widetilde{\mathbb{P}}_k(e)\ \forall e \subset \widetilde{\partial} E \right\}. \tag{4}$$

The key point is the definition of $\widetilde{\mathbb{P}}_k(e)$ as a polynomial space of degree $k$ in the one variable of the parameter space of the curved edge, i.e.,

$$\widetilde{\mathbb{P}}_k(e) := \mathbb{P}_k([0,1]) \circ \gamma,$$

where $\gamma$ is a sufficiently regular map that describes the curved edge of the polygon (see Fig. 1b). Then, to uniquely determine a function $v \in \widetilde{V}_h^k(E)$, we need the following degrees of freedom:

D1: the value at the vertices;
D2: the $k - 1$ values on straight edges;
$\widetilde{\text{D}}$2: the $k - 1$ values on the parameter space $[0,1]$ associated with the curved edge $e$;
D3: the $k(k+1)/2$ moments $\int_E v\, p_{k-2}\, \mathrm{d}E$.


Comparing the definitions of the spaces $V_h^k(E)$ and $\widetilde{V}_h^k(E)$, given by Eqs. (2) and (4), we see that the curved space is an extension of the straight one. Indeed, if a polygon $E$ does not have any curved edges, Eqs. (2) and (4) yield identical spaces. Further evidence of this fact is given by the degrees of freedom: the two spaces share the degrees of freedom D1, D2 and D3, but $\widetilde{V}_h^k(E)$ has the additional degrees of freedom $\widetilde{\text{D}}$2 that allow for polygons with curved edges.

The fact that $\widetilde{V}_h^k(E)$ is an extension of $V_h^k(E)$ also brings benefits from a more practical point of view. Indeed, if we are able to compute integrals on polygons with curved edges and on the curved edges themselves, the virtual element framework stays the same. Consider, for instance, the computation of the $\Pi_k^\nabla$ projection. If we have a quadrature rule to integrate polynomials on curved domains, the left-hand side of the first equation can be computed. Further, if we are able to integrate polynomials over curved edges, all the integrals over the boundary of $E$ can also be computed.

As a result, the high-order VEM discretization for domains characterized by curved boundaries or interfaces requires the following geometrical information:

1. The coordinates of a set of points on the vertices and edges of the polygonal mesh, which are used to interpolate the numerical solution $v$.
2. The coordinates of a set of quadrature points, and their corresponding weights, for evaluating integrals over: (a) curved edges, and (b) polygons characterized by curved edges.
3. The mapping $\gamma(t)$ defining the curved edge, which is used to compute the tangent and normal vectors appearing in some of the integrals.

The numerical integration over curved edges uses standard quadrature rules on the parameter space, which require the evaluation of the Jacobian of the map $\gamma$; a minimal sketch is given after Fig. 2. Figure 2 depicts the location of the quadrature points on the edges of the mesh. The integration within curved polygons follows the quadrature approach described in reference [23]. In brief, and following the notation of Fig. 2, a reference line (e.g. a diagonal of the element) is defined, and perpendicular lines to it are traced through the quadrature points of the edges. On each of these lines, standard quadrature points are located on the segment between the edge quadrature point and the intersection point with the reference line. The process is repeated for each edge of the element, and the resulting set of quadrature points is used for the approximation of the integrals. In practice, the reference line is chosen so that no quadrature points fall outside of the polygon. To reduce the number of quadrature points whilst retaining accuracy, reference [23] proposes the use of a compression procedure. Note that the evaluation of integrals over curved polygonal cells is an area of ongoing research. The following sections describe how this geometrical information is processed as part of the mesh generation procedure.

424

K. Kirilov et al.

Fig. 2 VEM quadrature points: The edge quadrature points are shown as large dots on the edges of the polygon. The generation of internal quadrature points is illustrated for the curved edge (in red) only. Here we define a reference line and trace perpendicular lines to it passing through the edge quadrature points. On the perpendicular lines, standard quadrature points (small dots) are located on the segment within the edge quadrature point and the intersection point with the reference line
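The edge quadrature can be sketched as follows, combining a standard rule on the parameter space with the Jacobian of $\gamma$; Gauss-Legendre points are used as an example, and the function names are illustrative:

```python
import numpy as np

def integrate_on_curved_edge(f, gamma, dgamma, order=5):
    """Approximate the line integral of f over a curved edge e = gamma([0, 1]):
    int_e f ds = int_0^1 f(gamma(t)) |gamma'(t)| dt."""
    # Gauss-Legendre points/weights on [-1, 1], mapped to [0, 1].
    xi, w = np.polynomial.legendre.leggauss(order)
    t = 0.5 * (xi + 1.0)
    w = 0.5 * w
    return sum(wi * f(gamma(ti)) * np.linalg.norm(dgamma(ti))
               for ti, wi in zip(t, w))

# Example: arc length of a quarter unit circle (expected pi/2 ~ 1.5708).
g  = lambda t: np.array([np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)])
dg = lambda t: 0.5 * np.pi * np.array([-np.sin(0.5 * np.pi * t),
                                       np.cos(0.5 * np.pi * t)])
print(integrate_on_curved_edge(lambda x: 1.0, g, dg))
```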

3 “A Posteriori” High-Order VEM Mesh Generation

We seek to generate meshes suitable for high-order VEM discretizations that conform to a computational domain boundary defined in terms of a standard CAD boundary representation (B-rep) [19]. We essentially follow an a posteriori high-order mesh generation approach, where we modify a straight-sided polygonal mesh and transform it into a curvilinear mesh that conforms to the boundary. This process is illustrated in Fig. 3. The methodology aims to be completely detached from the VEM solver and to support any user-defined geometrical order. All this is made possible by extending the current capabilities of the open-source high-order mesh generator NekMesh [1] and its application programming interface (API) for geometrical inquiries to external CAD libraries such as Open Cascade [6] and CADfix [12]. In the following we refer to these libraries as the CAD engine or API.

As in classical a posteriori high-order mesh generation pipelines, it is necessary to first generate a valid straight-sided mesh, and then use high-order tools to curve the boundary and interface mesh edges whilst ensuring the new boundary-conforming elements are valid and of high quality. A graphical illustration of this process is given in Fig. 3. We follow an approach where the straight-sided polytopal mesh is generated using third-party software, in our case STAR-CCM+ [17]. Then we perform an a posteriori connectivity identification between the linear polygonal vertices and edges and the corresponding CAD objects. We construct and project the high-order nodes on the CAD as specified by a particular combination of quadrature rules. Finally, using these projected nodal points, we use the CAD API to retrieve all the geometrical information relevant to the VEM solver.

Fig. 3 The a posteriori approach to high-order VEM mesh generation. From top to bottom: CAD B-rep definition of the domain, straight-sided polygonal mesh, and curvilinear high-order mesh



3.1 Generation of the Straight-Sided Polygonal Mesh

Two main strategies could be employed to generate the straight-sided polygonal mesh. The first is the classical bottom-up strategy, where the vertices are inserted directly onto CAD objects (B-splines, NURBS, etc.) inside NekMesh. One can then generate seed points and perform Voronoi tessellations, including refinements, as described in reference [10]. This ensures CAD conformity and would allow direct use of the isogeometric VEM without any further mesh manipulations.

To detach the geometrical information from the numerical discretization, so that the interaction with the CAD B-rep is not handled by the VEM solver, we start the generation process from a straight-sided polytopal mesh created using third-party software. Following the existing NekMesh pipeline for unstructured triangular and quadrilateral meshes, we choose STAR-CCM+ [17], which has a robust commercial polyhedral mesh generator. It can read CAD information directly and can be combined with multiple fine control features, including anisotropic prism/quad layers, curvature refinement, maximum edge deviation from the CAD, vertex projection onto CAD surfaces, multi-surface proximity mesh control and separate patch (curve and surface) control. The main requirements on the user side for this step are ensuring boundary conformity of the polygonal vertices and a CAD deviation distance within the minimum edge length. Additionally, the user should ensure that the first layer of elements is thick enough to accommodate the edge projection onto the CAD. In the following, and for simplicity, we use a circular ring domain to illustrate some of these features. An example of a straight-sided polygonal mesh generated using STAR-CCM+ is depicted in Fig. 4.

Once created in STAR-CCM+, the straight-sided mesh is exported to a .ccm file and read into NekMesh through the CCM OpenFOAM importer [15]. More specific details about the implementation are given in Sect. 3.5. This input process populates the classical NekMesh data structure of elements, edges and vertices. These also include topological information, for instance the connectivity between the mesh entities, and potential boundary flags set by the user in STAR-CCM+. However, these do not contain the CAD information at this stage.

3.2 API to a CAD Engine for Geometrical Queries

In order to use any of the available high-order tools, one first needs to obtain the CAD information and then link the boundary mesh entities with the corresponding CAD objects.


Fig. 4 A coarse uniform linear mesh of a ring with interior radius $R_1 = 0.2$ and exterior radius $R_2 = 1$

NekMesh achieves this through an API that links to CAD engines. At present, NekMesh supports OpenCascade Community Edition (OCE) [6] and ITI CADfix [12]. This API can read geometrical objects, such as points and lines, and topological information from a standard STEP file [11], which can be created using state-of-the-art CAD software. Moreover, the API is responsible for calculating the necessary geometrical information related to the CAD through the following functions (a sketch of such an interface is given below):

• mapping a parametric location $t$ on a CAD curve to its Cartesian location $x$,
• mapping Cartesian locations $x$ to parametric coordinates $t$,
• calculating the closest distance $d$ to a CAD curve given a coordinate $x$, and
• evaluating the normal $N$ and tangent vectors $T = x'(t)$ to the curve.
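These four queries can be summarised by an abstract interface; the class and method names below are hypothetical, not NekMesh's actual API:

```python
from abc import ABC, abstractmethod
import numpy as np

class CADCurve(ABC):
    """Abstract geometrical queries on a CAD curve, as used by the mesher."""

    @abstractmethod
    def point(self, t: float) -> np.ndarray:
        """Map the parametric location t to its Cartesian location x."""

    @abstractmethod
    def parameter(self, x: np.ndarray) -> float:
        """Map a Cartesian location x to its parametric coordinate t."""

    @abstractmethod
    def distance(self, x: np.ndarray) -> float:
        """Closest distance d from x to the curve."""

    @abstractmethod
    def tangent(self, t: float) -> np.ndarray:
        """Tangent vector T = x'(t); the normal follows by rotation in 2D."""
```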

At this point, both the linear mesh and the CAD objects are available, and we can connect the entities by shortlisting the edges, boundary vertices and boundary elements from the edge boundary flags. Due to the tendency of the STAR-CCM+ polygonal mesh generator to place vertices away from the CAD curve, one cannot rely on these boundary flags alone, but needs to identify the closest curve to every vertex. To do this, NekMesh creates a thin bounding box in Cartesian space around every CAD curve, within a geometrical tolerance in the range of 0.001–0.01 times the maximum length of the box, and stores it in a $k$-D tree data structure [3]. Then, exploiting this $k$-D tree, we shortlist only a few potential CAD curve candidates for every vertex. For these, we calculate the distance from the vertex to the parametric CAD curves with the help of the API. If the shortest distance is larger than a small threshold, the vertex and the corresponding edge are left straight to avoid mesh entanglement. Otherwise, the vertex is projected to the CAD curve with its corresponding parametric location, $t$.

The extension to edges is straightforward. If the two vertices belong to the same CAD curve, then the edge is clearly part of this CAD object, and it is marked as such. An important exception occurs when the two edge vertices have no common CAD object. This can happen at the junction between two connected CAD curves, where STAR-CCM+ inserts an edge.


As will be discussed later, this rare case requires a special curving technique. This strategy has also been used extensively for 3D tetrahedral, hexahedral and prismatic elements. Therefore, the extension to 3D polyhedral elements is relatively straightforward, with the API now performing geometrical queries on 3D CAD objects: curves and surfaces.

3.3 CAD Projection of Additional Points

The main difficulty in curving the edges occurs when the two vertices of an edge lie on the same CAD object. Here NekMesh employs the CAD curve and the API to parametrically create the high-order edge nodes according to a quadrature rule, defined on the reference segment $[-1, 1]$, which is typically a form of Gaussian quadrature. This is illustrated in Fig. 5. To evaluate their positions on the curve, we utilise the mapping $\gamma(t)$ which defines the edge in the region $0 \le t_1 \le t \le t_2 \le 1$. We therefore construct a mapping between the intervals $[-1, 1]$ and $[t_1, t_2]$, then apply $\gamma(t)$ to locate the points in Cartesian space along with their parametric coordinates $t_j$. Finally, we calculate the distance $d_j$ between each quadrature point on the straight edge and the corresponding one on the curve. In the unlikely scenario that the distance $d_j$ is larger than the edge length, a projection error could have occurred in the CAD engine, and therefore the edge is linearised. This process ensures exact parametric projection to the CAD curves, mesh boundary conformity and easy access through the API to the necessary geometrical information for output. A rare exception to this process happens when STAR-CCM+ generates an element that spans two CAD objects. In this case, the edge is located across two CAD curves, so the previous approach cannot be applied. Therefore, we generate the quadrature points on the straight edge first. Then we project each edge node $j$ to the closest Cartesian location on either of the two CAD objects. Note that this introduces a small error in

Fig. 5 Parametric projection for an edge defined on a single CAD curve


Fig. 6 Projection for a straight edge defined on 2 CAD curves

the location of the quadrature points but, if the curves are smooth, it has a negligible effect on the solution accuracy. A schematic of the two processes can be seen in Fig. 6. The projection of points employs the geometrical procedures available in the API. These procedures rely upon third-party implementations such as OpenCascade, which are reasonably robust in general. However, their applicability may be limited in some instances, for example when the curve edges exhibit inflections or very rapid changes in curvature. Following these steps, NekMesh can produce curved meshes of arbitrary user-defined order. Moreover, the mesh generator is completely detached from the VEM solver; it is driven by the user's requirements and supports any combination of classical 1D quadrature rules such as Gauss, Gauss-Lobatto, Gauss-Radau, etc. However, the construction of the quadrature points within the polygonal element is left to the solver side, due to the variety of techniques adopted by the different VEM solvers.
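The single-curve case condenses into the short sketch below, assuming the CADCurve interface from earlier; linearise_tol plays the role of the edge-length safeguard, and the function name is illustrative.

```python
import numpy as np

def project_edge(curve, v1, v2, xi, linearise_tol):
    """Place high-order edge nodes on a single CAD curve.
    xi: quadrature points on the reference segment [-1, 1]."""
    t1, t2 = curve.parameter(v1), curve.parameter(v2)
    t = 0.5 * (1.0 - xi) * t1 + 0.5 * (1.0 + xi) * t2     # [-1,1] -> [t1,t2]
    nodes = np.array([curve.point(tj) for tj in t])       # gamma(t_j)
    straight = (np.outer(0.5 * (1.0 - xi), v1)
                + np.outer(0.5 * (1.0 + xi), v2))         # straight edge
    d = np.linalg.norm(nodes - straight, axis=1)          # distances d_j
    if d.max() > linearise_tol:       # suspected CAD projection failure:
        return straight               # keep the edge straight
    return nodes
```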

3.4 Ensuring Mesh Validity

In regions with high curvature, it is possible to generate tangled polygonal elements after the edge projection step, where the addition of curvature may cause an element to self-intersect. Therefore, in the legacy NekMesh pipeline with standard element shapes such as triangles and quadrilaterals, we calculate the distortion of each element using the Jacobian of the mapping from the standard reference space [13]. A negative value indicates an invalid tangled element, and hence this element needs to be either linearised or refined. This approach is not easily applicable to arbitrary polygons in the VEM due to the difficulty of defining such a mapping. In 2D domains a visual inspection of the mesh often helps identify invalid elements; however, this is not feasible in 3D. One can devise a method to detect the presence of invalid elements by calculating the signed area of a polygon as an integral over its boundary. If the area of the computational domain, with its boundary viewed as a polygon, differs from the sum of the areas of the polygonal elements calculated in the same fashion, then


Fig. 7 A mesh with a layer of stretched quadrilaterals: a A layer thickness below the value $\delta_{\min}$ given by Eq. (5) leads to self-intersection and thus invalid elements; b A valid mesh is obtained with a value above the minimum thickness

there are tangled elements in the mesh. These elements can then be identified (in 2D) by calculating the winding number of the polygon, which will be different from zero if self-intersection occurs. An alternative method for imposing mesh validity is to construct a single sufficiently thick boundary layer in the regions of concern which accommodates the curving of the mesh effected by the CAD projection. It has been shown in the literature [14] that this minimum thickness $\delta_{\min}$ for a quadrilateral element can be found using the relationship

$$\frac{\delta_{\min}}{R} \ge \frac{c^2}{8 R^2}, \qquad (5)$$

where $R$ is the radius of curvature and $c$ the length of the straight-sided edge. This value can also be used as a conservative estimate of the mesh size required to ensure validity for convex polygonal elements with four edges or more. Figure 7 illustrates the application of this criterion in the case of a mesh with a layer of quadrilateral cells near the boundary with values of $\delta$ above and below $\delta_{\min}$, with the latter leading to an invalid mesh. It is worth noting that techniques currently employed in high-order meshing for deforming a straight-sided mesh to accommodate boundary curvature, see for instance [16, 20], could also be implemented using a VEM formulation and applied in this context. However, such a VEM implementation is left for future work.
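The signed-area check described at the start of this subsection reduces to the shoelace formula over each polygon loop. A minimal sketch, with illustrative names:

```python
import numpy as np

def signed_area(poly):
    """Signed area via the shoelace formula, i.e. the boundary integral
    (1/2) * sum(x_i * y_{i+1} - x_{i+1} * y_i) over the polygon loop."""
    x, y = np.asarray(poly).T
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

def mesh_area_consistent(domain_boundary, elements, rtol=1e-10):
    """Compare the domain area with the summed element areas; a mismatch
    signals tangled (self-intersecting) elements somewhere in the mesh."""
    total = sum(signed_area(e) for e in elements)
    return np.isclose(signed_area(domain_boundary), total, rtol=rtol)
```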

3.5 Implementation

The various steps of the mesh generation method described in the previous sections have been implemented within the open-source code NekMesh. A schematic flowchart of the implementation is presented in Fig. 8. Note that the implementation performs all the required geometrical queries via the API to the CAD engine, and that the communication with the VEM solver is

Fig. 8 The workflow of the proposed pipeline from the CAD definition to the curvilinear high-order mesh

[Fig. 8 flowchart: Begin → Create a CAD Geometry (.step) → Generate a Linear Mesh (.ccm) → Convert 3rd-party linear mesh → Load CAD Module → Associate CAD objects with Boundary Vertices & Edges → Generate Quadrature Points → Project Quadrature Edge Nodes to CAD → Valid? (No / Yes) → Run VEM Output Module (.mesh, .nmg) → End / Run VEM Solver]

via a simple file-based interface system. The solver reads the information defining the linear mesh (.mesh) and the high-order geometrical information on the curved edges (.nmg). The .mesh file includes only information about the linear mesh: the vertices and their Cartesian coordinates, the edges with the IDs of their two vertices and an additional flag giving the corresponding boundary condition. Finally, for each polygonal element, the file first indicates the number of vertices of the polygon and then lists their IDs.


Fig. 9 Visualization of the high-order geometrical information (with $k = 2$) communicated to the VEM solver via the .nmg file

The .nmg file, on the other hand, does not contain any connectivity details or elements. Instead, it communicates only a minimal amount of geometrical information about the previously populated curved edges. NekMesh first provides the ID of the curved edge, matching the data in the .mesh file. Then, for every quadrature point $x_j$, it determines and writes to the file the following geometrical information with the help of the CAD engine:

• location inside the standard element: $0 \le t_j \le 1$,
• Cartesian location: $x$,
• unit normal to the CAD curve: $N$ (pointing inside the curvature), and
• tangent to the CAD curve: $x'(t)$.
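To make the interface concrete, the sketch below writes one such record per quadrature point. The exact field layout of the .nmg format is not documented here, so the layout, the helper, and the attribute names are assumptions for illustration; the CADCurve interface sketched earlier is assumed.

```python
import numpy as np

def unit_normal(T):
    """Unit normal from the tangent by a 90-degree rotation (2D)."""
    n = np.array([-T[1], T[0]])
    return n / np.linalg.norm(n)

def write_nmg(path, curved_edges, curves, edge_params):
    """edge_params[edge.id]: parametric locations t_j of the quadrature
    points on the edge's CAD curve (illustrative record layout only)."""
    with open(path, "w") as f:
        for edge in curved_edges:
            curve = curves[edge.curve]
            f.write(f"EDGE {edge.id}\n")
            for tj in edge_params[edge.id]:
                x = curve.point(tj)               # Cartesian location
                T = curve.tangent(tj)             # tangent x'(t)
                N = unit_normal(T)                # unit normal
                f.write(f"{tj} {x[0]} {x[1]} {N[0]} {N[1]} {T[0]} {T[1]}\n")
```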

Considering the test ring geometry from Fig. 12 and the VEM solver described in reference [8] at order $k = 2$, NekMesh constructs the combination of quadrature rules automatically as required by the solver: a Gauss-Lobatto rule with $n = 3$, and Gauss rules with $n = 2, 3$. The geometrical information evaluated on the inner circle from the .nmg file is displayed in Fig. 9. Our choice of a simple file-based interface system is informed by the need to provide state-of-the-art VEM solvers with easy and robust access to the high-order curvilinear information without the need to interact with the B-rep of the computational domain.


4 Verification and Example of Application

4.1 VEM Verification

This section aims to illustrate the validity of the meshes generated by the proposed methodology. It shows that VEM discretizations of the Laplace equation in a simple domain, a circular ring, achieve the expected rate of convergence on both straight-sided and curvilinear meshes. It is known from numerical analysis that the approximation of curved boundaries may corrupt the numerical approximation of the discrete solution. Indeed, the error in a numerical solution, $\varepsilon$, can be split into two main contributions:

$$\varepsilon = \varepsilon_f + \varepsilon_g,$$

where the error $\varepsilon_f$ arises from the discretization of the functional spaces and of the (bilinear) forms involved, and $\varepsilon_g$ represents the error in the solution that stems from the approximation of the geometry of the computational domain. When dealing with domains with straight boundaries, the contribution of $\varepsilon_g$ is negligible since piecewise straight segments perfectly match a straight boundary. Therefore the error of a numerical solution is only $\varepsilon_f$, due to the discrete functional spaces used. However, when the domain is curved and we approximate curved boundaries with piecewise straight segments, $\varepsilon_g \approx h^2$, where $h$ is the size of the mesh [8]. As a consequence, improving the approximation provided by the discrete functional spaces may not decrease the overall numerical error. In the following we perform numerical experiments to verify these claims. Moreover, we will see that the high-order meshes generated by NekMesh, combined with the curved virtual element spaces reviewed in Sect. 2, overcome this issue.

Let $\Omega$ denote a domain consisting of a circle of radius 1 centred at the origin, with a circular hole of radius 0.2 removed at the origin. We define the right-hand side and the boundary conditions in such a way that the solution of a Laplacian problem on $\Omega$ is the function

$$u(x, y) = \log(x^2 + y^2).$$

We discretize the computational domain $\Omega$ in two ways: one with straight-sided edges, referred to as noGeo, and one with curved edges, denoted withGeo. For each of these mesh types, we construct a sequence of four meshes with decreasing mesh size. Figure 10 shows an example of a withGeo mesh. The VEM approach reviewed in Sect. 2 with $k = 2$ is then used to solve this problem. For each of these meshes we compute the errors in the $L^2$ norm and the $H^1$ semi-norm.
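As a brief check, implicit in the choice of this manufactured solution, note that it is harmonic away from the origin, so the Laplacian problem on the annulus has zero forcing:

```latex
u = \log(x^2+y^2), \qquad
u_{xx} = \frac{2(y^2-x^2)}{(x^2+y^2)^2}, \qquad
u_{yy} = \frac{2(x^2-y^2)}{(x^2+y^2)^2}
\;\Longrightarrow\;
\Delta u = u_{xx}+u_{yy} = 0 \quad \text{for } (x,y) \neq (0,0).
```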


Fig. 10 A curvilinear high-order polygonal mesh of a ring. The mesh of the domain only displays the edges of the polygonal elements. An enlargement of the mesh near the boundary illustrates the additional degrees-of-freedom required for the high-order discretization. The top window shows the locations of the quadrature points on the edges (blue) and on the interior (red) of the polygonal cells. The bottom window shows the locations of the points used for interpolation

The trend of these errors is depicted in Fig. 11. We begin by analysing the error in the $L^2$ norm. For the straight-sided case we know that

$$\varepsilon_f \approx h^3 \quad \text{and} \quad \varepsilon_g \approx h^2.$$

The geometrical error is dominant, and so we observe a decay of the error of order two. However, when we consider curved meshes using the exact geometry, the error in approximating the geometry decreases as $h \to 0$. For instance, assuming regularity of the domain, a second-order curved edge approximation leads to an error $\varepsilon_g = O(h^3)$. Furthermore, we have $\varepsilon_f \approx h^3$ and, as a consequence, the trend of


Fig. 11 Mesh convergence of the high-order VEM discretization: a error in the $L^2$ norm; and b error in the $H^1$ semi-norm

the error is not affected by the geometrical approximation error and we observe an error decay of order 3. In the case of the $H^1$ semi-norm error, we obtain the expected error decay of order 2 for both types of meshes, but the absolute value of the error computed with the curved mesh is smaller. The smaller error observed using the withGeo meshes is again due to the better approximation of the geometry. Indeed, the total error in the noGeo mesh has two contributions of the same order in the $H^1$ norm, namely $\varepsilon_f \approx h^2$ and $\varepsilon_g \approx h^2$. Consequently the error trend is not corrupted, but its value is affected by both contributions. In the withGeo approach, by contrast, the error due to the geometry is null, so the final error is only influenced by $\varepsilon_f$ and, as a consequence, it is smaller.

4.2 A Practical 2D Geometry

This section illustrates the application of the high-order mesh generation procedure to a computational domain for an automotive aerofoil geometry exhibiting variable curvature along its length. The geometry of the aerofoil is defined by four NURBS curves. The linear polygonal mesh generated by STAR-CCM+ is depicted in Fig. 13. The mesh resolution has been purposely increased in the regions near the leading and trailing edges of the aerofoil.


Fig. 12 High-order VEM solution ($k = 2$) of the Laplace equation with Dirichlet boundary conditions computed on a curvilinear mesh (withGeo)

Fig. 13 A straight-sided linear mesh of an automotive aerofoil

Figure 14 shows the edges of the polygonal mesh together with two enlargements showing the location of the interpolation and quadrature points used in the high-order discretization.

5 Conclusions and Further Work

We have proposed a proof-of-concept polygonal high-order curvilinear mesh generator for the Virtual Element Method (VEM) for arbitrary geometries defined through a standard CAD B-rep of the domain. The key realisation is that the interpolation and integration of functions within a VEM discretization only requires the definition


Fig. 14 A curvilinear high-order polygonal mesh of an automotive aerofoil. The mesh of the domain only displays the edges of the polygonal elements. An enlargement of the mesh near the boundary illustrates the additional degrees-of-freedom required for the high-order discretization. The top window shows the locations of the quadrature points on the edges (blue) and on the interior (red) of the polygonal cells. The bottom window shows the locations of the points used for interpolation

of a set of boundary, interface and internal points. This allows us to interpret the problem of finding the location of these interpolation and integration points as geometrical queries to a B-rep of the computational domain, performed via an interface to CAD libraries. As a consequence, we can adopt an a posteriori approach to high-order curvilinear mesh generation for the VEM. The starting point of the process is the generation of a straight-sided mesh in STAR-CCM+. The next step is to use the CAD API implemented in NekMesh to reconstruct the CAD information from STEP files


and, according to the user-defined order and quadrature rules, parametrically project the quadrature nodes onto the curved geometry. Finally, NekMesh communicates with the VEM solver through a two-file interface system: one file for the linear mesh with its connectivity and a second for the geometrical information at the quadrature nodes. Using an exact solution of the Laplace equation on a ring geometry, we show that the generated high-order curvilinear meshes for the VEM are valid, accurately reproduce the analytical solution, and converge at the expected rate as the mesh size is decreased. Although we have discussed strategies for assessing the validity of the mesh, we have made no attempt to evaluate mesh quality. This is a topic that has received little attention in the VEM literature until recently [4, 18], but it is of significant importance for VEM-based simulation and adaptation [2] and thus deserves further investigation. Most of the current mesh quality criteria for high-order meshing rely on the evaluation of the Jacobian of a mapping from a reference element, usually a regular polytope, as a measure of its deformation. Even though such mappings have been proposed for convex straight-sided polygons, e.g. using barycentric coordinates [5], they are, to the best of our knowledge, not available for non-convex curvilinear polytopes. We believe that this work has laid a strong foundation for the extension of the methodology to the generation of curvilinear polyhedral meshes. However, devising quadrature rules suitable for curvilinear polyhedra is an area of current active development. This represents a technical bottleneck that must be addressed before we can extend the proposed methodology to 3D.

Acknowledgements This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 955923. David Moxey, Joaquim Peiró and Mashy Green acknowledge funding from EPSRC under grant EP/R029423/1.

References

1. NekMesh: An open-source high-order mesh generator (Last accessed December 2023). https://www.nektar.info/
2. Antonietti, P.F., Berrone, S., Borio, A., D'Auria, A., Verani, M., Weisser, S.: Anisotropic a posteriori error estimate for the virtual element method. IMA Journal of Numerical Analysis 42(2), 1273–1312 (2021)
3. Bentley, J.L.: Multidimensional binary search trees used for associative searching. Communications of the ACM 18(9), 509–517 (1975)
4. Berrone, S., D'Auria, A.: A new quality preserving polygonal mesh refinement algorithm for polygonal element methods. Finite Elements in Analysis and Design 207(103770) (2022)
5. Budninskiy, M., Liu, B., Tong, Y., Desbrun, M.: Power coordinates: A geometric construction of barycentric coordinates on convex polytopes. ACM Trans. Graph. 35(6) (2016)
6. Capgemini Engineering: Open Cascade (Last accessed December 2023). https://www.opencascade.com
7. Cottrell, J.A., Hughes, T.J.R., Bazilevs, Y.: Isogeometric Analysis: Toward Integration of CAD and FEA. Wiley (2009)


8. Dassi, F., Fumagalli, A., Mazzieri, I., Scotti, A., Vacca, G.: A virtual element method for the wave equation on curved edges in two dimensions. Journal of Scientific Computing 90(50) (2022)
9. Di Pietro, D.A., Droniou, J.: The Hybrid High-Order Method for Polytopal Meshes. Springer (2020)
10. Ferguson, J.A., Kópházi, J., Eaton, M.D.: NURBS enhanced virtual element methods for the spatial discretization of the multigroup neutron diffusion equation on curvilinear polygonal meshes. Journal of Computational and Theoretical Transport 51(4), 145–204 (2022)
11. ISO: ISO 10303-21:2016 Industrial automation systems and integration – Product data representation and exchange – Part 21: Implementation methods: Clear text encoding of the exchange structure. International Organization for Standardization (2016). The STEP standard
12. ITI Global: CADfix (Last accessed December 2023). https://www.iti-global.com/cadfix
13. Karniadakis, G.E., Sherwin, S.: Spectral/hp element methods for computational fluid dynamics, second edn. Oxford University Press (2013)
14. Moxey, D., Green, M., Sherwin, S., Peiró, J.: An isoparametric approach to high-order curvilinear boundary-layer meshing. Computer Methods in Applied Mechanics and Engineering 283, 636–650 (2015). https://doi.org/10.1016/j.cma.2014.09.019
15. OpenFOAM: API guide: ccmToFoam.C file reference (Last accessed December 2023). https://www.openfoam.com/documentation/guides/latest/api/ccmToFoam_8C.html
16. Ruiz-Gironés, E., Roca, X., Sarrate, J.: High-order mesh curving by distortion minimization with boundary nodes free to slide on a 3D CAD representation. CAD Computer Aided Design 72, 52–64 (2016)
17. Siemens: STAR-CCM+ (Last accessed October 2022). https://www.plm.automation.siemens.com/global/en/products/simcenter/STAR-CCM.html
18. Sorgente, T., Biasotti, S., Manzini, G., Spagnuolo, M.: The role of mesh quality and mesh quality indicators in the virtual element method. Adv. Comput. Math. 48(3) (2022)
19. Stroud, I.: Boundary Representation Modelling Techniques. Springer (2006)
20. Turner, M., Peiró, J., Moxey, D.: Curvilinear mesh generation using a variational framework. Computer-Aided Design 103, 73–91 (2018)
21. Beirão da Veiga, L., Brezzi, F., Cangiani, A., Manzini, G., Marini, L.D., Russo, A.: Basic principles of virtual element methods. Math. Models Methods Appl. Sci. 23(1), 199–214 (2013)
22. Beirão da Veiga, L., Brezzi, F., Marini, L.D., Russo, A.: The hitchhiker's guide to the virtual element method. Math. Models Methods Appl. Sci. 24(08), 1541–1573 (2014)
23. Beirão da Veiga, L., Russo, A., Vacca, G.: The virtual element method with curved edges. ESAIM: Mathematical Modelling and Numerical Analysis 53(2), 375–404 (2019)
24. Versteeg, H., Malalasekera, W.: An Introduction to Computational Fluid Dynamics: The Finite Volume Method, second edn. Pearson (2007)

Refining Simplex Points for Scalable Estimation of the Lebesgue Constant Albert Jiménez-Ramos, Abel Gargallo-Peiró, and Xevi Roca

1 Introduction

In approximation theory, one of the key problems is obtaining a set of simplex points for a given polynomial degree that guarantees a small interpolation error. To solve this problem, it is standard to obtain interpolation points featuring a small Lebesgue constant. This constant is usually denoted by $\Lambda$ and defined as

$$\Lambda = \max_{x \in K^d} \sum_{i=1}^{N_p} |\phi_i(x)|. \qquad (1)$$

It corresponds to the maximum on the $d$-dimensional simplex $K^d$ of the summation of the absolute values of the lagrangian functions $\phi_i$ associated with each of the $N_p$ interpolation points. This summation, which is Lipschitz because the absolute value terms are Lipschitz, is called the Lebesgue function. Its maximum appears in the upper bound of the interpolation error. Specifically, given a point distribution, for any function $f$ the error of the polynomial interpolation satisfies

$$\| f - I(f) \| \le (1 + \Lambda) \, \| f - p^\star \|,$$

where $I(f)$ denotes the lagrangian interpolator, and $p^\star$ the best polynomial approximation. According to the previous inequality, the smaller the Lebesgue constant, the


smaller the bound of the interpolation error. Moreover, the Lebesgue constant exclusively depends on the position of the interpolation points. For instance, equispaced distributions of points, which lead to larger errors for higher polynomial degrees, feature large values of the Lebesgue constant, whereas non-uniform distributions of points, with improved interpolation error, feature sub-optimal values of the Lebesgue constant [1, 6, 7]. Accordingly, to guarantee small interpolation errors, the Lebesgue constant has to be evaluated, and thus it is key to estimate the maximum of the Lebesgue function on the simplex.

To approximate this maximum, it is critical to automatically generate a finite number of sample points on the $d$-dimensional simplex, which is exactly the goal of this work. Then, for those points, the approximation is the maximum of the function evaluations. Note that these sample points are used to estimate the Lebesgue constant, but they do not correspond to the interpolation points that define the lagrangians in Eq. 1.

To estimate the Lebesgue constant, there are general zeroth-order optimization [5] and Lebesgue-specific [2, 6, 7] methods. These approaches differ mainly in that the latter family exploits the structure of the Lebesgue function. Specifically, because the Lebesgue function presents several similar local maxima, specific-purpose methods successfully favor smooth gradations of the point resolution. Nevertheless, both families of methods share some aspects. They feature the same stopping criterion, add sample points, and have computational costs scaling with the number of points. To stop the maximum approximation process, all the previous methods terminate after a fixed number of iterations. The two families of methods generate sample points statically or dynamically: statically by adding a grid of points in one shot [2], dynamically by adding new points at each iteration [5–7]. Because the Lebesgue function is evaluated at these sample points, the computational cost of the maximum estimation scales with the number of points. This scaling depends on the method, the polynomial degree, and the simplex dimension. Next, we discuss how to automatically stop the optimization iterations, and the need for neighbor queries and scalable point refinement.

Unfortunately, to automatically stop the optimization iterations, neither the general nor the specific-purpose methods exploit the fact that the Lebesgue function is Lipschitz. Although all the previous methods can improve the estimation of the maximum by increasing the number of iterations, none of them stops automatically when the optimum of the Lebesgue function is sufficiently converged. To measure the convergence and stop the iterations, first- and second-order optimization methods check if the approximated candidate is sufficiently flat, a successful condition that can be emulated if the function is Lipschitz, which is precisely the case for the Lebesgue function. To emulate the sufficiently flat condition on a sample point, it is key to query for point neighbors. Using these points to evaluate the function, a local estimation of the Lipschitz constant is the quotient of the function difference and the distance between neighbor points. Then, using this constant and a flatness tolerance, the automatic stopping criterion can be incorporated by only evaluating the function. The specific-purpose [6, 7] and the general [5] methods incorporate neither the stopping criterion nor the neighbor queries.
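For concreteness, the Lebesgue function of Eq. 1 can be evaluated at sample points by solving a Vandermonde system for the lagrangian (cardinal) basis. The one-dimensional sketch below uses a monomial basis, which is adequate for modest degrees; it is illustrative only and is not the method of this paper.

```python
import numpy as np

def lebesgue_function(nodes, x):
    """Evaluate sum_i |phi_i(x)|, where phi_i are the Lagrange cardinal
    functions of `nodes` (1D, monomial Vandermonde)."""
    V = np.vander(nodes, increasing=True)            # V[i, j] = nodes[i]**j
    B = np.vander(x, N=len(nodes), increasing=True)  # monomials at samples
    Phi = np.linalg.solve(V.T, B.T).T                # Phi[m, i] = phi_i(x[m])
    return np.abs(Phi).sum(axis=1)                   # the Lebesgue function

# Estimate of the Lebesgue constant on a fine sample: roughly 30 for
# 11 equispaced nodes (degree 10), illustrating the equispaced blow-up.
nodes = np.linspace(-1.0, 1.0, 11)
xs = np.linspace(-1.0, 1.0, 2001)
print(lebesgue_function(nodes, xs).max())
```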


To efficiently estimate the Lebesgue constant in the simplex, it is critical to use scalable point refinement techniques. In this manner, the Lebesgue function can be finely sampled only on the regions of interest and coarsely sampled otherwise, an adaptive strategy that reduces the number of needed points [5–7]. As we said before, the computational cost scales with the number of points, so the local refinement reduces the cost of approximating the Lebesgue constant. Unfortunately, when specific-purpose methods refine the resolution [2, 6, 7], the number of points scales exponentially with the dimensionality. This exponential scaling is affordable in two and three dimensions, but impractical in higher dimensions. Fortunately, some general methods do not scale exponentially with the dimension. Specifically, the DiSimpl [5] method adds only two new points per point refinement, a refinement scaling that is well-suited for higher dimensions.

Summarizing, for more than three dimensions, there is no specific-purpose method to estimate the Lebesgue constant in the simplex. For two and three dimensions, the specific-purpose methods successfully estimate the Lebesgue constant, but their extensions to arbitrary dimensions do not scale well with the dimensionality. For arbitrary dimensions, optimization methods for general functions scale well with the dimensionality, but they are not specifically devised to estimate the Lebesgue constant; that is, they do not control size gradation. Neither general nor specific-purpose methods feature neighbor queries. Thus, they are not ready to stop automatically when the optimal candidate features sufficient flatness.

To address the previous issues, the main contribution of this work is to propose a new specific-purpose point refinement method. The proposed method features a smooth gradation of the resolution, neighbor queries based on neighbor-aware point coordinates, and a point refinement that scales algebraically with the dimension as $(d+1)d$. The main novelty of the proposed smooth point refinement method is not only that it scales algebraically with the dimension but also that it is ready to use an automatic Lipschitz stopping criterion.

The main application of the proposed point refinement method is to estimate the Lebesgue constant on the simplex. Accordingly, the results check whether the proposed point refinement method reproduces the literature estimations for the triangle and the tetrahedron. Moreover, the results assess whether the method is well-suited for Lebesgue constant approximations on the simplex for mid-range dimensionality.

The rest of the paper is organized as follows. First, in Sect. 2, we review the literature related to this work. Then, in Sect. 3, we describe the system of coordinates that allows the neighbor queries and the core point refinement operations. Next, in Sect. 4, we detail the adaptive minimization method. In Sect. 5, we illustrate with several examples the main features of the presented method. Lastly, in Sect. 6, we present some concluding remarks.


2 Related Work

One of the most immediate estimates of the Lebesgue constant is given by the maximum value of the function on an equispaced grid of points. The size of the sampling determines the accuracy of the estimation at the expense of increasing the number of function evaluations. Alternatively, it is possible to use a sequence of admissible meshes [3] as sampling points [2]. Admissible meshes have the property that the maximum value on this finite set of points bounds the infinity norm of a polynomial of a certain degree over the whole simplex. Unfortunately, none of these methods is adaptive.

An alternative is to estimate the Lebesgue constant by means of a non-deterministic adaptive method [7]. The method starts with a random sample of points in the simplex. Next, the function is evaluated, and the points are sorted in terms of their function value. Then, new random samples are generated inside boxes centered at the points with the largest values. At each iteration, the box edge-length is halved to capture the maximum more accurately, and thus a smooth gradation in the sampling resolution is obtained. This process is repeated until a prescribed number of iterations is reached. Finally, the estimate of the Lebesgue constant is the largest value at a sample point.

To compute the Lebesgue constant, an alternative adaptive method [6] named DiTri modifies the DiRect algorithm [4] to work in triangles. The method starts with the evaluation of the function at the centroid of the triangle. Next, the triangle is subdivided using a quadtree strategy. Then, the function is evaluated at the centroids of the three new smaller triangles. At each iteration, the algorithm chooses a set of potentially optimal triangles to refine in terms of their size and the function value at their centroid. After the refinement step, additional elements are refined to ensure a smooth gradation of the element size. When a prescribed number of iterations is reached, the centroid of the triangle with the largest function value determines an estimation of the Lebesgue constant. Remarkably, the method exploits the simplicity of the triangle by uniquely identifying each element with a triplet of integers, and therefore no explicit mesh connectivity structure is needed.

Similarly to the DiRect algorithm but based on simplices, the method named DiSimpl also considers a Lipschitzian optimization approach [5]. DiSimpl is devised to find the global minimum of an arbitrary function whose domain is a hypercube or simplex, and performs particularly well when the function presents symmetries. Initially, the search space is decomposed into simplices. Then, two approaches are considered. In one case, the function is evaluated at the centroid of the simplex, and two hyper-planes cutting the longest edge subdivide the potentially optimal simplex into three smaller simplices. In the other case, the function is evaluated at the vertices of the simplex, and one hyper-plane cutting the longest edge generates two smaller simplices. Interestingly, in either of the approaches, only two function evaluations per refinement are performed. Finally, the algorithm stops when a prescribed number of iterations is reached.

These adaptive methods outperform grid-based methods, but they are not devised to estimate the Lebesgue constant in the $d$-dimensional simplex. The non-deterministic method [7] starts by sampling the function at 10 000 points and generates 10 samples per


each 2D box. Thus, to keep the same resolution we would need $10^{d/2}$ points inside the $d$-dimensional box. Even in 2D, a considerable number of approximately 200 000 sample points is required to accurately estimate the Lebesgue constant, and we expect a higher value for higher dimensions. Moreover, the non-deterministic nature of the algorithm makes it difficult to query neighbor points. The quadtree subdivision-based method [6] is devised to estimate the Lebesgue constant in the triangle. The natural extension of this method to higher dimensions would subdivide the $d$-simplex into $2^d$ subelements and, consequently, the number of sample points would increase exponentially. Furthermore, the simplicity of uniquely identifying each element in the triangle case could not be exploited. Finally, the DiSimpl algorithm [5] has not been tested against the Lebesgue function, and its subdivision strategy becomes complicated in high dimensions. Moreover, the resulting mesh is not conformal and the method does not feature access to neighbor elements.

The method proposed in this work is deterministic and exploits a rational barycentric system of coordinates to uniquely identify each sample point. The method considers a discrete set of refinement directions which are parallel to the simplex edges. Every time a point is refined, we evaluate the function at most $(d+1)d$ times. Then, after refining the potentially optimal points, we generate new sample points to ensure a smooth gradation of the sampling resolution. Moreover, the system of coordinates allows accessing the adjacent points with no need for storing the neighbor structure, which enables a stopping criterion based on the sampling density and the local Lipschitz constant around the extremum.

3 Neighbor-Aware Coordinates for Point Refinement

Even though the main application is computing the maximum of the Lebesgue function, we present the method in a minimization framework. In Sect. 3.1, we schematically illustrate the method in 2D. Then, we detail the system of coordinates in Sect. 3.2, and the core refinement operations in Sects. 3.3 and 3.4.

3.1 Outline

To estimate the minimum of the target function, we propose using neighbor-aware sample points. Consider the set of sample points shown in Fig. 1a and assume that the point at the barycenter of the triangle is our minimum candidate. To improve the estimation of the minimum, we refine the sampling around it by generating new sample points parallel to the simplicial edges, see Fig. 1b. Analogously, if our next minimum candidate is the black point in Fig. 1b, we generate new sample points at positions parallel to the triangle edges, see Fig. 1c. However, we only generate three new points (in gray) since one of them already exists. Applying successively this


Fig. 1 Illustration of the method. a Initial sampling. b Refining the point at the barycenter by generating six new points (in gray) around it parallel to the triangle edges. c To refine the black point we only evaluate the target function at three new points (in gray)

refinement operation to potentially optimal points, we expect to finely sample the target function and find an accurate estimate of the minimum.

3.2 Neighbor-Aware Coordinates

Since we work with simplicial domains, we exploit the barycentric coordinate system. More precisely, we consider an equispaced sampling with $q + 1$ points on each edge of the simplex, and we uniquely determine a sample point $x \in K^d$ by a set of rational barycentric coordinates of the form

$$x = \left( \frac{\lambda_1}{2^r q}, \ldots, \frac{\lambda_{d+1}}{2^r q} \right), \qquad (2)$$

with $\sum_{i=1}^{d+1} \frac{\lambda_i}{2^r q} = 1$, and non-negative integers $r$ and $\lambda_i$, $i = 1, \ldots, d+1$. For each barycentric coordinate, the numerator indicates the position on a uniform grid and the denominator represents the level of refinement on the grid. Thus, the higher the denominator, the higher the resolution of the sampling around the point.

Aligned with this system of coordinates, for each simplicial edge we choose a refinement direction. Each refinement direction has two possible orientations: forward and backward. Thus, we consider $n_D = 2 n_E$ vectors, where $n_E$ is the number of edges of $K^d$. Each vector is identified by a pair of integers $(i, j)$ and is written in rational barycentric coordinates as

$$u_{(i,j)} = \frac{1}{q} \left( e_i - e_j \right),$$

with $i, j = 1, \ldots, d+1$, $i \ne j$, and where the $(d+1)$-dimensional vector $e_k$ is a vector with a one in the $k$th position and zeros elsewhere. The set of direction vectors


Fig. 2 For a central point (black dot), surrounding stencil points (black dots) for the refinement directions (gray segments) parallel to the simplex edges (gray edges) in (a) 2D and (b) 3D

$$\mathcal{U} = \{ u_{(i,j)} \ \text{for} \ i, j = 1, \ldots, d+1, \ i \ne j \}$$

defines a canonical stencil that is used for generating new sample points. More concretely, we generate at most $(d+1)d$ new sample points per point refinement. These $(d+1)d$ refinement positions provide a reasonable scaling for medium dimensionality while sampling the neighborhood of the point to refine sufficiently finely. In particular, for the equilateral triangle, the six refinement positions are top-left, top-right, left, right, bottom-left and bottom-right, see Fig. 2a, while for the tetrahedron there are twelve positions, see Fig. 2b.

The value $r$ in Eq. 2 is strongly related to the resolution of the sampling. Without loss of generality, consider the sample point $x = \left( \frac{1}{2}, \frac{1}{2} \right)$ in the one-dimensional interval $[0, 1]$, and the direction vector $u_{(1,2)} = \left( \frac{1}{2}, -\frac{1}{2} \right)$. The point

$$y = x + u_{(1,2)} = \left( \frac{2}{2}, \frac{0}{2} \right)$$

corresponds to the point zero in Cartesian coordinates, while the point $z = x + \frac{1}{2} u_{(1,2)} = \left( \frac{3}{4}, \frac{1}{4} \right)$ is between the points $x$ and $y$. Thus, scaling the vector $u_{(1,2)}$ with a factor of the form $2^{-r}$, $r \ge 0$, leads to the generation of closer points along the direction described by the vector $u_{(1,2)}$.

The value of $q$ determines the density of the initial sampling, see Eq. 2. In practice, we favor setting $q$ equal to one. When $q$ is one, the initial grid contains only the simplex vertices as sampling points. That is, the initial number of sampling points is $d + 1$, and thus it scales linearly with the number of dimensions. Bigger values determine denser initial samplings that might require fewer adaptation iterations, but the initial number of sampling points scales exponentially with the number of dimensions. This initial offset might be reflected in a larger number of final sampling points required to seek the target function minimum.


Fig. 3 Coordinates of a point of resolution $r$ and its neighbors

We store the points in a hash table built from the rational coordinates. Hence, the point $\left( \frac{\lambda_1}{2^r q}, \ldots, \frac{\lambda_{d+1}}{2^r q} \right)$ and the point $\left( \frac{2^k \lambda_1}{2^{r+k} q}, \ldots, \frac{2^k \lambda_{d+1}}{2^{r+k} q} \right)$ are identified as the same point. Moreover, this system of rational barycentric coordinates allows easy access to the neighbor points with no need for storing the neighbor structure. As depicted in Fig. 3, coordinates of neighbor points with the same denominator differ only by one unit. Thus, to query whether a neighbor exists, we simply add one to one component, subtract one from another component, and search in the set of points. Therefore, there is no need for storing explicit connectivity information.
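A minimal sketch of this identification, using Python's Fraction to reduce each coordinate $\lambda_i / (2^r q)$ to lowest terms (0-based indices; all names are illustrative):

```python
from fractions import Fraction

def key(lam, r, q):
    """Canonical hash key: equivalent representations of the same point
    reduce to the same tuple of fractions."""
    return tuple(Fraction(l, (2 ** r) * q) for l in lam)

def neighbor(lam, i, j):
    """Numerators of the neighbor along u_(i,j): add one to component i,
    subtract one from component j (same denominator)."""
    out = list(lam)
    out[i] += 1
    out[j] -= 1
    return tuple(out)

points = {key((1, 1), r=0, q=2): 0.7}          # midpoint of the 1D simplex
print(key((2, 2), r=1, q=2) in points)          # True: same point, finer level
print(key(neighbor((1, 1), 0, 1), 0, 2) in points)  # neighbor query: False
```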

3.3 Point Refinement

Besides the resolution, it is also useful to classify the points in terms of the completeness of the stencil. On the one hand, we say a point $x$ is incomplete of resolution $r$ if the sample point $x + 2^{-r} u_{(i,j)}$ exists for some $u_{(i,j)} \in \mathcal{U}$. Alternatively, this means that at least one point of the stencil of resolution $r$ centered at $x$ exists. A sample point $x$ may be incomplete in several resolutions, but it is for the highest resolution that the representation of the function around $x$ is most accurate. On the other hand, if all the points of the stencil exist, we say the point is complete. More precisely, a point $x$ is complete of resolution $r$ if the sample point $x + 2^{-r} u_{(i,j)}$ exists for all $u_{(i,j)} \in \mathcal{U}$. Similarly to the incomplete case, the higher the resolution, the more accurate the representation of the function around the point. We remark that a complete point of resolution $r$ provides a finer discretization than an incomplete point of resolution $r'$ with $r' \le r$, since the neighborhood is denser and sampled along all the considered directions. Furthermore, a point may be complete and incomplete at the same time, both classifications providing meaningful information. For instance, consider a complete point of resolution $r$ which is also incomplete of resolution $r + 1$. In this case, not only is the neighborhood fully sampled at resolution $r$, but additional partial information about the function at resolution $r + 1$ is known. Around an incomplete point, this partial information has to be enhanced to obtain a more accurate representation of the target function. Accordingly, we consider an operation that completes the stencil around an incomplete point. Specifically, to


Fig. 4 Completing an incomplete sample point. An (a) incomplete point of resolution $r$ becomes (b) complete of resolution $r$

Fig. 5 Refining a complete sample point. A (a) complete point of resolution $r$ becomes (b) complete of resolution $r + 1$

complete an incomplete point of resolution $r$, we propose to generate all the missing points of the stencil of resolution $r$. Thus, the resulting point is no longer incomplete at level $r$. In Fig. 4, for the two-dimensional case, we illustrate the completion step for an incomplete point of resolution $r$. Since three points of the stencil exist, Fig. 4a, we only generate the remaining missing points to complete the stencil. Once completed, the point becomes complete of resolution $r$, Fig. 4b. When the information gathered from a complete point indicates that there is a minimum nearby, we should sample the function in a smaller neighborhood to capture it. Thus, we need a point refinement operation. Refining a complete point of resolution $r$ consists in generating all the points of the stencil of resolution $r + 1$. Thus, the point becomes complete of resolution $r + 1$. We highlight that if the point is incomplete of level $r + 1$, we only generate those points needed to complete the stencil of resolution $r + 1$ and, therefore, we avoid repeated function evaluations. In Fig. 5, we illustrate, for the two-dimensional case, the refinement of a complete point of resolution $r$ which is also incomplete of resolution $r + 1$. Since one point of the stencil of resolution $r + 1$ exists, see Fig. 5a, we generate five points to complete the stencil. Then, the point becomes complete of resolution $r + 1$, see Fig. 5b.
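Building on the key/neighbor helpers sketched earlier, the two operations admit a compact, illustrative form; F stands for the target-function evaluation and dirs for the index pairs (i, j), both assumptions of this sketch.

```python
def rescale(lam, dr):
    """Same point at a finer level:
    lambda_i/(2^r q) = (2^dr * lambda_i)/(2^(r+dr) q)."""
    return tuple(l * (2 ** dr) for l in lam)

def complete(points, lam, r0, r, dirs, q, F):
    """Generate the missing stencil points of resolution r around the
    point with numerators lam stored at level r0."""
    base = rescale(lam, r - r0)                  # numerators at level r
    for i, j in dirs:
        y = neighbor(base, i, j)                 # x + 2^-r u_(i,j)
        if min(y) >= 0 and key(y, r, q) not in points:
            points[key(y, r, q)] = F(y, r, q)    # evaluate only new points

def refine(points, lam, r, dirs, q, F):
    """A point complete of resolution r becomes complete of r + 1."""
    complete(points, lam, r, r + 1, dirs, q, F)
```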


Fig. 6 Smooth gradations of the resolutions. a The gray point is complete of resolution $r$ (dotted line) and incomplete of resolutions $r + 1$ (dashed line) and $r + 2$ (solid line). b Smooth sampling after refining the gray point until it becomes complete of resolution $r + 1$

3.4 Smooth Gradation

To obtain smooth discretizations of the target function, we need smooth gradations of the resolution of the sampling points. Accordingly, we only consider sampling configurations where the resolution between neighbors differs by at most one unit. More precisely, assume that the finest complete resolution of a point is $r$, and its highest incomplete resolution is $r'$, $r' > r$. Then, the sampling is smooth if $r' = r + 1$. Thus, after completing or refining a point, we check whether we have a smooth gradation of points. If there is a point such that $r' > r + 1$, we smooth it by refining until resolution $r' - 1$. In Fig. 6a, we show the sampling after refining the black point. We observe that the gray point is complete of resolution $r$ (dotted stencil), but it is also incomplete of resolutions $r + 1$ (dashed stencil) and $r + 2$ (solid stencil); therefore, this sampling configuration is non-smooth. To obtain a smooth discretization, we refine the gray point until it becomes complete of resolution $r + 1$ by generating one new point, see Fig. 6b. Now, there is a smooth gradation of the point resolution.
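In sketch form, reusing complete from above; status is a hypothetical per-point record of the finest complete resolution r_comp, the highest incomplete resolution r_inc, and the storage level of the numerators.

```python
def smooth_sampling(points, status, dirs, q, F):
    """Enforce r_inc <= r_comp + 1 everywhere: refine any offending
    point until it becomes complete of resolution r_inc - 1."""
    for lam, (r_comp, r_inc, level) in list(status.items()):
        while r_inc > r_comp + 1:
            complete(points, lam, level, r_comp + 1, dirs, q, F)
            r_comp += 1
        status[lam] = (r_comp, r_inc, level)
```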

4 Adaptive Point Refinement

In this section, we first present our adaptive method to estimate the minimum of a function defined on the $d$-dimensional simplex, see Sect. 4.1. The rational barycentric coordinate system described in Sect. 3 is the core of our method, since an explicit point connectivity structure is not needed. Then, in Sect. 4.2, we detail the stopping criterion.


Algorithm 1 Approximating the minimum by sampling.
Input: Function F, Domain K^d
Output: Minimum x★, F(x★)
 1: function ComputeMinimum(F, K^d)
 2:   ∑ ← InitializeSamplePoints(F, K^d)
 3:   x★ ← GetMinimum(∑)
 4:   while x★ is not a minimum of F do
 5:     ∑_C ← GetCompletePoints(∑)
 6:     {x_Ci} ← GetPointsToRefine(∑_C)
 7:     RefinePoints(F, ∑, {x_Ci})
 8:     ∑_I ← GetIncompletePoints(∑)
 9:     {x_Ij} ← GetPointsToComplete(∑_I)
10:     CompletePoints(F, ∑, {x_Ij})
11:     SmoothSampling(F, ∑)
12:     x★ ← GetMinimum(∑)
13:   end while
14:   return x★, F(x★)
15: end function

4.1 Algorithm

The adaptive point refinement is detailed in Algorithm 1. Given the function to minimize, $F$, and the simplicial domain where it is defined, $K^d$, the first step of the method is to initialize the set of sample points, denoted $\Sigma$, Line 2. We remark that $F$ is an arbitrary target function; in our main application it corresponds to minus the Lebesgue function. In the second step, the method gets the first minimum approximation on the initial sample points, Line 3. This initialization allows iterating to seek a better approximation of the minimum until convergence, Line 4. To improve the minimum approximation, the iterative process successively refines and completes the potentially optimal sample points and smooths the gradation of the sampling point resolution.

First, the method refines the candidate points. To this end, in Line 5, we retrieve the set of complete points $\Sigma_C$ and choose the points to refine, Line 6. We determine the points to refine in terms of their resolution and function value. More concretely, each complete sample point is represented in a graph by a dot whose horizontal coordinate is its resolution and whose vertical coordinate is its function value. If a point $x$ is complete of resolutions $r_1, \ldots, r_k$, with $r_1 < r_2 < \cdots < r_k$, then it is represented by a dot at position $(r_k, F(x))$. In Fig. 7, we show this graph at an intermediate stage of the algorithm. Similarly to the DiRect algorithm [4], we choose the points to refine by exploring multiple Lipschitz constants, which, in practice, reduces to computing the lower boundary of the convex hull of this point cloud. Then, in Line 7, we refine the chosen points $\{x_{C_i}\}$.

Second, the method completes the incomplete points. To this end, in Line 8, we obtain the set of incomplete points $\Sigma_I$ and choose the points to complete, Line 9. Let $x \in \Sigma_I$ be an incomplete point of resolutions $r_1, \ldots, r_k$, with $r_1 < r_2 < \cdots < r_k$, which is either not complete or complete with finest resolution $r$, $r < r_1$. Incomplete

Fig. 7 Two-dimensional representation of the complete points in terms of the resolution and function value. The lower boundary of the convex hull determines the points to complete


points provide information about the function in a local sense, since they have been sampled along a particular direction only. In contrast, complete points have been sampled along all the directions and, therefore, global information is known. Since we prefer to have first a big picture of the function landscape before focusing on the higher-resolution detail, we represent the incomplete point $x$ by a dot with coordinates $(r_1, F(x))$ instead of $(r_k, F(x))$. Then, we obtain a point cloud similar to the one shown in Fig. 7. The lower part of the convex hull of this representation of $\Sigma_I$ determines the points $\{x_{I_j}\}$ to be completed. Finally, in Line 10, we complete these points.

Third, the method smooths the gradation of the sampling point resolution. Specifically, in Line 11, we generate the points needed to ensure the sampling is smooth, see Sect. 3.4, and retrieve the minimum point $x^\star$ from the sampling $\Sigma$, Line 12. These steps are repeated until the point $x^\star$ is a minimum, see Line 4. The details of the stopping criterion are described in Sect. 4.2. Finally, the algorithm returns the minimum point and the function value at the minimum, Line 14. We highlight that the function is evaluated only in the generation of new sample points, that is, in the refinement, completion, and smoothing steps. Further, in the point data structure, we store the point coordinates and the function value, so it can be immediately obtained when needed, avoiding repeated calculations. To easily access the neighbor points, the point data structure also contains an updated list of the complete and incomplete resolutions.
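The selection of potentially optimal points thus reduces to a lower convex hull of (resolution, value) dots. A standard monotone-chain sketch, illustrative rather than the authors' code:

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b (2D orientation test)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_hull(dots):
    """Lower boundary of the convex hull of (resolution, value) dots;
    the dots on this boundary are the points to refine or complete."""
    hull = []
    for p in sorted(dots):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                 # pop dots lying above the new segment
        hull.append(p)
    return hull
```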

4.2 Stopping Criterion

In zeroth-order minimization, it is standard to stop seeking a minimum when a fixed number of iterations is reached or when the minimum approximation is numerically close to a known minimum value. In our case, only the value of the function is known, yet the sample structure allows obtaining an indicator of the flatness of the function. Accordingly, we can consider a stopping criterion accounting for the function flatness


as in first- and second-order optimization methods. The user specifies spatial and functional tolerances, and the method automatically stops when a minimum below these thresholds is found. The spatial tolerance controls the resolution of the sampling in the neighborhood of the minimum sample point. Specifically, given a spatial tolerance $\delta$, there exists a resolution $R$ such that the distance between a point $x$ and the point $x + 2^{-r} u_{(i,j)}$ is smaller than $\delta$ for all $r \ge R$ and all vectors $u_{(i,j)} \in \mathcal{U}$. Note that a complete point of resolution $r$, $r \ge R$, satisfies this criterion. The functional tolerance $\varepsilon$ is used to assess the flatness of the function around a point in terms of an estimate of the local Lipschitz constant. Specifically, consider a complete point $x$ of resolution $r$, and denote by $y$ the neighbor along the direction $(i, j)$, $y = x + 2^{-r} u_{(i,j)}$. We estimate the Lipschitz constant of resolution $r$ around $x$ along the direction $(i, j)$ as

$$\tilde{K}_{(i,j)}(x) = \frac{F(x) - F(y)}{d(x, y)},$$

where $d(x, y) = \| x - y \|_2$ is the distance between the points $x$ and $y$. Note that we allow negative Lipschitz constant estimations. In particular, $\tilde{K}_{(i,j)}(x)$ is negative if and only if $F(x) < F(y)$. Moreover, the magnitude of the Lipschitz constant is strongly related to the flatness of the function around $x$. Thus, the point $x$ is a minimum candidate if $\tilde{K}_{(i,j)}(x)$ is negative and

$$\left| \tilde{K}_{(i,j)}(x) \right| < \varepsilon,$$

for all the directions $u_{(i,j)} \in \mathcal{U}$. In Algorithm 1, at the end of the loop, there exists a sample point $x^\star$ such that $F(x^\star) \le F(y)$ for all $y \in \Sigma$. Then, in Line 4, we check whether the point $x^\star$ is complete of resolution $r$, $r \ge R$, and the local estimates of the Lipschitz constant along all the possible directions for resolution $r$ are less than $\varepsilon$. If so, we assume that the neighborhood of $x^\star$ has been sufficiently sampled and that the function is sufficiently flat there. Thus, the point $x^\star$ is considered an estimate of the function minimum and the algorithm stops. Alternatively, it is also possible to limit the number of iterations. This limit allows the user to obtain an approximation of the minimum before the tolerance-based stopping criterion is satisfied. In both cases, the algorithm returns the sample point $x^\star$ with the smallest function value.
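The resulting flatness test is only a few lines. A sketch, assuming F returns cached function values and neighbors lists the stencil points of x at its finest complete resolution:

```python
import numpy as np

def is_minimum_candidate(F, x, neighbors, eps):
    """x passes if every local Lipschitz estimate is negative and its
    magnitude is below the functional tolerance eps."""
    x = np.asarray(x, dtype=float)
    for y in neighbors:
        y = np.asarray(y, dtype=float)
        K = (F(x) - F(y)) / np.linalg.norm(x - y)   # local Lipschitz estimate
        if K >= 0.0 or abs(K) >= eps:
            return False
    return True
```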

5 Results: Estimation of the Lebesgue Constant

The main application of the method presented in Sect. 4 is the estimation of the Lebesgue constant in the $d$-dimensional simplex. The Lebesgue constant is used to assess the interpolation capabilities of a nodal distribution and is defined as the


Fig. 8 Lebesgue function of the warp-and-blend nodal distribution of polynomial degree 10 in the triangle [7]

maximum of the Lebesgue function, see Eq. 1. Due to the absolute value, the function is non-differentiable and, hence, a zeroth-order method is required to compute the maximum. In Fig. 8, for a triangle of polynomial degree 10, we show the Lebesgue function for a warp-and-blend nodal distribution [7]. Since this nodal family is symmetric, the Lebesgue function is symmetric, too. Consequently, it is enough to find the maximum inside one sextant of the triangle. More precisely, we consider the symmetric tile of the $d$-dimensional simplex determined by the points with barycentric coordinates $(\lambda^1, \ldots, \lambda^{d+1})$, $\sum_{i=1}^{d+1} \lambda^i = 1$, such that $\lambda^i \ge \lambda^j$ if $i \ge j$.

5.1 Verification in 2D and 3D

To verify the estimated values of the Lebesgue constant found using our method, we compare our results with those reported in [7]. In Table 1, we report the value of the Lebesgue constant for the equispaced and the warp-and-blend distributions [7] for several polynomial degrees $p$, $p = 2, \ldots, 15$, in the triangle. The initial sampling is composed of the three vertices of the domain, $q = 1$. We set $\delta = 10^{-4}$ and $\varepsilon = 10^{-3}$ for the stopping criterion, and in all cases the minimum is found before the limit of 50 iterations is reached. We also list the number of sample points needed. In general, as the polynomial degree increases, so does the number of points. This is because, for high polynomial degrees, the basins of the Lebesgue function that contain the minima are smaller and deeper and, consequently, more sample points are needed to


Table 1 Number of sample points needed to estimate the Lebesgue constant using the equispaced distribution, $\Lambda_{\mathrm{Eq}}$, and the warp-and-blend distribution [7], $\Lambda_{\mathrm{WB}}$, of polynomial degree $p = 2, \ldots, 15$ as interpolation set in the triangle

 p   Λ_Eq      # points   Λ_WB    # points
 2   1.67      219        1.67    221
 3   2.27      302        2.11    292
 4   3.47      280        2.66    283
 5   5.45      280        3.12    483
 6   8.75      424        3.70    404
 7   14.34     356        4.27    378
 8   24.01     409        4.96    668
 9   40.92     533        5.74    611
10   70.89     397        6.67    685
11   124.53    427        7.90    497
12   221.41    538        9.36    747
13   397.70    422        11.47   735
14   720.69    412        13.97   1142
15   1315.89   599        17.65   885

Fig. 9 Final sampling used to capture the maximum of the Lebesgue function of the warp-and-blend nodal distribution of polynomial degree 10 in the triangle [7]

capture the minimum with the same precision. In spite of this, we remark that our method is able to compute a good estimate of the Lebesgue constant using fewer than 1200 sample points, and the values coincide with those reported in [7] up to the second decimal place.

In Fig. 9, we show the final sampling used to capture the maximum of the Lebesgue function associated with the warp-and-blend distribution of polynomial degree 10 represented in Fig. 8. Since this function features triangle symmetry, the search space is simply the sextant. We remark that regions with higher values, the blueish areas in


Table 2 Number of sample points needed to estimate the Lebesgue constant using the equispaced distribution, ΛEq, and the warp-and-blend distribution [7], ΛWB, of polynomial degree p = 2, . . . , 15 as interpolation set in the tetrahedron

      Equispaced            Warp-and-blend
p     ΛEq       # points    ΛWB      # points
2     2.00      398         2.00     398
3     3.02      565         2.93     635
4     4.88      536         4.07     722
5     8.09      581         5.32     990
6     13.66     690         7.01     1040
7     23.38     675         9.21     1671
8     40.55     751         12.54    854
9     71.15     708         17.02    1651
10    126.20    779         24.36    2412
11    225.99    798         36.35    1644
12    408.15    853         54.18    1707
13    742.69    860         84.62    2594
14    1360.49   843         135.75   2635
15    2506.95   926         217.71   3519

We also see that there are three local minima with similar function values, yet the global minimum is the one in the interior of the domain. In Table 2, we report the maximum value and the number of sample points needed to estimate the Lebesgue constant for the equispaced and the warp-and-blend distribution [7] for several polynomial degrees p, p = 2, . . . , 15, in the tetrahedron. We use the same initial sampling and the same tolerances δ = 10^−4 and ε = 10^−3 for the stopping criterion, and in all cases the minimum is found before the limit of 50 iterations is reached. As in the two-dimensional case, more sample points are required to estimate the Lebesgue constant for higher polynomial degrees. We highlight that the values coincide with those reported in [7] up to the second decimal place, and only 3519 points are needed to compute an estimate of the Lebesgue constant for the warp-and-blend distribution of polynomial degree 15. In contrast, using an admissible mesh with 3519^(1/3) ≈ 16 points per line, the estimated value of the Lebesgue constant is 211.07.

5.2 Performance Comparison in 2D

To check the performance, we compare the results of our method with the results of our implementation of the DiTri algorithm [6]. For both methods, we compute the Lebesgue constant in the triangle for the warp-and-blend symmetric nodal distribution of polynomial degree 10.


Fig. 10 Error in the estimation of the Lebesgue constant in terms of the number of sample points using our method (blue) and DiTri [6] (red) for the warp-and-blend distribution of polynomial degree 10 in the triangle

To do so, we report, at the end of each iteration, the number of sample points and the relative error of the estimated maximum. In Fig. 10, we show the evolution of our method, in blue, and of the DiTri algorithm, in red. We observe that both methods show a similar evolution. Moreover, to capture the maximum with a relative error below 10^−4, both methods need fewer than 200 sample points. Note that we do not consider the non-deterministic method of [7] because its initial sampling already consists of 10 000 sample points. Although the evolution of both methods is similar in 2D, our method scales better in higher dimensions. We highlight that to refine a point using our method, we generate at most 6 new sample points, while DiTri always requires 3 new sample points. This difference is almost irrelevant in 2D and, consequently, the two curves follow a similar trend. However, this would not be the case in higher dimensions, since we generate at most (d + 1)d new sample points per refinement, while an extension of DiTri to higher dimensions would require 2^d − 1 new sample points. Hence, for each method, the number of sample points scales differently with the dimension d: exponentially for an extension of DiTri to arbitrary dimensions, quadratically for our method. Moreover, since we only need a point data structure, the refinement, completion, and smoothing operations are implemented for arbitrary dimensions and no dimension-specific considerations are required. Finally, the system of rational barycentric coordinates gives easy access to the adjacent points without storing a neighbor structure, which enables a stopping criterion based on the flatness of the function.
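The two growth rates quoted above can be tabulated directly (a trivial check; both counts are taken from the text, and the DiTri column refers to the hypothetical d-dimensional extension):

# New sample points generated per refinement step: at most (d + 1) d for the
# point refinement method (quadratic in d) versus 2^d - 1 for an extension
# of DiTri to d dimensions (exponential in d).
for d in range(2, 7):
    print(f"d = {d}: point refinement <= {(d + 1) * d:3d}, DiTri = {2**d - 1:3d}")

The exponential count overtakes the quadratic one from d = 5 onwards (30 versus 31 at d = 5, 42 versus 63 at d = 6).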


Table 3 Estimation of the Lebesgue constant of the equispaced distribution of polynomial degree p = 6, . . . , 10 in the d-simplex, d = 4, . . . , 6. Number of sample points needed to compute our estimation ΛEq, and approximation Λ̃Eq using an admissible mesh

          Dimension 4                  Dimension 5                  Dimension 6
Degree p  # points  ΛEq     Λ̃Eq       # points  ΛEq     Λ̃Eq       # points  ΛEq     Λ̃Eq
6         1126      19.22   19.05      2030      25.49   19.90      1807      32.63   25.45
7         1075      34.08   33.51      1545      46.54   34.43      1790      61.00   50.46
8         1033      60.86   55.75      1578      85.24   65.06      2466      114.13  96.97
9         1175      109.43  90.72      1677      156.62  126.31     2252      213.76  180.24
10        1572      198.08  150.71     1667      288.82  241.51     2381      400.93  323.42

5.3 Results in 4D, 5D, and 6D

The values reported in Sect. 5.1 for 2D and 3D coincide with the ones found in the literature [7]. Thus, we believe that our method is capable of estimating the Lebesgue constant accurately using a moderate number of sample points. In Table 3, we show our estimation ΛEq of the Lebesgue constant of the equispaced nodal distribution of polynomial degree p in the d-simplex, p = 6, . . . , 10, d = 4, . . . , 6. We also show the number of required sample points. As expected, we observe that the values increase with the polynomial degree and the dimension. As an alternative to our method, we can use an admissible mesh [3] to estimate the Lebesgue constant. Since in dimension d = 4 we provided an estimate using at most 1572 sample points, we approximate the Lebesgue constant using an admissible mesh with approximately 1572^(1/4) points per line. Analogously, in 5D and 6D, we compute an estimate using approximately 2030^(1/5) and 2466^(1/6) points per line, respectively. In Table 3, we denote by Λ̃Eq the maximum function value on this grid of sample points. We observe that, with the same number of points, our method captures a higher value and, therefore, it is better suited to estimate the Lebesgue constant in small and moderate dimensions.
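A minimal sketch of this admissible-mesh baseline, assuming a regular barycentric grid (the function name and the stars-and-bars construction are our own; lebesgue_function refers to the hypothetical evaluator sketched in Sect. 5):

import numpy as np
from itertools import combinations

def simplex_grid_barycentric(d, n):
    # Regular grid in the d-simplex with n + 1 points per line (n >= 1):
    # all barycentric tuples (k_1, ..., k_{d+1}) / n with k_i >= 0, sum n.
    pts = []
    for cuts in combinations(range(n + d), d):          # stars and bars
        k = np.diff(np.array([-1, *cuts, n + d])) - 1   # d + 1 integers, sum n
        pts.append(k / n)
    return np.array(pts)

# Grid-based estimate; map barycentric to Cartesian via the simplex
# vertices V, an array of shape (d + 1, d), so that x = bary @ V:
# lam_tilde = max(lebesgue_function(bary @ V, nodes, p)
#                 for bary in simplex_grid_barycentric(d, n))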

6 Concluding Remarks

To estimate the Lebesgue constant on the simplex, we have proposed a new specific-purpose point refinement method. The proposed method features a smooth gradation of the resolution, neighbor queries based on neighbor-aware coordinates, and a point refinement that scales algebraically with the dimensionality. Remarkably, by using neighbor-aware coordinates, the point refinement method can automatically stop using a Lipschitz criterion. In mid-range dimensionality, we conclude that the point refinement is well suited to automatically and efficiently estimate the Lebesgue constant on simplices.


Specifically, for different polynomial degrees and point distributions, our results have efficiently reproduced the literature estimations for the triangle and the tetrahedron. Moreover, we have adaptively estimated the Lebesgue constant up to six dimensions. In perspective, for a given polynomial degree, the proposed point refinement might be relevant for obtaining a set of simplex points that guarantees a small interpolation error. That is, it efficiently estimates the Lebesgue constant, an estimation that is helpful in two ways: first, to assess the quality of a given set of interpolation points; second, to evaluate the Lebesgue constant when optimizing the interpolation error with the point distribution as a design variable. We also think the method might be well suited to seek optima in the simplex for functions behaving like the Lebesgue function.

References

1. Angelos, J.R., Kaufman Jr., E.H., Henry, M.S., Lenker, T.D.: Optimal nodes for polynomial interpolation. Approximation Theory VI 1, 17–20 (1989)
2. Briani, M., Sommariva, A., Vianello, M.: Computing Fekete and Lebesgue points: simplex, square, disk. Journal of Computational and Applied Mathematics 236(9), 2477–2486 (2012)
3. Calvi, J.P., Levenberg, N.: Uniform approximation by discrete least squares polynomials. Journal of Approximation Theory 152(1), 82–100 (2008)
4. Jones, D.R., Perttunen, C.D., Stuckman, B.E.: Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications 79(1), 157–181 (1993)
5. Paulavičius, R., Žilinskas, J.: Simplicial Lipschitz optimization without Lipschitz constant. In: Simplicial Global Optimization, pp. 61–86. Springer (2014)
6. Roth, M.J.: Nodal configurations and Voronoi tessellations for triangular spectral elements. Ph.D. thesis (2005)
7. Warburton, T.: An explicit construction of interpolation nodes on the simplex. Journal of Engineering Mathematics 56(3), 247–262 (2006)