Shape optimization method

Problem definition

We seek an optimal shape, \(\widetilde{\Omega}\subset\mathbb{R}^{d}\), \(d\in\left\{2,3\right\}\), that minimizes a cost function of a linear elastic structure in a fixed working domain \(D\), subject to Dirichlet and Neumann boundary conditions.

\[\widetilde{\Omega}:=\underset{\Omega\in\mathcal{O}\left(D\right)}{\text{argmin}}J\left(\Omega\right)\]

with \(\mathcal{O}\left(D\right)\) the set of admissible shapes contained in the fixed working domain \(D\).

The objective function \(J\) is defined as:

(1)\[\begin{split}\begin{align} J:\mathcal{O}&\rightarrow\mathbb{R}\\ \Omega&\mapsto J(\Omega) = \int_{\Omega}j(u)\text{ }dx \end{align}\end{split}\]

where \(j\) is a function from \(\Omega\) to \(\mathbb{R}\) that depends on the displacement field \(u\), the solution of a PDE.

Inequality and equality constraints can be imposed in the shape optimization problem.

The equality constraint function is denoted \(C_{1}\) and defined by \(C_{1}:\mathcal{O}\rightarrow\mathbb{R}\).

The inequality constraint function is denoted \(C_{2}\) and defined by \(C_{2}:\mathcal{O}\rightarrow\mathbb{R}\).

The general shape optimization problem is written:

(2)\[\begin{split}\begin{cases} \underset{\Omega\in\mathcal{O}}{\min}J(\Omega) & \!\!\!\! =\underset{\Omega\in\mathcal{O}}{\min}\int_{\Omega}j(u)dx \\ C_{1}(\Omega) & \!\!\!\!= 0\\ C_{2}(\Omega) &\!\!\!\! <0 \\ a\left(u,v\right) & \!\!\!\! =l\left(v\right) \end{cases}\end{split}\]
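For instance (an illustrative choice, not prescribed by the method), \(C_{1}\) may enforce a target volume \(V_{\text{target}}\) and \(C_{2}\) a maximum volume \(V_{\max}\):

\[C_{1}(\Omega)=\int_{\Omega}dx-V_{\text{target}},\qquad C_{2}(\Omega)=\int_{\Omega}dx-V_{\max}.\]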

Boundary definitions:

Let \(\Gamma\) denote the free boundary and \(\partial\Omega=\Gamma\cup\left[\partial D\cap\overline{\Omega}\right]\) the boundary of \(\Omega\).

One will also distinguish \(\Gamma_{D}\) , the part of the boundary where Dirichlet conditions are applied, and \(\Gamma_{N}\) , the part where Neumann conditions are applied, such that \(\Gamma=\Gamma_{D}\cup\Gamma_{N}\) and \(\Gamma_{D}\cap\Gamma_{N}=\emptyset\). To clarify these definitions, a diagram is given in Figure 1.


Figure 1 Illustration of the boundaries of the problem definition.

Augmented Lagrangian Method

The Augmented Lagrangian Method (ALM) is used to solve the constrained optimization problem defined by (2). Here, we provide a concise overview of the ALM. We begin by considering the following problem:

\[\underset{\Omega\in\mathcal{O}}{\min}J(\Omega)\text{ such that }C(\Omega)=0\]

where \(C(\Omega)=0\) represents an equality constraint.

The problem is reformulated as an equivalent min-max problem:

\[\underset{\Omega\in\mathcal{O}}{\min}\underset{\alpha\in\mathbb{R}}{\text{ }\max}\left\{J(\Omega)+\alpha C(\Omega)+\frac{\beta}{2}\left|C(\Omega)\right|^{2}\right\}\]

where \(\alpha\) is a Lagrange multiplier, and \(\beta>0\) is a penalty term.

The quadratic penalty term stabilizes the convergence toward a feasible solution and limits oscillations. The min-max problem is solved using an iterative gradient method, in which the Lagrange multiplier and the penalty parameter are updated at each iteration as follows:

\[\begin{split}\begin{align} \alpha^{n+1}&=\alpha^{n}+\beta^{n} C\left(\Omega^{n} \right) \\ \beta^{n+1}&=\min\left(\hat{\beta},k\beta^{n} \right) \end{align}\end{split}\]

where \(\Omega^{n}\) is the domain at iteration \(n\), \(\hat{\beta}\) is the upper limit of the penalty parameter and \(k\) is a multiplication coefficient.
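A minimal Python sketch of this outer loop (illustrative only; `J`, `C`, and `update_shape` are hypothetical callables that are not part of the documented implementation):

```python
def augmented_lagrangian(omega, J, C, update_shape,
                         alpha=0.0, beta=1.0, beta_max=100.0, k=1.2,
                         n_iter=50):
    """Sketch of the ALM outer loop for min J(omega) s.t. C(omega) = 0."""
    for _ in range(n_iter):
        # Augmented Lagrangian evaluated on the current shape
        lagrangian = lambda om: J(om) + alpha * C(om) + 0.5 * beta * C(om) ** 2

        # One descent step on the shape (e.g. advection of the level set)
        omega = update_shape(omega, lagrangian)

        # Multiplier and penalty updates
        alpha = alpha + beta * C(omega)
        beta = min(beta_max, k * beta)
    return omega
```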

Céa's Method

Céa's method, proposed in [Cea86], makes it possible to avoid the computation of complex shape derivative terms.

First, a Lagrangian function is introduced and defined as:

(3)\[\begin{split}\begin{split} \mathcal{L}:\mathcal{O}\times V\times V_{0} & \rightarrow \mathbb{R} \\ (\Omega,u,p) & \mapsto \mathcal{L}(\Omega,u,p)=J(\Omega)-a(u,p)+l(p). \end{split}\end{split}\]

The minimization problem (1), without equality and inequality constraints, is equivalent to finding the extremum \(\left(\Omega_{\text{min}},u_{\Omega_{\text{min}}},p_{\Omega_{\text{min}}}\right)\) of the Lagrangian function, solution of the following optimization problem: Find \(\left(\Omega_{\text{min}},u_{\Omega_{\text{min}}},p_{\Omega_{\text{min}}}\right)\) such that

\[\left(\Omega_{\text{min}},u_{\Omega_{\text{min}}},p_{\Omega_{\text{min}}}\right):=\underset{\Omega\in\mathcal{O}}{\min}\underset{p\in V_{0}}{\text{ }\max} \underset{u\in V}{\text{ }\min} \text{ } \mathcal{L}(\Omega,u,p)\]

For all \(\Omega\in \mathcal{O}\), evaluated at \(u=u_{\Omega}\) (the solution of equation (5)), we have:

\[\mathcal{L}(\Omega,u_{\Omega},p)=J(\Omega)\mid_{u=u_{\Omega}}\text{, }\forall p\in V_{0}.\]

The saddle point of the Lagrangian is determined by the following problems: Find \(u_{\Omega}\in V\) such that:

\[\partial_{p}\mathcal{L}(\Omega,u_{\Omega},p;\varphi)=-a(u_{\Omega},\varphi)+l(\varphi)=0\text{, }\forall\varphi\in V_{0}.\]

Find \(p_{\Omega}\in V_{0}\) such that:

\[\partial_{u}\mathcal{L}(\Omega,u_{\Omega},p_{\Omega};\psi)=\partial_{u} J(\Omega;\psi)\mid_{u=u_{\Omega}}-a(\psi,p_{\Omega})=0 \text{, }\forall\psi\in V.\]

According to (3), and using the definition of the saddle point \(\left(u_{\Omega},p_{\Omega}\right)\), the shape derivative of the cost function in the direction \(\theta\) is obtained by composition:

\[\begin{split}\begin{align} J'(\Omega)(\theta)&=\mathcal{L}'_{\Omega}(\Omega,u_{\Omega},p_{\Omega};\theta)\\ &=\partial_{\Omega}\mathcal{L}(\Omega,u_{\Omega},p_{\Omega};\theta)+\underset{=0}{\underbrace{\partial_{u}\mathcal{L}(\Omega,u_{\Omega},p_{\Omega};u_{\Omega,\theta}^{'})}}+\underset{=0}{\underbrace{\partial_{p}\mathcal{L}\left(\Omega,u_{\Omega},p_{\Omega};p_{\Omega,\theta}^{'}\right)}}\\ &=\partial_{\Omega}J(\Omega)_{\mid u=u_{\Omega}}-\partial_{\Omega}a(u_{\Omega},p_{\Omega})+\partial_{\Omega}l(p_{\Omega}) \end{align}\end{split}\]

with:

\[\begin{split}\begin{align} u'_{\Omega,\theta}(x)&=\lim_{t\rightarrow0}\frac{u_{\left(\text{Id}+t\theta\right)(\Omega)}(x)-u_{\Omega}(x)}{t} \quad \text{ the Eulerian derivative of }u\text{ in direction }\theta\\ p'_{\Omega,\theta}(x)&=\lim_{t\rightarrow0}\frac{p_{\left(\text{Id}+t\theta\right)(\Omega)}(x)-p_{\Omega}(x)}{t} \quad \text{ the Eulerian derivative of }p\text{ in direction }\theta. \end{align}\end{split}\]
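As a hedged illustration (not taken from the text), for a tracking-type cost \(j(u)=\left|u-u_{t}\right|^{2}\) with a given target field \(u_{t}\), the adjoint problem above becomes: find \(p_{\Omega}\in V_{0}\) such that

\[a(\psi,p_{\Omega})=\int_{\Omega}2\left(u_{\Omega}-u_{t}\right)\cdot\psi\text{ }dx\text{, }\forall\psi\in V.\]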

Mechanical model

In our implementation, we consider a linear elastic isotropic material. In the following, we detail the assumptions and equations of the mechanical model.

Material behavior:

Assuming the material of the domain \(\Omega\) is linear isotropic elastic, Hooke's law gives the following relationship between the stress tensor \(\sigma\) and the strain tensor \(\varepsilon\):

\[\sigma = 2\mu\varepsilon+\lambda\text{Tr}\left(\varepsilon\right)\text{Id}\]

where \(\lambda\) and \(\mu\) are the Lamé moduli of the material.

We seek the displacement of the material, \(u\), such that:

(4)\[\begin{split}\begin{align} \begin{cases} - \text{div} \sigma(u) & \!\!\!\!=0 \text{ in }\Omega\\ u& \!\!\!\!=0\text{ on }\Gamma_{D}\\ \sigma(u)\cdot n & \!\!\!\!=g\text{ on }\Gamma_{N} \end{cases} \end{align}\end{split}\]

Note

We assume small deformations and zero volumetric forces.

Weak formulation of linear elasticity

Find \(u_{\Omega}\in V(\Omega)=\left\{ u\in\left[H^{1}(\Omega)\right]^{d}\mid u_{\mid\Gamma_{D}}=u_{D}\right\}\), such that \(\forall v\in V_{0}(\Omega)=\left\{ u\in\left[H^{1}(\Omega)\right]^{d}\mid u_{\mid\Gamma_{D}}=0\right\}\)

(5)\[a\left(u_{\Omega},v\right)=l\left(v\right)\]

where for all \(u\in V(\Omega)\) and \(v \in V_{0}(\Omega)\):

\[\begin{split}\begin{align} a\left(u,v\right)&=2\mu\left(\varepsilon(u),\varepsilon\left(v\right)\right)_{L^{2}(\Omega)}+\lambda\left(\nabla\cdot u,\nabla\cdot v\right)_{L^{2}(\Omega)}\\ l\left(v\right)&=\left(g,v\right)_{L^{2}\left(\Gamma_{N}\right)}, \end{align}\end{split}\]

with \(\varepsilon(u)=\frac{1}{2}\left(\nabla u+\nabla^{t}u\right)\).
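A minimal sketch of this weak form, assuming legacy FEniCS (dolfin) as the finite element backend with illustrative geometry, load, and Lamé moduli (none of these choices come from the text):

```python
from dolfin import *  # legacy FEniCS, assumed only for illustration

# Working domain D discretized by a structured mesh (illustrative geometry)
mesh = UnitSquareMesh(64, 32)
V = VectorFunctionSpace(mesh, "P", 1)

# Lame moduli (assumed values, derived from Young's modulus E and Poisson ratio nu)
E, nu = 1.0, 0.3
mu = E / (2.0 * (1.0 + nu))
lmbda = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))

def eps(u):
    # Symmetric gradient: eps(u) = (grad u + grad u^T) / 2
    return sym(grad(u))

u, v = TrialFunction(V), TestFunction(V)

# Neumann boundary Gamma_N: right edge of the square (assumed)
boundaries = MeshFunction("size_t", mesh, mesh.topology().dim() - 1, 0)

class Right(SubDomain):
    def inside(self, x, on_boundary):
        return on_boundary and near(x[0], 1.0)

Right().mark(boundaries, 1)
ds_N = Measure("ds", domain=mesh, subdomain_data=boundaries)

# Bilinear and linear forms of equation (5)
a = 2.0 * mu * inner(eps(u), eps(v)) * dx + lmbda * div(u) * div(v) * dx
g = Constant((0.0, -1.0))  # surface load g on Gamma_N (assumed)
l = dot(g, v) * ds_N(1)

# Dirichlet boundary Gamma_D: left edge, clamped (assumed)
bc = DirichletBC(V, Constant((0.0, 0.0)), "on_boundary && near(x[0], 0.0)")

u_sol = Function(V)
solve(a == l, u_sol, bc)
```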

Level set method

The level set method is used to describe \(\Omega\) and to capture its evolution.

Domain definition

The domain \(\Omega\) is described by a function \(\phi:D\rightarrow\mathbb{R}\) such that

\[\begin{split}\begin{cases} \phi(x)<0 & \text{ if }x\in\Omega, \\ \phi(x)=0 & \text{ if }x\in\partial\Omega, \\ \phi(x)>0 & \text{ if }x\in D\setminus\overline{\Omega}. \end{cases}\end{split}\]

Figure 2 Domain defined by a level-set signed distance function.

Several level-set functions can describe \(\Omega\). However, we are interested in level-set functions with the signed distance property, for numerical reasons.

A level-set function with the signed distance property with respect to the zero level set \(\phi(x)=0\) is defined as:

\[\begin{split}\begin{align} \phi(x) =& \begin{cases} -d\left(x,\Gamma\right) & \text{ if }x\in\Omega,\\ d\left(x,\Gamma\right) & \text{ if }x\in D\setminus\overline{\Omega}, \end{cases} \end{align}\end{split}\]

where \(d\) is the Euclidean distance function, defined as:

\[\begin{align} d\left(x,\Gamma\right)=\underset{y\in\Gamma}{\inf}d\left(x,y\right)\text{ with }\Gamma=\left\{ x\in D\text{, such that }\phi(x)=0\right\}. \end{align}\]
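A minimal sketch (illustrative, not from the text) of a signed-distance level set for a disk of radius \(r\) centered at \(c\) inside a square working domain, on a Cartesian grid:

```python
import numpy as np

# Cartesian grid over the working domain D = [0, 1]^2 (illustrative resolution)
n = 128
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n), indexing="ij")

# Disk Omega = {x : |x - c| < r}: phi < 0 inside, phi = 0 on Gamma,
# phi > 0 outside, and |grad phi| = 1 (signed distance property)
c, r = (0.5, 0.5), 0.25
phi = np.sqrt((x - c[0]) ** 2 + (y - c[1]) ** 2) - r
```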

Advection

To advect \(\phi\) along the velocity field \(\theta_{\text{reg}}\) (extended and regularized over the whole working domain \(D\)), we solve the following transport equation:

(6)\[\frac{\partial\phi}{\partial t}+\theta_{\text{reg}}\cdot\nabla\phi=0\text{, }\forall t\in\left[0;T\right].\]

For motion in the normal direction, this is equivalent to solving the following equation:

\[\frac{\partial\phi}{\partial t}-v_{\text{reg}}\left|\nabla\phi\right|=0\text{, }\forall t\in\left[0;T\right].\]

Note

In the context of shape optimization, \(t\) corresponds to a pseudo-time, a descent parameter in the minimization of the objective function.

Instead of solving the Hamilton-Jacobi equation (6) with the Finite Difference Method, the Finite Element Method is used.

For the computation of the temporal derivative, we adopt the explicit Euler method between \(0\) and \(T\) (with a fixed number of time steps of size \(\Delta t\)):

(7)\[\frac{\phi^{n+1}-\phi^{n}}{\Delta t}-v_{\text{reg}}\left|\nabla\phi^{n}\right|=0.\]

Here, \(\phi^{n}\) is the previous iterate, and \(n\) indexes the domain \(\Omega_{n}\). To solve the minimization problem (1), the descent step \(\Delta t\) of the gradient algorithm is chosen such that:

(8)\[\mathcal{J}\left(\Omega_{n+1}\right)<\mathcal{J}\left(\Omega_{n}\right).\]

Note

Moreover, in order to verify the stability conditions of the explicit Euler scheme, the time step must satisfy the following Courant-Friedrichs-Lewy (CFL) condition:

\[\Delta t < c \frac{h}{v_{\text{max}}}\]

where \(v_{\text{max}}\) is the maximum value of the normal velocity and \(c\in\left]0,1\right]\) is a chosen parameter.
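A minimal finite-difference sketch of one update of the form (7), together with a CFL-limited time step (illustrative only; recall that the text uses a finite element discretization, and this sketch omits any upwinding or stabilization):

```python
import numpy as np

def advect_normal(phi, v_reg, h, c=0.5):
    """One explicit Euler step: phi_new = phi + dt * v_reg * |grad phi|.

    phi, v_reg: 2D arrays on a grid of spacing h; c in (0, 1] is the CFL parameter.
    """
    # CFL-limited time step dt < c * h / v_max
    v_max = np.max(np.abs(v_reg))
    dt = c * h / (v_max + 1e-12)

    # |grad phi| by central differences
    dphi_dx, dphi_dy = np.gradient(phi, h)
    grad_norm = np.sqrt(dphi_dx ** 2 + dphi_dy ** 2)

    return phi + dt * v_reg * grad_norm
```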

Extension and regularization

Note

The descent direction obtained from the shape derivative is only defined on the free boundary \(\Gamma\). Implementing an extension of \(v\) is therefore necessary to obtain a velocity field defined over the entire domain \(D\). Moreover, the regularity of the field \(v\) is not sufficient to fit the mathematical framework of the shape derivative in Hadamard's sense (the space \(L^{2}\left(\Gamma\right)\) is not a subspace of \(W^{1,\infty}\left(\mathbb{R}^{d},\mathbb{R}^{d}\right)\)), so a regularization is also needed.

In our study, extending and regularizing the velocity field involves solving the following problem: Find \(v'_{\text{reg}}\in H_{\Gamma_{D}}^{1}=\left\{ v\in H^{1}\left(D\right)\text{ such that }v=0\text{ on }\Gamma_{D}\right\}\) such that \(\forall w\in H_{\Gamma_{D}}^{1}\)

(9)\[\alpha\left(\nabla v'_{\text{reg}},\nabla w\right)_{L^{2}\left(D\right)}+\left(v'_{\text{reg}},w\right)_{L^{2}\left(D\right)}=-\mathcal{J}'(\Omega)\left(w\right)\]

with \(\mathcal{J}\) the cost function.

Next, we define the normalized velocity field:

(10)\[v_{\text{reg}}=\frac{v'_{\text{reg}}}{\sqrt{\alpha\left\Vert \nabla v'_{\text{reg}}\right\Vert _{L^{2}\left(D\right)}^{2}+\left\Vert v'_{\text{reg}}\right\Vert _{L^{2}\left(D\right)}^{2}}}\]

This normalization enables the following equality to hold:

\[\left\Vert v_{\text{reg}}\right\Vert _{H_{\Gamma_{D},\alpha}^{1}\left(D\right)}=\sqrt{\alpha\left\Vert \nabla v_{\text{reg}}\right\Vert _{L^{2}\left(D\right)}^{2}+\left\Vert v_{\text{reg}}\right\Vert _{L^{2}\left(D\right)}^{2}}=1\]
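A minimal sketch of this normalization, using the same assumed FEniCS backend as above (with squared \(L^{2}\) norms):

```python
from dolfin import *  # legacy FEniCS, assumed only for illustration

def normalize_velocity(v_prime, alpha):
    """Return v_reg = v'_reg / ||v'_reg||_{H^1_alpha}, as in equation (10)."""
    grad_sq = assemble(inner(grad(v_prime), grad(v_prime)) * dx)
    val_sq = assemble(inner(v_prime, v_prime) * dx)
    norm = (alpha * grad_sq + val_sq) ** 0.5

    v_reg = v_prime.copy(deepcopy=True)
    v_reg.vector()[:] = v_prime.vector().get_local() / norm
    return v_reg
```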

Note

Then, to respect the small deformation hypothesis of Hadamard's method, the velocity field is multiplied by a constant smaller than 1. Alternatively, an adaptive time-step strategy can be used to ensure convergence.