Mathematical Methods 4 Electrotechnic Freaks
1218
2023
978-3-3811-1652-2
978-3-3811-1651-5
expert verlag
Jürgen Ulm
10.24053/9783381116522
The book offers a practice-oriented introduction to the mathematical methods of electrical engineering. The focus is on the solution of ordinary and partial differential equations using analytical and numerical methods, and the analytical methods are contrasted with their numerical counterparts. The differential equations were chosen with a view to typical problems of electrical engineering, and it is shown how they can also be transferred to mechanics or thermodynamics. Numerous examples and exercises with worked solutions facilitate the transfer of knowledge to applications.
<?page no="0"?> JÜRGEN ULM Mathematical Methods 4 Electrotechnic Freaks <?page no="1"?> Mathematical Methods 4 Electrotechnic Freaks <?page no="3"?> Jürgen Ulm Mathematical Methods 4 Electrotechnic Freaks <?page no="4"?> DOI: https: / / doi.org/ 10.24053/ 9783381116522 © 2023 · expert verlag ‒ ein Unternehmen der Narr Francke Attempto Verlag GmbH + Co. KG Dischingerweg 5 · D-72070 Tübingen Das Werk einschließlich aller seiner Teile ist urheberrechtlich geschützt. Jede Verwertung außerhalb der engen Grenzen des Urheberrechtsgesetzes ist ohne Zustimmung des Verlages unzulässig und strafbar. Das gilt insbesondere für Vervielfältigungen, Übersetzungen, Mikroverfilmungen und die Einspeicherung und Verarbeitung in elektronischen Systemen. Alle Informationen in diesem Buch wurden mit großer Sorgfalt erstellt. Fehler können dennoch nicht völlig ausgeschlossen werden. Weder Verlag noch Autor: innen oder Herausgeber: innen übernehmen deshalb eine Gewährleistung für die Korrektheit des Inhaltes und haften nicht für fehlerhafte Angaben und deren Folgen. Diese Publikation enthält gegebenenfalls Links zu externen Inhalten Dritter, auf die weder Verlag noch Autor: innen oder Herausgeber: innen Einfluss haben. Für die Inhalte der verlinkten Seiten sind stets die jeweiligen Anbieter oder Betreibenden der Seiten verantwortlich. Internet: www.expertverlag.de eMail: info@verlag.expert CPI books GmbH, Leck ISBN 978-3-381-11651-5 (Print) ISBN 978-3-381-11652-2 (ePDF) ISBN 978-3-381-11653-9 (ePub) Umschlagabbildung: © Jürgen Ulm Bibliografische Information der Deutschen Nationalbibliothek Die Deutsche Nationalbibliothek verzeichnet diese Publikation in der Deutschen Nationalbibliografie; detaillierte bibliografische Daten sind im Internet über http: / / dnb.dnb.de abrufbar. www.fsc.org MIX Papier aus verantwortungsvollen Quellen FSC ® C083411 ® <?page no="5"?> Foreword Mathematics is the universal tool for the scientist, ”... for the mathematic is the basis of all exact scientific knowledge...“ (David Hilbert, German mathematician, 1862-1943). Special attention is therefore paid to learning how to use the tool. As is so often the case, the realisation of the necessity paired with the motivation of the user is in the foreground. If the declared aim is to describe physical relationships by means of mathematics, this does not necessarily require thematic rigour. The application of mathematical rigour is likely to be counterproductive to this concern. Furthermore, G¨ odel’s incompleteness theorem of mathematics applies, which even shows mathematics itself its limitations. Experience has shown that the users’ desire for mathematical rigour can be observed when they are convinced and enthusiastic about mathematics and its possibilities. For this reason, mathematical rigour should not be given the highest priority at the beginning. Mathematics lives from the joy of its users and applications! ”It is impossible to adequately convey the beauties of the laws of nature if someone does not understand mathematics. I regret that, but it is probably so.“ (Richard Feynman, physicist and Nobel Prize winner, 1918 1988), denn ”The book of nature is written in the language of mathematics.“ (Galileo Galilei, 1564 1642). i <?page no="6"?> Calculator, paper, pencil and eraser in combination with coffee form a good basis. Mathematics is the universal tool of electrical engineering. Selected mathematical methods are also used to deal with selected topics in electrical engineering. 
The work proceeds by presenting the basics, describing the task and then solving the problem in detail. The target group of readers also follows from this procedure. From the author's point of view, these are:
• Students of the engineering sciences who would like to work on scientific topics using mathematical methods.
• Software engineers who want to implement differential equations in matrix form on microprocessors.
• Simulation engineers who would like to calculate something "on foot".
• Measurement engineers who need a measured value from a location where no sensor can be fitted, so that the value at this location can only be calculated.
• The maths-brave, who once turned pale, survived, and now want to give maths another try.

Since our science has a mirror-image structure, it is worthwhile, for example, to familiarise oneself in depth with one scientific discipline; here, electrical engineering is preferably recommended. By changing the coefficients of a differential equation, the enthusiastic reader of this book conquers another scientific discipline (hence the use of the term "mirror image"). For example, anyone who can solve electrical networks (meshes) can consequently also solve thermal, magnetic, mechanical and hydraulic networks.

The mathematical basics include calculation rules, definitions, matrices, ordinary and partial differential equations and coordinate systems. They provide access to understanding the chosen mathematical methods and their applications in electrical engineering. An elementary application in electrical engineering is the LCR oscillating circuit, which is described with differential equations and whose properties are presented. The integral transformation, the method of moments and Green's method have in common the formation of the inner product for the solution of differential equations. The last two methods are introduced in detail with the help of examples. With the method of moments, the transition to the finite element method (FEM) and the finite difference method (FDM) is made using application examples. The method of moments is also used to introduce the eigenvalue problem. The development of infinite series by alternately applying the law of flow (Ampère's law) and the law of induction leads to Bessel functions as well as to the phenomenon of field displacement with the effect of current displacement in the conductor. Selected standards should provide the reader with hints for the preparation of scientific documentation.

A note on the extended use of the book is permitted: new exercises can be generated by simply modifying an original problem that has already been solved. The modification should be made in such a way that its solution is already known in advance. This gives the possibility to compare the results and to deepen the familiarisation further. Because the following always applies: "Uncertain are the calculations of the dispersible" (Wisdom Literature).

With kind regards, the author, autumn 2023

For more information on the institutes, see also Appendix B.

Contents 1 Required mathematical basics 1 1.1 Logarithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2.1 Arithmetic operations with matrices . . . . . . . . . . . . . . . 3 1.2.2 Addition and subtraction of two matrices . . . . . . . . . . . . . 3 1.2.3 Multiplication of a matrix with a scalar . . . . . . . . . . . . . . 4 1.2.4 Square matrix . . . . .
. . . . . . . . . . . . . . . . . . . . . . . 4 1.2.5 Identity matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.6 Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2.7 Subdeterminant or minor . . . . . . . . . . . . . . . . . . . . . . 6 1.2.8 Adjuncts or algebraic complement . . . . . . . . . . . . . . . . . 6 1.2.9 Inverse matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.2.10 Transposed of a matrix . . . . . . . . . . . . . . . . . . . . . . . 8 1.2.11 Complex conjugate matrix . . . . . . . . . . . . . . . . . . . . . 9 1.2.12 Hermite conjugate matrix . . . . . . . . . . . . . . . . . . . . . 9 1.2.13 Hermitian matrix - self-adjoint matrix . . . . . . . . . . . . . . 10 1.2.14 Orthogonal matrix . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.2.15 Unitary matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.2.16 Normal matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 1.2.17 Norm of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 12 1.2.18 Conditioned matrix equation and condition number . . . . . . . 13 1.2.19 Eigenvalue, eigenvector . . . . . . . . . . . . . . . . . . . . . . . 14 1.2.20 Square matrices - a summary . . . . . . . . . . . . . . . . . . . 16 1.3 Integral, differential equations . . . . . . . . . . . . . . . . . . . . . . . 18 1.3.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 1.3.2 Differentiation of scalar functions . . . . . . . . . . . . . . . . . 19 vii <?page no="12"?> 1.3.3 Higher order ordinary differential equations . . . . . . . . . . . 19 1.3.4 Partial differential equations . . . . . . . . . . . . . . . . . . . . 21 1.3.5 Partial integration . . . . . . . . . . . . . . . . . . . . . . . . . 23 1.3.6 Classification of differential equations . . . . . . . . . . . . . . . 23 1.3.7 Initial value task . . . . . . . . . . . . . . . . . . . . . . . . . . 24 1.3.8 Boundary value problem . . . . . . . . . . . . . . . . . . . . . . 25 1.3.9 Linear operators . . . . . . . . . . . . . . . . . . . . . . . . . . 26 1.3.10 Inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 1.3.11 Strong form/ formulation of a differential equation . . . . . . . . 31 1.3.12 Weak form/ formulation of a differential equation . . . . . . . . 31 1.4 Vector classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 1.5 Differentiation rules for vectors . . . . . . . . . . . . . . . . . . . . . . 32 1.6 Vector operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 1.6.1 Nabla and Laplace operator . . . . . . . . . . . . . . . . . . . . 33 1.6.2 Vector operator Gradient . . . . . . . . . . . . . . . . . . . . . . 34 1.6.3 Vector operator Divergence . . . . . . . . . . . . . . . . . . . . 35 1.6.4 Vector operator Curl . . . . . . . . . . . . . . . . . . . . . . . . 36 1.6.5 Comparison of vector operators . . . . . . . . . . . . . . . . . . 37 1.6.6 Rules of calculation for the Nabla operator . . . . . . . . . . . . 37 1.6.7 Comparison scalar and vector product . . . . . . . . . . . . . . 38 1.6.8 Base, unit vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 39 1.7 Boundary operator ∂ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 1.8 Maxwell’s equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 1.8.1 Relationship between circular and surface integral . . . . . . . . 41 1.8.2 Relation between area integral and volume integral . . . . . . . 
42 1.8.3 Maxwell’s equations - differential form . . . . . . . . . . . . . . 43 1.8.4 Maxwell’s equations - integral form . . . . . . . . . . . . . . . . 43 1.8.5 Directional assignment of involved vector fields . . . . . . . . . . 44 1.9 Dirac’s delta function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 2 Coordinate systems 47 2.1 Cartesian coordinate system . . . . . . . . . . . . . . . . . . . . . . . . 47 2.2 Cylinder coordinate system . . . . . . . . . . . . . . . . . . . . . . . . 49 2.3 Sphere coordinate system . . . . . . . . . . . . . . . . . . . . . . . . . . 51 viii <?page no="13"?> 3 Geometric mean distance - GMD 55 3.1 Geometric mean distance - what for? . . . . . . . . . . . . . . . . . . . 55 3.2 Geometric mean distance - definitions and basics . . . . . . . . . . . . 58 3.2.1 Euclid - The Elements (extracts) . . . . . . . . . . . . . . . . . 58 3.2.2 Arithmetic means - definition . . . . . . . . . . . . . . . . . . . 58 3.2.3 Geometric mean - definition . . . . . . . . . . . . . . . . . . . . 59 3.2.4 GMD - possible combinations . . . . . . . . . . . . . . . . . . . 60 3.2.5 GMD - graphical interpretation . . . . . . . . . . . . . . . . . . 61 3.2.6 Why geometric mean? . . . . . . . . . . . . . . . . . . . . . . . 66 3.3 GMD of two collinear lines . . . . . . . . . . . . . . . . . . . . . . . . . 66 3.3.1 GMD calculation - numerical solution . . . . . . . . . . . . . . 67 3.3.2 GMD calculation - analytical solution . . . . . . . . . . . . . . 67 3.3.3 GMD calculation - example . . . . . . . . . . . . . . . . . . . . 68 3.4 GMD of a collinear arrangement between a point and a line . . . . . . 72 3.4.1 GMD calculation - numerical solution . . . . . . . . . . . . . . 73 3.4.2 GMD calculation - analytical solution . . . . . . . . . . . . . . 73 3.4.3 GMD calculation - example . . . . . . . . . . . . . . . . . . . . 74 3.5 GMD of a line on itself . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 3.5.1 GMD calculation - analytical solution . . . . . . . . . . . . . . 76 3.5.2 GMD calculation - numerical solution . . . . . . . . . . . . . . 77 3.5.3 GMD calculation - summary . . . . . . . . . . . . . . . . . . . 78 3.6 GMD of two parallel lines . . . . . . . . . . . . . . . . . . . . . . . . . 79 3.6.1 GMD calculation - numerical solution . . . . . . . . . . . . . . 79 3.6.2 GMD calculation - analytical solution . . . . . . . . . . . . . . 80 3.6.3 GMD calculation - example . . . . . . . . . . . . . . . . . . . . 81 3.7 GMD of a point and a helix . . . . . . . . . . . . . . . . . . . . . . . . 83 3.7.1 Length of an unwound helix . . . . . . . . . . . . . . . . . . . . 84 3.7.2 GMD calculation - analytical solution . . . . . . . . . . . . . . 86 3.8 GMD point outside line with its perpendicular on line centre . . . . . . 86 3.8.1 GMD calculation - numerical solution I . . . . . . . . . . . . . . 87 3.8.2 GMD calculation - numerical solution II . . . . . . . . . . . . . 88 3.8.3 Analytical solution and example calculation . . . . . . . . . . . 89 3.8.4 GMD calculation - summary . . . . . . . . . . . . . . . . . . . 90 3.9 GMD point outside line with its perpendicular on line end . . . . . . . 90 3.9.1 GMD calculation - radius right at the element . . . . . . . . . . 91 ix <?page no="14"?> 3.9.2 GMD calculation - radius left at the element . . . . . . . . . . . 94 3.9.3 GMD calculation - analytical solution . . . . . . . . . . . . . . 97 3.9.4 GMD calculation - summary and evaluation . . . . . . . . . . . 
98 3.10 GMD point outside line with its perpendicular inside line . . . . . . . . 98 3.10.1 GMD calculation - radius right at the element . . . . . . . . . . 99 3.10.2 GMD calculation - radius left at the element . . . . . . . . . . . 101 3.10.3 GMD calculation - superposition . . . . . . . . . . . . . . . . . 103 3.10.4 GMD calculation - analytical solution . . . . . . . . . . . . . . 104 3.10.5 GMD calculation - Summary and evaluation . . . . . . . . . . . 105 4 LCR parallel and series resonant circuit 107 4.1 Resonant circuits, impedances and resonances . . . . . . . . . . . . . . 107 4.2 Natural frequency - error calculation . . . . . . . . . . . . . . . . . . . 111 4.3 Voltage profiles LCR series resonant circuit with frequency variation . . 112 4.3.1 Voltage characteristics across the inductance . . . . . . . . . . . 113 4.3.2 Voltage characteristics across inductance and resistance . . . . . 115 4.3.3 Voltage characteristics across the resistor . . . . . . . . . . . . . 116 4.3.4 Voltage characteristics across capacitance . . . . . . . . . . . . . 118 4.4 Damped forced LCR series resonant circuit . . . . . . . . . . . . . . . . 119 4.5 Damped free LCR series resonant circuit . . . . . . . . . . . . . . . . . 123 4.6 Undamped free LC resonant circuit . . . . . . . . . . . . . . . . . . . . 125 4.7 Damped forced LCR parallel resonant circuit . . . . . . . . . . . . . . . 126 4.8 Damped free LCR parallel resonant circuit . . . . . . . . . . . . . . . . 131 4.9 Undamped free LC resonant circuit . . . . . . . . . . . . . . . . . . . . 135 5 Current displacement in conductor 137 5.1 Current displacement in the conductor - modelling . . . . . . . . . . . 138 5.2 Current displacement in the conductor - calculation result . . . . . . . 142 5.3 Current displacement in the conductor - simulation result . . . . . . . 143 5.4 Current displacement in conductors - summary . . . . . . . . . . . . . 145 6 Bessel equation and Bessel function 147 6.1 On the person Wilhelm Friedrich Bessel . . . . . . . . . . . . . . . . . 148 6.2 Bessel equation and solutions . . . . . . . . . . . . . . . . . . . . . . . 148 6.3 Bessel equation of the field diffusion equation . . . . . . . . . . . . . . 150 6.4 Bessel function for calculating the field distribution in a capacitor . . . 153 x <?page no="15"?> 6.4.1 Model arrangement . . . . . . . . . . . . . . . . . . . . . . . . . 153 6.4.2 Derivation of the Bessel function . . . . . . . . . . . . . . . . . 153 6.5 Bessel function for calculating the flux density within a coil . . . . . . . 157 6.5.1 Model arrangement . . . . . . . . . . . . . . . . . . . . . . . . . 157 6.5.2 Derivation of the Bessel function . . . . . . . . . . . . . . . . . 157 6.6 Bessel function from general form of Bessel equation . . . . . . . . . . . 160 7 Solution of differential equations using Green’s functions 165 7.1 About George Green . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 7.2 Green’s integral theorems . . . . . . . . . . . . . . . . . . . . . . . . . 168 7.3 PDE - arrangements of evaluation points and integration points . . . . 169 7.4 PDE - preparation for solution by Green’s - differential form . . . . . . 172 7.5 PDE - preparation for solution by Green’s - integral form . . . . . . . 174 7.5.1 Converting the PDE according to the variable to be solved . . . 174 7.5.2 Homogeneous boundary conditions . . . . . . . . . . . . . . . . 175 7.5.3 Inhomogeneous boundary conditions . . . . . . . . . . . . . . . 176 7.5.4 Dirichlet boundary conditions . . . . . . . . . . . 
. . . . . . . . 176 7.5.5 Neumann boundary conditions . . . . . . . . . . . . . . . . . . . 177 7.6 PDE - solution of Poisson’s DGL . . . . . . . . . . . . . . . . . . . . . 177 7.6.1 Exercise description . . . . . . . . . . . . . . . . . . . . . . . . . 178 7.6.2 Solution path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 7.7 PDE - solution of Laplace’s DGL . . . . . . . . . . . . . . . . . . . . . 181 7.7.1 Exercise description . . . . . . . . . . . . . . . . . . . . . . . . . 181 7.7.2 Solution path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 7.8 ODE - Preparation for the solution with the Green’s function . . . . . 183 7.8.1 Homogeneous boundary conditions . . . . . . . . . . . . . . . . 185 7.8.2 Inhomogeneous boundary conditions . . . . . . . . . . . . . . . 185 7.8.3 Continuity and discontinuity conditions . . . . . . . . . . . . . . 186 7.9 ODE - solution of d 2 u/ dx 2 = − 1 (I) . . . . . . . . . . . . . . . . . . . 187 7.9.1 Exercise description . . . . . . . . . . . . . . . . . . . . . . . . . 188 7.9.2 Solution I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 7.9.3 Solution II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192 7.10 ODE - solution of d 2 y/ dx 2 + y = cosec x . . . . . . . . . . . . . . . . . 195 7.10.1 Exercise description . . . . . . . . . . . . . . . . . . . . . . . . . 195 7.10.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 xi <?page no="16"?> 7.11 ODE - solution of d 2 y/ dx 2 + y = f(x) . . . . . . . . . . . . . . . . . . 197 7.11.1 Exercise description . . . . . . . . . . . . . . . . . . . . . . . . . 198 7.11.2 Solution path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 7.12 ODE - solution of d 2 u/ dx 2 = − 1 (II) . . . . . . . . . . . . . . . . . . . 200 7.12.1 Exercise description . . . . . . . . . . . . . . . . . . . . . . . . . 200 7.12.2 Solution path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 7.13 ODE - solution of d 2 u/ dx 2 = x . . . . . . . . . . . . . . . . . . . . . . 203 7.13.1 Exercise description . . . . . . . . . . . . . . . . . . . . . . . . . 203 7.13.2 Solution path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204 8 Method of Lagrangian multipliers 209 8.1 Definition of the Lagrange multiplier method . . . . . . . . . . . . . . . 209 8.1.1 Properties of the method . . . . . . . . . . . . . . . . . . . . . . 209 8.1.2 Mathematical optimisation . . . . . . . . . . . . . . . . . . . . . 210 8.1.3 Calculus of variations . . . . . . . . . . . . . . . . . . . . . . . . 212 8.2 Derivation of the Lagrange multiplier method . . . . . . . . . . . . . . 212 8.3 Application of the method . . . . . . . . . . . . . . . . . . . . . . . . . 214 8.4 Maths example - extreme value problem with one constraint . . . . . . 214 8.5 Maths example - extreme value problem with two constraints . . . . . 216 8.6 Application example - cube inscribed in a sphere . . . . . . . . . . . . 218 8.6.1 Extreme value problem with one constraint . . . . . . . . . . . . 218 8.6.2 Solution with Lagrange multiplier method . . . . . . . . . . . . 219 8.6.3 Solution with elimination method . . . . . . . . . . . . . . . . . 220 8.7 Application example - dimensioning of a coil winding . . . . . . . . . . 222 8.7.1 Extreme value problem . . . . . . . . . . . . . . . . . . . . . . . 222 8.7.2 Solution procedure . . . . . . . . . . . . . . . . . . . . . . . . . 
223 9 Differential equations and finite elements 227 9.1 Physics examples for differential equations of 1 ′ th order . . . . . . . . . 227 9.2 Physics examples for 2 ′ th order differential equations . . . . . . . . . . 228 9.3 Finite elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 10 From the Method of Moments to the Galerkin Method 235 10.1 Basic principle of the method of moments (MOM) . . . . . . . . . . . . 235 10.2 Remarks on the method of moments . . . . . . . . . . . . . . . . . . . 237 10.2.1 Matrix (l jk ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237 xii <?page no="17"?> 10.2.2 Choosing the basis and weighting functions φ n and w k . . . . . 238 10.3 About Boris Galerkin . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 10.4 Galerkin’s idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239 11 Traditional Galerkin Method 241 12 Galerkin method - solution of du/ dx = u 243 12.1 Choosing the base and weighting function . . . . . . . . . . . . . . . . 243 12.2 Weak formulation of the differential equation . . . . . . . . . . . . . . . 244 12.3 Transforming the system of equations into a matrix equation . . . . . . 245 12.4 Solving the linear equation system . . . . . . . . . . . . . . . . . . . . . 245 13 Galerkin method - solution of − d 2 u/ dx 2 = 4x 2 + 1 249 13.1 Choosing the base and weighting function . . . . . . . . . . . . . . . . 249 13.2 Formulation of the weak form with basis and weighting function . . . . 250 13.3 Transforming the system of equations into a matrix equation . . . . . . 250 13.4 Solving the linear equation system . . . . . . . . . . . . . . . . . . . . . 252 14 Galerkin method - solution of d 2 u/ dx 2 = − 1 (I) 255 14.1 Choosing the base and weighting function . . . . . . . . . . . . . . . . 256 14.2 Weak formulation of the differential equation . . . . . . . . . . . . . . . 256 14.3 Transforming the system of equations into a matrix equation . . . . . . 257 14.4 Solving the linear equation system . . . . . . . . . . . . . . . . . . . . . 257 15 Galerkin method - solution of d 2 u/ dx 2 = − 1 (II) 259 15.1 Choosing the base and weighting function . . . . . . . . . . . . . . . . 259 15.2 Weak formulation of the differential equation . . . . . . . . . . . . . . . 260 15.3 Transforming the system of equations into a matrix equation . . . . . . 261 15.4 Solving the linear equation system . . . . . . . . . . . . . . . . . . . . . 262 16 Galerkin method - Ampere’s law 263 16.1 Galerkin method - Ampere’s law for the conductor inside . . . . . . . . 265 16.1.1 Weak formulation of the differential equation . . . . . . . . . . . 265 16.1.2 Transforming the system of equations into a matrix equation . . 266 16.1.3 Solving the linear equation system . . . . . . . . . . . . . . . . 267 16.2 Galerkin method - Ampere’s law for the conductor outside . . . . . . . 268 16.2.1 Weak formulation of the differential equation . . . . . . . . . . . 269 xiii <?page no="18"?> 16.2.2 Transforming the system of equations into a matrix equation . . 269 16.2.3 Solving the linear equation system . . . . . . . . . . . . . . . . 270 16.3 Comparison of FEM with Galerkin results . . . . . . . . . . . . . . . . 271 17 Galerkin-FEM 273 17.1 Galerkin FEM - What is being solved? . . . . . . . . . . . . . . . . . . 273 17.2 Galerkin-FEM - Procedure for the solution . . . . . . . . . . . . . . . . 274 18 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (I) 277 18.1 Weak formulation of the differential equation . . . . . . . . . . 
. . . . . 278 18.2 Discretisation of the domain Ω to be solved . . . . . . . . . . . . . . . 279 18.3 Choosing the base and weighting function . . . . . . . . . . . . . . . . 279 18.4 Formulation of the weak form with triangular functions φ(x) . . . . . . 281 18.5 Transforming the system of equations into a matrix equation . . . . . . 282 18.6 Solving the linear equation system . . . . . . . . . . . . . . . . . . . . . 285 19 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (II) 289 19.1 Weak formulation of the differential equation . . . . . . . . . . . . . . . 290 19.2 Discretisation of the domain Ω to be solved . . . . . . . . . . . . . . . . 291 19.3 Choosing the base and weighting function . . . . . . . . . . . . . . . . 291 19.4 Formulation of the weak form with triangular functions φ(x) . . . . . . 291 19.5 Transforming the system of equations into a matrix equation . . . . . . 291 19.6 Solving the linear equation system . . . . . . . . . . . . . . . . . . . . . 292 20 Galerkin-FEM - Electrostatic field calculation 295 20.1 Weak formulation of the differential equation . . . . . . . . . . . . . . . 295 20.2 Discretisation of the domain Ω to be solved . . . . . . . . . . . . . . . . 296 20.3 Choosing the base and weighting function . . . . . . . . . . . . . . . . 296 20.4 Formulation of the weak form with triangular functions φ(x) . . . . . . 296 20.5 Transforming the system of equations into a matrix equation . . . . . . 298 20.6 Solving the linear equation system . . . . . . . . . . . . . . . . . . . . . 300 21 Galerkin-FEM - heat diffusion 303 21.1 Weak formulation of the differential equation . . . . . . . . . . . . . . . 303 21.2 Discretisation of the domain Ω to be solved . . . . . . . . . . . . . . . 305 21.3 Choosing the base and weighting function . . . . . . . . . . . . . . . . 305 xiv <?page no="19"?> 21.4 Formulation of the weak form with triangular functions φ(x) . . . . . . 305 21.5 Transforming the system of equations into a matrix equation . . . . . . 306 21.6 Solving the linear equation system . . . . . . . . . . . . . . . . . . . . . 307 21.7 Diffusion process completed . . . . . . . . . . . . . . . . . . . . . . . . 310 22 Galerkin-FEM - magnetic field diffusion 313 22.1 Weak formulation of the differential equation . . . . . . . . . . . . . . . 313 22.2 Discretisation of the domain Ω to be solved . . . . . . . . . . . . . . . 315 22.3 Choosing the base and weighting function . . . . . . . . . . . . . . . . 315 22.4 Formulation of the weak form with triangular functions φ(x) . . . . . . 315 22.5 Transforming the system of equations into a matrix equation . . . . . . 316 22.6 Solving the linear equation system . . . . . . . . . . . . . . . . . . . . . 317 23 Introduction to the finite difference method 323 23.1 Numerical notation of the linear field diffusion equation . . . . . . . . . 323 23.2 On the persons Crank and Nicolson . . . . . . . . . . . . . . . . . . . . 324 23.3 Solution with implicit method according to Crank-Nicolson . . . . . . . 324 23.3.1 Transforming the diffusion equation into a matrix equation . . . 325 23.3.2 Solving the matrix equation . . . . . . . . . . . . . . . . . . . . 326 23.3.3 Application example . . . . . . . . . . . . . . . . . . . . . . . . 329 23.4 Solution with explicit method according to Crank-Nicolson . . . . . . . 332 23.4.1 Transforming the diffusion equation into a matrix equation . . . 332 23.4.2 Solving the matrix equation . . . . . . . . . . . . . . . . . . . . 333 23.4.3 Application example . . . . . . . . . . . . . . . . 
. . . . . . . . 334 24 Applications of FEM to product development 341 24.1 Analysis of a proportional magnet . . . . . . . . . . . . . . . . . . . . . 341 24.1.1 Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342 24.1.2 Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 24.1.3 Postprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 24.2 Synthesis of a planar asynchronous disc motor . . . . . . . . . . . . . . 345 24.2.1 Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 24.2.2 Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 24.2.3 Postprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 24.2.4 Prototype of the planar asynchronous motor . . . . . . . . . . . 346 xv <?page no="20"?> 25 Virtual product design 349 25.1 Coupling between FEM and optimisation tools . . . . . . . . . . . . . . 349 25.2 Multi-objective optimisation - Pareto optimisation . . . . . . . . . . . 350 25.3 Optimisation example electromagnet . . . . . . . . . . . . . . . . . . . 351 25.3.1 Monte Carlo method . . . . . . . . . . . . . . . . . . . . . . . . 352 25.3.2 Particle swarm method . . . . . . . . . . . . . . . . . . . . . . . 354 25.3.3 Evolutionary method . . . . . . . . . . . . . . . . . . . . . . . . 354 25.3.4 Discussion of the results . . . . . . . . . . . . . . . . . . . . . . 355 26 Eigenvalue problems 357 26.1 Eigenvalue problem - introduction . . . . . . . . . . . . . . . . . . . . 357 26.2 Eigenvalue problem - method of moments . . . . . . . . . . . . . . . . 358 26.3 Eigenvalue problem - canonical form . . . . . . . . . . . . . . . . . . . 359 27 Eigenvalue problem-MOM - solution of − d 2 u/ dx 2 = λu 361 27.1 Exercise description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361 27.2 Solution path and solution . . . . . . . . . . . . . . . . . . . . . . . . . 361 27.3 Solution for 1 ′ th order . . . . . . . . . . . . . . . . . . . . . . . . . . . 362 27.4 Solution for 2 ′ th order . . . . . . . . . . . . . . . . . . . . . . . . . . . 366 28 Common features of methods to solve differential equations 369 28.1 Method of Moments (MOM) . . . . . . . . . . . . . . . . . . . . . . . . 369 28.2 Integral transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . 371 28.3 Green’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372 29 Things worth knowing about modelling 375 29.1 Categories of modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . 375 29.2 Analytics versus Numerics . . . . . . . . . . . . . . . . . . . . . . . . . 376 30 Useful standards 379 Bibliography 383 A Appendix 389 A.1 Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 A.2 Integrals for chap. 3.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 A.3 Integrals for chap. 3.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392 xvi <?page no="21"?> A.4 Integrals for chap. 3.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394 A.5 MATLAB-Code - Heat diffusion script . . . . . . . . . . . . . . . . . . 397 A.6 MATLAB code - magnetic field diffusion script . . . . . . . . . . . . . 400 A.7 Tool comparison - MATLAB vs. COMSOL . . . . . . . . . . . . . . . 407 B Campus K¨ unzelsau - Inside 409 Index 411 xvii <?page no="23"?> Symbols and abbreviations Symbol Meaning Unit A coefficient, matrix A area m 2 B coefficient, matrix B, � B magnet. flux density, vector of magnet. 
flux density V s/ m 2 B h interpolation, approach function C coefficient, matrix C capacity As/ V C heat capacity J/ K D coefficient, Charge D charge density As/ m 2 D discriminant E coefficient, matrix E, � E electric field strength, electric field strength V / m E length-related electric field strength V / m 2 F coefficient, function F force N, kgm/ s 2 G Green’s function G coefficient H, � H magnet. field strength, vector of the magnet A/ m H Φ field interpolation function, approach function I current A J, � J electr. current density, vector of electr. current density A/ m 2 <?page no="24"?> xx Symbol Meaning Unit K constant L inductivity V s/ A M matrix N number of nodes, line elements, running variable, number of turns P power W P polynomial function, evaluation point P ′ source point, integration point Q charge As R residuum R radius m R resistance Ω S matrix S P vertex U voltage V V volume m 3 W Wronski determinant X reactance, reactance Ω Z, | Z | impedance, magnitude of the impedance Ω Z impedance (complex impedance) Ω a coefficient a 0 acceleration m/ s 2 b damping coefficient kg/ s c constant c spring constant N/ m c speed of light m/ s c specific heat capacity J/ (kgK) d diameter m e e-function �e unit vector f auxiliary variable, function, matrix, column vector g auxiliary variable, function, matrix h element length, distance, height m <?page no="25"?> xxi Symbol Meaning Unit i control variable i current A j control variable j imaginary unit √− 1 k, k constant, complex constant l length m l matrix m control variable m mass kg n normal, number of partial intervals p impulse kg m/ s p variable, function r radius m s constant s distance, length m t time s u function, interpolation, approach function u voltage V ˆ u 0 voltage amplitude V v function, interpolation, approach function v speed m/ s w weight, weighting, test, shape function x coordinate, path m y coordinate, path m y function z coordinate, path m Γ edge of the FEM area Δ delta, differential Θ magnetomotive force A Φ magnetic flux V s Ψ chained magnetic flux V s Ω area, sub-area, element m 2 <?page no="26"?> xxii Symbol Meaning Unit α coefficient β coefficient γ Coefficient, boundary value δ decay constant ε permittivity As/ (V m) ε 0 permittivity of the vacuum [8, 8542 10 − 12 As/ (V m)] As/ (V m) υ temperature ◦ C κ specific electrical conductivity m/ (Ωmm 2 ) λ thermal conductivity W/ (mK) λ eigenvalue, Lagrange multiplier μ permeability V s/ (Am) μ 0 permeability of the vacuum [4π10 − 7 V s/ (Am)] V s/ (Am) ρ density kg/ m 3 ρ volume charge density As/ m 3 τ time constant s υ h approach, test function ϕ potential V ϕ interpolation, approach function, angle ϕ angle rad φ development, base, triangular function ω angular velocity, angular frequency 1/ s ΔA, ΔA � differential surface elements m 2 Δx, Δy differential line elements m dA infinitesimal surface element m 2 dx, dy infinitesimal line elements m L linear operator M linear operator O zero operator I identity operator ∇ Nabla operator Δ Delta operator <?page no="27"?> Chapter 1 Required mathematical basics ”Last time I asked: What does mathematics mean to you? , and some people answered: The manipulation of numbers, the manipulation of structures. And if I had asked what music means to you, would you have answered: The manipulation of notes? “ (Serge Lang, French-American mathematician, 1927-2005) from ”The beauty of doing Mathematics“. Serge Lang became known for his work on algebraic geometry and number theory and as the author of many textbooks. 
The basics required for the numerical solution of differential equations are compiled in this chapter. They essentially include matrices, definitions and classifications of differential equations, initial and boundary value problems, and vector operators. Particularly recommended literature for this is [4], [60] and [67].

1.1 Logarithm

The logarithm of x (numerus, logarithmand) to the base a is the real number b (exponent) for which

\[ \log_a x = b \iff a^b = x. \]

The logarithm to the base 10 is called the decadic or Briggsian logarithm,

\[ \log_{10} x = \lg x, \]

and it obeys

\[ \log (x \cdot 10^{\alpha}) = \alpha + \log x. \]

Examples of this are
• Example 1: \( \log (5 \cdot 10^1) = 1 + \log 5 \approx 1.699 \)
• Example 2: \( \log (5 \cdot 10^2) = 2 + \log 5 \approx 2.699. \)

Furthermore,

\[ \log a = \alpha + \log m \]

with the numerus a, the mantissa m and the index \(\alpha\) of the logarithm, which is equal to the exponent of the place value of the first significant digit of the numerus. See also [1], p. 56.

In summary, some more useful logarithmic laws are
• Multiplication of the independent parameters: \( \log_a (u \cdot v) = \log_a u + \log_a v \)
• Division of the independent parameters: \( \log_a (u/v) = \log_a u - \log_a v \)
• Exponentiation of the independent variable: \( \log_a u^v = v \log_a u \)
• Taking roots of the independent variable: \( \log_a \sqrt[v]{w} = \log_a w^{1/v} = \frac{1}{v} \log_a w. \)

1.2 Matrices

The matrix notation condenses calculations with functions and thus improves the overview. For this purpose, a vector operator collects derivatives and marks them with a single symbol (Nabla or Laplace operator). The matrix notation (matrix equations) enables the numerical solution of linear systems of equations by means of the solution methods known in the literature. Therefore, matrices receive special attention. Selected matrix operations are presented here. These include the necessary matrix calculation rules, inversion, multiplication of a matrix, matrix structures as well as rules for calculating determinants, and much more. Recommended literature is [60], p. 268 ff. and [29], p. 12 ff. (Random matrices - new universal laws).

1.2.1 Arithmetic operations with matrices

Table 1.1 summarises the most important algebraic axioms.

Table 1.1: Summary of the most important calculation rules
Associative law    A (B C) = (A B) C
Distributive law   A (B + C) = A B + A C
                   (A + B) C = A C + B C
Transpose          (A B)^T = B^T A^T

Note that matrix multiplication is not commutative, i.e. in general \( A \cdot B \neq B \cdot A \).

1.2.2 Addition and subtraction of two matrices

Two matrices A and B of the same type are added or subtracted by adding or subtracting their corresponding elements,

\[ A \pm B = (a_{ik} \pm b_{ik}) = C, \]

with i = 1, 2, 3, ..., m and k = 1, 2, 3, ..., n, and C the sum or difference matrix. Addition and subtraction are only defined for matrices of the same type (m, n).

1.2.3 Multiplication of a matrix with a scalar

A matrix A is multiplied by the scalar \(\lambda\) by multiplying each individual matrix element by \(\lambda\),

\[ \lambda A = \lambda \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1m} \\ a_{21} & a_{22} & \dots & \\ \dots & \dots & \dots & \dots \\ a_{n1} & a_{n2} & \dots & a_{nm} \end{pmatrix} = \begin{pmatrix} \lambda a_{11} & \lambda a_{12} & \dots & \lambda a_{1m} \\ \lambda a_{21} & \lambda a_{22} & \dots & \\ \dots & \dots & \dots & \dots \\ \lambda a_{n1} & \lambda a_{n2} & \dots & \lambda a_{nm} \end{pmatrix}. \]

In scalar multiplication, the associative and distributive laws apply, since the cases \(\lambda = \alpha \cdot \beta\) or \(\lambda = \alpha \pm \beta\) are equally valid.
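The calculation rules from Table 1.1, including the non-commutativity of the matrix product, can be verified quickly in MATLAB, the tool used for the scripts in Appendix A. The following minimal sketch uses two arbitrarily chosen small matrices; the concrete values are illustrative only and are not taken from the book.

% Check of the calculation rules from Table 1.1 with small example matrices
A = [1 2; 3 4];
B = [0 1; 1 0];
C = [2 0; 0 3];

disp(A*(B*C) - (A*B)*C)        % associative law: zero matrix
disp(A*(B + C) - (A*B + A*C))  % distributive law: zero matrix
disp((A*B)' - B'*A')           % transpose of a product: zero matrix
disp(A*B - B*A)                % generally not the zero matrix -> not commutative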
1.2.4 Square matrix

Square matrices A have the same number of rows and columns, i.e. m = n, with

\[ A = A_{n,n} = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & \\ \dots & \dots & \dots & \dots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix}. \]

Examples of square matrices are the diagonal matrices, the symmetric matrices, normal matrices, Hermitian matrices and the identity matrices.

1.2.5 Identity matrix

The identity matrix or unit matrix E is a diagonal matrix in which all elements outside the main diagonal vanish,

\[ E = \begin{pmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \dots & \dots & \dots & \dots \\ 0 & 0 & \dots & 1 \end{pmatrix}, \qquad a_{ii} = 1. \]

The identity matrix is a square matrix. Despite its simplicity, it is significant: for example, the result of multiplying the identity matrix with a matrix is again that matrix itself.

1.2.6 Determinant

The determinant allows matrices to be examined for "patterns", for example to investigate solutions of differential equations (see the Wronski determinant). Determinants are calculated from square matrices. The determinant of a 2-row square matrix A = (a_{ik}) is the real number

\[ \det A = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{12} a_{21}. \]

A determinant is multiplied by a scalar \(\lambda\) by multiplying the elements of a single row by the scalar \(\lambda\),

\[ \lambda \det A = \lambda \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \begin{vmatrix} \lambda a_{11} & \lambda a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \lambda a_{11} a_{22} - \lambda a_{12} a_{21}. \]

The determinant of a square (3,3)-matrix A = (a_{ik}) is understood to be the number

\[ \det A = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}. \]

The 3-row determinant is evaluated according to the rule of Sarrus,

\[ \det A = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33}. \]

A determinant takes the value zero if
• all elements are equal to zero,
• two rows or columns are equal to each other,
• two rows or columns are proportional to each other,
• one row or column can be represented as a linear combination of the remaining rows or columns.

An example of this is

\[ \det D = \begin{vmatrix} 16 & 3 & 2 & 13 \\ 5 & 10 & 11 & 8 \\ 9 & 6 & 7 & 12 \\ 4 & 15 & 14 & 1 \end{vmatrix} = 0, \]

the determinant of Dürer's square from his copper engraving MELENCOLIA I.

1.2.7 Subdeterminant or minor

If m arbitrary rows and m arbitrary columns are deleted from an n-row determinant, the result is an (n - m)-row determinant, which is called a subdeterminant of (n - m)-th order or minor. An example of this is the determinant of A, whose minor M_{1,2} is sought. It is obtained by deleting the first row and the second column:

\[ A = \begin{vmatrix} 2 & 0 & 1 \\ 3 & 2 & -4 \\ 1 & 0 & 3 \end{vmatrix}, \qquad M_{1,2} = \begin{vmatrix} 3 & -4 \\ 1 & 3 \end{vmatrix} = 3 \cdot 3 - (-4 \cdot 1) = 13. \]

Subdeterminants are required, for example, to calculate the inverse matrix and form the preliminary stage for calculating the adjuncts.

1.2.8 Adjuncts or algebraic complement

The adjugate A_adj (also called the adjuncts or algebraic complement) is formed from the subdeterminants of the matrix A according to the procedure shown in fig. 1.1: each element is replaced by its subdeterminant, multiplied by the sign \((-1)^{i+k}\) for the i-th row and k-th column, and the result is transposed, which leads to the adjugate A_adj of the matrix A.

Figure 1.1: Procedure for the development of the adjuncts

An example of this is

\[ A = \begin{pmatrix} 2 & 0 & 1 \\ 3 & 2 & -4 \\ 1 & 0 & 3 \end{pmatrix}; \qquad A_{adj} = \begin{pmatrix} 6 & 0 & -2 \\ -13 & 5 & 11 \\ -2 & 0 & 4 \end{pmatrix}. \]

The adjugate must not be confused with the adjoint matrix. The Latin term "adjunct" denotes the subdeterminant assigned to an element of a determinant; "to adjoin" means to assign, to attach. The Latin word "complement" means addition. The adjuncts can be used to calculate the inverse of a square matrix.
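The adjugate from the example above can be reproduced in a few lines of MATLAB. The sketch builds the subdeterminants and signs exactly as described in section 1.2.8 (delete row i and column k, multiply by (-1)^(i+k), transpose) and then checks the property A · A_adj = det(A) · E; the numbers in the comments are those of the example.

% Adjugate (adjuncts) of the 3x3 example matrix via subdeterminants
A = [2 0 1; 3 2 -4; 1 0 3];
n = size(A, 1);
Aadj = zeros(n);
for i = 1:n
    for k = 1:n
        M = A;
        M(i, :) = [];                       % delete i-th row
        M(:, k) = [];                       % delete k-th column
        Aadj(k, i) = (-1)^(i + k) * det(M); % sign and transposed position
    end
end
disp(Aadj)                      % [6 0 -2; -13 5 11; -2 0 4]
disp(A*Aadj - det(A)*eye(n))    % zero matrix: A * A_adj = det(A) * E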
1.2.9 Inverse matrix

The inverse matrix A^{-1} is calculated using the adjuncts,

\[ A^{-1} = \frac{1}{\det A} A_{adj}. \]

Furthermore,

\[ A A^{-1} = A^{-1} A = E. \]

An example of this is

\[ A = \begin{pmatrix} 7 & 2 & 3 \\ 1 & 5 & 4 \\ 9 & 3 & 7 \end{pmatrix}, \qquad A^{-1} = \begin{pmatrix} 0.2473 & -0.0538 & -0.0753 \\ 0.3118 & 0.2366 & -0.2688 \\ -0.4516 & -0.0323 & 0.3548 \end{pmatrix} \]

with det A = 93 and

\[ A A^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = E. \]

The inversion of a matrix enables, for example, the solution of linear systems of equations.

1.2.10 Transposed of a matrix

The transposed matrix A^T of a matrix A is obtained by swapping its rows and columns. The following relation holds between the elements: \( (A^T)_{ik} = a_{ki} \). If the matrix A is of type (m, n), the transposed matrix A^T is of type (n, m). By transposing, for example, a row vector changes into a column vector and vice versa. An example for an (m, m)-matrix is

\[ A = \begin{pmatrix} 7 & 2 & 3 \\ 1 & 5 & 4 \\ 9 & 3 & 7 \end{pmatrix}, \qquad A^{T} = \begin{pmatrix} 7 & 1 & 9 \\ 2 & 5 & 3 \\ 3 & 4 & 7 \end{pmatrix}. \]

Transposing a matrix is, for example, part of calculating the adjuncts, and it is applied when calculating eigenvalues.

1.2.11 Complex conjugate matrix

The complex conjugate of the number z = a + bi is z* = a - bi. The complex conjugate matrix A* of A is obtained by replacing each element of the matrix by its complex conjugate. An example of this is

\[ A = \begin{pmatrix} 1 & 2 & 3i \\ 1+2i & 5 & -3i \\ 9 & 0 & 7-4i \end{pmatrix}, \qquad A^{*} = \begin{pmatrix} 1 & 2 & -3i \\ 1-2i & 5 & 3i \\ 9 & 0 & 7+4i \end{pmatrix}. \]

Swapping the sign of the imaginary unit corresponds to mirroring the imaginary part at the real axis.

1.2.12 Hermite conjugate matrix

The Hermitian conjugate (adjoint) matrix of a matrix A of type (m, n) with complex elements is the transpose of its complex conjugate, or equivalently the complex conjugate of its transpose,

\[ A^{H} = (A^{*})^{T} = (A^{T})^{*}. \]

An example of this is

\[ A^{*} = \begin{pmatrix} 1 & 2 & -3i \\ 1-2i & 5 & 3i \\ 9 & 0 & 7+4i \end{pmatrix}, \qquad A^{H} = \begin{pmatrix} 1 & 1-2i & 9 \\ 2 & 5 & 0 \\ -3i & 3i & 7+4i \end{pmatrix} = (A^{*})^{T}. \]

If the matrix is real (i.e. it contains only real elements), then A* = A. The Hermitian conjugate (adjoint) matrix must not be confused with the adjuncts of section 1.2.8.

1.2.13 Hermitian matrix - self-adjoint matrix

A Hermitian matrix A is a square matrix with complex elements that is equal to its Hermitian conjugate,

\[ A = (A^{*})^{T} = A^{H}. \]

For real elements, the notions of symmetric and Hermitian matrices coincide. An example is

\[ A = (A^{*})^{T} = \begin{pmatrix} 3 & 2+i \\ 2-i & 1 \end{pmatrix} = A^{H}. \]

Hermitian matrices are used, for example, in systems of linear equations. The matrix is named after Charles Hermite, a French mathematician (1822-1901).

1.2.14 Orthogonal matrix

A square matrix A is said to be orthogonal if its transpose is equal to its inverse,

\[ A^{T} = A^{-1}, \]

or, equivalently, if the product of the transposed orthogonal matrix with the orthogonal matrix is the identity matrix,

\[ A^{T} A = E. \]

An example of this is

\[ A^{T} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \]

so that

\[ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \cdot \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = E. \]

Furthermore,
• det(A) = 1: A is a rotation matrix,
• det(A) = -1: A is a rotation-reflection (mirror) matrix.

Orthogonal matrices are used in systems of linear equations and in matrix decompositions.
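As a small illustration of section 1.2.14 (an assumed example, not one from the book), the familiar 2D rotation matrix can be checked for orthogonality in MATLAB: for any angle its transpose equals its inverse, and its determinant is 1.

% Orthogonality check for a 2D rotation matrix (illustrative angle)
theta = pi/6;
R = [cos(theta) -sin(theta); sin(theta) cos(theta)];

disp(R'*R)          % identity matrix E
disp(R' - inv(R))   % zero matrix (up to rounding): transpose equals inverse
disp(det(R))        % 1 -> rotation matrix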
1.2.15 Unitary matrix

A square matrix A with complex elements is called unitary if

\[ (A^{*})^{T} = A^{-1} \qquad \text{or} \qquad A (A^{*})^{T} = (A^{*})^{T} A = E. \]

The transpose of its complex conjugate thus corresponds to its inverse. In the real case, the terms unitary and orthogonal coincide. An example of this is

\[ A = \begin{pmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & 0 \\ -\tfrac{1}{\sqrt{2}} i & \tfrac{1}{\sqrt{2}} i & 0 \\ 0 & 0 & i \end{pmatrix}, \qquad A^{*} = \begin{pmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & 0 \\ \tfrac{1}{\sqrt{2}} i & -\tfrac{1}{\sqrt{2}} i & 0 \\ 0 & 0 & -i \end{pmatrix}, \qquad A^{-1} = (A^{*})^{T} = \begin{pmatrix} \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} i & 0 \\ \tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} i & 0 \\ 0 & 0 & -i \end{pmatrix}. \]

Unitary matrices are used in matrix decompositions.

1.2.16 Normal matrix

A square matrix is called a normal matrix if it satisfies the equation

\[ A A^{T} = A^{T} A \]

(for complex matrices, the Hermitian conjugate A^H takes the place of A^T). Hermitian, unitary, symmetric and orthogonal matrices are examples of normal matrices. An example of a normal matrix is

\[ A = \begin{pmatrix} i & 0 \\ 0 & 3-5i \end{pmatrix}, \qquad A^{T} = \begin{pmatrix} i & 0 \\ 0 & 3-5i \end{pmatrix}, \qquad A^{T} A = A A^{T} = \begin{pmatrix} -1 & 0 \\ 0 & -16-30i \end{pmatrix}. \]

1.2.17 Norm of a matrix

Given the matrix A with

\[ A = \begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}, \]

the absolute sum of the elements of row 1 is 3, the absolute sum of the elements of row 2 is 1, and the row-sum norm is the maximum of these absolute row sums,

\[ \Vert A \Vert_{\infty} = 3. \]

Matrix norms are often used in linear algebra and numerical mathematics. Furthermore, they are used to investigate the convergence of power series of matrices.

1.2.18 Conditioned matrix equation and condition number

When solving a matrix equation, numerical problems may arise which need to be evaluated. Given is the matrix equation

\[ \begin{pmatrix} 400 & -201 \\ -800 & 401 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 200 \\ -200 \end{pmatrix} \]

with the solution x_1 = -100 and x_2 = -200. The coefficients form the input I, the solution forms the result D. Now, if
• a small change of the input I,

\[ \begin{pmatrix} 401 & -201 \\ -800 & 401 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 200 \\ -200 \end{pmatrix}, \]

causes a large change of the result D, in this case x_1 = 40000 and x_2 = 79800, the system is said to be ill-conditioned, which is the case here;
• a small change of the input I causes only a small change of the result D, the system is said to be well-conditioned.

The evaluation of a matrix A is done with its condition number cond(A), which also involves its inverse. Here
• cond(A) ≈ 1: well-conditioned matrix,
• cond(A) ≫ 1: ill-conditioned matrix.

Given are the matrices A and A^{-1} with

\[ A = \begin{pmatrix} 400 & -201 \\ -800 & 401 \end{pmatrix}; \qquad A^{-1} = \begin{pmatrix} -1.0025 & -0.5025 \\ -2 & -1 \end{pmatrix}. \]

The condition number cond(A) of the matrix A is calculated from the maximum absolute row sums of A and A^{-1},

\[ \operatorname{cond}(A) = \Vert A \Vert_{\infty} \cdot \Vert A^{-1} \Vert_{\infty} = (\vert -800 \vert + \vert 401 \vert) \cdot (\vert -2 \vert + \vert -1 \vert) = 1201 \cdot 3 = 3603 \gg 1. \]

The matrix is therefore considered ill-conditioned. Furthermore,

\[ \log(\operatorname{cond}(A)) = \log(3603) \approx 3.6 \]

estimates the number of decimal digits of precision that are lost. There is no sharp definition here, so this rule of thumb should be used with care.
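The ill-conditioned system from section 1.2.18 can be reproduced directly in MATLAB. The sketch below solves the original and the slightly perturbed system and evaluates the row-sum norms and the condition number; the numbers in the comments are the values quoted above.

% Conditioning of the matrix equation from section 1.2.18
A  = [400 -201; -800 401];
b  = [200; -200];
x  = A \ b;                       % [-100; -200]

Ap = [401 -201; -800 401];        % small change in one coefficient
xp = Ap \ b;                      % [40000; 79800] -> large change in the solution

normA    = norm(A, inf);          % 1201 (maximum absolute row sum)
normAinv = norm(inv(A), inf);     % 3
condA    = normA * normAinv;      % 3603, identical to cond(A, inf)
digits_lost = log10(condA);       % approx. 3.6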
1.2.19 Eigenvalue, eigenvector

As an example, consider the matrix equation

\[ \begin{pmatrix} 1 & 2 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & -3 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \\ -3 \end{pmatrix}, \]

where the result vector on the right-hand side is not a scalar multiple of the column vector on the left-hand side. Changing the left column vector and multiplying it by the matrix again gives

\[ \underbrace{\begin{pmatrix} 1 & 2 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & -3 \end{pmatrix}}_{A} \cdot \underbrace{\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}}_{\vec{v}} = \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} = \underbrace{3}_{\lambda} \cdot \underbrace{\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}}_{\vec{v}}, \]

a result vector which is a multiple of the left column vector. The matrix equation takes the general form

\[ A \vec{v} = \lambda \cdot \vec{v}, \]

where A is the matrix, \(\vec{v}\) the eigenvector and \(\lambda\) the scalar eigenvalue. The left-hand side of the equation is a matrix-vector multiplication and the right-hand side of the equation is a scalar multiplication. If, in the next step, \(\lambda\) is written as

\[ \begin{pmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{pmatrix} = \lambda \cdot \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}}_{E} \]

with the help of the identity matrix E, the matrix equation again follows in general form

\[ A \vec{v} = (\lambda E) \cdot \vec{v}. \]

Rearranging yields

\[ A \vec{v} - (\lambda E) \cdot \vec{v} = (A - \lambda E) \cdot \vec{v} = 0. \]

Values of \(\lambda\) are sought which satisfy this equation. The condition is calculated with the characteristic polynomial P(\(\lambda\)),

\[ \det (A - \lambda E) = P(\lambda) = \begin{vmatrix} a_{11}-\lambda & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22}-\lambda & \dots & \\ \vdots & & \ddots & \vdots \\ a_{n1} & \dots & \dots & a_{nn}-\lambda \end{vmatrix} = 0, \tag{1.1} \]

which arises from expanding the determinant. The determination of eigenvalues is preferably used in physical-technical systems for the calculation of resonance frequencies.

1.2.20 Square matrices - a summary

Square matrices of type (m, m), or A_{mm}, are often used to describe physical phenomena and are significant in physics. Fig. 1.2 shows a summary.

Figure 1.2: Summary of selected types of square (m, m) matrices
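The introductory eigenvalue example from section 1.2.19 can be checked numerically. The following sketch lets MATLAB compute all eigenvalues and eigenvectors of the 3x3 matrix and verifies the relation A·v = λ·v for the eigenvector (1, 1, 0)^T found above; note that eig returns eigenvectors normalised to unit length, so they may differ from (1, 1, 0)^T by a scalar factor.

% Eigenvalues and eigenvectors of the example matrix from section 1.2.19
A = [1 2 0; 2 1 0; 0 0 -3];

[V, D] = eig(A);      % columns of V: eigenvectors, diagonal of D: eigenvalues
disp(diag(D))         % eigenvalues, among them lambda = 3

v      = [1; 1; 0];   % eigenvector from the text
lambda = 3;
disp(A*v - lambda*v)  % zero vector: A*v = lambda*v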
1.3 Integral, differential equations

Many processes in science and technology are described by means of differential equations (DEs). In order to facilitate access to differential equations, they are presented here. After initial definitions of terms, a classification of differential equations is given. Furthermore, initial value problems and boundary value problems are presented. In the following summary, particular use was made of the literature [4], [62], [68].

1.3.1 Definitions

• If x and y are two variable quantities and exactly one y-value can be assigned to each given x-value, then y is called a function of x, and one writes y = f(x).
• The variable x is called the independent variable or argument of the function y. The variable y is called the dependent variable.
• A differential equation (DE) is an equation in which, in addition to one or more independent variables and one or more functions of these variables, derivatives of these functions with respect to the independent variables also occur. The order of a differential equation is equal to the order of the highest derivative occurring in it.
• An equation in which derivatives of a function y = y(x) occur up to the n-th order is called an ordinary differential equation (ODE) of n-th order.
• Partial differential equations (PDEs) contain partial derivatives of a function of several variables.
• A differential equation is called linear if the function and its derivatives occur only linearly, i.e. to the first power.
• A differential equation is called homogeneous if the sum of all terms containing the function or its derivatives is equal to zero, i.e. no term free of the function remains; otherwise it is called inhomogeneous.
• A function whose equation is solved for the dependent variable is called explicit (explicitus (Lat.): unwound). The general form of the explicit function is y = f(x). With the explicit form of a mathematical function, its values can be calculated directly without transforming the function (example: \( y = \sqrt{1 - x^2} \)).
• A function whose equation is not solved for the dependent variable is called implicit (implicitus (Lat.): wrapped up). The general form of an implicit function is f(x, y) = 0. The explicit form is obtained from the implicit equation f(x, y) = 0 when this equation can be uniquely solved for y (example: \( x^2 - y^2 - 1 = 0 \)).

Tab. 1.2 shows examples of differential equations.

Table 1.2: Examples for the representation and naming of differential equations
\( y' = 2x \)                          explicit DE, 1st order
\( x + y y' = 0 \)                     implicit DE, 1st order
\( y' + y y'' = 0 \)                   implicit DE, 2nd order
\( \ddot{s} = -g \)                    explicit DE, 2nd order
\( y''' + 2 y' = \cos(x) \)            implicit DE, 3rd order
\( y^{(6)} - y^{(4)} + y'' = e^x \)    implicit DE, 6th order

1.3.2 Differentiation of scalar functions

The following rules apply to the differentiation of scalar functions:
• Sum rule: d(u ± v) = du ± dv (term-wise differentiation),
• Product rule: d(u v) = u dv + v du.

1.3.3 Higher order ordinary differential equations

The linear ordinary differential equation (ODE) of order n with non-constant coefficients has the form

\[ a_n(x) \frac{d^n y(x)}{dx^n} + \dots + a_1(x) \frac{dy(x)}{dx} + a_0(x) y(x) = \begin{cases} f(x) & \text{(inhomogeneous ODE)} \\ 0 & \text{(homogeneous ODE).} \end{cases} \tag{1.2} \]

If f(x) = 0, the ODE is called homogeneous, otherwise inhomogeneous. A special case is the linear ODE with constant coefficients,

\[ a_n \frac{d^n y(x)}{dx^n} + \dots + a_1 \frac{dy(x)}{dx} + a_0 y(x) = \begin{cases} f(x) & \text{(inhomogeneous ODE)} \\ 0 & \text{(homogeneous ODE),} \end{cases} \tag{1.3} \]

which is likewise called homogeneous in the case f(x) = 0, otherwise inhomogeneous. The solution of the ODEs proceeds as follows:

• Solution of the homogeneous case of the ODE of eq. (1.2): To solve the homogeneous eq. (1.2), n linearly independent functions y_1(x), y_2(x), ..., y_n(x) must be determined which satisfy this equation; their linear combination gives the general solution, also called the complementary function y_c(x),
\[ y_c(x) = c_1 y_1(x) + c_2 y_2(x) + \dots + c_n y_n(x). \]
• Solution of the inhomogeneous case of the ODE of eq. (1.2): In addition to the complementary solution y_c(x), a particular solution y_p(x) must be determined, which can be any function satisfying the inhomogeneous eq. (1.2). The general solution of the inhomogeneous eq. (1.2) is thus y(x) = y_c(x) + y_p(x).
• Solution of the homogeneous case of the ODE of eq. (1.3): The complementary function y_c(x) is sought. For this purpose the ansatz y(x) = A e^{\lambda x} is chosen and inserted into the homogeneous eq. (1.3). Division by A e^{\lambda x} leads to the (auxiliary) characteristic equation
\[ a_n \lambda^n + a_{n-1} \lambda^{n-1} + \dots + a_1 \lambda + a_0 = 0. \]
From this, three main solution cases can usually be worked out (distinct real roots, repeated roots, complex conjugate roots), whose solutions are linearly independent.
• Solution of the inhomogeneous case of the ODE of eq. (1.3): In addition to the complementary solution y_c(x), the particular solution y_p(x) must be determined. There is no generally applicable method for finding the particular solution y_p(x) of linear ODEs with constant coefficients. The general solution of the inhomogeneous eq. (1.3) is y(x) = y_c(x) + y_p(x).

The question remains how to determine whether the n individual solutions of the homogeneous equations (1.2) and (1.3) are linearly independent. For this, the complementary function y_c is differentiated repeatedly,

\[ \begin{aligned} c_1 y_1(x) + c_2 y_2(x) + \dots + c_n y_n(x) &= 0 \\ c_1 y_1'(x) + c_2 y_2'(x) + \dots + c_n y_n'(x) &= 0 \\ &\;\;\vdots \\ c_1 y_1^{(n-1)}(x) + c_2 y_2^{(n-1)}(x) + \dots + c_n y_n^{(n-1)}(x) &= 0. \end{aligned} \tag{1.4} \]

The n functions y_1(x), y_2(x), ..., y_n(x) are linearly independent over an interval if

\[ W(y_1, y_2, \dots, y_n) = \begin{vmatrix} y_1 & y_2 & \dots & y_n \\ y_1' & y_2' & \dots & \\ \vdots & & \ddots & \vdots \\ y_1^{(n-1)} & \dots & \dots & y_n^{(n-1)} \end{vmatrix} \neq 0. \]

Here W(y_1, y_2, ..., y_n) is the Wronski determinant, whose value still depends on x. For useful literature on this, see [44], page 786 ff. as well as [60], page 490 ff.
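As a small numerical illustration of section 1.3.3 (the ODE below is an assumed example, not one from the book), the characteristic equation of a homogeneous linear ODE with constant coefficients can be solved with roots, and the linear independence of the resulting solutions can be checked by evaluating the Wronski determinant at one point.

% y'' + 3y' + 2y = 0: characteristic equation lambda^2 + 3*lambda + 2 = 0
lam = roots([1 3 2]);          % lambda_1 = -1, lambda_2 = -2
% complementary solution y_c(x) = c1*exp(-x) + c2*exp(-2*x)

x0 = 0;                        % evaluate the Wronski determinant at x0
y1 = exp(lam(1)*x0);  dy1 = lam(1)*exp(lam(1)*x0);
y2 = exp(lam(2)*x0);  dy2 = lam(2)*exp(lam(2)*x0);

W = det([y1 y2; dy1 dy2]);     % here W = -1 or +1, depending on the root order
disp(W)                        % W ~= 0 -> y1 and y2 are linearly independent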
1.3.4 Partial differential equations A general differential equation with n-independent variables of mth order in implicit form is called the equation F ( x 1 , x 2 , ... , x n , u, ∂u ∂x 1 , ..., ∂u ∂x n , ∂ 2 u ∂x 21 , ∂ 2 u ∂x 1 ∂x 2 ..., ) = 0. <?page no="48"?> 22 Required mathematical basics This is called a partial differential equation (PDE). If m is the highest order of the partial derivative occurring in it, the equation is called partial differential equation of mth order. If the equation is solved for u m (x), the explicit form of the ordinary differential equation of mth order is obtained. Recommended literature: [4], p. 504 ff; [68], p. 549 ff. The value of a mixed derivative ∂ 2 u ∂x 1 ∂x 2 is for given values of x 1 and x 2 independent of the sequence of derivation ∂ 2 u ∂x 1 ∂x 2 = ∂ 2 u ∂x 2 ∂x 1 (Schwarz’s law of permutation). Higher order partial derivatives are defined analogously ([4], p. 410). In general ∂ 2 u ∂x 1 ∂x 2 � = ∂u ∂x 1 · ∂u ∂x 2 . Here are some hints for the notation of derivatives. It is f ′ = ∂f ∂x = df dx . For the second derivative it follows f ′′ = d 2 f dx 2 , which again must not be confused with (f ′ ) 2 = df dx · df dx . Further literature see [49], p. 327. <?page no="49"?> 1.3 Integral, differential equations 23 1.3.5 Partial integration The equation for partial integration is ˆ u(x) v ′ (x) dx = u(x) v(x) − ˆ u ′ (x) v(x) dx. This equation is valid for definite integrals ˆ b a u(x) v ′ (x) dx = [u(x) v(x)] ∣∣ b a − ˆ b a u ′ (x) v(x) dx. In some cases, multiple partial integration may be required. 1.3.6 Classification of differential equations For further consideration, the common abbreviations for partial derivatives according to tab. 1.3 are used. The general linear partial differential equation of the 2nd order for the function f(x, y) of the two independent variables x and y has the form A f xx + B f xy + C f yy + D f x + E f y + F f = G, (1.5) where the coefficients A, B, C, D, E, F , G are generally functions of x and y. These partial differential equations are classified into three groups according to the values of the coefficients A, B, C: Table 1.3: Abbreviations for partial derivatives Partial Derivation Abbreviation ∂f ∂x f x ∂f ∂y f y ∂ 2 f ∂x 2 f xx ∂ 2 f ∂y 2 f yy ∂ 2 f ∂x∂y f xy <?page no="50"?> 24 Required mathematical basics • Elliptic differential equations: These describe states, i.e. completed processes that do not depend on time t (stationary processes). The extremum principle applies to the solution of elliptic differential equations. The maximum or minimum of the solution is assumed at the edges and not in the interior of the domain. • Parabolic differential equations: They describe balancing processes that depend on the time t (transient processes). Parabolic problems are typically initial or boundary value problems. They describe thermal or magnetic diffusion processes. The extremum principle also applies to parabolic differential equations. • Hyperbolic differential equations: This type describes wave propagations and transport processes that depend on time t. Hyperbolic problems are pure initial value problems. In a finite domain, boundary values can thus not be given arbitrarily, but are replaced by compatibility conditions. With the help of tab. 1.4 the classification of differential equations can be done. Since the coefficients A, B and C are functions of x and y, the partial differential equation eq. (1.5) can be elliptic in a certain subdomain G ⊂ R 2 and parabolic or hyperbolic in another subdomain [68], p. 463. 
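The sign test of tab. 1.4 (below) can be wrapped in a few lines of code. In the following Python sketch the three example coefficient sets (Laplace equation, diffusion equation and wave equation with an assumed c = 2) are chosen purely for illustration and are not taken from the book:

# A small sketch of the discriminant test B^2 - 4AC for eq. (1.5).
def classify(A, B, C):
    d = B**2 - 4*A*C
    if d < 0:
        return "elliptic"
    if d == 0:
        return "parabolic"
    return "hyperbolic"

# Laplace equation u_xx + u_yy = 0:        A = 1,    B = 0, C = 1
# Diffusion equation u_xx - u_t = 0:       A = 1,    B = 0, C = 0 (y plays the role of t)
# Wave equation u_tt - c^2 u_xx = 0:       A = -c^2, B = 0, C = 1 (c = 2 assumed)
print(classify(1, 0, 1))    # elliptic
print(classify(1, 0, 0))    # parabolic
print(classify(-4, 0, 1))   # hyperbolic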
Table 1.4: Classification of differential equations Type Sign of Solution range B 2 − 4AC elliptic < 0 closed parabolic = 0 open hyperbolic > 0 open 1.3.7 Initial value task If the solution y = y(x) of an ordinary differential equation of nth order at a point x 0 is given the n-values y(x 0 ), y � (x 0 ), y �� (x 0 ), ..., y n − 1 (x 0 ) is given, one speaks of a initial value task. The given values are called initial values or initial conditions [4], p. 504. <?page no="51"?> 1.3 Integral, differential equations 25 1.3.8 Boundary value problem The general solution of an ordinary differential equation of nth order contains n free integration constants, which are determined in a special solution by initial or boundary conditions. If conditions are imposed on the solution of an ordinary differential equation at several points on the outer points of the domain of definition, these conditions are called boundary conditions. The solution sought must take special conditions or function values at the ends of an interval of the independent variables. A differential equation with boundary conditions is called boundary value problem [4], p. 504. These are additional conditions that the special solution must satisfy at one or more points. The general solution of a partial differential equation, on the other hand, generally contains arbitrary functions as integration constants. In order to determine these, boundary conditions along the boundary Γ must be specified. The boundary conditions are • Boundary condition 1st kind: u = γ on Γ (Dirichlet condition), • Boundary condition 2nd kind: ∂u/ ∂n = γ on Γ (Neumann condition), • Boundary condition 3rd kind: ∂u/ ∂n + βu = γ on Γ (Cauchy condition). to distinguish [62], p. 18 f; [68], p. 464 f. The directional derivative in the boundary condition ∂u/ ∂n is defined by ∂u ∂n = n grad u = n 1 u x + n 2 u y , for the function u(x, y) with grad u = (u x , u y ) and the normal unit vector n = (n 1 , n 2 ) pointing outward on the edge of the domain G. In fig. 1.3 a rectangular region with boundary Γ 1 to Γ 4 can be seen. For n = n(x, y) on the boundary Γ 1 holds ∂u ∂n = − ∂u ∂x and hence n = ( − 1, 0). For n on the edge Γ 2 holds ∂u ∂n = ∂u ∂y <?page no="52"?> 26 Required mathematical basics Figure 1.3: Example of normal derivation of a rectangular domain G and hence n = (0, 1). For n on the edge Γ 3 holds ∂u ∂n = ∂u ∂x and hence n = (1, 0). For n on the edge Γ 4 holds ∂u ∂n = − ∂u ∂y and thus n = (0, − 1). 1.3.9 Linear operators In summary, the most commonly used relationships of linear operators follow. • Linear operator L : An operator is called linear if for both functions f and g and the scalar t L (f + g) = L f + L g L (tf) = t L f holds. <?page no="53"?> 1.3 Integral, differential equations 27 • Adjoint operator L a : This is defined with �L f, g � = � f, L a g � . • Self-adjoint operator: L a = L . • Zero operator O , identity operator I : O a = 0 I a = a, where a is a vector. • Inverse operator L − 1 : L f = g L − 1 L f = L − 1 g f = L − 1 g. It is LL − 1 = L − 1 L = I . Examples of this are: • Example 1: − d 2 f dx 2 = g(x) L = − d 2 dx 2 L f = g(x). • Example 2: L u(x) = a 0 d 2 u dx 2 + a 1 du dx + a 2 u L = a 0 d 2 dx 2 + a 1 d dx + a 2 . <?page no="54"?> 28 Required mathematical basics • Example 3: If L and M are two linear operators and a is a vector, it follows that ( L + M )a = L a + M a (λ L )a = λ( L a) ( LM )a = L ( M a). 
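The operator of example 1 can be checked for linearity directly. The following SymPy sketch (the test functions f and g are assumptions made here for illustration) applies L = -d^2/dx^2 and verifies the two defining properties L(f + g) = L f + L g and L(t f) = t L f:

# A brief sketch (assumed test functions) of example 1 and the linearity rules.
import sympy as sp

x, t = sp.symbols('x t')

def L(f):
    """Linear differential operator L = -d^2/dx^2 applied to f(x)."""
    return -sp.diff(f, x, 2)

f = sp.sin(x)     # assumed example function
g = x**3          # assumed example function

print(sp.simplify(L(f + g) - (L(f) + L(g))))   # 0 -> additivity
print(sp.simplify(L(t*f) - t*L(f)))            # 0 -> homogeneity
print(L(f))                                    # sin(x), i.e. the g(x) in L f = g(x)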
1.3.10 Inner product An infinite dimensional vector space of functions for which an inner product is defined is called a Hilbert space. The inner product is introduced as the generalisation of the dot product. The inner product corresponds to � f, g � = ˆ b a f g dx the multiplication of two vectors or functions followed by integration over the domain Ω ∈ [a, b]. The result is always a scalar. For typographical reasons, the representation of the inner product is done with the two brackets � � . • Inner product of vectors: The inner product, also called dot product or scalar product, describes the scalar multiplication of vectors. The result is a scalar. Here �a = (a 1 , a 2 , ..., a n ) and �b = (b 1 , b 2 , ..., b n ). The inner product of both vectors is done with the notation �a · �b = � �a,�b � = a 1 b 1 + a 2 b 2 + a 3 b 3 + ... + a n b n ([50], p. 24, p. 65). Shown is the special case where the scalar product equals the integral � �a,�b � = ˆ Ω �a �b dx is. As an example, the inner product of the position vectors �a(x) and �b(x) is formed. The integration in this example is along the distance x. <?page no="55"?> 1.3 Integral, differential equations 29 • Inner product of functions: In fig. 1.4 a) to c) show the time curves of the voltage u(t), the current i(t) and the resulting power P (t). The electric energy W el is to be calculated as the inner product of the two functions voltage u(t) and current i(t) with � u, i � = ˆ Ω u(t) i(t) dt = ˆ Ω P (t) dt = W el , where Ω ∈ [0, 5]. Multiplying both functions followed by integration over time t gives the electrical energy W el = 19 J . Figure 1.4: Voltage, current and power time graphs The scalar product also contains the same information. The assumption is that the local change of the vector components take place per time. In the example, the integral path distance corresponds to 5 s. The geometric interpretation corresponds to the area under the ”curve“. � �u,�i � = �⎛⎜⎜⎜⎜⎜⎜⎜⎝ 2 3 3 2 1 ⎞⎟⎟⎟⎟⎟⎟⎟⎠ , ⎛⎜⎜⎜⎜⎜⎜⎜⎝ 3 1 1 2 3 ⎞⎟⎟⎟⎟⎟⎟⎟⎠ � = 2 · 3 + 3 · 1 + 3 · 1 + 2 · 2 + 1 · 3 = 6 + 3 + 3 + 4 + 3 = 19. <?page no="56"?> 30 Required mathematical basics In this case the results of the integral and the scalar product are identical. For definitions see also [67]. • Inner product, normalised: An inner product is normalised by � f � = √ � f, f � = [ ˆ b a | f(x) | 2 w(x) dx ] 1/ 2 . Here w(x) is the weighting function. • Normal function: A square integrable function f(x) is called normal if � f(x), f(x) � = ˆ b a (f(x)) 2 dx = 1. • Normalised function: A normalised function ˆ f is defined by ˆ f = f � f � = 1. • Orthogonal function: Two functions f(x) and g(x) are orthogonal in the interval a ≤ x ≤ b with weighting function w(x) if � f(x), g(x) � = ˆ b a f(x) g(x) w(x) dx = 0 is, and the norm of a function is defined as || f || = � f | f � 1/ 2 . • Orthonormal function: Two functions f(x) and g(x) are orthonormal in the interval a ≤ x ≤ b with weighting function w(x) if � f(x), f(x) � = ˆ b a (f(x)) 2 w(x) dx = 1 � g(x), g(x) � = ˆ b a (g(x)) 2 w(x) dx = 1 is. For an introduction to the subject of the inner product, see also [5], p. 162 and p. 166. <?page no="57"?> 1.4 Vector classification 31 1.3.11 Strong form/ formulation of a differential equation The ordinary differential equation L(u), for example, in the form L(u) = 0, x ∈ Ω d 2 u(x) dx 2 + 1 = 0 is called the strong form (the form known to us with the corresponding strong continuity conditions). 
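The voltage/current example of section 1.3.10 can be reproduced numerically in a few lines. The sketch below uses the five sample values from the worked example; reading the dot product as a rectangle-rule integral with Δt = 1 s is the same interpretation given in the text:

# A minimal numerical sketch of the inner product example of section 1.3.10.
import numpy as np

u = np.array([2.0, 3.0, 3.0, 2.0, 1.0])   # voltage samples u(t) in V
i = np.array([3.0, 1.0, 1.0, 2.0, 3.0])   # current samples i(t) in A

P = u * i                                  # sampled power P(t) = u(t) i(t) in W
W_el = float(np.sum(P) * 1.0)              # <u, i> = sum of P dt with dt = 1 s
print(W_el)                                # 19.0 -> W_el = 19 J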
1.3.12 Weak form/ formulation of a differential equation The ordinary differential equation L(u) is multiplied by a function v(x) and integrated over the interval Ω � L, v � = ˆ Ω L(u) v(x) dx = ˆ Ω ( d 2 u(x) dx 2 + 1 ) v(x) dx = 0 and thus converted into its weak form, which must satisfy weaker continuity conditions. This can be simplified with the notation of the inner product. The weak form requires only simple continuity of the derivative (weak continuity requirement). 1.4 Vector classification A classification of vectors was given by Maxwell in [52], Art. 12, p. 10 with ”Physical vector quantities may be divided into two classes, in one of which the quantity is defined with reference to a line, while in the other the quantity is defined with reference to an area.“ is carried out. A distinction is made between vectors of which one class • is defined with reference to a line. Examples are the electric and magnetic field strength. <?page no="58"?> 32 Required mathematical basics • is defined with respect to an area. Examples are the magnetic flux density � B and the electric current density � J . Both vectors are calculated from flux quantities that perpendicularly penetrate a surface. 1.5 Differentiation rules for vectors In the further course the derivation of vector functions is is introduced [4]. The vector function of a scalar variable t describes a vector �a =�a(t) if its components are functions of t: a 1 (t) �e 1 , a 2 (t) �e 2 , a 3 (t) �e 3 . The derivative of �a(t) with respect to t is a vector function of t d�a dt = lim Δt → 0 �a(t + Δt) − �a(t) Δt . A vector is differentiated according to a scalar quantity by differentiating the individual components d�a dt = da 1 dt �e 1 + da 2 dt �e 2 + da 3 dt �e 3 . The differential quotient is a vector. The differential of a vector function �a(t) is defined by Δ�a = d�a dt Δt = �a(t + Δt) − �a(t). It is ϕ(t) a scalar function. For the differentiation of vector products, the following rules apply: <?page no="59"?> 1.6 Vector operators 33 d dt (ϕ �a) = ϕ d�a dt + �a dϕ dt d dt ( �a ± �b ± �c ) = d�a dt ± d�b dt ± d�c dt d dt(�a · �b) = �a · d�b dt + d�a dt · �b d dt(�a × �b) = �a × d�b dt + d�a dt × �b d dt [ �a · (�b × �c) ] = �a · ( �b × d�c dt ) + �a · ( d�b dt × �c ) + d�a dt · ( �b × �c ) d dt [ �a × (�b × �c) ] = �a × ( �b × d�c dt ) + �a × ( d�b dt × �c ) + d�a dt × ( �b × �c ) d dt �a [ϕ(t)] = d�a dϕ · dϕ dt . 1.6 Vector operators The following operators are presented • Nabla and Laplace operator: Enables simplified notation for the following vector operators, • Gradient: Directional derivative of a scalar function, • Divergence: Examines the flux of the vector field (flux per unit volume) with respect to flux sources and flux sinks, • Curl: Examines a vector field for vortices in Cartesian coordinates. 1.6.1 Nabla and Laplace operator The Nabla-operator (eng.: del-operator) is described with <?page no="60"?> 34 Required mathematical basics ∇ = ∂ ∂x 1 �e 1 + ∂ ∂x 2 �e 2 + ∂ ∂x 3 �e 3 = ⎛⎜⎜⎝ ∂ ∂x 1 ∂ ∂x 2 ∂ ∂x 3 ⎞⎟⎟⎠ . The operator has the property of a vector and a mathematical operator at the same time. The Laplace operator Δ is called the delta operator. This designation must not be confused with the English designation for Nabla operator (del-operator). The Laplace operator Δ is the scalar product of the product of the Nabla operator Δ = ∇ · ∇ = ∇ 2 = ⎛⎜⎜⎝ ∂ x 1 ∂ x 2 ∂ x 3 ⎞⎟⎟⎠ · ⎛⎜⎜⎝ ∂ x 1 ∂ x 2 ∂ x 3 ⎞⎟⎟⎠ = ∂ 2 ∂x 21 + ∂ 2 ∂x 22 + ∂ 2 ∂x 23 with itself. 
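Before the individual operators are introduced in the following subsections, a quick consistency check of Δ = ∇ · ∇ may be helpful. The sketch below (the scalar potential φ = x1² x2 + sin x3 is an assumed example) computes the Laplacian once as the sum of the second partial derivatives and once as the divergence of the gradient:

# A short consistency check (assumed example potential) of Delta = div(grad).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
phi = x1**2 * x2 + sp.sin(x3)              # assumed scalar potential

grad_phi = [sp.diff(phi, v) for v in (x1, x2, x3)]          # gradient components
div_grad = sum(sp.diff(g, v) for g, v in zip(grad_phi, (x1, x2, x3)))
laplace  = sum(sp.diff(phi, v, 2) for v in (x1, x2, x3))    # sum of second partials

print(sp.simplify(div_grad - laplace))     # 0 -> Delta(phi) = div(grad(phi))
print(laplace)                             # 2*x2 - sin(x3)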
1.6.2 Vector operator Gradient The gradient of the scalar location function (potential function) ϕ is in Cartesian coordinates grad ϕ = ∂ϕ ∂x 1 �e 1 + ∂ϕ ∂x 2 �e 2 + ∂ϕ ∂x 3 �e 3 = ∇ ϕ = ⎛⎜⎜⎝ ∂ ∂x 1 ∂ ∂x 2 ∂ ∂x 3 ⎞⎟⎟⎠ · ϕ = ⎛⎜⎜⎝ ∂ϕ ∂x 1 ∂ϕ ∂x 2 ∂ϕ ∂x 3 ⎞⎟⎟⎠ . <?page no="61"?> 1.6 Vector operators 35 The name of potential was first given by Green. Considering the vector components of the gradient as the first derivative of its potential function is based on Laplace. The gradient is a vector field. It is a measure for the change of the scalar function ϕ(x 1 , x 2 , x 3 ) in the direction of the coordinates x 1 , x 2 , x 3 at the point P under consideration. The gradient grad ϕ(x 1 , x 2 , x 3 ) points always in the direction of the largest increase of its potential function. The gradient is always perpendicular to the surfaces ϕ(x 1 , x 2 , x 3 ). In the calculation rules for the gradient, φ and ψ are scalar fields, c is a constant: grad c = �0 grad (c φ) = c (grad φ) grad (φ ± ψ) = grad φ ± grad ψ grad (φ + c) = grad φ grad (φ · ψ) = φ (grad ψ) + ψ (grad φ). 1.6.3 Vector operator Divergence In Cartesian coordinates, divergence is denoted by div � B = ∂B 1 ∂x 1 + ∂B 2 ∂x 2 + ∂B 3 ∂x 3 = ∂ � B 1 ∂x 1 �e 1 + ∂ � B 2 ∂x 2 �e 2 + ∂ � B 3 ∂x 3 �e 3 = ∇ � B. The divergence is a scalar. The scalar multiplication of the operator with a vector forms the scalar product and thus the divergence of the vector � B is formed. In the calculation rules for the divergence, � A and � B are vector fields, φ is a scalar field, �a is a constant vector and c is a constant: <?page no="62"?> 36 Required mathematical basics div �a = 0 div (φ � A) = (grad φ) · � A + φ (div � A) div (c � A) = c (div � A) div ( � A + � B) = div � A + div � B div ( � A + �a) = div � A div ( � A × � B) = � B curl � A − � A curl � B. 1.6.4 Vector operator Curl The curl of a vector field is expressed in Cartesian coordinates and denoted with curl � B = ( ∂B 3 ∂x 2 − ∂B 2 ∂x 3 ) �e 1 + ( ∂B 1 ∂x 3 − ∂B 3 ∂x 1 ) �e 2 + ( ∂B 2 ∂x 1 − ∂B 1 ∂x 2 ) �e 3 . The calculation formula can be expressed in Cartesian coordinates by a determinant curl � B = ∣∣∣∣∣∣∣∣ �e 1 �e 2 �e 3 ∂ ∂x 1 ∂ ∂x 2 ∂ ∂x 3 B 1 B 2 B 3 ∣∣∣∣∣∣∣∣ = ∇ × � B. The result of the curl is again a vector. The most important calculation rules for curl follow. Here � A and � B are vector fields, φ is a scalar field, �a is a constant vector and c is a constant: curl �a = �0 curl (φ � A) = (grad φ) × � A + φ (curl � A) curl (c � A) = c (curl � A) curl ( � A ± � B) = curl � A ± curl � B curl ( � A + �a) = curl � A curl curl( � A) = grad div � A − Δ � A curl ( � A × � B) = � A div � B − � B div � A + ( � B · ∇ ) � A − ( � A · ∇ ) � B, where ( � A · ∇ ) = A 1 ∂ ∂x 1 + A 2 ∂ ∂x 2 + A 3 ∂ ∂x 3 . <?page no="63"?> 1.6 Vector operators 37 1.6.5 Comparison of vector operators In tab. 1.5 the vector operators are contrasted. This contains the arguments as well as the results. Table 1.5: Comparison of the vector operators Operator: grad ϕ div � A curl � A Argument Scalar Vector Vector Result Vector Scalar Vector Nabla-Operator ∇ · ϕ ∇ · � A ∇ × � A 1.6.6 Rules of calculation for the Nabla operator Rules for differentiation of scalar functions apply to the use of the Nabla operator. Let φ and ψ two scalar functions, � F and � G two vectorial functions of spatial coordinates. The index at the ∇ operator indicates to which function the operator is to be applied. It is valid using the sum rule: ∇ (φ ± ψ) = ∇ φ (φ) ± ∇ ψ (ψ) ∇ ( � F ± � G) = ∇ � F ( � F ) ± ∇ � G ( � G). 
According to the product rule, the differentiation is performed using the Nabla operator: <?page no="64"?> 38 Required mathematical basics ∇ (φ · ψ) = ∇ φ (φ, ψ) + ∇ ψ (φ, ψ) = ψ ∇ φ (φ) + φ ∇ ψ (ψ) ∇ (φ · � F ) = φ ∇ � F + � F ∇ φ ∇ ( � F · � G) = ∇ � F ( � F · � G) + ∇ � G ( � F · � G) = ( � G ∇ ) � F + � G × ( ∇ × � F ) + ( � F ∇ ) � G + � F × ( ∇ × � G) ∇ × (φ · � F ) = ∇ φ × � F + φ ∇ × � F ∇ ( � F × � G) = ∇ � F ( � F × � G) + ∇ � G ( � F × � G) = � G ( ∇ × � F ) − � F ( ∇ × � G) ∇ × ( � F + � G) = ∇ × � F + ∇ × � G ∇ × ( � F × � G) = � F ( ∇ � F ) − � G( ∇ � F ) + ( � G · ∇ ) � F − ( � F ∇ ) � G. 1.6.7 Comparison scalar and vector product It is the geometric product of two vectors �a �b = �a · �b + �a ∧ �b, where �a · �b as the inner product and �a ∧ �b as the wedge product. The continuation is with the inner product �a · �b = �a �b. More on this in [41]. In tab. 1.6 a comparison of scalar and vector products was made. The first column names the computational law, the second column the scalar products, which is opposite to the third column, the vector products. Here �a, �b and �c are vectors and α is a constant. <?page no="65"?> 1.6 Vector operators 39 Table 1.6: Scalar and vector products Naming Scalar product Vector product Commutative law �a �b = �b �a �a × �b = − �b × �a = − ( �b × �a ) Associative law for α ( �a �b ) = (α �a)�b α ( �a × �b ) = α �a × �b multiplic. with scalar = ( α �b ) �a = �a × α �b Associative law for �a × ( �b × �c ) � = ( �a × �b ) × �c multiplic. with vector �a ( �b �c ) � = ( �a �b ) �c �a × ( �b × �c ) = (�a · �c)�b − ( �a · �b ) �c =�b (�a · �c) − �c ( �a · �b ) ( �a × �b ) × �c = (�a · �c)�b − ( �b · �c ) �a for �c = �a, � d = �b it becomes ( �a × �b ) · ( �a × �b ) = ( �a × �b ) 2 = (�a · �a) ( �b · �b ) − ( �a · �b ) 2 Distributive law �a ( �b + �c ) = �a �b + �a �c �a × ( �b + �c ) = �a × �b + �a × �c α ( �a + �b ) = α�a + α�b ( �a + �b ) × �c = �a × �c + �b × �c Orthogonality �a �b = 0, if �a ⊥ �b �a × �b = � ab, if �a ⊥ �b Collinearity �a �b = a b, if �a �b �a × �b = 0, if �a � �b Square of a vector �a �a = �a 2 = a 2 �a × �a = 0 Scalar multiplication �a · ( �a × �b ) = 0 �a · ( �b × �c ) = ( �a × �b ) · �c 1.6.8 Base, unit vectors In tab. 1.7 the calculation rules of the orthonormal basis, or unit vectors �e 1 , �e 2 and �e 3 are summarized. These are multiplied scalarly and vectorially and the results are plotted. Orthonormal basis vectors are characterized by the fact that • their scalar product of any two different basis vectors results in zero. • the vectorial multiplication of two different unit vectors yields the unit vector of the third dimension. • every basis vector has been normalized to length one. <?page no="66"?> 40 Required mathematical basics • no basis vector can be described as the sum of the remaining basis vectors. Basis vectors must be linearly independent. Table 1.7: Calculation rules for �e i orthonormal basis vectors · �e 1 �e 2 �e 3 × �e 1 �e 2 �e 3 �e 1 1 0 0 �e 1 0 �e 3 − �e 2 �e 2 0 1 0 �e 2 − �e 3 0 �e 1 �e 3 0 0 1 �e 3 �e 2 − �e 1 0 1.7 Boundary operator ∂ A volume V is bounded in fig. 1.5 a) by a closed surface A, which can also be denoted by ∂V . An open surface Ω is shown in fig. 1.5 b) bounded by its boundary, which is denoted by ∂Ω. However, the boundaries ∂ themselves ∂∂ = 0 Figure 1.5: n-dimensional manifolds <?page no="67"?> 1.8 Maxwell’s equations 41 have no edge. Boundaries are boundless. 
With this procedure it can be stated that the boundary operator ∂ reduces the dimension n of a region by one after the procedure (see tab. 1.8). As an example, the fourth Maxwell’s theorem is given in the following chapter. The boundary operator is sometimes also called the edge operator. Table 1.8: Interrelationships of manifolds and surfaces Manifold ⇒ Boundary Volume (n=3) ⇒ Surface, closed area (n=2) open surface (n=2) ⇒ closed path (n=1) 1.8 Maxwell’s equations As an introduction to Maxwell’s equations, the relationship between circular and surface integrals is presented. The Maxwell’s equations are presented in their differential and integral form. Furthermore, a directional assignment between the involved vector fields is given. 1.8.1 Relationship between circular and surface integral The relation between a circle integral and an area integral is described by Maxwell’s fourth theorem from [52], art. 24, p. 25: ”A line-integral taken round a closed curve may be expressed in terms of a surface-integral taken over a surface bounded by the curve.“ ˛ ∂Ω ... ds = ¨ Ω ... dA. (1.6) One application is Stoke’s integral theorem. The Stoke’s integral theorem establishes a connection between the circulation and the curl of a vector field and allows the transformation of a surface integral into a line integral along the surface boundary (boundary integral), or the transformation of a boundary integral into a surface integral. The boundary integral of the tangential components of a vector field � F along the closed <?page no="68"?> 42 Required mathematical basics curve ∂Ω is equal to the surface integral of the normal component of the curl of � F over an arbitrary surface A bounded by the curve ∂Ω ˛ ∂Ω � F d�s = ¨ Ω (curl � F ) d � A. Examples of this are: • Ampere’s law: ˛ ∂Ω � H d�s = ¨ Ω � J d � A = Θ. • Faraday’s law (law of induction): ˛ ∂Ω � E d�s = ¨ Ω d � B dt d � A. Where � H is the electric field strength, � J is the current density, � E is the electric field strength, and � B is the magnetic flux density. 1.8.2 Relation between area integral and volume integral The Gaussian integral theorem establishes a connection between a surface integral and a volume integral. The surface is a closed surface and includes a volume. A closed surface always separates two spaces (interior, exterior). The surface integral (envelope integral) of a vector field � F over a closed surface is equal to ‹ ∂Γ � F d � A = ˚ Γ div � F dV the volume integral of the divergence of the vector � F over the volume V enclosed by the surface. Examples of this are • Gauss’ theorem of electrostatics: ‹ ∂Γ � E d � A = ˚ Γ ρ ε 0 dV. <?page no="69"?> 1.8 Maxwell’s equations 43 • Source free of magnetic flux density � B: ‹ ∂Γ � B d � A = 0. 1.8.3 Maxwell’s equations - differential form In [32], p. 18-2, tab. 18-1 these are given in their differential form ∇ · � E = ρ ε 0 (1.7) ∇ × � E = − ∂ � B ∂t (1.8) ∇ · � B = 0 (1.9) c 2 ∇ × � B = � J ε 0 + ∂ � E ∂t . (1.10) Therefore, they are neither bound to coordinate systems nor do they contain any geometric quantities and are therefore preferably suitable for the setting and rearrangement of equations. 1.8.4 Maxwell’s equations - integral form The differential form of Maxwell’s equations is converted into their integral forms by means of Gauss’s theorem and Stoke’s theorem ‹ ∂Ωg � E �n dA = ˚ Ωg ρ ε 0 dV (1.11) ˛ ∂Ωo � E ds = ¨ Ωo − ∂ � B ∂t dA (1.12) ‹ ∂Ωg � B �n dA = 0 (1.13) c 2 ˛ ∂Ωo � B ds = ¨ Ωo [ � J ε 0 + ∂ � E ∂t ] dA, (1.14) which contain geometrical quantities. 
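The conversion between boundary and surface integrals underlying these integral forms (eq. (1.6), Stokes' integral theorem) can be illustrated with a small symbolic check. In the following sketch the plane vector field F = (-y², xy) and the unit square are assumptions chosen purely for illustration; the boundary integral of F along the four edges equals the surface integral of the z-component of curl F:

# A small sketch (assumed example field) of eq. (1.6) in the plane.
import sympy as sp

x, y, t = sp.symbols('x y t')
F1, F2 = -y**2, x*y                        # assumed vector field components

# Surface integral of (curl F)_z = dF2/dx - dF1/dy over the unit square.
curl_z = sp.diff(F2, x) - sp.diff(F1, y)
surface = sp.integrate(curl_z, (x, 0, 1), (y, 0, 1))

# Boundary integral along the four edges, traversed counterclockwise.
boundary = (
    sp.integrate(F1.subs({x: t, y: 0}), (t, 0, 1))      # bottom, +x direction
    + sp.integrate(F2.subs({x: 1, y: t}), (t, 0, 1))    # right,  +y direction
    - sp.integrate(F1.subs({x: t, y: 1}), (t, 0, 1))    # top,    -x direction
    - sp.integrate(F2.subs({x: 0, y: t}), (t, 0, 1))    # left,   -y direction
)

print(surface, boundary)                   # 3/2 3/2

Both integrals evaluate to 3/2, as eq. (1.6) requires.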
For examples see also [32], p. 18-7. After selecting a coordinate system, the delta elements for the boundaries, areas and volumes have to be determined accordingly. The necessary symbols and their designations are listed in tab. 1.9. <?page no="70"?> 44 Required mathematical basics Table 1.9: Symbols and meanings Symbol Designations Symbol Designations � B Magnetic flux density ρ Normal vector � E Electric field strength ε 0 Permittivity of the vacuum � J Current density Ω o , Ω g Surfaces: open, closed �n Normal vector Γ Boundary from Ω c Speed of light, ds Delta length element c = 1/ √ ε 0 μ 0 dA Delta surface element dV Delta volume element 1.8.5 Directional assignment of involved vector fields The mapping is done using the following convention: • Right-hand rule: vectors resulting from a cross product, or vectors associated with a counterclockwise bounded surface and positive quantities perpendicular to this surface, are assigned to the right-hand rule. As an example, consider the curl of the magnetic field in the equations (1.10) and (1.14). The vectors perpendicular to the surface are called axial vectors (cf. [32], pp. 13-11 f.). • Left-hand rule: A vector, which is connected with a counterclockwise outlined surface and negative downward pointing quantities perpendicular to this surface, was assigned to the left-hand rule. Examples of this are the equations (1.8) and (1.12), whose axial vector becomes negative when the boundary remains constant compared to the right-hand rule. Basic definitions were introduced in ”On Right-handed and Left-handed Relations in Space“ from [52], Art. 23, p. 24 f. Electromagnetic quantities, however, are not always to be assigned to a right-handed or left-handed rule. Here we refer to the literature [31] p. 52-6 ff. and [32] p. 13-11 f.. 1.9 Dirac’s delta function Dirac’s delta function is introduced using the example of the independent variable x in fig. 1.6. The location x 0 denotes the locus of discontinuity. The delta function is <?page no="71"?> 1.9 Dirac’s delta function 45 normalized. The area under its curve is equal to one Figure 1.6: The Dirac’s delta function on the example with one independent variable ˆ + ∞ −∞ δ(x − x 0 ) dx = 1. (1.15) The blanking out property applies ˆ b a δ(x − x 0 ) f(x) dx = ˆ x 0 +� x 0 − � δ(x − x 0 ) f(x) dx = { f(x 0 ) for x = x 0 0 for x � = x 0 . For further reading, see [48]. <?page no="73"?> Chapter 2 Coordinate systems Calculations are simplified by the adapted choice of coordinate systems (COSs). Representatives are the Cartesian coordinate system, the cylinder coordinate system and the spherical coordinate system. They belong to the orthogonal (right-angled) coordinate systems. Here the coordinate axes are perpendicular to each other. The introduced indices of the vector operators refer to the direction of the base and unit vectors. Useful literature: [27] (p. 3 ff.), [47], (p. 16 ff.). 2.1 Cartesian coordinate system The three coordinate axes x, y, z of the spatial, orthogonal coordinate system as well as vector and vector operator definitions are defined with reference to fig. 2.1 are introduced: • Vector: In the Cartesian COS, a differential vector with the components in the direction of the three coordinate axes is d�s(x, y, z) = ds x (x, y, z) �e x + ds y (x, y, z) �e y + ds z (x, y, z) �e z = dx �e x + dy �e y + dz �e z . • Gradient: For the differentiation, the relationships for calculating the gradient field apply in the Cartesian COS it is grad ϕ = ∂ϕ ∂x �e x + ∂ϕ ∂y �e y + ∂ϕ ∂z �e z . 
<?page no="74"?> 48 Coordinate systems Figure 2.1: Cartesian coordinate system The components of the gradient field in the Cartesian COS are as follows grad x ϕ = ∂ϕ ∂x grad y ϕ = ∂ϕ ∂y grad z ϕ = ∂ϕ ∂z . • Divergence: The calculation of the divergence in the Cartesian COS is done with div � A = ∂A x ∂x + ∂A y ∂y + ∂A z ∂z . The meaning of the word divergence describes going apart, striving apart. • Curl: The curl in the Cartesian COS is determined according to curl � A = ∇ × � A = ∣∣∣∣∣∣∣∣ �e x �e y �e z ∂ ∂x ∂ ∂y ∂ ∂z A x A y A z ∣∣∣∣∣∣∣∣ <?page no="75"?> 2.2 Cylinder coordinate system 49 The components of the curl in the Cartesian Cartesian COS are given by curl x � A = ( ∂A z ∂y − ∂A y ∂z ) �e x curl y � A = ( ∂A x ∂z − ∂A z ∂x ) �e y curl z � A = ( ∂A y ∂x − ∂A x ∂y ) �e z . The curl of a vector is curl � A = ( ∂A z ∂y − ∂A y ∂z ) �e x + ( ∂A x ∂z − ∂A z ∂x ) �e y + ( ∂A y ∂x − ∂A x ∂y ) �e z . • Nabla operator: The Nabla operator is defined in the Cartesian COS with ∇ = ∂ ∂x �e x + ∂ ∂y �e y + ∂ ∂z �e z . • Delta operator: The delta operator in the Cartesian COS is defined with Δ = ∂ 2 ∂x 2 + ∂ 2 ∂y 2 + ∂ 2 ∂z 2 . 2.2 Cylinder coordinate system Introduces the coordinates Φ, r and z and definitions for vector and vector operators in relation to fig. 2.2. • Vector: In the cylinder COS, a vector is defined by d�s(Φ, r, z) = ds Φ (Φ, r, z) �e Φ + ds r (Φ, r, z) �e r + ds z (Φ, r, z) �e z = r dΦ �e Φ + dr �e r + dz �e z . <?page no="76"?> 50 Coordinate systems Figure 2.2: Cylindrical coordinate system • Gradient: The gradient of the scalar function ϕ is calculated in the cylinder COS with gradϕ = grad r ϕ �e r + grad Φ ϕ �e Φ + grad z ϕ �e z . Its components are grad r ϕ = ∂ϕ ∂r grad Φ ϕ = 1 r ∂ϕ ∂Φ grad z ϕ = ∂ϕ ∂z . • Divergence: The calculation of the divergence of the vector field � A(r, Φ, z) in the cylinder COS is done with div � A = 1 r ∂ (r A r ) ∂r + 1 r ∂A Φ ∂Φ + ∂A z ∂z . • Curl: The calculation of the curl of the vector field � A(r, Φ, z) in the cylinder COS is done with the relations <?page no="77"?> 2.3 Sphere coordinate system 51 curl � A = ∇ × � A = 1 r ∣∣∣∣∣∣∣∣ �e r r�e Φ �e z ∂ ∂r ∂ ∂Φ ∂ ∂z A r r A Φ A z ∣∣∣∣∣∣∣∣ . The components of curl in the cylinder COS are given by curl r � A = ( 1 r ∂A z ∂Φ − ∂A Φ ∂z ) �e r curl Φ � A = ( ∂A r ∂z − ∂A z ∂r ) �e Φ curl z � A = ( 1 r ∂ (rA Φ ) ∂r − 1 r ∂A r ∂Φ ) �e z . • Nabla operator: The following applies to the Nabla operator in the cylinder COS ∇ = ∂ ∂r�e r + 1 r ∂ ∂Φ�e Φ + ∂ ∂z �e z . • Delta operator: The following applies to the delta operator in the cylinder COS Δ = 1 r ∂ ∂r ( r ∂ ∂r ) + 1 r 2 ∂ 2 ∂Φ 2 + ∂ 2 ∂z 2 . 2.3 Sphere coordinate system The orthogonal curvilinear coordinates r, Θ, Φ and definitions of vector operators with reference to the figures 2.3 and 2.4 are introduced. • Gradient: The gradient of a scalar potential function ϕ is denoted in the spherical COS by gradϕ = grad r ϕ �e r + grad Θ ϕ �e Θ + grad Φ ϕ �e Φ and its components <?page no="78"?> 52 Coordinate systems Figure 2.3: Coordinates and angles of the spherical coordinate system grad r ϕ = ∂ϕ ∂r grad Θ ϕ = 1 r sin Φ ∂ϕ ∂Θ grad Φ ϕ = 1 r ∂ϕ ∂Φ. • Divergence: For the divergence in the sphere COS applies div � A = 1 r 2 ∂ (r 2 A r ) ∂r + 1 r sin Θ ∂(sin Θ A Θ ) ∂ Θ + 1 r sin Θ ∂A Φ ∂Φ . • Curl: The curl of the vector field � A(r, Θ, Φ) in the sphere COS is given by the relation curl � A = ∇ × � A = 1 r 2 sin Θ ∣∣∣∣∣∣∣∣ �e r r �e Θ r sin Θ �e Φ ∂ ∂r ∂ ∂Θ ∂ ∂Φ A r r A Θ r sin Θ A Φ ∣∣∣∣∣∣∣∣ . 
The components of curl in the sphere COS are <?page no="79"?> 2.3 Sphere coordinate system 53 Figure 2.4: Spherical coordinate system curl r � A = ( 1 r ∂A Φ ∂Θ − 1 r sin Θ ∂A Θ ∂ Φ ) �e r curl Θ � A = ( 1 r sin Θ ∂A r ∂Φ − ∂A Φ ∂r ) �e Θ curl Φ � A = ( ∂A Θ ∂r − 1 r ∂A r ∂Θ ) �e Φ . The indices indicate the direction of the curl vector in r-, Θor Φ-direction. • Nabla operator: The Nabla operator in the sphere COS is ∇ = ∂ ∂ r �e r + 1 r ∂ ∂ Θ �e Θ + 1 r sin Θ ∂ ∂ Φ �e Φ . • Delta operator: The delta operator in the sphere COS is given by Δ = 1 r 2 ∂ ∂ r ( r 2 ∂ ∂ r ) + 1 r 2 sin Θ ∂ ∂ Θ ( sin Θ ∂ ∂ Θ ) + 1 r 2 sin 2 Θ ∂ 2 ∂ Φ 2 . <?page no="81"?> Chapter 3 Geometric mean distance - GMD The Geometric Mean Distance (GMD) is a universal tool for the characterization of distances, areas and volumes as well as for the the calculation of the centroid. Furthermore, it is indispensable for the calculation of inductance, because ”There are several problems of great practical importance in electro-magnetic measurements, in which the value of a quantity has to be calculated by taking the sum of the logarithms of the distances of a system of parallel wires from a given point.“ From the Transactions of the Royl Society of Edinburgh, Vol XXVL: On the Geometrical Mean Distance of Two Figures on a Plane A useful reference with many calculated examples is [56]. 3.1 Geometric mean distance - what for? The following shows possible applications of the geometric mean distance and points out its usefulness: • Evaluation of a change of area: Fig. 3.1 a potential surface is shown, which has to be evaluated. Here the mean geometric distance between a chosen location point, in this case the coordinate origin, and the function values can be calculated. A change in the course of the function will be noticeable as a change in the geometric mean distance. <?page no="82"?> 56 Geometric mean distance - GMD Figure 3.1: Evaluation of a potential field • Evaluation of line changes: In fig. 3.2 B(H)-characteristics are visible, which differ only by nuances. The calculation of the geometric mean distance can be used to evaluate the change. • Calculation of a vector potential: Here the calculation between the evaluation point and the point of integration is to be mentioned. • Calculation of inductance: Calculation of self-inductance and mutual inductance (see fig. 3.3). • Centroid calculations of lines, areas and volumes. <?page no="83"?> 3.1 Geometric mean distance - what for? 57 Figure 3.2: Assessment of B(H) characteristics Figure 3.3: Conductor spacing for inductivity calculation <?page no="84"?> 58 Geometric mean distance - GMD 3.2 Geometric mean distance - definitions and basics The basics required to determine the geometric mean distance are summarised in this chapter. 3.2.1 Euclid - The Elements (extracts) The following are the necessary definitions from Euclid [28]: • ”A point is what has no parts.“ • ”A line is a length without width.“ • ”The ends of a line are two points.“ • ”A straight line (stretch) is one that is even with the points on it.“ • ”A surface is that which has only length and width.“ The terms line and stretch are used interchangeably. The German term geometrischer mitteler Abstande, also called Geometric M ean Distance in English, is hereafter abbreviated to GM D and denotes its distance or radius R. 3.2.2 Arithmetic means - definition Usually the (arithmetic) mean x of the values (measured values) is also called arithmetic mean [17], p. 5. 
The arithmetic mean x is given by x = 1 n n ∑ i=1 x i , (3.1) where n is the number of singular values and x i the singular values. Examples of this are given in tab. 3.1. It can be seen that the arithmetic mean always assumes a value in the middle of the number series. Further literature on this [1], p. 531. <?page no="85"?> 3.2 Geometric mean distance - definitions and basics 59 Table 3.1: Examples for the calculation of the arithmetic mean x 1 = 1 5 (100 + 200 + 300 + 400 + 500) = 300 x 2 = 1 5 (10 + 20 + 30 + 40 + 50) = 30 x 3 = 1 5 (1 + 2 + 3 + 4 + 5) = 3 x 4 = 1 4 (1 + 2 + 3 + 4) = 2.5 x 5 = 1 5 (0.1 + 0.2 + 0.3 + 0.4 + 0.5) = 0.3 x 6 = 1 5 (0.01 + 0.02 + 0.03 + 0.04 + 0.05) = 0.03 x 7 = 1 5 (0.001 + 0.002 + 0.003 + 0.004 + 0.005) = 0.003 3.2.3 Geometric mean - definition The geometric mean R is defined as the n ′ th root of the products of its singular values. R = n √ x 1 x 2 . . . x n = n √√√√ n ∏ i=1 x i = ( n ∏ i=1 x i ) 1 n . Here n is the number of individual values. The geometric mean is always less than or equal to the arithmetic mean. See also [1], p. 532. Examples of the geometric mean are given in tab. 3.2. Table 3.2: Examples for calculating the geometric mean R 1 = 5 √ 10 · 20 · 30 · 40 · 50 = 26 R 2 = 5 √ 1 · 2 · 3 · 4 · 5 = 2.6 R 3 = 4 √ 1 · 2 · 3 · 4 = 2.21 R 4 = 5 √ 0.1 · 0.2 · 0.3 · 0.4 · 0.5 = 0.26 R 5 = 5 √ 0.01 · 0.02 · 0.03 · 0.04 · 0.05 = 0.026 For example, 2 √ ab is the geometric mean of a and b, the product of which is an equalarea rectangle with edge length 2 √ ab. The geometric mean is also obtained from eq. (3.1) with logarithmising on both sides <?page no="86"?> 60 Geometric mean distance - GMD ln R = ln x = 1 n n ∑ i=1 ln x i e ln R = e 1 n ∑ n i=1 ln x i R = e 1 n ∑ n i=1 ln x i = n √ x 1 x 2 . . . x n . The size of the GMD takes a value between the smallest and largest distance. 3.2.4 GMD - possible combinations In fig. 3.4 possible combinations of distances between points, lines and geometric figures are shown. Figure 3.4: Distance combination possibilities <?page no="87"?> 3.2 Geometric mean distance - definitions and basics 61 3.2.5 GMD - graphical interpretation Here we refer in particular to A Treatise on Electricity and Magnetism Vol. II [53] as well as [51]. The following definitions can be taken from the literature [61], p. 166: Figure 3.5: Distances r between point and geometric figures to calculate the mean geometric distance R ”The geometrical mean distance of a point from a line is the n th root of the product of <?page no="88"?> 62 Geometric mean distance - GMD the n distances from the point P to the various points in the line, n being increased to infinity in determining the value of R. Or, the logarithm of R is the mean value of log d for all the infinite values of the distance d. Similarly, the geometrical mean distance of a line from itself is the n th root of the product of the n distances between all the various pairs of points in the line, n being infinity. Similar definitions apply to the g. m. d. of one area from another, or of an area from itself.“ The procedure presented below is not subject to any law of nature and was chosen for didactic reasons and is based on the assumption that the handling of a surface can be explained more easily, figuratively speaking, than the handling of a point. Hence the introduction by means of planes, which in the course of the work change to lines and points. The calculation of the mean geometric distance is limited to surfaces, lines and points in this script. 
The calculation possibilities of a GM D between the geometric objects are shown in the figures 3.5 to 3.7, with the following explanations: • Fig. 3.5 a): The summation is done at the beginning over all elements of the area A 1 multiplied by the corresponding logarithm of the radius r(ΔA 1 ) followed by the summation of all elements of the area A 2 with the corresponding logarithms of the respective radius r(ΔA 2 ). By division with A 1 A 2 and raise in the exponent, the GM D R follows. • Fig. 3.5 b): The integration is done first with the logarithm r(x 2 , y 2 ) over the area A 2 , followed by the integration of the logarithm of r(x 1 , y 1 ) over the area A 1 . By dividing by A 1 A 2 and raising in the exponent, the GM D R follows. • Fig. 3.5 c): Opposite fig. 3.5 a) the surface A 1 shrinks to a evaluation point P and its position outside the surface A 2 is defined by its coordinates. The summation is done over all elements ΔA. By division with A and raising in the exponent the GM D R follows. For further illustration the conversion takes place A log R = N ∑ i=1 log r i ΔA log R = 1 A N ∑ i=1 log r i ΔA. If A = N ΔA, then it follows that <?page no="89"?> 3.2 Geometric mean distance - definitions and basics 63 N ΔA log R = N ∑ i=1 log r i ΔA log R = 1 N N ∑ i=1 log r i . • Fig. 3.5 d): The integration is done over the area A with the evaluation point radius r. By division with A and raise in the exponent the GM D R follows. Figure 3.6: Distances r between point and line to calculate the mean geometric distance R • Fig. 3.6 a): If in fig. 3.5 c) the surface A changes into a line, then the line length AB remains, which is discretised and whose line elements are assigned radii r. The sum of the logarithms r divided by the line length AB and raising to the exponent leads to the searched GM D R. For illustration purposes, the conversion <?page no="90"?> 64 Geometric mean distance - GMD AB log R = N ∑ i=1 log r i Δs log R = 1 AB N ∑ i=1 log r i Δs is made. If AB = N Δs it follows that N Δs log R = N ∑ i=1 log r i Δs log R = 1 N N ∑ i=1 log r i . • Fig. 3.6 b): Initial situation as in a). Replace the sigma sign by the integral sign and Δs by ds. Let the distance r be a function of the coordinates x and y. • If the remaining line AB in fig. 3.6 c) turns into a point, the radius r between two points remains, which corresponds to the GM D by raising it to the exponent. <?page no="91"?> 3.2 Geometric mean distance - definitions and basics 65 Figure 3.7: Distances r of geometric objects on themselves to calculate the mean geometric distance R It is interesting to know that a surface and a distance also have a mean geometric distance to themselves, within themselves, which is used in particular in the calculation of the self-inductance. • Fig. 3.7 a): If two separate, identical surfaces are mentally placed on top of each other, the arrangement according to fig. 3.7 a). The mean geometric distance of a surface from itself corresponds to the double sum of the logarithmic individual <?page no="92"?> 66 Geometric mean distance - GMD distances r i divided by the square of the surface. The raise in the exponents leads to the sought distance R. • Fig. 3.7 b): Situation as in fig. 3.7 a). The integration takes place first over area A, followed by the second integration over area A. • Fig. 3.7 c): A line is discretised into line elements and the double sum is calculated from the logarithmic distance r i . A subsequent division by the square of the line length with rise in the exponent leads to the sought R. 
• Fig. 3.7 d): Stand as in 3.7 c). Replace summation by integration and continue as in c). 3.2.6 Why geometric mean? The author in [61], p. 171 points out that when calculating the selfand mutual inductances with geometric mean distances (GMD), more accurate values may be obtained by including arithmetic mean distances (AMD) and arithmetic mean square distances in conjunction with geometric mean distances. An example of a deviating result is the comparison of calculated distances R of a line s to itself: • GMD of a line to itself: R = 0.22 s • AMD of a line on itself: R = 0.33 s. 3.3 GMD of two collinear lines In fig. 3.8 two lines (collinear) lying on a straight line are visible. The GMD between the two lines is sought. Figure 3.8: Arrangement for calculating the GMD of two collinear lines <?page no="93"?> 3.3 GMD of two collinear lines 67 3.3.1 GMD calculation - numerical solution Both lines are discretised into n and m line elements b c ln R = N c ∑ i=1 N b ∑ j=1 ln r i,j Δx j Δx i ln R = 1 b · c N c ∑ i=1 N b ∑ j=1 ln | x j − x i | Δx j Δx i , with r i,j = | x j − x i | . Furthermore, with b = N b Δx j and c = N c Δx i it follows ln R = 1 N b N c N c ∑ i=1 N b ∑ j=1 ln | x j − x i | . 3.3.2 GMD calculation - analytical solution This is followed by the analytical solution with b c ln R = ˆ a+b a ˆ c 0 ln r dx 1 dx 2 ln R = 1 b c ˆ a+b a ˆ c 0 ln | x 2 − x 1 | dx 1 dx 2 , where r = | x 2 − x 1 | . In the continuation, the logarithm of the mean geometric distance follows for the fig. 3.8 with ln R = 1 b c [ ln | a + b − c | [( ac + bc − c 2 ) − (a + b) 2 − c 2 2 ] − ln | a − c | [( ac − c 2 ) − a 2 − c 2 2 ] − 3bc 2 +(a + b) 2 2 ln | a + b | − a 2 2 ln | a | ] , which must then be raised to the exponent. If the two partial distances are assumed to be of equal length with b = c = s, it follows ln R = 1 s 2 [ ln | a + s | ( as + a 2 + s 2 2 ) − ln | a | s 2 − 3s 2 2 − ln | a − s | ( as − s 2 + a 2 2 )] <?page no="94"?> 68 Geometric mean distance - GMD the logarithm of the GMD. The derivation of this can be found in chap. A.2. Compare also [61], p. 168, eq. (130). 3.3.3 GMD calculation - example The GMD of the line arrangement is to be calculated according to fig. 3.9. The required information can be found in tab. 3.3. Figure 3.9: Example arrangement for calculating the GMD of two collinear lines Numerical solution: A discretisation of the two lines a and b was chosen that was as simple as possible. Each resulting line element was assigned a radius r ij adjacent to the right of the line element. The numerical solution is done with the help of tab. 3.3. <?page no="95"?> 3.3 GMD of two collinear lines 69 Table 3.3: Parameter example 1 of fig. 3.9 N c = 5; N b = 4; Δx i = Δx j = 1; a = 7, b = 4; c = 5 x i x j r ij = a − x i + x j x 1 = 1 x 1 = 1 r 11 = 7 x 2 = 2 x 1 = 1 r 21 = 6 x 3 = 3 x 1 = 1 r 31 = 5 x 4 = 4 x 1 = 1 r 41 = 4 x 5 = 5 x 1 = 1 r 51 = 3 x 1 = 1 x 2 = 2 r 12 = 8 x 2 = 2 x 2 = 2 r 22 = 7 x 3 = 3 x 2 = 2 r 32 = 6 x 4 = 4 x 2 = 2 r 42 = 5 x 5 = 5 x 1 = 2 r 52 = 4 x 1 = 1 x 3 = 3 r 13 = 9 x 2 = 2 x 3 = 3 r 23 = 8 x 3 = 3 x 3 = 3 r 33 = 7 x 4 = 4 x 3 = 3 r 43 = 6 x 5 = 5 x 3 = 3 r 53 = 5 x 1 = 1 x 4 = 4 r 14 = 10 x 2 = 2 x 4 = 4 r 24 = 9 x 3 = 3 x 4 = 4 r 34 = 8 x 4 = 4 x 4 = 4 r 44 = 7 x 5 = 5 x 4 = 4 r 54 = 6 ∑ N c 1 ∑ N b 1 ln r i,j = 36.59 • Example 1 - numerical solution: With fig. 3.9 and the information from tab. 
3.3 it follows <?page no="96"?> 70 Geometric mean distance - GMD ln R = 1 N b · N c N c ∑ i=1 N b ∑ j=1 ln | x j − x i | = 1 5 · 4 5 ∑ 1 4 ∑ 1 ln r i,j = 1 20 36.59 = 1.83 R = e 1.83 = 6.26. • Example 2 - numerical solution: Using the information from tab. 3.4 it follows ln R = 1 N b · N c N c ∑ i=1 N b ∑ j=1 ln | x j − x i | = 1 2 · 3 3 ∑ 1 2 ∑ 1 ln r i,j = 1 6 10.41 = 1.73 R = e 1.73 = 5.67. Analytical solution: • Example 1 - analytical solution: With fig. 3.8 and the data of tab. 3.3 it follows <?page no="97"?> 3.3 GMD of two collinear lines 71 Table 3.4: Parameters example 2 N c = 3; N b = 2; Δx i = Δx j = 2; a = 7, b = 4; c = 6 x i x j r ij = a − x i + x j x 1 = 2 x 1 = 2 r 11 = 7 x 2 = 4 x 1 = 2 r 21 = 5 x 3 = 6 x 1 = 2 r 31 = 3 x 1 = 2 x 2 = 4 r 12 = 9 x 2 = 4 x 2 = 4 r 22 = 7 x 3 = 6 x 2 = 4 r 32 = 5 ∑ N c 1 ∑ N b 1 ln r i,j = 10.41 ln R = 1 4 · 5 [ ln | 7 + 4 − 5 | [ (35 + 20 − 25) − (11) 2 − 25 2 ] − ln | 2 | [ (35 − 25) − 49 − 25 2 ] − 60 2 +(11) 2 2 ln | 11 | − 49 2 ln | 7 | ] = 1 20 36.53 = 1.826 R = e 1.826 = 6.21. • Example 2 - analytical solution: Using the data from tab. 3.4 it follows <?page no="98"?> 72 Geometric mean distance - GMD ln R = 1 4 · 6 [ ln | 7 + 4 − 6 | [ (42 + 24 − 36) − 121 − 36 2 ] − ln | 1 | [ (42 − 36) − 7 2 − 6 2 2 ] − 72 2 +(11) 2 2 ln | 11 | − 49 2 ln | 7 | ] = 1 24 41.28 = 1.72 R = e 1.72 = 5.58. Summarization: The results summarised in tab. 3.5 show • a good agreement between analytically and numerically calculated results. • the mutual complementarity of both methods as well as the reduction of the risk of error. Table 3.5: Comparison of the results from R Reference to Method Δx i N c Δx j N b R Fig. 3.9, tab. 3.3 numerical solution 1 5 1 4 6.26 Fig. 3.8, tab. 3.3 analytical solution 6.21 tab. 3.4 numerical solution 2 3 2 2 5.57 tab. 3.4 analytical solution 5.58 3.4 GMD of a collinear arrangement between a point and a line Goes into fig. 3.8 c → 0, it follows that fig. 3.10, a collinear arrangement between the point P 1 and the line b. <?page no="99"?> 3.4 GMD of a collinear arrangement between a point and a line 73 Figure 3.10: Arrangement for calculating the GMD of a point and collinear lines 3.4.1 GMD calculation - numerical solution The procedure is as shown in fig. 3.6 a). The line is discretised into N line elements. The radius is r i = a + i Δx and b = N Δx with which b ln R = N ∑ i=1 ln | r i | Δx ln R = 1 N N ∑ i=1 ln | r i | becomes. 3.4.2 GMD calculation - analytical solution The procedure for the analytical treatment of the problem in fig. 3.10 is b ln R = ˆ a+b a ln | r | dr ln R = 1 b ˆ a+b a ln | r | dr. With integral (A.1) follows ln R = 1 b [r ln(r) − r] a+b a = 1 b [(a + b) ln(a + b) − (a + b) − (a ln(a) − a)] = 1 b [(a + b) ln(a + b) − b − a ln(a)] . For a = 0 it follows with <?page no="100"?> 74 Geometric mean distance - GMD lim a → 0 a ln | a | = 0. Thus remains ln R = 1 b (b ln(b) − b) = ln(b) − 1 R = b e = 0.37 b. 3.4.3 GMD calculation - example With a selected example, the numerical and analytical calculation including a comparison of results is carried out. Numerical solution: Figure 3.11: Arrangement for numerical calculation of the GMD of a point and collinear lines <?page no="101"?> 3.4 GMD of a collinear arrangement between a point and a line 75 Table 3.6: Example data for fig. 3.11 a = 10; b = 6; a = 0; b = 6; r i Δx = 1; N = 6 Δx = 0.5; N = 12 Δx = 1; N = 6 r 1 = 11 r 1 = 10,5 r 1 = 1 r 2 = 12 r 2 = 11,0 r 2 = 2 r 3 = 13 r 3 = 11,5 r 3 = 3 r 4 = 14 r 4 = 12,0 r 4 = 4 r 5 = 15 . . . = . . . 
r 5 = 5 r 6 = 16 r 12 = 16 r 6 = 6 ∑ ln | r i | = 13.4 ∑ ln | r i | = 30.09 ∑ ln | r i | = 6.58 R = 13.39 R = 13.14 R = 2.99 Analytical solution: With the data from Tab. 3.6 follows ln R = 1 6 ((10 + 6) ln(10 + 6) − 6 − 10 ln 10) = 1 6 15.33 = 2.56 R = 12.88. Summarization: Table 3.7: Comparison of the results of R of the figures 3.10 and 3.11 Reference to Δx N R Fig. 3.11 1 6 13.39 Fig. 3.11 0.5 12 13.14 Fig. 3.10 analytical solution 12.88 Fig. 3.10 analytical solution for a = 0 2.22 Fig. 3.10 numerical solution f¨ ur a = 0 2.99 The results summarised in tab. 3.7 show: <?page no="102"?> 76 Geometric mean distance - GMD • Result depends on discretisation. • Increasing discretisation makes numerical result approximate to analytical result. 3.5 GMD of a line on itself If in fig. 3.8 a → 0 and b = c = s, it follows that fig. 3.12, in which there are two congruent lines lying on top of each other. This assumption also ensures that two lines are always involved in the calculation of the GMD. Figure 3.12: Arrangement for calculating the GMD of a line on itself What is sought is the GMD of a line to oneself. 3.5.1 GMD calculation - analytical solution s s ln R = ˆ s 0 ˆ s 0 ln | r | dx 1 dx 2 ln R = 1 s 2 ˆ s 0 ˆ s 0 ln | x 2 − x 1 | dx 1 dx 2 = ln | s | − 3 2 with r = | x 2 − x 1 | . The details required for this are given in chap. A.3. It follows R = e − 3/ 2 s = 0.22 s. The result agrees with [53] p. 296. <?page no="103"?> 3.5 GMD of a line on itself 77 3.5.2 GMD calculation - numerical solution The arrangement of fig. 3.12 is solved numerically in the progress. This is done by discretising the line into line elements. The procedure is that each line element is given a radius at its element’s end (point 1 ), starting from the evaluation point P . Figure 3.13: Arrangement for calculating the GMD of a line on itself for the numerical calculation Table 3.8: For example 1: Radius calculations of fig. 3.13 N = 4; Δx = 2; s = 8 i j r i,j ln | r i,j | i j r i,j ln | r i,j | 1 1 −− −− 1 3 2 · 2 = 4 1.37 2 1 1 · 2 = 2 0.69 2 3 1 · 2 = 2 0.69 3 1 2 · 2 = 4 1.37 3 3 −− −− 4 1 3 · 2 = 6 1.79 4 3 1 · 2 = 2 0.69 1 2 1 · 2 = 2 0.69 1 4 3 · 2 = 6 1.79 2 2 −− −− 2 4 2 · 2 = 4 1.37 3 2 1 · 2 = 2 0.69 3 4 1 · 2 = 2 0.69 4 2 2 · 2 = 4 1.37 4 4 −− −− ∑ ∑ ln | r i,j | = 13.29 • Example 1 to tab. 3.8: ln R = 1 s 2 · N ∑ i=1 N ∑ j=1 ln | x j − x i | Δx j Δx i ; i � = j. 1 In Euclid’s words, a line begins and ends with a point. <?page no="104"?> 78 Geometric mean distance - GMD With s = N Δx and r i,j = | x i − x j | follows ln R = 1 N 2 · N ∑ i=1 N ∑ j=1 ln | r i,j | = 1 4 2 13.29 = 0.83 R = e 0.83 = 2.29 • Example 2: ln R = 1 N 2 · N ∑ i=1 N ∑ j=1 ln | r i,j | = 1 5 2 19.32 = 0.77 R = e 0.77 = 2.16 Table 3.9: For example 2: Radius calculations of fig. 3.13 N = 5; Δx = 1.6; s = 8 i j r i,j ln | r i,j | i j r i,j ln | r i,j | 1 1 −− −− 1 5 4 · 1.6 = 6.4 1.86 2 1 1 · 1.6 = 1.6 0.47 2 5 3 · 1.6 = 4.8 1.57 3 1 2 · 1.6 = 3.2 1.16 3 5 2 · 1.6 = 3.2 1.16 4 1 3 · 1.6 = 4.8 1.57 4 5 1 · 1.6 = 1.6 0.47 5 1 4 · 1.6 = 6.4 1.86 5 5 −− −− . . . . . . . . . . . . . . . . . . . . . . . . ∑ ∑ ln | r i,j | = 19.32 3.5.3 GMD calculation - summary The summary of results is given in Tab. 3.10. <?page no="105"?> 3.6 GMD of two parallel lines 79 Table 3.10: Result juxtaposition for s = 8 Reference to Method Δx N R Fig. 3.13, tab. 3.8 numerical solution 2 4 2.29 Fig. 3.13, tab. 3.9 numerical solution 1.6 5 2.16 Fig. 
3.12 analytical solution - - 1.76 Assessment: • Convergence of the numerical solutions towards the analytical solution can be seen. • Analytical solution assumes a minimum for R. 3.6 GMD of two parallel lines In fig. 3.14 two parallel lines of different length are visible, from which the GMD is calculated. Figure 3.14: Double line arrangement for calculating the GMD 3.6.1 GMD calculation - numerical solution Following fig. 3.5 a) follows with fig. 3.15 a discretisation of both lines as well as <?page no="106"?> 80 Geometric mean distance - GMD AB ln R = N A ∑ i=1 N B ∑ j=1 ln | r i,j | ΔA ΔB ln R = 1 AB N A ∑ i=1 N B ∑ j=1 ln | r i,j | ΔA ΔB. With A = N A ΔA and B = N B ΔB becomes log R = 1 N A N B N A ∑ i=1 N B ∑ j=1 ln | r i,j | . Figure 3.15: Double line arrangement for numerical calculation of the GMD 3.6.2 GMD calculation - analytical solution The analytical solution is done by means of double integration over the two lines A and B A B ln R = ˆ B 0 ˆ A 0 ln | r | da db ln R = 1 A B ˆ B 0 ˆ A 0 ln( √ (a − b) 2 + c 2 ) da db with r = √ (a − b) 2 + c 2 . The solution describes eq. (A.4), the derivation of which is given in chap. A.4. <?page no="107"?> 3.6 GMD of two parallel lines 81 3.6.3 GMD calculation - example If in fig. 3.14 A = B = s, it follows that fig. 3.16. Figure 3.16: Double line arrangement for numerical calculation of the GMD Numerical solution: The numerical solution starts with the definition of the radius r i with r i = √ (a Δx − b Δx) 2 + c 2 as well as Table 3.11: Radius calculations from fig.3.16 N = 4; Δx = 1; s = 4; c = 2 a Δx b Δx r i a Δx b Δx r i 1 1 √ 4 1 3 √ 8 2 1 √ 5 2 3 √ 5 3 1 √ 8 3 3 √ 4 4 1 √ 13 4 3 √ 5 1 2 √ 5 1 4 √ 13 2 2 √ 4 2 4 √ 8 3 2 √ 5 3 4 √ 5 4 2 √ 8 4 4 √ 4 ∑ ∑ ln | r i,j | = 14.32 <?page no="108"?> 82 Geometric mean distance - GMD s s ln R = N ∑ i=1 N ∑ j=1 ln | r i,j | Δx j Δx i . With s = N Δx it follows ln R = 1 N 2 N ∑ i=1 N ∑ j=1 ln | r i,j | = 1 16 14.32 = 0.9 R = e 0.9 = 2.45. Proof: R = 16 √ √ 4 · . . . · √ 4 = 2.45. Analytical solution: According to eq. (A.5 ) and the data in tab. 3.11 is ln R = 1 s 2 [ s 2 ( ln( √ s 2 + c 2 ) − 3 2 ) + c 2 2 ( ln(c 2 ) − ln(s 2 + c 2 ) ) + 2sc arctan ( s c )] = 1 4 2 [ 4 2 ( ln( √ 4 2 + 2 2 ) − 3 2 ) + 2 2 2 ( ln(2 2 ) − ln(4 2 + 2 2 ) ) + 2 4 2 arctan ( 4 2 )] = 0.9 R = e 0.90 = 2.47. Summary: Tab. 3.12 shows a juxtaposition of results. <?page no="109"?> 3.7 GMD of a point and a helix 83 Table 3.12: Results comparison Reference to Method R Fig. 3.16 numerical solution 2.45 Fig. 3.16 analytical solution 2.47 Assessment: • Numerical method with comparative coarse discretisation gives a good approximation to the analytical solution. • Analytical solution assumed to be exact solution. 3.7 GMD of a point and a helix In fig. 3.17 a cylindrical spiral (helix) with radius R and constant pitch P at an axial length l in z-direction is shown. Figure 3.17: Helix with dimensions For this arrangement, the mean geometric distance (GMD) is to be calculated with reference to the evaluation point P 1 (cf. fig. 3.18). The integration point P 2 is located on the spiral and runs on it from the beginning to its end. <?page no="110"?> 84 Geometric mean distance - GMD Figure 3.18: Helix with integration and evaluation point 3.7.1 Length of an unwound helix For handling reasons, the further descriptions of the helix are made with reference to the axial direction z. Thus l z is the independent parameter. The differential helix length Δs (unwound length) is Δs ≈ √ Δu 2 + Δl 2 z . 
In detail to be calculated are: • Differential circumference Δu: Δu = Δu(l z ) = R Δϕ(l z ) • Differential angle Δϕ: The calculation is done with Δϕ(l z ) = 2π P Δl z . Thus the differential circumference Δu follows with Δu = R 2π P Δl z . <?page no="111"?> 3.7 GMD of a point and a helix 85 The differential unwound helix length of fig. 3.19 thus becomes Δs ≈ √( R 2π P Δl z ) 2 + Δl 2 z ≈ √( R 2π P + 1 ) 2 Δl 2 z ≈ ( R 2π P + 1 ) ︸ ︷︷ ︸ Δl z ≈ C Δl z . Figure 3.19: Helix partial view with arrangement of the differential variables The unwound helix length thus becomes Δs ≈ C Δl z s ≈ C ∑ Δl z s = C ˆ l 0 dl z s = C l. Proof: • Pitch P → 0: Assumes that Δl z → 0 goes over Δu → 2πR, which means δs = δu, or s = u. <?page no="112"?> 86 Geometric mean distance - GMD • Pitch P → ∞ : The constant C takes the value 1. Thus the unwound helix length s takes the length l in fig. 3.17. See also fig. 3.19. 3.7.2 GMD calculation - analytical solution In chap. 3.7.1 it could be shown that • the unwound helix length is a multiplication of the axial length l by a constant C. • with interpretation of fig. 3.18 the locomotion of the integration point P 2 in the direction of the arrow changes only the axial length while keeping the radius R. • with the mentioned points the calculation of the GMD to the calculation of the GMD of fig. 3.23 with P 1 , P 2 for x = 0 can be transferred. For example, if the double line arrangement of fig. 3.16 is wound with pitch P and radius R, this yields a double helix arrangement which can be transformed to fig. 3.17 would differ only by one more helix, and whose GMD would be given according to chap. 3.6. 3.8 GMD point outside line with its perpendicular on line centre Given is the arrangement according to fig. 3.20 with the line s and the evaluation point P 1 , which is at distance h in the perpendicular of the line bisector (P ′ ). Figure 3.20: Arrangement for calculating the GMD between a point and a line We are looking for the GM D R between the distance s and the evaluation point P 1 . <?page no="113"?> 3.8 GMD point outside line with its perpendicular on line centre 87 3.8.1 GMD calculation - numerical solution I The line element is marked on the right side with the distance r i . A distance r i is assigned to each section element. The calculated radii are shown in Tab. 3.13. Figure 3.21: Arrangement for numerical calculation of the GMD of fig. 3.20 with radius lying outside Table 3.13: Radius calculations of fig. 3.21 r i s = 8; Δx = 1; N = 8; h = 8; P 2 = s/ 2 r 1 = √ (4Δx) 2 + h 2 = √ 80 = 8.94 r 2 = √ (3Δx) 2 + h 2 = √ 73 = 8.54 r 3 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 4 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 5 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 6 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 7 = √ (3Δx) 2 + h 2 = √ 73 = 8.54 r 8 = √ (4Δx) 2 + h 2 = √ 80 = 8.94 ∑ ln | r i | = 17.07 ln R = 1 8 8 ∑ i=1 ln | r i | = 1 8 · 17.07 = 2.13 R = e 2.13 = 8.44. Proof: <?page no="114"?> 88 Geometric mean distance - GMD R = 8 √ √ 80 √ 73 · . . . · √ 80 = 8.44. 3.8.2 GMD calculation - numerical solution II The line element is labelled on the left side with the distance r i . A distance r i is assigned to each line element. The calculations of the radii are shown in Tab. 3.14. Figure 3.22: Arrangement for numerical calculation of the GMD of fig. 3.20 with radius lying inside Table 3.14: Radius calculations of fig. 
3.22 r i s = 8; Δx = 1; N = 8; h = 8; P 2 = s/ 2 r 1 = √ (3Δx) 2 + h 2 = √ 73 = 8.54 r 2 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 3 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 4 = √ (0Δx) 2 + h 2 = √ 64 = 8.00 r 5 = √ (0Δx) 2 + h 2 = √ 64 = 8.00 r 6 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 7 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 8 = √ (3Δx) 2 + h 2 = √ 73 = 8.54 ∑ ln | r i | = 16.84 <?page no="115"?> 3.8 GMD point outside line with its perpendicular on line centre 89 ln R = 1 8 8 ∑ i=1 ln | r i | = 1 8 · 16.84 = 2.11 R = e 2.11 = 8.21. Proof: R = 8 √ √ 73 √ 68 · . . . · √ 73 = 8.21. 3.8.3 Analytical solution and example calculation The GMD of the arrangement from fig. 3.20 is calculated analytically, followed by an example application. Analytical solution: Incorporating the coordinate system from fig. 3.20 follows ln R = 1 s N ∑ i=1 ln | r i | · Δx with Δx → 0 und N → ∞ into its integral representation ln R = 1 s ˆ s 0 ln (r(x)) dx. With r(x) = √ (x − P ′ ) 2 + h 2 it follows ln R = 1 s ˆ s 0 ln (√ (x − P ′ ) 2 + h 2 ) dx = 1 s [ − x − ln (√ (P ′ − x) 2 + h 2 ) (P ′ − x) − h arctan ( P ′ − x h )] x=s x=0 = 1 s [ − s − ln (√ (P ′ − s) 2 + h 2 ) (P ′ − s) − h arctan ( P ′ − s h ) + ln (√ P ′ 2 + h 2 ) P ′ + h arctan ( P ′ h )] . (3.2) <?page no="116"?> 90 Geometric mean distance - GMD Example of an analytical solution: With the line length s = h = 8 follows ln R = 1 8 [ − 8 − ln (√ (4 − 8) 2 + 8 2 ) (4 − 8) − 8 arctan ( 4 − 8 8 ) + ln ( √ 4 2 + 8 2 ) 4 + 8 arctan ( 4 8 )] = 2.12 R = e 2.12 = 8.32. 3.8.4 GMD calculation - summary The results are summarised in tab. 3.15. Table 3.15: Comparison of results for R Bezug auf Δx N R Abb. 3.21 1 8 8.44 Abb. 3.22 1 8 8.21 Abb. 3.20 analytical solution 8.32 Assessment: • Analytical solution lies between the two numerical solutions. • Numerical solutions show deviations to the analytical solution and to each other 3.9 GMD point outside line with its perpendicular on line end Given is the arrangement according to fig. 3.23 with a line AB and a point P 1 which is positioned vertically above the end of the line B at a distance h. Thus the line BP 1 forms the perpendicular to the line AB. <?page no="117"?> 3.9 GMD point outside line with its perpendicular on line end 91 Figure 3.23: Arrangement for calculating the GMD between line and point We are looking for the GM D R between the line AB and point P . 3.9.1 GMD calculation - radius right at the element To calculate R, the line AB is discretised into individual line elements and their length is varied with Δx = 1, Δx = 2 and Δx = 4. The original line length is maintained. GMD with increment Δx = 1 und N = 8 In fig. 3.24 the discretised line AB is shown with the radius assignments at the right element point, r 1 to r 8 . Figure 3.24: Procedure for calculating the GMD of a line The calculation is carried out with ln R = 1 AB N ∑ i=1 ln | r i | · Δx. <?page no="118"?> 92 Geometric mean distance - GMD In tab. 3.16 summarises the radius calculations and their results. Table 3.16: Radius calculations of fig. 3.24 r i AB = 8; Δx = 1; N = 8; h = 8 r 1 = √ (0Δx) 2 + h 2 = √ 64 = 8.0 r 2 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 3 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 4 = √ (3Δx) 2 + h 2 = √ 73 = 8.54 r 5 = √ (4Δx) 2 + h 2 = √ 80 = 8.94 r 6 = √ (5Δx) 2 + h 2 = √ 89 = 9.43 r 7 = √ (6Δx) 2 + h 2 = √ 100 = 10.0 r 8 = √ (7Δx) 2 + h 2 = √ 113 = 10.63 ∑ ln | r i | = 17.52 ln R = 1 N N ∑ i=1 ln | r i | = 1 8 · 17.52 = 2.19 R = e 2.19 = 8.94. Proof: R = 8 √ √ 64 √ 65 · . . . · √ 113 = 8.94. GMD with increment Δx = 2 and N = 4: In tab. 
3.17 the radius calculation with extended line elements and reduced number is summarised. <?page no="119"?> 3.9 GMD point outside line with its perpendicular on line end 93 Table 3.17: Radius calculations of fig. 3.24 with increased step size Δx and reduced segment number N r i AB = 8; Δx = 2; N = 4; h = 8 r 1 = √ (0Δx) 2 + h 2 = √ 64 = 8.0 r 2 = √ (1Δx) 2 + h 2 = √ 68 = 8.25 r 3 = √ (2Δx) 2 + h 2 = √ 80 = 8.94 r 4 = √ (3Δx) 2 + h 2 = √ 100 = 10.0 ∑ ln | r i | = 8.68 ln R = 1 4 N ∑ i=1 ln | r i | = 1 4 · 8.68 = 2.17 R = e 2.17 = 8.76. Proof: R = 4 √ √ 64 · . . . · √ 100 = 8.76. GMD with increment Δx = 4 and N = 2: In Tab. 3.18 the line elements were again enlarged and their number reduced so that the total length of the line remained the same. Table 3.18: Radius calculations of fig. 3.24 with increased step size Δx and decreased element number N r i AB = 8; Δx = 4; N = 2; h = 8 r 1 = √ (0Δx) 2 + h 2 = √ 64 = 8.0 r 2 = √ (1Δx) 2 + h 2 = √ 80 = 8.99 ∑ ln | r i | = 4.27 The calculation of R is carried out with <?page no="120"?> 94 Geometric mean distance - GMD ln R = 1 2 N ∑ i=1 ln | r i | = 1 2 · 4.27 = 2.1 R = e 2.1 = 8.46. Proof: R = 2 √ √ 64 · √ 80 = 8.46. 3.9.2 GMD calculation - radius left at the element To calculate R, the line AB is discretised and a line element variation with Δx = 1, Δx = 2 and Δx = 4, keeping its original length, is performed. GMD with increment Δx = 1 and N = 8: In fig. 3.25 the discretisation as well as the assignments of the individual radii r i to their line elements can be seen. Figure 3.25: GMD of a line The calculation of the individual radii can be seen in tab. 3.19. <?page no="121"?> 3.9 GMD point outside line with its perpendicular on line end 95 Table 3.19: Radius calculations of fig. 3.25 r i AB = 8; Δx = 1; N = 8; h = 8 r 1 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 2 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 3 = √ (3Δx) 2 + h 2 = √ 73 = 8.54 r 4 = √ (4Δx) 2 + h 2 = √ 80 = 8.94 r 5 = √ (5Δx) 2 + h 2 = √ 89 = 9.43 r 6 = √ (6Δx) 2 + h 2 = √ 100 = 10.0 r 7 = √ (7Δx) 2 + h 2 = √ 113 = 10.63 r 8 = √ (8Δx) 2 + h 2 = √ 128 = 11.31 ∑ ln | r i | = 17.86 Thus the calculation of R follows with ln R = 1 8 N ∑ i=1 ln | r i | = 1 8 · 17.86 = 2.23 R = e 2.23 = 9.33. Proof: R = 8 √ √ 65 √ 68 · . . . · √ 128 = 9.33. GMD with increment Δx = 2 and N = 4: The linear elements Δx were lengthened and their number N reduced, so that the distance AB remained the same. The corresponding radius calculation is given in tab. 3.20. <?page no="122"?> 96 Geometric mean distance - GMD Table 3.20: Radius calculations of fig. 3.25 with increased increment Δx and decreased segment number N r i AB = 8; Δx = 2; N = 4; h = 8 r 1 = √ (1Δx) 2 + h 2 = √ 68 = 8.25 r 2 = √ (2Δx) 2 + h 2 = √ 80 = 8.94 r 3 = √ (3Δx) 2 + h 2 = √ 100 = 10.0 r 4 = √ (4Δx) 2 + h 2 = √ 128 = 11.31 ∑ ln | r i | = 9.03 ln R = 1 N N ∑ i=1 ln | r i | = 1 4 · 9.03 = 2.26 R = e 2.26 = 9.56. Proof: R = 4 √ √ 68 √ 80 · . . . · √ 128 = 9.56. GMD with increment Δx = 4 and N = 2: A renewed lengthening and reduction of the line elements as well as their radius calculation is summarised in tab. 3.21. Table 3.21: Radius calculations of fig. 3.25 with increased increment Δx and decreased segment number N r i AB = 8; Δx = 4; N = 2; h = 8 r 1 = √ (1Δx) 2 + h 2 = √ 80 = 8.94 r 2 = √ (2Δx) 2 + h 2 = √ 128 = 11.31 ∑ ln | r i | = 4.62 The calculation of R follows with <?page no="123"?> 3.9 GMD point outside line with its perpendicular on line end 97 ln R = 1 N N ∑ i=1 ln | r i | = 1 2 · 4.62 = 2.3 R = e 2.3 = 10.06. Proof: R = 2 √ √ 80 · √ 128 = 10.06. 
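The discretisation study of chaps. 3.9.1 and 3.9.2 can be reproduced with a few lines of code. The following Python sketch is an added illustration (NumPy assumed); it evaluates ln R = (1/N) Σ ln r_i for the right-hand radius assignment of fig. 3.24 and the left-hand assignment of fig. 3.25 and compares the result with the closed-form expression derived in the next subsection, eq. (3.3).

import numpy as np

def gmd_numerical(AB=8.0, h=8.0, N=8, side="right"):
    # Discretise the line AB into N elements of length dx = AB/N and evaluate
    # ln R = (1/N) * sum(ln r_i); the radius is taken at the right-hand point
    # of each element (fig. 3.24) or at the left-hand point (fig. 3.25).
    dx = AB / N
    idx = np.arange(N) if side == "right" else np.arange(1, N + 1)
    r = np.sqrt((idx * dx) ** 2 + h ** 2)
    return np.exp(np.mean(np.log(r)))

def gmd_analytical(AB=8.0, h=8.0):
    # Closed-form GMD of a line AB and a point above its end, eq. (3.3).
    lnR = (AB * np.log(np.sqrt(AB ** 2 + h ** 2)) - AB + h * np.arctan(AB / h)) / AB
    return np.exp(lnR)

for N in (8, 4, 2):
    print(N, round(gmd_numerical(N=N, side="right"), 2),
          round(gmd_numerical(N=N, side="left"), 2))
print("analytical:", round(gmd_analytical(), 2))

For N = 8, 4, 2 the sketch reproduces the values 8.94, 8.76, 8.46 (radius right at the element) and 9.33, 9.56, 10.06 (radius left at the element) as well as the analytical value R ≈ 9.13 listed later in tab. 3.22.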
3.9.3 GMD calculation - analytical solution The numerical calculations are followed by an analytical calculation. With reference to fig. 3.23, this is derived from a summation representation ln R = 1 AB N ∑ i=1 ln | r i | · Δx with Δx → 0 and N → ∞ into the integral notation ln R = 1 AB ˆ AB 0 ln | r(x) | dx. With r = √ x 2 + h 2 follows ln R = 1 AB ˆ AB 0 ln ( √ x 2 + h 2 ) dx = 1 AB [ x ln ( √ x 2 + h 2 ) − x + h arctan ( x h )] AB 0 = 1 AB [ AB ln (√ AB 2 + h 2 ) − AB + h arctan ( AB h )] . (3.3) With the line length AB = h = 8 follows ln R = 1 8 [ 8 ln ( √ 8 2 + 8 2 ) − 8 + 8 arctan ( 8 8 )] = 1 8 17.69 = 2.2 R = e 2.2 = 9.13. <?page no="124"?> 98 Geometric mean distance - GMD 3.9.4 GMD calculation - summary and evaluation The numerically and analytically obtained results are compared and summarised in tab. 3.22. Table 3.22: Comparison of the results for R Reference to Δx N R Fig. 3.24 1 8 8.94 Fig. 3.24 2 4 8.76 Fig. 3.24 4 2 8.46 Fig. 3.25 1 8 9.33 Fig. 3.25 2 4 9.56 Fig. 3.25 4 2 10.06 Fig. 3.23 analytical solution 9.13 Fig. 3.20 analytical solution eq. (3.2) 9.13 The result is confirmed by eq. (3.2) by setting the x-values of P 1 and P 2 to zero. Assessment: • Analytical solution lies between numerical solutions • Numerical solutions show deviations to the analytical solution and to each other • Procedure of the numerical method (figures 3.24, 3.25) shows influence on the result 3.10 GMD point outside line with its perpendicular inside line Given is the arrangement according to fig. 3.26 with a line AB and a point P which is positioned vertically above the point P at a distance h. Thus the line P ′ P forms the perpendicular to the line AB. <?page no="125"?> 3.10 GMD point outside line with its perpendicular inside line 99 Figure 3.26: Arrangement for calculating the GMD between line and point We are looking for the GM D R between the line AB and point P . 3.10.1 GMD calculation - radius right at the element To calculate R, the line AB is discretised and a line element variation with Δx = 1, Δx = 2 and Δx = 4, keeping the original length. GMD with increment Δx = 1 and N = 8 In fig. 3.27 the discretisation as well as the assignments of the individual radii r i to its line elements can be seen. The radius is always applied to the right point of the line element. Figure 3.27: Procedure for the calculation of a GMD The radii are calculated in tab. 3.23. <?page no="126"?> 100 Geometric mean distance - GMD Table 3.23: Radius calculations of fig. 3.27 r i AB = 8; Δx = 1; N = 8; h = 8 r 1 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 2 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 3 = √ (0Δx) 2 + h 2 = √ 64 = 8.0 r 4 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 5 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 6 = √ (3Δx) 2 + h 2 = √ 73 = 8.54 r 7 = √ (4Δx) 2 + h 2 = √ 80 = 8.94 r 8 = √ (5Δx) 2 + h 2 = √ 89 = 9.43 ∑ ln | r i | = 17.05 The calculation of R follows with ln R = 1 8 N ∑ i=1 ln | r i | = 1 8 · 17.05 = 2.13 R = e 2.13 = 8.42. Proof: R = 8 √ √ 68 √ 65 · . . . · √ 89 = 8.42. GMD with increment Δx = 2 and N = 4: The line element length Δx is extended and the number of elements is reduced. Thus, the original line length is retained. In tab. 3.24 shows the radius calculation. <?page no="127"?> 3.10 GMD point outside line with its perpendicular inside line 101 Table 3.24: Radius calculations of fig. 
3.27 r i AB = 8; Δx = 2; N = 4; h = 8 r 1 = √ (1Δx) 2 + h 2 = √ 68 = 8.94 r 2 = √ (0Δx) 2 + h 2 = √ 64 = 8.0 r 3 = √ (1Δx) 2 + h 2 = √ 68 = 8.25 r 4 = √ (2Δx) 2 + h 2 = √ 80 = 8.94 ∑ ln | r i | = 8.49 The calculation of R is carried out with ln R = 1 N N ∑ i=1 ln | r i | = 1 4 · 8.49 = 2.12 R = e 2.12 = 8.35. Proof: R = 4 √ √ 68 √ 64 · . . . · √ 80 = 8.35. 3.10.2 GMD calculation - radius left at the element To calculate R, the section AB is discretised and a section element variation with Δx = 1 and Δx = 2, keeping the original length, is performed. The radius assignment is done at the left point of the corresponding line element. GMD with step size Δx = 1 and N = 8: In fig. 3.28 the discretisation as well as the radius assignment can be seen. <?page no="128"?> 102 Geometric mean distance - GMD Figure 3.28: Procedure for calculating the GMD of a line Table 3.25: Radius calculations of fig. 3.28 r i AB = 8; Δx = 1; N = 8; h = 8 r 1 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 2 = √ (0Δx) 2 + h 2 = √ 64 = 8.0 r 3 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 4 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 5 = √ (3Δx) 2 + h 2 = √ 73 = 8.54 r 6 = √ (4Δx) 2 + h 2 = √ 80 = 8.94 r 7 = √ (5Δx) 2 + h 2 = √ 89 = 9.43 r 8 = √ (6Δx) 2 + h 2 = √ 100 = 10.0 ∑ ln | r i | = 17.25 In tab. 3.25 the radius calculations can be seen. The calculation of the GMD follows with ln R = 1 N N ∑ i=1 ln | r i | = 1 8 · 17.25 = 2.16 R = e 2.16 = 8.64. Proof: R = 6 √ √ 65 · . . . · √ 100 = 8.64. <?page no="129"?> 3.10 GMD point outside line with its perpendicular inside line 103 GMD with increment Δx = 2 and N = 4: The line element length Δx is increased as well as the number of elements N is decreased and the radii in tab. 3.26 are calculated again. Table 3.26: Radius calculations of fig. 3.28 r i AB = 8; Δx = 2; N = 4; h = 8 r 1 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 2 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 3 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 4 = √ (3Δx) 2 + h 2 = √ 73 = 8.45 ∑ ln | r i | = 8.43 ln R = 1 N N ∑ i=1 ln | r i | = 1 4 · 8.43 = 2.11 R = e 2.11 = 8.23. Proof: R = 4 √ √ 65 √ 65 · . . . · √ 73 = 8.23. 3.10.3 GMD calculation - superposition For more complicated geometric arrangements, solving by superposition is a possible approach. In fig. 3.29 the superposition of two lines is shown, whose GMD is calculated by numerical and analytical methods. Figure 3.29: GMD of two lines using superposition <?page no="130"?> 104 Geometric mean distance - GMD Table 3.27: Radius calculations of fig. 3.29 (right hand side) r i AP ′ = 6; Δx = 1; N = 6; h = 8 r 1 = √ (0Δx) 2 + h 2 = √ 64 = 8.0 r 2 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 r 3 = √ (2Δx) 2 + h 2 = √ 68 = 8.25 r 4 = √ (3Δx) 2 + h 2 = √ 73 = 8.54 r 5 = √ (4Δx) 2 + h 2 = √ 80 = 8.94 r 6 = √ (5Δx) 2 + h 2 = √ 89 = 9.43 ∑ ln | r i | = 12.86 R AP ′ = 6 √ √ 64 · . . . · √ 89 = 8.52 Table 3.28: Radius calculations of fig. 3.29 r i P ′ B = 2; Δx = 1; N = 2; h = 8 r 1 = √ (0Δx) 2 + h 2 = √ 64 = 8.0 r 2 = √ (1Δx) 2 + h 2 = √ 65 = 8.06 ∑ ln | r i | = 4.17 R P ′ B = 2 √ √ 64 √ 65 = 8.03. The radius R of both lines of fig. 3.29 is R AB = 2 √ 8.52 · 8.03 = 8.27. 3.10.4 GMD calculation - analytical solution Analytical solution to calculate the GMD of fig. 3.26 R AB = 2 √ R AP ′ R P ′ B . <?page no="131"?> 3.10 GMD point outside line with its perpendicular inside line 105 The solution is done with the double application of eq. (3.3): • R AP ′ : ln R AP ′ = 1 6 [ 6 ln ( √ 6 2 + 8 2 ) − 6 + 8 arctan ( 6 8 )] = 1 6 12.96 = 2.16 R AP ′ = e 2.16 = 8.67. 
• R P ′ B : ln R P ′ B = 1 2 [ 2 ln ( √ 2 2 + 8 2 ) − 2 + 8 arctan ( 2 8 )] = 1 2 4.18 = 2.09 R P ′ B = e 2.09 = 8.08. Thus the GMD R AB follows with R AB = 2 √ 8.67 · 8.08 = 8.37. 3.10.5 GMD calculation - Summary and evaluation The summary of results is given in tab. 3.29. Table 3.29: Comparison of the results for R Reference to Δx N R Fig. 3.27 1 8 8.42 Fig. 3.27 2 4 8.35 Fig. 3.28 1 8 8.64 Fig. 3.28 2 4 8.22 Fig. 3.29 1 8 8.27 Fig. 3.26 analytical solution 8.37 Fig. 3.20 analytical solution eq. (3.2) 8.52 <?page no="132"?> 106 Geometric mean distance - GMD The result is obtained by eq. (3.2) by setting the x-values of P 1 and P 2 to six. Assessment: • Analytical solution lies between numerical solutions. • Numerical solutions show deviations to the analytical solution and to each other • Procedures according to figures 3.27 and 3.28 show influences on the result. <?page no="133"?> Chapter 4 LCR parallel and series resonant circuit The definition of the reactances is followed by their frequency characteristics and the natural frequency calculation with subsequent error calculation. Voltage curves of the LCR series resonant circuit are derived and discussed. This is followed by the natural frequency calculation of the damped LCR series and parallel resonant circuit. The chapter concludes with the calculation of the forced, damped LCR parallel resonant circuit. 4.1 Resonant circuits, impedances and resonances The reactances (reactances or AC resistances) X as well as the complex impedances Z of the individual components of the resonant circuits according to fig. 4.1 are calculated as follows: • Inductance or coil L: X L = ωL; Z L = j ωL = j X L , • Capacitance or capacitor C: X C = 1 ωC ; Z C = 1 j ωC = j X C , • Resistance or resistivity R: X R = R; Z R = R. <?page no="134"?> 108 LCR parallel and series resonant circuit Figure 4.1: Series and parallel resonant circuit with effective values In fig. 4.1 a) shows a series resonant circuit for damped and forced oscillation. This is characterised by the current I, which flows equally through all the components involved (voltage source, capacitor C, inductance L and resistance R). At the natural circuit frequency ω 0 , the impedance has the value of the effective resistance of the coil. The circuit is very low impedance. The maximum resonance current flows. Below the natural circuit frequency, the capacitive property of the circuit dominates. This is contrasted by the frequency range above the natural circuit frequency. In this range, the inductive influence of the circuit dominates. This effect is called voltage resonance because the voltage across the inductance and capacitance can become greater than the total voltage. In fig. 4.2 the curves of the individual impedances and the resulting current are plotted against the angular frequency ω normalised to the natural angular frequency ω 0 . The equation for the magnitude of the impedance Z r is | Z r | = √ R 2 + ( ωL − 1 ωC ) 2 . A parallel resonant circuit with forced and damped oscillation is shown in fig 4.1 b). The voltage U 0 is applied to all components involved. A corresponding branch current I R , I L , I C and sum current I is produced by each component. In a parallel circuit, the voltage becomes the reference value, since it is applied equally to all components. In the case of active resistance, voltage and current do not have a phase shift. The current in the capacitor branch leads the voltage by 90 ◦ ahead of the voltage. Through the coil, the current follows the voltage by 90 ◦ . 
The branch currents are directly proportional to the conductance values. In the range of low angular frequencies, the conductance <?page no="135"?> 4.1 Resonant circuits, impedances and resonances 109 Figure 4.2: Characteristics of the series resonant circuit values in the coil branch determine the behaviour of the circuit. In the range of high circuit frequencies, the conductance values in the capacitor branch are decisive. There is exactly one natural circuit frequency ω 0 at which the two conductance values of inductance and capacitance are equal. Because of the opposing phase shift, the reactive conductances cancel each other out at this angular frequency. The parallel resonant circuit then has the properties of an ohmic resistor, in which the phase angle between current and voltage is 0 ◦ . The magnitude of the impedance Z p is given by | Z p | = 1 √( 1 R ) 2 + ( ωC − 1 ωL ) 2 . At the natural circuit frequency, Z p and the voltage record a maximum (cf. 4.3). Below this natural circuit frequency the inductive above the natural circuit frequency the capacitive influence determines the circuit behaviour. This effect is called current resonance. Here, branch currents can occur that are greater than the total current. To determine the natural circuit frequency of the undamped system ω 0 , the two reactances X L and X C are equated and transformed: <?page no="136"?> 110 LCR parallel and series resonant circuit Figure 4.3: Characteristics of the parallel resonant circuit X L = X C ω L = 1 ω C ω 0 = 1 √ LC . (4.1) The further transformation leads to the natural frequency of the undamped system f 0 = 1 2 π √ LC , (4.2) where eq. (4.2) is called Thomson’s vibration formula. Useful standards for this are [11], [12], [13] and [14]. <?page no="137"?> 4.2 Natural frequency - error calculation 111 4.2 Natural frequency - error calculation In technical applications, components with tolerances are always used. Therefore, the effects of the component tolerances on the natural frequency f 0 with the mean value ¯ f 0 and the measurement uncertainty Δf 0 are of interest f 0 = ¯ f 0 (L, C) ± Δf 0 (L, C). For this purpose, the influence of each independent parameter (L and C) on the natural frequency must be determined. This is done by forming the total differential of the natural frequency of eq. (4.2) with df 0 = df 0 (L, C) = ∂f 0 (L, C) ∂L dL + ∂f 0 (L, C) ∂C dC. The resulting terms correspond to straight line equations with the partial derivative as the slope multiplied by the delta size (independent variable). Cf. also the Taylor development. The maximum error Δf 0 max follows with Δf 0 max = ∣∣∣∣ ∂f 0 (L, C) ∂L ΔL ∣∣∣∣ + ∣∣∣∣ ∂f 0 (L, C) ∂C ΔC ∣∣∣∣ = ∣∣∣∣ − C 4 π (LC) 3/ 2 ΔL ∣∣∣∣ + ∣∣∣∣ − L 4 π (LC) 3/ 2 ΔC ∣∣∣∣ . Table 4.1: Used components Device Mean Measurement uncertainty Measurement (absolute value) uncertainty in [%] C 2.2 · 10 − 6 F ± 0.11 · 10 − 6 ± 5 L 13.5 · 10 − 3 H ± 0.675 · 10 − 3 ± 5 Including the components named in tab. 4.1 the maximum error Δf 0 max follows <?page no="138"?> 112 LCR parallel and series resonant circuit Δf 0 max = ∣∣∣∣ − 2.2 10 − 6 F 4 π (13.5 10 − 3 H · 2.2 10 − 6 F ) 3/ 2 · 0.675 · 10 − 3 H ∣∣∣∣ + ∣∣∣∣ − 13.5 10 − 3 H 4 π (13.5 10 − 3 H · 2.2 10 − 6 F ) 3/ 2 · 0.11 · 10 − 6 F ∣∣∣∣ = 23 Hz + 23 Hz = 46 Hz. The result of the natural frequency f 0 is therefore f 0 = 924 Hz ± 46 Hz. The percentage maximum measurement uncertainty (maximum error) is thus ∣∣∣∣ Δf 0 max ¯ f 0 ∣∣∣∣ = ∣∣∣∣ 46 Hz 924 Hz ∣∣∣∣ = 0.05 = 5 %. 
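The worked example can be verified with a short computation. The following Python sketch is an added illustration (NumPy assumed); it evaluates Thomson's formula and the maximum error resulting from the component tolerances of tab. 4.1.

import numpy as np

# Component values and their +/- 5 % tolerances from tab. 4.1
L, dL = 13.5e-3, 0.675e-3   # H
C, dC = 2.2e-6, 0.11e-6     # F

f0 = 1.0 / (2.0 * np.pi * np.sqrt(L * C))        # Thomson's formula, eq. (4.2)

# Partial derivatives of f0 with respect to L and C
df0_dL = -C / (4.0 * np.pi * (L * C) ** 1.5)
df0_dC = -L / (4.0 * np.pi * (L * C) ** 1.5)

# Worst-case (maximum) error of the natural frequency
df0_max = abs(df0_dL * dL) + abs(df0_dC * dC)

print(f"f0 = {f0:.0f} Hz +/- {df0_max:.0f} Hz ({df0_max / f0:.1%})")

The sketch reproduces f0 ≈ 924 Hz and Δf0max ≈ 46 Hz, i.e. the maximum measurement uncertainty of 5 % obtained above.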
Useful standards for this are [15], [16], [17] and [18]. 4.3 Voltage profiles LCR series resonant circuit with frequency variation In fig. 4.4 shows the circuit diagram of a forced and damped series resonant circuit consisting of the components resistor R, inductor L and capacitor C with the complex voltages U R , U L , U C , U RL as well as the source voltage U 0 and the complex current I as rms values. The excitation by the voltage source is harmonic with the angular frequency ω = [0, ∞ ]. The voltage equation is transformed into the reactance equation U L + U R + U C = U 0 jωL I + R I + 1 jωC I = U 0 jωL + R + 1 jωC = U 0 I , (4.3) where a division is made by the complex rms value of the current I, which cannot reach the value of zero. <?page no="139"?> 4.3 Voltage profiles LCR series resonant circuit with frequency variation 113 Figure 4.4: Example of an LCR series resonant circuit 4.3.1 Voltage characteristics across the inductance The frequency variation of the source voltage causes a variation of the voltage across the inductance, which is represented below in relation to the source voltage as the normalised voltage U L / U 0 as a function of a multiple of the resonance frequency ω/ ω 0 . For this purpose, in eq. (4.3) the complex current I is given by jωL + R + 1 jωC = U 0 U L jωL − ω 2 LC + RjωC + 1 jωC = U 0 U L jωL and a common denominator is formed. The subsequent division by jωL, reciprocaljωL jωC 1 − ω 2 LC + RjωC = U L U 0 and absolute value formation leads to ωL √ R 2 + ( 1 ωC − ωL) 2 = U L U 0 . The further substitution with ω = x ω 0 = x/ √ LC allows the notation x 2 √ LC √ (x R C) 2 + LC (1 − x 2 ) 2 = U L U 0 . (4.4) The equation can be explained as follows: <?page no="140"?> 114 LCR parallel and series resonant circuit Figure 4.5: Frequency-dependent curves of the voltage U L referred to the source voltage U 0 across the inductance • ω = 0: The capacitor C blocks. Thus there is neither current through nor voltage drop across the inductor. • ω = ω 0 : Z r takes the value of the resistor R and allows maximum current flow. • ω → ∞ : The inductor L blocks. The voltage drop across the inductor tends to the source voltage U 0 . • R → 0: The resistance influence vanishes. The equation changes into x 2 1 − x 2 = U L U 0 and exhibits singularity at x = 1. In fig. 4.5 the normalized voltage waveforms are shown with the resistance as the plot parameter for x = [0, 5] of eq. (4.4). The voltage of the inductor related to the source voltage is shown as the ordinate and the angular frequency related to the natural angular frequency is shown on the abscissa. <?page no="141"?> 4.3 Voltage profiles LCR series resonant circuit with frequency variation 115 4.3.2 Voltage characteristics across inductance and resistance Real inductors include a resistive component. A measured voltage across the inductance therefore still includes the voltage-effective ohmic component, as shown in fig. 4.4 as U RL has already been defined. In the sequel, the voltage across a real inductor is to be determined as a function of the angular frequency and the resistance as a plot parameter. In eq. (4.3), the complex current I is given by Figure 4.6: Frequency-dependent characteristics of the voltage U RL referred to the source voltage U 0 across the inductance and the resistance jωL + R + 1 jωC = U 0 U RL (R + jωL) and replaced. 
The following • division by (R + jωL), • principal denominator formation, <?page no="142"?> 116 LCR parallel and series resonant circuit • reciprocal value formation, • absolute value formation leads to the equation √ ( − 2ω 2 LRC) 2 + (ωR 2 C − ω 3 L 2 C) 2 (R − 2ω 2 LRC) 2 + (ωL + ωR 2 C − ω 3 L 2 C) 2 = U RL U 0 . The further substitution with ω = x ω 0 = x/ √ LC, x = [0, 5] permits the notation √√√√√√ ( 2 x 2 R √ LC ) 2 + (x R 2 C − x 3 L) 2 ( R √ LC − 2 x 3 R √ LC ) 2 + (x L + x R 2 C − x 3 L) 2 = U RL U 0 , whose normalized curves are shown in fig. 4.6 with the resistor as a plot parameter. On the abscissa is plotted the angular frequency referenced to the natural angular frequency and on the ordinate is plotted the voltage referenced to the source voltage across the inductance and the resistor. From the figure it can be seen that • at ω = 0 over the resistor and inductance no voltage drops, because the capacitor blocks. • at ω → ∞ the impedance of the capacitor tends to very small values and that of the inductor and resistor to very high values. The voltage across the inductor approaches the value of the source voltage. • the voltage maximum with decreasing resistance is shifted towards lower frequency values, until ω = ω 0 . • an increasing resistance causes an increasing attenuation, resulting in a decreasing voltage maximum. 4.3.3 Voltage characteristics across the resistor If in eq. (4.3) the complex current I is replaced by U / R, it follows. jωL + R + 1 jωC = U 0 R U R . If subsequently <?page no="143"?> 4.3 Voltage profiles LCR series resonant circuit with frequency variation 117 • a division by R, • the inverse, • the absolute value formation is done, the equation follows Figure 4.7: Frequency-dependent characteristics of the voltage U R referred to the source voltage U 0 across the resistor 1 √ (1 + ( 1 ωRC − ωL R ) 2 = U R U 0 . The substitution with ω = x ω 0 = x/ √ LC, x = [0, 5] leads to the notation 1 √ 1 + ( √ L xR √ C − x √ L R √ C ) 2 = U R U 0 . <?page no="144"?> 118 LCR parallel and series resonant circuit The corresponding normalized voltage waveforms with the resistance as a share parameter are shown in fig. 4.7. It should be noted that • at ω = 0 (dc voltage) the capacitor blocks, with the consequence that no current flows through the resistor and thus across the resistor the voltage drop is zero. • at ω → ∞ the inductance L tends to high impedance values, with the consequence that the current as well as the voltage drop across the resistor tends to zero. • In the case of resonance at ω = ω 0 the reactances of inductance and capacitance compensate each other. The current and coupled with it the voltage drop across the resistor take maximum value one. A resistor is not an energy storage device, therefore no voltage rise over it can be expected. 4.3.4 Voltage characteristics across capacitance If in eq. (4.3) the complex current I is replaced by U jωC, it follows jωL + R + 1 jωC = U 0 U C 1 jωC . If a successive • multiplication by jωC, • reciprocation, • amount formation is performed, this leads to the equation 1 √ (1 − ω 2 LC) 2 + (RCω) 2 = U C U 0 . The substitution with ω = x ω 0 = x/ √ LC, x = [0, 5] leads to the notation 1 √ (1 − x 2 ) 2 + ( R √ C L x ) 2 = U C U 0 . 
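The three normalised voltage ratios derived in chaps. 4.3.1, 4.3.3 and 4.3.4 are simply the magnitudes of the voltage divider formed by R, L and C. The following Python sketch is an added illustration (NumPy and Matplotlib assumed; the component values of tab. 4.1 and a few arbitrary resistance values are chosen as examples). It plots U_L/U_0, U_R/U_0 and U_C/U_0 over x = ω/ω_0 and reproduces the qualitative behaviour of figs. 4.5, 4.7 and 4.8.

import numpy as np
import matplotlib.pyplot as plt

L, C = 13.5e-3, 2.2e-6                 # example component values as in tab. 4.1
w0 = 1.0 / np.sqrt(L * C)              # natural angular frequency, eq. (4.1)
x = np.linspace(0.01, 5.0, 1000)       # x = omega / omega_0
w = x * w0

plt.figure()
for R in (10.0, 50.0, 200.0):          # plot parameter (arbitrary example values)
    Z = np.sqrt(R ** 2 + (w * L - 1.0 / (w * C)) ** 2)   # |Z_r| of the series circuit
    plt.plot(x, w * L / Z, label=f"U_L/U_0, R = {R:g} Ohm")
    plt.plot(x, R / Z, "--", label=f"U_R/U_0, R = {R:g} Ohm")
    plt.plot(x, (1.0 / (w * C)) / Z, ":", label=f"U_C/U_0, R = {R:g} Ohm")

plt.xlabel("x = omega / omega_0")
plt.ylabel("normalised voltage")
plt.legend(fontsize=7)
plt.show()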
<?page no="145"?> 4.4 Damped forced LCR series resonant circuit 119 Figure 4.8: Frequency-dependent characteristics of the voltage U C referred to the source voltage U 0 across the capacitance The corresponding normalized voltage waveforms with the resistance as a plot parameter are shown in fig. 4.8. It should be noted that • at ω = 0 (dc voltage) the entire voltage across the capacitor drops because it blocks. • at ω → ∞ the capacitor impedance tends to zero and thus the voltage U C also tends to zero. • the maximum of the voltage rise with decreasing resistance is shifted to higher frequencies until ω = ω 0 is reached. 4.4 Damped forced LCR series resonant circuit The subject of this research is the damped and forced series resonant circuit shown in fig. 4.9, consisting of the voltage source u 0 , the resistor R q (source resistor) connected <?page no="146"?> 120 LCR parallel and series resonant circuit in parallel with the voltage source, and the components inductance L, capacitor or capacitance C, and resistor R. The oscillating voltage of the voltage source u 0 excites the oscillating circuit to oscillate. The circuit is described by the voltage differential equation u L + u R + u C = u 0 L di dt + R i + 1 C ˆ T i dt = u 0 d 2 i dt 2 + R L di dt + 1 LC i = 1 L du 0 dt . Figure 4.9: Example of a damped LCR series resonant circuit For the solution the transformation into the complex image domain is done in rms representation and with the help of tab. 4.2 p 2 I + R L p I + 1 LC I = 1 L U 0 s p 2 + R L p + 1 LC = 1 L U 0 I s. The substitutions • U 0 / I = U 0 e jϕ u / I e jϕ i ; with ϕ u ϕ i = 0 of the resistor follows U 0 / I = R q , <?page no="147"?> 4.4 Damped forced LCR series resonant circuit 121 • ω e = R q / L allow p 2 + R L p + 1 LC = R q L s p 2 + R L p + 1 LC = R q L jω 1 p 2 + R L p + 1 LC = j ( R q L ) 2 p 2 + R L p + 1 LC − j ( R q L ) 2 = 0. Table 4.2: Transformation table time domain, complex image domain Time domain Complex domain Complex domain (RMS value) i i I di dt jω i = p i jω I = p I d 2 i dt 2 (jω) 2 i = p 2 i (jω) 2 I = p 2 I du 0 dt jω e u 0 = s u 0 jω e U 0 = s U 0 The obtained quadratic equation is calculated with the help of the midnight formula (goes back to the work of Evariste Galois (1811-1831)) p 1,2 = − R L ± √( R L ) 2 − 4 ( 1 LC − j ( R q L ) 2 ) 2 = − R 2L ± √ j 4 ( R 2L ) 2 − j 4 1 LC + j 4 j ( R q L ) 2 = − R 2L ± √ − j 2 ( R 2L ) 2 + j 2 1 LC + j 2 j 3 ( R q 2L ) 2 = − R 2L ± j √ 1 LC − ( R 2L ) 2 − j ( R q L ) 2 (4.5) = − δ ± j √ ω 2 0 − δ 2 − j ω 2 e = − δ ± j ω d = j ω 1,2 . <?page no="148"?> 122 LCR parallel and series resonant circuit The transformation of the above equations is done with the self-chosen goal to represent the term 1/ (LC) alone and positive. Here is • ω 0 the natural angular frequency of the undamped system ω 0 = 1 √ LC , (4.6) • f 0 the natural frequency of the undamped system f 0 = ω 0 2π , (4.7) • δ the decay coefficient or damping factor δ = R 2L , (4.8) • ω d the natural angular frequency of the damped system ω d = √ ω 2 0 − δ 2 , (4.9) • f d the natural frequency of the damped system f d = ω d 2π , (4.10) • ω e the excitation angular frequency ω e = R q L , (4.11) which is multiplied by the imaginary unit under the root expression. <?page no="149"?> 4.5 Damped free LCR series resonant circuit 123 4.5 Damped free LCR series resonant circuit The damped and forced series resonant circuit shown in fig. 4.9 is transformed with R q = 0 into the state of the damped and free resonant circuit according to fig. 4.10. 
The voltage source is short-circuited, or removed. The capacitor is assumed to be the voltage source for the state t = 0. From eq. (4.5) remains the equation p 1,2 = − R 2L ± j √ 1 LC − ( R 2L ) 2 = − δ ± j √ ω 2 0 − δ 2 (4.12) = − δ ± j ω d = j ω 1,2 . Figure 4.10: Example of a damped LCR series resonant circuit <?page no="150"?> 124 LCR parallel and series resonant circuit Table 4.3: Parameters and comparison of natural circuit, natural frequencies of the free and damped LCR series resonant circuit according to fig. 4.10 Simulation parameters: C = 2.2 μF L = 13.5 mH ω 0 = 5803 s − 1 according to eq. (4.6) Resistance R [Ω] Method 1 5 10 20 ω d according to eq. (4.9) [s − 1 ] 5802.5 5799.6 5790 5755 ω d according to fig. 4.11 [s − 1 ] 5806 5799 5730 5711 f d according to eq. (4.10) [Hz] 923.5 923.0 921.6 916 f d according to fig. 4.11 [Hz] 924 923 912 909 δ according to eq. (4.8) [s − 1 ] 37 185 370 740 Figure 4.11: LTspice simulation result - free damped LCR series resonant circuit The discriminant D = ω 2 0 − δ 2 decides about the mode of oscillation. At the eq. (4.12) three cases can be distinguished: • D = 0: Aperiodic limiting case. The damping allows just no more oscillation (aperiodic = non-periodic). <?page no="151"?> 4.6 Undamped free LC resonant circuit 125 • D < 0: Aperiodic behavior or overdamped case. The system is no longer capable of any real oscillation and strives towards a stable state (voltage equilibrium). • D > 0: Weak damping. The system is capable of oscillation and performs a damped oscillatory vibration. A comparison of the LTspice simulation results from fig. 4.11 with the results obtained from equations (4.9) and (4.10) was made in tab. 4.3. The deviations are due to the selected time steps of the LTspice simulation, which become larger with increasing frequency. 4.6 Undamped free LC resonant circuit If R = 0 is still set in the continuation, fig. 4.10 into the fig. 4.12. Here u L = u C = u. The capacitor C was chosen as the source in the plot. Furthermore, the eq. (4.12) goes into the eq. (4.6) p 1,2 = 0 ± j √ ω 2 0 j ω 1,2 = 0 ± j ω 0 ω 1,2 = ± ω 0 = ± 1 √ LC , Figure 4.12: Undamped LC oscillating circuit the condition of the free and undamped LC resonant circuit. Here the positive natural circuit frequency is the solution. <?page no="152"?> 126 LCR parallel and series resonant circuit 4.7 Damped forced LCR parallel resonant circuit In fig. 4.13 a damped parallel resonant circuit for a forced oscillation is shown. It consists of the voltage source u 0 , the series-connected internal resistor R q (source resistor) and the components resistor R, inductance L and capacitor or capacity C. The oscillating voltage of the voltage source u 0 excites the circuit to oscillate. In the sequel, the oscillatory behavior of the circuit is to be investigated. For this, the circuit must be described with the help of a differential equation. With the node set for node 1 the sum of the currents is formed. These are expressed with the help of their voltages and thus the differential equation of the oscillating circuit is formulated i C + i R + i L = i 0 C du dt + 1 R u + 1 L ˆ T u dt = i 0 d 2 u dt 2 + 1 RC du dt + 1 LC u = 1 C di 0 dt . Figure 4.13: Example of a forced and damped LCR resonant circuit With the help of the transformation table tab. 4.4 follows the algebraic equation p 2 U + 1 RC p U + 1 LC U = 1 C s I 0 p 2 + 1 RC p + 1 LC = s C I 0 U . 
The substitutions <?page no="153"?> 4.7 Damped forced LCR parallel resonant circuit 127 • I 0 / U = I 0 e jϕ i / U e jϕ u ; mit ϕ i ϕ u = 0 of the resistance follows I 0 / U = 1/ R q , • ω e = 1/ (R q C) Table 4.4: Transformation table time domain, complex image domain Time domain Complex Complex domain domain (RMS value) u u U du dt jω u = p u jω U = p U d 2 u dt 2 (jω) 2 u = p 2 u (jω) 2 U = p 2 U di 0 dt jω e i 0 = s i 0 jω e I 0 = s I 0 make the quadratic equation possible p 2 + 1 RC p + 1 LC − j (R q C) 2 = 0, the solution of which with the help of the midnight formula p 1,2 = − 1 RC ± √ 1 (RC) 2 − 4 ( 1 LC − j (R q C) 2 ) 2 = − 2 2RC ± √ 4 (2RC) 2 − 4 ( 1 LC − j (R q C) 2 ) 2 = − 1 2RC ± √ j 4 (2RC) 2 − j 4 LC + jj 4 (R q C) 2 = − 1 2RC ± j √ 1 LC − 1 (2RC) 2 − j (R q C) 2 (4.13) = − δ ± j √ ω 2 0 − δ 2 − j ω 2 e = − δ ± j ω d = j ω 1,2 is carried out. The transformation of the above equations is done with the self-chosen goal to represent the term 1/ (LC) alone and positive, which allows a comparability <?page no="154"?> 128 LCR parallel and series resonant circuit with further circuit arrangements. The term with the imaginary unit under the root expression represents the excitation angular frequency. Furthermore Table 4.5: Simulation parameters and simulation results of the damped LCR resonant circuit shown in fig. 4.13 Calculation and Calculation and simulation parameters simulation results R q = 200 Ω ω 0 = 5802 s − 1 , calculated with eq. (4.14 ) R = 1 kΩ f 0 = 923.5 Hz, calculated with eq. (4.15) L = 13.5 mH ω d = 5798 s − 1 , calculated with eq. (4.17 ) C = 2.2 μF f d = 923 Hz, calculated with eq. (4.18) ˆ u 0 = 5 V Excitation frequency f = f 0 Figure 4.14: LTspice simulation result - voltages and currents of the LCR parallel resonant circuit excited with u 0 at f = f 0 = 923 Hz • ω 0 the natural angular frequency of the undamped system <?page no="155"?> 4.7 Damped forced LCR parallel resonant circuit 129 ω 0 = 1 √ LC , (4.14) • f 0 the natural frequency (eigenfrequency) f 0 = ω 0 2 π , (4.15) • δ the decay coefficient δ = 1 2 RC . (4.16) For R → ∞ the circuit assumes the state of undamped oscillation. For R → 0 the circuit goes into the aperiodic state. It is no longer capable of real oscillation, since a short circuit prevents this. • ω d the natural angular frequency of the damped system ω d = √ ω 2 0 − δ 2 , (4.17) • f d the natural frequency (eigenfrequency) f d = ω d 2 π , (4.18) • ω e the excitation angular frequency ω e = 1 R q C . (4.19) For R q → ∞ the circuit assumes the state of free oscillation. <?page no="156"?> 130 LCR parallel and series resonant circuit Figure 4.15: LTspice simulation result - LCR parallel resonant circuit, at f = f 0 = 923 Hz in transient as phase diagram As an example of an LCR resonant circuit excited at the natural frequency f 0 , fig. 4.14 the time histories of the source voltage u 0 , the capacitor voltage u C , the capacitor current i C , the current through the inductor i L as well as the current through the resistor i R of the circuit of fig. 4.13. It is recognized that • the capacitor was assigned an initial voltage of 10 V at time t = 0 s, • capacitor voltage and current increase in amplitude and thus in rms value with increasing simulation time, • the voltage ˆ u 0 = 5 V , • after the transient the voltage u 0 is in phase with the voltage u C and the currents i R and i 0 . The simulation parameters and simulation results are summarized in tab. 4.5. In fig. 4.15, the transient (motions) starting at t = 1 ms to t = 2.9 ms (from fig. 
4.14) at the <?page no="157"?> 4.8 Damped free LCR parallel resonant circuit 131 Figure 4.16: LTspice simulation result - voltages and currents of the LCR parallel resonant circuit excited with u 0 at f = 992 Hz frequency of 923 Hz what is shown as a phase diagram. Counterclockwise transient oscillation can be seen as an increasing radius. In fig. 4.16 the voltage and current curves, excited with a higher frequency differing from f 0 and f d are shown. The voltage u 0 precedes the voltage u C after the transient process. If the circuit is excited with a frequency f < f 0 , the voltage u 0 lags behind the voltage u C after the transient process. A in-phase condition between the voltage u 0 , u C and the currents i R , i 0 is no longer given. 4.8 Damped free LCR parallel resonant circuit Aims in fig. 4.13 the resistance R q → ∞ , the circuit goes into the state of a damped and free LCR parallel resonant circuit according to fig. 4.17. From eq. (4.13) remains p 1,2 = − 1 2RC ± j √ 1 LC − 1 (2RC) 2 (4.20) = − δ ± j √ ω 2 0 − δ 2 = − δ ± j ω d . <?page no="158"?> 132 LCR parallel and series resonant circuit Figure 4.17: Example of a free and damped LCR parallel resonant circuit with L as source Table 4.6: Parameters and comparison of natural circuit, natural frequencies of the damped LCR parallel resonant circuit according to fig. 4.17 Simulation parameters: C = 2.2 μF L = 13.5 mH ω 0 = 5803 s − 1 nach Gl. (4.14) Resistor R [Ω] Method 100 200 1000 ω d according to eq. (4.17) [s − 1 ] 5339 5690 5798 ω d according to fig. 4.18 [s − 1 ] 5221 5623 5749 f d according to eq. (4.18) [Hz] 850 906 923 f d according to fig. 4.18 [Hz] 831 895 915 δ according to eq. (4.16) [s − 1 ] 2273 1136 227 A case discrimination of the oscillatory behavior can be performed as described in chap. 4.5. The eq. (4.20) is interpreted as follows: • R → 0: Periodic oscillation is no longer enabled. The circuit is short-circuited. • R → ∞ : The natural circuit frequency of the damped system transitions to the natural circuit frequency ω 0 of the undamped LC resonant circuit according to eq. (4.14). It follows <?page no="159"?> 4.8 Damped free LCR parallel resonant circuit 133 Figure 4.18: LTspice simulation result - damped LCR parallel resonant circuit p 1,2 = 0 ± j 1 √ LC . = 0 ± j ω 0 j ω 1,2 = ± j ω 0 ω 1,2 = ± ω 0 , where here the positive natural angular frequency ω 1 = ω 0 forms the solution. In fig. 4.18 the voltages of the damped LCR resonant circuit according to fig. 4.17 with the resistor as the plot parameter. Visible are the decaying equalization processes. These are accelerated with a decreasing resistance. The influences on the natural frequencies are shown in tab. 4.6, likewise the simulation, calculation parameters with calculation results. The discrepancies between the analytical and numerical results are due to the time discretization. It is pointed out that ω 0 > ω d . For the parallel resonant circuit, in fig. 4.19 the voltage versus current can be seen as a phase diagram with the resistance as the plot parameter. The decays (motions) are counterclockwise and end at the center <?page no="160"?> 134 LCR parallel and series resonant circuit Figure 4.19: LTspice simulation result - LCR parallel resonant circuit, decay process as phase diagram of the diagram at u = i = 0. In contrast, an undamped system describes a circle. It is worth noting the assumed current directions in fig. 
4.17 is given: • If all three currents are assumed to be positive as current directions, the nodal rule is thus violated, but this does not change the obtained result of the natural angular frequency of the damped system. The reason is the quadratic term in the root of the midnight formula. • If the two currents i C and i L are assumed to be of the same direction, independent of the current direction i R , then this leads to the result p 1,2 = 1 2RC ± j √ 1 LC − 1 (2RC) 2 . The real part has a sign change. The natural angular frequency of the damped system is equal to that in eq. (4.20). <?page no="161"?> 4.9 Undamped free LC resonant circuit 135 4.9 Undamped free LC resonant circuit Goes into fig. 4.17 the resistor R → ∞ , the undamped and free LC oscillatory circuit thus follows from fig. 4.12. From eq. (4.20) follows the resonant frequency according to eq. (4.14) of the free and undamped LC oscillatory circuit, which also follows the undamped and free LC oscillatory circuit from chap. 4.6. <?page no="163"?> Chapter 5 Current displacement in conductor In fig. 5.1 a) an electrical conductor is shown in the cylindrical coordinate system. The conductor is described by means of the radius R, the length l as well as the electrical conductivity κ and is penetrated by the current density J through the conductor crosssectional area (front side). Figure 5.1: Electrical cylindrical conductor with current density profile J (r) In the case of a direct current J (ω = 0), the current density J (r) will be homogeneous over the cross-sectional area of the conductor. In the case of an alternating current J (ω > 0), current is displaced in the conductor towards the edge area of the conductor. The current density is thus inhomogeneously distributed over the area and contains a radial change. However, the current density is constant along the circumference (allsided current displacement). Compare this with fig. 5.1 b). The effective conductor <?page no="164"?> 138 Current displacement in conductor cross-sectional area for current conduction decreases. The purpose of this chapter is to describe the effect of current displacement in the conductor by means of the two Maxwell equations (1.12) and (1.14), which already contain geometric quantities for modelling. The time-varying current density in the conductor causes a time-varying flux density, which in turn induces a time-varying electric field. The effect of current displacement is due to the superposition of the electric fields induced in the conductor, which is the focus of the modelling and the current density term in equation (1.14) is neglected. The current displacement resulting from the superposition of the electric field is described with the parameters conductor radius R and angular frequency ω by means of a polynomial. 5.1 Current displacement in the conductor - modelling The cause of the current displacement in the conductor is the field displacement in the conductor, the cause of which is an alternating voltage applied to the conductor. For the modelling, a change from the time to the complex image area takes place. The conductor according to fig. 5.1 a) is shown in fig. 5.2 a) with its surfaces A 1 (crosssectional area of the conductor) and A 2 (half-plane), its boundaries Γ 1 and Γ 2 as well as the conductor length l. Along the ladder drifts l in fig. 5.2 b) the applied time-varying voltage u = E 1 · l drives the time-varying current. 
Furthermore, the area-penetrating field quantities, such as the magnetic flux density B and the electric field strength E are drawn. Both were calculated with the procedure according to fig. 5.3, in which the assumptions necessary for modelling are named. The electric field E 1 evokes the magnetic flux density B 1 in the conductor by the flow law (right-hand rule). With increasing frequency, the flux density B 1 , coupled by the law of induction (left-hand rule), induces the electric field strength E 2 , whose direction is the same at the outer edge of the conductor and opposite to the direction of E 1 inside the conductor. This concatenation of cause and effect was shown in fig. 5.3 for example from n = 0 to n = 2. For the range I of the circular conductor [0 ≤ r ≤ R/ 2] the integration constants in fig. 5.3 assume the values r 1 = 0 and r 2 ≤ R/ 2. The electric field strength in the centre of the conductor E Z (r, ω) is given by the polynomial equation <?page no="165"?> 5.1 Current displacement in the conductor - modelling 139 Figure 5.2: Electrical conductor with surfaces, boundaries and field shapes E Z (r, ω) = E 1 − E 2 − E 3 − E 4 − · · · = [ 1 − 1 2 2 ( ω r c ) 2 − 1 2 2 4 2 ( ω r c ) 4 − 1 2 2 4 2 6 2 ( ω r c ) 6 − · · · ] E 0 e jωt = [ 1 − 1 (1! ) 2 ( ωr 2c ) 2 − 1 (2! ) 2 ( ωr 2c ) 4 − · · · ] E 0 e jωt . For the edge region II of the circular conductor [R/ 2 ≤ r ≤ R] the integration constants in fig. 5.3 assume the values r 1 = R/ 2 and r 2 ≤ R. The electric field strength in the boundary region of the conductor E R (r, ω) is calculated with the polynomial equation E R (r, ω) = E 1 + E 2 + E 3 + E 4 + · · · = [ 1 + 1 (1! ) 2 ( ωr 2c ) 2 + 1 (2! ) 2 ( ωr 2c ) 4 + · · · ] E 0 e jωt . The substitution a = ω r/ (2c) allows the shortened summation notation of both polynomial equations with E Z (r, ω) = n= ∞ ∑ n=0 [ 1 − 2n | 1 − 2n | a 2n (n! ) 2 ] E 0 e jωt (5.1) E R (r, ω) = n= ∞ ∑ n=0 [ a 2n (n! ) 2 ] E 0 e jωt . (5.2) <?page no="166"?> 140 Current displacement in conductor Figure 5.3: Procedure for deriving the field superposition <?page no="167"?> 5.1 Current displacement in the conductor - modelling 141 For symmetry reasons, the electric field strengths E 2 , E 3 , etc. are equal to zero at the location R/ 2. The two polynomial equations E Z (r, ω), E R (r, ω) are shifted by the coordinate transformation to E Z (R/ 2, ω) = E R (R/ 2, ω) = 0 on the r-axis and defined range-wise, leaving only the field strength E 1 at the location R/ 2. From the two polynomial equations, the electric field strength E(r, ω) follows defined by range E(r, ω) = { E Z ( − r + R 2 , ω), [0 ≤ r ≤ R/ 2] : Section I E Z (r − R 2 , ω), [R/ 2 ≤ r ≤ R] : Section II for the circular conductor. A superposition of all electric fields involved (constructive superposition in section II) causes a higher voltage and thus a higher current density J at the edge of the conductor. This concentrates the current conduction on the outer area of the conductor, which is called the skin effect. Inside the conductor (section I), this leads to a destructive superposition between the field E 1 and all other electric fields involved. This can lead to a current backflow (negative current density) inside the conductor (see fig. 5.6). In the polynomials of the equations (5.1) and (5.2), all parameters such as angular frequency ω and conductor radius r that influence the current displacement are evident. These are discussed as follows: • At the edge of the conductor E 3 and E 2 take the same direction as E 1 . 
At the centre of the conductor E 3 and E 2 are opposite to the field strength E 1 . • ω = 0: DC, all terms involving ω vanish. The current density is homogeneously distributed over the conductor cross-section. See figures 5.1 b) and 5.4. • ω → ∞ : AC, the individual terms increase in value. The polynomial equation values will take a maximum at the edge of the conductor and a minimum at the centre of the conductor. Compare the result in fig. 5.4. • r = 0: According to the modelling, all terms containing the parameter r disappear. This leaves the field strength E 1 . • 0 > r ≥ R/ 2: Field weakening in the centre of the conductor (destructive superposition of all E-fields). • R/ 2 > r ≥ R: Field enhancement at the edge of the conductor (constructive superposition of all E-fields). <?page no="168"?> 142 Current displacement in conductor • r = R: At this point, the maximum current displacement occurs for ω > 0. Increasing the radius R while keeping the angular frequency constant increases the effect of current displacement, which has already been validated with the results in figures 5.4, 5.5 and 5.6. • Waveguide with r i < r ≤ R: The definition of the integration limits for deriving the individual E-field terms in fig. 5.3 shows that for r = r i , with r i < R the contributions of the individual terms are smaller and thus reduce the effect of current displacement. • ωR: The angular frequency ω and the conductor radius R multiplicatively influence the current displacement. With an increasing conductor radius R, for example, the angular frequency ω must be reduced in order to keep ω R, the effect of current displacement constant. If an electric field strength E (r) related to the conductor radius is introduced with the help of the polynomial equations, then together with the electric conductivity κ the current density J can be calculated with J = κ ˆ E (r) dr. The current flowing through the conductor is calculated with integration over the conductor cross-sectional area A 1 (fig. 5.2 a) I = ˆ ˆ Ω J dA 1 = 2π ˆ R 0 J (r) r dr. 5.2 Current displacement in the conductor - calculation result The two polynomials of the equations (5.1) and (5.2) represent the current displacement in the circular conductor and are applied to a circular conductor with a radius of R = 0.001 m. For this purpose, the angular frequency ω was chosen as the share parameter. <?page no="169"?> 5.3 Current displacement in the conductor - simulation result 143 Figure 5.4: Calculation result - normalized electric field strength E Z (r, ω) and E R (r, ω) as a function of conductor radius and angular frequency ω as share parameters The MATLAB result is shown in fig. 5.4. The values of the polynomial equations each take a minimum at the centre of the conductor and a maximum at the edge of the conductor. The current density and thus the current conduction concentrates with increasing angular frequency at the edge of the conductor and can assume negative values at the center of the conductor. The characteristic of the electric field over the conductor cross-section is the cause of the radial current density distribution J (r). 5.3 Current displacement in the conductor - simulation result Figures 5.5 and 5.6 show the simulation results of current displacement J(r) in three copper conductors using MATLAB Partial Differential Equation Toolbox. The conductor diameter is d 1 = 2 mm. The other diameters behave like d 2 = 2 d 1 and d 3 = 3 d 1 . 
In the model, a specific electrical conductivity of copper was assumed to <?page no="170"?> 144 Current displacement in conductor be κ = 56.2 · 10 6 1/ (Ωm). The maximum current density takes the value 1. Negative values indicate a reversal of direction of the current density in the conductor. The current displacement is constant around the circumference of the conductor. The results are discussed as follows: Figure 5.5: Simulation result - real part of the current density J(r) in the copper conductor at f = 1 kHz • Both figures show that the effect of current displacement becomes stronger with increasing conductor diameter. • In fig. 5.5 the current displacement was simulated with an angular frequency of 1 kHz. Significant current displacement occurs in the left conductor. In contrast, no current displacement is visible in the right conductor. • In fig. 5.6 the angular frequency was increased to 3 kHz. All three conductors show the effect of current displacement. The negative current density in the left conductor corresponds to a reversal of current direction. For the interested reader, [54] chap. 3.3 the current displacement is derived and calculated by means of the field diffusion equation. <?page no="171"?> 5.4 Current displacement in conductors - summary 145 Figure 5.6: Simulation result - real part of the current density J(r) in the electrical copper conductor at f = 3 kHz 5.4 Current displacement in conductors - summary The usual introduction to the theory of current displacement in conductors in the literature is often done using the diffusion equation. See [58], (p. 172 ff.); [65], (p. 287 ff.) and [66], (p. 554 ff.). Interestingly, Maxwell’s equations in their integral form provide direct access to the theory of current displacement in conductors, which leads to series expansions and describes the effect of current displacement. The findings obtained in this chapter are summarised as follows: • The analytical model for calculating the current displacement is to be assigned to category A according to chap. 29 to category A (model classification). • The modelling is carried out with two Maxwell equations (Ampere’s and Faraday’s law) in their integral forms. • The modelling is carried out in the complex image domain, which simplifies the time derivatives and is conducive to comprehensibility and readability. • The effect of current displacement is represented by means of superposition of all <?page no="172"?> 146 Current displacement in conductor electric fields involved, which form the cause of the current density distribution in the conductor. • The result of the modelling are polynomial equations for the description of the radial E-field distribution, in which the parameters conductor radius R and angular frequency ω make the effect of current displacement visible. • The modelling is alternatively possible with the approach of the current density c 2 ˛ ∂Ωo � B ds = ˆ ˆ Ωo � J ε 0 dA. For this purpose, J = κE can optionally be assumed. • The procedure for modelling is suitable for introducing the theory of current displacement. <?page no="173"?> Chapter 6 Bessel equation and Bessel function Bessel equations and Bessel functions are of great importance in natural science and technology for the calculation of cylindrically symmetrical problems. 
In this chapter, the person Friedrich Wilhelm Bessel is acknowledged, Bessel equations are derived, and application examples are named and converted into Bessel functions, which solve the Bessel equations. In [35] mathematical-physical application examples are named which lead to Bessel functions: oscillations of a homogeneous chain, heat-conduction problems and ”Kepler's problem“ of calculating planetary motions. In the following • the person Friedrich Wilhelm Bessel is introduced, and • the Bessel equation and its solution, • the Bessel equation from the field diffusion equation, • the Bessel function for the field distribution in a plate capacitor, • the Bessel function for the field distribution in a cylindrical coil, • the Bessel function from the general form of the Bessel equation will be derived. For example, figures 6.1 and 6.5 show cylindrically symmetrical arrangements, a capacitor and a cylindrical coil, which are used to derive Bessel functions. Both arrangements have in common a cause that produces an effect, which in turn becomes the cause of the subsequent effect; this chain can be continued indefinitely. In fig. 6.1 a) a time-varying electric field E between the capacitor plates causes, at increasing frequency, a time-varying magnetic field B, which in turn causes an <?page no="174"?> 148 Bessel equation and Bessel function electric field counteracting the cause. The resulting electric field between the capacitor plates is therefore weakened with increasing frequency. Fig. 6.5 b) shows a cylindrical coil, within which the resulting flux density decreases with increasing frequency. These effects are described below by means of the Bessel functions to be derived. 6.1 On the person Friedrich Wilhelm Bessel Friedrich Wilhelm Bessel was a German astronomer and mathematician. He was born in Minden in 1784 and died in Königsberg in 1846. At first he worked as a merchant. In 1806 he became an observer at the private observatory of J. H. Schröter in Lilienthal and in 1810 professor of astronomy and director of the observatory in Königsberg. He is considered the most important astronomer of the first half of the 19th century. Bessel published more than 350 papers and, after presenting an orbit determination of Halley's comet, was supported above all by H. W. M. Olbers. As the founder of astrometry, Bessel laid the foundations for the exact determination of the positions of celestial bodies and in 1838 was the first to determine a stellar parallax (and hence the distance of a celestial body), that of the star 61 Cygni in the constellation Cygnus (the Swan). From this, Bessel derived the first reliable stellar distance. In 1844 he also deduced, from the variability of a star's proper motion, the existence of companion stars (double stars) that were not yet observable at the time, and he investigated aberration (the apparent change in the position of the stars due to the finite speed of light and the observer's motion), precession (the movement of an axis), nutation (short-period fluctuations of the precession) and the obliquity of the ecliptic (the Earth's orbital plane). Bessel demonstrated the fluctuation of the polar altitude in 1844. He delivered important work on geodesy and geophysics, especially on the exact definition of astronomical coordinate systems, on potential and perturbation theory (introduction of the cylinder or Bessel functions) and on the dimensions of the Earth's ellipsoid.
6.2 Bessel equation and solutions

The general form of the Bessel equation is, according to [35], eq. (4),
\[ x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + \left(x^2 - \nu^2\right) y = 0, \qquad (6.1) \]
whose general solution is the cylinder function Z_ν(x),
\[ Z_\nu(x) = C_1\, J_\nu(x) + C_2\, Y_\nu(x). \]
Here J_ν(x) is the Bessel function of the first kind and Y_ν(x) is the Bessel function of the second kind. The latter is also called the Neumann function. The functions are defined by
\[ J_\nu(x) = \sum_{k=0}^{\infty} \frac{(-1)^k\,(x/2)^{\nu+2k}}{k!\,\Gamma(\nu+k+1)}, \qquad Y_\nu(x) = \frac{J_\nu(x)\cos(\pi\nu) - J_{-\nu}(x)}{\sin(\pi\nu)}. \]
Selected example solutions are given below:

• For ν = ±(n + 1/2); n = 0, 1, ...:
\[ J_{1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\sin x, \qquad J_{-1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\cos x, \]
\[ J_{3/2}(x) = \sqrt{\frac{2}{\pi x}}\left(\frac{\sin x}{x} - \cos x\right), \qquad J_{-3/2}(x) = -\sqrt{\frac{2}{\pi x}}\left(\frac{\cos x}{x} + \sin x\right), \]
\[ J_{n+1/2}(x) = \sqrt{\frac{2}{\pi x}}\left[ \sin\!\left(x - \frac{n\pi}{2}\right) \sum_{k=0}^{\lfloor n/2\rfloor} \frac{(-1)^k\,(n+2k)!}{(2k)!\,(n-2k)!\,(2x)^{2k}} + \cos\!\left(x - \frac{n\pi}{2}\right) \sum_{k=0}^{\lfloor (n-1)/2\rfloor} \frac{(-1)^k\,(n+2k+1)!}{(2k+1)!\,(n-2k-1)!\,(2x)^{2k+1}} \right], \]
\[ J_{-n-1/2}(x) = \sqrt{\frac{2}{\pi x}}\left[ \cos\!\left(x + \frac{n\pi}{2}\right) \sum_{k=0}^{\lfloor n/2\rfloor} \frac{(-1)^k\,(n+2k)!}{(2k)!\,(n-2k)!\,(2x)^{2k}} - \sin\!\left(x + \frac{n\pi}{2}\right) \sum_{k=0}^{\lfloor (n-1)/2\rfloor} \frac{(-1)^k\,(n+2k+1)!}{(2k+1)!\,(n-2k-1)!\,(2x)^{2k+1}} \right], \]
\[ Y_{1/2}(x) = -\sqrt{\frac{2}{\pi x}}\,\cos x, \qquad Y_{-1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\sin x, \]
\[ Y_{n+1/2}(x) = (-1)^{n+1} J_{-n-1/2}(x), \qquad Y_{-n-1/2}(x) = (-1)^{n} J_{n+1/2}(x). \]

• For ν = ±n, n = 0, 1, 2, ...:
\[ J_{-n}(x) = (-1)^n J_n(x), \qquad Y_{-n}(x) = (-1)^n Y_n(x). \]

• For arbitrary (not necessarily integer) ν, the integral representations hold:
\[ J_\nu(x) = \frac{1}{\pi}\left[ \int_0^{\pi} \cos\left(x\sin\Theta - \nu\Theta\right)\,d\Theta - \sin(\pi\nu) \int_0^{\infty} e^{-x\sinh t - \nu t}\,dt \right], \]
\[ Y_\nu(x) = \frac{1}{\pi}\left[ \int_0^{\pi} \sin\left(x\sin\Theta - \nu\Theta\right)\,d\Theta - \int_0^{\infty} \left( e^{\nu t} + e^{-\nu t}\cos(\pi\nu) \right) e^{-x\sinh t}\,dt \right]. \]

For the derivation see also [35], p. 54. An in-depth introduction to the Bessel functions of the first kind, definitions, properties, series expansions and representations by certain integrals can be found in [35]. The treatment of the Bessel functions of the second kind is summarised in detail in [36].

6.3 Bessel equation of the field diffusion equation

One possible way to arrive at the Bessel equation is to derive the field diffusion equation, which is then converted into a Bessel equation in its general form. With Ampère's law and the resulting electric field strength,
\[ \operatorname{rot}\vec{B} = \vec{J}\,\mu_0 = \kappa\vec{E}\,\mu_0 \quad\Rightarrow\quad \vec{E} = \frac{1}{\kappa\mu_0}\,\operatorname{rot}\vec{B}, \]
and with the involvement of the law of induction,
\[ \operatorname{rot}\vec{E} = -\frac{\partial\vec{B}}{\partial t}, \]
it follows by inserting that
\[ \frac{1}{\kappa\mu_0}\,\operatorname{rot}\operatorname{rot}\vec{B} = -\frac{\partial\vec{B}}{\partial t}. \]
The use of the relationship of vector analysis
\[ \operatorname{rot}\operatorname{rot}\vec{B} = \underbrace{\operatorname{grad}\operatorname{div}\vec{B}}_{=0} - \Delta\vec{B} \]
allows the simplified representation
\[ \frac{1}{\kappa\mu_0}\left(-\Delta\vec{B}\right) = -\frac{\partial\vec{B}}{\partial t}. \]
The further transformation leads to the diffusion equation in its differential form,
\[ \Delta\vec{B} = \kappa\mu_0\,\frac{\partial\vec{B}}{\partial t}. \]
In the continuation, the equation is developed using cylindrical coordinates. For this purpose
\[ \Delta\vec{B} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial B}{\partial r}\right) + \underbrace{\frac{1}{r^2}\frac{\partial^2 B}{\partial\Phi^2}}_{=0} + \underbrace{\frac{\partial^2 B}{\partial z^2}}_{=0}. \]
The application of the product rule,
\[ \frac{\partial}{\partial r}\left(r\frac{\partial B}{\partial r}\right) = r\frac{\partial^2 B}{\partial r^2} + \frac{\partial B}{\partial r}\frac{\partial r}{\partial r} = r\frac{\partial^2 B}{\partial r^2} + \frac{\partial B}{\partial r}, \]
leads to the partial derivative
\[ \frac{1}{r}\left[ r\frac{\partial^2 B}{\partial r^2} + \frac{\partial B}{\partial r} \right] = \frac{\partial^2 B}{\partial r^2} + \frac{1}{r}\frac{\partial B}{\partial r}. \]
By substituting, the diffusion equation follows as a partial differential equation of 2nd order in cylindrical coordinates,
\[ \frac{\partial^2 B}{\partial r^2} + \frac{1}{r}\frac{\partial B}{\partial r} = \kappa\mu_0\,\frac{\partial B}{\partial t}. \]
Here B = B_z(r, t). This equation has to be transformed into the form of eq. (6.1), which is done step by step in the following.
At the beginning, the transformation of the equation into the complex image domain is carried out by substituting the flux density B with
\[ B_z(r, t) = \hat{B}_z(r)\,e^{j\omega t}, \]
with which the diffusion equation
\[ \frac{d^2\hat{B}_z(r)}{dr^2}\,e^{j\omega t} + \frac{1}{r}\frac{d\hat{B}_z(r)}{dr}\,e^{j\omega t} = j\omega\kappa\mu_0\,\hat{B}_z(r)\,e^{j\omega t} \]
follows for the complex image domain. The time derivative has thereby been converted into a multiplication of the dependent variable by jω, which now appears in the term of order zero. The division by e^{jωt} with rearrangement of the equation leads to
\[ \frac{d^2\hat{B}_z(r)}{dr^2} + \frac{1}{r}\frac{d\hat{B}_z(r)}{dr} + \left(0 - j\omega\kappa\mu_0\right)\hat{B}_z(r) = 0. \]
The multiplication with r²,
\[ r^2\frac{d^2\hat{B}_z(r)}{dr^2} + r\frac{d\hat{B}_z(r)}{dr} + \left(0 - j\omega\kappa\mu_0\right) r^2\,\hat{B}_z(r) = 0, \]
further approximates the equation to the desired form of eq. (6.1). The bracket (0 − jωκμ0) r² still contains the imaginary unit. The equation is completed with the substitution k² = −jωκμ0 and is extended with k and k²,
\[ \frac{r^2 k^2}{k^2}\frac{d^2\hat{B}_z(r)}{dr^2} + \frac{r k}{k}\frac{d\hat{B}_z(r)}{dr} + \left(0 + r^2 k^2\right)\hat{B}_z(r) = 0. \]
In order to achieve a further approximation to eq. (6.1), a renewed substitution a² = k²r² (i.e. a = kr) and a rearrangement are necessary, which changes the notation to
\[ a^2\frac{d^2\hat{B}_z}{da^2} + a\frac{d\hat{B}_z}{da} + \left(a^2 + 0\right)\hat{B}_z = 0, \]
the Bessel equation of order zero (ν² = 0) in the form of eq. (6.1), whose solution of the first kind is sought in the following.

6.4 Bessel function for calculating the field distribution in a capacitor

The subject of the following investigation is the plate capacitor as shown in fig. 6.1 a) and b). It is known that the capacitance decreases with increasing excitation frequency. This effect is to be verified with the help of the Bessel function to be derived.

6.4.1 Model arrangement

For a capacitor consisting of two circular metallic plates of radius R arranged in parallel at a distance h according to fig. 6.1 a), the influence of the frequency on the radial electric field distribution between the capacitor plates is to be investigated.

Figure 6.1: Capacitor arrangement with areas and their boundaries

6.4.2 Derivation of the Bessel function

The assumptions made in the sequel aim at representing the radial distribution of the electric field in the capacitor by means of a Bessel function. The necessary mathematical description is done in complex notation. In fig. 6.1 b) the surface A1 with its boundary Γ1, which is penetrated by the magnetic flux density B, and the surface A2 with its boundary Γ2, which bounds the electric field strength E, can be seen. Due to the high conductivity of both capacitor plates, the tangential components of the electric field strength vanish. An alternating current through the capacitor of fig. 6.1 a) causes, at high frequency, an increasing, time-varying magnetic field, which encloses the capacitor at its circumference and the area A1 in the edge region according to fig. 6.1 a). This boundary field results from an infinitesimal consideration of individual capacitor surface elements ΔA2, assumed to be differential, which are perpendicularly penetrated by the time-varying electric field E and are therefore enclosed by a time-varying magnetic field B. Within neighbouring finite surface elements, the magnetic B-field cancels itself out (destructive superposition of the fields).
Thus, a resulting magnetic B-field remains at the boundary, which encloses all surface elements (constructive superposition of the fields). The E-field caused by the B-field counteracts the original E-field (negative (−) z-direction), which leads to a field weakening in the edge regions of the capacitor. In fig. 6.2 these interactions are shown for the steps n = 0 to n = 2. The resulting electric fields are given by
\[ E = E_1 - E_2 + E_3 - \cdots = \left[ 1 - \left(\frac{\omega r}{2c}\right)^2 \frac{1}{(1!)^2} + \left(\frac{\omega r}{2c}\right)^4 \frac{1}{(2!)^2} - \cdots \right] E_0\,e^{j\omega t} \]
and are superimposed. In the centre of the capacitor at r = 0 only the field E_1 remains. With increasing radius and frequency, the field is weakened in the outer regions of the capacitor. With the substitution a = ωr/c follows
\[ E(a) = \left[ 1 - \frac{1}{(1!)^2}\left(\frac{a}{2}\right)^2 + \frac{1}{(2!)^2}\left(\frac{a}{2}\right)^4 - \frac{1}{(3!)^2}\left(\frac{a}{2}\right)^6 + \cdots \right] E_0\,e^{j\omega t}, \]
which can be represented in the shortened summation notation
\[ E(a) = \sum_{m=0}^{\infty} \left[ (-1)^m \frac{1}{(m!)^2}\left(\frac{a}{2}\right)^{2m} \right] E_0\,e^{j\omega t} = J_0(a)\,E_0\,e^{j\omega t}. \]
The equation corresponds to a zero-order Bessel function of the first kind. See also [32], p. 23-4. The electric field at the edge of the capacitor thus experiences an attenuation compared to the interior of the capacitor. In fig. 6.3 the behaviour of the Bessel function J_0(a) for a = [0, 2.6] is shown. For ω = 0 the static homogeneous electric field remains in the capacitor. With increasing frequency and a chosen radius R, the electric field at this point decreases according to fig. 6.3. A further increase in frequency causes the electric field at the edge to become zero, and it can even invert. In fig. 6.4 the electric field E between the capacitor plates is sketched as a function of the individual terms. The field superposition causes the electric field to decrease at the edge of the capacitor plates.

Figure 6.2: Procedure for deriving the zero-order and first kind Bessel function using the example of the radial electric field distribution in a capacitor

Figure 6.3: Curve of the Bessel function J_0(a)

Figure 6.4: Field progression as a function of individual E-field terms

6.5 Bessel function for calculating the flux density within a coil

The subject of the following investigation is the cylindrical air coil as shown in fig. 6.5 a) with N turns. It is known that the inductance decreases with increasing excitation frequency. This effect is to be verified with the help of the Bessel function.

6.5.1 Model arrangement

The Bessel function allows the calculation of the field distribution within a cylindrical coil. The cylindrical coil is shown in fig. 6.5 b) with the corresponding designations and dimensions. Here, the area A1 bounded by the edge Γ1 is penetrated by the flux density B and the area A2 bounded by the edge Γ2 is penetrated by the electric field strength E.

6.5.2 Derivation of the Bessel function

For the further procedure, a change from the time domain to the complex image domain takes place. The flux density B_1 in fig. 6.5 b) induces the electric field strength E_1. According to the right-hand rule, this induces the flux density B_2. The flux density B_2 induces the electric field strength E_2. These processes can be continued indefinitely. In fig.
6.6 the necessary procedure from n = 0 to n = 2 is documented. The multiplication of the electric field strength equations E 1 and E 2 in the steps n = 0 and n = 1 with ( − 1) cause the directional adjustments of the magnetic flux density courses and result in the polynomial equation B = B 1 + B 2 + B 3 + · · · = [ 1 − ( ωr 2c ) 2 1 (1! ) 2 + ( ωr 2c ) 4 1 (2! ) 2 − · · · ] B 0 e jωt . (6.2) Figure 6.5: Cylindrical coil with surfaces, boundaries and field characteristics The polynomial describes the flux density at the point R. Here, the square of the speed of light is c 2 = 1/ (μ 0 ε 0 ). With the substitution a = ωr/ (c) follows B = [ 1 − 1 (1! ) 2 ( a 2 ) 2 + 1 (2! ) 2 ( a 2 ) 4 − 1 (3! ) 2 ( a 2 ) 6 + · · · ] B 0 e jωt , = ∞ ∑ n=0 ( − 1) n 1 (n! ) 2 ( a 2 ) 2n B 0 e jωt = J 0 (a) B 0 e jωt , <?page no="185"?> 6.5 Bessel function for calculating the flux density within a coil 159 Figure 6.6: Procedure for deriving the Bessel function using a cylindrical coil which takes the form of the Bessel function of zero order and first kind and whose curve is shown in fig. 6.7. The magnetic flux density therefore shows a frequency and radius <?page no="186"?> 160 Bessel equation and Bessel function dependence and also alternates in sign. In the interior of the coil, opposing, locally distributed flux densities can occur simultaneously. If the flux density B 0 is formulated as a function of an excitation current, the polynomial equation is integrated over the coil area member by member and the magnetic flux thus obtained is represented over the excitation current, this leads to the inductance L. 6.6 Bessel function from general form of Bessel equation The Bessel equation in its general form a 2 d 2 B z (r) da 2 + a dB z (r) da + ( a 2 + ν 2 ) B z (r) = 0 is converted into its Bessel function as infinite series for solution. For this purpose, the Bessel equation is converted by means of division by a 2 , which results in the Bessel equation d 2 B z (r) da 2 + 1 a dB z (r) da + ( 1 − ν 2 a 2 ) B z (r) = 0 (6.3) is converted into its normal form. At a = 0, a singularity arises. In order to circumvent this, an approach with the Frobenius power series B z (r) = a σ ∞ ∑ n=0 B n a n (6.4) dB z (r) da = ∞ ∑ n=0 (n + σ) B n a n+σ − 1 d 2 B z (r) da 2 = ∞ ∑ n=0 (n + σ) (n + σ − 1) B n a n+σ − 2 is chosen. Let n be assumed to be an integer. By substituting these relations into eq. (6.3) and then multiplying by a 2 − σ , it follows that ∞ ∑ n=0 (n + σ) (n + σ − 1) B n a n + 1 a ∞ ∑ n=0 (n + σ) B n a n+1 + ( 1 − ν 2 a 2 ) ∞ ∑ n=0 B n a n+2 = 0. <?page no="187"?> 6.6 Bessel function from general form of Bessel equation 161 Table 6.1: Determination of the coefficients B n A subsequent summary by multiplying out the parentheses, applying the first binomial theorem as well as a power law leads to ∞ ∑ n=0 [ (n + σ) 2 − ν 2 ] B n a n + ∞ ∑ n=0 B n a n+2 = 0. The equation must be zero for all values of a. Since a is the independent variable, it can take a non-zero value. The different powers of a also do not allow the equation to be satisfied. Thus, it remains that the coefficients of a itself must assume the value zero. The coefficient B n is contained in both terms of the equation. This leads to the assumption to set B n = 0, which satisfies the equation. 
Another way to satisfy the equation is to shift the indices ∞ ∑ n=0 [ (n + σ) 2 − ν 2 ] B n a n + ∞ ∑ n=0 B n − 2 a n = 0 ∞ ∑ n=0 [( (n + σ) 2 − ν 2 ) B n + B n − 2 ] a n = 0, which leads to a recurrence equation in which the independent variable a only appears with the same power and a n can be excluded. For the recurrence equation n ≥ 2 applies, with the consequence that B − 2 = B − 1 = 0. The determination of B n is done with <?page no="188"?> 162 Bessel equation and Bessel function ∞ ∑ n=0 [ (n + σ) 2 − ν 2 ] B n a n = ∞ ∑ n=0 − B n − 2 a n . If in the continuation σ = ± ν is set and transformed, the following follows ∞ ∑ n=0 B n a n = ∞ ∑ n=0 − 1 n (n ± 2ν) B n − 2 a n . In tab. 6.1 the coefficients B n are defined by way of example. The gamma function yields the following results: • Γ(1) = 1, • Γ(1 ± ν) = ν! , if ν is positive and integer, • Γ(ν) = ∞ , if ν is an integer ≤ 0. The derivation of the gamma function can be found in [60], p. 635 f. and as well in [43]. The result of the gamma function flows into the solution of the Bessel equation in the form of the Frobenius power series according to eq. (6.4), a Bessel function with chosen zero order ν = 0 and first kind B(a) = ( 1 − a 2 2 · 2 + a 4 2 2 4 2 − · · · ) B 0 = [ 1 − ( a 2 ) 2 1 (1! ) 2 + ( a 2 ) 4 1 (2! ) 2 − · · · ] ︸ ︷︷ ︸ J 0 (a) B 0 , or in summation notation B(a) = ∞ ∑ m=0 ( − 1) m a 2m 2 2m m! Γ(1 + m) B 0 = ∞ ∑ m=0 ( − 1) m a 2m 2 2m (m! ) 2 B 0 = J 0 (a) B 0 . (6.5) In fig. 6.7 are the curves of the Bessel functions of the first order ν = 0 to ν = 3 as solution of the Bessel equation eq. (6.3). In contrast to the other orders, the zero order has the function value J 0 (0) = 1. The MATLAB code required for this can be seen below: <?page no="189"?> 6.6 Bessel function from general form of Bessel equation 163 Figure 6.7: Examples of Bessel functions of the first kind with order ν as share parameter a = 0: 0.2: 15; figure; plot(a,besselj(0,a),’r-’,a,besselj(1,a),’k--’,a,besselj(2,a),’k-o’, a,besselj(3,a),’k-d’,’Linewidth’,2); grid on; ax = gca; ax.FontSize = 14; xlabel(’a’); ylabel(’J(a)’); legend(’J_0(a), \nu = 0’,’J_1(a), \nu = 1’,’J_2(a), \nu = 2’, ’J_3(a), \nu = 3’); print -depsc2 -tiff Bessel_01.eps print -dpng Bessel_01.png <?page no="191"?> Chapter 7 Solution of differential equations using Green’s functions George Green came from the simplest of backgrounds and became an important mathematician. He is representative of many who have not yet discovered themselves, but can do so soon. In the continuation, the person George Green is introduced and excerpts from his methods are introduced with applications. 7.1 About George Green George Green (1793-1841), born in Nottingham, was a British mathematician and physicist and a miller. Green worked in his father’s mill. As a young boy he possessed a keen interest in mathematics and at the age of eight was sent to the Robert Goodacre Academy in Lower Parliament Street, Nottingham. In his later work he was concerned with potential functions, addition and interchange theorems and Gaussian satisfying integral equations of two parameter functions in spatial domains excluding discontinuities. His publication ”An Essay on the Application of Mathematical Analysis in the Theories of Electricity and Magnetism“ (1828) introduces, among other things, the integral theorems named Green’s theorems [37]. Other recommended literatures are [40], chap. 1.10; [50], chap. 8; [52] Art. 100 and Art. 101 and [60], chap. 15 and chap. 21. 
<?page no="192"?> 166 Solution of differential equations using Green’s functions Figure 7.1: Procedures for solving the DGL using Green’s method One of the basic problems of field theory is the construction of solutions for linear differential equations (DGLs) when there is a defined source and the differential equation must satisfy given boundary conditions. Green’s method allows the solution of a wide variety of differential equation types for which there may be no alternative analytical solution. In general, Green’s functions tend to be distribution functions. The Green’s function is the solution for differential equations with a source term given by a point source. Practically, the solution of the same differential equation with any source term can be done point by point by integrating the Green’s function over the source term. This is equivalent to an uncountable number of superpositions of solutions of equations with the point source, which is why the linearity of the differential operator is important. A superposition always presupposes a linearity of the system. In the sequel, according to fig. 7.1 it follows <?page no="193"?> 7.1 About George Green 167 Figure 7.2: Summary and systematics of differential equations for solution by means of Green’s functions • the overview of common differential equations in fig. 7.2, which can be solved with the help of Green’s functions, • the derivation of Green’s integral theorems, • the explanation of the principle leading to the Green’s function, • the preparations of the PDEs to solve by Green’s function in differential and integral form, • the inclusion of the boundary conditions, • the preparation of the ODEs considering the boundary and continuity conditions, • the solution of chosen PDEs and ODEs by means of Green’s function. The solution of PDEs and ODEs consists of finding a suitable Green’s function with the inclusion of boundary conditions. <?page no="194"?> 168 Solution of differential equations using Green’s functions 7.2 Green’s integral theorems The derivation of Green’s integral theorems is done by means of Gauss’ integral theorem ‹ ∂Ω � F �n dA = ˚ Ω div � F dV, Figure 7.3: Sources located in volume Ω whose vector field � F exits via the surface ∂Ω where � F represents a curl-free source field. See also Art. 25 ”On the effect of the operator ∇ on a vectorfunction“ [52], p. 25 and fig. 7.3. Here the curl-free source field � F penetrates out of (or into) the volume Ω via the surface ∂Ω. With the substitution � F = v ∇ u follows ‹ ∂Ω (v ∇ u) �n dA = ˚ Ω div (v ∇ u) dV. With the freely selectable scalar functions u and v and the relations • ∇ u �n = ∂u ∂n , ∇ v �n = ∂v ∂n , • div (v ∇ u) = ∇ v ∇ u + v ∇ 2 u, div (u ∇ v) = ∇ u ∇ v + u ∇ 2 v follows the first Green’s equation (first Green’s theorem) ‹ ∂Ω v ∂u ∂n dA = ˚ Ω ( ∇ v ∇ u + v ∇ 2 u ) dV (7.1) as well as <?page no="195"?> 7.3 PDE - arrangements of evaluation points and integration points 169 ‹ ∂Ω u ∂v ∂n dA = ˚ Ω ( ∇ u ∇ v + u ∇ 2 v ) dV. (7.2) By subtracting eq. (7.2) from eq. (7.1) follows ‹ ∂Ω ( v ∂u ∂n − u ∂v ∂n ) dA = ˚ Ω ( v ∇ 2 u − u ∇ 2 v ) dV (7.3) the second Green’s equation (second Green’s theorem). The development of the theorems can be seen in [37] as well as [52] Art. 100. Following fig. 7.3, it may be noted that the Gaussian integral theorem is applicable to a volume enclosed by a multiply (in this example doubly) connected region. The surface integral of eq. (7.3) describes in fig. 
7.4 the integration over the surfaces ∂Ω 1 and ∂Ω 2 bounding the difference volume Ω = Ω 1 − Ω 2 . Note that the positive surfaces extend away from the volume, as indicated by the normal vectors �n in fig. 7.4. A useful application of Gauss’s theorem is known as Green’s theorem ([59], p. 21 f.). Figure 7.4: Volume with multiple connected regions 7.3 PDE - arrangements of evaluation points and integration points In fig. 7.5 a case distinction is made between a point charge arrangement which is located in the centre of the coordinate system (first column) and outside the coordinate origin (second column). The point charge Q receives the notation P 0 , which is called the integration point. <?page no="196"?> 170 Solution of differential equations using Green’s functions Figure 7.5: Potential calculations of point charges, charge accumulations and volume charge densities At point P 1 , which is called the evaluation point, the potential ϕ is to be calculated. Furthermore, the equations for calculating the potential from point charges and volume charge densities are named in the right column. The fig. 7.5 a) shows the point charge arrangement in the centre of the 2D coordinate system. At the location x 1 the evaluation point P 1 is shown. The distance between the integration point and the evaluation point is therefore the distance x 1 . This changes in fig. 7.5 b) in that the integration point and thus the point charge Q has been positioned away from the origin at the location P 0 = x 0 . The distance between the evaluation point and the point of integration is therefore | x 1 − x 0 | . In fig. 7.5 d) follows the change into the 3D coordinate system, with centric arrangement of the point charge Q. Thus �r 0 is at the point x 0 = 0. The description of the distance between integration and evaluation point is only done with <?page no="197"?> 7.3 PDE - arrangements of evaluation points and integration points 171 the radius �r 0 . In fig. 7.5 e) the point charge Q is positioned at the integration point P 0 and described with the radius �r 0 . The distance between the integration point and the evaluation point is therefore | �r 1 − �r 0 | . In the figures 7.5 g) and h), point charge accumulations summarised in the volume charge densities ρ, both in spherical forms. Both figures have in common that the evaluation point P 1 is located outside the charged volume. In fig. 7.5 g) the charge-affected region is arranged centrically in the coordinate origin and the special case is drawn in which a point charge is arranged centrically and thus the evaluation point radius �r 0 = 0. If all other point charges are used for the calculation, the evaluation point radius thus becomes �r 0 � = 0, which corresponds to the procedure in fig. 7.5 h). In this figure, the charge-affected region has also been shifted out of the coordinate origin. The relation between the volume charge density ρ, the individual charges q and the total charge Q is given by ρ = lim ΔV → 0 ΔQ ΔV Q = ˚ Ω ρ dV = ∑ q i . The potential ϕ can be calculated on the one hand with a superposition of all single charges q, on the other hand with the integral over the volume charge density ρ. 
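The equivalence of the two computation paths, the superposition of single charges and the integration over the volume charge density, can be made tangible with a minimal MATLAB sketch. The sketch below is our own illustration; the sphere radius R, the total charge Q, the position of the evaluation point P1 and the grid resolution are freely assumed values that do not come from the text. The sphere is discretised into small volume elements dV, each treated as a point charge ρ dV, and the point-charge potentials are superposed at P1 and compared with the potential of the total charge Q.

% Minimal sketch: potential of a uniformly charged sphere at an outside point,
% once as a superposition of discretised point charges, once as Q/(4*pi*eps0*r1).
% R, Q, r1 and the grid size n are assumed example values.
eps0 = 8.854e-12;                      % permittivity of free space
R    = 0.1;  Q = 1e-9;                 % sphere radius and total charge (assumed)
rho  = Q/(4/3*pi*R^3);                 % uniform volume charge density
r1   = [0.5 0 0];                      % evaluation point P1 outside the sphere

n  = 40;                               % grid points per axis (assumed)
x  = linspace(-R, R, n);
[X, Y, Z] = meshgrid(x, x, x);
dV = (x(2)-x(1))^3;                    % volume of one grid cell
inside = (X.^2 + Y.^2 + Z.^2) <= R^2;  % cells belonging to the charged sphere

% distance between each charged cell (integration points P0) and P1
dist = sqrt((r1(1)-X(inside)).^2 + (r1(2)-Y(inside)).^2 + (r1(3)-Z(inside)).^2);
phi_num = sum(rho*dV ./ (4*pi*eps0*dist));   % superposition of point charges
phi_ana = Q/(4*pi*eps0*norm(r1));            % potential of the total charge Q

fprintf('phi numeric  : %.4f V\n', phi_num);
fprintf('phi analytic : %.4f V\n', phi_ana);

For the assumed values, the superposed point-charge potential agrees with Q/(4πε0 r1) up to the discretisation error of the grid, which illustrates the transition from the sum over single charges q to the volume integral over ρ.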
Now, with the help of the figures 7.5 d) and e) it can be argued that a point charge is a charge distribution (charge density ρ) which exists only at the point P 0 at x = x 0 , y = y 0 and z = z 0 and integrates over an infinitesimal volume ˆ x=(x 0 +�) x=(x 0 − �) ˆ y=(y 0 +�) y=(y 0 − �) ˆ z=(z 0 +�) z=(z 0 − �) ρ dx dy dz = { Q @ P = P 1 (x 0 ± �, y 0 ± �, z 0 ± �) 0 @ P � = P 1 yields the point charge Q, which corresponds to the property of a delta function ˆ x=(x 0 +�) x=(x 0 − �) ˆ y=(y 0 +�) y=(y 0 − �) ˆ z=(z 0 +�) z=(z 0 − �) ρ dx dy dz = ˆ x=(x 0 +�) x=(x 0 − �) ˆ y=(y 0 +�) y=(y 0 − �) ˆ z=(z 0 +�) z=(z 0 − �) Q δ(x − x 0 , y − y 0 , z − z 0 ) dx dy dz = Q and whose introduction is shown in fig. 1.6. Furthermore, at the point r = r 0 in the spherical coordinate system with dV = 4πr 2 dr <?page no="198"?> 172 Solution of differential equations using Green’s functions 4πρ ˆ r 0 0 r 2 dr = ˆ r 0 0 Q δ(r − r 0 ) dr = Q. Here is r | �r(x, y, z) | r 0 | �r 0 (x 0 , y 0 , z 0 ) | . For the calculation of charge accumulations from the figures 7.5 g) and h) the superposition principle is used, which requires linearity of the use case. 7.4 PDE - preparation for solution by Green’s - differential form The preparation of a PDE for solution by Green’s function is done in differential form. The inhomogeneous partial differential equation (PDE) of the type ∇ 2 u(r) = f(r) in the spherical coordinate system. Let the volume to be considered in fig. 7.3 b) depends only on the radius r ∈ [0, R]. Using the linear differential operator L = ∇ 2 and applying it to u(r), it follows that L u(r) = f(r). (7.4) The objective is to solve the PDE according to the variable u. This can be done by multiplication with the inverse linear operator in the general representation L − 1 L u(r) = L − 1 f(r) u(r) = L − 1 f(r), (7.5) where L − 1 L = 1. Or by introducing the Green’s function G, which is still to be determined, and drawing in the Dirac’s delta function in the equation <?page no="199"?> 7.4 PDE - preparation for solution by Green’s - differential form 173 δ(r − r 0 ) = L G(r, r 0 ), (7.6) where r 0 is in the area Ω, i.e. in the volume, and embodies the radius of the integration point. By integration over the area Ω follows ˆ Ω δ(r − r 0 ) dr = ˆ Ω L G(r, r 0 ) dr = 1. For an introduction to Dirac’s delta function, see fig. 1.6. See also [57], p. 562 f.. This is followed by the extension of eq. (7.4) with one L u(r) ˆ Ω L G(r, r 0 ) dr ︸ ︷︷ ︸ =1 = ˆ Ω δ(r − r 0 ) dr ︸ ︷︷ ︸ =1 f(r) or L u(r) ˆ Ω δ(r − r 0 ) dr = ˆ Ω L G(r, r 0 ) dr f(r) L u(r) = L ˆ Ω f(r) G(r, r 0 ) dr. The latter notation allows shortening with the linear operator. Thus the general solution follows u(r) = ˆ Ω G(r, r 0 ) f(r) dr (7.7) = � G(r, r 0 ), f(r) � of the PDE of eq. (7.4) to u, which corresponds to the solution according to eq. (7.12) including the homogeneous boundary conditions. The Green’s function is thus also the solution of eq. (7.6) and must still be determined in the continuation. In eq. (7.7) the Green’s function can be understood as the kernel of the integral. Furthermore, the solution of this equation can be interpreted as the inner product. Note that r 0 embodies the position of the discontinuity point. <?page no="200"?> 174 Solution of differential equations using Green’s functions 7.5 PDE - preparation for solution by Green’s - integral form The preparation of a PDE for solution by Green’s function is done in the integral form. 
The inhomogeneous partial differential equation (PDE) of the type ∇ 2 u(x, y, z) = f(x, y, z) (7.8) (Poisson’s DGL) in Cartesian coordinates is to be solved for u. Let the inhomogeneous term here be f(x, y, z) and represent a heat source in steady state, or a charge distribution of an electrostatic problem in a volume Ω, which according to fig. 7.3 a) is bounded by the surface ∂Ω. 7.5.1 Converting the PDE according to the variable to be solved The preparation of a PDE for solution by Green’s function is done in the integral form. Taking into account Green’s integral theorems eq. (7.1) and eq. (7.3) as well as the respective renaming of v by G, it follows that ‹ ∂Ω G ∂u ∂n dA = ˚ Ω ( ∇ G ∇ u + G ∇ 2 u ) dV ‹ ∂Ω ( G ∂u ∂n − u ∂G ∂n ) dA = ˚ Ω ( G ∇ 2 u − u ∇ 2 G ) dV. (7.9) Let G be the Green’s function that solves the PDE problem for u. By substituting eq. (7.8) into eq. (7.9) it follows ‹ ∂Ω ( G ∂u ∂n − u ∂G ∂n ) dA = ˚ Ω ( G f − u ∇ 2 G ) dV = ˚ Ω G f dV − ˚ Ω u ∇ 2 G dV = ˚ Ω G f dV − u ˚ Ω ∇ 2 G dV. A new changeover provides <?page no="201"?> 7.5 PDE - preparation for solution by Green’s - integral form 175 ˚ Ω G f dV − ‹ ∂Ω ( G ∂u ∂n − u ∂G ∂n ) dA = u ˚ Ω ∇ 2 G dV. On the left side of the equation is the volume integral over G f as well as the envelope integral, which contains the boundary conditions on the surface. On the right-hand side, the function u to be solved is in front of the volume integral. Efforts are now directed towards solving this in such a way that the integral of the right-hand side of the equation assumes the value one. For this purpose the function ∇ 2 G = δ(x − x 0 , y − y 0 , z − z 0 ) (7.10) is introduced. The right side of the equation embodies the Dirac delta function and x 0 , y 0 , z 0 lie in the volume Ω. The function can be physically interpreted as an impulse response at x = x 0 , y = y 0 and z = z 0 . A brief description of Dirac’s delta function is given in fig. 1.6. Thus it follows ˚ Ω G f dV − ‹ ∂Ω ( G ∂u ∂n − u ∂G ∂n ) dA = u ˚ Ω δ(x − x 0 , y − y 0 , z − z 0 )dx 0 dy 0 dz 0 ︸ ︷︷ ︸ 1 u(x, y, z) = ˚ Ω G f dV − ‹ ∂Ω ( G ∂u ∂n − u ∂G ∂n ) dA (7.11) the solution of the PDE for u with a given Green’s function G G = G(x, y, z, x 0 , y 0 , z 0 ), which in this example depends on six variables and still has to be determined in the continuation. Green’s function is thus also the solution of eq. (7.10). Moreover, in eq. (7.11) no boundary conditions have been set yet. 7.5.2 Homogeneous boundary conditions The surface integral of eq. (7.11) vanishes when homogeneous boundary conditions (boundary conditions which are set to zero) are included. To be mentioned are <?page no="202"?> 176 Solution of differential equations using Green’s functions • Dirichlet boundary condition: Let u be zero on the surface ∂Ω: u = 0, or • Neumann boundary condition: Let the derivative on the surface (boundary) be ∂Ω: ∂u/ ∂n = 0. Since only one boundary condition can be fulfilled at a time, the following follows for G • G be zero on the surface ∂Ω: G = 0, or • since the coordinates x 0 , y 0 and z 0 are inside the volume, it follows for G on the surface (edge) ∂Ω: ∂G/ ∂n = 0. The simultaneous requirement to take the independent variable or its derivative as zero leads to the Cauchy boundary condition and thus to an overdeterminacy, which no longer allows us to expect a solution. The eq. 
(7.11) becomes the general solution u(x, y, z) = ˚ Ω G(x, y, z, x 0 , y 0 , z 0 ) f(x, y, z) dV (7.12) = � G(x, y, z, x 0 , y 0 , z 0 ), f(x, y, z) � , when the homogeneous boundary conditions are included which also corresponds to the solution of eq. (7.7). 7.5.3 Inhomogeneous boundary conditions The inhomogeneous boundary conditions are mentioned here for the sake of completeness and will not be discussed further here. The superposition of separated solutions offers, for example, the possibility to make a change of variables in PDEs in order to transform between the inhomogeneity of the boundary conditions and the inhomogeneity of the equation. The inhomogeneity of a PDE is determined either by the PDE itself or by the boundary conditions imposed on the solution. See also [60] chap. 21.5.2. 7.5.4 Dirichlet boundary conditions If u is given at the surface (boundary) of the volume, then a Green’s function G(r, r 0 ) must be found which is 1. the property G(r, r 0 ) = G(r 0 , r) (applies also to 2) and 3)), <?page no="203"?> 7.6 PDE - solution of Poisson’s DGL 177 2. the condition ∇ 2 G(r, r 0 ) = δ(r, r 0 ), 3. the property G(r, r 0 ) = 0, if r 0 lies on the surface, the boundary of the volume V , 4. a singularity at r = r 0 . This is called a Dirichlet-Green function and is defined for the interior of the volume V . 7.5.5 Neumann boundary conditions If ∂u/ ∂n is given at the surface (boundary) of the volume, a Green’s function G(r, r 0 ) must be found, • which takes the property ∂G(r, r 0 )/ ∂n = 0 when r 0 is on the surface, the boundary of the volume V . • or the simplest boundary condition, which is defined as Neumann-Green’s function is denoted with ∂G(r, r 0 ) ∂n = 1 A . Here r lies on the surface ∂Ω (enveloping surface A) of the boundary of the volume Ω. 7.6 PDE - solution of Poisson’s DGL By means of Green’s method, the Poisson’s DGL already known from the literature ∇ 2 ϕ = − ρ ε 0 is to be solved for the potential ϕ(ρ, r). The initial situation is a collection of point charges q as the cause of the potential ϕ ϕ = 1 4πε 0 ∑ k q k r k . <?page no="204"?> 178 Solution of differential equations using Green’s functions The sum of individual point charges q is called Q. r k describes the distance between the integration and evaluation points, as shown in fig. 7.5. With the transition of the sum of point charges into a charge density ρ Q = ˚ Ω ρ(r) dV (r), it follows again the potential already known from the literature ϕ(r) = 1 4πε 0 ˚ Ω ρ(r) | r 1 − r 0 | dV (r), cf. also fig. 7.5 i). The focus is on the short solution path compared to the standard procedure (twofold integration, determination of the integration constants, introduction of further conditions, ...). 7.6.1 Exercise description Given a sphere of radius R filled with the charge density ρ in the cylindrical coordinate system according to the figures 7.5 g) and 7.6. The arrangement with volume V (Ω) bounded by surface A (∂Ω) is spherical and centred at the coordinate origin. Thus the potential depends only on the radius r. We are looking for the potential ϕ(r, ρ) of the PDE ∇ 2 ϕ = − ρ ε 0 in the outer space with r ∈ [R, ∞ ] the array of fig. 7.6, which is described by Poisson’s DGL in spherical coordinates. As a boundary condition, let the potential ϕ(r) → 0 for r → ∞ . 7.6.2 Solution path With eq. (7.11), the potential ϕ ϕ(r) = ˚ Ω G(r, r 0 ) f(r) dV (r) − ‹ ∂Ω ( G(r, r 0 ) ∂ϕ(r) ∂n − ϕ(r) ∂G(r, r 0 ) ∂n ) dA is described. 
The inclusion of the homogeneous boundary conditions <?page no="205"?> 7.6 PDE - solution of Poisson’s DGL 179 Figure 7.6: Charge-filled sphere in a vacuum whose potential is sought in outer space • Dirichlet: G(r, r 0 ) = 0 ∂Ω, • ϕ(r → ∞ ) = 0 makes the surface integral disappear. It remains ϕ(r 0 ) = ˚ Ω G(r, r 0 ) f(r) dV (r 0 ). (7.13) Since the Green’s function we are looking for is also the solution of ∇ 2 G(r, r 0 ) = δ(r − r 0 ). The integration over the volume Ω ˚ Ω ∇ 2 G(r, r 0 ) dV (r 0 ) = ˚ Ω δ(r − r 0 ) dV (r 0 ) = 1 is carried out. By applying Gauss’ integral theorem (divergence theorem) to the lefthand side of the equation, it follows ˚ Ω ∇ 2 G(r, r 0 ) dV (r 0 ) = ‹ ∂Ω ∇ G(r, r 0 ) dA(r 0 ) = 1. At r 0 = R (i.e. on the surface) applies ∇ G(r, r 0 ) A(r 0 ) = 1. <?page no="206"?> 180 Solution of differential equations using Green’s functions The Nabla operator applied to the Green’s function in the spherical coordinate system and the evolution of the area element dA provides ∇ G(r, r 0 ) = ∂G(r, r 0 ) ∂r �e r + 1 r ∂G(r, r 0 ) ∂Θ �e Θ ︸ ︷︷ ︸ =0 + 1 r sin Θ ∂G(r, r 0 ) ∂Φ �e Φ ︸ ︷︷ ︸ =0 and dA = r 2 sin Θ dΘ dΦ ˆ dA = r 2 ˆ π 0 sin Θ dΘ ˆ 2π 0 dΦ = 4 π r 2 ∣∣∣∣ r=R r=0 . By rearranging and integrating again, Green’s function follows dG(r, r 0 ) dr 4πr 2 ∣∣ r=R r=0 = 1 dG(r, r 0 ) dr = 1 4πr 2 G(r, r 0 ) = 1 4π ˆ 1 r 2 dr = − 1 4πr + F (r, r 0 ), (7.14) where the integration constant F (r, r 0 ) may be set to zero. With the substitution of eq. (7.14) into eq. (7.13) as well as with the substitution f(r) = − ρ(r 0 ) ε 0 it follows ϕ(r) = ˚ Ω − 1 4πr − ρ(r 0 ) ε 0 dV (r 0 ) = 1 4πε 0 ˚ Ω ρ(r 0 ) | r − r 0 | dV (r 0 ), the already known potential of charge accumulation of fig. 7.6 and fig. 7.5 i) in the outer space. After integration has taken place and r � R follows ϕ(r) = 1 4πε 0 Q r the potential with the sum Q of all charges. <?page no="207"?> 7.7 PDE - solution of Laplace’s DGL 181 7.7 PDE - solution of Laplace’s DGL By means of Green’s method, the Laplace’s DGL already known from the literature ∇ 2 ϕ = 0 is to be solved, whose solution ϕ(r) = Q 4πε 0 r is also known. 7.7.1 Exercise description The Laplace’s PDE ∇ 2 ϕ = 0 is to be solved according to the potential ϕ(r) for r ∈ [0, ∞ ]. By applying the spherical coordinate system, the potential ϕ depends only on the radius r. As a boundary condition, the potential ϕ(r) → 0 for r → ∞ . 7.7.2 Solution path The general potential equation ϕ(r) = ˚ Ω G(r, r 0 ) f(r) dV (r) − ‹ ∂Ω ( G(r, r 0 ) ∂ϕ(r) ∂n − ϕ(r) ∂G(r, r 0 ) ∂n ) dA(r) is to bee solved by determining the Green’s function. With f(r) = 0 the volume integral becomes zero and the potential equation turns into a boundary value problem ϕ(r) = − ‹ ∂Ω ( G(r, r 0 ) ∂ϕ(r) ∂n − ϕ(r) ∂G(r, r 0 ) ∂n ) dA(r). The solution is done by searching boundary conditions. The application of the Dirichlet boundary condition G(r, r 0 ) = 0 on the surface A makes the first term in the righthand side of the equation disappear. Nevertheless, ∂G(r, r 0 )/ ∂n = 0 must not be set, since this would mean that the conditions <?page no="208"?> 182 Solution of differential equations using Green’s functions ‹ ∂Ω ∂G(r, r 0 ) ∂n dA r = ‹ ∂Ω ∇ G(r, r 0 ) �n dA r = ˚ Ω ∇ 2 G(r, r 0 )dV = 1 would be violated. This leads to the simplest allowed Neumann boundary condition (Neumann-Green function) ∂G(r, r 0 ) ∂n A = 1. 
Other forms of representation of the fraction are ∂G(r, r 0 ) ∂n = dG(r, r 0 ) dn = G ′ (r, r 0 ) �r | �r | = G ′ (r, r 0 ) �e r = dG(r, r 0 ) dr �e r , and lead to dG(r, r 0 ) dr = 1 A = 1 4πr 2 . At this point, another remark should be made about the unit vectors. Basically, the potential ϕ(r) is a scalar quantity. This is obtained by multiplication with the unit vector �e r dG(r, r 0 ) dr �e r A �e r = dG(r, r 0 ) dr A is achieved since �e r · �e r = 1. In the sequel, the function ϕ(r) is assigned a fixed value K on the surface of A. Thus it follows K = dG(r, r 0 ) dr A. Integrating the equation over r leads to the Green’s function we are looking for G(r, r 0 ) = ˆ r dG(r, r 0 ) dr dr = ˆ r 1 A dr = ˆ r 1 4πr 2 dr = − 1 4πr + F (r, r 0 ), where the integration constant F (r, r 0 ) may be set equal to zero. Thus the potential ϕ(r) follows with <?page no="209"?> 7.8 ODE - Preparation for the solution with the Green’s function 183 ϕ(r) = K G(r, r 0 ) = K − 1 4πr . The determination of the constant K remains. Here, we fall back on the given potential of the Poisson’s DGL in the outer space. This is identical to the potential of Laplace’s DGL. The location of the determination of both potentials is in the charge-free outer space. Therefore, the constant K follows with K = − Q ε 0 = − ˝ ρ(r 0 ) dV (r 0 ) ε 0 , where the sign decides about positive or negative charges. The solution according to the potential ϕ(r) is thus ϕ(r) = Q 4πε 0 r . 7.8 ODE - Preparation for the solution with the Green’s function Assume the nth order inhomogeneous ODE with a n (x) d n y(x) dx n + · · · + a 1 (x) dy(x) dx + a 0 (x) y(x) = f(x), (7.15) which depends only on x and is to be solved for y and f(x) represents a freely selectable function. In preparation, the linear operator L is used L = a n (x) d n dx n + · · · + a 1 (x) d dx + a 0 , which introduces the simplified notation L y(x) = f(x). (7.16) <?page no="210"?> 184 Solution of differential equations using Green’s functions The general solution of the differential equation is done by multiplication with the inverse linear operator L − 1 L − 1 L y(x) = L − 1 f(x) y(x) = L − 1 f(x), where L − 1 L = 1. The exact solution of the differential equation shall be done by Green’s method, which is equivalent to the determination of the inverse linear operator. In the sequel, the differential equation eq. (7.16) is solved by means of the Green’s function G(x, z), which is yet to be determined, taking into account the boundary conditions in the domain a ≤ x ≤ b. The Green’s function G(x, z) corresponds to the function y(x). A substitution which reveals the Green’s method. In addition, the blanking property of Dirac’s function applies (see chap. 1.9) δ(x − z) = L G(x, z) (7.17) = f(x), which physically corresponds to the impulse response of a system. Here, Green’s function G(x, z) must satisfy (i.e. be the solution of) the original ODE, which is the lefthand side of eq. (7.15). Where the left-hand side of eq. (7.17) corresponds to the delta function. The subsequent integration ˆ b a δ(x − z) dx = ˆ b a L G(x, z) dx = ˆ b a f(x) dx = 1 yields the number one. The multiplication of eq. (7.16) with one offers several possibilities, of which L y(x) ˆ b a δ(x − z) dx ︸ ︷︷ ︸ =1 = ˆ b a L G(x, z) dx ︸ ︷︷ ︸ L − 1 f(x) y(x) = ˆ b a G(x, z) dx f(x) = ˆ b a G(x, z) f(z) dz (7.18) = � G(x, z), f(z) � <?page no="211"?> 7.8 ODE - Preparation for the solution with the Green’s function 185 is chosen because the linear operator can be truncated. 
The equation y(x) to be solved is on the left-hand side and is a dependency of the Green’s function yet to be determined and can also be represented as an inner product. Applying the linear differential operator L to both sides of eq. (7.18 ) leads to L y(x) = ˆ b a [ L G(x, z)] f(z) dz = f(x). (7.19) A comparison with the delta function and its properties f(x) = ˆ b a δ(x − z) f(z) dz (7.20) shows that eq. (7.19) requires for a freely chosen function f(x) in the interval a ≤ x ≤ b. The development of Green’s function is based on the fact that whenever x � = z, then L G(x, z) = 0. This holds for x < z and x > z, expressing G(x, z) as the solution of a homogeneous ODE. From a physical point of view, the right-hand side of the equation can be evaluated as the system response to a unit impulse at x = z. Conditions must be imposed on the Green’s function G(x, z) to solve it. These are the boundary and continuity conditions. A distinction is made between • homogeneous and inhomogeneous boundary conditions, • continuity as well as discontinuity conditions. 7.8.1 Homogeneous boundary conditions For the general solution of y(x) of eq. (7.18), G(x, y) must satisfy the boundary conditions. y(x) or its derivative must take the value zero at given points. This is most easily achieved by G(x, z) itself satisfying the boundary conditions. For example, let y(a) = y(b) = 0, then G(a, z) = G(b, z) = 0 is also required. 7.8.2 Inhomogeneous boundary conditions Examples of inhomogeneous boundary conditions are y(a) = α, y(b) = β or y(0) = y ′ (0) = γ, where α, β and γ are non-zero. For a DGL of n ′ th order, n boundary conditions are generally required to solve the DGL. These n boundary conditions can be of different types: <?page no="212"?> 186 Solution of differential equations using Green’s functions • n-point boundary conditions: y(x m ) = y m , whereby m ∈ [1, n] is, • 1-point boundary conditions: y(x 0 ) = y � (x 0 ) = y �� (x 0 ) = ... = y (n − 1) (x 0 ) = y 0 , • a combination between nand 1-point boundary conditions. The simplest method for solving the ODE is to proceed according to u(x) = y(x) + p(x). Here is • u(x) the inhomogeneous ODE solved to x with inhomogeneous boundary conditions y(a) = α and y(b) = β. • y(x) the homogeneous ODE solved to x with homogeneous boundary conditions y = u(a) = u(b) = 0. • p(x) the (n − 1) � th-order polynomial satisfying the boundary conditions. Here p = mx + c, with m = (α − β)/ (a − b) and c = (βa − αb)/ (a − b). 7.8.3 Continuity and discontinuity conditions The continuity and discontinuity conditions concern the Green’s function G(x, z) and its derivatives at x = z, which are obtained by integrating eq. (7.17) to x over the small interval [z − ε, z + ε] and the condition ε → 0 lim � → 0 n ∑ m=0 ˆ z+� z − � a m (x) d m G(x, z) dx m dx = lim � → 0 ˆ z+� z − � δ(x − z) dx = 1 (7.21) can be determined with the following result: If d n G/ dx n exists at x = z with value infinity (infinite discontinuity), then • the (n − 1) � th derivative must have finite discontinuity, • while all lower orders d m G/ dx m with m < (n − 1) must have continuity at x = z. Therefore, terms containing this derivative cannot contribute to the integral of the left-hand side of eq. (7.21). <?page no="213"?> 7.9 ODE - solution of d 2 u/ dx 2 = − 1 (I) 187 A partial integration of the left-hand side of eq. 
(7.21) yields lim � → 0 ˆ z+� z − � a n (x) d n G(x, x 0 ) dx n dx = lim � → 0 ∣∣∣∣ a n (x) d n − 1 G(x, z) dx n − 1 ∣∣∣∣ z+� z − � − lim � → 0 ˆ z+� z − � a n − 1 (x) d n G(x, z) dx n dx = 1. With m ∈ [0, n − 1] it follows lim � → 0 ˆ z+� z − � a m (x) d m G(x, z) dx m dx = 0, since, due to the required continuity, the derivative may assume very small values and, moreover, the integral over the very small interval z ± ε may be neglected. Thus, only the remaining term d n G/ dx n contributes to the integral of eq. (7.21). Furthermore, there are n-conditions, where there must be continuity for G(x, z) and its derivatives up to the (n − 2) � th order at the point x = z, where d n − 1 G/ dx n − 1 is a discontinuity of 1/ a n (z) at x = z a n (z) d n − 1 G(x, z) dx n − 1 ∣∣∣∣ x=(z ± �) = 1 d n − 1 G(x, z) dx n − 1 ∣∣∣∣ x=(z ± �) = 1 a n (z) , which also indicates d n − 1 G dx n − 1 ∣∣∣∣ x=(z+�) − d n − 1 G dx n − 1 ∣∣∣∣ x=(z − �) = 1 a n (z) . See also fig. 7.7. In the sequel, it is necessary to determine the Green’s function G and thus to solve the differential equations eq (7.18). 7.9 ODE - solution of d 2 u/ dx 2 = − 1 (I) Given is the ordinary differential equation of 2 � th order d 2 u(x) dx 2 + 1 = 0, x ∈ Ω (7.22) <?page no="214"?> 188 Solution of differential equations using Green’s functions Figure 7.7: Curves of the derived Green’s functions in the domain Ω = [0, 1] with the homogeneous Dirichlet boundary conditions u(0) = u(1) = 0, whose solution is u(x) = − 1 2 x 2 + 1 2 x = 1 2 ( x − x 2 ) , a quadratic equation (parabola opened downwards) in which the coefficients differ only by the sign, the vertex is S P (1/ 2 | 1/ 8) and has the zeros a = 0 and b = 1. Differentiating this equation twice leads to eq. (7.22). In fig. 7.8 the exact curve of the equation can be seen. 7.9.1 Exercise description By means of Green’s method the ordinary differential equation of the 2nd order is to be solved d 2 u(x) dx 2 + 1 = 0, x ∈ Ω <?page no="215"?> 7.9 ODE - solution of d 2 u/ dx 2 = − 1 (I) 189 Figure 7.8: Solution u(x) of the 2nd order differential equation or d 2 u(x) dx 2 = − 1 with the homogeneous boundary conditions u(0) = u(1) = 0, whose solution is u(x) = − 1 2 x 2 + 1 2 x = 1 2 ( x − x 2 ) . Two solutions are presented. 7.9.2 Solution I It is L = d 2 dx f(x) = − 1. <?page no="216"?> 190 Solution of differential equations using Green’s functions It follows L u(x) = f(x). In the interval [0, 1] there is the requirement L G(x, z) = δ(x − z). The solution path begins with the substitution of u(x) by G(x, z) d 2 G(x, z) dx 2 = δ(x − z). The general solution of Green’s function follows with the general approach G(x, z) = { A(z) x + B(z) f¨ ur x < z C(z) x + D(z) f¨ ur x > z, where the coefficients A(z), B(z), C(z) and D(z) still have to be determined in the progress. This is done with the inclusion of the boundary conditions G(0, z) = 0 = A(z) 0 + B(z) ⇒ B(z) = 0 G(1, z) = 0 = C(z) 1 + D(z) ⇒ D(z) = − C(z), with which, once again, Green’s function G(x, z) = { A(z) x f¨ ur x < z C(z) x − C(z) f¨ ur x > z can be written down. The consideration of the continuity and discontinuity conditions (on the left and right side of the discontinuity point) of G(x, z) at the point x = z C(z) z − C(z) − A(z) z = 0 A(z) z = C(z) z − C(z) A(z) = C(z) − 1 z C(z) = C(z) ( 1 − 1 z ) <?page no="217"?> 7.9 ODE - solution of d 2 u/ dx 2 = − 1 (I) 191 and the derivative of G(x, z) with dG/ dx result in dG(x, z) dx = { A(z) f¨ ur x < z C(z) f¨ ur x > z. 
The continuation of the determination of the coefficients is done by including the lefthand and right-hand derivatives C(z) − A(z) = 1 C(z) − C(z) ( 1 − 1 z ) = 1 C(z) − C(z) + C(z) z = 1 C(z) = z. With C(z) it follows A(z) = C(z) ( 1 − 1 z ) = z ( 1 − 1 z ) = z − 1. Green’s function follows again with G(x, z) = { (z − 1) x f¨ ur x < z z x − z f¨ ur x > z, whose integration over the range Ω according to eq. (7.18) yields the solution of the ODE <?page no="218"?> 192 Solution of differential equations using Green’s functions u(x) = ˆ 1 0 G(x, y) ( − 1) dz = − [ ˆ x 0 (z x − z) dz + ˆ 1 x (z − 1) x dz ] = − [ x ˆ x 0 z dz − ˆ x 0 z dz + x ˆ 1 x z dz − x ˆ 1 x dz ] = − [ x 2 z 2 ∣∣∣ x 0 − 1 2 z 2 ∣∣∣ 1 0 + x 2 z 2 ∣∣∣ 1 x − x z ∣∣∣ 1 x ] = − [ x 3 2 − x 2 2 + x 2 − x 3 2 − x + x 2 ] = − ( x 2 2 − x 2 ) = 1 2 ( x − x 2 ) , which is equivalent to the solution of the ODE in chap. 18. As for the section-bysection definition of Green’s function and its integration limit assignments to carry out the integration, the following should be noted here: • Integration is performed over the independent variable z. • Thus, the first integral is assigned the integration interval z = 0 to z = x, which also means that z < x and thus x > z. • Integration is thus performed over the function zx − z. 7.9.3 Solution II A differently designed solution path is presented by means of the Wronski determinant. Assume that y 1 and y 2 form the linear independent solutions of the homogeneous ODE L u(x) = 0 in the domain [0, 1]. It is required that y 1 (0) = 0 and y 2 (1) = 0. All homogeneous solutions of L u(x) = 0 satisfying y(0), y(1) = 0 must have proportionality to the constants A(z), B(z), which are independent of x. Thus Green’s function follows G(x, z) = { A(z) y 1 (x) f¨ ur x < z B(z) y 2 (x) f¨ ur x > z. The continuity condition applies <?page no="219"?> 7.9 ODE - solution of d 2 u/ dx 2 = − 1 (I) 193 G(x, z) ∣∣∣ z − ε = G(x, z) ∣∣∣ z+ε A(z) y 1 (x) = B(z) y 2 (x) and discontinuity condition (jump in the derivative) dG(x, z) dx ∣∣∣ z − ε − dG(x, z) dx ∣∣∣ z+ε = 1 a 2 A(z) y � 1 (z) − B(z) y � 2 (z) = 1 a 2 . The coefficient a 2 is assigned to the highest derivative, which takes the value one in the normal form. See also eq. (7.21). In the sequel, the continuity condition after A(z) A(z) = B(z) y 2 (z) y 1 (z) was changed over. A renewed insertion into the discontinuity condition with continued conversion leads to the proportionality constant B(z) B(z) y 2 (z) y � 1 (z) y 1 (z) − B(z) y � 2 (z) = 1 a 2 B(z) y 2 (z) y � 1 (z) − y � 2 (z) y 1 (z) y 1 (z) = 1 a 2 B(z) = y 1 (z) [y 2 (z) y � 1 (z) − y � 2 (z) y 1 (z)] a 2 = y 1 (z) W (z) a 2 . Where W (z) is the Wronski determinant W (z) = ∣∣∣∣∣ y 1 y 2 y � 1 y � 2 ∣∣∣∣∣ = y 1 y � 2 − y 2 y � 1 � = 0. With the continuity condition and B(z) follows the determination of the proportionality constant A(z) B(z) = A(z) y 1 (z) y 2 (z) <?page no="220"?> 194 Solution of differential equations using Green’s functions and by substituting in the discontinuity condition it follows A(z) y ′ 1 (z) − A(z) y 1 (z)y ′ 2 (z) y 2 (z) = 1 a 2 A(z) y ′ 1 (z)y 2 (z) − y 1 (z)y ′ 2 (z) y 2 (z) = 1 a 2 , which change to A(z) A(z) = y 2 (z) [y ′ 1 (z)y 2 (z) − y 1 (z)y ′ 2 (z)] a 2 = y 2 (z) W (z) a 2 . With the proportionality constants A(z) and B(z), Green’s function follows again G(x, z) = { y 2 (z) y 1 (x) W (z) a 2 f¨ ur x < z y 1 (z) y 2 (x) W (z) a 2 f¨ ur x > z. 
It remains to determine y 1 (x) and y 2 (x) from the boundary conditions y 1 (0) = 0 : y 1 (x) = x; y ′ 1 (x) = 1 y 2 (1) = 0 : y 2 (x) = x − 1; y ′ 2 (x) = 1. Thus the value of the Wronski determinant is given by W (z) = y 1 y ′ 2 − y 2 y ′ 1 = z 1 − (z − 1)1 = z − z + 1 = 1. With the coefficient of the highest derivative a 2 = 1 follows Green’s function G(x, z) = { x (z − 1) f¨ ur x < z (x − 1) z f¨ ur x > z, which corresponds to Green’s function of the former solution path. The solution of the ODE requires its integration <?page no="221"?> 7.10 ODE - solution of d 2 y/ dx 2 + y = cosec x 195 u(x) = ˆ 1 0 G(x, z) f(z) dz = ˆ x 0 B(z) y 2 (x) f(z) dz + ˆ 1 x A(z) y 1 (x) f(z) dz = y 2 (x) ˆ x 0 y 1 (z) W (z) a 2 f(z) dz + y 1 (x) ˆ 1 x y 2 (z) W (z) a 2 f(z) dz = ˆ x 0 (x − 1) z ( − 1) dz + ˆ 1 x x (z − 1) ( − 1) dz, where f(z) = ( − 1). The solution is identical with the solution from chap. 7.9.2. A note should be made here about the search for the function y 2 . The function y 2 (x) = 1 − x also fulfils the homogeneous boundary condition and leads to the Green’s function G(x, z) = { x (1 − z) f¨ ur x < z (1 − x) z f¨ ur x > z, which produces a solution of the sought ODE inverted by the sign. 7.10 ODE - solution of d 2 y/ dx 2 + y = cosec x The inhomogeneous ODE of 2 ′ th order with homogeneous boundary conditions is to be solved. 7.10.1 Exercise description The inhomogeneous ODE of 2 ′ th order d 2 y dx 2 + y = cosec x = 1 sin x with the boundary conditions y(0) = y(π/ 2) = 0 is to be solved using Green’s method. 7.10.2 Solution It is L = d 2 dx 2 + 1 f(x) = cosec x. <?page no="222"?> 196 Solution of differential equations using Green’s functions Thus it follows L u(x) = f(x). In the interval [0, π/ 2] L G(x, z) = δ(x − z) is required. The solution path begins with the substitution of u(x) by G(x, z) d 2 G(x, z) dx 2 + G(x, z) = δ(x − z). The general solution of the Grenn’s function is G(x, z) = { A(z) sin x + B(z) cos x f¨ ur 0 < x < z C(z) sin x + D(z) cos x f¨ ur z < x < π/ 2. By inserting the boundary conditions for x < z G(0, z) = 0 = A(z) 0 + B(z) 1 ⇒ B(z) = 0 and for x > z G(π/ 2, z) = 0 = C(z) 1 + D(z) 0 ⇒ C(z) = 0 Green’s function follows G(x, z) = { A(z) sin x f¨ ur 0 < x < z D(z) cos x f¨ ur z < x < π/ 2. Here B(z) = C(z) = 0. The derivative of Green’s function leads to dG(x, z) dx ∣∣∣∣ x=z = { A(z) cos z f¨ ur 0 < x < z − D(z) sin z f¨ ur z < x < π/ 2. Their continuity and discontinuity conditions provide <?page no="223"?> 7.11 ODE - solution of d 2 y/ dx 2 + y = f(x) 197 dD(z) cos z dz − dA(z) sin z dz = 1 − D(z) sin z − A(z) cos z = 1. It ist D(z) cos z − A(z) sin z = 0. After obtaining two equations containing the coefficients A(z) and D(z), the rearrangement to A(z) and D(z) follows A(z) = − cos z D(z) = − sin z and by substituting in the Green’s function it follows G(x, z) = { − cos z sin x f¨ ur 0 < x < z − sin z cos x f¨ ur z < x < π/ 2. By integration over the range [0, π/ 2] it follows according to eq. (7.18) y(x) = ˆ π/ 2 0 G(x, y) cosec z dz = − cos x ˆ x 0 sin z cosec z dz − sin x ˆ π/ 2 x cos z cosec z dz = − x cos x + sin x ln(sin x) the sought function y(x). As far as the sectional definition of the Green’s function and its integration limit assignments for carrying out the integration are concerned, the following should also be noted here: The integration is carried out over the independent variable z. Thus, the integration interval z = 0 to z = x is to be assigned to the first integral, which also means that z < x and thus x > z. 
Integration is thus performed over the function − sin z cos x cosec z. 7.11 ODE - solution of d 2 y/ dx 2 + y = f(x) The inhomogeneous ODE of 2nd order with homogeneous boundary conditions is to be solved. <?page no="224"?> 198 Solution of differential equations using Green’s functions 7.11.1 Exercise description The inhomogeneous ODE of 2 ′ th order d 2 y dx 2 + y = f(x) with the boundary conditions y(0) = y ′ (0) = 0 is to be solved using Green’s method. 7.11.2 Solution path The linear differential operator is L = d 2 dx + 1, with which L y = f(x) follows. Furthermore L G(x, z) = δ(x − z). The general solution using trigonometric functions of Green’s function is G(x, z) = { A(z) sin x + B(z) cos x f¨ ur 0 < x < z C(z) sin x + D(z) cos x f¨ ur z < x < ∞ . The boundary conditions for 0 < x < z are used as follows: • G(0,z) = 0: A(z) · sin 0 + B(z) · cos 0 = 0 ⇒ B(z) = 0, • G’(0,z) = 0: A(z) cos 0 ⇒ A(z) = 0. The Green’s function thus becomes G(x, z) = { 0 f¨ ur 0 < x < z C(z) sin x + D(z) cos x f¨ ur z < x < ∞ . <?page no="225"?> 7.11 ODE - solution of d 2 y/ dx 2 + y = f(x) 199 The continuity, discontinuity condition on the left and right side of the discontinuity point x = z G(x = z, z) : C(z) sin z + D(z) cos z − 0 = 0 and its derivation dG(x, z) dx ∣∣∣∣ x=z : C(z) cos z − D(z) sin z = 1 lead to two equations which allow the determination of the coefficients C(z) and D(z) C(z) = cos z D(z) = − sin z. Subsequently, Green’s function is newly G(x, z) = { 0 f¨ ur 0 < x < z sin x cos z − cos x sin z f¨ ur z < x < ∞ formulated, which also G(x, z) = { 0 f¨ ur 0 < x < z sin(x − z) f¨ ur z < x < ∞ corresponds. The solution sought for y(x) follows with y(x) = ˆ ∞ 0 G(x, z) f(z) dz = ˆ x 0 sin(x − z) f(z) dz − ˆ ∞ x 0 f(z) dz ︸ ︷︷ ︸ 0 . A note on the alternative calculation is given here: The solution was started with the section of the Green’s function for the interval 0 < x < z. Alternatively, the calculation can also be started with the section of the Green’s function for the interval z < x < ∞ . This leads to integration over the same Green’s function at the end of the solution procedure. Here, however, the first integral becomes zero (interval [0, x]). The second integral to be subtracted is the integral over the Green’s function with the interval [x, ∞ ]. <?page no="226"?> 200 Solution of differential equations using Green’s functions 7.12 ODE - solution of d 2 u/ dx 2 = − 1 (II) To solve the 2nd order inhomogeneous ODE with inhomogeneous Dirichlet boundary conditions. 7.12.1 Exercise description The differential equation d 2 u(x) dx 2 = − 1 (7.23) were assigned the inhomogeneous boundary conditions u(0) = 0 and u(4/ 5) = 2/ 25. The solution of the differential equation is u(x) = − 1 2 x 2 + 1 2 x. The graph of the function can be seen in fig. 7.9. Figure 7.9: Procedure for solving the ODE eq. (7.23) with inhomogeneous boundary conditions <?page no="227"?> 7.12 ODE - solution of d 2 u/ dx 2 = − 1 (II) 201 7.12.2 Solution path The solution path begins by setting the boundary conditions from u(0) = u(4/ 5) = 2/ 25 to y(0) = y(4/ 5) = 0. The associated function is named with the dependent variable y. The continuation follows with the search of the (n − 1) ′ th polynomial p(x) p(x) = m x = 0, 08 0, 8 x = 1 10 x, which fulfils the boundary conditions. The course of the polynomial function can also be seen in fig. 7.9. The solution is thus composed of the homogeneous solution y(x) and the inhomogeneous solution p(x) u(x) = y(x) + p(x). 
For the solution of the homogeneous ODE y(x) by means of Green’s function, the linear approach G(x, z) = { A(z) x + B(z) f¨ ur 0 < x < z C(z) x + D(z) f¨ ur z < x < 4/ 5 in which the coefficients are determined individually. With the boundary conditions A(z) 0 + B(z) = 0 ⇒ B(z) = 0 C(z) 4/ 5 + D(z) = 0 ⇒ D(z) = − 4/ 5 C(z) Green’s function follows again G(x, z) = { A(z) x f¨ ur 0 < x < z C(z) (x − 4/ 5) f¨ ur z < x < 4/ 5. The application of the continuity conditions at the point x = z yields C(z) (z − 4/ 5) = A(z) z C(z) ( 1 − 4 5z ) = A(z). The derivative of Green’s function (discontinuity condition) <?page no="228"?> 202 Solution of differential equations using Green’s functions dG(x, z) dx ∣∣∣∣ x=z = { A(z) f¨ ur 0 < x < z C(z) f¨ ur z < x < 4/ 5 leads to C(z) C(z) − A(z) = 1 C(z) − C(z) ( 1 − 4 5 z ) = 1 C(z) = 5 4 z and to D(z) D(z) = − 4 5 C(z) = − 4 5 5 4 z = − z. Ultimately, A(z) becomes A(z) = C(z) ( 1 − 4 5z ) = 5 4 z ( 1 − 4 5z ) = 5 4 z − 1. Green’s function G(x, z) = { ( 5 4 z − 1 ) x f¨ ur 0 < x < z 5 4 z x − z f¨ ur z < x < 4/ 5 is thus completely determined and is integrated y(x) = ˆ 4/ 5 0 G(x, z) f(z) dz = ˆ 4/ 5 0 G(x, z) ( − 1) dz. The following are the term-by-term integration steps <?page no="229"?> 7.13 ODE - solution of d 2 u/ dx 2 = x 203 y(x) = − [ ˆ x 0 ( 5 4 z x − z ) dz + ˆ 4/ 5 x ( 5 4 z − 1 ) x dz ] = − [( 5 8 z 2 x − 1 2 z 2 ) ∣∣∣∣ z=x z=0 + ( 5 8 z 2 x − xz ) ∣∣∣∣ z=4/ 5 z=x ] = − [ 5 8 x 3 − 1 2 x 2 + 2 5 x − 4 5 x − 5 8 x 3 + x 2 ] = 2 5 x − 1 2 x 2 . The sample gives y(0) = y(4/ 5) = 0. The equation is shown in fig. 7.9. With the addition follows u(x) = y(x) + p(x) = 2 5 x − 1 2 x 2 + 1 10 x = 1 2 x − 1 2 x 2 = 1 2 ( x − x 2 ) the solution to the ODE we are looking for. 7.13 ODE - solution of d 2 u/ dx 2 = x The homogeneous ODE of 2 ′ th order with inhomogeneous boundary conditions is to be solved by Green’s method. 7.13.1 Exercise description The differential equation d 2 u(x) dx 2 = x L u(x) = x with the inhomogeneous boundary conditions u(0) = 1 and u(1) = 2 whose solution u(x) = 1 6 x 3 + 5 6 x + 1 can also be achieved by integration, is to be solved using Green’s method. It is L = d 2 / dx 2 the linear operator. <?page no="230"?> 204 Solution of differential equations using Green’s functions 7.13.2 Solution path The ODE is solved using Green’s method. For this purpose • transforms the original differential equation L u(x) = x into a homogeneous differential equation L y(x) = 0 with boundary conditions y(0) = y(1) = 0 in the function y(x) to be determined, • in this case, a partial solution p(x) is sought which satisfies both boundary conditions are fulfilled. The equation p(x) was found p(x) = x + 1. The solution of the differential equation sought is the sum of the particulate and homogeneous solution u(x) = y(x) + p(x). With Green’s method the solution of the homogeneous ODE L y(x) = 0 is determined. The general form of Green’s function is G(x, z) = { A(z) y 1 (x) f¨ ur x < z B(z) y 2 (x) f¨ ur x > z. In addition, the continuity condition G(x, z) ∣∣∣ z − ε = G(x, z) ∣∣∣ z+ε A(z) y 1 (x) = B(z) y 2 (x) and discontinuity condition (jump in the derivative) dG(x, z) dx ∣∣∣ z − ε − dG(x, z) dx ∣∣∣ z+ε = 1 a 2 A(z) y � 1 (z) − B(z) y � 2 (z) = 1 a 2 is included. The coefficient a 2 is to be assigned to the highest derivative. See also eq. (7.21), which takes the value one in normal form. In the sequel, A(z), B(z), y 1 (x), <?page no="231"?> 7.13 ODE - solution of d 2 u/ dx 2 = x 205 y ′ 1 (x), y 2 (x) and y ′ 2 (x) have to be determined. 
For this purpose, y 1 (x) and y 2 (x) are calculated from the boundary conditions y 1 (0) = 0 : y 1 (x) = x; y ′ 1 (x) = 1 y 2 (1) = 0 : y 2 (x) = x − 1; y ′ 2 (x) = 1. The Wronski determinant W (z) W (z) = ����� y 1 y 2 y ′ 1 y ′ 2 ����� = y 1 y ′ 2 − y 2 y ′ 1 � = 0 takes the value W (z) = y 1 y ′ 2 − y 2 y ′ 1 = z 1 − (z − 1) 1 = z − z + 1 = 1. Green’s function G(x, z) = ⎧⎪⎪⎪⎪⎪⎪⎪⎨⎪⎪⎪⎪⎪⎪⎪⎩ y 2 (z) W (z) a 2 � �� � A(z) y 1 (x) f¨ ur 0 < x < z y 1 (z) W (z) a 2 � �� � B(z) y 2 (x) f¨ ur z < x < 1 becomes G(x, z) = � x (z − 1) f¨ ur 0 < x < z (x − 1) z f¨ ur z < x < 1. The derivation of the coefficients A(z) and B(z) can be found in chap. 7.9.3. The homogeneous solution follows by integration of the general solution according to eq. (7.18) with <?page no="232"?> 206 Solution of differential equations using Green’s functions y(x) = ˆ 1 0 G(x, z) f(z) dz = (x − 1) ˆ z=x z=0 z z dz + x ˆ z=1 z=x (z − 1) z dz = (x − 1) 1 3 z 3 ∣∣∣∣ z=x z=0 + x ( 1 3 z 3 − 1 2 z 2 ) ∣∣∣∣ z=1 z=x = 1 3 x 4 − 1 3 x 3 + 1 3 x − 1 2 x − ( 1 3 x 4 − 1 2 x 3 ) = 1 6 x 3 − 1 6 x. Figure 7.10: Procedure for solving the ODE eq. (7.24) with inhomogeneous boundary conditions The sought solution of the inhomogeneous ODE to u(x) follows from the sum of the homogeneous and particulate individual solutions with <?page no="233"?> 7.13 ODE - solution of d 2 u/ dx 2 = x 207 u(x) = y(x) + p(x) = 1 6 x 3 − 1 6 x + x + 1 = 1 6 x 3 + 5 6 x + 1. In fig. 7.10 the function curves are shown. <?page no="235"?> Chapter 8 Method of Lagrangian multipliers In scientific and technical applications, extreme value problems are frequently encountered, which are limited by corresponding constraints. A quick and simple method for solving such problems is the method of Lagrange multipliers, whose definition, derivation and worked-out application examples can be found in this chapter. 8.1 Definition of the Lagrange multiplier method The method was developed by Joseph-Louis Lagrange and proves to be very simple in application. The core of the method is an objective function f, which is expanded by adding its constraint g multiplied by the variable λ F = f + λ g. The maximum or minimum of this expanded function F is sought. The Lagrange multiplier method reduces a problem f with constraints g to a problem without constraints, the Lagrange function F . The constraints are contained in F . 8.1.1 Properties of the method The most important characteristics of the method have been summarised as follows and are used for classification purposes: • Independence of variables: When searching for extreme values of functions of several variables, an independence of the variables is often assumed. Examples of <?page no="236"?> 210 Method of Lagrangian multipliers this are mathematical functions such as f(x), f(x, y) or f(x, y, z), which are set to zero and solved by simple or partial derivatives. In physical applications, this independence is often no longer given due to the introduction of constraints. • Two methods juxtaposed: - Elimination method: The elimination method is based on the reduction of an n-variable problem to an n − 1-variable problem by substitution. The method can only be used if a variable of the constraint can be represented explicitly. - Lagrange multipliers: If the constraint cannot be put into the desired explicit form, it must be set equal to zero. Thus the method of Lagrange multipliers can be applied. 
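The juxtaposition of the two methods can be made concrete on a small example. The sketch below (SymPy; the objective f(x, y) = x·y with the constraint x + y − 10 = 0 is an arbitrary illustration and not taken from the text) solves the same problem once by elimination and once with the Lagrange function F = f + λ g used throughout this chapter.

```python
# Small comparison of the two approaches (elimination vs. Lagrange multipliers)
# on a toy problem: maximise f(x, y) = x*y under the constraint x + y - 10 = 0.
# Illustrative sketch only.
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x*y
g = x + y - 10

# Elimination method: the constraint is explicit in y, reduce to one variable.
f_elim = f.subs(y, 10 - x)
x_elim = sp.solve(sp.diff(f_elim, x), x)          # -> [5]

# Lagrange multipliers: extend f by lambda*g and zero all partial derivatives.
F = f + lam*g
stationary = sp.solve([sp.diff(F, v) for v in (x, y, lam)], [x, y, lam], dict=True)

print(x_elim)        # [5]
print(stationary)    # [{x: 5, y: 5, lambda: -5}]
```

Both routes deliver the same stationary point x = y = 5; the multiplier λ appears only as an auxiliary quantity.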
8.1.2 Mathematical optimisation Optimisation problems are extreme value tasks with constraints, which are often characterised by their large number of variables that have to be solved. Optimisation methods (solution methods) have been developed for this purpose over the centuries. A system of optimisation methods is shown in fig. 8.1. For the most part, these can be solved using the method of Lagrange multipliers. In fig. 8.2 shows an example of a linear optimisation. Here g 1 to g 4 are the constraints and Z is the function to be maximised. x max and y max denote the solutions which mark the maximum of the function. For dynamic optimisation, the definitions of objective functions with corresponding properties (Markov’s property, Bellmann’s optimality principle) are required, which are not part of this chapter. An optimisation task in general form is described by Figure 8.1: Systematics of the optimisation methods <?page no="237"?> 8.1 Definition of the Lagrange multiplier method 211 Figure 8.2: Example of a maximum problem with x and y as independent parameters max { f(x i ) | g j (x i ) = 0, x i ≥ 0 } i = 1, . . . , n; j = 1, . . . , m, n > m. See also [34], p. 706. Here, the objective function f(x i ) is to be maximised (minimised) while simultaneously satisfying the constraints g j (x i ) = 0. The number of equations of the constraints j must be smaller than the number of parameters x i of the objective function. In a nonlinear optimisation, the constraints of the given functions f and g j are omitted. In a quadratic optimisation, f(x i ) is to be assumed quadratic. In a linear optimisation, f and g j are linear functions. The method of Lagrange multipliers finds its way into non-linear optimisation through the formation of a generalised Lagrange function F (x i , λ j ) = f(x i ) + m ∑ j=1 λ j g j (x i ), where j = 1, · · · , m. The generalised Lagrange function is suitable for the treatment of extreme value problems with constraints. Here λ j are the Lagrange multipliers and g j (x i ) the constraints of the extremal function f(x i ). <?page no="238"?> 212 Method of Lagrangian multipliers 8.1.3 Calculus of variations The calculus of variations deals with a maximum or minimum problem. Here, a function is to be determined for which an integral assumes a maximum or minimum value. The method of Lagrange multipliers is partly used to solve variational problems, the systematics of which can be seen in fig. 8.3. The systematics should serve as a classification. A further treatment of the variation methods is not intended in this chapter. Figure 8.3: Systematics of the methods for the calculus of variations 8.2 Derivation of the Lagrange multiplier method The function f(x, y) is to be examined for its extreme values. At the same time, the constraint g(x, y) = 0 as well as their total differentials apply df(x, y) = ∂f ∂x dx + ∂f ∂y dy = 0 (8.1) dg(x, y) = ∂g ∂x dx + ∂g ∂y dy = 0, (8.2) which must also be fulfilled at the same time, where dx and dy are independent of each other and freely chooseable. It follows the conversion of the equations to dx and dy dx = − ∂g ∂y ∂g ∂x dy dy = − ∂g ∂x ∂g ∂y dx, which are then substituted into eq. (8.1) <?page no="239"?> 8.2 Derivation of the Lagrange multiplier method 213 df = − ∂f ∂x ∂g ∂y ∂g ∂x dy + ∂f ∂y dy = 0 (8.3) df = ∂f ∂x dx − ∂f ∂y ∂g ∂x ∂g ∂y dx = 0. 
(8.4) With the division by dy as well as by dx follows ∂f ∂y dy − ∂f ∂x ∂g ∂x ∂g ∂y dy = 0 ⇒ ∂f ∂y − ∂f ∂x ∂g ∂x ︸︷︷︸ ∂f/ ∂g = λ ∂g ∂y = 0 ∂f ∂x dx − ∂f ∂y ∂g ∂y ∂g ∂x dx = 0 ⇒ ∂f ∂x − ∂f ∂y ∂g ∂y ︸︷︷︸ ∂f/ ∂g = λ ∂g ∂x = 0. Thus remains ∂f ∂y − λ ∂g ∂y = 0 ∂f ∂x − λ ∂g ∂x = 0, which is equivalent to ∇ f − λ ∇ g = 0 ∇ (f − λ g) = 0, and roughly corresponds to df − λ dg = 0. The method thus results in an alignment of the gradients of its involved potential functions, as described with the help of fig. 8.4. In fig. 8.4 a) the Rosenbrock function f(x, y), as the function to be examined for its extreme values, including its isolines (contour lines) can be seen. In fig. 8.4 b) the isolines and the corresponding potential functions f(x, y) = d 1 to f(x, y) = d 3 are visible and the gradient of f(x, y) = d 1 <?page no="240"?> 214 Method of Lagrangian multipliers is drawn with arrows. The gradient always points perpendicularly in the direction of the highest rise of its potential function. The isoline of the constraint g(x, y) = c is tangential to the isoline of the potential function f(x, y) = d 1 . At this point the gradients run collinearly and the function f(x, y) = d 1 with the constraint g(x, y) = c assumes a maximum value. Figure 8.4: Graphical representation of the method 8.3 Application of the method The application of the Lagrange multiplier method is essentially broken down as follows: • Formulation of the extreme value problem and its function including all constraints. • Setting up the Lagrange function. • Solving the Lagrange function by derivation and zeroing of the derivatives. 8.4 Maths example - extreme value problem with one constraint Given is the function f(x, y, z) to be maximised with its constraint g(x, y, z) converted to zero with <?page no="241"?> 8.4 Maths example - extreme value problem with one constraint 215 f(x, y, z) = x 3 + y 3 + z 3 (8.5) g(x, y, z) = x 2 + y 2 + z 2 = 1 ⇒ x 2 + y 2 + z 2 − 1 = 0. (8.6) Hereby the Lagrange function follows F (x, y, z, λ) = f(x, y, z) + λ g(x, y, z) = x 3 + y 3 + z 3 + λ ( x 2 + y 2 + z 2 − 1 ) . The following are the derivatives of the Lagrange function and their zero-settings F x = 3x 2 + 2λx = 0 F y = 3y 2 + 2λy = 0 F z = 3z 2 + 2λz = 0 F λ = x 2 + y 2 + z 2 − 1 = 0. This is followed by the solution procedure according to λ, which allows the variables x, y and z to be determined. Starting with F x + F y + F z = 3 ( x 2 + y 2 + z 2 ) + 2λ (x + y + z) = 0, into which F λ is now inserted 3 + 2λ(x + y + z) = 0 2λ (x + y + z) = − 3 λ = − 3 2(x + y + z) . This leds to λ. Substituted into F x it follows 3x 2 − 3x x + y + z = 0 x = 1 x + y + z x 2 + xy + xz = 1. <?page no="242"?> 216 Method of Lagrangian multipliers Herewith follows the solution x = y = z x 2 + y 2 + z 2 = 1 x 2 + x 2 + x 2 = 1 3 x 2 = 1 with x = y = z = ± 1 √ 3 , for which eq. (8.5) assumes a maximum under the constraint of eq. (8.6). 8.5 Maths example - extreme value problem with two constraints Given is the function f(x, y, z) to be maximised with its constraints g(x, y, z), h(x, y, z) converted to zero with f(x, y, z) = x 3 + y 3 + z 3 g(x, y, z) = x 2 + y 2 + z 2 = 1 ⇒ x 2 + y 2 + z 2 − 1 = 0 h(x, y, z) = x + y + z = 0. From this follows the Lagrangian function F (x, y, z, λ) = f(x, y, z) + λ g(x, y, z) + μ h(x, y, z) = x 3 + y 3 + z 3 + λ ( x 2 + y 2 + z 2 − 1 ) + μ (x + y + z) . 
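Before the derivatives are formed by hand, it may be noted that the stationarity system of this Lagrangian can also be generated and solved with computer algebra. The following sketch (SymPy; purely an illustrative cross-check of the derivation that follows) returns all critical points without deciding which of them realises the maximum.

```python
# Symbolic cross-check of the two-constraint Lagrangian set up above:
# all partial derivatives of F are formed and the resulting polynomial
# system is solved for the critical points.
import sympy as sp

x, y, z, lam, mu = sp.symbols('x y z lambda mu', real=True)
F = x**3 + y**3 + z**3 + lam*(x**2 + y**2 + z**2 - 1) + mu*(x + y + z)

eqs = [sp.diff(F, v) for v in (x, y, z, lam, mu)]
sols = sp.solve(eqs, [x, y, z, lam, mu], dict=True)
for s in sols:
    print(s[x], s[y], s[z])
```

The printed points are expected to be the permutations of x = y = ±1/√6, z = ∓2/√6 obtained in the hand derivation below.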
The following are the derivatives of the Lagrange function <?page no="243"?> 8.5 Maths example - extreme value problem with two constraints 217 F x = 3x 2 + 2λx + μ = 0 F y = 3y 2 + 2λy + μ = 0 F z = 3z 2 + 2λz + μ = 0 F λ = x 2 + y 2 + z 2 − 1 = 0 F μ = x + y + z = 0. The next step is to find the solution for the variables x, y and z. Starting with the subtraction F x − F y 3x 2 + 2λx + μ − 3y 2 − 2λy − μ = 0 3 ( x 2 − y 2 ) + 2λ(x − y) = 0 3 (x + y) (x − y) + 2λ(x − y) = 0, which is a first solution x = y. This makes x + y + z = 0 − 2x = z. Substituted into g(x, y, z) follows 2x 2 + ( − 2x) 2 = 1 6x 2 = 1 x = ± 1 √ 6 = y, which also z = ∓ 2 √ 6 follows. The following notes should also be summarised: <?page no="244"?> 218 Method of Lagrangian multipliers • λ and μ are not necessarily calculated may or may not be. That is why the method is sometimes called the method of indefinite Lagrange multipliers. • The solutions obtained are to be checked by means of the constraints. • It holds: - number of constraints m < number of variables n of the objective function, - (n + m) number of unknown variables and as many equations. 8.6 Application example - cube inscribed in a sphere This example with one constraint is solved with • Lagrange multiplier method • Elimination method. This comparison allows an assessment of both methods and their procedures. 8.6.1 Extreme value problem with one constraint A cube (hexahedron) is to be inscribed into a sphere of radius R while maximising its volume according to fig. 8.5. Let the origin of the coordinate system be located in the centre. Therefore, for reasons of symmetry, the volume of the cube is V = 2x 2y 2z. We are looking for the dimensions of the cube whose volume is to assume a maximum value. At the same time the constraint of the radius applies R 2 = x 2 + y 2 + z 2 . <?page no="245"?> 8.6 Application example - cube inscribed in a sphere 219 Figure 8.5: Cube inscribed in a sphere 8.6.2 Solution with Lagrange multiplier method At the beginning, the constraint of the radius R 2 = x 2 + y 2 + z 2 ⇒ x 2 + y 2 + z 2 − R 2 = 0. is rearranged to zero. With the inclusion of this constraint, the variables of the function to be maximised are no longer independent of each other. The function to be maximised and its constraint are combined to form the Lagrange function F (x, y, z, λ) = 2x 2y 2z + λ ( x 2 + y 2 + z 2 − R 2 ) , which is derived subsequently F x = 8yz + 2λx = 0 | · x F y = 8xz + 2λy = 0 | · y F z = 8xy + 2λz = 0 | · z F λ = x 2 + y 2 + z 2 − R 2 = 0. The derivatives set to zero are marked with indices. The addition of the functions F x + F y + F z multiplied by the variables results in <?page no="246"?> 220 Method of Lagrangian multipliers 24 x y z + 2 λ ( x 2 + y 2 + z 2 ) = 0. With inclusion of F λ , the result becomes 24 x y z + 2 λ R 2 = 0. Transformed to λ it follows λ = − 12 x y z R 2 , which is now inserted into F x and transformed to x F x = 8yz − 2 12 x y z R 2 x = 0 x 2 = R 2 3 x 1,2 = ± R √ 3 = y = z, which fulfils the constraint R 2 = 0. The maximum volume V max of the cube is thus V max = 2 R √ 3 · 2 R √ 3 · 2 R √ 3 = 8 R 3 3 √ 3 . 8.6.3 Solution with elimination method The solution of the problem is considerably simplified by squaring and rearranging the volume relationship V = 2x 2y 2z V 2 = 64 x 2 y 2 z 2 V 2 64 = x 2 y 2 z 2 = f(x, y, z), whereby the volume should assume a maximum. 
Here, the constraint <?page no="247"?> 8.6 Application example - cube inscribed in a sphere 221 R 2 = x 2 + y 2 + z 2 x 2 = R 2 − y 2 − z 2 is inserted into the volume relationship. Thus the function to be maximised follows F (y, z) = ( R 2 − y 2 − z 2 ) y 2 z 2 = R 2 y 2 z 2 − y 4 z 2 − y 2 z 4 , their derivative is F y = 2R 2 yz 2 − 4y 3 z 2 − 2yz 4 = 0 | : 2yz 2 F z = 2R 2 y 2 z − 2y 4 z − 4y 2 z 3 = 0 | : 2y 3 z and is set to zero. Dividing provides F y = R 2 − 2y 2 − z 2 = 0 F z = R 2 − y 2 − 2z 2 = 0. It may be noted that the equations now reached describe ellipses. In the continuation F z is converted to z 2 R 2 − y 2 − 2z 2 = 0 R 2 − z 2 2 = z 2 and inserted into F y . With subsequent changeover follows R 2 − 2y 2 − R 2 − z 2 2 = 0 R 2 − 2y 2 − R 2 2 − y 2 2 = 0 R 2 = 3 y 2 R √ 3 = ± y = ± z, <?page no="248"?> 222 Method of Lagrangian multipliers which corresponds to the result of the solution using the Lagrange multiplier method and leads to the maximum volume V max = 8 R 3 3 √ 3 . 8.7 Application example - dimensioning of a coil winding The example includes the voltage source with resistor and coil as shown in fig. 8.6. The designations and abbreviations used are shown in tab. 8.1. 8.7.1 Extreme value problem Given a voltage source with a corresponding U (I) characteristic as well as the ring made of a magnetic material, the number of windings N of which must be dimensioned in the following way: • maximum magnetic field H in the ring (extreme value) • consideration of the source internal resistance R i and winding resistance R w • given power loss P v max as constraint. Figure 8.6: Source-load arrangement with source characteristic The winding wire shall be selected according to the standard DIN EN 60317 − 8. The calculation result shall provide the values for the parameters number of windings N <?page no="249"?> 8.7 Application example - dimensioning of a coil winding 223 Table 8.1: Summary of the designations Symbol Designation Symbol Designation U 0 [V ] Voltage source l w [m] Length of a winding U Ri [V ] Voltage drop source l f e [m] Mean length iron ring U [V ] Voltage across coil A D [m 2 ] Wire-nominal cross-sectional area R i [Ω] Internal resistance source κ [1/ (Ωm)] Spec. electr. conductivity R w [Ω] Winding resistance P v [W ] Power loss N Number of windings I [A] Current H [A/ m] Magnetic field strength I SC [A] Short-circuit current and source voltage U 0 in order to be able to make a winding design adapted to the source, which does not exceed a given maximum power loss P v max (only constraint). 8.7.2 Solution procedure The setting up of the objective function to be maximised including the embedding of the constraint is done step by step: • Determination of the function to be maximised from the Ampere’s law: ˛ H ds = Θ = N I H l f e = N U 0 R w + R i . With the winding resistance R w = N l w κ A D follows the function to be maximised H = N U 0 l f e ( N l w κ A D + R i ) = κ A D N U 0 l f e (N l w + κ A D R i ) ⇒ max. (8.7) <?page no="250"?> 224 Method of Lagrangian multipliers • Determination of the constraint: Let the power loss P v of the ohmic load and internal resistance be given as a constraint P v = U 2 0 R w + R i = κ A D U 2 0 N l w + κ A D R i . (8.8) The function of the constraint must be rearranged to zero in order to be able to apply the method of Lagrange multiplication. Thus the constraint follows with κ A D U 2 0 N l w + κ A D R i − P v = 0. 
The function to be maximised including its constraint to be multiplied by λ is summarised into the Lagrangian function H L H L (N, U 0 , λ) = κ A D N U 0 l f e (N l w + κ A D R i ) + λ ( κ A D U 2 0 N l w + κ A D R i − P v ) , (8.9) which is solved in progress. This is followed by the derivatives of the function H L (N, U 0 , λ), which are named with the index notation after the derived parameter. These are H LN = κ A D U 0 l f e (N l w + κA D R i ) − λ κ A D l w U 2 0 (N l w + κA D R i ) 2 − κ A D l w U 0 N l f e (N l w + κA D R i ) 2 = 0 (8.10) H LU 0 = κ A D (N + 2 λ l f e U 0 ) l f e (N l w + κ A D R i ) = 0 (8.11) H Lλ = κ A D U 2 0 N l w + κ A D R i − P v = 0. (8.12) This is followed by the treatment of equations (8.10) to (8.11) with H LN + H LU 0 = 0, and allows the rearrangement according to λ <?page no="251"?> 8.7 Application example - dimensioning of a coil winding 225 λ = κ A D N U 0 l w l fe (N l w + κ A D R i ) 2 − κ A D (N +U 0 ) l fe (N l w + κ A D R i ) 2 κ A D U 0 (N l w + κ A D R i ) − κ A D l w U 2 0 (N l w + κ A D R i ) 2 . (8.13) From eq. (8.12) follows directly the required voltage U 0 U 0 = √ P v (N l w + κ A D R i ) κ A D . (8.14) By substituting eq. (8.13) and eq. (8.14) into eq. (8.10) it follows that √ κ A D P v l f e √ N l w + κ A D R i = 0. (8.15) Furthermore, by substituting eq. (8.13) and eq. (8.14) into eq. (8.11), it follows − κ A D N l f e (N l w + κ A D R i ) = 0. (8.16) Since both equations (8.15) and (8.16) are set to zero, they can be equalised and solved for N without further action. With this follows with the midnight formula N = P v l w κ A D + √( P v l w κ A D ) 2 − 4 ( − P v R i ) 2 . Table 8.2: Calculation data and calculation results Symbol Values Symbol Values R i 0.1 Ω κ Cu 58 · 10 6 1/ (Ω m) R w 0.045 Ω P v max 300 W l f e 0.157 m l w 0.05 m d Cu 0.4 mm A D 0.12566 · 10 − 6 m 2 N opt 7 U 0 opt 6.6 V H max 1910 Θ 318 A Winding diameter wire d Cu according standard DIN EN 60317-8 <?page no="252"?> 226 Method of Lagrangian multipliers With the number of turns N , the Lagrange equation, eq. (8.9) is completely solved. The calculation of the example in fig. 8.6 follows. The values required for this, including the calculation results, are summarised in tab. 8.2. The optimal results were labelled N opt and U 0 opt . In fig. 8.7 a), the magnetic field strength H(N, U 0 ) of eq. (8.7) can be seen as a function of the variables N and U 0 . The optimum field strength H at N opt and U 0 opt is marked by an arrow. The line marked with stars shows the magnetic field strength H at the constant power loss P v max . Fig. 8.7 b) shows the power loss P v as a function of the variables N and U 0 . Furthermore, the maximum power loss P v max is shown as a plane. The arrow shows the position in the power loss plane at N opt and U 0 opt . In this dimensioning example, increasing the number of turns beyond the optimum number of turns has a weakening effect on the magnetic field strength. Furthermore, increasing the voltage beyond the voltage optimum leads to the specified maximum power loss being exceeded. Figure 8.7: Calculation results for H(N, U 0 ) (left) and P v (N, U 0 ) (right) <?page no="253"?> Chapter 9 Differential equations and finite elements Frequently used differential equations of 2 ′ th order and their solution with the Galerkin method as the most used method within the finite element method (FEM) are presented. In addition, the elements used in the FEM application are shown. 
The FEM is one of the dominant numerical methods for calculating systems of equations. It has its origin in structural analysis. A mathematical description of the method was already given in 1943, but it was not before 1968 that the method was applied to electromagnetic field problems. The FEM has proven to be more powerful and flexible in its application to complex geometries and inhomogeneous materials in comparison to other methods and has thus become established. For this reason, computer programmes could be developed which created a great variety of application possibilities. Many processes in science and technology are described by means of partial differential equations of 1 ′ th and 2 ′ th order. The following papers of the manuscript were mainly written with the literature [2], [3], [30], [33] and [55]. 9.1 Physics examples for differential equations of 1 ′ th order Differential equations of 1 ′ th order contain a derivative of the independent variables. In the case of derivatives with respect to time, these often embody an energy store. Examples of 1 ′ th order differential equations are given in fig. 9.1. The required independent variables are articulated in terms of their use at the element: <?page no="254"?> 228 Differential equations and finite elements Figure 9.1: Examples of differential equations (DEs) of 1 ′ th order • ”Across Variables“: Variables whose magnitudes act across the element (along). Examples are the electrical voltage drop along a resistor, the pressure drop along a water pipe, the temperature drop along a component or through a wall. • ”Through Variables“: Variables whose quantities flow through the element. Examples are the electric current, the mass-volume flow and the heat flow. 9.2 Physics examples for 2 � th order differential equations Examples of 2 ′ th order differential equations are summarised in figures 9.2 and 9.3. In fig. 9.2 a) the oscillation equations of the electrical and mechanical oscillating circuit can be seen. Oscillatory systems are characterised by two independent forms of energy (magnetic and electric) into which the energy is transferred in each case during the exchange. The time derivatives in the equations each represent an energy store. The differential equations are each a damped series oscillating circuit with excitation. The <?page no="255"?> 9.2 Physics examples for 2 ′ th order differential equations 229 solution is the current i or the displacement x. Figure 9.2: Examples of 2 ′ th order differential equations The diffusion equation is shown in fig. 9.3 b). The field diffusion and heat diffusion equations are shown. The equations are characterized by two spatial derivatives and one time derivative. Solved for flux density � B and temperature υ, respectively. The Poisson’s differential equation of electrostatics for solving the potential ϕ is given in fig. 9.3 c). The wave equation in fig. 9.3 d) is characterized with two spatial derivatives and two time derivatives. In each case, it is solved for the electric field strength � E <?page no="256"?> 230 Differential equations and finite elements and for the magnetic field strength � H, respectively. Looking at the oscillation equation in fig. 9.2 a) analogies between mechanical and electrical oscillators can be worked out. Findings, which were gained at the example of mechanical oscillators, can be transferred consequently to electrical oscillating circuits. The inverse of this statement is valid. 
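This transferability can be illustrated numerically: if the analogous parameters are chosen equal (m ↔ L, b ↔ R, c ↔ 1/C, F₀ ↔ u₀), the damped series oscillating circuit and the mass-spring-damper produce identical responses. The parameter values in the following sketch are arbitrary example values, not data from the text.

```python
# Numerical illustration of the electro-mechanical analogy: the damped series
# oscillating circuit and the mass-spring-damper obey the same 2nd-order ODE,
# so with analogous parameters their responses coincide.
import numpy as np
from scipy.integrate import solve_ivp

m, b, c, F0 = 1.0, 0.5, 4.0, 1.0          # mechanical: m x'' + b x' + c x = F0
L, R, C, u0 = 1.0, 0.5, 0.25, 1.0         # electrical: L q'' + R q' + q/C = u0

def mech(t, s):                           # s = [x, v]
    x, v = s
    return [v, (F0 - b*v - c*x) / m]

def elec(t, s):                           # s = [q, i]
    q, i = s
    return [i, (u0 - R*i - q/C) / L]

t = np.linspace(0.0, 10.0, 500)
sol_m = solve_ivp(mech, (0, 10), [0, 0], t_eval=t)
sol_e = solve_ivp(elec, (0, 10), [0, 0], t_eval=t)

# velocity v(t) and current i(t) agree (difference practically zero)
print(np.max(np.abs(sol_m.y[1] - sol_e.y[1])))
```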
Two oscillating arrangements are called analogous if they are described by the same differential equation. In tab. 9.1 there is a comparison between the electrical and mechanical quantities of the oscillation equations, which are subdivided into the electrical series and parallel oscillation circuit [39], p. 210. Figure 9.3: Examples of 2 ′ th order differential equations - continued The analogous relations were derived from fig. 9.2 a). An interpretation of these relations leads to the following conclusions: • Depending on the circuit arrangement (series or parallel connection), the electrical versus the mechanical parameters change. • For example, the mechanical oscillation equation can be written with the displacement variable x as an independent parameter: <?page no="257"?> 9.2 Physics examples for 2 ′ th order differential equations 231 Table 9.1: Analogy of electrical and mechanical quantities Electric analogy Mechanical analogy Series oscillating circuit: voltage u force F current i velocity v reciprocal capacity 1/ C spring constant c resistance R attenuation constant b inductivity L mass m Parallel oscillating circuit: voltage u velocity v current i force F capacity C mass m reciprocal resistance 1/ R attenuation constant b reciprocal inductivity 1/ L spring constant c m d 2 x dt 2 + b dx dt + c x = F 0 d 2 x dt 2 + b m dx dt + c m x = a 0 , where a 0 represents the acceleration coefficient. The displacement variable x corresponds to the current i in the electrical series resonant circuit and to the voltage u in the electrical parallel resonant circuit. • By means of time integration of the last equation shown, it applies dx dt + b m x + c m ˆ x dt = v 0 the oscillation equation by means of the impressed velocity v 0 of a parallel oscillator. All elements involved experience the same velocity, which corresponds to the voltage source u 0 in the electrical parallel oscillating circuit. <?page no="258"?> 232 Differential equations and finite elements 9.3 Finite elements The discretisation of the solution space or solution area requires the division into individual spaces or individual areas, the finite elements. In fig. 9.4 typical finite elements are shown ordered by space dimensions and order of the approach function. Figure 9.4: Classification of chosen finite elements For the discretisation of boundary value problems in the two-dimensional domain, triangular or quadrilateral elements are preferably used. For triangular elements, elementwise linear, quadratic or cubic approach functions are often used. These are equal to one in one node and equal to zero in all remaining nodes. With element-wise quadratic <?page no="259"?> 9.3 Finite elements 233 approach functions, edge centre nodes are also used. Here, quadratic approach functions are defined, which are equal to one in a triangle node and equal to zero in the remaining edge nodes. Bilinear, biquadratic or quadratic functions are used for the quadrilateral elements (see also [42], p. 227 ff.). <?page no="261"?> Chapter 10 From the Method of Moments to the Galerkin Method The Method of Moments (MOM) is a numerical method for solving boundary value problems, preferably in the field of electromagnetism. Like the finite element method, the method of moments transforms equations of a given boundary value problem into a matrix equation for solution with a computer. A closed formulation of the method is presented in [38]. It has become a predominant method for computations in the field of electromagnetism. 
In [63], the phase discretisation grid method is presented as an extension of the MOM for calculating the field scattering behaviour of electrically conductive bodies. 10.1 Basic principle of the method of moments - (MOM) A successful introduction to the method of moments is given in [38], p. 5 ff. The basic principle of MOM is based on the transformation of an equation with boundary conditions by means of numerical approximation into a matrix equation for solution with known procedures. To illustrate the method, the inhomogeneous equation L f = g (10.1) is considered. Where L is a linear operator, f is the unknown function to be determined and g is the known function representing the source term. This is a deterministic <?page no="262"?> 236 From the Method of Moments to the Galerkin Method problem whose solution is unique, which means that only one f is associated with a given g. An analysis problem exists when L and g are given and f is to be determined. A synthesis problem exists if f and g are given and L is to be determined. In the sequel, only analysis problems are treated. The solution domain is defined with Ω. To find the solution, f is defined as the series of functions φ 1 , φ 2 , φ 3 , ... in the domain of Ω f = N � j=1 a j φ j . (10.2) Here a j are the unknown development coefficients and φ j the development or basis functions. Thus the inhomogeneous equation follows N � j=1 a j L φ j = g. In the following, weighting or test functions w 1 , w 2 , w 3 are defined and with them the inner product N � j=1 a j � w k , L φ j � = � w k , g � with k = 1, 2, 3, ... is formed. The equation is written in matrix notation (l jk ) (a j ) = (g k ) . Thereby are (l jk ) = ⎛⎜⎜⎝ � w 1 , L φ 1 � � w 1 , L φ 2 � ... � w 2 , L φ 1 � � w 2 , L φ 2 � ... ... ... ... ⎞⎟⎟⎠ (10.3) (a j ) = ⎛⎜⎜⎜⎜⎝ a 1 a 2 . . ⎞⎟⎟⎟⎟⎠ (10.4) <?page no="263"?> 10.2 Remarks on the method of moments 237 (g k ) = ⎛⎜⎜⎜⎜⎝ 〈 w 1 , g 〉 〈 w 2 , g 〉 . . ⎞⎟⎟⎟⎟⎠ . Assuming the non-singularity of the matrix (l), the determination of the coefficients a is possible with (a j ) = (l − 1 jk ) (g k ). This corresponds to the solution of f. The expressions for the solution of f are given by (φ j ) = (φ 1 φ 2 φ 3 ...) and f = (φ j ) (a k ) = (φ j ) (l − 1 jk ) (g k ). If the solution is exact or approximated depends on the choice of φ j and w k . 10.2 Remarks on the method of moments In the following, the user of the MOM is given hints and recommendations regarding the matrix, the choice of the base and weighting function. 10.2.1 Matrix (l jk ) If the matrix (l jk ) is of infinite order, its inversion can only be performed for some special cases such as a diagonal matrix. The classical eigenfunction method leads to a diagonal matrix and can be taken as a special case of the MOM. If, on the other hand, φ j and w k are finite, the matrix assumes a finite order and can be inverted using already known methods. <?page no="264"?> 238 From the Method of Moments to the Galerkin Method 10.2.2 Choosing the basis and weighting functions φ n and w k One of the main tasks of all special problems is the choice of the basis function φ j and weighting function w k . The basis function should be linearly independent and chosen in such a way that the superposition of eq. (10.2) approximates the function f adequately and well. The weighting function w k should also be linearly independent and chosen such that the inner product � w k , g � is relatively independent of the property of g. 
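To make the recipe of chap. 10.1 concrete, the following sketch applies it to the toy boundary value problem −f″ = 1 on [0, 1] with f(0) = f(1) = 0, using the eigenfunctions sin(jπx) both as basis and as weighting functions. Problem and basis are illustration choices, not taken from the text; as remarked in chap. 10.2.1, this eigenfunction choice leads to a diagonal moment matrix.

```python
# Sketch of the MOM recipe for  L f = -f'' = 1  on [0, 1] with f(0) = f(1) = 0,
# using phi_j = sin(j*pi*x) as basis and weighting functions (Galerkin choice).
import numpy as np
from scipy.integrate import quad

N = 5
phi  = lambda j, x: np.sin(j*np.pi*x)
Lphi = lambda j, x: (j*np.pi)**2 * np.sin(j*np.pi*x)   # L phi_j = -phi_j''

l = np.array([[quad(lambda x: phi(k, x)*Lphi(j, x), 0, 1)[0]
               for j in range(1, N+1)] for k in range(1, N+1)])
g = np.array([quad(lambda x: phi(k, x)*1.0, 0, 1)[0] for k in range(1, N+1)])

a = np.linalg.solve(l, g)                              # (l_jk)(a_j) = (g_k)
f = lambda x: sum(a[j-1]*phi(j, x) for j in range(1, N+1))

print(np.round(l, 3))        # essentially diagonal
print(f(0.5), 0.5*0.5*0.5)   # compare with exact solution x*(1-x)/2 at x = 0.5
```

With five basis functions the approximation already agrees with the exact solution x(1 − x)/2 to roughly three decimal places at x = 0.5.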
Guidance on the choice of basis and weighting functions can be provided by the coefficients of the MacLaurin series describing the function of interest. Some other factors that influence the choice of φ j and w k are • the desired accuracy of the solution, • the effort to develop the matrix elements, • the matrix size to be inverted, • the realisation of a well-conditioned matrix. This chapter informs about the idea of the mathematician Galerkin and introduces his method. The Galerkin method is used very widely to solve differential equations, for example, in structural mechanics, fluid mechanics, heat and mass transport, acoustics and microwave applications. Problems described by ordinary differential equations, partial differential equations and integral equations can be investigated by means of the Galerkin formalism. ”Any problem for which governing equation can be written down is a candidate for a Galerkin method.“ [33], S. 1. The origin of the method is traced back to a publication by Galerkin (1915). 10.3 About Boris Galerkin Boris Grigorievich Galjorkin (Galerkin) (1871-1945) was a Soviet mechanical engineer and mathematician who graduated from St Petersburg in 1899 [33]. Galerkin studied at the St Petersburg Polytechnic from 1893. From 1899 he worked in a locomotive factory as an engineer and helped build a railway line in Manchuria. Back in St Petersburg he <?page no="265"?> 10.4 Galerkin’s idea 239 worked as a senior engineer in a steam boiler factory. Galerkin was imprisoned in 1907 for remarks critical of the Tsar, where he became involved in civil engineering. In 1908 his first publication on structural engineering appeared. In Leningrad he accepted a chair in civil engineering in 1922, and taught at the Leningrad Institute of Railway Engineering and at the Leningrad State University. At the Military Engineering Technical University there he became professor and head of civil engineering. Galerkin was head of the Institute of Mechanics of the Soviet Academy of Sciences in Saint Petersburg from 1940 until his death in Moscow. 10.4 Galerkin’s idea The special choice of taking as weighting function the basis function w j = φ j is called Galerkin’s method. ”Galerkin method is used to reduce an ordinary differential equation to a system of algebraic equations.“ [33], S. 4. <?page no="267"?> Chapter 11 Traditional Galerkin Method One of the key features of the traditional or conventional Galerkin method is presented below. A 2D problem is represented by the linear differential equation L u = 0 (11.1) in the domain D(x, y) with the boundary conditions S(u) = 0, ∂D. The Galerkin method assumes that u u a = u 0 (x, y) + N ∑ j=1 a j φ j (x, y) (11.2) is accurately represented by the approximate solution. The following applies: • φ j is a known analytic function (basis function), • u 0 is introduced to satisfy the boundary conditions, • a j are coefficients to be determined. Substituting eq. (11.2) into eq. (11.1), it follows that <?page no="268"?> 242 Traditional Galerkin Method L u a = L u 0 + N ∑ j=1 a j L φ j = R(a 0 , a 1 , ..., a N , x, y) (11.3) = 0. With the Galerkin method, the unknown coefficients a j are obtained by developing the inner product � φ k , R � = 0, k = 1, ..., N (11.4) in the domain D ∈ [0, 1], with the residual R (Latin residuum = what is left behind or in mathematics the deviation) and the analytical function φ k (weighting function) known from eq. (11.2). Since the present example is based on a linear differential equation, eq. 
(11.4) can be directly written as a matrix equation for the coefficient a j as N ∑ j=1 a j � φ k , L φ j � = −� φ k , L u 0 � . (11.5) Solving and inserting a j into eq. (11.2) leads to the approximate solution of u a . The effect of the Galerkin method becomes transparent in the examples worked out in the following chapters. <?page no="269"?> Chapter 12 Galerkin method - solution of du/ dx = u The Galerkin method is applied below to reduce an ordinary differential equation to a system of algebraic equations. The ordinary differential equation of the 1 ′ th order is to be solved du dx − u = 0 (12.1) in the domain Ω with 0 ≤ x ≤ 1 and the boundary condition y(0) = 1, whose solution is u(x) = e x . 12.1 Choosing the base and weighting function In the sequel, the function u is described by means of series expansion u j = N ∑ j=0 a j x j = a 0 + N ∑ j=1 a j x j , (12.2) where x j represents the basis function. The coefficient a 0 = 1 is chosen to satisfy the boundary condition. The intentional structuring of the experimental solution to meet <?page no="270"?> 244 Galerkin method - solution of du/ dx = u the boundary conditions is a common practice in the application of of the traditional Galerkin method. Compare eq. (11.2). Substituting eq. (12.2) into eq. (12.1), it follows that d dx ( a 0 + N ∑ j=1 a j x j ) − ( a 0 + N ∑ j=1 a j x j ) = 0 = R. After derivation remains N ∑ j=1 a j jx j − 1 − 1 − N ∑ j=1 a j x j = 0 − 1 + N ∑ j=1 a j ( jx j − 1 − x j ) = 0 = R. 12.2 Weak formulation of the differential equation The weighting function w = x k − 1 is introduced. In the progression the development of the inner product with the weighting function takes place � R, w � = 0 ˆ 1 0 R x k − 1 dx = 0, k = 1, ..., N ˆ 1 0 [ − 1 + N ∑ j=1 a j ( jx j − 1 − x j )] x k − 1 dx = 0 ˆ 1 0 [ − x k − 1 + N ∑ j=1 a j ( jx j − 1 − x j ) x k − 1 ] dx = 0 ˆ 1 0 N ∑ j=1 a j ( jx j − 1 − x j ) x k − 1 dx = ˆ 1 0 x k − 1 dx. After integration and insertion of the integration limits, it follows the weak formulation of eq. (12.1) <?page no="271"?> 12.3 Transforming the system of equations into a matrix equation 245 j j + k − 1 x j+k − 1 ���� 1 0 − 1 j + k x j+k ���� 1 0 = 1 k x k ���� 1 0 j j + k − 1 − 1 j + k = 1 k . 12.3 Transforming the system of equations into a matrix equation The equations are transformed into a system of linear equations M A = D with the elements of M and D m kj = � jx j − 1 − x j , x k − 1 � = j j + k − 1 − 1 j + k d k = � 1, x k − 1 � = 1 k . A is the vector of unknown coefficients a j . The individual elements of the matrix M are now calculated with N = 3. As an example, the elements m(3, j) are calculated for k = 3: m 31 = 1 1 + 3 − 1 − 1 1 + 3 = 1 12 m 32 = 2 2 + 3 − 1 − 1 2 + 3 = 3 10 m 33 = 3 3 + 3 − 1 − 1 3 + 3 = 13 30 . 12.4 Solving the linear equation system The linear system of equations thus obtained ⎛⎜⎜⎝ 1 2 2 3 3 4 1 6 5 12 11 20 1 12 3 10 13 30 ⎞⎟⎟⎠ � �� � M · ⎛⎜⎜⎝ a 1 a 2 a 3 ⎞⎟⎟⎠ � �� � A = ⎛⎜⎜⎝ 1 1 2 1 3 ⎞⎟⎟⎠ � �� � D <?page no="272"?> 246 Galerkin method - solution of du/ dx = u is solved with A = M − 1 D. The Galerkin method transforms the ordinary differential equation into an algebraic system of equations. The solution of the coefficients is Figure 12.1: Comparison of the exact analytical with the numerical approximate solutions A = ⎛⎜⎜⎝ a 1 a 2 a 3 ⎞⎟⎟⎠ = ⎛⎜⎜⎝ 1.014 0.423 0.282 ⎞⎟⎟⎠ . From the following substitution of eq. (12.2) follows the approximate solution <?page no="273"?> 12.4 Solving the linear equation system 247 u 3 = 1 + 1.014 x 1 + 0.423 x 2 + 0.282 x 3 . 
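These coefficients can be reproduced directly from the closed forms of chap. 12.3; a short numerical sketch (NumPy) follows.

```python
# Numerical cross-check of the coefficients a_1..a_3: assemble M and D from
# m_kj = j/(j+k-1) - 1/(j+k) and d_k = 1/k and solve the 3x3 system.
import numpy as np

N = 3
M = np.array([[j/(j + k - 1) - 1/(j + k) for j in range(1, N+1)]
              for k in range(1, N+1)])
D = np.array([1/k for k in range(1, N+1)])

a = np.linalg.solve(M, D)
print(np.round(a, 3))                       # approx. [1.014 0.423 0.282]

x = np.linspace(0, 1, 5)
u3 = 1 + sum(a[j-1]*x**j for j in range(1, N+1))
print(np.max(np.abs(u3 - np.exp(x))))       # small deviation from exp(x)
```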
The exact solution is u 4 = e x . In fig. 12.1 the comparison between the exact and the approximate solution was made. It can be seen that the function u 3 and u 4 appear congruent. <?page no="275"?> Chapter 13 Galerkin method - solution of − d 2 u/ dx 2 = 4x 2 + 1 Let the inhomogeneous differential equation of the 2 ′ nd order − d 2 u(x) dx 2 = 4 x 2 + 1 (13.1) with the boundary condition u(0) = u(1) = 0 be given. This is a boundary value problem with the solution u(x) = − 1 3 x 4 − 1 2 x 2 + 5 6 x in the domain Ω = [0, 1]. The differential equation is solved in the continuation with the traditional Galerkin method. 13.1 Choosing the base and weighting function With the help of the approach or basis function u n u n = x − x n+1 the function u(x) with the polynomial u u = N ∑ n=1 a n u n = N ∑ n=1 a n ( x − x n+1 ) <?page no="276"?> 250 Galerkin method - solution of − d 2 u/ dx 2 = 4x 2 + 1 is developed. According to the Galerkin method, the weighting function w is equal to the basis function w m = x − x m+1 = u n . 13.2 Formulation of the weak form with basis and weighting function With the weighting function equal to the basis function it follows � w m , L u � = � w m , g � N ∑ n=1 a n � w m , L u n � = � w m , g � N ∑ n=1 a n 〈 x − x m+1 , − d 2 dx 2 ( x − x n+1 )〉 = 〈 x − x m+1 , 4x 2 + 1 〉 and thus the weak form of the differential equation N ∑ n=1 a n ︸︷︷︸ (a n ) ˆ Ω ( x − x m+1 ) ( − d 2 dx 2 ( x − x n+1 )) dx ︸ ︷︷ ︸ T erm1 (l mn ) = ˆ Ω ( x − x m+1 + 4x 3 − 4x m+3 ) dx ︸ ︷︷ ︸ T erm2 (g m ) . 13.3 Transforming the system of equations into a matrix equation The equation is transformed into (a n ) (l mn ) = (g m ). For this purpose, the terms 1 and 2 are developed as follows: • Term 1 (l mn ): Integration is done by applying partial integration twice. The first application of partial integration yields <?page no="277"?> 13.3 Transforming the system of equations into a matrix equation 251 ˆ 1 0 ( x − x m+1 ) − d 2 dx 2 ( x − x n+1 ) dx = [( x − x m+1 ) − d dx ( x − x n+1 )] ∣∣∣∣∣ 1 0 ︸ ︷︷ ︸ =0 − ˆ 1 0 − (m + 1) x m − d dx ( x − x n+1 ) dx. The second application of partial integration yields − ˆ 1 0 (m + 1) x m d dx ( x − x n+1 ) dx = − [ (m + 1) x m ( x − x n+1 )] ∣∣∣ 1 0 ︸ ︷︷ ︸ =0 + ˆ 1 0 m (m + 1) x m − 1 ( x − x n+1 ) dx = ˆ 1 0 m (m + 1) x m − m (m + 1) x m+n dx = [ m(m + 1) m + 1 x m+1 − m(m + 1) m + n + 1 x m+n+1 ] ∣∣∣∣∣ 1 0 = m (m + n + 1) − m (m + 1) m + n + 1 = m n m + n + 1 = (l mn ). • Term 2 (g m ): Solution takes place through term-by-term integration ˆ 1 0 ( x − x m+1 + 4x 3 − 4x m+3 ) dx = ( 1 2 x 2 − 1 m + 2 x m+2 + x 4 − 4 m + 4 x m+4 ) ∣∣∣∣∣ 1 0 = 1 2 − 1 m + 2 + 1 − 4 m + 4 . The common main denominator and the summation of the numerator leads to 1 2 − 1 m + 2 + 1 − 4 m + 4 = m (3m + 8) 2 (m + 2) (m + 4) = (g m ). <?page no="278"?> 252 Galerkin method - solution of − d 2 u/ dx 2 = 4x 2 + 1 13.4 Solving the linear equation system Solving the linear system of equations (l mn ) − 1 (g m ) = (a n ) for N = 1 is (l mn ) − 1 (g m ) = � 11 10 � . This makes the series of the function u u = 1 � n=1 a n � x − x n+1 � = 11 10 � x − x 2 � = − 11 10 x 2 + 11 10 x. The solution for N = 2 is (l mn ) − 1 (g m ) = � 1 10 2 3 � . This completes the series of the function u u = 2 � n=1 a n � x − x n+1 � = 1 10 � x − x 2 � + 2 3 � x − x 3 � = − 2 3 x 3 − 1 10 x 2 + 23 30 x. The solution for N = 3 is (l mn ) − 1 (g m ) = ⎛⎜⎜⎝ 1 2 0 1 3 ⎞⎟⎟⎠ . 
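The coefficient vectors listed for N = 1 to 3 can be reproduced from the closed forms l_mn = mn/(m + n + 1) and g_m = m(3m + 8)/(2(m + 2)(m + 4)); a brief sketch with exact rational arithmetic (SymPy) follows.

```python
# Reproduce the coefficient vectors for N = 1, 2, 3 from the closed forms
# derived above, using exact rational arithmetic.
import sympy as sp

def coefficients(N):
    l = sp.Matrix(N, N, lambda mi, ni: sp.Rational((mi+1)*(ni+1),
                                                   (mi+1) + (ni+1) + 1))
    g = sp.Matrix(N, 1, lambda mi, _: sp.Rational((mi+1)*(3*(mi+1) + 8),
                                                  2*((mi+1) + 2)*((mi+1) + 4)))
    return l.solve(g)

for N in (1, 2, 3):
    print(N, list(coefficients(N)))
# 1 [11/10]
# 2 [1/10, 2/3]
# 3 [1/2, 0, 1/3]
```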
<?page no="279"?> 13.4 Solving the linear equation system 253 Table 13.1: Coefficients for N = 4 (l mn ) (g m ) n m 1 2 3 4 1 1 3 1 2 3 5 2 3 11 30 2 1 2 4 5 1 8 7 7 12 3 3 5 1 9 7 3 2 51 70 4 2 3 8 7 3 2 16 9 5 6 The polynomial function u becomes u = 3 ∑ n=1 a n ( x − x n+1 ) = 1 2 ( x − x 2 ) + 0 ( x − x 3 ) + 1 3 ( x − x 4 ) = − 1 3 x 4 − 1 2 x 2 + 5 6 x. Differentiation twice leads to eq. (13.1). For N = 4 the coefficients for (l mn ) and (g m ) have been summarised in tab. 13.1. The fig. 13.1 shows the graphical representations of the individual development steps. <?page no="280"?> 254 Galerkin method - solution of − d 2 u/ dx 2 = 4x 2 + 1 Figure 13.1: Function u(x) with the number of series elements N as the plot parameter <?page no="281"?> Chapter 14 Galerkin method - solution of d 2 u/ dx 2 = − 1 (I) Given is the ordinary inhomogeneous differential equation of 2 ′ th order d 2 u(x) dx 2 + 1 = 0, x ∈ Ω (14.1) in the domain Ω = [0, 1] with Dirichlet boundary conditions u(0) = u(1) = 0, whose solution is u(x) = − 1 2 x 2 + 1 2 x = 1 2 ( x − x 2 ) , and their function curve is shown in fig. 14.1. The ordinary differential equation eq. (14.1) is solved in progress using the traditional Galerkin method. To solve the differential equation, the condition on the interval Ω = [a, b] = [0, 1] with u(x) = 0, x ∂Ω is imposed and a non-linear weighting function is chosen for the solution. It follows again � w, R � = ˆ Ω w R dx = ˆ Ω w ( d 2 u(x) dx 2 + 1 ) dx = 0, with the residual R and weighting or test function w. <?page no="282"?> 256 Galerkin method - solution of d 2 u/ dx 2 = − 1 (I) Figure 14.1: Solution u(x) from eq. (14.1) 14.1 Choosing the base and weighting function In the sequel u(x) = N ∑ n=1 a n ( x − x n+1 ) (14.2) w(x) = x − x m+1 , according to Galerkin, the function u(x) to be solved is assumed to be a polynomial with the same function class as the weighting function. 14.2 Weak formulation of the differential equation From this follows the weak formulation of eq. (14.1) with <?page no="283"?> 14.3 Transforming the system of equations into a matrix equation 257 N � n=1 a n ˆ Ω � x − x m+1 � d 2 dx 2 � x − x n+1 � dx = − ˆ Ω � x − x m+1 � dx. (14.3) 14.3 Transforming the system of equations into a matrix equation The left term of the eq. (14.3) is transformed into the form N � n=1 a n � n 2 + n n x n+m+1 − n + mn + n 2 m + n + 1 x n+1 � ����� 1 0 � �� � (lmn) = − � 1 2 x 2 − 1 m + 2 x m+2 � ����� 1 0 � �� � (g m ) using partial integration and the right term of eq. (14.3) is formed by integration. The domain is bounded by Ω = [0, 1] according to fig. 14.1. 14.4 Solving the linear equation system The resulting system of linear equations (a n ) (l mn ) = (g m ) is solved for (a n ). It is ⎛⎜⎜⎝ a 1 a 2 a 3 ⎞⎟⎟⎠ = ⎛⎜⎜⎝ 80/ 3 256 8448/ 5 112 5504/ 5 7424 1968/ 5 3968 191232/ 7 ⎞⎟⎟⎠ − 1 � �� � (l − 1 mn ) · ⎛⎜⎜⎝ 40/ 3 56 984/ 5 ⎞⎟⎟⎠ � �� � (g m ) = ⎛⎜⎜⎝ 1/ 2 0 0 ⎞⎟⎟⎠ . Thus the solution of the differential equation of eq. (14.1) follows in the notation of eq. (14.2) <?page no="284"?> 258 Galerkin method - solution of d 2 u/ dx 2 = − 1 (I) u(x) = a 1 ( x − x 1+1 ) + a 2 ( x − x 2+1 ) + a 3 ( x − x 3+1 ) = 1 2 ( x − x 2 ) = − 1 2 x 2 + 1 2 x. <?page no="285"?> Chapter 15 Galerkin method - solution of d 2 u/ dx 2 = − 1 (II) The 2 ′ th order ordinary differential equation d 2 u(x) dx 2 + 1 = 0 = R (15.1) whose solution in the domain Ω = [0, 4] with the homogeneous Dirichlet boundary conditions u(0) = u(4) = 0 is u(x) = − 1 2 x 2 + 2 x (15.2) and shown in fig. 
15.1, is to be solved in the progress with the traditional Galerkin method by developing the inner product � w, R � = 0 ˆ Ω w(x) ( d 2 u(x) dx 2 + 1 ) dx = 0. 15.1 Choosing the base and weighting function According to the Galerkin method, w m = u n must be set. Suitable weighting and basis functions must be searched for and checked. The searched function must always fulfil the required boundary conditions w(x) = u(x) = 0 at x ∂ Ω. The following was searched for, found and checked <?page no="286"?> 260 Galerkin method - solution of d 2 u/ dx 2 = − 1 (II) Figure 15.1: Solution u(x) of the eq. (15.1) u(x) = N ∑ n=1 a n ( x n − 1 4 x n+1 ) w(x) = x m − 1 4 x m+1 . 15.2 Weak formulation of the differential equation The base and weighting function found is inserted and transformed into the equation shown above for calculating the inner product. From this follows the weak form of the differential equation ˆ Ω ( x m − 1 4 x m+1 )[ d 2 dx 2 N ∑ n=1 a n ( x n − 1 4 x n+1 ) + 1 ] = 0 a n N ∑ n=1 ˆ Ω ( x m − 1 4 x m+1 ) d 2 dx 2 ( x n − 1 4 x n+1 ) dx ︸ ︷︷ ︸ (l mn ) = − ˆ Ω ( x m − 1 4 x m+1 ) dx ︸ ︷︷ ︸ (g m ) . <?page no="287"?> 15.3 Transforming the system of equations into a matrix equation 261 15.3 Transforming the system of equations into a matrix equation The transformation of the weak form of the equation into the matrix equation is done by means of partial integration. Subsequent formation of the antiderivative and insertion of the boundaries x(0) = x(4) = 0 yield the two matrices • Matrix (l mn ) : � − n x m+n − 1 ( − n 3 x 2 + 8 n 3 x − 16 n 3 + n x 2 − 8 n x + 16 n) 16 (m 3 + 3 m 2 n + 3 m n 2 − m + n 3 − n) − m 2 n x m+n − 1 (16 n − 8 n x + n x 2 + x 2 − 16) 16 (m 3 + 3 m 2 n + 3 m n 2 − m + n 3 − n) + m n x m+n − 1 ( − 2 n 2 x 2 + 16 n 2 x − 32 n 2 − n x 2 + 16 n + x 2 + 16) 16 (m 3 + 3 m 2 n + 3 m n 2 − m + n 3 − n) � ����� 4 0 = − 2 4 m+n − 1 m n m 3 + 3 m 2 n + 3 m n 2 − m + n 3 − n . For m = [1 2 3] and n = [1 2 3] the matrix (l mn ) follows (l mn ) = ⎛⎜⎜⎝ − 4 3 − 8 3 − 32 5 − 8 3 − 128 15 − 128 5 − 32 5 − 128 5 − 3072 35 ⎞⎟⎟⎠ . Here det(l mn ) = − 65536/ 2625 � = 0. • Matrix (g m ) : − ˆ Ω � x m − 1 4 x m+1 � dx = − � ˆ Ω x m dx − ˆ Ω 1 4 x m+1 dx � = − ⎛⎝ 1 m + 1 x m+1 ����� 4 0 − 1 4(m + 2) x m+2 ����� 4 0 ⎞⎠ = − 1 m + 1 4 m+1 + 1 4(m + 2) 4 m+2 . For m = [1, 2, 3] it follows <?page no="288"?> 262 Galerkin method - solution of d 2 u/ dx 2 = − 1 (II) (g m ) = ⎛⎜⎜⎝ − 8 3 − 16 3 − 64 5 ⎞⎟⎟⎠ . • Matrix (a n ) : The matrix of the variable a to be solved is for n = [1, 2, 3] (a n ) = ⎛⎜⎜⎝ a 1 a 2 a 3 ⎞⎟⎟⎠ . 15.4 Solving the linear equation system The resulting system of linear equations (a n ) (l mn ) = (g m ) is calculated after (a n ) with ⎛⎜⎜⎝ a 1 a 2 a 3 ⎞⎟⎟⎠ = ⎛⎜⎜⎝ − 4 3 − 8 3 − 32 5 − 8 3 − 128 15 − 128 5 − 32 5 − 128 5 − 3072 35 ⎞⎟⎟⎠ − 1 � �� � (l mn ) · ⎛⎜⎜⎝ − 8 3 − 16 3 − 64 5 ⎞⎟⎟⎠ � �� � (g m ) = ⎛⎜⎜⎝ 2 0 0 ⎞⎟⎟⎠ . Thus the solution of the differential equation of eq. (15.1) follows in the representation of eq. (15.2) u(x) = a 1 � x − 1 4 x 1+1 � + a 2 � x − 1 4 x 2+1 � + a 3 � x − 1 4 x 3+1 � = 2 � x − 1 4 x 2 � = − 1 2 x 2 + 2 x. <?page no="289"?> Chapter 16 Galerkin method - Ampere’s law The mathematical shell of Ampere’s law is formed by Maxwell’s fourth theorem, eq. (1.6). Ampere’s law from fig. 9.1 in its differential form is transformed into a matrix equation using the Galerkin method and solved for the magnetic field strength H. The solution is done for the inside and outside of the conductor. In fig. 16.1 the integral form of the law is shown. 
The law is described by means of Maxwell’s fourth theorem, which represents the relationship between a circular integral and an area integral. Also in fig. 16.1 the analytical derivations of the magnetic field strengths for the inner space, for the surface and the outer space of the conductor are shown. The magnetic field line H Φa in the outer area with radius r is drawn as a representative of the inner and outer area. The legend can be found in the figure. In fig. 16.2 the graphical solution of both equations of the magnetic field strength for the inner and outer space of the conductor can be seen. In the inner space of the conductor, the magnetic field strength increases proportionally to the radius r. The maximum magnetic field strength occurs at the conductor surface R. The magnetic field strength decreases from the conductor surface with a hyperbolic function, which tends towards zero with a very large radius r. <?page no="290"?> 264 Galerkin method - Ampere’s law Figure 16.1: Derivation of the magnetic field strength H Figure 16.2: Curve of the magnetic field strength inside and outside the conductor rod <?page no="291"?> 16.1 Galerkin method - Ampere’s law for the conductor inside 265 16.1 Galerkin method - Ampere’s law for the conductor inside For reasons of clearness, the integral form of Ampere’s law is introduced. For the inner area of the conductor, according to fig. 16.1, an increase in the radius of the circular integral of eq. (1) leads to an increase in the magnetic field strength and the length of the field line, so that the area integral of the right-hand side of the equation, which is bounded by the field line, also increases and a maximum is reached at the surface according to eq. (3), fig. 16.1. Compare also fig. 16.2. The calculation is continued with the differential form of Ampere’s law. Here, the curl of the magnetic field strength in the cylindrical coordinate system is given by rot � H Φi = � J rot z � H Φi = ⎛⎜⎝ 1 r ∂rH Φi ∂r − 1 r ∂H r ∂Φ � �� � =0 ⎞⎟⎠ 1 r ∂rH Φi ∂r = 1 r ∂ ∂r � r J 2 r � = J. (16.1) The equation is to be solved for H Φi . The field has only one component in the circumferential direction Φ, therefore the further term of the equation disappears. The magnetic field is solved for the interior of the conductor. The procedure for the solution corresponds to the procedure according to chap. 10. 16.1.1 Weak formulation of the differential equation By rearranging eq. (16.1), it follows that ∂rH Φi (r) ∂r = r J. If the current density J is assumed to be constant and the radius r increases (right side of the equation), the derivative of the left side of the equation is not to be assumed constant. The equation is to be solved according to the magnetic field strength in the inner space of the conductor H Φi . Using the appropriate basis function H Φi (r) = N � n=1 a n r n − 1 (16.2) <?page no="292"?> 266 Galerkin method - Ampere’s law it follows N ∑ n=1 a n d dr ( r · r n − 1 ) = r J. Including the weighting function w m the representation with the help of the inner product follows N ∑ n=1 a n 〈 w m , d dr r · r n − 1 〉 = � w m , r J � and from the Galerkin approach w m = r m − 1 follows the weak formulation of Ampere’s law N ∑ n=1 a n 〈 r m − 1 , d dr r · r n − 1 〉 = 〈 r m − 1 , r J 〉 in the summation notation with number N of summands, which still have to be determined. 
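The weak formulation just obtained can also be evaluated and solved directly with computer algebra. The following sketch (SymPy, with R and J kept symbolic and N = 3) anticipates the result of the hand calculation carried out in the next two subsections.

```python
# Symbolic evaluation of the weak form of Ampere's law for the conductor
# interior: assemble the inner products over 0 <= r <= R and solve.
import sympy as sp

r, R, J = sp.symbols('r R J', positive=True)
N = 3

basis  = [r**(n - 1) for n in range(1, N + 1)]     # H = sum a_n r**(n-1)
weight = [r**(m - 1) for m in range(1, N + 1)]     # Galerkin: w_m = basis

l = sp.Matrix(N, N, lambda mi, ni:
              sp.integrate(weight[mi]*sp.diff(r*basis[ni], r), (r, 0, R)))
g = sp.Matrix(N, 1, lambda mi, _:
              sp.integrate(weight[mi]*r*J, (r, 0, R)))

a = l.solve(g).applyfunc(sp.simplify)
print(list(a))                                     # [0, J/2, 0] -> H(r) = J*r/2
```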
16.1.2 Transforming the system of equations into a matrix equation The weak form of Ampere’s law is obtained by means of integrals over the domain Ω N ∑ n=1 a n ˆ Ω r m − 1 · d dr ( r · r n − 1 ) dr ︸ ︷︷ ︸ T erm1 = ˆ Ω r m − 1 · r · J dr ︸ ︷︷ ︸ T erm2 . Here Ω = r ∈ [0, R], where the conductor radius is R. The development of the first term takes place by means of partial integration to ˆ R 0 r m − 1 · d dr ( r · r n − 1 ) dr = [ r m+n − 1 ] R 0 − ˆ R 0 (m − 1) r m − 2 r n dr = R m+n − 1 − m − 1 m + n − 1 R m+n − 1 = n m + n − 1 R m+n − 1 . <?page no="293"?> 16.1 Galerkin method - Ampere’s law for the conductor inside 267 The second term is developed by integration to ˆ R 0 r m − 1 · r · J dr = J m + 1 R m+1 . The weak formulation is thus once again seen as a sum N � n=1 a n n m + n − 1 R m+n − 1 � �� � (l mn ) = J m + 1 R m+1 � �� � (g m ) . 16.1.3 Solving the linear equation system The summation notation of the weak form is converted into the matrix notation for N = 3 (a n ) · (l mn ) = (g m ) (a n ) · ⎛⎜⎜⎝ R R 2 R 3 R 2 2 2R 3 3 3R 4 4 R 3 3 R 4 2 3R 5 5 ⎞⎟⎟⎠ = (g m ) (a n ) = (l mn ) − 1 · (g m ) = ⎛⎜⎜⎝ 9 R − 36 R 2 30 R 3 − 18 R 2 96 R 3 − 90 R 4 10 R 3 − 60 R 4 60 R 5 ⎞⎟⎟⎠ · ⎛⎜⎜⎝ JR 2 2 JR 3 3 JR 4 4 ⎞⎟⎟⎠ ⎛⎜⎜⎝ a 1 a 2 a 3 ⎞⎟⎟⎠ = ⎛⎜⎜⎝ 0 J 2 0 ⎞⎟⎟⎠ . By substituting into the basis function eq. (16.2) follows H Φi (r) = 0 · r 0 + J 2 · r 1 + 0 · r 2 = J 2 · r = 0.509 · 10 6 A 2 m 2 · 2.5 · 10 − 3 m = 636.25 A/ m <?page no="294"?> 268 Galerkin method - Ampere’s law the result of the field strength, which corresponds to the analytical result from fig. 16.1 at the conductor surface. 16.2 Galerkin method - Ampere’s law for the conductor outside In the outer space, the magnetic field is curl-free . With eq. (4) in fig. 16.1 follows rot � H Φa = 1 r ∂ ∂r ( r J R 2 2r ) = 0. Consequently, the magnetic field can no longer be calculated with its curl. Since the derivative disappears, the Galerkin method is not applicable. According to the circular integral of eq. (1) in fig. 16.1, an increase in the radius of the magnetic field line leads to an increase in the area to be integrated, which is described by means of Maxwell’s fourth theorem. The area integration takes place over the current-density-carrying area. In the case of the area integral, however, only the integration over the area carrying the current density (conductor) provides a contribution to the area integral. Thus, the right-hand side of eq. (1) in fig. 16.1 remains constant. The effect of the increasing radius r in the circular integral leads to a decrease of the magnetic field strength according to eq. (4) in fig. 16.1. The magnetic field strength multiplied by its length is constant and corresponds to the area integral over the current density. Compare also fig. 16.1. A new differential equation for the calculation of the external field H Φa has to be found, for which its derivative towards r does not vanish. Using the figures 16.2 and eq. (4) in 16.1, the differential equation is obtained by deriving the weakening magnetic field as the radius r increases dH Φa (r) dr = − J R 2 2 1 r 2 , whose derivative also depends on the radius r and is therefore not constant. The equation is solved according to the magnetic field strength for the outer area of the conductor H Φa . <?page no="295"?> 16.2 Galerkin method - Ampere’s law for the conductor outside 269 16.2.1 Weak formulation of the differential equation By means of considerations of the equation of the magnetic field strength in the external area (fig. 
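A quick numerical integration confirms that this differential equation indeed reproduces the analytical exterior field of fig. 16.1. The sketch below uses the data of the worked example (conductor diameter 5 mm, i.e. R = 2.5 mm, and J = 0.509 A/mm²) and integrates outward from the conductor surface.

```python
# Numerical check of the exterior ODE dH/dr = -J*R**2/(2*r**2): integrating
# outward from H(R) = J*R/2 reproduces the analytical field H = J*R**2/(2*r).
import numpy as np
from scipy.integrate import solve_ivp

R = 2.5e-3                      # conductor radius in m
J = 0.509e6                     # current density in A/m^2

sol = solve_ivp(lambda r, H: -J*R**2/(2*r**2), (R, 10e-3),
                [J*R/2], t_eval=np.linspace(R, 10e-3, 50), rtol=1e-8)

H_exact = J*R**2 / (2*sol.t)
print(sol.y[0][-1], H_exact[-1])        # both approx. 159-160 A/m at r = 10 mm
```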
16.1), it was possible, by means of a MacLaurin series development, to obtain the basis and at the same the weighting function H Φa (r) = N ∑ n=1 a n r − n . Thus follows N ∑ n=1 a n d dr r − n = − J R 2 2 1 r 2 . The inclusion of the weighting function w m , which corresponds to the basis function, provides the weak formulation of Ampere’s law for the outer area of the conductor N ∑ n=1 a n 〈 w m , d dr r − n 〉 = 〈 w m , − J R 2 2 1 r 2 〉 N ∑ n=1 a n 〈 r − m , d dr r − n 〉 = 〈 r − m , − J R 2 2 1 r 2 〉 . 16.2.2 Transforming the system of equations into a matrix equation The weak form of Ampere’s law is obtained by integration over the domain Ω N ∑ n=1 a n ˆ Ω r − m d dr r − n dr ︸ ︷︷ ︸ T erm1 = ˆ Ω r − m − J R 2 2 1 r 2 ︸ ︷︷ ︸ T erm2 . Here Ω = r ∈ [R, ∞ ]. In the continuation, the development of the terms 1 and 2 takes place: • Term 1: The partial integration provides <?page no="296"?> 270 Galerkin method - Ampere’s law ˆ ∞ R r − m d dr r − n dr = [ r − m r − n ] ∞ R − ˆ ∞ R − m r − m − 1 r − n dr = r − m − n ∣∣∣ ∞ R − ˆ ∞ R − m r − m − n − 1 dr = r − m − n ∣∣∣ ∞ R − − m − m − n r − m − n ∣∣∣ ∞ R = 0 − R − m − n − ( 0 − − m − m − n R − m − n ) = ( − m − m − n − 1 ) R − m − n = − n − m − n R − m − n . • Term 2: The integration provides − J R 2 2 ˆ ∞ R r − m r − 2 = − J R 2 2 ˆ ∞ R r − m − 2 dr = − J R 2 2 1 − m − 1 r − m − 1 ∣∣∣ ∞ R = − J R 2 2 1 − m − 1 ( 0 − R − m − 1 ) = J R 2 2 1 − m − 1 R − m − 1 . This again follows the weak formulation with N ∑ n=1 a n − n − m − n R − m − n ︸ ︷︷ ︸ (l mn ) = J R 2 2 1 − m − 1 R − m − 1 ︸ ︷︷ ︸ (g m ) . 16.2.3 Solving the linear equation system The results of the weak form summation notation are converted to matrix notation for N = 3 and their matrix elements are summarised in tab. 16.1. Using the procedure in chap. 16.1.3, the solution of the magnetic field H Φa in the outer space of the conductor follows with H Φa (r) = a 1 · r − 1 + a 2 · r − 2 + a 3 · r − 3 = J R 2 2 · r − 1 + 0 · r − 2 + 0 · r − 3 . <?page no="297"?> 16.3 Comparison of FEM with Galerkin results 271 Table 16.1: Results of the matrix elements (l mn ) n m 1 2 3 (g m ) 1 1 2 R − 2 2 3 R − 3 3 4 R − 4 J 4 R 0 2 1 3 R − 3 2 4 R − 4 3 5 R − 5 J 6 R − 1 3 1 4 R − 4 2 5 R − 5 3 6 R − 6 J 8 R − 2 (a n ) J 2 R 2 0 0 The magnetic field strength at the position r = 10 mm calculated with the data of tab. 16.3, no. 7 is compared in tab. 16.2, no. 7. 16.3 Comparison of FEM with Galerkin results The results are compared on the basis of a selected conductor. For the comparison of results between numerical calculation according to Galerkin and the FEM software COMSOL Multiphysics, tab. 16.3, no. 2 was used as a reference. In tab. 16.2 both results are compared. A difference between the two results occurs within the scope of the numerical accuracy and the decimal places taken into account. The FEM results with COMSOL Multiphysics are shown below. In fig. 16.3 follows the corresponding magnetic field inside and outside the conductor. The maximum of the magnetic field strength is always on the surface of the conductor. The magnetic field strength of the conductor with a diameter of 5 mm can be read offat the conductor surface with 625 A/ m. Table 16.2: Comparison of results of the magnetic field strength H o at the conductor surface Nr. 
COMSOL Multiphysics    Traditional Galerkin
2    636.4 A/m    636.25 A/m
7    160.4 A/m    160 A/m

Figure 16.3: FEM simulation result of the magnetic field strength inside and outside the conductor rod with the conductor rod diameter as plot parameter

Table 16.3: Simulated data (conductor diameter d, conductor area A, current I, current density J, field strength H_o at the conductor surface)

No.    d [mm]    A [mm²]    I [A]    J [A/mm²]    H_o [A/m]
1      2         3.14       10       3.183        1592.0
2      5         19.63      10       0.509        636.4
3      8         50.27      10       0.199        398.0
4      11        95.03      10       0.105        289.6
5      14        153.93     10       0.065        227.8
6      17        226.98     10       0.044        188.0
7      20        314.16     10       0.032        160.4

Chapter 17 Galerkin-FEM

"The Galerkin finite-element method has been the most popular method of weighted residuals, used with piecewise polynomials of low degree, since the early 1970s." [33], p. 86.

17.1 Galerkin FEM - What is being solved?

The Galerkin FEM is used to solve differential equations of order two and higher. In the Galerkin FEM, a section-wise defined, linear weighting or test function is applied. In the literature this is often referred to as a shape, interpolation or triangular function according to fig. 17.1 b), which is also the simplest form. The test solution in a one-dimensional domain x_1 ≤ x ≤ x_N, for example, is given by the global equation valid for the entire domain

u_h = Σ_{i=1}^{N} u_i φ_i(x),   (17.1)

where φ_i(x) represents the triangular function and u_i the nodal values (coefficients) to be solved for. Fig. 17.1 a) shows that u_h interpolates the function u linearly between the unknown nodal values, and this for each element. Fig. 17.1 b) shows the linear decay from the value one at the respective node to the value zero at the two neighbouring nodes, and the value zero throughout the remaining domain. Over any one element, only two shape functions and two unknown nodal values provide a non-zero contribution to eq. (17.1). For example, over element 2, only the shape functions φ_2 and φ_3 contribute to u_h of eq. (17.1). Moreover, fig. 17.1 a) shows that u_h is continuous across the elements and that the derivative du_h/dx is discontinuous at the element edges. Furthermore, it can be seen that only the nodal values coincide with u. Consequently, an interpolation error is associated with the introduction of u_h.

Figure 17.1: Interpolation of finite elements by means of the triangular function

17.2 Galerkin-FEM - Procedure for the solution

The reader is introduced to the solution of a differential equation by means of the Galerkin method using a 1D example. Galerkin's method belongs to the class of methods of weighted residuals [33], p. 24. The necessary conditions of Galerkin's method are, according to [33], p. 30:

• The weighting function w is of the same class as the basis function φ.
• The weighting and basis functions are linearly independent in the Galerkin FEM.
• The basis function should exactly satisfy the initial as well as the boundary conditions.

The general procedure for using the Galerkin method to solve a partial differential equation is divided into the following steps:

1. transformation of the strong form of the partial differential equation to be solved into the weak form (weak formulation),
2. discretisation of the domain Ω to be solved into a finite number n of subdomains Ω_n with N nodes in the Galerkin FEM,
3. choice of basis and weighting functions,
4.
formulation of the weak form of the obtained differential equation by means of chosen basis and weighting function, 5. transformation of the equation into a matrix equation, 6. solving the obtained linear system of equations. <?page no="303"?> Chapter 18 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (I) Given is the ordinary inhomogeneous differential equation of 2 ′ th order Figure 18.1: Analytical solution u(x) of eq. (18.1) <?page no="304"?> 278 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (I) d 2 u(x) dx 2 + 1 = 0, x ∈ Ω (18.1) in the domain Ω = [0, 1] with Dirichlet boundary conditions u(0) = u(1) = 0 whose solution is the function u(x) u(x) = − 1 2 x 2 + 1 2 x = 1 2 ( x − x 2 ) is and their functional course can be taken from fig. 18.1. The differential equation is to be solved using the Galerkin FEM. 18.1 Weak formulation of the differential equation The eq. (18.1) is to be transformed into a matrix equation and solved with the help of linear weighting functions. The function u(x) of eq. (18.1) on the interval Ω = [a, b] = [0, 1] with u(x) = 0, x ∂ Ω is sought. The strong form of the eq. (18.1) is converted into the weak form for solution with the Galerkin method as follows ˆ Ω R w dx = � R, w � = 0. They are w = weighting or test function and R = residual. With R = d 2 u(x) dx 2 + 1 w = w(x) follows ˆ Ω ( d 2 u(x) dx 2 + 1 ) w(x) dx = 0 ˆ Ω d 2 u(x) dx 2 w(x) dx + ˆ Ω w(x) dx = 0. <?page no="305"?> 18.2 Discretisation of the domain Ω to be solved 279 With partial integration w(x) du(x) dx − ˆ Ω du(x) dx dw(x) dx dx + ˆ Ω w(x) dx = 0 and inclusion of Dirichlet boundary conditions at the outer nodes (edges) u(x) = w(x) = 0; x ∂Ω the first term of the equation according to chap. 1.3.5 is w(x) du(x) dx ∣∣∣∣ b a = 0. This is the case because the weighting functions only apply to the inner nodes and take the value zero at the outer nodes. From this follows the weak form of the differential equation (18.1) ˆ Ω du(x) dx dw(x) dx dx − ˆ Ω w(x) dx = 0. (18.2) 18.2 Discretisation of the domain Ω to be solved The interval [a, b] is divided into n subintervals Ω n with N nodes. For the graphical representation according to fig. 18.2 these are n = 5 subintervals (elements, subregions) with N = 6 nodes. The interval boundaries are set with x 0 = a and x 6 = b and are called outer nodes. 18.3 Choosing the base and weighting function Linear functions of the type straight line equations are selected, which are assigned to the individual nodes according to fig. 18.3. For reasons of illustration, this function is referred to in the literature as a triangular function. The triangular function is defined for a node at which it takes the value one. The triangular function has a discontinuity at this point, which requires a sectionor element-wise function definition. The function values of all neighbouring nodes are assigned the value zero. The advantage of this <?page no="306"?> 280 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (I) Figure 18.2: 1D-discretisation of the domain Ω into n subdomains Ω i type of straight line equations is their simplified derivatives, which yield a constant. Each inner node x i , i = 1, ..., N − 1 are assigned two straight line equations (triangular function) φ i (x) with the sectional definition φ i (x) = ⎧⎪⎪⎨⎪⎪⎩ m · x + b, x i − 1 < x ≤ x i − m · x + b, x i < x ≤ x i+1 0, others. 
The straight line slope m and the intercept b are determined from the boundary conditions x ∂x i − 1 = 0, x ∂x i = 1 as well as x ∂x i = 1 and x ∂x i+1 = 0, which is φ i (x) = ⎧⎪⎪⎨⎪⎪⎩ 1 x i − x i − 1 x − x i − 1 x i − x i − 1 = x − x i − 1 h , x i − 1 < x ≤ x i − 1 x i+1 − x i x + x i+1 x i+1 − x i = x i+1 − x h , x i < x ≤ x i+1 0, others. No basis functions are defined at the edges of the domain Ω. Thereby is h = Ω n the equidistant distance between two nodes (element length). In fig. 18.3, the triangular functions in the domain Ω are drawn in an approximate way. Looking at eq. (18.2), derivatives of the function become necessary. Thus, the section-wise derivatives of the triangular functions conclude with <?page no="307"?> 18.4 Formulation of the weak form with triangular functions φ(x) 281 Figure 18.3: Nodal assignment of the triangular functions φ(i) with their derivatives in the domain Ω dφ i (x) dx = ⎧⎪⎪⎨⎪⎪⎩ 1 h , x i − 1 < x ≤ x i − 1 h , x i < x ≤ x i+1 0, others. Triangular functions and their derivatives take the value zero with this definition outside their nodal allocations. 18.4 Formulation of the weak form with triangular functions φ(x) The Galerkin method implies that the basis function is equal to the weighting function. In the sequel, the triangular function φ(x) is used as the basis and weighting function. The function u(x) to be solved is given by the approximated approach function u h (x) for a 1D element with the two nodes x i and x i+1 u h (x) = 2 � i=1 u i φ i (x) = u 1 φ 1 (x) + u 2 φ 2 (x) w(x) = φ(x). Here u i is the variable to solve for and φ i (x) is the basis function and φ(x) is the weighting function. Substituting in eq. (18.2) it follows <?page no="308"?> 282 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (I) n ∑ i=1 [ ˆ Ω d(u i φ i (x)) dx dφ(x) dx dx ] − ˆ Ω φ(x) dx = 0 (18.3) and proceed by multiplication out ˆ Ω [ u 1 dφ 1 (x) dx dφ(x) dx + u 2 dφ 2 (x) dx dφ(x) dx ] dx = ˆ Ω φ(x) dx. According to Galerkin’s method, the base function is equal to the weighting function and these form a product. In this respect, the two derivatives of the identical and sectionally defined base and weighting functions must be multiplied together. This concerns in each case the multiplication of the derivatives of the rising and falling straight lines of the triangular functions. This results in two equations. To determine u 1 the remaining function φ(x) is assigned the function φ 1 (x) and to determine u 2 , φ(x) is assigned the function φ 2 (x). Thus the following applies to u 1 at node x i u 1 ˆ Ω dφ 1 dx dφ 1 dx dx + u 2 ˆ Ω dφ 2 dx dφ 1 dx dx = ˆ Ω φ 1 (x) dx and for u 2 at the node x i+1 u 1 ˆ Ω dφ 1 dx dφ 2 dx dx + u 2 ˆ Ω dφ 2 dx dφ 2 dx dx = ˆ Ω φ 2 (x) dx, or summarised in matrix notation ˆ Ω ( dφ 1 (x) dx dφ 1 (x) dx dφ 2 (x) dx dφ 1 (x) dx dφ 1 (x) dx dφ 2 (x) dx dφ 2 (x) dx dφ 2 (x) dx ) dx ( u 1 u 2 ) = ˆ Ω φ(x) dx ( 1 1 ) . (18.4) For the right term of the integration of the function over Ω, a distinction by means of indices 1, 2 is not necessary, since such a distinction will not affect the integral. 18.5 Transforming the system of equations into a matrix equation Each element is bounded by two nodes. In the process, the two node matrices are derived with the help of eq. (18.4), merged into an element matrix and this is converted for <?page no="309"?> 18.5 Transforming the system of equations into a matrix equation 283 all elements into a global matrix, the coefficient matrix. 
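Before working through the node-by-node integration, the complete procedure of this chapter can be condensed into a short script. The following MATLAB sketch is a minimal illustration and not the appendix code; it assembles the per-element contributions of the hat functions, ∫φ'_i φ'_j dx = ±1/h and ∫φ_i dx = h/2 (each inner node collects the contributions of its two neighbouring elements, which yields the node values 2/h and h derived below), imposes the Dirichlet conditions and solves the system:

n = 5;  h = 1/n;  x = (0:h:1).';       % five elements, six nodes (fig. 18.2)

S = zeros(n+1);  f = zeros(n+1,1);
for e = 1:n                            % loop over the elements
    idx = [e, e+1];                    % global node numbers of element e
    S(idx,idx) = S(idx,idx) + (1/h)*[1 -1; -1 1];   % int phi_i' phi_j' dx
    f(idx)     = f(idx)     + (h/2)*[1; 1];         % int phi_i dx
end

S([1 n+1],:) = 0;  S(1,1) = 1;  S(n+1,n+1) = 1;     % Dirichlet rows, u(0) = u(1) = 0
f([1 n+1])   = 0;

u = S \ f;                             % nodal values u_i
disp([x, u, 0.5*(x - x.^2)])           % numerical vs. analytical solution

The nodal values 0, 0.08, 0.12, 0.12, 0.08, 0 coincide with the analytical solution of fig. 18.1 at the nodes; the detailed derivation of the node, element and coefficient matrices follows below.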
Following Galerkin’s thought, the base and weighting functions are to be integrated via the derivative. Both are according to fig. 18.3 and require a section-wise integration of the derivative of the rising and falling edges of both triangular functions: • Node matrix of the inner node x i with weighting function φ = φ 1 : The inner node x i corresponds to the inner node x 1 of fig. 18.3. Thus the integration interval [x 0 , x 2 ] follows. With the help of eq. (18.4) follows u 1 ˆ x 1 x 0 dφ 1 dx dφ 1 dx dx + u 2 ˆ x 1 x 0 dφ 2 dx dφ 1 dx dx+u 1 ˆ x 2 x 1 dφ 1 dx dφ 1 dx dx + u 2 ˆ x 2 x 1 dφ 2 dx dφ 1 dx dx = ˆ Ω φ 1 dx, each for the derivatives of the rising and falling edges of the triangular functions φ 1 and φ 2 . The integration interval corresponds to the element length h, the subdomain Ω i . The derivation of the basis and weighting functions is done by considering fig. 18.3, with which leads to u 1 ˆ x 1 x 0 1 h 1 h dx + u 2 ˆ x 1 x 0 0 1 h dx+u 1 ˆ x 2 x 1 − 1 h − 1 h dx + u 2 ˆ x 2 x 1 1 h − 1 h dx = ˆ Ω φ 1 dx. In the second term, the derivative for φ 2 in the interval [x 0 , x 1 ] is not defined and therefore takes the value zero. A further summation leads to u 1 ˆ x 1 x 0 1 h 2 dx + 0 + u 1 ˆ x 2 x 1 1 h 2 dx + u 2 ˆ x 2 x 1 − 1 h 2 dx = ˆ Ω φ 1 dx. The integration is done term by term over the subintervals Ω i (element length h), or for the function φ 1 over the interval Ω with ˆ Ω i 1 h 2 dx = ˆ h 0 1 h 2 dx = 1 h 2 ˆ h 0 dx = 1 h 2 x ∣∣∣∣ h 0 = 1 h ˆ Ω i − 1 h 2 dx = ˆ h 0 − 1 h 2 dx = − 1 h 2 ˆ h 0 dx = − 1 h 2 x ∣∣∣∣ h 0 = − 1 h ˆ Ω φ 1 dx = h. <?page no="310"?> 284 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (I) To simplify, the lower integration limit of the interval Ω i is assumed to be zero and the upper integration limit is assumed to be h. By inserting the integration results, the node matrix follows u 1 1 h + 0 + u 1 1 h − u 2 1 h = h 1 h ( 2 − 1 )( u 1 u 2 ) = h (18.5) for the node x 1 . • Node matrix of the inner node x i+1 with weighting function φ = φ 2 : The inner node x i+1 corresponds to the inner node x 2 of fig. 18.3. From this follows the integration interval [x 1 , x 3 ]. With the help of eq. (18.4) follows u 1 ˆ x 2 x 1 dφ 1 dx dφ 2 dx dx + u 2 ˆ x 2 x 1 dφ 2 dx dφ 2 dx dx+u 1 ˆ x 3 x 2 dφ 1 dx dφ 2 dx dx + u 2 ˆ x 3 x 2 dφ 2 dx dφ 2 dx dx = ˆ Ω φ 2 dx the identical procedure as for the node x 1 . The derivation of the basis and weighting functions follows from the observation of fig. 18.3, with this u 1 ˆ x 2 x 1 − 1 h 1 h dx + u 2 ˆ x 2 x 1 1 h 1 h dx+u 1 ˆ x 3 x 2 0 − 1 h dx + u 2 ˆ x 3 x 2 − 1 h − 1 h dx = ˆ Ω φ 2 dx is achieved. In the third term, the derivative for φ 1 in the interval [x 2 , x 3 ] is not defined and therefore takes the value zero. A further summation leads to u 1 ˆ x 2 x 1 − 1 h 2 dx + u 2 ˆ x 2 x 1 1 h 2 dx + 0 + u 2 ˆ x 3 x 2 1 h 2 dx = ˆ Ω φ 2 dx. The integration is done section by section over the subintervals as for the node x 1 . By inserting the integration results, the node matrix equation follows u 1 − 1 h + u 2 1 h + 0 + u 2 1 h = h 1 h ( − 1 2 )( u 1 u 2 ) = h (18.6) <?page no="311"?> 18.6 Solving the linear equation system 285 for the node x 2 . • Element matrix: Representing the element bounded by the interior nodes x 1 and x 2 with the inclusion of the node matrix equations (18.5) and (18.6), the element matrix and consequently the element matrix equation 1 h � 2 − 1 − 1 2 � · � u 1 u 2 � = h � 1 1 � is created. 
• Coefficient matrix: The individual element equations of the inner nodes x 1 to x 4 are represented in the global matrix (coefficient matrix) S by expansion 1 h ⎛⎜⎜⎜⎜⎝ 2 − 1 0 0 − 1 2 − 1 0 0 − 1 2 − 1 0 0 − 1 2 ⎞⎟⎟⎟⎟⎠ � �� � S · ⎛⎜⎜⎜⎜⎝ u 1 u 2 u 3 u 4 ⎞⎟⎟⎟⎟⎠ � �� � u h = h ⎛⎜⎜⎜⎜⎝ 1 1 1 1 ⎞⎟⎟⎟⎟⎠ � �� � f are summarised in the coefficient matrix equation. It remains to consider the Dirichlet boundary conditions of the outer nodes. In the assumed example equation, the strong form of eq. (18.1) was achieved by differentiating eq. (18.1) twice. It is easy to see that this also resulted in the loss of information, which must be added back as boundary conditions in the sequel in order to be able to follow the course of eq. (18.1) in fig. 18.1. 18.6 Solving the linear equation system The system of linear equations thus obtained 1 h S u h = h f <?page no="312"?> 286 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (I) Figure 18.4: Visualisation of results using the Galerkin method is solved after S − 1 (S u h ) = h 2 S − 1 f � S − 1 S � � �� � E u h = h 2 S − 1 f u h = h 2 S − 1 f = h 2 ⎛⎜⎜⎜⎜⎝ 2 3 3 2 ⎞⎟⎟⎟⎟⎠ . The system of equations is to be expanded by including the Dirichlet boundary conditions <?page no="313"?> 18.6 Solving the linear equation system 287 Figure 18.5: Comparison of analytical and numerical results ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 1 0 0 0 0 0 − 1 2 -1 0 0 0 0 -1 2 -1 0 0 0 0 -1 2 -1 0 0 0 0 -1 2 − 1 0 0 0 0 0 1 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ � �� � S · ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ u 0 u 1 u 2 u 3 u 4 u 5 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ � �� � u h = h 2 ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 0 1 1 1 1 0 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ � �� � f . (18.7) With this procedure, any boundary conditions can be specified in the column vector f, such as the function value zero at the two outer nodes. Thus the values of the inner nodes at the points x i follow u h = 4 � i=1 u i (x) φ i (x) = h 2 (2 φ 1 + 3 φ 2 + 3 φ 3 + 2 φ 4 ) . <?page no="314"?> 288 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (I) The basis functions take the value one at the points x i . Element length, boundary conditions and results of the numerical approximation to the course of eq. (18.1) are summarised in tab. 18.1 and summarised in fig. 18.5 graphically. The fig. 18.4 shows the graphical result interpretation of the Galerkin method. The Dirichlet boundary conditions u(x 0 ) = u(x 5 ) = 0 are not drawn in the function plot. Table 18.1: Numerical results of eq. u(x) = 1 2 (x − x 2 ) Element length h: 0.2 Dirichlet condition left edge 0 Dirichlet condition right edge 0 Node number i 0 1 2 3 4 5 Location x i 0 0.2 0.4 0.6 0.8 1.0 u(x i ) 0 0.08 0.12 0.12 0.08 0 <?page no="315"?> Chapter 19 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (II) Given is the 2 ′ th order ordinary differential equation d 2 u(x) dx 2 + 1 = 0, x ∈ Ω (19.1) in the domain Ω = [0, 4] with boundary conditions u(0) = u(4) = 0, whose solution is the quadratic equation (downward open parabola) with vertex S P (2 | 2) u(x) = − 1 2 x 2 + 2 x (19.2) with zeros a = 0 and b = 4. In fig. 19.1 the exact curve of the equation can be seen. The differential equation is solved using Galerkin FEM. The procedure for the solution is carried out according to the instructions given in chap. 17.2 and is identical to the procedure in chap. 18. By differentiating the equation (19.2) twice, the partial differential equation of 2 ′ th order (Poisson’s differential equation) follows in the strong formulation d 2 u(x) dx 2 + 1 = 0, x ∈ Ω (19.3) u(x) = 0, x ∂Ω. (19.4) It is easy to see that eq. (19.3) is again of the type of eq. 
(18.1), and can only be determined exactly by the boundary conditions. In the sequel, the function u(x) on the interval Ω = [a, b] = [0, 4] is sought. <?page no="316"?> 290 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (II) Figure 19.1: Analytical solution u(x) of eq. (19.1) 19.1 Weak formulation of the differential equation The strong form of eq. (19.3) is transformed into the weak form for solution with the Galerkin method as follows ˆ Ω w R dx = � w, R � = 0. It is w the weighting or test function and R the residual. From R = d 2 u(x) dx 2 + 1 w = w(x) it follows ˆ Ω ( d 2 u(x) dx 2 + 1 ) w(x) dx = 0 <?page no="317"?> 19.2 Discretisation of the domain Ω to be solved 291 ˆ Ω d 2 u(x) dx 2 w(x) dx + ˆ Ω w(x) dx = 0. The application of partial integration including the homogeneous boundary conditions and multiplication by (-1) leads to ˆ Ω du(x) dx dw(x) dx dx − ˆ Ω w(x) dx = 0, the weak form of eq. (19.3). 19.2 Discretisation of the domain Ω to be solved The discretisation is done according to fig. 18.2. The domain Ω or interval [a, b] is again discretised into five subintervals Ω n with six nodes, where x 0 and x 5 form the outer nodes. 19.3 Choosing the base and weighting function The choice of the basis and weighting functions is the same as in chap. 18.3 chosen triangular functions. The functions and their derivatives are defined section by section. See also fig. 18.3. 19.4 Formulation of the weak form with triangular functions φ(x) Using the procedure according to chap. 18.4, the weak form of the differential equation follows in matrix notation according to eq. (18.4). 19.5 Transforming the system of equations into a matrix equation Here it is sufficient to apply the chapter 18.5. The two node matrices are created, merged into an element matrix and then the coefficient matrix is created. <?page no="318"?> 292 Galerkin-FEM - solution of d 2 u/ dx 2 = − 1 (II) 19.6 Solving the linear equation system From the procedure described in chap. 18.6, the eq. (18.7) follows again. The required values and results are given in tab. 19.1. The outer nodes receive the value zero according to the boundary conditions. The inner nodes remain with u h = 4 ∑ i=1 u i (x) φ i (x) = h 2 (2 φ 1 + 3 φ 2 + 3 φ 3 + 2 φ 4 ) . Figure 19.2: Presentation of results using the Galerkin method In Abb. 19.2 ist das Ergebnis grafisch mittels Dreiecksfunktionen (Basis- und Wichtungsfunktionen) dargestellt. In Abb. 19.3 ist die Gegen¨ uberstellung zwischen analytisch und numerisch errechnetem Ergebnis ersichtlich. <?page no="319"?> 19.6 Solving the linear equation system 293 Table 19.1: Numerical results of eq. u(x) = − 1 2 x 2 + 2x Element length h: 0.8 Dirichlet condition left edge 0 Dirichlet condition right edge 0 Node number i 0 1 2 3 4 5 Location x i 0 0.8 1.6 2.4 3.2 4.0 u(x i ) 0 1.28 1.92 1.92 1.28 0 Figure 19.3: Comparison of analytical and numerical results <?page no="321"?> Chapter 20 Galerkin-FEM - Electrostatic field calculation The electrostatic field of a plate capacitor according to fig. 20.2 a) shall be calculated by the Poisson’s differential equation ∇ 2 ϕ = − ρ ε . A voltage of U c = 100 V is applied to the capacitor. The procedure is based on chap. 12. Subsequently, the electrostatic field is calculated by gradient formation of the potential ϕ. 20.1 Weak formulation of the differential equation To calculate the electrostatic field, the differential equation in fig. 9.3 c), shown in its strong form, is solved for the potential. By rearranging it follows ∇ (ε ∇ ϕ) ︸ ︷︷ ︸ =D + ρ = 0 = R. 
By weighting with the function w of the residual and integration over the domain Ω (plate spacing) it follows <?page no="322"?> 296 Galerkin-FEM - Electrostatic field calculation ˆ Ω w R dx = 0 ˆ Ω w [ ∇ (ε ∇ ϕ) + ρ] dx = 0 ˆ Ω w ( ∂ϕ 2 ∂x 2 + ρ ε ) dx = 0 ˆ Ω w ( ∂ϕ 2 ∂x 2 ) dx + ˆ Ω w ( ρ ε ) dx = 0. By partial integration of the first term of the left half of the equation, the weak formulation of the differential equation follows w dϕ dx − ˆ Ω ( dϕ dx dw dx ) dx + ˆ Ω w ( ρ ε ) dx = 0 w ∇ ϕ − ˆ Ω ( ∇ ϕ ∇ w) dx + ˆ Ω w ( ρ ε ) dx = 0. (20.1) 20.2 Discretisation of the domain Ω to be solved The discretisation of the domain Ω is done according to fig. 18.2. The element length is h = 2 mm. 20.3 Choosing the base and weighting function The basis and weighting functions are chosen according to chap. 18.3 triangular functions φ(x) are chosen. 20.4 Formulation of the weak form with triangular functions φ(x) The procedure is based on chap. 18.4. The domain Ω has meanwhile been divided into n = 5 subdomains (elements) Ω n and the triangular function has been assigned to the weighting and basis function. For the first term of the eq. (20.1), according to chap. 1.3.5, the left and right boundary must be used as boundary conditions for the determined integral. At the boundaries the triangular functions and thus also the term <?page no="323"?> 20.4 Formulation of the weak form with triangular functions φ(x) 297 assume the value zero. The calculation is reduced to the inner nodes. By summation over all sub-areas it follows n ∑ i=1 [ ˆ Ω ( ∇ ϕ ∇ φ(x)) dx − ρ ε ˆ Ω φ(x) dx ] = 0. Including the approach function ϕ h (x) = 2 ∑ i=1 ϕ i φ i (x) = ϕ 1 φ 1 (x) + ϕ 2 φ 2 (x), and rearrangement and insertion provides ˆ Ω [ ϕ 1 dφ 1 (x) dx dφ(x) dx + ϕ 2 dφ 2 (x) dx dφ(x) dx ] dx = ρ ε ˆ Ω φ(x) dx ︸ ︷︷ ︸ Source term . To determine ϕ 1 , the function φ(x) is allocated the function φ 1 (x) and for determination of ϕ 2 the function φ 2 (x) is allocated to φ(x). For ϕ 1 at node x 1 it follows ˆ Ω [ ϕ 1 dφ 1 (x) dx dφ 1 (x) dx + ϕ 2 dφ 2 (x) dx dφ 1 (x) dx ] dx = ρ ε ˆ Ω φ 1 (x) dx and for ϕ 2 at node x 2 follows ˆ Ω [ ϕ 1 dφ 1 (x) dx dφ 2 (x) dx + ϕ 2 dφ 2 (x) dx dφ 2 (x) dx ] dx = ρ ε ˆ Ω φ 2 (x) dx. Summarised in matrix notation, the equation is as follows: ˆ Ω ( dφ 1 (x) dx dφ 1 (x) dx dφ 2 (x) dx dφ 1 (x) dx dφ 1 (x) dx dφ 2 (x) dx dφ 2 (x) dx dφ 2 (x) dx ) dx ( ϕ 1 ϕ 2 ) = ρ ε ˆ Ω φ(x) dx ( 1 1 ) . For the right term of the equation, a differentiation of the function φ by means of indices is not necessary, since it has no influence on the integration result. <?page no="324"?> 298 Galerkin-FEM - Electrostatic field calculation 20.5 Transforming the system of equations into a matrix equation The procedure is based on chap. 18.5. With the weak form of the differential equation followed the nodal equations of an element, which is transferred into the element matrix, the coefficient matrix and finally into the linear equation system: • Node matrix of the first inner node x i with φ(x) = φ 1 (x): This is followed by the interval [x 0 , x 2 ] ϕ 1 ˆ Ω ( dφ 1 (x) dx dφ 1 (x) dx ) dx + ϕ 2 ˆ Ω ( dφ 2 (x) dx dφ 1 (x) dx ) dx = ρ ε ˆ Ω φ(x) dx and by adapting the integration interval ϕ 1 ˆ x 1 x 0 dφ 1 dx dφ 1 dx dx + ϕ 2 ˆ x 1 x 0 dφ 2 dx dφ 1 dx dx+ϕ 1 ˆ x 2 x 1 dφ 1 dx dφ 1 dx dx + ϕ 2 ˆ x 2 x 1 dφ 2 dx dφ 1 dx dx = ρ ε ˆ Ω φ dx. Taking into account the conditions of chap. 18.3 follows according to the procedures of chap. 
18.5 ϕ 1 ˆ x 1 x 0 1 h 1 h dx + ϕ 2 ˆ x 1 x 0 0 1 h dx+ϕ 1 ˆ x 2 x 1 − 1 h − 1 h dx + ϕ 2 ˆ x 2 x 1 1 h − 1 h dx = ρ ε ˆ Ω φ dx. In the second term, the derivative for φ 2 in the interval [x 0 , x 1 ] is not defined and therefore takes the value zero. A further summary leads to ϕ 1 ˆ x 1 x 0 1 h 2 dx + 0 + ϕ 1 ˆ x 2 x 1 1 h 2 dx + ϕ 2 ˆ x 2 x 1 − 1 h 2 dx = ρ ε ˆ Ω φ dx. After term-wise integration and conversion, the node matrix for the node x 1 follows ϕ 1 1 h + 0 + ϕ 1 1 h − ϕ 2 1 h = ρ ε h 1 h ( 2 − 1 )( ϕ 1 ϕ 2 ) = ρ ε h. <?page no="325"?> 20.5 Transforming the system of equations into a matrix equation 299 • Node matrix of the second inner node x i+1 with φ(x) = φ 2 (x): This is followed by the interval [x 1 , x 3 ] ϕ 1 ˆ Ω ( dφ 1 (x) dx dφ 2 (x) dx ) dx + ϕ 2 ˆ Ω ( dφ 2 (x) dx dφ 2 (x) dx ) dx = ρ ε ˆ Ω φ(x) dx and by adjusting the integration interval ϕ 1 ˆ x 2 x 1 dφ 1 dx dφ 2 dx dx + ϕ 2 ˆ x 2 x 1 dφ 2 dx dφ 2 dx dx+ϕ 1 ˆ x 3 x 2 dφ 1 dx dφ 2 dx dx + ϕ 2 ˆ x 3 x 2 dφ 2 dx dφ 2 dx dx = ρ ε ˆ Ω φ dx. As before, taking into account the conditions of chap. 18.3 and following the procedure in chap. 18.5 ϕ 1 ˆ x 2 x 1 − 1 h 1 h dx + ϕ 2 ˆ x 2 x 1 1 h 1 h dx+ϕ 1 ˆ x 3 x 2 0 − 1 h dx + ϕ 2 ˆ x 3 x 2 − 1 h − 1 h dx = ρ ε ˆ Ω φ dx. In the third term, the first derivative for φ 2 in the interval [x 0 , x 1 ] is not defined and therefore takes the value zero. A further summary leads to ϕ 1 ˆ x 2 x 1 − 1 h 2 dx + ˆ x 2 x 1 1 h 2 dx + 0 + ϕ 2 ˆ x 3 x 2 1 h 2 dx = ρ ε ˆ Ω φ dx. After term-wise integration and conversion the node matrix for the node x 2 follows ϕ 1 − 1 h + ϕ 2 1 h + 0 + ϕ 2 1 h = ρ ε h 1 h ( − 1 2 )( ϕ 1 ϕ 2 ) = ρ ε h. • Element matrix: The nodal equations of the two inner nodes x i and x i+1 are given as the element equation <?page no="326"?> 300 Galerkin-FEM - Electrostatic field calculation 1 h � 2 − 1 − 1 2 � · � ϕ 1 ϕ 2 � = ρ ε h � 1 1 � . • Coefficient matrix: All nodal equations are combined in the coefficient matrix to form a linear system of equations (coefficient matrix equation) ⎛⎜⎜⎜⎜⎝ 2 − 1 0 0 − 1 2 − 1 0 0 − 1 2 − 1 0 0 − 1 2 ⎞⎟⎟⎟⎟⎠ · ⎛⎜⎜⎜⎜⎝ ϕ 1 ϕ 2 ϕ 3 ϕ 4 ⎞⎟⎟⎟⎟⎠ = ρ h 2 ε ⎛⎜⎜⎜⎜⎝ 1 1 1 1 ⎞⎟⎟⎟⎟⎠ . • According to the task, a voltage is applied to the capacitor, which is taken into account by the Dirichlet boundary conditions (elliptical differential equation). The source term ρ is therefore to be set equal to zero in order to avoid overdetermination. Including the Dirichlet boundary conditions (ϕ(x 0 ) = 0 V, ϕ(x 5 ) = 100 V ) the linear system of equations follows with ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 1 0 0 0 0 0 − 1 2 − 1 0 0 0 0 − 1 2 − 1 0 0 0 0 − 1 2 − 1 0 0 0 0 − 1 2 − 1 0 0 0 0 0 1 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ · ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ ϕ 0 ϕ 1 ϕ 2 ϕ 3 ϕ 4 ϕ 5 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 0 0 0 0 0 100 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ . 20.6 Solving the linear equation system The solution of the linear system of equations is carried out according to chap. 18.6. The result vector of the potential for all nodes is ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ ϕ 0 ϕ 1 ϕ 2 ϕ 3 ϕ 4 ϕ 5 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 0 20 40 60 80 100 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ <?page no="327"?> 20.6 Solving the linear equation system 301 and for the inner nodes follows ϕ h = 4 ∑ i=1 ϕ i (x) φ i (x) = 20 φ 1 + 40 φ 2 + 60 φ 3 + 80 φ 4 . In tab. 20.1 summarises the results. In fig. 20.1 is the graphical representation of the solution including the four basis functions and Dirichlet boundary conditions. Figure 20.1: Result presentation of the potential curve using the Galerkin method In fig. 
20.2 a) the plate capacitor is shown with the electrostatic field and the potentials of the inner nodes ϕ 1 to ϕ 4 . Dirichlet boundary conditions were applied to the outer nodes ϕ 0 = 0 V and ϕ 5 = 100 V . In fig. 20.2 b) the result of the potential curve versus the plate spacing (x-axis) is shown. From the potential curve of fig. 20.2 b), the electrostatic field is calculated with � E = − grad ϕ = − dϕ dx �n = − 20 V 2 mm = − 10 kV / m. For the sample, the calculation of the voltage U c across the capacitor is done by integration along the plate spacing with <?page no="328"?> 302 Galerkin-FEM - Electrostatic field calculation Table 20.1: Numerical results of the potential curve Element length h [mm] 2 Dirichlet boundary conditions Left edge ϕ 0 (x = 0) [V] 0 Right edge ϕ 5 (x = 10mm) [V] 100 Node number i 0 1 2 3 4 5 Location x i [mm] 0 2 4 6 8 10 ϕ h (x i ) [V ] 0 20 40 60 80 100 Figure 20.2: Plate capacitor and potential profile ˆ x5 x0 � E d�l = 10 kV / m · 10 mm = 100 V. <?page no="329"?> Chapter 21 Galerkin-FEM - heat diffusion The heat transfer through bodies is described by means of the heat diffusion equation. This is characterised by a time derivative and two spatial derivatives. For an illustrative interpretation of the one-dimensional heat diffusion, see fig. 21.1. A body with the thermal conductivity λ made of copper is heated to 100 ◦ C on one side at the front surface. The heat flow spreads only in the x direction. For example, at a selected location on the x-axis, the temperature increases with increasing time t. The local temperature distribution in the body, which is symbolised by the dark arrows in the bar, is looked for in the progression at a time t. 21.1 Weak formulation of the differential equation The one-dimensional heat diffusion equation according to fig. 9.3, b2) is transformed into Poisson’s differential equation of the form d 2 υ(x) dx 2 = ρ c λ dυ dt ︸ ︷︷ ︸ K (21.1) d 2 υ(x) dx 2 = K, x ∈ Ω with the boundary conditions υ(x 0 ) and υ(x 5 ). The solution of the differential equation is carried out for an assumed dυ/ dt near an assumed time step t, followed by the formation of the inner product by integration of the weighted residual over the domain Ω <?page no="330"?> 304 Galerkin-FEM - heat diffusion Figure 21.1: Example of a one-dimensional heat diffusion process ˆ Ω R w dx = 0 ˆ Ω ( d 2 υ(x) dx 2 − K ) w dx = 0 ˆ Ω d 2 υ(x) dx 2 w dΩ − K ˆ Ω w dx = 0. After partial integration of the first term the weak form of the diffusion equation follows w dυ(x) dx − ˆ Ω dυ(x) dx dw dx dx − K ˆ Ω w dx = 0 w ∇ υ(x) − ˆ Ω ∇ υ(x) ∇ w dx − K ˆ Ω w dx = 0. (21.2) <?page no="331"?> 21.2 Discretisation of the domain Ω to be solved 305 21.2 Discretisation of the domain Ω to be solved The discretisation of the domain Ω is done according to fig. 18.2. The element length is h = 10 mm. 21.3 Choosing the base and weighting function Triangular functions are defined as basis and weighting functions according to chap. 18.3. 21.4 Formulation of the weak form with triangular functions φ(x) The procedure is based on chap. 18.4. The domain Ω has meanwhile been divided into n = 5 subdomains (elements) Ω n . The weighting function and the basis function were equated with the triangular function φ(x). Due to the boundary conditions, the first term of eq. (21.2) is set equal to zero, since the triangular functions at the boundary are zero. This reduces the calculation to the inner nodes. 
Including the approach function υ h (x) = 2 ∑ i υ i φ i (x) = υ 1 φ 1 (x) + υ 2 φ 2 (x) and substituting into eq. (21.2) with subsequent transformation follows ˆ Ω [ υ 1 dφ 1 dx dφ dx + υ 2 dφ 2 dx dφ dx ] dx = K ˆ Ω φ(x) dx ︸ ︷︷ ︸ Source term . To determine the temperatures υ 1 and υ 2 at the nodes x 1 and x 2 the matrix notation ˆ Ω ( dφ 1 (x) dx dφ 1 (x) dx dφ 2 (x) dx dφ 1 (x) dx dφ 1 (x) dx dφ 2 (x) dx dφ 2 (x) dx dφ 2 (x) dx ) dx ( υ 1 υ 2 ) = K ˆ Ω φ(x) dx ( 1 1 ) can again be used. <?page no="332"?> 306 Galerkin-FEM - heat diffusion Table 21.1: Material data/ coefficients/ boundary conditions Specifications for copper material Density ρ [kg/ m 3 ] 8933 Spec. heat capacity c [J/ (kgK)] 383 Thermal conductivity λ [W/ (mK)] 384 ρ c/ λ [s/ m 2 ] 8937 dυ/ dt [K/ s]; (48.41-51.61) ◦ C/ 0.5 s, bei t= 4 s -6.4 K [K/ m 2 ] -57,196.8 h [m] 0.01 K h 2 [K] -5.72 Dirichlet boundary conditions υ(x 0 ) [ ◦ C] 100 υ(x 5 ) [ ◦ C] 20 21.5 Transforming the system of equations into a matrix equation The procedure is based on chap. 18.5. • Node matrix of the first inner node x i with φ(x) = φ 1 (x): This is followed by the interval [x 0 , x 2 ] 1 h ( 2 − 1 ) · ( υ 1 υ 2 ) = K h. • Node matrix of the second inner node x i+1 with φ(x) = φ 2 (x): This is followed by the interval [x 1 , x 3 ] 1 h ( − 1 2 ) · ( υ 1 υ 2 ) = K h. • Element matrix: The two inner nodes x i and x i+1 are summarised as the element equation <?page no="333"?> 21.6 Solving the linear equation system 307 1 h � 2 − 1 − 1 2 � · � υ 1 υ 2 � = K h � 1 1 � . • Coefficient matrix: The two nodal equations are combined in the coefficient matrix to form a linear system of equations ⎛⎜⎜⎜⎜⎝ 2 − 1 0 0 − 1 2 − 1 0 0 − 1 2 − 1 0 0 − 1 2 ⎞⎟⎟⎟⎟⎠ � �� � S · ⎛⎜⎜⎜⎜⎝ υ 1 υ 2 υ 3 υ 4 ⎞⎟⎟⎟⎟⎠ � �� � υ h = K h 2 ⎛⎜⎜⎜⎜⎝ 1 1 1 1 ⎞⎟⎟⎟⎟⎠ � �� � f . • Including the Dirichlet boundary conditions according to tab. 21.1, the system of equations for the inner and outer nodes follows again ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 1 0 0 0 0 0 − 1 2 − 1 0 0 0 0 − 1 2 − 1 0 0 0 0 − 1 2 − 1 0 0 0 0 − 1 2 − 1 0 0 0 0 0 1 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ · ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ ϕ 0 ϕ 1 ϕ 2 ϕ 3 ϕ 4 ϕ 5 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 100 − 5.72 − 5.72 − 5.72 − 5.72 20 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ . 21.6 Solving the linear equation system The solution of the linear system of equations is done according to chap. 18.6 by setting up the matrix equation, followed by the transformation according to the temperature υ h <?page no="334"?> 308 Galerkin-FEM - heat diffusion S υ = f S − 1 (S υ h ) = S − 1 f � S − 1 S � � �� � E υ h = S − 1 f υ h = S − 1 f = ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 100.0 72.6 50.8 34.8 24.6 20.0 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ . Figure 21.2: MATLAB result of a one-dimensional heat diffusion process For the inner nodes it follows υ h = 4 � i=1 υ i (x) φ i (x) = 72.6 φ 1 + 50.8 φ 2 + 34.8 φ 3 + 24.6 φ 4 . <?page no="335"?> 21.6 Solving the linear equation system 309 In tab. 21.1 shows the necessary data for the calculation. The material data were taken from the tables in [46], the data for the calculation of the coefficient K from fig. 21.3. In fig. 21.2 is the MATLAB result of a temperature distribution over place and time, created with the PDE toolbox. The corresponding MATLAB code can be found in the appendix A.5. The temperature vs. location transition was shown in fig. 21.3. This plot allows a comparison with the results obtained above. The deviations are due to rounding and reading accuracy (MATLAB data cursor is not exactly on the node). In addition, dυ/ dt was read at a mean distance (19.8 mm). In tab. 21.2 the results are compared. 
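The Galerkin column of this comparison can be reproduced with a few lines of code. The following MATLAB sketch is only an illustration with the data of tab. 21.1; it solves the reduced system of the four inner nodes, in which the Dirichlet temperatures enter the right-hand side, and is equivalent to the bordered system given above:

Kh2 = -5.72;                  % source term K*h^2 at the assumed time step (tab. 21.1)
S = diag(2*ones(4,1)) + diag(-ones(3,1),1) + diag(-ones(3,1),-1);
f = Kh2*ones(4,1);
f(1) = f(1) + 100;            % Dirichlet temperature at the left edge, 100 degC
f(4) = f(4) + 20;             % Dirichlet temperature at the right edge, 20 degC

theta = S \ f;                % temperatures of the inner nodes x_1 ... x_4
disp(theta.')                 % approx [72.6  50.8  34.8  24.6] degC

Up to rounding, the values agree with the Galerkin row of tab. 21.2.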
Figure 21.3: MATLAB result of a one-dimensional heat diffusion process with time t as the plot parameter In fig. 21.4 the basis functions including their approach functions of the inner nodes over the domain (Ω) are shown schematically. The Dirichlet boundary conditions are embodied by the outer nodes (υ 0 , υ 5 ). A comparison of the results obtained with MATLAB and Galerkin methods is shown in fig. 21.5. The required value pairs were taken from tab. 21.2. <?page no="336"?> 310 Galerkin-FEM - heat diffusion Figure 21.4: Graphical result display of the local temperature distribution 21.7 Diffusion process completed The diffusion process is considered complete when the temperature change over time of the right term of eq. (21.1) for t → ∞ ρ c λ dυ dt → 0 tends towards zero and thus a stationary state occurs. The Poisson differential equation is thus transformed into the Laplace differential equation. The linear system of Table 21.2: Comparison of the results of the temperature and location curve with the help of fig. 21.3 Node number i 0 1 2 3 4 5 Location x i [mm] 0 10 20 30 40 50 υ h [ ◦ C] bei t = 4 s; Galerkin 100 72.6 50.8 34.8 24.6 20 υ [ ◦ C] bei t = 4 s; MATLAB 100 73.9 51.6 33.8 22.8 20 υ h [ ◦ C] bei t = ∞ s; Galerkin 100 84 68 52 36 20 <?page no="337"?> 21.7 Diffusion process completed 311 Figure 21.5: Result comparison equations becomes ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 1 0 0 0 0 0 − 1 2 − 1 0 0 0 0 − 1 2 − 1 0 0 0 0 − 1 2 − 1 0 0 0 0 − 1 2 − 1 0 0 0 0 0 1 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ · ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ υ 0 υ 1 υ 2 υ 3 υ 4 υ 5 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 100 0 0 0 0 20 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ . The solution gives the local temperature distribution of the steady state ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ υ 0 υ 1 υ 2 υ 3 υ 4 υ 5 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 100 84 68 52 36 20 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ . The steady state is shown in fig. 21.5 and in tab. 21.2. <?page no="339"?> Chapter 22 Galerkin-FEM - magnetic field diffusion In fig. 22.1 a) the section of a pot armature magnetic circuit, consisting of armature, yoke, return spring, spring preload tube, winding carrier and winding, can be seen. During the switch-on process, the closed B field lines penetrate the pole faces from the inside to the outside. Further consideration is referred to the cut-out in the inner pole of fig. 22.1 b). As a rule, ferromagnetic materials with a non-linear B(H) characteristic are used for electromagnets. In the sequel, however, a material with a linear B(H)characteristic is assumed. The linear paramagnetic material copper was chosen as the material for the body in order to obtain a constant permeability. A high flux density is applied to the left edge of the pole section. The intensity of the flux density decreases in radial extension and reaches a minimum at the inner radius of the inner pole (right edge). The flux density thus spreads out normal to the direction of flux (transversal spreading). This diffusion process is also subject to change over time. 22.1 Weak formulation of the differential equation The one-dimensional field diffusion equation was taken from fig. 9.3 b). Since in the sequel only one dimension of the vector � B is considered, the vector is equal to its magnitude B. The one-dimensional field diffusion equation is written in the form <?page no="340"?> 314 Galerkin-FEM - magnetic field diffusion Figure 22.1: Example of a one-dimensional magnetic field diffusion d 2 B(x) dx 2 = μ 0 κ dB dt ︸ ︷︷ ︸ K d 2 B dx 2 = K, x ∈ Ω and the boundary conditions B(x 0 ), B(x 5 ) are assigned to it. The solution is carried out for an assumed dB/ dt. 
The integration over the residual weighted with w follows ˆ Ω R w dx = 0 ˆ Ω ( d 2 B(x) dx 2 − K ) w dx = 0 ˆ Ω d 2 B(x) dx 2 w dx − K ˆ Ω w dx = 0. After partial integration of the first term the weak form of the field diffusion equation follows w dB(x) dx − ˆ Ω dB(x) dx w dx dx − K ˆ Ω w dx = 0 w ∇ B(x) − ˆ Ω ∇ B(x) ∇ w dx − K ˆ Ω w dx = 0. (22.1) <?page no="341"?> 22.2 Discretisation of the domain Ω to be solved 315 22.2 Discretisation of the domain Ω to be solved The discretisation of the domain Ω is done according to fig. 18.2. The element length is h = 2 mm. 22.3 Choosing the base and weighting function The basis functions are defined according to chap. 18.3. 22.4 Formulation of the weak form with triangular functions φ(x) The procedure is based on chap. 18.4. The domain Ω has meanwhile been divided into n = 5 subdomains (elements) Ω n and the weighting function w together with the basis function has been replaced by the triangular function φ(x). Due to the boundary conditions, the first term of eq. (22.1) is set equal to zero, since the basis functions at the boundary are equal to zero. This reduces the calculation to the inner nodes. With the inclusion of the basis function B h (x) = 2 ∑ i=1 B i φ i (x) = B 1 φ 1 (x) + B 2 φ 2 (x) follows by substitution and rearrangement ˆ Ω [ B 1 dφ 1 dx dφ dx + B 2 dφ 2 dx dφ dx ] dx = K ˆ Ω φ i (x) dx ︸ ︷︷ ︸ Source term . For the determination of the flux density B 1 and B 2 at the nodes x 1 and x 2 , the matrix notation ˆ Ω ( dφ 1 (x) dx dφ 1 (x) dx dφ 2 (x) dx dφ 1 (x) dx dφ 1 (x) dx dφ 2 (x) dx dφ 2 (x) dx dφ 2 (x) dx ) dx ( B 1 B 2 ) = K ˆ Ω φ(x) dx ( 1 1 ) can be applied. <?page no="342"?> 316 Galerkin-FEM - magnetic field diffusion 22.5 Transforming the system of equations into a matrix equation The procedure is based on chap. 18.5. • Node matrix of the first inner node x i with φ(x) = φ 1 (x): This is followed by the interval [x 0 , x 2 ] 1 h � 2 − 1 � · � B 1 B 2 � = K h. • Node matrix of the second inner node x i+1 with φ(x) = φ 2 (x): This is followed by the interval [x 1 , x 3 ] 1 h � − 1 2 � · � B 1 B 2 � = K h. • Element matrix: The two inner nodes x i and x i+1 are summarised as the element equation 1 h � 2 − 1 − 1 2 � · � B 1 B 2 � = K h. • Coefficient matrix: The two nodal equations are combined in the coefficient matrix to form a linear system of equations 1 K h 2 ⎛⎜⎜⎜⎜⎝ 2 − 1 0 0 − 1 2 − 1 0 0 − 1 2 − 1 0 0 − 1 2 ⎞⎟⎟⎟⎟⎠ � �� � S · ⎛⎜⎜⎜⎜⎝ B 1 B 2 B 3 B 4 ⎞⎟⎟⎟⎟⎠ � �� � B h = ⎛⎜⎜⎜⎜⎝ 1 1 1 1 ⎞⎟⎟⎟⎟⎠ � �� � f . <?page no="343"?> 22.6 Solving the linear equation system 317 • Including the Dirichlet boundary conditions ( � B(x 0 ) = 1 T , � B(x 5 ) = 0.2 T ) follows again the system of equations ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 1 0 0 0 0 0 17.7 − 35.4 17.7 0 0 0 0 17.7 − 35.4 17.7 0 0 0 0 17.7 − 35.4 17.7 0 0 0 0 17.7 − 35.4 17.7 0 0 0 0 0 1 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ · ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ B 0 B 1 B 2 B 3 B 4 B 5 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 1 1 1 1 1 0.2 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ . 22.6 Solving the linear equation system The solution of the linear system of equations is done according to chap. 18.6 with Figure 22.2: MATLAB result of a one-dimensional magnetic field diffusion process <?page no="344"?> 318 Galerkin-FEM - magnetic field diffusion Table 22.1: Material data/ coefficients/ boundary conditions Specifications for copper material Permeability μ 0 [V s/ (Am)] 4 π 10 − 7 ≈ 1.2 10 − 6 Spec. electr. 
conductivity κ [A/ (V m)] 56.2 10 6 dB/ dt [T / s]; (0.49-0.53) T/ 0.2 ms -200 K [V s/ m 4 ] -14,124.6 h [m] 0.002 K h 2 [V s/ m 2 ] -0.0565 1/ (K h 2 ) [m 2 / (V s)] -17.7 Dirichlet boundary conditions B(x 0 ) [T ] 1 B(x 5 ) [T ] 0.2 S B h = f S − 1 (S B h ) = S − 1 f � S − 1 S � � �� � E B h = S − 1 f B h = S − 1 f = ⎛⎜⎜⎜⎜⎜⎜⎜⎜⎜⎜⎝ 1.00 0.73 0.51 0.35 0.25 0.20 ⎞⎟⎟⎟⎟⎟⎟⎟⎟⎟⎟⎠ . For the inner nodes it follows B h = 4 � i=1 B i (x) φ i (x) = 0.73 φ 1 + 0.51 φ 2 + 0.35 φ 3 + 0.25 φ 4 . In tab. 22.1 shows the necessary data for the calculation. The material data were taken from the tables in [46], the data for the calculation of the coefficient K from fig. 22.3. <?page no="345"?> 22.6 Solving the linear equation system 319 Figure 22.3: MATLAB result of a one-dimensional field diffusion process with time t as plot parameter In fig. 22.2 is the MATLAB result of a flux density distribution over displacement and time, created with the PDE toolbox. The corresponding MATLAB code can be found in the appendix A.6. The result was confirmed with COMSOL Multiphysics. The transition to the plot of flux density versus location with time t as the plot parameter was shown in fig. 22.3. This representation allows a comparison with the results obtained above. The deviations are due to rounding and reading inaccuracy (MATLAB data cursor is not exactly on the node). In addition, dB/ dt was read at a mean distance (4.08 mm). In tab. 22.2 the results are compared. In fig. 22.4 schematically shows the basis functions including their approach functions of the inner nodes over the domain (Ω). The Dirichlet boundary conditions are embodied by the outer nodes (B 0 , B 5 ). A comparison of the results obtained with MATLAB and with Galerkin’s method is given in fig. 22.5. The required value pairs were taken from tab. 22.2. In the appendix A.7 the results obtained with the MATLAB-PDE toolbox were compared to the results of COMSOL-Multiphysics. <?page no="346"?> 320 Galerkin-FEM - magnetic field diffusion Table 22.2: Comparison of the results of the flux density and spatial course with the aid of fig. 22.3 Node number i 0 1 2 3 4 5 Location x i [mm] 0 2 4 6 8 10 B(x i ) [T ] bei t = 1.4 ms; Galerkin 1 0.73 0.51 0.35 0.25 0.2 B(x i ) [T ] bei t = 1.4 ms; MATLAB 1 0.75 0.53 0.36 0.26 0.2 Figure 22.4: Graphical result representation of the local flux density distribution <?page no="347"?> 22.6 Solving the linear equation system 321 Figure 22.5: Result comparison with value pairs from tab. 22.2 <?page no="349"?> Chapter 23 Introduction to the finite difference method In this method, the differential quotient is replaced by a difference quotient. From this follows Diff erential quotient = diff erence quotient + discretisation error. With this discretisation, an equation is transformed into an algebraic equation (algebraised). In fig. 9.3 b) the field diffusion equation with the square of the Nabla operator can be seen. In the notation with partial derivatives, the field diffusion equation follows with ∂ 2 � B ∂x 2 = μ 0 κ ∂ � B ∂t . (23.1) The solution is done by means of the finite difference method (FDM) with implicit and explicit methods. Recommended literature is [62], chap. 3: ”Finite Difference Methods“ with exercises and solutions. 23.1 Numerical notation of the linear field diffusion equation Eq. (23.1) is solved as a one-dimensional field diffusion equation using numerical methods. 
In the further course, the derivatives are expressed with with the help of difference <?page no="350"?> 324 Introduction to the finite difference method quotients. For this purpose eq. (23.2) is used as forward difference quotient and eq. (23.3) is written as the central difference quotient ([62], p. 126 f.) ∂B ∂t = B j+1 − B j Δt (23.2) ∂ 2 B ∂x 2 = B i+1 − 2B i + B i − 1 (Δx) 2 . (23.3) If the equations (23.2) and (23.3) are substituted into eq. (23.1), it follows that B i+1,j − 2B i,j + B i − 1,j (Δx) 2 = μκ B i,j+1 − B i,j Δt . (23.4) The solution of eq. (23.4) is done by implicit and explicit method. 23.2 On the persons Crank and Nicolson John Crank (1916-2006) was an English mathematician whose work on the numerical solution of partial differential equations was groundbreaking. He studied mathematics at the University of Manchester. Phyllis Nicolson (1917-1968) was an English mathematician. Her best-known work is the Crank-Nicolson method, which she developed together with John Crank. She also studied mathematics and physics at the University of Manchester. 23.3 Solution with implicit method according to Crank-Nicolson In this application, the time and distance must be divided (discretised) into smaller units. This makes the use of the finite difference method advantageous. The difference quotients of the diffusion equation (2 � th order differential equation) are replaced by difference quotients. In the further course, the implicit method according to Crank- Nicolson is applied. This is followed by the transformation of the equation into a linear (n, n)-equation system with subsequent solution, followed by an application example. <?page no="351"?> 23.3 Solution with implicit method according to Crank-Nicolson 325 23.3.1 Transforming the diffusion equation into a matrix equation For this purpose, the left term of eq. (23.4) is replaced by the mean value of the central difference quotient of the j � th and (j + 1) � th time series 1 2 ( B i+1,j − 2B i,j + B i − 1,j (Δx) 2 + B i+1,j+1 − 2B i,j+1 + B i − 1,j+1 (Δx) 2 ) = μ κ B i,j+1 − B i,j Δt . By rearranging it follows (B i+1,j − 2B i,j + B i − 1,j + B i+1,j+1 − 2B i,j+1 + B i − 1,j+1 ) Δt 2(Δx) 2 μκ = B i,j+1 − B i,j . With the substitution k = 1 2μ κ Δt (Δx) 2 (23.5) the readability is made easier kB i+1,j − 2kB i,j + kB i − 1,j + kB i+1,j+1 − 2kB i,j+1 + kB i − 1,j+1 = B i,j+1 − B i,j . By rearranging and separating the individual terms according to the j � th and (j + 1) � th time step, the result is as follows kB i − 1,j − 2kB i,j + B i,j + kB i+1,j = − kB i − 1,j+1 + B i,j+1 + 2kB i,j+1 − kB i+1,j+1 . A subsequent summary allows kB i − 1,j + (1 − 2k) B i,j + kB i+1,j = − kB i − 1,j+1 + (1 + 2k) B i,j+1 − kB i+1,j+1 (23.6) rearranging the equation after the j � th and (j + 1) � th time step. The left-hand side of eq. (23.6) is known, since it contains the current j � th time step and known geometric steps. The right-hand side of eq. (23.6) contains the known geometric steps but the unknown state at time j + 1 (cf. 23.1). With the substitution of the left-hand side of eq. (23.6) <?page no="352"?> 326 Introduction to the finite difference method Figure 23.1: Division into steps for the implicit method b 1 = k B i − 1,j + (1 − 2k) B i,j + k B i+1,j = (k (1 − 2k) k) ⎛⎜⎜⎝ B i − 1,j B i,j B i+1,j ⎞⎟⎟⎠ (23.7) and substitution of the right-hand side of eq. (23.6) follows in matrix notation b 1 = ( − k (1 + 2k) − k) � �� � A ⎛⎜⎜⎝ B i − 1,j+1 B i,j+1 B i+1,j+1 ⎞⎟⎟⎠ � �� � x . 
(23.8) The equations thus have the form b = A x, (23.9) where A and x are matrices. 23.3.2 Solving the matrix equation A precondition for the solvability of an inhomogeneous linear (n, n)-system b = A x is the square matrix, which is not yet the case with eq. (23.8). In order to be able to <?page no="353"?> 23.3 Solution with implicit method according to Crank-Nicolson 327 use this solution method nevertheless, the geometric steps of eq. (23.7) with b 2 = (k (1 − 2k) k) ⎛⎜⎜⎝ B i,j B i+1,j B i+2,j ⎞⎟⎟⎠ (23.10) b 3 = (k (1 − 2k) k) ⎛⎜⎜⎝ B i+1,j B i+2,j B i+3,j ⎞⎟⎟⎠ (23.11) and that of eq. (23.8) with b 4 = ( − k (1 + 2k) − k) ⎛⎜⎜⎝ B i,j+1 B i+1,j+1 B i+2,j+1 ⎞⎟⎟⎠ (23.12) b 5 = [ − k (1 + 2k) − k) ⎛⎜⎜⎝ B i+1,j+1 B i+2,j+1 B i+3,j+1 ⎞⎟⎟⎠ (23.13) extended. Eq. (23.10) and eq. (23.11) are inserted into eq. (23.7) b 1 = ⎛⎜⎜⎜⎜⎜⎜⎜⎝ k (1 − 2k) k 0 0 0 k (1 − 2k) k 0 0 0 k (1 − 2k) k 0 0 0 k (1 − 2k) 0 0 0 0 k ⎞⎟⎟⎟⎟⎟⎟⎟⎠ · ⎛⎜⎜⎜⎜⎜⎜⎜⎝ B i − 1,j B i,j B i+1,j B i+2,j B i+3,j ⎞⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎜⎜⎜⎜⎜⎝ k B i − 1,j + (1 − 2k) B i,j + k B i+1,j k B i,j + (1 − 2k)B i+1,j + k B i+2,j k B i+1,j + (1 − 2k)B i+2,j + k B i+3,j k B i+2,j + (1 − 2k)B i+3,j k B i+3,j ⎞⎟⎟⎟⎟⎟⎟⎟⎠ � �� � b . All the elements contain known quantities. Eq. (23.12) and eq. (23.13) are inserted into eq. (23.8), with this follows <?page no="354"?> 328 Introduction to the finite difference method Figure 23.2: Extended step division of the implicit method b 1 = ⎛⎜⎜⎜⎜⎜⎜⎜⎝ − k (1 + 2k) − k 0 0 0 − k (1 + 2k) − k 0 0 0 − k (1 + 2k) − k 0 0 0 − k (1 + 2k) 0 0 0 0 − k ⎞⎟⎟⎟⎟⎟⎟⎟⎠ � �� � A · ⎛⎜⎜⎜⎜⎜⎜⎜⎝ B i − 1,j+1 B i,j+1 B i+1,j+1 B i+2,j+1 B i+3,j+1 ⎞⎟⎟⎟⎟⎟⎟⎟⎠ � �� � x . Fig. 23.2 shows the extension of the geometric steps. The matrix A is now quadratic. Thus, the equation satisfies the requirements for solving linear systems of equations using Cramer’s rule. For the following time step j becomes j +1 and j +1 becomes j +2. The solution process for matrix equations begins with the check for non-singularity of the matrix A, which is done with det(A) � = 0 = − k 5 . In the continuation, the inverse matrix A − 1 <?page no="355"?> 23.3 Solution with implicit method according to Crank-Nicolson 329 Table 23.1: Material data Specifications for material copper: Permeabilit¨ at μ 0 [V s/ (Am)] 4 π 10 − 7 = 1.2 10 − 6 Spec. electr. conductivity κ [A/ (V m)] 56.2 10 6 1/ (2 μ κ) [m 2 / s] 0.0071 A − 1 = ⎛⎜⎜⎜⎜⎜⎜⎜⎝ − 1 k − 2 k+1 k 2 − 3 k 2 +4 k+1 k 3 − 4 k 3 +10 k 2 +6 k+1 k 4 − 5 k 4 +20 k 3 +21 k 2 +8 k+1 k 5 0 − 1 k − 2 k+1 k 2 − 3 k 2 +4 k+1 k 3 − 4 k 3 +10 k 2 +6 k+1 k 4 0 0 − 1 k − 2 k+1 k 2 − 3 k 2 +4 k+1 k 3 0 0 0 − 1 k − 2 k+1 k 2 0 0 0 0 − 1 k ⎞⎟⎟⎟⎟⎟⎟⎟⎠ is formed with known methods. This leaves the multiplication with the matrix b to solve the flux density column vector x of the (j + 1) � th time step according to A − 1 b = A − 1 A � �� � E x A − 1 b = x. Where E is the unit matrix. 23.3.3 Application example An application example is the recalculation of a magnetic field diffusion process according to fig. 23.3. For this purpose, five geometric steps x 0 to x 4 were defined. The steps to be compared are marked with the MATLAB data cursor in the figure. The 0 � th time step (t = 0 ms) is characterised by the fact that at the left edge, at x = 0 m, the boundary condition B(x 0 ) was set. The comparison is made with the second time step t = 0.2 ms. The data required for the recalculation can be taken from the tables 23.1 and 23.2. 
Including the Dirichlet boundary conditions, the matrix equation is calculated with <?page no="356"?> 330 Introduction to the finite difference method Figure 23.3: MATLAB result of a one-dimensional field diffusion process with time t as plot parameter for factor k = 0.2272 in tab. 23.2 ⎛⎜⎜⎜⎜⎜⎜⎜⎝ 1 0 0 0 0 0 − 0.2272 1.4544 − 0.2272 0 0 0 − 0.2272 1.4544 − 0.2272 0 0 0 − 0.2272 1.4544 0 0 0 0 1 ⎞⎟⎟⎟⎟⎟⎟⎟⎠ · ⎛⎜⎜⎜⎜⎜⎜⎜⎝ B i − 1,j+1 B i,j+1 B i+1,j+1 B i+2,j+1 B i+3,j+1 ⎞⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎜⎜⎜⎜⎜⎝ 1 0 0 0 0.001 ⎞⎟⎟⎟⎟⎟⎟⎟⎠ . The location and time data of the flux density B were taken from the MATLAB simulation result fig. 23.3 and inserted into tab. 23.2. The result value pairs from tab. 23.2 were inserted in fig. 23.4 compared graphically. The approximation of the Crank- Nicolson result to the MATLAB result is changed by arbitrarily varying the boundary condition B(x 4 ) to see the influence on the result. If the boundary condition B(x 4 ) = 57.9 10 − 6 T (value from fig. 23.3) is replaced by the boundary condition B(x 4 ) = 1 mT in the Crank-Nicolson method, a clear approximation to the MATLAB PDE result is obtained. <?page no="357"?> 23.3 Solution with implicit method according to Crank-Nicolson 331 Table 23.2: Comparison of the results of the flux density and spatial curves with the help of fig. 23.3 Dirichlet boundary conditions: Left edge B(x 0 ), [T ] 1 Right edge B(x 4 ) [T ] 0.001 und 57.9 10 − 6 Time, spatial discretisation, k-factor: Δt [s] 0.0002 Δx [m] 0.0025 Δt/ Δx 2 [m/ s 2 ] 32 k [1] 0.2272 x-Position B(t=0 s) B(t=0.2 ms) B(t=0.2 ms) B(t=0.2 ms) [m] [T ] MATLAB Crank-Nicolson Crank-Nicolson 0 1.0 1.0 1.0 1.0 0.0025 0 0.3035 0.25 0.0145 0.005 0 0.0322 0.04 0.0023 0.0075 0 0.0016 0.0064 0.37 10 − 3 0.01 0 57.9 10 − 6 0.001 57.9 10 − 6 <?page no="358"?> 332 Introduction to the finite difference method Figure 23.4: Comparison between MATLAB PDE and Crank Nicolson result 23.4 Solution with explicit method according to Crank-Nicolson In the sequel, the explicit method for solving the field diffusion equation eq. (23.1) is presented. 23.4.1 Transforming the diffusion equation into a matrix equation For this purpose, eq. (23.4) B i+1,j − 2B i,j + B i − 1,j (Δx) 2 = μκ B i,j+1 − B i,j Δt is used. With the substitution of the factor k according to eq. (23.5) follows k B i+1,j − 2k B i,j + k B i − 1,j = B i,j+1 − B i,j . <?page no="359"?> 23.4 Solution with explicit method according to Crank-Nicolson 333 With the rearrangement of the individual terms according to the time step j and j + 1 becomes k B i − 1,j + (1 − 2k) B i,j + k B i+1,j = B i,j+1 . In matrix notation it follows Figure 23.5: Step division of the explicit method (k (1 − 2k) k) � �� � A ⎛⎜⎜⎝ B i − 1,j B i,j B i+1,j ⎞⎟⎟⎠ � �� � b = B i,j+1 � �� � x . (23.14) The left term of eq. (23.14) is known, since it includes the present time step and known geometric steps. Fig. 23.5 shows the step division. 23.4.2 Solving the matrix equation After calculating the function values of the current time step j, the function value of the future time step j + 1 can be calculated explicitly. The linear eq. (23.14) is extendet to a (m, n)-system <?page no="360"?> 334 Introduction to the finite difference method ⎛⎜⎜⎝ k (1 − 2k) k 0 0 0 k (1 − 2k) k 0 0 0 k (1 − 2k) k ⎞⎟⎟⎠ � �� � A · ⎛⎜⎜⎜⎜⎜⎜⎜⎝ B i − 1,j B i,j B i+1,j B i+2,j B i+3,j ⎞⎟⎟⎟⎟⎟⎟⎟⎠ � �� � b = ⎛⎜⎜⎝ B i,j+1 B i+1,j+1 B i+2,j+1 ⎞⎟⎟⎠ � �� � x . Fig. 23.6 shows the expansion of the geometric steps. 
The number of nodes in the (j + 1) � th time step corresponds to the number of nodes in the j � th time step minus 2. Figure 23.6: Expanded step division of the explicit method 23.4.3 Application example In fig. 23.7 shows the MATLAB result of a field diffusion process. The diffusion process is recalculated from the time 2 ms to the time 4 ms. The data required for the recalculation are shown in tab. 23.3. With the k factor calculated there, the best approximation to the MATLAB result could be achieved. The matrix equation to be solved is <?page no="361"?> 23.4 Solution with explicit method according to Crank-Nicolson 335 ⎛⎜⎜⎝ 1.42 − 1.84 1.42 0 0 0 1.42 − 1.84 1.42 0 0 0 1.42 − 1.84 1.42 ⎞⎟⎟⎠ · ⎛⎜⎜⎜⎜⎜⎜⎜⎝ 1 0.668 0.391 0.198 0.087 ⎞⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎝ 0.74 0.51 0.31 ⎞⎟⎟⎠ . Figure 23.7: MATLAB result of a one-dimensional field diffusion process with time t as plot parameter for factor k = 1.42 in tab. 23.3 In the continuation, another calculation is carried out and the factor k is chosen so that (1 − 2k) = 0. The matrix equation becomes ⎛⎜⎜⎝ 0.5 0 0.5 0 0 0 0.5 0 0.5 0 0 0 0.5 0 0.5 ⎞⎟⎟⎠ · ⎛⎜⎜⎜⎜⎜⎜⎜⎝ 1 0.499 0.157 0.0347 0.0069 ⎞⎟⎟⎟⎟⎟⎟⎟⎠ = ⎛⎜⎜⎝ 0.579 0.267 0.082 ⎞⎟⎟⎠ . <?page no="362"?> 336 Introduction to the finite difference method Table 23.3: Example 1: Comparison of the results of the flux density and spatial curves with the help of fig. 23.7 Dirichlet boundary condition: Left edge B(x 0 ) [T ] 1 Temporal, spatial discretisation, k-factor: Δt [s] 0.0002 Δx [m] 0.001 Δt/ Δx 2 [m/ s 2 ] 200 k [1] 1.42 x-Position B(t=0.2 ms) [T ] B(t=0.4 ms) [T ] B(t=0.4 ms) [T ] [m] MATLAB MATLAB Explizit 0 1.0 - - 0.001 0.67 0.76 0.74 0.002 0.39 0.54 0.51 0.003 0.20 0.36 0.31 0.004 0.09 - - The data required for the recalculation can be found in fig. 23.8 and summarised in tab. 23.4. The third calculation example is shown in tab. 23.5. This final calculation allows an assessment of the k-factor influence (choice of k-factor) on the calculation results. <?page no="363"?> 23.4 Solution with explicit method according to Crank-Nicolson 337 Figure 23.8: MATLAB-Ergebnis eines eindimensionalen Felddiffusionsvorgangs mit der Zeit t als Scharparameter f¨ ur Faktor k = 0.5 in tab. 23.4 <?page no="364"?> 338 Introduction to the finite difference method Table 23.4: Example 2: Comparison of the results of the flux density and spatial curves with the help of fig. 23.8 Dirichlet boundary condition: Left edge B(x 0 ), [T ] 1 Temporal, spatial discretisation, k-factor: Δt [s] 0.0002 Δx [m] 0.001685 Δt/ Δx 2 [m/ s 2 ] 70.44 k [1] 0.5 x-Position B(t=0.2 ms) [T ] B(t=0.4 ms) [T ] B(t=0.4 ms) [T ] [m] MATLAB MATLAB Explizit 0 1.0 - - 0.00168 0.499 0.633 0.579 0.00337 0.157 0.317 0.267 0.005 0.035 0.135 0.082 0.0064 0.0069 - - <?page no="365"?> 23.4 Solution with explicit method according to Crank-Nicolson 339 Table 23.5: Example 3: Comparison of the results of the flux density and spatial curves with the help of fig. 23.3 Dirichlet boundary condition: Left edge B(x 0 ) [T ] 1 Temporal, spatial discretisation, k-factor: Δt [s] 0.0002 Δx [m] 0.0025 Δt/ Δx 2 [m/ s 2 ] 32 k [1] 0.2272 x-Position B(t=0.2 ms) [T ] B(t=0.4 ms) [T ] B(t=0.4 ms) [T ] [m] MATLAB MATLAB Explizit 0 1.0 - - 0.0025 0.3022 0.466 0.398 0.005 0.032 0.1296 0.086 0.0075 0.0016 0.025 0.0079 0.01 57 10 − 6 - - <?page no="367"?> Chapter 24 Applications of FEM to product development For the user, the application of the FEM is usually limited to the operation of suitable FEM programs. 
Common to all of them is the three-step procedure in the programme application, which is divided into the three phases

• Preprocessing,
• Processing,
• Postprocessing.

As examples of the application of the FEM,

1. the recalculation (analysis) of an already existing proportional solenoid and
2. the precalculation (design) of a planar asynchronous disc motor

are presented.

24.1 Analysis of a proportional magnet

An example of the recalculation of an existing product is the proportional magnet with a geometrically influenced force-displacement characteristic according to fig. 24.1 from the company Robert Bosch GmbH, which is used on the in-line injection pump to control the fuel supply. The designations of the magnet elements are given in tab. 24.1.

Figure 24.1: Cross-section of a proportional magnet

Table 24.1: Designations of the proportional magnet
No. Designation                    No. Designation
1   Magnetic back-iron (stator)    4   Armature with active element
2   Non-magnetic tube              5   Orthocyclic winding
3   Slide bearing tube             6   Overmoulding

24.1.1 Preprocessing

The preprocessing phase involves drawing the desired contour with subsequent meshing. Drawing is done either with the graphic editor integrated in the FEM software or with an external graphic editor, from which the drawing is then imported into the FEM software. Similarly, the meshing is done either directly in the FEM software or by meshing the externally created graphic and then importing it into the FEM software. Fig. 24.2 a) shows a 2D cross-section of the proportional solenoid according to fig. 24.1 with the surrounding air space. The meshing of all components including the air space is shown in fig. 24.2 b). The axis of rotation is located to the left of the geometry in each image. Material properties and boundary conditions are assigned to the individual construction elements. See also chap. 1.3.8.

Figure 24.2: Preprocessing using the example of a proportional magnet from fig. 24.1

24.1.2 Processing

The processing phase comprises the actual solution of the equations used in the FEM. A distinction is made between two basic solution methods:

• Direct method: The solution is worked out in one (single) huge calculation step. The Galerkin method may be mentioned as a representative. The solvers work according to the LU decomposition method published by the Polish mathematician, astronomer and geodesist Tadeusz Banachiewicz in 1938. Here L stands for "lower" and U for "upper": the LU decomposition factorises the square matrix A into a lower triangular matrix L, whose non-zero elements lie on and below the diagonal, and an upper triangular matrix U, whose non-zero elements lie on and above the diagonal. Thus A = L · U. For a useful reference, see [64], p. 405 ff.
• Iterative method: The iterative methods approach the solution step by step. To be mentioned here are the "conjugate gradient method", the "generalized minimum residual method" and the "biconjugate gradient stabilized method". These methods allow the decreasing error (convergence) and the number of steps already computed to be observed during a converging computation. Well-conditioned computational problems exhibit monotonic convergence.
A slowed-down or oscillating convergence behaviour, on the other hand, indicates a less well-conditioned computational problem. With commercial software, the user can usually not influence the solution algorithm. Even the necessary boundary conditions are already suggested or adapted by the software.

24.1.3 Postprocessing

The postprocessing phase contains the actual analysis phase, in which the result of the solved variable is displayed by means of colour coding and must be interpreted. In fig. 24.3 the magnetic flux density B is colour coded and the isolines of the magnetic flux Φ are greyed. Red colours represent high, blue colours low flux densities.

Figure 24.3: Postprocessing using the example of a proportional magnet from fig. 24.1

24.2 Synthesis of a planar asynchronous disc motor

Virtual product development (virtual prototyping) allows cost-efficient development. Iteration cycles, which include prototype construction and testing, are reduced to a minimum and form the final stage of product development. CAE-based optimisation including CAE-based robustness evaluation is becoming increasingly important in virtual prototyping. The combination of optimisation and robustness evaluation leads to target-oriented strategies of design optimisation and thus to a shortening of the product development cycle. An efficient coupling of the FEM with an optimisation tool is a cycle-shortening step for virtual product development.

24.2.1 Preprocessing

The preprocessing phase is shown in fig. 24.4. In fig. 24.4 a) the motor elements are drawn using the JMAG editor; the rotor and the stator can be seen. In fig. 24.4 b) the automatic meshing of rotor and stator is done using 3D solid elements.

Figure 24.4: Preprocessing example of a planar asynchronous motor

24.2.2 Processing

The processing procedure is carried out according to chap. 24.1.2.

24.2.3 Postprocessing

In fig. 24.5 the magnetic flux density B is shown in colour coding. Here, light and red colours indicate a high flux density, darker and blue colours a low flux density. The stator and the squirrel-cage rotor (short-circuit bars with short-circuit rings) are shown.

Figure 24.5: Postprocessing example of the planar asynchronous motor

24.2.4 Prototype of the planar asynchronous motor

Fig. 24.6 shows the prototype of the planar asynchronous disc motor. In fig. 24.6 a) the front view with holding plate and motor shaft can be seen. Fig. 24.6 b) shows the rear view after assembly. The side view of the motor is shown in fig. 24.6 c). The centrally arranged disc rotor (double short-circuit cage rotor), which is provided with a short-circuit cage on the left and right sides, can be seen. Both stators, each consisting of a circuit board with iron back-circuit, are attached to the two sides of the rotor. The air gap is adjusted by means of spacer tubes. Fig. 24.7 shows the disassembled disc rotor motor. The dismounted front stator with printed windings in fig. 24.7 a) allows a view of the double short-circuit cage rotor in fig. 24.7 b). Behind this is the second stator. The double stator arrangement enables the compensation of the occurring rotor axial forces. All necessary designations are given in tab. 24.2.
With regard to the manufacturing, it should be noted that the two iron backs of the stators as well as the rotor were made of discs consisting of powder composite material. Figure 24.6: Prototype of the planar asynchronous disc motor Figure 24.7: Detailed view with stator (left) and rotor (right) The powder composite material serves to guide the flux and is characterised by a very low specific electrical conductivity. The stator windings were manufactured on circuit boards using a printing process. The necessary contours for the conductor tracks and <?page no="374"?> 348 Applications of FEM to product development Table 24.2: Designations and simulation results of the asynchronous disc motor from fig. 24.7 No. Designation No. Designation 1 Powder composite 5 Distance tube (stator) (rotor, stator) 2 Short circuit ring outside (rotor) 6 Drive shaft (rotor) 3 Short circuit bar (rotor) 7 Holding screw (stator) 4 Short circuit ring inside (rotor) 8 printed coil (stator) Torque at f min max. torque M max 0.17 Nm at 50 Hz 0.28 Nm at 150 Hz short-circuit bars were milled. In fig. 24.8 shows the simulated motor torque curve over the drive frequency. The simulation results obtained are also shown in tab. 24.2. Figure 24.8: FEM simulation result - motor torque characteristic <?page no="375"?> Chapter 25 Virtual product design Virtual product development (virtual prototyping) allows cost-efficient development. Iteration cycles, which provide for prototype construction and testing, are then sensibly introduced at the end of a product development cycle and thus shorten it. CAE-based optimisation including CAE-based robustness evaluation is becoming increasingly important in virtual prototyping. The combination of optimisations and robustness evaluation leads to target-oriented strategies of design optimisation and thus to a shortening of the product development cycle. One cycle-shortening measure for virtual product development is the efficient coupling of the FEM with an optimisation tool. 25.1 Coupling between FEM and optimisation tools Optimisation is finding its way into virtual product development. For this purpose, in fig. 25.1 shows the schematic embedding of an FEM software in an optimisation software. A parameterised FEM model is coupled with an optimisation tool via interfaces and input and output variables are defined. After selecting the optimisation strategy and the variables, the optimiser varies their contents and transfers them to the FEM tool. The FEM tool calculates the model and returns the calculation and simulation result to the optimiser after completion of the calculation. An algorithm of the optimiser checks the result for plausibility and decides on the continuation of the optimisation. This procedure allows optimisation with many independent variables (multi-criteria optimisation). <?page no="376"?> 350 Virtual product design Figure 25.1: Embedding the FEM into an optimisation tool 25.2 Multi-objective optimisation - Pareto optimisation In multi-objective optimisation (multi-criteria optimisation), several requirements are placed on a problem solution, which must be fulfilled in the best possible way. These requirements are typically contradictory, so that an optimum with regard to all functions cannot be achieved with one solution. One example is Pareto optimisation, named after the Italian economist Vilfredo Pareto (1848 - 1923). A Pareto optimum is a state in which it is not possible to make one solution better without at the same time making another solution worse. 
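The dominance test behind this definition can be sketched in a few lines of MATLAB. The objective matrix F below is purely illustrative (two objectives per design, both assumed to be maximised) and is not tied to any particular optimisation tool.

% Pareto dominance test for two objectives (both to be maximised) - illustrative sketch
F = [0.2 0.9; 0.5 0.5; 0.4 0.6; 0.8 0.1; 0.3 0.3];   % each row: one candidate design
n = size(F,1);
isPareto = true(n,1);
for i = 1:n
    for j = 1:n
        % design j dominates design i if it is at least as good in every
        % objective and strictly better in at least one objective
        if all(F(j,:) >= F(i,:)) && any(F(j,:) > F(i,:))
            isPareto(i) = false;
            break
        end
    end
end
paretoSet = F(isPareto,:)    % the remaining non-dominated (Pareto-optimal) designs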
The set of Pareto-optimal points is called the Pareto front. A calculation result of a multi-objective optimisation is shown in fig. 25.2. A motor torque optimisation as a function of the geometric parameters d 1 and d 2 is evident. All points and circles indicate a solution. The best possible solutions are distributed along the Pareto front. In Pareto optimisation, multi-objective optimisation methods search <?page no="377"?> 25.3 Optimisation example electromagnet 351 not only for the best possible solution, but also for a set of compromise solutions from which the user selects one for realisation. Further examples for the optimisation of a rotatory drive (permanent magnet excited synchronous machine) can be found in the dissertation [45]. Figure 25.2: Simulation result of a multi-objective optimisation 25.3 Optimisation example electromagnet In fig. 25.3 a) a plunger armature magnet with armature (1), shoulder piece (2), base with armature counterpart (3), magnetic return (4) and coil (5) can be seen. The plunger armature magnet is subjected to a variable assignment (dimensioning) according to fig. 25.3 b). The variables are listed in tab. 25.1. For the upcoming optimisation, all variables are released for variable modification and provided with interval limits. Let the optimisation objective be the maximum magnetic force F mag taking into account the ohmic power loss P V in the coil. The optimisation process also requires the constant recalculation of a coil fitted into the winding window. The optimisation is done with the software tool OptiSLang. OptiSLang is an algorithmic toolbox for sensitivity analyses, optimisations, robustness evaluations, reliability analyses and Robust Design Optimisation (RDO). With the following solution methods <?page no="378"?> 352 Virtual product design Figure 25.3: Example plunger armature magnet • Monte Carlo method, • Particle swarm method, • Evolutionary method the optimisation of the plunger armature magnet presented in fig. 25.3 a) was carried out. The variables fixed during the optimisation are the voltage, the conductivity of the copper wire, the outer diameter d a and the air gap. The remaining variable contents were released to the optimiser for variation. 25.3.1 Monte Carlo method The Monte Carlo method requires the multiple simulation of mathematical-scientific models using random numbers. The selected variables are assigned random values within their interval limits. Random numbers are realised random quantities which are subject to fixed distributions. The following are to be mentioned • Uniformly distributed random numbers: In a considered interval, the random numbers are uniformly distributed. • Non-equally distributed random numbers: Random numbers are generated with an arbitrary distribution function. <?page no="379"?> 25.3 Optimisation example electromagnet 353 Table 25.1: Designations from fig. 25.3 b) Variable Designation Variable Designation b w Width winding window l w Winding window length d a Outer diameter l 1 Bottom thickness d ai Inside diameter l 2 Length of armature counterpart magnetic back iron d i Diameter l 3 Length of magnet, inside armature counterpart l A Armature length l 4 Length of magnet d n Wire diameter U Voltage Figure 25.4: Optimisation result using Monte Carlo method In fig. 25.4 the results of the magnetic force F mag versus the power dissipation P V of the coil are shown. All points represent magnetic circuit designs. The values of the variables named in tab. 
25.1 were randomly selected within their specified limits, without the involvement of a decision algorithm. <?page no="380"?> 354 Virtual product design 25.3.2 Particle swarm method Particle swarm optimisation is a method inspired by nature. It is a swarm intelligencebased biological algorithm and mimics the social behaviour of a swarm of bees foraging for food. In fig. 25.5 shows the optimisation result achieved with this method. Figure 25.5: Optimisation result using the particle swarm method 25.3.3 Evolutionary method The Evolutionary Method, like the Particle Swarm Method, is a method inspired by nature. The method imitates evolution (optimisation) in nature. To be mentioned are: • Survival of the fittest, • evolution by mutation, recombination and selection. The method was developed for optimisation problems for which no gradient information is available, such as binary or discrete search spaces. In fig. 25.6 shows the results obtained using an evolutionary strategy. <?page no="381"?> 25.3 Optimisation example electromagnet 355 Figure 25.6: Optimisation result using evolutionary method 25.3.4 Discussion of the results The simulation results are summarised as follows: • With the Monte Carlo method, due to its randomly selected variable contents, more design calculations are to be expected until a meaningful result is available. • Both the particle swarm and the evolutionary strategy methods provide a meaningful result with fewer design calculations by imitating natural selection mechanisms (targeted search). <?page no="383"?> Chapter 26 Eigenvalue problems Many scientific and technical processes, such as those shown in the figures 9.2, 9.1, are described by means of boundary value problems. For example, if the oscillation equations of fig. 9.2 only the resonance frequencies of the differential equation are of interest, the differential equations of the boundary value tasks are transformed into a dependence of the parameter λ. Of interest are these values of the parameter (eigenvalues), which lead to a solution of the differential equation (eigensolutions, eigenfunctions). The solutions belonging to the eigenfunctions are called eigenvalues. Eigenfunctions are homogeneous functions. The boundary value problem (boundary value problem) was thus transformed into an eigenvalue problem (eigenvalue problem) whose solution exists, for example, only for certain frequencies (resonance frequencies). Again, the general solution procedure consists of the transformation (reduction) of an eigenvalue equation into a matrix equation, which can be solved with the known algorithms. Recommended literature on this is [38], ch. 7 as well as [60], ch. 17. 26.1 Eigenvalue problem - introduction Compared to eq. (10.1), another form of equations follows L f = λ f, which takes the general form of the eigenfunction L f = λ M f (26.1) <?page no="384"?> 358 Eigenvalue problems Here L and M are linear operators. Values specifically allowed for the parameter λ are called eigenvalues, which lead to the solution of f, the eigenfunction. 26.2 Eigenvalue problem - method of moments The application of the method transforms an eigenfunction into a matrix eigenvalue equation, which can be solved with the methods known from the literature and is comparable in application to the procedure in chap. 10. For the eigenfunction eq. (26.1), basis functions φ 1 , φ 2 , φ 3 , ... f = N � j=1 a j φ j (26.2) are chosen, insert into eq. (26.1) results in N � j=1 a j L φ j = λ N � j=1 a j M φ j . 
The formation of the inner product including the weighting or test functions w 1 , w 2 , ..., w m leads to N � j=1 a j � w k , L φ j � = λ N � j=1 a j � w k , M φ j � , where k = 1, 2, 3, ..., which is noted in the matrix eigenvalue equation (l jk ) (a j ) = λ (m jk ) (a j ) . (26.3) Here is (m jk ) = ⎛⎜⎜⎝ � w 1 , M φ 1 � � w 1 , M φ 2 � ... � w 2 , M φ 1 � � w 2 , M φ 2 � ... ... ... ... ⎞⎟⎟⎠ , (l jk ) the matrix according to eq. (10.3) and (a j ) is the column vector according to eq. (10.4). The eq. (26.3) can only have solutions if <?page no="385"?> 26.3 Eigenvalue problem - canonical form 359 det | l jk − λ m jk | = 0 (26.4) holds true. The determinant is a polynomial of λ, with solutions λ 1 , λ 2 , λ 3 ... . Here λ l are the eigenvalues of the matrix equation eq. (26.3) which approximate the eigenvalues of the function eq (26.1). The corresponding matrices (a j ) 1 , (a j ) 2 , (a j ) 3 , ... are eigenvectors of eq. (26.3) and are coefficients of the functions f a i = (φ n ) T (a n ) i , (26.5) which approximate the eigenfunction of eq. (26.1). In eq. (26.5), (Φ n ) is the matrix of basis functions Φ n . The success of the MOM depends on the ingenuity in choosing the appropriate basis function Φ n and the weighting function w n . The special choice w n = Φ n is called the Galerkin method. If the linear operator M exists, it follows with eq. (26.1) M − 1 L f = λ f. The inner product � w k , M φ j � is thus identical to � w k , φ j � . 26.3 Eigenvalue problem - canonical form The canonical form of the eigenvalue equation is often denoted by L f = λ f, which says that a symmetric matrix can be represented by its eigenvalue matrix when M forms the unit operator. <?page no="387"?> Chapter 27 Eigenvalue problem-MOM - solution of − d 2 u/ dx 2 = λu The example serves to clarify the concept presented in chap. 26. It also introduces the normalisation of functions for the purpose of comparison with each other. 27.1 Exercise description The eigenvalue problem is to be solved − d 2 u dx 2 = λu with the boundary conditions u(0) = u(1) = 0, whose eigenvalues are λ l = (lπ) 2 ; l = 1, 2, 3, ... and eigenfunctions u l = √ 2 sin(lπx) (27.1) are already known from the literature. 27.2 Solution path and solution With the linear operator <?page no="388"?> 362 Eigenvalue problem-MOM - solution of − d 2 u/ dx 2 = λu L = − d 2 dx 2 follows the representation L u = λ u. (27.2) The procedure is carried out according to the method of moments. The approximation or approach function is selected u = N ∑ j=1 a j u j (27.3) u j = x − x j+1 w k = x − x k+1 , which satisfies the Dirichlet boundary conditions u(0) = u(1) = 0. In the Galerkin method, the weighting function w k is chosen, which is equal to the basis function u j . By inserting it into eq. (27.2), the equation follows N ∑ j=1 a j � w k , L u j � ︸ ︷︷ ︸ T erm1 = λ N ∑ j=1 a j � w k , u j � ︸ ︷︷ ︸ T erm2 , (27.4) whose terms are developed in the continuation. 27.3 Solution for 1 � th order The basis function eq. (27.3) consists of one term at N = 1. This is followed by the development of the terms 1 and 2 of eq. (27.4). • Term 1: The evolution of the matrix (l jk ) takes place with (l jk ) = � w k , L u j � = � x − x k+1 , L ( x − x j+1 ) � = ˆ 1 0 ( x − x k+1 ) − d 2 dx 2 ( x − x j+1 ) dx. 
<?page no="389"?> 27.3 Solution for 1 � th order 363 The twofold application of partial integration provides ˆ x=1 x=0 ( x − x k+1 ) − d 2 dx 2 ( x − x j+1 ) dx = [( x − x k+1 ) − d dx ( x − x j+1 )] ∣∣∣∣∣ x=1 x=0 ︸ ︷︷ ︸ 0 − ˆ x=0 x=1 (1 − (k + 1)x k ) − d dx (x − x j+1 ) dx − ˆ 1 0 (1 − (k + 1)x k ) − d dx (x − x j+1 ) dx = − [( 1 − (k + 1)x k ) ( − (x − x j+1 ) )] ∣∣∣ 1 0 ︸ ︷︷ ︸ 0 + ˆ 1 0 ( k(k + 1)x k − 1 (x − x j+1 ) ) dx, which leads to ˆ 1 0 ( k(k + 1)x k − 1 (x − x j+1 ) ) dx = ˆ 1 0 (k 2 + k)x k − (k 2 + k)x k+j dx = [ k 2 + k k + 1 x k+1 − k 2 + k k + j + 1 x k+j+1 ] ∣∣∣∣∣ x=1 x=0 = [ k(k + 1) k + 1 − k 2 + k k + j + 1 ] and finally leads to the matrix (l jk ) due to the common denominator (l jk ) = j k j + k + 1 . • Term 2: The evolution of the matrix (m jk ) is done with (m jk ) = � w k , u j � = � x − x k+1 , x − x j+1 � = ˆ Ω ( x 2 − x 1 x j+1 − x k+1 x 1 + x k+1 x j+1 ) dx = ˆ Ω ( x 2 − x j+2 − x k+2 + x k+j+2 ) dx. <?page no="390"?> 364 Eigenvalue problem-MOM - solution of − d 2 u/ dx 2 = λu Table 27.1: Coefficients for N = 4 (m jk ) (l jk ) j j k 1 2 3 4 1 2 3 4 1 1 30 1 20 5 84 11 168 1 3 1 2 3 5 2 3 2 1 20 8 105 11 120 32 315 1 2 4 5 1 8 7 3 5 84 11 120 1 9 13 105 3 5 1 9 7 3 2 4 11 168 32 315 13 105 32 231 2 3 8 7 3 2 16 9 With term-wise integration, the following applies (m jk ) = ( 1 3 x 3 − 1 j + 3 x j+3 − 1 k + 3 x k+3 + 1 k + j + 3 x k+j+3 ) ∣∣∣∣∣ 1 0 = 1 3 − 1 j + 3 − 1 k + 3 + 1 k + j + 3 = j k (k + j + 6) 3 (j + 3) (k + 3) (k + j + 3) . Thus the matrix equation to be solved follows (l jk ) (a j ) = λ (m jk ) (a j ). (27.5) The convergence of the solution is shown with increasing number N of basis functions. The required matrix elements were compiled in tab. 27.1 up to N = 4. For N = 1, the result follows from eq. (27.5) 1 3 a 1 = λ 1 30 a 1 . The eigenvalue λ 1 = λ (1) 1 = 10 can be read offdirectly. The exact value for λ is (1 · π) 2 = 9.9. The notation (1) indicates N = 1. It remains to determine a 1 of the eigenfunction. The comparability with eq. (27.1) is achieved by normalising the approximated eigenfunction u (1) 1 = a 1 (x − x 2 ) according to <?page no="391"?> 27.3 Solution for 1 ′ th order 365 � u (1) 1 � = √ � u (1) 1 , u (1) 1 � = √ ˆ 1 0 (a 21 (x − x 2 )) 2 dx = a 1 1 √ 30 = 1, Figure 27.1: Comparison of eigenfunction curves (N = 1) where a 1 = √ 30 and the eigenfunction u (1) 1 is u (1) 1 = √ 30 (x − x 2 ). A comparison with eq. (27.1) is shown in fig. 27.1. <?page no="392"?> 366 Eigenvalue problem-MOM - solution of − d 2 u/ dx 2 = λu 27.4 Solution for 2 � th order For the order N = 2 the solution of the first term corresponds to the 1 ′ th order u (2) 1 = √ 30 ( x − x 2 ) = u (1) 1 . Eq. (27.3) consists of two terms for N = 2, making eq. (27.5) to be ( 1 3 1 2 1 2 4 5 ) ( a 1 a 2 ) = λ ( 1 30 1 20 1 20 8 105 ) ( a 1 a 2 ) and 1 3 a 1 + 1 2 a 2 = λ ( 1 30 a 1 + 1 20 a 2 ) (27.6) 1 2 a 1 + 4 5 a 2 = λ ( 1 20 a 1 + 8 105 a 2 ) (27.7) follows. The eigenvalues are determined with the help of eq. (26.4) det | l jk − λ m jk | = 0, which by substituting det ∣∣∣∣∣( 1 3 1 2 1 2 4 5 ) − λ ( 1 30 1 20 1 20 8 105 )∣∣∣∣∣ = det ∣∣∣∣∣ 1 3 − λ 1 30 1 2 − λ 1 20 1 2 − λ 1 20 4 5 − λ 8 105 ∣∣∣∣∣ = 0 follows. The development of the determinant leads to the quadratic equation 1 25200 λ 2 − 13 6300 λ + 1 60 = 0, which leads to the midnight formula λ 1/ 2 = 13 6300 ± √( 13 6300 ) 2 − 4 1 25200 1 60 2/ 25200 . 
<?page no="393"?> 27.4 Solution for 2 ′ th order 367 The solution are the eigenvalues λ 1 = λ (2) 1 = 42 λ 2 = λ (2) 2 = 10, where the exact value for λ 2 = (2 · π) 2 = 39.5. This leaves the determination of a 1 and a 2 with the following procedure. Substituting λ 2 into the equations (27.6) and (27.7) leads to the result zero. Substituting λ 1 leads to a 2 = − 2/ 3 a 1 u (2) 2 = a 1 (x − x 2 ) − a 2 (x − x 3 ) = a 1 (x − x 2 ) − 2 3 a 1 (x − x 3 ) = a 1 ( (x − x 2 ) − 2 3 (x − x 3 ) ) . The remaining coefficient a 1 is determined by normalisation � u (2) 2 � = √ � u (2) 2 , u (2) 2 � = 1 = √ ˆ 1 0 u (2) 2 dx = 1 = √ ˆ 1 0 [ a 1 ( (x − x 2 ) − 2 3 (x − x 3 ) )] 2 dx = 1 = a 1 √ ˆ 1 0 [( (x − x 2 ) − 2 3 (x − x 3 ) )] 2 dx = 1 = a 1 1 √ 1890 = 1. It remains a 1 = √ 1890 a 2 = − 2 3 √ 1890, which leads to the eigenvalue equation u (2) 2 = √ 1890 ( x − x 2 ) − 2 3 √ 1890 ( x − x 3 ) . A comparison can be seen in fig. 27.2. <?page no="394"?> 368 Eigenvalue problem-MOM - solution of − d 2 u/ dx 2 = λu Figure 27.2: Comparison of eigenfunction curves (N = 2) <?page no="395"?> Chapter 28 Common features of methods to solve differential equations The commonalities of the methods are named • Method of Moments (MOM), • Integral transformation, • Green’s method for the solution of differential equations. Common to all methods is the formation of the inner product � f, g � = ˆ c f g dΩ, for the solution of differential equations. All methods have in common the formation of the inner product, which provides for the integration of the functions f and G in the interval c over the domain Ω. The methods are presented individually and very briefly. 28.1 Method of Moments (MOM) The function to be solved is developed as a series of a self-selected function (base function). The series elements contain coefficients. By forming the inner product with a weighting function and transferring it into a matrix equation, the solution of the function searched for is determined. Galerkin intends to choose the weighting function equal to the base function. It is <?page no="396"?> 370 Common features of methods to solve differential equations L f = g, where L is a linear operator, f is the unknown function to be determined and g is the known function in the solution domain Ω. To find the solution, f is expressed as a series f = N � j=1 a j φ j . Here a j are the unknown development coefficients and φ j the development or basis functions. Thus the inhomogeneous equation follows N � j=1 a j L φ j = g. A weighting or test function w 1 , w 2 , w 3 is defined and with this the inner product N � j=1 a j � w k , L φ j � = � w k , g � with k = 1, 2, 3, ... is formed with it. In matrix notation it follows [l jk ] [a j ] = [g k ] , where [l jk ] = ⎛⎜⎜⎝ � w 1 , L φ 1 � � w 1 , L φ 2 � ... � w 2 , L φ 1 � � w 2 , L φ 2 � ... ... ... ... ⎞⎟⎟⎠ [a j ] = ⎛⎜⎜⎜⎜⎝ a 1 a 2 . . ⎞⎟⎟⎟⎟⎠ <?page no="397"?> 28.2 Integral transformation 371 [g k ] = ⎛⎜⎜⎜⎜⎝ 〈 w 1 , g 〉 〈 w 2 , g 〉 . . ⎞⎟⎟⎟⎟⎠ is. The determination of the coefficients a is possible with [a j ] = [l − 1 jk ] [g k ]. This corresponds to the solution of f. The expressions for the solution of f are given shortly by [φ n ] = [φ 1 φ 2 φ 3 ...] and f = [φ j ] [a j ] = [φ j ] [l − 1 jk ] [g k ]. See also chap. 10. 28.2 Integral transformation An integral transformation is understood to be a relation between two functions f(t) and F (p) of the form of the inner product F (p) = ˆ + ∞ −∞ K(p, t) f(t) dt. 
(28.1) The function F (p) is the image function with the image domain as the domain of definition. The function f(t) is called original function. Its domain of definition is called original domain. The function K(p, t) is called the core of the transformation. The variable t is a real variable. The variable p = δ + jω is a complex variable. A <?page no="398"?> 372 Common features of methods to solve differential equations shortened notation of the transformation is achieved by the introduction of the symbol T with F (p) = T { f(t) } . Integral transformations are suitable for solving ordinary and partial differential equations, integral equations and difference equations. Methods of integral transformations are often called operator methods. The two most important integral transformations are the Fourier transformation and the Laplace transformation. [4]. The Laplace transformation is characterised by the fact that the kernel of the transformation of eq. (28.1) is K(p, t) = e − pt . The mathematical method is called the Laplace transformation. In addition, individual steps within the method are named Laplace transformation. The Laplace transformation is applied to linear differential equations with constant coefficients. The solution of the differential equation takes place in the following three steps: 1. Transformation of the given differential equation with the help of the Laplace transformation into a linear algebraic equation. 2. Solving the algebraic equation. The solution of the linear algebraic equation is the image function of the solution we are looking for. 3. Transform back (inverse Laplace transformation) of the image function. This gives the solution of the sought differential equation (original function). 28.3 Green’s method To be solved ∇ 2 u(r) = f(r). With the linear differential operator L = ∇ 2 follows <?page no="399"?> 28.3 Green’s method 373 L u(r) = f(r). The multiplication with the inverse linear operator in the general representation is L − 1 L u(r) = L − 1 f(r) u(r) = L − 1 f(r). It is G the Green’s function and δ the Dirac’s delta function δ(r − r 0 ) = L G(r, r 0 ). By integration over the range Ω follows ˆ Ω δ(r − r 0 ) dr = ˆ Ω L G(r, r 0 ) dr = 1. This is followed by the multiplication L u(r) ˆ Ω δ(r − r 0 ) dr ︸ ︷︷ ︸ =1 = ˆ Ω L G(r, r 0 ) dr f(r) L u(r) = L ˆ Ω f(r) G(r, r 0 ) dr. By shortening with the linear operator, the solution follows after the searched function u(r) by forming the inner product with u(r) = ˆ Ω G(r, r 0 ) f(r) dr = � G(r, r 0 ), f(r) � . See also chap. 7. <?page no="401"?> Chapter 29 Things worth knowing about modelling In the preliminary stages of modelling a scientific-technical system, it is important to clarify what statements the model should make, for what purpose the model should be created and what effort should be invested in this. In the course of this, such ideas are discussed in the points • Categories of modelling, • Analytics versus Numerics. 29.1 Categories of modelling The models to be distinguished are divided into categories A to D according to increasing degree of complexity: • Category A: Mathematical, analytical model based on integral, differential equations to simulate a scientific-technical system. The model is used to represent rudimentary relationships and investigations as well as to develop a fundamental understanding of a scientific-technical system. • Category B: Mathematical, physical model based on integral, differential equations with integration of measured values, data tables (look-up tables). 
For this reason, models of category B are not purely analytical models, since discontinuities may occur when integrating measurement and data tables or when switching <?page no="402"?> 376 Things worth knowing about modelling between model structures. Category B models are used, for example, for rough dimensioning in the development process. The results obtained are input variables for models of category C and D. • Category C: Numerical model using numerical methods to reproduce scientific and technical systems. • Category D: Models resulting from the combination of category B and C models. Category D offers the highest degree of modelling, whose error in prediction is a minimum and the effort to spend is a maximum. 29.2 Analytics versus Numerics In tab. 29.1 there is a summarised comparison of analytical and numerical methods for magnetic actuator simulation. The reluctance method is mentioned here as representative of the analytical method and the finite element method as representative of the numerical method. The following results can be derived from the comparison: • The disadvantage of the reluctance method is just the advantage of the FEM method and vice versa. • The simultaneous application of both methods leads to a gain in knowledge. • The demand on the user of the reluctance method is the knowledge about the courses of magnetic fluxes as well as knowledge about partial flux density inhomogeneities (saturation phenomena), if applicable. • The demand on the user of the FEM is based on the reasonable discretisation of the FEM area. • An approximation of the simulation results using the reluctance method to the FEM simulation results is achieved by increasing the number of reluctances in a magnetic network. <?page no="403"?> 29.2 Analytics versus Numerics 377 Table 29.1: Comparison of analytical and numerical methods Analytical method Numerical method Representative Reluctance method Finite element method (FEM) Modelling Replication of the geometry The discretisation of the (Preprocessing) with magnetic resistances spatial area (airspace, and magnetic magnetic circuit) enables the voltage sources. definition of the node coordinates and the boundary conditions. Equation solution a) Equation linear: Numerical solution: (Processing) Analytical solution possible Setting up and b) Equation non-linear: solving the Numerical method required. matrix equations. Set up and solve the matrix equation. Result analytical equations graphical representation presentation static characteristic curves colour coding of the (Postprocessing) and transient processes results Advantage: Material and geometric Challenging geometric relationships apparent from arrangements possible, equations. resulting in the application of FEM in the fine dimensioning phase. Disadvantage: Simple geometric No material and arrangements possible, geometric correlations resulting in the use of the recognisable. reluctance method in the rough dimensioning phase. <?page no="405"?> Chapter 30 Useful standards In this chapter, selected standards are cited which are important for the work of an electrical engineer and scientist. It starts with useful standards for the preparation of documentation, theses, scientific reports. • DIN 1301-Part 1: ”units - unit names, unit signs“. The standard lists units of the International System of Units (SI) as well as other recommended units with size, unit name, unit symbol and definitions [7]. 
• DIN 1301 supplement: The supplement does not contain further standards, but additional information to part 1 [6]. • DIN 1302: ”General mathematical signs and terms“. This standard specifies mathematical signs and terms and their designations [8]. • DIN 1303: ”Vectors, matrices, tensors - signs and terms“. The standard deals with signs and terms concerning vectors, matrices and tensors. The algebraic structure is presented [9]. • DIN 1304-Part 1: ”Formula symbols - general formula symbols“. This standard specifies formula symbols for physical quantities. In addition, general formula symbols are listed which are used in physics and in engineering [10]. • DIN 1338: ”Formula notation and formula set“. This standard applies to the notation and typesetting of mathematical, physical and chemical formulae. It serves the author (students, correctors, ...) to create good formula sets [19]. <?page no="406"?> 380 Useful standards • DIN 4895 Part 1: ”Orthogonal coordinate systems - general terms“. This standard deals with coordinate systems in three-dimensional Euclidean spaces and with the representation of physical quantities in such coordinate systems [23]. • DIN 4895 Part 2: ”Orthogonal coordinate systems - differential operators of vector analysis “. In this standard differential operators of vector analysis are presented in their orthogonal coordinates [22]. In the continuation, standards regarding oscillations and waves are listed: • DIN 1311 Part 1: ”Oscillations and systems capable of oscillating“. This standard specifies terms relating to oscillations and systems capable of oscillating predominantly in the field of mechanics [12]. • DIN 1311 Part 2: ”Oscillations and systems capable of oscillating“. This standard specifies terms relating to oscillations and systems capable of oscillating, predominantly in the field of mechanics, and also provides guidelines for their application. It deals with oscillatory systems with one degree of freedom [13]. • DIN 1311 Part 3: ”Oscillations and systems capable of oscillating“. This standard specifies terms of oscillation technology and mechanics for several degrees of freedom [14]. • DIN 1311 Part 4: ”Oscillation theory - oscillating continua, wave “. This standard includes definitions of continua, equations of the oscillating continuum, oscillations of the continuum and waves [11]. This is followed by a summary list of standards in which time-dependent quantities and AC quantities are dealt with: • DIN 5483 Part 1: ”Designations of time dependence“. This standard contains the description of constant and periodic processes, the multiphase sinusoidal process, the multiphase process, sinusoidal-related processes, oscillations, the pulse, pulseshaped processes, the shock, the periodic pulse train, the modulated pulse, the jump and its differential quotients, the linear rise process, the wedge process, transition process, compensation process as well as the noise process [24]. • DIN 5483 Part 2: ”formula symbols“. This standard contains formula symbols for time-dependent quantities as well as examples for time-dependent quantities [25]. <?page no="407"?> 381 • DIN 5483 Part 3: ”Complex representation of sinusoidal time-dependent quantities“. This standard includes the complex representation of sinusoidal quantities, special complex values, the complex representation of a time-dependent vector, the rotary operator, calculation with complex quantities [26]. • DIN 40110 Part 1: ”Alternating current quantities - multiconductor circuits“. 
In this standard, measured and calculated quantities of alternating current circuits are presented in their functional interdependencies [20]. • DIN 40110 Part 2: ”AC quantities - Two-wire circuits“. This standard is to be used for calculations of multi-conductor circuits in electrical power engineering [21]. Standards in which the fundamentals and basic terms of measurement technology are presented and evaluations of measurements are carried out can be found in • DIN 1319 Part 1: ”Fundamentals of metrology - basic terms“. This standard defines and describes general basic terms of metrology (field of knowledge related to measurements) [15]. • DIN 1319 Part 2: ”Fundamentals of metrology terms for measuring equipment“. This standard defines terms of metrology which are of importance for the use of measuring equipment [16]. • DIN 1319 Part 3: ”Fundamentals of metrology - evaluation of measurements of a single measurand, uncertainty of measurement“. This standard applies to the determination of the value of a single measurand and its measurement uncertainty by evaluation of measurements [17]. • DIN 1319 Part 4: ”Fundamentals of metrology - evaluation of measurements, measurement uncertainty“. This standard applies to the common determination and specification of measurement results and measurement uncertainties of measurands in the evaluation of measurements [18]. <?page no="409"?> Bibliography [1] Bartsch, H. J.: Taschenbuch Mathematischer Formeln. Verlag Harri Deutsch, 1990 [2] Bastos, J. ; Ida, N.: Electromagnetics and Calculation of Fields; 2 Auflage. Springer, 1997 [3] Bastos, J. ; Sadowski, N.: Electromagnetic Modeling by Finite Element Methods. Marcel Dekker, Inc., 2003 [4] Bronstein, I. N. ; Semendjajew, K. A. ; Musilo, G. ; M¨ uhlig, H.: Taschenbuch der Mathematik. 5. Auflage. Verlag Harri Deutsch, 2000 [5] Burke-Hubbard, B.: Wavelets. Die Mathematik der kleinen Wellen. Birkh¨auser Verlag, 1997 [6] DIN-1301-1: Einheiten - Einheiten¨ahnliche Namen und Zeichen; Beiblatt zu Teil 1. Beuth Verlag GmbH, 1982 [7] DIN-1301-1: Einheiten - Teil 1: Einheitennamen, Einheitenzeichen. Beuth Verlag GmbH, 2002 [8] DIN-1302: Allgemeine mathematische Zeichen und Begriffe. Beuth Verlag GmbH, 1999 [9] DIN-1303: Vektoren, Matrizen, Tensoren - Zeichen und Begriffe. Beuth Verlag GmbH, 1987 [10] DIN-1304-1: Formelzeichen - Teil 1: Allgemeine Formelzeichen. Beuth Verlag GmbH, 1994 [11] DIN-1311: Blatt 4: Schwingungslehre. Schwingende Kontinua, Wellen. Beuth Verlag GmbH, 1974 383 <?page no="410"?> 384 BIBLIOGRAPHY [12] DIN-1311-1: Schwingungen und schwingungsf¨ahige Systeme; Teil 1: Grundbegriffe, Einteilungen. Beuth Verlag GmbH, 2000 [13] DIN-1311-2: Schwingungen und schwingungsf¨ahige Systeme; Teil 2: Lineare, zeitinvariante schwingungsf¨ahige Systeme mit einem Freiheitsgrad. Beuth Verlag GmbH, 2002 [14] DIN-1311-3: Schwingungen und schwingungsf¨ahige Systeme; Teil 3: Lineare, zeitinvariante schwingungsf¨ahige Systeme mit endlich vielen Freiheitsgraden. Beuth Verlag GmbH, 2000 [15] DIN-1319-1: Grundlagen der Messtechnik; Teil 1: Grundbegriffe. Beuth Verlag GmbH, 1995 [16] DIN-1319-2: Grundlagen der Messtechnik; Teil 2: Begriffe f¨ ur Messmittel. Beuth Verlag GmbH, 2005 [17] DIN-1319-3: Grundlagen der Messtechnik; Teil 3: Auswertung von Messungen einer einzelnen Messgr¨oße, Messunsicherheit. Beuth Verlag GmbH, 1996 [18] DIN-1319-4: Grundlagen der Messtechnik; Teil 4: Auswertung von Messungen, Messunsicherheit. Beuth Verlag GmbH, 1999 [19] DIN-1338: Formelschreibweise und Formelsatz. 
Beuth Verlag GmbH, 2011 [20] DIN-40110-1: Wechselstromgr¨oßen; Teil 1: Zweileiterstromkreise. Beuth Verlag GmbH, 1994 [21] DIN-40110-2: Wechselstromgr¨oßen; Teil 2: Mehrleiter-Stromkreise. Beuth Verlag GmbH, 2002 [22] DIN-4895: Orthogonale Koordinatensysteme - Teil 2: Differentialoperatoren der Vektoranalysis. Beuth Verlag GmbH, 1977 [23] DIN-4895: Orthogonale Koordinatensysteme - Teil 1: Allgemeine Begriffe. Beuth Verlag GmbH, 1997 [24] DIN-5483-1: Zeitabh¨angige Gr¨oßen - Teil 1: Benennung der Zeitabh¨angigkeit. Beuth Verlag GmbH, 1983 <?page no="411"?> BIBLIOGRAPHY 385 [25] DIN-5483-2: Zeitabh¨angige Gr¨oßen - Teil 2: Formelzeichen. Beuth Verlag GmbH, 1982 [26] DIN-5483-3: Zeitabh¨angige Gr¨oßen - Teil 3: Komplexe Darstellung sinusf¨ormig zeitabh¨angiger Gr¨oßen. Beuth Verlag GmbH, 1994 [27] Eom, H. J.: Primary Theory of Electromagnetics. Springer Verlag, 2013 [28] Euklid: Die Elemente - B¨ ucher 1 bis 13. Ostwalds Klassiker der exakten Wissenschaften, Band 235; Harri Deutsch, 2010 [29] Eynard, B.: Zufallsmatrizen - Neue universelle Gesetze. Spektrum der Wissenschaft, Heft 10, 2018 [30] Fetzer, J. ; Haas, M. ; Kurz, S: Numerische Berechnung elektromagnetischer Felder - Band 627. Expert Verlag GmbH, 2002 [31] Feynman, R. ; Leighton, R. ; Sands, M.: Lectures on Physics Vol. I. Addison- Wesley Publishing Company, 1963 [32] Feynman, R. ; Leighton, R. ; Sands, M.: Lectures on Physics Vol. II. Addison-Wesley Publishing Company, 1963 [33] Fletcher, C.A.J.: Computational Galerkin Methods. Springer, 1984 [34] Gellert, W. ; K¨ ustner, H. ; Hellwich, M. ; K¨ astner, H.: Kleine Enzyklop¨adie Mathematik; 2te Auflage. VEB-Verlag, 1977 [35] Graf, J. H. ; Gubler, E.: Einleitung in die Theorie der Bessel’schen Funktionen - Die Besselfunktion erster Art. Verlag Wyss, 1898 [36] Graf, J. H. ; Gubler, E.: Einleitung in die Theorie der Bessel’schen Funktionen - Die Besselfunktion zweiter Art. Verlag Wyss, 1900 [37] Green, G.: An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. https: / / arxiv.org/ abs/ 0807.0088, Letzter Zugriffam 29.12.20 [38] Harrington, R. F.: Field Computation by Moment Methods. Macmillan, 1968 [39] Hering, E. ; Steinhardt, H.: Taschenbuch der Mechatronik. Fachbuchverlag Leipzig, 2005 <?page no="412"?> 386 BIBLIOGRAPHY [40] Jackson, J. D.: Classical Electrodynamics. 4th Ed. John Wiley and Sons, Inc., 1998 [41] Josipovic, M.: Geometric Multiplication of Vectors - An Introduction to Geometric Algebra in Physics. Birkh¨auser, 2020 [42] Jung, M. ; Langer, U.: Methode der finiten Elemente f¨ ur Ingenieure; 2. Auflage. Springer/ Vieweg, 2013 [43] Kiepert, L.: Integral-Rechnung Band I. Helwingsche Verlagsbuchhandlung, 1922 [44] Kiepert, L.: Integral-Rechnung Band II - Theorie der gew¨ohnlichen Differential- Gleichungen. Helwingsche Verlagsbuchhandlung, 1922 [45] Krotsch, J.: Mehrkriterielle Optimierung permanentmagneterregter Synchronmotoren in Außenl¨ auferbauweise unter besonderer Ber¨ ucksichtigung der Radialkr¨afte. Dissertationsschrift, Friedrich-Alexander-Universit¨ at, Erlangen-N¨ urnberg, 2016 [46] Kuchling, H.: Taschenbuch der Physik. Verlag Harri Deutsch, 1991 [47] Lee, Y. H.: Introduction to Engineering Electromagnetics. Springer Verlag, 2013 [48] Leighthill, M. J.: Introduction to Fourier Analysis and generalised Functions. Cambridge, 1959 [49] Liedl, R. ; Kuhnert, K.: Analysis in einer Variablen. B. I. Wissenschaftsverlag M¨ unchen, 1992 [50] Marsden, J. E. ; Tromba, A. J.: Vector Calculus. W. H. 
Freeman and Company, 2000 [51] Maxwell, J. C.: On the Geometrical Mean Distance of Two Figures on a Plane. Transactions of the Royal Society of Edinburgh, Vol. XXVL, 1872 [52] Maxwell, J. C.: A Treatise on Electricity and Magnetism, Volume I. Cambridge University Press, 1873 [53] Maxwell, J. C.: A Treatise on Electricity and Magnetism, Volume II. Cambridge University Press, 1873 <?page no="413"?> BIBLIOGRAPHY 387 [54] Morisco, D.: Berechnung der Stromverdr¨angung in Mehrleiteranordnungen in der Umgebung von bewegten ferromagnetischen K¨orpern durch Verkn¨ upfung von Finite Elemente Methode und Teilleitermethode. Dissertationsschrift TU Ilmenau; Cuvillier Verlag G¨ ottingen, 2020 [55] Munz, C.D. ; Westermann, T.: Numerische Behandlung gew¨ohnlicher und partieller Differenzialgleichungen; 2. Auflage. Springer, 2009 [56] N. N.: National Institute of Standards and Technology. https: / / www.nist.gov/ , letzter Zugriffam 01.08.2022 [57] Papula, L.: Mathematik f¨ ur Ingenieure und Naturwissenschaftler - Band 2. Verlag Vieweg/ Teubner, 2009 [58] Philippow, E.: Taschenbuch Elektrotechnik, Band 1: Allgemeine Grundlagen. VEB Verlag Technik Berlin, 1976 [59] Plonsey, R. ; Collin R. E.: Principles and Applications of Electromagnetic Fields. Mc Graw-Hill Book Company, Inc., 1961 [60] Riley, K. F. ; Hobson, M. P. ; Bence, S. J.: Mathematical Methods for Physics and Engineering - Third Edition. Cambridge University Press, 2015 [61] Rosa, E. B. ; Grover, F. W.: Scientific Papers of the Bureau of Standards. Formulas and Tables for the Calculation of Mutual and Self-Inductance, No. 169; Washington Government Printing Office, 1916 [62] Sadiku, M. N. O.: Numerical Techniques in Elektromagnetics - 2nd ed. CRC Press LLC, 2001 [63] Sawitzki, A.: Zur Berechnung der elektromagnetischen Felder von ausgedehnten komplexen Systemen durch Erweiterung der Momentenmethode um eine effiziente Rasterung. Dissertationsschrift, Technische Universit¨ at Ilmenau, 2017 [64] Schwarzenberg-Czerny, A.: On Matrix Factorization and Efficient Least Squares Solution. Astronomy and Astrophysics Supplement, v.110, p.405, 1995 [65] S¨ uße, R.: Theoretische Grundlagen der Elektrotechnik - Band 2. Teubner Verlag, 2006 <?page no="414"?> 388 BIBLIOGRAPHY [66] Simonyi, K.: Theoretische Elektrotechnik, 10. Auflage. Barth Verlag, Leipzig, 1993 [67] Weisstein, E.: Wolfram Mathworld - The web’s most extensive mathematics resource. https: / / mathworld.wolfram.com, Letzter Zugriffam 15.02.21 [68] Weller, F.: Numerische Mathematik f¨ ur Ingenieure und Naturwissenschaftler. Verlag Vieweg, 1996 <?page no="415"?> Appendix A A.1 Integrals The integrals were taken from the literature [4]: ˆ ln x dx = x ln x − x (A.1) ˆ ln(x 2 − x 1 ) dx 1 = [ln(x 2 − x 1 ) − 1] (x 1 − x 2 ) (A.2) ˆ [ln(x 2 − x 1 ) − 1] (x 1 − x 2 ) dx 2 = 3x 22 4 − x 21 ln(x 2 − x 1 ) 2 − x 22 ln(x 2 − x 1 ) 2 − 3x 1 x 2 2 + x 1 x 2 ln(x 2 − x 1 ) (A.3) A.2 Integrals for chap. 3.3 Inner integral: ˆ c 0 ln | x 2 − x 1 | � �� � r dx 1 = [ln | x 2 − x 1 | − 1] (x 1 − x 2 ) �� x 1 =c x 1 =0 = [ln | x 2 − c | − 1] (c − x 2 ) − [ln | − x 2 | − 1] ( − x 2 ) = ln | x 2 − c | c − ln | x 2 − c | x 2 − c + ln | x 2 | x 2 = ln | x 2 − c | (c − x 2 ) − c + ln | x 2 | x 2 . 
Outer integral: ˆ a+b a ⎡⎣ ln | x 2 − c | (c − x 2 ) � �� � A − c ���� B + ln | x 2 | x 2 � �� � C ⎤⎦ dx 2 = ˆ a+b a A dx 2 − ˆ a+b a B dx 2 + ˆ a+b a C dx 2 389 <?page no="416"?> 390 Appendix • Integral A: ˆ a+b a A dx 2 = ˆ a+b a c ln | x 2 − c | dx 2 ︸ ︷︷ ︸ A 1 − ˆ a+b a x 2 ln | x 2 − c | dx 2 ︸ ︷︷ ︸ A 2 ˆ a+b a A 1 dx 2 = ln | x 2 − c | (x 2 − c) c − c x 2 ∣∣ x 2 =a+b x 2 =a = ln | a + b − c | (ac + bc − c 2 ) − bc − ln | a − c | (ac − c 2 ). For b = c = s it follows ˆ a+b a A 1 dx 2 = ln | a | as − s 2 − ln | a − s | (as − s 2 ) ˆ a+b a A 2 dx 2 = ln | x 2 − c | (x 22 − c 2 ) 2 − cx 2 2 − x 22 4 ∣∣∣∣ x 2 =a+b x 2 =a = ln | a + b − c | ((a + b) 2 − c 2 ) 2 − c(a + b) 2 − (a + b) 2 4 − ln | a − c | (a 2 − c 2 ) 2 + ca 2 + a 2 4 = ln | a + b − c | ((a + b) 2 − c 2 ) 2 − 2ab + b 2 4 − bc 2 − ln | a − c | (a 2 − c 2 ) 2 . For b = c = s it follows ˆ a+b a A 2 dx 2 = ln | a | ((a + s) 2 − s 2 ) 2 − 2as + s 2 4 − s 2 2 − ln | a − s | (a 2 − s 2 ) 2 = ln | a | ( a 2 2 + as ) − as 2 − 3s 2 4 − ln | a − s | ( a 2 − s 2 2 ) ˆ a+b a A dx 2 = ˆ a+b a A 1 dx 2 − ˆ a+b a A 2 dx 2 = ln | a + b − c | [( ac + bc − c 2 ) − (a + b) 2 − c 2 2 ] − ln | a − c | [( ac − c 2 ) − a 2 − c 2 2 ] − bc 2 + 2ab + b 2 4 . <?page no="417"?> A.2 Integrals for chap. 3.3 391 For b = c = s it follows ˆ a+b a A 1 dx 2 − ˆ a+b a A 2 dx 2 = ln | a | as − s 2 − ln | a − s | (as − s 2 ) − ln | a | ( a 2 2 + as ) + 2as 4 + 3s 2 4 + ln | a − s | ( a 2 − s 2 2 ) . = − a 2 2 ln | a | − ( as − s 2 + a 2 2 ) ln | a − s | − s 2 4 + as 2 . • Integral B: ˆ a+b a B dx 2 = c x 2 ∣∣ x 2 =a+b x 2 =a = c(a + b) − ca = bc. For b = c = s it follows ˆ a+b a B dx 2 = s 2 • Integral C: ˆ a+b a C dx 2 = x 22 (ln | x 2 | − 1/ 2) 2 ∣∣∣∣ x 2 =a+b x 2 =a = (a + b) 2 (ln | (a + b) | − 1/ 2) 2 − a 2 (ln | a | − 1/ 2) 2 = (a + b) 2 2 ln | a + b | − (a + b) 2 4 − a 2 2 ln | a | + a 2 4 = (a + b) 2 2 ln | a + b | − 2ab + b 2 4 − a 2 2 ln | a | . For b = c = s it follows ˆ a+b a C dx 2 = ( as + a 2 + s 2 2 ) ln | a + s | − as 2 − s 2 4 − a 2 2 ln | a | . In summary, the final integral follows from the sum of the previously solved integrals with <?page no="418"?> 392 Appendix ˆ a+b a (A − B + C) dx 2 = ln | a + b − c | [( ac + bc − c 2 ) − (a + b) 2 − c 2 2 ] − ln | a − c | [( ac − c 2 ) − a 2 − c 2 2 ] − bc 2 + 2ab + b 2 4 − bc + (a + b) 2 2 ln | a + b | − 2ab + b 2 4 − a 2 2 ln | a | = ln | a + b − c | [( ac + bc − c 2 ) − (a + b) 2 − c 2 2 ] − ln | a − c | [( ac − c 2 ) − a 2 − c 2 2 ] − 3bc 2 +(a + b) 2 2 ln | a + b | − a 2 2 ln | a | . For b = c = s it follows ˆ a+b a (A − B + C) dx 2 = ln | a | [ as − (a + s) 2 − s 2 2 ] − ln | a − s | [ as − s 2 − a 2 − s 2 2 ] − 3s 2 2 +(a + s) 2 2 ln | a + s | − a 2 2 ln | a | . = ln | a | ( − a 2 2 ) − ln | a − s | ( as − s 2 + a 2 2 ) − 3s 2 2 + ln | a + s | ( as + a 2 + s 2 2 ) − ln | a | ( a 2 2 ) = ln | a + s | ( as + a 2 + s 2 2 ) − ln | a | s 2 − 3s 2 2 − ln | a − s | ( as − s 2 + a 2 2 ) . A.3 Integrals for chap. 3.5 Inner integral: ˆ s 0 ln | x 2 − x 1 | ︸ ︷︷ ︸ r dx 1 = ˆ x 2 0 ln | x 2 − x 1 | dx 1 ︸ ︷︷ ︸ A + ˆ s x 2 ln | x 1 − x 2 | dx 1 ︸ ︷︷ ︸ B <?page no="419"?> A.3 Integrals for chap. 3.5 393 The integral is to be set up in such a way that singularities in the argument of the logarithm are avoided. The solution corresponds to the GM D of a point on the line to the line itself. With integral eq. (A.2) follows • Integral A: ˆ x 1 =x 2 x 1 =0 ln | x 2 − x 1 | dx 1 = (x 1 − x 2 ) [ln | x 2 − x 1 | − 1] = 0 − ( − x 2 [ln | x 2 | − 1]) = x 2 ln | x 2 | − x 2 . 
• Integral B: ˆ x 1 =s x 1 =x 2 ln | x 2 − x 1 | dx 1 = (x 2 − x 1 ) [ln | x 1 − x 2 | − 1] = 0 − [x 2 ln | s − x 2 | − x 2 − s ln | s − x 2 | +s] = − x 2 ln | s − x 2 | +x 2 + s ln | s − x 2 | − s. The result of both integrals is ˆ s 0 ln | x 2 − x 1 | dx 1 = x 2 ln | x 2 | − x 2 − x 2 ln | s − x 2 | +x 2 + s ln | s − x 2 | − s = x 2 ln | x 2 | +(s − x 2 ) ln | s − x 2 | − s. Outer integral: ˆ s 0 ⎛⎝ x 2 ln | x 2 | � �� � A + (s − x 2 ) ln | s − x 2 | � �� � B − s ���� C ⎞⎠ dx 2 . • Integral A: ˆ s 0 x 2 ln | x 2 | dx 2 = (x 2 2 (ln x 2 − 1/ 2))/ 2 �� x 2 =s x 2 =0 = s 2 2 ln s − s 2 4 . • Integral B: ˆ s 0 (s − x 2 ) ln | s − x 2 | dx 2 = � ln | s − x 2 | − 1 2 � (s − x 2 ) 2 2 ���� x 2 =s x 2 =0 = s 2 2 ln | s | − s 2 4 . <?page no="420"?> 394 Appendix • Integral C: ˆ s 0 s dx 2 = sx 2 ∣∣ x 2 =s x 2 =0 = s 2 . In summary, the following integral follows ˆ a+b a (A + B − C) dx 2 = s 2 2 ln s − s 2 4 + s 2 2 ln | s | − s 2 4 − s 2 = s 2 [ ln | s | − 3 2 ] . A.4 Integrals for chap. 3.6 Inner integral: ˆ a=A a=0 ln √ (a − b) 2 + c 2 ︸ ︷︷ ︸ r da = ln( √ (a − b) 2 + c 2 )(a − b) − a + c arctan ( a − b c ) ∣∣∣∣ a=A a=0 = ln( √ (A − b) 2 + c 2 )(A − b) − A + c arctan ( A − b c ) − [ ln( √ ( − b) 2 + c 2 )(0 − b) − 0 + c arctan ( 0 − b c )] = ln( √ (A − b) 2 + c 2 )(A − b) ︸ ︷︷ ︸ A − A ︸︷︷︸ B + c arctan ( A − b c ) ︸ ︷︷ ︸ C + b ln( √ ( − b) 2 + c 2 ) ︸ ︷︷ ︸ D − c arctan ( − b c ) ︸ ︷︷ ︸ E . Outer integral: • Integral A: <?page no="421"?> A.4 Integrals for chap. 3.6 395 ˆ b=B b=0 A db = (A − b) 2 4 − c 2 ln((A − b) 2 + c 2 ) 4 − ln( √ (A − b) 2 + c 2 ) (A − b) 2 2 ∣∣∣∣ b=B b=0 = (A − B) 2 4 − c 2 ln((A − B) 2 + c 2 ) 4 − ln( √ (A − B) 2 + c 2 ) (A − B) 2 2 − A 2 4 + c 2 ln(A 2 + c 2 ) 4 + A 2 ln( √ A 2 + c 2 ) 2 . • Integral B: ˆ b=B b=0 B db = AB • Integral C: ˆ b=B b=0 C db = c 2 ln((A − b) 2 + c 2 ) 2 − c (A − b) arctan ( A − b c ) ∣∣ b=B b=0 = c 2 ln((A − B) 2 + c 2 ) 2 − c (A − B) arctan ( A − B c ) − c 2 ln(A 2 + c 2 ) 2 + Ac arctan ( A c ) . • Integral D: ˆ b=B b=0 D db = c 2 ln(b 2 + c 2 ) 4 − b 2 4 + b 2 ln( √ b 2 + c 2 ) 2 ∣∣∣∣ b=B b=0 = c 2 ln(B 2 + c 2 ) 4 − B 2 4 + B 2 ln( √ B 2 + c 2 ) 2 − c 2 ln(c 2 ) 4 . • Integral E: ˆ b=B b=0 E db = c 2 ln(b 2 + c 2 ) 2 − bc arctan ( b c ) ∣∣∣∣ b=B b=0 = c 2 ln(B 2 + c 2 ) 2 − Bc arctan ( B c ) − c 2 ln(c 2 ) 2 . In summary, the final integral follows from the sum of the previously solved integrals with <?page no="422"?> ˆ b=B b=0 (A − B + C + D − E) db = (A − B) 2 4 − c 2 ln((A − B) 2 + c 2 ) 4 − ln( √ (A − B) 2 + c 2 )(A − B) 2 2 − A 2 4 + c 2 ln(A 2 + c 2 ) 4 − AB + c 2 ln((A − B) 2 + c 2 ) 2 + A 2 ln( √ A 2 + c 2 ) 2 − c(A − B) arctan ( A − B c ) − c 2 ln(A 2 + c 2 ) 2 +Ac arctan ( A c ) + c 2 ln(B 2 + c 2 ) 4 − B 2 4 + B 2 ln √ B 2 + c 2 2 − c 2 ln(c 2 ) 4 − c 2 ln(B 2 + c 2 ) 2 +Bc arctan ( B c ) + c 2 ln(c 2 ) 2 = − 3 2 AB − c 2 4 ln((A − B) 2 + c 2 ) − (A − B) 2 2 ln( √ (A − B) 2 + c 2 ) + c 2 4 ln(A 2 + c 2 ) + A 2 2 ln( √ A 2 + c 2 ) + c 2 2 ln((A − B) 2 + c 2 ) +(Bc − Ac) arctan ( A − B c ) − c 2 2 ln(A 2 + c 2 ) +Ac arctan ( A c ) − c 2 4 ln(B 2 + c 2 ) + B 2 2 ln( √ B 2 + c 2 ) + c 2 4 ln(c 2 ) + Bc arctan ( B c ) . (A.4) For A = B = s it follows the integral ˆ b=s b=0 (A − B + C + D − E) db = − 3 2 s 2 + c 2 2 ln(c 2 ) − c 2 2 ln(s 2 + c 2 ) + s 2 ln( √ s 2 + c 2 ) +2cs arctan ( s c ) = s 2 ( ln( √ s 2 + c 2 ) − 3 2 ) + c 2 2 ( ln(c 2 ) − ln(s 2 + c 2 ) ) +2cs arctan ( s c ) . (A.5) With regard to the integration constants, it should be noted that these are omitted in the present determined integrals. 
A.5 MATLAB-Code - Heat diffusion script

% --------------------------------------------------------------
% Reinhold-Wuerth University Kuenzelsau Campus
% Author: Prof. Dr.-Ing. J. Ulm
% MATLAB program for solving the heat diffusion equation
% Date: Summer 2023
% --------------------------------------------------------------
function pde_Diffusion
close all; clear all; clc;

t = [0:0.5:4];                 % linearly spaced vector (time samples)
%t = linspace(0,1.0E-3,10);    % linearly spaced vector
x = linspace(0,0.05,20);       % linearly spaced vector (position samples)
%x = linspace(0,0.05,150);     % linearly spaced vector
m = 0;                         % symmetry parameter, m = 0: plane (slab) problem

% Solves boundary value problems of elliptic and parabolic PDEs
u = pdepe(m,@pdex1pde,@pdex1ic,@pdex1bc,x,t);
% Matrix u: rows contain the time samples (starting at t = 0),
%           columns contain the position samples (starting at x = 0)

Mat_Name = 'Kupfer';           % material name, used in the figure title

% Figure position
links  = 20;    % reference coordinate of the left figure edge (vertical)
unten  = 300;   % reference coordinate of the lower figure edge (horizontal)
breite = 600;   % figure width, measured from the left-edge reference coordinate
hoehe  = 350;   % figure height, measured from the lower-edge reference coordinate
figure('position',[links,unten,breite,hoehe]);
surf(x,t,u,'Linewidth',1.5);
set(gca,'Fontsize',12);
view(8,12);
%Titel = strcat('Werkstoff: ', Mat_Name, ' \kappa / \kappa_0 = ',num2str(Faktor));
Titel = strcat('Werkstoff: ', Mat_Name);
title(Titel);
xlabel('(Linker Rand) Weg x [m] (rechter Rand)');
ylabel('Zeit t [s]');
zlabel('\varphi_h [°C]');
%print -depsc2 -tiff Diff1.eps

links  = 700;   % reference coordinate of the left figure edge (vertical)
unten  = 300;   % reference coordinate of the lower figure edge (horizontal)
breite = 600;   % figure width, measured from the left-edge reference coordinate
hoehe  = 350;   % figure height, measured from the lower-edge reference coordinate
figure('position',[links,unten,breite,hoehe]);
%plot(rot90(u,3),'Linewidth',1.2);
plot(x,rot90(u,3),'Linewidth',1.2);
Titel = strcat('Werkstoff: ', Mat_Name);
set(gca,'Fontsize',12);
xlabel('(Linker Rand) Weg x [m] (rechter Rand)');
ylabel('[°C]');
title(Titel);
legend('4,0 s','3,5 s','3,0 s','2,5 s','2,0 s','1,5 s','1,0 s','0,5 s','0,0 s')
grid on;

% Examples of figure export commands
%print -depsc2 -tiff Diff2.eps
%print -depsc2 -tiff Name.eps
%print -dpng Name.png
%print -dbmp Name.bmp

% --------------------------------------------------------------
% Partial differential equation
% Material data from Kuchling, Taschenbuch der Physik
function [c,f,s] = pdex1pde(x,t,u,DuDx)
Rho   = 8933;   % kg/m^3
cth   = 383;    % J/(kg K)
Lamda = 384;    % W/(m K)
c = Rho * cth / Lamda;
f = DuDx;
s = 0;

% --------------------------------------------------------------
% Initial condition
function u0 = pdex1ic(x)
u0 = 0;

% --------------------------------------------------------------
% Boundary conditions
function [pl,ql,pr,qr] = pdex1bc(xl,ul,xr,ur,t)
% l = left boundary, start of the specimen at x = 0
% r = right boundary, end of the specimen at x = x_max

% Boundary condition example 1 - copper - COMSOL reference
% with initial condition u0 = 0
% Temperatur = 100;
% pl = ul-Temperatur;
% ql = 0;
% pr = ur-0;
% qr = 0;
%print -depsc2 -tiff Diffusion_Beispiel_1.eps

% Boundary condition example 2 - diffusion length halved compared to example 1
% with initial condition u0 = 0
Temperatur = 100.0;
pl = ul-Temperatur;
ql = 0;
pr = ur;
qr = 20;
%print -depsc2 -tiff Diffusion_Beispiel_2.eps

% Boundary condition example 3 - diffusion length halved compared to example 1
% with initial condition u0 = 0
% Temperatur = 100;
% pl = ul-Temperatur;
% ql = 0;
% pr = 0;
% qr = ur-1;
%print -depsc2 -tiff Diffusion_Beispiel_3.eps
% --------------------------------------------------------------
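If a single temperature value is needed from the computed solution, it can be interpolated from the matrix u. The following lines are a sketch (not part of the original script) that would have to be placed inside the function after the call to pdepe; the query point x = 0.01 m, t = 2 s is an assumed example.

% Sketch: read one temperature value from the pdepe result (assumed example).
% u is length(t)-by-length(x), so t spans the rows and x spans the columns.
phi_q = interp2(x, t, u, 0.01, 2.0);       % phi_h at x = 0.01 m, t = 2 s
fprintf('phi_h(0.01 m, 2 s) = %.2f degC\n', phi_q);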
A.6 MATLAB code - magnetic field diffusion script

% --------------------------------------------------------------
% Reinhold-Wuerth University Kuenzelsau Campus
% Author: Prof. Dr.-Ing. J. Ulm
% MATLAB program for solving the magnetic field diffusion equation
% Date: Summer 2023
% --------------------------------------------------------------
function pde_Diffusion
close all; clear all; clc;

t = [0E-4:2E-4:1.4E-3];        % linearly spaced vector (time samples)
%t = linspace(0,1.0E-3,10);    % linearly spaced vector
x = linspace(0,0.010,50);      % linearly spaced vector (position samples)
m = 0;                         % symmetry parameter, m = 0: plane (slab) problem

% Solves boundary value problems of elliptic and parabolic PDEs
u = pdepe(m,@pdex1pde,@pdex1ic,@pdex1bc,x,t);
% Matrix u: rows contain the time samples (starting at t = 0),
%           columns contain the position samples (starting at x = 0)

% Call the material function to obtain the material name
[my_kappa,Mat_Name] = Permeabilitaet_kappa(0.1);

% Figure position
links  = 20;    % reference coordinate of the left figure edge (vertical)
unten  = 300;   % reference coordinate of the lower figure edge (horizontal)
breite = 600;   % figure width, measured from the left-edge reference coordinate
hoehe  = 350;   % figure height, measured from the lower-edge reference coordinate
figure('position',[links,unten,breite,hoehe]);
surf(x,t,u,'Linewidth',1.5);
set(gca,'Fontsize',12);
view(8,12);
%Titel = strcat('Werkstoff: ', Mat_Name, ' \kappa / \kappa_0 = ',num2str(Faktor));
Titel = strcat('Werkstoff: ', Mat_Name);
%title(Titel);
xlabel('(Linker Rand) Weg x [m] (rechter Rand)');
ylabel('Zeit t [s]');
zlabel('B [T]');
%print -depsc2 -tiff Diff_B_Galerkin_01.eps

links  = 700;   % reference coordinate of the left figure edge (vertical)
unten  = 300;   % reference coordinate of the lower figure edge (horizontal)
breite = 600;   % figure width, measured from the left-edge reference coordinate
hoehe  = 350;   % figure height, measured from the lower-edge reference coordinate
figure('position',[links,unten,breite,hoehe]);
%plot(rot90(u,3),'Linewidth',1.2);
plot(x,rot90(u,3),'Linewidth',1.2);
Titel = strcat('Werkstoff: ', Mat_Name);
set(gca,'Fontsize',12);
xlabel('(Linker Rand) Weg x [m] (rechter Rand)');
ylabel('B [T]');
legend('1,4 ms','1,2 ms','1,0 ms','0,8 ms','0,6 ms','0,4 ms','0,2 ms','0,0 ms');
%title(Titel);
grid on;

% Examples of figure export commands
%print -depsc2 -tiff Diff_B_Galerkin_02.eps
%print -depsc2 -tiff Name.eps
%print -dpng Name.png
%print -dbmp Name.bmp

% --------------------------------------------------------------
% Partial differential equation
function [c,f,s] = pdex1pde(x,t,u,DuDx)
[my_kappa,Mat_Name] = Permeabilitaet_kappa(u);
c = my_kappa;
f = DuDx;
s = 0;

% --------------------------------------------------------------
% Initial condition
function u0 = pdex1ic(x)
u0 = 0;

% --------------------------------------------------------------
% Boundary conditions
function [pl,ql,pr,qr] = pdex1bc(xl,ul,xr,ur,t)
% l = left boundary, start of the specimen at x = 0
% r = right boundary, end of the specimen at x = x_max

% Boundary condition example 1 - copper - COMSOL reference
% with initial condition u0 = 0
% Tesla = 1.0;
% pl = ul-Tesla;
% ql = 0;
% pr = ur-0.2;
% qr = 0;
%print -depsc2 -tiff Diffusion_Beispiel_1.eps

% Boundary condition example 2 - diffusion length halved compared to example 1
% with initial condition u0 = 0
Tesla = 1.0;
pl = ul-Tesla;
ql = 0;
pr = ur;
qr = 1;
%print -depsc2 -tiff Diffusion_Beispiel_2.eps

% Boundary condition example 3 - diffusion length halved compared to example 1
% with initial condition u0 = 0
% Tesla = 1.0;
% pl = ul-Tesla;
% ql = 0;
% pr = 0;
% qr = ur-1;
%print -depsc2 -tiff Diffusion_Beispiel_3.eps
% --------------------------------------------------------------

% --------------------------------------------------------------
function [my_kappa,Mat_Name] = Permeabilitaet_kappa(B)
% --------------------------------------------------------------
% Available materials
Werkstoff_M = 2;   % select material (1 ... 5)
perm = 3;          % 1 = differential permeability
                   % 2 = permeability
                   % 3 = permeability of vacuum
if Werkstoff_M == 1
    % Vacoflux 50
    Mat_Name = ['Vacoflux50'];
    H_mat = [0 13 20 22 26 30 34 39 45 52 60 69 79 91 104 138 242 447 782 2393 5514 15000 50000];
    B_mat = [0 0.04 0.1 0.16 0.26 0.43 0.65 0.86 1.09 1.28 1.42 1.51 1.59 1.65 1.71 1.81 1.95 2.07 2.15 2.25 2.28 2.30 2.40];
    kappa = 2.5.*10.^6;
elseif Werkstoff_M == 2
    % Copper
    Mat_Name = ['Kupfer'];
    H_mat = [0 10 25000 50000];
    B_mat = [0 1.2566E-5 0.0314159 0.062831853];
    kappa = 56.2.*10.^6;
elseif Werkstoff_M == 3
    Mat_Name = ['9SMn28K'];
    H_mat = [0 396 793 1031 1190 1587 3174 5952 9920 19841 30158 50000];
    B_mat = [0 0.6 0.97 1.15 1.24 1.39 1.53 1.63 1.7 1.75 1.77 1.8];
    kappa = 4.5.*10.^6;    % conductivity estimated
elseif Werkstoff_M == 4
    Mat_Name = ['11SMn 30'];
    H_mat = [0 1250 2500 5000 10000 15000 20000 25000 30000 35000 40000 45000 50000];
    B_mat = [0 0.95 1.29 1.65 1.81 1.89 1.95 2 2.02 2.04 2.05 2.07 2.09];
    kappa = 4.55.*10.^6;
elseif Werkstoff_M == 5
    Mat_Name = ['100Cr6'];
    H_mat = [0 100 300 500 1250 2500 5000 10000 15000 20000 25000 30000 35000 40000 45000 50000];
    B_mat = [0 0.01 0.10 0.21 0.63 1.17 1.37 1.47 1.54 1.58 1.60 1.63 1.65 1.66 1.67 1.69];
    kappa = 4.65.*10.^6;
else
    error('Waehle Werkstoff aus!!!')
end
my_0 = 4*pi*10^-7;
if perm == 1
    % Differential permeability
    dB = diff(B_mat);
    dH = diff(H_mat);
    my_mat = zeros(size(H_mat));
    my_mat(1:end-1) = dB./dH;
    my_kappa = interp1(B_mat,my_mat,B,'pchip') .* kappa;
elseif perm == 2
    % Permeability
    my_mat = B_mat./(H_mat+eps);
    my_mat(1) = my_mat(2);   % suppress NaN output
    my_kappa = interp1(B_mat,my_mat,B,'pchip') .* kappa;
elseif perm == 3
    % Permeability of vacuum
    my_kappa = kappa.*my_0;
else
    error('Waehle Permeabilitaet aus!!!')
end
% --------------------------------------------------------------
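To relate the simulated time span to the copper data used above, the magnetic diffusion time constant can be estimated. The lines below are an order-of-magnitude sketch added for illustration; the factor-free estimate tau = mu0*kappa*d^2 is an assumption of this note, while kappa = 56.2e6 1/(Ohm m) and d = 10 mm are taken from the listing.

% Order-of-magnitude estimate of the magnetic diffusion time (sketch only).
mu0   = 4*pi*1e-7;        % Vs/(Am), permeability of vacuum
kappa = 56.2e6;           % 1/(Ohm m), copper, as in Permeabilitaet_kappa
d     = 0.010;            % m, length of the x interval used above
tau   = mu0*kappa*d^2;    % roughly 7 ms; the 1.4 ms window covers the early phase
fprintf('tau = %.2f ms\n', tau*1e3);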
A.7 Tool comparison - MATLAB vs. COMSOL

The two simulation results of a one-dimensional field diffusion process in an infinitely long copper block with a specific electrical conductivity κ = 56·10^6 1/(Ωm) and constant permeability μ_0 are compared. A flux density of 1 T was prescribed as a boundary condition on one side. Fig. A.1 shows the result obtained with the MATLAB PDE toolbox: the flux density is plotted over the location x, with the time t as the equidistant plot parameter. The distance gained by the diffusion front per time step clearly decreases with increasing time, and a steady state is only reached for infinite time. In other words, the diffusion rate slows down continuously and is highest at the beginning.

Figure A.1: MATLAB-PDE Toolbox simulation result

Fig. A.2 shows the result obtained with COMSOL Multiphysics. As before, the decrease in diffusion speed can be observed.

Summary: In both cases, all parameters were kept constant to allow an accurate comparison, and in both result plots the time t was chosen as the equidistant plot parameter. The results of both tools agree very well. The MATLAB code can be found in appendix A.6.

Figure A.2: COMSOL Multiphysics simulation result
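The agreement between the two tools could also be quantified if the COMSOL solution were exported on the same x-t grid. The following lines are a sketch under that assumption; the file name Diff_B_comsol.txt and the variable u_comsol are hypothetical and not part of the original workflow.

% Sketch of a point-by-point comparison (assumes an exported COMSOL result
% on the same x-t grid; file name and variable name are hypothetical).
u_comsol = readmatrix('Diff_B_comsol.txt');
err_rel  = max(abs(u(:) - u_comsol(:))) / max(abs(u(:)));
fprintf('maximum relative deviation: %.2e\n', err_rel);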
Appendix B Campus Künzelsau - Inside

The Künzelsau campus is distinguished by its study programmes in electrical engineering, automation technology & electromechanical engineering. In the Bachelor's degree programmes the author teaches the modules "Electrical Machines" and "Drive Systems Design", and in the Master's degree programme Electrical Engineering, in its specialisation Electromagnetic Systems (EMS), the modules "Theory of Electromagnetic Fields", "Electro-Magneto-Mechanical Energy Converters" and "Magnetic Measurement Technology".

Figure B.1: New research building at the Künzelsau Campus (Photo: Wilhelm Feucht)

His research topics include
• Electromagnetic energy converters,
• Magnetic sensor technology,
• Magnetic materials testing,
• Analytical and numerical computational methods / simulation techniques.

The Institute for Rapid Mechatronic Systems (ISM), which has produced numerous patents, publications and awards in collaboration with regional industry, was founded in 2010. As part of the 2019 campus expansion a research building was constructed (the building in the foreground of fig. B.1), and in the same year the Institute of Digitalisation and Electrical Drives (IDA) was established, which started its work on the ground floor (400 m²) in 2020. Prof. Dr.-Ing. Jürgen Ulm was appointed as executive director for its establishment and management. He is assisted by Prof. Dr.-Ing. Ingo Kühne as deputy institute director and by institute assistant Dr. Anna Konyev. In 2020, Prof. Ulm also received a research professorship for electromagnetic systems.

IDA's basic focus is on applied research in the field of digitalisation of electromagnetic drives, sensor technology and measurement technology:
• IDA supports regional companies by providing access to research and development in connection with the university, especially for small companies,
• IDA covers bottlenecks in research and development (R&D) in companies by outsourcing R&D tasks to IDA,
• IDA supports the realisation of own ideas and helps to create innovative products.

Further information on both institutes can be found on the homepages
• https://www.hs-heilbronn.de/ida,
• https://www.hs-heilbronn.de/ism

My heartfelt thanks go to my team at both institutes for providing the images and for their support in producing this book!

Kind regards
Jürgen Ulm

Index

aberration 148 boundary condition, Cauchy 25 acceleration coefficient 231 boundary condition, Dirichlet 25 across variables 228 boundary condition, Neumann 25 adjoint 6 Boundary operator 40 Ampere's law 42 boundary value task 18 analogy of electr. and mechan. quantities 231 calculus of variations, methods 212 analysis problem 236 capacitor, voltage charact.
118 aperiodic behaviour 125 cartesian coordinate system 47 aperiodic limiting case 124 categories, Modelling 375 area 41 commutative law 39 associative law 39 complement, algebraic 6 asynchronous motor, prototype 345 constraints 209 attenuation constant 231 continuity condition 185 attenuation differential equation 228 coordinate systems 47 basis function 236 core of the transformation 371 basis vectors 39 Crank, John 324 basis vectors, orthonormal 39 Crank-Nicolson, explicit method 332 Bessel equation, field diffusion 150 Crank-Nicolson, implicit method 324 Bessel equation, field in the capacitor 153 current density 32 Bessel function, first kind 163 current displacement 137 Bessel equation, flux density distribution 157 cylinder coordinate system 49 Bessel equation, general form 148 delta function 44 Bessel equation, solutions 148 determinant 5 Bessel, Wilhelm Friedrich 148 deterministic problem 235 <?page no="438"?> 412 Index difference quotient, central 324 error calculation 111 differential equation 18 Euclid 58 differential equation, classification 24 evaluation point 62 differential equation, elliptic 24 evolutionary method 354 differential equation, explicit 19 excitation angular frequency 122 differential equation, homogeneous 18 extreme value problems 209 differential equation, hyperbolic 24 field diffusion equation 230 differential equation, implicit 19 finite difference method 323 differential equation, inhomogeneous 18 finite elements, classification 232 differential equation, linear 18 Galerkin method 238 differential equation, ordinary 18 Galerkin, Boris 238 differential equation, parabolic 24 gamma function 162 differential equation, partial 18 Gaussian theorem, electrostatics 167 differential equation, strong form 31 GMD 55 differential equation, weak form 31 Green, George 165 differential quotient 32 Green’s integral theorems 168 diffusion equation 230 heat diffusion equation 230 Dirac’s delta function 44 heat transfer 303 discontinuity condition 185 helix 83 discretisation error 323 Helmholtz equation 167 discriminant 124 Hermite, Charles 10 distributive law 3 Hilbert space 28 D¨ urer square 6 hyperbolic differential equation 24 ecliptic 148 IDA 409 edge operator 41 impedances 107 e-function 246 inductance, current differential eq. 120 eigensolutions 357 induction law 43 eigenfunctions 357 initial value 18 eigenvalue 357 inner product of functions 29 eigenvalue problem 357, canonical form inner product of vectors 28 eigenvector 14 inner product, normalized 30 electric field strength, related 142 integration point 170 elimination method 220 integration, partial 23 elliptic differential equation 24 ISM 409 envelope integral 42 Lagrange function 209 <?page no="439"?> Index 413 Lagrangian multipliers 209 Nabla operator 33 Laplace operator 33 nat. angular freq., damped system 122 Laplace’s differential equation 181 nat. angular freq. 122 law of conservation of momentum 228 nat. angular freq., damped system 122 LCR resonant circuits, series, parallel 108 nat. 
angular freq., error calculation 111 Neumann-Green function 177 left-hand rule 44 Nicolson, Phyllis 324 line 58 normal function 30 linear operator 26 normalised function 30 logarithm 1 numerus 1 logarithm, Brigg's 1 nutation 148 LU decomposition 343 ODE 18 mantissa 2 operator, inverse 27 matrix, adjunct 6 operator, linear 26 Matrix, anti hermitian 17 operator, self-adjoint 27 matrix, complex conjugate 9 optimisation tool 350 matrix, condition number 13 optimisation, mathematical 210 matrix, conditioned 13 ordinary differential equations 18 matrix, hermitian conjugate 9 orthogonal function 30 matrix, inverse 7 orthonormal function 30 matrix, norm 12 overdamped case 125 matrix, normal 12 parabolic differential equation 24 matrix, orthogonal 12 parallel oscillator, mechanical 229 matrix, self-adjoint 17 parallel resonant circuit, electrical 229 matrix, square 4 Pareto front 351 matrix, transposed 8 Pareto optimisation 350 matrix, unitary 11 partial differential equations 18 Maxwell's equations 43 particle swarm method 354 Maxwell's IV Theorem 41 PDE 18 methods of moments (MOM) 235 phase diagram 130 mirror matrix 9 plunger anchor magnet 352 modelling, categories 375 point 58 MOM 235 Poisson's DE of electrostatics 230 moment method, basic principle 235 Poisson's DE, solution 178 Monte Carlo method 353 postprocessing 344 multi-objective optimisation 350 potential 34 Stoke's integral theorem 42 potential curve 302 strong form, differential equation 31 potential function 34 Sturm-Liouville equation 167 precession subdeterminant 6 preprocessing 148 synthesis problem 236 problem, deterministic 235 test function 236 processing 343 through variables 228 product development, virtual 349 transformation, time, image area 121 product, geometric 38 triangular function 274 product, inner 38 triangular matrix 17 product, inner, normalized 30 unit matrix 15 product, vector, inner 28 unit vectors, calculation rules 40 product, wedge 38 variable, dependent 18 proportional magnet 342 variable, independent 18 reactance 112 vector operator curl 36 residuum 242 vector operator divergence 35 right-hand rule 44 vector operator gradient 34 Ritz method 212 vector product 39 rotation matrix 11 vector product, collinearity 39 Sarrus, rule of 5 vector product, inner 28 scalar product 28 vector product, orthogonality 39 scalar product, collinearity 39 vector, classification 31 scalar product, orthogonality 39 vectors, differentiation rules 32 Schwarz's permutation law 22 vectors, inner product 38 series resonant circuit, electrical 229 virtual product development 345 series resonant circuit, mechanical 229 wave equation 230 solution, complementary 20 weak form, differential equation 31 solution, particulate 21 weighting function 30 spherical coordinate system 51 Wronski determinant 21 standards, useful 379 zero operator 27

All figures and tables were prepared by the author himself.
The content
Required mathematical basics ‒ Coordinate systems ‒ Geometric mean distance ‒ GMD ‒ LCR parallel and series resonant circuit ‒ Current displacement in conductor ‒ Bessel equation and Bessel function ‒ Solution of differential equations using Green's functions ‒ Method of Lagrangian multipliers ‒ Differential equations and finite elements ‒ the Method of Moments to the Galerkin Method ‒ Galerkin Method ‒ Galerkin-FEM ‒ Electrostatic field calculation ‒ Galerkin-FEM ‒ heat diffusion ‒ Galerkin-FEM ‒ magnetic field diffusion ‒ Introduction to the finite difference method ‒ Applications of FEM to product development ‒ Virtual product design ‒ Eigenvalue problems ‒ Common features of methods to solve differential equations ‒ Things worth knowing about modelling

The target groups of this book are:
Students of science and engineering who want to work on scientific topics using mathematical methods.
Simulation, software and measurement engineers who need mathematical methods for their daily development work.

The author
Professor Jürgen Ulm teaches electrotechnology at Reinhold-Würth University, Künzelsau Campus, where he is a research professor for electromagnetic systems. His research focuses on electrical drives, electromagnetic sensors and methods for non-destructive magnetic material testing. He is the head of the Institute for Digitalisation and Electrical Drives (IDA).
