
International Conference on Calibration Methods and Automotive Data Analytics

2019
978-3-8169-8463-4 (ePDF)
978-3-8169-3463-9 (Print)
expert verlag 
Karsten Röpke

Discussions about electrification, air pollution control and driving bans in inner cities pose major challenges for powertrain development. Real Driving Emissions (RDE), the Worldwide Harmonized Light-Duty Test Procedure (WLTP) and the next stage of CO2 reduction demand new development methods. At the same time, new measurement technology and better IT infrastructure mean that ever larger amounts of data are available, so methods of digitalization such as machine learning can be applied in automotive development. Another challenge arises from the ever-increasing number of vehicle variants: many OEMs reduce the number of their base engines to cut costs, but these engines are then installed, with little hardware customization, in numerous vehicle models. As a result, the calibration of derivative vehicles and the systematic validation of a calibration play an important role.

<?page no="1"?> International Conference on Calibration Methods and Automotive Data Analytics <?page no="3"?> International Conference on Calibration Methods and Automotive Data Analytics Dr. Karsten Röpke, Prof. Clemens Gühmann, Matthias Schultalbers, Dr. Wolf Baumann, Dr. Mirko Knaak (eds.) and 78 Co-Authors <?page no="4"?> Bibliografische Information der Deutschen Nationalbibliothek Die Deutsche Nationalbibliothek verzeichnet diese Publikation in der Deutschen Nationalbibliografie; detaillierte bibliografische Daten sind im Internet über http: / / dnb.dnb.de abrufbar. © 2 019 · expert verlag GmbH Dischingerweg 5 · D-72070 Tübingen Das Werk einschließlich aller seiner Teile ist urheberrechtlich geschützt. Jede Verwertung außerhalb der engen Grenzen des Urheberrechtsgesetzes ist ohne Zustimmung des Verlages unzulässig und strafbar. Das gilt insbesondere für Vervielfältigungen, Übersetzungen, Mikroverfilmungen und die Einspeicherung und Verarbeitung in elektronischen Systemen. Alle Informationen in diesem Buch wurden mit großer Sorgfalt erstellt. Fehler können dennoch nicht völlig ausgeschlossen werden. Weder Verlag noch Autoren oder Herausgeber übernehmen deshalb eine Gewährleistung für die Korrektheit des Inhaltes und haften nicht für fehlerhafte Angaben und deren Folgen. Internet: www .expertverlag.de eMail: info@verlag.expert Printed in Germany ISBN 978-3-8169-34 63 - 9 ( P rint) ISBN 978-3-8169-84 63 - 4 (ePDF) <?page no="5"?> Preface Discussions on electrification, air pollution control and driving bans in inner cities bring major challenges for powertrain development. Real Driving Emissions (RDE), Worldwide Harmonized Light-Duty Test Procedures (WLTP) and the next level of CO2 reduction enforce new development methods. At the same time, new measurement technology and better IT infrastructure mean that ever larger amounts of data are available. Thereby, methods of digitization, e.g. Machine Learning, may be used in automotive development. Another challenge arises from the ever-increasing number of vehicle variants. Many OEMs reduce the number of their engines to reduce costs. However, the basic engines are then installed with little hardware customization in numerous vehicle models. As a result, the application of derivatives and the systematic validation of an application play an important role. In this book, the lectures of the International Conference on Calibration - Methods and Automotive Data Analytics held on May 21 and 22, 2019 in Berlin are contained. We would like to thank all authors for their contributions to this conference. Dr. Karsten Röpke, IAV GmbH Prof. Dr. Clemens Gühmann, TU Berlin Matthias Schultalbers, IAV GmbH Dr. Wolf Baumann, IAV GmbH Dr. Mirko Knaak, IAV GmbH <?page no="7"?> Contents Preface 1 Data Analysis I .............................................................................1 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks ..................................................................................................... 1 Yuncong Yu, Thomas Mayer, Eva-Maria Knoch, Michael Frey, Frank Gauterin 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression................................................................ 10 Yuncong Yu, Thomas Mayer, Eva-Maria Knoch, Michael Frey, Frank Gauterin 1.3 Time-Delay Estimation for Automotive Applications .............................. 
21 Niklas Ebert, Frank Kirschbaum, Thomas Koch 2 MBC I...........................................................................................35 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model ........................................................................... 35 Kento Fukuhara, Daniel Rimmelspacher, Wolf Baumann, Yutaka Murata, Yui Nishio 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development ............................................................................................... 54 Christian Friedrich, Christian Kunkel, Matthias Auer 2.3 Virtual Calibration to Improve the Design of a Low Emissions Gasoline Engine.......................................................................................... 74 Justin Seabrook, Josh Dalby, Kiyotaka Shoji, Akira Inoue 3 MBC II..........................................................................................85 3.1 Modification of Pacejka’s Tyre Model in the High Slip Range for Model-Based Driveability Calibration ....................................................... 85 Robert Bauer, Sebastian Weber, Richard Jakobi, Frank Kirschbaum, Carsten Karthaus, Wilfried Rossegger 3.2 Bayesian Optimization and Automatic Controller Tuning ...................... 95 Matthias Neumann-Brosig, Alexander von Rohr, Alonso Marco Valle, Sebastian Trimpe <?page no="8"?> Contents 3.3 Engine Calibration Using Global Optimization Methods....................... 103 Ling Zhu, Yan Wang 4 Methods .................................................................................... 118 4.1 Finding Root Causes in Complex Systems ............................................ 118 Hans-Ulrich Kobialka 4.2 A Probabilistic Approach for Synthesized Driving Cycles ................... 125 Michael Hegmann, Wolf Baumann, Felix Springer 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN ...................................................................................................... 132 Peter Schichtel, Alireza Koochali, Sheraz Ahmed, Andreas Dengel 5 RDE ...........................................................................................146 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration................................ 146 Sung-Yong Lee, Jakob Andert, Imre Pörgye, Daechul Jeong, Marius Böhmer, Andreas Kampmeier, Sebastian Jambor, Matthias Kötter, Markus Netterscheid, Markus Ehrly 5.2 Digital Transformation of RDE Calibration Environments: The Quest for Networked Virtual ECUs and Agile Processes ................................. 174 Jakob Mauss, Felix Pfister 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development .............................................................. 188 Michael Grill, Mahir Tim Keskin, Michael Bargende, Peter Bloch, Giovanni Cornetti, Dirk Naber 6 MBC III.......................................................................................198 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures.................................................................................................. 
198 Thomas Kruse, Thorsten Huber, Holger Kleinegraeber, Nicola Deflorio 6.2 Risk Averse Real Driving Emissions Calibration under Uncertainties ............................................................................................. 211 Alexander Wasserburger, Nico Didcock, Stefan Jakubek, Christoph Hametner 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods.................................................................................. 219 Stefan Scheidel, Marie-Sophie Gande, Giacomo Zerbini, Marko Decker <?page no="9"?> Contents 7 Automated Calibration II .........................................................232 7.1 AMU-Based Functions on Engine ECUs................................................. 232 Benedikt Nork, René Diener 7.2 Efficient Calibration of Transient ECU Functions through System Optimization .............................................................................................. 246 André Sell, Frank Gutmann, Tobias Gutmann 7.3 Dynamic Safe Active Learning for Calibration ....................................... 258 Mark Schillinger, Benjamin Hartmann, Martin Jacob 8 Data Analysis II ........................................................................278 8.1 Applications of High Performance Computing for the Calibration of Powertrain-Controls ................................................................................. 278 Markus Schori, Matthias Schultalbers 8.2 Efficient Automotive Development - Powered by BigData Technology ................................................................................. 286 Tobias Abthoff, Dankmar Boja The Authors.........................................................................................291 <?page no="11"?> 1 Data Analysis I 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks Yuncong Yu, Thomas Mayer, Eva-Maria Knoch, Michael Frey, Frank Gauterin Abstract This paper addresses an important problem of time series analysis at test benches. A new method is presented that allows automated segmentation of measurement time series using a Convolutional Neural Network (CNN). The CNN is trained for this purpose with specifically generated data. The results show a high quality and efficiency. The field of application of the algorithm is not limited to the automotive industry focused on here, but can be easily transferred to other areas that allow a visual segmentation of data. Kurzfassung Dieser Beitrag behandelt ein wichtiges Problem der Zeitreihenanalyse von Messreihen am Prüfstand. Es wird eine neue Methode vorgestellt, die mit Hilfe eines Convolutional Neural Networks (CNN) eine automatische Segmentierung von Messreihen ermöglicht. Das CNN wird hierzu mit gezielt generierten Daten trainiert. Das Ergebnis zeigt eine hohe Güte und Geschwindigkeit. Das Einsatzgebiet des Algorithmus ist nicht auf die hier fokussierte Anwendung in der Automobilindustrie beschränkt, sondern kann leicht auf andere Bereiche, die eine visuelle Segmentierung von Daten erlauben, übertragen werden. 1 Introduction With rapidly accelerating product life cycles in the automotive industry, research and development based on modelling and simulation gain increasing significance. Nowadays models of systems in automobiles have grown extensively in complexity, which impedes comprehension and analysis of the system. Consequently, feasibility of manually evaluating the output data and optimizing the system suffers. 
A possible solution is a data analysis system that automates the evaluation process. Examining of measurement and simulation data is one of the most common tasks for model analysis, evaluation and optimization. Many simulation and measurement results are time series which go through several phases. Therefore, segmentation is often the first fundamental step for further data mining. 1 <?page no="12"?> 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks 2 Literature Review 2.1 Time Series Segmentation Time series segmentation is partitioning a time series into several internally homogeneous [1, p. 466] and externally different but contiguous sub-time series [2, p. 1110]. It is often a preprocessing step in data mining that assists with extraction of interesting information in later steps [3, p. 50]. Time series segmentation algorithms can be classified into three categories: top-down, bottom-up and sliding windows [4, p. 2]. Topdown algorithms start from the coarsest segments and partition existing segments recursively [3, p. 38]. Whereas bottom-up algorithms begin with the finest possible segments and merge the most similar neighbors [3, pp. 39-40]. Sliding windows are like filters going through the time series and pick out the positions where the data characteristics change dramatically [3, p. 41]. Combinations of these algorithms are possible, for example, the combination of Sliding Windows and Bottom-Up (SWAB) [4]. These algorithms usually measure the internal homogeneity of a segment. This is where most of the innovations take place. One of the possible ways is to represent segments with simpler substitutions like lines by Piecewise Linear Representation (PLR) [4, p. 1] and polynomials by Piecewise Polynomial Representation (PPR) [5, p. 193]. A relatively advanced way to evaluate the homogeneity is Principle Component Analysis (PCA) [6]. In this study, a new approach based on a CNN is proposed, which falls into none of these three categories, as all segment boundaries in a time series are detected simultaneously. Hence, measurement of homogeneity is not necessary here. This approach reaches a decent accuracy, is universally applicable and highly efficient during application. 2.2 Convolutional Neural Networks The Convolutional Neural Network (CNN) was first proposed by Yann LeCun and others in 1998 [7, p. 121]. The recent decade has witnessed a variety of image-related applications of CNNs. For instance in pulmonary nodule (a sign at the early stage of lung cancer) detection from Computed Tomography (CT) images [8] and facial recognition including even micro-expression recognition [9] [10]. The typical input data for CNNs are two-dimensional image data, however one-dimensional CNNs are proven to be a quick alternative to Recurrent Neural Networks (RNNs) for sequence data processing [11, p. 288]. A recent application case of one-dimensional CNNs is the detection of the QRS complex (a special signal pattern) in Electrocardiogram (ECG) for cardiovascular disease diagnosis [12]. The CNN used in this paper is responsible for detecting boundaries in time series. These boundaries partition the time series, which enables further segment-wise extraction of information. 2 <?page no="13"?> 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks 3 Time Series Generator Training of neural networks requires a quite large amount of training data. 
As there are not enough suitable data, usually labelled and proven, a time series generator was developed to provide sufficient data. The main function of the time series generator is to mass produce random time series according to user configurations. User configurations include mainly the number of time series to generate, length of the time series, the number of channels in a time series, allowed numbers of segments in a channel, minimum segment length and intensity of different types of noise. The time series generator can generate two types of data. The first type contains “single” time series, where each sample includes only one time series. An example of a single time series with three channels is shown in Figure 1. In each channel, the data curve contains several segments, like horizontal line segments, line segments with a certain slope unequal to 0 and segments corresponding to the step responses of firstorder linear time-invariant (LTI) system (commonly known in German as “PT1”). There are random steps between segments. Segment boundaries in different channels are not completely random. The background behind this setting is that the time series can be regarded as the state of a system observed against time. Each channel represents a measured signal like temperature, pressure, voltage. A change in the observed system may result in sudden changes (boundaries in the curves) in several channels. These boundaries in different channels are usually close to each other because they indicate the same change in the observed system. The second type contains pairs of time series, where each sample includes two similar time series, as shown in Figure 2. Pairs of time series are generated to represent pairs of measurement and simulation data to feed the CNN for an automatic time series comparing algorithm [13]. 3 <?page no="14"?> 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks Figure 1: A single time series with three channels Figure 2: A pair of time series with three channels 4 <?page no="15"?> 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks 4 Time Series Segmentation Algorithm 4.1 Requirements In general, the time series segmentation algorithm splits a given time series into segments. These segments are internally homogeneous [1, p. 466]. Adjacent segments exhibit different characteristics and are contiguous in time. Contiguity means no overlap and no gap between two neighboring segments. The time series used to develop, train and test the neural network for the time series segmentation algorithm are generated by the time series generator described in Chapter 3. A question for segmentation is, whether to find the same boundaries for all the channels (“global” boundaries) or different boundaries for each channel (“local” boundaries). In this research, the time series segmentation algorithm segments a multivariate time series one channel after another separately, detecting “local” boundaries in each channel. Finding the “local” boundaries enables the calculation of time shifts between channels. If only global boundaries are needed, it is also completely feasible to train the CNN to read several channels simultaneously and segment them with the same set of boundaries. The detailed algorithm can be looked up in [14]. 4.2 CNN for Segmentation The structure of the developed CNN used for time series segmentation is shown in Figure 3. Closely related steps for processing input and output data are also presented. 
The input data for the CNN is a single channel or visually a curve. The output data of the CNN also constitute a time series, which has the same length as the original input channel. Each value in the output time series indicates the probability of the time point being a boundary. Inside the CNN, the data go through three one-dimensional convolutional layers in the CNN. The deeper the layer, the more filters are applied and the longer these filters become, which is typical for a CNN. Untypical for this CNN is that there are no max-pooling layers because experiments without them show better results. An intuitive explanation could be that max-pooling leads to loss of location information due to its massive downsampling effect, during which the length of output data shortens. In a typical application of CNNs, an object in an image is detected regardless of its position. Whereas, boundary detection in a time series requires to know not only the existence of these boundaries, but also their time dependent positions. Out of the same reason, zero padding is used in each convolutional layer to ensure that the length of the interim output data always stays the same. Detailed configurations of the applied CNN are shown in Table 1. 5 <?page no="16"?> 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks Figure 3: CNN for time series segmentation … …… ……… ………… ………… ………… N Channels 8 Channels 16 Channels 32 Channels 8 Filters (Length 5) 16 Filters (Length 5) 32 Filters (Length 7) 99 Neurons Convolutional Layer 1 Convolutional Layer 3 Flatten Output Layer CNN Output Data Convolutional Layer 2 Non-max Suppression Original Time Series Load a Channel Impuls over Threshold CNN Input Data Segmentation Result Probable Boundaries 6 <?page no="17"?> 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks Table 1: Configurations of the CNN for time series segmentation Layer Layer Type Size Activation Function 1 (input) - 100 - 2 Convolutional Number of filters: 8 Kernel Size: 5 ReLu 3 Convolutional Number of filters: 16 Kernel Size: 5 ReLu 4 Convolutional Number of filters: 32 Kernel Size: 7 ReLu 5 Flatten - - 6 (output) Fully connected 100 sigmoid This CNN is trained with 800’000 time series as training data and 200’000 time series as validation data. The optimiser Adam is used with the loss function binary crossentropy, which is typical for multi-class, multi-label classification like this case. The training and validation data are generated with the time series generator described in Chapter 3. The performance of the CNN is further evaluated and verified with 1’000 time series. They are generated separately with the time series generator and have no association with the training and validation data mentioned above. The results are shown in Table 2. Table 2: Evaluation of the CNN for time series segmentation Number rate Time series samples 1’000 100% Correctly segmented time series 808 80.8% Boundaries 4’976 100% Correctly detected boundaries (excluding multidetected ones) 4’842 97.3% Undetected boundaries 134 2.7% Multi-detected boundaries 0 0% Falsely detected boundaries 125 2.5% A boundary is defined as correctly detected, when there is exactly one detected boundary within 2 seconds around it (the length of all the original time series is 100 time steps). Table 2 shows a satisfactory evaluation outcome. More than 97% of the boundaries are correctly detected; less than 3% of the boundaries are ignored or falsely detected. 
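For readers who want to reproduce the architecture, the configuration of Table 1 can be translated, for instance, into the following Keras sketch. This is a non-authoritative reconstruction: layer counts, kernel sizes, padding, optimizer and loss follow Table 1 and the text above, whereas the function name build_segmentation_cnn, the fixed input length of 100 samples and the commented training call are assumptions made purely for illustration.

# Sketch (assumed) of the segmentation CNN from Table 1; not the authors' original code.
# Input: one channel of length 100; output: a boundary probability per time step.
from tensorflow import keras
from tensorflow.keras import layers

def build_segmentation_cnn(input_length=100):
    model = keras.Sequential([
        layers.Input(shape=(input_length, 1)),
        # Zero padding ("same") keeps the interim length constant; no max-pooling,
        # so time-dependent position information is preserved.
        layers.Conv1D(8, kernel_size=5, padding="same", activation="relu"),
        layers.Conv1D(16, kernel_size=5, padding="same", activation="relu"),
        layers.Conv1D(32, kernel_size=7, padding="same", activation="relu"),
        layers.Flatten(),
        # One sigmoid unit per time step: probability of that point being a boundary.
        layers.Dense(input_length, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# model = build_segmentation_cnn()
# model.fit(x_train, y_train, validation_data=(x_val, y_val))  # x: (N, 100, 1), y: (N, 100)

In application, the predicted probability trace is then thresholded and reduced with non-maximum suppression (cf. Figure 3) to obtain discrete boundary positions.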
7 <?page no="18"?> 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks 5 Results To illustrate the results of the segmentation algorithm, a sample from the automatically segmented time series is plotted in Figure 4. There are three channels in this time series, labeled from Channel 1 to Channel 3. The data curve in each channel is segmented with the time series segmentation algorithm. The dashed lines are detected boundaries. The plot shows satisfactory results intuitively. Figure 4: Examples of time series segmentation 6 Conclusion In this study, a new method for segmentation of multivariate time series is presented. It is based on the techniques of convolutional neural networks (CNN). A time series generator is developed to support training of the CNN and testing of the method. Performance evaluation shows satisfactory results. 97% of the boundaries in the time series can be correctly detected. The presented method favors two new approaches. On the one hand, it has proposed a way to use machine learning even when not enough training data is available. In this case, synthetic data adapted to the application can be generated to support training process. One the other hand, a new way to segment multivariate time series is put forward. Various segment types can be segmented, not only linear ones as with many conventional approaches. Besides, there are quite less parameters to tune during application and the algorithm runs very efficiently during application. Nevertheless, there is still much room for improvement of the presented method. 8 <?page no="19"?> 1.1 Segmentation of Multivariate Time Series with Convolutional Neural Networks References [1] Vasko, K. T. and Toivonen, H. T. T. 2002. Estimating the Number of Segments in Time Series Data Using Permutation Tests, IEEE International Conference on Data Mining , pp. 466-473. [2] Graves, D. and Pedrycz, W. 2009. Multivariate Segmentation of Time Series with Differential Evolution, IFSA/ EUSFLAT Conference , pp. 1108-1113. [3] Lovrić, M., Milanović, M. and Stamenković, M. 2014. Algoritmic Methods for Segmentation of Time Series: An Overview. Journal of Contemporary Economic and Business Issues, Vol. 1, No. 1, pp. 31-53. [4] Keogh, E., Chu, S., Hart, D. and Pazzani, M. 2004. Segmenting Time Series - A Survey and Novel Approach. M. Last, A. Kandel and H. Bunke (eds), Data Mining in Time Series Databases , pp. 1-21. [5] Xu, Z., Zhang, R., Kotagiri, R. and Parampalli, U. 2012. An Adaptive Algorithm for Online Time Series Segmentation with Error Bound Guarantee, Proceedings of the 15th International Conference on Extending Database Technology. New York, NY, USA, ACM (EDBT ’12), pp. 192-203. [6] Abonyi, J., Feil, B., Nemeth, S. and Arva, P. 2005. Modified Gath-Geva Clustering for Fuzzy Segmentation of Multivariate Time-Series. Fuzzy Sets and Systems, Vol. 149, No. 1, pp. 39-56. [7] Skansi, S. 2018. Introduction to Deep Learning : From Logical Calculus to Artificial Intelligence. Cham, Springer. (Undergraduate Topics in Computer Science SpringerLink : Bücher). [8] Xie, H., Yang, D., Sun, N., Chen, Z. and Zhang, Y. 2019. Automated Pulmonary Nodule Detection in CT Images using Deep Convolutional Neural Networks. Pattern Recognition, Vol. 85, pp. 109-19. [9] Singh, R. and Om, H. 2017. Newborn Face Recognition Using Deep Convolutional Neural Network. Multimedia Tools and Applications, Vol. 76, No. 18, pp. 19005-15. [10] Peng, M., Wang, C., Chen, T., Liu, G. and Fu, X. 2017. 
Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition. Frontiers in Psychology, Vol. 8, p. 1745. [11] Chollet, F. 2018. Deep Learning mit Python und Keras: das Praxis-Handbuch vom Entwickler der Keras-Bibliothek. 1st ed. Frechen, mitp. [12] Xiang, Y., Lin, Z. and Meng, J. 2018. Automatic QRS complex detection using two-level convolutional neural network. BioMedical Engineering OnLine, Vol. 17, No. 1, p. 13. [13] Yu, Y., Mayer, T., Knoch, E.-M., Frey, M. and Gauterin, F. 2019. Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression, Proceedings of the International Conference on Calibration - Methods and Automotive Data Analytics. [14] Yu, Y. 2018. Analysis, Comparison and Interpretation of Multivariate Time Series. Master Thesis, Karlsruhe Institute of Technology. 9 <?page no="20"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression Yuncong Yu, Thomas Mayer, Eva-Maria Knoch, Michael Frey, Frank Gauterin Abstract This paper introduces a novel method for comparison of similar time series, especially measurement and simulation data to identify problems in the observed system. It employs the technique for time series segmentation proposed in [1] together with Dynamic Time Warping (DTW) to jointly segment pairs of measurement and simulation time series. Further, a Convolutional Neural Network (CNN) is used to identify the characteristics of segments. It is trained with synthetic data generated by the time series generator presented in [1]. Finally, the essential parameters are estimated with regression. Performance evaluation of each step is conducted and shows a high accuracy. The usage of this method is not restricted to evaluation of measurement and simulation time series, but can be extended to serve the general purpose of sequence data comparison. Kurzfassung In diesem Beitrag wird eine neuartige Methode zum Vergleich ähnlicher Zeitreihen, insbesondere Zeitreihen von Mess- und Simulationsdaten, vorgestellt. Ziel ist es, Unterschiede in den beiden Zeitreihen zu bestimmen, um eventuelle Probleme im beobachteten System zu erkennen. Die in [1] präsentierte Methode zur Segmentierung von Zeitreihen zusammen mit der Methode der dynamischen Zeitnormierung (DTW, engl. Dynamic Time Warping) wird hierbei angewendet, um eine synchronisierte Segmentierung von Mess- und Simulationsreihen zu ermöglichen. Außerdem wird ein Convolutional Neural Network (CNN) eingesetzt, um den funktionalen Zusammenhang der Zeitreihen innerhalb der Segmente zu klassifizieren. Das CNN wird hierzu mit synthetischen Daten trainiert, die von einem Zeitreihengenerator (siehe [1]) erzeugt werden. Die wesentlichen Parameter werden mit Hilfe von Regression geschätzt. Das vorgestellte Vorgehen zeigt eine hohe Prognosegüte. Der Einsatzbereich dieser Methode ist jedoch nicht nur auf die Auswertung von Mess- und Simulationszeitreihen beschränkt, sondern lässt sich auf den allgemeinen Vergleich von Datensequenzen erweitern. 10 <?page no="21"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression 1 Introduction Model-based development plays an increasingly important role in automotive engineering. With rapidly growing complexity of models, their data analysis becomes much more difficult and often far too demanding for manual evaluation. Many simulation results are in the form of time series showing the temporal response of a system under specific configuration. 
Comparison of simulation results with measurement is one of the essential ways to find error causes in the model. Therefore, techniques to compare time series play a key role in describing the dynamics behind the curves in automotive data analysis. However, a direct comparison of two arbitrary curves is barely feasible. Because there are so many different characteristics that a curve can assume. Hence, a method is needed to handle the problem of time series comparison. 2 Literature Review 2.1 Time Series Comparison Comparing two time series, especially a pair of measurement and simulation time series is one of the most common tasks in modelling and simulation. An intuitive way to compare two time series is to calculate the L1 norms (sum of absolute values) or L2 norm (Euclidean length) of their differences. Two advanced techniques based on the methods mentioned above are Dynamic Time Warping (DTW) and Edit Distance with Real Penalty (ERP) [2, p. 569], which take into consideration the time shifts between two time series [3, p. 792]. ERP is first introduced and thoroughly described in [3]. DTW is used in this study and will be explained in the following section. Other methods include Longest Common Subsequence (LCSS) and Edit Distance with Real Sequence (EDR), which are based on a notion called matching threshold. They are proven to be more robust against noise compared to the methods mentioned above [4, p. 673] [5, p. 491]. These traditional methods measure the distance or similarity of two time series. In this paper, a novel method for comparing two time series based on segmentation, CNN and regression is put forward. It is fully automated and can provide a detailed assessment digging deep into various aspects of the two compared time series. - 11 <?page no="22"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression 2.2 Dynamic Time Warping Dynamic Time Warping (DTW) is an algorithm to measure similarity of two time series [6, p. 38], as it can match points of two similar time series, for example, measurement and simulation data with time shifts between them. Consequently, a point at time 𝑡 in one time series may correspond to one point at time 𝑡 (𝑡 𝑡 ) in the other times series. DTW can handle this problem of time distortion [6, p. 38]. However, there is the restriction that the start points and end points of two time series should match respectively. Figure 1 shows two time series denoted blue and orange respectively. Figure 1: Matching of corresponding points on two time series with similar characteristics using Dynamic Time Warping (DTW) Corresponding points are matched with black dashed lines as a result of DTW. Details about implementation of DTW is not handled in this paper because the algorithm is extensive and can be looked up in [7, p. 193]. DTW is utilized in this paper to match measurement and simulation time series. Together with the time series segmentation algorithm described in [8], it enables simultaneous segmentation of both time series. 2.3 Convolutional Neural Network The Convolutional Neural Network (CNN) is a member of Artificial Neural Networks (ANNs), or Deep Neural Networks (DNNs) to be precise. Generally, a CNN begins with several blocks, each of which is made up of one or more convolutional layers [9, p. 121] followed by a max-pooling layer. The output of the last block is then flattened out by a flatten layer and fed to a small conventional network with two or more fully connected layers. 
The last fully connected layer is also the output layer of the whole CNN [10, p. 439] [9, p. 126]. Configuration of CNNs means setting suitable hyperparameters. Parameters such as weights and bias are internal variables in a CNN. They are initialized and updated 12 <?page no="23"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression automatically during training and not directly set by the user. On the contrary, hyperparameters are user configured. They influence the values of parameters, for example the number of layers, number and size of filters in the convolutional layers, activation functions and loss functions. 3 Comparison Method - - 3.1 Overview of the Algorithm The proposed method compares segmented measurement and simulation data. It evaluates their similarity quantitatively from many aspects. In the first step, the measurement and simulation data are segmented jointly with the segmentation algorithm proposed in [1] together with the sequence matching algorithm DTW. Next, the segment characteristics of the measurement and simulation segments are identified with a CNN and compared. There are three segment types within the scope of this paper, namely “constant”, “linear” and “PT1”. “PT1” refers to the step response of a first-order linear time-invariant (LTI) system. Then the essential parameters (for instance the time constants for “PT1” segments) for the segment will be estimated according to the segment types using regression. Finally, error convergence and time shifts at both ends of each segment will be checked. - - 3.2 Joint Segmentation of Time Series Pairs with Dynamic Time Warping With the method for time series segmentation proposed in [1], it is only possible to segment a single time series, not a pair of measurement and simulation data. If measurement and simulation data are segmented separately, they are likely to have different sets of boundaries, and it is intractable to find the relationship between the two sets of boundaries. In the proposed algorithm, the CNN based segmentation method in [1] is applied to segment measurement and simulation time series separately in the first step. The results are probabilities of each time point being a boundary [1]. Then, DTW is used to match points on the simulation curve with those on the measurement curve. The mean value of the possibilities of corresponding points are calculated, and if this value for a pair of corresponding points is over 0.5, both points are recognized as boundaries, one for measurement curve, the other for simulation curve. Note that a pair of corresponding points may not be at the same time, which means that time shifts are taken into consideration during averaging. Figure 2 shows an example of joint segmentation of measurement and simulation data. The blue curve refers to measurement data and the red one to simulation data. Both have three channels numbered from Channel 1 to Channel 3. They can be arbitrary variables, like temperature, pressure or current. The blue dots denote the real boundaries of the measurement data and the red ones the real boundaries of the simulation 13 <?page no="24"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression data. The detected boundaries are delineated with dashed lines in corresponding colours. 
Figure 2: An example of joint segmentation of measurement and simulation data The overall performance of the method based on the CNN proposed in [1] and DTW is evaluated with 1’000 pairs of measurement and simulation time series generated by the time series generator presented in [1]. Each of them comprises three channels. Each channel includes 1 to 4 segments, or rather 0 to 3 boundaries. The evaluation results with these test data are shown in Table 1. - 14 <?page no="25"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression Table 1: Evaluation of the joint segmentation of time series pairs - Overall Measurement Simulation Number Rate Number Rate Number Rate correctly segmented channels 2’055 69% 2’209 74% 2’076 69% correctly detected boundaries 11’453 88% 5’842 90% 5’611 87% undetected boundaries 1’417 11% 629 10% 788 12% multi-detected boundaries 72 1% 0 0% 72 1% falsely detected boundaries 876 7% 390 7% 486 8% The results for segmentation of pairs of measurement and simulation data are slightly worse than the evaluation results for segmentation of only single time series in [1]. Correctly detected boundaries are slightly less than 90% and undetected a little more than 10% compared with 97.3% and 2.7% for single time series [1]. The accuracy decreases, because a much more complex problem is dealt with, where the measurement and simulation data are processed in conjunction. Nonetheless, the results are still well acceptable. 3.3 Identification of Segment Characteristics with a Convolutional Neural Network Knowing the segment type is the prerequisite to apply regression and determine parameters of the segment. Comparing essential parameters is much easier as direct comparison of two curves and makes it possible to discover problems in simulation or measurement. The presented approach is classification based on a convolutional neural network (CNN) to identify the segment characteristics. Traditional ways involve trying different models and comparing their errors. These methods are computationally expensive, especially when dealing with many possible segment types. On the contrary, CNNs usually work efficiently during application, regardless of the number of possible segment types. More efforts are needed only during training process. In this study, a trained CNN takes the segment as its input and gives one of the segment types, “constant”, “linear” or “PT1” as output. The configuration of this CNN is listed in Table 2. The CNN is trained with around 32’000 segments as training data and around 8’000 segments as validation data using optimizer Adam with loss function categorical crossentropy, which is typical for multi-class classification like this case. These data are generated by the time series generator in [1]. 15 <?page no="26"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression Table 2: Configurations of the CNN for segment type identification Layer Layer Type Size Activation Function 1 (input) - 20 - 2 Convolutional Number of filters: 8 Kernel size: 3 ReLU 3 Max-pooling 2 - 4 Convolutional Number of filters: 16 Kernel size: 5 ReLU 5 Flatten - - 6 Fully connected 32 ReLU 7 (output) Fully connected 3 softmax 3.4 Regression of Segment Parameters After the segment types of both measurement data and simulation data are identified, parameters according to the segment types can be determined using regression. 
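Before these parameters are derived, the segment-type classifier summarized in Table 2 can likewise be sketched in Keras. The sketch below is illustrative rather than authoritative: layer sizes, activations, optimizer and loss follow Table 2 and the text, while the function name build_segment_classifier and the assumption that each segment is resampled to the 20-point input length are introduced here for demonstration only.

# Sketch (assumed) of the segment-type classifier from Table 2; not the authors' original code.
# Input: a segment of 20 points; output: class probabilities for "constant", "linear", "PT1".
from tensorflow import keras
from tensorflow.keras import layers

def build_segment_classifier(segment_length=20, n_classes=3):
    model = keras.Sequential([
        layers.Input(shape=(segment_length, 1)),
        layers.Conv1D(8, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(16, kernel_size=5, activation="relu"),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model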
As mentioned, there are three types of segments dealt with in this study: “constant”, “linear” and “PT1” The constant value of a horizontal line and the slope of a line segment can be found using linear regression. Calculating the time constant 𝜏 of a “PT1” segment, namely the step response of a first-order LTI system, is based on the differential formula [11, p. 194] 𝜏 ⋅ 𝑦 𝑡 𝑦 𝑡 𝐾 ⋅ 𝑢 𝑡 (1) where 𝑦 represents the values of the segment, 𝑡 is time, 𝐾 is the unknown constant gain of the “PT1” segment and 𝑢 𝑡 is a Heaviside step function defined as 𝑢 𝑡 0 𝑡 0 1 𝑡 0 (2) In order to carry out regression, the term 𝑦 is substituted with its finite difference with sampling period Δ𝑡, in this case 1 time step: 𝑦 𝑡 𝐾 ⋅ 𝑢 𝑡 𝜏 ⋅ 𝑦 𝑡 Δ𝑡 𝑦 𝑡 Δ𝑡 2Δ𝑡 (3) Now only the two constant coefficients 𝜏 and 𝐾 are unknown and can be calculated using linear regression. - - 4 Results By way of illustration, a time series sample with measurement and simulation data is processed using the presented method. In the first step, the pair of time series is segmented by the time series segmentation algorithm. The result is shown in Figure 3. 16 <?page no="27"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression The blue dots denote the real boundaries of the measurement data and the orange ones the real boundaries of the simulation data. The detected boundaries are delineated with dashed lines in corresponding colours. The black dotted lines match boundaries of the measurement data to corresponding boundaries of the simulation data. - - Figure 3: An example of a jointly segmented time series pair Then, these two curves are evaluated one segment after another. For a better illustration, the chosen pair of time series contains only one channel (channel 1). For multivariate time series with more than one channels, the algorithm will go through each channel separately. The comparison results are shown in Table 3. It shows detailed assessment of the measurement and simulation data that outstrips the conventional methods mentioned in Section 2.1 for comparing two time series. 17 <?page no="28"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression Table 3: Comparison results of a sample time series 4 Constant Constant Matched Constant value 2.49 x 10 -2 -2.77 x 10 -2 -6.95 x 10 2 -4.40 x 10 2 3 0 3 Constant Constant Matched Constant value 5.09 x 10 -1 5.06 x 10 -1 -6.69 x 10 -2 -6.43 x 10 -2 6 3 2 Linear PT1 Mismatched - - - -9.73 x 10 -2 -7.55 x 10 -2 0 6 1 PT1 PT1 Matched Time constant 4.03 1.6 -1.65 x 10 -3 -5.62 x 10 -2 0 0 Segment Number Measurement Simulation Comparison Parameter Name Measurement Simulation Left Right Left Right Segment Type Parameter Error Convergence Time Shift 18 <?page no="29"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression 5 Conclusion - In this paper, a novel method for comparing pairs of measurement and simulation data is presented. Firstly, dynamic time warping combined with the time series segmentation algorithm proposed in [1] is used to segment the pair of time series. Then, a convolutional neural network is applied to identify the characteristics of the segments. Finally, regression is carried out to estimate the essential parameters of each segment according to the identified segment type. 
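For the "PT1" case, this regression step reduces to an ordinary least-squares problem via Eqs. (1)-(3). The following minimal sketch is an assumed illustration, not the authors' implementation: it presumes a uniformly sampled segment whose step input is active from the first sample, and the function name and example values are hypothetical.

# Sketch (assumed) of the PT1 parameter regression of Section 3.4:
# tau * dy/dt + y(t) = K * u(t), with u(t) a unit step and dy/dt replaced by a
# central finite difference, so that K and tau follow from ordinary least squares.
import numpy as np

def fit_pt1(y, dt=1.0):
    """Estimate gain K and time constant tau of a PT1 step-response segment y."""
    u = np.ones_like(y)                  # Heaviside step, assumed active over the whole segment
    dy = (y[2:] - y[:-2]) / (2.0 * dt)   # central difference, interior points only
    # y(t) = K*u(t) - tau*dy(t)  ->  regressors [u, -dy], target y
    X = np.column_stack([u[1:-1], -dy])
    theta, *_ = np.linalg.lstsq(X, y[1:-1], rcond=None)
    K, tau = theta
    return K, tau

# Example with a synthetic segment (tau = 4, K = 2):
# y = 2.0 * (1.0 - np.exp(-np.arange(0.0, 30.0) / 4.0)); print(fit_pt1(y))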
With this method, it is possible to compare pairs of time series with the objective of detecting possible problems in the data set and better characterising their functional behaviour. In a practical example, a reasonable accuracy is reached, with 87% boundaries in time series pairs detected, 98% segment characteristics identified and decent precision of parameter estimation even for complex segments like “PT1”. Future work includes the extension of the method to more segment types, especially non-monotonic segments like polynomials and “PT2” (the step response of a secondorder LTI system) and further validation of the method with real data. References [1] Yu, Y., Mayer, T., Knoch, E.-M., Frey, M. and Gauterin, F. 2019. Segmentation of Multivariate Time Series with Convolutional Neural Networks, Proceedings of the International Conference on Calibration - Methods and Automotive Data Analytics. [2] Morse, M. D. and Patel, J. M. 2007. An Efficient and Accurate Method for Evaluating Time Series Similarity, Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data. New York, NY, USA, ACM (SIGMOD ’07), pp. 569-580. [3] Chen, L. and Ng, R. 2004. On the Marriage of Lp-norms and Edit Distance, Proceedings of the 13th International Conference on Very Large Data Bases - Volume 30, VLDB Endowment (VLDB ’04), pp. 792-803. [4] M. Vlachos, G. Kollios and D. Gunopulos. 2002. Discovering Similar Multidimensional Trajectories, Proceedings 18th International Conference on Data Engineering, pp. 673-684. [5] Chen, L., Özsu, M. T. and Oria, V. 2005. Robust and Fast Similarity Search for Moving Object Trajectories, Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data. New York, NY, USA, ACM (SIGMOD ’05), pp. 491-502. [6] Sio-Iong Ao. 2010. Applied Time Series Analysis and Innovative Computing. Dordrecht, Springer. (SpringerLink : Bücher, 59). 19 <?page no="30"?> 1.2 Time Series Comparison with Dynamic Time Warping, Convolutional Neural Network and Regression [7] Müller, M., Mattes, H. and Kurth, F. 2006. An Efficient Multiscale Approach to Audio Synchronization, Proceedings of the 7th International Conference on Music Information Retrieval. Victoria, Canada, ISMIR , pp. 192-197. [8] Yu, Y. 2018. Analysis, Comparison and Interpretation of Multivariate Time Series. Master Thesis, Karlsruhe Institute of Technology. [9] Skansi, S. 2018. Introduction to Deep Learning : From Logical Calculus to Artificial Intelligence. Cham, Springer. (Undergraduate Topics in Computer Science SpringerLink : Bücher). [10] LeCun, Y., Bengio, Y. and Hinton, G. 2015. Deep Learning. Nature, Vol. 521, 436 EP -. [11] Lunze, J. 2013. Regelungstechnik 1. Systemtheoretische Grundlagen, Analyse und Entwurf einschleifiger Regelungen. 9th ed. Berlin, Springer. (Springer-Lehrbuch). (In ger). - 20 <?page no="31"?> 1.3 Time-Delay Estimation for Automotive Applications - Niklas Ebert, Frank Kirschbaum, Thomas Koch Abstract Time-delay estimation is a common challenge in transient engine calibration. A good estimation is required for subsequent studies of transient signal data, since signal calculations are affected by inaccurate time alignment. Time delays occur in recording data to different recorders, in case of a process duration time due to long tubes and for different measurment principles. Most automotive emission measurement equipment records substance concentrations. These need to be converted into emission mass flows such that they can be controlled against emission regulations. 
In European real driving emission law, the cross correlation is mandatory to be used to determine the delay time. Since this is a fast algorithm, within this paper it is proven to not be accurate for aligning the measured particle count with engine mass flow signals. In this context, model-based methods with linear dynamic models like autoregressive models with exogenous inputs (ARX) show promising results for time-delay prediction, even for uncorrelated signals. Using a created test case for which the actual time delay between two different, artifical created signals is known, the cross correlation and model-based methods are compared. The model-based methods yield more accurate results whilst the time-delay estimations with cross correlation deviate from the actual time delay. Moreover, the predicted time-delay results are partly physically implausible in some cases. This deviation is particularly large for signals with a low linear correlation. Hence, the emission evaluation results of real driving cycles can have significant errors. Kurzfassung Ein häufig auftretendes Problem in der transienten Kalibrierung von Motorsteuergeräten ist die Identifikation von Totzeiten. Verlangt wird diese bei der korrekten Ausrichtung von Signaldaten unterschiedlicher Signalsenken, oder für die Identifikation von Prozess Verzugszeiten durch Rohrleitungen oder verschiedener Messprinzipien. Die meisten Messgeräte für Emissionen nehmen Gaskonzentrationen auf. Die Emissionsgrenzen der Abgasgesetzgebung sind dagegen in Emissionsmassenströmen vorgegeben, sodass diese Konzentrationen in Massenströme umgerechnet werden müssen. Der für die Bestimmung von Totzeiten vorgeschriebene Algorithmus für die europäischen Real Driving Emissions (RDE) Verordnung, die Kreuzkorrelation, bietet hier eine schnelle Methode. Allerdings zeigen sich beispielsweise Grenzen für die Schätzung der Totzeit zwischen Messsignalen für Partikelkonzentrationen und Massenströmen. Hier bieten modellbasierte Methoden 21 <?page no="32"?> 1.3 Time-Delay Estimation for Automotive Applications mit linearen dynamischen Modellen wie ARX Modellen (autoregressive models with exogenous inputs) vielversprechende Ansätze für die genauere Bestimmung von Totzeiten, auch bei Anwendungen auf unkorrelierte Signale. In zwei Verifizierungsfällen mit künstlich erzeugten Signalen und bekannten Totzeiten wird die Methode der Kreuzkorrelation mit den modellbasierten Schätzungen verglichen. Ergebnisse der modellbasierten Ansätze liefern genauere Totzeiten, während die der Kreuzkorrelation von diesen abweichen. Kreuzkorrelation schätzt hier in manchen Fällen physikalisch unplausible Zeitverzüge. Unterschiedliche Signale mit niedriger linearer Korrelation beeinflussen die Genauigkeit der Schätzungen und haben einen Einfluss auf Auswertungen von RDE Fahrten. 1. Introduction A time-delay estimation (TDE) is a widely used method in processes of research and development in automotive applications. The engine calibration process with the objective of optimizing fuel consumption and emissions is highly dependent on accurate alignment of event data. Critical driving events are responsible for a high amount of emission and need to be aligned correctly to engine parameters in order to reach the right conclusions. Optimizing the relevant engine parameters is achieved by varying these and repeating the previously identified driving events on a test bench. The test plans are drafted using Design of Experiment (DoE) theories for statistical test planning. 
Modeling measured data and optimization of target quantities can be highly affected by inaccurate time-delay prediction. Common practical engineering TDE problems in engine calibration tasks are time delays in different signal recorders, time-delays of sensors due to measuring principles and transport delays of the exhaust mass flow. A TDE is also required and even mandatory for the current practical example of real driving emissions (RDE), which is dictated by law as a supplement to the laboratory worldwide harmonized light duty test cycle (WLTC) for all new car emissions tests since 1. September 2017 [1]. Since European emissions standards are defined in the physical unit of mass per unit length [2], car manufacturers are required to fulfill the norms for respectively carbon monoxide (CO), carbon dioxide (CO ), hydrocarbon (HC), oxides of nitrogen (NO ) and particulate number (PN). Emission measurement systems record emission signals in a unit of gas concentration, which are converted in post-processing with analysis tools in mass per unit length unit. In order to calculate track section emissions based on the mass per length unit correctly, recorded gas concentrations, exhaust mass flows, vehicle speed and other relevant engine data need to be time aligned. Here TDE has considerable influence in the correct shift of engine parameter signals to massand airflow signals and analogously to gas concentration signals. Further computation of incorrect aligned data can result in higher final emission results. The paper defines first the term of time-delay and summarizes groups of estimation methods. Several algorithms for TDE can be found in literature. In this work the principles of the selected methods are briefly introduced. Techniques of time domain estimation and model-based estimation by linear dynamic models are taken into account. This methods are examined in a verification test case on random transferred signals with additional noise and known time-delay. Finally portable emission 22 <?page no="33"?> 1.3 Time-Delay Estimation for Automotive Applications measurement systems (PEMS) test bench data is time aligned by using cross correlation function and a model-based estimation method with ARX. The influence of different TDE methods on integral emission results is outlined. 2. Estimation methods for time-delays A general TDE problem is defined according to [3] 𝑦 𝑘 𝐺 𝑧 𝑢 𝑘 𝑒 𝑘 𝐺 𝑧 𝑢 𝑘 𝑛 𝑒 𝑘 , (1) where a measured signal 𝑦 𝑘 for a complete model of 𝐺 𝑧 and the recorded noise 𝑒 𝑘 are given. This can be split up to a time-delay 𝑛 and a time-invariant linear transfer function 𝐺 𝑧 without time-delay. The time-delay can be interpreted as an apparent time-delay or a true time-delay [4]. The apparent time-delay is the time-delay that yields the best model quality and results by using more sophisticated models with higher model orders even if the original process may have no delay. The true timedelay represents the pure time-delay of the physical process with minimal assumption on the process dynamics [4]. Yet the highest model accuracy does not necessary match with the true time-delay [5]. A adapted summary of approaches to solve TDE problems for single-input-single-output systems of eq. (1) can be summarized like following [3]:  Time domain approximation methods.  Frequency-domain approximation methods.  Explicit time-delay parameter methods. A more broad overview for TDE techniques can be found in [3]. 
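Rendered in standard notation, the problem statement of Eq. (1) above can be reconstructed from the surrounding definitions (this restatement is an interpretation, not a quotation) as

\[
  y(k) \;=\; G(z)\,u(k) + e(k) \;=\; G'(z)\,u(k - n_k) + e(k),
  \qquad G(z) = z^{-n_k}\,G'(z),
\]

where u(k) is the input, e(k) the recorded noise, n_k the time delay in samples and G'(z) the delay-free, time-invariant linear transfer function.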
The within this paper presented methods concentrate on examine the true time-delay and not on the apparent time-delay. From the group of time domain approximation methods impulse response estimation or the cross correlation function are introduced. Model-based estimation methods are part of explicit time-delay parameter methods. 2.1. Time domain approximation methods An approach for TDE can be done using correlation analysis. The cross correlation function 𝑅 𝑛 (cf. [6]) measures similarity of two signals 𝑢 𝑘 and 𝑦 𝑘 in dependency of the time shift 𝑛 . The cross correlation function is given by 𝑅 𝑛 ∑ 𝑢 𝑘 𝑦 𝑘 𝑛 . (2) Here, the cross correlation function shifts the signal 𝑦 𝑡 for time step 𝑛 and multiplies signals 𝑢 𝑘 and 𝑦 𝑘 𝑛 . This product is integrated and then normalized over the time interval. The maximum value of the cross correlation function indicates the best similarity and hence an initial guess for the time-delay between two signals [7]. This cross correlation method for TDE is applicable for linear systems. In cases of nonlinear systems between input 𝑢 𝑘 and output 𝑦 𝑘 , cross correlation can fail to detect all nonlinear effects [8]. However, the usage of this approach for the TDE is regulated by the European Union law [9]. 23 <?page no="34"?> 1.3 Time-Delay Estimation for Automotive Applications A scaled linear dependency between two signals 𝑢 𝑘 and 𝑦 𝑘 can be expressed with the Pearsons correlation coefficient. This coefficient is defined as ratio between the covariance 𝜎 of two signals and to the product of standard deviations for each signal 𝜎 and 𝜎 [10]. The benefit of using the correlation analysis is the simplification of a linear relationship between two signals to one single value, scaled to the range 1, 1 . This value quantifies the degree of linear dependency between the signals. The Pearson correlation coefficient (cf. [10]) can be calculated with ρ ∑ ∑ ∑ . (3) Here 𝑢 and 𝑦 denotes the sample means of all sample points 𝑢 and 𝑦 . A Pearson correlation coefficient close to 1 or -1 corresponds to higher levels of correlation. Values equal to 1 or 1 indicate that the data points are located on a line. The less linear correlated the signals are, the closer is ρ to 0. By shifting signal data 𝑦 𝑘 𝑛 and calculating Pearsons ρ for each shift, the estimated time-delay is chosen for the 𝑛 , for which Pearson ρ is maximized. This approach is adjusted to the cross correlation function in eq. (2). A graphical approach to determine a time-delay is by estimating the impulse response 𝐺 𝑧 of an input and output of a system. A linear system can be described with the impulse response 𝐺 𝑧 [5] 𝑦 𝑘 𝐺 𝑧 𝑢 𝑘 𝑛 . (4) Here a parametric finite impulse response (FIR) model is used to estimate the impulse response [3]. Like ARX models, FIR models are linear in the parameters and can be estimated by LS [7]. The time-delay can be found by analyzing the impulse response plot with a confidence interval of sufficient standard deviations [5]. The start of the nonzero part of the impulse response until its response is equal to the time-delay [3]. However the FIR model order 𝑚 has to be chosen very large to include all 𝐺 𝑧 that are significantly different from zero, otherwise the approximation error would become too large [7]. 2.2. Explicit time-delay parameter methods Methods using linear dynamic models for TDE show a promising approach. Estimating several linear dynamic models with a set of time-delays yield in different model qualities. 
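Before these model-based estimators are discussed further, the two correlation-based estimators of Section 2.1 can be sketched in a few lines. The sketch below is an illustration under stated assumptions and not the regulated RDE implementation: only non-negative shifts are scanned, and the correlation sum of Eq. (2) is normalized by the overlap length.

# Sketch (assumed) of the correlation-based delay estimators of Section 2.1:
# shift y against u, score every shift, and return the best-scoring shift as n_k.
import numpy as np

def delay_by_cross_correlation(u, y, max_delay):
    """Eq. (2): the estimated delay maximizes R_uy(n) over the scanned shifts."""
    r = [np.mean(u[:len(u) - n] * y[n:]) for n in range(max_delay + 1)]
    return int(np.argmax(r))

def delay_by_max_pearson(u, y, max_delay):
    """Eq. (3): the estimated delay maximizes Pearson's rho between u(k) and y(k + n)."""
    rho = [np.corrcoef(u[:len(u) - n], y[n:])[0, 1] for n in range(max_delay + 1)]
    return int(np.argmax(rho))

# n_hat = delay_by_cross_correlation(u, y, max_delay=100)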
The estimation of the time-delay 𝑛 results from a subsequent comparison of the resulting model errors in choosing the best fit [3]. These delays do not necessarily correspond to the true delay but can also contain the apparent delay due to the approximation error of the model structure [4]. Mean square error (MSE) is the choice for a quantitative assessment of the estimation in this work and is defined as [5] MSE 𝜃 𝐸 ||𝜃 𝜃 || 𝐸 ∑ 𝜃 𝜃 , (5) An autoregressive with exogenous input (ARX) model is a widely applied linear dynamic model [7] and shown in eq. (6) [5]. The main advantage of estimating several 24 <?page no="35"?> 1.3 Time-Delay Estimation for Automotive Applications ARX models with different time-delays 𝑛 is the less required computational effort compared to more complex model structures [5]. For parameter estimation the method of least squares (LS) is used to fit models. ARX models with their characteristic polynomials and white noise 𝑒 𝑘 is given to 𝐴 𝑧 𝑦 𝑘 𝐵 𝑧 𝑢 𝑘 𝑛 𝑒 𝑘 . (6) Second or fourth order ARX models are recommended for the nominator 𝐵 and the denominator 𝐴 polynomials in TDE [11]. Another approach is to select higher ARX model orders and increase model quality further. Here, the initial estimated time-delay 𝑛 for the fourth order ARX model is used fixed and several ARX models with increasing orders 𝑛 and 𝑛 are identified. The best order is chosen with respect to validation data that is not used for the model estimation. The selected model orders 𝑛 and 𝑛 are used again to estimate a complete set of timedelays 𝑛 to determine the best fit. Comparisons for model orders with pole/ zero maps can be used as indication for model orders chosen to high. If 𝑜 zeros of the estimated transfer function almost compensate 𝑜 of its poles then model order m is chosen too high and the true order of the system is 𝑚 𝑜 [7]. ARX model is given by 𝑦 𝑘 𝑎 𝑦 𝑘 1 . . . 𝑎 𝑦 𝑘 𝑛 𝑏 𝑢 𝑘 𝑛 . . . 𝑏 𝑢 𝑘 𝑛 𝑛 1 𝑒 𝑘 . (7) In this paper instrumental variable (IV) method is used as a remedy against the consistency problems of the conventional ARX model estimation [7]. Usually emission sensor data are affected by measurement noise and IV method is a prospective choice to be compared to method of LS. The bias of the ARX model parameters are reduced by the use of IV method [7]. It is mentioned that ARX models yield to inaccuracies in strongly disturbed cases [7]. Therefore the more complex linear dynamic models Output-Error (OE) and autoregressive moving average with exogenous input (ARMAX) are considered in accordance with [3]. The identification of more complex model structures result in higher accuracy of the models and time-delays 𝑛 [11]. In contrast to ARX and ARMAX models, noise disturbs the process additively at the output 𝑦 𝑘 . The white noise 𝑒 𝑘 does not include the 1/ 𝐴 𝑧 polynomials and hence OE models are nonlinear in their parameters [7]. For the estimation of OE models 𝑦 𝑘 𝑢 𝑘 𝑛 𝑒 𝑘 (8) a multidimensional optimization with a numerical search has to be used [5]. Note that OE models are used for stable processes, since the OE predictor is unstable if the 𝐹 𝑧 polynomial is unstable [7]. Further since there is no noise transfer function for OE models eq. (8), OE models are more sensitive to low frequency disturbances [12]. Björklund [3] suggests an approach which combines the benefits of OE of being more accurate with the benefits of ARX models of having less computational effort to a prefiltered TDE method based on ARX models (called M ET 1 STRUC ) [3]. The algorithm is subdivided in three parts. 
In the first part an ARMAX model is estimated as

$A(z)\, y(k) = B(z)\, u(k - n_k) + C(z)\, e(k)$ .  (9)

The second step is a pre-filtering of the input u(k) and the output y(k) with the polynomial 1/C(z). In the last step several ARX models with a complete set of time-delays are estimated and the best time-delay n_k is selected. By pre-filtering, the bias error due to an incorrect noise model is reduced [3]. Since an ARMAX model is nonlinear in its parameters due to the noise transfer function with C(z), its parameters need to be estimated by a numerical search. Thus, the MET1STRUC algorithm is extended to estimate a state space model first and then convert it to an ARMAX model structure. This conversion is possible if the polynomial orders of A(z), B(z) and C(z) are chosen high enough to describe the system [3]. The extended MET1STRUC algorithm can be summarized as:

1. Estimate a state space model, eq. (10), with the subspace method.
2. Convert it to an ARMAX model structure, eq. (9).
3. Pre-filter u(k) and y(k) through 1/C(z).
4. Estimate several ARX models with a complete set of time-delays and choose the best n_k.

A linear single-input, single-output state space model is described by [7]

$\dot{x}(t) = A\, x(t) + b\, u(t), \qquad y(t) = c^{T} x(t) + d\, u(t) + e(t)$ ,  (10)

with the state vector x, the output y, the noise e and the parameters in A, b, c and d [7]. The subspace method is used for the state space model estimation [5]. All time-delay methods are implemented in MATLAB (The MathWorks, Inc.) using the System Identification Toolbox [13]. All signals are pre-processed to a uniform sampling frequency prior to TDE, using an anti-aliasing FIR low-pass filter of order m = 10 and fulfilling the Nyquist-Shannon sampling theorem [4].

3. Verification of the time-delay estimation methods

Two test signals, shifted by a pre-selected time-delay, are created to verify the suitability of the presented TDE techniques. A uniformly distributed random signal u(t) is generated as a verification reference for the different TDE methods. The signal is distributed in the range [0, 20] and a linear trend is added. The random signal is transformed by both a linear and a nonlinear state space model to obtain the outputs y_lin(t) and y_nl(t). All methods are evaluated to draw conclusions regarding their stability with increasing nonlinearity. The resulting time-delays from the different estimation methods can then be assessed regarding their stability when applied to correlated and uncorrelated data. The maximum order for the model-based methods is set to m = 8. The general form of linear single-input, single-output state space models is given by eq. (10). The following system serves as the linear test case; its output y_lin(t) is shown in fig. 1.

$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} -14.27 & -26.87 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t), \qquad y(t) = \begin{bmatrix} 4.03 & 45.10 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + u(t)$   (11)

A linear state space model can be extended to nonlinear dynamic systems by eq. (12). For nonlinear cases, state space models can be expressed as [7]

$\dot{x}(t) = h\big(x(t), u(t)\big), \qquad y(t) = g\big(x(t)\big)$   (12)

$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -0.1 & -0.075 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t), \qquad y(t) = 2\cos\big(x_1(t)\big)\,\sin\big(x_2(t)\big)$   (13)

A possible nonlinear test case state space model for transforming the random input u(t) is shown in eq. (13); its output y_nl(t) is shown in fig. 1. The verification data outputs y_lin(t) and y_nl(t) are artificially shifted by n_k = 23 such that the true time-delay is known. Hence, the accuracy of the different estimation methods can be directly assessed.
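To make the verification procedure tangible, the following minimal MATLAB sketch shows how such a comparison could be set up with the System Identification Toolbox. The test system, the candidate delay range and the ARX orders are illustrative assumptions and not the exact configuration of this study; only the general workflow (delayed test signal, cross correlation maximum, ARX delay sweep with minimum prediction error) corresponds to the methods described above.

```matlab
% Minimal TDE verification sketch (illustrative system, orders and ranges; not the
% exact configuration of this study). Requires the Signal Processing and System
% Identification Toolboxes.
rng(1);
N  = 2000;
t  = (0:N-1)';
u  = 20*rand(N,1) + linspace(0, 5, N)';        % uniform random input with linear trend
G  = c2d(tf(1, [1 14.27 26.87]), 1);           % example linear test system, Ts = 1
y  = lsim(G, u, t);
nk = 23;                                       % pre-selected true time-delay in samples
yd = [zeros(nk,1); y(1:end-nk)];               % artificially delayed output

% time domain approximation: lag of the cross correlation maximum
[R, lags] = xcorr(yd - mean(yd), u - mean(u));
[~, i]    = max(R);
nk_xcorr  = lags(i);

% explicit time-delay parameter method: ARX(4,4,nk) sweep, smallest prediction error wins
data = iddata(yd, u, 1);
cand = 0:60;
mse  = zeros(size(cand));
for k = 1:numel(cand)
    e      = pe(arx(data, [4 4 cand(k)]), data);
    mse(k) = mean(e.y.^2);
end
[~, k]  = min(mse);
nk_arx  = cand(k);
% note: both estimates may include the test system's own lag on top of the shift of 23
```

Sweeping the delay while keeping the model orders fixed, and then comparing the prediction errors, mirrors the pattern described above for the explicit time-delay parameter methods.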
Figure 1: Verification test cases with the random signal u and the outputs of the linear state space model y_lin(t) and the nonlinear state space model y_nl(t). Both output signals are time-delayed by n_k = 23.

The Pearson correlation coefficient for the linear case is ρ = 40 % without consideration of a time-delay between the two signals. For the nonlinear test case it is ρ = 4 %, which does not indicate a linear relationship. The results of the different TDE methods are shown in tab. 1 for the linear and the nonlinear test case. Note that for OE models the polynomial orders of A(z) are not relevant; the orders of F(z) are given instead.

Table 1: Results of the estimated time-delays with different model approaches based on the two verification test cases.

                      Linear test case             Nonlinear test case
Method                n_k    n_a or n_f    n_b     n_k    n_a or n_f    n_b
Cross correlation     23     -             -       -47    -             -
Max. Pearson ρ        23     -             -       148    -             -
ARX 2nd order         23     2             2       34     2             2
ARX 4th order         23     4             4       34     4             4
ARX LS                23     3             1       47     1             1
ARX IV                23     1             1       47     1             1
MET1STRUC             23     8             1       37     7             1
OE (n = 1)            23     1             8       34     1             1
OE (n = 2)            23     2             8       34     2             4

All TDE methods estimate n_k = 23 accurately for the linear test case. The Pearson coefficient is maximized at the shift n = 23; it rises from ρ = 40 % without consideration of a time-delay to ρ = 73 %. OE methods cause the highest computing time due to their more comprehensive parameter estimation. The ARX model orders obtained with the LS and IV methods are 3 and below. With the initially guessed time-delay, the MSE is minimized for ARX LS model orders of n_a = 3 and n_b = 1. The IV method minimizes the MSE with model orders n_a = n_b = 1. For the OE models, the estimated order reaches the upper limit of 8. The pre-filtered ARX method MET1STRUC gives the same time-delay n_k = 23 with orders 8 and 1.

For the nonlinear test case, none of the estimation methods yields the time-delay n_k = 23. Cross correlation and the Pearson estimation give results that deviate strongly from the TDE results of the model-based methods; the cross correlation even estimates a physically implausible negative time-delay. The most accurate estimation is given by the OE models and the ARX models of 2nd and 4th order with n_k = 34. With an initially guessed time-delay of n_k = 34, the ARX estimation yields best model orders of n_a = n_b = 1. Repeated estimation over a complete set of time-delays results in a higher time-delay of n_k = 47 for the method of LS as well as for the IV method. The elapsed time for TDE with ARX models is 2 % of the total computation time with OE models in the nonlinear case.

The results of the cross correlation function as a function of the shift n are plotted in fig. 2 for the linear and the nonlinear test case. Because of the highly nonlinear system, the cross correlation function shows several local minima and maxima. For the nonlinear case the algorithm yields a time-delay of n = -47 at the lowest point of the function. The linear case shows one global maximum at the correct time-delay of n = 23. Estimating the start of the impulse response G(z) and interpreting the time-delay graphically does not deliver the expected results; it is therefore excluded from further investigation.

Figure 2: Cross correlation R_uy of the linear and the nonlinear test case as a function of the shift n. The extremum, and hence the estimated time-delay, appears at n = 23 for the linear case and at n = -47 for the nonlinear case.

4. Results of time-delay estimation on emission consumption

Difficulties in TDE occur in the time alignment of test bench emission data with engine control unit signals.
The influence of TDE on RDE and WLTC test bench data is analyzed by comparing cumulative emission bag results obtained with different time-delay methods. TDE based on ARX models with model orders n_a = n_b = 4 is chosen for comparison with the cross correlation results. TDE with OE models is not included because of the higher computational time; the verification study recommends 4th order ARX models as giving stable results at low computing time. The choice of method can affect the resulting time-delays. All data is collected from gasoline engines that use PEMS for the emission measurement.

The cumulative bag results are calculated by converting the concentrations r_i into component mass flows using the exhaust mass flow and the averaged molecular weight of the exhaust M_exh:

$\dot{m}_i = r_i \, \dot{m}_{exh} \, \frac{M_i}{M_{exh}}$ .  (14)

The averaged molecular weight of the exhaust is given by the sum of the molecular masses of the exhaust components weighted by their concentrations:

$M_{exh} = \sum_i r_i \, M_i$ .  (15)

For the PN flow, eq. (16) applies. The calculation is simplified by assuming the exhaust density to be constant, using its value at atmospheric pressure and a temperature of 273.15 K [9]:

$\dot{m}_{PN} = r_{PN} \, \frac{\dot{m}_{exh}}{\rho_{exh}}$ .  (16)

The further analysis of the gas emissions NOx, CO, CO2 and PN is based on cycle-aggregated masses, which are calculated by integrating the emission mass flow rates over the cycle duration. The mass results of the components m_i are evaluated in relation to the entire cycle distance s, as shown in

$m_i = \frac{1}{s} \int \dot{m}_i \, \mathrm{d}t$ .  (17)

When correlating engine data with the particle count sensor signal for PN, the Pearson coefficient ranges from about ρ = 0 % up to 10 %, indicating nonlinear behavior. The scaled exhaust mass flow and PN concentration of a segment of an RDE test case are plotted in fig. 3.

Figure 3: Scaled signals of exhaust mass flow ṁ_exh and particle number concentration c_PN for a range of 100 s. The Pearson correlation coefficient is ρ = 1.8 %.

In this case the cross correlation method produces physically implausible time-delays in almost all RDE cases. The cross correlation function for the data with and without the cold start section (fig. 4) shows many local maxima, as in the nonlinear verification case. Since PN sensors show cross-sensitivity to carbon dioxide and carbon monoxide especially in cold start phases, the examination concentrates on regular operating conditions; the cold start phase is considered completed 5 min after engine start. In the RDE 2 case, including the cold start section changes the cross correlation TDE for PN to a much higher time-delay of n = 2124 instead of n = -164. In the RDE 3 case the cross correlation with cold start results in a time-delay of n = 491, whereas on the same data without the cold start section it results in n = 15. The ARX model TDE shows only small deviations in this case, with n = 13 with cold start and n = 9 without. Small deviations can also be observed in the RDE 2 case, where the ARX model TDE results in time-delays of n = 29 both with and without cold start. In the RDE 2 case the cross correlation estimates the time-delay for PN incorrectly at a high negative value of n = -164, therefore no further investigation of the emission mass flow is made for this case. However, the cross correlation does estimate time-delays for all gas concentrations r_i relative to the exhaust mass flow ṁ_exh.

Figure 4: The cross correlation function R of the exhaust mass flow ṁ_exh and the particle number concentration c_PN in the RDE 2 case gives a maximum at n = 2124 and, without the cold start section (CS), at n = -164.
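As an illustration of eqs. (14), (15) and (17), the following MATLAB sketch shows how a distance-specific cycle emission could be computed once a time-delay has been estimated. The function name, the signal layout (column vectors with a constant sample time Ts) and the padding at the end of the shifted signal are assumptions made for this sketch, not part of the cited regulation.

```matlab
% Sketch: distance-specific cycle emission of one component after time alignment,
% following eqs. (14), (15) and (17). Inputs: concentration r_i, exhaust mass flow
% mdot_exh, molar masses M_i and M_exh (eq. 15), cycle distance s, sample time Ts
% and the estimated time-delay nk in samples (illustrative helper, not from the paper).
function m_dist = cycleEmission(r_i, mdot_exh, M_i, M_exh, s, Ts, nk)
    % advance the lagging concentration signal by nk samples to align it
    r_al   = [r_i(1+nk:end); repmat(r_i(end), nk, 1)];
    mdot_i = r_al .* mdot_exh .* (M_i / M_exh);   % component mass flow, eq. (14)
    m_dist = trapz(mdot_i) * Ts / s;              % cycle mass per driven distance, eq. (17)
end
```

Evaluating this once with the estimated time-delay and once with nk = 0 directly shows the influence of the time alignment on the cumulative bag result, which is the comparison presented next.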
The emission consumption results of the components NOx, CO, CO2 and PN for the different TDE methods are shown in the bar chart in fig. 5. Cycle data of two RDE tests and one WLTC are evaluated. The WLTC data contains no PN measurement, therefore an additional RDE case is compared for PN. The respective gas emission is scaled by the result obtained without TDE. Note that the time-delays contain the process duration as well as the sensor reaction time. The results of the TDE of the emissions against the exhaust mass flow data are summarized in tab. 2. The TDE based on the ARX model estimation yields values of around n = 4, while the cross correlation estimates around n = 7 in the RDE cases for the components NOx, CO and CO2.

Figure 5: Scaled emission consumption of the components NOx, CO, CO2 and PN and the influence of the used TDE method on the emission quantity.

A comparison of the bars shows large differences for the emission masses of PN, CO2 and NOx, whereas for the CO data the cross correlation and the ARX method give quite similar results. The estimated time-delays for CO differ by 2 to 3 samples in the RDE cases and by 7 in the WLTC, as shown in tab. 2. The cross correlation method yields higher time-delays than the ARX method for the gas emissions. The differences in the CO emission consumption are relatively small, in the range of 7 to 10 % in the RDE cases and 4 % in the WLTC case. The NOx emissions differ clearly between the used TDE methods. The model-based estimation with ARX models results in higher NOx emission masses than TDE with the cross correlation function. For the WLTC case the cross correlation gives a time-delay of n = 16, which is higher than the ARX time-delay of n = 8. In relation to the mass without TDE, the NOx mass with ARX-based TDE is 10 % higher in the RDE 1 case, 20 % higher in the RDE 2 case and 25 % higher in the WLTC.

Table 2: Summary of the TDE estimates of cross correlation and ARX models for the emissions relative to the exhaust mass flow data.

                  ARX                                 Cross correlation
Emission     RDE 1      RDE 2      WLTC         RDE 1      RDE 2      WLTC
NOx          n = 4      n = 3      n = 8        n = 7      n = 8      n = 16
CO           n = 5      n = 5      n = 8        n = 8      n = 7      n = 15
CO2          n = 3      n = 4      n = 10       n = 7      n = 9      n = 26

Emission     RDE 1      RDE 2      RDE 3        RDE 1      RDE 2      RDE 3
PN           n = 31     n = 29     n = 9        n = 4      n = -164   n = 15

The effect of the used TDE methods on CO2 is higher emission results. In both RDE cases the time-delays estimated by cross correlation increase the emission consumption by 30 %. The lower time-delays of the ARX-based TDE result in 10 % higher CO2 masses. For the WLTC both TDE methods give the same deviations, even though the cross correlation time-delay of n = 26 is particularly high. The strongest influence of the TDE methods can be seen in the plot of the particle number in fig. 5. As shown in fig. 4, the cross correlation function fails to detect a reasonable time-delay in the RDE 2 case. The ARX model-based estimation results in a time-delay of n = 29 and a PN value that is 8 % higher. The largest deviation between both methods is apparent in the RDE 1 case: TDE with the cross correlation function gives 30 % less PN than without TDE, whereas TDE with ARX models gives 70 % more PN. In the RDE 3 case the number of particles differs only slightly between the TDE methods; the difference of the estimated time-delays is n = 6.

5. Conclusion

This paper addresses the challenging problem of TDE in the automotive calibration of transient signal data.
The application example is the time alignment of emission data with exhaust mass flow data in order to fulfill emission standards. Several TDE methods for the engine calibration of transient data are introduced and evaluated. All methods show good results for linearly correlated data. The results of a nonlinear verification case demonstrate increasing inaccuracy for the time domain approximation methods as well as for the explicit time-delay parameter methods. In this case the cross correlation function produces far larger deviations of the time-delays than the model-based methods. All methods are applicable to measurement data from test benches, but incorrect TDE can be observed when working with highly uncorrelated data. These test cases show large differences in the resulting time-delays for the particle number signal PN. In the RDE 1 case, TDE with ARX models leads to 70 % higher emission consumption and TDE with the cross correlation method to 30 % lower emission consumption compared to no TDE. Pre-processing of the data can reduce inaccuracies in the TDE by screening out special driving events such as cold start sections from the estimation. If the TDE with the cross correlation function is inaccurate, a plot of the function over the time shift can help to identify a better local maximum or minimum for the correct time-delay. Model-based methods should be applied where a more stable estimation is required. TDE methods based on ARX models with low model orders appear to have fewer limitations in the estimation of time-delays and show less variation in the resulting time-delays than the cross correlation method. The TDE of the gas emission CO shows small deviations between the results of the different TDE methods, which can be considered negligible, since these signals are more strongly correlated. When working with less correlated data such as CO2 and NOx, clear trends depending on the used TDE method can be observed: for NOx, TDE with ARX models results in higher emission consumption, while for CO2 the opposite is observed. Future investigations will concentrate on nonlinear dynamic model approaches for TDE, e.g. nonlinear ARX models, in order to obtain a more reliable algorithm for uncorrelated data.

References

[1] L. Caudet, M. Talko, New and improved car emissions tests become mandatory on 1 September, Brussels, 2017.
[2] Amtsblatt der Europäischen Union, Verordnung (EU) 2017/1347 der Kommission vom 13. Juli 2017 zur Berichtigung der Richtlinie 2007/46/EG des Europäischen Parlaments.
[3] S. Björklund, A survey and comparison of time-delay estimation methods in linear systems, Linköping University, Linköping, 2003.
[4] A.K. Tangirala, Principles of system identification: Theory and practice, CRC Press, Boca Raton, FL, 2015.
[5] L. Ljung, System identification: Theory for the user, 2nd ed., Prentice Hall PTR, Upper Saddle River, NJ, 2012.
[6] T. Kuttner, Praxiswissen Schwingungsmesstechnik, Springer Vieweg, Wiesbaden, 2015.
[7] O. Nelles, Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models, Springer, Berlin, Heidelberg, 2001.
[8] S.A. Billings, Nonlinear system identification: NARMAX methods in the time, frequency, and spatio-temporal domains, John Wiley & Sons, Chichester, West Sussex, 2013.
[9] Amtsblatt der Europäischen Union, Verordnung (EU) 2016/427 der Kommission vom 10. März 2016 zur Änderung der Verordnung (EG) Nr. 692/2008 hinsichtlich der Emissionen von leichten Personenkraftwagen und Nutzfahrzeugen (Euro 6).
[10] H.-J.
Mittag, Statistik: Eine Einführung mit interaktiven Elementen, 4th ed., Springer Spektrum, Berlin, Heidelberg, 2016. [11] R. Dittmar, Advanced Process Control: PID-Basisregelungen, Vermaschte Regelungsstrukturen, Softsensoren, Model Predictive Control, De Gruyter Oldenbourg, München, Wien, 2017. [12] C. Bohn, H. Unbehauen, Identifikation dynamischer Systeme: Methoden zur experimentellen Modellbildung aus Messdaten, Springer Vieweg, Wiesbaden, 2016. [13] Lennart Ljung, System Identification Toolbox: User's Guide, 5th ed., 2017. 34 <?page no="45"?> 2 MBC I 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model Kento Fukuhara, Daniel Rimmelspacher, Wolf Baumann, Yutaka Murata, Yui Nishio Abstract Due to the increased powertrain complexity and the demand for an efficient and robust development process, application of simulation technology has become more and more popular. Even though well-established modelling methodology and simulation tools are available, their application field of powertrain simulation is still limited due to the expensive computational load. Hence, model-based powertrain calibration, which usually involves hundreds of optimization parameters, is operated rather manually. In this context, it is desirable to develop new Model-Based Calibration (MBC) methodologies for the extended utilization of simulation technology. In this paper, a cold start calibration of a Diesel engine is conducted by using numerical optimization with a dynamic engine simulation model. The model is capable to predict engine behaviour under defined transient operation and varying coolant temperature. Calibration results are shown and advantages of automated calibration process are discussed. 1 Introduction Engine calibration is a series of processes to determine the optimal system control strategy with defined hardware and software. The goal of engine calibration is to maximize engine performances (e.g. fuel consumption) while achieving defined requirements such as legislative requirements (i.e. emission norms), system requirements (e.g. durability, component protection, etc.) and customer requirements (e.g. drivability, noise, robustness, etc.). Due to the increasing number of control parameters in modern Diesel engines, a calibration process typically involves up to several hundreds of calibration parameters, which are stored in the form of lookup tables in the Engine Control Unit (ECU). Model-Based Calibration (MBC) has been identified as a valid methodology for increasing calibration efficiency and quality in order to compensate the increasing process complexity to be handled [1], [2], [3]. By using data-driven engine models, engine performance can be evaluated several hundred times faster than real time. This characteristic is suitable for solving such problems involving a large number of optimization parameters. While the application field of MBC has been rather limited to steady state calibrations, it is now desirable to extend MBC methodologies to transient calibrations. 35 <?page no="46"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model There are three main topics to be addressed in this paper: 1. Formulation of the engine calibration task as an optimization problem. 2. Application of new methodologies to accelerate the numerical optimization. 3. Application of the transient engine model in the automated calibration. 
The general description of the calibration task and its formulation as an optimization problem are discussed in section 2. In section 3, an example of automated calibration and its results are discussed. 2 Automated Calibration Using Numerical Optimization Automated calibration is a general term for the calibration parameter search which is driven by the numerical optimization algorithm. Numerical optimization is a general technique to find a best possible combination of parameters within a defined optimization problem. There are three key factors for the successful execution of automated calibration: 1. Formulation of the optimization problem, which is possible to solve for the optimization algorithm, 2. Selection of a suitable optimization algorithm, 3. Consideration of optimization size in terms of the number of optimization parameters and the function evaluation speed. 2.1 Formulation of Optimization Problem Calibration tasks are typically defined with “calibration engineer language” such as: 1. Reduce NOx emission (calibration target) 2. at hot WLTC cycle test (test condition) 3. by changing EGR map (calibration parameters) 4. while keeping Soot emission below xx g/ km (calibration criteria/ boundaries) In numerical optimization, above arguments are translated into the following form: min 𝒑 𝑓 𝒑, 𝒖 such that 𝑐 𝒑, 𝒖 0 𝑨 ∙ 𝒑 𝒃 𝒑 lb 𝒑 𝒑 ub 1 where 𝒑 : Optimization parameters (calibration parameters) 𝒖 : Other influence parameters (test condition) 𝑓 𝒑, 𝒖 : Objective function (calibration target) 𝑐 𝒑, 𝒖 0 : Non-linear constraints (calibration criteria) 𝑨 ∙ 𝒑 𝒃 : Linear constraints (calibration criteria) 𝒑 𝒑 𝒑 : Optimization parameter range (calibration criteria) 36 <?page no="47"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model While calibration parameters are usually stored in the form of 1-D or 2-D lookup table matrix 𝑷 (𝑛 × 𝑚 matrix), optimization parameters are typically defined as a vector 𝒑 (𝑛 ∙ 𝑚 × 1 vector): 𝑷 𝑝 , ⋯ 𝑝 , ⋮ ⋱ ⋮ 𝑝 , ⋯ 𝑝 , 2 𝒑 vec 𝑷 ⎣⎢⎢⎢⎡ 𝑝 , 𝑝 , ⋮ 𝑝 , 𝑝 , ⎦ ⎥⎥⎥⎤ 3 where the scalar value 𝑝 , is an optimization parameter at the position of 𝑖 and 𝑗 (𝑖 1, 2, … 𝑛 and 𝑗 1, 2, … 𝑚) in matrix 𝑷. In the opposite way, optimization parameter vector 𝒑 can be transformed to matrix 𝑷 by using indexing technique ( 𝒑 , vec 𝑷 ). This operation is essential for converting between optimization parameter vector and lookup table matrix. 𝑷 vec 𝒑 vec 𝑷 ∙ ⋯ vec 𝑷 ∙ ⋮ ⋱ ⋮ vec 𝑷 ∙ ⋯ vec 𝑷 ∙ 𝑝 , ⋯ 𝑝 , ⋮ ⋱ ⋮ 𝑝 , ⋯ 𝑝 , 4 Since it is not always the case that the whole table values are optimized, but just selected table points, it is necessary to employ index matrix 𝑳 used for the preparation of optimization parameter vector 𝒑. In the opposite way, 𝑷 matrix is reproduced by using transposed index matrix 𝑳 and offset vector 𝒅. 𝒑 𝑳∙vec 𝑷 1 0 0 0 0 1 0 0 0 0 0 1 ∙ 𝑎 𝑏 𝑐 𝑑 𝑎 𝑏 𝑑 5 𝑷 vec 𝑳 ∙ 𝒑 𝒅 vec 1 0 0 0 1 0 0 0 0 0 0 1 ∙ 𝑎 𝑏 𝑑 0 0 𝑐 0 vec 𝑎 𝑏 𝑐 𝑑 6 In the case of optimization with more than one lookup table, calibration parameters have usually different ranges. In order to assure a stable and robust optimization, the normalization (conditioning) of calibration parameters is applied. The normalization is usually applied according to the valid or predefined parameter range ([𝑝 min 𝑝 max ]). 
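A minimal MATLAB sketch of this table-to-vector handling (vec operation, index matrix L, offset vector d and range normalization as in eqs. (2) to (6)) is given below; the small table and the selected entries are purely illustrative.

```matlab
% Sketch of the lookup-table <-> optimization-vector handling described above
% (vec operation, index matrix L, offset vector d, range normalization).
P = [1 2; 3 4];                   % calibration lookup table in matrix form
v = P(:);                         % vec(P): stack the columns into one vector
L = [1 0 0 0;                     % index matrix: optimize entries 1, 2 and 4 of vec(P)
     0 1 0 0;
     0 0 0 1];
p = L * v;                        % optimization parameter vector (selected entries)

pmin = 0; pmax = 10;              % predefined parameter range used for normalization
pn   = (p - pmin) ./ (pmax - pmin);            % normalized (conditioned) parameters

% reconstruction: de-normalize, scatter back with L', add offset d for fixed entries
d    = (eye(4) - L'*L) * v;       % offset vector keeps the non-optimized table entry
Prec = reshape(L' * (pn .* (pmax - pmin) + pmin) + d, size(P));
```

The same two operations, applied per table, are all that is needed to move between the lookup tables stored in the ECU dataset and the single parameter vector seen by the optimizer.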
Finally, a set of calibration parameter matrixes 𝑷 𝑘 1, 2, … 𝑙 can be expressed in the following form: 37 <?page no="48"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model 𝒑 ⎣⎢⎢⎢⎡norm 𝑳 ∙vec 𝑷 norm 𝑳 ∙vec 𝑷 ⋮ norm 𝑳 ∙vec 𝑷 ⎦⎥⎥⎥⎤ 7 Reconstruction of a table matrix 𝑷 can be successfully achieved by introducing another index matrix 𝑴 (similar to 𝑳 matrix): 𝑷 vec 𝑳 ∙ denorm 𝑴 ∙ 𝒑 𝒅 8 where 𝑴 ∙ 𝒑 1 0 0 0 ⋯ 0 0 1 0 0 ⋯ 0 0 0 1 0 ⋯ 0 0 0 0 1 ⋱ 0 ∙ ⎣⎢⎢⎢⎢⎡𝑎 𝑏 𝑐 𝑑 𝑒 ⋮ ⎦ ⎥⎥⎥⎥⎤ 𝑎 𝑏 𝑐 𝑑 9 With above explained process, calibration parameter matrixes 𝑷 and optimization parameter vector 𝒑 can be transformed in both ways. 2.2 Acceleration of Numerical Optimization Optimization with a computationally expensive simulation model brings additional challenges. In this regard, the following measures are considered: I. Approximation of the optimization problem a. Approximation of optimization parameters (e.g. parameter reduction) b. Approximation of simulation model (e.g. surrogate modelling) II. Acceleration of the simulation a. Application of parallel computing b. Acceleration of the simulation speed via system simplification III. Improvement of optimization efficiency via intelligent algorithm a. Limitation of search design space via system input constraints In the following section, first (I) and second (II) points are discussed. 2.2.1 Parameter Approximation For the acceleration of the optimization, parameter approximation technique is used in order to reduce the total number of degrees of freedom. Thereby the original set of optimization parameters 𝒑 is replaced by a sub-set vector 𝒑′. Since the optimization parameter 𝑝, stored in 𝑷 matrix, has coordination information 𝑥 and 𝑦, 𝑝 can e.g. be expressed by using a polynomial function with identified polynomial coefficients 𝑝′ (2 nd order polynomial in the following example). 38 <?page no="49"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model 𝑝 , 𝑝′ 𝑝′ ∙ 𝑥 , 𝑝′ ∙ 𝑦 , 𝑝′ ∙ 𝑥 , ∙ 𝑦 , 𝑝′ ∙ 𝑥 , 𝑝′ ∙ 𝑦 , 10 Where 𝒑 ⎣⎢⎢⎢⎡ 𝑝 , 𝑝 , ⋮ 𝑝 , 𝑝 , ⎦ ⎥⎥⎥⎤ 𝒙 ⎣ ⎢⎢⎢⎡ 𝑥 , 𝑥 , ⋮ 𝑥 , 𝑥 , ⎦ ⎥⎥⎥⎤ 𝒚 ⎣ ⎢⎢⎢⎡ 𝑦 , 𝑦 , ⋮ 𝑦 , 𝑦 , ⎦ ⎥⎥⎥⎤ 11 Now, the optimization parameter vector 𝒑 (𝑛 ∙ 𝑚 × 1 vector) is approximated by a so called regressor matrix 𝑹 and polynomial coefficient vector 𝒑′ (5 × 1 vector). 𝒑 𝑹 ∙ 𝒑 ⎣⎢⎢⎢⎢⎡1 𝑥 , 𝑦 , 𝑥 , ∙ 𝑦 , 𝑥 , 𝑦 , 1 𝑥 , 𝑦 , 𝑥 , ∙ 𝑦 , 𝑥 , 𝑦 , ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ 1 𝑥 , 𝑦 , 𝑥 , ∙ 𝑦 , 𝑥 , 𝑦 , 1 𝑥 , 𝑦 , 𝑥 , ∙ 𝑦 , 𝑥 , 𝑦 , ⎦ ⎥⎥⎥⎥⎤ ∙ ⎣ ⎢⎢⎢⎢⎢ ⎡ 𝑝 𝑝 𝑝 𝑝 𝑝 𝑝 ⎦ ⎥⎥⎥⎥⎥ ⎤ 12 The regressor matrix is constructed according to applied polynomial order and selected terms. Furthermore, not only the polynomial function, but also other mathematical functions, such as radial basis, can be applied. Once the regressor matrix is set up, initial set of subset parameters 𝒑′ is obtained by linear regression. It is noted that the exact reconstruction of 𝒑′ from 𝑷 is not possible anymore with the parameter approximation process (this is why the process is called parameter “approximation”). At the end, the optimization problem can be formulated with approximated optimization parameters by the following expression. min 𝒑 𝑓 𝑹 ∙ 𝒑 , 𝒖 such that ⎩ ⎨ ⎧𝑐 𝑹 ∙ 𝒑 , 𝒖 0 𝑹 ∙ 𝒑′ 𝒑 lb 𝑹 ∙ 𝒑′ 𝒑 ub -inf 𝒑′ inf 13 Optimization parameter range constraint (𝒑 𝒑 𝒑 ) is not used anymore and linear constraints (𝑨 ∙ 𝒑 𝒃) are used instead in order to restrict parameter ranges in 𝒑 domain. Performance of above explained parameter approximation technique is examined with a simple optimization problem. 
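Before turning to that evaluation, the following MATLAB fragment sketches the approximation step itself: a regressor matrix for a 2nd-order polynomial over the two map axes is built and the initial sub-set parameters are obtained by linear regression. The axis ranges and the surface used here are invented for illustration and do not correspond to the maps of the application example.

```matlab
% Sketch of the parameter approximation of section 2.2.1: a map stored over axes
% x and y is approximated by a 2nd-order polynomial surface p = R*p_sub.
[xg, yg] = ndgrid(1000:500:4000, 10:10:80);   % map axes (e.g. speed, injection amount)
x = xg(:); y = yg(:);
p = 0.002*x - 0.05*y + 1e-6*x.*y;             % some initial map values, vectorized

% regressor matrix for a 2nd-order polynomial (constant, x, y, x*y, x^2, y^2)
R = [ones(size(x)) x y x.*y x.^2 y.^2];

p_sub  = R \ p;          % initial sub-set parameters by linear least squares
p_aprx = R * p_sub;      % approximated optimization parameters

% during the optimization only the six entries of p_sub are varied; map bounds then
% become linear constraints of the form R*p_sub <= p_ub and -R*p_sub <= -p_lb
```

With this reduction the optimizer sees six coefficients instead of the full set of table values, which is the effect exploited in the comparison below.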
In the optimization problem, a torque estimation model as shown in figure 1 is used. The objective of the optimization is to minimize the error between the torque estimated by the model and the observed torque (Torque_Error in the figure) at 2812 test points (combinations of speed, load and ignition timing). Additionally, a constraint is applied so that the estimated torque is always larger than the observed torque (Torque_Error_Positive < 0 in the figure). In the optimization, three maps and one curve, in total 176 optimization parameters stored in the torque estimation structure, are optimized. The optimization is executed using the MATLAB optimizer fmincon with the active-set algorithm. In order to evaluate the efficiency of the parameter approximation approach, different optimization setups (direct optimization without parameter approximation, different polynomial orders) are compared. For a better comparison, the optimizations are terminated after exceeding a certain number of function evaluations (5000).

Figure 1: Optimization problem used for the evaluation

Table 1 shows the optimization results. By applying the parameter approximation technique, the number of optimization parameters is reduced by up to 80 % compared to the direct optimization. At almost the same number of function evaluations, the setup with the conditioned 3rd order polynomial shows the best cost and constraint progression. Figure 2 shows the convergence of cost and constraint violation. One can observe that the optimization setups with parameter approximation converge faster. This indicates that a certain degree of generalization of the optimization problem leads to a better and faster convergence. On the other hand, it is assumed that the achievable optimality of the optimization parameters will be limited due to the reduced degrees of freedom.

Table 1: Optimization results with different optimization parameter setups

Optimization set-up                  Number of optimization parameters   Number of function evaluations   Cost     Constraint violation
Direct optimization                  176                                 5,017                            1000.2   1418.9
Polynomial, 3rd order                34                                  5,005                            762.2    1.7034
Polynomial, 5th order                69                                  5,030                            1659.8   1394.8
Conditioned polynomial, 3rd order    34                                  5,016                            283.1    0.2
Conditioned polynomial, 5th order    69                                  5,050                            366.1    623.4

Figure 2: Convergence of cost and constraint violation

2.2.2 Application of Parallel Computing

Thanks to the latest developments in computer science and the surrounding software and hardware environment, the application of parallel computing techniques has become more and more popular. In many programming languages, such as MATLAB, a parallel computing feature is implemented and applicable without the need for a deep insight into the technology. For numerical optimization with a huge number of function evaluations, parallel computing brings a significant contribution. In order to evaluate the advantage of parallel computing, an investigation is carried out using the same optimization problem as in section 2.2.1. In this investigation, a Windows PC (Intel Core i7-4770 CPU @ 3.40 GHz, RAM 32.0 GB) and the MATLAB parallel computing feature are used together with fmincon and the active-set algorithm. With this setup, the computation in the gradient-determination phase can be parallelized.
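The corresponding MATLAB setup could look roughly like the sketch below; the small quadratic objective and the bounds are placeholders standing in for the torque-error cost, not the actual implementation of the investigated problem.

```matlab
% Sketch: fmincon with the active-set algorithm and parallel finite-difference
% gradients, mirroring the setup described above (placeholder objective and bounds).
costFcn = @(p) sum((p - linspace(1, 2, 176)').^2);   % stand-in cost over 176 parameters
p0 = zeros(176, 1);
lb = -5*ones(176, 1);
ub =  5*ones(176, 1);
opts = optimoptions('fmincon', ...
    'Algorithm',              'active-set', ...
    'UseParallel',            true, ...              % parallel gradient evaluation
    'MaxFunctionEvaluations', 5000);
p_opt = fmincon(costFcn, p0, [], [], [], [], lb, ub, [], opts);
```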
Table 2 shows the investigation results. With an increasing number of workers, the optimization can be accelerated by up to a factor of 3.22. However, the number of parallel workers does not correspond directly to the acceleration factor. This is because of the initialization time required for the function evaluations and the line-search calculation, which cannot be parallelized. Nevertheless, the parallel computing feature brings a clear advantage in terms of the effective time needed to solve the optimization problem.

Table 2: Optimization results with different number of parallelizations

Optimization set-up                  Number of parallelizations   Number of function evaluations   Duration in sec   Acceleration factor
Direct optimization with 1 worker    1                            5,017                            601.3             1.00
Direct optimization with 2 workers   2                            5,017                            332.1             1.81
Direct optimization with 3 workers   3                            5,017                            268.7             2.24
Direct optimization with 4 workers   4                            5,017                            228.9             2.63
Direct optimization with 8 workers   8                            5,017                            186.9             3.22

2.3 IAV Optimization Framework

For the execution of optimizations, a MATLAB based optimization tool, the "IAV Optimization Framework" [5], has been developed. Figure 3 shows the basic idea of the Optimization Framework. The tool is designed to handle various types of simulation models and calibration tasks. By importing a simulation model, the user can set up the optimization problem via a graphical user interface. Optimization parameters are selected from the model variables available in the simulation, and optimization objectives and constraints are built from existing simulation signals. Standard MATLAB optimizers (optimization algorithms) are used in the tool; the available options, such as parallel computation and the used optimization algorithm, can be configured via the graphical user interface.

Figure 3: Concept of IAV Optimization Framework

Figure 4 shows the internal data handling in the Optimization Framework. The optimization parameter handling and the building of the objective function and constraints explained in the previous sections are implemented in the tool, hence no mathematical consideration or knowledge is necessary for the setup of an optimization problem.

Figure 4: Schematic illustration of the internal data handling process

3 Calibration of Coolant Temperature Correction Maps

3.1 Temperature Correction Structure

As an application example, coolant temperature correction maps are calibrated. Figure 5 (left side) shows a respective implementation in which the demand value for the main injection timing is determined with respect to the associated operating parameters, i.e. engine speed, injection amount and coolant temperature. Base demand values are stored in the base map (phiBase). The coolant temperature correction, which is the product of the offset (phiOffset) and the factor (phiFactor) values, is then added to the base value. On the right side of figure 5, examples of corrected maps at coolant temperatures of 35, 65 and 85 °C are shown.
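A short MATLAB sketch of how such a correction structure evaluates at an operating point is given below; the axis vectors and table values are small illustrative stand-ins, not taken from the actual ECU calibration.

```matlab
% Sketch of the correction structure in figure 5: the demand injection timing is the
% base map value plus factor(Tw) times the offset map value (illustrative tables).
nAxis  = [1000 2000 3000];            % engine speed axis in rpm
qAxis  = [10 30 50];                  % injection amount axis in mg/str
twAxis = [-10 25 60 90];              % coolant temperature axis in degC

phiBase   = [2 4 6; 3 5 7; 4 6 8];    % base map, speed rows x injection-amount columns
phiOffset = [3 3 2; 3 2 2; 2 2 1];    % offset map in degCA
phiFactor = [1.0 0.8 0.4 0.0];        % factor curve over coolant temperature

n = 1500; q = 20; Tw = 35;            % current operating point
phiDemand = interp2(qAxis, nAxis, phiBase,   q, n) ...
          + interp1(twAxis, phiFactor, Tw) ...
          .* interp2(qAxis, nAxis, phiOffset, q, n);
```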
In this example, the injection timing is advanced in order to adjust the combustion center, which is retarded by the slower combustion under cold engine condition. Figure 5: Example of an environmental correction maps implemented in ECU 3.2 Optimization Parameters In this study, coolant temperature correction maps/ curves for 4 combustion parameters are calibrated: main injection timing, rail pressure, boost pressure and fresh air mass amount. Those 4 combustion parameters have big impact on the combustion and have to be adjusted in order to assure a stable and robust combustion under cold engine condition. Table 3 and figure 6 show all correction maps/ curves calibrated in this study. Note that some of the factor values are stored as maps with two leading parameters (coolant temperature and engine speed/ injection amount). Table 3: Summary of calibration labels # Label Unit Description Remarks 1 phiOffset °CA BTDC Main injection timing offset map 256 parameters (16x16) 2 phiFactor - Main injection timing factor curve 12 parameters (1x12) 3 pRailOffset hPa Rail pressure offset map 256 parameters (16x16) 4 pRailFactor - Rail pressure factor curve 8 parameters (1x8) 5 pBoostOffset hPa Boost pressures offset map 256 parameters (16x16) 6 pBoostFactor - Boost pressures factor map 256 parameters (16x16) 7 mAirOffset mg/ str Fresh air mass offset map 256 parameters (16x16) 8 mAirFactor - Fresh air mass factor map 256 parameters (16x16) 44 <?page no="55"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model Figure 6: Coolant temperature correction maps/ curves The automated calibration is carried out with a transient Diesel engine model discussed in the following section (section 3.2). The engine model is designed to simulate engine warm-up condition for given operating traces. By tuning the combustion correction maps, optimal combustion settings are searched considering the fuel consumption and tailpipe emissions. 3.3 Transient Diesel Engine Model In this section, transient Diesel engine model used in the automated calibration is discussed. Figure 6 shows the 1.6L 4-cylinder turbo-charged Diesel engine considered in this study. The engine model is setup in a MATLAB Simulink environment and consists of three main components as described in figure 7: ECU, engine and exhaust aftertreatment. The model is capable to simulate tailpipe emission and engine related parameters with defined operating traces such as engine speed and demand break torque. Furthermore, the influence of ambient pressure, temperature and coolant temperature is considered. Since the engine is equipped with Lean NOx Trap (LNT) catalyst, engine operation with rich combustion, so called LNT re-generation, is simulated as well. In terms of the computation effort, certain degree of simplification is applied in the model. Coolant temperature as well as rich operation traces are pre-defined and fed as simulation input. The ECU model is simplified to contain only base ECU structures. No controller (i.e. feedback control) is implemented. In addition, a fixed simulation step size of 0.1 sec is applied for acceleration. Thanks to applied measures, the simulation runs around 50 times faster than real time while keeping the simulation accuracy. 
Figure 7: 1.6L 4-cylinder turbo-charged Diesel engine with HP- and LP-EGR

Figure 8: Overview of engine model setup

3.3.1 Dynamic DoE Engine Model

A dynamic engine model is created using the dynamic DoE technique [4]. As model inputs, the main combustion parameters such as engine speed, brake torque, main injection timing, rail pressure, boost pressure, fresh air mass amount and LP-EGR fraction are selected. Additionally, the influence of ambient pressure, ambient temperature, coolant temperature and rich operation is considered. As model outputs, the engine-out emissions as well as the exhaust gas temperature and mass flow are simulated. The predicted demand injection amount, which is necessary for the calculation of the demand combustion parameters, is fed to the ECU model.

3.3.2 Engine Control Unit (ECU) Model

The core ECU structure is implemented according to the software specification. The implemented ECU structure determines the demand combustion settings from engine speed, demand injection amount, coolant temperature, ambient temperature and pressure. The model consists of several 1-D and 2-D lookup tables. Calibration data (e.g. lookup table values) are stored in the MATLAB workspace. The simulation accuracy of the implemented ECU is limited by this simplification. However, it is assumed that the implemented model structure is accurate enough for the prediction of cycle emissions.

3.3.3 Aftertreatment Model

For the simulation of the tailpipe NOx, a simple LNT model is implemented. The model consists of several lookup tables and simulates the NOx adsorption and desorption behavior considering the engine-out NOx and several influence parameters. The characteristics of the NOx adsorption and desorption behavior are captured in the lookup tables. The table values were calibrated using real measurement data by minimizing the model error; this calibration process is similar to that of the torque estimation model explained in section 2.2.1.

3.4 Formulation of Optimization Problem

Applying the numerical optimization technologies explained in section 2, the 8 calibration maps/curves shown in table 3 are calibrated, considering two driving cycles, namely NEDC and WLTC (figure 9). Both cycles simulate the engine warm-up phase with an initial coolant temperature of 25 °C. For the WLTC cycle, rich operation is also simulated. The objective of the optimization is the minimization of fuel consumption. In the objective term, map smoothing constraints c_d2p(P) (averaged second derivatives [6], [7]) are added as penalty terms with appropriate penalty factors.
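A possible form of such a smoothing measure, based on averaged squared second differences of the table values, is sketched below; the exact formulation used in the Optimization Framework follows [6] and [7] and may differ in detail.

```matlab
% Sketch of a map smoothness penalty from averaged squared second differences,
% in the spirit of the smoothing constraints c_d2p(P); the framework's exact
% formulation per [6], [7] may differ.
function c = smoothnessPenalty(P)
    d2r = diff(P, 2, 1);                   % second differences along the map rows
    d2c = diff(P, 2, 2);                   % second differences along the map columns
    c   = mean([d2r(:); d2c(:)].^2);       % averaged squared curvature of the table
end
```

Weighted with a penalty factor and added to the cycle fuel consumption, such a term discourages rough maps without constraining individual table values.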
As for the constraints, tailpipe NOx and soot limits are defined for both, NEDC and WLTC. Furthermore, gradient constraints 𝑐 dpylb 𝑷 and 𝑐 dpyub 𝑷 [6] [7] are applied to achieve desired map/ curve shape (monotonous increase/ decrease of table values over an axis). Finally, parameter range constraints are added. Threshold values for the constraints are determined experimentally during the initial investigation and appropriate values are applied. Despite the high number of optimization parameters, almost all calibration table values are optimized without parameter approximation. Only boost pressure and fresh air mass factor maps (pBoostFactor and mAirFactor), which have exceptionally an additional second axis, are approximated by using a specially designed regressor matrix explained in the section 2.2.1. By applying the 0 th order polynomial for the second dimension (i.e. only a constant term), the degree of freedom on the second axis is eliminated. Total number of optimization parameters is 459. 47 <?page no="58"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model Figure 9: Applied simulation cycles in the optimization Overall, the optimization problem is described with the following arguments: min 𝒑 𝑤 NEDC ∙ 𝑚 fuel 𝑡, 𝒑, 𝒖 𝑑𝑡 𝑤 WLTC ∙ 𝑚 fuel 𝑡, 𝒑, 𝒖 𝑑𝑡 𝜎 ∙ max 0, 𝑐 d p 𝑷 14 such that ⎩ ⎪ ⎪ ⎪⎪ ⎪ ⎪⎪ ⎪ ⎪⎪ ⎪ ⎪ ⎪ ⎨ ⎪⎪ ⎪ ⎪⎪ ⎪ ⎪ ⎪⎪ ⎪ ⎪⎪ ⎪ ⎧ 𝑚 NOx 𝑡, 𝒑, 𝒖 𝑑𝑡 𝑐 0 𝑚 NOx 𝑡, 𝒑, 𝒖 𝑑𝑡 𝑐 0 𝑚 Soot 𝑡, 𝒑, 𝒖 𝑑𝑡 𝑐 0 𝑚 Soot 𝑡, 𝒑, 𝒖 𝑑𝑡 𝑐 0 𝑐 dpylb 𝑷 𝑐 dpxub 𝑷 𝑐 dpxlb 𝑷 𝑐 dpylb 𝑷 𝑐 dpxub 𝑷 𝑐 dpxlb 𝑷 𝑐 dpxlb 𝑷 𝑐 dpxub 𝑷 𝒑 lp,1 𝒑 𝒑 up,1 𝒑 lp,2 𝒑 𝒑 up,2 𝒑 lp,3 𝒑 𝒑 up,3 𝒑 lp,4 𝒑 𝒑 up,4 𝒑 lp,5 𝒑 𝒑 up,5 𝑹 ∙ 𝒑 𝒑 lb,6 𝑹 ∙ 𝒑 𝒑 ub,6 𝒑 lp,7 𝒑 𝒑 up,7 𝑹 ∙ 𝒑 𝒑 lb,8 𝑹 ∙ 𝒑 𝒑 ub,8 Rich [-] Speed [km/ h] Tw [°C] Ne [rpm] Tq [Nm] Cold NEDC without rich operations Cold WLTC with rich operations 48 <?page no="59"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model where 𝒑 𝑴 ∙ 𝒑, 𝑘 1, 2, … 8 15 𝑷 vec 𝑳 ∙ denorm 𝒑 𝒅 for 𝑘 1, 2, 3, 5, 7 vec 𝑳 ∙ denorm 𝑹 ∙ 𝒑 𝒅 for 𝑘 6 and 8 16 𝑷 : Main injection timing offset map 𝑷 : Main injection timing factor curve 𝑷 : Rail pressure offset map 𝑷 : Rail pressure offset curve 𝑷 : Boost pressure offset map 𝑷 : Boost pressure factor map 𝑷 : Fresh air mass offset map 𝑷 : Fresh air mass factor map and 𝒖 𝑢 𝑡 , 𝑢 𝑡 , 𝑢 𝑡 , 𝑢 𝑡 𝒖 𝑢 𝑡 , 𝑢 𝑡 , 𝑢 𝑡 , 𝑢 𝑡 3.5 Optimization Results Since there is NOx-Soot/ CO2 trade-off in Diesel engine calibration, three optimization runs are conducted in order to check the achievable CO2 by varying the NOx limit value. In order to find appropriate NOx limit values, a cycle simulation is performed with a reference calibration dataset (Reference 1). After the simulation, 71, 75 and 85% from the obtained tailpipe NOx result are defined as NOx limit. In the same way, soot threshold is also defined from the reference calculation result (180% from the reference). Finally, the optimizations are executed by using MATLAB optimizer fmincon and sqp algorithm. Table 4 shows the cycle emission results (accumulated fuel mass flow, tailpipe NOx and soot at NEDC and WLTC cycle) calculated with reference and optimized calibration datasets. Reference 1…3 are results with calibration datasets, which are created with conventional calibration process as reference. Optimization 1…3 are results obtained from the automated calibration process. Reference 1, 2 and 3 result show the NOx-Soot/ CO2 trade-off as the baseline. Optimization 1…3 shows NOx-CO2 trade-off with almost constant soot value as desired. 
Considering achieved NOx reduction and acceptable CO2 increase, optimization 1 is selected as the best calibration dataset. Table 5 shows a summary of the optimization duration. Three optimizations are executed with parallel computing feature (number of workers = 4…8). The number of function evaluations reaches up to 60,000. Due to the high number of optimizations parameters, optimizations took several days. Figure 10 shows the optimized maps/ curves (orange) in comparison to the reference maps/ curves (blue). One can observe that smooth maps/ curves are generated thanks 49 <?page no="60"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model to the gradient and the smoothing constraints. As for factor curves, monotonous increase of table values is achieved by gradient constraints. Table 4: Cycle simulation results with reference and optimized calibration datasets Criteria Cycle Reference 1 Reference 2 Reference 3 Abs %* Abs %* Abs %* Cumulated CO2 NEDC 278.5 100.0 287.5 103.2 288.0 103.4 Cumulated NOx NEDC 286.9 100.0 282.5 98.5 157.1 54.8 Cumulated Soot NEDC 151.5 100.0 362.7 239.5 259.2 171.1 Cumulated CO2 WLTC 780.8 100.0 803.3 102.9 804.5 103.0 Cumulated NOx WLTC 2916.4 100.0 2577.6 88.4 2071.4 71.0 Cumulated Soot WLTC 728.1 100.0 1638.9 225.1 1363.9 187.3 Criteria Cycle Optimization 1 Optimization 2 Optimization 3 Abs %* Abs %* Abs %* Cumulated CO2 NEDC 282.1 101.3 276.7 99.4 273.3 98.1 Cumulated NOx NEDC 146.8 51.2 103.2 36.0 95.8 33.4 Cumulated Soot NEDC 272.8 180.1 272.8 180.1 272.8 180.0 Cumulated CO2 WLTC 784.3 100.5 780.8 100.0 778.4 99.7 Cumulated NOx WLTC 2071.0 71.0 2187.6 75.0 2312.8 79.3 Cumulated Soot WLTC 1339.9 184.0 1313.3 180.4 1284.2 176.4 %*; relative to Reference 1 result Table 5: Optimization results with different optimization parameter setups Optimization set-up Number of workers Number of optimization parameters Number of function evaluations Optimization duration (h) Optimization 1 8 459 49,888 213.6 Optimization 2 4 459 60,137 288.5 Optimization 3 4 459 60,216 127.0 Figure 10: Optimized maps (orange surface) in comparison to original maps (blue surface) Of f set Factor Tw in °C Of fset Factor Tw in °C Off set Factor Factor Off set #1 phiOffset #2 phiFactor #3 pRailOffset #4 pRailFactor #5 pBoostOffset #6 pBoostFactor #7 m AirOffset #8 m AirFactor 50 <?page no="61"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model 3.6 Verification Measurement Finally, verification measurements are conducted by applying the obtained optimized dataset. Since base maps in optimization 1 dataset are also calibrated with the automated calibration procedure, the dataset is verified with hot and cold emission cycles (cold WLTC and hot RDE with rich operations). Table 6 and figure 11 show the verification results. While optimization 1 result indicates very good NOx-soot/ CO2 performance under cold condition (WLTC), a satisfying NOxsoot/ CO2 performance is observed under hot condition (RDE). However, slight deviation between the model prediction and measurement results is observed. One of the reasons for this result may be the lacking simulation accuracy caused by the simple simulation setup. Hence, further improvement of the model predictive accuracy is desirable in this regard. Overall, the feasibility of the automated calibration with a cycle simulation is confirmed with this study. 
Table 6: Verification measurement results with cold WLTC and hot RDE

                           Reference 1       Reference 2       Reference 3       Optimization 1
Criteria          Cycle    Abs      %*       Abs      %*       Abs      %*       Abs      %*
Cumulated CO2     WLTC     118.3    100.0    121.3    102.5    120.3    101.7    118.8    100.4
Cumulated NOx     WLTC     225.0    100.0    109.0    48.4     129.0    57.3     148.0    65.8
Cumulated Soot    WLTC     25.0     100.0    76.0     304.0    56.0     224.0    51.0     204.0
Cumulated CO2     RDE      121.3    100.0    123.5    101.8    123.5    101.8    122.3    100.8
Cumulated NOx     RDE      217.0    100.0    112.0    51.6     128.0    59.0     154.0    71.0
Cumulated Soot    RDE      32.0     100.0    -        -        65.0     203.1    66.0     206.3

%*: relative to Reference 1 result

Figure 11: Verification measurement results with cold WLTC and hot RDE. (a) Cold WLTC verification measurement results; (b) hot RDE verification measurement results.

4 Conclusion

In this paper, the calibration of coolant temperature correction maps is conducted using numerical optimization together with a dynamic engine simulation model. In the first part, the basic handling of the calibration parameters and the construction of the objective and constraint functions are explained. When computationally expensive simulation models are used, additional techniques are required in order to compensate the increased computational load. In this regard, parameter reduction and parallel computing techniques are investigated. In the second part, the applied engine simulation model and the optimization setups are explained. As a result of the automated calibration and the subsequent verification measurements, satisfying calibration results are obtained. On the other hand, the required optimization duration is still very long and a further reduction of the calculation time is desirable; further investigation of time reduction measures has to be carried out in the future. Improvement of the simulation accuracy is also an issue to be investigated. However, it is important to consider the trade-off between simulation accuracy and computational load. Finally, the application of more sophisticated optimization algorithms such as genetic algorithms should be considered to avoid local optima. In this regard, one should also consider the drawback of the increased number of function evaluations necessary for such algorithms. The benefit of the automated calibration is the efficient and systematic parameter search, which cannot be achieved by manual operation. On the other hand, the huge number of function evaluations requires a certain amount of calculation time, and the formulation of the optimization problem requires a certain amount of engineering knowledge. In this respect, the development of an optimization environment which contains all necessary features described in this paper is desirable.

Literatur

[1] W. Baumann, T. Dreher, K. Röpke, S. Stelzer: DoE for Series Production Calibration, 7th Conference on Design of Experiments (DoE) in Engine Development, 2013
[2] C. Haukap, B. Barzantny, K.
Röpke: Model-based calibration with data-driven simulation models for non-DoE experts, 6th Conference on Simulation and Testing for Automotive Electronics, 2014 [3] Y. Murata, Y. Kato, T. Kanda, M. Sato: Application of Model Based Calibration to Mass Production Diesel Engine Development for Indian Market, 8th Conference on Design of Experiments (DoE) in Engine Development, 2015 [4] K. Fukuhara, D. Rimmelspacher, W. Baumann, K. Röpke, Y. Murata, Y. Nishio, M. Kikuchi, Y. Yamaya: Dynamic MBC Methodology for Transient Engine Combustion Optimization, Automotive Data Analytics, Methods, DoE Proceedings of the International Calibration Conference, 2017 [5] D. Rimmelspacher, W. Baumann, F. Akbaht, K. Röpke, C. Gühmann: Usability of computer-aided ECU calibration, 7th International Symposium on Development Methodology 52 <?page no="63"?> 2.1 Automated Calibration Using Numerical Optimization with Dynamic Engine Simulation Model [6] J. Poland: Finding smooth maps is NP-complete, Information Processing Letters 85 (2003) 249-253 [7] A. Nessler , C. Haukap, K. Roepke: Global Evaluation of the Drivability of Calibrated Diesel Engine Maps, 2006 IEEE Conference on Computer Aided Control Systems Design Munich, Germany, October 4-6, 2006 53 <?page no="64"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development Christian Friedrich, Christian Kunkel, Matthias Auer Abstract In order to reduce costs, time and emissions of engine tests in the development process of modern large bore engines for ships and power plant applications, modelbased calibration techniques are used to a greater extent during the last years. However, the resulting number of engine tests utilizing model-based calibration techniques are still quite high and further improvements of the model-based calibration techniques are needed. Based on this need a new methodology has been developed, describing the transfer of modelling results between two engines of the same series with minor differences like the number of cylinders or fuel properties. Therefore, as a first step regression models for chosen steady state engine outputs of engine A are built. To generate the transfer functions between engine A and engine B only a significantly reduced amount of measurements from engine B is necessary. Utilizing the regression models of engine A, predictions of the steady state engine outputs of the engine A for the given engine set points of the engine B measurements are calculated. Two different strategies for generating the transfer functions have been pursued. Between predicted model values of engine A and the corresponding engine B measurement values, delta values (differences) or factor values (quotients) can be determined. Performing a new regression with the resulting delta or factor values in dependency of the engine set points so-called delta and factor transfer functions can be generated. With the help of these transfer functions additional data points for engine B can be calculated and used to build models of the steady state engine outputs of engine B afterwards. As a last step, with these engine B models the engine behavior of engine B can be optimized and engine maps of the engine control unit based on the optimization results can be filled without any further testing. 
The applicability of the methodology is shown exemplarily for a transfer of modelling results from a single cylinder to a full scale engine which is the most challenging scenario of the methodology's utilization in terms of large bore engine development due to differences in friction and applied models of the turbocharger behavior for the single cylinder engine tests. Kurzfassung Um die Versuchsumfänge, -kosten und die daraus resultierenden Emissionen bei der Applikation moderner Großmotoren für Schiffs- und Kraftwerksanwendungen zu reduzieren, kommen seit einigen Jahren verstärkt modellbasierte Applikationsmetho- 54 <?page no="65"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development den zum Einsatz. Trotz Einführung dieser Methoden stellen die resultierenden Versuchsumfänge immer noch einen erheblichen Aufwand dar. Aus diesem Grund sind weitere Optimierungen der modellbasierten Applikationsmethoden erforderlich. Für eine weitere Reduzierung dieser Versuchsumfänge wurde eine Methodik entwickelt, welche einen Übertrag von Modellen zwischen zwei Motoren derselben Baureihe beschreibt, die sich nur geringfügig, wie z.B. in der Anzahl der Zylinder oder in den Eigenschaften des verwendeten Kraftstoffes, unterscheiden. Hierzu werden im ersten Schritt Regressionsmodelle ausgewählter Motorausgangsgrößen von Motor A gebildet. Für die Erzeugung der Transferfunktionen zwischen Motor A und Motor B wird ein deutlich reduzierter Umfang an Messdaten von Motor B benötigt. Unter Verwendung der Regressionsmodelle von Motor A werden Vorhersagen der Motorausgangsgrößen von Motor A für die gegebenen Einstellgrößen der Messdaten von Motor B berechnet. Bei der Generierung der Transferfunktionen wurden zwei verschiedene Strategien verfolgt. Zwischen den vorhergesagten Modellausgangswerten von Motor A und den dazugehörigen Messwerten von Motor B können entweder Delta- Werte (Differenzen) oder Faktor-Werte (Quotienten) bestimmt werden. Durch erneute Regression dieser Delta- oder Faktorwerte in Abhängigkeit der gegebenen Einstellgrößen können sogenannte Delta- oder Faktor-Transferfunktionen erzeugt werden. Unter Zuhilfenahme dieser Transferfunktionen können zusätzliche Datenpunkte für Motor B berechnet werden und anschließend für eine erneute Modellbildung von Motor B herangezogen werden. Abschließend kann mit den Modellen von Motor B das Motorverhalten von Motor B optimiert und Kennfelder für das Motorsteuergerät bedatet werden, ohne dass weitere Motortests notwendig wären. Die Anwendbarkeit der Methodik wird beispielhaft für einen Übertrag von Modellergebnissen von einem Einzylinderauf einen Vollmotor gezeigt, was die größte Herausforderung des Einsatzes der Methodik in der Großmotorenentwicklung darstellt, da sich die Motoren bezüglich Reibung stark unterscheiden und für den Einzylinderbetrieb Modelle, wie z.B. für das Turboladerverhalten, benötigt und eingesetzt werden. 1 Introduction and motivation In the recent years stricter emissions legislation with a simultaneous reduction of fuel oil consumption led to an utilization of modern technologies as common rail injection systems (CR), variable valve trains (VVT), variable turbine geometry (VTG) or exhaust gas aftertreatment systems (diesel particulate filters DPF or selective catalytic reduction SCR) in automotive and even large bore engines. As a result, the number of calibration factors of modern engines to be optimized during calibration increased significantly [1], [2]. 
In order to manage the enormous calibration effort, model-based calibration techniques have been introduced. While their use for automotive engine calibration is almost state of the art and has been investigated extensively during the last 25 years [3], [4], [5], their application to the calibration of low and medium speed large bore engines is quite new and has been investigated only to a smaller extent during the last decade [6], [7], [8], [9].

Compared to automotive engines, low and medium speed large bore engines show much higher absolute fuel consumption and need longer to reach steady state operation, because component temperatures take longer to stabilize. The high fuel costs and long stabilization times limit the number of measurable data points for model-based calibration purposes. Table 1 gives an overview of typical boundary conditions for automotive and large bore engine calibration.

Table 1: Chosen boundary conditions for automotive and large bore engine calibration.

Criterion | Automotive engines | Large bore engines
absolute fuel oil consumption | up to 55 l/h (Opel 2.2l ECOTEC @ 100% load) [10] | up to 4600 l/h (MAN 18V48/60 @ 100% load)
duration of stabilization | ~ 2 min | ~ 15 - 30 min
unmanned test bed operation | state of the art | rarely used
amount of measurements per application | often 1000 and more | often less than 200

As these boundary conditions show, the number of measurable data points of large bore engines is very limited even if model-based calibration techniques are applied. To reduce the amount of required measurements further, in addition to model-based calibration techniques, the new methodology for transferring modelling results between engines of the same series presented in this paper was developed. Note that this methodology does not only help to reduce development costs but also reduces emissions and greenhouse gases.

2 The new methodology

In this chapter, the new methodology for transferring modelling results between two engines will be introduced. After some fundamentals regarding modelling techniques and model evaluation, the basic principle of the methodology will be described and every single step of the methodology's workflow will be explained in detail.

2.1 Fundamentals of modelling techniques

The new methodology uses different model types and statistical criteria for model evaluation. Therefore, the fundamentals of three model types (polynomials, radial basis function networks and Gaussian process models) and selected statistical criteria for model evaluation are described briefly.

2.1.1 Polynomials

In the context of model-based engine calibration, polynomials are one of the most frequently used model approaches [3], [4]. A general polynomial of degree l and dimension p for an objective y can be expressed by the following equation [11]:

y = w_0 + \sum_{i=1}^{p} w_i u_i + \sum_{i_1=1}^{p} \sum_{i_2=i_1}^{p} w_{i_1,i_2} u_{i_1} u_{i_2} + \dots + \sum_{i_1=1}^{p} \cdots \sum_{i_l=i_{l-1}}^{p} w_{i_1,\dots,i_l} u_{i_1} \cdots u_{i_l}   (1)

The first term w_0 is the offset, the terms w_{i_1,\dots,i_l} are the model coefficients and u_i with i = {1, …, p} are the model inputs.
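To make the structure of equation (1) concrete, the short sketch below builds the full polynomial regressor matrix for p inputs and fits the coefficients by ordinary least squares, as formalised in equations (2) and (3) that follow. It is an illustration only: the input names and the synthetic data are assumptions, not measurements from this paper, and it uses NumPy and scikit-learn rather than the calibration toolbox referenced in [12].

```python
# Illustrative sketch (not the authors' implementation): polynomial regressors
# for p = 4 inputs fitted by ordinary least squares, cf. equations (1)-(3).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical engine inputs u = (load, charge air temperature, rail pressure, SOI)
n, p = 60, 4
U = rng.uniform([25.0, 40.0, 900.0, -10.0], [100.0, 60.0, 1600.0, 5.0], size=(n, p))

# Synthetic output standing in for a measured engine output y (e.g. fuel consumption)
y = 190.0 - 0.1 * U[:, 0] + 0.02 * U[:, 1] - 0.005 * U[:, 2] + 0.3 * U[:, 3] ** 2
y += rng.normal(0.0, 0.5, size=n)               # measurement noise

for degree in (2, 3):
    basis = PolynomialFeatures(degree=degree, include_bias=False)
    Phi = basis.fit_transform(U)                # regressors u_i, u_i*u_j, ... as in eq. (1)
    model = LinearRegression().fit(Phi, y)      # least squares estimate of the coefficients, eq. (3)
    n_coeff = Phi.shape[1] + 1                  # regressors plus the offset w_0
    print(f"degree {degree}: {n_coeff} coefficients, R^2 = {model.score(Phi, y):.3f}")
```

The printed coefficient counts (15 for a full second order and 35 for a full third order polynomial in four inputs) also correspond to the minimum test plan sizes quoted later in section 3.1.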
Using M basis functions \varphi_i to describe any combination of the p model inputs u, and the coefficients \theta_i, the general polynomial equation can be written in the simplified form:

\hat{y}(u) = \theta_0 + \sum_{i=1}^{M} \theta_i \varphi_i(u)   (2)

In order to fit the polynomial to n measured data points, the coefficients \theta_0 and \theta_i are estimated with the help of least squares regression. The sum of the squared errors e(i) between model predictions \hat{y}(i) and measured data points y(i) is minimized according to:

\sum_{i=1}^{n} e^2(i) = \sum_{i=1}^{n} \left[ y(i) - \hat{y}(i) \right]^2 = \left( y - \Phi\theta \right)^2 \rightarrow \min   (3)

2.1.2 Radial basis function networks (RBFs)

Another modelling approach applied within the new methodology are radial basis function networks (RBFs) with one hidden layer, whose neurons all represent radial basis functions of the same type. In general, RBFs are a subclass of artificial neural networks (ANNs), whose basic structure is shown in Figure 1.

Figure 1: Basic structure of an artificial neural network with one hidden layer according to [11].

The model inputs u_1, u_2, …, u_p are used to determine the activations x_1, x_2, …, x_M for each of the M neurons in the hidden layer, each containing a transfer function \varphi_i. If the activation x_i exceeds a pre-defined threshold, the neuron and its transfer function are activated. By accumulating the M neuron outputs, each multiplied by a weighting factor \theta_i, the model output \hat{y} is obtained and can also be expressed with the simplified polynomial equation (2).

There are various radial basis functions used in radial basis function networks. Commonly used functions are, among others, linear, inverse multiquadratic, logistic or Gaussian functions. Additional radial basis functions as well as the mathematical description of each function can be found in [12]. In RBFs, a radial construction mechanism is applied to determine the neurons' activations. The radial basis functions describing the transfer functions of the neurons are specified by two parameters, center c and width σ. The activation x_i of any neuron i is given by the Euclidean distance between the input vector u and the RBF center c_i [11]:

x_i = \left\| u - c_i \right\|   (4)

In order to estimate the coefficients \theta_i of an RBF network, the regularized least squares method is used [13]. A regularization parameter \lambda_{reg} augments the sum of squared errors in equation (3) to the regularized sum of squared errors:

\left( y - \Phi\theta \right)^2 + \lambda_{reg}\, \theta^2 \rightarrow \min   (5)

Neuron outputs with high weights \theta_i are penalized by the term \lambda_{reg}\theta^2 in order to generate a smooth model course within the given design space. When performing a regression on measured data points according to equation (2), the number of neurons M, the radial basis function parameters c and σ of each neuron as well as the weights \theta_i and the offset \theta_0 have to be determined. For this purpose, numerical algorithms such as the Orthogonal Least Squares or the Regularized Orthogonal Least Squares algorithm described in [13] and [14] are applied.

2.1.3 Gaussian process models (GPMs)

Gaussian process models are the third modelling approach used within the new methodology. Following a Bayesian approach, out of various candidate function families the one whose mean represents the measured dataset with the highest probability is chosen [15].
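Before continuing with the Gaussian process details, the following small sketch illustrates the RBF construction of section 2.1.2: Gaussian basis functions are evaluated on a set of fixed centres and the weights are estimated with the regularised least squares criterion of equation (5). The centre placement, the width σ, the regularisation parameter and the synthetic data are arbitrary assumptions for the demonstration; the Orthogonal Least Squares algorithms of [13], [14], which select the neurons automatically, are not reproduced here.

```python
# Illustrative sketch of an RBF network with Gaussian basis functions,
# cf. equations (2), (4) and (5). Not the (regularized) OLS algorithm of [13], [14].
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1D example: inputs u and a noisy target y (assumed data)
u = np.linspace(0.0, 10.0, 40).reshape(-1, 1)
y = np.sin(u[:, 0]) + 0.1 * rng.normal(size=u.shape[0])

# Fixed centres c_i and width sigma (assumed values for the demonstration)
centres = np.linspace(0.0, 10.0, 8).reshape(-1, 1)
sigma = 1.5
lam_reg = 1e-3                                    # regularisation parameter

# Activations x_i = ||u - c_i|| (eq. 4) and Gaussian transfer functions
dist = np.linalg.norm(u[:, None, :] - centres[None, :, :], axis=2)
Phi = np.exp(-0.5 * (dist / sigma) ** 2)
Phi = np.hstack([np.ones((u.shape[0], 1)), Phi])  # prepend a column for the offset theta_0

# Regularised least squares (eq. 5): theta = (Phi^T Phi + lambda*I)^-1 Phi^T y
A = Phi.T @ Phi + lam_reg * np.eye(Phi.shape[1])
theta = np.linalg.solve(A, Phi.T @ y)

y_hat = Phi @ theta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"{Phi.shape[1] - 1} neurons, training RMSE = {rmse:.3f}")
```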
Gaussian process models describe a Gaussian probability distribution for each of the n input vectors u_i with i = {1, …, n}, and they depend on so-called hyper-parameters θ. Taking n measurements with n combinations of the p input parameters u_1, u_2, …, u_p and n output values y, the log likelihood function can be written as [16]:

\log p(y|\theta) = -\frac{1}{2} y^{T} K^{-1} y - \frac{1}{2} \log \det(K) - \frac{n}{2} \log 2\pi   (6)

The likelihood function assigns a likelihood p(y|θ) to the hyper-parameters θ, quantifying how well they represent the measured data. In equation (6), K is the covariance matrix. The covariance matrix contains the values of an applied covariance function evaluated for pairs of measured data points, representing their similarity [16]. Typical covariance functions, also called kernel functions, used for GPM regression are the Squared Exponential, Matérn 3/2 and Matérn 5/2 functions. Detailed information and their mathematical descriptions can be found in [17].

A common way to estimate the hyper-parameters for a given measured data set is the marginal likelihood technique. Using Bayes' rule and considering mean value functions for the Gaussian probability distributions (e.g. zero, constant or linear mean value functions), the hyper-parameters θ can be estimated by maximizing the natural logarithm of the likelihood function [18]:

\ln p(y|\theta) \rightarrow \max   (7)

2.1.4 Statistical criteria for model evaluation

In order to compare different models regarding their modelling quality, several statistical criteria can be evaluated. The statistical criteria used within this paper are briefly explained in the following section.

A well-known criterion is the standard deviation σ_e of the residuals e, also known as root mean square error (RMSE). Using the degrees of freedom df, given by the difference between the available number of measured data points n and the minimum required number of measured data points n_min, the standard deviation of the residuals σ_e can be calculated with the following formula [19]:

\sigma_e = \sqrt{\frac{1}{df} \sum_{i=1}^{n} e^2(i)} = \sqrt{\frac{1}{n - n_{min}} \sum_{i=1}^{n} \left[ y(i) - \hat{y}(i) \right]^2}   (8)

The unit of the standard deviation of the residuals is equal to the unit of the modelled variable. A good model quality is given when the criterion takes small values, but not smaller than the measurement accuracy of the model output variable.

Another very common criterion is the coefficient of determination R^2. According to [19] it can be determined with the following equation, where \bar{y} is the mean value of all measured data points:

R^2 = 1 - \frac{\sum_{i=1}^{n} \left[ y(i) - \hat{y}(i) \right]^2}{\sum_{i=1}^{n} \left[ y(i) - \bar{y} \right]^2}   (9)

The ideal value of this criterion for a good model is one. Since the quotient tends to zero for small deviations between model predictions and measured data values, a coefficient of determination close to one is desired.

The third criterion, the standard deviation of the predicted residuals σ_PRESS, can be evaluated with a so-called leave-one-out cross-validation. The i-th observation y(i) is excluded from the regression and predicted by the model \hat{y}_{(i)}(i) fitted by the remaining observations.
Performing this for all observations, the standard deviation of the predicted residuals σ_PRESS is given by [20]:

\sigma_{PRESS} = \sqrt{\frac{1}{df+1} \sum_{i=1}^{n} \left[ y(i) - \hat{y}_{(i)}(i) \right]^2}   (10)

The standard deviation of the predicted residuals is often used to avoid overfitting, where a model fits the measured data points very well but oscillates between them. Like the standard deviation of the residuals, the standard deviation of the predicted residuals should be as small as possible and is given in the unit of the modelled output variable.

A last criterion used for model evaluation is the standard deviation of the validation residuals σ_e,val. Additional observed data points that have not been used for the regression are predicted with the model. Similar to equation (8), for v additional observations σ_e,val is given by:

\sigma_{e,val} = \sqrt{\frac{1}{df} \sum_{i=1}^{v} e^2(i)} = \sqrt{\frac{1}{n - n_{min}} \sum_{i=1}^{v} \left[ y(i) - \hat{y}(i) \right]^2}   (11)

and should be as small as possible and close to σ_e for a good model quality.

2.2 Basic principle of the new methodology

In large bore engine development, calibration of prototype engines is an important and extensive part of the underlying product development process. Model-based calibration techniques have been introduced to limit the time and effort of cost-intensive and time-consuming measurements. However, for engines of the same series with different cylinder number configurations, separate model-based calibration processes were executed in the past, although the engine and combustion behavior of both engines shows only minor differences. The deviations between engines varying in their cylinder configuration appear due to differences in the following aspects:

- gas exchange characteristics
- turbocharger configuration
- cylinder firing order
- friction losses
- fabrication tolerances

In order to reduce the effort of calibrating the second engine, the new methodology for transferring model results from one engine to another has been developed. Figure 2 shows a schematic overview of the new methodology's workflow. The main steps of the workflow will be explained in detail in the following sections.

Figure 2: Schematic overview of the new methodology's workflow based on [9] (model of engine A, measurements of engine B, differences, transfer function as a model of the differences, calculated data points for engine B, model of engine B).

2.2.1 Modelling of engine A

A basic requirement for the methodology's utilization is a completed model-based calibration process of engine A. In order to approximate the real engine behavior precisely, higher model orders are often applied. However, these model approaches require larger amounts of measurements for coefficient estimation. Therefore, during the calibration process of engine A, an extensive scope of measurements has to be recorded at the engine test bench to build valid engine models of engine A. The measurements are distributed in the design space of engine A according to statistical approaches. Commonly a d-optimal design is set up and augmented with space filling points for model fitting. For validation purposes, some additional measurements that will not be used for regression are added to the test plan as well. After all measurements of the test plan have been recorded, model committees containing various types of polynomials, radial basis function networks and Gaussian process models are built for each chosen engine output. For each output, the best model is chosen out of the committee by evaluating the statistical criteria for model evaluation explained in section 2.1.4.
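The committee selection just described can be sketched as follows: several candidate model types are fitted to the same data and compared with the criteria of section 2.1.4 (RMSE, R² and a leave-one-out estimate analogous to σ_PRESS). The candidate set, the synthetic data and the use of scikit-learn estimators are illustrative assumptions and stand in for the polynomial, RBF and Gaussian process committees used in the paper.

```python
# Illustrative model committee: fit several candidate models to the same data
# and pick the best one using RMSE, R^2 and a leave-one-out PRESS-type estimate.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)
U = rng.uniform(0.0, 1.0, size=(50, 4))                 # normalised engine inputs (assumed)
y = 200 + 30 * U[:, 0] - 20 * U[:, 1] ** 2 + 5 * U[:, 2] * U[:, 3]
y += rng.normal(0.0, 0.5, size=U.shape[0])              # synthetic "measurements"

committee = {
    "poly2": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "rbf":   make_pipeline(StandardScaler(), KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5)),
    "gpm":   GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True),
}

best_name, best_press = None, np.inf
for name, model in committee.items():
    model.fit(U, y)
    rmse = np.sqrt(mean_squared_error(y, model.predict(U)))
    r2 = r2_score(y, model.predict(U))
    # leave-one-out predictions, analogous to sigma_PRESS in eq. (10)
    y_loo = cross_val_predict(model, U, y, cv=LeaveOneOut())
    press = np.sqrt(mean_squared_error(y, y_loo))
    print(f"{name}: RMSE = {rmse:.3f}  R2 = {r2:.3f}  PRESS-RMSE = {press:.3f}")
    if press < best_press:
        best_name, best_press = name, press

print("selected model:", best_name)
```

Selecting on the leave-one-out error rather than the training RMSE penalises overfitted candidates, which mirrors the role of σ_PRESS described above.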
As a result, valid models of chosen engine outputs of engine A are available for the transfer of modelling results to another engine of the same series using the new methodology.

2.2.2 Measurements of engine B

As can be seen in Figure 2, based on the given models of engine A, a certain amount of measurements of engine B is needed to perform a transfer of modelling results between engine A and B. The measurements of engine B have to be taken according to a suitable test plan. Assuming the engine and combustion behavior shows only minor differences between engine A and engine B, the differences between these engines can be modelled using lower model orders. These lower model orders require fewer measurements for coefficient estimation. Hence, in comparison to the amount of measurements taken of engine A, the amount of measurements of engine B can be reduced significantly. In order to cover the design space of engine B as well as possible with a low number of measurements, d-optimal test plans are well suited, because they distribute the measurements close to the design space boundaries. Therefore, as for the modelling of engine A, the test plan for engine B is set up containing d-optimal test points that are augmented with space filling test points and a certain amount of validation points. Because model accuracy is poor in case of extrapolation, it is of high importance that the resulting test points for engine B lie within the given design space of engine A.

2.2.3 Creation of delta transfer functions

Once the models of engine A and the measurements of engine B are available, transfer functions for chosen steady state engine outputs can be created. As a first strategy, differences ∆ between predicted model values of engine A and measured values of engine B can be determined. For any chosen engine output y_i the delta values ∆y_i for each of the n measurements of engine B can be calculated with the following equation:

\Delta y_i(j) = y_{i,B}(j) - y_{i,A}(j), \quad j = 1, 2, \dots, n   (12)

Since these delta values depend on the input parameter vectors u that have been varied according to the test plan of engine B, they can be modelled as functions of these parameters. As a result, models of the delta values (delta transfer functions) ∆y_i are available for each of the chosen engine outputs, describing the differences in engine behavior between engine A and engine B in the given design space. Using the delta transfer functions and the engine models of engine A, an arbitrary amount of artificial data points of engine B can be calculated.
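As a rough sketch of this step (and of the factor variant introduced in the next section), the code below computes delta and factor values of a chosen output against an already fitted engine A model, regresses them over the inputs with a low-order polynomial, and uses the resulting transfer functions to generate artificial engine B data points. All data, model choices and the helper function true_engine are assumptions for the demonstration, not the paper's models or measurements.

```python
# Illustrative sketch of delta and factor transfer functions, cf. eqs. (12)-(15).
# model_A stands for an already fitted regression model of engine A;
# U_B, y_B are the (few) measurements of engine B. All data here is synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

def true_engine(U, offset=0.0, scale=1.0):
    """Stand-in for real engine behaviour (assumption for the demo)."""
    return scale * (200 + 30 * U[:, 0] - 20 * U[:, 1] ** 2) + offset

# Engine A: large test plan and a higher-order model
U_A = rng.uniform(0.0, 1.0, size=(50, 2))
model_A = make_pipeline(PolynomialFeatures(3), LinearRegression())
model_A.fit(U_A, true_engine(U_A) + rng.normal(0, 0.3, 50))

# Engine B: strongly reduced test plan (here 15 points) with slightly shifted behaviour
U_B = rng.uniform(0.0, 1.0, size=(15, 2))
y_B = true_engine(U_B, offset=4.0, scale=1.02) + rng.normal(0, 0.3, 15)

# Delta values (eq. 12) and factor values (eq. 14) w.r.t. the engine A predictions
y_A_pred = model_A.predict(U_B)
delta = y_B - y_A_pred
factor = y_B / y_A_pred

# Low-order transfer functions: regress delta/factor values over the inputs u
delta_tf = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(U_B, delta)
factor_tf = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(U_B, factor)

# Artificial engine B data points on engine A's test plan (eqs. 13 and 15)
y_B_star_delta = model_A.predict(U_A) + delta_tf.predict(U_A)
y_B_star_factor = model_A.predict(U_A) * factor_tf.predict(U_A)
print("max difference between delta- and factor-based points:",
      np.max(np.abs(y_B_star_delta - y_B_star_factor)).round(3))
```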
Considering n artificial data points for any engine output y_i, the data points y^*_{i,B} for engine B can be determined as:

\begin{bmatrix} y^{*}_{i,B}(1) \\ y^{*}_{i,B}(2) \\ \vdots \\ y^{*}_{i,B}(n) \end{bmatrix} = \begin{bmatrix} y_{i,A}(1) \\ y_{i,A}(2) \\ \vdots \\ y_{i,A}(n) \end{bmatrix} + \begin{bmatrix} \Delta y_i(u(1)) \\ \Delta y_i(u(2)) \\ \vdots \\ \Delta y_i(u(n)) \end{bmatrix}   (13)

2.2.4 Creation of factor transfer functions

The second strategy to create transfer functions between engine A and engine B is based on the quotients (factors) φ between predicted model values of engine A and measured values of engine B. The factor values φ_{y_i} for each of the n measurements of engine B and any chosen engine output y_i can be determined as follows:

\varphi_{y_i}(j) = \frac{y_{i,B}(j)}{y_{i,A}(j)}, \quad j = 1, 2, \dots, n   (14)

By calculating the factor values for each chosen engine output y_i and each measurement of engine B, the so-called factor transfer functions for each engine output can be computed. For this purpose, the factor values are modelled as functions of the input parameter vectors u of the measurements of engine B by performing a regression. In an analogous manner to the delta transfer functions, an arbitrary amount of artificial data points of engine B can be calculated with the engine models of engine A and the factor transfer functions φ_{y_i}:

\begin{bmatrix} y^{*}_{i,B}(1) \\ y^{*}_{i,B}(2) \\ \vdots \\ y^{*}_{i,B}(n) \end{bmatrix} = \begin{bmatrix} y_{i,A}(1) \cdot \varphi_{y_i}(u(1)) \\ y_{i,A}(2) \cdot \varphi_{y_i}(u(2)) \\ \vdots \\ y_{i,A}(n) \cdot \varphi_{y_i}(u(n)) \end{bmatrix}   (15)

2.2.5 Modelling of engine B

As the last step of the methodology's workflow according to Figure 2, models \hat{y}_{i,B} of the chosen engine outputs y_i can be built based on the calculated data points y^*_{i,B} and the measured data points y_{i,B} of engine B, performing a regression for each output variable again. In order to optimize the engine behavior, these models can then be used to run numerical optimizations and to fill look-up tables or engine maps for the input parameters u.

3 Application of the methodology

For validation purposes, the new methodology was applied to a transfer of modelling results from a single cylinder to a full scale medium speed large bore engine. Since a single cylinder engine has no exhaust turbocharger and considerably different friction losses from a full scale engine, this transfer constitutes the most challenging use case of the methodology. In the upcoming sections, in the same order as the description of the methodology's workflow, each step of the transfer from the single cylinder to the full scale engine is explained in detail.

3.1 Experimental setup

3.1.1 Single cylinder and full scale engine

The measurements for the validation of the methodology have been recorded at a single cylinder engine 1L35/44DF and a six cylinder full scale engine 6L35/44DF of MAN Energy Solutions SE in Augsburg. The 35/44DF engine series, with a bore of 350 mm and a stroke of 440 mm, is a typical example of an auxiliary genset application running at a constant engine speed of 750 rpm. As the abbreviation "DF" implies, the engine is a dual fuel engine. It can be operated with marine diesel fuels, such as marine gas oil, marine diesel oil or heavy fuel oil, in diesel mode, or with premixed natural gas ignited by a small amount of liquid pilot fuel in gas mode. Within this paper, only the diesel operation mode of the engine is regarded.
The letter “L” expresses that both engines, single cylinder and six cylinder 63 <?page no="74"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development engine, are arranged as inline engine configurations. The engines are equipped with two common rail injection systems, one for the liquid pilot fuel and one for the main diesel fuel. During the recording of the measurements of each engine, four engine parameters (inputs) have been varied according to a defined test plan:  relative effective engine power  charge air temperature before cylinder  rail pressure  start of injection The aim of the measurements was to build data-driven engine models for various engine outputs in dependency of those input parameters. Within this paper, as an example, the engine outputs specific fuel oil consumption and the specific NOxemissions are chosen among various other outputs. The experimental setup showing the relevant engine inputs and outputs as well as some additional technical data of the 35/ 44DF engine series and pictures of the single cylinder and the full scale engine is given in Figure 3. Figure 3: Experimental setup to validate the new methodology. 3.1.2 Test plan single cylinder engine Considering the four input parameters relative effective engine power, charge air temperature, rail pressure and start of injection, a robust model approach, like a third order polynomial, would require a minimum amount of 35 measurement points for regression. This amount of measurements was distributed in the design space according to a d-optimal approach. In order to fit robust RBF-networks and Gaussian process models and to increase the statistical degree of freedom, these 35 measurement points have been augmented by 15 additional space-filling test points and four validation data points. The resulting test plan for the 1L35/ 44DF single cylinder is shown in Figure 4. Engines Inputs rel. eff. engine power (P e ) charge air temperature ( CA ) rail pressure (p rail ) start of injection ( SOI ) Outputs spec. fuel oil consumption (b e ) spec. NOxemissions (NO x ) … MAN Energy Solutions SE 1L35/ 44DF (Dual Fuel) MAN Energy Solutions SE 6L35/ 44DF (Dual Fuel) bore [mm] 350 stroke [mm] 440 displacement volume per cylinder [l] 42.3 speed [rpm] 750 cylinder power [kW] 530 mep [bar] 20 mode [-] diesel length 6L [mm] 6485 width 6L [mm] 2539 height 6L [mm] 4163 gross weight 6L [t] 40.5 engine power 6L [kW] 3180 64 <?page no="75"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development Figure 4: Test plan of the 1L35/ 44DF single cylinder engine as pairwise projections. 3.1.3 Test plan full scale engine Assuming a very similar behavior of single cylinder and full scale engine based on the experience, model approaches of lower orders are sufficient to model the differences between these engines in a proper way. Therefore, a second order polynomial for four input parameters would require a minimum amount of measurements of 15 for regression. Again, these measurements have been spread in the design space given by the input parameters according a d-optimal approach. Three additional test points and four validation data points have been augmented to increase the degree of freedom. 
Hence, the test plan of the 6L35/ 44DF full scale engine contains 22 measurements in total and leads to a reduction of about 60% compared to the approach with higher order polynomials. The final test plan is shown in Figure 5. Figure 5: Test plan of the 6L35/ 44DF full scale engine as pairwise projections. 65 <?page no="76"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development 3.2 Modelling results single cylinder engine As mentioned in section 2.2.1 for each engine output a model committee containing various types of polynomials, radial basis function networks and Gaussian process models is fitted utilizing the measurements recorded according to the set up test plan. Out of the committee, the best model is chosen by evaluating statistical criteria described in 2.1.4. Out of the model committee for the specific fuel oil consumption a reduced third order polynomial was selected as best. The reduction of the polynomial was reached by removing statistically insignificant terms out of the model equation. For the specific NO x -emissions a reduced third order polynomial was assigned as best out of the model committee as well. The predicted-observed plots of the built models for the specific fuel oil consumption and the specific NO x -emissions are shown in Figure 6. Figure 6: Predicted-observed plots for the reduced third order polynomials of the specific fuel oil consumption and specific NO x -emissions of the 1L35/ 44DF single cylinder engine. It can be seen that both, regression data points and validation data points are located very close to the bisecting line indicating an overall very good model quality. Furthermore, the statistical criteria for model evaluation given in Table 2, confirm the very good model quality of both models. The coefficient of determination R 2 is very close to one and the standard deviations of the residuals  e , of the predicted residuals  PRESS and of the validation residuals  e,val are very small. Table 2: Statistical criteria of the fitted single cylinder models for specific fuel oil consumption and specific NOx-emissions. Specific fuel oil consumption b e,1L Specific NO x - Emissions NO x,1L R 2 [-] 0,993 1,000  e [%*] 0,55 0,37  PRESS [%*] 0,64 0,57  e,val [%*] 0,37 0,39 *percentage in relation to the maximum value of each output 66 <?page no="77"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development 3.3 Deltaand factor transfer functions In the following sections the creation of the delta and factor transfer functions between the 1L35/ 44DF single cylinder and 6L35/ 44DF full scale engine will be explained. 3.3.1 Delta transfer functions Utilizing the single cylinder engine models for the specific fuel oil consumption and specific NO x -emissions, for all the 22 input parameter combinations of the measurements of the full scale engine predicted values for the specific fuel oil consumption and specific NO x -emissions of the single cylinder were calculated. Afterwards, between full scale engine measured values and single cylinder predicted values for specific fuel oil consumption and specific NO x -emissions differences (delta values)  b e and  NO x were determined according to equation (12). With these 22 delta values a new regression in dependency of the input parameters was performed in order to create delta transfer functions. 
Once more, model committees containing polynomials, RBF-networks and Gaussian process models for each engine output were fitted and the best models were chosen from within. For the delta transfer function of the specific fuel oil consumption a reduced second order polynomial and for the delta transfer function of the specific NO x -emissions a RBF-network with 10 neurons and Wendland basis functions was assigned as best. The predictedobserved plots of both delta transfer functions are given in Figure 7. Figure 7: Predicted-observed plots for the delta transfer functions of the specific fuel oil consumption and specific NO x -emissions. Since regression and validation data points for both delta transfer functions are close to the bisecting line, the model quality can be rated as good. In addition, the statistical criteria for model evaluation in Table 3 attest the good model quality of both transfer functions as well. R 2 is close to one and the standard deviations  e ,  PRESS and  e,val show acceptable values. 67 <?page no="78"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development Table 3: Statistical criteria of the fitted delta transfer functions for specific fuel oil consumption and specific NOx-emissions. Specific fuel oil consumption  b e Specific NO x - Emissions  NO x R 2 [-] 0,994 0,999  e [%*] 2,33 1,33  PRESS [%*] 2,78 2,27  e,val [%*] 2,33 2,98 *percentage in relation to the maximum delta value of each output 3.3.2 Factor transfer functions In an analogous manner to the determination of the delta values, for the 22 input parameter configurations of the full scale engine measurements factors (quotient values)  b e and  NO x were determined according to equation (14). The factors are the quotients between full scale engine measured values and single cylinder predicted values for specific fuel oil consumption and specific NO x -emissions. In dependency of the four input parameter and based on the resulting 22 factor values of both engine outputs, model committees for each factor transfer function have been built and the best models were chosen out of these committees. For the factor transfer function of the specific fuel oil consumption a Gaussian process model with zero mean value function and Matérn 3/ 2 kernel function was evaluated as best. A RBF-network with seven neurons and multiquadratic basis functions was chosen for the transfer function of the specific NO x -emissions. The predicted-observed plot of these factor transfer functions can be seen in Figure 8. Figure 8: Predicted-observed plots for the factor transfer functions of the specific fuel oil consumption and specific NO x -emissions. The model quality of the transfer function of the specific fuel oil consumption can be rated as good because both, regression and validation data points are located very close to the bisecting line. On the contrary, by examining the predicted-observed plot of the transfer function of the specific NO x -emissions it can be seen that the regres- 68 <?page no="79"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development sion and validation data points are more distant from the bisecting line. Therefore, the model quality of this transfer function was only rated mediocre. 
Evaluating the statistical criteria for model evaluation in Table 4 and comparing the statistical criteria with those of the transfer function for the specific fuel oil consumption, this mediocre model quality of the transfer function for the specific NO x emissions is confirmed. Table 4: Statistical criteria of the fitted factor transfer functions for specific fuel oil consumption and specific NOx-emissions. Specific fuel oil consumption  b e Specific NO x - Emissions  NO x R 2 [-] 0,998 0,948  e [%*] 0,22 1,19  PRESS [%*] 0,43 1,40  e,val [%*] 0,43 1,62 *percentage in relation to the maximum factor value of each output 3.4 Modelling results full scale engine Before the models of the 6L35/ 44DF full scale engine could have been built, the data points for regression needed to be determined. Due to a slightly better model quality, for the determination of data points for the specific fuel oil consumption the factor transfer function and for the specific NO x -emissions the delta transfer function was used. In order to exclude the given inaccuracies of the single cylinder engine models, which cannot be avoided in terms of regression, the data points of the full scale engine have been calculated based on the single cylinder measurements and not as explained in 2.2.3 and 2.2.4 with the single cylinder engine models. For each data point of the single cylinder engine and its input parameter combination factor values for the specific fuel oil consumption and delta values for the specific NO x -emissions have been calculated using the created factor and delta transfer function. In an analogous manner to the equations (13) and (15), data points of the fuel scale engine have been computed based on the 50 single cylinder engine measurements and 50 factor and delta values determined before. With these transferred full scale engine data points finally a further regression was performed. Since 50 transferred data points were available for regression, while only 18 measurements have been recorded at the full scale engine, model committees containing higher order models that could not have been determined with only 22 measurements, have been built. Out of these committees the best model for each output was chosen. For both, the specific fuel oil consumption and the specific NO x -emissions of the 6L 35/ 44DF full scale engine a reduced third order polynomial was assigned best out of the model committees. The predicted-observed plots of these models are shown in Figure 9. 69 <?page no="80"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development Figure 9: Predicted-observed plots for the reduced third order polynomials of the specific fuel oil consumption and specific NO x -emissions of the 6L35/ 44DF full scale engine. By observing the plots, the model quality of both 6L35/ 44DF full scale engine models can be rated as good, since regression and validation data points are located very close to the bisecting line. Table 5 shows the statistical criteria for model evaluation of both models. For the purpose of comparison, in addition, the statistical criteria of the single cylinder engine models are shown in Table 5 as well. Table 5: Statistical criteria of the fitted single cylinder and full scale engine models for specific fuel oil consumption and specific NOx-emissions. 
Criterion | b_e,1L | b_e,6L | NO_x,1L | NO_x,6L
R^2 [-] | 0.993 | 0.976 | 1.000 | 1.000
σ_e [%*] | 0.55 | 0.56 | 0.37 | 0.44
σ_PRESS [%*] | 0.64 | 0.64 | 0.57 | 0.58
σ_e,val [%*] | 0.37 | 0.40 | 0.39 | 0.52

*percentage in relation to the maximum value of each output

The statistical criteria of the transferred full scale engine models b_e,6L and NO_x,6L prove the overall good model quality of both models. R^2 is close to one and the standard deviations σ_e, σ_PRESS and σ_e,val show very small values. Comparing the statistical criteria of the transferred full scale engine models b_e,6L and NO_x,6L with those of the single cylinder engine models b_e,1L and NO_x,1L, it can be seen clearly that the models show equal model quality and the transfer of the modelling results can be rated as successful. This is also supported by the relative deviations δ between the measured values of the 22 full scale engine measurements and the values predicted by the transferred full scale engine models, given in Table 6. Since all deviations apart from two values of the specific NOx-emissions are less than or equal to 1%, the transferred engine models of the 6L35/44DF full scale engine were judged as valid.

Table 6: Relative deviations between 22 measured full scale engine values and predicted full scale engine values.

No | δb_e [%] | δNO_x [%]
1 | 0.3 | 0.8
2 | 0.1 | 0.9
3 | 0.6 | 0.8
4 | 0.5 | 0.2
5 | 0.2 | 0.3
6 | 0.0 | 0.4
7 | 0.2 | 0.6
8 | 0.4 | 0.2
9 | 0.2 | 0.3
10 | 0.4 | 2.2
11 | 0.2 | 0.4
12 | 0.3 | 0.6
13 | 0.4 | 1.0
14 | 0.3 | 0.2
15 | 0.3 | 0.5
16 | 0.1 | 0.8
17 | 0.3 | 2.5
18 | 0.1 | 0.5
19 | 0.5 | 0.9
20 | 0.3 | 0.6
21 | 0.7 | 0.1
22 | 0.0 | 1.5

4 Summary and conclusions

In this paper, a new methodology for transferring modelling results between engines of the same series in large bore engine development was introduced. The main objective was to reduce the effort of cost-intensive and time-consuming engine tests, accompanied by a reduction of emissions and greenhouse gases during these tests, by avoiding separate model-based calibration processes for each engine. After the model-based calibration process of one engine (engine A) is finished, robust and valid models of chosen engine outputs are available. Since both engines only show minor differences in engine and combustion behavior, a separate model-based calibration process for engine B would be inefficient. For this reason, a significantly reduced amount of measurement points of engine B is used to create delta or factor transfer functions between engine A and B. With these transfer functions and the models of engine A, arbitrary amounts of artificial data points for engine B can be calculated, which can then be used for the regression of engine B models. The methodology was applied to a transfer of modelling results between a medium speed large bore 1L35/44DF single cylinder and a 6L35/44DF full scale engine of MAN Energy Solutions SE. As an example, models of the engine outputs specific fuel oil consumption and specific NOx-emissions have been transferred successfully between these engines utilizing the created delta and factor transfer functions.
Since both transfer strategies (delta and factor transfer) led to good results, no recommendation for one of the strategies can be given. In fact, it can be recommended to pursue both transfer strategies and chose that transfer function showing the better quality. For the modelling of the single cylinder engine 54 measurements were recorded, only 22 measurements of the full scale engine were taken. In case of a separate modelbased calibration process of the full scale engine 54 measurements of the full scale engine would have been recorded instead of only 22 measurements. As an example, a reduction of 32 measurements of an 18V48/ 60CR large bore engine of MAN Energy Solutions SE would reduce the test bed time by about eight hours, would save fuel costs of about 12 000 Euros and would decrease emissions and greenhouse gases by about 60%. Hence, this reduction of the measurement effort has a huge impact on the efficiency of the large bore engine development process and on the protection of environment and climate. 71 <?page no="82"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development References [1] Atkinson, C. and Mott, G., “Dynamic Model-Based Calibration Optimization: An Introduction and Application to Diesel Engines”, SAE Technical Paper 2005-01- 0026, 2005, doi: 10.4271/ 2005-01-0026. [2] Atkinson, C., Allain, M., and Zhang, H., "Using Model-Based Rapid Transient Calibration to Reduce Fuel Consumption and Emissions in Diesel Engines", SAE Technical Paper 2008-01-1365, 2008, doi: 10.4271/ 2008-01-1365. [3] Mitterer, A., “Optimierung vielparametriger Systeme in der Kfz-Antriebsentwicklung“, Ph.D. thesis, Technical University of Munich, Munich, 2000. [4] Klöpper, F., “Entwicklung und Einsatz modellgestützter Online-Methoden zur Parameteroptimierung von Verbrennungsmotoren am Prüfstand“, Ph.D. thesis, Technical University of Munich, Munich, 2009. [5] Berger, B., “Modeling and Optimization for Stationary Base Engine Calibration“, Ph.D. thesis, Technical University of Munich, Munich, 2012. [6] Mayer, S., Tuner, A. E. and Andreasen, A., “Design of Experiments Analysis of the NOx-SFOC Trade-off in Two-stroke Marine Engine”, presented at CIMAC Congress 2010, Norway, June 14-17, 2010. [7] Große-Löscher, H. and Haberland, H., “DoE und Modellbildungsmethoden zur effizienten Analyse des Betriebsverhaltens und zur Auslegung von Großdieselmotoren“, presented at International Conference - Design of Experiments (DoE) in Engine Development 2007, Germany, May 11-12, 2007. [8] Friedrich, C., Auer, M., and Stiesch, G., "Model Based Calibration Techniques for Medium Speed Engine Optimization: Investigations on Common Modeling Approaches for Modeling of Selected Steady State Engine Outputs," SAE Int. J. Engines 9(4): 2016, doi: 10.4271/ 2016-01-2156. [9] Friedrich, C., “Entwicklung einer Methode zur Verkürzung des modellbasierten Applikationsprozesses in der Großmotorenentwicklung“, Ph.D. thesis, Technical University of Munich, Munich, 2018. [10] Schnittger, W., Bednarek, G. and Pöpperl, M., “Der neue 2,2-l-ECOTEC- Aluminium-Motor von Opel.“, Motortechnische Zeitschrift, Issue 9/ 2000. [11] Nelles, O., “Nonlinear System Identification - From Classical Approaches to Neural Networks and Fuzzy Models”, Berlin, Germany, Springer, 2001, doi: 10.1007/ 978-3-662-04323-3. [12] Model Based Calibration Toolbox (Version 5.0), program help, Mathworks Inc. MATLAB®, Natwick, 2015. [13] Chen, S., Chng, E. S. 
and Alkadhimi, K., “Regularized orthogonal least squares algorithm for constructing radial basis function networks”, International Journal of Control 64 (5): 829-837, 1996, doi: 10.1080/ 00207179608921659. [14] Chen, S., Cowan, C.F.N. and Grant, P.M., “Orthogonal least squares learning algorithm for radial basis function networks”, Neural Networks 2 (2): 302-309, 1991, doi: 10.1109/ 72.80341. [15] Klar, H., Klages, B., Gundel, D., Kruse, T., Huber, R. and Ulmer, H., “New processes for efficient, model-based engine calibration”, presented at Proceedings 72 <?page no="83"?> 2.2 A new Methodology for Transferring Modelling Results between Engines in Terms of Model-Based Calibration in Large Bore Engine Development of the 5th Symposium on Development Methodology 2013, Germany, October 22-23, 2013. [16] Thewes, S., Lange-Hegermann, M., Reuber, C. and Beck, R., “Advanced Gaussian Process Modeling Techniques“, presented at International Conference - Design of Experiments (DoE) in Powertrain Development 2015, Germany, June 11-12, 2015. [17] Rasmussen, C. E. and Williams, C., K., I.: “Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)”, Cambridge, USA, The MIT Press, 2005, ISBN: 026218253X. [18] Berger, B., “Modeling and Optimization for Stationary Base Engine Calibration“, Ph.D. thesis, Technical University of Munich, Munich, 2012. [19] Siebertz, K., van Bebber, D. and Hochkirchen, T.: “Statistische Versuchsplanung - Design of Experiments (DoE)“, Heidelberg, Germany, Springer, 2010, doi: 10.1007/ 978-3-642-05493-8. [20] Kutner, M. H., Nachtsheim, C. J., Neter, J. and Li, W., “Applied Linear Statistical Models, 5th Edition“, New York, USA, McGraw Hill, 2005, doi: 10.2307/ 27640109. 73 <?page no="84"?> 2.3 Virtual Calibration to Improve the Design of a Low Emissions Gasoline Engine Justin Seabrook, Josh Dalby, Kiyotaka Shoji, Akira Inoue Abstract Virtual calibration is the process for optimising the engine calibration in a simulation environment. It is similar to the conventional test-based calibration process except that some or all engine test bench activities are replaced with data from engine simulation. In the context of the standard V-cycle diagram for powertrain development, some calibration tasks can be performed in a virtual environment on the left-hand side of the “V” to support the engine design process and provide an engine calibration early in the process. In this paper, the virtual calibration process for a gasoline engine application is described. The Integrated Model Based Development environment consists of vehicle, engine and aftertreatment models. These are a mix of 0D and 1D physics-based models, except for the combustion component which is an empirical DoE model. The inclusion of this stochastic process model (SPM) of the combustion system, derived from a single cylinder engine DoE, is essential for accurate heat release and engine-out emissions predictions. The virtual multi-cylinder engine (MCE) is thus a 1D full engine model with the combustion for each cylinder represented by the SPM. The virtual calibration of the MCE mirrors the conventional test bench-oriented process. A combination of steady state DoE tests for normal and catalyst heating modes and iterative vehicle-based calibration based on simulated WLTC tests are performed in the simulation environment. The output is VVT, EGR, spark and catalyst heating mode maps optimised to meet cycle emissions and fuel economy targets. 
The virtual MCE DoEs may include engine hardware variables so that trade-offs between emissions performance, cost and packaging can be explored. 1 Introduction In recent years, vehicle manufacturers have put increasing effort and resources into reducing development time while at the same time managing an ever-increasing degree of complexity in the powertrain control systems. A key part of this change is the move from test-based development and calibration to a virtual process [1-3]. While a wholly digital process for engine calibration is not yet feasible, significant progress can be made through simulation of vehicle, engine and aftertreatment in conjunction with an empirical model for the combustion and emissions. This simulation environment is suitable for early calibration tasks and to support hardware development where candidate engine and aftertreatment options can be compared on an “optimised calibration” basis. 74 <?page no="85"?> 2.3 Virtual Calibration to Improve the Design of a Low Emissions Gasoline Engine The testing phase for the purpose of engine calibration utilises single cylinder engine (SCE) tests to characterise the combustion system. The multi-cylinder engine (MCE) on which the calibration is performed is a virtual engine incorporating the SCE combustion model, facilitating MCE calibration before the physical engine is available. The approach is compatible with both diesel and gasoline engines and each presents different challenges in terms of achieving sufficient accuracy for the task of engine calibration. In this paper an Integrated Model Based Development (IMBD) process is presented for the development of a gasoline engine application for low cycle emissions. 2 V-cycle and the Virtual Calibration Process The stages of engine development are often presented in the form of a V-cycle (Figure 1), with system level tasks at the top and component level tasks at the base of the V. The left-hand side encompasses early development tasks, while the right-hand side comprises the later development and system integration tasks. There is a symmetry in the V such that tasks at the same height correspond to the same system level. IMBD enables some tasks on the right-hand side to be performed in simulation at the equivalent left-hand stage of product development, or even earlier. For example, vehicle emissions tests with sophisticated vehicle, engine and aftertreatment models may be run in simulation before the engine hardware exists, but with a level of accuracy that was previously impossible at that stage of development. Figure 1: Product Development V-cycle The engine calibration task on the right-hand side is not moved to the left-hand side. Rather, a virtual duplicate of the task is carried out on the left-hand side. This means that hardware decisions, particularly those critical for emissions feasibility, can be made with much greater confidence in the likely real system performance to be expected later in the process. A secondary benefit of performing the virtual calibration 75 <?page no="86"?> 2.3 Virtual Calibration to Improve the Design of a Low Emissions Gasoline Engine process is that, when the subsequent right-hand side calibration task is conducted, a good starting point calibration is available and the duration for the main calibration task is reduced. 
3 Simulation Environment The parent simulation environment for this study was Simulink, which incorporated map-based models of the driver, torque converter, CVT and vehicle as well as managing the interfaces between the engine and combustion sub-models. The model of the 4-cylinder turbocharged gasoline engine was initially created and validated in the 1D gas dynamics simulation tool WAVE, where the steady-state performance was correlated to engine test data. The model was then automatically converted to WAVE-RT [4] (the fast variant of WAVE) to enable real time simulation. The diagram in Figure 2 illustrates the IMBD process. A Simulink model contains all the stages. Firstly, Driver, Vehicle and ECU strategy models are initiated for a given drive cycle and ECU calibration. At each timestep, these output engine speed, engine load (EATAC, an air flow-based parameter equivalent to volumetric efficiency), the ECU fuelling and demand settings for calibration parameters such as VVT and EGR. Additionally, the vehicle coolant temperature is estimated. Figure 2: Schematic of IMBD Process All these parameters are input to a Stochastic Process Model [5] that computes the heat release profile. The SPM is from the SCE DoE described in the next section. EATAC and ECU fuelling are fed directly to the WAVE-RT block along with the engine calibration for that time step. WAVE-RT takes these and the heat release profile and predicts the instantaneous EATAC, EGR rate and air-fuel ratio taking into account the fluid dynamics in the boost and EGR circuits. Finally, the SPM is called again to predict the emissions based on these dynamic values plus the other calibration settings. 76 <?page no="87"?> 2.3 Virtual Calibration to Improve the Design of a Low Emissions Gasoline Engine The IMBD model was validated against vehicle test data for nominal hardware and calibration (Figure 3). The cumulative CO2 over the Worldwide harmonized Light vehicle Test Cycle for the model is within 0.8% of the test result. In general, the model will be validated against data from an earlier engine and vehicle specification then modified for the new specification. For this project, focused on the development methodology, data for the target vehicle was available. Figure 3: Model CO2 Correlation on WLTC 4 Combustion Model As stated earlier, the combustion model is the only empirical model in the process. Physical models of combustion and emissions do not yet have the required accuracy for the calibration task. Therefore the combustion model was generated from Design of Experiments (DoE) tests on a SCE. The DoE variables were in general the same as the parameters that will later be calibrated for the virtual multi-cylinder engine. The one additional, non-calibration, variable was coolant temperature. The variables were as follows: Variable Unit Speed rev/ min EATAC % VVT Inlet °CA VVT Exhaust °CA EGR Rate % Tumble Control Valve ° Injection 1 Timing °BTDC Fuel Pressure MPa Air-fuel Ratio - Coolant Temperature °C 0 200 400 600 800 1000 1200 1400 1600 1800 Time [s] 0 20 40 60 80 100 120 140 160 Cumulative CO2 Emissions [g/ km] Vehicle Test IMBD Model 77 <?page no="88"?> 2.3 Virtual Calibration to Improve the Design of a Low Emissions Gasoline Engine Figure 4: SCE Experiment Design All calibration variables had design constraints based on engine speed and load. 
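The idea of a constrained space-filling test plan can be sketched as follows: candidate points are drawn from a space-filling sampler and points that violate a speed/load-dependent constraint are rejected. The variable ranges and the constraint below are invented placeholders, not the project's actual limits, and a plain Latin hypercube from SciPy's qmc module is used here instead of the Optimal Latin Hypercube design described next.

```python
# Illustrative constrained space-filling design; ranges and constraint are
# assumptions for the demo, not the limits used in the study.
import numpy as np
from scipy.stats import qmc

# assumed variable order: speed [rev/min], EATAC [%], VVT in [deg CA], VVT ex [deg CA],
# EGR [%], TCV [deg], inj1 timing [deg BTDC], fuel pressure [MPa], AFR [-], coolant T [degC]
lower = np.array([700, 15, 0, 0, 0, 0, 20, 5.0, 12.0, 15])
upper = np.array([2700, 90, 50, 50, 20, 90, 100, 20.0, 14.7, 90])

sampler = qmc.LatinHypercube(d=len(lower), seed=0)
candidates = qmc.scale(sampler.random(n=2000), lower, upper)

def feasible(x):
    """Example constraint: allow less EGR at high speed and high load (assumed shape)."""
    speed, eatac, egr = x[0], x[1], x[4]
    return egr <= 20.0 - 0.004 * (speed - 700) - 0.05 * max(eatac - 60.0, 0.0)

design = np.array([x for x in candidates if feasible(x)])[:500]
print(f"{len(design)} feasible design points kept")
```

In practice a dedicated DoE tool would optimise the space-filling property after applying such constraints rather than simply rejecting candidates, but the filtering step conveys how speed/load-dependent limits shape the design region.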
To improve testing efficiency, the variables of Tumble Control Valve and Coolant Temperature were restricted to discrete levels, as the TCV was controlled manually and coolant temperature has a lengthy stabilisation time. The space-filling design for the ten variables (Figure 4) was an Optimal Latin Hypercube with 500 test points. Engine speed was limited to 2700 rev/min on the SCE, which is much lower than the maximum engine speed on the MCE. However, for the purposes of emissions prediction and calibration on the WLTC, the range of 700 to 2700 rev/min is sufficient.

Figure 5: SCE Model View (responses shown: NOx [g/h], PN [#/cm³], Std GIMEP [kPa], Fuel [kg/h])

Figure 6: SCE Catalyst Heating Model View (responses shown: COV GIMEP [%], HC [g/h], PN [#/cm³], Heat Flux [W])

Spark was not included as a DoE variable in the Design. For each DoE case, spark was set to MBT (maximum brake torque), or DBL (detonation borderline limit) where MBT was not feasible due to knock. MBT was found using the B50 (angle of 50% mass fraction burned) method. Four variables took their base calibration settings (injection timings 2 & 3 and fuel quantity for injections 1 & 2), and intake manifold temperature was set to a representative MCE value. When testing was complete, SPMs were fitted for all responses including emissions, heat release, exhaust port temperature and combustion stability. Selected models are shown graphically in Figure 5.

A second SCE DoE with 90 cases was carried out for catalyst heating mode. The eight variables were the calibration parameters for this combustion mode. The responses were heat flux (energy flow into the exhaust), combustion stability and emissions. Figure 6 shows the principal models. The dashed lines denote 95% confidence intervals. The models were slotted into the simulation environment where they are used to predict emissions and combustion stability.

5 MCE DoE

5.1 Design

The MCE DoE process was designed to replicate the normal calibration process based on engine tests. Designs for six DoEs were generated:

- Global DoEs at 15, 40, 65 & 90°C coolant temperature
- Catalyst Heating DoE at 15°C coolant temperature
- Dynamic DoE at 90°C coolant temperature

Although this is a simulation DoE, the number of cases (test points) was chosen on the same basis as for a test-based global DoE. The guideline number of cases for creating global SPMs is 50 per variable so, with 9 inputs, the design had 450 cases.
The variables were: Variable Unit Speed rev/ min EATAC % VVT Inlet °CA VVT Exhaust °CA EGR Rate % Tumble Control Valve ° Injection 1 Timing °BTDC Fuel Pressure MPa Air-fuel Ratio - Note that the earlier SCE DoE had one additional variable, namely coolant temperature, that is omitted from the MCE DoE. For the virtual MCE DoEs the coolant temperature is fixed for each experiment. This is to replicate more closely the normal testbased DoE process and for compatibility with the ECU strategy which has maps for selected coolant temperatures. Coolant temperature was included in the SCE DoE as a means of reducing the amount of SCE testing required. Air-fuel ratio is a MCE DoE variable, although it will not be calibrated during the optimisation process focused on the stoichiometric region. Air-fuel ratio improves prediction of emissions during simulation of transient cycles, so it is useful to include it in the model for that purpose. Figure 7: Design for Catalyst Heating Mode 25 35 45 0 10 20 60 75 90 80 90 100 200 220 240 60 70 80 -15 -7.5 0 1150 1200 1250 20 60 100 25 35 45 0 10 20 60 75 90 80 90 100 200 220 240 60 70 80 -15 -7.5 0 Speed [rev/ min] VVT Inlet [°CA] VVT Exhaust [°CA] TCV [°] Injection 1 Timing [°BTDC] Injection 1-2 Separation [°CA] Injection 1 Quantity [%] Spark Retard [°CA] Pipe 1 Length [mm] k = 9 N (Design) = 100 80 <?page no="91"?> 2.3 Virtual Calibration to Improve the Design of a Low Emissions Gasoline Engine The global designs for the four temperatures were identical. For a test-based DoE programme, it would be necessary to modify the variable ranges for the lower coolant temperatures to avoid testing at operating conditions with poor combustion. However, for a simulation-based DoE, this is unnecessary. At the optimisation stage, a constraint for standard deviation of IMEP will ensure that the calibration generated at each temperature has good combustion stability. The MCE catalyst heating DoE included one hardware variable, namely the length of the pipe between turbine outlet and catalyst inlet. This variable has a significant influence on catalyst light off time. A short length is beneficial for light off time and hence tailpipe emissions but can be difficult and expensive to package. This DoE will allow the trade-off between pipe length and tailpipe hydrocarbon emissions to be estimated quantitively. The design is shown in Figure 7. 5.2 Simulation The test phase of the virtual optimisation task includes running the design points using the integrated WAVE-RT and Simulink model, including the SCE SPMs. The SPMs provide the combustion parameters for the WAVE-RT model (Figure 8), and also output the emissions and combustion stability. The MCE WAVE-RT model outputs the fuel consumption, exhaust temperature and other parameters that may be used as constraints or objectives for the optimisation. The real-time capability of WAVE-RT means that, even with a single licence, the DoE designs can be tested in a few hours. The data collected for the global DoE is steady state data. For the catalyst heating DoE both steady state and transient data is generated. The transient data is for the first 60 seconds of a WLTC cycle and includes predictions of cumulative tailpipe emissions during that period. Figure 8: WAVE Model of MCE 81 <?page no="92"?> 2.3 Virtual Calibration to Improve the Design of a Low Emissions Gasoline Engine 5.3 Modelling Models were fitted for all responses. As would be expected for simulation data, the model quality was extremely good. 
Example models for the catalyst heating DoE are given in Figure 9.

Figure 9: MCE Catalyst Heating DoE models (responses: Time to 300°C [s], TP HC [mg], Heat Flux [W], COV IMEP [%])

5.4 Optimisation and Validation

The optimisation tool was set up with the objective to maximise torque at a grid of engine speed and load (EATAC). Since all grid points are stoichiometric, this is equivalent to minimising brake specific fuel consumption. Constraints for combustion stability, exhaust temperature and map smoothness were applied. The same process was repeated for the models at 15, 40 & 65°C. The combustion stability constraints were relaxed slightly at lower coolant temperatures, just as they would be for a test-based calibration project.

Figure 10: Example Optimised Map - Fuel Pressure

The optimisation tool produced optimised maps (Figure 10) of the following parameters, under cold, hot and catalyst heating conditions:
- Inlet VVT
- Exhaust VVT
- EGR rate
- Tumble Flap position
- Injection Timing
- Fuel Pressure

Validation of the optimised calibration involves returning to the integrated vehicle model and implementing the optimised calibration maps so that the drive-cycle fuel consumption and emissions impact can be predicted. Figure 11 compares the CO2 emissions for the baseline and optimised calibrations. The optimised calibration shows 2.76% lower CO2 than the baseline calibration. Since the baseline calibration is a production calibration, a larger improvement in CO2 was not expected here.

Figure 11: CO2 on WLTC for Baseline and Optimised Calibrations (cumulative CO2 emissions in g/km)

Figure 12: Catalyst Heating DoE Pareto Curve (time to 300°C catalyst brick temperature versus heat flux)

The catalyst heating mode optimisation had two purposes. The primary goal was to generate calibration maps for the production-intent hardware. The secondary reason was to quantify the impact of a design change on catalyst light-off time and cumulative tailpipe HC emissions on the WLTC. The distance between turbine outlet and catalyst inlet (Pipe 1 Length, L) is critical for these responses. Figure 12 shows the trade-off between heat flux and catalyst light-off time for a range of pipe lengths. This is useful information to have at the design stage, and the results from this secondary activity can be used to inform a design review and verify that the intended design is suitable.

6 Conclusions

The IMBD process has been applied to the virtual calibration of a gasoline engine at an early stage of the engine development V-cycle. The engine and tailpipe emissions are predicted, so the design of hardware, including the turbocharger and exhaust aftertreatment system, can be carried out based on quantitative emissions predictions. Later, on the right-hand side of the V-cycle, the same process can be used to generate an initial calibration before the multi-cylinder engine is available for testing. A key factor for successful IMBD is the speed of the simulation environment. Ideally, cycles should be run in real time. With parallel computing, it is possible to use slower physical models, but having a fast option provides considerable flexibility. Combustion models based on chemical kinetics are not accurate enough for emissions calibration. Therefore, it is necessary to include an SPM in the simulation environment.
SCEs are often available early in the development process and are an ideal data source for emissions and heat release profiles for the 1D engine model. The use of IMBD for virtual calibration on both the left- and right-hand sides of the V-cycle helps to compress development times and reduce cost. It can also be expected to lead to a more robust solution with fewer hardware iterations.

7 References

[1] Marcos Alonso (2018), "Model Based Calibration: A Challenge for Optimal Emissions", 6th International Exhaust Emissions Symposium, Poland
[2] Bjoern Lumpp, Mouham Tanimou, Martin Mcmackin, Eva Bouillon, Erica Trapel, Micha Muenzenmay, Klaus Zimmermann (2014), "Desktop Simulation And Calibration Of Diesel Engine Ecu Software Using Software-In-The-Loop Methodology", SAE 2014-01-0189
[3] Ethan Faghani, Jelena Andric, Jonas Sjoblom (2018), "Toward an Effective Virtual Powertrain Calibration System", SAE 2018-01-0007
[4] Adam Kouba, Patrick Niven, Bohumil Hnilicka, Jiri Navratil (2015), "Sensorless Control Strategy Enabled by a Sophisticated Tool Chain", SAE 2015-01-2847
[5] Justin Seabrook, Simon Edwards, Tomasz Salamon, Ian Noell (2003), "Comparison of Neural Networks, Stochastic Process Methods and Radial Basis Functions for the Optimisation of Engine Control Parameters", DoE in Engine Development, Berlin

3 MBC II

3.1 Modification of Pacejka's Tyre Model in the High Slip Range for Model-Based Driveability Calibration

Robert Bauer, Sebastian Weber, Richard Jakobi, Frank Kirschbaum, Carsten Karthaus, Wilfried Rossegger

Abstract

The tyre model of Pacejka enjoys considerable popularity because, among other benefits, parameter sets for many tyres are available. Although accurate in the low slip range, they may not be valid in the high slip range, as this is not the primary goal of the available parameter sets. Hence, when using this model on powertrain test beds for high-slip manoeuvres, results may not be comparable to the real road. This contribution presents a simple modification of the tyre model solely in the high slip range, leaving the low slip range completely unaltered. The modification allows much better accordance of the tyre model with real road behaviour, and therefore, a very good match between real road and test bed measurements can also be achieved for high-slip manoeuvres.

Kurzfassung

Das Reifenmodell von Pacejka wird gerne verwendet, da neben anderen Vorteilen Parametersätze für viele Reifen verfügbar sind. Obwohl sie im Niedrigschlupfbereich sehr genau sind, können sie im Hochschlupfbereich stark abweichen, da dies ja auch nicht das eigentliche Ziel dieser Parametersätze ist. Wenn man daher dieses Modell an einem Antriebsstrang-Prüfstand für Hochschlupfmanöver einsetzt, können die Ergebnisse durchaus stark von Straßenmessungen abweichen. Dieser Beitrag stellt eine einfache Modifikation des Reifenmodells nur im Hochschlupfbereich vor, die den Niedrigschlupfbereich vollkommen unverändert belässt. Diese Modifikation ermöglicht eine viel bessere Übereinstimmung zwischen Reifenmodell und realem Verhalten, dementsprechend kann auch bei Hochschlupfmanövern eine sehr gute Übereinstimmung zwischen Straßen- und Prüfstandsmessungen erzielt werden.

1 Introduction

The driveability calibration of an automotive powertrain is an important step in the development process, as it defines the character of a vehicle.
Due to the increasing variety of vehicle and powertrain configurations, as well as increasing time and cost pressure, calibration carried out on a state-of-the-art powertrain test bed is an appealing alternative to classic on-road tests: tests on the test bed can be carried out automatically and ensure reproducibility by minimising undesirable disturbance effects. The calibration parameters of the respective driveability function, which are varied in a test planned with the DoE methodology, are used as inputs of empirical black-box models, such as polynomials, Gaussian process models or neural networks [1]. The optimal settings of the calibration parameters can then be found by using a numerical optimisation algorithm.

This test bed (Section 2) uses models of the vehicle body, the wheels and the tyres. In addition to proper control, their structure and parameterisation are crucial for equivalence between real road and test bed measurements. In the low slip range (i.e. wheel slip below about 20%), a very good correlation between real road and test bed has already been reported in the literature [1,2]. However, high-slip manoeuvres (i.e. manoeuvres with wheel slip well above 20%) initially revealed differences between real road and test bed measurements (compare Figures 7, 8), as the tyre model and its parameterisation failed to satisfactorily reproduce the real behaviour in the high slip range (Section 3). With an appropriate modification of the tyre model (Section 4), the very good correlation between real road and test bed can be expanded to high-slip manoeuvres (see Figures 7, 9). The modified model needs only two additional parameters, which can be interpreted intuitively. This is of practical importance because it allows the modification to be parameterised for new tyres without prior experiments on the road.

2 Powertrain Test Bed

The setup of the powertrain test bed under consideration allows the mounting of the complete vehicle with its body (see Figure 1). Instead of the original vehicle wheels, the wheel hubs are connected to the dynos by constant-velocity joint shafts. The wheels shown in Figure 1 are special wheels for use on the test bed only; they do not turn but have a lead-through shaft with a bearing.

Figure 1: Powertrain test bed (with vehicle body)

To stress the powertrain appropriately, the speeds of the wheels are determined by wheel models (including tyre models), which use the measured torques of the side shafts and interact with a model of the vehicle body (see Figure 2). These models are all calculated in real time on the test bed, and the resulting wheel speeds are used as reference values for speed control [3].

Figure 2: Control concept of the powertrain test bed [3] (for each wheel, the measured side shaft torque T_ss feeds the wheel and vehicle body models, whose output n_w,ref serves as the speed controller reference for the frequency converter and dyno)

3 Tyre Model

The tyre model used is the well-established magic formula of Pacejka [4]. As well as the low computational effort required, which makes it appealing for online applications, its main advantage is the availability of parameter sets (usually provided with so-called "tir files") for many tyres.
Based on the longitudinal wheel slip

\[ s = \frac{\omega_w r_e - v_x}{v_x} \qquad (1) \]

with the angular wheel speed \omega_w, the effective rolling radius r_e of the wheel and the longitudinal velocity v_x of the wheel contact centre, the friction coefficient

\[ \mu = \frac{F_x}{F_z} = D \sin\bigl( C \arctan\bigl( Bs - E \, (Bs - \arctan(Bs)) \bigr) \bigr) \qquad (2) \]

describes the ratio of the longitudinal force F_x to the vertical load F_z and can be calculated using the stiffness factor B, the shape factor C, the peak value D and the curvature factor E. Usually these parameters vary with respect to the vertical load, so the friction coefficient will vary as well. In order not to obscure the essentials, these parameters are chosen to be constant in this contribution. Please note that for E = 0, Eq. (2) simplifies to:

\[ \mu = D \sin\bigl( C \arctan(Bs) \bigr) \qquad (3) \]

3.1 High Slip Range

In the following, two parameter sets are used, as listed in Table 1.

Table 1: Parameter sets

Parameter set      B     C     D     E
Parameter set 1    12.0  1.60  1.20  0.00
Parameter set 2    24.0  1.60  1.10  0.97

Parameter set 1 represents a simplified version of the tir file corresponding to the tyres of the specimen described in Section 5 and to the relevant range of vertical load. When comparing the µ-slip curve of this set with a rough estimate (Appendix A) gathered during a drive-away manoeuvre, it can be seen that in the high slip range this parameter set is a poor representation of real behaviour (Figure 3). This is not as strange as it seems at first glance, as the tir file itself defines a range of validity for the longitudinal slip (KPUMIN, KPUMAX), which is clearly exceeded in this experiment. Hence, parameter set 1 is not valid in the high slip range.

Figure 3: Comparison of two parameter sets with real road measurement

Parameter set 2 is chosen with respect to the depicted real road measurement and would represent reality much better in the high slip range, but it has a significant drawback: it does not fit in the low slip range. Now parameter set 1 could be used for the low slip range and parameter set 2 for the high slip range, but this would most likely result in a discontinuous µ-slip curve, as these two sets were not originally designed to fit together. Another idea is to modify the parameters B, C, D in such a way that the slope of the µ-slip curve at the origin stays the same [5]. This may work well, but besides the fact that the low slip range is slightly influenced too, it has one major drawback: the parameters provided with tir files cannot be used directly. A further idea is to modify the slip used for the calculation of µ, and this will be discussed in the next section.

4 Modification of the Tyre Model

The main idea is based on a practical approach: the modified slip

\[ \tilde{s} = \begin{cases} -s_{max} & \text{for } s < -s_{max} \\ s & \text{for } -s_{max} \le s \le s_{max} \\ s_{max} & \text{for } s > s_{max} \end{cases} \qquad (4) \]

is limited to a maximum slip s_max and is used in Eq. (2) instead of s. For an arbitrarily chosen limit of 30%, this results in the µ-slip curve depicted in Figure 4.

Figure 4: Slip modification with limited slip

Despite its simplicity, the correlation in the high slip range is significantly improved.
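For reference, a minimal numerical sketch of Eqs. (2) and (4) in Python follows, using the parameter values of Table 1 and the 30% slip limit mentioned above; plotting is omitted and the code is illustrative only.

import numpy as np

def pacejka_mu(s, B, C, D, E):
    # Longitudinal friction coefficient from the magic formula, Eq. (2);
    # s is the longitudinal slip (0.30 corresponds to 30 % slip).
    Bs = B * s
    return D * np.sin(C * np.arctan(Bs - E * (Bs - np.arctan(Bs))))

# Parameter sets from Table 1
set1 = dict(B=12.0, C=1.60, D=1.20, E=0.00)
set2 = dict(B=24.0, C=1.60, D=1.10, E=0.97)

slip = np.linspace(0.0, 3.0, 301)            # 0 ... 300 % longitudinal slip
mu1 = pacejka_mu(slip, **set1)
mu2 = pacejka_mu(slip, **set2)

# Hard slip limit of Eq. (4) with s_max = 30 %
s_max = 0.30
mu1_limited = pacejka_mu(np.clip(slip, -s_max, s_max), **set1)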
However, two flaws can be observed: the transition to the modified slip range could be smoother, and the slope does not fit in the high slip range. The hard-limit idea can therefore be improved by using an asymptotic function instead of a hard limit. The commonly used atan function in its basic form

\[ \tilde{s} = \frac{2 s_{max}}{\pi} \arctan\!\left( \frac{\pi s}{2 s_{max}} \right) \qquad (5) \]

already comes close, but it modifies the slip in the low slip range too. An adapted version (such that in the range |s| \le s_{mod} the slip is not modified, see Appendix B) with arbitrarily chosen values s_mod = 20% and s_max = 40% is depicted in Figure 5. The transition to the modified slip range is now smoother, but the slope still does not fit in the high slip range. Two other asymptotic functions (based on 1/x and 1/sqrt(x), see Appendix B), selected as examples, show the same issue.

Figure 5: Slip modification with asymptotic functions

A closer look at Eqs. (2), (3) now reveals an interesting detail - the curvature factor E used in the magic formula can be seen as a modification of the slip too: when using

\[ \tilde{s} = s - E \left( s - \frac{\arctan(Bs)}{B} \right) \qquad (6) \]

in Eq. (3) instead of s, the same result is obtained as with Eq. (2). The adapted version

\[ \tilde{s} = \begin{cases} s - E_{mod} \left( (s - s_{mod}) - \dfrac{\arctan(Bs) - \arctan(B s_{mod})}{B} \right) & \text{for } s > s_{mod} \\ s & \text{for } -s_{mod} \le s \le s_{mod} \\ s - E_{mod} \left( (s + s_{mod}) - \dfrac{\arctan(Bs) + \arctan(B s_{mod})}{B} \right) & \text{for } s < -s_{mod} \end{cases} \qquad (7) \]

with arbitrarily chosen values s_mod = 20% and E_mod = 0.92 is depicted in Figure 6, and it can clearly be seen that the previous flaws have been eliminated.

Figure 6: Slip modification with proposed method
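A minimal Python sketch of the proposed modification is given below; it evaluates Eq. (7) together with the magic formula of Eq. (2), using parameter set 1 and the values s_mod = 20% and E_mod = 0.92 from the text. It is an illustrative sketch, not the authors' test bed implementation.

import numpy as np

def modified_slip(s, B, s_mod=0.20, E_mod=0.92):
    # Proposed slip modification, Eq. (7): unchanged for |s| <= s_mod,
    # additional curvature E_mod applied only beyond s_mod (odd-symmetric).
    s = np.asarray(s, dtype=float)
    sign, mag = np.sign(s), np.abs(s)
    excess = (mag - s_mod) - (np.arctan(B * mag) - np.arctan(B * s_mod)) / B
    return np.where(mag <= s_mod, s, sign * (mag - E_mod * excess))

def mu_modified(s, B, C, D, E, s_mod=0.20, E_mod=0.92):
    # Friction coefficient with the modified slip inserted into Eq. (2).
    s_t = modified_slip(s, B, s_mod, E_mod)
    Bs = B * s_t
    return D * np.sin(C * np.arctan(Bs - E * (Bs - np.arctan(Bs))))

slip = np.linspace(0.0, 3.0, 301)
mu = mu_modified(slip, B=12.0, C=1.60, D=1.20, E=0.00)     # parameter set 1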
5 Experimental Results

The specimen is an up-to-date prototype vehicle with hybrid front wheel drive. The selected driving manoeuvre is a straight-line drive-away from standstill on a horizontal road in "sport" mode (which essentially disables the automatic start-stop of the engine) and "dyno" mode without ESP (otherwise, the specimen would try to prevent the high slip range). Figure 7 shows a real road measurement with the accelerator pedal position (scaled), the engine speed (divided by the overall gear ratio), both front wheel speeds, the corresponding vehicle velocity determined from the acceleration signal (Appendix A), and, for comparison purposes, the torque (sum of torque referred to the engine). Due to the configuration (dyno mode without ESP), the rear wheel speeds are not available. Please note that the wheel speed sensor delivers a delayed signal at very low speed.

Figure 7: Real road measurement

The beginning of the high slip section reveals an interesting detail: the left wheel oscillates slightly faster than the right wheel. This is because the side shafts are not symmetric: the left side shaft is shorter than the right one and hence stiffer.

Figure 8 shows a test bed measurement with the original tyre model without modification (compare with Figure 6, "Parameter set 1"). The amplitude of the oscillation is far too large compared to the real road, due to the steep slope at about 30-50% slip. Furthermore, because there is too little friction at about 100% slip, the engine speed increases too quickly while the vehicle speed increases too slowly compared to the real road.

Figure 8: Test bed measurement with original model

Finally, Figure 9 shows a test bed measurement with the modified tyre model using the values s_mod = 20% and E_mod = 0.92 (compare with Figure 6, "Set 1 with proposed method"). Now the oscillations match the real road almost perfectly, and even the different frequencies at the beginning of the high slip section can be observed.

Figure 9: Test bed measurement with modified model

6 Conclusion

A modification of Pacejka's tyre model has been presented that allows much better accordance of the tyre model with real road behaviour in the high slip range when using typical parameter sets provided with tir files. This modification only alters the high slip range, meaning that previous achievements and results obtained in the low slip range are completely unaffected. The modified model needs two additional parameters, s_mod and E_mod, which can be interpreted intuitively: s_mod determines the beginning of the modified slip range (no modification in the range |s| \le s_mod), and E_mod is the additional curvature factor in the modified slip range, with a meaning similar to the original curvature factor introduced by Pacejka. Two values are of particular interest: E_mod = 0, which results in no modification at all, and E_mod = 1, which results in a modification comparable with the pure limited-slip case (though not exact, see Eqs. (4), (7)). Experimental results show that the modified model allows the very good correlation between real road and test bed to be expanded to high-slip manoeuvres.

Appendix

A Rough Estimate of µ-slip

The µ-slip characteristic of a certain tyre can easily be determined using the wheel speed, velocity, longitudinal force and vertical load measured in on-road tests. Unfortunately, most of these values are usually not available. However, when we consider a straight-line drive-away manoeuvre on a horizontal road with a front wheel drive, a rough estimate based on wheel speed and longitudinal acceleration a can be made.
Determining the velocity by integrating the acceleration a is normally not advisable, as offset errors accumulate quickly. But for short manoeuvres this works quite well as long as the velocity at the beginning of the manoeuvre is known, and this is the case for drive-away manoeuvres starting from standstill. With this velocity and the wheel speed, the slip can be calculated using Eq. (1). The longitudinal force for both front wheels can be estimated by

\[ F_x = m a \qquad (8) \]

with the vehicle mass m and neglecting rolling resistance and wind drag. The vertical load for both front wheels can be estimated [6] by

\[ F_z = \frac{l_r}{l} m g - \frac{h}{l} m a \qquad (9) \]

with acceleration g due to gravity, distance l_r from the centre of gravity to the rear wheel contact point in longitudinal direction, wheelbase l and height h of the centre of gravity. Finally, for the friction coefficient the following is obtained:

\[ \mu = \frac{F_x}{F_z} = \frac{a l}{g l_r - a h} \qquad (10) \]

B Asymptotic functions

The complete version based on the atan function is:

\[ \tilde{s} = \begin{cases} s_{mod} + \frac{2 (s_{max} - s_{mod})}{\pi} \arctan\!\left( \frac{\pi (s - s_{mod})}{2 (s_{max} - s_{mod})} \right) & \text{for } s > s_{mod} \\ s & \text{for } -s_{mod} \le s \le s_{mod} \\ -s_{mod} + \frac{2 (s_{max} - s_{mod})}{\pi} \arctan\!\left( \frac{\pi (s + s_{mod})}{2 (s_{max} - s_{mod})} \right) & \text{for } s < -s_{mod} \end{cases} \qquad (11) \]

A basic version using the asymptotic function x/(x+1) reads

\[ \tilde{s} = \begin{cases} \dfrac{s_{max} \, s}{s_{max} + s} & \text{for } s \ge 0 \\ \dfrac{s_{max} \, s}{s_{max} - s} & \text{for } s < 0 \end{cases} \qquad (12) \]

and the complete version is:

\[ \tilde{s} = \begin{cases} s_{mod} + \dfrac{(s_{max} - s_{mod})(s - s_{mod})}{(s_{max} - s_{mod}) + (s - s_{mod})} & \text{for } s > s_{mod} \\ s & \text{for } -s_{mod} \le s \le s_{mod} \\ -s_{mod} + \dfrac{(s_{max} - s_{mod})(s + s_{mod})}{(s_{max} - s_{mod}) - (s + s_{mod})} & \text{for } s < -s_{mod} \end{cases} \qquad (13) \]

As the underlying asymptotic function

\[ f_1(x) = \frac{x}{x + 1} = 1 - \frac{1}{x + 1} \qquad (14) \]

is a flipped and shifted version of 1/x, we might also say "based on 1/x". For a slower convergence we might use 1/sqrt(x) instead of 1/x, resulting in the underlying asymptotic function

\[ f_2(x) = 1 - \frac{1}{\sqrt{x + 1}} \qquad (15) \]

and

\[ \tilde{s} = \begin{cases} s_{max} \left( 1 - \dfrac{1}{\sqrt{1 + 2 s / s_{max}}} \right) & \text{for } s \ge 0 \\ -s_{max} \left( 1 - \dfrac{1}{\sqrt{1 - 2 s / s_{max}}} \right) & \text{for } s < 0 \end{cases} \qquad (16) \]

as a basic version, with the factor 2 giving a slope of 1 at the origin. The complete version based on 1/sqrt(x) is:

\[ \tilde{s} = \begin{cases} s_{mod} + (s_{max} - s_{mod}) \left( 1 - \dfrac{1}{\sqrt{1 + 2 (s - s_{mod}) / (s_{max} - s_{mod})}} \right) & \text{for } s > s_{mod} \\ s & \text{for } -s_{mod} \le s \le s_{mod} \\ -s_{mod} - (s_{max} - s_{mod}) \left( 1 - \dfrac{1}{\sqrt{1 - 2 (s + s_{mod}) / (s_{max} - s_{mod})}} \right) & \text{for } s < -s_{mod} \end{cases} \qquad (17) \]

References

[1] Pillas J., Kirschbaum F., Jakobi R., Gebhardt A., Uphaus F.: Model-based load change reaction optimization using vehicle drivetrain test beds. Proceedings of the 14th Stuttgart International Symposium, Stuttgart (2014), p. 857-867
[2] Bauer R., Uphaus F., Gebhardt A., Kirschbaum F., Jakobi R., Rossegger W.: Agility Simulation for Driveability Calibration on Powertrain Test Beds. Proceedings of the 7th International Symposium on Development Methodology, Wiesbaden (2017)
[3] Bauer R.: New Methodology for Dynamic Drive Train Testing. Proceedings of the 12th Symposium on International Automotive Technology, Pune (2011)
[4] Pacejka H.: Tyre and Vehicle Dynamics.
3rd edition, Butterworth-Heinemann, Oxford (2012) [5] Weber S., Dursun Y., Kirschbaum F., Jakobi R., Bäker B., Fischer J.: Investigations of the process of road matching on powertrain test rigs. Proceedings of the 17th Stuttgart International Symposium, Stuttgart (2017), p. 409-423 [6] Rill G.: Road Vehicle Dynamics. Taylor & Francis, Boca Raton, 2012 94 <?page no="105"?> 3.2 Bayesian Optimization and Automatic Controller Tuning Matthias Neumann-Brosig, Alexander von Rohr, Alonso Marco Valle, Sebastian Trimpe Abstract We give a short overview of Bayesian Optimization, an information-efficient class of gradient-free optimization algorithms, and its applications to automatic learning of controller parameters. We discuss three control problems that have been solved with this approach. Kurzfassung Wir geben einen kurzen Überblick über die Bayes'sche Optimierung, eine informationseffiziente Klasse von gradientenfreien Optimierungsalgorithmen, und ihre Anwendungen zum automatischen Lernen von Reglerparametern. Wir diskutieren drei Regelungsprobleme, die mit diesem Ansatz gelöst wurden 1 Introduction and Outline In industrial applications, the need for controller tuning is ubiquitous. A control-theoretic solution is to model the dynamic systems in question, and design a controller based on the system model. This combination of system identification and controller design is a case of indirect tuning, as the tuning takes place on a model of the system, not the system itself. For further information on system identification, we refer to [1]. While system identification and indirect tuning methods is a powerful tool with many successful applications, there are cases in which a more direct tuning method is indicated. For instance, some part of the system might be hard to model analytically, e.g. nonlinearities or friction. Direct tuning refers to controller tuning that is performed on the system itself instead of a model, and constitutes a good alternative in cases where a controller structure fitting the control problem is known (or the controller structure is pre-determined, as is oftentimes the case in industrial applications). In the applications discussed herein, we use a Gaussian Process (GP) to model the function mapping the controller parameters to a user-specified cost, measured in a 95 <?page no="106"?> 3.2 Bayesian Optimization and Automatic Controller Tuning physical experiment. The optimization algorithm iteratively suggest new controller parameters in a way to minimize the cost with a comparatively small number of physical experiments. This paper is organized in two parts: in the first part, Bayesian Optimization (BO) and GPs are introduced. In the second part, we briefly exhibit several applications and point to relevant publications. We also discuss several practical issues related to the application of controller tuning via BO. 2 Gaussian Processes and Bayesian Optimization We give only a very basic overview of GPs and BO in this article - a thorough discussion is beyond the scope of a single article. Instead, for a comprehensive treatment, we kindly refer the reader to any of the standard textbooks and articles, i.e. [2] for GPs and [3] for BO. 2.1 Gaussian Processes A GP on a set 𝑋 is a set of real-valued random variables 𝑓 : 𝑥 ∈ 𝑋 any finite collection of which has a joint normal distribution. It can be shown that a GP is uniquely determined by its mean function 𝑚: 𝑋 → ℝ and kernel or covariance function 𝑘: 𝑋 𝑋 → ℝ. 
Conditioning on a finite set of observations Y at locations T and assuming normally distributed observation noise with variance \sigma^2, the posterior has the following mean and covariance functions:

\[ m_{post}(x) = m(x) + k(x, T) \left[ k(T, T) + \sigma^2 I \right]^{-1} \left( Y - m(T) \right) \]

\[ k_{post}(x, y) = k(x, y) - k(x, T) \left[ k(T, T) + \sigma^2 I \right]^{-1} k(T, y) \]

where k(T,T), k(x,T) and k(T,y) denote the corresponding Gram matrices and m(T) is the vector of prior means at the positions T. Thus, given a kernel and some data, we can make predictions at new locations in X stochastically, i.e. we have a quantitative measure of uncertainty (the posterior variance) as well as a predicted value (the posterior mean). Due to the incorporation of noise in the model, a GP is able to deal directly with noisy measurements. A one-dimensional example of a GP regression is shown in Figure 1, with the prior in the left frame and the GP conditioned on some observed data points in the right frame. We note that the prior covariance function is usually not known, but a wide range of general covariance (or kernel) functions are available, and there are structured methods to choose one of these.

Figure 1: Example of a posterior GP. The light blue color depicts +/-2 standard deviations of the posterior.

2.2 Bayesian Optimization

BO aims to solve a minimization problem by training a GP model on observed function evaluations and then using the model to suggest new locations. An overview of BO is given in e.g. [3]. The next location in a BO algorithm is chosen by maximizing a surrogate "acquisition function" that models how informative a particular point is, given the current posterior model. The acquisition function typically depends not only on the posterior mean, but on the variance as well. Thus, a BO algorithm can incorporate uncertainty into its decision-making. An example of an acquisition function is shown in Figure 1. Once a user-specified criterion (like the number of measurements) is met, the algorithm's output is the minimum of the posterior mean.

Some common acquisition functions are Expected Improvement (EI), Entropy Search (ES) and Lower Confidence Bound (LCB). EI aims for points that are likely to improve significantly on the best points observed so far. ES aims to reduce the entropy of the posterior distribution of the minimum (a term that is not always defined, but on a countable, discrete grid these problems do not occur), and LCB seeks to maximize a lower confidence bound of the posterior model. It is important to note that there is no optimal a priori choice for the acquisition function. Rather, each choice is a specific heuristic that may or may not fit the problem.

3 Controller Tuning via Bayesian Optimization

In this section, we discuss the common problem of kernel and hyperparameter choice and give some examples of a practical application of the BO controller tuning framework.

3.1 Choice of kernel and hyperparameters

While there are infinitely many covariance functions to choose from, some kernel functions have the desirable property of enabling the corresponding GP to approximate any continuous function on a compact set arbitrarily well given enough training data. This can be formulated in a mathematically rigorous way as a convergence proof - cf. Chapter 6 of [2].
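A compact sketch of these posterior formulas is given below; it assumes a zero prior mean and a squared-exponential (RBF) kernel, and the observation data are invented for illustration.

import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_star, T, Y, noise_var=1e-2, **kern):
    # Posterior mean and variance at x_star (zero prior mean assumed).
    K_TT = rbf_kernel(T, T, **kern) + noise_var * np.eye(len(T))
    K_sT = rbf_kernel(x_star, T, **kern)
    mean = K_sT @ np.linalg.solve(K_TT, Y)
    cov = rbf_kernel(x_star, x_star, **kern) - K_sT @ np.linalg.solve(K_TT, K_sT.T)
    return mean, np.diag(cov)

# Toy observations of an unknown cost function
T = np.array([-4.0, -1.5, 0.0, 2.0, 3.5])
Y = np.sin(T) + 0.1 * np.random.default_rng(0).standard_normal(T.size)
x_star = np.linspace(-5.0, 5.0, 201)
mu, var = gp_posterior(x_star, T, Y, noise_var=0.01, lengthscale=1.0, variance=1.0)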
A practical approach for kernel choice is to start out with a small number of kernels and compare them to experimental data that is available prior to the tuning experiments, using approaches such as the Bayes Information Criterion [4], the Akaike criterion [5], or simply picking a universal kernel (like RBF or one of the Matérn or rational quadratic kernels). We do note, however, that the "correct" kernel choice has a direct impact on optimization results. Once a kernel is chosen, there are four ways to deal with the hyperparameters:
- Fix them to specific values prior to the tuning process, using domain knowledge and possibly prior experimental data.
- Use a maximum likelihood (ML) approach (either via online hyperparameter estimation or prior to the tuning).
- Use a maximum a posteriori (MAP) approach - this can be seen as a combination of the two previous points, as the prior incorporates domain knowledge, but one still allows for an optimization of the values.
- Use approximations to the stochastically correct Bayesian approach of integrating out the hyperparameters, for instance via Markov Chain Monte Carlo methods.

Which approach is chosen should depend on the availability of prior data and/or estimates, and the confidence in these estimates.

3.2 Example Applications

In this subsection, we briefly discuss three example applications of the theory discussed so far. The aim of this subsection is to give an idea of the wide applicability of BO in the controller tuning framework (which is agnostic to the physical system), not to provide a thorough overview of the results in question.

3.2.1 Throttle valve control

Two of the major challenges of throttle valve control are the nonlinearities induced by friction and the different spring rest positions in combination with different spring stiffnesses within the valve. The friction part is not trivial to model analytically. We applied BO with two different acquisition functions and two different user-specified functionals to the problem of tuning an ADRC controller for the throttle valve [6], and all results in this subsection are from that paper, which constitutes a good example of a collaboration between an industrial partner and a research institute (IAV and the Max Planck Institute for Intelligent Systems, MPI-IS, respectively). One of the cost functionals specified was the sum of the rise time of the throttle valve and the overshoot - thus, low cost values implied a quick response with small overshoot. The other cost functional was a combination of several system norms, with the intention of also incorporating a robustness measure into the BO. Some of the results are:
- The performance of BO did not depend critically on the chosen acquisition function.
- The BO algorithm was consistently able to find lower-cost parameters in a four-dimensional parameter space, compared to manual tuning by a domain expert, in only 10 trials of 2 to 3 minutes of measurement each.
- We did not observe a critical influence of measurement noise on the performance of the algorithm.
- It is feasible to incorporate system norms into the functional, thus enabling BO to incorporate system-theoretic notions like stability or robustness as well as heuristics.
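A schematic sketch of such a tuning loop is shown below. The two controller parameters, their bounds, the cost surrogate and run_experiment are hypothetical stand-ins for the physical measurement described above; the loop simply fits a GP to the measured costs and picks the next parameters with a lower-confidence-bound rule.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_experiment(params):
    # Placeholder for the physical measurement: returns rise time + overshoot
    # for one controller parameterisation (here a made-up analytic surrogate).
    kp, ki = params
    return 1.0 / (0.2 + kp) + 2.0 * abs(ki - 0.6) ** 1.5

bounds = np.array([[0.1, 5.0],        # hypothetical gain 1
                   [0.0, 2.0]])       # hypothetical gain 2
rng = np.random.default_rng(0)

X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))      # initial experiments
y = np.array([run_experiment(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                                            # ten further experiments
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 2.0 * sd)]                    # lower confidence bound
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))

best_params = X[np.argmin(y)]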
Figure 2 shows a sample trajectory after tuning.

Figure 2: Demonstration of throttle valve after tuning

3.2.2 Locomotion of Soft Microrobots

The soft microrobots recently developed by [7] are made of a photoresponsive material that deforms locally when exposed to light. They are able to achieve locomotion by periodically changing their shape, forming wave patterns on their surface, inspired by the locomotion of microorganisms. Their deformation can be actuated by external light fields. Models for the effects of the light on microrobot locomotion are unavailable, rendering classical optimal control infeasible. In [8] the locomotion performance is modeled from experimental data using GPs. The data acquisition to learn the model is guided by BO. By learning the relation between the light field's parameters and the locomotion performance, the authors avoid directly modeling the more complex deformation dynamics. Kernel, hyperparameters and acquisition function were chosen based on existing experimental data. Augmentation of the dataset by randomization of the data prevented overfitting and thereby made the approach robust to differences between microrobot samples. When using the BO-based tuning approach, the locomotion speed could be improved significantly compared to the previously best known parameters. The results demonstrate the effectiveness of BO in controller tuning for the soft microrobots.

3.2.3 Humanoid robot balancing an inverted pole

A main difficulty in robotic systems is the tuning of controller parameters to make the robot complete a specific task with a desired performance. Such a tuning process can be time consuming and tedious. In [9], the manual tuning of a humanoid robot (shown in Figure 3) balancing an inverted pole is replaced by a BO strategy, which seeks the optimal performance through sequential experiments. Specifically, an information-efficient criterion (Entropy Search [10]) leverages the data collected from each experiment in order to suggest the most informative tuning configurations to try on the robot that could improve the overall balancing performance. Experimental results demonstrate the effectiveness of this method against manual tuning in two- and four-dimensional tuning problems.

Figure 3: A humanoid robot balancing a pole

4 Conclusion

In this short article, we described Gaussian Processes and Bayesian Optimization and briefly discussed three very diverse tuning problems solved with this approach. While it is evident that indirect tuning methods are relevant, these results demonstrate that a direct tuning approach can lead to results competitive with manual tuning regarding the time spent on the tuning problem as well as the expected cost after tuning.

5 Literature

[1] L. Ljung, System Identification - Theory For the User, Upper Saddle River, N.J.: PTR Prentice Hall, 1999.
[2] C. E. Rasmussen and C. Williams, Gaussian Processes for Machine Learning, Cambridge, Massachusetts: MIT Press, 2006.
[3] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams and N. de Freitas, "Taking the Human Out of the Loop: A Review of Bayesian Optimization," Proceedings of the IEEE, pp. 148-175, January 2016.
[4] G. E. Schwarz, "Estimating the dimension of a model," Annals of Statistics, p. 461-464, 1978.
[5] H.
Akaike, "Information theory and an extension of the maximum likelihood principle," in Proceedings of the Second International Symposium on Information, Budapest, 1973. [6] M. Neumann-Brosig, A. Marco, D. Schwarzmann and S. Trimpe, "Data-efficient Auto-tuning with Bayesian Optimization: An Industrial Control Study," IEEE Transactions on Control Systems Technology, pp. 1-11, 2019. [7] S. Palagi, A. G. Mark, S. Y. Reigh, K. Melde, T. Qiu, H. Zeng, C. Parmeggiani, D. Martella, A. Sanchez-Castillo, N. Kapernaum, F. Giesselmann, D. S. Wiersma, E. Lauga and P. Fischer, "Structured light enables biomimetic swimming and versatile locomotion of photoresponsive soft microrobots," Nature materials 15, no. 6, 2016. [8] A. von Rohr, S. Trimpe, A. Marco, P. Fischer and S. Palagi, "Gait learning for soft microrobots controlled by light fields," 2018 IEEE/ RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6199-6206, 2018. [9] A. Marco, P. Hennig, J. Bohg, S. Schaal and S. Trimpe, "Automatic LQR tuning based on Gaussian process global optimization," 2016 IEEE international conference on robotics and automation (ICRA), pp. 270-277, 2016. [10] P. Hennig and C. J. Schuler, "Entropy search for information-efficient global optimization," Journal of Machine Learning Research 13, pp. 1809-1837, 2012. 102 <?page no="113"?> 3.3 Engine Calibration Using Global Optimization Methods Ling Zhu, Yan Wang Abstract The automotive industry is subject to stringent regulations in emissions and growing customer demands for better fuel consumption and vehicle performance. Engine calibration, a process that optimizes engine performance by tuning engine controls (actuators), becomes challenging nowadays due to significant increase of complexity of modern engines. The traditional sweep-based engine calibration method is no longer sustainable. To tackle the challenge, this work considers two powerful global optimization methods: genetic algorithm (GA) and Bayesian optimization. We developed and tested both methods on different engines, and carried out experimental study on a gasoline engine calibration problem. The results demonstrated that both GA and Bayesian optimization effectively find solutions very close to global optimum, and Bayesian optimization is more stable and has better worst-case performance. 1 Introduction Engine calibration has been adopting well-developed design of experiments (DOE) and numerical optimization processes that searches the optimal engine actuator settings for best engine performance at each engine speed/ load point. The process stemmed from engine systems developed more than 50 years ago when small number of actuators were used in engines. Such engine systems only need reasonable number of DOE testing to complete the characterization and calibration. In modern automotive industry, there has been increasing complexity in engine systems with new technologies introduced to meet stringent regulations in emissions and requirements for better fuel consumption, performance, driving comfort, and so on. In addition, those new added technologies introduce more nonlinearity and interaction that make the optimization even more challenging. On the other hand, the development of advanced engine technologies and latest trends in the autonomous vehicles, driver assist technologies, and connectivity increase the potential opportunities to further improve engine performances. However, without proper methods, these opportunities cannot be realized. 
The traditional engine calibration usually involves significant amount of dynamometer tests with very high operation cost while with very limited availability. The engine calibration process becomes exponentially expensive with increased degree of freedom, thus the traditional engine calibration is no longer sustainable to optimize such engines with high number of the control parameters compared to conventional systems. Recently test automation and smart DOE technologies have been developed and implemented to address these process deficiencies. However, improvement is in need on 103 <?page no="114"?> 3.3 Engine Calibration Using Global Optimization Methods final optimality of the calibration, especially when full DOE is no longer containable. Better optimization strategy is required. Evolutionary computation (EC), a family of meta-heuristic algorithms, is a nature-inspired global optimization method that shows great potential in solving difficult optimization problems. During the last decade, EC has been applied to automotive problems including engine design optimization, engine control, engine calibration [1], engine diagnostics, etc. In this work, we customized genetic algorithms (GAs) for engine calibration targeting optimization on real engine hardware platform. One of the key challenges to use GAs for engine calibration is the conflict between the limited number of hardware (dynamometer) testing and the demands for large number of solution evaluations (hardware testing) in order to find global optima. Another challenge is that the GA search tends to cover the full design space while some solutions may not run on real engine at all. These solutions will return no values for the algorithm to evolve. In this work, customized GAs is designed with domain knowledge to handle these issues. In addition, an adaptive algorithm parameter tuning mechanism is developed to automatically tune the mutation rates and mutation parameters based on the optimization performance. This will allow users to save the time and resources for manual tuning. Besides the GA, we also considered Bayesian optimization, known as efficient global optimization (EGO) method, which specifically targets to the black-box global optimization problem with computationally expensive function evaluations. Bayesian optimization usually begins with a number of random solutions, and builds surrogate models. Bayesian optimization runs optimization on the surrogate model to define the next point that balances exploration and exploitation. The surrogate model is iteratively updated with new points and all previous points. Bayesian optimization is a process of online / adaptive DOE by iteratively designing new experiments. It is well suited to engine calibration problem where the engine testing is usually expensive. By considering two global optimization methods on engine calibration, a comparative study of these two methods was carried out on a high-fidelity gasoline engine platform with limited number of total evaluations. The aim of this work is to provide valuable insight on how two powerful global optimization methods perform in engine calibration. 2 Previous works Numerous works on engine calibration have been proposed in past decades. The traditional model-based [12] engine calibration methods usually build numerical models from collected DOE based real-engine data and run optimization on these numerical models. 
Due to the increase in the number of the calibration parameters, the classic numerical modeling method showed limitation on representing high degree of freedom and high nonlinearity [11]. The black-box models such as Gaussian Process (GP) model [11, 14, 17] show high potential in representing complex mechanics, and have been used in several prior works [11, 14, 17]. The non-parametric GP models can be viewed as having infinitely many parameters and therefore, are more flexible and numerically robust compared to the classic parametric linear or polynomial numerical models. 104 <?page no="115"?> 3.3 Engine Calibration Using Global Optimization Methods In model-based engine calibration, the model accuracy in the optimal region depends on DOE, and the numerical optimization relies on the model accuracy. If the DOE arranges few or no points in the optimal region, the optimization can fail to find the good solutions due to the lack of representations near the optimal region. To address this limitation, interactive DOE or automatic calibration [13, 15, 16, 17, 18] was introduced to iteratively create new solutions (DOEs) based on the feedback of previous DOEs or model. Another challenge here is the computation complexity involved. Researchers addressed this issue with stochastic gradient-based online learning algorithm [16]. Meta-heuristic method including evolutionary computation [13, 15, 18] started gaining popularity in the last decade in solving engine optimization and calibration problems. Besides, Bayesian optimization using GP as the surrogate model also has high potential solving the engine calibration in real-engine platform, and the aim of this work is to evaluate the effectiveness of both GA and Bayesian optimization. 3 GA GA [2] is one of nature-inspired search methods based on the principles of biological evolution to solve optimization problem. GA begins with a population of random solutions and identifies fitness function that characterizes the level to which the solution satisfies the problem constraints and optimize the desired objectives. Subsequently, solutions of the next generation (refer to offspring) are generated by selecting solutions of higher fitness value from current generation and applying genetic operations: mutation and crossover. Iteratively, GA evolves the population that the average fitness gets better until some stopping criterion is reached. The stopping criterion is either the number of generation reaches the maximum allowed generation or best solutions consistently converge to the same region. Compared to classic local and gradient-based search method, GA has higher potential to find solutions in near optimal region and has less chance to be stuck in the local optima. 3.1 GA engine calibration optimization framework The overall framework of engine calibration with GA is given in the Figure 1. Unlike model-based engine calibration methods with numerical optimization on a model built by DOE test data, GA is directly optimizing specific outputs on the real engine. In this study we use a high-fidelity engine model to emulate real engine. The high-fidelity engine models are more and more accurate and can predict real engine performance very well. In this optimization framework, GA evolves design variables of solutions in the population, and the evolved solutions are sent to engine model. The engine model sends back the outputs of each solution to GA to calculate the fitness function as well as constraints violation. 
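A schematic sketch of such a loop is given below. The objective, constraint, bounds and genetic operators are toy stand-ins (the authors use simulated binary crossover and a tuned Gaussian mutation, described later), and evaluate() is a placeholder for the engine or engine-model run.

import numpy as np

rng = np.random.default_rng(0)
LB = np.array([700.0, 0.0, 0.0])       # hypothetical lower bounds (3 variables)
UB = np.array([2700.0, 55.0, 20.0])    # hypothetical upper bounds

def evaluate(x):
    # Placeholder for the engine run: returns the objective (e.g. BSFC)
    # and a total constraint violation (0 if feasible).
    bsfc = float(np.sum(((x - (LB + UB) / 2) / (UB - LB)) ** 2))   # toy objective
    violation = max(0.0, x[1] + x[2] - 60.0)                       # toy constraint
    return bsfc, violation

def better(a, b):
    # Parameter-less feasibility rule: feasible beats infeasible; otherwise
    # compare objective (both feasible) or violation (both infeasible).
    (fa, va), (fb, vb) = a[1], b[1]
    if (va == 0) != (vb == 0):
        return a if va == 0 else b
    return a if (fa < fb if va == 0 else va < vb) else b

pop = [(x, evaluate(x)) for x in rng.uniform(LB, UB, size=(12, 3))]
for gen in range(30):
    children = []
    for _ in range(len(pop)):
        p1 = better(pop[rng.integers(len(pop))], pop[rng.integers(len(pop))])
        p2 = better(pop[rng.integers(len(pop))], pop[rng.integers(len(pop))])
        w = rng.uniform(0, 1, 3)
        child = w * p1[0] + (1 - w) * p2[0]                        # blend crossover
        child += rng.normal(0, 0.05 * (UB - LB), 3)                # Gaussian mutation
        child = np.clip(child, LB, UB)
        children.append((child, evaluate(child)))
    pop = [better(a, b) for a, b in zip(pop, children)]            # elitist replacement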
GA starts with a well-designed initial population. GA emphasizes the good solutions, and more points will be created near these areas. It is worth mentioning that parallelization of solution evaluation is possible in GA: multiple solutions can be computed in parallel on the high-fidelity model to save computation time.

Figure 1: GA overall framework

3.2 Initial population

For the initial population, we use Latin Hypercube Sampling (LHS) [22], which samples solutions widely spread in the design space. LHS is commonly applied in global optimization; it reduces bias because it remembers which rows and columns the previously sampled points occupy and creates new points by avoiding already-occupied rows and columns [22].

3.3 Fitness assignment and constraint handling

In GA, fitness functions evaluate the quality of a solution. In this work we directly use the objective (BSFC) value and combine it with a parameter-less constraint handling mechanism [3]. The mechanism compares solutions by objective value for feasible solutions, and by constraint violation for infeasible solutions. A smaller constraint violation implies the solution is closer to the constraint boundary, and the mechanism thus creates selection pressure from the infeasible area into the feasible region. This handling mechanism does not introduce any new parameter, so no further tuning is required.

3.4 Genetic operators

GA explores the search space of possible solutions through reproduction, and new solutions are created by genetic operations. There are two genetic operators: crossover and mutation. Crossover exchanges parts of two parent solutions and creates new solutions. Mutation makes random partial changes to a parent solution. Each operator has at least one parameter. Tuning these parameters is essential in this study because only a limited number of solution evaluations is available: running optimization on the high-fidelity engine models usually takes significant simulation time, which is very expensive computationally. This is equivalent to running the algorithm on an engine dynamometer, which has very high operation cost and is therefore expensive operationally.

For crossover, we use simulated binary crossover [5], which has a self-adaptive feature for real-parameter GAs. Self-adaptive operators [4] are adopted to reduce parameter tuning effort. For mutation, a Gaussian mutation operator is used, and the mutation parameter (σ) is either manually or automatically tuned. Automatic mutation operator tuning [7] is based on the convergence of the population and on individual performance. The details are given in Section 5.2.

4 Efficient global optimization: Bayesian optimization

Bayesian optimization [6, 23-26] solves global optimization problems with computationally expensive function evaluations. The overall framework of Bayesian optimization is given in Figure 2. Similar to GA, Bayesian optimization starts with an initial sampling of the search space and evaluates these solutions, storing all required outputs for each solution. For each output a surrogate model is built, and a new point is found by optimizing an acquisition function, which searches the surrogate models to balance exploration and exploitation. The new point is evaluated in the high-fidelity engine model and added to the set of past solutions to build new surrogate models.
Surrogate models are improved iteratively by adding a new point each time, and they are eventually expected to have higher accuracy near the optimal region. Similar to GA, we also use LHS for the initial sampling, as LHS is recommended in several works [23, 24].

Figure 2: Bayesian optimization framework

4.1 Surrogate model

In each iteration, the surrogate model is constructed from all past solutions. The surrogate model is expected to provide a relatively good prediction of the function landscape, especially in the optimal region. In this work, we use the kriging method; the predictive model for kriging is given in Eq. 1 and Eq. 2 [6]:

\[ \hat{y}(\boldsymbol{x}) = \hat{\mu} + \boldsymbol{\psi}^{T} \Psi^{-1} (\boldsymbol{y} - \boldsymbol{1}\hat{\mu}) \qquad (1) \]

\[ \psi^{(i)} = \mathrm{cor}\!\left[ Y(\boldsymbol{x}^{(i)}), Y(\boldsymbol{x}) \right] = \exp\!\left( - \sum_{j} \theta_j \left| x_j^{(i)} - x_j \right|^{p_j} \right) \qquad (2) \]

where \hat{\mu} is the mean term, \Psi is the correlation matrix of the observed points, and \boldsymbol{\psi} is the vector of correlations between the observed points \boldsymbol{x}^{(i)} and the new point \boldsymbol{x}. In the correlation function there are two parameters for each design variable x_j: \theta_j and p_j. The \theta_j is the scale length and p_j is the exponent. It is clear that the kriging method is based on a concept similar to the GP. However, kriging is more versatile and complicated, as it further adapts the scale length and exponent for each design variable. This effort improves the predictive power of kriging, but also increases the computational complexity. To simplify the computation, we fix the exponent to be the same for all design variables and also apply a constant mean term.

4.2 Acquisition function

Once the surrogate models are built, the next point is chosen by optimizing the acquisition function. The search strategy for the next point is important, as it is critical for improving the surrogate model accuracy. As with any global optimization technique, a proper balance between exploration and exploitation is essential. In the search, exploitation directs the search to the current best estimated solution, and exploration directs it to the region with the highest uncertainty. The acquisition function aggregates exploitation and exploration on the surrogate model and encodes the aggregated information into the optimization. One of the most common acquisition functions is expected improvement (EI), which measures the expected value of the improvement at each point over the best observed point [8]. The formula [6] is given below in Eq. 3:

\[ EI(\boldsymbol{x}) = \begin{cases} \left( y_{min} - \hat{y}(\boldsymbol{x}) \right) \Phi\!\left( \dfrac{y_{min} - \hat{y}(\boldsymbol{x})}{\hat{s}(\boldsymbol{x})} \right) + \hat{s}(\boldsymbol{x}) \, \phi\!\left( \dfrac{y_{min} - \hat{y}(\boldsymbol{x})}{\hat{s}(\boldsymbol{x})} \right) & \text{if } \hat{s}(\boldsymbol{x}) > 0 \\ 0 & \text{if } \hat{s}(\boldsymbol{x}) = 0 \end{cases} \qquad (3) \]

where y_{min} is the currently observed best value, \hat{y}(\boldsymbol{x}) is the estimated mean, \hat{s}(\boldsymbol{x}) is the estimated prediction uncertainty (the square root of the predictive variance), \Phi is the cumulative distribution function of the standard normal distribution, and \phi is the probability density function of the standard normal distribution. In the above equation, when the confidence bound \hat{s}(\boldsymbol{x}) is not equal to zero, the first term encodes exploitation and the second term encodes exploration. As the iterations progress, once the optimization can no longer improve y_{min}, \Phi will be close to its minimum value but \phi will reach its maximum value. In this case, the exploration term dominates the EI, and the optimization will return a solution in a highly uncertain region where few or no existing solutions cover the region.
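The EI criterion can be sketched in a few lines; the predicted means and standard deviations below are made-up numbers standing in for the kriging predictions.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sd, y_best):
    # Expected improvement (Eq. 3) for minimisation, given the surrogate's
    # predicted mean mu and standard deviation sd at candidate points.
    mu, sd = np.asarray(mu, float), np.asarray(sd, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (y_best - mu) / sd
        ei = (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)
    return np.where(sd > 0.0, ei, 0.0)

# Example: choose the next test point from a small candidate set
mu = np.array([250.0, 246.0, 252.0, 247.5])   # hypothetical predicted BSFC values
sd = np.array([0.5, 2.0, 4.0, 0.1])           # hypothetical predictive uncertainty
next_index = int(np.argmax(expected_improvement(mu, sd, y_best=247.0)))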
5 Customization for engine calibration

5.1 Repair for non-operational solutions

5.1.1 Non-operational solutions

In a constrained optimization problem there are two types of solutions: feasible and infeasible solutions, shown in Figure 3. Feasible solutions (also known as in-bound solutions) satisfy all the constraints, and infeasible solutions (also known as out-of-bound solutions) violate at least one of the constraints.

Figure 3: Feasible and infeasible solutions

Usually, GA or EGO uses a constraint handling mechanism to emphasize the feasible solutions and give less credit to the infeasible solutions. Most constraint handling mechanisms require the actual constraint function value in order to know how much an infeasible solution violates the constraints. Constraint violation information can provide a search direction towards the feasible region. Without constraint violation information, the convergence speed of GA slows down, especially in early generations where there are no or few feasible solutions in the population.

In real engine testing, some solutions cannot even run completely on the real engine platform, due to potential knock that causes significant damage to the engine. These solutions, called non-operational solutions (see Figure 4), are a subset of the infeasible solutions, but they do not provide any information about either objectives or constraints. For EGO, if a solution is non-operational, an alternative solution has to be found in order to continue the iterative search. For GA, if most of the population consists of non-operational solutions, the optimization process slows down, because non-operational solutions fail to provide a search direction and GA will spend a lot of time just finding operational solutions, not to mention optimal ones. In the worst case, the search cannot even converge into the feasible region. Furthermore, for real engine optimization only a limited number of evaluations (real tests) is available in most cases, so accelerating convergence speed in the early generations is crucial.

Figure 4: Non-operational solutions

To test the negative effects of non-operational solutions, we ran experiments on a desktop-based diesel engine optimization. The diesel engine model used in this optimization has ten design variables and two objectives (minimize BSFC and NOx emissions) with three constraints. For this desktop study, we apply SPEA2, one of the state-of-the-art multi-objective GAs, and run two different sets of experiments. In the first set of experiments, we define some parts of the infeasible area as non-operational (as shown in Figure 4) and penalize non-operational solutions by assigning the maximum constraint violation value to them. In the second set of experiments, all infeasible solutions are operational (as shown in Figure 3), and all solutions have objective and constraint values available. The same parameter and initial population settings (in which less than 20% of the solutions are feasible) are applied to both sets of experiments.

Figure 5 shows the optimization process of the two sets of experiments. The blue circles are the first set, and the green triangles are the second set. To simulate the real engine platform, we limit the total number of evaluations to 360; the figure shows the first 120 evaluations. The y-axis is the percentage of feasible solutions in the archive population; the higher, the better. The archive population contains the historical elite solutions found by SPEA2; in GA, archive solutions guide the whole population toward the optimal region. Figure 5 shows that without non-operational points, after 60 evaluations the archive solutions are all feasible, and the archive is ready to search the optimal region inside the feasible area.
However, with non-operational points, even after 120 evaluations half of the archive solutions are still infeasible, which means the search still struggles to converge into the feasible region. It shows that non-operational solutions impede the optimization process; hence, it is essential to handle these solutions properly in a GA when only a limited number of evaluations is available. Figure 5: Effect of non-operational solutions 5.1.2 Repair method for the non-operational points To handle the non-operational points, this work proposes a genetic repair method. The repair method is inspired by constraint handling methods used in GAs [19-21]. For example, if a solution A is non-operational, we randomly select one feasible solution B from the archive, then search for an operational solution A_new along the direction from A to B. A_new then replaces A as the new solution. Therefore, this algorithm creates extra evaluations but helps provide a search direction to the GA while preserving randomness. It is also noted that this algorithm is most likely only needed in the early generations. 110 <?page no="121"?> 3.3 Engine Calibration Using Global Optimization Methods 5.1.3 Effectiveness of repair method With this algorithm, we revisited the experiments using the diesel engine application mentioned above. Figure 6 shows the results of the optimization process for three different settings. Two settings (green triangle and blue circle) are the same as in Figure 5, and the red square shows the experimental results for our proposed repair method. To simulate the real engine platform, we limit the total number of evaluations to 360; the figure shows the percentage of feasible solutions in the archive. From the figure, for the proposed method the archive solutions converge to the feasible region within 50 evaluations, much faster than in the other two cases. After the archive solutions are all operational, we do not further repair non-operational solutions, to ensure the randomness of the genetic search. This also reduces the extra evaluations created by the method. Our proposed method handles non-operational solutions effectively; hence, a GA with the proposed repair can significantly enhance the optimization process for online real engine optimization. Figure 6: Effectiveness of non-operational handling method For the performance evaluation, we ran the experiments with and without the repair algorithm 30 times and evaluated the optimization performance by hypervolume. Usually, in multi-objective optimization, there are two performance criteria to evaluate a set of evolved solutions: convergence and diversity. Hypervolume is a well-studied metric that is used to evaluate both convergence and diversity. For hypervolume, the greater, the better. Figure 7 shows the hypervolume of all 30 runs with and without repair. The bars represent the maximum, minimum, and average hypervolume values of the 30 runs. The distribution plotted over each bar is the frequency of the corresponding hypervolume values. For example, for the first setting, the hypervolume is widely spread and therefore unstable. By applying our proposed repair method, however, the GA performs in a stable manner with consistently higher hypervolume values. Our proposed method minimizes the destructive impact of non-operational solutions and stabilizes the optimization process of the GA. 111 <?page no="122"?> 3.3 Engine Calibration Using Global Optimization Methods Figure 7: Performance of non-operational handling method
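A minimal sketch of this repair step is given below; the operability check is assumed to be a callable wrapping the engine (or engine model), and the step count is an illustrative choice rather than the value used in this work.

import numpy as np

def genetic_repair(a_nonop, feasible_archive, is_operational, steps=10, rng=np.random):
    # Repair sketch (Section 5.1.2): move the non-operational solution A towards a
    # randomly chosen feasible archive solution B until an operational point is found.
    b = feasible_archive[rng.randint(len(feasible_archive))]
    for k in range(1, steps + 1):
        a_new = a_nonop + (k / steps) * (b - a_nonop)   # point on the segment from A to B
        if is_operational(a_new):                       # each check costs one extra evaluation
            return a_new
    return b                                            # fall back to the feasible solution B

After the archive has become operational, the repair is switched off, as described above, so the extra evaluations are only spent in the early generations.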
5.2 Adaptive mutation operator There are two types of GAs used in this work: a fixed GA and an adaptive GA. The parameters of the fixed GA are manually tuned. In contrast, the adaptive GA uses an adaptive mutation operator that automatically tunes the mutation rate and the Gaussian mutation parameter (\sigma) based on the performance. The adaptive tuning strategy for \sigma of the i-th solution is given below [7]:

\sigma_i = \begin{cases} \sigma_{\min} + (\sigma_{\max} - \sigma_{\min}) \, \dfrac{f_i - f_{\min}}{\bar{f} - f_{\min}}, & \text{if } f_i \le \bar{f}, \\ \sigma_{\max}, & \text{otherwise,} \end{cases} \qquad \bar{f} = \sum_k w_k f_k, \quad \sum_k w_k = 1, (4)

where f_i is the objective value of the i-th solution in the current population, \bar{f} is the weighted sum of the objective values of the current population, and f_{\min} is the minimum objective value of the current population. \sigma_{\max} and \sigma_{\min} define the range for \sigma. The same adaptive strategy is applied to the mutation rate. The adaptive mechanism allows higher exploration for solutions far from the current-best solutions, and allows higher exploitation for solutions near the current-best solutions. It also adaptively changes the degree of exploitation based on performance. 6 Experiments 6.1 Experiment setup 6.1.1 High fidelity engine model Due to the limited availability of dyno resources, we used a high-fidelity engine model for the experiments with the algorithms. GT-POWER [9] is a leading engine simulation tool used in engine design and evaluation. In this work, we used a high-fidelity GT-POWER model to 112 <?page no="123"?> 3.3 Engine Calibration Using Global Optimization Methods predict performance indices of a gasoline engine, such as engine BSFC, combustion and torque, port pressures, component temperatures, etc. We focus on steady-state simulations of the GT-POWER engine model at different operating points, and we conducted our study on the calibration at one speed/load point at a time. The model has numerous input parameters, and six input parameters were chosen as calibration parameters. These calibration parameters include throttle angle, waste gate orifice, ignition timing, valve timings, air-fuel ratio, and engine speed. The goal of the calibration is to minimize the BSFC at the same speed/load point, while making sure that other outputs such as temperatures and combustion performance satisfy the corresponding constraints. Figure 8 shows that such an engine calibration problem is actually a constrained optimization problem. For such a constrained real-parameter optimization problem, two global optimization methods - GA and Bayesian optimization - were applied to minimize BSFC subject to four inequality constraints. To account for the resource limitations of the real-engine environment, the total number of evaluations is capped at 360. Figure 8: Engine calibration as an optimization problem 6.1.2 Optimization setup The experimental setting of the two optimization algorithms is as follows. All GA experiments used simulated binary crossover with crossover rate P_c = 0.9 and crossover distribution index \eta_c = 5. For adaptive mutation, the GA used Gaussian mutation with adaptive mutation rate and variance. Three different population sizes (8, 12, and 20) were tested and compared. We also tested a fixed GA with Gaussian mutation rate P_m = 0.3 and population size 20. For Bayesian optimization, the surrogate model uses kriging, and the DACE [10] toolbox was used to estimate \theta_j for each calibration parameter x_j. The exponent of the kriging correlation function was set to p_j = 2 for all j to increase the smoothness of the fit [6]. For each setting, experiments were run at least 10 times, and the experimental results are given next.
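As an aside to the adaptive GA configuration, the \sigma update of Eq. 4 can be written compactly as a small Python sketch; the bounds \sigma_min and \sigma_max and the uniform default weights used here are illustrative assumptions, not the tuned values of this study.

import numpy as np

def adaptive_sigma(f, weights=None, sigma_min=0.01, sigma_max=0.3):
    # Sketch of the adaptive Gaussian mutation width (Eq. 4, Section 5.2):
    # solutions close to the current best get a small sigma (exploitation),
    # solutions far from it get a large sigma (exploration).
    f = np.asarray(f, dtype=float)
    w = np.full(len(f), 1.0 / len(f)) if weights is None else np.asarray(weights, dtype=float)
    f_bar = float(np.dot(w, f))          # weighted population objective
    f_min = float(f.min())               # current best (minimization)
    sigma = np.full_like(f, sigma_max)   # "otherwise" branch
    denom = max(f_bar - f_min, 1e-12)    # guard against division by zero
    better = f <= f_bar
    sigma[better] = sigma_min + (sigma_max - sigma_min) * (f[better] - f_min) / denom
    return sigma

The same pattern can be applied to the mutation rate, so that neither parameter needs to be tuned by hand.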
113 <?page no="124"?> 3.3 Engine Calibration Using Global Optimization Methods 6.2 Results 6.2.1 Optimization performance Given the limited number of total evaluations (360 evaluations), a large population size will suffer from inadequate evolution due to the small number of generations. To mitigate this problem, a small population size (<= 20) is recommended in order to allow the GA to run more generations. At the same time, for a problem with this many design variables, too small a population size is not acceptable either, and we only studied population sizes down to 8. Figure 9 shows how an adaptive GA optimizes the BSFC over the evaluations. The algorithm does not need any tuning of its parameters. The final BSFC is 0.2% above the global optimum, compared to the 2% the algorithm started with. Figure 9: BSFC improvements by GA Table 1 shows the best BSFC found by each setting of the algorithms, with all values normalized so that 1 is the absolute best BSFC for the engine, i.e. the global optimum for the speed/load point. It is also noted that the fixed GA needed extra tuning to find the best algorithm parameters. The objective is to find the smallest BSFC. Among all repeats of the same experimental setting, the table lists the best case (the minimum best BSFC of all repeats), the worst case (the maximum best BSFC), and the average case (the average of the best BSFC over all repeats). The table shows that, even with only 360 evaluations, GA and Bayesian optimization can find solutions whose BSFC is very close to the optimum BSFC, and the differences are too small to conclude that one algorithm outperforms the other. From the perspective of algorithm stability, the performance of Bayesian optimization and the fixed GA may be more consistent across all runs, and Bayesian optimization may perform slightly better than GA, but the difference is not significant. Please note that in real-world applications, where measurement noise and uncertainty of the combustion system exist, this difference of ~1% is often negligible. 114 <?page no="125"?> 3.3 Engine Calibration Using Global Optimization Methods Table 1: Best BSFC found by GA and Bayesian optimization (BSFC is normalized, the minimum value is 1 after normalization)

Optimization method | Population size | Best case (BSFC) | Average case (BSFC) | Worst case (BSFC)
Adaptive GA | 8 | 1.001 | 1.005 | 1.010
Adaptive GA | 12 | 1.002 | 1.007 | 1.013
Adaptive GA | 20 | 1.002 | 1.007 | 1.013
Fixed GA | 20 | 1.005 | 1.008 | 1.010
Bayesian Opt | N/A | 1.0003 | 1.004 | 1.006

6.2.2 Surrogate model accuracy For Bayesian optimization, it is important that the final surrogate model has high accuracy in the optimal regions, i.e. the regions of interest. After running all the optimizations, we found that there were several optimal regions, including global and local ones. Some regions are close to each other, yet there are gaps between the regions in the objective space, so we consider them independent regions. For the speed/load point we tested, we concluded that there are four such regions, and we tested the accuracy of the surrogate models in these four regions. Specifically, one surrogate model was built for each constraint and for the objective, and we use the Root Mean Square Error (RMSE) of each surrogate model for comparison. As shown in Table 2, in the optimal regions, except for constraint 1, all surrogate models had high accuracy (error smaller than 3%). For constraint 1, the larger errors are potentially due to its relatively narrow range. Please note that the last row in Table 2 is the model error over the entire search space.
This is to be expected, since the focus is on the model accuracy in the optimal regions. Given the limited number of points (solutions), it is neither possible nor necessary for the surrogate models to provide a good abstraction of the physical model in the entire six-dimensional design space. Table 2: RMSE of surrogate model in different optimal regions (all values are normalized to [0, 1])

Region | Constraint 1 | Constraint 2 | Constraint 3 | Constraint 4 | BSFC
Optimal region 1 | 0.18 | 0.01 | 0.02 | 0.002 | 0.01
Optimal region 2 | 0.23 | 0.03 | 0.03 | 0.003 | 0.02
Optimal region 3 | 0.20 | 0.03 | 0.02 | 0.002 | 0.01
Optimal region 4 | 0.20 | 0.04 | 0.02 | 0.003 | 0.01
Entire search space | 1.33 | 1.73 | 0.08 | 0.040 | 2.39

115 <?page no="126"?> 3.3 Engine Calibration Using Global Optimization Methods 7 Conclusion Real-world engine calibration usually involves expensive evaluations, and only a limited number of solution evaluations is available. To tackle this limited resource, we customized two global optimization methods and carried out a comparative study on engine calibration: GA and Bayesian optimization. As one of the nature-inspired optimization techniques, GA is a population-based search method and usually requires a large number of solution evaluations. To improve the GA performance, a self-adaptive mechanism was designed to tune the mutation operator automatically, and a genetic repair was adopted to address non-operational solutions in real engine testing. Besides the GA, Bayesian optimization, which is known to efficiently solve optimization problems with computationally expensive function evaluations, was also run on the same engine calibration problem. The experiments on a high-fidelity engine model (GT-POWER) demonstrated the effectiveness of both methods, as both were able to find solutions near one of the optimal regions. From this study, we conclude that both methods have high potential for solving engine calibration problems in a real-engine environment. Future work will include the uncertainties and noise of the real-engine platform. References [1] Tayarani-N, Mohammad-H., Xin Yao, and Hongming Xu. "Meta-heuristic algorithms in car engine design: A literature survey." IEEE Transactions on Evolutionary Computation 19.5 (2015): 609-629. [2] Whitley, Darrell. "A genetic algorithm tutorial." Statistics and Computing 4.2 (1994): 65-85. [3] Deb, Kalyanmoy. "An efficient constraint handling method for genetic algorithms." Computer Methods in Applied Mechanics and Engineering 186.2-4 (2000): 311-338. [4] Deb, Kalyanmoy, and Hans-Georg Beyer. "Self-adaptive genetic algorithms with simulated binary crossover." Evolutionary Computation 9.2 (2001): 197-221. [5] Agrawal, Ram Bhushan, Kalyanmoy Deb, and Ram Bhushan Agrawal. "Simulated binary crossover for continuous search space." Complex Systems 9.2 (1995): 115-148. [6] Forrester, Alexander IJ, and Andy J. Keane. "Recent advances in surrogate-based optimization." Progress in Aerospace Sciences 45.1-3 (2009): 50-79. [7] Srinivas, Mandavilli, and Lalit M. Patnaik. "Adaptive probabilities of crossover and mutation in genetic algorithms." IEEE Transactions on Systems, Man, and Cybernetics 24.4 (1994): 656-667. [8] Letham, Benjamin, et al. "Constrained Bayesian optimization with noisy experiments." Bayesian Analysis (2018). [9] GT-Power, https://www.gtisoft.com/gt-suite-applications/propulsion-systems/gtpower-engine-simulation-software/, Gamma Technologies. [10] Nielsen, Hans Bruun, Søren Nymand Lophaven, and Jacob Søndergaard. "DACE - a Matlab kriging toolbox." (2002).
116 <?page no="127"?> 3.3 Engine Calibration Using Global Optimization Methods [11] Millo, Federico, Pranav Arya, and Fabio Mallamo. "Optimization of automotive diesel engine calibration using genetic algorithm techniques." Energy 158 (2018): 807-819. [12] Atkinson, Chris, and Gregory Mott. Dynamic model-based calibration optimization: An introduction and application to diesel engines. No. 2005-01-0026. SAE Technical Paper, 2005. [13] Zaglauer, Susanne, and Ulrich Knoll. "Evolutionary algorithms for the automatic calibration of simulation models for the virtual engine application." IFAC Proceedings Volumes 45.2 (2012): 177-181. [14] Berger, Benjamin, and Florian Rauscher. "Robust Gaussian process modelling for engine calibration." IFAC Proceedings Volumes 45.2 (2012): 159-164. [15] Bertram, Aaron M., Qiang Zhang, and Song-Charng Kong. "A novel particle swarm and genetic algorithm hybrid method for diesel engine performance optimization." International Journal of Engine Research 17.7 (2016): 732-747. [16] Janakiraman, Vijay Manikandan, XuanLong Nguyen, and Dennis Assanis. "Stochastic gradient based extreme learning machines for stable online learning of advanced combustion engines." Neurocomputing 177 (2016): 304-316. [17] Mosbach, Sebastian, et al. "Iterative improvement of Bayesian parameter estimates for an engine model by means of experimental design." Combustion and Flame 159.3 (2012): 1303-1313. [18] Kianifar, Mohammed Reza, Loan Felician Campean, and Dave Richardson. "Sequential DoE framework for steady state model based calibration." SAE International Journal of Engines 6.2 (2013): 843-855. [19] Coello, Carlos A. Coello. "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art." Computer methods in applied mechanics and engineering 191.11-12 (2002): 1245- 1287. [20] Kramer, Oliver. "A review of constraint-handling techniques for evolution strategies." Applied Computational Intelligence and Soft Computing 2010 (2010). [21] Mezura-Montes, Efrén, and Carlos A. Coello Coello. "Constraint-handling in nature-inspired numerical optimization: past, present and future." Swarm and Evolutionary Computation 1.4 (2011): 173-194. [22] Helton, Jon C., and Freddie Joe Davis. "Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems." Reliability Engineering & System Safety 81.1 (2003): 23-69. [23] Jones, Donald R., Matthias Schonlau, and William J. Welch. "Efficient global optimization of expensive black-box functions." Journal of Global optimization 13.4 (1998): 455-492. [24] Brochu, Eric, Vlad M. Cora, and Nando De Freitas. "A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning." arXiv preprint arXiv: 1012.2599 (2010). [25] Bull, Adam D. "Convergence rates of efficient global optimization algorithms." Journal of Machine Learning Research 12.Oct (2011): 2879-2904. [26] Kandasamy, Kirthevasan, Jeff Schneider, and Barnabás Póczos. "High dimensional Bayesian optimisation and bandits via additive models." International Conference on Machine Learning. 2015. 117 <?page no="128"?> 4 Methods 4.1 Finding Root Causes in Complex Systems Hans-Ulrich Kobialka Abstract Root cause analysis in an industrial setting is a difficult unsupervised machine learning problem. We target this challenge by the massive use of neural network technology and systematic involvement of domain experts. 
Kurzfassung Root-Cause-Analyse fasst ähnliche fehlerhafte Vorgänge zu Gruppen zusammen und versucht für die Vorgänge einer Gruppe Fehlerursachen zu identifizieren. Für komplexe Produktionsprozesse ist dies ein schwieriges unsupervisiertes maschinelles Lernproblem. Der hier vorgestellte Ansatz kombiniert den intensiven Einsatz von tiefen Neuronalen Netzen mit einer systematischen Einbindung von Domänen-Experten. 1 Introduction Failures, or non-optimal behavior, in complex industrial systems may have a broad spectrum of potential root causes. In many cases, root causes cannot be pinned down to simple rule violations (e.g. exceeding of a threshold). Instead, root causes may be complex dynamical phenomena often not known to system experts. The detection of unknown complex root causes in high-dimensional time series data is a severe problem, but also a key issue for optimizing the performance of complex systems. Currently, terabytes of data are collected during industrial production. Sensor data is recorded at the scale of milliseconds while processes typically run for minutes or hours. Production lines observed by many sensors produce long and high-dimensional time series data. On special occasions, domain experts take a very selective look at that raw data guided by their system knowledge, but in general, intelligent analysis is needed to exploit the information provided by sensors. The goal of root cause analysis is to identify the driving factors behind a certain phenomenon, in our case a machine failure. In a perfect world and for our given scenario, a root cause analysis might yield something like the following result: "On machine XY, component A shows aging effects. Therefore, in case of varying paper quality, vibrations in the frequency spectrum around 10 Hz may occur, which in turn increase 118 <?page no="129"?> 4.1 Finding Root Causes in Complex Systems the probability that paper problems cause failure." Most likely, machine learning will not deliver explanations like that within the next decade. Currently, simple rules can be derived from data having a moderate number of features [1]. For long, high-dimensional time series, root cause analysis becomes more challenging by far. Our approach pursues an interaction between machine learning driven root cause analysis and domain experts. Data scientists talk to domain experts to understand complex production systems, and to start the analysis first on the least complex setting. Then data scientists search the data for clusters, outliers, and interesting events, and identify the relevant information. This reduced information can then be presented to domain experts for discussion. It should be emphasized that this approach highly depends on the availability of system experts. Expert knowledge can speed up both clustering and feature selection significantly, where uninformed strategies would suffer from the dimensionality of the search spaces. We applied this iterative and systematic approach to the root cause analysis on printing machines. The efforts for this were quite considerable, but alternatives were lacking due to the high dimensionality of the problem. 2 Approach Our approach consists of two steps. In step 1, failures are clustered according to the component which may have caused the failure. There are two sub-steps. In step 1.1, the clusters are identified together with a few prototype failure samples, which represent each cluster. Subsequently, in step 1.2, unlabeled failure samples are labeled (i.e.
associated to clusters) in a kind of self-learning process performed jointly on all clusters. In step 2, relevant parts of the sensor signal are identified for each cluster using multi-instance learning [3]. 2.1 Clustering of failure samples (Step 1) The problem is that conventional clustering algorithms (such as k-means clustering) cannot be applied. From high-dimensional time series, an enormous number of features can be generated. Selecting the ones best suited for root cause detection is hard without any prior knowledge. So, some arbitrary features will be selected. Furthermore, in such a high-dimensional feature space, Euclidean distance is likely to fail because some feature dimensions may not be appropriately scaled with respect to others. Finally, clustering algorithms may group samples according to various criteria, for instance, according to the product produced, or according to the machine used. Therefore, we have to tell the clustering algorithm that samples should be grouped according to their root causes. 2.1.1 Identification of clusters and prototype failure samples (Step 1.1) Within printing machines, there are several components which contribute to the process separately. The behavior of one machine component does not influence the others, but may cause the overall process to fail. Therefore, the sensors located at a component should be sufficient to identify the failures caused by this particular component. 119 <?page no="130"?> 4.1 Finding Root Causes in Complex Systems This approach covers only a part of all failures. For instance, it leaves out all failures which are caused within the final unit, where the effects of all components interact with each other. Furthermore, there may be failures caused by events not observed by any sensor. Figure 1: Events at a component may lead to failure occurring at the final unit Each failure sample (i.e. all data recorded about a particular failure instance) which can be separated with high confidence from non-failure samples by just using the sensor information of a component is assumed to be caused by this component. Figure 2: Sensor information measured at different machine components We first applied decision trees and subgroup discovery [2] to this task. As both approaches cannot be applied to sensor time series directly, features were constructed manually and each multi-dimensional time series is represented by a feature vector. Unfortunately, both decision trees and subgroup discovery yielded some interesting insights on a few small subgroups, but did not cluster the majority of failure samples. We then switched to the use of Recurrent Neural Networks (RNNs). Time series can be fed directly into an RNN (i.e. there is no need for manual feature construction). For a task to be solved, an RNN learns highly complex non-linear transformations in time and across different input channels. 120 <?page no="131"?> 4.1 Finding Root Causes in Complex Systems The task of the RNN is to map the features (sensors) of component C, f_C(x), of sample x to the probability p_fail,C(x) that x is a failure sample. Formally, the RNN is a function f that maps the data sample x, given as sensor readings for component C, to the probability of failure p_fail,C(x):

f_C(x) → p_fail,C(x) (1)

The data available contains the ground truth probability for failure samples (failure probability = 1) and non-failure samples (failure probability = 0).
By using only the sensors of component C, f_C(x), this mapping should become infeasible for failure samples not caused by component C. In an iterative process of training and validation, failure samples showing a high validation error are subsequently removed from the training set. Each iteration step is performed as an 18-fold cross validation (CV). After each iteration, some amount (e.g. 5%) of the failure samples¹ is removed, while retaining failure samples showing a high probability (> 80%). A fixed number of iterations (e.g. 20) is performed. After each iteration i, the remaining training set X_Ci used is stored. This iterative process is performed for each component separately. The final clusters, i.e. the final training sets for the components, are determined by jointly maximizing the sum of cluster members across all clusters and minimizing the number of failure samples belonging to multiple clusters. This is done by a) selecting X_Ci for each cluster C, and b) determining a minimum failure probability p_fail_min,C for each X_Ci and selecting as cluster members only samples x with p_fail,C(x) > p_fail_min,C. This way, the number of failure samples belonging to multiple clusters can be minimized either by a) choosing the samples X_Ci obtained after more iterations (i.e. increased i) on the clusters concerned, or b) increasing the minimum failure probability p_fail_min,C for the chosen X_Ci. The resulting failure samples X_C show high probabilities of being caused by events on component C. Because the dimensionality of the time series f_C(x) is significantly smaller than that of x, f_C(x) can be plotted and compared with other cluster members or members of other clusters. It is important to involve domain experts at this point and discuss the findings with them. Data may contain artifacts (e.g. arbitrary events may correlate with failure). The CV scheme is an important countermeasure against this, but if several samples contain the same artifact, this will influence cluster membership. Without the help of domain experts, this cannot be detected. Furthermore, domain experts can give hints like creating sub-clusters (in case of multiple root causes) or using only a single sensor (if sufficient) for detecting a root cause. 2.1.2 Labeling of unlabeled failure samples (Step 1.2) In step 1.1, only a small fraction of failure samples is labeled. Starting from these high-confidence samples, similar samples are identified and labeled accordingly. This way, failure clusters get enlarged, sub-clusters may be identified, and more confidence in the clusters is gained. ¹ Note that it is important that only failure samples are removed during this process, but not non-failure samples. Otherwise, even randomly labeled data samples could be perfectly separated by removing about half of the data samples. 121 <?page no="132"?> 4.1 Finding Root Causes in Complex Systems In a kind of self-training process, for each cluster, an initial RNN model is trained on high-confidence samples and then applied to unlabeled failure samples. Unlabeled samples for which the RNN predicts a high confidence of being a failure sample are then labeled as members of this cluster. The RNN model is then retrained on the enlarged sample set and again applied to the yet unlabeled failure samples. This is repeated until no labels are assigned any more. This iterative process is performed in parallel on all clusters.
The confidence of an unlabeled failure sample being a member of a cluster does not only depend on the output of the RNN model of that cluster, but also requires that the RNN models of the other clusters do not claim this sample for their clusters. I.e., for x being an unlabeled failure sample: if p_fail,C(x) > p_fail_min,C and p_fail,C(x) - max_{C'≠C} p_fail,C'(x) > thresh, then x is labeled as a member of cluster C. Again, also during this process, the support of domain experts can improve the result a lot. Whenever p_fail,C(x) gets close to p_fail_min,C, x becomes a candidate to be discussed with domain experts. This way the boundaries between clusters can be explored with greater confidence. Without any domain expertise, there is less confidence, fewer failure samples are labeled, and clusters remain unvalidated. 2.2 Identification of relevant parts of the sensor signal (Step 2) In step 2, a feature selection process is performed separately for each cluster in order to detect the relevant information which finally identifies the root cause. If, during previous discussions with domain experts, a root cause has already been pinned down to some particular part of the time series, step 2 need not be performed for this particular root cause. The feature selection process does not only reduce the dimension of the input time series, but also selects time intervals important for cluster membership. This reduces the amount of information and eases subsequent investigation and validation by domain experts, who aim to formulate a human-readable explanation like the one in the introduction. In particular, we apply multi-instance learning similar to [3], even though we use RNNs instead of Convolutional Neural Networks (CNNs). During multi-instance learning, several views on data samples are generated. Each view contains only a subset of the information of a data sample, for example, a particular time interval of a long time series. In an iterative process, multi-instance learning reveals the view which performs best in correctly associating samples to their clusters. Multi-instance learning is performed separately for each cluster. 122 <?page no="133"?> 4.1 Finding Root Causes in Complex Systems 3 Efficient model training Training many models may be prohibitive depending on the method used and the computing resources available. The procedure described in step 1.1 uses 18-fold cross validation, i.e. 18 models are trained during each iteration. During the root cause analysis for printing machines, we performed 20 iterations on 4 clusters, which led in total to the training of 18 x 20 x 4 = 1440 RNN models for step 1.1. Step 1.2 is computationally less demanding. During the root cause analysis for printing machines, 4 to 7 iterations have been performed per cluster. During step 2, on average 44 views were evaluated per cluster using more than fifty iterations until convergence. Thus, about 44 x 4 x 50 = 8800 RNN models were trained. We trained Echo State Networks (ESNs) [4] having 4,000 nodes each. ESNs can be trained quite fast by training only the weights on the connections to the outputs using linear regression. In contrast, LSTMs [6] or GRUs [5] train the weight matrices of the RNN and the connections between the input and the RNN as well. Using an ESN of size 4,000, the computation of the final RNN state for each of the 2,138 samples (i.e. time series of length 155) took about 15 minutes on a laptop computer (Intel i7, 32 GB RAM). Each state is a vector of length 4,000.
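A minimal sketch of such a reservoir pass is given below; the reservoir size, scaling factors and random initialization are illustrative choices and not the exact configuration used here (which follows [4] with 4,000 nodes).

import numpy as np

def esn_final_state(x, n_reservoir=500, spectral_radius=0.9, input_scale=0.5, seed=0):
    # Echo state network sketch (cf. [4]): drive a fixed random reservoir with a
    # multivariate time series x of shape (T, n_channels) and return the final
    # reservoir state, which is later used by a linear readout.
    rng = np.random.default_rng(seed)
    n_in = x.shape[1]
    w_in = rng.uniform(-input_scale, input_scale, size=(n_reservoir, n_in))
    w = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))   # rescale to the desired spectral radius
    state = np.zeros(n_reservoir)
    for u in x:                                   # one update per time step
        state = np.tanh(w_in @ u + w @ state)
    return state

Only the linear readout on top of these fixed reservoir states is trained, as described next.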
ESN training performs linear regression on these 2,138 states. This is done within 2 minutes. ESN states can be re-used. Note that during cross validation, different sets of training samples are used. Therefore, to train a model on a training set, the pre-computed states of these samples are collected, and then only a linear regression has to be performed to train the output weight matrix. When re-using pre-existing states, training 18 models in parallel (during cross validation) takes about 6 minutes. Using gradient descent, as with LSTM or GRU networks, states computed during previous training cannot be re-used, because the weight matrices which are used to compute the state are modified during training. This way, training takes much longer compared to ESNs, unless the network size is reduced by at least two orders of magnitude. Using ESNs, we are free to train many RNN models. Please note that the RNN models in this context are only used for testing whether or not a sample should be included into the training set of a cluster (step 1). In step 2, the performances of different views are compared relative to each other. After the determination of the clusters and their views, the highest possible precision can be achieved by training the final models using deep architectures with many levels [7]. 4 Conclusions Root cause analysis in highly complex systems is difficult, as it is an unsupervised learning problem applied to high-dimensional time series data. The analysis suffers from the curse of dimensionality and the non-availability of a sufficiently large number of samples (time series) needed to deal with the large number of features. 123 <?page no="134"?> 4.1 Finding Root Causes in Complex Systems Our approach presented in this paper uses machine learning to identify interesting samples and focus on relevant information, which can be presented to and discussed with domain experts. The identification of the initial clusters and their prototype samples (step 1.1) is the crucial step in our unsupervised clustering approach. Prototype selection is performed by an iterative cross validation procedure in order to minimize the influence of artifacts on the clustering process. The root cause analysis on printing machines has shown that input from domain experts is essential for distinguishing interesting events from artifacts, the assessment of clusters, deciding on the clustering depth, and dealing with various design decisions where certainty cannot be obtained from the data in a statistical way. Literature [1] Martin Atzmueller. Subgroup Discovery - Advanced Review. WIREs Data Mining & Knowledge Discovery 2015, 5: 35-49. [2] Trabold, D. & Grosskreutz, H. (2013), Parallel subgroup discovery on computing clusters - First results, in 'BigData', IEEE Computer Society, pp. 575-579. [3] Yan, Z., Zhan, Y., Peng, Z., Liao, S., Shinagawa, Y., Zhang, S., Metaxas, D.N. and Zhou, X.S., 2016. Multi-instance deep learning: Discover discriminative local anatomies for bodypart recognition. IEEE Transactions on Medical Imaging, 35(5), pp. 1332-1343. [4] Lukosevicius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3): 127-149, 2009. [5] Chung, Junyoung, et al. "Empirical evaluation of gated recurrent neural networks on sequence modeling." arXiv preprint arXiv:1412.3555 (2014). [6] Hochreiter, S. and J. Schmidhuber, "Long Short-Term Memory," Neural Computation, Vol. 9, No. 8, 1997, pp. 1735-1780. http://dx.doi.org/10.1162/neco.1997.9.8.1735 [7] Goodfellow, I., Bengio, Y.
and Courville, A., Deep Learning, MIT Press, 2016. http://www.deeplearningbook.org 124 <?page no="135"?> 4.2 A Probabilistic Approach for Synthesized Driving Cycles Michael Hegmann, Wolf Baumann, Felix Springer Abstract The Euro 6d-TEMP standard from September 2017 introduces Real Driving Emissions as an additional approval requirement. Portable emission monitoring systems (PEMS) are used to measure on-board emissions outside the test bench. This procedure adds a strong element of stochasticity to the final emission results. The chosen route, ambient conditions, road profile and the behavior of the driver strongly influence the outcome of the test. A single driving cycle or route may therefore not be enough to cover the large variety of possible valid speed traces and ambient conditions. IAV proposes an RDE validation process based on a statistical consideration of emission results. Based on a few measured trips only, a huge number of drivable, synthetic cycles is created. Those cycles, in combination with methods from machine learning, are used for assessing the vehicle's emissions performance. 1 Introduction The RDE test procedure adds a strong element of randomness to conventional repeatable test bench cycles. Even for the same route, a large spread in measured emissions may show up due to different driving styles, traffic and ambient conditions. A robust engine and exhaust aftertreatment calibration has to account for all of the different factors that affect the overall emission result. A single worst-case cycle or route may therefore not be enough to cover the large variety of possible routes and side-conditions (cf. [1]). In this paper, we propose an additional step in the RDE validation process that accounts for the element of randomness by relying on a statistical approach: the creation of synthetic cycles in combination with machine learning methods. On-road measurements, which are not necessarily valid RDE trips, are cut into small maneuvers. By clustering and reassembling these maneuvers we are able to create thousands of realistic, RDE-compliant cycles. Depending on engine type and aftertreatment system, the measured tailpipe emissions can be used to estimate the overall emission outcome of these synthetic cycles. For systems with long-term states, i.e. with system memory, the emissions are predicted from data-driven models. The following section describes the method used for cycle creation. In section 3, we give some results and finally conclude the paper with a short summary. 125 <?page no="136"?> 4.2 A Probabilistic Approach for Synthesized Driving Cycles 2 Methodological approach 2.1 Data acquisition In this section, we briefly explain the underlying principles of IAV's synthetic cycle creation method. Figure 1 illustrates the basic process chain. In a first step, we perform on-road measurements, which are the basis of the subsequent cycle generation. These trips do not necessarily have to be RDE compliant but should cover a broad range of routes, traffic situations and driving styles. Optionally, we can complement the on-road measurements with data from the roller test bench. This has the advantage that precisely planned sequences of driving maneuvers can be obtained.
Figure 1: IAV's process of RDE validation with synthetic cycles 126 <?page no="137"?> 4.2 A Probabilistic Approach for Synthesized Driving Cycles 2.2 Creation of the maneuver database The vehicle velocity and acceleration profiles of the on-road measurements are then analyzed and each trip is split into maneuvers with typical lengths of a few seconds up to a few minutes. Each maneuver is classified as idle, acceleration, constant or deceleration and is tagged by its initial and final velocity. Depending on the concrete task, we can also apply additional tags, like e.g. the gear. In a more abstract sense, each maneuver can be conceived as a transition between two states of the vehicle. The states are defined by the velocity of the vehicle and optional variables describing the current condition of the engine, like the gear or the exhaust aftertreatment system temperature. Together with additional information on ambient conditions, emission values and state variables of the engine and aftertreatment system, the velocity trace of each maneuver is stored in a maneuver database. 2.3 Cycle creation The process of cycle creation makes use of the maneuver database. Based on the assumption of an underlying Markov model, sequences of maneuvers are chosen randomly in order to reassemble many different driving cycles. From a statistical point of view, this process of creating a large number of driving cycles out of a limited number of measurements is very similar to bootstrapping methods [2,3]. Due to the Markov condition (memorylessness), a system's subsequent state depends on the current state only [4]. In other words, to create realistic velocity profiles, the individual probability of each maneuver to occur is taken into account. This information can be considered as prior knowledge and has been extracted from a huge number of measured RDE trips with different drivers and various vehicles. It therefore contains typical driving styles in the city, on rural roads and on the highway. Note that the prior knowledge strongly influences the outcome of the cycle creation process. I.e., if the typical driving style was derived from rather sporty drives, maneuvers with acceleration over a wide range of vehicle speeds become more likely. Consequently, the synthesized driving cycles show a similar behavior. However, for the use case described below, the standard maneuver probabilities have been chosen. 2.4 Computation of tailpipe emissions Within the maneuver database, all relevant signals of each maneuver are stored. So it is also possible to use tailpipe emission measurements from the PEMS for the synthesized sequence. In practice, this procedure could lead to discontinuous emission traces and, even worse, to discontinuous states of the considered system. Therefore, we need to make use of engineering knowledge to respect the peculiar characteristics of the system under test. For gasoline engines, the dominant system states are the boost pressure in case of turbocharged engines, the oxygen storage level and the temperature of the three-way catalytic converter. Fortunately, the time scale of the boost pressure state is small enough in relation to the length of a maneuver. It can therefore be ignored without introducing a significant amount of simulation error. In contrast, the oxygen storage level of the 127 <?page no="138"?> 4.2 A Probabilistic Approach for Synthesized Driving Cycles catalytic converter cannot be ignored, as it is influenced by the conditions over a certain amount of time.
The actual time depends on the concentration of the individual emission constituents and the mass flow, provided that the catalyst has a regular operating temperature. To deal with the dependency on prior history, the relevant maneuvers are concatenated to contain a sufficient history of samples to drive the oxygen storage level into a well-defined state. Further, the catalytic converter temperature for warm engine operation of a gasoline engine is assumed to be high enough for full conversion capability. That is, the conversion efficiency mainly depends on the oxygen storage level. For cold engine operation, the maneuver database is extended by cold start and warm-up experiments, which can directly be used for the simulation of tailpipe emissions. For Diesel engines, the relevant system states do play a crucial role for the computation of tailpipe emissions. Besides the boost pressure, which can be handled as with gasoline engines, the temperature of the emission aftertreatment system as well as the filling level of a lean NOx trap or the dosing amount of an SCR system need to be considered. For transient simulation, this also requires the simulation of the corresponding ECU functions for LNT regeneration or the SCR dosing strategy to gain a high simulation accuracy. Whereas in gasoline applications the prediction of tailpipe emissions is possible without further simulation models, for Diesel engines it is not. The validity of the approach clearly depends on the accuracy of the simulation environment. However, in model-based calibration processes, those simulation environments are often available and can be used along with the cycle generation method. 3 Use cases There are various applications of synthetic driving cycles in the context of vehicle development, namely
• visualization of emission statistics and estimation of emission result probabilities,
• judgement of existing driving cycles w.r.t. their criticality,
• computation of worst-case driving cycles.
In the early calibration stage, the calibration engineers usually request a worst-case cycle as a development target. In practice, the criticality of a driving cycle is judged with respect to the emission result. Therefore, the worst-case driving cycles are different for different emission constituents. Further note that a worst-case cycle always depends on the current calibration, i.e. it is impossible to define a driving cycle which constitutes the worst-case scenario over all calibration changes. Considering the aforementioned limitations, we recommend a two-step procedure for the creation of critical, RDE-compliant driving cycles. As a first step, all measurement data is analyzed by means of machine learning methods, like e.g. classification, in order to extract unusual events with high emission peaks in the data. Those events have to be evaluated in terms of usability for the generation of a critical driving cycle. The reason is that we need to avoid an overemphasis of rare maneuvers with high emission values, originating e.g. from OBD events. As a second step, the maneuver database is used in conjunction with an optimization algorithm in order to create a worst-case sequence of maneuvers. Due to the non-smooth nature of the optimization costs, it is necessary to make use of a gradient-free algorithm, like e.g. a genetic algorithm. This 128 <?page no="139"?> 4.2 A Probabilistic Approach for Synthesized Driving Cycles also allows for multi-objective optimization with respect to the different emission constituents.
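Both the random cycle creation of Section 2.3 and the worst-case search described above operate on the same maneuver database. A minimal illustration of the Markov-chain maneuver assembly is given below; the database layout, the state definition and the 90-minute target duration are assumptions made for this sketch only.

import random

def synthesize_cycle(maneuver_db, transition_prob, start_state="idle", target_duration=5400.0):
    # Sketch of the Markov-chain cycle assembly (Section 2.3): maneuver_db maps a
    # state (e.g. a velocity class) to recorded maneuvers, each carrying its velocity
    # trace, duration and end state; transition_prob encodes the prior knowledge on
    # typical driving styles as transition probabilities between states.
    state, elapsed, cycle = start_state, 0.0, []
    while elapsed < target_duration:
        maneuvers = maneuver_db[state]
        weights = [transition_prob[state].get(m["end_state"], 0.0) for m in maneuvers]
        if sum(weights) == 0.0:
            break                               # dead end: no usable maneuver from this state
        m = random.choices(maneuvers, weights=weights)[0]
        cycle.append(m["velocity_trace"])       # concatenating these traces yields the cycle
        elapsed += m["duration"]
        state = m["end_state"]
    return cycle

For the worst-case use case, the random selection is replaced by a genetic algorithm searching over maneuver sequences.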
Figure 2 shows a schematic of the worst-case cycle creation. Figure 2: Use of a genetic algorithm for the creation of worst-case driving cycles A practical application of the method discussed here was also given by Taindjis et al. [5]. They used synthetically created driving patterns as an input for a simulation environment and optimized a calibration with respect to statistical measures of emission distributions. 4 Limitations As with any data-driven method, the performance of synthetic driving cycles heavily depends on the data quality and richness. Whereas typical measures for data quality may be applied to the recorded on-road measurements, it is by far more difficult to judge the richness of the acquired data. The reason is that exact numbers of e.g. acceleration cannot be transferred from one vehicle to the next as long as they differ in engine power and vehicle mass. Following a rather pragmatic approach, we make sure that the measurement data contains drives from different conditions, like e.g.
• typical RDE drives in city, rural and highway conditions,
• explicit city drives,
• highway drives with high velocities,
• full load accelerations,
• highly dynamic driving (even outside of RDE conformity limits),
• uphill and downhill driving,
• cold start and warming-up.
Based on the measurement data, it is possible to evaluate the coverage of the speed/load domain and the velocity distribution. By means of a comparison to a dataset that includes more than 100 measured RDE drives obtained for various drivers and routes, a simple measure of data completeness can be calculated. Based on the underlying statistical model, it is also possible to forecast the expected dynamics of the synthesized cycles 129 <?page no="140"?> 4.2 A Probabilistic Approach for Synthesized Driving Cycles in terms of va_pos_95. This allows us to estimate the average criticality of the synthesized cycles with respect to the upper dynamic boundary without explicitly creating them. 5 Examples Within IAV, the process for driving cycle synthetization has been used in a variety of projects over the past years. Whereas in gasoline applications the direct estimation of tailpipe emission results is possible, for diesel engines simulation environments of different complexity have been used. Figure 3 shows the typical outcome of the method using the emissions of a passenger car with a gasoline engine as an example. For this demonstration, a total of 3·10^5 cycles has been created in approximately 6 hours on a standard desktop PC. Figure 3: Typical distribution of distance-specific emissions for 3·10^5 synthetic cycles. The bars mark the emissions of the database (orange) and of RDE-compliant (green) trips. It can be clearly seen that most of the synthetic trips (roughly 99%) fulfill the engineering target with respect to the chosen emission limits. In other words, if the ambient conditions and driving styles of the test drives are met, there is a small probability of roughly 1% that a valid RDE-compliant trip violates the engineering emission target. In addition to the synthetic cycles, the specific emissions of the trips entering the database have also been plotted. Obviously, the emission range covered by the RDE-compliant trips (green bars) is small compared to the range covered by the synthetic cycles. The database trips, on the other hand, are not restricted to RDE boundaries and exhibit a large spread in the emission domain.
130 <?page no="141"?> 4.2 A Probabilistic Approach for Synthesized Driving Cycles 6 Summary In this paper, the use of IAV's synthetic driving cycle method and its various application fields have been discussed. The presented method is based on reassembling driving cycles from recorded maneuvers which are collected under real driving conditions by using PEMS measurement systems or on-board emission sensors. Based on a huge database of test drives, the typical driving style from road measurements is captured and re-used during cycle creation. The approach therefore allows for a realistic estimation of emission results which match standard test conditions. Additionally, this method eliminates the need for an explicit consideration of driving styles and traffic conditions, as both aspects are inherently covered by the statistical concept. 7 References [1] F. Springer, M. Hegmann, M. Knaak, D. Reppel: Increasing RDE robustness using methods of statistical learning, in SIA Powertrain Conference, 2017 [2] B. Efron: Bootstrap Methods: Another Look at the Jackknife. In: The Annals of Statistics. 7, Nr. 1, 1979, pp. 1-26. [3] B. Efron, R.J. Tibshirani: An introduction to the bootstrap, New York: Chapman & Hall, 1993 [4] H. Risken, T. Frank: The Fokker-Planck Equation, Berlin Heidelberg: Springer-Verlag, 1999 [5] D. Taindjis, G. Dober, W. Baumann, N. Guerrassi: Engine transient calibration for real driving conditions: a holistic statistical approach, in SIA Powertrain Conference, 2018 131 <?page no="142"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN Peter Schichtel, Alireza Koochali, Sheraz Ahmed, Andreas Dengel Abstract Time series forecasting is one of the challenging problems for humankind. Traditional forecasting methods using mean regression models have severe shortcomings in reflecting real-world fluctuations. While new probabilistic methods rush to the rescue, they fight with technical difficulties like quantile crossing or selecting a prior distribution. To meld the different strengths of these fields while avoiding their weaknesses, as well as to push the boundary of the state-of-the-art, we introduced ForGAN [1]. Here we present the main results on the Lorenz [1], Mackey-Glass [2], and Internet Traffic [3] data sets. Note that these proceedings represent a summary of [1]. Kurzfassung Zeitreihenprognosen sind eines der herausragenden Probleme der Menschheit. Traditionelle Prognoseverfahren, die Mittelwertregressionsmodelle verwenden, haben gravierende Mängel bei der Abbildung der realen Schwankungen. Während neue probabilistische Methoden zur Rettung eilen, kämpfen diese jedoch mit technischen Schwierigkeiten wie Quantil-Crossing oder der Auswahl eines Priors. Um die unterschiedlichen Stärken dieser Felder unter Vermeidung ihrer Schwächen zu verschmelzen und die Grenzen des Standes der Technik zu erweitern, führten wir ForGAN [1] ein. Hier präsentieren wir die wichtigsten Ergebnisse auf den Lorenz [1], Mackey-Glass [2] und Internet Traffic [3] Datensätzen. Es gilt zu beachten, dass diese Proceedings eine Zusammenfassung von [1] sind. 1 Introduction The forecast of what might lie before us is one of the most intriguing challenges for humankind. It is no surprise that there is a huge and diverse community concerned with forecasting and decision making.
To name but a few, there are weather and climate prediction [3,4], flood risk assessment [5], seismic hazard prediction [6,7], predictions about the availability of (renewable) energy resources [8,9], economic and financial risk management [10,11], health care [12-14], predictive and preventative medicine [15] and many more. Since the forecast is the prediction of future values, the goal is to acquire \rho(x_{t+1} \mid x_0, \dots, x_t), the probability distribution of the near future given the recent past. In the following paragraphs, we briefly review mean regression forecast methods and then provide an overview of scientific endeavors on probabilistic forecasting. 132 <?page no="143"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN Note that in this paper, we call the history of events \{x_0, \dots, x_t\} the condition c and we use x_{t+1} as the notation for the value of the next step, i.e. the target value. 1.1 Mean Regression Forecast Mean regression forecasting is concerned with predicting the mean \mu of \rho(x_{t+1} \mid c) most accurately. There is a broad range of mean regression methods available in the literature, e.g. statistical methods (like ARMA and ARIMA [16] and their variants), machine learning based methods (like Support Vector Machines (SVM) [17-23], Evolutionary Algorithms (EA) [21-27] and Fuzzy Logic Systems (FLS) [26-32]), and Artificial Neural Network (ANN) based methods [33-36]. Figure 1 presents an example of the problem inherent in all mean regression based methods. It shows a cluster of time series with an identical, but noisy, time window c = \{x_0, \dots, x_9\} and a future value at t = 10 (to be found right of the blue dashed line) which can take two distinctive realizations: in 80% of the cases x_{10} yields one, while in 20% of the cases it yields zero. To demonstrate the problem, we train a simple neural network to forecast x_{10}. We present the result in Figure 2. It illustrates that the regression model fails to model the data. The best answer we can get from mean regression will converge to 0.8, the weighted average of all possible values for x_{10}. 1.2 Probabilistic Forecasting Probabilistic forecasting serves to quantify the variance in a prediction [37]. Different approaches have been proposed to undertake probabilistic forecasting in various fields [38-53]. Two of the most prominent approaches in these fields are conditional quantile regression and conditional expectile regression. Quantile regression is a statistical technique intended to estimate, and conduct inference about, conditional quantile functions [54]. To estimate the regression coefficients from training data, one uses the asymmetric piecewise linear scoring function, which is consistent for the \alpha-quantile [54,55]. Expectile regression works similarly, but it is based on the asymmetric piecewise quadratic scoring function [56-58]. Furthermore, one can use a collection of point forecasts for a specific quantity or event as an ensemble model for probabilistic forecasting. In this setup, we need some form of statistical post-processing [37]. State-of-the-art techniques for statistical post-processing include the non-homogeneous regression (NR) or ensemble model output statistics (EMOS) technique proposed by Gneiting et al. [38] and the ensemble Bayesian model averaging (BMA) approach developed by Raftery et al. [59]. For an in-depth review of probabilistic forecasting, please refer to [37]. Besides these methods, researchers employ Bayesian probability theory to provide approaches for probabilistic forecasting.
Bayesian probability theory offers mathematically grounded tools to reason about model uncertainty, but these usually come with a prohibitive computational cost [60]. The success of a Bayesian model heavily relies on selecting a prior distribution. Selecting a suitable prior distribution is a delicate task which requires insight into the data. For an in-depth review of Bayesian probabilistic machine learning, please refer to [61,62]. 133 <?page no="144"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN Figure 1: A cluster of time windows which are almost similar on every time step except for the value at the last step [1]. Figure 2: The probability distribution of the last step (in green color) alongside the distribution learned by a mean regression model (in orange color) [1]. 1.3 Generative Adversarial Networks The Generative Adversarial Network (GAN) [63] is a new type of neural network which enables us to learn an unknown probability distribution from samples of the distribution. Many interesting derivations, extensions, and applications have been proposed for GANs [64-68]. Unfortunately, despite their remarkable performance, evaluating and comparing GANs is notoriously hard. Thus, the application of GANs is limited to domains where the results are intuitively assessable, like image generation [64], music generation [69], voice generation [70], and text generation [71]. In these proceedings, we summarize the potential of ForGAN [1] to learn the full probability distribution of future values. More details about the method and the data can be found in [1]. 2 Methodology 2.1 Generative Adversarial Network (GAN) Generative Adversarial Networks [63] are a class of algorithms for modeling a probability distribution given a set of samples from the data probability distribution 134 <?page no="145"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN \rho_{data}. A GAN consists of two neural networks, namely the generator G and the discriminator D. These components are trained simultaneously in an adversarial process. First, a noise vector z is sampled from a known probability distribution \rho_z(z) (normally a Gaussian distribution). G takes the noise vector z as an input and trains to generate a sample whose distribution follows \rho_{data}. On the other hand, D is optimized to distinguish between generated data and real data. In other words, D and G play the following two-player minimax game with value function V(G,D):

\min_G \max_D V(D,G) = \mathbb{E}_{x \sim \rho_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim \rho_z(z)}\big[\log\big(1 - D(G(z))\big)\big]. (1)

While training the GAN, the generator G learns to transform the known probability distribution \rho_z into the generator's distribution \rho_g, which resembles \rho_{data}. 2.2 Conditional GAN (CGAN) The Conditional GAN (CGAN) [68] is an extension of the GAN which enables us to condition the model on some extra information y. The new value function V(G,D) for this setting is:

\min_G \max_D V(D,G) = \mathbb{E}_{x \sim \rho_{data}(x)}\big[\log D(x \mid y)\big] + \mathbb{E}_{z \sim \rho_z(z)}\big[\log\big(1 - D(G(z \mid y))\big)\big]. (2)

2.3 Probabilistic Forecasting with CGAN (ForGAN) Figure 3: Overview of the ForGAN setup. The condition c is handed to generator G and discriminator D. In this paper we aim to model the probability distribution of the one-step-ahead value x_{t+1} given the historical data c = \{x_0, \dots, x_t\}, i.e. \rho(x_{t+1} \mid c). We employ a CGAN to model \rho(x_{t+1} \mid c). Figure 3 presents an overview of ForGAN. Therefore, the value function is:

\min_G \max_D V(D,G) = \mathbb{E}_{x_{t+1} \sim \rho_{data}(x_{t+1})}\big[\log D(x_{t+1} \mid c)\big] + \mathbb{E}_{z \sim \rho_z(z)}\big[\log\big(1 - D(G(z \mid c))\big)\big]. (3)
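A minimal Keras sketch of such a conditional setup is given below; the layer types, sizes and the scalar one-step-ahead output are illustrative assumptions and do not reproduce the architecture or hyperparameters reported in [1].

from tensorflow.keras import layers, Model

def build_forgan(window_len, noise_dim=32, hidden=64):
    # Conditional GAN sketch for one-step-ahead forecasting (cf. Eq. 3): the
    # condition window c is fed to both the generator and the discriminator.
    z = layers.Input(shape=(noise_dim,))
    c = layers.Input(shape=(window_len, 1))
    h = layers.GRU(hidden)(c)                                  # encode the condition window
    g = layers.Dense(hidden, activation="relu")(layers.Concatenate()([z, h]))
    x_next = layers.Dense(1)(g)                                # forecast sample for x_{t+1}
    generator = Model([z, c], x_next, name="generator")

    x_in = layers.Input(shape=(1,))
    c_in = layers.Input(shape=(window_len, 1))
    h_d = layers.GRU(hidden)(c_in)
    d = layers.Dense(hidden, activation="relu")(layers.Concatenate()([x_in, h_d]))
    validity = layers.Dense(1, activation="sigmoid")(d)        # probability of being real
    discriminator = Model([x_in, c_in], validity, name="discriminator")
    return generator, discriminator

Training alternates between the two networks with the binary cross-entropy objectives implied by Eq. 3: the discriminator is shown real pairs (x_{t+1}, c) and generated pairs (G(z|c), c), while the generator is updated so that its samples pass as real.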
By training this model, the optimal generator models the full probability distribution of $x_{t+1}$ for a given condition window. With the full probability distribution in hand, we can extract information regarding any possible outcome and the probability of its occurrence by sampling.

2.4 G-Regression Model
Normally, forecasting models are trained by optimizing a point-wise error metric as loss function; here, however, we employ adversarial training to train a neural network for forecasting. To study the effectiveness of adversarial training in comparison to conventional training, we construct the G-regression model, a model with a structure identical to generator G. However, we train this model with RMSE as loss function.

3 Experiments
To investigate the performance of ForGAN, we test our method in three experiments and, where applicable, compare the results with state-of-the-art methods. Let us first introduce the datasets used.

3.1 Datasets

3.1.1 Lorenz Dataset
The Lorenz dataset is constructed from the chaotic phase of the Lorenz system [1]. In Figure 4 we present the different condition clusters and the resulting probability distribution $\rho(x_{t+1}|c)$. The dataset can be found at: https://cloud.dfki.de/owncloud/index.php/s/KGJm5iNKrCnAwEg

Figure 4: Left -- different condition windows of the Lorenz data. Right -- $\rho(x_{t+1}|c)$ (color corresponding to condition). From [1].

3.1.2 Mackey-Glass Dataset
The time delay differential equation suggested by Mackey and Glass [2] has been widely used as a standard benchmark model to generate chaotic time series for the forecasting task:

$x'(t) = \frac{b \, x(t-\tau)}{1 + x(t-\tau)^{10}} - a \, x(t)$ . (4)

To make our results comparable with the state-of-the-art [72], we set $a = 0.1$, $b = 0.2$ and $\tau = 17$. We generate a dataset of length 20000 using Eq. (4) for our second experiment.

3.1.3 Internet Traffic Dataset
For our last experiment, we apply our method to a real-world problem, forecasting internet traffic. We use a dataset which belongs to a private ISP with centers in eleven European cities (commonly known as A5M) [3]. It contains data corresponding to a transatlantic link and was collected in 2005, from 06:57 on 7 June to 11:17 on 29 July.

3.2 Evaluation Metrics
Commonly, point-wise error metrics are used in forecasting tasks. To be able to compare to the state-of-the-art, we report RMSE, MAE and MAPE, which are defined as

$RMSE = \sqrt{\frac{1}{N}\sum_{t}(x_t - \hat{x}_t)^2}$ , $MAE = \frac{1}{N}\sum_{t}|x_t - \hat{x}_t|$ , $MAPE = \frac{100}{N}\sum_{t}\left|\frac{x_t - \hat{x}_t}{x_t}\right| \, \%$ . (5)

Here N is the number of data samples, $x_t$ are the actual values and $\hat{x}_t$ are the predictions. However, point-wise error metrics are not suitable for assessing the similarity of distributions. Since ForGAN models the full probability distribution of $x_{t+1}$, we are interested in measuring how accurately we have managed to reproduce the data distribution. Therefore, we select the Kullback-Leibler divergence (KLD) [73] to report the performance of our method. KLD measures the divergence between two probability distributions P and Q. Since we have finite data samples and ForGAN by nature produces samples, we select the discrete version of KLD, which is defined as:

$KLD(P \,\|\, Q) = \sum_{i} P(i) \log \frac{P(i)}{Q(i)}$ . (6)

Note that P denotes the data distribution and Q the prediction probability distribution.
Hence, due to the appearance of Q in the denominator, the KLD is not defined if the predicted distribution does not cover the data distribution correctly. To determine the optimal number of bins for the histogram of the distribution, we follow the method suggested in [99], which aims for the optimum between shape information and noise. The KLD value shows how accurately our method has learned the data distribution, and the point-wise error metrics specify how well it has taken the condition into account to forecast $x_{t+1}$. Furthermore, we have the possibility to compare ForGAN with other methods based on various criteria.

3.3 Setup
For each experiment, the dataset is divided into three subsets: 50% of the dataset is used as the train set, 10% as the validation set and 40% as the test set. We implement ForGAN using TensorFlow [74] and run it on a DGX-1 machine.

4 Results and Discussion
The numerical results achieved by ForGAN are summarized in Table 1 alongside the state-of-the-art results on the Mackey-Glass dataset [72] and the Internet traffic dataset (A5M) [3]. Furthermore, we report the results obtained from G-regression.

Table 1: The results achieved by ForGAN alongside the results from the G-regression model and the state-of-the-art on the Mackey-Glass dataset [72] and the Internet traffic dataset [3]. The numbers in parentheses indicate one standard deviation of the results.

Dataset                | Metric | state-of-the-art | G-Regression    | ForGAN
Lorenz                 | RMSE   | -                | 2.91            | 4.06(1)
                       | MAE    | -                | 2.39            | 2.94(1)
                       | MAPE   | -                | 2.25%           | 3.35(24)%
                       | KLD    | -                | NaN             | 1.67 × 10^-2
Mackey-Glass           | RMSE   | 4.38 × 10^-4     | 5.63 × 10^-4    | 3.82(2) × 10^-4
                       | MAE    | -                | 4.92 × 10^-4    | 2.93(1) × 10^-4
                       | MAPE   | -                | 6.29 × 10^-2 %  | 3.46(2) × 10^-2 %
                       | KLD    | -                | 8.00 × 10^-3    | 3.18 × 10^-3
Internet Traffic (A5M) | RMSE   | -                | 1.27 × 10^8     | 1.31(0) × 10^8
                       | MAE    | -                | 9.01 × 10^7     | 9.29(3) × 10^7
                       | MAPE   | 2.91%            | 2.85%           | 2.94(1)%
                       | KLD    | -                | 5.31 × 10^-1    | 2.84(1) × 10^-1

In the Lorenz experiment, the G-regression method performs better than ForGAN based on the RMSE, MAE and MAPE values. However, we can perceive from Figure 5 how misleading these metrics can be. Figure 5 presents the probability distribution learned by ForGAN alongside the histogram of the G-regression predictions and the data distribution for each cluster of the test set as well as for the entire test set.

Figure 5: The prediction of $x_{t+1}$ produced by ForGAN (blue) and G-regression (green) together with the ground truth (red) for the different condition clusters as well as the entire test set [1].

These plots indicate that ForGAN learns the probability distribution of the dataset precisely with respect to the corresponding cluster. In contrast, the G-regression predictions are completely inaccurate. The G-regression method has converged to the mean value of the $x_{t+1}$ distribution for each cluster and, as a result, the predictions do not represent the ground truth at all. Since the histogram of the G-regression predictions does not cover the range of ground truth values, it is not possible to calculate the KLD for the G-regression method in this experiment. Furthermore, we expect ForGAN to forecast all possible outcomes for a given time window. To investigate the validity of this assumption, we select two random time windows from the test set and forecast $x_{t+1}$ 100 times using ForGAN. Figure 6 portrays the distribution of the sampled $x_{t+1}$ alongside the probability distribution of their cluster. We can perceive from this figure that ForGAN can model the full probability distribution of $x_{t+1}$ for a given time window condition accurately.
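The sampling-based evaluation described above can be sketched as follows. The helper assumes a conditional generator with the two-input interface of the earlier sketch; the bin count, sample size and noise dimension are illustrative choices and do not reproduce the binning rule referenced in the text.

```python
import numpy as np

def empirical_kld(real_samples, generated_samples, bins=20):
    """Discrete KLD between histograms of real and generated samples (Eq. 6).

    Returns NaN when the generated histogram leaves a populated real bin
    empty, mirroring the undefined case discussed above.
    """
    lo = min(real_samples.min(), generated_samples.min())
    hi = max(real_samples.max(), generated_samples.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(real_samples, bins=edges)
    q, _ = np.histogram(generated_samples, bins=edges)
    p, q = p / p.sum(), q / q.sum()
    mask = p > 0
    if np.any(q[mask] == 0):
        return float("nan")
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def sample_forecasts(generator, condition, n_samples=100, noise_dim=8):
    """Draw n_samples one-step-ahead forecasts for a single condition window."""
    c = np.repeat(condition[None, :], n_samples, axis=0)
    z = np.random.normal(size=(n_samples, noise_dim))
    return np.asarray(generator.predict([c, z])).ravel()
```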
139 <?page no="150"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN Figure 6: The probability distribution of 𝑥 learned by ForGAN for two randomly selected time windows c and the data distribution of the time window cluster they origin from on Lorenz dataset [1]. In the Mackey-Glass experiment, ForGAN outperforms both state-of-the-art [72] and G-regression model based on point-wise error metrics as well as KLD. The Gregression model has the same structure as ForGAN and it is optimized directly on RMSE, yet ForGAN performs significantly better than G-regression. We find this observation to be the evidence for the effectiveness of adversarial training for forecasting in comparison to standard training methods. Finally, in our last experiment on Internet traffic dataset, G-regression method outperforms state-of-the-art and ForGAN based on the MAPE value. On the other hand, ForGAN performs almost two times better than G-regression method based on KLD. Furthermore, the KLD for the state-of-the-art method is not available. Due to inconsistency between point-wise error metrics and divergence measure, selecting the best method with certainty is not possible. 5 Conclusion We present results obtained with ForGAN, a neural network for one step ahead probabilistic forecasting. Our method is trained using adversarial training to learn the conditional probability distribution of future values. We test our method with three experiments. In the first experiment, ForGAN demonstrates its high capability of learning probability distributions while taking the input time window into account. In the next two experiments, ForGAN demonstrates impressive performance on two public datasets, showing the effectiveness of adversarial training for forecasting tasks. We compare ForGAN to G-regression, where the generator architecture is kept, but RMSE loss is optimized. We demonstrate that while G-regression performs better than ForGAN based on some point-wise error metrics, it does not accurately model the real data distribution and ForGAN outperforms G-regression considering distribution divergence measure. Our experiments show that point-wise error metrics are not a precise indicator for the performance of forecasting methods. 140 <?page no="151"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN Our experiments reveal that in the presence of strong noise, the effectiveness of For- GAN is more prominent as we illustrate in Lorenz experiments. The performance of mean regression methods is close to ForGAN when the noise is weak. Since ForGAN can model data distributions with any level of noise, it is more reliable and a robust choice for forecasting in comparison to mean regression methods. Literatur [1] Koochali A.; Schichtel P.; Ahmed S.; Dengel A. Probabilistic Forecasting of Sensory Data with Generative Adversarial Networks - ForGAN. arXiv: 1903.12549v1 [cs.LG]. [2] Mackey, M.C.; Glass, L. Oscillation and chaos in physiological control systems. Science 1977, 197, 287-289. [3] Cortez, P.; Rio, M.; Rocha, M.; Sousa, P. Multi-scale Internet traffic forecasting using neural networks and time series methods. Expert Systems 2012, 29, 143- 155. [4] Racah, E.; Beckham, C.; Maharaj, T.; Kahou, S.E.; Prabhat, M.; Pal, C. ExtremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events. Advances in Neural Information Processing Systems, 2017, pp. 3402-3413. 
[5] Rodrigues, E.R.; Oliveira, I.; Cunha, R.; Netto, M. DeepDownscale: a deep learning trategy for high-resolution weather forecast. 2018 IEEE 14th International Conference on e-Science (e-Science). IEEE, 2018, pp. 415-422. [6] Wiesel, A.; Hassidim, A.; Elidan, G.; Shalev, G.; Schlesinger, M.; Zlydenko, O.; El-Yaniv, R.; Nevo, S.; Matias, Y.; Gigi, Y.; others. Ml for flood forecasting at scale 2018. [7] Mousavi, S.M.; Zhu, W.; Sheng, Y.; Beroza, G.C. CRED: A Deep Residual Network of Convolutional and Recurrent Units for Earthquake Signal Detection. CoRR 2018, abs/ 1810.01965, [1810.01965]. [8] Ross, Z.E.; Yue, Y.; Meier, M.; Hauksson, E.; Heaton, T.H. PhaseLink: A Deep Learning Approach to Seismic Phase Association. CoRR 2018, abs/ 1809.02880, [1809.02880]. [9] Gensler, A.; Sick, B. A Multi-Scheme Ensemble Using Coopetitive Soft-Gating With Application to Power Forecasting for Renewable Energy Generation. arXiv preprint arXiv: 1803.06344 2018. Version March 21, 2019 submitted to Sensors 15 of 18 [10] Chen, Y.; Wang, Y.; Kirschen, D.S.; Zhang, B. Model-Free Renewable Scenario Generation Using Generative Adversarial Networks. CoRR 2017, abs/ 1707.09676, [1707.09676]. [11] Huang, W.R.; Perez, M.A. Accurate, Data-Efficient Learning from Noisy, Choice- Based Labels for Inherent Risk Scoring. CoRR 2018, abs/ 1811.10791, [1811.10791]. 141 <?page no="152"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN [12] Zhang, Q.; Luo, R.; Yang, Y.; Liu, Y. Benchmarking Deep Sequential Models on Volatility Predictions for Financial Time Series. CoRR 2018, abs/ 1811.03711 , [1811.03711]. [13] Avati, A.; Pfohl, S.; Lin, C.; Nguyen, T.; Zhang, M.; Hwang, P.; Wetstone, J.; Jung, K.; Ng, A.Y.; Shah, N.H. Predicting Inpatient Discharge Prioritization With Electronic Health Records. CoRR 2018, abs/ 1812.00371, [1812.00371]. [14] Janssoone, T.; Bic, C.; Kanoun, D.; Hornus, P.; Rinder, P. Machine Learning on Electronic Health Records: Models and Features Usages to predict Medication Non-Adherence. CoRR 2018, abs/ 1811.12234, [1811.12234]. [15] Avati, A.; Jung, K.; Harman, S.; Downing, L.; Ng, A.Y.; Shah, N.H. Improving Palliative Care with Deep Learning. CoRR 2017, abs/ 1711.06402, [1711.06402]. [16] Box, G.E.P.; Jenkins, G. Time Series Analysis, Forecasting and Control; Holden- Day, Inc.: San Francisco, CA, USA, 1990. [17] Yan, X.; Chowdhury, N.A. A comparison between SVM and LSSVM in mid-term electricity market clearing price forecasting. Electrical and Computer Engineering (CCECE), 2013 26th Annual IEEE Canadian Conference on. IEEE, 2013, pp. 1- 4. [18] Yan, X.; Chowdhury, N.A. Mid-term electricity market clearing price forecasting using multiple least squares support vector machines. IET Generation, Transmission & Distribution 2014, 8, 1572-1582. [19] Rubio, G.; Pomares, H.; Rojas, I.; Herrera, L.J. A heuristic method for parameter selection in LS-SVM: Application to time series prediction. International Journal of Forecasting 2011, 27, 725-739. [20] Vapnik, V. Statistical learning theory. 1998 ; Vol. 3, Wiley, New York, 1998. [21] Frohlich, H.; Chapelle, O.; Scholkopf, B. Feature selection for support vector machines by means of genetic algorithm. Tools with artificial intelligence, 2003. proceedings. 15th ieee international conference on. IEEE, 2003, pp. 142-148. [22] Huang, C.L.; Wang, C.J. A GA-based feature selection and parameters optimizationfor support vector machines. Expert Systems with applications 2006, 31, 31-240. [23] Cortez, P.; Rocha, M.; Neves, J. 
Genetic and evolutionary algorithms for time series forecasting. International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems. Springer, 2001, pp. 393-402. [24] Gan, M.; Peng, H.; Dong, X.p. A hybrid algorithm to optimize RBF network architecture and parameters for nonlinear time series prediction. Applied Mathematical Modelling 2012, 36, 2911-2919. [25] Kim, K.j.; Han, I. Genetic algorithms approach to feature discretization in artificial neural networks for the prediction of stock price index. Expert systems with Applications 2000, 19, 125-132. [26] Cai, Q.; Zhang, D.; Wu, B.; Leung, S.C. A novel stock forecasting model based on fuzzy time series and genetic algorithm. Procedia Computer Science 2013, 18, 1155-1162. [27] Bas, E.; Uslu, V.R.; Yolcu, U.; Egrioglu, E. A modified genetic algorithm for forecasting fuzzy time series. Applied intelligence 2014, 41, 453-463. 142 <?page no="153"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN [28] Chu, T.C.; Tsao, C.T.; Shiue, Y.R. Application of fuzzy multiple attribute decision making on company analysis for stock selection. Fuzzy Systems Symposium, 1996. Soft Computing in Intelligent Systems and Information Processing., Proceedings of the 1996 Asian. IEEE, 1996, pp. 509-514. [29] Song, Q.; Leland, R.P.; Chissom, B.S. A new fuzzy time-series model of fuzzy number observations. Fuzzy Sets and Systems 1995, 73, 341-348. Version March 21, 2019 submitted to Sensors 16 of 18 [30] Egrioglu, E.; Aladag, C.H.; Yolcu, U. Fuzzy time series forecasting with a novel hybrid approach combining fuzzy c-means and neural networks. Expert Systems with Applications 2013, 40, 854-857. [31] Shah, M. Fuzzy based trend mapping and forecasting for time series data. Expert Systems with Applications 2012, 39, 6351-6358. [32] Aladag, C.H.; Yolcu, U.; Egrioglu, E.; Dalar, A.Z. A new time invariant fuzzy time series forecasting method based on particle swarm optimization. Applied Soft Computing 2012, 12, 3291-3299. [33] Assaad, M.; Boné, R.; Cardot, H. A new boosting algorithm for improved timeseries forecasting with recurrent neural networks. Information Fusion 2008, 9, 41-55. [34] Zitnik, M.; Nguyen, F.; Wang, B.; Leskovec, J.; Goldenberg, A.; Hoffman, M.M. Machine learning for integrating data in biology and medicine: Principles, practice, and opportunities. Information Fusion 2019, 50, 71-91. [35] Ogunmolu, O.; Gu, X.; Jiang, S.; Gans, N. Nonlinear systems identification using deep dynamic neural networks. arXiv preprint arXiv: 1610.01439 2016. [36] Dorffner, G. Neural networks for time series processing. Neural network world. Citeseer, 1996. [37] Malhotra, P.; Vig, L.; Shroff, G.; Agarwal, P. Long short term memory networks for anomaly detection in time series. Proceedings. Presses universitaires de Louvain, 2015, p. 89. [38] Collins, M. Ensembles and probabilities: a new era in the prediction of climate change, 2007. [39] Gneiting, T.; Raftery, A.E. Weather forecasting with ensemble methods. Science 2005, 310, 248-249. [40] Palmer, T.N. The economic value of ensemble forecasts as a tool for risk assessment: From days to decades. Quarterly Journal of the Royal Meteorological Society 2002, 128, 747-774. [41] Palmer, T. Towards the probabilistic Earth-system simulator: a vision for the future of climate and weather prediction. Quarterly Journal of the Royal Meteorological Society 2012, 138, 841-861. [42] Cloke, H.; Pappenberger, F. Ensemble flood forecasting: A review. 
Journal of Hydrology 2009, 375, 613-626. [43] Krzysztofowicz, R. The case for probabilistic forecasting in hydrology. Journal of hydrology 2001, 249, 2-9. 143 <?page no="154"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN [44] Jordan, T.H.; Chen, Y.T.; Gasparini, P.; Madariaga, R.; Main, I.; Marzocchi, W.; Papadopoulos, G.; Sobolev, G.; Yamaoka, K.; Zschau, J. Operational earthquake forecasting. State of knowledge and guidelines for utilization. Annals of Geophysics 2011, 54. [45] Pinson, P.; others. Wind energy: Forecasting challenges for its operational management. Statistical Science 2013, 28, 564-585. [46] Zhu, X.; Genton, M.G. Short-term wind speed forecasting for power system operations. International Statistical Review 2012, 80, 2-23. [47] Groen, J.J.; Paap, R.; Ravazzolo, F. Real-time inflation forecasting in a changing world. Journal of Business & Economic Statistics 2013, 31, 29-44. [48] Timmermann, A. Density forecasting in economics and finance. Journal of Forecasting 2000, 19, 231-234. [49] Montgomery, J.M.; Hollenbach, F.M.; Ward, M.D. Ensemble predictions of the 2012 US presidential election. PS: Political Science & Politics 2012, 45, 651-654. [50] Alkema, L.; Raftery, A.E.; Clark, S.J.; others. Probabilistic projections of HIV prevalence using Bayesian melding. The Annals of Applied Statistics 2007, 1, 229- 248. [51] Raftery, A.E.; Li, N.; Ševˇcíková, H.; Gerland, P.; Heilig, G.K. Bayesian probabilistic population projections for all countries. Proceedings of the National Academy of Sciences 2012. [52] Jones, H.E.; Spiegelhalter, D.J. Improved probabilistic prediction of healthcare performance indicators using bidirectional smoothing models. Journal of the Royal Statistical Society: Series A (Statistics in Society) 2012, 175, 729-747. [53] Hood, L.; Heath, J.R.; Phelps, M.E.; Lin, B. Systems biology and new technologies enable predictive and preventative medicine. Science 2004, 306, 640-643. [54] Koenker, R. Quantile regression. 2005. [55] Koenker, R.; Bassett Jr, G. Regression quantiles. Econometrica: journal of the Econometric Society 1978, pp. 33-50. [56] Efron, B. Regression percentiles using asymmetric squared error loss. Statistica Sinica 1991, pp. 93-125. [57] Newey, W.K.; Powell, J.L. Asymmetric least squares estimation and testing. Econometrica: Journal of the Econometric Society 1987, pp. 819-847. [58] Dette, H.; Volgushev, S. Non-crossing non-parametric estimates of quantile curves. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 2008, 70, 609-627. Version March 21, 2019 submitted to Sensors 17 of 18. [59] Schnabel, S.K.; Eilers, P.H. Simultaneous estimation of quantile curves using quantile sheets. AStA Advances in Statistical Analysis 2013, 97, 77-87. [60] Damianou, A.; Lawrence, N. Deep gaussian processes. Artificial Intelligence and Statistics, 2013, pp. 207-215. [61] Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature 2015, 521, 452. [62] Gal, Y. Uncertainty in deep learning. University of Cambridge 2016. 144 <?page no="155"?> 4.3 Probabilistic Forecasting with Generative Adversarial Networks - ForGAN [63] Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Advances in neural information processing systems, 2014, pp. 2672-2680. [64] Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. 
CoRR 2015, abs/ 1511.06434, [1511.06434]. [65] Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein gan. arXiv preprint arXiv: 1701.07875 2017. [66] Donahue, J.; Krähenbühl, P.; Darrell, T. Adversarial Feature Learning. CoRR 2016, abs/ 1605.09782, [1605.09782]. [67] Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; Abbeel, P. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. CoRR 2016, abs/ 1606.03657, [1606.03657]. [68] Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv preprint arXiv: 1411.1784 2014. [69] Mogren, O. C-RNN-GAN: Continuous recurrent neural networks with adversarial training. arXiv preprint arXiv: 1611.09904 2016. [70] Gao, Y.; Singh, R.; Raj, B. Voice Impersonation using Generative Adversarial Networks. arXiv preprint arXiv: 1802.06840 2018. [71] Yu, L.; Zhang, W.; Wang, J.; Yu, Y. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. AAAI, 2017, pp. 2852-2858. [72] Kellert, S.H. In the wake of chaos: Unpredictable order in dynamical systems; University of Chicago press, 1993. [73] Méndez, E.; Lugo, O.; Melin, P., A Competitive Modular Neural Network for Long- Term Time Series Forecasting. In Nature-Inspired Design of Hybrid Intelligent Systems; Melin, P.; Castillo, O.; Kacprzyk, J., Eds.; Springer International Publishing: Cham, 2017; pp. 243-254. [74] Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Goodfellow, I.; Harp, A.; Irving, G.; Isard, M.; Jia, Y.; Jozefowicz, R.; Kaiser, L.; Kudlur, M.; Levenberg, J.; Mané, D.; Monga, R.; Moore, S.; Murray, D.; Olah, C.; Schuster, M.; Shlens, J.; Steiner, B.; Sutskever, I.; Talwar, K.; Tucker, P.; Vanhoucke, V.; Vasudevan, V.; Viégas, F.; Vinyals, O.; Warden, P.; Wattenberg, M.; Wicke, M.; Yu, Y.; Zheng, X. Tensor- Flow: Large-Scale Machine Learning on Heterogeneous Systems, 2015. Software available from tensorflow.com 145 <?page no="156"?> 5 RDE 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration Sung-Yong Lee, Jakob Andert, Imre Pörgye, Daechul Jeong, Marius Böhmer, Andreas Kampmeier, Sebastian Jambor, Matthias Kötter, Markus Netterscheid, Markus Ehrly Abstract This article discusses an advanced powertrain calibration approach based on the Xin-the-Loop (XiL) simulation approach and its wide applicability towards virtual Real Driving Emissions (RDE) calibration and validation. The fulfilment of the challenging RDE standards has significantly increased the necessity of seamless system validation at early development stages, especially regarding pollutant emissions and fuel economy. This has led to a strongly increasing need to utilize XiL simulation methods within calibration and validation tasks. Thereto, this paper focuses on enhancement of the simulation approach to predict engine system performance with respect to in-use compliance and to optimize Engine Control Unit (ECU) calibration by minimizing the effort for hardware testing. This holistic simulation approach is a combination of the statistical Model-in-the-Loop (MiL) simulation method used for validation of in-field system performance and the Hardware-in-the-Loop (HiL) based virtual calibration method using a fully virtualized vehicle and powertrain in combination with a real hardware ECU. 
The applied method demonstrates the adequate interactive process merging various model-based methodologies and its professional roll-out to the conventional vehicle calibration process. In this study, practicably proven powertrain and environment modeling approaches are introduced as the fundamentals of the integrated XiL based digitalization. Furthermore, the RDE cycles for XiL simulations are generated with a combination of market-specific environmental statistics (altitude level, ambient temperature level) and driver dynamics. This enables a seamless consideration of critical RDE scenarios for the validation and optimization of ECU calibration. It is demonstrated that the HiL virtual calibration test bed predicts the engine performance already at the early calibration phase. This is used for system characterization of in-use performance validation. Critical and worst-case RDE scenarios are identified in the virtual environment and validated for emission compliance with the target hardware ECU. The proposed approach realizes lean development processes by reducing expensive hardware tests, and it increases overall quality of RDE compliance. 146 <?page no="157"?> 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration 1 Introduction Implementation of the Worldwide harmonized Light vehicles Test Procedure (WLTP) including Worldwide harmonized Light-Duty Vehicle Test Cycle (WLTC) and the Real Driving Emissions (RDE), as requirements into the EU-legislation, take place to close the gap between the on road vehicle and laboratory testing. In real-driving operation, the emissions and fuel consumption differ significantly from the values obtained in the official emission test cycles on a chassis dyno [1]. In addition, the system requirements have been significantly increased with the growing complexity of modern Diesel engines. Consequently, accurate sensing, stable control and robust monitoring becomes more and more essential for the Engine Control Unit (ECU) software development. However, this has exponentially increased the development and calibration effort [2]. To address these challenges which stem from these new boundary conditions and regulations, the use of model-based calibration methods have continuously been intensified. The system-level based simulative assessments have enabled the reduction of the cost-intensive experimental hardware based testing [2, 3]. This paper introduces an X-in-the-Loop (XiL) based multi-level system simulation process for the RDE robustness validation with respect to prediction of the engine out and tailpipe emissions in the critical real-driving cycles. These cycles are obtained from a large-scaled databased statistical method using accelerated simulation approach and an advanced RDE cycle generator based on the simulated driver characteristics and parametric variation for the road profiles. The focus of the conducted research is to evaluate the adequate interactive process merging various model-based methodologies into one harmonized work-flow and to discuss its prospective roll-out for the conventional vehicle calibration process. 2 Process Overview 2.1 Challenges for Vehicle Calibration and Validation Extensive effort is spent on the development of Diesel engines and vehicle powertrain combinations for various markets. 
Decreasing emission limits, with an increasing variability of operating conditions in the emission certification, results in a complex and labor-intensive calibration and validation. Adding to the challenges faced for the frontloading approach of RDE calibration and validation tasks, various boundary conditions have to be considered. Figure 1 summarizes the challenges in the series-level vehicle calibration process. For passenger cars, compliance with the EU6d standards is also needed for the urban part of the driven RDE cycle including the cold start phase. The low temperature operation is a challenge for the technology and the emissions calibration. The uncertainties of the RDE cycle definition for the critical and worst-cases are the main driver for increased necessity of the seamless validation. Additionally, the availability of a sufficient number of prototype vehicles, Portable Emissions Measurement System (PEMS) units and resources for conducting enough RDE tests is limited. It becomes more challenging to make a representative conclusion for the emission status in early calibration phase. Finally, the listed upcoming RDE challenges, as shown in Figure 1, represent 147 <?page no="158"?> 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration a high risk for a possible type-approval delay due to unexpected software-updates, environmental testing, etc. just before Start of Production (SOP). 2.2 Targets & Requirements In order to have time and cost effective solutions for tackling the above-mentioned challenges, as well as efficiently calibrating the emissions, the Hardware-in-the-Loop (HiL) based virtual calibration approach should be an integral part of the vehicle calibration process. One of the barriers facing HiL is the high one-time cost for system setup which has not previously been included in the conventional calibration projects, as depicted in Figure 2. However, a tailor-made virtual test bed including physical models and real ECU hardware can achieve an optimal trade-off between the quality targets to be achieved, the required development time and the applied resources. The given research presents the approach to enhance the HiL based calibration and validation capability for the RDE-specific challenges using a combined XiL based multilevel simulation approach. Finally, the proposed simulation process targets three development goals, as illustrated in Figure 2;  Reduction of hardware testing: Frontloading through accurate and fast virtualization of a target powertrain enables a substantial reduction of the required test loops on prototype vehicles.  Frontloading of RDE critical cases: Earlier detection of potential technical issues reduces the high development cost that could be caused during the later development phase. Figure 1: Challenges in series-level vehicle calibration 148 <?page no="159"?> 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration Cost and effort reduction of the one-time set-up and combined XiL approach can be achieved by more efficient selection of powertrain modeling techniques, which enable fast and accurate model parametrization effort for the system-level simulation. Under these circumstances, a combined simulation process has been developed as a valid methodology for increasing seamless calibration and validation capability and quality in order to compensate for the complexity caused by the enlarged RDE test matrix. 
Figure 2: Development goals of XiL based RDE calibration and validation (development effort over the project timeline for the conventional, virtual HiL and virtual XiL approaches).

The optimization of the RDE test matrix for vehicle testing is the purpose of this XiL based multi-level simulation process for series-level powertrain development.

2.3 Simulation Process Overview
Extensive and accurate system analysis for various real driving conditions is mandatory for the development of modern powertrain components and controls. Thus, a closed-loop system validation with critical RDE cases becomes necessary in order to adequately calibrate the overall system and ensure the robustness of the calibration. However, this also requires an optimization of the state-of-the-art development methodologies and model-based calibration tools, in order to successfully apply the frontloading of the critical RDE case validation to the actual development cycle. This paper introduces a combined simulation approach based on a multi-level XiL environment, shown in Figure 3. This schematic workflow shows a combination of different approaches and the connections between the methods to ensure a high maturity of system compliance. The tool chain starts with the configuration of a target application including market and legislative requirements. The set-up of the Model-in-the-Loop (MiL) and HiL is executed during the vehicle calibration phases. The proposed method for the RDE simulation is based on the establishment of several interactive RDE simulation loops between two heterogeneous XiL environments. This method is executed on the MiL level utilizing two simulation tools to generate a pool of RDE clients (driving cycles and driver characteristics) for the MiL and HiL simulation. The proposed method has been developed by merging, on the MiL level, two different simulation methodologies that work together to generate the RDE clients, cycles and input data for the HiL simulation:
• In-Use Performance Validation (IUPV) based virtual powertrain simulation [4]
• Simulation-based powertrain development methodology using a real-driving cycle generator [5]
The simulation loops are established for the prediction of the critical RDE cases in order to generate an optimal hardware test matrix. Figure 4 describes the applied interactive simulation process steps in more detail. The process uses an iterative validation approach based on MiL, HiL and vehicle tests, in order to produce consistent simulation results throughout the project development period.
Figure 3: XiL based simulation methodology for RDE robustness validation 150 <?page no="161"?> 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration MiL: 1 st Optimization of RDE Test Matrix Database of RDE Scenarios RDE Test Matrix Real Driving Test Plan HiL: 2 nd Optimization of RDE Test Matrix Statistics and Multi-parametric based RDE Client Generation Accelerated Powertrain Simulation Real-Time Vehicle & Driver Simulation Critical RDE Cases Representative Customer RDE Cycles Real Road Tests Project x Project y ... DoE Optimization (Driving Cycles) DoE Optimization (CO 2 , NOx Emissions, ...) Figure 4: Proposed XiL based Interactive simulation process The process steps are described, as follows;  Real driving test plan (incl. characterization): o A system characterization is performed at nominal conditions (20 °C ambient temperature and sea-level) by driving different cycles either with a vehicle on a chassis dyno or by HiL simulation. o In addition to this base system characterization, further cycles are simulated on the HiL test bench, which cover a wide range of different ambient conditions and driving styles. This method for the extended characterization maximizes the benefits of a HiL test bench, since these cycles cannot usually be driven by the real vehicle due to the limited availability of hardware or testing resources. o Real driving conditions are created by varying driver behavior and driving routes. A multi-parametric test plan is generated using a Design of Experiments (DoE) approach for combining different conditions, e.g. driver behavior and parameters for driving route (total distance, distance of urban and rural, etc.). A further optimization of the initial test plan is possible by selection of the critical velocity profiles that are close to the drive aggression limits. o The driving cycles for the real driving conditions are generated using a stand-alone tool in GT-SUITE. The driving cycles based on the previous 151 <?page no="162"?> 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration vehicle and HiL characterization are generated using a MATLAB-based algorithm. The initial test plan includes both driving cycles.  MiL virtual powertrain simulation (1 st RDE test matrix): o The data from the real driving test plan is used in combination with a worldwide client database to define RDE clients, all fulfilling the RDE criteria. o The accelerated MiL simulation enables the prediction of cumulative tailpipe NOx emissions of all RDE clients. After the analysis of the simulation results, critical RDE clients, e.g. in terms of engine-out and tailpipe NOx emissions, are selected. o The derivation of the critical RDE scenarios (1 st RDE test matrix) can be generated using the simulation results (e.g. NOx emissions). Hereby, the driver parameter sets for the cycle generation are optimized based the DoE approach. o The 1 st RDE test matrix is finalized with the generated RDE clients of the optimized parameter sets.  HiL Virtual vehicle and driver simulation (2 st RDE test matrix): o A RDE test matrix (with significantly reduced number of test cases) is simulated on the HiL. Extended RDE environmental conditions are tested using the complete-vehicle model and real hardware ECU of the target hardware. o For the detailed investigation and optimization of the ECU calibration, virtual calibration is applied. 
The time-step based behaviors of the single critical HiL test cases are analyzed. o For the critical case optimization, the most critical cycles are selected for further parameter optimization using the DoE approach. Additional simulation loop is possible to generate new critical driving cycles including optimized driver characteristics and RDE route parameters. o Based on the HiL results (engine-out and tail-pipe NOx, HC, engine-out soot, fuel consumption, etc.), the critical RDE cases are optimized again in order to identify the RDE lead scenarios for the road or chassis dyno testing. This finally reduces the number of the needed final hardware tests.  RDE test matrix for vehicle testing: o The driving cycles of the critical RDE cases and representative customer real driving cycles are documented. o The evaluated RDE robustness results from the HiL are stored together with the documented simulation loops.  Real road test: o The optimized RDE test matrix is executed on the real roads or the chassis dyno. The test matrix includes the synthetic driving cycles of the critical RDE cases captured by the previous simulation process step. It also contains the representative customer RDE cycles.  Database of critical RDE scenarios: o Data collection: All measurement data is processed according to robustness evaluation procedures to clearly check the distribution of system results and also to ensure good quality on acceptance of in-service conformity. 152 <?page no="163"?> 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration o Management and storage: The stored and analyzed data of different projects, vehicles and configurations is organized and is used for scheduling of the next iterative simulation loops. In the following chapters, the elements of the simulation process, shown in Figure 3 and Figure 4, are discussed in details and the conducted RDE simulation results are evaluated. 3 Fundamentals of the Proposed Process 3.1 Multi-level XiL Simulation Environments This chapter describes the fundamentals for the applied MiL and HiL simulation environments. The simulation process ensures the systematic RDE validation procedure, while interacting with two different XiL visualization levels in several simulation loops. 3.1.1 Model-in-the-Loop for Engine Mode and Control Functions Figure 5 illustrates the used Model-in-the-Loop (MiL) environment of the accelerated powertrain simulation using a simplified ECU control model. Exhaust Aftertreatment Model Physical Virtual Simplified ECU model containing key functions Engine Model Figure 5: Model-in-the-Loop Environment for accelerated powertrain simulation The purpose of MiL is the accelerated RDE client simulation with massive cycle data input based on a large-scale databased statistical system characterization. Therefore, the fast-running simulation models are essential for the optimal execution of the huge simulation test case matrix. 3.1.2 Hardware-in-the-Loop for Detailed Powertrain Virtualization The HiL system provides an efficient solution for an ECU calibration platform. The platform emulates the complete periphery of the ECU in an equivalent operation environment to the real vehicle. An advanced HiL system with high computing power is generally required for the virtual calibration purpose. An example HiL set-up is shown in Figure 6. 
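As a purely conceptual illustration of such a closed loop (not the toolchain used in this study), the sketch below alternates simplified plant models and an ECU I/O interface at a fixed step size. All class, signal and parameter names are hypothetical placeholders; a real HiL system runs the plant models on a real-time target and exchanges the signals with the physical ECU electrically.

```python
import time

class PlantModels:
    """Placeholder engine/exhaust/vehicle/driver models (illustrative only)."""
    def step(self, ecu_outputs, dt):
        # ... integrate engine, aftertreatment and vehicle states here ...
        return {"engine_speed": 1500.0, "boost_pressure": 1.4, "nox_sensor": 120.0}

class EcuInterface:
    """Placeholder for the ECU I/O emulation (analog/digital/CAN signals)."""
    def write_sensors(self, sensor_values): ...
    def read_actuators(self):
        return {"injection_quantity": 12.0, "egr_valve": 0.35}

def run_hil(cycle_duration_s, dt=0.001):
    plant, ecu = PlantModels(), EcuInterface()
    actuators = ecu.read_actuators()
    for _ in range(int(cycle_duration_s / dt)):
        sensors = plant.step(actuators, dt)   # advance the virtual powertrain
        ecu.write_sensors(sensors)            # feed sensor signals to the real ECU
        actuators = ecu.read_actuators()      # read back actuator commands
        time.sleep(dt)                        # stand-in for hard real-time scheduling
```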
Figure 6: Hardware-in-the-Loop environment for detailed complete-powertrain simulation (real ECU and hardware components such as actuators and feedback sensors coupled via a real-time input/output interface to virtual engine, exhaust aftertreatment, vehicle and driver models).

In a closed-loop HiL, the quality of the input/output (I/O) signals and a robust conversion of the physical signals are mandatory to avoid oscillations of the ECU feedback loops and unintended signal manipulations. The electrical interface between the ECU and the HiL, and the real-time co-simulation model interfaces for the complete powertrain and the driver, are the key factors when emulating the complete ECU environment. For HiL applications, the deployment of accurate closed-loop powertrain models based on a suitable and practically proven combination of extensive physical, semi-physical and data-driven models is important to ensure reliable and robust calibration and validation results.

3.2 Multi-domain Powertrain System Modeling
In this chapter, the investigated multi-domain engine and exhaust aftertreatment modeling approaches are introduced for the MiL and HiL simulation, and the necessary theoretical fundamentals of the modeling principles applied for the proposed RDE investigations are discussed. As summarized in Figure 7, different modeling terms are used to explain the applied modeling elements and principles of the thermodynamic and kinetic processes in Diesel engines. The illustrated classification indicates the variety of virtualization methods that can be used for the multiple engine domains.

Figure 7: Overview of modeling approaches applied for MiL and HiL simulation (classification of turbocharger, air path, in-cylinder combustion, emission and exhaust aftertreatment modeling into physics-based mean value, semi-physical, empirical DoE, experiment-data-based map and artificial neural network approaches for the MiL approach and the two HiL modeling approaches).

Three best-practice modeling approaches are addressed with a focus on emission prediction, and the applicability of these approaches for the proposed XiL simulation process is discussed.

3.2.1 Artificial Neural Network based Modeling of Engine Sub-Domains
The Artificial Neural Network (ANN) based powertrain modeling technique is an effective method based on a black-box model approach. Figure 8 describes the schematic procedure applied for the sub-model generation using ANN modeling.

Figure 8: Schematic procedure for sub-model generation using ANN for MiL

In this study, the focus lies on adaptability and consistency across various RDE related boundary conditions. The applied approach generates appropriate simulation models in an efficient way that allows an accurate NOx emission prediction, although the ECU is simplified. The simplification of the ECU and the engine plant model is mandatory for establishing a fast simulation of the enlarged set of test cases.
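A minimal sketch of how such an ANN sub-model could be trained is given below, assuming tabulated test-bench data with operating-point inputs and a measured engine-out NOx target. The file names, feature set and layer sizes are illustrative assumptions, not the models used in the study.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical training data: rows of (engine speed, injected fuel mass,
# EGR rate, boost pressure, ...) and the measured engine-out NOx.
X = np.load("operating_points.npy")      # shape (n_samples, n_features), assumed file
y = np.load("nox_engine_out.npy")        # shape (n_samples,), assumed file

# Normalize inputs so the network trains on comparable scales.
norm = layers.Normalization()
norm.adapt(X)

inputs = layers.Input(shape=(X.shape[1],))
h = norm(inputs)
h = layers.Dense(32, activation="tanh")(h)
h = layers.Dense(32, activation="tanh")(h)
outputs = layers.Dense(1)(h)             # predicted engine-out NOx

nox_surrogate = Model(inputs, outputs)
nox_surrogate.compile(optimizer="adam", loss="mse")
nox_surrogate.fit(X, y, validation_split=0.2, epochs=200, batch_size=64, verbose=0)

# The trained sub-model can then be cascaded with the simplified ECU model
# and the other plant sub-models for the accelerated MiL simulation.
```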
This trained control logic is connected with the ANN based plant engine and exhaust aftertreatment model for the accelerated MiL simulation, as highlighted in Figure 9. 155 <?page no="166"?> 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration Figure 9: Principle of accelerated MiL simulation using cascaded ANN structure The generic ECU model is based on the Lean NOx Trap (LNT) and Selective Catalytic Reduction (SCR) related ECU control functions. These are important, since they have direct impact on the prediction of the tailpipe NOx emissions. These functions are modelled using an ANN approach, when the availability and accessibility of the ECU models are limited. The following SCR ECU functions are trained using the general procedure illustrated Figure 8 and the applied principle explained in Figure 9;  SCR DeNOx efficiency model  SCR NH 3 loading model  SCR feed forward controller  Dosing strategy coordinator  SCR fill level governor The limited extrapolation of the proposed ANN structure is ensured by combining the other physics-based MiL models with the cascaded ANN models, in order to avoid the generation of the simulation results from the outside of the boundary conditions that are not used for the model training. The MiL simulation of complete ECU functions in a virtual environment might be a conceivable solution for certain development use cases [6], but it requires significant effort for set-up of all control and diagnostic models [2]. Alternatively, a simplified ECU model including key control functions can be applied for the accelerated MiL simulation of the NOx tailpipe emissions. 3.2.2 Mean Value Engine Modeling with Semi-Physical Emission Model The Mean Value Modeling (MVM) for Diesel engine applications is constructed with high modularity, and facilitates compliance with established process standard CMMI and modeling standard AUTOSAR. The physics-based approach combined with some empirical sub-models, allows for accurate usage in HiL applications, where the model accuracy under transient and extrapolated ambient conditions is of major importance [2]. 156 <?page no="167"?> 5.1 Virtual Real Driving Environment and Emissions: A Road Towards XiL Based Digitalization of Powertrain Calibration In order to maintain maximum modularity, the component structure of the engine model spans across multiple levels. Each sub-component consists of many interdependent, closely coupled functions. For example, the exhaust manifold, contained within the air path module, consists of an exhaust manifold pressure function and an exhaust manifold temperature function. The strict separation of functions allows adaptability of the engine model for various engine applications in short transition periods. Each composition includes hardware-related components, e. g. intake valves, turbocharger and intercooler for the air path composition. In the chosen modeling approach, all of the hardware configurations are included in the base engine model. This approach, together with the ability to switch between up to eight combustion modes, enables an easy adaptation of the model for different engine architectures. The effort for the parametrization of the physics-based mean-value engine model depends on the following boundary conditions: • Required level of target accuracy • Quality and availability of engine test bench data Both conditions are significantly influenced by project objectives and available time for model training. 
Therefore, the identification of the optimal parametrization effort is important for the execution of virtual calibration tasks. Clearly, the discussion involves important cost-related questions that are not easily answered. Instead, the conducted research focuses on the summarized results of the improved modeling, which is an important element of the evaluated HiL process and of its scientific contribution to the selected modeling approaches. The grey-box modeling approach for the in-cylinder NOx formation produces satisfactory results, since the parametrization of the look-up tables is done using experimental data, and the physical correlations are formulated as mathematical equations. Its foundation is the O2-NOx correlation. More generally, the estimation of the engine-out NOx molar fraction is explained by

$\psi_{NOx} = \psi_{NOx,ref} \left( \frac{\psi_{O_2}}{\psi_{O_2,ref}} \right)^{k}$ , (1)

where $\psi_{NOx,ref}$ and $\psi_{O_2,ref}$ describe the previous (reference) state of NOx and O2. The exponent k can be calibrated for different correlations that are associated with the available engine test bench data and based on previous experience with different engines. The estimation of the newly produced NOx at the end of the combustion event can be expressed as

$\frac{dn_{NO,new}}{dt} = e^{\sum_{i..j} \Delta S}$ , (2)

where $n_{NO,new}$ is the NOx molar quantity, which is estimated to equal the NO molar quantity. The indices i..j define each applied correlation $\Delta S$ that impacts the reaction. For instance, the in-cylinder O2 concentration before combustion is considered, together with other impacting parameters, as a variation from reference conditions. The proposed model was used to predict NOx under various transient RDE cycles and extended ambient conditions. In this research, this semi-physical emission modeling has been compared to the data-driven DoE modeling that is described in the following chapter. Both approaches are implemented in the main model frame of the MVM and used for the proposed RDE applications. The physics-based air path modeling is kept the same for both approaches.

3.2.3 Gaussian Process DoE Model for In-cylinder Combustion and Emissions
The advanced Gaussian Process (GP) modeling, which is extended from a standard GP model, is applied to construct the in-cylinder combustion and emission models for the HiL application. The standard GP model is based on an empirical Bayesian linear model [7]. By the definition in [7], it is completely specified by the mean function $m(x)$ and covariance function $K(x, x')$ of a real process $f(x)$, which is generally expressed by

$f(x) \sim \mathcal{GP}\big(m(x), K(x, x')\big)$ . (3)

With the assumption $m(x) = 0$, given measurement points $X \in \mathbb{R}^{n \times m}$ at $n$ different samples (configurations) of $m$ given input dimensions (input parameters) and the resulting measurement data $y \in \mathbb{R}^{n}$, a likelihood function

$\log p(y \,|\, X, \theta) = -\frac{1}{2} y^{\top} K^{-1} y - \frac{1}{2} \log \det K - \frac{n}{2} \log 2\pi$ (4)

is formulated. Reuber et al. [8] extended the standard GP to a warped GP approach, in order to improve the regression in terms of heteroscedastic noise that depends on the measurement point. The corresponding likelihood function is expressed by

$\log p(y \,|\, X, \theta, \psi) = -\frac{1}{2} \beta(y)^{\top} K^{-1} \beta(y) - \frac{1}{2} \log \det K - \frac{n}{2} \log 2\pi + \sum_{i=1}^{n} \log \left. \frac{\partial \beta(z)}{\partial z} \right|_{z = y_i}$ , (5)

where $\beta(y) = \beta(y; \psi)$ represents a suitable warping function with a tuple of hyperparameters $\psi$. This enables the automatic finding of an output transformation that is suitable for the given data [8]. In order to enhance the applicability of the GP to internal combustion engine specific problems, the Manifold GP is introduced in [8].
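For illustration, the zero-mean log marginal likelihood of Eq. (4) can be evaluated as in the following sketch. The simple squared-exponential kernel and its three hyperparameters are assumed stand-ins for the covariance functions used in the study.

```python
import numpy as np

def rbf_kernel(X, lengthscale, signal_var, noise_var):
    """Squared-exponential covariance matrix K(X, X) with additive noise."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2) + noise_var * np.eye(len(X))

def gp_log_marginal_likelihood(X, y, theta):
    """Eq. (4): log p(y | X, theta) for a zero-mean GP.

    theta = (lengthscale, signal_var, noise_var) is an illustrative
    hyperparameter set; it is typically chosen by maximizing this value.
    """
    K = rbf_kernel(X, *theta)
    L = np.linalg.cholesky(K)                          # stable K^-1 and log det K
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = K^-1 y
    n = len(y)
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))               # sum(log diag L) = 0.5 * log det K
            - 0.5 * n * np.log(2.0 * np.pi))
```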
For a proper modeling of system behaviors caused by rapid changes, e.g. the sudden rise of CO emissions in Diesel engines once the air-fuel ratio is too low (rich mixture), the standard covariance functions of equation (5) generate only a limited correlation of the measurements under these conditions. An adapted covariance function is formulated by

$(x_i, x_j) \mapsto \tilde{K}(x_i, x_j) := K(x_i, x_j; \theta, \varphi) := K\big(\mu(x_i; \varphi), \mu(x_j; \varphi); \theta\big)$ , (6)

where a class of input transformations $x \mapsto \mu(x) = \mu(x; \varphi)$ depends on an additional hyperparameter $\varphi$. The Manifold GP automatically selects a suitable input transformation, such that after this transformation the correlation of the inputs can adequately be captured by equation (6) and the non-smooth behavior of the combustion and emissions can be modeled [8]. The combined approach of the warped GP and the Manifold GP is used to enable the best suited selection of models. The warped GP affects the output of the GP, while the Manifold GP influences only the input transformation. Therefore, the implementation of both approaches can be realized easily. One of the important non-parametric identification methods is the GP regression model, known for its accurate prediction [9]. However, it is also known that increasing the number of model inputs causes longer simulation times due to the enlarged set of training data points. Alternative methods are polynomial models or neural networks, which are parametric models and require lower computation time for model training. The slow learning rules and simulation of neural networks (e.g. the computation of each neuron) make real-time computation difficult [10].

4 Process Development and Application
In this chapter, the proposed RDE validation process is introduced and the application results are discussed. The combined RDE simulation based on MiL and HiL enables consistent frontloading. The consistency of the simulation process is ensured with harmonized tools and methods used in a series-level vehicle calibration. The focus of this article is to optimize the exponentially enlarged RDE test matrix in combination with the accelerated MiL simulation and the HiL simulation.

4.1 Generation of RDE Simulation Input

4.1.1 Multi-parametric Driver Characteristics and Real Driving Situations
Real driving conditions can be characterized by varying driver behavior, driving routes and ambient boundary conditions such as traffic and weather. Hence, real driving conditions have a very high variance compared to fixed and controlled certification driving cycles. The development target is a reliable abstraction of real driving conditions that makes this high variance controllable in terms of a time- and cost-efficient development. The result of this RDE simulation input generation step is the derivation of a concise number of parameters. Figure 10 shows the general procedure and the dominant influences which represent real driving situations using parametric descriptions.
Figure 10: Overall procedure for RDE cycle generation based on the parametric definition of real driving situations

Specifically, a parametrization has been derived for
• Speed limitation (target speed for the route)
• Route profile
• Stopping times and intervals
• Driver aggressiveness (acceleration and deceleration behavior)
• Driver tolerance with regard to exceeding or falling below the speed limit
• Driver behavior at constant speed (so-called 'wobbling')
• Traffic (vehicle-vehicle interaction such as following and congestion)
• Wind and ambient temperature
• Slope profiles
This parameterization is essentially unlimited in its capacity. Optionally, it can be limited to the RDE legislation, but it can also represent completely free driving situations. One of the important pieces of information needed for the parametrization is the targeted and allowable maximum vehicle acceleration in m/s². This is one of the boundary conditions considered for the RDE cycle generation in order to ensure reproducibly accurate results during the highly dynamic driving cycles on the real vehicle and the virtual vehicle. The synthetically designed real driving cycles are based on the principle of discrete velocity speed limits, $v_{route,target}$, as expressed in equation (7). The drivers react to these and decide their subsequent actions. The base calculation method is explained in [11]. Two fundamental equations of the cycle generation are given by

$v_{des} = f_{v} \cdot v_{route,target}$ , (7)

$a_{des} = f_{dyn} \cdot a_{demand}$ . (8)

The desired vehicle speed $v_{des}$ is directly influenced by the desired vehicle speed factor $f_{v}$. The driving route based vehicle speed target $v_{route,target}$ is calculated using the legal speed limit of the current route, the maximum cornering speed and the visibility (sight distance based) velocity. The driving dynamics factor $f_{dyn}$ is correlated with the vehicle acceleration and deceleration phases. The desired driver acceleration $a_{des}$ is calculated from this factor and the driver demand acceleration part $a_{demand}$. The major advantages of the parameter-based abstraction of driving are the already described restriction of the variance (reduction of effort) and the convertibility of the parametric descriptions into Design of Experiments (DoE) methods. The latter is the basis for the derivation of the RDE Lead Scenarios, which will be described in the chapter on RDE simulation and methodology.

4.1.2 Large-scale Databased Statistical Characterization and Simulation Input Generation
The main principle is to characterize the target system by operating it in test driving cycles on a chassis dyno or on a test track, and then to combine these characterization cycles with operation profiles from a wide database to simulate different driving scenarios. The concept of the system characterization for heavy-duty applications is introduced in [12]. For passenger car applications, driving characterization cycles on a chassis dyno are used. The applied RDE simulation process proposes an additional characterization using the HiL test bench. The benefit of using HiL data for the input characterization is the real control behavior under extended conditions thanks to the extrapolation capability described in [2].
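Before turning to the statistical client generation, the parameter-based cycle synthesis of the previous section (Eqs. 7 and 8) combined with a simple random design over driver parameters can be sketched as follows. The parameter ranges, the route profile and the acceleration handling are illustrative assumptions, not the GT-SUITE or MATLAB implementation referred to in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_driver_parameters(n_variants):
    """Illustrative DoE over driver parameters (ranges are assumptions)."""
    return [
        {"f_v": rng.uniform(0.85, 1.10),        # desired speed factor, Eq. (7)
         "f_dyn": rng.uniform(0.3, 1.0),        # driving dynamics factor, Eq. (8)
         "a_demand_max": rng.uniform(1.0, 3.0)} # driver demand acceleration in m/s^2
        for _ in range(n_variants)
    ]

def generate_speed_profile(route_speed_targets, params, dt=1.0):
    """Synthesize a velocity trace from discrete route speed targets (Eqs. 7-8)."""
    v, trace = 0.0, []
    for v_target in route_speed_targets:
        v_des = params["f_v"] * v_target                  # Eq. (7)
        a_des = params["f_dyn"] * params["a_demand_max"]  # Eq. (8), simplified
        # accelerate or decelerate towards the desired speed, limited by a_des
        v += np.clip(v_des - v, -a_des * dt, a_des * dt)
        trace.append(v)
    return np.array(trace)

# Example: a simple urban/rural/motorway target profile in m/s, one value per second.
route = np.concatenate([np.full(300, 13.9), np.full(300, 25.0), np.full(400, 36.1)])
cycles = [generate_speed_profile(route, p) for p in sample_driver_parameters(5)]
```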
In a first step of the methodology, driving profiles are generated by an advanced algorithm that takes the system characterization into account and uses a database with more than 10,000 driving profiles from different worldwide commercial applications, as shown in Figure 11.

Figure 11: FEV database for RDE clients

In a second step, reliable statistics, e.g. the 100 representative European clients, can optionally be used to generate the final RDE clients. The following adaptations of the client distribution can be made:
• Target customer driving behavior
• Commercial/private usage
• Market/region
• Sales volume
• Vehicle class

The core of the RDE client generation lies in the flexible utilization of the advanced algorithm, which combines real vehicle characterization data from the chassis dyno and the HiL test bench with the statistical data and other important RDE criteria. The output of the algorithm is an input database of all created RDE client inputs for the next simulation steps, as shown in Figure 12.

Figure 12: Principle algorithm work-flow for the RDE client input formation

Finally, the combined process of the multi-parametric driver characterization (GT-SUITE) and the large-scale databased statistical algorithm (MATLAB) generates 101,000 default driving clients. After the first optimization, 5,700 RDE clients (1,000 cases from the RDE driver characteristics and 4,700 cases from the statistical data) are used for the MiL simulation, using the example application of a light-duty Diesel engine. In the following chapter, the combined RDE simulation process and the proposed methodology are presented, using the above explained RDE driver, route and client formation as simulation inputs for the MiL and HiL simulation.

4.2 Combined RDE Simulation and Methodology

4.2.1 Accelerated RDE Client Simulation with Massive Cycle Data Input

The generated profiles are processed by the RDE client formation tool in a MiL environment with an ECU model, an engine model and exhaust aftertreatment simulation models. The accelerated powertrain simulation calculates the engine raw and tailpipe emissions based on the input data and the implemented models, optionally for a particular lifetime and/or distance. In the last step, the simulation output data is evaluated regarding the system behavior, and statistics are generated for the different simulated driving scenarios and vehicle applications. Figure 13 shows the distribution of the driver aggressiveness characteristic points from the simulated 1,000 cases. The created test plan also includes highly dynamic boundary conditions that exceed the aggressiveness limits (V*Apos 95%ile).

Figure 13: Distribution of 1000 real driving cycles based on multi-parametric real driver and route characterization

Figure 14 shows the finalized RDE clients generated by the statistical approach and the optimized driver cases based on the multi-parametric variation (Figure 10 and Figure 12). The engine-out and tailpipe NOx emission results from MiL, shown in Figure 18, will be discussed in combination with the HiL results in the last chapter of the evaluation.
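As background to the aggressiveness metrics plotted in Figures 13 and 14, the following sketch computes the v·a_pos 95th percentile and the relative positive acceleration (RPA) from a speed trace, following the usual RDE evaluation logic (positive-acceleration points with a > 0.1 m/s²). The speed-bin boundaries, the threshold and all names are illustrative assumptions taken from the general RDE evaluation rules, not from the paper itself.

```python
import numpy as np

def rde_dynamics_metrics(v_kmh, dt=1.0):
    """Compute the v*a_pos 95th percentile and the RPA per speed bin of a driving cycle."""
    v = np.asarray(v_kmh) / 3.6                       # vehicle speed [m/s]
    a = np.gradient(v, dt)                            # acceleration [m/s^2]
    bins = {"urban": v * 3.6 <= 60,
            "rural": (v * 3.6 > 60) & (v * 3.6 <= 90),
            "motorway": v * 3.6 > 90}
    results = {}
    for name, mask in bins.items():
        sel = mask & (a > 0.1)                        # positive-acceleration points only
        va_pos = v[sel] * a[sel]                      # [W/kg ~ m^2/s^3]
        distance = np.sum(v[mask]) * dt               # driven distance in this bin [m]
        results[name] = {
            "va_pos_95": np.percentile(va_pos, 95) if va_pos.size else 0.0,
            "rpa": np.sum(va_pos * dt) / distance if distance > 0 else 0.0,
        }
    return results
```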
Figure 14: Optimized RDE clients used for MiL simulation (panels: vehicle speed times positive acceleration in W/kg and Relative Positive Acceleration (RPA) in m/s² over the average vehicle speed of the real driving simulation input trace, for the urban, rural and highway sections, with the V*Apos 95%ile and lower RPA bounds indicated)

Synthetically determined RDE cycles reflect the reliable real driving situations that have been predicted with best-practice engineering knowledge. However, there is no guarantee that they represent the maximum (worst) emission case for each vehicle-drive combination during real driving operation. In driving cycles under real conditions, situations occur repeatedly with different boundary conditions (e.g. operation of the engine and the exhaust aftertreatment system, ambient temperature and pressure) in which the vehicle produces more emissions than in a synthetic RDE cycle. This challenge emphasizes the necessity of the accelerated MiL powertrain simulation for executing the large simulation matrix of the predicted RDE clients, in order to cover as many scenarios as possible already in the MiL phase.

4.2.2 Multi-domain Real-time Vehicle Simulation and Case Optimization

This chapter discusses the methodical derivation of an RDE Lead Scenario as the worst-case scenario for a given vehicle-drive combination. The parametric description of real driving situations and the large-scale RDE client input data from the previous chapters allow the derivation of a multidimensional simulation space using DoE, consisting of parametric combinations of driving scenarios and driving behavior (on the order of 5,700 RDE scenarios), which covers the emission-relevant driving situations that can occur in real operation. In the next step, the complete-vehicle simulation is conducted on the HiL test bench in order to predict the emissions including the interaction of the ECU software and its calibration. After the HiL simulation, critical RDE cases can be determined. The derivation of the critical RDE Lead Scenarios based on MiL, HiL and DoE is explained as follows:
• Selection of n parameters for the description of the real driving situation (based on the MiL simulation results)
• Optional limitation of the n-dimensional simulation space, for example:
o to a specific RDE legislation scenario or
o to market-typical driving situations
• Generation of a parametric simulation space using DoE
• Performing the driving cycle emission HiL simulations (calculations in the order of magnitude of multiple scenarios)
• Identification of critical HiL simulation clients and time-step based analysis of the ECU, engine and other important physical parameters
• Modelling of the HiL emission results, e.g. CO2 or NOx, depending on the driving situation parameters, with extended GP models
• Determination of the drive scenario parameter sets (test plan) with maximum emissions (RDE Lead Scenarios) using the DoE model or the HiL simulation results
• Re-performing of the RDE client generation with the new sets and HiL emission simulation with the final RDE Lead Scenario parameter sets

For the case optimization, various driving behaviors are considered. The impact of the driving behavior on the NOx emissions is simulated, as shown in Figure 15. The target speed limit and the actual vehicle speed in driving cycles from 3 different drivers are illustrated. The dynamic factor (aggressiveness factor) governs how quickly the driver reacts to a new vehicle speed limit. This parameter can be varied together with other driving cycle parameters in order to investigate the effect of different driver behaviors on NOx emissions and fuel consumption.

Figure 15: Simulated impact of the driver behaviors on engine-out NOx emissions on the HiL test bench (traces: target speed, vehicle speed, accelerator pedal position, injection quantity, engine-out NOx mass flow and cumulative engine-out NOx for an aggressive, a moderate and a defensive driver)

Figure 16 shows that high acceleration significantly influences the NOx emissions immediately after the engine starts and before the catalyst light-off is reached at the beginning of the driving cycle. However, Figure 16 also shows that a defensive driver can produce increased tailpipe NOx emissions during this phase due to the low SCR inlet temperature. The engine and exhaust temperatures are still too low to ensure the required NOx conversion efficiency of the SCR system during this phase.

Figure 16: Impact of the defensive driving on the SCR heat-up and the tailpipe NOx emissions (traces: vehicle speed, tailpipe NOx mass flow, cumulative tailpipe NOx and SCR inlet temperature for the three driver types)

4.3 Evaluation of RDE Cycle Simulation Results

4.3.1 Emission Validation for Simulated WLTC and RDE Cycles

Firstly, several WLTC and RDE cycles are validated against vehicle measurements in order to demonstrate the emission prediction accuracy of the HiL setup and the used models. As shown in Figure 17, an overall accuracy of ±10% relative deviation is achieved. Additionally, two different ambient temperatures (-7°C for the WLTC and 5°C for RDE cycle 3) are simulated to validate the extrapolation capability of the physical powertrain models. The evaluation results from the investigated RDE cycles show a good prediction of the vehicle and powertrain behavior in the WLTC and RDE tests under transient driving conditions and extended ambient conditions. The accurate reproduction of the ECU control behavior and of the emissions is mandatory in order to apply the proposed simulation approach to the RDE validation.
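A relative-deviation check like the one underlying Figure 17 can be expressed compactly. The following sketch integrates a simulated and a measured NOx mass-flow trace and reports the deviation of the cumulative masses; the variable names and the threshold check are illustrative, simply mirroring the ±10% criterion stated above.

```python
import numpy as np

def cumulative_nox_deviation(t, nox_sim_gps, nox_meas_gps):
    """Relative deviation [%] of cumulative NOx mass, simulation vs. measurement."""
    m_sim = np.trapz(nox_sim_gps, t)    # g, integral of the mass flow over time
    m_meas = np.trapz(nox_meas_gps, t)  # g
    return 100.0 * (m_sim - m_meas) / m_meas

# Example usage with two traces sampled on the same time base:
# dev = cumulative_nox_deviation(time_s, sim_trace, pems_trace)
# accepted = abs(dev) <= 10.0   # the +/-10 % criterion used above
```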
Figure 17: Validation of the HiL simulation results for the engine-out NOx emission (panels: simulated vs. measured engine-out NOx mass flow and cumulative NOx deviation for RDE cycles 1 to 5 and for the WLTC at 22°C and -7°C ambient temperature; the cumulative deviations range from -9.7% to +10.6%)

4.3.2 Emission Prediction for Generated RDE Clients

In this chapter, the conducted MiL and HiL simulation results are discussed. The case optimization is conducted after the MiL stage to reduce the test matrix for the subsequent HiL simulation. The applied procedure is described in the previous chapters. Figure 18 summarizes the XiL simulation results for the engine-out and tailpipe NOx emissions. The details of the simulated cases are as follows:
• Based on the initial test plan, the MiL simulation is executed and the 1st RDE test matrix consisting of the 5,700 RDE clients is generated.
• For the HiL simulation, 40 RDE cases are selected for the visualization and investigated in detail using the RDE clients of the optimized 1st RDE test matrix.
• The HiL simulation results of the 1st RDE test matrix and 46 additional WLTC cases (used to determine the parameter sets of the predicted system performance with an aged SCR and a malfunctioning EGR system) are analyzed in order to create the optimized 2nd RDE test matrix for the road tests, as described in Figure 4.
• The following criteria of the RDE case definition were additionally considered for the HiL simulation:
o SCR aging states (degreened, full useful life and aged part)
o Road load variation
o Aggressiveness of the driver
• A type-approval road load and a reference road load are used for the simulation models. The reference road load is measured under realistic conditions, representative of in-use vehicles driven in real-world conditions.
• Based on several hundred HiL simulation results of the identified critical cases, the RDE Lead Scenarios for the 2nd RDE test matrix are obtained by the DoE approach, generating new parameter sets for the worst-case combinations.
• The generated DoE model allows fast identification of further critical parameters and challenging combinations.

As explained in the previous chapters, the MiL simulation is used to generate the first optimized RDE test matrix with a reduced number of required test runs for the HiL.

Figure 18: Summarized emission results from MiL and HiL simulation (engine-out over tailpipe NOx in g/km for the 5,700 MiL clients, the HiL RDE cases and the HiL real driver behavior cases, with the conformity factor line CF = 1 indicated)

As shown in Figure 18, different types of critical scenarios are evaluated in combination with various driving cycles, in order to validate the robustness of the investigated ECU software and calibration under RDE-related conditions. The degradation of the EATS is based on the aged SCR simulation. The kinetic-based SCR model considers the reduced catalytic reaction surface and the shifted light-off temperatures for the aging characteristics. The predicted DeNOx efficiency is strongly influenced by the SCR aging. Additionally, extended ambient temperatures are tested. The highest tailpipe NOx (at nominal ambient conditions, with the baseline SCR and at CF < 1) results from the RDE client that also showed the lowest NOx conversion efficiency in MiL. Finally, the worst RDE clients are predicted by HiL in combination with the driver aggressiveness and the aged SCR, as shown in Figure 18. The driving cycle of the worst RDE client is synthetically generated and contains not only the highly dynamic velocity profile but also the other driving route characteristics. However, the synthetic cycles cannot reflect all possible worst-case scenarios of real-world driving or real engine operation. Using a test matrix of short, optimized synthetic cycles, which are representative of a larger set of real driving cycles, eliminates the need for iterative simulation loops. Also, a seamless validation is ensured thanks to the enlarged number of test cases in the MiL phase. The combined simulation approach helps to generate more realistic cycles while characterizing the simulation input with real ECU data and enabling a flexible parametric variation of real driving conditions. Consequently, this leads to an optimal final test matrix for the hardware testing.

4.3.3 Optimization of RDE Test Matrix

Figure 19 describes different packages for the RDE test matrix. The maximum package contains all possible conditions that could theoretically be considered for a seamless validation. The extended matrix includes the optimized and reduced test cases customized for the boundary conditions.

Figure 19: Determination of the RDE test matrix

However, only a significantly reduced test matrix can be executed in actual projects due to limited resources, e.g. PEMS equipment and testing vehicles. This is indicated as the current and optimized status in Figure 19.
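The model-based determination of the RDE Lead Scenarios sketched above (fit the HiL emission results over the driving-situation parameters, then search the parameter space for the maximum) could look roughly as follows. The use of scikit-learn, the two example parameters, the synthetic response and the random candidate set are illustrative assumptions, not the authors' tool chain.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# X: DoE parameter sets already run on the HiL (columns: e.g. aggressiveness factor,
#    stop time share); y: resulting cumulative tailpipe NOx in g/km (illustrative data)
X = np.random.default_rng(0).uniform([0.2, 0.0], [1.0, 0.3], size=(40, 2))
y = 0.08 + 0.15 * X[:, 0] ** 2 + 0.3 * X[:, 1] + 0.01 * np.random.default_rng(1).normal(size=40)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Predict on a dense candidate set and pick the worst-case parameter combination
candidates = np.random.default_rng(2).uniform([0.2, 0.0], [1.0, 0.3], size=(5000, 2))
mean, std = gp.predict(candidates, return_std=True)
lead_scenario = candidates[np.argmax(mean + 2.0 * std)]   # pessimistic (upper-bound) pick
```

The selected parameter set would then be turned back into a driving cycle and re-simulated, closing the loop described in the bullet list above.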
In the conventional approach, the following uncertainties remain:
• Uncertainty whether the selected driving cycles really represent critical cases and cover sufficient scenarios for achieving a seamless validation.
• Uncertainty of the test case reduction, when the cost-intensive test plan has to be reduced with only a few limited optimization loops.

Therefore, the combined simulation approach generates benefits from the reduction of the testing cost and from the improvement of the calibration quality, as shown in Figure 20.

Figure 20: Cost reduction and quality improvement using the proposed XiL simulation approach (testing cost over quality: conventional test range vs. optimized test matrix based on XiL, with current and optimized status)

The optimized determination of the RDE test matrix leads to reduced testing cost due to the accurate optimization of the critical test cases based on the simulation approach. MiL and HiL enable a seamless validation environment using iterative simulation loops for the creation of the RDE test matrix. This simulation process finally improves the quality in terms of robustness and consistency, since various boundary conditions such as dynamic driving behavior (Figure 10) and statistical RDE client information (Figure 11) are tested at an earlier stage of the development. The quality is obtained by the following improvements:
• Early identification of calibration improvements for the engine-out emissions
• Fast optimization of the emission control systems for the tailpipe emissions
• Validation of critical RDE cases over a wide range of driving conditions
• Identification of the worst RDE Lead Scenarios for further calibration optimization

5 Conclusion

The contribution of the present work is a combined simulation process and methodology based on multi-level XiL simulation and various heterogeneous development tools. This process and methodology use powertrain and vehicle simulation in extended validation environments to frontload the RDE validation work towards the MiL and HiL virtual test beds. The combined process and methodology have shown the possibility to optimize the huge RDE test matrix in an efficient way. The accurate prediction of the NOx emissions is the major optimization value for the ECU calibration validation. However, the RDE worst case and its cycle definition (in terms of fuel consumption or NOx emissions) can be aligned only to a limited extent with realistic driver behaviors and possible scenarios on a real road. The further focus of the development of the tool chain and methodology of the combined approach lies in:
• Effort and cost optimization for the RDE test matrix generation loops
• Connection of XiL with advanced environment simulations (e.g. real traffic and street simulation)

It is favorable to use the proposed simulation approach for the vehicle calibration, even though the complexity of the used tools and simulation environments remains. It enables a robustness validation of the ECU software and its calibration already at an earlier development phase and also ensures a seamless evaluation of the critical RDE cases.
Literature

[1] Fontaras, G., Zacharof, N.-G., and Ciuffo, B., "Fuel consumption and CO2 emissions from passenger cars in Europe - Laboratory versus real-world emissions," Progress in Energy and Combustion Science 60: 97-131, 2017, doi: 10.1016/j.pecs.2016.12.004.
[2] Lee, S.-Y., Andert, J., Neumann, D., Querel, C. et al., "Hardware-in-the-Loop Based Virtual Calibration Approach to Meet Real Driving Emissions Requirements," WCX World Congress Experience, SAE Technical Paper Series, Apr. 10, 2018, SAE International, Warrendale, PA, United States, 2018.
[3] Trampert, S., Nijs, M., Huth, T., and Guse, D., "Simulation von realen Fahrszenarien am Prüfstand," MTZ Extra 22(S1): 12-19, 2017, doi: 10.1007/s41490-017-0008-5.
[4] Christoph Menne, Michael Rupp, David Blanco-Rodriguez, and Thomas Körfer, "Diesel Engine Emission Control Concepts for Robust Compliance with EU6d Legislation," in: Aachen Colloquium Automobile and Engine Technology 2016.
[5] Henning Baumgarten, Johannes Scharf, Matthias Thewes, Tolga Uhlmann et al., "Simulation-Based Development Methodology for Future Emission Legislation," in: 8. Emission Control Conference 2016 - Real Driving Emissions.
[6] Blanco-Rodriguez, D., Vagnoni, G., Aktas, S., and Schaub, J., "Model-based Tool for the Efficient Calibration of Modern Diesel Powertrains," MTZ worldwide, Ausgabe 10/2016, Springer, 2016.
[7] Rasmussen, C.E. and Williams, C.K.I., "Gaussian processes for machine learning," Adaptive computation and machine learning, 3rd ed., MIT Press, Cambridge, Mass., ISBN 0-262-18253-X, 2008.
[8] Thewes, S., Lange-egermann, M., Reuber, C., and Beck, R., "Advanced Gaussian Process Modeling Techniques," in: Design of Experiments (DoE) in Engine Development, Expert Verlag, 2015.
[9] Gutjahr, T., Ulmer, H., and Ament, C., "Sparse Gaussian Processes with Uncertain Inputs for Multi-Step Ahead Prediction," IFAC Proceedings Volumes 45(16): 107-112, 2012, doi: 10.3182/20120711-3-BE-2027.00072.
[10] Sánchez-Montañás, M.A., "Strategies for the Optimization of Large Scale Networks of Integrate and Fire Neurons," in: Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, ISBN 978-3-540-45720-6.
[11] Bösch, P., "Der Fahrer als Regler," Dissertation, Wien, 1991.
[12] Daechul Jeong, Maurice Smeets, Henning Gero Petry, Markus Netterscheid, Imre Pörgye, Matthias Kötter, Sung-Yong Lee, "Improving the quality of in-service emission compliance based on advanced statistical approaches," in: 13. Internationale MTZ-Fachtagung Großmotoren.

5.2 Digital Transformation of RDE Calibration Environments: The Quest for Networked Virtual ECUs and Agile Processes

Jakob Mauss, Felix Pfister

Abstract

This paper presents a fully virtual (that is, an entirely simulation-based) environment that can be used to move certain work steps of an RDE calibration process to the Windows PC. The environment can also be connected to real (as opposed to simulated) powertrain components, leading to a mixed (real/virtual) calibration environment. The environment is composed of the commercially available tools Silver and CarMaker. We discuss requirements and challenges for the virtualization of RDE calibration in general, as well as features and limits of the presented tool chain.
Kurzfassung Dieser Beitrag beschreibt eine vollständig virtuelle (also restlos simulations-basierte) Werkzeugkette für die RDE Applikation. Damit lassen sich bestimmte Arbeitsschritte auf Windows PC verlagern. Die Werkzeuge können außerdem mit echten Powertrain Komponenten gekoppelt werden, um eine gemischte (real/ virtuelle) Applikationsumgebung darzustellen. Die Werkzeugkette besteht aus den kommerziell verfügbaren Softwareprodukten Silver und CarMaker. Wir diskutieren Anforderungen für die Virtualisierung von RDE Applikation im Allgemeinen, sowie Eigenschaften und Schwächen der präsentierten Werkzeugkette. 1. Introduction Digital transformation is everywhere and has risen to the top of CEOs’ strategic plans. Two of the attributes that are typically used when talking about digital transformation is “agility” and “virtualization”: the power to move quickly and innovate fast using virtual (rather than real) versions of something. Now, here is the challenge and the topic of this paper: Digitize the RDE calibration and test process! Make it agile! Make it virtual! Let’s face it: Today, RDE calibration and testing is still mostly performed outdoor: on real roads, with real drivers and real cars. One of the challenges will be to integrate this traditional testing business (which will not disappear) with the digital space so that they complement and work alongside one another. We see three topic areas at this intersection of the traditional and the digitally enabled RDE testing business. 174 <?page no="185"?> 5.2 Digital Transformation of RDE Calibration Environments a. Digitize the RDE proving ground, the driver and the powertrain-plants and vehicle hardware: It will be described how virtual 3D RDE routes and route topologies are generated effectively and time-efficiently out of real-world data or virtual maps. A virtual driver is then driving these profiles. Plant models include engine models, transmission and driveline models as well as tire/ wheel and vehicle models. b. Virtualize your control units (target hardware): It will show how entire control units are virtualized in a way that includes (i) the control software supplied by Tier-1s, (ii) the OEM-specific functions and (iii) virtualization of the CAN / FlexRay network communication using real network topology: An industrialized full system simulation, where the processors, memories, peripheral devices and the environment are simulated in such detail that the target software cannot “tell” the difference from a real target system, is presented. The model runs “the same” binary software as would run on the real xCU target. Most innovations are, as we all know, in software, functions and in communication. There is therefore little doubt that it is this technology (see [1], [2]) that has a massive potential to master the challenges of complexity in virtual RDE testing. c. Allow for a smooth transition: To be sustainable and to be sure that all involved parts of the company and of the suppliers work in sync, this digital transformation requires the ability to mix real and virtual components, allowing a smooth transition from a real to a virtual development environment. Such an evolutionary path should, for example, begin at the powertrain or engine testbed, where the engine and its control unit is real and the rest is virtual and then smoothly and consistently extend all the way down to the pure virtual space. This aspect will be shown and discussed. Figure 1: Tool chain for virtual RDE calibration process. 
175 <?page no="186"?> 5.2 Digital Transformation of RDE Calibration Environments The remainder of the paper is structured as follows: Sections 2 and 3 describe the building blocks of the tool chain shown in Figure 1. Section 4 explains how to use the tool chain to smoothly move selected work steps from a real engine test rig to a pure virtual calibration environment. 2. Digitization of RDE proving ground, driver and vehicle 2.1 Digitization of the 3D-road and the manoeuvre catalogue Robust processes and confidence in simulation results are required before investing resources into model-based development methods and workflows. That’s why realand virtual world-test drives need first to be compared 1: 1 based on a number of clearly defined tests and robust processes and criteria. The task of a first step is therefore the definition of a so called “manoeuvre catalogue” (MC) used to compare real and virtual behaviour. The scenarios defined in this catalogue range from basic road tests for virtual vehicle identification and validation, such as  constant speed driving,  straight-line neutral coast down,  neutral coast-down in a turn,  in-gear coast down,  open-clutch engine-coast-down,  neutral run-up,  tip-in, tip-out to more complex test-drives, which include scenarios directly at the OEMs or on open proving grounds and on public roads around the world:  upshifts and downshifts  straight-line acceleration,  backward driving,  traffic jams with start-stop,  mountain driving, e.g. in torque vectoring sports mode,  at least one or optimally, two or more test-campaigns which are RDEcompliant. Let it be mentioned that IPG’s CarMaker (Version 8.x) already comes with an impressive off-the-shelf library of predefined digital road models including four real-world RDE cycles based on outdoor GPS measurements with barometric correction; Figure 2. These provide a good starting point and can readily be adapted by the user. Companies like “3D-Mapping” (3d-mapping.de) and “Atlatec” (atlatec.de) provide streamlined measurement services for CarMaker’s IPGRoad format. The digital transformation of the roads not only covers the road-geometry as such, but includes all details which impact on the vehicles and drivers trajectory planning (speed and curve planning), namely road-works, speed-signs, stop-signs and traffic-lights. It also includes ambient conditions (side-wind, temperature, humidity,…) all along the road. The generation of traffic (i.e. of other vehicle objects), which of course also impact on a vehicle’s speed and trajectory is discussed below in section 2.4. 176 <?page no="187"?> 5.2 Digital Transformation of RDE Calibration Environments Figure 2: CarMaker 8.x offers a large library of digital road models including four RDE-compliant tracks. The vehicle identification & validation manoeuvres are used to identify and validate the virtual prototypes (virtual twins) in subsequent project steps. Alignment with the various stakeholders and documentation of such type of manoeuvre catalogues might require one or two weeks of project work. Once such a MC has been set-up, it is fixed for many month or even years and is only slightly adapted from project to project. It can be, once the necessary changes have been made, applied to all kinds of vehicles and driver types. 2.2 On-road (outdoor) testing In the second step, the previously defined MC is executed “outdoors”, i.e. with real vehicles, on real roads with real drivers; figure 3. 
The following quantities are measured online  PEMS (portable emissions measurement system) values for NOx, PM, CO, HCs,  Powertrain CAN bus and OBD (on-board diagnosis) data,  Inertial Measurement Unit (IMU) data (GPS, accelerations and speeds), which might be corrected by  DGPS (Differential GPS) and barometric pressures. It has proven useful to also capture the front-view of the test-drives with a video camera (point-of-view shots). In order to be able to apply meaningful statistics on the result data such as the calculation of probability density functions (e.g. bell curves) and in order to judge the statistical significance of results, each manoeuvre should be repeated at least three times (RDE drives) or five-to-seven or even as often as nine times for the shorter manoeuvres. According to the project experience of the authors, this project step requires one 177 <?page no="188"?> 5.2 Digital Transformation of RDE Calibration Environments week of vehicle preparation (equip with measurement and data acquisition systems) and a two-week measurement campaign. Once the teams are experienced and the implemented processes are robust, this workload can be reduced to one week or even only 3 days. Figure 3: Accurate correlation with real-world testing is key: Outdoor testing with PEMS and GPS includes highway routes, traffic scenarios and steep grade test routes. 2.3 Building the virtual vehicle prototype In this section we focus on the building-process of the physical-virtual vehicle prototype. The building process for virtual control units is discussed in section 3 of this paper. Collecting “easy to identify” vehicle parameters to build-up the virtual prototype for general RDX-studies doesn’t require much time and has proven useful in the past. These include geometricas well as mass-geometric and basic kinematic data. Namely: vehicle length, vehicle width, vehicle height, track width, total front/ rear wheel load, wheel base, tire radius and suspension toe-in angles. These parameters, as well as the 3D geometry-model (e.g. in wave front .obj format) can, in many cases, be directly found and/ or downloaded from the internet (e.g., www.hum3d.com). A 3D-animation of the manoeuvre is helpful in order to increase the acceptance of virtual road testing and provides a direct plausibility-check for at least 12 fundamental vehicle dynamics quantities [Figure 4]. In situations where the project-team has access to CAE departments, these “easy to identify parameters” can advantageously be complemented by tire data (e.g., Pacejka tire data) and aerodynamics data (vehicle’s drag coefficient as a function of the side- 178 <?page no="189"?> 5.2 Digital Transformation of RDE Calibration Environments wind angle of attack in a range between 0 and 15 degrees). As an alternative, tire manufacturers (e.g. Michelin Engineering and Services) have mass geometric and suspension-kinematics data as well as data-driven models of nearly all their tires and some of those of those of their competitors readily available. Engineering service providers such as the AMFD Dresden GmbH offer the service to measure and identify physical prototypes according to defined processes. This source of information can also advantageously be integrated into the RDE virtualization-workflow. 
IPG's CarMaker incorporates a fully integrated dataset generator to generate a fairly decent multibody-system vehicle dataset for IPGCar. (DoE methods have an as yet untapped potential for generating vehicle datasets, a discussion of which is, however, not within the scope of this article.)

Figure 4: IPGMovie provides a direct plausibility-check for at least 12 fundamental real-world driving-dynamics quantities as well as of the road and traffic objects.

EPA annually publishes "Data on Cars used for Testing Fuel Economy" [https://www.epa.gov/compliance-and-fuel-economy-data/data-cars-used-testing-fuel-economy]. This data is derived from vehicle testing done at EPA's National Vehicle and Fuel Emission Laboratory in Ann Arbor (Michigan) and by vehicle manufacturers who submit their own test data to EPA. This data includes the very important A, B, C coast-down parameters and much more. We have been unable to identify a comparable source of data for European laws and regulations (e.g. from the German Kraftfahrt-Bundesamt). The authors of this paper would be grateful for any hint in this direction.

The complete "Digital Transformation Process (Real-to-Virtual)" of the physical-virtual vehicle requires one to three weeks (including documentation and reporting) when this task is performed for the first time, and anywhere from one hour to two days for subsequent campaigns, i.e. when a stable and robust process and project team have already been established in the company.
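To illustrate how such A, B, C road-load coefficients relate to a coast-down measurement, the following sketch fits the road-load polynomial F(v) = A + B·v + C·v² to a recorded neutral coast-down speed trace. The vehicle mass, the synthetic trace and all names are illustrative assumptions, not data from this paper or from the EPA database.

```python
import numpy as np

def fit_road_load(time_s, speed_ms, vehicle_mass_kg):
    """Fit F(v) = A + B*v + C*v^2 [N] from a neutral coast-down speed trace."""
    v = np.asarray(speed_ms, dtype=float)
    t = np.asarray(time_s, dtype=float)
    a = np.gradient(v, t)                    # deceleration during coast-down [m/s^2]
    force = -vehicle_mass_kg * a             # road-load force resisting the vehicle [N]
    C, B, A = np.polyfit(v, force, deg=2)    # polyfit returns the highest order first
    return A, B, C

# Example with a synthetic coast-down from 120 km/h (illustrative numbers only)
t = np.arange(0, 300, 1.0)
v = 120 / 3.6 * np.exp(-t / 180.0)           # made-up decay, stands in for measured data
A, B, C = fit_road_load(t, v, vehicle_mass_kg=1600.0)
print(f"A = {A:.1f} N, B = {B:.2f} N/(m/s), C = {C:.3f} N/(m/s)^2")
```

In practice the coast-down would be repeated and evaluated per speed interval according to the applicable procedure; the fit above only shows the basic relationship between the measured deceleration and the A, B, C parameters.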
As is well known, the European RDE legislation [EC, 2016] imposes many conditions that determine the validity of an RDE test drive or the lack thereof: the length of each section (urban, rural, motorway) has to be in a specific range and the average velocity and the driving dynamics (“v*a_pos”) must meet specific requirements. As a consequence a realistic, easy to use and flexibly parametrizable traffic simulation is an essential part of the RDE development methodology, which includes comprehensive variations of all speed and trajectory relevant real driving factors to ensure the overall 180 <?page no="191"?> 5.2 Digital Transformation of RDE Calibration Environments system robustness. This task is accomplished by a tool called “RDX Test Generator”, which is an integral part of IPG’s CarMaker/ TestBed. Essentially the RDX Test Generator “intelligently” equips the a-priori given 3D-route with a great number of additional road-signs (traffic lights and speed signs) all along the route. The route-sections originate from a real-driving database. The technology and mathbackground of this tool has previously been described in an SAE paper [Petters, 2019]. The details will not be repeated here. The workflow of the tool is summarized in figure 5. The results obtained with this approach show a very good correlation with outdoordriving measurement data. All generated real driving scenarios and cycles can be reused within the development process. 3. Virtualizing networked control units of the powertrain In this section, we briefly describe how to build a virtual ECU with Silver. A Silver virtual ECU is a model of a real ECU that runs on Windows PC and behaves much the same as the real ECU, simply because it runs the same control software, not because someone identifies and models the ECU's behaviour. Handcrafting an accurate ECU model (also known as 'restbus simulation') is infeasible in practice, given the code size and complexity of current control software, in particular for combustion engines. 3.1 About the software running on a real ECU Before we dive into the details of building a virtual ECU, let us look at the control software that runs on a real ECU. Control software can be developed within or without the AUTOSAR framework (there are also mixed cases), and it can be hand-coded by C programmers or generated by a tool from a graphical model. Such models are typically developed using MATLAB/ Simulink or Ascet by engineers without any programing skills, which is known as 'model-based development'. In all these cases, the resulting control software is finally given as C code which defines or implements  variables that can later be measured and calibrated with tools such as INCA and CANape. A typical ECU contains many thousands of variables.  functions that perform computations on the variables and might call other functions. A typical ECU contains many thousands of functions. A function can be: o platform independent: Such a function can be compiled on any platform including Windows/ Silver and shows essentially the same behaviour when executed, no matter on which platform. This notion of platform independence does not count differences caused by called functions. o platform dependent: A function that works only on the platform for which it has been developed. For example, reading the 4 bytes located at memory address 0xF0000210 returns system time in clock ticks on Tri- Core platforms, but results in an access violation (fatal error) on most other platforms, including Windows. 
Platform dependent functions are typically located in the lowest layer of an ECU architecture, often called driver layer, BIOS, HAL (hardware abstraction layer) or MCAL (micro controller abstraction layer). This way, developers of the control software 181 <?page no="192"?> 5.2 Digital Transformation of RDE Calibration Environments can migrate to another platform quickly by essentially replacing that driver layer.  tasks: these are the functions to be executed by the Os (operating system), typically platform independent and less than 50. A task typically just calls up to a few hundred other functions (application software and basic software) in a specific order fixed at compile time. For example, the 10 ms task is called by the Os every 10 ms and runs all functions that should run 100 times per second, while a crankshaft synchronous task might be called by the Os twice per revolution of the crankshaft and calls functions related to fuel injection, knock detection and measurement of the crankshaft position.  Os: functions (mostly platform dependent) that implement the real-time operating system, responsible for running all tasks exactly as scheduled by the developers of the control software. 3.2 Building a virtual ECU Silver offers build tools to turn control software that follows the above pattern into a virtual ECU. Two major scenarios are supported  C code: In this scenario, the build process is based on compiling the C code for Windows. This also covers scenarios where different parties (typically OEM and Tier1) own different parts of the C code. In such a case, each side compiles his code for Windows and shares the resulting object files (binaries, one object per C source file) with the other parties. Silver is then used to build a virtual ECU based on all object files.  Chip simulation: the elf or hex file (binary resulting from compiling all control software for the target platform, such as TriCore or PowerPC) is used to build a vECU, based on Silver's built-in chip simulators. Neither C code nor object files are required in this case. The resulting simulation runs a few times slower than the corresponding C code vECU. On a good PC, this is still fast enough for soft real-time simulations in many cases. For both frameworks, the build process is similar and involves the following activities  setup a configuration file that lists key properties of the virtual ECU.  replace platform-dependent functions by providing equivalent replacement functions for Windows/ Silver. The mechanism for actually replacing functions depends on the framework used (C code or chip simulation) and is handled by Silver in both cases. To ease the implementation of replacement functions, Silver is shipped with the SBS library (Silver Basic Software), which allows to quickly implement frequently used drivers, such as Can, Lin, FlexRay, and NvM, to name a few. To further simplify this step for the case of AUTOSAR ECUs, Silver is shipped with opensource implementations of key drivers of the standardized MCAL layer.  replace the Os, which is platform dependent and must therefore be replaced as well. Silver offers two options: either statically configure the tasks to run by the Os in the vECU configuration file (as shown in Fig. 6) or implement the services provided by the Os (such as ActivateTask, TerminateTask) for Silver. To support the latter option, Silver is shipped with an open-source implementation of key services of AUTOSAR-Os and OSEK, which is the dominant automotive pre-AUTOSAR Os. 
182 <?page no="193"?> 5.2 Digital Transformation of RDE Calibration Environments  build the vECU by running one of Silver's vECU build tools. Depending on the options given, the vECU is created as DLL for Silver, as FMU for Co Simulation, or as sfunction (mexw32 or mexw64 file) for MATLAB/ Simulink. Figure 6 shows a sample vECU configuration file for the case of a vECU build from Windows object files. In this case, the party building the vECU has no access to the C code of the control software but uses Windows object files received from another party. Lines 3 to 8 point to the location of files: object files used to build the vECU, dbc files used by Silver's Can driver to operate two virtual Can busses A and B, the calibration data (dcm file) to load at runtime into the vECU and the name of the a2l file to generate at build time by adapting a given original a2l file to the Windows platform. Lines 11 to 14 are used to statically configure the Os. As explained above, this can be omitted if the Os driver shipped with Silver is used instead. Line 17 to 20 list variables needed to connect to other modules running in silver, such as plant model or plotter. In the example, Silver's command-line tool sbsBuild is used to tune the config file into an executable model, either a sfunction, an FMU or a Silver module. Configuration files for chip simulation look very similar. Silver's configuration language for vECUs offers many other features, for example means to deal with multi-core platforms, such as Infineon's AURIX. Details are beyond the scope of the paper. Figure 6: Building a vECU from a vECU configuration file. The effort for building a virtual ECU as described above varies from 1 person week for a simple transmission controller to 4 person months for a high-end engine controller, assuming an experienced virtualization engineer. The result of such an effort is not just a virtual ECU, but also a build script that allows to quickly repeat the build for new versions of the control software. Such build scripts are for example used to enable nightly build and nightly tests in the context of agile development and continuous integration [3]. 3.3 Integrating Silver virtual ECUs into CarMaker A high-end ECU for engine control is connected to about 100 sensors and actuators. In order to run the corresponding virtual ECU on Windows, we need a detailed model 183 <?page no="194"?> 5.2 Digital Transformation of RDE Calibration Environments of the engine that matches the signal interface of the vECU. Such models are typically available at OEMs and Tier1 suppliers, for example from HiL test rigs. Silver can import and run such models, as developed with or provided by tools such as MATLAB/ Simulink, GT-Power, Amesim, or (for exhaust aftertreatment) Axisuite. This way, a fairly complete model of an automotive powertrain can be built in Silver as shown in Fig. 1. The entire powertrain model is then exported as FMU for Co-Simulation, using a built-in feature of Silver. CarMaker’s ModelManager offers a feature to import the resulting FMU. This does not require any programming or recompiling on the CarMaker side. When running the resulting CarMaker setup, the Silver powertrain FMU replaces the default powertrain shipped with CarMaker. Both tools run side by side then: CarMaker represents the virtual vehicle as well as the virtual test-drive environment with driver, road, traffic and ambient conditions, while Silver (shown in Fig 7.) allows to conveniently interact with the powertrain model, e.g. 
to plot selected variables of the ECU. or to even step and debug the control software. Such a simulation runs independent of real-time. If required, this allows the execution of very accurate simulation models, e.g. to simulate the details of the combustion process. Such detailed models are typically not real-time capable and therefore impossible to run on a test bench. Figure 7: A powertrain model with networked virtual ECUs running in Silver 4. Leveraging integrated digital and physical testing 4.1 Testbed topology The pure virtual simulation environment presented so far can be used to move certain work steps of an RDE calibration process (such as pre-calibration) to Windows PC. 184 <?page no="195"?> 5.2 Digital Transformation of RDE Calibration Environments However, full virtualization is not always feasible or desirable. For example, for accurate predictions of emissions, it might be necessary to keep at least the real combustion engine in the loop. In such cases, it may still be possible to “downsize” a powertrain test rig by replacing selected components by their virtual counterparts. Downsizing is meant to reduce the costs and complexity of a setup and to increase its agility. Technically, this is possible here because both CarMaker and Silver offer also a real-time simulation mode. In this mode, virtual ECUs running in Silver can connect to real CAN, Lin and FlexRey busses, which enables communication between virtual and real ECUs. [7] describes how AMG has used this to eliminate real transmission hardware and TCU from such a test rig. As schematically shown in Figure 8 below, the fundamental idea behind Car- Maker/ TestBed is to embed real powertrain components into a highly flexible manoeuvreand event based testing environment: The “real Unit Under Test” at the testbed (the powertrain component) exchanges real-time information (torque, speed, …) with the “virtual UUT” (the rest-of-the-vehicle) and the virtual proving ground (road, traffic, environment, driver and manoeuvre control). The ensemble of real and virtual UUT then undergoes “in-the-loop testing” in a simulated real-world driving situation. This extends the well-known x-in-the-loop approach to powertrain-testbeds, thereby bridging together the four testing environments “Office - Lab - Testbed - Road” into one integrated and open development environment. A CarMaker layer called “Test Manager” acts between Requirement Management Systems and test execution. Entire “test catalogues” with thousands of test-runs are managed in a comprehensive and flexible manner. Figure 8: Engine-In-The-Loop with CarMaker/ TestBed and Silver virtual ECUs 185 <?page no="196"?> 5.2 Digital Transformation of RDE Calibration Environments As everybody knows, the powertrain has evolved from being a combination of mechanics and electrical engineering into one of networking ECUs. This means that development and testing has to run the vehicle’s “information flow”, i.e. the information exchange between the different ECUs in its entirety. Surprisingly, virtual model and ECU-driven rest-bus communication at the test bed has not yet received the attention it deserves and requires. 
Instead of having live communication as in real-life RDX operation, developers so far generally had to put up with rather rudimentary 'rest bus simulation'. The 'x-in-the-loop' approach pursued in this paper closes this gap by 'firing', in real time, not only the power flow (torque and speed) but also the information flow interfaces in very realistic driving situations, all based on models. This turns the classic component test beds (e.g. the engine test bed) into a system-driven development environment which links the virtual world with the physical one, opening up a whole new world where most of the limitations imposed by the old type of test bed no longer apply.

4.2 The future of powertrain development is continuous, progressive and pervasive

It is difficult to overestimate the consequences of this approach: new powertrain development and testing economics are at work here. When the differences between the digital and the real world disappear, and when it is possible to switch between virtual and real components and test-drives with just a few mouse clicks, the entire economics of how we develop powertrains can change. Development can be switched from an anticipatory, prescriptive style to a more adaptive and agile style. It can be switched from a process based on the traditional "define-design-build" (V-cycle) to one based on continuous adaptation: "envision-explore-refine". The future of powertrain development is continuous, progressive and pervasive. Why? It is
• Continuous, because (system) testing occurs constantly, from concept to design release and beyond,
• Progressive, because (system) tests mature over time, re-using work from prior activities, and
• Pervasive, because it occurs at every level of the vehicle development, from top-level systems of systems all the way down to the components and the smallest parts.

Standardized practices, linear thinking and prescriptive processes are no match for today's fast-evolving, volatile and uncertain powertrain product development environments. The approach presented in this paper describes a clear path in a direction which has already started to fundamentally alter how new powertrain development is managed.

5. Conclusion

Virtual calibration and testing are, of course, not expected to fully replace their physical counterparts, but they offer a complement to real hardware environments with specific advantages: a virtual PC-based environment requires relatively little investment, can therefore be made available at the desktop of every test and calibration engineer, and provides highly reproducible (deterministic) results. This can be used to realize time and efficiency gains in series production projects. To summarize: there is much to contemplate when converting the real world of outdoor RDE testing into a fully virtual space. And although one could assert that the fundamental idea of "simulation" and virtualization has not changed over the last years, one can see that the functionality and possibilities of the tools of the trade have changed considerably: an integrated, industrialized tool chain for the digital transformation of the RDE calibration and test processes has been established to do exactly what the name implies: to drive component-level calibration and design decisions out of global vehicle and environment (i.e. system-level) considerations.
References [1] René Linssen, Frank Uphaus, Jakob Mauss: Simulation of Networked ECUs for Drivability Calibration, in: ATZelektronik, worldwide eMagazine, 4/ 2016. https: / / www.qtronic.de/ en/ mercedes-benz-engine-control/ [2] Yukata Murata: Chip Simulation for Virtual ECUs. Presentation by Honda at the QTronic User Conference 2018 Virtual ECUs and Applications 18th of October, Berlin, Germany. https: / / www.qtronic.de/ en/ qtronic-user-conference-2018/ [3] Johannes Foufas: Continuous integration and continuous validation with explorative tests for propulsion controls and calibration. Presentation by Volvo Cars at the QTronic User Conference 2018. https: / / www.qtronic.de/ en/ qtronic-user-conference-2018/ [4] Disch, C et.al: Experimental Investigations of Transient Emissions Behaviour Using Engine-In-The-Loop. IPG Apply and Innovate Conference 2012. [5] EC: European Commission Regulation (EU) 2016/ 427: Amending Regulation (EC) No. 692/ 2008 as Regards Emissions from Light Passenger and Commercial Vehicles (Euro 6), The European Commission, 10, Mar. 2016. [6] Petters, J. et.al. “Phenomenological Traffic Simulation as a Basis for an RDE Development Methodology,” SAE Technical Paper 2019-26-0346, 2019, doi: 10.4271/ 2019-26-0346. [7] Christian Mayr et. al: Test emissionsrelevanter Fahrzyklen auf dem Motorprüfstand. Presentation by AVL, AMG and QTronic at: MTZ-Fachtagung Simulation und Test, Hanau bei Frankfurt/ M., 25.9. - 26.9.2018. 187 <?page no="198"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development Michael Grill, Mahir Tim Keskin, Michael Bargende, Peter Bloch, Giovanni Cornetti, Dirk Naber Abstract With the introduction of "Real Driving Emissions" (RDE) powertrain simulation has become indispensable to identify critical operating conditions for the exhaust aftertreatment system at an early stage in the development process and to analyze possible corrective actions. Therefore a rudimentary virtual RDE calibration is also necessary in the early concept phase. For doing so, it makes a lot of sense to use 1D flow simulation to model effects such as boost pressure built-up, high/ low pressure EGR travel times or thermal inertia. There are two challenges regarding 1D simulation of RDE:  Combustion system development is usually done at single cylinder engines. It is necessary to integrate the results of the single cylinder engine in an effective way into a virtual full engine for RDE investigations and first virtual calibration  Moreover, it is typically necessary to investigate a vast number of RDE driving patterns, taking up much more computational time than it was the case previously with a single type-approval driving cycle. Consequently, "Fast Running Models" (FRM) that reduce the computational effort needed for flow simulation have been used to counteract this increase. However, it is inevitable that such an approach reduces the accuracy of the boundary conditions at intake valve closing (IVC), e.g. EGR rate, temperature and pressure, and crucially, the predictive power of quasi-dimensional burn rate, emission or knock models is very sensitive to these parameters. Such an approach thus yields results of questionable reliability, with the risk of overlooking critical conditions that have to be addressed later on at much higher costs. 
Existing data-based approaches avoid these challenges, but fail to depict relevant physical processes in the flow path predictively (boost pressure built-up etc.), making it hardly possible to assess technologies such as variable valve trains. Therefore Robert Bosch GmbH and FKFS developed a new simulation approach. The new tool presented in this paper thus combines a physics-based model for the gas exchange (intake and exhaust system) with a data-based model for the high pressure-part, using mean effective pressure and emission values derived from the test bench or detailed 1D flow simulation. This solves the aforementioned dilemma and provides at the same time an easy way to use test bench data for RDE simulations and powertrain analysis. The implementation of such an approach will be presented in this paper, along with some exemplary results showing how the new tool can be used to generate accurate and reliable results for RDE investigations at minimal computational cost. 188 <?page no="199"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development Kurzfassung Die Einführung von „Real Driving Emissions“ (RDE) hat die Antriebsstrangsimulation unverzichtbar gemacht, um früh im Entwicklungsprozess kritische Betriebszustände des Abgasnachbehandlungssystems zu erkennen und mögliche Gegenmaßnahmen zu analysieren. Deshalb wird eine erste, rudimentäre RDE-Kalibrierung auch in der frühen Konzeptphase benötigt. Hierfür bietet sich eine 1D-Strömungssimulation an, um Effekte wie Ladedruckaufbau, Hochdruck-/ Niederdruck-AGR-Laufzeiten oder thermische Trägheiten abzubilden. Dabei bleiben aber zwei große Herausforderungen:  Die Entwicklung des Brennverfahrens wird üblicherweise an Einzylinderaggregaten durchgeführt. Die Prüfstandergebnisse müssen dann auf effektive Art und Weise auf einen virtuellen Vollmotor übertragen werden, an dem RDE- Untersuchung und eine erste virtuelle RDE-Kalibrierung durchgeführt werden kann.  Darüber hinaus ist es erforderlich, eine sehr große Anzahl von RDE- Fahrprofilen zu untersuchen, was deutlich mehr Rechenzeit in Anspruch nimmt als es zuvor bei einem einzelnen Zertifizierungszyklus der Fall war. Als Reaktion darauf kommen verbreitet sogenannte „Fast Running Models“ (FRM) zum Einsatz, die diesen Anstieg durch eine Reduzierung der zur Strömungssimulation benötigten Rechenzeit kompensieren. Allerdings lässt es sich bei einem solchen Ansatz nicht vermeiden, dass die Genauigkeit der Randbedingungen zum Zeitpunkt Einlass-schließt (ES), z.B. AGR-Rate, Temperatur und Druck, deutlich abnimmt. Auf diese Parameter reagiert aber die Vorhersagefähigkeit von quasidimensionalen Brennraten-, Klopf- und Emissionsmodellen sehr sensitiv. Die mit einem solchen Ansatz gewonnenen Ergebnisse sind demnach von zweifelhafter Zuverlässigkeit, was das Risiko birgt, dass kritische Zustände übersehen werden, um die man sich dann später zu sehr viel höheren Kosten kümmern muss. Existierende datenbasierende Ansätze vermeiden diese Herausforderungen, sind aber nicht in der Lage, bedeutende physikalische Vorgänge in Luftpfad (Ladedruckaufbau etc.) vorhersagefähig zu beschreiben, was es zum Beispiel nahezu unmöglich macht, Technologien wie variable Ventilsteuerungen zu bewerten. Daher haben die Robert Bosch GmbH und das FKFS gemeinsam einen neuen Simulationsansatz entwickelt. 
Das neue Simulationswerkzeug, dass in diesem Beitrag vorgestellt wird, kombiniert eine physikbasierte Modellierung des Ladungswechsels (Ansaugstrecke und Abgassystem) mit einem datenbasierten Modell für den Hochdruckteil, das als Eingangsgrößen den effektiven Mitteldruck und Emissionswerte benutzt, die entweder vom Einzylinderprüfstand oder aus einer detaillierten Strömungssimulation gewonnen werden können. Damit wird das zuvor beschriebene Dilemma gelöst und gleichzeitig ein einfacher Weg geschaffen, auf dem Prüfstandsergebnisse für RDE-Simulationen und Antriebsstranganalyse nutzbar gemacht werden können. Die Implementierung eines solchen Ansatzes wird in diesem Beitrag vorgestellt, begleitet von einigen beispielhaften Ergebnissen, die aufzeigen, wie das neue Werkzeug benutzt werden kann, um genaue und belastbare Ergebnisse bei RDE- Untersuchungen mit minimalem Rechenaufwand zu erzeugen. 189 <?page no="200"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development 1 RDE Challenge and Existing SimulationTools From 2017 on new passenger cars in the EU have to fulfil exhaust gas emission limits also under RDE conditions. While the emission limits will tighten from current Euro 6d-TEMP to Euro 6d in 2020 and Euro 7 beyond the RDE regulation includes many drive requests that go beyond the current pre-defined driving cycles. E.g. on the one hand vehicle speeds up to 160 km/ h and severe accelerations on inclining roads can cause high raw emissions that have to be converted reliably by exhaust gas aftertreatment (EAT). On the other hand low ambient temperatures (down to -7°C), long single stop durations (up to 5 minutes) and downhill drives can lead to an underrun of the lower operational temperature limits of the EAT components. In the near future the latter issue might be even strengthened by two striven efficiency-enhancing measures - hybridization with longer ICE standstill periods and reduction of fuel demanding cold start heating strategies. A series of the RDE requests as well as the emission calculation itself are related to extensive drive sections or even the full RDE drive. As a consequence the substitution of RDE drives by a small number of representative powertrain operation sequences is only possible to a very limited extent. The consideration of RDE performance within powertrain and EAT concept evaluation causes thereby a high testing effort which might even be complicated by missing hardware during early concept phase. 0D/ 1D powertrain simulation including the prediction of the thermal behaviour of ICE and EAT as well as EAT conversion rates could help to virtualize RDE tests in order to save time and costs of powertrain development. Additionally simulation can be used for synthetic RDE driving profile generation to create “worst-case” drives, either by combining a variety of cut up measured real driving sequences or by utilizing stochastic driving parameter distributions derived from them. Fundamental condition for harnessing the potentials of virtual RDE tests is a simulation tool that constitutes an optimal compromise between predictability, flexibility and simulation time. 
Assessing existing simulation approaches shows that all of them have severe drawbacks regarding this demand profile (see Figure 1):  Detailed 1D flow simulation model coupled with quasi-dimensional models Physical modeling has proven itself to be a valuable tool in the development of engine technologies: especially 1D-CFD allows the modeling of the whole combustion engine with great flexibility and moderate effort. Air system dynamics, EGR-mixing and the gas exchange can be simulated accurately. Combustion models are available in a broad variety from measured burn rates over simple Vibe approximations up to phenomenological approaches that allow the prediction of the rate of heat release ([1]-[3]). The application ranges from very early stages in the development where the engine hardware is not necessarily defined or available until the support of function development and software calibration in the final development stages. Virtual hardware components like turbochargers can be matched to the respective requirements or EGR control strategies can be evaluated. All in all it represents a very powerful tool, albeit with a crucial drawback with regard to RDE boundary conditions: computational times (real time factor 50…200, [4]), although rather low compared to 3D-CFD simulations, are still quite high - too high for the high amount of different operating conditions that has to be tested for RDE development. To a limited degree, this can be mitigated by using smart fea- 190 <?page no="201"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development tures like master/ slave modes for the cylinder objects in full engine models, allowing to reduce the computational effort for the high pressure part distinctly (e. g. almost 75% in a four-cylinder engine). However, as the main part of the computational time is consumed by the flow simulation, the overall reduction is insufficient for RDE demands.  Fast-running 1D flow simulation model coupled with quasi-dimensional models This approach addresses exactly the already mentioned, high computational effort for 1D CFD flow simulation. The basic idea is to lump various flow volumes together, reducing thus the number of required calculations per time step while enabling a larger time step size at the same time. A considerable reduction in simulation time can be reached in this way, coming close to real time capability depending on the level of simplification (a factor of two compared to real time can be considered as a typical value), which is definitively enough to qualify for the "fast" tag. However, this approach inevitably changes the model's ability to predict pressure waves in the flow part - actually one of the most important benefits of 1D simulation compared to a pure 0D approach - leading to significant changes in the boundary conditions at IVC for the high pressure part. By nature quasidimensional models are very sensitive to these starting conditions (not unlike the real engine), so they should only be used with flow models that can deliver accurate boundary conditions for the combustion.  Data-based models/ mean value models Data-based approaches are the tools of choice when it is required to quantify characterize existing systems accurately. Here, former map-based interpolations are increasingly replaced by statistical models that are able to describe the desired result value in dependence of more than just one to three input parameters, which are typical for maps. 
This can either be necessary to describe results depending on the degrees of freedom of operation that modern engines provide or to represent the deviations from stationary operation an engine faces while operated under highly transient conditions. Besides the proven fulfillment of accuracy demands, trained data models can be evaluated with nearly no computational effort. Here the limitation is the extrapolation capability - data based models can only provide trustful information where training data was available. In particular this means that changes in the intake or exhaust system compared to the original configuration cannot be taken into account in the simulation model, making it highly inflexible and unsuitable for tasks like function development and calibration. 191 <?page no="202"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development Figure 1: Positioning of different simulation set-ups along the three basic requirements for RDE simulations (DET: detailed 1D flow simulation model coupled with quasidimensional models; FRF: Fast-running 1D flow simulation model coupled with quasi-dimensional models, DAT: data-based models/ mean value models) 2 Basic Idea for New Tool To get a fast, accurate and flexible simulation tool, a combination of data-based and physics-based modelling was chosen, an approach that has already been successfully applied with regards to emission models (see [5]):  Data-based representation of combustion characteristics and emissions to increase computational speed and to reduce the sensitivity of the high pressure part to inaccuracies from fast-running gas exchange calculations  Physics-based representation of gas exchange, air system dynamics and EGRmixing to maintain full flexibility To link both parts, the pressure at EVO is the most important quantity that has to be modelled. A dedicated interface was developed (dubbed “RapidCylinder ® ”) that calculates the pressure trace based on characteristic combustion values (such as the pressure at EVO) and sets the engine-out emissions to the desired values. This basic idea is depicted in Figure 2. While the physics-based model is basically the same as in existing simulation tools - the use of a Fast Running Model is recommended - the data-based part requires some preparatory work to be done. The process can be described as a three-stepsapproach, which is detailed in the following sections:  “Data Sampling”: Generating input data for the data-based model (engine measurement campaigns or detailed 0D/ 1D simulations)  “Training”: Generating of statistical models that derive the desired characteristic values from input parameters of the intake path  “Calculating”: The characteristic combustion values as well as emission values are fed to the “RapidCylinder ® ” that links the physics-based flow model to the databased in-cylinder model. 192 <?page no="203"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development Figure 2: Basic approach of new simulation tool 2.1 Data Sampling The necessary database for the data-based model can be either generated by means of detailed 1D flow simulation or on the test bench (typically one cylinder engines are used). In both cases, the first step is the design of experiments. Users should start with the question which control parameters are to be used as input parameters for the statistical model. The answer will obviously depend on the type of engine (i.e. 
gasoline or Diesel), but also other factors will decide which set of parameters represents a sensible choice. For instance, if a simulation model is used for data generation, the residual gas content at IVC can be easily and directly determined, whereas other quantities such as valve overlap would have to be used when data sampling is test-bench-based. Table 1 shows an exemplary, by no means exhaustive, list of parameters that could be varied to generate the necessary database. 193 <?page no="204"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development Table 1: Example of possible variation parameters for data generation
Variation parameters Diesel engine: engine speed, amount of injected fuel, rail pressure, EGR rate, boost pressure, start of injection (main injection), amount of fuel (pilot injection), […]
Variation parameters gasoline engine: engine speed, manifold pressure, air/fuel ratio, residual gas content, inlet valve closing (IVC), ignition timing, temperature at IVC, […]
The number of variation parameters as well as the individually set sensible variation range will determine the number of required operating points. Generally speaking, a one-digit number of variation parameters and some hundred resulting operating points will represent an adequate database. Depending on the desired accuracy of the statistical model and the number of output parameters that are of interest for the user, the number of operating points may increase. For instance, if a prediction of soot emissions with high accuracy is desired, a higher number of variation parameters will typically be needed. 2.2 Training The output parameters that have to be determined based on the generated database are already fixed within the RapidCylinder ® . There are both mandatory and optional quantities, as described in Table 2. Table 2: Output parameters of the statistical model that are fed to the RapidCylinder ®
Mandatory parameters: indicated mean effective pressure of the high-pressure part (Shelby definition), cylinder pressure at exhaust valve opening (EVO), […]
Optional parameters: peak pressure, crank angle of peak pressure, emission values, […]
Finally, a statistical model is needed to connect the input parameters (as listed in Table 1) with the output parameters (as listed in Table 2). The statistical model then uses the generated database for training. For the simplest cases, i.e. when a very low number of input parameters is chosen, the statistical model could be as straightforward as a 2D or 3D table that can be directly stored in the flow simulation program. However, for more complicated cases, other tools will be more useful. Neural networks (for instance MATLAB-based) or Gaussian process modelling (for instance with ASCMO ® by ETAS GmbH) could then be used for the statistical model. In any case, the user only has to make sure that, based on the boundary conditions from the flow simulation, the statistical model calculates the parameters described in Table 2 and feeds them forward to the RapidCylinder ® . 194 <?page no="205"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development 2.3 Calculating The RapidCylinder ® , which is fed with the data from the statistical model, finally has to create a synthetic pressure trace that meets the desired target values (i.e. indicated mean effective pressure and pressure at EVO, plus optional parameters if specified); a minimal sketch of how such a statistical model could be trained and queried is shown below.
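As an illustration of the “Training” and “Calculating” steps, the following sketch fits a Gaussian process to a synthetic database and queries it for the two mandatory target values. The input and output names, value ranges and the placeholder data are invented for illustration, and scikit-learn's GaussianProcessRegressor merely stands in for the tools named above; it is not the implementation used with the RapidCylinder ®.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for the sampled database (Table 1 inputs -> Table 2 outputs).
# Columns: engine speed [rpm], manifold pressure [bar], lambda [-], ignition timing [deg]
X = np.column_stack([
    rng.uniform(1000, 5000, 400),
    rng.uniform(0.4, 2.2, 400),
    rng.uniform(0.8, 1.3, 400),
    rng.uniform(-10, 40, 400),
])
# Targets: IMEP of the high-pressure part [bar] and cylinder pressure at EVO [bar],
# generated here from an arbitrary placeholder function, not from engine data.
Y = np.column_stack([
    2 + 8 * X[:, 1] + 0.02 * X[:, 3] + rng.normal(0, 0.1, 400),
    1.5 + 2.5 * X[:, 1] + rng.normal(0, 0.05, 400),
])

kernel = RBF(length_scale=[1000, 0.5, 0.1, 10]) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

# During the flow simulation, the boundary conditions from the gas-exchange model
# are fed to the statistical model at every cycle...
boundary_conditions = np.array([[2500, 1.4, 1.0, 15.0]])
imep, p_evo = gp.predict(boundary_conditions)[0]
# ...and the predicted target values are handed on to the pressure-trace generation.
print(f"IMEP target: {imep:.2f} bar, pressure at EVO: {p_evo:.2f} bar")
```

In this sketch the predicted targets are simply printed; in the tool they are passed to the RapidCylinder ® for the pressure-trace generation described next.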
By doing so, the gap between intake and exhaust path in the flow simulation model is closed and the model can run like any conventional 1D CFD flow simulation. Figure 3 illustrates how a pressure trace algorithm is used to generate the pressure trace from the target values. This is done automatically within the RapidCylinder ® source code and does not require any user input. Additionally, the temperature and heat transfer rate for every time step are estimated, as these could be useful quantities for the design of the engines cooling system or the dimensioning of components. Furthermore, if emission values are specified by the statistical model, the RapidCylinder ® will make sure that the exhaust gas has exactly the desired composition in the flow simulation, which can be in turn useful for modelling the exhaust aftertreatment. Figure 3: Generation of pressure trace 3 Application and Exemplary Results In Figure 4, a section of a RDE drive cycle calculated with the RapidCylinder ® approach is compared with the results using the UserCylinder ® , both running in the same fast running 1D flow model. For the comparison, a RDE-ready 4-cylinder Diesel engine with 2l of displacement volume and a start-stop system was used. For a gasoline engine, the same quality of results can be expected. The RDE section includes the longest allowed single stop duration of five minutes. It can be seen that both calculation methods can follow the target vehicle speed reasonably well, thus proving the quality of the transient controllers. In this example, the computation time when 195 <?page no="206"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development using the RapidCylinder ® is reduced by a factor of 0.5 to 200 min compared to the UserCylinder ® calculation. At the first part of the cycle, including the standstill phase, the SCR catalyst wall temperatures of both approaches are almost identical. After the fast acceleration following the standstill phase of the cycle, the maximum difference of only 14 K is reached. For the rest of the drive cycle, the temperature difference is close to zero. These results clearly illustrate the benefit of the RapidCylinder ® in drive cycle calculations: While producing almost the same results as the detailed calculation of combustion processes, only half of the computational time is needed. Still, transient effects like the boost pressure built-up are accounted for in simulation. Especially when comparing different set-ups of engine periphery (like turbochargers) or different car configurations, the RapidCylinder ® allows an investigation of more variants in a shorter time, while simultaneously maintaining the high level of reliability of simulation results. Figure 4: RDE-application of RapidCylinder ® vs. UserCylinder ® : comparison of catalyst temperature Literature [1] Fandakov, A.; Grill, M.; Bargende, M.; Kulzer, A.: Two-Stage Ignition Occurrence in the End Gas and Modeling Its Influence on Engine Knock. SAE paper 2017- 24- 0001, 2017. 196 <?page no="207"?> 5.3 A new, Model-Based Tool to Evaluate RDE Compliance during the Early Stage of Development [2] Kaal, B.; Grill, M.; Bargende, M.: Transient Simulation of Nitrogen Oxide Emissions. SAE paper 2016-01-1002, 2016. [3] Grill, M.; Kaal, B.; Rether, D.; Keskin, M.: Strömungsmodelle zur Analyse und Optimierung von komplexen Systemen in transienten Zuständen. 17th Conference VPC - Simulation and Test, Hanau, 2015. 
[4] Mirfendreski, A.: Entwicklung eines echtzeitfähigen Motorströmungs- und Stickoxidmodells zur Kopplung an einen HiL-Simulator. Dissertation, Stuttgart, Universität, 2017. [5] Cornetti, G.; Kruse, T.; Huber, T.: Simulation of diesel engine emissions by coupling 1-D with data-based models. 14th Stuttgart International Symposium - Automotive and Engine Technology, Stuttgart, 2014. 197 <?page no="208"?> 6 MBC III 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures Thomas Kruse, Thorsten Huber, Holger Kleinegraeber, Nicola Deflorio Abstract To fulfill the RDE legislation, the consideration of dynamic effects becomes a crucial factor during engine optimization. Especially particle emissions differ significantly in transient operation compared to the steady state behavior. This has to be considered already during the engine base calibration at the test bench. Therefore, the classical steady state DoE approach for global engine optimization has to be expanded. This work presents a study on a GDI engine, where a methodology has been applied combining a data driven dynamic engine model with the ECU injection strategy. To build the dynamic engine model a transient DoE was performed on an engine test bench with variation of speed and load in a wide operating range and additionally all relevant calibration parameter of the injection system. For the modelling, an advanced Gaussian process with a NARX feedback structure was applied. The resulting transient model was then coupled with the relevant parts of the ECU injection structure in a newly developed simulation and optimization tool environment. This allows to predict and optimize transient gaseous and particle emissions for RDE cycles. Main optimization target has been the cumulative particle emissions and fuel consumption with the simultaneous consideration of additional constraints such as smooth calibration maps. This has been achieved by modifications of major injection parameters. Kurzfassung Zur Erfüllung der aktuellen RDE Gesetzgebung ist eine Berücksichtigung dynamischer Effekte ein entscheidender Faktor bei der Motoroptimierung. Insbesondere Partikelemissionen weichen bei transientem Betrieb signifikant vom stationären Verhalten ab. Dies sollte schon bei der motorischen Grundanpassung am Motorprüfstand berücksichtigt werden. Der klassische stationäre DoE Ansatz zur globalen Motoroptimierung muss daher erweitert werden. Diese Arbeit präsentiert eine Studie an einem Ottomotor mit Direkteinspritzung, bei der eine neue Methodik angewandt wurde, welche dynamische, datengetriebene Modelle mit der ECU Einspritzstrategie kombiniert. Zur Erstellung des dynamischen Motormodells wurde ein transientes DoE am Motorprüfstand durchgeführt bei dem neben einer Variation von Drehzahl und Last in einem weiten Betriebsbereich 198 <?page no="209"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures zusätzlich alle relevanten Parameter des Einspritzsystems variiert wurden. Zur Modellierung wurde ein angepasster Gauß Prozess mit einer überlagerten NARX Struktur angewandt. Das resultierende transiente Modell wurde dann zusammen mit den relevanten Funktionen aus der ECU Software für die Einspritzstrategie in einem neu entwickelten Simulations- und Optimier Tool gekoppelt. Dies erlaubt nun die Vorhersage und Optimierung von Gas- und Partikel Emissionen für RDE Zyklen. 
Hauptziel der Optimierung war eine Verbesserung der kumulativen Partikelemissionen sowie des Kraftstoffverbrauches bei gleichzeitiger Berücksichtigung zusätzlicher Kriterien wie glatte Kalibrierkennfelder. Dies wurde durch Modifikationen zentraler Einspritzparameter erreicht. 1 Introduction In the last years, the use of steady state data based DoE (Design of Experiments) models had become an established methodology for engine base calibration. Significant improvements in terms of efficiency and quality have been achieved [1, 2, 3]. Nevertheless, with the todays focus on RDE, dynamic effects has to be considered already during the engine base calibration to achieve an optimal result and avoid costly iterations. Especially the particle emission of modern direct injection gasoline engines is highly sensitive to transient effects. Extensions of the DoE approach to dynamic data based models has been developed and presented over the last years [4, 5] and successfully used mainly for validation purpose [6, 7]. However, dynamic models are today rarely used for direct parameter optimization in engine calibration. Opposite to the steady state case, an optimization of the calibration maps based on driving cycle weighted operating points is not applicable. Instead, sets of representative transient driving cycles for engine speed and load has to be parsed through the relevant part of the ECU function containing the calibration maps to be optimized before stimulating the dynamic data based model. The optimization is then performed on the cumulated outputs of the dynamic model. Figure 1 shows the necessary workflow for dynamic optimization. Figure 1: Workflow for optimizing ECU function parameter in combination with a dynamic model 199 <?page no="210"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures In this project, this methodology was applied on a 4-cylinder 1.3-liter turbo charged gasoline direct injection (TGDI) engine with variable valve timing and a high-pressure injection system capable of up to two additional split injections. For this engine, all above-mentioned parameter had been pre-optimized by a global steady state DoE using the ETAS ASCMO-STATIC tool and its Global Optimization add-on. Good steady state results for fuel consumption, particle emissions and further criteria could be achieved, but testing the calibration with transient driving cycles like WLTC or RTS95 resulted in significantly higher values in particular for particle emissions. The strongest dynamic impact on the particle emission is supposed to be caused by the parameter of the injection system, namely for this engine:  Fuel Pressure  Main Injection Timing (SOI)  Split Factor of the first split injection  Timing of the first split injection  Split Factor of the second split injection  Timing of the second split injection The relevant part of the ECU strategy that calculates these parameters depending on speed and load was available as a Simulink model. The task now consist in the following work steps: 1. Build a dynamic engine model describing the influence of the relevant engine outputs, mainly particle and fuel consumption on this 6 calibration parameter plus speed and torque (8 dimensions in total) 2. Describe the relevant part of the ECU strategy that calculates these parameter depending on speed and load with an adjusted function model 3. 
Combine both parts, the dynamic engine model and the ECU function model, in one tool environment and perform an optimization on the relevant targets, mainly cumulative particle mass and fuel consumption, plus additional constraints. 2 Dynamic Modeling 2.1 Concept A good consideration of dynamic effects with a data-driven modelling algorithm can be achieved by applying a superordinate model structure on top of a regression model. This approach is often referred to as “nonlinear autoregression with external inputs” (NARX) [8, 9]. That means the system input space is expanded with the feedback of past input and output values up to a certain time horizon (figure 2). In the following, the feedback values are referred to as features. The NARX approach transforms the dynamic identification problem into a quasi-stationary relationship with the new input vector x̃(k):
y(k) = f(x̃(k)) = f(x₁(k), x₁(k−1), …, x₂(k), x₂(k−1), …, y(k−1), …)   (1)
200 <?page no="211"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures where k indicates a discrete time step. Based on an available data set of measured input-output values, a standard data-driven regression can be used for modelling the nonlinear relationship f(x̃(k)). Regarding the core regression algorithm, Gaussian processes (GP) have proven to be the most suitable approach for general data modeling purposes [10, 11]. Figure 2: Principle of Dynamic Modeling using the NARX Approach However, in the case of dynamic modeling, the GP has to cope on the one hand with the high number of inputs caused by the NARX feedback structure and on the other hand with the large number of data points in transient measurements. To deal with that, a specific type of sparse GP with a reduced number of base functions has been used [12]. An additional important measure applied here to reduce the model complexity is a “feature selection” which automatically finds the relevant inputs (features) of the NARX structure and eliminates the irrelevant ones. This feature selection starts with an empty matrix of feedbacks looking back until a user-defined time lag (depending on the estimated memory time of the system). Then, a model is trained successively for each feature of this lookback matrix and the feature causing the largest increase in model quality is selected. This process is repeated until no new feature is found that causes a significant model improvement. The dynamic modeling framework described above is available in the ETAS ASCMO-DYNAMIC tool. It offers additional functionalities for data pre-processing, e.g. an appropriate downsampling, a model visualization and a module for the setup of a transient DoE. 2.2 Transient DoE To minimize the required measurement effort for the model building, a suitable transient DoE approach should be applied for the test planning. It has turned out that a space-filling approach, in which the amplitudes and the gradients of all inputs are varied following a Sobol sequence [13], provides very good modelling results. The transient DoE module of ETAS ASCMO-DYNAMIC allows the definition of gradient and amplitude bounds to consider known limits of the system under test and the desired test duration. Additionally, constraints between inputs can be applied, e.g. to avoid an overlap of the different injection timings or to limit the speed/torque range to the feasible area. Three different DoEs with increasing complexity have been set up for this project.
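The idea of such a space-filling transient test plan can be illustrated with a short sketch. All variable names, bounds and the ramp logic below are invented placeholders; the sketch is not the ASCMO implementation, it merely shows how Sobol-sampled amplitudes can be combined with ramp times that respect gradient limits.

```python
import numpy as np
from scipy.stats import qmc

# Space-filling transient DoE: amplitudes follow a Sobol sequence, and each
# target value is approached with a ramp whose duration is also Sobol-sampled
# but stretched where necessary so that user-defined gradient limits hold.
bounds = {                      # hypothetical amplitude bounds per input
    "speed_rpm":      (1000.0, 5000.0),
    "torque_Nm":      (20.0, 250.0),
    "fuel_press_bar": (50.0, 350.0),
    "soi_deg":        (-20.0, 20.0),
}
max_gradient = {                # hypothetical gradient limits per second
    "speed_rpm": 500.0, "torque_Nm": 100.0, "fuel_press_bar": 150.0, "soi_deg": 20.0,
}

n_steps = 64
sampler = qmc.Sobol(d=2 * len(bounds), scramble=True, seed=7)
raw = sampler.random(n_steps)                                # values in [0, 1)

names = list(bounds)
lows = np.array([bounds[n][0] for n in names])
highs = np.array([bounds[n][1] for n in names])

targets = lows + raw[:, : len(names)] * (highs - lows)       # amplitude part
ramp_raw = 1.0 + raw[:, len(names):] * 19.0                  # ramp times 1..20 s

test_plan = []
current = targets[0]
for target, ramps in zip(targets[1:], ramp_raw[1:]):
    step = {}
    for i, name in enumerate(names):
        # stretch the ramp if the requested change would violate the gradient bound
        min_ramp = abs(target[i] - current[i]) / max_gradient[name]
        step[name] = (current[i], target[i], max(ramps[i], min_ramp))
    test_plan.append(step)        # (start value, end value, ramp time in s)
    current = target

print(test_plan[0])
```

Each entry of the resulting test plan corresponds to one segment of the kind that is exported to the test bench as start value, end value and ramp time.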
In the first 201 <?page no="212"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures DoE the parameters speed, torque, fuel pressure and start of main injection (SOI) have been varied, assuming reasonable amplitudes and gradients derived from some available aggressive test cycles. For the second and third DoE, the parameters timing and split factor have been added for double and then triple injection. Figure 3 shows the resulting DoE traces for the most complex DoE with in total eight parameters, considering all split injections. Figure 3: Traces of the Dynamic DoE with eight inputs proposed by ETAS ASCMO The transient DoE can be exported to the test bench system in two different ways: either as the original traces with a user-defined sampling rate (e.g. 0.1 s) or as a list with start and end values and the corresponding ramp times for each parameter. The second option is the easiest one to handle for most test bench systems and was chosen in this study. The transient DoE was run on an AVL Puma Open TM 1.5 automation system in combination with the INCA MCE iLinkRT technology on an ETAS Real-time Hardware ES910.3 to guarantee fast ECU access [14]. The test bench has been equipped with a fuel flow meter, a fast gas analyzer, a particle counter and an AVL Microsoot Sensor TM to record the relevant outputs. 2.3 Modeling Results The measurement of the three DoEs required approximately one hour each, and all three resulting training data sets have been imported from the standard MDF4 format into ETAS ASCMO-DYNAMIC to perform the modeling process as described in chapter 2.1. A first analysis showed that a downsampling from 10 Hz to 2 Hz was possible without losing the dynamics of the signals. An appropriate downsampling is always recommended for any dynamic modeling to reduce the model complexity in terms of lookback length and data size. The main outputs to be modelled have been the fuel mass from the flow meter and the particle mass from the AVL Microsoot Sensor TM . Additionally, the gaseous emissions NO x , HC and CO have been considered for the modelling. The PN measurements from the particle counter showed relatively poor repeatability and were not considered further in this study.
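The lookback matrix and the greedy feature selection described in section 2.1 can be mimicked in a few lines. The sketch below uses synthetic data and a plain ridge regression in place of the sparse Gaussian process, so it only illustrates the selection mechanism, not the ASCMO implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

def lagged_matrix(X, y, max_lag):
    """Build the NARX candidate features: x(t-k) and y(t-k) up to max_lag."""
    cols, names = [], []
    for lag in range(max_lag + 1):
        for j in range(X.shape[1]):
            cols.append(np.roll(X[:, j], lag))
            names.append(f"x{j}(t-{lag})")
    for lag in range(1, max_lag + 1):
        cols.append(np.roll(y, lag))
        names.append(f"y(t-{lag})")
    F = np.column_stack(cols)[max_lag:]        # drop rows containing wrapped values
    return F, names, y[max_lag:]

def greedy_feature_selection(F, names, y, tol=1e-3):
    """Add the feature with the largest RMSE improvement until none helps."""
    selected, best_rmse = [], np.inf
    while len(selected) < F.shape[1]:
        scores = []
        for i in range(F.shape[1]):
            if i in selected:
                scores.append(np.inf)
                continue
            model = Ridge(alpha=1.0).fit(F[:, selected + [i]], y)
            pred = model.predict(F[:, selected + [i]])
            scores.append(np.sqrt(mean_squared_error(y, pred)))
        i_best = int(np.argmin(scores))
        if best_rmse - scores[i_best] < tol:
            break
        selected.append(i_best)
        best_rmse = scores[i_best]
    return [names[i] for i in selected]

# Synthetic example: the output depends on the current input and on y(t-2).
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 2] + 0.8 * X[t, 0] + rng.normal(scale=0.05)

F, names, y_t = lagged_matrix(X, y, max_lag=5)
print(greedy_feature_selection(F, names, y_t))
```

On such synthetic data the procedure typically recovers the lagged features that actually generate the output; the same principle, applied with the sparse GP, yields the 18 features found for the particle mass model.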
Figure 4: Results of the “Feature Selection” for the best model of particle mass 203 <?page no="214"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures Figure 5 shows the comparison of the model prediction for the fueland particle-mass from ETAS ASCMO-DYNAMIC with a validation measurement from a WLTP cycle. The fuel mass can be predicted with a high accuracy (RMSE of 0.44 kg/ h on a range of 10 kg/ h) while the particle mass shows a higher deviation (RMSE of 0.55 mg/ m 3 ) on a range of 5 mg/ m 3 ), but the transient peaks can be predicted fairly well. The gaseous emission, especially NO x , could also be prediction in a sufficient quality. Figure 5: Comparison of the Model Prediction (blue) with a validation measurement (black) of Fuel-and Particle Mass for an extract of a WLTC 3 Tool environment for ECU Function Optimization In the automotive industry, physical function models are widely used in today’s ECUs as virtual sensors or feedforward controller. Prominent examples are models for the engine torque, for the cylinder air-charge or for exhaust gas temperatures [15]. These models typically contain many parameters as n-D maps, curves and values, combined in a complex, physically motivated structure. To obtain a sufficiently good model accuracy, these parameters have to be calibrated based on real measurements from an engine or a vehicle equipped with special sensors. This calibration is a complex optimization task: the deviation between the measured output and the modelled output has to be minimized by tuning all parameters simultaneously. Performing this optimization manually can be very time consuming or even impossible since many models contain thousands of parameter values. Specific in-house script based optimizer designed for individual models or ECU functions are often used at different companies. 204 <?page no="215"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures To facilitate and generalize this important calibration task, ETAS has developed the easy to use tool ETAS ASCMO-MOCA, which can be configured for different models and contains an efficient optimizer. MOCA here stands for “Model Calibration” and the tool is part of the ASCMO tool family for model-based calibration [16]. A schematic overview of the generic optimization problem that should be solved is given in figure 6. Figure 6: Calibration task of optimizing an ECU model (example: engine torque) by fitting its parameters (maps and curves) to measured reference values In the center, a graphical representation of the physical model whose parameters should be calibrated is shown. As an example, a simplified representation of a typical model for the torque of a gasoline is shown. The model contains three maps (best inner torque, best ignition timing, drag torque) and one curve for the ignition efficiency. With the right calibration of these parameters, the model should predict the engine out torque for all combinations of speed, air mass flow and ignition angles with a high accuracy. A real torque model used in today ECU’s has a similar structure but has additional inputs as e.g. air-fuel ratio, camshaft positions and valve lift and consist of ten or more maps and curves. For the model calibration, measurements from a test bed or from a vehicle equipped with real sensors are performed, covering the complete input space. 
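The structure sketched around Figure 6 can be written down compactly. The following sketch is a strongly simplified stand-in with invented map contents and axes; a production torque model has more inputs, maps and curves.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator, interp1d

speed_axis = np.linspace(1000, 6000, 6)          # rpm
air_axis = np.linspace(100, 800, 8)              # mg/stroke

# Placeholder map data; in a real ECU these values are the calibration parameters.
best_torque_map = np.outer(np.ones(6), air_axis) * 0.4          # Nm
best_ignition_map = 40 - 0.03 * np.outer(np.ones(6), air_axis)  # deg BTDC
drag_torque_map = np.outer(speed_axis, np.ones(8)) * 0.004      # Nm

best_torque = RegularGridInterpolator((speed_axis, air_axis), best_torque_map)
best_ignition = RegularGridInterpolator((speed_axis, air_axis), best_ignition_map)
drag_torque = RegularGridInterpolator((speed_axis, air_axis), drag_torque_map)

# Ignition efficiency curve over the retard from the optimal ignition timing.
eta_ign = interp1d([0, 10, 20, 30], [1.0, 0.95, 0.82, 0.62])

def engine_torque(speed, air_mass, ignition):
    """Simplified feedforward torque model: best torque scaled by the
    ignition efficiency, minus the drag torque."""
    point = np.array([[speed, air_mass]])
    retard = np.clip(best_ignition(point) - ignition, 0, 30)
    return float((best_torque(point) * eta_ign(retard) - drag_torque(point))[0])

print(f"{engine_torque(2500, 450, 20):.1f} Nm")
```

Calibrating such a model means adjusting the map and curve values until the predicted torque matches the measured torque over the whole data set, which is exactly the optimization problem formalized next.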
MOCA offers different options to make the ECU function to be calibrated accessible for the optimization. In the case of functions consisting only of algebraic combinations or logical operations, such as binary switches, between calibration parameters and data channels, the ECU function can easily be replicated in the tool as a formula, with a user-friendly editor providing all imported data channels, calibration parameters and intermediate nodes. If the ECU function is available as a Simulink model, a direct connection to the Simulink model can be established, whereby all available information from the Simulink model is matched automatically to the data and parameters in MOCA. Further ECU model formats supported by MOCA are ASCET models and the recently established FMI (Functional Mock-Up Interface) standard. 205 <?page no="216"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures Once the ECU function is accessible in MOCA, the main optimization target is to minimize the deviation between the measured outputs Y_i,measured and the corresponding outputs predicted by the model Y_i,predicted for every measurement i. A good general criterion for the deviation is the sum of squares over all residuals Y_i,predicted − Y_i,measured. An additional optimization target is the smoothness of the calibration maps, to avoid overfitting and to provide a good extrapolation capability of the model. This can be achieved by introducing an additional penalty term in the optimization target function that describes the roughness R of a map by the 2nd derivative of the map output z over the input axes x and y at the Qx and Qy breakpoints at the positions lx and ly:
R = (1/Qx) Σ_lx (∂²z/∂x²)²|_lx + (1/Qy) Σ_ly (∂²z/∂y²)²|_ly
The optimization problem can now be formulated as follows:
argmin_p [ Σ_i (Y_i,predicted(p) − Y_i,measured)² + Σ_k S_k · R_k ]
S_k is an individual smoothing factor that can be different for each of the M calibration maps k. Additional criteria such as map gradient limitations, bounding maps or limit values for outputs can also be considered. In case the model output Y_i,predicted(p) is described as an algebraic formula containing the calibration parameters p, the analytic gradients of the target function with respect to all parameters p are calculated in MOCA, making the optimization very fast. In the example of the torque model calibration, even for a complex model with more than 10 maps and curves consisting of more than 1,000 values and typical data sets of more than 10,000 measurements, the optimization process needs only a few minutes. In case the ECU function is represented as a Simulink model, the optimization is significantly slower, while models in the ASCET or FMI standard come close to the speed of the formula-based optimization. Since its market introduction, MOCA is today widely used for ECU function calibration [3] but also for the calibration of plant models used for HiL, SiL or MiL (XiL) simulation [17]. 4 Optimization of an ECU function on a Dynamic Engine Model It has turned out that, beyond the direct data-driven function optimization described above, MOCA is the ideal tool environment to combine transient or steady-state plant models with an ECU function, enabling the joint optimization of the ECU calibration parameters with reference to the output of the plant model (see figure 1).
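A minimal sketch of this coupled setup is given below. All of its pieces are crude placeholders: the "plant" is a toy function rather than the trained dynamic model, a single map stands in for the six injection maps, and the speed/load trace is synthetic. It only illustrates the loop of parsing a trace through the ECU function, feeding the plant model and optimizing the cumulated output.

```python
import numpy as np
from scipy.optimize import minimize

speed_axis = np.linspace(1000, 5000, 5)          # rpm breakpoints of the SOI map
load_axis = np.linspace(20, 200, 5)              # Nm breakpoints

def ecu_soi(map_values, speed, load):
    """Map lookup: the part of the ECU function containing the calibration map."""
    i = np.clip(np.searchsorted(speed_axis, speed) - 1, 0, 3)
    j = np.clip(np.searchsorted(load_axis, load) - 1, 0, 3)
    return map_values.reshape(5, 5)[i, j]        # lower-breakpoint cell, for brevity

def plant(speed, load, soi):
    """Toy stand-in for the dynamic engine model: returns a particle mass rate."""
    return 0.01 * load * (1 + 0.02 * (soi - 5.0) ** 2) + 0.001 * speed

# A representative speed/load trace (e.g. resampled from a WLTC), here synthetic.
rng = np.random.default_rng(5)
speed_trace = rng.uniform(1000, 5000, 600)
load_trace = rng.uniform(20, 200, 600)

def cumulated_particles(map_values):
    soi = np.array([ecu_soi(map_values, s, l)
                    for s, l in zip(speed_trace, load_trace)])
    pm = plant(speed_trace, load_trace, soi)
    roughness = np.sum(np.diff(map_values.reshape(5, 5), axis=0) ** 2) \
              + np.sum(np.diff(map_values.reshape(5, 5), axis=1) ** 2)
    return pm.sum() + 0.1 * roughness            # smoothness penalty on the map

x0 = np.full(25, 10.0)                           # initial SOI map [deg]
res = minimize(cumulated_particles, x0, method="L-BFGS-B",
               options={"maxiter": 50})
print(f"cumulated particle metric: {res.fun:.1f}")
```

This is only the skeleton of the workflow from figure 1; the study uses the trained NARX model and the replicated injection function in its place.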
This is applied in this study using the ASCMO-DYNAMIC plant model described in chapter 2 and the ECU function for the injection strategy containing all 6 injection relevant calibration maps: Fuel pressure, timing of main injection, timings and split factors for the two split injections. The ECU function was available originally in Simulink but to speed up the optimization process, it was replicated as formula in MOCA. The Simulink model was additionally connected to MOCA to validate the correct replication by comparing the 206 <?page no="217"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures output of the two function models. Further steps to set up the optimization in MOCA has been:  Loading the calibration maps with the steady state optimized pre-calibration available in a standard calibration format (DCM)  Loading a set of representative engine speed and torque traces from different driving cycles (WLTP, RTS95) as stimuli  Importing the ASCMO-DYNAMIC engine model and connect its inputs with the outputs of the ECU function and the engine speed and torque traces  Defining the optimization criteria The optimization criteria have been the minimization of the cumulated particleand fuel mass and, with a weaker weight, the cumulated NOx emission. As another hard constraints a minimum duration and separation of the split injection has been imposed to consider the limitations of the injectors. Additionally, smoothing factors for the calibration maps have been applied. The optimization then took approximately one hour. Figure 7 shows the setup of the optimization criteria in MOCA together with one of the considered calibration maps, the start of main injection (SOI) before (grey) and after (colored) optimization. For the SOI map, the optimizer proposed a significant shift to smaller values (later injection timing) compared to the steady state optimized map. Figure 7: MOCA Setup to optimize ECU parameter on a dynamic engine model with respect to particle-and fuel mass. Referenceand optimized SOI maps are shown 207 <?page no="218"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures Figure 8 shows the predicted results of particle mass and fuel consumption for the referenceand the optimized calibration. While for the cumulated fuel consumption the optimization results in a 1.5 % improvement in the cumulated sum, the particle mass is reduced by 35 %. For the particle mass, the comparison of the actual values for referenceand optimized calibrations shows that most of the originally high transient peaks are significantly reduced. The NOx emission (not shown here) could be kept constant. Figure 8: Results of the optimization for particleand fuel mass in comparison to the original reference calibration 5 Conclusion and Outlook That study on a modern GDI Engine demonstrated, how dynamic data based engine models can be combined with ECU functions for systematic optimization of RDE cycle emissions. This is an important contribution to a frontloading of the calibration process: Dynamic effects can considered already during the early phase of base calibration on 208 <?page no="219"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures the test bench. 
One focus of future work is a further improvement of the transient modeling in terms of shorter model training times and an even better consideration of long memory effects. For that purpose, new approaches like “Recurrent Neural Networks” (RNN) have been implemented and tested with promising results. Literature [1] Klar, H, Klages, B., Gundel, D., Kruse, T. et al. “New methods for efficient model based engine calibration” Presented at the in 5th International Symposium on Development Methodology, 2013 [2] Yooshin Cho, Donghee Han “The Global DoE Model Based Calibration and the Test Automation of a Gasoline Engine”, 9 th International Calibration Conference, 2017 [3] James Miller, Matthew Grove, Dominic Baker, James Taylor, David Pates “Application of Advanced Modelling Techniques in the Development of a High Specific Output 3 Cylinder Gasoline Engine”, SAE World Congress, 2019 [4] T. Gutjahr, H. Ulmer, C. Ament. „Sparse Gaussian Processes with Uncertain Inputs for Multi-Step Ahead Prediction”, Symposium on System Identification, vol. 16. IFAC, 2012. [5] Jens Schreiter, Heiner Markert, Michael Hanselmann Duy Nguyen-Tuong, Christian Bohne “Large Scale Transient Data-based Models for the Simulation of Vehicle Power Demand” 7th Conference on “Design of Experiments (DoE) in Engine Development” [6] Tom Berghmans, Aymeric Rateau, Kotaro Maeda, Thiebault Paquet “Development of a Simulation Platform for Validation and Optimisation of Real-World Emissions” 9th International Calibration Conference, 2017 [7] Y. Murata Y. Nishio, Y. Yamaya, M. Kikuchi “Model Based Engine Calibration for RDE”, SIA Powertrain Conference, 2018 [8] J. Sjöberg, Q. Zhang, L. Ljung, A. Benveniste, B. Delyon, P.-Y. Glorennec, H. Hjalmarsson, and A. Juditsky „Nonlinear Black-Box Modeling in System Identification: a Unified Overview” In Automatica, volume 31, Elsevier, 1995 [9] O. Nelles. Nonlinear System Identification “From Classical Approaches to Neural Networks and Fuzzy Models” Springer, 2001. [10] U. Schulmeister, M. Boßler, T. Huber, M. Johannaber, T. Kruse, H. Ulmer “Employment of advanced Simulation Methods in Calibration and Development Process” 2nd International Symposium on Development Methodology, 2007 [11] B. Berger, F. Rauscher, and B. Lohmann “Analysing Gaussian Processes for Stationary Black-Box Combustion Engine Modelling” IFAC World Congress, volume 18, 2011. [12] J. Quiñnonero-Candela, C. E. Rasmussen, and C. K. I. Williams 2Approximation Methods for Gaussian Process Regression” Technical report, Microsoft Research, 2007. 209 <?page no="220"?> 6.1 Optimizing Gaseous and Particle Emissions of a GDI Engine by Coupling a Dynamic Data Based Engine Model with ECU Injection Structures [13] I.M. Sobol: On the distribution of points in a cube and the approximate evaluation of integrals. U.S.S.R. Computational Mathematics 7(4), 1967, 86-112. [14] K. Schnellbacher “Rapid Measurement and Calibration utilizing the Fast ECU Access” 3rd International Symposium on Development Methodology, 2009 [15] R. Isermann: Engine Modeling and Control, Springer, 2014 [16] T. Kruse, T. Huber, H. Kleinegraeber “New Approach to Optimize Parameters in Complex Physical ECUand XiL Models” Powertrain Modeling Conference, 2016 [17] I. Hein, C. Fuchs, R. Diener, H. 
Markert: Software in the Loop methodology as leading methodology for efficient Development of future powertrain systems, 10th Emission Control, 2019 210 <?page no="221"?> 6.2 Risk Averse Real Driving Emissions Calibration under Uncertainties Alexander Wasserburger, Nico Didcock, Stefan Jakubek, Christoph Hametner Abstract Automotive powertrains are operated in a wide range of conditions. Nonetheless, the optimisation and calibration process of automotive powertrain systems are typically performed under controllable testbed conditions and do not consider uncertainties and random effects. Drive cycle variations, driver behaviour, traffic events and changing ambient conditions are just a few examples of possible uncertainties. The prediction quality of the models involved in the optimisation is generally limited as well. Additionally, the fit of the models might change over time due to material ageing or it could vary among multiple vehicles because of deviations in series production. Therefore, powertrains operate very well under the specific testbed conditions that were assumed in the optimisation process but there is little control over what actually happens during real world operation, which also implies breaching legislative thresholds. By taking the involved uncertainties into account during the optimisation process, more robust calibrations can be achieved. This is accomplished by a risk averse, stochastic optimisation approach, which minimises the risk of bad performance, rather than the nominal values of fuel consumption and emission. Kurzfassung Fahrzeugantriebe werden unter verschiedensten Bedingungen betrieben. Dennoch wird der Optimierungs- und Kalibrierungsprozess von Antriebsstrangsystemen typischerweise unter kontrollierbaren, gleichbleibenden Bedingungen auf Prüfständen durchgeführt und berücksichtigt keine Unsicherheiten und zufälligen Effekte. Fahrzyklusvariationen, Fahrerverhalten, Verkehrsereignisse und wechselnde Umgebungsbedingungen sind nur einige Beispiele für mögliche Unsicherheiten. Auch die Vorhersagequalität der in der Optimierung verwendeten Modelle ist in der Regel begrenzt. Darüber hinaus kann sich die Modellgüte im Laufe der Zeit, beispielsweise aufgrund von Materialalterung, ändern, oder sie kann wegen Abweichungen in der Serienproduktion von Fahrzeug zu Fahrzeug variieren. Infolgedessen verhalten sich Antriebsstränge optimal unter den spezifischen Prüfstandsbedingungen, die im Optimierungsprozess angenommen wurden, aber es gibt kaum Aussagekraft darüber, was während des realen Betriebs tatsächlich passiert, was auch die Überschreitung gesetzlicher Grenzwerte beinhaltet. Durch die Berücksichtigung der relevanten Unsicherheiten während des Optimierungsprozesses können robustere Kalibrierungen erreicht werden. Dies wird durch einen risikoaversen, stochastischen Optimierungsansatz erreicht, bei dem anstelle der Nominalwerte von Kraftstoffverbrauch und Emissionen das Risiko einer schlechten Leistung minimiert wird. 211 <?page no="222"?> 6.2 Risk Averse Real Driving Emissions Calibration under Uncertainties 1 Motivation The calibration of powertrains by means of optimisation of certain criteria like fuel consumption and exhaust gas emissions is usually validated on testbeds, where all operating conditions are known, controllable and repeatable. 
However, as a matter of fact, the performance of the vehicle in terms of consumption and emissions during real-world operation can differ greatly from the manufacturer’s specifications that were obtained on such testbed runs. The reasons for that divergence are manifold and range from varying ambient conditions to different driver behaviour. Figure 1 provides an exemplary overview of the various disturbances and uncertainties that might affect the engine performance during operation and should therefore be considered during the calibration process. The engine control unit (ECU) contains engine maps that provide the demand values of several engine parameters depending on engine speed and torque. Speed and torque are determined by the operation of the vehicle itself, i.e. current route, drive cycle and driver. For example, different drivers can exhibit various consumption profiles while driving the same route, which implies that a calibration may fit the drive style of one driver but it may be suboptimal for another. Similarly, drive cycle based calibration is prone to cycle overfitting, which implies good performance on the cycle the calibration is based on, but potentially bad performance on other drive cycles (see e.g. [1]). Moreover, the stochastic influence of physical parameters and ambient conditions like altitude, air temperature, wind, rain etc. influence the engine and powertrain operation and can lead to consumption and emission profiles that are vastly different from testbed results. Besides these external random effects, also internal uncertainties can occur. For example in the engine the ECU demands a specific value for a parameter but the actuators inside the engine can control that parameter only within a certain error tolerance. Also, the measurements in the involved control actions might be noisy and therefore optimality of the engine maps in the ECU might be compromised. Figure 1: Uncertainties in engine calibration. 212 <?page no="223"?> 6.2 Risk Averse Real Driving Emissions Calibration under Uncertainties Lastly, the models that are typically utilised in the optimisation of the ECU are uncertain as well. The models are usually based on noisy measurement data which leads to the models having a certain variation in their parameters and output as well. Moreover, the goodness of fit of a model might not be the same for supposedly identical vehicles because of material ageing and deviations in series production. The task now is to calibrate the powertrain in such a way that it is more robust against random disturbances and deviations. For example, minimise the engine’s fuel consumption and/ or emissions, but not for one fixed testbed setting, but, more realistically, for a broad variety of possible variations and scenarios. This is achieved by modelling the involved uncertainties as random variables that affect the powertrain performance. This implies that the optimisation involved in the calibration is not deterministic but stochastic. For example, the emissions to be minimised are no longer a deterministic value but a random variable with a probability distribution which depends on the engine maps. The stochastic optimisation problem is solved by considering describing statistics, so-called risk measures, of this distribution. The selection of appropriate risk measures has a large influence on the solution and reflects the risk aversion of the decision maker. 
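The effect described here can be made concrete with a small Monte-Carlo sketch, in which all numbers are invented placeholders: a nominal injection-timing demand value is disturbed by actuator scatter, and the resulting distribution of a toy emission quantity is examined rather than its nominal value.

```python
import numpy as np

rng = np.random.default_rng(123)

def emissions(soi):
    """Toy emission model, a placeholder for the real engine or vehicle model."""
    return 1.0 + 0.04 * (soi - 8.0) ** 2

soi_nominal = 6.0                                  # calibrated demand value [deg]
soi_realised = soi_nominal + rng.normal(0.0, 1.5, size=20_000)  # actuator scatter

sample = emissions(soi_realised)
print(f"nominal emissions : {emissions(soi_nominal):.3f}")
print(f"mean emissions    : {sample.mean():.3f}")
print(f"95 % quantile     : {np.quantile(sample, 0.95):.3f}")
```

Even in this trivial example the nominal value understates both the average and the tail of the distribution; treating the objective as a random variable, as done in the following section, makes such effects visible.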
2 Stochastic Approach for Calibration As mentioned above, the robust optimisation approach presented here is based on a stochastic formulation. Suppose the objective to be minimised is given by f(u), where u ∈ ℝⁿ is the decision variable. Various random disturbances and uncertainties are generally incorporated by expressing f as a function f(u, X) of the decision variable u and an m-dimensional random variable X. Depending on the type of considered uncertainty, the exact formulation may differ. 2.1 Stochastic Formulation for Various Types of Uncertainties Powertrain calibration and energy efficiency management systems are often based on optimising the performance on characteristic drive cycles. The obvious trouble with drive cycle based calibration is the possibility of overfitting the calibration to the specific cycle. Overfitting would produce calibrations which work well on the used drive cycle but might show poor performance on different cycles, for example during real-world operation. Keeping the diverse scenarios of vehicle usage in mind, it is reasonable to expect quite different calibrations from different drive cycles. Therefore, a stochastic approach that takes an entire set of drive cycles into account simultaneously leads to more robust results that perform well under a wide range of drive cycles. During the optimisation, all the cycles are evaluated according to some performance criterion, which leads to a performance probability distribution that is affected by the decision variable u. An extensive presentation of the proposed workflow based on a sample of various simulated drive cycles can be found in [5]. The approach presented therein can also be used in order to include the uncertainty related to different driver behaviours and varying ambient conditions that affect the speed-load trajectory of the engine, such as wind and slope. 213 <?page no="224"?> 6.2 Risk Averse Real Driving Emissions Calibration under Uncertainties As mentioned above, another source of uncertainty is related to the operation of the powertrain itself. Even under constant operating conditions, consumption and emissions may vary due to uncertainties in the realised ECU parameters. For example, the actual injected quantity typically spreads around the optimal value stored in the ECU. If this variance is not considered in the optimisation, this might lead to unexpected deviations in fuel consumption and emissions if unfavourable combinations of ECU parameters are realised. The inclusion of such uncertainties in a stochastic optimisation setting is straightforward. Instead of minimising f(u), the target function is altered to f(u + X), where X is a random vector with the same dimension as u. The distribution of X could for example be assumed to be normal with zero mean and a certain variance. The output Y = f(u + X) is not necessarily normally distributed, since f is generally not a linear function. The distribution of Y can be estimated using Monte-Carlo simulation. Often the output Y will consist of the sum of numerous outputs at different operating points, Y = Y_1 + Y_2 + ⋯ + Y_N. In that case, and under certain conditions, the distribution of Y can be approximated by a normal distribution by virtue of a central limit theorem (CLT). In such a case it suffices to estimate the mean and variance of Y, because the normal distribution is fully specified by those two values. Estimates can then be obtained by an adequate Taylor approximation. Lastly, another source of uncertainty to be mentioned here is the modelling itself.
Lastly, another source of uncertainty to be mentioned here is the modelling itself. Not only the model structure but also the estimated parameters are uncertain. Moreover, one model might have a different goodness of fit for different vehicles due to deviations in the production process. Apart from that, the measurement data used in the model generation are noisy and erroneous. Instead of using a Monte-Carlo simulation, the output distribution can be estimated using the estimated output variance. The estimation of the output variance depends on the model estimation algorithm. In some model structures, like local model networks, the variance may also depend on the input vector u. Using the knowledge of the variance, a sample of the output can be generated quickly without having to evaluate f(u) more than once.

The following subsection describes how the obtained distribution information is used to transform the stochastic optimisation problem into a deterministic one. This is achieved by calculating risk measures of the output distributions.

2.2 Risk Averse Optimisation

Random objectives cannot be optimised directly. This is due to the fact that the value of the objective is random and therefore one generally cannot determine an optimiser u* that optimises the objective for each possible random outcome. For this reason, the distribution of the objective itself has to be considered in the optimisation. This is achieved here by calculating statistics that describe certain aspects and properties of the underlying distribution. For example, one can calculate or estimate the expected value of the distribution and minimise that. This makes sense if one is interested in achieving good results on average. However, for some applications it might be more interesting to reduce the risk of extreme events like unreasonably high emissions. For that purpose, the following risk measures are introduced:

Definition: Let X be a continuous random variable and α ∈ (0, 1).
(i) The Value-at-Risk (VaR) at the confidence level α is defined as
VaR_α(X) ≔ inf{x ∈ ℝ : P(X ≤ x) ≥ α}.   (1)
(ii) The Conditional Value-at-Risk (CVaR) at the confidence level α is defined as
CVaR_α(X) ≔ E[X | X ≥ VaR_α(X)].   (2)

For a confidence level close to 1, these risk measures quantify the risk contained in the right tail of the distribution. VaR_α is the α-quantile of the distribution: it is the value which is not exceeded by the random variable with probability α. Minimising the Value-at-Risk implies minimising the best α·100 % of realisations of the random objective. However, similarly to the minimisation of the expected value, there is no control over the worst realisations. Even if the majority of realisations are good, the worst (1 − α)·100 %, which exceed the VaR by definition, could lead to unacceptable, extreme outcomes. In that case, the Conditional Value-at-Risk is a more appropriate choice. CVaR is the expected value of the realisations exceeding the corresponding VaR. It is therefore the average of the worst events. Minimising CVaR can be seen as the most risk-averse approach presented here, as it puts emphasis on the worst cases.

It has to be stressed that the risk measures themselves are deterministic values that contain certain information about the underlying distribution. Therefore, by applying risk measures to the objective distribution, stochastic problems are transformed into deterministic ones. For more background information on risk measures and their desired properties refer, for example, to [1, 2].
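Both risk measures can be estimated directly from a sample of the objective, for example the Monte-Carlo sample from Section 2.1. The sketch below shows plain empirical estimators of (1) and (2); the log-normally distributed test data are purely illustrative.

```python
import numpy as np

def value_at_risk(samples, alpha=0.9):
    """Empirical VaR: the alpha-quantile of the sampled objective, cf. eq. (1)."""
    return np.quantile(samples, alpha)

def conditional_value_at_risk(samples, alpha=0.9):
    """Empirical CVaR: mean of all samples at or above the VaR, cf. eq. (2)."""
    samples = np.asarray(samples)
    var = value_at_risk(samples, alpha)
    return samples[samples >= var].mean()

costs = np.random.default_rng(0).lognormal(sigma=0.4, size=5000)  # right-skewed toy cost sample
print(value_at_risk(costs), conditional_value_at_risk(costs))
```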
Details on the optimisation of risk measures can be found in [3, 4]. To sum up, the workflow of the proposed stochastic approach is as follows:
1. Define a stochastic performance criterion including the considered uncertainties.
2. Select a risk measure.
3. Optimise the risk measure with respect to the decision variable u ∈ ℝⁿ.
The risk measures can be estimated using the techniques proposed in Section 2.1.

3 Example

As an example, the case of drive cycle variations in the calibration of a Diesel engine, as presented in [5], is discussed. Note that this is only an example and the approach can be applied to various technical problems and uncertainties, as suggested in Section 2.1. We consider the following objective function to be minimised:

C(u_1, …, u_n) = Σ_i w_i f_FC(x_i, u_i) · p_FC + max(0, Σ_i w_i f_NOx(x_i, u_i) − S) · p_NOx   (3)

The cost C(u_1, …, u_n) is the aggregated cost arising from fuel consumption and emissions during completion of the drive cycle D, where f_FC and f_NOx denote fuel consumption and NOx emissions at operating point x_i. Each operating point x_i has a corresponding weight w_i. The prices of fuel consumption and emissions are denoted by p_FC and p_NOx, respectively. Emissions only imply costs if they exceed the fixed threshold S; otherwise the right summand is zero. Based on the random drive cycle D an optimisation is performed:

min_u ρ(C(u))   s. t.   u ∈ U   (4)

The operator ρ stands for an arbitrary risk measure. The constraint set U includes upper and lower limits for all components of u and smoothness constraints for the resulting engine maps.

The results of three different optimisations are illustrated in Figures 2-4. The objective function (3) was minimised using one single cycle in order to serve as a comparison to the results of the stochastic optimisation. Problem (4) was solved using the expected value and the conditional value-at-risk, respectively. The same constraints were used in all three optimisations. For the stochastic optimisation, a sample based on stochastic variation of the single cycle used in the deterministic optimisation was deployed.

Figure 2 depicts the estimated distribution of the three resulting calibrations when evaluating (3) on a set of validation cycles. It is clearly visible how the shape of the distribution is influenced by the selected risk measure. The deterministic standard approach leads to a large variance and the highest expected costs, value-at-risk and conditional value-at-risk. In contrast, the stochastic optimisation of the expected value exhibits the smallest expected costs and overall also a smaller variance of costs. Minimising the conditional value-at-risk, which is a more risk-averse approach, leads to the smallest conditional value-at-risk, and therefore to the lowest cost in the case of the realisation of an unfavourable drive cycle. In return, however, a higher average cost has to be accepted.

In Figures 3-4 the corresponding distributions of fuel consumption and emissions are illustrated. It can be seen that the more risk-averse approaches lead to generally lower emissions at the expense of higher fuel consumption. The lower emissions imply a lower probability of violating the threshold S, which is indicated by the red vertical line. While in the case of the deterministic optimisation more than 50% of all tested drive cycles exceed the threshold, this value is greatly reduced using stochastic optimisation. In the case of CVaR optimisation, only around 10% of all cycles violate the emission threshold. The reason for that is the structure of the cost function (3): as long as the emissions remain below that threshold, no costs arise from emissions and therefore higher fuel consumption is accepted. High total costs therefore only arise when the emission threshold is violated. For that reason, the stochastic optimisation using the conditional value-at-risk, which is primarily concerned with the extreme events, focuses on reducing emissions. This leads to the lowest probability of exceeding the threshold, which also means the lowest probability of extreme costs (see Figure 2). However, this is achieved by accepting higher fuel consumption, which is the reason for the larger expected cost in that case.
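To make the workflow concrete, the following Python sketch mimics problem (4) on invented stand-ins: a sample of simulated cycles is represented simply by random operating-point weights, the cost (3) is evaluated for each cycle, and an empirical CVaR of the resulting cost sample is minimised. All models, prices, limits and dimensions are placeholders rather than values from this study, and a practical implementation would likely prefer a smoothed CVaR formulation or a gradient-free optimiser, since the sample CVaR is not smooth.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# hypothetical stand-ins for the quantities in eq. (3)
n_points, n_cycles = 5, 200
p_fc, p_nox, S = 1.0, 5.0, 1.2
# each simulated cycle assigns random dwell-time weights to the operating points
W = rng.dirichlet(np.ones(n_points), size=n_cycles)

def f_fc(u):   # illustrative fuel-consumption model per operating point
    return 1.0 + 0.5 * (u - 0.3) ** 2

def f_nox(u):  # illustrative NOx model per operating point
    return 0.5 + 2.0 * u

def cost_per_cycle(u):
    """Evaluate eq. (3) for every sampled cycle -> vector of cycle costs."""
    fuel = W @ f_fc(u) * p_fc
    nox_excess = np.maximum(0.0, W @ f_nox(u) - S)
    return fuel + nox_excess * p_nox

def cvar(c, alpha=0.9):
    var = np.quantile(c, alpha)
    return c[c >= var].mean()

# problem (4): minimise the risk measure of the cost over the cycle sample
res = minimize(lambda u: cvar(cost_per_cycle(u)),
               x0=np.full(n_points, 0.5),
               bounds=[(0.0, 1.0)] * n_points)
print(res.x, res.fun)
```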
Figure 2: Distributions of the resulting cost of the three calibrations. The red area contains the worst 10% of cost realisations. The solid and dashed lines indicate the expected value and the conditional value-at-risk with α = 0.9, respectively.

Figure 3: Distributions of the fuel consumption of the three calibrations.

Figure 4: Distributions of the NOx emissions of the three calibrations. The red line indicates the threshold level S.

4 Conclusion

A novel, more holistic approach for powertrain calibration was presented. By modelling disturbances and random events as random variables and incorporating them in the optimisation process, more robust and reliable calibrations are obtained. This is achieved by optimising risk measures, which quantify certain characteristics of the resulting distributions. The choice of an adequate risk measure is influenced by the risk aversion of the decision maker and significantly influences the result. The probability and severity of extreme outcomes can be mitigated by risk-averse optimisation at the cost of average performance.

References

[1] Mock, Peter, John German, Anup Bandivadekar, and Iddo Riemersma. 2012. "Discrepancies between Type-Approval and 'Real-World' Fuel-Consumption and CO2 Values." The International Council on Clean Transportation, 1-13.
[2] Artzner, Philippe, Freddy Delbaen, Jean-Marc Eber, and David Heath. 1999. "Coherent Measures of Risk." Mathematical Finance 9 (3): 203-228. doi: 10.1111/1467-9965.00068
[3] Mitra, Sovan, and Tong Ji. 2010. "Risk Measures in Quantitative Finance." International Journal of Business Continuity and Risk Management 1 (2): 125-135. doi: 10.1504/IJBCRM.2010.033634
[4] Rockafellar, R. Tyrrell, and Stanislav Uryasev. 2000. "Optimization of Conditional Value-at-Risk." Journal of Risk 2 (3): 21-41. doi: 10.21314/JOR.2000.038
[5] Wasserburger, Alexander, Christoph Hametner, and Nico Didcock. 2019. "Risk-averse real driving emissions optimization considering stochastic influences." Engineering Optimization, doi: 10.1080/0305215X.2019.1569646

6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods

Stefan Scheidel, Marie-Sophie Gande, Giacomo Zerbini, Marko Decker

Abstract

The modern powertrain calibration process is characterized by shorter development cycles and an increased number of vehicle variants. In addition, changes in legal requirements increase the focus on the transient emissions behaviour of vehicles. This includes highly dynamic certification cycles, real world driving including cold start and a limit on the particulate number for gasoline engines.
As any transient manoeuvre (drive away, tip in, parts of a cycle, etc.) is described by a time trace, the widely known steady state methods of DoE-testing and model-based calibration cannot be used. Dynamic empirical modelling is one possible solution, but the measurement effort and the mathematical knowledge needed is far higher than for a classical steady state DoE approach. Therefore, the reduction of the time trace to scalar KPI values (KPI values = “Key Performance Indicator” values) is a known possible solution to apply “steady state methods” for transient optimization. Up to now, this calculation of KPIs was done by recorder analysis in the post processing, i.e. after all DoE variations were performed. Therefore, the combination with an online iterative DoE approach is not possible. This paper presents an integrated online approach, in which the KPIs are calculated directly after each manoeuvre and not in the post processing. The online DoE method incorporates the training of empirical models of the KPIs while the test is running. With the information of the online models, the variation points can be directed into the area of interest. That means, unfavourable combinations with high emissions and/ or bad drivability can be avoided in the iterative distribution of DoE points, while the density of measurement point in the interesting area (low emissions, good drivability) is increased. Furthermore, the availability of online models enables online optimization. The implementation of this approach is aiming for an easy and fast parameterization of the test run, to enable calibration engineers familiar with steady state DoE methods to use the method without additional knowledge or training. This publication will describe the general approach and present results from various calibration projects for both Diesel and Gasoline engines. In addition, an outlook on the application of the same method for new areas like hybrid calibration and ADAS will be given. 219 <?page no="230"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods Kurzfassung Die heutige Antriebsstrangentwicklung ist gekennzeichnet durch immer kürzere Entwicklungszyklen und eine zunehmende Anzahl an Variantenapplikationen. Außerdem erhöht die aktuelle Gesetzgebung den Fokus auf das transient Emissionsverhalten von Fahrzeugen. Dies umfasst hochdynamische Zertifizierungszyklen, RDE (Real Driving Emissionen) inklusive Kaltstart sowie die Limitierung der Partikelanzahl für Benzinmotoren. Da jedes transiente Manöver (Anfahren, Tip-In, Ausschnitt eines Zyklus, etc.) durch einen Zeitverlauf beschrieben ist, können konventionelle DoE Methoden basierend auf stationären Messdaten nicht ohne weiteres zur Anwendung kommen. Empirisch dynamische Modellbildung wäre eine Lösung, ist jedoch hinsichtlich Aufwand und notwendigem mathematischen Knowhow deutlich komplexer als ein klassischer stationärer DoE Ansatz. Um den stationären DoE Ansatz auch für dynamische Vorgänge anzuwenden, besteht die Möglichkeit, den Zeitverlauf auf skalare Bewertungsgrößen, sogenannte KPIs (“Key Performance Indicator”), zu reduzieren und diese zu modellieren. Bis heute erfolgte die Berechnung der KPIs hauptsächlich durch eine dem DoE-Messprogramm nachgelagerte Auswertung wodurch eine Kombination mit online DoE Methoden unmöglich war. Dieser Artikel zeigt eine integrierte online DoE-Methode, bei der die KPIs direkt nach der Ausführung eines Manövers berechnet werden. 
Die online DoE Methode basiert auf der Erstellung der empirischen Modelle der KPIs bereits zur Laufzeit des Prüflaufs. Die in den Modellen enthalte Information kann genutzt werden, um die Variationspunkte deutlich besser zu verteilen als dies mit konventionellen DoE-Plänen möglich wäre. Dadurch können ungünstige Variationspunkte z.B. mit hohen Emissionen und/ oder schlechter Fahrbarkeit von vornherein vermieden werden und stattdessen eine höhere Punktedichte im Zielbereich (niedrige Emissionen, gute Fahrbarkeit) erreicht werden. Des Weiteren eröffnet die Verfügbarkeit der Modelle auch die Möglichkeit der Online-Optimierung. In der Umsetzung der Methode wurde besondere Wert auf die einfache Parametrierbarkeit gelegt, sodass Kalibrationsingenieure die Methode ohne besonderes Training selbstständig anwenden können. Im Artikel wird sowohl die neue Methode erörtert als auch die Anwendung der Methode für ausgewählte Kalibrieraufgaben in der Serienentwicklung von Diesel- und Otto-Motoren. Abschließend wird ein Ausblick gegeben, wie diese Methode für zukünftige Optimierungsprobleme im Bereich Hybrid-Entwicklung und ADAS angewendet werden kann. 1 Introduction 1.1 Motivation Starting from the 1980s, regulations regarding tailpipe-emissions for combustion engines in cars, trucks and for non-road applications have been continuously tightened. However, for around three decades the legislation focused on the reduction of the 220 <?page no="231"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods emissions limits which had to be achieved under laboratory conditions based on rather moderate certification cycles. Only in recent years, first for heavy duty applications, now also for passenger cars, the challenge changed from “reduced targets in a known, moderate cycle” to “achieving given targets under different - partly unknown - highly dynamic conditions” [1]. Statistic evaluation of typical EURO6 driving cycles and RDE-routes have shown that more than 70% of the accumulated tailpipe emissions are generated during transient events. Figure 1 illustrates the challenges of the latest legislations, highlighting the much higher accelerations in EURO 6d-Temp cycles. Figure 1: Challenges in powertrain calibration This leads to two fundamental conclusions for engine calibration: • The engine needs to be “clean” under all circumstances, thus including transient manoeuvres. The calibration of the dynamic engine behaviour with focus on emissions becomes crucial to pass the legislative certification. • Drivability and emission calibration cannot be executed as two separate tasks anymore. Standardized, methodical approaches for steady state calibration are widely known and used [3]. A similar standardized and “easy to use” concept for transient calibration has been established over the past two years and will be presented in this publication. 1.2 State of the art: Steady state DoE Design of Experiments (DoE), empirical modelling and model-based optimization are known methods to deal with high dimensional optimization problems. In powertrain calibration, DoE is established since around 20 years as an efficient methodical approach to optimize the steady state calibration [2]. Conventional DoE is based on a list of experiments, distributed in the n-dimensional variation space by a certain criterion (D-optimal, V-optimal, space filling, etc.). 
This list 221 <?page no="232"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods of experiments is executed and afterwards, empirical models are trained based on the steady state measurement results. Rapid increase in computational power and the demand for a high level of automation enabled the development of online DoE methods in recent years [3, 4, 5]. Common feature of all online, iterative DoE methods is the fact, that test plan execution and model training are no longer consecutive steps but run in parallel. The availability of online models enables the intelligent distribution of additional measurement points (in the area of interest and within the limit boundaries) during the test as well as online optimization. Fundamental requirement for online DoE is the availability of the test results directly after each experiment. As for steady state DoE, an “experiment” is typically a mean value measurement of engine responses, this requirement is not hard to fulfil. For dynamic events, the time trace needs to be analysed to calculate scalar KPIs 1.3 State of the art: Dynamic DoE To optimize a transient manoeuvre, on the first sight the most logical approach is to perform a dynamic DoE (dynamic excitation of the system based on a DoE plan) and train a dynamic empirical model based on the transient data [6]. This dynamic model can be used in an optimization process to improve the response of the system in the time domain [7]. As dynamic models generate a time trace of the outputs based on a time trace of the inputs, the optimization process itself is more complex than for time invariant models. The conversion of the optimal time trace into optimized calibration maps (generally not time-based) in the ECU is an even more challenging task. Besides all that, the generation and execution of a dynamic DoE sequence sets higher demands on the pre-determination of limit boundaries (to avoid limit violations during the dynamic test), the automation system, the data post processing tools and in general a higher level of mathematical knowledge. Therefore, dynamic DoE has beyond doubt its field of application but so far this approach has not grown out of its niche existence for highly specialized application. 2 Steady state methods for transient events 2.1 General approach Looking at steady state DoE, the training data for the time invariant model is collected according to the following scheme: 1. Set the controllable inputs parameters to the desired values according to the DoE plan 2. Wait until the system is stabilized 3. Take a mean value measurement of the system responses The application of steady state methods for dynamic events is based on the replacement of the mean value measurement of the stabilized system by scalar assessment values called KPI (“Key Performance Indicator). The KPIs are calculated based on the 222 <?page no="233"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods transient time trace. Those values can be modelled with time invariant empirical models. Therefore, after calculating the KPIs, the modelling and optimization process is identical to a steady state DoE [11]. Figure 2: Workflow transient optimization based on KPIs This approach has already been known and used in the past. But as the calculation of the KPI values was mainly done in a post processing step, the application of online DoE methods with its benefits mentioned above was hardly possible. 
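As a minimal illustration of this reduction, the following Python sketch condenses a recorded manoeuvre into scalar KPIs of the kind used later in this paper (90 % torque rise time, soot peak, emission integrals). The signal names and the KPI selection are assumptions made for this example; they do not reproduce the testbed toolchain described below.

```python
import numpy as np

def transient_kpis(t, torque, soot_flow, nox_flow):
    """Reduce a recorded manoeuvre to scalar KPIs (illustrative selection).

    t         : time vector [s]
    torque    : engine torque trace [Nm]
    soot_flow : soot mass flow trace [mg/s]
    nox_flow  : NOx mass flow trace [mg/s]
    """
    target = torque[-1]                          # assume torque has settled at the end of the trace
    idx_t90 = np.argmax(torque >= 0.9 * target)  # first sample reaching 90 % of the final torque
    return {
        "Md_t90": t[idx_t90] - t[0],             # 90 % torque rise time [s]
        "soot_peak": soot_flow.max(),            # peak soot mass flow
        "soot_integral": np.trapz(soot_flow, t), # accumulated soot mass over the manoeuvre
        "nox_integral": np.trapz(nox_flow, t),   # accumulated NOx mass over the manoeuvre
    }
```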
To enable the full benefits of the latest online DoE methods, the calculation of KPIs needs to be moved from the post processing (usually in a tool without interface to the automation system) into the toolchain of the testbed automation. With the KPIs available online, the main hurdle to establish this process is taken but as the devil is always in the detail, some more challenges need to be overcome to successfully establish DoE for transient events. In the following, a fully integrated solution is presented, where the calculation of KPIs is done online by the automation system. The main focus is on the implementation with high usability and without the need for specific expert knowledge. 2.2 Challenges and Solutions 2.2.1 Online calculation of KPIs As identified above, the main enabler for a standardized method are online calculated KPIs, which is possible since AVL CAMEO TM 3R9. The in-built cyclic formula device is extended by so-called “aggregating functions”, which offer the calculation of integrals, minimum and maximum values as well as standard deviations of signals at any time during the automated DoE testrun. The calculation window can be started and stopped using trigger flags during the testrun. Equipped with this functionality, KPIs for emission optimization can be easily calculated with minimal additional effort during any CAMEO testrun. 223 <?page no="234"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods In combination with AVL PUMA Open 2 TM as part of the AVL MultiSync Technology™, the KPI calculations parameterized in CAMEO are executed on the realtime platform in PUMA with up to 1kHz calculation frequency [8]. For the assessment of drivability, AVL DRIVE TM already offers well accepted algorithms to calculate so-called “ratings” for manoeuvres [12]. Offering a standardized interface, the DRIVE ratings can be transferred to CAMEO immediately after each manoeuvre execution. 2.2.2 Time delay compensation Being a methodology mainly driven by the need for emission optimization of transient events, the online calculation of emission mass flows needs to be considered. Testbed equipment measures the concentrations of emission components and the exhaust mass flow with different devices, resulting in individual time delays for each signal. A precise calculation of emission mass flows based on concentration and exhaust mass flow is only possible for synchronized signals. In case the calculation of KPIs is done in the post processing, time delays can easily be compensated by shifting the delayed signal forward by its delay. During an online calculation, shifting signal forward is not possible, as future values of the delayed signal are unknown. To overcome this predicament, the time shift is applied in reverse direction: The slowest relevant signal is identified, and all faster channels are delayed by the difference between their own time delay and the time delay of the slowest signal. To be consistent, also the activation/ deactivation trigger for the calculation window is shifted backwards by the highest delay time. 2.2.3 Form variation of maps and curves for transient DoE Remembering the scheme of steady state DoE given above, the first step is mentioned as “Set the controllable input parameters to the desired values”. The desired values are the constant values calculated in the DoE plan or by the online DoE method. 
These values can directly be written to the calibration maps and curves in the ECU (Engine Control Unit) since the operating point in the map is fixed during a stead state DoE. During transient manoeuvres, the operating point in the calibration maps will most likely change since the relevant calibration maps are based on speed and load as xand yaxis. Therefore, the shape of the calibration maps determines the resulting time trace of the calibration parameter during the manoeuvre. Depending on the use case, scalar offsets and/ or factors applied to a pre-calibrated shape of the ECU maps/ curves may be sufficient DoE variations. In many cases, the optimal shape of a map/ curve is unknown, making a DoE-based form variation necessary. Figure 3 summarizes the differences between conventional steady state DoE and DoE for transient events. The left side shows the workflow of steady state testing: Since the operating point is constant, the shape of the calibration maps doesn’t matter. During a transient manoeuvre, the operating point is changing. Therefor the shape of the map is influencing the result. As a consequence, a DoE-based shape variation is required. 224 <?page no="235"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods Figure 3: Steady state versus transient DoE To keep the dimensionality of the DoE as low as possible while still enabling a flexible variation of the shape, the following approach for a DoE-based form variation is used: Several grid points of a map/ curve are treated as individual DoE variations. The final map shape is generated by a spline interpolation between the discrete grid points. Figure 4 exemplarily illustrates the form variation of a calibration map in y-direction based on three discrete DoE-variations: The discrete grid point will be varied in a definable range. For each combination, a spline interpolation will be used to calculate the final map shape. Figure 4: Form variation of a calibration map based in three grid points Depending on the task, both shaping in one dimension (as shown above) or 2-dimensional map shaping can be applied. 225 <?page no="236"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods 3 Application examples 3.1 Diesel engine tip-in optimization The main parameter for transient calibration of diesel engine is the smoke limitation map. Besides that, modern engine control systems offer the possibility of transient corrections. Activated by a transient recognition (e.g. based on boost pressure deviation or quantity gradient), the steady state setpoint for the relevant combustion parameters (e.g. start of injection, rail pressure, EGR & Boost control etc.) can be modified during transient events. The described methodology has been applied in various diesel engine calibration projects, for both heavy duty and passenger car vehicles. The optimization process is based on a tip-in from 0% to 100% accelerator pedal in zero seconds. Depending on the application, the engine speed is either kept constant during the tip-in or simulated based on a vehicle model. Especially in passenger car development, the approach shows a high frontloading potential since the dynamic optimization is typically done on the road or testtrack. In the following the results of a selected passenger car calibration project will be described. 
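Before turning to those results, the grid-point form variation from section 2.2.3 can be sketched in a few lines of Python. The breakpoints, grid-point positions and values below are invented for illustration, and any sufficiently smooth interpolation could replace the cubic spline.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def shape_from_grid_points(x_axis, x_var, y_var):
    """Build a full curve (or one map row) from a few DoE-varied grid points.

    x_axis : full breakpoint vector of the ECU curve (e.g. engine speed)
    x_var  : positions of the (e.g. three) varied grid points
    y_var  : values of those grid points proposed by the DoE / online method
    """
    spline = CubicSpline(x_var, y_var, bc_type="natural")
    return spline(x_axis)

speed = np.linspace(800, 4000, 17)  # breakpoints of the calibration curve (illustrative)
candidate = shape_from_grid_points(speed, x_var=[800, 2400, 4000], y_var=[0.8, 1.1, 0.9])
```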
The result of the DoE-based optimization on the engine testbed was compared to the dataset generated during an extensive test trip, using an experience based manual calibration approach. This manual calibration took about one-week time. Target of the DoE-based optimization was to either demonstrate a similar calibration result in less time, or if possible find potential to further reduce the transient soot peaks and therefor DPF-load while keeping torque build-up, NOx-emissions and combustion noise on the same level. After a detailed analysis of the ECU functionality, the following variations were selected for the DoE-approach:  Smokelimiter maps: Form variation (as described in chapter 2.2.3) based on 3 grid points  Main timing correction map: Form variation based on 3 grid points  Swirl correction map: Scalar offset to basemap  Rail pressure correction map: Scalar factor to basemap  Pilot quantity correction map: Scalar offset to basemap  Pilot timing correction map: Scalar offset to basemap For modelling and optimization, the following KPIs were used:  90% torque rising time (Md_t90)  Power integral  Soot concentration peak & Soot mass flow integral  NOx concentration peak & NOx mass flow integral  Maximum combustion noise  Maximum gradient of combustion noise The workflow and the results are illustrated in Figure 5. Tip-Ins are performed at constant engine speed. KPIs for emissions and drivability are calculated and modelled. Based on the models, the soot integral with given NOx and drivability constraints is minimized. 226 <?page no="237"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods Figure 5: DoE based tip-in optimization The optimization process was done at 5 fixed engine speeds. All necessary measurements could be collected during less than 2 days’ time on the testbed, followed by 3 days modelling, optimization and map generation work in the office. The resulting calibration shows 30%-60% lower soot peaks in the whole engine speed range compared to the manual calibration approach. Since the selected manoeuvre of a 0% to 100% tip-in at constant engine speed is rather synthetic, the effect of the new calibration was validated in WLTC and RDE measurements on the chassis dyno. Figure 6 shows that the soot reduction effect seen in the synthetic tip-ins carries over very well to any real driving cycle. Figure 6: Verification in WLTC (extraction) Furthermore, the generated models from the test bed can be re-used for further fine tuning of the calibration in later development steps. 227 <?page no="238"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods 3.2 Gasoline engine: Catalyst heat up and transient particulate optimization The overall tailpipe emissions of a gasoline engine during a cycle are mainly generated in two phases:  Gaseous emissions and particulates during cold start and catalysis heating  Particulate emissions during transient events The presented approach can be tweaked to tackle both optimization problems: To optimize the emissions after cold start, the light off temperature shall be reached as fast as possible, while keeping the engine out emissions on a low level. For this use case, the engine testbed got equipped with a rapid cooldown system for the engine and the exhaust aftertreatment system to ensure a repeatable real cold start. 
After the start, emissions and temperatures were monitored for 60 seconds and several KPI values were calculated until the engine got switched off and cooled again. As calibration parameters the idle speed setpoint, lambda and ignition offset were varied. Figure 7 illustrates an overview of this use case. The workflow is similar to the above described example: The first 60s of the cycle are performed, KPIs calculated and modelled. Finally, a model based optimization of accumulated cold start emissions is performed. Figure 7: DoE based catalyst heat up calibration For the optimization of transient peaks of particulate emissions during normal operating conditions, a comparable approach to the diesel optimization described in chapter 3.2 can be applied: Main calibration parameter for the optimization is a dynamic correction of the start of injection (SOI). The SOI correction map was varied with a 3-point form variation as described in chapter 2.2.3 during a tip-in manoeuvre. Since the particulate peaks are mainly caused by cold surfaces in the combustion chamber, the engine was operated at 30°C coolant temperature and, to cool down the piston as much as possible, during the tip-out phase a fuel cut was performed. 228 <?page no="239"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods After the DoE, models for particulate peak and integral have been generated to perform an optimum search. Figure 8 illustrates the transient particulate emission peak with and without dynamic correction of the start of injection after optimization. Figure 8: Transient particulate peak optimization 3.3 Future fields of application: Operational strategy and ADAS As described above, this method has been heavily used in conventional powertrain calibration. Nevertheless, the approach of applying (form-)variations to calibration maps, performing a manoeuvre, extracting KPIs from the time trace, modelling the KPIs and finally optimizing the system can be used in many other fields of application: In system engineering, this approach can be used to perform a combined optimization of the e.g. hybrid operating strategy and optimal component selection. In recent years, the method has been applied to both a conventional powertrain including the shift strategy optimization [9] and an electrified powertrain including the hybrid operational strategy [10]. In the near future, this method will also be used for the calibration and validation of Advanced Driver Assistant Systems in simulation environments. For ADAS validation, the idea is to generate a manoeuvre like a highway cut-in scenario as shown in figure 9, that can be executed in a simulation environment. For the given scenario, parameters like vehicle speeds and triggers for the cut-in and cut-out will be varied. All parameter combinations with a high distance-to-crash are not interesting because in that case, the system is not challenged. Also, parameter combinations that lead to an inevitable crash are not of interest. Therefore, the target of the model based optimization is the identification of various parameter combinations that all lead to a “close to crash” scenario. 229 <?page no="240"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods Figure 9: DoE approach for ADAS validation Once those “close to crash” scenarios are found, the same method can be used to vary calibration parameters of the ADAS controller to maximize the distance-to-crash for the given manoeuvre. 
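One way to sketch this selection step: given a KPI model trained on the scenario DoE, keep only those parameter combinations whose predicted distance-to-crash falls into a narrow band above zero. The model object, its predict interface and the band limits below are illustrative assumptions.

```python
import numpy as np

def close_to_crash_candidates(dtc_model, scenarios, band=(0.1, 1.0)):
    """Select scenario parameter sets whose predicted distance-to-crash lies in a narrow band.

    dtc_model : trained KPI model predicting distance-to-crash from scenario parameters
    scenarios : (n, d) array of candidate parameter combinations (speeds, cut-in triggers, ...)
    band      : predicted distance-to-crash range [m] considered "close to crash"
    """
    dtc = np.asarray(dtc_model.predict(scenarios))
    mask = (dtc > band[0]) & (dtc < band[1])  # exclude both uncritical and inevitable-crash cases
    return scenarios[mask], dtc[mask]
```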
4 Conclusion The developed method has been successfully applied in several SOP calibration projects in both dieseland gasoline engine and powertrain calibration. The method delivered in all field of application a better calibration in less time, compared to a manual calibration approach. Besides the result quality, it has proven to be as easy to use by calibration engineers experienced in steady state DoE methods. Further potential is seen in system optimization for electrified powertrains and driver assistant systems. References [1] Maschmeyer, H.; Kluin, M.; Beidl, C.: Real Driving Emissions - Ein Paradigmenwechsel in der Entwicklung, MTZ-Artikel, 76. Jahrgang, Wiesbaden, Februar 2015 [2] Seabrook, J.; Revereault, P.; Preston, M.; Grahn, M.; Bringhed, J.; Lundberg, B.: Application of Emulator Models in Hybrid Vehicle Development, International Calibration Conference, Berlin 2017 [3] Rainer, A.; Koegeler, H.M.: Iterative DoE - improved emission models and better optimisation results within a shortened measurement time. In: International Journal of Powertrains, 2017 230 <?page no="241"?> 6.3 A Versatile Approach for Transient Manoeuvre Optimization Using DoE Methods [4] Klein, P.; Kirschbaum, F.; Hartmann, B.; Bogachik, Y.; Nelles, O.: Adaptive Test Planning for the Calibration of Combustion Engines - Application, Design of Experiments (DoE) in Powertrain Development, Berlin 2013 [5] Sandmeier, N.; Röpke, K.: Improving the usability and time effort of modern engine calibration tasks by means of an online, model-based approach, 6th International symposium on Development Methodology, Wiesbaden 2015 [6] Nebel, M.; Vogels, M.-S.; Combé, T.; Winsel, T.; Pfluegl, H.; Hametner, Ch.: Global Dynamic Models for XiL-based Calibration, SAE 2010-01-0329 [7] Kolar, A.; Luig, J.: Dynamic optimization of engine actuator set values using a metaheuristic algorithm, Simulation and Testing for Automotive Electronics V, Berlin 2014 [8] Rottberger, G.; Krenn, J.: Das neue Dieselmotoren Prüffeld von BMW Steyr - mittels Kalibrierdatenverbund zukunftssicher aufgestellt, 7th International symposium on Development Methodology, Wiesbaden 2017 [9] Ortner, M.; Schörghuber, Ch.; Scheidel, S.; Hasenbichler, G.: Selektion der optimalen Antriebsstrangkonfiguration für künftige Anforderungen an Nutzfahrzeuge, MTZ-Artikel, 79. Jahrgang, Wiesbaden, Oktober 2018 [10] Ravi, A.; Kögeler, H.-M.; Jones, S.; Huss, A.; Allouchery, L.; Massoner, A.; Weigerl, P.: Front Loading the Calibration of Hybrid Operation Strategy via a Virtual Model Based Approach, 14th International Symposium on Advanced Vehicle Control, Beijing 2018 [11] Scheidel, S.; Gande, M.-S.: DoE-based transient manoeuvre optimization, 7th International symposium on Development Methodology, Wiesbaden 2017 [12] Holzinger, J.; Schoeggl, P.; Schrauf, M.; Bogner, E.: Objective assessment of driveability while automated driving, ATZ 10/ 2014 231 <?page no="242"?> 7 Automated Calibration II 7.1 AMU-Based Functions on Engine ECUs Benedikt Nork, René Diener Abstract Integrating AMU functionality into the micro-controllers used in the current generation of MDG1 (Motronic Diesel Gasoline) engine control units represents the successful completion of an important step towards the availability of multi-dimensional Gaussian process models for engine management. Since the start of this development, which was presented at the 2015 DoE conference, Robert Bosch GmbH has been working towards expanding the fields in which this methodology can be used. 
In addition to in-house developments, which have produced a series of home-grown control unit functions, the methodology was also the subject of early discussions with Robert Bosch GmbH’s customers. This, in turn, generated ideas about how one might sensibly use the opportunities offered by the asc@ECU method to expand existing functions. In the meantime, one functionality is now in series production; others will follow in 2019. One example of collaboration with a customer of Robert Bosch GmbH will be described below. The description will cover, in particular, the results of a specific functionality in the diesel engine field, including the motivation behind it and its functional implementation. Integrating numeric models for the multi-dimensional predictive mapping and control of combustion engines represents one possible solution for future, ever more complex, engine management control strategies. This provides an opportunity to significantly reduce the number of parameters needing to be populated with data; at the same time, considerably fewer resources are required to implement the application and functional transparency is improved. This presentation will illustrate the structural improvement to the way the fresh air and the exhaust gas recirculation (EGR) mass flows are recorded by using the AMU with a Bosch MD1 control unit fitted to the DEUTZ TCD 3.6 industrial engine. In addition to the engine speed sensor, only pressure and temperature sensors on the engine are used to determine the mass flows. Furthermore, the structures of the traditional map-based functions together with the function structures including integral AMU models will be illustrated, along with some of the development stages during the functional optimisation process. A four-dimensional ‘volumetric efficiency’ model was defined for calculating the engine charge volume, i.e. the gas mass flow via the cylinder head. A numerical AMU model was combined with a physical model for calculating the EGR mass flow. This allowed a mass flow, based on the Bernoulli equation, to be calculated from the pressure differential at one venturi pipe in the EGR section and the temperature at this point (physical model). This theoretical EGR mass flow, in addition to other variables, was used as the input parameter for a subsequent numerical AMU correction model. The results achieved by applying the improved model-based functions to the 232 <?page no="243"?> 7.1 AMU-Based Functions on Engine ECUs engine will then be illustrated using a comprehensive range of measurements and discussed. 1 AMU (Advanced Modelling Unit) on the Bosch MDG1 Starting from 2011 the Robert Bosch Company developed the HW acceleration unit AMU and presented this new feature in 2015 at the DoE Conference to a broad audience 1[1] . The AMU is an integral part of the micro controller, which is available on the latest ECU generations from Bosch. It is an arithmetic logical unit (ALU), which is directly connected with the crossbar and thus has connection to the flash. The AMU is designed to calculate autonomously a weighted sum of exponential functions in a very fast and accurate manner. This allows the calculation of Gaussian Process Models as well as Radial Basis Function (RBF) Models without any extra workload for the cores. By benchmarking the calculation of Gaussian Process Models, the AMU is approx. 30 times faster in comparison to a calculation on a core. 
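The computation the AMU accelerates, a weighted sum of exponential basis functions as used for the posterior mean of a Gaussian process model or for an RBF model, can be written compactly. The following Python sketch evaluates such a model for one input vector; the function name, data layout and per-dimension length scales are illustrative assumptions and do not reproduce the Bosch AMU driver interface.

```python
import numpy as np

def amu_style_model(u, centres, weights, length_scales):
    """Evaluate a weighted sum of squared-exponential basis functions.

    y(u) = sum_i w_i * exp(-0.5 * sum_d ((u_d - c_id) / l_d)^2)

    u             : input vector (e.g. engine speed, boost pressure, ...)
    centres       : (n_points, n_dims) basis-function centres (model grid points)
    weights       : (n_points,) trained weights
    length_scales : (n_dims,) basis-function width per input dimension
    """
    d = (np.asarray(u) - centres) / length_scales  # scaled distances to all centres
    phi = np.exp(-0.5 * np.sum(d ** 2, axis=1))    # basis-function values
    return float(weights @ phi)
```

Evaluating hundreds of such basis functions for several models within a single engine cycle is what makes a dedicated hardware unit attractive for this task.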
This leads to the conclusion that only a HW acceleration unit like the AMU meets the real-time requirements to calculate Gaussian Processes Models as well as RBF models as an easy and fast modelling approach for machine learning on embedded devices. Figure 1: Schematic of the μC1 architecture [1] Diener, R., et al: Data-based Models on the ECU, Design of Experiments (DoE) in Powertrain Development, 2015, Expert Verlag, ISBN 978-3-8169-3316-8 233 <?page no="244"?> 7.1 AMU-Based Functions on Engine ECUs For addressing the calculation unit within the application software Bosch developed a HW driver, which handles the complete management of data transfer and model calculation. The only action performed within the application software is initiating the model calculation using the AMU during start-up. The HW driver loads the data from the ECU flash into the AMU’s RAM. Later the application function calls an AMU calculation and the AMU driver handles all required steps. The calculation can be configured as synchronous or asynchronous. In synchronous mode, the calling process is waiting for the AMU result, in asynchronous mode; the calling process is going on with other tasks and picks up later the result from the AMU calculation. The driver is developed in compliance with AUTOSAR standard 4.2 and can also be shared via software sharing. If several models should be calculated and the overall model size would not fit into the AMU RAM, then the driver is able to reload the model data for every calculation. In addition the AMU driver supports multi-tasking in case several models are executed on different priority levels. In general the micro controller families Infineon AURIX and Freescale/ STMicroelectronics MPC5700 are equipped with the AMU which is mounted on Bosch and non-Bosch control units. On the latter, Bosch can provide the AMU driver via SW Sharing model and corresponding licenses. 2 MCC (Model-Based Charge Control) The DEUTZ TCD 3.6 is a supercharged industrial diesel engine. If high performance, optimum responsiveness, low emissions and the best possible fuel consumption are to be permanently guaranteed, highly dynamic control of the air path is absolutely essential. This involves regulating and adjusting the optimum combination of the fresh air mass flow and the exhaust gas recirculation rate in all given environmental conditions within the engine’s entire operating range. The function on the engine control unit which is responsible for doing this is the Model-Based Charge Control (MCC). Target values for fresh air mass flow and exhaust gas recirculation rate are first converted into a target value for the gas mass flow via the cylinder head into the combustion chamber(𝑚 ), and into a target value for the recirculated exhaust gas mass flow(𝑚 ). The two converted target values are then compared with actual values for 𝑚 and 𝑚 and controlled by actuating the throttle valve and the EGR adjuster. Two multi-dimensional numerical models are used for calculating the actual values for 𝑚 and 𝑚 . The computing algorithm used is based on the Gaussian Process model. The AMU processor core in the MD1 control unit’s micro-controller is used for the numerically complex calculation of the numerous e-functions of the Gaussian Process. Making use of the AMU relieves the pressure on the MD1’s CPU to the extent that it becomes possible to handle several and even more complicated models serially in a single calculation step. 
Those sensors on the engine which are available for engine 234 <?page no="245"?> 7.1 AMU-Based Functions on Engine ECUs management are of use as input parameters for the models for calculating 𝑚 and 𝑚 . Figure 2: MCC embedded structure ´ 3 Map-based structure versus model-based structure The usual widely adopted approach when illustrating engine management and control functions is map-based. This means that, in order to achieve a particular engine parameter setting or when calculating engine characteristics, the characteristic maps, characteristic lines and constants are interlinked. Both the control parameters (e.g. the main injection start time) and the calculation variables (e.g. volumetric efficiency) are calculated in this way. Starting from a basic map for the parameter being calculated, various corrections depending on other constraints are also required. Ultimately, a two-dimensional numerical model with two input parameters and one output parameter, i.e. a characteristic map, is used. This two-dimensional model is then adjusted depending on other parameters which may also influence the model. This is how the multi-dimensional dependencies, which almost always occur in practice, are illustrated. Using the map-based functional approach, two input parameters at most can be processed simultaneously. Ultimately, therefore, all map-based functions remain oneand two-dimensional profiles 235 <?page no="246"?> 7.1 AMU-Based Functions on Engine ECUs Figure 3: MAP based functionality in a multi-dimensional parameter space. Up to a certain level of complexity, the mapbased function structures have so far been sufficient although every applications developer is pretty well aware of the weaknesses of ever more complex correction structures. The map-based function structure which has evolved over time does not extend beyond the two-dimensional characteristic map, since a three-dimensional (or greater-dimensional) map can neither be clearly illustrated nor realised in practice by the capability required of the engine control unit. Meeting the requirements demanded of current and future drive concepts, with their multifarious actuators and sensors, and compliance with the ever more stringent statutory limits can only be achieved to a very limited extent if a two-dimensional map-based function structure is employed. This presentation will suggest the numerical model as a possible remedy for this dilemma, i.e. how to go beyond the second dimension as regards engine management function structure. If a numerical model is used to calculate target values or physical engine characteristics with AMU assistance, then, in principle, the number of input parameters can be chosen at will. Subsequent corrections, dependent on additional constraints, are superfluous. A clearly transparent function block will be created if the correct input parameters are chosen. Calibrating and applying the model is, of course, a more time-consuming task than for a single characteristic map; however, if one considers the entire complex function including all the corrections, then populating a numerical model with data is considerably simpler and more transparent. Whichever engine control unit is used, it must, of course, be capable of calculating the numerical model correspondingly rapidly. The Bosch MD1 control unit including its AMU is well-equipped to do this. 
In summary, as functions become increasingly complex with an ever greater number of input parameters, using numerical models will considerably reduce the overall time and effort required while offering better quality and ensuring that the application can be clearly understood. 236 <?page no="247"?> 7.1 AMU-Based Functions on Engine ECUs Figure 4: Model-based functionality 4 Radial basis functions and Gaussian Process regression At this point, the concept of radial basis functions needs to be discussed briefly but without detailed mathematical explanations. The aim is to dispel worries about this modelling procedure and to make the basic principles of the algorithm accessible to a wider circle of users. To provide a better understanding, the first thing is to explain how the modelling results are presented. In order to generate a model, you need measuring points. These measuring points represent value pairs, i.e. at every point in the input space you get an answer to the system being modelled in the form of a measuring point. You can think of these measuring points as the analysis of an unknown function. Using modelling based on the measurement data, we aim to calculate this true, but to us unknown, function. The result of this modelling is a curve which can be used for model predictions. This curve is produced by a very generic mathematical calculation, namely by the superposition of an identical, but variously shifted, basis function. Various basis functions, which differ as regards complexity and efficiency, exist to achieve this. One that is commonly used is the quadratic exponential basis function which is comparable with the density function of the standard normal distribution (Gauss function). If basis functions of this type are chosen, the question arises as to where their centre point should be placed. The coordinates in the input space of those points which you have measured is an obvious choice. You may then imagine a basis function of this type lying ‘beneath’ every measuring point. Basis functions can also be easily generated in this way by means of what are known as kernels or kernel functions. Taking the simplest case, using this type of modelling, you now have a curve which has been produced by the superposition of basis functions and whose centre points lie where there are measuring points. In line with the name ‘radial’ basis function, it is obvious that a basis function which is radially symmetrical is being used. Figure 2 illustrates, taking the one-dimensional case, that any number of smooth functions can be represented by superposing them. The functions may also, of course, assume negative values. 237 <?page no="248"?> 7.1 AMU-Based Functions on Engine ECUs Figure 5: Superposition of seven individual basis functions to form one smooth function. The situation for a given dimension x, illustrated in Figure 5, can also be transferred mathematically to several dimensions, allowing a corresponding number of multidimensional curves and areas to be generated. As regards what is known as model training, i.e. the optimisation of the modelling described above, the task now consists of determining the width and the height of all the basis function curves - their position in the space is known and does not need to be optimised since it has already been clearly defined by the measuring points. It is at this point that the distinctive features of the Gaussian Process regression come into play 2 . 
On one hand, the way in which the width and height of the basis functions are calculated makes the method a very powerful tool but, on the other, it turns it into a less comprehensible algorithm for the majority of users. A discussion of the mathematical principles of the Gaussian Process regression does not form part of this presentation; it would go beyond the limits set and be a great distraction from the actual topic. Because it is frequently taken for granted or gets lost in the specialist literature amongst the mathematical discussion and comment, some typical assumptions and pre-conditions which underpin this modelling technique and which make a major contribution to the successful formation of a model are listed below:  the true function being modelled is smooth  the data measured have a normally distributed noise in relation to the unknown true function  the measurement noise along one dimension is always the same  the dynamics of the true function along one dimension are the same 2 The term Gaussian Process regression has nothing to do with basis functions of a function being used which are similar to those of the Gauss bell curves. Other basis functions can be used as well (e.g. rational quadratic, Matérn, periodic, etc.). The name derives from the fact that the model prediction is not a single point but a probability density distribution. When drawn, the curve represents the maximum values of a probability density distribution, thus representing the most likely pattern of the unknown true function, given the data being used. 238 <?page no="249"?> 7.1 AMU-Based Functions on Engine ECUs  the measuring point density in the input space is very largely uniform (space filling) (this applies not only to the Gaussian Process regression)  only uncorrelated input parameters are used (basic DoE rule) Based on these assumptions, using the Gaussian Process regression without any greater knowledge of modelling, a very good model can be generated almost at the touch of a button, a model which produces a high quality explanation of the measurement data 3 and which delivers reliable model predictions within the measured space. If problems are experienced with the modelling, they can practically always be attributed to a failure to observe the assumptions set out above. 5 EGR correction and volumetric efficiency 5.1 The EGR correction model The TCD 3.6 is a supercharged diesel engine with a high pressure EGR system. The exhaust gas from the exhaust manifold is fed back into the combustion chamber via the EGR actuator, the EGR cooler, the reed valves (non-return valves) and an EGR sensor in the engine’s intake manifold downstream of the compressor. In every dynamic operating situation, it is the interaction between the EGR actuator and the throttle valve that determines the appropriate mixture of exhaust gas and fresh air in the intake manifold. For the air path control (MCC) to function faultlessly, the recirculated exhaust gas mass flow at any given moment must be metered as precisely as possible in every situation. This highly dynamic recording of the EGR mass flow is achieved via a four-dimensional numerical model. The input parameters are the sensor values for engine speed, boost pressure, the exhaust gas backpressure ahead of the turbine and the value for the EGR sensor. Assisted by the MD1 engine management system’s AMU, the model is recalculated for every combustion cycle. The output parameter from the EGR correction model is one factor by which the EGR sensor value is corrected. 
The product of the correction value and the sensor value is the desired figure for the recirculated exhaust gas mass flow 𝑚 . 5.2 The volumetric efficiency model The gas mass flow via the cylinder head into the engine’s combustion chamber is also calculated using a four-dimensional numerical model. The model calculates the volumetric efficiency of the cylinder head. The product of the volumetric efficiency (λ a ), the number of work cycles per revolution, the displacement, plus the density in the intake manifold is the desired figure for the cylinder head mass flow 𝑚 . The input parameters for the volumetric efficiency model are engine speed, boost pressure, the temperature in the intake manifold and the exhaust gas backpressure ahead of the exhaust gas turbocharger turbine. 3 The Gaussian Process regression in particular expects noisy measurement values in the model assumption, i.e. the measurement is regarded as a deduction from an unknown function plus a similarly unknown noisy measurement. Regarded in this way, it makes the Gaussian Process regression special compared to other machine learning methods; it considerably improves the handling of real-life measurement data which are always subject to measurement noise. 239 <?page no="250"?> 7.1 AMU-Based Functions on Engine ECUs Figure 6: Numeric models EGR Correction and Volumetric Efficiency 6 Tool chain and application A comprehensive ‘tool’ chain is needed for populating an AMU-based numerical model on the MD1 control unit with data. The following tasks have to be carried out step-by-step (see Figure 4). Figure 7: MD1 AMU tool chain 6.1 Measurement data Depending on the model inputs, measurement data must first be captured to generate the model. The data can come from an engine test stand, a field trial or even from an appropriate engine simulation run. The quality of the measurement data used will determine the quality of the model. A trial to acquire model data can be prepared using a DoE program (e.g. ETAS ASCMO or AVL CAMEO, etc.). Dynamic measure- 240 <?page no="251"?> 7.1 AMU-Based Functions on Engine ECUs ment data, i.e. snapshots of the system not subjected to averaging, can also be used to produce a model. This particularly applies if the model variables level out after a brief time delay (e.g. injection mass, boost pressure). If used, intensely timedependent variables (e.g. temperatures or emissions) from dynamic measurements will result in model inaccuracies because the time input parameter will be missing. When creating the model, it is even occasionally absolutely essential to use measurements with combinations of parameters which do not occur at all during static engine operation. These measurement data must then either be measured dynamically or simulated or the engine will have to be appropriately conditioned. If this parameter space, which may only occur during dynamic operation for example, is not included as one element when creating the model, then, in real-life driving, this may lead to extrapolation with corresponding model defects. In order to avoid extreme and potentially impossible model values, the model value can in addition be limited in the engine management unit. 6.2 ASCMO DEUTZ uses the ASCMO DoE program, produced by ETAS, for populating an MD1 AMU model. ASCMO provides the option of producing DoE trials plans when generating a model. In the DoE trials plan, the entire x-dimensional model input parameter space within the real-life operating limits which are being determined for the engine (e.g. 
engine speed, boost pressure, injection mass, etc.) should be covered by combinations of parameters. If there are clear interdependencies between the model input parameters (e.g. the start of main injection is map-based depending on the mass of the main injection), then the input parameter space is not really x-dimensional but x-1 or even x-2 dimensional or even greater. In this case, all dependent input parameters should be ignored since, by implication, they would recur several times. All model input parameters must be genuinely adjustable within a significant range independent of the other input variables. Put simply, one can state that for every model input, just one system actuator (engine, test stand) needs to be present. (For example, injection mass => injectors, boost pressure => VTG actuator, exhaust gas backpressure => valve in the engine test stand’s exhaust gas system). In addition, when producing the trial plan, care should be taken to ensure that the x-dimensional input parameter space will be recorded as uniformly as possible using measuring points. If necessary, areas which are likely to experience a longer dwell time in real-life engine operation can be measured using more measuring points when creating the model. The model will then produce a more exact record of this area. 241 <?page no="252"?> 7.1 AMU-Based Functions on Engine ECUs 6.3 The DEUTZ TCD 3.6 EGR correction model The distribution of measuring points in the four-dimensional input parameter space (engine speed, boost pressure (P2), exhaust gas backpressure (P3) and the EGR sensor signal (𝒎 𝑺𝒆𝒏𝒔𝒐𝒓 ) can be illustrated by the TCD 3.6’s EGR correction model on the MD1 engine management unit. Figure 8: EGR correction model DOE test scheme 6.4 Validation and compression Once a trials plan has been worked through and the corresponding trials data are available, then a numerical model with very many grid points can first be produced ‘offline’ using ETAS ASCMO. Within ETAS ASCMO, the number of grid points in this large model can be greatly reduced by numerical optimisation or, in other words, compressed. The accuracy of the model will be virtually retained. After validation, possibly using additional static or dynamic measurement data, the model will finally be transferred to the MD1 engine control unit where it can then be validated on the engine. 242 <?page no="253"?> 7.1 AMU-Based Functions on Engine ECUs 7 Designing the TCD 3.6 EGR correction model When developing the numerical model for calculating the EGR mass flow for the DEUTZ TCD 3.6, the priority task was to work out how many and which sensor parameters should be chosen as model input parameters. Potentially, a greater number of model input parameters should improve the quality of the model, but only if they influence the model value being calculated to a significant degree. However, every additional input parameter also greatly increases both the time and effort needed for the application and the computing power required of the AMU. The optimum compromise turned out to consist of the four model input parameters: engine speed, boost pressure, exhaust gas backpressure and EGR sensor value. The number of model grid points chosen is another parameter which requires optimisation. A large number of grid points also potentially improves the quality of the model but again requires greater AMU computing power. As regards the EGR correction model for the TCD 3.6, there was no evidence of a further improvement in quality beyond 150 model grid points (see Figure 9). 
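To make the idea of a uniformly space-filling trial plan concrete, the following sketch selects measuring points for the four inputs of the EGR correction model with a simple greedy maximin heuristic. The input names, operating limits, point count and the algorithm itself are illustrative assumptions only; ASCMO and CAMEO use their own, more sophisticated DoE generators.

```python
import numpy as np

# Hypothetical operating limits for the four EGR correction model inputs;
# the real limits are engine specific and must be defined by the calibration team.
BOUNDS = {
    "engine_speed_rpm":          (800.0, 2800.0),
    "boost_pressure_hpa":        (1000.0, 2600.0),
    "exhaust_backpressure_hpa":  (1000.0, 3200.0),
    "egr_sensor_massflow_kg_h":  (0.0, 120.0),
}

def maximin_design(n_points, n_candidates=5000, seed=0):
    """Greedy maximin selection: repeatedly add the candidate that is farthest
    (in the normalised input space) from the points chosen so far.
    A simple space-filling heuristic, not the ASCMO algorithm."""
    rng = np.random.default_rng(seed)
    dim = len(BOUNDS)
    candidates = rng.random((n_candidates, dim))          # unit hypercube
    design = [candidates[0]]
    for _ in range(n_points - 1):
        chosen = np.array(design)
        dists = np.min(
            np.linalg.norm(candidates[:, None, :] - chosen[None, :, :], axis=2),
            axis=1,
        )
        design.append(candidates[np.argmax(dists)])
    # scale back to physical units
    lo = np.array([b[0] for b in BOUNDS.values()])
    hi = np.array([b[1] for b in BOUNDS.values()])
    return lo + np.array(design) * (hi - lo)

plan = maximin_design(n_points=200)    # one measuring point per row
print(plan.shape)                      # (200, 4)
```

Regions with a long expected dwell time in real operation can be emphasised simply by appending additional candidate points restricted to those sub-ranges before the greedy selection.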
Figure 9: EGR correction mode: optimised structure & size 243 <?page no="254"?> 7.1 AMU-Based Functions on Engine ECUs 8 The results After developing the function and applying the two numerical models for recording the cylinder head mass flow 𝑚 , the EGR mass flow 𝑚 and the model-based air path control (MCC), static and dynamic trials were carried out on the engine test stand. A steady state engine map measurement showed the modelled EGR mass flow to have an accuracy of 5% and better of the EGR final value. Figure 10: EGR correction model: steady state quality In an NRTC (Non-Road Transient Cycle), the measured engine air mass intake was compared with the cumulative difference between the cylinder head mass flow and EGR mass flow as measured by the MD1 engine management unit. This showed a cumulative accuracy by both models together of 1.8%. Figure 11: EGR correction & volumetric efficiency model: transient quality 244 <?page no="255"?> 7.1 AMU-Based Functions on Engine ECUs 9 Summary and outlook 9.1 Summary Two numerical models designed to calculate synchronously with each power stroke the volumetric efficiency and the exhaust gas recirculation mass flow were successfully implemented in the MD1 engine management unit on the DEUTZ TCD 3.6 industrial engine. The type and the number of input and output signals used, plus the number of model grid points, were optimised with a view, on the one hand, to minimal model errors and, on the other, to the application and computing complexity involved. The models were subjected on the engine to steady state and transient validation with good results. 9.2 Outlook The AMU models presented here will be applied structurally to all current DEUTZ engine development projects employing the MD1 engine management unit. The successful implementation of two AMU-assisted numerical models on the DEUTZ TCD 3.6 encourages us to apply these multi-dimensional numerical models to other complex engine and drive train management functions. Model-based multi-dimensional target value structures are also conceivable. The Robert Bosch GmbH is currently developing a new AMU generation. These new AMU will be designed for more mathematical flexibility and calculation performance. The aim of this development is to support various applications in the context of Machine Learning / Artificial Intelligence. 245 <?page no="256"?> 7.2 Efficient Calibration of Transient ECU Functions through System Optimization André Sell, Frank Gutmann, Tobias Gutmann Abstract The necessity of the recurring calibration of engine control functions for a large number of variants opens up potential for increasing efficiency. In this context, transient functions are moving more into focus due to legislative changes in the form of more dynamic certification cycles and the consideration of real driving. Since the well-known stationary DoE and MBC methods cannot be used directly for transient calibration tasks, methods have already been proposed for automated and efficient processing of such tasks. In this paper, some potentials of these methods are pointed out and suggestions for further development are made. These concern the amount of data to process and parameters to be optimized, the determinacy of the optimization problem and the plausibility of the optimized parameters. The SGE system optimization method applies these suggestions and addresses the potential by automatically calibrating ECU functions based on measurement or modeled data by simultaneously optimizing all calibration parameters. 
In this way, also transient systems can be calibrated efficiently, as the example of an exhaust gas temperature function shows. Kurzfassung Die Notwendigkeit der wiederkehrenden Applikation von Motorsteuerungsfunktionen für eine Vielzahl von Varianten eröffnet Potenzial zur Effizienzsteigerung. In diesem Zusammenhang rücken transiente Funktionen aufgrund gesetzlicher Änderungen in Form von dynamischeren Zertifizierungszyklen und der Berücksichtigung des realen Fahrens stärker in den Fokus. Da die bekannten stationären DoE- und MBC-Methoden nicht direkt für transiente Applikationsaufgaben eingesetzt werden können, wurden bereits Methoden zur automatisierten und effizienten Bearbeitung solcher Aufgaben vorgeschlagen. In diesem Beitrag werden einige Potenziale dieser Methoden aufgezeigt und Vorschläge zur Weiterentwicklung gemacht. Diese betreffen die Menge der zu verarbeitenden Daten und zu optimierenden Parameter, die Bestimmtheit des Optimierungsproblems und die Plausibilität der optimierten Parameter. Der SGE Ansatz der Systemoptimierung wendet diese Vorschläge an und adressiert damit das vorhandene Potenzial, indem sie Steuergerätefunktionen basierend auf Mess- oder modellierten Daten automatisch durch gleichzeitige Optimierung aller Parameter appliziert. Auf diese Weise können auch transiente Systeme effizient bearbeitet werden, wie das Beispiel einer Abgastemperaturmodells zeigt. 246 <?page no="257"?> 7.2 Efficient Calibration of Transient ECU Functions through System Optimization 1 Introduction 1.1 Motivation The calibration effort of parameters in present vehicle ECUs is growing due to the increasing diversification of drive, vehicle and country variants. In addition, changes in legislation in the form of more dynamic certification cycles and the consideration of real driving bring the transient behavior of the vehicle, especially with regard to emissions and diagnostic functions, more into focus. This further increases the calibration effort, since the known stationary DoE and MBC methods cannot be applied [1] and manual calibration is very time consuming [2]. Therefore, methods are required to make the calibration of transient functions more efficient and accurate in the daily routine of a calibration engineer. 1.2 State of the Art Proposals exist for the automated calibration of transient functions [2]. The SGE approach that does also apply to transient functions is the so-called system optimization. It permits the automated calibration of ECU functions by simultaneously optimizing all calibration parameters. The aim is to minimize the deviation of the ECU system behavior from a reference behavior [3]. Problems and limitations arising in this context are discussed in the following and potentials are pointed out. 2 Challenges and Concepts 2.1 Large amounts of data to be processed Transient control functions contain elements whose behavior is not only determined by the current state of the system inputs, but also depends on the previous state such as filters and integrators. Therefore, it is not possible to completely calibrate such systems by setting combinations of the inputs e.g. according to a DoE plan and measuring the outputs stationary. If the dynamic behavior of a control function is to be calibrated, time traces of the input and output signals of the system must be applied. This makes it necessary to process large amounts of data during optimization regardless of whether these originate from measurements, simulations or a dynamic model. 
This places increased demands on the plausibility of the data and the performance of the optimization [2]. One proposed solution is to reduce the time data to scalar describable properties (KPI values = "Key Performance Index" values) [1]. However, since these are limited to fixed maneuvers, they must be selected very carefully in order to cover the relevant operating range. In addition, the optimized calibration must be tested and confirmed for an extended operating range. The time-based data of the input and output signals required for optimization can be reduced by converting them from their acquisition or calculation grid to a dynamic grid derived from the course of the signals without exceeding a defined deviation. As a result, fewer points are placed in stationary phases than in dynamic phases. The ECU function to be optimized may have to be adapted if it was previously calculated in a constant grid, for example. Since the grid generation has to be done only once but 247 <?page no="258"?> 7.2 Efficient Calibration of Transient ECU Functions through System Optimization saves runtime in each calculation run of the optimization, a clear overall advantage of optimization time results. The following figure shows an example of the reduction of a signal by more than 80% of the points without the deviation from the output signal exceeding 1%. Figure 1: 80% signal reduction with less than 1% deviation If further runtime optimizing measures are added, such as the integration of the ECU function as a Simulink Host-Based Shared Library, typical calibration tasks such as the dynamic exhaust gas temperature model can be optimized overnight, which is usually sufficient for the calibration engineer as a typical user. 2.2 Large number of parameters to be optimized Although methods exist to directly map models in the ECU [4], most of the functions to be calibrated are traditionally implemented as a combination of maps, curves and scalar parameters. Since the system optimization optimizes all parameters of a function simultaneously and curves and maps consist of several to many individual parameters, the optimization must handle hundreds to thousands individual parameters. This results in two problems. On the one hand, the calculation time required for the optimization increases and on the other hand, flawed input data cause the optimization to use the high number of parameters to minimize the objective function by generating implausible and wavy maps [2]. The latter problem is discussed in the section 2.4. To compensate the long calculation runtimes that occur, it is proposed to replace the parameters with constructs that can be described with fewer parameters, such as polynomials [2] [5], approximation by individual cells [6] or LoLiMoT networks [7]. All of them have in common that restrictions of the form capabilities of the ECU parameters are made and thus the behavior of the ECU function cannot be mapped exactly if interpolation grid, incrementation or the interpolation routine deviate. When using polynomial models, another disadvantage is that the expected optimized parameter shape must be known in order to determine the polynomial order. Thus the utilization is more effortful when new or changed functions are to be calibrated because usually several iterative optimizations are necessary before suitable settings were found. Therefore, an algorithm is proposed which allows runtime advantages despite exact mapping of the ECU behavior and without prior knowledge. 
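The dynamic-grid reduction described in Section 2.1 can be made concrete with a short sketch: the signal is thinned so that linear interpolation over the kept points never deviates from the original by more than a defined tolerance. This is a minimal greedy implementation under illustrative assumptions, not the implementation used in the SGE tool chain.

```python
import numpy as np

def reduce_to_dynamic_grid(t, y, max_dev):
    """Thin a sampled signal to a non-equidistant grid such that linear
    interpolation over the kept samples deviates from the original signal by
    at most max_dev. Greedy refinement: start with the end points and
    repeatedly insert the sample with the largest reconstruction error."""
    keep = [0, len(t) - 1]
    while True:
        idx = sorted(keep)
        y_hat = np.interp(t, t[idx], y[idx])
        err = np.abs(y - y_hat)
        worst = int(np.argmax(err))
        if err[worst] <= max_dev:
            return np.array(idx)
        keep.append(worst)

# Toy usage: a speed trace with a stationary phase followed by a dynamic phase.
t = np.linspace(0.0, 12.0, 1201)
y = 1500.0 + 800.0 * (t > 10.5) * np.sin(4.0 * np.pi * (t - 10.5))
kept = reduce_to_dynamic_grid(t, y, max_dev=0.01 * np.ptp(y))   # 1 % tolerance
print(f"kept {len(kept)} of {len(t)} samples")   # far fewer points in the flat phase
```

As in the example above, stationary phases end up with very few points while dynamic phases retain a dense grid.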
In the proposed procedure, called initial estimation, a map or curve is first divided into a smaller number of single surfaces, similar to the approximation by single cells [6], so that the number of parameters to be optimized is reduced. This division is not static, but adapts itself in the course of the optimization. As the optimization progresses, the cells are divided repeatedly until, at the end of the optimization, the map again corresponds to the ECU state and thus the optimization result is not subject to any restrictions. As an example, the following figure illustrates the initial reduction of a map from 144 to 16 individual parameters.

Figure 2: Initial estimation map reduction

The following figure shows the runtime reduction that can be achieved for the load detection function of a gasoline engine with five maps. The course of the objective criterion of two optimizations is shown; the two runs differ only in the use of the initial estimation. With initial estimation, the 768 individual parameters were reduced to 80 (-90 %). In this way, the optimization is completed in 20 min instead of 120 min of runtime (-83 %).

Figure 3: Optimization performance due to the initial estimation feature (objective function output over optimization duration, with and without initial estimation)

2.3 Underdetermined optimization task

The optimization task can be underdetermined if, for example, individual map parameters exist for which there are no input data points in any of the four neighboring map sections. Such a map parameter then has no influence on the objective criterion and therefore cannot be determined in a unique way during the optimization. There are suggestions to consider a smoothness-dependent penalty term in this case [8], which will be dealt with in Section 2.4. Underdeterminedness also commonly occurs when parameters of a function are summed or multiplied, so that an infinite number of value combinations produces the same result. As described in [2], this case can be made unique by constraints of the optimization. In the following example, in which the optimal ignition angle of a torque model is determined as the sum of two parameters, the lambda correction in CURVE_ZWOPTLAM can, for example, be set to zero for lambda 1.

Figure 4: Underdetermined optimization task

2.4 Smoothness / plausibility of the optimized parameters

An important criterion for the evaluation of the optimization, besides the accuracy, is the plausibility and smoothness of the resulting parameters. Due to an underdetermined optimization task or flawed measurement data, the results of the optimization can be unsatisfactory, and considerable postprocessing is then required [2]. Two approaches exist here. On the one hand, the mentioned reduction of the parameters to e.g. polynomial models restricts the degrees of freedom, so that the results are necessarily smooth, albeit with the disadvantages discussed above.
On the other hand, smoothness in the form of constraints or penalty terms [8] can be considered during optimization. Constraints limit the permissible gradients and curvatures. Within these limits, however, they do not influence the optimization, so that the result will not be smoother than these limits. In other words, it is very difficult for the user to define these limits in such a way that they provide plausible smooth results without worsening the accuracy to an undesirable extent. Similarly is behaves with the quantitative adjustment of penalty terms to take smoothness into account. These always deteriorate the result of the optimization and also require careful tuning. 250 <?page no="261"?> 7.2 Efficient Calibration of Transient ECU Functions through System Optimization Both smoothness criteria as a constraint and a penalty term must be individually adapted to each parameter to be optimized if the shapes of the parameters significantly differ. They also strongly depend on the quality of the data for the input and output signals. Flawed data requires stronger smoothness criteria than error-free data. Experience has shown that both methods require an iterative adjustment over several optimizations, which eliminates part of the efficiency gain through optimization. An additional problem of penalty terms is that they are also applied to parameter sections for which no data is available. Since in these sections the parameter values have no influence on the optimization result, these sections will usually be wavy and implausible after optimization [2]. If one then applies a penalty term that evaluates the smoothness of the entire parameter as a whole, the smoothing of the underdetermined sections will unnecessarily worsen the optimization result, since the determined sections will also be smoothed at cost of the objective. As already mentioned in [8], it is proposed not to consider a smoothness criterion as a penalty term but to use it as a further criterion for optimization. This then becomes a multi-criteria optimization and avoids the problem of worsening of the result due to the smoothing of underdetermined sections. However, there is still a need to adjust the weighting of the smoothness criteria of the individual parameters. As part of our system optimization, we have developed a procedure for taking smoothness into account that does not require manual adjustment. The optimization algorithm does not consider the smoothness directly. Instead parallel to the optimization, a smoothing algorithm operates, which considers all parameters simultaneously, analogous to the optimization, and minimizes the gradient and curvature of the parameters using a criterion similar to [8]. It is limited by a maximum permissible worsening of the objective function caused by the smoothing. Optimization and smoothing algorithms are regularly exchanging data and integrate the respective progress. What's new is that the smoothness is not used directly as a penalty or constraint, but only the worsening of the objective function caused by the smoothing is taken into account. There are some advantages to this approach. On the one hand, no manual parameterization of a smoothing criterion is necessary. Each parameter is smoothed individually up to the permitted threshold of the objective function. Thus, sections where only little error occurs due to smoothing (e.g. 
overdetermined areas due to multiple data) are strongly smoothed, while other sections are only slightly adjusted if much error would occur due to smoothing. Underdetermined sections are smoothed even completely. Furthermore the smoothing can compensate roughness between the parameters by the simultaneous processing of all parameters. This is very relevant when optimizing underdetermined functions, which contain multiplication or summation (see section 2.3). In such sections of the parameters that are not defined by limits, large smoothing advances without loss of quality are usually possible, since any number of combinations of a multiplication and sum provide the same result as described above. Since the smoothing algorithm evaluates the objective function, this procedure results in an additional runtime compared to pure optimization without considering smoothness. However, in our experience, this procedure supports a plausible progress of the optimization and avoids local optima. In addition, the increase in runtime is within a 251 <?page no="262"?> 7.2 Efficient Calibration of Transient ECU Functions through System Optimization range that allows overnight processing for typical calibration tasks, which is usually sufficient for a calibration engineer as a typical user. As part of the system optimization, the user is provided with a comprehensive graphical user interface for postprocessing after completion of the described combined optimization and smoothing. There it is possible to perform manual or automatic postprocessing. The time related output of the objective function is available for comparison of all settings at any time. The smoothing algorithm already used during optimization is also available. In postprocessing, the constraint error threshold is adjustable which enables the user to conveniently weight accuracy and smoothness. This is a great advantage especially for flawed data and is much easier to handle than making fixed adjustments at the pre-processing of the optimization. 3 Application example 3.1 Calibration of a Transient Exhaust Temperature Function The use of the system optimization with the features described before is explained now using a typical ECU function of an exhaust gas temperature model, which is somewhat outdated. Although there are more modern functions for mapping the physical behavior, this is a good example of a typical work package of a calibration engineer who has to work on existing functions that cannot completely map the physical behavior. The function is illustrated in the following figures. It maps the gas temperature before the catalyst and the material temperature of the catalyst depending on 9 input signals. In a first step the stationary exhaust gas temperature before catalyst is calculated. Afterward the transient behavior before catalyst is applied and finally the exothermic and transient behavior of the catalyst is modeled. Figure 5: Stationary part of the ECU function calibrate 252 <?page no="263"?> 7.2 Efficient Calibration of Transient ECU Functions through System Optimization Figure 6: Transient part of the ECU function calibrate Time-based measurement data containing all input signals and the two measured temperatures a total of 30000 data points are available as a reference. The data is derived from chassis dynamometer measurements and thus allows the calibration of parameters that are depending on the input signals being varied during data recording. These are 6 maps, curves and scalars consisting of 142 single parameters. 
Further parameters describing the ignition angle and lambda dependence were not varied in the measurements and were therefore taken from an existing dataset, which had been determined in advance on the engine test bench. The objective criterion was implemented by integrating a Simulink Host-Based Shared Library of the ECU function and calculating the deviation from the reference data for both temperatures. Since a certain degree of temperature deviation is less relevant, a deviation weighting was introduced to place more emphasis on minimizing large deviations. Here the user has every opportunity to apply his experience and priorities in order to guide the optimization in the desired direction. This allows a compromise to be defined if an ECU function cannot exactly map the physical behavior. The resulting signal is converted into a scalar quality criterion by the optimization. The calibration parameters are adjusted in such a way that this criterion is minimized.

Figure 7: Objective function

For this function the optimization needs 2 hours of computing time on a standard computer. The postprocessing takes about 15 minutes. As described, the user can view the results of the optimization in postprocessing and, if necessary, adjust and further smooth them. Graphically guided, a continuous selection between the raw result of the optimization and a very smooth variant with a significant increase of the constraint error is available, as well as manual editing and data exchange with calibration data files. This enables a comparison with existing calibrations. Afterwards the parameters can be transferred directly into the ECU. See the following figures for the optimized calibration parameters of the exhaust gas temperature function after postprocessing.

Figure 8: Calibration parameters after postprocessing (among them T_CAT_IN_FILT and T_CAT_FILT over engine_massflow, and T_CAT_IN_SPEED over vehicle_speed)

During postprocessing, the time-related objective function output as well as input and interim signals are always available for a comparison of all variants in a so-called system signal view. In this way, the effects of postprocessing and smoothing can be evaluated in relation to the time data and thus to the actual deviations of the properties to be optimized.
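The deviation weighting described above can be sketched as follows. The dead band, the emphasis exponent and the signal names are illustrative assumptions, not the weighting actually used in the described project.

```python
import numpy as np

def weighted_objective(temp_meas, temp_sim, dead_band=10.0, emphasis=2.0):
    """Scalar criterion for one temperature signal: absolute deviations inside
    the dead band are ignored, larger deviations are progressively emphasised.
    dead_band and emphasis are illustrative tuning knobs."""
    dev = np.abs(np.asarray(temp_meas) - np.asarray(temp_sim))
    weighted = np.maximum(dev - dead_band, 0.0) ** emphasis
    return float(np.sum(weighted))

def total_objective(ref, sim):
    """Combine both model outputs (gas temperature before the catalyst and
    catalyst material temperature) into one scalar objective value."""
    return (weighted_objective(ref["temp_catalyst_in"], sim["temp_catalyst_in"])
            + weighted_objective(ref["temp_catalyst"], sim["temp_catalyst"]))
```

The optimizer then only sees the returned scalar, which is exactly the role of the quality criterion described in the text.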
Figure 9: System Signal View - measured signals (temp_catalyst_in, temp_catalyst), optimized results (*_sim_opt), smoothed results (*_sim_smooth) and vehicle_speed, plotted over samples

As can be seen from the previous illustration, this simple ECU function provides a good representation of the physical behavior with plausible shapes of the parameters to be optimized. The optimized signals (*_opt) map the measured ones well, and there is only little deterioration through the postprocessing (*_smooth). Only at the beginning of the measurement, during warm-up, are there relevant deviations, since the function does not consider the temperature as an input variable.

4 Summary

In this paper some potentials of the already known methods for automated calibration of transient ECU functions were pointed out and suggestions for further development were made. Transient control functions must be calibrated based on time-related data of the input and output signals. This makes it necessary to process large amounts of data, which reduces the performance of the optimization. To avoid some limitations of the known "Key Performance Index" approach [1], it is proposed to reduce the time-based data to a dynamic grid derived from the course of the signals without exceeding a defined deviation. The reduced amount of data yields an advantage in optimization time without significant deviation of the optimization result. Traditional approaches implement ECU functions as a combination of maps and curves consisting of several hundred to several thousand individual parameters. This results in a significant performance loss of the optimization and, in the case of flawed data, also in implausible and wavy parameter shapes. Derived from existing approaches that reduce maps and curves to constructs which can be described with fewer individual parameters, a new dynamic reduction mechanism called initial estimation is proposed to avoid the drawbacks resulting from the necessity of prior knowledge and a modified implementation of the ECU function. To ensure plausible and smooth shapes of the parameters to be optimized even for underdetermined optimization tasks and flawed measurement data, approaches already exist that use smoothness criteria in the form of optimization constraints or penalty terms in the objective function. This results in some disadvantages for the optimization and usually requires an iterative adjustment, which eliminates part of the efficiency gain achieved through optimization. Therefore, a new proposal was made for considering smoothness without the need for manual adjustment. In parallel to the optimization, a smoothing algorithm operates that does not influence the optimization through a penalty or constraint, but only regards the worsening of the objective function caused by the smoothing.
In this way, an individual smoothing of the parameters is made possible based on the objective and no assumptions about the shape of the parameters are required. The capabilities of smoothing are further enhanced by the options available during postprocessing. Finally, the suggestions were applied to the automated calibration of an exhaust gas temperature function resulting in a good representation of the physical behavior with plausible shapes of the parameters to be optimized, which is supported by extensive and comfortable features during postprocessing for influencing and evaluating the result. 256 <?page no="267"?> 7.2 Efficient Calibration of Transient ECU Functions through System Optimization 5 Reference [1] Stefan Scheidel, Marie-Sophie Gande - AVL List GmbH, "DOE-based transient maneuver optimization," in International Symposium On Development Methdology, Wiesbaden, 2017. [2] Daniel Rimmelspacher, Dr. Wolf Baumann, Ferhat Akbaht, Dr. Karsten Röpke - IAV GmbH; Prof. Dr. Clemens Gühmann - Technische Universität Berlin, "Usability of computer-aided ECU calibration," in International Symposium On Development Methdology, Wiesbaden, 2017. [3] A. Sell, F. Gutmann, T. Gutmann - SGE Ingenieur GmbH, System optimization for automated calibration of ECU function; Automotive Data Analytics, Methods, DoE, Renningen: expert Verlag, 2016. [4] M. G. Stefan Angermaier, Implementation of data-based models using dedicated machine learning hardware (AMU) and its impact on function development and the calibration processes; Automotive Data Analytics, Methods, DoE, Renningen: expert Verlag, 2017. [5] I. Brahma, The Challenges of an Emirical Model Based Transient Calibration Process; Design of Experiments (DoE) in Engine Development, Renningen: Expert Verlag, 2011. [6] D. S. H. L. M. C. S. M. Grégory Font, Derivative Free Optimization Method and Physical Simulations Coupled with Statistical Models for Transient Engine Calibration; Design of Experiments (DoE) in Engine Development, Renningen: expert Verlag, 2011. [7] O. Nelles, "Lokale, lineare Modelle zur Identifikation nichtlinearer, dynamischer Systeme," in at - Automatisierungstechnik, Band 45, Heft 4, ISSN (Online) 2196- 677X, ISSN (Print) 0178-2312, DOI: https: / / doi.org/ 10.1524/ auto.1997.45.4.163. , 1997, pp. 163-174. [8] R. L. Y. D. F. K. T. K. Jan-Christoph Goos, Computing Optimized Calibration Maps including Model Prediction and Map Smoothness; Design of Experiments (DoE) in Engine Development, Renningen: expert Verlag, 2015. 257 <?page no="268"?> 7.3 Dynamic Safe Active Learning for Calibration Mark Schillinger, Benjamin Hartmann, Martin Jacob Abstract Dynamic data-based models offer various advantages for the calibration of combustion engines. These models not only describe the instantaneous relationship between one or more inputs and outputs. Rather the outputs are also dependent on the history of the input signals. Dynamic models can be used in an automatic optimization of the calibration, for controller tuning or in the evaluation of the calibration according to real driving emissions. Similar to stationary data-based models, dynamic models require informative measurement data to obtain a good accuracy. Infeasible input values which would lead to critical system states have to be avoided during the measurements. It is noteworthy that input signals which are infeasible for stationary measurements can still be feasible if only applied for a limited time. 
Thus, stationary and dynamic system boundaries are not identical and a stationary boundary identification is not sufficient. Finally, the effort for planning the experiments and performing the measurements should be as small as possible. In this contribution, the method dynamic safe active learning (dynamic SAL) is presented, which addresses the challenges mentioned above. The method learns a dynamic Gaussian process model online from simultaneously performed measurements. The dynamic excitation signals are optimized during the test run based on their information gain while considering safety constraints. Dynamic SAL is based on the previously published stationary SAL algorithm. It does not only generate and measure stationary points but dynamic trajectories instead. In detail, the algorithm features two dynamic GP models. The first one is a model of the system behavior. The trajectories are chosen iteratively such that this model’s differential entropy gain is maximized. The second model learns the strain of the system under test. It is called discriminative model. Trajectories which have a low probability of feasibility are rejected during the optimization. This contribution proposes the practical implementation and evaluation of the algorithm at the high pressure fuel supply system of a test vehicle. The proposed algorithms are running in real-time and all calculations are done online during the vehicle measurements. Kurzfassung Dynamische datenbasierte Modelle bieten eine Reihe von Vorteilen für die Applikation von Verbrennungsmotoren. Diese Modelle beschreiben nicht nur einen festen Zusammenhang von einem oder mehreren Eingangs- und Ausgangsgrößen, sondern auch 258 <?page no="269"?> 7.3 Dynamic Safe Active Learning for Calibration den Einfluss der Historie der Eingänge auf die Ausgänge. Dynamische Modelle können beispielsweise für eine automatisierte Optimierung der Applikation, für die Einstellung von Reglern oder die Untersuchung der Applikation bezüglich Realfahrtemissionen verwendet werden. Genau wie stationäre datenbasierte Modelle benötigen dynamische Modelle informative Messdaten, um eine hohe Modellgüte zu erreichen. Gleichzeitig müssen unzulässige Eingangswerte, die zu kritischen Systemzuständen führen können, während der Vermessung vermieden werden. Dabei ist zu beachten, dass Eingangssignale, die im Stationärbetrieb unzulässig sind, bei einer dynamischen Messung durchaus zulässig sein können, sofern sie nur für eine begrenzte Zeit am Prüfling anliegen. Dynamische und stationäre Grenzen der Eingangssignale sind somit nicht deckungsgleich und eine rein stationäre Grenzschätzung ist nicht ausreichend. Zu guter Letzt soll der Aufwand für die Versuchsplanung und die Durchführung der Messungen möglichst gering gehalten werden. In dieser Veröffentlichung wird die Methode dynamisches sicheres aktives Lernen (dynamic safe active learning; dynamic SAL) vorgestellt, die die oben genannten Herausforderungen adressiert. Diese Methode trainiert ein dynamisches Gaußprozessmodell iterativ während der Vermessung. Die bei der Messung verwendeten dynamischen Anregungssignale werden ebenso während der Messung optimiert. Ziel dieses Vorgehens ist, den Informationsgewinn unter Einhaltung der Sicherheitsgrenzen zu maximieren. Dynamic SAL basiert auf einem bereits veröffentlichten stationären SAL Algorithmus. Im Gegensatz zu diesem generiert und vermisst das neue Verfahren nicht nur stationäre Punkte, sondern dynamische Trajektorien. Der Algorithmus verwendet zwei dynamische GP-Modelle. 
Das erste Modell bildet das Systemverhalten nach. Die zu vermessenden Trajektorien werden iterativ so optimiert, dass der Gewinn an differentieller Entropie dieses Modells maximiert wird. Das zweite Modell lernt den Belastungszustand des zu vermessenden Systems und wird diskriminatives Modell genannt. Trajektorien für die dieses Modell eine zu hohe Systembelastung prädiziert, werden in der Optimierung nicht berücksichtigt. Diese Veröffentlichung beschreibt die konkrete Implementierung und Evaluierung des Verfahrens am Hochdruckkraftstoffsystem eines Versuchsfahrzeugs. Die vorgeschlagenen Algorithmen laufen in Echtzeit und alle nötigen Berechnungen werden online während der Vermessung am Fahrzeug durchgeführt. 1 Introduction In the field of combustion engine calibration, data-based modeling methods are state of the art and well established since many years. However, the trend is towards more and more complex modeling tasks, because the dynamics of real systems have to be reproduced in detail. The simulation of dynamic, nonlinear control plants becomes much more relevant, especially with respect to the increasing requirements regarding real driving emissions. In practice, the task of dynamic data-based modeling leads to several challenges: 259 <?page no="270"?> 7.3 Dynamic Safe Active Learning for Calibration 1. The modeling approaches become more and more complex, especially, when the underlying dynamic structure of the control plant is unknown and nonlinear effects have to be modeled. 2. The requirements of dynamic measurements of the control plants regarding the automation system often can be challenging. 3. An a priori-definition of dynamic boundaries of the experimental design is difficult. Moreover, the automated exploration of the dynamic experimental design space is a highly complex task. At Bosch, the online exploration and modeling of critical engine operation limits for engine test bed measurements have proven their practical relevance, see [4, 9]. Beyond that, the authors successfully applied the method stationary safe active learning, used for the online training of static Gaussian process models, see [9, 10]. Based on these experiences and results, our goal was to transfer these methods to the dynamic modeling case. Especially, Bosch Corporate Research’s development of a new algorithm for dynamic safe active learning (see [19]) enabled the practical implementation. Their main contribution is the simultaneous training of a dynamic GP regression model using active learning combined with the exploration and modeling of the dynamic operating limits. The main challenge for Bosch Engineering was the translation of the algorithms, that were tested in simulations only, to the real world test vehicle. As a result, a real-time automation concept was developed and the algorithms were tested and evaluated for a high pressure fuel supply system of a gasoline engine. The findings of these investigations are summarized in this contribution. In the literature, several approaches for dynamic active learning algorithms were published, especially with focus on combustion engine measurements. Reference [2] proposes an adaptive experiment design for dynamic engine measurements. APRB-signals or ramp-hold-signals are optimized online using either D-optimality or a model-committee criterion. Multilayer perceptron neural networks are utilized for modeling. The different methods are evaluated at a gasoline engine. 
The evaluation not only focuses on the dynamic accuracy, but also highlights an improved stationary accuracy of the identified models. In [1] the approach is enhanced to Runge-Kutta neural networks. Furthermore, system limits are incorporated in the design, which were omitted earlier. In [17] an online DoE based on the receding horizon principle of model predictive control is presented. The future trajectory is optimized in each step for a finite prediction horizon, but only the first planned point is applied to the system. A D-optimality criterion is used to generate most informative queries. System output limits are also incorporated. For modeling, local model networks are chosen. The approach is successfully evaluated in simulation. Unfortunately, the runtime of the algorithm is not addressed. Probably this idea will be hard to apply to a real-world system, as the optimization in each time step might be computationally demanding. In [16] the selection criterion is further developed towards a D-S-optimality criterion. More details on the algorithm and further simulation examples are presented. Real-time capability is mentioned as a challenge, but it is not stated whether it was achieved and, if so, for which sampling time. 260 <?page no="271"?> 7.3 Dynamic Safe Active Learning for Calibration In [11] a dynamic safe learning algorithm combining Gaussian processes (GP) and cubic Bézier curves is presented. For modeling, the thesis introduces a sparse deterministic training conditional GP approach with a maximum error insertion criterion. In fact, the selection of new measurement points does not use any online criteria. Instead, the Bézier curves are chosen based on a sobol set in phase space I . Besides this trajectory generation scheme, the thesis focuses on the safety criterion. Similar to the stationary safe active learning algorithm introduced in [13], a discriminative model is learned to separate feasible from infeasible trajectories. The algorithm from [11] was further developed at the Bosch Corporate Sector Research and Advance Engineering. The new version features two main differences to the original one: First, instead of Bézier curves, ramps are used. They can be computed more efficiently and especially the check if a trajectory is within the box-constraints framing the phase space is less computationally demanding. Second, the algorithm was extended to be a “real” active learning strategy, i.e. the trajectory sections are optimized during the measurement procedure. Therefore, a differential entropy criterion is used. The theoretical aspects of this algorithm are covered in [19]. The paper at hand is based on the same algorithm and presents its first real-world application. Thereby it reuses parts of the PhD thesis [8]. The remainder of this paper is structured as follows. Section 2 provides fundamentals required in the subsequent sections. In Section 3, dynamic SAL’s methodology and its implementation at the high pressure fuel supply system are described. Afterwards, measurement results are presented in Section 4 and discussed in Section 5. The contribution closes with a conclusion in Section 6. 2 Fundamentals In this section some fundamentals required for the remainder of the paper are summarized. Subsection 2.1 covers the modeling algorithm dynamic Gaussian process models. Afterwards, active learning and the previously published stationary safe active learning algorithm are described in Subsection 2.2 and 2.3. 
The dynamic safe active learning approach presented in this paper will be evaluated at the high pressure fuel supply system of a gasoline system, which is introduced in the last Subsection 2.4. 2.1 Dynamic Gaussian Process Models Gaussian process models are a nonparametric Bayesian modeling approach. Opposed to parametric models like transfer functions, GP models represent the system’s latent function 𝑓 (𝒙) by a stochastic process, the eponymous Gaussian process 𝑓 (𝒙) ∼ 𝒢𝒫 (𝑚 GP (𝒙), 𝑘(𝒙, 𝒙 ′ )) (1) with mean function 𝑚 GP (𝒙) and covariance function 𝑘(𝒙, 𝒙 ′ ) . A priori, i.e. before taking any training data into account, a Gaussian distribution with constant mean 𝑚 GP (𝒙) = 𝜇 0 and covariance function 𝑘(𝒙 𝑖 , 𝒙 𝑗 ) is assumed for any set of I The phase space is spanned by all input signals and their derivatives. 261 <?page no="272"?> 7.3 Dynamic Safe Active Learning for Calibration function evaluations. This GP is called prior. In most cases the prior mean is set to 𝜇 0 = 0 without loss of generality, which is also presumed in the following. The process is subsequently conditioned on the training data to obtain the posterior. Thanks to the probabilistic framework this is possible analytically for regression models and yields the predictive distribution 𝑝(𝑦 ∗ |𝒙 ∗ , 𝒟 ) = 𝒩 (𝜇 ∗ , 𝜎 2 ∗ ), with { 𝜇 ∗ = 𝒌 T∗ (𝑲 + 𝜎 2 n 𝑰 𝑚 ) −1 𝒚 𝜎 2 ∗ = 𝜎 2 n + 𝑘 ∗∗ − 𝒌 T∗ (𝑲 + 𝜎 2 n 𝑰 𝑚 ) −1 𝒌 ∗ , (2) see [7]. Thereby, 𝒟 = {𝒙 𝑖 , 𝑦 𝑖 |𝑖 = 1, … , 𝑚} denotes the 𝑚 input-output pairs used for training, 𝒌 ∗ ∈ ℝ 1×𝑚 the vector of covariances between the latent function value 𝑓 (𝒙 ∗ ) at the test point 𝒙 ∗ and the training function values, 𝑲 ∈ ℝ 𝑚×𝑚 the covariance matrix of the training function values, 𝜎 2 n the noise variance, 𝑰 𝑚 the 𝑚 × 𝑚 identity matrix, 𝒚 ∈ ℝ 1×𝑚 the vector of training outputs, and 𝑘 ∗∗ = 𝑘(𝒙 ∗ , 𝒙 ∗ ) the prior variance of 𝑓 (𝒙 ∗ ) . 𝒌 ∗ and 𝑲 are composed using the covariance function as 𝒌 ∗ = ⎡⎢⎢⎢⎢⎣ 𝑘(𝒙 1 , 𝒙 ∗ ) ⋮ 𝑘(𝒙 𝑚 , 𝒙 ∗ ) ⎤⎥⎥⎥⎥⎦ and 𝑲 = ⎡⎢⎢⎢⎢⎣ 𝑘(𝒙 1 , 𝒙 1 ) ⋯ 𝑘(𝒙 1 , 𝒙 𝑚 ) ⋮ ⋱ ⋮ 𝑘(𝒙 𝑚 , 𝒙 1 ) ⋯ 𝑘(𝒙 𝑚 , 𝒙 𝑚 ) ⎤⎥⎥⎥⎥⎦ . (3) The covariance function is usually parameterized by a set of hyperparameters. For example, in case of the commonly used squared exponential covariance function 𝑘(𝒙 𝑖 , 𝒙 𝑗 ) = 𝜎 2 0 exp (− 1 2 (𝒙 𝑖 − 𝒙 𝑗 ) T 𝜦 −1 (𝒙 𝑖 − 𝒙 𝑗 )) , with 𝜦 = diag(𝜆 21 , … , 𝜆 2𝑑 ), (4) the hyperparameters are the prior variance 𝜎 2 0 , the length-scales 𝜆 1 , … , 𝜆 𝑑 , and the noise variance II 𝜎 2 n . The dimension of the input space is denoted as 𝑑 . The hyperparameters can be learned by maximizing the log marginal likelihood log 𝑝(𝒚|𝜃) = − 1 2 𝒚 T (𝑲 + 𝜎 2 n 𝑰 ) −1 𝒚 − 1 2 log|𝑲 + 𝜎 2 n 𝑰 | − 𝑚2 log 2𝜋. (5) More information can be found in [7]. For dynamic modeling with GP models, an external dynamics approach can be used. More specific, a nonlinear autoregressive with exogenous input (NARX) structure is utilized in this contribution. Therefore, a time-dependent feature vector 𝝋(𝑘) is constructed, using delayed system inputs 𝑥 𝑖 and outputs 𝑦 : 𝝋 T (𝑘) = [𝑦(𝑘 − 1), … , 𝑦(𝑘 − 𝑛), 𝑥 1 (𝑘), … , 𝑥 1 (𝑘 − 𝑛), 𝑥 2 (𝑘), … , 𝑥 𝑑 (𝑘 − 𝑛)], (6) where 𝑛 is the maximum output and input delay. Note that not all 𝑥 𝑖 (𝑘 − 𝑙) and 𝑦(𝑘 − 𝑙), 𝑖 ∈ [1, 𝑑], 𝑙 ∈ [0, 𝑛] have to be present in 𝝋(𝑘) . This feature vector is used as input to a nonlinear stationary GP model. Thus, the model output becomes 𝑦 ∗ = 𝑓 NL (𝝋(𝑘)) . See [6] for more details. II The noise variance is no parameter of the covariance function, but plays an analogous role. 
Thus, it is considered as an additional hyperparameter here, similar to [7]. 262 <?page no="273"?> 7.3 Dynamic Safe Active Learning for Calibration Unfortunately, GP models suffer from a high number of training samples 𝑚 , as the complexity for model training is 𝒪 (𝑚 3 ) . This is due to the matrix inversion in (2). Thus, more efficient sparse GP models were developed. They use different approximations to reduce the complexity to 𝒪 (𝑚𝑚 2idx ) , where 𝑚 idx denotes the size of an index set and is usually much smaller than 𝑚 , see e.g. [12]. The use of sparse GPs is especially frequent in combination with dynamic modeling. 2.2 Active Learning Active Learning is a discipline of machine learning. This technique addresses problems where the acquisition of labeled training data is very expensive. Its basic idea is that a learning algorithm selects unlabeled data from a pool and requests an “oracle” to label this data. See [15] for an overview of different active learning approaches. For example, in case of stationary measurements at an engine test bench, the unlabeled data are input points for the engine. They can be generated at nearly no cost. Labeling this data means taking measurements at the test bench, which is comparatively expensive and time consuming. The goal of letting the learning algorithm choose the data to be labeled, i.e. iteratively optimizing the position of the input points, is to improve the quality of the resulting model. Alternatively, the number of measurement points necessary to obtain a required model quality could be reduced. Furthermore, it is not necessary to design a measurement plan in advance. In this contribution, an active learning approach based on Gaussian process models and a differential entropy criterion is used. Thereby, a GP regression model is not only trained after the complete measurement process is done, but already during the measurement after a small amount of samples was aquired. The differential entropy criterion is frequently used in the active learning literature. It is equivalent to choosing queries with maximum predicted posterior variance, which renders it easily applicable in combination with GP models, see [13]. Optimizing the trajectories online, opposed to planning them in advance, and utilizing the learned model properties are the major differences between active learning and classical design of experiments (DoE) methods. 2.3 Stationary Safe Active Learning In previous publications a stationary safe active learning (SAL) algorithm was presented, compare [9, 10, 13]. It combines the aforementioned differential entropy-based active learning approach with a discriminative model. This second model learns the input boundaries of the system under test, i.e. how to separate the whole input space 𝕏 into its feasible part 𝕏 + and the infeasible part 𝕏 − . It replicates a health function which indicates which parts of the input space are feasible and which parts are infeasible, i.e. where the strain on the system exceeds a given limit. The health function value of a specific point in the input space 𝒙 is calculated from a combination of supervised output signals 𝒛 using a predefined risk function ̃ ℎ(𝒛) . If, for example, a maximum temperature may not be exceeded, the risk function will be designed such that a high measured temperature will result in a low health function value. If the health function drops below 263 <?page no="274"?> 7.3 Dynamic Safe Active Learning for Calibration zero, a point is considered infeasible. 
Infeasible points should be avoided during the measurement in order not to damage the system under test. While optimizing the next query point, the predicted health function 𝑔 ∗ is required to be positive with a minimum user-defined probability 𝑝 . Formally speaking it is demanded that Pr(𝑔 ∗ ≥ 0|𝒙 ∗ , 𝒉, 𝑿) ≥ 𝑝, (7) where 𝒙 ∗ is the query point, 𝒉 the vector of measured discriminative function values used for training and 𝑿 the matrix containing all training inputs. After some transformations this results in the selection scheme 𝒙 𝑖+1 = argmax 𝒙 ∗ ∈𝕏 𝜎 2 𝑓 ∗ (𝒙 ∗ ) (8a) 𝑠.𝑡. 𝜇 𝑔∗ (𝒙 ∗ ) − 𝜈𝜎 𝑔∗ (𝒙 ∗ ) ≥ 0. (8b) Variable 𝒙 𝑖+1 is the next input point to be measured, 𝜎 2 𝑓 ∗ the posterior variance of the regression model, 𝜇 𝑔∗ and 𝜎 𝑔∗ are predicted mean and variance of the discriminative model, and 𝜈 is a confidence parameter. A higher 𝜈 requires either a higher confidence of the discriminative model’s prediction, corresponding to a small 𝜎 𝑔∗ , or a larger predicted mean value 𝜇 𝑔∗ . Thus, 𝜈 corresponds to the minimum required probability of feasibility for future queries. 2.4 The High Pressure Fuel Supply System Injection Time Rail Pressure Engine Speed Actuation Figure 1: Sketch of the HPFS system’s main components, inputs (continuous lines), and output (dashed line). The figure is taken from [18]. The dynamic SAL approach presented in this paper is evaluated at the high pressure fuel supply (HPFS) system of a 1.4 L four-cylinder gasoline engine in a test vehicle. The HPFS system features three input and one output signal, as shown in Figure 1. For this evaluation, the injection time was set by the ECU, in order to prevent extinguishing the combustion or damaging components. Furthermore, all measurements are executed with no load, therefore no vehicle test bench is necessary. With these simplifications, two inputs remain, the engine speed and the fuel pump actuation. The rail pressure is the output to be modeled and the supervised variable. It must not exceed a given limit. 264 <?page no="275"?> 7.3 Dynamic Safe Active Learning for Calibration Table 1: Comparison of the requirements on stationary and dynamic SAL. stationary SAL dynamic SAL system is time dependent no yes regression model stationary dynamic discriminative model stationary dynamic typical number of samples low ( 10 1 ∼ 10 3 ) high ( 10 4 ∼ 10 6 ) GP models regular sparse optimization variable next point parameterized trajectory real-time capability not required required space to be explored physical inputs also delayed inputs 3 Dynamic Safe Active Learning In this section, the new dynamic safe active learning approach is described. Subsection 3.1 focuses the methodology, whereas Subsection 3.2 covers challenges of a real-world application of dynSAL. 3.1 Methodology In case of dynamic Safe Active Learning, several aspects change compared to the stationary case presented in 2.3, which render the SAL algorithm more complex. Table 1 provides an overview. The most notable difference is that dynamic instead of stationary models have to be used. This does not only affect the regression model, but also the discriminative model. Hence, the feasible and infeasible input spaces 𝕏 + and 𝕏 − become time-dependent as well, i.e. 𝕏 + (𝑡) and 𝕏 − (𝑡) . The complete input space 𝕏 defined by box-constraints remains time independent, though. The number of samples obtained from the system under test often differs by multiple orders of magnitude. 
Stationary systems in the automotive domain are usually modeled using a few tens to a few thousand samples. Dynamic systems are commonly sampled with a fixed sampling time. Many systems in the automotive domain operate with sampling times in a range of 10 ms to 100 ms, which results in a much higher number of samples than in the stationary case already after a few minutes of measuring. This poses a burden for the modeling algorithm. Especially GP models suffer from a high number of training samples. Thus, more efficient sparse GP models are required for dynamic modeling, see e.g. [12]. To optimize each sampled point in case of dynamic SAL would be a huge computational load. The optimization of the next query would be necessary in every sampling interval, i.e. each 10 ms to 100 ms as mentioned above. Furthermore, consecutive points are not independent in the dynamic case, due to limited allowed gradients. Thus, it is reasonable to bundle multiple points to trajectories, which are parameterized and whose 265 <?page no="276"?> 7.3 Dynamic Safe Active Learning for Calibration parameters are optimized by the dynamic SAL algorithm. Still, the real-time capability of the algorithm is a challenge. Each new trajectory needs to be planned until the previous is completely measured. This is opposed to the stationary case, where no hard bound on the planning time exists. The input space relevant for exploration is also different in the stationary and dynamic case. This space is to be explored by the learning algorithm so that the final model can provide reliable predictions in the largest possible fraction of the feasible part of the input space. Therefore, GP models need a good coverage of all combinations of their inputs. In the stationary case the input space is given by the physical inputs of the system and denoted as 𝕏 . As mentioned in Section 2.1, dynamic GP models with an external dynamics approach use an external NARX structure to represent the dynamics of the system. This results in a feature vector 𝝋 as defined in (6), which is fed into a stationary GP model which performs one-step predictions. Hence, it is not sufficient to have a good coverage of 𝕏 for good predictions, but the space spanned by all the features in 𝝋 has to be covered instead. The used type of trajectory influences which features can be covered well. If, for example, ramps are optimized, it is hard to excite higher derivatives of the input, as these are either zero or infinite. The optimization of the trajectories is conducted based on a differential entropy criterion. Compared to stationary SAL, the criterion has to incorporate a covariance matrix 𝜮 𝑓 ∗ ∈ ℝ 𝑛×𝑛 instead of a single scalar variance, see (8a). This reflects that not only a single point but a trajectory consisting of 𝑛 points is optimized. In order to apply an optimization algorithm to 𝜮 𝑓 ∗ the matrix has to be transformed to a scalar value. In the literature, different approaches have been proposed for this step. For example, the determinant, the trace, or the maximum eigenvalue of 𝜮 𝑓 ∗ could be maximized, see e.g. [3]. In the following, a simpler approach is chosen: The variance is not optimized along the complete trajectory but only at the trajectory’s endpoint. Even though this optimality criterion is not as mature as those considering the full matrix, it omits the necessity to calculate the complete matrix 𝜮 𝑓 ∗ and thereby reduces the computational workload. 
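A minimal sketch of this endpoint-variance selection is given below. It uses the squared exponential covariance of Eq. (4) and the predictive variance of Eq. (2); the ramp is parameterized by its endpoint and the resulting gradient, as in the NFIR feature description above, and the safety constraint discussed below is omitted for brevity. All function and variable names, and the use of a plain (non-sparse) GP, are illustrative assumptions.

```python
import numpy as np

def se_kernel(A, B, sigma0, length_scales):
    """Squared exponential covariance, cf. Eq. (4); A: (m, d), B: (p, d)."""
    L = np.asarray(length_scales, dtype=float)
    diff = (A[:, None, :] - B[None, :, :]) / L
    return sigma0**2 * np.exp(-0.5 * np.sum(diff**2, axis=2))

def posterior_variance(X_train, x_star, sigma0, length_scales, sigma_n):
    """Predictive variance of the regression GP at a single feature vector, cf. Eq. (2)."""
    K = se_kernel(X_train, X_train, sigma0, length_scales)
    k_star = se_kernel(X_train, x_star[None, :], sigma0, length_scales)
    K_inv = np.linalg.inv(K + sigma_n**2 * np.eye(len(X_train)))
    return (sigma_n**2 + sigma0**2 - k_star.T @ K_inv @ k_star).item()

def next_ramp_endpoint(X_train, candidates, x_current, n_steps, hyp):
    """Pick the candidate endpoint whose feature vector (endpoint values plus
    ramp gradient towards it) has the largest predicted variance.
    X_train holds the same feature layout: [inputs, input gradients]."""
    best, best_var = None, -np.inf
    for x_end in candidates:
        grad = (x_end - x_current) / n_steps       # ramp gradient feature
        feat = np.concatenate([x_end, grad])
        var = posterior_variance(X_train, feat, **hyp)
        if var > best_var:
            best, best_var = x_end, var
    return best, best_var
```

In the real algorithm the candidate endpooints and the trajectory length are optimized jointly and every candidate additionally has to pass the probabilistic safety check.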
The safety criterion used in dynamic SAL is similar to the criterion used in stationary SAL, compare (7), i.e.

\Pr(\boldsymbol{g}_* \geq \boldsymbol{0} \mid \boldsymbol{X}_*, \boldsymbol{h}, \boldsymbol{X}) = \int_{\boldsymbol{0}}^{\infty} \mathcal{N}(\boldsymbol{g}_* \mid \boldsymbol{\mu}_{g*}, \boldsymbol{\Sigma}_{g*}) \, \mathrm{d}\boldsymbol{g}_* \geq p. \qquad (9)

As opposed to the stationary case, g_* ∈ R^n is now a vector instead of a scalar, as not only one point but a trajectory consisting of multiple points X_* ∈ R^{n×d} is to be checked for safety. Thus, the normal distribution on the right-hand side, including the vector of predicted mean values μ_{g*} ∈ R^n and the predicted covariance matrix Σ_{g*} ∈ R^{n×n}, is multidimensional, compare Figure 2. Thereby, n denotes the number of samples of the current trajectory and d is the dimension of the input space. The integration has to be conducted over each dimension of g_* so that Pr(g_* ≥ 0 | X_*, h, X) is a scalar. Consequently, a trajectory of e.g. 1 s duration at a sampling time of 10 ms consists of n = 100 samples, which results in a 100-dimensional integration. Unfortunately, the integral cannot be solved analytically, and some common approximations also fail because of the high number of dimensions. In [11] several possibilities to approximate the integration are presented.

Figure 2: Visualization of the probability density function cumulated in (9) for a trajectory with two points (axes g_{1*} and g_{2*}). The contour lines denote the predicted posterior distribution of the discriminative function value g_*. The distribution is Gaussian and has a maximum in the first quadrant. Equation (9) integrates this distribution for g_{1*} > 0 and g_{2*} > 0, i.e. in the gray shaded area. In this quadrant, both discriminative function values are positive and thus feasible.

To avoid a simulation being necessary to calculate the variance at the endpoint, a nonlinear finite impulse response (NFIR) structure (see footnote III) without output feedback is used for the regression model. With an NFIR structure, the variance at the planned trajectory's endpoint can be calculated in one step. If a NARX model were used, predictions for each intermediate point on the trajectory would be necessary, as the model depends on previous outputs. Formally stated, the predicted variance from the intermediate steps needs to be considered in the following steps, as the fed-back output is not a single value anymore but a distribution. While the first aspect is simply a computational burden, the latter is mathematically demanding. Thus it is left for future research. This effect is also relevant for the discriminative model, which is implemented as a NARX model to obtain a better prediction accuracy. Therefore, a simplification is used: first, only the mean values are iteratively calculated for all intermediate points, ignoring the distribution of the fed-back outputs. Second, the covariance matrix for all points is calculated in one step.

In fact, only the current inputs and their derivatives are used as features of the regression model. This is equivalent to using the current and one delayed input per input dimension as features. Using the derivatives has the advantage that they are directly a property of the ramps used. In combination, these features uniquely define a ramp trajectory. Derivatives of higher order would not contribute to the model output if used as features, as they are zero along a ramp segment and infinite at the transition to the following ramp. They do not take other values in between.
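As a small illustration of this feature construction, the sketch below expands a ramp segment, defined by its start point, endpoint, and number of steps, into per-sample NFIR features consisting of the current inputs and their derivatives. The helper name, the sampling time, and the example values are hypothetical and only indicate the idea, not the authors' implementation.

```python
# Hypothetical helper: turn a ramp segment into NFIR features [x(k), xdot(k)].
import numpy as np

def ramp_features(x_start, x_end, n_steps, dt=0.04):
    """Return an (n_steps, 2*d) array of inputs and input derivatives along a ramp."""
    x_start = np.asarray(x_start, dtype=float)
    x_end = np.asarray(x_end, dtype=float)
    alpha = np.linspace(0.0, 1.0, n_steps)[:, None]       # linear ramp from 0 to 1
    x_traj = (1.0 - alpha) * x_start + alpha * x_end       # interpolated inputs
    x_dot = np.tile((x_end - x_start) / ((n_steps - 1) * dt), (n_steps, 1))
    return np.hstack([x_traj, x_dot])                      # constant gradient per ramp

# Example: ramp in (engine speed, pump actuation) over 25 samples of 40 ms each
phi = ramp_features([1500.0, 20.0], [3000.0, 60.0], n_steps=25)
print(phi.shape)   # (25, 4): two inputs plus their two derivatives per sample
```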
In a mathematical form, the next endpoint x_{i+1} and the length of the next trajectory n_{i+1} are obtained by solving the optimization

[\boldsymbol{x}_{i+1}, n_{i+1}] = \underset{\boldsymbol{x}_* \in \mathbb{X},\; n_* \in [n_{\min}, n_{\max}]}{\operatorname{argmax}} \; \sigma_{f*}^{2}\bigl(\boldsymbol{x}_*, \dot{\boldsymbol{x}}_*(\boldsymbol{x}_i, \boldsymbol{x}_*, n_*)\bigr) \qquad (10a)

\text{s.t.} \quad \Pr\bigl(\boldsymbol{g}_* \geq \boldsymbol{0} \mid \boldsymbol{X}_*(\boldsymbol{x}_i, \boldsymbol{x}_*, n_*), \boldsymbol{h}, \boldsymbol{X}\bigr) \geq p. \qquad (10b)

Footnote III: also called nonlinear structure with exogenous input (NX structure).

Thereby, n_min and n_max are predefined lower and upper bounds for the length of the trajectory (compare the next section for details). Variable σ²_{f*} denotes the predicted variance of the regression model, which depends on the currently evaluated inputs x_* and the gradients ẋ_*. The gradients themselves can be calculated using the last trajectory's target x_i, which is the current trajectory's starting point, the endpoint, and the number of steps n_*. The constraint is determined using (9). The sampling points of the trajectory X_* can also be calculated from starting point, endpoint, and number of steps. The integral in (9) is solved using a Monte Carlo approach. Thereby, a high number (e.g. 10 000) of random realizations of the vector g_* is picked from the distribution N(g_* | μ_{g*}, Σ_{g*}). Afterwards, the number of feasible realizations (i.e. those where all elements of g_* are positive) is divided by the number of all realizations, which approximates the trajectory's probability of feasibility.

3.2 Challenges of a Real-Time Implementation

After some evaluations in simulation, the method was implemented for a real HPFS system as part of the master thesis [14]. Because of already existing code and interfaces to the system, the algorithm was implemented in Matlab. The main challenge was to give the algorithm a (soft) real-time capability. The development was based upon the code used at Bosch Corporate Research for simulation experiments. The runtime of this code was analyzed in detail and the trajectory optimization was identified as the most time-consuming part, responsible for approximately 81 % of the total runtime. Consequently, a subsequent code optimization focused on this part. Especially the evaluation of a trajectory's feasibility, which is conducted more than 200 times per trajectory optimization on average, turned out to be expensive. Almost 98 % of the time needed for trajectory optimization was spent on the evaluation of the optimization constraint. This workload could be reduced significantly, to approximately 10 % of the original time, by a combination of multiple measures. Most beneficial were:

• The limitation of the permitted duration of trajectories. During the trajectory optimization, matrix multiplications with n × n matrices have to be calculated, where n is the length of the trajectory. These scale with O(n³). Thus, it is faster to optimize multiple short trajectories with small n instead of fewer but longer ones with large n. On the other hand, a trajectory that is too short does not leave enough time for the following optimization. Therefore, a lower bound for the trajectory length was introduced as well.

• The usage of single-precision floating point variables instead of double precision in some critical parts. The decreased numerical accuracy turned out to be acceptable. As more data could be stored in the CPU cache, some operations gained speed significantly.

• The initial generation of pseudo-random numbers required for the Monte Carlo integration. The algorithm even benefits in accuracy from the reuse of the random numbers.
The noise effect introduced by using a stochastic method becomes deterministic and does not affect the numerical derivation of the probability of feasibility performed by the optimization algorithm.

Figure 3: Illustration of the parallelized dynamic SAL algorithm. The left plot shows the input space (input 1 over input 2) at a certain time instant. The black continuous ramps have already been measured and are used for a model update. At the same time, the dark blue dashed ramp is measured. In parallel, the light blue dotted ramp is planned. The right plot shows how planning (top row), measuring (middle row), and model update (bottom row) for each trajectory (indexed by a number) follow one another over time.

Other approaches like the parallelization of the trajectory optimization itself or the autogeneration of compiled MEX functions (Matlab Executable) did not result in a performance gain but even decreased speed. The overhead of parallelizing small code segments or calling MEX functions exceeds the benefits of these ideas.

Despite the speedup gained through the described measures, a parallelization of the algorithm is necessary. While in the stationary case the next point can be planned while waiting at the previous point or at a safe initial point, a dynamic measurement requires planning and measuring to run in parallel to avoid stationary waiting. Figure 3 illustrates the parallelized algorithm. As shown, the workload is distributed to three workers which run on different cores:

Planning: The first worker is responsible for the optimization of the next ramp segment. Based on the currently available regression and discriminative models, it finds the trajectory which ends in the point with maximum variance and is presumably feasible. The planning of the next trajectory segment starts each time the measurement of the previous segment is finished. The result of the optimization is provided to the measurement worker.

Measurement: The second worker applies the planned trajectory to the system under test, measures the results, and continuously supervises the safety of the system. If a limit violation occurs, the measurement worker reacts by driving to a safe point. The measured data is provided to the model update worker. The measurement worker has to wait for the planning of the respective trajectory to finish.

Model update: The third worker updates the GP models needed for the trajectory optimization. This worker runs mostly independently of the other two, because model training often needs more time than one trajectory optimization and measurement. As soon as new measurement data is available and the previous update is finished, the models are updated. If the update is completed, the new models are provided to the planning worker and used in the next trajectory optimization.

Besides the described main tasks, several undesirable special cases have to be taken care of by the implementation. For example, the optimization of the next trajectory might not find a feasible solution in a given time budget. In this case, the system is returned to a stationary safe central point and the optimization is repeated with this new starting point. If a limit violation occurs during the measurement, the automation drives the system to a safe point and the planning of the following trajectory is restarted.
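Because the feasibility constraint dominates the optimization runtime, a compact sketch of how it could be evaluated may be helpful at this point. It approximates the orthant probability (9) by Monte Carlo sampling from N(μ_{g*}, Σ_{g*}) and reuses one fixed set of standard-normal draws, so that, as described above, the estimate becomes a deterministic function of the predicted mean and covariance. Function and variable names are hypothetical, not taken from the Matlab implementation.

```python
# Monte Carlo estimate of the trajectory feasibility probability (9), reusing a
# single cached set of standard-normal draws so the estimate is deterministic.
import numpy as np

N_MC = 10_000
_rng = np.random.default_rng(42)
_z_cache = {}   # cached draws per trajectory length

def prob_feasible(mu, Sigma):
    """Fraction of sampled realizations whose elements are all non-negative."""
    n = len(mu)
    if n not in _z_cache:
        _z_cache[n] = _rng.standard_normal((N_MC, n))
    L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(n))    # jitter for stability
    g_samples = mu + _z_cache[n] @ L.T                   # draws from N(mu, Sigma)
    return float(np.mean(np.all(g_samples >= 0.0, axis=1)))

# Example with a hypothetical 25-sample trajectory
mu = np.full(25, 0.5)
idx = np.arange(25)
Sigma = 0.1 * np.exp(-0.1 * np.abs(np.subtract.outer(idx, idx)))
print(prob_feasible(mu, Sigma))
```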
As a worst case scenario, a limit violation at the endpoint of a trajectory segment is already taken into consideration when planning each segment. Not only the currently planned ramp, but also a return trajectory from the next selected point to the stationary safe central point (the direct connection of these points with maximum gradient) is assessed for safety. The evaluation of the presented dynamic SAL algorithm on the high pressure fuel supply system is presented in the following section.

4 Experimental Results

Dynamic safe active learning was evaluated on the high pressure fuel supply system of a test vehicle. The regression model is set up as an NFIR structure. The GP model uses engine speed and fuel pump actuation together with their derivatives as inputs. The discriminative model features a NARX approach. In total, 9 features are used for this model: the two physical inputs at the current and the two previous time steps, and the fed-back output delayed by 1 to 3 steps. This structure was found suitable to model the HPFS system in previous experiments. Formally stated, this yields

y(k) = f_{\mathrm{NL}}\bigl(\boldsymbol{x}(k), \dot{\boldsymbol{x}}(k)\bigr) \qquad (11)

h(k) = g_{\mathrm{NL}}\bigl(h(k-1), h(k-2), h(k-3), \boldsymbol{x}(k), \boldsymbol{x}(k-1), \boldsymbol{x}(k-2)\bigr), \qquad (12)

where f_NL denotes the stationary nonlinear part of the regression model and g_NL the stationary nonlinear part of the discriminative model. The hyperparameters of the models were optimized on previously available measurement data and kept constant during the experiments.

At the beginning of the measurement, 40 initial trajectories were planned in a region of the input space which was known to be safe by expert knowledge. This data is needed to train initial models for the subsequent trajectory optimization. The required safety level for the optimization constraint was set to p = 0.5. The sampling rate is set to 25 Hz, i.e. a sampling time of 40 ms. The risk function was defined as

\tilde{h}(z) = 1 - \exp\bigl(\theta_{\mathrm{h}}(z - z_{\max})\bigr), \qquad (13)

where z is the supervised output, i.e. the rail pressure. The maximum allowed pressure is z_max = 18 MPa and the scaling factor was set to θ_h = 1/10.

Figure 4 shows the input and output signals of a 5 min dynamic SAL run. At the beginning, the initial trajectories with limited amplitudes are visible. After approximately 20 seconds, the online-generated trajectories start. Between the ramp segments, stationary waiting sometimes occurs. In these events the optimizer did not finish the planning of the next trajectory fast enough.

Figure 4: Output (rail pressure, top) and inputs (engine speed and pump actuation, middle and bottom) over 300 s for a dynamic SAL measurement. The maximum allowed pressure was set to 18 MPa.

Figure 5: NRMSE over the training data length for a model trained using dynamic SAL (solid lines), ramp signals (dashed lines), or chirp signals (dotted lines). The NRMSE was calculated on three different test datasets: a dynamic SAL run in the top left plot, a ramp DoE in the top right plot, and a chirp DoE in the bottom plot.

In total, 64 % of the measurement time was spent
waiting for the planning algorithm. If the planning takes too long or if a limit violation occurs, the system is returned to a safe point and a new planning attempt is started. Around 6 % of the measurement time was spent at the safe point. The pressure limit was kept very well. Only one minor limit violation with a maximum pressure of 18.1 MPa was recorded.

To compare the performance of the regression and discriminative model of dynamic SAL to offline-planned ramp and chirp DoEs, the models were evaluated on test data. Altogether, five test datasets were recorded: three to benchmark the regression model and two for the discriminative model. The first three are a dynamic SAL run, a chirp, and a ramp DoE. Each of them was recorded with a pressure limit of 18 MPa. These datasets are independent of the training data due to varying initializations and different random number realizations. To evaluate the discriminative model, measured data which exceeds the 18 MPa limit is required. Therefore, a ramp and a chirp measurement resulting in a maximum pressure of 25 MPa were used. As a benchmark, NFIR models similar to that of SAL were learned using a 5-minute chirp and a 5-minute ramp DoE. By comparing the predictions with the measured outputs, the normalized root mean square error (NRMSE)

\mathrm{NRMSE} = \frac{1}{\bar{y}} \sqrt{\frac{1}{m_*} \sum_{i=1}^{m_*} (y_{*,i} - y_i)^2} \qquad (14)

can be calculated. Thereby, ȳ is the mean of all m_* measured outputs, y_{*,i} the predicted, and y_i the measured output of test sample i.

Figure 6: Sensitivity (solid lines) and specificity (dashed lines) of dynamic SAL's discriminative model over the training data length, evaluated on two test datasets: the first is a ramp DoE, the second a chirp DoE. Both DoEs result in pressures up to 25 MPa.

Figure 7: Predicted output (solid lines) and measured output (dashed lines) of the first 50 s of the SAL test data. The predictions were made using the models obtained from 300 s of SAL, ramp, or chirp training data.

As Figure 5 shows, each model performs best on a test dataset similar to its training data. On the respective other test datasets, SAL's performance is always competitive with the benchmark model that was not trained on that type of data. The absolute value of the NRMSE is comparatively high in all measurements. This becomes obvious when looking at the predicted vs. measured outputs in Figure 7. The predicted outputs correspond to the right end of the upper left plot in Figure 5. The poor model quality is due to the limited number of features used in the models, i.e. just first-order NFIR structures. Using output feedback or a much larger input history greatly improves the modeling quality, as previous modeling attempts proved. The performance of the discriminative model is assessed using two figures: sensitivity and specificity. They describe the fraction of correctly classified feasible or infeasible test segments, respectively, i.e.
sensitivity = (number of segments correctly classified as feasible) / (number of all feasible test segments), (15)

specificity = (number of segments correctly classified as infeasible) / (number of all infeasible test segments). (16)

Sensitivity and specificity of dynamic SAL's discriminative model obtained on test data are shown in Figure 6. The test data is divided into segments of 1 second length and the discriminative model is used to predict the feasibility of each segment independently. For this, the same Monte Carlo sampling approach as in dynamic SAL is utilized. This kind of evaluation of the discriminative model closely resembles the way it is used within dynamic SAL. As the figure shows, the specificity never drops below 99 % for both test datasets. This corresponds to the small number of limit violations during the measurements. The sensitivity starts at a lower value and rises up to 96 % and 85 %, depending on the test dataset. When applied to ramp data, the performance is better than on chirp data. This observation is similar to the evaluation of the regression model. It is no surprise, as dynamic SAL also optimizes ramp input signals.

5 Discussion and Outlook

The application example presented above allows the following conclusions: The discriminative model works well and prevents large limit violations. The modeling quality of the regression model is comparable to that of similar models learned on ramp and chirp DoEs and depends largely on the kind of test data. Its absolute quality is not fully satisfying though, which calls for the use of more features or a NARX structure in the regression model of the dynamic SAL algorithm. It is possible to run dynamic SAL on a real-world engine. Still, the planning algorithm is not fast enough. Even though some optimizations finish on time, a large fraction of stationary waiting occurs.

In the future, the algorithm can be improved and extended in several ways. Currently, the NFIR structure used for the regression model, featuring only current inputs and their derivatives, is too simple for many real-world systems. Thus, after the dynamic SAL run a more complex NARX model with more features is trained using the data optimized on the simpler model. The implications of this procedure have to be studied in more detail. Most likely, data collected for the NFIR model is not optimal for the NARX approach.
Thus, a noteworthy speedup can most likely only be obtained if the new implementation is done by an experienced programmer who focuses on speed.

Another lever for performance optimization is the choice of the optimization algorithm. Currently, the trajectories' endpoints and lengths are continuously optimized using a convex optimizer with multi-start. The requirements for the optimization algorithm are somewhat special in this application. The optimization should be very fast, as it has to finish within one trajectory segment's duration, which is currently around 1 s. It would be beneficial if the optimization could guarantee finding a feasible solution within a certain time. As the objective function and the nonlinear constraint are non-convex, a local convex optimization is not sufficient. Nonetheless, it is not necessary that the global optimum is found, as long as the found local optimum is "good enough". The objective function's gradient can be calculated analytically, as long as a model without output feedback is used. Due to the Monte Carlo approach, no analytical gradient is available for the constraint, though. Given these requirements and depending on the dimension of the input space, a random brute-force search, see e.g. [5], could deliver performance gains. This quite simple approach is especially suited for low-dimensional problems. It will not find the global optimum, but it has a deterministic runtime and no risk of getting stuck in local optima. In fact, random brute-force search is a special case of convex optimization with multi-start, where the number of multi-starts is high and the convex optimization performs zero iterations. A combination of the two is also possible, e.g. by using a high number of multi-starts and a convex optimization with a fixed, low number of iterations.

An idea to simplify the optimization by reducing its dimension is to omit the trajectory length as an optimization variable. Currently, this variable is used as the major influence on the gradient of the segment. Nonetheless, the gradient could still be varied by the distance of the trajectory's endpoint to the current point. The reachability of points in the phase space during one optimization step will be significantly reduced by this approach, as gradient and endpoint become linked. Intermediate points of the trajectory will still be able to cover nearby points with high gradients, and points further away could be reached with low gradients in multiple steps. Thus, the link of gradient and endpoint does not need to be a drawback, but the quality of the results still has to be investigated.

6 Conclusion

In this contribution, an approach for safe online design of experiments and model learning for dynamic systems was presented. The approach is based on two Gaussian process models, the regression model and the discriminative model. It optimizes a dynamic ramp-based DoE using a differential entropy-based selection criterion. The optimization is constrained by a prediction of the system under test's strain. Only trajectories with a sufficient probability of feasibility are measured. The method was successfully applied to the high pressure fuel supply system of a gasoline engine in a test vehicle. For this purpose, the algorithm was implemented with soft real-time capabilities. Depending on the test dataset, it outperforms or at least performs on par with different offline-generated DoEs.
As opposed to those, dynamic system limits were identified online and not significantly violated during all measurements. This proof of concept is a major step towards the commercial application of dynamic SAL. It has the potential to ease the creation of dynamic data-based models for calibration while improving modeling quality.

Acknowledgment

The authors would like to thank their colleagues from Bosch Corporate Sector Research and Advance Engineering, namely Duy Nguyen-Tuong, Mona Meister, and Christoph Zimmer, for their preliminary work and their support. Thanks are also due to Lukas Schweizer for his work on the real-time capable implementation.

References

[1] Michael Deflorian. "Versuchsplanung und Methoden zur Identifikation zeitkontinuierlicher Zustandsraummodelle am Beispiel des Verbrennungsmotors". PhD thesis. Technische Universität München, 2011.
[2] Michael Deflorian, Florian Klöpper, and Joachim Rückert. "Online dynamic black box modelling and adaptive experiment design in combustion engine calibration". In: IFAC Proceedings Volumes 43.7 (2010), pp. 703-708.
[3] Valerii V. Fedorov and Peter Hackl. Model-Oriented Design of Experiments. Springer New York, 1997. DOI: 10.1007/978-1-4612-0703-0.
[4] Benjamin Hartmann et al. "Online-methods for engine test bed measurements considering engine limits". In: 16th Stuttgart International Symposium. Wiesbaden: Springer Fachmedien, 2016. DOI: 10.1007/978-3-658-13255-2_92.
[5] Richard Khoury and Douglas Wilhelm Harder. Numerical Methods and Modelling for Engineering. Springer International Publishing, 2016. DOI: 10.1007/978-3-319-21176-3.
[6] Oliver Nelles. Nonlinear System Identification. Springer-Verlag GmbH, 2001. ISBN: 978-3-642-08674-8. DOI: 10.1007/978-3-662-04323-3.
[7] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian processes for machine learning. The MIT Press, 2006. ISBN: 978-0-262-18253-9.
[8] Mark Schillinger. "Safe and Dynamic Design of Experiments". PhD thesis. University of Siegen, 2019. Submitted.
[9] Mark Schillinger et al. "Modern Online DoE Methods for Calibration - Constraint Modeling, Continuous Boundary Estimation, and Active Learning". In: Automotive Data Analytics, Methods, DoE. Proceedings of the International Calibration Conference. Ed. by Karsten Röpke and Clemens Gühmann. Expert Verlag, 2017. ISBN: 978-3-8169-3381-6.
[10] Mark Schillinger et al. "Safe Active Learning of a High Pressure Fuel Supply System". In: 9th EUROSIM Congress on Modelling and Simulation. Oulu, Finland, 2016. DOI: 10.1109/EUROSIM.2016.137.
[11] Jens Schreiter. "Data-efficient and Safe Learning with Gaussian Processes". PhD thesis. Institute for Parallel and Distributed Systems at the University of Stuttgart. In preparation.
[12] Jens Schreiter, Duy Nguyen-Tuong, and Marc Toussaint. "Efficient sparsification for Gaussian process regression". In: Neurocomputing 192 (June 2016), pp. 29-37. DOI: 10.1016/j.neucom.2016.02.032.
[13] Jens Schreiter et al. "Safe exploration for active learning with Gaussian processes". In: Machine Learning and Knowledge Discovery in Databases. Springer, 2015, pp. 133-149.
[14] Lukas Schweizer. "Implementierung und Evaluation eines sicheren aktiven Lernverfahrens". MA thesis. Karlsruher Institut für Technologie, 2017.
[15] Burr Settles. Active Learning Literature Survey. Computer Sciences Technical Report 1648. University of Wisconsin-Madison, 2009.
[16] Markus Stadlbauer. "Optimization of excitation signals for nonlinear systems". PhD thesis.
Technische Universität Wien, 2013.
[17] M. Stadlbauer et al. "Online measuring method using an evolving model based test design for optimal process stimulation and modelling". In: 2012 IEEE International Instrumentation and Measurement Technology Conference Proceedings. 2012. DOI: 10.1109/I2MTC.2012.6229185.
[18] Nils Tietze et al. "Model-based calibration of engine controller using automated transient design of experiment". In: 14th Stuttgart International Symposium. Wiesbaden: Springer Fachmedien, 2014. DOI: 10.1007/978-3-658-05130-3_111.
[19] Christoph Zimmer, Mona Meister, and Duy Nguyen-Tuong. "Safe Active Learning for Time-Series Modeling with Gaussian Processes". In: Advances in Neural Information Processing Systems 31. Ed. by S. Bengio et al. Curran Associates, Inc., 2018, pp. 2735-2744.

8 Data Analysis II

8.1 Applications of High Performance Computing for the Calibration of Powertrain-Controls

Markus Schori, Matthias Schultalbers

Abstract

Most powertrain calibration tasks can be formulated as a mathematical optimization problem and in many cases be solved using an appropriate solver. Methods of High Performance Computing (HPC) can dramatically reduce computation times and increase the number of scenarios that can be included in the optimization. In some cases they are even required to solve problems that were not solvable before. However, performance tuning of given applications usually involves manual programming effort that might outweigh these benefits. Therefore, practical hints from our experience at IAV for effectively integrating high performance computing into the software development and calibration process are given.

Kurzfassung

Die meisten Applikationsaufgaben des Antriebsstrangs können als Optimierungsproblem formuliert und in vielen Fällen durch einen geeigneten Solver gelöst werden. Methoden des High Performance Computings (HPC) können dabei die Laufzeit der Optimierung dramatisch verringern und/oder die Anzahl der berücksichtigten Szenarien erhöhen. In einigen Fällen werden die Probleme erst durch HPC lösbar. Die Reduzierung der Laufzeit erfordert jedoch in vielen Fällen manuelle Programmierleistung, was den Geschwindigkeitsvorteil teilweise wieder aufheben kann. Es werden daher praktische Hinweise gegeben, wie HPC effektiv in den Applikations- und Softwareentwicklungsprozess eingebracht werden kann.

1 Calibration as an Optimization Problem

Most hybrid electric vehicles still use rule-based energy management systems to define the current mode of operation and set-points under given circumstances. Because of the high number of parameters, the calibration of such energy management systems can be a cumbersome task. Mathematical optimization methods are therefore indispensable to the calibration process. The calibration task can be formulated as a nonlinear optimization problem of the general form

\min_{v} \; \phi(v) \quad \text{s.t.} \quad v_{\mathrm{lb}} \leq v \leq v_{\mathrm{ub}}, \quad c_{\mathrm{lb}} \leq c(v) \leq c_{\mathrm{ub}},

where the cost function φ represents the goal of the calibration task, which can include efficiency, emissions of noxious or greenhouse gases, model quality, response quality of a regulator, or similar terms, and the constraints c describe boundary conditions such as component or acoustic limitations. Whether the formulated problem can be solved using a mathematical optimization procedure depends on the availability of mathematical models.
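As an illustration of this general form, the sketch below poses a small calibration-style problem with box bounds on the parameters and one nonlinear boundary condition, and hands it to an off-the-shelf SQP-type solver. The cost and constraint functions are toy placeholders; in a real calibration task, φ(v) would typically wrap expensive simulation-based evaluations.

```python
# Toy instance of the general nonlinear calibration problem: minimize phi(v)
# subject to box bounds on v and a nonlinear boundary condition c(v) <= c_max.
import numpy as np
from scipy.optimize import minimize

def phi(v):
    # placeholder cost, e.g. weighted fuel consumption and emissions from a simulation
    return (v[0] - 1.0) ** 2 + 0.5 * (v[1] + 0.5) ** 2

# boundary condition c(v) <= 1.5, written as 1.5 - c(v) >= 0 for the SLSQP solver
constraints = [{"type": "ineq", "fun": lambda v: 1.5 - (v[0] ** 2 + v[1] ** 2)}]
bounds = [(-2.0, 2.0), (-2.0, 2.0)]   # v_lb <= v <= v_ub

result = minimize(phi, x0=np.zeros(2), method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```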
To ensure that a solution to the mathematical problem formulation also represents a good set of parameters for the calibration, it is important that the model reflects the nonlinearities of the system with a sufficient degree of accuracy. For the case of energy management systems for hybrid electric vehicles (HEVs), high-quality models can be built with limited effort. To achieve a calibration that is well-suited for a large number of customer profiles, different environmental conditions such as driver types, driving profiles, elevations, and temperatures, as well as vehicle conditions such as the initial state of charge or the power demand of auxiliary devices, need to be taken into account when calibrating the energy management. Therefore, most optimizations have to be performed over a large number of carefully selected driving scenarios with large variations of environmental and vehicle parameters.

The computation time of the optimization procedure usually grows only linearly with the number of driving situations that are taken into account for the evaluation of one candidate parameter set. On the one hand, this allows for choosing a high number of scenarios over which the optimization is to be performed; on the other hand, the total computation time should be kept low since new calculations have to be scheduled frequently to adapt constraints and incorporate new scenarios. As an example, Table 1 shows the number of parameter sets that can be evaluated per second with different parallelization technologies. Each evaluation involves setting ECU parameters for a start-stop functionality and running simulations over two recorded driving cycles, where each of the two cycles was approximately 60 minutes long. It should be noted, however, that the test scenario only involved the completely parallelizable simulations, not the optimization part that determines the new parameter sets.

Table 1: Performance gain using different parallelization technologies

                            Single-Core   Multi-Threading   Multi-Processing     Multi-GPU
                                          (28 Cores)        (4 x 28 Processes)   (8 x NVIDIA V100)
  Parameter sets / second   1.1           29.7              108.5                1848

Modern methods of High Performance Computing (HPC) are therefore indispensable for solving optimization problems in calibration contexts. However, their implementation can be time-consuming, which can outweigh the benefits if they are not well integrated into the calibration and software development process.

2 Optimization and Amdahl's law

At first sight, it might seem appealing to increase the parallelization resources assigned to a given problem as far as possible to maximize the speedup of the application. However, Amdahl's law imposes an upper bound on the performance gain for a given application, even without taking into account latency and bandwidth of data transfer, which further limit the scalability of computation time. According to Amdahl's law, the theoretical speedup S is limited to

S \leq \frac{1}{(1 - p) + \frac{p}{n}},

where p is the share of computation time that benefits from parallelization and n is the number of processors. As an example, even if 95 % of the code actually benefits from parallelization, the use of more than 128 processors will hardly yield any further speedup.

Figure 1: Upper bounds for speedup (speedup factor over the number of processors for p = 0.5, 0.75, 0.9, 0.95, and 0.99).

Optimization problems or optimal control problems typical for automotive calibration scenarios mostly involve computationally expensive forward simulations.
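As a brief side note, the saturation behaviour described above can be checked numerically with a few lines; this is only a small illustrative sketch, not code from the contribution.

```python
# Amdahl's law: theoretical speedup for a parallelizable share p on n processors.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (8, 32, 128, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
# For p = 0.95 the speedup saturates near 20; beyond roughly 128 processors the
# additional gain becomes negligible, matching the example given in the text.
```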
Therefore, most of the computation time has to be spent on evaluations of the cost and/or constraint functions of the problem. Gradient-based methods for solving nonlinear constrained optimization problems, such as Sequential Quadratic Programming or Interior-Point methods, require the gradient of the cost function and the Jacobian matrix of the constraints with respect to the optimization variables in each iteration. In many cases, parts of the underlying model are not accessible or too complex for modifications, and therefore modern methods for gradient calculation such as symbolic, automatic or complex differentiation are not applicable. Instead, a finite differencing scheme has to be employed, which requires n + 1 function evaluations for one-sided or 2n function evaluations for central finite differences. Fortunately, these function evaluations can be executed completely in parallel and therefore the share p is usually quite high. Nevertheless, for problems with n ≫ 100, a significant time span is spent on linear algebra, which is required to calculate the search direction of the current iteration. This share also benefits from additional processors, but saturation will be reached more quickly. The maximum number of parallel function evaluations is in this case also limited to n + 1 or 2n, respectively. For problems with a lower number of variables, this makes the use of Graphics Processing Units (GPUs), which can have thousands of cores, unreasonable.

When global optimization procedures such as genetic algorithms or particle swarm optimization are used, the share of function evaluations relative to the procedures required to compute the new parameter sets is usually much higher, and therefore these methods scale very well with additional computing resources. Additionally, these optimization methods always benefit from larger populations or particle sets, so the number of parallel function evaluations can be perfectly adapted to the available computing resources.

Figure 2: Performance scalability for gradient-based and global optimization (share of computation time spent on function evaluations versus the determination of new parameter sets).

It should be noted, however, that global optimization procedures still have major disadvantages in terms of convergence rate and constraint handling and should not generally be preferred merely because of the better scaling of the computation time.

3 Parallelization Libraries

The most important libraries for parallelization can be classified into multi-threading, multi-processing and GPU computing. At IAV, the following libraries have proven to be well applicable for calibration scenarios:

OpenMP [1] widely facilitates thread-based parallelization by providing compiler directives for parallel execution of certain constructs (e.g. for-loops) as well as directives for synchronization. The initial burden of using OpenMP is rather low, but attention has to be paid to avoiding problems typical for thread-based parallelism, i.e. race conditions and deadlocks. With the new standards, support for vectorization features such as SSE and AVX has been added, which can be useful for parallelization on the lowest level.
Purely thread-based parallelism cannot be extended to distributed memory architectures, which limits the scalability of this approach to more than one machine in our computing cluster. It is therefore used at IAV to parallelize lower-level functions.

MPI (Message Passing Interface) [2] is a standard that allows for running an application in a given number of processes on distributed memory architectures and that offers a wide range of functions for communication between those processes. MPI is used at IAV for running complex simulations in separate processes, distributed over many machines in our cluster. If large amounts of data are required for the simulations, attention should be paid to a homogeneous and fast networking architecture, since bottlenecks can easily arise while waiting for data to reach the respective process. It can also be used to effectively parallelize Python code without the limitations of multi-threading due to the global interpreter lock.

CUDA [3] is NVIDIA's architecture for designing massively parallel applications. It offers extensions to the C++ language that make the use of GPUs feasible for general-purpose computing. CUDA can dramatically increase the level of parallelization under certain circumstances. The use of complex simulations is often prevented by the limited number of registers and the fact that branching (e.g. if statements in the underlying functions) reduces the possible degree of parallelization. It is therefore used for problems that require a high number of function evaluations but have only limited model complexity. Over the last years, libraries have emerged that offer compiler directives which tell the compiler to execute certain parts of the code on an accelerator, for instance a GPU. For our use cases, these libraries lack a sufficient degree of control over memory operations and parallel execution and consequently did not offer an advantage over CUDA, which allows more fine-grained control.

4 Practical Hints for the effective use of HPC in calibration processes

4.1 Implement core libraries in C/C++ and build wrappers for high-level languages

Despite the significant progress of interpreter languages such as Matlab and Python (e.g. Just-In-Time compilers), compiled languages still offer a strong performance advantage. Most relevant parallelization technologies (OpenMP, MPI, CUDA) are based on C/C++, and hence it is the first choice for High Performance Computing. An additional advantage is the ease of developing interfaces to the most important high-level interpreter languages. This is a key point, since most calibration engineers are used to working with those languages. For optimization tasks, IAV therefore strives for having all frequently used core functionalities in C++ and wrappers available for Python and Matlab. This includes a container for multi-dimensional arrays as well as common mathematical functions such as linear interpolation, spline evaluation, Gaussian processes and neural networks.

4.2 Make parallelization optional for low-level functions

The functions mentioned in the previous section are usually used in different contexts.
They can be used in Single-Instruction-Multiple-Data (SIMD) fashion, where the same function is called repeatedly for a large set of input data, or they can be called in the context of a more complex forward simulation, where one parameter set requires the call of a diverse set of consecutive functions and operations. In the latter case, the forward simulations will be run in parallel on the available resources, so no further gain can be expected from a parallel execution of the low-level functions. It is therefore beneficial to allow for disabling parallelization of the low-level functions, e.g. by compiler switches. An exception is formed by vectorization capabilities such as SSE or AVX, which are also available through OpenMP's SIMD directives. These use a lower level of parallel execution and therefore do not interfere with threading or multi-processing.

4.3 Make it easy to transfer memory

When using multi-processing, the current set of parameters has to be sent to the respective processes where the forward simulation is run, and the result needs to be sent back. Sending memory via inter-process communication channels requires the data to be contiguous in memory. This serialization process is simplified significantly if the used data containers are themselves easily serializable. The containers should therefore provide interfaces to a serialization library such as Google Protocol Buffers or Boost Serialization. In a similar way, this holds for GPU computing, where the model data have to be transferred to the GPU in the beginning and the current parameter set in each iteration. This transfer includes the steps of allocating memory on the GPU, copying the data and checking for errors. This process can be made more comfortable and less error-prone by implementing data containers for the GPU and letting special constructors, destructors and assignment operators handle those steps.

4.4 Standardize model interfaces

If the simulations to be run in parallel have an abstract interface, such as the Functional Mockup Interface (FMI) standard, the reusability of software components can be increased dramatically, as the data objects for input signals, parameters and output signals can be standardized and no specialized functions need to be implemented for running the simulation. This allows for easily interchanging the simulation model without having to change any of the routines that handle data exchange and running the simulations. Each Functional Mockup Unit (FMU) should be instantiated and run in its own process, as thread safety of the model can usually not be guaranteed. Unfortunately, FMUs are not amenable to GPU computations, as the standard does not allow the implementation of kernels that run in SIMD fashion, which is a requirement for the effective use of GPUs. If the FMUs are self-created, it might be worth defining a lower-level interface that can be used in a kernel as well as in a model that complies with the FMI standard.

4.5 Follow the Pareto Rule

HPC software engineers are often tempted to maximize the performance of an application as much as possible, which can be quite time-consuming. In many cases, the performance gain over the effort of the software engineer follows the Pareto rule, where with some minor effort the performance can be increased substantially and further efforts only yield minor improvements.
The Pareto rule should therefore be kept in mind when deciding whether further performance tuning is reasonable in a given context.

5 Automation of the calibration process

Running the calculations required for the calibration tasks does not require any attention of the calibration engineer and should therefore be run in an automated manner. A typical scheme works as follows: As soon as new input data, such as vehicle measurements or parameters, are saved at a specified location, the need for a new optimization run is automatically determined and the calculation scheduled. Once the calculation is finished, postprocessing procedures create the reports the calibration engineer needs for evaluating the quality of the newly generated parameter set. If required, the calibration engineer formulates new boundary conditions or modifies the cost function; otherwise, the new parameter set can be tested in a real vehicle or a more complex simulation. This generates new input data, which automatically triggers new calculations. This automation allows the calibration engineer to focus fully on using his/her knowledge for formulating cost functions and constraints, testing and improving the model. If a similar calibration is to be performed for a similar vehicle, these elements are reusable, comprehensible, traceable and reproducible.

Figure 3: Automatic calibration process

6 Conclusion

Mathematical optimization is a fundamental tool for the modern calibration process. The high number of scenarios that need to be incorporated into the formulation of the problem makes the use of high performance computing inevitable. Yet, the efficient use of HPC requires careful planning and decision making. The software architecture has to be designed to increase modularization and reusability while maintaining the performance benefits of parallelization. If well integrated, HPC significantly increases productivity and the quality of the calibration.

Literature

[1] https://www.openmp.org/
[2] https://www.mcs.anl.gov/research/projects/mpi/standard.html
[3] https://developer.nvidia.com/cuda-zone

8.2 Efficient Automotive Development - Powered by BigData Technology

Tobias Abthoff, Dankmar Boja

Abstract

In the future automotive development cycle, efficient handling as well as high-performance and scalable analyses gain a decisive role due to enormously increasing data volumes and heterogeneity of utilized data and documents. Mastering this familiar, but more and more complex task is pivotal for success or failure in achieving technical goals and market success of the developed vehicle. This applies to data/documents of all kinds in general but, in particular, to sensor data information from a wide range of sources, such as engine/powertrain time series, bus traces, camera and lidar data, GPS/geolocations, emission/PEMS sensorics and data from other special measuring equipment (NVH, etc.), to name a few of the most common ones.

Kurzfassung

Im zukünftigen automobilen Entwicklungszyklus spielen aufgrund enorm ansteigender Datenmengen und zunehmender Heterogenität von benötigten Dokumenten und Daten das effiziente Handling sowie eine performante und skalierbare Analyse eine entscheidende Rolle, welche für Erfolg oder Misserfolg beim Erreichen der technischen Entwicklungsziele und den Markterfolg des entwickelten Fahrzeugs ausschlaggebend sein kann.
Dies gilt allgemein für Daten/Dokumente aller Art und im Speziellen für Sensor-Informationen unterschiedlichster Quellen wie Zeitreihen aus Motor/Triebstrang, Bus-Traces, Kamera- und Lidar-Daten, GPS/Geolokationen, Emissions-/PEMS-Sensorik und Daten aus weiterer Spezialmesstechnik (NVH, usw.), um einige der gebräuchlichsten zu nennen. Herausforderungen dabei bestehen beispielsweise in der Verwaltung des Gesamtdatensatzes bei niedrigen Such- und Zugriffszeiten, effizienter Informations-Verlinkung/Fusion, einem möglichst hohen Automatisierungsgrad bei Daten-Eintritt in und ggf. auch Löschung vom System, einer einfachen und gleichzeitig mächtigen Analytics-Umsetzung und Entwicklungsumgebung sowie einer Architektur, welche Analytics-UseCases vom frühen Entwicklungsstadium bis hin zu einer breiten produktiven Nutzung unterstützt. All dies gilt es unter der Prämisse von Enterprise-Tauglichkeit und Verwendbarkeit/Wartbarkeit durch IT und Endusern in einem global tätigen Konzern umzusetzen. Wir geben einen Überblick, wie die Vorteile von Technologien wie Hadoop, Spark, Elastic etc. in einem ganzheitlichen Ansatz - umgesetzt als BigData Analytics Platform - Entwicklern helfen, die anstehenden Herausforderungen der Handhabung, Verwaltung und Analyse von Daten erfolgreich zu meistern, um die relevante Nadel in ihrem Heuhaufen von Informationen zu finden. Wir zeigen das Konzept der Analyse-Entwicklung von der Proof-of-Concept-Phase bis hin zu einem breit anwendbaren Tool-Framework für industrielle R&D-Abteilungen sowohl hinsichtlich der Datengröße als auch der Anzahl der Entwickler, die diese Tools in der Praxis anwenden.

Challenges

Challenges of handling this huge amount of data in the right way include, for example:

• management of the entire data set with low search and access times
• efficient information linking and data fusion
• the highest possible degree of automation in the process of data entry into and possibly deletion from the system
• a simple yet powerful analytics implementation and development environment, and
• an architecture supporting analytics use cases from early-stage development to broad productive use.

All this needs to be implemented with regard to enterprise suitability, usability and maintainability by IT departments, and the needs of the designated end users in a globally active organisation.

Comprehensive sensor data analysis

Working with measurement data can roughly be divided into four topical sections, as depicted in Figure 1: Innovation, automation, collaboration and integration.

Figure 1: Four aspects of sensor data analysis

Those four categories have no predefined logical order and are strongly cross-connected. However, following the natural order of measurement data creation, the illustration starts with the aspect of INTEGRATE: Generic analytics approaches need to consider that data creation and storage are temporarily or permanently distributed. Data is recorded and saved at different test grounds, enterprise development locations or data centers. Since terabytes to petabytes of data easily accumulate in the early development phase (e.g. due to autonomous driving recording unreduced and uncompressed raw sensor data), that information cannot be transferred to other locations for analyses on demand.
Worldwide Data therefore means that a data set scattered over different locations must nevertheless be utilizable for analytics as if it were in one place. Only if this is possible can the advantages of measurement data analytics be fully exploited. Enterprise security basically comprises the issues of adjustable, seamless access control. All components must support being run and maintained in a world-wide framework, with security aspects ideally not noticeable to end users due to a single-sign-on implementation. In the step of data unboxing, the different measurement data types need to be converted into a processable format. Data types are variable and often proprietary, and thus a fast adaptation to new standards must be supported by the data unboxing concept.

The next item group is about how to COLLABORATE: In a running project, usually multiple users and user groups use data and documents, which often creates grey areas of data competence and therefore suboptimal insight into what is inside the available dataset. Fast search functionality enables a single user to explore and use the entire data set on his own, without prior knowledge or aid from colleagues. It facilitates FINDING instead of SEARCHING what is required for a task. This starts from search by metadata information and ranges up to complex conditional queries that allow tracking down certain events in measurement data. The item "teams" focuses on organizing and sharing data, documents and also created results within workgroups and teams of colleagues. Conventional working procedures often separate organizational structures and data locations. The possibility to link and sort data into topical spaces by definable properties and schemes therefore enhances transparency and retrievability of information. Since linking is based on fast-search indexing technology, disadvantages like multiple instances of identical data in folder-based approaches do not occur. Data management adds further functionality, e.g. the introduction of data lifecycle management and self-acting data retention policies. COLLABORATE thus brings storage and use of data closer together and increases exploitation of existing data assets.

The probably most intriguing item group is INNOVATE: Innovation is most expected from big data technology, and interactive analytics is a key feature. With a functional data unboxing workflow all data can be accessed, but fast generation of insights requires flexible and expeditious analytics. An interactive development environment facilitates carrying out iterations, hence speeding up the process of finding a suitable approach and implementation for a given problem. Interactive analytics per se is not new, but it has to be capable of administering all remote data and be accessible by all users - aspects discussed in INTEGRATE. Automotive libs enrich interactive analytics by lowering the entry barrier for creating one's own code. Examples might be support for handling common data structures like time series data, trace data or object data, together with convenient and efficient algorithms and methods. Those libs are typically very specific and cannot be found with the required functionality in the public domain. However, open source is the third and also relevant factor of innovation. While domain-specific libs might be required, it is also beneficial to make use of the enormous variety of open source software.
This includes new scientific approaches, which are typically put into the public domain, but also technological improvements. We think it is essential not to reinvent the wheel numerous times but to imbibe and utilize the momentum of the open source community.

With the three item groups mentioned so far, big data technologies can be put to use for automotive data development tasks. Nonetheless, the fourth aspect of AUTOMATE brings the existing parts to another level in terms of usability and efficiency. Many existing analytics environments cover parts of the above-mentioned requirements. With APPs, the valuable intellectual property and methodologies created can be both protected and brought to a wider usership. Automotive developers are for the most part not data or computer scientists. Presenting a development environment - even a comparably simple one - often suffers from acceptance problems with users who do not want to look at code to have their problems solved. Packaging tested and functional analytics code into an app, after the first development phase with lots of iterations has been completed, allows those users to profit from big data technology without being harmed by its complexity. Furthermore, a working, closed app does not unveil the applied approaches and can thus also be given to a wider usership in terms of IP protection. APPs further have the advantage that they can easily be arranged in a workflow. In a sequence of lucid steps even complex tasks can be performed, and an app or app workflow allows external, event-based triggering by means of a REST API. Tedious mechanisms like data ingest pipelines on arrival of new measurement input, quality checks, simple stats or complex analytics can be implemented and no longer need to be done manually or be orchestrated by system administrators. Data engineering and data science therefore move closer together, thus enabling deployment to production, where by production we mean steady operation in an IT context, not in a manufacturing context. Processes that can be automated have a significantly greater chance to overcome the continuous transitions in the development process and to add value.

It is our strong belief that big data technology can contribute in automotive development to simplifying analytics and boosting its efficiency in practice. The aspects presented are implemented in our big data products DaSense and EAGLE and have reportedly brought benefit to users at various organizations.

The Authors

Dr. Karsten Röpke Prof. Dr. Clemens Gühmann IAV GmbH TU Berlin Berlin Berlin Matthias Schultalbers Dr. Wolf Baumann IAV GmbH IAV GmbH Gifhorn Berlin Dr. Mirko Knaak IAV GmbH Berlin Dr. Tobias Abthoff NorCom München Dr.-Ing. Sheraz Ahmed DFKI GmbH Kaiserslautern Prof. Dr.-Ing. Jakob Andert VKA RWTH Aachen Dr.-Ing. Matthias Auer MAN Energy Solutions SE Augsburg Prof. Dr.-Ing. Michael Bargende Forschungsinstitut für Kraftfahrwesen und Fahrzeugmotoren Stuttgart - FKFS Stuttgart Priv.-Doz. Dr. Robert Bauer Kristl, Seibt & Co GmbH Graz, Austria Dr.-Ing. Peter Bloch Robert Bosch GmbH Stuttgart Dr.-Ing. Marius Böhmer FEV Europe GmbH Aachen Dankmar Boja NorCom Stuttgart Dr.-Ing. Giovanni Cornetti Robert Bosch GmbH Stuttgart Josh Dalby Passenger Car Market Sector - Propulsion Ricardo UK Shoreham-by-Sea, England DI Marko Decker AVL List GmbH Graz, Austria Dipl.-Ing. Nicola Deflorio ETAS GmbH Branch in Italy Torino, Italy Prof. Dr. Prof. hc.
It is our strong belief that big data technology can contribute to automotive development by simplifying analytics and boosting its efficiency in practice. The aspects presented are implemented in our big data products DaSense and EAGLE and have reportedly brought benefit to users at various organizations.

The Authors

Dr. Karsten Röpke, IAV GmbH, Berlin
Prof. Dr. Clemens Gühmann, TU Berlin, Berlin
Matthias Schultalbers, IAV GmbH, Gifhorn
Dr. Wolf Baumann, IAV GmbH, Berlin
Dr. Mirko Knaak, IAV GmbH, Berlin
Dr. Tobias Abthoff, NorCom, München
Dr.-Ing. Sheraz Ahmed, DFKI GmbH, Kaiserslautern
Prof. Dr.-Ing. Jakob Andert, VKA RWTH, Aachen
Dr.-Ing. Matthias Auer, MAN Energy Solutions SE, Augsburg
Prof. Dr.-Ing. Michael Bargende, Forschungsinstitut für Kraftfahrwesen und Fahrzeugmotoren Stuttgart - FKFS, Stuttgart
Priv.-Doz. Dr. Robert Bauer, Kristl, Seibt & Co GmbH, Graz, Austria
Dr.-Ing. Peter Bloch, Robert Bosch GmbH, Stuttgart
Dr.-Ing. Marius Böhmer, FEV Europe GmbH, Aachen
Dankmar Boja, NorCom, Stuttgart
Dr.-Ing. Giovanni Cornetti, Robert Bosch GmbH, Stuttgart
Josh Dalby, Passenger Car Market Sector - Propulsion, Ricardo UK, Shoreham-by-Sea, England
DI Marko Decker, AVL List GmbH, Graz, Austria
Dipl.-Ing. Nicola Deflorio, ETAS GmbH Branch in Italy, Torino, Italy
Prof. Dr. Prof. h.c. Andreas Dengel, Chair of Knowledge-Based Systems, TU Kaiserslautern
Dr. Nico Didcock, AVL List GmbH, Graz, Austria
Dipl.-Ing. (FH) René Diener, Robert Bosch GmbH, Stuttgart
Niklas Ebert, M.Sc., Daimler AG, Stuttgart
Dipl.-Ing. Markus Ehrly, FEV Europe GmbH, Aachen
Dr.-Ing. Michael Frey, Karlsruher Institut für Technologie (KIT), Karlsruhe
Dr.-Ing. Christian Friedrich, MAN Energy Solutions SE, Augsburg
Kento Fukuhara, IAV GmbH, Berlin
Dipl.-Ing. Marie-Sophie Gande, AVL List GmbH, Graz, Austria
Prof. Dr. rer. nat. Frank Gauterin, Karlsruher Institut für Technologie (KIT), Karlsruhe
Dr.-Ing. Michael Grill, Forschungsinstitut für Kraftfahrwesen und Fahrzeugmotoren Stuttgart - FKFS, Stuttgart
Frank Gutmann, M.Eng., SGE Ingenieur GmbH, Gräfelfing
Dipl.-Ing. (FH) Tobias Gutmann, SGE Ingenieur GmbH, Gräfelfing
Dr. Christoph Hametner, Christian Doppler Laboratory for Innovative Control and Monitoring of Automotive Powertrain Systems, TU Wien
Dr.-Ing. Benjamin Hartmann, Bosch Engineering GmbH, Abstatt
Dr. Michael Hegmann, IAV GmbH, Berlin
Dipl.-Ing. Thorsten Huber, ETAS GmbH, Stuttgart
Akira Inoue, Systems/Base Technology & Convergence Group, Nissan Motor Co, Atsugi, Japan
Dipl.-Ing. (FH) Martin Jacob, Bosch Engineering GmbH, Abstatt
Dr. Richard Jakobi, Daimler AG, Stuttgart
Prof. Stefan Jakubek, Institute of Mechanics and Mechatronics, Division of Process Control and Automation, TU Wien
Dipl.-Ing. Sebastian Jambor, FEV Europe GmbH, Aachen
Daechul Jeong, M.Sc., FEV Europe GmbH, Aachen
Andreas Kampmeier, M.Sc., FEV Europe GmbH, Aachen
Carsten Karthaus, M.Sc. (TUM), Daimler AG, Sindelfingen
Dr.-Ing. Mahir Tim Keskin, Forschungsinstitut für Kraftfahrwesen und Fahrzeugmotoren Stuttgart - FKFS, Stuttgart
Dr.-Ing. Frank Kirschbaum, Daimler AG, Stuttgart
Dipl.-Ing. Holger Kleinegraeber, ETAS GmbH, Stuttgart
Dipl.-Ing. Eva-Maria Knoch, Karlsruher Institut für Technologie (KIT), Karlsruhe
Dr. Hans-Ulrich Kobialka, Fraunhofer Institut Intelligente Analyse- und Informationssysteme IAIS, Schloss Birlinghoven, Sankt Augustin
Prof. Dr. sc. techn. Thomas Koch, Institut für Kolbenmaschinen, Karlsruher Institut für Technologie (KIT), Karlsruhe
Alireza Koochali, M.Sc., IAV GmbH, Kaiserslautern
Dipl.-Ing. Matthias Kötter, FEV Europe GmbH, Aachen
Dr.-Ing. Thomas Kruse, ETAS GmbH, Stuttgart
Dipl.-Ing. Christian Kunkel, MAN Energy Solutions SE, Augsburg
Sung-Yong Lee, M.Sc., VKA RWTH, Aachen
Dipl.-Inform. Dr. Jakob Mauss, QTronic GmbH, Berlin
Dr.-Ing. Thomas Mayer, AUDI AG, Ingolstadt
Dr. Yutaka Murata, Honda R&D Co., Ltd., Japan
Dipl.-Ing. Dirk Naber, Robert Bosch GmbH, Stuttgart
Dipl.-Ing. Markus Netterscheid, FEV Europe GmbH, Aachen
Dr. Matthias Neumann-Brosig, IAV GmbH, Gifhorn
Yui Nishio, Honda R&D Co., Ltd., Japan
Dipl.-Ing. Benedikt Nork, DEUTZ AG, Köln
Dipl.-Ing. Dr. Felix Pfister, IPG Automotive GmbH, Karlsruhe
Dipl.-Ing. Imre Pörgye, FEV Europe GmbH, Aachen
Daniel Rimmelspacher, IAV GmbH, Berlin
Alexander von Rohr, M.Sc., IAV GmbH and Max Planck Institute for Intelligent Systems, Weissach, Tübingen/Stuttgart
Dr. Wilfried Rossegger, Kristl, Seibt & Co GmbH, Graz, Austria
Stefan Scheidel, M.Sc., AVL List GmbH, Graz, Austria
Dr. rer. nat. Peter Schichtel, IAV GmbH, Kaiserslautern
Mark Schillinger, M.Sc., Bosch Engineering GmbH, Abstatt
Dr.-Ing. Markus Schori, IAV GmbH, Gifhorn
Dr. Justin Seabrook, Control & Calibration, Ricardo UK, Shoreham-by-Sea, England
Dipl.-Ing. André Sell, SGE Ingenieur GmbH, Gräfelfing
Kiyotaka Shoji, Systems/Base Technology & Convergence Group, Nissan Motor Co, Atsugi, Japan
Felix Springer, IAV GmbH, Berlin
Dr. Sebastian Trimpe, Max Planck & Cyber Valley Research Group on Intelligent Control Systems, Stuttgart/Tübingen
Alonso Marco Valle, M.Sc., Max Planck Institute for Intelligent Systems, Tübingen/Stuttgart
Yan Wang, Ph.D., Ford Motor Company, Dearborn, MI, United States
Dipl.-Ing. Alexander Wasserburger, Christian Doppler Laboratory for Innovative Control and Monitoring of Automotive Powertrain Systems, TU Wien
Sebastian Weber, M.Sc., Mercedes-AMG GmbH, Affalterbach
Yuncong Yu, M.Sc., Karlsruher Institut für Technologie (KIT), Karlsruhe
Giacomo Zerbini, AVL List GmbH, Graz, Austria
Ling Zhu, Ph.D., Ford Motor Company, Dearborn, MI, United States