Quality Assessment For Hyperspectral Airborne Systems From Data Up To Land-Products

DISSERTATION
for the academic degree of Doctor Rerum Naturalium (Dr. rer. nat.) in Computer Science,
submitted to the Mathematisch-Naturwissenschaftliche Fakultät of the Humboldt-Universität zu Berlin
by Ingénieur Généraliste Supaéro, Diplômé de l'Institut Supérieur de l'Aéronautique et de l'Espace: Grégoire Henry Gérard Kerr

President of the Humboldt-Universität zu Berlin: Prof. Dr. Jan-Hendrik Olbertz
Dean of the Mathematisch-Naturwissenschaftliche Fakultät: Prof. Dr. Elmar Kulke
Reviewers:
1. Prof. Dr. Ralf Reulke
2. Prof. Dr. Thomas Udelhoven
3. Dr. René Preusker
Date of defence: 30.07.2015

Short Abstract

This work proposes a methodology for performing a quality assessment of the complete airborne hyperspectral system, ranging from data acquisition up to land-product generation. It is compliant with other quality assessment initiatives, such as the European Facility for Airborne Research (EUFAR), the Quality Assessment for Earth Observation work-group (QA4EO) and the Joint Committee for Guides in Metrology (JCGM). These are extended into a generic framework allowing for a flexible yet reliable quality assessment strategy. Since airborne hyperspectral imagery is usually acquired in several partially overlapping flight-lines, it is proposed to use this information redundancy to retrieve the imagery's internal variability. The underlying method is generic and can easily be introduced into the hyperspectral processing chain of the German Aerospace Center (DLR). The comparison of two overlapping flight-lines is not straightforward, if only because of the geo-location errors present in the data. A first step therefore consists in retrieving the relative variability of the pixels' geo-locations, hence providing pairs of pixels imaging the same areas. Subsequently, these pairs of pixels are used to obtain quality indicators accounting for the reproducibility of mapping-products, thus extending the EUFAR quality layers up to land-products. The third stage of the analysis consists in using these reliability results to improve the mapping-products: it is proposed to maximise the reliability over the mapping-methods' parameters. Finally, the repeatability assessment is back-propagated to the hyperspectral data itself. As a result, an estimator of the reflectance variability (including model- and scene-induced uncertainties) is proposed by means of a blind-deconvolution approach. Altogether, this complements and extends the EUFAR quality layers with estimates of the data and product repeatability while providing confidence intervals as recommended by JCGM and QA4EO.

Keywords — Airborne Remote Sensing, Hyperspectral Processing Chain, Imaging Spectroscopy, Quality Assessment, Signal Processing, EUFAR, QA4EO

Zusammenfassung

This work develops and implements a concept that enables a comprehensive quality assessment of data from airborne hyperspectral systems. It builds on several current initiatives addressing the data quality of airborne sensors: the European Facility for Airborne Research (EUFAR), the Quality Assessment for Earth Observation work-group (QA4EO) and the Joint Committee for Guides in Metrology (JCGM). When an area is surveyed with hyperspectral sensor systems, several partially overlapping flight-lines are acquired.
It is proposed to treat the image information in these overlap regions as redundant and thus to capture the internal variability of the data. Owing to the differing viewing directions, the variability observed between two flight-lines can be regarded as a worst-case estimate and therefore complements existing approaches that concentrate on the evaluation of homogeneous areas. The developed concept is applicable to different sensor systems, hence generic, and can readily be integrated into the current data processing chain of the German Aerospace Center (DLR). The first part of the work shows how corresponding pixel pairs, lying at the same geo-location in the respective flight-lines, can be determined. Building on this, the plausibility of the retrieved pixel pairs is checked using reliability metrics computed from higher-level data products. In a further step, these results are used to derive the parameters needed for an image analysis that is optimised in terms of reliability. Finally, the pixel pairs are used to estimate the global variability of the reflectance values. Altogether, this work substantially complements existing methods for the quality control of optical image data.

Keywords — airborne sensor systems, processing chain for hyperspectral imagery, imaging spectroscopy, quality assurance, signal processing, EUFAR, QA4EO

Long Abstract

In order to be able to use results derived from hyperspectral imagery, it is necessary to first validate the corresponding data and products: 'Are they reliable enough for achieving this goal?', 'What is their quality?' Although some initiatives are currently aiming at addressing these questions, several issues remain open. The first one regards the exact definition of 'quality'. In the following, the term is restricted to the notion of repeatability. In other words, this corresponds to answering the question: 'What are the discrepancies between two similar measurements?'

A second issue regards the feasibility of such an assessment. Hyperspectral data allows for the retrieval of a wide range of - possibly subtle - physical processes. Although this is usually an advantage, it also implies that any measurement is influenced by many unwanted sources. As a result, hyperspectral imagery requires a complex pre-processing chain for translating the 'raw' measurements into valuable information. This involves, in turn, further complex considerations. To start with, these pre-treatments are based on models - or approximations of the actual processes - and therefore introduce their own uncertainties into the data, which would in turn also need to be assessed. Besides these limitations, more classical considerations apply: the sensor itself is not perfect and needs to be characterised and calibrated. This stage is also associated with an uncertainty which affects every further step. In order to evaluate the quality of a product - or even to check whether a data-set is suitable for a specific task - the complete system should be investigated: the sensor and the associated characterisation and calibration, the data acquisition, the data pre-processing and the product generation steps. As a result, any direct uncertainty propagation through the complete system is - in practice - illusory.
In order to bypass these limitations, the European Facility for Airborne Research (EUFAR) has already produced several sets of quality tags. They do not account for the reliability as such, but only indicate whether a given aspect of the data is liable to affect the quality. Although useful, this information has only been developed for the data itself - up to reflectance: owing to the wide variety of products (e.g. classification, quantitative analysis), mapping applications were left aside. The German Aerospace Center, being part of EUFAR, is however interested in extending the existing quality tags toward land-products. This work aims at addressing this issue.

Its first step regards the data quality itself: although of interest, the EUFAR quality tags are not detailed enough to obtain any quantification of the imagery's reliability. For practical applications, this is theoretically bypassed by so-called vicarious (or cross-) validation: the sensor system images a large homogeneous area whose properties are known and compared to the output of the processing chain. However, the vast majority of the areas of interest are heterogeneous, which brings further uncertainties (e.g. adjacency effects, influence of differing pixel footprints). Homogeneous areas therefore correspond to an over-optimistic case, and another solution had to be introduced in order to account for more plausible cases.

Since airborne hyperspectral imagery is usually acquired in several partially overlapping flight-lines, it is proposed to use this information redundancy to retrieve the imagery's internal variability. This implies a data-driven quality assessment specific to each data-set. This specificity is however not a limiting factor, since the analysis can be performed on virtually every data-set. A further consequence is that the repeatability assessment only accounts for relative errors. Again, since the goal here is to investigate the repeatability of measurements, absolute biases are of limited interest - they can actually be assessed by other means. Finally, it is shown that this assessment actually corresponds to a pessimistic - or worst-case - quality assessment, hence complementing existing work on homogeneous areas.

In practice, the comparison of two overlapping flight-lines is however not straightforward, if only because of the geo-location errors present in the data. A first step therefore consists in retrieving the relative variability of the pixels' geo-locations, hence providing pairs of pixels imaging the same geo-locations. This stage is performed on atmospherically corrected (reflectance), geo-rectified data. Subsequently, these pairs of pixels are used to obtain 'quality indicators' accounting for the reproducibility of mapping-products. In particular, it is advocated to replace the classical Cohen's κ coefficient used in remote sensing by the more generic Krippendorff's α. The latter is not limited to the analysis of two classifications - it can, for example, be used to analyse more than two quantitative mapping results (e.g. the organic carbon content of the soil) - while being more stable with respect to configuration changes (e.g. the number of investigated pixels). Several other dedicated indicators are also proposed, as well as means to obtain their confidence intervals, hence extending the EUFAR results over land-products.
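To make the proposed reproducibility indicator concrete, the following minimal Python sketch computes Krippendorff's α = 1 - D_o/D_e (observed over expected disagreement) for the simplest configuration considered here: two raters, i.e. the class labels of co-located pixel pairs from two overlapping flight-lines, nominal data and no missing values. All names are illustrative and this is not the implementation used in this work; the general formulation for interval or ratio data and for more than two flight-lines is the subject of Chapter 6.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(labels_a, labels_b):
    """Krippendorff's alpha for two raters, nominal data, no missing values.

    labels_a, labels_b: class labels of the same co-located pixel pairs,
    e.g. the classifications of flight-lines A and B over their overlap.
    """
    assert len(labels_a) == len(labels_b)
    # Coincidence matrix: each unit (pixel pair) contributes both orderings.
    coincidences = Counter()
    for a, b in zip(labels_a, labels_b):
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1
    n_c = Counter()                      # marginal totals per category
    for (a, _), count in coincidences.items():
        n_c[a] += count
    n = sum(n_c.values())                # = 2 * number of units
    # Observed and expected disagreement (nominal metric: 0 if equal, 1 otherwise).
    d_o = sum(count for (a, b), count in coincidences.items() if a != b) / n
    d_e = sum(n_c[a] * n_c[b] for a, b in permutations(n_c, 2)) / (n * (n - 1))
    return 1.0 - d_o / d_e

# Example: two nearly identical classifications of the same overlap area.
print(krippendorff_alpha_nominal(list("aabbccdd"), list("aabbccdc")))
```

Unlike Cohen's κ, the same coincidence-matrix formulation extends directly to more than two mapping results and, by replacing the nominal 0/1 metric with a squared-difference metric, to quantitative products.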
The third stage of this analysis consists in studying whether these reliability results can be used to improve the mapping-products. Once the reliability of a mapping-product has been established, it is logical to try to improve it. This work proposes to optimise the reliability over the mapping-methods' parameters. Since this approach only depends on the reliability retrieved by means of pairs of observations - which can be computed on virtually every data-set - it is quite generic.

Finally, the repeatability assessment is back-propagated to the hyperspectral data itself. As a result, an estimator of the reflectance variability (including model- as well as scene-induced uncertainties) is proposed, hence both complementing the EUFAR quality layers with estimates of the data repeatability and going beyond them by producing quantitative results. This is achieved by means of a blind-deconvolution framework used in conjunction with Lévy alpha-stable distributions.

Altogether, the proposed approaches provide generic tools that can be used on almost any data-set with minimal human interaction, hence opening the way to their incorporation within the German Aerospace Center processing chain for airborne hyperspectral data. They are furthermore extensively tested against classical data-sets as well as against a specialised test study elaborated within this work.
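As an illustration of the reliability-driven optimisation summarised above, the sketch below maximises a reliability objective over a single mapping parameter with a generic simulated annealing loop (the optimisation scheme discussed in Chapter 7). The objective function, cooling schedule, step size and starting point are placeholder assumptions chosen only so that the example runs stand-alone; in the actual analysis the objective would be a reproducibility indicator, such as Krippendorff's α evaluated on the co-located pixel pairs of two overlapping flight-lines, and suitable constraints would rule out degenerate (over-fitted) solutions.

```python
import math
import random

def reliability(theta):
    """Placeholder objective: reproducibility of the map produced with
    parameter theta. A synthetic bump is used so the sketch runs stand-alone."""
    return math.exp(-(theta - 0.3) ** 2 / 0.02)

def anneal(theta0, n_steps=2000, temp0=1.0, step=0.05):
    """Maximise the reliability over a single mapping parameter by simulated
    annealing with a linear cooling schedule."""
    theta, best = theta0, theta0
    for k in range(n_steps):
        temp = temp0 * (1.0 - k / n_steps) + 1e-9
        candidate = theta + random.gauss(0.0, step)
        delta = reliability(candidate) - reliability(theta)
        # Always accept improvements; accept deteriorations with a
        # temperature-dependent probability so local maxima can be escaped.
        if delta > 0 or random.random() < math.exp(delta / temp):
            theta = candidate
        if reliability(theta) > reliability(best):
            best = theta
    return best

print(anneal(theta0=0.9))  # should end up near 0.3 for this synthetic objective
```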
Contents

1 Introduction & Objectives
   1.1 Hyperspectral Airborne Remote Sensing
   1.2 Objectives
   1.3 Proposed Approach and Outline of the Work
2 Remote Sensing & Spectroscopy
   2.1 Introduction: Remote Sensing
   2.2 Hyperspectral Remote Sensing
      2.2.1 Basic Principles
      2.2.2 Interactions with Matter
      2.2.3 Spectral Analysis
      2.2.4 Measurement Devices
   2.3 DLR Processing Chain & Associated Errors
      2.3.1 Processing Chain Description
      2.3.2 Further Sources of Uncertainties
   2.4 Common Spectroscopy Tools
      2.4.1 Spectral Metrics
      2.4.2 Band Ratios
      2.4.3 Conclusion
   2.5 Conclusions
3 Quality Assessment
   3.1 Introduction
   3.2 Generic Concepts
      3.2.1 Quality Assessment: Uses and Interest
      3.2.2 Quality for Data Processing
      3.2.3 Quality for Data Provider
      3.2.4 Quality as Traceability
      3.2.5 Quality as Reproducibility
      3.2.6 Profane Requirements and Conclusions
   3.3 Generic Quality Assessment
      3.3.1 Recommendations from JCGM
      3.3.2 Quality Assessment for Earth Observation (QA4EO)
      3.3.3 EUFAR Quality Layers
      3.3.4 Connex Approaches
   3.4 Quality Assessment in Practice
      3.4.1 Expert Validation
      3.4.2 In-Situ Data
      3.4.3 Cross Calibration
      3.4.4 Quality Indexes
      3.4.5 Bayesian Inference
      3.4.6 Noise Estimation
      3.4.7 Miscellaneous Methods
   3.5 Conclusions and Retained Methodology
4 Corresponding Pixels Automated Matcher
   4.1 Introduction
      4.1.1 Generic Introduction
      4.1.2 Co-Registration: State of the Art
   4.2 Proposed Methodology: CPAM
      4.2.1 Principle
      4.2.2 Validating Found Pixels
      4.2.3 Note on Complexity
   4.3 Parameters Choice
      4.3.1 Spectral Metrics
      4.3.2 Neighbourhood Size
      4.3.3 Conclusions
   4.4 CPAM Validation
      4.4.1 Illustrative Results
      4.4.2 Validation on Synthetic Data
      4.4.3 Triple Pass Validation
   4.5 CPAM Results and Analysis
      4.5.1 HyMap
      4.5.2 HySpex - DLR
      4.5.3 AISA
      4.5.4 Extension to Satellite Data
   4.6 Conclusions
5 Filling the Gaps
   5.1 Introduction
   5.2 Available Methodologies
   5.3 Radial Basis Functions
      5.3.1 Principles
      5.3.2 Solving the Interpolation
      5.3.3 Dealing with Outliers
      5.3.4 Note on Complexity
   5.4 Retained Approach
      5.4.1 Complexity
      5.4.2 Application
   5.5 Conclusions
6 Quality Indicators for Maps
   6.1 Introduction
      6.1.1 Principles
      6.1.2 Applications
   6.2 Basic Analysis of L3 products
      6.2.1 Generic Analysis
      6.2.2 Confidence Intervals
      6.2.3 Specialised 'Basic' Analysis
      6.2.4 Case Study: Nominal Data
      6.2.5 Case Study: Absolute Data
      6.2.6 Conclusion
   6.3 In-depth Analysis: Cohen's Kappa Coefficient
      6.3.1 Inter-Rater Reliability: the Kappa Coefficient
      6.3.2 Other Coefficients
      6.3.3 Limitations
   6.4 A Unified Framework: Krippendorff's α
      6.4.1 Motivations
      6.4.2 Principles
      6.4.3 Confidence Intervals
      6.4.4 Note on Complexity
      6.4.5 Relationships Between Krippendorff's Alpha and Other Coefficients
      6.4.6 Conclusions on Theoretical Background
   6.5 Experimental Results
   6.6 Conclusions
7 Improving Land-Products
   7.1 Introduction
   7.2 Framework & Constraints
      7.2.1 Constraints
      7.2.2 Model's Parameters: Origins and Analysis
   7.3 Methodology: Improving Results
      7.3.1 Which 'Empirical' Parameters?
      7.3.2 Optimisation Constraints
      7.3.3 Over-fitting
   7.4 Optimisation Method
      7.4.1 Generic Considerations
      7.4.2 Other Methodologies
      7.4.3 Simulated Annealing
   7.5 Application
      7.5.1 Classification Model
      7.5.2 Obtaining Optimisation Parameters
      7.5.3 Results and Analysis
   7.6 Conclusions
8 In Depth Analysis of the Variability
   8.1 Introduction
   8.2 Retained Modelling
      8.2.1 Principles
      8.2.2 Existing Approaches
      8.2.3 Differing Viewpoints
   8.3 Analysis of Deviations
      8.3.1 Classical Distributions
      8.3.2 Which Distribution?
   8.4 Stable Distribution
      8.4.1 Notions and Definitions
      8.4.2 Sum of Stable Distributions
      8.4.3 Link with Other Distributions
      8.4.4 Parameter Estimation
   8.5 Retrieving Individual Variability
      8.5.1 Retained Model
      8.5.2 Getting the Optimum
      8.5.3 Summary: Final Algorithm
   8.6 Application
   8.7 Conclusions
9 Summary
   9.1 Reminder: Problem Statement
   9.2 Review of Proposed Solutions
   9.3 Critical Review of the Achievements
   9.4 Outlook