Software tools, Reference data, and Guidelines

Software tools - Introduction

Software supporting mathematics and statistics in metrology is often used for simulating measurement processes, for data evaluation and for uncertainty assessment. The most urgently needed software tools from the perspective of mathematics and statistics in metrology were identified through a stakeholder consultation process and the long-term experience of the EMN Mathmet.

The following table shows a summary of the most relevant software tools.

 

Software tools

Title Description / Details  
CASoft

CASoft is a software tool for managing the risks associated with decision-making in conformity assessment when measurement uncertainty has to be taken into account. In particular, CASoft aims to support the practical application of the methodology described in the reference document “The role of measurement uncertainty in conformity assessment” (JCGM 106:2012). CASoft is written in Matlab®.

https://www.lne.fr/en/software/CASoft
LNE Uncertainty

LNE Uncertainty is freeware that evaluates measurement uncertainty by the propagation of variances or by the propagation of distributions using Monte Carlo simulations. These methods are described in the Guide to the Expression of Uncertainty in Measurement (GUM) and its Supplement 1 (GUM S1), respectively. LNE Uncertainty is written in Matlab®.

https://www.lne.fr/en/software/lne-uncertainty-evaluating-measurement-uncertainties-using-gum-and-monte-carlo
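To illustrate the two approaches this tool implements, here is a minimal NumPy sketch (not LNE Uncertainty itself; the model Y = X1·X2 and all numbers are invented for the example) comparing the GUM law of propagation of uncertainty with a GUM S1 Monte Carlo evaluation:

```python
import numpy as np

# Model: Y = X1 * X2, with X1, X2 independent and normally distributed.
x1, u1 = 10.0, 0.1   # estimate and standard uncertainty of X1
x2, u2 = 5.0, 0.05   # estimate and standard uncertainty of X2

# GUM (JCGM 100): law of propagation of uncertainty via sensitivity coefficients.
c1, c2 = x2, x1                      # partial derivatives dY/dX1, dY/dX2
u_gum = np.hypot(c1 * u1, c2 * u2)   # combined standard uncertainty

# GUM S1 (JCGM 101): propagation of distributions by Monte Carlo simulation.
rng = np.random.default_rng(1)
M = 200_000
y = rng.normal(x1, u1, M) * rng.normal(x2, u2, M)
u_mc = y.std(ddof=1)

print(f"GUM: {u_gum:.4f}, Monte Carlo: {u_mc:.4f}")
```

For this nearly linear model the two results agree closely; the Monte Carlo value is slightly larger because it also captures the second-order term that the first-order GUM formula neglects.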
metrology package for R

A free, open-source contributed package for R. It provides classes as well as calculation and plotting functions for metrology applications, including measurement uncertainty estimation and inter-laboratory comparison studies. Measurement uncertainty approaches include algebraic and numerical differentiation and Monte Carlo simulation.

https://cran.r-project.org/package=metRology
PyDynamic

PyDynamic offers propagation of uncertainties for
- application of the discrete Fourier transform and its inverse
- filtering with an FIR or IIR filter with uncertain coefficients
- design of a FIR filter as the inverse of a frequency response with uncertain coefficients
- design of an IIR filter as the inverse of a frequency response with uncertain coefficients
- deconvolution in the frequency domain by division
- multiplication in the frequency domain
- transformation from amplitude and phase to a representation by real and imaginary parts
- 1-dimensional interpolation
For the validation of the propagation of uncertainties, the Monte-Carlo method can be applied using a memory-efficient implementation of Monte-Carlo for digital filtering. PyDynamic is written in Python.

https://pypi.org/project/PyDynamic
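As a rough illustration of what Monte Carlo validation of uncertainty propagation through a digital filter involves, here is a plain NumPy sketch (this is not the PyDynamic API; the filter, its coefficient covariance and the input signal are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal FIR filter (3-tap moving average) with uncertain coefficients.
b = np.array([0.25, 0.5, 0.25])        # coefficient estimates
Ub = np.diag([1e-4, 1e-4, 1e-4])       # assumed coefficient covariance

x = np.sin(2 * np.pi * 0.05 * np.arange(50))  # noise-free input signal

# Monte Carlo: draw coefficient sets, filter, and summarize the output ensemble.
M = 5000
draws = rng.multivariate_normal(b, Ub, size=M)
ys = np.array([np.convolve(x, bi, mode="full")[: len(x)] for bi in draws])

y_mean = ys.mean(axis=0)        # output estimate
u_y = ys.std(axis=0, ddof=1)    # pointwise standard uncertainty of the output
```

PyDynamic's analytical propagation formulas can then be checked against such an ensemble; the package additionally provides a memory-efficient Monte Carlo implementation so that long signals do not require storing the full ensemble as done here.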
PyThia 3.00

The PyThia UQ toolbox is an open-source software package designed to generate polynomial chaos surrogates for high-dimensional functions in a non-intrusive fashion. The surrogate is fast to evaluate, allows analytical differentiation and has built-in global sensitivity analysis via Sobol indices. The surrogate is assembled non-intrusively by least-squares regression, so only training pairs of parameter realizations and evaluations of the forward problem are required; no interfaces to legacy code need to be written. PyThia is written in Python.

https://pypi.org/project/pythia-uq/
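The non-intrusive least-squares construction can be sketched in a few lines of plain NumPy (this is not the PyThia API; the forward model, polynomial degree and sample size are illustrative assumptions), using a 1-D Legendre basis as a stand-in for a polynomial chaos expansion:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

def forward(x):
    # "Expensive" forward model to be replaced by a surrogate (illustrative).
    return np.exp(0.5 * x) + 0.1 * x**3

# Training pairs: parameter realizations on [-1, 1] and model evaluations.
x_train = rng.uniform(-1.0, 1.0, 200)
y_train = forward(x_train)

# Non-intrusive least-squares fit of the Legendre expansion coefficients.
deg = 6
V = legendre.legvander(x_train, deg)               # design matrix
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# The surrogate is cheap to evaluate and differentiable in closed form.
x_test = np.linspace(-1, 1, 101)
err = np.max(np.abs(legendre.legval(x_test, coef) - forward(x_test)))
```

In the multivariate case PyThia uses tensorized polynomial bases, and the squared expansion coefficients directly yield Sobol sensitivity indices.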
EPTlib library

EPTlib is an open-source, extensible collection of C++ implementations of electric properties tomography (EPT) methods. It includes:
- Helmholtz EPT
- Convection-reaction EPT
- Gradient EPT
- Phase-based Helmholtz EPT with automatically selected kernel

https://eptlib.github.io/
Calibration Curve Computing (CCC)

CCC offers:
- evaluation of calibration curves fitted to n pairs of measurement data (x, y)
- fitting procedures that accept uncertainties and correlations associated with both the x and y values as input
- fitting procedures that accept single or repeated measurements for each data point (x, y)

https://www.inrim.it/en/services/software-and-database/ccc-software
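For the simplest case such a tool covers — a straight line fitted to data with uncertainties in y only — a generic weighted least-squares sketch looks as follows (this is not CCC's implementation; the data and uncertainties are invented for the example):

```python
import numpy as np

# Calibration data: n pairs (x, y) with standard uncertainties u(y) in y only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.9])
uy = np.array([0.1, 0.1, 0.2, 0.1, 0.2])

# Weighted least squares for y = a + b*x with weights w = 1/u(y)^2.
w = 1.0 / uy**2
A = np.vstack([np.ones_like(x), x]).T
W = np.diag(w)

# Solve the normal equations (A^T W A) p = A^T W y.
cov = np.linalg.inv(A.T @ W @ A)   # covariance matrix of (a, b)
a, b = cov @ (A.T @ W @ y)

print(f"intercept a = {a:.3f} ± {np.sqrt(cov[0, 0]):.3f}")
print(f"slope     b = {b:.3f} ± {np.sqrt(cov[1, 1]):.3f}")
```

Handling uncertainties in x as well, or correlations between points, requires generalized least squares or iterative schemes of the kind implemented in dedicated tools such as CCC.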
NPLUnc

NPLUnc promotes and supports the use of the GUM, of GUM Supplement 1, which is concerned with the use of a Monte Carlo method for uncertainty evaluation, and of GUM Supplement 2, which extends the GUM and GUM Supplement 1 to measurement models with any number of output quantities.

https://www.npl.co.uk/software
X(L)GENLINE

This software supports the ISO Technical Specification (TS) “Determination and use of straight-line calibration functions”.

https://www.npl.co.uk/software
Software to support ISO/TS 28037:2010E

This software supports ISO/TS 28037:2010, “Determination and use of straight-line calibration functions”.

https://www.npl.co.uk/software
NPL CoMet

The NPL CoMet Toolkit (Community Metrology Toolkit) is an open-source software project to develop Python tools for the handling of error-covariance information in the analysis of measurement data.

https://www.comet-toolkit.org/about/
PTB Software Tools

PTB Software tools addressing specific tasks in uncertainty quantification, evaluation of interlaboratory comparisons, legal metrology, regression and deep learning: 
- Rejection sampling for Bayesian uncertainty evaluation using the Monte Carlo techniques of GUM-S1
- Bayesian sample size planning in type A uncertainty evaluation
- A simple method for Bayesian uncertainty evaluation in linear models
- An introductory example for Markov chain Monte Carlo (MCMC)
- WinBUGS software for the analysis of immunoassay data
- MCMC implementation for the analysis of magnetic field fluctuation thermometry
- A smoothness-informed low-rank reconstruction method
- Bayesian Uncertainty Quantification for Magnetic Resonance Fingerprinting
- Inspecting adversarial examples using the Fisher information
- Hypothesis-based acceptance sampling for the MID
- Shapiro-Wilk test
- Calibration of a torque measuring system - GUM uncertainty evaluation for least-squares versus Bayesian inference
- Quantifying uncertainty when comparing measurement methods – Haemoglobin concentration as an example of correlation in straight-line regression
- Calibration of a sonic nozzle as an example for quantifying all uncertainties involved in straight-line regression
- Bayesian hypothesis testing for key comparisons
- Evaluation of uncertainties in the application of the DFT

https://www.ptb.de/cms/en/ptb/fachabteilungen/abt8/fb-84/ag-842/software.html
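To give a flavour of the introductory MCMC material in this collection, here is a minimal random-walk Metropolis sampler in NumPy (a generic textbook sketch, not PTB's code; the data and step size are invented) for the mean of repeated indications with known standard deviation and a flat prior:

```python
import numpy as np

rng = np.random.default_rng(42)

# Repeated indications with known standard deviation; flat prior on the mean.
data = np.array([9.9, 10.1, 10.0, 10.2, 9.8])
sigma = 0.1

def log_post(mu):
    # Log posterior up to a constant (Gaussian likelihood, flat prior).
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

# Random-walk Metropolis: propose, then accept with probability min(1, ratio).
n_steps, step = 20_000, 0.1
chain = np.empty(n_steps)
mu = data.mean()
lp = log_post(mu)
for i in range(n_steps):
    prop = mu + step * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    chain[i] = mu

posterior = chain[5000:]   # discard burn-in
print(f"posterior mean {posterior.mean():.3f}, sd {posterior.std(ddof=1):.4f}")
```

For this conjugate setup the posterior is known in closed form (mean equal to the sample mean, standard deviation sigma/sqrt(n)), which makes it a convenient check that the sampler is working.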

Reference data

The need for reference data is increasing, and reference data sets are expected to become more and more important in metrology, as in other fields. They are expected to act as “digital standards” for the benchmarking and validation of AI tools, digital twins and virtual metrology models. The most urgent needs for reference data for mathematics and statistics in metrology were identified through a stakeholder consultation process and the long-term experience of the members of the EMN Mathmet.

The following table shows a summary of the most relevant reference data.

 

Reference data

Title Description / Details  
PTB XL (ECG –Reference data)

The PTB-XL ECG dataset is a large dataset of 21,799 clinical 12-lead ECGs of 10 seconds length from 18,869 patients. The raw waveform data was annotated by up to two cardiologists, who assigned potentially multiple ECG statements to each record. In total, 71 different ECG statements conforming to the SCP-ECG standard cover diagnostic, form, and rhythm statements. To ensure comparability of machine learning algorithms trained on the dataset, recommended splits into training and test sets are provided. In combination with the extensive annotation, this makes the dataset a rich resource for the training and evaluation of automatic ECG interpretation algorithms. The dataset is complemented by extensive metadata on demographics, infarction characteristics, likelihoods for diagnostic ECG statements and annotated signal properties.

https://physionet.org/content/ptb-xl/1.0.3/
PTB-XL+

PTB-XL+ is a synthetic database comprising a total of 16,900 12-lead ECGs based on electrophysiological simulations, equally distributed over a healthy control group and 7 pathology classes; the pathological case of myocardial infarction comprises 6 sub-classes. A comparison of extracted features between the virtual cohort and a publicly available clinical ECG database demonstrated that the synthetic signals represent clinical ECGs for healthy and pathological subpopulations with high fidelity. The database is split into training, validation, and test folds for the development and objective assessment of novel machine learning algorithms.

https://arxiv.org/pdf/2211.15997.pdf
TraCIM database

The TraCIM Computational Aims Database is a system for storing and managing documents that record the specifications of computational aims. These specifications can then be cited by other parts of the TraCIM system. Users of the database fall into four categories of increasing system privileges. Related topics: computational modelling and virtual metrology, validation of models.

https://tracim.ptb.de/tracim/

Guidelines

The most important guidelines for stakeholders in the field of mathematics and statistics in metrology as well as for Mathmet members are mainly documents about uncertainty evaluation (e.g. the GUM suite of documents).

Some stakeholders have indicated a need for guidance on new trends or specific applications (e.g. medical applications, machine learning, climate and environmental observations). The most urgent needs for guidelines on mathematics and statistics in metrology were identified through a stakeholder consultation process and the long-term experience of the members of the EMN Mathmet.

The following table shows a summary of the most relevant guidelines.

Guidelines

Title Description / Details  
Measurement Uncertainty JCGM GUM suite

This GUM suite establishes general rules for evaluating and expressing uncertainty in measurement that are intended to be applicable to a broad spectrum of measurements. The GUM suite comprises:
- JCGM 100:2008 Guide to the Expression of Uncertainty in Measurement
- JCGM 101:2008 Supplement 1 – Propagation of distributions using a Monte Carlo method
- JCGM 102:2011 Supplement 2 – Extension to any number of output quantities
- JCGM 106:2012 Evaluation of measurement data – The role of measurement uncertainty in conformity assessment
- JCGM GUM-6:2020 Guide to the expression of uncertainty in measurement – Part 6: Developing and using measurement models
 

https://www.bipm.org/en/committees/jc/jcgm/publications
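As a small worked example of the JCGM 100 law of propagation of uncertainty, including the covariance term needed for correlated inputs, consider the power P = V²/R (all numbers, including the correlation coefficient, are invented for illustration):

```python
import numpy as np

# Measurement model: P = V^2 / R, with correlated inputs V and R.
V, uV = 5.0, 0.02     # voltage estimate and standard uncertainty
R, uR = 100.0, 0.5    # resistance estimate and standard uncertainty
r = 0.3               # assumed correlation coefficient r(V, R)

cV = 2 * V / R        # sensitivity coefficient dP/dV
cR = -(V / R) ** 2    # sensitivity coefficient dP/dR

# Combined variance, including the covariance term 2*cV*cR*u(V)*u(R)*r.
u2 = (cV * uV) ** 2 + (cR * uR) ** 2 + 2 * cV * cR * uV * uR * r
print(f"P = {V**2 / R:.4f} W, u(P) = {np.sqrt(u2):.5f} W")
```

Here the covariance term is negative (the sensitivity coefficients have opposite signs), so the positive correlation between V and R reduces the combined uncertainty compared with the uncorrelated case.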
Eurachem guide on "Quantifying Uncertainty in Analytical Measurement"

This Eurachem guide gives detailed guidance for the evaluation and expression of uncertainty in quantitative chemical analysis, based on the approach taken in the ISO “Guide to the Expression of Uncertainty in Measurement”, including numerical and Monte Carlo methods. Common areas in which chemical measurements are needed, and in which the principles of the guide may be applied, are:
- Quality control and quality assurance in manufacturing industries. 
- Testing for regulatory compliance. 
- Testing utilising an agreed method. 
- Calibration of standards and equipment. 
- Measurements associated with the development and certification of reference materials.
 

https://www.eurachem.org
Eurachem guide "Use of uncertainty information in compliance assessment"

Guidance on use of measurement uncertainty information in conformity assessment. The guide includes a discussion and general recommendations, including the use of "guard bands" to improve the probability of correct acceptance or correct rejection. This is followed by more detailed guidance on establishing rules for interpretation and by several examples.

https://www.eurachem.org
EMPIR NEW04: Best practice guides

The collection comprises three guides, which are deliverables of the EMRP joint research project NEW04:
- A Guide to Bayesian Inference for Regression Problems: The Guide provides practical guidance on Bayesian inference for regression problems.

- Best practice guide to uncertainty evaluation for computationally expensive models: The guide provides a summary of current best practice in uncertainty evaluation for computationally expensive models. A computationally expensive model can, in this context, be considered a model that takes so long to produce results that the user has rejected Monte Carlo sampling as an uncertainty evaluation method because it requires too many model evaluations to reach the required level of accuracy.

- A guide to decision-making and conformity assessment: Generic guidance, including some original material, that addresses the role of measurement uncertainty in decision-making and conformity assessment for multivariate cases, regression and computationally expensive models, illustrated by a number of case studies such as fire engineering, healthcare and electricity energy metering.

https://www.ptb.de/emrp/new04-best-practice-guides.html
EMPIR 17NRM05 EMUE: Good practice in evaluating measurement uncertainty – Compendium of examples

This suite of examples illustrates the use of the methods described in the Guide to the Expression of Uncertainty in Measurement (GUM) as well as several other methods that have not yet been included in that suite of documents. Related topics: uncertainty quantification and data analysis.

http://empir.npl.co.uk/emue/wp-content/uploads/sites/49/2021/07/Compendium_M36.pdf

Disclaimer

The information in this database has been provided by the Members and Partners of the EMN for Mathematics and Statistics. EURAMET has no influence on their correctness and does not assume any liability for it.

The information contained in this document is provided 'as is' without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, or non-infringement.

In no event shall the creator of this document be liable for any special, incidental, indirect or consequential damages of any kind, or any damages whatsoever, including, without limitation, those resulting from loss of use, data or profits, whether or not advised of the possibility of damage, and on any theory of liability, arising out of or in connection with the use or performance of this information.