A large part of the analytical variation in metabolomics measurements is caused by suboptimal performance of the chosen platform for (sub)sets of metabolites and by instrumental drift. The ability of a method to detect a specific metabolite (i.e., its performance) depends on a complex interplay of the metabolite's physical and chemical properties, and also partially on the sample composition (matrix effects, e.g., ion suppression in MS-based systems) and, in many cases, on its concentration. Analytical variation for an individual analyte caused by differences in sample composition (matrix effects during extraction, derivatization and analysis) can be removed only by using stable-isotope-labeled internal standards; the measured intensities can then be normalized to these internal standards. In metabolomics, where hundreds of (partly unidentified) metabolites are measured, it is very uncommon to measure calibration standards for each of these metabolites. This would be laborious and thus expensive, but, equally important, it is not always clear beforehand which metabolites will be of interest for a study.
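The normalization to stable-isotope-labeled internal standards mentioned above can be sketched as a simple response-ratio calculation. The metabolite names, intensity values and pairing of each analyte with its own labeled standard below are illustrative assumptions, not data from this project:

```python
# Raw peak intensities for each analyte in one sample (illustrative values)
raw = {"alanine": 15200.0, "glucose": 88400.0}

# Intensities of the corresponding stable-isotope-labeled internal standards
# (ISTDs), spiked into the sample at a known, constant amount; here each
# analyte is assumed to be paired with its own ISTD
istd = {"alanine": 5060.0, "glucose": 22100.0}

def normalize(raw, istd):
    """Return the response ratio analyte / internal standard.

    Because the labeled standard experiences the same matrix effects
    (ion suppression, extraction and derivatization losses) as the
    analyte, the ratio cancels most sample-composition-dependent
    variation for that analyte.
    """
    return {m: raw[m] / istd[m] for m in raw}

ratios = normalize(raw, istd)
```

In practice one labeled standard per analyte is rarely available for hundreds of metabolites, which is exactly the limitation the text describes; the sketch only shows why the ratio removes matrix-dependent variation when a matched standard does exist.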
For proper biological interpretation, however, it is essential to have quantitative and annotated metabolomics data. This project aims to develop analytical and statistical methodologies to 1) quantify and 2) correct for analytical (non-biological) variation between samples, batches of samples, instruments and laboratories. Metabolomics data obtained from various analytical platforms, acquired at different points in time and for different sample sets, can then be merged and used for comprehensive statistical data analysis.
To obtain such data, four levels of comparison and quantification of (relative) metabolite concentrations were identified: within one sample, across samples, across batches of samples and across studies. For each of these levels, experiments will be set up to quantify the different sources of variation.
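One common way to quantify sources of variation at the sample and batch levels is to inject a pooled quality-control (QC) sample repeatedly in every batch and decompose its measured intensities into within-batch and between-batch variance. The batch labels, replicate counts and intensity values below are illustrative assumptions; this is a minimal sketch of the variance decomposition, not the project's actual experimental design:

```python
# Repeated QC-sample intensities for one metabolite, grouped by batch
# (illustrative values; equal replicate counts per batch assumed)
qc = {
    "batch1": [10.1, 9.8, 10.3],
    "batch2": [11.0, 11.4, 10.9],
    "batch3": [9.5, 9.9, 9.6],
}

def variance(xs):
    """Sample variance (n - 1 in the denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def variance_components(qc):
    """Return (within-batch variance, between-batch variance).

    Within-batch: the average of the per-batch sample variances,
    reflecting repeatability inside one batch.
    Between-batch: the sample variance of the batch means, reflecting
    drift and other batch-to-batch effects.
    """
    within = sum(variance(reps) for reps in qc.values()) / len(qc)
    batch_means = [sum(reps) / len(reps) for reps in qc.values()]
    between = variance(batch_means)
    return within, between

within, between = variance_components(qc)
```

A large between-batch component relative to the within-batch component would indicate that batch-level correction (e.g., using the QC measurements themselves) is needed before data from different batches can be merged.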