Anal Chem. 2017 Nov 07;89(21):11568-11575. doi: 10.1021/acs.analchem.7b02909. Epub 2017 Oct 26.

Fluctuation Scaling, Calibration of Dispersion, and Detection of Differences.


Rianne Holland, Roman Rebmann, Craig Williams, Quentin S Hanley

Affiliations

  1. School of Science and Technology, Nottingham Trent University, Clifton Lane, Nottingham NG11 8NS, United Kingdom.
  2. School of Biology, Chemistry and Forensic Science, University of Wolverhampton, Wulfruna Street, Wolverhampton WV1 1LY, United Kingdom.

PMID: 29019236 DOI: 10.1021/acs.analchem.7b02909

Abstract

Fluctuation scaling describes the relationship between the mean and standard deviation of a set of measurements. An example is Horwitz scaling, which has been reported from interlaboratory studies. Horwitz and similar studies have reported simple exponential and segmented scaling laws with exponents (α) typically between 0.85 (Horwitz) and 1 when not operating near a detection limit. When approaching a detection limit, the exponents change and approach an apparently Gaussian (α = 0) model. This behavior is often presented as a property of interlaboratory studies, which makes controlled replication to understand the behavior costly to perform. To assess the contribution of instrumentation to larger scale fluctuation scaling, we measured the behavior of two inductively coupled plasma atomic emission spectrometry (ICP-AES) systems in two laboratories, measuring thulium using two emission lines. The standard deviation universally increased with the uncalibrated signal, indicating the system was heteroscedastic. The response from all lines and both instruments was consistent with a single exponential dispersion model having parameters α = 1.09 and β = 0.0035. No evidence of Horwitz scaling was found, and there was no evidence of Poisson noise limiting behavior. The "Gaussian" component was a consequence of background subtraction for all lines and both instruments. The observation of a simple exponential dispersion model in the data allows for the definition of a difference detection limit (DDL) with universal applicability to systems following a known dispersion model. The DDL is the minimum separation between two points along a dispersion model required to claim they are different according to a particular statistical test. The DDL scales transparently with the mean and works at any location in a response function.
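To illustrate the idea behind the DDL, the sketch below assumes the dispersion model stated in the abstract, σ(μ) = β·μ^α with α = 1.09 and β = 0.0035, and a two-sided Gaussian (z) test on single measurements as the "particular statistical test". The choice of test, the 95% critical value, and the function names are assumptions for illustration only; the abstract does not specify them.

```python
import numpy as np

# Dispersion model reported in the abstract: sigma(mu) = beta * mu**alpha
ALPHA = 1.09
BETA = 0.0035


def sigma(mu, alpha=ALPHA, beta=BETA):
    """Standard deviation predicted by the fluctuation-scaling model."""
    return beta * np.power(mu, alpha)


def difference_detection_limit(mu, z_crit=1.96, n_iter=50):
    """
    Illustrative DDL estimate (assumed test, not the authors' exact procedure):
    the smallest separation 'delta' such that two means, mu and mu + delta,
    differ by at least z_crit combined standard deviations under a two-sided
    Gaussian test on single measurements.

    Solved by fixed-point iteration of
        delta = z_crit * sqrt(sigma(mu)**2 + sigma(mu + delta)**2)
    """
    delta = z_crit * np.sqrt(2.0) * sigma(mu)  # starting guess
    for _ in range(n_iter):
        delta = z_crit * np.sqrt(sigma(mu) ** 2 + sigma(mu + delta) ** 2)
    return delta


if __name__ == "__main__":
    # The DDL grows with the mean, mirroring the heteroscedastic response.
    for mu in (10.0, 100.0, 1000.0, 10000.0):
        print(f"mean = {mu:8.1f}   sigma = {sigma(mu):8.4f}   "
              f"DDL ~ {difference_detection_limit(mu):8.4f}")
```

Because σ(μ) increases monotonically with μ under this model, the fixed-point iteration converges quickly, and the resulting DDL scales with the mean rather than being a single fixed threshold, which is the behavior described in the abstract.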
