
Negative concentrations in spectrum analysis

Often, people ask us why the full spectrum analysis (FSA) method that is part of the Gamman software sometimes produces negative concentrations as a result of the fit. The reason is purely mathematical. FSA is based on fitting the measured spectrum with detector response curves. The minimization algorithm used is general least squares (LSQ), which finds the optimum concentrations of K, U and Th such that the measured spectrum is best described by the detector response curves. Mathematically, however, that optimum can be reached with positive and/or negative concentrations, while in real life such negative nuclide concentrations are of course non-physical.
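A minimal sketch of that fitting step is shown below. It builds a toy spectrum from three entirely synthetic "response curves" (the peak positions, widths and concentrations are fabricated for illustration and are not real Gamman calibration data) and solves the unconstrained least-squares problem with NumPy. Nothing in such a fit forbids negative coefficients.

```python
import numpy as np

# Minimal sketch of the FSA idea: the measured spectrum is modelled as a
# linear combination of per-nuclide detector response curves, and ordinary
# least squares finds the concentrations. The curves below are purely
# synthetic placeholders, not real calibration data.
rng = np.random.default_rng(0)
channels = np.arange(1024)

def fake_response(peak, width):
    # Toy "response curve": a Gaussian peak on an exponential continuum.
    return np.exp(-0.5 * ((channels - peak) / width) ** 2) + 0.2 * np.exp(-channels / 400)

R = np.column_stack([
    fake_response(470, 15),   # stand-in for the K response curve
    fake_response(560, 20),   # stand-in for the U response curve
    fake_response(850, 25),   # stand-in for the Th response curve
])

true_conc = np.array([300.0, 30.0, 40.0])            # Bq/kg, arbitrary example
measured = rng.poisson(R @ true_conc).astype(float)  # counting noise

# Unconstrained least squares: nothing forbids negative solutions.
conc, *_ = np.linalg.lstsq(R, measured, rcond=None)
print(dict(zip(["K", "U", "Th"], conc.round(1))))
```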

There are several situations that may lead to negative numbers in the output of the algorithm:

  1. Low count-rate data. When the measured spectrum has a (very) low number of counts, the concentrations will be close to zero. Inherent to the LSQ algorithm is that the nuclide concentrations come with an uncertainty, and this uncertainty will be relatively large for low count-rate spectra. This yields a finite probability that the "concentration" derived from the spectrum is negative (for instance K = -10 +/- 20 Bq/kg). A numerical sketch of this effect follows the list below.
  2. Difference between calibration and measurement geometries. Often, when one does a measurement, for instance with a point source close to a detector, the resulting spectrum will have a shape that differs (it is much sharper) from the corresponding calibration curve. This is due to the differing geometries between curve and measurement. The image below shows that situation: a Th spectrum taken with a Th source close to a detector. The Th response curve (blue), however, was modelled assuming Th inside a 0.5 m layer of soil. As one can see, the curve does not fit the measurement perfectly. The algorithm still tries to find the optimum fit, at the expense of reporting a negative uranium concentration.

  3. Nuclides present in the data but not in the calibration. In this case, there are peaks present in the data that are not accounted for by the calibration file. For instance, one could measure anthropogenic 137Cs without having a calibration curve for that nuclide. The algorithm still finds a solution, but the problem is (mathematically) not well-described.
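The toy simulation below illustrates situation 1. Two overlapping synthetic response curves are fitted to many low-count Poisson spectra in which the second nuclide is barely present; a sizeable fraction of the fits returns a negative value for it. All curve shapes and concentrations are made-up illustration values.

```python
import numpy as np

# Toy illustration of situation 1: with very few counts, the uncertainty on
# the fitted concentrations is large relative to the values themselves, so
# some fits land below zero. The two overlapping Gaussian "response curves"
# are synthetic placeholders.
rng = np.random.default_rng(1)
channels = np.arange(256)
r1 = np.exp(-0.5 * ((channels - 80) / 12.0) ** 2)   # toy curve, nuclide A
r2 = np.exp(-0.5 * ((channels - 100) / 12.0) ** 2)  # toy curve, nuclide B
R = np.column_stack([r1, r2])

true_conc = np.array([2.0, 0.2])   # nuclide B barely present
negatives = 0
trials = 1000
for _ in range(trials):
    spectrum = rng.poisson(R @ true_conc).astype(float)
    conc, *_ = np.linalg.lstsq(R, spectrum, rcond=None)
    negatives += conc[1] < 0
print(f"{negatives}/{trials} fits returned a negative value for nuclide B")
```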

There are different ways to tackle this issue. We at Medusa have implemented and tested a method called "non-negative least squares (NNLSQ)". This is a bounded version of the general least squares algorithm that prevents the concentrations from becoming negative. The algorithm basically uses an iterative strategy to solve the fit: first a normal LSQ fit is done, as usual. Then, if one of the concentrations turns out negative, that concentration is set to zero and a new fit is done using the remaining concentrations as variables. A drawback of the method is that it does not fulfil Poisson statistics, as the distribution of concentration values has a cut-off at zero. A sketch of this approach, using a standard NNLS solver, is given below.
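As an illustration of the non-negative approach, the sketch below repeats a toy fit with SciPy's `nnls` routine (the Lawson-Hanson non-negative least-squares solver) next to an ordinary LSQ fit. This only demonstrates the general NNLSQ idea; it is not the implementation used inside Gamman, and the response curves are again synthetic.

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of the non-negative alternative: the same kind of synthetic fit as
# above, solved once with ordinary least squares and once with SciPy's nnls
# (Lawson-Hanson non-negative least squares).
rng = np.random.default_rng(2)
channels = np.arange(256)
r1 = np.exp(-0.5 * ((channels - 80) / 12.0) ** 2)
r2 = np.exp(-0.5 * ((channels - 100) / 12.0) ** 2)
R = np.column_stack([r1, r2])

spectrum = rng.poisson(R @ np.array([2.0, 0.2])).astype(float)

unbounded, *_ = np.linalg.lstsq(R, spectrum, rcond=None)
bounded, _ = nnls(R, spectrum)
print("ordinary LSQ :", unbounded.round(3))   # may contain negative values
print("non-neg LSQ  :", bounded.round(3))     # clipped at zero by construction
```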

NOTE: Non-negative LSQ will of course not tackle situations where the calibration is not suited for the measurement (situations 2 and 3).
