This method involves a comparison of the obtained
nutrient value to the critical value on a nutrient-by-nutrient basis. It allows
interpretation of plant nutrient status using the following ratings:
deficiency, sufficiency, luxury consumption and excess (Bates, 1971). Critical
values (CVA) and sufficiency ranges (SRA) are the most widely used univariate
approaches for diagnosing the nutritional status of a given crop (Camacho et al., 2012; Serra et al., 2012; Beaufils et al.,
1973; Walworth and Sumner, 1987). The Deviation from Optimum Percentage (DOP) was
also proposed by Montañés et al. (1993) as a univariate approach for
diagnosing the nutritional status of a given crop.

Critical Values

Critical values have been
defined as the concentration at which there is a 5–10% yield reduction. Critical
values are of limited value for practical interpretation: they are best
suited to diagnosing severe deficiencies and have little application in
identifying hidden hunger. Symptoms are generally evident when nutrient
concentrations fall below the critical value. Critical values play an
important role in establishing the lower limits of sufficiency ranges. A nutrient
is said to be deficient if its concentration is lower than the critical value and sufficient
when greater than the critical value. If the analyzed value equals the critical value, it
is said to be optimum (Rao et al., 1990). Thus, a nutrient concentration
far below or above reference values is associated with decreasing crop growth,
yield, and quality (Mourão-Filho, 2005).

Sufficiency Ranges

A sufficiency range
consists of the optimum range of nutrient concentrations used to establish the
nutritional status of a given crop (Serra et
al., 2013). Sufficiency-range interpretation offers significant advantages
over the use of critical values. First, hidden hunger in the transitional zone
can be identified, since the beginning of the sufficiency range lies clearly above
the critical value. Sufficiency ranges also have upper limits, which give
some indication of the concentration at which the element may be in excess.

Deviation from Optimum Percentage (DOP)

The DOP index (Montañés et al., 1993) is defined
as the percentage deviation of the concentration of an element with respect to
the optimum content taken as the reference value. The DOP index is calculated as:

DOP = [(C − Cref) / Cref] × 100

where C is the concentration of the given nutrient in the sample and
Cref is the optimal nutrient concentration taken as the reference value.
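Under this definition, the index might be computed as follows (a minimal sketch; the measured value of 2.4% and the 3.0% reference are illustrative assumptions, not values from the source):

```python
def dop(c, c_ref):
    """Deviation from Optimum Percentage: percent deviation of the
    measured concentration c from the optimal reference c_ref."""
    return (c - c_ref) / c_ref * 100.0

# Illustrative example: leaf N measured at 2.4% dry matter against an
# assumed optimum of 3.0% gives a DOP of -20, i.e. undersupplied.
print(dop(2.4, 3.0))
```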

A DOP index can be
positive, zero, or negative: negative DOP values indicate that a nutrient is
undersupplied, while positive values indicate that it is oversupplied. The sum
of the absolute values of the DOP indexes (ΣDOP) gives an indication of the
overall sufficiency, deficiency or excess of the nutrients in question. If the sample
is near an adequate nutritional status, the ΣDOP will be near zero (Montañés
et al., 1993). DOP is nevertheless not widely used, as few reliable reference
values are available (Xu et al., 2015).
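The sign convention and the ΣDOP aggregation described above can be sketched as follows (the measured and reference concentrations are hypothetical, for illustration only):

```python
def dop(c, c_ref):
    """Deviation from Optimum Percentage for one nutrient."""
    return (c - c_ref) / c_ref * 100.0

# Hypothetical foliar concentrations (% dry matter) and reference optima.
measured  = {"N": 2.4, "P": 0.30, "K": 2.2}
reference = {"N": 3.0, "P": 0.25, "K": 2.0}

# Negative index -> undersupplied; positive -> oversupplied.
indexes = {n: dop(measured[n], reference[n]) for n in measured}

# Sum of absolute DOP values; the nearer to zero, the better balanced.
sum_dop = sum(abs(v) for v in indexes.values())
print(indexes, sum_dop)
```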
Limitations of the univariate approach

Generally, univariate
methods try to evaluate isolated deficiency or excess values without measuring
the overall nutritional imbalance. Similarly, although the critical value and
sufficiency range approaches have been used to make accurate diagnoses, one of
their disadvantages is that the values vary with the concentration of other
nutrients, plant age, and variety. As such, there is often difficulty in
establishing consistent critical values and relating them to high yields.
Similarly, this approach is flawed in that critical nutrient concentrations
are not independent but can vary in magnitude as the background concentration
of other nutrients increases or decreases in crop tissue (Walworth and Sumner,
1986, 1991, 1993). The method also does not diagnose which nutrient is “most
limiting” when two or more nutrients are simultaneously deficient (Bailey et al., 1997).

This approach assesses
only the sufficiency status of a single nutrient (e.g. N) on the basis of its
abundance relative to one other nutrient (e.g. P) and makes no allowance for
potential imbalances with other nutrients (Bangroo et al., 2010). Comparisons against standard values are a
basic methodology that considers each nutrient independently, which hinders
application of the concept of nutritional balance, since only one nutrient can
be identified at a time (Jones, 1981; Beverly, 1991). Thus, the
use of such methods does not allow the ranking of nutrient limitations (Maeda et
al., 2004; Meyer, 1981). Moreover, they consider the different
nutritional mechanisms of plants but not the interactions between nutrients
(Schaller et al., 2002).

Furthermore, the
standardization of the sampling period required by the above-mentioned methods
is based on the principle that the highest nutrient requirements occur at the
flowering stage, which, in the case of annual crops, prevents the results from being
used for the benefit of the crop from which the sample was taken (Srivastava and
Singh, 2008; Nachtigall, 2004; Harger et al., 2003; Meyer, 1981). The
evolution of tissue maturation, and therefore the instability of nutritional
concentrations, is another difficulty in the interpretation and
establishment of these standards, as is the quantification of their impact on
production (Sumner, 1979; Nachtigall, 2004; Srivastava and Singh, 2008).

Finally, as these methods
do not consider environmental factors or other nutritional conditions, it must be
assumed that the reference values listed are neither unique nor universally
applicable, which explains the large number of critical values
reported in the literature for the same crop (Srivastava and Singh, 2008). These
approaches have been used for several crops, including potatoes (O’Sullivan et al., 1997) and maize and sorghum (Jones et al., 1990; Westfall et al., 1990).