The Binomial Model

The data from a binomial process consist of two discrete outcomes (binary). A test organism is either dead or alive after a given period of time. An effluent is either in compliance or it is not. In a given year, a river floods or it does not flood. The binomial probability distribution gives the probability of observing an event x times in a set of n trials (experiment). If the event is observed, the trial is said to be successful. Success in this statistical sense does not mean that the...
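
As a rough illustration of the binomial calculation described above, the short Python sketch below evaluates the probability of observing the event x times in n trials; the values n = 20, p = 0.1, and x = 3 are hypothetical, not from the text.

```python
# Minimal sketch of binomial probabilities (hypothetical n, p, x).
from scipy.stats import binom

n, p = 20, 0.1            # number of trials, probability of the event per trial
x = 3                     # number of events observed

print(binom.pmf(x, n, p))            # P(X = 3): exactly x events
print(binom.cdf(x, n, p))            # P(X <= 3)
print(1 - binom.cdf(x - 1, n, p))    # P(X >= 3)
```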

Case Solution: Mercury Data

Water specimens collected from a residential area that is served by the city water supply are indicated by subscript c; subscript p indicates specimens taken from a residential area that is served by private wells. The averages, variances, standard deviations, and standard errors are:

City (n_c = 13): ȳ_c = 0.157 μg/L, s_c² = 0.0071, s_c = 0.084, s_ȳc = 0.023
Private (n_p = 10): ȳ_p = 0.151 μg/L, s_p² = 0.0076, s_p = 0.087, s_ȳp = 0.028

The difference in the averages of the measurements is ȳ_c − ȳ_p = 0.157 − 0.151 = 0.006 μg/L. The variances s_c² and s_p² of...
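
The comparison can be reproduced from these summary statistics with a few lines of Python; a pooled-variance t interval is assumed in the sketch below, which may differ in detail from the text's own calculation.

```python
# Sketch: confidence interval for the difference of the two averages,
# assuming a pooled-variance t interval (the text's exact method may differ).
import numpy as np
from scipy import stats

n_c, ybar_c, s_c = 13, 0.157, 0.084   # city supply
n_p, ybar_p, s_p = 10, 0.151, 0.087   # private wells

diff = ybar_c - ybar_p
s2_pool = ((n_c - 1) * s_c**2 + (n_p - 1) * s_p**2) / (n_c + n_p - 2)
se_diff = np.sqrt(s2_pool * (1 / n_c + 1 / n_p))

t_crit = stats.t.ppf(0.975, df=n_c + n_p - 2)     # two-sided 95%
lower, upper = diff - t_crit * se_diff, diff + t_crit * se_diff
print(f"difference = {diff:.3f} ug/L, 95% CI = ({lower:.3f}, {upper:.3f}) ug/L")
```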

Kinds of Control Charts

What has been said so far is true for control charts of all kinds. Now we look at the Shewhart chart (1931), the cumulative sum chart (Cusum), and moving average charts. Moving averages were used for smoothing in Chapter 4. The Shewhart chart is used to detect a change in the level of a process. It does not indicate a change in the variability. A Range chart (Chapter 11) is often used in conjunction with a Shewhart or other chart that monitors process level. The quantity plotted on the Shewhart...

Exercises

Reinterpret the Pallesen example in the text after pooling the higher-order interactions to estimate the error variance according to your own judgment. 26.2 Ammonia Analysis. The data below are the percent recovery of 2 mg/L of ammonia (as NH3-N) added to wastewater final effluent and tap water. Is there any effect of pH before distillation or water type? Source: Dhaliwal, B. S., J. WPCF, 57, 1036-1039.

What about Type II Error?

So far we have mentioned only the error that is controlled by selecting α. That is the so-called type I error, which is the error of declaring an effect is real when it is in fact zero. Setting α = 0.05 controls this kind of error to a probability of 5%, when all the assumptions of the test are satisfied. Protecting only against type I error is not totally adequate, however, because a type I error probably never occurs in practice. Two treatments are never likely to be truly equal; inevitably they...

Exponentially Weighted Moving Average

In the simple moving average, recent values and long-past values are weighted equally. For example, the performance four weeks ago is reflected in an MA30 to the same degree as yesterday's, although the receiving environment may have forgotten the event of 4 weeks ago. The exponentially weighted moving average (EWMA) weights the most recent event heavily, and each event going into the past proportionately less: Z_t = λy_t + λ(1 − λ)y_{t−1} + λ(1 − λ)²y_{t−2} + …, where λ is a suitably chosen constant between 0 and 1 that determines the length of...
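
A minimal sketch of the calculation, using the equivalent recursive form Z_t = λy_t + (1 − λ)Z_{t−1}; the value λ = 0.3 and the data series are illustrative only, not values recommended by the text.

```python
# EWMA sketch using the recursive form; lam = 0.3 and the data are illustrative.
import numpy as np

def ewma(y, lam=0.3, z0=None):
    z = np.empty(len(y))
    z_prev = y[0] if z0 is None else z0      # start at the first value (or a target)
    for i, yt in enumerate(y):
        z_prev = lam * yt + (1 - lam) * z_prev
        z[i] = z_prev
    return z

y = np.array([10.1, 10.4, 9.8, 10.0, 11.2, 11.5, 11.3])
print(ewma(y))
```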

Components of Variance

KEY WORDS analysis of variance, ANOVA, components of variance, composite sample, foundry waste, nested design, sampling cost, sample size, system sand. A common problem arises when extreme variation is noted in routine measurements of a material. What are the important sources of variability? How much of the observed variability is caused by real differences in the material, how much is related to sampling, and how much is physical or chemical measurement error? A well-planned experiment and...

Case Study Calcium Carbonate Scaling in Water Mains

A small layer of calcium carbonate scale on water mains protects them from corrosion, but heavy scale reduces the hydraulic capacity. Finding the middle ground (protection without damage to pipes) is a matter of controlling the pH of the water. Two measures of the tendency to scale or corrode are the Langelier saturation index (LSI) and the Ryznar stability index (RSI). These are LSI = pH − pH_s and RSI = 2pH_s − pH, where pH is the measured value and pH_s the saturation value. pH_s is a calculated value that is a function of...
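
A small sketch of the two indices as written above; the pH and pH_s values are hypothetical, and the interpretation notes in the comments are the commonly quoted rules of thumb rather than values taken from this text.

```python
# Sketch of the two scaling/corrosion indices (hypothetical pH values).
def langelier_si(pH, pHs):
    """LSI = pH - pHs; positive values suggest a tendency to deposit CaCO3 scale."""
    return pH - pHs

def ryznar_si(pH, pHs):
    """RSI = 2*pHs - pH; commonly, values well below 7 suggest scaling, above 7 corrosion."""
    return 2 * pHs - pH

pH, pHs = 7.8, 7.3
print(langelier_si(pH, pHs), ryznar_si(pH, pHs))
```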

The Median

The median is an unbiased estimate of the mean of any symmetric distribution (e.g., the normal distribution). The median is unaffected by the magnitude of observations on the tails of the distribution. It is also unaffected by censoring so long as more than half of the observations have been quantified. The median is the middle value in a ranked data set if the number of observations is odd. If the number of observations is even, the two middle values are averaged to estimate the median. If...

Comments

The classical null hypothesis is that "the difference is zero." No scientist or engineer ever believes this hypothesis to be strictly true. There will always be a difference, at some decimal point. Why propose a hypothesis that we believe is not true? The answer is a philosophical one. We cannot prove equality, but we may collect data that show a difference so large that it is unlikely to arise from chance. The null hypothesis therefore is an artifice for letting us conclude, at some stated level...

Detecting a Rectangular Bump

To detect a uniform change that lasts for b sampling intervals, what Box and Luceno (1997) call a bump, the Cuscore is the sum of the last b deviations: Q = (y_t − T) + (y_{t−1} − T) + (y_{t−2} − T) + … + (y_{t−b+1} − T). Dividing this Q by b gives the arithmetic moving average of the last b deviations. An arithmetic moving average (AMA) chart is constructed by plotting the AMA on a Shewhart-like chart but with control lines at 2σ_b and 3σ_b, where σ_b is the standard deviation of the moving average. If the original...
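
The AMA chart can be sketched as follows; the target, σ, bump size, and bump length below are made-up values, and σ_b = σ/√b assumes independent observations.

```python
# Sketch of an arithmetic moving average (AMA) chart for detecting a bump of
# length b; sigma_b = sigma / sqrt(b) assumes independent observations.
import numpy as np

def ama_chart(y, target, sigma, b):
    dev = y - target
    ama = np.convolve(dev, np.ones(b) / b, mode="valid")   # mean of last b deviations
    sigma_b = sigma / np.sqrt(b)
    return ama, 2 * sigma_b, 3 * sigma_b                   # warning and action lines

rng = np.random.default_rng(1)
y = rng.normal(10.0, 0.5, 60)
y[30:38] += 0.8                                  # a rectangular bump lasting 8 intervals
ama, warn, act = ama_chart(y, target=10.0, sigma=0.5, b=8)
print(np.where(np.abs(ama) > act)[0])            # positions exceeding the action line
```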

Interlaboratory Comparisons

A consulting firm or industry that sends test specimens to several different laboratories needs to know that the performance of the laboratories is consistent. This can be checked by doing an interlaboratory comparison or collaborative trial. A number of test materials, covering the range of typical values, are sent to a number of laboratories, each of which submits the values it measures on these materials. Sometimes, several properties are studied simultaneously. Sometimes, two or more...

t-Test to Compare the Averages of Two Samples

Two independently distributed random variables y₁ and y₂ have, respectively, mean values η₁ and η₂ and variances σ₁² and σ₂². The usual statement of the problem is in terms of testing the null hypothesis that the difference in the means is zero, η₁ − η₂ = 0, but we prefer viewing the problem in terms of the confidence interval of the difference. The expected value of the difference between the averages of the two treatments is E(ȳ₁ − ȳ₂) = η₁ − η₂. If the data are from random samples, the variances of the averages ȳ₁ and...

Variance Components Analysis

Variance components analysis is a method for learning what fraction of the total variance in a measurement process is caused by different components (factors) that contribute random variation into the sampling and testing process. If we have n measurements, denoted by y_i, i = 1, …, n, the sample variance for the entire data set is s² = Σ(y_i − ȳ)²/(n − 1). One design that allows the variance of each of these factors to be estimated independently of the other factors is the nested (or hierarchical) design shown in Figure...
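
For the simplest two-level version of such a design (for example, several batches of material with replicate tests on each), the variance components can be estimated from a one-way analysis of variance; the sketch below uses hypothetical, balanced data rather than the data of the text.

```python
# Sketch: between-batch and within-batch (testing) variance components from a
# balanced one-way layout; the data are hypothetical.
import numpy as np

data = np.array([                  # rows = batches, columns = replicate tests
    [26.1, 25.8, 26.4],
    [24.9, 25.2, 25.0],
    [27.0, 26.6, 26.8],
    [25.5, 25.9, 25.6],
])
k, r = data.shape
grand = data.mean()

ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (r - 1))
ms_between = r * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)

var_within = ms_within                                  # measurement/testing component
var_between = max((ms_between - ms_within) / r, 0.0)    # batch-to-batch component
print(var_within, var_between)
```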

Linear Combinations of Variables

The variance of a sum or difference of independent quantities is equal to the sum of the variances. The measured quantities, which are subject to random measurement errors, are a, b, c, …. The signs do not matter. Thus, y = a − b − c − … also has σ_y² = σ_a² + σ_b² + σ_c² + …. We used this result in Chapter 2. The estimate of the mean is the average ȳ = (1/n)(y₁ + y₂ + y₃ + … + y_n). The variance of the mean is the sum of the variances of the individual values used to calculate the average: σ_ȳ² = (1/n²)(σ₁² + σ₂² + σ₃² + ...
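
The two results above are easy to check numerically; the standard deviations in the sketch below are hypothetical.

```python
# Sketch of the two variance rules stated above (hypothetical standard deviations).
import numpy as np

# variance of y = a - b - c with independent errors: the signs do not matter
sd = np.array([0.10, 0.05, 0.02])          # sigma_a, sigma_b, sigma_c
print(np.sum(sd ** 2))                     # sigma_y^2

# variance of the mean of n values, each with variance sigma^2
sigma, n = 0.10, 12
var_mean = sigma ** 2 / n                  # (1/n^2) * (n * sigma^2)
print(var_mean, np.sqrt(var_mean))         # variance and standard error of the mean
```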

Using the Charts

Now examine the performance of a control chart for a simulated process that produced the data shown in Figure 11.2. The X̄ chart and Range chart were constructed using duplicate measurements from the first 20 observation intervals, when the process was in good control, with X̄ = 10.2 and R̄ = 0.54. The X̄ action limits are at 9.2 and 11.2. The R action limit is at 1.8. The action limits were calculated using the equations given in the previous section. As new values become available, they are plotted on...
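
The quoted action limits can be reproduced with the usual factors for subgroups of two; the constants A2 = 1.880 and D4 = 3.267 are the standard control-chart factors for n = 2 and are assumed here rather than taken from the text.

```python
# Sketch reproducing the action limits quoted above, assuming the standard
# control-chart constants for subgroups of size 2 (A2 = 1.880, D4 = 3.267).
xbar_bar, r_bar = 10.2, 0.54

A2, D4 = 1.880, 3.267
lcl_x, ucl_x = xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar   # ~9.2 and ~11.2
ucl_r = D4 * r_bar                                            # ~1.8
print(round(lcl_x, 1), round(ucl_x, 1), round(ucl_r, 1))
```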

Dunnett's Method for Multiple Comparisons with a Control

In many experiments and monitoring programs, one experimental condition (treatment, location, etc.) is a standard or a control treatment. In bioassays, there is always an unexposed group of organisms that serve as a control. In river monitoring, one location above a waste outfall may serve as a control or reference station. Now, instead of k treatments to compare, there are only k - 1. And there is a strong likelihood that the control will be different from at least one of the other treatments....

The EPA Approach to Estimating the MDL

EPA defines the MDL as the minimum concentration of a substance that can be measured and reported with 99% confidence that the analyte concentration is greater than zero and is determined from analysis of a sample in a given matrix containing the analyte. Similar definitions have been given by Glaser et al. (1981), Hunt and Wilson (1986), the American Chemical Society (1983), Kaiser and Menzes (1968), Kaiser (1970), Holland and McElroy (1986), and Porter et al. (1988). Measurements are...
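
As a sketch of the single-laboratory calculation that usually accompanies this definition, the MDL is taken as the one-sided 99% t value times the standard deviation of replicate low-level spiked samples; the seven replicate values below are hypothetical.

```python
# Sketch of the usual MDL calculation from replicate spiked samples
# (hypothetical values, ug/L); MDL = t(n-1, 0.99) * s.
import numpy as np
from scipy import stats

replicates = np.array([1.9, 2.3, 2.1, 2.6, 1.8, 2.2, 2.4])
n = len(replicates)
s = replicates.std(ddof=1)
t99 = stats.t.ppf(0.99, df=n - 1)     # 3.143 for n = 7

print(f"s = {s:.3f}, MDL = {t99 * s:.3f} ug/L")
```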

Prediction Intervals

A prediction interval contains the expected results of a future sample to be obtained from a previously sampled population or process. Based upon a past sample of measurements, we might wish to construct a prediction interval to contain, with a specified degree of confidence, (1) the concentration of a randomly selected single future unit from the sampled population, (2) the concentrations for five future specimens, or (3) the average concentration of five future units. The form of a two-sided...
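
For case (1), a single future observation, a two-sided interval takes the familiar form ȳ ± t s√(1 + 1/n); the sketch below uses hypothetical data and assumes approximate normality.

```python
# Sketch of a 95% prediction interval for one future observation (hypothetical data).
import numpy as np
from scipy import stats

y = np.array([4.2, 5.1, 4.8, 5.5, 4.4, 5.0, 4.9, 5.3])
n, ybar, s = len(y), y.mean(), y.std(ddof=1)

t = stats.t.ppf(0.975, df=n - 1)
half_width = t * s * np.sqrt(1 + 1 / n)   # for the mean of m future units, use sqrt(1/m + 1/n)
print(ybar - half_width, ybar + half_width)
```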

Error Suppression and Magnification

A nonlinear function can either suppress or magnify error in measured quantities. This is especially true of the quadratic, cubic, and exponential functions that are used to calculate areas, volumes, and reaction rates in environmental engineering work. Figure 10.1 shows that the variance in the final result depends on the variance and the level of the inputs, according to the slope of the curve in the range of interest. FIGURE 10.1 Errors in the computed...

Comparison of the Charts

Shewhart, Cusum, Moving Average, and EWMA charts (Figures 12.1 to 12.3) differ in the way they weight previous observations. The Shewhart chart gives all weight to the current observation and no weight to all previous observations. The Cusum chart gives equal weight to all observations. The moving average chart gives equal weight to the k most recent observations and zero weight to all other observations. The EWMA chart gives the most weight to the most recent observation and progressively...

Assessing Bias

Bias is the difference between the measured value and the true value. Unlike random error, the effect of systematic error (bias) cannot be reduced by making replicate measurements. Furthermore, it cannot be assessed unless the true value is known. Two laboratories each were given 14 identical specimens of standard solution that contained C_s = 2.50 μg/L of an analyte. To get a fair measure of typical measurement error, the analyst was kept blind to the fact that these specimens were not...

Theory: t-Test to Assess Agreement with a Standard

The known or specified value is defined as η₀. The true, but unknown, mean value of the tested specimens is η, which is estimated from the available data by calculating the average ȳ. We do not expect to observe that ȳ = η₀, even if η = η₀. However, if ȳ is near η₀, it can reasonably be concluded that η = η₀ and that the measured value agrees with the specified value. Therefore, some statement is needed as to how close we can reasonably expect the estimate to be. If the process is on-standard or...
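
A short sketch of this comparison with a specified value η₀, using hypothetical measurements; it shows both the t statistic and the equivalent confidence-interval view.

```python
# Sketch: does the average of the measurements agree with a specified value eta_0?
import numpy as np
from scipy import stats

eta_0 = 2.50                                   # known or specified value
y = np.array([2.47, 2.56, 2.41, 2.52, 2.60, 2.44, 2.49])   # hypothetical data

t_stat, p_value = stats.ttest_1samp(y, popmean=eta_0)

n, ybar, s = len(y), y.mean(), y.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)
ci = (ybar - t * s / np.sqrt(n), ybar + t * s / np.sqrt(n))  # does it contain eta_0?
print(t_stat, p_value, ci)
```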

Table of Contents

1 Environmental Problems and Statistics
5 Seeing the Shape of a Distribution
6 External Reference Distributions
9 Accuracy, Bias, and Precision of Measurements
10 Precision of Calculated Values
11 Laboratory Quality Assurance
12 Fundamentals of Process Control Charts
16 Comparing a Mean with a Standard
17 Paired t-Test for Assessing the Average of Differences
18 Independent t-Test for Assessing the Difference of Two Averages
19 Assessing the Difference of Proportions
20 Multiple Paired...

Special Problems

Introductory statistics courses commonly deal with linear models and assume that available data are normally distributed and independent. There are some problems in environmental engineering where these fundamental assumptions are satisfied. Often the data are not normally distributed, they are serially or spatially correlated, or nonlinear models are needed (Berthouex et al., 1981; Hunter, 1977, 1980, 1982). Some specific problems encountered in data acquisition and analysis are: Aberrant...

Limit of Detection

KEY WORDS limit of detection, measurement error, method limit of detection, percentile, variance, standard deviation. The method limit of detection or method detection limit (MDL) defines the ability of a measurement method to determine an analyte in a sample matrix, regardless of its source of origin. Processing the specimen by dilution, extraction, drying, etc. introduces variability and it is essential that the MDL include this variability. The MDL is often thought of as a chemical concept...

Confidence Interval for an Interaction

Here we insert an example that does not involve a t-test. The statistic to be estimated measures a change that occurs between two locations and over a span of time. A control area and a potentially affected area are to be monitored before and after a construction project. This is shown by Figure 23.2. The dots in the squares indicate multiple specimens collected at each monitoring site. The figure shows four replicates, but this is only for illustration; there could be more or fewer than four per...

Population and Sample

The person who collects a specimen of river water speaks of that specimen as a sample. The chemist, when given this specimen, says that he has a sample to analyze. When people ask, "How many samples shall I collect?" they usually mean, "On how many specimens collected from the population shall we make measurements?" They correctly use sample in the context of their discipline. The statistician uses it in another context with a different meaning. The sample is a group of n observations actually...

Normality, Randomness, and Independence

The three important properties on which many statistical procedures rest are normality, randomness, and independence. Of these, normality is the one that seems to worry people the most. It is not always the most important. Normality means that the error term in a measurement, e, is assumed to come from a normal probability distribution. This is the familiar symmetrical bell-shaped distribution. There is a tendency for error distributions that result from many additive component errors to be...

Environmental Problems and Statistics

There are many aspects of environmental problems: economic, political, psychological, medical, scientific, and technological. Understanding and solving such problems often involves certain quantitative aspects, in particular the acquisition and analysis of data. Treating these quantitative problems effectively involves the use of statistics. Statistics can be viewed as the prescription for making the quantitative learning process effective. When one is confronted with a new problem, a two-part...

Theory: The Paired t-Test Analysis

Define δ as the true mean of differences between random variables y₁ and y₂ that were observed as matched pairs under identical experimental conditions. δ will be zero if the means of the populations from which y₁ and y₂ are drawn are equal. The estimate of δ is the average of differences between n paired observations, d̄ = (1/n)Σdᵢ. Because of measurement error, the value of d̄ is not likely to be zero, although it will tend toward zero if δ is zero. The sample variance of the differences is s_d² = Σ(dᵢ − d̄)²/(n − 1). The standard error...
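
A minimal sketch of the paired analysis, with hypothetical inlet/outlet pairs; it computes the differences directly and also calls the equivalent library routine.

```python
# Sketch of the paired t analysis: work with the n differences d_i (hypothetical data).
import numpy as np
from scipy import stats

y1 = np.array([6.8, 7.1, 6.5, 7.3, 6.9, 7.0])   # e.g., inlet, paired by day
y2 = np.array([6.6, 7.0, 6.6, 7.0, 6.7, 6.9])   # e.g., outlet

d = y1 - y2
n, dbar, s_d = len(d), d.mean(), d.std(ddof=1)
se = s_d / np.sqrt(n)                            # standard error of the mean difference

t = stats.t.ppf(0.975, df=n - 1)
print(dbar - t * se, dbar + t * se)              # 95% CI on the mean difference
print(stats.ttest_rel(y1, y2))                   # the same test done directly
```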

Transformations to Obtain Constant Variance

When the variance changes over the range of experimental observations, the variance is said to be non-constant, or unstable. Common situations that tend to create this pattern are (1) measurements that involve making dilutions or other steps that introduce multiplicative errors, (2) using instruments that read out on a log scale which results in low values being recorded more precisely than high values, and (3) biological counts. One of the transformations given in Table 7.1 should be suitable...

Solution Dunnets Method

Rather than create a new example, we reconsider the data in Table 20.1, supposing that laboratory 2 is a reference (control) laboratory. Pooling sample variances over all five laboratories gives the estimated within-laboratory variance, s²_pool = 0.51 and s_pool = 0.71. For k − 1 = 4 treatments to be compared with the control and ν = 45 degrees of freedom, the value t(4, 45, 0.05/2) = 2.55 is found in Table 20.4. The 95%...

Assessing the Difference Between Two Proportions

The binomial distribution expresses the number of occurrences of an event x in n trials, where p is the probability of occurrence in a single trial. Usually the population probability p in a binomial process is unknown, so it is often more useful to examine the proportion of occurrences rather than their absolute number, x. Contrary to our guidelines on notation (Chapter 2), the population parameter p is not denoted with a Greek letter symbol. A hat (^) is used to distinguish the population...
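
A sketch of the usual large-sample (normal approximation) comparison of two proportions; the counts below are hypothetical.

```python
# Sketch of the normal-approximation z test for two proportions (hypothetical counts).
import numpy as np
from scipy import stats

x1, n1 = 12, 60      # e.g., organisms responding in effluent
x2, n2 = 4, 60       # organisms responding in the control
p1, p2 = x1 / n1, x2 / n2

p_pool = (x1 + x2) / (n1 + n2)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - stats.norm.cdf(abs(z)))       # two-sided
print(p1 - p2, z, p_value)
```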

Case Study to Emphasize the Benefits of a Paired Design

A once-through cooling system at a power plant is suspected of reducing the population of certain aquatic organisms. The copepod population density (organisms per cubic meter) was measured at the inlet and outlet of the cooling system on 17 different days (Simpson and Dudaitis, 1981). On each sampling day, water specimens were collected within a short time interval, first at the inlet and then at the outlet. The sampling plan represents a thoughtful effort to block out the effect of day-to-day...

Using a Reference Distribution to Compare Two Mean Values

Let the situation in the previous example change to the following. An experiment to evaluate the effect of an industrial discharge into a treatment process consists of making 10 observations consecutively before any addition and 10 observations afterward. We assume that the experiment is not affected by any transients between the two operating conditions. The average of 10 consecutive pre-discharge samples was 6.80, and the average of the 10 consecutive post-discharge samples was 6.86. Does the...

Case Study

Biological assays are a means of determining the toxicity of an effluent. There are many ways such tests might be organized: species of test organism, number of test organisms, how many dilutions of effluent to test, specification of response, physical conditions, etc. Most of these are biological issues. Here we consider some statistical issues in a simple bioassay. Organisms will be put into (1) an aquarium containing effluent or (2) a control aquarium containing clean water. Equal numbers of...

Specialized Control Charts

KEY WORDS AR model, autocorrelation, bump disturbance, control chart, Cusum, Cuscore, cyclic variation, discrepancy vector, drift, EWMA, IMA model, linear model, moving average, process monitoring, random variation, rate of increase, serial correlation, Shewhart chart, sine wave disturbance, slope, spike, weighted average, white noise. Charts are used often for process monitoring and sometimes for process control. The charts used for these different objectives take different forms. This chapter...

Assessing the Difference of Proportions

KEY WORDS bioassay, binomial distribution, binomial model, censored data, effluent testing, normal distribution, normal approximation, percentages, proportions, ratio, toxicity, t-test. Ratios and proportions arise in biological, epidemiological, and public health studies. We may want to study the proportion of people infected at a given dose of virus, the proportion of rats showing tumors after exposure to a carcinogen, the incidence rate of leukemia near a contaminated well, or the proportion...

Case Study Solution

Figure 24.1 is a dot diagram showing the location and spread of the data from each laboratory. It appears that the variability in the results is about the same in each lab, but laboratories 4 and 5 may be giving low readings. The data are replotted in Figure 24.2 as deviations about their respective means. An analysis of variance will tell us if the means of these laboratories are statistically different. FIGURE 24.1 Dot plots comparing the results...

Preface to 1st Edition

When one is confronted with a new problem that involves the collection and analysis of data, two crucial questions are: How will using statistics help solve this problem? And, which techniques should be used? This book is intended to help environmental engineers answer these questions in order to better understand and design systems for environmental protection. The book is not about the environmental systems, except incidentally. It is about how to extract information from data and how...

One-Factor-at-a-Time (OFAT) Experiments

Most experimental problems investigate two or more factors (independent variables). The most inefficient approach to experimental design is, "Let's just vary one factor at a time so we don't get confused." If this approach does find the best operating level for all factors, it will require more work than experimental designs that simultaneously vary two or more factors at once. These are some advantages of a good multifactor experimental design compared to a one-factor-at-a-time (OFAT) design: It...

Sample Size for Assessing the Equivalence of Two Means

The previous sections dealt with selecting a sample size that is large enough to detect a difference between two processes. In some cases we wish to establish that two processes are not different, or at least are close enough to be considered equivalent. Showing a difference and showing equivalence are not the same problem. One statistical definition of equivalence is the classical null hypothesis H₀: η₁ − η₂ = 0 versus the alternate hypothesis H₁: η₁ − η₂ ≠ 0. If we use this problem formulation to...

Multiplicative Expressions

The propagation of error is different when variables are multiplied or divided. Variability may be magnified or suppressed. Suppose that y = ab. The variance of y is (σ_y/y)² = (σ_a/a)² + (σ_b/b)². Likewise, if y = a/b, the variance is (σ_y/y)² = (σ_a/a)² + (σ_b/b)². Notice that each term is the square of the relative standard deviation (RSD) of the variables. The RSDs are σ_y/y, σ_a/a, and σ_b/b. These results can be generalized to any combination of multiplication and division. For y = kab/(cd), where a, b, c, and d are measured and k is a constant, there is again a...
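
The relative-variance rule above is easy to apply directly; the sketch below does so for y = k·a·b/c, with hypothetical values and measurement standard deviations.

```python
# Sketch of the relative standard deviation (RSD) rule for products and quotients.
import numpy as np

def rsd(value, sd):
    return sd / value

a, sa = 20.0, 0.4        # hypothetical measured values and their standard deviations
b, sb = 5.0, 0.1
c, sc = 2.0, 0.05
k = 3.0                  # constant

y = k * a * b / c
rsd_y = np.sqrt(rsd(a, sa) ** 2 + rsd(b, sb) ** 2 + rsd(c, sc) ** 2)
print(y, rsd_y, rsd_y * y)   # value of y, its relative SD, and its absolute SD
```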

Regression on Rankits

It is possible to replace the probabilities with rankits (also called normal order scores or order statistics) and then to use regression to fit a line to the probability plot (Gilliom and Helsel, 1986; Hashimoto and Trussell, 1983; Travis and Land, 1990). This is equivalent to rescaling the graph in terms of standard deviations instead of probabilities. If the data are normally distributed, or have been transformed to make them normal, the probabilities (p) are converted to rankits (normal order...
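
A sketch of the rankit regression; Blom's plotting positions (i − 3/8)/(n + 1/4) are used here as one common choice for the normal order scores, and the data series is hypothetical.

```python
# Sketch: fit a straight line to data plotted against rankits (normal order scores).
import numpy as np
from scipy import stats

y = np.sort(np.array([3.1, 4.7, 2.8, 5.2, 3.9, 4.1, 3.5, 4.9, 3.3, 4.4]))
n = len(y)
i = np.arange(1, n + 1)
p = (i - 0.375) / (n + 0.25)        # Blom plotting positions (one common choice)
rankits = stats.norm.ppf(p)

fit = stats.linregress(rankits, y)
print(fit.intercept, fit.slope)     # roughly the mean and the standard deviation
print(fit.rvalue)                   # a measure of straightness of the plot
```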

Method Detection Limit General Concepts

The method detection limit (MDL) is much more a statistical than a chemical concept. Without a precise statistical definition, one cannot determine a scientifically defensible value for the limit of detection, expect different laboratories to be consistent in how they determine the limit of detection, or be scientifically honest about declaring that a substance has (or has not) been detected. Beyond the statistical definition there must be a clear set of operational rules for how this...

Cohen's Maximum Likelihood Estimator Method

There are several methods to estimate the mean of a sample of censored data. Comparative studies show that none is always superior, so we have chosen to present Cohen's maximum likelihood method (Cohen, 1959, 1961; Gilliom and Helsel, 1986; Haas and Scheff, 1990). It is easy to compute for samples from a normally distributed parent population or from a distribution that can be made normal by a logarithmic transformation. A sample of n observations has measured values of the variable only at y >...

Stratified Sampling

Figure 23.4 shows three ways that sampling might be arranged in an area. Random sampling and systematic sampling do not take account of any special features of the site, such as different soil types or different levels of contamination. Stratified sampling is used when the study area exists in two or more distinct strata, classes, or conditions (Gilbert, 1987; Mendenhall et al., 1971). Often, each class or stratum has a different inherent variability. In Figure 23.4, samples are proportionally...

Fractional Factorial Experimental Designs

KEY WORDS alias structure, confounding, defining relation, dissolved oxygen, factorial design, fractional factorial design, half-fraction, interaction, main effect, reference distribution, replication, ruggedness testing, t distribution, variance. Two-level factorial experimental designs are very efficient but the number of runs grows exponentially as the number of factors increases. Usually your budget cannot support 128 or 256 runs. Even if it could, you would not want to commit your entire...

Make the Original Data Record a Plot

Because the best way to display data is in a plot, it makes little sense to make the primary data record a table of values. Instead, plot the data directly on a digidot plot, which is Hunter's (1988) innovative combination of a time-sequence plot with a stem-and-leaf plot (Tukey, 1977) and is extremely useful for a modest-sized collection of data. The graph is illustrated in Figure 3.1 for a time series of 36 hourly observations (time, in hours, is measured from left to right). FIGURE 3.1...

Case Study: Compaction of Fly Ash

There was a proposal to use pozzolanic fly ash from a large coal-fired electric generating plant to build impermeable liners for storage lagoons and landfills. Pozzolanic fly ash reacts with water and sets into a rock-like material. With proper compaction this material can be made very impermeable. A typical criterion is that the liner must have a permeability of no more than 10⁻⁷ cm/sec. This is easily achieved using small quantities of fly ash in the laboratory, but in the field there are...

Random and Systematic Errors

The titration example oversimplifies the accumulation of random errors in titrations. It is worth a more complete examination in order to clarify what is meant by multiple sources of variation and additive errors. Making a volumetric titration, as one does to measure alkalinity, involves a number of steps: 1. Making up a standard solution of one of the reactants. This involves (a) weighing some solid material, (b) transferring the solid material to a standard volumetric flask, (c) weighing the...

The Box Cox Power Transformations

A power transformation model developed by Box and Cox (1964) can, so far as possible, satisfy the conditions of normality and constant variance simultaneously. The method is applicable for almost any kind of statistical model and any kind of transformation. The transformed value of the original variable y is y^(λ) = (y^λ − 1)/(λ·y_g^(λ−1)) for λ ≠ 0, and y^(λ) = y_g·ln(y) for λ = 0, where y_g is the geometric mean of the original data series, and λ expresses the power of the transformation. The geometric mean is obtained by averaging ln(y) and taking the exponential...
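
A small sketch of the scaled transformation written above; λ = 0.5 and the data values are illustrative only.

```python
# Sketch of the Box-Cox transformation scaled by the geometric mean.
import numpy as np

def boxcox_scaled(y, lam):
    y = np.asarray(y, dtype=float)
    y_g = np.exp(np.mean(np.log(y)))             # geometric mean of the series
    if lam == 0:
        return y_g * np.log(y)
    return (y ** lam - 1) / (lam * y_g ** (lam - 1))

y = [3.2, 7.5, 1.8, 12.4, 5.1, 2.9]
print(boxcox_scaled(y, 0.5))
print(boxcox_scaled(y, 0.0))
```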

Transformations for Linearization

Transformations are sometimes used to obtain a straight-line relationship between two variables. This may involve, for example, using reciprocals, ratios, or logarithms. The left-hand panel of Figure 7.1 shows the exponential growth of bacteria. Notice that the variance (spread) of the counts increases as the population density increases. The right-hand panel shows that the data can be described by a straight line when plotted on a log scale. Plotting on a log scale is equivalent to making a...

References

Aitchison, J. (1955). "On the Distribution of a Positive Random Variable Having a Discrete Probability Mass at the Origin," J. Am. Stat. Assoc., 50, 901-908.
Aitchison, J. and J. A. Brown (1969). The Lognormal Distribution, Cambridge, England: Cambridge University Press.
Berthouex, P. M. and L. C. Brown (1994). Statistics for Environmental Engineers, Boca Raton, FL: Lewis Publishers.
Blom, G. (1958). Statistical Estimates and Transformed Beta Variables, New York: John Wiley.
Cohen, A. C., Jr....

Probability Plots

A probability plot is not needed to interpret the data in Table 5.1 because the time series plot and dot diagrams expose the important characteristics of the data. It is instructive, nevertheless, to use these data to illustrate how a probability plot is constructed, how its shape is related to the shape of the frequency distribution, and how it could be misused to estimate population characteristics. The probability plot, or cumulative frequency distribution, shown in Figure 5.4 was...

Constructing an External Reference Distribution

The first 130 observations in Figure 6.1 show the natural background pH in a stream. Table 6.1 lists the data. Suppose that a new effluent has been discharged to the stream and someone suggests it is depressing the stream pH. A survey to check this has provided ten additional consecutive measurements: 6.66, 6.63, 6.82, 6.84, 6.70, 6.74, 6.76, 6.81, 6.77, and 6.67. Their average is 6.74. We wish to judge whether this group of observations differs from past observations. These ten values are...
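
The reference-distribution comparison can be sketched as below. The 130 historical pH values of Table 6.1 are not reproduced here, so a made-up series stands in for them; with the real data, the question is simply what fraction of past 10-observation averages were as low as 6.74.

```python
# Sketch of an external reference distribution of 10-observation averages.
# The series below is a placeholder for the Table 6.1 data, which are not shown here.
import numpy as np

rng = np.random.default_rng(0)
past_ph = rng.normal(6.80, 0.08, 130)             # stand-in for the 130 historical values

ref_averages = np.convolve(past_ph, np.ones(10) / 10, mode="valid")

new_average = 6.74                                # average of the ten survey values
frac = np.mean(ref_averages <= new_average)
print(f"{frac:.2%} of past 10-observation averages were this low or lower")
```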

Parametric Estimates of Quantiles

If we know or are willing to assume the population distribution, we can use a parametric method. Parametric quantile (percentile) estimation will be discussed initially in terms of the normal distribution. The same methods can be used on nonnormally distributed data after transformation to make them approximately normal. This is convenient because the properties of the normal distribution are known and accessible in tables. FIGURE 8.1 Correspondence of percentiles on the lognormal and normal...

Significance Tests

In Example 2.9 we knew that the nitrate population mean was truly 8.0 mg/L, and asked, "How likely are we to get a sample mean as small as ȳ = 7.51 mg/L from the analysis of 27 specimens?" If this result is highly unlikely, we might decide that the sample did not represent the population, probably because the measurement process was biased to yield concentrations below the true value. Or, we might decide that the result, although unlikely, should be accepted as occurring due to chance rather than...