Design and Analysis of Experiments



English Pages 312 Year 1979



Design and Analysis of Experiments

M. N. DAS Central Water Commission, New Delhi

N. C. GIRI University of Montreal, Quebec, Canada

WILEY EASTERN LIMITED
New Delhi · Bangalore · Bombay · Calcutta

Copyright © 1979, Wiley Eastern Limited

This book or any part thereof may not be reproduced in any form without the written permission of the publisher

This book is not to be sold outside the country to which it is consigned by Wiley Eastern Limited

10 9 8 7 6 5 4 3 2

ISBN 0 85226 158 6

Published by Mohinder Singh Sejwal for Wiley Eastern Limited, 4835/24 Ansari Road, Daryaganj, New Delhi 110002, and printed by Mohan Makhijani at Rekha Printers Private Limited, A 102/1 Okhla Industrial Estate, Phase A, New Delhi 110020. Printed in India.

Preface

The material presented in the book is the product of the experience gained by the authors while offering courses on design and analysis of experiments to graduate and post-graduate students and applied workers in the Institute of Agricultural Research Statistics, New Delhi, and the Department of Mathematics, University of Montreal, Canada. The book has been written to suit academic persons, including teachers and students, and the applied workers who have to apply statistical principles for the collection and interpretation of experimental data. Almost all the commonly adopted designs have been included in the book. Most of the advanced techniques and methodology available for the designing and analysis of experiments are included here, together with discussion of elementary concepts and preliminary treatments of the different topics. A number of new concepts and alternative methods of treatment of several topics have been presented. Simple and convenient methods of construction of confounded symmetrical and asymmetrical factorial designs, alternative methods of analysis of missing observations, orthogonal latin squares, designs for bio-assays and weighing designs are some of the examples. The book contains in all nine chapters, including a chapter on basic statistical methods and concepts. A chapter on designs for bio-assays required for pharmaceutical investigations and on response surface designs has also been added. Complicated mathematical treatment has been avoided while presenting the results in the different chapters. No emphasis has been given to combinatorics while presenting the methods of construction of various designs. More emphasis has been laid on common sense, experience and intuition while introducing the topics, providing proofs of the main results and discussing extension and application of the results.
The book can serve as a textbook for both graduate and post-graduate students, in addition to being a reference book for applied workers, research workers and students in statistics. The authors acknowledge gratefully the facilities provided by the University of Montreal for writing the book.

M. N. Das
N. C. Giri

May 1979


Contents

CHAPTER 1

Concepts of Experiments: Design and Analysis
1.1 Design of Experiments and Collection of Data 1
1.2 Experiments and their Designs 2
1.3 Methodology for Making Inferences 3
1.4 Three Principles of Designs of Experiments 7
1.5 Experimental Error and Interpretation of Data 8
1.6 Contrasts and Analysis of Variance 10
1.7 Models and Analysis of Variance 16
1.8 Two-Way Classified Data 26
1.9 Assumptions of Analysis of Variance 43

CHAPTER 2

Complete Block Designs
2.1 Completely Randomized Designs 48
2.2 Randomized Block Designs 52
2.3 Latin Square Designs 56
2.4 Missing Observations in Randomized Block Designs 64
2.5 An Illustration 74

CHAPTER 3

Factorial Experiments
3.1 Characterization of Experiments 78
3.2 Factorial Experiments 79
3.3 Factorial Experiments with Factors at Two Levels 80
3.4 Finite Fields and Designs of Experiments 83
3.5 Grouping for Interaction Contrasts 85
3.6 Confounding 86
3.7 Confounding in More than Two Blocks 89
3.8 Experiments with Factors at Three Levels Each 91
3.9 A General Method of Construction of Confounded Factorials 97
3.10 Maximum Number of Factors to Save Interactions up to a Given Order for a Given Block Size 103
3.11 Analysis of Factorial Experiments 105
3.12 Fractional Factorials 107


CHAPTER 4

Asymmetrical Factorial and Split-Plot Designs
4.1 Asymmetrical Factorial Designs 120
4.2 Confounded Asymmetrical Factorials 120
4.3 Construction of Balanced Confounded Asymmetrical Factorials 122
4.4 Construction of Confounded Asymmetrical Factorial v×2² in 2v-plot Blocks 131
4.5 Analysis of Balanced Confounded Asymmetrical Factorials 133
4.6 Split-Plot Designs 143
4.7 Analysis 144

CHAPTER 5

Incomplete Block Designs
5.1 Varietal Trials 152
5.2 Incomplete Block Designs 153
5.3 Balanced Incomplete Block Designs 156
5.4 Construction of B.I.B. Designs 158
5.5 Analysis 166
5.6 Analysis with Recovery of Inter-Block Information 168
5.7 Youden Squares 172
5.8 Lattice Designs 172
5.9 Partially Balanced Incomplete Block Designs 176
5.10 Analysis of P.B.I.B. Designs 183
5.11 Analysis with Recovery of Inter-Block Information 186
5.12 Optimality of Designs 198

CHAPTER 6

Orthogonal Latin Squares
6.1 Orthogonal Latin Squares 205
6.2 Maximum Number of Orthogonal Latin Squares 205
6.3 Construction of Orthogonal Latin Squares 206
6.4 Construction of Orthogonal Latin Squares by Using Pairwise Balanced Designs 209

CHAPTER 7

Designs for Bio-assays and Response Surfaces
7.1 Bio-assays 217
7.2 Direct Assays 218
7.3 Indirect Bio-assays 220
7.4 Parallel Line Assays 224


7.5 Incomplete Block Designs for Bio-assays 229
7.6 Slope Ratio Assays 236
7.7 Response Surface Designs 244

CHAPTER 8

Analysis of Covariance and Transformation
8.1 Analysis of Covariance 263
8.2 Analysis of Covariance for Randomized Block Designs 263
8.3 Analysis of Covariance of Completely Randomized and Latin Square Designs 267
8.4 Analysis of Covariance of Non-Orthogonal Data and Designs in Two-Way Classification 267
8.5 Analysis of Covariance with Two Ancillary Variates 272
8.6 Covariance and Analysis of Experiments with Missing Observations 274
8.7 Transformations 275

CHAPTER 9

Weighing Designs
9.1 Introduction 280
9.2 Definition 280
9.3 Method of Estimation 281
9.4 Incomplete Block Designs as Weighing Designs 282
9.5 Two Pan Weighing Designs from B.I.B. Designs 286
9.6 Two Associate P.B.I.B. Designs as One Pan Weighing Designs 289
9.7 Weighing Designs from Truncated Incomplete B.I.B. Designs 290
9.8 Efficiency 290


CHAPTER 1

Concepts of Experiments: Design and Analysis

1.1 Design of Experiments and Collection of Data

Experimentation and making inferences are twin essential features of general scientific methodology. Statistics as a scientific discipline is mainly designed to achieve these objectives. It is generally concerned with problems of inductive inference in relation to stochastic models describing random phenomena. When faced with the problem of studying a random phenomenon, the scientist, in general, may not have complete knowledge of the true variant of the phenomenon under study. A statistical problem arises when he is interested in the specific behaviour of the unknown variant of the phenomenon. After a statistical problem has been set up, the next step is to perform experiments for collecting information on the basis of which inferences can be made in the best possible manner.

The methodology for making inferences has three main aspects. First, it derives methods for drawing inference from observations when these are not exact but subject to variation. As such, the inferences are not exact but probabilistic in nature. Second, it specifies methods for the collection of data appropriately, so that the assumptions for the application of appropriate statistical methods to them are satisfied. Lastly, techniques for the proper interpretation of results are devised. There has been a great advance in the derivation of statistical methods applicable to various problems under different assumptions. A good coverage of these methods is available in Fisher (1953), Giri (1976) and Scheffé (1959). A good deal of work has also been done in the field of data collection and interpretation techniques. The topic of the present book, viz. design and analysis of experiments, falls in the sphere of data collection and interpretation techniques. The other main topic in this regard is the theory of sample surveys. Though the theories of sample surveys and design of experiments are both concerned with data collection techniques, they serve different purposes.
The theory of sample surveys has the objective of deriving methods for the collection of samples of observations from a population which exists in its own way, such that the sample can adequately represent and accurately interpret the population. In the case of experimental data no such population exists in its own way. What exists is a problem, and the data have, so to say, to be manufactured by proper experimentation so that an answer to the problem can be inferred from the data.

Creation of controlled conditions is the main characteristic feature of experimentation. It is the design of an experiment which specifies the nature of control over the operations in the experiments. Proper designing is necessary also to ensure that the assumptions required for appropriate interpretation of the data are satisfied. Designing is necessary, moreover, to increase the accuracy and sensitivity of the results. Data obtained without regard to the statistical principles cannot lead to valid inferences. They can, no doubt, be analyzed, but the results obtained from them need not hold true subsequently in situations other than those in which they were collected. For example, if two varieties of a crop are to be compared with regard to their yield performance by conducting an experiment, and a particular variety which the experimenter, for some personal reasons, say, wants to favour is allotted to the better plots, then the statistical principles are violated in the experiment and the data collected from the experiment cannot be validly interpreted. Their interpretation may show the favoured variety more promising. But the same result may not be obtained in future when the variety does not receive a favoured treatment. It is, therefore, necessary that the data are collected by adopting proper designs so that they can be validly interpreted. For further reading on this topic Fisher (1953), Kempthorne (1952) and Federer (1955) may be consulted.

1.2 Experiments and their Designs

As already stated, an experiment starts with a problem, an answer to which is obtained from interpretation of a set of observations collected suitably. For this purpose a set of experimental units and adequate experimental material are required. Equal-sized plots of land, a single plant or a group of plants, etc. are used as experimental units for agricultural experiments. For animal husbandry experiments animals, animal organs, etc. form the experimental units. Again, for industrial experiments machines, ovens and other similar objects form the experimental units. The problems are usually in the form of comparisons among a set of treatments in respect of some of their effects which are produced when they are applied to the experimental units. The general name 'treatment' will be used throughout the book to denote experimental material among which comparison is desired by utilizing the effects which are produced when the experimental material is applied to the experimental units. For example, in agricultural experiments different varieties of a crop, different fertilizer doses, different levels of irrigation, or different combinations of levels of two or more of the above factors, viz. variety, irrigation, nitrogen fertilizer, date of sowing, etc., may constitute the treatments.

Given a set of treatments which can provide information regarding the objective of an experiment, a design for the experiment defines the size and number of the experimental units, the manner in which the treatments are allotted to the units, and also the appropriate type and grouping of the experimental units. These requirements of a design ensure validity, interpretability and accuracy of the results obtainable from an analysis of the observations. These purposes are served by the principles of (i) randomization, which defines the manner of allocation of the treatments to the experimental units, (ii) replication, which specifies the number of units to be provided for each of the treatments, and (iii) error control, which increases the precision by choosing an appropriate type of experimental units and also their grouping.

1.3 Methodology for Making Inferences

The basis for making statistical inference is one or more samples of observations on one or more variables. These observations are required to satisfy certain assumptions. Some of the assumptions are that the observations should belong to a population having some specified probability distribution, and that they should be independent. The distribution which is usually assumed is the normal distribution, because most of the variables encountered in nature are found to have this distribution. Such distributions involve certain unknown quantities called parameters, which differ from variable to variable. The main purposes of statistical inference are (i) to estimate such parameters by using the observations, and (ii) to compare these parameters among themselves, again by using the observations and their estimates. The methodology dealing with the first part of the inference has developed into what is known as the theory of estimation, and that for the second part into methods of testing of hypotheses. The maximum likelihood method of estimation and the least squares method of estimation are the two more important methods of estimation. The least squares method of estimation gives the same estimates as the maximum likelihood method under the normality assumption. This method has been discussed in Section 1.7. The details about these methods are available in the publications on statistical methodology cited in Section 1.1.

For testing of hypotheses, first a certain hypothesis involving the parameters is made. The hypotheses are of a comparative nature and depend on the type of inference problem. For example, let there be two varieties of a crop denoted by v1 and v2, there being n1 observations of yield on v1 from plots each of a given size and n2 observations on v2 from similar plots. Further, let the observations of the ith variety (i = 1, 2) have the normal distribution

f(y) = {1/(σi √(2π))} exp{−(y − μi)²/(2σi²)},

where y denotes the variable of yield of the crop, μi is the mean yield and σi² is the variance of the ith variety.
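As a quick numerical check on this density (an illustrative sketch added here, not part of the original text; the mean and standard deviation are made-up values), the snippet below evaluates the normal density and integrates it over a wide range by the trapezoidal rule; the total probability comes out very close to 1.

```python
import math

def normal_pdf(y, mu, sigma):
    """Density of a normal variable with mean mu and standard deviation sigma."""
    z = (y - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Trapezoidal integration of the density over mu ± 8 sigma.
mu, sigma = 1500.0, 110.0          # illustrative yield mean and s.d. (lb/acre)
n = 100_000
lo, hi = mu - 8 * sigma, mu + 8 * sigma
h = (hi - lo) / n
total = 0.5 * (normal_pdf(lo, mu, sigma) + normal_pdf(hi, mu, sigma))
total += sum(normal_pdf(lo + i * h, mu, sigma) for i in range(1, n))
total *= h
print(round(total, 4))  # → 1.0
```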


By using the two samples of observations, μ1, μ2, σ1² and σ2² can be estimated by adopting an appropriate method of estimation. In regard to testing of hypotheses we have two problems of comparison: one being the comparison between σ1 and σ2, and the other between μ1 and μ2. For the purpose of comparison a hypothesis of the type σ1 = σ2 or μ1 = μ2 is made. This type of hypothesis, which specifies that the difference between any pair of a number of similar parameters is zero, is known as the null hypothesis. Next, a statistic, that is, a function of the observations, is defined so as to suit the objective of the problem. This is called the test statistic, as it is used to test the tenability or otherwise of the hypothesis.

Let us imagine a very large number of independent samples of observations from a population under investigation. From each such sample a value of a statistic can be obtained. Thus, there will be as many values of the statistic as the number of samples. We can now think of a probability distribution of these values of the statistic. Such a distribution is called the sampling distribution of the statistic. For testing of hypotheses, first the sampling distribution of a test statistic is theoretically derived, assuming that the hypothesis regarding the parameters of the original population of the observations is correct. From the sampling distribution it is possible to evaluate the probability of occurrence of values of the test statistic lying in given ranges. For example, one can evaluate the probability of a test statistic T assuming all values which are greater than a given value, say, T0, or less than a value T0′, or which lie between T0 and T0′. Usually, for a specific test statistic, tables are prepared showing different values of T0 and the corresponding probability, or vice versa. Such tables are available in Fisher and Yates (1942).
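The idea of a sampling distribution can be made concrete by simulation. The sketch below (illustrative only; the population parameters and cut-off are arbitrary choices, not from the text) draws many samples from a normal population, computes the sample mean of each, and estimates the probability that the statistic exceeds a cut-off T0, the role played by the tabulated values just mentioned.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

# Population: normal with mean 50 and s.d. 10 (illustrative values).
MU, SIGMA, N_SAMPLES, SAMPLE_SIZE = 50.0, 10.0, 20_000, 25

# Build an empirical sampling distribution of the sample mean.
means = []
for _ in range(N_SAMPLES):
    sample = [random.gauss(MU, SIGMA) for _ in range(SAMPLE_SIZE)]
    means.append(statistics.fmean(sample))

# The standard error of the mean is SIGMA/sqrt(n) = 2, so with
# T0 = MU + 1.96 * 2 the tail probability should be close to 2.5 per cent.
t0 = MU + 1.96 * SIGMA / SAMPLE_SIZE ** 0.5
tail_prob = sum(m > t0 for m in means) / N_SAMPLES
print(round(tail_prob, 3))
```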
Evidently, the more T0 deviates from the central or most expected value of the statistic, the less is the probability of getting samples which provide T greater than T0. For testing of hypotheses, usually two values of T0 are important: one of these corresponds to 5 per cent probability of having samples giving T greater than T0 when the hypothesis is correct, and the other corresponds to 1 per cent probability of having samples giving T greater than T0. We may call these two values of T0 as T.95 and T.99 respectively, or in general T1−α corresponding to α per cent probability. These are called respectively the 5 per cent, 1 per cent and α per cent values of significance. We have said earlier that for testing of hypotheses the value of a suitable test statistic is calculated from the observations. If this value of the statistic is greater than its tabulated value at the 5 per cent level of significance, the probability of getting such samples as have given the value is less than 5 per cent if the hypothesis is correct. This, in other words, means that the correctness of the hypothesis is greatly doubtful. When the sampling distribution of a statistic is symmetrical, a similar conclusion is also possible if T is less than T.05, as the sample value of T is too small in such cases. Hence, in such a situation the hypothesis is rejected at the 5 per cent level of significance. This shows that we are likely to reject a correct hypothesis in 5 per cent of the samples. If, again, the calculated value of the test statistic is greater than its tabulated value at the 1 per cent level of significance, the probability of getting such samples as have given the value is less than 1 per cent if the hypothesis is correct. In this case the hypothesis is rejected at the 1 per cent level of significance. Evidently, rejection at the 1 per cent level of significance carries more definite information than rejection at the 5 per cent level, because when we reject a hypothesis at the 1 per cent level we are likely to reject a correct hypothesis in 1 per cent of the samples, while in the case of rejection at the 5 per cent level we are likely to reject a correct hypothesis in 5 per cent of the cases. Thus, in the 1 per cent case we are likely to be in less error by rejecting a hypothesis. Though we have discussed the test procedure with reference to the 5 per cent and 1 per cent levels of significance, it is not necessary that testing should be restricted to only these two levels. Tables are available for testing at other levels of significance if so required. For example, in a situation where more definite information is required, a test at the 0.1 per cent level of significance can be applied.

In relation to design and analysis of experiments, two test statistics, viz. (i) the t-test for testing the significance of the difference between two linear estimates, and (ii) the F-test or variance ratio test for testing the equality of two variances, are usually applied. Some of the hypotheses tested by these tests are discussed below.

Case 1: One sample from a normal population with mean μ. When the problem is to know if μ differs from a given or conjectured value μ0, the hypothesis μ = μ0 is made. The test statistic to be applied in this case is called the one sample t-test and is taken as

t = (ȳ − μ0)/√(s²/n),

where ȳ is the sample mean based on n observations from a normal population and s² is the error mean square given by Σ(y − ȳ)²/(n − 1), having (n − 1) degrees of freedom (d.f.). The concept of degrees of freedom has been discussed in a later section. Here, t is said to have the degrees of freedom of s², that is, n − 1. The 5 per cent or 1 per cent values of t depend on the d.f. of t and are tabulated for different values of d.f. Usually, the t-table is prepared by showing the 5 per cent level of significance as t.975. This is done in consideration of the fact that t has a symmetrical distribution and can be both positive and negative, as the mean of the t-distribution is zero. Actually, t.025 is negative and the probability of t being less than t.025 is .025. Thus, the 5 per cent significance value of t is a value, say, t0′, such that the probability of having samples giving values of t exceeding t0′ is 2.5 per cent and that of having samples giving values of t less than −t0′ is 2.5 per cent, so that the probability of t being greater than t0′ or less than −t0′ is 5 per cent. This type of test, which takes into account the probability of occurrence of extreme values of a test statistic in either direction of its mean value, is known as a two-tailed test. In such cases the alternatives to the hypothesis, viz. μ ≠ μ0, do not specify the nature of the difference, that is, whether the difference should be only positive or only negative. Hypotheses which specify the direction, like the alternative being μ > μ0, lead to one-tailed tests. In such tests the 5 per cent level of significance corresponds to t.05. Though we have discussed the test with reference to the 5 per cent level of significance, the same considerations and procedures apply to tests at other levels of significance.

Case 2: Two samples from two normal populations having means μ1 and μ2 and a common variance. Setting the problem to test if the estimates of mean from the two samples are homogeneous, the hypothesis μ1 = μ2 is made. The test statistic t is defined as

t = (ȳ1 − ȳ2)/√{s²(1/n1 + 1/n2)},

where ȳ1 and ȳ2 are the means of the two samples of sizes n1 and n2 respectively, and

s² = {Σ(y1i − ȳ1)² + Σ(y2i − ȳ2)²}/(n1 + n2 − 2),

y1i and y2i denoting observations from the two samples. Here, the d.f. of t is n1 + n2 − 2. The table of t is the same for both one sample and two sample tests. There are other situations where the t-test is applied. A full discussion of the topic has been given in Snedecor (1946), Goulden (1952) and other books on statistical methods.

Case 3: Variance ratio or F-test. Let the mean squares s1² and s2² be two estimates of a variance σ², given by

s1² = Σ(yi − ȳ)²/(n1 − 1)  and  s2² = Σ(xi − x̄)²/(n2 − 1),

obtained from two samples y1, y2, ..., yn1 and x1, x2, ..., xn2 from two normal populations having variances σ1² and σ2² respectively. In order to test if the mean squares s1² and s2² can be considered to be of equal order, the hypothesis σ1² = σ2² is made. It is tested by defining the test statistic F given by

F = s1²/s2², with (n1 − 1) and (n2 − 1) d.f.

The F-table has been so prepared that F is always greater than 1. Thus, F has to be taken as either s1²/s2² or s2²/s1², whichever is greater than one. If, however, it is intended to test if one, say s1², is greater than s2², then the ratio should be s1²/s2², and no test is required if s1² < s2². We shall encounter this type of situation mostly in this study. It is not always necessary that s1² and s2² should be calculated from two independent samples. What is necessary is that s1² and s2² should be two independent mean squares. For details of this topic the reader may consult Lehmann (1959).

1.4 Three Principles of Designs of Experiments

We have seen that randomization, replication and error control are the three main principles of designs of experiments. The roles they play in data collection and interpretation are discussed below.

Randomization

After the treatments and the experimental units are decided, the treatments are allotted to the experimental units at random to avoid any type of personal or subjective bias, which may be conscious or unconscious. This ensures the validity of the results. It helps to have an objective comparison among the treatments. It also ensures independence of the observations, which is necessary for drawing valid inference from the observations by applying appropriate statistical techniques. We shall see subsequently that, depending on the nature of the experiment and the experimental units, there are various experimental designs. Each design has its own way of randomization. We shall, therefore, discuss the procedure of random allocation separately while describing each specific design. However, for a thorough discussion on the subject the reader may see Fisher (1942), Kempthorne (1952) and Ogawa (1974).
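As a concrete sketch of random allocation (an illustrative aid, not a procedure from the book, which describes randomization per design in later chapters), the snippet below allots three treatments, each replicated twice, to six experimental units at random.

```python
import random

random.seed(7)  # fixed seed so the allocation shown is reproducible

treatments = ["t1", "t2", "t3"]
replications = 2
units = [f"P{i}" for i in range(1, 7)]  # six experimental units (plots)

# One label per replicate of each treatment, shuffled over the units.
labels = [t for t in treatments for _ in range(replications)]
random.shuffle(labels)
allocation = dict(zip(units, labels))
for unit, treatment in allocation.items():
    print(unit, treatment)
```

Every treatment still appears exactly twice; only the assignment of treatments to particular plots is left to chance.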

Replication

If a treatment is allotted to r experimental units in an experiment, it is said to be replicated r times. If in a design each of the treatments is replicated r times, the design is said to have r replications. Replication is necessary to increase the accuracy of the estimates of the treatment effects. It also provides an estimate of the error variance, which is a function of the differences among observations from experimental units under identical treatments. Though the more the number of replications the better it is, so far as precision of estimates is concerned, it cannot be increased indefinitely as it increases the cost of experimentation. Moreover, due to limited availability of experimental resources, too many replications cannot be taken. The number of replications is, therefore, decided keeping in view the permissible expenditure and the required degree of precision. The sensitivity of statistical methods for drawing inference also depends on the number of replications. Sometimes this criterion is used to decide the number of replications in specific experiments. A more detailed discussion of this topic is deferred till a discussion of experimental error is made. The principle of error control will also follow the same discussion.

1.5 Experimental Error and Interpretation of Data

After the observations are collected they are statistically analysed so as to obtain relevant information regarding the objective of the experiment. As we know, the objective is usually to make comparisons among the effects of several treatments when the observations are subject to variation. Such comparisons are made by the technique of analysis of variance. It will be seen subsequently that inference is drawn through this technique by comparing two measures of variation, one of which arises due to uncontrolled sources of variation, called the error variation, while the other includes variation due to a controlled set of treatments together with the variation due to all the uncontrolled causes contributing to the error variation.

For example, let there be six plots of land denoted by P1, P2, P3, P4, P5 and P6. The first three plots receive one treatment, say t1, and the last three another treatment, t2. Suppose, further, that plots P1 and P4 receive one level of irrigation, P2 and P5 another level, and P3 and P6 a third level. Let y1, y2, y3, y4, y5 and y6 denote the observations on a character recorded from the above six plots in that order. Then the comparison y1 − y2, which denotes the variation in the observations from the first two plots, does not contain any component of variation due to the treatments, as both of them receive the same treatment. But the comparison is not free from the effect of the other controlled factor, viz. irrigation, as P1 received one level of irrigation while P2 received another level. Hence, this comparison by itself does not contribute to error variation. But the comparison (y1 − y2) − (y4 − y5) is, evidently, free from the variability caused by both the controlled factors, viz. treatment and irrigation. Hence, such comparisons, which contain contributions due to uncontrolled factors like soil fertility and management variation which were not specified in the plots, build up the error variance.
The actual measure of error variance is a function of the squares of all such comparisons. The procedure of obtaining it has been discussed in the next section. There are, again, some other concepts of experimental error which we shall discuss at appropriate places.
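The way such comparisons isolate error variation can be checked mechanically. In the sketch below (an illustrative aid; the plot layout follows the six-plot example above, and the comparison (y1 − y2) − (y4 − y5) is one reading of that partly garbled example), each comparison is represented by its coefficients, and any treatment or irrigation labels whose effects fail to cancel are reported.

```python
# Each plot carries a treatment label and an irrigation label.
# P1..P3 get treatment t1, P4..P6 get t2; irrigation levels repeat i1, i2, i3.
plots = {
    "y1": ("t1", "i1"), "y2": ("t1", "i2"), "y3": ("t1", "i3"),
    "y4": ("t2", "i1"), "y5": ("t2", "i2"), "y6": ("t2", "i3"),
}

def uncancelled_effects(coeffs):
    """Sum the contrast coefficients falling on each treatment/irrigation label.

    Labels whose coefficients total zero contribute nothing to the comparison;
    only the labels with non-zero totals are returned."""
    totals = {}
    for plot, c in coeffs.items():
        for label in plots[plot]:
            totals[label] = totals.get(label, 0) + c
    return {k: v for k, v in totals.items() if v != 0}

# y1 - y2: the treatment cancels, but the irrigation levels do not.
print(uncancelled_effects({"y1": 1, "y2": -1}))
# (y1 - y2) - (y4 - y5): both treatment and irrigation cancel, so
# only uncontrolled variation remains.
print(uncancelled_effects({"y1": 1, "y2": -1, "y4": -1, "y5": 1}))
```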

Determination of Number of Replications

Error variance provides a measure of the precision of an experiment: the less the error variance, the more is the precision. Once a measure of error variance is available for a set of experimental units, the number of replications needed for a desired level of sensitivity can be obtained as below. Given a set of treatments, an experimenter may not be interested to know if two treatments differ in their effects by less than a certain quantity, say, d. In other words, he wants an experiment which should be able to differentiate two treatments when they differ by d or more. As discussed in the previous section, the significance of the difference between two treatments is tested by the t-test, where

t = (ȳi − ȳj)/√(2s²/r).

Here, ȳi and ȳj are the arithmetic means of two treatment effects, each based on r replications, and s² is a measure of the error variation. Given a difference d between two treatment effects such that any difference greater than d should be brought out as significant by using a design with r replications, the following equation provides a solution for r:

t0 = d/√(2s²/r),

where t0 is the critical value of the t-distribution at the desired level of significance, that is, the value of t at the 5 or 1 per cent level of significance read from the table. If s² is known or is based on a very large number of observations, made available from some pilot pre-experiment investigation, then t0 is taken as the normal variate. If s² is estimated with n degrees of freedom (d.f.), then t0 corresponds to n d.f. When the number of replications is r or more as obtained above, all differences greater than d are expected to be brought out as significant by an experiment when it is conducted on a set of experimental units which has variability of the order of s².

For example, in an experiment on a wheat crop conducted in a seed farm in Bhopal, India, to study the effect of application of nitrogen and phosphorus on yield, a randomized block design with three replications was adopted. There were 11 treatments, two of which were (i) 10 lb/acre of nitrogen and (ii) 20 lb/acre of nitrogen. The average yield figures for these two applications of the fertiliser were 1438 and 1592 lb/acre respectively, and it is required that differences of the order of 150 lb/acre should be brought out as significant. The error mean square (s²) was 12134.88. Assuming that the experimental error will be of the same order in future experiments and that t0 is of the order of 2.00, which is likely as the error d.f. is likely to be more than 30 since there are 11 treatments, we have all the information to obtain the number of replications r from the relation

t0 = d/√(2s²/r),

that is,

r = 2t0²s²/d² = (2 × 2² × 12134.88)/150² ≈ 4 (approx.).
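The arithmetic is easy to script. The sketch below (illustrative) recomputes r from the relation r = 2t0²s²/d² with the wheat figures quoted in the text.

```python
def replications_for_difference(d, s2, t0):
    """Replications needed so a true difference of d is declared significant.

    Derived from t0 = d / sqrt(2*s2/r), i.e. r = 2*t0**2*s2/d**2."""
    return 2 * t0 ** 2 * s2 / d ** 2

# Wheat example from the text: d = 150 lb/acre, s^2 = 12134.88, t0 ≈ 2.00.
r_raw = replications_for_difference(d=150.0, s2=12134.88, t0=2.00)
print(f"{r_raw:.2f} -> about {round(r_raw)} replications")
```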

Thus, an experiment with 4 replications is likely to bring out differences of this order as significant. Another criterion for determining r is to take a number of replications which ensures at least 10 d.f. for the estimate of error variance in the analysis of variance of the design concerned, since the sensitivity of the experiment will be very low, as the F-test which is used to draw inference in such experiments is very unstable below 10 d.f. The above considerations for determining the number of replications hold only for specific designs.

Error Control

The considerations in regard to the choice of number of replications ensure reduction of the standard error of the estimates of the treatment effects, because the standard error of the estimate of a treatment effect is √(s²/r). But they cannot reduce the error variance itself, though a large number of replications can ensure a more stable estimate of the error variance. It is, however, possible to devise methods for reducing the error variance. Such measures are called error control or local control. One such measure is to make the experimental units homogeneous. Another method is to form the units into several homogeneous groups, usually called blocks, allowing variation between the groups. A considerable amount of research work has been done to divide the treatments into suitable groups for allotment to groups of experimental units so that the treatment effects can be estimated more precisely. Extensive use of combinatorial mathematics has been made for the formation of such groups of treatments. These have been discussed in a later chapter. Adequate coverage of the combinatorial aspects of this topic is available in Bose (1939) and Raghavarao (1971).

1.6 Contrasts and Analysis of Variance

The main technique adopted for the analysis and interpretation of the data collected from an experiment is the analysis of variance technique, which essentially consists of partitioning the total variation in an experiment into components ascribable to the different sources of variation, viz. the controlled factors and error. The following discussion attempts to relate the technique of analysis of variance to comparisons among treatment effects, which in terms of symbols can be called contrasts.


Contrast

Let y1, y2, ..., yn denote n observations or any other quantities. The linear function C = Σ li yi, where the li's are given numbers such that Σ li = 0, is called a contrast of the yi's. Two contrasts C1 = Σ l1i yi and C2 = Σ l2i yi are said to be mutually orthogonal if Σ l1i l2i = 0. Given n quantities, the maximum number of mutually orthogonal contrasts among them is n − 1.

Proof: Suppose we have the following m mutually orthogonal contrasts:

C1 = Σ l1i yi
C2 = Σ l2i yi
. . .
Cm = Σ lmi yi

Let us now take one more contrast C = Σ li yi, where the li's are unknown but satisfy Σ li = 0. Now, C can be orthogonal to each of the above m contrasts if the following simultaneous equations in the li's have at least one non-zero solution:

Σ li = 0
Σ l1i li = 0
Σ l2i li = 0
. . .
Σ lmi li = 0

But this set of homogeneous linear equations in n unknowns can have a non-zero solution only if the total number of equations does not exceed n − 1. Thus, m can be at most n − 2, and the total number of such contrasts cannot exceed n − 1. This proves that the maximum number of mutually orthogonal contrasts among n quantities is n − 1, and that the contrasts can be written in more than one way, as there is an infinite number of solutions of the homogeneous equations. Q.E.D.

One way of writing such contrasts is to progressively introduce the observations as below:

(i) y1 − y2
(ii) y1 + y2 − 2y3
(iii) y1 + y2 + y3 − 3y4, and so on.
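This progressive construction is easy to verify mechanically; a minimal sketch in plain Python (the function name is ours) that builds the coefficient vectors for general n and checks that each sums to zero and that all pairs are orthogonal:

```python
def progressive_contrasts(n):
    """Coefficients of the n - 1 contrasts y1 - y2, y1 + y2 - 2*y3,
    y1 + y2 + y3 - 3*y4, ... among n quantities."""
    return [[1] * k + [-k] + [0] * (n - k - 1) for k in range(1, n)]

C = progressive_contrasts(5)
print(C[1])  # [1, 1, -2, 0, 0], i.e. y1 + y2 - 2*y3
# Each row is a contrast: its coefficients sum to zero ...
assert all(sum(row) == 0 for row in C)
# ... and the rows are mutually orthogonal, n - 1 = 4 of them in all.
assert all(sum(a * b for a, b in zip(C[i], C[j])) == 0
           for i in range(len(C)) for j in range(i + 1, len(C)))
```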


Another set of orthogonal contrasts for various values of n is available in the tables for Statisticians and Biometricians prepared by Fisher and Yates (1942) under the name of orthogonal polynomials.

Measures of Variation

The square of a contrast, divided by the sum of squares of its coefficients, gives a measure of variation due to the contrast. The sum of squares (s.s.) due to the (n − 1) mutually orthogonal contrasts of n observations gives the s.s. due to the n observations. This s.s., divided by the number of independent contrasts on which it is based, viz. (n − 1), gives a measure of variation of the observations and is called the mean square (m.s.). The number of independent contrasts on which an s.s. is based is called the degrees of freedom (d.f.) of the sum of squares.

We have seen in Section 1.4 that certain contrasts represented variation due to uncontrolled causes of variation while certain others did not. The s.s. due to contrasts which represent variation due only to uncontrolled causes of variation builds up the error variance. Again, the s.s. due to other contrasts which are orthogonal to the above error contrasts, and which represent comparisons among the effects of, say, a set of treatments, builds up the treatment s.s.

For illustration let us take the case discussed in Section 1.4 as reproduced in Table 1.1. The figures shown in brackets are the observations on wheat yield collected from an actual experiment conducted at an experimental station in Betul, India with three irrigation treatments, viz.

I1: no irrigation
I2: one irrigation at tillering
I3: two irrigations, one at tillering and one at the flowering stage

and two manurial treatments,

t1: 15 lb/ac N + 15 lb/ac P2O5
t2: 80 lb/ac N + 80 lb/ac P2O5

These figures have been extracted from the publication entitled National Index of Field Experiments, M.P. (1954-59), published by IARS, New Delhi.

TABLE 1.1 Yield Data from the Irrigation Experiment (lb/acre)

                              Irrigation levels
Treatments    I1            I2            I3            Marginal total
t1            y11 (837)     y12 (804)     y13 (843)     y11+y12+y13 (2484)
t2            y21 (914)     y22 (758)     y23 (849)     y21+y22+y23 (2521)
Totals        y11+y21       y12+y22       y13+y23       Σ yij
              (1751)        (1562)        (1692)        (5005)


Here, yij (i = 1, 2; j = 1, 2, 3) denotes the observation from the ith treatment and the jth level of irrigation. It will be seen that the following two orthogonal contrasts are free from the effects of the two controlled factors, viz. treatments and irrigation:

(i) (y11 − y21) − (y12 − y22)

(ii) (y11 − y21) + (y12 − y22) − 2(y13 − y23)

Each of these contrasts is actually a contrast of contrasts. In a contrast of two observations in the same column, the effect of irrigation is removed. Again, by taking a contrast of such contrasts, the effect of treatment is eliminated. Evidently, it is not possible to obtain any more error contrasts orthogonal to each of the above two error contrasts. Hence, the s.s. due to error variation, in short the error s.s., can be obtained by adding the squares of these two contrasts, each divided by the sum of squares of its coefficients. This s.s. has two d.f.

Again, let us take the contrast

(y11 + y12 + y13) − (y21 + y22 + y23)

It is easily seen that this contrast gives a comparison between the effects of the two treatments. It is not affected by the effects of the irrigation levels, as they are evenly distributed over the positive and negative parts of the contrast. There is no other contrast orthogonal to the above which also represents a purely treatment comparison. This contrast is also orthogonal to each of the two error contrasts presented earlier. We get the treatment s.s. by obtaining the square of the above contrast and dividing it by the appropriate factor as indicated earlier. It has evidently 1 d.f. On the hypothesis that there is no variation among the treatment effects, the treatment s.s. is distributed as σ² times a χ² variate with 1 d.f. Thus, on the above hypothesis, both the error m.s. and the treatment m.s. have the same expected value, σ². They are independent, as they are obtained from orthogonal contrasts. Their ratio, F, can therefore be used to provide a test of the hypothesis of equality of the treatment effects.

To complete the analysis of variance, we have yet to account for two more orthogonal contrasts, each of which is orthogonal to the three contrasts already discussed. We can write these two contrasts as below from the irrigation marginal totals:

(i) (y11 + y21) − (y12 + y22)

(ii) (y11 + y21) + (y12 + y22) − 2(y13 + y23)
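As a numerical check (a sketch, not from the text: each contrast's s.s. is computed here as C²/Σ li² directly on the bracketed means, leaving aside the common factor for the 4 observations behind each figure), the five orthogonal contrasts exactly partition the total s.s. of the six cell values:

```python
# Table 1.1 means (lb/acre): key (i, j) = (treatment, irrigation level).
y = {(1, 1): 837, (1, 2): 804, (1, 3): 843,
     (2, 1): 914, (2, 2): 758, (2, 3): 849}

def ss(coeffs):
    """S.s. of a contrast: C**2 / sum of squared coefficients."""
    c = sum(l * y[ij] for ij, l in coeffs.items())
    return c ** 2 / sum(l ** 2 for l in coeffs.values())

# Treatment contrast (1 d.f.).
treat = ss({(1, 1): 1, (1, 2): 1, (1, 3): 1,
            (2, 1): -1, (2, 2): -1, (2, 3): -1})
# Two irrigation contrasts from the column margins (2 d.f.).
irrig = (ss({(1, 1): 1, (2, 1): 1, (1, 2): -1, (2, 2): -1}) +
         ss({(1, 1): 1, (2, 1): 1, (1, 2): 1, (2, 2): 1,
             (1, 3): -2, (2, 3): -2}))
# Two error contrasts, the "contrasts of contrasts" (2 d.f.).
error = (ss({(1, 1): 1, (2, 1): -1, (1, 2): -1, (2, 2): 1}) +
         ss({(1, 1): 1, (2, 1): -1, (1, 2): 1, (2, 2): -1,
             (1, 3): -2, (2, 3): 2}))

total = sum(v ** 2 for v in y.values()) - sum(y.values()) ** 2 / 6
# The five mutually orthogonal contrasts account for the whole total s.s.
assert abs((treat + irrig + error) - total) < 1e-6
print(round(treat, 2), round(error, 2))  # 228.17 3812.33
```

Note how small the treatment s.s. is here, matching the small difference between the marginal totals 2484 and 2521.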

These last two contrasts represent comparisons of irrigation effects, and their s.s. gives a measure of variation due to irrigation effects. Table 1.2 shows in a compact form the details of the analysis of variance obtainable from the contrast approach. As the figures in brackets in Table 1.1 are averages based on 4 observations each, the actual divisors for getting the s.s. are 1/16 times each of the divisors shown in col. (3) of Table 1.2 for the different s.s. The s.s. obtained

TABLE 1.2 Analysis of Variance Table

from the revised divisors can be considered to be based on the original observations rather than on the averages shown in the table. Though the above analysis helps to give a clear understanding of the technique and its rationale, it need not be adopted to analyze larger numbers of observations, as there are simpler methods for obtaining such sums of squares. The present method has, however, the advantage that special components of variation in which an experimenter may be interested can be easily obtained and tested through this technique.

1.7 Models and Analysis of Variance

The method of analysis described in the previous section was more a deduction from intuitive arguments. It did not clearly specify the nature of the treatment or error effects, that is, whether they are fixed or random. In order to provide a more rigorous basis and a sound statistical treatment, another representation of analysis of variance, based on postulated models, is given below briefly. Both these methods, however, lead to the same ultimate results so far as our present objective is concerned.

A statistical model is actually a linear relation of the effects of the different levels of a number of factors involved in an experiment, along with one or more terms representing error effects. The effects of any factor can be either fixed or random. For example, the effects of two well-defined levels of irrigation are fixed, as each irrigation level can reasonably be taken to have a fixed effect. Again, if the variety of a crop is taken as a factor with a number of varieties of the crop as its levels, then the effects of the varieties will be random if these varieties are selected at random from a large number. The random effects can again belong to a finite or an infinite population. The error effects are always random and may belong either to a finite or an infinite population.

A model in which each of the factors has fixed effects and only the error effect is random is called a fixed model. Models in which some factors have fixed effects and some random effects are called mixed models. Again, models where all the factors have random effects are called random models. Depending on the finiteness or otherwise of the random effect populations, mixed and random effect models can be of many different types. A detailed discussion of this topic is, however, not the objective of this section. Readers are referred to Wilk and Kempthorne (1955) for details on the topic.
In fixed effect models, the main objectives are to estimate the effects, find a measure of variability among the effects of each of the factors and finally find the variability among the error effects. In random effect models the main emphasis is on estimating the variability among the effects of the different factors. The methodology for obtaining expressions of variability is, however, mostly the same in the different models, though the methods appropriate for their testing are different.

CONCEPTS OF experiments: design and analysis

17

We shall restrict ourselves in this publication mainly to fixed effect models. A fixed effect model for, say, two factors is written as below:

yijk = μ + ai + bj + eijk

where yijk is an observation coming from a unit defined by the levels i, j, k of the factors involved, ai is the effect of the ith level of one factor, say A, bj is the effect of the jth level of another factor, say B, and eijk is an error effect which is assumed to be normally and independently distributed with zero mean and a constant variance σ². These assumptions regarding the behaviour of eijk are necessary for drawing inference by adopting known statistical methodology. The methodology that is adopted is the analysis of variance technique, by which inference is drawn by applying the F test. For the F test it is necessary that the observations, that is, the error components, be normally and independently distributed with a common variance. Thus, while collecting observations by adopting the various designs to be discussed subsequently, it has to be ensured that these assumptions are satisfied by the observations; otherwise no valid inference can be drawn from their analysis. A further assumption that has been made in the model is that the effects are additive. Though often this assumption is satisfied, it is desirable to get it tested, preferably in relatively less known situations. A test due to Tukey (1949) is available for this purpose.

Though we have presented above a model involving two factors, there can be other types of models depending on the nature of the data, that is, the number of controllable factors involved in the data classification. For example, if the data are from the different levels of a single factor, then we call the data one-way classified data, and it has its own model. In general, if the data belong to the level combinations of m different factors, we call them m-way classified data. Further discussions about classification of data and their models have been made in subsequent sections.
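To make the model concrete, here is a small simulation sketch (not from the text: the numerical effects, the error standard deviation, and the replication count are invented for illustration) showing that a comparison of fixed effects of one factor can be recovered free of the general mean and of the other factor:

```python
import random

random.seed(1)
mu = 50.0                      # general mean (assumed value)
a = [3.0, -3.0]                # fixed effects of factor A (sum to zero)
b = [2.0, 0.0, -2.0]           # fixed effects of factor B (sum to zero)
sigma, r = 0.5, 200            # error s.d. and replications per cell

# Simulate y_ijk = mu + a_i + b_j + e_ijk with e_ijk ~ N(0, sigma^2).
cells = {(i, j): [mu + a[i] + b[j] + random.gauss(0.0, sigma)
                  for _ in range(r)]
         for i in range(2) for j in range(3)}

# The difference of the two row means estimates a_1 - a_2: mu and the
# b_j's cancel, since every b_j enters both rows equally often.
row_mean = [sum(sum(cells[(i, j)]) for j in range(3)) / (3 * r)
            for i in range(2)]
est = row_mean[0] - row_mean[1]
print(round(est, 2))  # close to the true value a_1 - a_2 = 6.0
```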
After a model has been fixed, the general method of analysis takes the following steps. Let the model be denoted in general by (ai, bj, ck) +