Question

Describe the two major theories used for the detection of out-of-control costs.

Answer #1

The two major theories used for the detection of out-of-control costs are as follows:

1) Classical Test Theory

2) Decision Theory

1) Classical Test Theory:-

Classical test theory assumes that each person has a true score, T, that would be obtained if there were no errors in measurement. A person's true score is defined as the expected number-correct score over an infinite number of independent administrations of the test. Unfortunately, test users never observe a person's true score, only an observed score, X. It is assumed that the observed score equals the true score plus some error:

X = T + E

Classical test theory is concerned with the relations between the three variables X, T, and E in the population. These relations are used to say something about the quality of test scores. In this regard, the most important concept is that of reliability. The reliability of the observed test scores X, denoted {\rho}^2_{XT}, is defined as the ratio of true score variance {\sigma}^2_T to the observed score variance {\sigma}^2_X:

{\rho}^2_{XT} = \frac{{\sigma}^2_T}{{\sigma}^2_X}

Because the variance of the observed scores can be shown to equal the sum of the variance of true scores and the variance of error scores, this is equivalent to

{\rho}^2_{XT} = \frac{{\sigma}^2_T}{{\sigma}^2_X} = \frac{{\sigma}^2_T}{{\sigma}^2_T + {\sigma}^2_E}

This equation, which formulates a signal-to-noise ratio, has intuitive appeal: The reliability of test scores becomes higher as the proportion of error variance in the test scores becomes lower and vice versa. The reliability is equal to the proportion of the variance in the test scores that we could explain if we knew the true scores. The square root of the reliability is the correlation between true and observed scores.
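
As a quick illustrative sketch of this ratio (the numbers, and the use of Python/NumPy, are my own assumptions rather than anything given in the question), one can simulate true scores and errors and confirm that the variance ratio behaves like a signal-to-noise measure:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                  # simulated examinees
T = rng.normal(50, 10, size=n)               # true scores, variance 100
E = rng.normal(0, 5, size=n)                 # measurement error, variance 25
X = T + E                                    # observed scores X = T + E

reliability = T.var() / X.var()
print(reliability)                           # close to 100 / (100 + 25) = 0.8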

Evaluating tests and scores: Reliability

Reliability cannot be estimated directly since that would require one to know the true scores, which according to classical test theory is impossible. However, estimates of reliability can be obtained by various means. One way of estimating reliability is by constructing a so-called parallel test. The fundamental property of a parallel test is that it yields the same true score and the same observed score variance as the original test for every individual. If we have parallel tests x and x', then this means that

{\mathcal{E}}(X_i) = {\mathcal{E}}(X'_i)

and

{\sigma}^2_{E_i}={\sigma}^2_{E'_i}

Under these assumptions, it follows that the correlation between parallel test scores is equal to reliability (see Lord & Novick, 1968, Ch. 2, for a proof).

{\rho}_{XX'} = \frac{{\sigma}_{XX'}}{{\sigma}_X {\sigma}_{X'}} = \frac{{\sigma}^2_T}{{\sigma}^2_X} = {\rho}^2_{XT}
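
To make this concrete, here is a small simulation sketch (an assumed setup, not from the source): two parallel forms share the same true scores and error variance, and their correlation comes out near the reliability computed above.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
T = rng.normal(50, 10, size=n)               # shared true scores
X1 = T + rng.normal(0, 5, size=n)            # form x
X2 = T + rng.normal(0, 5, size=n)            # parallel form x', same error variance

print(np.corrcoef(X1, X2)[0, 1])             # close to 100 / 125 = 0.8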

Using parallel tests to estimate reliability is cumbersome because parallel tests are very hard to come by. In practice the method is rarely used. Instead, researchers use a measure of internal consistency known as Cronbach's {\alpha}. Consider a test consisting of k items U_j, j = 1, ..., k. The total test score is defined as the sum of the individual item scores, so that for individual i

X_i = \sum_{j=1}^{k} U_{ij}

Then Cronbach's alpha equals

{\alpha} = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} {\sigma}^2_{U_j}}{{\sigma}^2_X}\right)

Cronbach's {\alpha} can be shown to provide a lower bound for reliability under rather mild assumptions. Thus, the reliability of test scores in a population is never lower than the value of Cronbach's {\alpha} in that population. Because {\alpha} can be computed from a single test administration, the method is empirically feasible and, as a result, very popular among researchers. Calculation of Cronbach's {\alpha} is included in many standard statistical packages such as SPSS and SAS.
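
A minimal sketch of the calculation itself, assuming a small invented 0/1 item-score matrix (the function name and data are illustrative, not taken from any particular package):

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: examinees x items array of item scores
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)       # sigma^2_{U_j} for each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # sigma^2_X of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = [[1, 1, 1, 0],
          [1, 1, 0, 0],
          [1, 0, 0, 0],
          [1, 1, 1, 1],
          [0, 0, 0, 0]]
print(cronbach_alpha(scores))

In practice one would simply request {\alpha} from SPSS, SAS, or a similar package, as noted above; the sketch only makes the formula explicit.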

Evaluating items: P-values and item-total correlations

Reliability provides a convenient index of test quality in a single number. However, it does not provide any information for evaluating single items. Item analysis within the classical approach often relies on two statistics: the P-value (the proportion of examinees responding in the keyed direction, typically referred to as item difficulty) and the item-total correlation (a point-biserial correlation coefficient that indexes the discriminating power of the item, typically referred to as item discrimination). In addition, these statistics are calculated for each response option of the oft-used multiple-choice item, and are used to evaluate items and diagnose possible issues, such as a confusing distractor. Such analysis is provided by specially designed psychometric software.
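
A brief sketch of these two item statistics on an invented 0/1 response matrix (again an assumption for illustration, not data from the question):

import numpy as np

responses = np.array([[1, 1, 0, 1],
                      [1, 0, 0, 1],
                      [1, 1, 1, 1],
                      [0, 0, 0, 1],
                      [1, 1, 0, 0]])

p_values = responses.mean(axis=0)            # item difficulty: proportion keyed correct
totals = responses.sum(axis=1)               # total test score per examinee
item_total = [np.corrcoef(responses[:, j], totals)[0, 1]
              for j in range(responses.shape[1])]     # point-biserial discrimination
print(p_values)
print(np.round(item_total, 2))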

As noted above, the entire exercise of classical test theory is done to arrive at a suitable definition of reliability. Reliability is supposed to say something about the general quality of the test scores in question. The general idea is that the higher the reliability, the better. Classical test theory does not say how high reliability is supposed to be. Too high a value for {\alpha}, say over .9, indicates redundancy of items. Around .8 is recommended for personality research, while .9+ is desirable for individual high-stakes testing.[3] These 'criteria' are not based on formal arguments, but rather are the result of convention and professional practice. The extent to which they can be mapped to formal principles of statistical inference is unclear.

2) Decision Theory:-

Decision theory is concerned with the problem of making decisions. The term statistical decision theory pertains to decision making in the presence of statistical knowledge, which sheds light on some of the uncertainties involved in the problem. Unless otherwise stated, it may be assumed that these uncertainties can be treated as unknown numerical quantities, denoted by θ. Decision making under uncertainty draws on probability theory and graphical models. This part presents the decisional framework and introduces the notation used to model decision problems.

The Basic Elements

It is assumed that a decision maker can specify the following basic elements of a decision problem (a toy numerical instance is sketched after the list).

1. Action Space: A = {a}. A single action is denoted by a, while the set of all possible actions is denoted by A. The decision literature speaks of actions rather than decisions, but the two terms can be used somewhat interchangeably. The decision maker is to select a single action a ∈ A from the space of all possible actions.

2. State Space (or Parameter Space): Θ = {θ}. The decision process is affected by the unknown quantity θ ∈ Θ, which signifies the state of nature. The set of all possible states of nature is denoted by Θ; the consequence of a particular action a depends on which state θ obtains.

3. Consequence: C = {c}. The consequence of choosing a possible action under a given state of nature may be multidimensional and can be stated mathematically as c(a, θ) ∈ C.

4. Loss Function: l(a, θ) defined on A × Θ. The objectives of the decision maker are described by a real-valued loss function l(a, θ), which measures the loss (or negative utility) of the consequence c(a, θ).

5. Family of Experiments: E = {e}. Experiments are typically performed to obtain further information about θ ∈ Θ. A single experiment is denoted by e, while the set of all possible experiments is denoted by E. The decision maker may select a single experiment e from the family of potential experiments to help determine the importance of possible actions or decisions.

6. Sample Space: X = {x}. An outcome of a potential experiment e ∈ E is denoted by x ∈ X. When a statistical investigation (such as an experiment) is performed to obtain information about θ, the observed outcome x is a realization of a random variable X; the set of all possible outcomes is the sample space, and X is typically a subset of ℝ^n.

7. Decision Rule: δ(x) ∈ A. If the decision maker observes an outcome X = x and then chooses a suitable action δ(x) ∈ A, the aim is to use the data to minimize the loss l(δ(x), θ).

8. Utility Evaluation: u(·, ·, ·, ·) on E × X × A × Θ. The quantification of the decision maker's preferences is described by a utility function u(e, x, a, θ), assigned to conducting a particular experiment e, observing a resulting outcome x, and choosing a particular action a under a corresponding state θ. The evaluation of u takes into account the costs of an experiment as well as the consequences of the specific action, which may be monetary and/or of other forms.
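
As a toy numerical instance of these elements (the costs, states, and prior below are invented purely for illustration and are not part of the source), consider the out-of-control-cost setting itself: two actions, two states of nature, a loss function over their combinations, and the Bayes action under an assumed prior:

import numpy as np

actions = ["investigate", "do_nothing"]            # action space A
states = ["in_control", "out_of_control"]          # state space Theta

# loss[a][theta]: l(a, theta), e.g. a fixed investigation cost versus the cost of
# letting an out-of-control process run uncorrected (hypothetical figures)
loss = np.array([[3.0,  3.0],     # investigate
                 [0.0, 20.0]])    # do nothing

prior = np.array([0.8, 0.2])                       # assumed P(theta)

expected_loss = loss @ prior                       # expected loss of each action
print(dict(zip(actions, expected_loss)))
print("Bayes action:", actions[int(np.argmin(expected_loss))])

With these made-up numbers, investigating has an expected loss of 3.0 versus 4.0 for doing nothing, so the decision rule would call for an investigation; changing the prior or the loss figures can reverse that conclusion, which is exactly the sensitivity the framework is meant to expose.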
