Describe the two major theories used for the detection of out-of-control costs.
The two major theories used for the detection of out-of-control costs are as follows:
1) Classical Test Theory
2) Decision Theory
1) Classical Test Theory:-
Classical test theory assumes that each person has a true score, T, that would be obtained if there were no errors in measurement. A person's true score is defined as the expected number-correct score over an infinite number of independent administrations of the test. Unfortunately, test users never observe a person's true score, only an observed score, X. It is assumed that the observed score equals the true score plus some error:

X = T + E
Classical test theory is concerned with the relations between the three variables X, T, and E in the population. These relations are used to say something about the quality of test scores. In this regard, the most important concept is that of reliability. The reliability of the observed test scores X, which is denoted as ρ²_XT, is defined as the ratio of true score variance σ²_T to the observed score variance σ²_X:

ρ²_XT = σ²_T / σ²_X
Because the variance of the observed scores can be shown to equal the sum of the variance of true scores and the variance of error scores, σ²_X = σ²_T + σ²_E, this is equivalent to

ρ²_XT = σ²_T / (σ²_T + σ²_E)
This equation, which formulates a signal-to-noise ratio, has intuitive appeal: The reliability of test scores becomes higher as the proportion of error variance in the test scores becomes lower and vice versa. The reliability is equal to the proportion of the variance in the test scores that we could explain if we knew the true scores. The square root of the reliability is the correlation between true and observed scores.
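The signal-to-noise interpretation above can be checked with a minimal simulation sketch. All figures here (the variances, the sample size, the normal distributions) are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: true scores T plus independent measurement error E.
n = 100_000
true_var, error_var = 9.0, 3.0
T = rng.normal(50.0, np.sqrt(true_var), n)   # true scores
E = rng.normal(0.0, np.sqrt(error_var), n)   # measurement error
X = T + E                                    # observed scores

# Reliability as the ratio of true-score variance to observed-score variance;
# should land close to 9 / (9 + 3) = 0.75.
reliability = T.var() / X.var()
print(round(reliability, 2))

# The square root of reliability equals the correlation between T and X,
# so this should be close to sqrt(0.75).
print(round(np.corrcoef(T, X)[0, 1], 2))
```

With a large sample, the simulated ratio and correlation match the analytic values to about two decimal places.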
Evaluating tests and scores: Reliability
Reliability cannot be estimated directly since that would require one to know the true scores, which according to classical test theory is impossible. However, estimates of reliability can be obtained by various means. One way of estimating reliability is by constructing a so-called parallel test. The fundamental property of a parallel test is that it yields the same true score and the same observed score variance as the original test for every individual. If we have parallel tests X and X', then this means that, for every individual i,

T_Xi = T_X'i

and

σ²_E = σ²_E'
Under these assumptions, it follows that the correlation between parallel test scores is equal to reliability (see Lord & Novick, 1968, Ch. 2, for a proof).
Using parallel tests to estimate reliability is cumbersome because parallel tests are very hard to come by. In practice the method is rarely used. Instead, researchers use a measure of internal consistency known as Cronbach's α. Consider a test consisting of k items X_j, j = 1, …, k. The total test score is defined as the sum of the individual item scores, so that for individual i

X_i = X_i1 + X_i2 + … + X_ik

Then Cronbach's α equals

α = (k / (k − 1)) × (1 − (σ²_X1 + … + σ²_Xk) / σ²_X)
Cronbach's α can be shown to provide a lower bound for reliability under rather mild assumptions. Thus, the reliability of test scores in a population is never lower than the value of Cronbach's α in that population. This method is empirically feasible and, as a result, it is very popular among researchers. Calculation of Cronbach's α is included in many standard statistical packages such as SPSS and SAS.
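The formula above translates directly into code. The sketch below is a minimal implementation; the data it is applied to (five items driven by one common factor plus noise) is a hypothetical example, not from the text:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_persons, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative data: 5 items = common factor + independent noise.
rng = np.random.default_rng(2)
factor = rng.normal(0.0, 1.0, (1000, 1))
scores = factor + rng.normal(0.0, 1.0, (1000, 5))
print(round(cronbach_alpha(scores), 2))   # close to the analytic 5/6 ≈ 0.83
```

For these assumed variances each item has variance 2 and pairwise covariance 1, so the analytic value is (5/4)(1 − 10/30) = 5/6.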
Evaluating items: P and item-total correlations
Reliability provides a convenient index of test quality in a single number. However, it does not provide any information for evaluating single items. Item analysis within the classical approach often relies on two statistics: the P-value (proportion) and the item-total correlation (point-biserial correlation coefficient). The P-value represents the proportion of examinees responding in the keyed direction, and is typically referred to as item difficulty. The item-total correlation provides an index of the discrimination or differentiating power of the item, and is typically referred to as item discrimination. In addition, these statistics are calculated for each response of the oft-used multiple choice item, and are used to evaluate items and diagnose possible issues, such as a confusing distractor. Such analysis is provided by specially designed psychometric software.
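Both item statistics are easy to compute from a matrix of scored (0/1) responses. The following sketch generates hypothetical responses from an assumed ability model (the examinee count, item difficulties, and logistic success model are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 0/1 (incorrect/correct) responses: 200 examinees, 4 items whose
# chance of success rises with an underlying ability.
ability = rng.normal(0.0, 1.0, 200)
difficulty = np.array([-1.0, 0.0, 0.5, 1.5])   # easiest ... hardest
p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty)))
responses = (rng.random((200, 4)) < p_correct).astype(int)

# P-value: proportion of examinees answering each item in the keyed direction.
p_values = responses.mean(axis=0)

# Item-total correlation: each item's scores vs. the total test score.
total = responses.sum(axis=1)
item_total_r = np.array(
    [np.corrcoef(responses[:, j], total)[0, 1] for j in range(4)]
)
print(p_values.round(2))      # easier items show higher P-values
print(item_total_r.round(2))  # discriminating items correlate with the total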
As has been noted above, the entire exercise of classical test theory is done to arrive at a suitable definition of reliability. Reliability is supposed to say something about the general quality of the test scores in question. The general idea is that the higher the reliability, the better. Classical test theory does not say how high reliability is supposed to be. Too high a value for α, say over .9, indicates redundancy of items. Around .8 is recommended for personality research, while .9+ is desirable for individual high-stakes testing.[3] These 'criteria' are not based on formal arguments, but rather are the result of convention and professional practice. The extent to which they can be mapped to formal principles of statistical inference is unclear.
2) Decision Theory:-
Decision theory is concerned with the problem of making decisions. The term statistical decision theory refers to decision making in the presence of statistical knowledge, which sheds light on some of the uncertainties involved in the problem. Unless otherwise stated, it may be assumed that these uncertainties can be treated as unknown numerical quantities, denoted by θ. Decision making under uncertainty draws on probability theory and graphical models. This section presents the decisional framework and introduces the notation used to model decision problems.
The Basic Elements

It is assumed that a decision maker can specify the following basic elements of a decision problem.

1. Action Space: A = {a}. A single action is denoted by a, while the set of all possible actions is denoted as A. The term "action" is used in the decision literature rather than "decision", but the two can be used somewhat interchangeably. Thus, a decision maker is to select a single action a ∈ A from the space of all possible actions.

2. State Space: Θ = {θ} (or Parameter Space). The decision process is affected by the unknown quantity θ ∈ Θ, which signifies the state of nature. The set of all possible states of nature is denoted by Θ. Thus, a decision maker perceives that a particular action a results in a corresponding state θ.

3. Consequence: C = {c}. The consequence of choosing a possible action and its state of nature may be multidimensional and can be stated mathematically as c(a, θ) ∈ C.

4. Loss Function: l(a, θ) on A × Θ. The objectives of a decision maker are described by a real-valued loss function l(a, θ), which measures the loss (or negative utility) of the consequence c(a, θ).

5. Family of Experiments: E = {e}. Typically, experiments are performed to obtain further information about each θ ∈ Θ. A single experiment is denoted by e, while the set of all possible experiments is denoted as E. Thus, a decision maker may select a single experiment e from a family of potential experiments to help determine the importance of possible actions or decisions.

6. Sample Space: X = {x}. An outcome of a potential experiment e ∈ E is denoted as x ∈ X. When a statistical investigation (such as an experiment) is performed to obtain information about θ, the observed outcome x is a realization of a random variable X. The set of all possible outcomes is the sample space, typically a subset of ℝⁿ.

7. Decision Rule: δ(x) ∈ A. If a decision maker is to observe an outcome X = x and then choose a suitable action δ(x) ∈ A, then the aim is to use the data to minimize the loss l(δ(x), θ).

8. Utility Evaluation: u(·, ·, ·, ·) on E × X × A × Θ. The quantification of a decision maker's preferences is described by a utility function u(e, x, a, θ), which is assigned to conducting a particular experiment e, observing the outcome x, and choosing the action a under the corresponding state θ. The evaluation of u takes into account the costs of an experiment as well as the consequences of the specific action, which may be monetary and/or of other forms.
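The elements above can be tied back to detecting out-of-control costs with a tiny worked sketch: two states of nature, two actions, a loss table, and a decision rule that minimizes expected loss. The states, actions, probabilities, and loss values are all hypothetical numbers chosen for illustration:

```python
import numpy as np

# A tiny decision problem in the notation above.
states = ["in_control", "out_of_control"]    # Θ
actions = ["continue", "investigate"]        # A

# loss[a][theta]: l(a, θ), the cost of each action under each state.
loss = np.array([
    [0.0, 100.0],   # continue: free if in control, costly if out of control
    [20.0, 20.0],   # investigate: fixed investigation cost either way
])

def best_action(p_out_of_control: float) -> str:
    """Decision rule: pick the action minimizing expected loss given
    the probability that costs are out of control (inferred from data x)."""
    p = np.array([1.0 - p_out_of_control, p_out_of_control])
    expected = loss @ p            # expected loss of each action
    return actions[int(expected.argmin())]

print(best_action(0.05))   # expected losses 5 vs 20  -> "continue"
print(best_action(0.60))   # expected losses 60 vs 20 -> "investigate"
```

The rule investigates only when the probability of being out of control is high enough that the expected cost of doing nothing exceeds the fixed investigation cost, which is the basic logic of using decision theory for cost-control.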