
CHAPTER 8

Test Development

All tests are not created equal. The creation of a good test is not a matter of chance. It is the product of the thoughtful and sound application of established principles of test development. In this context, test development is an umbrella term for all that goes into the process of creating a test.

In this chapter, we introduce the basics of test development and examine in detail the processes by which tests are assembled. We explore, for example, ways that test items are written, and ultimately selected for use. Although we focus on tests of the published, standardized variety, much of what we have to say also applies to custom-made tests such as those created by teachers, researchers, and employers.

The process of developing a test occurs in five stages:

1. test conceptualization;

2. test construction;

3. test tryout;

4. item analysis;

5. test revision.

Once the idea for a test is conceived (test conceptualization), test construction begins. As we are using this term, test construction is a stage in the process of test development that entails writing test items (or rewriting or revising existing items), as well as formatting items, setting scoring rules, and otherwise designing and building a test. Once a preliminary form of the test has been developed, it is administered to a representative sample of testtakers under conditions that simulate those under which the final version of the test will be administered (test tryout). The data from the tryout are collected, and testtakers’ performance on the test as a whole and on each item is analyzed. Statistical procedures, referred to as item analysis, are employed to assist in making judgments about which items are good as they are, which items need to be revised, and which items should be discarded. The analysis of the test’s items may include analyses of item reliability, item validity, and item discrimination. Depending on the type of test, item-difficulty level may be analyzed as well.

JUST THINK . . .

Can you think of a classic psychological test from the past that has never undergone test tryout, item analysis, or revision? What about so-called psychological tests found on the Internet?

Next in the sequence of events in test development is test revision. Here, test revision refers to action taken to modify a test’s content or format for the purpose of improving the test’s effectiveness as a tool of measurement. This action is usually based on item analyses, as well as related information derived from the test tryout. The revised version of the test will then be tried out on a new sample of testtakers. After the results are analyzed, the test will be further revised if necessary—and so it goes (see Figure 8–1). Although the test development process described is fairly typical today, let’s note that there are many exceptions to it, both with regard to tests developed in the past and some contemporary tests. Some tests are conceived of and constructed but neither tried out, nor item-analyzed, nor revised.

Figure 8–1 The Test Development Process

Test Conceptualization

The beginnings of any published test can probably be traced to thoughts—self-talk, in behavioral terms. The test developer says to himself or herself something like: “There ought to be a test designed to measure [fill in the blank] in [such and such] way.” The stimulus for such a thought could be almost anything. A review of the available literature on existing tests designed to measure a particular construct might indicate that such tests leave much to be desired in psychometric soundness. An emerging social phenomenon or pattern of behavior might serve as the stimulus for the development of a new test. The analogy with medicine is straightforward: Once a new disease comes to the attention of medical researchers, they attempt to develop diagnostic tests to assess its presence or absence as well as the severity of its manifestations in the body.

The development of a new test may be in response to a need to assess mastery in an emerging occupation or profession. For example, new tests may be developed to assess mastery in fields such as high-definition electronics, environmental engineering, and wireless communications.

In recent years, measurement interest related to aspects of the LGBT (lesbian, gay, bisexual, and transgender) experience has increased. The present authors propose that in the interest of comprehensive inclusion, an “A” should be added to the end of “LGBT” so that this term is routinely abbreviated as “LGBTA.” The additional “A” would acknowledge the existence of asexuality as a sexual orientation or preference.

JUST THINK . . .

What is a “hot topic” today that developers of psychological tests should be working on? What aspects of this topic might be explored by means of a psychological test?

Asexuality may be defined as a sexual orientation characterized by a long-term lack of interest in a sexual relationship with anyone or anything. Given that some research is conducted with persons claiming to be asexual, and given that asexual individuals must be selected-in or selected-out to participate in such research, Yule et al. (2015) perceived a need for a reliable and valid test to measure asexuality. Read about their efforts to develop and validate their rather novel test in this chapter’s Close-Up.

CLOSE-UP

Creating and Validating a Test of Asexuality*

In general, and with some variation according to the source, human asexuality may be defined as an absence of sexual attraction to anyone at all. Estimates suggest that approximately 1% of the population might be asexual (Bogaert, 2004). Although the concept of asexuality was first introduced by Alfred Kinsey in 1948, it is only in the past decade that it has received any substantial academic attention. Scholars are grappling with how best to conceptualize asexuality. For some, asexuality is thought of as a sexual orientation in its own right (Berkey et al., 1990; Bogaert, 2004; Brotto & Yule, 2011; Brotto et al., 2010; Storms, 1978; Yule et al., 2014). Others view asexuality more as a mental health issue, a paraphilia, or human sexual dysfunction (see Bogaert, 2012, 2015).

More research on human asexuality would be helpful. However, researchers who design projects to explore human asexuality face the challenge of finding qualified subjects. Perhaps the best source of asexual research subjects has been an online organization called “AVEN” (an acronym for the Asexual Visibility and Education Network). Located at asexuality.org , this organization had some 120,000 members at the time of this writing (in May, 2016). But while the convenience of these group members as a recruitment source is obvious, there are also limitations inherent to exclusively recruiting research participants from a single online community. For example, asexual individuals who do not belong to AVEN are systematically excluded from such research. It may well be that those unaffiliated asexual individuals differ from AVEN members in significant ways. For example, these individuals may have lived their lives devoid of any sexual attraction, but have never construed themselves to be “asexual.” On the other hand, persons belonging to AVEN may be a unique group within the asexual population, as they have not only acknowledged their asexuality as an identity, but actively sought out affiliation with other like-minded individuals. Clearly, an alternative recruitment procedure is needed. Simply relying on membership in AVEN as a credential of asexuality is flawed. What is needed is a validated measure to screen for human asexuality.

In response to this need for a test designed to screen for human asexuality, the Asexuality Identification Scale (AIS) was developed (Yule et al., 2015). The AIS is a 12-item, sex- and gender-neutral, self-report measure of asexuality. The AIS was developed in a series of stages. Stage 1 included development and administration of eight open-ended questions to sexual (n = 70) and asexual (n = 139) individuals. These subjects were selected for participation in the study through online channels (e.g., AVEN, Craigslist, and Facebook). Subjects responded in writing to a series of questions focused on definitions of asexuality, sexual attraction, sexual desire, and romantic attraction. There were no space limitations, and participants were encouraged to answer in as much or as little detail as they wished. Participant responses were examined to identify prevalent themes, and this information was used to generate 111 multiple-choice items. In Stage 2, these 111 items were administered to another group of asexual (n = 165) and sexual (n = 752) participants. Subjects in this phase of the test development process were selected for participation through a variety of online websites, and also through our university’s human subjects pool. The resulting data were then factor- and item-analyzed in order to determine which items should be retained. The decision to retain an item was made on the basis of our judgment as to which items best differentiated asexual from sexual participants. Thirty-seven items were selected based on the results of this item selection process. In Stage 3, these 37 items were administered to another group of asexual (n = 316) and sexual (n = 926) participants. Here, subjects were selected through the same means as in Stage 2, but also through websites that host psychological online studies. As in Stage 2, the items were analyzed for the purpose of selecting those items that best loaded on the asexual versus the sexual factors. 
Of the 37 original items subjected to item analysis, 12 items were retained, and 25 were discarded.

In order to determine construct validity, psychometric validation on the 12-item AIS was conducted using data from the same participants in Stage 3. Known-groups validity was established as the AIS total score showed excellent ability to distinguish between asexual and sexual subjects. Specifically, a cut-off score of 40/60 was found to identify 93% of self-identified asexual individuals, while excluding 95% of sexual individuals. In order to assess whether the measure was useful over and above already-available measures of sexual orientation, we compared the AIS to an adaptation of a previously established measure of sexual orientation (Klein Scale; Klein & Sepekoff, 1985). Incremental validity was established, as the AIS showed only moderate correlations with the Klein Scale, suggesting that the AIS is a better predictor of asexuality than an existing measure of sexual orientation alone. To determine whether the AIS correlates with a construct that is thought to be highly related to asexuality (or, lack of sexual desire), convergent validity was assessed by correlating total AIS scores with scores on the Sexual Desire Inventory (SDI; Spector et al., 1996). As we expected, the AIS correlated only weakly with the Solitary Desire subscale of the SDI, while the Dyadic Desire subscale of the SDI had a moderate negative correlation with the AIS. Finally, we conducted discriminant validity analyses by comparing the AIS with the Childhood Trauma Questionnaire (CTQ; Bernstein et al., 1994; Bernstein & Fink, 1998), the Short-Form Inventory of Interpersonal Problems-Circumplex scales (IIP-SC; Soldz et al., 1995), and the Big-Five Inventory (BFI; John et al., 1991; John et al., 2008; John & Srivastava, 1999) in order to determine whether the AIS was actually tapping into negative sexual experiences or personality traits. Discriminant validity was established, as the AIS was not significantly correlated with scores on the CTQ, IIP-SC, or the BFI.
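The known-groups result above amounts to checking how a cutoff score classifies the two groups: the percentage of asexual individuals flagged is the cutoff's sensitivity, and the percentage of sexual individuals excluded is its specificity. A minimal sketch with invented scores (not the study's data; only the cutoff of 40 follows the 40/60 rule described above):

```python
# Toy illustration of cutoff-based classification. The two score lists
# are invented; they are NOT data from the AIS validation study.

def classify(scores, cutoff=40):
    """Flag each score at or above the cutoff as 'asexual'."""
    return [s >= cutoff for s in scores]

asexual_scores = [55, 48, 41, 39, 60]   # self-identified asexual group
sexual_scores  = [20, 35, 42, 18, 25]   # sexual group

hits = classify(asexual_scores)          # True = correctly flagged
false_alarms = classify(sexual_scores)   # True = incorrectly flagged

sensitivity = sum(hits) / len(hits)                       # 4/5 flagged
specificity = 1 - sum(false_alarms) / len(false_alarms)   # 4/5 excluded
```

With these invented scores, both sensitivity and specificity come out to 0.80; the study's reported 93%/95% figures reflect the same computation on the actual Stage 3 samples.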

Sexual and asexual participants significantly differed in their AIS total scores with a large effect size. Further, the AIS passed tests of known-groups, incremental, convergent, and discriminant validity. This suggests that the AIS is a useful tool for identifying asexuality, and could be used in future research to identify individuals with a lack of sexual attraction. We believe that respondents need not be self-identified as asexual in order to be selected as asexual on the AIS. Research suggests that the AIS will identify as asexual the individual who exhibits characteristics of a lifelong lack of sexual attraction in the absence of personal distress. It is our hope that the AIS will allow for recruitment of more representative samples of the asexual population, and contribute toward a growing body of research on this topic.

Used with permission of Morag A. Yule and Lori A. Brotto.

* This Close-Up was guest-authored by Morag A. Yule and Lori A. Brotto, both of the Department of Obstetrics & Gynaecology of the University of British Columbia.

Some Preliminary Questions

Regardless of the stimulus for developing the new test, a number of questions immediately confront the prospective test developer.

· What is the test designed to measure? This is a deceptively simple question. Its answer is closely linked to how the test developer defines the construct being measured and how that definition is the same as or different from other tests purporting to measure the same construct.

· What is the objective of the test? In the service of what goal will the test be employed? In what way or ways is the objective of this test the same as or different from other tests with similar goals? What real-world behaviors would be anticipated to correlate with testtaker responses?

· Is there a need for this test? Are there any other tests purporting to measure the same thing? In what ways will the new test be better than or different from existing ones? Will there be more compelling evidence for its reliability or validity? Will it be more comprehensive? Will it take less time to administer? In what ways would this test not be better than existing tests?

· Who will use this test? Clinicians? Educators? Others? For what purpose or purposes would this test be used?

· Who will take this test? Who is this test for? Who needs to take it? Who would find it desirable to take it? For what age range of testtakers is the test designed? What reading level is required of a testtaker? What cultural factors might affect testtaker response?

· What content will the test cover? Why should it cover this content? Is this coverage different from the content coverage of existing tests with the same or similar objectives? How and why is the content area different? To what extent is this content culture-specific?

· How will the test be administered? Individually or in groups? Is it amenable to both group and individual administration? What differences will exist between individual and group administrations of this test? Will the test be designed for or amenable to computer administration? How might differences between versions of the test be reflected in test scores?

· What is the ideal format of the test? Should it be true–false, essay, multiple-choice, or in some other format? Why is the format selected for this test the best format?

· Should more than one form of the test be developed? On the basis of a cost–benefit analysis, should alternate or parallel forms of this test be created?

· What special training will be required of test users for administering or interpreting the test? What background and qualifications will a prospective user of data derived from an administration of this test need to have? What restrictions, if any, should be placed on distributors of the test and on the test’s usage?

· What types of responses will be required of testtakers? What kind of disability might preclude someone from being able to take this test? What adaptations or accommodations are recommended for persons with disabilities?

· Who benefits from an administration of this test? What would the testtaker learn, or how might the testtaker benefit, from an administration of this test? What would the test user learn, or how might the test user benefit? What social benefit, if any, derives from an administration of this test?

· Is there any potential for harm as the result of an administration of this test? What safeguards are built into the recommended testing procedure to prevent any sort of harm to any of the parties involved in the use of this test?

· How will meaning be attributed to scores on this test? Will a testtaker’s score be compared to those of others taking the test at the same time? To those of others in a criterion group? Will the test evaluate mastery of a particular content area?

This last question provides a point of departure for elaborating on issues related to test development with regard to norm- versus criterion-referenced tests.

Norm-referenced versus criterion-referenced tests: Item development issues

Different approaches to test development and individual item analyses are necessary, depending upon whether the finished test is designed to be norm-referenced or criterion-referenced. Generally speaking, for example, a good item on a norm-referenced achievement test is an item for which high scorers on the test respond correctly. Low scorers on the test tend to respond to that same item incorrectly. On a criterion-oriented test, this same pattern of results may occur: High scorers on the test get a particular item right whereas low scorers on the test get that same item wrong. However, that is not what makes an item good or acceptable from a criterion-oriented perspective. Ideally, each item on a criterion-oriented test addresses the issue of whether the testtaker—a would-be physician, engineer, piano student, or whoever—has met certain criteria. In short, when it comes to criterion-oriented assessment, being “first in the class” does not count and is often irrelevant. Although we can envision exceptions to this general rule, norm-referenced comparisons typically are insufficient and inappropriate when knowledge of mastery is what the test user requires.

Criterion-referenced testing and assessment are commonly employed in licensing contexts, be it a license to practice medicine or to drive a car. Criterion-referenced approaches are also employed in educational contexts in which mastery of particular material must be demonstrated before the student moves on to advanced material that conceptually builds on the existing base of knowledge, skills, or both.

In contrast to techniques and principles applicable to the development of norm-referenced tests (many of which are discussed in this chapter), the development of criterion-referenced instruments derives from a conceptualization of the knowledge or skills to be mastered. For purposes of assessment, the required cognitive or motor skills may be broken down into component parts. The test developer may attempt to sample criterion-related knowledge with regard to general principles relevant to the criterion being assessed. Experimentation with different items, tests, formats, or measurement procedures will help the test developer discover the best measure of mastery for the targeted skills or knowledge.

JUST THINK . . .

Suppose you were charged with developing a criterion-referenced test to measure mastery of Chapter 8 of this book. Explain, in as much detail as you think sufficient, how you would go about doing that. It’s OK to read on before answering (in fact, you are encouraged to do so).

In general, the development of a criterion-referenced test or assessment procedure may entail exploratory work with at least two groups of testtakers: one group known to have mastered the knowledge or skill being measured and another group known not to have mastered such knowledge or skill. For example, during the development of a criterion-referenced written test for a driver’s license, a preliminary version of the test may be administered to one group of people who have been driving about 15,000 miles per year for 10 years and who have perfect safety records (no accidents and no moving violations). The second group of testtakers might be a group of adults matched in demographic and related respects to the first group but who have never had any instruction in driving or driving experience. The items that best discriminate between these two groups would be considered “good” items. The preliminary exploratory experimentation done in test development need not have anything at all to do with flying, but you wouldn’t know that from its name . . .
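The contrast-group logic described above can be made concrete: for each item, compare the proportion answering correctly in the mastery group with the proportion in the non-mastery group, and retain the items with the largest difference. The sketch below uses invented 0/1 response data and a hypothetical retention cutoff of 0.30; neither comes from any published test.

```python
# Toy sketch of contrast-group item selection for a criterion-referenced
# test. Response data (1 = correct, 0 = incorrect) and the 0.30 cutoff
# are invented for illustration.

def discrimination_index(masters, nonmasters):
    """Per item: proportion correct among masters minus proportion
    correct among non-masters. Higher values mark 'good' items."""
    n_items = len(masters[0])
    index = []
    for i in range(n_items):
        p_master = sum(person[i] for person in masters) / len(masters)
        p_nonmaster = sum(person[i] for person in nonmasters) / len(nonmasters)
        index.append(p_master - p_nonmaster)
    return index

# Three experienced, safe drivers versus three novices, on four items.
masters = [[1, 1, 0, 1], [1, 1, 1, 1], [1, 0, 1, 1]]
novices = [[0, 1, 0, 0], [0, 1, 1, 0], [1, 1, 0, 0]]

d = discrimination_index(masters, novices)
good_items = [i for i, v in enumerate(d) if v >= 0.30]
```

Here items 0, 2, and 3 discriminate in the expected direction and would be kept, while item 1 (which the novices actually answer better) would be discarded.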

Pilot Work

In the context of test development, terms such as pilot work, pilot study, and pilot research refer, in general, to the preliminary research surrounding the creation of a prototype of the test. Test items may be pilot studied (or piloted) to evaluate whether they should be included in the final form of the instrument. In developing a structured interview to measure introversion/extraversion, for example, pilot research may involve open-ended interviews with research subjects believed for some reason (perhaps on the basis of an existing test) to be introverted or extraverted. Additionally, interviews with parents, teachers, friends, and others who know the subject might also be arranged. Another type of pilot study might involve physiological monitoring of the subjects (such as monitoring of heart rate) as a function of exposure to different types of stimuli.

In pilot work, the test developer typically attempts to determine how best to measure a targeted construct. The process may entail literature reviews and experimentation as well as the creation, revision, and deletion of preliminary test items. After pilot work comes the process of test construction. Keep in mind, however, that depending on the nature of the test, as well as the nature of the changing responses to it by testtakers, test users, and the community at large, the need for further pilot research and test revision is always a possibility.

Pilot work is a necessity when constructing tests or other measuring instruments for publication and wide distribution. Of course, pilot work need not be part of the process of developing teacher-made tests for classroom use. Let’s take a moment at this juncture to discuss selected aspects of the process of developing tests not for use on the world stage, but rather to measure achievement in a class.

Test Construction

Scaling

We have previously defined measurement as the assignment of numbers according to rules. Scaling may be defined as the process of setting rules for assigning numbers in measurement. Stated another way, scaling is the process by which a measuring device is designed and calibrated and by which numbers (or other indices)—scale values—are assigned to different amounts of the trait, attribute, or characteristic being measured.

Historically, the prolific L. L. Thurstone (Figure 8–2) is credited for being at the forefront of efforts to develop methodologically sound scaling methods. He adapted psychophysical scaling methods to the study of psychological variables such as attitudes and values (Thurstone, 1959; Thurstone & Chave, 1929). Thurstone’s (1925) article entitled “A Method of Scaling Psychological and Educational Tests” introduced, among other things, the notion of absolute scaling—a procedure for obtaining a measure of item difficulty across samples of testtakers who vary in ability.

Figure 8–2 L. L. Thurstone (1887–1955) Among his many achievements in the area of scaling was Thurstone’s (1927) influential article “A Law of Comparative Judgment.” One of the few “laws” in psychology, this was Thurstone’s proudest achievement (Nunnally, 1978, pp. 60–61). Of course, he had many achievements from which to choose. Thurstone’s adaptations of scaling methods for use in psychophysical research and the study of attitudes and values have served as models for generations of researchers (Bock & Jones, 1968). He is also widely considered to be one of the primary architects of modern factor analysis. © George Skadding/Time LIFE Pictures Collection/Getty Images

Types of scales

In common parlance, scales are instruments used to measure something, such as weight. In psychometrics, scales may also be conceived of as instruments used to measure. Here, however, that something being measured is likely to be a trait, a state, or an ability. When we think of types of scales, we think of the different ways that scales can be categorized. In Chapter 3, for example, we saw that scales can be meaningfully categorized along a continuum of level of measurement and be referred to as nominal, ordinal, interval, or ratio. But we might also characterize scales in other ways.

If the testtaker’s test performance as a function of age is of critical interest, then the test might be referred to as an age-based scale. If the testtaker’s test performance as a function of grade is of critical interest, then the test might be referred to as a grade-based scale. If all raw scores on the test are to be transformed into scores that can range from 1 to 9, then the test might be referred to as a stanine scale. A scale might be described in still other ways. For example, it may be categorized as unidimensional as opposed to multidimensional. It may be categorized as comparative as opposed to categorical. This is just a sampling of the various ways in which scales can be categorized.

Given that scales can be categorized in many different ways, it would be reasonable to assume that there are many different methods of scaling. Indeed, there are; there is no one method of scaling. There is no best type of scale. Test developers scale a test in the manner they believe is optimally suited to their conception of the measurement of the trait (or whatever) that is being measured.

Scaling methods

Generally speaking, a testtaker is presumed to have more or less of the characteristic measured by a (valid) test as a function of the test score. The higher or lower the score, the more or less of the characteristic the testtaker presumably possesses. But how are numbers assigned to responses so that a test score can be calculated? This is done through scaling the test items, using any one of several available methods.

For example, consider a moral-issues opinion measure called the Morally Debatable Behaviors Scale–Revised (MDBS-R; Katz et al., 1994). Developed to be “a practical means of assessing what people believe, the strength of their convictions, as well as individual differences in moral tolerance” (p. 15), the MDBS-R contains 30 items. Each item contains a brief description of a moral issue or behavior on which testtakers express their opinion by means of a 10-point scale that ranges from “never justified” to “always justified.” Here is a sample.

Cheating on taxes if you have a chance is:

1    2    3    4    5    6    7    8    9    10
never justified                     always justified

The MDBS-R is an example of a rating scale, which can be defined as a grouping of words, statements, or symbols on which judgments of the strength of a particular trait, attitude, or emotion are indicated by the testtaker. Rating scales can be used to record judgments of oneself, others, experiences, or objects, and they can take several forms (Figure 8–3).

Figure 8–3 The Many Faces of Rating Scales Rating scales can take many forms. “Smiley” faces, such as those illustrated here as Item A, have been used in social-psychological research with young children and adults with limited language skills. The faces are used in lieu of words such as positive, neutral, and negative.

On the MDBS-R, the ratings that the testtaker makes for each of the 30 test items are added together to obtain a final score. Scores range from a low of 30 (if the testtaker indicates that all 30 behaviors are never justified) to a high of 300 (if the testtaker indicates that all 30 situations are always justified). Because the final test score is obtained by summing the ratings across all the items, it is termed a summative scale.
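The arithmetic of summative scoring is simple enough to sketch directly. The function and response vectors below are illustrative only; they do not reproduce actual MDBS-R materials.

```python
# Minimal sketch of summative scoring for a 30-item, 1-to-10 rating
# scale like the MDBS-R. The response vectors are invented.

def summative_score(ratings, low=1, high=10):
    """Sum the item ratings after checking each falls on the scale."""
    assert all(low <= r <= high for r in ratings), "rating out of range"
    return sum(ratings)

print(summative_score([1] * 30))   # all "never justified"  -> 30
print(summative_score([10] * 30))  # all "always justified" -> 300
```

The two calls reproduce the floor and ceiling described in the text: a testtaker who answers “never justified” throughout scores 30, and one who answers “always justified” throughout scores 300.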

One type of summative rating scale, the Likert scale (Likert, 1932), is used extensively in psychology, usually to scale attitudes. Likert scales are relatively easy to construct. Each item presents the testtaker with five alternative responses (sometimes seven), usually on an agree–disagree or approve–disapprove continuum. If Katz et al. had used a Likert scale, an item on their test might have looked like this:

Cheating on taxes if you have a chance.

This is (check one):

_____ never justified

_____ rarely justified

_____ sometimes justified

_____ usually justified

_____ always justified

Likert scales are usually reliable, which may account for their widespread popularity. Likert (1932) experimented with different weightings of the five categories but concluded that assigning weights of 1 (for endorsement of items at one extreme) through 5 (for endorsement of items at the other extreme) generally worked best.

JUST THINK . . .

In your opinion, which version of the Morally Debatable Behaviors Scale is optimal?

The use of rating scales of any type results in ordinal-level data. With reference to the Likert scale item, for example, if the response never justified is assigned the value 1, rarely justified the value 2, and so on, then a higher score indicates greater permissiveness with regard to cheating on taxes. Respondents could even be ranked with regard to such permissiveness. However, the difference in permissiveness between the opinions of a pair of people who scored 2 and 3 on this scale is not necessarily the same as the difference between the opinions of a pair of people who scored 3 and 4.

Rating scales differ in the number of dimensions underlying the ratings being made. Some rating scales are unidimensional, meaning that only one dimension is presumed to underlie the ratings. Other rating scales are multidimensional, meaning that more than one dimension is thought to guide the testtaker’s responses. Consider in this context an item from the MDBS-R regarding marijuana use. Responses to this item, particularly responses in the low to middle range, may be interpreted in many different ways. Such responses may reflect the view (a) that people should not engage in illegal activities, (b) that people should not take risks with their health, or (c) that people should avoid activities that could lead to contact with a bad crowd. Responses to this item may also reflect other attitudes and beliefs, including those related to documented benefits of marijuana use, as well as new legislation and regulations. When more than one dimension is tapped by an item, multidimensional scaling techniques are used to identify the dimensions.

Another scaling method that produces ordinal data is the method of paired comparisons. Testtakers are presented with pairs of stimuli (two photographs, two objects, two statements), which they are asked to compare. They must select one of the stimuli according to some rule; for example, the rule that they agree more with one statement than the other, or the rule that they find one stimulus more appealing than the other. Had Katz et al. used the method of paired comparisons, an item on their scale might have looked like the one that follows.

Select the behavior that you think would be more justified:

a. cheating on taxes if one has a chance

b. accepting a bribe in the course of one’s duties


For each pair of options, testtakers receive a higher score for selecting the option deemed more justifiable by the majority of a group of judges. The judges would have been asked to rate the pairs of options before the distribution of the test, and a list of the options selected by the judges would be provided along with the scoring instructions as an answer key. The test score would reflect the number of times the choices of a testtaker agreed with those of the judges. If we use Katz et al.’s (1994) standardization sample as the judges, then the more justifiable option is cheating on taxes. A testtaker might receive a point toward the total score for selecting option “a” but no points for selecting option “b.” An advantage of the method of paired comparisons is that it forces testtakers to choose between items.
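The scoring rule just described (one point per agreement with the judges' keyed option) can be sketched in a few lines. The item numbers, answer key, and responses below are all hypothetical, not from the MDBS-R.

```python
# Toy sketch of paired-comparisons scoring: one point each time the
# testtaker's choice matches the option the judges deemed more
# justifiable. Key and responses are invented for illustration.

answer_key = {1: "a", 2: "b", 3: "a"}   # option keyed by the judges
responses  = {1: "a", 2: "a", 3: "a"}   # one testtaker's choices

score = sum(1 for item, choice in responses.items()
            if choice == answer_key[item])
# This testtaker agrees with the judges on items 1 and 3, so score == 2.
```

Note that the total is entirely relative to the judges' keying: a different panel of judges could key different options and assign the same responses a different score.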

JUST THINK . . .

Under what circumstance might it be advantageous for tests to contain items presented as a sorting task?

Sorting tasks are another way that ordinal information may be developed and scaled. Here, stimuli such as printed cards, drawings, photographs, or other objects are typically presented to testtakers for evaluation. One method of sorting, comparative scaling , entails judgments of a stimulus in comparison with every other stimulus on the scale. A version of the MDBS-R that employs comparative scaling might feature 30 items, each printed on a separate index card. Testtakers would be asked to sort the cards from most justifiable to least justifiable. Comparative scaling could also be accomplished by providing testtakers with a list of 30 items on a sheet of paper and asking them to rank the justifiability of the items from 1 to 30.

Another scaling system that relies on sorting is categorical scaling . Stimuli are placed into one of two or more alternative categories that differ quantitatively with respect to some continuum. In our running MDBS-R example, testtakers might be given 30 index cards, on each of which is printed one of the 30 items. Testtakers would be asked to sort the cards into three piles: those behaviors that are never justified, those that are sometimes justified, and those that are always justified.
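A categorical sort like the one just described maps each card to an ordinal code for its pile. The sketch below assumes hypothetical item texts and a three-pile sort; it is one plausible way to record such a sort, not a prescribed MDBS-R procedure.

```python
# Hypothetical card sort: each pile differs quantitatively along the
# continuum of justifiability.
piles = {
    "never justified": ["accepting a bribe"],
    "sometimes justified": ["cheating on taxes"],
    "always justified": [],
}

# Ordinal codes for the three categories.
codes = {"never justified": 1, "sometimes justified": 2, "always justified": 3}

# Map each sorted item to the code of the pile it was placed in.
item_scores = {
    item: codes[pile]
    for pile, items in piles.items()
    for item in items
}

print(item_scores["cheating on taxes"])  # 2 (sometimes justified)
```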

A Guttman scale (Guttman, 1944a,b, 1947) is yet another scaling method that yields ordinal-level measures. Items on it range sequentially from weaker to stronger expressions of the attitude, belief, or feeling being measured. A feature of Guttman scales is that all respondents who agree with the stronger statements of the attitude will also agree with milder statements. Using the MDBS-R scale as an example, consider the following statements that reflect attitudes toward suicide.

Do you agree or disagree with each of the following:

a. All people should have the right to decide whether they wish to end their lives.

b. People who are terminally ill and in pain should have the option to have a doctor assist them in ending their lives.

c. People should have the option to sign away the use of artificial life-support equipment before they become seriously ill.

d. People have the right to a comfortable life.

If this were a perfect Guttman scale, then all respondents who agree with “a” (the most extreme position) should also agree with “b,” “c,” and “d.” All respondents who disagree with “a” but agree with “b” should also agree with “c” and “d,” and so forth. Guttman scales are developed through the administration of a number of items to a target group. The resulting data are then analyzed by means of scalogram analysis , an item-analysis procedure and approach to test development that involves a graphic mapping of a testtaker’s responses. The objective for the developer of a measure of attitudes is to obtain an arrangement of items wherein endorsement of one item automatically connotes endorsement of less extreme positions. It is not always possible to do this. Beyond the measurement of attitudes, Guttman scaling or scalogram analysis (the two terms are used synonymously) appeals to test developers in consumer psychology, where an objective may be to learn if a consumer who will purchase one product will purchase another product.
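The defining property of a perfect Guttman pattern can be checked mechanically: with items ordered from most extreme to mildest, agreement with a stronger statement must be accompanied by agreement with every milder one. The sketch below is a simplified consistency check in that spirit (a full scalogram analysis would tabulate such patterns across all respondents).

```python
def is_guttman_consistent(responses):
    """Check one respondent's pattern against a perfect Guttman scale.

    responses: sequence of 1 (agree) / 0 (disagree), ordered from the
    most extreme item (e.g., item "a") to the mildest (e.g., item "d").
    Consistent iff no agreement with a stronger item is followed by
    disagreement with a milder one.
    """
    agreed_with_stronger = False
    for r in responses:
        if agreed_with_stronger and r == 0:
            return False  # endorsed a stronger item but rejected a milder one
        if r == 1:
            agreed_with_stronger = True
    return True

# Agrees with "b," "c," and "d" but not "a": a valid Guttman pattern.
print(is_guttman_consistent([0, 1, 1, 1]))  # True

# Agrees with extreme "a" but rejects milder "b": violates the scale.
print(is_guttman_consistent([1, 0, 1, 1]))  # False
```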

All the foregoing methods yield ordinal data. The method of equal-appearing intervals, first described by Thurstone (1929), is one scaling method used to obtain data that are presumed to be interval in nature. Again using the example of attitudes about the justifiability of suicide, let’s outline the steps that would be involved in creating a scale using Thurstone’s equal-appearing intervals method.

1. A reasonably large number of statements reflecting positive and negative attitudes toward suicide are collected, such as Life is sacred, so people should never take their own lives and A person in a great deal of physical or emotional pain may rationally decide that suicide is the best available option.

2. Judges (or experts in some cases) evaluate each statement in terms of how strongly it indicates that suicide is justified. Each judge is instructed to rate each statement on a scale as if the scale were interval in nature. For example, the scale might range from 1 (the statement indicates that suicide is never justified) to 9 (the statement indicates that suicide is always justified). Judges are instructed that the 1-to-9 scale is being used as if there were an equal distance between each of the values—that is, as if it were an interval scale. Judges are cautioned to focus their ratings on the statements, not on their own views on the matter.

3. A mean and a standard deviation of the judges’ ratings are calculated for each statement. For example, if fifteen judges rated 100 statements on a scale from 1 to 9 then, for each of these 100 statements, the fifteen judges’ ratings would be averaged. Suppose five of the judges rated a particular item as a 1, five other judges rated it as a 2, and the remaining five judges rated it as a 3. The average rating would be 2 (with a standard deviation of 0.816).

4. Items are selected for inclusion in the final scale based on several criteria, including (a) the degree to which the item contributes to a comprehensive measurement of the variable in question and (b) the test developer’s degree of confidence that the items have indeed been sorted into equal intervals. Item means and standard deviations are also considered. Items should represent a wide range of attitudes reflected in a variety of ways. A low standard deviation is indicative of a good item; the judges agreed about the meaning of the item with respect to its reflection of attitudes toward suicide.

5. The scale is now ready for administration. The way the scale is used depends on the objectives of the test situation. Typically, respondents are asked to select those statements that most accurately reflect their own attitudes. The values of the items that the respondent selects (based on the judges’ ratings) are averaged, producing a score on the test.
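Steps 3 through 5 above can be sketched numerically. The statements and ratings below are invented for illustration (the first statement's ratings reproduce the worked example in step 3: five 1s, five 2s, and five 3s yield a mean of 2 and a standard deviation of about 0.816).

```python
from statistics import mean, pstdev

# Hypothetical judges' ratings (1 = never justified ... 9 = always justified).
judge_ratings = {
    "Life is sacred, so people should never take their own lives":
        [1] * 5 + [2] * 5 + [3] * 5,
    "A person in great pain may rationally decide suicide is the best option":
        [7, 8, 8, 9, 8, 7, 9, 8, 8, 7, 8, 9, 8, 7, 8],
}

# Step 3: mean (scale value) and standard deviation for each statement.
scale_values = {s: mean(r) for s, r in judge_ratings.items()}
spreads = {s: pstdev(r) for s, r in judge_ratings.items()}

# Step 4: a low spread indicates the judges agreed on the item's meaning,
# so low-SD items spanning the full range of scale values are retained.

# Step 5: a respondent's score is the mean scale value of the statements
# they endorse as reflecting their own attitude.
endorsed = [
    "A person in great pain may rationally decide suicide is the best option",
]
respondent_score = mean(scale_values[s] for s in endorsed)
```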

The method of equal-appearing intervals is an example of a scaling method of the direct estimation variety. In contrast to other methods that involve indirect estimation, there is no need to transform the testtaker’s responses into some other scale.

The particular scaling method employed in the development of a new test depends on many factors, including the variables being measured, the group for whom the test is intended (children may require a less complicated scaling method than adults, for example), and the preferences of the test developer.
