
Chapter 8

Test Development

All tests are not created equal. The creation of a good test is not a matter of chance. It is the product of the thoughtful and sound application of established principles of test development. In this context, test development is an umbrella term for all that goes into the process of creating a test.

In this chapter, we introduce the basics of test development and examine in detail the processes by which tests are assembled. We explore, for example, ways that test items are written, and ultimately selected for use. Although we focus on tests of the published, standardized variety, much of what we have to say also applies to custom-made tests such as those created by teachers, researchers, and employers.

The process of developing a test occurs in five stages:

1. test conceptualization;

2. test construction;

3. test tryout;

4. item analysis;

5. test revision.

Once the idea for a test is conceived (test conceptualization), test construction begins. As we are using this term, test construction is a stage in the process of test development that entails writing test items (or rewriting or revising existing items), as well as formatting items, setting scoring rules, and otherwise designing and building a test. Once a preliminary form of the test has been developed, it is administered to a representative sample of testtakers under conditions that simulate those under which the final version of the test will be administered (test tryout). The data from the tryout are collected, and testtakers’ performance on the test as a whole and on each item is analyzed. Statistical procedures, referred to as item analysis, are employed to assist in making judgments about which items are good as they are, which items need to be revised, and which items should be discarded. The analysis of the test’s items may include analyses of item reliability, item validity, and item discrimination. Depending on the type of test, item-difficulty level may be analyzed as well.
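To make these item statistics concrete, here is a minimal sketch, in Python, of how item difficulty and item discrimination might be computed from tryout data. The 0/1 response matrix is invented, and the point-biserial correlation used here is only one of several discrimination indices in common use.

    import numpy as np

    # Hypothetical tryout data: rows are testtakers, columns are items;
    # 1 = correct response, 0 = incorrect response.
    responses = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0],
        [0, 0, 1, 1],
    ])

    total_scores = responses.sum(axis=1)

    # Item-difficulty index: the proportion of testtakers answering correctly.
    difficulty = responses.mean(axis=0)

    # Item discrimination: point-biserial correlation between each item
    # (0/1) and the total score. (A common refinement excludes the item
    # itself from the total before correlating.)
    discrimination = [np.corrcoef(responses[:, j], total_scores)[0, 1]
                      for j in range(responses.shape[1])]

    for j, (p, r) in enumerate(zip(difficulty, discrimination), start=1):
        print(f"Item {j}: difficulty p = {p:.2f}, discrimination r = {r:.2f}")

Items with extreme difficulty values or with discrimination values near zero would be candidates for revision or elimination in the norm-referenced case.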

JUST THINK . . .

Can you think of a classic psychological test from the past that has never undergone test tryout, item analysis, or revision? What about so-called psychological tests found on the Internet?

Next in the sequence of events in test development is test revision. Here, test revision refers to action taken to modify a test’s content or format for the purpose of improving the test’s effectiveness as a tool of measurement. This action is usually based on item analyses, as well as related information derived from the test tryout. The revised version of the test will then be tried out on a new sample of testtakers. After the results are analyzed, the test will be further revised if necessary—and so it goes (see Figure 8–1). Although the test development process described is fairly typical today, let’s note that there are many exceptions to it, both with regard to tests developed in the past and to some contemporary tests. Some tests are conceived of and constructed but never tried out, item-analyzed, or revised.

Figure 8–1 The Test Development Process

Test Conceptualization

The beginnings of any published test can probably be traced to thoughts—self-talk, in behavioral terms. The test developer says to himself or herself something like: “There ought to be a test designed to measure [fill in the blank] in [such and such] way.” The stimulus for such a thought could be almost anything. A review of the available literature on existing tests designed to measure a particular construct might indicate that such tests leave much to be desired in psychometric soundness. An emerging social phenomenon or pattern of behavior might serve as the stimulus for the development of a new test. The analogy with medicine is straightforward: Once a new disease comes to the attention of medical researchers, they attempt to develop diagnostic tests to assess its presence or absence as well as the severity of its manifestations in the body.

The development of a new test may be in response to a need to assess mastery in an emerging occupation or profession. For example, new tests may be developed to assess mastery in fields such as high-definition electronics, environmental engineering, and wireless communications.

In recent years, measurement interest related to aspects of the LGBT (lesbian, gay, bisexual, and transgender) experience has increased. The present authors propose that in the interest of comprehensive inclusion, an “A” should be added to the end of “LGBT” so that this term is routinely abbreviated as “LGBTA.” The additional “A” would acknowledge the existence of asexuality as a sexual orientation or preference.

JUST THINK . . .

What is a “hot topic” today that developers of psychological tests should be working on? What aspects of this topic might be explored by means of a psychological test?

Asexuality may be defined as a sexual orientation characterized by a long-term lack of interest in a sexual relationship with anyone or anything. Given that some research is conducted with persons claiming to be asexual, and given that asexual individuals must be selected-in or selected-out to participate in such research, Yule et al. (2015) perceived a need for a reliable and valid test to measure asexuality. Read about their efforts to develop and validate their rather novel test in this chapter’s Close-Up.

CLOSE-UP

Creating and Validating a Test of Asexuality*

In general, and with some variation according to the source, human asexuality may be defined as an absence of sexual attraction to anyone at all. Estimates suggest that approximately 1% of the population might be asexual (Bogaert, 2004). Although the concept of asexuality was first introduced by Alfred Kinsey in 1948, it is only in the past decade that it has received any substantial academic attention. Scholars are grappling with how best to conceptualize asexuality. For some, asexuality is a sexual orientation in its own right (Berkey et al., 1990; Bogaert, 2004; Brotto & Yule, 2011; Brotto et al., 2010; Storms, 1978; Yule et al., 2014). Others view asexuality more as a mental health issue, a paraphilia, or a human sexual dysfunction (see Bogaert, 2012, 2015).

More research on human asexuality would be helpful. However, researchers who design projects to explore human asexuality face the challenge of finding qualified subjects. Perhaps the best source of asexual research subjects has been an online organization called “AVEN” (an acronym for the Asexual Visibility and Education Network). Located at asexuality.org, this organization had some 120,000 members at the time of this writing (in May 2016). But while the convenience of these group members as a recruitment source is obvious, there are also limitations inherent in exclusively recruiting research participants from a single online community. For example, asexual individuals who do not belong to AVEN are systematically excluded from such research. It may well be that those unaffiliated asexual individuals differ from AVEN members in significant ways. For example, these individuals may have lived their lives devoid of any sexual attraction, but have never construed themselves to be “asexual.” On the other hand, persons belonging to AVEN may be a unique group within the asexual population, as they have not only acknowledged their asexuality as an identity but actively sought out affiliation with other like-minded individuals. Clearly, an alternative recruitment procedure is needed. Simply relying on membership in AVEN as a credential of asexuality is flawed. What is needed is a validated measure to screen for human asexuality.

In response to this need for a test designed to screen for human asexuality, the Asexuality Identification Scale (AIS) was developed (Yule et al., 2015). The AIS is a 12-item, sex- and gender-neutral, self-report measure of asexuality. The AIS was developed in a series of stages. Stage 1 included development and administration of eight open-ended questions to sexual (n = 70) and asexual (n = 139) individuals. These subjects were selected for participation in the study through online channels (e.g., AVEN, Craigslist, and Facebook). Subjects responded in writing to a series of questions focused on definitions of asexuality, sexual attraction, sexual desire, and romantic attraction. There were no space limitations, and participants were encouraged to answer in as much or as little detail as they wished. Participant responses were examined to identify prevalent themes, and this information was used to generate 111 multiple-choice items. In Stage 2, these 111 items were administered to another group of asexual (n = 165) and sexual (n = 752) participants. Subjects in this phase of the test development process were selected for participation through a variety of online websites, and also through our university’s human subjects pool. The resulting data were then factor- and item-analyzed in order to determine which items should be retained. The decision to retain an item was made on the basis of our judgment as to which items best differentiated asexual from sexual participants. Thirty-seven items were selected based on the results of this item selection process. In Stage 3, these 37 items were administered to another group of asexual (n = 316) and sexual (n = 926) participants. Here, subjects were selected through the same means as in Stage 2, but also through websites that host psychological online studies. As in Stage 2, the items were analyzed for the purpose of selecting those items that best loaded on the asexual versus the sexual factors. Of the 37 original items subjected to item analysis, 12 items were retained, and 25 were discarded.

In order to determine construct validity, psychometric validation of the 12-item AIS was conducted using data from the same participants in Stage 3. Known-groups validity was established as the AIS total score showed excellent ability to distinguish between asexual and sexual subjects. Specifically, a cut-off score of 40/60 was found to identify 93% of self-identified asexual individuals, while excluding 95% of sexual individuals. In order to assess whether the measure was useful over and above already-available measures of sexual orientation, we compared the AIS to an adaptation of a previously established measure of sexual orientation (Klein Scale; Klein & Sepekoff, 1985). Incremental validity was established, as the AIS showed only moderate correlations with the Klein Scale, suggesting that the AIS is a better predictor of asexuality than an existing measure. To determine whether the AIS correlates with a construct that is thought to be highly related to asexuality (namely, lack of sexual desire), convergent validity was assessed by correlating total AIS scores with scores on the Sexual Desire Inventory (SDI; Spector et al., 1996). As we expected, the AIS correlated only weakly with the Solitary Desire subscale of the SDI, while the Dyadic Desire subscale of the SDI had a moderate negative correlation with the AIS. Finally, we conducted discriminant validity analyses by comparing the AIS with the Childhood Trauma Questionnaire (CTQ; Bernstein et al., 1994; Bernstein & Fink, 1998), the Short-Form Inventory of Interpersonal Problems-Circumplex scales (IIP-SC; Soldz et al., 1995), and the Big-Five Inventory (BFI; John et al., 1991; John et al., 2008; John & Srivastava, 1999) in order to determine whether the AIS was actually tapping into negative sexual experiences or personality traits. Discriminant validity was established, as the AIS was not significantly correlated with scores on the CTQ, IIP-SC, or the BFI.
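To illustrate the known-groups logic behind such a cutoff, sensitivity (members of the target group correctly identified) and specificity (members of the comparison group correctly excluded) can be computed directly from two groups’ total scores. The scores and the cutoff direction below are invented for illustration; they are not the AIS validation data.

    import numpy as np

    # Hypothetical total scores for two known groups (not the AIS data).
    asexual_scores = np.array([52, 47, 55, 44, 58, 49])
    sexual_scores = np.array([22, 31, 28, 35, 41, 26])

    cutoff = 40  # classify scores at or above the cutoff as "asexual"

    sensitivity = np.mean(asexual_scores >= cutoff)  # true positives
    specificity = np.mean(sexual_scores < cutoff)    # true negatives

    print(f"Sensitivity: {sensitivity:.0%}, Specificity: {specificity:.0%}")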

Sexual and asexual participants significantly differed in their AIS total scores with a large effect size. Further, the AIS passed tests of known-groups, incremental, convergent, and discriminant validity. This suggests that the AIS is a useful tool for identifying asexuality, and could be used in future research to identify individuals with a lack of sexual attraction. We believe that respondents need not be self-identified as asexual in order to be selected as asexual on the AIS. Research suggests that the AIS will identify as asexual the individual who exhibits characteristics of a lifelong lack of sexual attraction in the absence of personal distress. It is our hope that the AIS will allow for recruitment of more representative samples of the asexuality population, and contribute toward a growing body of research on this topic.

Used with permission of Morag A. Yule and Lori A. Brotto.

* This Close-Up was guest-authored by Morag A. Yule and Lori A. Brotto, both of the Department of Obstetrics & Gynaecology of the University of British Columbia.

Some Preliminary Questions

Regardless of the stimulus for developing the new test, a number of questions immediately confront the prospective test developer.

· What is the test designed to measure? This is a deceptively simple question. Its answer is closely linked to how the test developer defines the construct being measured and how that definition is the same as or different from other tests purporting to measure the same construct.

· What is the objective of the test? In the service of what goal will the test be employed? In what way or ways is the objective of this test the same as or different from other tests with similar goals? What real-world behaviors would be anticipated to correlate with testtaker responses?

· Is there a need for this test? Are there any other tests purporting to measure the same thing? In what ways will the new test be better than or different from existing ones? Will there be more compelling evidence for its reliability or validity? Will it be more comprehensive? Will it take less time to administer? In what ways would this test not be better than existing tests?

· Who will use this test? Clinicians? Educators? Others? For what purpose or purposes would this test be used?

· Who will take this test? Who is this test for? Who needs to take it? Who would find it desirable to take it? For what age range of testtakers is the test designed? What reading level is required of a testtaker? What cultural factors might affect testtaker response?

· What content will the test cover? Why should it cover this content? Is this coverage different from the content coverage of existing tests with the same or similar objectives? How and why is the content area different? To what extent is this content culture-specific?

· How will the test be administered? Individually or in groups? Is it amenable to both group and individual administration? What differences will exist between individual and group administrations of this test? Will the test be designed for or amenable to computer administration? How might differences between versions of the test be reflected in test scores?

· What is the ideal format of the test? Should it be true–false, essay, multiple-choice, or in some other format? Why is the format selected for this test the best format?

· Should more than one form of the test be developed? On the basis of a cost–benefit analysis, should alternate or parallel forms of this test be created?

· What special training will be required of test users for administering or interpreting the test? What background and qualifications will a prospective user of data derived from an administration of this test need to have? What restrictions, if any, should be placed on distributors of the test and on the test’s usage?

· What types of responses will be required of testtakers? What kind of disability might preclude someone from being able to take this test? What adaptations or accommodations are recommended for persons with disabilities?

· Who benefits from an administration of this test? What would the testtaker learn, or how might the testtaker benefit, from an administration of this test? What would the test user learn, or how might the test user benefit? What social benefit, if any, derives from an administration of this test?

· Is there any potential for harm as the result of an administration of this test? What safeguards are built into the recommended testing procedure to prevent any sort of harm to any of the parties involved in the use of this test?

· How will meaning be attributed to scores on this test? Will a testtaker’s score be compared to those of others taking the test at the same time? To those of others in a criterion group? Will the test evaluate mastery of a particular content area?

This last question provides a point of departure for elaborating on issues related to test development with regard to norm- versus criterion-referenced tests.

Norm-referenced versus criterion-referenced tests: Item development issues

Different approaches to test development and individual item analyses are necessary, depending upon whether the finished test is designed to be norm-referenced or criterion-referenced. Generally speaking, for example, a good item on a norm-referenced achievement test is an item for which high scorers on the test respond correctly. Low scorers on the test tend to respond to that same item incorrectly. On a criterion-oriented test, this same pattern of results may occur: High scorers on the test get a particular item right whereas low scorers on the test get that same item wrong. However, that is not what makes an item good or acceptable from a criterion-oriented perspective. Ideally, each item on a criterion-oriented test addresses the issue of whether the testtaker—a would-be physician, engineer, piano student, or whoever—has met certain criteria. In short, when it comes to criterion-oriented assessment, being “first in the class” does not count and is often irrelevant. Although we can envision exceptions to this general rule, norm-referenced comparisons typically are insufficient and inappropriate when knowledge of mastery is what the test user requires.

Criterion-referenced testing and assessment are commonly employed in licensing contexts, be it a license to practice medicine or to drive a car. Criterion-referenced approaches are also employed in educational contexts in which mastery of particular material must be demonstrated before the student moves on to advanced material that conceptually builds on the existing base of knowledge, skills, or both.

In contrast to techniques and principles applicable to the development of norm-referenced tests (many of which are discussed in this chapter), the development of criterion-referenced instruments derives from a conceptualization of the knowledge or skills to be mastered. For purposes of assessment, the required cognitive or motor skills may be broken down into component parts. The test developer may attempt to sample criterion-related knowledge with regard to general principles relevant to the criterion being assessed. Experimentation with different items, tests, formats, or measurement procedures will help the test developer discover the best measure of mastery for the targeted skills or knowledge.

JUST THINK . . .

Suppose you were charged with developing a criterion-referenced test to measure mastery of Chapter 8 of this book. Explain, in as much detail as you think sufficient, how you would go about doing that. It’s OK to read on before answering (in fact, you are encouraged to do so).

In general, the development of a criterion-referenced test or assessment procedure may entail exploratory work with at least two groups of testtakers: one group known to have mastered the knowledge or skill being measured and another group known not to have mastered such knowledge or skill. For example, during the development of a criterion-referenced written test for a driver’s license, a preliminary version of the test may be administered to one group of people who have been driving about 15,000 miles per year for 10 years and who have perfect safety records (no accidents and no moving violations). The second group of testtakers might be a group of adults matched in demographic and related respects to the first group but who have never had any instruction in driving or driving experience. The items that best discriminate between these two groups would be considered “good” items. The preliminary exploratory experimentation done in test development need not have anything at all to do with flying, but you wouldn’t know that from its name . . .
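Before moving on, here is a minimal sketch of the two-group item selection just described: for each item, the difference between the proportions answering correctly in the mastery and non-mastery groups serves as a discrimination index. The response data and the retention threshold are invented.

    import numpy as np

    # Hypothetical 0/1 responses: rows are testtakers, columns are items.
    masters = np.array([[1, 1, 1, 0],
                        [1, 1, 0, 1],
                        [1, 1, 1, 0]])
    non_masters = np.array([[0, 1, 0, 0],
                            [1, 0, 0, 1],
                            [0, 1, 0, 0]])

    # Discrimination index D: difference in proportion correct between groups.
    D = masters.mean(axis=0) - non_masters.mean(axis=0)

    threshold = 0.30  # an arbitrary illustrative cutoff for a "good" item
    for j, d in enumerate(D, start=1):
        verdict = "good" if d >= threshold else "review or discard"
        print(f"Item {j}: D = {d:+.2f} -> {verdict}")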

Pilot Work

In the context of test development, terms such as pilot work, pilot study, and pilot research refer, in general, to the preliminary research surrounding the creation of a prototype of the test. Test items may be pilot studied (or piloted) to evaluate whether they should be included in the final form of the instrument. In developing a structured interview to measure introversion/extraversion, for example, pilot research may involve open-ended interviews with research subjects believed for some reason (perhaps on the basis of an existing test) to be introverted or extraverted. Additionally, interviews with parents, teachers, friends, and others who know the subject might also be arranged. Another type of pilot study might involve physiological monitoring of the subjects (such as monitoring of heart rate) as a function of exposure to different types of stimuli.

In pilot work, the test developer typically attempts to determine how best to measure a targeted construct. The process may entail literature reviews and experimentation as well as the creation, revision, and deletion of preliminary test items. After pilot work comes the process of test construction. Keep in mind, however, that depending on the nature of the test, as well as the nature of the changing responses to it by testtakers, test users, and the community at large, the need for further pilot research and test revision is always a possibility.

Pilot work is a necessity when constructing tests or other measuring instruments for publication and wide distribution. Of course, pilot work need not be part of the process of developing teacher-made tests for classroom use. Let’s take a moment at this juncture to discuss selected aspects of the process of developing tests not for use on the world stage, but rather to measure achievement in a class.

Test Construction

Scaling

We have previously defined measurement as the assignment of numbers according to rules. Scaling may be defined as the process of setting rules for assigning numbers in measurement. Stated another way, scaling is the process by which a measuring device is designed and calibrated and by which numbers (or other indices)—scale values—are assigned to different amounts of the trait, attribute, or characteristic being measured.

Historically, the prolific L. L. Thurstone (Figure 8–2) is credited for being at the forefront of efforts to develop methodologically sound scaling methods. He adapted psychophysical scaling methods to the study of psychological variables such as attitudes and values (Thurstone, 1959; Thurstone & Chave, 1929). Thurstone’s (1925) article entitled “A Method of Scaling Psychological and Educational Tests” introduced, among other things, the notion of absolute scaling—a procedure for obtaining a measure of item difficulty across samples of testtakers who vary in ability.

Figure 8–2 L. L. Thurstone (1887–1955) Among his many achievements in the area of scaling was Thurstone’s (1927) influential article “A Law of Comparative Judgment.” One of the few “laws” in psychology, this was Thurstone’s proudest achievement (Nunnally, 1978, pp. 60–61). Of course, he had many achievements from which to choose. Thurstone’s adaptations of scaling methods for use in psychophysiological research and the study of attitudes and values have served as models for generations of researchers (Bock & Jones, 1968). He is also widely considered to be one of the primary architects of modern factor analysis. (© George Skadding/Time LIFE Pictures Collection/Getty Images)

Types of scales

In common parlance, scales are instruments used to measure something, such as weight. In psychometrics, scales may also be conceived of as instruments used to measure. Here, however, the “something” being measured is likely to be a trait, a state, or an ability. When we think of types of scales, we think of the different ways that scales can be categorized. In Chapter 3, for example, we saw that scales can be meaningfully categorized along a continuum of level of measurement and be referred to as nominal, ordinal, interval, or ratio. But we might also characterize scales in other ways.

If the testtaker’s test performance as a function of age is of critical interest, then the test might be referred to as an age-based scale. If the testtaker’s test performance as a function of grade is of critical interest, then the test might be referred to as a grade-based scale. If all raw scores on the test are to be transformed into scores that can range from 1 to 9, then the test might be referred to as a stanine scale. A scale might be described in still other ways. For example, it may be categorized as unidimensional as opposed to multidimensional. It may be categorized as comparative as opposed to categorical. This is just a sampling of the various ways in which scales can be categorized.

Given that scales can be categorized in many different ways, it would be reasonable to assume that there are many different methods of scaling. Indeed, there are; there is no one method of scaling. There is no best type of scale. Test developers scale a test in the manner they believe is optimally suited to their conception of the measurement of the trait (or whatever) that is being measured.

Scaling methods

Generally speaking, a testtaker is presumed to have more or less of the characteristic measured by a (valid) test as a function of the test score. The higher or lower the score, the more or less of the characteristic the testtaker presumably possesses. But how are numbers assigned to responses so that a test score can be calculated? This is done through scaling the test items, using any one of several available methods.

For example, consider a moral-issues opinion measure called the Morally Debatable Behaviors Scale–Revised (MDBS-R; Katz et al., 1994). Developed to be “a practical means of assessing what people believe, the strength of their convictions, as well as individual differences in moral tolerance” (p. 15), the MDBS-R contains 30 items. Each item contains a brief description of a moral issue or behavior on which testtakers express their opinion by means of a 10-point scale that ranges from “never justified” to “always justified.” Here is a sample.

Cheating on taxes if you have a chance is:

1      2      3      4      5      6      7      8      9      10
never justified                                      always justified

The MDBS-R is an example of a rating scale, which can be defined as a grouping of words, statements, or symbols on which judgments of the strength of a particular trait, attitude, or emotion are indicated by the testtaker. Rating scales can be used to record judgments of oneself, others, experiences, or objects, and they can take several forms (Figure 8–3).

Figure 8–3 The Many Faces of Rating Scales Rating scales can take many forms. “Smiley” faces, such as those illustrated here as Item A, have been used in social-psychological research with young children and adults with limited language skills. The faces are used in lieu of words such as positive, neutral, and negative.

On the MDBS-R, the ratings that the testtaker makes for each of the 30 test items are added together to obtain a final score. Scores range from a low of 30 (if the testtaker indicates that all 30 behaviors are never justified) to a high of 300 (if the testtaker indicates that all 30 situations are always justified). Because the final test score is obtained by summing the ratings across all the items, it is termed a summative scale.

One type of summative rating scale, the Likert scale (Likert, 1932), is used extensively in psychology, usually to scale attitudes. Likert scales are relatively easy to construct. Each item presents the testtaker with five alternative responses (sometimes seven), usually on an agree–disagree or approve–disapprove continuum. If Katz et al. had used a Likert scale, an item on their test might have looked like this:

Cheating on taxes if you have a chance.

This is (check one):

_____          _____          _____          _____          _____
never          rarely         sometimes      usually        always
justified      justified      justified      justified      justified

Likert scales are usually reliable, which may account for their widespread popularity. Likert (1932) experimented with different weightings of the five categories but concluded that assigning weights of 1 (for endorsement of items at one extreme) through 5 (for endorsement of items at the other extreme) generally worked best.
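As a brief illustration of this 1-through-5 weighting and of summative scoring generally, consider the following sketch; the items and the responses are invented.

    # Weights for the five response alternatives, per Likert (1932).
    weights = {"never justified": 1, "rarely justified": 2,
               "sometimes justified": 3, "usually justified": 4,
               "always justified": 5}

    # One hypothetical testtaker's responses to three Likert-format items.
    responses = ["rarely justified", "never justified", "sometimes justified"]

    # Summative scoring: the scale score is simply the sum of item weights.
    score = sum(weights[r] for r in responses)
    print(score)  # 2 + 1 + 3 = 6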

JUST THINK . . .

In your opinion, which version of the Morally Debatable Behaviors Scale is optimal?

The use of rating scales of any type results in ordinal-level data. With reference to the Likert scale item, for example, if the response never justified is assigned the value 1, rarely justified the value 2, and so on, then a higher score indicates greater permissiveness with regard to cheating on taxes. Respondents could even be ranked with regard to such permissiveness. However, the difference in permissiveness between the opinions of a pair of people who scored 2 and 3 on this scale is not necessarily the same as the difference between the opinions of a pair of people who scored 3 and 4.

Rating scales differ in the number of dimensions underlying the ratings being made. Some rating scales are unidimensional, meaning that only one dimension is presumed to underlie the ratings. Other rating scales are multidimensional, meaning that more than one dimension is thought to guide the testtaker’s responses. Consider in this context an item from the MDBS-R regarding marijuana use. Responses to this item, particularly responses in the low to middle range, may be interpreted in many different ways. Such responses may reflect the view (a) that people should not engage in illegal activities, (b) that people should not take risks with their health, or (c) that people should avoid activities that could lead to contact with a bad crowd. Responses to this item may also reflect other attitudes and beliefs, including those related to documented benefits of marijuana use, as well as new legislation and regulations. When more than one dimension is tapped by an item, multidimensional scaling techniques are used to identify the dimensions.
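By way of one concrete (and entirely hypothetical) illustration, multidimensional scaling software can project a matrix of dissimilarities among items into a small number of dimensions for inspection. The sketch below uses scikit-learn’s MDS estimator with invented dissimilarity values; nothing about it is specific to the MDBS-R.

    import numpy as np
    from sklearn.manifold import MDS

    # Hypothetical dissimilarities among four attitude items
    # (0 = identical, larger = more dissimilar); the matrix is symmetric.
    dissim = np.array([[0.0, 0.3, 0.8, 0.7],
                       [0.3, 0.0, 0.7, 0.8],
                       [0.8, 0.7, 0.0, 0.2],
                       [0.7, 0.8, 0.2, 0.0]])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)
    print(coords)  # item coordinates on the two recovered dimensions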

Another scaling method that produces ordinal data is the method of paired comparisons. Testtakers are presented with pairs of stimuli (two photographs, two objects, two statements), which they are asked to compare. They must select one of the stimuli according to some rule; for example, the rule that they agree more with one statement than the other, or the rule that they find one stimulus more appealing than the other. Had Katz et al. used the method of paired comparisons, an item on their scale might have looked like the one that follows.

Select the behavior that you think would be more justified:

a. cheating on taxes if one has a chance

b. accepting a bribe in the course of one’s duties


For each pair of options, testtakers receive a higher score for selecting the option deemed more justifiable by the majority of a group of judges. The judges would have been asked to rate the pairs of options before the distribution of the test, and a list of the options selected by the judges would be provided along with the scoring instructions as an answer key. The test score would reflect the number of times the choices of a testtaker agreed with those of the judges. If we use Katz et al.’s (1994) standardization sample as the judges, then the more justifiable option is cheating on taxes. A testtaker might receive a point toward the total score for selecting option “a” but no points for selecting option “b.” An advantage of the method of paired comparisons is that it forces testtakers to choose between items.
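A minimal sketch of this scoring rule follows; the answer key and the testtaker’s choices are invented, not Katz et al.’s materials.

    # Judges' keyed choices for each pair of behaviors (hypothetical key).
    answer_key = {
        ("cheating on taxes", "accepting a bribe"): "cheating on taxes",
        ("jaywalking", "cheating on taxes"): "jaywalking",
    }

    # One hypothetical testtaker's selections for the same pairs.
    testtaker = {
        ("cheating on taxes", "accepting a bribe"): "cheating on taxes",
        ("jaywalking", "cheating on taxes"): "cheating on taxes",
    }

    # One point for each agreement with the judges' keyed choice.
    score = sum(testtaker[pair] == keyed for pair, keyed in answer_key.items())
    print(score)  # 1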

JUST THINK . . .

Under what circumstance might it be advantageous for tests to contain items presented as a sorting task?

Sorting tasks are another way that ordinal information may be developed and scaled. Here, stimuli such as printed cards, drawings, photographs, or other objects are typically presented to testtakers for evaluation. One method of sorting, comparative scaling, entails judgments of a stimulus in comparison with every other stimulus on the scale. A version of the MDBS-R that employs comparative scaling might feature 30 items, each printed on a separate index card. Testtakers would be asked to sort the cards from most justifiable to least justifiable. Comparative scaling could also be accomplished by providing testtakers with a list of 30 items on a sheet of paper and asking them to rank the justifiability of the items from 1 to 30.

Another scaling system that relies on sorting is categorical scaling. Stimuli are placed into one of two or more alternative categories that differ quantitatively with respect to some continuum. In our running MDBS-R example, testtakers might be given 30 index cards, on each of which is printed one of the 30 items. Testtakers would be asked to sort the cards into three piles: those behaviors that are never justified, those that are sometimes justified, and those that are always justified.

A Guttman scale (Guttman, 1944a,b, 1947) is yet another scaling method that yields ordinal-level measures. Items on it range sequentially from weaker to stronger expressions of the attitude, belief, or feeling being measured. A feature of Guttman scales is that all respondents who agree with the stronger statements of the attitude will also agree with milder statements. Using the MDBS-R scale as an example, consider the following statements that reflect attitudes toward suicide.

Do you agree or disagree with each of the following:

a. All people should have the right to decide whether they wish to end their lives.

b. People who are terminally ill and in pain should have the option to have a doctor assist them in ending their lives.

c. People should have the option to sign away the use of artificial life-support equipment before they become seriously ill.

d. People have the right to a comfortable life.

If this were a perfect Guttman scale, then all respondents who agree with “a” (the most extreme position) should also agree with “b,” “c,” and “d.” All respondents who disagree with “a” but agree with “b” should also agree with “c” and “d,” and so forth. Guttman scales are developed through the administration of a number of items to a target group. The resulting data are then analyzed by means of scalogram analysis, an item-analysis procedure and approach to test development that involves a graphic mapping of a testtaker’s responses. The objective for the developer of a measure of attitudes is to obtain an arrangement of items wherein endorsement of one item automatically connotes endorsement of less extreme positions. It is not always possible to do this. Beyond the measurement of attitudes, Guttman scaling or scalogram analysis (the two terms are used synonymously) appeals to test developers in consumer psychology, where an objective may be to learn if a consumer who will purchase one product will purchase another product.
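The cumulative pattern that scalogram analysis looks for can also be checked numerically. One standard index is Guttman’s coefficient of reproducibility: 1 minus the proportion of responses that depart from the ideal cumulative pattern. Here is a minimal sketch; the 0/1 endorsement data are invented.

    import numpy as np

    # Hypothetical endorsements (1 = agree) of items ordered from weakest
    # ("d") to strongest ("a"): columns are items d, c, b, a.
    responses = np.array([
        [1, 1, 1, 1],   # endorses all statements, including the strongest
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 1, 0],   # violates the cumulative (Guttman) pattern
    ])

    n_people, n_items = responses.shape
    errors = 0
    for row in responses:
        # The ideal Guttman pattern with the same total: all 1s come first.
        ideal = np.array([1] * row.sum() + [0] * (n_items - row.sum()))
        errors += np.sum(row != ideal)

    reproducibility = 1 - errors / (n_people * n_items)
    print(f"Coefficient of reproducibility: {reproducibility:.2f}")

Values near 1.00 (0.90 is a commonly cited benchmark) suggest that the items form an acceptable Guttman scale.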

All the foregoing methods yield ordinal data. The method of equal-appearing intervals, first described by Thurstone (1929), is one scaling method used to obtain data that are presumed to be interval in nature. Again using the example of attitudes about the justifiability of suicide, let’s outline the steps that would be involved in creating a scale using Thurstone’s equal-appearing intervals method.

1. A reasonably large number of statements reflecting positive and negative attitudes toward suicide are collected, such as Life is sacred, so people should never take their own lives and A person in a great deal of physical or emotional pain may rationally decide that suicide is the best available option.

2. Judges (or experts in some cases) evaluate each statement in terms of how strongly it indicates that suicide is justified. Each judge is instructed to rate each statement on a scale as if the scale were interval in nature. For example, the scale might range from 1 (the statement indicates that suicide is never justified) to 9 (the statement indicates that suicide is always justified). Judges are instructed that the 1-to-9 scale is being used as if there were an equal distance between each of the values—that is, as if it were an interval scale. Judges are cautioned to focus their ratings on the statements, not on their own views on the matter.

3. A mean and a standard deviation of the judges’ ratings are calculated for each statement. For example, if fifteen judges rated 100 statements on a scale from 1 to 9 then, for each of these 100 statements, the fifteen judges’ ratings would be averaged. Suppose five of the judges rated a particular item as a 1, five other judges rated it as a 2, and the remaining five judges rated it as a 3. The average rating would be 2 (with a standard deviation of 0.816).

4. Items are selected for inclusion in the final scale based on several criteria, including (a) the degree to which the item contributes to a comprehensive measurement of the variable in question and (b) the test developer’s degree of confidence that the items have indeed been sorted into equal intervals. Item means and standard deviations are also considered. Items should represent a wide range of attitudes reflected in a variety of ways. A low standard deviation is indicative of a good item; the judges agreed about the meaning of the item with respect to its reflection of attitudes toward suicide.

5. The scale is now ready for administration. The way the scale is used depends on the objectives of the test situation. Typically, respondents are asked to select those statements that most accurately reflect their own attitudes. The values of the items that the respondent selects (based on the judges’ ratings) are averaged, producing a score on the test. (A computational sketch of steps 3 through 5 follows this list.)
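Here is a compact sketch of steps 3 through 5: compute each statement’s mean and standard deviation across judges, retain low-variability statements, and score a respondent by averaging the scale values of the statements they endorse. The ratings, the retention criterion, and the endorsements are all invented for illustration.

    import numpy as np

    # Hypothetical judges' ratings (1-9) for three statements; rows are
    # statements, columns are judges.
    ratings = np.array([[1, 2, 3, 2, 2],    # low scale value; judges agree
                        [5, 5, 6, 5, 4],    # middle scale value; judges agree
                        [2, 9, 5, 1, 8]])   # judges disagree -> poor item

    means = ratings.mean(axis=1)  # each statement's scale value (step 3)
    sds = ratings.std(axis=1)     # low SD indicates a good item (step 4)

    keep = sds < 1.5              # an arbitrary illustrative criterion
    print("Retained scale values:", np.round(means[keep], 2))

    # Step 5: a respondent endorses the first two statements; the score is
    # the mean of those statements' scale values.
    endorsed = [0, 1]
    print("Respondent's score:", round(means[endorsed].mean(), 2))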

The method of equal-appearing intervals is an example of a scaling method of the direct estimation variety. In contrast to other methods that involve indirect estimation, there is no need to transform the testtaker’s responses into some other scale.

The particular scaling method employed in the development of a new test depends on many factors, including the variables being measured, the group for whom the test is intended (children may require a less complicated scaling method than adults, for example), and the preferences of the test developer.

Writing Items

In the grand scheme of test construction, considerations related to the actual writing of the test’s items go hand in hand with scaling considerations. The prospective test developer or item writer immediately faces three questions related to the test blueprint:

· What range of content should the items cover?

· Which of the many different types of item formats should be employed?

· How many items should be written in total and for each content area covered?


When devising a standardized test using a multiple-choice format, it is usually advisable that the first draft contain approximately twice the number of items that the final version of the test will contain. If, for example, a test called “American History: 1940 to 1990” is to have 30 questions in its final version, it would be useful to have as many as 60 items in the item pool. Ideally, these items will adequately sample the domain of the test. An item pool is the reservoir or well from which items will or will not be drawn for the final version of the test.

A comprehensive sampling provides a basis for content validity of the final version of the test. Because approximately half of these items will be eliminated from the test’s final version, the test developer needs to ensure that the final version also contains items that adequately sample the domain. Thus, if all the questions about the Persian Gulf War from the original 60 items were determined to be poorly written, then the test developer should either rewrite items sampling this period or create new items. The new or rewritten items would then also be subjected to tryout so as not to jeopardize the test’s content validity. As in earlier versions of the test, an effort is made to ensure adequate sampling of the domain in the final version of the test. Another consideration here is whether or not alternate forms of the test will be created and, if so, how many. Multiply the number of items required in the pool for one form of the test by the number of forms planned, and you have the total number of items needed for the initial item pool.

How does one develop items for the item pool? The test developer may write a large number of items from personal experience or academic acquaintance with the subject matter. Help may also be sought from others, including experts. For psychological tests designed to be used in clinical settings, clinicians, patients, patients’ family members, clinical staff, and others may be interviewed for insights that could assist in item writing. For psychological tests designed to be used by personnel psychologists, interviews with members of a targeted industry or organization will likely be of great value. For psychological tests designed to be used by school psychologists, interviews with teachers, administrative staff, educational psychologists, and others may be invaluable. Searches through the academic research literature may prove fruitful, as may searches through other databases.

JUST THINK . . .

If you were going to develop a pool of items to cover the subject of “academic knowledge of what it takes to develop an item pool,” how would you go about doing it?

Considerations related to variables such as the purpose of the test and the number of examinees to be tested at one time enter into decisions regarding the format of the test under construction.

Item format

Variables such as the form, plan, structure, arrangement, and layout of individual test items are collectively referred to as item format. Two types of item format we will discuss in detail are the selected-response format and the constructed-response format. Items presented in a selected-response format require testtakers to select a response from a set of alternative responses. Items presented in a constructed-response format require testtakers to supply or to create the correct answer, not merely to select it.

If a test is designed to measure achievement and if the items are written in a selected-response format, then examinees must select the response that is keyed as correct. If the test is designed to measure the strength of a particular trait and if the items are written in a selected-response format, then examinees must select the alternative that best answers the question with respect to themselves. As we further discuss item formats, for the sake of simplicity we will confine our examples to achievement tests. The reader may wish to mentally substitute other appropriate terms for words such as correct for personality or other types of tests that are not achievement tests.

Three types of selected-response item formats are multiple-choice, matching, and true–false. An item written in a multiple-choice format has three elements: (1) a stem, (2) a correct alternative or option, and (3) several incorrect alternatives or options variously referred to as distractors or foils. As an illustration (despite the fact that you are probably all too familiar with multiple-choice items), consider Item B:

Item B

A good multiple-choice item in an achievement test:

a. has one correct alternative

b. has grammatically parallel alternatives

c. has alternatives of similar length

d. has alternatives that fit grammatically with the stem

e. includes as much of the item as possible in the stem to avoid unnecessary repetition

f. avoids ridiculous distractors

g. is not excessively long

h. all of the above

i. none of the above

If you answered “h” to Item B, you are correct. As you read the list of alternatives, it may have occurred to you that Item B violated some of the rules it set forth!

In a matching item, the testtaker is presented with two columns: premises on the left and responses on the right. The testtaker’s task is to determine which response is best associated with which premise. For very young testtakers, the instructions will direct them to draw a line from one premise to one response. Testtakers other than young children are typically asked to write a letter or number as a response. Here’s an example of a matching item one might see on a test in a class on modern film history:

Directions: Match an actor’s name in Column X with a film role the actor played in Column Y. Write the letter of the film role next to the number of the corresponding actor. Each of the roles listed in Column Y may be used once, more than once, or not at all.

Column X                             Column Y

________   1. Matt Damon             a. Anton Chigurh
________   2. Javier Bardem          b. Max Styph
________   3. Stephen James          c. Storm
________   4. Michael Keaton         d. Jason Bourne
________   5. Charlize Theron        e. Ray Kroc
________   6. Chris Evans            f. Jesse Owens
________   7. George Lazenby         g. Hugh (“The Revenant”) Glass
________   8. Ben Affleck            h. Steve (“Captain America”) Rogers
________   9. Keanu Reeves           i. Bruce (Batman) Wayne
________   10. Leonardo DiCaprio     j. Aileen Wuornos
________   11. Halle Berry           k. James Bond
                                     l. John Wick
                                     m. Jennifer Styph


You may have noticed that the two columns contain different numbers of items. If the number of items in the two columns were the same, then a person unsure about one of the actor’s roles could merely deduce it by matching all the other options first. A perfect score would then result even though the testtaker did not actually know all the answers. Providing more options than needed minimizes such a possibility. Another way to lessen the probability of chance or guessing as a factor in the test score is to state in the directions that each response may be a correct answer once, more than once, or not at all.
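The value of those extra options can be quantified. In the hypothetical sketch below, a testtaker knows 10 of the 11 matches: with exactly 11 responses the last match is forced by elimination, whereas with 13 responses three candidates remain and a guess succeeds only a third of the time.

    # A testtaker knows 10 of the 11 premise-response matches.
    known = 10

    for n_responses in (11, 13):
        remaining = n_responses - known  # candidates left for the last premise
        print(f"{n_responses} responses: "
              f"P(last match correct by guessing) = {1 / remaining:.2f}")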

Some guidelines should be observed in writing matching items for classroom use. The wording of the premises and the responses should be fairly short and to the point. No more than a dozen or so premises should be included; otherwise, some students will forget what they were looking for as they go through the lists. The lists of premises and responses should both be homogeneous—that is, lists of the same sort of thing. Our film school example provides a homogeneous list of premises (all names of actors) and a homogeneous list of responses (all names of film characters). Care must be taken to ensure that one and only one premise is matched to one and only one response. For example, adding the name of actors Sean Connery, Roger Moore, David Niven, Timothy Dalton, Pierce Brosnan, or Daniel Craig to the premise column as it now exists would be inadvisable, regardless of what character’s name was added to the response column. Do you know why?
