PSYCH/655 Version 3
University of Phoenix Material
Reliability and Validity Worksheet
Instrument Reliability
A reliable instrument is one that is consistent in what it measures. If, for example, an individual scores highly on the first administration of a reliable test, he or she should also score highly on a second administration.
Imagine that you are conducting a study for which you must develop a mathematics test for 7th-grade students. You develop a 30-point test and distribute it to a class of twelve 7th-grade students. Exactly one month later, you administer the same test again. The scores of the students on the two administrations are listed below. Use Microsoft® Excel® or IBM® SPSS® to create a scatterplot of the scores, formatted as shown in the example graph (an optional scripted alternative is sketched after the example graph). What observations can you make about the reliability of this test? Explain.
STUDENT     30-POINT TEST              30-POINT TEST
            (FIRST ADMINISTRATION)     (SECOND ADMINISTRATION)
A           17                         15
B           22                         18
C           25                         21
D           12                         15
E           7                          14
F           28                         27
G           27                         24
H           8                          5
I           21                         25
J           24                         21
K           27                         27
L           21                         19
[Example graph: scatterplot of scores on the first administration versus scores on the second administration]
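If you prefer a scripted approach to Excel® or SPSS®, the following Python sketch shows one way to build the scatterplot and estimate test-retest reliability from the scores above. It assumes the numpy and matplotlib packages are available; the variable names are illustrative and are not part of the worksheet.

import numpy as np
import matplotlib.pyplot as plt

# Scores from the table above (students A through L).
first_admin  = np.array([17, 22, 25, 12,  7, 28, 27,  8, 21, 24, 27, 21])
second_admin = np.array([15, 18, 21, 15, 14, 27, 24,  5, 25, 21, 27, 19])

# Test-retest reliability estimate: Pearson correlation between the two administrations.
r = np.corrcoef(first_admin, second_admin)[0, 1]
print(f"Test-retest correlation: r = {r:.2f}")

# Scatterplot of the two administrations, formatted like the example graph.
plt.scatter(first_admin, second_admin)
plt.xlabel("30-point test (first administration)")
plt.ylabel("30-point test (second administration)")
plt.title("Test-retest scores for 12 seventh-grade students")
plt.xlim(0, 30)
plt.ylim(0, 30)
plt.show()

A tight cluster of points along an upward-sloping line (a high positive correlation) would suggest the test measures consistently across the two administrations; a widely scattered pattern would suggest low reliability.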
What Kind of Validity Evidence: Content-Related, Criterion-Related, or Construct-Related?
A valid instrument is one that measures what it claims to measure. Validity depends on the amount and type of evidence available to support one's interpretations of the data that have been collected. This week, you discussed three kinds of validity evidence: content-related, criterion-related, and construct-related.
Each question below represents one of these three evidence types. In the space provided, write content if the question refers to content-related evidence, criterion if the question refers to criterion-related evidence, and construct if the question refers to construct-related evidence of validity.
1. How strong is the relationship between the students’ scores obtained using this instrument and their teacher’s rating of their ability?
2. How adequately do the questions in the instrument represent that which is being measured?
3. Do the items that the instrument contains logically reflect that which is being measured?
4. Are there a variety of different types of evidence (test scores, teacher ratings, correlations, etc.) that all measure this variable?
5. How well do the scores obtained using this instrument predict future performance?
6. Is the format of the instrument appropriate?
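For reference, criterion-related evidence (see, for example, questions 1 and 5 above) is usually summarized as a correlation between instrument scores and an external criterion, such as teacher ratings. The short Python sketch below illustrates that computation; the teacher ratings in it are hypothetical values invented for illustration, not data from the worksheet.

import numpy as np

# First-administration scores from the table above (students A through L).
instrument_scores = np.array([17, 22, 25, 12, 7, 28, 27, 8, 21, 24, 27, 21])

# Hypothetical teacher ratings of each student's math ability (1 = low, 5 = high).
# These values are invented for illustration only.
teacher_ratings = np.array([3, 4, 4, 2, 1, 5, 5, 1, 4, 4, 5, 3])

# Criterion-related evidence: correlation between the instrument and the criterion.
validity_coefficient = np.corrcoef(instrument_scores, teacher_ratings)[0, 1]
print(f"Criterion-related validity coefficient: r = {validity_coefficient:.2f}")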
Copyright © 2015, 2014 by University of Phoenix. All rights reserved.