7.2 Reliability:

Reliability is the degree to which a test consistently measures whatever it measures.

A reliable test gives the same scores when it is administered and readministered, while an unreliable test does not give the same scores.

If an intelligence test were unreliable, a student scoring an IQ of 120 today might score an IQ of 140 tomorrow and a 95 the day after tomorrow. If the test were reliable and the student's IQ were 110, we would not expect much fluctuation in score from testing to testing: a score of 105 would not be unusual, but a score of 145 would be very unlikely.

Reliability is expressed numerically as a coefficient. A high coefficient indicates high reliability and a low coefficient indicates low reliability. If a test were perfectly reliable, the coefficient would be 1.00. A valid test is always reliable, but a reliable test is not necessarily valid.

Types of Reliability:

1. Test-Retest Reliability:

Test-retest reliability is the degree to which scores are consistent over time. It is established by determining the relationship between scores resulting from administering the same test to the same group at different times.

Steps:

1. Administer the test to a group.

2. After some time (say two weeks), administer the same test to the same group.

3. Correlate the two sets of scores.

4. A high correlation coefficient indicates good test-retest reliability (see the sketch below).
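A minimal computational sketch of these steps, assuming Python with NumPy and purely hypothetical scores (the names and numbers below are illustrative, not taken from the text): the test-retest coefficient is simply the Pearson correlation between the two administrations.

import numpy as np

# Hypothetical scores of the same ten students on the same test,
# administered two weeks apart.
first_administration = np.array([78, 85, 62, 90, 71, 88, 64, 95, 80, 73])
second_administration = np.array([80, 83, 65, 92, 70, 86, 60, 94, 82, 75])

# The Pearson correlation between the two sets of scores is the
# test-retest reliability coefficient.
r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability: {r:.2f}")

A coefficient close to 1.00 means the students keep roughly the same relative standing on both administrations.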

2. Equivalent-Forms Reliability:

Equivalent forms of a test are two tests that are identical in every way except for the actual items included.

The two tests are identical in terms of:

(i) Number of items

(ii) Structure

(iii) Difficulty level

(iv) Directions for administering, scoring and interpretation

(v) Variable measured

If these two forms are administered to the same group, it is expected that the group will obtain the same scores on both. Equivalent-forms reliability is determined by establishing the relationship between scores resulting from administering two different forms of the same test to the same group at the same time.

Steps:

1. Administer one form of the test to a group.

2. Administer the second form of the test to the same group.

3. Correlate the two sets of scores.

4. A high correlation coefficient indicates good equivalent-forms reliability (see the sketch below).
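The computation mirrors the test-retest sketch above; a minimal example, again with hypothetical data, correlating scores on Form A and Form B:

import numpy as np

# Hypothetical scores of the same group on Form A and Form B,
# administered at the same time.
form_a = np.array([34, 28, 41, 22, 37, 30, 45, 26, 39, 33])
form_b = np.array([32, 29, 43, 24, 35, 31, 44, 27, 40, 30])

# Equivalent-forms reliability is the correlation between the two forms.
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"Equivalent-forms reliability: {r:.2f}")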

3. Split-Half Reliability:

Split-half reliability is determined by establishing the relationship between the scores on two equivalent halves of a test administered to a group at one time. In this method, the test items are divided into two equal halves. Suppose a test consists of 20 items; it can be divided into two equal halves as under:

(i) Odd numbers: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19

(ii) Even numbers: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20

The scores on the odd-numbered items are correlated with the scores on the even-numbered items to compute the split-half reliability. Because this correlation describes only half of the test, the correction formula known as the Spearman-Brown prophecy formula is applied, which is as under:

Reliability of the full test = 2r / (1 + r), where r is the correlation between the two halves.

Steps:

1. Administer a test to a group.

2. Divide the test items into two halves (odd-even).

3. Compute the scores on each half.

4. Correlate the two sets of scores.

5. Apply the Spearman-Brown correction formula.

6. A high corrected correlation coefficient indicates good split-half reliability (see the sketch below).
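A minimal sketch of these steps, assuming a small hypothetical matrix of 0/1 item scores for a 10-item test (rows are students, columns are items; the data are illustrative and shorter than the 20-item example above):

import numpy as np

# Hypothetical 0/1 item scores: 8 students x 10 items.
items = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 0, 0, 0, 0],
])

# Scores on the odd-numbered items (columns 0, 2, 4, ... in zero-based
# indexing) and on the even-numbered items (columns 1, 3, 5, ...).
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

# Correlation between the two half-tests.
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown correction estimates the reliability of the full-length test.
r_full = (2 * r_half) / (1 + r_half)
print(f"Half-test correlation: {r_half:.2f}")
print(f"Split-half reliability (Spearman-Brown): {r_full:.2f}")

For example, if the two halves correlate 0.60, the corrected split-half reliability is 2(0.60) / (1 + 0.60) = 0.75.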

4. Kuder-Richardson Method:

The Kuder-Richardson method estimates reliability from a single administration of a test with the help of formulas. It measures the internal consistency of the test. One such formula is Kuder-Richardson formula 20 (KR-20), which is based on the proportion of persons passing each item and the standard deviation of the total scores. A simpler formula is Kuder-Richardson formula 21 (KR-21), which is based on the number of correct answers. This formula is:

KR-21 = [K / (K - 1)] × [1 - M(K - M) / (K × s²)]

where K = the number of items in the test

M = the mean

s = the standard deviation of the test scores
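A minimal sketch computing both KR-20 and KR-21 from a single administration, reusing the hypothetical item matrix from the split-half sketch and the population variance of the total scores (all names and data are illustrative):

import numpy as np

# Same hypothetical 0/1 item scores as above: 8 students x 10 items.
items = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 0, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 0, 0, 0, 0],
])

k = items.shape[1]             # K: number of items in the test
total = items.sum(axis=1)      # each student's number of correct answers
m = total.mean()               # M: mean of the total scores
var = total.var(ddof=0)        # s squared: variance of the total scores

# KR-20: uses the proportion passing (p) and failing (q) each item.
p = items.mean(axis=0)
q = 1 - p
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / var)

# KR-21: uses only K, the mean and the variance of the total scores.
kr21 = (k / (k - 1)) * (1 - m * (k - m) / (k * var))

print(f"KR-20: {kr20:.2f}")
print(f"KR-21: {kr21:.2f}")

KR-21 assumes the items are of roughly equal difficulty, so it typically comes out a little lower than KR-20 when item difficulties vary.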