
Research World, Volume 3, 2006
Online Version


Report R3.12

Scales Measuring Behavioural Intentions and Expectations

Seminar Leader: Soumendra K. Dash, Institute of Finance and International Management, Bangalore
somendash2002[at]yahoo.com

Part 1

"The process of assigning scores according to rules is called measurement" (Harris, 1995, p. 11).

"Measurement is the systematic assignment of numbers to objects or events" (McCall, 1970, p. 1).

Measurement is vital for research. It facilitates comparison and helps in comprehending the entities being studied. For measurement to be useful, the symbols used need to be interpretable. A "scale" could be defined as an accepted standard that aids measurement. A great deal of research has gone into developing scales for measuring various entities and properties.

A well-defined scale makes the task of research easier. In physical sciences, the presence of absolute zero and a constant unit of measurement ensure transferability of the measure across contexts. For example, the temperature of different bodies can be measured on the same scale. Each scale possesses certain inherent assumptions regarding the correspondence of numbers with real-world entities. An increase in knowledge of the phenomenon leads to an increase in the level of correspondence, thus increasing the "robustness" of the scale. Robustness of a scale is defined as the degree to which the scale can function correctly in the presence of invalid inputs or stressful environmental conditions.

In behavioural sciences, due to the lack of constant units of measurement, the process of measurement itself becomes a complex issue. This lack of consistency leaves room for interpretation. There are certain concerns regarding measurement in the behavioural sciences:

(a) Can we really measure behavioural variables?
(b) Do we pass judgements with our inherent biases in the name of measurement?
(c) Does the scale actually measure what it is intended to measure?

The goal of scales in behavioural sciences has been to measure behavioural constructs. Behavioural concepts are abstractions of reality and often subject to comparison, which is quite similar to physical concepts. If we were to find who is taller among two individuals (using the concept "tallness"), we measure their heights (using the construct "height"). Similarly, if we were to measure the beauty (concept) of two individuals, we need to define suitable constructs (which should be meaningful and relevant in the desired context) and then measure them. The similarity here is that one individual could be taller than another and likewise one individual could be more beautiful than another. The problem arises when we try to interpret the observations across contexts. The height of an individual could be compared to the height of a building, a door, or say an elephant, in absolute terms. But can we compare the beauty of a person with that of a flower, a rainbow, or say a small kitten? Additionally, the connotation of the concept of beauty may vary across social, cultural, or physical contexts, whereas the connotation of tallness remains the same across contexts. Hence, there have been attempts to bring uniformity of measure in (scaling for) the behavioural sciences. The subjectivity in the analysis of such variables needs to be addressed carefully.

There are different kinds of scales to measure different kinds of variables. A variable is said to be "categorical" if its values classify objects as belonging to a group or not, i.e., the variable takes a limited number of discrete values. With "continuous" variables, the objects of measurement vary in a graded way with respect to the property of interest. Categorical variables are generally measured on a "nominal" scale, while "ordinal," "interval," and "ratio" scales are used to measure continuous variables.

* Nominal Scale: Here, numbers distinguish among categories. Numbers reflect nothing about the properties of the individuals other than that they are different. Example: Male=1, Female=2

* Ordinal Scale: They designate an ordering. Numbers represent the rank order of the variable being measured. This scale does not assume that the intervals between numbers are equal. Example: In a contest: 1st place=1, 2nd place=2, 3rd place=3.

* Interval Scale: This scale designates an equal-interval ordering. Numbers indicate relative amounts of the attribute; also, equal distances between numbers assigned to subjects reflect equal differences in the amounts of the attribute measured. The zero point on this scale is not absolute. Example: Fahrenheit temperature scale.

* Ratio Scale: Ratio scale designates an equal-interval ordering with a true zero point (i.e., the zero implies an absence of the thing being measured). Example: height, weight, time interval (see the sketch after this list).
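As an illustration (a sketch added here for clarity, not part of the seminar material), the following Python snippet shows why ratios are meaningful only on a ratio scale: two Fahrenheit readings can be divided numerically, but "twice as hot" only makes sense after converting to a scale with a true zero, such as Kelvin. The numbers used are arbitrary.

    # Illustrative sketch: ratios on interval vs. ratio scales.
    def fahrenheit_to_kelvin(f):
        """Convert Fahrenheit (interval scale) to Kelvin (ratio scale)."""
        return (f - 32.0) * 5.0 / 9.0 + 273.15

    t1, t2 = 40.0, 80.0                  # two Fahrenheit readings
    print(t2 / t1)                       # 2.0, but 80 F is not "twice as hot" as 40 F
    print(fahrenheit_to_kelvin(t2) / fahrenheit_to_kelvin(t1))   # roughly 1.08 on the Kelvin scale

    h1, h2 = 1.0, 2.0                    # heights in metres (a true zero exists)
    print(h2 / h1)                       # 2.0, and "twice as tall" is meaningful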

The seminar leader introduced the criteria for checking a scale and mentioned a few tests:

* Reliability tests check the repeatability and internal consistency of a scale, i.e., whether the scale gives similar outputs when the same objects are measured repeatedly. Examples: test-retest method, split-half method, equivalent-form method (see the sketch after this list).

* Validity tests: In terms of assessment, validity refers to the extent to which a scale measures what it is intended to measure. Reliability is a necessary but not sufficient condition for validity. For instance, if the needle of a weighing scale rests two kilograms away from zero, it will always over-report weight by two kilograms. The scale is consistent, but consistently wrong. So, the scale is not valid.

Types of validity: Face validity, criterion validity, construct validity.

* Face validity refers to professional agreement that the scale logically appears to accurately measure what it is intended to measure.

* Criterion validity is the ability of a measure to correlate with another measure of the same construct. There are different conceptualizations of criterion validity: (i) Concurrent validity: a type of criterion validity whereby a new measure correlates with a criterion measure taken at the same time. (ii) Predictive validity: a type of criterion validity whereby a new measure predicts a future event or correlates with a criterion measure administered at a later time.

* Construct validity refers to the degree to which inferences can legitimately be made from the operationalisations in the study to the theoretical constructs on which those operationalisations are based. The different conceptualizations of construct validity are as follows: (i) Discriminant validity: the ability of a measure to have a low correlation with measures of dissimilar concepts. (ii) Convergent validity: synonymous with criterion validity.

* Sensitivity tests: Sensitivity of a measurement instrument is its ability to accurately measure variability in stimuli or responses. For example, a 5-point Likert scale could be more sensitive than a 3-point Likert scale.
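As a concrete illustration of the reliability tests mentioned above (a sketch added for clarity, not material presented in the seminar), the following Python code computes split-half reliability with the Spearman-Brown correction and Cronbach's alpha for a small, hypothetical matrix of item scores, where rows are respondents and columns are scale items.

    import numpy as np

    # Hypothetical scale data: 5 respondents x 6 items, scored 1-5.
    scores = np.array([
        [4, 5, 4, 3, 4, 5],
        [2, 2, 3, 2, 1, 2],
        [5, 4, 5, 5, 4, 4],
        [3, 3, 2, 3, 3, 3],
        [1, 2, 1, 2, 2, 1],
    ], dtype=float)

    def split_half_reliability(x):
        """Correlate odd-item and even-item half scores, then apply the Spearman-Brown correction."""
        odd, even = x[:, ::2].sum(axis=1), x[:, 1::2].sum(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)

    def cronbach_alpha(x):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
        k = x.shape[1]
        item_variances = x.var(axis=0, ddof=1).sum()
        total_variance = x.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    print(split_half_reliability(scores))   # values near 1 indicate high internal consistency
    print(cronbach_alpha(scores))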

It was emphasised that scales measuring behavioural intentions and expectations need to be designed with utmost care. Such scales aim to take into account the subjectivity brought in by the entities being measured.

Part 2

The second half of the seminar was devoted to a practical exercise in scale development. The objective was to develop, from first principles and building on existing knowledge, a framework of expectations from management faculty and a scale that could measure faculty performance objectively. The exercise was initiated by brainstorming in the seminar group. It turned out that a number of criteria were similar to each other but were being articulated differently. Finally, we arrived at the following criteria:

1. Bringing out the relevance of theory to managerial practice
2. Enhancing the knowledge of the student in the subject
3. Getting the students interested in the subject
4. Promoting self-guided learning among students
5. Conducting classes as per pre-planned schedule
6. Keeping students attentive through the class
7. Accommodating divergent thinking on the subject
8. Accepting criticism on own teaching methods positively and improving
9. Giving a comprehensive perspective on the subject
10. Meeting the expected learning in the course
11. Broadening the outlook of the student in the subject
12. Measuring up on the desired breadth and depth of knowledge in the subject
13. Demonstrating communication skills
14. Helping students in academic matters
15. Giving suitable project work and assignments to augment classroom learning
16. Creating a level playing field for students with and without work experience
17. Producing healthy discussion in the class
18. Giving regular feedback to the students
19. Giving reasonable grades
20. Demonstrating impartiality in evaluation
21. Bringing relevant industry experience into the course
22. Giving reasonable amount of relevant assignments
23. Engaging students in the subject after the class
24. Maintaining discipline in the class

There was a need for a method to narrow down the list to a smaller set of criteria which could be measured practically. It was suggested that factor analysis could be useful for this.
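A minimal sketch of how such an analysis might proceed is given below. It is an assumption added for illustration, not part of the seminar exercise: the ratings matrix is placeholder data standing in for actual student responses on the 24 criteria, and scikit-learn's FactorAnalysis is used as one possible tool. Criteria that load on the same factor could then be merged or reworded as a single item.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Placeholder data: 200 hypothetical respondents rating the 24 criteria on a 1-5 scale.
    rng = np.random.default_rng(0)
    ratings = rng.integers(1, 6, size=(200, 24)).astype(float)

    fa = FactorAnalysis(n_components=4)      # the number of factors is an assumption
    fa.fit(ratings)
    loadings = fa.components_.T              # shape: (24 criteria, 4 factors)

    # Assign each criterion to the factor on which it loads most strongly.
    for item, row in enumerate(loadings, start=1):
        print(f"criterion {item:2d} -> factor {int(np.argmax(np.abs(row))) + 1}")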

References

Harris, M. B. (1995). Basic statistics for behavioral science research. Boston, MA: Allyn and Bacon.

McCall, R. B. (1970). Fundamental statistics for psychology. New York: Harcourt, Brace, and World.


Reported by Adwaita Govind Menon, with inputs from D. P. Dash and Jacob D. Vakkayil (19 February 2006).


Copyleft: The article may be used freely, for a noncommercial purpose, as long as the original source is properly acknowledged.


Xavier Institute of Management, Xavier Square, Bhubaneswar 751013, India
Research World (ISSN 0974-2379) http://www1.ximb.ac.in/RW.nsf/pages/Home