The activity we call “statistics” exists in the middle of the Venn diagram formed by measurement, comparison, and variability. No two of the three are enough.

I’m dealing with the middle of that Venn diagram right now:

- Measurement — am I measuring what I want to measure? I wanted to measure intrinsic motivation to learn math but I’ve got an index that’s likely better described as positive attitudes towards math (PATM). And I don’t even know what it’s really measuring.
- Comparison — can I compare PATM across countries? If I use it as a regression predictor for math achievement, are results across countries comparable?
- Variability — how do scores on PATM vary across countries? Certainly not enough to just know the mean. This provides some information to answer the question about comparison.

When I plot histograms of different countries’ scores on PATM, I see some strange results. Some countries (most English-speaking and Asian countries, for example) have roughly normal distributions of scores, with extra mass sometimes at the top or bottom since scores are bounded between -6 and 6. But most Middle Eastern, Latin American, and African countries have negatively skewed distributions, with half or more of the students hitting the very top score. And Eastern European countries tend to have positively skewed distributions, with more students saying they don’t like math than saying they do.

For example, here’s Japan:

(My histograms use IMLM not PATM because I made them when I was still calling the measure Intrinsic Motivation to Learn Math).

And here’s Jordan:

What’s the problem? I would expect positive attitudes towards math to be basically normally distributed throughout the population — most people would be neutral, some would dislike it, and some would really like it. If a lot of students say “I really really like it” (or, as in Eastern Europe, “I really really *don’t* like it”) then I wonder if my measurement instrument is measuring some additional thing besides what they feel about math. I don’t know that I can compare a Japanese student who scores 6 on PATM to a Jordanian student who scores the same.

**What to do?**

I’m not sure what to make of this or what to do with it. One thought I had is to do my initial analysis with only the countries that have roughly normal distributions of scores. For those cases, I feel like I can have some confidence that the PATM measure is capturing the actual distribution of positive attitudes towards math in the population, rather than measuring something else or something in addition (an optimist/pessimist mindset? a lack of rigorous math education that would separate the like-maths from the don’t-like-maths?).
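One way to make “roughly normal” operational is a skewness cutoff. Here’s a minimal sketch of that idea, assuming a long-format table with one row per student and hypothetical column names `country` and `patm` (the actual dataset’s structure and names may differ):

```python
import pandas as pd
from scipy.stats import skew

def roughly_normal_countries(df, max_abs_skew=0.5):
    """Return countries whose PATM score distribution has |skewness|
    below a cutoff.

    df is assumed to have one row per student with columns 'country'
    and 'patm' (a score between -6 and 6). The 0.5 cutoff is an
    arbitrary illustration, not a standard.
    """
    # Per-country sample skewness, ignoring missing scores
    sk = df.groupby("country")["patm"].apply(lambda s: skew(s.dropna()))
    # Keep only countries whose distribution is close to symmetric
    return sk[sk.abs() < max_abs_skew].index.tolist()
```

Any cutoff like this is a judgment call; eyeballing the histograms alongside the numbers would be a sanity check on wherever the threshold lands.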

I guess I might also correlate skewness of the PATM distribution with other measures: for example, the country-level cultural measures I’m using to characterize the context in which students learn math, or even just math scores. Maybe that would give some insight into what’s going on here.
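That correlation idea could be sketched like this, again assuming hypothetical column names (`country`, `patm`, and a country-level `measure` column such as a mean math score or a cultural index) rather than any particular dataset’s layout:

```python
import pandas as pd
from scipy.stats import skew, pearsonr

def skew_vs_country_measure(student_df, country_df, measure):
    """Correlate per-country PATM skewness with a country-level measure.

    student_df: one row per student, columns 'country' and 'patm'.
    country_df: one row per country, columns 'country' and `measure`.
    Column names are illustrative assumptions, not from any real file.
    """
    # Per-country sample skewness of PATM scores
    sk = (student_df.groupby("country")["patm"]
          .apply(lambda s: skew(s.dropna()))
          .rename("patm_skew")
          .reset_index())
    # Join skewness onto the country-level table and correlate
    merged = country_df.merge(sk, on="country")
    r, p = pearsonr(merged["patm_skew"], merged[measure])
    return r, p
```

With only ~50-odd countries, a p-value here would be fragile; the point is exploratory, spotting whether skewness tracks anything systematic.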

This is another example of how useful it was for me to present at our research meeting on Tuesday. One of the other students asked if there was good variability on the scores. I said, “sure, all the countries vary across all the values.” But then I realized the point she was making: *how* do they vary across the score levels? Now that is important.
