On the reductionism of analytics in education

I had the great pleasure (and distinct discomfort) of listening to Virginia Tech’s Gardner Campbell speak on learning analytics this week, through my haphazard participation in the Learning Analytics 2012 MOOC. Haphazard, I say, because I am so busy at work I can hardly spare any time to connect outside of it, whether through more structured means like the Learning Analytics course or less structured ones like Twitter and Facebook. Discomfort, I say, because Campbell launched some pointed criticisms of the reductionist approach to learning analytics that prevails in education today. Yes, it prevails at Pearson too, not because we have bad motives, but because the process of education and learning is so complex that we feel compelled to simplify it in some way to make any sense of it.

M-theory vs. the x-y plane

Campbell drew an analogy to cosmology, contrasting 11-dimensional M-theory with the planar (two-dimensional) Cartesian coordinate system. He suggested that current work in learning analytics is like working in the x-y plane when we know that education and learning take place in at least 11 dimensions.

Learning analytics, as practiced today, is reductionist to an extreme. We are reducing too many dimensions into too few. More than that, we are describing and analyzing only those things that we can describe and analyze, when what matters exists at a totally different level and complexity. We are missing emergent properties of educational and learning processes by focusing on the few things we can measure and by trying to automate what decisions and actions might be automated.
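
To make the flattening concrete, here is a minimal sketch (mine, not Campbell’s, with made-up numbers): two learners described along eleven dimensions can be far apart in the full space yet indistinguishable once we project onto the two dimensions we happen to measure.

```python
import numpy as np

# Two hypothetical learners described along 11 dimensions
# (engagement, prior knowledge, motivation, ...). The dimensions
# and values are illustrative, not from any real instrument.
a = np.array([0.2, 0.8, 1.0, 0.1, 0.9, 0.3, 0.7, 0.5, 0.6, 0.4, 0.2])
b = np.array([0.2, 0.8, 0.1, 0.9, 0.2, 0.8, 0.1, 0.9, 0.1, 0.9, 0.8])

# Distance in the full 11-dimensional space: clearly different people.
print(np.linalg.norm(a - b))          # ~1.9

# Project onto the first two dimensions (the "x-y plane") and the
# difference vanishes entirely.
print(np.linalg.norm(a[:2] - b[:2]))  # 0.0
```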

As I was writing this post, Simon Phipps (@webmink) tweeted about his post “Leaving Room for Mystery”, in which he proposed that some problems will remain unsolved, some systems unanalyzed:

The real world is deliciously complex, and there will always be mysteries – systems too complex for us to analyse. It seems to me that one of the keys to maturing is learning to identify those systems and leave room for them to be mysteries, without discarding the rest of rational life.

Then Simon shared a definition of reductionism with me:

https://twitter.com/#!/webmink/status/176095165709164544

This echoes exactly what Campbell said in his presentation:

My fear is that computers as they are used in learning analytics mean that people will work on simpler questions. They may be complicated in terms of the scale of the data but they’re not conceptually rich. They won’t be trying more concepts or playing with new ideas.

We’ll have a map that makes the territory far simpler than it truly is and we’ll design school to that, not to the true complexity.

Reductionism in analyzing online discussion threads

Last week in a meeting one of my colleagues pointed out the inherent reductionism of our approach to the problem of measuring and characterizing student interactivity and learning via discussion threads. He pointed this out not as a criticism but as recognition and acknowledgement. We are applying a custom-developed coding scheme to threaded discussion posts. We code each post into one of four categories based on the pattern of topics discussed in each post and across the thread. We capture what topics were introduced, how they relate to topics in previous posts, and how they relate to the main discussion topic. We cannot capture all the details and complexity of what people have written and how they have interacted. We certainly aren’t paying any attention to the broader experiences and connections that individual students bring to the discussion. But we are trying nevertheless to capture some important kinds of meaning and interaction in the posts via our coding scheme.
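
Our actual category definitions aren’t important here, but a toy sketch shows the shape of the reduction: each post, however rich, collapses to a single code determined by how its topics relate to the main topic and to what came before. The category names and the topic-overlap rules below are stand-ins of my own invention, not our real scheme.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topics: set[str]  # topics identified in the post, by hand or by NLP

# Four hypothetical codes standing in for the real categories.
def code_post(post: Post, main_topic: str, prior_topics: set[str]) -> str:
    on_topic = main_topic in post.topics
    builds_on_prior = bool(post.topics & prior_topics)
    if on_topic and builds_on_prior:
        return "EXTENDS_DISCUSSION"   # on topic and engages earlier posts
    if on_topic:
        return "ON_TOPIC_NEW"         # on topic but ignores earlier posts
    if builds_on_prior:
        return "TANGENT_FROM_PRIOR"   # follows a side thread off topic
    return "NEW_TOPIC"                # introduces something unrelated

# Reduce an entire thread to a sequence of codes.
def code_thread(posts: list[Post], main_topic: str) -> list[str]:
    seen: set[str] = set()
    codes = []
    for post in posts:
        codes.append(code_post(post, main_topic, seen))
        seen |= post.topics
    return codes
```

Everything a student actually wrote, every nuance of tone and experience, disappears into one of four labels. That is the reduction my colleague was naming.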

This is, at heart, the analytics endeavor: to take messy, humanly meaningful information and transform it into numbers that a computer can manipulate. It can be done in more sophisticated and subtle ways or in cruder and more careless ways, but it is always reductionist. It does not fully capture the human experience of learning. We can’t model learning in all its complexity.
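
As a trivial illustration of that transformation (my own example, not any particular product’s pipeline), even the crudest reduction, a bag-of-words count, turns a student’s sentence into something a computer can manipulate while discarding nearly everything the sentence meant.

```python
from collections import Counter

post = "I finally understand recursion, but it took me three tries."

# The crudest possible reduction: normalize case, drop punctuation,
# count words. The struggle and the breakthrough vanish; counts remain.
words = [w.strip(".,!?").lower() for w in post.split()]
vector = Counter(words)
print(vector)  # Counter({'i': 1, 'finally': 1, 'understand': 1, ...})
```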

The math is not the territory

I see it as critical in data analysis to remember that our numbers are useful shorthand, easy to manipulate, summarize, visualize, and report on, but they are not the thing we are interested in. We use them because there is something else, non-quantitative and human (at least in social sciences like education), that we are actually after.

Campbell said,

We tend to believe the math is the territory and we tend to organize ourselves around just what we’re able to measure instead of organizing ourselves around creating better measurements of what we know to be nearly unimaginably complex.

The math is not the territory. The codes and numbers we use to represent human understanding and action and connection are not the territory. The visualizations are not it either.

Learning as delicious mystery?

Simon suggested some things are too complex to be answerable and should be left as mysteries. Is learning something that should be left unanalyzed? Certainly not, although aspects of it are mysteriously wonderful and not amenable to quantitative or qualitative analysis. There’s too much at stake — for individual students who benefit from success defined in many different ways, for the government that funds or subsidizes much of their education, for the citizenry that benefits from an educated populace.

I believe analytics can help, but I feel humble about its possibilities, more so than ever after listening to Campbell speak. I used to call my stance “cynicism”, but I think I will reframe it as “humility”, which makes it seem like there is some chance of success. As uncomfortable as it was, I’m glad I sat in on Campbell’s talk and listened to it again this morning to think about it further.
