Links for March 30, 2012

The new LMS product: You [Audrey Watters/Hack Education]. On Blackboard's recent strategy shift to embrace open source and acquire Moodlerooms and NetSpot. The value is in the data, not in the LMS software.

Are undergraduates actually learning anything? [Richard Arum and Josipa Roksa/The Chronicle of Higher Education]. For many students, college doesn't improve their critical thinking, complex reasoning, or written communication. More than 45% of a sample of college students showed no statistically significant improvement on the Collegiate Learning Assessment after two years of college, and 36% showed no significant improvement after four years. More disturbingly:

[We] find that learning in higher education is characterized by persistent and/or growing inequality. There are significant differences in critical thinking, complex reasoning, and writing skills when comparing groups of students from different family backgrounds and racial/ethnic groups. More important, not only do students enter college with unequal demonstrated abilities, but those inequalities tend to persist—or, in the case of African-American students relative to white students, increase—while they are enrolled in higher education.

An open letter to college admissions committees [Andrew F. Knight/Fairfax Times].

Consequently, the drive for high grades is blinding students and parents alike to the real purpose of education: learning. In parent-teacher conferences, “How can my child bring up her grade?” has replaced “How can my child better learn the material?” The system’s response to angry grade-obsessed parents and disgruntled students has been to fudge the indicator instead of improving the system; in other words, to inflate grades in spite of worsening performance. I was routinely pressured by parents, students and even administrators to inflate grades in the form of curving scores, providing extra credit and retest opportunities, and more heavily weighting homework and projects that are easy to copy from friends. It is instructive to note that two-thirds of our students are on the honor roll. (That’s right.) When a majority of students routinely receive A’s and B’s in all their classes, the distinctions intended by a traditional A-F grading scale become hazy and meaningless.

What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study [Tamim, Bernard, Borokhovski, Abrami, & Schmid/Review of Educational Research]. A meta-meta-analysis of research on technology use in education. Found a random-effects mean effect size of 0.35, statistically significantly different from zero. I have to wonder whether that is meaningful in any way, given the incredible variety of ways technology can be applied to learning. I have read only the abstract, not the full paper.
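For readers unfamiliar with how a random-effects mean effect size is pooled: it is an inverse-variance weighted average that folds a between-study variance term into each study's weight. A minimal DerSimonian-Laird sketch, using hypothetical effect sizes and variances (not the data from Tamim et al.), might look like:

```python
import math

# Hypothetical per-study effect sizes (d) and sampling variances (v).
# Illustrative only -- not the studies pooled by Tamim et al.
effects   = [0.20, 0.45, 0.30, 0.55, 0.25]
variances = [0.02, 0.05, 0.03, 0.04, 0.02]

# Fixed-effect (inverse-variance) weights and the Q heterogeneity statistic
w = [1.0 / v for v in variances]
fixed_mean = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
Q = sum(wi * (d - fixed_mean) ** 2 for wi, d in zip(w, effects))

# DerSimonian-Laird estimate of between-study variance tau^2
df = len(effects) - 1
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights fold tau^2 into each study's variance
w_re = [1.0 / (v + tau2) for v in variances]
re_mean = sum(wi * d for wi, d in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))  # |re_mean/se| > 1.96 => significant at p < .05

print(f"random-effects mean = {re_mean:.3f}, "
      f"95% CI = [{re_mean - 1.96*se:.3f}, {re_mean + 1.96*se:.3f}]")
```

A "second-order" meta-analysis applies the same machinery one level up, pooling the mean effect sizes of the component meta-analyses rather than of individual studies.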

Health correlator: Calling self-experimentation N=1 is incorrect and misleading [Ned Kock/Health Correlator]. Self-experimentation is longitudinal, so n > 1. The results may not generalize to other people, but it's good for learning what works for you.


Links for January 20, 2012

Big data market survey: Hadoop solutions [Edd Dumbill/O’Reilly Radar].

Apache Hadoop is unquestionably the center of the latest iteration of big data solutions. At its heart, Hadoop is a system for distributing computation among commodity servers. It is often used with the Hadoop Hive project, which layers data warehouse technology on top of Hadoop, enabling ad-hoc analytical queries.

I’m starting my first ever project with Hadoop this week–a prototype of an analytics warehouse using Amazon Elastic MapReduce. Colleagues have told me EMR is a great way to get your head around Hadoop-based data processing.
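The model Hadoop distributes across commodity servers boils down to three phases: map, shuffle/sort, and reduce. A toy single-process word-count sketch (no Hadoop dependency; function names are my own) might look like:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in an input line."""
    for word in line.lower().split():
        yield (word, 1)

def reducer(word, counts):
    """Reduce phase: sum the counts emitted for one word."""
    return (word, sum(counts))

def run_job(lines):
    # Map: apply the mapper to every input record
    pairs = [kv for line in lines for kv in mapper(line)]
    # Shuffle/sort: group intermediate pairs by key
    # (on a cluster, Hadoop does this between nodes)
    pairs.sort(key=itemgetter(0))
    # Reduce: one reducer call per distinct key
    return dict(
        reducer(word, (count for _, count in group))
        for word, group in groupby(pairs, key=itemgetter(0))
    )

result = run_job(["big data big ideas", "data warehouse"])
print(result)  # {'big': 2, 'data': 2, 'ideas': 1, 'warehouse': 1}
```

Hive takes this a step further by compiling SQL-like queries down to chains of such map and reduce jobs, which is what makes ad-hoc analytical queries over Hadoop practical.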

CBO Report: Medicare pilot programs don’t control health-care costs [Megan McArdle/The Atlantic blogs]. McArdle describes what happened with a housing-project demolition program whose pilot studies suggested much better effects than were actually seen at scale:

The initial study was small and involved highly screened people with a lot of support. And it seems to have suffered from publication bias–the most spectacular results got the most attention, even though these might just have been outliers.

This is distressingly common–not just in government or social-do-gooding research, but in organizations of all kinds–including corporations.

Programs at scale often don’t show results as good as pilot studies of those programs. More generally in program evaluation, it’s hard to find evidence of strong (or even weak) effects of interventions. Social systems are complex; factors other than those targeted by the intervention often determine outcomes. This is something I need to communicate regularly to my colleagues and our partners–student learning is largely determined by factors we don’t control. That’s not to say we shouldn’t improve our course design, teaching practices, and so forth, but it is to say that there aren’t many easy pickings out there for improving student outcomes.

For-profits vs not-for-profits [Felix Salmon/Reuters blog].

I know full well that a lot of not-for-profit organizations are run in a dreadful fashion; I’m just not convinced that introducing a profit motive is always or even often the best way to fix that problem…. I very much doubt that for-profit education is ever a good idea. I just don’t see how the incentives there could possibly be aligned.

But the profit motive can’t provide optimal outcomes without consumer discipline alongside it. For-profit higher education is subsidized by the government in the form of grants and low-interest loans (and note that nonprofit education, in the case of public institutions, is subsidized in additional ways as well). Would-be students have little incentive to seriously evaluate whether the education they are purchasing is worth what they pay, because a third-party payer is involved. The situation is much like health care. The post has a good discussion of the issues and the controversy over for-profit higher education.