
Getting ready for connected learning

Here’s a cool idea: the web enables a connectivist learning style based on network navigation, where “learning is the process of creating connections and developing a network.” Seems to me, though, that before you can learn connectedly, you first need to learn in more socially and contextually constrained ways.

Background: Three generations of distance education pedagogies

In this week’s Learning Analytics 2012 (LAK12) web session, Dragan Gasevic pointed us at an interesting paper describing three generations of distance education: cognitive-behaviorist, social constructivist, and connectivist. From Anderson and Dron (2011):

Anderson and Dron did not claim that the connectivist model would replace the cognitive-behaviorist or social-constructivist models but said that “all three current and future generations of [distance education] pedagogy have an important place in a well-rounded educational experience.”

These three models co-exist online today

LAK12 is itself an example of a course built in the connectivist paradigm, but just because a course is massive, open, and online doesn’t mean that it’s connectivist. For example, the Stanford machine learning class offered last fall was a (very effective) example of a cognitive-behaviorist approach. Students watched videos on their own schedule. Regular quizzes and homework assignments checked understanding. Andrew Ng was content creator and sage on the stage. While there was a Q&A forum available, the course design did not rely on it. A student could use it or not.

Typical online college courses today are often built in the social-constructivist mode, with instructors seeking to design and run courses that encourage many-to-many engagement through discussion threads and group projects. Does the addition of social features drive learning? It seems to be an article of faith among instructional designers today that it does. I’m not up on the research so I can’t say — but I can say that in online courses I’ve reviewed and taken, I don’t see evidence that social features have been designed in such a way that they make a difference in learning.

When are the different approaches useful?

I am thinking that whether a cognitive-behaviorist or constructivist or connectivist approach is best depends upon the preparation and goals of the learner. Maybe something like this:

I suspect that a student needs to gain basic grounding and fluency in a subject before constructivist approaches will be useful. An elementary schooler needs to learn to read and write and do arithmetic before they can do a group science project, for example. And a connectivist approach seems likely to be most effective once you already have some intermediate and contextual knowledge of a subject to navigate out from.

What do you think? When are cognitive-behaviorist vs. social constructivist vs. connectivist approaches to learning most useful? Do you think you need to have achieved a certain level of contextual and subject knowledge before connected learning is effective?


Experimental and quasi-experimental research designs

Ph.D. Topics: Research and Evaluation Methods

In an experimental design, subjects are randomly assigned to groups for different levels of treatment (or no treatment, i.e., the control group). In a quasi-experimental design, there is no randomization: subjects are not randomly assigned to treatment.

Random assignment of subjects helps control for participant differences, one of the main sources of threats to internal validity of a research study. Random assignment of subjects doesn’t guarantee that there are no participant differences; especially with smaller sample sizes, you may need to take steps to control for participant differences even after randomly assigning them to treatment levels. For example, administering a pre-test will control for different levels of ability or achievement prior to intervention. Measuring moderators such as demographics (gender, age, race, socioeconomic status) and including those in your analysis may help further isolate causal relationships between interventions and outcomes. In some experimental designs, researchers may match participants across control and treatment so that each pair of participants can be treated as one virtual participant (Gliner & Morgan, 2000), giving a pseudo-within-subjects design.
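To make the randomization step concrete, here is a minimal sketch of simple random assignment in Python. The subject labels and group names are hypothetical; real studies would assign actual participant IDs and may need stratification or matching on top of this.

```python
import random

def randomly_assign(subjects, groups=("control", "treatment"), seed=None):
    """Shuffle subjects, then deal them round-robin into equal-sized groups."""
    rng = random.Random(seed)  # seed only for reproducibility in examples
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, subject in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(subject)
    return assignment

# Hypothetical subject pool of 20 participants
subjects = [f"S{i}" for i in range(1, 21)]
groups = randomly_assign(subjects, seed=42)
```

Because assignment depends only on the shuffle, any systematic participant differences are spread across groups on average, which is exactly the internal-validity benefit described above.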

A randomized experimental design with pre-test and post-test controls for threats to internal validity from participant characteristics but leaves some threats uncontrolled, specifically testing effects and bias from selective attrition. Testing effects (for example, the possibility that taking a pre-test helps both control and treatment group participants do better on the post-test, thus obscuring the actual treatment effect) can be controlled by a Solomon four-group design (Gliner & Morgan, 2000). In this design, there are two control groups and two treatment groups (assuming just one level of intervention). One control group and one treatment group take a pre-test; the other control and treatment groups do not. This allows the potential testing effect to be teased out.
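One way to see how the four groups tease the effects apart is a small simulation. The sketch below is illustrative only: the effect sizes, score scale, and group sizes are invented, and the estimators are the simple mean contrasts implied by the 2×2 layout (treated vs. control, pre-tested vs. not).

```python
import random
from statistics import mean

def simulate_solomon(n=250, treatment_effect=5.0, testing_effect=2.0, seed=1):
    """Simulate post-test scores for the four groups of a Solomon design.

    Each group's scores are a baseline of 50 plus noise, plus the treatment
    effect if treated and the testing effect if the group took a pre-test.
    """
    rng = random.Random(seed)

    def post(treated, pretested):
        return [50 + rng.gauss(0, 5)
                + (treatment_effect if treated else 0)
                + (testing_effect if pretested else 0)
                for _ in range(n)]

    g = {
        "treat_pre":     post(True,  True),
        "control_pre":   post(False, True),
        "treat_nopre":   post(True,  False),
        "control_nopre": post(False, False),
    }
    # Treatment effect: treated minus control, averaged over pre-test status
    est_treatment = ((mean(g["treat_pre"]) - mean(g["control_pre"]))
                     + (mean(g["treat_nopre"]) - mean(g["control_nopre"]))) / 2
    # Testing effect: pre-tested minus not, averaged over treatment status
    est_testing = ((mean(g["treat_pre"]) - mean(g["treat_nopre"]))
                   + (mean(g["control_pre"]) - mean(g["control_nopre"]))) / 2
    return est_treatment, est_testing

est_treatment, est_testing = simulate_solomon()
```

With only a two-group pre-test/post-test design, the two effects would be confounded in the pre-tested groups; the non-pre-tested pair is what lets the design separate them.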

What if you can’t randomly assign subjects to treatments? This is a common problem in educational and other social settings. For example, if you are testing the introduction of a new curriculum it is unlikely you can randomly assign students to that curriculum. Students come to you in intact groups. In this case, you may choose to use a quasi-experimental design in which treatments are assigned to groups. The treatments may be assigned randomly (e.g., pick classrooms out of a hat to decide which intervention they will get) or purposively (e.g., principals at a school may have some say over which classrooms get an untested vs. the standard curriculum).

Like experimental designs, quasi-experimental designs may be improved by the use of a control group, measuring moderators and incorporating them into the analysis, or matching participants on factors that relate to the measured outcomes (Gliner & Morgan, 2000).

One type of quasi-experimental research design is the time series design, in which many observations are made over time, both without intervention and with intervention (Gliner & Morgan, 2000). Multiple observations are used to establish a baseline that shows an (ideally stable) level of the outcome of interest over time. Then multiple observations are made during intervention, ideally showing a change due to intervention. Then the treatment may be withdrawn, again in an attempt to isolate the relationship of treatment to observed outcome. This may be used with or without a control group. A single-subject design is a common time-series design, in which one or a very few subjects are followed through one or more baseline and treatment phases.
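A single-subject A-B-A sequence can be sketched with hypothetical data. The numbers below are invented weekly observations of an outcome for one student; the point is the phase structure, where a return toward the baseline level after the intervention is withdrawn strengthens the causal inference.

```python
from statistics import mean

# Hypothetical weekly observations for one subject across an A-B-A sequence
phases = {
    "baseline":   [4, 5, 4, 5, 4],   # A: no intervention
    "treatment":  [8, 9, 8, 9, 9],   # B: intervention in place
    "withdrawal": [5, 4, 5, 4, 5],   # A: intervention removed
}

phase_means = {name: mean(obs) for name, obs in phases.items()}

# Simple effect estimate: treatment-phase mean vs. the two no-treatment phases
effect = phase_means["treatment"] - (phase_means["baseline"]
                                     + phase_means["withdrawal"]) / 2
```

If the outcome had stayed high after withdrawal, some other explanation (maturation, history) would be just as plausible as the intervention.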


Gliner, J. A., & Morgan, G. A. (2000). Research Methods in Applied Settings: An Integrated Approach to Design and Analysis. Mahwah, N.J: Lawrence Erlbaum.


Internal validity of research studies

Ph.D. Topics: Research and Evaluation Methods

What is validity? If I say “that’s a valid argument” to you, it means your facts and your logic seem reasonable to me. In research methods, we talk about validity because we want to make statements about the world; we want to make knowledge claims. We want these claims to be valid, meaning they should be well-grounded in logic and fact so that we can trust in them.

Much of scientific research is concerned with making claims about causality. In education research, for example, we want to know what causes students’ math achievement to be high or low. Is it what their teacher does? Their raw brain power? How hard they work? And so forth. Obviously, the answer is, it’s many things, but to the extent possible we want to isolate the factors that are under our control (teaching method, curriculum, school culture) and find the factors that will result in the highest math achievement.

Internal validity = Extent to which you can infer causality

The internal validity of a research study is the extent to which you can make a causal claim based on the study; it is the validity of the causal inference you make. Different research designs provide stronger or weaker internal validity. For example, well-designed randomized experimental designs generally are considered to provide the strongest internal validity. Quasi-experimental studies in which treatments are assigned randomly to intact groups (e.g., classrooms) can have strong internal validity also.



Am I right?

How right are you about what you think and believe? Seth Roberts suggests you are less right than you think:

What’s interesting is that logarithmically right is a good way of describing how one’s beliefs should be transformed to be a fair approximation of the truth. When you think you are right, you probably are — but logarithmically. Much less than you think.

Yes, you are almost certainly wrong in many of your beliefs in some way or another. You are likely much less right than you think,* which means you are more wrong. So maybe a better question than asking yourself, “am I right?” is to ask, “in what ways am I wrong?”

This is especially important for researchers, because there are so many ways you can go wrong, even with a well-designed, well-controlled experiment. It’s even more critical for those who do observational studies. I do observational studies because you can’t randomize students to live in different cultures. Any explanation I come up with for why we see the outcomes we do must be considered in the context of the myriad other explanations that might give rise to the exact same correlational patterns.

Not all researchers want to try to see how they’re wrong. For example, T. Colin Campbell, author of The China Study, doesn’t seem interested in how he might be wrong. He just wants to prove that he’s right.

*Why do I say you are likely much less right than you think? Cognitive biases drag you away from objective understanding.