My psychometrics prof did a quick intro to confirmatory factor analysis in class last night, and since next quarter I’m taking a class out of sequence that depends on it (latent growth curve modeling), I thought I’d summarize it here to consolidate what I learned.

**EFA vs. CFA**

You can use exploratory factor analysis (EFA) or confirmatory factor analysis (CFA) to investigate the construct validity of a psychometric instrument. With exploratory, you don’t specify the factor structure up front — the analysis extracts the factors and the items’ loadings on them for you. With confirmatory, you specify the factors in advance and how they relate to the items from the instrument.

The “factors” you are looking for are also known as *latent variables* — things you can’t measure directly (that’s why they’re called “latent”). As an example, intrinsic motivation to study math is a latent variable or construct. There’s no way to measure it directly, so you develop a measurement instrument — usually a set of items asking about that construct. You might assess more than one construct at a time, for example intrinsic and extrinsic motivation, and so you want to see whether the items relate to the underlying factors the way you theorized.

You can do a confirmatory factor analysis with EFA techniques (e.g., with principal axis factoring and oblique or orthogonal factor rotation) but there are additional benefits to using CFA (from Gable & Wolf, 1993):

- You get a unique factor solution with CFA
- CFA assesses the degree of model fit
- CFA output on individual model parameters suggests how to improve the model
- You can test factorial invariance across groups

**How to do a simple CFA**

You express a factor analysis model using structural equation modeling (SEM) notation. A circle or oval indicates a latent variable (a.k.a. factor). A square or rectangle indicates an observed variable (a.k.a. indicator). A single-headed arrow shows causality, with factors causing indicators, not the other way around. A curved double-headed arrow indicates an unanalyzed association.

Here’s an example of a CFA diagram that I made in Amos. It shows two factors — positive attitudes towards math (which should probably read “positive attitude toward math”) and extrinsic motivation to learn math — each with four indicators.
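The same two-factor model can be written out in lavaan-style model syntax, which R’s lavaan and Python’s semopy package both accept. A minimal sketch — the factor and indicator names here are hypothetical placeholders, not from my actual instrument:

```python
# Hypothetical two-factor CFA specification in lavaan-style syntax.
# "=~" defines a measurement relation (factor -> indicators);
# "~~" defines a covariance (the curved double-headed arrow).
MODEL_DESC = """
attitude   =~ att1 + att2 + att3 + att4
motivation =~ mot1 + mot2 + mot3 + mot4

attitude ~~ motivation
"""
```

With semopy installed, something like `semopy.Model(MODEL_DESC)` followed by `.fit(df)` on a DataFrame of item responses would estimate the model — I haven’t run this myself, so treat it as a sketch.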

Then you input correlations or covariances from your data set and run the analysis. You’ll get estimated factor loadings as well as a bunch of measures of goodness of fit of the model.
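What the analysis is doing under the hood: it estimates a loading matrix Λ, a factor covariance matrix Φ, and residual variances Θ so that the model-implied covariance matrix Σ = ΛΦΛᵀ + Θ reproduces your sample covariance matrix as closely as possible. A sketch of the implied-covariance calculation in numpy, with made-up numbers for the two-factor, eight-item model:

```python
import numpy as np

# Model-implied covariance for a two-factor CFA: Sigma = Lambda Phi Lambda' + Theta
# (all parameter values below are invented for illustration)

# 8x2 loading matrix: items 1-4 load only on factor 1, items 5-8 on factor 2
Lambda = np.zeros((8, 2))
Lambda[:4, 0] = [0.80, 0.70, 0.60, 0.75]   # "attitude" loadings
Lambda[4:, 1] = [0.70, 0.65, 0.80, 0.60]   # "motivation" loadings

# 2x2 factor covariance matrix (standardized factors, correlated .3)
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])

# diagonal residual (unique) variances, chosen so item variances equal 1
Theta = np.diag(1 - np.sum(Lambda**2, axis=1))

Sigma = Lambda @ Phi @ Lambda.T + Theta
print(np.round(Sigma, 3))
```

With standardized factors and no cross-loadings, the diagonal of Σ comes out to 1 and the off-diagonal entries are the correlations the model predicts between items — fitting the model amounts to tuning the parameters until this Σ matches the observed matrix.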

I won’t go over running the analysis here — I haven’t actually gotten that far myself — but here’s what to look for to see if you have good model fit:

- Root mean square error of approximation (RMSEA) — should be less than .05 for close fit (values up to .08 are often considered acceptable).
- Comparative Fit Index (CFI) — excellent model if > .95, good if between .90 and .95, poor if less than .90.

The prof also mentioned the chi-square measure of fit, but admitted it is close to worthless in practice. Note that for the chi-square test statistic you are looking for a *nonsignificant* chi-square, not a significant one, in contrast to most statistical tests. For large samples the chi-square will virtually always be significant, so some statisticians recommend dividing by the degrees of freedom. Some say that a model with chi-square / df less than three is good.
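These indices can be computed by hand from the model chi-square and the chi-square of the independence (null) model, which SEM software reports. A sketch using the standard formulas, with made-up numbers:

```python
import math

# Fit indices from chi-square statistics (all numbers invented for illustration).
# Standard formulas:
#   RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1)))
#   CFI   = 1 - max(chi2 - df, 0) / max(chi2_null - df_null, chi2 - df, 0)

N = 300                          # sample size
chi2, df = 24.3, 19              # hypothetical model chi-square
chi2_null, df_null = 480.0, 28   # hypothetical independence-model chi-square

chi2_df_ratio = chi2 / df
rmsea = math.sqrt(max(chi2 - df, 0) / (df * (N - 1)))
cfi = 1 - max(chi2 - df, 0) / max(chi2_null - df_null, chi2 - df, 0)

print(f"chi2/df = {chi2_df_ratio:.2f}, RMSEA = {rmsea:.3f}, CFI = {cfi:.3f}")
```

With these particular numbers all three land in the “good fit” range (chi-square/df under three, RMSEA under .05, CFI above .95), which is the pattern you’d hope to see in your own output.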

Here’s a useful page summarizing more measures of goodness of fit of structural equation models.

**Reference**

Gable, R. K., & Wolf, M. B. (1993). *Instrument development in the affective domain: Measuring attitudes and values in corporate and school settings*. Evaluation in Education and Human Services. Boston: Kluwer Academic.