The idea behind multilevel regression with poststratification (MRP) is to use a model that allows us to do inference in small domains as a way to get around some of the problems with poststratification. The poststratification step we already know. The main difference is that, rather than poststratifying raw values from the survey, we will poststratify instead the predicted values from a model.

We need to load a couple of libraries that we will use for the model:

library(optimx)
library(lme4)

And then repeat similar steps to what we did for poststratification. In this case, we are going to take a large study from the CIS and we are going to estimate the average ideology by age group in each of the CCAA. We are going to use some auxiliary data for poststratification. Same as before, the first step is bringing the two datasets in line.

But first, let’s look at the challenge: if we look at the cell size, we would be estimating means using sample sizes as small as 10. What we will try to do is to learn something about the larger cells and use that information for the smaller ones:

xtabs(~ ccaa + age, data=dta)

We now make sure that the two datasets use the same variable names and use information coded in the same way:

dta$educ <- dta$education
dta$education <- NULL
dta$ideology <- dta$ideology - 1
dta$employstatus[dta$employstatus == "retired"] <- "other"
frame$occup[frame$occup == "retired"] <- "other"
frame$employstatus <- as.factor(as.character(frame$occup))
dta$employstatus <- as.factor(as.character(dta$employstatus))

Because we will be using a model, it is helpful to have covariates that help us improve the estimates. Since we will be estimating the ideology distribution, two obvious candidates at the group level are the GDP per capita at the CCAA level and the vote share for PP in 2016.

gdp <- list(list("andalucia", 19132, 33.56),
            list("cataluna", 30769, 13.36),
            list("madrid", 34916, 38.27),
            list("valencia", 22659, 35.49),
            list("galicia", 23294, 41.49),
            list("castilla-leon", 24397, 44.33),
            list("paisvasco", 34079, 12.85),
            list("canarias", 21031, 34.07),
            list("castilla-lamancha", 20645, 42.79),
            list("murcia", 21134, 46.74),
            list("aragon", 28640, 33.56),
            list("balears", 26764, 35.09),
            list("extremadura", 18174, 39.94),
            list("asturias", 23087, 35.28),
            list("navarra", 31809, 31.88),
            list("cantabria", 23817, 41.56),
            list("rioja", 26833, 42.63),
            list("ceuta", 20034, 51.91),
            list("melilla", 18842, 49.90))
gdp <- do.call(rbind, gdp)
gdp <- as.data.frame(gdp)
names(gdp) <- c("ccaa", "gdppc", "pp2016")
gdp$ccaa <- unlist(gdp$ccaa)
gdp$gdppc <- unlist(gdp$gdppc)
gdp$pp2016 <- unlist(gdp$pp2016)

Let's now drop incomplete cases and merge that information back in:

dta <- dta[complete.cases(dta[, c("gender", "age", "educ", "ccaa",
                                  "employstatus", "ideology")]), ]
dta <- merge(dta, gdp, by="ccaa")

The model that we are interested in predicts individual-level ideology using a number of individual-level covariates specified as random effects and group-level covariates specified as fixed effects. We will see in a minute why:

mod <- lmer(ideology ~ 1 + scale(gdppc) + scale(pp2016)
            + (1 | age)
            + (1 | gender)
            + (1 | educ)
            + (1 | age:educ)
            + (1 | educ:gender)
            + (1 | employstatus:age)
            + (1 | ccaa)
            + (1 | ccaa:age),
            data=dta,
            control=lmerControl(optimizer="optimx",
                                optCtrl=list(method="nlminb")))

This is what our model looks like:

summary(mod)

Start by interpreting the fixed effects, ignoring for a second the fact that they carry aggregate information:

fixef(mod)

Now move to the random effects. For instance, look at the intercepts that we have just estimated for age, and remember the idea of pooling in the context of mixed-effects models:

ranef(mod)$age
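That pooling can be sketched outside of lme4 with a toy calculation (the group means, group sizes, and the variance-ratio constant below are all made up for illustration): each group intercept is pulled toward the grand mean, and small groups are pulled harder.

```r
# Two hypothetical groups with the same observed mean but very
# different sample sizes.
ybar <- c(small = 5.2, large = 5.2)   # observed group means
n    <- c(small = 10,  large = 1000)  # group sizes
mu   <- 4.5                           # grand mean across all groups

# Classic shrinkage weight n_j / (n_j + k), where k plays the role of
# the ratio of residual to group-level variance (set to 50 here).
k <- 50
lambda <- n / (n + k)
shrunken <- lambda * ybar + (1 - lambda) * mu
round(shrunken, 2)
# -> small: 4.62, large: 5.17
```

The small group is shrunk most of the way back to the grand mean; the large group barely moves. This is exactly the behavior we want for the tiny CCAA-by-age cells.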

The predictions from the model are the values that we are most interested in.

dta$P <- predict(mod, newdata=dta, allow.new.levels=TRUE)
summary(dta$P)

Now comes the final step. We have estimates for each cell, but we now need to bring them in line with their actual sizes in the population. In other words, we have an estimate for a person in a given cell, but we now want to weight it by the correct proportion of cases in that cell within a given group. This is the reason we poststratify.
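The weighting logic is just a population-weighted mean of the cell-level predictions. A self-contained toy example (the cell estimates and population counts here are made up):

```r
# Two cells within one group: model-based cell estimates and the
# population count of each cell.
cells <- data.frame(
    estimate = c(3.8, 5.1),   # predicted mean ideology per cell
    N        = c(200, 800)    # population size of each cell
)

# Poststratified estimate: cell predictions weighted by population share.
with(cells, sum(N * estimate) / sum(N))
# -> 4.84
```

This is the same `sum(N * mean)/sum(N)` computation we apply below, one group at a time.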

We first calculate the cell counts in our dataset:

dta$age_gender <- interaction(dta$age, dta$gender)
dta$age_educ <- interaction(dta$age, dta$educ)
dta$educ_gender <- interaction(dta$educ, dta$gender)

grouped <- dta %>%
    group_by(age, gender, educ, age_gender, age_educ,
             educ_gender, employstatus, ccaa) %>%
    summarise(mean=mean(P))

And we do the same thing for our sample frame:

frame$age_gender <- interaction(frame$age, frame$gender)
frame$age_educ <- interaction(frame$age, frame$educ)
frame$educ_gender <- interaction(frame$educ, frame$gender)

gframe <- frame %>%
    group_by(age, gender, educ, age_gender, age_educ,
             educ_gender, employstatus, ccaa) %>%
    summarize(N=n())

The only thing that remains is to adjust the counts, which in this case we do by merging the two datasets so that we have everything in a nice format for analysis and further exploration:

combined <- merge(grouped,
                  gframe,
                  by=c("age",
                       "gender",
                       "educ",
                       "age_educ",
                       "age_gender",
                       "educ_gender",
                       "employstatus",
                       "ccaa"))

estimate <- combined %>%
    group_by(ccaa, age) %>%
    summarize(estimated=sum(N * mean)/sum(N))

The number that we would obtain in our dataset without any MRP would be:

raw <- dta %>%
    group_by(ccaa, age) %>%
    summarize(ideology=mean(ideology),
              size=n())
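The plot below expects a long-format object `full` that stacks the MRP estimates and the raw means, with a `key` column distinguishing the two, a `value` column for the ideology score, and the raw cell `size`. Its construction is not shown above; a minimal base-R sketch of the pattern, on hypothetical stand-ins for `estimate` and `raw`:

```r
# Toy stand-ins for the `estimate` and `raw` objects built above
# (the CCAA, ages, and values here are invented for illustration).
estimate_toy <- data.frame(ccaa="rioja", age=c("18-34", "55+"),
                           estimated=c(3.9, 4.4))
raw_toy <- data.frame(ccaa="rioja", age=c("18-34", "55+"),
                      ideology=c(3.2, 4.8), size=c(12, 41))

# Merge on the group identifiers, then stack into long format:
# one row per (cell, key) pair.
wide <- merge(estimate_toy, raw_toy, by=c("ccaa", "age"))
full <- rbind(
    data.frame(wide[, c("ccaa", "age", "size")],
               key="estimated", value=wide$estimated),
    data.frame(wide[, c("ccaa", "age", "size")],
               key="raw", value=wide$ideology)
)
full
```

Applied to the real `estimate` and `raw`, this yields the `key`, `value`, and `size` columns that the `ggplot` call maps to colour, y, and point size.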

We can see the effect of MRP on the estimates:

p <- ggplot(full, aes(x=age, y=value, colour=key, size=size))
pq <- p + geom_point(shape=1) +
    facet_wrap(~ ccaa) +
    labs(x="Age group", y="Self-placement in 1-10",
         title="Older voters are more conservative (but not much)") +
    theme_bw() +
    scale_y_continuous(limits=c(2, 5.75)) +
    theme(axis.text.x=element_text(angle=60, hjust=1)) +
    scale_colour_discrete(name="Ideology",
                          labels=c("Estimated", "Raw")) +
    scale_size_continuous(name="Cell size")