Autonomy has been described as a fundamental human need, and satisfying this need is an important motivator of behavior (Deci & Ryan, 2000). Autonomy can be supported in various ways, and autonomy-supportive environments are associated with more successful behavior change (Ng et al., 2012). Framing information and decisions in autonomy-supportive ways, emphasizing choice and agency, may increase receptivity to health messages and promote behavior change.

In this study, we assess the degree to which framing health messages related to COVID-19 in ways that support autonomy improves message effectiveness and sharing intentions.

The primary goal of this study was to further understand the ways in which autonomy-supportive message framing, and autonomous functioning more generally, promote sharing of COVID-19 related health messages. This study is an extension of the analyses reported in COVID-19 Full Study 1: Autonomy Analyses, and the preregistration is available on OSF.

This study used a completely within-person design to increase power. We conducted power analyses using the PANGEA power calculator and estimated that we would have >80% power to detect an effect of d = .12 with 150 people. Because we expected that 15-25% of participants would be excluded, we recruited a total of 200 participants. Exclusions were based on the criteria outlined in the standard operating procedure for this project (https://osf.io/xwbhu/).
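As a quick check on the recruitment target (the variable names below are illustrative, not part of the analysis code), the planned sample follows directly from the expected exclusion rate:

n_analyzed = 150          # analyzed sample giving >80% power per the PANGEA estimate
max_exclusion_rate = 0.25 # upper bound of the expected exclusion rate
ceiling(n_analyzed / (1 - max_exclusion_rate)) # 200 participants to recruit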

Each participant saw 5 autonomously framed health messages and 5 control health messages about social distancing. These messages were randomly sampled from a pool of 10 possible health messages per condition, which were selected from Study 1 (which included 15 messages) because they showed the greatest differentiation between conditions. Each message was created to look like an Instagram post that includes a visual message about COVID-19 accompanied by a “post” about the message. For each message, the post begins with the same stem (e.g., “Staying home protects our community by stopping the spread of #covid19.”). The control condition contains this stem only, whereas the experimental condition contains additional text framed using autonomous language. All message stimuli can be viewed here. See below for example messages.

prep and check data


Tidy the data and check the number of item responses per condition for each measure

## load packages
# * install any missing packages before loading

if (!require(tidyverse)) {
  install.packages('tidyverse')
  library(tidyverse)
}
if (!require(lmerTest)) {
  install.packages('lmerTest')
  library(lmerTest)
}
if (!require(ggeffects)) {
  install.packages('ggeffects')
  library(ggeffects)
}
if (!require(fastDummies)) {
  install.packages('fastDummies')
  library(fastDummies)
}
if (!require(knitr)) {
  install.packages('knitr')
  library(knitr)
}
if (!require(kableExtra)) {
  install.packages('kableExtra')
  library(kableExtra)
}
if (!require(emo)) {
  devtools::install_github("hadley/emo") # emo is a GitHub-only package
  library(emo)
}
devtools::install_github("dcosme/specr", ref = "plotmods")
library(specr)

## define palettes

palette_cond = wesanderson::wes_palette("Zissou1", n = 2, type = "continuous")
palette_rel_motiv = wesanderson::wes_palette("Zissou1", n = 8, "continuous")[c(8,3:5,7)]
palette_geo = wesanderson::wes_palette("Zissou1", n = 3, type = "continuous")

## load cleaned data
# * Data was cleaned using the `data_cleaning.Rmd` script.

data = read.csv("covid19_study1a_clean_long.csv", stringsAsFactors = FALSE)


## tidy data for analysis

items = read.csv("item_text.csv", stringsAsFactors = FALSE)
items_surveys = items %>% 
  select(-item, -text) %>%
  unique()

messages = data %>%
  filter(grepl("msg", survey_name)) %>%
  filter(!grepl("time", survey_name)) %>%
  mutate(value = as.numeric(value)) %>%
  extract(item, c("condition", "survey_name", "item"), "(.*)_(msg_.*)_([0-9]{2})") %>%
  spread(survey_name, value) %>%
  mutate(msg_favorability = msg_positive - msg_negative) %>%
  select(-contains("positive"), -contains("negative")) %>%
  gather(survey_name, value, contains("msg")) %>%
  mutate(item = sprintf("%s_%s", survey_name, item)) %>%
  group_by(survey_name) %>%
  mutate(mean = mean(value, na.rm = TRUE),
         sd3 = 3*sd(value, na.rm = TRUE),
         value = ifelse(!grepl("familiarity", item) & value > mean + sd3, mean + sd3, value), # winsorize outliers
         value = ifelse(!grepl("familiarity", item) & value < mean - sd3, mean - sd3, value)) %>%
  select(-c(mean, sd3))

data_tidy = data %>%
  filter(!grepl("time", survey_name)) %>%
  filter(grepl("^agency|autonomy", survey_name)) %>% 
  mutate(value = as.numeric(value),
         value = ifelse(item == "agency_2", abs(value - 8), value), # reverse-code agency item
         value = ifelse(item %in% c("purpose_2", "purpose_4", "purpose_5", "purpose_6"), abs(value - 6), value), # reverse-code purpose
         survey_name = ifelse(item %in% c("autonomy_1", "autonomy_2", "autonomy_3", "autonomy_4", "autonomy_5"),
                              "IAF_autonomous", 
                       ifelse(item %in% c("autonomy_6", "autonomy_7", "autonomy_8", "autonomy_9", "autonomy_10"),
                              "IAF_controlling", survey_name))) %>% 
  bind_rows(messages) %>%
  ungroup() %>%
  mutate(group = factor(group, levels = c("control_autonomous", "autonomous_control"))) %>%
  left_join(., select(items, -survey_name),  by = "item") %>%
  mutate(text = ifelse(survey_name == "msg_familiarity", filter(items, item == "msg_familiarity")$text,
                ifelse(survey_name == "msg_rel_self", filter(items, item == "msg_rel_self")$text,
                ifelse(survey_name == "msg_rel_social", filter(items, item == "msg_rel_social")$text,
                ifelse(survey_name == "msg_motiv_self", filter(items, item == "msg_motiv_self")$text,
                ifelse(survey_name == "msg_motiv_other", filter(items, item == "msg_motiv_other")$text,
                ifelse(survey_name == "msg_positive", filter(items, item == "msg_positive")$text,
                ifelse(survey_name == "msg_negative", filter(items, item == "msg_negative")$text,
                ifelse(survey_name == "msg_share", filter(items, item == "msg_share")$text, text)))))))),
         measure = ifelse(survey_name == "msg_familiarity", filter(items, item == "msg_familiarity")$measure,
                   ifelse(survey_name == "msg_rel_self", filter(items, item == "msg_rel_self")$measure,
                   ifelse(survey_name == "msg_rel_social", filter(items, item == "msg_rel_social")$measure,
                   ifelse(survey_name == "msg_motiv_self", filter(items, item == "msg_motiv_self")$measure,
                   ifelse(survey_name == "msg_motiv_other", filter(items, item == "msg_motiv_other")$measure,
                   ifelse(survey_name == "msg_positive", filter(items, item == "msg_positive")$measure,
                   ifelse(survey_name == "msg_negative", filter(items, item == "msg_negative")$measure,
                   ifelse(survey_name == "msg_share", filter(items, item == "msg_share")$measure, measure)))))))))


control_vars = data %>%
  filter(grepl("state|gender|^age$|politics_party|politics_conserv|ses_income_household|ses_degree|covid_studies", survey_name)) %>%
  select(SID, survey_name, value) %>%
  unique() %>%
  spread(survey_name, value) %>%
  mutate(state = as.factor(state),
         gender = recode(gender, "1" = "male", "2" = "female", "3" = "other", "4" = "prefer not to say"),
         age = scale(as.integer(age), center = TRUE, scale = FALSE), # mean center age
         ses_degree = factor(ses_degree),
         ses_income_household = ifelse(ses_income_household %in% c("10", "11"), NA, ses_income_household),
         ses_income_household = factor(ses_income_household),
         politics_conserv = scale(as.integer(politics_conserv), center = TRUE, scale = FALSE),
         politics_party = scale(as.integer(politics_party), center = TRUE, scale = FALSE),
         covid_studies = ifelse(covid_studies == 1273, NA, as.numeric(covid_studies)),
         covid_studies = log(covid_studies),
         covid_studies = ifelse(covid_studies == -Inf, NA, covid_studies))

data_person = data_tidy %>%
  filter(grepl("msg|agency|autonomy", item)) %>%
  group_by(group, condition, SID, survey_name) %>%
  summarize(value = mean(value, na.rm = TRUE)) %>%
  group_by(survey_name) %>%
  mutate(mean = mean(value, na.rm = TRUE),
         sd3 = 3*sd(value, na.rm = TRUE),
         value = ifelse(!grepl("msg", survey_name) & value > mean + sd3, mean + sd3, value), # winsorize outliers
         value = ifelse(!grepl("msg", survey_name) & value < mean - sd3, mean - sd3, value)) %>%
  select(-c(mean, sd3)) %>%
  left_join(., items_surveys, by = "survey_name") %>%
  mutate(measure = ifelse(survey_name == "IAF_autonomous", "Index of Autonomous Functioning: self-congruence/authorship",
                 ifelse(survey_name == "IAF_controlling", "Index of Autonomous Functioning: susceptibility to control", measure)),
         scale = ifelse(grepl("IAF", survey_name), "1 = not at all true, 5 = completely true", scale),
         citation = ifelse(grepl("IAF", survey_name), "Weinstein, Przybylski, & Ryan, 2012", citation))

dvs_covs_controls = data_person %>%
  select(-c(scale, measure, citation)) %>%
  filter(grepl("^agency|IAF", survey_name)) %>%
  group_by(survey_name) %>%
  mutate(value = scale(value)) %>% #scale within survey
  spread(survey_name, value) %>%
  left_join(., control_vars, by = "SID") %>%
  select(-group, -condition)

data_mod = messages %>%
  group_by(SID, survey_name) %>%
  extract(item, "message", "msg_.*_([0-9]{2})", remove = FALSE) %>%
  mutate(trial = row_number()) %>%
  select(-item) %>%
  group_by(survey_name) %>%
  mutate(value = scale(value)) %>% #scale within survey
  spread(survey_name, value) %>%
  left_join(., dvs_covs_controls) %>%
  mutate(SID = as.factor(SID)) %>%
  ungroup() %>%
  select(-msg_familiarity) %>%
  fastDummies::dummy_cols(., select_columns = "condition") %>%
  gather(var, val, contains("condition")) %>%
  mutate(var = gsub("condition_", "", var)) %>%
  spread(var, val) %>%
  mutate(condition = factor(condition, levels = c("control", "autonomous")),
         autonomous = as.numeric(autonomous),
         control = as.numeric(control),
         group = factor(group, levels = c("control_autonomous", "autonomous_control")))

data_mod_person = messages %>%
  group_by(SID, survey_name) %>%
  extract(item, "message", "msg_.*_([0-9]{2})", remove = FALSE) %>%
  mutate(trial = row_number()) %>%
  select(-item) %>%
  group_by(study, group, condition, SID, survey_name) %>%
  summarize(value = mean(value, na.rm = TRUE)) %>% # take the mean across messages
  group_by(survey_name) %>%
  mutate(value = scale(value)) %>% # scale within survey
  spread(survey_name, value) %>%
  full_join(., dvs_covs_controls) %>%
  mutate(SID = as.factor(SID)) %>%
  fastDummies::dummy_cols(., select_columns = "condition") %>%
  gather(var, val, contains("condition_")) %>%
  mutate(var = gsub("condition_", "", var)) %>%
  spread(var, val) %>%
  mutate(condition = factor(condition, levels = c("control", "autonomous")),
         autonomous = as.numeric(autonomous),
         control = as.numeric(control),
         group = factor(group, levels = c("control_autonomous", "autonomous_control")))

## demographics
demo = data %>%
  filter(grepl("gender|race|hispanic_latino|ses_degree|income_household", item)) %>%
  ungroup() %>%
  select(study, SID, item, value) %>%
  mutate(value = ifelse(grepl("hispanic_latino", item), recode(value, "1" = "yes", "0" = "no"), value),
         value = ifelse(grepl("gender", item), recode(value, "1" = "male", "2" = "female", "3" = "other", "4" = "would rather not say"), value),
         value = ifelse(grepl("race", item), recode(value, "1" = "White", "2" = "Black or African American", 
                                                    "3" = "Asian", "4" = "American Indian or Alaskan Native", 
                                                    "5" = "Native Hawaiian or Other Pacific Islander", "6" = "Other"), value),
         value = ifelse(grepl("degree", item), recode(value, "1" = "Less than high school", "2" = "High school graduate (diploma)", 
                                                    "3" = "High school graduate (GED)", "4" = "Some college (1-4 years, no degree)", 
                                                    "5" = "Associate's degree (including occupational or academic degrees)", 
                                                    "6" = "Bachelor's degree (BA, BS, etc)", "7" = "Master's degree (MA, MS, MENG, MSW, etc)",
                                                    "8" = "Professional school degree (MD, DDC, JD, etc)",
                                                    "9" = "Doctorate degree (PhD, EdD, etc)"), value),
         value = ifelse(grepl("household", item), recode(value, "1" = "Less than $5,000", "2" = "$5,000 through $11,999", 
                                                    "3" = "$12,000 through $15,999", "4" = "$16,000 through $24,999", 
                                                    "5" = "$25,000 through $34,999", "6" = "$35,000 through $49,999",
                                                    "7" = "$50,000 through $74,999", "8" = "$75,000 through $99,999",
                                                    "9" = "$100,000 and greater", "10" = "NA", "11" = "NA"), value),
         item = gsub("hispanic_latino", "Hispanic / Latinx", item),
         item = gsub("ses_income_household", "household income", item),
         item = gsub("ses_degree", "highest degree completed", item)) %>%
  group_by(study, item, value) %>%
  summarize(n = n()) %>%
  ungroup() %>%
  rename(" " = item)

# states
states = data %>%
  filter(grepl("state", item)) %>%
  select(study, SID, item, value) %>%
  unique() %>%
  spread(item, value) %>%
  group_by(study, state) %>%
  summarize(n = n())

## define functions

plot_cond = function(data, survey, item=TRUE, group=FALSE, palette=palette_cond) {
  if (item == FALSE) {
    if (group == TRUE) {
      data %>%
      filter(grepl(!!(survey), survey_name)) %>%
      mutate(value = as.numeric(value),
             condition = ifelse(is.na(condition), "none", condition)) %>%
      ggplot(aes(survey_name, value, color = condition, shape = group)) +
      stat_summary(fun.data = "mean_cl_boot", geom = "pointrange", position = position_dodge(width = .5)) +
      scale_color_manual(values = palette) +
      labs(x = "", y = "value\n") +
      theme_minimal() +
      theme(axis.text.x = element_text(angle = 45, hjust = 1),
            legend.position = "top")
      
    } else {
      
    data %>%
    filter(grepl(!!(survey), survey_name)) %>%
    mutate(value = as.numeric(value),
           condition = ifelse(is.na(condition), "none", condition)) %>%
    ggplot(aes(survey_name, value, color = condition)) +
    stat_summary(fun.data = "mean_cl_boot", geom = "pointrange", position = position_dodge(width = .5)) +
    scale_color_manual(values = palette) +
    labs(x = "", y = "value\n") +
    theme_minimal() +
    theme(axis.text.x = element_text(angle = 45, hjust = 1),
          legend.position = "top")
    }
  } else {
    data %>%
    filter(grepl(!!(survey), survey_name)) %>%
    mutate(value = as.numeric(value),
           condition = ifelse(is.na(condition), "none", condition)) %>%
    ggplot(aes(item, value, color = condition)) +
    stat_summary(fun.data = "mean_cl_boot", geom = "pointrange", position = position_dodge(width = .5)) +
    scale_color_manual(values = palette) +
    labs(x = "", y = "value\n") +
    theme_minimal() +
    theme(axis.text.x = element_text(angle = 45, hjust = 1),
          legend.position = "top")
  }
}

plot_compare = function(data, survey = ".*", palette, condition = FALSE) {

  rating_means = data %>%
    filter(grepl(!!(survey), survey_name)) %>%
    group_by(survey_name) %>%
    summarize(mean = mean(value))
  
  if (condition == TRUE) {
    data %>%
      filter(grepl(!!(survey), survey_name)) %>%
      ggplot(aes(message, value, color = condition)) +
      stat_summary(fun.data = "mean_cl_boot") +
      coord_flip() +
      geom_hline(data = rating_means, aes(yintercept = mean), linetype = "dotted") +
      facet_grid(~survey_name) +
      labs(x = "message\n", y = "\nvalue") +
      scale_color_manual(values = palette) +
      theme_minimal() +
      theme(legend.position = "top")
    
  } else {
    data %>%
      filter(grepl(!!(survey), survey_name)) %>%
      ggplot(aes(message, value)) +
      stat_summary(fun.data = "mean_cl_boot") +
      coord_flip() +
      geom_hline(data = rating_means, aes(yintercept = mean), linetype = "dotted") +
      facet_grid(~survey_name) +
      labs(x = "message\n", y = "\nvalue") +
      scale_color_manual(values = palette) +
      theme_minimal() +
      theme(legend.position = "top")
  }
}

plot_desc = function(data, survey, item=TRUE, condition=FALSE,
                     palette=palette_cond, min=1, max=7,
                     text_size=3, alpha=.5) {

    source("https://gist.githubusercontent.com/benmarwick/2a1bb0133ff568cbe28d/raw/fb53bd97121f7f9ce947837ef1a4c65a73bffb3f/geom_flat_violin.R")
    
    labels = data %>%
      filter(grepl(!!(survey), survey_name)) %>%
      group_by(survey_name, measure) %>%
      summarize(y = (max + min) / 2) %>%
      mutate(condition = "autonomous")
    
    if (condition == FALSE) {
      data %>%
        filter(grepl(!!(survey), survey_name)) %>%
        ggplot(aes(survey_name, value)) +
        geom_flat_violin(position = position_nudge(x = .1, y = 0), color = FALSE, fill = palette[1]) +
        geom_point(position = position_jitter(width = .05, height = .05), size = .5, alpha = alpha, color = palette[1]) +
        geom_boxplot(width = .1, outlier.shape = NA, alpha = .25) +
        geom_text(data = labels, aes(x = survey_name, y = y, label = measure), nudge_x = .55, size = text_size) +
        coord_flip() +
        scale_x_discrete(expand = expansion(add = c(0, .75))) +
        scale_y_continuous(limits = c(min, max), breaks = seq(min, max, 1)) +
        labs(x = "", y = "rating") +
        theme_minimal() +
        theme(axis.text.y = element_blank(),
              legend.position = "top")

    } else {
      
      data %>%
        filter(grepl(!!(survey), survey_name)) %>%
        ggplot(aes(survey_name, value, fill = condition)) +
        geom_flat_violin(position = position_nudge(x = .1, y = 0), alpha = .3, color = FALSE) +
        geom_point(aes(color = condition), position = position_jitter(width = .05, height = .05), size = .5, alpha = alpha) +
        geom_boxplot(width = .1, outlier.shape = NA, alpha = .25) +
        geom_text(data = labels, aes(x = survey_name, y = y, label = measure), nudge_x = .55, size = text_size) +
        coord_flip() +
        scale_x_discrete(expand = expansion(add = c(0, .75))) +
        scale_y_continuous(limits = c(min, max), breaks = seq(min, max, 1)) +
        scale_fill_manual(values = palette_cond) +
        scale_color_manual(values = palette_cond) +
        labs(x = "", y = "rating") +
        theme_minimal() +
        theme(axis.text.y = element_blank(),
              legend.position = "top")
    }
}

table_desc = function(data, survey, condition=FALSE, item=TRUE, message=FALSE) {
  
  if (item == FALSE) {
    
    if (condition == TRUE) {
      
      data %>%
        filter(grepl(!!(survey), survey_name)) %>%
        filter(!is.na(value)) %>%
        group_by(survey_name, measure, condition) %>%
        summarize(n = n(),
                  min = min(value, na.rm = TRUE),
                  max = max(value, na.rm = TRUE),
                  mean = mean(value, na.rm = TRUE),
                  median = median(value, na.rm = TRUE),
                  sd = sd(value, na.rm = TRUE)) %>%
        mutate_if(is.numeric, round, 2) %>%
        DT::datatable(rownames = FALSE, extensions = 'FixedColumns', 
                      options = list(scrollX = TRUE,
                                     scrollY = TRUE,
                                     fixedColumns = list(leftColumns = 2)))
      
    } else {
    
      data %>%
        filter(grepl(!!(survey), survey_name)) %>%
        filter(!is.na(value)) %>%
        group_by(survey_name, measure) %>%
        summarize(n = n(),
                  min = min(value, na.rm = TRUE),
                  max = max(value, na.rm = TRUE),
                  mean = mean(value, na.rm = TRUE),
                  median = median(value, na.rm = TRUE),
                  sd = sd(value, na.rm = TRUE)) %>%
        mutate_if(is.numeric, round, 2) %>%
        DT::datatable(rownames = FALSE, extensions = 'FixedColumns', 
                      options = list(scrollX = TRUE,
                                     scrollY = TRUE,
                                     fixedColumns = list(leftColumns = 2)))
    }
    
  } else {
  
    data %>%
      filter(grepl(!!(survey), survey_name)) %>%
      filter(!is.na(value)) %>%
      group_by(text) %>%
      summarize(n = n(),
                min = min(value, na.rm = TRUE),
                max = max(value, na.rm = TRUE),
                mean = mean(value, na.rm = TRUE),
                median = median(value, na.rm = TRUE),
                sd = sd(value, na.rm = TRUE)) %>%
      mutate_if(is.numeric, round, 2) %>%
      DT::datatable(rownames = FALSE, extensions = 'FixedColumns', 
                    options = list(scrollX = TRUE,
                                   scrollY = TRUE,
                                   fixedColumns = list(leftColumns = 2)))
  }
}

data_tidy %>%
  mutate(value = as.numeric(value)) %>%
  group_by(condition, survey_name) %>%
  summarize(n = n()) %>%
  spread(condition, n)

demographics

sample size

Before exclusions = 197
After exclusions = 150

geographic distribution

states %>%
  usmap::plot_usmap(data = ., values = "n", color = "grey50") + 
  scale_fill_gradient2(low = palette_geo[1], mid = palette_geo[2], midpoint = max(states$n) / 2, high = palette_geo[3], 
                       name = "", na.value = NA, limits = c(min(states$n), max(states$n)), breaks = seq(0, max(states$n), 5)) + 
  theme(legend.position = "right")

age

data %>%
  filter(item == "age") %>%
  summarize(`age range` = sprintf("%s - %s", min(value, na.rm = TRUE), max(value, na.rm = TRUE)),
            `mean age` = mean(as.numeric(value), na.rm = TRUE),
            `sd age` = sd(as.numeric(value), na.rm = TRUE)) %>%
  kable(digits = 1) %>%
  kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
age range mean age sd age
20 - 70 38.2 11.9

gender

demo %>%
  filter(` ` == "gender") %>%
  ungroup() %>%
  select(-study, -` `) %>%
  rename("gender" = value) %>%
  mutate(total = sum(n),
         percent = sprintf("%.1f%s", (n / total) * 100, "%")) %>%

  kable(digits = 1) %>%
    kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
gender n total percent
female 60 150 40.0%
male 89 150 59.3%
would rather not say 1 150 0.7%

race and ethnicity

demo %>%
  filter(` ` == "Hispanic / Latinx") %>%
  ungroup() %>%
  select(-study) %>%
  mutate(total = sum(n),
         percent = sprintf("%.1f%s", (n / total) * 100, "%")) %>%
  select(-n, -total) %>%
  spread(value, percent) %>%
  kable(digits = 1) %>%
  kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
no yes
Hispanic / Latinx 83.3% 16.7%
demo %>%
  filter(` ` == "race") %>%
  ungroup() %>%
  select(-study) %>%
  mutate(total = sum(n),
         percent = sprintf("%.1f%s", (n / total) * 100, "%")) %>%
  select(value, percent) %>%
  rename("race" = value) %>%
  kable(digits = 1) %>%
  kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
race percent
American Indian or Alaskan Native 2.0%
Asian 8.0%
Black or African American 12.0%
Other 3.3%
White 74.7%

education

demo %>%
  filter(` ` == "highest degree completed") %>%
  ungroup() %>%
  select(-study) %>%
  mutate(total = sum(n),
         percent = sprintf("%.1f%s", (n / total) * 100, "%")) %>%
  mutate(value = factor(value, levels = c("Less than high school", "High school graduate (diploma)", "High school graduate (GED)", 
                                          "Some college (1-4 years, no degree)", "Associate's degree (including occupational or academic degrees)",
                                          "Bachelor's degree (BA, BS, etc)", "Master's degree (MA, MS, MENG, MSW, etc)",
                                          "Professional school degree (MD, DDC, JD, etc)", "Doctorate degree (PhD, EdD, etc)"))) %>%
  arrange(value) %>%
  select(value, percent) %>%
  rename("highest degree completed" = value) %>%
  kable(digits = 1) %>%
  kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
highest degree completed percent
High school graduate (diploma) 6.0%
High school graduate (GED) 3.3%
Some college (1-4 years, no degree) 20.0%
Associate’s degree (including occupational or academic degrees) 8.7%
Bachelor’s degree (BA, BS, etc) 49.3%
Master’s degree (MA, MS, MENG, MSW, etc) 10.0%
Professional school degree (MD, DDC, JD, etc) 2.0%
Doctorate degree (PhD, EdD, etc) 0.7%

socioeconomic status

demo %>%
  filter(` ` == "household income") %>%
  ungroup() %>%
  select(-study) %>%
  mutate(total = sum(n),
         percent = sprintf("%.1f%s", (n / total) * 100, "%")) %>%
  mutate(value = ifelse(is.na(value), "not reported", value),
         value = factor(value, levels = c("Less than $5,000", "$5,000 through $11,999", "$12,000 through $15,999", "$16,000 through $24,999", 
                                          "$25,000 through $34,999", "$35,000 through $49,999", "$50,000 through $74,999", "$75,000 through $99,999",
                                          "$100,000 and greater", "not reported"))) %>%
  arrange(value) %>%
  select(value, percent) %>%
  rename("household income" = value) %>%
  kable(digits = 1) %>%
  kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
household income percent
Less than $5,000 2.0%
$5,000 through $11,999 1.3%
$12,000 through $15,999 4.0%
$16,000 through $24,999 6.0%
$25,000 through $34,999 14.0%
$35,000 through $49,999 18.0%
$50,000 through $74,999 25.3%
$75,000 through $99,999 13.3%
$100,000 and greater 14.7%
not reported 1.3%


visualize variables

distributions and descriptives

message-level items

Below are tables and density plots for the distribution of average responses for each message item as a function of condition.

scale
1 = strongly disagree, 7 = strongly agree

For each person and item, ratings were averaged across messages.

items %>%
  filter(grepl("share|msg.*self|msg_agency", item)) %>%
  select(item, text) %>%
  kable() %>%
  kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
item text
msg_rel_self This message is relevant to me
msg_share I would share this message on social media.
msg_motiv_self This message motivates me to avoid physical contact with others.
msg_agency This message makes me feel that my personal actions can have a positive impact.

table

table_desc(data_person, "msg_.*_self|share|msg_agency", item = FALSE, condition = TRUE)

figure

plot_desc(data_person, "msg_.*_self|share|msg_agency", condition = TRUE, palette = palette_cond)

person-level items

agency

measure
Agency in mitigating the spread of COVID-19

question text
To what extent do you agree or disagree with the following statements?

scale
1 = strongly disagree, 7 = strongly agree

In all analyses, average agency for each person was computed by reverse scoring agency_2 and taking the mean across all items.
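For reference, a minimal sketch of this computation from the raw long-format data (the main pipeline does the equivalent inside data_tidy and data_person; agency_person is an illustrative name), assuming the 1-7 response scale:

agency_person = data %>%
  filter(grepl("^agency_[0-9]+$", item)) %>%
  mutate(value = as.numeric(value),
         value = ifelse(item == "agency_2", 8 - value, value)) %>% # reverse-score agency_2 on the 1-7 scale
  group_by(SID) %>%
  summarize(agency = mean(value, na.rm = TRUE)) # person-level mean across agency items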

items %>%
  filter(grepl("^agency_[1-3]{1}", item)) %>%
  select(item, text) %>%
  kable() %>%
  kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
item text
agency_1 I, personally, have the ability to reduce further outbreak of COVID-19
agency_2 I am powerless to reduce further outbreak of COVID-19
agency_3 My actions matter for reducing further outbreak of COVID-19
table
table_desc(data_person, "^agency", item = FALSE, condition = TRUE)
figure
plot_desc(data_person, "^agency", condition = FALSE, palette = palette_cond)

autonomy

measure
Index of Autonomous Functioning: self-congruence/authorship

citation
Weinstein, Przybylski, & Ryan, 2012

question text
Please indicate how true each statement is of your experiences on the whole. Remember that there are no right or wrong answers. Please answer according to what really reflects your experience rather than what you think your experience should be.

scale
1 = not at all true, 5 = completely true

In all analyses, average autonomous functioning for each person was computed by taking the mean across all items in the self-congruence/authorship subscale.
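Similarly, a minimal sketch of this person-level score (computed in the main pipeline as IAF_autonomous; iaf_person is an illustrative name):

iaf_person = data %>%
  filter(item %in% paste0("autonomy_", 1:5)) %>% # self-congruence/authorship items
  mutate(value = as.numeric(value)) %>%
  group_by(SID) %>%
  summarize(IAF_autonomous = mean(value, na.rm = TRUE)) # no reverse-scored items in this subscale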

items %>%
  filter(grepl("authorship", measure)) %>%
  filter(!item == "autonomy_text") %>%
  select(item, text) %>%
  kable() %>%
  kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
item text
autonomy_1 My decisions represent my most important values and feelings.
autonomy_2 My actions are congruent with who I really am.
autonomy_3 My whole self stands behind the important decisions I make.
autonomy_4 My decisions are steadily informed by things I want or care about.
autonomy_5 I strongly identify with the things that I do.
table
table_desc(data_person, "IAF_autonomous", item = FALSE, condition = TRUE)
figure
plot_desc(data_person, "IAF_autonomous", condition = FALSE, palette = palette_cond)

average condition effects

In these plots, we average across message responses for each item within person and plot the difference between conditions.
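A minimal sketch of the quantity being plotted, using the person-level averages in data_person created above (cond_means is an illustrative name):

cond_means = data_person %>%
  filter(grepl("msg_.*_self|share|msg_agency", survey_name)) %>%
  group_by(survey_name, condition) %>%
  summarize(mean = mean(value, na.rm = TRUE)) %>%
  spread(condition, mean) %>%
  mutate(diff = autonomous - control) # condition difference for each message-level item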

across counter-balancing group

There do not appear to be differences between conditions for these message-level items.

plot_cond(data_person, "msg_.*_self|share|msg_agency", item = FALSE, palette = palette_cond)

by counterbalancing group

It looks like the group that saw the autonomous messages first rated the messages higher than the group that saw the control messages first. This might indicate carryover effects from one condition to the other, but it could also just be noise. However, adding group to the models does not substantively change the results (a sketch of this check follows the plots below), so I've left it out of the following analyses to be consistent with our preregistered analysis plan.

plot_cond(data_person, "msg_.*_self|share|msg_agency", item = FALSE, group = TRUE, palette = palette_cond)

data_person %>%
  filter(grepl("msg_.*_self|share|msg_agency", survey_name)) %>%
  mutate(value = as.numeric(value),
         condition = ifelse(is.na(condition), "none", condition)) %>%
  ggplot(aes(survey_name, value, shape = group)) +
  stat_summary(fun.data = "mean_cl_boot", geom = "pointrange", position = position_dodge(width = .5)) +
  scale_color_manual(values = palette_cond) +
  labs(x = "", y = "value\n") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1),
        legend.position = "top")
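As a sketch of that robustness check (not part of the preregistered models; share_group is an illustrative name), counterbalancing group can be crossed with condition:

share_group = lmer(msg_share ~ condition * group + (1 + condition | SID) + (1 | message), data = data_mod)
summary(share_group)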


preregistered analyses


For all analyses, the data are grand-mean centered and scaled in standard units.
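Concretely, each variable is standardized with scale() within survey (as in the data_mod pipeline above), which for a single variable amounts to:

z = function(x) (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE) # grand-mean center, unit SD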

autonomy hypotheses

hypothesis 1

Hypothesis: Compared to the control condition, autonomously framed messages will be rated as more likely to be shared.

❌ These data are not consistent with this hypothesis. Based on the Bayes factor, there is very strong evidence for the null.

run model

share_1 = lmer(msg_share ~ 1 + condition + (1 + condition | SID) + (1 | message), data = data_mod)
summary(share_1)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: msg_share ~ 1 + condition + (1 + condition | SID) + (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 3006.7
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.1790 -0.3706  0.0466  0.4788  4.0542 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.704336 0.83925       
##           conditionautonomous 0.028574 0.16904  -0.24
##  message  (Intercept)         0.007781 0.08821       
##  Residual                     0.308572 0.55549       
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                      Estimate Std. Error        df t value Pr(>|t|)
## (Intercept)          -0.02274    0.07672 122.43400  -0.296    0.767
## conditionautonomous   0.04462    0.03188 146.84915   1.400    0.164
## 
## Correlation of Fixed Effects:
##             (Intr)
## conditntnms -0.262

bayes factor using BIC

Evidence for H0 (no condition effect)

share_1_null = lmer(msg_share ~ 1 + (1 + condition | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(share_1_null, share_1, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
1 + (1 + condition | SID) + (1 | message) 182.1372
1 + condition + (1 + condition | SID) + (1 | message) 1.0000
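For reference, these Bayes factors are computed from the models' BICs, so the value in favor of the null can be approximated directly from the fitted models (small discrepancies are possible because the models above were fit with REML):

# BIC approximation: BF01 = exp((BIC_alternative - BIC_null) / 2)
exp((BIC(share_1) - BIC(share_1_null)) / 2)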

plot predicted effects

ggeffects::ggpredict(share_1, c("condition")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous")) %>%
  ggplot(aes(x = x, y = predicted, color = x)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.3, .4)) +
  labs(x = "", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "none")

hypothesis 2

Hypothesis: Compared to the control condition, autonomously framed messages will be rated as conferring more personal agency.

❌ These data are not consistent with this hypothesis. Based on the Bayes factor, there is very strong evidence for the null.

run model

agency_1 = lmer(msg_agency ~ condition + (1 | SID) + (1 | message), data = data_mod)
summary(agency_1)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: msg_agency ~ condition + (1 | SID) + (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 3538.6
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -5.1360 -0.3426  0.1069  0.4810  2.9577 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  SID      (Intercept) 0.5094   0.7137  
##  message  (Intercept) 0.0178   0.1334  
##  Residual             0.4802   0.6930  
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                       Estimate Std. Error         df t value Pr(>|t|)
## (Intercept)           -0.01076    0.07627   57.05322  -0.141    0.888
## conditionautonomous    0.01899    0.03585 1336.03331   0.530    0.596
## 
## Correlation of Fixed Effects:
##             (Intr)
## conditntnms -0.234

bayes factor using BIC

Evidence for H0 (no condition effect)

agency_1_null = lmer(msg_agency ~ 1 + (1 | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(agency_1_null, agency_1, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
1 + (1 | SID) + (1 | message) 374.1284
condition + (1 | SID) + (1 | message) 1.0000

plot predicted effects

ggeffects::ggpredict(agency_1, c("condition")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous")) %>%
  ggplot(aes(x = x, y = predicted, color = x)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.3, .4)) +
  labs(x = "", y = "predicted standardized rating\n", title = "agency to mitigate the spread of COVID-19\n") +
  theme_minimal() +
  theme(legend.position = "none")

hypotheses 3 & 5

Hypothesis: Personal agency will be associated with higher sharing intentions, and this relationship will be stronger in the autonomous condition.

✅ These data are consistent with the hypothesis that personal agency is positively associated with sharing.

❌ However, they are not consistent with the hypothesis that condition moderates this effect. Based on the Bayes factor, there is very strong evidence for the null (no moderation).

run model

share_mod_agency = lmer(msg_share ~ condition*msg_agency + (1 + condition | SID) + (1 | message), data = data_mod)
summary(share_mod_agency)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: msg_share ~ condition * msg_agency + (1 + condition | SID) +  
##     (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 2689.6
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.4093 -0.4223  0.0506  0.4681  4.3553 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.589991 0.76811       
##           conditionautonomous 0.019230 0.13867  -0.48
##  message  (Intercept)         0.002711 0.05207       
##  Residual                     0.250431 0.50043       
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                  Estimate Std. Error         df t value
## (Intercept)                      -0.01891    0.06737  142.88724  -0.281
## conditionautonomous               0.03760    0.02826  144.85719   1.331
## msg_agency                        0.34810    0.02362 1139.39538  14.737
## conditionautonomous:msg_agency    0.04043    0.02755  461.58546   1.468
##                                           Pr(>|t|)    
## (Intercept)                                  0.779    
## conditionautonomous                          0.185    
## msg_agency                     <0.0000000000000002 ***
## conditionautonomous:msg_agency               0.143    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) cndtnt msg_gn
## conditntnms -0.353              
## msg_agency   0.004 -0.010       
## cndtntnms:_ -0.002  0.000 -0.584

bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_agency_null = lmer(msg_share ~ msg_agency + condition + (1 + condition | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(share_mod_agency_null, share_mod_agency, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_agency + condition + (1 + condition | SID) + (1 | message) 197.1609
condition * msg_agency + (1 + condition | SID) + (1 | message) 1.0000

plot predicted effects

ggeffects::ggpredict(share_mod_agency, c("condition", "msg_agency [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean", 
                 ifelse(group == "1", sprintf("+%s SD", group), sprintf("%s SD", group))),
         group = factor(group, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "\npersonal agency", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")

hypotheses 4 & 6

Hypothesis: Higher autonomous functioning will be associated with greater message sharing, self-relevance, motivation, and personal agency, as well as higher agency in mitigating the spread of COVID-19.

sharing

✅ These data are consistent with the hypothesis that individual differences in autonomous functioning are positively associated with sharing.

❌ However, they are not consistent with the hypothesis that condition moderates this effect. Based on the Bayes factor, there is very strong evidence for the null (no moderation).

run model
share_mod_auto = lmer(msg_share ~ condition*IAF_autonomous + (1 + condition | SID) + (1 | message), data = data_mod)
summary(share_mod_auto)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: msg_share ~ condition * IAF_autonomous + (1 + condition | SID) +  
##     (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 3005.4
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.1478 -0.3788  0.0455  0.4837  4.0438 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.667025 0.8167        
##           conditionautonomous 0.029511 0.1718   -0.27
##  message  (Intercept)         0.007885 0.0888        
##  Residual                     0.308541 0.5555        
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                     Estimate Std. Error        df t value
## (Intercept)                         -0.02274    0.07515 118.30237  -0.303
## conditionautonomous                  0.04464    0.03198 145.88087   1.396
## IAF_autonomous                       0.20556    0.06994 147.96027   2.939
## conditionautonomous:IAF_autonomous   0.01110    0.03207 145.79402   0.346
##                                    Pr(>|t|)   
## (Intercept)                         0.76274   
## conditionautonomous                 0.16482   
## IAF_autonomous                      0.00382 **
## conditionautonomous:IAF_autonomous  0.72979   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) cndtnt IAF_tn
## conditntnms -0.275              
## IAF_autonms  0.000  0.000       
## cndtnt:IAF_  0.000  0.000 -0.297
bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_auto_null = lmer(msg_share ~ IAF_autonomous + condition + (1 + condition | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(share_mod_auto_null, share_mod_auto, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
IAF_autonomous + condition + (1 + condition | SID) + (1 | message) 453.7614
condition * IAF_autonomous + (1 + condition | SID) + (1 | message) 1.0000
plot predicted effects
ggeffects::ggpredict(share_mod_auto, c("condition", "IAF_autonomous [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean", 
                 ifelse(group == "1", sprintf("+%s SD", group), sprintf("%s SD", group))),
         group = factor(group, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "\nautonomous functioning", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")

self-relevance

✅ These data are consistent with the hypothesis that individual differences in autonomous functioning are positively associated with self-relevance.

❌ However, they are not consistent with the hypothesis that condition moderates this effect. Based on the Bayes factor, there is very strong evidence for the null (no moderation).

run model
rel_mod_auto = lmer(msg_rel_self ~ condition*IAF_autonomous + (1 + condition | SID) + (1 | message), data = data_mod)
summary(rel_mod_auto)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: msg_rel_self ~ condition * IAF_autonomous + (1 + condition |  
##     SID) + (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 3402.2
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.9740 -0.3498  0.1213  0.4822  3.1993 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.42567  0.6524        
##           conditionautonomous 0.02667  0.1633   -0.20
##  message  (Intercept)         0.02362  0.1537        
##  Residual                     0.43379  0.6586        
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                      Estimate Std. Error         df
## (Intercept)                         -0.023211   0.076018  38.439385
## conditionautonomous                  0.044908   0.036588 146.804575
## IAF_autonomous                       0.356933   0.058653 147.970540
## conditionautonomous:IAF_autonomous   0.007756   0.036703 146.781094
##                                    t value     Pr(>|t|)    
## (Intercept)                         -0.305        0.762    
## conditionautonomous                  1.227        0.222    
## IAF_autonomous                       6.086 0.0000000095 ***
## conditionautonomous:IAF_autonomous   0.211        0.833    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) cndtnt IAF_tn
## conditntnms -0.259              
## IAF_autonms  0.000  0.000       
## cndtnt:IAF_  0.000  0.001 -0.336
bayes factor using BIC

Evidence for H0 (no moderation)

rel_mod_auto_null = lmer(msg_rel_self ~ IAF_autonomous + condition + (1 + condition | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(rel_mod_auto_null, rel_mod_auto, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
IAF_autonomous + condition + (1 + condition | SID) + (1 | message) 411.7667
condition * IAF_autonomous + (1 + condition | SID) + (1 | message) 1.0000
plot predicted effects
ggeffects::ggpredict(rel_mod_auto, c("condition", "IAF_autonomous [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean", 
                 ifelse(group == "1", sprintf("+%s SD", group), sprintf("%s SD", group))),
         group = factor(group, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "\nautonomous functioning", y = "predicted standardized rating\n", title = "self-relevance\n") +
  theme_minimal() +
  theme(legend.position = "top")

self motivation

✅ These data are consistent with the hypothesis that individual differences in autonomous functioning are positively associated with self motivation.

❌ However, they are not consistent with the hypothesis that condition moderates this effect. Based on the Bayes factor, there is very strong evidence for the null (no moderation).

run model
motiv_mod_auto = lmer(msg_motiv_self ~ condition*IAF_autonomous + (1 + condition | SID) + (1 | message), data = data_mod)
summary(motiv_mod_auto)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: msg_motiv_self ~ condition * IAF_autonomous + (1 + condition |  
##     SID) + (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 3531.6
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.7553 -0.3974  0.1199  0.4932  2.9477 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr
##  SID      (Intercept)         0.335618 0.57933      
##           conditionautonomous 0.002865 0.05353  1.00
##  message  (Intercept)         0.016635 0.12898      
##  Residual                     0.491078 0.70077      
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                       Estimate  Std. Error          df
## (Intercept)                           0.002773    0.067497   41.383155
## conditionautonomous                  -0.006999    0.036512 1066.755406
## IAF_autonomous                        0.370463    0.053970  150.717522
## conditionautonomous:IAF_autonomous   -0.012829    0.036624 1067.697695
##                                    t value       Pr(>|t|)    
## (Intercept)                          0.041          0.967    
## conditionautonomous                 -0.192          0.848    
## IAF_autonomous                       6.864 0.000000000163 ***
## conditionautonomous:IAF_autonomous  -0.350          0.726    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) cndtnt IAF_tn
## conditntnms -0.182              
## IAF_autonms  0.000  0.000       
## cndtnt:IAF_  0.000  0.001 -0.228
bayes factor using BIC

Evidence for H0 (no moderation)

motiv_mod_auto_null = lmer(msg_motiv_self ~ IAF_autonomous + condition + (1 + condition | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(motiv_mod_auto_null, motiv_mod_auto, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
IAF_autonomous + condition + (1 + condition | SID) + (1 | message) 5061552009598394355137411809280
condition * IAF_autonomous + (1 + condition | SID) + (1 | message) 1
plot predicted effects
ggeffects::ggpredict(motiv_mod_auto, c("condition", "IAF_autonomous [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean", 
                 ifelse(group == "1", sprintf("+%s SD", group), sprintf("%s SD", group))),
         group = factor(group, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "\nautonomous functioning", y = "predicted standardized rating\n", title = "motivation to practice social distancing\n") +
  theme_minimal() +
  theme(legend.position = "top")

personal agency

✅ These data are consistent with the hypothesis that individual differences in autonomous functioning are positively associated with perceived personal agency.

❌ However, they are not consistent with the hypothesis that condition moderates this effect. Based on the Bayes factor, there is very strong evidence for the null (no moderation).

run model
agency_mod_auto = lmer(msg_agency ~ condition*IAF_autonomous + (1 | SID) + (1 | message), data = data_mod)
summary(agency_mod_auto)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: 
## msg_agency ~ condition * IAF_autonomous + (1 | SID) + (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 3499
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -5.1810 -0.3619  0.1028  0.4831  3.0102 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  SID      (Intercept) 0.35969  0.5997  
##  message  (Intercept) 0.01798  0.1341  
##  Residual             0.48024  0.6930  
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                      Estimate Std. Error         df
## (Intercept)                          -0.01074    0.06954   42.01496
## conditionautonomous                   0.01914    0.03585 1335.34840
## IAF_autonomous                        0.40741    0.05532  184.45531
## conditionautonomous:IAF_autonomous   -0.03414    0.03596 1336.25962
##                                    t value         Pr(>|t|)    
## (Intercept)                         -0.155            0.878    
## conditionautonomous                  0.534            0.593    
## IAF_autonomous                       7.365 0.00000000000571 ***
## conditionautonomous:IAF_autonomous  -0.949            0.343    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) cndtnt IAF_tn
## conditntnms -0.257              
## IAF_autonms  0.000  0.000       
## cndtnt:IAF_  0.000  0.001 -0.324
bayes factor using BIC

Evidence for H0 (no moderation)

agency_mod_auto_null = lmer(msg_agency ~ IAF_autonomous + condition + (1 | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(agency_mod_auto_null, agency_mod_auto, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
IAF_autonomous + condition + (1 | SID) + (1 | message) 273.4785
condition * IAF_autonomous + (1 | SID) + (1 | message) 1.0000
plot predicted effects
ggeffects::ggpredict(agency_mod_auto, c("condition", "IAF_autonomous [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean", 
                 ifelse(group == "1", sprintf("+%s SD", group), sprintf("%s SD", group))),
         group = factor(group, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "\nautonomous functioning", y = "predicted standardized rating\n", title = "perceived personal agency\n") +
  theme_minimal() +
  theme(legend.position = "top")

COVID-19 agency

✅ These data are consistent with the hypothesis that individual differences in autonomous functioning are positively associated with agency in mitigating the spread of COVID-19. Based on the Bayes factor, there is very strong evidence for the alternative hypothesis.

run model
c_agency_mod_auto = lm(agency ~ IAF_autonomous, data = data_mod_person)
summary(c_agency_mod_auto)
bayes factor using BIC

Evidence for H1

c_agency_mod_auto_null =  lm(agency ~ 1, data = data_mod_person)
bayestestR::bayesfactor_models(c_agency_mod_auto_null, c_agency_mod_auto, denominator = 1) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
1 1
IAF_autonomous 50271206734
plot predicted effects
ggeffects::ggpredict(c_agency_mod_auto, c("IAF_autonomous [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = as.character(x),
         x = ifelse(x == "0", "mean", 
                 ifelse(x == "1", sprintf("+%s SD", x), sprintf("%s SD", x))),
         x = factor(x, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = x, y = predicted)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high)) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "\nautonomous functioning", y = "predicted standardized rating\n", title = "agency to mitigate the spread of COVID-19\n") +
  theme_minimal() +
  theme(legend.position = "top")


self-relevance and motivation

hypotheses 1 & 2

Hypothesis: Self-relevance and motivation will be associated with higher sharing intentions, and these relationships will be stronger in the autonomous framing condition.

self-relevance

✅ These data are consistent with the hypothesis that self-relevance is positively associated with sharing.

❌ However, they are not consistent with the hypothesis that condition moderates this effect. Based on the Bayes factor, there is strong evidence for the null (no moderation).

run model
share_mod_rel = lmer(msg_share ~ condition*msg_rel_self + (1 + condition | SID) + (1 | message), data = data_mod)
summary(share_mod_rel)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: msg_share ~ condition * msg_rel_self + (1 + condition | SID) +  
##     (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 2867.2
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.3091 -0.4255  0.0445  0.4866  4.0271 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.61257  0.78267       
##           conditionautonomous 0.02992  0.17298  -0.38
##  message  (Intercept)         0.00401  0.06332       
##  Residual                     0.28188  0.53093       
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                    Estimate Std. Error         df t value
## (Intercept)                        -0.01711    0.06972  135.32256  -0.245
## conditionautonomous                 0.03247    0.03090  146.30165   1.051
## msg_rel_self                        0.23873    0.02577 1092.79689   9.266
## conditionautonomous:msg_rel_self    0.05544    0.03006  432.18172   1.844
##                                             Pr(>|t|)    
## (Intercept)                                   0.8065    
## conditionautonomous                           0.2950    
## msg_rel_self                     <0.0000000000000002 ***
## conditionautonomous:msg_rel_self              0.0658 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) cndtnt msg_r_
## conditntnms -0.334              
## msg_rel_slf  0.009 -0.025       
## cndtntnm:__ -0.005 -0.001 -0.568
bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_rel_null = lmer(msg_share ~ msg_rel_self + condition + (1 + condition | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(share_mod_rel_null, share_mod_rel, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_rel_self + condition + (1 + condition | SID) + (1 | message) 99.72397
condition * msg_rel_self + (1 + condition | SID) + (1 | message) 1.00000
plot predicted effects
ggeffects::ggpredict(share_mod_rel, c("condition", "msg_rel_self [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean", 
                 ifelse(group == "1", sprintf("+%s SD", group), sprintf("%s SD", group))),
         group = factor(group, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "\nself-relevance", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")

self motivation

✅ These data are consistent with the hypothesis that self motivation is positively associated with sharing.

❌ However, they are not consistent with the hypothesis that condition moderates this effect. Based on the bayes factor, there is very strong evidence for the null (no moderation).

run model
share_mod_motiv = lmer(msg_share ~ condition*msg_motiv_self + (1 + condition | SID) + (1 | message), data = data_mod)
summary(share_mod_motiv)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: msg_share ~ condition * msg_motiv_self + (1 + condition | SID) +  
##     (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 2661.9
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.4128 -0.4961  0.0640  0.4776  4.8795 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.581784 0.76275       
##           conditionautonomous 0.010072 0.10036  -0.71
##  message  (Intercept)         0.002308 0.04804       
##  Residual                     0.248711 0.49871       
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                      Estimate Std. Error         df
## (Intercept)                          -0.02358    0.06664  143.60535
## conditionautonomous                   0.04745    0.02707  146.25051
## msg_motiv_self                        0.35380    0.02354 1161.64933
## conditionautonomous:msg_motiv_self    0.04815    0.02677  448.26439
##                                    t value            Pr(>|t|)    
## (Intercept)                         -0.354              0.7240    
## conditionautonomous                  1.753              0.0817 .  
## msg_motiv_self                      15.031 <0.0000000000000002 ***
## conditionautonomous:msg_motiv_self   1.799              0.0727 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) cndtnt msg_m_
## conditntnms -0.384              
## msg_mtv_slf -0.001  0.004       
## cndtntnm:__  0.001  0.000 -0.602
bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_motiv_null = lmer(msg_share ~ msg_motiv_self + condition + (1 + condition | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(share_mod_motiv_null, share_mod_motiv, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_motiv_self + condition + (1 + condition | SID) + (1 | message) 118.7768
condition * msg_motiv_self + (1 + condition | SID) + (1 | message) 1.0000
plot predicted effects
ggeffects::ggpredict(share_mod_motiv, c("condition", "msg_motiv_self [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean", 
                 ifelse(group == "1", sprintf("+%s SD", group), sprintf("%s SD", group))),
         group = factor(group, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "\nmotivation to practice social distancing", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")


exploratory analyses


social predictors

The preregistered analyses focused on self-relevance and motivation for oneself. Here, we explore the effects of social relevance and motivation for others on sharing intentions.

Summary: Both social relevance and perceived motivation for others to practice social distancing were positively associated with sharing intentions, but these effects were not moderated by the message framing condition.

scale
1 = strongly disagree, 7 = strongly agree

items %>%
  filter(grepl("msg.*social|msg.*other", item)) %>%
  select(item, text) %>%
  kable() %>%
  kable_styling(bootstrap_options = "striped", full_width = F, position = "left")
item text
msg_rel_social This message is relevant to other people I know
msg_motiv_other This message would motivate people I know to avoid physical contact with others.

social relevance

These data are consistent with a main effect of social relevance on sharing, but not with the hypothesis that condition moderates the relationship between social relevance and sharing.

run model

share_mod_rel_soc = lmer(msg_share ~ condition*msg_rel_social + (1 | SID) + (1 | message), data = data_mod)
summary(share_mod_rel_soc)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: 
## msg_share ~ condition * msg_rel_social + (1 | SID) + (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 2890.5
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.3557 -0.3804  0.0350  0.5131  3.9004 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  SID      (Intercept) 0.592554 0.76978 
##  message  (Intercept) 0.003614 0.06012 
##  Residual             0.292662 0.54098 
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                      Estimate Std. Error         df
## (Intercept)                          -0.02120    0.06857  145.69036
## conditionautonomous                   0.04170    0.02798 1334.00402
## msg_rel_social                        0.21691    0.02483 1429.70922
## conditionautonomous:msg_rel_social    0.05011    0.02875 1341.98553
##                                    t value            Pr(>|t|)    
## (Intercept)                         -0.309              0.7576    
## conditionautonomous                  1.490              0.1364    
## msg_rel_social                       8.736 <0.0000000000000002 ***
## conditionautonomous:msg_rel_social   1.743              0.0816 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) cndtnt msg_r_
## conditntnms -0.203              
## msg_rel_scl  0.002 -0.007       
## cndtntnm:__ -0.001 -0.001 -0.552

bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_rel_soc_null = lmer(msg_share ~ msg_rel_social + condition + (1 | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(share_mod_rel_soc_null, share_mod_rel_soc, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_rel_social + condition + (1 | SID) + (1 | message) 117.6534
condition * msg_rel_social + (1 | SID) + (1 | message) 1.0000

plot predicted effects

ggeffects::ggpredict(share_mod_rel_soc, c("condition", "msg_rel_social [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean", 
                 ifelse(group == "1", sprintf("+%s SD", group), sprintf("%s SD", group))),
         group = factor(group, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "\nsocial relevance", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")

others’ motivation

These data are consistent with a main effect of others’ motivation on sharing, but not with the hypothesis that condition moderates the relationship between others’ motivation and sharing.

run model

share_mod_motiv_other = lmer(msg_share ~ condition*msg_motiv_other + (1 + condition | SID) + (1 | message), data = data_mod)
summary(share_mod_motiv_other)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: 
## msg_share ~ condition * msg_motiv_other + (1 + condition | SID) +  
##     (1 | message)
##    Data: data_mod
## 
## REML criterion at convergence: 2684.5
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.2097 -0.4749  0.0585  0.5058  4.2638 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.573364 0.75721       
##           conditionautonomous 0.030317 0.17412  -0.46
##  message  (Intercept)         0.002713 0.05208       
##  Residual                     0.248307 0.49830       
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                       Estimate Std. Error         df
## (Intercept)                           -0.01137    0.06652  141.77334
## conditionautonomous                    0.02176    0.02946  147.33209
## msg_motiv_other                        0.35951    0.02420 1132.04445
## conditionautonomous:msg_motiv_other    0.03951    0.02833  454.23182
##                                     t value            Pr(>|t|)    
## (Intercept)                          -0.171               0.865    
## conditionautonomous                   0.739               0.461    
## msg_motiv_other                      14.857 <0.0000000000000002 ***
## conditionautonomous:msg_motiv_other   1.395               0.164    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) cndtnt msg_m_
## conditntnms -0.376              
## msg_mtv_thr  0.011 -0.032       
## cndtntnm:__ -0.007  0.001 -0.589

bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_motiv_other_null = lmer(msg_share ~ msg_motiv_other + condition + (1 + condition | SID) + (1 | message), data = data_mod)
bayestestR::bayesfactor_models(share_mod_motiv_other_null, share_mod_motiv_other, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_motiv_other + condition + (1 + condition | SID) + (1 | message) 210.4741
condition * msg_motiv_other + (1 + condition | SID) + (1 | message) 1.0000

plot predicted effects

ggeffects::ggpredict(share_mod_motiv_other, c("condition", "msg_motiv_other [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean", 
                 ifelse(group == "1", sprintf("+%s SD", group), sprintf("%s SD", group))),
         group = factor(group, levels = c("-1 SD", "mean", "+1 SD"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.7, .8)) +
  labs(x = "others' motivation to practice social distancing", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")


disaggregated effects

The preregistered analyses looked only at the grand-mean-centered effects of the message-level predictors, reflecting deviations from the mean across people. However, these effects conflate within- and between-person variation. Here, we disaggregate them by including two versions of each message-level predictor in the model, reflecting within-person variation (centered within context) and between-person variation (grand-mean centered). The within-person effect is a level-1 predictor reflecting deviations from an individual’s own mean, whereas the between-person effect is a level-2 predictor (each individual’s average), reflecting deviations from the mean across people.

cwc = centered within context
gmc = grand-mean centered
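
Here is a minimal sketch of this decomposition using toy data (not the study data; the analysis code below additionally standardizes each component): each rating is split into the person’s own mean (the between-person, gmc term) and the deviation from that mean (the within-person, cwc term).

# toy illustration of the within/between decomposition (hypothetical values)
toy = data.frame(SID = rep(c("s1", "s2"), each = 3),
                 rating = c(2, 3, 4, 5, 6, 7))

toy %>%
  group_by(SID) %>%
  mutate(person_mean = mean(rating),
         rating_cwc = rating - person_mean) %>%         # deviation from the person's own mean (cwc)
  ungroup() %>%
  mutate(rating_gmc = person_mean - mean(person_mean))  # person mean, centered at the grand mean (gmc)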

In the figures, between-person effects are represented on the x-axis; “mean gmc” reflects the predicted value when ratings are at the average across people, whereas “+/- 1 SD gmc” reflects the predicted values when the ratings are at +/- 1 SD from the average across people. The within-person effects are represented by the facet; “mean cwc” reflects the predicted value when ratings are at the average within-person, whereas “+/- 1 SD cwc” reflects the predicted values when the ratings are at +/- 1 SD from the average within-person.

Summary: Across all models, there were significant positive within- and between-person effects of relevance, motivation, and personal agency on sharing intentions, with the between-person effects being the strongest. Together, these results suggest that individuals who rate the messages as more relevant and motivating on average also report higher sharing intentions, and that, within person, messages perceived as more relevant and motivating than usual are associated with stronger sharing intentions. We also observed small moderating effects of message framing condition on the relationships between sharing intentions and both self-relevance and personal agency, such that individuals who on average rated messages as more self-relevant and as conferring greater personal agency had higher sharing intentions in the autonomous condition than in the control condition.

gmc_subs = messages %>%
  group_by(SID, survey_name) %>%
  extract(item, "message", "msg_.*_([0-9]{2})", remove = FALSE) %>%
  group_by(survey_name, SID) %>%
  summarize(value_gmc = mean(value, na.rm = TRUE)) %>%
  group_by(survey_name) %>%
  mutate(value_gmc = scale(value_gmc),
         survey_name_gmc = sprintf("%s_gmc", survey_name)) %>%
  ungroup() %>%
  select(-survey_name) %>%
  spread(survey_name_gmc, value_gmc)

cwc_subs = messages %>%
  group_by(SID, survey_name) %>%
  extract(item, "message", "msg_.*_([0-9]{2})", remove = FALSE) %>%
  mutate(trial = row_number()) %>%
  select(-item) %>%
  group_by(survey_name, SID) %>%
  mutate(value_cwc = scale(value, scale = FALSE),
         survey_name_cwc = sprintf("%s_cwc", survey_name)) %>%  # center within survey and participant
  group_by(survey_name) %>%
  mutate(value_cwc = scale(value_cwc, scale = TRUE, center = FALSE)) %>%
  ungroup() %>%
  select(-value, -survey_name) %>%
  spread(survey_name_cwc, value_cwc)

data_mod_diss = messages %>%
  group_by(SID, survey_name) %>%
  extract(item, "message", "msg_.*_([0-9]{2})", remove = FALSE) %>%
  mutate(trial = row_number()) %>%
  group_by(survey_name) %>%
  mutate(value = scale(value)) %>%
  select(-item) %>%
  spread(survey_name, value) %>%
  left_join(., gmc_subs) %>%
  left_join(., cwc_subs) %>%
  left_join(., dvs_covs_controls) %>%
  ungroup() %>%
  mutate(SID = as.factor(SID)) %>%
  ungroup() %>%
  select(-contains("msg_familiarity")) %>%
  fastDummies::dummy_cols(., select_columns = "condition") %>%
  gather(var, val, contains("condition")) %>%
  mutate(var = gsub("condition_", "", var)) %>%
  spread(var, val) %>%
  mutate(condition = factor(condition, levels = c("control", "autonomous")),
         autonomous = as.numeric(autonomous),
         control = as.numeric(control),
         group = factor(group, levels = c("control_autonomous", "autonomous_control")),
         order = ifelse((group == "autonomous_control" & condition == "autonomous") | (group == "control_autonomous" & condition == "control"), 0, 1)) %>%
  unique()

self-relevance

run model

share_mod_rel_self = lmer(msg_share ~ msg_rel_self_gmc*condition + msg_rel_self_cwc*condition + (1 + condition | SID) +  (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
summary(share_mod_rel_self)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: msg_share ~ msg_rel_self_gmc * condition + msg_rel_self_cwc *  
##     condition + (1 + condition | SID) + (1 | message)
##    Data: data_mod_diss
## Control: lmerControl(optimizer = "bobyqa")
## 
## REML criterion at convergence: 2867.2
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.3339 -0.4246  0.0420  0.4858  4.0359 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.58812  0.76689       
##           conditionautonomous 0.02993  0.17301  -0.41
##  message  (Intercept)         0.00411  0.06411       
##  Residual                     0.28178  0.53083       
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                        Estimate Std. Error         df
## (Intercept)                            -0.01728    0.06862  133.71559
## msg_rel_self_gmc                        0.35096    0.06577  147.99879
## conditionautonomous                     0.03319    0.03090  145.87692
## msg_rel_self_cwc                        0.15546    0.01976 1285.48777
## msg_rel_self_gmc:conditionautonomous    0.05427    0.03098  145.59084
## conditionautonomous:msg_rel_self_cwc    0.01023    0.02925 1203.63058
##                                      t value            Pr(>|t|)    
## (Intercept)                           -0.252              0.8016    
## msg_rel_self_gmc                       5.336 0.00000034952808930 ***
## conditionautonomous                    1.074              0.2845    
## msg_rel_self_cwc                       7.869 0.00000000000000751 ***
## msg_rel_self_gmc:conditionautonomous   1.752              0.0819 .  
## conditionautonomous:msg_rel_self_cwc   0.350              0.7265    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##              (Intr) msg_rl_slf_g cndtnt msg_rl_slf_c ms___:
## msg_rl_slf_g  0.000                                        
## conditntnms  -0.347  0.000                                 
## msg_rl_slf_c  0.010 -0.001       -0.021                    
## msg_rl_sl_:   0.000 -0.364       -0.001  0.003             
## cndtntn:___  -0.007  0.001       -0.002 -0.697        0.001

bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_rel_self_null = lmer(msg_share ~ msg_rel_self_gmc + msg_rel_self_cwc + condition + (1 + condition | SID) + (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
bayestestR::bayesfactor_models(share_mod_rel_self_null, share_mod_rel_self, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_rel_self_gmc + msg_rel_self_cwc + condition + (1 + condition | SID) + (1 | message) 53697.81
msg_rel_self_gmc * condition + msg_rel_self_cwc * condition + (1 + condition | SID) + (1 | message) 1.00

plot predicted effects

ggeffects::ggpredict(share_mod_rel_self, c("condition", "msg_rel_self_gmc [-1, 0, 1]", "msg_rel_self_cwc [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean gmc", 
                 ifelse(group == "1", sprintf("+%s SD gmc", group), sprintf("%s SD gmc", group))),
         group = factor(group, levels = c("-1 SD gmc", "mean gmc", "+1 SD gmc")),
         facet = as.character(facet),
         facet = ifelse(facet == "0", "mean cwc", 
                 ifelse(facet == "1", sprintf("+%s SD cwc", facet), sprintf("%s SD cwc", facet))),
         facet = factor(facet, levels = c("-1 SD cwc", "mean cwc", "+1 SD cwc"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  facet_grid(~facet) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.9, 1.1)) +
  labs(x = "\nself-relevance", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")

social relevance

run model

share_mod_rel_social = lmer(msg_share ~ msg_rel_social_gmc*condition + msg_rel_social_cwc*condition + (1 + condition | SID) + (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
summary(share_mod_rel_social)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: 
## msg_share ~ msg_rel_social_gmc * condition + msg_rel_social_cwc *  
##     condition + (1 + condition | SID) + (1 | message)
##    Data: data_mod_diss
## Control: lmerControl(optimizer = "bobyqa")
## 
## REML criterion at convergence: 2888.6
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.4114 -0.3950  0.0433  0.4989  4.0405 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.623351 0.78953       
##           conditionautonomous 0.021432 0.14640  -0.47
##  message  (Intercept)         0.003726 0.06104       
##  Residual                     0.286770 0.53551       
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                          Estimate Std. Error         df
## (Intercept)                              -0.02144    0.07008  137.74454
## msg_rel_social_gmc                        0.30560    0.06760  148.01017
## conditionautonomous                       0.04163    0.03017  145.32957
## msg_rel_social_cwc                        0.14303    0.01951 1284.89871
## msg_rel_social_gmc:conditionautonomous    0.04992    0.03029  145.63381
## conditionautonomous:msg_rel_social_cwc    0.01626    0.02965 1207.13207
##                                        t value          Pr(>|t|)    
## (Intercept)                             -0.306             0.760    
## msg_rel_social_gmc                       4.521 0.000012526821244 ***
## conditionautonomous                      1.380             0.170    
## msg_rel_social_cwc                       7.329 0.000000000000408 ***
## msg_rel_social_gmc:conditionautonomous   1.648             0.101    
## conditionautonomous:msg_rel_social_cwc   0.549             0.583    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##              (Intr) msg_rl_scl_g cndtnt msg_rl_scl_c ms___:
## msg_rl_scl_g  0.000                                        
## conditntnms  -0.351  0.000                                 
## msg_rl_scl_c  0.002  0.011       -0.005                    
## msg_rl_sc_:   0.000 -0.365       -0.001 -0.022             
## cndtntn:___  -0.002 -0.007       -0.001 -0.690       -0.005

bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_rel_social_null = lmer(msg_share ~ msg_rel_social_gmc + msg_rel_social_cwc + condition + (1 + condition | SID) + (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
bayestestR::bayesfactor_models(share_mod_rel_social_null, share_mod_rel_social, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_rel_social_gmc + msg_rel_social_cwc + condition + (1 + condition | SID) + (1 | message) 58666.06
msg_rel_social_gmc * condition + msg_rel_social_cwc * condition + (1 + condition | SID) + (1 | message) 1.00

plot predicted effects

ggeffects::ggpredict(share_mod_rel_social, c("condition", "msg_rel_social_gmc [-1, 0, 1]", "msg_rel_social_cwc [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean gmc", 
                 ifelse(group == "1", sprintf("+%s SD gmc", group), sprintf("%s SD gmc", group))),
         group = factor(group, levels = c("-1 SD gmc", "mean gmc", "+1 SD gmc")),
         facet = as.character(facet),
         facet = ifelse(facet == "0", "mean cwc", 
                 ifelse(facet == "1", sprintf("+%s SD cwc", facet), sprintf("%s SD cwc", facet))),
         facet = factor(facet, levels = c("-1 SD cwc", "mean cwc", "+1 SD cwc"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  facet_grid(~facet) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.9, 1.1)) +
  labs(x = "\nsocial relevance", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")

self motivation

run model

share_mod_motiv_self = lmer(msg_share ~ msg_motiv_self_gmc*condition + msg_motiv_self_cwc*condition + (1 + condition | SID) + (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
summary(share_mod_motiv_self)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: 
## msg_share ~ msg_motiv_self_gmc * condition + msg_motiv_self_cwc *  
##     condition + (1 + condition | SID) + (1 | message)
##    Data: data_mod_diss
## Control: lmerControl(optimizer = "bobyqa")
## 
## REML criterion at convergence: 2666.3
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.4324 -0.4952  0.0531  0.4878  4.8561 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.566760 0.75283       
##           conditionautonomous 0.011066 0.10519  -0.68
##  message  (Intercept)         0.002383 0.04882       
##  Residual                     0.248523 0.49852       
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                          Estimate Std. Error         df
## (Intercept)                              -0.02410    0.06594  142.68378
## msg_motiv_self_gmc                        0.39879    0.06433  148.01893
## conditionautonomous                       0.04723    0.02718  145.43214
## msg_motiv_self_cwc                        0.23619    0.01862 1274.32325
## msg_motiv_self_gmc:conditionautonomous    0.03175    0.02729  145.70351
## conditionautonomous:msg_motiv_self_cwc    0.02338    0.02700 1199.53319
##                                        t value             Pr(>|t|)    
## (Intercept)                             -0.365               0.7153    
## msg_motiv_self_gmc                       6.199        0.00000000536 ***
## conditionautonomous                      1.738               0.0844 .  
## msg_motiv_self_cwc                      12.686 < 0.0000000000000002 ***
## msg_motiv_self_gmc:conditionautonomous   1.164               0.2464    
## conditionautonomous:msg_motiv_self_cwc   0.866               0.3866    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##               (Intr) msg_mtv_slf_g cndtnt msg_mtv_slf_c ms___:
## msg_mtv_slf_g  0.000                                          
## conditntnms   -0.386  0.000                                   
## msg_mtv_slf_c -0.002  0.007         0.003                     
## msg_mtv_s_:    0.000 -0.397        -0.002 -0.018              
## cndtntn:___    0.001 -0.005         0.001 -0.712         0.000

bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_motiv_self_null = lmer(msg_share ~ msg_motiv_self_gmc + msg_motiv_self_cwc + condition + (1 + condition | SID) + (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
bayestestR::bayesfactor_models(share_mod_motiv_self_null, share_mod_motiv_self, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_motiv_self_gmc + msg_motiv_self_cwc + condition + (1 + condition | SID) + (1 | message) 112870.2
msg_motiv_self_gmc * condition + msg_motiv_self_cwc * condition + (1 + condition | SID) + (1 | message) 1.0

plot predicted effects

ggeffects::ggpredict(share_mod_motiv_self, c("condition", "msg_motiv_self_gmc [-1, 0, 1]", "msg_motiv_self_cwc [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean gmc", 
                 ifelse(group == "1", sprintf("+%s SD gmc", group), sprintf("%s SD gmc", group))),
         group = factor(group, levels = c("-1 SD gmc", "mean gmc", "+1 SD gmc")),
         facet = as.character(facet),
         facet = ifelse(facet == "0", "mean cwc", 
                 ifelse(facet == "1", sprintf("+%s SD cwc", facet), sprintf("%s SD cwc", facet))),
         facet = factor(facet, levels = c("-1 SD cwc", "mean cwc", "+1 SD cwc"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  facet_grid(~facet) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.9, 1.1)) +
  labs(x = "\nmotivation to practice social distancing: self", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")

other motivation

run model

share_mod_motiv_other = lmer(msg_share ~ msg_motiv_other_gmc*condition + msg_motiv_other_cwc*condition + (1 + condition | SID) + (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
summary(share_mod_motiv_other)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: 
## msg_share ~ msg_motiv_other_gmc * condition + msg_motiv_other_cwc *  
##     condition + (1 + condition | SID) + (1 | message)
##    Data: data_mod_diss
## Control: lmerControl(optimizer = "bobyqa")
## 
## REML criterion at convergence: 2688
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.2449 -0.4711  0.0561  0.5116  4.2742 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.557724 0.7468        
##           conditionautonomous 0.030615 0.1750   -0.47
##  message  (Intercept)         0.002841 0.0533        
##  Residual                     0.248246 0.4982        
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                           Estimate Std. Error         df
## (Intercept)                               -0.01201    0.06583  140.48820
## msg_motiv_other_gmc                        0.41227    0.06385  147.97576
## conditionautonomous                        0.02241    0.02949  146.63741
## msg_motiv_other_cwc                        0.23339    0.01855 1278.77919
## msg_motiv_other_gmc:conditionautonomous    0.03238    0.02958  146.50395
## conditionautonomous:msg_motiv_other_cwc    0.01049    0.02744 1205.05142
##                                         t value             Pr(>|t|)    
## (Intercept)                              -0.182                0.855    
## msg_motiv_other_gmc                       6.457        0.00000000144 ***
## conditionautonomous                       0.760                0.449    
## msg_motiv_other_cwc                      12.585 < 0.0000000000000002 ***
## msg_motiv_other_gmc:conditionautonomous   1.094                0.276    
## conditionautonomous:msg_motiv_other_cwc   0.382                0.702    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##               (Intr) msg_mtv_thr_g cndtnt msg_mtv_thr_c ms___:
## msg_mtv_thr_g  0.000                                          
## conditntnms   -0.381  0.000                                   
## msg_mtv_thr_c  0.013  0.003        -0.028                     
## msg_mtv_t_:    0.000 -0.394        -0.002 -0.006              
## cndtntn:___   -0.009 -0.002        -0.002 -0.704         0.001

bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_motiv_other_null = lmer(msg_share ~ msg_motiv_other_gmc + msg_motiv_other_cwc + condition + (1 + condition | SID) + (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
bayestestR::bayesfactor_models(share_mod_motiv_other_null, share_mod_motiv_other, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_motiv_other_gmc + msg_motiv_other_cwc + condition + (1 + condition | SID) + (1 | message) 149835.9
msg_motiv_other_gmc * condition + msg_motiv_other_cwc * condition + (1 + condition | SID) + (1 | message) 1.0

plot predicted effects

ggeffects::ggpredict(share_mod_motiv_other, c("condition", "msg_motiv_other_gmc [-1, 0, 1]", "msg_motiv_other_cwc [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean gmc", 
                 ifelse(group == "1", sprintf("+%s SD gmc", group), sprintf("%s SD gmc", group))),
         group = factor(group, levels = c("-1 SD gmc", "mean gmc", "+1 SD gmc")),
         facet = as.character(facet),
         facet = ifelse(facet == "0", "mean cwc", 
                 ifelse(facet == "1", sprintf("+%s SD cwc", facet), sprintf("%s SD cwc", facet))),
         facet = factor(facet, levels = c("-1 SD cwc", "mean cwc", "+1 SD cwc"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  facet_grid(~facet) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.9, 1.1)) +
  labs(x = "\nmotivation to practice social distancing: others", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")

agency

run model

share_mod_agency = lmer(msg_share ~ msg_agency_gmc*condition + msg_agency_cwc*condition + (1 + condition | SID) + (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
summary(share_mod_agency)
## Linear mixed model fit by REML. t-tests use Satterthwaite's method [
## lmerModLmerTest]
## Formula: 
## msg_share ~ msg_agency_gmc * condition + msg_agency_cwc * condition +  
##     (1 + condition | SID) + (1 | message)
##    Data: data_mod_diss
## Control: lmerControl(optimizer = "bobyqa")
## 
## REML criterion at convergence: 2692
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -4.3810 -0.4283  0.0442  0.4694  4.4470 
## 
## Random effects:
##  Groups   Name                Variance Std.Dev. Corr 
##  SID      (Intercept)         0.582938 0.76350       
##           conditionautonomous 0.019507 0.13967  -0.51
##  message  (Intercept)         0.002839 0.05328       
##  Residual                     0.249929 0.49993       
## Number of obs: 1496, groups:  SID, 150; message, 10
## 
## Fixed effects:
##                                       Estimate  Std. Error          df
## (Intercept)                          -0.019307    0.067108  141.737849
## msg_agency_gmc                        0.364129    0.065180  148.009636
## conditionautonomous                   0.037664    0.028265  145.072213
## msg_agency_cwc                        0.244331    0.018623 1267.065496
## msg_agency_gmc:conditionautonomous    0.056814    0.028371  145.270964
## conditionautonomous:msg_agency_cwc   -0.009878    0.027035 1197.871328
##                                    t value             Pr(>|t|)    
## (Intercept)                         -0.288               0.7740    
## msg_agency_gmc                       5.586          0.000000108 ***
## conditionautonomous                  1.333               0.1848    
## msg_agency_cwc                      13.120 < 0.0000000000000002 ***
## msg_agency_gmc:conditionautonomous   2.003               0.0471 *  
## conditionautonomous:msg_agency_cwc  -0.365               0.7149    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##             (Intr) msg_gncy_g cndtnt msg_gncy_c msg__:
## msg_gncy_gm  0.000                                    
## conditntnms -0.365  0.000                             
## msg_gncy_cw  0.004 -0.003     -0.008                  
## msg_gncy_g:  0.000 -0.378     -0.002  0.006           
## cndtntnm:__ -0.002  0.003     -0.001 -0.707      0.001

bayes factor using BIC

Evidence for H0 (no moderation)

share_mod_agency_null = lmer(msg_share ~ msg_agency_gmc + msg_agency_cwc + condition + (1 + condition | SID) + (1 | message), 
                       data = data_mod_diss, control = lmerControl(optimizer = "bobyqa"))
bayestestR::bayesfactor_models(share_mod_agency_null, share_mod_agency, denominator = 2) %>%
  kable(format = "pandoc", row.names = FALSE)
Model BF
msg_agency_gmc + msg_agency_cwc + condition + (1 + condition | SID) + (1 | message) 39602.98
msg_agency_gmc * condition + msg_agency_cwc * condition + (1 + condition | SID) + (1 | message) 1.00

plot predicted effects

ggeffects::ggpredict(share_mod_agency, c("condition", "msg_agency_gmc [-1, 0, 1]", "msg_agency_cwc [-1, 0, 1]")) %>%
  data.frame() %>%
  mutate(x = ifelse(x == 1, "control", "autonomous"),
         group = as.character(group),
         group = ifelse(group == "0", "mean gmc", 
                 ifelse(group == "1", sprintf("+%s SD gmc", group), sprintf("%s SD gmc", group))),
         group = factor(group, levels = c("-1 SD gmc", "mean gmc", "+1 SD gmc")),
         facet = as.character(facet),
         facet = ifelse(facet == "0", "mean cwc", 
                 ifelse(facet == "1", sprintf("+%s SD cwc", facet), sprintf("%s SD cwc", facet))),
         facet = factor(facet, levels = c("-1 SD cwc", "mean cwc", "+1 SD cwc"))) %>%
  ggplot(aes(x = group, y = predicted, color = x)) +
  geom_line(aes(group = x), position = position_dodge(width = .1)) +
  geom_pointrange(aes(ymin = conf.low, ymax = conf.high), position = position_dodge(width = .1)) +
  facet_grid(~facet) +
  scale_color_manual(name = "", values = palette_cond) +
  scale_y_continuous(limits = c(-.9, 1.1)) +
  labs(x = "\nagency to mitigate the spread of COVID-19", y = "predicted standardized rating\n", title = "sharing intention\n") +
  theme_minimal() +
  theme(legend.position = "top")


scatterplots

Here, we visualize the relationships between several individual difference measures and the variables of interest in this study. Each plot shows the association between the ratings for one measure (x-axis) and the ratings for another measure (y-axis). For measures that included multiple items, ratings were averaged across items so that each dot represents a single participant.

autonomous motivation

measure
Index of Autonomous Functioning: self-congruence/authorship

citation
Weinstein, Przybylski, & Ryan, 2012

question text
Please indicate how true each statement is of your experiences on the whole. Remember that there are no right or wrong answers. Please answer according to what really reflects your experience rather than what you think your experience should be.

scale
1 = not at all true, 5 = completely true

data_person %>%
  select(-c(scale, measure, citation)) %>%
  ungroup() %>%
  mutate(survey_name = ifelse(!is.na(condition), sprintf("%s_%s", survey_name, condition), survey_name)) %>%
  select(-condition) %>%
  spread(survey_name, value) %>%
  left_join(., select(control_vars, SID, age, contains("politics"))) %>%
  gather(var, val, -SID, -IAF_autonomous, -group) %>%
  ggplot(aes(IAF_autonomous, val)) +
  geom_point(alpha = .2) +
  geom_smooth(method = "lm", alpha = .2, fullrange = TRUE) +
  facet_wrap(~var, scales = "free") +
  scale_color_manual(values = palette_cond) +
  theme_minimal() +
  theme(legend.position = "top")

controlling motivation

measure
Index of Autonomous Functioning: susceptibility to control

citation
Weinstein, Przybylski, & Ryan, 2012

question text
Please indicate how true each statement is of your experiences on the whole. Remember that there are no right or wrong answers. Please answer according to what really reflects your experience rather than what you think your experience should be.

scale
1 = not at all true, 5 = completely true

data_person %>%
  select(-c(scale, measure, citation)) %>%
  ungroup() %>%
  mutate(survey_name = ifelse(!is.na(condition), sprintf("%s_%s", survey_name, condition), survey_name)) %>%
  select(-condition) %>%
  spread(survey_name, value) %>%
  left_join(., select(control_vars, SID, age, contains("politics"))) %>%
  gather(var, val, -SID, -IAF_controlling, -group) %>%
  ggplot(aes(IAF_controlling, val)) +
  geom_point(alpha = .2) +
  geom_smooth(method = "lm", alpha = .2, fullrange = TRUE) +
  facet_wrap(~var, scales = "free") +
  scale_color_manual(values = palette_cond) +
  theme_minimal() +
  theme(legend.position = "top")

age

measure
Age

question text
How old are you?

scale
free response

data_person %>%
  select(-c(scale, measure, citation)) %>%
  ungroup() %>%
  mutate(survey_name = ifelse(!is.na(condition), sprintf("%s_%s", survey_name, condition), survey_name)) %>%
  select(-condition) %>%
  spread(survey_name, value) %>%
  left_join(., select(control_vars, SID, age, contains("politics"))) %>%
  gather(var, val, -SID, -age, -group) %>%
  ggplot(aes(age, val)) +
  geom_point(alpha = .2) +
  geom_smooth(method = "lm", alpha = .2, fullrange = TRUE) +
  facet_wrap(~var, scales = "free") +
  scale_color_manual(values = palette_cond) +
  theme_minimal() +
  theme(legend.position = "top")

political ideology

measure
Political ideology

question text
We hear a lot of talk these days about liberals and conservatives. Here is a seven-point scale on which the political views that people might hold are arranged from extremely liberal to extremely conservative. Where would you place yourself on this scale?

scale
1 = extremely liberal, 7 = extremely conservative

data_person %>%
  select(-c(scale, measure, citation)) %>%
  ungroup() %>%
  mutate(survey_name = ifelse(!is.na(condition), sprintf("%s_%s", survey_name, condition), survey_name)) %>%
  select(-condition) %>%
  spread(survey_name, value) %>%
  left_join(., select(control_vars, SID, age, contains("politics"))) %>%
  gather(var, val, -SID, -politics_conserv, -group) %>%
  ggplot(aes(politics_conserv, val)) +
  geom_point(alpha = .2) +
  geom_smooth(method = "lm", alpha = .2, fullrange = TRUE) +
  facet_wrap(~var, scales = "free") +
  scale_color_manual(values = palette_cond) +
  theme_minimal() +
  theme(legend.position = "top")

political party

measure
Political party

question text
In politics today, would you consider yourself a:

scale
1 = Strong Democrat, 2 = Weak Democrat, 3 = Independent leaning Democrat, 4 = Independent, 5 = Independent leaning Republican, 6 = Weak Republican, 7 = Strong Republican

data_person %>%
  select(-c(scale, measure, citation)) %>%
  ungroup() %>%
  mutate(survey_name = ifelse(!is.na(condition), sprintf("%s_%s", survey_name, condition), survey_name)) %>%
  select(-condition) %>%
  spread(survey_name, value) %>%
  left_join(., select(control_vars, SID, age, contains("politics"))) %>%
  gather(var, val, -SID, -politics_party, -group) %>%
  ggplot(aes(politics_party, val)) +
  geom_point(alpha = .2) +
  geom_smooth(method = "lm", alpha = .2, fullrange = TRUE) +
  facet_wrap(~var, scales = "free") +
  scale_color_manual(values = palette_cond) +
  theme_minimal() +
  theme(legend.position = "top")

log-transformed number of covid studies

measure
Number of COVID-19 studies

question text
How many COVID-related studies have you participated in prior to this one? Please enter a number.

scale
free response

data_person %>%
  select(-c(scale, measure, citation)) %>%
  ungroup() %>%
  mutate(survey_name = ifelse(!is.na(condition), sprintf("%s_%s", survey_name, condition), survey_name)) %>%
  select(-condition) %>%
  spread(survey_name, value) %>%
  left_join(., select(control_vars, SID, age, covid_studies, contains("politics"))) %>%
  gather(var, val, -SID, -covid_studies, -group) %>%
  ggplot(aes(covid_studies, val)) +
  geom_point(alpha = .2) +
  geom_smooth(method = "lm", alpha = .2, fullrange = TRUE) +
  facet_wrap(~var, scales = "free") +
  scale_color_manual(values = palette_cond) +
  theme_minimal() +
  theme(legend.position = "top")


specification curve analysis

Here, let’s run some Specification Curve Analyses (SCA) to get an overview of the relationships between the autonomous framing condition and the outcomes of interest, and to assess how robust the condition effects are to the inclusion of control variables.

The goal is to identify robust effects across various model specifications.

There’s a great package that makes this easy to do: specr

Summary: Across model specifications, the autonomous message framing condition is not associated with either message- or person-level outcomes. However, perceived relevance, motivation, and personal agency of messages are consistently positively associated with sharing intentions across model specifications.
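
For orientation, the wrapper functions defined in the setup below build on the core specr workflow, which in its simplest form looks roughly like the sketch here (the predictor and control column names, and their availability in data_mod, are assumptions for illustration; the forked version of specr installed above also accepts extra arguments such as random_effects):

# minimal specr sketch (assumed column names; illustration only, not the full SCA reported below)
specs = run_specs(df = data_mod,
                  y = "msg_share",
                  x = c("msg_rel_self", "msg_motiv_self"),
                  model = "lm",
                  controls = "politics_conserv")
plot_specs(specs)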

setup

Define functions and control variables

run_sca = function(data, var, outcome, controls=NULL, subsets=NULL, 
                   random_effects = NULL, model = "lm") {
  
    # define median bootstrapping function
    median_cl_boot = function(estimate, conf = 0.95) {
  
      lconf = (1 - conf)/2
      uconf = 1 - lconf
      require(boot)
      bmedian = function(estimate, ind) median(estimate[ind])
      bt = boot(estimate, bmedian, 1000)
      bb = boot.ci(bt, type = "perc")
      data.frame(obs_median = median(estimate), 
                 obs_conf_low = quantile(bt$t, lconf), 
                 obs_conf_high = quantile(bt$t, uconf))
    
    }
    
    # run scas
    results = run_specs(df = data,
                        y = outcome, 
                        x = var,
                        controls = controls,
                        random_effects = random_effects,
                        model = model,
                        subsets = subsets,
                        keep.results = FALSE)
    
    median_ci = results %>%
      group_by(x, y) %>%
      do({
        median_cl_boot(.$estimate)
      })
        
    summary = results %>%
      mutate(pos = ifelse(estimate > 0, 1, 0),
             neg = ifelse(estimate < 0, 1, 0),
             sig = ifelse(p.value < .05, 1, 0),
             pos_sig = ifelse(pos == 1 & sig == 1, 1, 0),
             neg_sig = ifelse(neg == 1 & sig == 1, 1, 0)) %>%
      group_by(x, y) %>%
      summarize(obs_n = n(),
                obs_n_positive = sum(pos),
                obs_n_negative = sum(neg),
                obs_n_significant = sum(sig),
                obs_n_positive_sig = sum(pos_sig),
                obs_n_negative_sig = sum(neg_sig)) %>%
      left_join(., median_ci, by = c("x", "y")) %>%
      select(x, y, obs_median, obs_conf_low, obs_conf_high, everything())
    
    return(list(results = results, summary = summary))
}

plot_sca = function(data, combined = TRUE, labels = c("A", "B"), title = FALSE, limits = NULL,
                    point_size = 1, point_alpha = 1, ci_alpha = .5, ci_size = .5, palette = palette,
                    text_size = 12, title_size = 6, line_alpha = 1, line_size = 6,
                    choices  = c("x", "y"), color_vars = FALSE,
                    remove_y = FALSE, remove_facet = FALSE) {
  
  medians = data %>%
    group_by(x) %>%
    summarize(median = median(estimate)) %>%
    ungroup() %>%
    mutate(color = sprintf("%s", palette))
  
  if (combined == TRUE) {
      p1 = plot_curve(data, point_size = point_size, point_alpha = point_alpha,
                      ci_alpha = ci_alpha, ci_size = ci_size,
                      limits = limits) +
        geom_hline(data = medians, aes(yintercept = median, color = color, linetype = x), alpha = line_alpha, size = .75, show_guide = TRUE) +
        scale_linetype_manual(name = "median effect", values = rep(1, nrow(medians)), 
                              guide = guide_legend(override.aes = list(color = palette))) +
        labs(x = "", y = "standarized\nregression coefficient\n")  +
        theme(legend.position = "top",
              text = element_text(size = text_size))
      
      if (title == TRUE) {
        if (is.null(limits)) {
          p1 = p1 + annotate("text", -Inf, Inf, label = unique(data$x), fontface = 2, size = title_size,
                       x = 0.5*(1 + nrow(data)), 
                       y = max(data$conf.high))
        } else {
          p1 = p1 + annotate("text", -Inf, Inf, label = unique(data$x), fontface = 2, size = title_size,
                       x = 0.5*(1 + nrow(data)), 
                       y = limits[2])
        }
      }
      
      if (color_vars == TRUE) {
        p2 = plot_choices(data, choices = choices, rename_controls = "covariates", size = line_size, color_vars = TRUE, palette = palette) +
          labs(x = "\nspecifications (ranked)")  +
          theme(strip.text.x = element_blank(),
                text = element_text(size = text_size))
      } else {
        p2 = plot_choices(data, choices = choices, rename_controls = "covariates", size = line_size) +
          labs(x = "\nspecifications (ranked)")  +
          theme(strip.text.x = element_blank(),
                text = element_text(size = text_size))
      }
        
  }
  else {
      p1 = plot_curve(data, point_size = point_size, point_alpha = point_alpha,
                      ci_alpha = ci_alpha, ci_size = ci_size) +
        geom_hline(yintercept = 0, linetype = "solid", color = "grey", size = .5) +
        geom_hline(data = medians, aes(yintercept = median, color = color, linetype = x), alpha = line_alpha, size = .75, show_guide = TRUE) +
        scale_linetype_manual(name = "median effect", values = rep(1, nrow(medians)), 
                              guide = guide_legend(override.aes = list(color = palette))) +
        labs(x = "", y = "standarized\nregression coefficient\n")  +
        theme(text = element_text(size = text_size))
      
      if (title == TRUE) {
        if (is.null(limits)) {
          p1 = p1 + annotate("text", -Inf, Inf, label = unique(data$y), fontface = 2, size = title_size,
                       x = 0.5*(1 + nrow(data)), 
                       y = max(data$conf.high))
        } else {
          p1 = p1 + annotate("text", -Inf, Inf, label = unique(data$y), fontface = 2, size = title_size,
                       x = 0.5*(1 + nrow(data)), 
                       y = limits[2])
        }
      }
      
      if (color_vars == TRUE) {
        p2 = plot_choices(data, choices = choices, rename_controls = "covariates", size = line_size, color_vars = TRUE, palette = palette) +
          labs(x = "\nspecifications (ranked)")  +
          theme(strip.text.x = element_blank(),
                text = element_text(size = text_size))
      } else {
        p2 = plot_choices(data, choices = choices, rename_controls = "covariates", size = line_size) +
          labs(x = "\nspecifications (ranked)")  +
          theme(strip.text.x = element_blank(),
                text = element_text(size = text_size))
      }
      
  }
  
  if (remove_y == TRUE) {
    p1 = p1 + labs(y = "")
    
    p2 = p2 + theme(axis.text.y = element_blank(),
                    axis.ticks.y = element_blank()) +
      labs(y = "")
  }

  if (remove_facet == TRUE) {
    p2 = p2 + theme(strip.text.y = element_blank())
    }
  
  plot_specs(plot_a = p1,
             plot_b = p2,
             labels = labels,
             rel_height = c(1, 2))
}

plot_sca_compare = function(data, pointrange = TRUE, labels = c("A", "B"), 
                            rel_heights = c(.75, .25), rel_widths = c(.75, .25), palette = palette,
                            title = FALSE, text_size = 14, title_size = 6, n_rows = 1,
                            remove_x = FALSE, remove_y = FALSE, sig = NULL, scipen = 999) {
  
  # define median bootstrapping function
  median_cl_boot = function(x, conf = 0.95, df = TRUE, ci = "low") {
  
    lconf = (1 - conf)/2
    uconf = 1 - lconf
    require(boot)
    bmedian = function(x, ind) median(x[ind])
    bt = boot(x, bmedian, 1000)
    bb = boot.ci(bt, type = "perc")
    
    if (df == TRUE){
      data.frame(y = median(x),
                 ymin = quantile(bt$t, lconf), 
                 ymax = quantile(bt$t, uconf))
      
    } else {
      if (ci == "low") {
        quantile(bt$t, lconf)
      } else {
        quantile(bt$t, uconf)
      }
    }
  }
    
  # merge and tidy for plotting
  plot.data = data %>%
    group_by(x) %>%
    arrange(estimate) %>%
    mutate(specification = row_number()) %>%
    ungroup() %>%
    unique()
  
  # labels
  labs = plot.data %>%
    group_by(x) %>%
    summarize(med = median(estimate),
              low = median_cl_boot(estimate, df = FALSE, ci = "low"),
              high = median_cl_boot(estimate, df = FALSE, ci = "high")) %>%
    mutate(range = max(high) - min(low),
           estimate = ifelse(med > 0, high + (range / 10), low - (range / 10)),
           label = ifelse(x %in% sig, "*", ""))
  
  # plot curves
  if (pointrange == TRUE) {
    a = plot.data %>%
    ggplot(aes(specification, estimate, color = x)) +
      geom_linerange(aes(ymin = conf.low, ymax = conf.high), size = .1) +
      geom_point() +
      geom_hline(yintercept = 0, linetype = "solid", color = "grey", size = .5) +
      scale_color_manual(name = "", values = palette) +
      labs(x = "\nspecification number (ranked)", y = "standarized\negression coefficient\n") + 
      theme_minimal() + 
      theme(strip.text = element_blank(), 
            axis.line = element_line("black", size = 0.5), 
            legend.position = c(.5, .1), 
            legend.direction = "horizontal",
            panel.spacing = unit(0.75, "lines"), 
            axis.text = element_text(colour = "black"),
            text = element_text(size = text_size))
    if (title == TRUE) {
      a = a + annotate("text", -Inf, Inf, label = unique(plot.data$y), fontface = 2, size = title_size,
                       x = 0.5*(min(plot.data$specification) + max(plot.data$specification)), 
                       y = max(plot.data$conf.high))
    }
    
  } else {
    a = plot.data %>%
      ggplot(aes(specification, estimate, color = x)) +
      geom_point() +
      geom_hline(yintercept = 0, linetype = "solid", color = "grey", size = .5) +
      scale_color_manual(name = "", values = palette) +
      labs(x = "\nspecification number (ranked)", y = "standardized\nregression coefficient\n") + 
      theme_minimal() + 
      theme(strip.text = element_blank(), 
            axis.line = element_line("black", size = 0.5), 
            legend.position = "none", 
            legend.direction = "horizontal",
            panel.spacing = unit(0.75, "lines"), 
            axis.text = element_text(colour = "black"),
            text = element_text(size = text_size))
    if (title == TRUE) {
      a = a + annotate("text", -Inf, Inf, label = unique(plot.data$y), fontface = 2, size = title_size,
                       x = 0.5*(min(plot.data$specification) + max(plot.data$specification)), 
                       y = max(plot.data$estimate))    
      }
  }
    options(scipen = scipen)
    b = plot.data %>%
      group_by(x) %>%
      mutate(order = median(estimate)) %>%
      ggplot(aes(reorder(x, order), estimate, fill = x)) +
      stat_summary(fun.y = "median", geom = "bar") +
      stat_summary(fun.data = median_cl_boot, geom = "errorbar", width = 0) +
      geom_text(data = labs, aes(label = label, x = x, y = estimate), size = 6) +
      scale_fill_manual(name = "", values = palette) +
      scale_y_continuous(breaks = scales::pretty_breaks(n = 3)) + 
      labs(x = "\n", y = "median effect\n") + 
      theme_minimal() + 
      theme(strip.text = element_blank(), 
            axis.line = element_line("black", size = 0.5), 
            legend.position = "none", 
            panel.spacing = unit(0.75, "lines"), 
            axis.text = element_text(colour = "black"),
            text = element_text(size = text_size),
            axis.text.x = element_text(angle = 45, hjust = 1))
    
  if (n_rows == 1) {
    a = a + theme(legend.position = c(.5, .1))
    b = b + coord_flip() +
      labs(x = "\n", y = "\nmedian effect") + 
      theme(axis.text.x = element_text(angle = 0, hjust = 1),
            axis.text.y = element_blank())
  }     
    

  if (remove_x == TRUE) {
    a = a + labs(x = "")
    
    if (n_rows == 1) {
      b = b + labs(y = "")
    } else {
      b = b + labs(x = "")
    }
  }    
  
  if (remove_y == TRUE) {
    a = a + labs(y = "")
    
    if (n_rows == 1) {
      b = b + labs(x = "")
    } else {
      b = b + labs(y = "")
    }
  }  
    
  cowplot::plot_grid(a, b, labels = labels, rel_heights = rel_heights, rel_widths = rel_widths, nrow = n_rows)
}

controls = c("politics_conserv", "politics_party", "age", "gender", "state", "ses_degree", "ses_income_household", "covid_studies")

To help illustrate how model specifications are selected, here are the models that will be included in the message-level SCA evaluating the effect of condition on the outcomes. Across these specifications, we examine the effect of the autonomous framing condition relative to the control condition on each outcome, with or without the following control variables:

politics_conserv = political ideology
politics_party = political party
age = age
gender = gender
state = US state residence
ses_degree = highest degree completed
ses_income_household = household income
covid_studies = log transformed number of COVID-19 studies

setup_specs(y = names(data_mod)[grepl("share|self|social|other|msg.*agency", names(data_mod))],
            x = c("autonomous v. control"),
            control = controls,
            random_effects = "(1 + condition | SID) + (1 | message)",
            model = c("lmer"))
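
For concreteness, each specification generated above corresponds to a single mixed-effects model. Below is a minimal, illustrative sketch (not part of the analysis pipeline) of what one such specification looks like when fit directly with lmerTest; it assumes the data_mod columns msg_share, condition, politics_conserv, SID, and message used elsewhere in this document.

# illustrative sketch of one specification from the curve: the condition effect
# on sharing, adjusting for political ideology, with by-participant random
# intercepts and condition slopes and by-message random intercepts
example_spec = lmerTest::lmer(msg_share ~ condition + politics_conserv +
                                (1 + condition | SID) + (1 | message),
                              data = data_mod)
summary(example_spec)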

framing condition SCA

Let’s run an SCA looking at the effect of condition (autonomous or control) on all dependent variables, with and without the covariates we’re interested in. We’ll also visualize the median effect for each condition and whether the relationships tend to be positive, negative, or zero.

The top panels show the IV-DV regression coefficients, ordered by effect size. Black points are statistically significant at p < .05; grey points are not (p > .05). The colored lines represent the median effect size for each condition.

The bottom panels show which variables were included in each model specification, grouped into y (DV) and covariate categories.

The comparison plots show each condition curve separately, as well as the median effect size. The confidence intervals around the median effects are bootstrapped from the specification estimates in each curve.

message-level outcomes

These plots show us that the median effect size for the autonomous condition relative to the control condition across model specifications and outcomes is quite small (B = 0.03) and is inconsistent in direction.

# define variables
outcome = names(data_mod)[grepl("share|self|social|other|msg.*agency", names(data_mod))]
var = "autonomous"
random_effects = "(1 + condition | SID) + (1 | message)"
model = "lmer"

# run SCA
output = run_sca(data = data_mod, var = var, outcome = outcome, controls = controls,
                 random_effects = random_effects, model = model)
combined SCA curve
plot_sca(data = output$results, combined = TRUE, title = FALSE, choices = c("y", "controls"), palette = palette_cond[1])

compare SCA curves
plot_sca_compare(data = output$results, palette = palette_cond[1])

person-level outcomes

Here, the median effect of condition on perceived agency to mitigate COVID-19 and autonomous functioning is consistently ~0 across model specifications.

# define variables
outcome = names(data_mod_person)[grepl("IAF_autonomous|^agency", names(data_mod_person))]
var = "autonomous"
random_effects = NULL
model = "lm"

# run SCA
output = run_sca(data = data_mod_person, var = var, outcome = outcome, controls = controls,
                 random_effects = random_effects, model = model)
combined SCA curve
plot_sca(data = output$results, combined = TRUE, title = FALSE, choices = c("y", "controls"), palette = palette_cond[1])

compare SCA curves
plot_sca_compare(data = output$results, palette = palette_cond[1], scipen = -3)

motivation and relevance SCA

Here, we look at the self and social relevance, self and other motivation, and personal agency variables as predictors of sharing.
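
As a concrete illustration, one specification in this curve is a mixed-effects model with a single rating variable predicting sharing. The sketch below is illustrative only and assumes the data_mod columns used elsewhere in this document (msg_share, msg_motiv_self, ses_income_household, condition, SID, message).

# illustrative sketch of one specification: self motivation predicting sharing,
# adjusting for household income, with the same random-effects structure as above
example_spec = lmerTest::lmer(msg_share ~ msg_motiv_self + ses_income_household +
                                (1 + condition | SID) + (1 | message),
                              data = data_mod)
summary(example_spec)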

The top panels show the IV-DV regression coefficients, ordered by effect size. Black points are statistically significant at p < .05; grey points are not (p > .05). The colored lines represent the median effect size for each condition.

The bottom panels show which variables were included in each model specification, grouped into x (IV), y (DV), and covariate categories.

The comparison plots show each condition curve separately, as well as the median effect size. The confidence intervals around the median effects are bootstrapped from the specification estimates in each curve.

combined message outcomes

For all predictors, the relationship with sharing is positive and statistically significant across model specifications. Overall, personal agency and the motivation variables are more strongly related to sharing than the relevance variables.

# define variables
outcome = c( "msg_share")
var = c("msg_motiv_self", "msg_rel_self", "msg_motiv_other", "msg_rel_social", "msg_agency")
random_effects = "(1 + condition | SID) + (1 | message)"
model = "lmer"

# run SCA
output = run_sca(data = data_mod, var = var, outcome = outcome, controls = controls,
                 random_effects = random_effects, model = model)
combined SCA curve
plot_sca(data = output$results, combined = TRUE, title = FALSE, choices = c("x", "y", "controls"), palette = palette_rel_motiv, color_vars = TRUE)

compare SCA curves
plot_sca_compare(data = output$results, palette = palette_rel_motiv)


message effects

Here we look at variability in ratings for individual messages, either collapsed across conditions or as a function of condition. The dotted line is the mean rating across all conditions and messages. All message stimuli can be viewed here.

Summary: Although there was some message-level variability in ratings, overall, most messages tended to be perceived as moderately to highly motivating, relevant, agentic, and sharable. Averaged across these metrics, these were the most effective messages:

# load message metadata and keep the social distancing messages used in this study
message_info = read.csv("message_info.csv", stringsAsFactors = FALSE) %>%
  rename("message_text" = text) %>%
  mutate(condition = tolower(condition)) %>%
  filter(condition %in% c("autonomous", "control")) %>%
  mutate(message = sprintf("%02d", as.numeric(image_number))) %>%
  filter(message %in% c("06", "07", "08", "09", "10", 
                        "12", "15", "17", "18", "20", 
                        "23", "24", "25", "31", "32")) %>%
  mutate(behavior = "social_distancing") %>%
  select(condition, message, message_text, behavior)

# drop familiarity ratings and extract the message number from each item name
data_comp = messages %>%
  filter(!survey_name == "msg_familiarity") %>%
  extract(item, "message", "msg_.*_([0-9]{2})", remove = FALSE)

# average ratings by message and condition, then join the message text
data_comp %>%
  filter(grepl("self|social|other|share|agency", survey_name)) %>%
  group_by(message, condition) %>%
  summarize(`mean rating` = round(mean(value), 2)) %>%
  arrange(desc(`mean rating`)) %>% 
  left_join(message_info) %>%
  select(message, condition, `mean rating`, message_text) %>%
  DT::datatable(rownames = FALSE, extensions = 'FixedColumns', 
                    options = list(scrollX = TRUE,
                                   scrollY = TRUE),
                caption = htmltools::tags$caption(style = 'caption-side: top; text-align: left;',
                                                  "Message text by condition"))

collapsed across condition

all ratings

plot_compare(data_comp, "self|social|other|share|msg_agency", palette = palette_cond)

self motivation

plot_compare(data_comp, "msg_motiv_self", palette = palette_cond)

self-relevance

plot_compare(data_comp, "msg_rel_self", palette = palette_cond)

agency

plot_compare(data_comp, "msg_agency", palette = palette_cond)

sharing

plot_compare(data_comp, "msg_share", palette = palette_cond)

by condition

all ratings

plot_compare(data_comp, "self|social|other|share|msg_agency", palette = palette_cond, condition = TRUE)

self motivation

plot_compare(data_comp, "msg_motiv_self", palette = palette_cond, condition = TRUE)

self-relevance

plot_compare(data_comp, "msg_rel_self", palette = palette_cond, condition = TRUE)

agency

plot_compare(data_comp, "msg_agency", palette = palette_cond, condition = TRUE)

sharing

plot_compare(data_comp, "msg_share", palette = palette_cond, condition = TRUE)

condition differentiation

Below is a summary of the degree to which each message differed as a function of condition on average. Positive values reflect higher average ratings in the autonomous condition; negative values reflect higher average ratings in the control condition.

# compute the autonomous minus control difference in mean ratings for each message
data_comp %>%
  group_by(condition, message, survey_name) %>%
  summarize(mean = mean(value, na.rm = TRUE)) %>%
  spread(condition, mean) %>%
  mutate(diff = autonomous - control) %>%
  filter(grepl("self|share|agency", survey_name)) %>%
  group_by(message) %>%
  summarize(mean_diff = round(mean(diff), 2)) %>%
  arrange(desc(mean_diff))