Article ID: IBRbN

Pilot: Kyle MacDonald

Copilot: Mike Frank

Start date: Apr 21 2017

End date: Jul 14 2017

Final verification: Tom Hardwicke

Date: Nov 9 2017


Methods summary:

On each trial, adult participants saw pictures of concrete objects (e.g., a soccer ball) on a computer screen and were asked to produce the verbal label for that object as quickly as possible. The target words were generated from a set of 16 themes (e.g., soccer) and the key predictor variable was the ordinal position of the target word within its thematic context – that is, how many words from that theme had the participant already named in the experiment. The dependent variables were participants’ reaction times (RTs) and error rates, and the prediction was that words with higher ordinal positions would have slower RTs because of the “cumulative” interference from the previously named words within that theme.


Target outcomes:

For this article you should focus on the findings reported for Experiment 1 in section 2.2. Results and discussion. Specifically, you should attempt to reproduce all descriptive and inferential analyses reported in the text below and associated tables/figures:

Reaction times (RTs) for correct responses for each ordinal position of an item within the presented theme, collapsed across the three presentations, are presented in Fig. 1 (see also Table 1). A repeated measures analysis of variance (ANOVA) with the factors ordinal position (5) and presentation (3) with participants (F1) and themes (F2) as random variables (cf. Belke and Stielow, 2013 and Howard et al., 2006) revealed main effects of presentation (F1(2, 46) = 54, p < .001, ηp² = .70; F2(2, 30) = 130.6, p < .001, ηp² = .89) and ordinal position (F1(4, 92) = 11.1, p < .001, ηp² = .33; F2(4, 60) = 7.0, p < .001, ηp² = .32). There was no interaction between presentation and ordinal position, Fs < 1.7. For the ordinal position effect, there was a significant linear trend, F1(1, 23) = 36.6, p < .001, ηp² = .62; F2(1, 15) = 19.1, p < .001, ηp² = .56, indicating that RTs increased linearly with each ordinal position.

An ANOVA of mean error rates revealed a main effect of presentation (F1(2, 46) = 26, p < .001, ηp² = .53; F2(1, 30) = 30.2, p < .001, ηp² = .66) that reflects a decrease in errors between the first and later presentations (cf. Table 1). No other effects were found, Fs < 0.8.

Here’s the relevant table and figure from the paper:


Step 1: Load packages

library(tidyverse) # for data munging
library(knitr) # for kable table formatting
library(haven) # import and export 'SPSS', 'Stata' and 'SAS' Files
library(readxl) # import excel files
library(CODreports) # custom report functions
library(magrittr) # for compound pipes
library(stringr) # for working with strings
library(ez) # for anovas
library(afex) # for anovas
library(lme4) # LMEMs
library(lmerTest) # ANOVA for LMEM

Step 2: Load data

Read the first sheet of excel workbook to get Experiment 1 data.

d <- read_excel(path = "data/data.xlsx", sheet = 1)

Check the structure of the data.

glimpse(d)
## Observations: 7,920
## Variables: 11
## $ `Participant(F1)`         <chr> "VP_01", "VP_01", "VP_01", "VP_01", ...
## $ TotalTrialNr              <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 1...
## $ PresentationTrialNr       <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 1...
## $ ObjectSetCode             <chr> "O2", "Z14", "P4", "R1", "N3", "K5",...
## $ Examplar                  <chr> "Spielautomat", "Heizung", "Mistgabe...
## $ `Theme(F2)`               <chr> "Casino", "Filler3", "Bauernhof", "U...
## $ Presentation              <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ OrdPosition               <dbl> 1, 6, 1, 1, 1, 1, 1, 6, 2, 2, 6, 2, ...
## $ RT                        <dbl> 0, 930, 739, 868, 1231, 1850, 828, 0...
## $ `ErrorCode (175=correct)` <dbl> 176, 175, 177, 177, 175, 175, 175, 1...
## $ ExperimentNr              <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...

Data are already in long format and look relatively tidy.

Check if we have 24 participants.

n_expected <- 24

test_n <- d %>% 
  select(`Participant(F1)`) %>% 
  unique() %>% 
  nrow() == n_expected

The output of the test is: TRUE, so we have the correct number of participants.

Check if we have 5 ordinal positions:

pos_expected <- 5

test_ord <- d %>% 
  select(OrdPosition) %>% 
  unique() %>% 
  nrow() == pos_expected

The output of the test is: FALSE, so the number of ordinal positions in the data file differs from the reported value (reported = 5, obtained = 6).

qplot(x = `Theme(F2)`, OrdPosition, geom = "jitter", data = d) + 
  theme(axis.text.x = element_text(angle = 90, hjust = .5))

Fillers are in position 6. Filter them out.

d %<>% filter(!str_detect(`Theme(F2)`, "Filler"))

Check if we have 3 presentations:

presents_expected <- 3

test_present <- d %>% select(Presentation) %>% 
  unique() %>% 
  nrow() == presents_expected

The output of the test is: TRUE, so we have the correct number of presentations.

Step 3: Tidy data

Create a binary (TRUE/FALSE) accuracy variable by recoding the ErrorCode variable (175 = correct; the meaning of codes 176 and 177 is not documented).

correct_code <- 175

d %<>% mutate(correct = ifelse(`ErrorCode (175=correct)` == correct_code, 
                               TRUE, 
                               FALSE))

Step 4: Run analysis

Pre-processing

No pre-processing steps reported in the paper.

Descriptive statistics

Try to reproduce the values in Table 1. From the table caption,

Mean naming latencies in milliseconds, mean error rates in percent and the corresponding standard deviations of means for each ordinal position and presentation.

Rose and Rahman do not report whether they averaged within participants before computing condition averages, so it is not clear exactly how the aggregation should be done to reproduce their table.

We assume that condition means were computed by first aggregating within participants and then averaging across the participant means. (This decision matters because the data are slightly unbalanced across participants.)
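This matters in practice. With made-up numbers (not from the dataset), the two aggregation orders give different condition means as soon as trial counts are unbalanced:

```r
# Toy illustration with invented RTs: participant 1 contributes 3 correct
# trials, participant 2 only 2, so the two aggregation orders disagree.
rt_p1 <- c(800, 820, 840)   # participant 1 (mean 820)
rt_p2 <- c(1000, 1100)      # participant 2 (mean 1050)

# Pooling all trials weights participant 1 more heavily:
pooled_mean <- mean(c(rt_p1, rt_p2))                   # 912

# Aggregating within participants first weights participants equally:
participant_mean <- mean(c(mean(rt_p1), mean(rt_p2)))  # 935
```

We take the participant-first route.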

# average rt for each participant and condition
ss_rt <- d %>% 
  filter(correct == TRUE) %>% # keep only correct RTs
  group_by(`Participant(F1)`, OrdPosition, Presentation) %>% 
  summarise(ss_rt = mean(RT)) 
  
# average across participants for each condition
ms_rt <- ss_rt %>% 
  group_by(OrdPosition, Presentation) %>% 
  summarise(m = mean(ss_rt),
            sd = sd(ss_rt)) %>% 
  mutate_if(is.numeric, round, digits = 0)

tab1 <- ms_rt %>% 
  ungroup %>%
  gather(measure, rt, m, sd) %>%
  mutate(ord_measure = paste0(as.character(OrdPosition), "-", measure)) %>%
  select(-OrdPosition, -measure) %>%
  spread(ord_measure, rt)

kable(tab1)
| Presentation | 1-m  | 1-sd | 2-m  | 2-sd | 3-m  | 3-sd | 4-m  | 4-sd | 5-m  | 5-sd |
|--------------|------|------|------|------|------|------|------|------|------|------|
| 1            | 1011 | 149  | 1001 | 145  | 1041 | 165  | 1023 | 138  | 1046 | 139  |
| 2            | 843  | 115  | 904  | 131  | 905  | 102  | 927  | 141  | 935  | 136  |
| 3            | 835  | 112  | 851  | 111  | 854  | 108  | 894  | 109  | 888  | 142  |

Explicitly compare all means:

pres1_ord1_m <- tab1 %>% filter(Presentation == 1) %>% pull("1-m")
pres1_ord2_m <- tab1 %>% filter(Presentation == 1) %>% pull("2-m")
pres1_ord3_m <- tab1 %>% filter(Presentation == 1) %>% pull("3-m")
pres1_ord4_m <- tab1 %>% filter(Presentation == 1) %>% pull("4-m")
pres1_ord5_m <- tab1 %>% filter(Presentation == 1) %>% pull("5-m")

pres2_ord1_m <- tab1 %>% filter(Presentation == 2) %>% pull("1-m")
pres2_ord2_m <- tab1 %>% filter(Presentation == 2) %>% pull("2-m")
pres2_ord3_m <- tab1 %>% filter(Presentation == 2) %>% pull("3-m")
pres2_ord4_m <- tab1 %>% filter(Presentation == 2) %>% pull("4-m")
pres2_ord5_m <- tab1 %>% filter(Presentation == 2) %>% pull("5-m")

pres3_ord1_m <- tab1 %>% filter(Presentation == 3) %>% pull("1-m")
pres3_ord2_m <- tab1 %>% filter(Presentation == 3) %>% pull("2-m")
pres3_ord3_m <- tab1 %>% filter(Presentation == 3) %>% pull("3-m")
pres3_ord4_m <- tab1 %>% filter(Presentation == 3) %>% pull("4-m")
pres3_ord5_m <- tab1 %>% filter(Presentation == 3) %>% pull("5-m")

Comparing reported and obtained values:

reportObject <- compareValues2(reportedValue = "1011", obtainedValue = pres1_ord1_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (1011) and the obtained value (1011) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "843", obtainedValue = pres2_ord1_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (843) and the obtained value (843) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "835", obtainedValue = pres3_ord1_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (835) and the obtained value (835) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "1000", obtainedValue = pres1_ord2_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (1000) and the obtained value (1001) differed by 0.1%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "904", obtainedValue = pres2_ord2_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (904) and the obtained value (904) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "850", obtainedValue = pres3_ord2_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (850) and the obtained value (851) differed by 0.12%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "1040", obtainedValue = pres1_ord3_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (1040) and the obtained value (1041) differed by 0.1%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "904", obtainedValue = pres2_ord3_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (904) and the obtained value (905) differed by 0.11%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "853", obtainedValue = pres3_ord3_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (853) and the obtained value (854) differed by 0.12%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "1022", obtainedValue = pres1_ord4_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (1022) and the obtained value (1023) differed by 0.1%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "926", obtainedValue = pres2_ord4_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (926) and the obtained value (927) differed by 0.11%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "894", obtainedValue = pres3_ord4_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (894) and the obtained value (894) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "1046", obtainedValue = pres1_ord5_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (1046) and the obtained value (1046) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "935", obtainedValue = pres2_ord5_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (935) and the obtained value (935) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "888", obtainedValue = pres3_ord5_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (888) and the obtained value (888) differed by 0%. NB obtained value was rounded to 0 decimal places."

Explicitly compare all SDs:

pres1_ord1_sd <- tab1 %>% filter(Presentation == 1) %>% pull("1-sd")
pres1_ord2_sd <- tab1 %>% filter(Presentation == 1) %>% pull("2-sd")
pres1_ord3_sd <- tab1 %>% filter(Presentation == 1) %>% pull("3-sd")
pres1_ord4_sd <- tab1 %>% filter(Presentation == 1) %>% pull("4-sd")
pres1_ord5_sd <- tab1 %>% filter(Presentation == 1) %>% pull("5-sd")

pres2_ord1_sd <- tab1 %>% filter(Presentation == 2) %>% pull("1-sd")
pres2_ord2_sd <- tab1 %>% filter(Presentation == 2) %>% pull("2-sd")
pres2_ord3_sd <- tab1 %>% filter(Presentation == 2) %>% pull("3-sd")
pres2_ord4_sd <- tab1 %>% filter(Presentation == 2) %>% pull("4-sd")
pres2_ord5_sd <- tab1 %>% filter(Presentation == 2) %>% pull("5-sd")

pres3_ord1_sd <- tab1 %>% filter(Presentation == 3) %>% pull("1-sd")
pres3_ord2_sd <- tab1 %>% filter(Presentation == 3) %>% pull("2-sd")
pres3_ord3_sd <- tab1 %>% filter(Presentation == 3) %>% pull("3-sd")
pres3_ord4_sd <- tab1 %>% filter(Presentation == 3) %>% pull("4-sd")
pres3_ord5_sd <- tab1 %>% filter(Presentation == 3) %>% pull("5-sd")

Comparing reported and obtained values:

reportObject <- compareValues2(reportedValue = "149", obtainedValue = pres1_ord1_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (149) and the obtained value (149) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "114", obtainedValue = pres2_ord1_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (114) and the obtained value (115) differed by 0.88%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "111", obtainedValue = pres3_ord1_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (111) and the obtained value (112) differed by 0.9%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "145", obtainedValue = pres1_ord2_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (145) and the obtained value (145) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "130", obtainedValue = pres2_ord2_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (130) and the obtained value (131) differed by 0.77%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "111", obtainedValue = pres3_ord2_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (111) and the obtained value (111) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "165", obtainedValue = pres1_ord3_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (165) and the obtained value (165) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "102", obtainedValue = pres2_ord3_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (102) and the obtained value (102) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "107", obtainedValue = pres3_ord3_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (107) and the obtained value (108) differed by 0.93%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "137", obtainedValue = pres1_ord4_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (137) and the obtained value (138) differed by 0.73%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "141", obtainedValue = pres2_ord4_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (141) and the obtained value (141) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "109", obtainedValue = pres3_ord4_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (109) and the obtained value (109) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "138", obtainedValue = pres1_ord5_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (138) and the obtained value (139) differed by 0.72%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "136", obtainedValue = pres2_ord5_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (136) and the obtained value (136) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "141", obtainedValue = pres3_ord5_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (141) and the obtained value (142) differed by 0.71%. NB obtained value was rounded to 0 decimal places."

We see very minor differences throughout (at most 1 ms).
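One plausible source of these off-by-one discrepancies (an assumption on our part, not verified against the authors' software) is rounding convention: R's `round()` rounds halves to the nearest even digit, while some other statistics packages round halves away from zero.

```r
# R's round() follows round-half-to-even, so values ending in .5 can go
# down as well as up; software that rounds halves away from zero would
# differ by 1 on exactly these cases.
round(850.5)  # 850 (half rounds to even)
round(851.5)  # 852
```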

Do the same aggregation for accuracy scores (reported in %). Note that we multiply the proportion correct by 100 to convert it to a percentage and then subtract it from 100 to obtain the mean error rate.
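A quick toy check of this conversion, using invented proportions:

```r
# Proportion correct -> percent error: scale to percent, subtract from 100.
prop_correct <- c(0.875, 0.80)
error_pct <- 100 - prop_correct * 100
error_pct  # 12.5 20.0
```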

# average acc each participant and condition
ss_acc <- d %>% 
  group_by(`Participant(F1)`, OrdPosition, Presentation) %>% 
  summarise(ss_acc = mean(correct)) 
  
# average across participants for each condition
ms_acc <- ss_acc %>% 
  group_by(OrdPosition, Presentation) %>% 
  summarise(m = 100 - (mean(ss_acc) * 100),
            sd = (sd(ss_acc) * 100)) %>% 
  mutate_if(is.numeric, round, digits = 1) 

tab1_acc <- ms_acc %>%  
  ungroup %>%
  gather(measure, accuracy, m, sd) %>%
  mutate(ord_measure = paste0(as.character(OrdPosition), "-", measure)) %>%
  select(-OrdPosition, -measure) %>%
  spread(ord_measure, accuracy)

Explicitly compare all means:

pres1_ord1_m <- tab1_acc %>% filter(Presentation == 1) %>% pull("1-m")
pres1_ord2_m <- tab1_acc %>% filter(Presentation == 1) %>% pull("2-m")
pres1_ord3_m <- tab1_acc %>% filter(Presentation == 1) %>% pull("3-m")
pres1_ord4_m <- tab1_acc %>% filter(Presentation == 1) %>% pull("4-m")
pres1_ord5_m <- tab1_acc %>% filter(Presentation == 1) %>% pull("5-m")

pres2_ord1_m <- tab1_acc %>% filter(Presentation == 2) %>% pull("1-m")
pres2_ord2_m <- tab1_acc %>% filter(Presentation == 2) %>% pull("2-m")
pres2_ord3_m <- tab1_acc %>% filter(Presentation == 2) %>% pull("3-m")
pres2_ord4_m <- tab1_acc %>% filter(Presentation == 2) %>% pull("4-m")
pres2_ord5_m <- tab1_acc %>% filter(Presentation == 2) %>% pull("5-m")

pres3_ord1_m <- tab1_acc %>% filter(Presentation == 3) %>% pull("1-m")
pres3_ord2_m <- tab1_acc %>% filter(Presentation == 3) %>% pull("2-m")
pres3_ord3_m <- tab1_acc %>% filter(Presentation == 3) %>% pull("3-m")
pres3_ord4_m <- tab1_acc %>% filter(Presentation == 3) %>% pull("4-m")
pres3_ord5_m <- tab1_acc %>% filter(Presentation == 3) %>% pull("5-m")

Comparing reported and obtained values:

reportObject <- compareValues2(reportedValue = "20.0", obtainedValue = pres1_ord1_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (20) and the obtained value (20.3) differed by 1.5%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "11.7", obtainedValue = pres2_ord1_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (11.7) and the obtained value (11.7) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "12.5", obtainedValue = pres3_ord1_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (12.5) and the obtained value (12.5) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "19.7", obtainedValue = pres1_ord2_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (19.7) and the obtained value (19.8) differed by 0.51%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "10.6", obtainedValue = pres2_ord2_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (10.6) and the obtained value (10.7) differed by 0.94%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "9.6", obtainedValue = pres3_ord2_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (9.6) and the obtained value (9.9) differed by 3.13%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "17.7", obtainedValue = pres1_ord3_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (17.7) and the obtained value (17.7) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "10.9", obtainedValue = pres2_ord3_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (10.9) and the obtained value (10.9) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "10.1", obtainedValue = pres3_ord3_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (10.1) and the obtained value (10.2) differed by 0.99%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "16.4", obtainedValue = pres1_ord4_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (16.4) and the obtained value (16.7) differed by 1.83%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "13.8", obtainedValue = pres2_ord4_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (13.8) and the obtained value (13.8) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "9.1", obtainedValue = pres3_ord4_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (9.1) and the obtained value (9.1) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "16.6", obtainedValue = pres1_ord5_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (16.6) and the obtained value (16.9) differed by 1.81%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "12.5", obtainedValue = pres2_ord5_m, valueType = 'mean')
## [1] "MATCH for mean. The reported value (12.5) and the obtained value (12.5) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "8.0", obtainedValue = pres3_ord5_m, valueType = 'mean')
## [1] "MINOR NUMERICAL ERROR for mean. The reported value (8) and the obtained value (8.1) differed by 1.25%. NB obtained value was rounded to 1 decimal places."

Explicitly compare all SDs:

pres1_ord1_sd <- tab1_acc %>% filter(Presentation == 1) %>% pull("1-sd")
pres1_ord2_sd <- tab1_acc %>% filter(Presentation == 1) %>% pull("2-sd")
pres1_ord3_sd <- tab1_acc %>% filter(Presentation == 1) %>% pull("3-sd")
pres1_ord4_sd <- tab1_acc %>% filter(Presentation == 1) %>% pull("4-sd")
pres1_ord5_sd <- tab1_acc %>% filter(Presentation == 1) %>% pull("5-sd")

pres2_ord1_sd <- tab1_acc %>% filter(Presentation == 2) %>% pull("1-sd")
pres2_ord2_sd <- tab1_acc %>% filter(Presentation == 2) %>% pull("2-sd")
pres2_ord3_sd <- tab1_acc %>% filter(Presentation == 2) %>% pull("3-sd")
pres2_ord4_sd <- tab1_acc %>% filter(Presentation == 2) %>% pull("4-sd")
pres2_ord5_sd <- tab1_acc %>% filter(Presentation == 2) %>% pull("5-sd")

pres3_ord1_sd <- tab1_acc %>% filter(Presentation == 3) %>% pull("1-sd")
pres3_ord2_sd <- tab1_acc %>% filter(Presentation == 3) %>% pull("2-sd")
pres3_ord3_sd <- tab1_acc %>% filter(Presentation == 3) %>% pull("3-sd")
pres3_ord4_sd <- tab1_acc %>% filter(Presentation == 3) %>% pull("4-sd")
pres3_ord5_sd <- tab1_acc %>% filter(Presentation == 3) %>% pull("5-sd")

Comparing reported and obtained values:

reportObject <- compareValues2(reportedValue = "12.3", obtainedValue = pres1_ord1_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (12.3) and the obtained value (12.1) differed by 1.63%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "10.4", obtainedValue = pres2_ord1_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (10.4) and the obtained value (10.5) differed by 0.96%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "8.0", obtainedValue = pres3_ord1_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (8) and the obtained value (8) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "10.0", obtainedValue = pres1_ord2_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (10) and the obtained value (10) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "7.9", obtainedValue = pres2_ord2_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (7.9) and the obtained value (7.9) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "8.6", obtainedValue = pres3_ord2_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (8.6) and the obtained value (8.4) differed by 2.33%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "11.6", obtainedValue = pres1_ord3_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (11.6) and the obtained value (11.6) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "8.0", obtainedValue = pres2_ord3_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (8) and the obtained value (8.1) differed by 1.25%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "8.4", obtainedValue = pres3_ord3_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (8.4) and the obtained value (8.4) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "9.5", obtainedValue = pres1_ord4_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (9.5) and the obtained value (9.2) differed by 3.16%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "10.2", obtainedValue = pres2_ord4_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (10.2) and the obtained value (10.3) differed by 0.98%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "7.1", obtainedValue = pres3_ord4_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (7.1) and the obtained value (7.1) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "11.6", obtainedValue = pres1_ord5_sd, valueType = 'sd')
## [1] "MINOR NUMERICAL ERROR for sd. The reported value (11.6) and the obtained value (11.9) differed by 2.59%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "8.4", obtainedValue = pres2_ord5_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (8.4) and the obtained value (8.4) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "8.1", obtainedValue = pres3_ord5_sd, valueType = 'sd')
## [1] "MATCH for sd. The reported value (8.1) and the obtained value (8.1) differed by 0%. NB obtained value was rounded to 1 decimal places."

Inferential statistics

RT: Try to reproduce the RT ANOVA model. From the paper,

A repeated measures analysis of variance (ANOVA) with the factors ordinal position (5) and presentation (3) with participants (F1) and themes (F2) as random variables (cf. Belke and Stielow, 2013 and Howard et al., 2006) revealed main effects of presentation (F1(2, 46) = 54, p < .001, ηp² = .70; F2(2, 30) = 130.6, p < .001, ηp² = .89) and ordinal position (F1(4, 92) = 11.1, p < .001, ηp² = .33; F2(4, 60) = 7.0, p < .001, ηp² = .32).

Note: in the extracted article text, “View the MathML source” stands in for the partial eta squared symbol, \(\eta_p^2\); we write ηp² throughout.
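For reference, partial eta squared is the effect sum of squares divided by the effect plus error sums of squares. A minimal helper (our own, not part of afex) makes the definition concrete:

```r
# Partial eta squared: SS_effect / (SS_effect + SS_error).
partial_eta_sq <- function(ss_effect, ss_error) {
  ss_effect / (ss_effect + ss_error)
}

partial_eta_sq(30, 70)  # 0.3 (toy sums of squares)
```

This is the quantity afex reports when `es = "pes"` is requested in `anova_table`.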

d.rt.model <- d %>% 
  filter(correct == TRUE) %>% # only include correct RTs model
  mutate(`Theme(F2)` = ifelse(str_detect(`Theme(F2)`, "Filler"), 
                              "Filler",`Theme(F2)`)) %>% 
  select(`Participant(F1)`, `Theme(F2)`, Presentation, OrdPosition, RT) %>% 
  mutate_at(vars(1:4), funs(as.factor)) %>%
  rename(Participant = `Participant(F1)`, 
         Theme = `Theme(F2)`)
m1.rt.ez <- aov_ez(data = d.rt.model, dv = "RT", id = "Participant", within = c("OrdPosition", "Presentation"), anova_table = list(correction = "none", es = "pes"))

m2.rt.ez <- aov_ez(data = d.rt.model, dv = "RT", id = "Theme", within = c("OrdPosition", "Presentation"), anova_table = list(correction = "none", es = "pes"))

main effects of presentation (F1(2, 46) = 54, p < .001, ηp² = .70; F2(2, 30) = 130.6, p < .001, ηp² = .89)

# participant
df1 <- m1.rt.ez$anova_table$`num Df`[2]
df2 <- m1.rt.ez$anova_table$`den Df`[2]
F1 <- m1.rt.ez$anova_table$`F`[2]
p <- m1.rt.ez$anova_table$`Pr(>F)`[2]
pes <- m1.rt.ez$anova_table$`pes`[2]
reportObject <- compareValues2(reportedValue = "2", obtainedValue = df1, valueType = 'df')
## [1] "MATCH for df. The reported value (2) and the obtained value (2) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "46", obtainedValue = df2, valueType = 'df')
## [1] "MATCH for df. The reported value (46) and the obtained value (46) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "54", obtainedValue = F1, valueType = 'F')
## [1] "MATCH for F. The reported value (54) and the obtained value (54) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "eyeballMATCH", obtainedValue = p, valueType = 'p')
## [1] "MATCH for p. Eyeball comparison only."
reportObject <- compareValues2(reportedValue = ".70", obtainedValue = pes, valueType = 'es')
## [1] "MATCH for es. The reported value (0.7) and the obtained value (0.7) differed by 0%. NB obtained value was rounded to 2 decimal places."
# theme
df1 <- m2.rt.ez$anova_table$`num Df`[2]
df2 <- m2.rt.ez$anova_table$`den Df`[2]
F1 <- m2.rt.ez$anova_table$`F`[2]
p <- m2.rt.ez$anova_table$`Pr(>F)`[2]
pes <- m2.rt.ez$anova_table$`pes`[2]
reportObject <- compareValues2(reportedValue = "2", obtainedValue = df1, valueType = 'df')
## [1] "MATCH for df. The reported value (2) and the obtained value (2) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "30", obtainedValue = df2, valueType = 'df')
## [1] "MATCH for df. The reported value (30) and the obtained value (30) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "130.6", obtainedValue = F1, valueType = 'F')
## [1] "MATCH for F. The reported value (130.6) and the obtained value (130.6) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "eyeballMATCH", obtainedValue = p, valueType = 'p')
## [1] "MATCH for p. Eyeball comparison only."
reportObject <- compareValues2(reportedValue = ".89", obtainedValue = pes, valueType = 'es')
## [1] "MINOR NUMERICAL ERROR for es. The reported value (0.89) and the obtained value (0.9) differed by 1.12%. NB obtained value was rounded to 2 decimal places."

and ordinal position (F1(4, 92) = 11.1, p < .001, ηp² = .33; F2(4, 60) = 7.0, p < .001, ηp² = .32).

# participant
df1 <- m1.rt.ez$anova_table$`num Df`[1]
df2 <- m1.rt.ez$anova_table$`den Df`[1]
F1 <- m1.rt.ez$anova_table$`F`[1]
p <- m1.rt.ez$anova_table$`Pr(>F)`[1]
pes <- m1.rt.ez$anova_table$`pes`[1]
reportObject <- compareValues2(reportedValue = "4", obtainedValue = df1, valueType = 'df')
## [1] "MATCH for df. The reported value (4) and the obtained value (4) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "92", obtainedValue = df2, valueType = 'df')
## [1] "MATCH for df. The reported value (92) and the obtained value (92) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "11.1", obtainedValue = F1, valueType = 'F')
## [1] "MINOR NUMERICAL ERROR for F. The reported value (11.1) and the obtained value (11.2) differed by 0.9%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "eyeballMATCH", obtainedValue = p, valueType = 'p')
## [1] "MATCH for p. Eyeball comparison only."
reportObject <- compareValues2(reportedValue = ".33", obtainedValue = pes, valueType = 'es')
## [1] "MATCH for es. The reported value (0.33) and the obtained value (0.33) differed by 0%. NB obtained value was rounded to 2 decimal places."
# theme (F2)
df1 <- m2.rt.ez$anova_table$`num Df`[1]
df2 <- m2.rt.ez$anova_table$`den Df`[1]
F1 <- m2.rt.ez$anova_table$`F`[1]
p <- m2.rt.ez$anova_table$`Pr(>F)`[1]
pes <- m2.rt.ez$anova_table$`pes`[1]
reportObject <- compareValues2(reportedValue = "4", obtainedValue = df1, valueType = 'df')
## [1] "MATCH for df. The reported value (4) and the obtained value (4) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "60", obtainedValue = df2, valueType = 'df')
## [1] "MATCH for df. The reported value (60) and the obtained value (60) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "7.0", obtainedValue = F1, valueType = 'F')
## [1] "MATCH for F. The reported value (7) and the obtained value (7) differed by 0%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "eyeballMATCH", obtainedValue = p, valueType = 'p')
## [1] "MATCH for p. Eyeball comparison only."
reportObject <- compareValues2(reportedValue = ".32", obtainedValue = pes, valueType = 'es')
## [1] "MATCH for es. The reported value (0.32) and the obtained value (0.32) differed by 0%. NB obtained value was rounded to 2 decimal places."

Try to reproduce the linear trend model. From the paper:

For the ordinal position effect, there was a significant linear trend, F1(1, 23) = 36.6, p < .001, \(\eta^2_p\) = .62; F2(1, 15) = 19.1, p < .001, \(\eta^2_p\) = .56, indicating that RTs increased linearly with each ordinal position.

# Recode the predictors as numeric so that ezANOVA tests a single-df
# linear trend for each, rather than the omnibus factor effect
d.rt.model %<>% mutate(OrdPositionNumeric = as.numeric(OrdPosition), 
                       PresentationNumeric = as.numeric(Presentation))

# F1: participants as the random variable
m1.rt.ez <- ezANOVA(data = d.rt.model, 
                    dv = RT, 
                    wid = Participant, 
                    within = .(OrdPositionNumeric, PresentationNumeric),
                    within_full = Theme)

# F2: themes as the random variable
m2.rt.ez <- ezANOVA(data = d.rt.model, 
                    dv = RT, 
                    wid = Theme, 
                    within = .(OrdPositionNumeric, PresentationNumeric),
                    within_full = Participant)
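
As used here, `within_full` tells `ezANOVA` about the full within design so that it can aggregate over the other random variable before testing. A minimal base-R sketch of that aggregation step, using made-up data (the toy design sizes and RT values below are illustrative, not the study's):

```r
# Synthetic sketch of the F1 aggregation step: average RTs over themes
# within each participant x ordinal position x presentation cell.
set.seed(1)
d.toy <- expand.grid(Participant  = factor(1:4),
                     Theme        = factor(1:2),
                     OrdPosition  = factor(1:5),
                     Presentation = factor(1:3))
d.toy$RT <- 600 + 20 * as.numeric(d.toy$OrdPosition) + rnorm(nrow(d.toy), sd = 10)

# F1 cell means (collapsing over Theme); an F2 analysis would instead
# collapse over Participant.
f1.cells <- aggregate(RT ~ Participant + OrdPosition + Presentation,
                      data = d.toy, FUN = mean)
nrow(f1.cells)  # 4 participants x 5 positions x 3 presentations = 60 cells
```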

lin1 <- m1.rt.ez %>% as.data.frame() %>% filter(ANOVA.Effect == 'OrdPositionNumeric')

reportObject <- compareValues2(reportedValue = "1", obtainedValue = lin1$ANOVA.DFn, valueType = 'df')
## [1] "MATCH for df. The reported value (1) and the obtained value (1) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "23", obtainedValue = lin1$ANOVA.DFd, valueType = 'df')
## [1] "MATCH for df. The reported value (23) and the obtained value (23) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "36.6", obtainedValue = lin1$ANOVA.F, valueType = 'F')
## [1] "MINOR NUMERICAL ERROR for F. The reported value (36.6) and the obtained value (36.7) differed by 0.27%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "eyeballMATCH", obtainedValue = lin1$ANOVA.p, valueType = 'p')
## [1] "MATCH for p. Eyeball comparison only."
lin2 <- m2.rt.ez %>% as.data.frame() %>% filter(ANOVA.Effect == 'OrdPositionNumeric')

reportObject <- compareValues2(reportedValue = "1", obtainedValue = lin2$ANOVA.DFn, valueType = 'df')
## [1] "MATCH for df. The reported value (1) and the obtained value (1) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "15", obtainedValue = lin2$ANOVA.DFd, valueType = 'df')
## [1] "MATCH for df. The reported value (15) and the obtained value (15) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "19.1", obtainedValue = lin2$ANOVA.F, valueType = 'F')
## [1] "MINOR NUMERICAL ERROR for F. The reported value (19.1) and the obtained value (19.2) differed by 0.52%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "eyeballMATCH", obtainedValue = lin2$ANOVA.p, valueType = 'p')
## [1] "MATCH for p. Eyeball comparison only."

Accuracy: Try to reproduce the ANOVA model on mean error rates. From the paper,

An ANOVA of mean error rates revealed a main effect of presentation (F1(2, 46) = 26, p < .001, \(\eta^2_p\) = .53; F2(1, 30) = 30.2, p < .001, \(\eta^2_p\) = .66)

First, prep the data for the model by renaming the random-variable columns, converting the predictor variables to factors, and recoding accuracy as numeric (`aov_ez` aggregates multiple observations per cell to means internally).

d.acc.model <- d %>% 
  select(`Participant(F1)`, `Theme(F2)`, Presentation, OrdPosition, correct) %>% 
  rename(Participant = `Participant(F1)`,
         Theme = `Theme(F2)`) %>%
  mutate_at(vars(1:4), funs(as.factor)) %>%
  mutate(correct = as.numeric(correct))

Fit the accuracy F1 and F2 models.

m1.acc.ez <- aov_ez(data = d.acc.model, dv = "correct", id = "Participant", within = c("OrdPosition", "Presentation"), anova_table = list(correction = "none", es = "pes"))

m2.acc.ez <- aov_ez(data = d.acc.model, dv = "correct", id = "Theme", within = c("OrdPosition", "Presentation"), anova_table = list(correction = "none", es = "pes"))

main effect of presentation (F1(2, 46) = 26, p < .001, \(\eta^2_p\) = .53; F2(1, 30) = 30.2, p < .001, \(\eta^2_p\) = .66)

# participant (F1)
df1 <- m1.acc.ez$anova_table$`num Df`[2]
df2 <- m1.acc.ez$anova_table$`den Df`[2]
F1 <- m1.acc.ez$anova_table$`F`[2]
p <- m1.acc.ez$anova_table$`Pr(>F)`[2]
pes <- m1.acc.ez$anova_table$`pes`[2]
reportObject <- compareValues2(reportedValue = "2", obtainedValue = df1, valueType = 'df')
## [1] "MATCH for df. The reported value (2) and the obtained value (2) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "46", obtainedValue = df2, valueType = 'df')
## [1] "MATCH for df. The reported value (46) and the obtained value (46) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "26", obtainedValue = F1, valueType = 'F')
## [1] "MINOR NUMERICAL ERROR for F. The reported value (26) and the obtained value (27) differed by 3.85%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "eyeballMATCH", obtainedValue = p, valueType = 'p')
## [1] "MATCH for p. Eyeball comparison only."
reportObject <- compareValues2(reportedValue = ".53", obtainedValue = pes, valueType = 'es')
## [1] "MINOR NUMERICAL ERROR for es. The reported value (0.53) and the obtained value (0.54) differed by 1.89%. NB obtained value was rounded to 2 decimal places."
# theme (F2)
df1 <- m2.acc.ez$anova_table$`num Df`[2]
df2 <- m2.acc.ez$anova_table$`den Df`[2]
F1 <- m2.acc.ez$anova_table$`F`[2]
p <- m2.acc.ez$anova_table$`Pr(>F)`[2]
pes <- m2.acc.ez$anova_table$`pes`[2]
reportObject <- compareValues2(reportedValue = "1", obtainedValue = df1, valueType = 'df')
## [1] "MAJOR NUMERICAL ERROR for df. The reported value (1) and the obtained value (2) differed by 100%. NB obtained value was rounded to 0 decimal places."
# the reported df of 1 appears to be a typo, so also compare against df = 2
reportObject <- compareValues2(reportedValue = "2", obtainedValue = df1, valueType = 'df')
## [1] "MATCH for df. The reported value (2) and the obtained value (2) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "30", obtainedValue = df2, valueType = 'df')
## [1] "MATCH for df. The reported value (30) and the obtained value (30) differed by 0%. NB obtained value was rounded to 0 decimal places."
reportObject <- compareValues2(reportedValue = "30.2", obtainedValue = F1, valueType = 'F')
## [1] "MINOR NUMERICAL ERROR for F. The reported value (30.2) and the obtained value (32.2) differed by 6.62%. NB obtained value was rounded to 1 decimal places."
reportObject <- compareValues2(reportedValue = "eyeballMATCH", obtainedValue = p, valueType = 'p')
## [1] "MATCH for p. Eyeball comparison only."
reportObject <- compareValues2(reportedValue = ".66", obtainedValue = pes, valueType = 'es')
## [1] "MINOR NUMERICAL ERROR for es. The reported value (0.66) and the obtained value (0.68) differed by 3.03%. NB obtained value was rounded to 2 decimal places."

These tests also reproduce the accuracy ANOVA results, with minor numerical errors for the \(F\) and \(\eta^2_p\) values.

There was also a major numerical error for the \(df\) in the \(F2\) test: the reported \(F(1, 30)\) should be \(F(2, 30)\).
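
As a sanity check, the expected degrees of freedom for the presentation effect follow directly from the design sizes (3 presentations and 16 themes from the methods; 24 participants inferred from the reported F1 denominators):

```r
# Degrees of freedom for the presentation main effect
n.presentations <- 3
n.themes        <- 16   # F2: themes as the random variable
n.participants  <- 24   # F1: participants as the random variable

df.num    <- n.presentations - 1            # 2 in both analyses
df.den.f1 <- df.num * (n.participants - 1)  # 2 * 23 = 46
df.den.f2 <- df.num * (n.themes - 1)        # 2 * 15 = 30
c(df.num, df.den.f1, df.den.f2)
```

So the F2 statistic should be F(2, 30), consistent with the obtained value rather than the reported F(1, 30).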

Step 5: Conclusion

Overall, there were minor numerical errors throughout, but most results were clearly reproduced. It took some guesswork to figure out the particular F1/F2 specification and implement it in R, but we believe the result is correct.

We identified one major numerical error: a single \(df\) mismatch in the accuracy F2 ANOVA. The authors have confirmed that this was a typo.

reportObject$Article_ID <- "IBRbN"
reportObject$affectsConclusion <- "no"
reportObject$error_typo <- 1
reportObject$error_specification <- 0
reportObject$error_analysis <- 0
reportObject$error_data <- 0
reportObject$error_unidentified <- 0
reportObject$Author_Assistance <- T
reportObject$resolved_typo <- 0
reportObject$resolved_specification <- 0
reportObject$resolved_analysis <- 0
reportObject$resolved_data <- 0
reportObject$correctionSuggested <- NA
reportObject$correctionPublished <- NA

# decide on final outcome
if(reportObject$Decision_Errors > 0 | reportObject$Major_Numerical_Errors > 0 | reportObject$Insufficient_Information_Errors > 0){
  reportObject$finalOutcome <- "Failure"
  if(reportObject$Author_Assistance == T){
    reportObject$finalOutcome <- "Failure despite author assistance"
  }
}else{
  reportObject$finalOutcome <- "Success"
  if(reportObject$Author_Assistance == T){
    reportObject$finalOutcome <- "Success with author assistance"
  }
}

# save the report object
filename <- paste0("reportObject_", reportObject$Article_ID,".csv")
write_csv(reportObject, filename)

Report Object

| Field | Value |
|:--|:--|
| valuesChecked | 99 |
| eyeballs | 8 |
| Total_df | 17 |
| Total_p | 8 |
| Total_mean | 30 |
| Total_sd | 30 |
| Total_se | 0 |
| Total_ci | 0 |
| Total_bf | 0 |
| Total_t | 0 |
| Total_F | 8 |
| Total_es | 6 |
| Total_median | 0 |
| Total_irr | 0 |
| Total_r | 0 |
| Total_z | 0 |
| Total_coeff | 0 |
| Total_n | 0 |
| Total_x2 | 0 |
| Total_other | 0 |
| Insufficient_Information_Errors | 0 |
| Decision_Errors | 0 |
| Major_Numerical_Errors | 1 |
| Minor_Numerical_Errors | 37 |
| Major_df | 1 |
| Major_p | 0 |
| Major_mean | 0 |
| Major_sd | 0 |
| Major_se | 0 |
| Major_ci | 0 |
| Major_bf | 0 |
| Major_t | 0 |
| Major_F | 0 |
| Major_es | 0 |
| Major_median | 0 |
| Major_irr | 0 |
| Major_r | 0 |
| Major_z | 0 |
| Major_coeff | 0 |
| Major_n | 0 |
| Major_x2 | 0 |
| Major_other | 0 |
| affectsConclusion | no |
| error_typo | 1 |
| error_specification | 0 |
| error_analysis | 0 |
| error_data | 0 |
| error_unidentified | 0 |
| Author_Assistance | TRUE |
| resolved_typo | 0 |
| resolved_specification | 0 |
| resolved_analysis | 0 |
| resolved_data | 0 |
| correctionSuggested | NA |
| correctionPublished | NA |
| finalOutcome | Failure despite author assistance |

Session information

devtools::session_info()
##  setting  value                       
##  version  R version 3.4.3 (2017-11-30)
##  system   x86_64, darwin15.6.0        
##  ui       X11                         
##  language (EN)                        
##  collate  en_US.UTF-8                 
##  tz       America/Los_Angeles         
##  date     2018-01-30                  
## 
##  package      * version  date       source                                              
##  acepack        1.4.1    2016-10-29 CRAN (R 3.4.0)
##  afex         * 0.19-1   2018-01-08 CRAN (R 3.4.3)
##  assertthat     0.2.0    2017-04-11 cran (@0.2.0)
##  backports      1.1.2    2017-12-13 CRAN (R 3.4.3)
##  base         * 3.4.3    2017-12-07 local
##  base64enc      0.1-3    2015-07-28 CRAN (R 3.4.0)
##  bindr          0.1      2016-11-13 cran (@0.1)
##  bindrcpp     * 0.2      2017-06-17 cran (@0.2)
##  broom          0.4.2    2017-02-13 cran (@0.4.2)
##  car            2.1-6    2017-11-19 CRAN (R 3.4.3)
##  cellranger     1.1.0    2016-07-27 cran (@1.1.0)
##  checkmate      1.8.5    2017-10-24 CRAN (R 3.4.2)
##  cli            1.0.0    2017-11-05 cran (@1.0.0)
##  cluster        2.0.6    2017-03-10 CRAN (R 3.4.3)
##  coda           0.19-1   2016-12-08 CRAN (R 3.4.0)
##  codetools      0.2-15   2016-10-05 CRAN (R 3.4.3)
##  CODreports   * 0.1      2017-12-18 Github (CognitionOpenDataProject/CODreports@79ccb56)
##  coin           1.2-2    2017-11-28 CRAN (R 3.4.3)
##  colorspace     1.3-2    2016-12-14 cran (@1.3-2)
##  compiler       3.4.3    2017-12-07 local
##  crayon         1.3.4    2017-09-16 cran (@1.3.4)
##  data.table     1.10.4-3 2017-10-27 CRAN (R 3.4.2)
##  datasets     * 3.4.3    2017-12-07 local
##  devtools       1.13.4   2017-11-09 CRAN (R 3.4.2)
##  digest         0.6.13   2017-12-14 CRAN (R 3.4.3)
##  dplyr        * 0.7.4    2017-09-28 cran (@0.7.4)
##  emmeans      * 1.1      2018-01-10 CRAN (R 3.4.3)
##  estimability   1.2      2016-11-19 CRAN (R 3.4.0)
##  evaluate       0.10.1   2017-06-24 CRAN (R 3.4.1)
##  ez           * 4.4-0    2016-11-02 CRAN (R 3.4.0)
##  forcats      * 0.2.0    2017-01-23 cran (@0.2.0)
##  foreign        0.8-69   2017-06-22 CRAN (R 3.4.3)
##  Formula        1.2-2    2017-07-10 CRAN (R 3.4.1)
##  ggplot2      * 2.2.1    2016-12-30 cran (@2.2.1)
##  glue           1.2.0    2017-10-29 cran (@1.2.0)
##  graphics     * 3.4.3    2017-12-07 local
##  grDevices    * 3.4.3    2017-12-07 local
##  grid           3.4.3    2017-12-07 local
##  gridExtra      2.3      2017-09-09 CRAN (R 3.4.1)
##  gtable         0.2.0    2016-02-26 cran (@0.2.0)
##  haven        * 1.1.0    2017-07-09 cran (@1.1.0)
##  highr          0.6      2016-05-09 CRAN (R 3.4.0)
##  Hmisc          4.1-1    2018-01-03 CRAN (R 3.4.3)
##  hms            0.4.0    2017-11-23 cran (@0.4.0)
##  htmlTable      1.11.2   2018-01-20 CRAN (R 3.4.3)
##  htmltools      0.3.6    2017-04-28 CRAN (R 3.4.0)
##  htmlwidgets    0.9      2017-07-10 CRAN (R 3.4.1)
##  httr           1.3.1    2017-08-20 CRAN (R 3.4.1)
##  jsonlite       1.5      2017-06-01 CRAN (R 3.4.0)
##  knitr        * 1.17     2017-08-10 CRAN (R 3.4.1)
##  labeling       0.3      2014-08-23 cran (@0.3)
##  lattice        0.20-35  2017-03-25 CRAN (R 3.4.3)
##  latticeExtra   0.6-28   2016-02-09 CRAN (R 3.4.0)
##  lazyeval       0.2.1    2017-10-29 cran (@0.2.1)
##  lme4         * 1.1-14   2017-09-27 CRAN (R 3.4.2)
##  lmerTest     * 2.0-36   2017-11-30 CRAN (R 3.4.3)
##  lubridate      1.7.1    2017-11-03 cran (@1.7.1)
##  magrittr     * 1.5      2014-11-22 CRAN (R 3.4.0)
##  MASS           7.3-47   2017-02-26 CRAN (R 3.4.3)
##  Matrix       * 1.2-12   2017-11-20 CRAN (R 3.4.3)
##  MatrixModels   0.4-1    2015-08-22 CRAN (R 3.4.0)
##  memoise        1.1.0    2017-04-21 CRAN (R 3.4.0)
##  methods      * 3.4.3    2017-12-07 local
##  mgcv           1.8-22   2017-09-24 CRAN (R 3.4.3)
##  minqa          1.2.4    2014-10-09 CRAN (R 3.4.0)
##  mnormt         1.5-5    2016-10-15 cran (@1.5-5)
##  modelr         0.1.1    2017-07-24 cran (@0.1.1)
##  modeltools     0.2-21   2013-09-02 CRAN (R 3.4.0)
##  multcomp       1.4-8    2017-11-08 CRAN (R 3.4.2)
##  munsell        0.4.3    2016-02-13 cran (@0.4.3)
##  mvtnorm        1.0-6    2017-03-02 CRAN (R 3.4.0)
##  nlme           3.1-131  2017-02-06 CRAN (R 3.4.3)
##  nloptr         1.0.4    2014-08-04 CRAN (R 3.4.0)
##  nnet           7.3-12   2016-02-02 CRAN (R 3.4.3)
##  parallel       3.4.3    2017-12-07 local
##  pbkrtest       0.4-7    2017-03-15 CRAN (R 3.4.0)
##  pkgconfig      2.0.1    2017-03-21 cran (@2.0.1)
##  plyr           1.8.4    2016-06-08 cran (@1.8.4)
##  psych          1.7.3.21 2017-03-22 cran (@1.7.3.2)
##  purrr        * 0.2.4    2017-10-18 cran (@0.2.4)
##  quantreg       5.34     2017-10-25 CRAN (R 3.4.2)
##  R6             2.2.2    2017-06-17 CRAN (R 3.4.0)
##  RColorBrewer   1.1-2    2014-12-07 cran (@1.1-2)
##  Rcpp           0.12.14  2017-11-23 CRAN (R 3.4.3)
##  readr        * 1.1.1    2017-05-16 cran (@1.1.1)
##  readxl       * 1.0.0    2017-04-18 cran (@1.0.0)
##  reshape2       1.4.3    2017-12-11 cran (@1.4.3)
##  rlang          0.1.4    2017-11-05 cran (@0.1.4)
##  rmarkdown      1.8      2017-11-17 CRAN (R 3.4.2)
##  rpart          4.1-11   2017-03-13 CRAN (R 3.4.3)
##  rprojroot      1.2      2017-01-16 CRAN (R 3.4.0)
##  rstudioapi     0.7      2017-09-07 CRAN (R 3.4.1)
##  rvest          0.3.2    2016-06-17 cran (@0.3.2)
##  sandwich       2.4-0    2017-07-26 CRAN (R 3.4.1)
##  scales         0.5.0    2017-08-24 cran (@0.5.0)
##  SparseM        1.77     2017-04-23 CRAN (R 3.4.0)
##  splines        3.4.3    2017-12-07 local
##  stats        * 3.4.3    2017-12-07 local
##  stats4         3.4.3    2017-12-07 local
##  stringi        1.1.6    2017-11-17 CRAN (R 3.4.2)
##  stringr      * 1.2.0    2017-02-18 CRAN (R 3.4.0)
##  survival       2.41-3   2017-04-04 CRAN (R 3.4.0)
##  TH.data        1.0-8    2017-01-23 CRAN (R 3.4.0)
##  tibble       * 1.3.4    2017-08-22 cran (@1.3.4)
##  tidyr        * 0.7.2    2017-10-16 cran (@0.7.2)
##  tidyselect     0.2.3    2017-11-06 cran (@0.2.3)
##  tidyverse    * 1.2.1    2017-11-14 cran (@1.2.1)
##  tools          3.4.3    2017-12-07 local
##  utils        * 3.4.3    2017-12-07 local
##  withr          2.1.0    2017-11-01 CRAN (R 3.4.2)
##  xml2           1.1.1    2017-01-24 cran (@1.1.1)
##  xtable         1.8-2    2016-02-05 CRAN (R 3.4.0)
##  yaml           2.1.16   2017-12-12 CRAN (R 3.4.3)
##  zoo            1.8-1    2018-01-08 CRAN (R 3.4.3)