Testing the PlatoWork Headset

Lasse Hjorth Madsen

2021 May 03

Introduction

One promising technology for improving cognitive ability in healthy humans is Transcranial Direct Current Stimulation (tDCS). Many devices implementing this technique are available for purchase.

My son and I did an informal test of one such device, the tDCS headset from PlatoScience. As a learning task, we used the speed typing test available from 10fastfingers.com. We each did a number of test sessions, usually one per day, with between 4 and 12 tests per session. Each session was performed with either:

- no stimulus (a baseline, done without the headset)
- a sham stimulus, or
- a real stimulus

Sham stimulus and real stimulus refer to a placebo and an actual direct current stimulation, respectively. During testing, these were labeled A and B; only after the test did we learn which was which. This functionality was available in a research version of the PlatoWork app.

R Package

The R package platowork provides a small data set with the results of our experiments. It also includes the vignette you are reading now, with some examples of how the data can be analyzed.

The R package is available on GitHub at https://github.com/lassehjorthmadsen/platowork and can be installed directly from there:

devtools::install_github("lassehjorthmadsen/platowork", build_vignettes = TRUE)

The purpose of the package is to make the data available to those interested. It may be useful as an example data set to teach, practice, or demo basic statistical techniques and visualizations. (See below.)

The data and analysis may also be interesting to those who have experience with tDCS themselves.

This vignette can be read from RStudio using vignette("testing-platowork") or accessed directly at https://rpubs.com/lassehjorthmadsen/764374.

Data

In R, the data set is stored as plato. In the RStudio console, type ?plato to read a description of the data set, or simply type head(plato) to see the first few rows:

library(platowork)
head(plato)
##                  date subject wpm  error stimulus session
## 1 2021-04-01 00:04:00   Lasse  51 0.8916     None       1
## 2 2021-04-01 00:10:00   Lasse  49 0.8889     None       1
## 3 2021-04-01 00:13:00   Lasse  61 0.9840     None       1
## 4 2021-04-01 00:19:00   Lasse  53 0.9020     None       1
## 5 2021-04-01 00:23:00   Lasse  54 0.9085     None       1
## 6 2021-04-01 08:35:00   Lasse  48 0.9380     None       2

Each row represents the result of a speed typing test. Each test has a date/time stamp; a subject (the person doing the test); a wpm (words per minute) performance score; an error rate; and a stimulus (“None”, “Sham” or “Real”, where “Sham” is a placebo-like stimulus). Finally, since each test was taken several times in a row, we also have a session id.

Below is a quick summary of the data, with a duration in minutes calculated for the sessions:

## # A tibble: 6 x 6
##   subject stimulus no_tests no_sessions first_test          avr_session_duration
##   <chr>   <chr>       <int>       <int> <dttm>              <drtn>              
## 1 Lasse   None            9           2 2021-04-01 00:04:00 16.8 mins           
## 2 Lasse   Sham           44           6 2021-04-01 17:12:00 15.0 mins           
## 3 Lasse   Real           46           5 2021-04-06 19:02:00 15.0 mins           
## 4 Villads None            7           1 2021-04-01 17:25:00 12.0 mins           
## 5 Villads Sham           57           5 2021-04-02 18:47:00 21.2 mins           
## 6 Villads Real           62           5 2021-04-07 12:33:00 20.8 mins
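
A minimal dplyr sketch of how such a summary can be computed, assuming that session duration means the time from the first to the last test within a session (the exact code behind the table is not shown, so this is an approximation):

library(dplyr)

plato %>%
  mutate(stimulus = factor(stimulus, levels = c("None", "Sham", "Real"))) %>%
  group_by(subject, stimulus, session) %>%
  summarise(n        = n(),
            first    = min(date),
            duration = difftime(max(date), min(date), units = "mins"),
            .groups  = "drop") %>%
  group_by(subject, stimulus) %>%
  summarise(no_tests             = sum(n),
            no_sessions          = n(),
            first_test           = min(first),
            avr_session_duration = mean(duration),
            .groups              = "drop")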

Visualizing the data

The chart below plots all 225 data points in the data set. Each dot represents the result of one speed typing test, measured in words per minute on the y-axis. The x-axis shows the session ids. Vertical bars show the average for each session. The plot is split by the two test subjects: Lasse and Villads.
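
A rough ggplot2 sketch of how such a chart could be drawn; the jitter and the mean markers are assumptions, not the original plotting code:

library(ggplot2)

ggplot(plato, aes(x = factor(session), y = wpm, colour = stimulus)) +
  geom_jitter(width = 0.1, alpha = 0.6) +       # one dot per test
  stat_summary(fun = mean, geom = "point",      # session means as wide dashes
               shape = 95, size = 10, colour = "black") +
  facet_wrap(~ subject) +
  labs(x = "Session", y = "Words per minute", colour = "Stimulus")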

By visual inspection alone, it looks as if one subject (Villads) may have experienced enhanced learning from using the headset with real stimuli. The same cannot be said for Lasse: for me, the green dots (real stimuli) were not higher on average than the blue dots (sham stimuli).

Another way to look at the data is to inspect the estimated density plots, while adding the overall averages for each subject/stimulus combination. See below:
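
Again, a sketch of how such density plots with group averages might be drawn (details assumed):

avgs <- plato %>%
  group_by(subject, stimulus) %>%
  summarise(avg_wpm = mean(wpm), .groups = "drop")

ggplot(plato, aes(x = wpm, fill = stimulus)) +
  geom_density(alpha = 0.4) +
  geom_vline(data = avgs, aes(xintercept = avg_wpm, colour = stimulus),
             linetype = "dashed") +            # overall average per group
  facet_wrap(~ subject) +
  labs(x = "Words per minute", fill = "Stimulus", colour = "Stimulus")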

For me, typing speed was very slightly lower using real stimuli; Villads had a somewhat higher average typing speed under real stimulation, improving from about 65 to about 67 words per minute: a small but possibly real improvement.

Both of us did worse in the initial baseline trials, suggesting that some learning was going on during the experiment.

Statistical test

While the charts above give a visual impression, we might want a more formal statistical test of whether the stimulus we were exposed to while speed typing made a difference to the results.

A simple and natural choice of test might be an analysis of variance, ANOVA.

m <- aov(wpm ~ stimulus, data = plato)
summary(m)
##              Df Sum Sq Mean Sq F value Pr(>F)  
## stimulus      2    272  136.12   3.926 0.0211 *
## Residuals   222   7697   34.67                 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

With a p-value of 0.02, these results would be somewhat unlikely if all typing test results were drawn from the same distribution.
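
One could follow up by asking which pairs of stimulus groups differ; Tukey's honest significant difference test, applied to the fitted aov object, gives pairwise comparisons (output omitted here):

TukeyHSD(m)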

However, we might want to filter the data to disregard the initial tests done without the headset, since they are not really part of the relevant comparison. While we are at it, we also do a bit of data wrangling that will be useful later on:

library(dplyr)  # for the pipe and data verbs below

# 1) Standardize wpm;
# 2) Filter out no stimulus;
# 3) Set factor levels for later plotting.

data <- plato %>% 
  group_by(subject) %>% 
  mutate(wpm_stand = (wpm - mean(wpm)) / sd(wpm)) %>% 
  ungroup() %>% 
  filter(stimulus != "None") %>%
  mutate(stimulus = factor(stimulus, levels = c("Sham", "Real")))

The same model, applied to the slightly reduced data set, looks like this:

m <- aov(wpm ~ stimulus, data = data)
summary(m)
##              Df Sum Sq Mean Sq F value Pr(>F)
## stimulus      1     59   58.65   1.717  0.192
## Residuals   207   7070   34.15

The p-value of 0.19 indicates no detectable effect of the headset.

This is because I got slightly better results on average with sham stimuli, while my son got better results with real stimuli.

But the fact that we type at different speeds also inflates the within-group variance in the ANOVA test above. In other words, the dependent variable, wpm, looks extra volatile because we are looking at results from two different subjects. We can correct for that by standardizing the data: subtracting the mean and dividing by the standard deviation for each subject. (This is why we needed the wpm_stand variable calculated just before.)
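
As a quick illustration of what the standardization does, one can compare raw and standardized means per subject and stimulus; a hypothetical check along these lines:

data %>%
  group_by(subject, stimulus) %>%
  summarise(mean_wpm       = mean(wpm),        # raw means differ by subject
            mean_wpm_stand = mean(wpm_stand),  # standardized means are comparable
            .groups        = "drop")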

Standardizing enables us to build a model that focuses on individual improvements from individual averages:

m <- aov(wpm_stand ~ stimulus, data = data)
summary(m)
##              Df Sum Sq Mean Sq F value Pr(>F)  
## stimulus      1   2.76  2.7578   2.913 0.0894 .
## Residuals   207 195.96  0.9466                 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

As expected, this yields a lower p-value, 0.09, but still not exactly convincing evidence of improved typing speed when using the headset.

It is possible that the headset works for one of us but not the other. Indeed, that is the impression one can get from the first two charts. We might want to capture this by including subject in the analysis as an interaction term, so that the effect of the stimulus depends on the subject:

m <- aov(wpm_stand ~ stimulus * subject, data)
summary(m)
##                   Df Sum Sq Mean Sq F value Pr(>F)  
## stimulus           1   2.76   2.758   2.981 0.0858 .
## subject            1   0.03   0.032   0.035 0.8527  
## stimulus:subject   1   6.25   6.252   6.757 0.0100 *
## Residuals        205 189.67   0.925                 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

One way to interpret this output is that there is little evidence that the stimulus works overall, but some evidence that the headset might help my son more than me. An equally plausible interpretation might be that my son just learns faster or better than me, headset or not.

We could try to focus on the learning that goes on by looking at the slopes of learning curves over time:
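
A sketch of how such learning curves might be plotted, fitting one straight line per stimulus within each subject panel (details assumed):

ggplot(data, aes(x = session, y = wpm_stand, colour = stimulus)) +
  geom_point(alpha = 0.5) +
  geom_smooth(method = "lm", se = FALSE) +  # one fitted slope per stimulus and subject
  facet_wrap(~ subject) +
  labs(x = "Session", y = "Standardized words per minute", colour = "Stimulus")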

Looking at the information this way, we see more clearly that I learned faster while exposed to sham stimuli, whereas my son learned faster during the last part of the experiment, when exposed to real stimuli.

We could try to test this impression using linear regression, with a three-way interaction between session number, stimulus, and subject, in effect producing a model that allows for four different slopes:

m <- lm(wpm_stand ~ session * stimulus * subject, data = data)
summary(m)
## 
## Call:
## lm(formula = wpm_stand ~ session * stimulus * subject, data = data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.69181 -0.56271  0.03133  0.62884  2.03750 
## 
## Coefficients:
##                                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)                         -2.08098    0.50280  -4.139 5.13e-05 ***
## session                              0.38066    0.08304   4.584 8.01e-06 ***
## stimulusReal                         3.42672    1.13629   3.016  0.00289 ** 
## subjectVillads                       1.57832    0.61207   2.579  0.01063 *  
## session:stimulusReal                -0.50359    0.12292  -4.097 6.07e-05 ***
## session:subjectVillads              -0.31569    0.11689  -2.701  0.00751 ** 
## stimulusReal:subjectVillads         -5.89702    1.38697  -4.252 3.25e-05 ***
## session:stimulusReal:subjectVillads  0.80166    0.16749   4.786 3.29e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.8784 on 201 degrees of freedom
## Multiple R-squared:  0.2195, Adjusted R-squared:  0.1924 
## F-statistic: 8.078 on 7 and 201 DF,  p-value: 1.19e-08

This model takes into account learning over time, use of the headset, and individual differences in how we react to both.

It is not easy to interpret, however, since each term only has meaning when considered together with the others.

If we want something more readily interpretable, we might fit a linear model to each of the four combinations: 2 subjects x 2 stimuli. This is not efficient for estimation (we estimate more parameters than we need to), but it is simpler to interpret. Using the purrr package, it is easy to split the data set, apply a model to each part, and pull out the part we want; here, the slope of the session parameter with its corresponding p-value:

library(purrr) # For map function

data %>%
  split(list(.$stimulus, .$subject)) %>%
  map(~ lm(wpm_stand ~ session, data = .)) %>%
  map(summary) %>%
  map("coefficients") %>% 
  map(as_tibble, rownames = "parameter") %>% 
  bind_rows(.id = "model") %>% 
  filter(parameter == "session") %>% 
  select(model, parameter, slope = Estimate, p_value = `Pr(>|t|)`) %>% 
  mutate(across(where(is.numeric), ~ round(.x, 3)))
## # A tibble: 4 x 4
##   model        parameter  slope p_value
##   <chr>        <chr>      <dbl>   <dbl>
## 1 Sham.Lasse   session    0.381   0    
## 2 Real.Lasse   session   -0.123   0.162
## 3 Sham.Villads session    0.065   0.417
## 4 Real.Villads session    0.363   0

We basically see the same information as in the previous plot, but expressed numerically rather than graphically: for me, the headset did not work; for my son, it might have.
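
As a cross-check, the four slopes above can also be recovered from the three-way interaction model m by summing the relevant coefficient estimates (values taken from the model output shown earlier):

coef(m)["session"]                                      # Sham, Lasse:   0.381
coef(m)["session"] + coef(m)["session:stimulusReal"]    # Real, Lasse:  -0.123
coef(m)["session"] + coef(m)["session:subjectVillads"]  # Sham, Villads: 0.065
sum(coef(m)[c("session",
              "session:stimulusReal",
              "session:subjectVillads",
              "session:stimulusReal:subjectVillads")])  # Real, Villads: 0.363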

Conclusion

This is, of course, a tiny, informal, private experiment that does not say a whole lot about how the headset might work on a wider population, or on tasks other than speed typing.

For the two of us taken together, we could not measure a clear gain from using the tDCS headset when speed typing. It is possible that for Villads the headset induced an improvement in learning speed, but that could also be unrelated to the headset.

Subjectively, neither of us experienced a clear sense of enhanced learning while using the headset.