irtrees 1.0.0

Zhaojun Li, Ivailo Partchev and Paul De Boeck

December 6, 2021

The first version of package irtrees contained just two functions, dendrify() and exogenify() (preserved in the current version for compatibility), and some data sets permitting the user to replicate the examples in De Boeck and Partchev (2012). The main interest lay in the psychometric methodology introduced by the paper. Nine years later, almost to the day, IRTrees models have become quite popular, with applications to many aspects of human and animal behavior. An update of the package was overdue, and it includes several substantial improvements.

Back in 2012, the principal way to estimate an IRTrees model was with the glmer() function in package lme4 (Bates et al. 2015). Hence, we needed to translate all data sets to the long data shape expected by that package. Another constraint was that all items in the test were expected to follow the same IRTrees model. In the meantime, more flexible IRTrees models have been introduced, and the range of possibilities to estimate them has expanded to include mirt (Chalmers 2012), flirt (Jeon, Rijmen, and Rabe-Hesketh 2014), but also TAM, Stan, Mplus… the list can certainly be continued. Each of these programs has different demands on the shape of the input data, and they also tend to offer different advantages. This new version of irtrees strives to facilitate IRTrees modeling with different software and for a wider range of models.

First and foremost, a set of new functions written by Zhaojun Li allows users to prepare their data sets in a much more flexible way:

Both the original data sets and the generated IRTrees data sets may be either wide-format or long-format. This allows for extra flexibility in the choice of software for the actual estimation of the model parameters.

Another addition is a function that allows users to draw the tree for a specific item and have it automatically translated into a mapping matrix. We see two advantages in this: first, the graphical output may be useful for publication; second, even if setting up a mapping matrix is relatively simple, there will always be people who think better in terms of matrices and others who feel more at ease with graphs.

Wide and long format

As mentioned above, some of the software packages that can fit IRTrees models expect to see the data in long format, while others require wide format. Thanks to the tidyverse movement (Wickham et al. 2019), awareness of long format has increased in recent times. Until not too long ago, a statistician could reach retirement without ever seeing a data set in long format. After all, wide format, the sample-by-variable matrix, is the native data representation in statistics. It is predominant in textbooks and easily understood; it readily yields summary statistics and conveniently highlights the structure of missing data (if any). Two related matrices, the sample-by-sample dissimilarity matrix and the variable-by-variable covariance (or correlation) matrix, provide the input for a plethora of statistical methods. A very minimal example of a data set in wide format is shown below.

From the point of view of data-management professionals, the wide format is quite insane, because the vessel containing the data must be remade from scratch for each problem: both the number of columns and their meaning are redefined. On the other hand, any problem can be accommodated by a single data structure with three columns whose meaning is fixed once and for all: sample, variable, and value:

Whether the test has 2, 20, or 200 items is now a data problem, not a data structure problem. Missing values do not have to be stored at all, which can be very efficient with sparse data; on the other hand, we can no longer observe directly any potentially revealing patterns of missingness. Computing covariance or distance matrices is not as easy from long format as from wide. Long before we come to IRTrees models, we observe that the two data representations have their relative merits and disadvantages!
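The two representations are easy to convert between with standard tools. A minimal sketch with toy data (names and values are arbitrary illustrations), using base R's reshape():

```r
# A tiny wide-format data set: one row per person, one column per item
wide <- data.frame(person = c(1, 2), item1 = c(0, 1), item2 = c(1, 1))

# Wide -> long: one row per (sample, variable, value) triplet
long <- reshape(wide, direction = "long",
                varying = c("item1", "item2"), v.names = "value",
                timevar = "variable", times = c("item1", "item2"),
                idvar = "person")
long <- long[order(long$person), c("person", "variable", "value")]

# Long -> wide again
wide_again <- reshape(long, direction = "wide",
                      idvar = "person", timevar = "variable",
                      v.names = "value")
```

The same conversions can of course be done with tidyr's pivot_longer() and pivot_wider().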

The data format in which we are actually interested, and which we will call “long” in this vignette, is somewhat different:

Note how the person property, in this case sex, is replicated with each response. This is because the data is actually multi-level. The basic unit of interest is the response, resulting from the interaction of an individual with an item. Responses are nested within persons, which is why we need to replicate all person properties. The same would be true of any item properties (note that we would not be able to include these directly in wide format, as they pertain to the columns). A huge advantage of this format from the statistical point of view is that the item side and the person side are perfectly symmetric, which makes it easy to include both item and person covariates, and possibly their interactions, in the model. This is not something that would preoccupy data management people: they would simply place the data for each level in separate tables related by key variables:
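In code, such a relational arrangement might look as follows (a toy sketch; the table and column names are illustrative). A join on the key variable reproduces the long format discussed above:

```r
# Person-level table: one row per person
persons <- data.frame(id = c(1, 2), sex = c("F", "M"))

# Response-level table: one row per person-item interaction
responses <- data.frame(id   = c(1, 1, 2, 2),
                        item = c("item1", "item2", "item1", "item2"),
                        resp = c(0, 1, 1, 1))

# Joining on the key replicates the person property for each response
long_data <- merge(responses, persons, by = "id")
```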

Eight new functions

Eight new functions can be used to prepare data originally in wide or long format for IRTrees modelling in wide or long format, and with one or multiple trees involved. Their names follow a regular pattern: the prefixes WtoW, WtoL, LtoW, and LtoL identify the input and output shapes, and the suffixes _single.tree and _multiple.tree indicate whether all items share the same tree.

We are not going to discuss each function in full detail. After one more detailed example, we explain how and why the functions differ, and we provide examples where needed.

Simple example: wide input, wide output, all items share the same tree

As a first example, consider the following dummy data set:

five_wide = data.frame(
  id = 1:100,
  gender = factor(sample(c('M','F'), 100, replace=TRUE)),
  item1 = sample.int(5, 100, replace = TRUE),
  item2 = sample.int(5, 100, replace = TRUE),
  item3 = sample.int(5, 100, replace = TRUE),
  item4 = sample.int(5, 100, replace = TRUE),
  item5 = sample.int(5, 100, replace = TRUE)
  )
str(five_wide)
#> 'data.frame':    100 obs. of  7 variables:
#>  $ id    : int  1 2 3 4 5 6 7 8 9 10 ...
#>  $ gender: Factor w/ 2 levels "F","M": 2 2 2 1 2 1 1 1 2 2 ...
#>  $ item1 : int  1 4 1 1 3 4 1 3 5 3 ...
#>  $ item2 : int  4 2 1 2 4 5 1 1 1 5 ...
#>  $ item3 : int  3 1 3 4 2 5 1 4 3 1 ...
#>  $ item4 : int  4 5 1 4 5 3 2 5 4 4 ...
#>  $ item5 : int  5 1 1 2 4 2 1 1 1 4 ...

There are 5 five-point items (not very good ones, but we are not going to analyze these data anyway, just show how to prepare them for analysis); a person ID variable; and a person-specific covariate, gender. Item categories 1 through 5 represent ‘Strongly disagree,’ ‘Disagree,’ ‘Neither agree nor disagree,’ ‘Agree,’ and ‘Strongly agree,’ respectively. All items are assumed to follow the same IRTrees model, which can be represented as follows:

We have borrowed this tree from Jeon and De Boeck (2019). It provides a good example of an IRTrees model and of the ‘pseudo-item’ approach to fitting it. According to the hypothesized tree structure, there are four binary internal nodes, of which the first, Y1, determines whether the response is a midpoint response (category 3) or any of the other responses. If not the midpoint, a second node, Y2, determines whether the response is positive (category 4 or 5) or negative (category 1 or 2). Given the direction of the response, two further nodes, Y3 and Y4, determine whether the response is extreme or not (category 1 vs. 2 if negative, and category 5 vs. 4 if positive).

From the tree, it is not difficult to deduce the mapping matrix:
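A sketch of this matrix in R, written down directly from the node definitions above (five_cmx is the object name used in the calls that follow):

```r
# Mapping matrix for the five response categories (rows) and the
# four internal nodes Y1..Y4 (columns); NA marks nodes that are
# not visited on the way to a given category
five_cmx <- matrix(c(
  0,  0,  1, NA,   # 1 Strongly disagree: not midpoint, negative, extreme
  0,  0,  0, NA,   # 2 Disagree:          not midpoint, negative, not extreme
  1, NA, NA, NA,   # 3 Neither:           midpoint, no further nodes visited
  0,  1, NA,  0,   # 4 Agree:             not midpoint, positive, not extreme
  0,  1, NA,  1    # 5 Strongly agree:    not midpoint, positive, extreme
), nrow = 5, byrow = TRUE)
```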

Each column represents a node. Each row corresponds to a response category and contains the coefficients received from each node. For example, the fourth category, Agree, receives 0 from the first node (it is not a midpoint response), 1 from the second (it is positive), NA from the third (that node is never visited), and 0 from the fourth (it is not extreme).

Having the mapping matrix, we are ready to recode the data set with function WtoW_single.tree():

five_wide_single.tree = WtoW_single.tree(
  data = five_wide, 
  cmx = five_cmx, 
  id.col = 1, 
  resp.col = c(3:7), 
  covar.col = 2
  )
str(five_wide_single.tree)
#> 'data.frame':    100 obs. of  22 variables:
#>  $ id         : int  1 2 3 4 5 6 7 8 9 10 ...
#>  $ item1.node1: num  0 0 0 0 1 0 0 1 0 1 ...
#>  $ item2.node1: num  0 0 0 0 0 0 0 0 0 0 ...
#>  $ item3.node1: num  1 0 1 0 0 0 0 0 1 0 ...
#>  $ item4.node1: num  0 0 0 0 0 1 0 0 0 0 ...
#>  $ item5.node1: num  0 0 0 0 0 0 0 0 0 0 ...
#>  $ item1.node2: num  0 1 0 0 NA 1 0 NA 1 NA ...
#>  $ item2.node2: num  1 0 0 0 1 1 0 0 0 1 ...
#>  $ item3.node2: num  NA 0 NA 1 0 1 0 1 NA 0 ...
#>  $ item4.node2: num  1 1 0 1 1 NA 0 1 1 1 ...
#>  $ item5.node2: num  1 0 0 0 1 0 0 0 0 1 ...
#>  $ item1.node3: num  1 NA 1 1 NA NA 1 NA NA NA ...
#>  $ item2.node3: num  NA 0 1 0 NA NA 1 1 1 NA ...
#>  $ item3.node3: num  NA 1 NA NA 0 NA 1 NA NA 1 ...
#>  $ item4.node3: num  NA NA 1 NA NA NA 0 NA NA NA ...
#>  $ item5.node3: num  NA 1 1 0 NA 0 1 1 1 NA ...
#>  $ item1.node4: num  NA 0 NA NA NA 0 NA NA 1 NA ...
#>  $ item2.node4: num  0 NA NA NA 0 1 NA NA NA 1 ...
#>  $ item3.node4: num  NA NA NA 0 NA 1 NA 0 NA NA ...
#>  $ item4.node4: num  0 1 NA 0 1 NA NA 1 0 0 ...
#>  $ item5.node4: num  1 NA NA NA 0 NA NA NA NA 0 ...
#>  $ gender     : Factor w/ 2 levels "F","M": 2 2 2 1 2 1 1 1 2 2 ...

In this call, id.col gives the position of the person ID column, resp.col the positions of the response columns, and covar.col the positions of any covariate columns.

Column names may be used instead of column numbers:

five_wide_single.tree = WtoW_single.tree(
  data = five_wide, 
  cmx = five_cmx, 
  id.col = "id", 
  resp.col = c("item1", "item2", "item3"), 
  covar.col = "gender"
)

In a longitudinal survey where the same test is taken by the same person more than once, data in wide format can be stacked together with an additional variable identifying the test occasion; this must be passed to the function with the additional argument, time.col.
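For instance, stacking two occasions might look like this (a sketch with toy data; only the construction of the stacked data frame is shown, with the final call indicated in a comment):

```r
# The same three-item test taken at two occasions (toy data)
occ1 <- data.frame(id = 1:2, item1 = c(1, 3), item2 = c(2, 5), item3 = c(4, 4))
occ2 <- data.frame(id = 1:2, item1 = c(2, 4), item2 = c(1, 5), item3 = c(3, 2))
occ1$time <- 1
occ2$time <- 2
stacked <- rbind(occ1, occ2)

# The occasion variable is then passed via time.col, e.g.:
# WtoW_single.tree(data = stacked, cmx = five_cmx,
#                  id.col = "id", resp.col = 2:4, time.col = "time")
```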

Variation: wide input, long output, all items share the same tree

The wide output file would be fine if we wanted to estimate the IRTrees model with a program like mirt, but what if we prefer to use lme4 as in the original JSS paper? All we need to do is change the function; the input is the same, and so are all parameters:

five_long_single.tree = WtoL_single.tree(
  data = five_wide, 
  cmx = five_cmx, 
  id.col = "id", 
  resp.col = c("item1","item2","item3"), 
  covar.col = "gender")

str(five_long_single.tree)
#> 'data.frame':    1200 obs. of  6 variables:
#>  $ id    : int  1 1 1 2 2 2 3 3 3 4 ...
#>  $ item  : chr  "item1" "item2" "item3" "item1" ...
#>  $ node  : chr  "node1" "node1" "node1" "node1" ...
#>  $ sub   : chr  "item1:node1" "item2:node1" "item3:node1" "item1:node1" ...
#>  $ resp  : num  0 0 1 0 0 0 0 0 1 0 ...
#>  $ gender: Factor w/ 2 levels "F","M": 2 2 2 2 2 2 2 2 2 1 ...

Complication: wide input, items may have different trees

The original version of irtrees assumed that all items have the same number of categories governed by the same tree. To relax this rather severe limitation, we need to replace the mapping matrix with a list of mapping matrices, and provide another list of the same length specifying the items to which each of the matrices will be applied. All other parameters of the function remain unchanged.

As an example, let us assume a test of five items and two different trees, specified with two mapping matrices. The first matrix applies to items 1, 3, and 5, and the second to items 2 and 4. We already have one matrix, five_cmx; let the other one be:

#>      [,1] [,2]
#> [1,]    0    0
#> [2,]    0    1
#> [3,]    1   NA
#> [4,]    0    2
#> [5,]    0    3
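The matrix printed above can be entered as follows (a sketch; five_cmx.2 is the name used in the list construction below):

```r
# Second mapping matrix, exactly as printed above:
# five response categories (rows), two nodes (columns)
five_cmx.2 <- matrix(c(
  0,  0,
  0,  1,
  1, NA,
  0,  2,
  0,  3
), nrow = 5, byrow = TRUE)
```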

We then need to specify:

matrices_list = list(five_cmx, five_cmx.2)

items_list = list(c('item1', 'item3', 'item5'), c('item2', 'item4'))

five_wide_multiple.tree = WtoW_multiple.tree(
  data = five_wide, 
  cmx_list = matrices_list,
  resp.col_list = items_list,
  id.col = "id", 
  covar.col = "gender"
)

To produce an output data set in long rather than wide shape, just replace WtoW_multiple.tree() with WtoL_multiple.tree().

What if the input data is in long shape?

Test data may be put in long shape as a matter of preference, but it may also originate like this. In older times, when all tests were paper-based and results had to be typed in manually or, with some luck, scanned with an optical reader, wide shape was ubiquitous. With computer-based testing becoming widespread, data is much more likely to arrive as person-item-response triplets. This is handy if we want to fit our models in lme4: in particular, it is equally easy to include person covariates, item covariates, or interactions of both.

To illustrate, we will use a much-analyzed data set that was also the subject of our previous paper (De Boeck and Partchev 2012). It concerns the self-reported verbal reactions to four different embarrassing situations. Person covariates include gender and trait anger. Item covariates include the situation, the type of reaction (‘curse,’ ‘scold,’ or ‘shout’), and its mode (‘actually do’ vs. ‘want to do’), while the possible responses for each combination are ‘no,’ ‘perhaps,’ or ‘yes.’ One of the interesting findings with this data set is that, other things equal, women ‘want to’ react verbally as often as men do, but ‘actually do’ react less frequently. This is a hypothesis about an interaction between a person property and an item property.

There may be many good reasons to perform the analysis with mirt instead: faster estimation of fixed parameters, the possibility to use the 2PL model or perhaps an unfolding model, etc. However, not only does mirt expect the data in an \(n\times p\) shape, with \(n\) the number of persons and \(p\) the number of items; any person properties to be used as covariates must be supplied in a separate data frame with \(n\) rows, while the item properties, if any, must go into yet another data frame with \(p\) rows. While the person properties will be found in the first columns of the IRTrees wide data set we are going to generate, any item covariates are simply discarded, and the corresponding data frame must be constructed by hand.

The data set is bundled with the lme4 package. Currently irtrees expects the item categories to be represented with consecutive integers starting with 1, so we start by transforming the factor holding the responses to a vector of integers:

VerbAgg$resp = as.integer(VerbAgg$resp)

As in De Boeck and Partchev (2012), we assume a linear tree that expresses the idea of an ordinal item:
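For three ordered categories (‘no,’ ‘perhaps,’ ‘yes’), the linear tree translates into a simple mapping matrix; the following sketch defines the lincmx object used in the calls below:

```r
# Linear tree for a three-category ordinal item:
#   node 1: 'no' (0) vs. at least 'perhaps' (1)
#   node 2: reached only if node 1 is 1; 'perhaps' (0) vs. 'yes' (1)
lincmx <- matrix(c(
  0, NA,   # no
  1,  0,   # perhaps
  1,  1    # yes
), nrow = 3, byrow = TRUE)
```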

For the verbal aggression example, with the mapping matrix in lincmx, the long data set is transformed to wide with:

VerbAgg_wide = LtoW_single.tree(
  data = VerbAgg, 
  cmx = lincmx, 
  id.col = "id",
  item.col = "item", 
  resp.col = "resp", 
  covar.col = c("Anger","Gender","btype","situ","mode"))
#> Warning in LtoW_single.tree(data = VerbAgg, cmx = lincmx, id.col = "id", :
#> Removing some covariates

str(VerbAgg_wide)
#> tibble [316 × 51] (S3: tbl_df/tbl/data.frame)
#>  $ id               : Factor w/ 316 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
#>  $ Anger            : int [1:316] 20 11 17 21 17 21 39 21 24 16 ...
#>  $ Gender           : Factor w/ 2 levels "F","M": 2 2 1 1 1 1 1 1 1 1 ...
#>  $ S1WantCurse:node1: num [1:316] 0 0 1 1 1 1 1 0 0 1 ...
#>  $ S1WantScold:node1: num [1:316] 0 0 1 1 0 1 1 0 0 1 ...
#>  $ S1WantShout:node1: num [1:316] 0 0 1 1 1 0 1 0 0 1 ...
#>  $ S2WantCurse:node1: num [1:316] 0 0 1 1 1 1 1 0 1 1 ...
#>  $ S2WantScold:node1: num [1:316] 0 0 0 1 0 0 1 0 1 1 ...
#>  $ S2WantShout:node1: num [1:316] 0 0 1 1 0 0 1 0 0 1 ...
#>  $ S3WantCurse:node1: num [1:316] 0 0 1 1 1 1 1 0 1 1 ...
#>  $ S3WantScold:node1: num [1:316] 0 0 0 0 0 1 0 0 0 0 ...
#>  $ S3WantShout:node1: num [1:316] 1 0 0 0 0 1 1 0 1 1 ...
#>  $ S4wantCurse:node1: num [1:316] 1 0 0 0 1 0 0 0 1 1 ...
#>  $ S4WantScold:node1: num [1:316] 0 0 0 0 0 0 0 0 0 1 ...
#>  $ S4WantShout:node1: num [1:316] 0 0 0 0 0 0 0 0 0 1 ...
#>  $ S1DoCurse:node1  : num [1:316] 1 0 0 1 1 1 1 1 1 1 ...
#>  $ S1DoScold:node1  : num [1:316] 0 0 1 1 1 0 1 0 0 1 ...
#>  $ S1DoShout:node1  : num [1:316] 1 0 1 1 0 0 1 0 0 1 ...
#>  $ S2DoCurse:node1  : num [1:316] 1 0 0 1 1 1 1 1 1 1 ...
#>  $ S2DoScold:node1  : num [1:316] 0 0 0 1 0 0 1 0 0 1 ...
#>  $ S2DoShout:node1  : num [1:316] 0 0 1 1 0 0 1 0 0 1 ...
#>  $ S3DoCurse:node1  : num [1:316] 1 1 0 1 1 1 0 1 1 0 ...
#>  $ S3DoScold:node1  : num [1:316] 0 0 0 0 0 0 0 0 0 0 ...
#>  $ S3DoShout:node1  : num [1:316] 0 0 0 0 0 0 0 0 1 0 ...
#>  $ S4DoCurse:node1  : num [1:316] 1 0 1 0 1 1 0 1 0 1 ...
#>  $ S4DoScold:node1  : num [1:316] 1 0 0 0 0 1 0 0 1 1 ...
#>  $ S4DoShout:node1  : num [1:316] 1 0 0 0 0 0 0 0 0 1 ...
#>  $ S1WantCurse:node2: num [1:316] NA NA 0 0 0 1 1 NA NA 1 ...
#>  $ S1WantScold:node2: num [1:316] NA NA 0 0 NA 1 1 NA NA 1 ...
#>  $ S1WantShout:node2: num [1:316] NA NA 0 0 0 NA 1 NA NA 1 ...
#>  $ S2WantCurse:node2: num [1:316] NA NA 0 0 0 1 0 NA 0 1 ...
#>  $ S2WantScold:node2: num [1:316] NA NA NA 0 NA NA 0 NA 0 1 ...
#>  $ S2WantShout:node2: num [1:316] NA NA 0 0 NA NA 1 NA NA 0 ...
#>  $ S3WantCurse:node2: num [1:316] NA NA 0 0 0 0 0 NA 0 0 ...
#>  $ S3WantScold:node2: num [1:316] NA NA NA NA NA 0 NA NA NA NA ...
#>  $ S3WantShout:node2: num [1:316] 0 NA NA NA NA 0 0 NA 0 0 ...
#>  $ S4wantCurse:node2: num [1:316] 1 NA NA NA 0 NA NA NA 0 0 ...
#>  $ S4WantScold:node2: num [1:316] NA NA NA NA NA NA NA NA NA 0 ...
#>  $ S4WantShout:node2: num [1:316] NA NA NA NA NA NA NA NA NA 0 ...
#>  $ S1DoCurse:node2  : num [1:316] 0 NA NA 0 0 1 1 1 1 0 ...
#>  $ S1DoScold:node2  : num [1:316] NA NA 0 0 0 NA 1 NA NA 1 ...
#>  $ S1DoShout:node2  : num [1:316] 0 NA 0 0 NA NA 1 NA NA 0 ...
#>  $ S2DoCurse:node2  : num [1:316] 0 NA NA 1 0 1 0 1 1 1 ...
#>  $ S2DoScold:node2  : num [1:316] NA NA NA 0 NA NA 0 NA NA 1 ...
#>  $ S2DoShout:node2  : num [1:316] NA NA 0 0 NA NA 1 NA NA 0 ...
#>  $ S3DoCurse:node2  : num [1:316] 0 0 NA 0 0 1 NA 1 0 NA ...
#>  $ S3DoScold:node2  : num [1:316] NA NA NA NA NA NA NA NA NA NA ...
#>  $ S3DoShout:node2  : num [1:316] NA NA NA NA NA NA NA NA 0 NA ...
#>  $ S4DoCurse:node2  : num [1:316] 1 NA 0 NA 0 1 NA 1 NA 0 ...
#>  $ S4DoScold:node2  : num [1:316] 1 NA NA NA NA 1 NA NA 0 0 ...
#>  $ S4DoShout:node2  : num [1:316] 1 NA NA NA NA NA NA NA NA 0 ...

The first thing to do after that is to make sure that the order of items in the wide data set is as expected; otherwise we might end up specifying the wrong model. Next, we construct the person covariates data frame and the item covariates data frame; following the mirt conventions, we will call these covdata and itemdesign. Note that the first one can be used with both the mirt() and the mixedmirt() functions, while the second one is only available with mixedmirt().

covdata = VerbAgg_wide[,2:3]
VerbAgg_wide = VerbAgg_wide[,-(1:3)]
itemdesign = data.frame(node = factor(rep(1:2, each=24)))
itemdesign$mode = factor(ifelse(grepl('Do', names(VerbAgg_wide)), 'Do', 'Want'))

We will also make a long data set in order to run both lme4 and mirt:

VerbAgg_long = LtoL_single.tree(
  data = VerbAgg, 
  cmx = lincmx, 
  id.col = "id",
  item.col = "item", 
  resp.col = "resp", 
  covar.col = c("Anger","Gender","btype","situ","mode"))

str(VerbAgg_long)
#> 'data.frame':    15168 obs. of  10 variables:
#>  $ id    : Factor w/ 316 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
#>  $ item  : Factor w/ 24 levels "S1WantCurse",..: 1 1 1 1 1 1 1 1 1 1 ...
#>  $ node  : chr  "node1" "node1" "node1" "node1" ...
#>  $ sub   : chr  "S1WantCurse:node1" "S1WantCurse:node1" "S1WantCurse:node1" "S1WantCurse:node1" ...
#>  $ resp  : num  0 0 1 1 1 1 1 0 0 1 ...
#>  $ Anger : int  20 11 17 21 17 21 39 21 24 16 ...
#>  $ Gender: Factor w/ 2 levels "F","M": 2 2 1 1 1 1 1 1 1 1 ...
#>  $ btype : Factor w/ 3 levels "curse","scold",..: 1 1 1 1 1 1 1 1 1 1 ...
#>  $ situ  : Factor w/ 2 levels "other","self": 1 1 1 1 1 1 1 1 1 1 ...
#>  $ mode  : Factor w/ 2 levels "want","do": 1 1 1 1 1 1 1 1 1 1 ...

The adjustments for situations with more than one tree are similar in logic to those for wide shape data: supply a list of matrices and a list of items to which they apply.

A complete example

One of the hypotheses tested in De Boeck and Partchev (2012) had to do with the ordinality of the items. This can be accomplished by comparing the goodness of fit of the one-dimensional model (a common latent trait for both nodes) with the two-dimensional model (a different latent trait per node). In lme4, the two models would be:

model1 = glmer(resp ~ 0 + node:item + (1 | id),        family=binomial, data=VerbAgg_long)
model2 = glmer(resp ~ 0 + node:item + (0 + node | id), family=binomial, data=VerbAgg_long)

Estimating 48 or 96 fixed parameters may take quite a long time in lme4, so we might prefer to treat the items as random:

model1 = glmer(resp ~ 0 + node + (0 + node | item) + (1 | id),        family=binomial, data=VerbAgg_long)
model2 = glmer(resp ~ 0 + node + (0 + node | item) + (0 + node | id), family=binomial, data=VerbAgg_long)

The difference between the two models is in the random effects for the persons: the one-dimensional model has (1 | id) while the two-dimensional model has (0 + node | id). Both models have a fixed effect of node – in the second model it provides the means of the two correlated latent variables.

In mirt, we define the univariate and the multivariate model more explicitly:

mm1 = "F1=1-48"

mm2 = "F1=1-24 
       F2=25-48 
       COV=F1*F2"

mirt has no fear of a large number of fixed parameters – estimation is quite fast, and the code would be something like:

mirt1 =  mirt(data = VerbAgg_wide, model = mm1, itemtype="Rasch", verbose=FALSE)
mirt2 =  mirt(data = VerbAgg_wide, model = mm2, itemtype="Rasch", verbose=FALSE)

However, we will use mixedmirt() for better comparability with what we did in lme4. With mixedmirt(), we can make the item parameters random, and we can include the effect of node. The code becomes:

mirt1 =  mixedmirt(data=VerbAgg_wide, model=mm1, fixed = ~ 0 + node, random= ~ 1 | items, 
                   itemdesign=itemdesign, SE=TRUE, verbose=FALSE)
mirt2 =  mixedmirt(data=VerbAgg_wide, model=mm2, fixed = ~ 0 + node, random= ~ 1 | items, 
                   itemdesign=itemdesign, SE=TRUE, verbose=FALSE)

The likelihoods seem to differ between lme4 and mirt but the general message is the same:

anova(model1, model2)
#> Data: VerbAgg_long
#> Models:
#> model1: resp ~ 0 + node + (0 + node | item) + (1 | id)
#> model2: resp ~ 0 + node + (0 + node | item) + (0 + node | id)
#>        npar   AIC   BIC  logLik deviance  Chisq Df Pr(>Chisq)    
#> model1    6 12861 12905 -6424.3    12849                         
#> model2    8 12468 12527 -6226.2    12452 396.35  2  < 2.2e-16 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(mirt1, mirt2)
#> 
#> Model 1: mixedmirt(data = VerbAgg_wide, model = mm1, fixed = ~0 + node, 
#>     random = ~1 | items, itemdesign = itemdesign, SE = TRUE, 
#>     verbose = FALSE)
#> Model 2: mixedmirt(data = VerbAgg_wide, model = mm2, fixed = ~0 + node, 
#>     random = ~1 | items, itemdesign = itemdesign, SE = TRUE, 
#>     verbose = FALSE)
#>        AIC     AICc    SABIC       HQ      BIC    logLik      X2  df   p
#> 1 14421.01 14421.14 14423.34 14427.01 14436.03 -7206.503     NaN NaN NaN
#> 2 14113.49 14113.76 14117.00 14122.49 14136.02 -7050.746 311.515   2   0

Specifying mapping matrices in mermaid

When we deal with tree models, publication is usually on our mind, and we could do with some nice tree diagrams. One handy way to produce them is with mermaid, a JavaScript library equipped with its own easy language for describing graphs. mermaid supports many types of diagrams; the one we need is called a ‘flowchart’ in their nomenclature. The easiest way to produce a tree diagram in mermaid is with their live editor. On the left, replace their own example with

graph TB
  X1  --0-->  no
  X1  --1-->  X2
  X2  --0-->  perhaps
  X2  --1-->  yes

As you type, you will see the tree emerge on the right. When you are done, click on ‘COPY IMAGE TO CLIPBOARD’ and paste the diagram into your Word or LibreOffice document. Further buttons export the graph in a raster format (PNG) or a vector format (SVG).

mermaid is supported in R with package DiagrammeR (Iannone 2020), which is loaded automatically by irtrees. When authoring in RStudio, the graph can be displayed in the following way:

mytree = 'graph TB
  X1  --0-->  no
  X1  --1-->  X2
  X2  --0-->  perhaps
  X2  --1-->  yes'

DiagrammeR::mermaid(mytree)

This works well if the output is an HTML file; to make it also work with PDF output, install webshot with

install.packages("webshot")
webshot::install_phantomjs()

Although it is quite easy to derive the mapping matrix from the tree, we thought it would be nice to have a function that could do this for us automatically. This is fully possible within the mermaid language, but we need to add some conventions of our own. mermaid distinguishes between a node name and a node label, and so far we have only used node names. To be able to derive the matrix, we will play with the names and the labels in the following way: internal nodes get names of the form X1, X2, …, are drawn as circles, and carry their own name as label, so they are written as X1((X1)); leaves get names of the form L1, L2, … and carry the response category as label, as in L1[no]; and each edge is labeled with its 0 or 1 coefficient.

Following these additional rules, our tree description string becomes

mytree = 'graph TB
  X1((X1))  --0-->  L1[no]
  X1((X1))  --1-->  X2((X2))
  X2((X2))  --0-->  L2[perhaps]
  X2((X2))  --1-->  L3[yes]'

and would produce the same graphical output as before, but now we can get the mapping matrix with:

mmx = graph2mx(mytree)
mmx
#>      1  2
#> [1,] 0 NA
#> [2,] 1  0
#> [3,] 1  1

Why repeat the labels? Sometimes we want to cheat. Consider the response styles tree from Jeon and De Boeck (2019) that we examined at the beginning of this vignette and reproduce now in mermaid:

tree1 = 'graph TB
  X1((X1)) --1--> L1[Neutral]
  X1((X1)) --0--> X2((X2))
  X2((X2)) --0--> X3((X3))
  X2((X2)) --1--> X4((X4))
  X3((X3)) --1--> L2[Strongly disagree]
  X3((X3)) --0--> L3[Disagree]        
  X4((X4)) --0--> L4[Agree]
  X4((X4)) --1--> L5[Strongly agree]'

DiagrammeR::mermaid(tree1)
graph2mx(tree1)
#>      1  2  3  4
#> [1,] 1 NA NA NA
#> [2,] 0  0  1 NA
#> [3,] 0  0  0 NA
#> [4,] 0  1 NA  0
#> [5,] 0  1 NA  1

We have, in X2, the target trait (positive or negative opinion), and, in X3 and X4, two extreme style nodes: one for positive extreme, and one for negative extreme responses. What if we are happy with just one extreme style variable, irrespective of positive or negative? We cannot just merge X3 and X4 like this:

because we will mess up the edges to the leaves. But, cheating a bit with the node labels, we can get the right mapping matrix:

mmx = graph2mx(tree2)
mmx
#>      1  2  3
#> [1,] 1 NA NA
#> [2,] 0  0  1
#> [3,] 0  0  0
#> [4,] 0  1  0
#> [5,] 0  1  1

References

Bates, Douglas, Martin Mächler, Ben Bolker, and Steve Walker. 2015. “Fitting Linear Mixed-Effects Models Using lme4.” Journal of Statistical Software 67 (1): 1–48. https://doi.org/10.18637/jss.v067.i01.
Chalmers, R. Philip. 2012. “mirt: A Multidimensional Item Response Theory Package for the R Environment.” Journal of Statistical Software 48 (6): 1–29. https://doi.org/10.18637/jss.v048.i06.
De Boeck, Paul, and Ivailo Partchev. 2012. “IRTrees: Tree-Based Item Response Models of the GLMM Family.” Journal of Statistical Software, Code Snippets 48 (1): 1–28. https://doi.org/10.18637/jss.v048.c01.
Iannone, Richard. 2020. DiagrammeR: Graph/Network Visualization. https://CRAN.R-project.org/package=DiagrammeR.
Jeon, Minjeong, and Paul De Boeck. 2019. “Evaluation on Types of Invariance in Studying Extreme Response Bias with an IRTree Approach.” British Journal of Mathematical and Statistical Psychology.
Jeon, Minjeong, Frank Rijmen, and Sophia Rabe-Hesketh. 2014. “Flexible Item Response Theory Modeling with FLIRT.” Applied Psychological Measurement 38 (5): 404–5. https://doi.org/10.1177/0146621614524982.
Wickham, Hadley, Mara Averick, Jennifer Bryan, Winston Chang, Lucy D’Agostino McGowan, Romain François, Garrett Grolemund, et al. 2019. “Welcome to the tidyverse.” Journal of Open Source Software 4 (43): 1686. https://doi.org/10.21105/joss.01686.