Decentralized Construct Taxonomies

Gjalt-Jorn Ygram Peters

2023-03-05

Preprint

There is now a preprint discussing the background of this package in detail. That preprint, “Knowing What We’re Talking About: Facilitating Decentralized, Unequivocal Publication of and Reference to Psychological Construct Definitions and Instructions”, is available at https://psyarxiv.com/8tpcv/. The original vignette contents are retained below.

Background

In psychology, many different guidelines exist for developing measurement instruments and manipulations of constructs. When conducting empirical or secondary research, it is therefore important to have a clear idea of which guidelines are followed, and to communicate this clearly. However, these guidelines are fragmented, and no universal consensus exists regarding how any construct must be measured or manipulated. Developing one ‘final’, authoritative list or taxonomy is therefore neither feasible nor desirable. At the same time, it is important that researchers clearly communicate their ‘study-specific taxonomy’. The psyverse package makes it possible to reconcile these two goals.

For more background information on the concept underlying Decentralized Construct Taxonomies, see the reasoning set out in Peters & Crutzen (2017a, 2017b) and Crutzen & Peters (2018). In brief, those papers postulate that psychological constructs do not correspond to natural kinds: they are pragmatic, theory-laden abstractions, and their definitions and operationalisations are therefore conventions rather than empirically decidable facts.

From these premises it follows that it is not feasible (or desirable) to develop one ‘final’ taxonomy of psychological constructs or of manipulations of psychological constructs. Instead, multiple taxonomies exist, each with different uses. For example, in behavior change research, one taxonomy has been developed that is well-suited for describing intervention content but not useful when developing interventions (Abraham & Michie, 2008), and another that is well-suited for developing interventions but, in its present form, less useful for coding interventions (Kok et al., 2016).

Similarly, different theories that explain behavior postulate different psychological variables as important predictors of behavior. These variables generally overlap: usually between theories, but also within the same theory. For example, the three main determinants in the Reasoned Action Approach are postulated to correlate (Fishbein & Ajzen, 2010), and such correlations are empirically almost impossible to distinguish from structural composition (Peters & Crutzen, 2017a). Where different theories contain the same construct, they often define it slightly differently, with different implications for operationalisation.

However, given that psychological constructs do not correspond to natural kinds (Peters & Crutzen, 2017a; also see e.g. Fried, 2017b), there is no way to settle on ‘the best’ definitions or operationalisations. Expert consensus is not a sensible instrument, because the opinions and preferences of experienced researchers reflect, to an unknown degree, their training, and therefore convictions from decades ago (which does not necessarily make them wrong, but also does nothing to make it likely that they are right). Neither can this problem be solved statistically. For example, correlation strengths also reflect artefacts such as correlated measurement error, which renders suspect the assumption that correlations can be used to infer ‘the best’ definitions or operationalisations (e.g. nonsense items obtain internal consistencies as high as those of items that appear sensible at face value; Maul, 2017).

Perhaps, then, it is no surprise that in practice, operationalisations often vary between studies, with studies using a wide array of items, not to mention the variation in languages, target behaviors, and cultures. For example, a recent review of self-identity operationalisations identified hundreds of different items purportedly measuring the same construct (Snippe, Peters & Kok, 2019). Different measurement instruments purporting to measure the same construct turn out to measure different things when this claim is tested (see Fried, 2017a, for depression; Weidman, Steckler & Tracy, 2017, for emotions; Warnell & Redcay, 2019, for theory of mind; and Williams & Rhodes, 2016a, for self-efficacy). And when put to the test, many commonly used measurement instruments turn out to be invalid in themselves (Hussey & Hughes, 2019). As a consequence of this pluriformity and lack of validity, research syntheses suffer, and in communication about research results, much is lost in translation.

This variation observed in practice is, however, hard to reconcile with the need for consistency in communication. How can consistent communication be realised, given that there is no scientifically justifiable way to identify ‘the best’ definitions or operationalisations?

This is the problem Decentralized Construct Taxonomies (DCTs) aim to solve.

Decentralized Construct Taxonomies

DCTs are plain text files that follow a number of conventions that enable unequivocal communication about, and reference to, specific constructs and their definitions, as well as their assumed structural composition (which, for most practical purposes, is still assumed to be equivalent to causal influence). Specifically, for any given construct, DCTs allow explicit specification of the following:

- a definition of the construct;
- an instruction for developing measurement instruments;
- an instruction for coding (recognizing) measurement instruments;
- an instruction for developing ways to elicit construct content, for example in qualitative research;
- an instruction for coding construct content (i.e. qualitative data).

Each of these five main specifications can be based on a source, which can itself be specified in a human- and machine-readable manner. The definition should describe exactly which aspects of human psychology are, and which are not, part of the target construct. The four instructions should provide researchers with what they need to develop or recognize (code) measurement instruments, and to conduct qualitative research to explore the construct, for example in a given context or population (note that this latter approach, while commonplace in health psychology, where theory is routinely applied to a variety of populations and target behaviors, may not apply or be useful in other areas).

Of course, these five specifications should form a coherent, consistent set within each DCT specification. DCTs enable researchers to easily communicate exactly how they define their target constructs, as well as exactly what implications this carries for operationalisation. Other researchers can adopt the exact same definitions (and therefore, normally, the same instructions for operationalisation) by simply copying the DCT specifications and publishing them with their own paper. Alternatively, they can adjust them, or use different ones.

This is made possible by DCT identifiers. To allow decentralized specification of DCTs while retaining the ability to refer unequivocally to a specific DCT, each DCT specification has an identifier. This identifier is generated by the psyverse R package and is virtually unique without requiring central oversight. This mechanism makes it possible to refer unequivocally to any given DCT, thereby simultaneously supporting clarity of communication and variation in definitions and operationalisations.
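
To illustrate the general idea (this is a simplified sketch only: the function name generate_dct_id is hypothetical, and psyverse’s actual algorithm may differ), an identifier can combine a human-readable label with the current time and a few random characters, making collisions between independently generated identifiers extremely unlikely:

# Sketch of decentralized identifier generation. Combining the construct
# label, the current time, and random characters makes it very unlikely
# that two researchers ever generate the same identifier independently.
generate_dct_id <- function(label) {
  timePart <- format(Sys.time(), "%Y%m%d%H%M%S")
  randomPart <- paste(sample(c(letters, 0:9), 8, replace = TRUE),
                      collapse = "")
  paste0(label, "_", timePart, "_", randomPart)
}

generate_dct_id("attitude")
# e.g. "attitude_20230305103000_x3k9q2ab"

Because identifiers are generated locally, no central registry is needed: uniqueness follows from the improbability of collisions rather than from coordination.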

In addition to facilitating clear communication without imposing the need for central curation, DCTs facilitate building databases of construct definitions and accompanying instructions. A lab or course can collect a set of DCT specifications and use the psyverse package to compile them into sets of instructions for developing operationalisations, or for coding qualitative data, consistent with the other work done at that lab or institution (a minimal sketch of this idea follows below).
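
As a minimal sketch of what such compilation could look like, using only the yaml package (the directory name and file extensions here are assumptions; psyverse itself provides functions for loading and compiling DCT specifications), the measurement development instructions from a set of DCT files could be collected as follows:

# Minimal sketch: collect the 'measure_dev' instructions from all DCT
# specifications in a directory (the directory name is an assumption).
# Each file is assumed to be a YAML file of the form shown in the next
# section.
library(yaml)

dctFiles <- list.files("dct-specs",
                       pattern = "\\.ya?ml$",
                       full.names = TRUE)

overview <- do.call(rbind, lapply(dctFiles, function(path) {
  spec <- yaml::read_yaml(path)$dct
  data.frame(label = spec$label,
             measure_dev = spec$measure_dev$instruction,
             stringsAsFactors = FALSE)
}))

overview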

Underlying technology

A DCT file is a YAML file that, in its simplest version, has the following form:

---
dct:
  version: 0.1.0    # version of the DCT specification format
  id:               # quasi-unique identifier (generated by psyverse)
  label: ""         # human-readable name of the construct
  date: ""          # date of this DCT specification
  dct_version: 1    # version of this construct's specification

################################################################################

  # What is, and what is not, part of the construct
  definition:
    definition: ""

################################################################################

  # Instruction for developing measurement instruments
  measure_dev:
    instruction: ""

################################################################################

  # Instruction for coding (recognizing) measurement instruments
  measure_code:
    instruction: ""

################################################################################

  # Instruction for eliciting construct content, e.g. in qualitative research
  aspect_dev:
    instruction: ""

################################################################################

  # Instruction for coding construct content (qualitative data)
  aspect_code:
    instruction: ""

################################################################################

  # Relationship to another DCT (by identifier), e.g. structural composition
  rel:
    id: ""
    type: ""

################################################################################
---
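
For illustration, a filled-in specification could look as follows. Note that the identifier, definition, and instructions below are hypothetical and deliberately abbreviated; they are not an endorsed specification of attitude. The rel section, which links a DCT to related DCTs, is omitted.

---
dct:
  version: 0.1.0
  id: attitude_79n2c1zz
  label: "Attitude"
  date: "2023-03-05"
  dct_version: 1

  definition:
    definition: "A person's overall evaluation of performing a target behavior."

  measure_dev:
    instruction: "Use semantic differential items with the target behavior as the stem, anchored by evaluative adjectives such as 'bad' versus 'good'."

  measure_code:
    instruction: "Code items that ask participants to evaluate performing the target behavior."

  aspect_dev:
    instruction: "Ask participants what they consider advantages and disadvantages of performing the target behavior."

  aspect_code:
    instruction: "Code utterances that express an evaluation of performing the target behavior."
---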

A more extensive example, still in progress, is available at https://a-bc.gitlab.io/dct-raa/.

References

Abraham, C., & Michie, S. (2008). A taxonomy of behavior change techniques used in interventions. Health Psychology, 27(3), 379–387. https://doi.org/10.1037/0278-6133.27.3.379

Crutzen, R., & Peters, G.-J. Y. (2018). Evolutionary learning processes as the foundation for behaviour change. Health Psychology Review, 12(1), 43–57. https://doi.org/10.1080/17437199.2017.1362569

Fishbein, M., & Ajzen, I. (2010). Predicting and changing behavior: The reasoned action approach. New York, NY: Psychology Press.

Fried, E. I. (2017a). The 52 symptoms of major depression: Lack of content overlap among seven common depression scales. Journal of Affective Disorders, 208, 191–197.

Fried, E. I. (2017b). What are psychological constructs? On the nature and statistical modeling of psychological kinds. Health Psychology Review, 11(2), 130–134.

Hussey, I., & Hughes, S. (2019). Hidden invalidity among fifteen commonly used measures in social and personality psychology. PsyArXiv. https://doi.org/10.31234/osf.io/7rbfp

Kok, G., Gottlieb, N. H., Peters, G.-J. Y., Mullen, P. D., Parcel, G. S., Ruiter, R. A. C., … Bartholomew, L. K. (2016). A taxonomy of behavior change methods: an Intervention Mapping approach. Health Psychology Review, 10(3), 297–312. https://doi.org/10.1080/17437199.2015.1077155

Maul, A. (2017). Rethinking traditional methods of survey validation. Measurement: Interdisciplinary Research and Perspectives, 15(2), 51–69. https://doi.org/10.1080/15366367.2017.1348108

Peters, G.-J. Y., & Crutzen, R. (2017a). Pragmatic nihilism: how a Theory of Nothing can help health psychology progress. Health Psychology Review, 11(2). https://doi.org/10.1080/17437199.2017.1284015

Peters, G.-J. Y., & Crutzen, R. (2017b). Confidence in constant progress: or how pragmatic nihilism encourages optimism through modesty. Health Psychology Review, 11(2), 140–144. https://doi.org/10.1080/17437199.2017.1316674

Snippe, M. H. M., Peters, G.-J. Y., & Kok, G. (2019). The operationalization of self-identity in reasoned action models: A systematic review of self-identity operationalizations in three decades of research. PsyArXiv. https://doi.org/10.31234/osf.io/ygpmw

Warnell, K. R., & Redcay, E. (2019). Minimal coherence among varied theory of mind measures in childhood and adulthood. Cognition, 191, 103997.

Weidman, A. C., Steckler, C. M., & Tracy, J. L. (2017). The jingle and jangle of emotion assessment: Imprecise measurement, casual scale usage, and conceptual fuzziness in emotion research. Emotion, 17(2), 267.

Williams, D. M., & Rhodes, R. E. (2016a). The confounded self-efficacy construct: review, conceptual analysis, and recommendations for future research. Health Psychology Review, 10(2), 113–128. https://doi.org/10.1080/17437199.2014.941998

Williams, D. M., & Rhodes, R. E. (2016b). Reviving the critical distinction between perceived capability and motivation: A response to commentaries. Health Psychology Review. https://doi.org/10.1080/17437199.2016.1171729