Methodological Practice as Matters of Justice, Justification, and the Pursuit of Verisimilitude


Todd D. Little*

Texas Tech University

Abstract

My wish is for ready adoption of methodological advances without fear of reviewer resistance or worry over the complexity of it all. In my essay, I discuss the inherent problems and impediments to ready adoption and I offer a number of concrete suggestions to improve the current state of practice. My essay is grounded in the idea that the intricacy of modern methods and their corresponding questions must be embraced if policy and practice are to be truly served. Inadequate findings derived from status-quo methods propagate misinformation and beget adoption of bad policy and practice, thereby undermining our obligations as stewards of social justice.

Methodological Practice as Matters of Justice, Justification, and the Pursuit of Verisimilitude

My wish is tied to one of my peeves — a “peeve” is a vexation born of repeated annoyance by well-meaning persons with misguided impressions of X – in my case, X is methodological innovation.

The problem

Let me start with some background: I’ve lost count of the number of times I have heard from a colleague, “… reviewers for this journal would never allow me to publish using that” or “…review panels are not accepting of this.” These types of comments are most common about two minutes after I give a talk or workshop on an innovative methodological procedure or technique — an attendee approaches me and prefaces the seemingly inevitable comment with, “Oh, I just loved what you had to say and I think it’s so important to be doing this, but….” (insert first or second quote above). This sentiment is also commonly found when, after asking me to consult on a project, my well-meaning client retreats into the comfort of “your way sounds all well and good, but this is how my field does it; so, sorry, we can’t do that” (a common refrain that I am paraphrasing here). I get rather disheartened when I hear things like this coming from the mouths of impressionable graduate students who, sadly, are being poorly socialized. Another way that I have heard this impediment to progress expressed is, “the only way to get published or to get funded is to use the simplest methods possible,” where simple actually means “we are making way too many unwarranted assumptions.” These sentiments are rather ubiquitous across the social, political, economic, and medical sciences and are still found among those who study human development.

Unpacking my wish is like peeling the layers of an onion, and, perhaps ironically, each layer seems to bring more questions: How do we assuage the FUDSI (Fear, Uncertainty, Doubt, and Statistical Ignorance) that surrounds the use of modern methods? How do we educate in ways that transcend all fields? How do we create a new culture of unrestrained and unconstrained methodological rigor? How can we embrace the ‘C word’ (Causality) and not feel like we have offended our academic deities? And what the heck does the title of this essay mean?

Social Justice

Let me first turn to the Justice part of my wish. To me, all research is a matter of social justice. Research findings are the building blocks of policy and practice. When the building blocks are made of faulty material, the foundation of policy and practice is at best weak and at worst on the precipice of collapse. Here, methodology carries an almost moral quality; I once reviewed a paper that did not attempt to redress the study’s longitudinal missing data. I admonished the authors in a very constructive and helpful way to please use a modern treatment to ensure the greatest possible accuracy of the findings. Their reply letter simply countered, “We refuse to impute on moral grounds.” I rejected that revision and added, “If science is religion, the only morally unconscionable action is not imputing.” [Read Enders (2010) and similar works by Graham (2012) and van Buuren (2012), and you will understand why you can, in fact, “Impute with impunity!”] My point here is simple: If we don’t adopt advanced, sophisticated, and complex methods, our results will be inaccurate and, by implication, the results will reflect a miscarriage of social justice. Social justice and the pursuit of verisimilitude demand best practice.
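To make “a modern treatment” concrete, here is a minimal sketch of model-based multiple imputation. It is illustrative only: the tiny dataset and variable names are hypothetical, and I use Python’s scikit-learn IterativeImputer merely as one convenient stand-in for the procedures detailed by Enders (2010), Graham (2012), and van Buuren (2012).

```python
# A minimal sketch of multiple imputation (illustrative only; the data,
# variable names, and number of imputations are hypothetical).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical longitudinal data with values missing at various waves.
data = pd.DataFrame({
    "wave1": [3.0, 2.5, 4.1, 3.3, np.nan],
    "wave2": [3.2, np.nan, 4.0, 3.5, 2.9],
    "wave3": [np.nan, 2.8, np.nan, 3.6, 3.0],
})

# Create several imputed datasets rather than deleting incomplete cases;
# sample_posterior=True injects the between-imputation variability that
# multiple imputation relies on.
imputed_sets = [
    pd.DataFrame(
        IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(data),
        columns=data.columns,
    )
    for seed in range(20)
]

# Analyze each imputed dataset and pool the results (here, simply the mean
# of each wave; real analyses pool estimates and standard errors via
# Rubin's rules, as described in Enders, 2010).
pooled_means = pd.concat([d.mean() for d in imputed_sets], axis=1).mean(axis=1)
print(pooled_means)
```

In practice one would use a dedicated missing data routine (for example, those described by van Buuren, 2012), but the logic is the same: impute repeatedly, analyze each completed dataset, and pool the results rather than discarding incomplete cases.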

Principled Justification

Next, research methods are not applied — they are justified. Justification implies making reasoned choices and principled adaptations of the methods that are selected and the steps that are taken. Modeling data is a reasoned process involving constant choices that are informed by having a fluent dialog with data. Choices need to be justified. In this regard, there’s rarely such a thing as bad research, just bad methodological choices. We should eschew the routinized and mindless application of the methods of yore. Instead, if we practice the unconstrained and unrestrained methodological rigor of principled modeling justification, we can go beyond the normal science of our academic progenitors and make real progress in the nature of academic epistemology. This goal requires extensive training in the underlying principles of modern methods: Teaching students “how” to think about statistics and methods rather than “what” to think. Teaching students “what” to think only atrophies creative and flexible applications of methods. Here, for example, I think default settings for any statistical procedure should be outlawed. In other words, research questions should not be crammed into a favored or favorite methodological machinery; rather, the methodological machinery should be tailored to the research question. Research questions can thereby become more sophisticated and more nuanced because the methodological machinery can be adapted, through principled and thoughtful justification, to provide minimally equivocal answers.

Research as Wesearch, not Mesearch

When methods are tailored and questions are expanded, I am referring to the proverbial dance between theory and methods. When this scientific tango is done well, we see an elegant tandem whose coordinated interplay leads to unparalleled accomplishment. The partners form a wesearch team. When theory dances alone, it looks more like a statue than a dance (and methodology doesn’t dance well alone either). When theory or methods dance alone, the model is one of “mesearch.” A mesearch-oriented theoretician would opine: “Use only the methods that everyone else is using, because I have to compete with them and they would not understand my work if I used these other methods. I can’t collaborate to make my work better because I need sole-author publications to get promoted.” A purely methods-based researcher might say, “I’m an expert in this technique, so this is the technique I recommend,” or “real data are too messy; I prefer simulations.” As my advisor oft noted, the theoretician never lets data stand in the way of good theory, and the statistician never lets theory interfere with good data. This model of “mesearch” is fundamentally flawed. I would argue that the sole-author (and somewhat hubristic) bent of many merit and promotion committees is at best irresponsible and at worst a social injustice. Equation 1 displays the traditional model of mesearch:

Both Theoretician & Statistician in one = underdeveloped and minimally useful results (1)

Here a given researcher must be both theoretician and statistician and is thereby limited in one or both roles. Typically, it is the statistician side that suffers: once a simple statistical procedure such as ANOVA is mastered, every research question gets crammed into that easily mastered tool. Equation 2 represents another model of mesearch, in which a pure theoretician teams with a pure statistician:

A Theoretician + A Statistician = disjointed and minimally useful results (2)

When a pure theoretician and a pure statistician are paired, we have two persons engaged in parallel play: they aren’t coordinated in their efforts, and nothing meaningful emerges.

As intimated above, the model needs to be one of “wesearch.” To me the solution is creating teams of investigators with overlapping acumen (like Ballantine/Venn diagrams). The team can swarm to a problem, innovate, and solve the research question at hand:

Methodologically Savvy Theoreticians + Theoretically Savvy Methodologists = maximally useful results (3)

The model of wesearch reflects the ultimate in transdisciplinarity and provides for the overlapping expertise needed for maximally useful results. Team members engage in authentic collaboration and, like a gestalt, the product they produce is greater than the sum of their parts. Transdisciplinary wesearch is a process of team engagement, thought adaptation, and mindful collaboration – it is beautiful to behold – besides, isn’t everything awesome when you work as a team? Figure 1 presents a graphical representation of transdisciplinary wesearch and what it leads to.

The impediments

What are the impediments to wesearch? The impediments are layered too. Science, as a way of knowing, is fundamentally skeptical and critical; however, it also has the paramount charge of being accurate (at least provisionally). Contemporary practice in most social science fields still lags too far behind methodological best practice. The default skepticism and pervasive risk aversion of life science research are, therefore, inherent impediments. Another impediment is the lack of training opportunities. The pipeline for training in quantitative methods has a very low capacity. The pipeline, however, can be augmented by training that is offered regularly across the world. I started Stats Camp (statscamp.org) in 2003 just for this purpose, and other institutions have an even longer history of providing training. With training comes statistical knowledge that will assuage the fear, uncertainty, and doubt of modern methods. It is incumbent upon this generation of researchers to steep themselves in modern methods and lead the charge to elevate standards of practice in their respective fields. Becoming methodologically savvy is, in this sense, an imperative in order to achieve the social justice that science should serve. To do so, we all need to become itinerant workshop groupies (Barbara Byrne, personal communication).

Causality versus causality: The pursuit of verisimilitude

Much of my wish gets wrapped around the ‘C word’: Causality. A number of my colleagues have noted how much developmentalists feel that the word “Causality” is somehow taboo or sinful. A truth is that we can never know Cause without omnipotence. Rather than fear Causality, we should embrace causality (lower case c). In other words, verisimilitude (i.e., truth-like value, or causality with a lower case c), like parsimony, works pretty darn well. In fact, I would argue that the pursuit of verisimilitude is more important than the pursuit of Causality. Causality implies a closed system with perfect measurements of all constructs and population parameter estimates that are all infallible. Verisimilitude implies a model that is open because not all constructs can be adequately represented in any one study or any one model. Verisimilitude by definition implies a degree of causality. The pursuit of Causality has the side effect of devaluing work that is “non-causal” and overvaluing work that uses designs and techniques to maximize causal inference at the expense of generalizability. In this regard, just as perfect is the enemy of good, Causality is the enemy of verisimilitude.

Our goal should be to approach Causality with as much verisimilitude as feasible such that we can “depict” human behavior with meaningful accuracy. By “depict,” I mean describe, explain, predict, improve, and control behavior, and test the assumptions that guide practice and policy (see Little, 2013). Depicting human behavior offers results that can effectively shape policy: results that are truly evidence-based and that can be used to develop effective interventions that work for as many people as possible, given the catchment population. Special populations adhere to the same principles. Causality implies an intervention that works for each person in an almost idiosyncratic way. Verisimilitude implies an intervention that is effective for as many people as possible, at a beneficial cost, for a defined population.

Verisimilitude, like parsimony, is a challenging cost-benefit calculus. Does the cost of measuring and specifying additional model parameters reap the benefit of actionable new knowledge? The pursuit of verisimilitude means striving to find the sweet spot: just enough information to provide maximal guidance for policy and practice. When optimal verisimilitude is achieved by well-justified and principled adaptation of the methodological machinery, social justice is served.

Wishful thinking

In the end, I guess a final element of my one wish is simply to embrace the complexity. Complex systems require complex theory and complex models to understand them. Embracing the complexity allows researchers to pierce the gauntlet of FUDSI that reviewers and panel members too often erect. Doing good science isn’t supposed to be easy, and the apparent preference for easy science is tantamount to pseudo-science. In my view, good science is innovation science. The continued use of ordered categories (introduced by Likert in 1932!) as the preferred tool of measurement, for example, should become a banished practice. Instead, innovations in measurement, such as electronic delivery with visual analog scales or real-time intensive recording of data, should be employed more readily. Similarly, innovations in design, such as planned missing data protocols that save costs, increase database coverage, and reduce participant burden (among other benefits), ought to become standard practice (Little, Jorgensen, Lang, & Moore, 2014). Moreover, moving away from simple, flat representations of data toward linked metadata and integrated relational databases opens up an amazing world of modeling benefits, easily allowing cross-nested, multi-way relationships across various hierarchies of influence (e.g., n-Level SEM modeling; Mehta, 2015).
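Returning to the planned missing data idea, here is a minimal sketch of a classic three-form planned missing protocol. It is my own hypothetical illustration (invented item blocks and sample size), not the specific designs described by Little, Jorgensen, Lang, and Moore (2014).

```python
# A hypothetical three-form planned missing data design: every participant
# answers a common block X plus two of the three rotating blocks A, B, C.
import random

common_block = ["X1", "X2", "X3"]                # administered to everyone
rotating_blocks = {
    "A": ["A1", "A2", "A3"],
    "B": ["B1", "B2", "B3"],
    "C": ["C1", "C2", "C3"],
}

# Each form omits exactly one rotating block, so each participant sees only
# about three quarters of the items, yet every pair of blocks is jointly
# observed in at least one form.
forms = {
    "Form 1": common_block + rotating_blocks["A"] + rotating_blocks["B"],  # omits C
    "Form 2": common_block + rotating_blocks["A"] + rotating_blocks["C"],  # omits B
    "Form 3": common_block + rotating_blocks["B"] + rotating_blocks["C"],  # omits A
}

# Random assignment makes the omitted items missing completely at random by
# design, so they can be handled with modern treatments such as FIML or
# multiple imputation.
random.seed(2014)
assignments = {f"participant_{i:02d}": random.choice(list(forms)) for i in range(12)}
for pid, form in sorted(assignments.items()):
    print(pid, form, forms[form])
```

The payoff is exactly the one named above: lower cost and less participant burden for the same substantive coverage, with the by-design missingness handled routinely by the modern methods already discussed.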

Innovations in modeling complex data structures need to be planned for and then craftfully executed (Little, Wang, & Gorrall, in press). Here, the frequently mixed and often messy multilevel nature of multivariate data (typically made sparse by the many mechanisms of missing data), coupled with the moderating and mediating mechanisms in multiply caused outcomes, makes for a massively challenging modeling process. It also makes today’s research so much fun! And, lest I forget something: et cetera.

*Author Note

This text is the prepublication version of the one wish essay Dr. Little contributed to the special issue of Research in Human Development edited by Richard A. Settersten Jr. & Megan McClelland. The special issue can be downloaded at http://www.tandfonline.com/toc/hrhd20/12/3-4.

Todd D. Little is the founder and director of his annual Stats Camp (statscamp.org) and the director of the Institute for Measurement, Methodology, Analysis, and Policy (IMMAP) at Texas Tech University. He is a professor in the Research, Evaluation, Measurement, and Statistics Program of the Educational Psychology Department at TTU.

Special thanks to Gregory R. Hancock, Katherine E. Masyn, Annegret Hannawa, Patricia H. Hawley, my team of personnel in IMMAP, and the editors for their helpful comments and suggestions. I would also like to express my gratitude to my many colleagues, such as those in the Society of Multivariate Experimental Psychology, for regularly stimulating me with methodological innovation and rigor. I’d like to also acknowledge the many methodologically oriented conferences and subprograms that provide a forum for viewing and presenting methodological innovations.


References:

Enders, C. K. (2010). Applied missing data analysis. New York: Guilford.

Graham, J. (2012). Missing data: Analysis and design. New York: Springer.

Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 5-55.

Little, T. D. (2013). Longitudinal structural equation modeling. New York: Guilford.

Little, T. D., Jorgensen, T. D., Lang, K. M., & Moore, E. W. G. (2014). On the joys of missing data. Journal of Pediatric Psychology, 39, 151-162.

Little, T. D., Wang, E. W., & Gorrall, B. K. (in press). The past, present, and future of developmental methodology. In N. A. Card (Ed.), Developmental methodology. Monographs of the Society for Research in Child Development.

Mehta, P. (2015). xxM Reference Guide. Houston, TX: Author.

van Buuren, S. (2012). Flexible imputation of missing data. Boca Raton, FL: CRC Press.