Physicians rarely rely on a single diagnostic test when confronting a complex disease. They combine imaging, laboratory work, and genetic screening because each technique reveals a different dimension of the underlying condition, and together they produce a diagnosis that no single instrument could achieve alone. Yet researchers in management and social sciences have long operated by a different logic, committing to a single statistical tradition and defending it against all others as though methodological loyalty were a virtue. Nowhere is this tendency more visible, or more consequential, than in structural equation modeling (SEM).
A new article published in the Journal of Global Marketing challenges this entrenched practice. Hair, Sharma, Chin, Sarstedt, and Ringle (2026) propose a multimethod SEM framework that integrates the two dominant estimation traditions — factor-based and composite-based SEM — within a single analytical workflow. Rather than asking which method is superior, the framework asks a more productive question: what can researchers learn by applying both simultaneously? The answer, the authors argue, is considerably more than either tradition can offer on its own.
Two Traditions, One Model
Structural equation modeling has evolved along two largely separate paths. Factor-based SEM, most commonly implemented as covariance-based SEM (CB-SEM), assumes that observable indicators are caused by an underlying latent trait. Shared variance among those indicators is used to estimate model parameters, and model evaluation centers on global fit indices — metrics like χ², CFI, RMSEA, and SRMR — that assess how well the theoretical structure reproduces the observed covariance patterns. This tradition has served as the backbone of theory confirmation in management and social science research for decades.
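To make the factor-based workflow concrete, the sketch below shows how such a model might be specified and its global fit indices retrieved in Python, assuming the open-source semopy package is available; the construct and indicator names are hypothetical placeholders, and this is a minimal illustration rather than the authors' own analysis.

```python
# Minimal CB-SEM sketch using the semopy package (assumed available).
# Construct and indicator names are hypothetical placeholders.
import pandas as pd
import semopy

# Lavaan-style syntax: latent factors measured by observed indicators,
# plus one structural path between them.
model_desc = """
Satisfaction =~ sat1 + sat2 + sat3
Loyalty      =~ loy1 + loy2 + loy3
Loyalty ~ Satisfaction
"""

data = pd.read_csv("survey.csv")   # hypothetical indicator data

model = semopy.Model(model_desc)
model.fit(data)                    # maximum likelihood estimation by default

print(model.inspect())             # loadings, path estimates, p-values
print(semopy.calc_stats(model))    # global fit statistics (chi2, CFI, RMSEA, ...)
```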
Composite-based SEM, most prominently represented by partial least squares SEM (PLS-SEM), takes a different approach. Rather than extracting variance from a latent common factor, it linearly combines indicators into composite scores that serve as proxies for the theoretical constructs. Model evaluation in this tradition focuses on the proportion of variance explained (R²) and, crucially, on out-of-sample predictive power assessed through routines like the cross-validated predictive ability test (CVPAT). Where CB-SEM excels at internal validity and theory confirmation, PLS-SEM stands out for assessing whether a model can generalize its predictions to new, unseen data.
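To make the composite logic tangible, the sketch below forms composites as equally weighted averages of standardized indicators and computes the explained variance of an endogenous composite. Real PLS-SEM estimates the indicator weights iteratively, so this is a deliberately simplified illustration, and all variable names are hypothetical.

```python
# Simplified composite-based sketch (equal indicator weights for clarity;
# PLS-SEM would estimate these weights iteratively). Names are hypothetical.
import numpy as np
import pandas as pd

def make_composite(df: pd.DataFrame, indicators: list[str]) -> pd.Series:
    """Standardize each indicator and average them into one composite score."""
    z = (df[indicators] - df[indicators].mean()) / df[indicators].std(ddof=0)
    return z.mean(axis=1)

data = pd.read_csv("survey.csv")                      # hypothetical indicator data
satisfaction = make_composite(data, ["sat1", "sat2", "sat3"])
loyalty      = make_composite(data, ["loy1", "loy2", "loy3"])

# In-sample explained variance (R^2) of the endogenous composite.
beta, intercept = np.polyfit(satisfaction, loyalty, deg=1)
predicted = intercept + beta * satisfaction
r_squared = 1 - np.sum((loyalty - predicted) ** 2) / np.sum((loyalty - loyalty.mean()) ** 2)
print(f"path estimate = {beta:.3f}, R^2 = {r_squared:.3f}")
```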
These differences are not merely technical. They reflect distinct ontological and epistemological commitments about how constructs exist in the world, how they should be measured, and how structural relationships should be interpreted. The same construct can be treated as a common factor in one study and as a composite in another, leading to different path estimates, different conclusions, and different managerial implications — even when applied to the same data. Hair and colleagues argue that this inconsistency represents a fundamental problem that has been insufficiently recognized in the research community.
Explanation and Prediction as Complementary Aims
One of the article’s most important contributions is its articulation of the difference between explanation and prediction as research objectives, and its insistence that most meaningful research requires both. Explanation — the goal of understanding why relationships exist — demands internal validity and is the domain of factor-based methods. Prediction — the goal of estimating outcomes in practice — demands external validity and is the strength of composite-based methods. The authors draw a medical analogy that captures this distinction elegantly: a diagnosis that explains the symptoms without guiding treatment is incomplete for medicine, just as a model that explains without predicting is incomplete for applied research.
Building on a two-dimensional conceptualization developed by Sharma et al. (2024), the article maps structural paths onto a four-case typology that cross-tabulates in-sample explanatory significance with out-of-sample predictive validation. A path that is statistically significant in CB-SEM but not predictively relevant in PLS-SEM may be theoretically sound yet practically overfitted. A path that predicts well but lacks theoretical significance may point toward an unexplained mechanism — a conceptual gap waiting to be filled. Only when a path demonstrates both explanatory and predictive support can it be described as robustly important for both theory and practice. This framework pushes researchers to stop treating statistical significance as the finish line and to begin treating prediction as an equally essential criterion of scientific progress.
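A compact way to see the typology is as a decision rule over two pieces of path-level evidence: in-sample significance and out-of-sample predictive support. The labels in the sketch below paraphrase the four cases for illustration and are not the article's exact terminology.

```python
# Illustrative classification of a structural path into the four cases that
# cross-tabulate in-sample significance with out-of-sample predictive support.
# Labels paraphrase the idea; they are not the article's exact terminology.

def classify_path(significant_in_sample: bool, predictive_out_of_sample: bool) -> str:
    if significant_in_sample and predictive_out_of_sample:
        return "robust: supported by both explanation and prediction"
    if significant_in_sample and not predictive_out_of_sample:
        return "explanatory only: theoretically plausible, possibly overfitted"
    if not significant_in_sample and predictive_out_of_sample:
        return "predictive only: points to an unexplained mechanism"
    return "unsupported: neither explanatory nor predictive evidence"

# Example: a path that is significant in CB-SEM and predictively relevant in PLS-SEM.
print(classify_path(True, True))
```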
The Multimethod Workflow
The multimethod SEM framework is operationalized through a structured nine-step workflow that researchers can apply to any structural model. The initial steps follow established best practices: specifying the model based on theory, conducting requisite data checks, establishing measurement model quality through validity and reliability assessments, and validating the overall model structure using both CB-SEM fit indices and PLS-SEM explained variance measures. These preparatory steps ensure that the structural analysis rests on a sound foundation before the path-level evaluation begins.
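For the measurement-quality step, the sketch below computes three routinely reported indicators, Cronbach's alpha, composite reliability, and average variance extracted, using their standard textbook formulas; the indicator names and loading values are hypothetical and would come from a researcher's own measurement model.

```python
# Textbook formulas for common measurement-quality checks; indicator names
# and loadings here are hypothetical placeholders.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings."""
    return (loadings ** 2).mean()

data = pd.read_csv("survey.csv")                 # hypothetical indicator data
loadings = np.array([0.82, 0.78, 0.85])          # standardized loadings (hypothetical)

print("alpha:", cronbach_alpha(data[["sat1", "sat2", "sat3"]]))
print("CR:   ", composite_reliability(loadings))
print("AVE:  ", average_variance_extracted(loadings))
```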
The distinctive contribution of the framework emerges in the multimethod workflow steps. Once overall model adequacy is confirmed, the researcher estimates the structural paths using CB-SEM, then re-estimates those same paths using PLS-SEM, and compares the results at the path level. This dual estimation makes it possible to ask, for every individual structural relationship in the model, whether the finding is robust to the choice of estimator or whether it depends on assumptions specific to one tradition. Following the in-sample comparison, PLS-SEM is then used to conduct out-of-sample prediction tests that evaluate whether each path contributes meaningfully to the model’s predictive capacity, over and above a nested model that omits that path. The final step integrates the in-sample explanatory evidence with the out-of-sample predictive evidence within the four-case framework, and reports findings transparently with respect to both convergence and divergence.
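The comparison logic can be illustrated at the path level: estimates from the two traditions are placed side by side, and the predictive contribution of a focal path is approximated by comparing holdout prediction errors of the full model against a nested model that omits that path. The sketch below mimics the spirit of a CVPAT-style comparison rather than reproducing the exact procedure, and every name and number in it is hypothetical.

```python
# Illustrative path-level comparison across estimators, plus a holdout check of
# whether a focal path improves prediction over a nested model that omits it.
# This mimics the spirit of CVPAT-style testing, not its exact procedure.
import numpy as np
import pandas as pd
from scipy import stats

# Side-by-side path estimates (hypothetical values from prior CB-SEM / PLS-SEM runs).
comparison = pd.DataFrame({
    "path":    ["Satisfaction -> Loyalty", "Trust -> Loyalty"],
    "cb_sem":  [0.42, 0.08],
    "pls_sem": [0.47, 0.21],
})
print(comparison)

# Holdout comparison: squared errors of a full vs. a nested linear approximation.
rng = np.random.default_rng(0)
n = 400
trust = rng.normal(size=n)
satisfaction = 0.5 * trust + rng.normal(size=n)
loyalty = 0.4 * satisfaction + 0.2 * trust + rng.normal(size=n)

train, test = slice(0, 300), slice(300, n)
X_full = np.column_stack([np.ones(n), satisfaction, trust])
X_nested = np.column_stack([np.ones(n), satisfaction])      # omits Trust -> Loyalty

b_full, *_ = np.linalg.lstsq(X_full[train], loyalty[train], rcond=None)
b_nested, *_ = np.linalg.lstsq(X_nested[train], loyalty[train], rcond=None)

err_full = (loyalty[test] - X_full[test] @ b_full) ** 2
err_nested = (loyalty[test] - X_nested[test] @ b_nested) ** 2

# Paired test on per-observation holdout losses: does omitting the path hurt prediction?
t_stat, p_value = stats.ttest_rel(err_nested, err_full)
print(f"mean loss full={err_full.mean():.3f}, nested={err_nested.mean():.3f}, p={p_value:.3f}")
```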
Convergence and Divergence as Scientific Information
Perhaps the most intellectually significant aspect of this framework is how it treats disagreement between estimators. Traditional methodological debates frame divergence between CB-SEM and PLS-SEM results as a problem to be resolved by declaring one method correct. The multimethod framework reframes divergence as theoretically informative rather than merely inconvenient. When both estimators agree that a structural path is significant and prediction confirms its practical relevance, researchers can report the finding with high confidence as a robust effect central to both theory and practice. This convergence is the strongest form of evidence available within the confirmatory explanatory-predictive mode.
When estimators disagree, however, divergence signals that the result is sensitive to modeling assumptions — meaning that the underlying theoretical mechanism, the construct conceptualization, or the structural path itself may require closer scrutiny. Rather than hiding this uncertainty by reporting only one method’s results, the multimethod framework requires that divergence be reported transparently and treated as an invitation to explore boundary conditions, hidden mediators, or measurement model misalignment. In this way, the framework transforms methodological sensitivity from a source of embarrassment into a source of theoretical insight.
Implications for Health Management and Applied Research
The implications of this framework extend well beyond the marketing context in which the article is situated. In health management research — an applied field that routinely uses SEM to model complex relationships among patient behaviors, organizational processes, and health outcomes — the distinction between explanation and prediction carries direct practical significance. A structural model that demonstrates strong in-sample fit but poor out-of-sample predictive performance offers limited guidance for designing interventions that need to work in new populations and settings. Conversely, a model that predicts well but lacks theoretical grounding provides little basis for understanding what mechanisms to target. The multimethod framework provides researchers in health management with a principled way to pursue both goals simultaneously, and to report findings in a manner that is transparent about which conclusions rest on robust evidence and which remain provisional.
The increasing availability of SEM software that supports multiple estimation paradigms — including SmartPLS, R-based packages, and specialized tools for IGSCA and Henseler-Ogasawara approaches — makes the practical adoption of this framework more feasible than ever. The authors are explicit that the framework is not a mandate to abandon single-estimator studies in all cases, but rather an argument that exclusive reliance on a single estimator can obscure important aspects of inference, particularly when researchers face uncertainty about the data-generating process and pursue multiple modeling objectives. The multimethod SEM approach is best understood as an evolving research program that enriches the toolkit available to rigorous researchers, regardless of discipline.
The analogy to medicine with which the article opens proves to be more than rhetorical flourish. Like the blind men in Saxe's parable, each grasping part of the elephant without comprehending the whole, a research community committed to a single analytical lens captures only part of empirical reality. The multimethod SEM framework proposed by Hair and colleagues offers a path toward a more complete and credible picture, one that honors both the theoretical depth that explanation demands and the practical relevance that prediction requires.
Reference
Hair, J. F., Sharma, P. N., Chin, W. W., Sarstedt, M., & Ringle, C. M. (2026). A multimethod SEM framework for analyzing models with latent variables. Journal of Global Marketing. Advance online publication. https://doi.org/10.1080/08911762.2026.2638909
