1 |
The term 'component variable' will be used as a substitute for the term
'main effect' in this paper. 'Main effect', traditional in ANOVA, is a misleading
term. It makes an implicit claim of superiority for the measure it represents,
discouraging the researcher from examining interaction variables which are,
by implication, 'secondary effects'. In fact, the interaction is often as
important as the 'main effects' which are its components. In the interpersonal
perception model, they can be more important. The term 'component variable',
although not necessarily the best or only term available to describe main
effects, avoids this bias. Component variables are components of data analyses.
They are the components of interaction variables. The term is humble, suggesting
a role in a larger process, but not primacy. In this it encourages, rather
than discourages, the search for additional explanation in interaction variables. |
2 |
The orthogonality possible between interaction and components can be demonstrated
by careful analysis of Figure 1. Those desiring further proof are referred
to Kerlinger and Pedhazur (1973). Total lack of correlation between cause
and interaction should only be expected in what we will soon call the reversing
interaction. Any correlation between cause and interaction should probably
be spurious. |
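The zero correlation between a reversing interaction and its components can also be seen numerically. The sketch below (variable names hypothetical, not from the paper) effect-codes two balanced factors and shows that their product, the crossover or reversing interaction term, is uncorrelated with either component:

```python
import numpy as np

# Balanced 2x2 design, effect-coded (-1/+1), 100 observations.
# With effect coding and equal cell sizes, the product term A*B
# (a reversing/crossover interaction) is orthogonal to each component.
A = np.array([-1, -1, 1, 1] * 25)   # component factor A
B = np.array([-1, 1, -1, 1] * 25)   # component factor B, fully crossed
AB = A * B                           # reversing interaction term

print(round(float(np.corrcoef(A, AB)[0, 1]), 10))  # 0.0
print(round(float(np.corrcoef(B, AB)[0, 1]), 10))  # 0.0
```

With unbalanced cells or dummy (0/1) coding this orthogonality breaks down, which is one reason a nonzero component-interaction correlation in such data is suspect.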
3 |
Codes are, after all, essentially meaningless. |
4 |
Appeal to power analysis can reduce the risk of such spurious findings,
but such treatment only solves part of the problem. |
5 |
If one four-component constraining interaction proves important but remains
uninterpreted in the face of a test of reversing interactions only, the variance
attributable to that interaction will be spread equally among all of the
variables tested. |
6 |
Testing both the constraining and reversing interaction variables leaves
the test of significance overdetermined. Any combination of three of the
four variables will yield an equal level of variance accounted for. |
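The overdetermination can be illustrated with a small least-squares sketch. The construction below is hypothetical (it simply builds four predictors of which any three span the same space, standing in for the jointly tested constraining and reversing codings); every three-variable subset then accounts for exactly the same variance:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Three free predictors plus one exact linear combination of them,
# so the four predictors together are rank 3 (overdetermined).
x1, x2, x3 = rng.normal(size=(3, 200))
x4 = x1 - x2 + x3                     # exact linear dependence
y = 2 * x1 + x2 + rng.normal(size=200)
X = np.column_stack([x1, x2, x3, x4])

def r_squared(cols):
    # R^2 from an ordinary least-squares fit with intercept.
    Z = np.column_stack([np.ones(len(y)), X[:, cols]])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

# Each three-variable subset spans the same column space,
# so every subset yields the same variance accounted for.
r2 = [r_squared(c) for c in combinations(range(4), 3)]
print(all(abs(v - r2[0]) < 1e-8 for v in r2))  # True
```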
7 |
The difficulty involved in adequately interpreting significances obtained
in the testing of all possible interactions has led to a substantial literature
devoted to methods of interpreting those interactions. See Bogartz (1976),
Games (1973), Golding (1975), Levin and Marascuilo (1970, 1976), Mayo (1961),
Phillips (1977), and, for non-parametric approaches, Weber (1972). |
8 |
An inclusion level is a stage within a causal model for which the assumption
is made that all variables occur at the same point in time. No variable
within a single inclusion level can cause any other variable within that same
inclusion level, with the exception of reciprocal causation, a case of little
or no consequence in a time-series causal model, but of great import when
there is no periodic measurement of the same variables for a time series. |
9 |
An exogenous variable is any variable which is not directly caused within
a causal model. In an experiment, these would be the independent variables.
In a causal model, they most often occur at the first inclusion level. |
10 |
Any variable which is directly caused within a causal model is endogenous.
In an experiment, this is the dependent variable. The confusing part is
that the endogenous variable does not necessarily occur only at the end
of the model. The endogenous variable can cause as well as be caused. Thus,
the last variable in a model is sometimes referred to as a terminal endogenous
variable. |
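This dual role can be sketched with a simple simulated chain (the variable names and coefficients below are hypothetical, not drawn from the paper). In a model X -> M -> Y, M is endogenous because X causes it, yet M in turn causes Y, the terminal endogenous variable:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

X = rng.normal(size=n)             # exogenous: not caused within the model
M = 0.6 * X + rng.normal(size=n)   # endogenous, yet also a cause
Y = 0.5 * M + rng.normal(size=n)   # terminal endogenous variable

def slope(x, y):
    # Bivariate regression slope, one path coefficient per equation.
    return float(np.cov(x, y)[0, 1] / np.var(x, ddof=1))

print(slope(X, M))   # estimated path X -> M, near 0.6
print(slope(M, Y))   # estimated path M -> Y, near 0.5
```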
11 |
The exogenous nature of the interaction variable may be controversial,
first from the standpoint of autocorrelation, and second, in the possibility
that the component of an interaction may be changed through a change in
the interaction. The latter case would imply direct effects on an endogenous
interaction variable. Such treatment would be highly controversial within
linear regression. In any case, the estimation of error terms (variance
accounted for) might easily prove impossible. Thus, until a good rationale
for exogenous treatment can be found, interactions must remain endogenous. |