There are a lot of terms that get thrown around in the academic lexicon; sometimes they align with those you’ll find in a dictionary, sometimes they don’t. So I thought I’d outline a good handful for you here that will be helpful as you wade through some sweet, delicious mass comm theories (Fig. 1). This article is based on Reynolds’s book A Primer in Theory Construction (a must-have for aspiring theorists; citation at the end of the article), as well as on a grad class I took.
Object of analysis: The system whose properties we are trying to explain. The research problem should determine what attributes of the system we are interested in. If the attributes are those of the individual person (e.g., a personality characteristic, attitude change), then it probably belongs to cognitive theory. If the attributes are those of a group of persons (e.g., community status, rate of diffusion), then it lies in the social systems realm. Societies, communities, large organizations, and primary groups are types of social systems.
Concepts: The most basic elements in theory, they are the attributes of the object that we are trying to explain and those that we are using to provide the explanation. They are abstractions from reality. We also use them in everyday life, of course, but research concepts are supposed to be more precise. Concepts are interesting to researchers only when they vary; we call a concept that can be observed to have different values a variable (as contrasted to a constant). Concepts are often called constructs because scientific concepts are carefully constructed from observation.
Conceptual definition: Each concept in a theoretical system (a collection of interrelated theoretical statements) should have a clear and unambiguous definition that is consistently used by the individual theorist and in agreement with the way other theorists define the concept. But that is seldom the case in social science. Careful definition of concepts is where we must begin with theory building (Normally I hate italics, but dammit, that sentence is important, write it down!).
Postulates: Ideas, biases, and strategies of a particular theorist that help to explain why his theory is constructed as it is and why he does the kind of research he does (nothing to do with posteriors). These statements are more abstract than assumptions or theoretical statements and are not usually testable. They may represent statements about human nature, causation, the nature of data, and the broad type of causal forces in society – in short, what’s important to look at and how you should do it.
Assumptions: These are statements about the concepts used in the theory. Assumptions are taken for granted in the theory being tested. They are not investigated, but the falsification of that theoretical statement may result in the revision of the assumption in the future. Assumptions (or revised assumptions) may serve as hypotheses in subsequent research. Two or more assumptions provide the premises from which the theoretical statements (and hypotheses) are derived through logic.
Theoretical Statement: The statement specifying the relation between two or more concepts (variables). Reynolds calls these relational statements and distinguishes them from existence statements, which include postulates, definitions, and assumptions. Other people call theoretical statements axioms, theorems, or propositions. Seriously, the label doesn’t matter, so long as we know what we’re referring to.
Relations: (No not that kind, get your mind out of the gutter) The connection between concepts can be stated in a number of forms: that one variable causes another, that two variables are associated, and more complicated relations are possible.
Operational definitions: The set of procedures a researcher uses to measure (or manipulate as in experiments) a given concept. These should follow clearly and logically from the conceptual definition of the concept. These are less abstract than conceptual definitions. They tell us “how to measure it,” ideally using more than one method.
Explication: The process by which conceptual and operational definitions are connected. This is done either by analysis using the logical criteria of definition or through empirical analysis using research data to clarify measurement to distinguish the concept from other concepts. Abstract concepts often need to be broken down into two or more lower order (less abstract) concepts before they can be translated into hypotheses. Basically a fancy way of saying “explain.”
Measurement: The assignment of values to objects on the basis of rules relevant to the concept being measured. Reynolds describes four levels of measurement: nominal, ordinal, interval, and ratio. The quality of measurement is assessed by reliability and validity. Speaking of reliability…
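The four levels of measurement differ in which statistics they license. A minimal sketch, using hypothetical toy data (the variables and values are made up for illustration):

```python
from statistics import mode, median, mean

# Hypothetical data illustrating the four levels of measurement.
nominal  = ["TV", "radio", "TV", "print", "TV"]   # categories only
ordinal  = [1, 2, 2, 3, 5]                        # ranks: order matters, gaps don't
interval = [72.0, 68.5, 75.2, 70.1]               # equal units, arbitrary zero (e.g., degrees F)
ratio    = [0, 12, 30, 45]                        # true zero (e.g., minutes of news viewing)

# Each level licenses strictly more operations than the one before it.
print(mode(nominal))        # nominal: counting and the mode are all we get
print(median(ordinal))      # ordinal: adds the median and percentiles
print(mean(interval))       # interval: adds means and meaningful differences
print(ratio[2] / ratio[1])  # ratio: ratios finally make sense (30 min is 2.5x 12 min)
```

Note that going down the list, the "rules" of assignment become stricter, which is why a ratio variable can always be analyzed as if it were a lower level, but not the other way around.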
Reliability: The stability and precision of measurement of a variable. Stability over time is called test-retest reliability (i.e., do those scoring high at one time also score high at a second point in time?). A second form, equivalence, looks at the level of agreement across items (internal consistency) or forms, or between coders doing the measurement.
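Test-retest reliability is usually indexed by the correlation between two administrations of the same measure. A minimal sketch with hypothetical scale scores for five respondents (the numbers are invented for illustration):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: the usual index of test-retest reliability."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for the same five respondents, measured twice.
time1 = [10, 14, 9, 16, 12]
time2 = [11, 15, 8, 17, 13]
print(round(pearson_r(time1, time2), 3))  # close to 1: high test-retest reliability
```

High scorers at time 1 are high scorers at time 2, so the correlation approaches 1 and we would call the measure stable.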
Validity: The degree to which you’re really measuring what you think you’re measuring. There are two different approaches: you find external independent evidence (e.g., a criterion group known to possess the characteristic) against which to compare your measurement (pragmatic validity), or you look at the extent to which the empirical relationships of the concept to other concepts fit your theory (construct validity).
Hypothesis: A statement of the relationship between two or more operational definitions. It should be capable of being stated in an “if, then” form, and is less abstract than theoretical statements, assumptions, and postulates. The type of research you are doing will largely dictate how to phrase your hypothesis.
Dependent Variables + Independent Variables: The dependent variable is the “effect” that we are seeking to explain; the independent variable is the presumed “cause” of that effect. We often say “x” is the independent variable that is the cause of the dependent variable “y,” (the effect). There are various names for third variables: extraneous variable, intervening variable, mediating variable, etc. that alter the relationship between the independent and dependent variables.
Empirical testing: A good theory must be capable of being tested by observation in the “real world.” Most frequently, statistics are used to make this test. Note that we test theory indirectly through hypotheses and operational definitions. It is made even more indirect by the fact that we test the null hypothesis: the statistical hypothesis of no difference – that the relationship is not strong enough to reject chance. If the data are judged not strong enough to reject the null hypothesis, then we have falsified the theoretical statement. If the observations are judged sufficient to reject the null hypothesis, then the theory merely remains viable or tenable.
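The logic of testing the null hypothesis can be sketched with a permutation test. The groups and scores below are hypothetical; the point is the machinery: if group labels didn’t matter (the null), shuffled labels should produce differences as big as the one we observed fairly often.

```python
import random

# Hypothetical attitude-change scores for a treatment and a control group.
treatment = [4, 6, 5, 7, 6, 5]
control   = [2, 3, 4, 3, 2, 4]

observed = sum(treatment) / len(treatment) - sum(control) / len(control)

random.seed(42)
pooled = treatment + control
n_extreme, n_perms = 0, 10_000
for _ in range(n_perms):
    random.shuffle(pooled)                      # pretend group labels are arbitrary
    fake_t, fake_c = pooled[:6], pooled[6:]
    diff = sum(fake_t) / 6 - sum(fake_c) / 6
    if abs(diff) >= abs(observed):              # as extreme as what we observed?
        n_extreme += 1

p_value = n_extreme / n_perms
# A small p-value lets us reject the null; the theory then merely remains viable.
print(observed, p_value)
```

Here shuffled labels almost never reproduce the observed difference of 2.5 points, so the p-value is well under .05 and we reject the null hypothesis, which is as close as this indirect procedure ever gets to “supporting” the theory.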
Type I and Type II errors: One of the problems of doing research is that you can be wrong in the inferences you make from research evidence. You can be wrong if you decide to reject the null hypothesis and say that the result is consistent with your theory. That’s a type I error. If your results don’t look very supportive and you decide you can’t reject the null hypothesis, you can be wrong too. In that case you incorrectly gave up on your research hypothesis (indirectly falsifying your theory), but there really was support in the “real world” and your research wasn’t good enough to detect it. That is a type II error.
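The Type I error rate is, by construction, the alpha level of the test. A quick simulation makes that concrete: the null is true here by design (a fair coin), so every rejection is a Type I error, and rejections should occur about 5% of the time.

```python
import random

random.seed(7)
n_flips, n_studies = 400, 2000
rejections = 0
for _ in range(n_studies):
    # Simulate one "study": 400 flips of a genuinely fair coin (null is true).
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    p_hat = heads / n_flips
    z = (p_hat - 0.5) / (0.25 / n_flips) ** 0.5  # standard error of a proportion
    if abs(z) > 1.96:                            # two-tailed test at alpha = .05
        rejections += 1                          # a Type I error, by construction

rate = rejections / n_studies
print(rate)  # hovers near the alpha level of .05
```

A Type II simulation would look the same except the coin would be biased (the null false), and every failure to reject would be the error counted.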
Causality: As you may know by now, this is a “can of worms.” It’s probably better to think of establishing causality between two variables as something that we move toward than to think of it as being capable of being “discovered” through an experiment. Realize that it is better to think in terms of various types of causes than to look for “the cause” of something. To work toward causality, three conditions have to be met: There has to be an association (correlation) between the two variables; a time order has to be established such that the presumed cause precedes the effect; and other explanations have to be ruled out, such as that some third variable causes both of the two variables of interest. If this last is the case, then we say the relationship we thought was causal was really spurious.
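The third condition above, ruling out a common cause, can be illustrated with a simulated spurious relationship. The variables here are synthetic: z causes both x and y, so x and y correlate even though neither causes the other, and the partial correlation controlling for z exposes that.

```python
import random
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

# A third variable z causes both x and y; x and y have no direct link.
random.seed(1)
z = [random.gauss(0, 1) for _ in range(2000)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

r_xy = pearson_r(x, y)
r_xz = pearson_r(x, z)
r_yz = pearson_r(y, z)

# Partial correlation of x and y controlling for z: if the x-y association
# vanishes once z is held constant, the relationship was spurious.
r_xy_given_z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))
print(round(r_xy, 2), round(r_xy_given_z, 2))
```

The raw x-y correlation comes out substantial, while the partial correlation sits near zero: the textbook signature of a spurious relationship.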
Necessary condition: A situation that must be present for some effect to take place. This is one type of cause. Sometimes a necessary condition describes the level of a third variable that is essential for the relationship between two other variables to hold. In this case the third variable can also be called a contingent condition. Third variables that make the relationship stronger or weaker but don’t totally limit its domain (are not necessary conditions) are called contributory conditions.
Sufficient condition: A situation that, if present, is enough to produce the effect. This implies that there are no contingent conditions. Experiments are probably more suited to finding sufficient conditions than are nonexperimental sample surveys and other methods. Social scientists would like to find necessary and sufficient conditions, but that is a goal, not an immediate reality.
Models and paradigms: Social scientists sometimes find it useful to employ simplified versions of reality to gain insight and to illustrate their theoretical ideas. A model is a conceptual structure borrowed from some field of study other than the one at hand; it need not include causal statements, but it does specify structural relationships among variables. A paradigm is a conceptual structure designed specifically for the field of application; it also specifies structural relationships. When a model or paradigm incorporates causal statements, it is usually called a theory. Models and paradigms can be assessed on the basis of their usefulness in helping us construct valid theory.
Reynolds, P. D. (1971). A primer in theory construction. Indianapolis, IN: Bobbs-Merrill Educational Publishing.