How to Define Your Concept a.k.a. Concept Explication [Part 2]

This is part 2 (the final part) of a series; be sure to read part 1 as well.

A key to research that can be used and repeated is the careful definition of the major concepts in the study. A hazy definition of a concept may still enter into relationships with other variables, but since the concept was ill-defined, the meaning of those relationships can be no better than ill-defined. The process by which concepts are defined for scientific purposes is called explication – that’s your ten-dollar-impress-your-grad-professor word of the day. In academia the word also often substitutes for “explanation” because it sounds much, much cooler. So, now that the intro is covered, let’s jump into part 2!

Author’s note: This post is based on a handout from my grad work and the monograph, “Fundamentals of Concept Formation in Empirical Science,” by Carl G. Hempel (1952) – citation at the end of the post.

  1. Apply Defining Criteria: By this point you should have culled your thinking down to a few definitions. The more specific you can get, the better. Analyze them by means of these criteria:
    1. Specificity, or how specific the definition currently is, both in the details of observation and in the number of sentences linking the various elements of the concept (the fewer the better). The more general the definition, the worse off you are. Examples:
      1. It is more useful to record that Jim-Bob “watched Channel 13 from 7 to 9 p.m. yesterday evening” than to say he “watched TV last night.”
      2. A definition of “dissonance” as “any cognitive discrepancy” is less helpful than an extended definition that catalogs the various kinds of cognitions that can be discrepant with one another, the various means by which they might be that way, etc.
    2. Non-reification. OK, we’re getting a bit more complex here. Nothing insane, just pay attention. Avoid giving names to attributes that you might imagine exist but that cannot be observed. You may think there is a key factor that has not been observed, but that could be given empirical meaning by careful research. If this is the case, you are proposing a hypothetical construct (the hypothesis being that it does, in fact, exist). If you really need to do this, the first task of your research should be a “validity check” on its existence. When you provide evidence of a hypothetical construct, it attains the more secure status of a variable. If a hypothetical construct remains unobserved, it is considered a reification (see, it took me a while, but we got around to the definition), and other researchers are unlikely to be persuaded by your reference to it. The important thing is to recognize the status of all elements of your definition, and to design research that will demonstrate their empirical content. Examples:
      1. Some common reifications in communication research are the terms “catharsis,” “dissonance,” “group cohesiveness,” “coorientation,” and “attitude.”  So far, none of these things has ever been observed, yet they hold important positions in certain theoretical formulations.  The danger is that they may not exist, except in the minds of the theoreticians.
      2. By careful research, some hypothetical constructs that have gradually been converted into variables include “empathy,” “understanding,” “learning,” and “conformity.”  However, these concepts are tied to very specific operational definitions, and when they are used to cover other kinds of situations they are simply reified terms.
    3. Invariance of usage. This is a simple one – the same person should use a term consistently. Sadly, this isn’t always the case. Some writers use the same term to refer to different things at different times. Even more common is switching levels of analysis without making any terminological distinction. Examples:
      1. Marshall McLuhan jumps from discussion of individual differences in perception to statements about national character, historical epochs, and other macroscopic concepts (no surprise there, McLuhan was a bit all over the place).
      2. The term “generation” is used appropriately for analyzing families and other kinship systems. It is misapplied to differences between age groups in society as a whole in the notion of a “generation gap.”
    4. Inter-observer invariance – the measure of scientific usage would be that everyone uses the concept to mean the same thing. This level of agreement is practically impossible to achieve, but it is a useful goal to strive for, and careful application of these criteria can move you toward it.
  2. Set boundaries.  Perhaps the most important step in explication is to decide on clear boundaries for your concept. In meaning analysis, this is simply a matter of considering whether or not to include various lower concepts in your definition. In empirical analysis, boundaries are set by determining which conditions are necessary and/or sufficient, and which are neither necessary nor sufficient. In both cases, this stage of explication consists of stripping the concept of extra meanings. Examples:
    1. A study shows that the strength of an expressed opinion can be increased by reinforcing it through social approval. The author’s conclusion is that reinforcement is a necessary condition for opinion formation. A later study demonstrates that there are conditions under which opinions change without reinforcement, so the definition is watered down: reinforcement becomes a sufficient condition rather than a necessary one. Finally, it is found that in some instances opinions shift in a direction opposite to the pattern of reinforcement.  So the element of reinforcement is eliminated from the definition of opinion formation, because it is neither necessary nor sufficient.
  3. Tentatively define. Try to develop a satisfactory definition via empirical analysis. You may find that it is surprisingly brief and simplified. Simpler is better as long as you are satisfied that it covers what you want the concept to mean, and does not cover anything else. If an empirical definition eludes you, more research may be needed. So turn to meaning analysis and work on a list of lower concepts. Keep in mind, though, that this is an intermediary stage in the development of your concept.
  4. Define operationally. For each element of each concept that you retain in your final definition, you must specify at least one operational definition. The more specific the better, and the more carefully each operation is linked to your conceptual definition by clear reduction statements the better. It is not necessary to attempt to list all operational definitions; indeed, if your concept is not trivial, it will be impossible to list them all.  But it is necessary to demonstrate that each element of your definition is amenable to observation in real world experiences. Operational definition consists of stating the observable indicators of the attributes (properties or relations) involved, so that someone else can “know one when he sees one.” Operational definitions might be contrived in the form of interview questions, experimental manipulations, unobtrusive observations, content categories, etc.  The key to this final stage of explication is that all your reasoning and linkages be spelled out explicitly, so that someone else reading your work will know what you have done, what you think it represents conceptually, and why.

In the early stages of planning a research project, it is unnecessary to reduce operational definitions to precise terms.  What is needed is to demonstrate conclusively that you can do so when the time comes to design an empirical study.

This was Part 2 and the thrilling conclusion to How to Define Your Concept a.k.a. Concept Explication. Be sure to read Part 1 as well.

Citation: Hempel, C. G. (1952). Fundamentals of concept formation in empirical science. Chicago: University of Chicago Press.

How to Define Your Concept a.k.a. Concept Explication [Part 1]

A key to research that can be used and repeated is the careful definition of the major concepts in the study. A hazy definition of a concept may still enter into relationships with other variables, but since the concept was ill-defined, the meaning of those relationships can be no better than ill-defined. The process by which concepts are defined for scientific purposes is called explication – that’s your ten-dollar-impress-your-grad-professor word of the day. In academia the word also often substitutes for “explanation” because it sounds much, much cooler.

Author’s note: This post is based on a handout from my grad work and the monograph, “Fundamentals of Concept Formation in Empirical Science,” by Carl G. Hempel (1952) – citation at the end of the post.

So, before we can begin defining our concept, we need to choose what we will be studying…

Selecting the Concept: You have to start with at least a basic idea of what you want to study, or a commonly used label that might be an interesting object of analysis (don’t know what that is? Check out the theory words & definitions post).  In the beginning of your quest about the only thing you can choose is what you want to focus on. Your thinking about that concept or focal variable should change quite a bit as you study it. Keep in mind that you should try to select a concept that is amenable to empirical observation, and likely to fit into relationships that are important for mass comm and communication theory.  Avoid using operational definitions from other people’s research. You can make your best contribution by a fresh start that might lead to innovative studies.

Literature Review: Once you have decided roughly what your focus is to be (focal variable!!), scour research journals, books, articles, etc. in search of studies that have dealt with it (DO NOT use Wikipedia; a Department Chair clubs a baby seal every time you do). Your goal is to locate the various definitions that have been used. Keep a running list of all the ways the concept has been defined for research purposes, and where. A spreadsheet or Google Doc can be very handy for this. You can ignore purely abstract definitions – those where the concept is given a meaning that doesn’t seem to relate to the real world – and any place where your term is used but no definition is provided. There will undoubtedly be cases where your concept has been given some other name – keep track of those too.  It is the empirical usage or main idea of the concept that is truly important, not the label that is put on it. However, be sure to note in your writing that the concept can go by different names.
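If you’d rather script that running list than maintain it by hand, here is a minimal sketch using Python’s csv module. The column names and example rows are purely illustrative, not a standard format:

```python
import csv
import io

# Columns for a running log of how a concept has been defined in the literature.
# These field names are one reasonable choice, not a standard.
FIELDS = ["concept_label", "definition", "definition_type", "source", "year"]

rows = [
    {"concept_label": "dissonance", "definition": "any cognitive discrepancy",
     "definition_type": "nominal", "source": "Festinger", "year": 1957},
    {"concept_label": "consensus", "definition": "a majority vote",
     "definition_type": "nominal", "source": "(example)", "year": 1960},
]

# Write the log as CSV text (swap io.StringIO for a real file to keep it on disk).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)

# Read it back and group definitions by concept label, since the same
# concept often travels under different names and definitions.
reader = csv.DictReader(io.StringIO(buf.getvalue()))
by_label = {}
for row in reader:
    by_label.setdefault(row["concept_label"], []).append(row["definition"])

print(by_label["dissonance"])
```

The payoff comes later in explication, when you need to compare every definition of a concept side by side.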

Definition Levels: Sort out the various definitions you have found, into one of the three basic types:

  1. Nominal Definition: When a set of operational procedures is given an arbitrary name without any “reduction statements” linking the name to the measure, the definition is a nominal one.  This is the most common type of definition in mass comm and communication research and, sadly, the least useful. It can usually be spotted by the obvious gap between the label and the measure (or definition).  Examples:
    1. Intelligence is what an I.Q. test measures. Ok, but this still tells me nothing about what intelligence actually is.
    2. Communication development is a nation’s daily newspaper circulation per capita. What? I sort of get it, but still very unclear.
    3. Consensus consists of a majority vote. Right, but what does it mean? 51%? More? Does it apply to other situations?
  2. Real Definition – Meaning Analysis: A much more useful type of definition expresses the meaning of a top-level term by listing the lower-level concepts that compose it. The lower terms are less complex in that they are more clearly tied to actual definitions. This list of lower concepts is usually expandable and replaceable – new items can be added and others removed. Any changes of this sort change the meaning of the concept. Examples:
    1. Mass media are newspapers, books, magazines, radio, television… (Note that this list can clearly go on and on; depending on what you add, the meaning changes.)
    2. Legal controls on the press include laws against libel, sedition, obscenity, blasphemy… (There is actually a much longer list that sadly expands).
  3. Real definition – Empirical analysis: This form of definition is the listing of the necessary and sufficient conditions for observation of the concept. This is the most useful type of definition for scientific purposes since changes in the lower concepts do not change the nature of the higher concept. In a way, these definitions are hypotheses, subject to modification as we learn more about the concept. In mass comm and communication research, this type of definition is rare, and frankly, awesome to come across. Some cursory efforts, as examples:
    1. Communication requires that a symbol be transmitted by one person and received by a second person, and a signal (represented by the symbol) must be shared, at least in part, by the transmitter and the receiver.
    2. Information seeking consists of a person undertaking some action to increase his [or her] input of a specific type of communication content; that he [she] be, to some extent, uncertain what content he [she] will receive; and that his [her] action is to some extent motivated by uncertainty.
    3. In both these cases you can see how clearly we’ve defined the term. It’s not 100% there but we’re way past giving examples or listing things that are part of it.
  4. Level of Analysis: The next step is to distinguish between two kinds of attributes that are called property terms and relational terms. A property term is an attribute that is observable for one person or object (or,  you know, a property of that object), in isolation from other persons or objects. A relational term is only observable in the interaction of two persons, or the comparison of two objects, or in some similar two-unit relationship (like a relationship, not rocket science here). Most of the attributes we are interested in for communication research are relational in nature.  Strangely, they are often described as if they were properties, in that only one person, say, is observed at a time. This kind of anomaly is a serious error in research procedure. Early in the process of explication (admit it, it sounds cooler) you should decide whether your concept is a property or a relation. Any further work with the concept should stick to whichever level of analysis you have decided on. Examples:
    1. Income is a property, but socioeconomic status is a relational term.  So if you are interested in SES but have data only on income, you should be treating that data as relational. Easy cheesy.
    2. Information seeking can be thought of as a property of an individual. But it may be relational to other forms of behavior.  For instance, it preempts other forms of communication, in that a person can only do one thing at a time. So your explication might well lead you into defining a whole typology of mutually exclusive forms of communication.  This is very frequent in social research, and provides a rich source of hypotheses.
    3. It should be clear that such concepts as obedience, power, I.Q., liberalism, relevance, and knowledge are relational for most purposes. It should be clear. It isn’t always that way.

Stay tuned for Part 2 and the thrilling conclusion to: How to Define Your Concept a.k.a. Concept Explication coming soon to a mass communication blog near you (this one, in case that was confusing).

Citation: Hempel, C. G. (1952). Fundamentals of concept formation in empirical science. Chicago: University of Chicago Press.

Beginner’s Guide to the Research Proposal

Don’t know how to write a research proposal, or where to start? Here is a simple guide to get you thinking in the right direction. I heartily recommend that you cut/paste the sections into your document and use this post as a reference in crafting each section.

Success Keys: Overall Quality of the Study

  • Good research question (Read the in-depth article on writing qualitative research questions here)
  • Appropriate research design
  • Rigorous and feasible methods
  • Qualified research team

Success Keys: Quality of the Proposal

  • Informative title
  • Self-sufficient and convincing abstract
  • Clear research questions
  • Scholarly and pertinent background and rationale
  • Relevant previous work
  • Appropriate population and sample
  • Appropriate methods of measurement and manipulation
  • Quality control
  • Adequate sample size
  • Sound analysis plan
  • Ethical issues well addressed
  • Tight budget
  • Realistic timetable

Quality of the Presentation

  • Clear, concise, well-organized
  • Helpful table of contents and subheadings
  • Good schematic diagrams and tables
  • Neat and free of errors

Research Proposal Elements

  1. Title
  2. Abstract
  3. Study Problem
  4. Relevance of the Project
  5. Literature Review
  6. Specific Study Objectives
  7. Research Methods
    1. Study design
    2. Participants
      1. Inclusion/exclusion criteria
      2. Sampling
      3. Recruitment plans
      4. Method of assignment to study groups
    3. Data collection
      1. Variables: outcomes, predictors, confounders
      2. Measures/instruments
      3. Procedures
    4. Statistical considerations
      1. Sample size
    5. Data analysis
  8. Ethical Considerations
  9. Work Plan
  10. Budget
  11. Bibliography

Literature Review

A critical summary of research on a topic of interest, generally prepared to put a research problem in context or to identify gaps and weaknesses in prior studies so as to justify a new investigation.

Be sure to:

  • Be thorough and complete
  • Present a logical case
  • Include recent research as justification
  • Propose original research (or if duplicating, note that)
  • Include primary sources
  • Include a critical appraisal of prior studies
  • Build a case for new study

Study Problem (Study Purpose)

Broad statement indicating the goals of the project. This was commonly called the “who gives a shit?” question in my grad program. Ask yourself that simple question and address it. If the answer is “no one,” rethink your study. In your answer be:

  • Clear
  • Relevant
  • Logical
  • Documented

Objectives/Research Questions/Hypotheses

Identifying the research problem and developing a question to be answered are the first steps in the research process. The research question will guide the remainder of the design process (read the in-depth article on writing qualitative research questions here).

Research Objectives
A clear statement of the specific purposes of the study, which identifies the key study variables and their possible interrelationships as well as the nature of the population of interest.

Research Question
The specific purpose stated in the form of a question. Your study will be the answer to this question.

Hypothesis
A tentative prediction or explanation of the relationship between two or more variables. A prediction of the answer to the research question is usually a hallmark of a quantitative study; qualitative studies are usually far more open-ended and don’t always contain predictions.


  • Provide reviewers with a clear picture of what you plan to accomplish.
  • Show the reviewers that you have a clear picture of what you want to accomplish.
  • Form the foundation for the rest of the proposal.
  • Will be used to assess the adequacy/appropriateness of the study’s proposed methods.

Keys to Success

  • Clear and consistent.
  • Key concepts/constructs identified.
  • Includes the independent and dependent variables (if applicable).
  • Measurable.
  • Hypotheses clearly predict a relationship between variables.
  • Relevant or novel

Research/Study Designs

The overall plan for obtaining an answer to the research question or for testing the research hypothesis.

Will have been chosen based on:

  • Research question/hypothesis.
  • Strengths and weaknesses of alternative designs.
  • Feasibility, resources, time frame, ethics.
  • Type of study: Qualitative, quantitative, or mixed.

Keys to Success

  • Clearly identify and label study design using standard terminology.
    • Quantitative/qualitative
    • Cross-sectional/longitudinal
    • True Experiment/Quasi-Experiment
  • Must specify the major elements of the design
    • Variables, instruments
    • Participants: sampling frame, sample size, selection procedures
    • Timing of testing/intervention
  • Use a diagram
  • Must be consistent with objectives/hypotheses.
  • Must justify choice of design
    • Appropriate choice to answer question
    • Lack of bias/validity
    • Precision/power
    • Feasible
    • Ethical


Participants

Obviously, based on your type of study, you may or may not have participants. A content analysis, for example, wouldn’t include this section.

  • Who will be studied?
  • How will they be selected?
  • How will they be recruited?
  • How will they be allocated to study groups?

1. Who Will Be Studied: Specify eligible participants

  • Target population: demographic characteristics
  • Accessible population: temporal & geographic characteristics
  • Inclusion/Exclusion Criteria

2. How Will They Be Selected: Sampling

The process of selecting a portion of the population to represent the entire population.

Types of Sampling

  1. Probability: each element in the population has a known, nonzero chance of being selected.
    1. Simple random sampling
    2. Stratified random sampling
    3. Cluster sampling
    4. Systematic sampling
  2. Nonprobability
    1. Convenience sampling
    2. Snowball sampling
    3. Judgmental sampling
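The probability designs above can be sketched in a few lines of Python. This is a toy illustration only – the sampling frame and strata are invented, and a real study would sample from an actual roster:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

frame = list(range(1, 101))  # toy sampling frame of 100 element IDs

# Simple random sampling: every element has an equal chance of selection.
simple = random.sample(frame, k=10)

# Systematic sampling: a random start, then every k-th element.
k = len(frame) // 10
start = random.randrange(k)
systematic = frame[start::k]

# Stratified random sampling: sample separately within each stratum
# (the urban/rural split here is made up for the example).
strata = {"urban": frame[:60], "rural": frame[60:]}
stratified = {name: random.sample(members, k=5) for name, members in strata.items()}

print(len(simple), len(systematic), len(stratified["urban"]))
```

Cluster sampling follows the same pattern, except you first randomly sample groups (clusters) and then elements within them.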

Keys to Success

  • Clear description of study population.
  • Appropriate inclusion/exclusion criteria.
  • Justification of study population and sampling method (bias).
  • Clear description of sampling methods.

3. How Will They Be Recruited?

Describe the methods that will be used to recruit participants. It is important to document that the study will be feasible and that there will be no ethical problems.

4. How Will They Be Allocated To Study Groups?

Random Allocation: The assignment of participants to treatment conditions in a manner determined by chance alone.

Goal of Randomization: to maximize the probability that groups receiving differing interventions will be comparable.

Methods of randomization

  • Drawn from a hat
  • Random number table
  • Computer generated
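A computer-generated allocation can be as simple as shuffling the participant list and splitting it. This is a sketch under assumptions – the participant IDs are invented, and real trials typically use more elaborate schemes such as block randomization:

```python
import random

random.seed(7)  # fixed seed so the allocation can be reproduced and audited

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participant IDs

# Shuffle once, then split in half: chance alone determines group membership.
shuffled = participants[:]
random.shuffle(shuffled)
treatment = sorted(shuffled[:10])
control = sorted(shuffled[10:])

print(len(treatment), len(control))
assert not set(treatment) & set(control)  # no one lands in both groups
```

Recording the seed (or the random number table used) lets reviewers verify that assignment really was determined by chance alone.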

Data Collection

Variables: Characteristic or quality that takes on different values.

In Research Identify:

  • Dependent or outcome variables (the presumed effect).
  • Independent or predictor variables (the presumed cause).
  • Note: Variables are not inherently independent or dependent.
  • In descriptive and exploratory studies, this distinction is not made.

Questionnaire: A method of gathering self-report information from respondents through self-administration of questions in a paper and pencil format (Read the in-depth article on crafting a good survey questionnaire here).

Keys to Success

  • Are the words simple, direct and familiar to all?
  • Is the question as clear and specific as possible?
  • Is it a double-barreled question?
  • Does the question have a double negative?
  • Is the question too demanding?
  • Are the questions leading or biased?
  • Is the question applicable to all respondents?
  • Can the item be shortened with no loss of meaning?
  • Will the answers be influenced by response styles?
  • Have you assumed too much knowledge?
  • Is an appropriate time referent provided?
  • Does the question have several possible meanings?
  • Are the response alternatives clear and mutually exclusive (and exhaustive)?

Scale: A composite measure of an attribute, consisting of several items that have a logical or empirical relationship to each other; involves the assignment of a score to place participants on a continuum with respect to the attribute.

Examples of Scales

  • Quality of Life
  • Customer Satisfaction
  • Source Credibility
  • Socioeconomic Status

Criteria for Instrument Selection

  • Objective of the study
  • Definitions of concept and measuring model
  • Reliability: degree of consistency with which an instrument or rater measures a variable (i.e., internal consistency, test-retest reproducibility, inter-observer reliability).
  • Validity: degree to which an instrument measures what it is intended to measure (i.e., content validity, concurrent validity and construct validity).
  • Sensitivity: ability to detect change.
  • Interpretability: the degree to which one can assign qualitative meaning to an instrument’s quantitative scores.
  • Burden or ease of use

Keys to Success

  • Always pretest questionnaires.
  • Always indicate if a questionnaire has been pretested.

Manipulation: In experimental research, the experimental treatment or manipulation applied to participants.

Keys to Success

  • Careful description of treatment/manipulation
  • Be aware of unintended manipulations

Data Analysis

Detail your planned procedures for:

  • Recording, storing and reducing data
  • Assessing data quality
  • Statistical analysis

Step 1: Descriptive statistics

  • Describe the shape, central tendency and variability
  • Look at variables one at a time: mean, median, range, proportion
  • Summarize important features of numerical data
  • Pick up data-entry errors (e.g., three genders, age 150)
  • Characterize participants
  • Determine distribution of variables
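A first descriptive pass, including the data-entry check, might look like this in Python. The ages are fabricated, with one deliberate entry error planted:

```python
import statistics

# Fabricated ages for eight participants, including one data-entry error (150).
ages = [21, 25, 34, 42, 29, 150, 38, 27]

print("mean:", statistics.mean(ages))
print("median:", statistics.median(ages))
print("range:", min(ages), "-", max(ages))

# Flag implausible values before any inferential analysis.
suspect = [a for a in ages if not 0 <= a <= 120]
print("possible data-entry errors:", suspect)
```

Note how the error drags the mean well above the median – one reason to report both when characterizing participants.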

Assess assumptions for statistical tests: Some statistical tests, such as the t test, are only valid if certain assumptions about the data hold. For the t test, the assumptions are that the data for the two groups come from populations with a Normal distribution and that the variances of the two populations are equal. Inherent in these assumptions is that the study sample represents a random sample from the population. The same assumptions hold for tests such as analysis of variance and multiple linear regression. When these assumptions cannot safely be assumed to hold, alternative, distribution-free methods can be used. These are called non-parametric tests; examples are the Wilcoxon signed-rank test and the rank-sum test.
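To make that concrete, here is a stdlib-only sketch that computes the equal-variance two-sample t statistic by hand, with a crude variance-ratio check on the equal-variance assumption. The numbers are toys; in practice you would use a statistics package and a formal test of the assumptions:

```python
import math
import statistics

group_a = [1.0, 2.0, 3.0, 4.0, 5.0]  # toy data for two independent groups
group_b = [2.0, 3.0, 4.0, 5.0, 6.0]

var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)

# Crude check on the equal-variance assumption: a ratio near 1 is reassuring.
ratio = max(var_a, var_b) / min(var_a, var_b)
print("variance ratio:", ratio)

# Pooled-variance two-sample t statistic (valid under normality + equal variances).
n_a, n_b = len(group_a), len(group_b)
pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
se = math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
t = (statistics.mean(group_a) - statistics.mean(group_b)) / se
print("t:", t)  # -1.0 for this toy data
```

If the variances looked wildly unequal or the data clearly non-Normal, the rank-sum test mentioned above would be the safer choice.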

Step 2: Analytic/inferential statistics

  • Look at associations among two or more variables
  • Estimate the pattern and strength of associations among variables
  • Test hypotheses

Sample Size

Make a rough estimate of how many participants are required to answer the research question. During the design of the study, the sample size calculation will indicate whether the study is feasible. During the review phase, it will reassure reviewers that the study is not only feasible but also that resources are not being wasted by recruiting more participants than necessary.

Hypothesis-based sample sizes indicate the number of participants necessary to reasonably test the study’s hypothesis. Hypotheses can be proven wrong, but they can never be proven correct, because the investigator cannot test every member of the population of interest. The investigator instead tests the research hypothesis through a sample of the larger population.

Keys to Success

  • Justify sample size
  • Provide the data necessary to calculate the sample size, and state how the estimates were obtained, including desired power, alpha level, one- or two-sided tests, and estimated effect size.
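As one concrete instance, the standard normal-approximation formula for comparing two means, n = 2 * ((z_alpha + z_beta) / d)^2 per group, can be computed with the Python stdlib. This assumes a two-sided test of two independent means; the effect size d = 0.5 is just an example value:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group for a two-sided, two-sample comparison of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" standardized effect of d = 0.5 needs about 63 participants per group.
print(n_per_group(0.5))
```

Notice how sensitive the number is to effect size: halving d quadruples the required n, which is exactly the kind of justification reviewers want spelled out.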

Ethical Considerations

Many times you’ll need to certify your study with your school’s institutional review board for research on human subjects – pretty much so you don’t repeat the Stanford Prison Experiment.

  • Ethical Principles
    • Respect for persons (autonomy)
    • Non-maleficence (do not harm)
    • Beneficence (do good)
    • Justice (fair selection of participants)
  • Ethical Considerations
    • Scientific validity – is the research scientifically sound and valid?
    • Recruitment – how and by whom are participants recruited?
    • Participation – what does participation in the study involve?
    • Harms and benefits – what are real potential harms and benefits of participating in the study?
    • Informed consent – have the participants appropriately been asked for their informed consent?


Budget

Getting funded is the primary reason for submitting a grant application.

Keys to Success

  • Read instructions (i.e., overhead, issues not covered, if in doubt call the person in charge of the grants)
  • Itemization of costs
    • Personnel (salary and benefits)
    • Consultants (salary)
    • Equipment
    • Supplies (be complete, include cost per item)
    • Travel
    • Other expenses
    • Indirect costs
  • Do not inflate the costs
  • Justify the budget
  • Enquire about the granting agency’s range
  • Review a successful application
  • Start early, pay attention to instructions/criteria
  • Carefully develop research team
  • Justify decisions
  • Have others review your proposal


Bibliography

Present a Works Cited list at the end of your proposal (i.e., a list of only the works you have summarized, paraphrased, or quoted from in the paper).

This basic information was originally available in a sub-page, and I’ve added my own editorial and information throughout. I’ve since been unable to locate the original source, so it’s here for your enjoyment & enlightenment. If you know where I can attribute it, please contact me and I’ll be happy to do so.