How to Define Your Concept a.k.a. Concept Explication [Part 1]

A key to research that can be used and repeated is the careful definition of the major concepts in the study. A hazy definition of a concept may enter into relationships with other variables, but since the concept was ill-defined, the meaning of those relationships can be no better than ill-defined. The process by which concepts are defined for scientific purposes is called explication, and that’s your ten-dollar-impress-your-grad-professor word of the day. Also, in academia the word often substitutes for the word “explanation” because it sounds much, much cooler.

Author’s note: This post is based on a handout from my grad work and the monograph, “Fundamentals of Concept Formation in Empirical Science,” by Carl G. Hempel (1952) – citation at the end of the post.

So, before we can begin defining our concept, we need to choose what we will be studying…

Selecting the Concept: You have to start with at least a basic idea of what you want to study, or a commonly used label that might be an interesting object of analysis (don’t know what that is? Check out the theory words & definitions post). At the beginning of your quest, about the only thing you can choose is what you want to focus on. Your thinking about that concept or focal variable should change quite a bit as you study it. Keep in mind that you should try to select a concept that is amenable to empirical observation, and likely to fit into relationships that are important for mass comm and communication theory. Avoid simply borrowing operational definitions from other people’s research. You can make your best contribution by a fresh start that might lead to innovative studies.

Literature Review: Once you have decided roughly what your focus is to be (focal variable!!), scour research journals, books, articles, etc. in search of studies that have dealt with it (DO NOT use Wikipedia, a Department Chair clubs a baby seal every time you do). Your goal is to locate the various definitions that have been used. Keep a running list of all the ways that the concept has been defined for research purposes, and where. A spreadsheet or Google Doc can be very handy for this. You can ignore purely abstract definitions (those where the concept is given a meaning that doesn’t seem to relate to the real world) and any place where your term is used but no definition is provided. There will undoubtedly be cases where your concept has been given some other name – keep track of those too. It is the empirical usage or main idea of the concept that is truly important, not the label that is put on it. However, be sure to note in your writing that the concept can go by different names.
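
The running list doesn’t need anything fancier than the spreadsheet already mentioned, or even a short script. A minimal sketch in Python, where every concept label, source, and definition is invented purely for illustration:

```python
# A running list of definitions, one record per source.
# Every label, source, and definition here is hypothetical.
definitions = [
    {"label": "media credibility", "source": "hypothetical study A",
     "type": "nominal", "definition": "score on a 5-item index"},
    {"label": "source believability", "source": "hypothetical study B",
     "type": "meaning analysis", "definition": "trust + expertise + ..."},
]

# Group labels by definition type so nominal-only concepts stand out.
by_type = {}
for d in definitions:
    by_type.setdefault(d["type"], []).append(d["label"])

print(by_type)
```

Each record maps directly onto one spreadsheet row, so you can move between the two freely; the point is just to keep label, source, and definition type together.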

Definition Levels: Sort out the various definitions you have found into one of three basic types:

  1. Nominal Definition: When a set of operational procedures is given an arbitrary name without any “reduction statements” linking the name to the measure, the definition is a nominal one. This is the most common type of definition in mass comm and communication research and, sadly, the least useful. It can usually be spotted by the obvious gap between the label and the measure (or definition). Examples:
    1. Intelligence is what an I.Q. test measures. Ok, but this still tells me nothing about what intelligence actually is.
    2. Communication development is a nation’s daily newspaper circulation per capita. What? I sort of get it, but still very unclear.
    3. Consensus consists of a majority vote. Right, but what does it mean? 51%? More? Does it apply to other situations?
  2. Real Definition – Meaning Analysis: A much more useful type of definition is to express the meaning of a top-level term by listing the lower-level concepts that compose it. The lower terms are less complex in that they are more clearly tied to actual observations. This list of lower concepts is usually expandable and replaceable – new items can be added and others removed. Any changes of this sort change the meaning of the concept. Examples:
    1. Mass media are newspapers, books, magazines, radio, television… (Note that this list is clearly able to go on and on, however depending on what you add, can change the meaning).
    2. Legal controls on the press include laws against libel, sedition, obscenity, blasphemy… (There is actually a much longer list that sadly expands).
  3. Real Definition – Empirical Analysis: This form of definition is the listing of the necessary and sufficient conditions for observation of the concept. This is the most useful type of definition for scientific purposes since changes in the lower concepts do not change the nature of the higher concept. In a way, these definitions are hypotheses, subject to modification as we learn more about the concept. In mass comm and communication research, this type of definition is rare, and frankly, awesome to come across. Some cursory efforts, as examples:
    1. Communication requires that a symbol be transmitted by one person and received by a second person, and a signal (represented by the symbol) must be shared, at least in part, by the transmitter and the receiver.
    2. Information seeking consists of a person undertaking some action to increase his [or her] input of a specific type of communication content; that he [she] be, to some extent, uncertain what content he [she] will receive; and that his [her] action is to some extent motivated by uncertainty.
    3. In both these cases you can see how clearly we’ve defined the term. It’s not 100% there but we’re way past giving examples or listing things that are part of it.
Level of Analysis: The next step is to distinguish between two kinds of attributes, called property terms and relational terms. A property term is an attribute that is observable for one person or object (or, you know, a property of that object), in isolation from other persons or objects. A relational term is only observable in the interaction of two persons, the comparison of two objects, or some similar two-unit relationship (like a relationship, not rocket science here). Most of the attributes we are interested in for communication research are relational in nature. Strangely, they are often described as if they were properties, in that only one person, say, is observed at a time. This kind of anomaly is a serious error in research procedure. Early in the process of explication (admit it, it sounds cooler) you should decide whether your concept is a property or a relation. Any further work with the concept should stick to whichever level of analysis you have decided on. Examples:

  1. Income is a property, but socioeconomic status is a relational term. So if you are interested in SES but have data only on income, you should be treating that data as relational. Easy cheesy.
  2. Information seeking can be thought of as a property of an individual. But it may be relational to other forms of behavior. For instance, it preempts other forms of communication, in that a person can only do one thing at a time. So your explication might well lead you into defining a whole typology of forms of communication, which are mutually exclusive. This is very frequent in social research, and provides a rich source of hypotheses.
  3. It should be clear that such concepts as obedience, power, I.Q., liberalism, relevance, and knowledge are relational for most purposes. It should be clear. It isn’t always that way.
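
The income/SES example can be made concrete: a property (each person’s income, observable in isolation) becomes relational once each value is expressed against the rest of the sample. A minimal sketch with made-up incomes:

```python
# Income is a property: each value is observable for one person alone.
incomes = [18_000, 25_000, 40_000, 40_000, 95_000]  # hypothetical data

# A relational treatment compares each person to everyone else,
# e.g. the share of the sample earning strictly less.
def percentile_rank(value, sample):
    below = sum(1 for x in sample if x < value)
    return below / len(sample)

ranks = [percentile_rank(x, incomes) for x in incomes]
print(ranks)  # the top earner ranks above 4/5 of the sample
```

The same number (an income) carries different meanings at the two levels of analysis: 40,000 is just 40,000 as a property, but as a relational measure it depends entirely on who else is in the sample.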

Stay tuned for Part 2 and the thrilling conclusion to: How to Define Your Concept a.k.a. Concept Explication coming soon to a mass communication blog near you (this one, in case that was confusing).

Citation: Hempel, C. G. (1952). Fundamentals of concept formation in empirical science. Chicago: University of Chicago Press.

Qualitative Media Analysis: Altheide’s Approach

Today we’ll be diving into Altheide’s approach to qualitative media analysis. This approach is comprehensively postulated in his book: Altheide, D. L. (1996). Qualitative Media Analysis. Thousand Oaks, CA: Sage. If you plan on doing an Altheide analysis, I HIGHLY recommend you pick it up.

The main premise of the approach is the study of documents or content. Documents are studied to understand culture, which can be conceptualized as the process and array of objects, symbols, and meanings that make up the social reality shared by members of a society. For our purposes, a large part of culture consists of documents. A document can be defined as any symbolic representation that can be recorded or retrieved for analysis. Examples include:

  • News article
  • Book
  • TV Show
  • Film
  • Magazine
  • Newspaper

Ethnographic Content Analysis

Ethnographic content analysis is oriented to documenting and understanding the communication of meaning, as well as verifying theoretical relationships. A major difference, however, is the reflexive and highly interactive nature of the investigator, concepts, data collection and analysis. Altheide’s method holds almost a dual focus: the ethnographic approach alongside a straight content analysis. Unlike in quantitative content analysis, in which the protocol is the instrument, the investigator is continually central in ethnographic content analysis, although protocols may be used in later phases of the research. As with all ethnographic research, the meaning of a message is assumed to be reflected in various modes of information exchange, format, rhythm, and style.

Process of Qualitative Document Analysis

Problem & Unit of Analysis

  • Select your specific problem to be investigated.
  • Become familiar with the process and context of the information source.
  • Become familiar with several examples of relevant documents, noting particularly the format. Select a unit of analysis.

Constructing a Protocol (p. 25)

  • List several items or categories (variables) to guide data collection and draft a protocol.
  • Test the protocol by collecting data from several documents.
  • Revise the protocol and select several additional cases to further refine the protocol.

Determine Themes and Frames

Overlapping concepts that aim to capture the emphasis and meaning are frame, theme, and discourse. These are related to communication formats which, in the case of the mass media, refer to selection, organization, and presentation of information.

  • Arrive at a sampling rationale and strategy – examples: theoretical, opportunistic, cluster, stratified random (note that this will usually be theoretical sampling).

Collecting the Data

  • Collect the Data, using preset codes, if appropriate, and many descriptive examples.

Data Analysis

  • Perform data analysis, including conceptual refinement and data coding (p. 41).

Finally, compare and contrast “extremes” and “key differences” within each category or item. Next, combine brief summaries with an example of the typical case as well as the extremes. Then integrate the findings with your interpretation and key concepts.

The most important thing to note about the Altheide method is that it is incredibly thorough. Some might argue that it is too thorough, but either way the time and amount of data collected is staggering. This article is meant to serve as a very brief overview of his method and to give you an idea of his rigorous approach to the analysis of documents, should you be so inclined to undertake this method. One major advantage of using this method is that while it is more time consuming from a data gathering standpoint, the entire method has been validated and documented in his book in meticulous detail.

If you do decide to go down the road of Altheide’s media analysis, get the book; it’s a handy guide and walks you through every step of the process.

Theory Words & Definitions

There are a lot of terms that get thrown around in the academic lexicon; sometimes they align with those you’ll find in a dictionary, sometimes they don’t. So I thought I’d outline a good handful for you here that will be helpful as you wade through some sweet, delicious mass comm theories (Fig. 1). This article is based on Reynolds’s book, A Primer in Theory Construction (a must-have for aspiring theorists; citation at the end of the article), as well as on a grad class I took.

Fig. 1: Sweet, delicious mass comm theory… and a book.

Object of analysis: The system whose properties we are trying to explain. The research problem should determine what attributes of the system we are interested in.  If the attributes are those of the individual person (e.g., a personality characteristic, attitude change), then it probably belongs to cognitive theory. If the attributes are those of a group of persons (e.g., community status, rate of diffusion), then it lies in the social systems realm. Societies, communities, large organizations, and primary groups are types of social systems.

Concepts: The most basic elements in theory, they are the attributes of the object that we are trying to explain and those that we are using to provide the explanation. They are abstractions from reality. We also use them in everyday life, of course, but research concepts are supposed to be more precise. Concepts are interesting to researchers only when they vary; we call a concept that can be observed to have different values a variable (as contrasted to a constant). They are often called constructs, because scientific concepts are carefully constructed from observation.

Conceptual definition: Each concept in a theoretical system (a collection of interrelated theoretical statements) should have a clear and unambiguous definition that is consistently used by the individual theorist and in agreement with the way other theorists define the concept. But that is seldom the case in social science. Careful definition of concepts is where we must begin with theory building (Normally I hate italics, but dammit, that sentence is important, write it down!)

Postulates: Ideas, biases, and strategies of a particular theorist that help to explain why his theory is constructed as it is and why he does the kind of research he does (nothing to do with posteriors). These statements are more abstract than assumptions or theoretical statements and not usually testable. They may represent statements about human nature, causation, the nature of data, and the broad type of causal forces in society – in short, what’s important to look at and how you should do it.

Assumptions: These are statements about the concepts used in the theory.  Assumptions are taken for granted in the theory being tested. They are not investigated, but the falsification of that theoretical statement may result in the revision of the assumption in the future. Assumptions (or revised assumptions) may serve as hypotheses in subsequent research. Two or more assumptions provide the premises from which the theoretical statements (and hypotheses) are derived through logic.

Theoretical Statement: The statement specifying the relation between two or more concepts (variables). Reynolds calls these relational statements and distinguishes these from existence statements that include postulates, definitions and assumptions. Other people call theoretical statements axioms, theorems or propositions. Seriously, the label doesn’t matter, just so we know what we’re referring to.

Relations: (No not that kind, get your mind out of the gutter) The connection between concepts can be stated in a number of forms: that one variable causes another, that two variables are associated, and more complicated relations are possible.

Operational definitions: The set of procedures a researcher uses to measure (or manipulate as in experiments) a given concept. These should follow clearly and logically from the conceptual definition of the concept. These are less abstract than conceptual definitions. They tell us “how to measure it,” ideally using more than one method.

Explication: The process by which conceptual and operational definitions are connected. This is done either by analysis using the logical criteria of definition or through empirical analysis using research data to clarify measurement to distinguish the concept from other concepts. Abstract concepts often need to be broken down into two or more lower order (less abstract) concepts before they can be translated into hypotheses. Basically a fancy way of saying “explain.”

Measurement: The assignment of values to objects on the basis of rules relevant to the concept being measured.  Reynolds describes four levels of measurement: nominal, ordinal, interval, and ratio. The quality of measurement is assessed by reliability and validity. Speaking of reliability…

Reliability: The stability and precision of measurement of a variable. Stability over time is called test-retest reliability (i.e., do those scoring high at one time also score high at a second point in time). A second form, equivalence, looks at the level of agreement across items (internal consistency) or forms, or between coders doing the measurement.
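
Internal consistency has a standard workhorse, Cronbach’s alpha: the proportion of total-score variance not attributable to item-level noise. A minimal sketch with made-up Likert responses (rows are respondents, columns are items on the same scale):

```python
from statistics import pvariance

# Hypothetical responses: 5 respondents x 3 items on one scale.
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
]

k = len(scores[0])                     # number of items
items = list(zip(*scores))             # item-wise columns
totals = [sum(row) for row in scores]  # each respondent's total score

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
item_var = sum(pvariance(col) for col in items)
alpha = k / (k - 1) * (1 - item_var / pvariance(totals))
print(round(alpha, 3))
```

Values near 1 mean the items move together (high equivalence); test-retest stability would instead correlate the same measure across two time points.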

Validity: The degree to which you’re really measuring what you think you’re measuring. There are two different approaches: you find external independent evidence (e.g., a criterion group known to possess the characteristic) against which to compare your measurement (pragmatic validity), or you look at the extent to which the empirical relationships of the concept to other concepts fit your theory (construct validity).

Hypothesis: A statement of the relationship between two or more operational definitions. It should be capable of being stated in an “if, then” form, and is less abstract than theoretical statements, assumptions, and postulates. The type of research you are doing will largely dictate how to phrase your hypothesis.

Dependent Variables + Independent Variables: The dependent variable is the “effect” that we are seeking to explain; the independent variable is the presumed “cause” of that effect. We often say “x” is the independent variable that is the cause of the dependent variable “y,” (the effect). There are various names for third variables: extraneous variable, intervening variable, mediating variable, etc. that alter the relationship between the independent and dependent variables.

Good. Good. Let the empirical testing flow through you.

Empirical testing: A good theory must be capable of being tested by observation in the “real world.” Most frequently, statistics are used to make this test. Note that we test theory indirectly through hypotheses and operational definitions. It is made even more indirect by the fact that we test the null hypothesis: the statistical hypothesis of no difference – that the relationship is not strong enough to reject chance. If the data are judged not strong enough to reject the null hypothesis, then we have (indirectly) falsified the theoretical statement. If the observations are judged sufficient to reject the null hypothesis, then the theory merely remains viable or tenable.

Type I and Type II errors: One of the problems of doing research is that you can be wrong in the inferences you make from research evidence. You can be wrong if you decide to reject the null hypothesis and say that the result is consistent with your theory. That’s a type I error. If your results don’t look very supportive and you decide you can’t reject the null hypothesis, you can be wrong too. In that case you incorrectly gave up on your research hypothesis (indirectly falsifying your theory), but there really was support in the “real world” and your research wasn’t good enough to detect it. That is a type II error.
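
A quick simulation makes the type I error rate tangible: even when the null is true (both groups drawn from the same population), a two-sided test at alpha = .05 will “find” a difference about 5% of the time. A sketch using a two-sample z-test with known variance (all numbers here are simulated, not real data):

```python
import math
import random

random.seed(42)

def false_positive(n=30):
    # Both groups come from the SAME population, so the null is true.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample z statistic with known sigma = 1.
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return abs(z) > 1.96  # reject at two-sided alpha = .05

trials = 10_000
rate = sum(false_positive() for _ in range(trials)) / trials
print(rate)  # hovers close to 0.05: the type I error rate
```

A type II error is the mirror image: rerun the same simulation with a real difference between the groups, and every trial that fails to reject is a miss your design wasn’t powerful enough to catch.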

Causality: As you may know by now, this is a “can of worms.” It’s probably better to think of establishing causality between two variables as something that we move toward than to think of it as being capable of being “discovered” through an experiment. Realize that it is better to think in terms of various types of causes than to look for “the cause” of something. To work toward causality, three conditions have to be met: There has to be an association (correlation) between the two variables; a time order has to be established such that the presumed cause precedes the effect; and other explanations have to be ruled out, such as that some third variable causes both of the two variables of interest. If this last is the case, then we say the relationship we thought was causal was really spurious.
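
The “third variable” problem is easy to demonstrate with simulated data: below, x and y have no causal link at all, but both are driven by a confounder z, so they correlate anyway. The relationship looks causal and is really spurious:

```python
import random
from statistics import mean, stdev

random.seed(1)

def pearson_r(xs, ys):
    # Pearson correlation from sample covariance and standard deviations.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

n = 2_000
z = [random.gauss(0, 1) for _ in range(n)]   # confounder
x = [zi + random.gauss(0, 1) for zi in z]    # caused by z, not by y
y = [zi + random.gauss(0, 1) for zi in z]    # caused by z, not by x

print(round(pearson_r(x, y), 2))  # roughly 0.5, with zero causal link
```

This is exactly why the third condition (ruling out alternative explanations) is required before moving toward a causal claim: association alone cannot distinguish this setup from a genuine x-causes-y relationship.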

Necessary condition: A situation that must be present for some effect to take place. This is one type of cause. Sometimes a necessary condition describes the level of a third variable that is essential for the relationship between two other variables to hold. In this case the third variable can also be called a contingent condition. Third variables that make the relationship stronger or weaker but don’t totally limit its domain (are not necessary conditions) are called contributory conditions.

Sufficient condition: A situation that, if present, is enough to produce the effect. This implies that there are no contingent conditions. Experiments are probably more suited to finding sufficient conditions than are nonexperimental sample surveys and other methods. Social scientists would like to find necessary and sufficient conditions, but that is a goal, not an immediate reality.

Models and paradigms: Social scientists sometimes find it useful to employ simplified versions of reality to gain insight and to illustrate their theoretical ideas. A model is a conceptual structure borrowed from some field of study other than the one at hand; it need not include causal statements, but it does specify structural relationships among variables. A paradigm is a conceptual structure designed specifically for the field of application; it also specifies structural relationships. When a model or paradigm incorporates causal statements, it is usually called a theory. Models and paradigms can be assessed on the basis of their usefulness in helping us construct valid theory.

Reynolds, P. D. (1971). A primer in theory construction. Indianapolis, IN: Bobbs-Merrill Educational Publishing.

Beginner’s Guide to the Research Proposal

Don’t know how to write or where to start when writing a research proposal? Here is a simple guide to get you thinking in the right direction. I heartily recommend that you cut/paste the sections into your document and use this post as a reference in crafting each section.

Success Keys: Overall Quality of the Study

  • Good research question (Read the in-depth article on writing qualitative research questions here)
  • Appropriate research design
  • Rigorous and feasible methods
  • Qualified research team

Success Keys: Quality of the Proposal

  • Informative title
  • Self-sufficient and convincing abstract
  • Clear research questions
  • Scholarly and pertinent background and rationale
  • Relevant previous work
  • Appropriate population and sample
  • Appropriate methods of measurement and manipulation
  • Quality control
  • Adequate sample size
  • Sound analysis plan
  • Ethical issues well addressed
  • Tight budget
  • Realistic timetable

Quality of the Presentation

  • Clear, concise, well-organized
  • Helpful table of contents and subheadings
  • Good schematic diagrams and tables
  • Neat and free of errors

Research Proposal Elements

  1. Title
  2. Abstract
  3. Study Problem
  4. Relevance of the Project
  5. Literature Review
  6. Specific Study Objectives
  7. Research Methods
    1. Study design
    2. Participants
      1. Inclusion/exclusion criteria
      2. Sampling
      3. Recruitment plans
      4. Method of assignment to study groups
    3. Data collection
      1. Variables: outcomes, predictors, confounders
      2. Measures/instruments
      3. Procedures
    4. Statistical considerations
      1. Sample size
    5. Data analysis
  8. Ethical Considerations
  9. Work Plan
  10. Budget
  11. Bibliography

Literature Review

A critical summary of research on a topic of interest, generally prepared to put a research problem in context or to identify gaps and weaknesses in prior studies so as to justify a new investigation.

Be sure to:

  • Be thorough and complete
  • Present a logical case
  • Include recent research as justification
  • Propose original research (or if duplicating, note that)
  • Include primary sources
  • Include a critical appraisal of prior studies
  • Build a case for new study

Study Problem (Study Purpose)

Broad statement indicating the goals of the project. This was commonly called the “who gives a shit?” question in my grad program. Ask yourself that simple question and address it. If the answer is “no one,” rethink your study. In your answer be:

  • Clear
  • Relevant
  • Logical
  • Documented

Objectives/Research Questions/Hypotheses

Identifying the research problem and developing a question to be answered are the first steps in the research process. The research question will guide the remainder of the design process (read the in-depth article on writing qualitative research questions here).

Research Objectives
A clear statement of the specific purposes of the study, which identifies the key study variables and their possible interrelationships as well as the nature of the population of interest.

Research Question
The specific purpose stated in the form of a question. Your study will be the answer to this question.

Hypothesis
A tentative prediction or explanation of the relationship between two or more variables. A prediction of the answer to the research question is usually a hallmark of a quantitative study; qualitative studies are usually far more open-ended and don’t always contain predictions.


Well-stated objectives, research questions, and hypotheses:

  • Provide reviewers with a clear picture of what you plan to accomplish.
  • Show the reviewers that you have a clear picture of what you want to accomplish.
  • Form the foundation for the rest of the proposal.
  • Will be used to assess the adequacy/appropriateness of the study’s proposed methods.

Keys to Success

  • Clear and consistent.
  • Key concepts/constructs identified.
  • Includes the independent and dependent variables (if applicable).
  • Measurable.
  • Hypotheses clearly predict a relationship between variables.
  • Relevant or novel

Research/Study Designs

The overall plan for obtaining an answer to the research question or for testing the research hypothesis.

Will have been chosen based on:

  • Research question/hypothesis.
  • Strengths and weaknesses of alternative designs.
  • Feasibility, resources, time frame, ethics.
  • Type of study: Qualitative, quantitative, or mixed.

Keys to Success

  • Clearly identify and label study design using standard terminology.
    • Quantitative/qualitative
    • Cross-sectional/longitudinal
    • True Experiment/Quasi-Experiment
  • Must specify the major elements of the design
    • Variables, instruments
    • Participants: sampling frame, sample size, selection procedures
    • Timing of testing/intervention
  • Use a diagram
  • Must be consistent with objectives/hypotheses.
  • Must justify choice of design
    • Appropriate choice to answer question
    • Lack of bias/validity
    • Precision/power
    • Feasible
    • Ethical


Participants

Obviously, based on your type of study, you may or may not have participants. A content analysis, for example, wouldn’t include this section.

  • Who will be studied?
  • How will they be selected?
  • How will they be recruited?
  • How will they be allocated to study groups?

1. Who Will Be Studied: Specify eligible participants

  • Target population: demographic characteristics
  • Accessible population: temporal & geographic characteristics
  • Inclusion/Exclusion Criteria

2. How Will They Be Selected: Sampling

The process of selecting a portion of the population to represent the entire population.

Types of Sampling

  1. Probability: each element in the population has a known, nonzero chance of being selected (in simple random sampling, an equal and independent chance).
    1. Simple random sampling
    2. Stratified random sampling
    3. Cluster sampling
    4. Systematic sampling
  2. Nonprobability
    1. Convenience sampling
    2. Snowball sampling
    3. Judgmental sampling
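
A few of these schemes can be sketched with the standard library. The population below is invented (300 cases split into two strata) just to show the mechanics:

```python
import random

random.seed(7)

# Hypothetical population of 300 cases in two strata.
population = [{"id": i, "stratum": "urban" if i % 3 else "rural"}
              for i in range(300)]

# Simple random sampling: every element has an equal chance.
simple = random.sample(population, 30)

# Systematic sampling: every k-th element after a random start.
k = len(population) // 30
start = random.randrange(k)
systematic = population[start::k]

# Stratified random sampling: sample within each stratum separately.
strata = {}
for p in population:
    strata.setdefault(p["stratum"], []).append(p)
stratified = [p for group in strata.values()
              for p in random.sample(group, 10)]

print(len(simple), len(systematic), len(stratified))
```

Stratified sampling guarantees each stratum is represented, which a simple random sample of the same size cannot promise; systematic sampling is the cheapest to execute but assumes the list order carries no hidden pattern.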

Keys to Success

  • Clear description of study population.
  • Appropriate inclusion/exclusion criteria.
  • Justification of study population and sampling method (bias).
  • Clear description of sampling methods.

3. How Will They Be Recruited?

Describe the methods that will be used to recruit participants. It is important to document that the study will be feasible and that there will be no ethical problems.

4. How Will They Be Allocated To Study Groups?

Random Allocation: The assignment of participants to treatment conditions in a manner determined by chance alone.

Goal of Randomization: to maximize the probability that groups receiving differing interventions will be comparable.

Methods of randomization

  • Drawn from a hat
  • Random number table
  • Computer generated
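
“Computer generated” allocation is nearly a one-liner with the standard library; a minimal sketch (the participant IDs are hypothetical):

```python
import random

random.seed(3)

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical IDs

# Random allocation: shuffle, then deal into equal-sized groups,
# so assignment is determined by chance alone.
random.shuffle(participants)
treatment, control = participants[:10], participants[10:]

print(len(treatment), len(control))  # two groups of 10
```

Fixing the seed also gives you an audit trail: the same seed reproduces the same allocation, which is handy when documenting the randomization procedure in the proposal.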

Data Collection

Variables: Characteristic or quality that takes on different values.

In Research Identify:

  • Dependent or outcome variables (the presumed effect).
  • Independent or predictor variables (the presumed cause).
  • Note: Variables are not inherently independent or dependent.
  • In descriptive and exploratory studies, this distinction is not made.

Questionnaire: A method of gathering self-report information from respondents through self-administration of questions in a paper and pencil format (Read the in-depth article on crafting a good survey questionnaire here).

Keys to Success

  • Are the words simple, direct and familiar to all?
  • Is the question as clear and specific as possible?
  • Is it a double-barreled question?
  • Does the question have a double negative?
  • Is the question too demanding?
  • Are the questions leading or biased?
  • Is the question applicable to all respondents?
  • Can the item be shortened with no loss of meaning?
  • Will the answers be influenced by response styles?
  • Have you assumed too much knowledge?
  • Is an appropriate time referent provided?
  • Does the question have several possible meanings?
  • Are the response alternatives clear and mutually exclusive (and exhaustive)?

Scale: A composite measure of an attribute, consisting of several items that have a logical or empirical relationship to each other; involves the assignment of a score to place participants on a continuum with respect to the attribute.

Examples of Scales

  • Quality of Life
  • Customer Satisfaction
  • Source Credibility
  • Socioeconomic Status

Criteria for Instrument Selection

  • Objective of the study
  • Definitions of concept and measuring model
  • Reliability: degree of consistency with which an instrument or rater measures a variable (i.e., internal consistency, test-retest reproducibility, inter-observer reliability).
  • Validity: degree to which an instrument measures what it is intended to measure (i.e., content validity, concurrent validity and construct validity).
  • Sensitivity: ability to detect change.
  • Interpretability: the degree to which one can assign qualitative meaning to an instrument’s quantitative scores.
  • Burden or ease of use

Keys to Success

  • Always pretest questionnaires.
  • Always indicate if a questionnaire has been pretested.

Manipulation

In experimental research, the experimental treatment or manipulation.

Keys to Success

  • Careful description of treatment/manipulation
  • Be aware of unintended manipulations

Data Analysis

Detail your planned procedures for:

  • Recording, storing and reducing data
  • Assessing data quality
  • Statistical analysis

Step 1: Descriptive statistics

  • Describe the shape, central tendency and variability
  • Looking at variables one at a time: mean, median, range, proportion


Use descriptive statistics to:

  • Summarize important features of numerical data
  • Pick up data entry errors (e.g., 3 genders, age 150)
  • Characterize participants
  • Determine distribution of variables
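
As a sketch of this first pass, here's a toy example using only Python's standard library. The data are invented; the point is the kind of range check that catches entry errors like "age 150" before they poison the analysis:

```python
# Sketch: first-pass descriptives on a toy dataset, including the
# kind of plausibility check that catches data entry errors.
from statistics import mean, median

ages = [21, 34, 150, 45, 28, 19, 33]   # 150 is a deliberate typo

print(mean(ages), median(ages), min(ages), max(ages))

# Flag values outside a plausible range before any analysis.
suspect = [a for a in ages if not 18 <= a <= 100]
print(suspect)  # [150]
```

Notice how the mean is pulled up by the typo while the median is not; that gap alone is often the first clue something is wrong.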

Assess assumptions for statistical tests: Some statistical tests, such as the t test, are only valid if certain assumptions about the data hold true. For the t test, the assumptions are that the data for the two groups come from populations with a Normal distribution and that the variances of the two populations are equal. Inherent in these assumptions is that the study sample is a random sample from the population. The same assumptions hold for tests such as analysis of variance and multiple linear regression. When these assumptions cannot safely be believed to be true, alternative distribution-free methods can be used instead. These are called non-parametric tests; examples are the Wilcoxon signed-rank test and the rank-sum test.
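
In practice you'd run these checks and tests with a stats package (e.g., `scipy.stats` offers `levene`, `ttest_ind`, and `mannwhitneyu`). As a standard-library sketch of the underlying ideas, here is a crude variance-ratio check plus the U statistic behind the rank-sum test, on invented data:

```python
# Sketch: a crude equal-variance check, and the rank-sum (Mann-Whitney U)
# statistic computed by hand. Use a real stats package for p-values.
from statistics import variance

group_a = [3.1, 2.8, 3.6, 3.0, 2.9]   # toy data
group_b = [4.0, 3.8, 4.4, 3.9, 4.1]

# Rule of thumb: a variance ratio far above ~4 argues against the
# equal-variance assumption behind the standard t test.
ratio = (max(variance(group_a), variance(group_b))
         / min(variance(group_a), variance(group_b)))
print(round(ratio, 2))

def mann_whitney_u(a, b):
    """Count, over all pairs, how often an 'a' value beats a 'b' value
    (ties count half) -- the U statistic behind the rank-sum test."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in a for y in b)

print(mann_whitney_u(group_a, group_b))  # 0.0: every b exceeds every a
```

The rank-sum logic never touches the raw values beyond their order, which is exactly why it survives when the Normality assumption fails.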

Step 2: Analytic/inferential statistics

  • Example: look at associations between two or more variables
  • Estimate the pattern and strength of associations among variables
  • Test hypotheses
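
The workhorse measure of association strength between two continuous variables is the Pearson correlation. Here's a sketch computed from its definition, on invented data (the variable names are hypothetical placeholders, not from any real study):

```python
# Sketch: Pearson correlation from its definition (toy, invented data).
from math import sqrt

hours_tv = [1, 2, 3, 4, 5]             # hypothetical predictor
outcome = [2.0, 2.5, 3.1, 3.4, 4.0]    # hypothetical outcome scores

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(hours_tv, outcome)
print(round(r, 3))  # close to 1: a strong positive linear association
```

Remember that r describes a linear pattern only, and association alone never establishes causation; that argument has to come from your design.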

Sample Size

Make a rough estimate of how many participants are required to answer the research question. During the design of the study, the sample size calculation will indicate whether the study is feasible. During the review phase, it will reassure reviewers not only that the study is feasible, but also that resources are not being wasted by recruiting more participants than necessary.

Hypothesis-based sample sizes indicate the number of participants necessary to reasonably test the study’s hypothesis. Hypotheses can be proven wrong, but they can never be proven correct. This is because the investigator cannot test all potential patients in the world with the condition of interest. The investigator attempts to test the research hypothesis through a sample of the larger population.

Keys to Success

  • Justify sample size
  • Provide the data necessary to calculate the sample size and state how the estimates were obtained, including desired power, alpha level, one- or two-sided tests, and estimated effect size.
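
Those four ingredients are all a basic sample-size formula needs. As a sketch, here's the standard normal-approximation for a two-sided, two-sample comparison, n = 2((z₁₋ₐ/₂ + z₁₋ᵦ)/d)², where d is the standardized effect size (Cohen's d); real power software refines this with the t distribution:

```python
# Sketch: rough per-group sample size for a two-sided two-group test,
# via the normal approximation  n = 2 * ((z_alpha + z_beta) / d)^2.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided test
    z_beta = z(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at alpha = .05 and 80% power:
print(n_per_group(0.5))  # 63 per group (t-based formulas give ~64)
```

Note how the effect size sits in the denominator squared: halving the expected effect quadruples the required sample, which is why an honest effect-size estimate matters more than any other input.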

Ethical Considerations

Most of the time you’ll need to certify your study with your school’s approval board for research on human subjects, pretty much so you don’t repeat the Stanford Prison Experiment.

  • Ethical Principles
    • Respect for persons (autonomy)
    • Non-maleficence (do not harm)
    • Beneficence (do good)
    • Justice (fair inclusion and exclusion)
  • Ethical Considerations
    • Scientific validity – is the research scientifically sound and valid?
    • Recruitment – how and by whom are participants recruited?
    • Participation – what does participation in the study involve?
    • Harms and benefits – what are real potential harms and benefits of participating in the study?
    • Informed consent – have the participants appropriately been asked for their informed consent?


Getting funded is the primary reason for submitting a grant application.

Keys to Success

  • Read the instructions (e.g., overhead rates, issues not covered; if in doubt, call the person in charge of the grants)
  • Itemization of costs
    • Personnel (salary and benefits)
    • Consultants (salary)
    • Equipment
    • Supplies (be complete, include cost per item)
    • Travel
    • Other expenses
    • Indirect costs
  • Do not inflate the costs
  • Justify the budget
  • Enquire about the granting agency’s typical funding range
  • Review a successful application
  • Start early, pay attention to instructions/criteria
  • Carefully develop research team
  • Justify decisions
  • Have others review your proposal


Present a Works Cited list at the end of your proposal (i.e., a list of only the works you have summarized, paraphrased, or quoted in the paper).

This basic information was originally available on a sub-page; obviously I’ve added my own editorial comments and information throughout. I’ve been unable to locate the original source, so it’s here for your enjoyment & enlightenment. If you know where I can attribute it, please contact me and I’ll be happy to do so.