Qualitative Media Analysis: Altheide’s Approach

Today we’ll be diving into Altheide’s approach to qualitative media analysis. The approach is laid out comprehensively in his book: Altheide, D. L. (1996). Qualitative Media Analysis. Thousand Oaks, CA: Sage. If you plan on doing an Altheide analysis, I HIGHLY recommend you pick it up.

The main premise of the approach is the study of documents or content. Documents are studied to understand culture, which can be conceptualized as the process and the array of objects, symbols, and meanings that make up the social reality shared by members of a society. For our purposes, a large part of culture consists of documents. A document can be defined as any symbolic representation that can be recorded or retrieved for analysis.

Examples:

  • News article
  • Book
  • TV Show
  • Film
  • Magazine
  • Newspaper

Ethnographic Content Analysis

Ethnographic content analysis is oriented to documenting and understanding the communication of meaning, as well as verifying theoretical relationships. A major difference, however, is the reflexive and highly interactive nature of the investigator, concepts, data collection, and analysis. Altheide’s method holds almost a dual focus: the ethnographic approach alongside a straight content analysis. Unlike in quantitative content analysis, in which the protocol is the instrument, the investigator is continually central in ethnographic content analysis, although protocols may be used in later phases of the research. As with all ethnographic research, the meaning of a message is assumed to be reflected in various modes of information exchange, format, rhythm, and style.

Process of Qualitative Document Analysis

Problem & Unit of Analysis

  • Select your specific problem to be investigated.
  • Become familiar with the process and context of the information source.
  • Become familiar with several examples of relevant documents, noting particularly the format. Select a unit of analysis.

Constructing a Protocol (p. 25)

  • List several items or categories (variables) to guide data collection and draft a protocol.
  • Test the protocol by collecting data from several documents.
  • Revise the protocol and select several additional cases to further refine the protocol.

Determine Themes and Frames

Frame, theme, and discourse are overlapping concepts that aim to capture emphasis and meaning. These are related to communication formats, which, in the case of the mass media, refer to the selection, organization, and presentation of information.

  • Arrive at a sampling rationale and strategy – examples: theoretical, opportunistic, cluster, stratified random. (Note that for this method it will usually be theoretical sampling.)

Collecting the Data

  • Collect the data, using preset codes, if appropriate, and many descriptive examples.

Data Analysis

  • Perform data analysis, including conceptual refinement and data coding (p. 41).

Finally, compare and contrast “extremes” and “key differences” within each category or item. Next, combine brief summaries with an example of the typical case as well as the extremes. Then integrate the findings with your interpretation and key concepts.

The most important thing to note about the Altheide method is that it is incredibly thorough. Some might argue that it is too thorough, but either way, the time and amount of data collected are staggering. This article is meant to serve as a very brief overview of his method and to give you an idea of his rigorous approach to the analysis of documents, should you be so inclined to undertake it. One major advantage of this method is that while it is more time-consuming from a data-gathering standpoint, the entire method has been validated and documented in his book in meticulous detail.

If you do decide to go down the road of Altheide’s media analysis, get the book; it’s a handy guide and walks you through every step of the process.

Theory Words & Definitions

There are a lot of terms that get thrown around in the academic lexicon; sometimes they align with those you’ll find in a dictionary, sometimes they don’t. So I thought I’d outline a good handful for you here that will be helpful as you wade through some sweet, delicious mass comm theories (Fig. 1). This article is based on Reynolds’s book, A Primer in Theory Construction (a must-have for aspiring theorists; citation at the end of the article), as well as on a grad class I took.

Fig. 1: Sweet, delicious mass comm theory… and a book.

Object of analysis: The system whose properties we are trying to explain. The research problem should determine what attributes of the system we are interested in.  If the attributes are those of the individual person (e.g., a personality characteristic, attitude change), then it probably belongs to cognitive theory. If the attributes are those of a group of persons (e.g., community status, rate of diffusion), then it lies in the social systems realm. Societies, communities, large organizations, and primary groups are types of social systems.

Concepts: The most basic elements in theory, they are the attributes of the object that we are trying to explain and those that we are using to provide the explanation. They are abstractions from reality. We also use them in everyday life, of course, but research concepts are supposed to be more precise. Concepts are interesting to researchers only when they vary; we call a concept that can be observed to have different values a variable (as contrasted with a constant). Concepts are often called constructs because scientific concepts are carefully constructed from observation.

Conceptual definition: Each concept in a theoretical system (a collection of interrelated theoretical statements) should have a clear and unambiguous definition that is consistently used by the individual theorist and in agreement with the way other theorists define the concept. But that is seldom the case in social science. Careful definition of concepts is where we must begin with theory building. (Normally I hate italics, but dammit, that sentence is important – write it down!)

Postulates: Ideas, biases, and strategies of a particular theorist that help to explain why his theory is constructed as it is and why he does the kind of research he does (nothing to do with posteriors). These statements are more abstract than assumptions or theoretical statements and are not usually testable. They may represent statements about human nature, causation, the nature of data, and the broad types of causal forces in society – in short, what’s important to look at and how you should do it.

Assumptions: These are statements about the concepts used in the theory.  Assumptions are taken for granted in the theory being tested. They are not investigated, but the falsification of that theoretical statement may result in the revision of the assumption in the future. Assumptions (or revised assumptions) may serve as hypotheses in subsequent research. Two or more assumptions provide the premises from which the theoretical statements (and hypotheses) are derived through logic.

Theoretical Statement: The statement specifying the relation between two or more concepts (variables). Reynolds calls these relational statements and distinguishes these from existence statements that include postulates, definitions and assumptions. Other people call theoretical statements axioms, theorems or propositions. Seriously, the label doesn’t matter, just so we know what we’re referring to.

Relations: (No not that kind, get your mind out of the gutter) The connection between concepts can be stated in a number of forms: that one variable causes another, that two variables are associated, and more complicated relations are possible.

Operational definitions: The set of procedures a researcher uses to measure (or manipulate as in experiments) a given concept. These should follow clearly and logically from the conceptual definition of the concept. These are less abstract than conceptual definitions. They tell us “how to measure it,” ideally using more than one method.

Explication: The process by which conceptual and operational definitions are connected. This is done either by analysis using the logical criteria of definition or through empirical analysis, using research data to clarify measurement and to distinguish the concept from other concepts. Abstract concepts often need to be broken down into two or more lower-order (less abstract) concepts before they can be translated into hypotheses. Basically a fancy way of saying “explain.”

Measurement: The assignment of values to objects on the basis of rules relevant to the concept being measured.  Reynolds describes four levels of measurement: nominal, ordinal, interval, and ratio. The quality of measurement is assessed by reliability and validity. Speaking of reliability…

Reliability: The stability and precision of measurement of a variable. Stability over time is called test-retest reliability (i.e., do those scoring high at one time also score high at a second point in time?). A second form, equivalence, looks at the level of agreement across items (internal consistency) or forms, or between coders doing the measurement.

Validity: The degree to which you’re really measuring what you think you’re measuring. There are two different approaches: you find external independent evidence (e.g., a criterion group known to possess the characteristic) against which to compare your measurement (pragmatic validity), or you look at the extent to which the empirical relationships of the concept to other concepts fit your theory (construct validity).

Hypothesis: A statement of the relationship between two or more operational definitions. It should be capable of being stated in an “if, then” form, and is less abstract than theoretical statements, assumptions, and postulates. The type of research you are doing will largely dictate how to phrase your hypothesis.

Dependent Variables + Independent Variables: The dependent variable is the “effect” that we are seeking to explain; the independent variable is the presumed “cause” of that effect. We often say “x” is the independent variable that is the cause of the dependent variable “y,” (the effect). There are various names for third variables: extraneous variable, intervening variable, mediating variable, etc. that alter the relationship between the independent and dependent variables.

Good. Good. Let the empirical testing flow through you.

Empirical testing: A good theory must be capable of being tested by observation in the “real world.” Most frequently, statistics are used to make this test. Note that we test theory indirectly through hypotheses and operational definitions. It is made even more indirect by the fact that we test the null hypothesis: the statistical hypothesis of no difference – that the relationship is not strong enough to reject chance. If the data is judged to be not strong enough to reject the null hypothesis, then we have falsified the theoretical statement. If the observations are judged sufficient to reject the null hypothesis, then theory merely remains viable or tenable.

Type I and Type II errors: One of the problems of doing research is that you can be wrong in the inferences you make from research evidence. You can be wrong if you decide to reject the null hypothesis and say that the result is consistent with your theory. That’s a type I error. If your results don’t look very supportive and you decide you can’t reject the null hypothesis, you can be wrong too. In that case you incorrectly gave up on your research hypothesis (indirectly falsifying your theory), but there really was support in the “real world” and your research wasn’t good enough to detect it. That is a type II error.
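To make the Type I side concrete, here is a hypothetical simulation (all numbers invented): even when the null hypothesis is true by construction, a 5% significance test still rejects it about 5% of the time, and each of those rejections is a Type I error. The helper function is a rough z approximation, not any particular textbook’s procedure:

```python
import random
import statistics
from statistics import NormalDist

random.seed(42)  # fixed seed so the illustration is reproducible

def two_sample_z(a, b):
    """Rough z statistic for the difference between two sample means."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

alpha = 0.05
crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided cutoff, about 1.96

trials, rejections = 2000, 0
for _ in range(trials):
    # Both groups come from the SAME population, so the null is true.
    g1 = [random.gauss(0, 1) for _ in range(30)]
    g2 = [random.gauss(0, 1) for _ in range(30)]
    if abs(two_sample_z(g1, g2)) > crit:
        rejections += 1  # rejecting a true null: a Type I error

print(rejections / trials)  # hovers around alpha = 0.05
```

Type II errors work the other way: build in a real difference between the groups and count how often the test fails to detect it.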

Causality: As you may know by now, this is a “can of worms.” It’s probably better to think of establishing causality between two variables as something that we move toward than to think of it as being capable of being “discovered” through an experiment. Realize that it is better to think in terms of various types of causes than to look for “the cause” of something. To work toward causality, three conditions have to be met: There has to be an association (correlation) between the two variables; a time order has to be established such that the presumed cause precedes the effect; and other explanations have to be ruled out, such as that some third variable causes both of the two variables of interest. If this last is the case, then we say the relationship we thought was causal was really spurious.

Necessary condition: A situation that must be present for some effect to take place. This is one type of cause. Sometimes a necessary condition describes the level of a third variable that is essential for the relationship between two other variables to hold. In this case the third variable can also be called a contingent condition. Third variables that make the relationship stronger or weaker but don’t totally limit its domain (are not necessary conditions) are called contributory conditions.

Sufficient condition: A situation that, if present, is enough to produce the effect. This implies that there are no contingent conditions. Experiments are probably more suited to finding sufficient conditions than are nonexperimental sample surveys and other methods. Social scientists would like to find necessary and sufficient conditions, but that is a goal, not an immediate reality.

Models and paradigms: Social scientists sometimes find it useful to employ simplified versions of reality to gain insight and to illustrate their theoretical ideas. A model is a conceptual structure borrowed from some field of study other than the one at hand; it need not include causal statements, but it does specify structural relationships among variables. A paradigm is a conceptual structure designed specifically for the field of application; it also specifies structural relationships. When a model or paradigm incorporates causal statements, it is usually called a theory. Models and paradigms can be assessed on the basis of their usefulness in helping us construct valid theory.

Reynolds, P. D. (1971). A primer in theory construction. Indianapolis, IN: Bobbs-Merrill Educational Publishing.

Beginners Guide to the Research Proposal

Don’t know how or where to start when writing a research proposal? Here is a simple guide to get you thinking in the right direction. I heartily recommend that you cut/paste the sections into your document and use this post as a reference in crafting each section.

Success Keys: Overall Quality of the Study

  • Good research question (Read the in-depth article on writing qualitative research questions here)
  • Appropriate research design
  • Rigorous and feasible methods
  • Qualified research team

Success Keys: Quality of the Proposal

  • Informative title
  • Self-sufficient and convincing abstract
  • Clear research questions
  • Scholarly and pertinent background and rationale
  • Relevant previous work
  • Appropriate population and sample
  • Appropriate methods of measurement and manipulation
  • Quality control
  • Adequate sample size
  • Sound analysis plan
  • Ethical issues well addressed
  • Tight budget
  • Realistic timetable

Quality of the Presentation

  • Clear, concise, well-organized
  • Helpful table of contents and subheadings
  • Good schematic diagrams and tables
  • Neat and free of errors

Research Proposal Elements

  1. Title
  2. Abstract
  3. Study Problem
  4. Relevance of the Project
  5. Literature Review
  6. Specific Study Objectives
  7. Research Methods
    1. Study design
    2. Participants
      1. Inclusion/exclusion criteria
      2. Sampling
      3. Recruitment plans
      4. Method of assignment to study groups
    3. Data collection
      1. Variables: outcomes, predictors, confounders
      2. Measures/instruments
      3. Procedures
    4. Statistical considerations
      1. Sample size
    5. Data analysis
  8. Ethical Considerations
  9. Work Plan
  10. Budget
  11. Bibliography

Literature Review

A critical summary of research on a topic of interest, generally prepared to put a research problem in context or to identify gaps and weaknesses in prior studies so as to justify a new investigation.

Be sure to:

  • Be thorough and complete
  • Present a logical case
  • Include recent research as justification
  • Propose original research (or if duplicating, note that)
  • Include primary sources
  • Include a critical appraisal of prior studies
  • Build a case for new study

Study Problem (Study Purpose)

Broad statement indicating the goals of the project. This was commonly called the “who gives a shit?” question in my grad program. Ask yourself that simple question and address it. If the answer is “no one,” rethink your study. In your answer be:

  • Clear
  • Relevant
  • Logical
  • Documented

Objectives/Research Questions/Hypotheses

Identifying the research problem and developing a question to be answered are the first steps in the research process. The research question will guide the remainder of the design process (read the in-depth article on writing qualitative research questions here).

Research Objectives
A clear statement of the specific purposes of the study, which identifies the key study variables and their possible interrelationships as well as the nature of the population of interest.

Research Question
The specific purpose stated in the form of a question. Your study will be the answer to this question.

Hypotheses
A tentative prediction or explanation of the relationship between two or more variables. A prediction of the answer to the research question is usually a hallmark of a quantitative study; qualitative studies are usually far more open-ended and don’t always contain predictions.

Functions

  • Provide reviewers with a clear picture of what you plan to accomplish.
  • Show the reviewers that you have a clear picture of what you want to accomplish.
  • Form the foundation for the rest of the proposal.
  • Will be used to assess the adequacy/appropriateness of the study’s proposed methods.

Keys to Success

  • Clear and consistent.
  • Key concepts/constructs identified.
  • Includes the independent and dependent variables (if applicable).
  • Measurable.
  • Hypotheses clearly predict a relationship between variables.
  • Relevant or novel

Research/Study Designs

The overall plan for obtaining an answer to the research question or for testing the research hypothesis.

Will have been chosen based on:

  • Research question/hypothesis.
  • Strengths and weaknesses of alternative designs.
  • Feasibility, resources, time frame, ethics.
  • Type of study: Qualitative, quantitative, or mixed.

Keys to Success

  • Clearly identify and label study design using standard terminology.
    • Quantitative/qualitative
    • Cross-sectional/longitudinal
    • True Experiment/Quasi-Experiment
  • Must specify the major elements of the design
    • Variables, instruments
    • Participants: sampling frame, sample size, selection procedures
    • Timing of testing/intervention
  • Use a diagram
  • Must be consistent with objectives/hypotheses.
  • Must justify choice of design
    • Appropriate choice to answer question
    • Lack of bias/validity
    • Precision/power
    • Feasible
    • Ethical

Participants

Obviously based on your type of study you may or may not have participants. A content analysis, for example, wouldn’t include this section.

  • Who will be studied?
  • How will they be selected?
  • How will they be recruited?
  • How will they be allocated to study groups?

1. Who Will Be Studied: Specify eligible participants

  • Target population: demographic characteristics
  • Accessible population: temporal & geographic characteristics
  • Inclusion/Exclusion Criteria

2. How Will They Be Selected: Sampling

The process of selecting a portion of the population to represent the entire population.

Types of Sampling

  1. Probability: each element in the population has a known, nonzero chance of being selected.
    1. Simple random sampling
    2. Stratified random sampling
    3. Cluster sampling
    4. Systematic sampling
  2. Nonprobability
    1. Convenience sampling
    2. Snowball sampling
    3. Judgmental sampling
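For the probability designs above, a rough Python sketch (hypothetical population and stratum labels, not from the original source) shows how the selection mechanics differ:

```python
import random

random.seed(7)  # seeded so the illustration is reproducible

# Hypothetical population of 100 people, each tagged with a stratum.
population = [{"id": i, "stratum": "senior" if i % 4 == 0 else "junior"}
              for i in range(100)]

# Simple random sampling: every element has an equal chance.
simple = random.sample(population, k=10)

# Stratified random sampling: sample separately within each stratum.
strata = {}
for person in population:
    strata.setdefault(person["stratum"], []).append(person)
stratified = [p for group in strata.values() for p in random.sample(group, k=5)]

# Systematic sampling: every k-th element after a random start.
step = len(population) // 10
systematic = population[random.randrange(step)::step]

print(len(simple), len(stratified), len(systematic))  # 10 10 10
```

Cluster sampling would follow the same pattern, except you would randomly sample whole groups (e.g., classrooms) rather than individuals within groups.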

Keys to Success

  • Clear description of study population.
  • Appropriate inclusion/exclusion criteria.
  • Justification of study population and sampling method (bias).
  • Clear description of sampling methods.

3. How Will They Be Recruited?

Describe what methods will be used to recruit participants. It is important to document that the study will be feasible and that there will be no ethical problems.

4. How Will They Be Allocated To Study Groups?

Random Allocation: The assignment of participants to treatment conditions in a manner determined by chance alone.

Goal of Randomization: to maximize the probability that groups receiving differing interventions will be comparable.

Methods of randomization

  • Drawn from a hat
  • Random number table
  • Computer generated
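The “computer generated” option is the usual choice today. A minimal sketch (hypothetical participant IDs) is just a seeded shuffle and split:

```python
import random

random.seed(123)  # record the seed so the allocation can be reproduced
participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs

# Shuffle the roster, then split it in half: chance alone decides
# who ends up in the treatment group and who ends up in control.
roster = participants[:]
random.shuffle(roster)
treatment, control = roster[:10], roster[10:]

print(len(treatment), len(control))  # 10 10
```

Recording the seed matters: it lets you (and reviewers) reproduce exactly how participants were allocated.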

Data Collection

Variables: Characteristic or quality that takes on different values.

In Research Identify:

  • Dependent or outcome variables (the presumed effect).
  • Independent or predictor variables (the presumed cause).
  • Note: Variables are not inherently independent or dependent.
  • In descriptive and exploratory studies, this distinction is not made.

Measures/Instruments
Questionnaire: A method of gathering self-report information from respondents through self-administration of questions in a paper and pencil format (Read the in-depth article on crafting a good survey questionnaire here).

Keys to Success

  • Are the words simple, direct and familiar to all?
  • Is the question as clear and specific as possible?
  • Is it a double-barreled question?
  • Does the question have a double negative?
  • Is the question too demanding?
  • Are the questions leading or biased?
  • Is the question applicable to all respondents?
  • Can the item be shortened with no loss of meaning?
  • Will the answers be influenced by response styles?
  • Have you assumed too much knowledge?
  • Is an appropriate time referent provided?
  • Does the question have several possible meanings?
  • Are the response alternatives clear and mutually exclusive (and exhaustive)?

Scale: A composite measure of an attribute, consisting of several items that have a logical or empirical relationship to each other; involves the assignment of a score to place participants on a continuum with respect to the attribute.

Examples of Scales

  • Quality of Life
  • Customer Satisfaction
  • Source Credibility
  • Socioeconomic Status

Criteria for Instrument Selection

  • Objective of the study
  • Definitions of concept and measuring model
  • Reliability: degree of consistency with which an instrument or rater measures a variable (i.e., internal consistency, test-retest reproducibility, inter-observer reliability).
  • Validity: degree to which an instrument measures what it is intended to measure (i.e., content validity, concurrent validity and construct validity).
  • Sensitivity: ability to detect change.
  • Interpretability: the degree to which one can assign qualitative meaning to an instrument’s quantitative scores.
  • Burden or ease of use

Keys to Success

  • Always pretest questionnaires.
  • Always indicate if a questionnaire has been pretested.

Manipulation
In experimental research, the experimental treatment or manipulation.

Keys to Success

  • Careful description of treatment/manipulation
  • Be aware of unintended manipulations

Data Analysis

Detail your planned procedures for:

  • Recording, storing and reducing data
  • Assessing data quality
  • Statistical analysis

Step 1: Descriptive statistics

  • Describe the shape, central tendency and variability
  • Looking at variables one at a time: mean, median, range, proportion

Purposes

  • Summarize important feature of numerical data
  • Pick up data entry errors (e.g., 3 genders, age 150)
  • Characterize participants
  • Determine distribution of variables
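Step 1 can be sketched in a few lines with Python’s statistics module. The pilot data below are invented, with one deliberately bad entry; note how the range immediately flags the kind of “age 150” error mentioned above:

```python
import statistics

# Invented pilot data, including one deliberate entry error (age 150).
ages = [21, 24, 22, 29, 150, 23, 25, 22]

print(statistics.mean(ages))    # 39.5 (dragged upward by the bad entry)
print(statistics.median(ages))  # 23.5 (robust to it)
print(min(ages), max(ages))     # 21 150 – the range exposes the error at a glance

# Proportion for a categorical variable.
genders = ["F", "M", "F", "F", "M", "F", "F", "M"]
prop_f = genders.count("F") / len(genders)
print(prop_f)  # 0.625
```

The mean/median gap is itself informative: a large difference is a quick hint that the distribution is skewed or contains outliers worth inspecting before any inferential tests.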

Assess assumptions for statistical tests: Some statistical tests, such as a t test, are only valid if certain assumptions about the data hold true. For the t test, the assumptions are that the data for the two groups are from populations with a Normal distribution and that the variances of the two populations are the same. Inherent in these two assumptions is that the study sample represents a random sample from the population. These same assumptions hold for tests such as analysis of variance and multiple linear regression. When these assumptions cannot safely be assumed to hold, alternate, distribution-free methods can be used. These are called non-parametric tests. Examples of these are the Wilcoxon signed rank test and the rank sum test.

Step 2: Analytic/inferential statistics

  • Example: Looking at associations among two or more variables

Purposes

  • Estimate pattern and strength of associations among variables
  • Test hypotheses

Sample Size

Make a rough estimate of how many participants are required to answer the research question. During the design of the study, the sample size calculation will indicate whether the study is feasible. During the review phase, it will reassure reviewers not only that the study is feasible, but also that resources are not being wasted by recruiting more participants than necessary.

Hypothesis-based sample sizes indicate the number of participants necessary to reasonably test the study’s hypothesis. Hypotheses can be proven wrong, but they can never be proven correct, because the investigator cannot test every member of the population of interest. The investigator instead tests the research hypothesis through a sample of the larger population.
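As an illustration of a hypothesis-based calculation (not something prescribed by the original source), the standard normal-approximation formula for comparing two group means is n per group = 2((z_(1-alpha/2) + z_(1-beta)) / d)^2, where d is the expected effect size in standard-deviation units (Cohen’s d). A few lines of Python cover it; the defaults below are conventional choices, not universal rules:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Participants needed per group for a two-group mean comparison.

    effect_size is the expected difference in standard-deviation units
    (Cohen's d); alpha = 0.05 and power = 0.80 are conventional defaults.
    """
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

print(n_per_group(0.5))  # 63 per group for a "medium" effect
print(n_per_group(0.2))  # 393 per group for a "small" effect
```

Notice how sensitive the numbers are to the effect size: halving d roughly quadruples the required n, which is why an honest effect-size estimate matters more than the software used.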

Keys to Success

  • Justify sample size
  • Provide the data necessary to calculate the sample size and state how the estimates were obtained, including desired power, alpha level, one- or two-sided tests, and estimated effect size.

Ethical Considerations

Many times you’ll need to certify your study with your school’s approval board for research on human subjects – pretty much so you don’t repeat the Stanford Prison Experiment.

  • Ethical Principles
    • Respect for persons (autonomy)
    • Non-maleficence (do not harm)
    • Beneficence (do good)
    • Justice (exclusion)
  • Ethical Considerations
    • Scientific validity – is the research scientifically sound and valid?
    • Recruitment – how and by whom are participants recruited?
    • Participation – what does participation in the study involve?
    • Harms and benefits – what are real potential harms and benefits of participating in the study?
    • Informed consent – have the participants appropriately been asked for their informed consent?

Budget

Getting funded is the primary reason for submitting a grant application.

Keys to Success

  • Read instructions (i.e., overhead, issues not covered, if in doubt call the person in charge of the grants)
  • Itemization of costs
    • Personnel (salary and benefits)
    • Consultants (salary)
    • Equipment
    • Supplies (be complete, include cost per item)
    • Travel
    • Other expenses
    • Indirect costs
  • Do not inflate the costs
  • Justify the budget
  • Enquire about the granting agency’s range
  • Review a successful application
  • Start early, pay attention to instructions/criteria
  • Carefully develop research team
  • Justify decisions
  • Have others review your proposal

Bibliography

Present a Works Cited list at the end of your proposal (i.e., a list of only the works you have summarized, paraphrased, or quoted from in the paper).


This basic information was originally available at http://www.ucalgary.ca/ in a sub-page; obviously I’ve added my own editorial and information throughout. I’ve since been unable to locate the original, so it’s here for your enjoyment & enlightenment. If you know where I can attribute it, please contact me and I’ll be happy to do so.

How to Write a Good Survey Questionnaire

Came across this in the thousands of papers I have lying around – thought it might be helpful to those who are trying to design survey questionnaires!

The complex art of question writing has been investigated by many researchers. From their experiences, they offer valuable advice. Below are some helpful hints typical of those that appear most often in texts on question construction. Survey research is traditionally used almost exclusively in quantitative research; however, it can be integrated into qualitative work with some creative thinking.

Keep the language simple & relax your grammar.
Analyze your audience and write on their level. Whenever possible, avoid using technical terms and jargon. Remember: the simpler the better, and if someone can misunderstand something, they certainly will. Relaxing your grammar can make formal questions sound a bit more personable; for instance, feel free to use “who” where formal tradition might suggest that “whom” is, in fact, correct.

Write short questions and a short questionnaire.
Long questions can become ambiguous and confusing. A survey respondent, in trying to understand a long question, may forget part of the question and thus misunderstand it. Above all, draw a distinction between what is essential to know, what is useful, and what is unnecessary. Keep the essential, keep the useful to a minimum, and throw out anything unnecessary.

Always apply the “So what?” and “Who cares?” tests to each question. Remember, however, to keep in mind that you should not leave out questions that would yield necessary data just because it will shorten your survey. If the information is needed, ask the question.

Limit each question to one idea or concept.
A question consisting of more than one idea may confuse the respondent and lead to a pointless answer. Consider this question: “Are you in favor of raising taxes and lowering the deficit?” What the hell would either answer mean?

Don’t write leading questions.
These questions are worded in a manner that suggests an answer. Some respondents may give the answer you are looking for whether or not they think it is right. Think about what you assume when you ask each question. For example, if you ask “What is the best day of the week to schedule the new review meeting?” you’re assuming that everyone taking the questionnaire even wants or needs another meeting.

Remember a perfectly worded question gives the respondent no idea as to which answer you may believe to be correct.

Avoid subjective words & double negatives.
These terms mean different things to different people (hence, subjective…). One person’s “fair” may be another person’s “god awful.” How much is “often” and how little is “seldom?” You can also easily confuse respondents when a question involves two negative words. So tell me, don’t you not like this blog?

Allow for all possible answers.
Respondents who can’t find their answer among your list will be forced to give an invalid reply or, possibly, become frustrated and refuse to complete the survey. Wording the question to reduce the number of possible answers is the first step.

Avoid dichotomous questions. If you cannot avoid them, add a third option, such as no opinion, don’t know, or other. These may not get the answers you need but they will minimize the number of invalid responses. A great number of “don’t know” answers to a question in a fact-finding survey can be a useful piece of information. But a majority of other answers may mean you have a poor question, and perhaps should be cautious when analyzing the results.

Avoid emotional/morally charged questions.
This one kind of speaks for itself. It’s OK to ask about a person’s morals, etc. just don’t write leading questions.

Assure a common understanding.
Write questions that everyone will understand in the same way. Don’t assume that everyone has the same understanding of the facts or a common basis of knowledge. Identify even commonly used abbreviations to be certain that everyone understands.

Start with interesting questions.
Start the survey with questions that are likely to sound interesting and attract the respondents’ attention. Save the questions that might be difficult or threatening for later.

Don’t make the list of choices too long.
If the list of answer categories is long and unfamiliar, it is difficult for respondents to evaluate all of them. Keep the list of choices short.

Avoid difficult recall questions.
People’s memories are increasingly unreliable as you ask them to recall events farther and farther back in time. You will get far more accurate information from people if you ask, “About how many times in the last month have you gone out and seen a movie in a movie theater or drive-in?” rather than, “About how many times last year did you go out and see a movie in a movie theater or drive-in?”

Use closed-ended questions rather than open-ended ones.
Most questionnaires rely on questions with a fixed number of response categories from which respondents select their answers. These are useful because respondents clearly understand the purpose of the question and are limited to a set of choices where one answer is right for them. An open-ended question asks for a written response, for example: “If you don’t like the product, please explain why.” If there are an excessive number of written-response questions, respondents give their answers less attention and the quality of those answers drops.

Put your questions in a logical order.
The issues raised in one question can influence how people think about subsequent questions. It is good to ask a general question and then ask more specific questions. For example, you should avoid asking a series of questions about a free product sample and then asking about the most important factors in selecting a product.

Understand the should-would question.
A “should” question asks what people believe is right; a “would” question asks what they would actually do, and the two answers can differ sharply. Decide which one you need, then formulate your questions and answers to obtain exact information and to minimize confusion.

For example, does “How old are you?” mean on your last or your nearest birthday? Does “What is your (military) grade?” mean permanent or temporary grade? As of what date? By including instructions like “Answer all questions as of (a certain date)”, you can alleviate many such conflicts.

Include a few questions that can serve as checks on the accuracy and consistency of the answers as a whole.

Have some questions that are worded differently, but are soliciting the same information, in different parts of the questionnaire. These questions should be designed to identify the respondents who are just marking answers randomly or who are trying to game the survey (giving answers they think you want to hear). If you find a respondent who answers these questions differently, you have reason to doubt the validity of their entire set of responses. For this reason, you may decide to exclude their response sheet(s) from the analysis.

Organize the pattern of the questions:

  • Place demographic questions at the end of the questionnaire.
  • Have your opening questions arouse interest.
  • Ask easier questions first.
  • To minimize conditioning, have general questions precede specific ones.
  • Group similar questions together.
  • If you must use personal or emotional questions, place them at the end of the questionnaire.
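The ordering checklist above can be sketched as a simple sort key: tag each question with a category and sort by the recommended position. The categories and question texts here are made up for illustration; Python’s `sorted` is stable, so questions within a category keep their original order.

```python
# Recommended pattern from the checklist, encoded as sort positions.
ORDER = {"opening": 0, "easy": 1, "general": 2,
         "specific": 3, "personal": 4, "demographic": 5}

def arrange(questions):
    """Sort (text, category) pairs into the recommended question order."""
    return sorted(questions, key=lambda q: ORDER[q[1]])

survey = [
    ("What is your age?", "demographic"),
    ("Which feature did you use most?", "specific"),
    ("How do you feel about the product overall?", "general"),
    ("Did you enjoy the free sample?", "opening"),
]
for text, _ in arrange(survey):
    print(text)  # opening question first, demographics last
```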

Pretest the questionnaire.
This is the most important step in preparing your questionnaire. The purpose of the pretest is to see just how well your cover letter motivates your respondents and how clear your instructions, questions, and answers are. You should choose a small group of people (from three to ten should be sufficient) you feel are representative of the group you plan to survey. After explaining the purpose of the pretest, let them read and answer the questions without interruption. When they are through, ask them to critique the cover letter, instructions, and each of the questions and answers. Don’t be satisfied with learning only what confused or alienated them. Question them to make sure that what they thought something meant was really what you intended it to mean. Use the above hints as a checklist, and go through them with your pilot test group to get their reactions on how well the questionnaire satisfies these points. Finally, redo any parts of the questionnaire that are weak.

And have fun in the wild world of survey research you freaking rebel!