The field of sport management is more reliant on survey research than are many other sport disciplines. In particular, sport management subdisciplines such as organizational behavior and sport marketing rely on surveys to reach mass target populations of sport firm employees and sport team fans, respectively. Surveys allow sport management researchers to gain desired information about a characteristic, attitude, or behavior within a selected sample or population. Surveys are particularly useful in describing the characteristics of a large population, and they make it feasible to collect data from large samples. Surveys require standardization, which strengthens measurement quality by asking exactly the same questions of all subjects and allowing only a fixed set of responses. Standardization ensures that the survey’s content remains consistent throughout the process of data collection. This quality, however, can make surveys an unattractive option for those who are investigating new phenomena and want to use probing questions based on a respondent’s previous response to an item. As a result, researchers who use surveys must often develop their items to represent the lowest common denominator among the targeted sample or population when assessing people’s attitudes, orientations, circumstances, and experiences.
It is important to note the difference between a survey and a questionnaire. Although the terms are often used interchangeably in sport management research (and, accordingly, within this textbook), the term survey technically refers to the action of collecting information, whereas a questionnaire is only one method of collecting data that involves asking a set of questions. As this distinction implies, survey research options involve much more than a standard questionnaire.
Whether conducted in person or by telephone, interviews allow researchers to incorporate a human element into the data collection process. This human element is magnified in face-to-face interviews, which may allow the interviewer to build rapport with the interviewee—a particularly useful factor when exploring sensitive topics. However, the interviewer must be careful not to influence the respondent to answer in a particular way, either consciously or unconsciously. Furthermore, given the necessity of training interviewers appropriately, the process of interviewing can be very expensive and time consuming. In addition, even though face-to-face interviews may help researchers build rapport with interviewees, respondents may still be untruthful about sensitive or potentially embarrassing information (e.g., recreational drug use); this issue is further complicated by the phenomenon of social desirability, which involves the tendency of individuals to respond in a manner that makes them appear better than they are. Overall, even though face-to-face interviews offer tangible advantages to researchers in certain situations, their expense and practical limitations (e.g., time required) limit their broad use in sport management research.
To reduce the participant withdrawal rate, telephone interviews are often much shorter than face-to-face interviews. Successful telephone interviews typically last no more than 10 minutes, which of course may limit the interviewer’s ability to ask probing and complex questions. The major advantage is that the interviewer need not be in the same location as the interviewee, which in some scenarios significantly reduces the overall cost of administering the survey. However, telephone interviews are subject to selection bias because some members of a specified population may not have a telephone listing, either because they lack a personal telephone or because they request an unlisted number. This issue has been further complicated by the advent of answering machines and telephones with caller ID (i.e., call display) capabilities, since some potential participants may decide not to answer calls from an unfamiliar source. Despite these disadvantages, telephone surveys allow researchers to contact potential respondents over large geographical areas and are quite useful in some research settings (e.g., projects with an international focus).
Questionnaires can be administered in several ways: distributed in person to potential respondents individually or as a group, mailed to potential respondents, or conducted via the Internet. Regardless of how they are distributed, questionnaires require that the participant have at least minimal literacy proficiency in order to complete the survey. For this reason, researchers should consider the typical literacy level of the target population and make their questionnaires as clear and direct as possible.
If the research question can be addressed via a questionnaire, then it is likely to be the researcher’s most efficient and cost-effective option. Direct distribution of paper surveys is particularly effective when participants are grouped together, as in a classroom full of students or an arena full of sport consumers. On the other hand, as with face-to-face interviews, direct distribution of paper surveys requires that the researcher (or a trained representative) be in the same place as the potential respondents. Nonetheless, direct distribution of paper surveys is a popular method of data collection in sport management, both because questionnaires give respondents a stronger feeling of anonymity and because a large number of questionnaires can be administered simultaneously.
As with telephone interviews, a mailed questionnaire offers an economical way to reach a large number of individuals over a broad geographic region; it also offers efficiency, in that a large number of questionnaires can be distributed simultaneously. However, mailed questionnaires have historically received low response rates—often in the range of 10 to 20 percent—and thus can increase the cost of data collection by requiring researchers to send many more surveys than the desired sample size. For example, assuming a response rate of 20 percent, a researcher who wants a sample of 100 respondents would need to send 500 surveys to valid potential respondents. In addition, respondents may not be representative of the entire sample, because those who respond to questionnaires are likely to be people who find the survey to be of particular interest or have strong opinions about the topic, whereas those who know or care very little about the topic may disregard the survey even if they fall within the sample focus.
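The mailing-volume arithmetic above can be sketched as a small calculation. The helper function below is illustrative only (its name and interface are not from the text); it simply divides the desired sample size by the expected response rate and rounds up:

```python
import math

def required_mailings(desired_sample: int, response_rate: float) -> int:
    """Estimate how many surveys to mail so that, at the expected
    response rate, at least `desired_sample` completed responses
    come back. Hypothetical helper for illustration."""
    if not 0 < response_rate <= 1:
        raise ValueError("response_rate must be in the interval (0, 1]")
    # Round up: a fractional survey cannot be mailed.
    return math.ceil(desired_sample / response_rate)

# Example from the text: a desired sample of 100 respondents
# at an assumed 20 percent response rate.
print(required_mailings(100, 0.20))  # → 500
```

At the historically lower end of the range (10 percent), the same target sample of 100 would require mailing 1,000 surveys, which illustrates how sharply low response rates inflate data collection costs.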
The rise of the Internet has given researchers yet another option for collecting data. Internet surveys are often used as a way to administer the survey instrument to a large number of subjects over a broad geographical area. According to Reips (2000), Internet surveys offer several specific advantages and disadvantages when compared with their traditional “paper and pencil” counterparts. Advantages include the following:
1. Ease of access to a large number of demographically and culturally diverse participants
2. Ease of access to very rare, specific participant populations
3. A stronger justification for generalizing findings of Internet experiments to the general population compared to laboratory experiments if convenience samples are avoided
4. Generalizability of findings to more settings and situations, since external validity is considered to be high in Internet experiments compared to laboratory experiments (due to familiarity with the physical environment)
5. Avoidance of time constraints
6. Avoidance of organizational problems (e.g., fewer scheduling difficulties, since thousands of participants may participate simultaneously)
7. Completely voluntary participation
8. Ease of acquiring just the optimal number of participants for achieving high statistical power while being able to draw meaningful conclusions
9. Detectability of motivational confounding
10. Reduction of experimenter effects
11. Reduction of demand characteristics
12. Cost savings in terms of lab space, person hours, equipment, and administration
13. Greater openness of the research process assuming the project remains openly accessible indefinitely on the Internet for documentation purposes
14. Ability to assess the number of nonparticipants via comparison of total webpage viewers to participants
15. Ease of comparing results with results from a locally tested sample
16. Greater external validity through greater technical variance (i.e., equipment malfunctions are likely to be confined to individual users rather than impacting the entire experiment)
17. Ease of access for participants (through bringing the experiment to participants instead of vice versa)
18. Public control of ethical standards (participants, peers or other members of the academic community might look at an experiment and communicate any objections to the researchers)
Disadvantages (and some suggested solutions) include the following:
1. Multiple submissions can be avoided or minimized by collecting personal identification items (e.g., birthdates), checking internal consistency (see chapter 4) as well as date and time consistency of answers, and using techniques such as subsampling (i.e., analyzing a selected sample of a larger sample), participant pools, and provision of passwords.
2. Experimental control may be an issue in some experimental designs but is less problematic when using a between-subjects design with random distribution of participants to experimental conditions.
3. Self-selection can be controlled by using the multiple-site entry technique.
4. Dropout is high in Internet experiments, especially if no financial incentives are given for participation.
5. The reduction or absence of interaction with participants during an Internet experiment creates problems if instructions are misunderstood.
6. The comparative basis for the Internet experiment method is low (i.e., relatively few prior Internet experiments exist against which results and procedures can be compared).
7. External validity of Internet experiments may be limited by their dependence on computers and networks.
As with the limitations of telephone interviews, researchers who use Web-based questionnaires must understand that not everyone is connected to the Internet, and that, as a result, online surveys can be particularly ineffective with certain populations known to have little or no access to the Internet (e.g., people in some rural locations). Furthermore, even if they are connected to the Internet, some respondents are less computer literate than others, and researchers need to ensure that the questionnaire, as presented on the screen, remains consistent across multiple computer platforms and Web browsers. In addition, the development of spam filters may prevent a researcher’s request for participation from reaching its intended audience in the first place. Finally, since e-mail addresses are not necessarily as standardized as telephone numbers are, the sampling of e-mail addresses is challenging. For example, some potential participants have multiple e-mail addresses and thus may potentially be able to respond to the questionnaire several times, which of course could skew the results of the study.