What is a Survey? Definition, Examples & Best Practices
A survey is a method of collecting information from a group of people through a set of standardised questions. Researchers, businesses, and organisations use surveys to measure opinions, behaviours, and experiences across a population. Whether run online, by phone, or in person, surveys produce comparable data because every respondent answers the same questions in the same way. The resulting data can reveal patterns, track change over time, or support decisions that would otherwise rest on guesswork.
Survey Definition
A survey is a data-collection method that presents a fixed set of questions to a sample of respondents and records their answers in a structured way. The goal is to draw conclusions about a larger population based on the responses gathered from that sample.
The word "survey" covers both the instrument (the questionnaire) and the process of administering it and collecting responses. A customer satisfaction poll sent after a purchase, an annual employee engagement study, and a government census are all surveys. What they share is a consistent question format applied to multiple respondents so the answers can be compared, counted, and analysed.
Surveys differ from interviews in one important way: in an interview, a researcher can probe and follow up; in a survey, the questions are fixed. That constraint is also a strength. Because everyone sees the same questions, the data is directly comparable across hundreds or thousands of respondents.
Types of Surveys: Online, Telephone, In-Person, Postal
Online surveys
Online surveys are the most common format today. They are distributed by email, embedded on websites, or shared via social media. Costs are low, turnaround is fast, and response data feeds directly into analysis tools. MindProbe lets teams build and distribute online surveys with branching logic so respondents only see questions relevant to them. The main limitation of online surveys is self-selection: respondents must choose to participate, which can skew results toward people with strong opinions on the topic.
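The branching logic mentioned above can be sketched as a simple lookup from a respondent's answer to the next question to show. The question IDs, wording, and routing table below are hypothetical illustrations, not MindProbe's actual data model:

```python
# Minimal sketch of survey branching (skip) logic: the next question
# shown depends on the answer just given. All IDs and wording are
# hypothetical examples.
questions = {
    "q1": {"text": "Have you used the product in the last month?",
           "next": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "How satisfied were you with it?", "next": {}},
    "q3": {"text": "What stopped you from using it?", "next": {}},
}

def next_question(current_id, answer):
    """Return the ID of the next question, or None when the path ends."""
    return questions[current_id]["next"].get(answer)

print(next_question("q1", "no"))   # q3
print(next_question("q1", "yes"))  # q2
```

Because non-users are routed straight to "q3", they never see the satisfaction question, which is the friction-reducing behaviour described above.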
Telephone surveys
Telephone surveys are conducted by a researcher calling respondents directly. They tend to produce higher response rates than online surveys, particularly with older populations less likely to engage online. Computer-assisted telephone interviewing (CATI) systems automate dialling and data capture. Telephone surveys are more expensive per response than online methods and have become harder to execute as caller ID filters unknown numbers.
In-person surveys
In-person surveys are administered face-to-face, either by an interviewer reading questions aloud or by giving respondents a paper form to complete on the spot. Exit polls, point-of-sale feedback forms, and street intercept studies fall into this category. The presence of an interviewer can improve completion rates and allow clarification of confusing questions, but it also introduces interviewer bias.
Postal surveys
Postal surveys send a printed questionnaire to respondents by post. They work well for populations with low internet access and can achieve reasonable response rates when paired with a pre-paid return envelope and follow-up reminders. The lag between sending and receiving responses makes postal surveys slow and expensive relative to digital methods.
When Surveys Are Used
Surveys are used whenever a team or organisation needs to measure something across a population and aggregate the results.
Common uses include:
- Customer research: satisfaction scores, net promoter scores (NPS), product feedback, and churn analysis
- Employee research: engagement, wellbeing, exit interviews, and pulse checks
- Market research: brand awareness, purchase intent, and competitive benchmarking
- Academic research: attitude and behaviour measurement in social science, psychology, and public health
- Public policy: population health monitoring, census data, and service evaluation
According to Pew Research Center (2023), surveys remain the primary data source for tracking public opinion in democratic societies, with national polling organisations running tens of thousands of surveys each year.
The decision to use a survey rather than an interview, focus group, or observational study usually comes down to scale. When you need comparable data from hundreds of people, a survey is the practical choice. For deep exploratory work with a handful of individuals, qualitative methods tend to be more appropriate.
Key Elements of a Good Survey
A well-designed survey has six core elements:
- A clear objective. Every question should connect to a specific thing you want to know. If you cannot articulate what decision a question will inform, it probably does not belong in the survey.
- A defined population and sample. A survey is only as useful as its sample is representative. Random sampling, where every member of the target population has an equal chance of being selected, produces the most generalisable results. Convenience samples are quicker but introduce bias.
- Precise, neutral question wording. "How satisfied are you with our service?" produces cleaner data than "Don't you think our service has improved?" Leading questions, double negatives, and jargon all distort responses.
- An appropriate scale. Likert scales (strongly agree to strongly disagree), numeric rating scales, and binary yes/no questions each suit different kinds of measurement. The scale should match the precision the data actually needs.
- A manageable length. Completion rates drop as surveys get longer. Bain & Company research on B2B survey design found that completion rates fell sharply beyond 10 to 12 minutes of estimated completion time. MindProbe's question-timing feature lets designers see estimated completion time before launch, which helps keep surveys within an acceptable range.
- Logical flow. Questions on the same topic should appear together, and harder or more sensitive questions should come after easier ones. Skip logic routes respondents to relevant questions based on earlier answers, keeping the experience relevant and reducing unnecessary friction.
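The random-sampling idea in the list above, giving every member of the population an equal chance of selection, can be sketched with Python's standard library. The population list here is a hypothetical sampling frame:

```python
import random

# Simple random sampling: draw 500 respondents from a frame of 5,000,
# without replacement, so every member has an equal chance of selection.
population = [f"customer_{i}" for i in range(1, 5001)]  # hypothetical frame

random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=500)

print(len(sample))       # 500
print(len(set(sample)))  # 500 -- no duplicates, sampled without replacement
```

A convenience sample, by contrast, would be whoever happens to be reachable, which is faster but introduces the bias noted above.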
Surveys vs Questionnaires
The terms "survey" and "questionnaire" are often used interchangeably, but they refer to different things.
A questionnaire is the document: the list of questions. A survey is the full process: designing the questionnaire, selecting a sample, collecting responses, and analysing the data. Every survey uses a questionnaire, but a questionnaire is not automatically a survey. A medical intake form is a questionnaire; an annual patient experience study is a survey.
The distinction matters when reporting research. Saying "respondents completed a questionnaire" is accurate; saying "a survey of 500 customers" implies a full data-collection process with sampling and analysis.
For a deeper look at what separates these two concepts, see our full comparison of surveys and questionnaires.
Frequently Asked Questions
What is a survey?
A survey is a set of questions given to a group of people to collect information. The responses are recorded and analysed to understand the group's opinions, behaviours, or characteristics. Surveys can be paper-based or digital, short or long, and targeted at consumers, employees, students, or the general public. The defining feature is that every respondent answers the same questions, making the data comparable.
What is the difference between a survey and a questionnaire?
A questionnaire is the list of questions. A survey is the full research process that includes the questionnaire, a defined sample, data collection, and analysis. Think of the questionnaire as one component of a survey. You can hand someone a questionnaire without conducting a survey, but you cannot run a survey without a questionnaire.
What is a good survey response rate?
It depends on the survey type and the relationship with respondents. Internal employee surveys typically achieve 60 to 80 percent when leadership visibly supports them. External customer surveys average 10 to 30 percent. Academic and market research surveys sent to cold lists often see 5 to 15 percent. A low response rate is not automatically a problem as long as the respondents who did reply are representative of the target population.
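The percentages above are simple ratios of completed responses to invitations sent. A minimal sketch, using hypothetical figures:

```python
# Response rate = completed responses / invitations sent, as a percentage.
def response_rate(completed, invited):
    if invited <= 0:
        raise ValueError("invited must be positive")
    return 100 * completed / invited

# Hypothetical campaign: 180 completions from 900 emailed invitations,
# within the 10-30% range typical of external customer surveys.
print(f"{response_rate(180, 900):.1f}%")  # 20.0%
```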
How long should a survey be?
Most practitioners aim for surveys that take 5 to 10 minutes to complete, which typically means 10 to 20 questions depending on question type. Open-ended questions take longer to answer than rating scales. Bain & Company found completion rates fall noticeably beyond 12 minutes. The better question is: what is the minimum number of questions needed to answer your research objective?
Can surveys be biased?
Yes. Survey bias takes several forms. Sampling bias occurs when the respondents do not represent the population. Response bias occurs when respondents answer in ways they think are expected rather than honestly. Question order and wording can also push respondents toward particular answers. Good survey design minimises these risks through neutral wording, random sampling, and piloting questions before full distribution.
What is an online survey?
An online survey is a questionnaire distributed and completed over the internet, typically via email link, website embed, or direct URL. Responses are collected and stored digitally, eliminating manual data entry. Online surveys are faster and cheaper to run than paper or telephone methods. Platforms like MindProbe allow teams to set up conditional logic, randomise answer options, and export results directly to analysis tools.