
Understanding Survey Dispositions: How Responses Are Categorized

In survey research, dispositions help categorize participant responses based on their eligibility, behavior, and data quality. These classifications are essential for ensuring data integrity, identifying trends in participant behavior, and maintaining high-quality insights.

At Data Quality Co-op, we use nine dispositions to categorize survey responses. These dispositions fall into three primary categories:

  1. Qualification Dispositions – Determine if a participant qualifies for the survey.
  2. Data Quality Dispositions – Track participant engagement and data quality issues.
  3. Completion Dispositions – Determine if a participant's response is included in the final analysis.
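
To make the taxonomy concrete, the sketch below models the nine dispositions and their three groupings as a small Python enumeration. This is purely illustrative; the names mirror the labels used in this article, not any actual Data Quality Co-op schema.

```python
from enum import Enum


class Disposition(Enum):
    """The nine dispositions described in this article (illustrative names)."""
    OVERQUOTA = "overquota"
    DNQ = "did_not_qualify"
    DUPLICATE = "duplicate"
    ABANDON = "abandon"
    OSQ_FAIL = "osq_fail"
    MANUAL_ISQ_FAIL = "manual_isq_fail"
    AUTOMATED_ISQ_FAIL = "automated_isq_fail"
    FLAGGED_COMPLETE = "flagged_complete"
    QUALIFIED_COMPLETE = "qualified_complete"


# The three primary categories and the dispositions that belong to each.
DISPOSITION_CATEGORIES = {
    "qualification": {Disposition.OVERQUOTA, Disposition.DNQ, Disposition.DUPLICATE},
    "data_quality": {Disposition.ABANDON, Disposition.OSQ_FAIL,
                     Disposition.MANUAL_ISQ_FAIL, Disposition.AUTOMATED_ISQ_FAIL},
    "completion": {Disposition.FLAGGED_COMPLETE, Disposition.QUALIFIED_COMPLETE},
}
```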

Qualification Dispositions

Overquota

A participant is marked as Overquota when they belong to a quota group that has already been filled. Quotas are pre-set limits for specific participant segments based on demographics, behaviors, or other attributes.

For example, if a survey requires 100 participants aged 18–24 and that quota is met, any additional participants in that age group will be categorized as Overquota and prevented from continuing.
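
A quota check of this kind reduces to comparing a running count against a preset limit. The sketch below is a minimal illustration, assuming an in-memory counter keyed by quota group; the group name and limit are taken from the example above.

```python
def check_quota(group: str, counts: dict[str, int], limits: dict[str, int]) -> bool:
    """Return True if the participant's quota group still has room."""
    return counts.get(group, 0) < limits.get(group, 0)


# Example: the 18-24 quota of 100 is already filled,
# so the next participant in that group is marked Overquota.
limits = {"age_18_24": 100}
counts = {"age_18_24": 100}
if not check_quota("age_18_24", counts, limits):
    disposition = "overquota"
```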

Did Not Qualify (DNQ)

A participant is marked as Did Not Qualify when they do not meet the required screening criteria. The survey screener filters out participants who do not match the target audience.

For instance, if a survey is about coffee consumption, a participant who indicates they do not drink coffee will receive a DNQ status and will not proceed further.
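
A screener rule like the coffee example can be expressed as a simple predicate. The sketch below is hypothetical; real screeners usually combine several criteria, and the answer field name here is invented.

```python
def screen(answers: dict):
    """Return a disposition string if the participant fails screening, else None."""
    # The coffee-consumption example from above: non-drinkers do not qualify.
    if answers.get("drinks_coffee") is False:
        return "did_not_qualify"
    return None


print(screen({"drinks_coffee": False}))  # -> "did_not_qualify"
```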

Duplicate

A Duplicate response occurs when a participant attempts to submit multiple survey entries. This can happen due to:

  • Re-accessing the survey link
  • Using multiple devices
  • Refreshing the survey page

Duplicate responses can distort survey results and must be filtered out to maintain data accuracy.
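
One common way to filter duplicates is to key each submission on a stable participant identifier and mark any identifier that has already been seen. The sketch below assumes such an identifier exists on every submission; production systems often layer device fingerprinting and link checks on top of this.

```python
def mark_duplicates(submissions: list[dict]) -> list[dict]:
    """Label repeat submissions from the same participant as duplicates."""
    seen: set[str] = set()
    for sub in submissions:
        pid = sub["participant_id"]  # assumed identifier field
        sub["disposition"] = "duplicate" if pid in seen else None
        seen.add(pid)
    return submissions
```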

Data Quality Dispositions

Abandon

A participant is marked as Abandon when they start a survey but do not complete it. This can happen at any point, including during the screener or main survey.

OSQ (Out-of-Survey Quality Fail)

An OSQ Fail is assigned when a response is flagged due to external participant characteristics or suspicious behavior outside of the survey environment.

Common triggers include:

  • Using disallowed or suspicious devices (e.g., bots, emulators)
  • IP address mismatches or flagged locations
  • External fraud detection systems identifying the participant as a bot

These responses are excluded from the final dataset to maintain data integrity.
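
How these external signals are combined varies by platform. The sketch below is a hedged illustration that assumes a flags dictionary already populated by an external fraud-detection service; the field names and threshold are invented for the example.

```python
def osq_check(flags: dict) -> bool:
    """Return True if pre-survey signals indicate an OSQ Fail."""
    return any([
        flags.get("is_emulator", False),    # disallowed or suspicious device
        flags.get("ip_mismatch", False),    # IP address inconsistent with stated location
        flags.get("bot_score", 0.0) > 0.9,  # external system rates participant as a likely bot
    ])
```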

Manual ISQ (Manual In-Survey Quality Fail)

A Manual ISQ Fail occurs when a researcher manually removes a response due to quality concerns in the participant’s in-survey behavior.

Common reasons for manual ISQ fails:

  • Nonsensical open-ended responses
  • Response inconsistencies
  • Patterns that automated checks missed

While manual reviews help catch nuanced issues, they are slower than automated checks and can introduce subjectivity.

Automated ISQ (Automated In-Survey Quality Fail)

An Automated ISQ Fail is flagged in real time when a participant exhibits clear low-quality behaviors.

Automated ISQ checks may trigger on:

  • Speeding (completing the survey too quickly)
  • Straight-lining (selecting the same response for every question)
  • Failing honeypot questions (trap items designed to catch inattentive participants)

These responses are excluded from the final analysis to ensure data reliability.
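
These checks lend themselves to simple real-time rules. The sketch below is a minimal illustration; the timing threshold is an arbitrary placeholder, not an actual Data Quality Co-op setting.

```python
def automated_isq_check(duration_seconds: float,
                        grid_answers: list[int],
                        honeypot_passed: bool,
                        min_duration: float = 120.0) -> bool:
    """Return True if the response should be an Automated ISQ Fail."""
    speeding = duration_seconds < min_duration                          # finished too quickly
    straight_lining = len(grid_answers) > 1 and len(set(grid_answers)) == 1  # same answer every row
    return speeding or straight_lining or not honeypot_passed
```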

Completion Dispositions

Flagged Complete

A Flagged Complete response contains quality flags but is still included in the final analysis.

These flags indicate minor concerns, such as:

  • Borderline speeding
  • Slight inconsistencies in responses

Researchers may review flagged completes separately or weight them differently during analysis.

Qualified Complete

A Qualified Complete is a high-confidence survey response that meets all quality checks.

A response is categorized as a Qualified Complete if it:
✔ Passes all fraud and quality checks
✔ Meets survey criteria (e.g., demographic/behavioral requirements)
✔ Shows no signs of low-quality behaviors (e.g., rushing, inconsistent answers)

These responses are included in the final analysis as reliable data points.
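
Tying the completion dispositions together: a finished, qualified response that passes every check cleanly is a Qualified Complete, while one that finishes with only minor flags is a Flagged Complete. The sketch below shows that final step in simplified form, assuming earlier checks have already produced a hard-fail indicator and a list of minor flags.

```python
def final_disposition(hard_fail: bool, minor_flags: list[str]) -> str:
    """Classify a finished, qualified response (assumes earlier checks already ran)."""
    if hard_fail:
        return "automated_isq_fail"   # or another data-quality disposition
    if minor_flags:                   # e.g., ["borderline_speeding"]
        return "flagged_complete"
    return "qualified_complete"
```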