Device Failure Explanations

Below is a list of common device integrity and fraud indicators detected by the DQC Tools.
Each section outlines:

  • What it is
  • How we know
  • Why it’s a problem
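
Where it helps, each description is followed by a short, simplified code sketch of the kind of check involved. The sketches are illustrative approximations with placeholder names, not the production implementation.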

Wrapper Tampering

  • What it is: The participant attempted to alter or delete local browser data used by the Quality Tools.
  • How we know: Our script detects changed or missing keys in local or session storage that should only be set by the system itself (a simplified sketch follows this list).
  • Why it’s a problem: It suggests intentional manipulation to reset identity, erase device history, or bypass fraud checks—behaviors consistent with deceptive participation.
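
As an illustration of this kind of storage check, here is a minimal TypeScript sketch. The key names (dqc_device_id, dqc_session, dqc_checksum) and the checksum scheme are hypothetical stand-ins, not the actual Quality Tools implementation.

```ts
// Keys the quality script is expected to have written (hypothetical names).
const EXPECTED_KEYS = ["dqc_device_id", "dqc_session"];

function simpleChecksum(s: string): string {
  let h = 0;
  for (let i = 0; i < s.length; i++) h = (h * 31 + s.charCodeAt(i)) | 0;
  return h.toString(16);
}

function detectWrapperTampering(): boolean {
  // A deleted system-written key is the simplest tampering signal.
  for (const key of EXPECTED_KEYS) {
    if (localStorage.getItem(key) === null) return true;
  }
  // A stored checksum catches edits, not just deletions.
  const id = localStorage.getItem("dqc_device_id") ?? "";
  return localStorage.getItem("dqc_checksum") !== simpleChecksum(id);
}
```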

Bot Detection

  • What it is: The request behavior matches that of an automated script rather than a real human.
  • How we know: We identify programmatic patterns such as identical timing intervals, missing or inconsistent browser fingerprints, and known automation tools (e.g., Selenium, Playwright); two of these signals are illustrated below.
  • Why it’s a problem: Bots can flood surveys with fake responses, degrading data quality and wasting incentives meant for real people.
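
A simplified sketch of two such signals, for illustration only: the standard navigator.webdriver flag that Selenium- and Playwright-driven browsers typically expose, and suspiciously uniform inter-event timing. The variance threshold is invented for the example.

```ts
function looksAutomated(eventTimestampsMs: number[]): boolean {
  // WebDriver-controlled browsers are supposed to set this flag.
  if (navigator.webdriver === true) return true;

  if (eventTimestampsMs.length < 5) return false;
  const gaps = eventTimestampsMs.slice(1).map((t, i) => t - eventTimestampsMs[i]);
  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  const variance = gaps.reduce((a, g) => a + (g - mean) ** 2, 0) / gaps.length;
  // Human input timing is noisy; near-identical intervals suggest a script.
  return variance < 1;
}
```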

Tor Network Usage

  • What it is: The device is routing its traffic through the Tor network to conceal identity or location.
  • How we know: The IP address matches a known Tor exit node; the Tor Project publishes and frequently updates the list of exit nodes (see the sketch below).
  • Why it’s a problem: Tor usage masks true location and identity, making it impossible to validate geography-based quotas or ensure unique participation.
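
A minimal server-side sketch, assuming Node 18+ for the built-in fetch. The URL is the Tor Project's public bulk exit list; caching and refresh cadence are left to the implementation.

```ts
// Public, frequently updated list of Tor exit node IPs.
const EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist";

async function isTorExitNode(clientIp: string): Promise<boolean> {
  const res = await fetch(EXIT_LIST_URL); // one IP address per line
  const exitNodes = new Set((await res.text()).split("\n").map(l => l.trim()));
  return exitNodes.has(clientIp);
}
```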

Request Blocked

  • What it is: The browser actively prevented the Quality Tools from running.
  • How we know: The tools’ scripts failed to execute due to ad blockers, script blockers, or user privacy extensions (one detection approach is sketched below).
  • Why it’s a problem: When blocked, no device signals can be collected—resulting in a default device score of 0 and eliminating our ability to assess respondent trustworthiness.
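
One way such blocking can be detected, sketched here with placeholder names: load the script and wait briefly for the global it is expected to define. The /quality-tools.js path and the __dqc global are hypothetical.

```ts
function detectBlockedScript(timeoutMs = 3000): Promise<boolean> {
  return new Promise(resolve => {
    const tag = document.createElement("script");
    tag.src = "/quality-tools.js";     // placeholder path
    tag.onerror = () => resolve(true); // the request itself was blocked
    document.head.appendChild(tag);
    setTimeout(() => {
      // Loaded but never initialized (e.g., neutered by an extension)
      // also counts as blocked.
      resolve((window as any).__dqc === undefined);
    }, timeoutMs);
  });
}
```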

Developer Tools Open

  • What it is: The browser’s developer console is open during the survey.
  • How we know: The system detects signals that the DevTools interface is open or actively inspecting scripts (one common heuristic is sketched below).
  • Why it’s a problem: It often indicates someone is examining or modifying survey code, a red flag for data tampering or automated scraping.
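
One widely known heuristic, shown purely as an illustration: a docked DevTools panel inflates the gap between the window's outer and inner dimensions. By itself this is unreliable (zoom levels and browser toolbars also affect it), which is why production checks combine several signals.

```ts
function devToolsLikelyOpen(thresholdPx = 160): boolean {
  // When DevTools is docked, the viewport shrinks but the window does not.
  const widthGap = window.outerWidth - window.innerWidth;
  const heightGap = window.outerHeight - window.innerHeight;
  return widthGap > thresholdPx || heightGap > thresholdPx;
}
```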

Timezone Mismatch

  • What it is: The device’s system timezone does not align with its IP-based location.
  • How we know: The browser’s reported time offset conflicts with the region identified by its network address (the comparison is sketched below).
  • Why it’s a problem: The mismatch may indicate a proxy, VPN, or fake location—casting doubt on eligibility for geo-targeted surveys.
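
A minimal sketch of the comparison, assuming the IP-derived timezone comes from a separate server-side geolocation lookup. A production check would compare UTC offsets with some tolerance rather than exact IANA names, since neighboring zones can be legitimate.

```ts
function browserTimeZone(): string {
  // Standard Intl API, e.g. "Europe/Berlin".
  return Intl.DateTimeFormat().resolvedOptions().timeZone;
}

function timezoneMismatch(ipTimeZone: string): boolean {
  // ipTimeZone would come from an IP geolocation service on the server.
  return browserTimeZone() !== ipTimeZone;
}
```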

Virtual Machine

  • What it is: The device is running inside virtualized hardware rather than a normal computer or phone.
  • How we know: Hardware and GPU signatures indicate a virtualized environment (e.g., VMware, VirtualBox); see the WebGL sketch below.
  • Why it’s a problem: VMs are frequently used to manage multiple fake accounts or bypass bans; legitimate respondents rarely need one.
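
A sketch using the standard WEBGL_debug_renderer_info extension, which often exposes the virtualization stack in the renderer string. The substring list is illustrative and far from complete.

```ts
function vmLikeRenderer(): boolean {
  const gl = document.createElement("canvas").getContext("webgl");
  if (!gl) return false;
  const info = gl.getExtension("WEBGL_debug_renderer_info");
  if (!info) return false;
  const renderer = String(gl.getParameter(info.UNMASKED_RENDERER_WEBGL));
  // Software or hypervisor renderers commonly name themselves.
  return /vmware|virtualbox|llvmpipe|swiftshader/i.test(renderer);
}
```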

Location Spoofing

  • What it is: The device’s reported location does not match its real network location.
  • How we know: Discrepancies exist between geolocation data (GPS or browser location) and the IP-based country or region; the distance check is sketched below.
  • Why it’s a problem: Spoofing hides a participant’s true origin and can invalidate studies that rely on regional accuracy.
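
A sketch of the distance check: the great-circle (haversine) distance between browser-reported and IP-derived coordinates. The 500 km threshold is arbitrary, chosen only for illustration.

```ts
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a)); // Earth radius ≈ 6371 km
}

function locationSpoofSuspected(gps: { lat: number; lon: number },
                                ip: { lat: number; lon: number }): boolean {
  return haversineKm(gps.lat, gps.lon, ip.lat, ip.lon) > 500;
}
```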

Identity Reuse Tampering

  • What it is: The browser’s identity data was manually changed to appear as a different participant.
  • How we know: Storage tokens tied to a single participant ID show signs of manual modification or deletion (a token-signing sketch follows this list).
  • Why it’s a problem: This allows fraudulent respondents to take multiple surveys as “new” users, inflating sample counts.
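
One common design that makes such edits detectable, sketched in Node-style TypeScript: sign the participant token with an HMAC when issuing it, so any hand-edited token fails verification. The secret handling and token layout here are illustrative.

```ts
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.TOKEN_SECRET ?? "dev-only-secret"; // placeholder

function signParticipantId(id: string): string {
  const mac = createHmac("sha256", SECRET).update(id).digest("hex");
  return `${id}.${mac}`;
}

function tokenTampered(token: string): boolean {
  const [id, mac] = token.split(".");
  if (!id || !mac) return true; // malformed or truncated token
  const expected = createHmac("sha256", SECRET).update(id).digest("hex");
  return mac.length !== expected.length ||
         !timingSafeEqual(Buffer.from(mac), Buffer.from(expected));
}
```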

Tampering Detection

  • What it is: The browser environment shows signs of direct manipulation or interference with integrity checks.
  • How we know: Internal validation scripts detect modified JavaScript functions or disabled security hooks (one such probe is sketched below).
  • Why it’s a problem: Such tampering indicates an effort to evade detection, undermine quality checks, or falsify device signals.
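
A sketch of one such probe: built-in browser functions stringify to "[native code]", so a built-in that has been overridden by injected JavaScript betrays itself when stringified. This probe is itself spoofable, which is why real checks layer several of them.

```ts
function builtinOverridden(fn: Function): boolean {
  // Genuine built-ins stringify as "function name() { [native code] }".
  return !/\{\s*\[native code\]\s*\}/.test(Function.prototype.toString.call(fn));
}

// Probe APIs commonly patched by spoofing extensions:
const suspicious =
  builtinOverridden(HTMLCanvasElement.prototype.toDataURL) ||
  builtinOverridden(navigator.geolocation.getCurrentPosition);
```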

High Activity Device

  • What it is: A device that has sent an unusually large number of requests in a short period.
  • How we know: The system tracks device fingerprints and flags those in the top 2% of total activity across all customers in the past 24 hours (the percentile cut is sketched below).
  • Why it’s a problem: Excessive participation is a hallmark of professional or automated survey takers, skewing data integrity and fairness.
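
A simplified sketch of the percentile cut: count requests per fingerprint over the window and flag everything at or above the 98th percentile. Real storage and sliding-window bookkeeping are omitted.

```ts
function highActivityDevices(countsByFingerprint: Map<string, number>): Set<string> {
  const counts = [...countsByFingerprint.values()].sort((a, b) => a - b);
  if (counts.length === 0) return new Set();
  // Value at the 98th percentile; everything at or above it is "top 2%".
  const cutoff = counts[Math.floor(counts.length * 0.98)];
  return new Set(
    [...countsByFingerprint].filter(([, n]) => n >= cutoff).map(([fp]) => fp)
  );
}
```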

Incognito Mode

  • What it is: The device is operating in private or incognito mode.
  • How we know: The browser restricts persistent storage and reports limited fingerprinting capabilities (one heuristic is sketched below).
  • Why it’s a problem: It hides long-term device identifiers, preventing us from recognizing repeat or fraudulent users across sessions.
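
A sketch of one common heuristic: private windows typically receive a much smaller storage quota than a normal profile. The 120 MB cutoff is invented for the example; actual quotas vary by browser and version, so this is a weak signal on its own.

```ts
async function probablyIncognito(): Promise<boolean> {
  if (!navigator.storage?.estimate) return false; // API unavailable
  const { quota } = await navigator.storage.estimate();
  // Unusually small quotas are characteristic of private windows.
  return typeof quota === "number" && quota < 120 * 1024 * 1024;
}
```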

Public VPN

  • What it is: The device is connected through a public VPN provider.
  • How we know: The IP address belongs to a known VPN service such as NordVPN, ProtonVPN, or Surfshark (a range-matching sketch follows this list).
  • Why it’s a problem: VPNs obscure location and identity, enabling participants to circumvent country restrictions or duplicate participation.
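
A server-side sketch of the range match for IPv4. The CIDR ranges shown are documentation placeholders, not real provider ranges; actual lists come from IP-intelligence feeds and change frequently.

```ts
const VPN_RANGES = ["203.0.113.0/24", "198.51.100.0/24"]; // placeholder ranges

function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, o) => ((acc << 8) | parseInt(o, 10)) >>> 0, 0);
}

function inCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}

const isPublicVpn = (ip: string) => VPN_RANGES.some(r => inCidr(ip, r));
```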

Privacy Settings

  • What it is: The browser’s privacy or tracking settings are unusually strict.
  • How we know: The browser disables or blocks APIs such as localStorage, canvas fingerprinting, or third-party cookies (probed in the sketch below).
  • Why it’s a problem: While many privacy-conscious users are legitimate, these settings are often indistinguishable from deliberate evasion of fraud checks, making it harder to validate genuine respondents.
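
A sketch that probes a few APIs strict settings commonly disable. Each failed probe is one signal; none alone proves evasion.

```ts
function privacyProbeSignals(): string[] {
  const signals: string[] = [];
  try {
    localStorage.setItem("__probe", "1"); // throws when storage is blocked
    localStorage.removeItem("__probe");
  } catch {
    signals.push("localStorage blocked");
  }
  if (!navigator.cookieEnabled) signals.push("cookies disabled");
  // Some canvas-blocking extensions make 2D contexts unavailable.
  if (!document.createElement("canvas").getContext("2d")) {
    signals.push("canvas blocked");
  }
  return signals;
}
```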

Proxy

  • What it is: The device routes traffic through a proxy server instead of connecting directly.
  • How we know: The IP range and connection metadata indicate use of an intermediary server (a header-based sketch follows the list of proxy types).
  • Why it’s a problem: Proxies conceal the real IP address and are commonly used to disguise location or scale fraudulent participation.

Types:

  • Residential Proxy: Uses real consumer IPs, harder to detect, often used to bypass restrictions.
  • Datacenter Proxy: Hosted on cloud infrastructure, easier to detect, frequently used for bulk automation.
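
A minimal server-side sketch of the header-based part of such a check. Only non-anonymizing ("transparent") proxies reveal themselves this way; anonymous proxies strip these headers, and legitimate infrastructure such as corporate gateways and load balancers also sets some of them, so headers are just one input alongside IP-range reputation.

```ts
// Header names that forwarding infrastructure commonly adds
// (Node-style: header names are lowercased).
const PROXY_HEADERS = ["via", "x-forwarded-for", "forwarded", "proxy-connection"];

function hasProxyHeaders(headers: Record<string, string | undefined>): boolean {
  return PROXY_HEADERS.some(h => headers[h] !== undefined);
}
```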