
Your Survey Data Might Be Lying to You — And a UHD Professor Just Proved It

Most organizations trust their surveys. They should think twice.

A new study by Dr. Zhonghao Wang, Assistant Professor of Management at the Marilyn Davies College of Business (MDCOB) at the University of Houston–Downtown, is challenging one of the most widely accepted best practices in organizational research — and the findings have real consequences for how companies hire, assess, and make decisions about their people.

The research, published in Organizational Research Methods, recently earned Dr. Wang the prestigious Joyce and Robert Hogan Award for Personality and Work Performance, one of the most competitive recognitions in applied personality research, awarded annually to work that demonstrates excellence and innovation in understanding how personality shapes outcomes at work.


The Problem With "Best Practices"

For decades, researchers and HR professionals have relied on a go-to solution for cleaner survey data: use multiple sources. Instead of asking employees to rate both their own personality and their job performance, you split the measurement — employees self-report personality while supervisors evaluate performance. This approach, called multi-source data collection, is widely trusted to reduce a problem known as common method bias, where using a single survey inflates how related two things appear to be.

It's a reasonable fix. But Dr. Wang's study reveals a blind spot hiding inside it.

The culprit is insufficient effort responding (IER): what happens when survey respondents don't take questions seriously, rushing through or answering at random without actually reading the items. The critical finding is that if a person tends to respond this way, the tendency follows them across sources. This means that even when personality and performance are measured separately, low-quality responses tied to the same underlying person can still distort the data and inflate relationships that may not actually exist.
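The inflation mechanism can be illustrated with a toy simulation. This is not the study's actual analysis, and every number in it is made up: it simply shows that if a subset of respondents carry a stable careless style (here, straightlining at a personal favorite scale point) into both surveys, two constructs with a true correlation of zero can appear substantially related.

```python
import random

random.seed(42)
N = 10_000  # hypothetical respondents

def pearson(xs, ys):
    """Pearson correlation, computed directly from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

personality, performance = [], []
for _ in range(N):
    true_p = random.gauss(3.0, 0.7)   # true personality score on a 1-5 scale
    true_j = random.gauss(3.0, 0.7)   # true performance, independent of personality
    careless = random.random() < 0.2  # trait-like IER: careless on BOTH surveys
    if careless:
        anchor = random.choice([1, 2, 3, 4, 5])  # habitual response style
        personality.append(anchor)    # straightlined on survey 1
        performance.append(anchor)    # same style recurs on survey 2
    else:
        personality.append(true_p)
        performance.append(true_j)

r = pearson(personality, performance)
print(f"observed r = {r:.2f}")  # well above zero despite no true relationship
```

Even though the attentive majority contributes no correlation at all, the careless minority pulls the observed coefficient well above zero, which is exactly the kind of "false positive" the article describes.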

"What was most surprising," Dr. Wang explained, "is that using multiple data sources — often considered a best practice — does not fully safeguard against bias."


Where the Idea Came From

The research was sparked by a conversation with Dr. Wang's doctoral advisor, Jason Huang, during their time at Michigan State University. The key insight that drove everything forward: insufficient effort responding might not be random noise. It might reflect a stable individual trait — a consistent tendency in how a person approaches survey-taking altogether.

That reframe changed everything. If IER is a trait, then splitting your data sources doesn't eliminate the problem. You're still dealing with the same person, with the same underlying tendencies, on both ends of the data.

The research was developed in collaboration with colleagues Dr. Wang met at academic conferences, who brought complementary expertise in methodology and organizational theory. After several years of theory development, data collection, analysis, and peer review, the study cleared the bar for scientific publication and went on to earn one of its field's top honors.


What This Means for Your Organization

If your company uses personality assessments, engagement surveys, 360-degree reviews, or any multi-source evaluation tool, Dr. Wang's work has three direct takeaways:

  1. Keep using multiple data sources — it's still a valuable practice and better than a single source alone.
  2. Build in attention checks — incorporate data quality safeguards throughout your surveys to detect and flag low-effort responses before they contaminate your analysis.
  3. Interpret results with healthy skepticism — even clean-looking correlations across data sources can reflect "false positives" driven by IER, not real relationships.
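The second takeaway above can be sketched in code. This is a minimal illustration, not the study's screening procedure: the thresholds, item indices, and the `flag_low_effort` helper are all hypothetical, combining two common IER screens, a longest-run ("straightlining") check and an instructed attention-check item.

```python
def longstring(responses):
    """Length of the longest run of identical consecutive answers."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def flag_low_effort(responses, attention_items, max_run=8):
    """Flag a respondent who straightlines or fails an attention check.

    attention_items maps item index -> expected answer for instructed
    items such as "Select 'Strongly disagree' for this question."
    The max_run threshold is illustrative, not a published cutoff.
    """
    failed_check = any(responses[i] != want
                       for i, want in attention_items.items())
    return failed_check or longstring(responses) >= max_run

# Example: a 12-item Likert survey with one instructed item at index 5.
careful  = [4, 3, 5, 2, 4, 1, 3, 4, 2, 5, 3, 4]
careless = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
checks = {5: 1}  # item 5 instructs respondents to answer 1

print(flag_low_effort(careful, checks))   # False: passes both screens
print(flag_low_effort(careless, checks))  # True: straightlines and fails check
```

Flagged cases can then be reviewed or excluded before correlations are computed, so low-effort responses never reach the analysis stage.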

Beyond methodology, Dr. Wang also recommends that organizations make surveys feel meaningful to participants. When employees understand why their input matters — and when participation is genuinely voluntary without social pressure to comply — data quality improves. The goal is thoughtful engagement, not compliance.

These lessons apply broadly: any sector that uses survey data for hiring decisions, performance evaluation, or workforce planning should pay attention.


What's Next

Dr. Wang's current research is moving into new territory: how employees make decisions with the help of artificial intelligence. As AI tools become embedded in everyday work tasks, understanding how people learn from and adapt to AI-assisted decision environments is becoming increasingly important. That work uses experimental methods to track performance over time — and notably, incorporates rigorous data quality protocols informed by the very findings that just earned the Hogan Award.


Recognized. Rigorous. Right Here in Houston.

The Joyce and Robert Hogan Award isn't just a credential — it's a signal that the questions Dr. Wang is asking matter, and that the answers have consequences for real organizations making real decisions about real people.

That kind of work is happening at MDCOB. And it's just getting started.