93% of employers rely on background checks to mitigate hiring risks. But when records surface, most companies struggle to determine whether they represent actual risk or legitimate grounds for rejection. Their screening providers can't offer much help, since regulations require them to stay in their lane and deliver neutral data. As a result, businesses are left interpreting complex legal jargon with gut instinct and decade-old EEOC guidance to make high-stakes decisions.
The good news is criminologists and economists have made huge strides in understanding re-offense risk, with positive findings for everyone involved: most people age out of crime quickly and there are reliable ways to identify them. So why are companies still using outdated criteria and seven-year rules that ignore the science?
For one, the research is siloed in academic journals, far from the decision-makers who need it most. Businesses don't have time to read 40-page papers and decode what the findings mean for them. More importantly, our current system disincentivizes employers from being transparent about their screening criteria, which prevents us from helping them improve their decision-making. I call this the liability problem.
The Liability Problem
In the U.S., all liability for background check decisions falls on employers. If an employee harms someone on the job and the employer never ran a background check, the employer can be sued for negligent hiring. If the employer did run background checks but its hiring decisions had a disparate impact on certain demographics, it can be sued for discrimination (even though the background check data itself reflects a biased legal system).
It is no wonder that businesses refuse to disclose their screening criteria and outcomes in a system that punishes them for doing so. This lack of transparency serves no one: candidates can't understand why they're rejected, employers can't learn from each other, and policymakers can't measure what's actually working (which leads to well-intentioned but ill-informed policies that create more liability for employers and perpetuate existing problems). The result is millions of individual reviewers making subjective, fear-driven decisions that err on the side of rejection.
Why Transparent, Science-Based Tools Work
Decision-support tools grounded in science offer something the current system never could: complete transparency into what factors are being considered and why. When companies can point to a reputable third-party risk assessment that undergoes continuous testing, they can finally be open about their hiring standards. They can prove that decisions are based on defensible evidence, not prejudice, allowing them to stand behind every hire.
This transparency creates shared accountability. We can audit for bias. We can test for fairness across demographic groups. We can retrain the tools continuously on outcome data, rather than relying on the one-time training staff receive. Any model we build on criminal justice data will inherit the flaws and biases of the legal system itself. But identifying and mitigating bias in one model is far more feasible than doing so across millions of humans interpreting the same data.
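To make "audit for bias" concrete, here is a minimal sketch of one common check: comparing clearance rates across demographic groups and flagging any group whose rate falls below four-fifths of the highest group's, the EEOC's long-standing rule of thumb for adverse impact. The group labels and counts below are hypothetical, and a real audit would go further (confidence intervals, intersectional groups, outcome validation), but the core calculation is this simple.

```python
# A toy adverse-impact audit: selection rates per group, compared against the
# highest-rate group using the four-fifths (80%) rule of thumb.
# All group names and numbers are hypothetical, for illustration only.

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number cleared to hire, number screened)."""
    rates = {group: cleared / screened for group, (cleared, screened) in outcomes.items()}
    benchmark = max(rates.values())  # highest clearance rate among the groups
    return {group: rate / benchmark for group, rate in rates.items()}

if __name__ == "__main__":
    screened = {"group_a": (450, 500), "group_b": (380, 500), "group_c": (330, 500)}
    for group, ratio in adverse_impact_ratios(screened).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Even a toy check like this makes the standard visible: anyone can see exactly what is being measured and argue with the threshold, which is precisely what opaque, case-by-case human review never allows.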
Objective, third-party decision support is not a novel concept. The Netherlands, France, Germany, and many other developed countries have entire agencies that help employers make unbiased decisions about hiring risk. The employer often never sees the person's criminal record and can rely on the agency to determine whether there's relevant risk. This approach reduces liability, increases hiring rates, and presents a win-win-win for employers, candidates, and communities. The U.S. isn't ready to adopt a fully closed-record system, but there's much to learn from these international models that work.
The Path Forward
My startup is bridging the gap between businesses and the scientific community with practical decision-support tools. Rather than replacing human judgment, we give companies expertise where theirs falls short, letting them focus on decisions that require nuanced insight. Our solutions are not perfect. No system that estimates human potential or risk can be. But the beauty of science is that it's constantly evolving, allowing us to learn and improve over time. Done right, with proper principles and safeguards, a human + science-based approach is dramatically more effective than the status quo and represents a meaningful leap toward a fairer future of work.
Companies and individuals affected by background check decisions deserve better than the current system. They deserve to know the standards they’re being held to, to see improvement over time, and to have confidence that the process is as fair and effective as we can make it.
The future of fair hiring isn’t about eliminating human judgment. It’s about supercharging it with science. This brings more transparency, more accountability and more growth. That’s a future I’m proud to build toward.
Stay tuned for our startup’s big upcoming launch. If you’re a business interested in piloting our solution for free, reach out by July 16th.
