The State of AI in Hiring
AI is creeping into nearly every corner of HRtech and recruitment, promising efficiency, better signal, and shorter time-to-hire by cutting the hours spent manually screening resumes. But it also raises a big question: how do we use it fairly and safely, without handing decision-making to a black-box algorithm we can’t audit or trust?
AI in recruiting is just taking off. Here’s what the current landscape looks like:
About 87 percent of companies now use AI tools somewhere in their hiring workflows: screening, scheduling, candidate matching, etc.
Recruiters say time savings are a major win. Many report AI coming through on its promises of speeding up resume screening, reducing time‑to‑hire, and cutting costs per hire.
On the candidates' side, though, there is caution. About two-thirds of U.S. adults say they would not apply for a job that uses AI to make hiring decisions, citing fairness and transparency concerns.
AI in recruiting is growing fast. The tools are better. People are more familiar. But regulation and public trust are scrambling to catch up.
A Concerned Public and Government Response
Governments and the public are worried (rightfully so) about what might go wrong long term. States are stepping in with laws targeting AI in hiring, especially where there is high risk of bias, opaque decision making, or unfair discrimination.
Examples include:
Illinois: The Artificial Intelligence Video Interview Act requires employers to notify applicants, explain what AI evaluates, obtain consent, and allow video deletion upon request. A new amendment (2026) expands notice requirements to any AI use in employment decisions.
New York City: Local Law 144 requires auditing of automated employment decision tools (AEDTs) annually for race and gender bias, and posting transparency notices.
California: The Civil Rights Council prohibits AI systems with discriminatory selection criteria, and enforces transparency expectations.
Colorado: Developing laws focused on algorithmic bias, transparency, and notifications in employment.
Themes across these laws: transparency, responsibility, notice & consent, and auditability.
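The auditability theme is concrete. NYC’s Local Law 144, for instance, centers on an “impact ratio”: each group’s selection rate divided by the selection rate of the most-selected group. A minimal sketch of that calculation (group names and counts here are invented for illustration, not audit data):

```python
# Sketch of the impact-ratio metric used in AEDT bias audits:
# each group's selection rate relative to the most-selected group.
# Group labels and counts below are purely illustrative.

def impact_ratios(selected, applied):
    """selected / applied: dicts mapping group -> candidate counts."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())  # rate of the most-selected group
    return {g: round(rate / top, 3) for g, rate in rates.items()}

applied  = {"group_a": 200, "group_b": 180}
selected = {"group_a": 60,  "group_b": 36}

ratios = impact_ratios(selected, applied)
# group_a: 0.30 selection rate (baseline, ratio 1.0)
# group_b: 0.20 / 0.30 = 0.667
```

A ratio well below 1.0 for any group is the kind of signal an annual audit is meant to surface before a regulator or plaintiff does.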
The Litigation Risks are Real
The biggest legal danger for companies using AI in hiring is litigation. If AI tools reproduce bias or make decisions without transparency, organizations open themselves up to lawsuits from candidates, class actions, or enforcement actions from regulators. The ongoing Mobley v. Workday case, which alleges that Workday’s AI tools discriminated based on race, age, and disability, shows these risks are real and growing.
A well-known precedent is Amazon’s experimental hiring tool from 2014-2015. It was designed to scan resumes for top candidates but ended up reinforcing bias against women. For example, it penalized resumes containing the word “women’s,” as in “women’s chess club captain,” and downgraded graduates of all-women’s colleges. The project was scrapped once the bias became clear. That story remains a cautionary tale about the dangers of letting AI systems replicate historical inequities.
The Reality Across AI and HRtech
Most HRtech companies don’t build massive AI models. It’s too expensive. They outsource or license from labs like OpenAI, Google (Gemini), and Anthropic. Their “special sauce” often boils down to prompt engineering, filters, or overlays, and those are usually broad rather than role-specific.
Because many vendors don’t reveal how prompts or training data work, recruiters can’t adjust screening logic or see what drives outcomes. That’s a problem with laws demanding transparency or audits.
To put it bluntly: most new HRtech is selling a few glorified prompts. Anyone can take a resume, drop it into a model like ChatGPT, and run a simple prompt. That’s basically the “secret sauce” being sold.
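To make that claim concrete, here is roughly what such a product reduces to: a template filled in with the resume and sent to a licensed model. The template wording and field names are hypothetical, but the shape is the point:

```python
# Illustration of a "glorified prompt" resume screener: the entire
# product logic is string interpolation before a single model call.
# Template text and field names are invented for illustration.

SCREEN_TEMPLATE = (
    "You are a recruiter. Rate this resume 1-10 for the role below.\n"
    "Role: {job_title}\n"
    "Requirements: {requirements}\n"
    "Resume:\n{resume_text}\n"
    "Reply with a score and one sentence of reasoning."
)

def build_screening_prompt(job_title, requirements, resume_text):
    # The "secret sauce": fill in three blanks, send to the model.
    return SCREEN_TEMPLATE.format(
        job_title=job_title,
        requirements=requirements,
        resume_text=resume_text,
    )

prompt = build_screening_prompt(
    "Backend Engineer", "Python, 5+ years", "(resume text here)"
)
```

Nothing about this is proprietary, and when a vendor hides it, recruiters lose the ability to see or adjust the screening logic.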
SeeVee’s Approach
Here’s where SeeVee is different. We don’t pretend to have a hidden algorithm or a mysterious “secret sauce.” What we provide is a framework that recruiters and hiring managers can actually use to interpret applicant data on their own terms.
Every single prompt is visible and editable. Instead of locking users out, we hand them the actual prompts so they can tweak, adjust, and refine the criteria themselves. This means:
Recruiters are not forced to rely on vague outputs. They can shape how AI evaluates candidates in ways that match the specifics of their job and culture.
A log of all prompts and adjustments is maintained, making the process auditable for compliance and bias testing later.
Transparency is built into the workflow. There are no hidden processes running behind the curtain.
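The audit log described above can be pictured as a simple append-only record of every prompt edit. This is a minimal sketch, not SeeVee’s actual implementation; all field names are hypothetical:

```python
# Sketch of a prompt audit log: every edit is recorded with who changed
# what and when, so the history can be replayed during a compliance or
# bias review. Field names are hypothetical.

from datetime import datetime, timezone

class PromptAuditLog:
    def __init__(self):
        self.entries = []

    def record(self, level, prompt_id, old_text, new_text, edited_by):
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": level,  # workspace / job / candidate / interview
            "prompt_id": prompt_id,
            "old_text": old_text,
            "new_text": new_text,
            "edited_by": edited_by,
        })

    def history(self, prompt_id):
        # Full edit trail for one prompt, oldest first.
        return [e for e in self.entries if e["prompt_id"] == prompt_id]

log = PromptAuditLog()
log.record(
    "job", "match-score",
    "Weight years of experience heavily.",
    "Weight demonstrated skills over years of experience.",
    "recruiter@example.com",
)
```

Because every entry keeps both the old and new text, an auditor can reconstruct exactly which criteria were in effect for any past hiring decision.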
Breaking it down: At every level, prompts guide how the AI interprets information and every prompt is editable by the recruiter.
Workspace level prompts: Define “hero data” like company description, values, and culture. The match score prompt lives here, guiding how resumes and interviews are compared against the job description.
Job level prompts: Recruiters add prompts tailored to the role. For example: “Emphasize candidates with 5+ years of backend engineering experience” or “Highlight candidates fluent in Spanish.”
Candidate level prompts: Recruiters can ask prompts like “What leadership experiences does this candidate highlight?” or “Summarize their collaboration skills.” AI surfaces information from resumes and interview transcripts accordingly.
Interview level prompts: Recruiters can shape the live interview process by writing prompts that add custom questions, adjust the weighting of certain answers, or guide how AI evaluates transcript responses.
By giving recruiters control and visibility into prompts at each level, SeeVee ensures that hiring is transparent, auditable, and adaptable to the unique needs of every role and company.
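One way to picture the layering described above: prompts from each level are composed, most general first, into the final instruction the model sees. The structure and wording below are a hypothetical sketch under that assumption, not SeeVee’s internals:

```python
# Sketch of layered prompt composition: workspace context, job criteria,
# and the recruiter's candidate-level question are joined into one final
# prompt. Every layer is plain, recruiter-editable text -- nothing hidden.

def compose_prompt(workspace_context, job_criteria, candidate_question):
    layers = [
        f"Company context: {workspace_context}",
        f"Role criteria: {job_criteria}",
        f"Recruiter question: {candidate_question}",
    ]
    return "\n\n".join(layers)

final_prompt = compose_prompt(
    "We value mentorship and pragmatic engineering.",
    "Emphasize candidates with 5+ years of backend engineering experience.",
    "What leadership experiences does this candidate highlight?",
)
```

Since each layer is editable independently, a recruiter can change the role criteria without touching the company-level context, and the audit log captures which version of each layer produced any given output.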
Why This Is Important
Reduce legal and reputational risk by building transparency and fairness into hiring processes.
Improve quality of hire by keeping humans in the loop and catching strong non‑traditional candidates.
Build trust with candidates by clearly explaining how AI is used.
Ensure compliance with rapidly evolving laws.
Stay adaptable as regulations change and expectations grow.
Summary
AI in recruitment is here to stay. It can deliver faster screening and better matching, but only if done thoughtfully. The risks lie in black-box decisions, hidden bias, and lack of accountability.
SeeVee’s solution: give recruiters tools to adjust prompts, build transparency, and make auditing standard. Because in an era of rising regulation and scrutiny, anything less is asking for trouble.