
How Fraud Detection Works

How Endorsed's AI fraud detection agent works: the algorithm, how to interpret the signals, and what actions to take

Written by David Head
Updated today

1. The Big Picture

Endorsed uses a machine learning model to score every candidate’s fraud risk. Think of it like a credit score for hiring: the system looks at dozens of data points about a candidate and produces a single risk level.

The four risk levels are:

  • 🟢 Clear (very high confidence this is a real person). What to do: Proceed normally. No extra steps needed.

  • ⚪️ Low Risk (most likely real, with a small chance of fraud). What to do: Advance the candidate, but pay closer attention during interviews. Confirm the person on the call matches their LinkedIn photo, and ask a few questions about where they live or their background to verify they are who they say they are.

  • 🟡 Medium Risk (too close to call; needs human judgment). What to do: Gather more data during the interview process. Verify the person is not a deepfake and that they match their LinkedIn profile photo. Consider asking them to hold their ID next to their face on the call, and ask specific questions about the area they claim to be in. If something feels off, escalate.

  • 🔴 High Risk (roughly a 90-95% chance this candidate is fraudulent). What to do: Strongly consider rejecting. If you decide to advance, we recommend doing everything listed under Medium Risk, as well as escalating to your security team. The worst-case scenario at this risk level is a rented identity, where a real person interviews on behalf of a fraudster. Even if the candidate looks and sounds real the entire time, a fraudster may be in the loop doing the actual work after hire.

2. How the Score Works

The Score and the Signals Are Two Different Things

This is the most important thing to understand: the risk score is produced by the AI model independently. The signals you see in the dashboard are an explanation layer. They show you pieces of what the model is considering, but they don’t show you everything.

An analogy: Imagine a doctor gives you a health score. They then show you a few of the test results that went into that score: your blood pressure, your cholesterol, your weight. Those test results help you understand the score, but the doctor also considered dozens of other lab results, your family history, and patterns across thousands of other patients. The test results you see illustrate the score. They didn’t create it.

Why Two Candidates Can Look Similar But Get Different Scores

You might see two candidates with a similar number of green and yellow dots, but one is “Clear” and the other is “High Risk.” This can feel confusing. Here’s why it happens:

  • Not all signals carry the same weight. Some signals matter dramatically more than others. In some cases, one signal can carry 50 times more weight than another. A single heavyweight red flag (like a VOIP phone number or suspicious application patterns) can push a candidate to High Risk even if most other signals look fine.

  • There are signals you can’t see. Endorsed evaluates additional confidential signals behind the scenes (more on why in the next section). Some of these hidden signals are very high-weight, so they can significantly affect the score without being visible on your screen.

  • The AI looks at the full picture, not a checklist. The model doesn’t just count green vs. yellow vs. red dots. It evaluates the combination and pattern of signals together, the way a doctor reads lab results as a whole rather than one at a time.
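The weighting idea above can be illustrated with a toy example. The real model is a trained ML system, not a hand-weighted checklist, and the signal names and weights here are invented purely for illustration:

```python
# Hypothetical weights: one heavyweight signal can carry ~50x the
# weight of a light one, so a single red flag can dominate the score.
SIGNAL_WEIGHTS = {
    "voip_phone_number": 50.0,      # heavyweight red flag
    "new_email_address": 1.0,
    "no_linkedin_photo": 1.0,
    "thin_connection_graph": 1.0,
}

def weighted_risk(flagged_signals: list[str]) -> float:
    """Sum the weights of the signals that fired (unknown signals count 0)."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in flagged_signals)

# One heavyweight flag outweighs several light ones combined:
print(weighted_risk(["voip_phone_number"]))                       # 50.0
print(weighted_risk(["new_email_address", "no_linkedin_photo",
                     "thin_connection_graph"]))                   # 3.0
```

This is why counting green vs. yellow dots is misleading: the candidate with three light warnings scores far lower than the candidate with one heavy one.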

3. What You Can See, and What You Can’t

The Visible Signals

The dashboard shows you signals across several categories:

  • Application Analysis: Was the candidate referred? Are there suspicious patterns across similar applications? Does the resume text have unusual word choice, writing style, or formatting?

  • Online Identity: Can we verify this person exists across data sources? Is their LinkedIn profile real, established, and consistent? Do they have connections, a photo, recommendations?

  • Contact Info: Is the email established or brand new? Is the phone number valid and not a VOIP line? Does the phone owner's name match the candidate?

  • Resume vs LinkedIn: Do the education and work experience on the resume match what's on LinkedIn? Mismatches can indicate a fraudulent application.

Resume Signals

You may see yellow or red resume signals like “Resume: Word Choice” or “Resume: Writing Style.” These mean that Endorsed’s AI has detected statistical patterns in the resume text that are correlated with fraudulent applications.

We intentionally don’t go into detail about what these patterns are. There are two reasons for this. First, if we explained exactly which word patterns or formatting details we look for, fraudsters could simply adjust their resumes to avoid detection.

Second, some of these signals are based on statistical correlations that are genuinely useful for detection but would raise more questions than they answer if shown in detail. They’re the kind of thing that sounds strange in isolation but is highly predictive when the AI evaluates it alongside everything else.

The important thing to know: these signals have been validated against thousands of known fraud cases. If a resume signal fires, it means the text matches patterns that fraudsters disproportionately exhibit.

Resume vs. LinkedIn Mismatches

Endorsed compares the education and work experience on a candidate’s resume to what’s on their LinkedIn profile. When no matches are found between the two, it’s a warning sign. Some fraudulent applicants use completely different histories on their resume vs. their LinkedIn.

One nuance: matching education or experience between resume and LinkedIn doesn’t guarantee authenticity. Most fraudsters try to keep them in sync. So a match is neutral (not necessarily safe), while a complete mismatch is a red flag.
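That asymmetry (a match is neutral, a complete mismatch is a red flag) can be sketched as a simple overlap check. The function and the idea of comparing normalized employer/school names are hypothetical, not Endorsed's actual implementation:

```python
def history_overlap(resume_entries: set[str], linkedin_entries: set[str]) -> str:
    """Compare normalized employer/school names from resume vs. LinkedIn.

    Hypothetical sketch of the rule described above: any overlap is
    neutral (fraudsters usually keep the two in sync), while zero
    overlap between two non-empty histories is a red flag.
    """
    if resume_entries and linkedin_entries and resume_entries.isdisjoint(linkedin_entries):
        return "red_flag"   # completely different histories
    return "neutral"        # a match does not guarantee authenticity

print(history_overlap({"acme corp", "state university"},
                      {"acme corp", "globex"}))          # neutral
print(history_overlap({"acme corp"}, {"initech"}))       # red_flag
```

Note that an empty or missing LinkedIn history returns neutral here rather than red_flag, mirroring the doc's distinction between "unable to verify" and an outright mismatch.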

Why Some Signals Are Confidential

Fraud is an adversarial problem. The people committing fraud are actively trying to figure out how detection systems work so they can beat them. Every signal we expose publicly is a signal that fraudsters can learn to game.

Our data science team has identified patterns that fraudsters don’t know we’re looking for, and keeping them confidential is what makes them effective. If we revealed how these signals work, bad actors would reverse-engineer the system and adjust their behavior to avoid detection.

We know it can feel frustrating to see “Secret Signals: Suspicious activity detected” without being able to dig into the details. The tradeoff is real: the more we reveal, the less effective those signals become. We’d rather have signals that actually catch fraudsters than signals that are fully transparent but easy to circumvent.

4. Reading the Dashboard

Top Signals

At the top of every candidate’s fraud report, you’ll see “Top Signals Relevant to Risk Score.” These are the most important signals for this candidate, the ones that had the biggest influence on the score. Start here.

Comprehensive Fraud Evaluation

Below that, you can expand “View Comprehensive Fraud Evaluation” to see every signal we can show you, organized into the categories described above.

What the Colors Mean

  • 🟢 Green = This check passed. A positive signal.

  • 🟡 Yellow / Orange = A warning sign was found. It doesn’t mean fraud on its own, but it’s something the model noted.

  • ⚪️ Gray = Neutral or unable to verify. Not enough data to say positive or negative.

Remember: a few yellow signals don’t automatically mean fraud. The model weighs each signal differently. What matters is the overall risk level at the top.

5. When the System Is Wrong

Endorsed’s fraud model is approximately 94% accurate, but no system is perfect. Here’s what you should know about how it handles mistakes:

The system is designed to err on the side of caution for real candidates. For both legal and compliance reasons, our highest priority is making sure real candidates are not wrongly flagged as fraudulent. This means the system is more likely to miss a fraudster (giving them a lower score than they deserve) than it is to wrongly flag a real person as high risk.

In plain terms: if the system says someone is High Risk, you should take that very seriously. If it says someone is Clear, they’re almost certainly real, but occasionally a sophisticated fraudster might slip through with a lower score.

Reporting a Wrong Result

If you believe the system got it wrong, there are two ways to report it:

  1. Contact the Endorsed team via email or live chat with the specific candidate example. Our team will analyze the case manually and use what we learn to improve the next version of the AI.

  2. Use the buttons in the dashboard. When you click “Mark Fraud and Reject” or “Not Fraud” and there’s a significant discrepancy between your assessment and the system’s score, our team flags those cases for manual review as well.
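The discrepancy check described in step 2 might look something like the sketch below. The action names mirror the dashboard buttons, but the matching logic is hypothetical:

```python
# Hypothetical sketch: a reviewer verdict that strongly disagrees with
# the model's risk level is queued for manual review by the Endorsed team.
DISAGREEMENTS = {
    ("Not Fraud", "High Risk"),
    ("Mark Fraud and Reject", "Clear"),
    ("Mark Fraud and Reject", "Low Risk"),
}

def needs_manual_review(reviewer_action: str, model_level: str) -> bool:
    """True when the human verdict significantly contradicts the model."""
    return (reviewer_action, model_level) in DISAGREEMENTS

print(needs_manual_review("Not Fraud", "High Risk"))  # True
print(needs_manual_review("Not Fraud", "Low Risk"))   # False
```

Small disagreements (e.g. marking a Low Risk candidate as not fraud) pass through silently, since the model already leaned the same way.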

Either way, every correction helps. The more examples our team can analyze, the more accurately the system performs for everyone.

6. Quick Reference

“Why is this candidate High Risk when all their signals look green?”

Because the model evaluates signals you can’t see (the confidential signals), and some signals carry dramatically more weight than others. A single heavy signal can outweigh many light ones.

“Why is this candidate Clear when they have yellow warning signals?”

Because those yellow signals were low-weight. The model considered them but determined they weren’t strong enough, individually or together, to meaningfully increase the fraud risk.

“What are the resume signals checking?”

Statistical patterns in how the resume is written: word choice, writing style, and formatting. These patterns are correlated with fraudulent applications based on thousands of confirmed cases. Details are kept confidential to prevent fraudsters from adjusting their resumes to avoid detection.

“What are the secret signals?”

Additional fraud indicators that our data science team has identified through extensive research. They’re kept confidential to prevent bad actors from learning how to circumvent them. If “Secret Signals: Suspicious activity detected” appears, it means multiple confidential checks have flagged this candidate.

“Should I override the system’s recommendation?”

If your gut says the risk score is wrong, trust it, but do more work to verify the candidate is real during the interview process. Keep in mind that sophisticated fraudsters can steal real people's identities, and some go as far as having the real person show up to interviews while a different person does the actual work. Even when overriding, stay vigilant.

“How does Endorsed get better over time?”

When you mark candidates as fraud or not fraud, Endorsed uses that feedback to improve. If you have opted into AI training, your corrections are used as examples to train and improve the AI model directly. If you have not opted in, our team still analyzes corrections manually to inform improvements to the next version.

Questions? Reach out to your Endorsed contact or email [email protected]
