AI & Careers

ChatGPT Prompts to Spot Fake Job Postings

By HRLens Editorial Team · 12 min read

Quick Answer

The best ChatGPT prompts to spot fake job postings make the model audit pay, company identity, recruiter channel, application flow, and pressure tactics. Don’t ask AI if a job is legit. Ask it to score scam signals, separate ghost jobs from scams, and tell you exactly what to verify on the employer’s site.

Why are fake job postings harder to spot in 2026?

Fake job ads got better because the writing got better. A scam used to look sloppy, rushed, and weirdly formal. Now a decent model can generate a polished remote marketing manager listing in seconds, add a believable salary band, and follow up with a recruiter message that sounds like a real talent coordinator. The FTC reported that game-like task scam reports climbed from virtually zero in 2020 to about 20,000 in the first half of 2024 alone, and those scams accounted for a large share of all job scam reports in that period. You’re not imagining it. The market really is noisier.

Most fake-job advice is stuck in an older internet. Bad grammar isn’t the main tell anymore. Behavior is. A posting can read perfectly and still be a scam if the recruiter pushes you to WhatsApp, asks you to buy equipment, or wants your bank details before a real interview. LinkedIn’s 2026 safety data also pointed to a familiar trap: getting pulled off-platform into a personal chat app. That’s why a good prompt shouldn’t just critique wording. It should audit how the job behaves, how the recruiter behaves, and where the application path breaks from normal hiring.

Job scam signals worth knowing

- 20,000: estimated task scam reports in the first half of 2024 alone (FTC data spotlight)
- 38.8%: share of first-half 2024 job scam reports tied to task scams (FTC estimate)
- 32%: younger professionals who said they ignored scam warning signs (LinkedIn Job Search Safety Pulse 2026)

Recent FTC and LinkedIn figures show why simple gut checks are no longer enough.

What fake job listing red flags should your prompt check first?

When I screen a listing, I want the model to check five things in order: company, compensation, channel, conversion, and credentials. Company means the employer is real and the role exists on an official careers page. Compensation means the pay matches the level, location, and responsibilities instead of reading like a fantasy draft. Channel means the recruiter is using a corporate domain or a verifiable platform workflow. Conversion means the process moves toward a real application, not toward gift cards, crypto, or a training fee. Credentials means the posting includes enough operational detail to sound like an actual hiring manager wrote it.

You also need the model to separate a ghost job from a scam. Those aren’t the same thing. A ghost job is the annoying posting that’s stale, already filled, or there to collect resumes. A scam job is trying to get money, data, access, or unpaid labor out of you. That distinction matters because the next step changes. For a ghost job, you usually move on. For a scam, you document, report, and stop talking. Most prompt libraries miss this and give you one mushy verdict when you actually need a sharper split.

Here’s the universal job posting analysis prompt I’d use across any model: Read the job posting and any recruiter message like a fraud analyst and an in-house recruiter. Separate your findings into likely real, likely ghost job, and likely scam. Score scam risk from 0 to 100. Flag fake job listing red flags involving pay, domain, recruiter identity, application path, urgency, equipment purchases, upfront fees, personal data requests, messaging apps, and vague company details. Then give me the five fastest checks I should do in the next ten minutes before I apply.
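If you want to see what that kind of signal scoring looks like mechanically, here is a minimal sketch in Python. It is a crude keyword version of what the prompt asks the model to do; the signal list and weights are illustrative assumptions, not a vetted ruleset, and a real model catches far subtler behavior than string matching can.

```python
# Illustrative sketch: a crude, keyword-based version of the scam-risk
# scoring the prompt asks the model to perform. The signals and weights
# below are assumptions for demonstration, not a vetted ruleset.

SIGNALS = {
    "whatsapp": 25,              # off-platform messaging push
    "telegram": 25,
    "equipment": 20,             # buy-your-own-equipment schemes
    "gift card": 30,
    "training fee": 30,          # upfront-payment asks
    "urgent": 10,                # pressure tactics
    "bank details": 30,
    "no experience needed": 10,
}

def scam_risk_score(text: str) -> tuple[int, list[str]]:
    """Return a 0-100 risk score plus the signals that fired."""
    lowered = text.lower()
    hits = [s for s in SIGNALS if s in lowered]
    score = min(100, sum(SIGNALS[s] for s in hits))
    return score, hits

posting = (
    "Urgent hiring! Remote data entry, no experience needed. "
    "Message us on WhatsApp and send bank details to start."
)
score, hits = scam_risk_score(posting)
print(score, hits)  # the example posting trips four signals
```

The point of the sketch is the shape of the output: a number plus the exact signals that produced it, which is what you should demand from the model instead of a one-word verdict.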

Which ChatGPT prompts to spot fake job postings actually work?

In current ChatGPT, GPT-5 is the cleanest choice when you want a structured audit instead of a vague maybe. Use this prompt: Analyze this job ad like a skeptical recruiting lead. I want a scam-risk score, a ghost-job score, and a legitimacy score. Show the exact phrases that triggered each score. Tell me what feels normal for a real hiring process and what looks off. Then write three verification questions I can send the recruiter without sounding accusatory. That last line matters. Spotting a fake job is useful. Knowing how to test it without burning a real opportunity is better.

If your workspace still gives you GPT-4o, use it for screenshot triage. GPT-4o is still handy when you want the model to read the LinkedIn post, the recruiter profile, the careers page, and the follow-up email together. Prompt: Compare these screenshots and find mismatches in title, salary, location, branding, dates, spelling, domain, and application steps. Decide whether this looks like a scam, a stale repost, or a sloppy but real listing. List the evidence in plain English. That works especially well when the text itself looks normal but the visual details don’t line up.

The single highest-value ChatGPT move is to combine the job ad with the first recruiter message. Most scams don’t break in the posting. They break in the handoff. Use this prompt: Audit the transition from public job post to private recruiter contact. Flag pressure tactics, requests for sensitive data, off-platform moves, interview shortcuts, equipment-payment language, and anything inconsistent with a normal hiring funnel for this type of role. Then draft a reply that asks for the official job link, recruiter work email, and interview process. That’s the only AI prompt you need if you want a quick pass before you spend your evening tailoring a CV.
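If you run this check often, it is worth templating. Here is a small sketch that assembles the handoff-audit prompt above so you can paste it into any chat model; the function and variable names are illustrative, and the template wording follows the prompt in this article.

```python
# Sketch: assemble the post-to-recruiter handoff audit prompt so it can
# be reused across listings. Names are illustrative; paste the result
# into whichever chat model you use.

HANDOFF_AUDIT_TEMPLATE = """Audit the transition from public job post to private recruiter contact.
Flag pressure tactics, requests for sensitive data, off-platform moves,
interview shortcuts, equipment-payment language, and anything inconsistent
with a normal hiring funnel for this type of role. Then draft a reply that
asks for the official job link, recruiter work email, and interview process.

JOB POSTING:
{posting}

FIRST RECRUITER MESSAGE:
{recruiter_message}"""

def build_handoff_audit(posting: str, recruiter_message: str) -> str:
    """Combine the public ad and the private message into one audit prompt."""
    return HANDOFF_AUDIT_TEMPLATE.format(
        posting=posting.strip(),
        recruiter_message=recruiter_message.strip(),
    )

prompt = build_handoff_audit(
    "Remote Marketing Manager, $150k, apply now.",
    "Hi! Let's continue on Telegram, no interview needed.",
)
print(prompt)
```

Keeping the template in one place also means you audit every listing the same way, which makes the model's scores comparable across your whole search.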

How should you rewrite the prompt for Claude, Gemini, Copilot, Perplexity, Grok, Meta AI, DeepSeek, and Le Chat?

Claude Sonnet or Opus is excellent when you want slower, cleaner reasoning and a model that will sit with ambiguity instead of rushing to reassure you. Claude prompt: Compare this job listing against how legitimate employers describe scope, reporting line, compensation, interview stages, and location. Surface contradictions, missing operational detail, and signs of employer impersonation. Do not comfort me. Be conservative and tell me what evidence would change your mind. Gemini is strong when the task becomes compare-and-contrast. Gemini prompt: Cross-check this listing against the employer’s official careers page and summarize differences in title, location, compensation, benefits, and application flow. Tell me which version looks canonical.

Copilot works best when the posting is already open in Edge or when you want the model to pull context from the page you’re staring at. Copilot prompt: Using the open tab, extract the company name, recruiter details, job requirements, application method, and any suspicious asks. Then compare this against standard enterprise hiring behavior for a similar role. Perplexity is my first choice when I need source-backed scam recruiter detection. Perplexity prompt: Find the employer’s official careers page, confirm whether this role exists there, look for recruiter identity evidence, and show me cited sources for every claim. Then tell me what cannot be verified from public information. That last sentence keeps the model honest.

Grok is useful when the signal lives in public chatter, founder accounts, or social activity around a startup that says it’s hiring fast. Grok prompt: Check whether this employer, recruiter, and role have a believable public footprint across company channels and recent posts. Flag inconsistencies between the hiring claims and what the company publicly says it is building. Meta AI is better for quick consumer-style checks than for formal due diligence, but it can still help with local employer presence. Meta AI prompt: Summarize whether this company appears active, current, and locally real based on public web and social signals, and tell me why that still does not prove the job posting is genuine.

DeepSeek is useful when you want a fast first-pass pattern check or you’re screening a batch of listings and need the same job posting analysis prompt repeated at speed. DeepSeek prompt: Evaluate this listing for scam patterns, ghost-job patterns, and normal hiring patterns. Use concise reasoning, score each category from 0 to 100, and give me a final yes, no, or verify. Mistral Le Chat is a good fit when you want web search plus document reading in the same flow. Le Chat prompt: Read this job posting, search for the official version, compare language and application steps, and tell me where the evidence conflicts. For bulk triage, DeepSeek is efficient. For one careful check, Le Chat is more comfortable.

Which AI prompts should you stop using?

Stop asking a model, Is this job legit? That prompt is too lazy, and lazy prompts get lazy answers. The model will usually mirror your own optimism or paranoia. If you sound hopeful, it often sounds reassuring. If you sound suspicious, it often sounds dramatic. Ask for evidence, not vibes. Ask for a scored audit, a list of missing facts, and the next verification steps. Also stop asking AI to write your resume for a job you haven’t verified. Tailoring a CV to a scam is a depressing waste of a Tuesday night, and it pushes more of your personal data into a bad pipeline.

Stop pasting sensitive personal information into the prompt just because the model asked for context. Your full street address, government ID, date of birth, bank info, and employee numbers have no business inside a fake-job check. Redact first. A better instruction is: Before analysis, tell me what personal data should be removed, then analyze only the non-sensitive parts. This matters even more when the recruiter has already asked for forms, direct deposit details, or an equipment reimbursement setup before you’ve had a real conversation with a human being at a verifiable company domain.
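For the redact-first habit, a mechanical pass before you paste anything helps. Here is a rough Python sketch; the patterns below are illustrative assumptions covering obvious cases (emails, long digit runs, date-shaped strings) and are nowhere near complete PII coverage, so still eyeball the text afterward.

```python
import re

# Sketch: a pre-analysis redaction pass, as the article suggests. These
# patterns are rough illustrations (email addresses, long digit runs such
# as account or ID numbers, date-shaped strings), not full PII coverage.

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{9,}\b"), "[LONG-NUMBER]"),      # accounts, IDs
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),  # e.g. dates of birth
]

def redact(text: str) -> str:
    """Replace obvious PII with placeholders before pasting into a prompt."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

msg = "Send your ID 123456789 and DOB 01/02/1990 to hr@example.com"
print(redact(msg))
```

The placeholders also signal to the model that information was deliberately removed, which keeps it from asking you to fill the gaps back in.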

Most viral prompt threads get one thing wrong: they teach the model to make the decision for you. That’s backwards. You want the model to expose contradictions and pressure points so you can verify them yourself. Bad prompt: Tell me if this is fake. Better prompt: Identify concrete risk signals, missing facts, and the smallest set of external checks that would confirm or kill this opportunity. Even better: Draft a professional reply that asks for the official careers link, the recruiter’s corporate email, and the next interview step. That turns AI from fortune teller into filter, which is where it’s actually useful.

How do you verify a recruiter or posting after the AI flags it?

Run a simple ladder. First, find the role on the employer’s official careers site. Not a repost, not a screenshot, not a PDF floating around LinkedIn comments. Second, match the recruiter’s domain, title, and company presence. Third, confirm the application path ends in a real system, not a form that asks for banking details before anyone has screened you. Fourth, check whether the recruiter is trying to move you into WhatsApp, Telegram, Signal, or personal email too early. Real recruiters sometimes text. Real hiring funnels don’t usually start by hiding from the company’s own infrastructure.
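The domain rung of that ladder is easy to automate. Here is a minimal sketch, assuming you already pulled the company's real domain from its official site; the freemail list is an illustrative assumption, and an exact match is deliberately strict, so a legitimate subdomain like mail.acme.com would still come back as unverified and need a manual look.

```python
# Sketch: the "match the recruiter's domain" rung of the verification
# ladder. The company domain should come from the employer's official
# site; the freemail list below is illustrative, not exhaustive.

FREEMAIL = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "proton.me"}

def recruiter_domain_check(recruiter_email: str, company_domain: str) -> str:
    """Classify a recruiter email against the employer's known domain."""
    domain = recruiter_email.rsplit("@", 1)[-1].lower()
    if domain == company_domain.lower():
        return "matches corporate domain"
    if domain in FREEMAIL:
        return "freemail address: verify through another channel"
    return "unrelated domain: treat as unverified"

print(recruiter_domain_check("jane.doe@acme.com", "acme.com"))
print(recruiter_domain_check("acme.hiring@gmail.com", "acme.com"))
```

A matching domain is necessary, not sufficient: look-alike domains one letter off from the real one will fail this check, which is exactly the behavior you want.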

You also need to know what weird but real can look like. A legitimate employer might send you from LinkedIn into Workday, Greenhouse, or Lever, and then later into a screening layer like HireVue or Sapia. That can feel cold, automated, and frankly annoying, but it isn’t automatically a scam. The test is traceability. Does the job exist on the company’s site? Does the recruiter use a corporate domain? Does the interview invite connect back to the named role? If the process is messy but traceable, keep checking. If it’s polished but untraceable, walk.

Once the posting clears basic verification, then tailor your CV. Not before. Running it through HRLens CV analysis after you’ve confirmed the job is real helps you line up keywords, tighten the experience section, and catch ATS gaps without optimizing yourself for a scam. I’d do that right after the official careers-page check and before the application starts asking for work samples or salary expectations. It keeps your workflow clean: verify the employer, adapt the CV, apply, then save the prompt and reuse it on the next listing instead of reinventing your filter every time.

How can you AI-proof your CV and job search once the role is real?

AI-proofing your CV doesn’t mean jamming keywords into every line until it reads like a broken skills database. It means making the truth easier for both software and humans to parse. Use a readable structure, standard headings, exact job titles when they matter, and quantified outcomes that match the level of the role. If you’re a senior backend engineer at a Series B fintech, say what you shipped, what stack you owned, and what changed because of your work. Don’t hide core skills in a dense paragraph and expect an ATS or recruiter AI to infer them.

You should also assume more employers will use AI-assisted screening, scheduling, and early interview layers. That means your job search needs two modes. Mode one is machine-readable: clean CV, clear LinkedIn profile, consistent dates, grounded achievements. Mode two is human-proof: good stories, sharp judgment, and answers that sound like lived experience rather than AI wallpaper. If you hit a one-way screen or an AI interviewer, keep your answers tight and specific. The hardest things for any model to fake well are tradeoff thinking, stakeholder judgment, conflict handling, and concrete examples from work that actually happened.

The sharp takeaway is simple: run the prompt before you run the application. Five minutes of AI triage can save you five hours of emotional admin, CV rewriting, and fake optimism. If a role can’t survive a structured prompt, a careers-page check, and a recruiter verification pass, it doesn’t deserve your time. Save your energy for the jobs that are real, traceable, and worth tailoring for. Most job seekers don’t need more motivation. They need a better filter.

Frequently asked questions

Can ChatGPT really detect fake job postings?
ChatGPT can help you detect patterns that often show up in scams, such as off-platform messaging, unrealistic pay, missing company details, equipment purchases, and pressure to move fast. It can’t certify that a job is real. Use it as a triage tool, then verify the role on the employer’s official careers site and confirm the recruiter through a corporate domain or another trusted source.
What’s the difference between a ghost job and a scam job?
A ghost job is a posting that may be stale, already filled, or never intended to convert into a hire. It wastes your time. A scam job is trying to take money, personal data, account access, or free labor from you. Your prompt should force the model to separate those two outcomes, because the next action is different. Ghost jobs get ignored. Scam jobs get reported and blocked.
Which model is best for checking whether a recruiter is real?
Perplexity is usually the strongest first pick when you want source-backed verification because it can search the web and show where the evidence came from. ChatGPT and Claude are better when you already have the posting, recruiter message, and screenshots and want a sharper reasoning pass. The best setup is often two-step: Perplexity for public verification, then ChatGPT or Claude for risk analysis.
Should I paste my full resume or ID into an AI prompt?
No. Redact first. For scam checks, the model does not need your full address, government ID, date of birth, banking information, or any onboarding documents. It usually doesn’t need your full resume either. Paste only the job ad, recruiter message, and the minimum context needed to judge fit or risk. If you want resume help, do that after you’ve verified the posting is real.
What should I do if a recruiter moves me to WhatsApp or Telegram?
Treat it as a serious warning sign and slow the process down immediately. Ask for the official job link on the employer’s careers page, the recruiter’s corporate email address, and the next step in the formal hiring process. If they dodge those requests, push urgency, or ask for money or sensitive data, stop replying. A real employer can usually be traced back to its own site and systems.
Are AI interviews on platforms like HireVue or Sapia always fake?
No. Plenty of real employers use automated screening or AI-assisted interview platforms, especially for high-volume hiring. The issue is not whether AI is involved. The issue is whether the process is traceable. If the invitation connects back to a real job on the company’s official site, comes from a verifiable company domain, and matches the stated hiring flow, it may be annoying but still legitimate.