Atlas by Brightline Labs

Field note · Resume Strategy

Resume Mistakes That Lose Both AI and Human Screens

The resume mistakes that fail ATS parsers are not the same as the ones that fail the eight-second human skim. Here is the dual-rejection matrix for 2026.

Tags: resume mistakes to avoid 2026 · AI resume screening mistakes · ATS resume errors · common resume problems

Two screens. Two different rejection patterns.

A modern resume gets read twice in the first day — once by a parser-and-LLM stack the company has integrated into its ATS, and once by a human who spends six to ten seconds on the rendered output. Each screen rejects on different grounds.

Most resume advice collapses this into one problem. It is not one problem. The mistakes that get you cut by an ATS are not the same as the mistakes that get you cut by a hiring manager skimming a printed PDF. We covered the positioning side of this in our piece on passing both screens; this post is about the specific mistakes that fail one, the other, or both — and which ones most candidates are still optimizing for the wrong way.

The mistakes only AI screens reject

Some resume mistakes never reach the human. The screener catches them in parsing, gives them a low score, and the application stops there. The candidate never sees a rejection that explains why.

  • Multi-column layouts or text inside graphics. Pretty in InDesign; unparseable by half of ATS stacks. Single-column flat text wins.
  • Embedded text in images or logo headers. The parser sees nothing. Names, contact info, and section headers that live inside SVGs or PNGs are invisible.
  • Tables for skills sections. Some parsers handle tables; many flatten them into unreadable token streams. Bullet lists are the safe default.
  • Custom fonts that fall back to a different glyph set. The visual layout collapses when the parser substitutes a default — and the substitution often happens silently in the screening pipeline.
  • PDF locked or saved as image-only. The screener cannot extract text at all; the resume effectively never existed.

The mistakes only humans reject

Other mistakes pass the parser cleanly and then die in the human skim. The ATS scored you fine; the recruiter or hiring manager saw the rendered PDF and put it in the no pile in eight seconds.

  • Buried headline. The first thing a human sees should tell them what role you do. If the top of the resume is a generic objective statement, the human reader moves on before reaching the experience that would have changed their mind.
  • Bullet inflation. Eight bullets per role where two are strong and six are filler. Humans skim — the filler dilutes the strong work and the reader assumes the strong work is filler too.
  • Date-format inconsistency. 'Jan 2024 – Present' next to '03/2023 – 11/2023' reads as unfinished. The parser does not care; the human does.
  • Title inflation that does not match the company. 'Chief of Staff at a 5-person startup' is a real role but reads to a skim as posturing. Calibrate titles to what the company hierarchy would actually call them.
  • Hyperbolic verbs. 'Revolutionized,' 'spearheaded,' 'transformed' — humans now read these as resume-generator output, even when you wrote them yourself.
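The date-format inconsistency, at least, is checkable by machine. A minimal sketch, assuming just two style buckets — the labels and regexes are mine, not any ATS vendor's:

```python
import re

# Hypothetical style labels; the patterns cover the two formats in the example above.
DATE_PATTERNS = {
    "Mon YYYY": re.compile(r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\s+\d{4}\b"),
    "MM/YYYY":  re.compile(r"\b\d{1,2}/\d{4}\b"),
}

def date_styles(resume_text: str) -> set[str]:
    """Return the set of date styles a resume uses; more than one reads as unfinished."""
    return {name for name, pat in DATE_PATTERNS.items() if pat.search(resume_text)}

text = "Senior Engineer, Jan 2024 - Present\nAnalyst, 03/2023 - 11/2023"
print(sorted(date_styles(text)))  # -> ['MM/YYYY', 'Mon YYYY']: two styles, inconsistent
```

A real checker would also cover 'YYYY-MM' and spelled-out months, but the point is the same: the set should have exactly one member.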

The mistakes both screens reject

These are the dangerous ones. They get you cut at the ATS stage, and they would have gotten you cut by a human anyway. Most candidates do not know these are double rejections because the ATS filtered them out first; they never see the human half of the rejection.

  • Job titles that don't match the posting's vocabulary. Parser scores you low on keyword match; human reads your title and decides you are not in their world either.
  • Missing role-level keywords. 'Engineer' instead of 'Senior Engineer.' Parser cannot calibrate level; human assumes you are a junior and skips. Match the posting's level language exactly.
  • Resume gaps without context. Parser flags the gap; human assumes the worst. A one-line acknowledgement ('career break for caregiving, July 2024 – March 2025') passes both screens cleanly. Hiding the gap fails both.
  • Quantification that does not survive scrutiny. '300% revenue increase' on a six-person team — parser counts the number, human laughs at it. If the number cannot defend itself in an interview, it fails both screens.

The mistake nobody else is talking about

Keyword over-density. Five years ago the rule was: cram every keyword from the JD into the resume so the ATS gives you a high match score. That rule produced a generation of resumes that read like keyword soup. ATS engines noticed.

Modern LLM-based screeners now actively downrank resumes where keyword density crosses a threshold — they read keyword stuffing as templated AI-generated output and assign it low signal. The exact threshold varies by vendor, but the pattern is consistent: a resume that hits every keyword in the JD with no narrative connective tissue now scores below a resume that hits the important keywords naturally inside specific accomplishment statements.

This is the same dynamic as the AI cover-letter screen: the system that used to reward template-matching now penalizes it. Keywords still matter; over-density now hurts. The replacement is to use the JD's vocabulary in your own bullets when the bullet is true — not to dump the JD into your skills section.
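Over-density is easy to measure yourself before any screener does. A rough sketch — the tokenizer, the stop list, and the 0.35 ceiling are all invented for illustration, since no vendor publishes a real threshold:

```python
import re
from collections import Counter

def tokens(text: str) -> list[str]:
    # Lowercase word-ish tokens; keeps 'c++', 'c#', '.net'-style terms together.
    return re.findall(r"[a-z][a-z+#.-]*", text.lower())

def keyword_density(resume: str, jd: str, top_n: int = 20) -> float:
    """Fraction of resume tokens that are among the JD's most frequent terms."""
    stop = {"the", "and", "a", "to", "of", "in", "for", "with", "on", "is"}
    jd_counts = Counter(t for t in tokens(jd) if t not in stop)
    jd_top = {w for w, _ in jd_counts.most_common(top_n)}
    resume_toks = tokens(resume)
    if not resume_toks:
        return 0.0
    return sum(1 for t in resume_toks if t in jd_top) / len(resume_toks)

DENSITY_CEILING = 0.35  # arbitrary illustrative cutoff, not any vendor's number

jd = "senior python engineer: python, aws, kubernetes, terraform experience required"
stuffed = "python aws kubernetes terraform python aws kubernetes terraform"
print(keyword_density(stuffed, jd))  # every token is a JD keyword -> 1.0
```

A bullet like 'built a billing service in Python and deployed it with Terraform' scores far below the ceiling while still hitting the keywords that matter — which is exactly the 'naturally inside specific accomplishment statements' pattern described above.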

A two-pass test you can run in five minutes

Before you send a resume, run it through this two-pass test. Five minutes total.

Pass one — the ATS pass. Open the PDF, select all text, copy, paste into a plain-text editor. Read what came out. If your name and email survived, if section headers are present, if dates are readable, if your bullets are intact prose — you passed. If anything is missing or scrambled, an ATS sees the same garbage.
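If you run this test often, the plain-text pass is scriptable. A minimal sketch of the checks — the field names, regexes, and 100-word floor are my own heuristics, not any ATS's actual rules — run on whatever text your select-all-and-copy produced:

```python
import re

def ats_sanity_check(extracted: str) -> list[str]:
    """Flag extraction failures an ATS would hit.
    `extracted` is the plain text your select-all / copy-paste produced."""
    problems = []
    if not re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", extracted):
        problems.append("no email address survived extraction")
    if not re.search(r"\b(19|20)\d{2}\b", extracted):
        problems.append("no readable dates")
    missing = [h for h in ("experience", "education", "skills") if h not in extracted.lower()]
    if missing:
        problems.append(f"missing section headers: {', '.join(missing)}")
    if len(extracted.split()) < 100:  # arbitrary floor for a one-page resume
        problems.append("under 100 words extracted -- text may live in images")
    return problems
```

To automate the extraction step itself, a library such as pypdf (`PdfReader(path).pages[0].extract_text()`) approximates what a parser sees better than manual copy-paste does.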

Pass two — the human pass. Print the resume. Hold it at arm's length. Look at it for exactly eight seconds. What can you tell about this person? If your answer is 'experienced professional' but you cannot say which role they do, your headline is buried. If you can answer the role question but cannot find a single quantified accomplishment, your evidence is buried. Both are common; both are fixable in fifteen minutes.

The third pass, and this is where Atlas earns its rent, is scoring against a real role. Drop the resume against the actual posting, get a 0-100 fit score with a per-criterion breakdown, and see exactly which criteria are weak. The upstream feed for that scoring is your own evidence, which is why a weekly accomplishments log compounds more than any one-time resume rewrite: the dual-rejection matrix above is the manual version of the scoring, and the log is the manual version of the evidence database that feeds it.
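Mechanically, a per-criterion 0-100 score is just a weighted rubric. The criteria and weights below are invented for illustration — they are not Atlas's actual model:

```python
def fit_score(criterion_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted 0-100 fit score from per-criterion 0-1 scores.
    Missing criteria score zero; the breakdown is the dict itself."""
    total_weight = sum(weights.values())
    weighted = sum(criterion_scores.get(c, 0.0) * w for c, w in weights.items())
    return round(100 * weighted / total_weight, 1)

# Hypothetical rubric mirroring the matrix in this post.
weights = {"seniority match": 3, "keyword coverage": 2,
           "quantified evidence": 2, "gap context": 1}
scores = {"seniority match": 0.9, "keyword coverage": 0.6,
          "quantified evidence": 0.5, "gap context": 1.0}
print(fit_score(scores, weights))
```

The breakdown matters more than the headline number: in the sample above, 'keyword coverage' and 'quantified evidence' are the weak criteria, which tells you which bullets to rewrite.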

Take the next step

Stop optimizing for the screen you'll never see

Atlas scores your resume against every role's actual posting — what the ATS will see, what the recruiter will skim, and where the gap between the two will lose you the callback.
