Employee Survival Guide®
The Employee Survival Guide® is an employment law podcast only for employees, covering everything related to work and your career. We will share with you all the employment law information your employer and Human Resources do not want you to know, and guide you through various work and employment law issues. This is an employee podcast.
The Employee Survival Guide® podcast is hosted by seasoned employment law attorney Mark Carey, who has practiced exclusively in the area of employment law for the past 29 years. Mark has seen just about every type of employment law and work dispute there is and has filed several hundred work-related lawsuits in state and federal courts around the country, including class action suits. He has a no-frills, blunt approach to the employment law and work issues faced by millions of workers nationwide. Mark endeavors to present both sides of every issue discussed on the podcast so you can make an informed decision. Again, this is a podcast only for employees.
Subscribe to our employee podcast show in your favorite podcast app including Apple Podcasts and Spotify.
You can also subscribe to our feed via RSS or XML.
If you enjoyed this episode of the Employee Survival Guide® please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this employee podcast on your favorite podcast player such as Apple Podcasts and Spotify. Thank you!
For more information, please contact Carey & Associates, P.C. at 203-255-4150, or email at info@capclaw.com.
Also go to our website EmployeeSurvival.com for more helpful information about work and working.
AI Hiring Bias: Job Seekers Discrimination and Algorithmic Bias
Comment on the Show by Sending Mark a Text Message.
Are you tired of AI hiring bias? Join Mark Carey in this eye-opening episode of the Employee Survival Guide® as he unpacks the complexities of the modern hiring landscape, dominated by AI and automated employment decision tools (AEDTs). The emotional rollercoaster of job applications—from the thrill of discovering the ideal job posting to the crushing disappointment of rejection—is all too familiar. But what if we told you that many of those rejections come not from a human, but from an algorithm?
As we delve into the alarming reality of how algorithms filter applications without any human oversight, it's crucial for job seekers to understand the mechanics behind these systems. Carey introduces the concept of the 'black box' in hiring, where opaque algorithms can lead to discriminatory outcomes, leaving applicants in the dark about why they were overlooked. With the rise of AI hiring bias and the potential for discrimination based on race, age, gender, and more, knowledge is power in navigating this treacherous terrain.
But fear not! Carey shares invaluable strategies for job seekers to enhance their resumes and beat the bots. Learn how to effectively mirror job descriptions and utilize simple formatting to increase your chances of getting noticed. This episode is packed with practical tips that empower you to take control of your job search process, transforming you from a passive candidate into an active participant in your career development.
We also explore the legal implications surrounding employment discrimination, including the importance of bias audits and understanding your rights as an applicant. Whether you're facing issues like hostile work environments, retaliation, or discrimination based on disability or pregnancy, this episode equips you with the knowledge to advocate for yourself. The Employee Survival Guide® is not just about surviving the job market; it's about thriving within it.
Ultimately, Mark Carey encourages listeners to embrace their power in a system that often feels rigged against them. By understanding the intricacies of AI in hiring and knowing your employee rights, you can navigate the employment landscape with confidence. Tune in for a compelling discussion that not only highlights the challenges of the job search but also offers actionable insights to empower you on your journey. Don’t let algorithms dictate your future—take charge of your career today!
If you enjoyed this episode of the Employee Survival Guide please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player such as Apple Podcasts and Spotify. Leaving a review will let other listeners know that you found the content of this podcast important in the area of employment law in the United States.
For more information, please contact our employment attorneys at Carey & Associates, P.C. at 203-255-4150, www.capclaw.com.
Disclaimer: For educational use only, not intended to be legal advice.
Okay, I want you to picture a scenario. Just put yourself in this headspace for a second. It's a Tuesday.
SPEAKER_00:Always a Tuesday.
SPEAKER_01:Right. Always a Tuesday. You are sitting at your desk, maybe you're at home, or maybe you're in an office that you were just trying desperately to leave. And you're doing the doom scroll. You know exactly what I'm talking about. You're refreshing LinkedIn. You're tabbing through Indeed. Just sort of waiting for something to jump out at you.
SPEAKER_00:It's the modern gold rush. Everyone is just panning for gold in this massive river of data.
SPEAKER_01:Exactly. And then suddenly you stop because you see it. The posting.
SPEAKER_00:The one.
SPEAKER_01:Yes, the one. And it's not just a job, it's the job. It genuinely feels like someone broke into your house, stole your diary, looked at your entire career history, and then wrote a wish list based exactly on who you are.
SPEAKER_00:Oh, that is the golden ticket moment.
SPEAKER_01:It really is.
SPEAKER_00:That rush of adrenaline is very real. It activates the dopamine centers in your brain. You sit there and you think, you know, this is it. This is the off-ramp for my current situation.
SPEAKER_01:100%. The title is right. The responsibilities are perfect. Even those weird niche skills you picked up five years ago that you thought were totally useless.
SPEAKER_00:They're listed as required.
SPEAKER_01:Yes, required. So you go into full battle mode, you pull up your resume, and you don't just send it, right? You craft it.
SPEAKER_00:Oh, yeah. You're investing massive emotional capital at this point.
SPEAKER_01:You spend, let's be honest, three or four hours tailoring this thing. You are tweaking bullet points, you're agonizing over the verbs. Like, did I manage the project or did I orchestrate the project? And you write this cover letter that is basically a piece of persuasive literature.
SPEAKER_00:Meanwhile, you're already calculating the commute, you're imagining the coffee shop near the new office.
SPEAKER_01:Oh, I've already picked out my order at that coffee shop.
SPEAKER_00:Yeah.
SPEAKER_01:You take a deep breath, you check the clock, it's 2:00 p.m. on the dot. Okay. And you hit submit. You lean back, you feel incredibly accomplished. Maybe you go grab a fresh coffee to celebrate. You come back to your desk. It is 2:06 p.m.
SPEAKER_00:Just six minutes later.
SPEAKER_01:Six minutes later, your phone buzzes. It's an email. Sender is no reply. Subject line is update on your application. And you know, you know before you even click it. Thank you for your interest. But we have decided to pursue other candidates whose qualifications more closely align with our needs.
SPEAKER_00:That feeling is just it's gut-wrenching. It is a physical blow. But I want us to pause right there. I really want to look at the math of that exact moment.
SPEAKER_01:The six minutes.
SPEAKER_00:Six minutes.
SPEAKER_01:I mean, it seems fast, but you hear about recruiters moving fast, right? Maybe they were just online at that exact moment looking at the portal.
SPEAKER_00:Not that fast. Think about the physics of it. It is physically impossible for a human being, even the fastest reader on earth, to open a PDF, parse ten years of complex work history, analyze your soft skills, cross-reference that against the company culture, and make a qualitative judgment on your life's work.
SPEAKER_01:Right.
SPEAKER_00:In 360 seconds.
SPEAKER_01:So if a human didn't read it, nobody read it. Nobody.
SPEAKER_00:No one did. That is the hard truth we have to start with today. You weren't rejected by a recruiter. You were shadowbanned by an algorithm.
SPEAKER_01:And that right there is exactly what we are unpacking in this deep dive. We are going straight into the black box of modern hiring.
SPEAKER_00:It's dark in there.
SPEAKER_01:It is very dark. And to guide us through this, we are relying on a really incredible stack of source material produced by Mark Carey. He is an employment lawyer and the principal at Carey and Associates PC.
SPEAKER_00:Mark is really the authority on this specific intersection. He's been tracking how employment law crashes into technology for years now.
SPEAKER_01:Yeah.
SPEAKER_00:And the insights we have from him today are, frankly, they're a little terrifying. But they're also the roadmap you need to actually survive this process.
SPEAKER_01:Exactly. And we need to set the scene for you listening because the context implies a very specific point in time. The sources place us in March 2026.
SPEAKER_00:Right. So we really need to shift our mindset forward here. We aren't talking about the early experimental days of basic keyword scanners or early ChatGPT anymore.
SPEAKER_01:That was 2023.
SPEAKER_00:Exactly. In 2026, the AI revolution in HR is no longer just a trend piece in a business magazine. It is the infrastructure.
SPEAKER_01:It's the whole system.
SPEAKER_00:It is the wall. And for the vast majority of applicants, that wall is completely invisible.
SPEAKER_01:Which makes it so much harder to climb. And that's the mission for this deep dive. We aren't just going to sit here and complain about robots taking our jobs. We want to understand why you are getting filtered out before the game even begins. We're going to look at the legal landscape of what they call AEDT's automated employment decision tools. And most importantly, we are going to give you the high-level strategy guide to beat the parser.
SPEAKER_00:And we will definitely talk about the nuclear option.
SPEAKER_01:Yes.
SPEAKER_00:Because what do you do when the machine actually breaks the law? Because, spoiler alert, algorithms can break the law, and they do it all the time.
SPEAKER_01:So let's start with this concept of the black box. Because I think the biggest misconception people have, and I have totally been guilty of this, is about the motivation behind this technology. How so? Well, I always assume companies use AI because they want to find the absolute best person on the planet. Like the AI is scanning the globe for the next Steve Jobs or the most genius engineer out there.
SPEAKER_00:Yeah, that's the marketing pitch. Right. That is exactly what the vendors who sell this software tell the Fortune 500 CEOs. But Mark Carey points out a really hard truth in his analysis. Which is companies are not using AI to find the best candidate. They're using AI to find the safest candidate with the least amount of effort.
SPEAKER_01:Safe and lazy.
SPEAKER_00:Safe and lazy.
SPEAKER_01:That is a very different goal than excellence.
SPEAKER_00:It completely changes the paradigm. Think about it from a corporate liability and efficiency standpoint. Yeah. A human recruiter is a bottleneck. Sure. They get tired, they get hungry, they have personal biases, they are slow.
SPEAKER_01:They need weekends off.
SPEAKER_00:Exactly. An AI can process 50,000 applications in an hour. But here's the kicker. The AI isn't programmed to find potential.
SPEAKER_01:Right.
SPEAKER_00:It isn't programmed to spot a diamond in the rough. It is programmed to find patterns.
SPEAKER_01:Okay. Patterns. This brings us to the concept of the racist robot, which I think is a fascinating part of Carey's material. Because whenever we see headlines about AI bias, I feel like people imagine a machine that has been secretly programmed to be hateful.
SPEAKER_00:Like an evil villain.
SPEAKER_01:Yeah. Like there's a literal line of code in the software that explicitly says, you know, do not hire this specific demographic.
SPEAKER_00:Right. The evil genius coder theory. Yeah. And that's rarely the case. It would be way too obvious. And frankly, it would be illegal even under the old outdated laws.
SPEAKER_01:Because it's explicit.
SPEAKER_00:Exactly. If you wrote code that said if gender equals female, then reject, you would be sued into oblivion in five minutes flat.
SPEAKER_01:So how does the bias actually happen then?
SPEAKER_00:The bias in 2026 is much more subtle. And because of that, it's much more dangerous. It's what we call the mirror effect.
SPEAKER_01:It's a mirror effect. Okay, let's break that down. How does that actually work mechanically inside the software?
SPEAKER_00:Okay, imagine a company is trying to train its brand new hiring AI. And AI is essentially like a toddler, right? It knows absolutely nothing until you teach it. So how does the AI know what a quote unquote good employee looks like?
SPEAKER_01:You have to give it an example.
SPEAKER_00:You feed it data. Specifically, companies feed it the resumes of their successful hires from the last 10 years.
SPEAKER_01:Okay. I mean, that sounds perfectly logical on the surface. Clone your winners.
SPEAKER_00:It sounds totally logical until you look closely at who those winners actually were. Let's say, just for the sake of argument, that the top sales representatives at this firm for the last decade happened to be men.
SPEAKER_01:Okay.
SPEAKER_00:And let's say a disproportionate number of them went to Ivy League schools. And just to get really specific for this example, maybe a lot of them played lacrosse.
SPEAKER_01:Okay. So we have a lacrosse bros data set.
SPEAKER_00:Exactly. Now the AI analyzes this giant stack of resumes.
SPEAKER_01:Yeah.
SPEAKER_00:The AI doesn't know why these men were successful.
SPEAKER_01:Right. It doesn't know what sales actually is.
SPEAKER_00:Exactly. It doesn't understand charisma or work ethic or networking. It just sees data points. It sees Dartmouth, it sees Brown, it sees lacrosse, it sees male. So it builds a mathematical ghost, a profile. It decides, strictly based on the math, that lacrosse is a high-value keyword. It decides that specific universities are proxies for competence.
SPEAKER_01:So it completely mistakes correlation for causation.
SPEAKER_00:Precisely. It creates what we call a proxy variable. It says people who play lacrosse are good employees.
SPEAKER_01:Okay, so now fast forward to today, a new applicant comes along. Let's say she's a brilliant saleswoman. She has smashed every quota she's ever been handed. Top performer. But she went to a state school. And instead of playing lacrosse, she spent her weekends volunteering at a local community center.
SPEAKER_00:Now, to a human, specifically a human with a functioning brain, volunteering at a community center shows leadership. It shows empathy, drive, community engagement. It's a huge plus.
SPEAKER_01:Absolutely.
SPEAKER_00:But to the AI, it is a mismatch. It's just noise. She doesn't match the ghost.
SPEAKER_01:So the AI doesn't explicitly say reject her because she's a woman from a state school.
SPEAKER_00:No, it just calculates a match score. Yeah. The lacrosse player gets a 98% match. The state school volunteer gets a 62% match.
SPEAKER_01:And the cutoff is probably like 90%?
SPEAKER_00:Oh, at least. Yeah. So the AI puts her resume at the very bottom of a stack of 5,000 people. No human being ever even sees her name. She isn't rejected because she's bad. She's rejected because she's different from the past.
SPEAKER_01:That is the shadow ban right there. It effectively just automates the biases of the past. If you hire based on who you hired 10 years ago, you are literally just cloning the workforce of 10 years ago.
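For readers who want to see the mirror effect concretely, here is a toy Python sketch of the kind of pattern matching described above. The traits, candidates, and scoring formula are all invented for illustration; real screening models are far more complex, but the failure mode is the same.

```python
# A toy sketch of the "mirror effect": a screener trained only on past
# hires learns proxy variables (school, sport) instead of ability.
# All traits, candidates, and scoring here are invented for illustration.
from collections import Counter

past_hires = [
    {"school": "Dartmouth", "activity": "lacrosse"},
    {"school": "Brown", "activity": "lacrosse"},
    {"school": "Dartmouth", "activity": "lacrosse"},
]

# "Training": count how often each trait appears among past hires.
trait_freq = Counter()
for hire in past_hires:
    trait_freq.update(hire.values())

def match_score(candidate):
    """Frequency-weighted overlap with past hires' traits.
    Pure pattern matching: no notion of merit, only of sameness."""
    total = sum(trait_freq.values())
    hits = sum(trait_freq[t] for t in candidate.values())
    return hits / total

print(match_score({"school": "Dartmouth", "activity": "lacrosse"}))    # high
print(match_score({"school": "State U", "activity": "volunteering"}))  # 0.0
```

The volunteer scores zero not because she is unqualified, but because none of her traits appear in the historical data: the model only rewards resemblance to the past.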
SPEAKER_00:That's exactly it. And in the legal industry, the specific term for this, and Mark Carey emphasizes this heavily in his analysis, is disparate impact.
SPEAKER_01:Disparate impact. Let's define that for everyone because that sounds like crucial legalese we really need to understand.
SPEAKER_00:It is crucial. You really have to understand the difference here. There are basically two types of discrimination we look at. Disparate treatment is the obvious one. That's when the manager says, I am not hiring you because you are older.
SPEAKER_01:Right. It's direct and intentional.
SPEAKER_00:Very direct. But disparate impact means the policy or the algorithm looks totally neutral on its face. It's not explicitly saying no women, but the criteria it uses, the inputs, like the lacrosse example, result in discrimination against a protected class.
SPEAKER_01:So the intent of the programmer or the company doesn't actually matter.
SPEAKER_00:The intent is completely irrelevant legally. The outcome is what matters. If your fancy, supposedly neutral math problem results in filtering out 90% of African American applicants, you have a disparate impact problem and you are liable.
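As a side note, disparate impact is usually screened for with a simple statistic. The EEOC's "four-fifths rule" flags adverse impact when a group's selection rate falls below 80% of the highest group's rate; the hosts don't walk through the math, so the numbers below are hypothetical:

```python
# Sketch of the EEOC "four-fifths rule," a standard first test for
# disparate impact: a group's selection rate under 80% of the highest
# group's rate is evidence of adverse impact. Figures are made up.
def selection_rate(passed, applied):
    return passed / applied

def four_fifths_check(rates):
    """Compare every group's selection rate to the most-selected group."""
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(passed=200, applied=400),  # 50% advance
    "group_b": selection_rate(passed=30, applied=300),   # 10% advance
}
print(four_fifths_check(rates))  # group_b fails: 0.10 / 0.50 = 0.2 < 0.8
```

Passing this check does not prove a tool is fair, and failing it does not end the analysis; it is the threshold statistic regulators and auditors reach for first.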
SPEAKER_01:That is so incredibly frustrating for the applicant, though. It just makes you feel entirely powerless. Because how do you fight a math problem? You can't argue with a server farm.
SPEAKER_00:Well, that is exactly where the narrative starts to shift. Because we are looking at this from the perspective of 2026, and the legal landscape has actually caught up.
SPEAKER_01:Right.
SPEAKER_00:Mark Carey makes a really big point of this. The Wild West days of AI hiring, where companies could just do whatever they wanted and hide behind the tech, those days are coming to an end.
SPEAKER_01:Yeah. One of the things Mark mentions in the source material is the death of the trade secret defense. I remember hearing about this a few years ago. What exactly was that defense?
SPEAKER_00:So back in the early 2020s, if you sued a company for discrimination, they would just throw their hands up in the air and say, hey, look, we don't know why the algorithm rejected you. It's a black box.
SPEAKER_01:Right. The computer did it.
SPEAKER_00:Exactly. They'd say, we bought this software from a third-party vendor, and the underlying code is a proprietary trade secret, so we legally can't show it to you. Sorry.
SPEAKER_01:It's basically the dog ate my homework, but for corporate lawyers.
SPEAKER_00:The algorithm ate my homework.
SPEAKER_01:Yeah.
SPEAKER_00:It's intellectual property. We can't reveal the secret sauce. But in 2026, that excuse is dead. Judges and legislatures are finally saying, we don't care about your trade secret, we care about civil rights.
SPEAKER_01:Which is a massive shift.
SPEAKER_00:Huge. We are seeing a rise in labor laws specifically targeting these AEDTs.
SPEAKER_01:Automated employment decision tools.
SPEAKER_00:Correct. And this brings us to the bias audit.
SPEAKER_01:The bias audit. Okay, this sounds like an IRS audit, but for robots.
SPEAKER_00:Essentially, yes. In major jurisdictions, New York really started this trend with Local Law 144, but it's spreading federally now. Companies are often required to perform annual independent audits of their AI.
SPEAKER_01:And what does that entail?
SPEAKER_00:They have to prove, mathematically, that their machines aren't filtering out protected classes.
SPEAKER_01:So they literally have to run the numbers and publish them.
SPEAKER_00:They have to show their work. They have to answer the hard questions. Did we reject women at a significantly higher rate than men for this specific role?
SPEAKER_01:And if they did.
SPEAKER_00:If the answer is yes, they can't just shrug and blame the black box anymore. They have a massive liability problem. They have to either fix the tool, adjust the algorithm, or stop using it entirely.
SPEAKER_01:Wow. Mark also talks about the right to transparency, which is something I honestly didn't even know existed.
SPEAKER_00:It's a right that so many applicants just don't know they have. In several jurisdictions right now, if you suspect you were screened out by an AI, you have the right to ask.
SPEAKER_01:You can just ask them.
SPEAKER_00:You can formally send a request. You ask, was an automated tool the sole or primary factor in my rejection?
SPEAKER_01:And they actually have to tell you. They can't just ignore the email or send a form letter.
SPEAKER_00:They're legally required to disclose it. And that disclosure is the first step for you. If they say yes, an AEDT made this decision, and you know for a fact you are highly qualified, you now have the very first building block of a legal case.
SPEAKER_01:That is empowering. We also have the EEOC, the Equal Employment Opportunity Commission, getting heavily involved here. Mark mentions they are taking a very aggressive stance, particularly regarding age discrimination. This seems like a really huge area for AI bias.
SPEAKER_00:It's fascinating and it really impacts a huge portion of listeners. The EEOC is really cracking down on what they call coding out older workers. We've seen cases where AI was programmed to actively look for terms like digital native.
SPEAKER_01:Digital native. See, that sounds like corporate buzzword bingo. It sounds completely harmless.
SPEAKER_00:It sounds harmless, yes. But legally it is a proxy for age. Think about it. Who is a digital native? Someone born after 1980, arguably even after 1990. If you require that term in a resume or profile, you are effectively excluding everyone over the age of 45.
SPEAKER_01:Right. Because I'm not putting digital native on my resume.
SPEAKER_00:Exactly. Or even simpler, the AI just filters for graduation dates.
SPEAKER_01:Oh wow.
SPEAKER_00:Yeah. If the machine is set to prioritize recent grads, say, looking for a class of 2018 or later, it is systematically eliminating anyone with significant long-term experience.
SPEAKER_01:Which is just plain age discrimination.
SPEAKER_00:And the old argument of, well, the machine did it doesn't fly with the EEOC anymore.
SPEAKER_01:Don't buy it.
SPEAKER_00:Not at all. The underlying principle in 2026 is this: the machine is the weapon, but the employer pulls the trigger.
SPEAKER_01:I like that phrasing.
SPEAKER_00:You are absolutely responsible for the tools you choose to use. You cannot outsource your civil rights liability to a software vendor.
SPEAKER_01:Oh, okay. So the law is catching up. That is definitely the good news. But I want to be really practical here for our listeners. If I'm listening to this right now and I'm applying for a job today, I can't really wait for a class action lawsuit to wind its way through the federal courts for three years.
SPEAKER_00:Right. You need to pay rent.
SPEAKER_01:I need a job. I need to pay my rent. I need to get past this gatekeeper right now.
SPEAKER_00:Absolutely. We need to move from the theory to the action. We really need to talk strategy. Mark Carey calls this specific section beating the bot.
SPEAKER_01:I love this part. It feels like we're planning a casino heist.
SPEAKER_00:It kind of is.
SPEAKER_01:But the heist is just getting a human being to read our PDF.
SPEAKER_00:That's the sad reality. The heist is just getting seen by human eyes. The first step here is a total mindset shift. You have to stop writing for humans.
SPEAKER_01:Stop writing for humans.
SPEAKER_00:Stop trying to be charming. Stop trying to be unique or witty. You need to start writing exclusively for the parser.
SPEAKER_01:The parser. Let's define that clearly so everyone is on the same page. What exactly is the parser seeing when I submit my application?
SPEAKER_00:The parser is the software component that strips your resume down to its bones. It doesn't see your nice, elegant layout. It doesn't see your bold fonts, your italics, or the cream-colored background you picked.
SPEAKER_01:Right. It doesn't care about aesthetics.
SPEAKER_00:It only reads raw text. It essentially converts your entire professional career into a series of data strings. It is strictly looking for code.
SPEAKER_01:Okay, so how do we actually make this parser happy? What's the practical move?
SPEAKER_00:Strategy number one. The seed data mirror.
SPEAKER_01:Seed data.
SPEAKER_00:Every AI is fed seed data. These are the specific words it is explicitly told to hunt for by the employer. And usually these words are hiding in plain sight. They are right there in the key responsibilities section of the job posting.
SPEAKER_01:Okay, walk me through an example of how we mirror that.
SPEAKER_00:Let's say the job posting says the candidate must have, quote, facilitated cross-functional team alignment.
SPEAKER_01:That is a very specific, incredibly corporate, slightly annoying phrase.
SPEAKER_00:It's terrible. Now you might look at that as a normal human and think, okay, I did that. I led groups, I ran meetings across different departments. So you write on your resume under your last job, led groups.
SPEAKER_01:Which is good writing. "Led groups" is punchy, it's active voice. Strunk and White would absolutely love it.
SPEAKER_00:And the robot will absolutely hate it. And it will get you rejected. Because the AI is a literalist. It doesn't know that led groups means the exact same thing as facilitated cross-functional team alignment. It is looking for the mathematical string match.
SPEAKER_01:So it wants the exact sequence of letters.
SPEAKER_00:It wants to see exactly what it was fed.
SPEAKER_01:So you have to parrot it back. Like you literally just copy and paste. The thesaurus is the enemy.
SPEAKER_00:The thesaurus is your absolute enemy here. If they say client success, do not write customer service. To a human, they are the same. To a parser, they're entirely different data strings.
SPEAKER_01:That feels so counterintuitive, though. We are taught in school to not be repetitive, to use varied vocabulary to show intelligence.
SPEAKER_00:You're writing for a machine now, not a high school literature professor. Once you get past the machine, you can explain the nuance and your great vocabulary in the actual human interview. But you don't get the interview if you don't feed the machine the exact seed data it's starving for.
SPEAKER_01:You just have to be shameless about it.
SPEAKER_00:It's completely shameless.
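To see why the thesaurus is the enemy, here is a toy screener that does literal phrase matching against the posting's seed phrases. This is a deliberately naive sketch, not any real applicant tracking system, but it captures the literalism the hosts describe:

```python
# Toy keyword screener showing why synonyms fail: a naive parser does
# literal substring matching against the posting's "seed" phrases.
# Phrases and resumes are hypothetical.
seed_phrases = ["facilitated cross-functional team alignment", "client success"]

def keyword_hits(resume_text):
    """Return which seed phrases appear verbatim (case-insensitive)."""
    text = resume_text.lower()
    return [p for p in seed_phrases if p in text]

human_resume = "Led groups across departments; strong customer service."
mirrored_resume = ("Facilitated cross-functional team alignment; "
                   "drove client success across departments.")

print(keyword_hits(human_resume))     # [] -- synonyms score zero
print(keyword_hits(mirrored_resume))  # both phrases match
```

The "good writing" resume matches nothing; the shameless copy-paste matches everything. That is the whole argument for mirroring the job description.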
SPEAKER_01:Okay, so copy the homework verbatim. Got it. What is strategy number two? And I think this one is really going to hurt some of the graphic designers listening out there. Kill the columns.
SPEAKER_00:Oh, this is a massive one. I see this mistake all the time. People want their resume to look beautiful. They want it to fit perfectly on one page. So they go to Canva or somewhere and they create this beautiful two-column layout.
SPEAKER_01:Yes, the modern template.
SPEAKER_00:Right. On the left, maybe there's a dark sidebar with their skills, phone number, and email in white text. And then on the right, the wider column has their actual work history.
SPEAKER_01:It looks clean. It's very modern.
SPEAKER_00:It looks fantastic to a human eye. But remember the parser.
SPEAKER_01:Right.
SPEAKER_00:Most parsers read raw text, usually from left to right, straight across the entire page. Yeah. Completely ignoring the visual separation you created.
SPEAKER_01:Oh no. I see exactly where this is going.
SPEAKER_00:So the parser reads the first line of your sidebar, which is maybe your phone number. Then it jumps across the invisible boundary and reads the first line of your experience on the right, which is maybe your job title.
SPEAKER_01:So it reads them as one single line of text.
SPEAKER_00:Yes. Then it goes to the second line of the sidebar and mashes it with the second line of the experience.
SPEAKER_01:So it's just mashing them together entirely.
SPEAKER_00:Completely. It creates absolute gibberish sentences that might read 555-0199 senior project manager.
SPEAKER_01:Which means nothing.
SPEAKER_00:To the AI, that is not actionable data. It's corruption. You don't look like a highly qualified candidate. You look like your file is broken. You essentially become mathematically illiterate in the eyes of the algorithm.
SPEAKER_01:That is wild. Because you spent hours making it look good and you actually ruined your chances. So what's the fix? Are we just going back to the absolute basics?
SPEAKER_00:Boring is best.
SPEAKER_01:Boring is best.
SPEAKER_00:Single column, top to bottom, standard headers, use Arial or Times New Roman. Do not make the robot work to understand your formatting.
SPEAKER_01:Because the robot won't try.
SPEAKER_00:Exactly. If the robot gets confused, it doesn't email you and ask for clarification. It just deletes you and moves to the next of the 50,000 files.
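The column-mashing failure is easy to simulate. The sketch below (with made-up resume content) shows what a simple left-to-right text extractor "sees" when it reads a two-column layout row by row:

```python
# Simulation of how a left-to-right parser garbles a two-column resume:
# it reads each visual row straight across, merging sidebar and body.
# The resume content here is invented for illustration.
sidebar = ["555-0199", "Python, SQL", "jane@example.com"]
main_col = ["Senior Project Manager", "Acme Corp, 2019-2024", "Led migration"]

# What the parser "sees": each row concatenated across the column gap.
parsed_lines = [f"{left} {right}" for left, right in zip(sidebar, main_col)]
for line in parsed_lines:
    print(line)
# First line comes out as "555-0199 Senior Project Manager" --
# gibberish to any downstream scoring model.
```

A single-column layout makes every one of those lines a clean, parseable statement, which is the whole case for "boring is best."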
SPEAKER_01:Okay. Single column only. Got it. Now strategy number three involves what Mark calls contextual keywords. He says the bots in 2026 have actually evolved beyond just counting the number of words.
SPEAKER_00:Right. Because in the old days, and by old days I mean like 2018 or 2019, you could just spam the word sales 50 times in white text at the bottom of your PDF, and maybe you could trick the bot into thinking you are a sales genius.
SPEAKER_01:Oh, I definitely knew people who did the white text trick.
SPEAKER_00:It used to work.
SPEAKER_01:Yeah.
SPEAKER_00:But that doesn't work anymore. The AI is much smarter now. It is actively looking for keyword proximity.
SPEAKER_01:Proximity, meaning where the words are located in relation to each other on the page.
SPEAKER_00:Exactly. The AI wants to see a specific formula to validate your experience. It's looking for skill plus tool plus result. And it wants to see them in the same breath, in the same sentence. It's looking for context, not just frequency.
SPEAKER_01:Can you give us a before and after example of that?
SPEAKER_00:Sure. So a weak bullet point, even if it has keywords, might be: use project management software, increase speed.
SPEAKER_01:Okay. The keywords are there. Project management, speed.
SPEAKER_00:They are there, but they are disconnected and they are vague. The AI scores this very low because it lacks validation.
SPEAKER_01:It's generic.
SPEAKER_00:Now a high-value data sentence, one that truly feeds the machine's hunger, would be written like this: Utilized JIRA for project management to increase delivery speed by 20%.
SPEAKER_01:Oh, I see. JIRA is the tool.
SPEAKER_00:Exactly.
SPEAKER_01:Project management is the skill. Right. And 20% is the result or the metric.
SPEAKER_00:Yes. It links them all together perfectly. The AI sees JIRA right next to project management, right next to 20%. That specific proximity signals deep competency.
SPEAKER_01:It creates a complete picture.
SPEAKER_00:It creates a mathematical vector of success. It bypasses the shadow ban because it hits all the high-value markers simultaneously.
SPEAKER_01:It's almost like you are literally learning a programming language, but the code is written in English.
SPEAKER_00:That is a brilliant way to put it.
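The skill-plus-tool-plus-result formula can be sketched as a toy proximity scorer. The scoring weights, distance threshold, and metric regex below are invented for illustration; real ranking models are opaque, but this captures the idea that nearby, quantified keywords score higher than scattered ones:

```python
# Toy proximity scorer: reward bullets where the skill, the tool, and a
# quantified result appear close together. Thresholds are invented.
import re

def proximity_score(bullet, skill, tool):
    words = bullet.lower().split()
    def pos(term):
        # Position of the first word containing the term's leading word.
        head = term.lower().split()[0]
        return next((i for i, w in enumerate(words) if head in w), None)
    s, t = pos(skill), pos(tool)
    if s is None or t is None:
        return 0                      # missing skill or tool: no credit
    score = 1
    if abs(s - t) <= 5:               # skill and tool "in the same breath"
        score += 1
    if re.search(r"\d+%", bullet):    # a quantified result nearby
        score += 1
    return score

weak = "use project management software, increase speed"
strong = "Utilized JIRA for project management to increase delivery speed by 20%"
print(proximity_score(weak, "project management", "JIRA"))    # 0: no tool
print(proximity_score(strong, "project management", "JIRA"))  # 3: full marks
```

The weak bullet scores zero because the tool never appears at all; the strong bullet hits every marker in one sentence, which is exactly the "complete picture" the hosts describe.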
SPEAKER_01:So we've covered the black box, we've covered the strategies for beating the parser. But sometimes, even if you do everything absolutely right, you use the single column, you mirror the seed data keywords perfectly, you do the proximity formulas, things still go wrong.
SPEAKER_00:They do.
SPEAKER_01:Or worse, things are actually malicious. We need to talk about what Mark calls the nuclear option, legal recourse.
SPEAKER_00:This is where Mark Carey's expertise as a litigator really, really shines. Because there are times when you shouldn't just tweak your resume and try again. There are times when you should sue.
SPEAKER_01:When does a rejection actually become a lawsuit, though? That feels like a massive leap for a normal person to make.
SPEAKER_00:It is a big leap. But Mark points to a huge rising trend in retaliation claims involving AI. And this is really sinister. And frankly, it is happening way more than we think it is.
SPEAKER_01:Paint the picture for me. How does AI play into retaliation?
SPEAKER_00:Okay, imagine a scenario. You are a current employee at a big company. Everything is fine, your performance reviews are good. Then you have to file a whistleblower complaint about some safety violations you saw. Or maybe a more common scenario, you file for FMLA leave because you need to care for a sick parent. You are just exercising your standard federally protected legal rights.
SPEAKER_01:Right. You're being a good citizen or you're taking care of your family, totally normal.
SPEAKER_00:Right. But then a few months later, you apply for an internal promotion or maybe a transfer to a different department to get away from a bad manager. And you are rejected instantly by the system.
SPEAKER_01:And your manager just shrugs and says, Hey, sorry, the computer said no. My hands are totally tied.
SPEAKER_00:Exactly. Employers are increasingly using AI as legal insulation.
SPEAKER_01:Yeah.
SPEAKER_00:They want to hide behind the algorithm. They claim, in court, it wasn't the human manager who was retaliating against the whistleblower. It was just our unbiased algorithm that didn't select them. We had absolutely nothing to do with it.
SPEAKER_01:But that's a lie.
SPEAKER_00:It's almost always a lie, because the algorithm is controlled by humans. If we can prove that the data fed into that algorithm was poisoned.
SPEAKER_01:Poisoned, like they flagged your file.
SPEAKER_00:Exactly. Meaning someone went into the HR system and flagged your file. Or that the criteria were secretly adjusted specifically to exclude your unique profile. That is workplace retaliation. Pure and simple. It's just high-tech retaliation.
SPEAKER_01:That is absolutely chilling. It's gaslighting on a corporate scale.
SPEAKER_00:It is. It's weaponizing math against employees.
SPEAKER_01:So what is the action plan here? If I'm listening to this right now and I'm thinking, wait a minute, I think this exactly happened to me. What do I actually do?
SPEAKER_00:Mark outlines three very clear steps. Step one, request your data. Do not just sit there and take the no. Ask the company for the specific criteria used by the automated screening tool. Put them on notice. Put them on notice that you know your rights and you are watching them.
SPEAKER_01:Okay, step two.
SPEAKER_00:Step two is critical. Look for patterns. If it's just you who got rejected, it might be really hard to prove bias or retaliation in court. But if you talk to your coworkers, which by the way, you have a federally protected legal right to do.
SPEAKER_01:Wait, you can discuss this stuff?
SPEAKER_00:You can. And if you find out that five other colleagues who are all over the age of 50 were all inexplicably screened out for the exact same internal role. Or if a group of women returning from maternity leave were all rejected instantly.
SPEAKER_01:Well then that's not a coincidence.
SPEAKER_00:No. You don't have a tech glitch, you have a class action lawsuit.
SPEAKER_01:There is absolute strength in numbers.
SPEAKER_00:And step three: document the impossible rejection. Keep those emails.
SPEAKER_01:The 2:06 PM email.
SPEAKER_00:Yes. Do not delete them. That timestamp is your physical evidence.
SPEAKER_01:How does the timestamp help in court, though?
SPEAKER_00:It proves mathematically that no individualized assessment actually occurred. In a court of law, showing a judge that a decision was made in six minutes about a 20-year complex career helps completely dismantle the corporate argument that you were given a fair and thorough shot. It proves beyond a doubt the machine made the call.
SPEAKER_01:It's amazing how much legal power is held in that one little six-minute timestamp.
SPEAKER_00:It is the smoking gun of the 2026 hiring landscape.
SPEAKER_01:You know, we have talked a lot about machines today, algorithms, parsers, mathematical ghosts. It is so easy to get completely lost in the tech and feel overwhelmed. But as we wrap up this deep dive, I think it's incredibly important to come back to the human element here.
SPEAKER_00:Absolutely. The boss might be a line of code these days, but the law still belongs to the people. That is the core message Mark Carey really wants to get across in all of his work.
SPEAKER_01:We are not just data points. We are not just seed data or statistical proxies for an AI to sort through.
SPEAKER_00:No. And you shouldn't let a black box determine your actual worth. The tools are there for you, the transparency laws, the bias audits we discussed, the resume strategies to beat the parser. It's really all about taking back control. You are not powerless in this system.
SPEAKER_01:If you are dealing with this right now, if you have a toxic boss who is hiding behind an algorithm or you've lost a job and you feel like you are just shouting your resume into a void, you do not have to do this alone.
SPEAKER_00:Mark Carey and his firm, Carey & Associates, P.C., are really the go-to resource for this exact issue. They are actively on the front lines of this fight. You can find them very easily by searching careyandassociates.com. They have a ton of free resources on this new digital frontier of employment law.
SPEAKER_01:And here is our final call to action for you listening. You know that friend you have who is currently stuck in the application grind, the one who is just exhausted and feeling totally defeated. Share this deep dive with them.
SPEAKER_00:Send it to them right now. Give them the cheat codes. Let's get them back in the game with a fair shot.
SPEAKER_01:Because the rejection might feel incredibly personal, but remember, it was probably just a six-minute math error. And now you know exactly how to correct it. But I do want to leave you with one final, slightly provocative thought before we go. Something to just mull over.
SPEAKER_00:Okay, let's hear it.
SPEAKER_01:We've talked all about how companies use AI to write the job descriptions and screen the resumes, right? Right. Well, what happens in a year or two when every applicant is also using AI to write the resumes? You're gonna have an AI reading a job description, telling another AI to write a resume, to bypass an AI gatekeeper, to get a job that might eventually be done by an AI.
SPEAKER_00:It's literally just machines talking to machines at that point. The dead internet theory applied to human resources.
SPEAKER_01:Exactly. Where does the human actually fit into human resources anymore? It's something we are all gonna have to figure out very soon. Thanks for listening to the deep dive. We will see you next time.