Employee Survival Guide®
The Employee Survival Guide® is an employment law podcast exclusively for employees, covering everything related to work and your career. We share the employment law information your employer and Human Resources do not want you to know and guide you through various work and employment law issues. This is an employee podcast.
The Employee Survival Guide® podcast is hosted by seasoned employment law attorney Mark Carey, who has practiced exclusively in employment law for the past 29 years. Mark has seen just about every type of employment and work dispute there is and has filed several hundred work-related lawsuits in state and federal courts around the country, including class action suits. He takes a no-frills, blunt approach to the employment law and work issues faced by millions of workers nationwide. Mark endeavors to present both sides of every issue discussed on the podcast so you can make an informed decision. Again, this is a podcast only for employees.
Subscribe to our employee podcast show in your favorite podcast app including Apple Podcasts and Spotify.
You can also subscribe to our feed via RSS or XML.
If you enjoyed this episode of the Employee Survival Guide®, please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this employee podcast on your favorite podcast player, such as Apple Podcasts or Spotify. Thank you!
For more information, please contact Carey & Associates, P.C. at 203-255-4150, or email at info@capclaw.com.
Also go to our website EmployeeSurvival.com for more helpful information about work and working.
AI Hallucinations and the Employment At Will Rule were Misstatements Until They Became Reality
AI hallucinations are misstatements, just like the employment-at-will rule: neither was ever intended to be true, but both became reality. In this episode of the Employee Survival Guide®, Mark Carey dives deep into the intersection of employment law and AI hallucinations. These hallucinations pose risks that echo historical inaccuracies in legal doctrine, potentially reshaping the landscape of employee rights and workplace culture.
Carey begins by unraveling the at-will employment rule, a cornerstone of employment law that has persisted despite its shaky origins. He draws a stark parallel between the historical evolution of employment law and the current challenges posed by AI hallucinations, emphasizing the critical need for verification and scrutiny of AI outputs in legal contexts. As AI continues to permeate our workplaces, the dangers of unverified information become increasingly apparent, creating a precarious environment for employees navigating issues such as discrimination, retaliation, and hostile work environments.
Throughout the episode, listeners will gain valuable insights into the implications of AI hallucinations for employment law, including how AI hiring bias can affect job opportunities and the potential for discrimination in the workplace. Carey advocates a transformative shift from at-will employment to a more accountable system that mandates stated reasons for termination, ensuring transparency and fairness in employee relations.
Join us as we explore how understanding employment contracts, negotiating severance packages, and advocating for employee rights can empower you in the face of evolving workplace dynamics. Whether you're dealing with performance reviews, workplace harassment, or navigating remote work challenges, this episode is packed with essential tips and strategies to enhance your job survival skills.
Don't miss this opportunity to equip yourself with the knowledge needed to thrive in today's complex work environment. Tune in to the Employee Survival Guide® and discover how to navigate the intricacies of employment law, safeguard your rights, and advocate for a healthier, more equitable workplace culture. Your career deserves it!
If you enjoyed this episode of the Employee Survival Guide®, please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player, such as Apple Podcasts or Spotify. Leaving a review lets other listeners know that you found the content on this podcast important in the area of employment law in the United States.
For more information, please contact our employment attorneys at Carey & Associates, P.C. at 203-255-4150, www.capclaw.com.
Disclaimer: For educational use only, not intended to be legal advice.
Hey, it's Mark, and welcome back to the next edition of the Employee Survival Guide. Today's topic of choice is how employment law demonstrates the dangers of AI hallucination. Bear with me, I'm going to get into a good one. Employment law has long served as a proving ground for how legal issues harden into doctrine because it sits at the intersection of economics, institutional power, and social policy. Employment law often absorbs contested assumptions early and then carries them forward long after their origins have faded from view. The most familiar example is the at-will employment rule. You know what that means. You've heard about it. Today it is treated as a background principle so fundamental and entrenched that it is rarely interrogated or examined, except that I do. I call it into question. Yet legal historians widely agree that the at-will employment rule rests on historical misstatements that hardened into doctrine through repetition, convenience, and institutional inertia. That history matters now more than ever because it closely parallels a growing risk associated with artificial intelligence. Hallucinated assertions, once relied upon and repeated, can quietly become law. The danger is not just that AI gets things wrong; the danger is what happens when no one checks it. Do you check your AI research? This is especially true in a field that already knows how easily unverified assumptions can become binding rules. The modern at-will doctrine can be traced with unusual precision to a single source: an attorney up in New York named Horace Gay Wood, who back in 1877, yes, I'm going that far back, published a treatise titled Master and Servant. That's how they used to title things back then. It's a real thing I actually learned in law school.
Writing with confidence rather than caution, Wood asserted that the common law, which he borrowed from England, had long recognized employment relationships as terminable at the will of either party unless a fixed term was expressly stated, like, you know, one or two years. He presented the proposition not as a contested view, but as settled law. Now look what happened. Courts proved receptive.
Trial courts cited Wood, appellate courts cited those trial courts. Soon courts were no longer citing Wood at all; they were citing one another. This is called stare decisis in the law: you cite the prior case that came before. Within a remarkably short period of time, Wood's formulation stopped looking like an argument and began to look like a description of reality. By the early 20th century, the at-will rule no longer required explanation. It was simply the law. And employers loved it. The difficulty, as later historians and scholars painstakingly demonstrated, was that Wood's account did not accurately reflect the common law he claimed to summarize. Many of the English and American cases he cited did not stand for at-will termination at all. Earlier courts often presumed year-to-year employment or required cause for dismissal, particularly in skilled trades and long-term service relationships. I know you're getting bored hearing about this stuff, but I'm going to keep going. The historical record was uneven, contextual, and fact-dependent. Wood's rule was not. By the time those inconsistencies were exposed, it was too late. Employment law, perhaps more than any other field, rewards rules that are easy for courts to administer. Once courts began relying on the at-will doctrine to resolve cases efficiently, they had little incentive to reopen its foundations. I wrote an article a long time ago about this same topic and where the words "at will" came from, and when you try to argue it before a judge, it's nearly impossible to change their mind, or an employer's mind. Each citation became another layer of insulation. The doctrine's authority no longer depended on whether it was historically correct, but on the simple fact that it was already being treated as correct. Sound familiar?
This is the critical mechanism by which legal fiction becomes legal reality: not through conspiracy or bad faith, but through repetition, convenience, institutional trust, and employers. Employers. Don't forget that. They love this rule. It allows them to run their private governments without question and to hide discrimination behind the at-will rule. If you haven't gotten that from me yet, please accept that fact. It's a reality. Large language models are not malicious. They do not lie. They predict text based on patterns. When source material is ambiguous, incomplete, or conflicting, AI fills the gaps, often from websites, in a way that sounds authoritative. The risk emerges when those outputs are relied upon without verification, repeated by others, embedded in briefs, policies, pleadings, or articles, and later cited as if they reflect settled fact. That is a scary reality, and it's coming true. At that point, the hallucination no longer looks like an error; it looks like consensus. This is precisely the same dynamic that allowed the at-will employment doctrine to take hold. Confidence substituted for accuracy, repetition substituted for proof. Until recently, concerns about AI hallucinations in the legal field were largely hypothetical: warnings about what might happen if fabricated authority slipped through the cracks. That line has now been crossed. Let me pause and set something up. When we're litigating in court, arguing motions before a judge, and the judge is a lawyer, we're arguing points of authority, meaning case decisions that came before, or statutes, or cases interpreting statutes: precedent. And we have to make sure that the citations to cases and the arguments we're using are real. Ethically, we cannot lie or misrepresent those case law decisions or statutes to the court, because the whole legal fabric falls apart. So let's go further.
In Shahid v. Esaam, a Georgia Court of Appeals case decided June 30, 2025, the court vacated a trial court order after concluding that the order relied on non-existent case law that had been cited by counsel. The appellate court expressly noted that the trial court's written order had incorporated bogus authorities, cases that did not exist in any reporter or database, and imposed sanctions in connection with their use. The error was ultimately corrected, the order was vacated, and the fictitious cases did not become binding precedent, but they could have with nothing more than simple oversight. The significance lies in what happened before the correction. Hallucinated case law crossed the institutional threshold and entered an operative judicial order. That's huge, folks. That is the precise point at which legal fiction stops being theoretical. Employment law teaches us that not every error is caught immediately, and that some survive long enough to shape doctrine before anyone realizes what happened. That's what happened with the employment-at-will rule. Again, it's just a hallucination, and it involves all of your jobs. You are all at-will employees. Now do you see the significance of this? Employment law developed in response to industrial efficiency, labor mobility, and economic pressure. Courts favored simple, repeatable rules; at-will employment fit that need. Modern institutions face similar pressures with AI: speed over verification, cost savings over primary research, plausibility over provenance. AI text is especially dangerous because it does not announce its uncertainty. A hallucinated case citation can look indistinguishable from a real one. A fabricated historical fact can sound identical to a well-sourced one. And once that material circulates, particularly in employment policies, handbooks, or litigation templates, it acquires legitimacy simply by being written down.
This isn't, at its core, a technology problem; it's a process problem. Employment law is uniquely vulnerable to this phenomenon. It is precedent-driven, policy-sensitive, and often shaped by broad generalizations rather than narrow holdings. Doctrinal shortcuts like employment at will tend to persist precisely because they are useful. If AI-generated errors enter employment law practice through briefs, internal guidance, training materials, or template policies, they may not be challenged immediately. They may simply be repeated. Over time, they can reshape how rules are understood, even if no single case openly endorses them. That is how employment law has always evolved, for better or worse. The lesson of at-will employment is not that courts are careless or that technology is dangerous. It is that systems reward convenience, and once a convenient rule is accepted, it rarely gets revisited. The at-will employment rule survived not because it was correct, but because it was useful. AI hallucinations will survive for the same reason unless institutions impose discipline. AI can be an extraordinary tool for employment lawyers: drafting, research, pattern recognition. We just went through a review of a closed, sandboxed AI system, and we're doing another one next week or so, to see how it makes our business and the work we do as lawyers more efficient. But it must be treated the way lawyers treat junior associates: helpful, fast, and never authoritative without review. We have to check things. Primary source verification is not optional in this new paradigm. Institutional norms must treat AI output as a starting point, not an end point, especially in a field where assumptions can harden into doctrine for generations. And you folks have lived and worked under the doctrine of at-will employment for generations. That's how long it has existed. Employment law shows us how easily legal fictions can become foundational rules.
It also shows us how difficult those rules are to unwind once they take hold. The Georgia case demonstrates that AI hallucinations are no longer hypothetical. They have already entered court orders, if only briefly. History suggests that the next such error may survive longer. Further, no comprehensive survey has been done to see where such errors may have already worked their way into orders unnoticed. AI does not introduce a new risk; it accelerates old ones. The question is not whether AI will hallucinate. It will. The question is whether lawyers, who already live with the consequences of historical misstatements, will stop, verify, and ask, because that's their job, and their licensure requires them to stop, look, verify, and ask, whether the thing everyone is repeating is actually true. I guess I'm doing that with employment at will. I'm questioning it all along. Is it really true? And I keep on questioning because I don't think it's true. I think there's a better way. It's called employment for cause, meaning stated reasons for termination instead of just ipse dixit: we're gonna fire you for no reason, without any notice. So, with all that said, thank you for listening, and let me be of service.