Employee Survival Guide®
The Employee Survival Guide® is an employees only podcast about everything related to work and working. We will share with you all the information your employer does not want you to know about working and guide you through various work and employment law issues.
The Employee Survival Guide® podcast is hosted by seasoned Employment Law Attorney Mark Carey, who has practiced exclusively in the area of Employment Law for the past 28 years. Mark has seen just about every type of work dispute there is and has filed several hundred work-related lawsuits in state and federal courts around the country, including class action suits. He has a no-frills and blunt approach to work issues faced by millions of workers nationwide. Mark endeavors to provide both sides of each and every issue discussed on the podcast so you can make an informed decision.
Subscribe to our show in your favorite podcast app including Apple Podcasts, Stitcher, and Overcast.
You can also subscribe to our feed via RSS or XML.
If you enjoyed this episode of the Employee Survival Guide®, please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player such as Apple Podcasts. Thank you!
For more information, please contact Carey & Associates, P.C. at 203-255-4150, or email at info@capclaw.com.
Also go to our website EmployeeSurvival.com for more helpful information about work and working.
S6 Ep 141 When Your Boss is a Robot: Understanding AI in the Workplace and Your Rights
Comment on the Show by Sending Mark a Text Message.
Your next performance review might be scored by a model you’ve never met. We dig into how AI is reshaping hiring, promotion, discipline, and workplace surveillance, and we explain what that means for your rights under anti-discrimination and privacy laws. From the promise of efficiency to the reality of bias, we unpack why intent isn’t required for liability and how disparate impact applies whether a manager or a machine makes the call.
We walk through real examples, including Amazon’s abandoned hiring tool that learned to prefer men, and the EEOC’s first AI hiring settlement that signaled employers can’t outsource accountability to vendors. We also trace the policy whiplash: federal agencies stepping back from guidance, while states and cities step up. New York City’s bias audits and applicant notices, Illinois’s expanded protections and BIPA enforcement, and California’s “No Robobosses” proposals point to a patchwork of rules that matter the moment software touches your resume, your video interview, or your keyboard.
Surveillance is expanding too. Keystroke tracking, productivity dashboards, and biometric tools promise insight but raise serious questions about consent, data handling, and monitoring off-duty or in private spaces. We share practical steps: ask if AI is used in decisions about you, request accessible alternatives, document outcomes that don’t add up, and remember that retaliation for raising concerns is illegal. The technology may be new, but your core protections are not. Subscribe for more clear guidance on navigating AI at work, share this conversation with a colleague who needs it, and leave a review to help others find the show.
If you enjoyed this episode of the Employee Survival Guide, please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player such as Apple Podcasts. Leaving a review will inform other listeners that you found the content of this podcast important in the area of employment law in the United States.
For more information, please contact our employment attorneys at Carey & Associates, P.C. at 203-255-4150, www.capclaw.com.
Disclaimer: For educational use only, not intended to be legal advice.
Hey, it's Mark and welcome back. Today we're talking about when your boss is a robot: understanding AI in the workplace and your rights. Sigmund Freud lived between 1856 and 1939 and was therefore witness to the surge of technology that resulted from the Industrial Revolution. While he acknowledged the usefulness of the technical innovations of his day, he was also somewhat skeptical of them. Freud famously commented, "Man has, as it were, become a kind of prosthetic god." He argued that humans, through technology, have created artificial limbs and tools that amplify their abilities, making them godlike, but also creating new troubles. Freud had no idea what was coming. The science fiction future that was unimaginable in Freud's day has arrived, and it's reviewing your job application. Artificial intelligence is no longer just something we see in movies. It's making real decisions about real people's livelihoods every day. And while AI promises efficiency and objectivity, it's bringing some very human problems into America's workplaces: discrimination, privacy violations, and a fundamental shift in the balance of power between workers and employers. If you've applied for a job recently, there's a good chance an algorithm screened your resume before any human eyes ever saw it. In fact, about 65% of companies now use some form of AI or automation in their hiring process. That's not necessarily a bad thing, except when the algorithm is making biased decisions that would be illegal if a human manager made them. Here's a comforting thought: computers can't be racist, sexist, or ageist. They're just following their programming, right? Unfortunately, it's not that simple. AI tools learn from data, and if that data reflects historical discrimination, the AI will perpetuate that discrimination into the future. When Amazon deployed an AI hiring tool, the tech giant discovered their algorithm was discriminating against women. 
The system had learned from the company's past hiring patterns, which favored men, and was essentially programmed to continue that bias. Think about that. One of the world's most sophisticated technology companies, with virtually unlimited resources, couldn't create an AI hiring system that didn't discriminate. If Amazon struggled with this, what are the odds that the automated system reviewing your application is fair? The resume scanner that dings you for not having the right keywords might be eliminating qualified women because men's resumes historically used different terminology. The video interview AI that analyzes your facial expressions and speech patterns could be filtering out candidates based on race or ethnicity. The chatbot that asks pre-screening questions might create barriers for older workers who are less comfortable with the technology, even when tech proficiency isn't required for the job. Here's what every worker needs to understand: "We were just following the algorithm" is not a legal defense. Under federal anti-discrimination laws, you don't need to prove your employer intended to discriminate against you based on sex, race, religion, disability, age, or another protected characteristic. You only need to prove that their policies had a discriminatory effect on your employment, or, as the Supreme Court recently held in Muldrow v. City of St. Louis, that you experienced some harm in the terms and conditions of your job. This principle applies whether the decision was made by a biased manager or a biased algorithm. In 2023, the EEOC, the Equal Employment Opportunity Commission, settled its first-ever AI hiring discrimination case, recovering $365,000 for a group of job seekers. That settlement sent a clear message: employers remain liable for discriminatory outcomes even when those outcomes are produced by automated systems that they purchase from third-party vendors. 
The legal landscape for AI in employment has become dramatically unclear, and that should concern every working person in America, me included. On his first day in office, President Trump rescinded Executive Order 14110, which had directed federal agencies to address AI-related risks, including bias, privacy violations, and safety concerns. The EEOC removed key guidance documents explaining how Title VII and the Americans with Disabilities Act apply to AI tools. The Department of Labor has signaled that its prior guidance on AI best practices may no longer reflect current policy. In other words, the federal government has largely stepped back from regulating AI in the workplace, leaving workers with far less protection than they had just months ago. Fortunately, several states have stepped into the vacuum. New York City's Local Law 144, which took effect on January 1st, 2023, requires employers using automated employment decision tools to conduct independent bias audits and provide notice to job candidates. Illinois recently amended the Illinois Human Rights Act to prohibit employers from using AI in ways that lead to discriminatory outcomes based on protected characteristics. California has introduced several bills aimed at regulating AI in employment, including, and I like this title, the No Robobosses Act, SB 7, which would require employers to provide 30 days' notice before using any automated decision systems and mandate human oversight of employment decisions. Over 25 states introduced similar legislation in 2025. For workers in Connecticut and New York, the current situation is particularly frustrating. Connecticut saw a bill fail that would have protected employees by limiting employers' electronic monitoring. While New York City has protections, New York State has yet to pass comprehensive AI employment protections beyond those affecting state agencies. 
While much attention focuses on AI in hiring, the technology is being used throughout the employment relationship, often without workers' knowledge or consent. AI systems are increasingly used to monitor employee productivity, track keystrokes, analyze work patterns, and even predict which employees are likely to quit. These tools raise profound privacy concerns. AI systems often require access to employee communications, performance records, and personal information, and companies may unknowingly cross legal boundaries that could result in lawsuits for privacy violations or breach of employment agreements. Illinois's Biometric Information Privacy Act, BIPA, has been particularly impactful. Companies have faced multimillion-dollar settlements for BIPA violations related to AI systems that analyzed employee facial scans, voice patterns, and other biometric identifiers without prior consent. Some proposed legislation would address AI-driven workplace surveillance. California's AB 1221 and AB 1331 would require transparency and limit monitoring during off-duty hours or in private spaces like a bathroom. But in most states, employers have broad latitude to monitor workers using AI tools, often without their knowledge. Because, as I have said before, employers are little private governments and they can do whatever they please. And really, there's not much the state and federal governments can do unless it's a flagrant error or a violation. The Stop Spying Bosses Act introduced in Congress would prohibit electronic surveillance for certain purposes, including monitoring employees' health, keeping tabs on off-duty workers, and interfering with union organizing. However, this legislation has not yet been enacted into law. AI tools aren't just screening job applicants; they're making recommendations about who should be promoted, who should be disciplined, and who should be laid off. 
And because machine learning systems become more entrenched in their biases over time, discriminatory patterns can become a vicious cycle. The more AI makes biased decisions, the more that bias becomes embedded in the training data for the next generation of AI tools. Employee privacy rights don't disappear simply because an employer is using AI technology. Under both federal and state employment laws, employers have an obligation to protect employee information and notify workers about monitoring or data collection practices. This echoes the old question: is your employer recording you and giving you notice of it? Some states require that employers give notice, and the law is relatively new in that respect; AI notice requirements differ from state to state. Many jurisdictions require explicit employee consent before collecting or processing personal data for AI training purposes. Simply updating the employee handbook may not be sufficient; specific agreements addressing AI data use may be required. However, workers often face a coercive choice: consent to extensive AI monitoring and data collection, or lose your job. The practical reality is stark. AI systems learn from the data they are fed. If that data includes your communications, performance records, and personal information, your employer may be using your private information in ways you never imagined, and potentially in violation of your privacy rights. If you're concerned about AI affecting your employment, here's what you need to understand. Discrimination based on race, sex, religion, national origin, age, disability, or genetic information is illegal, whether the discriminatory decision is made by a person or an algorithm. Retaliation for complaining about discrimination is also illegal. So know your rights. Ask questions. You have the right to know if AI tools are being used to make employment decisions about you. 
While not all states require disclosure, asking the question puts employers on notice that you're paying attention. New York City employers, for example, must provide notice at least 10 business days before using an automated employment decision tool. Document everything. If you suspect AI discrimination, document the circumstances that you find. Employers should provide alternatives to AI tools when necessary. Be aware of data privacy. Understand what employee data your employer collects and how it's used. In some states, you have rights regarding your personal information. Illinois workers, in particular, have strong protections under BIPA, as we discussed before, for biometric data. Don't assume the decision is final. Just because an AI rejected your application or recommended disciplinary action doesn't mean that decision was correct or legal. Automated tools make mistakes, and they can be challenged. As I talked about in the past, an employee sued Workday under that same premise. The future of work is here and it's increasingly automated, but workers still have rights. The fact that an employer is using sophisticated technology doesn't give them permission to discriminate, violate privacy, or ignore employment laws that have protected workers for decades. As legislatures continue to grapple with how to regulate AI in employment, the fundamental legal principles remain unchanged. Employers cannot discriminate based on protected characteristics. They cannot retaliate against workers who assert their rights, and they must respect employee privacy within the bounds of applicable law. The AFL-CIO put it well in supporting proposed federal legislation, quote, working people must have a voice in the creation, implementation, and regulation of technology, end quote. That voice includes understanding when your rights are being violated and taking action when they are. 
The paradox identified by Freud in his quote above about humans becoming prosthetic gods is nowhere more evident than in the realm of AI. While the technology indeed gives humans godlike powers, Freud also noted that these technologies are processes that were not naturally grown and therefore can cause problems for the human condition. Freud questioned whether these tools truly lead to happiness, even as they increase human power. In the American workplace, we all have to grapple with this paradox as AI becomes increasingly common in everyday life. If you believe you've been discriminated against by an AI hiring tool, unfairly monitored by an automated surveillance system, or subjected to biased AI-driven employment decisions, you don't have to accept it. The law is evolving rapidly, but your fundamental rights as a worker remain protected. The employment attorneys at Carey & Associates understand both technology and the law. We've been following these issues closely and are prepared to help workers navigate the new frontier of employment law, which it certainly is. Whether you're facing discrimination in hiring, unfair AI-driven performance evaluations, or privacy violations through workplace surveillance, we can evaluate your situation and advise you on your legal options. Hope you enjoyed this episode, and talk to you soon.