Employee Survival Guide®
The Employee Survival Guide® is an employees-only podcast about everything related to work and working. We will share with you all the information your employer does not want you to know about and guide you through various work and employment law issues.
The Employee Survival Guide® podcast is hosted by seasoned Employment Law Attorney Mark Carey, who has practiced exclusively in the area of employment law for the past 27 years. Mark has seen just about every type of employment dispute there is and has filed several hundred lawsuits in state and federal courts around the country, including class action suits. He has a no-frills and blunt approach to employment issues faced by millions of workers nationwide. Mark endeavors to provide both sides to each and every issue discussed on the podcast so you can make an informed decision.
The Employee Survival Guide® podcast is just different from other lawyer podcasts! This podcast is for employees only, because no one has considered conveying work and employment information directly to employees, especially information their employers do not want them to know about. Mark is not interested in the gross distortions and default systems propagated by employers; he targets the employer's intentions, including discriminatory animus, designed to make employees feel helpless and underrepresented within each company. Companies have human resource departments which only serve to protect the employer. You as an employee have nothing! Well, now you have the Employee Survival Guide® to deal with your employer.
Through the use of quick discussions about individual employment law topics, Mark easily provides the immediate insight you need to make important decisions. Mark also uses dramatizations based on real cases he has litigated to explore important employment issues from the employee’s perspective. Both forms used in the podcast allow the listener to access employment law issues without all the fluff used by many lawyers.
Subscribe to our show in your favorite podcast app including Apple Podcasts, Stitcher, and Overcast.
You can also subscribe to our feed via RSS or XML.
If you enjoyed this episode of the Employee Survival Guide® please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player such as Apple Podcasts. Thank you!
For more information, please contact Carey & Associates, P.C. at 203-255-4150, or email at info@capclaw.com.
Also go to our website EmployeeSurvival.com for more helpful information about work and working.
Employee Survival Guide®
Negative Impacts of AI on Employees and Working
Could the very tools designed to enhance our productivity in the workplace be silently shaping a future of bias and invasion of privacy? Join me, Mark, as we delve into the profound impact AI is having on employment, from the boardroom to the break room. Along with insights from industry consultants, we unpack the transformative effects on hiring practices, highlighting the unseen biases lurking within AI algorithms. We confront the unsettling reality of how these systems could perpetuate discrimination and examine their role in employee surveillance, questioning the trade-off between efficiency and ethical practice.
In a world where AI's judgment can influence your career trajectory, understanding its reach into performance evaluations and mental health assessments is crucial. Our discussion traverses the spectrum from the benefits of AI, such as personalized support and early symptom detection for mental well-being, to the darker side of increased scrutiny and emotional surveillance. We dissect the delicate balance between leveraging AI for good while safeguarding against its potential to exacerbate workplace stress and breach the sanctity of personal data.
Finally, we grapple with the complex relationship between trust and technology as AI surveillance becomes an unwelcome fixture in our professional lives. I emphasize the pressing need for self-awareness and proactive measures in protecting our digital footprints from prying algorithmic eyes. The responsibility to navigate these murky waters lies not only with employers and regulators but with each of us as individuals. As we sign off, I urge you to stay vigilant and informed, for the AI-driven workplace is not a distant future—it's here, and its implications are profound.
If you enjoyed this episode of the Employee Survival Guide please like us on Facebook, Twitter and LinkedIn. We would really appreciate it if you could leave a review of this podcast on your favorite podcast player such as Apple Podcasts. Leaving a review will inform other listeners that you found the content of this podcast important in the area of employment law in the United States.
For more information, please contact our employment attorneys at Carey & Associates, P.C. at 203-255-4150, www.capclaw.com.
Disclaimer: For educational use only, not intended to be legal advice.
It's Mark and welcome back. Today we're going to touch upon a topic that I've been thinking about for some time and that you're confronted with day in, day out: artificial intelligence and working. It's a pretty complex area, and I'm wading in because I don't think many other people are. I think we're really early in the process. I'm using Google Gemini, and I'm sure you're using ChatGPT and other tools to play around, draw pictures, make art, et cetera. But my focus is really on what is going to happen with AI in the workplace affecting you as employees and executives. I may be a little all over the place today, but I did try to organize my thoughts. For example, we're going to talk about hiring and discrimination, and about my favorite, performance monitoring and surveillance. Other topics include AI decision making, mental health, and the weird one I came up with: AI as a harassment amplifier, where people could use a bot to go after you. And then employee trust was a really good one too, because it dawned on me that people have low employee engagement and low trust of employers these days. So let's just dig into it. As a preface, I see a lot of cases and a lot of fact patterns, and I'm just a very curious person. So I went out and, yes, I used AI to research this podcast, because I wanted to see what it was doing. I was watching a podcast with Elon Musk the other day, hosted by a fellow in, I believe, Norway, and Musk said that we're running out of data. The implication was that AI is so fast at picking up all available data that every book ever written has already been analyzed for machine learning, along with photos, TV, podcasts, you name it, and so it's going to run out of data. That's pretty alarming if you think about it.
Speaker 1:So, getting back to what AI produced about its own relationship to employment law: when I typed the question into Google Gemini to come up with issues, I looked at the topics and asked whether they were accurate or not, based on my experience. I want to share with you what I discovered, along with a reality check on what these different topics are, what's going to happen to us as employees, and how employers are going to react to and handle this issue. So let's just dive in. Sorry, segue. The first one is hiring and discrimination.
Speaker 1:AI-powered recruiting tools can inadvertently perpetuate bias if the data they are trained on contains historical patterns of discrimination. That's probably the biggest issue most people think about: the human input into it. How do the coders prevent the AI's machine learning from replicating bias? This could lead to cases involving employment discrimination based on race, gender, age, disability, et cetera, and employers will need to be very careful about how they use AI in the hiring process to avoid legal trouble. Good luck with that.
Speaker 1:I see it as just fraught with issues. If anybody has a recent college graduate, they know that their son or daughter has been interviewed by a computer. The computer is doing all the work. That screening process is more common these days than not. It's not something I ever experienced, but then I never really interviewed for a job, because I've been doing this all my life. It's happening at a very quick pace. So one question is: what is the AI interview process like, and what is it looking at? Is it doing facial recognition? Is it looking at your nervousness on the video, et cetera?
Speaker 1:The problem with discrimination in hiring is that the AI systems used in recruiting are often trained on historical data. If the data reflects past biases, for example, fewer women or minorities hired in certain roles, the AI algorithm may learn and replicate those patterns. As I indicated, this can lead to qualified candidates being overlooked simply because they don't match the biased historical model the AI is using. We could see discrimination cases where a rejected job applicant sues the employer alleging disparate treatment, meaning they were intentionally discriminated against due to their membership in a protected class. Disparate impact is another discrimination theory we use, where a seemingly neutral AI hiring process disproportionately screens out individuals in a protected category. The employer will then face the defense challenge of proving the AI system is fair. This may involve demonstrating that it doesn't have an adverse impact on protected groups, which is complex, especially with less transparent AI models.
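To make that replication mechanism concrete, here is a minimal, hypothetical Python sketch, not any vendor's actual product: a naive screening scorer "trained" on invented historical records in which a proxy feature (here, attendance at a particular school) happens to correlate with a protected class. The feature names, records and numbers are all made up for illustration, but the dynamic is the one described above: the scorer reproduces the historical preference exactly.

```python
# Hypothetical sketch, not a real recruiting system. Feature names and
# records are invented to show how a scorer trained on biased history
# replicates that history.
from collections import defaultdict

# Historical records: (attended_school_x, was_hired). Suppose school_x
# attendance happens to correlate with membership in a protected class.
history = [
    (True, True), (True, True), (True, True),
    (False, False), (False, False), (False, True),
]

# "Training": compute the historical hire rate for each feature value.
counts = defaultdict(lambda: [0, 0])  # feature value -> [hires, total]
for school_x, hired in history:
    counts[school_x][0] += hired
    counts[school_x][1] += 1

def score(school_x: bool) -> float:
    """Score a new candidate by the historical hire rate of their group."""
    hires, total = counts[school_x]
    return hires / total

# Two otherwise identical candidates get very different scores purely
# because of the biased history the model learned from.
print(f"school_x candidate:     {score(True):.2f}")   # 1.00
print(f"non-school_x candidate: {score(False):.2f}")  # 0.33
```

A real system buries this same arithmetic under many more features and layers, which is exactly why the disparate impact can be present without anyone having typed in a discriminatory rule.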
Speaker 1:You have to understand that the developers don't really know what's in the black box, and I'll get to that in a second. This is happening so quickly, and I at least have the Schwarzenegger-film fear of the machines taking over the world, the concern that AI becomes too smart and begins to eliminate the humans. But I digress again. So there's discrimination in the hiring process, a possible issue we might have; it may be happening now. How would you know if you're being discriminated against by the hiring process or the AI bot? You really wouldn't, honestly. The law is very slow to pick up on current events, so you wouldn't find out unless somebody wrote about it, the way I would write about a little tweak I found here and there through the discovery of data in actual legal cases. You would learn it that way, but it's a very slow process from the legal standpoint to bring these issues to the forefront. So I think it's the Wild West of discrimination in the hiring process. You just pray that they're going to do the coding correctly in terms of what they're looking for. But again, I have really mixed feelings about humans putting data into computer code that can generate the replication of their bias. Onward.
Speaker 1:Number two: performance monitoring and surveillance, my favorite. This was a really big topic when we all went into our own homes during the pandemic, and reporting revealed that a lot of employee monitoring took place. It had been going on for a while, but that's when it came out. AI can be used to monitor employee productivity, communication and even physical movements. This raises significant privacy concerns and can lead to cases focused on unlawful surveillance, unreasonable expectations of privacy and the creation of a hostile work environment. AI-augmented monitoring tools go far beyond traditional performance tracking. They may analyze employee email and communications for, get this, sentiment and potential disloyalty. Think about that. You're writing an email; everything you do at work, everything you touch, can be analyzed in a millisecond by a computer to determine whether you're loyal, or whether something may be happening with, say, your mental health. So employee monitoring is a huge concern here.
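As a rough illustration of how shallow this kind of scoring can be, here is a minimal keyword-based sentiment scan in Python. The word lists, sample messages and flagging threshold are all invented for this sketch; commercial monitoring tools use far more elaborate models, but the basic move, reducing your words to a score, is the same.

```python
# Invented illustration of keyword-based "sentiment" scanning; not any
# real monitoring product. Word lists and threshold are made up.
NEGATIVE = {"frustrated", "unfair", "quit", "overworked", "burned"}
POSITIVE = {"thanks", "great", "appreciate", "excited"}

def sentiment_score(message: str) -> int:
    """Count positive words minus negative words; crude on purpose."""
    words = {w.strip(".,!?;:").lower() for w in message.split()}
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

emails = [
    "Thanks, great work on the release!",
    "I'm overworked and this deadline is unfair.",
]
for text in emails:
    label = "FLAGGED" if sentiment_score(text) < 0 else "ok"
    print(f"{label}: {text}")
```

Notice that the second message gets flagged for venting about a deadline; a system like this has no idea whether the writer is disloyal or simply having a bad week.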
Speaker 1:In terms of the AI issue, we don't know whether the government is regulating how companies use this technology with employees. We know there is keystroke monitoring software. I always laughed at the little device you can buy to move your mouse, to keep the computer from detecting low productivity. But the facial expressions and body language on Zoom calls and things of that nature, all of that data, emails, Slack, texts, video from Zoom, gets dumped in, because the AI bots are so hungry for more information. You can just think about the insidious nature of what it's looking for, even your physical location, where you are.
Speaker 1:Some people want to work remotely from wherever these days, and maybe that comes into play. As for the types of cases we're looking at with performance monitoring and surveillance, you obviously get the invasion of privacy issue. Employees may argue that unreasonably intrusive surveillance violates their right to privacy in the workplace. I agree, because we had this issue come up when we all went to Zoom during the pandemic. There were other things happening in the household around us that could be seen, forget the cat and the pet; it was conversations between spouses, family matters, serious issues. So how far the employer's reach, and breach, can go in monitoring people is really a serious question. I always argued that with remote working and employee monitoring, you have this issue of a violation of trust.
Speaker 1:We have these little cameras on our computers and laptops. We can cover the screen, but can you really turn off the microphone? Not likely. So I'm going to give you a warning: if you have a computer device from your work, shut it down if you want some privacy, because leaving it open is like letting Alexa listen to your conversations with your spouse. Remember it's there, and remember they can listen in on everything you're doing. I don't mean to scare you, but it's reality. And by the way, if you got an ad targeting you after talking in front of Alexa, it's no joke. It's actually doing that. It's listening to you and spinning advertising back at you. That's not a coincidence.
Speaker 1:Next issue: data protection violations. The collection and storage of vast quantities of employee data by AI systems will raise issues under data protection laws like the GDPR, the European Union's General Data Protection Regulation. The issue here, and I thought about the health concerns angle: what if the AI is working through the laptop and listening while you have a conversation about your cancer diagnosis that you didn't tell anybody at work about? Where does HIPAA come into play there? How does the computer know to shut off when it hears a medical issue like that? We're not being told anything. Is your employer telling you what the constraints are going to be when it hears a medical issue happening across its devices? Certainly the screen isn't going to come on and say, "Sorry, Susan, you can't talk about that in front of this computer, because we have to observe your HIPAA rights." That's not going to happen. Employers are their own private governments; they do what they want in their own ways, and it's all secretive. We, as employment lawyers, try to enforce the rights of people against employers, because employers are terrible. They don't want to observe the ethics or morals of these issues. They claim that they do, but in reality they're sitting there listening to you right now, listening to this podcast, on your own time at home. How about that?
Speaker 1:Third issue: gig work and independent contractor status. This one came from the Gemini feedback I got. AI platforms often manage gig workers and freelancers, so the AI bot said in response that legal battles are likely to emerge around whether these workers are truly independent contractors or whether they should be classified as employees, with the rights and benefits that come with that status. Now, when I saw that feedback from Google Gemini, I thought: what I just read to you didn't really make much sense as a statement of the issue. We understand what gig workers are, and we understand what independent contractors are, but the AI isn't smart enough yet, though it will be, to tell us why this is really a big issue. It isn't being specific. So you're really seeing, in this case with Gemini, its inability so far to learn about particular areas. It'll catch up; it'll probably listen to this podcast and reform itself to give a more explanatory analysis of what the AI platforms may actually be doing to independent contractors. The point is, this gig work and independent contractor feedback from the AI didn't tell us anything, and I'm struggling to help you interpret it, because the AI is designed to learn but isn't producing yet. It will.
Speaker 1:The fourth item is automation and job displacement. Here we get something really sound and concrete. I read in the Wall Street Journal, and I think it was picked up in the New York Times as well, that certain levels of workers in the investment community, at the very low end, are getting eliminated; I think it was Goldman Sachs that eliminated a range of individuals at the entry level. So if you're an investment banker trying to start out, AI is doing your job for you. There's no more preparing decks and spreadsheets and the like. That's all being done by AI.
Speaker 1:So automation is happening and causing job displacement; that's a good example, and it just recently occurred. Gemini said to us: as AI automates more tasks, job losses will occur, and this could produce cases related to severance packages, retraining obligations and the overall responsibilities companies have toward displaced workers. Again, another example of Gemini not producing a response adequate to explain the topic. Severance package cases; retraining obligations, well, retraining, yes, maybe to retain your workers and have them do something AI can't do; and then "overall responsibilities," which it didn't really explain. So another example where the AI gets it wrong, or at least doesn't provide enough explanation. I included these because they stuck out as being, at times, nonsensical in their explanations.
Speaker 1:The next one is algorithmic decision making. Here we're talking about the coders, the humans who write the code, and I'll tell you the response I got from Gemini: when AI systems make decisions about promotions, terminations and disciplinary actions, there is the potential for bias and unfairness, and lawsuits may challenge the lack of transparency in these AI systems, demanding explanations for decisions that heavily affect employees. I included this one because we all know the rule: biased data in, biased data out. You can have biased data put in by human individuals. And then, next, think about this.
Speaker 1:The AI is learning from a wide range of things. It's going to learn everything. It's going to learn what the Ku Klux Klan was. I think Google tried to tune its AI to do certain things so as not to offend people. But the machine learning system is going to pick up and learn about biases in American history, world history, et cetera, bring that into its algorithmic equations, and make decisions about your job. Are they going to get it right? Is the employer going to be transparent about it? Suppose the employer says we were convinced by Accenture or whomever to bring AI into our workplace, and by the way, Accenture has a large team of people promoting AI in the workplace; that's what they do as consultants. Well, what about the actual black box itself, what code goes in, what is it learning, and how do you control for bias? The fear is that bias is being pumped into the algorithmic equation and thus is going to impact you adversely, again.
Speaker 1:Next one, number six: AI-driven decision making. Here we get into examples: performance evaluations, promotions, disciplinary actions and terminations. Think about this, in all honesty; it's very important. Companies love to automate things, so you're probably experiencing an automated performance evaluation. Your manager is maybe creating some input, and then you're getting a question back from an AI system and having to feed into it, and it's going to analyze the time it takes you to respond. It's going to analyze your pattern of communication, what you say, the language you use and what it can potentially mean. It'll interpret that and then assess other aspects.
Speaker 1:Performance reviews can now include your emails and your work on various projects. If you received a performance improvement plan or a negative performance review, it goes into various line items where the accusations about your performance, the areas where you supposedly need improvement, are backed with various facts. Well, now the AI is going to pull from your available work product, yours as the employee, and from your 360 review by your coworkers. Remember those? They still happen too. That will all be fed into the performance evaluation, and the AI will help the manager rate you, or maybe it'll just rate you itself. That's pretty scary, and it's going to be pretty smart about it too, because it's looking at every piece of data you've ever created about your performance. And who has the better memory right now, you or the computer? The computer is going to memorize every single email you wrote and every single deck you built, et cetera, and may have more data than you. That's scary. It means you have to get on top of your game now, because it's starting now.
Speaker 1:I'm talking to you about it on a podcast, and my concern is that performance evaluations are going to get uglier. We know they don't work in themselves, but we know why they're used: they're used to basically get rid of people, and maybe that's going to get more intense. The next thing is promotions. What about AI systems identifying candidates for advancement based on a wide range of factors, potentially including social media activity and personality assessments?
Speaker 1:I included this one because we have different generations of people putting out social media. We have people on LinkedIn; we have people on YouTube or TikTok, across various ages. I think of a classic example. I heard a story yesterday about YouTubers posting videos about vaping; there was apparently a large phenomenon of people, teenagers generally, video recording themselves vaping. That's stuck on YouTube. You can't take it off, so it's there, and the machine learns it. As you grow older, it's factored into the profile of who you are, potentially as an employee, and the computer has a wide reach to understand you. So, first off, never put that stuff on YouTube, or anywhere else for that matter. But that's a population of workers who live their lives in the iPhone generation. I have three kids like that.
Speaker 1:Well, I don't know how much data they're putting out there, but they're on Instagram or whatever it is, and the concern is that all the data they put out there is potentially going to be used to evaluate their performance, because the machine learning is learning from everywhere. It's very scary. So for anybody of any age putting data out into the social media sphere, of any sort, it's going to be used to qualify you for performance evaluations or promotions. And how about disciplinary action? Maybe you fit a pattern: you engaged in some form of insidious bullying of somebody on the internet, or shaming, or whatever it was, and it gets brought into the workspace because the computers were told to go out there and see what you do. And to be very clear with you: when a background check is run on an individual, do you know that background check companies will search your social media profile? I know this from personal experience, because I've ordered background checks; I want to know who I'm dealing with sometimes. I didn't ask the background check folks to search social media, but they went ahead and did it.
Speaker 1:I say that as an example, because now you have AI doing the same thing, much more quantitatively, bringing in data all about you. So we have this reckoning between your prior activities and your future activities when it comes to putting information out there. At one point you want to put information out there because it's job relevant, maybe on LinkedIn. Or maybe you're a younger employee, you've just been fired, and you go viral on TikTok because you're a 20-something and you want to share how the company got rid of you. There are recent examples where young employees have done this, it's gone viral, and in one case the company's CEO had to apologize to the individual for the way the termination was handled. So, real caution in terms of AI-driven decision making and the data you're putting out there, individually, at work, and even outside of work. As you can hear, it's quite crazy, what we're dawning upon.
Speaker 1:So let's move into section number seven: the challenge of the AI black box. Not to be redundant here, but one significant challenge is that many complex AI systems, especially those using deep learning, are considered black boxes. This means that even the developers may not fully understand how the AI arrives at specific decisions. That's pretty crazy. It presents issues with transparency, potential for bias, and accountability. So if an employee is terminated or denied a promotion due to an AI assessment, it may be impossible to provide a satisfactory explanation of why that decision was made. That lack of transparency goes against notions of fairness and due process.
Speaker 1:I pause here because employers are always going to try to create a defense about why someone was let go; they're going to build a case. If they can't understand why a device did what it did to an employee, that's a problem for employers. I almost want to sit there, wait, pause, and watch the landscape develop, because they want to use this product. They've got to control the product; they've got to know what it's going to do. And you and I can sit here and watch as they screw up, because they're going to make these screw-ups, and I'm going to bring you these examples once they happen. Transparency is not something employers are going to want, because they're not transparent with you now, are they? No, and that's all I ever talk about, to raise your awareness. I'm being transparent because that's how I see it. Employers don't want to be transparent, because that's not the way it works. They want things hidden from you. The very essence of this podcast is telling you what your employer does not want you to know, including this podcast. Okay, so transparency and AI are in conflict. But because employers have to justify their decisions to a court, they can't tell the judge, "Your Honor, the AI bot did it." Well, no, Mr. Employer, you've got to explain it, because that's the law. So there's a conundrum: they want it, and they want to use it, but it's going to get out of control very quickly. Transparency? I doubt it. Okay.
Speaker 1:The next one is very similar: potential for bias. Even if the AI developers themselves have no discriminatory intent, hidden biases in the training data, or even in the very features the AI selects for analysis, meaning historical data or current data, news, the New York Times, the Wall Street Journal, you name it, may still lead to unfair outcomes. Detecting bias within a black box system can be extremely difficult. Yeah, I'm waiting for this one too. You're going to need some type of audit trail, some type of accountability, to ensure there's no bias there. I think it's a black box for a good reason, because it's probably going to get a lot of employers in trouble. You'll have a lot of class actions, and I'll be looking for them. And you know who else should be looking? The federal government. Not likely. So it's up to you and me to police the situation. And why not? This is kind of the dawn of the employee era, where an employee actually matters and employers realize they need employees, instead of the other way around, where they can just sit there and abuse them. That is changing, slowly. So potential for bias is extremely important. Companies will claim there's no hidden bias, "we're an equal employment opportunity employer," et cetera. But it's going to be the Wild West to watch this develop.
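One concrete form such an audit could take: the EEOC's Uniform Guidelines use a "four-fifths" rule of thumb, under which a group's selection rate below 80% of the highest group's rate is generally regarded as evidence of adverse impact. Here is a minimal Python sketch of that arithmetic; the group names and candidate counts are invented for illustration.

```python
# Minimal adverse-impact audit using the EEOC four-fifths (80%) rule of
# thumb. The group names and counts below are invented for illustration.
groups = {
    "group_a": {"selected": 50, "applicants": 100},
    "group_b": {"selected": 30, "applicants": 100},
}

rates = {name: g["selected"] / g["applicants"] for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    ratio = rate / highest  # compare each group to the most-selected group
    status = "possible adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{name}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```

The point is that the screening arithmetic on applicant-flow data is simple even when the model making the selections is a black box; what's hard is getting the data out of the employer.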
Speaker 1:Now let's talk about something really important, another topic: AI and workplace mental health and well-being. This one popped up, and I said I'm going to pause and ruminate on it with you. AI is poised to influence employees' mental health in positive and negative ways. The potential benefits are these. Personalized support: think about AI-powered chatbots or apps providing tailored advice on stress management. Early symptom recognition: the system saying, "You know, Susan, I think you might be exhibiting patterns consistent with major depression or anxiety," something like that, and then offering resources that would be helpful to you. Think about that happening to you. That's positive, and it sounds likely to happen, I think. But it means the laptop is monitoring how I'm talking with you right now and assessing: is Mark showing some DSM-5 symptoms? The DSM-5 is the Diagnostic and Statistical Manual used in mental health diagnosis. Is there something about his tone and intonation suggesting he might be feeling kind of blue today? Something of that nature. The laptop is listening to you constantly.
Speaker 1:Proactive intervention: the AI could analyze communication patterns, as I was just describing, and behavioral data to identify employees at risk of burnout or mental health decline, allowing for early intervention. Again, it makes sense. I personally would never use that on my employees at work; that's their own personal privacy. But some employers may decide this is a relevant area and create apps to help employees, say under an employee assistance umbrella. Pretty strange and unusual, but maybe we can recognize it as a positive. The risks, on the other hand, are self-evident.
Speaker 1:So, increased work pressure: AI systems setting productivity goals and constantly monitoring employees' activity could exacerbate stress and anxiety. I mean, who are you working for? You're not working for the man any longer, or the woman. You're working for the bot, and the bot doesn't have any consciousness about you, no thought of, "Oh, I think Mark looks a little overworked right now; his task timing is actually slowing down; maybe we should give him a break," or something like that.
Speaker 1:Think about the Amazon warehouse, where I'm sure this already exists. The package fillers are working constantly; there are lawsuits about this issue, and people are burning out and describing the warehouse as a very difficult place to work. Yet you and I both want our packages at our doorstep, and it's amazing that it happens every day. But someone in that logistics pipeline is probably experiencing some level of stress and being monitored through it. Why wouldn't Amazon do that? Of course they would. So: increased work pressure.
Speaker 1:How about emotional surveillance? Employers may deploy AI tools for sentiment analysis of emails, as I've been discussing, and for monitoring facial expressions and worker emotional status, raising serious ethical and privacy concerns. I mean, folks, when you step into the workplace you have some privacy related to HIPAA, and when you go to the bathroom; other than that, you've got no privacy, and employers basically run the place like a private government, like I always tell you. Emotional surveillance is basically stepping over the line. Are employers going to do this? Think of performance reviews where the manager doesn't really identify factual issues or examples, but goes after the subtleties of how you reacted or spoke or interacted with your team. You know what I'm talking about; you've seen and heard about this. That's an emotional surveillance scenario, and it's happening now. But think about it happening at a magnitude of 100, using computers to do the work that humans do and putting that information in the hands of managers to make decisions.
Speaker 1:I personally know of a case involving MetLife; I did a podcast on it years ago. The woman had been working remotely, I think during the pandemic, and they gave her a negative performance review. It was a race case, but they said they had assessed her emotional intelligence. This person had a PhD and worked at a very high level, and they basically canned her because she supposedly had emotional intelligence issues. They surveilled her, and in documents produced later in discovery there were five different items she had supposedly failed, all based on how she reacted or said something, et cetera. So it's quite a serious issue. It does happen, and it's going to get worse. How about algorithmic bias in mental health assessment?
Speaker 1:AI systems used to predict mental health concerns can be based on biased data, leading to misdiagnosis or unfair treatment of employees. I love this one, because the learning machine is going to feed on the data it needs to learn: all of the DSM-5, any articles, research reports, all of it, because it's consuming everything off the Internet. So you could have unfair treatment because the system "regarded" you as having a disability, whether you actually have one or not, and that gets into the Americans with Disabilities Act, because that's the act that controls and governs mental health and well-being at the workplace. You could have a situation where too much data goes in, the system feeds this junk back to the manager, the manager reacts to it, and you basically get terminated. That does happen. People do get fired when their mental health status is disclosed, because some employers don't understand it and react to it. And it can get even worse than that.
Speaker 1:Privacy concerns and violations. The collection of sensitive biometric and emotional data by AI systems will raise alarms about employee privacy and the potential misuse of this information. Yes, all day long. Mental health and well-being at work is probably going to be the largest issue to come out of all this surveillance. If you, or your co-workers, have mental illness of any sort, anywhere from nominal to severe, you're kind of on notice right now, because you have to manage yourself with your employer. Which is like: wait a minute, I have to think about how I write something, how I talk at work in general, because I'm being assessed? Folks, that's what's happening now. That's what they're going to do to you. And it already does happen.
Speaker 1:Bridgewater Associates, the largest hedge fund in the world, down the street from my office here, actually instituted this. Under the principles Ray Dalio had, they monitored the way people spoke and rated them. I don't think they do it to the extent they used to, but they still operate with principles over there. They used to sit there with iPads during a business discussion and rate each other on tone, intonation, whatever it was, on being effective or transparent or whatever they wanted to measure. It's been done already, and now you're going to speed up that process with AI doing it in a way that is not apparent to you, maybe through laptops, because it's probably easy to program. The device in front of me has a microphone, has a camera; it has all the things you need to pick this up. It might even have a sensor to pick up my blood pressure if I push the little fingerprint icon here. So enormous privacy issues can come up from what the AI comes up with as it monitors you.
Speaker 1:This leads to the next issue: discrimination based on mental health. If the AI system flags individuals with potential mental health struggles, could this lead to unfair treatment? Yes. Missed promotions? Yes. Termination? Yes. That sounds insidious, but it's what currently happens today; we just don't have an AI system doing it.
Speaker 1:Humans do this to people. That's why you have mental health cases under the Americans with Disabilities Act and state law in the courts, and there have been thousands of them, because humans do this to one another. If you become unhealthy, you're going to get fired or mistreated. Not to say that all employers do this, but humans are not nice folks at work, and if you're not a healthy person at work, you're treated as a lesser person. It's called dehumanization; if you don't know what that means, learn it. It happens all the time. Think again about the person at the Amazon warehouse who is experiencing physical problems and not able to move the packages along. They're being less productive, and they're being monitored for the number of packages they can move in a minute, an hour, whatever, and it's not about them as a human with emotions; it's about them as a machine. Okay, so it happens, and you need to be aware of it. I won't get into the newer responsibilities for employers; the feedback I got was a duty to monitor AI's impact, and then reasonable accommodations. I'll leave that for another day.
Speaker 1:Here's another topic: AI as the harassment amplifier. The news story today is that teenagers, who can be downright mean, are using AI. Currently, I think there's some ban on producing sexually explicit content if you ask an AI to do it, but deepfakes exist. This is really sad: teenagers, mostly female, are being deepfaked into fabricated nude images, and those are being put out there. That's the story. Now let's bring that into the workplace. Number one: that deepfake is going to remain out there for that individual.
Speaker 1:That's an identity issue; somebody has basically stolen their identity, and it's going to follow them into their future employment. But in the workplace, there's the potential for AI's strength in pattern recognition and data analysis to be misused. Let's say you're an employee and you like someone, and you cross over the line. You start to analyze their social media. You essentially write an algorithm, create your own learning machine, and go out and gather everything about the person you adore, the one you want a relationship with, looking at all of their sensitive personal information in every way possible and mining it, even your communications with them. So you can get into a scenario of harassment of any nature, it doesn't have to be sexually based, where a co-worker is using AI to develop a campaign to harass you online and potentially at the office. So a deep well of potential misbehavior by co-workers, and also responsibility for companies, who have to manage this issue because it's their work environment. You've now taken the work environment, stuck AI into it, and given the entire world free range to use AI, and of course they're using it. The usage statistics in the first week after the first chatbot came out were insane, how many people went on it. I did too, and I'm sure you did, to find out what the heck was going on. But now employers are responsible for a work environment in which AI, brought in from outside, can be used to harass employees. I don't think we have early examples of that just yet, but you can imagine someone's social media presence being used in some way to exact some vengeance or some level of harassment. Again, humans can be mean individuals, so that's an issue. And you could have the deepfake issue happening to a coworker at work. It's possible, because you can tell a device to do that and it will.
Speaker 1:Next issue: employee trust in AI. This is a hot topic. At this juncture of the episode you should feel quite uneasy about AI in the workplace. There was already a trust issue beforehand; now it's getting more intense. Employers have to manage this issue, and they're not going to tell you how they're managing you. They may bring it up at work.
Speaker 1:If you're an outside consultant working in AI for corporations, you know a lot more than the average employee, but employers are not telling employees what they're using AI for at work. If they do, let me know; send me an email. I want to know the earliest signs of what's happening. But employee trust is poor to none at this juncture. It's measured as employee engagement, which I think is in the low 20s to 30s percent, depending on age. That's really low. It means a lot of people are unhappy at work. Then you throw this AI issue on top of it, plus the lack of trust created by the black box problem, because even employers don't know what the heck is going to happen in the future, and they're implementing these technologies in your workspace.
Speaker 1:So how do you gain trust in the process? I thought about this before I included this topic. Think about your own role in the process, because it always comes back to that: what are you doing to protect yourself against any misuse of AI? We can think about basic examples. I can close my machine down, cover the little camera, turn off the laptop, whatever, and get some privacy at home, and I can go about my routine asking: am I in a safe space, private from my employer? You can do that.
Speaker 1:The next thing to think about is your professionalism at work. Obviously, you don't want to be baited into an argument by anybody or raise your voice; you want to avoid that stuff. It's pretty commonplace advice: when you write your emails or have conversations, don't drop the F-bomb on somebody. So we may get back to a political correctness that we've since moved away from; maybe that comes back.
Speaker 1:So think about things you can do for yourself, because your employer is not going to do this. They may say they are, but they're going to implement these tools because they can't resist, and because consultants are selling them this technology at a scale you don't appreciate. It's going to overtake the workplace. In order to control AI, you can control how it interacts with you by making choices and being aware and observant of what's happening around you. You're just trying to do your job, but now you're saddled with this next level of self-observation and self-protection. Well, you've got to do it, because your employer is not going to do it for you. If you were ever vigilant about gathering information about work, now's the time, because the exponential effect this is going to have on you is going to be insane, meaning that AI is going to gather all data about you in any way, shape or form, including current, active data on projects you're working on.
Speaker 1:Whatever it is, it's feeding into their system. It's watching you, and that's crazy. You have to be vigilant, know what your boundaries are and what you're saying, and turn things off when you're off work. People have phones, and a work phone is a work-related device. What do you do with it? Do you put it somewhere it can't hear you? Do you turn it off? We don't turn these things off, but we should. So I'm just prompting you to think about all these things coming to mind as I talk with you, things that are quite serious and insane, and I don't want to trust employers to do the right thing.
Speaker 1:So the things eroding employees' trust around AI are topical issues like the black box problem I just talked about; the systems are opaque, you don't understand them, employers don't understand them. Big problem. Fear of job loss, because AI is going to eradicate the entry-level investment banking jobs people get out of college. And say you're in your 50s: you already have a fear of job loss, because you're going to age out in your prime earning years, and now AI may take some of your job responsibilities away. You should see that on the march before it happens to you, just as you should notice when some of your job responsibilities are given to younger workers; usually that's a telltale sign you're being led out to pasture. Privacy concerns, extensive workplace monitoring, as I discussed. Big issue. Perceived bias, as we talked about. So you already have a lack of trust.
Speaker 1:Low employee engagement: employers are still trying to figure this stupid thing out. Just type the phrase "employee engagement" into a Google search and you'll see what I mean. You can go ten pages deep, if Google even has pages anymore, and it's all consultants out there pitching this thing called employee engagement. It's everywhere. You'll find nothing in terms of employee engagement actually helping employees, meaning something like this podcast helping you. You'll find employers and consultants pushing their information out there, doing a lot of SEO to push employee engagement like it meant something. But with employee engagement at 30%, they're obviously not doing their job. They're making a lot of money, but they're not doing their job.
Speaker 1:So how do you build trust in the AI process that's now upon you? By making things transparent and explainable to you? Well, that's not going to happen; why would employers want to do that? Next: involving employees. I've been talking for a while about involving employees in the performance review process, but they don't want to do that either. Input from employees on making a widget better, that's fine, but no involvement in the AI implementation process. Why would employers want that? Ethical and responsible AI? With AI being so new, that's a topic maybe policymakers will think about. If you type into an AI like Gemini and ask it for anything of a sexual nature, a picture, an image, it won't do it. So there's some level of concern and holdback, maybe from the companies themselves, and I should look into whether any new federal rules have come out, but I don't know of any yet.
Speaker 1:So, trust in AI: I don't think there's going to be any employee trust for a long time. It's going to be more the opposite: run for the hills, folks, this is happening to you in real time. And the problem, in my sincerest thoughts about this, is that no one is thinking about it. No employee is actively thinking about what's going to happen around them. That's the purpose of this episode, because it's happening now, and I've only touched on a small piece of what's happening to you.
Speaker 1:I gave you a large-scale overview, but it's already here. So what are you going to do to protect yourself? I've mentioned some common concerns, data privacy, et cetera, but don't put your data out there. I know you want to be social on social media, but it's going to come around and kick you in the ass hard, and it's going to affect your job. Employers don't care. They're just going to want this machine to learn everything it possibly can, because it's doing that now.
Speaker 1:All right, so I won't talk any further. I think you got a hard taste of what we're seeing. I'll talk about more as I research further into this, but it's here, it's going to come at you in sideways manners, and it's going to affect you and your job. I'll try to bring these things to light so you're aware of them, and I'm on the front lines because I'm concerned about it. Who else is on the front lines with me? The federal government and the courts. And somebody else is on the front lines with us: you, the employee. We don't usually talk about you, but you're here because you're listening, and if you're pissed off, freaked out, whatever, because I'm bringing this to your attention, then good. Now you have more information to protect yourself. So, with that, have a good week. I'll talk to you soon. Thank you.