
AI in the Workplace: A Roadmap for Employers

Brownstein Client Alert, Sept. 5, 2023

Artificial intelligence (AI) is an exciting new frontier that is becoming more readily accessible to the public. As governments grapple with the right approach to regulating AI, legal risks are already present, including potential perils for employers arising from concerns around bias and discrimination, as well as copyright infringement, inaccurate data and privacy considerations. Now is the time for employers to consider enacting explicit policies regulating the use of AI.

Government Oversight

As summarized below, there is an emerging patchwork of laws, guidance and voluntary commitments that impact companies’ use of AI:

  • Voluntary Commitments by AI Leaders: On July 21, 2023, seven leading companies made commitments to further safety, security and transparency in their AI development. Specifically relevant to employers, that group will make a significant investment to study the outcomes of AI as it relates to bias, discrimination and consumer privacy rights.
  • U.S. Equal Employment Opportunity Commission (EEOC): As outlined in a previous Brownstein client alert, on Jan. 10, 2023, the EEOC issued a draft strategic enforcement plan that included AI-related employment discrimination on its list of priorities, highlighting the risk that AI can “intentionally exclude or adversely impact protected groups.” Additionally, on Aug. 9, 2023, the EEOC announced a first-of-its-kind settlement of a discrimination lawsuit involving the use of AI. The EEOC negotiated a $365,000 settlement to resolve allegations that a company programmed its recruitment software to automatically reject older applicants.
  • New York City: On July 5, 2023, New York City began enforcing a new law regulating the use of AI in “employment decisions.” Before employers or HR departments use an “automated employment decision tool” to assess or evaluate candidates in New York City, they must: (1) conduct a bias audit; (2) publish a summary of the results of the bias audit on their website, or linked from their website, disclosing selection or scoring rates across gender and race/ethnicity categories (a simplified illustration of these calculations appears after this list); and (3) give advance notice to candidates about the use of that tool and provide the opportunity to request an “alternative selection process or accommodation.”
  • Illinois: Illinois’ Artificial Intelligence Video Interview Act, which took effect in 2020, governs the use of AI to assess job candidates. Employers hiring for positions located within Illinois must: (1) obtain consent from applicants before using AI in video interviews, after explaining how the AI works and the standards it uses for evaluation; (2) delete recordings upon request; and (3) if the employer relies solely on AI to determine whether a candidate advances to an in-person interview, report data to the Illinois Department of Commerce and Economic Opportunity indicating how candidates in different demographic categories fare under the AI evaluation.
  • Maryland: Also in 2020, the State of Maryland enacted a similar law that prohibits employers from using AI facial recognition technology unless an applicant consents to the tool being used. Applicants can consent to the use of facial recognition during the interview by signing a waiver that states: (1) the applicant’s name; (2) the interview date; (3) that the applicant consents to the use of facial recognition during the interview; and (4) whether the applicant read the consent waiver.
  • European Union: In April 2021, the European Commission proposed the Artificial Intelligence Act, which could become the first comprehensive legislation addressing the use of AI. Topics proposed for coverage by the act include ensuring that the use of AI is transparent to all those who use a platform that includes AI technology and ensuring that AI is applied in a nondiscriminatory manner.
  • State Task Force: State legislators in New York, Connecticut, Colorado, Virginia and Minnesota are forming a task force to create model legislation in the fall of 2023. Early reports indicate that discussions will include “broad guardrails” and focus on matters like product liability and requiring impact assessments of AI systems.
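
For illustration only, the sketch below shows the kind of selection-rate and impact-ratio arithmetic that sits at the core of a bias audit like the one referenced in the New York City bullet above. The categories and outcome data are hypothetical, and this minimal example is not a compliant audit under the city’s law or its implementing rules; it simply shows the underlying calculation (selection rate per category, then each rate divided by the highest rate):

```python
# Minimal, hypothetical sketch of bias-audit arithmetic. Not legal advice and
# not a substitute for an audit conducted under the applicable rules.
from collections import defaultdict

# Hypothetical screening outcomes: (category, was the candidate selected?).
outcomes = [
    ("female", True), ("female", False), ("female", True),
    ("male", True), ("male", True), ("male", True), ("male", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for category, was_selected in outcomes:
    totals[category] += 1
    selected[category] += was_selected  # bool counts as 0 or 1

# Selection rate = number selected / number screened, per category.
rates = {c: selected[c] / totals[c] for c in totals}

# Impact ratio = a category's rate divided by the highest category rate.
best = max(rates.values())
for category, rate in sorted(rates.items()):
    print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

On this made-up data, the tool selects males at a rate of 0.75 and females at 0.67, yielding an impact ratio of roughly 0.89 for female candidates; a real audit would apply the required categories, intersections and disclosure format.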

Intellectual Property (IP) Concerns

AI raises various IP-related questions that employers should be aware of, including:

  • If a company uses AI to create a work, who owns it? Companies should not presume they own any resulting rights in the work. On Aug. 18, 2023, the U.S. District Court for the District of Columbia held that authors of works must be human beings, which excludes machine-authored works: “Human authorship is a bedrock requirement of copyright.” Therefore, AI users should proceed with caution if they plan to rely on AI outputs as part of what they consider to be their organization’s intellectual property.
  • Is there potential liability for copyright infringement if the AI tool uses unlicensed work in generating results or if the results incorporate unlicensed work? To date, AI companies have not sought permission from copyright owners to use their works as part of the large data sets ingested by AI tools. Because of this, there is an emerging area of litigation in which authors, artists and major rights holders in various fields are asserting that AI companies are infringing upon their copyrights. Generative AI even became a point of negotiation in the 2023 Writers Guild of America strike. So far, lawsuits have not targeted third-party users of AI platforms who are the recipients of the allegedly infringing materials, but that could change if the results are clearly infringing (in one case, the results of a generative AI platform preserved a watermark applied by the original rights holder).

Privacy Concerns

The rapid increase in generative AI tools has coincided with tremendous expansion of U.S. state privacy laws. Employers must be aware of the inherent risks associated with disclosing any data about their workforce to AI tools and should consider the following:

  • What are the risks associated with the disclosure of personal data to AI tools? By inputting personal data into an AI tool, an employer may lose control of the data and find it has been made publicly available or disclosed as the result of a data breach. Employee data is often highly sensitive, and thus the repercussions of inadvertent disclosure can be great. To mitigate this risk, data can be deidentified before it is submitted to an AI tool (a simplified sketch follows this list), but companies must be careful to adhere to the requirements for what constitutes “deidentified” data under applicable law. Companies must also review the AI tool’s terms and conditions and privacy policy before using it, so they understand how inputted data will be used and what rights the company retains once data is submitted.
  • Is the company still able to comply with requests to exercise data rights as required by applicable law if data is inputted into an AI tool? Depending on where an employee resides, the employee may have rights to access, correct, delete or stop the processing of their personal data. If that personal data has been submitted to an AI tool, deleting the personal data may be problematic.
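
As a purely technical illustration of the mitigation mentioned above, the sketch below strips direct identifiers from a hypothetical employee record and replaces them with a salted token before the record is sent to a third-party tool. The field names are invented for this example, and removing identifiers this way does not by itself make data “deidentified” within the meaning of any particular statute; that remains a legal determination under applicable law:

```python
# Hypothetical sketch: pseudonymize an employee record before submission to an
# external AI tool. Field names are illustrative; whether the result counts as
# "deidentified" under a given privacy law is a legal question, not a technical one.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "ssn", "phone"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Drop direct identifiers and add a salted hash as a stable, non-identifying key."""
    token = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:12]
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["subject_token"] = token  # lets the company correlate records internally
    return cleaned

employee = {"name": "J. Doe", "email": "jdoe@example.com",
            "ssn": "000-00-0000", "department": "Sales", "tenure_years": 4}
print(pseudonymize(employee, salt="rotate-this-secret"))
```

Note that even pseudonymized records can sometimes be re-identified from remaining attributes, which is one reason the legal standards for deidentification are stricter than simply deleting names.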

AI in the Workplace

As governments are taking the first steps to design regulatory frameworks, the uses of AI in the workplace are proliferating, including the following:

  • Generative AI. Generative AI processes extremely large sets of information to produce new content and can do so in a variety of formats (e.g., images, written text, audio output). Early reports of the high-quality output from generative AI tools created a boom in their use to support work in a variety of industries. However, recent examples have shown that the reliability of AI outputs, particularly when the tools are asked to analyze fact sets or perform basic computations, fluctuates significantly and is far from assured. For example, researchers found that in March 2023 one popular generative AI tool could correctly answer relatively straightforward math questions 98% of the time; when asked the same questions in July 2023, it was correct only 2% of the time. Likewise, law firm attorneys were recently sanctioned after using AI to draft a brief in which the tool fabricated legal authority, including citations to opinions that did not exist. As AI proliferates, employees will likely utilize it more frequently and in a growing number of ways in the course of performing their work.
  • Recruiting. AI products are evolving to make recruiting more efficient and effective, including by identifying potential candidates who have the required skills and qualifications but may not have applied for the job; screening large volumes of résumés by matching job requirements against candidate qualifications and experience; and using predictive analytics to analyze candidate data, including résumés and social media profiles, to predict which candidates are most likely to succeed in the role. This essentially “black box” assessment of candidates is fraught with peril.
  • Predicting misconduct. Various tools on the market claim they can identify “hot spots” for potential misconduct, allowing management and HR to take action before a problem arises. By analyzing large volumes of information, including the tone of workplace communications and work schedules and volumes, these tools purport to pinpoint problematic areas for HR’s proactive engagement. With limited, if any, visibility into the accuracy of these predictive assessments, employers should proceed with caution, particularly given that predictive tools have the potential to raise significant concerns among employees related to privacy and fairness.
  • Retaining talent. Companies are also leveraging AI in their efforts to retain top talent by using machine learning to predict whether an employee is likely to depart. Some AI programs claim to identify why people stay and flag when someone is at risk of leaving by predicting the key factors that drive departures.

Recommendations for Employers

It is not a question of whether employers will need to address AI in the workplace. Rather, it is an issue of when and how they should address it. Given the rapid proliferation of AI to date, the time is now. And employers must be nimble, closely following regulatory developments to ensure that their policies remain up to date in this fast-changing AI landscape. In the short term, employers would be well-advised to take the following steps:

  • Become familiar with what AI is generally, what AI the company is already using and what AI it may be using in the near future.
  • Assemble the right group of stakeholders to discuss appropriate policies governing the use of AI at work. Who needs to be at the table—chief technology officer, business leaders, chief people officer, others?
  • Consider what uses of AI are appropriate for your workplace and, equally as important, what uses are not appropriate.
  • Incorporate legal compliance considerations when designing your policy, including:
    • Ensuring that AI is not used in a way that could adversely impact any group based on protected characteristics. To help address this issue, consider performing a bias audit to ensure that AI is being implemented appropriately.
    • Providing appropriate notice to candidates and/or employees concerning the company’s use of AI and obtaining consent as may be required under applicable law.
    • Ensuring that the use of AI does not conflict with any statutory or contractual right to privacy held by candidates, employees or consultants.
  • Develop and implement a policy for employees governing the use of AI in the workplace. The policy should specify which AI tools are permitted to be used and what information is permitted to be submitted to such AI tools. Consider offering training to employees on appropriate uses of AI to ensure a clear understanding across your workforce.
  • If applicable, develop and implement a similar policy for how vendors and/or independent contractors may use AI in the work they perform for your organization. Additionally, consider whether your vendor agreements need to be updated to control whether and how vendors are permitted to use your data in AI applications.
  • Understand how data is being gathered and used. What is AI collecting and how is it assimilating and using data at an organizational level and at a personal level?
    • Even if data is deleted, it may have been incorporated into the calibration of AI in a future analysis. Is that something you are comfortable with?
  • Assign responsibility for all aspects of the use of AI within your organization so that roles are clearly understood and accountability exists.

AI offers exciting new opportunities, but it also comes with risks and a degree of uncertainty. By ensuring that they understand the uses of AI within the organization, the way it functions and the end results, employers can effectively utilize this tool while minimizing legal risk.

Please reach out to your Brownstein attorney or one of the authors below for specific advice related to the subject matter of this client alert.


This document is intended to provide you with general information regarding various considerations related to AI in the workplace. The contents of this document are not intended to provide specific legal advice. If you have any questions about the contents of this document or if you need legal advice as to an issue, please contact the attorneys listed or your regular Brownstein Hyatt Farber Schreck, LLP attorney. This communication may be considered advertising in some jurisdictions. The information in this article is accurate as of the publication date. Because the law in this area is changing rapidly, and insights are not automatically updated, continued accuracy cannot be guaranteed.
