New NYC Law on Preventing Bias in Automated Employment Assessments

Jan 21, 2022

Reading Time: 4 min

Employers in New York City using artificial intelligence (AI), data analytics or statistical modeling in the hiring or promotion process will need to notify candidates in advance and conduct an annual “bias audit.”

Passed on November 10, 2021, this new law is one of the most significant measures yet enacted to address concerns from civil rights groups that machine learning may result in discrimination against women and minorities. The law takes effect on January 1, 2023, with fines of $500 for a first violation and up to $1,500 for each subsequent violation.

Broad Scope

Although one might expect the new law to specifically target algorithmic decision making, the language seems to cover a far wider range of employment tests. The law applies to “automated employment decision tools” defined as “any computational process, derived from machine learning, statistical modeling, data analytics or artificial intelligence” that generates a “simplified output, including a score, classification, or recommendation,” and substantially assists or replaces discretionary employment decisions.1

Even commonplace online employment assessments predating AI technology could be swept in by the broad definition of “automated employment decision tools.” For example, under Title VII of the Civil Rights Act, an employment test must be job related and consistent with business necessity if it has a disparate impact on members of a protected group. Job relatedness typically is established through a validation study, and most validation studies rely upon some form of “statistical modeling” to demonstrate a correlation between the assessment and the knowledge, skills, abilities and behavioral characteristics required to successfully perform the job. The same is true to justify the method of scoring, weighting and otherwise using an assessment in the selection process. As such, the vast majority of properly validated employment tests use a “computational process” that was “derived from” either “statistical modeling” or “data analytics” and produces a “simplified output,” such as a final score or a pass/fail flag. Likewise, all objectively scored tests can be described as replacing “discretionary decision making.” Finally, while the law includes some exceptions, they are limited to tools that do not materially impact employment decisions, such as “a junk mail filter, firewall, antiviral software, calculator, spreadsheet, databases, data set or other compilation of data.”2

Notification Requirement

New York City employers and employment agencies that use “automated employment decision tools” will have to meet strict notice requirements. Specifically, all candidates who reside in the City and who will be screened by such tools must receive notice, at least 10 business days in advance, (i) that an automated employment decision tool will be used in assessing their candidacy; (ii) of the job qualifications and characteristics the tool will assess; and (iii) that the candidate may request an alternative selection procedure or accommodation, which the law leaves unspecified.3

The notice requirements will create challenges for employers using many of the AI sourcing and screening tools on the market today. In most cases, the vendors who sell these tools claim to be assessing candidates on job-related factors, yet refuse to provide any specifics because their algorithms are proprietary. In fact, the vendors themselves may not know which characteristics and qualifications are being screened, because certain algorithms continually change, or become “smarter,” by incorporating successful recruiting or hiring outcomes and learning to prefer candidates who share some commonality with those already selected.

Annual Bias Audits

The new law also requires a “bias audit” at least annually, defined as an “impartial evaluation” conducted by an “independent auditor” that includes, at a minimum, an analysis of whether the automated employment decision tool has resulted in a disparate impact based on gender, race or national origin.4 The law does not specify who qualifies as an “independent auditor,” but presumably it would not include an in-house expert or the vendor who created the assessment. Potentially most problematic for employers, the “bias audit,” along with “the distribution date of the tool to which such audit applies,” must be published on the employer’s website before the employer may use the tool. As a practical matter, this means employers will need to launch the assessment for development purposes only, either with real candidates or with incumbents, in order to gather the data necessary to test for disparate impact and, hopefully, satisfy the bias audit requirement.
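The law does not prescribe a particular statistical test for the audit. For readers unfamiliar with what a disparate impact analysis typically involves, the sketch below illustrates one conventional screening approach, the EEOC’s four-fifths rule of thumb, applied to hypothetical pass/fail outcomes. The group labels and numbers are assumptions for illustration only; an actual bias audit must be conducted by an independent auditor and may require a different or more rigorous methodology.

```python
# Illustrative sketch only: the law does not mandate this metric, and this
# is not a substitute for an "independent auditor." It applies the common
# four-fifths (80%) rule of thumb from the EEOC's Uniform Guidelines to
# hypothetical screening outcomes grouped by a protected category.
from collections import defaultdict

def impact_ratios(outcomes):
    """outcomes: iterable of (group, selected) pairs, selected is bool.
    Returns each group's selection rate divided by the highest rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / tot for g, (sel, tot) in counts.items()}
    top = max(rates.values())  # assumes at least one group was selected
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening results: (gender category, passed the screen)
results = [("men", True)] * 40 + [("men", False)] * 60 \
        + [("women", True)] * 28 + [("women", False)] * 72

for group, ratio in impact_ratios(results).items():
    flag = "potential disparate impact" if ratio < 0.8 else "ok under 4/5 rule"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

On this hypothetical data, men pass at a 40% rate and women at 28%, yielding an impact ratio of 0.70 for women, below the 0.8 threshold that conventionally flags potential disparate impact and further inquiry.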

Takeaway

The New York City law is the latest and most significant effort by regulators to curtail bias when AI is used to make employment decisions. Earlier in 2021, the Equal Employment Opportunity Commission (EEOC) launched an initiative to study AI tools used in hiring decisions, highlighting the concern over bias and discrimination. Illinois passed its own AI employment law, which gives job applicants the right to know if AI is being used in a video interview and the option to have the video data deleted, while Maryland passed a law requiring job applicant consent for the use of facial recognition technology. Washington, D.C. likewise announced proposed legislation that would regulate algorithmic decision making, complete with annual audits similar to those required by the New York City law.

The broad scope of this law leaves many open questions, such as whether long-standing computer-based assessments that were derived from traditional testing validation strategies are covered by the law, or whether passive evaluation tools, such as recommendation engines used by employment firms, could fall within the scope of the law.   

In the absence of regulatory guidance, employers who wish to screen New York City residents for employment or promotion using computer-based assessments will need to take the necessary steps before January 2023 to ensure compliance. Each day an offending automated employment decision tool is used counts as a separate violation, with fines of $500 for a first violation and up to $1,500 for each subsequent violation; a tool used out of compliance for 30 days, for example, could generate up to $44,000 in fines.5 And, while the law does not include a private right of action, it also does not prevent a candidate from bringing a private action under other federal, state or local laws, such as traditional antidiscrimination laws.6

Please contact a member of Akin Gump’s labor team or cybersecurity, privacy and data protection team if you have any questions about this new law or how these requirements will affect your company.


1 N.Y.C. Local Law No. 144 of 2021, at 1.

2 Id. at 1.

3 Id. at 2.

4 “Protected individuals” are those persons required to be reported by employers under 42 U.S.C. § 2000e-8(c), as specified in 29 C.F.R. § 1602.7.

5 Id. at 3.

6 Id. at 3-4.

