
Hot takes in IO: 3 potential pitfalls of NYC Local Law 144

Introduction

If you have a job, you are no stranger to the use of technology in hiring. Chances are that you applied to a job on the internet, using a resume that you developed from an online template. The company you applied to likely used an applicant tracking system (ATS) to organize your application materials as well as track your progress through the hiring cycle. And there was probably automation—and even artificial intelligence (AI)—involved at some stage. 

A 2022 survey by the Society for Human Resource Management (SHRM) found that the use of AI to support HR-related activities is increasing; of the organizations using such technology, 79% focus on automation for recruitment and hiring. Yet even as automated technology becomes commonplace in hiring, AI tools raise concerns about algorithmic bias, discrimination, and a lack of transparency. As a result, lawmakers have begun implementing policies to regulate the use of such automation in hiring to ensure fairness, equity, and accountability. 

New York City Local Law 144 (NYC LL 144) is a prime example of this trend, as it sets out comprehensive regulations to govern automated employment decision tools (AEDTs). This article will delve into the implications of NYC LL 144, including its historical context, potential advantages and pitfalls, and recommendations for future legislative actions based on Industrial-Organizational (IO) Psychology best practices.

A brief history of technology in hiring 

Over the past four decades, technology has revolutionized the way we hire: from posting jobs, to screening applicants, to tracking candidates in an ATS, to emailing the chosen candidate a formal offer. However, some employers and candidates are skeptical about the use of technology in hiring, and in some cases that skepticism is well placed. 

It’s important to recognize that hiring tools, whether driven by human review or artificial intelligence, can introduce bias into the hiring process. As a recent example, just a few years ago Amazon ditched an AI recruiting tool after finding it was biased against women. However, we cannot place the blame wholly on technology. Research has shown that humans bring numerous biases to the hiring process, including biases around gender, attractiveness, and race. If humans are the ones developing the technology behind these tools, it follows that some of these biases may be unintentionally incorporated. 

However, all hope is not lost. AI, when developed thoughtfully, can actually mitigate bias in hiring. AI can be used to write gender-neutral job descriptions, systematically screen resumes, objectively measure the skills of candidates, and so much more. Plus, AI tools can be systematically analyzed for bias, and clear bias-related metrics can be tied directly back to the tools. 

Given the growing use of technology in hiring and its tumultuous history, it is no surprise that policy experts have pushed for regulations. NYC LL 144 is one of the first major laws that seeks to regulate the use of automated tools in hiring. 

The origins of NYC LL 144

Although enforcement of NYC LL 144 officially began in July 2023, its history goes back several years. The law was first proposed in 2020 and passed by the New York City Council in late 2021. It underwent many iterations over the three years from proposal to effect, with rulemaking led by the NYC Department of Consumer and Worker Protection (DCWP). These iterations included changes to verbiage and scope, shaped by policy experts and by feedback given at public hearings held in late 2022 and early 2023. Following these sessions, the DCWP finalized the rules in April 2023 and set the enforcement date for July 5, 2023. The law has been in effect since then. 

What does the law require?

NYC LL 144 is the first law in the US to require bias audits of automated hiring tools. It requires that an automated employment decision tool (AEDT) have undergone an independent bias audit within one year prior to its use. Employers and employment agencies must also publicly post a summary of the results of the most recent bias audit, including key statistics, on their websites, and must notify candidates in advance that an AEDT will be used. 
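In practice, the "key statistics" in a bias audit summary center on selection rates and impact ratios by demographic category; under the DCWP rules, an impact ratio compares each category's selection rate to the rate of the most-selected category. The following is a minimal sketch, using hypothetical group labels and data (not an audit-ready implementation), of how those figures are computed for a tool that makes binary select/reject decisions:

```python
# Minimal sketch of LL 144-style selection rates and impact ratios.
# Group labels and records below are hypothetical, for illustration only.

from collections import Counter

# (applicant_category, was_selected) records from an AEDT's decisions.
records = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_a", True), ("group_b", True), ("group_b", False),
    ("group_b", False), ("group_b", True),
]

applied = Counter(category for category, _ in records)
selected = Counter(category for category, was_selected in records if was_selected)

# Selection rate: share of applicants in each category who were selected.
rates = {cat: selected[cat] / applied[cat] for cat in applied}

# Impact ratio: each category's selection rate divided by the rate of the
# most-selected category (1.0 means parity with that category).
highest_rate = max(rates.values())
impact_ratios = {cat: rate / highest_rate for cat, rate in rates.items()}

for cat in sorted(rates):
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {impact_ratios[cat]:.2f}")
```

For tools that score rather than select candidates, the DCWP rules substitute scoring rates (the share of each category scoring above the sample median) for selection rates, with the same ratio logic. Notably, the law requires publishing these figures; it does not set a threshold they must meet.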

Responses to NYC Local Law 144 

Because NYC LL 144 is the first law of its kind in the United States, it has naturally generated a lot of buzz. In fact, the first attempt at a public hearing resulted in the video conferencing system crashing due to the high volume of attendees trying to join. Later sessions drew over 250 attendees, many of whom voiced their individual perspectives on the law. However, as with much pioneering legislation, opinions on the law are decidedly mixed. No matter which side of the argument you fall on, it’s important to recognize that there are both positives and potential shortcomings to the law. 

The good

NYC LL 144 introduces many potential benefits through the regulation of automated tools. The law fosters transparency by mandating clear reporting and oversight when organizations deploy automated decision-making systems, which could help prevent algorithmic bias and ensure that these tools do not disproportionately impact marginalized communities and underrepresented groups. The law’s guidelines also encourage continuous monitoring and evaluation of automated tools, which could promote their refinement and improvement over time. 

Overall, the transparency that stems from NYC LL 144 has the intent and potential to enhance public trust in technology, mitigate potential harms, and pave the way for responsible and equitable innovation within the city. However, there are a few important implications of NYC LL 144 that could have unintentional negative consequences. 

The potential bad

Despite the law’s positive intent, it remains to be seen whether NYC LL 144 will have a positive impact on the NYC workforce and the diversity of organizations. If this law is used as a framework for other legislation, new variations could lead organizations to take misguided steps, such as prioritizing compliance over the validity of their hiring tools or incorporating more bias into the hiring process. Consider these three potential challenges.  

No validity required: NYC LL 144 does not require any evidence of validity. Validation is the process of collecting evidence to evaluate how well a hiring tool (e.g., an assessment) or system measures what it is supposed to measure; several types of validity evidence can be used to evaluate a hiring system, including content validity and criterion-related validity. Validation is important because it establishes the job relevance and predictiveness of hiring tools; skipping validation studies can leave employers with evidence of neither. 

Because the law does not require any validation, NYC LL 144 could inadvertently encourage employers to adopt hiring tools that are not job related, focusing only on demonstrating equality in outcomes (e.g., pass rates) rather than also ensuring that hiring measures are job relevant and predictive. Relatedly, employers may opt for tools that claim to measure important predictors of job success but, in reality, measure nothing at all. 
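To make the distinction concrete, a criterion-related validity study ultimately asks whether scores from a hiring tool relate to a later job outcome. Here is a minimal sketch in Python, with entirely hypothetical assessment scores and performance ratings; a real validation study would involve far larger samples, careful criterion measurement, and professional judgment:

```python
# Minimal sketch of a criterion-related validity check: do pre-hire
# assessment scores correlate with later job performance? All data
# below are hypothetical, for illustration only.

from statistics import correlation  # Pearson's r; requires Python 3.10+

assessment_scores = [62, 74, 81, 55, 90, 68, 77, 85]            # pre-hire scores
performance_ratings = [3.1, 3.8, 4.0, 2.9, 4.5, 3.2, 3.9, 4.3]  # later ratings

r = correlation(assessment_scores, performance_ratings)
print(f"Criterion-related validity coefficient: r = {r:.2f}")
```

A bias audit alone would never flag a tool where this coefficient is near zero: the tool could show perfectly equal pass rates while predicting nothing about job performance.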

The unintentional chilling effect: While NYC LL 144 aims to increase the fairness of hiring practices through transparency, it could actually harm fairness, worsening diversity, equity, and inclusion (DEI) outcomes through a chilling effect. In the workplace, a chilling effect occurs when some aspect of the organization, whether a non-compete agreement or a negative comment from a supervisor, deters an individual from doing something they otherwise would have done. 

In the case of NYC LL 144, the use of automated tools, combined with publicly posted adverse impact calculations, could lead members of underrepresented groups to opt out of the hiring process entirely if they feel they already have a lower likelihood of success. This could play out in several ways: candidates deciding not to apply to the organization in the first place, declining to take a pre-hire assessment, or dropping out before an automated interview. Candidates dropping out before the hiring process even begins, as well as at key stages throughout, could have a large yet virtually unmeasurable impact on diversity metrics. 

Unintended bias shift: While the law seeks to eliminate bias in automated hiring systems, there is a risk that it simply shifts bias to other stages of the hiring process. Employers might swap their automated tools for subjective alternatives, unintentionally introducing different forms of discrimination. For instance, instead of using an algorithm to review resumes, an organization may choose human review, incorporating unconscious bias into a process that is not subject to the same level of rigorous review an automated alternative would undergo. 

More worryingly, the subjectivity and inconsistency of human review could mean that employers are functionally making biased employment decisions “in a black box,” which is the exact outcome this law seeks to avoid. Ultimately, NYC LL 144 could lead employers to choose tools based on avoiding compliance requirements, a criterion that does not necessarily correlate with better tools or better outcomes.  

A better path forward

Reactions to NYC LL 144 are marked by a mix of support and skepticism. While many appreciate its intent to create a fairer, more transparent hiring environment, there are concerns that its operationalization could lead to drawbacks, including increased costs and challenges to innovation in the job market. These varied perspectives highlight the need for ongoing evaluation and adaptation as the law’s impact becomes clearer over time.

All that being said, regulations are an important avenue to promoting fairness and transparency, and could make the public more comfortable with the use of AI in hiring. However, I’d caution that we must not abandon best practices from IO Psychology when formulating such legislation. 

While NYC LL 144 is the first of its kind in the US, it won’t be the last. Nationwide, there is a notable trend of jurisdictions actively reviewing and enacting laws aimed at regulating technology in hiring. Jurisdictions including California, Illinois, New Jersey, New York, and the District of Columbia have been at the forefront of this movement. As they continue to refine their regulatory frameworks, there appears to be a growing recognition of the importance of ethical and responsible technology adoption in hiring, setting the stage for potential nationwide standards in the future.

Luckily, the IO Psychology world has multiple documents that can serve as resources for building future frameworks on this issue. Two of these are the EEOC’s Uniform Guidelines on Employee Selection Procedures, published in 1978, and the Society for Industrial and Organizational Psychology’s (SIOP) Principles for the Validation and Use of Personnel Selection Procedures, last updated in 2018. More recently, SIOP published Considerations and Recommendations for the Validation and Use of AI-Based Assessments for Employee Selection, which outlines clear considerations and recommendations for the development and use of AI tools in hiring. 

Among the best practices SIOP’s guidelines outline are ensuring that AI tools produce scores that are predictive of a chosen outcome (e.g., job performance), consistent in reflecting job-related criteria, and fair and unbiased. These are the same principles and processes I would hope to see reflected in future legislation. While transparency and positive intent are admirable qualities for legislation to have, it is also crucial for selection tools to have established job relevance and predictiveness. In all, my argument is this: When in doubt, go back to the IO basics. 
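Of those three properties, predictiveness and fairness are illustrated in the sketches above; consistency is typically checked with a reliability estimate. Below is a minimal sketch of one common internal-consistency estimate, Cronbach’s alpha, using hypothetical item-level data (the items, candidates, and scale are made up for illustration):

```python
# Minimal sketch of a score-consistency (reliability) check via
# Cronbach's alpha. Values near 1.0 indicate internally consistent
# scores. All data below are hypothetical, for illustration only.

from statistics import pvariance

# Rows = candidates, columns = assessment items (hypothetical 0-5 scores).
item_scores = [
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]

k = len(item_scores[0])  # number of items
item_variances = [pvariance(column) for column in zip(*item_scores)]
total_variance = pvariance([sum(row) for row in item_scores])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```

A regulation grounded in IO best practices would ask for evidence on all three fronts, not only the fairness statistics that LL 144 requires.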

About the author

Hayley Walton is a Talent Science Consultant at CodeSignal. In her role, Hayley acts as a strategic partner and subject matter expert in the IO and talent science space, collaborating with both internal and external stakeholders. She received her Master’s degree in Industrial-Organizational Psychology from the University of Tulsa. Hayley is an active member of SIOP, serving on its Diversifying I-O Psychology Committee.