Key elements:
- NYC’s Automated Employment Decision Tool law (Local Law 144) requires employers to conduct annual audits assessing the statistical fairness of AI-powered hiring processes across race and gender.
- Despite good intentions, the law has several limitations, mainly around transparency, accountability, and clarity.
- Future regulations can learn important lessons from NYC’s AI hiring law to create more effective and comprehensive frameworks ensuring fair AI usage in hiring.
The Biden administration and federal agencies can learn crucial lessons on fair AI usage from New York City’s AI hiring law. NYC’s Automated Employment Decision Tool law, also known as Local Law 144, is the first to mandate algorithmic fairness audits for commercial AI systems used in hiring and promotions. It requires employers to notify applicants when AI-powered hiring tools are used and to conduct and publish annual audits assessing the fairness of those tools across race and gender. The underlying premise of the law is to promote transparency, thereby incentivizing employers to refrain from using biased AI systems.
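In practice, the audits the law envisions boil down to impact-ratio calculations: the selection rate for each demographic category is compared against that of the most-selected category. The sketch below shows one way such a computation might look; the function name, input format, and sample figures are hypothetical illustrations, not taken from the law or its implementing rules.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Compute selection-rate impact ratios per demographic group.

    `outcomes` is a hypothetical input format: (group, selected) pairs,
    where `selected` indicates whether the candidate advanced.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1

    # Selection rate for each group.
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    # Impact ratio: each group's rate relative to the highest group's rate.
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit data: (demographic category, advanced-to-interview flag).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(impact_ratios(sample))  # {'A': 1.0, 'B': 0.5}
```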
Despite its intentions, the law is widely seen as inadequately designed, with flaws and limitations that curtail its effectiveness. Its loopholes lie mainly in its structure of accountability, its notification methods, the lack of a centralized repository for audits, the absence of defined thresholds for acceptable bias levels, and its loose terminology. For instance, the law calls for employers to post audit results online for public view, yet there is no centralized location for these audits, making access difficult. Also problematic is the lack of clarity, coupled with ambiguity, surrounding the definitions and terminology used in the law, which invites creative avoidance techniques by the entities involved.
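To make the missing-threshold problem concrete: Local Law 144 never says what impact ratio counts as acceptable. Absent such a standard, auditors often fall back on the EEOC’s informal four-fifths rule of thumb, under which a ratio below 0.8 signals potential adverse impact. A brief continuation of the sketch above:

```python
# Reusing impact_ratios() and `sample` from the sketch above. The 0.8 cutoff
# is the EEOC's four-fifths rule of thumb, not a threshold set by Local Law 144.
FOUR_FIFTHS = 0.8

flagged = {group: r for group, r in impact_ratios(sample).items() if r < FOUR_FIFTHS}
print(flagged)  # {'B': 0.5} -- below the informal threshold
```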
Notably, the law excludes platforms such as LinkedIn, Indeed, and Monster from the auditing obligation because they do not make final hiring decisions, creating a significant accountability gap. The lessons it highlights include the need for broader definitions focused on a system’s purpose or use rather than its technical specifications, clear lines of accountability, and a central repository ensuring easy access to audits. Future regulations must therefore provide clearer guidance on permissible bias levels and ensure distributed accountability to uphold the safety and fairness of these systems.