What the White House's 'AI Bill of Rights' Blueprint Could Mean for HR Tech
Over the past decade, the use of artificial intelligence in areas such as hiring, recruiting and workplace monitoring has moved from speculation to reality for many workers. Now the technology is attracting the attention of the country's top officials.
On Oct. 4, the White House Office of Science and Technology Policy released the "Blueprint for an AI Bill of Rights," a 73-page document outlining recommendations for combating bias and discrimination in automated technology and envisioning a society "where marginalized communities have a voice in the development process and designers work hard to ensure that the benefits of technology reach everyone."
The blueprint focuses on five areas of protection for the American public: safe and effective systems; protections against algorithmic discrimination; data privacy; notice and explanation when automated systems are in use; and access to human alternatives and fallback where appropriate. It follows the May release of two guidance documents from the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice that specifically addressed the use of algorithmic decision-making tools in hiring and other employment actions.
Employment is listed in the blueprint as one of several "sensitive domains" that deserve enhanced privacy and data protections. Entities handling sensitive domain information should ensure that it is used only for "functions that are strictly necessary for that domain," while consent for all non-necessary functions "should be optional."
Additionally, the blueprint states that continuous surveillance and monitoring systems "should not be used" in physical or digital workplaces, regardless of employment status. Surveillance is an especially sensitive subject in the union context; the blueprint notes that federal law "requires employers, and any consultants they may retain, to report the costs of surveilling employees in the context of a labor dispute, providing a transparency mechanism to help protect worker organizing."
A growing presence
The prevalence of AI and workplace automation may depend on the size and type of organization surveyed, though research suggests a sizable share of employers have embraced the technology.
For example, a February study by the Society for Human Resource Management found that nearly a quarter of employers use AI or automation tools, including 42% of employers with more than 5,000 employees. According to SHRM, 79% of respondents using AI or automation said they use the technology for recruitment and hiring, the most commonly cited application.
Similarly, a 2020 Mercer study found that 79% of employers were already using or planned to begin using algorithms that year to identify top candidates based on publicly available information. But AI has applications beyond recruiting and hiring: Mercer found that most respondents said they also use the technology for areas such as employee self-service and performance management, among other functions.
What does "plan" mean to an employer?
Employers should be aware that the blueprint is not legally binding, is not official U.S. government policy and does not necessarily indicate future policy, said Niloy Ray, a shareholder at management-side law firm Littler Mendelson. He added that while the document's principles may translate well to artificial intelligence and automation systems, the blueprint is not final.
"Obviously, this helps raise the level of scientific knowledge and thought leadership in the field," Ray said. "But it does not rise to the level of some laws or regulations".
Employers could benefit from a single federal standard for AI technology, Ray said, particularly given that this is a multi-jurisdictional legislative area. A New York City law limiting the use of AI in hiring takes effect next year. Meanwhile, similar legislation has been proposed in Washington, D.C., and the California Fair Employment and Housing Council has proposed rules on the use of automated decision-making systems.
There is also an international regulatory landscape that can create further challenges, Ray said. Because of these complexities, he added, employers may welcome more discussion of a uniform federal standard, and the Biden administration's blueprint could be a way to start that conversation.
"Let's not go through 55 sets of hoops," Ray said of the potential federal standard. "We're going to have a game of jumping rings."
Codifying standards around data privacy and other areas could be important for employers, given that AI and automation platforms used in hiring often draw on public data that candidates don't know is being used for screening purposes, said Julia Stoyanovich, co-founder and director of the Center for Responsible AI at New York University.
Stoyanovich co-authored a paper published in August in which a group of NYU researchers detailed their analysis of personality tests used by two automated hiring vendors, Humantic AI and Crystal. The analysis found that the platforms exhibited "significant instability in key aspects of measurement" and concluded that they "cannot be considered reliable personality assessment tools."
Even before artificial intelligence entered the equation, the idea that a candidate's personality profile could predict job performance was controversial, Stoyanovich said. She added that laws like New York City's could help provide more transparency into how automated hiring platforms work, giving HR teams a better sense of whether a given tool actually serves its purpose.
"The fact that we are starting to organize this space is very good news for entrepreneurs," said Stojanovic. "We know there are a lot of tools out there that don't work and benefit no one but the companies that make money selling those tools."