Under the NYC mandate, companies that use artificial intelligence to screen, assess and select candidates for various positions will have to make significant changes to their hiring strategy.
City authorities will bar employers from using automated hiring tools unless a yearly bias audit shows that the tools do not discriminate on the basis of an applicant's race or gender. The legislation allows city authorities to impose fines of up to $1,500 per violation on employers or employment agencies. However, the onus of conducting the yearly audits lies solely with the vendors, who will have to demonstrate to employers that their AI tools meet the city's requirements.
“A recruiter shouldn’t rely too much on the behavioural assessments done by the AI. Human touch needs to be amalgamated with the machine at the time of shortlisting, to make the process effective”
Narottam Sharma, chief information officer, Mastek
The idea of AI taking a short cut into unconscious bias is not far-fetched. Artificial intelligence typically mines the resumes of candidates previously selected for the same profile a company is hiring for, treating the desired qualities and qualifications as keywords. When a profile lacks certain keywords, which may even reflect personal categorisations such as gender, age or, in America's case, race, the tool screens it out at the very first stage. This becomes problematic when one is aiming to build a diverse workforce and a healthy company culture.
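The screening logic described above can be sketched in a few lines. This is a deliberately simplified toy, not any vendor's actual algorithm: it harvests keywords from past hires' resumes and rejects new resumes that match too few of them, which is exactly how an incidental trait shared by past hires can become a hidden filter.

```python
# Toy sketch (hypothetical, not a real vendor's tool) of keyword-based
# resume screening as described in the article.

def harvest_keywords(past_selected_resumes, min_share=0.6):
    """Keep any term appearing in at least `min_share` of past hires' resumes."""
    counts = {}
    for resume in past_selected_resumes:
        for term in set(resume.lower().split()):
            counts[term] = counts.get(term, 0) + 1
    threshold = min_share * len(past_selected_resumes)
    return {term for term, count in counts.items() if count >= threshold}

def screen(resume, keywords, min_hits=2):
    """Pass a resume only if it matches enough of the harvested keywords."""
    return len(set(resume.lower().split()) & keywords) >= min_hits

# If every past hire happened to mention "cricket", that word is harvested
# as a keyword alongside genuine skills, and skews all later screening.
past_hires = [
    "python sql leadership cricket",
    "python sql cricket mentoring",
    "python sql cricket",
]
keywords = harvest_keywords(past_hires)   # {"python", "sql", "cricket"}
print(screen("python sql volunteer", keywords))  # True: matches real skills
print(screen("java spark cricket", keywords))    # False: screened out early
```

The problem the article points to is visible in the harvested set: an irrelevant trait shared by past hires carries the same weight as a genuine qualification.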
Even with this shortcoming, the use of AI in hiring is unavoidable. In a previous conversation with HRKatha, Krish Shankar, CHRO, Infosys, said that for larger companies that hire to the tune of thousands every quarter, it would be almost impossible to review every resume that comes in for a position. In such cases, these tools are indispensable for quick and efficient talent acquisition.
However, he also said that AI should be employed along with an accurate auditing system. This auditing system should keep a check on which resumes have been shortlisted and which ones have been weeded out.
Run alongside a strong AI-based hiring model, an auditing system can identify biases that may have developed inherently, such as the model repeatedly bringing forward people with similar traits for interviews.
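A minimal sketch of what such an audit could compute is shown below. The group labels and numbers are hypothetical; the 80 per cent threshold follows the long-standing EEOC four-fifths guideline for adverse impact, which is one common way to flag disparities in selection rates such as those the NYC audits are meant to surface.

```python
# Hypothetical bias-audit sketch: compare shortlisting rates across groups
# and flag any group whose rate falls below 80% of the best-off group's.

def selection_rates(outcomes):
    """outcomes: list of (group, was_shortlisted) pairs -> rate per group."""
    totals, selected = {}, {}
    for group, shortlisted in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(shortlisted)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Made-up audit data: group A shortlisted at 40%, group B at 20%.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)
ratios = impact_ratios(selection_rates(outcomes))
flagged = {g for g, r in ratios.items() if r < 0.8}
print(flagged)  # {'B'}: shortlisted at half the rate of group A
```

Such a check says nothing about why the disparity exists, which is why the executives quoted below stress keeping humans in the shortlisting loop.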
Narottam Sharma, chief information officer, Mastek, told HRKatha that an AI’s inherent bias can be reflective of some biases that the designer of the AI may harbour.
“The machine may be inherently biased if it hasn’t been designed to identify certain aspects of candidates during assessment. These inherent biases can even be planted at the time of inception of the machine by the designer,” says Sharma.
The bias a machine develops may also stem from it not having been built to deal with certain diverse aspects of hiring. "Certain kinds of behaviours displayed by a candidate may automatically lead to the machine barring them due to the inherent bias that it may be working with," he points out. Further, he adds, "A recruiter shouldn't rely too much on the behavioural assessments done by the AI. Human touch needs to be amalgamated with the machine at the time of shortlisting, to make the process effective."
Amit Chincholikar, global chief human resources officer, Tata Consumer Products, tells HRKatha that organisations need to be very objective in their use of the machine in hiring.
“It cannot replace the hiring process but just be a part of the shortlisting process. It is only helpful when one has a set of very clearly-defined criteria for a role. As long as one is able to apply these criteria objectively, AI will help one narrow down one’s process,” he says.
Chincholikar further questions the criteria on which a company using AI bases its hiring.
Many on the tech side do not hold the highest educational qualifications and yet are highly skilled. On an initial profile, however, qualifications are far more apparent in a CV than skills are.
“If the communication is done right and logically, the apprehensions that people may have with AI may be eliminated. Regulating and mandating are definitely the right things to do and other authorities could take cognisance of NYC’s mandate.”
Amit Chincholikar, global chief human resources officer, Tata Consumer Products
The NYC mandate will also force makers of AI tools to disclose more about their opaque workings and give candidates the option of an alternative review process, such as a human reviewer, for their application.
Speaking on why companies don’t divulge the use of AI in hiring, Chincholikar says, “It is similar to how one is apprehensive about driverless cars or aircraft. Also, there are candidates who may wonder whether the use of AI lends itself to the right criteria being used in the process of hiring.”
He further says, "I think transparency will definitely enhance the employer brand. If the communication is done right and logically, the apprehensions that people may have with AI may be eliminated. Regulating and mandating are definitely the right things to do and other authorities could take cognisance of NYC's mandate."
Speaking on whether such mandates could come into effect in India, Chincholikar opines, “In India, data privacy is the only context in which such a mandate can come into effect. When AI tools are deployed, companies need to be transparent in terms of who has access to the information. As a tool, if it gets deployed in other forms, it could be an issue.”
Sharma believes that regulation will not work unless a defined action is planned and implemented. "One has to be upfront in divulging whether one is using AI in hiring anyway. Transparency should be a normal practice," he asserts.