Lately, ChatGPT has taken over news forums everywhere. Its emergence has triggered conversations about the utility of artificial intelligence (AI) and its role in our lives. In the open market, AI is still in its early stages. While people are still trying to figure out how to respond to its functions, they do seem to be optimistic about its future.
Big corporations such as Microsoft and Google seem to be locked in an AI cold war of sorts. Microsoft has bought a larger stake in OpenAI, the creator of ChatGPT and DALL·E, while Google has upgraded its pre-existing AI programme, LaMDA, to build Bard, a competitor to ChatGPT.
Much has been written about the advantages that AI, more specifically ChatGPT, provides in many facets of work. In terms of HR, AI bots can help in recruitment, employee engagement, task management and improving overall performance.
When spoken about in isolation, each of these sounds great. When it comes to HR, however, ChatGPT and AI have a long way to go before they can create the seamless system people envision. ChatGPT poses many risks, today as well as in the future, and not just in terms of job security.
Cybersecurity
As more functions of HR become digitised, the issue of cybersecurity becomes even more pertinent. Organisations are shifting large physical databases to online systems for management and safeguarding. These databases usually hold a lot of sensitive information — about the employees and the organisations themselves — that is very well protected by numerous firewalls and other security measures.
Although systems such as data-loss prevention, fraud detection, and identity and access management leverage AI to automate many functions, the same is true for hackers and fraudsters. AI has widely proven to be more effective as an attacker than as a defender.
Numerous reports have described how cybercriminals are taking advantage of ChatGPT. Frequent users of online hacking forums have described how they use ChatGPT to corrupt and extract information. A user on the platform shared a Python script given to him by ChatGPT, which he claimed could encrypt someone’s machine without any user interaction!
Users with no knowledge of coding have leveraged the AI-based platform to generate elaborate code that can be used for spyware, ransomware and other malicious purposes.
“Any new technological development is met with its fair share of people who want to exploit it. When Google first came out, people found ways to manipulate the search engine for their own ulterior benefit. Organisations must weigh the risks of using such platforms before using them,” says Sailesh Menezes, sr. director and head of HR, India, Hewlett Packard Enterprise.
Bias
Artificial intelligence was touted as the next big thing that would change the hiring process for organisations. People hoped that removing humans from the hiring function would make the process more equitable.
As companies began developing AI-driven hiring platforms, they soon realised it may not be the solution they were looking for. Amazon quickly rolled back its AI hiring programme after finding concerning patterns in its selection process. The screening tool was reportedly notorious for discriminating against female applicants.
On the surface, the programme seemed reasonably objective: it had been coded to vet candidates on the basis of patterns learnt from the resumes of Amazon employees over a 10-year period. What the company failed to account for was that most of the resumes fed to the system belonged to men. The system, therefore, inferred that applications from men were preferred over those from women.
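The mechanism behind this kind of bias is easy to reproduce. The sketch below is a toy illustration, not Amazon’s actual system: a naive scorer that rates new resumes by how often their words appeared in a (made-up, deliberately skewed) history of past hires. Because the historical data contains far fewer resumes mentioning women’s organisations, an otherwise identical resume containing such a mention scores lower.

```python
# Toy illustration of how a screener trained on a skewed hiring
# history learns gendered preferences. All data here is invented.
from collections import Counter

# Hypothetical past "hired" resumes -- mostly from men, so tokens
# associated with male applicants dominate the frequency counts.
hired_resumes = [
    "captain chess club software engineer",
    "software engineer chess club",
    "captain football software engineer",
    "software engineer womens chess club",  # only one such resume
]

token_counts = Counter(t for r in hired_resumes for t in r.split())
total = sum(token_counts.values())

def score(resume: str) -> float:
    """Average historical frequency of the resume's tokens."""
    tokens = resume.split()
    return sum(token_counts[t] for t in tokens) / (total * len(tokens))

# The resume mentioning a women's club scores lower than an
# otherwise identical one, purely because of the skewed history.
print(score("software engineer chess club"))
print(score("software engineer womens chess club"))
```

No rule in this code mentions gender; the discrimination emerges entirely from the imbalance in the training data, which is why simply “removing humans” from hiring does not remove human bias.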
ChatGPT has also had its fair share of criticism on similar lines. Many believe the AI platform has a political, racial and gender bias.
Organisations that have integrated the AI platform to write generic progress reports, performance overviews and so on, have reportedly seen a stark gender bias on the platform. Without any prompting, the bot assigns each occupation its stereotypical gender. This could prove incredibly dangerous if organisations look to use such output in appraisals or other decisions about people.
Unfortunately, AI will only be as unbiased as those who code it. Eliminating unconscious prejudice entirely is nearly impossible, as all humans hold biases to some extent. Many experts suggest that predictive AI will always maintain the status quo, since it will always be modelled on biased and inadequate datasets.
Menezes calls for strict regulation on such platforms to mitigate such biases.
“Organisations must refrain from using ChatGPT for such uses. A platform that showcases such bias cannot be utilised in a corporate environment, as it risks discriminating against a group of people in the workplace, which could end up fostering an unhappy work environment. Strict regulations must be made to ensure ChatGPT is bias-free before it is used for HR functions,” he says.
Will HR become AI-R?
HR’s dependence on technology is a topic that has been discussed for a few years now. Many fear that the shift HR departments are making to automated solutions may risk HR losing the ‘human’ aspect of the job.
Artificial intelligence has the capability to do more than just automate certain functions. People have already begun discussing its role in active talent management and rotation, more advanced hiring applications, benefit management and so on.
Mangesh Bhide, sr. VP and head HR, Jio, believes AI platforms such as ChatGPT can only perform repetitive tasks that are bound to be automated at some point in the future.
“Tasks that do not require much human intelligence have always been bound for automation. Thus, applications like ChatGPT don’t pose such a risk to HR, as they lack the ability to think on their own, in an empathetic manner,” says Bhide.
The ability of AI to learn as it takes in more information is at once one of its greatest boons and banes. The urge to let AI programmes absorb information about employees and the organisation, unhindered, in pursuit of a seamless automated system may sound enticing. However, it raises serious questions about the extent to which we are willing to let a computer programme dictate our lives.
Fortunately for us today, we stand on the safer side of the Turing test. The day AI becomes indistinguishable from humans is the day we should fear. Maybe it is time to take appropriate measures to stop the line between humans and AI from blurring.