Apple has banned its employees from using ChatGPT and other AI tools. GitHub's Copilot, an AI tool that helps developers write code, is also banned from use.
Artificial intelligence tools and services rely on user input data to train their models, making them more user-friendly and accurate. This approach, however, can unintentionally leak users' private data, which is a concern for companies whose employees may upload inside information to AI services.
In January, Amazon banned the use of ChatGPT by its staff after the company discovered resemblances between ChatGPT's answers and its internal data.
In February, financial-services company JP Morgan Chase also restricted the use of ChatGPT to avoid sharing sensitive financial information with a third-party application.
Since then, other financial firms, including Bank of America, Citigroup, Deutsche Bank, Wells Fargo and Goldman Sachs, have also banned the use of AI chatbots.
Before Apple, Samsung implemented a similar ban earlier this month, after discovering that sensitive internal data had been leaked when one of its engineers uploaded it to ChatGPT.
The major reason behind these bans is concern about how AI platforms such as ChatGPT and Google Bard store data on their servers.
Although ChatGPT offers the option to delete user history, it is not certain whether the data is permanently deleted.
According to a report in The Wall Street Journal, Apple is working on developing its own AI tool under John Giannandrea, a former Google employee.