Several major companies, including Walmart, Delta, T-Mobile, Chevron, and Starbucks, are reportedly using an AI-powered platform called ‘Aware’ to monitor employee conversations on platforms such as Slack and Microsoft Teams.
Aware analyses a vast dataset of up to 20 billion communications from over 3 million workers, aiming to identify potential issues like employee dissatisfaction and safety concerns.
Companies tout the platform as a tool for gauging employee sentiment and morale in real time and for flagging harmful behaviours such as harassment and bullying, but its use has sparked anxieties about AI encroaching on private communication.
Critics argue that the system, which anonymises data but can identify specific names in critical situations, represents an infringement on employee privacy. Some question the reliability of AI systems, while others express discomfort over the potential for data misuse.
While some of the companies involved emphasise that they do not use Aware for sentiment analysis or for tracking ‘toxicity’, others acknowledge using the tool to monitor employee sentiment and trends alongside feedback gathered through traditional channels.
This news comes amid broader anxieties surrounding the use of AI in the workplace. Recently, Amazon warned employees against using third-party AI tools for confidential tasks due to the risk of data breaches.
The increasing adoption of AI for employee monitoring highlights the need for a balanced approach that prioritises both employee privacy and organisational needs while ensuring responsible and transparent use of such technologies.