Meta has scrapped an internal tool designed to track employees’ use of artificial intelligence after concerns arose over sensitive data being shared outside the company. The feature, which was live only briefly, was intended to push deeper adoption of AI across teams.
The system, internally called “Claudeonomics,” ranked employees by their usage of generative AI tools. It measured activity in “tokens,” the units of text that large language models process. The leaderboard tracked top users and introduced gamified elements, including badges to reward high engagement.
The initiative highlighted the scale at which AI tools are being used within the company. Employees collectively processed massive volumes of tokens over a short period, with some individuals logging extremely high usage. The tool drew its name from Claude, the model developed by Anthropic that is widely used within Meta, particularly for technical and coding tasks.
The tool was withdrawn quickly after internal metrics began circulating publicly. The company said the feature was meant to offer a light, interactive way to visualise AI usage, but concerns about data exposure led to its removal.
The episode reflects a broader shift in the tech industry, where companies are increasingly measuring how employees interact with AI tools. This trend, sometimes described as “tokenmaxxing,” treats AI usage as a proxy for productivity and efficiency.
While Meta has ended this experiment, the episode underscores a larger transition underway, as organisations explore new ways to quantify performance in an AI-driven workplace while managing emerging risks around data governance.