Krish Shankar, former Group Head of HR at Infosys and founding mentor at Crossmentors, recently shared a presentation titled There Is A Lot Brewing! on the CHRO’s role in navigating the AI transformation. It was provocative, sweeping, and deliberately uncomfortable.
In this conversation with Dr. Prajjal Saha, founder and editor of HRKatha, he unpacks a future where AI could shrink workforces, flatten organisations, and force leaders to confront difficult trade-offs between efficiency, control, and the human experience of work.
The AI-enabled workday
You paint a future where every employee and manager is surrounded by AI agents. But let’s get specific. What does a typical workday actually look like, and what part of work stays distinctly human?
A typical workday in the future would be defined by what I call “hybrid intelligence”, where humans are supported by a suite of AI agents and the nature of work shifts towards orchestration and high-level decision-making.
For the individual employee, daily workflows will be supported by agents handling email, meeting scheduling, task prioritisation, and travel logistics. A co-pilot will assist with research and analysis, while much of the technical and operational work is handled by AI.
For managers, in addition to personal productivity agents, there would be an entire support system of agents for team performance monitoring, productivity coaching, and learning partnerships. Agents could also assist as business partners for HR and finance decisions.
What remains distinctly human is the ability to imagine, synthesise, and lead. I see the future professional as a “Synthiant Imagineer”—someone who is creative, sentient, and capable of orchestrating systems.
Leadership, ethical judgement, and the ability to connect disparate pieces of information remain fundamentally human strengths. Ensuring ethical outcomes, managing bias, and preserving human values will become even more critical in an AI-led workplace.
“The boundary must be clear: AI will recommend, but humans must take decisions.”
Rethinking organisational design
You speak about “rocket-shaped organisations” and fewer layers of management. That sounds clean in a presentation. But in practice, what breaks first, and how should leaders rethink control and accountability?
The impact will vary by industry, but knowledge-work-heavy sectors will see the most significant change. AI is a general-purpose technology, like electricity or the internet. It allows us to rethink work fundamentally, not just automate it.
The first disruption will be in middle management. Their traditional role as conduits of information and supervision will diminish as AI takes over routine monitoring, reporting, and coordination.
Decision-making will become faster and more decentralised, with decisions moving at least two levels lower than in current structures. Functional silos will weaken as leaders are required to think across systems rather than operate within departments.
There is also a risk. Continuous monitoring by AI can lead to loss of autonomy and alienation if the human element is not actively managed.
Leaders will need to shift from managing execution to designing systems. Control will come through how workflows and AI agents are architected. At the same time, span of control will expand significantly – up to 15 to 20 direct reports – requiring a more empowering and emotionally intelligent leadership style.
Accountability, too, will evolve to include governance of AI systems. Leaders, particularly the CHRO, will be the custodians of ethics and human values.
“If not managed carefully, AI-led workplaces risk creating a loss of autonomy and a sense of alienation.”
The workforce question
You indicate significant workforce reduction in certain sectors. Is this an inevitability or a projection? What assumptions are you making, and what should companies be doing today?
This is a gradual transformation. It will not happen overnight. But over time, it is inevitable.
In sectors such as IT, data processing, and customer service, I see a potential 50 to 60 per cent reduction in the current scope of work. In sectors such as manufacturing or mining, the impact may be lower, around 10 to 20 per cent.
These projections rest on a few assumptions: that AI will handle most technical work, that a large part of white-collar work will be automated within the next few years, and that organisations will integrate enterprise-level agents capable of managing core processes across functions.
At that point, the shift becomes less about possibility and more about timing.
For organisations, the real question is not how many roles will go, but how AI can help them grow, compete, and move faster.
Companies should treat AI as a catalyst for transformation. Start experimenting. Allow teams to tinker. Focus on how AI can improve speed and competitiveness, not just reduce headcount. Build a strategic roadmap, run pilots, and invest heavily in upskilling. That is non-negotiable.
“Control will no longer come from supervision, but from how well systems and AI workflows are designed.”
The new leadership playbook
You describe future leaders as “orchestrators” working with both humans and AI agents. What capabilities will define them?
Leaders in the AI age will need a combination of imagination, orchestration, empowerment, and systems thinking.
They will act as architects – designing and integrating AI agents into workflows and building hybrid performance pods where humans and machines work together.
The ability to synthesise information and recognise patterns across the enterprise will become critical, moving beyond narrow functional expertise.
Emotional intelligence will matter even more, given the scale and complexity of teams. Leaders will need to build trust, maintain cohesion, and preserve the human element.
They will also play a key role in ethical oversight, ensuring that AI-driven decisions align with organisational values and remain free from bias.
Finally, collaboration will need to become far more open and cross-functional. Leaders will have to think and act across systems, not silos.
“The future leader is not a manager, but a system orchestrator working with humans and AI agents.”
The ethics and trust challenge
Here’s where it gets uncomfortable. If AI systems are monitoring performance, recommending decisions, and shaping careers, how do organisations prevent a loss of autonomy and trust? Where should CHROs draw the line?
This is a tough balance to strike.
Organisations must begin by defining a clear philosophy for AI — what it should and should not do. This needs to be discussed at the leadership level and embedded into the culture.
There are real second-order effects to consider: loss of autonomy, feelings of alienation, and the perception of constant surveillance.
CHROs must ensure that AI systems are governed by strong ethical frameworks, with rigorous checks for bias and fairness.
At the same time, organisations need to consciously preserve the human element. Decisions related to performance, careers, and growth should remain human-led and accountable.
“Over time, workforce reduction in current roles is inevitable. The real question is how organisations reinvent growth.”
What CHROs must do now
If you had to prioritise, what are the two or three decisions CHROs must take in the next 12-18 months, and what should they avoid?
Three priorities stand out.
First, define a clear AI roadmap. Identify use cases, run pilots, and articulate guiding principles for how AI will be deployed. Build a culture of experimentation.
Second, design the future organisation. Redefine roles, build new career paths, and invest in large-scale upskilling, not just in AI literacy but also in emotional intelligence and synthesis capabilities.
Third, establish strong ethical governance frameworks. CHROs must act as champions of human values, ensuring that AI systems are fair, transparent, and aligned with organisational principles. A key boundary must be clear: AI will recommend, but humans must take the decisions.
What should CHROs avoid?
Do not frame AI purely in terms of job losses. Focus instead on growth and new opportunities.
And avoid a purely top-down approach. Involve employees in shaping how AI is adopted, as they will ultimately be the ones working alongside it.
As Daniel Kahneman says, removing the restraining forces is always a better strategy than pushing the driving forces!