A few days ago, I had the pleasure of participating in the 4th Annual Work-Life Balance Conference, organized by Fundación Másfamilia at CaixaForum Madrid.
A necessary space to reflect on a reality that is no longer the future. It is the present.
The relationship between artificial intelligence, mental health, and leadership.
Beyond everything that was shared, there is one idea that clearly summarizes my perspective:
For years, we have been talking about artificial intelligence in terms of productivity, automation, and efficiency. And that matters.
But today, we need to ask a different question:
Not what AI can do, but what we must protect when we use it.
Because if there is something I consistently observe in my work with leaders and teams, it is this:
This is not a problem of capability.
This is a problem of wellbeing.
Teams that function, but are exhausted.
Professionals who perform, but are emotionally disconnected.
Organizations that do not understand why their talent wears out or leaves.
And this is not something that can be solved with technology alone.
The data is clear. Absenteeism has reached historic levels, and mental health has become one of the main causes of work incapacity.
But beyond the data, there is something even more important: these are not statistics, they are people.
Overwhelmed leaders, teams carrying accumulated emotional fatigue, organizations that do not know how to interpret what is happening to them.
And this is where artificial intelligence opens up a meaningful opportunity.
AI can become a highly valuable tool for organizational wellbeing.
It allows us to detect signals that previously went unnoticed.
In short, it allows us to move from a reactive model to a preventive one.
And this has enormous value.
Because emotional distress does not appear overnight. It builds gradually. Detecting it early allows us to intervene before the impact becomes greater.
But alongside this opportunity, there is a risk that I consider especially relevant:
Confusing monitoring with caring.
Having data does not mean we are improving wellbeing. Measuring is not intervening. Detecting is not transforming.
When an organization measures wellbeing but does not act on its root causes, it creates something very dangerous: the illusion of care.
And people notice. Always.
They can tell whether technology is serving their wellbeing or becoming a tool for control. And that perception directly impacts trust, which is the foundation of any healthy environment.
One of the key points I shared during the conference is this:
AI is not neutral. It is an amplifier.
It amplifies the culture that already exists.
If an organization has a culture based on trust, care, and healthy leadership, AI can multiply that positive impact.
But if the culture is based on control, pressure, or distrust, technology can intensify those dynamics.
So the question is not whether AI is good or bad.
The question is: Is your organization emotionally and culturally prepared to use it?
In this context, leadership becomes the critical variable.
AI can detect that a team is at risk.
But it cannot do what defines leadership:
AI informs.
Leadership cares.
And that difference cannot be automated.
In fact, there is an idea that is becoming increasingly clear:
The more artificial intelligence we introduce into organizations, the more important emotional intelligence in leadership becomes.
Before incorporating technology into wellbeing processes, organizations must ask themselves an essential question:
Not what we can measure, but what we should measure, and why.
Because not everything that is technically possible is ethically appropriate.
The difference between a tool that cares and a tool that controls is not defined by the algorithm.
It is defined by organizational intent.
And by the type of leadership that accompanies it.
If you are introducing artificial intelligence into your organization, there is one question you should not avoid:
What are you really using it for?
Not from the official narrative, but from daily practice.
Because technology does not define people’s experience.
The way we use it does.
And that use, ultimately, is a leadership decision.
I am deeply optimistic about the potential of artificial intelligence to improve wellbeing in organizations.
It can help us detect earlier, intervene better, and prevent a great deal of unnecessary suffering.
But that potential only becomes reality when something already exists:
A culture oriented toward care, and leadership that assumes its emotional responsibility.
AI can be an excellent co-pilot, but the pilot is still human leadership.
Because, ultimately, this is more than a technological shift: it is an ethical responsibility.
Working with technology is inevitable. Learning how to lead in this context is not.
That is where true leadership development really begins.