You have probably heard the line already.
"AI will not replace you. A person using AI will."
It is catchy, but I think the real shift is slightly different and more commercially important.
The biggest change is not that jobs disappear overnight. It is that the scope of what one capable person can do keeps expanding. And when that happens, organisations do not just change how work gets done. They change how many people they need to do it, which roles need specialist support, and where they are willing to pay for depth.
That is the part many teams are still underestimating.
A good example is product management.
A strong product manager with AI support can now do far more of the surrounding work themselves than most businesses would have expected even a year or two ago. They can summarise customer feedback, draft a problem statement, structure a discovery plan, pressure-test assumptions, create first-pass wireframes, and prepare stakeholder comms in a fraction of the time. In many environments, they can also interrogate product data, explore funnel performance, generate hypotheses, and sketch a dashboard without waiting in a queue for someone else to do it for them.
That does not mean they have become a data analyst, a designer, or an engineer.
It means they have become more self-sufficient.
And self-sufficiency changes the economics of teams.
For years, a lot of knowledge work has depended on handoffs. A question goes to the analyst. A concept goes to the designer. A simple build goes to the developer. A summary goes to operations. A slide deck goes to someone “more technical” or “better with structure.” Those handoffs are not free. They create delay, queues, context-switching, and overhead.
AI reduces the cost of those handoffs by helping people do more competent first-pass work on their own.
That is why the impact shows up first in role boundaries, not org charts.
Business users are becoming more capable of doing work that used to sit just outside their lane. The self-service analytics movement was already pushing in this direction before generative AI arrived. The difference now is speed and range. Modern tools do not just help people access data. They help them frame questions, explore patterns, build outputs, and communicate findings faster than before.
That matters because organisations do not buy capability in the abstract. They buy outcomes.
If one product manager can independently answer more questions, run more experiments, and move an idea further before specialist support is required, the business starts needing specialists differently. Not always fewer in an absolute sense, but fewer doing routine support work. More of their value has to come from deeper judgment, harder problems, stronger governance, and the work that still genuinely benefits from expertise.
In other words, AI does not eliminate technical roles so much as it raises the bar for what those roles need to be doing.
The same pattern shows up in analytics.
A lot of analysts have spent years acting as a reporting service desk for the business. Pull this number. Slice that segment. Update that chart. Check that trend. Build that dashboard. Some of that work will remain, but a growing share of it is becoming easier for capable stakeholders to do themselves when the data foundations are good enough.
That is the real dependency shift.
When a commercial leader, product manager, or operator can self-serve more of their questions, they stop needing as much day-to-day support for straightforward analysis. The analysts who remain most valuable are the ones who can build trusted data products, improve definitions, solve messy cross-functional problems, and turn confusion into clarity. The work moves up the value chain.
Developers are seeing a version of this too. AI coding tools do not remove the need for engineering judgment, architecture, testing, and production discipline. But they do let non-engineers prototype more, and they let engineers themselves move faster on certain classes of work. That changes team leverage. A person who previously needed support to get a small internal tool, workflow, or proof of concept off the ground may now get much further alone.
This is why I think the lazy version of the debate misses the point.
The question is not only whether AI can do your job.
The better question is whether someone around you can now absorb enough of your function that your role becomes harder to justify in the same quantity, at the same level, or in the same form as before.
That is a different kind of risk.
It is less dramatic than mass replacement headlines, but far more realistic inside ordinary businesses. Most organisations will not wake up one morning and delete whole departments because a model got better. What they will do is gradually redesign work around people who can move faster with fewer dependencies.
That has two consequences.
First, the people who learn to use AI well become more valuable because they increase their own leverage. They write faster, analyse faster, explore faster, and unblock themselves more often.
Second, the people whose value depended mainly on being the person who could perform a routine support function become more exposed, unless they move upward into more strategic, more technical, or more judgment-heavy work.
That is not a reason for panic. But it is a reason to be honest.
The safest position is not to assume your title protects you. It is to become the person who can combine domain understanding, commercial judgment, and AI fluency better than the person next to you.
Because that is where the advantage is now.
Not in AI replacing humans.
In humans who know how to use AI expanding the amount of useful work they can do, and forcing every organisation to rethink where specialist effort is still most needed.