The global economy is engaged in a frenetic race to acquire “AI talent.” Companies are offering unprecedented salaries for data scientists, machine learning engineers, and prompt engineers, driven by a deep-seated fear of being left behind.
At B9F7 Parvis Trust, our investments in the Education and AI sectors give us a unique perspective on this race. What we see is a profound and dangerous misconception. We believe that as an economy, we are solving the wrong problem. We are training for the last war, not the next one.
The current “skills gap” narrative is fixated on a shortage of technical practitioners: the people who can build, code, and train complex models. While these skills are vital, this is a narrow, temporary bottleneck. The far larger and more permanent challenge—the one that will define corporate success or failure for the next decade—is the catastrophic shortage of AI-native managers and strategists.
We are operating under the flawed assumption that AI is just a better, faster “tool,” like a new piece of software. It is not. AI is a new, non-human participant in the workforce. It is a system that learns, makes decisions, and creates outputs. You do not manage AI the same way you manage a spreadsheet.
The true skills gap is not technical; it is conceptual. We lack the human leaders who can:
- Ask the Right Questions: An AI can find a correlation in a petabyte of data in seconds. It cannot, however, tell you which question to ask. We need managers who can blend deep business acumen with a conceptual understanding of data to frame a problem the AI can meaningfully solve.
- Manage Opaque Systems: An AI model is often a “black box.” It can give you a correct answer—for instance, “reject this loan application”—without explaining its reasoning in a way a human can understand. An engineer can tell you whether the model is working. A new type of leader is needed to decide whether we should trust it, and to build processes that manage this new “black box risk.”
- Govern the Human-Machine Interface: When an AI automates half of a team’s workflow, the manager’s job fundamentally changes. Their new role is to orchestrate a hybrid team of human and AI agents, redesigning processes so that humans focus on what they do best (creativity, empathy, complex strategy) and AI handles what it does best (pattern recognition, optimization, scale).
- Integrate Security and Ethics: An AI model trained on biased data will produce biased results, creating massive legal and reputational risk. A data scientist cannot solve this alone. It requires a leader who can enforce data security and governance, understand the ethical implications of a dataset, and build safeguards into the business process rather than bolting them on as an afterthought.
Our current educational and corporate training systems are failing to produce these leaders. They remain siloed: business schools teach strategy with old case studies, while computer science departments teach algorithms in isolation.
At B9F7, our Education investment thesis is focused on precisely this gap. We are backing the new models of interdisciplinary learning—programs that merge data science with ethics, economics with machine learning, and business strategy with data governance. The most valuable professionals of the next decade will not be pure technologists or pure strategists. They will be the translators, the orchestrators, and the governors who stand at the intersection of both. The race for AI talent is a marathon, not a sprint, and the companies that win will be those who invest in this new generation of leadership.