Here was one of the “Godfathers of AI” — a man who helped invent deep learning — sounding the alarm about a future he’d helped build. In interviews, he warned that AI might soon outpace our control. That rogue systems could emerge. That we may have already gone too far.
And yet, in nearly the same breath, we hear from leaders like Microsoft CEO Satya Nadella, who envisions AI not as a threat, but as the next great leap in human productivity. A new operating system for work that will fully transform business, education, healthcare, logistics and, of course, data analytics.
So who’s right?
In this week’s newsletter, I want to take a step back and offer a broader perspective on the topic of Artificial Intelligence. Because understanding where AI is going requires holding multiple truths at once: the promise, the peril, and the present reality.
Let’s start with Satya Nadella’s optimistic vision — and work our way toward something more grounded.
The Optimist’s View — AI as the Next Platform Shift
Nadella’s vision for AI is nothing short of revolutionary.
This future involves not just adding AI to apps, but rethinking how intelligence shows up in our tools, decisions, and daily work. Nadella describes AI as a new operating system with “copilots” that sit across communication, content, coordination, and creativity. These are agents capable of executing multi-step tasks across departments.
Human roles, in this vision, evolve from executors to orchestrators.
This is especially true in analytics. Where analysts once spent hours cleaning data, building dashboards, and wrangling spreadsheets, AI is stepping in to automate much of the groundwork. Need a summary of a 10-tab workbook? Consider it done. Want to conduct a cohort analysis instantly? No problem. Need suggestions for predictive models based on detected patterns? Certainly. Just ask the agent.
In this new landscape, the analyst’s job is no longer to execute every step; it’s to govern the process: to collaborate with a “team” of AI agents, guide them, and review, refine, and redirect as needed.
This means:
- Framing better questions.
- Setting the right prompts, parameters, and constraints.
- Vetting outputs for accuracy and risk.
- Spotting where automation adds value and where it introduces error.
- Making the final judgment call before an insight becomes an action.
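One way to make “vetting outputs for accuracy and risk” concrete is to recompute key figures yourself and diff them against what the agent claims. The sketch below is purely illustrative: the function name, the tolerance, and the sample numbers are all assumptions, not any specific product’s API.

```python
# Hypothetical sketch: a simple "vetting" gate for AI-generated figures.
# Everything here (validate_ai_summary, the 1% tolerance, the sample data)
# is an illustrative assumption.

def validate_ai_summary(claimed: dict, source: dict, tolerance: float = 0.01) -> list:
    """Compare metrics an AI agent reported against values recomputed
    from the raw data; return a list of discrepancies to review."""
    issues = []
    for metric, claimed_value in claimed.items():
        if metric not in source:
            issues.append(f"{metric}: not reproducible from source data")
            continue
        actual = source[metric]
        # Flag anything off by more than the relative tolerance.
        if abs(claimed_value - actual) > tolerance * max(abs(actual), 1e-9):
            issues.append(f"{metric}: agent said {claimed_value}, data says {actual}")
    return issues

# Usage: recompute the headline numbers independently, then diff.
agent_claims = {"monthly_revenue": 120_500.0, "churn_rate": 0.041}
recomputed   = {"monthly_revenue": 118_900.0, "churn_rate": 0.041}
print(validate_ai_summary(agent_claims, recomputed))
```

The point isn’t this particular check; it’s that the human-in-the-loop step becomes a repeatable gate rather than an eyeball pass.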
The analyst becomes a critical filter: someone who pairs AI’s speed and scale with human oversight and business context to do more, faster, without sacrificing trust or rigor.
But the real shift isn’t just speed; it’s scope.
The combination of quantum computing and AI could eventually make it possible to simulate whole economies, ecosystems, or consumer networks and gain insights from the results. For data analysts, this means advancing from reactive insights that answer “What happened?” or even “Why?” to predictive and even generative intelligence that answers “What if?” and “What’s next?”
Nadella is clear: the goal isn’t just to improve workflows. It’s to unlock new economic value. And he believes AI infrastructure will remain diverse, competitive, and open.
But not everyone is so confident.
The Cautionary View — AI Beyond Human Control
Geoffrey Hinton’s resignation from Google was a warning shot.
He contends that superintelligence — AI that outstrips human abilities in nearly all areas — is no longer a far-off dream. Once these systems begin to pursue objectives such as self-preservation or acquiring resources, our control over them could vanish.
Superintelligence is frequently mentioned along with AGI, or Artificial General Intelligence, which refers to AI capable of reasoning, learning, and solving problems across various tasks like a human. However, superintelligence takes it further: it doesn’t just match human intelligence, it surpasses it significantly.
This should be a major concern for analysts. If we can’t fully grasp how a model produces its results or if it starts operating beyond our understanding, who bears responsibility? What do we do when a model uncovers a crucial insight that we can’t trace, replicate, or explain?
Hinton also highlights that we shouldn’t only be concerned about future AGI but also about current threats from malicious actors. Deepfakes, data tampering, and AI-driven fraud are all dangers that data experts need to address today.
His recommendation? Begin integrating governance, transparency, and ethical oversight into every AI system we develop now, rather than waiting.
The Overlooked Reality — AI’s Environmental Cost
Even if AGI is far off, today’s AI still comes at a cost, and that cost is environmental.
Running large models requires significant energy and water. A recent estimate predicts data centers could consume 12% of U.S. electricity by 2028. A single ChatGPT query might use as much water as filling a glass — imagine that multiplied by billions.
As analysts, we typically examine return on investment, but an honest ROI calculation should now also ask: what’s the energy cost per insight? While tech companies emphasize sustainability goals, there’s no regulatory framework and limited transparency. Meanwhile, more efficient models like Phi-3 and DeepSeek present viable alternatives. They’re smaller, less costly to operate, and often just as effective.
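To make “energy cost per insight” tangible, here’s a back-of-the-envelope sketch. Every number in it is an assumption chosen for illustration, not a measured figure; swap in your own estimates.

```python
# Toy "energy cost per insight" calculation.
# All constants below are assumptions, not measured values.

WH_PER_QUERY = 3.0        # assumed energy per LLM query, in watt-hours
QUERIES_PER_INSIGHT = 25  # assumed prompts/iterations behind one usable insight
PRICE_PER_KWH = 0.15      # assumed electricity price, USD per kWh

def energy_cost_per_insight(wh_per_query=WH_PER_QUERY,
                            queries=QUERIES_PER_INSIGHT,
                            price=PRICE_PER_KWH):
    """Return (kWh, USD) consumed to produce one insight under the assumptions."""
    kwh = wh_per_query * queries / 1000.0
    return kwh, kwh * price

kwh, usd = energy_cost_per_insight()
print(f"~{kwh:.3f} kWh and ~${usd:.4f} per insight (under these assumptions)")
```

The pennies-per-insight result looks trivial until you multiply it by every query, every analyst, every day, which is exactly the scale problem the estimates above describe.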
The lesson? Being smarter doesn’t always mean being bigger; sometimes, it means being more efficient.
So… Where Are We Really?
It’s easy to get swept up in the hype. But here’s what’s actually true right now:
- LLMs are maturing. The core transformer architecture is stable. We’re now seeing value in smaller, task-tuned models that are cheaper, faster, and more domain-specific.
- Agents are real and coming fast. In analytics, this means orchestrating chains of tasks across tools: cleaning data → analyzing patterns → generating charts → writing narratives.
- Robotics? Still hard. Physical-world intelligence is lagging behind digital. For analysts, this means that while bots won’t take your seat at the table just yet, the assistant sitting next to you is getting much, much smarter.
- We’re building the future unevenly. Marketing, operations, and finance are already being transformed. But professions built on emotional intelligence and interpersonal connection (therapists, teachers, healthcare providers), on physical labor, or on nuanced legal judgment are still adapting.
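The agent chain described above — cleaning data → analyzing patterns → generating charts → writing narratives — can be sketched as a minimal pipeline where each step’s output feeds the next. Real agent frameworks add tool selection, LLM calls, and retries; the function names and sample data here are illustrative assumptions.

```python
# Minimal sketch of an agent-style task chain. Each step is a plain
# function; orchestration is just feeding one step's output to the next.
# Names and data are illustrative, not a real framework's API.

def clean(rows):
    # Drop incomplete records.
    return [r for r in rows if r.get("region") and r.get("sales") is not None]

def analyze(rows):
    # Aggregate sales by region.
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0) + r["sales"]
    return totals

def narrate(totals):
    # Turn the analysis into a one-line narrative.
    top = max(totals, key=totals.get)
    return f"Top region: {top} with {totals[top]} in sales."

def run_pipeline(rows, steps=(clean, analyze, narrate)):
    result = rows
    for step in steps:  # the "orchestration" loop
        result = step(result)
    return result

raw = [
    {"region": "EMEA", "sales": 120},
    {"region": "APAC", "sales": 150},
    {"region": None,   "sales": 90},   # dropped during cleaning
]
print(run_pipeline(raw))
```

The analyst’s job in this picture is choosing and ordering the steps, and checking what comes out, not hand-executing each one.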
The End of the Beginning
I don’t believe we are at the beginning of the end. I believe we are at the end of the beginning.
The question is no longer whether AI will change our personal and professional lives. Clearly, it already has. The question is: will we use it to amplify human insight, or automate it out of existence?
If you’re in analytics, the work ahead isn’t only about learning new tools. It’s about asking better questions, managing the systems that answer them, keeping the human in the loop, and ensuring the ethical use of this mercurial technology. Because no matter how powerful AI becomes, it will always need someone to ask: “Does this make sense?”
Make sure that someone is you.
Keep Analyzing.