Microsoft stops asking for permission

  • Writer: Nikita Silaech
  • Nov 13
  • 1 min read
Image on Unsplash

For years, the frontier AI playbook was predictable: train a model, license it, and watch others build on top of it. Microsoft's latest move changes that. The company has stood up a superintelligence unit under Mustafa Suleyman, Microsoft's AI CEO, to train advanced models in‑house, and has reworked its OpenAI deal so it can pursue independent R&D without waiting on a partner's roadmap.


Suleyman has stated plainly that Microsoft "needs to be self‑sufficient in AI." That means owning the models, owning the timelines, and owning what happens when things go sideways. Instead of licensing capability from outside labs, Microsoft is now training at scale on its own compute infrastructure, which cuts latency, tightens integration, and removes the dependency risk of sitting downstream from someone else's strategic shifts.


The philosophy behind the effort is "Humanist Superintelligence": building very powerful systems that stay controllable, useful, and accountable to people, rather than open‑ended experiments. The early targets are specific and practical, including medical diagnostics and AI companions, domains where real‑world deployment depends on careful testing before use.


It’s a shift away from vague AGI talk toward projects that can be measured and evaluated in plain sight.


For customers, the benefit is straightforward: product roadmaps now track Microsoft's own engineering cadence instead of a partner's pricing or release decisions. For regulators, accountability is clearer, because a single company owns the model, the deployment, and the safety claims that connect them.


The updated OpenAI terms give Microsoft more room to run independently while preserving access to its partner's work where that still helps.
