When “Working” Is Not Enough Anymore


A subtle shift is underway in how we judge products. It is no longer enough for software to “work” in the narrow sense. Once people get used to AI inside their tools, they start expecting those tools to understand context, anticipate needs, and remove friction almost by default.


Microsoft’s 2025 Work Trend Index is a good place to see this shift in numbers. Based on data from 31,000 workers across 31 countries, it reports that 82% of leaders say this is a pivotal year to rethink key aspects of strategy and operations, and 81% expect AI agents to be moderately or extensively integrated into their company’s AI strategy in the next 12 to 18 months. 82% are confident they will use digital labor to expand workforce capacity over the same period (Microsoft, 2025). That is not the posture of companies dabbling with a new feature. It is the posture of companies assuming AI will be part of how their products and processes are supposed to work.


Once you have worked with tools that summarise a long document into a page, help you draft a response, or surface what matters from a noisy inbox, your tolerance for tools that simply sit there and wait for you drops. A product that only stores files or only routes messages starts to feel like it is not doing its share of the work.


The Microsoft data hints at why people are gravitating to AI in the first place. When workers choose AI over asking a colleague, their top reasons are that it is available 24/7 (42%), offers machine speed and quality (30%), and provides unlimited ideas on demand (28%) (Microsoft, 2025). In other words, people are not primarily using AI to avoid other humans; they are using it because it removes delay and gives them a sense of immediate momentum. That is an expectation‑shaping experience. Once you know a system can respond instantly, it is hard to go back to tools that feel slow or indifferent.


But the story is not as simple as “people love AI, so expectations go up.” A 2025 global study from the University of Melbourne and KPMG, covering more than 48,000 people across 47 countries, found that 66% of respondents use AI regularly and 83% believe AI will deliver a wide range of benefits, yet only 46% say they are willing to trust AI systems (Gillespie et al., 2025). At the same time, 70% believe AI regulation is needed, 66% report relying on AI output without evaluating its accuracy, and 56% say they have made mistakes in their work because of AI (Gillespie et al., 2025). People increasingly rely on AI and expect it to help, while remaining uneasy about how much they can safely hand over.


This is the tension AI‑enabled products are dropping into. On one side, AI is training users to expect more from software: more speed, more guidance, more support. On the other, each visible error, hallucination, or unexplained decision reminds people that there is a cost to leaning too hard on systems they do not fully understand. It is entirely possible to feel dependent on AI and suspicious of it at the same time.


For product teams, that means the bar is moving in two directions at once. It is no longer enough to bolt on an “AI feature” and call it a day. People now notice whether a product is actively reducing their cognitive overhead, by helping them get from question to answer or from intention to action with less effort. But they also notice when the system feels like a black box, or when its confidence does not match its reliability.


Microsoft’s own findings suggest that users are already starting to treat AI as more than a simple command line. In their survey, 52% of respondents said they see AI primarily as a command‑based tool, but 46% said they see it as a thought partner they can have a conversational exchange with to challenge their thinking, brainstorm ideas, or spark creativity (Microsoft, 2025). The moment a product crosses that line, from utility to quasi‑collaborator, the expectations on it change. People begin to ask not just “does this work?” but “does this get me to a better place than I would have reached alone?”


The result is a new kind of baseline. A “normal” product is no longer just one that is stable and feature‑complete. Increasingly, a normal product is one that is responsive, context‑aware, and willing to take the first step on your behalf. A product that merely provides options starts to feel dated next to one that proposes a sensible starting point.


The risk, of course, is that we end up with products that are extremely helpful right up to the moment they are not. That is why the trust numbers from the KPMG study matter so much: high usage, high perceived benefit, low willingness to trust, and a non‑trivial share of people admitting they have already made mistakes because of AI (Gillespie et al., 2025). It is a reminder that “intelligent” behaviour and trustworthy behaviour are not the same thing.


The upshot is that AI is not just giving products new abilities. It is quietly renegotiating the contract between people and software. Users now expect tools to share more of the load while also being more transparent about what they are doing. The companies that take that seriously, designing for both reduced friction and clearer accountability, will feel aligned with where expectations are going. The ones that simply add AI and call it innovation will feel out of step much sooner than they think.


Once you have lived with products that feel like they are genuinely on your side, it becomes very hard to go back to software that only does what it is told.
