By Nick Beaugeard · 5 minute read

Walk into any Australian AI-consulting sales meeting in 2026 and you will hear about Amazon Bedrock, Snowflake Cortex, Databricks and maybe a Google Cloud partnership for good measure. You will hear less about .NET, Azure, and AI Foundry. That is partly fashion and partly path dependence — most of the large consultancies were built on AWS practices and have inertia to match.

Released went the other way. We are Microsoft-first by choice. Here's why.

Where Australian mid-market stacks actually live

Look at a real Australian mid-market client — a funded startup with thirty staff, a fabricator with a hundred, a scale-up with two hundred. The systems they already have on premise or in the cloud are overwhelmingly Microsoft: Microsoft 365, Entra ID, Dynamics or a .NET LOB app, SQL Server somewhere, SharePoint, Power BI. When you say "let's add AI to your operations", the natural integration surface is Microsoft Graph, Entra, and the same identity and governance boundaries they already trust.

Pushing them onto AWS for agentic AI adds a whole second cloud estate to govern. For some clients that is correct (data gravity, partner preference, existing IP). For most it is a distraction. Meeting the client where they already are is faster, cheaper and safer.

What Azure AI Foundry buys us

Azure AI Foundry has consolidated what used to be three or four separate services into a single developer surface: hosted foundation models, vector search, agent orchestration, evaluation tooling and observability. For a senior engineer building a bespoke product, the practical effect is that "wire up a RAG chat surface over a SQL database with Entra-gated access" stops being a two-week integration exercise. It becomes a one-day skeleton plus refinement.
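To make that concrete, here is a minimal sketch of a single RAG-style chat turn in C#. It assumes the Azure.AI.OpenAI 2.x SDK; the endpoint, deployment name and the retrieval step are placeholders, not a prescription:

```csharp
// A single grounded chat turn, authenticated with the client's existing Entra ID.
// Assumptions: Azure.AI.OpenAI 2.x, a deployed model named "gpt-4o",
// and a retrieval step (SQL or vector search) elided to a placeholder string.
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Chat;

// DefaultAzureCredential reuses the same identity boundary the client already governs.
var client = new AzureOpenAIClient(
    new Uri("https://your-resource.openai.azure.com/"),   // placeholder endpoint
    new DefaultAzureCredential());

ChatClient chat = client.GetChatClient("gpt-4o");         // placeholder deployment

// In a real build, this comes from the SQL / vector-search query.
string retrievedRows = "…rows retrieved from the client's database…";

ChatCompletion completion = chat.CompleteChat(
    new SystemChatMessage($"Answer using only this context:\n{retrievedRows}"),
    new UserChatMessage("What were last quarter's top orders?"));

Console.WriteLine(completion.Content[0].Text);
```

The point is not the twenty lines; it is that identity, hosting and model access all sit inside one governance boundary the client already runs.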

The .NET SDK for Azure OpenAI and the evolving Microsoft Agent Framework give us first-class support for long-running agentic workflows hosted on Azure App Service. That matters: agents that fail silently in a background function will lose a client's trust in a week. Azure's observability and hosting story around agentic workloads is maturing fastest on the .NET side, specifically because Microsoft has prioritised the developer experience there.
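The "fail silently" risk is largely an engineering-discipline question, and plain .NET gives you the primitive for it. A minimal pattern (names illustrative, the actual agent call elided — this is a sketch, not our production shape):

```csharp
// A long-running agent loop as a .NET BackgroundService.
// Every step is logged and every failure is surfaced, so Azure's
// observability stack (App Service logs, alerts) can see it.
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public sealed class AgentWorker(ILogger<AgentWorker> logger) : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            try
            {
                logger.LogInformation("Agent step starting");
                // ... invoke the agent / model call here ...
                logger.LogInformation("Agent step completed");
            }
            catch (Exception ex)
            {
                // Never swallow: a silent background failure is how trust dies.
                logger.LogError(ex, "Agent step failed");
            }
            await Task.Delay(TimeSpan.FromMinutes(5), ct); // illustrative cadence
        }
    }
}
```

Registered with `services.AddHostedService<AgentWorker>()`, this runs for the life of the App Service instance and leaves a visible trail when something goes wrong.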

Why we don't spread across clouds

Multi-cloud AI strategies make beautiful slides and terrible delivery outcomes for mid-market clients. Every additional cloud costs the client a governance model, a pipeline, a set of CI runners, a set of secrets — and it costs us a learning tax. We'd rather be exceptional on a single stack than adequate across three.

Opinionated stacks ship faster. The only wrong opinion is one that doesn't match the client's reality.

Where we would tell you to go elsewhere

We are honest about where the opinionated stack fails:

  • You are already deeply committed to AWS and moving would be structurally wrong. Talk to Mantel or DiUS.
  • Your product needs data infrastructure that lives on Snowflake or Databricks and the tradeoffs are settled. Same recommendation.
  • You need a clinical or regulated AI system that must use a specific vendor for assurance reasons (e.g., certain health or finance use cases). Talk to Max Kelsen (Bain) or Intelligrate.

If your stack is Microsoft, or your stack is greenfield, or your stack is mixed but open — we can ship faster on .NET and Azure than any other configuration available to us, because it is the configuration we have deliberately built Symphony to run on. Opinion is faster than neutrality. For a bespoke mid-market build, faster is the whole point.

Microsoft-shop or greenfield? Let's talk.

Thirty minutes with Nick. If your stack is a good fit, we will show you exactly how fast we can move.

Book a free initial meeting