FAQ
Picking an IaC platform
Terraform or Bicep: which should I pick?
Both produce the exact same Azure architecture (identical RGs, networking, security baseline, monitoring, optional firewall/VPN). Pick what your team already knows.
- Terraform: best if you're multi-cloud, already use HashiCorp tooling, or want a single state file you can import and refactor freely. Brings the AVM ecosystem and a mature testing story (terraform test).
- Bicep: best if you're Microsoft-only, want zero extra tooling beyond az, prefer ARM-native deployments tracked in the portal, and don't want to manage a remote state backend.
The deployment wizard asks this as question #1 and emits the matching parameter file + commands.
Can I switch from Terraform to Bicep (or vice versa) later?
Yes, but it's a manual cutover: the two stacks don't share state. The cleanest path:
- Deploy the new stack to the same subscription using a different name_prefix (e.g. contoso2).
- Migrate workloads.
- Tear down the old stack with terraform destroy or az group delete.
Both deployments are idempotent within their own world but neither will adopt resources created by the other.
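A minimal sketch of that cutover in the Terraform-to-Bicep direction (the other direction is symmetric; the namePrefix parameter name and contoso2 prefix are illustrative):

```bash
# 1. Deploy the new stack (here: Bicep) under a fresh prefix in the same subscription
az deployment sub create --location westeurope \
  --template-file infra/bicep/foundation/main.bicep \
  --parameters namePrefix=contoso2   # hypothetical parameter name

# 2. Migrate workloads at your own pace.

# 3. Tear down the old Terraform stack once traffic has moved
terraform destroy
```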
Do I have to pick, or can I use both?
You can, but you shouldn't deploy both into the same subscription with the same name_prefix/region: they'll fight over resource names. Two valid patterns:
- Different subscriptions per IaC (e.g. dev = Bicep, prod = Terraform): fine.
- Same subscription, different prefixes: fine, but doubles your cost.
Architecture & design
Why one root module instead of separate modules per scenario?
You'd duplicate the entire hub-spoke baseline four times. Toggling resources with count = local.use_firewall ? 1 : 0 (Terraform) or if (deployFirewall) (Bicep) is uglier in a single file, but the result reads better as a whole because every shared concept (RGs, Key Vault, peering, DNS) is defined exactly once.
Why Terraform workspaces instead of directories?
Workspaces let the same code produce isolated state files (foundation.<scenario>.tfstate) and keep the CI matrix trivial. Directories would mean copy-pasting the backend config four times. Bicep doesn't need this: each az deployment sub create produces its own deployment record automatically.
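In practice that looks like the following (the scenario name is illustrative):

```bash
# Each workspace keeps its own isolated state file in the backend
terraform workspace new hub-firewall      # first run only
terraform workspace select hub-firewall
terraform plan
terraform apply
```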
Why Firewall Basic instead of Standard?
SMB cost-tier alignment. Standard adds ~$700/mo for higher throughput and threat-intelligence filtering (IDPS and TLS inspection are Premium-tier features), which most SMB workloads don't need on day one. Switching is a one-line change in modules.firewall.tf (Terraform) or modules/firewall.bicep (Bicep).
Why no Bastion / no policy assignments / no management groups in the foundation?
The foundation module is intentionally subscription-scoped. MG / policy / Defender plans live one layer above the landing zone and would prevent the module from running in customer subscriptions where you only have Contributor.
If you do want them, this repo also ships an opt-in, separate module: management-groups. It deploys an ALZ-aligned hierarchy (including the new Local MG from ALZ 2026.04) and lets you pick which built-in policy initiatives to assign; see the policy catalog. It's separate because it needs tenant-root permissions and its own state.
Can I extend this for additional spokes?
Yes. Terraform: add another VNet module call + peering in modules.networking.tf, parameterized by var.additional_spokes = list(object({ name, address_space })). Bicep: add another module spokeVnet '...spoke-vnet.bicep' = [for spoke in spokes: ...] loop. The hub side scales linearly either way.
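As a sketch of the Terraform side, assuming the var.additional_spokes variable described above exists, an extra spoke could be passed at apply time:

```bash
# Hypothetical variable shape: list(object({ name, address_space }))
terraform apply \
  -var 'additional_spokes=[{ name = "app1", address_space = "10.10.0.0/16" }]'
```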
Can I deploy to multiple regions?
Today the foundation deploys to a single region. For a multi-region pattern, deploy the foundation twice with different name_prefix/location combinations and add a hub-to-hub VNet peering yourself. A multi-region root module is on the roadmap.
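A sketch of that manual pattern, with illustrative prefixes, regions, and VNet names (each foundation needs its own state, e.g. separate workspaces):

```bash
# Deploy the foundation once per region (run each from its own workspace/state)
terraform apply -var name_prefix=contoso-weu -var location=westeurope
terraform apply -var name_prefix=contoso-eus -var location=eastus2

# Peer the two hub VNets (repeat with the sides swapped for the reverse direction)
az network vnet peering create --name weu-to-eus \
  --resource-group rg-contoso-weu-network --vnet-name vnet-hub-contoso-weu \
  --remote-vnet "/subscriptions/<sub-id>/resourceGroups/rg-contoso-eus-network/providers/Microsoft.Network/virtualNetworks/vnet-hub-contoso-eus" \
  --allow-vnet-access
```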
Regional gotchas
My region doesn't support availability zones. Will the deploy fail?
No, but it used to. The foundation now detects non-AZ regions (e.g. westcentralus, northcentralus, centralindia, switzerlandwest, australiacentral, canadaeast, ukwest, japanwest) and automatically omits the zones property on Public IPs, the NAT Gateway, Azure Firewall, and the VPN Gateway PIP.
- Terraform: see local.region_supports_zones and local.availability_zones in infra/terraform/foundation/locals.tf.
- Bicep: the same list lives in infra/bicep/foundation/main.bicep as availabilityZones and is passed into the networking, firewall, and vpn modules.
If you deploy to a new region not on the list and hit LocationNotSupportAvailabilityZones, add the region name to that list and re-apply.
Why is the VPN Gateway pinned to VpnGw2AZ?
In May 2026 Microsoft retired the non-AZ VpnGw1-VpnGw5 SKUs (NonAzSkusNotAllowedForVPNGateway). The azurerm provider's validation also dropped VpnGw1AZ, leaving VpnGw2AZ-VpnGw5AZ as the only valid choices. The foundation pins VpnGw2AZ (Generation 2) because it's the cheapest still-supported tier:
- Even in regions without AZs, Azure now requires an AZ SKU. The gateway is created without zones in non-AZ regions, but the SKU name still ends in AZ.
- To upgrade to VpnGw3AZ/VpnGw4AZ/VpnGw5AZ (more bandwidth, more S2S tunnels), edit modules.vpn.tf (Terraform) or modules/vpn.bicep (Bicep).
What about the Firewall: does it need an AZ SKU too?
No. Azure Firewall takes a zones array directly. In non-AZ regions the foundation passes [] and the firewall deploys non-zonal. The same applies to the firewall's three Public IPs.
State, deployments, and history
Where is Terraform state stored?
In the Storage Account created by scripts/bootstrap-state.sh (rg-tfstate-<prefix>-<region>). The state file is foundation.<scenario>.tfstate in the tfstate container. Versioning is enabled on the container so you can roll back.
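To confirm what's there (the account name follows the bootstrap script's naming; adjust to your prefix):

```bash
# List the per-scenario state files in the tfstate container
az storage blob list \
  --account-name <tfstate-storage-account> \
  --container-name tfstate \
  --query "[].name" -o tsv
```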
Where is Bicep "state"?
Bicep doesn't have explicit state: every az deployment sub create posts an ARM template and Azure tracks the result in the subscription's deployment history (visible in Subscription → Deployments). To inspect: az deployment sub list -o table and az deployment sub show --name <name>. Deployment history is retained for ~90 days by default.
What happens if a deployment fails halfway?
Terraform: state records what succeeded; rerun apply and it picks up where it left off. Bicep: ARM stops at the failing resource (there is no automatic rollback unless you opt in); everything that succeeded stays. Rerun az deployment sub create and ARM will skip resources that already match the template.
Does terraform destroy clean everything up?
Almost. Soft-deleted Key Vaults remain for 7 days (and block name reuse). See Day-2 operations for az keyvault purge. The state RG (rg-tfstate-*) is intentionally not Terraform-managed â delete manually after customer offboarding.
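A sketch of that Key Vault cleanup (the vault name is illustrative; purging is irreversible):

```bash
# See which vaults are still soft-deleted and blocking name reuse
az keyvault list-deleted -o table

# Permanently remove one so the name can be reused
az keyvault purge --name kv-contoso-prod
```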
How do I tear down a Bicep deployment?
Bicep has no destroy. Delete the resource groups directly: az group delete --name <rg> --yes --no-wait for each of the 6 RGs. See Day-2 operations for the full loop.
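For example, with illustrative RG names (substitute the six groups your prefix actually produced):

```bash
for rg in rg-contoso-network rg-contoso-security rg-contoso-monitoring \
          rg-contoso-identity rg-contoso-shared rg-contoso-connectivity; do
  az group delete --name "$rg" --yes --no-wait
done
```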
CI/CD & GitHub Actions
How do I set up the OIDC service principal?
Full step-by-step is on the CI/CD pipeline page (steps 2 and 3). One service principal works for all 8 workflows.
Do I need separate Azure credentials for Terraform vs Bicep?
No: the same Entra app registration / OIDC identity works for both. The only difference is which GitHub environment the apply runs under (prod for Terraform, apply for Bicep), and you add a federated credential per environment.
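A sketch of adding one federated credential per environment (the app ID, org/repo, and environment names are placeholders):

```bash
az ad app federated-credential create --id <app-object-id> --parameters '{
  "name": "github-prod",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:<org>/<repo>:environment:prod",
  "audiences": ["api://AzureADTokenExchange"]
}'
# Repeat with subject ...:environment:apply for the Bicep workflows
```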
Can the wizard's output be committed to a public repo?
Subscription IDs and tenant IDs are not secrets on their own (subscription IDs appear in every resource ID), but they're useful for phishing reconnaissance, so treat them as internal. Best practice: keep *.auto.tfvars and *.bicepparam out of public repos, or use a private repo. The wizard reminds you of this on the result page.
How do I update AVM module versions (Terraform)?
Dependabot opens a weekly PR (.github/dependabot.yml). The plan workflow validates the new versions against all four scenarios before merge.
How do I update AVM-Bicep versions?
Bicep modules pin versions in their br/public:avm/... references. To bump, search for the module path in infra/bicep/foundation/modules/*.bicep, change the version tag, run az bicep build locally to validate, and open a PR; bicep-plan.yml will run what-if for you.
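For example (the grep target is just the shared AVM path prefix; exact module paths vary):

```bash
# Find every pinned AVM reference
grep -rn "br/public:avm/" infra/bicep/foundation/modules/

# After bumping a version tag, confirm the template still compiles
az bicep build --file infra/bicep/foundation/main.bicep
```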
Operations
How do I add diagnostic settings?
The Log Analytics workspace is created but not wired up by default (to avoid hitting the per-resource diagnostic settings cap). Add them in diagnostics.tf (Terraform) or a new modules/diagnostics.bicep (Bicep). See Day-2 operations for which categories to enable per resource type.
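If you want to prototype a setting before codifying it in diagnostics.tf, a minimal CLI sketch (resource IDs are placeholders; check which log categories each resource type supports):

```bash
az monitor diagnostic-settings create \
  --name send-to-law \
  --resource <firewall-or-other-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```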
Can I add custom resources alongside this?
Yes: both stacks are designed to share the subscription. Deploy your own RGs and resources in the same subscription; just don't reuse the <prefix> slug for resources you don't want this module to manage.
Where do I report bugs?
GitHub: azure-launchpad/issues.