@arnaudlh Best practices - Design pattern for referencing resources within CAF. For example:
private_dns_zone_id = try(
local.combined_objects_private_dns[each.value.private_dns_zone.lz_key][each.value.private_dns_zone.key].id,
local.combined_objects_private_dns[local.client_config.landingzone_key][each.value.private_dns_zone.key].id,
each.value.private_dns_zone.id,
null
)
Q1. For local & lower landing zones, we are using remote_objects = {}. But when do we use local.combined_objects vs local.<resource>?
Q2. Do we have a pattern that works for all scenarios, i.e. whether the referenced object exists in remote state, local state, a local code block, or as an external resource?
Q3. Are we supporting patterns such as vnet_key, switching to vnet { key = "" lz_key = "" }, or supporting both? A sketch of the two styles is below.
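To make Q3 concrete, the two styles would look something like this (key names are made up):
```hcl
# style 1: flat key, resolved in the current landing zone only
vnet_key = "spoke_vnet"

# style 2: object form, which can also point at another landing zone's state
vnet = {
  key    = "spoke_vnet"
  lz_key = "networking"
}
```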
Hi, when I deploy an Access Policy in a Key Vault it appears as Unknown, instead of Application
creation_policies = {
logged_in_user = {
secret_permissions = ["Set", "Get", "List", "Delete", "Purge"]
certificate_permissions = ["managecontacts", "manageissuers"]
}
azuread_application = {
object_id = "c432f4fc-a70e-4833-9d8e-fa44bcexxxxx"
secret_permissions = ["Set", "Get", "List", "Delete", "Purge", "Recover"]
}
}
How do I get it created as an Application instead of Unknown? Is it possible to use a function app key name instead of the object_id?
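For what it's worth, the policies.tf quoted further down this thread resolves an azuread_app_key, so I assume the shape would be something like this (key names are made up):
```hcl
creation_policies = {
  my_app = {
    # assumption: azuread_app_key points at an app declared elsewhere in the
    # configuration, so the policy is created for its service principal
    # and shows up as Application rather than Unknown
    azuread_app_key    = "my_function_app"
    secret_permissions = ["Set", "Get", "List", "Delete", "Purge", "Recover"]
  }
}
```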
Hey, we're trying to upgrade our rover version from aztfmod/rover:1.0.4-2108.1305 to the latest version: aztfmod/rover:1.0.11-2112.0809
We're getting a strange error which we can't quite figure out. It's worth noting that we haven't changed any configuration code; we're simply changing the image version:
Error on or near line 391: Error running terraform plan; exiting with status 1
cleanup variables
clean_up backend_files
##[error]Bash exited with code '1'.
##[error]Bash wrote one or more lines to the standard error stream.
##[error]WARNING: The command requires the extension resource-graph. It will be installed first.
##[error]
Error: Error loading state error
with data.terraform_remote_state.remote["launchpad"],
on locals.remote_tfstates.tf line 19, in data "terraform_remote_state" "remote":
19: backend = var.landingzone.backend_type
error loading the remote state: blobs.Client#Get: Failure responding to
request: StatusCode=403 -- Original Error: autorest/azure: Service returned
an error. Status=403 Code="AuthorizationPermissionMismatch" Message="This
request is not authorized to perform this operation using this
permission.\nRequestId:xxxxxxxxxxxxxxxxxxxxxxx\nTime:2022-01-12T08:30:52.9562577Z"
Has anybody else come across this issue?
Hello all, please forgive me if this has already been answered, but I am not finding in the examples the proper way to assign group owners... I have what I believe to be a legitimate configuration for the group:
azuread_groups = {
architects = {
name = "AZ Group 1"
description = "a super cool group"
members = {
user_principal_names = [
"me@mycompany.com"
]
group_names = []
object_ids = []
group_keys = []
service_principal_keys = []
}
owners = {
user_principal_names = [
"somebody_else@mycompany.com"
]
}
prevent_duplicate_name = true
}
}
I can see that the member is getting applied correctly... but the owner object id in the plan is always the person running the command. Anyone know what I am doing wrong here?
Thanks in advance!
Hi, just wanted to ask if anyone else has come across the same issue. I have tried to deploy https://github.com/Azure/caf-terraform-landingzones-starter/tree/starter/configuration/sandpit/level1/gitops/azure_devops and get the below error when running PLAN: on /home/vscode/.terraform.cache/modules/caf/modules/security/keyvault_access_policies/policies.tf line 12, in module "azuread_apps":
│ 12: object_id = var.azuread_apps[try(try(each.value.azuread_app_lz_key, each.value.lz_key),var.client_config.landingzone_key)][each.value.azuread_app_key].azuread_service_principal.object_id
│ ├────────────────
│ │ each.value is object with 3 attributes
│ │ each.value.lz_key is "launchpad"
│ │ var.azuread_apps is object with 1 attribute "azdo-contoso"
│ │ var.client_config.landingzone_key is "azdo-contoso"
│
│ The given key does not identify an element in this collection value.
Hello forum. I ran into a similar error to the one quoted here. The proposed solution by Luke, setting azure_devops to "level1" and the launchpad level to "lower", is already incorporated in the new stable release. Has anyone experienced this? I have been trying to customize the starter template a bit to suit my needs by commenting out the deployments to level3 and level4, which I do not need; I don't know if that has anything to do with this error. Thanks in advance for any tips or advice.
Evening all, hopefully someone can help me here. I've been using Terraform since the early days, probably started with v0.8 and loved it from day one. Deployed to AWS, Azure, VMWare and various other providers in all sorts of environments. So I'm very comfortable with it.
However, I'm struggling with the CAF. Not necessarily the actual module (it's complex, but I know when I sit down and work through it I'll be fine). The issue is the actual implementation of it for real-world deployments. I'm really struggling to follow all the different repos and examples. It seems there's always some other example or repo lurking that only serves to make me 'unlearn what I have just learnt'.
Is anyone aware of any good content that brings all this together and may help bridge the gap? I've not been able to put my finger on why I'm struggling having this all sink in.
Thanks all.....
Hello everyone, I have a question I hope someone can help me out with. Our environment does not use Rover.
I created the azuread group in a shared.tfstate file. But for different environments stored in env.tfstate I'd like to create the role assignments and pass the group_key. I want to reference this group through terraform_remote_state. I followed the pattern in this doc: https://github.com/Azure/caf-terraform-landingzones/blob/master/documentation/code_architecture/service_composition.md
I added the data "terraform_remote..." block and the local variable for azuread_groups in my main.tf. But when I try to reference the group in my env.tfvar, it doesn't know about it. Any help would be appreciated.
Thanks in advance!
azuread_groups = {
group_key = {
id = "data.terraform_remote_state.shared.outputs.objects.test.azuread_groups.group_key.id"
}
}
#shared.tf
azuread_groups = {
group_key = {
name = "Test Group"
description = "Test Group"
members = {
user_principal_names = []
group_names = []
object_ids = []
group_keys = []
service_principal_keys = []
}
owners = {
user_principal_names = []
service_principal_keys = []
}
prevent_duplicate_name = false
}
}
# main.tf
data "terraform_remote_state" "shared" {
backend = "azurerm"
config = {
...
key = "shared.tfstate"
}
}
#env.tf
locals {
  ...
  azuread_groups = data.terraform_remote_state.shared.outputs.azuread_groups
  ...
}
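A minimal sketch of the wiring I am trying to achieve — since a .tfvars file only takes literal values, the quoted data.terraform_remote_state string above stays a plain string, so I assume the object has to be resolved in HCL and passed in (the remote_objects input is an assumption based on the pattern mentioned at the top of this thread):
```hcl
# main.tf (sketch; module name and remote_objects input are assumptions)
module "caf" {
  source = "aztfmod/caf/azurerm"
  # ...

  # resolved here in HCL, not in the .tfvars file
  remote_objects = {
    azuread_groups = data.terraform_remote_state.shared.outputs.azuread_groups
  }
}
```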
Hi, I've stood up the CAF demo environment through Level 2 and used the 100-single-linux-vm to stand a VM up in Level 3. I then wanted to tear down the level 3 vm, change some stuff and re-run it. When I ran the rover destroy process I received errors from KeyVault.
Error: purging of Secret "xyzzy-vm-examplevm1-ssh-public-key-openssh" (Key Vault "https://xyzzy-kv-vmlinuxakv.vault.azure.net/") : keyvault.BaseClient#PurgeDeletedSecret: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="Operation \"purge\" is not allowed because purge protection is enabled for this vault. Key Vault service will automatically purge it after the retention period has passed.\r\nVault: xyzzy-kv-vmlinuxakv;location=eastus2"
What is the proper recovery process when an error like this occurs? Also is this an error that should be reported somewhere?
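For context, the 403 is the vault's purge protection doing its job; a sketch of the settings involved, assuming the CAF keyvault block passes these azurerm_key_vault arguments through:
```hcl
keyvaults = {
  vmlinuxakv = {
    # while purge protection is enabled, "purge" calls are rejected; deleted
    # secrets are only removed after the soft-delete retention window expires
    purge_protection_enabled   = true
    soft_delete_retention_days = 7
  }
}
```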
Hi, I have a question regarding ip_groups scope and landing zones. See my issue below:
lz_1
  ip_groups.tfvars
lz_2
  firewall_policies.tfvars
firewall_policies.tfvars contains a list of firewall rules.
Can I reference source_ip_groups_keys with keys defined in lz_1, or do keys only work within the same landing zone?
I have tried to add the lz1 tfstate to the lz_2 landingzone block, but it still can't recognize the group key defined in lz_1.
// lz_2/configuration.tfvars
landingzone = {
backend_type = "azurerm"
global_settings_key = "foundations"
level = "level2"
key = "lz_2"
tfstates = {
lz1 = {
level = "current"
tfstate = "lz1.tfstate"
}
}
}
Hi everyone,
what could cause the problem that I am not able to access global_settings from level0 in the higher level1 / eslz?
│ Error: Unsupported attribute
│
│ on enterprise_scale.tf line 10, in module "enterprise_scale":
│ 10: default_location = local.global_settings.regions[local.global_settings.default_region]
│ ├────────────────
│ │ local.global_settings is object with no attributes
│
│ This object does not have an attribute named "regions".
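For comparison, the other landingzone blocks in this thread declare a global_settings_key naming the tfstates entry that carries global_settings; a sketch under that assumption (key and file names are made up):
```hcl
landingzone = {
  backend_type        = "azurerm"
  # assumption: must match the tfstates entry whose state holds global_settings
  global_settings_key = "launchpad"
  level               = "level1"
  key                 = "eslz"
  tfstates = {
    launchpad = {
      level   = "lower"
      tfstate = "caf_launchpad.tfstate"
    }
  }
}
```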
Hi, I wrote our image gallery module in level1. It is working fine. We are using our own hardened RHEL images.
The output in the tfstate file is as follows:
"shared_image_gallery": {
"value": {
"rhel84": {
"id": "/subscriptions/xxxxxx/resourceGroups/mcdta-rg-image-gallery-ocpe/providers/Microsoft.Compute/galleries/RedHat/images/RHEL_8/versions/8.4.0"
}
The id of the image is used to create a virtual machine in level2.
A snippet of the tfvars file of the virtual machine:
os_disk = {
name = "idm1-os"
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
disk_size_gb = "40"
disk_encryption_set_key = "set1"
}
custom_image_ids = {
lz_key = "shared_image_gallery"
custom_image_key = "rhel84"
}
And the config to read the lower tfstate file.
landingzone = {
backend_type = "azurerm"
global_settings_key = "management"
level = "level2"
key = "identity_virtual_host"
tfstates = {
identity_network = {
level = "current"
tfstate = "identity_network.tfstate"
}
shared_image_gallery = {
level = "lower"
tfstate = "shared_image_gallery.tfstate"
}
}
}
Unfortunately, the custom_image_ids block is not handled properly in the module. I think the problem is in the virtual machine module, which looks like this:
source_image_id = try(each.value.custom_image_id,var.custom_image_ids[each.value.lz_key][each.value.custom_image_key].id, null)
If the module terraform-azurerm-caf/modules/compute/virtual_machine/vm_linux.tf is modified as follows, will the variable from the lower tfstate file be retrieved correctly? Or are we missing a landingzone_key?
source_image_id = try(each.value.custom_image_id,try(var.custom_image_ids[var.client_config.landingzone_key][each.value.lz_key][each.value.custom_image_key].id,var.custom_image_ids[each.value.lz_key][each.value.custom_image_key].id))
Nevertheless, if custom_image_id is set to the full /subscriptions/.. id from the Azure configuration, everything works properly. But then we have hard-coded a static value into a variable configuration.
How can the module be adapted?
It would help us a lot.
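For readability, here is the fallback chain in question, following the combined-objects lookup order from the top of this thread — whether the current landing zone or lz_key should be tried first is exactly the open question:
```hcl
source_image_id = try(
  # 1. full resource id provided inline
  each.value.custom_image_id,
  # 2. key lookup in the landing zone named by lz_key (remote state)
  var.custom_image_ids[each.value.lz_key][each.value.custom_image_key].id,
  # 3. key lookup in the current landing zone
  var.custom_image_ids[var.client_config.landingzone_key][each.value.custom_image_key].id,
  null
)
```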
Hi there,
After deploying launchpad level0 via rover ignite from contoso-2201, it fails with:
Apply complete! Resources: 146 added, 0 changed, 0 destroyed.
Outputs:
diagnostics = <sensitive>
global_settings = <sensitive>
launchpad_identities = <sensitive>
objects = <sensitive>
tfstates = <sensitive>
Terraform apply return code: 0
@calling get_storage_id
@calling upload_tfstate
Moving launchpad to the cloud
ERROR: argument --ids: expected at least one argument
Examples from AI knowledge base:
az storage account show --ids /subscriptions/{SubID}/resourceGroups/{ResourceGroup}/providers/Microsoft.Storage/storageAccounts/{StorageAccount}
Show properties for a storage account by resource ID.
az storage account show --resource-group MyResourceGroup --name MyStorageAccount
Show properties for a storage account using an account name and resource group.
https://docs.microsoft.com/en-US/cli/azure/storage/account#az_storage_account_show
Read more about the command in reference docs
Error on or near line 142; exiting with status 1
Error on or near line 142; exiting with status 1
@calling clean_up_variables
cleanup variables
clean_up backend_files
vscode@7b5da0201736:/tf/caf/$
My details:
aztfmod/rover:1.1.3-2201.2106
When I execute the rover command again it fails again.
Any idea what I'm missing, or is this error known?
```
(base) welcome@Traianos-MacBook-Pro landingzones % rover landingzone list -level level0
error: Found argument 'landingzone' which wasn't expected, or isn't valid in this context
USAGE:
rover [FLAGS] [OPTIONS] <SUBCOMMAND>
For more information try --help
```
```
version: aztfmod/rover:1.0.11-2201.2106
cat: /tf/caf/.devcontainer/docker-compose.yml: No such file or directory
The version of your local devcontainer rover:1.0.11-2201.2106 does not match the required version .
Click on the Dev Container buttom on the left bottom corner and select rebuild container from the options.
```
rover login -t xxxxxxxx -s yyyyyyyyyyyyyy
Initializing the backend...
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
╷
│ Error: Failed to get existing workspaces: containers.Client#ListBlobs: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationPermissionMismatch" Message="This request is not authorized to perform this operation using this permission.\nRequestId:01a538f8-201e-004a-214f-1c5e3d000000\nTime:2022-02-07T18:18:41.2116142Z"
│
│
╵
Error on or near line 211; exiting with status 1
Hi All,
Problem solved. Never use tags like these with a new storage account:
tags = {
  # Those tags must never be changed while set as they are used by the rover to locate the launchpad and the tfstates.
  tfstate     = "level1"
  environment = "mcdta"
  launchpad   = "launchpad"
}
A quick copy and paste will never be good.
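What I should have used — custom tags only, leaving the reserved ones to the rover (tag names below are made up):
```hcl
tags = {
  # safe: the rover sets tfstate/environment/launchpad itself
  project = "identity"
  owner   = "platform-team"
}
```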