Hi, I've stood up the CAF demo environment through Level 2 and used the 100-single-linux-vm example to stand up a VM in Level 3. I then wanted to tear down the Level 3 VM, make some changes, and re-run it. When I ran the rover destroy process, I received errors from Key Vault:
Error: purging of Secret "xyzzy-vm-examplevm1-ssh-public-key-openssh" (Key Vault "https://xyzzy-kv-vmlinuxakv.vault.azure.net/") : keyvault.BaseClient#PurgeDeletedSecret: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="Operation \"purge\" is not allowed because purge protection is enabled for this vault. Key Vault service will automatically purge it after the retention period has passed.\r\nVault: xyzzy-kv-vmlinuxakv;location=eastus2"
What is the proper recovery process when an error like this occurs? Also, is this an error that should be reported somewhere?
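Would adjusting the provider features block be the right direction here? A minimal sketch, assuming the azurerm provider configuration used for the landing zone can be influenced (newer provider versions also expose per-object flags such as purge_soft_deleted_secrets_on_destroy):

# Sketch only: tell the azurerm provider not to attempt the purge, so destroy
# just soft-deletes the secrets and Azure purges them automatically once the
# retention period has passed.
provider "azurerm" {
  features {
    key_vault {
      purge_soft_delete_on_destroy = false
    }
  }
}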
Hi, I have a question regarding ip_groups scope and landing zones. See my issue below:
lz_1
ip_groups.tfvars
lz_2
firewall_policies.tfvars
firewall_policies.tfvars contains a list of firewall rules.
Can I reference source_ip_groups_keys with keys defined in lz_1, or do keys only work within the same landing zone?
I have tried to add the lz1 tfstate to the lz_2 landingzone block, but it still can't recognize the group key defined in lz_1:
// lz_2/configuration.tfvars
landingzone = {
backend_type = "azurerm"
global_settings_key = "foundations"
level = "level2"
key = "lz_2"
tfstates = {
lz1 = {
level = "current"
tfstate = "lz1.tfstate"
}
}
}
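For clarity, what I was hoping to write in firewall_policies.tfvars is roughly the following. This is a hypothetical sketch only, and the attribute names (in particular whether a rule accepts an lz_key) need to be verified against the firewall policy rule schema in the CAF module:

# Hypothetical sketch only. The intent is that lz_key in the rule matches the
# tfstates key declared above ("lz1"), so source_ip_groups_keys would resolve
# against the ip_groups defined in lz_1 rather than the current landing zone.
rule_from_lz1 = {
  name                  = "allow-from-lz1-group"
  action                = "Allow"
  lz_key                = "lz1"
  source_ip_groups_keys = ["my_ip_group"] # placeholder key from lz_1/ip_groups.tfvars
  destination_addresses = ["*"]
  destination_ports     = ["443"]
  protocols             = ["TCP"]
}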
Hi everyone,
What could cause the problem that I am not able to access global_settings from level0 in the higher level1 / eslz?
│ Error: Unsupported attribute
│
│ on enterprise_scale.tf line 10, in module "enterprise_scale":
│ 10: default_location = local.global_settings.regions[local.global_settings.default_region]
│ ├────────────────
│ │ local.global_settings is object with no attributes
│
│ This object does not have an attribute named "regions".
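For reference, this is the shape of landingzone block I would expect to work in the level1 configuration; the tfstate name caf_launchpad.tfstate and the key launchpad are assumptions from the starter layout, so adjust them to your setup:

# Sketch: global_settings_key must point at the tfstates entry that carries the
# launchpad's global_settings output, and that entry has to be listed with
# level = "lower"; otherwise local.global_settings resolves to an empty object.
landingzone = {
  backend_type        = "azurerm"
  global_settings_key = "launchpad"
  level               = "level1"
  key                 = "eslz"
  tfstates = {
    launchpad = {
      level   = "lower"
      tfstate = "caf_launchpad.tfstate"
    }
  }
}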
Hi, I wrote our image gallery module in level1. It is working fine. We are using our own hardened RHEL images.
The output in the tfstate file is as follows:
"shared_image_gallery": {
"value": {
"rhel84": {
"id": "/subscriptions/xxxxxx/resourceGroups/mcdta-rg-image-gallery-ocpe/providers/Microsoft.Compute/galleries/RedHat/images/RHEL_8/versions/8.4.0"
}
The id of the image is used to create a virtual machine in level2.
A snippet of the tfvars file of the virtual server:
os_disk = {
name = "idm1-os"
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
disk_size_gb = "40"
disk_encryption_set_key = "set1"
}
custom_image_ids = {
lz_key = "shared_image_gallery"
custom_image_key = "rhel84"
}
And the config to read the lower tfstate file.
landingzone = {
backend_type = "azurerm"
global_settings_key = "management"
level = "level2"
key = "identity_virtual_host"
tfstates = {
identity_network = {
level = "current"
tfstate = "identity_network.tfstate"
}
shared_image_gallery = {
level = "lower"
tfstate = "shared_image_gallery.tfstate"
}
}
}
Unfortunately, custom_image_ids is not handled properly in the module. I think the problem is in the virtual machine module, which looks like this:
source_image_id = try(each.value.custom_image_id,var.custom_image_ids[each.value.lz_key][each.value.custom_image_key].id, null)
If the module terraform-azurerm-caf/modules/compute/virtual_machine/vm_linux.tf is modified as follows, will the variable from the lower tfstate file be retrieved correctly? Are we missing a landingzone_key?
source_image_id = try(
  each.value.custom_image_id,
  try(
    var.custom_image_ids[var.client_config.landingzone_key][each.value.lz_key][each.value.custom_image_key].id,
    var.custom_image_ids[each.value.lz_key][each.value.custom_image_key].id
  )
)
Nevertheless, if custom_image_id is used with the full id (/subscription/..) from the Azure configuration, everything works properly. But then we have added a static value to a variable configuration.
How can the module be adapted? It would help us a lot.
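For reference, our expectation is that after adding the lower tfstate, var.custom_image_ids ends up keyed by the landing zone key, roughly like this (illustrative only; we have not confirmed the actual structure the solution module builds):

# Illustrative structure we expect var.custom_image_ids to carry at plan time,
# if the lower tfstate's shared_image_gallery output were merged in under its
# landing zone key ("shared_image_gallery" from the tfstates block above):
custom_image_ids = {
  shared_image_gallery = {
    rhel84 = {
      id = "/subscriptions/xxxxxx/resourceGroups/mcdta-rg-image-gallery-ocpe/providers/Microsoft.Compute/galleries/RedHat/images/RHEL_8/versions/8.4.0"
    }
  }
}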
Hi there,
After deploying launchpad level0 via rover ignite from contoso-2201, it fails with:
Apply complete! Resources: 146 added, 0 changed, 0 destroyed.
Outputs:
diagnostics = <sensitive>
global_settings = <sensitive>
launchpad_identities = <sensitive>
objects = <sensitive>
tfstates = <sensitive>
Terraform apply return code: 0
@calling get_storage_id
@calling upload_tfstate
Moving launchpad to the cloud
ERROR: argument --ids: expected at least one argument
Examples from AI knowledge base:
az storage account show --ids /subscriptions/{SubID}/resourceGroups/{ResourceGroup}/providers/Microsoft.Storage/storageAccounts/{StorageAccount}
Show properties for a storage account by resource ID.
az storage account show --resource-group MyResourceGroup --name MyStorageAccount
Show properties for a storage account using an account name and resource group.
https://docs.microsoft.com/en-US/cli/azure/storage/account#az_storage_account_show
Read more about the command in reference docs
Error on or near line 142; exiting with status 1
Error on or near line 142; exiting with status 1
@calling clean_up_variables
cleanup variables
clean_up backend_files
vscode@7b5da0201736:/tf/caf/$
My details:
aztfmod/rover:1.1.3-2201.2106
When I execute the rover command again, it fails again.
Any idea what I'm missing, or is this a known error?
```
(base) welcome@Traianos-MacBook-Pro landingzones % rover landingzone list -level level0
error: Found argument 'landingzone' which wasn't expected, or isn't valid in this context
USAGE:
rover [FLAGS] [OPTIONS] <SUBCOMMAND>
For more information try --help
```
```
version: aztfmod/rover:1.0.11-2201.2106
cat: /tf/caf/.devcontainer/docker-compose.yml: No such file or directory
The version of your local devcontainer rover:1.0.11-2201.2106 does not match the required version .
Click on the Dev Container button on the left bottom corner and select rebuild container from the options.
```
rover login -t xxxxxxxx -s yyyyyyyyyyyyyy
Initializing the backend...
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
╷
│ Error: Failed to get existing workspaces: containers.Client#ListBlobs: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationPermissionMismatch" Message="This request is not authorized to perform this operation using this permission.\nRequestId:01a538f8-201e-004a-214f-1c5e3d000000\nTime:2022-02-07T18:18:41.2116142Z"
│
│
╵
Error on or near line 211; exiting with status 1
Hi All,
Problem solved. Never use tags like these with a new storage account:
tags = {
# Those tags must never be changed while set as they are used by the rover to locate the launchpad and the tfstates.
tfstate = "level1"
environment = "mcdta"
launchpad = "launchpad"
}
A quick copy and paste is never a good idea.
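In other words, a plain storage account should keep those discovery tags off entirely. A hypothetical example of what I mean (attribute names to be checked against the CAF storage_accounts variable):

# Hypothetical example: an ordinary, non-launchpad storage account. The point is
# simply that the tfstate / environment / launchpad tags are left off, so the
# rover cannot mistake this account for the launchpad state store.
storage_accounts = {
  appdata = {
    name                     = "appdata"
    resource_group_key       = "data"
    account_tier             = "Standard"
    account_replication_type = "LRS"
    tags = {
      project = "demo"
    }
  }
}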
Hello,
We are deploying the multi-subscription contoso sample of enterprise scale based on the 2112.int branch (/templates/enterprise-scale/contoso/platform).
We've hit a wall when trying to rover plan the eslz step after running management:
Error: Invalid index
│
│ on archetype_config_overrides.tf line 98, in locals:
│ 98: for key, value in param_value : key => local.caf[value.output_key][value.lz_key][value.resource_type][value.resource_key][value.attribute_key]
│ ├────────────────
│ │ local.caf is object with 8 attributes
│ │ value.lz_key is "management"
│ │ value.output_key is "diagnostics"
│ │ value.resource_key is "eastus2logs"
│ │ value.resource_type is "log_analytics"
│
│ The given key does not identify an element in this collection value.
Our override looks like this:
logAnalytics:
lz_key: management
output_key: diagnostics
resource_type: log_analytics
resource_key: eastus2logs
attribute_key: id
based on the sample that ships with contoso, which looks like this:
logAnalytics:
lz_key: management
output_key: diagnostics
resource_type: log_analytics
resource_key: central_logs_sea
attribute_key: id
The only difference is the name of the resource key, which we changed in management.yaml. My gut feeling is that Terraform isn't able to resolve the id of the LA workspace we deployed to management, but I can't understand why. Our LA workspace is in a resource group named management, in the management sub, with the name eastus2logs.
Any ideas?
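For context, my understanding is that resource_key has to match the map key under log_analytics in the management landing zone's diagnostics output, i.e. the key used in the management configuration, not the resource group or the workspace's Azure name. Assuming management uses a diagnostic_log_analytics map like the contoso sample, something along these lines (reconstructed for illustration; names may differ from our actual file):

# Hypothetical snippet from the management configuration. The map key
# ("eastus2logs" here) is what the archetype override resolves as resource_key;
# it is independent of the resource group name or the workspace display name.
diagnostic_log_analytics = {
  eastus2logs = {
    name               = "eastus2logs"
    region             = "region1"
    resource_group_key = "management"
  }
}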
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the
last "terraform apply":
# module.solution.module.diagnostic_event_hub_namespaces_diagnostics["mylogs"].azurerm_monitor_diagnostic_setting.diagnostics["mylogs"] has been changed
~ resource "azurerm_monitor_diagnostic_setting" "diagnostics" {
id = "/subscriptions/2cf1acf7-2536-4dae-8893-867367e0e202/resourceGroups/xxo01-rg-mgmt/providers/Microsoft.EventHub/namespaces/xxo01-ehn-logs|operational_logs_and_metrics"
name = "operational_logs_and_metrics"
# (2 unchanged attributes hidden)
+ log {
+ category = "ApplicationMetricsLogs"
+ enabled = false
+ retention_policy {
+ days = 0
+ enabled = false
}
}
+ log {
+ category = "RuntimeAuditLogs"
+ enabled = false
+ retention_policy {
+ days = 0
+ enabled = false
}
}
# (8 unchanged blocks hidden)
}
# module.solution.module.keyvaults["secrets"].module.diagnostics.azurerm_monitor_diagnostic_setting.diagnostics["mylogs"] has been changed
~ resource "azurerm_monitor_diagnostic_setting" "diagnostics" {
id = "/subscriptions/2cf1acf7-2536-4dae-8893-867367e0e202/resourceGroups/xxo01-rg-mgmt/providers/Microsoft.KeyVault/vaults/xxo01-kv-mgmtsecrets|eh_logs_and_metrics"
name = "eh_logs_and_metrics"
# (3 unchanged attributes hidden)
+ log {
+ category = "AzurePolicyEvaluationDetails"
+ enabled = false
+ retention_policy {
+ days = 0
+ enabled = false
}
}
# (2 unchanged blocks hidden)
}
# module.solution.module.keyvaults["secrets"].module.diagnostics.azurerm_monitor_diagnostic_setting.diagnostics["mylogs"] has been changed
~ resource "azurerm_monitor_diagnostic_setting" "diagnostics" {
id = "/subscriptions/2cf1acf7-2536-4dae-8893-867367e0e202/resourceGroups/xxo01-rg-mgmt/providers/Microsoft.KeyVault/vaults/xxo01-kv-mgmtsecrets|operational_logs_and_metrics"
name = "operational_logs_and_metrics"
# (2 unchanged attributes hidden)
+ log {
+ category = "AzurePolicyEvaluationDetails"
+ enabled = false
+ retention_policy {
+ days = 0
+ enabled = false
}
}
# (2 unchanged blocks hidden)
}
Unless you have made equivalent changes to your configuration, or ignored the
relevant attributes using ignore_changes, the following plan may include
actions to undo or respond to these changes.
─────────────────────────────────────────────────────────────────────────────
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# module.solution.module.diagnostic_event_hub_namespaces_diagnostics["mylogs"].azurerm_monitor_diagnostic_setting.diagnostics["mylogs"] will be updated in-place
~ resource "azurerm_monitor_diagnostic_setting" "diagnostics" {
id = "/subscriptions/2cf1acf7-2536-4dae-8893-867367e0e202/resourceGroups/xxo01-rg-mgmt/providers/Microsoft.EventHub/namespaces/xxo01-ehn-logs|operational_logs_and_metrics"
name = "operational_logs_and_metrics"
# (2 unchanged attributes hidden)
- log {
- category = "ApplicationMetricsLogs" -> null
- enabled = false -> null
- retention_policy {
- days = 0
Hi there,
This is my first actual question here; I'm happy to see such an active and public community around this project.
I just rolled out the caf launchpad with a modified 200 scenario.
Additionally, I rolled out azure devops v1 with a modified contoso example.
Those things were applied to a new and clean subscription.
Also important for my question: yes, we have a Microsoft Customer Agreement, so we are able to automate subscription handling.
Now I am looking into deploying level1 based on caf-starter, branch contoso-2201.
As I understand the docs, level1 can be deployed to a separate subscription, which I would like to do.
But how can we create this subscription automatically?
I saw the subscriptions.tfvars file in the launchpad 200 scenario, but it does not contain an example for additional subscriptions.
I can see that the called module has the capability to create subscriptions, but I'm just not able to find an example of that.
As I further understand the docs, level2 can also be applied to a separate subscription. I guess this subscription will be created the same way as the level1 subscription, using the launchpad. Am I correct in assuming this?
As I further understand the docs, level3 (workload-specific) landing zones can also be deployed to separate subscriptions. Is there another place to create those subscriptions other than the launchpad? It feels strange to me to create app-specific subscriptions in level0, assign them to management groups in level1, potentially add them to routing in level2, and then use them in a level3 repo. Is there another, more streamlined way to do that, or am I completely wrong with my assumptions?
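For context, the underlying resource I would expect the launchpad to drive when creating those subscriptions is roughly the following (placeholder values; the CAF subscriptions tfvars would presumably wrap something equivalent, with the MCA billing scope being the key prerequisite):

# Illustration only, with placeholder values: programmatic subscription creation
# via the azurerm provider. The billing_scope_id is where the Microsoft Customer
# Agreement comes in.
resource "azurerm_subscription" "level1" {
  subscription_name = "caf-level1"
  billing_scope_id  = "/providers/Microsoft.Billing/billingAccounts/<billingAccount>/billingProfiles/<billingProfile>/invoiceSections/<invoiceSection>"
  workload          = "Production"
}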
Did anyone see this error:
"ERROR: argument --ids: expected at least one argument"
Due to
az storage account show --resource-group MyResourceGroup --name MyStorageAccount
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
diagnostics = <sensitive>
global_settings = <sensitive>
launchpad_identities = <sensitive>
objects = <sensitive>
tfstates = <sensitive>
Terraform apply return code: 0
@calling get_storage_id
@calling upload_tfstate
Moving launchpad to the cloud
ERROR: argument --ids: expected at least one argument
Examples from AI knowledge base:
az storage account show --ids /subscriptions/{SubID}/resourceGroups/{ResourceGroup}/providers/Microsoft.Storage/storageAccounts/{StorageAccount}
Show properties for a storage account by resource ID.
az storage account show --resource-group MyResourceGroup --name MyStorageAccount
Show properties for a storage account using an account name and resource group.
https://docs.microsoft.com/en-US/cli/azure/storage/account#az_storage_account_show
Read more about the command in reference docs
Error on or near line 142; exiting with status 1
Error on or near line 142; exiting with status 1
Seen with a Launchpad deployment using aztfmod/rover:1.1.3-2201.2106, but only after 1-2 iterations of deployment.
Hello, I am working on an implementation of an Azure landing zone with the CAF Solution and have a question about accessing resources across landing zones. While the Current and Lower landing zone references work great for many resources, two use cases have arisen:
1) An Azure AD _rw group from level0, together with a landing zone Service Principal, needs to be included in a Key Vault access policy in level2.
2) A custom disk image (custom_image_id) created in level1 needs to be referenced in level3 and level4 deployments.
This breaks the Zero Trust model of read/write access to the Current level and read access to one level Lower, so I was wondering how people have solved these kinds of resource references that are needed more globally. Is the solution to add these kinds of values to global_settings?
Second question: if such a reference is later updated in a lower landing zone and is consumed by higher-level landing zones, is this reference updated with a terraform apply at the higher-level landing zones, or is it only read in once during the initial deployment?
Hi there,
yesterday I created two new issues for the caf-terraform-landingzones project.
Did anyone encounter those issues too, or did I miss something there?
They are about the devops_v1 addon and the devops_agent addon.
Error: Error in function call
│
│ on /home/vscode/.terraform.cache/bravent/rover_jobs/20220221090740962008800/modules/solution/virtual_machines.tf line 37, in module "virtual_machines":
│ 37: boot_diagnostics_storage_account = try(local.combined_diagnostics.storage_accounts[each.value.boot_diagnostics_storage_account_key].primary_blob_endpoint,
│ 38: each.value.boot_diagnostics_storage_account_key == "" ? "" : each.value.throw_error,
│ 39: can(tostring(each.value.boot_diagnostics_storage_account_key)) ? each.value.throw_error : null)
│ ├────────────────
│ │ each.value is object with 8 attributes
│ │ each.value.boot_diagnostics_storage_account_key is "bootdiag_region1"
│ │ local.combined_diagnostics.storage_accounts is object with no attributes
│
│ Call to function "try" failed: no expression succeeded:
│ - Invalid index (at /home/vscode/.terraform.cache/bravent/rover_jobs/20220221090740962008800/modules/solution/virtual_machines.tf:37,85-134)
│ The given key does not identify an element in this collection value.
│ - Unsupported attribute (at /home/vscode/.terraform.cache/bravent/rover_jobs/20220221090740962008800/modules/solution/virtual_machines.tf:38,76-88)
│ This object does not have an attribute named "throw_error".
│ - Unsupported attribute (at /home/vscode/.terraform.cache/bravent/rover_jobs/20220221090740962008800/modules/solution/virtual_machines.tf:39,78-90)
│ This object does not have an attribute named "throw_error".
│
│ At least one expression must produce a successful result.
Log Analytics deletion fails due to a "LogManagement" dependency:
│ Error: deleting Log Analytics Solution: (Solution Name "LogManagement(an01-logs-re1)" / Resource Group "an01-logging"): operationsmanagement.SolutionsClient#Delete: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidInput" Message="Solution of type 'LogManagement' can not be deleted. Operation Id: '67c8bd6ae0ecc846ab49cde963c1b7d6'"
Has anyone encountered it, please?
@calling login_as_launchpad
Getting launchpad coordinates from subscription: xxxe5052-0714-4xxx-90b5-xxxff260fxxx
- keyvault_name:
ERROR: argument --vault-name: expected one argument
Examples from AI knowledge base:
az keyvault secret show --name mysecret --vault-name myvault
Get a specified secret from a given key vault. (autogenerated)
az keyvault secret show --id "/subscriptions/00000000-0000-0000-0000-00000000000000000/resourceGroups/myrg/providers/Microsoft.KeyVault/vaults/mykv/privateEndpointConnections/mykv.00000000-0000-0000-0000-00000000000000000" --vault-name myvault
Get a specified secret from a given key vault. (autogenerated)
https://docs.microsoft.com/en-US/cli/azure/keyvault/secret#az_keyvault_secret_show
Read more about the command in reference docs
- tenant_id :
Error on or near line 357: Not authorized to manage landingzones. User must be member of the security group to access the launchpad and deploy a landing zone; exiting with status 102
@calling clean_up_variables
cleanup variables
clean_up backend_files