│ Error: Attempt to index null value
│
│ on /home/vscode/.terraform.cache/modules/solution/modules/resource_group/module.tf line 15, in resource "azurerm_resource_group" "rg":
│ 15: location = var.global_settings.regions[lookup(var.settings, "region", var.global_settings.default_region)]
│ ├────────────────
│ │ var.global_settings.default_region is "region1"
│ │ var.global_settings.regions is null
│ │ var.settings is object with 1 attribute "name"
│
│ This value is null, so it does not have any indices.
getting an error on level 1 deploy
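The error output shows that `var.global_settings.regions` is null, so the `regions[lookup(...)]` index has nothing to look into. The usual fix is to define a non-null `regions` map in the global settings passed to this level. A minimal, hypothetical sketch using the key names from the error output (the Azure region value is a placeholder):

```hcl
# Hypothetical global_settings for the level 1 configuration.
# "regions" must be a non-null map so that
# regions[lookup(var.settings, "region", default_region)] can be indexed.
global_settings = {
  default_region = "region1"
  regions = {
    region1 = "southeastasia"
  }
}
```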
@kgibson-insight:matrix.org If you would like, we can pair on this over zoom.
Let me know if you have 15 mins.
Hi I'm trying to deploy the es_root archetype (https://github.com/Azure/terraform-azurerm-caf-enterprise-scale/tree/main/modules/archetypes/lib). But when I do so, I keep getting the following error
Terraform returned errors:
╷
│ Error: reading Policy Set Definition "Deploy-ASC-Config": policy.SetDefinitionsClient#GetBuiltIn: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="PolicySetDefinitionNotFound" Message="The policy set definition 'Deploy-ASC-Config' could not be found."
│
│ with module.enterprise_scale.data.azurerm_policy_set_definition.external_lookup["/providers/Microsoft.Management/managementGroups/im/providers/Microsoft.Authorization/policySetDefinitions/Deploy-ASC-Config"],
│ on /home/vscode/.terraform.cache/modules/enterprise_scale/locals.policy_assignments.tf line 90, in data "azurerm_policy_set_definition" "external_lookup":
│ 90: data "azurerm_policy_set_definition" "external_lookup" {
│
╵
╷
│ Error: reading Policy Set Definition "Deploy-Diagnostics-LogAnalytics": policy.SetDefinitionsClient#GetBuiltIn: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="PolicySetDefinitionNotFound" Message="The policy set definition 'Deploy-Diagnostics-LogAnalytics' could not be found."
│
│ with module.enterprise_scale.data.azurerm_policy_set_definition.external_lookup["/providers/Microsoft.Management/managementGroups/im/providers/Microsoft.Authorization/policySetDefinitions/Deploy-Diagnostics-LogAnalytics"],
│ on /home/vscode/.terraform.cache/modules/enterprise_scale/locals.policy_assignments.tf line 90, in data "azurerm_policy_set_definition" "external_lookup":
│ 90: data "azurerm_policy_set_definition" "external_lookup" {
I can see that the policy definitions/assignments exist in that lib folder, so I'm not sure what the issue is - has anyone else come across something like this?
│ Error: Invalid index
│
│ on /home/vscode/.terraform.cache/modules/solution/modules/security/keyvault_access_policies/policies.tf line 48, in module "azuread_group":
│ 48: object_id = try(each.value.lz_key, null) == null ? var.azuread_groups[var.client_config.landingzone_key][each.value.azuread_group_key].id : var.azuread_groups[each.value.lz_key][each.value.azuread_group_key].id
│ ├────────────────
│ │ each.value.azuread_group_key is "keyvault_level1_rw"
│ │ each.value.lz_key is "launchpad"
│ │ var.azuread_groups is object with 4 attributes
│
│ The given key does not identify an element in this collection value.
I deployed all of Level 0, using Sandpit as the example, which deploys diagnostics for the different resources. Now I'm working on ESLZ, which deploys diagnostic policies, and the remediation of those policies is failing because a diagnostic setting has already been deployed with a different name. I thought about passing in the profile name to match; however, since "Deploy-Resource-Diag" is an initiative, I can only pass in one profile name, while for some resources the diagnostics are called "operational_logs_and_metrics", "operations", "siem", "storageAccountsDiagnosticsLogsToWorkspace", etc.
We have decided to remove all the diagnostics from our Level 0 deployment and just use Azure Policy going forward. Could the people who created this project comment on this, please? I'm also interested in anyone else's thoughts on this.
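To sketch the constraint described above: a DeployIfNotExists diagnostics initiative typically exposes a single profileName parameter at assignment time, so one assignment cannot match several pre-existing setting names at once. Assuming an archetype parameter structure along these lines (keys illustrative, not verified against this repo):

```hcl
# Illustrative assignment parameter override. Only one profileName can be
# supplied for the whole "Deploy-Resource-Diag" initiative, so it cannot
# simultaneously match settings named "operations", "siem", etc.
archetype_config = {
  parameters = {
    Deploy-Resource-Diag = {
      profileName = "operational_logs_and_metrics"
    }
  }
}
```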
I'm banging my head against the wall with this at the moment. I have followed the documentation and I receive the following errors:
╷
│ Error: Unsupported attribute
│
│ on dynamic_secrets.tf line 11, in module "dynamic_keyvault_secrets":
│ 11: keyvault = module.launchpad.keyvaults[each.key]
│ ├────────────────
│ │ module.launchpad is a object, known only after apply
│
│ This object does not have an attribute named "keyvaults".
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 74, in locals:
│ 74: storage_account_name = module.launchpad.storage_accounts[var.launchpad_key_names.tfstates[0]].name
│ ├────────────────
│ │ module.launchpad is a object, known only after apply
│
│ This object does not have an attribute named "storage_accounts".
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 75, in locals:
│ 75: container_name = module.launchpad.storage_accounts[var.launchpad_key_names.tfstates[0]].containers["tfstate"].name
│ ├────────────────
│ │ module.launchpad is a object, known only after apply
│
│ This object does not have an attribute named "storage_accounts".
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 76, in locals:
│ 76: resource_group_name = module.launchpad.storage_accounts[var.launchpad_key_names.tfstates[0]].resource_group_name
│ ├────────────────
│ │ module.launchpad is a object, known only after apply
│
│ This object does not have an attribute named "storage_accounts".
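The "known only after apply" / "Unsupported attribute" errors above usually mean those attributes are not exposed as outputs of `module.launchpad` at plan time. One common workaround, sketched here with illustrative variable and key names rather than the documented CAF pattern, is to read the launchpad's outputs from its remote state instead of from a direct module reference:

```hcl
# Sketch: read launchpad outputs from the backend state rather than from a
# module reference. Backend config values and output names are illustrative.
data "terraform_remote_state" "launchpad" {
  backend = "azurerm"
  config = {
    resource_group_name  = var.launchpad_resource_group_name
    storage_account_name = var.launchpad_storage_account_name
    container_name       = "tfstate"
    key                  = "caf_launchpad.tfstate"
  }
}

locals {
  # Outputs only exist here if the launchpad level exports them.
  launchpad_keyvaults        = data.terraform_remote_state.launchpad.outputs.keyvaults
  launchpad_storage_accounts = data.terraform_remote_state.launchpad.outputs.storage_accounts
}
```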
Are the diagnostic_event_hub_namespace options posted anywhere? e.g. name and location
resource_groups = {
  level0 = {
    "Reader" = {
      azuread_groups = {
        keys = ["caf_launchpad_Reader"]
      }
    }
  }
}
# naming convention
resource "azurecaf_name" "rg" {
  name          = var.resource_group_name
  resource_type = "azurerm_resource_group"
  prefixes      = var.global_settings.prefixes
  random_length = var.global_settings.random_length
  clean_input   = true
  passthrough   = var.global_settings.passthrough
  use_slug      = var.global_settings.use_slug
}
resource "azurerm_resource_group" "rg" {
  name     = azurecaf_name.rg.result
  location = var.global_settings.regions[lookup(var.settings, "region", var.global_settings.default_region)]
  tags = merge(
    var.tags,
    lookup(var.settings, "tags", {})
  )
}
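For context, the two blocks above consume a settings object along the following lines (keys assumed from the lookups in the code, values illustrative):

```hcl
# Illustrative inputs matching the lookups above: "region" is optional and
# falls back to global_settings.default_region; "tags" is optional too.
settings = {
  name   = "example-rg"
  region = "region1"
  tags = {
    environment = "dev"
  }
}
```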
having used many TF modules, it's pretty standard to do something like
module "resource_group" {
  # git sources take one module label and pin a version via ?ref=,
  # not a separate version argument
  source = "git::https://github.com/aztfmod/terraform-azurerm-caf.git//modules/resource_group?ref=1.0.0"

  name     = var.rg_name
  location = var.rg_location
  tags     = var.rg_tags
}
How is it better to use this roundabout way of specifying configs?
if nothing else, it'd be nice to have things structured as something that matches the CAF config definition, like
resource_group = {
  name           = "Reader"
  azuread_groups = [group1, group2]
  caf_level      = level0
}
My real confusion is how do I know which landing zone configs the resource group needs to specify. Where is the landing zone configuration spec?
Importing KeyVault with rover 🔐
Command used
rover -lz /tf/caf/landingzones/caf_solution/ \
  -tfstate example.tfstate \
  -var-folder /tf/caf/configuration/example/level3/example-linux-vm \
  -parallelism 30 \
  -env example \
  -level level3 \
  -a import \
  module.solution.module.keyvaults[\"example_vm_rg1\"].module.initial_policy[0].module.logged_in_user[\"logged_in_user\"].azurerm_key_vault_access_policy.policy \
  /subscriptions/<subscription-id>/resourceGroups/<example>-virtual-machine-rg1/providers/Microsoft.KeyVault/vaults/<vault-name>/objectId/<objectId>
I could use some help importing Keyvault. Seeing the following errors:
Terraform import return code: 1
Terraform returned errors:
╷
│ Error: Invalid index
│
│ on /home/vscode/.terraform.cache/modules/solution/modules/compute/virtual_machine/output.tf line 39, in output "ssh_keys":
│ 39: ssh_private_key_pem = azurerm_key_vault_secret.ssh_private_key[local.os_type].name
│ ├────────────────
│ │ azurerm_key_vault_secret.ssh_private_key is object with no attributes
│ │ local.os_type is "linux"
│
│ The given key does not identify an element in this collection value.
╵
╷
│ Error: Invalid index
│
│ on /home/vscode/.terraform.cache/modules/solution/modules/compute/virtual_machine/output.tf line 40, in output "ssh_keys":
│ 40: ssh_public_key_open_ssh = azurerm_key_vault_secret.ssh_public_key_openssh[local.os_type].name
│ ├────────────────
│ │ azurerm_key_vault_secret.ssh_public_key_openssh is object with no attributes
│ │ local.os_type is "linux"
│
│ The given key does not identify an element in this collection value.
╵
╷
│ Error: Invalid index
│
│ on /home/vscode/.terraform.cache/modules/solution/modules/compute/virtual_machine/output.tf line 41, in output "ssh_keys":
│ 41: ssh_private_key_open_ssh = azurerm_key_vault_secret.ssh_public_key_openssh[local.os_type].name #for backard compat, wrong name, will be removed in future version.
│ ├────────────────
│ │ azurerm_key_vault_secret.ssh_public_key_openssh is object with no attributes
│ │ local.os_type is "linux"
│
│ The given key does not identify an element in this collection value.
╵
Error on or near line 573: Error running terraform import; exiting with status 2003
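The three "Invalid index" errors come from the `ssh_keys` output indexing secret maps that are empty during this import. A defensive variant of such an output would wrap each lookup in `try()` so an empty map falls back to null instead of failing; this is a sketch of the idea, not the module's actual code:

```hcl
# Sketch: tolerate an empty azurerm_key_vault_secret map (e.g. during an
# import) by returning null instead of raising "Invalid index".
output "ssh_keys" {
  value = {
    ssh_private_key_pem     = try(azurerm_key_vault_secret.ssh_private_key[local.os_type].name, null)
    ssh_public_key_open_ssh = try(azurerm_key_vault_secret.ssh_public_key_openssh[local.os_type].name, null)
  }
}
```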
@arnaudlh following your YouTube video https://www.youtube.com/watch?v=fqgv4Wsvo88 I'm doing nothing differently from you apart from the UKSouth location and my own password, yet I get the error message below.
TASK [wait for the WinRM port to come online] *****************************************************************************************************************************
ok: [localhost]
PLAY [Setup the CAF development tools] ************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "ntlm: the specified credentials were rejected by the server", "unreachable": true}
PLAY RECAP ****************************************************************************************************************************************************************
localhost : ok=12 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
I'm able to log into the VM using RDP with the king.admin account and the password I set. Are you able to advise why it's failing for me?
We're using the enterprise scale repo to build out the management group structure, but using the CAF unified repo to deploy resources appears to be suicide at this point.