Hi @LaurentLesle, hi @arnaudlh,
I am currently wondering why Rover Plan is taking so long in my runs. I found that the plugin-loading step alone takes 5-10 minutes.
Is this normal or a known issue?
I'm using CAF 5.5.8 and Rover 1.1.7-2203.2311
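In case the time is going into terraform init re-downloading providers on every run, a persistent provider plugin cache may help. A minimal sketch, assuming a cache path under $HOME that survives between runs (TF_PLUGIN_CACHE_DIR is a standard Terraform CLI setting):

```shell
# Point Terraform at a persistent plugin cache so providers are downloaded
# once and reused by subsequent init/plan runs. The cache path is an
# assumption -- any directory that outlives the run works.
CACHE_DIR="${HOME}/.terraform.d/plugin-cache"
mkdir -p "${CACHE_DIR}"
export TF_PLUGIN_CACHE_DIR="${CACHE_DIR}"
echo "plugin cache: ${TF_PLUGIN_CACHE_DIR}"
```

Exporting the variable before invoking rover plan should let the wrapped terraform pick it up.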
Hi,
Trying to grasp all the templating stuff, which is great. However, I have a few questions:
1) As far as I understand, with this new approach we only need to keep the YAML files in source control, since all tfvars should be generated. Is that correct?
2) Every time we change one value in a YAML file, we need to re-run 'ansible-playbook /tf/caf/landingzones/templates/ansible/ansible.yaml --extra-vars "@/tf/caf/platform/definition/ignite.yaml"', which takes 4:30 min. Then we need to run terraform plan, which takes 6 min (for the caf-solution landing zone). That means roughly 10:30 min every time we need to change something. Is this the right way to do it?
3) Let's say I want to create a new landing zone and a new resource group (using YAML files only). How do I do that?
Thank you very much
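The change cycle described in question 2 can be sketched as one script so the two long steps run back to back; the playbook path and extra-vars file are the ones quoted above, and the guard is only there so the sketch degrades gracefully outside the rover container:

```shell
# Regenerate the tfvars from the YAML definitions, then (in your normal
# workflow) run the terraform plan step. Paths are taken from this thread.
STATUS=skipped
if command -v ansible-playbook >/dev/null 2>&1; then
  ansible-playbook /tf/caf/landingzones/templates/ansible/ansible.yaml \
    --extra-vars "@/tf/caf/platform/definition/ignite.yaml" \
    || echo "playbook run failed (expected outside the rover container)"
  STATUS=regenerated
  # ...follow with your usual 'rover ... -a plan' invocation here.
else
  echo "ansible-playbook not found; run this inside the rover container"
fi
echo "status: ${STATUS}"
```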
Hello all. I am new here, and today I tried for the first time to follow the "Single subscription deployment lab" step by step. First of all, thanks for the great module. I am sure our cloud migration will be significantly accelerated by it and that its quality will be improved.
I hope the community can help me with my startup difficulties. The rover ignite and rover plan steps from the lab were successful. Unfortunately, I get the following error when running rover apply:
╷
│ Error: local-exec provisioner error
│
│ with module.launchpad.module.azuread_roles_service_principals["identity"].null_resource.set_azure_ad_roles["Groups Administrator"],
│ on /home/vscode/.terraform.cache/prd/modules/launchpad/modules/azuread/roles/roles.tf line 11, in resource "null_resource" "set_azure_ad_roles":
│ 11: provisioner "local-exec" {
│
│ Error running command '/home/vscode/.terraform.cache/prd/modules/launchpad/modules/azuread/roles/scripts/set_ad_role.sh': exit status 1. Output:
│ Directory role 'Groups Administrator'
│ Enabling directory role: Groups Administrator
│ - body: {
│ "roleTemplateId": "fdd7a751-b60b-444a-984c-02652fe8fa1c"
│ }
│ ERROR: Bad Request({"error":{"code":"Request_BadRequest","message":"A conflicting object with one or more of the specified property values is present
│ in the directory.","details":[{"code":"ConflictingObjects","message":"A conflicting object with one or more of the specified property values is
│ present in the
│ directory.","target":"Role_8e481c99-f40b-47e2-bc7a-a688cdfa2340"}],"innerError":{"date":"2022-07-15T11:42:42","request-id":"e5583fc7-1c49-48f1-894e-403f3fb74735","client-request-id":"e5583fc7-1c49-48f1-894e-403f3fb74735"}}})
Does anyone have a clue why this error occurs and how I can fix it? I have checked both the App Registration and the Enterprise Application created for identity in the portal and I don't see any "Groups Administrator" role assigned.
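One hedged idea for narrowing this down: the ConflictingObjects response from Graph often means the 'Groups Administrator' directory role is already activated in the tenant, so the activation POST in set_ad_role.sh collides with the existing Role_* object (this is separate from role assignments, which is why the portal shows nothing assigned). Listing the activated roles can confirm it; az rest and the /v1.0/directoryRoles endpoint are standard, but being logged in with permission to read Graph is an assumption:

```shell
# List directory roles that are already activated in the tenant. If
# 'Groups Administrator' appears here, the launchpad's activation call will
# conflict with the existing object.
STATUS=skipped
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az rest --method GET \
    --url "https://graph.microsoft.com/v1.0/directoryRoles" \
    --query "value[].displayName" -o tsv
  STATUS=queried
else
  echo "az CLI not available or not logged in; skipping"
fi
```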
Rover Ignite - Templating - Experience / Feedback @LaurentLesle @arnaudlh
Has anyone implemented Rover Ignite / Templating for two or more clients (ClientA & ClientB) with two or more environments (Dev/QA/Prod)?
Templating: I guess the objective was to simplify config generation, but it seems a lot of work is required to write the YAML/J2 definition files and to generate and manage the .tfvars, with the assumption that the stack is identical between clients and between each of those environments. How does this compare with having a global_settings.tfvars file per stack and diffing between stacks? It may not be elegant, but to me it seems like a quick and easy solution.
Handling stack variations: Again, Dev/QA/Prod may not be identical stacks for practical reasons. Can we handle variations between stacks, or do variations nullify the purpose of Rover Ignite/templating?
Speed of change: each time we need to make an update, we have to update our YAML/J2 definitions, run the Ansible playbook, generate the .tfvars, run Terraform plan & apply, and repeat if any mistakes are made.
Upgrades: How do we handle stack upgrades with Rover ignite using templates? How would it re-generate the configuration for each stack for each client?
Documentation: We need more clarity on the workflow of using Rover Ignite/templating for a single stack, explaining how to introduce a new change (adding a new resource to the stack or a whole new landing zone), initially for a single subscription but eventually in a multi-tenant, multi-subscription, multi-stack scenario.
Hello again,
I am wondering what I have to do to use multiple subscriptions instead of just a single one, as I did when following the lab. I tried to modify the ignite.yaml to use two separate subscriptions: one for all launchpad resources and one for all the other platform resources. The generated subscriptions.yaml was configured correctly with my input from the ignite.yaml:
resources:
  subscriptions:
    subscriptions:
      launchpad:
        name: CAF Launchpad
        create_alias: false
        subscription_id: xxxxxxxx-xxxx-xxxx-xxxx-534f587e31eb
      identity:
        name: CAF Platform
        create_alias: false
        subscription_id: xxxxxxxx-xxxx-xxxx-xxxx-a79c4fcf01e4
      connectivity:
        name: CAF Platform
        create_alias: false
        subscription_id: xxxxxxxx-xxxx-xxxx-xxxx-a79c4fcf01e4
      management:
        name: CAF Platform
        create_alias: false
        subscription_id: xxxxxxxx-xxxx-xxxx-xxxx-a79c4fcf01e4
But all resources were created in the CAF Launchpad subscription. After I realized this, I noticed that in all the README files the CAF Launchpad subscription id had been generated as the target_subscription in the rover command.
After I simply specified the desired subscription id in the target_subscription parameter, the resources were created in it as desired.
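For anyone hitting the same issue, the workaround can be sketched as below. The -target_subscription spelling and the other rover arguments are assumptions based on this thread; check the README produced by rover ignite for the exact command and only override the target subscription:

```shell
# Run the landing zone against the desired subscription explicitly.
# All arguments here are illustrative; copy the real ones from the
# generated README.
TARGET_SUB="xxxxxxxx-xxxx-xxxx-xxxx-a79c4fcf01e4"   # CAF Platform subscription from above
STATUS=skipped
if command -v rover >/dev/null 2>&1; then
  rover -lz /tf/caf/landingzones/caf_solution \
        -target_subscription "${TARGET_SUB}" \
        -env mydev -level level1 -a plan \
    || echo "rover run failed (expected outside a configured container)"
  STATUS=ran
else
  echo "rover not found; run inside the rover container"
fi
```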
My question is simply whether this is the right way to achieve my goal?
And if so, will rover ignite in the future use the subscriptions stored in ignite.yaml to generate the README files?
Thank you and best regards!
Hi, I am trying to build a landing zone at level3 with its own block of global_settings (passthrough = false) and a unique prefix, which works fine. However, after creating my level3 landing zone, if I reference another state file (from the current or a lower level) that has global_settings (passthrough = true), it takes precedence and the current level's (passthrough = false) is ignored. This happens even though I am NOT using global_settings_key = "lower-level" within the current (level3) landingzone = {} block, just the tfstate = {} block.
This is potentially dangerous if we work on a higher-level landing zone that needs to reference a lower-level state file with different global settings (passthrough/prefixes/suffixes/slug/random_length).
Lower-level global settings should only be active if global_settings_key is referenced within the current level, not when we only reference tfstate = {}.
Could you help with the right understanding and usage of global_settings, please? There could be a few permutations and combinations of the above settings.
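To make the question concrete, here is a hedged sketch of the two referencing styles in a level3 configuration.tfvars (key and file names like remote_lz are hypothetical). The expectation is that only variant A, with an explicit global_settings_key, should inherit the lower level's global settings, while variant B should leave this level's own (passthrough = false) block in effect:

```hcl
# Variant A: explicitly inherit global_settings from the referenced state.
landingzone = {
  backend_type        = "azurerm"
  level               = "level3"
  key                 = "my_level3_lz"
  global_settings_key = "remote_lz" # hypothetical key of the lower-level state
  tfstates = {
    remote_lz = {
      level   = "lower"
      tfstate = "remote_lz.tfstate"
    }
  }
}

# Variant B: reference the remote state only; the current level's own
# global_settings (passthrough = false) should remain active.
landingzone = {
  backend_type = "azurerm"
  level        = "level3"
  key          = "my_level3_lz"
  tfstates = {
    remote_lz = {
      level   = "lower"
      tfstate = "remote_lz.tfstate"
    }
  }
}
```

(Shown as two alternative variants of the same block, not one file.)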
Can anyone help with a hub-spoke implementation where the hub and spoke are in two different subscriptions, please?
The hub network must be at level2 in its own landing zone in subscription sub1. The VWAN is in the same subscription/resource group but in a different landing zone, and hence a different tfstate file.
The spoke network must be at level3 and in a different subscription, sub2.
I defined the virtual_hub_connections at level3:
virtual_hub_connections = {
  my-spoke_TO_my-hub = {
    name = "my-spoke_TO_my-hub"
    virtual_hub = {
      lz_key = "my-hub-landingzone-key"
      key    = "my-hub-key"
    }
    vnet = {
      vnet_key = "my-spoke-vnet-key"
    }
  }
}
module.solution.azurerm_virtual_hub_connection.vhub_connection["my-spoke_TO_my-hub"]: Creating...
╷
│ Error: creating Hub Virtual Network Connection: (Name "my-spoke_TO_my-hub" / Virtual Hub Name "my-hub" / Resource Group "myhub-rg"): network.HubVirtualNetworkConnectionsClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="ResourceGroupNotFound" Message="Resource group 'myhub-rg' could not be found."
│
│ with module.solution.azurerm_virtual_hub_connection.vhub_connection["my-spoke_TO_my-hub"],
│ on /home/vscode/.terraform.cache/mydev/rover_jobs/20220728004934565321893/modules/solution/networking_virtual_hub_connection.tf line 14, in resource "azurerm_virtual_hub_connection" "vhub_connection":
│ 14: resource "azurerm_virtual_hub_connection" "vhub_connection" {
│
The TF plan shows the right resources, but the apply fails. I assume it is targeting the level3 subscription during TF apply, even though TF plan shows the correct level2 subscription.
Do we create a landing zone exclusively for virtual_hub_connections (no other resources defined) and then run it with -target-subscription <level2-subscription>, or do we move virtual_hub_connections to level2 and reference the level3 vnet using its resource ID?
Note: the tfstates for the Hub & VWAN landing zones are referenced at level3; I toggled the order, but the issue still persists.
Hello Everyone, hope you're well and safe.
Sorry for the newbie question, but this is something I've been trying to figure out for some time and it's still not clear to me:
What's the difference between Azure/terraform-azurerm-caf-enterprise-scale and aztfmod/terraform-azurerm-caf ?
Which one is recommended for ESLZ deployments? Are both endorsed by Microsoft?
Is there a way to use Azure/terraform-azurerm-caf-enterprise-scale for standalone deployments ?
I've started using aztfmod/terraform-azurerm-caf for my standalone deployments (not related to CAF), but we are also about to adopt CAF in our environment, and the consultancy helping us on this journey is using Azure/terraform-azurerm-caf-enterprise-scale for the deployment.
Any rover experts here? I am trying to deploy CAF landing zones across multiple subscriptions. I followed the single-subscription documentation and modified the subscriptions section in ignite.yaml, but it still deploys everything into a single subscription. Here is what the subscriptions section of my ignite.yaml looks like:
subscriptions:
  launchpad: # Do not rename the key
    name: Launchpad
    create_alias: false
    subscription_id: abcd
  identity: # Do not rename the key
    name: Identity
    create_alias: false
    subscription_id: efgh
  connectivity: # Do not rename the key
    name: Connectivity
    create_alias: false
    subscription_id: ijkl
  management: # Do not rename the key
    name: Management
    create_alias: false
    subscription_id: pqrs
Any help on how to deploy these into their respective landing zones is appreciated.