    ronaldjeden
    @ronaldjeden
    but that one is on a small instance
    Michael Aldridge
    @the-maldridge
    ah yeah that is a thing to keep in mind
    you need enough memory to load and unpack the initrd
    ronaldjeden
    @ronaldjeden
    what sizes did you use?
    Michael Aldridge
    @the-maldridge
    I think most things I run are on t3a.medium or larger right now
    ronaldjeden
    @ronaldjeden
    ok let me try
    Michael Aldridge
    @the-maldridge
    but the massive rails apps on them might have something to do with that :/
    ronaldjeden
    @ronaldjeden
    (image attached: image.png)
    Michael Aldridge
    @the-maldridge
    that looks like it's finalizing init
    ssh should become available in a few moments after that
    ronaldjeden
    @ronaldjeden
    yeah i can ssh
    i tried nomad status and consul status but they did not work
    Michael Aldridge
    @the-maldridge
    until the cluster is bootstrapped nomad won't be up
    also heed the warning when you ssh into the machine; it's namespaced and you're not in the root namespace
    ronaldjeden
    @ronaldjeden
    aha for this i need a small tutorial I guess. :) at least the images work.
    Michael Aldridge
    @the-maldridge
    perhaps
    if you're using the snippet I provided and the metadata module, you'll also need to configure AWSSM to store the initial keys the system depends on
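    A minimal sketch of that step (the secret name here is a placeholder, not something the modules prescribe):

        # store an initial gossip key where the images can find it
        aws secretsmanager create-secret \
            --name hashistack/consul-gossip \
            --secret-string "$(consul keygen)"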
    ronaldjeden
    @ronaldjeden
    the linuxkit-setup module you mean?
    or all the other aws modules. Then I will try to set them up tomorrow
    Michael Aldridge
    @the-maldridge
    the other AWS modules
    the hashistack needs certain keys available in a certain order to bootstrap
    you need consul, then vault, then nomad
    the images are configured to come up a little bit at a time, they'll just keep polling until all the values are available
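    Conceptually, the polling each image does might look like this (illustrative only, not the modules' actual code; the secret name is assumed):

        # block until a required secret exists, then proceed with startup
        until aws secretsmanager get-secret-value \
                --secret-id hashistack/consul-gossip >/dev/null 2>&1; do
            sleep 5
        done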
    ronaldjeden
    @ronaldjeden
    aha ok will try to figure that out
    if i ssh now and try to execute ctr --namespace services.linuxkit containers ls
    it gives an error
    should this be possible?
    Michael Aldridge
    @the-maldridge
    ctr isn't in the ssh namespace
    you can nsenter -m -t 1 to enter the root mount namespace
    then binaries will become visible to you
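    For example, a session after ssh'ing in might look like this (the container name in the exec is just an example):

        nsenter -m -t 1                                   # join pid 1's mount namespace
        ctr --namespace services.linuxkit containers ls   # ctr is now visible
        ctr --namespace services.linuxkit tasks exec \
            -t --exec-id shell sshd sh                    # exec into a service container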
    ronaldjeden
    @ronaldjeden
    ah ok this is a bit new to me but I am beginning to get the gist of it.
    If i configure all the aws modules, they will poll, I guess, for variables with which they can start up?
    Michael Aldridge
    @the-maldridge
    once you're in the root namespace ctr will work, and you can exec into other containers
    every machine needs an IAM profile that can, at a minimum, list tags on machines to find consul peers, and contact secrets manager for encryption keys
    the security-setup module can do this for you
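    If you were wiring it up by hand instead, the minimum policy would be something like this (role and policy names are hypothetical):

        aws iam put-role-policy \
            --role-name hashistack-node \
            --policy-name hashistack-bootstrap \
            --policy-document '{
              "Version": "2012-10-17",
              "Statement": [{
                "Effect": "Allow",
                "Action": [
                  "ec2:DescribeInstances",
                  "ec2:DescribeTags",
                  "secretsmanager:GetSecretValue"
                ],
                "Resource": "*"
              }]
            }'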
    ronaldjeden
    @ronaldjeden
    aha I came this far, I am not gonna give up
    Michael Aldridge
    @the-maldridge
    once you have the AWS side set up, you then need to do the normal hashistack bootstrapping to set up ACLs and enable integrations
    that script is how I do this in my hardware cluster, and the process is the same for AWS. Just instead of storing keys on the filesystem, put them in their right place
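    The three phases correspond roughly to these commands (a rough outline, not the script itself):

        consul acl bootstrap    # yields the initial consul management token
        vault operator init     # yields unseal/recovery keys and a root token
        vault operator unseal   # repeat with enough of the unseal keys
        nomad acl bootstrap     # yields the initial nomad management token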
    ronaldjeden
    @ronaldjeden
    yeah
    Michael Aldridge
    @the-maldridge
    if you want to see what it's doing, you can ssh into a machine and watch the log from emissary, which does the token acquisition during init
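    Something along these lines (the log path is an assumption about where linuxkit writes service logs):

        nsenter -m -t 1
        tail -f /var/log/emissary.log    # assumed location; adjust to your image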
    ronaldjeden
    @ronaldjeden
    ah ok
    ronaldjeden
    @ronaldjeden
    Hi Michael,

    For my understanding: the cluster is kind of up and running now and needs to be provisioned. However, I only need to 'port' the above-mentioned script.

    How do the nodes get this script provisioned?
    Would I put the address of the loadbalancer here?

    export CONSUL_HTTP_ADDR=http://node1:8500 --> so http://aws-loadbalancer:8500
    export NOMAD_ADDR=http://node1:4646
    export VAULT_ADDR=http://node1:8200

    Do I need to run this script locally in a new directory and just use the environment variables?

    I am trying to grasp the concept more or less I guess. I know this is the bare-metal variant.

    I've come this far but this last part is confusing me. Maybe I am too dumb.

    We have the end-to-end sample now. It would also be really nice if this final step were documented; that would be perfect for the community.

    Michael Aldridge
    @the-maldridge
    the nodes don't get the script, you run the script locally where you have the tools available
    it's just stepping through the processes documented on learn.hashicorp.com for setting up a consul cluster with acls, setting up vault, and setting up the nomad/consul/vault token integrations. It should be relatively clear as 3 distinct phases happening
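    Concretely, that might look like this (the addresses and script name are placeholders):

        # run from your workstation, pointed at the load balancer
        export CONSUL_HTTP_ADDR=http://<alb-dns-name>:8500
        export NOMAD_ADDR=http://<alb-dns-name>:4646
        export VAULT_ADDR=http://<alb-dns-name>:8200
        ./bootstrap.sh    # the script referenced above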
    ronaldjeden
    @ronaldjeden
    Ok I will try it 🤫.
    ronaldjeden
    @ronaldjeden
    And how do i connect to the cluster via the load balancer?
    Michael Aldridge
    @the-maldridge
    you should have an output from terraform that's the public address of the load balancer, but that might not exist in the version you have.
    if not, describe ALBs and pull the address of the load balancer, and then you can point {nomad,consul,vault}.<base-domain> at it and it will proxy you through to the clusters
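    For example (illustrative, using the AWS CLI):

        terraform output    # check for a load balancer address output first
        aws elbv2 describe-load-balancers \
            --query 'LoadBalancers[].DNSName' --output text
        # then point {nomad,consul,vault}.<base-domain> records at that DNS name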