    Bjorn S
    @Bjeaurn
    is there a way, if you have like replicated containers, to connect to a "service" name or whatever the proper terminology is?
    Ah yeah, good point.
    Mike Holloway
    @mh720
    http://my-container:80 exactly
    Bjorn S
    @Bjeaurn
    hmmm
    I gotta test this
    cause now I'm wondering if my-container:80 will have docker swarm loadbalance across all available containers under that name
    if it does. then I may have been going at this the wrong way entirely
    Thanks so much for your time and insights @mh720
    Mike Holloway
    @mh720
    Yes, in your docker compose, set the service to ‘deploy: mode: global’ and it would run the service on all your hosts, then you could connect to a specific instance of the service directly by pointing at http(s)://swarmnode.fqdn:port .. if you set the service to only replicate X copies that are less than the number of nodes, they will bounce around nodes and you will never know where they are. You could also use a trick of running individual copies of the service and ‘pin’ them to a specific known node using ‘placement: constraints’.
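A minimal compose sketch of the two approaches described above; the image names and the hostname in the constraint are hypothetical:

```yaml
version: "3.7"
services:
  # mode: global runs one task on every swarm node, so the service is
  # reachable on any node at http(s)://swarmnode.fqdn:8080
  agent:
    image: my/agent:latest          # hypothetical image
    ports:
      - "8080:80"
    deploy:
      mode: global

  # A single replica 'pinned' to a known node via a placement constraint,
  # so you always know where it is running
  pinned:
    image: my/service:latest        # hypothetical image
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == node1  # hypothetical node name
```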
    Bjorn S
    @Bjeaurn
    yeah I read about that. But the http://container-name:80 should work wherever it ends up right?
    Mike Holloway
    @mh720
    Yes exactly, swarm will proxy the traffic transparently to some swarm host that can handle that traffic for you
    back in a bit
    Bjorn S
    @Bjeaurn
    Sure thing!
    Bjorn S
    @Bjeaurn
    hmmm okay I gotta lookup how to connect to containers like this from the swarm. I think this way of connecting is for inside the stacks, not the entire swarm
or from outside of the swarm, for that matter.
(I mean connecting from outside of the swarm, for that matter.)
    Bjorn S
    @Bjeaurn
    Like I was trying things like http://stack_container:80 but that didn't work
    and the container name doesn't work at all in docker swarm/stack mode
    Bjorn S
    @Bjeaurn
Alright, I'm severely misunderstanding something or I am just not doing it right @mh720
    Mike Holloway
    @mh720
Bjorn you’re correct, DNS names are available only on the inside of docker networks, and you can reuse the same service name in different stacks (different stack names) without them colliding or resolving each other. If for instance nginx is in a stack with appA and appB, you could bind nginx to 80 external:80 internal, then in the nginx reverse proxy config :80{ proxy http://fqdn/appA/ appA:80, http://fqdn/appB/ appB:80 }. In that stack, appA and appB will each have their own (internal) IP, and they can bind to whatever ports they each want (80 above), and only things inside that stack can access/ping them, since you aren’t exposing those services directly to the outside (only the nginx service is exposed)
    You could have 100 containers in the stack, in that stack network, all bound to their own port 80, not exposed; and then make 100 stanzas in your exposed reverse proxy to get traffic to them from the outside world.
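A sketch of that single-stack layout as a compose file; the `my/appA` and `my/appB` images are illustrative, not from the chat:

```yaml
version: "3.7"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"          # only nginx is exposed to the outside world
  appA:
    image: my/appA       # hypothetical; binds its own port 80, stack-internal only
  appB:
    image: my/appB       # hypothetical; also binds port 80, no collision
```

Inside nginx, two `location` blocks route by path to the stack-internal DNS names: `location /appA/ { proxy_pass http://appA:80/; }` and `location /appB/ { proxy_pass http://appB:80/; }`.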
    Mike Holloway
    @mh720
Only one stack can “expose” 80 though, over the entire swarm (because all swarm hosts will accept traffic on that exposed port and will transparently get the traffic to the right place).
    Mike Holloway
    @mh720
To isolate different customer apps, or just for security isolation, you could have one stack do nothing but expose 80 to a proxy service container. AppA lives in a totally different stack (or just network), and runs a proxy also in that stack, exposing port 81 for example. In the first stack (proxy only) have it reverse proxy :80{ http://fqdn/appA -> swarmhostfqdn:81}, and in the appA stack’s proxy have it also reverse proxy to 10 different service names in that stack, all bound to their own port 80 but not otherwise accessible....
    The proxy in the first network would be a point of compromise, but even if rooted can only access appA’s containers via the proxy in the appA stack, and couldn’t otherwise talk/hack directly appA’s service containers, only talk to them through whatever the appA proxy routes.
At the end of the day, unless you want to run separate routers, load balancers, and swarms for each separate customer’s traffic, there will be points on the network where all of your customer traffic comes back together before heading back out to the internet, so choose your battles.
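The two-stack isolation pattern above might be sketched as two compose files (all service and image names are hypothetical):

```yaml
# edge-stack.yml: the only stack exposing :80 swarm-wide
services:
  edge:
    image: nginx:alpine
    ports:
      - "80:80"          # reverse-proxies http://fqdn/appA/ -> swarmhostfqdn:81
---
# appA-stack.yml: appA plus its own proxy, exposing only :81
services:
  appA-proxy:
    image: nginx:alpine
    ports:
      - "81:80"          # fans traffic out to appA's internal services
  appA:
    image: my/appA       # hypothetical; never exposed, reachable only via appA-proxy
```

Even if the edge proxy is compromised, it can only reach appA through whatever routes appA-proxy exposes.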
    Bjorn S
    @Bjeaurn
    Thanks @mh720 for taking the time to help me out. I ended up asking this question on reddit and twitter and got a few pointers, basically what you were saying as well. I guess I needed to write out my exact problem to get to a better question to ask. What I ended up doing that made it work for now was creating a new "public" network, let all the publishable services join that network and now I can route over my containers using their hostnames or IPs. This was what I was looking for. I've even gotten the reverse proxy setup now as well which automatically detects the service and adds it, so that seems to be working great.
    Now to figure out if replication works as I think it does, but the main issue I was having and just couldn't wrap my head around is fixed and I've definitely learned a few things about swarm and how the networks in docker function.
    Thanks again!
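The shared "public" network Bjorn describes can be created with `docker network create --driver overlay --attachable public`; each publishable service then joins it as an external network. A sketch (service name is illustrative):

```yaml
version: "3.7"
services:
  web:
    image: my/web        # hypothetical publishable service
    networks:
      - default          # the stack's own internal network
      - public           # shared overlay; reachable by name from other stacks
networks:
  public:
    external: true       # created outside the stack with `docker network create`
```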
    Rafael Vicente Saura
    @AsixCompany_gitlab
Is it possible to build a Windows docker container from GitLab CI?
    Ievgenii Shepeliuk
    @eshepelyuk
    Hello, everyone
Are there any limitations on the number of stacks that can be run in a swarm?
Is it OK to run each service in its own stack?
    Chase Pierce
    @syntaqx
    :wave:
    Chase Pierce
    @syntaqx
    What's a good way of allowing CircleCI to connect to a swarm you created with docker-machine? Trying to understand the "right" way, not just a way . Literally just trying to allow circle to do a rolling deploy on build :)
    Mike Holloway
    @mh720
    Not sure of a/the definitive ‘right’ way to accomplish this, but replied in gitter/swarmstack this morning that our team uses something like Portainer’s webhook API for builds to trigger CI/CD updates to containers and stacks.
    Chase Pierce
    @syntaqx
    Seems like the portainer stuff via swarmstack just auto-updates a given tag, no? (ie, my/service:latest)
    Isn't the more common convention in orchestration to use a specific tag (ie my/service:1.0.1) so you can have rollbacks?
    Mike Holloway
    @mh720
    Curious to hear what others here do. For dev CI/CD we use latest and fail forward, for production a tagged release. Not a fit for every environment, but is adequate for some.
    Chase Pierce
    @syntaqx
    For CI/CD do you guys redeploy stacks, update services, a mixture? I'm curious how people manage the individual services within stacks and how to update them individually
    Jose Marcelius Hipolito
    @joeyhipolito
    @all is there a way to share SSH_AUTH_SOCK from a windows host to my containers...?
    Chase Pierce
    @syntaqx
    Which ssh agent are you using?
    atarutin
    @atarutin
do the network interfaces need to be initialized in --advertise-addr and --data-path-addr?
    Chandrasekar
    @chandru1989_gitlab
Is there any document that shows how to start a docker container swarm via the swarm plugin from Jenkins?
    rps2ff
    @rps2ff
    has anyone deployed a kafka cluster using docker swarm?
    Jack Murphy
    @rightisleft
    Hi folks - i seem to be having a new issue with a custom defined attachable network. When attempting to connect a container to it, the docker daemon is unable to locate it. This was working correctly yesterday
    Mike Holloway
    @mh720
    UDP 4789 open between swarm hosts? Did you try restarting Docker daemon on each host to potentially let Docker fix up the INPUT chain firewall rules?
    Daniel Nordberg
    @dnordberg
    hey, are docker secrets really secure? more secure than just passing env? my issue is you can easily enter a container if you have access to the host and view secrets anyway
    is there any better way of managing secrets if this is the case?
    Chase Pierce
    @syntaqx
    @dnordberg The idea is that there isn't a trace of the value existing - It's a file handler, so it's literally up to the host system to provide a secret rather than the value being set on the container itself
    So yes, you can still login to a container that has a secret set and echo that value out, that's the reality with any secret system when you can read it - But the value can be encrypted at rest when not in use, and is only available when you mount it - not after, and not retrievable through layers
Think of it more as a secure pipe to the secret rather than the secret itself being "secured" while mounted.
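A minimal compose sketch of that model: the secret lives in swarm's encrypted Raft store and is surfaced to the container only as an in-memory file under `/run/secrets/` (the secret and service names are illustrative):

```yaml
version: "3.7"
services:
  app:
    image: my/app               # hypothetical
    secrets:
      - db_password             # mounted at /run/secrets/db_password
secrets:
  db_password:
    external: true              # created with `docker secret create db_password -`
```

Unlike an environment variable, the value does not show up in `docker inspect` output or get baked into image layers; it exists only in the mounted tmpfs while the task runs.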
    Chase Pierce
    @syntaqx
    @dnordberg Read over https://www.alexandraulsh.com/2018/06/25/docker-npmrc-security/ for an example of why secrets are better. Read the update to the document last, but it should give you an idea of what Docker is actually securing