    Mike Holloway
    @mh720
    @inayath123
    ecaepp
    @ecaepp
    @inayath123 have you checked the service logs? docker service logs <service-name>
    ecaepp
    @ecaepp
    @CharcoGreen To answer your question, the Docker documentation recommends volume mounts over bind mounts: Docker Service Volumes
    ecaepp
    @ecaepp
    @CharcoGreen As for your use case, I'd point out that Docker creates local volumes by default. So let's say a service is created on node-1 with a volume mount: if node-1 goes down and the service is migrated to node-2, the service will create a new volume on node-2 that will not have the same data as the local volume created on node-1.
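    For illustration, a minimal sketch of that failure mode (service, image, and volume names are hypothetical; the point is that the default local driver pins data to whichever node created it):
    ```yaml
    version: '3.3'
    services:
      db:
        image: mysql:5.7
        volumes:
          - dbdata:/var/lib/mysql
    volumes:
      dbdata: {}   # local driver by default: each node gets its own copy, data does not follow the service
    ```
    A volume driver backed by shared storage (NFS, for example) is the usual way around this, but that is beyond what the default local driver provides.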
    SOBNGWI
    @sobngwi
    Please, how do I update a link (depends_on) for a service that has already started?
    Mike Holloway
    @mh720
    @sobngwi https://stackoverflow.com/questions/35832095/difference-between-links-and-depends-on-in-docker-compose-yml if the service is already up, I don't see how it would matter changing its run-time depends_on, as it's already up.
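    For context, depends_on only influences start order under docker-compose, and docker stack deploy in swarm mode ignores it altogether, so changing it on a running service has no effect anyway. A minimal sketch (service and image names are hypothetical):
    ```yaml
    version: '3.3'
    services:
      app:                          # component A
        image: myorg/app:latest     # hypothetical image
        depends_on:
          - db                      # start-order hint only; ignored by swarm mode
      db:                           # component B
        image: mysql:5.7
    ```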
    Mike Holloway
    @mh720
    Just shared this in another community, relevant for Docker and Docker swarm users alike: I've done quite a bit of work tuning network sysctls in general for https://github.com/swarmstack/swarmstack users. See this commented ansible playbook for tuning up the default networking sysctls, which are generally set to a lowest common denominator by default to support old 1998-era processors and hardware: https://github.com/swarmstack/swarmstack/blob/master/ansible/playbooks/includes/tasks/sysctl_el7.yml
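    To give a flavor of what such a playbook does, here is a hedged sketch of a single task in the same ansible form; the key and value are illustrative, not necessarily what swarmstack actually sets (see the linked playbook for the real list):
    ```yaml
    - name: Raise the socket listen backlog above the conservative default
      sysctl:
        name: net.core.somaxconn
        value: "4096"
        state: present
        reload: yes
    ```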
    SOBNGWI
    @sobngwi
    @mmikirtumov The use case is to dynamically update or change a dependency. You have component A, which depends on component B or C. You started A with B. Two hours later you want to update A to depend on C. B and C are database components. And we are not using a docker-compose file.
    Jim Northrup
    @jnorthrup
    hi, is there a FAQ for Docker swarm + MySQL? I'm hours into bad guesswork and SO articles and still have not been able to reach MySQL inside a swarm
    the only means of access that seems to reach MySQL is when the script randomizes the password. but then... I have to parse JSON to get at the docker logs to extract the password from init.
    Jim Northrup
    @jnorthrup
    is there anything a little more elegant than a) relying on the random password generator and b) plucking it with pw=( $(docker logs $(docker network inspect kbrcms_default |grep kbrcms_db|cut -f4 -d'"') 2>/dev/null|grep 'GENERATED ROOT PASSWORD'|cut -f 2 -d:) );set -x; sed -i'.bak' s,DB_PASSWORD='.*'$,DB_PASSWORD=${pw}, kbr_php/.env;set +x # ?
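    A less fragile option, if it fits the setup: the official mysql image honors MYSQL_ROOT_PASSWORD_FILE, which pairs with Docker secrets in swarm, so the password never has to be scraped out of the logs. A sketch (secret and service names are made up):
    ```yaml
    # Assumes the secret was created beforehand, e.g.:
    #   printf 'my-root-pw' | docker secret create mysql_root_password -
    version: '3.3'
    services:
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_password
        secrets:
          - mysql_root_password
    secrets:
      mysql_root_password:
        external: true
    ```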
    Bjorn S
    @Bjeaurn
    Hello! Anyone here have some pointers on figuring out how the swarm network operates? Steps to take to make it externally visible in a good way, random ports and all that? I've got a situation now where a single stack can expose/bind port 80, but every other stack/app needs to be using different ports. This feels off and I can't seem to figure out what the proper way is to set this up.
    Mike Holloway
    @mh720
    @Bjeaurn check out the docker-compose syntax of https://github.com/swarmstack/swarmstack/blob/master/docker-compose.yml and search for 'ports' under the caddy service, which forwards exposed port : internal container port (in this stack caddy handles most external HTTPS termination, then proxies to other services on an internal encrypted network). Swarm service exposed network ports are accessible by all external hosts by default (via any swarm host; they will proxy the traffic automatically to the container's host), and they are only filtered via firewall FORWARD chains, not via the standard firewall INPUT chains. So if you wish to limit traffic to exposed service ports to only some IPs, see https://github.com/swarmstack/swarmstack which includes a stand-alone firewall management ansible playbook that helps manage locking down Docker service port access to specific IPs. If you don't want to use the ansible playbook, look specifically at the iptables rules here: https://github.com/swarmstack/swarmstack/blob/master/ansible/roles/swarmstack/files/etc/swarmstack_fw/rules/docker.rules
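    For a hand-rolled version of the same idea on a single Docker host, the documented hook is the DOCKER-USER chain, which is evaluated before Docker's own FORWARD rules; swarm's routing mesh complicates source-IP matching (part of why swarmstack ships a dedicated rules file), but the shape of such a rule looks like this (port and address are placeholders):
    ```sh
    # Drop traffic to a published container port from everyone except one trusted host.
    # 9090 and 203.0.113.10 are illustrative values.
    iptables -I DOCKER-USER -p tcp --dport 9090 ! -s 203.0.113.10 -j DROP
    ```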
    Bjorn S
    @Bjeaurn
    Thanks @mh720 I'll take a look into these pointers!
    Mike Holloway
    @mh720
    You could even do something like use caddy to redirect anything on 80 to 443/https, and use the same caddy instance as a "reverse proxy" to pass different URLs through to different containers, so that everything stays on the default https port 443 (i.e. https://1.2.3.4/appA/[..]). Your containers could each just bind to port 80 and not need to worry about TLS/https, as traffic passed to them from caddy would traverse an encrypted overlay network. And if you have one container that needs to bind to a specific port (e.g. 8080, or 443), you can reverse proxy that specific container URL to that different port, and your users don't even need to know; they just see https://1.2.3.4/appB/ @Bjeaurn
    Mike Holloway
    @mh720
    I like caddy for its small binary size and automatic https, but you can use traefik, nginx, Apache, etc. to accomplish the same reverse proxy scheme. You have a fairly complete Caddyfile config file example in the swarmstack/caddy directory linked above.
    Bjorn S
    @Bjeaurn
    Yeah, I was playing around with Nginx.
    I had some nginx-proxy going that uses dockergen to generate the correct values in an nginx conf.
    This is a nice idea. But my main issue is that all the stacks are now exposed directly outside by default.
    So I think I'm not using networks in the right way.
    Like if I have StackA (or AppA, doesn't matter) and it has 3 services including a webpage on :80, so AppA:80. This works fine.
    And when I go to whatever domain, it'll serve me that app.
    But when I try to deploy another stack next to it, say AppB, that uses a different domain with a webpage (also on port 80 for simplicity), it starts to complain about already having a port 80 exposed, so it can't deploy.
    And when I change it to another port, it works from both domains. So that's what you want the reverse proxy for.
    But I thought that you just expose your services on port 80 and let Docker Swarm assign a port that you can use to do the internal reverse proxying.
    But I can't seem to get that to work. @mh720
    Mike Holloway
    @mh720
    As you surmised, only one service can expose a given port to the outside world across a swarm cluster. The same is true on a non-swarm Docker host, or even on a non-Docker host: only one program can bind to a given port. Docker doesn't assign or manage random/ephemeral ports; how would your users know what to connect to?
    Bjorn S
    @Bjeaurn

    Yeah of course, that makes sense. But if you're running multiple applications next to each other, something Docker (swarm) does allow and takes care of, I'd say it's good practice to have default web-facing apps expose over port 80.

    Let me rephrase: if I want to make all my apps expose port 80 by default on the front-facing part, how would I do this in a docker-compose file that swarm accepts, so I can then start to figure out how to route inside that specific network where port 80 is exposed for that application? Should I be removing/disabling the default joining of the ingress or overlay network?

    I'm just trying to grasp what a good configuration that does this looks like; the exact solution for the reverse proxy is something I'll figure out, and I have some experience with that already. The main issue is that I can't get two apps in the swarm to play nice next to each other and not bother each other.
    Bjorn S
    @Bjeaurn
    So basically, ports: 0:80 automatically assigns it. Got that figured out now.
    But not there yet.
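    For the record, a sketch of what that looks like in a stack file (the image is just an example); publishing host port 0 asks swarm to pick a port from its dynamic range, which is where published ports like 30000 and 30001 come from:
    ```yaml
    version: '3.3'
    services:
      web:
        image: nginx:latest
        ports:
          - "0:80"   # host port 0 = let swarm auto-assign the published port
    ```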
    Mike Holloway
    @mh720
    I learn something new every day. Good practice would be to secure your users' traffic with HTTPS, so redirect their feeble http:// connections to https://, and as long as you don't define "ports:" in the Docker compose file, the services won't be exposed web-facing at all. Then use a proxy (or, for best practice in more secure environments, more than one layer of proxies) in front of the applications, which only expose their ports inside the encrypted overlay network that only the proxy can route to. You have to turn on that encryption (off by default, why?), but that's just one more line in your compose file; see 'encrypted: "true"' within https://github.com/swarmstack/swarmstack/blob/master/docker-compose.yml, and the caddy/Caddyfile in the same project has all the examples you should need for the above.
    HTTPS/TLS isn't mandatory, so it's up to you whether you want to do the redirection or not. If not, then just do all your reverse-proxying based on URL to the back-end container apps on the proxy's port 80 and you are done.
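    That encryption line sits on the network definition; a minimal sketch (the network name is arbitrary):
    ```yaml
    networks:
      net:
        driver: overlay
        driver_opts:
          encrypted: "true"   # encrypt container traffic on this overlay (off by default)
    ```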
    Mike Holloway
    @mh720
    Copy the caddy: service from the above docker-compose.yml, strip out the labels: and volumes: from it, and expose only its ports: 80:80 to forward the externally exposed swarm service port 80 to caddy listening on the internal overlay network on port 80 (or on any port you want, for that matter). Then see the bottom of https://github.com/swarmstack/swarmstack/blob/master/caddy/Caddyfile for examples of adding multiple 'proxy' stanzas within your ':80 {' block to proxy different URLs to different containers' internal ports.
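    Sketching where that ends up (appA and appB are hypothetical swarm service names reachable over the shared overlay network; the proxy directive is the Caddy v1 syntax used in the linked Caddyfile):
    ```
    :80 {
        proxy /appA http://appA:80 {
            transparent
        }
        proxy /appB http://appB:8080 {
            transparent
        }
    }
    ```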
    Bjorn S
    @Bjeaurn

    Well that's exactly what I'm trying to do @mh720, I'm just not seeing the results I expect.

    Basically, I have multiple apps; we used this analogy already. What I'm seeing right now is that all exposed ports are available at the docker swarm loadbalancer directly, but some apps only need to be available from inside that stack's network. The apps that I do want to expose, I would expect to expose at port 80. I don't mind so much doing the automatic port assignment, but what I would prefer, and what I thought docker swarm would do for me, is that you get internal networks (like private IPs?) that you can use to route traffic to.

    So I do have a bunch of networks available in my list, 10.0.3.1, 10.0.4.1, etc. as the gateway machines. But when I ssh into the manager server and curl 10.0.3.1:80, I get nothing.

    and this is where I'm at a loss.
    cause when I do curl 0.0.0.0:30000 I get appA, and with curl 0.0.0.0:30001 I get appB
    which is a step in the right direction, but not what I was going for.
    30000 and 30001 are auto-assigned ports in that sense
    so this is where I figured I'm not understanding the docker swarm networks and/or am making a mistake in how I'm setting it all up. By now, networks are entirely default in the docker-compose.yml files I use to set up these stacks.
    Mike Holloway
    @mh720
    In your docker-compose stack file, make sure you are attaching the service to a network (networks: - net, as in the docker-compose.yml example above), and don't define any ports: under that service. The containers will come up and bind to an ephemeral internal IP address on whatever ports they care to bind on, but those ports won't be exposed to the outside world.
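    A minimal sketch of that shape (service and network names are illustrative):
    ```yaml
    version: '3.3'
    services:
      appA:
        image: nginx:latest
        networks:
          - net    # attached to the overlay network, reachable there by service name
        # no ports: section, so nothing is published to the outside world
    networks:
      net:
        driver: overlay
    ```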
    Bjorn S
    @Bjeaurn
    Hmmm ok.
    What is the go-to way to share snippets here? Just copy and paste between ``` ?
    Bjorn S
    @Bjeaurn
    version: '3.3'
    services:
      nginx:
        image: nginx:latest
        environment:
          VIRTUAL_HOST: example.com
        ports:
          - 30001:80
        networks:
          - default
        logging:
          driver: json-file
    networks:
      default:
        driver: overlay
    This is my docker-compose.yml as it gets set up after I load in my stuff. It basically adds the default network in there for me
    the 30001 can be considered 0
    what you're saying is I shouldn't bind ports, and run the default network which is already in overlay mode.
    Mike Holloway
    @mh720
    yup
    Bjorn S
    @Bjeaurn
    I think this will make the nginx container unavailable, as it's not exposing any ports
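    For completeness, a hedged sketch of the end state being described: only the proxy publishes a port, and it reaches nginx over the overlay network by service name, so nginx stays reachable from inside the network even with no ports: of its own (the caddy image tag is illustrative; its routing lives in the Caddyfile, per the swarmstack examples):
    ```yaml
    version: '3.3'
    services:
      caddy:
        image: abiosoft/caddy:latest   # illustrative image; swarmstack pins its own
        ports:
          - "80:80"                    # the only published port in the stack
        networks:
          - net
      nginx:
        image: nginx:latest
        networks:
          - net    # no ports:; caddy reaches it as http://nginx:80 via Docker DNS
    networks:
      net:
        driver: overlay
        driver_opts:
          encrypted: "true"
    ```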