    Mike Holloway
    @mh720
    copy the caddy: service from the above docker-compose.yml, strip out the labels: and volumes: from it, expose only its ports: 80:80 to forward the externally exposed swarm service port 80 to caddy listening on the internal overlay network on port 80 (or any port you want, for that matter), and see the bottom of https://github.com/swarmstack/swarmstack/blob/master/caddy/Caddyfile for examples of adding multiple 'proxy' stanzas within your ':80 {' block to proxy different URLs to different containers' internal ports.
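    [Editor's sketch of the above, for illustration only; the image tag and network name are assumptions, not taken from the linked files:]

    ```yaml
    # Stripped-down caddy service: labels: and volumes: removed,
    # only port 80 published to the outside world.
    version: '3.3'
    services:
      caddy:
        image: caddy:latest
        ports:
          - 80:80        # external swarm port 80 -> caddy on the overlay network
        networks:
          - net
    networks:
      net:
        driver: overlay
    ```

    and the 'proxy' stanzas mentioned would look something like this (Caddy v1 syntax; the paths and container names are invented for illustration):

    ```
    :80 {
      proxy /appa http://appa:80
      proxy /appb http://appb:8080
    }
    ```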
    Bjorn S
    @Bjeaurn

    Well that's exactly what I'm trying to do @mh720, I'm just not seeing the results I expect.

    Basically, I have multiple apps, we used this analogy already. What I'm seeing right now is that all exposed ports are available on the docker swarm load balancer directly, but some apps only need to be available from inside that stack's network. The apps that I do want to expose are web apps, and I'd expect to expose them at port 80. I don't mind the automatic port assignment so much, but what I would prefer, and what I thought docker swarm would do for me, is that you get internal networks (like private IPs?) that you can use to route traffic to.

    So I do have a bunch of networks available in my list, 10.0.3.1, 10.0.4.1 etc. as the gateway machines. But when I ssh into the manager server and curl 10.0.3.1:80 I get nothing

    and this is where I'm at a loss.
    cause when I do curl 0.0.0.0:30000 I get appA and curl 0.0.0.0:30001 I get appB
    which is a step in the right direction, but not what I was going for.
    30000 and 30001 are auto assigned ports in that sense
    so this is where I figured I'm not understanding the docker swarm networks and/or am making a mistake in how I'm setting it all up. By now, networks are entirely default in the docker-compose.yml files I use to set up these stacks.
    Mike Holloway
    @mh720
    In your docker-compose stack file, make sure you are attaching the service to a network (networks: - net, as in the docker-compose.yml example above), and don't define any ports: under that service. The containers will come up and bind to an ephemeral internal IP address on whatever ports they care to bind on, but those ports won't be exposed to the outside world.
    Bjorn S
    @Bjeaurn
    Hmmm ok.
    What is the go to way to share snippets here? Just copy and paste between ``` ?
    Mike Holloway
    @mh720
    Bjorn S
    @Bjeaurn
    version: '3.3'
    services:
      nginx:
        image: nginx:latest
        environment:
          VIRTUAL_HOST: example.com
        ports:
         - 30001:80
        networks:
         - default
        logging:
          driver: json-file
    networks:
      default:
        driver: overlay
    This is my docker-compose.yml as it gets set up after I load in my stuff. It basically adds the default network in there for me
    the 30001 can be considered 0
    what you're saying is I shouldn't bind ports, and run the default network which is already in overlay mode.
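    [Editor's note: applying that advice to the file above means dropping the ports: block entirely; same file, only the published port removed:]

    ```yaml
    version: '3.3'
    services:
      nginx:
        image: nginx:latest
        environment:
          VIRTUAL_HOST: example.com
        # no ports: here -- nginx binds only on the overlay network,
        # nothing is published on the swarm ingress
        networks:
          - default
        logging:
          driver: json-file
    networks:
      default:
        driver: overlay
    ```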
    Mike Holloway
    @mh720
    yup
    Bjorn S
    @Bjeaurn
    I think this will make the nginx container unavailable, as it's not exposing any ports
    right?
    Mike Holloway
    @mh720
    well, if you want external traffic to reach your containers eventually, SOMETHING needs to expose a port, typically this would be your nginx or proxy
    Bjorn S
    @Bjeaurn
    I'm not entirely sure, I should check this; but the default network is the one generated by the stack upon creation right? so appa_default network?
    Yeah, I would assume you would just do ports: - 80:80
    Mike Holloway
    @mh720
    yes
    Bjorn S
    @Bjeaurn
    and then from inside the swarm, you would route towards this network
    instead what happens, and this is where I'm missing what I'm doing wrong, is when I bind 80:80, it becomes available to the entire world and docker swarm, instead of just in this network that I was expecting to be routing to
    Mike Holloway
    @mh720
    Yes, usually via the container ‘name’ (https://containername:internalport)
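    [Editor's note: this is Docker's embedded DNS: any container on the same overlay network can reach a service by its service name. A minimal two-service sketch, with invented service names:]

    ```yaml
    version: '3.3'
    services:
      web:
        image: nginx:latest
        networks:
          - default
      client:
        image: curlimages/curl:latest
        # from inside this container, `curl http://web:80` resolves via
        # Docker's embedded DNS to the web service on the overlay network
        command: ["sleep", "infinity"]
        networks:
          - default
    networks:
      default:
        driver: overlay
    ```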
    Bjorn S
    @Bjeaurn
    so this specific stack configuration now affects all my other stack configurations
    ah yeah, the containername is interesting; wouldn't you expect this to be at a stack level or something?
    considering they're like isolated applications on their own?
    Mike Holloway
    @mh720
    if you expose a port (EXPOSED PORT:INTERNAL PORT), it will do exactly what you are seeing
    Bjorn S
    @Bjeaurn
    alright, so if I want to expose 80:80 but only to that internal network and then let the swarm route to that network and let that figure out what the entrypoint is; I need to setup a different network? Maybe in bridge mode instead of overlay?
    basically, if I would grab a docker-compose.yml from anywhere with a web app; you may assume a port 80 is bound somewhere. However, if there's already another stack that uses it, this won't work. This seems weird to me, as I understand that you'd have to route to that specific virtual network in your swarm (hence; the reverse proxy). But I wasn't expecting it to be swarm wide when you expose a port.
    Mike Holloway
    @mh720
    I think I'm missing something you are trying to accomplish, I must be. Are you wanting to isolate app containers into individual networks, and route between those networks (not expose them to the world)?
    Bjorn S
    @Bjeaurn
    Really appreciate you taking the time to talk me through it by the way @mh720
    Yes, basically.
    In my mind, I have multiple "fake" networks, isolated per stack
    In that network you can have containers expose port 80. so your config for appA doesn't touch your config for appB, they can both be using port 80 whatever.
    Then you either reverse proxy into the correct network (via IP or generated port? I don't know?), and let the docker swarm loadbalancer take it from there
    am I making a mistake in my mental mindmap?
    I mean, when I look at my networks topology now in swarmpit; I have test_default, test2_default (being appA and appB)
    and I would expect, they can both have their port 80 assignment in there.
    instead, they seem to give up their private network and influence the entire swarm, instead of just their own personal little stack
    am I making any sense at all? :-P
    Mike Holloway
    @mh720
    I don't have any experience there. My experience is that you need to join other stacks into the SAME network in order for them to be able to route to each other. See 'attachable: true' within https://github.com/swarmstack/swarmstack/blob/master/docker-compose.yml and then maybe https://github.com/swarmstack/errbot-docker/blob/master/docker-compose-swarmstack.yml for an example of a second stack connecting to the first one's network
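    [Editor's sketch of the pattern in those links; the stack, service, and network names here are assumptions: the first stack declares an attachable overlay network, and the second stack joins it by its qualified name as an external network.]

    ```yaml
    # Stack 1, deployed as e.g. `docker stack deploy -c stack1.yml appa`
    version: '3.3'
    services:
      api:
        image: nginx:latest
        networks:
          - net
    networks:
      net:
        driver: overlay
        attachable: true
    ```

    ```yaml
    # Stack 2: joins stack 1's network by its qualified name (appa_net)
    version: '3.3'
    services:
      consumer:
        image: alpine:latest
        command: ["sleep", "infinity"]
        networks:
          - appa_net
    networks:
      appa_net:
        external: true
    ```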
    Bjorn S
    @Bjeaurn
    hmmm ok, my main point being that I think stacks shouldn't (by default) be connecting to each other
    Surely they need to expose something to the entire swarm, and that's up to a reverse proxy to figure out then
    which is a different situation in which I have a bit more experience myself using nginx
    It's just that I can't wrap my head around separating my stacks and containers into their own isolated areas and then having a main manager handle the routing to the correct stack/container
    Mike Holloway
    @mh720
    Separate stack-named networks will only be able to connect to each other if they 'expose' something to the world; otherwise they are isolated from each other.
    Bjorn S
    @Bjeaurn
    hmm ok let's flip this around
    Ah ok that makes a bit more sense to me
    but that would mean you wouldn't expose port 80:80, but have it randomize instead so your proxy can take port 80:80
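    [Editor's sketch of that arrangement; names assumed: only the proxy publishes a port, and the app services stay unpublished, reachable from the proxy by service name.]

    ```yaml
    version: '3.3'
    services:
      proxy:
        image: nginx:latest
        ports:
          - 80:80            # the only published port in the stack
        networks:
          - default
      appa:
        image: nginx:latest  # reached as http://appa:80 from the proxy only
        networks:
          - default
    networks:
      default:
        driver: overlay
    ```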