Well, that's exactly what I'm trying to do @mh720, I'm just not seeing the results I expect.
Basically, I have multiple apps; we used this analogy already. What I'm seeing right now is that all exposed ports are available directly on the docker swarm load balancer, but some apps should only be reachable from inside that stack's network. For the apps that I do want to expose, I would expect to expose the web apps at port 80. I don't mind the automatic port assignment so much, but what I would prefer, and what I thought docker swarm would do for me, is that you get internal networks (like private IPs?) that you can use to route traffic to.
So I do have a bunch of networks in my list, with 10.0.3.1, 10.0.4.1 etc. as the gateway addresses. But when I ssh into the manager server:

```shell
curl 10.0.3.1:80    # I get nothing
curl 0.0.0.0:30000  # I get appA
curl 0.0.0.0:30001  # I get appB
```
Here is one of the `docker-compose.yml` files I use to set up these stacks:
```yaml
version: '3.3'
services:
  nginx:
    image: nginx:latest
    environment:
      VIRTUAL_HOST: example.com
    ports:
      - 30001:80
    networks:
      - default
    logging:
      driver: json-file
networks:
  default:
    driver: overlay
```
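(For comparison, this is roughly what I mean by a stack-internal service: if I drop the `ports:` section entirely, nothing should be published on the routing mesh, and the service should only be reachable by name from other services on the same overlay network. The service name here is just a placeholder of mine.)

```yaml
version: '3.3'
services:
  appA:
    image: nginx:latest
    # no "ports:" section, so nothing is published swarm-wide;
    # other services on "default" can still reach it at http://appA:80
    networks:
      - default
networks:
  default:
    driver: overlay
```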
This is my `docker-compose.yml` as it gets set up after I load in my stuff. It basically adds the `default` network in there for me (with 0 `ports` published), and runs on the `default` network, which is already in overlay mode.
When I change it to `ports: - 80:80`, the app becomes available to the entire world and to the whole docker swarm, instead of just inside this network that I was expecting to be routing to.
I understand that with any `docker-compose.yml` containing a web app, you may assume a port 80 is bound somewhere. However, if there's already another stack that uses it, this won't work. This seems weird to me, as I understood that you'd have to route to that specific virtual network in your swarm (hence the reverse proxy). But I wasn't expecting a port to become swarm-wide when you expose it.
I would have expected the networks `test_default` and `test2_default` (being appA and appB) to each have their own port 80 assignment in there.
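To make concrete what I'm aiming for, here is a rough sketch of the reverse-proxy setup as I understand it (the network and service names are my own): a shared overlay network created up front with `docker network create --driver overlay proxy`, one proxy stack that is the only thing publishing port 80, and app stacks that attach to that network without publishing anything.

```yaml
# proxy stack: the only stack that publishes a port
version: '3.3'
services:
  proxy:
    image: nginx:latest
    ports:
      - 80:80
    networks:
      - proxy
networks:
  proxy:
    external: true  # created beforehand:
                    # docker network create --driver overlay proxy
```

```yaml
# app stack (e.g. appA): no published ports, joins the shared network
version: '3.3'
services:
  web:
    image: nginx:latest
    networks:
      - proxy    # reachable by the proxy as http://web:80
      - default  # stack-internal traffic
networks:
  proxy:
    external: true
  default:
    driver: overlay
```

If that's the intended pattern, then each app's port 80 stays private to its overlay networks and only the proxy decides what the outside world sees.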