I have a very simple Python script, not a web app; it runs based on some arguments and returns something. I run it with "docker run --rm -it my_image:v1 python mydir/app.py", and I see that the host folder I added as a volume can't be found in the container.
To test it, I created a Python hello-world Flask app and ran it with "docker-compose up". In this case volumes work fine: changes on the host machine are reflected in the running container. But this doesn't look like a good solution to me.
Is there a way to run the app with "docker run --rm -it my_image:v1 python mydir/app.py" and use volumes, so that if I change a file on the host, "python mydir/app.py" will consume the right content?
I worked for hours on this: tried adding a volume on the fly with the -v parameter, used Docker Compose, a Dockerfile with only a volume declaration, etc.
I'd appreciate it if you could at least share some docs, blogs, or ideas.
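For reference, a bind mount with "docker run -v" requires an absolute host path (a value without a leading "/" is interpreted as a named volume, and a VOLUME line in a Dockerfile only creates an anonymous volume; it never mounts a host folder). Here is a sketch of the same invocation with a bind mount; the image name and script path are from the question, while the container paths are assumed:

```shell
# Bind-mount the host's ./mydir into the container at /app/mydir.
# $(pwd) makes the host path absolute, which -v requires for bind mounts.
docker run --rm -it \
  -v "$(pwd)/mydir:/app/mydir" \
  -w /app \
  my_image:v1 python mydir/app.py
```

With this, edits to mydir/app.py on the host are visible inside the container on the next run.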
Use bind mounts. See: Docker Service Volumes.
If you run a service on node-1 with a volume mount, and node-1 goes down, the service is rescheduled to node-2; the service will create a new volume on node-2 that will not have the same data as the local volume created on node-1.
Don't expose your services to port 80; let Docker Swarm assign a port, which you can use to do the internal reverse proxying.
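If the suggestion is to let Swarm assign the published port, the short ports syntax in a stack file does exactly that. A minimal sketch with placeholder names:

```yaml
# docker-stack.yml sketch (service and image names are placeholders)
version: "3.8"
services:
  web:
    image: my_image:v1
    ports:
      - "80"   # publish container port 80; Swarm assigns the host port
```

"docker service ls" then shows which host port Swarm picked for the service.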
Yeah of course, that makes sense. But if you're running multiple applications next to each other, which is something Docker (Swarm) does allow and take care of, I'd say it's good practice to have default web-facing apps expose over port 80.
Let me rephrase: if I want all my apps to expose port 80 by default on the front-facing part, how would I do this in a docker-compose file that Swarm accepts, so I can then start to figure out how to route inside that specific network where port 80 is exposed for that application? Should I be removing/disabling the default joining of the ingress or overlay network?
Well that's exactly what I'm trying to do @mh720, I'm just not seeing the results I expect.
Basically, I have multiple apps; we used this analogy already. What I'm seeing right now is that all exposed ports are available directly at the Docker Swarm load balancer, but some apps only need to be reachable from inside that stack's network. For the apps that I do want to expose, I would expect to expose the web apps at port 80. I don't mind the automatic port assignment so much, but what I would prefer, and what I thought Docker Swarm would do for me, is that you get internal networks (like private IPs?) that you can use to route traffic to.
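One way to get that separation (all names here are illustrative, not from the thread): put the stack's services on their own overlay network and publish only the front-facing service, so internal apps are reachable by service name inside that network and nowhere else:

```yaml
# docker-stack.yml sketch (all names hypothetical)
version: "3.8"
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"          # the only service published on the ingress network
    networks: [appnet]
  app:
    image: my_image:v1   # no ports: reachable only inside appnet
    networks: [appnet]
networks:
  appnet:
    driver: overlay      # internal service-to-service network
```

Inside appnet, the proxy can reach the app by its service DNS name, e.g. http://app:80, so the routing happens over the overlay network rather than published ports.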
So I do have a bunch of networks in my list, 10.0.3.1, 10.0.4.1, etc. as the gateway addresses. But when I ssh into the manager server and run "curl 10.0.3.1:80", I get nothing.
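For what it's worth, the overlay network's gateway IP generally isn't where a service answers; from a container attached to the same network you'd normally reach the service's virtual IP via its DNS name instead. A sketch, assuming a stack named mystack with a service app on a network appnet (all names hypothetical):

```shell
# Swarm's embedded DNS resolves the service name to its virtual IP,
# so curl the service by name from a container inside the network:
docker run --rm --network mystack_appnet curlimages/curl \
  -s http://app:80/
```

If this returns the app's response while curling the gateway IP does not, the service is healthy and the issue is only how it's being addressed.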