RUN ln -sf /dev/stdout /var/log/myapp/services-info.log \
&& ln -sf /dev/stderr /var/log/myapp/services-error.log
I added this to my Dockerfile.
I am running it on AWS ECS.
I am getting the logs in CloudWatch, but I cannot tail them inside the container.
Neither of the two log files shows anything; tailing them produces no output.
The base image for this Docker image is Ubuntu.
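With those symlinks in place, the two paths no longer store anything: writes to them go straight to the main process's stdout/stderr, which the awslogs driver ships to CloudWatch, so there is no file content left to tail from inside the container. A hedged alternative, if an on-disk copy is also wanted, is to duplicate the stream with tee (the binary name myapp is a placeholder; adjust to your image):

```dockerfile
# Instead of symlinking the log files to the container's streams, write
# a real file and mirror it to stdout, so both CloudWatch and an
# in-container `tail -f` see the output. "myapp" is a placeholder.
RUN mkdir -p /var/log/myapp
CMD ["/bin/sh", "-c", "myapp 2>&1 | tee -a /var/log/myapp/services-info.log"]
```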
I have an Ubuntu machine and a Win10 machine and am trying to join them in a swarm.
On Win10 I did a docker swarm init and it gave me:
docker swarm join --token SWMTKN-1-2x0d8gcwdajbjt78xtgnfdgrlggd2yp9sh6t3klcx48bszl7nk-99on3a92ej4pdhgbglfmdychy 192.168.65.3:2377
I copied this to the Ubuntu machine and ran it:
faizal@faizal-K42JA:~/hypledger$ docker swarm join --token SWMTKN-1-5m96ou268r2csm89faroxa4313crw484px8kzyvkthq9ggdixc-8xqlzsnp1w41khm3i84ek5sl8 192.168.0.8:2377
The response was:
Error response from daemon: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 192.168.0.8:2377: connect: connection refused"
I couldn't even telnet to it from my Win10 machine, or even ping it.
How do I make Win10 the manager and Ubuntu the worker using Docker Swarm?
The docker swarm init command provided you with the command you have to run on the swarm nodes to join the cluster. In your sample it was
docker swarm join --token SWMTKN-1-2x0d8gcwdajbjt78xtgnfdgrlggd2yp9sh6t3klcx48bszl7nk-99on3a92ej4pdhgbglfmdychy 192.168.65.3:2377
But I see that a different token and IP were used later in your sample. Could the problem be there?
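One thing worth checking, as a sketch: 192.168.65.3 is an address on Docker Desktop's internal NAT network on Windows, so other machines cannot reach it; the join command must carry an address the worker can actually dial. The commands below assume 192.168.0.8 is the Win10 box's LAN address (and note that even then, Docker Desktop's VM networking may not forward the swarm port 2377 to the LAN):

```shell
# Address the Ubuntu worker will dial (assumption: the Win10 LAN IP):
MANAGER_IP=192.168.0.8
echo "workers must be able to reach ${MANAGER_IP}:2377"

# On the Win10 manager, re-create the swarm pinned to that address:
#   docker swarm leave --force
#   docker swarm init --advertise-addr ${MANAGER_IP}
# On the Ubuntu worker, verify the port is open before joining:
#   nc -zv ${MANAGER_IP} 2377
```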
I have a very simple Python script, not a web app; it runs based on some arguments and returns something. I run it with "docker run --rm -it my_image:v1 python mydir/app.py", and I see that the host folder I added as a volume can't be found in the container.
To test it, I created a Python hello-world Flask app and ran it with docker-compose up; in this case the volumes work fine, and changes on the host machine are reflected in the running container. But this doesn't look like a good solution to me.
Is there a way to run the app with "docker run --rm -it my_image:v1 python mydir/app.py" and use volumes, so that if I change a file on the host, python mydir/app.py will consume the right content?
I worked on this for hours: I tried adding the volume on the fly with the -v parameter, used docker-compose, tried a Dockerfile with only a VOLUME declaration, etc.
I'd appreciate it if you could at least share some docs, blogs, or ideas.
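A few things commonly cause this, sketched below: a VOLUME line in the Dockerfile never mounts a host directory (only -v/--mount at run time does), the host path in -v must be absolute, and the flag as pasted in the question contains an em dash instead of a plain --rm. The directory layout here (./mydir on the host, /app in the image) is an assumption; adjust it to yours:

```shell
# Build an absolute host path for the bind mount (a bare relative name
# in -v is interpreted as a named volume, not a host directory):
HOST_DIR="$(pwd)/mydir"
echo "mount spec: ${HOST_DIR}:/app/mydir"

# Then run with a plain double-dash --rm and the explicit mount:
#   docker run --rm -it -v "${HOST_DIR}:/app/mydir" -w /app my_image:v1 python mydir/app.py
```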
Bind mounts and Docker service volumes are local to the node they are created on. If a service task runs on node-1 with a volume mount, and node-1 goes down, the service is rescheduled to node-2; the service will create a new volume on node-2 that will not have the same data as the local volume created on node-1.
For routing, expose your services on port 80 and let Docker Swarm assign a published port that you can use to do the internal reverse proxying.
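Since those volumes are node-local, one workaround in a stack file is to pin the service to the node that holds the data with a placement constraint. A minimal sketch; the service name, volume name, and hostname are all made up:

```yaml
version: "3.8"

services:
  app:
    image: my_image:v1
    volumes:
      - app-data:/data
    deploy:
      placement:
        constraints:
          - node.hostname == node-1   # keep the task where its data lives

volumes:
  app-data:   # local driver: a separate, empty copy would be created on any other node
```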
Yeah, of course, that makes sense. But you can be running multiple applications next to each other, something Docker Swarm does allow and take care of. I'd say it's good practice to have web-facing apps expose port 80 by default.
Let me rephrase: if I want all my apps to expose port 80 by default on the front-facing part, how would I do this in a docker-compose file that Swarm accepts, so that I can then figure out how to route inside the specific network where port 80 is exposed for that application? Should I be removing/disabling the default joining of the ingress or overlay network?
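One common shape for this, sketched under assumptions (all image and service names are invented): only the reverse proxy publishes port 80 on the host, while each app exposes its port 80 solely on a shared overlay network, where the proxy reaches it by service name; the apps never publish through the ingress at all:

```yaml
version: "3.8"

services:
  proxy:
    image: nginx:alpine          # the only service that publishes a host port
    ports:
      - "80:80"
    networks:
      - web

  app1:
    image: my-app1:latest        # reachable from the proxy as http://app1:80
    expose:
      - "80"
    networks:
      - web

  app2:
    image: my-app2:latest        # reachable from the proxy as http://app2:80
    expose:
      - "80"
    networks:
      - web

networks:
  web:
    driver: overlay              # apps talk only on this network, not the ingress
```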