    FaizalKhan
    @smfaizalkhan

    Hello All,
    I have a Docker swarm running on two different hosts: the leader on Ubuntu and a worker on Windows.
    With docker network ls I can see the network listed:

    C:\Users\Faizal>docker network ls
    NETWORK ID     NAME              DRIVER    SCOPE
    18887aba757f   bridge            bridge    local
    00df062ded65   docker_default    bridge    local
    1e42220f0b70   docker_gwbridge   bridge    local
    90c53993c421   host              host      local
    udozhr1kppne   ingress           overlay   swarm
    9e08a2fd21fe   none              null      local
    rtcs20mrhmnt   overnet           overlay   swarm

    docker network inspect shows the service myservice:

    docker network inspect overnet
    [
        {
            "Name": "overnet",
            "Id": "rtcs20mrhmntlmwh02upgk5f2",
            "Created": "2019-01-02T06:41:23.5165731Z",
            "Scope": "swarm",
            "Driver": "overlay",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "10.0.0.0/24",
                        "Gateway": "10.0.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Ingress": false,
            "ConfigFrom": {
                "Network": ""
            },
            "ConfigOnly": false,
            "Containers": {
                "1e9a962147b2443d41537d98fd91a020d3aa090bf1ea2bacafb7ceacce9df99c": {
                    "Name": "myservice.2.ydxmr7tb8jlzu4g6jwvnc7ryr",
                    "EndpointID": "0bd35007868ff220064ddf3fa19f9e36708853b45bc6234635b3f11b45f9660e",
                    "MacAddress": "02:42:0a:00:00:24",
                    "IPv4Address": "10.0.0.36/24",
                    "IPv6Address": ""
                },
                "lb-overnet": {
                    "Name": "overnet-endpoint",
                    "EndpointID": "7ed5fc49f3846b6b80a30a48b43170f7225b2a2210e99ee2219ac14fed6a6182",
                    "MacAddress": "02:42:0a:00:00:25",
                    "IPv4Address": "10.0.0.37/24",
                    "IPv6Address": ""
                }
            },
            "Options": {
                "com.docker.network.driver.overlay.vxlanid_list": "4097"
            },
            "Labels": {},
            "Peers": [
                {
                    "Name": "eb0e60ee3298",
                    "IP": "192.168.0.8"
                },
                {
                    "Name": "8b49e41d1df7",
                    "IP": "192.168.0.7"
                }
            ]
        }
    ]

    Now I exec into a container and try to ping another container on the Ubuntu host, but I get no replies; all packets are lost:

    ping 10.0.0.36
    PING 10.0.0.36 (10.0.0.36): 56 data bytes
    64 bytes from 10.0.0.36: seq=0 ttl=64 time=0.068 ms
    64 bytes from 10.0.0.36: seq=1 ttl=64 time=0.176 ms
    64 bytes from 10.0.0.36: seq=2 ttl=64 time=0.185 ms
    64 bytes from 10.0.0.36: seq=3 ttl=64 time=0.176 ms
    ^C
    --- 10.0.0.36 ping statistics ---
    4 packets transmitted, 4 packets received, 0% packet loss
    round-trip min/avg/max = 0.068/0.151/0.185 ms
    / # ping 10.0.0.35
    PING 10.0.0.35 (10.0.0.35): 56 data bytes
    ^C
    --- 10.0.0.35 ping statistics ---
    32 packets transmitted, 0 packets received, 100% packet loss

    Any help appreciated.
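    [Editor's note] The symptom above (both peers listed in docker network inspect, but no replies across hosts) is commonly caused by the swarm data-path ports being blocked between the hosts. A sketch of the ports to allow between all nodes (ufw is shown only as an example; the Windows node needs equivalent firewall rules):

    ```shell
    # Swarm control and data plane ports (run on each Linux node):
    sudo ufw allow 2377/tcp   # cluster management (manager only)
    sudo ufw allow 7946/tcp   # node gossip
    sudo ufw allow 7946/udp   # node gossip
    sudo ufw allow 4789/udp   # VXLAN overlay data path
    ```

    In particular, if 4789/udp is filtered anywhere between the two machines, the overlay control plane still looks healthy while pings across it silently fail.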
    vishnuavenudev
    @vishnuavenudev

    RUN ln -sf /dev/stdout /var/log/myapp/services-info.log \
    && ln -sf /dev/stderr /var/log/myapp/services-error.log

    I added this to my Dockerfile and am running it on AWS ECS.
    I get the logs in CloudWatch, but I cannot tail them inside the container:

    tailf /var/log/myapp/services-info.log
    tailf /dev/stdout

    Neither command works; nothing is shown.
    The base image for this Docker image is Ubuntu.
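    [Editor's note] The reason the tail shows nothing: /dev/stdout resolves to the stdout of the process that opens it, not the stdout of the container's PID 1, which is the stream Docker (and CloudWatch) captures. A minimal sketch of that behavior, runnable in any Linux shell:

    ```shell
    # A symlink to /dev/stdout writes to the *opening* process's stdout,
    # not to any fixed log file:
    link=$(mktemp -u)
    ln -sf /dev/stdout "$link"
    echo "hello" > "$link"   # appears on this shell's stdout; no file grows
    rm -f "$link"
    ```

    Inside the container, the main process's captured stream can usually be read directly with tail -f /proc/1/fd/1. Note also that tailf is deprecated and removed from recent util-linux; use tail -f instead.
    
    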

    FaizalKhan
    @smfaizalkhan

    Hi All,
    I have an Ubuntu and a Win10 machine and am trying to join them into a swarm.

    On Win10 I ran docker swarm init, and it gave me:

    docker swarm join --token SWMTKN-1-2x0d8gcwdajbjt78xtgnfdgrlggd2yp9sh6t3klcx48bszl7nk-99on3a92ej4pdhgbglfmdychy 192.168.65.3:2377

    I copied this to the Ubuntu machine, and the response was:

    faizal@faizal-K42JA:~/hypledger$ docker swarm join --token SWMTKN-1-5m96ou268r2csm89faroxa4313crw484px8kzyvkthq9ggdixc-8xqlzsnp1w41khm3i84ek5sl8 192.168.0.8:2377
    Error response from daemon: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 192.168.0.8:2377: connect: connection refused"

    I couldn't even telnet to it from my Win10 machine, or ping it.

    How do I make Win10 the manager and Ubuntu a worker using Docker Swarm?
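    [Editor's note] The address in the first join command (192.168.65.3) is Docker Desktop's internal VM network, which other machines cannot reach. Initializing the swarm with an address the Ubuntu box can actually reach is the usual first step; a sketch (192.168.0.8 is the LAN address seen in this thread, substitute your own):

    ```shell
    # On the Windows manager: advertise a LAN-reachable address
    docker swarm init --advertise-addr 192.168.0.8

    # Then run the printed "docker swarm join ..." command on the Ubuntu
    # worker, after verifying 2377/tcp is reachable, e.g. with telnet.
    ```

    Be aware that Docker Desktop for Windows runs the daemon inside a VM, so making port 2377 reachable from the LAN may require additional port forwarding; a Linux manager avoids this complication.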

    mario947
    @mario947
    @smfaizalkhan The docker swarm init command provided you with the command you have to run on the swarm nodes to join the cluster. In your sample it was docker swarm join --token SWMTKN-1-2x0d8gcwdajbjt78xtgnfdgrlggd2yp9sh6t3klcx48bszl7nk-99on3a92ej4pdhgbglfmdychy 192.168.65.3:2377. But I see a different token and IP were used later in your sample. Could the problem be there?
    FaizalKhan
    @smfaizalkhan
    @mario947 Sorry, that was a typo.
    (image attachment: image.png)
    Adam Jorgensen
    @OOPMan
    Anyone in here have any experience running a Vault cluster in Docker swarm?
    SHUB9914
    @SHUB9914
    Hey everyone, I am new to Docker and facing an issue deploying with Docker swarm.
    I have a docker-compose file in which I have defined a few services. Some of them pull their image directly from Docker Hub, and some I build locally. When I deploy with Docker swarm, the services whose images come from Docker Hub start successfully on my other nodes, but the images I built myself run only on my manager node, because that is the only place they exist. Can anyone help with how to distribute or run my locally built images on the other nodes as well using Docker swarm, without pushing them to my own repository?
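    [Editor's note] Swarm does not copy locally built images between nodes; each node pulls the image from a registry. Short of pushing to Docker Hub, a common pattern is to run a private registry inside the swarm and push there. A sketch (service and image names are illustrative):

    ```shell
    # Run a throwaway registry as a swarm service
    docker service create --name registry \
      --publish published=5000,target=5000 registry:2

    # Tag and push the locally built image, then reference that
    # registry address in the stack file's image: field
    docker tag myapp:latest 127.0.0.1:5000/myapp:latest
    docker push 127.0.0.1:5000/myapp:latest
    docker stack deploy -c docker-compose.yml mystack
    ```

    With image: 127.0.0.1:5000/myapp:latest in the compose file, every node pulls from the in-swarm registry instead of needing the image preloaded.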
    Swastik Roy
    @royswastik
    Hi, what's the correct way to update a secret in swarm?
    I know how to rotate it, but the secret name has to change after the update. Is there a proper way to update a secret and keep the same name afterwards?
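    [Editor's note] For the record: swarm secrets are immutable, so the usual pattern is to version the secret name but keep the target path inside the container stable. A sketch (secret and service names are illustrative):

    ```shell
    # Create the new version under a new name
    echo -n 'new-value' | docker secret create db_pass_v2 -

    # Swap it into the service while keeping the in-container path the same
    docker service update \
      --secret-rm db_pass_v1 \
      --secret-add source=db_pass_v2,target=db_pass \
      my_service

    # Once the rollout finishes, the old secret can be removed
    docker secret rm db_pass_v1
    ```

    The application keeps reading /run/secrets/db_pass either way, so only the swarm-side name changes.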
    Swastik Roy
    @royswastik
    Got the answer
    ikarapanca
    @tosolveit

    Hi everyone,
    I have a very simple Python script, not a web app; it runs with some arguments and returns something. I run it with "docker run --rm -it my_image:v1 python mydir/app.py", and I see that the host folder I added as a volume can't be found in the container.

    To test it, I created a Python hello-world Flask app and ran it with docker-compose up; in this case the volumes work fine, and changes on the host machine are reflected in the running container. But this doesn't look like a good solution to me.

    Is there a way to run the app with "docker run --rm -it my_image:v1 python mydir/app.py" and use volumes, so that if I change a file on the host, python mydir/app.py will consume the right content?

    I worked on this for hours: tried adding the volume on the fly with the -v parameter, used docker-compose, tried a Dockerfile with a volume declaration, etc.

    I'd appreciate it if you could at least share some docs, blogs, or ideas.

    Regards!
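    [Editor's note] Plain docker run supports the same bind mounts compose uses; the usual stumbling block is that -v needs an absolute host path. A sketch (the image name and script path are from the question; the in-container mount point is an assumption):

    ```shell
    # Bind-mount the host directory over the code path in the image,
    # so edits on the host are seen by the script at run time:
    docker run --rm -it \
      -v "$(pwd)/mydir:/app/mydir" \
      my_image:v1 python /app/mydir/app.py
    ```

    Adjust /app to wherever the image's WORKDIR puts the code. Note that a relative or misspelled host path does not fail; Docker silently creates an empty directory and mounts that instead, which matches the "folder can't be found" symptom.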

    CharcoGreen
    @CharcoGreen
    Hi all, I have a question: for production, which is better, a volume or a bind mount?
    SAMEER KUMAR
    @sameerkasi200x
    @CharcoGreen what is your use case?
    Mike BRIGHT
    @mjbright
    Hi @tosolveit, sounds like it's perfectly doable with docker run. Can you share your working docker-compose? It should be easy to determine the -v options from that.
    Pavol Noha
    @nohaapav
    Hey folks, brand new mobile-friendly 1.6 release is out for a while. Check it out. https://swarmpit.io
    CharcoGreen
    @CharcoGreen
    Hi! I have this case: one service with replicas = 1 in a swarm with two nodes. I need this service to deploy on node-1, but if node-1 is down it needs to move to node-2, and only in that case. Is that possible? I tried with affinity and it doesn't work.
    inayath123
    @inayath123
    Hi! I have deployed a node server in a container and it is up and running, but when I create replicas using a docker-compose file, the replicas don't run; it shows 0/5. Anybody have an idea?
    Mike Holloway
    @mh720
    Do your swarm node(s) appear active in "docker node ls"? Any firewall running on your swarm node(s)? @inayath123
    ecaepp
    @ecaepp
    @inayath123 have you checked the service logs? docker service logs <service-name>
    ecaepp
    @ecaepp
    @CharcoGreen To answer your question docker documentation recommends volume mounts over bind mounts. Docker Service Volumes
    ecaepp
    @ecaepp
    @CharcoGreen As for your use case, I would like to point out that Docker by default creates local volumes. So let's say a service is created on node-1 with a volume mount; if node-1 goes down and the service is migrated to node-2, the service will create a new volume on node-2 that will not have the same data as the local volume created on node-1.
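    [Editor's note] To make the point concrete, a minimal stack-file sketch of a named volume (service and volume names are illustrative); the volume still lives only on whichever node runs the task unless a multi-host volume driver is used:

    ```yaml
    version: "3.7"
    services:
      db:
        image: postgres:11
        volumes:
          - db_data:/var/lib/postgresql/data
    volumes:
      db_data:   # local driver by default, i.e. node-local storage
    ```

    For data that must survive a task moving between nodes, a shared volume driver (e.g. an NFS-backed volume) or an external database is needed.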
    SOBNGWI
    @sobngwi
    Please, how do I update a link (depends_on) for a service that has already started?
    Mike Holloway
    @mh720
    @sobngwi https://stackoverflow.com/questions/35832095/difference-between-links-and-depends-on-in-docker-compose-yml if the service is already up, I don't see how changing its run-time depends_on would matter, as it's already up.
    Mike Holloway
    @mh720
    Just shared this in another community; relevant for Docker and Docker swarm users alike: I've done quite a bit of work tuning network sysctls in general for https://github.com/swarmstack/swarmstack users. See this commented ansible playbook for tuning up the default networking sysctls, which are generally set to a lowest common denominator by default to support 1998-era processors and hardware: https://github.com/swarmstack/swarmstack/blob/master/ansible/playbooks/includes/tasks/sysctl_el7.yml
    SOBNGWI
    @sobngwi
    @mmikirtumov The use case is to dynamically update or change a dependency. You have component A, which depends on component B or C. You started A with B. Two hours later you want to update A to depend on C. B and C are database components. And we have not used a docker-compose file.
    Jim Northrup
    @jnorthrup
    Hi, is there a FAQ for Docker swarm + MySQL? I'm hours into bad guesswork and SO articles and still have not been able to reach MySQL inside a swarm.
    The only access that seems to reach MySQL is when the script randomizes the password, but then I have to parse JSON from the docker logs to extract the password from init.
    Jim Northrup
    @jnorthrup
    Is there anything a little more elegant than a) relying on the random password generator and b) plucking it with the following?

    pw=( $(docker logs $(docker network inspect kbrcms_default | grep kbrcms_db | cut -f4 -d'"') 2>/dev/null | grep 'GENERATED ROOT PASSWORD' | cut -f 2 -d:) )
    set -x; sed -i'.bak' s,DB_PASSWORD='.*'$,DB_PASSWORD=${pw}, kbr_php/.env; set +x
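    [Editor's note] The fragile part is scraping the generated password out of the logs. The extraction step itself can be made less brittle than the grep/cut chain; a sketch against a sample line (the line format is assumed from the mysql image's init output, for illustration only):

    ```shell
    # Sample of the line the mysql entrypoint prints when
    # MYSQL_RANDOM_ROOT_PASSWORD is set (assumed format):
    log_line='[Entrypoint]: GENERATED ROOT PASSWORD: s3cretPw'

    # Strip everything up to and including the marker:
    pw=${log_line##*GENERATED ROOT PASSWORD: }
    echo "$pw"   # -> s3cretPw
    ```

    The more elegant fix is usually to skip the random generator entirely and set the password yourself via the mysql image's MYSQL_ROOT_PASSWORD environment variable (or MYSQL_ROOT_PASSWORD_FILE pointing at a swarm secret), so nothing needs to be parsed at all.
    
    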
    Bjorn S
    @Bjeaurn
    Hello! Anyone here have some pointers on figuring out how the swarm network operates? Steps to take to make it externally visible in a good way, random ports and all that? I've got a situation now where a single stack can expose/bind port 80, but every other stack/app needs to be using different ports. This feels off and I can't seem to figure out what the proper way is to set this up.
    Mike Holloway
    @mh720
    @Bjeaurn Check out the docker-compose syntax of https://github.com/swarmstack/swarmstack/blob/master/docker-compose.yml and search for 'ports' under the caddy service, which forwards an exposed port to an internal container port (in this stack caddy handles most external HTTPS termination, then proxies to other services on an internal encrypted network). Swarm service exposed ports are accessible from all external hosts by default (via any swarm host; the traffic is proxied automatically to the container's host), and they are filtered via the firewall FORWARD chains, not the standard INPUT chains. So if you wish to limit traffic on exposed service ports to only some IPs, see https://github.com/swarmstack/swarmstack, which includes a stand-alone firewall-management ansible playbook that helps lock down Docker service port access to specific IPs. If you don't want to use the ansible playbook, look specifically at the iptables rules here: https://github.com/swarmstack/swarmstack/blob/master/ansible/roles/swarmstack/files/etc/swarmstack_fw/rules/docker.rules
    Bjorn S
    @Bjeaurn
    Thanks @mh720 I'll take a look into these pointers!
    Mike Holloway
    @mh720
    You could even use caddy to redirect anything on 80 to 443/https, and use the same caddy instance as a "reverse proxy" to pass different URLs through to different containers, so that everything stays on the default https port 443 (i.e. https://1.2.3.4/appA/[..]). Your containers could each just bind to port 80 and not need to worry about TLS/https, as traffic passed to them from caddy would traverse an encrypted overlay network. And if you have one container that needs to bind to a specific port (e.g. 8080, or 443), you can reverse proxy that specific container URL to that different port; your users don't even need to know, they just see https://1.2.3.4/appB/ @Bjeaurn
    Mike Holloway
    @mh720
    I like caddy for its small binary size and automatic https, but you can use traefik, nginx, Apache, etc. to accomplish the same reverse-proxy scheme. You have a fairly complete Caddyfile config example in the swarmstack/caddy directory linked above.
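    [Editor's note] To illustrate the path-based scheme described above (Caddy v2 syntax; the hostname, paths, and upstream service names are illustrative, and the swarmstack repo's own Caddyfile differs):

    ```
    example.com {
        handle_path /appA/* {
            reverse_proxy appA:80
        }
        handle_path /appB/* {
            reverse_proxy appB:8080
        }
    }
    ```

    Each upstream name resolves via Docker's service DNS on the shared overlay network, so only caddy needs a published port.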
    Bjorn S
    @Bjeaurn
    Yeah I was playing around with Nginx.
    I had an nginx-proxy going that uses dockergen to generate the correct values in an nginx conf.
    This is a nice idea, but my main issue is that all the stacks by default are exposed directly to the outside.
    So I think I'm not using networks in the right way.
    Say I have StackA (or AppA, doesn't matter) and it has 3 services including a webpage on :80, so AppA:80. This works fine,
    and when I go to whatever domain, it'll serve me that app.
    But when I try to deploy another stack next to it, say AppB, that uses a different domain with a webpage (also on port 80 for simplicity), it starts to complain about port 80 already being exposed, so it can't deploy.
    And when I change it to another port, it works from both domains. So that's what you want the reverse proxy for.
    But I thought you could just expose your services on port 80 and let Docker Swarm assign a host port that you can use to do the internal reverse proxying.
    But I can't seem to get that to work. @mh720
    Mike Holloway
    @mh720
    As you surmised, only one service can expose a given port to the outside world across a swarm cluster. The same is true on a non-swarm Docker host, or even just on a non-Docker host only one program can bind to a port. Docker doesn’t assign or manage random/ephemeral ports; how would your users know what to connect to?
    Bjorn S
    @Bjeaurn

    Yeah of course, that makes sense. But you're running multiple applications next to each other, something Docker (swarm) does allow and take care of. I'd say it's good practice to have default web-facing apps expose port 80.

    Let me rephrase: if I want all my apps to expose port 80 by default on the front-facing part, how would I do this in a docker-compose file that swarm accepts, so I can then start to figure out how to route inside that specific network where port 80 is exposed for that application? Should I be removing/disabling the default joining of the ingress or overlay network?

    I'm just trying to grasp what a good configuration for this looks like; the exact reverse-proxy solution is something I'll figure out and have some experience with already. The main issue is that I can't get two apps in the swarm to play nice next to each other without bothering each other.
    Bjorn S
    @Bjeaurn
    So basically, ports: 0:80 automatically assigns it. Got that figured out now.
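    [Editor's note] The pattern this thread converged on can be sketched as a stack file where only the proxy publishes a well-known port and the apps join a shared overlay network without publishing anything (all names are illustrative):

    ```yaml
    version: "3.7"
    services:
      proxy:
        image: nginx:alpine
        ports:
          - "80:80"          # the only published port in the swarm
        networks: [web]
      appA:
        image: myorg/app-a   # listens on 80 internally, publishes nothing
        networks: [web]
      appB:
        image: myorg/app-b   # also listens on 80 internally, no conflict
        networks: [web]
    networks:
      web:
        driver: overlay
    ```

    The proxy reaches appA:80 and appB:80 by service name over the overlay network, so both apps can bind the same internal port. Publishing "0:80" (or just the target port) lets Docker pick the host port instead, but users then still need a stable address, which is exactly what the reverse proxy provides.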