Sameroom
@sameroom-bot
[travis] when I have to do crazy things like cherry-pick
[kagen101] @dpavlos this is the inverse of that
[dpavlos] I sync them each time by fetching the master from mantl upstream
[kagen101] FF your branches to master in the ui
[kagen101] not locally
[kagen101] Anyway it does not work as expected
[kagen101] moving on we will burn later again
[dpavlos] @kagen101 never had an issue with that. In rare cases with conflicts a rebase might be needed
[dpavlos] but it's a very standard procedure I would say
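A minimal sketch of the sync workflow being described, assuming the fork's remote is named origin and the mantl/mantl remote is named upstream:

$ git remote add upstream https://github.com/mantl/mantl.git   # one-time setup
$ git fetch upstream                                           # grab upstream master
$ git checkout master
$ git merge --ff-only upstream/master                          # fast-forward only; fails if local master diverged
$ git push origin master                                       # update the fork on GitHub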
Sameroom
@sameroom-bot
[kagen101] Okay, have an up-to-date k8s-ui feature branch, going to check what is going on with that
[kagen101] Very easy. Will leave GitHub alone for kagen101/mantl <- mantl/mantl changes...feel safer
Sameroom
@sameroom-bot
[dpavlos] Do you still have trouble with the dashboard? Keep in mind that it might take a few seconds to become available.
Sameroom
@sameroom-bot
[kagen101] Current k8-uis branch has dashboard and dns in there, but on master that is part of the k8 master tasks now...what is the plan with this? Are they not addons anymore as such?
[kagen101] Also the k8-uis branch does not find CNI now, and it seems like this was an issue before mantl/mantl#1430. But the config matches the solution there, and is better with the new calico
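A hedged way to check whether the CNI config calico is supposed to drop on the workers is actually there; the paths below are the conventional kubelet defaults, not confirmed from the branch:

$ ls /etc/cni/net.d/                   # CNI network config (e.g. a calico .conf file)
$ ls /opt/cni/bin/                     # CNI plugin binaries
$ journalctl -u kubelet | grep -i cni  # kubelet errors about missing CNI config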
Sameroom
@sameroom-bot
[dpavlos] @kagen101 for the first question, I stumbled upon this mantl/mantl#1666 yesterday while I was reviewing the open issues
[dpavlos] seems that we don't need the separate folder anymore
Sameroom
@sameroom-bot
[kagen101] Think we can add it again when we add add-ons? A very cool add-on is Prometheus with Grafana
Sameroom
@sameroom-bot
[dpavlos] yes I agree with you. Addon functionality would be good to have. I am not sure though if the already existing /addons folder can be used instead. I am testing it since yesterday with ELK deployments. @travis might have better insight on this
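If the existing /addons folder can be reused, deploying something like Prometheus with Grafana would presumably come down to applying its manifests. A sketch; the directory layout here is an assumption, not the actual repo structure:

$ kubectl apply -f addons/prometheus/   # hypothetical manifest directory
$ kubectl apply -f addons/grafana/
$ kubectl -n kube-system get pods       # verify the addon pods come up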
Sameroom
@sameroom-bot
[kagen101] In the k8-ui branch it seems like all the grafana addons and things are in here. Someone was trying to make it into an awesome default k8 stack
Sameroom
@sameroom-bot
[berle] @dpavlos How long will it take before the dashboard goes online?
[berle] Stuff like this is also the perfect example of why we picked mesos over kubernetes @ work. :P
Sameroom
@sameroom-bot
[kagen101] @travis please reset master back to 61f0d68f80c79009a1ac97c3680d91949c1251ba otherwise the tree goes weird if you merge master cause those commits are already in the tree :/
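A sketch of the reset being asked for, assuming the shared remote is named origin; force-pushing master rewrites history, so everyone else has to re-sync afterwards:

$ git checkout master
$ git reset --hard 61f0d68f80c79009a1ac97c3680d91949c1251ba   # move master back to the known-good commit
$ git push --force-with-lease origin master                   # rewrite the remote branch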
Sameroom
@sameroom-bot
[dpavlos] @berle I noticed very little delay (a few seconds) until all pods start running on the control node (for a vagrant-based deployment).
[berle] I'm gonna try to start all nodes within one zone.
[dpavlos] @berle if you keep getting 504 errors check the state of the k8s pods
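A quick way to check the pod state behind those 504s, assuming the dashboard and dns pods live in kube-system:

$ kubectl -n kube-system get pods -o wide          # look for Pending or CrashLoopBackOff
$ kubectl -n kube-system describe pod <pod-name>   # events explain why a pod isn't running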
Sameroom
@sameroom-bot
[travis] I'll do it when I get home 😀
[kagen101] 👍🏻👊🏻
Sameroom
@sameroom-bot
[kagen101] Have tried doing k8s-ui with merge, but the changes are dispersed so it takes changes from both sets. Going to cherry-pick merge everything that we want from there and then check
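A sketch of the cherry-pick approach instead of a full merge; the branch name and commit range below are placeholders:

$ git checkout -b k8s-ui-picked master                     # fresh branch off the cleaned-up master
$ git cherry-pick <first-wanted-sha>^..<last-wanted-sha>   # take only the wanted range from k8s-ui
$ git cherry-pick --continue                               # after resolving any conflicts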
Sameroom
@sameroom-bot
[berle] @dpavlos "reason: Unschedulable"
[berle] The dashboard pod's status.
[berle] Seems kube-dns isn't running either.
Sameroom
@sameroom-bot
[kagen101] Just try adding more kubeworkers. I think now that it is running we need at least 3, or depending on your instance type maybe 4. On Amazon, 2 m4.large leaves some pods pending
[kagen101] I use 3 m4.large, should probably make it 4 mesos and 4 kube
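Bumping the worker count in a Mantl AWS deployment is a terraform variable change; the variable names below are illustrative, check the sample .tf for the exact ones:

# in the aws .tf file (names are illustrative)
kubeworker_count = 4    # m4.large leaves little headroom with fewer workers
worker_count     = 4    # mesos workers

$ terraform plan        # review the new instances
$ terraform apply       # add the extra workers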
Sameroom
@sameroom-bot
[berle] Adjusting from 3 to 5 and trying again then.
Sameroom
@sameroom-bot
[travis] @kagen101 master has been rebased~2
Sameroom
@sameroom-bot
[kagen101] Sweet :)
Sameroom
@sameroom-bot
[berle] Adjusting from 3 to 5 did nothing. Trying a different machine type.
Sameroom
@sameroom-bot
[kagen101] Which pod is stuck in scheduling mode?
[kagen101] Dashboard? Can you check what the log says and the describe?
[kagen101] What is your VM VPC subnet?
[berle] Logs say nothing last time I checked. Will have to wait for redeploy before I check again.
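Commands matching what is being asked for here, assuming the dashboard runs in kube-system:

$ kubectl -n kube-system get pods | grep dashboard      # find the pod name
$ kubectl -n kube-system describe pod <dashboard-pod>   # scheduling events, e.g. "Unschedulable"
$ kubectl -n kube-system logs <dashboard-pod>           # empty or unavailable if the container never started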
Sameroom
@sameroom-bot
[berle] Describe says "no nodes available to schedule pods"
[berle] Though all 5 nodes are visible.
[dpavlos] what does kubectl get nodes say?
[berle] $ kubectl get nodes
NAME                  STATUS                     AGE
hydra-control-01      Ready,SchedulingDisabled   20m
hydra-control-02      Ready,SchedulingDisabled   20m
hydra-control-03      Ready,SchedulingDisabled   20m
hydra-kubeworker-01   Ready                      17m
hydra-kubeworker-02   Ready                      17m
hydra-kubeworker-03   Ready                      17m
hydra-kubeworker-04   Ready                      17m
hydra-kubeworker-05   Ready                      17m
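The control nodes showing SchedulingDisabled is expected, since they are cordoned for system work; with all workers Ready, the next step would be to look at their conditions and taints. A sketch:

$ kubectl describe node hydra-kubeworker-01   # check Conditions (e.g. NetworkUnavailable) and Taints
$ kubectl get nodes -o wide                   # addresses and versions at a glance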
Sameroom
@sameroom-bot
[berle] Trying to bump the size of the control nodes and see if that works better.
Sameroom
@sameroom-bot
[zogg] weird, we used 3 mediums all the time without probs
Sameroom
@sameroom-bot
[berle] Doubled the available memory on the nodes, still nothing.
Sameroom
@sameroom-bot
[berle] "NetworkUnavailable True Mon, 01 Jan 0001 00:00:00 +0000 Thu, 27 Apr 2017 00:09:07 +0200 NoRouteCreated Node created without a route"
[berle] Hmm..
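NetworkUnavailable=True with NoRouteCreated is a likely reason the scheduler reports no available nodes, so the network layer (calico here) is worth checking next. A hedged sketch; pod names depend on how Mantl deploys calico:

$ kubectl -n kube-system get pods -o wide | grep -i calico   # are the calico pods running on each worker?
$ sudo calicoctl node status                                 # BGP peer status on a worker, if calicoctl is installed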