    Kirk Ross
    @kirkbross_gitlab
    Although I also created an Ingress like you said, and that gave me (I think) a static IP, which also works.
    It took a while to generate. It was empty last night for a long time and then I just noticed now that it populated: http://34.95.90.233/
    I could probably use that as my A record too.
    Mark Terrel
    @mterrel
    That's awesome!!
    Congrats!
    Mark Terrel
    @mterrel
    So I'd suggest using the IP of the Ingress and here's why: no matter which type of resource we're talking about, if you delete it and re-create it, you'll likely get a new IP address. So you want to pick the resource that's going to be least likely to be deleted and re-created. As long as you don't delete the Ingress, it can stay around (and keep its IP address) indefinitely. That gives you the flexibility to make changes more freely to the underlying Kubernetes resources, like moving them to a different region or making changes to the size of compute instances in the cluster without losing the IP address.
    Plus, it has some cool additional features like making it super easy to enable HTTPS and handle certificates automagically when you're ready for all that.
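    If you want to grab that IP from the command line for your A record, something like this should do it (assuming kubectl is already pointed at your cluster; the Ingress name here is just a placeholder for whatever kubectl get ingress shows for yours):
      kubectl get ingress
      kubectl get ingress my-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'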
    Kirk Ross
    @kirkbross_gitlab
    I'm using the static IP now. Adding some firebase auth now... :)
    Mark Terrel
    @mterrel
    Nice!
    Kirk Ross
    @kirkbross_gitlab
    Okay... so after adding Firebase auth and a few new components, I ran adapt update and got some unscheduled pod errors, saying I lacked resources. So, I enabled autoscaling and it created an additional pod and added a core (I think). Shouldn't the small cluster created with gcloud container clusters create mycluster --enable-ip-alias --machine-type g1-small --disk-size 30 --num-nodes 1 --no-enable-stackdriver-kubernetes be plenty to handle a login page and a single landing page?
    Kirk Ross
    @kirkbross_gitlab
    Side issue: I've been deleting and creating clusters while experimenting, and in one instance I deleted the cluster from my browser. Then I tried adapt destroy foo-bar and it failed with a bunch of errors. (I'm guessing because you can't delete a deployment if the cluster is gone.) I'm not sure how to clear that up.
    Example of one of the errors:
    [17:23:51] Applying changes to environment [failed]
    [17:23:51] → Command failed with exit code 1: C:\Users\kirkb\AppData\Local\Temp\kubectl-mX3oaO\kubectl.exe --kubeconfig C:\Users\kirkb\AppData\Local\Temp\tmp-G9HsBT\kubeconfig get -o json Service urlrouter-netsvc-66158c624f7fa14b30a66e2ef6fe6b5d
    Mark Terrel
    @mterrel
    For the first question, I'm not really sure. I do know that for that blog, we used the absolute smallest resources that Google recommended, so I wouldn't be totally surprised if it doesn't take much to run it out of juice.
    On the side issue: there is an open issue for that on GitLab. If the cluster is gone, there's not much we can do besides try our hardest to destroy everything, ignore the fact that we can't, and go ahead and remove the Adapt deployment. That's what the open issue recommends: something like adding --force to adapt destroy.
    Does that sound like it does what you want (once --force is available), or is there something different you were hoping for?
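    If you want to poke at the resource question yourself, a quick sketch (assuming kubectl is pointed at the cluster) is to compare what's already requested against what the node can offer, and list everything running, since the kube-system pods on GKE already reserve a chunk of a g1-small:
      kubectl describe nodes
      kubectl get pods --all-namespaces
    The "Allocated resources" section near the bottom of the describe output shows how much CPU and memory is already requested as a percentage of what the node can allocate.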
    Kirk Ross
    @kirkbross_gitlab
    I haven't tried --force. I just want to be able to run the adapt run k8s-test blah using the same name.
    in my case that's "wuddit-test"
    (in your tutorial it's app-test)
    adapt destroy wuddit-test --force gets unexpected argument --force
    It's not crucial at all... I can rename it
    Manish Vachharajani
    @mvachhar
    I'm out of pocket until Tuesday, but Mark or I will take a look at the --force issue as soon as one of us is at a keyboard. I might be able to take a look tonight, but no guarantees.
    Kirk Ross
    @kirkbross_gitlab
    I'm all good for now. I deployed with a new name. Really the issue is running adapt destroy after a cluster has been deleted manually outside of adapt.
    If you need I can post the console output.
    Manish Vachharajani
    @mvachhar
    Ok, it looks like your destroy problem is Issue #95 (unboundedsystems/adapt#95), which says that we have to add a --force and a --no-stop option to resolve this and a related case, not that you can just add a --force option to destroy :-). Since you've run into this, I'll discuss prioritizing this issue with Mark for a fix soon. We'll keep you posted. I'll also spare you the hand-edit of the Adapt config in a text editor to remove it.
    Unless of course, you really want that extra deployment gone now.
    Kirk Ross
    @kirkbross_gitlab
    Nope. That extra deployment is welcome to persist in my gcloudosphere as long as need be! :)
    Kirk Ross
    @kirkbross_gitlab
    I noticed I have 3 storage buckets, one with prefix "artifacts," one with prefix "staging" and one with no prefix. Are those created by adapt, or can one or more of them be safely deleted? (just trying to keep costs down). I don't recall any bucket creation steps in the tutorial so I'm thinking they are adapt-omatic creations, or they are vestiges of other experiments I never noticed.
    Manish Vachharajani
    @mvachhar
    So the "artifacts" bucket holds the images pushed to gcr.io for use in GKE. We can't delete them automatically, since Adapt doesn't really know when an image isn't needed. You can delete images manually via the gcr.io panel. Not sure what staging is.
    When you delete the gcr.io image, it'll remove the storage in the artifacts bucket.
    Registry cleanup seems to be lacking across the board. I'm open to any good solutions.
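    If the command line is handier than the panel, the rough shape of manual cleanup (project and image names here are just placeholders) is:
      gcloud container images list --repository=gcr.io/YOUR_PROJECT
      gcloud container images list-tags gcr.io/YOUR_PROJECT/IMAGE_NAME
      gcloud container images delete gcr.io/YOUR_PROJECT/IMAGE_NAME@sha256:DIGEST --force-delete-tags
    Deleting the image there should drop the corresponding objects out of the artifacts bucket.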
    Manish Vachharajani
    @mvachhar
    You can also set a policy to delete images that are old, but I'd only recommend that for dev, not prod.
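    A rough sketch of that for dev (image path and cutoff date are placeholders, and this assumes a bash-ish shell like Cloud Shell): use list-tags with a timestamp filter to find old digests, then delete them:
      gcloud container images list-tags gcr.io/YOUR_PROJECT/IMAGE_NAME \
        --filter="timestamp.datetime < '2019-10-01'" --format='get(digest)' | \
        while read digest; do
          gcloud container images delete "gcr.io/YOUR_PROJECT/IMAGE_NAME@$digest" --force-delete-tags --quiet
        done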
    Kirk Ross
    @kirkbross_gitlab
    Cool. I'm weak on DevOps, so my layperson suggestion is an --all flag: adapt destroy foobar-test --all :)
    I know that's not very helpful... :D
    Separate issue... I made some front end changes and did adapt update my-deployment, and it's stuck in an infinite loop: Waiting for enough pods (0 available / 1 desired).
    I turned on autoscaling on my pod with min 1, max 4... so not sure why it's not finding a pod.
    Kirk Ross
    @kirkbross_gitlab
    Are we supposed to modify pod CPU and memory specs for a given deployment? i.e. via requests and limits in a yaml or something?
    Manish Vachharajani
    @mvachhar
    Hmm, have you looked at the GKE console to see the status of the pod? It is waiting for the pod to become available, which usually means it was scheduled but failed to start properly.
    Memory limits and such can be set via style, but we have to work out a portable way to do that when you declare the service.
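    If kubectl is easier than the console, the same info is in the pod events (the pod and deployment names below are placeholders):
      kubectl get pods --all-namespaces
      kubectl describe pod POD_NAME
    The Events section at the bottom usually says why it's stuck, e.g. insufficient CPU/memory or an image pull failure. And if you just want to experiment with requests/limits directly in Kubernetes, you can patch them onto a deployment, though Adapt may overwrite that on the next update, so treat it as a scratch tweak:
      kubectl set resources deployment DEPLOYMENT_NAME --requests=cpu=100m,memory=128Mi --limits=cpu=250m,memory=256Mi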
    Manish Vachharajani
    @mvachhar
    Any luck with the hung deployment Kirk?
    Kirk Ross
    @kirkbross_gitlab
    Yes. I nuked my cluster and created a new one with a slightly larger vm. In your tutorial, when creating the cluster you have: --machine-type g1-small --disk-size 30 and I made it --machine-type e2-small --disk-size 32.
    I tried creating a cluster from the gcloud console in the browser but ended up with 6 cores and 12GB RAM which is like a billion dollars a month.
    I think because it created a node in each of three zones. Either I missed that setting when creating it or it defaults to three zones.
    I'm still trying to learn the whole hierarchy of cluster > node > pod and how they all interact. There are so many docs, it's overwhelming, so I just learn them as I need them to solve the current problem.
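    I'm guessing I should have created it from the CLI with an explicit zone so it stays a single-zone cluster, something like (cluster name and zone are just what I'd pick):
      gcloud container clusters create mycluster --zone us-central1-a --machine-type e2-small --disk-size 32 --num-nodes 1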
    Manish Vachharajani
    @mvachhar
    Did you actually get a chance to look at the error messages from the pods that got scheduled?
    As far as cluster -> node -> pods, the cluster is made up of nodes, and the k8s scheduler chooses which node to run each pod on based on a set of criteria that includes the memory and CPU requests and limits, but also taints and tolerations.
    BTW, you don't need to nuke the whole cluster. From the console, you can actually create a new pool of nodes, or change the number of nodes in the pool (though I don't think you can change the type of nodes). You can even have multiple pools.
    I forgot to add the storage-ro scope to a custom cluster and had to figure that out, since without storage-ro (which is part of the default scopes, BTW) the nodes can't pull images from gcr.io. Sadly, you cannot change the access scopes on an existing pool, so I had to figure out how to make a new pool and delete the old one. If you have some time, I can walk you through it tomorrow on a Zoom/Google Meet if you want.
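    If you want to try it from the CLI in the meantime, the rough shape of it (pool/cluster names and zone are placeholders; the gke-default scope alias includes storage-ro) is:
      gcloud container node-pools create new-pool --cluster mycluster --zone us-central1-a --machine-type e2-small --num-nodes 1 --scopes gke-default
      gcloud container node-pools delete default-pool --cluster mycluster --zone us-central1-a
    Once the pods have been rescheduled onto the new pool, deleting the old one keeps you from paying for both.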
    Kirk Ross
    @kirkbross_gitlab
    Isn't everything constrained by the max CPU / RAM of the VM?
    I was hesitant to create new pools in my existing cluster because I think I tried that before and my $26/month VM became $52.