    Kirk Ross
    @kirkbross_gitlab
    I'm all good for now. I deployed with a new name. Really the issue is running adapt destroy after a cluster has been deleted manually outside of adapt.
    If you need I can post the console output.
    Manish Vachharajani
    @mvachhar
    Ok, it looks like your destroy problem is Issue #95 (unboundedsystems/adapt#95), which says that we have to add a --force and a --no-stop option to resolve this and a related case, not that you can just add a --force option to destroy :-). Since you've run into this, I'll discuss with Mark prioritizing this issue for a fix soon. We'll keep you posted. I'll spare you the hand-edit of the adapt config in a text editor as well.
    Unless of course, you really want that extra deployment gone now.
    Kirk Ross
    @kirkbross_gitlab
    Nope. That extra deployment is welcome to persist in my gcloudosphere as long as need be! :)
    Kirk Ross
    @kirkbross_gitlab
    I noticed I have 3 storage buckets, one with prefix "artifacts," one with prefix "staging" and one with no prefix. Are those created by adapt, or can one or more of them be safely deleted? (just trying to keep costs down). I don't recall any bucket creation steps in the tutorial so I'm thinking they are adapt-omatic creations, or they are vestiges of other experiments I never noticed.
    Manish Vachharajani
    @mvachhar
    So artifacts are the images pushed to gcr.io for use in gke. You can't just delete them wholesale, and adapt doesn't really know when an image isn't needed. You can delete individual images manually via the gcr.io panel. Not sure what staging is.
    When you delete the gcr.io image, it'll remove the storage in artifacts.
    Registry cleanup seems to be lacking across the board. I'm open to any good solutions.
    Manish Vachharajani
    @mvachhar
    You can also set a policy to delete images that are old, but I'd only recommend that for dev, not prod.
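    For reference, a minimal sketch of that manual cleanup with gcloud — PROJECT, IMAGE, and the cutoff date are placeholders, and the exact filter syntax may vary by gcloud version:

        # List digests for images pushed before a cutoff date
        gcloud container images list-tags gcr.io/PROJECT/IMAGE \
            --filter="timestamp.datetime < '2020-10-01'" \
            --format="get(digest)"

        # Delete one digest (and any tags on it); this frees the
        # corresponding storage in the "artifacts" bucket
        gcloud container images delete gcr.io/PROJECT/IMAGE@DIGEST --force-delete-tags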
    Kirk Ross
    @kirkbross_gitlab
    Cool. I'm weak on devops, so my layperson suggestion is an --all flag: adapt destroy foobar-test --all :)
    I know that's not very helpful... :D
    Separate issue... I made some front end changes and did adapt update my-deployment, and it's stuck in an infinite loop: Waiting for enough pods (0 available / 1 desired).
    I turned on autoscaling on my pod with min 1, max 4... so not sure why it's not finding a pod.
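    A minimal way to see why a rollout is stuck like this, assuming kubectl is pointed at the cluster (POD_NAME is a placeholder):

        # List pods and their states (Pending, CrashLoopBackOff, etc.)
        kubectl get pods

        # The Events section at the bottom usually says why it won't start,
        # e.g. "Insufficient cpu" or a failed image pull
        kubectl describe pod POD_NAME

        # If the container started and then crashed, check its logs
        kubectl logs POD_NAME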
    Kirk Ross
    @kirkbross_gitlab
    Are we supposed to modify pod CPU and memory specs for a given deployment? i.e. via requests and limits in a yaml or something?
    Manish Vachharajani
    @mvachhar
    Hmm, have you looked at the gke console to see the status of the pod? It is waiting for the pod to become available, which usually means it was scheduled but failed to start properly.
    Memory limits and such can be set via style, but we have to work out a portable way to do that when you declare the service.
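    Until that's worked out in Adapt, one stopgap sketch is setting requests and limits directly on the k8s Deployment with kubectl — the deployment name here is an assumption, and Adapt may revert the change on its next update:

        # Set CPU/memory requests and limits on an existing deployment
        kubectl set resources deployment my-deployment \
            --requests=cpu=100m,memory=128Mi \
            --limits=cpu=250m,memory=256Mi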
    Manish Vachharajani
    @mvachhar
    Any luck with the hung deployment Kirk?
    Kirk Ross
    @kirkbross_gitlab
    Yes. I nuked my cluster and created a new one with a slightly larger VM. In your tutorial, when creating the cluster you have --machine-type g1-small --disk-size 30, and I made it --machine-type e2-small --disk-size 32.
    I tried creating a cluster from the gcloud console in the browser but ended up with 6 cores and 12GB RAM which is like a billion dollars a month.
    I think because it created one for each of three regions. Either I missed that setting when creating it or it defaults to three regions.
    I'm still trying to learn the whole hierarchy of cluster > pod > node and how they all interact. There are so many docs that it's overwhelming, so I just learn things as I need them to solve the problem at hand.
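    The three-for-one node count in the console is what a regional cluster does: it keeps one set of nodes per zone. A zonal create along the lines of the tutorial avoids that — cluster name and zone here are placeholders:

        # Single-zone cluster with one small node; a regional cluster
        # would replicate the node pool across three zones
        gcloud container clusters create my-cluster \
            --zone us-central1-a \
            --machine-type e2-small \
            --disk-size 32 \
            --num-nodes 1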
    Manish Vachharajani
    @mvachhar
    Did you actually get a chance to look at the error messages from the pods that got scheduled?
    As far as cluster -> node -> pods, the cluster is made up of nodes, and the k8s scheduler chooses which node to run each pod on based on a set of criteria that includes the memory and cpu limits, but also taints and tolerations.
    BTW, you don't need to nuke the whole cluster. From the console, you can actually create a new pool of nodes, or change the number of nodes in the pool (though I don't think you can change the type of nodes). You can even have multiple pools.
    I forgot to add the storage-ro scope to a custom cluster and had to figure that out the hard way, since without storage-ro (which is included in the default scopes, BTW) the nodes can't pull images from gcr.io. Sadly, you cannot change the access scopes on an existing pool, so I had to figure out how to make a new pool and delete the old one. If you have some time I can walk you through it tomorrow on a Zoom/Google Meet if you want.
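    A rough sketch of that new-pool/old-pool swap with gcloud — cluster, pool, and zone names are placeholders, and note the explicit storage-ro scope mentioned above:

        # Add a new pool with the instance size (and scopes) you want
        gcloud container node-pools create new-pool \
            --cluster my-cluster --zone us-central1-a \
            --machine-type e2-small --num-nodes 1 \
            --scopes gke-default,storage-ro

        # Once the workloads have moved over, delete the old pool
        gcloud container node-pools delete default-pool \
            --cluster my-cluster --zone us-central1-a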
    Kirk Ross
    @kirkbross_gitlab
    Isn't everything constrained by the max CPU / RAM of the VM?
    I was hesitant to create new pools in my existing cluster because I think I tried that before and my $26/month VM became $52.
    It's so easy to make a little tweak and dramatically increase cost.
    Manish Vachharajani
    @mvachhar
    It is, yes, so if your pod is bigger than the CPU and RAM of all nodes, then you won't be able to run the pod.
    Oh yeah, we got hit with a $230 bill last month because we forgot to shut down a couple small clusters.
    If you delete the old pool and choose the right instance sizes you should be fine though. However, none of the providers really make it easy to see your instantaneous costs.
    Kirk Ross
    @kirkbross_gitlab
    Yeah. They have a projected cost in the billing area, but I don't know how often it updates.
    Manish Vachharajani
    @mvachhar
    A pro tip on creating and deleting pools, though: you have to use kubectl drain to take nodes offline manually before deleting a pool, especially the nodes running the master process for k8s. Learned that the hard way :)
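    The drain step looks roughly like this — NODE_NAME is a placeholder, and older kubectl versions spell the last flag --delete-local-data:

        # Mark the node unschedulable, then evict its pods onto other nodes
        kubectl cordon NODE_NAME
        kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data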
    Oh they do, I hadn't noticed that. I'll have to keep an eye on it.
    Kirk Ross
    @kirkbross_gitlab
    Right now I'm running a $26/month VM but my "projected costs" for October are $82.
    Manish Vachharajani
    @mvachhar
    Would have probably saved us $1-2k over the last year :)
    Hmm, is that because of storage? Do they give you a breakdown?
    I'm checking it out now
    Kirk Ross
    @kirkbross_gitlab
    I think it's because it's factoring in one of my "mistake" VMs that I deleted pretty quickly.
    Manish Vachharajani
    @mvachhar
    Oh yeah they do, nice.
    Ahh
    That is a really nice feature. I wonder if AWS has something similar. They have a manual calculator, but I haven't seen an automated one like the GCP one; it would be easy to miss in AWS, though.
    Kirk Ross
    @kirkbross_gitlab
    I want to start setting up my postgres db, and I noticed in deploy/styles.tsx the test function k8sTestStyle() has a mockDbName and mockDataPath... is this a functioning db I can read and write to? Not sure how to connect to it, or if I have to do a prod deploy for postgres to work.
    Mark Terrel
    @mterrel
    The postgres db that's deployed by k8sTestStyle is a fully functioning postgres database that should work just fine for development and testing purposes. BUT, it's not deployed in a way that's suitable for production. It does not create any persistent storage, so when you destroy your deployment, you destroy your data too (just like you'd typically want for dev or testing). That style is also able to pre-load data from a .sql file or files; that's what mockDataPath is for.
    Right now, for more long-term data and definitely for production, we'd recommend using a database service from one of the major vendors, like Cloud SQL from Google. To use that, you'd create the database service separately from Adapt. (In the future, we'd like to add components for that.) Then, for the prod style, there's an example in the Adapt styles.tsx of a Postgres component that just takes the connection & login information. If you need some help with that component, I can go into more detail.
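    As a quick sanity check before wiring it into the app, you can hit a Cloud SQL Postgres instance directly with psql — host, user, and dbname here are placeholders, and sslmode=require assumes a public IP with SSL enforced:

        # Connect to the Cloud SQL instance to verify credentials/networking
        psql "host=INSTANCE_IP port=5432 dbname=appdb user=postgres sslmode=require"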
    Kirk Ross
    @kirkbross_gitlab
    I created a PostgreSQL instance in my gcloud project. I'm not too worried about having separate testing and prod databases, so I'll just use the one for everything. I'm going to spend a few days doing some Lynda (now LinkedIn Learning) tutorials on how to connect to it from the app. I also want to learn how to use a .env file to store the username and password so they're safe. Once I know how to create, edit, and delete records, I'll be at the first rung of the ladder of knowledge :) I installed pgAdmin on my PC, so I'll mess with that and see if I can connect, then try to move some of those settings over to the app.
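    One minimal .env sketch for the psql side — all values are placeholders; libpq reads the standard PG* variables automatically, so a bare psql will connect once they're exported:

        # .env -- keep this file out of version control (.gitignore it)
        PGHOST=INSTANCE_IP
        PGPORT=5432
        PGDATABASE=appdb
        PGUSER=postgres
        PGPASSWORD=change-me

        # In a shell: export everything in .env, then connect
        set -a; . ./.env; set +a
        psql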
    Manish Vachharajani
    @mvachhar
    Great, let us know if you want some help getting those .env variables into Adapt.
    Rahul Saxena
    @rksio_twitter
    “Oh yeah, we got hit with a $230 bill last month because we forgot to shut down a couple small clusters.” — wonder if a “watcher” or “resourcelimiter” attribute might be useful to warn when resource usage keeps accruing unexpectedly.
    Mark Terrel
    @mterrel
    That sounds like a great idea. :)