    Efe Karakus
    @efekarakus
    Hey Gavin!! Let me dig into this later today and I'll get back to you. To be honest, what you're doing seems completely okay :thought_balloon: so I can't see anything obviously wrong at first glance
    Efe Karakus
    @efekarakus

    Hey @GavinRay97 okay I just played around with this too! It's actually the expected behavior with how the ECS+ALB integration works if the path for your service is not just the root "/".

    So for example, if you want to get /api or /api/console you can write the following:

    app.get('/api', (req, res) => {
      res.send('Hello from API!')
    });
    
    app.get('/api/console', (req, res) => {
      res.send('Hello from the console of API!')
    });

    I think in your situation, since you're using Hasura, you might not have this flexibility. So there are two different routes that might work for you:

    1. Create the hasura service as a "Backend Service" instead. Then in your main "frontend" load balanced web service, you can redirect a request to a hasura path to go to your backend service. For example:
    // In the frontend service's code.
    const axios = require('axios');

    app.get('/hasura/console', async (req, res) => {
       // Forward the request to the "hasura" backend service via service discovery.
       const results = await axios.get(`http://hasura.${process.env.COPILOT_SERVICE_DISCOVERY_ENDPOINT}/console`);
       return res.send(results.data);
    });

    My javascript foo is not the best anymore :D but hopefully it made sense. Here is the wiki on how you can communicate between services with service discovery in Copilot: https://github.com/aws/copilot-cli/wiki/Developing-With-Service-Discovery

    2. You can create two separate applications or environments. The goal here is to get a separate load balancer for your two different services. This is probably not the preferred route.
    GavinRay97
    @GavinRay97

    @efekarakus Ahh okay. Genuinely appreciate you spending the time to look into this, thank you :pray:

    I think your idea about separate Copilot applications per service might be the best. The route changes would also work but then the app code would be designed around the ALB on AWS which seems like it might cause more headaches later on (and like you mentioned, I don't think I can change this with Hasura).

    Are there any negatives or increased costs to managing each as a separate Copilot app? Would I need to pay for the extra ALBs and ECR repos or whatnot?

    Another potential solution might be to put NGINX in front of everything and then wire up NGINX's route rules, right? Not super familiar with this approach.
    I.e. NGINX as the only Load Balanced Web Service, and all the others as Backend Services.

    Efe Karakus
    @efekarakus

    Np! Yeah, unfortunately the tradeoff is that you'll get a separate ALB per service and each ALB costs roughly ~$16/month (https://aws.amazon.com/elasticloadbalancing/pricing/)

    The NGINX approach that you're suggesting also works, where all the other services are backend services. It's pretty similar to the route change option in the "frontend" service, but instead of you writing this code it lives in an NGINX configuration :). Personally, I'd recommend that option to balance cost and keep all my related microservices in a single app. It'll take a bit more time to set things up but it'll be cheaper and easier to maintain. But if ~$32/mo seems okay, creating multiple apps is always an easy option!
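
    For reference, here is a minimal sketch of what those NGINX route rules could look like. The service names, ports, and the myapp.local namespace below are placeholders, not anything from Gavin's actual setup:

    # nginx.conf for the single Load Balanced Web Service (sketch)
    server {
      listen 80;

      location /hasura/ {
        # "hasura" is a Backend Service reachable through Copilot service discovery.
        proxy_pass http://hasura.myapp.local:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }

      location / {
        # Everything else goes to the main app, also a Backend Service.
        proxy_pass http://web.myapp.local:8080/;
      }
    }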

    Ronique Ricketts
    @RoniqueRicketts
    Hey guys, following AWS's single-DB format, is there a guide in Copilot that covers this practice?
    Efe Karakus
    @efekarakus
    What type of database? Are you using DynamoDB?
    Ronique Ricketts
    @RoniqueRicketts
    Yes sir, DynamoDB. Since AWS is suggesting a single-DB design, is there a way to select an existing DB when I am adding a new service in the same project? For example, a project with 3 services: fe, accounting, and products. Based on DynamoDB's suggestion we should have 1 DB that serves both backend services. Is there a way to implement this in Copilot?
    Austin Ely
    @bvtujo
    @RoniqueRicketts The "copilot-y" way to do this would be to create a Backend Service which fronts your database and exposes a simple API that matches the needs of each backend service. That way you don't have to handle importing resources via the addons stack and can make DB requests to an endpoint like "db.application.local/accounting/", though it does have the downside of mixing business logic abstractions. You could also create the database by using storage init and attaching it to an arbitrary backend service (say, accounting), then writing an IAM policy allowing access to your database and placing it in the 'addons' directory for "products" (see the sketch below).
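
    A rough sketch of that second option, as I understand the addons convention (every name, the region/account, and the table ARN below are placeholders; Copilot passes the App/Env/Name parameters into addons templates, and a managed-policy output gets attached to the service's task role):

    # copilot/products/addons/accounting-db-access.yml (hypothetical file)
    Parameters:
      App:
        Type: String
      Env:
        Type: String
      Name:
        Type: String
    Resources:
      AccountingTableAccessPolicy:
        Type: AWS::IAM::ManagedPolicy
        Properties:
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:GetItem
                  - dynamodb:PutItem
                  - dynamodb:Query
                # Replace with the actual ARN of the table created for "accounting".
                Resource: arn:aws:dynamodb:us-east-1:111111111111:table/accounting
    Outputs:
      AccountingTableAccessPolicyArn:
        Description: Managed policy granting "products" access to the shared table.
        Value: !Ref AccountingTableAccessPolicy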
    Ronique Ricketts
    @RoniqueRicketts
    That works
    Austin Ely
    @bvtujo
    @RoniqueRicketts I forgot to mention the best part: the service discovery endpoint is an environment variable in all containers, so you can send requests to db.$COPILOT_SERVICE_DISCOVERY_ENDPOINT/accounting/some-request as described on our wiki!
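
    As a sketch, a request from one of the other services could then look like this in code (the "db" service name and the /accounting path mirror the example above; the response shape is an assumption):

    // Inside the "products" service (sketch).
    const axios = require('axios');

    async function getAccountingRecord(id) {
      // COPILOT_SERVICE_DISCOVERY_ENDPOINT is injected into every container,
      // e.g. it resolves to something like myapp.local inside the environment.
      const endpoint = process.env.COPILOT_SERVICE_DISCOVERY_ENDPOINT;
      const res = await axios.get(`http://db.${endpoint}/accounting/${id}`);
      return res.data;
    }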
    Efe Karakus
    @efekarakus

    Heya Copilots!! 🚀

    We just released v0.4.0 https://github.com/aws/copilot-cli/releases/tag/v0.4.0 !! Autoscaling support and lots of other changes are here 🥳
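
    For anyone curious, autoscaling is configured through the count field in the service manifest; a hedged sketch (the range and target are placeholder values):

    # In a service's manifest.yml
    count:
      range: 1-10        # Scale between 1 and 10 tasks.
      cpu_percentage: 70 # Target average CPU utilization.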

    4 replies
    Efe Karakus
    @efekarakus
    Adam and Brent are doing a show in 30 mins demoing Copilot on Twitch and Youtube: https://twitter.com/realadamjkeller/status/1308470743164878848?s=20 if anyone is interested :)
    Ronique Ricketts
    @RoniqueRicketts
    Hello guys, is there a way to prevent the CLI from adding the service name to the front-end domain when I am using a custom domain? Let's say I have example.com and the front-end service name is web; when I launch the app it will be something like web.test.api.example.com. How can I make it just api.example.com?
    3 replies
    Tim Greene
    @tggreene
    Hey folks, is it straightforward to add a custom domain to an existing copilot application?
    4 replies
    jaybauson
    @jaybauson
    Hi guys, I never had any issue with ecs-preview/copilot until yesterday. I am getting this error: "Error: deploy application: change set with name ecscli-87853176-4cbc-4b14-a9fa-ef5a180cce0f for stack xxxxxxx-prod-pubtool has no changes". Any thoughts on this?
    David Killmon
    @kohidave
    Oooh. Make sure you commit your changes
    To copilot it looks like nothing changed
    Or you can pass in --tag some-exciting-id
    That’ll force a deployment
    jaybauson
    @jaybauson
    [attached screenshot: image.png]
    I have this in bitbucket pipeline, I totally moved from Jenkins to Bitbucket pipelines. How would I force the deployment?
    David Killmon
    @kohidave
    Are you using ‘copilot deploy’?
    jaybauson
    @jaybauson
    yes
    David Killmon
    @kohidave
    Try adding '--tag 1234-manual' to the end of it
    jaybauson
    @jaybauson
    that fixed it, looks like I have to start adding my build number as a tag for my deploys. thanks @kohidave !
    David Killmon
    @kohidave
    It usually uses the git SHA. So if the sha hasn’t changed - it won’t update. But I’m glad we got it to work!
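
    In a Bitbucket Pipelines step that could look roughly like this (a sketch; the service and environment names are guesses from the stack name above, and BITBUCKET_BUILD_NUMBER is Bitbucket's built-in build counter):

    # Tag the deployment with the build number so every pipeline run forces an update.
    copilot deploy --name pubtool --env prod --tag "build-${BITBUCKET_BUILD_NUMBER}"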
    jaybauson
    @jaybauson
    remind me to buy you a beer if you visit the Philippines :smiley:
    David Killmon
    @kohidave
    🍻
    Joshua Kleiner
    @surrealchemist
    Is there any way to add https to the load balancer automatically? Normally I have ALB take traffic on 443 and send it to port 80 on the back end. I can set it up manually but since everything is being managed via cloudformation for me I would rather integrate it.
    6 replies
    Joshua Kleiner
    @surrealchemist
    We use Bamboo Server, so we don't have pipelines. I wonder if the new Specs feature is in any way compatible, or if I can run copilot inside a container as opposed to installing it server-wide.
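
    One hedged option for that: fetch the copilot binary into the build workspace (or bake it into the build image) instead of installing it server-wide. Assuming the Linux release asset keeps its current name, something like:

    # Download copilot into the build workspace rather than installing it system-wide (sketch).
    curl -Lo copilot https://github.com/aws/copilot-cli/releases/latest/download/copilot-linux
    chmod +x copilot
    ./copilot deploy --name my-service --env test   # service/env names are placeholders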
    Rahul Nair
    @rahulunair
    hey all, i have a noob question: can we explicitly set the instance type for a container deployed using copilot? Essentially I am looking to see if i can explicitly map a particular type of hardware (CPU type) for my containers that require instructions that may not be available on all CPUs.
    David Killmon
    @kohidave
    Hi @rahulunair ! Copilot only uses Fargate - so you can choose the size of the cpu/mem but you can’t choose the architecture
    Rahul Nair
    @rahulunair
    Hi @kohidave i thought copilot can deploy directly to ECS as well (powered by EC2), not only Fargate, or is it that now Fargate is the only option used by ECS ..? I assumed Fargate was more of a serverless solution..
    updated my comment @kohidave
    David Killmon
    @kohidave
    Fargate is just a compute type :smile: ECS can manage containers using either Fargate compute or EC2 compute
    Rahul Nair
    @rahulunair
    ah.. thank you @kohidave for making that clear :D.. my mistake :) ..if i have to set the instance type for a container, then I guess the only option is to use the ECS CLI and do something like this: aws ecs run-task --cluster default --task-definition ecs-gpu-task-def --placement-constraints type=memberOf,expression="attribute:ecs.instance-type == p2.xlarge" or is there a tool like copilot for that?
    Anish Dcruz
    @anishdcruz_gitlab
    Hi everyone
    Can someone help me with the manifest.yml?
    # The manifest for the "core-system" service.
    # Read the full specification for the "Load Balanced Web Service" type at:
    #  https://github.com/aws/copilot-cli/wiki/Manifests#load-balanced-web-svc
    
    # Your service name will be used in naming your resources like log groups, ECS services, etc.
    name: core-system
    # The "architecture" of the service you're running.
    type: Load Balanced Web Service
    
    image:
      # Docker build arguments. You can specify additional overrides here. Supported: dockerfile, context, args
      build: Dockerfile
      # Port exposed through your container to route traffic to it.
      port: 80
    
    http:
      # Requests to this path will be forwarded to your service. 
      # To match all requests you can use the "/" path. 
      path: '/'
      # You can specify a custom health check path. The default is "/"
      healthcheck: '/healthcheck'
      # You can enable sticky sessions.
      # stickiness: true
    
    # Number of CPU units for the task.
    cpu: 256
    # Amount of memory in MiB used by the task.
    memory: 512
    # Number of tasks that should be running in your service.
    count: 1
    
    # Optional fields for more advanced use-cases.
    #
    variables:                    # Pass environment variables as key value pairs.
      APP_NAME: core-system
    
    secrets:                      # Pass secrets from AWS Systems Manager (SSM) Parameter Store.
    
    # You can override any of the values defined above by environment.
    environments:
      test:
        variables:
          APP_ENV: production
          APP_URL: https://dev.core-system.test.url
        secrets:
          APP_KEY: CORE_TEST_APP_KEY
      prod:
        variables:
          APP_ENV: production
          APP_URL: https://dev.core-system.production.url
        secrets:
          APP_KEY: CORE_APP_KEY
    The problem is that the ECS task definition only uses the prod environment variables and secrets, even in the test environment.
    Efe Karakus
    @efekarakus
    Heya, looks like you already cut us an issue for this :) aws/copilot-cli#1535 We're working on it right now, so hopefully a fix should be out soon!
    Joshua Kleiner
    @surrealchemist
    Is there any way to add an existing EFS volume? I have an existing stack on ec2 I want to move over but I need some kind of persistent storage (meh, wordpress)
    Austin Ely
    @bvtujo

    Hi @surrealchemist, EFS isn't currently possible due to some fields in the Task Definition that we don't currently support through the manifest, but we do have plans to support it very soon. Follow along and +1 on this issue aws/copilot-cli#1559 to keep up to date with our progress on it!

    We're really excited about adding support for it, as it's a really useful service, but it's necessarily a little more involved than setting up a database or an S3 bucket, as I'm sure you know :)

    Joshua Kleiner
    @surrealchemist
    Thanks. The workarounds for CMSs using S3 are kinda weak. It would be nice to have some kind of persistent storage that isn't dependent on fixing the app. Not lucky enough to have a big dev team to fix it up, so ECS adding EFS was the one thing I was missing. If only there were some kind of S3 gateway built in to let you use a bucket/path as a volume, or even a sidecar.
    Efe Karakus
    @efekarakus

    Heya Copilots! v0.5.0 is now released!🍻
    https://github.com/aws/copilot-cli/releases/tag/v0.5.0

    Notably, it has a bunch of bug fixes, new commands for scheduling jobs, and support for using an existing image instead of building from a Dockerfile!
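
    For the existing-image option, the manifest's image section can point at a pre-built image instead of a Dockerfile; roughly like this, if I have the new field right (the image URI is just a placeholder):

    image:
      # Use a pre-built image instead of building from a Dockerfile.
      location: nginx:latest
      port: 80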

    4 replies
    Lamine Gaye
    @craquiest

    Hi,
    I started using Copilot a few days ago, after watching as many videos as I could find and reading the docs.
    I set up a pipeline for my app (1 nginx frontend svc and 2 backend svcs) in the test environment.
    The initial deployment with copilot pipeline update --yes works fine, with the frontend finding the backend endpoint thanks to nginx.conf pointing to the service discovery addresses:

    location /api {
      #  proxy_pass http://backend:8888;
       proxy_pass http://backend.XXXX.local:8888;
       proxy_set_header X-Real-IP  $remote_addr;
       proxy_set_header X-Forwarded-For $remote_addr;
       proxy_set_header Host $host;
       proxy_set_header X-Forwarded-Proto $scheme;
      #  proxy_redirect http://backend:8888 $scheme://$http_host/;
       proxy_redirect http://backend.XXXX.local:8888 $scheme://$http_host/;
    }

    The problem is with subsequent git-push-triggered pipeline updates, or even 'Release Change' in the AWS CodePipeline console: the build and deploy happen fine, but then the frontend in the browser cannot seem to find the backend anymore. The JS console shows a 502 Bad Gateway error, and copilot svc logs -n frontend shows that the request is still hitting the old private address for the backend. This is despite the fact that the Route 53 hosted zone for XXX.local is pointing to the new private IP addresses, as is the AWS Cloud Map console....

    I find I can fix the problem manually by either:

    • copilot deploy --name frontend --env test to deploy again the frontend
    • or stop the frontend task by hand in the ECS Cluster console, then copilot pipeline update --yes (this is a little faster as it does not re-build but just re-launches a new front-end task based on the same task definition)

    What am I missing? I thought the git-push-triggered pipeline was meant to take care of the whole process without me having to manually re-deploy one particular service to make sure it works... One extra command is a small price to pay, but I am worried it may be a symptom of something bigger in the way I set everything up.

    My guess is it's something to do with the nginx.conf maybe (I am no expert), or maybe the order in which the deploy stages finish (but the frontend finishes its deployment last every single time, so maybe not..).

    Any help will be much appreciated!! Please forgive the long post.

    I upgraded to Copilot 0.5.0 to see if it solves the problem, but no luck..
    Efe Karakus
    @efekarakus

    Heya @craquiest I believe this is due to how service discovery dns records are cached on the "frontend" tasks :(
    If I'm understanding right, you're updating your backend services and the frontend service does not resolve to the newer entries. By default Copilot sets the TTL field to 10s (instead of the default 60s), so my guess is that after 10s the frontend service will start resolving to the new entries properly.

    The somewhat good news is that we have been talking to the CloudMap team highlighting this issue and the fix is in preview :) If you'd like we can add you to the preview feature, can you email me your account id and region to karakuse@amazon.com?
    Until then, unfortunately your approach is correct :( you'd need to "bounce" the service, i.e. trigger a new "frontend" deployment, every time the backend services are updated to mitigate this problem.

    Lamine Gaye
    @craquiest

    Thanks @efekarakus for the quick response! Yes that'd be great! I have just sent you that email.

    Do you think separating the 3 pipelines might help? Currently I have all 3 services in the same pipeline just to stay within the 1 free-tier pipeline, but if I am understanding your reply well, that might be irrelevant..

    I'll keep using the bouncing method, and hopefully I can try that preview feature soon.
    Thanks again!

    Efe Karakus
    @efekarakus

    :+1: Thanks I received it!

    Yeah, I don't think splitting to multiple pipelines will help with this issue.