copilot svc deploy
works fine. I just set up a default pipeline to automatically build and deploy after each commit, and I can see the pipeline getting triggered automatically on commits to the main
branch, but the build is failing even though it builds fine when I run copilot svc deploy. Are the same env variables and secrets defined in my manifest used in the pipeline build? I don't see any env vars when poking around in the CodePipeline console.
It fails at this step: COMMAND_EXECUTION_ERROR: Error while executing command: for workload in $WORKLOADS; do manifest
but there's no useful error message, just Reason: exit status 1,
so I'm not sure how to proceed with debugging. Any ideas?
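For reference, these are the kinds of values from my manifest that I mean (the names and values below are placeholders, not my real ones):
variables:                              # plain environment variables
  LOG_LEVEL: info
secrets:                                # injected from SSM Parameter Store at runtime
  DB_PASSWORD: /my-app/test/db_password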
Hi David,
I have tried to add this to addons:
Resources:
  SSMAccessPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - ssm:Describe*
              - ssm:Get*
              - ssm:List*
            Resource: "{{ resource ARN }}"

Outputs:
  SSMAccessPolicyArn:
    Description: "The ARN of the ManagedPolicy to attach to the task role."
    Value: !Ref SSMAccessPolicy
and I get an error saying ResourceNotReady: failed waiting for successful resource state: Parameter values specified for a template which does not require them.
Heya @srikaransc !
Can you try this:
Parameters:
  App:
    Type: String
    Description: Your application's name.
  Env:
    Type: String
    Description: The environment name your service, job, or workflow is being deployed to.
  Name:
    Type: String
    Description: The name of the service, job, or workflow being deployed.

Resources:
  SSMAccessPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action:
              - ssm:Describe*
              - ssm:Get*
              - ssm:List*
            Resource: "{{ resource ARN }}"

Outputs:
  SSMAccessPolicyArn:
    Description: "The ARN of the ManagedPolicy to attach to the task role."
    Value: !Ref SSMAccessPolicy
Copilot always passes these parameters to the Addons stack so that you can build your own fancy names, or import values from the environment or service stacks.
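For example, you can use them in a !Sub to scope the policy to parameters named after your app and environment, something like this (the parameter path pattern is just an illustration, use whatever naming your parameters actually follow):
            Resource:
              - !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${App}/${Env}/*"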
Hi everyone,
Please can someone tell me which directory the buildspec post_build phase executes in?
I want to upload files to a newly created S3 bucket:
- aws s3 sync public/assets/images s3://bucket-name
Also, is it possible to get the dynamic bucket name inside the buildspec?
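Roughly what I'm aiming for, with the bucket name coming from an environment variable (ASSETS_BUCKET is a name I made up; I don't know where it would come from):
version: 0.2
phases:
  post_build:
    commands:
      # assuming commands run from the checked-out source root
      - aws s3 sync public/assets/images "s3://${ASSETS_BUCKET}"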
Thanks
Hi, I'm trying to ship logs from my container to Datadog with a logging sidecar.
Here is what my manifest.yml looks like:
logging:
  image: amazon/aws-for-fluent-bit:latest
  destination:
    Name: datadog
    TLS: on
    apikey: <DD_API_KEY>
  enableMetadata: true
  configFile: /fluent-bit/configs/parse-json.conf
Is it possible to set this apikey from SSM?
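Ideally I'd like to write something like this, assuming the logging section accepts a secretOptions map the way the underlying FireLens logConfiguration does (the parameter name is made up):
logging:
  image: amazon/aws-for-fluent-bit:latest
  destination:
    Name: datadog
    TLS: on
  secretOptions:
    apikey: /datadog/api_key    # SSM parameter holding the Datadog API key (made-up name)
  enableMetadata: true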
Hi everyone, I am trying to apply a custom firelens config to make the Fargate logs work better with our Kinesis + Function Beat setup.
Our manifest.yml looks like:
logging:
  image: 123456789.dkr.ecr.us-east-1.amazonaws.com/ns/firelens-custom:v0.2
  destination:
    Name: cloudwatch
    region: us-east-1
    log_group_name: /copilot/test-fargate-services
    log_stream_prefix: copilot/
  configFile: /extra.conf
The Fargate service works well; however, the custom configuration changes don't get applied to the generated logs.
I reckon it's because of the service task definition generated by copilot; the custom config is missing from the firelensConfiguration:
"image": "123456789.dkr.ecr.us-east-1.amazonaws.com/ns/firelens-custom:v0.2",
"startTimeout": null,
"firelensConfiguration": {
"type": "fluentbit",
"options": {
"enable-ecs-log-metadata": "true"
}
},
Any ideas on why copilot is not adding the custom config to the task definition?
Use configFilePath instead of configFile.
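i.e. something like this, keeping the rest of your settings as they are:
logging:
  image: 123456789.dkr.ecr.us-east-1.amazonaws.com/ns/firelens-custom:v0.2
  destination:
    Name: cloudwatch
    region: us-east-1
    log_group_name: /copilot/test-fargate-services
    log_stream_prefix: copilot/
  configFilePath: /extra.conf    # was configFile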
I've been using copilot with a pipeline to deploy an app. Today when I merged, it triggered my CodePipeline just fine and the build worked, but it has been stuck on the deploy step. Upon further investigation, the ECS update is what is stuck: if I watch the ECS service there are actually two tasks, one that is active, and one that goes PROVISIONING -> PENDING -> ACTIVE and then disappears. It just continues that loop over and over. In the logs everything looks perfect; I can see the server started successfully with no errors.
Any idea why this infinite loop is happening? I can't find any more information anywhere in the console.
Hi everyone, is there a way to force the copilot scheduled job to use the private subnets?
Our use case requires the requests to originate from a specific IP address that's whitelisted by a third party, and our VPC setup provides that for ECS services running in private subnets.
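Ideally something like this in the job's manifest, assuming scheduled jobs accept the same network placement setting that services do:
network:
  vpc:
    placement: 'private'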
ecs-preview mesh init
and was closed on Nov 16th, 2020. However, I don't see in the docs how to set up a mesh for the app.