For support requests please search or create posts on Serverless Forum: https://forum.serverless.com
Hello!
I am trying to automatically execute a lambda at a fixed interval. I am using this syntax:
powerfull-lambda:
  handler: lambda.handler
  events:
    - schedule: rate(1 minute)
And in the handler:
module.export.handler = async (_event, _context, _callback) => {
  console.info('Powerfull Lambda running...');
};
But the console.info output is not showing in the console. What am I doing wrong?
Thx for the help
If anyone is interested, I just posted an $800 contract to upwork.com to modify a serverless stack.
I tried the
name: ${ssm:/path/to/service/myParam}
format, but it didn't seem to populate properly. Any suggestions, or is this not supported at this time? Thanks!
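For reference, a sketch of how an SSM lookup is typically wired up (the variable name below is a placeholder; the path mirrors the one above). Note the value is resolved at package/deploy time, not at runtime, so the deploying credentials need ssm:GetParameter on that path:

```yaml
provider:
  name: aws
  environment:
    # resolved by the framework when packaging, not by the lambda;
    # the deploying IAM user needs ssm:GetParameter on this path
    MY_PARAM: ${ssm:/path/to/service/myParam}
```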
Hi fellow cloud inhabitants,
One issue I am facing is around isolation of environments using stages.
We start a MediaConvert job (success) and want to consume the CloudWatch event it produces,
but the events are consumed across stages, meaning that our dev stage consumes events from our staging stage.
To remedy this, I added stage: - ${opt:stage} to our function trigger events:
source:
  - 'aws.mediaconvert'
detail-type:
  - 'MediaConvert Job State Change'
detail:
  stage:
    - ${opt:stage, 'dev'}
  status:
    - 'COMPLETE'
    - 'ERROR'
Since including this parameter, the lambda has not been triggered at all.
If I omit it, the lambda triggers on all stages simultaneously, independent of the event source's stage.
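For what it's worth, MediaConvert "Job State Change" events don't carry a custom detail.stage field by default, which would explain why the pattern never matches. One approach (an assumption, not something verified here) is to attach the stage as userMetadata when creating the job and filter on that instead:

```yaml
- cloudwatchEvent:
    event:
      source:
        - 'aws.mediaconvert'
      detail-type:
        - 'MediaConvert Job State Change'
      detail:
        # requires creating the job with
        # UserMetadata: { stage: '<stage>' }
        userMetadata:
          stage:
            - ${opt:stage, 'dev'}
        status:
          - 'COMPLETE'
          - 'ERROR'
```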
Since adding AWS X-Ray to my project, I seem to be unable to use --noDeploy to validate my serverless file before deployment. For example, if I run sls deploy --noDeploy locally, I get:
Rest API id could not be resolved. This might be caused by a custom API Gateway configuration. In the given setup, stage-specific options such as tracing, logs and tags are not supported.
Or through the CI/CD pipeline in Gitlab I get:
AWS provider credentials not found. Learn how to set up AWS provider credentials in our docs here: .
Does anybody know of an issue with using AWS Xray and the --noDeploy
option?
The serverless command gives no output. Any pointers?
Hey everyone, having a weird issue.
I'm using one of my serverless files to reference another so I can share custom configs: custom: ${file(../serverless.custom.yml)}
Within the custom, I have my dotenv that specifies the path
However, it seems it doesn't get the config. After modifying the dotenv file, it appears that serverless.service.custom is not the actual custom object loaded from the file, but the literal string "${file(../serverless.custom.yml)}" itself.
How can I get the actual contents?
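One workaround that may help (assuming a plugin is reading serverless.service.custom before variable resolution has run) is to reference a specific key inside the shared file rather than replacing the whole custom object; the file name mirrors the snippet above, and the dotenv key names are illustrative:

```yaml
# serverless.custom.yml (shared file)
dotenv:
  basePath: ../

# serverless.yml — pull one key out of the shared file instead of
# assigning the whole file to `custom`
custom:
  dotenv: ${file(../serverless.custom.yml):dotenv}
```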
Can someone help me with what to pass for secretName and secretKey in the serverless file below?
functions:
  myFunction:
    handler: gcr.io/knative-releases/github.com/knative/eventing-contrib/cmd/event_display:latest
    events:
      - awsSqs:
          secretName: aws-credentials
          secretKey: credentials
          queue: QUEUE_URL
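In case it helps: with the Knative awsSqs source, secretName is (to my understanding, so treat this as an assumption) the name of a Kubernetes Secret in your namespace, and secretKey is the key inside it that holds an AWS credentials file. A sketch of such a secret, with names matching the snippet above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials     # matches secretName above
type: Opaque
stringData:
  credentials: |            # matches secretKey above
    [default]
    aws_access_key_id = YOUR_ACCESS_KEY_ID
    aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```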
Hi,
I am new to Serverless.
I just want to know if there is any way to create S3 buckets conditionally using the serverless if-else plugin.
What I am trying to achieve is to create a different S3 bucket per stage.
For example, if the script (serverless.yml) runs in the production stage it should create one S3 bucket, and if it runs in the development stage it should create a different bucket, according to the if-else statement.
Please let me know if there is any way to achieve this.
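For what it's worth, per-stage buckets don't strictly need an if-else plugin; a common pattern is a per-stage map under custom, looked up with the current stage (the bucket names below are made up for illustration):

```yaml
custom:
  stage: ${opt:stage, 'dev'}
  bucketName:
    dev: my-service-dev-bucket
    production: my-service-prod-bucket

resources:
  Resources:
    StageBucket:
      Type: AWS::S3::Bucket
      Properties:
        # resolves to the entry matching the deployed stage
        BucketName: ${self:custom.bucketName.${self:custom.stage}}
```

Deploying with sls deploy --stage production then picks the production name automatically.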