Cyril Scetbon
@cscetbon
$ k get configmaps kubeless-config -n kubeless                                                                                                                                  
NAME              DATA   AGE
kubeless-config   10     2m7s

$ serverless deploy                                                                                                                                                             
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Unable to find required information for authenticating against the cluster
Unable to find required information for authenticating against the cluster

  Error --------------------------------------------------

  Error: Request returned: 403 - configmaps "kubeless-config" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kubeless"
    Response: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"kubeless-config\" is forbidden: User \"system:anonymous\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"kubeless\"","reason":"Forbidden","details":{"name":"kubeless-config","kind":"configmaps"},"code":403}
  false
any idea what I'm missing?
Frank Lemanschik
@frank-dspeed
Error: Request returned: 403 - configmaps "kubeless-config" is forbidden: User "system:anonymous"
forbidden means unauthorized, not "not found"
anyway, try this
k get pods -n kubeless && serverless deploy
i think k is your kubectl alias
oh
and please run serverless deploy -v
with verbose output you will see some info about expired tokens
Frank Lemanschik
@frank-dspeed
or maybe you'll find another, more detailed error via -v, but it looks like a bug
Cyril Scetbon
@cscetbon
not better
Frank Lemanschik
@frank-dspeed
what is the output with -v?
ok, I'm going to sleep, good night my friend
Cyril Scetbon
@cscetbon
gn
Francisco Ramón Sánchez Favela
@safv12
Hi guys, I would like to learn how to create a plugin for Serverless, but I want to build something that helps others. Any ideas for a serverless plugin?
baggednismo
@baggednismo

I am getting Access Denied errors from my AWS APIs. When I scope an iamRoleStatement down to a specific table, all lambdas that use that table are OK. If I try to wildcard it, because I have multiple tables I'm creating with lambdas and endpoints, all lambdas fail with access denied.

This works for all lambdas that use the DynamoDB table "merchants":
Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/merchants}"

This breaks all lambdas, including ones that worked previously when the tables were defined explicitly:
Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:*}"

Do I need to define all the tables explicitly? What does that syntax look like?
Francisco Ramón Sánchez Favela
@safv12

Did you try keeping the table keyword?

arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/*}

baggednismo
@baggednismo
Same error: not authorized to perform: dynamodb:PutItem on resource: arn:aws:dynamodb:us-east-1:xxxx:table/merchants
provider:
  name: aws
  runtime: nodejs12.x
  environment:
    MERCHANTS_TABLE: merchants
    PRODUCTS_TABLE: products
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/*}"
baggednismo
@baggednismo
I don't see many examples that show multiple tables per serverless project. Is that just something I'm expected to keep separate?
"Resource tells what resources the permission statement affects. The value is an ARN or list of ARNs to which the statement applies. This lets you give permissions on a more granular basis, such as limiting the ability to query a particular DynamoDB table rather than granting the ability to query all DynamoDB tables in your account. Like the Action element, you can use the wildcard * to apply the statement to all resources in your account."
this shows the previous that I used to allow all tables
https://serverless.com/framework/docs/providers/aws/guide/functions#permissions
Francisco Ramón Sánchez Favela
@safv12
I'm looking at this document to see how we can generalize to all tables:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/using-identity-based-policies.html
"If you replace the table name in the resource ARN (Books) with a wildcard character (*), you allow any DynamoDB actions on all tables in the account. Carefully consider the security implications if you decide to do this."
It seems that you have a } char at the end of the resource, is this ok?
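For reference, the same statement with that trailing } removed (everything else unchanged from the snippet above) would be:

```yaml
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:Query
      - dynamodb:Scan
      - dynamodb:GetItem
      - dynamodb:PutItem
      - dynamodb:UpdateItem
      - dynamodb:DeleteItem
    Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/*"
```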
baggednismo
@baggednismo
@safv12 yeah... that would be the issue. Thanks for pointing that out.
Francisco Ramón Sánchez Favela
@safv12
Great @baggednismo :)
Steven Johnson
@stevenj
Is serverless compatible with the AWS CLI V2, or do I have to use V1? Or doesn't it matter?
Mickaël CASSY
@Micka33

Hi Guys,
I am deploying my functions to Google Cloud Functions.
Since the last update, I have been facing a very annoying issue.

  Your Environment Information ---------------------------
     Operating System:          darwin
     Node Version:              12.13.1
     Framework Version:         1.67.0 (standalone)
     Plugin Version:            3.5.0
     SDK Version:               2.3.0
     Components Version:        2.22.3

When deploying a new function, it is not publicly accessible (Cloud Functions Invoker -> allUsers).
I can't find in the documentation how to set this up in my serverless.yml file, so I added it directly through the Google Cloud Console UI.
However, when running serverless deploy again, I get the following error:

...
...
Serverless: Artifacts successfully uploaded...
Serverless: Updating deployment...
Serverless: Checking deployment update progress...
.....
  Error --------------------------------------------------

  Error: Deployment failed: RESOURCE_ERROR

       {"ResourceType":"gcp-types/cloudfunctions-v1:projects.locations.functions","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid JSON payload received. Unknown name \"location\" at 'function': Cannot find field.","status":"INVALID_ARGUMENT","details":[{"@type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"field":"function","description":"Invalid JSON payload received. Unknown name \"location\" at 'function': Cannot find field."}]}],"statusMessage":"Bad Request","requestPath":"https://cloudfunctions.googleapis.com/v1/projects/airtable-XXXXXX/locations/europe-west1/functions/airtable-rows-readonly-dev-airtable-rows-readonly","httpMethod":"PATCH"}}
      at throwErrorIfDeploymentFails (/Users/username/Documents/remotal/remotal-airtable-rows-readonly/node_modules/serverless-google-cloudfunctions/shared/monitorDeployment.js:71:11)
      at /Users/username/Documents/remotal/remotal-airtable-rows-readonly/node_modules/serverless-google-cloudfunctions/shared/monitorDeployment.js:42:17
      at processTicksAndRejections (internal/process/task_queues.js:93:5)
  From previous event:
...
...

Do any of you know how to make a GCF publicly available via serverless deploy?

Manoj Jadhav
@jadhavmanoj
Hi All,
I am using Serverless with a Flask app on AWS Lambda. I want to populate some environment variables for the Flask app, like DB info, which comes from different services.
I don't see any option to call a custom function before the Flask app is initialized with sls wsgi.
Can anyone help? Or is this something I am doing wrong?
Steven Johnson
@stevenj
Is there a way to specify multiple "methods" for an event, other than by using ANY? For example, I want an API call that only responds to GET and POST, but nothing else.
Mayur Bhatt
@go4mayur_twitter

Hi Everyone,

I need help with my Node.js script to import a large CSV file into an IBM Cloudant database. With the csvtojson package I managed to iterate over all the CSV rows and convert them to JSON objects, but when I make GET requests in a loop to check each CSV row's unique id, after about 1000 requests the script ends with a memory heap error: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory. Can anyone please help me with that? The data I am trying to import is 1.3 million records, from a CSV file uploaded to an AWS S3 bucket.

Any help will be much appreciated.

Thanks in advance.
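A pattern that often avoids this kind of heap blow-up is to process the rows in bounded batches instead of firing all GET requests at once. A minimal sketch, assuming Node.js; processInBatches and fetchRow are illustrative names, with fetchRow standing in for the real Cloudant GET:

```javascript
// Process items in fixed-size batches so only `batchSize` requests are
// in flight at once; memory use stays bounded by the batch, not the file.
async function processInBatches(items, batchSize, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Wait for the whole batch before starting the next one.
    const settled = await Promise.all(batch.map(worker));
    results.push(...settled);
  }
  return results;
}

// Example with a dummy worker; fetchRow stands in for an HTTP GET.
async function main() {
  const ids = Array.from({ length: 10 }, (_, i) => i);
  const fetchRow = async (id) => id * 2; // pretend this is a Cloudant lookup
  const rows = await processInBatches(ids, 3, fetchRow);
  console.log(rows.length); // 10
}

main();
```

With 1.3M rows you would also want to stream the CSV rather than materialize every row first, but bounding the in-flight requests alone usually stops the heap from growing without limit.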

Nicky Mølholm
@moelholm_twitter

hi there, I have a custom arg like this: sls --suffix '-ce3231' .... when I try to reference it in serverless.yml, I get an error message similar to "Trying to populate non string value into a string for variable xxxx .... Please make sure the value of the property is a string"

any idea how I can work around this?

Mickaël CASSY
@Micka33
@go4mayur_twitter Can you share your script?
@stevenj on AWS Lambda or Google Cloud Function?
Mickaël CASSY
@Micka33
@stevenj Google Cloud Functions directly sends you the requests.
This is what I use:
exports.http = (request, response) => {
  var p =  new Promise( (resolve) => {
    // Set CORS headers for preflight requests
    // Allows GETs from any origin with the Content-Type header
    // and caches preflight response for 3600s
    response.set('Access-Control-Allow-Origin', '*');
    if (request.method === 'OPTIONS') {
      // Send response to OPTIONS requests
      response.set('Access-Control-Allow-Headers', 'api_token,Content-Type');
      response.set('Access-Control-Allow-Methods', 'POST');
      response.set('Access-Control-Max-Age', '3600');
      response.sendStatus(204);
      resolve();
    }
    else if (request.method === 'POST') {
      check_token_async(request, response).then(() => {
        var param = get_param(request);
        handle_post_async(param, response).then(resolve).catch(resolve);
      }).catch((data) => {
          response.set('Content-Type', 'application/json');
          response.status(400);
          response.send({results:data});
          resolve();
      });
    }
    else {
      response.set('Content-Type', 'application/json');
      response.status(404);
      response.send({results:`Method ${request.method} not allowed.`});
      resolve();
    }
  });
  p.then(()=>{}).catch(error => {
    response.set('Content-Type', 'application/json');
    response.status(500);
    response.send({results:error});
  });
  return p;
};

function check_token_async(request, response) {
  return new Promise((resolve, reject) => {
    // Check the API token here. Note: this placeholder must call
    // resolve() or reject(), otherwise the request never completes.
    // if (token is good) {
    //   resolve();
    // } else {
    //   reject({message:'Bad token.'});
    // }
  });
}

function handle_post_async(param, response) {
  return new Promise((resolve, reject) => {
    // require files and process a response here
    //...
    //
  });
}

function get_param(req) {
  var param;

  // application/octet-stream and text/plain both carry a JSON string body.
  switch (req.get('content-type')) {
    case 'application/json':
      param = req.body;
      break;
    case 'application/octet-stream':
    case 'text/plain':
      param = JSON.parse(req.body.toString());
      break;
  }
  return param;
}
Dhaval Soni
@dhavalsoni2001
Hey guys, I am trying to upload a file to AWS S3 via a presigned URL. I am able to create the URL and upload the file via the PUT method, but when I download it, it is somehow corrupted... (I am uploading the file using Postman form-data)
Joobi
@joobi10_twitter
Hi
what are some best practices for using ORMs with serverless? (planning to use typeorm)
Francisco Ramón Sánchez Favela
@safv12
AWS Lambda now supports .NET Core 3.1. Is this transparent for the Serverless Framework, or does it require some adjustments to support it?
VED-StuartMorris
@VED-StuartMorris
Eugene Tolbakov
@etolbakov

Hey folks @here!
Does anyone use Condition for function handlers?
When I specify the condition, my λ is skipped (which is expected) but the AWS::Lambda::EventSourceMapping is still created (while it should not be, afaik)
The error is the following:

  Template format error: Unresolved resource dependencies [HandlerKinesisLambdaFunction] in the Resources block of the template

any thoughts on how this could be worked around?
P.S. my sls version is 1.67.0

Eugene Tolbakov
@etolbakov
For the above ^
Looks like I've found a workaround: explicitly declare the EventSourceMapping resource (effectively overriding it) with the condition, and all should work
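A sketch of that override, assuming a Kinesis-triggered function whose generated logical ID is HandlerKinesisLambdaFunction (as in the error above); the mapping's logical ID, the condition name, and the stream ARN below are illustrative and must match what Serverless actually generates in the compiled CloudFormation template:

```yaml
resources:
  Resources:
    # Re-declare the generated EventSourceMapping so it carries the same
    # Condition as the function itself (names here are illustrative).
    HandlerKinesisEventSourceMappingKinesisMyStream:
      Type: AWS::Lambda::EventSourceMapping
      Condition: DeployHandlerKinesis
      Properties:
        EventSourceArn: arn:aws:kinesis:us-east-1:123456789012:stream/MyStream
        FunctionName:
          Fn::GetAtt: [HandlerKinesisLambdaFunction, Arn]
        StartingPosition: TRIM_HORIZON
```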
abinhho
@abinhho
I got an InvalidAccessKeyId error after deploying to AWS using serverless. Very weird: only s3.upload gets the error, s3.putObject still works fine. Both upload and putObject work fine with serverless-offline. Any help!
anshul1790
@anshul1790

Hi there, I'm facing an issue when recreating the stack using YAML. The requirement is to add a lambda invocation on S3 object creation. The initial deployment works fine, but subsequent deployments give an error:
Failed to create resource. Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type. See details in CloudWatch Log:

# Lambda functions
functions:
  ExperimentPlayVer04:
    name : experiment-play-ver04 # actual name in AWS
    handler: lambdaMain.lambda_handler
    role: ExperimentPlayLambdaRoleVer04
    events:
      - s3:
          bucket: ${self:custom.bucketRef.${self:provider.stage}}
          event: s3:ObjectCreated:*
          rules:
            - prefix: test-path/
            - suffix: .csv

logged the issue here as well:
https://forum.serverless.com/t/issue-in-creating-the-lambda-invocation-from-s3-bucket/10982

Alex v
@alexvdvalk
Hey Everyone. I'm working with AWS and API Gateway REST APIs. I want to allow the GET and POST methods on my serverless function. Is this possible, or do I have to use the ANY method?
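With the AWS provider, one common approach (a sketch; the path, function, and handler names here are placeholders) is to attach two http events, one per method, to the same function:

```yaml
functions:
  items:
    handler: handler.items
    events:
      - http:
          path: items
          method: get
      - http:
          path: items
          method: post
```

API Gateway then only creates the GET and POST methods for that path, so other verbs are rejected without needing ANY.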