    Michael Robinson
    t2.medium btw
    Michael Robinson
    The problem went away after I terminated the EC2 container instance in the ECS cluster
    Michael Robinson
    I’m still getting write /dev/stdout: resource temporarily unavailable. Nobody else has this issue?
    Michael Robinson
    what else might cause this?
    Ed Lim
    quick question, I'm attempting to convert the lambci CloudFormation template to Terraform. Just wanted to ask if you foresee any issues if I went down this route
    it appears you use a custom resource in cloudformation to trigger lambda to update the github hooks?
    Hey everyone. This is cool. Was wondering if LambCI can easily point to other git sources instead of Github. I use Gitlab
    Michael Hart
    @mrjk05 it's not so much lambci pointing at other sources – it's more the other way around! At the moment it doesn't appear that Gitlab supports SNS, which is what LambCI uses as a Lambda hook to get notified of git updates: https://gitlab.com/gitlab-org/gitlab-ce/issues/3486
    It would be possible to integrate with GitLab without SNS – using API Gateway directly, in front of Lambda – but that hook would still need to be built
    David Aktary
    hey @mhart - I've never been able to get a build to work. I keep getting
    npm ERR! tar.unpack untar error /tmp/lambci/home/.npm/lodash/4.17.4/package.tgz
    npm ERR! tar.unpack untar error /tmp/lambci/home/.npm/lodash/4.17.4/package.tgz
    npm ERR! tar.unpack untar error /tmp/lambci/home/.npm/core-js/2.4.1/package.tgz
    Any ideas what's going on?
    Michael Hart
    @aktary is it running out of space perhaps?
    Lambda only has 500MB available – so if your deps use more than that with npm@2 then you're outta luck. I'm going to upgrade to using the node6.10 runtime at some point soon, which has npm3 on it by default – which should use less space when npm installing due to its deduping. You could try to use nave right now to achieve this too: https://github.com/lambci/lambci#nodejs
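Michael's nave suggestion could be sketched as a .lambci.json override. The ~/init/nave path and exact invocation here are assumptions modeled on the ~/init/gcc init-script pattern that appears later in this log; the README linked above has the canonical form.

```json
{
  "cmd": ". ~/init/nave && nave use 6 sh -c 'npm install && npm test'"
}
```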
    David Aktary
    Hey @mhart, if my repo has lambdas in a sub-directory, is there a way to point lambCI at just the lambdas?
    Cory Mawhorter
    anyone have any luck getting private git(hub) npm dependencies building with nodejs? this seems to be the unsolvable CI nodejs problem...
    Cory Mawhorter
    For the record, it seems the only way to get it working (anywhere) is using git+https a la git+https://<token>:x-oauth-basic@github.com/<user>/<repo>.git. lambci would have to preprocess package.json and replace all github: or git+ssh...github entries with the access token version.
    though... that only works for direct deps. hmmm.
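Cory's preprocessing idea could be sketched as a small shell step run before npm install. The token variable, file names, and sed pattern are illustrative assumptions, and as he notes it only handles the "github:user/repo" shorthand for direct deps.

```shell
#!/bin/sh
# Sketch: rewrite "github:user/repo" deps to token-authenticated git+https
# URLs before npm install. GITHUB_TOKEN would come from the build env;
# "demo-token" is a placeholder default so the demo below runs.
set -eu
cd "$(mktemp -d)"
TOKEN="${GITHUB_TOKEN:-demo-token}"

# Demo input: one direct dependency using the github: shorthand
cat > package.json <<'EOF'
{
  "dependencies": {
    "mylib": "github:someuser/mylib"
  }
}
EOF

# Replace each "github:user/repo" entry with the git+https token form
sed -i.bak -E \
  "s|\"github:([^\"]+)\"|\"git+https://${TOKEN}:x-oauth-basic@github.com/\1.git\"|g" \
  package.json

cat package.json
```

Transitive deps declared the same way would still fail, which is the "only works for direct deps" caveat above.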
    Kyle Campbell
    Does anyone here have any experience injecting NPM_TOKEN into docker build using --build-args?
    I'd rather not hardcode my NPM_TOKEN in the build-args or the docker file
    I opened an issue about it on the lambci repo, I think it's an easy fix.
    feel free to ping me if I can help with it
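One common pattern for Kyle's question (a sketch, not LambCI's documented method) is to declare the token as a build ARG and write a throwaway .npmrc during the install layer, passing the value from the environment at build time. Image and file names are hypothetical; note that build-arg values can still show up in docker history, so treat this as a convenience, not a security boundary.

```shell
#!/bin/sh
# Sketch: inject NPM_TOKEN at build time without hard-coding it anywhere.
set -eu
cd "$(mktemp -d)"

# Hypothetical Dockerfile: ARG + a throwaway .npmrc for the install step
cat > Dockerfile <<'EOF'
FROM mhart/alpine-node:6
ARG NPM_TOKEN
WORKDIR /app
COPY package.json .
# Token exists only while npm install runs, then the file is removed
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc && \
    npm install && \
    rm -f .npmrc
EOF

# The value comes from the caller's environment, not from any file:
echo 'docker build --build-arg NPM_TOKEN="$NPM_TOKEN" -t mybuild .'
```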
    yesterday lambci started to throw weird errors when using gcc. my build cmd looks like this: . ~/init/gcc && npm install && npm run test. the output is:
    $ . ~/init/gcc && npm install && npm run test
    Installing GCC 4.8.5...
    ++ curl -sSL https://lambci.s3.amazonaws.com/binaries/gcc-4.8.5.tgz
    ++ tar -xz -C /tmp
    ++ set +x
    GCC setup complete
    /bin/bash: line 1:    33 Segmentation fault      (core dumped) npm install
    Build #1 failed: Command ". ~/init/gcc && npm install && npm run test" failed with code 139
    if i remove . ~/init/gcc part, then build will succeed
    does anyone know what's wrong? :(
    Stenio Ferreira
    How are you guys accessing secrets from the Docker container? I am particularly interested in getting the AWS keys
    I updated the Docker image to expect those as parameters on runbuild.sh, and updated the Lambda function to pass those env vars in the dockerBuild function
    I am hosting the revised image locally and redeployed the Lambda function. However, ECS doesn't output any logs, so I might have messed something up. Wondering if there is an easier approach (other than hardcoding on lambci.json under the Docker configs)
    I just want to deploy whatever I build and test to S3... can't do it on Lambda alone because I hit the disk size limit
    Michael Hart
    @stenio123 the ECS container will be running under an IAM Role already
    If you're using the aws-sdk for example, it will just get the credentials automatically
    @interisti that seg fault error should be fixed now – it was due to updates on the Lambda system that they did without warning. Let me know if it works for you now or not
    Stenio Ferreira
    @mhart this is the Dockerfile.test in my repo
    FROM mhart/alpine-node:6
    RUN apk update && \
        apk upgrade && \
        apk add \
            util-linux \
            pciutils \
            usbutils \
            coreutils \
            binutils \
            findutils \
            grep \
            curl \
            bash
    RUN apk add \
            python \
            py-pip \
            && \
        pip install --upgrade awscli
    WORKDIR /tmp/lambci/build
    USER root
    # Build node_modules
    ADD  package.json /buildserver/
    RUN cd /buildserver && npm install
    # Now add the rest, the above layers will be cached if package.json doesn't change
    ADD . /tmp/lambci/build
    # Move npm modules over
    RUN rm -rf /tmp/lambci/build/server/node_modules && mv /buildserver/node_modules /tmp/lambci/build/server/
    RUN  cd /tmp/lambci/build && ./lambci.sh
    and in lambci.sh I have a bash script that tests and builds the code. The final step is to call an aws s3 sync... which is why I need the credentials
    Michael Hart
    I'm not sure what all the buildserver stuff is? Is this running on ECS and being triggered by Lambda?
    Stenio Ferreira
    yes. It is just because the files in my repo are not in the root directory, they are inside a "server" directory
    It builds fine; the only problem is I can't deploy to S3 because I don't have access to the credentials. Not sure how to rely on the IAM role for that
    When I ran this from Lambda (no Docker), it worked because the credentials are available as env variables. But I hit the 500MB limit. And in Docker the secret env vars are not passed
    Michael Hart
    the ec2 server should be running with an iam role – any time you use the aws sdk (and cli) on that server, it should pull the credentials for that iam role from the ec2 metadata
    so you'll just need to give that role access to whatever s3 bucket you need
    If you're using the CloudFormation template, you can just update the permissions from there – and update your stack: https://github.com/lambci/ecs/blob/master/cluster.template#L97-L137
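The permission update Michael describes could be sketched as an extra statement on the instance role's policy in that template (the bucket name here is hypothetical):

```json
{
  "Effect": "Allow",
  "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
  "Resource": [
    "arn:aws:s3:::my-deploy-bucket",
    "arn:aws:s3:::my-deploy-bucket/*"
  ]
}
```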
    Stenio Ferreira
    @mhart OMG you are right lol. I got caught up in other errors and thought that was the problem. I think now I know what is wrong, and it is not the lack of AWS credentials. Thank you!
    Stenio Ferreira
    @mhart I am trying to understand what controls the constraints of how many concurrent deployments are possible when using lambci/ecs.
    On the CloudFormation template, the InstanceType parameter will determine the disk space/available computing power.
    How about the AutoScalingGroup properties? Do the defaults mean that only 1 build can be run at the same time? If I change that do I also need to change the CreationPolicy.ResourceSignal?
    Another thing is I have run into a "Thin pool" error, which went away once I cleaned the old Docker images. How do you deal with that, do you have a cron job on the ECS instance host that does the cleanup regularly?
    Stenio Ferreira
    The ECS agent supposedly does the cleanup since version 1.14.1; however, the problem only goes away once I run docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
    I believe it is because I am building the Dockerfile.test every time. I have seen that lambci/ecs runs build followed by run, but it is not clear to me how to pass a shell script to the run stage. Maybe I could leverage $LAMBCI_DOCKER_RUN_ARGS?
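Until the ECS agent's own cleanup kicks in, the docker rmi workaround above can be wrapped in a cron job on the host. Paths and schedule here are assumptions; adapt them to your AMI.

```shell
#!/bin/sh
# Sketch: hourly cleanup of dangling images on the ECS instance host.
set -eu
cd "$(mktemp -d)"

# The cleanup script: removes images left dangling by repeated
# Dockerfile.test builds (same docker rmi invocation as in the chat).
cat > docker-cleanup.sh <<'EOF'
#!/bin/sh
IMAGES=$(docker images --filter "dangling=true" -q --no-trunc)
[ -n "$IMAGES" ] && docker rmi $IMAGES || true
EOF
chmod +x docker-cleanup.sh

# A crontab line to run it hourly (install via crontab -e or /etc/cron.d)
echo '0 * * * * /usr/local/bin/docker-cleanup.sh' > cron-entry
cat docker-cleanup.sh cron-entry
```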
    @mhart we removed that gcc step, because dynalite now downloads the prebuilt image.
    Craig Steinberger
    Anyone interested in a PR that would help with deploying lambci from your own S3 buckets, rather than the public ones? It seems to need some parameterization in the publish.sh script, plus a step to create the buckets in the first place.
    Rohit Verma
    Is there any GitLab support, or a PoC of it? Has anyone tried that?
    Amrit Gill
    New to lambci and wanted to know if it would be possible to pull Lambda src from git, zip it, and push it to an S3 bucket? If so, can I use the lambci installation CFN template, or would I have to customize it?
    Benjamin Baldivia
    Having a bit of trouble just getting started with this. I attempted to use "launch stack" as well as a few other ways (all really used the same template though), and I get an error on "Custom::ConfigUpdater" every time. It looks like the ConfigTable is not created yet at the point "Custom::ConfigUpdater" runs. I adjusted the template and added "DependsOn: ["ConfigTable"]" for "ConfigUpdater", but the result was a completely different error: "Failed to create resource. User: arn:aws:sts::837930595040:assumed-role/lambci-private-LambdaExecution-LR33W5LBSV98/lambci-private-build is not authorized to perform: iam:PassRole on resource: arn:aws:iam::837930595040:role/lambci-private-SnsFailures-1Q43HAMIVMMR5". I am honestly not very familiar with CloudFormation and all the config options that seem to be available. Is nobody else having this issue with this stack?