Simon Woldemichael
@swoldemi
@swtch1 Same error for v2, without the empty endpoint this time:
"BucketRegionError: incorrect region, the bucket is not in 'us-east-2' region
status code: 301, "
https://play.golang.org/p/sGvlNRESryV
Josh
@swtch1
Thanks @swoldemi
Simon Woldemichael
@swoldemi
@swtch1 Np! You should also be able to provide your own endpoints.ResolverFunc to configure the client for one region but use a different region for a particular service; that's probably how the aws cli manages it. See "Using Custom Endpoints": https://docs.aws.amazon.com/sdk-for-go/v2/api/aws/endpoints/
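For reference, a rough sketch of a custom resolver using the v1 SDK's endpoints.ResolverFunc (the v2 endpoints package linked above follows the same idea); the regions and endpoint URL here are just placeholders:

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/endpoints"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        // Send S3 calls to us-west-2 while every other service keeps the
        // session's default region.
        resolver := endpoints.ResolverFunc(func(service, region string, opts ...func(*endpoints.Options)) (endpoints.ResolvedEndpoint, error) {
            if service == endpoints.S3ServiceID {
                return endpoints.ResolvedEndpoint{
                    URL:           "https://s3.us-west-2.amazonaws.com",
                    SigningRegion: "us-west-2",
                }, nil
            }
            // Fall back to the SDK's default resolution for everything else.
            return endpoints.DefaultResolver().EndpointFor(service, region, opts...)
        })

        sess := session.Must(session.NewSession(&aws.Config{
            Region:           aws.String("us-east-1"),
            EndpointResolver: resolver,
        }))

        svc := s3.New(sess)
        fmt.Println(svc.ClientInfo.SigningRegion) // "us-west-2"
    }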
Qinusty
@Qinusty_gitlab
Hey, I'm looking at an issue in https://github.com/google/go-cloud, which makes use of aws-sdk-go for S3. The blob writing functionality seems to support multipart uploads through the s3manager package, but it fails to copy files over the 5 GB limit. Can anyone point me in the direction of how to use the multipart Upload Part Copy API in this SDK?
Simon Woldemichael
@swoldemi
@Qinusty_gitlab What sort of failure/error is occurring? There is a 5 TB limit on the Multipart Upload API. go-cloud uses s3manager.Uploader.UploadWithContext, which performs a multipart upload
Qinusty
@Qinusty_gitlab
I am making use of CopyObjectWithContext rather than the s3manager; I hadn't spotted a nice way to use the s3manager for performing a copy operation
Simon Woldemichael
@swoldemi
@Qinusty_gitlab I misunderstood your question. You would want to use s3.UploadPartCopy. I don't think s3manager.Downloader supports "copying" between buckets, and downloading to the host and re-uploading with Uploader is expensive. Let me see if I can get you an example
Simon Woldemichael
@swoldemi
@Qinusty_gitlab This should be a good start https://play.golang.org/p/NQe_pD_q0xZ
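In rough outline, the flow looks like this (an untested sketch; bucket and key names are placeholders, and only a single part is copied here):

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func main() {
        svc := s3.New(session.Must(session.NewSession()))

        // 1. Start a multipart upload on the destination object.
        create, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
            Bucket: aws.String("dest-bucket"),
            Key:    aws.String("dest-key"),
        })
        if err != nil {
            panic(err)
        }

        // 2. Copy a byte range of the source object as part 1; repeat per chunk
        //    (each part other than the last must be between 5 MiB and 5 GiB).
        part, err := svc.UploadPartCopy(&s3.UploadPartCopyInput{
            Bucket:          aws.String("dest-bucket"),
            Key:             aws.String("dest-key"),
            UploadId:        create.UploadId,
            PartNumber:      aws.Int64(1),
            CopySource:      aws.String("source-bucket/source-key"),
            CopySourceRange: aws.String("bytes=0-5368709119"), // first 5 GiB
        })
        if err != nil {
            panic(err)
        }

        // 3. Complete the upload, listing every copied part in order.
        _, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
            Bucket:   aws.String("dest-bucket"),
            Key:      aws.String("dest-key"),
            UploadId: create.UploadId,
            MultipartUpload: &s3.CompletedMultipartUpload{
                Parts: []*s3.CompletedPart{{
                    ETag:       part.CopyPartResult.ETag,
                    PartNumber: aws.Int64(1),
                }},
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("copy complete")
    }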
postry
@postry
Hi, I am trying to create a simple project in Go where I read/write my IoT shadow. Where do I start?
Pranav Kandarpa
@pranavkandarpa_twitter
Is there a function in the AWS SDK for Go I can use to see if an ECR image has already been scanned for vulnerabilities?
Simon Woldemichael
@swoldemi
@pranavkandarpa_twitter DescribeImageScanFindings should return an error with code ErrCodeScanNotFoundException if the image you provided does not have any scan data:
https://docs.aws.amazon.com/sdk-for-go/api/service/ecr/#DescribeImageScanFindings
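Something along these lines (untested; repository name and tag are placeholders):

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/awserr"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ecr"
    )

    func main() {
        svc := ecr.New(session.Must(session.NewSession()))

        // Ask for scan findings on a specific image tag.
        _, err := svc.DescribeImageScanFindings(&ecr.DescribeImageScanFindingsInput{
            RepositoryName: aws.String("my-repo"),
            ImageId:        &ecr.ImageIdentifier{ImageTag: aws.String("latest")},
        })
        if aerr, ok := err.(awserr.Error); ok && aerr.Code() == ecr.ErrCodeScanNotFoundException {
            fmt.Println("image has not been scanned yet")
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Println("scan findings exist for this image")
    }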
Pranav Kandarpa
@pranavkandarpa_twitter
Okay, thank you @swoldemi
Qinusty
@Qinusty_gitlab
@swoldemi That's great, thanks for the example. I'll take a look at using this as a working solution to the problem
Simon Woldemichael
@swoldemi
@pranavkandarpa_twitter @Qinusty_gitlab No problem!
Pranav Kandarpa
@pranavkandarpa_twitter
How do you find a repository in CodeCommit using the commit id?
Simon Woldemichael
@swoldemi

@pranavkandarpa_twitter I don't think that's possible. The best I can think of is a linear search, with some degree of concurrency if you really have a large number of repositories: list all repositories you have read access to with codecommit.ListRepositories, then call codecommit.GetCommit on each. If the commit doesn't exist in the repository, ErrCodeCommitIdDoesNotExistException will be returned. If you have multiple commits to check, use codecommit.BatchGetCommits.

https://docs.aws.amazon.com/sdk-for-go/api/service/codecommit/#CodeCommit.ListRepositories
https://docs.aws.amazon.com/sdk-for-go/api/service/codecommit/#CodeCommit.GetCommit
https://docs.aws.amazon.com/sdk-for-go/api/service/codecommit/#CodeCommit.BatchGetCommits
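A sequential, untested sketch of that search (the commit ID is a placeholder; you could fan the GetCommit calls out across goroutines if you have many repositories):

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/awserr"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/codecommit"
    )

    func main() {
        svc := codecommit.New(session.Must(session.NewSession()))
        commitID := "0123456789abcdef0123456789abcdef01234567" // placeholder

        // Walk every repository we can read and probe it for the commit.
        err := svc.ListRepositoriesPages(&codecommit.ListRepositoriesInput{},
            func(page *codecommit.ListRepositoriesOutput, last bool) bool {
                for _, repo := range page.Repositories {
                    _, err := svc.GetCommit(&codecommit.GetCommitInput{
                        RepositoryName: repo.RepositoryName,
                        CommitId:       aws.String(commitID),
                    })
                    if err == nil {
                        fmt.Println("commit found in:", aws.StringValue(repo.RepositoryName))
                        return false // stop paging
                    }
                    if aerr, ok := err.(awserr.Error); ok && aerr.Code() == codecommit.ErrCodeCommitIdDoesNotExistException {
                        continue // not in this repository
                    }
                    fmt.Println("skipping", aws.StringValue(repo.RepositoryName), "due to error:", err)
                }
                return true // keep paging
            })
        if err != nil {
            panic(err)
        }
    }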

Cristian Măgherușan-Stanciu
@cristim
I've seen that the Compute Optimizer API recently landed, but it doesn't show up in the API reference documentation
Simon Woldemichael
@swoldemi
@cristim There's an open issue about regenerating the documentation. I noticed the same thing when image scanning for ECR was released, but I'm not sure why it's happening: aws/aws-sdk-go#3013
Cristian Măgherușan-Stanciu
@cristim
Thanks
Iain Cox
@Iaincox
I have a Go app currently using Prisma. Having hit some client generation issues, I am thinking of migrating to AppSync. My backend is currently Postgres and I do have a .graphql file. Can I use this directly to create my endpoints?
Simon Woldemichael
@swoldemi

@Iaincox Yes, you are able to submit your own schema to AppSync [1]. The Go SDK method for this is StartSchemaCreation [2], but you can also use the CLI (and maybe the console?). As for using Postgres, I believe the only way that AppSync will support that data source is if you use Aurora Serverless [3]. From a blog [4]: "Today, AppSync supports NoSQL (Amazon DynamoDB), search (Amazon Elasticsearch Service), and relational (Amazon Aurora Serverless) data sources among others."

[1] https://docs.aws.amazon.com/appsync/latest/devguide/designing-your-schema.html
[2] https://docs.aws.amazon.com/sdk-for-go/api/service/appsync/#StartSchemaCreation
[3] https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-aurora-with-postgresql-compatibility-supports-serverless/
[4] https://aws.amazon.com/blogs/mobile/integrating-aws-appsync-neptune-elasticache/
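A minimal, untested sketch of the StartSchemaCreation call (the API ID and schema file path are placeholders):

    package main

    import (
        "fmt"
        "io/ioutil"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/appsync"
    )

    func main() {
        svc := appsync.New(session.Must(session.NewSession()))

        // Read the existing .graphql schema from disk.
        schema, err := ioutil.ReadFile("schema.graphql")
        if err != nil {
            panic(err)
        }

        // Kick off the (asynchronous) schema creation for an existing API.
        out, err := svc.StartSchemaCreation(&appsync.StartSchemaCreationInput{
            ApiId:      aws.String("your-appsync-api-id"),
            Definition: schema,
        })
        if err != nil {
            panic(err)
        }
        // Poll GetSchemaCreationStatus until the status is no longer PROCESSING.
        fmt.Println("schema creation status:", aws.StringValue(out.Status))
    }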

mikkergimenez
@mikkergimenez

Anyone have experience with the GetCostAndUsageWithResources method? I'm getting the very generic error:

"The query is invalid. Please refer to the documentation and revise the query."

here is my Input:

    groupBy := []*costexplorer.GroupDefinition{
        {Key: aws.String("NAME"), Type: aws.String("TAG")},
        {Key: aws.String("RESOURCE_ID"), Type: aws.String("DIMENSION")},
    }

    start := "2019-11-01"
    end := "2019-12-01"
    dateInterval := costexplorer.DateInterval{
        Start: &start,
        End:   &end,
    }

    getCostAndUsageWithResourcesInput := costexplorer.GetCostAndUsageWithResourcesInput{
        Granularity: aws.String("MONTHLY"),
        Metrics:     []*string{aws.String("AMORTIZED_COST")},
        GroupBy:     groupBy,
        TimePeriod:  &dateInterval,
        Filter: &costexplorer.Expression{
            Dimensions: &costexplorer.DimensionValues{
                Key: aws.String("SERVICE"),
                Values: []*string{
                    aws.String("Amazon Elastic Compute Cloud - Compute"),
                },
            },
        },
    }
Simon Woldemichael
@swoldemi
@mikkergimenez I think the vagueness may be a result of the data not being available yet, but I'm not sure. Did you recently opt in to the feature? The console says it may take up to 24 hours for the data to become available
Simon Woldemichael
@swoldemi
The docs also say valid values for the Metrics slice should be in PascalCase (AmortizedCost), but I tried it and it didn't make a difference. Doing a .String() of your input also seems to match up with the API documentation: https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_GetCostAndUsageWithResources.html#API_GetCostAndUsageWithResources_Examples
Jasdeep Singh
@jay-dee7
Guys, does anyone know how to install Elasticsearch plugins on AWS Elasticsearch Service?
Tristan Janicki
@TristanJanicki
nope
Does anyone know where to find the Password used in ConfirmForgotPasswordInput? It wasn't emailed to me; the email only contained the confirmation code
Arjun Mayilvaganan
@arjunmayilvaganan_gitlab

Imagine I'm working with S3 buckets situated across 3-4 regions. Is there a neat way to declare clients for such a situation?

It looks like I have to declare multiple clients, and still pass the bucket URL later as well :(

Simon Woldemichael
@swoldemi

@jay-dee7 Installing plugins to Amazon Elasticsearch Service works the same way as a self-hosted installation, but there are some limitations on what is supported: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-plugins.html

I couldn't find any guides on the AWS docs, but this Linode guide looks like a good start. Skip to the Elasticsearch Plugins section: https://www.linode.com/docs/databases/elasticsearch/a-guide-to-elasticsearch-plugins/#elasticsearch-plugins

Take note that you should not make your Elasticsearch domain public. I have very little experience with Elasticsearch but running it in a VPC and doing the administrative plugin work from an EC2 instance within the VPC should be good. If you already have a good setup then you can disregard this. An example of using some HTTP client (cURL, etc) to interact with the domain can be found here: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-search.html

@TristanJanicki The documentation really doesn't explain much here, but it looks like this is the new password that you want to set. Maybe this will be helpful: https://stackoverflow.com/questions/51428563/aws-cognito-resetting-user-password-documentation-seems-to-contradict-itself
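In other words, something like this (untested; client ID, username, code, and password are all placeholders):

    package main

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/cognitoidentityprovider"
    )

    func main() {
        svc := cognitoidentityprovider.New(session.Must(session.NewSession()))

        // Confirm the reset with the emailed code, supplying the new password
        // you want the account to have.
        _, err := svc.ConfirmForgotPassword(&cognitoidentityprovider.ConfirmForgotPasswordInput{
            ClientId:         aws.String("your-app-client-id"),
            Username:         aws.String("user@example.com"),
            ConfirmationCode: aws.String("123456"),
            Password:         aws.String("the-new-password"),
        })
        if err != nil {
            panic(err)
        }
    }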
Simon Woldemichael
@swoldemi

@arjunmayilvaganan_gitlab The simplest way to do that would be to change the region of a single client before you make the API call. For example, configure your client for us-east-1, interact with some bucket in us-east-1, then set svc.Config.Region = aws.String("us-east-2") (where svc is the S3 service client of type s3.S3 returned by s3.New(yourSession)). You can also use GetBucketLocation to make this a bit more dynamic, but you might have to do some extra work normalizing the returned value.

(The client embedded in type S3 has an exported Config which lets you set Region, EndpointResolver, and Endpoint) https://docs.aws.amazon.com/sdk-for-go/api/aws/client/#Client

https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#GetBucketLocation
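If you want the dynamic route, here's an untested sketch using s3manager.GetBucketRegion, which handles the normalization for you (the bucket name is a placeholder):

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    func main() {
        sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))

        // Look up the bucket's region, then build a client pinned to it.
        bucket := "some-bucket"
        region, err := s3manager.GetBucketRegion(aws.BackgroundContext(), sess, bucket, "us-east-1")
        if err != nil {
            panic(err)
        }

        svc := s3.New(sess, aws.NewConfig().WithRegion(region))
        out, err := svc.ListObjectsV2(&s3.ListObjectsV2Input{Bucket: aws.String(bucket)})
        if err != nil {
            panic(err)
        }
        fmt.Println("objects:", len(out.Contents))
    }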

Nayyara Samuel
@nayyara-samuel
I can't find a version of the Go SDK for AWS that actually has this method signature available:
func (c *KMS) Sign(input *SignInput) (*SignOutput, error)
Iain Cox
@Iaincox
I am looking to write a Go application to talk to AppSync and am looking for a good example; any ideas would be gratefully received
Iain Cox
@Iaincox
Hi, I have already created a schema using AppSync, made it available, and put some test data in DynamoDB. I can read this data with the query tool or from NodeJS, but I have not yet worked out how to build and execute a query using Go. Any examples would be appreciated
Mike Dalrymple
@mousedownmike
@nayyara-samuel I have aws-sdk-go v1.26.8 and it has the following implementation of that method:
func (c *KMS) Sign(input *SignInput) (*SignOutput, error) {
    req, out := c.SignRequest(input)
    return out, req.Send()
}
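For what it's worth, a quick untested usage sketch against that version (the key alias is a placeholder and the key must be an asymmetric signing key):

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/kms"
    )

    func main() {
        svc := kms.New(session.Must(session.NewSession()))

        // Sign a raw message with an asymmetric CMK created for SIGN_VERIFY.
        out, err := svc.Sign(&kms.SignInput{
            KeyId:            aws.String("alias/my-signing-key"),
            Message:          []byte("payload to sign"),
            MessageType:      aws.String("RAW"),
            SigningAlgorithm: aws.String("RSASSA_PKCS1_V1_5_SHA_256"),
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("signature: %x\n", out.Signature)
    }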
Harinath Selvaraj
@harinathselvaraj
Hi, I am doing a copy of JSON data from S3 to Redshift using the Redshift COPY command. It takes a lot of time to parse the data since the data volume is high. I found that Redshift LOAD commands are fast. So, I am thinking of converting all JSON files (which have arrays in them) to a single file in tab-delimited format. Is this feasible using the Go API?
Mike Dalrymple
@mousedownmike
Converting data from JSON to TSV format should be pretty straightforward in Go. You would just be using the aws-sdk to access the JSON in S3 and (I assume) store the TSV back to S3 so you can run the LOAD from there.
I've found Go's csv package works well for basic tasks... it starts to fall down when reading data that isn't perfectly formatted.
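A bare-bones, untested sketch of the conversion step; it assumes a stream of flat JSON objects with made-up fields, so adapt the decoding and the columns to your actual layout:

    package main

    import (
        "encoding/csv"
        "encoding/json"
        "os"
        "strconv"
    )

    // Hypothetical record shape; adjust the fields to match your JSON.
    type record struct {
        ID    string  `json:"id"`
        Name  string  `json:"name"`
        Price float64 `json:"price"`
    }

    func main() {
        // Decode a stream of JSON objects (e.g. already downloaded from S3).
        in, err := os.Open("input.json")
        if err != nil {
            panic(err)
        }
        defer in.Close()

        out, err := os.Create("output.tsv")
        if err != nil {
            panic(err)
        }
        defer out.Close()

        w := csv.NewWriter(out)
        w.Comma = '\t' // tab-delimited instead of comma-delimited

        dec := json.NewDecoder(in)
        for dec.More() {
            var r record
            if err := dec.Decode(&r); err != nil {
                panic(err)
            }
            row := []string{r.ID, r.Name, strconv.FormatFloat(r.Price, 'f', -1, 64)}
            if err := w.Write(row); err != nil {
                panic(err)
            }
        }
        w.Flush()
        if err := w.Error(); err != nil {
            panic(err)
        }
    }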
Dewayne Richardson
@dewrich
Looking for a basic example that works with Cognito + API Gateway (OAuth2), calling custom backend APIs with access tokens
Preferably in Go
nagarjuna gowtham karuturi
@gowthamkaruturi

Hi, I am currently working on an issue in one of our Go projects that reads AWS SQS messages, and I don't know the root cause.
Issue: a goroutine ends up in an unknown locked/dead state; it doesn't process anything and does none of the work it is supposed to do.
To explain the issue, we have two goroutines, r1 and r2, using a channel c1. The flow is below (error handling has been taken care of).
Goroutine r1 reads messages from SQS and
1. invokes a function to parse the SQS content and get the S3 details,
2. invokes a function to download the files from S3 using the details read in the previous step,
3. invokes a function to read the contents of the file and hand it to goroutine r2 for further processing by putting the content into channel c1.
Though the above scenario looks pretty simple, we have an issue where routine r1 suddenly ends up in a locked state, with no logs or trace of why this happened.
I have been punching in the dark to find the reason. The causes I could think of are:

AWS session issue: no clue; if this is the cause, I would be thankful if someone who has come across this kind of situation and solved it could share how.
Proxy issue: we have tested all the possible conditions by enabling and disabling the proxy, and it looks fine while testing.
Deadlock: which is not the case with the code flow we have; we don't see "fatal error: all goroutines are asleep - deadlock!" in the logs.
Any help is appreciated. Please let me know if any additional information is needed.

Mike Dalrymple
@mousedownmike
That sounds like you're just not reading from a channel that is receiving one of your messages. Hard to say for sure without seeing the code, but it doesn't sound like an issue with the AWS session.
nagarjuna gowtham karuturi
@gowthamkaruturi
@mousedownmike, the routine r1 which I mentioned has an infinite loop that continuously polls for SQS messages and pushes to channel c1 after completing everything mentioned in steps 1-3.
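For context, the setup is roughly this shape (a simplified, untested sketch with a placeholder queue URL); note that with an unbuffered c1, r1 blocks on the send whenever nothing is receiving:

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/sqs"
    )

    func main() {
        svc := sqs.New(session.Must(session.NewSession()))
        queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

        c1 := make(chan string) // unbuffered: r1 blocks here if r2 stops reading

        // r1: long-poll SQS and push message bodies into c1.
        go func() {
            for {
                out, err := svc.ReceiveMessage(&sqs.ReceiveMessageInput{
                    QueueUrl:            aws.String(queueURL),
                    MaxNumberOfMessages: aws.Int64(10),
                    WaitTimeSeconds:     aws.Int64(20),
                })
                if err != nil {
                    fmt.Println("receive error:", err)
                    continue
                }
                for _, m := range out.Messages {
                    c1 <- aws.StringValue(m.Body) // blocks until r2 receives
                }
            }
        }()

        // r2: consume from c1.
        for body := range c1 {
            fmt.Println("processing:", body)
        }
    }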
Mike Dalrymple
@mousedownmike
Is there anything reading off c1?
nagarjuna gowtham karuturi
@gowthamkaruturi
@mousedownmike, r2 is actually picking up from c1.
@gowthamkaruturi Do you have a code snippet or log output you can share? This sounds like something that can be done in a single goroutine, one for each new message on the queue. Is the session only being used to download the object from the bucket and to long-poll the queue?