    I am deploying a CloudFormation stack in a newly created sub-account using the CDK, but I got the error: Stack [stack1] already exists (Service: CloudFormation, Status Code: 400, Request ID: 8f6972f4-9f26-42ea-a3d3-56ade88fc75b). I checked that there is no stack named stack1 in my account. Why is this happening?
    1 reply
    How can I triage this?
    Can the Request ID (8f6972f4-9f26-42ea-a3d3-56ade88fc75b) help show what happened at that time?
    Sofia Oliveira
    Hello! Does anyone have an example of a Step Functions worker using SfnAsyncClient?
    Mohit Hapani
    Hello - Is there a way to run SelectObjectContent (S3 Select) on a specific version of an S3 object using its version ID? I can't find anything in the SelectObjectContent documentation about specifying a version ID, like the versionId field in the GetObject request. Any help is appreciated.
    2 replies
    I am using AWS SDK v2 to audit my S3 buckets, which are spread across different regions. Is there a way to create a region-agnostic S3 client, or can more than one region be added to an S3Client? Since S3 bucket names are globally unique, the S3Client should be able to figure out on its own which region a bucket is in. As I understand it, we can query a bucket for its location, but that is an extra network call.
    3 replies
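For what it's worth, since bucket names are global but each bucket lives in exactly one region, a common pattern is to resolve each bucket's region once (e.g. via a GetBucketLocation/HeadBucket call) and cache one client per region. A minimal generic sketch; `RegionalClientCache`, `resolveRegion`, and `clientFactory` are hypothetical stand-ins, not SDK types:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical helper: resolve each bucket's region once, then reuse one
// client per region. In a real SDK integration, resolveRegion would wrap
// a GetBucketLocation/HeadBucket call and clientFactory would wrap
// S3Client.builder().region(...).build().
class RegionalClientCache<C> {
    private final Map<String, String> bucketRegions = new ConcurrentHashMap<>();
    private final Map<String, C> clients = new ConcurrentHashMap<>();
    private final Function<String, String> resolveRegion;
    private final Function<String, C> clientFactory;

    RegionalClientCache(Function<String, String> resolveRegion,
                        Function<String, C> clientFactory) {
        this.resolveRegion = resolveRegion;
        this.clientFactory = clientFactory;
    }

    // The extra location lookup happens at most once per bucket.
    C clientFor(String bucket) {
        String region = bucketRegions.computeIfAbsent(bucket, resolveRegion);
        return clients.computeIfAbsent(region, clientFactory);
    }
}
```

This trades one extra call per bucket for correctness, and the cache amortizes it across an audit run.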
    Dave Brosius
    Hi folks, I'm using S3Client.builder()...build() and it's working fine, but I now need to configure proxy settings. I see documentation referring to a ClientConfiguration class, but I don't see that class, or how to install it with the builder. Can anyone point me in the right direction?
    8 replies
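For reference, ClientConfiguration belongs to the v1 SDK; in v2 (where S3Client.builder() lives) the proxy is configured on the HTTP client builder instead. A sketch, assuming the Apache HTTP client module (software.amazon.awssdk:apache-client) is on the classpath; the proxy endpoint is a placeholder:

```java
import java.net.URI;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.http.apache.ProxyConfiguration;
import software.amazon.awssdk.services.s3.S3Client;

public class ProxiedS3 {
    public static void main(String[] args) {
        // proxy.example.com:8080 is a placeholder endpoint.
        S3Client s3 = S3Client.builder()
                .httpClientBuilder(ApacheHttpClient.builder()
                        .proxyConfiguration(ProxyConfiguration.builder()
                                .endpoint(URI.create("http://proxy.example.com:8080"))
                                .build()))
                .build();
        s3.close();
    }
}
```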
    Gaurav Rawat
    A quick general question: are SQS-generated message IDs (UUIDs) unique? Can the consuming system use them as identifiers for message-uniqueness checks, say for S3 event notifications? Also, is there a chance of the consumer receiving a duplicate message ID from a queue on two different messages?
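On the dedup point: SQS standard queues are at-least-once, so the same message (carrying the same messageId) can be redelivered, while distinct messages are intended to get distinct IDs. A consumer-side idempotency check might look like the following sketch; `MessageDeduplicator` is a hypothetical helper, and a real system would persist the seen IDs (e.g. in DynamoDB) rather than keep them in memory:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical consumer-side idempotency check: drop redeliveries by
// remembering recently seen messageIds in a bounded, insertion-ordered set.
class MessageDeduplicator {
    private final Set<String> seen;

    MessageDeduplicator(int capacity) {
        // Evict the oldest entries so memory stays bounded.
        this.seen = Collections.newSetFromMap(new LinkedHashMap<String, Boolean>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > capacity;
            }
        });
    }

    /** Returns true the first time a messageId is observed, false on a redelivery. */
    boolean firstDelivery(String messageId) {
        return seen.add(messageId);
    }
}
```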
    Hi all, I have a very basic question about S3 buckets. The API to get the server-side encryption rules, s3Client.getBucketEncryption(bucketName).getServerSideEncryptionConfiguration().getRules(), returns a List<ServerSideEncryptionRule>. Since there is no option to select multiple server-side encryption types (the console has a radio button to choose between them), why is a list of rules with just one element returned? Would it not have been better API design to return a single ServerSideEncryptionRule instead of a list?
    Paolo Di Tommaso
    hi all, any suggestions for retrieving all buckets in a region without making n+1 API calls (one to list all buckets, plus a bucket-location call for each one)?
    Paolo Di Tommaso
    Hi all, I am trying to find a way to query an S3 bucket and check whether "Object-level logging" is enabled or disabled for that bucket. Any ideas on how I can do that?
    murtuza boxwala
    Hi all, I am trying to build a client application using Cognito, but I would like the user to be logged into multiple "workspaces" at the same time, like Slack. Amplify seems too restrictive because I think it stores the session as a singleton; is that correct? Do you think this should be possible using the Java SDK?
    Hi, I am creating a spot instance and get the instance ID returned as a string; now I need more details about this instance, like the IP address. Can anyone help me with how to do this? For an on-demand instance I get an Instance object returned at creation, which has all these details, but I am not able to do the same for a spot instance.
    2 replies
    Let's say at a path A/, I have uploaded F1, F2, ... F30, in that order.
    So now we have A/F1, A/F2, A/F3, A/F4, ... A/F30.
    Is there a way to upload objects to an S3 bucket at a path, say A/, while keeping the number of objects at that path fixed? For example, if I fix that number at 10, is there a way that when I upload F11, F1 gets deleted, and when I upload F12, F2 gets deleted, and so on (keeping the number of objects under A/ at a maximum of 10)?
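S3 has no built-in cap on the number of objects under a prefix (lifecycle rules work on age, not count), so the rotation has to be enforced client-side: after each upload, list the keys under the prefix and delete the oldest surplus. A sketch of just the pruning decision, assuming the caller lists the keys and orders them oldest-first (e.g. by LastModified); `PrefixPruner` is a hypothetical helper:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical pruning logic: given the keys under a prefix ordered
// oldest-first, decide which ones to delete so at most maxKeys remain.
// The actual listing and DeleteObjects calls would go around this.
class PrefixPruner {
    static List<String> keysToDelete(List<String> keysOldestFirst, int maxKeys) {
        int surplus = keysOldestFirst.size() - maxKeys;
        return surplus <= 0
                ? new ArrayList<>()
                : new ArrayList<>(keysOldestFirst.subList(0, surplus));
    }
}
```

Note that ordering by key name alone is unreliable once counters pass one digit (A/F10 sorts before A/F2), which is why ordering by upload time is assumed here.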
    Reza Nikoopour

    Hey All,

    I'm trying to create a signed request to use in the vault-java-driver (https://github.com/BetterCloud/vault-java-driver/blob/fef74b881b9f3620b68759f1b0d297591f80975e/src/main/java/com/bettercloud/vault/api/Auth.java#L838-L917)

    What's the correct way to go about creating a signed request and passing along the signed data?

    Using the Go SDK you could just create a request object and then call sign on it. I was hoping there would be something similar in Java.

    Hi all, I am trying to pass session tags with STSAssumeRoleSessionCredentialsProvider, but STSAssumeRoleSessionCredentialsProvider.Builder does not provide any way to set the tags that can be passed during the STS AssumeRole operation.
    I have submitted an enhancement request for the Java SDK for that. Is there presently any other way to approach this? I want to avoid writing token-renewal code, and use STSAssumeRoleSessionCredentialsProvider for that.
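Until the builder supports tags, one workaround sketch is to call AssumeRole directly (the AssumeRole API itself accepts session tags) and wrap the result in a small refresh-on-expiry cache, so calling code still avoids hand-rolled renewal. `RefreshingValue` below is a hypothetical generic helper; the `fetch` supplier is where the tagged AssumeRole call would go:

```java
import java.util.function.LongSupplier;
import java.util.function.Supplier;

// Hypothetical refresh-on-expiry wrapper: callers always get a value that
// is at most ttlMs old. In a real setup, fetch would wrap
// sts.assumeRole(requestWithTags) and return the credentials.
class RefreshingValue<T> {
    private final Supplier<T> fetch;
    private final long ttlMs;
    private final LongSupplier clock; // injectable for testing
    private T value;
    private long expiresAt = Long.MIN_VALUE;

    RefreshingValue(Supplier<T> fetch, long ttlMs, LongSupplier clock) {
        this.fetch = fetch;
        this.ttlMs = ttlMs;
        this.clock = clock;
    }

    synchronized T get() {
        long now = clock.getAsLong();
        if (now >= expiresAt) {
            value = fetch.get();
            expiresAt = now + ttlMs;
        }
        return value;
    }
}
```

A production version would refresh ahead of expiry and tolerate concurrent refreshes; this only shows the shape.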
    For the SES API v2 (aws-java-sdk-sesv2), should we handle the maximum-sending-rate-per-second error the same way as in SES API v1 (the Throttling error code, https://aws.amazon.com/blogs/messaging-and-targeting/how-to-handle-a-throttling-maximum-sending-rate-exceeded-error/), or should we catch the TooManyRequestsException for the SendEmail operation in SES API v2 (https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html)?
    2 replies
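Either way, the underlying condition is the same (sending rate exceeded), so both the v1 Throttling error code and the v2 TooManyRequestsException can funnel into one retry-with-backoff path. A generic sketch; the `isThrottle` predicate is the hypothetical hook where you would test for the SDK-specific exception type:

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

// Hypothetical retry helper: retry an operation with exponential backoff
// while the thrown exception matches the throttle predicate. SDK service
// exceptions are unchecked, so RuntimeException is sufficient here.
class SendWithBackoff {
    static <T> T call(Supplier<T> op, Predicate<RuntimeException> isThrottle,
                      int maxAttempts, long baseDelayMs) {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                if (attempt >= maxAttempts || !isThrottle.test(e)) {
                    throw e; // not retryable, or out of attempts
                }
                try {
                    Thread.sleep(baseDelayMs << (attempt - 1)); // exponential backoff
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
    }
}
```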
    I'm new here, so I don't want to waste your time, guys... I have a problem trying to connect multiple consumers to a Kinesis shard. Does anyone know how I could do it? I'm using Java.
    Thank you for your help
    Hi! Is there a way to contribute code to this project? (Can I create a PR?)
    Scott Macdonald
    You can create a PR for our docs GitHub code examples here -- https://github.com/awsdocs/aws-doc-sdk-examples
    and what about aws/aws-sdk-java ?
    1 reply
    Michael Brewer
    I am trying to query a table by a GSI and then update it, and I am getting a "no mapping for HASH key" error.
    Model :
    @DynamoDBTable(tableName = "ignored")
    data class PendingOrder(
        var PK: String? = null,
        @DynamoDBIndexRangeKey(globalSecondaryIndexName = "pendingOrders")
        var createdAt: String? = null,
        @DynamoDBIndexHashKey(globalSecondaryIndexName = "pendingOrders")
        var pending: Int? = null,
    )
    Query :
    open fun findPendingOrders(createdAt: String): List<PendingOrder>? {
        val expression = DynamoDBQueryExpression<PendingOrder>()
            .withIndexName("pendingOrders")
            .withConsistentRead(false)
            .withKeyConditionExpression("pending = :pending AND createdAt < :createdAt")
            .withExpressionAttributeValues(mapOf(
                ":pending" to AttributeValue().withN("1"),
                ":createdAt" to AttributeValue(createdAt)
            ))
        return mapper.query(PendingOrder::class.java, expression)
    }
    Error I am getting :
    com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMappingException: PendingOrder; no mapping for HASH key
    Michael Brewer
    Looks like it is a Kotlin issue; the annotations need to target the getters. Changing the model to
    @DynamoDBTable(tableName = "ignored")
    data class PendingOrder(
        @get:DynamoDBHashKey(attributeName = "PK")
        var PK: String? = null,
        @get:DynamoDBIndexRangeKey(globalSecondaryIndexName = "pendingOrders")
        var createdAt: String? = null,
        @get:DynamoDBIndexHashKey(globalSecondaryIndexName = "pendingOrders")
        var pending: Int? = null,
    )
    fixes it
    Hello. We have a Spring Boot application that allows file uploads, which we send to S3 via aws-java-sdk. It is about to receive a lot of traffic in the very near future. We've been doing load testing and trying to improve the performance of many concurrent file uploads, and the upload to S3 seems to be one of the bottlenecks. Our files are between 3-10 MB, but there will be many, many of them. We are in the process of implementing TransferManager, but I am doubtful this will gain us much performance, since the files themselves are mostly below the 5 MB multipart minimum. I've also read that it is not possible to use InputStreams to stream files directly to S3 via TransferManager (aws/aws-sdk-java#689), or at least that it won't improve performance. Is there a better way to improve performance for many concurrent file uploads?
    Hello, I am hitting 100% CPU usage when uploading a large file (about 800+ MB) to AWS S3 using aws-java-sdk-s3 1.11.731. I use TransferManager to upload the big file. Could you please tell me how to solve it?
    I am trying to understand whether two events can somehow be posted together using the SDK. We need to post multiple events concurrently, and we can rely on another framework that calls the SDK to do that. Does the SDK support that? I am not asking about transactions, just code support.
    We want to send two events to SQS. If one is not sent because of a timeout, we should let the other succeed.
    Correction: we shouldn't let the other succeed.
    Like a CompletableFuture where two stages are joined. We are on SDK 1.x because SDK 2 does not support JMS listeners. Maybe I am wrong about that.
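If the goal is "issue both sends, then observe their outcomes as one joined result", plain CompletableFuture composition over two independent sends can express that without any SDK-level batching. A sketch; `sendEventA`/`sendEventB` are hypothetical stand-ins for the actual SQS send calls:

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical pairing of two independent sends: the returned future
// completes when both are done, and fails if either failed. allOf does
// not cancel the sibling, so each send still runs to completion.
class PairedSend {
    static CompletableFuture<Void> sendBoth(Runnable sendEventA, Runnable sendEventB) {
        CompletableFuture<Void> a = CompletableFuture.runAsync(sendEventA);
        CompletableFuture<Void> b = CompletableFuture.runAsync(sendEventB);
        return CompletableFuture.allOf(a, b);
    }
}
```

Note the semantics: one timing out marks the joined future as failed, but it cannot un-send the other event; compensating for an already-delivered sibling would need application logic.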
    Cristopher K. Pinzón Resendiz
    Hi, can someone guide me on how to run the sample code? I have tried installing the package with mvn, and opening the Kinesis sample project and running ant, but both fail.
    2 replies
    Hi there, I'm facing a problem uploading a 100 GB file to S3 using AmazonS3EncryptionClientV2 with multipart upload. After 65 GB the SDK throws an exception. The docs say that CryptoMode.StrictAuthenticatedEncryption is used by default, and that 2^36 - 32 bytes, or ~64 GB, is the maximum message size that can be encrypted. Is there any other way to upload such files with client-side encryption?
    8 replies
    Quick question: does the v1 SDK create temporary files on the filesystem when uploading a file to S3?
    Jorge Aliss
    Hi there. I need to run Spark + Hadoop on a k8s cluster using IAM roles for service accounts. I noticed the latest Hadoop version depends on an AWS SDK version that does not have WebIdentityTokenCredentialsProvider. I thought I'd try backporting WebIdentityTokenCredentialsProvider and related classes to the same version that Hadoop depends on and providing a custom build. That might make things work as I need. Does this sound like a reasonable thing to do? :)
    2 replies
    I am using TransferManager to upload files to S3 and am trying to reuse the TransferManager instance for all my uploads. At the end of each upload I call shutdown with false as the input parameter, but this only works for the first file upload. If I remove the shutdown, it works fine for all the files. I am wondering: if I never shut down after a file is uploaded, will it create a memory leak? Or is there a way to start the same TransferManager back up when a new file is to be uploaded? I appreciate your help on this topic.
    2 replies
    Günter Platzer
    hi all,
    i am using amazon-sqs-java-messaging-lib.
    is there some auto-reconnection mechanism in SQSConnection?

    How can I programmatically check that the AWS SDK version in use is compatible with a given AWS region?

    For example, I am using aws-java-sdk-cognitoidentity version 1.11.967; how can I check whether this version is supported in the eu-west-1 region?

    2 replies
    Debora N. Ito
    @/all Hello everyone! We have raised the SDK version from 1.11 to 1.12 in order to upgrade the version of the Jackson dependencies. Check our blog post for additional info: https://aws.amazon.com/blogs/developer/aws-sdk-for-java-version-1-12/
    Hi guys, how can I find out my default cluster in AWS EKS?
    2 replies
    Sergej Andropov
    Hi there! Is it possible to use one presigned URL for uploading multiple files?
    Hello! How can I get the StorageClass or StorageType values for an S3 bucket?
    Hi, I am working on a bug fix and a feature from the issues. I read the contributing guidelines; can you please assign these to me? I see this is good functionality to have. One of them was flagged for community support. The issues seem straightforward.