    Deepak Thukral
    @deepak0004
    @here Can someone provide some leads on the issue - aws/aws-sdk-java-v2#2760
    rnaval
    @rnaval
    Hello! How can I get the StorageClass or StorageType values from an S3 Bucket?
    1 reply
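    A minimal sketch of one way to read this: StorageClass is a per-object property (exposed by ListObjectsV2/HeadObject), while StorageType is a CloudWatch dimension on the bucket's storage metrics, not an S3 API field. The bucket name below is hypothetical.

    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;

    public class StorageClassCheck {
        public static void main(String[] args) {
            try (S3Client s3 = S3Client.create()) {
                // Each S3Object carries its storage class; note S3 may omit
                // it for STANDARD objects in some responses.
                s3.listObjectsV2Paginator(ListObjectsV2Request.builder()
                        .bucket("my-bucket").build())
                  .contents()
                  .forEach(o -> System.out.println(o.key() + " -> " + o.storageClassAsString()));
            }
        }
    }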
    Maziz
    @MazizEsa
    Hi @here
    Wondering about the enhanced DynamoDbTable returned by .table(..). Must it be initialized each time we're about to do something with the table (i.e. getItem(..))?
    Can we keep it as a singleton or a bean and use the same object throughout the lifetime of the app? What's the best practice?
    2 replies
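    A minimal sketch of the singleton approach, assuming a hypothetical @DynamoDbBean-mapped Customer class: both DynamoDbEnhancedClient and DynamoDbTable are generally treated as thread-safe and reusable, so building them once is the common pattern.

    import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
    import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
    import software.amazon.awssdk.enhanced.dynamodb.Key;
    import software.amazon.awssdk.enhanced.dynamodb.TableSchema;

    public class CustomerRepository {
        // Built once and reused for the lifetime of the app.
        private static final DynamoDbEnhancedClient ENHANCED = DynamoDbEnhancedClient.create();
        private static final DynamoDbTable<Customer> TABLE =
            ENHANCED.table("customers", TableSchema.fromBean(Customer.class));

        public Customer byId(String id) {
            return TABLE.getItem(Key.builder().partitionValue(id).build());
        }
    }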
    cwgroppe
    @cwgroppe

    @here greetings everyone, I'm very new to Fargate/ECS/Batch so forgive my ignorance, but I have a Docker image/container that's running a Scala batch job. It uses the aws-sdk-v2, the Hadoop client, and Monix to make a bunch of parallel API calls to an HTTP endpoint, bring the data back, do some transforms, and then make a bunch of parallel writes to S3. Everything works fine locally, but when I run it as a Batch job with Fargate I get this error:

    java.nio.file.AccessDeniedException: ...
    org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by ContainerCredentialsProvider : com.amazonaws.AmazonServiceException: Too Many Requests (Service: null; Status Code: 429; Error Code: null; Request ID: null)

    when I try to write the data to S3 as Parquet files with code that looks very similar to this (monix-connect parquet):

    val parquetWriter: ParquetWriter[GenericRecord] = {
      AvroParquetWriter.builder[GenericRecord](path)
        .withConf(conf)
        .withSchema(schema)
        .build()
    }
    
    // returns the number of written records
    val t: Task[Long] = {
      Observable
        .fromIterable(elements) // Observable[Person]
        .map(person => personToGenericRecord(person)) // convert each Person to a GenericRecord
        .consumeWith(ParquetSink.fromWriterUnsafe(parquetWriter))
    }

    It doesn't happen with every write in the job, but it does with a lot of them. Here are my Hadoop settings if that helps:

        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem")
        conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
        conf.set("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.ContainerCredentialsProvider")

    The reading I've done has made me more confused about the whole thing; maybe it's container settings for using IAM for creds, but it half works as is? I don't know.
    I would just like this process to run like it does in Docker on my local machine; any help would be much appreciated.
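    The 429 above comes back from the ECS container credentials endpoint, which suggests the parallel writes are each triggering credential lookups and throttling it. A minimal diagnostic sketch (not a fix), using the same SDK v1 provider class named in the fs.s3a.aws.credentials.provider setting, to confirm the task role resolves at all when called once:

    import com.amazonaws.auth.AWSCredentials;
    import com.amazonaws.auth.ContainerCredentialsProvider;

    public class CredsProbe {
        public static void main(String[] args) {
            // Resolves the task role once from the ECS credentials endpoint.
            ContainerCredentialsProvider provider = new ContainerCredentialsProvider();
            AWSCredentials creds = provider.getCredentials();
            System.out.println("Resolved access key id: " + creds.getAWSAccessKeyId());
        }
    }

    If that succeeds, reducing write parallelism or sharing one filesystem/provider instance across the job is the usual direction to investigate.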

    Michael Dinsmore
    @mjdinsmore_twitter
    When using the TransferManager API (see https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/transfer-manager.html), has anyone experienced an exception like this?
    Caused by: java.util.concurrent.CompletionException: software.amazon.awssdk.crt.s3.CrtS3RuntimeException: Retry cannot be attempted because the maximum number of retries has been exceeded. AWS_IO_MAX_RETRIES_EXCEEDED(1069)
    10 replies
    Any ideas where to begin to figure out what the root cause is?
    rnaval
    @rnaval
    Hello! Will this issue be fixed? https://stackoverflow.com/questions/57384857/getting-tags-from-aws-ecs-clusters-return-empty-lists Tried out 2.17.76 but it's still an issue there. Thanks!
    Inan Bayram
    @inanbayram
    Hi, is it possible to create a new ECS task definition revision with the v2 SDK? It looks like I have to describe the task definition, create a new task definition with the same information (from the described task definition), and register it.
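    A minimal sketch of that describe-then-register round trip with the v2 SDK; the family name is hypothetical, and a real copy would carry over whatever other fields the family uses (network mode, roles, etc.):

    import software.amazon.awssdk.services.ecs.EcsClient;
    import software.amazon.awssdk.services.ecs.model.DescribeTaskDefinitionRequest;
    import software.amazon.awssdk.services.ecs.model.RegisterTaskDefinitionRequest;
    import software.amazon.awssdk.services.ecs.model.TaskDefinition;

    public class NewRevision {
        public static void main(String[] args) {
            try (EcsClient ecs = EcsClient.create()) {
                TaskDefinition current = ecs.describeTaskDefinition(
                        DescribeTaskDefinitionRequest.builder()
                            .taskDefinition("my-family")   // hypothetical family name
                            .build())
                    .taskDefinition();

                // Registering under the same family creates the next revision.
                ecs.registerTaskDefinition(RegisterTaskDefinitionRequest.builder()
                    .family(current.family())
                    .containerDefinitions(current.containerDefinitions())
                    .cpu(current.cpu())
                    .memory(current.memory())
                    .build());
            }
        }
    }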
    sreenadhvp
    @sreenadhvp
    Hi Team, I am new to this community and not sure this is the right place to ask my use-case question. This is regarding S3 bucket integration: I have my own Java service class that consumes an S3 bucket and uploads/downloads files. I want to enforce a rule on my S3 bucket such that the bucket rejects and errors out if a user uploads a file larger than 50 MB. Can someone advise me?
    Michael Dinsmore
    @mjdinsmore_twitter
    @sreenadhvp if this is something the user is going to upload, then I would definitely use File.length() to check the size locally before sending it. It'll be quicker and cheaper (you're not paying AWS for bytes transferred to a bucket that you don't want to keep anyway).
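    A minimal sketch of that local pre-check, assuming a hard 50 MB cap; server-side enforcement independent of the client would need something like a presigned POST policy with a content-length-range condition.

    import java.io.File;

    public class UploadGuard {
        private static final long MAX_BYTES = 50L * 1024 * 1024; // 50 MB

        static void checkSize(File file) {
            // Reject before any bytes leave the machine.
            if (file.length() > MAX_BYTES) {
                throw new IllegalArgumentException(file.getName() + " exceeds the 50 MB limit");
            }
        }
    }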
    German Anders
    @AndersGerman_twitter
    Hi Team, I'm new to the community. I need to create a connection pool to handle multiple account connections in Scala; is there any package in the SDK that I need to be aware of? Any recommendations would be really appreciated.
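    The SDK doesn't ship a cross-account pool as such; a common pattern is one long-lived client per account via STS assume-role, cached in a map. A sketch in Java (the same calls work from Scala); the role ARNs are hypothetical:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.sts.StsClient;
    import software.amazon.awssdk.services.sts.auth.StsAssumeRoleCredentialsProvider;
    import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;

    public class AccountClients {
        private final StsClient sts = StsClient.create();
        private final Map<String, S3Client> clients = new ConcurrentHashMap<>();

        // One long-lived client per account; the STS provider refreshes the
        // assumed-role credentials automatically.
        public S3Client forRole(String roleArn) {
            return clients.computeIfAbsent(roleArn, arn -> S3Client.builder()
                .credentialsProvider(StsAssumeRoleCredentialsProvider.builder()
                    .stsClient(sts)
                    .refreshRequest(AssumeRoleRequest.builder()
                        .roleArn(arn)
                        .roleSessionName("multi-account")
                        .build())
                    .build())
                .build());
        }
    }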
    balajikarthickkbk
    @balajikarthickkbk
    This message was deleted
    2 replies
    Ved Singh
    @vedgunjan
    Hi All, is it possible to auto-create a partition key using enhanced DynamoDB?
    Marcin Szałomski
    @baldram
    Hi AWS Team, I know that the SDK team is not responsible for the Lambda library,
    but this is the only channel I have to contact the AWS team.
    Would you please inform someone from aws-lambda-java-libs about this security-related PR? It's waiting for review, and hopefully approval and merge.
    The previous update for the CVE-2021-44228 vulnerability is not enough; this one fixes CVE-2021-45046.
    aws/aws-lambda-java-libs#293
    Thanks!
    German Anders
    @AndersGerman_twitter

    I'm having some issues while trying to process a response from an async API call. Basically I'm trying to describe some RDS clusters on AWS in Scala with the following approach:

    def describeClusters(): Unit = {
      val clustersReq = DescribeDbClustersRequest.builder().build()
      val clustersResp = client.describeDBClustersPaginator(clustersReq)
      val clusters = clustersResp.dbClusters()
      ...
    }

    So far the clusters val is of type SdkPublisher[DBCluster], and from there I don't know exactly how to handle it. I suppose I need to get a stream of data from that publisher, so I need to subscribe to it in order to request that stream, but I don't know how to do that. Any ideas?
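    SdkPublisher has a subscribe(Consumer) convenience method that drives the paginator and returns a CompletableFuture which completes when every page has been consumed. A minimal sketch in Java (the same call works from Scala, or the publisher can be wrapped by a Reactive Streams library):

    import java.util.concurrent.CompletableFuture;
    import software.amazon.awssdk.services.rds.RdsAsyncClient;
    import software.amazon.awssdk.services.rds.model.DescribeDbClustersRequest;

    public class ListClusters {
        public static void main(String[] args) {
            try (RdsAsyncClient rds = RdsAsyncClient.create()) {
                CompletableFuture<Void> done =
                    rds.describeDBClustersPaginator(DescribeDbClustersRequest.builder().build())
                       .dbClusters()
                       .subscribe(c -> System.out.println(c.dbClusterIdentifier()));
                done.join(); // block only for the demo
            }
        }
    }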
    knuspertante
    @knuspertante

    Hi Team,

    how can I get an updated item (I use @DynamoDbVersionAttribute for optimistic locking and @DynamoDbAutoGeneratedTimestampAttribute for the updated date) after a dynamodbTable.putItem, as in SDK V1?

    Is a getItem request necessary every single time?
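    One option may be updateItem instead of putItem: on DynamoDbTable, updateItem returns the item as persisted (bumped version, generated timestamp), so no follow-up getItem is needed. A sketch, assuming a hypothetical @DynamoDbBean-mapped Order class:

    import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
    import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
    import software.amazon.awssdk.enhanced.dynamodb.TableSchema;

    public class SaveOrder {
        static Order save(DynamoDbEnhancedClient enhanced, Order order) {
            DynamoDbTable<Order> table =
                enhanced.table("orders", TableSchema.fromBean(Order.class)); // hypothetical table/bean
            // Unlike putItem (which returns nothing useful here), updateItem
            // returns the stored state of the item.
            return table.updateItem(order);
        }
    }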

    rnaval
    @rnaval

    Hello all! I'm currently trying to monitor the namespace AWS/Events via CloudWatch. I noticed that there seems to be an undocumented dimension called EventBusName when I try to list metrics:

    {
        "Metrics": [
            {
                "Namespace": "AWS/Events",
                "MetricName": "FailedInvocations",
                "Dimensions": [
                    {
                        "Name": "EventBusName",
                        "Value": "test-event-bus"
                    },
                    {
                        "Name": "RuleName",
                        "Value": "test-rule-custom"
                    }
                ]
            },
            {
                "Namespace": "AWS/Events",
                "MetricName": "Invocations",
                "Dimensions": [
                    {
                        "Name": "EventBusName",
                        "Value": "test-event-bus"
                    },
                    {
                        "Name": "RuleName",
                        "Value": "test-rule-custom"
                    }
                ]
            },
            {
                "Namespace": "AWS/Events",
                "MetricName": "TriggeredRules",
                "Dimensions": [
                    {
                        "Name": "EventBusName",
                        "Value": "test-event-bus"
                    },
                    {
                        "Name": "RuleName",
                        "Value": "test-rule-custom"
                    }
                ]
            }
        ]
    }

    The interesting thing here is that when the event bus being used is the default event bus, this dimension disappears:

    {
        "Metrics": [
            {
                "Namespace": "AWS/Events",
                "MetricName": "TriggeredRules",
                "Dimensions": [
                    {
                        "Name": "RuleName",
                        "Value": "test-rule-default"
                    }
                ]
            },
            {
                "Namespace": "AWS/Events",
                "MetricName": "Invocations",
                "Dimensions": [
                    {
                        "Name": "RuleName",
                        "Value": "test-rule-default"
                    }
                ]
            },
            {
                "Namespace": "AWS/Events",
                "MetricName": "FailedInvocations",
                "Dimensions": [
                    {
                        "Name": "RuleName",
                        "Value": "test-rule-default"
                    }
                ]
            }
        ]
    }

    Any idea on why this is the case? It looks like a bug, but keen to know if this is expected by any chance. Thanks!
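    For anyone reproducing the listing above from the SDK rather than the CLI, a minimal sketch:

    import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
    import software.amazon.awssdk.services.cloudwatch.model.ListMetricsRequest;

    public class ListEventMetrics {
        public static void main(String[] args) {
            try (CloudWatchClient cw = CloudWatchClient.create()) {
                // Pages through all metrics in the AWS/Events namespace.
                cw.listMetricsPaginator(ListMetricsRequest.builder()
                        .namespace("AWS/Events").build())
                  .metrics()
                  .forEach(m -> System.out.println(m.metricName() + " " + m.dimensions()));
            }
        }
    }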

    Nicholas Connor
    @nkconnor

    anybody have an example of the right type parameters to use here:

    val rsp: CompletableFuture[GetObjectResponse] =
      client.getObject(objectRequest, AsyncResponseTransformer.toBytes())

    None of the Java examples I've found specify type parameters. I'm guessing there is some new type inference in the Java compiler?

    To be clear, that client is S3AsyncClient, and I'm getting an overloaded method error with alternatives.
    Nicholas Connor
    @nkconnor

    Searching a bit more, I found a similar situation from this channel:

    @drocsid
    I found some ways to reason about this here: aws/aws-sdk-java-v2#94
    That post suggests the documentation was updated, but I ran into similar issues...
    Not sure where to look for the updated docs.
    @drocsid
    So the answer was to construct the StreamingResponseHandler with the type parameters predefined, as in #94:
    def getFileResponseHandler(path: String): StreamingResponseHandler[GetObjectResponse,GetObjectResponse] = StreamingResponseHandler.toFile(Paths.get(path))

    So possibly I just need to name the result type of AsyncResponseTransformer.toBytes()
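    That is the likely fix: toBytes() is generic in the response type, and its result is ResponseBytes[GetObjectResponse], not GetObjectResponse itself, which is what makes the overload ambiguous. A sketch in Java (in Scala the witness would be AsyncResponseTransformer.toBytes[GetObjectResponse]()):

    import java.util.concurrent.CompletableFuture;
    import software.amazon.awssdk.core.ResponseBytes;
    import software.amazon.awssdk.core.async.AsyncResponseTransformer;
    import software.amazon.awssdk.services.s3.S3AsyncClient;
    import software.amazon.awssdk.services.s3.model.GetObjectRequest;
    import software.amazon.awssdk.services.s3.model.GetObjectResponse;

    public class GetBytes {
        static byte[] fetch(S3AsyncClient client, GetObjectRequest req) {
            // Pinning the type witness resolves the overload ambiguity.
            CompletableFuture<ResponseBytes<GetObjectResponse>> fut =
                client.getObject(req, AsyncResponseTransformer.<GetObjectResponse>toBytes());
            return fut.join().asByteArray();
        }
    }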
    surajgawas110
    @surajgawas110
    Hi All, I need some help with the Java SDK for ECS: I'm trying to get the available regions for my account for the ECS service (Fargate). How can I get that?
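    A minimal sketch using the SDK's bundled endpoint metadata; note this lists regions where the ECS endpoint exists, without distinguishing Fargate support or account-level opt-in:

    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.ecs.EcsClient;

    public class EcsRegions {
        public static void main(String[] args) {
            // No credentials or network call needed for this lookup.
            for (Region r : EcsClient.serviceMetadata().regions()) {
                System.out.println(r.id());
            }
        }
    }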
    Anirudh Mergu
    @AnirudhMergu
    How do I use @DynamoDbAutoGeneratedTimestampAttribute in Kotlin? I'm getting errors if I use
    @set:DynamoDbAutoGeneratedTimestampAttribute lateinit var creationDate: Instant or @get:DynamoDbAutoGeneratedTimestampAttribute lateinit var creationDate: Instant
    kotlin.UninitializedPropertyAccessException: lateinit property creationDate has not been initialized
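    For comparison, the Java shape puts the annotation on the getter of a nullable field; the lateinit error suggests the property needs to start out null (in Kotlin, a nullable var creationDate: Instant? rather than lateinit). A sketch, noting the enhanced client may also need AutoGeneratedTimestampRecordExtension registered:

    import java.time.Instant;
    import software.amazon.awssdk.enhanced.dynamodb.extensions.annotations.DynamoDbAutoGeneratedTimestampAttribute;
    import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;
    import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey;

    @DynamoDbBean
    public class Item {
        private String id;
        private Instant creationDate; // left null; the extension fills it in

        @DynamoDbPartitionKey
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }

        @DynamoDbAutoGeneratedTimestampAttribute
        public Instant getCreationDate() { return creationDate; }
        public void setCreationDate(Instant creationDate) { this.creationDate = creationDate; }
    }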
    katiekiki
    @katiekiki
    Hi, I am trying to use AWS SDK 2 with DAX and the code referenced in the AWS documentation doesn't compile. Is there any updated code sample I can refer to? Specifically, I am trying to make DAX work with the enhanced DynamoDB client.
    Jackie S
    @eikkaj
    Hey folks, I'm trying to use the Textract Java SDK v2 for form analysis. The results are completely different between the AWS management console and what the SDK outputs. Wondering if I can get any support/guidance around this.
    Jackie S
    @eikkaj
    fwiw i also submitted: aws/aws-sdk-java-v2#3095
    Daniel Svensson
    @dsvensson
    This issue was put into "SDK Team Backlog (Ordered)" in "New Features (Public)" on Jul 10, 2019... does that have any significance for when it will be done? aws/aws-sdk-java-v2#370
    Mikhail Nemenko
    @eniqen

    hey guys, I have an issue with the software.amazon.awssdk secretsmanager client.
    I'm trying to mock the request call using WireMock, but it causes an error when I get the result.

    Here is the test itself

    class SecretManagerSpec extends AnyFlatSpec with Matchers with BeforeAndAfterEach {
      val port = 8080
      val host = "localhost"
      val wireMockRule =
        new WireMockServer(wireMockConfig().port(port))
    
      override def beforeEach: Unit = {
        wireMockRule.start()
        WireMock.configureFor(host, port)
      }
    
      override def afterEach {
        wireMockRule.stop()
      }
    
      it should "mock security manager call and return db connection properties" in {
    
        val dummy =
          s"""|{
              |"arn": "arn:aws:secretsmanager:us-west-2:123456789012:secret:secret-abcdef",
              |"CreatedDate": 1.523477145713E9,
              |"Name": "NAME",
              |"VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987SECRET1",
              |"VersionStages": ["AWSPREVIOUS"],
              |"SecretString": "{"password":"pass","dbname":"db","engine":"postgres","port":5432,"dbInstanceIdentifier":"rrr","host":"www","username":"random"}"
              |}""".stripMargin
    
        stubFor(post(anyUrl()).withHeader("X-Amz-Target", equalTo("secretsmanager.GetSecretValue"))
                  .willReturn(
            ok().withBody(dummy).withHeader("Content-Type", "text/plain")
          )
        )
    
        val dbConfig = DBConfig("secret", "us-west-2")
        val dbConn = DBConfig.getConnectionOptions(dbConfig)
      }
    }
    the problem is in the SecretString field: when I pass it in double quotes as if it were a string, but include JSON inside it, it fails with this error
    17:33:30,887 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
    17:33:30,887 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "Content-Type: text/plain[\r][\n]"
    17:33:30,887 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "Matched-Stub-Id: f18f77d3-562d-4d26-8503-de70e9b924a7[\r][\n]"
    17:33:30,887 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "Vary: Accept-Encoding, User-Agent[\r][\n]"
    17:33:30,887 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "Transfer-Encoding: chunked[\r][\n]"
    17:33:30,887 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "[\r][\n]"
    17:33:30,887 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "1FF[\r][\n]"
    17:33:30,908 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "{[\n]"
    17:33:30,909 DEBUG org.apache.http.wire                                          - http-outgoing-0 << ""arn": "arn:aws:secretsmanager:us-west-2:123456789012:secret:secret-abcdef",[\n]"
    17:33:30,909 DEBUG org.apache.http.wire                                          - http-outgoing-0 << ""CreatedDate": 1.523477145713E9,[\n]"
    17:33:30,909 DEBUG org.apache.http.wire                                          - http-outgoing-0 << ""Name": "NAME",[\n]"
    17:33:30,909 DEBUG org.apache.http.wire                                          - http-outgoing-0 << ""VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987SECRET1",[\n]"
    17:33:30,909 DEBUG org.apache.http.wire                                          - http-outgoing-0 << ""VersionStages": ["AWSPREVIOUS"],[\n]"
    17:33:30,909 DEBUG org.apache.http.wire                                          - http-outgoing-0 << ""SecretString": "{"password":"test,"dbname":"test","engine":"postgres","port":5432,"dbInstanceIdentifier":"test","host":"somehost","username":"test"}"[\n]"
    17:33:30,909 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "}[\r][\n]"
    17:33:30,909 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "0[\r][\n]"
    17:33:30,909 DEBUG org.apache.http.wire                                          - http-outgoing-0 << "[\r][\n]"
    Left(software.amazon.awssdk.core.exception.SdkClientException: Unable to unmarshall response (software.amazon.awssdk.thirdparty.jackson.core.JsonParseException: Unexpected character ('p' (code 112)): was expecting comma to separate Object entries
     at [Source: (software.amazon.awssdk.http.AbortableInputStream); line: 7, column: 21]). Response Code: 200, Response Text: OK)
    Mikhail Nemenko
    @eniqen
    Do you have any idea what is wrong? If I change SecretString to a simple string like "HELLO FROM CHAT" it works.
    1 reply
    also I'm passing some Java properties inside the sbt build like this: set javaOptions in Test ++= Seq("-Daws.accessKeyId=123", "-Daws.secretAccessKey=321")
    and have the manager client like this:
      private def buildSecretManagerClient(dbConfig: DBConfig): SecretsManagerClient =
        SecretsManagerClient.builder
          .region(Region.of(dbConfig.region))
          .endpointOverride(new java.net.URI("http://localhost:8080"))
          .httpClientBuilder(ApacheHttpClient.builder)
          .build
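    The unmarshalling failure above is most likely the stub body itself: SecretString must be a JSON string, so every quote inside it has to be escaped, otherwise the embedded JSON terminates the string value early (hence "was expecting comma"). A sketch of a valid body in Java string syntax (values hypothetical):

    // Each \\\" in source becomes \" in the body, i.e. an escaped quote
    // inside the JSON string value of SecretString.
    String dummy =
          "{"
        + "\"ARN\":\"arn:aws:secretsmanager:us-west-2:123456789012:secret:secret-abcdef\","
        + "\"Name\":\"NAME\","
        + "\"VersionId\":\"EXAMPLE1-90ab-cdef-fedc-ba987SECRET1\","
        + "\"SecretString\":\"{\\\"password\\\":\\\"pass\\\",\\\"username\\\":\\\"random\\\"}\""
        + "}";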
    hidayath85
    @hidayath85
    Hi, I am using the AWS Java SDK v2 and facing a connection pool shutdown error; my application is deployed in an AWS cluster connecting with a service account.
    @Bean(destroyMethod = "close")
    public S3Client getAWSS3client(final AuditActivityProperties activityProps) {
    LOGGER.info("AWS S3 ClientConfig initializing");
        return S3Client.builder().region(Region.of("us-west2")).build();
    }
    Any clues?
    1 reply
    hidayath85
    @hidayath85
    I tried without the destroy method also, but still the same error: connection pool shut down
    @Bean
    public S3Client getAWSS3client(final AuditActivityProperties activityProps) {
    LOGGER.info("AWS S3 ClientConfig initializing");
        return S3Client.builder().region(Region.of("us-west2")).build();
    }
    Ali Imran
    @aliimran-pk
    I am unable to parse any PDF using Textract in the AWS management console; however, an image of that PDF works fine.
    Is this a bug in Textract in the AWS management console?
    Emmanuel Kaku
    @gindeli05_gitlab
    How do I set up the SSM SDK?
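    A minimal setup sketch, assuming the software.amazon.awssdk:ssm artifact is on the classpath and credentials come from the default provider chain; the parameter name is hypothetical:

    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.ssm.SsmClient;
    import software.amazon.awssdk.services.ssm.model.GetParameterRequest;

    public class SsmExample {
        public static void main(String[] args) {
            try (SsmClient ssm = SsmClient.builder().region(Region.US_EAST_1).build()) {
                // Reads one parameter, decrypting it if it is a SecureString.
                String value = ssm.getParameter(GetParameterRequest.builder()
                        .name("/my/param")
                        .withDecryption(true)
                        .build())
                    .parameter().value();
                System.out.println(value);
            }
        }
    }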
    Faiz Kidwai
    @fykidwai
    Is S3TransferManager stable enough to be used in production, considering it is still in PREVIEW mode?
    Debora N. Ito
    @debora-ito
    @fykidwai we don't recommend using anything in PREVIEW mode in production environments, as breaking changes can still occur. We are working towards the GA release of TransferManager V2.
    Aster
    @asterd
    Hi guys, has anyone experienced a 503 error when trying to createEndpoint on SNS?
    The strange thing is that the endpoint is created, but the call throws a 503 exception and I can't read the associated ARN.