    Debora Naomi Ito
    @debora-ito
    Including examples of streaming input types when uploading data to S3
    Mario Kapusta
    @majusko
    great, thank you very much 🙂
    Dillon P
    @dillonius01_gitlab
    hi there! is there any way to configure the aws ApacheHttpClient to use CachingHttpClient? For reference: https://hc.apache.org/httpcomponents-client-4.5.x/httpclient-cache/apidocs/org/apache/http/impl/client/cache/CachingHttpClient.html
    Thank you!
    or any http client that respects the Cache-Control HTTP header
    Juan José Rodríguez
    @juaoose
    Hello everyone. Do you guys know if the v2 sdk for SQS has an issue when setting the Endpoint? (I believe v1 overrides the endpoint with the queue URL)
    symandria
    @symandria

    hi :) I'm just starting out with the SDK and trying to import a REST API to API Gateway using its JSON API definition file.

    Using the CLI I have it working with: aws apigateway import-rest-api --body "file://C:\Users\Josiah\Downloads\SimpleCalc.json"

    Using the SDK I tried:

    String apiBody = Util.convertToSingleString(Util.readAllLines("C:\\Users\\Josiah\\Downloads\\SimpleCalc.json"));
    ImportApiRequest importApiRequest = ImportApiRequest.builder().body(apiBody).build();
    ImportApiResponse importApiResponse = ApiGatewayV2Client.create().importApi(importApiRequest);

    But I get the error: Exception in thread "main" software.amazon.awssdk.services.apigatewayv2.model.BadRequestException: Unable to build importer with provided input. Unable to determine OAS version (Service: ApiGatewayV2, Status Code: 400, Request ID:...

    Does anyone know the proper format for the apiBody since just passing it the text contents of a working file doesn't seem to do it? Any help would be greatly appreciated!
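For what it's worth, the file-reading half of the snippet above needs no helpers at all (Util.convertToSingleString / Util.readAllLines appear to be the poster's own utilities); a minimal JDK-only stand-in, assuming Java 11+, is below. Also note, hedging a guess: ApiGatewayV2Client targets HTTP/WebSocket APIs and expects OpenAPI 3 definitions, while the working CLI command above goes through the original API Gateway service, which may explain the "Unable to determine OAS version" error.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadApiBody {
    // Read the whole API definition file into one String (JDK 11+).
    public static String readWholeFile(Path path) {
        try {
            return Files.readString(path, StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("SimpleCalc", ".json");
        Files.writeString(tmp, "{\"openapi\": \"3.0.1\"}");
        System.out.println(readWholeFile(tmp));
    }
}
```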

    Vivian Duong
    @vivian-duong_gitlab

    The following code is giving me error software.amazon.awssdk.services.s3.model.S3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: S3, Status Code: 301, Request ID: -redacted-)

    import java.nio.file.Paths

    import software.amazon.awssdk.core.sync.ResponseTransformer
    import software.amazon.awssdk.services.s3.model.{GetObjectRequest, GetObjectResponse}

    // In Scala, ResponseTransformer.toFile(Paths.get(path)) infers ResponseTransformer[Nothing, Nothing]
    // instead of ResponseTransformer[GetObjectResponse, ReturnT].
    // Somebody wrote on https://gitter.im/aws/aws-sdk-java-v2 (Sep 19 2017) that they fixed it
    // by parameterizing the toFile call with GetObjectResponse.
    def getFileResponseHandler(path: String): ResponseTransformer[GetObjectResponse, GetObjectResponse] =
      ResponseTransformer.toFile(Paths.get(path))

    s3Client.getObject(
      GetObjectRequest.builder()
        .bucket("redacted-bucket")
        .key("redacted-key")
        .build(),
      getFileResponseHandler("test-out"))

    I don't know how to get an object from S3 that lets me specify the full endpoint, which Google has told me is in a URL-like format, s3.amazonaws.com/bucket/key. Some people got this error because they didn't configure the correct region, but I think my region is correct because I can get the object from the command line using the same credentials and configuration as when running the code above. Can anybody provide any insight?
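For reference, the two S3 addressing styles behind that 301 look roughly like the sketch below (hand-built strings for illustration only; with the SDK the usual fix is simply setting the bucket's region on the client, not constructing URLs yourself):

```java
public class S3Endpoints {
    // Regional endpoint host; us-east-1 historically uses the global name.
    public static String regionalHost(String region) {
        return region.equals("us-east-1") ? "s3.amazonaws.com" : "s3." + region + ".amazonaws.com";
    }

    // Path-style: https://s3.<region>.amazonaws.com/bucket/key
    public static String pathStyleUrl(String region, String bucket, String key) {
        return "https://" + regionalHost(region) + "/" + bucket + "/" + key;
    }

    // Virtual-hosted style: https://bucket.s3.<region>.amazonaws.com/key
    public static String virtualHostedUrl(String region, String bucket, String key) {
        return "https://" + bucket + "." + regionalHost(region) + "/" + key;
    }

    public static void main(String[] args) {
        System.out.println(pathStyleUrl("eu-west-1", "my-bucket", "my-key"));
        System.out.println(virtualHostedUrl("eu-west-1", "my-bucket", "my-key"));
    }
}
```

A request signed for the wrong region gets the 301 with the "specified endpoint" message, which is why checking the client's region is the first thing to try.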

    Jouni Latvatalo
    @Sandmania
    Any ideas if @LambdaFunction / LambdaInvokerFactory convenience will be available at some point?
    Debora Naomi Ito
    @debora-ito
    @vivian-duong_gitlab are you specifying the bucket's region to the s3Client?
    @Sandmania the task is in our backlog, but there's no ETA yet
    Vivian Duong
    @vivian-duong_gitlab

    I think so? I have the region of my aws profile set in my aws credentials file.
    I am using DefaultCredentialsProvider

    val s3Client = S3Client.builder
            .credentialsProvider(DefaultCredentialsProvider.builder.build)
            .build

    Oh, could I be conflating the region for my aws "profile" and the region for the bucket?

    Vivian Duong
    @vivian-duong_gitlab
    I set the wrong region. I assumed that b/c I could access the bucket on the command line, then I could access it using Scala. Apparently the region I specify doesn't matter on the command line. Thank you @debora-ito !
    Debora Naomi Ito
    @debora-ito
    @vivian-duong_gitlab No problem!
    bbccccn
    @bbccccn

    Hi everyone!

    I've started migrating from SDK v1 to SDK v2 and miss TransferManager very much. Is there an analogue of it in the new SDK?
    I'm also changing code that was doing multipart upload, but it fails with software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Read timed out. Any clues as to why this can happen?

    Thanks in advance.

    Debora Naomi Ito
    @debora-ito
    @bbccccn TransferManager is not available yet in V2, you can track its progress in this issue: aws/aws-sdk-java-v2#37 and in our (public) features backlog
    As for the read timeout exception, it's hard to analyze without knowing what the code is trying to do... you can open an issue with more details on GitHub and we can take a look.
    bbccccn
    @bbccccn
    Ok, thank you!
    wangyuzhen
    @sail456852
    Hello
    CodingPenguin
    @TheCodingPenguin
    Hello, is there anyone who has used the new 2.x version in Scala?
    Bertrand Deweer
    @geniusit
    Hello, I don't understand why the async version of putObject (S3) takes ~900 ms compared to the sync version that takes ~1.1 s. Why such a small difference between them? I use Guava's Stopwatch to measure it.
    Debora Naomi Ito
    @debora-ito
    @geniusit can you elaborate on why you expected to have a bigger difference? Maybe share a code sample?
    haneefkassam
    @haneefkassam
    @geniusit You likely won't see much difference with a single request. Async really shines when you have to perform multiple parallel operations... the overall transfer will complete faster with async. You may also be seeing similar timings because the APIs are using multipart uploads (I believe both sync and async do this). The sync API just blocks your code execution whereas the async one does not.
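The parallelism point can be sketched with plain CompletableFutures standing in for async SDK calls (fakePutObject below is a hypothetical placeholder, not an SDK method): all N requests are in flight concurrently, and the caller only blocks once at the end.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelUploads {
    // Stand-in for an async SDK call such as S3AsyncClient.putObject(...).
    static CompletableFuture<String> fakePutObject(int i) {
        return CompletableFuture.supplyAsync(() -> "etag-" + i);
    }

    public static List<String> uploadAll(int n) {
        // Kick off all n "uploads" before waiting on any of them.
        List<CompletableFuture<String>> futures = IntStream.range(0, n)
                .mapToObj(ParallelUploads::fakePutObject)
                .collect(Collectors.toList());
        // join() only blocks here, after every request has been started.
        return futures.stream().map(CompletableFuture::join).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(uploadAll(3));
    }
}
```

With a single request there is nothing to overlap, which matches the near-identical timings observed above.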
    Michael Dinsmore
    @mjdinsmore_twitter
    Is the expectation that I should get the same number of items returned by calling aws s3 ls s3://my.bucket.name/foobar/ --recursive | wc -l as by using S3Client.listObjectsV2Paginator(request) with the ListObjectsV2Request set to .prefix("foobar/") and counting the results? I am getting different counts from the two. Note, this bucket has tens of thousands of objects. Also, is there a way to list only to a certain depth in the bucket space? In the v1.x API I could set maxDepth. I have hundreds of thousands of objects in a bucket and just want to find objects up to a certain depth, i.e. s3://my.bucket.name/foobar/ without it returning s3://my.bucket.name/foobar/one/two/ -- this is needed so my call can exit quickly instead of trawling the entire keyspace, which takes the API call tens of seconds to complete.
    Michael Dinsmore
    @mjdinsmore_twitter
    i.e.
    request = ListObjectsV2Request
                    .builder()
                    .bucket(bucketName)
                    .prefix(prefix)
                    .build();
    ListObjectsV2Iterable response = getClient().listObjectsV2Paginator(request);
    
    List<String> fileList = response.stream()
                    .flatMap(r -> r.contents().stream())
                    .map(S3Object::key)
                    .collect(Collectors.toList()); 
    System.out.println(fileList.size());
    Debora Naomi Ito
    @debora-ito
    @mjdinsmore_twitter there's no maxDepth option in the original S3 APIs, probably because S3 does not have the concept of levels, and that's why the sdk s3 client does not have this option. You can try to use prefix together with delimiter.
    As for the difference in the number of results, I guess it will depend on how your objects were created, but it's hard to tell without looking at the actual results.
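If the delimiter approach alone doesn't fit, the depth rule is easy to express client-side. Below is a hypothetical helper, not an SDK feature (setting delimiter "/" on the request is what actually prunes results server-side into CommonPrefixes):

```java
import java.util.ArrayList;
import java.util.List;

public class DepthFilter {
    // Keep keys fewer than maxDepth '/'-separated levels below the prefix.
    // S3 itself has no concept of depth; this only post-filters a key list.
    public static List<String> limitDepth(List<String> keys, String prefix, int maxDepth) {
        List<String> out = new ArrayList<>();
        for (String key : keys) {
            if (!key.startsWith(prefix)) continue;
            String rest = key.substring(prefix.length());
            long slashes = rest.chars().filter(c -> c == '/').count();
            if (slashes < maxDepth) out.add(key);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = List.of("foobar/a.txt", "foobar/one/b.txt", "foobar/one/two/c.txt");
        // Only keys directly under the prefix survive with maxDepth = 1.
        System.out.println(limitDepth(keys, "foobar/", 1));
    }
}
```

Post-filtering still pays the cost of listing everything, so for large buckets the delimiter request is the one that makes the call return quickly.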
    sec-stack
    @SyCode7
    Hi, how can I get metrics like "requests per second", for requests against IAM and s3?
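One client-side approach, sketched under the assumption that you hook it in yourself (for example from an ExecutionInterceptor registered on the client builder, wiring not shown), is to count calls and divide by elapsed time:

```java
import java.util.concurrent.atomic.AtomicLong;

public class RateCounter {
    private final AtomicLong count = new AtomicLong();
    private final long startNanos = System.nanoTime();

    // Call once per SDK request.
    public void record() {
        count.incrementAndGet();
    }

    public long total() {
        return count.get();
    }

    // Average requests per second since this counter was created.
    public double requestsPerSecond() {
        double elapsed = (System.nanoTime() - startNanos) / 1_000_000_000.0;
        return elapsed > 0 ? count.get() / elapsed : 0.0;
    }

    public static void main(String[] args) {
        RateCounter c = new RateCounter();
        for (int i = 0; i < 100; i++) c.record();
        System.out.println(c.total() + " requests, " + c.requestsPerSecond() + " req/s");
    }
}
```

Server-side, CloudTrail can also show the volume of IAM and S3 API calls, though not as a ready-made per-second rate.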
    Bob Namestka
    @bnamestka
    Anyone have a simple example that they'd be willing to share of using the java sdk v2 with dynamodb, maven and employing the java module system? Say a CreateTable example that includes the pom.xml and the module-info.java file. I'm migrating to Java 13 and I'm stuck.
    Bob Namestka
    @bnamestka
    Figured out the java sdk v2/maven/dynamodb issues with the java module system for anyone experiencing the same discomforts.
    sullis
    @sullis
    Hello! I have a Java application that uses AWS SDK version 1.x. The application uses S3, Lambda, and SQS from the v1 SDK. I'd like to migrate the S3 code from v1 to v2, and I would like to leave the Lambda and SQS code on the v1 SDK. Are other customers doing this? Any pitfalls that I should know about?
    nidhivis123
    @nidhivis123

    Hi,

    I am working on upgrading a Java application's aws-java-sdk version from v1 to v2. It uses S3. Is there a guide/example that I can refer to? What is the TransferManager called in v2? A quick search of the repo https://github.com/aws/aws-sdk-java-v2 returned no results.

    nidhivis123
    @nidhivis123
    Thank you
    Pradeep Venkataraman
    @pradeepvenkataraman
    What is the equivalent of APIGatewayResponseProxyEvent in SDK v2?
    Raf Theunis
    @raftheunis87

    Hey team, can you help me use SQSClient with a JMS ConnectionFactory? The previous SDK version had the class SQSConnectionFactory, but the latest version doesn't. Any plans to add JMS support to SDK v2?

    any news on this maybe?

    would love to use Spring JMS with SDK v2 for java. We need the SDK v2 because we use the WebIdentityTokenFile on EKS for the credentials, and this is not working well with sdk v1
    Raf Theunis
    @raftheunis87
    or are there maybe other ways to use AWS SQS in combination with Spring which give annotations on methods?
    right now I'm using this: @JmsListener(destination = "queue-name")
    because all these code samples use SQS straight in the main class
    Raf Theunis
    @raftheunis87
    does anyone maybe have a Spring example of SQS from the AWS java SDK v2 which keeps listening for new messages on the SQS queue?
    Michael Dinsmore
    @mjdinsmore_twitter
    @debora-ito Apologies - that was my internal API. Nonetheless, the new 2.x SDK does not perform the same functionality as the 1.x for this test case. Whereas I can make a call like this with the old 1.x API:
    com.amazonaws.services.s3.model.ListObjectsRequest listObjectsRequest =
        new com.amazonaws.services.s3.model.ListObjectsRequest()
            .withBucketName(bucketName)
            .withPrefix(prefix)
            .withDelimiter(PATH_DELIMITER);
    ObjectListing objects = getV1Client().listObjects(listObjectsRequest);
    It will return in < 1 second whereas the new SDK will take minutes.
    Michael Dinsmore
    @mjdinsmore_twitter
    @nidhivis123 As you saw, there is no TransferManager in the 2.x api yet. They claim that they are working on adding that, but I wouldn't have high hopes, considering they've had a couple of years since they first released this SDK and they still haven't added it. I would go with the examples that @sullis had suggested.
    Daniel Peebles
    @copumpkin
    What's the idiomatic way in aws-sdk-java-v2 to get regional STS endpoints?
    is there some special flag or do I need to continue overriding the endpoint URL like in most other SDKs?
    Michael Dinsmore
    @mjdinsmore_twitter
    @copumpkin - not sure what you're asking exactly. Could you elaborate? I typically specify the region in the client that's connecting and requesting the resource/activity, so you can just make multiple clients for the various regions, i.e.
    SnsClient snsClient = SnsClient.builder()
                    .region(Region.US_WEST_2)
                    .build();
    
    GetTopicAttributesRequest request = GetTopicAttributesRequest.builder()
                    .topicArn(topicArn)
                    .build();
    
    GetTopicAttributesResponse result = snsClient.getTopicAttributes(request);
    Debora Naomi Ito
    @debora-ito
    @mjdinsmore_twitter can you open a github issue in v2 with details about how long your call is taking to run, how many objects you have in the bucket and a reproducible code? We are interested in investigating performance issues.
    As for the TransferManager, yes, it's not available yet in 2.x but it's high on our priority list. Right now the team is actively working on the new DynamoDB Enhanced Client, which is another highly anticipated feature. These features take time, and we appreciate your patience.
    Michael Dinsmore
    @mjdinsmore_twitter
    @debora-ito Sure thing!
    Daniel Peebles
    @copumpkin
    @mjdinsmore_twitter for historical reasons, STS gets special treatment in most AWS SDKs and if you ask for an STS client for e.g., ap-northeast-1, you'll still hit the default global endpoint
    if you actually want regional STS, you almost always need to explicitly override the endpoint URL on the client configuration as well as the signing region
    some newer SDKs like Go have a boolean flag where you can say "do the regional thing for me when I ask for a custom region" and don't need to wrangle URLs by hand, and I was wondering whether the java v2 SDK had something similar
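For the endpoint-override route, the regional STS endpoint is just a URI that can be handed to endpointOverride(...) on the client builder. A tiny sketch of building it, assuming the standard sts.&lt;region&gt;.amazonaws.com naming pattern:

```java
import java.net.URI;

public class StsEndpoints {
    // Regional STS endpoint, e.g. https://sts.ap-northeast-1.amazonaws.com
    // (the legacy global endpoint is https://sts.amazonaws.com).
    public static URI regionalEndpoint(String region) {
        return URI.create("https://sts." + region + ".amazonaws.com");
    }

    public static void main(String[] args) {
        System.out.println(regionalEndpoint("ap-northeast-1"));
    }
}
```

The resulting URI would then be set alongside the matching signing region on the client, mirroring the manual wrangling described above.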