@here greetings everyone, I'm very new to Fargate/ECS/Batch so forgive my ignorance, but I have a Docker image/container that's running a Scala batch job. It uses the aws-sdk-v2, the Hadoop client, and Monix to make a bunch of parallel API calls to an HTTP endpoint, bring the data back, do some transforms, and then make a bunch of parallel writes to S3. Everything works fine locally, but when I run it as a Batch job on Fargate I get this error:
java.nio.file.AccessDeniedException: ...
org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by ContainerCredentialsProvider : com.amazonaws.AmazonServiceException: Too Many Requests (Service: null; Status Code: 429; Error Code: null; Request ID: null)
when I try to write the data to S3 as Parquet files with code that looks very similar to this (monix-connect parquet):
val parquetWriter: ParquetWriter[GenericRecord] = {
  AvroParquetWriter.builder[GenericRecord](path)
    .withConf(conf)
    .withSchema(schema)
    .build()
}

// returns the number of written records
val t: Task[Long] = {
  Observable
    .fromIterable(elements)       // Observable[Person]
    .map(personToGenericRecord)   // Person => GenericRecord
    .consumeWith(ParquetSink.fromWriterUnsafe(parquetWriter))
}
It doesn't happen on every write in the job, but it does on a lot of them. Here are my Hadoop settings, if that helps:
conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem")
conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
conf.set("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.ContainerCredentialsProvider")
The reading I've done has only made me more confused about the whole thing; maybe it's the container settings for using IAM for creds, but it half works as is? I dunno.
I would just like this process to run the way it does in Docker on my local machine. Any help would be much appreciated.
Caused by: java.util.concurrent.CompletionException: software.amazon.awssdk.crt.s3.CrtS3RuntimeException: Retry cannot be attempted because the maximum number of retries has been exceeded. AWS_IO_MAX_RETRIES_EXCEEDED(1069)
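One observation on the traces above: the 429 is coming back from the container credentials endpoint (via ContainerCredentialsProvider), not from S3 itself, which would fit a large number of S3A writers all resolving credentials at roughly the same time. A minimal sketch of capping the write concurrency with Monix, in case that's a useful direction to try; the writeOne function and the cap of 8 are made-up placeholders, not anything from the job above:

import monix.eval.Task
import monix.reactive.Observable

// Hypothetical helper: run the per-element S3/Parquet write with a hard cap on
// how many writes are in flight at once, instead of letting everything fire in parallel.
def writeAllBounded[A](elements: Seq[A])(writeOne: A => Task[Unit]): Task[Unit] =
  Observable
    .fromIterable(elements)
    .mapParallelUnordered(8)(writeOne) // 8 = arbitrary illustrative cap on concurrency
    .completedL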
I'm having some issues while trying to process a response from an async API call. Basically I'm trying to describe some RDS clusters from AWS in Scala with the following approach:
def describeClusters(): Unit = {
  val clustersReq  = DescribeDbClustersRequest.builder().build()
  val clustersResp = client.describeDBClustersPaginator(clustersReq)
  val clusters     = clustersResp.dbClusters()
  ...
}
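Assuming client here is an RdsAsyncClient, describeDBClustersPaginator hands back a reactive publisher rather than a finished response, so the stream of clusters has to be subscribed to before there is anything to read. A minimal sketch that just prints cluster identifiers and blocks at the end purely to keep the example self-contained:

import java.util.function.Consumer
import software.amazon.awssdk.services.rds.RdsAsyncClient
import software.amazon.awssdk.services.rds.model.{DBCluster, DescribeDbClustersRequest}

def printClusters(client: RdsAsyncClient): Unit = {
  val req = DescribeDbClustersRequest.builder().build()

  // dbClusters() flattens all pages of the paginator into one stream of DBCluster items.
  val onCluster: Consumer[DBCluster] = c => println(c.dbClusterIdentifier())
  val done = client.describeDBClustersPaginator(req).dbClusters().subscribe(onCluster)

  done.join() // blocking here only for the sake of the example
}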
Hello all! I'm currently trying to monitor the namespace AWS/Events via CloudWatch. I noticed that there seems to be an undocumented dimension called EventBusName when I try to list metrics:
{
    "Metrics": [
        {
            "Namespace": "AWS/Events",
            "MetricName": "FailedInvocations",
            "Dimensions": [
                {
                    "Name": "EventBusName",
                    "Value": "test-event-bus"
                },
                {
                    "Name": "RuleName",
                    "Value": "test-rule-custom"
                }
            ]
        },
        {
            "Namespace": "AWS/Events",
            "MetricName": "Invocations",
            "Dimensions": [
                {
                    "Name": "EventBusName",
                    "Value": "test-event-bus"
                },
                {
                    "Name": "RuleName",
                    "Value": "test-rule-custom"
                }
            ]
        },
        {
            "Namespace": "AWS/Events",
            "MetricName": "TriggeredRules",
            "Dimensions": [
                {
                    "Name": "EventBusName",
                    "Value": "test-event-bus"
                },
                {
                    "Name": "RuleName",
                    "Value": "test-rule-custom"
                }
            ]
        }
    ]
}
The interesting thing here is that when the event bus being used is the default event bus, this dimension disappears:
{
    "Metrics": [
        {
            "Namespace": "AWS/Events",
            "MetricName": "TriggeredRules",
            "Dimensions": [
                {
                    "Name": "RuleName",
                    "Value": "test-rule-default"
                }
            ]
        },
        {
            "Namespace": "AWS/Events",
            "MetricName": "Invocations",
            "Dimensions": [
                {
                    "Name": "RuleName",
                    "Value": "test-rule-default"
                }
            ]
        },
        {
            "Namespace": "AWS/Events",
            "MetricName": "FailedInvocations",
            "Dimensions": [
                {
                    "Name": "RuleName",
                    "Value": "test-rule-default"
                }
            ]
        }
    ]
}
Any idea why this is the case? It looks like a bug, but I'm keen to know if it's expected by any chance. Thanks!
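In case it's useful for comparing the two buses programmatically rather than via the CLI, a small sketch with the v2 CloudWatch client (default region/credentials assumed; only the first page of results is handled, for brevity):

import scala.jdk.CollectionConverters._
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient
import software.amazon.awssdk.services.cloudwatch.model.ListMetricsRequest

val cw = CloudWatchClient.create()

// List metrics in the AWS/Events namespace and print each one with its dimensions,
// which makes the presence/absence of EventBusName easy to see side by side.
val resp = cw.listMetrics(ListMetricsRequest.builder().namespace("AWS/Events").build())
resp.metrics().asScala.foreach { m =>
  val dims = m.dimensions().asScala.map(d => s"${d.name()}=${d.value()}").mkString(", ")
  println(s"${m.metricName()} [$dims]")
}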
anybody have an example of the right type parameters to use here:
val rsp: CompletableFuture[GetObjectResponse] =
client.getObject(objectRequest, AsyncResponseTransformer.toBytes())
None of the Java examples I've found are specifying type parameters. I'm guessing there is some new type inference in the Java compiler?
client is S3AsyncClient, and I'm getting an overloaded method error with alternatives.
Searching a bit more, I found this similar situation from this channel:
@drocsid
I found some ways to reason about this here: aws/aws-sdk-java-v2#94
That post suggests the documentation was updated, but I ran into similar issues...
Not sure where to look for the updated docs.
@drocsid
So the answer was to construct the StreamingResponseHandler with the type parameters predefined, like in #94:
def getFileResponseHandler(path: String): StreamingResponseHandler[GetObjectResponse, GetObjectResponse] =
  StreamingResponseHandler.toFile(Paths.get(path))
AsyncResponseTransformer.toBytes()
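For the current API the same idea seems to apply: pin the type parameter on AsyncResponseTransformer.toBytes so scalac can resolve the getObject overload, and note that the future then carries ResponseBytes rather than the bare response. A minimal sketch, assuming client and objectRequest are the S3AsyncClient and GetObjectRequest from the snippet above:

import java.util.concurrent.CompletableFuture
import software.amazon.awssdk.core.ResponseBytes
import software.amazon.awssdk.core.async.AsyncResponseTransformer
import software.amazon.awssdk.services.s3.model.GetObjectResponse

// Explicit type parameter on toBytes; the result type is ResponseBytes[GetObjectResponse],
// not GetObjectResponse itself.
val rsp: CompletableFuture[ResponseBytes[GetObjectResponse]] =
  client.getObject(objectRequest, AsyncResponseTransformer.toBytes[GetObjectResponse]())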
@set:DynamoDbAutoGeneratedTimestampAttribute lateinit var creationDate: Instant
or @get:DynamoDbAutoGeneratedTimestampAttribute lateinit var creationDate: Instant
hey guys, I have an issue with the software.amazon.awssdk secretsmanager client.
I'm trying to mock the request call using WireMock, but it causes an error when I get the result.
Here is the test itself:
class SecretManagerSpec extends AnyFlatSpec with Matchers with BeforeAndAfterEach {
  val port = 8080
  val host = "localhost"
  val wireMockRule = new WireMockServer(wireMockConfig().port(port))

  override def beforeEach(): Unit = {
    wireMockRule.start()
    WireMock.configureFor(host, port)
  }

  override def afterEach(): Unit = {
    wireMockRule.stop()
  }

  it should "mock security manager call and return db connection properties" in {
    val dummy =
      s"""|{
          |"arn": "arn:aws:secretsmanager:us-west-2:123456789012:secret:secret-abcdef",
          |"CreatedDate": 1.523477145713E9,
          |"Name": "NAME",
          |"VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987SECRET1",
          |"VersionStages": ["AWSPREVIOUS"],
          |"SecretString": "{"password":"pass","dbname":"db","engine":"postgres","port":5432,"dbInstanceIdentifier":"rrr","host":"www","username":"random"}"
          |}""".stripMargin

    stubFor(
      post(anyUrl())
        .withHeader("X-Amz-Target", equalTo("secretsmanager.GetSecretValue"))
        .willReturn(ok().withBody(dummy).withHeader("Content-Type", "text/plain"))
    )

    val dbConfig = DBConfig("secret", "us-west-2")
    val dbConn   = DBConfig.getConnectionOptions(dbConfig)
  }
}
17:33:30,887 DEBUG org.apache.http.wire - http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
17:33:30,887 DEBUG org.apache.http.wire - http-outgoing-0 << "Content-Type: text/plain[\r][\n]"
17:33:30,887 DEBUG org.apache.http.wire - http-outgoing-0 << "Matched-Stub-Id: f18f77d3-562d-4d26-8503-de70e9b924a7[\r][\n]"
17:33:30,887 DEBUG org.apache.http.wire - http-outgoing-0 << "Vary: Accept-Encoding, User-Agent[\r][\n]"
17:33:30,887 DEBUG org.apache.http.wire - http-outgoing-0 << "Transfer-Encoding: chunked[\r][\n]"
17:33:30,887 DEBUG org.apache.http.wire - http-outgoing-0 << "[\r][\n]"
17:33:30,887 DEBUG org.apache.http.wire - http-outgoing-0 << "1FF[\r][\n]"
17:33:30,908 DEBUG org.apache.http.wire - http-outgoing-0 << "{[\n]"
17:33:30,909 DEBUG org.apache.http.wire - http-outgoing-0 << ""arn": "arn:aws:secretsmanager:us-west-2:123456789012:secret:secret-abcdef",[\n]"
17:33:30,909 DEBUG org.apache.http.wire - http-outgoing-0 << ""CreatedDate": 1.523477145713E9,[\n]"
17:33:30,909 DEBUG org.apache.http.wire - http-outgoing-0 << ""Name": "NAME",[\n]"
17:33:30,909 DEBUG org.apache.http.wire - http-outgoing-0 << ""VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987SECRET1",[\n]"
17:33:30,909 DEBUG org.apache.http.wire - http-outgoing-0 << ""VersionStages": ["AWSPREVIOUS"],[\n]"
17:33:30,909 DEBUG org.apache.http.wire - http-outgoing-0 << ""SecretString": "{"password":"test,"dbname":"test","engine":"postgres","port":5432,"dbInstanceIdentifier":"test","host":"somehost","username":"test"}"[\n]"
17:33:30,909 DEBUG org.apache.http.wire - http-outgoing-0 << "}[\r][\n]"
17:33:30,909 DEBUG org.apache.http.wire - http-outgoing-0 << "0[\r][\n]"
17:33:30,909 DEBUG org.apache.http.wire - http-outgoing-0 << "[\r][\n]"
Left(software.amazon.awssdk.core.exception.SdkClientException: Unable to unmarshall response (software.amazon.awssdk.thirdparty.jackson.core.JsonParseException: Unexpected character ('p' (code 112)): was expecting comma to separate Object entries
at [Source: (software.amazon.awssdk.http.AbortableInputStream); line: 7, column: 21]). Response Code: 200, Response Text: OK)
private def buildSecretManagerClient(dbConfig: DBConfig): SecretsManagerClient =
  SecretsManagerClient.builder
    .region(Region.of(dbConfig.region))
    .endpointOverride(new java.net.URI("http://localhost:8080"))
    .httpClientBuilder(ApacheHttpClient.builder)
    .build
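The JsonParseException at line 7, column 21 points into the SecretString line of the stub body: the nested JSON's quotes aren't escaped, so the body stops being valid JSON right after "password". A sketch of the same dummy payload with the inner quotes escaped (switched to a plain, non-interpolated triple-quoted string so the backslashes stay literal; the values are still placeholders):

// Plain """ (no s interpolator) keeps the \" sequences as literal backslash + quote,
// which is what the unmarshaller needs to treat SecretString as a single JSON string value.
val dummy =
  """|{
     |"arn": "arn:aws:secretsmanager:us-west-2:123456789012:secret:secret-abcdef",
     |"CreatedDate": 1.523477145713E9,
     |"Name": "NAME",
     |"VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987SECRET1",
     |"VersionStages": ["AWSPREVIOUS"],
     |"SecretString": "{\"password\":\"pass\",\"dbname\":\"db\",\"engine\":\"postgres\",\"port\":5432,\"dbInstanceIdentifier\":\"rrr\",\"host\":\"www\",\"username\":\"random\"}"
     |}""".stripMargin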
return S3Client.builder().region(Region.of("us-west-2")).build();
}