The CNAME file doesn't have a correct format: https://github.com/monix/monix-connect/settings
gh-pages: https://github.com/monix/monix-connect/tree/gh-pages
[error] ## Exception when compiling 6 sources to /home/thijs/repositories/monix-connect/aws-auth/target/scala-2.13/classes
[error] java.lang.StackOverflowError
[error] scala.tools.nsc.typechecker.Typers$Typer.typedBlock(Typers.scala:2571)
[error] scala.tools.nsc.typechecker.Typers$Typer.typedOutsidePatternMode$1(Typers.scala:5911)
[error] scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5946)
[error] scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5993)
[error] scala.tools.nsc.typechecker.Typers$Typer.typedTyped$1(Typers.scala:5703)
[error] scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5939)
[error] scala.tools.nsc.typechecker.Typers$Typer.typed(Typers.scala:5993)
[error] scala.tools.nsc.typechecker.Macros$DefMacroExpander.$anonfun$onSuccess$1(Macros.scala:631)
[error] scala.tools.nsc.typechecker.Macros$DefMacroExpander.typecheck$1(Macros.scala:631)
[error] scala.tools.nsc.typechecker.Macros$DefMacroExpander.onSuccess(Macros.scala:643)
[error] scala.tools.nsc.typechecker.Macros$MacroExpander.$anonfun$expand$1(Macros.scala:582)
[error] scala.tools.nsc.Global.withInfoLevel(Global.scala:228)
wanted to adjust this:
def deleteObject(
  bucket: String,
  key: String,
  bypassGovernanceRetention: Option[Boolean] = None,
  mfa: Option[String] = None,
  requestPayer: Option[String] = None,
  versionId: Option[String] = None)(implicit s3AsyncClient: S3AsyncClient): Task[DeleteObjectResponse] = {
  val request: DeleteObjectRequest =
    S3RequestBuilder.deleteObject(bucket, key, bypassGovernanceRetention, mfa, requestPayer, versionId)
  deleteObject(request)
}
into
def deleteObject(
  bucket: String,
  key: String,
  bypassGovernanceRetention: Option[Boolean] = None,
  mfa: Option[String] = None,
  requestPayer: Option[String] = None,
  versionId: Option[String] = None): Task[DeleteObjectResponse] = {
  val request: DeleteObjectRequest =
    S3RequestBuilder.deleteObject(bucket, key, bypassGovernanceRetention, mfa, requestPayer, versionId)
  deleteObject(request)
}
The aws-auth subproject? Try giving sbt more memory:
sbt -mem 2048 aws-auth/compile
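Since it's a StackOverflowError rather than an out-of-memory error, the compiler thread's stack size may matter more than the heap. A sketch of what I'd try, assuming the standard sbt launcher (which forwards -J flags to the JVM and picks up a .jvmopts file in the project root):

sbt -J-Xss8M -mem 2048 aws-auth/compile

or, persistently, a .jvmopts file at the project root containing:

-Xmx2G
-Xss8M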
Hi guys!
I've just found this library and it's a nice piece of work! I have several ideas for improvement though - it might be better to discuss them here than in a GH issue, I guess... Everything is related to the S3 connector, as it's what I want to use.
- Expose MonixAwsConf so one can fill it in manually (or parse it, whatever) and doesn't have to use fromConfig.
- Make .fromConfig take a ConfigSource, so users are not forced to have the configuration in a hardcoded place (the root, in this case).
- Related: why is MonixAwsConf(...) private? :-D
- Add an offset or range parameter to the downloadMultipart method - we need to be able to resume the download of a big file, and that is currently not possible (rough sketch below).
So, what do you think? I'm willing to help implement these changes, I just want your approval first so I don't waste my time :-D
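For the last point, roughly the kind of thing I mean - just an illustrative sketch against the raw AWS SDK v2 async client, with made-up names (downloadFrom, offset), not the current monix-connect API. The offset would translate into a ranged GetObject, which is what makes resuming possible:

import monix.eval.Task
import software.amazon.awssdk.core.async.AsyncResponseTransformer
import software.amazon.awssdk.services.s3.S3AsyncClient
import software.amazon.awssdk.services.s3.model.{GetObjectRequest, GetObjectResponse}

// Hypothetical helper: fetch the object starting at `offset` (e.g. the bytes already downloaded).
def downloadFrom(bucket: String, key: String, offset: Long)(
  implicit s3AsyncClient: S3AsyncClient): Task[Array[Byte]] = {
  val request = GetObjectRequest.builder()
    .bucket(bucket)
    .key(key)
    .range(s"bytes=$offset-") // HTTP Range header: everything from `offset` to the end
    .build()
  Task
    .from(s3AsyncClient.getObject(request, AsyncResponseTransformer.toBytes[GetObjectResponse]()))
    .map(_.asByteArray())
}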
Thanks!
@here greetings everyone, I'm very new to Fargate/ECS/Batch so forgive my ignorance, but I have a Docker image/container that's running a Scala batch job. It uses the aws-sdk-v2, the Hadoop client, and Monix to make a bunch of parallel API calls to an HTTP endpoint, bring the data back, do some transforms, and then make a bunch of parallel writes to S3. Everything works fine locally, but when I run it as a Batch job on Fargate I get this error:
java.nio.file.AccessDeniedException: ...
org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by ContainerCredentialsProvider : com.amazonaws.AmazonServiceException: Too Many Requests (Service: null; Status Code: 429; Error Code: null; Request ID: null)
when I try to write the data to S3 as parquet files, with code that looks very similar to the example from the docs:
val parquetWriter: ParquetWriter[GenericRecord] = {
  AvroParquetWriter.builder[GenericRecord](file) // file: the target org.apache.hadoop.fs.Path
    .withConf(conf)
    .withSchema(schema)
    .build()
}

// returns the number of written records
val t: Task[Long] = {
  Observable
    .fromIterable(elements) // Observable[Person]
    .map(person => personToGenericRecord(person))
    .consumeWith(ParquetSink.fromWriterUnsafe(parquetWriter))
}
It doesn't happen with every write taking place in the job, but it does with a lot of them. Here are my Hadoop settings, if that helps:
conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem")
conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
conf.set("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.ContainerCredentialsProvider")
The reading I've done has made me more confused about the whole thing - maybe it's the container settings for using IAM for creds? But then it half works as is, so I dunno.
I would just like this process to run like it does in Docker on my local machine; any help would be much appreciated.
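For what it's worth, the fan-out in the job looks roughly like the sketch below (heavily simplified - callApi, writeToS3 and the parallelism value are made-up placeholders, not the real code). Capping the number of concurrent S3 writes is the first thing I can try, since the 429 in the stack trace is coming from the ContainerCredentialsProvider:

import monix.eval.Task
import monix.reactive.Observable

// Simplified sketch of the job's fan-out: bound the number of concurrent calls/writes
// so the container credentials endpoint isn't hit by hundreds of requests at once.
def callApi(id: String): Task[String] = ???       // hypothetical HTTP call
def writeToS3(payload: String): Task[Unit] = ???  // hypothetical transform + parquet write to S3

val parallelism = 8 // tune down if the 429s persist

def runJob(ids: List[String]): Task[Unit] =
  Observable
    .fromIterable(ids)
    .mapParallelUnordered(parallelism)(id => callApi(id).flatMap(writeToS3))
    .completedL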