Thanks @albogdano. So that would mean adding the AWS access key and secret into the credentials file under the .aws folder, right? But what I still don't understand is how to tell Para to use the AWS DynamoDB service instead of the DynamoDB on localhost. For example, to set up Para to use MongoDB we would set the following properties in the application.conf file on the Para server:
⁃ para.dao = "MongoDBDAO"
⁃ para.mongodb.uri = ""
⁃ para.mongodb.host = ""
⁃ para.mongodb.port = ""
⁃ para.mongodb.database = ""
⁃ para.mongodb.user = ""
⁃ para.mongodb.password = ""
Are there similar properties we have to set in application.conf so that Para connects to AWS DynamoDB?
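In case it helps anyone else reading this thread: a minimal application.conf sketch for pointing Para at AWS DynamoDB, by analogy with the MongoDB properties above. The DAO class name and the aws_region key are my assumptions, so verify them against the Para configuration docs before relying on them:

```
# Hypothetical sketch - check property names against the Para docs
para.dao = "AWSDynamoDAO"
# Region for the DynamoDB client; credentials are typically resolved by the
# AWS SDK's default chain (~/.aws/credentials, environment variables, or an
# EC2 instance role) rather than being set directly in this file.
para.aws_region = "ap-southeast-2"
```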
Aug 25 15:18:18 ip-10-0-11-24 java[12319]: / __ \/ __` / ___/ __` /
Aug 25 15:18:18 ip-10-0-11-24 java[12319]: / /_/ / /_/ / / / /_/ /
Aug 25 15:18:18 ip-10-0-11-24 java[12319]: / .___/\__,_/_/ \__,_/ v1.46.1
Aug 25 15:18:18 ip-10-0-11-24 java[12319]: /_/
Aug 25 15:18:18 ip-10-0-11-24 java[12319]: 2022-08-25 15:18:18 [INFO ] --- Para.initialize() [production] ---
Aug 25 15:18:18 ip-10-0-11-24 java[12319]: 2022-08-25 15:18:18 [INFO ] Loaded new DAO, Search and Cache implementations - H2DAO, LuceneSearch and CaffeineCache.
Aug 25 15:18:21 ip-10-0-11-24 java[12319]: 2022-08-25 15:18:21 [INFO ] Server is healthy.
Aug 25 15:18:21 ip-10-0-11-24 java[12319]: 2022-08-25 15:18:21 [INFO ] Found root app 'para' and 1 existing child app(s).
Aug 25 15:18:22 ip-10-0-11-24 java[12319]: 2022-08-25 15:18:22 [INFO ] Starting ParaServer using Java 11.0.16 on ip-10-0-11-24 with PID 12319 (/etc/para/para-jar-1.46.1.jar started by root in /etc/para)
Aug 25 15:18:22 ip-10-0-11-24 java[12319]: 2022-08-25 15:18:22 [INFO ] The following 1 profile is active: "production"
Thanks @albogdano, it worked now. Running the command java -jar -Dconfig.file=./application.conf -Dloader.path=lib para-jar-1.46.1.jar started up just fine and created the table on AWS DynamoDB. However, when I try to run Para again after a system reboot I get the following error:
software.amazon.awssdk.services.dynamodb.model.DynamoDbException: User: arn:aws:sts::xxxxxxxxxxx:assumed-role/iam-role/i-00000000000000 is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:ap-southeast-2:xxxxxxxxxxx:table/para because no identity-based policy allows the dynamodb:CreateTable action (Service: DynamoDb, Status Code: 400, Request ID: E89KFTGEHK95N9LVP8PHHAO68FVV4KQNSO5AEMVJF66Q9ASUAAJG)
The permissions for the IAM User are correct and worked the first time it ran. Is there something else that I'm missing?
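One detail worth noting from the error text itself: the denied principal is the EC2 instance's assumed role (assumed-role/iam-role/…), not an IAM user, so after the reboot the AWS SDK may be falling back to the instance profile instead of the configured access keys. A rough sketch of an identity-based policy statement that would cover the failing call - the resource ARN mirrors the one in the error, with the account ID left as a placeholder, and the action list is illustrative rather than the minimal set Para needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:ap-southeast-2:ACCOUNT_ID:table/para*"
    }
  ]
}
```

Attaching something like this to the instance role (or making sure the SDK resolves the IAM user's keys again after reboot) may clear the 400.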
Hi guys - I've got a question on setting up the AWS S3 storage provider for file uploads. I've set up an IAM User with an AWS access key and secret key. For testing purposes I've attached the AmazonS3FullAccess AWS managed policy, then created an S3 bucket with the default settings (no bucket policy). When I start the Scoold server I get the following warning message:
S3 bucket *S3_BUCKET_NAME* does not exist. Bucket must be created before files can be uploaded.
So next, I tried adding a bucket policy allowing all S3 operations but restricted to the IAM User I created, by specifying the IAM User's ARN as the bucket policy Principal. I still got the same error message.
However, when I set the bucket policy Principal so that any user is allowed to perform any S3 operation, it works and I'm able to upload files successfully. Basically it only works when the S3 bucket is public, which defeats the purpose of using an access key and secret.
I'm not sure what I'm doing wrong here to get S3 working for file uploads while keeping it secure. Anyone who has gone through this before - can you take me through the correct way of setting up S3 for file uploads? Thank you.
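For reference, a bucket policy of the shape described (Principal restricted to the IAM user) would look roughly like this - the account ID, user name, and bucket name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT_ID:user/scoold-uploads" },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::S3_BUCKET_NAME",
        "arn:aws:s3:::S3_BUCKET_NAME/*"
      ]
    }
  ]
}
```

That said, if the client actually authenticates as that IAM user, the attached AmazonS3FullAccess policy alone should be sufficient with no bucket policy at all - the fact that only a public bucket works suggests the requests may be arriving unauthenticated (anonymous) rather than signed with the user's keys.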
Hi @albogdano - I'm definitely in the same region as the bucket and running Scoold Pro v1.50.1. On the Scoold EC2 instance, I've set the AWS access/secret key in the application.conf file and also in the ~/.aws/credentials file. Here is the Scoold file storage configuration that I've used:
# File Storage Configuration
scoold.s3_bucket = "file-storage-bucket-name"
scoold.s3_path = "uploads"
scoold.s3_region = "ap-southeast-2"
scoold.s3_access_key = "AKIAXXXXXXXXXXXXXXXX"
scoold.s3_secret_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
Scoold is only able to find the bucket when it's set to Public, which means it's not picking up the access key and secret key from the application.conf or ~/.aws/credentials file.
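In case the credentials file format is the issue: the AWS SDK's default credential chain expects the standard INI layout under the default profile (the key values here are placeholders matching the masked ones above):

```
# ~/.aws/credentials - read by the AWS SDK's default credential chain
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Running aws s3api head-bucket --bucket file-storage-bucket-name --region ap-southeast-2 on the instance is also a quick way to confirm which credentials actually resolve there, independently of Scoold.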