Use ?page=x to ask for page number x.
GET /mybucket.files?pagesize=100&page=3
will give you the files from the 201st to the 300th (pages are 1-based).
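Under the hood the page/pagesize query parameters map to MongoDB's skip/limit. A minimal sketch of that mapping (assuming 1-based pages, as in the RESTHeart docs; the function name is illustrative):

```python
def page_to_skip_limit(page: int, pagesize: int) -> tuple:
    """Map RESTHeart-style ?page and ?pagesize parameters to the
    MongoDB skip/limit pair, assuming pages are numbered from 1."""
    if page < 1 or pagesize < 1:
        raise ValueError("page and pagesize must be >= 1")
    return (page - 1) * pagesize, pagesize

# ?pagesize=100&page=3 -> skip the first 200 documents, return up to 100
skip, limit = page_to_skip_limit(3, 100)
print(skip, limit)  # 200 100
```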
?filter={ <mongo query> } is the way to limit the result set.
GET /mybucket.files/_size?filter={ <mongo query> }
you'll get the count of the files that match the query.
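Since the filter value is JSON, it has to be URL-encoded when passed as a query parameter. A small sketch of building such a URL (the helper name and the metadata.type query are just examples):

```python
from urllib.parse import quote

def size_url(bucket: str, mongo_query: str) -> str:
    """Build the RESTHeart _size URL for a bucket, URL-encoding the
    ?filter query so braces, quotes and $ operators travel safely."""
    return f"/{bucket}/_size?filter={quote(mongo_query)}"

url = size_url("mybucket.files", '{"metadata.type": "image"}')
print(url)
```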
Greetings! I am testing retrieving a binary file from a bucket, but I realized that many of the files were empty and that RESTHeart returns a 500 HTTP status code:
% http --verify=no -a admin:secret -f GET https://localhost/storage/mybucket.files/myfile.jpg/binary
HTTP/1.1 500 Internal Server Error
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Location, ETag, X-Powered-By, Auth-Token, Auth-Token-Valid-Until, Auth-Token-Location
Auth-Token: 3ixg98kbwzxso77wqpwt11y8z65a08icn27ssncbs2nlm085i0
Auth-Token-Location: /tokens/admin
Auth-Token-Valid-Until: 2022-04-04T18:28:26.530537652Z
Connection: close
Content-Disposition: inline; filename="file"
Content-Length: 0
Content-Transfer-Encoding: binary
Content-Type: image/jpeg
Date: Mon, 04 Apr 2022 18:13:26 GMT
ETag: 6204a40e9bf8cb3fb5a0a642
Server: Apache
Set-Cookie: ROUTEID=.route1; path=/
X-Powered-By: restheart.org
Meanwhile, RESTHeart logs print:
18:13:26.533 [XNIO-1 task-3] ERROR org.restheart.handlers.ErrorHandler - Error handling the request
com.mongodb.MongoGridFSException: Unexpected Exception when reading GridFS and writing to the Stream
at com.mongodb.client.gridfs.GridFSBucketImpl.downloadToStream(GridFSBucketImpl.java:578)
Caused by: com.mongodb.MongoGridFSException: Could not find file chunk for file_id: BsonString{value='myfile.jpg'} at chunk index 0.
at com.mongodb.client.gridfs.GridFSDownloadStreamImpl.getBufferFromChunk(GridFSDownloadStreamImpl.java:246)
18:13:26.535 [XNIO-1 task-3] ERROR io.undertow.request - UT005071: Undertow request failed HttpServerExchange{ GET /mybucket.files/myfile.jpg/binary}
com.mongodb.MongoGridFSException: Unexpected Exception when reading GridFS and writing to the Stream
at com.mongodb.client.gridfs.GridFSBucketImpl.downloadToStream(GridFSBucketImpl.java:578)
Caused by: com.mongodb.MongoGridFSException: Could not find file chunk for file_id: BsonString{value='myfile.jpg'} at chunk index 0.
at com.mongodb.client.gridfs.GridFSDownloadStreamImpl.getBufferFromChunk(GridFSDownloadStreamImpl.java:246)
18:13:26.537 [XNIO-1 task-3] INFO org.restheart.handlers.RequestLogger - GET http://localhost/mybucket.files/myfile.jpg/binary from /127.0.0.1:34524 => status=500 elapsed=10ms contentLength=0 username=admin roles=[admin]
Would you kindly help me decode the message and figure out how to solve it?
1) Retrieving the myfile.jpg metadata (without /binary) works fine.
2) I deleted a few documents from the mybucket.files collection using MongoDB Compass and didn't delete the corresponding documents in mybucket.chunks. I assumed MongoDB Compass would do that automatically, or that it doesn't really matter.
From https://www.mongodb.com/docs/manual/core/gridfs/
GridFS uses two collections to store files. One collection stores the file chunks, and the other stores file metadata. The section GridFS Collections describes each collection in detail.
You should access your files via the GridFS API
To store and retrieve files using GridFS, use either of the following:
A MongoDB driver. See the drivers documentation for information on using GridFS with your driver.
The mongofiles command-line tool. See the mongofiles reference for documentation.
As far as I understand, you deleted data from one collection, so your bucket data is not consistent. That's the reason why you get the error from RESTHeart.
The mongo driver finds the metadata (stored in mybucket.files) but not the chunks (stored in mybucket.chunks).
To fix it, make sure that all documents in mybucket.files have the corresponding documents in mybucket.chunks.
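One quick way to spot the broken files is to compare the _id values in mybucket.files against the files_id values in mybucket.chunks. A minimal sketch of that check (the id sets here are hard-coded for illustration; in practice you would load them via your MongoDB driver, e.g. with distinct()):

```python
def find_orphan_files(file_ids, chunk_file_ids):
    """Return the ids present in the files collection that have no
    corresponding chunk documents -- downloading these will fail."""
    return set(file_ids) - set(chunk_file_ids)

# In practice the two sets would come from the driver, e.g.:
#   file_ids       = db["mybucket.files"].distinct("_id")
#   chunk_file_ids = db["mybucket.chunks"].distinct("files_id")
files = {"myfile.jpg", "other.png"}
chunks = {"other.png"}
print(find_orphan_files(files, chunks))  # {'myfile.jpg'}
```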
The 6.3.0 release introduces a few bug fixes and some important security enhancements:
✅ Add new security interceptor bruteForceAttackGuard (defends against brute force attacks by returning "429 Too Many Requests" when more than 50% of the auth attempts from the same IP in the last 10 seconds have failed)
✅ Upgrade undertow to v2.2.16.Final
✅ Add WildcardInterceptor that allows intercepting requests to any service
✅ MongoRealmAuthenticator can check the password field on user document updates and reject the update when the password is too weak
✅ Ensure that the defined auth mechanisms are executed in the correct order
✅ filterOperatorsBlacklist is now enabled by default with blacklist = [ "$where" ] (prevents code injections at the database level)
✅ Fix error message in case of var not bound in aggregation and MongoRequest.getAggregationVars() method name
✅ Fix CORS headers for request OPTIONS /bucket.files/_size
✅ Set default MongoDB connections minSize=0
✅ Allow specifying ReadConcern, WriteConcern and ReadPreference at the request level
My collection is myProdDB.Orders, so an aggregate query would look like: myProdDB.Orders.aggregate([])
GET /coll/_meta
What's coll, what's _meta? Where are these in relation to myProdDB.Orders?
And what's coll in PUT /coll HTTP/1.1 in the Examples? In which db is it?

With this mongo-mounts configuration:

mongo-mounts:
  - what: myProdDB/Orders
    where: /prod/orders

the collection Orders of the database myProdDB is mounted at /prod/orders. So you need to add the aggregation to the collection properties, and you do it with:

PATCH /prod/orders
{
  "aggrs": [
    {
      "stages": [
        { "$match": { "name": { "$var": "n" } } },
        { "$group": { "_id": "$name", "avg_age": { "$avg": "$age" } } }
      ],
      "type": "pipeline",
      "uri": "example-pipeline"
    }
  ]
}
GET /prod/orders/_meta would then return your aggrs metadata.
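Once the aggregation is defined, RESTHeart executes it at GET /prod/orders/_aggrs/example-pipeline, with the $var bindings passed in the avars query parameter as URL-encoded JSON. A small sketch that builds such a URL (the helper name and the "Alice" value are illustrative):

```python
import json
from urllib.parse import quote

def aggr_url(mount: str, aggr_uri: str, avars: dict) -> str:
    """Build the RESTHeart URL that executes a named aggregation,
    URL-encoding the avars JSON that binds the $var placeholders."""
    return f"{mount}/_aggrs/{aggr_uri}?avars={quote(json.dumps(avars))}"

url = aggr_url("/prod/orders", "example-pipeline", {"n": "Alice"})
print(url)
```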