// imports below assume the RESTHeart 6+ package layout
import java.util.stream.Collectors;

import org.bson.BsonValue;
import org.restheart.exchange.MongoRequest;
import org.restheart.exchange.MongoResponse;
import org.restheart.plugins.InterceptPoint;
import org.restheart.plugins.MongoInterceptor;
import org.restheart.plugins.RegisterPlugin;
import org.restheart.utils.BsonUtils;

@RegisterPlugin(name = "csvTransformer",
        interceptPoint = InterceptPoint.RESPONSE,
        description = "transform the response to CSV format",
        enabledByDefault = true)
public class CsvTransformer implements MongoInterceptor {
    @Override
    public void handle(MongoRequest request, MongoResponse response) {
        var docs = response.getContent().asArray();
        var sb = new StringBuilder();

        // add the header row with the keys of the first document
        if (docs.size() > 0) {
            docs.get(0).asDocument().keySet().forEach(k -> sb.append(k).append(","));
            sb.append("\n");
        }

        // add one CSV row per document
        docs.stream()
            .map(BsonValue::asDocument)
            .forEach(fdoc -> {
                sb.append(fdoc.entrySet().stream()
                    .map(e -> e.getValue())
                    .map(v -> BsonUtils.toJson(v))
                    .collect(Collectors.joining(",")));
                sb.append("\n");
            });

        response.setContentType("text/csv");
        response.setCustomSender(() -> response.getExchange().getResponseSender().send(sb.toString()));
    }

    @Override
    public boolean resolve(MongoRequest request, MongoResponse response) {
        return request.isGet()
            && request.isCollection()
            && response.getContent() != null
            && request.getQueryParameterOfDefault("csv", null) != null;
    }
}
$ http -b -a admin:secret :8080/coll\?csv
_id,a,_etag,
{"$oid":"6202562ce5078606d08b79e2"},1,{"$oid":"6202562ce5078606d08b79e1"}
{"$oid":"62025626e5078606d08b79df"},1,{"$oid":"62025662e5078606d08b79e5"}
To page through the whole collection, keep requesting GET /coll?page=1, GET /coll?page=2, and so on, until you reach a page that returns an empty array.
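As a sketch, the same loop in Java using only the JDK HTTP client; it assumes RESTHeart listening on localhost:8080 with the admin:secret credentials and the default array representation used in the examples above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class PageThroughCollection {
    public static void main(String[] args) throws Exception {
        var client = HttpClient.newHttpClient();
        var auth = "Basic " + Base64.getEncoder().encodeToString("admin:secret".getBytes());

        for (var page = 1; ; page++) {
            var req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/coll?page=" + page))
                .header("Authorization", auth)
                .GET()
                .build();

            var body = client.send(req, HttpResponse.BodyHandlers.ofString()).body();

            // RESTHeart answers with a JSON array of documents;
            // an empty array means there are no more pages
            if (body.isBlank() || body.trim().equals("[]")) {
                break;
            }

            System.out.println("page " + page + ": " + body);
        }
    }
}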
## Read Performance
# default-pagesize is the number of documents returned when the pagesize query
# parameter is not specified
# see https://restheart.org/docs/mongodb-rest/read-docs#paging
default-pagesize: 100
# max-pagesize sets the maximum allowed value of the pagesize query parameter
# generally, the greater the pagesize, the more JSON serialization overhead occurs
# the rule of thumb is not to exceed 1000
max-pagesize: 1000
# cursor-batch-size sets the mongodb cursor batchSize
# see https://docs.mongodb.com/manual/reference/method/cursor.batchSize/
# cursor-batch-size should be less than or equal to max-pagesize
# the rule of thumb is setting cursor-batch-size equal to max-pagesize
# a small cursor-batch-size (e.g. 101, the default mongodb batchSize)
# speeds up requests with small pagesize
cursor-batch-size: 1000
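For example, with this configuration a request that omits pagesize returns 100 documents, while a client can explicitly ask for anything up to the max-pagesize limit; the collection name coll below is just the one used in the earlier examples:

$ http -b -a admin:secret ':8080/coll?pagesize=1000&page=1'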
ERROR o.r.mongodb.handlers.ErrorHandler - Error handling the request
com.mongodb.MongoQueryException: Query failed with error code 292 and error message 'Executor error during find command :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.' on server
As the error message says, the fix is to opt in to external sorting by passing allowDiskUse: true.
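For reference, this is roughly what opting in looks like at the MongoDB Java driver level; the connection string, database and collection names, and the pipeline below are only placeholders:

import static com.mongodb.client.model.Aggregates.sort;
import static com.mongodb.client.model.Sorts.ascending;

import com.mongodb.client.MongoClients;
import java.util.List;

public class AllowDiskUseExample {
    public static void main(String[] args) {
        try (var client = MongoClients.create("mongodb://localhost:27017")) {
            var coll = client.getDatabase("restheart").getCollection("coll");

            // allowDiskUse(true) lets the sort stage spill to disk instead of
            // aborting once it exceeds the 100 MB in-memory sort limit
            coll.aggregate(List.of(sort(ascending("a"))))
                .allowDiskUse(true)
                .forEach(doc -> System.out.println(doc.toJson()));
        }
    }
}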
Variables can be passed to an aggregation with the avars query parameter, e.g. ?avars={"var1": 1, "var2": {"an": "object"}}. Aggregations can also use the page and pagesize query parameter values as parameters (predefined variables); see https://restheart.org/docs/mongodb-rest/aggregations/#predefined-variables.
filter
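For example, a predefined aggregation can be invoked with variables passed via avars like this (the aggregation uri by-status is purely hypothetical):

$ http -b -a admin:secret ':8080/coll/_aggrs/by-status?avars={"var1": 1, "var2": {"an": "object"}}'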
is there a way to limit (or make unlimited) the number of returned documents? I noticed that it returns 100 only. How do I get all? Or limit to 10 only? I know RESTHeart supports pagination, but I can't figure out how to use it. Thank you!!
Use ?page=x to ask for page number x. For example, GET /mybucket.files?pagesize=100&page=3 returns the files from the 201st to the 300th.
?filter={ <mongo query> } is the way to limit the result set. With GET /mybucket.files/_size?filter={ <mongo query> } you'll get the count of the files that match the query.
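For instance, to count only the files with a given filename (the filter shown is just an example query on the standard GridFS filename field):

$ http -b -a admin:secret ':8080/mybucket.files/_size?filter={"filename":"myfile.jpg"}'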
Greetings! I am testing retrieving a binary file from a bucket, but realized that many of the files come back empty and RESTHeart returns a 500 HTTP status code:
% http --verify=no -a admin:secret -f GET https://localhost/storage/mybucket.files/myfile.jpg/binary
HTTP/1.1 500 Internal Server Error
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Location, ETag, X-Powered-By, Auth-Token, Auth-Token-Valid-Until, Auth-Token-Location
Auth-Token: 3ixg98kbwzxso77wqpwt11y8z65a08icn27ssncbs2nlm085i0
Auth-Token-Location: /tokens/admin
Auth-Token-Valid-Until: 2022-04-04T18:28:26.530537652Z
Connection: close
Content-Disposition: inline; filename="file"
Content-Length: 0
Content-Transfer-Encoding: binary
Content-Type: image/jpeg
Date: Mon, 04 Apr 2022 18:13:26 GMT
ETag: 6204a40e9bf8cb3fb5a0a642
Server: Apache
Set-Cookie: ROUTEID=.route1; path=/
X-Powered-By: restheart.org
Meanwhile, RESTHeart logs print:
18:13:26.533 [XNIO-1 task-3] ERROR org.restheart.handlers.ErrorHandler - Error handling the request
com.mongodb.MongoGridFSException: Unexpected Exception when reading GridFS and writing to the Stream
at com.mongodb.client.gridfs.GridFSBucketImpl.downloadToStream(GridFSBucketImpl.java:578)
Caused by: com.mongodb.MongoGridFSException: Could not find file chunk for file_id: BsonString{value='myfile.jpg'} at chunk index 0.
at com.mongodb.client.gridfs.GridFSDownloadStreamImpl.getBufferFromChunk(GridFSDownloadStreamImpl.java:246)
18:13:26.535 [XNIO-1 task-3] ERROR io.undertow.request - UT005071: Undertow request failed HttpServerExchange{ GET /mybucket.files/myfile.jpg/binary}
com.mongodb.MongoGridFSException: Unexpected Exception when reading GridFS and writing to the Stream
at com.mongodb.client.gridfs.GridFSBucketImpl.downloadToStream(GridFSBucketImpl.java:578)
Caused by: com.mongodb.MongoGridFSException: Could not find file chunk for file_id: BsonString{value='myfile.jpg'} at chunk index 0.
at com.mongodb.client.gridfs.GridFSDownloadStreamImpl.getBufferFromChunk(GridFSDownloadStreamImpl.java:246)
18:13:26.537 [XNIO-1 task-3] INFO org.restheart.handlers.RequestLogger - GET http://localhost/mybucket.files/myfile.jpg/binary from /127.0.0.1:34524 => status=500 elapsed=10ms contentLength=0 username=admin roles=[admin]
Would you kindly help me decode the message and figure out how to solve it?
1) Retrieving the myfile.jpg metadata (i.e., without /binary) works fine.
2) I did delete a few documents from the mybucket.files collection using MongoDB Compass and didn't delete the corresponding documents in mybucket.chunks. I'm assuming MongoDB Compass does that automatically, or that it doesn't really matter.
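The log line 'Could not find file chunk for file_id ... at chunk index 0' indicates a document in mybucket.files whose chunks are missing from mybucket.chunks, which matches the empty 500 responses. A minimal sketch to list such orphaned entries with the MongoDB Java driver (the connection string and database name are assumptions):

import static com.mongodb.client.model.Filters.eq;

import com.mongodb.client.MongoClients;

public class FindOrphanedGridFsFiles {
    public static void main(String[] args) {
        try (var client = MongoClients.create("mongodb://localhost:27017")) {
            var db = client.getDatabase("restheart"); // assumed database name
            var files = db.getCollection("mybucket.files");
            var chunks = db.getCollection("mybucket.chunks");

            // GridFS keeps the binary payload in <bucket>.chunks, linked to
            // <bucket>.files via the files_id field; a files document with
            // zero chunks produces the error logged above
            for (var file : files.find()) {
                var id = file.get("_id");
                if (chunks.countDocuments(eq("files_id", id)) == 0) {
                    System.out.println("orphaned files document: " + file.toJson());
                }
            }
        }
    }
}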