Hey guys, I have a question about how replicated logs are stored. In my current project, I will have an Atomix cluster running for months without restarting, collecting plenty of data every day. This data needs to be 100% consistent across all nodes, which is why I plan to use Raft as the replication mechanism. However, I only need to access the data from the last 24 hours. So I was wondering: is there a way for Atomix to discard log entries older than that, to save disk space and memory? Otherwise, if I ever need to restart a node, startup would take a long time, since all log entries (most of which are no longer needed) would have to be replayed.
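To make my concern concrete, here is a toy sketch of the general idea I'm after (plain Python, deliberately not Atomix's API, since I don't know what it actually offers here): if the log gets compacted into a snapshot, a restarting node only replays the entries appended after that snapshot, instead of the whole history.

```python
# Toy sketch (NOT Atomix's API): why log compaction shortens restarts.
# A node that snapshots its state machine and truncates the log only
# needs to replay entries appended after the last snapshot.

class ToyReplicatedStore:
    def __init__(self):
        self.log = []            # replicated log entries (key, value)
        self.snapshot = {}       # last snapshot of the state machine
        self.snapshot_index = 0  # log index covered by the snapshot

    def append(self, key, value):
        self.log.append((key, value))

    def take_snapshot(self):
        """Fold all current log entries into the snapshot, then drop them."""
        state = dict(self.snapshot)
        for key, value in self.log:
            state[key] = value
        self.snapshot = state
        self.snapshot_index += len(self.log)
        self.log = []  # compaction: these entries are never replayed again

    def restart(self):
        """Simulate recovery: load the snapshot, replay the remaining log."""
        state = dict(self.snapshot)
        replayed = 0
        for key, value in self.log:
            state[key] = value
            replayed += 1
        return state, replayed

store = ToyReplicatedStore()
for i in range(1000):
    store.append(f"k{i % 10}", i)
store.take_snapshot()          # compact: 1000 old entries folded away
store.append("k0", 9999)       # one fresh entry after the snapshot
state, replayed = store.restart()
print(replayed)                # → 1 (instead of 1001 without compaction)
print(state["k0"])             # → 9999
```

That's the behavior I'm hoping for: startup cost proportional to the recent entries only. Whether Atomix exposes something like this (time-based retention, or snapshot-driven compaction) is exactly my question.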