philipmarzullo64 on 3.15: information_schema.triggers DDL…
philipmarzullo64 on 3.14: 0005685: MariaDB JDBC Driver ve…
philipmarzullo64 on 3.13: 0005685: MariaDB JDBC Driver ve…
erilong on 3.14: 0005678: Extract fails with dur…
erilong on 3.13: 0005678: Extract fails with dur…
erilong on 3.12: 0005677: Extract fails with dur…
evan-miller-jumpmind on 3.14: 0005671: Brought back java rout…
erilong on 3.14: 0005676: Initial load sends too…
erilong on 3.13: 0005676: Initial load sends too…
erilong on 3.12: 0005667: Initial load sends too…
catherinequamme on 3.14: 0005675: Bidirectional sync cau…
erilong on 3.13: 0005672: Sync triggers fails wi…
erilong on 3.12: 0003576: Sync triggers fails wi…
erilong on 3.14: 0003576: Sync triggers fails wi…
evan-miller-jumpmind on 3.14: 0005669: Prevented a trigger's …
joshahicks on 3.14: 0005643: Requesting a full load…; Merge branch '3.14' of https://…; Merge branch '3.14' of https://…; and 2 more
evan-miller-jumpmind on 3.14: 0005664: Fixed incorrect alteri…
erilong on 3.14: 0005663: Add table reload reque…
erilong on 3.14: 0005662: Snapshot util too slow…
erilong on 3.13: 0005659: Snapshot util too slow…
Hi, I am upgrading from 3.8 to 3.13 and noticed that entries in the sym_outgoing_batch table for channel_id = 'heartbeat' are still being inserted, but when the target node is offline the sym_outgoing_batch.status of the previous entries is no longer updated. Hence when the node comes back online it can have many outdated, superfluous heartbeat entries that must be synced rather than just the most recent one. I noticed that in revision 0003883 (5/03/19) the PushHeartbeatListener.heartbeat() method was changed from
log.debug("Updating my node info");
engine.getOutgoingBatchService().markHeartbeatAsSent(); <---- this has been removed
engine.getNodeService().updateNodeHostForCurrentNode();
log.debug("Done updating my node info");
to
log.debug("Updating my node info");
if (engine.getOutgoingBatchService().countOutgoingBatchesUnsentHeartbeat() == 0) {
    engine.getNodeService().updateNodeHostForCurrentNode();
}
log.debug("Done updating my node info");
Is there some other functionality that replaced the updating of the 'outdated' sym_outgoing_batch heartbeat entries that I need to configure for 3.13, or is it now intended for these to be synced?
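For what it's worth, the difference between the two versions of heartbeat() can be sketched in plain Java. This is only an illustration of the control flow quoted above, with the SymmetricDS services replaced by simple counters, not the real API:

```java
public class HeartbeatGuardSketch {
    /** Stand-in for engine.getOutgoingBatchService().countOutgoingBatchesUnsentHeartbeat(). */
    static int unsentHeartbeatBatches = 3;
    /** Counts calls that would reach engine.getNodeService().updateNodeHostForCurrentNode(). */
    static int nodeHostUpdates = 0;

    /**
     * 3.13+ behavior: there is no markHeartbeatAsSent() call any more; instead,
     * the node-host update (and thus a new heartbeat batch) is skipped entirely
     * while heartbeat batches from earlier runs are still waiting to be sent.
     */
    static void heartbeat() {
        if (unsentHeartbeatBatches == 0) {
            nodeHostUpdates++;
        }
    }

    public static void main(String[] args) {
        heartbeat(); // skipped: target offline, 3 unsent heartbeat batches remain
        heartbeat(); // still skipped, so new superfluous rows do not pile up
        unsentHeartbeatBatches = 0; // target came back online and drained the queue
        heartbeat(); // node host info is refreshed again
        System.out.println(nodeHostUpdates); // prints 1
    }
}
```

So instead of marking old heartbeat rows as sent, the newer code avoids queuing further heartbeats while earlier ones are outstanding; batches that were already created before the node went offline would still be delivered on reconnect.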
The incoming purge process is about to run
Getting range for incoming batch
Creating dump
About to purge incoming batch
Done purging 62 of data rows
Getting range for outgoing batch
Getting first batch_id for outstanding batches
About to purge data_event_range using range 26324 thro…
Done purging 0 of data_event_range rows
About to purge outgoing_batch_range using range 26324 …
Done purging 0 of outgoing_batch_range rows
About to purge data_event using range 26061 through 26…
Done purging 47 incoming batch rows
Purging incoming error rows
Purged 0 incoming error rows
Purging registration requests that are older than Sun…
Purging monitor events that are older than Thu Jul 14…
The incoming purge process has completed
Done purging 62 of data_event rows
About to purge outgoing_batch using range 26061 throug…
Done purging 49 of outgoing_batch rows
Looking for lingering batches before batch ID 26389
Found 14 lingering batches to purge
Done purging 14 lingering batches and 32 rows
Getting range for stranded data events
About to purge stranded_data_event using range 0 throu…
Done purging 0 of stranded_data_event rows
Getting range for stranded data
I have an issue where old records that are updated don't get synchronized. Here is my log from the target node.
Please can anyone provide any help?
Hi everyone,
Is it normal for a sync of 14,000 records to take hours to sync?
Here's my current configuration on a source node:
job.routing.period.time.ms=5000
job.push.period.time.ms=3000
job.pull.period.time.ms=3000
dataloader.max.rows.before.commit=100
auto.resolve.foreign.key.violation.reverse=true
dataloader.use.primary.keys.from.source=true
@jyotid1815_gitlab Without any changes, SymmetricDS will sync every 60s and will send up to 100 batches at a time with a max batch size of 10,000. A lot of other factors come into play regarding performance, though, such as the network speed, the size of the data, and the machines it is running on. You may need to check some of these areas to determine where the slowness is occurring before adjustments can be made. You could start by changing the parameters around the frequency of pushes and pulls: job.push.period.time.ms and job.pull.period.time.ms.
You could also adjust the channel's max batch to send from the default 100 down to 5 to see if it is having issues sending that many batches at once.
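That setting is the max_batch_to_send column on sym_channel. As a sketch (the channel_id 'default' below is just an example, substitute your own channel):

```sql
-- inspect the current batching limits per channel
select channel_id, max_batch_size, max_batch_to_send from sym_channel;

-- throttle how many batches are sent per push/pull on one channel
update sym_channel set max_batch_to_send = 5 where channel_id = 'default';
```

Channel configuration is synced to the nodes, so the update only needs to be run on the registration server.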