Activity
  • Nov 25 15:26 | vanmetjk on 3.14 | 0005596: Failure to Flush when … (compare)
  • Nov 25 15:20 | vanmetjk on 3.13 | 0005595: Failure to Flush when … (compare)
  • Nov 25 15:14 | vanmetjk on 3.12 | 0005594: Failure to Flush when … (compare)
  • Nov 22 16:20 | erilong on 3.9 | remove docbook (compare)
  • Nov 21 18:07 | catherinequamme on 3.14 | 0005593: Incorrect Logic in var… 0005593: Incorrect Logic in var… (compare)
  • Nov 16 20:27 | evan-miller-jumpmind on 3.15 | 0005589: Moved REST API to Symm… (compare)
  • Nov 15 18:41 | evan-miller-jumpmind on 3.12 | 0005584: Added test for overlap… (compare)
  • Nov 15 18:36 | evan-miller-jumpmind on 3.13 | Ran spotlessApply (compare)
  • Nov 15 18:36 | evan-miller-jumpmind on 3.13 | 0005585: Added test for overlap… (compare)
  • Nov 15 18:30 | evan-miller-jumpmind on 3.14 | 0005586: Added test for overlap… (compare)
  • Nov 15 16:05 | philipmarzullo64 on 3.14 | 0005586: Trigger creation fails… (compare)
  • Nov 15 16:01 | philipmarzullo64 on 3.13 | 0005585: Trigger creation fails… (compare)
  • Nov 15 15:50 | philipmarzullo64 on 3.12 | 0005584: Trigger creation fails… (compare)
  • Nov 14 18:50 | erilong on 3.14 | 0005583: Service wrapper wait f… (compare)
  • Nov 14 15:17 | catherinequamme on 3.14 | 0005582: Database Platforms Tha… Merge branch '3.14' of https://… (compare)
  • Nov 10 21:19 | evan-miller-jumpmind on 3.15 | 0005547: Treat any default valu… Ran spotlessApply 0004874: SQL Server and Sybase … and 28 more (compare)
  • Nov 10 21:17 | evan-miller-jumpmind on 3.14 | Ran spotlessApply (compare)
  • Nov 09 19:34 | catherinequamme on 3.14 | 0005577: Adding Module for Azur… 0005581: buildPro does not work… Merge branch '3.14' of https://… (compare)
  • Nov 08 18:07 | evan-miller-jumpmind on 3.14 | 0003109: Improved error message… (compare)
  • Nov 07 21:11 | erilong on 3.14 | 0005560: retry routing with con… (compare)

joshahicks
@joshahicks
@zalmanlew we used to ship the JTDS driver but have switched to the MSSQL one with the latest releases, finding it faster and more stable
3 replies
Are you using SQL Anywhere for the other issues above, by the way?
1 reply
joshahicks
@joshahicks
Could you send us a log?
1 reply
susana prado
@susanap23640384_twitter
Hi, does anyone know why I can't choose master or node?
joshahicks
@joshahicks
@susanap23640384_twitter Do you mean the web console? We changed the wording. "Setup new..." = "Master" and "Join existing..." = "Node".
susana prado
@susanap23640384_twitter
When I click Open Web Console, the next screen doesn't let me choose master or node; it says there is an unexpected error
joshahicks
@joshahicks
Look at logs/wrapper.log and see what error it is hitting at startup. The next place to check is logs/symmetric.log.
adamitsch
@adamitsch
I am using the Android client example and get this error when trying to transfer data (MSSQL):
https://pastebin.com/VS5Q0pm6
adamitsch
@adamitsch
I have tried disabling the monitor job but it didn't help
adamitsch
@adamitsch
On the server it only keeps registering the node and nothing else
jmckgenerali
@jmckgenerali:matrix.org
Hi, there seems to be a bug in SymmetricDS. I've installed the latest Pro version (trial) on an Ubuntu VM. It doesn't let me connect to an Oracle DB in the cloud using an Oracle wallet. https://i.imgur.com/Ue0vERN.png
What could be the cause?
Oracle SQL Developer connects successfully (tried from my Windows desktop):
jmckgenerali
@jmckgenerali:matrix.org
@zalmanlew:
adamitsch
@adamitsch
Screenshot from 2022-07-12 11-18-28.png
Jyoti D
@jyotid1815_gitlab
hi,
SymmetricDS is taking a long time to sync the data. I can see in the logs that the data is still in the process of syncing; the logs show that data from the last 3 to 4 days is still pending. Please help me out.
1 reply
jmckgenerali
@jmckgenerali:matrix.org
I've created a bug report regarding the jdbc issue. It includes logs with stack traces.
joshahicks
@joshahicks
@jmckgenerali:matrix.org Thank you for submitting the issue. We are trying to release 3.14, so that has been the focus most recently, but we will look at it soon.
joshahicks
@joshahicks
@adamitsch It looks like an issue with Android or a class loading issue with the Apache Commons jar. Once 3.14 is released we will try to take a look. Please let us know if you find anything on your side that fixes it.
@jyotid1815_gitlab Do you have any nodes offline? Nodes that are registered but not actually running or connecting? That would backlog the data. Also make sure the routing and purge jobs are not throwing any errors in the logs. Do you have any batches in error?
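For anyone following the same checks, a rough sketch of the queries involved, assuming the standard sym_ table names (adjust for your schema and node group):

-- nodes whose heartbeat has gone stale, i.e. possibly offline
select node_id, heartbeat_time from sym_node_host order by heartbeat_time;
-- outgoing batches currently in error
select batch_id, node_id, channel_id, status, sql_message from sym_outgoing_batch where status = 'ER';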
Jyoti D
@jyotid1815_gitlab
The nodes are not offline. There are no errors in the logs. The outgoing batches are showing the status NE.
joshahicks
@joshahicks
@jyotid1815_gitlab is the target set up to pull these batches?
adamitsch
@adamitsch
@joshahicks Any idea how to debug this? I have tried different release versions but just encountered other errors.
Jyoti D
@jyotid1815_gitlab
Yes, the target is set up to pull batches. Right now, out of 150,000 records, only 4,000 have synced to the table. But the error log is still showing one entry.
veripolis-ms
@veripolis-ms
milliseconds are not synced in datetime(6) fields.
dikum
@dikum
@joshahicks When I enter a new record, it syncs to the target node successfully, and it also works when I update that record. But when an older record is updated, it does not sync. What could be the cause of this?
mitpjones
@mitpjones

Hi, I am upgrading from 3.8 to 3.13 and noticed that the entries in the sym_outgoing_batch table for channel_id = 'heartbeat' are still being inserted, but when the target node is offline the sym_outgoing_batch.status of the previous entries is no longer updated. Hence when the node comes back online it can have many outdated, superfluous heartbeat entries that must be synced rather than just the most recent one. I noticed that in revision 0003883 (5/03/19) the PushHeartbeatListener.heartbeat() method was changed from

        log.debug("Updating my node info");
        engine.getOutgoingBatchService().markHeartbeatAsSent();   <---- this has been removed
        engine.getNodeService().updateNodeHostForCurrentNode();
        log.debug("Done updating my node info");

to

        log.debug("Updating my node info");
        if (engine.getOutgoingBatchService().countOutgoingBatchesUnsentHeartbeat() == 0) {
            engine.getNodeService().updateNodeHostForCurrentNode();
        }
        log.debug("Done updating my node info");

Is there some other functionality that replaced the updating of the 'outdated' sym_outgoing_batch heartbeat entries that I need to configure for 3.13, or is it now intended for these to be synced?

Jyoti D
@jyotid1815_gitlab
SymmetricDS is taking a long time to sync the data. I can see in the logs that the data is still in the process of syncing; the logs show that data from the last 3 to 4 days is still pending. Please help me out and give some inputs for resolving this issue.
dikum
@dikum

The incoming purge process is about to run
Getting range for incoming batch :
Creating dump
About to purge incoming batch
Done purging 62 of data rows
Getting range for outgoing batch
Getting first batch_id for outstanding batches
About to purge data_event_range using range 26324 thro>2022-07-15 00:00:00.201 INFO 2584 --- [server-000-
Done purging 0 of data_event_range rows
About to purge outgoing_batch_range using range 26324 >2022-07-15 00:00:00.201 INFO 2584 --- [server-000-
Done purging 0 of outgoing_batch_range rows
About to purge data_event using range 26061 through 26>2022-07-15 00:00:00.225 INFO 2584 --- [server-000-
Done purging 47 incoming batch rows
Purging incoming error rows
Purged 0 incoming error rows
Purging registration requests that are older than Sun>2022-07-15 00:00:00.227 INFO 2584 --- [server-000-job-12]
Purging monitor events that are older than Thu Jul 14>2022-07-15 00:00:00.250 INFO 2584 --- [server-000-job-
The incoming purge process has completed
Done purging 62 of data_event rows
About to purge outgoing_batch using range 26061 throug>2022-07-15 00:00:00.426 INFO 2584 --- [server-000-
Done purging 49 of outgoing_batch rows
Looking for lingering batches before batch ID 26389
Found 14 lingering batches to purge
Done purging 14 lingering batches and 32 rows
Getting range for stranded data events
About to purge stranded_data_event using range 0 throu>2022-07-15 00:00:00.561 INFO 2584 --- [server-000-
Done purging 0 of stranded_data_event rows
Getting range for stranded data

I have an issue where old records that are updated don't get synchronized. The log above is from my target node.
Can anyone provide any help?

veripolis-ms
@veripolis-ms
@dikum I'm pretty new myself, but I suggest you check whether there is an update trigger on the source table if its updates don't get synchronized out (see the query sketch below).
I have an issue too. I am synchronizing four databases in all directions, and it looks correct in the short term, but if I wait and check later, deleted rows are back.
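As a side note on checking the update trigger: a minimal query sketch, assuming the table is configured in sym_trigger; 'your_table' is a placeholder:

-- confirm that inserts, updates, and deletes are being captured for the table
select trigger_id, source_table_name, sync_on_insert, sync_on_update, sync_on_delete
from sym_trigger
where source_table_name = 'your_table';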
veripolis-ms
@veripolis-ms
What is the correct way to alter the structure of a synchronized table?
veripolis-ms
@veripolis-ms
It seems that earlier rows, which were deleted, are stored in files somewhere, and not only in the SymmetricDS database? Where are they stored?
I have re-created all the databases and the old rows are still showing up.
dikum
@dikum
@veripolis-ms Thanks. The thing is, if I enter a new record it synchronizes, and an update made on that new record also synchronizes. The problem is with the existing records in the source DB: they don't get inserted or updated in the target DB.
dikum
@dikum
I think I figured it out. It was the initial_load_select column on the sym_trigger_router table that was restricting the number of records being loaded. I set this field to null and it seems to be fine.
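For reference, a rough sketch of that fix; the trigger and router IDs below are placeholders:

-- clear the initial load filter so the full table is extracted on the next load
update sym_trigger_router
set initial_load_select = null
where trigger_id = 'my_trigger' and router_id = 'my_router';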
dikum
@dikum

Hi everyone,
Is it normal for a sync of 14,000 records to take hours?
Here's my current configuration on a source node:

# run the routing job every 5 seconds
job.routing.period.time.ms=5000
# run the push job every 3 seconds
job.push.period.time.ms=3000
# run the pull job every 3 seconds
job.pull.period.time.ms=3000
# commit on the target after every 100 rows loaded
dataloader.max.rows.before.commit=100
auto.resolve.foreign.key.violation.reverse=true
dataloader.use.primary.keys.from.source=true

joshahicks
@joshahicks
@dikum If you look at sym_outgoing_batch for batches with a status of OK on channel default (or whatever channel you are using), there are stats in there showing where the slowness might be coming from. Look at extract_millis, network_millis, and load_millis first, to see if any of them is much larger than the others.
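A minimal query along those lines, assuming the data flows on the default channel (change channel_id as needed):

select batch_id, node_id, extract_millis, network_millis, load_millis
from sym_outgoing_batch
where status = 'OK' and channel_id = 'default'
order by batch_id desc;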
adamitsch
@adamitsch
I would like to add an ORM layer over SQLite on Android. Which solution/library works best together with SymmetricDS?
joshahicks
@joshahicks
SymmetricDS works directly with the database, so any application layered on top should not matter
adamitsch
@adamitsch
I mean, some create their own database and you need to manually copy from the existing database, etc. I just tried DBFlow and somehow can't specify the database name following the documentation... I just want to know which ORM people use the most.
Jyoti D
@jyotid1815_gitlab
extract_millis=169 is showing in sym_outgoing_batch and the status is NE. Is this the reason for the slow syncing?
joshahicks
@joshahicks
@jyotid1815_gitlab This means it is extracting very fast and sitting in NE (new) status waiting for the target node to pull the change. So the source looks good, but the target is probably where you want to look next, because it is not pulling the change.
Did you find any batches with OK status? They would have all the stats populated.
Jyoti D
@jyotid1815_gitlab
Very few batches are in OK status; most of the batches are in NE or LD status. What changes need to be made at the target to make the pull faster? It is taking 4 to 5 days to sync 100,000 records in a table.
joshahicks
@joshahicks

@jyotid1815_gitlab Without any changes, SymmetricDS will sync every 60s and will send up to 100 batches at a time with a max batch size of 10,000. A lot of other factors come into play regarding performance, though, such as the network speed, the size of the data, and the machines it is running on. You may need to check some of these areas to determine where the slowness is occurring before adjustments can be made. You could start by changing the parameters around the frequency of pushes and pulls: job.push.period.time.ms & job.pull.period.time.ms

You could also adjust the channel's max batch to send from 100 down to 5 to see if it is having issues sending that many batches at once
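A rough sketch of that channel adjustment, assuming the data flows on the default channel:

-- send at most 5 batches per sync attempt on the default channel
update sym_channel set max_batch_to_send = 5 where channel_id = 'default';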