I recently started using pgloader to migrate a MySQL database to PostgreSQL. Everything works well, but I get a single unique index error. I have an AFTER LOAD step that alters the database and adds a proper unique index, so I'd like to tell pgloader simply not to create that one index. Is that possible without modifying the MySQL database?
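One workaround, assuming you're fine recreating the indexes you care about yourself, is pgloader's `create no indexes` option, which skips index creation for the whole migration; you can then build the indexes you actually want in the AFTER LOAD step. A sketch (connection strings and the index/table names are placeholders, not from the original report):

```
LOAD DATABASE
     FROM mysql://user:pass@mysql-host/mydb
     INTO pgsql://user:pass@localhost/mydb

WITH include drop, create tables,
     create no indexes          -- skip all source indexes, including the duplicated one

AFTER LOAD DO
  -- hypothetical: recreate only the indexes you want
  $$ CREATE INDEX my_column_idx ON public.my_table ("myColumn"); $$;
```

The trade-off is that you lose pgloader's parallel index builds for the indexes you do want, so for large tables it may be faster to keep default index creation and only exclude the problem table via `EXCLUDING TABLE NAMES MATCHING`, loading it separately.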
ERROR PostgreSQL Database error 23505: could not create unique index "idx_106456_my_column"
DETAIL: Key ("myColumn")=(48301) is duplicated.
It seems I am not the first person to ask this question: a couple of people asked it on Stack Overflow but never received a reply. Curious whether anyone here has tackled this before.
2022-04-24T21:28:55.911543Z ERROR Database error 40001: restart transaction: TransactionRetryWithProtoRefreshError: cannot publish new versions for descriptors: [{repos 282 4}], old versions still in use
HINT: See: https://www.cockroachlabs.com/docs/v21.2/transaction-retry-error-reference.html
QUERY: COMMIT
> Debug: Database error 40001: restart transaction: TransactionRetryWithProtoRefreshError: cannot publish new versions for descriptors: [{repos 282 4}], old versions still in use
> HINT: See: https://www.cockroachlabs.com/docs/v21.2/transaction-retry-error-reference.html
> QUERY: COMMIT
> While executing: LPARALLEL.KERNEL::CALL-WITH-TRACKED-ERROR, in process lparallel(10).
;;;
;;; #<PROCESS lparallel(10) [Active] #x302001CE7CFD> requires access to Shared Terminal Input
;;; Type (:y 10) to yield control to this thread.
;;;
Hello! pgloader seems like the easiest way to migrate MySQL to Postgres, so thanks for creating it! :)
I'm running into a problem: I have a MariaDB column obj.type char(1) not null default 'X', but when I run pgloader, it complains that there's a syntax error on the column
type char(1) not null default ''X'',
I found a previous issue dimitri/pgloader#533 and a Stack Overflow question https://stackoverflow.com/questions/54616661/pgloader-quote-handling-problems-when-create-table-for-default-values, but they don't really help (the Stack Overflow question doesn't really have an answer).
I don't think the table was originally created by triple-quoting the default value, as in the GitHub issue, though I guess I have no way to know anymore. Both SHOW CREATE TABLE and select * from information_schema.columns indicate the default value is 'X', though.
Worst case, I'll just drop the default value and add it back after the migration, but is there a better workaround, one that doesn't require changing the original database first?
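If the quoting problem is limited to the default value, pgloader's CAST rules may let you drop the default during migration and restore it on the PostgreSQL side, which avoids touching the MariaDB table. A sketch, assuming the obj.type column from above (the exact cast-rule syntax is worth double-checking against the pgloader reference):

```
LOAD DATABASE
     FROM mysql://user:pass@mysql-host/mydb
     INTO pgsql://user:pass@localhost/mydb

-- drop the problematic default so pgloader never emits it in CREATE TABLE
CAST column obj.type to char keep typemod drop default

AFTER LOAD DO
  -- re-add the default on the PostgreSQL side
  $$ ALTER TABLE obj ALTER COLUMN type SET DEFAULT 'X'; $$;
```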
LOAD DATABASE
FROM mysql://dupa:pass@12.345.678.910:3306/mydb?useSSL=true&clientCertificateKeyStoreUrl=file:/home/xxx/keys/clientkeystore.jks&clientCertificateKeyStorePassword=mypass&trustCertificateKeyStoreUrl=file:/home/xxx/keys/serverTrust.jks&trustCertificateKeyStorePassword=mypass
INTO pgsql://new:pass@127.0.0.1:5432/newpsql
WITH include drop, create tables
ALTER SCHEMA 'mydb' RENAME TO 'public';
ssh -L 3306:127.0.0.1:3306 some-bastion-host-in-private-network-with-db
and then point pgloader at the forwarded localhost port, so the MySQL connection itself can stay unencrypted inside the tunnel.
Is pgloader the right tool for importing a file into PostgreSQL multiple times a day?
My use case is to import a file, update existing rows, and delete rows that are no longer in the file.
Currently I'm using a temp table, but I'm wondering whether that approach is efficient.
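For what it's worth, the temp-table pattern described above maps to plain SQL without pgloader at all. A sketch in psql, assuming a hypothetical target table items(id, name) with a primary key on id (table, columns, and file name are illustrative only):

```sql
BEGIN;

-- Staging table shaped like the target; dropped automatically at COMMIT.
CREATE TEMP TABLE items_staging (LIKE items INCLUDING ALL) ON COMMIT DROP;

-- Bulk-load the file client-side with psql's \copy.
\copy items_staging FROM 'items.csv' WITH (FORMAT csv, HEADER true)

-- Insert new rows; update rows whose key already exists.
INSERT INTO items (id, name)
SELECT id, name FROM items_staging
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name;

-- Remove rows that are no longer present in the file.
DELETE FROM items
WHERE id NOT IN (SELECT id FROM items_staging);

COMMIT;
```

Running this a few times a day is normally fine; the main cost is the DELETE's anti-join and index churn from the upserts, so keeping an index on the key column (here the primary key) matters more than the choice of tool.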