Hi postgres pros, I'm trying to figure out the right architecture for a Saved Search notification system.
Essentially, we have thousands of users who we want to be able to create a Saved Search (or Watch a Search) and get a notification when there are new records for that particular query.
I've read about the LISTEN/NOTIFY feature in PostgreSQL, but I'm not sure it's going to be efficient for us given the scale.
An alternative is to use Elasticsearch, but then we have to add that to our stack. Any idea whether PostgreSQL can handle this at scale? Any drawbacks you can identify?
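For context, a minimal sketch of the LISTEN/NOTIFY shape I'm imagining (the records table, channel name, and worker setup are all made up for illustration):

CREATE OR REPLACE FUNCTION notify_new_record() RETURNS trigger AS $$
BEGIN
    -- tell listeners a new row landed; payload is the row id
    PERFORM pg_notify('new_records', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER records_notify_trigger
    AFTER INSERT ON records
    FOR EACH ROW EXECUTE PROCEDURE notify_new_record();

-- A worker connection would run LISTEN new_records; and match each new
-- row against the saved searches, fanning notifications out to users.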
Hi all, it's my first time here in the group.
I have a question about replicating data into two different tables.
Consider this example:
I am getting some BookingData from one of the APIs as JSON, and I am saving it in Table1 as:
Id - PK bigint
UniqueID - unique string value
data - jsonb
I need to send these data to other parties, and for that I can simply dump the JSON and send it on.
But once that step is completed, the data also needs to be filterable from my UI application, which means I need to use normalized tables:
Table2, Table3, Table4 with PK-FK relationships.
The question: is it okay to write the data into two different places in two different formats, so that my reads (UI/API) can easily support complete filtering?
Or do I need to live with a single set of normalized tables, and group the data back together again whenever I send it to others?
Which option is better?
For me, writing to two places, so that the UI and others can rely on the data (see the sketch below).
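A minimal sketch of the two-place approach, assuming the normalized rows are derived from the raw jsonb inside the same transaction so the two copies can't drift (table and field names are invented):

CREATE TABLE IF NOT EXISTS table1 (
    id       BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    uniqueid TEXT UNIQUE NOT NULL,
    data     JSONB NOT NULL
);

CREATE TABLE IF NOT EXISTS table2 (
    id         BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    table1_id  BIGINT NOT NULL REFERENCES table1 (id),
    guest_name TEXT,
    booked_at  TIMESTAMPTZ
);

CREATE OR REPLACE FUNCTION normalize_booking() RETURNS trigger AS $$
BEGIN
    -- project the fields the UI filters on out of the raw jsonb
    INSERT INTO table2 (table1_id, guest_name, booked_at)
    VALUES (NEW.id,
            NEW.data ->> 'guestName',
            (NEW.data ->> 'bookedAt')::timestamptz);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_normalize
    AFTER INSERT ON table1
    FOR EACH ROW EXECUTE PROCEDURE normalize_booking();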
@tsayao Per that page:
Thus, with a cache setting of one it is safe to assume that nextval values are generated sequentially; with a cache setting greater than one you should only assume that the nextval values are all distinct, not that they are generated purely sequentially.
So that guarantee only holds with a cache setting equal to 1.
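To illustrate the difference (made-up sequence names):

CREATE SEQUENCE seq_strict CACHE 1;   -- nextval values are generated sequentially
CREATE SEQUENCE seq_cached CACHE 20;  -- each session pre-allocates 20 values,
                                      -- so values are only guaranteed distinct

SELECT nextval('seq_strict');
SELECT nextval('seq_cached');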
So I have the current problem: I have a main table and a table_2 that holds additional info about records in the main table. When the main table is updated in any way, I want to update the changed timestamp on that table. When table_2 is updated, I want to update the changed timestamps on both tables.
I have the trigger written for updating a single table, but I can't figure out how to accomplish the two-table case:
CREATE TABLE IF NOT EXISTS table_1 (
    id UUID NOT NULL,
    data TEXT NULL,
    changed TIMESTAMP NOT NULL DEFAULT current_timestamp
);

CREATE TABLE IF NOT EXISTS table_2 (
    id UUID NOT NULL,
    other_data TEXT NULL,
    changed TIMESTAMP NOT NULL DEFAULT current_timestamp
);

-- update single table
CREATE OR REPLACE FUNCTION update_single_table() RETURNS TRIGGER AS $$
BEGIN
    -- stamp the row being written with the current time
    NEW.changed = now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- table 1 trigger
CREATE TRIGGER update_table_1_trigger
    BEFORE UPDATE ON table_1
    FOR EACH ROW EXECUTE PROCEDURE update_single_table();

-- this needs to change to update BOTH tables.
CREATE TRIGGER update_table_2_trigger
    BEFORE UPDATE ON table_2
    FOR EACH ROW EXECUTE PROCEDURE update_single_table();
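One possible shape for the table_2 case, assuming table_2.id matches the corresponding main-table id (the posted schema doesn't show the link, so that part is a guess):

CREATE OR REPLACE FUNCTION update_both_tables() RETURNS TRIGGER AS $$
BEGIN
    NEW.changed = now();
    -- also touch the parent row; its own BEFORE UPDATE trigger
    -- fires and refreshes its timestamp as usual
    UPDATE table_1 SET changed = now() WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS update_table_2_trigger ON table_2;
CREATE TRIGGER update_table_2_trigger
    BEFORE UPDATE ON table_2
    FOR EACH ROW EXECUTE PROCEDURE update_both_tables();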
Hello everyone! :) I have some problems with PostgreSQL instance-level encryption. So, I am trying to encrypt a PostgreSQL database "at rest" and I have followed the steps from this source:
But when I want to create the encrypted instance using the command:
initdb -K /data/my_pwd_command.sh -D data_encrypted
it says that the option -K is illegal, so I can not pass the encryption key... Could anyone explain why I do not have this option? Or suggest another way to encrypt the DB at rest? Thank you :)
I have 2 tables in PostgreSQL:

student table:
student_id | name  | score
-----------|-------|------
1          | Adam  | 10
2          | Brian | 9
I have a Python script which fetches a DataFrame with columns "name" and "score" and then populates the student table from it.
student_log table:
log_id | student_id | score
-------|------------|------
1      | 1          | 10
2      | 2          | 9
I want to update the student and student_log tables whenever the "score" changes for a student. Also, if there is a new student name in the DataFrame, I want to add a row for it in the student table as well as record it in the student_log table. Can anyone suggest how this can be done?
Let us say the newly fetched DataFrame looks like this:

name | score
-----|------
Adam | 7
Lee  | 5
Then the expected result is:

student table:
student_id | name  | score
-----------|-------|------
1          | Adam  | 7
2          | Brian | 9
3          | Lee   | 5

student_log table:
log_id | student_id | score
-------|------------|------
1      | 1          | 10
2      | 2          | 9
3      | 1          | 7
4      | 3          | 5
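A minimal sketch of one way to do the write, assuming student.name has a UNIQUE constraint and both id columns are serial/identity (neither is shown in the post); the script would run this once per DataFrame row:

WITH upserted AS (
    INSERT INTO student (name, score)
    VALUES ('Adam', 7)                      -- one (name, score) pair per row
    ON CONFLICT (name) DO UPDATE
        SET score = EXCLUDED.score
        WHERE student.score IS DISTINCT FROM EXCLUDED.score
    RETURNING student_id, score
)
INSERT INTO student_log (student_id, score)
SELECT student_id, score FROM upserted;

Because the WHERE clause suppresses the update when the score is unchanged, RETURNING yields no row in that case and no log entry is written.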
Hi all, has anyone got any idea why this join is a Seq Scan on sub_accounts? There's an index on account_id in the sub_accounts table, and an index "user_account_id_idx" btree ((((document -> 'role'::text) ->> 'accountId'::text)::uuid)) in the users table.
Hello! I am looking into building a change data capture pipeline for our monolithic Postgres database, using Debezium. I am currently trying to estimate the impact of using Debezium on our primary Postgres instance. Specifically, I'm interested in:
Are there any benchmark tests that exist, or relevant reading someone can point me to? Thanks in advance! :pray:
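In case it's useful to others measuring the same thing: once the connector's logical replication slot exists, one quick way to watch its WAL footprint on the primary is something like the sketch below (standard catalog views; the slot name depends on the connector config):

-- how much WAL each replication slot is holding back
SELECT slot_name,
       active,
       pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots;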
Hey all, I'm referring to the 9.6 ALTER TABLE documentation around SET DATA TYPE. I had a few questions about the following paragraph:
SET DATA TYPE
This form changes the type of a column of a table. Indexes and simple table constraints involving the column will be automatically converted to use the new column type by reparsing the originally supplied expression. The optional COLLATE clause specifies a collation for the new column; if omitted, the collation is the default for the new column type. The optional USING clause specifies how to compute the new column value from the old; if omitted, the default conversion is the same as an assignment cast from old data type to new. A USING clause must be provided if there is no implicit or assignment cast from old to new type.
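For concreteness, the statement form I'm asking about looks like this (table and column names made up; the USING clause is optional here because an assignment cast exists):

ALTER TABLE orders
    ALTER COLUMN quantity SET DATA TYPE bigint
    USING quantity::bigint;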
1) What are "simple table constraints"? What would qualify as a non-simple constraint?
2) What does "reparsing the originally supplied expression" mean? Does that refer to the original index/constraint expression, except with the new data type subbed in?
3) For the type of conversion I'm attempting to execute (INTEGER), there's an assignment cast available. What are the performance characteristics of attempting that ALTER TABLE call, assuming that the column is involved in one or more indexes or constraints?
Thanks for any help and insight.
Context: Let's say I have a system where users can edit something like a comment, but I want to keep an archive of all previous versions of comments. 98% of the time, only the current comment/post/etc. is relevant. I expect the site-wide ratio of current to archive to average 1-to-1, but it could be up to 1-to-5. Comment rows would usually be looked up by indexed columns (authorId, commentId, threadId, etc.). Nothing like a keyword search will be done on these tables. Depending on which option I choose (below), there will likely be many tables following the chosen pattern.
Option 1: Two Tables. Keep current rows in one table and archived versions in a separate archive table. The advantage may(?) be faster lookups & a smaller hot table; the disadvantage is increased application complexity & number of tables.
Option 2: One Table. Use one table for both current & archives, with an enum column marking whether a row is current or old. The advantage is reduced application complexity & number of tables. The disadvantage may(?) be slower lookups & larger table size.
Which would you generally choose/recommend? And why?
Bonus Question: Roughly how much of a performance advantage (if any) would Option 1 likely provide in a scenario like 1 comment to 5 archives, on a table with something like 200 million comments?
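To make Option 2 concrete, here's a rough sketch of what I'm imagining (column names invented, and a boolean standing in for the enum), with a partial index so the 98% current-only lookups skip the archived versions:

CREATE TABLE comments (
    comment_id BIGINT  NOT NULL,
    version    INT     NOT NULL,
    thread_id  BIGINT  NOT NULL,
    author_id  BIGINT  NOT NULL,
    is_current BOOLEAN NOT NULL DEFAULT true,
    body       TEXT    NOT NULL,
    PRIMARY KEY (comment_id, version)
);

-- partial index: only current rows, so the common-case lookup behaves
-- much like a dedicated current-comments table would
CREATE INDEX comments_current_thread_idx
    ON comments (thread_id)
    WHERE is_current;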