    PREPARE result_have AS select privilege_type from information_schema.role_table_grants where table_name='address_type' and grantee='rm_rw_role' LIMIT 1;
    SELECT results_eq('result_have', $$VALUES ('INSERT'::varchar(255))$$, 'Test');
    I get 'Number of columns or their types differ between the queries'. It is a pgTAP test.
    Flummoxed really.
    There is just one row returned.
    This passes.
    PREPARE result_have AS select LOWER(privilege_type) from information_schema.role_table_grants where table_name='address_type' and grantee='rm_rw_role' LIMIT 1;
    PREPARE result_expected AS VALUES ('insert');
    SELECT results_eq( 'result_have', 'result_expected', 'Test');
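    That error is usually a column *type* mismatch rather than a row-count problem: `privilege_type` in `information_schema.role_table_grants` is not `varchar(255)`, so the literal's type differs from the query's. (That also explains why the `LOWER()` version passes: `LOWER()` yields `text` on both sides.) One possible fix is to cast both sides to a common type such as `text`:

    ```sql
    PREPARE result_have AS
      SELECT privilege_type::text
      FROM information_schema.role_table_grants
      WHERE table_name = 'address_type' AND grantee = 'rm_rw_role'
      LIMIT 1;

    SELECT results_eq('result_have', $$VALUES ('INSERT'::text)$$, 'Test');
    ```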

    hi postgres pros, I'm trying to figure out the right architecture for a Saved Search notification system.

    Essentially, we have thousands of users whom we want to be able to create a Saved Search (or Watch a Search) and get a notification when there are new records for that particular query.

    I've read about the Listen/Notify feature in PostgreSQL, but I'm not sure if this is going to be efficient for us given the scale.

    An alternative is to use Elasticsearch, but then we have to add that to our stack. Any ideas whether PostgreSQL can handle this at scale? Any drawbacks you can identify?

    Dave Cramer
    use logical decoding
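    A minimal sketch of that suggestion, using the built-in test_decoding plugin (the slot name and polling approach are illustrative); a background worker would read the slot and match incoming changes against the saved searches:

    ```sql
    -- Create a logical replication slot; requires wal_level = logical.
    SELECT pg_create_logical_replication_slot('saved_search_feed', 'test_decoding');

    -- A worker polls for new changes, consuming them from the slot as it goes:
    SELECT lsn, xid, data
    FROM pg_logical_slot_get_changes('saved_search_feed', NULL, NULL);
    ```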
    hi guys, if I have a sales table that stores sales data and could grow by 1.5 million rows each month, are there any options for a faster query method? The only option I see is creating a materialized view (MV) and indexing the necessary MV columns, but that could really take a lot of storage space
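    If the slow queries are aggregations, the materialized-view approach could look like this (column names hypothetical); REFRESH ... CONCURRENTLY keeps the view readable during the refresh, but requires a unique index:

    ```sql
    CREATE MATERIALIZED VIEW monthly_sales AS
    SELECT date_trunc('month', sold_at) AS month,
           product_id,
           sum(amount) AS total
    FROM sales
    GROUP BY 1, 2;

    CREATE UNIQUE INDEX ON monthly_sales (month, product_id);

    -- Refresh without blocking readers (needs the unique index above):
    REFRESH MATERIALIZED VIEW CONCURRENTLY monthly_sales;
    ```

    The aggregated view is typically far smaller than the base table, so the storage cost may be modest compared to indexing the raw rows.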

    Hi All, it's my first time here in the group.
    I have a question about replicating data into two different tables.

    Consider this example:

    1. I am getting some BookingData from one of the APIs as JSON => I am saving the data in Table1 as
      Id - PK bigint
      UniqueID - unique string value
      data - jsonb
      This data I need to send to other parties; I can simply dump the JSON and send it to them.

    2. Also, once that step is completed, this data needs to be filterable from my UI application, which means I need to use normalized tables:
      Table2, Table3, Table4 with PK-FK relations.

    The question: is it okay to write the data into different places in two different formats, so that my read path (UI/API) can easily offer complete filters?
    Or do I need to live with a single table, normalize all the data, and when sending, group it back together and send it to others?

    Tell me which option is good.
    For me, writing to two places, so that the UI/others can rely on the data.
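    One way to keep both representations consistent without two application-level write paths is to derive the normalized rows from the jsonb inside the same transaction, e.g. with a trigger (table, column, and key names here are hypothetical):

    ```sql
    CREATE OR REPLACE FUNCTION normalize_booking() RETURNS trigger AS $$
    BEGIN
        -- Extract the fields the UI filters on; 'customerName' is illustrative.
        INSERT INTO table2 (booking_id, customer_name)
        VALUES (NEW.id, NEW.data->>'customerName');
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER booking_normalize
        AFTER INSERT ON table1
        FOR EACH ROW EXECUTE PROCEDURE normalize_booking();
    ```

    That way either both writes commit or neither does, which removes the main risk of dual-writing.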

    Thiago Milczarek Sayao
    Looking at https://www.postgresql.org/docs/12/sql-createsequence.html it seems that sequences are not 100% guaranteed to be sequential across many transactions
    any best practice suggestion how this can be done?
    I don't want to generate 3 before 2, for example
    it must always be in order
    For example, if transaction #1 starts and gets seq 1, then transaction #2 starts and gets seq 2 but finishes first, it will write seq 2 before seq 1
    this is not desired
    Ben Johnson

    @tsayao Per that page:

    Thus, with a cache setting of one it is safe to assume that nextval values are generated sequentially; with a cache setting greater than one you should only assume that the nextval values are all distinct, not that they are generated purely sequentially.

    So, use a cache setting equal to 1.

    They still won't necessarily be gapless, but they should be sequential.
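    In DDL form (sequence name illustrative; CACHE 1 is in fact the default):

    ```sql
    CREATE SEQUENCE invoice_seq CACHE 1;
    SELECT nextval('invoice_seq');
    ```

    Note this makes nextval *assignment* sequential; a transaction holding a later value can still commit before one holding an earlier value.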
    Thiago Milczarek Sayao
    @cbj4074 I don't mind gaps, but in my scenario sequence 2 (for example) must never be written before 1, unless 1 is never written.
    Ben Johnson
    I think the above will satisfy your requirements, then.
    As for what implications that will have on performance, I do not know.
    Thiago Milczarek Sayao
    Can I use the default scram-sha-256 for authentication on pg13 but have a legacy user with md5?
    or do I have to change everything back to md5?
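    A mixed setup is possible because the md5 auth method in pg_hba.conf also accepts users whose stored password is in SCRAM form, while the scram-sha-256 method rejects MD5-stored passwords. A sketch (addresses and user names illustrative):

    ```
    # pg_hba.conf -- first matching line wins:
    host  all  legacy_user  0.0.0.0/0  md5
    host  all  all          0.0.0.0/0  scram-sha-256
    ```

    With password_encryption = 'scram-sha-256', newly set passwords use SCRAM, so legacy users migrate off md5 whenever they next change their password.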
    defi jesus
    Hey everyone. What modern migration tooling do y'all recommend for postgres these days?
    I'm very interested in how people solve data transformation today
    Some things are not possible (or very hard) to do in pure SQL, so I'm looking for a solution that allows me to drop down to a programming language when needed
    Hello everybody. I've got a problem with a generated column. On insert, it works. But when I update a row, the generated column is not recomputed. How can I make it execute again?
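    For reference, in stock PostgreSQL (12+) a STORED generated column is recomputed on every UPDATE of its row, so if yours is not, something else (a view, rule, or trigger, perhaps) may be interfering. A minimal check:

    ```sql
    CREATE TABLE gen_test (
        a       int,
        doubled int GENERATED ALWAYS AS (a * 2) STORED
    );
    INSERT INTO gen_test (a) VALUES (1);  -- doubled = 2
    UPDATE gen_test SET a = 5;            -- doubled is recomputed to 10
    ```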
    Drew Boardman

    So I have the following problem:

    • I have a main table and a table_2 that holds additional info about records in main
    • When main is updated in any way, I want to update the changed timestamp on that table
    • When table_2 is updated, I want to update both table_2 and main timestamp

    I have the trigger written for changing 1 table, but I can't figure out how to accomplish my solution above

    CREATE TABLE IF NOT EXISTS table_1 (
      id UUID NOT NULL,
      data TEXT NULL,
      changed TIMESTAMP NOT NULL DEFAULT current_timestamp
    );
    CREATE TABLE IF NOT EXISTS table_2 (
      id UUID NOT NULL,
      other_data TEXT NULL,
      changed TIMESTAMP NOT NULL DEFAULT current_timestamp
    );
    -- update single table
    CREATE OR REPLACE FUNCTION update_single_table() RETURNS trigger AS $$
    BEGIN
        NEW.changed = now();  -- the column is named "changed", not "changed_timestamp"
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
    -- table 1 trigger
    CREATE TRIGGER update_table_1_trigger
        BEFORE UPDATE ON table_1
        FOR EACH ROW
    EXECUTE PROCEDURE update_single_table();
    -- this needs to change to update BOTH tables.
    CREATE TRIGGER update_table_2_trigger
        BEFORE UPDATE ON table_2
        FOR EACH ROW
    EXECUTE PROCEDURE update_single_table();
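    One way to make the second trigger touch both tables is a separate function that stamps NEW and also updates the matching table_1 row (assuming table_2.id references table_1.id; table_1's own trigger firing on the inner UPDATE is harmless here):

    ```sql
    CREATE OR REPLACE FUNCTION update_both_tables() RETURNS trigger AS $$
    BEGIN
        NEW.changed = now();
        -- Also bump the timestamp on the related table_1 row.
        UPDATE table_1 SET changed = now() WHERE id = NEW.id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    DROP TRIGGER IF EXISTS update_table_2_trigger ON table_2;
    CREATE TRIGGER update_table_2_trigger
        BEFORE UPDATE ON table_2
        FOR EACH ROW
    EXECUTE PROCEDURE update_both_tables();
    ```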
    Teofana Enachioiu

    Hello everyone! :) I have some problems with PostgreSQL instance level encryption. So, I am trying to encrypt "at rest" a postgresql database and I have followed the steps from this source:

    But when I want to create the encrypted instance using the command:
    initdb -K /data/my_pwd_command.sh -D data_encrypted
    it says the option -K is illegal, so I cannot pass the encryption key... Could anyone explain why I don't have this option? Or could you suggest another way to encrypt the DB at rest? Thank you :)

    hello everyone, please help me with a PostgreSQL architectural health check. If you have any documentation on best practices, please share it with me, thank you
    Can anyone point to a practical example of CREATE DOMAIN?
    Something based on a real use case for a reusable DOMAIN constraint.
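    A common real-world case is validating the same value shape in many tables, e.g. an email column (the regex here is deliberately simplistic):

    ```sql
    CREATE DOMAIN email AS text
        CHECK (VALUE ~* '^[^@\s]+@[^@\s]+\.[^@\s]+$');

    CREATE TABLE customers (
        id    bigint GENERATED ALWAYS AS IDENTITY,
        email email NOT NULL
    );

    -- Violating rows are rejected by the domain's CHECK:
    INSERT INTO customers (email) VALUES ('not-an-address');  -- fails
    ```

    Changing the rule in one place (ALTER DOMAIN) then applies everywhere the domain is used.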
    Riley Kennedy
    Hi Everyone! Does anyone here use node-pg-migrate or know where I could get help?
    Hi, I need help migrating PostgreSQL from a Windows to a Linux machine, any tips?
    Hello guys, I started a blog series helping people easily understand indexing - http://hello-worlds.in/2021/05/03/understanding-postgresql-indexes-part-1/ - and also when Postgres chooses one index over another - http://hello-worlds.in/2021/05/28/why-postgresql-index-not-working-bitmap-vs-index-part-2/ -- any input will help me improve them. Thanks :)
    Chris Ochs
    Key/value store where the value is a bytea, with an integer primary key, and another integer key that most of our data is partitioned on (region id). Looking at ways to make sure performance will scale OK. It's a vertically scaled game server, so we might have upwards of 100k or so regions. Every region is pinned to a physical server that has exclusive write access. Almost all reads happen from the same server too. So our pattern is: a server loads all the region data into memory once, then queries from memory; updates go into an in-memory hashmap and also into a write-behind cache that eventually updates the db.
    Chris Ochs
    The constant updates come from entity position updates. So even buffered, and only writing the most recent value occasionally, I don't think it's scalable using normal means; it's going to hit 50k+ updates per second easily. So the idea I had was that the app writes the most recent state to disk periodically - writes out an array in its native memory format directly to disk - and then another process can update the db from that very occasionally. Anyone have any other ideas on this?
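    Before dropping to raw files, it may be worth measuring batched writes: collecting dirty entries in memory and flushing each batch in one statement (table name hypothetical, assuming a primary key on id) often raises the ceiling considerably, and COPY into a staging table is another step up:

    ```sql
    -- $1 = int[] of ids, $2 = bytea[] of values, bound as two array parameters:
    INSERT INTO region_kv (id, value)
    SELECT * FROM unnest($1::int[], $2::bytea[]) AS t(id, value)
    ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value;
    ```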
    Lucas Bersier

    Given a parameter with an in statement like

    select * from my_table
    where some_value in :id

    is there a thing I can pass into id to make it ignore the in statement and return all values, like all() or something?
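    If the driver can bind :id as a (possibly NULL) array, one common pattern is to make NULL mean "no filter" (parameter style depends on your client library):

    ```sql
    SELECT *
    FROM my_table
    WHERE :id IS NULL
       OR some_value = ANY(:id);
    ```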

    Atul Kamat

    I have 2 tables in PostgreSQL:-

    1. "student" table
      student_id name   score
      1         Adam     10
      2         Brian    9
    2. "student_log" table:-
      log_id student_id score
      1         1       10
      2         2       9

    I have a python script which fetches a DataFrame with columns - "name" and "score" - and then populates the student table with it.

    I want to update the student and student_log table whenever the "score" changes for a student. Also, if there is a new student name in the dataframe, I want to add another row for it in the student table as well as maintain its record in the "student_log" table. Can anyone suggest how it can be done?

    Let us consider the new fetched DataFrame looks like this:-

    name score
    Adam  7
    Lee   5

    Then the Expected Result is:-

    "student" table

    student_id name   score
    1         Adam     7
    2         Brian    9
    3         Lee      5

    "student_log" table:-

    log_id student_id score
    1         1       10
    2         2       9
    3         1       7    
    4         3       5
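    One way to get exactly that result in SQL, assuming student.name has a UNIQUE constraint (the :name/:score placeholders are bound from the Python side, one pair per DataFrame row):

    ```sql
    WITH upserted AS (
        INSERT INTO student (name, score)
        VALUES (:name, :score)
        ON CONFLICT (name) DO UPDATE
            SET score = EXCLUDED.score
            WHERE student.score IS DISTINCT FROM EXCLUDED.score
        RETURNING student_id, score
    )
    INSERT INTO student_log (student_id, score)
    SELECT student_id, score FROM upserted;
    ```

    Unchanged students produce no row from the CTE, so no duplicate log entries are written.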
    Dear all,
    I am using PostgreSQL 13 with PHP code that needs to execute a complex script. The problem is that sometimes a query returns no result. I checked the pgAdmin4 dashboard tab 'Transactions Per Second'; it happens when there are many concurrent visitors. I don't know what is happening to PostgreSQL. My server has 256 GB RAM and 56 CPUs.
    Running top -c shows CPU at only 3%.
    Please recommend any settings to accept large traffic.
    Henri Cook

    Hi all, has anyone got any idea why this join is a Seq Scan on sub_accounts? There's an index on account_id in the sub_accounts table, and an index on "user_account_id_idx" btree ((((document -> 'role'::text) ->> 'accountId'::text)::uuid)) in the users table

    Pastebin of EXPLAIN ANALYZE: https://pastebin.com/DDm6i0YV
    Pastebin of query: https://pastebin.com/2frhzTqk

    sub_accounts schema/idx/FK - https://pastebin.com/DikPwRLs
    Henri Cook
    If I rewrite it using a subquery to select the accountId from users, it drastically improves its profile (by ~0.05ms) and the EXPLAIN ANALYZE says it's all indexes - but I can't figure out why my original way, above, behaved the way it did

    Hello! I am looking into building a change data capture pipeline for our monolith Postgres database, using debezium. I am currently trying to estimate the impact of using debezium on our primary postgres instance. Specifically, I'm interested in:

    1. the impact of changing the wal_level from hot_standby to logical on the size of the wal logs
    2. The impact of reading the wal files and logical decoding using pgoutput, on the postgres instance itself.

    Are there any benchmark tests that exist, or relevant reading someone can point me to? Thanks in advance! :pray:


    Hey all, I'm referring to the 9.6 ALTER TABLE documentation around SET DATA TYPE. I had a few questions about the following paragraph:

    This form changes the type of a column of a table. 
    Indexes and simple table constraints involving the column will be automatically converted to use the new column type by reparsing the originally supplied expression. 
    The optional COLLATE clause specifies a collation for the new column; if omitted, the collation is the default for the new column type. 
    The optional USING clause specifies how to compute the new column value from the old; if omitted, the default conversion is the same as an assignment cast from old data type to new.
     A USING clause must be provided if there is no implicit or assignment cast from old to new type.

    1) What are "simple table constraints"? What would qualify as a non-simple constraint?
    2) What does "reparsing the originally supplied expression" mean? Does that refer to the original index/constraint expression, except with the new data type subbed in?
    3) For the type of conversion I'm attempting to execute (NUMERIC to INTEGER), there's an assignment cast available. What are the performance characteristics of attempting that ALTER TABLE call, assuming the column is involved in one or more UNIQUE constraints?

    Thanks for any help and insight.
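    For question 3, the general shape of the statement (names hypothetical) is:

    ```sql
    -- The USING clause is optional here because an assignment cast exists;
    -- spelling it out is equivalent:
    ALTER TABLE my_table
        ALTER COLUMN amount TYPE integer USING amount::integer;
    ```

    This form generally rewrites the table and rebuilds the indexes involving the column, including those backing UNIQUE constraints, under an ACCESS EXCLUSIVE lock, so expect time roughly proportional to table plus index size.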


    Context: Let's say I have a system where users can edit something like a comment, but I want to keep an archive of all previous versions of comments. 98% of the time, only the current comment/post/etc. is relevant. I expect the site-wide ratio of current-to-archive to average 1-to-1, but it could be up to 1-to-5. Comment/etc. rows would usually be looked up by indexed columns (authorId, commentId, threadId, etc.). Nothing like a keyword search will be done on these tables. Depending on which option I choose (below), it's likely there will be many tables following the chosen pattern.

    • Option 1: Two Tables. Archive old posts in a second nearly identical table. This increases complexity, due to more tables, DAOs, migrations, etc but may(?) result in faster lookups.
    • Option 2: One Table. Use one table for both current & archives, with an enum-column whether it's current/old. The advantage is reduced application complexity & number of tables. Disadvantage may(?) be slower lookups & larger table-size.

    Which would you generally choose/recommend? And why?

    Bonus Question: Roughly, how much of a performance advantage (if any) would Option-1 likely provide, in a scenario like 1-comment, to 5-archives, on a table with something like 200-million comments?
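    For Option 2, a partial index can recover much of Option 1's lookup advantage, since current rows get their own small, hot index (schema sketch, column names illustrative):

    ```sql
    CREATE TABLE comments (
        comment_id bigint PRIMARY KEY,
        thread_id  bigint,
        author_id  bigint,
        body       text,
        status     text NOT NULL CHECK (status IN ('current', 'archived'))
    );

    -- Only 'current' rows are indexed, so the index stays small
    -- even at a 1-to-5 current-to-archive ratio:
    CREATE INDEX comments_current_thread_idx
        ON comments (thread_id)
        WHERE status = 'current';
    ```

    Queries must repeat the WHERE status = 'current' predicate for the planner to consider the partial index.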

    sal fn
    This grants the read/write role permissions on the tables user1 has created. But is it also effective for the new tables user1 will create in the future?
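    For context: a plain GRANT ... ON ALL TABLES IN SCHEMA only covers tables that exist at that moment; future tables need default privileges as well (role and schema names illustrative):

    ```sql
    -- Existing tables:
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO readwrite;

    -- Tables user1 creates from now on:
    ALTER DEFAULT PRIVILEGES FOR ROLE user1 IN SCHEMA public
        GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite;
    ```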