Shay Rojansky
@roji
EF6 was really not made for extensibility for other databases... for example, the type system is completely closed
In EF Core I could add all the special PG types
And the fact you can map an array/List property to PG arrays is quite powerful. Then you can do stuff like blog.SomeArrayProperty.Any(p => p > 3), which will translate into the correct PG construct and execute efficiently
With other databases you have to use either a comma-separated string column or a one-to-many relationship...
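A minimal sketch of the mapping described above, assuming the Npgsql EF Core provider (Npgsql.EntityFrameworkCore.PostgreSQL); the Blog entity, the SomeArrayProperty name, and the connection string are placeholders built around the example in the message:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Blog
{
    public int Id { get; set; }
    public int[] SomeArrayProperty { get; set; }   // stored as a PostgreSQL integer[] column
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseNpgsql("Host=localhost;Database=blogs");   // placeholder connection string
}

public static class ArrayQueryDemo
{
    public static void Run()
    {
        using var db = new BlogContext();

        // Per the message above, this predicate is translated into a PostgreSQL
        // array construct and evaluated in the database rather than client-side.
        var blogs = db.Blogs
            .Where(b => b.SomeArrayProperty.Any(p => p > 3))
            .ToList();
    }
}
```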
If you like expression trees and the idea of translating constructs from C# to SQL, then EF Core is really fun to work on
Emill
@Emill
Newer MySQL seems to have a JSON aggregator function like Postgres. Maybe the materialization approach, when only simple data types are involved, can be used there as well. Or maybe MySQL only has simple types and hence always works with JSON parsing?
Shay Rojansky
@roji
I have no idea...
Emill
@Emill
I'm wondering whether the record type handler could be made more flexible when the user knows what kind of data to expect inside it.
right now you more or less get an object[]
for records with no db-defined composite type
I'm not sure what would be most efficient considering EF as the user.
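A small sketch of the current behaviour being discussed, i.e. an anonymous record coming back as object[]; the connection string and the SELECT are placeholders:

```csharp
using Npgsql;

using var conn = new NpgsqlConnection("Host=localhost;Database=test");   // placeholder
conn.Open();

using var cmd = new NpgsqlCommand("SELECT ROW(1, 'hi'::text, 3.14::double precision)", conn);
using var reader = cmd.ExecuteReader();
reader.Read();

// With no db-defined composite type to map to, the record handler hands back
// an object[] with one element per field, each decoded by its own type handler.
var fields = (object[])reader.GetValue(0);
var firstField = (int)fields[0];
```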
Shay Rojansky
@roji
I'm not sure records are needed for array_agg-based materialization - composite types may very well do the trick as well
One (current) issue is that Npgsql requires mapping them in advance to CLR types, but we may be able to do something nice with that
But maybe records could also be the thing
Emill
@Emill
I mean EF should be able to dynamically select the data to return, for example using array_agg(row(tbl.thing_a, tbl.thing_b, 'hi', 3))
that will make PG return a record[]
if you instead write array_agg(tbl) then PG will return a tbl[] (which uses the same I/O format as record[])
but with oid set to the array type for the table
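For reference, a sketch of the two shapes being compared, against a hypothetical table tbl(thing_a, thing_b); connection details are placeholders:

```csharp
using Npgsql;

using var conn = new NpgsqlConnection("Host=localhost;Database=test");   // placeholder
conn.Open();

// Aggregating an anonymous ROW(...): the result column is reported as record[].
using var recordAgg = new NpgsqlCommand(
    "SELECT array_agg(row(tbl.thing_a, tbl.thing_b, 'hi', 3)) FROM tbl", conn);

// Aggregating whole rows: the result column is reported as tbl[], i.e. an array
// of the table's own composite type (same I/O format, different element OID).
using var tableAgg = new NpgsqlCommand(
    "SELECT array_agg(tbl) FROM tbl", conn);
```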
Shay Rojansky
@roji
OK, I think you've already investigated this more than me :)
Emill
@Emill
well, I just did a quick test yesterday :)
Shay Rojansky
@roji
My thinking (for now) was simply to replace one-to-many JOINs with an array_agg that returns an array of the many side - in this case you don't even need records, just a composite type of the "many" table
It's indeed an ideal way to load relational data
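A sketch of what such a query could look like, using hypothetical blogs/posts tables rather than anything generated by the actual provider:

```csharp
using Npgsql;

using var conn = new NpgsqlConnection("Host=localhost;Database=test");   // placeholder
conn.Open();

// Instead of a one-to-many JOIN that repeats the blog columns on every post row,
// the "many" side is folded into a single array of its composite type.
using var cmd = new NpgsqlCommand(@"
    SELECT b.id, b.name, array_agg(p) AS posts
    FROM blogs b
    LEFT JOIN posts p ON p.blog_id = b.id
    GROUP BY b.id, b.name", conn);
```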
Emill
@Emill
If I understand correctly, a composite type is more or less equal to a record, right?
just that in PG you can define a composite type so that the fields have names etc.
Shay Rojansky
@roji
It's a static type - so every table has a composite type that describes a single row
Yeah
The names aren't part of the wire encoding though (that would be inefficient)
So on the wire I think the encoding is the same as a record
Emill
@Emill
I looked at how pgAdmin does it
Shay Rojansky
@roji
But composite type definitions are loaded by Npgsql ahead of time, and so you can map them to CLR types etc.
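A sketch of that ahead-of-time mapping, assuming the Npgsql 4.x global type mapper API; the inventory_item type and InventoryItem class are made-up names:

```csharp
using Npgsql;

// Hypothetical PostgreSQL type:
//   CREATE TYPE inventory_item AS (name text, price numeric);
public class InventoryItem
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class CompositeSetup
{
    // Called once at startup, before connections are opened. Npgsql then decodes
    // inventory_item values straight into InventoryItem; property names are
    // matched to attribute names via the default snake_case name translator.
    public static void Register()
        => NpgsqlConnection.GlobalTypeMapper.MapComposite<InventoryItem>("inventory_item");
}
```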
Emill
@Emill
when a query returns a column with a specific oid as its type, it immediately sends a new query to check what that oid means
Shay Rojansky
@roji
Interesting
Npgsql loads all the type OIDs when it first connects to a database, including all type definitions. So we already know everything when we receive the record
Emill
@Emill
unless the user alters the table in the meantime, or adds a new table...
Shay Rojansky
@roji
This is how we are able to provide the user with an object[] for a record - we know which type each field is in the record, and which type handler can decode it
Yep
Exactly - on type changes it's the user's responsibility to call ReloadTypes
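A sketch of that flow, with a hypothetical enum created mid-session; connection details are placeholders:

```csharp
using Npgsql;

using var conn = new NpgsqlConnection("Host=localhost;Database=test");   // placeholder
conn.Open();

// DDL executed after the connection (and its cached type/OID map) was set up.
using (var ddl = new NpgsqlCommand("CREATE TYPE mood AS ENUM ('happy', 'sad')", conn))
    ddl.ExecuteNonQuery();

// Refresh the connection's type catalog so the new OID is recognized.
conn.ReloadTypes();
```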
Emill
@Emill
anyway for pgAdmin it's not an issue to perform extra queries
Shay Rojansky
@roji
Yeah
Emill
@Emill
but Npgsql must be fast for intense workloads
Shay Rojansky
@roji
Yeah. And PG types change pretty rarely; both types and table schemas are relatively static
Occasionally a user comes along and I tell them to call ReloadTypes, but it's quite rare
Emill
@Emill
the binary format for the record type contains the oid for every element
Shay Rojansky
@roji
Yep, it has to. Otherwise the receiver would just get unidentifiable binary data
Emill
@Emill
the only issue is when the record type itself is of a table oid that is not known, since then we don't know that it should be decoded as a record
Shay Rojansky
@roji
Npgsql usually loads all table types (AKA composites) as well at connection time, so again unless a table is created later it's ok
Emill
@Emill
still I get "The field 'array_agg' has type 'public.tbl2[]', which is currently unknown to Npgsql."
even though I have LoadTableComposites=true in the conn string...
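For reference, a sketch of the configuration being described, expressed through the connection string builder (assuming the corresponding NpgsqlConnectionStringBuilder property; server details are placeholders):

```csharp
using Npgsql;

var csb = new NpgsqlConnectionStringBuilder
{
    Host = "localhost",
    Database = "test",
    Username = "test",
    // Also load every table's implicit composite type (e.g. public.tbl2)
    // when the type catalog is loaded at connection time.
    LoadTableComposites = true
};

using var conn = new NpgsqlConnection(csb.ConnectionString);
conn.Open();
```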
Shay Rojansky
@roji
That's strange...
It should work