Swoorup Joshi
@Swoorup
busy with trading systems. Z:P
Michael Camara Pendon
@mikependon
:smile:
Swoorup Joshi
@Swoorup
hmm, is there a way to support postgres enums that have their enumerations in lower case?
Exception data:
    Severity: ERROR
    SqlState: 22P02
    MessageText: invalid input value for enum location_category: "Suburb"
    File: enum.c
    Line: 133
    Routine: enum_in
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
   at System.Threading.Tasks.Task`1.get_Result()
   at Domain.Pls.FsUtil.Uti
I have defined the enumerations as
type location_category =
  | Suburb = 0
  | Area = 1
  | Region = 2

type rtree_state =
  {
    [<NpgsqlDbType(NpgsqlDbType.Unknown)>]
    location_category: location_category
    [<NpgsqlDbType(NpgsqlDbType.Jsonb)>]
    data: string }
Oh F# allows lowercase enums
let me try that
Swoorup Joshi
@Swoorup
works, nvm
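(For reference: an alternative to lowercasing the F# enum members is Npgsql's enum mapping, which lets the CLR member names differ from the PostgreSQL labels. A sketch assuming Npgsql's global type mapper, which is how enum mapping was registered in Npgsql 6.x and earlier; newer versions move this onto the data-source builder:)

```fsharp
open Npgsql

// Map the CLR enum to the PostgreSQL enum type once at startup.
// Npgsql's default name translator converts member names to
// snake_case labels, so Suburb would map to the "suburb" label;
// a custom INpgsqlNameTranslator can change that behavior.
NpgsqlConnection.GlobalTypeMapper.MapEnum<location_category>("location_category")
|> ignore
```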
Vincent B.
@evilz
Hello all,
What would be your recommendation about testing when using RepoDB?
A real database with integration tests?
Michael Camara Pendon
@mikependon
@evilz - sorry I did not reply to your question right away, as I was waiting for others to answer it. Anyway, in our case, we do use a real database for the integration tests. Even RepoDB itself has its own integration tests against a real database, not an in-memory one, to ensure the tests actually exercise the underlying data store.
Thomas Andre
@zetomatoz

First of all, kudos to you @mikependon! RepoDB is amazing and feels like the sweet spot between ADO.NET/Dapper and EF. I am trying to bring it to production. I have observed a funky, non-blocking behavior with the Map attribute and was wondering if it is expected, before submitting a new issue.

Experimentation

  1. Decorate a property named Value, which is related to a DB column named "value_raw", with a Map("value_raw") attribute.
  2. Decorate a property named SmoothedValue, which is related to a DB column named "value", with a Map("value") attribute.
  3. Use a Query method from BaseRepository to retrieve persisted entities.

Observations

  1. Data in the "value_raw" column is correctly retrieved and assigned to the property Value. The Map("value_raw") attribute behaves as expected.
  2. Data in the "value" column is not assigned to the property SmoothedValue. The Map("value") attribute does NOT behave as expected.
  3. The property SmoothedValue actually matches the property Value.

Interpretation
It looks like the Map("value") attribute does not pick up data from the DB column named "value". Instead, it picks up data from the Value property of the POCO.

Workaround
Rename the property Value to ValueRaw.
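(A minimal sketch of the mapping described above — the class name and property types are hypothetical; the property and column names are taken from the report:)

```csharp
using RepoDb.Attributes;

public class Measurement
{
    [Map("value_raw")]
    public double Value { get; set; }         // reads from the "value_raw" column (works)

    [Map("value")]
    public double SmoothedValue { get; set; } // expected to read from the "value" column,
                                              // but reportedly ends up matching Value
}
```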

Michael Camara Pendon
@mikependon
@zetomatoz - thanks a lot for the kind words, and congrats on making it happen on your side. We likewise use the library in most of our critical operations within the organization.
In relation to your issue, we will simulate it and get back to you as soon as we can. It looks like a simple use case, but I think it is not covered by our integration tests.
LP
@grofit

Hey could do with some advice.

So someone has written a raw SQL block which does lots of stuff, then they run it through ExecuteQueryAsync(theSqlString); it all works fine for them, but I need to come along and add paging to it. I was hoping I could leave all that as-is and just tack .Skip(n).Take(n) onto the end of ExecuteQueryAsync(theSqlString); however, when I do that it blows up, telling me Only dynamic object is supported in the 'where' expression.

So is this scenario doable or should I fudge the limit into the SQL block?

LP
@grofit
oh wait no ignore me it seems to be failing on another bit :D
Michael Camara Pendon
@mikependon
@grofit - mate, since RepoDB does not use deferred execution like EF, Skip().Take() happens on the client in this regard, which is not advisable if you're reading big datasets. A piece of advice: please do the paging in the SQL instead, not in client memory.
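(Paging in the SQL might look like the following — SQL Server syntax; the table and column names are hypothetical, and @Offset/@PageSize would be passed as parameters to ExecuteQueryAsync:)

```sql
SELECT *
FROM [dbo].[Orders]
ORDER BY [Id]                    -- a stable ordering is required for paging
OFFSET @Offset ROWS              -- rows to skip
FETCH NEXT @PageSize ROWS ONLY;  -- page size
```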
LP
@grofit
ok thanks for letting me know chief
oh.... does that apply to EVERYTHING?
i.e even if I do a QueryAsync with a QueryGroup then Skip and Take etc
if so, is there a "better" way to handle paging when you are not using SQL directly but the query fields etc.? I can see there is a top field, which I assume lets me at least specify how many records to return, but I can't see an offset field
LP
@grofit
oh its ok, I think I found it via the BatchQueryAsync
Can anyone tell me if you can provide a null/empty string to OrderField objects? In some cases I won't know the best field to order by, so I would just like to default it, but it seems like for batches I need to provide an order field
oh nope it throws an exception there, will just get the primary key cache to get the mapping for that field I guess then
Michael Camara Pendon
@mikependon
When you do the batch query, the ordering is important as the paging will depend on the ordering. We suggest you utilize an indexed column for ordering the result.
The linq Skip().Take() is happening on the client side on whichever method you use due to its nature of being tightly coupled to an IEnumerable and not IQueryable.
We have a proper SkipQuery() implementation in proposal, but it has not been prioritized as the need for it is not high in this ORM.
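(A sketch of server-side paging with BatchQuery, based on the description above — the entity and field names are hypothetical, and the page index is zero-based:)

```csharp
using RepoDb;
using RepoDb.Enumerations;

// Fetch page 0 with 20 rows per batch, ordered by Id ascending.
// The ordering drives which rows land on which page, so it must be stable.
var orderBy = OrderField.Parse(new { Id = Order.Ascending });
var people = connection.BatchQuery<Person>(
    page: 0,
    rowsPerBatch: 20,
    orderBy: orderBy,
    where: (object)null);
```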
Tarrence E. Davis
@tdavis2010
Hello, I'm wondering how to pull multiple rows from two tables that are related by foreign keys without writing the SQL by hand. For example, let's say I have a Products table (Id, Name, CategoryId) and a ProductCategories table (Id, Name, Description), with CategoryId being a foreign key. How would I get a list of all products with their corresponding category name?
Brandon
@cajuncoding
Firstly, writing your own SQL will always have advantages, so it's not something to be afraid of and will usually be the optimal approach…
But RepoDB won't do this for you, as it is intended to encourage optimal SQL querying by the developers…
However, your second-best option is to use something like SqlKata, which is much more focused on code-based querying and has a pretty elegant mechanism for joining and retrieving data…
Yes, you'll be using two ORMs, but if your app is layered well then it should be abstracted behind the data layer…. And SqlKata doesn't replace RepoDB, because it lacks RepoDB's extremely powerful bulk/batch updating of data!
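(A SqlKata sketch of the kind of code-based join being suggested — the table names follow the example in this thread, while the alias and compiler choice are illustrative:)

```csharp
using SqlKata;
using SqlKata.Compilers;

// Build the join in code rather than as a raw SQL string.
var query = new Query("Product")
    .Join("ProductCategories", "Product.CategoryId", "ProductCategories.Id")
    .Select("Product.Id", "Product.Name", "ProductCategories.Name as CategoryName");

// Compile to a SQL string + bindings for the target dialect.
var compiled = new SqlServerCompiler().Compile(query);
// compiled.Sql can then be handed to ExecuteQuery, with compiled.NamedBindings as parameters.
```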
Tarrence E. Davis
@tdavis2010
I agree that it's important to be able to write optimized queries yourself, but I also think there are some simple joins that don't have many opportunities for optimization.
I was looking for something like: var products = connection.QueryAll<Product>().Join<ProductCategory>("CategoryId") versus
var products = connection.ExecuteQuery<Product>("SELECT p.*, pc.Name FROM [dbo].[Product] p INNER JOIN [dbo].[ProductCategories] pc ON p.CategoryId = pc.Id;")
Brandon
@cajuncoding
Well, it may seem there's no room for "performance optimization", but there are other ways to optimize… for example, if your table structure changes then you'd have to change more code vs just updating a query or view to map the code-compatible models… too much coupling between your code and the queries can be detrimental long term.
Brandon
@cajuncoding
I personally like to create SQL views and abstraction via repository/service methods for data, so then we have a lot more flexibility to optimize the data relationships, structure, etc. within our schema with less ripple impact into the code…
Brandon
@cajuncoding
But sure, you could still encapsulate the code joins and get similar benefits if that's your cup of tea :-)…
check out SqlKata for that kind of functionality… nothing wrong with multiple ORMs for different purposes… they are both lightweight, and if encapsulated well (repository/service: persistence layer) then no calling code will have any dependencies…
Vincent B.
@evilz
Hello,
Small question
Do array parameters in PG work with functions like ALL or ANY?
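(For context, the PostgreSQL pattern in question looks like this — the table and parameter names are hypothetical, with @categories passed as an array parameter:)

```sql
-- Matches rows whose category equals any element of the array parameter
SELECT *
FROM place
WHERE category = ANY(@categories);
```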
Swoorup Joshi
@Swoorup
Anyone having issues using an array parameter in a parameter object when using ExecuteNonQueryAsync?

I have the schema

create table draft_item
(
    topic     text        not null,
    id        text        not null,
    timestamp timestamptz not null,
    sequence  bigint      not null,
    payload   jsonb       not null,
    errors    text[]      not null,
    constraint pk_draft_items primary key (id, topic)
);

And I am trying to update the errors column using the following.

    let sql =
      """
        update draft_item
        set errors = @errors
        where topic = @topic and id = @id
      """

    task {
      use! conn = connBuilder ()
      let param =
        {| topic = topic
           id = id
           errors = errors |}

      let! _ = conn.ExecuteNonQueryAsync(sql, param = param)
      return ()
    }

However I get column "errors" is of type text[] but expression is of type text instead.

Swoorup Joshi
@Swoorup
@mikependon any ideas?
Michael Camara Pendon
@mikependon
@Swoorup is it an array of string you passed on the 'errors' parameter?
Swoorup Joshi
@Swoorup
yep

@Swoorup is it an array of string you passed on the 'errors' parameter?

full method

  let updateErrors (connBuilder: DbConnBuilder) (Topic topic) (MsgId id) (errors: string list) =
    let errors = Array.ofList errors
    let sql =
      """
        update draft_item
        set errors = @errors
        where topic = @topic and id = @id
      """

    task {
      use! conn = connBuilder ()
      let param =
        {| topic = topic
           id = id
           errors = errors |}

      let! _ = conn.ExecuteNonQueryAsync(sql, param = param)
      return ()
    }
Michael Camara Pendon
@mikependon
@Swoorup it sounds like an issue in the raw execution, and there is no way for us to explicitly set which type it is on the underlying data store. My initial thought is that it should automatically be mapped to the proper type. Anyway, since this issue occurs, we need time to examine it.
Swoorup Joshi
@Swoorup
hmm
Michael Camara Pendon
@mikependon
AFAIK, Npgsql does proper mapping of arrays automatically, but I need to check in which area RepoDB does an additional conversion (or type conversion) that made the parameter values fail during the execution.
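(Until that is examined, one possible workaround is to bypass parameter-type inference and build an explicitly typed Npgsql parameter yourself — a plain Npgsql sketch, not RepoDB-specific; the parameter name and values are illustrative:)

```fsharp
open Npgsql
open NpgsqlTypes

// Declare the parameter explicitly as text[] so the provider
// does not have to infer the array type from the CLR value.
let errorsParam =
    NpgsqlParameter(
        "errors",
        NpgsqlDbType.Array ||| NpgsqlDbType.Text,
        Value = box [| "first error"; "second error" |])
```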