    Shao Chenyang
    @tzsword
    glad to help you ~~~~
    Shao Chenyang
    @tzsword

    hello everyone,

    when I run diesel migration run in workspace_a,
    diesel adds the contents of every workspace's migrations to workspace_a/src/schema.rs.
    How do I avoid this, so that workspace_a/src/schema.rs only contains the content of workspace_a/migrations? (See the sketch after the tree below.)

    project
    │   migrations    --- common global database schemas
    │   diesel.toml   --- [print_schema] file = "src/schema.rs"
    │
    ├───src
    │   │   schema.rs
    │   │   ...
    │
    ├───workspace_a
    │   │   migrations    --- workspace_a database schemas
    │   │   diesel.toml   --- [print_schema] file = "src/schema.rs"
    │   └───src
    │       │   schema.rs
    │       │   ...
    │
    └───workspace_b
        │   migrations    --- workspace_b database schemas
        │   diesel.toml   --- [print_schema] file = "src/schema.rs"
        └───src
            │   schema.rs
            │   ...
    please forgive my kindergarten Chingenglish haha~~~
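    A sketch of one possible fix, not verified against this exact layout: if all the crates share one database, diesel print_schema dumps every table in that database regardless of which migrations directory was used, so each crate's diesel.toml probably needs a filter. Assuming diesel CLI 1.4's [print_schema] filter keys and hypothetical table names:

    # workspace_a/diesel.toml -- table names are placeholders
    [print_schema]
    file = "src/schema.rs"
    # only regenerate the tables that belong to workspace_a
    filter = { only_tables = ["table_a1", "table_a2"] }

    The --migration-dir flag (or the MIGRATION_DIRECTORY environment variable) can additionally pin diesel migration run to a specific crate's migrations directory if needed.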
    Ibraheem Ahmed
    @ibraheemdev
    Is this query possible with the dsl? select sum(t.total) from (select rate * time_spent as total from tasks) as t
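    Not an authoritative answer, but for this particular query the subquery can be folded away (sum(rate * time_spent) over tasks is equivalent), and that expression can be written as a raw fragment with diesel::dsl::sql. A sketch assuming Diesel 1.4, Postgres, and integer columns guessed from the question:

    #[macro_use]
    extern crate diesel;

    use diesel::dsl::sql;
    use diesel::pg::PgConnection;
    use diesel::prelude::*;
    use diesel::sql_types::{BigInt, Nullable};

    // Assumed schema for illustration; the real column types may differ.
    table! {
        tasks (id) {
            id -> Int4,
            rate -> Int4,
            time_spent -> Int4,
        }
    }

    fn total(conn: &PgConnection) -> QueryResult<Option<i64>> {
        // sum() over an empty table is NULL, hence Nullable / Option.
        tasks::table
            .select(sql::<Nullable<BigInt>>("sum(rate * time_spent)"))
            .first(conn)
    }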
    Ashwani44
    @Ashwani44
    Does anybody have any idea about this error:
    ^^^^^^^^^^^^^^^^ `std::result::Result<_, diesel::result::Error>` cannot be formatted with the default formatter
    here is my code:
     let total_department = department.count().get_result(&connection);
        println!("{}", total_department);
    Shao Chenyang
    @tzsword
    @Ashwani44 try this:
    let total_department: i64 = department
        .count()
        .get_result(&conn)
        .expect("Error counting cute kittens");
    println!("{}", total_department);
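    For context: count().get_result(...) returns a QueryResult (a Result), which has no Display impl, hence the formatter error; the type annotation plus expect above unwraps it. A sketch of the same fix that propagates the error instead, assuming a Postgres connection and the department table from the snippet:

    use diesel::pg::PgConnection;
    use diesel::prelude::*;

    // Assumes the generated schema module, e.g. crate::schema::department.
    use crate::schema::department::dsl::department;

    fn count_departments(connection: &PgConnection) -> QueryResult<i64> {
        // Annotate the type and use `?` so the Result is handled, not printed.
        let total_department: i64 = department.count().get_result(connection)?;
        println!("{}", total_department);
        Ok(total_department)
    }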
    Francis Le Roy
    @GrandChaman_gitlab

    Hi! I've got a problem applying pending migrations from the executable directly. The first time the migrations are applied, everything is fine and the table __diesel_schema_migrations is created in the public schema.

    All my tables are in a separate schema; when I re-run the migrations, it fails and creates a new __diesel_schema_migrations table in that schema.

    I am not using -csearch_path in the connection string.

    How can I force Diesel to use the public schema for its __diesel_schema_migrations table?

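    The replies are not captured here, but one thing worth trying (an assumption, not a confirmed fix): diesel_migrations creates __diesel_schema_migrations without a schema qualifier, so it lands in the first schema of the current search_path. Since Diesel hands the connection string straight to libpq (see @weiznich's note further down), the search_path can be pinned there, with public first. Credentials and schema names below are placeholders:

    use diesel::pg::PgConnection;
    use diesel::prelude::*;

    fn connect() -> PgConnection {
        // `options=-c search_path=public,myschema` is a plain libpq parameter;
        // %20 encodes the space, %3D the `=`. With public first, the
        // unqualified __diesel_schema_migrations table stays in public.
        let url = "postgres://user:pass@localhost/mydb\
                   ?options=-c%20search_path%3Dpublic,myschema";
        PgConnection::establish(url).expect("failed to connect")
    }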
    Jari Pennanen
    @Ciantic
    Why can't I use chrono's DateTime<Utc> with Sqlite datetime? Conceptually the conversion should not differ from NaiveDateTime. It just seems like Diesel really wants me to use NaiveDateTime even though the datetimes in the database are in UTC.
    Now I wonder if I can implement the ToSql and FromSql traits myself for Timestamp <-> DateTime<Utc>.
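    A sketch of that direction, assuming Diesel 1.4's serialize/deserialize traits, the chrono feature, and that the stored timestamps really are UTC; UtcDateTime is a hypothetical newtype, not part of Diesel or chrono:

    #[macro_use]
    extern crate diesel;

    use chrono::{DateTime, NaiveDateTime, Utc};
    use diesel::backend::Backend;
    use diesel::deserialize::{self, FromSql};
    use diesel::serialize::{self, Output, ToSql};
    use diesel::sql_types::Timestamp;
    use diesel::sqlite::Sqlite;
    use std::io::Write;

    #[derive(Debug, Clone, AsExpression, FromSqlRow)]
    #[sql_type = "Timestamp"]
    pub struct UtcDateTime(pub DateTime<Utc>);

    impl ToSql<Timestamp, Sqlite> for UtcDateTime {
        fn to_sql<W: Write>(&self, out: &mut Output<W, Sqlite>) -> serialize::Result {
            // Delegate to the existing NaiveDateTime impl (chrono feature),
            // dropping the timezone because the value is already UTC.
            <NaiveDateTime as ToSql<Timestamp, Sqlite>>::to_sql(&self.0.naive_utc(), out)
        }
    }

    impl FromSql<Timestamp, Sqlite> for UtcDateTime {
        fn from_sql(bytes: Option<&<Sqlite as Backend>::RawValue>) -> deserialize::Result<Self> {
            // Read a NaiveDateTime, then re-attach the UTC timezone.
            let naive = <NaiveDateTime as FromSql<Timestamp, Sqlite>>::from_sql(bytes)?;
            Ok(UtcDateTime(DateTime::from_utc(naive, Utc)))
        }
    }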
    Andrew Wheeler(Genusis)
    @genusistimelord
    is there a way to update an index of an Array?
    cstats -> Array<SmallInt>; like I want to target index 2 and update it to 12, for example.
    Andrew Wheeler(Genusis)
    @genusistimelord
    I managed to get it to work via sql_query, but I was wondering if there is another way?
    Georg Semmler
    @weiznich
    @genusistimelord That's not something that is currently supported by the built-in dsl, but it should be easily possible to define such operators via diesel_infix_operator! (and/or _postfix/_prefix).
    For that specific case we would likely also accept a PR adding those operators to our dsl. See this PR (diesel-rs/diesel#2566) for an example.
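    For reference, a sketch of the raw-SQL route mentioned above (not the dsl-operator approach); the players table, the column names and the fixed index are assumptions based on the question, and Postgres array indices are 1-based:

    use diesel::pg::PgConnection;
    use diesel::prelude::*;
    use diesel::sql_query;
    use diesel::sql_types::{Integer, SmallInt};

    // Overwrite one element of the cstats array column for a single row.
    fn set_stat(conn: &PgConnection, row_id: i32, new_value: i16) -> QueryResult<usize> {
        sql_query("UPDATE players SET cstats[2] = $1 WHERE id = $2")
            .bind::<SmallInt, _>(new_value)
            .bind::<Integer, _>(row_id)
            .execute(conn)
    }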
    David Harrington
    @harringtondavid_twitter
    Error: sslmode value "require" invalid when SSL support is not compiled in. Connecting to Postgres 12 on Google Cloud. Any suggestions?
    James Kerr
    @disconsented
    Hey @weiznich, sorry to bother you, but I am working on tests for PR #1846. Looking around at the rest of the tests inside the workspace, they seem focused (for lack of a better term). Do you have any objections to me breaking the mould and setting up an in-memory sqlite database to run tests against?
    Georg Semmler
    @weiznich
    @harringtondavid_twitter Diesel accepts any valid libpq connection URL. See the libpq documentation (Section 33.1.1) for details. I would guess there is an example somewhere in the Google Cloud documentation of how to connect via libpq.
    @disconsented As already mentioned there: I think a much better solution would be to make the ALL_MIGRATIONS constant part of the public API and move such functionality to diesel_migrations itself.
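    For illustration, the in-memory setup @disconsented describes usually looks like the sketch below (using diesel_migrations' embed_migrations!, which bakes the crate's migrations directory into the binary at compile time); this is separate from the ALL_MIGRATIONS suggestion above:

    #[macro_use]
    extern crate diesel_migrations;

    use diesel::prelude::*;
    use diesel::sqlite::SqliteConnection;

    // Embeds the migrations found in the crate's migrations directory.
    embed_migrations!();

    fn test_connection() -> SqliteConnection {
        let conn = SqliteConnection::establish(":memory:")
            .expect("failed to open in-memory sqlite database");
        embedded_migrations::run(&conn).expect("failed to run embedded migrations");
        conn
    }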
    Erlend Langseth
    @Ploppz
    why can I not select non-aggregate and aggregate columns in the same select?
    ^^^^ the trait `NonAggregate` is not implemented for `aggregate_ordering::max::max<diesel::sql_types::Integer, step_idx>`
    Full query:
                m::measurement
                    .inner_join(p::plate.on(m::plate_id.eq(p::id)))
                    .inner_join(e::experiment.on(e::id.eq(p::experiment_id)))
                    .inner_join(pr::project.on(pr::id.eq(e::project_id)))
                    .inner_join(pj::pipeline_job.on(pj::measurement_id.eq(m::id)))
                    .inner_join(wj::worker_job.on(wj::id.eq(pj::job_id)))
                    .filter(wj::status.eq("Done"))
                    .select((
                        pr::default_pipeline_id,
                        e::default_pipeline_id,
                        p::default_pipeline_id,
                        max(pj::step_idx)
                    ))
                    .load::<(Option<i64>, Option<i64>, Option<i64>, i32)>(&conn)
    Georg Semmler
    @weiznich
    That's just something that is not supported on 1.4.5 yet.
    Erlend Langseth
    @Ploppz
    When will it be supported? And any ideas how I might get around this for now?
    Georg Semmler
    @weiznich
    Also I think that query is missing a group by clause to do what you expect.
    Erlend Langseth
    @Ploppz
    hmm yeah you're right, need to group by m::id
    Georg Semmler
    @weiznich
    About "When will something be supported": we generally do not give estimates for when specific features will be implemented.
    (Group by is the next thing on the list; it is not supported in any release yet, which is also the underlying reason why mixing aggregate and non-aggregate expressions is not supported.)
    Erlend Langseth
    @Ploppz
    Ah, ok
    Georg Semmler
    @weiznich
    Corresponding issues: diesel-rs/diesel#210 and #3
    Erlend Langseth
    @Ploppz
    Thanks.
    Is it within the scope of Diesel to support https://www.postgresqltutorial.com/postgresql-coalesce/ ?
    oh hm, I guess I should use sql_function for this
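    A sketch of the sql_function! route, assuming Diesel 1.4, Postgres and a nullable Text column; the users table and the "anonymous" fallback are made up for illustration:

    #[macro_use]
    extern crate diesel;

    use diesel::pg::PgConnection;
    use diesel::prelude::*;
    use diesel::sql_types::{Nullable, Text};

    // Assumed schema for illustration.
    table! {
        users (id) {
            id -> Int4,
            nickname -> Nullable<Text>,
        }
    }

    // Declare COALESCE for one specific argument pair; a fully generic
    // COALESCE would need more type machinery.
    sql_function!(fn coalesce(x: Nullable<Text>, y: Text) -> Text);

    fn names(conn: &PgConnection) -> QueryResult<Vec<String>> {
        use self::users::dsl::*;
        // Fall back to "anonymous" wherever nickname is NULL.
        users.select(coalesce(nickname, "anonymous")).load(conn)
    }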
    Christopher Lee
    @Clee681
    Hello, I could use a little assistance resolving this compiler error:
    5 | #[derive(Insertable)]
      |          ^^^^^^^^^^ the trait `diesel::Expression` is not implemented for `std::string::String`
    This example (https://github.com/diesel-rs/diesel/blob/master/examples/postgres/all_about_inserts/src/lib.rs) does not seem to implement anything special for the use of String
    Georg Semmler
    @weiznich
    @Clee681 That's hard to answer without knowing your code. In general this indicates that you have some type mismatch between your schema and your struct.
    Christopher Lee
    @Clee681
    Ah, you're right. There was a type mismatch
    #[derive(Insertable)]
    #[table_name = "order_items"]
    pub struct NewOrderItem {
      pub order_num: i32,
      pub ts: chrono::NaiveDateTime,
      pub description: String,
      pub src_doc: String,
    }
    
    table! {
      order_items (id) {
          id -> Int4,
          order_num -> Int4,
          ts -> Timestamp,
          description -> Text,
          src_doc -> Jsonb,
      }
    }
    src_doc being different caused the compiler error
    If I remove the src_doc field, I just get an error that Expression is not implemented for NaiveDateTime
    Is there any documentation or an example for how to implement that expression trait for chrono::NaiveDateTime and Jsonb?
    Christopher Lee
    @Clee681
    Re: the NaiveDateTime, diesel has a chrono feature :raised_hands:
    Christopher Lee
    @Clee681
    And looks like there's also a serde_json feature :raised_hands: http://docs.diesel.rs/diesel/pg/types/sql_types/struct.Jsonb.html
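    Putting those two findings together, a sketch of the corrected struct, assuming Diesel 1.4 with the chrono and serde_json features enabled in Cargo.toml (features = ["postgres", "chrono", "serde_json"]):

    #[derive(Insertable)]
    #[table_name = "order_items"]
    pub struct NewOrderItem {
        pub order_num: i32,
        // Timestamp column <-> chrono::NaiveDateTime, via the chrono feature.
        pub ts: chrono::NaiveDateTime,
        pub description: String,
        // Jsonb column <-> serde_json::Value, via the serde_json feature.
        pub src_doc: serde_json::Value,
    }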
    Pavan Kumar Sunkara
    @pksunkara
    So, I found the root cause for default not already being implemented: it looks like sqlite doesn't support it.
    But the Insertable code already compensates for it. So all I have to do is implement what you suggested, and then the expression later on
    David Harrington
    @harringtondavid_twitter

    @harringtondavid_twitter Diesel accepts any valid libpq connection URL. See the libpq documentation (Section 33.1.1) for details. I would guess there is an example somewhere in the Google Cloud documentation of how to connect via libpq.

    Thanks @weiznich. Is Diesel configured to support these file locations and/or the environment variables for SSL certificates?

    https://www.postgresql.org/docs/9.4/libpq-ssl.html
    "location of the certificate and key files can be overridden by the connection parameters sslcert and sslkey or the environment variables PGSSLCERT and PGSSLKEY"

    https://www.howtoforge.com/postgresql-ssl-certificates
    "On the client, we need three files. For Windows, these files must be in %appdata%\postgresql\ directory. For Linux ~/.postgresql/ directory.
    root.crt (trusted root certificate)
    postgresql.crt (client certificate)
    postgresql.key (private key)"

    Georg Semmler
    @weiznich
    @harringtondavid_twitter We just pass the connection string to libpq, so we do whatever libpq does internally.
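    Following the libpq documentation quoted above, the certificate parameters can simply be appended to the connection URL (or supplied via the PGSSLCERT / PGSSLKEY environment variables); the host, database and file paths in this sketch are placeholders:

    use diesel::pg::PgConnection;
    use diesel::prelude::*;

    fn connect() -> PgConnection {
        // sslmode, sslrootcert, sslcert and sslkey are standard libpq
        // connection parameters; Diesel forwards them unchanged.
        let url = "postgres://user:pass@db.example.com:5432/mydb\
                   ?sslmode=require\
                   &sslrootcert=/path/to/root.crt\
                   &sslcert=/path/to/postgresql.crt\
                   &sslkey=/path/to/postgresql.key";
        PgConnection::establish(url).expect("failed to connect over SSL")
    }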
    Erlend Langseth
    @Ploppz
    I have a problem: adding a filter to a working diesel query makes it not compile anymore. The relevant filter, query and error: https://bpa.st/WFEA
    In short, I have a filter that is dyn BoxableExpression<pipeline::table, Pg, SqlType = diesel::sql_types::Bool>, which I think would work if used on the pipeline table alone, but I guess it doesn't work now due to the left joins. Why, and how can I fix it?
    Georg Semmler
    @weiznich
    By using the correct query source for your BoxableExpression. So instead of pipeline::table you need to use the type representing the join.
    Erlend Langseth
    @Ploppz
    How can I do that? I guess the QS type parameter corresponds to QuerySource, whose docs say that there are internal structs that represent joins.
    Georg Semmler
    @weiznich
    @Ploppz I'm not sure if we already expose the necessary helper types via diesel::dsl. If not I'm happy to receive a PR adding those types.
    Erlend Langseth
    @Ploppz
    So (before I look into it) these types actually exist but are not exposed? There can't possibly be one type for each combination of tables you want to join, so how does it work?