    Per-Åke Minborg
    @minborg
    You only get the high_line_count. How does it differ?
    solangepaz
    @solangepaz

    I do not know, but for example, this returns the same result:

      Map<Boolean, Long> grouped = joinFirst.stream()
              .collect(partitioningBy(t -> t.get0().getOOrderpriority().equals("1-URGENT")
                      || t.get0().getOOrderpriority().equals("2-HIGH"), counting()));

      Map<Boolean, Long> grouped = joinFirst.stream()
              .collect(partitioningBy(t -> t.get0().getOOrderpriority().equals("1-URGENT"), counting()));

    and I would expect those two to return different results

    solangepaz
    @solangepaz
    It is counting all the rows instead of applying the predicate t -> t.get0().getOOrderpriority().equals("2-HIGH")
    Per-Åke Minborg
    @minborg
    You should get a Map with two entries (true and false): the count of all elements that match the predicate and the count of those that do not.
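That two-entry behaviour can be checked in isolation; a minimal, self-contained sketch with plain strings standing in for the joined tuples (the priority values are taken from the snippets above):

```java
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.counting;
import static java.util.stream.Collectors.partitioningBy;

public class PartitionDemo {
    static Map<Boolean, Long> partition(List<String> priorities) {
        // partitioningBy always yields exactly two keys: true and false
        return priorities.stream()
                .collect(partitioningBy(
                        p -> p.equals("1-URGENT") || p.equals("2-HIGH"),
                        counting()));
    }

    public static void main(String[] args) {
        Map<Boolean, Long> grouped =
                partition(List.of("1-URGENT", "3-MEDIUM", "2-HIGH", "5-LOW"));
        System.out.println(grouped); // {false=2, true=2}
    }
}
```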
    solangepaz
    @solangepaz

    Now i have this:

            Map<String, Map<Boolean, Long>> grouped = joinFirst.stream()
                    .collect(Collectors.groupingBy(t -> t.get1().getLShipmode(),
                            partitioningBy(t -> t.get0().getOOrderpriority().equals("1-URGENT")
                                    || t.get0().getOOrderpriority().equals("2-HIGH"), counting())));

    And the result is this:

    MAIL       false 13209
    MAIL       true 0
    SHIP       false 13224
    SHIP       true 0

    But it should be:

    MAIL       false 0
    MAIL       true 5376
    SHIP       false 0
    SHIP       true 5346
    Per-Åke Minborg
    @minborg
    ok. Make sure that your predicate in the partitioningBy clause really works by manually printing out the result for some items.
    solangepaz
    @solangepaz
    I think the error is in the group by, because the values are correct: the total count of true is right, but it is not divided between "MAIL" and "SHIP"
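The groupingBy/partitioningBy nesting itself does work on plain data, which suggests the problem is in the data reaching the predicate rather than in the collectors; a self-contained sketch (Row is a hypothetical stand-in for the joined tuple):

```java
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.counting;
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.partitioningBy;

public class GroupPartitionDemo {
    // Hypothetical stand-in for the joined row: (shipMode, orderPriority)
    record Row(String shipMode, String priority) {}

    static Map<String, Map<Boolean, Long>> group(List<Row> rows) {
        // Outer map keyed by ship mode, inner partition split by the priority predicate
        return rows.stream()
                .collect(groupingBy(Row::shipMode,
                        partitioningBy(r -> r.priority().equals("1-URGENT")
                                || r.priority().equals("2-HIGH"), counting())));
    }

    public static void main(String[] args) {
        Map<String, Map<Boolean, Long>> grouped = group(List.of(
                new Row("MAIL", "1-URGENT"),
                new Row("MAIL", "5-LOW"),
                new Row("SHIP", "2-HIGH")));
        System.out.println(grouped.get("MAIL")); // {false=1, true=1}
        System.out.println(grouped.get("SHIP")); // {false=0, true=1}
    }
}
```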
    Maarten Winkels
    @mwinkels_gitlab
    Hi, I'm trying to run the "speedment:tool" goal for a SQLite database, but I get this exception:
    Caused by: java.util.NoSuchElementException: No value present
    at java.util.Optional.get(Optional.java:135)
    at com.speedment.runtime.connector.sqlite.internal.SqliteMetadataHandler.lambda$null$23(SqliteMetadataHandler.java:358)
    Can this be caused by a 'TEXT' type foreign key?
    Thanks!
    Emil Forslund
    @Pyknic
    @mwinkels_gitlab Hm, I think the exception indicates that the database has a primary key (not a foreign key) but the connector can't find the associated column. It could indicate a bug. Do you know which table it is, and if so, what primary keys does it have?
    solangepaz
    @solangepaz
    Hi, I have a question. Is speedment supposed to return different results for the same query?
    For example, I ran a query three times and those three results are different.
    Per-Åke Minborg
    @minborg
    @solangepaz Hi! Strictly speaking, the contract for a Manager’s stream method states that the order is unspecified:
        The order in which elements are returned when the stream is eventually
        consumed is unspecified. The order may even change from one invocation
        to another. Thus, it is an error to assume any particular element order
        even though it might appear, for some stream sources, that there is a
        de-facto order.

        If a deterministic order is required, then make sure to invoke the
        Stream#sorted(java.util.Comparator) method on the Stream returned.
    The reason is that different databases have different orders and order guarantees. Add a sorted() operation if you need a deterministic order (this will cost performance though).
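A sketch of the difference, using a plain stream in place of a Manager's stream (the flag values and comparator are illustrative):

```java
import java.util.Comparator;
import java.util.List;

public class SortedStreamDemo {
    // sorted() turns an unordered stream into a deterministically ordered one
    static List<String> deterministic(List<String> unordered) {
        return unordered.stream()
                .sorted(Comparator.naturalOrder())
                .toList();
    }

    public static void main(String[] args) {
        // The same elements may arrive from the database in any order...
        List<String> flags = List.of("R", "A", "N", "A");
        // ...but after sorted() the result is always the same
        System.out.println(deterministic(flags)); // [A, A, N, R]
    }
}
```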
    solangepaz
    @solangepaz

    But the results are very different. For example, this query in Postgresql:

    select
        l_returnflag,
        l_linestatus,
        sum(l_extendedprice) as sum_charge,
        avg(l_tax)
    from
        lineitem
    where
        l_shipdate <= date '1998-12-01'
    group by
        l_returnflag,
        l_linestatus
    order by
        l_returnflag,
        l_linestatus;

    It has the following output:
    (screenshot of the PostgreSQL output)

    With the speedment the output is this:

    A F 1,20E+10 0,039955076
    N F 3,23E+08 0,040103375
    N O 1,91E+10 0,040013735
    R F 1,20E+10 0,039975758

    Per-Åke Minborg
    @minborg
    The average values look good but not the sum(). Do you have the Java code?
    solangepaz
    @solangepaz
    Yes, the code is this:
    Calendar cal = Calendar.getInstance();
    cal.set(Calendar.DAY_OF_MONTH, 1);
    cal.set(Calendar.MONTH, Calendar.DECEMBER);
    cal.set(Calendar.YEAR, 1998);

    java.sql.Date sqlDate = new java.sql.Date(cal.getTimeInMillis());
    LineitemManager lineitem = app.getOrThrow(LineitemManager.class);

    Map<Tuple2<String, String>, AbstractMap.SimpleEntry<Double, Double>> grouped =
            lineitem.stream()
                    .filter(Lineitem.L_SHIPDATE.lessOrEqual(sqlDate))
                    .collect(groupingBy(t -> Tuples.of(t.getLReturnflag(), t.getLLinestatus()),
                            Collectors.collectingAndThen(Collectors.toList(), list -> {
                                double first = list.stream()
                                        .mapToDouble(t -> t.getLExtendedprice().get().doubleValue())
                                        .sum();
                                double second = list.stream()
                                        .collect(averagingDouble(t -> t.getLTax().get().doubleValue()));
                                return new AbstractMap.SimpleEntry<>(first, second);
                            })));

    grouped.forEach((key, value) -> System.out.println(key + ", " + value));
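As a side note, the collectingAndThen(toList(), ...) step above can be expressed more directly with Collectors.teeing (Java 12+), which avoids materializing the intermediate list; a minimal sketch on plain values, not the Speedment entities (Item is hypothetical):

```java
import java.util.AbstractMap;
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.averagingDouble;
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.summingDouble;
import static java.util.stream.Collectors.teeing;

public class TeeingDemo {
    // Hypothetical stand-in for a Lineitem row
    record Item(String key, double price, double tax) {}

    static Map<String, AbstractMap.SimpleEntry<Double, Double>> aggregate(List<Item> items) {
        return items.stream()
                .collect(groupingBy(Item::key,
                        // teeing feeds each element to both collectors, then merges the two results
                        teeing(summingDouble(Item::price),
                                averagingDouble(Item::tax),
                                AbstractMap.SimpleEntry::new)));
    }

    public static void main(String[] args) {
        Map<String, AbstractMap.SimpleEntry<Double, Double>> grouped = aggregate(List.of(
                new Item("A F", 10.0, 0.04),
                new Item("A F", 20.0, 0.02)));
        grouped.forEach((key, value) -> System.out.println(key + ", " + value));
    }
}
```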
    Per-Åke Minborg
    @minborg
    Can’t spot the problem. Have you checked that the Date looks ok and filters out the right elements?
    solangepaz
    @solangepaz
    Yeah, it looks okay. Even because I have already tested on two machines with the same database. The first machine returns the result just like PostgreSQL. The second machine returns these wrong results.
    The only difference is here: .mapToDouble(t -> t.getLExtendedprice().get().doubleValue()).sum();
    The first machine (where it works ok) only accepts .mapToDouble(t -> t.getLExtendedprice().doubleValue()).sum(); and the second machine needs get() before doubleValue().
    Per-Åke Minborg
    @minborg
    So have you set the column l_extendedprice to nullable on one but not on the other?
    In the speedment Tool I mean.
    solangepaz
    @solangepaz
    Hi, does speedment not support an H2 database? I am using SQLite, but it is very slow.
    solangepaz
    @solangepaz
    And is it possible to use this through the mutator?
    Connection connection = DriverManager.getConnection("jdbc:sqlite::memory:");
    connection.createStatement().executeUpdate("restore from database.db");
    Per-Åke Minborg
    @minborg
    Currently, there is no support for H2. However, since we now have support for SQLite, the effort of writing an H2 driver would be much less. Anyone up to the challenge?
    @solangepaz it should be possible to execute any code within the mutator. But can’t you simply run the code before you create the application builder?
    solangepaz
    @solangepaz
    Thank you, I've already been able to do this with the mutator. However SQLite is still very slow in speedment
    Per-Åke Minborg
    @minborg
    I suspect it is SQLite that is slow and not Speedment?
    solangepaz
    @solangepaz
    I think the problem is not in SQLite. If I use SQLite in memory with jdbc for a query I get a response at 376ms. With the same query in speedment and with SQLite in memory I have a response in 2ms.
    Per-Åke Minborg
    @minborg
    @solangepaz There must be some error with the response times you gave?
    solangepaz
    @solangepaz
    Yes, I'm sorry. I changed the times. The correct one is this: 376ms with speedment and sqlite in memory; 2ms with jdbc and sqlite in memory.
    Per-Åke Minborg
    @minborg
    ok. For what query/stream?
    solangepaz
    @solangepaz
    For a very simple query. In this case I tried for select count (*) from customer;
    Per-Åke Minborg
    @minborg
    ok. As you know, there is a known issue regarding this particular query (speedment/speedment#720) and we have made some progress recently.
    However, there is more to be done. If you run the query many times, I expect the difference to be much smaller.
    solangepaz
    @solangepaz
    I got those values by executing the query 10 times and calculating the mean value. I thought the problem would not replicate with an in-memory database.
    Per-Åke Minborg
    @minborg
    Apparently, there is still some overhead that remains in the Speedment code. Since the in-memory DB is much faster, the Speedment overhead becomes relatively more apparent.
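One way to make that overhead visible is to time each run separately instead of averaging all ten, so any one-time setup cost stands out from the steady state; a sketch with a stubbed query (the real call would be the Speedment count stream):

```java
public class WarmupTimingDemo {
    // Stub standing in for the real query, e.g. customers.stream().count()
    static long runQuery() {
        return 0L;
    }

    public static void main(String[] args) {
        // Print each run's time: the first runs include one-time setup,
        // the later ones show the steady-state cost
        for (int i = 0; i < 10; i++) {
            long start = System.nanoTime();
            runQuery();
            long elapsedMicros = (System.nanoTime() - start) / 1_000;
            System.out.println("run " + i + ": " + elapsedMicros + " µs");
        }
    }
}
```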
    mainakmandal
    @mainakmandal
    Hi Speedment team, we have a scenario where DB tables have approx. 50,000 records and the front-end web application needs bulk CRUD operations against these tables for approx. 40,000 records. We are thinking of using the Speedment ORM for a performance improvement for such bulk CRUD operations, but we are reluctant because of potential performance issues, as we believe ORMs generally suffer when caching such a huge number of records. Will Speedment work properly in such scenarios?
    Per-Åke Minborg
    @minborg
    @mainakmandal I think definitely yes, Speedment would be able to handle terabytes of data because data is stored off-heap. Let me know how it works out for you guys.
    Anush B M
    @BMAnush
    Hi...encountered into Speedment recently...would like to try it out....having trouble getting started....once I am done generating the entities....how do I run them as a spring boot application and how to I trigger them as an API?
    Documentation jumps too fast from basic to advanced....finding it hard to get the connectivity....
    Per-Åke Minborg
    @minborg
    @BMAnush Hi! There is a Spring Boot plugin you can activate that will generate a lot of connectivity code for you. Have you read the Wiki? https://github.com/speedment/speedment/wiki/Tutorial:-Speedment-Spring-Boot-Integration
    The Wiki is for “manual” integration whereas the plugin generates code automatically.
    Let me know if that helps you back on track again!
    Anush B M
    @BMAnush
    Thank you @minborg ....I shall take a look at that tutorial.....I believe this should help out in getting started....
    Per-Åke Minborg
    @minborg
    @BMAnush Great to hear!
    Arnab Samanta
    @arnab192
    Is there any way to convert a Join<Tuple4> object to a Join<Tuple3>? I need it as a Join object.
    Per-Åke Minborg
    @minborg
    I think the easiest way to get a Tuple3 from 4 joined tables is to provide a custom constructor in the .build() method. For example build((a, b, c, d) -> Tuples.of(b, c, d)) if you only want b, c, and d and not a.
    Arnab Samanta
    @arnab192
    Thanks @minborg, this is the solution I was looking for. I have another query: is there any way to construct a join query dynamically? That is, sometimes I join 3 entities and sometimes 4, so I need to build the query dynamically.
    Per-Åke Minborg
    @minborg
    Sure. That can be done easily. If both branches return a Tuple3, for example, you can treat them in the same code path, for instance after an if statement.
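A sketch of that branching, with a hypothetical Tuple3 record standing in for Speedment's tuple type (the comments mark where the real join builders would go):

```java
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Stream;

public class DynamicJoinDemo {
    // Hypothetical stand-in for Speedment's Tuple3
    record Tuple3<A, B, C>(A get0, B get1, C get2) {}

    // Both branches produce the same Tuple3 shape, so callers need only one code path
    static Supplier<Stream<Tuple3<String, String, String>>> chooseJoin(boolean useFourTables) {
        if (useFourTables) {
            // Would be: join4.build((a, b, c, d) -> Tuples.of(b, c, d)).stream()
            return () -> Stream.of(new Tuple3<>("b", "c", "d"));
        } else {
            // Would be: join3.build(Tuples::of).stream()
            return () -> Stream.of(new Tuple3<>("a", "b", "c"));
        }
    }

    public static void main(String[] args) {
        List<String> firsts = chooseJoin(true).get().map(Tuple3::get0).toList();
        System.out.println(firsts); // [b]
    }
}
```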