bedezign/yii2-audit (Gitter chat log)
Abayomi Osamiluyi
@yomexzo
or wrong place?
Steve Guns
@Blizzke
you can try, i'll see what i can do :)
no guarantees though
Abayomi Osamiluyi
@yomexzo
awesome!
i have a table of about 1 million records
each record has the date/time it was inserted and who inserted it
and when it was last updated and who updated it
and who created it might be different from the last person who updated it
the table has about 30 columns and is currently 716.7 MiB
the problem now is that querying it based on created time, who created, last updated time, and who last updated runs into 25 seconds
or even more
following? making any sense?
Steve Guns
@Blizzke
i am and yes
I assume you added the necessary indexes?
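A minimal way to check that in MySQL, assuming the `packages` table that comes up later in the thread; `SHOW INDEX` is standard MySQL syntax:

-- Lists every index on the table, including the column order inside
-- composite indexes (Seq_in_index) and the optimizer's cardinality estimates.
SHOW INDEX FROM packages;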
Abayomi Osamiluyi
@yomexzo
yes. have indexes. but i don't quite fully understand their impact
let me show you a sample query to get the full picture
Abayomi Osamiluyi
@yomexzo
sigh. stuff requires a lot of background knowledge.
but here we go
SELECT
    COUNT(CASE WHEN packages.location_id IN (1) AND packages.date_generated >= 1436181080 AND packages.date_generated <= 1438773080 THEN 1 END) AS `ingested`, 
    COUNT(CASE WHEN packages.source_id IN (1) AND IF(packages.arrival_time IS NULL, 0, packages.arrival_time) >= 1436181080 AND IF(packages.arrival_time IS NULL, 0, packages.arrival_time) <= 1438773080 THEN 1 END) AS `received`, 
    COUNT(CASE WHEN packages.source_id IN (1) AND packages.last_update_time >= 1436181080 AND packages.last_update_time <= 1438773080 AND currentStatus.type = 'delivery' THEN 1 END) AS `delivered`, 
    COUNT(CASE WHEN packages.source_id IN (1) AND packages.last_update_time >= 1436181080 AND packages.last_update_time <= 1438773080 AND currentStatus.type = 'return' THEN 1 END) AS `returned`, 
    SUM(IF(packages.location_id IN (1) AND packages.date_generated >= 1436181080 AND packages.date_generated <= 1438773080 AND packages.ingestion_fee IS NOT NULL, packages.ingestion_fee, 0)) AS `ingestion_earnings`, 
    SUM(IF(packages.source_id IN (1) AND packages.last_update_time >= 1436181080 AND packages.last_update_time <= 1438773080 AND packages.freight_charge IS NOT NULL, packages.freight_charge, 0)) AS `processed_earnings` 
FROM `packages` 
LEFT JOIN `package_statuses` `currentStatus` ON `packages`.`package_status_id` = `currentStatus`.`id`
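An aside not raised in the chat: the repeated IF(arrival_time IS NULL, 0, arrival_time) guard can be written with COALESCE. The guard is actually redundant inside a CASE WHEN, since a comparison against NULL is simply not true, but COALESCE keeps the intent explicit. A minimal sketch of the `received` term on its own, reusing the column names above:

-- COALESCE returns the first non-NULL argument, so a NULL arrival_time
-- becomes 0 and falls outside the range, exactly as the IF() did.
SELECT
    COUNT(CASE WHEN packages.source_id IN (1)
               AND COALESCE(packages.arrival_time, 0) BETWEEN 1436181080 AND 1438773080
          THEN 1 END) AS `received`
FROM `packages`;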
really, it’s a logistics system. we track packages
location_id is where the waybill/package was generated
source_id is where it is currently
arrival_time is when it got to where it is
of course, we know last_update_time and date_generated
Steve Guns
@Blizzke
but that's a full select?
Abayomi Osamiluyi
@yomexzo
that query takes forever (at least 17 secs) on the table
Steve Guns
@Blizzke
of the entire table
Abayomi Osamiluyi
@yomexzo
i just pray i make a little sense
nope, i cut out some of the aggregations for simplicity
but that should give a picture
yes. scans the entire table apparently
Steve Guns
@Blizzke
yeah but it's the aggregations that will cause the slowdown, not what you just pasted
hmm
Abayomi Osamiluyi
@yomexzo
those are the aggregations
sums and counts
i only selected those 6 for simplicity
Steve Guns
@Blizzke
yeah i see now
you should probably run that stuff with an explain on and see which indexes get used and which don't
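A minimal sketch of that in MySQL, reusing the query above but keeping only the first aggregate for brevity:

EXPLAIN SELECT
    COUNT(CASE WHEN packages.location_id IN (1) AND packages.date_generated >= 1436181080 AND packages.date_generated <= 1438773080 THEN 1 END) AS `ingested`
FROM `packages`
LEFT JOIN `package_statuses` `currentStatus` ON `packages`.`package_status_id` = `currentStatus`.`id`;
-- In the output, type = ALL means a full table scan; possible_keys shows
-- which indexes the optimizer considered and key shows which one it picked.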
Abayomi Osamiluyi
@yomexzo
of course. straight away
Steve Guns
@Blizzke
but as i see it, it selects the entire table every time just to generate your stats
Abayomi Osamiluyi
@yomexzo
id  select_type  table          type    possible_keys  key      key_len  ref                            rows    Extra
1   SIMPLE       packages       ALL     NULL           NULL     NULL     NULL                           988883  NULL
1   SIMPLE       currentStatus  eq_ref  PRIMARY        PRIMARY  4        db.packages.package_status_id  1       NULL
yes. exactly. the entire table
Steve Guns
@Blizzke
yeah, it's not using indexes at all, hence the full table scan
Abayomi Osamiluyi
@yomexzo
yeah. because it’s a full table scan
right?
Steve Guns
@Blizzke
no because you're using ranges
for the timestamp
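Worth spelling out, as an observation beyond what was said here: the ranges live inside the CASE expressions rather than in a WHERE clause, so the optimizer has no predicate to filter on at all. One conventional fix is to split a metric into its own query with a WHERE clause plus a composite index that puts the equality column first and the range column last (the index name below is hypothetical):

-- source_id is an equality match, last_update_time a range: in that
-- order MySQL can seek on source_id and range-scan within it.
CREATE INDEX idx_source_updated ON packages (source_id, last_update_time);

-- The processed_earnings metric as a standalone, index-friendly query:
SELECT SUM(COALESCE(freight_charge, 0)) AS `processed_earnings`
FROM packages
WHERE source_id IN (1)
  AND last_update_time BETWEEN 1436181080 AND 1438773080;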
Abayomi Osamiluyi
@yomexzo
oh
honestly, a million records is very small to me
makes me wonder how people run analytics/reports etc.
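On that closing question, one widely used pattern (sketched here as an assumption; the rollup table and its columns are hypothetical) is to pre-aggregate into a summary table on a schedule, so reports scan a few hundred rollup rows instead of a million raw ones:

-- Daily rollup per source, refreshed by a cron job or scheduled event.
CREATE TABLE package_daily_stats (
    stat_date DATE NOT NULL,
    source_id INT  NOT NULL,
    delivered INT  NOT NULL DEFAULT 0,
    returned  INT  NOT NULL DEFAULT 0,
    PRIMARY KEY (stat_date, source_id)
);

-- Reports then aggregate the small table; these dates correspond roughly
-- to the unix timestamps used in the query above.
SELECT SUM(delivered) AS delivered, SUM(returned) AS returned
FROM package_daily_stats
WHERE source_id = 1
  AND stat_date BETWEEN '2015-07-06' AND '2015-08-05';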