Activity
  • 19:31 mvitale on main: brief doc for reconciliation api (compare)
  • 19:03 mvitale on main: use 'phrase' match for multi-wo… Merge branch 'recon_improvement… (compare)
  • 16:39 eliagbayani synchronize #158
  • 16:33 eliagbayani synchronize #158
  • 15:31 mvitale on main: enable 'de' locale ignore casing for locale menu s… (compare)
  • Jun 16 19:46 mvitale on production: (compare)
  • Jun 16 17:52 mvitale on main: rack-mini-profiler config a few profiling steps for names… sort classification nodes by sc… and 7 more (compare)
  • Jun 16 17:51 mvitale on optimize_names: reenable caching of page classi… (compare)
  • Jun 16 17:49 mvitale on optimize_names: remove profiling code (compare)
  • Jun 16 15:20 JRice on main: Another attempt at fixing quote… (compare)
  • Jun 16 14:52 eliagbayani synchronize #158
  • Jun 15 22:32 eliagbayani synchronize #158
  • Jun 15 19:30 JRice on main: Fixing a quoting problem (I hop… (compare)
  • Jun 15 19:20 mvitale on profile_names: include association for child/… (compare)
  • Jun 15 17:45 mvitale on main: remove 'best photo' display for… (compare)
  • Jun 15 17:15 mvitale on profile_names: fix includes for classification… (compare)
  • Jun 15 16:53 mvitale on profile_names: sort classification nodes by sc… cache sanitized/downcased node … (compare)
  • Jun 15 15:44 mvitale on profile_names: a few profiling steps for names… (compare)
  • Jun 15 15:17 mvitale on profile_base: rack-mini-profiler config add stackprof (flamegraph) (compare)
  • Jun 15 10:30 eliagbayani synchronize #158
Michael Vitale
@mvitale
trying one idea now...
No luck.
Michael Vitale
@mvitale
@JRice looks like the number of unicorn processes is back down to what it was before in production, see https://one.newrelic.com/-/0rVRV343Kja
Jeremy Rice
@JRice
I did have to do some merging this time, which may have overwritten it.
Michael Vitale
@mvitale
looks like it's back to killing processes all the time again too
Jeremy Rice
@JRice
Huh. How many processes were you expecting?
Michael Vitale
@mvitale
26
Jeremy Rice
@JRice
Ahhh. Yes, indeed, sorry, my memory was bad.
Michael Vitale
@mvitale
bottom right graph
Jeremy Rice
@JRice
Nono, it's definitely 12. I thought that was right, though. 50% higher than it had been. I forgot about the more recent re-doubling.
Michael Vitale
@mvitale
:+1:
Jeremy Rice
@JRice
Well, if you think that's reboot-worthy, I will reboot. Easily fixed the config...
Michael Vitale
@mvitale
I do
should we wait a bit and see if there are any patches that need to go out today?
Jeremy Rice
@JRice
Dammit, I just took the site down by accident. :S
It'll be back in 5. Sorry.
Back now.
I DON'T expect it took the config change... but I'll check...
(It didn't.)
Jeremy Rice
@JRice
I'm building now (so that it will), but I agree it's worth waiting for potential patches.
THAT said: I'm not here for much longer (another 30 min). ...and I really do have to leave.
Michael Vitale
@mvitale
I don't think I'll have anything ready for the wordcloud in that amount of time
but I don't think that's an urgent patch
Jen Hammock
@jhammock
+1 that can wait
Michael Vitale
@mvitale
if nothing's obviously broken in production I think it's worth going ahead with the unicorn change
it'll give a more realistic picture of performance I think
Jen Hammock
@jhammock
+1 that too
Jeremy Rice
@JRice
Meowkay.

Restarting prod now

Glut of workers running now.
Looks like Varnish has picked up the change. We're live.
Michael Vitale
@mvitale
looks like we have even more than before -- my 26 was based on NR, but perhaps that count is inflated or something; it's now 31. But unless we see system resource starvation I think it's a good thing
we were hitting capacity even with the previous increase
if this is real, we've got a radical improvement on the names tab: https://one.newrelic.com/-/0eqwyWP5xjn
it's not even in the top 5 slowest pages any more
Jeremy Rice
@JRice
Cool. Yeah, I thought the 26 was artificial, but it's no skin off my nose. Nor the host's. It's got a ton of CPU cores.
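For context on the worker-count change being discussed: unicorn's process count is set in its Ruby config file. The sketch below is hypothetical (the actual EOL config file and values aren't shown in this chat); it illustrates where the number would live and why New Relic's instance count can read higher than the configured value, since NR counts each forked worker, and workers mid-restart can briefly overlap.

```ruby
# config/unicorn.rb -- hypothetical sketch, not the actual production config.

# The master process forks this many workers; during a rolling restart,
# old and new workers coexist briefly, so monitoring tools like New Relic
# can report more processes than this number.
worker_processes 26

# Kill any worker stuck on a single request longer than this (seconds).
timeout 60

# Load the app in the master before forking, so workers share memory
# via copy-on-write.
preload_app true
```

With `preload_app true`, changing `worker_processes` only takes effect on a full restart (or a SIGHUP/USR2 cycle), which matches the "rebuild then reboot" sequence above.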
Michael Vitale
@mvitale
names tab avg time is down to < 1s from > 4
Jeremy Rice
@JRice
That's great! Thanks again.
Michael Vitale
@mvitale
overview gains aren't so drastic, looks like we're hovering around 800ms avg from 1200ish
Jeremy Rice
@JRice
That's a pretty marked improvement, though.
Michael Vitale
@mvitale
but that's still something, and hopefully the brief summary cache warming will improve it further
yeah
Jeremy Rice
@JRice
I'm ghosting. I'll be back online (but not "working") after 3:30.
Jonathan A Rees
@jar398
I was trying to test some timeout-handling logic on beta, and obtained a query result after almost 3 minutes. I would find it helpful if the neo4j and nginx timeouts were set to be the same as they are on production (i.e. a minute or less)... but maybe there is a reason not to do this that I'm not aware of? @JRice
i.e. I wanted a timeout and didn't get one
Jen Hammock
@jhammock
I… didn’t know queries could run that long anywhere
Jeremy Rice
@JRice
Indeed, we had increased the beta timeout because of some task or another. I can normalize it again tomorrow, easy enough. Can't today.

NOTE TO SELF: do that ^
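Normalizing the beta timeouts to match production would involve two settings, one per layer. These fragments are illustrative only (the real file locations and values for beta aren't shown here); `60s` stands in for "a minute or less" as requested above.

```
# nginx site config (illustrative): stop waiting on the upstream after a
# minute, so a long-running query returns a 504 instead of running for ~3 min.
proxy_read_timeout 60s;
proxy_send_timeout 60s;
```

```
# neo4j.conf (illustrative, Neo4j 3.1+): abort any transaction that runs
# longer than the limit, so the database itself enforces the cutoff.
dbms.transaction.timeout=60s
```

Setting both matters: with only the nginx timeout, the client gets an error but the query keeps consuming database resources until it finishes on its own.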

Jonathan A Rees
@jar398
okay thanks!