Andreas Lind
@papandreou
Are they really too big to be streamed?
IDK if the HTTP-based API supports PUT with Content-Range
Gustav Nikolaj
@gustavnikolaj

The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
https://aws.amazon.com/s3/faqs/

We have an average memory consumption of a couple hundred megabytes, but it will typically be used to debug containers nearing the memory limit. So I'd expect sizes in the interval from ~200 MB to 4 GB. Should be enough with a single PUT then

All the docs I've read so far only mention the > 100 megs recommendation
But it's internal network traffic all of it, so it should be alright :)
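
A minimal sketch of what that streaming upload could look like, assuming the AWS SDK v2 for Node.js; the bucket name and key pattern are made up. Since a heap snapshot stream has no known length up front, s3.upload() is the pragmatic choice here: it accepts a readable stream and manages the single-PUT-vs-multipart decision itself.

// Sketch: stream a heap snapshot straight to S3 without buffering it in memory.
// The bucket and key naming are hypothetical.
const v8 = require("v8");
const AWS = require("aws-sdk");

const s3 = new AWS.S3();

async function uploadHeapSnapshot() {
  // getHeapSnapshot() (Node >= 11.13) returns a readable stream of the snapshot.
  const snapshot = v8.getHeapSnapshot();
  return s3
    .upload({
      Bucket: "debug-snapshots",
      Key: `heap-${process.pid}-${Date.now()}.heapsnapshot`,
      Body: snapshot,
    })
    .promise();
}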
Andreas Lind
@papandreou
Watch out for all the PII, private keys etc. that such a heap snapshot might contain :)
Gustav Nikolaj
@gustavnikolaj
Thanks. That's good advice. It's definitely not something you want to be too liberal about
Gustav Nikolaj
@gustavnikolaj
How do you prove, in an objective way, that a Node.js server is doing something blocking? :D
Gustav Nikolaj
@gustavnikolaj
My best idea so far is to graph the number of processes running and the throughput/requests per second, and then see if we ever have a higher throughput than the number of processes
Sune Simonsen
@sunesimonsen
@Munter @alexjeffburke do you know a good way to add TypeScript definitions to a dynamic library where I want to expose types for the public API?
Are type annotations in comments an option?
Peter Müller
@Munter
@sunesimonsen I remember a couple of people on Twitter talking about being able to write JS with JSDoc and having TypeScript types extracted from that. The problem is that JSDoc is pretty limited when it comes to generics, if you might want to use those. And when I tried this approach in the very early days of that tooling, the output wasn't super useful
You don't want to write .d.ts files by hand?
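
For reference, the JSDoc-in-JS approach looks roughly like this (the function is a made-up example); tsc can type-check it and emit .d.ts files from it with --allowJs --declaration --emitDeclarationOnly. Basic generics work via @template, though more involved generic signatures are where the limitations Peter mentions show up.

// A plain JS function typed entirely through JSDoc comments.
/**
 * @template T
 * @param {T[]} items
 * @param {(item: T) => boolean} predicate
 * @returns {T | undefined}
 */
function findFirst(items, predicate) {
  return items.find(predicate);
}

module.exports = { findFirst };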
Sune Simonsen
@sunesimonsen
I have a setup where I can generate the .d.ts from JSDoc, but I think I'll just write the .d.ts by hand. Then I avoid any new tooling.
It's a pretty limited amount of typing I want to expose.
Peter Müller
@Munter
If it's for unexpected, I have some type definitions I have copied around for a few years now, which take care of most of the API besides assertions
Sune Simonsen
@sunesimonsen
It's for work, and I want as few types as I can get away with 😂
Andreas Lind
@papandreou
There’s a library called blocked that uses a setInterval or similar to guesstimate how many milliseconds the event loop has been blocked for, and fires an event if it crosses a threshold you’ve specified.
It has a drop-in replacement called blocked-at that uses async_hooks to also provide a stack trace of where the blocking operation was initiated. Unfortunately it has a big perf overhead, so it’s not suitable for production.
@gustavnikolaj :point_up:
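
A minimal sketch of wiring those two up; the 100 ms threshold is an arbitrary choice.

// blocked: cheap enough for production, reports how long the event loop stalled.
const blocked = require("blocked");
blocked(ms => {
  console.warn(`Event loop blocked for ${ms} ms`);
}, { threshold: 100 });

// blocked-at: same idea, but uses async_hooks to also capture a stack trace
// of where the blocking operation started. Too slow for production use.
const blockedAt = require("blocked-at");
blockedAt((time, stack) => {
  console.warn(`Blocked for ${time} ms, started at:`, stack);
}, { threshold: 100 });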
Sune Simonsen
@sunesimonsen
Scripting with https://github.com/sunesimonsen/transformation is pretty fun and useful.
Gustav Nikolaj
@gustavnikolaj
I am debugging a memory leak in a Node.js application. I observe that my rss is way bigger than my heapTotal and external memory segments combined when I inspect process.memoryUsage(). I would expect heapTotal + external to give me all the memory used by JS objects and C++ objects referenced from JS. Am I missing something? Wouldn't a runaway rss number be an indication that there's some native code leaking? I am observing up to 70 to 80 percent of memory use being outside of heap and external...
Thanks for the suggestions on the event loop blocking @papandreou. I managed to convince the teams that they actually had a block without having to resort to scientific proofs :) Unfortunately we were sick all week so I never got back to you :(
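
For concreteness, a sketch of the measurement being described, using Node's built-in process.memoryUsage(). Some gap between rss and heapTotal + external is normal (code segments, thread stacks, allocator overhead), but 70 to 80 percent is a lot.

// Periodically log how much of the RSS is unaccounted for by the JS heap
// and V8-reported external allocations.
const toMB = n => (n / 1024 / 1024).toFixed(1);

setInterval(() => {
  const { rss, heapTotal, external } = process.memoryUsage();
  const unaccounted = rss - heapTotal - external;
  console.log(
    `rss=${toMB(rss)}MB heapTotal=${toMB(heapTotal)}MB ` +
    `external=${toMB(external)}MB unaccounted=${toMB(unaccounted)}MB`
  );
}, 60000).unref();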
Andreas Lind
@papandreou
@gustavnikolaj, external will only include memory that is allocated outside the JS heap and correctly reported using Isolate::AdjustAmountOfExternalAllocatedMemory: https://v8docs.nodesource.com/node-4.8/d5/dda/classv8_1_1_isolate.html#ae1a59cac60409d3922582c4af675473e
... so it's not an exact science. V8 doesn't automatically know that malloc gets called.
Maybe that explains the gap, depending on which native modules are involved?
Gustav Nikolaj
@gustavnikolaj
That could certainly be an explanation
These are the modules using nan as a dependency: iconv, sharp, genx and node-expat. That's the quickest way I know to check for native modules :)
I'd by default suspect the latter two most, but it actually seems to happen regardless of whether I engage the XML-related bit of the server :D
Andreas Lind
@papandreou
sharp, for sure :)
Gustav Nikolaj
@gustavnikolaj
We saw some very unfortunate interactions between sharp and Ubuntu 18.04. We went from stable memory use to rampant leaks, pretty much making the apps unusable due to OOM killing... But that seemed mostly solved by enabling jemalloc.
But I'll take a look at sharp :) Thanks for the pointer!
Andreas Lind
@papandreou
At least the symptoms you list are consistent with what I've seen with sharp before.
Sune Simonsen
@sunesimonsen
I made a Babel plugin for stylewars to minify CSS in place. Now my Hackernews example with 7 dependencies (VDOM, store, router and styling) is down to 12.7K JavaScript :tada:
Andreas Lind
@papandreou
In bed with the enemy, huh? :)
Sune Simonsen
@sunesimonsen
Haha, I'm only fighting for a no-tooling development experience. I think there will always be an optimizing build for production.
That build tooling can be anything as I'm just following the rules of the web :-)
So if there was a snazzy asset-graph builder that would do it all, I would welcome it ;-)
Andreas Lind
@papandreou
I recently made a rollup integration :)
Sune Simonsen
@sunesimonsen
I know; ideally I want something that I can just point at my HTML file and it just works. I made that for depository, but I want it in general.
I don't want to configure anything if I can avoid it.
Andreas Lind
@papandreou
Same :)
Miroslav Nikolov
@moubi

Hello.

I often find myself encountering this construct in React components:

useEffect(() => {
  loadCapabilitiesAction().then(capabilityList => {
    if (capabilityList) {
      setCapabilities(capabilityList);
    }
  });
});

It's all about fetching some data and setting the state which then renders some DOM.

It's not very straightforward to test it though.
Do you have any advice or direction to follow with unexpected?

A component test looks like this:

const loadCapabilitiesActionPromise = Promise.resolve([
  { id: "mailadmin", metadata: {} },
  { id: "website_builder_premium", metadata: {} },
  { id: "wordpress", metadata: {} },
  { id: "webshop", metadata: {} },
  { id: "onephoto", metadata: {} },
  { id: "filemanager", metadata: {} },
  { id: "dns", metadata: {} },
  { id: "external_ssh", metadata: {} },
  { id: "backup", metadata: {} },
  { id: "php_mysql", metadata: {} },
  { id: "guest_users", metadata: {} }
]);
props.loadCapabilitiesAction.returns(loadCapabilitiesActionPromise);

let component = null;

await act(async () => {
  component = mount(<QuickAccess {...props} />);
});

loadCapabilitiesActionPromise.then(() => {
  expect(component, "not to contain test id", "quick-access-website-stats");
});

but to be honest I am in an infinite loop with such an approach.

:disappointed:
Gustav Nikolaj
@gustavnikolaj

You're mixing data fetching with your view in the same component, so you'll have to stub one to test the other.

One way to fix it is to split the component in two: one component that has all the view logic and takes all the data as props, and then a component with no "DOM-like" elements that does the data fetching and forwards to the plain view component. Both components can be exported from the same file.

You can also isolate your data fetching logic in a similar way, so that you can test it in isolation without also bringing in React
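
A sketch of that split, reusing the names from Miroslav's snippet; QuickAccessView and the exact props are hypothetical.

import React, { useEffect, useState } from "react";

// Plain view component: all DOM, no fetching. Testable by just passing props.
export function QuickAccessView({ capabilities }) {
  return (
    <ul>
      {capabilities.map(c => (
        <li key={c.id} data-test-id={`quick-access-${c.id}`}>{c.id}</li>
      ))}
    </ul>
  );
}

// Container component: all fetching, no DOM of its own.
export function QuickAccess({ loadCapabilitiesAction }) {
  const [capabilities, setCapabilities] = useState([]);
  useEffect(() => {
    loadCapabilitiesAction().then(capabilityList => {
      if (capabilityList) {
        setCapabilities(capabilityList);
      }
    });
  }, [loadCapabilitiesAction]);
  return <QuickAccessView capabilities={capabilities} />;
}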
Miroslav Nikolov
@moubi
Thanks Gustav. I am not sure which approach will win in the end.
I am also a bit hesitant about abstracting too much at the moment.
Peter Müller
@Munter
Andreas Lind
@papandreou
Argh, thanks for sharing :scream: