Tariq Ibrahim
@tariq1890
Hey @udoprog
John-John Tedro
@udoprog
Yeah?
Tariq Ibrahim
@tariq1890
I was looking at the code for AbstractLocalConsumerIT
And it does a lot of initialization work.
Is there a possibility to mock out the other HeroicModules?
Writing multiple test cases for resource classes would be overkill, if I am not mistaken,
if we have the integration test starting up all the modules and built-in modules.
John-John Tedro
@udoprog
Most, if not all, can be mocked. IIRC they frequently are. But you probably don't need everything that's in there. You probably don't care about clustering, for example. There should be a single-node IT as well.
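For context, a minimal sketch of what mocking a module out of a test might look like, assuming Mockito; ClusterModule and SingleNodeResourceTest are illustrative stand-ins here, not Heroic's actual classes or wiring:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Hypothetical stand-in for a Heroic module interface.
interface ClusterModule { boolean isReady(); }

public class SingleNodeResourceTest {
    public static void main(String[] args) {
        // Replace the real module with a mock so the test never
        // initializes clustering.
        ClusterModule cluster = mock(ClusterModule.class);
        when(cluster.isReady()).thenReturn(true);
        // The resource under test would be handed the mock instead of
        // the fully wired module graph.
        System.out.println("cluster ready: " + cluster.isReady());
    }
}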
Tariq Ibrahim
@tariq1890
Ah yes!
I could use that actually
Thanks. I’ll get to using that.
John-John Tedro
@udoprog
awesome :)
John-John Tedro
@udoprog
@tariq1890 I've started working on ways to build better clients; among them is #330. I'm looking into doing code generation instead of writing everything by hand, and 'special' syntaxes are not very well supported.
Like the array AST that filters are written in.
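For reference, that array AST is the JSON form filters take in Heroic's query API; a small example (tag names illustrative), in the same shape as the ["key", ...] filters that appear in the curl queries later in this log:

["and",
  ["key", "system.cpu"],
  ["=", "host", "db01"]]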
Tariq Ibrahim
@tariq1890
Great! I’d be glad to help in any way possible
lucilecoutouly
@lucilecoutouly
Hello, I really need some hello
Tariq Ibrahim
@tariq1890
Did you mean you need some help?
lucilecoutouly
@lucilecoutouly
Hello, I need some help with this problem: spotify/heroic#333
John-John Tedro
@udoprog
@lucilecoutouly so you found something?
lucilecoutouly
@lucilecoutouly
Yes, I think it was Cassandra that stopped the connection.
John-John Tedro
@udoprog
Yeah, could be. I didn't spot anything like that in your stacktrace. But I hope it helps.
lucilecoutouly
@lucilecoutouly
So I am happy... it is working now.
John-John Tedro
@udoprog
:)
Florian Lautenschlager
@FlorianLautenschlager
Hi there, can I add two time series? And if so, how are the time series aligned?
John-John Tedro
@udoprog
@FlorianLautenschlager no, not at the moment. We suggest doing it client-side, since the query implications are similar. There were some plans for adding high-level aggregations to support it, but the added complexity isn't currently justified.
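As an illustration of the client-side approach, a minimal Java sketch that sums two series keyed by timestamp; it assumes points align on exact timestamps, whereas real data would usually be resampled to a common interval first:

import java.util.Map;
import java.util.TreeMap;

public class AddSeries {
    // Sum two series; a timestamp present in only one series keeps
    // its single value.
    static TreeMap<Long, Double> add(Map<Long, Double> a, Map<Long, Double> b) {
        TreeMap<Long, Double> out = new TreeMap<>(a);
        b.forEach((ts, v) -> out.merge(ts, v, Double::sum));
        return out;
    }

    public static void main(String[] args) {
        TreeMap<Long, Double> a = new TreeMap<>();
        a.put(1000L, 1.0);
        a.put(2000L, 2.0);
        TreeMap<Long, Double> b = new TreeMap<>();
        b.put(1000L, 3.0);
        b.put(3000L, 4.0);
        System.out.println(add(a, b)); // {1000=4.0, 2000=2.0, 3000=4.0}
    }
}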
Florian Lautenschlager
@FlorianLautenschlager
thx ;-)
Florian Lautenschlager
@FlorianLautenschlager
Hi again, does heroic make use of compression?
John-John Tedro
@udoprog
Not natively, no, but you can enable compression at the DB level (e.g. C* and Bigtable have it).
Nothing as good as Gorilla though. That would complicate ingestion.
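For the C* case, compression is a table-level setting; a hedged CQL example, where the keyspace and table names are illustrative rather than Heroic's actual schema:

-- LZ4 is Cassandra's default compressor; chunk size is tunable.
ALTER TABLE heroic.metrics
  WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 64};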
Florian Lautenschlager
@FlorianLautenschlager
Hi again, can I query the raw values within a range using heroic? I am not sure if the aggregation (https://spotify.github.io/heroic/#!/docs/api/post-query-metrics) is optional.
John-John Tedro
@udoprog
@FlorianLautenschlager it is, I think. Otherwise there is the empty aggregation.
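A sketch of what such a raw query might look like, in the style of the curl calls later in this log; the host, the key, and the {"type": "empty"} spelling of the empty aggregation are assumptions here, not confirmed API:

$ curl -s -H "Content-Type: application/json" http://heroic:8080/query/metrics \
  -d '{"range": {"type": "relative", "unit": "HOURS", "value": 1}, "filter": ["key", "system.cpu"], "aggregation": {"type": "empty"}}' \
  | jq '.result[0].values'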
jcabmora
@jcabmora
Hello @udoprog! I was wondering if you had a chance to look at spotify/heroic#371. The build is failing coverage checks, and I was wondering if you have any pointers on how to solve this. I actually added an integration test (which is passing). I suspect the codecov checks might be broken, since they point to code that was not changed in the PR I submitted. E.g. https://codecov.io/gh/spotify/heroic/pull/371/changes
0x6875790d0a
@huydx
Hi, wondering if anybody is still here?
laoduo
@laoduo
How to solve this error?
ERR: ShardError(nodes=[LocalClusterNode(localMetadata=NodeMetadata(version=0, id=0336e36b-8d5c-49bb-9d8d-b435eb9eaf32, tags={}, service=ServiceInfo(name=The Heroic Time Series Database, version=0.0.1-SNAPSHOT, id=heroic)))], shard={}, error=Some fetches failed (4) or were cancelled (0), caused by Some fetches failed (4) or were cancelled (0))
memory
@memory
Hi folks. I'm cc'ing this from the freenode channel, apologies for the duplication.
I'm attempting to update a rather ancient heroic deployment to 0.8.7 and from the kafka consumer to the pubsub consumer, and am running into a frankly bizarre issue: messages with a certain key format appear to be silently not recorded in the metrics backend.
I've turned on debug logging and the pubsub consumer appears to be correctly ingesting messages:
15:39:20.168 [Gax-15] DEBUG com.spotify.heroic.consumer.pubsub.Receiver - Received ID:495196398495706 with content: 
{"uuid":"ff4bf904-41fe-43d8-bede-3212359aa7f8:60","timestamp":1554478740000,"value":130.0,"metric":"test130","aggregation":"max"}
heroic correctly updates the metadata store for that key:
$ curl -s -H "Content-Type: application/json" http://heroic:8080/metadata/series  \
  -d '{"range": {"type": "relative", "unit": "HOURS", "value": 1}, "filter": ["key", "ff4bf904-41fe-43d8-bede-3212359aa7f8:60"]}' \
  | jq '.series[0]'
{
  "key": "ff4bf904-41fe-43d8-bede-3212359aa7f8:60",
  "tags": {
    "aggregation": "deltasum",
    "metric": "test136"
  },
  "resource": {}
}
...but it doesn't seem to actually be storing the data:
$ curl -s -H "Content-Type: application/json" http://heroic:8080/query/metrics \
  -d '{"range": {"type": "relative", "unit": "HOURS", "value": 1}, "filter": ["key", "ff4bf904-41fe-43d8-bede-3212359aa7f8:60"]}' \
  | jq '.result'
[]
memory
@memory
tailing the logs while this is happening reveals nothing but the "Received" debug log -- no errors, no warnings, nothing.
the data just gets dropped on the floor
and just to make it exceedingly weird: this only happens when the key name ends in :60 or :600 (these keys are generated by a rollup process we run as a beam job)
if the key name doesn't end like that, the data is correctly stored:
$ curl -s -H "Content-Type: application/json" http://heroic:8080/query/metrics  \
  -d '{"range": {"type": "relative", "unit": "HOURS", "value": 1}, "filter": ["key", "ff4bf904-41fe-43d8-bede-3212359aa7f8"]}' \
  | jq '.result[0].values[0:2]'
[
  [
    1554477496000,
    115
  ],
  [
    1554477497000,
    115
  ]
]
:confounded:
this is heroic release 0.8.7 running on openjdk8 with the bigtable metrics backend and elasticsearch metadata backend, although I was able to reproduce the same behavior with the in-memory metadata and metrics stores.
memory
@memory
ahahahahahaha.... and, predictably: nevermind. PEBKAC. Thank you for letting me rubber-duck this here. :)