For example, if you did a get on a pod while it was at resource version 1, then changed that pod and started a watch from resource version 1, you should see an event for your change come through
It’s intended to make sure you don’t miss changes on resources that are changing frequently
Likewise if you need to restart a watcher you can pass in the last resource version you saw to make sure you don’t miss anything
If you don’t have a resource version it’s ok to leave that off and Kazan will figure it out for you
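For illustration, here’s a minimal sketch of both cases. The list_pod_for_all_namespaces!/0 request is just an example, and the resource_version option name is my assumption — check the Kazan.Watcher docs for the exact spelling:

```elixir
request = Kazan.Apis.Core.V1.list_pod_for_all_namespaces!()

# Restarting a watch: pass the last resource version you saw so no events are missed.
# (Placeholder value — in practice this comes from earlier watch events.)
last_seen_version = "12345"

{:ok, _watcher} =
  Kazan.Watcher.start_link(request,
    send_to: self(),
    resource_version: last_seen_version
  )

# No version handy? Leave the option off and Kazan will figure it out.
{:ok, _watcher} = Kazan.Watcher.start_link(request, send_to: self())
```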
rodesousa
@rodesousa
Ok, I hadn't made the connection with the resource version of a Kubernetes object. Ok, thanks =)
rodesousa
@rodesousa
Last thing :/, I can't receive messages from my watcher. After running Kazan.Watcher.start_link (I took the handle_info callbacks from the docs), did I miss something else? Some GenServer config?
Graeme Coupar
@obmarg
Are you able to share the watcher bit of your code on gist.github.com?
A lot easier to diagnose when I can see what’s going on 🙂
Ok, so a watcher that’s started with send_to: self() will send messages to the current process. You’ve added a handle_info function, but handle_info only gets called for those messages when it’s part of a GenServer. If you want to receive messages outside of a GenServer, you could use a receive block instead
So yeah, either make it a GenServer or add a receive block to handle the watcher messages
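For example, a minimal GenServer along those lines — the %Kazan.Watcher.Event{} message shape follows Kazan’s docs, the rest is a sketch:

```elixir
defmodule PodWatchHandler do
  use GenServer

  alias Kazan.Watcher

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, :ok, opts)
  end

  @impl true
  def init(:ok) do
    # Start the watcher from inside the GenServer so its events are
    # delivered to this process and picked up by handle_info/2 below.
    request = Kazan.Apis.Core.V1.list_pod_for_all_namespaces!()
    {:ok, _watcher} = Watcher.start_link(request, send_to: self())
    {:ok, %{}}
  end

  @impl true
  def handle_info(%Watcher.Event{type: type, object: object}, state) do
    IO.inspect({type, object.metadata.name}, label: "watch event")
    {:noreply, state}
  end
end
```

Outside a GenServer you’d wrap the same pattern match in a plain receive block instead.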
Isn't it better to fetch the newest resource version when the one you have is too old?
Graeme Coupar
@obmarg
That is one of the edge cases with using watchers: I don’t know exactly what circumstances it happens under, but sometimes watchers get a message that says the resource they’re watching is gone
When that happens I think the recommendation is to re-fetch the resource and restart the watch
But I’ve not actually encountered this myself
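Something like this, purely as a hypothetical sketch — the :gone event type and the field names here are my assumptions, see the issue below for what actually got implemented:

```elixir
# Hypothetical handler: the exact "resource gone" message shape may differ.
def handle_info(%Kazan.Watcher.Event{type: :gone}, state) do
  request = Kazan.Apis.Core.V1.list_pod_for_all_namespaces!()

  # Re-fetch the resource to get a fresh resource version...
  {:ok, pod_list} = Kazan.run(request)

  # ...then restart the watch from that version.
  {:ok, _watcher} =
    Kazan.Watcher.start_link(request,
      send_to: self(),
      resource_version: pod_list.metadata.resource_version
    )

  {:noreply, state}
end
```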
This was a bit of a bug up until recently; you might be able to find more info in the bug report: obmarg/kazan#45
If not, I’d be happy to help tomorrow. It’s quite late here in the UK, so I have to head off now
rodesousa
@rodesousa
OK, thanks a lot for helping =)
I'll take a look at the PR
Akash
@atomicnumber1
Hi everyone, I've a quick question about configuration for GKE. You see, I previously had config :kazan, :server, {:kubeconfig, "path/to/file"} and it worked fine. Now, for GKE, it says that I have to use Kazan.Server.resolve_token/2 to resolve auth. Does that mean I now have to pass a server to every Kazan.run?
Graeme Coupar
@obmarg
Hey @atomicnumber1 - it kinda depends on how your GKE is set up. If the kubeconfig option was working for you before, it should continue to work
The resolve token stuff is just for a particular GKE config setup that I assumed was the default
But if kubeconfig was working before, then your GKE is probably set up differently
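In config terms that means the existing setup stays as-is, and the resolve-auth step only applies to the token-based GKE case. A sketch — the resolve_auth name is taken from this discussion, and its return shape is my assumption:

```elixir
# Existing setup — should keep working unchanged:
config :kazan, :server, {:kubeconfig, "path/to/file"}
```

```elixir
# Token-based GKE only: resolve auth at runtime, pass the server explicitly.
{:ok, server} =
  "path/to/file"
  |> Kazan.Server.from_kubeconfig()
  |> Kazan.Server.resolve_auth()

request = Kazan.Apis.Core.V1.list_namespace!()
Kazan.run(request, server: server)
```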
Akash
@atomicnumber1
Hey @obmarg, I see. I was previously using @smn's fork for GCP and the config thing was working fine. Once the pull request was merged, I switched to the default Kazan and now I get resolve_auth errors. Makes sense?
P.S. Thanks for the reply. I'm very new to this stuff.
Akash
@atomicnumber1
We've set Kazan up so that in production it uses the in_cluster config and in development it uses a local kubeconfig. This problem only happens when we authenticate locally, right? I don't know how to approach this. sigh
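For reference, that split presumably looks something like this (both shapes are the ones from the docs and earlier in this chat):

```elixir
# config/prod.exs — in-cluster service account auth:
config :kazan, :server, :in_cluster

# config/dev.exs — local kubeconfig:
config :kazan, :server, {:kubeconfig, "~/.kube/config"}
```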
Graeme Coupar
@obmarg
Ah, I see, you were using smn’s fork
That makes sense - the resolve auth call got added after that
Graeme Coupar
@obmarg
So, it sounds like you’d like to be able to continue using the application config but also need GCP support locally?
Don’t think that’s something we support right now, but if I were to provide an API something like this, would that work for you:
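(The snippet that followed isn’t captured in this log. Purely as a hypothetical sketch, such an API might resolve auth once at startup and keep the application config working after that:)

```elixir
# Purely hypothetical — not a confirmed Kazan API.
# Resolve GCP auth once at boot, then keep calling Kazan.run/1 as before.
{:ok, server} =
  "path/to/file"
  |> Kazan.Server.from_kubeconfig()
  |> Kazan.Server.resolve_auth()

Application.put_env(:kazan, :server, server)

# Later calls need no explicit server:
Kazan.run(Kazan.Apis.Core.V1.list_namespace!())
```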