    Graeme Coupar
    @obmarg
    That is one of the edge cases with using watchers: I don’t know exactly what circumstances it happens under, but sometimes watchers get a message that says the resource they’re watching is gone
    When that happens I think the recommendation is to re-fetch the resource and restart the watch
    But I’ve not actually encountered this myself
    This was a bit of a bug up until recently, you might be able to find more info in the bug report: obmarg/kazan#45
    If not, I’d be happy to help tomorrow. It’s quite late here in the UK, so I have to head off now
    rodesousa
    @rodesousa
    OK, thanks a lot for helping =)
    I will take a look at the PR
    Akash
    @atomicnumber1
    Hi everyone, I have a quick question about configuration for GKE.
    You see, I previously had
    config :kazan, :server, {:kubeconfig, "path/to/file"}
    and it worked fine.
    Now, for GKE, it says that I have to use Kazan.Server.resolve_token/2 to resolve auth.
    Does that mean I now have to pass a server to every Kazan.run?
    Graeme Coupar
    @obmarg
    Hey @atomicnumber1 - it kinda depends on how your GKE is set up. If the kubeconfig option was working for you before, it should continue to work
    The resolve token stuff is just for a particular GKE config setup that I assumed was the default
    But if kubeconfig was working before, then your GKE is probably set up differently
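A hedged sketch of what the explicit-server route discussed here might look like. The function names follow the ones mentioned in this thread (Kazan.Server.resolve_token, the server: option to Kazan.run), but the exact signatures and return shapes should be checked against the kazan docs:

```elixir
# Sketch only: build a server from a kubeconfig, resolve GCP auth up
# front, then pass the server explicitly on each request.
# Exact function names/arities are assumptions — check the kazan docs.
server = Kazan.Server.from_kubeconfig("path/to/file")
{:ok, server} = Kazan.Server.resolve_token(server)

Kazan.Apis.Core.V1.list_namespace!()
|> Kazan.run!(server: server)
```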
    Akash
    @atomicnumber1

    Hey @obmarg, I see. I was previously using @smn's fork for GCP and the config thing was working fine. Once the pull request was merged, I switched to the default kazan, and now I get resolve_auth errors. Makes sense?

    P.S. Thanks for the reply. I'm very new to this stuff.

    Akash
    @atomicnumber1
    We have kazan set up so that in production it uses the in_cluster config, and in development it uses the local kubeconfig. This happens only when we authenticate locally, right?
    I don't know how to approach this. sigh
    Graeme Coupar
    @obmarg
    Ah, I see, you were using smn’s fork
    That makes sense - the resolve auth call got added after that
    Graeme Coupar
    @obmarg
    So, it sounds like you’d like to be able to continue using the application config but also need GCP support locally?
    Don’t think that’s something we support right now, but if I were to provide an API something like this, would that work for you:
    Kazan.Server.from_app_env(:your_app) |> Kazan.Server.resolve_auth |> Kazan.Client.send
    Pretty sure I could get it to work for your in cluster and kubeconfig use cases
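If that proposal landed, a small wrapper might look like this. Purely illustrative — from_app_env and resolve_auth are the hypothetical functions proposed in the message above, not a confirmed API:

```elixir
# Hypothetical wrapper around the proposed API. None of these
# functions are confirmed to exist in this form yet.
defp run!(request) do
  server =
    Kazan.Server.from_app_env(:your_app)
    |> Kazan.Server.resolve_auth()

  Kazan.run!(request, server: server)
end
```

The idea being that callers keep using application config, while the GCP token resolution happens once per request inside the wrapper.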
    Akash
    @atomicnumber1
    yes,
    um... I'm sorry, I didn't understand how this API would work.
    Graeme Coupar
    @obmarg
    Sorry, I’m on a train on my phone so not the easiest to explain right now
    When you were on the fork, you could call Kazan.run directly and it would just work without providing a server
    Akash
    @atomicnumber1
    1. np, you can explain later
    2. yes
    Graeme Coupar
    @obmarg
    The problem I’m seeing is that you’d then need to write your own code to construct a Kazan.Server from config, which Kazan currently does for you
    So I’m proposing that I expose the code to construct a server from config as a function
    You’ll still need to call resolve_token on it, but it should make it easier
    Akash
    @atomicnumber1
    yeah, that makes sense. That'd be cool!
    Graeme Coupar
    @obmarg
    Cool, so I’m at work just now but have raised an issue on github to ensure I handle this. It’s a pretty small job so will hopefully get to it over the weekend
    rodesousa
    @rodesousa

    Hi,

    I'm trying to connect with cluster authentication, Kazan.Server.in_cluster()
    I get a good-looking structure in response

    %Kazan.Server{
      auth: #TokenAuth<...>,
      ca_cert: <<...>>,
      insecure_skip_tls_verify: nil,
      url: "https://XX.XX.XX:443"
    }

    but when I run a request I get this:

    ** (ArgumentError) argument error
        (stdlib) :ets.lookup_element(:hackney_config, :mod_metrics, 2)
        /source/deps/hackney/src/hackney_metrics.erl:27: :hackney_metrics.get_engine/0
        /source/deps/hackney/src/hackney_connect.erl:76: :hackney_connect.create_connection/5
        /source/deps/hackney/src/hackney_connect.erl:45: :hackney_connect.connect/5
        /source/deps/hackney/src/hackney.erl:329: :hackney.request/5
        lib/httpoison/base.ex:746: HTTPoison.Base.request/6
        lib/kazan/client/imp.ex:68: Kazan.Client.Imp.run/2

    not very explicit :/

    For debugging, I use cluster-admin as the service account. And when I curl the API with the token and ca_cert, it works
    kazan: 0.10.0

    rodesousa
    @rodesousa
    I found it. I forgot to start HTTPoison
    Graeme Coupar
    @obmarg
    Ah yeah, that’d do it @rodesousa - you could also make sure httpoison is started as part of your app's supervision tree in mix.exs
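For reference, making sure the httpoison application starts with yours is a one-line change in mix.exs (extra_applications is the Elixir ≥ 1.4 convention; older projects list it under applications instead):

```elixir
# mix.exs — ensure :httpoison is started before any Kazan calls.
def application do
  [
    extra_applications: [:logger, :httpoison]
  ]
end
```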
    Akash
    @atomicnumber1
    @obmarg Thank You!
    Praki Prakash
    @MonadicT
    I have a requirement to connect to multiple K8s servers. Is it possible to provide ca_cert content directly instead of a system file path when calling Kazan.Server.from_map?
    Graeme Coupar
    @obmarg
    @MonadicT I believe that’s how it works by default
    Praki Prakash
    @MonadicT
    @obmarg Thanks for replying and indeed it works. I was passing the cert file path which caused warnings from SSL library. However, passing in the result from pem_decode works as expected and no more warnings :)
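For anyone hitting the same warnings, a sketch of that pem_decode step, assuming ca_cert accepts the decoded entry from Erlang's :public_key (check kazan's Kazan.Server docs for the exact expected shape):

```elixir
# Sketch: decode PEM text into cert entries instead of passing a
# file path, avoiding the SSL library warnings mentioned above.
pem = File.read!("ca.crt")
[{:Certificate, der_cert, :not_encrypted} | _] = :public_key.pem_decode(pem)
# der_cert is the DER-encoded certificate to supply as ca_cert content.
```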
    Graeme Coupar
    @obmarg
    👌
    Curtis Schiewek
    @cschiewek
    :wave: I can't get any watch requests to work. They all time out. I've tested against a GKE cluster at 1.10.7, and a kops-deployed cluster at 1.10.5
    My GenServer looks like this:
    defmodule Watcher do
      @moduledoc """
      Documentation for Watcher.
      """
      use GenServer
    
      def start_link do
        Kazan.Apis.Core.V1.list_namespace!(watch: true)
        |> Kazan.Watcher.start_link(send_to: self())
      end
    
      ## Server Callbacks
      def init(:ok), do: {:ok, %{}}
    
      def handle_info(message, state) do
        IO.puts inspect(message)
        {:noreply, state}
      end
    end
    But when I try to start the GenServer it just times out
    I've also tried just Kazan.Apis.Core.V1.list_namespace!(watch: true) |> Kazan.run!, which times out as well
    The exact error is
    ** (EXIT from #PID<0.214.0>) shell process exited with reason: an exception was raised:
        ** (MatchError) no match of right hand side value: {:error, %HTTPoison.Error{id: nil, reason: :timeout}}
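One thing worth checking in the GenServer above: start_link never actually starts the GenServer, so send_to: self() points at the calling process rather than the server. A hedged sketch of the usual shape (the timeout itself may be a separate issue — HTTPoison's default recv_timeout is around 5s, and a quiet watch stream can easily exceed that; whether kazan exposes a timeout option here is worth checking in its docs):

```elixir
# Sketch: start the GenServer first, then start the watcher from
# init/1 so self() is the GenServer and watch events land in
# handle_info/2. Kazan.Watcher.start_link's exact return value is
# assumed to be {:ok, pid} — verify against the kazan docs.
defmodule Watcher do
  use GenServer

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, :ok, opts)
  end

  def init(:ok) do
    {:ok, _watcher} =
      Kazan.Apis.Core.V1.list_namespace!(watch: true)
      |> Kazan.Watcher.start_link(send_to: self())

    {:ok, %{}}
  end

  def handle_info(message, state) do
    IO.inspect(message)
    {:noreply, state}
  end
end
```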
    rodesousa
    @rodesousa

    Hi,

    I have a simple question: in lib/kazan/codegen/models/* there are from_oai_desc functions. When are those functions called?

    Graeme Coupar
    @obmarg
    @rodesousa they’re called at compile time when we’re parsing the Kazan open api specs
    Sorry, the k8s open api specs
    rodesousa
    @rodesousa
    And is it in lib/kazan/codegen/models.ex, with defmacro from_spec(spec_file), that all objects like Kazan.Apis.Core.V1.* are created? (I'm a beginner in Elixir ^^)
    Graeme Coupar
    @obmarg
    Yep
    Well, all the structs are created in there
    The API functions are generated in codegen/apis.ex
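To illustrate the general pattern being described (not kazan's actual code): an Elixir macro can read a spec at compile time and define one struct module per entry, which is roughly what from_spec does with the k8s OpenAPI spec:

```elixir
# Illustrative only — a tiny version of the compile-time codegen
# pattern, not kazan's implementation. The macro walks a spec (here a
# keyword list; kazan parses the k8s OpenAPI JSON) and defines one
# struct module per entry, all before any runtime code runs.
defmodule Codegen do
  defmacro from_spec(specs) do
    caller = __CALLER__.module

    for {name, fields} <- specs do
      quote do
        defmodule unquote(Module.concat(caller, name)) do
          defstruct unquote(fields)
        end
      end
    end
  end
end

defmodule Models do
  require Codegen

  # Pretend this keyword list came from parsing a spec file.
  Codegen.from_spec(Pod: [:name, :namespace], Namespace: [:name])
end

IO.inspect(%Models.Pod{name: "web"})
```

Because the macro expands while Models is compiling, Models.Pod and Models.Namespace exist before any runtime code executes — which is why from_oai_desc and friends only ever run at compile time.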
    rodesousa
    @rodesousa
    thx =)
    Bradley D Smith
    @bradleyd
    Anyone know how to execute a command on a pod with Kazan? I get back "Upgrade Required", which looks like it is expecting a websocket