Nicolas
@nikosXY_gitlab
output
ERROR cmd/sentinel.go:1853 cannot update sentinel info {"error": "update failed: Pod \"stolon-sentinel-daemonset-f8sgd\" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image
any idea? :(
Lawrence Jones
@lawrencejones
Hey all! I've tagged Simone specifically as it's his issue, but feedback on a suggestion for cascading replication configuration would be great from anyone who has stake in this feature: https://github.com/sorintlab/stolon/issues/17#issuecomment-499396095
Nicolas
@nikosXY_gitlab
Hi, I have news. If I deploy the sentinel with a DaemonSet, I receive the error I posted yesterday; if I deploy the sentinel with a Deployment, it works fine and there are no errors. Any suggestions?

We have seen that the stolon project has a vendor folder with extensions for Kubernetes; are there options to integrate these features, or are they included automatically?

https://github.com/sorintlab/stolon/blob/master/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/daemonset.go

Simone Gotti
@sgotti
@nikosXY_gitlab stolon needs to update the pods' metadata when using the k8s API store. This isn't possible when using a DaemonSet. But I don't see a DaemonSet as the right fit for the keepers. Just use a StatefulSet with the right pod anti-affinity.
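For example, a minimal sketch of a keeper StatefulSet with hard anti-affinity so no two keepers land on the same node (the names, labels, and image tag are placeholders, and the container is trimmed down; the real keeper needs its usual command, env, and volumeClaimTemplates from the k8s example):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stolon-keeper
spec:
  serviceName: stolon-keeper
  replicas: 3
  selector:
    matchLabels:
      app: stolon-keeper
  template:
    metadata:
      labels:
        app: stolon-keeper
    spec:
      # hard anti-affinity: never schedule two keepers on the same node
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: stolon-keeper
            topologyKey: kubernetes.io/hostname
      containers:
      - name: stolon-keeper
        image: sorintlab/stolon:v0.13.0-pg10  # placeholder tag
EOF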
Nicolas
@nikosXY_gitlab
thank you Simone, we are trying a DaemonSet just for the sentinel and proxy
Do you have any suggestions about this? Could it be a good idea to use a StatefulSet for the sentinel and proxy, too?
Nicolas
@nikosXY_gitlab
we need a single sentinel and a single proxy on each node. Is there a way to get that with a Deployment?
Simone Gotti
@sgotti
@nikosXY_gitlab Please take a look at the k8s example and at the architecture doc. StatefulSets are needed only for the keepers, which must maintain a persistent volume. Sentinels and proxies are stateless, so a StatefulSet doesn't make sense and a Deployment/ReplicaSet is the right resource type.
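As an illustration, a rough sketch of a sentinel Deployment with soft anti-affinity so replicas prefer to spread across nodes (labels, names, and image tag are placeholders; the proxy would look the same). You don't need one sentinel per node, just more than one:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stolon-sentinel
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stolon-sentinel
  template:
    metadata:
      labels:
        app: stolon-sentinel
    spec:
      # soft anti-affinity: prefer spreading sentinels across nodes
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: stolon-sentinel
              topologyKey: kubernetes.io/hostname
      containers:
      - name: stolon-sentinel
        image: sorintlab/stolon:v0.13.0-pg10  # placeholder tag
EOF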
Nicolas
@nikosXY_gitlab
Thanks
Lee
@leev
Our etcd cluster that is solely in use for stolon keeps growing to the point of hitting the backend quota (8 GB) and then erroring. Has anyone else seen this before?
Lawrence Jones
@lawrencejones
Have you turned on compaction Lee?
Lee
@leev
Yep
Might look at switching to --auto-compaction-mode=revision
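For reference, a sketch of the relevant etcd knobs (the retention value is an example, not a recommendation):

# revision-based auto-compaction: retain only the latest 1000 revisions
etcd --auto-compaction-mode=revision --auto-compaction-retention=1000

# compaction frees old revisions but does not shrink the backend file on disk;
# defragment each member to actually reclaim the space
ETCDCTL_API=3 etcdctl defrag

# if the space quota was already exceeded, clear the raised alarm afterwards
ETCDCTL_API=3 etcdctl alarm disarm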
Alex
@SolomonAlex
Greetings to all! Strange question: I need to add a postgres parameter to the configuration [search_path = '"$user", common'] but [stolonctl update --patch] only adds [search_path = ',common']... How do I add a parameter containing quotes?
Alex
@SolomonAlex
I figured it out. Thanks all! You just need to escape the quotes.
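For anyone finding this later, the working shape is something like this (a sketch; the cluster name and store flags are placeholders for your own setup). The single quotes keep the shell from expanding $user, and the escaped \" pairs survive into the JSON:

stolonctl --cluster-name stolon-cluster --store-backend etcdv3 update --patch \
  '{ "pgParameters": { "search_path": "\"$user\", common" } }'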
Qasim Maqbool
@qasimmaqbool
Hi everyone. I'm observing some weird behavior where my keeper nodes always resync from scratch after a restart, i.e. the data directory gets completely emptied and then the pg_basebackup process copies over all of the data from the master. I'm running Stolon on Kubernetes with configmaps, and a snippet from the keeper logs upon starting up shows:
2019-06-18T10:14:20.067Z INFO cmd/keeper.go:1911 exclusive lock on data dir taken
2019-06-18T10:14:20.073Z INFO cmd/keeper.go:492 keeper uid {"uid": "keeper0"}
2019-06-18T10:14:50.083Z ERROR cmd/keeper.go:723 error retrieving cluster data {"error": "failed to get latest version of configmap: Get https://10.233.0.1:443/api/v1/namespaces/default/configmaps/stolon-cluster-test1: dial tcp 10.233.0.1:443: i/o timeout"}
2019-06-18T10:14:50.141Z ERROR cmd/keeper.go:641 cannot get configured pg parameters {"error": "dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory"}
2019-06-18T10:14:50.152Z INFO cmd/keeper.go:982 no db assigned
2019-06-18T10:14:52.642Z ERROR cmd/keeper.go:641 cannot get configured pg parameters {"error": "dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory"}
2019-06-18T10:14:55.142Z ERROR cmd/keeper.go:641 cannot get configured pg parameters {"error": "dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory"}
2019-06-18T10:14:55.160Z INFO cmd/keeper.go:1037 current db UID different than cluster data db UID {"db": "0643b44c", "cdDB": "19afbc17"}
2019-06-18T10:14:55.160Z INFO cmd/keeper.go:1190 resyncing the database cluster
2019-06-18T10:14:55.357Z INFO cmd/keeper.go:827 syncing from followed db {"followedDB": "4fbb9d56", "keeper": "keeper1"}
2019-06-18T10:14:55.358Z INFO postgresql/postgresql.go:824 running pg_basebackup
This suggests the configmap is not accessible in the beginning, even though it is eventually able to read and write configuration data. Can anyone point out where I'm going wrong? Thanks.
Qasim Maqbool
@qasimmaqbool
Edit related to my last post: I realize a full copy of the PGDATA directory from scratch is how pg_basebackup actually works. The only way to avoid this is to use pg_rewind. Has anyone here used it, and if so, what is the correct way to add this setting in my Kubernetes manifests? (I can't find anything in the Stolon docs other than stolonctl update --patch '{ "usePgrewind" : true }'.)
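For what it's worth, with the kubernetes store backend the patch looks roughly like this (the cluster name and resource kind are placeholders for your deployment):

stolonctl --cluster-name kube-stolon --store-backend kubernetes \
  --kube-resource-kind configmap update --patch '{ "usePgrewind": true }'

Note that pg_rewind itself only works if the cluster was initialized with data checksums or is running with wal_log_hints=on.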
Andrey A. Konovalov
@DRVTiny
Hello4All!
One short question: is it possible to migrate an existing master->slave setup to Stolon?
We don't need to re-initdb, but the keeper wants to do so
AMyltsev
@AMyltsev
@lawrencejones Hello, as far as I remember you added metrics to all components of stolon. Do you have a Grafana dashboard you could share with me?
Lawrence Jones
@lawrencejones
Hey! The dashboards we've been using are here:
If you boot the docker-compose of that project then you'll have a Prometheus and Grafana instance with all the dashboards loaded for you, available at localhost:3000 (I think; the readme should cover it)
Guillaume Philippe
@stormtrooper35_gitlab
Hi all. I've deployed stolon with the helm charts and it's working well. With one proxy/keeper/sentinel on each of several nodes, I've seen a problem: when the server hosting these components goes down, the stolon proxy keeps the old keeper pod address (found in the logs: INFO cmd/proxy.go:246 proxying to master address {"address": "10.233.99.169:5432"}) instead of the new one. Has anyone already had this issue (I don't know if it comes from the helm charts or from stolon directly)? Is there a specific configuration to activate? Thx
Guillaume Philippe
@stormtrooper35_gitlab
@sgotti Do you have an idea about this old address in the proxy? How can I inform the sentinel that the old UID is no longer active and the new one is?
Amath SARR
@Mattzr

Hi everyone !
I've been trying to install stolon via helm in GKE using my own supplied secrets (for superuser and repl) but for some reason I'm getting an error from the keeper:

2019-07-17 11:47:54.350 UTC [39] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-07-17 11:47:54.363 UTC [39] LOG: end-of-line before authentication method
2019-07-17 11:47:54.363 UTC [39] CONTEXT: line 1 of configuration file "/stolon-data/postgres/pg_hba.conf"
2019-07-17 11:47:54.363 UTC [39] LOG: invalid connection type "md5"

I have checked the file and the format is invalid; a carriage return gets added after the username:

local postgres su_username$
 md5$
local replication repl_username$
 md5$
host all su_username$
 0.0.0.0/0 md5$
host all su_username$
 ::0/0 md5$
host replication repl_username$
 0.0.0.0/0 md5$
host replication repl_username$
 ::0/0 md5$
host all all 0.0.0.0/0 md5$
host all all ::0/0 md5$
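(If it helps: one plausible cause, assuming the usernames come from Kubernetes secrets, is a trailing newline/carriage return baked into the secret value at creation time, e.g. via echo. A sketch of the difference, with placeholder names:)

# 'echo' appends a newline that ends up inside the secret value;
# printf writes the bytes exactly as given
printf '%s' "$SU_USERNAME" > username
kubectl create secret generic stolon-su --from-file=username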
thepotatocannon
@thepotatocannon
Hi everyone,
I'm relatively new to Kubernetes and all the components related to this topic. Could anyone provide me an in-depth explanation of all the components in this project? How does the proxy communicate with the keepers and sentinels, and how does the sentinel monitor the cluster view? Thanks a lot for replying :)
Simone Gotti
@sgotti
@stormtrooper35_gitlab I'm not sure I understand what you're asking, but if the keeper is down the sentinel will try to elect a new master. If the sentinel is down this won't happen, and if the sentinel cannot elect a new primary postgres the old primary address will also remain. Are you sure a new primary is elected? You should look at your sentinel logs to see what's happening.
@thepotatocannon stolon can work on k8s but its architecture is not related to k8s. You can find all the information here https://github.com/sorintlab/stolon/blob/master/doc/architecture.md or look at the code
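@stormtrooper35_gitlab also, to check which keeper the sentinels currently consider the master you can print the cluster status, something like this (a sketch; the cluster name and store flags are placeholders for your setup):

stolonctl --cluster-name stolon-cluster --store-backend etcdv3 status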
Alexandre Assouad
@t0k4rt
Hi everyone, I have a question about init mode: is it possible to change the init mode of a stolon cluster? I tried it and got an error: "Cannot change init mode". I want to add a new server to my stolon cluster, but instead of initializing it with pg_basebackup I want it to use my wal-e backups. Does anyone have any insights to help me solve this problem? Thanks a lot!
Guillaume Philippe
@stormtrooper35_gitlab
@sgotti Thanks for your help. In fact the replica count of all stolon components must be at least 2. When a keeper is down, the other is elected. In my case, the replica count was only one :(. If it can help someone, the official charts (they're not yours, I know ;)) work fine in HA, with persistence and with an init.
Simone Gotti
@sgotti
@t0k4rt initmode is for cluster initialization, not for new keeper initialization. What you want is described in #389.
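(For context, a full cluster reinitialization from wal-e backups would look roughly like this per the point-in-time-recovery docs; a sketch with placeholder cluster name and store flags, and note it re-inits the whole cluster, not a single keeper:)

stolonctl --cluster-name stolon-cluster --store-backend etcdv3 init '{
  "initMode": "pitr",
  "pitrConfig": {
    "dataRestoreCommand": "wal-e backup-fetch %d LATEST",
    "archiveRecoverySettings": {
      "restoreCommand": "wal-e wal-fetch \"%f\" \"%p\""
    }
  }
}'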
AixesHunter
@aixeshunter
Hi, I run a sidecar container named pg-agent in the keeper pod. I want to access this pg-agent container through the proxy; does the proxy support this situation?
Simone Gotti
@sgotti
@aixeshunter the proxy's role is to route only to the primary postgres instance (see the architecture doc). I'm not sure what pg-agent does, but you shouldn't use the proxy for this. Just use other discovery methods.
Vinod Gupta
@codervinod
Hi @sgotti, how is the standby stolon cluster notified when a slave is promoted in the master cluster, in the setup defined here: https://github.com/sorintlab/stolon/blob/master/doc/standbycluster.md
We are running into an issue: when a slave is promoted on the master cluster, a new timeline is created and the local slaves resync with the new master; however, the replicating master on the standby cluster doesn't get notified, which causes the timelines to get out of sync and the slaves to stop replicating. Appreciate your help.
I created this issue to get help on it: sorintlab/stolon#688
twelfthdoctor
@twelfthdoctor
Hi everyone. Is there any way to split read/write queries? I want to send read queries to the replicas and write queries to the master.
@sgotti
AixesHunter
@aixeshunter
@twelfthdoctor If you run stolon in k8s, you can create a Service resource to achieve this.
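For example, a sketch of such a Service (the selector label is an assumption about how your keeper pods are labelled; note that a plain selector like this also includes the current master among the read endpoints):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: stolon-keeper-read
spec:
  selector:
    component: stolon-keeper  # assumption: adjust to your keeper pods' labels
  ports:
  - port: 5432
    targetPort: 5432
EOF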
Francisc Balint
@franciscbalint
@twelfthdoctor Django offers a database router where you can set up replicas for read queries
Dmitri Moore
@demisx
Hi everyone. The stable/stolon helm chart installs pg10 by default. Can I install pg11 instead by simply setting the image tag? Something like this:
helm install stable/stolon \
  --name stolon \
  --set image.tag=v0.13.0-pg11
Кирилл
@TakT1_gitlab

Hi!

I'm using Stolon v0.14.0 with Postgres 11.

We have 2 clusters in different DCs. Today we promoted the standby (reserve) cluster in the second DC to master, in order to service the master cluster in the first DC.

Now we need to switch the cluster in the backup DC back to standby, but I did not find such an option in the current Stolon documentation.
Please tell me: what is the point of a standby cluster if there is no way to cycle it standby > master > standby?

Dmitri Moore
@demisx
Does anyone know if it's possible to deploy the Stolon helm chart in Kubernetes and point it to existing keeper volumes (e.g. EBS volumes in AWS)? I want to be able to reuse previous data after rebuilding the Kubernetes cluster. Currently, each time I install the Stolon helm chart it creates new keeper volumes and all the data is gone. Thanks.
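One approach that can work is to pre-create PersistentVolumes pointing at the existing EBS volumes and pre-bind them to the PVC names the chart's statefulset will request. A rough sketch (the volume ID, size, namespace, and PVC name are all placeholders/assumptions about your chart values):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: stolon-keeper-0
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0  # existing EBS volume (placeholder)
    fsType: ext4
  claimRef:
    namespace: default
    name: data-stolon-keeper-0  # assumed PVC name from the statefulset template
EOF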
DanielVaknin
@DanielVaknin
Hi,
Question: How can I create a Stolon cluster on a K8s cluster with multiple nodes (and therefore multiple keepers) while preserving an "old" postgres data directory that was exported from a postgres pod?
I mean that I want to find a way to take the entire postgres data directory from the old postgres server/pod, place it on one of the k8s nodes, then start stolon (initMode existing/new?), and have Stolon start with all the old data and replicate it to all the other pods running on other nodes
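If it helps, the existing init mode is meant for this kind of adoption: it takes the data directory already held by one keeper. A sketch (cluster name, store flags, and keeper UID are placeholders):

stolonctl --cluster-name stolon-cluster --store-backend kubernetes \
  --kube-resource-kind configmap init \
  '{ "initMode": "existing", "existingConfig": { "keeperUID": "keeper0" } }'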
DanielVaknin
@DanielVaknin
Alternatively, is there a way to force the placement of the master keeper pod onto a specific K8s node (with all the other keeper pods running on other nodes)?
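(A note on this: the sentinels decide which keeper is master, so the master role itself can't be pinned via the scheduler; you can only pin where keeper pods run. A sketch with placeholder names, which pins every pod of that statefulset, not just the master:)

kubectl patch statefulset stolon-keeper --patch \
  '{ "spec": { "template": { "spec": { "nodeSelector": { "kubernetes.io/hostname": "node-1" } } } } }'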