Hi @cloudm2 , thanks for the heads up. Any chance you can cut a bugfix version containing the fix for the new apt cookbook?
Hi @cloudm2 , thanks for the release 0.8.1, could you please also push it to the supermarket? Cheers
It's odd but that happens to be controlled by an individual. We may however explore other options soon.
@cloudm2 hi! I'm here to help
@cloudm2 what's your supermarket account?
Assuming we can talk to @guilhemfr I can add you as a collaborator.
Thanks! I have asked guilhemfr to do that but he declined.
that's a problem
I asked him to make me a collaborator for the ceph supermarket cookbook but he declined to do so.
Can I ask why? That seems like something that should be run by Ceph the company, not an individual.
When I took over the project it appeared not to have been updated or used in a while; it was far behind the latest release cycle. We wanted to clean it up a bit and bring it up to the latest version. We reached out to everyone we knew of around the project and heard from most, but not guilhemfr, so we moved forward. This did not sit well with him (that was not our intent), so we halted to let Ceph/Red Hat make the decision, which is where things stand now. We're ready to move, but we want to make sure folks are happy too.
Ah OK, I guess I get that
I'm talking to our internal Community Engineering team too about this
Hello, I want to accept AWS4 signatures in my Ceph server; the Ceph version is 10.2.10. What should I change so it accepts Signature Version 4?
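For context on what "AWS4" involves: radosgw gained Signature Version 4 support in the Jewel (10.2.x) line, and it is normally the client that chooses which signature version to send. As an illustration of what a v4 signature is (a stdlib sketch, not Ceph-specific code), here is the SigV4 signing-key derivation, using the example secret key and credential scope published in the AWS documentation:

```python
# Illustrative sketch of AWS Signature Version 4 key derivation (stdlib only).
# The secret key, date, region, and service below are the example values from
# the AWS SigV4 documentation, not credentials for any real cluster.
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step of the SigV4 key-derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def derive_signing_key(secret: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: an HMAC chain over date, region, service."""
    k_date = _hmac_sha256(("AWS4" + secret).encode("utf-8"), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")


def sign(signing_key: bytes, string_to_sign: str) -> str:
    """Produce the final hex signature over the canonical string-to-sign."""
    return hmac.new(signing_key, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()


if __name__ == "__main__":
    key = derive_signing_key(
        "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",  # AWS docs example secret
        "20150830", "us-east-1", "iam")
    print(key.hex())
```

In practice you would not hand-roll this: an S3 client library such as boto3 can be told to sign with v4 (via `Config(signature_version='s3v4')`), and a 10.2.x radosgw should accept the resulting requests.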
it's good :|
Hello guys, I am trying a PoC with Ceph and OpenStack and was looking at your cookbook to deploy the Ceph environment.
The docs say only the Debian and Ubuntu OSes were tested; does somebody have information about this cookbook running on CentOS 7?
Hi all, I configured a Ceph cluster as a PVC for jupyterhub-k8s, and the Jupyter pod throws this error:
Warning FailedMount kubelet, ip-10-0-58-63.ap-southeast-1.compute.internal
MountVolume.SetUp failed for volume "pvc-81d53fb4-2699-11e8-af36-06a45990210c" : CephFS: mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/81d86df3-2699-11e8-af36-06a45990210c/volumes/kubernetes.io~cephfs/pvc-81d53fb4-2699-11e8-af36-06a45990210c --scope -- mount -t ceph -o name=kubernetes-dynamic-user-81ef691d-2699-11e8-bb7d-0a586013a,secret=AQBajaAA7eOnPlR7LI5gG3sEtt7y== 10.0.10.112:6789:/volumes/kubernetes/kubernetes-dynamic-pvc-81ef68d3-2699-11e8-bb7d-0a586460013a /var/lib/kubelet/pods/81d86df3-2699-11e8-af36-06a45990210c/volumes/kubernetes.io~cephfs/pvc-81d53fb4-2699-11e8-af36-06a45990210c
Output: Running scope as unit run-rf6411478a9b348c5a1716841365e5056.scope.
mount: 10.0.10.112:6789:/volumes/kubernetes/kubernetes-dynamic-pvc-81ef68d3-2699-11e8-bb7d-0a586460013a: can't read superblock
Anybody here worked with RBD?
Hello, everyone. I am having trouble setting up ceph-osd alongside other OpenStack components via Juju. I have a MAAS cluster with 10 nodes; each node has a single 120 GB SSD. When I deploy the ceph-osd and ceph-mon charms, ceph-osd inevitably ends up not finding any cluster that uses the appropriate configuration. Are there any prerequisites to running Ceph in this particular situation? Do I need to format or partition the SSDs in a specific way? I start the deployment from zero: the machines are not yet deployed, Juju manages the OS installation, and ceph-osd is the first charm deployed. I essentially follow this tutorial: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-openstack.html Thank you!