wanted assistance in using the cookbooks to set up a federated gateway
which will have 1 region and 2 zones using the same storage cluster
@jsuchome @hufman ... can you guys also help me out?
Sorry @akshatknsl, I never used the radosgw part of that cookbook, so I don't have experience with it. We used that cookbook for tests with Ceph as the backend for our OpenNebula/KVM infrastructure. Hope someone can help you with the radosgw.
I've made a pull request, #154, to help set up radosgw with different pool settings. However, there isn't support for actually creating the pools yet.
Additionally, the cookbook doesn't yet support running the federation daemon, because I haven't figured that out yet.
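In the meantime the pools can be created by hand; roughly something like this (pool names and pg counts are illustrative, adjust for your region/zone layout):

```
# illustrative pool names and pg counts -- adjust for your region/zone layout
for pool in .rgw.root .rgw.control .rgw.gc .rgw.buckets .rgw.buckets.index; do
    ceph osd pool create "$pool" 64
done
```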
I'm having issues with using the ceph-cookbook to bootstrap a new cluster
It seems when I'm setting up the first monitor, I get stopped at the point where it's trying to get the key for client.bootstrap-osd
I don't remember the exact output of the command when run manually, but I believe it was an issue of not being able to authenticate
So I manually added a client.admin user/key, and with that I was able to run ceph auth list, which didn't show a client.bootstrap-osd
Adding subsequent monitor nodes required having the client.admin key in place for them to proceed past the same step.
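In case it helps anyone, recreating the missing bootstrap key by hand should look roughly like this (default paths assumed, and client.admin has to be in place first):

```
# recreate the bootstrap key the first monitor normally generates
ceph auth get-or-create client.bootstrap-osd mon 'allow profile bootstrap-osd' \
    -o /var/lib/ceph/bootstrap-osd/ceph.keyring
ceph auth list    # should now show client.bootstrap-osd
```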
Sergio de Carvalho
Hi, I'm having trouble deploying a Ceph cluster with the ceph-cookbook using an encrypted data bag (EDB) to store the monitor and OSD secrets (I can deploy a cluster just fine when I'm not using an EDB). This is what I'm doing: I've manually created 2 random secret keys and uploaded them to an EDB on my Chef server. I can then deploy the first node with a monitor using the mon secret stored in the EDB. However, once this node is deployed and the cluster is up, a bootstrap-osd key is automatically generated in the auth system. As a result, when another node of the cluster is deployed with an OSD daemon, the OSD secret stored in the EDB obviously won't match the one generated by the first node, and the OSD then fails to activate. The OSD recipe retrieves the OSD secret from the EDB, but I don't see how this secret ever gets imported into the cluster. I'd appreciate it if anyone could help me understand how the cookbook works with EDBs. Thanks!
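For reference, this is roughly how I generated and stored the two secrets (the bag and item names here are my own choice, not necessarily what the cookbook expects):

```
# generate two keys in the standard ceph format
ceph-authtool --gen-print-key    # monitor secret
ceph-authtool --gen-print-key    # osd bootstrap secret
# upload them to an encrypted data bag on the Chef server
knife data bag create ceph --secret-file /path/to/edb_secret
knife data bag from file ceph mon.json --secret-file /path/to/edb_secret
knife data bag from file ceph osd.json --secret-file /path/to/edb_secret
```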
Thanks, @tomzo. I have already deployed Ceph with the ceph-cookbook without an EDB, but security is a major concern for me right now and I don't want keys exposed in node attributes. I've read the code many times over but just can't see how it could possibly work once EDB is enabled. I'm actually wondering if anyone is using the ceph-cookbook with EDB.
Sergio de Carvalho
I've created a pull request with a change that makes the cookbook work for me with EDB: ceph/ceph-cookbook#201
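The gist of the change (simplified here, not the actual diff) is to register the EDB-stored bootstrap key in the cluster's auth database instead of letting the monitor generate a random one:

```
# import the keyring written from the EDB secret so OSD activation can match it
ceph auth add client.bootstrap-osd mon 'allow profile bootstrap-osd' \
    -i /var/lib/ceph/bootstrap-osd/ceph.keyring
```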
Hi, is this cookbook still maintained and if so, who is the current maintainer?
Hi @jklare, I'm the new maintainer. My name is Chris Jones and I'm with Bloomberg. I'm going to create a new 'wip-master' branch that will contain the updated Chef code for Chef 12+. Once the new branch is working well, I will archive the current master so folks can still get to it, and then replace the master branch with the wip-master branch. You should see a lot of new activity in the coming weeks.
Hi @cloudm2 , thanks for the heads up. Any chance you can cut a bugfix version containing the fix for the new apt cookbook?
Hi @cloudm2 , thanks for the release 0.8.1, could you please also push it to the supermarket? Cheers
It's odd but that happens to be controlled by an individual. We may however explore other options soon.
@cloudm2 hi! I'm here to help
@cloudm2 what's your supermarket account?
Assuming we can talk to @guilhemfr I can add you as a collaborator.
Thanks! I have asked guilhemfr to do that but he declined.
that's a problem
I asked him to make me a collaborator for ceph on the Supermarket, but he declined to do so.
Can I ask why? That seems like something that should be run by Ceph the company, not a person.
When I took over the project it appeared not to have been updated or used in a while; it was far behind the latest release cycle. We wanted to clean it up a bit and bring it up to the latest version. We reached out to everyone we knew of around the project and heard from most, but not guilhemfr, so we moved forward. This did not sit well with him (that was not our intent), so we halted to let Ceph/Red Hat make the decision, which is where it stands now. We're ready to move, but we want to make sure folks are happy too.
Ah OK, I guess I get that
I'm talking to our internal Community Engineering team too about this
Hello, I want my Ceph server to accept AWS4 signatures; the Ceph version is 10.2.10. How should I configure it to accept signature version 4?
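As far as I know, radosgw in Jewel (10.2.x) already accepts AWS v4 signatures out of the box; what usually needs changing is the client. For example, forcing the AWS CLI to sign with SigV4 against the gateway (the endpoint URL is a placeholder; 7480 is civetweb's default port):

```
# force SigV4 on the client and point it at the gateway
aws configure set default.s3.signature_version s3v4
aws --endpoint-url http://rgw.example.com:7480 s3 ls
```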
it's good :|
Hello guys, I am trying a POC with Ceph and OpenStack and was looking at your cookbook to deploy the Ceph environment.
The docs say only the Debian and Ubuntu OSs were tested; does anybody have information about this cookbook running on CentOS 7?
Hi all, I've configured a Ceph cluster as a PVC for jupyterhub-k8s, and the Jupyter pod throws this error:

```
Warning FailedMount kubelet, ip-10-0-58-63.ap-southeast-1.compute.internal
MountVolume.SetUp failed for volume "pvc-81d53fb4-2699-11e8-af36-06a45990210c" : CephFS: mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/81d86df3-2699-11e8-af36-06a45990210c/volumes/kubernetes.io~cephfs/pvc-81d53fb4-2699-11e8-af36-06a45990210c --scope -- mount -t ceph -o name=kubernetes-dynamic-user-81ef691d-2699-11e8-bb7d-0a586013a,secret=AQBajaAA7eOnPlR7LI5gG3sEtt7y== 10.0.10.112:6789:/volumes/kubernetes/kubernetes-dynamic-pvc-81ef68d3-2699-11e8-bb7d-0a586460013a /var/lib/kubelet/pods/81d86df3-2699-11e8-af36-06a45990210c/volumes/kubernetes.io~cephfs/pvc-81d53fb4-2699-11e8-af36-06a45990210c
Output: Running scope as unit run-rf6411478a9b348c5a1716841365e5056.scope.
mount: 10.0.10.112:6789:/volumes/kubernetes/kubernetes-dynamic-pvc-81ef68d3-2699-11e8-bb7d-0a586460013a: can't read superblock
```
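"can't read superblock" from the kernel CephFS client is pretty generic; a few things worth checking from the node itself (the mount target and credentials below are placeholders):

```
# the kernel client usually logs the real error here
dmesg | tail -n 20
# make sure the cephfs kernel module is available on the node
modprobe ceph && lsmod | grep ceph
# try the mount by hand with a known-good key, starting from the fs root
mount -t ceph 10.0.10.112:6789:/ /mnt -o name=admin,secret=<admin-key>
```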
anybody here worked with rbd?
Hello, everyone. I am having trouble setting up ceph-osd in conjunction with the other OpenStack components via juju. I have a MAAS cluster with 10 nodes. Each node has a single 120 GB SSD. When I deploy the ceph-osd and ceph-mon charms, ceph-osd inevitably ends up not finding any cluster that uses the appropriate configuration. Are there any prerequisites to running Ceph in this particular situation? Do I need to format or partition the SSDs in a specific way? I start the deploy from 0, machines are not deployed, juju manages the OS installation, and ceph-osd is the first thing to be deployed. I essentially follow this tutorial: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-openstack.html Thank you!
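One thing worth checking: the ceph-osd charm only claims disks listed in its osd-devices option, and with a single 120 GB SSD per node that disk is already holding the OS, so there may be nothing left for the charm to consume. A minimal sketch of the deploy (device path and unit counts are illustrative):

```
# osd-devices must point at spare block devices, not the OS disk
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd --config osd-devices='/dev/sdb'
juju add-relation ceph-osd ceph-mon
```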