    j0n3
    @j0n3
    we want to make ad insertion easier for the user, so we want to suggest a category based on uploaded pics
    Traun Leyden
    @tleyden
    I'm planning to add an api for that, since it will be a popular use case
    very cool
    j0n3
    @j0n3
    you are great
    Traun Leyden
    @tleyden
    thanks!
    j0n3
    @j0n3
    ;)
    too many technologies involved
    I mean more than I already have to use
    I can't even imagine coding this freaking awesome stuff!
    high skills...
    I'm just a computer enthusiast, not an engineer :(
    Jesse Vander Does
    @FreakTheMighty
    j0n3, have you seen the image classification example using Python and Flask in Caffe's examples directory?
    j0n3
    @j0n3
    just landed a day or two ago...
    not yet
    do you know if the CamFind app is using Caffe?
    it works pretty nicely
    Jesse Vander Does
    @FreakTheMighty
    You should check that example out.
    It does what you're describing
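
A minimal sketch of trying that Flask demo, assuming a working Caffe checkout with pycaffe built; the scripts and paths below follow the demo's README and may differ slightly between Caffe versions:

    # run from the Caffe source root
    $ ./scripts/download_model_binary.py models/bvlc_reference_caffenet   # pretrained CaffeNet weights
    $ ./data/ilsvrc12/get_ilsvrc_aux.sh                                    # ImageNet labels and mean file
    $ pip install -r examples/web_demo/requirements.txt                    # Flask and friends
    $ python examples/web_demo/app.py                                      # classify-by-upload/URL page on port 5000

Opening http://localhost:5000 and uploading a picture returns the top predicted labels, which is essentially the category suggestion j0n3 is describing.
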
    j0n3
    @j0n3
    I will
    thank you!
    Traun Leyden
    @tleyden

    @j0n3 can you try from scratch?

    If you have nothing else on CoreOS, you can destroy everything with:

    $ cd ~/Vagrant/core-os
    $ vagrant destroy
    $ cd ~/Vagrant
    $ rm -rf core-os
    $ cd ~/.vagrant.d/boxes
    $ rm -rf coreos-alpha

    then start from scratch using these newly updated instructions:

    https://github.com/tleyden/elastic-thought#install-coreos-on-vagrant

    (it uses a newly pushed fork of coreos-vagrant)
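
The linked README is the authoritative source for the fresh install; for orientation, the usual coreos-vagrant flow looks roughly like this (the clone URL and any fork-specific steps here are assumptions, so defer to the link above):

    $ cd ~/Vagrant
    $ git clone https://github.com/coreos/coreos-vagrant core-os   # or the fork referenced in the README
    $ cd core-os
    $ cp user-data.sample user-data    # cloud-config for the nodes
    $ cp config.rb.sample config.rb    # set $num_instances, update channel, etc.
    $ vagrant up
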

    j0n3
    @j0n3
    I'm updating the box (vagrant box update) and starting from scratch :)
    let's see, I'll report back in a few minutes
    thank you so much!
    j0n3
    @j0n3
    sed -i '' 's/420/0644/' user-data
    you may want to remove the quotes after -i ;)
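
For context, sed's -i flag differs between the BSD and GNU implementations, which is why the quotes work on one platform and break the other:

    # BSD/macOS sed: -i takes a (possibly empty) backup suffix as a separate argument
    $ sed -i '' 's/420/0644/' user-data
    # GNU sed (Linux): the suffix is optional and must be attached to -i, so the empty
    # quotes are misread as the script and the command fails -- drop them:
    $ sed -i 's/420/0644/' user-data
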
    j0n3
    @j0n3
    It works at this point
    but it needed the sed replacements
    now fleetctl list-machines shows two nodes working
    j0n3
    @j0n3
    now from inside the core-01 VM I have downloaded and executed ./elasticthought-cluster-init.sh -v 3.0.1 -n 2 -u "user:passw0rd" -p cpu
    it is now downloading all the Docker images and repos
    j0n3
    @j0n3
    2015/03/27 10:32:45 Connect to etcd on localhost
    2015/03/27 10:32:45 verifyEnoughMachinesAvailable()
    2015/03/27 10:32:45 /verifyEnoughMachinesAvailable()
    2015/03/27 10:32:45 Failed: Found residue -- key: /couchbase.com/couchbase-node-state in etcd. You should destroy the cluster first, then try again.
    2015/03/27 10:32:45 Error invoking target: exit status 1
    Failed to kick off couchbase server
    core@core-01 ~ $ fleetctl list-units
    UNIT                            MACHINE                        ACTIVE    SUB
    couchbase_node@1.service        96533266.../172.17.8.102       active    running
    couchbase_node@2.service        eaca4bf6.../172.17.8.101       active    running
    couchbase_sidekick@1.service    96533266.../172.17.8.102       active    running
    couchbase_sidekick@2.service    eaca4bf6.../172.17.8.101       active    running
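
A hedged sketch of the cleanup that error message is asking for, using the unit names from the list-units output above (the etcd key path is the one named in the error; whether the tooling has a one-shot destroy command is not confirmed here, so this is the manual route):

    # destroy the leftover fleet units
    core@core-01 ~ $ fleetctl destroy couchbase_node@{1,2}.service couchbase_sidekick@{1,2}.service
    # clear the couchbase cluster state so the next run doesn't abort with the "Found residue" error
    core@core-01 ~ $ etcdctl rm /couchbase.com --recursive
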
    j0n3
    @j0n3
    I have destroyed those 4 units and run the cluster init script again:
    core@core-01 ~ $ ./elasticthought-cluster-init.sh -v 3.0.1 -n 2 -u user:passw0rd -p cpu
    true
    Kick off couchbase cluster
    2015/03/27 10:57:01 Update-Wrapper: updating to latest code
    github.com/tleyden/couchbase-cluster-go (download)
    github.com/coreos/fleet (download)
    github.com/coreos/go-systemd (download)
    github.com/tleyden/go-etcd (download)
    github.com/docopt/docopt-go (download)
    github.com/tleyden/couchbase-cluster-go
    github.com/tleyden/couchbase-cluster-go/cmd/couchbase-cluster
    github.com/tleyden/couchbase-cluster-go/cmd/couchbase-fleet
    github.com/tleyden/couchbase-cluster-go/cmd/sync-gw-cluster
    github.com/tleyden/couchbase-cluster-go/cmd/sync-gw-config
    2015/03/27 10:59:39 Connect to etcd on localhost
    2015/03/27 10:59:39 verifyEnoughMachinesAvailable()
    2015/03/27 10:59:39 /verifyEnoughMachinesAvailable()
    2015/03/27 10:59:39 Launch fleet unit couchbase_node (1)
    2015/03/27 10:59:39 response body: 
    2015/03/27 10:59:39 Launch fleet unit couchbase_sidekick (1)
    2015/03/27 10:59:40 response body: 
    2015/03/27 10:59:40 Launch fleet unit couchbase_node (2)
    2015/03/27 10:59:40 response body: 
    2015/03/27 10:59:40 Launch fleet unit couchbase_sidekick (2)
    2015/03/27 10:59:40 response body: 
    2015/03/27 10:59:40 Waiting for cluster to be up ..
    2015/03/27 10:59:40 Connect to etcd on localhost
    2015/03/27 10:59:40 FindLiveNode returned err: Error getting key.  Err: 100: Key not found (/couchbase.com/couchbase-node-state) [6019] or empty ip
    2015/03/27 10:59:40 Sleeping 10 seconds
    2015/03/27 10:59:50 FindLiveNode returned err: Error getting key.  Err: 100: Key not found (/couchbase.com/couchbase-node-state) [6043] or empty ip
    2015/03/27 10:59:50 Sleeping 20 seconds
    Traun Leyden
    @tleyden
    That looks normal
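
While the init script sits in that retry loop, progress can be checked from another shell on a cluster node; the key path is the one shown in the log, and how long it takes to appear depends mostly on how long the couchbase_node units take to pull their images and start:

    core@core-01 ~ $ etcdctl ls /couchbase.com --recursive        # the node-state key appears once a node registers
    core@core-01 ~ $ fleetctl list-units                          # couchbase_node@*.service should reach active/running
    core@core-01 ~ $ fleetctl journal couchbase_node@1.service    # node logs, if it looks stuck
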
    Traun Leyden
    @tleyden
    I need to clean up the output...
    @j0n3 did it get any further?
    Traun Leyden
    @tleyden
    Currently trying to "scale down" to make it easy to run on a developer workstation
    manyan.chen
    @manyan

    @tleyden I ran into the same situation as well:
    2015/04/29 10:36:42 FindLiveNode returned err: Error getting key. Err: 100: Key not found (/couchbase.com/couchbase-node-state) [5641] or empty ip
    2015/04/29 10:36:42 Sleeping 80 seconds.
    and it's still counting
    and below is the output from
    fleetctl list-units
    UNIT                            MACHINE                        ACTIVE      SUB
    couchbase_node@1.service        083cea09.../172.31.8.246       failed      failed
    couchbase_node@2.service        434ef145.../172.31.38.129      failed      failed
    couchbase_node@3.service        6a8d63b5.../172.31.29.29       failed      failed
    couchbase_sidekick@1.service    083cea09.../172.31.8.246       inactive    dead
    couchbase_sidekick@2.service    434ef145.../172.31.38.129      inactive    dead
    couchbase_sidekick@3.service    6a8d63b5.../172.31.29.29       inactive    dead

    any idea? or what should I do to work around it? cheers

    Traun Leyden
    @tleyden
    @manyan can you run these commands and post the output?
    $ fleetctl journal couchbase_node@1.service
    $ etcdctl ls /couchbase.com --recursive
    $ cat /etc/os-release
    also, it would be useful if you could post the full output from running the elasticthought-cluster-init.sh script in a gist.
    I'm currently working on scaling things down to make it as easy as possible to get a minimal setup running (far fewer dependencies and a much faster startup time)
    manyan.chen
    @manyan
    @tleyden I ran and destroyed the Docker cluster 3 times, and the 4th time I ran it, it was ok.... thanks for the reply. I will set up another cluster in the near future and will send you the log if it fails again
    Traun Leyden
    @tleyden
    ah ok, good to hear! which version of CoreOS are you running? cat /etc/os-release should tell you.
    manyan.chen
    @manyan
    @tleyden just hit another issue: with the whole cluster running, I can use the REST API like a charm, but not the Python client. I figure it must be the internal topology, nodes=['172.XXX:8091', '172.XXX:8091', '172.XXX:8091'], which are all private IPs from EC2. I'd like to check what the best practice is for this: should I change the config file within the Docker container, or...? cheers
    Traun Leyden
    @tleyden
    I would say the best practice is not to expose the Couchbase DB to the outside world; instead, have your Python client running in another Docker container within the same cluster.
    @manyan what are you trying to do exactly?
    It's not supported to talk directly to a Couchbase Server bucket if there is a Sync Gateway using that bucket.
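
A rough sketch of that "client next to the cluster" idea, run from any machine inside the same EC2 VPC so the 172.x addresses are routable; the image name, bucket name, and IP below are placeholders, and the client container also needs libcouchbase installed for the Python SDK:

    # on one of the cluster hosts, or any instance in the same VPC
    $ docker run -it --rm my-python-couchbase-client python
    >>> from couchbase.bucket import Bucket
    >>> cb = Bucket('couchbase://172.31.8.246/default')   # private IP works because we're inside the VPC
    >>> cb.upsert('smoke-test', {'ok': True})
    >>> print cb.get('smoke-test').value
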
    manyan.chen
    @manyan
    @tleyden We are migrating the whole stack into AWS, but we would like to set up a Couchbase cluster first and keep the rest in our own datacenter for now. So we would like the outside to talk to a centralised DB. Not sure whether the Sync Gateway would help; my current throughput is quite big :(
    manyan.chen
    @manyan
    @tleyden just checked Sync Gateway, and I don't think it's the cure for this situation, since it's not a mobile app and we have quite a big throughput, which is the reason we chose to put it on AWS, to make it easier to scale. Is there a way to open it to the outside world?