These are chat archives for azukiapp/azk

Apr 2016
Andreas Schmelas
Apr 08 2016 08:54
Hey, I need some help. I have a test project which deploys successfully to AWS EC2 using docker-deploy. Now I'm trying to mirror this in my "real" project and deploy to a fresh EC2 instance, but it fails to connect. ... If I choose to deploy to the EC2 instance I've used before (for my test project), it works! I'm a bit puzzled, as I can't remember doing anything at all to pre-set up my test EC2 instance - except for choosing the right ssh keypair, which is the same one I used previously!
By the way I'm using the latest AZK version 0.18.0
My azkfile looks like this:
/**
 * Documentation:
 */

// Adds the systems that shape your system
systems({
  'azk-craft2': {
    // Dependent systems
    depends: ["mariadb"],
    // More images:
    image: {"docker": "azukiapp/php-fpm:5.6"},
    // Steps to execute before running instances
    provision: [
      "composer install",
    ],
    workdir: "/azk/#{manifest.dir}",
    shell: "/bin/bash",
    wait: 45,
    mounts: {
      '/azk/#{manifest.dir}': sync("."),
      '/azk/#{manifest.dir}/craft/storage': persistent("./craft/storage"),
      '/azk/#{manifest.dir}/public/uploads': persistent("./public/uploads"),
      '/azk/#{manifest.dir}/vendor': persistent("./vendor"),
      '/azk/#{manifest.dir}/composer.lock': path("./composer.lock"),
      '/etc/nginx/sites-enabled/default': path('./craft.conf'),
      '/etc/nginx/sites-available/craft.conf': path("./craft.conf"),
    },
    scalable: {"default": 1},
    http: {
      domains: [
        // ...
      ],
    },
    ports: {
      // exports global variables
      http: "80/tcp",
    },
    envs: {
      // Make sure that the PORT value is the same as the one
      // in ports/http below, and that it's also the same
      // if you're setting it in a .env file
      APP_DIR: "/azk/#{manifest.dir}",
    },
  },

  'mariadb': {
    image: { 'docker': 'mariadb:10.1.10' },
    shell: '/bin/bash',
    wait: 25,
    mounts: {
      '/var/lib/mysql': persistent('mariadb_data'),
    },
    ports: {
      data: '3306:3306/tcp',
    },
    envs: {
      MYSQL_USER         : 'xxx',
      MYSQL_PASSWORD     : 'xxx',
      MYSQL_DATABASE     : 'xxx',
    },
    export_envs: {
      DATABASE_URL: 'mysql2://#{envs.MYSQL_USER}:#{envs.MYSQL_PASSWORD}@#{}:#{}/#{envs.MYSQL_DATABASE}',
      DB_HOST: '#{}',
      DB_PORT: '#{}',
      DB_NAME: '#{envs.MYSQL_DATABASE}',
      DB_USER: '#{envs.MYSQL_USER}',
      DB_PASS: '#{envs.MYSQL_PASSWORD}',
    },
  },

  deploy: {
    image: {"docker": "azukiapp/deploy"},
    mounts: {
      "/azk/deploy/src":          path("."),
      "/azk/deploy/.ssh":         path("#{env.HOME}/.ssh"),
      "/azk/deploy/.config":      persistent("deploy-config"),
    },
    scalable: {"default": 0, "limit": 0},
    envs: {
      HOST_DOMAIN:                "",
      REMOTE_HOST:                "",
      REMOTE_ROOT_USER:           "ubuntu",
      SSH_PRIVATE_KEY_FILE:       "aws",
    },
  },
});

Connecting to the remote EC2 instance via ssh - either in the azk deploy shell or from my OS X terminal - works just fine!
Andreas Schmelas
Apr 08 2016 09:01
But trying to deploy via azk deploy -vvvv results in:
PLAY ***************************************************************************

TASK [setup] *******************************************************************
fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "ERROR! SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue", "unreachable": true}

PLAY RECAP *********************************************************************
default                    : ok=0    changed=0    unreachable=1    failed=0
Both the "working" EC2 instance and the production EC2 instance are running "ubuntu-trusty-14.04-amd64-server".
Andreas Schmelas
Apr 08 2016 09:32
Hmm, looks like this is a bug in azk when using a different SSH_PRIVATE_KEY_FILE than the default one (id_rsa). I just added my ssh credentials to my .ssh/config file, restarted the azk agent, and now it works. And now I vaguely remember that I did similar things last time for my test project - the strange thing is, I later removed the credentials again and it kept working!
Can anybody confirm that this is an actual bug and not just some weird hiccup with my setup - if so, I'll file a bug report.
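(For reference, the workaround described above amounts to an entry in ~/.ssh/config along these lines - the host alias, address, and key filename here are placeholders, not the poster's actual values; only the "aws" key name and "ubuntu" user come from the Azkfile:)

```
# Tell ssh which key and user to use for the deploy target,
# so it does not fall back to the default ~/.ssh/id_rsa
Host my-ec2-instance
    HostName ec2-xx-xx-xx-xx.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/aws
```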
Apr 08 2016 14:07

And now I vaguely remember that I did similar things last time for my test project - the strange thing is, I later removed the credentials again and it kept working!

Hi @m9dfukc, that happens because azk just copies your ssh keys to its persistent folder. Since you are using the 0.18 version (this is cool), you can use the new azk info --filter=mounts. I haven't tested it yet, but you should see the deploy-config persistent folder. Please check the files inside this folder.
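(A sketch of that check - it assumes azk 0.18 is installed and the agent is running; the actual on-disk location of the deploy-config persistent folder is whatever azk info reports, not a fixed path:)

```
# List the mounts azk knows about for this Azkfile,
# including where the deploy-config persistent folder lives on the host
$ azk info --filter=mounts

# Then inspect the copied ssh keys inside the path reported above, e.g.:
$ ls -la <deploy-config path printed by azk info>
```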