These are chat archives for spring-cloud/spring-cloud

13th
Oct 2015
Spencer Gibb
@spencergibb
Oct 13 2015 00:00 UTC
wahoo!
turick
@turick
Oct 13 2015 00:02 UTC
that was all you spencer :) thank you so much
so that helps a lot with tracing... but if i deploy jar files in a production environment and the output is written to a file.... how do you employ a strategy for rolling log files like log4j does? is it recommended to build wars and deploy to an external tomcat container? i can use nohup and redirect to a file, but what is the enterprise solution for centralized logging?
you have the full power of logback, log4j or log4j2. You can send the logs to /var/log and let your OS deal with them. whatever you want.
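Letting the OS handle rotation can be as simple as a logrotate rule. A minimal sketch, assuming the app writes to /var/log/myapp/ (path hypothetical):

```
# /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    copytruncate   # rotate in place, without asking the JVM to reopen the file
}
```

copytruncate matters here because a plain rename would leave the JVM writing to the rotated file.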
turick
@turick
Oct 13 2015 00:16 UTC
sorry for being so ignorant... it seems if i deploy a jar file, the only logging occurs at the console, and i have to redirect that on the command line to a file. i don't understand, even with the documentation you linked, how to replicate the behavior of a war file deployed to tomcat, where it creates a new log file per day per my config
Spencer Gibb
@spencergibb
Oct 13 2015 00:16 UTC
You can also set the location of a file to log to (in addition to the console) using "logging.file".
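In Boot 1.x that's a single property; a minimal sketch (file path hypothetical):

```yaml
# application.yml
logging:
  file: /var/log/myapp/app.log   # written in addition to the console output
```

As far as I know, logging.file alone doesn't give date-based rollover; for per-day files you'd still supply your own logback.xml with a time-based rolling policy.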
turick
@turick
Oct 13 2015 00:19 UTC
perfect! thanks :) can that be aggregated to a centralized logging server?
Spencer Gibb
@spencergibb
Oct 13 2015 00:19 UTC
anything you would do with your logfiles from tomcat
turick
@turick
Oct 13 2015 00:22 UTC
well that's the problem. with a monolith, you don't have to worry about it, because from tomcat you have centralized logging because everything is happening within the same app. i'm still wrapping my mind around the microservices concept. if all of those logs are maintained locally within each service it becomes difficult to track down what happened where. with sleuth, it helps track it, but how do you see the actual log files in a consolidated manner?
ccit-spence
@ccit-spence
Oct 13 2015 01:14 UTC
@turick Something I implemented was a Logback Redis Appender. For simplicity we then used a Docker ELK stack container and sent the logs to the ELK stack. So far for 6 months the solution has worked perfectly.
Dave Syer
@dsyer
Oct 13 2015 05:34 UTC
I think best practice is to stay "Cloud Native". Log to stdout only and have your platform aggregate that in a standard way. Anything else turns into a bunch of snowflakes.
hacbq
@hacbq
Oct 13 2015 08:39 UTC

Hi all, I'm using Zuul for my edge service. Here is my config

zuul:
  routes:
    product:
      path: /product/**
      stripPrefix: false
      serviceId: product

It's correct if url is : http://localhost:8765/products :smile:
if url is http://localhost:8765/products.json, Zuul can't forward to my service. :worried:
Please help me. Thank you :smile:

Pedro Vilaça
@pmvilaca
Oct 13 2015 10:59 UTC
@hacbq your problem is the `/` before the `**`
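If I read that right, the fix is dropping the slash so the pattern can also match /products and /products.json; a sketch of the adjusted route (same serviceId, only the path changes):

```yaml
zuul:
  routes:
    product:
      path: /product**     # no '/' before '**'
      stripPrefix: false
      serviceId: product
```

Worth verifying against your path matcher's semantics: /product/** only matches URLs that have a slash after the prefix, which is why /products.json fell through.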
turick
@turick
Oct 13 2015 12:44 UTC
@ccit-spence thanks for the ELK tip. i was looking into that some time ago. looks like it's time to revisit. @dsyer - thanks for introducing me to the term "Cloud Native" :) it's interesting advice to log to stdout only. would you consider the ELK stack "Cloud Native"?
Pedro Vilaça
@pmvilaca
Oct 13 2015 13:17 UTC
@turick there are a lot of ways to handle logs.. you can use Graylog2, Sumo Logic, Splunk, Loggly, ...
it depends on your needs/budget
turick
@turick
Oct 13 2015 14:54 UTC
trying to put this all together... i'd like to stay in the open source realm and have a centralized place to use the sleuth traces to see all of the log entries for a given request. if that's possible, it makes me question the need for zipkin though. zipkin shows the path a request took, but it seems like the ELK stack would as well, along with providing the actual log entries.
Dave Syer
@dsyer
Oct 13 2015 15:27 UTC
Indeed
Dave Syer
@dsyer
Oct 13 2015 15:32 UTC
You have to do a bit of analysis probably but you get potentially more information with raw logs.
Zipkin aims to be a bit more surgical.
The idea is for you to have a choice
turick
@turick
Oct 13 2015 16:03 UTC
excellent, thank you. i think i'm going to deploy the ELK stack.
ccit-spence
@ccit-spence
Oct 13 2015 17:13 UTC
@turick This is the docker container we used for Redis ELK. I did take the approach of considering old logs pointless to store. This means I don’t care if the docker container needs to be restarted and loses its information. https://hub.docker.com/r/leorowe/redis-elk/
You could add a storage point to the HD from the container. I just didn’t see the point
This is the Logback Redis Appender https://github.com/kmtong/logback-redis-appender
Takes a few logback config files to get this working
We use tiny HD sizes for our instances. That being the case I send all logs to the appender so the HD does not get full. I turn off local logs.
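For reference, the wiring is roughly this; the appender class and property names here are from my recollection of that project's README, so verify against the repo before copying:

```xml
<!-- logback.xml: ship log events to Redis for Logstash to consume -->
<configuration>
  <appender name="REDIS" class="com.cwbase.logback.RedisAppender">
    <host>redis.example.com</host>  <!-- hypothetical Redis host -->
    <port>6379</port>
    <key>logstash</key>             <!-- the list key Logstash reads from -->
  </appender>
  <root level="INFO">
    <appender-ref ref="REDIS" />    <!-- no file/console ref: local logging off -->
  </root>
</configuration>
```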
Marcin Grzejszczak
@marcingrzejszczak
Oct 13 2015 17:25 UTC
If you wanted ansible scripts to provision elk stack: https://github.com/microservice-hackathon/infrastructure
turick
@turick
Oct 13 2015 19:50 UTC
thanks guys... that's great stuff and should make implementation very easy