These are chat archives for getredash/redash

24th
Nov 2015
anthony
@Sigma-anthony
Nov 24 2015 09:27
Hi guys, having an issue where I'd set up redash and its dependencies in separate docker containers. It was working fine for a couple of weeks, then all of a sudden it became really slow, and the logs show it's looking for the redash postgres locally. I haven't changed anything in the setup files, and all my envs point to the right places. How else can I troubleshoot why redash looks locally for postgres instead of using the envs, which worked fine for a couple of weeks?
anthony
@Sigma-anthony
Nov 24 2015 09:49
manage.py check_settings also shows the right config as well
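For reference, one way to confirm what the running container actually sees (a sketch; the container name redash is a placeholder for whatever yours is called):
```
# Show the redash-related env vars inside the running container.
# REDASH_DATABASE_URL should point at the postgres container
# (e.g. postgresql://user:pass@postgres:5432/redash), not at localhost.
docker exec redash env | grep REDASH

# Re-run the settings check from inside the container as well,
# since the shell on the host may have a different environment:
docker exec redash ./manage.py check_settings
```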
Arik Fraimovich
@arikfr
Nov 24 2015 10:06
@gabrielcrowdtilt true. I'll probably package the final one soon.
@Sigma-anthony can you share the logs?
anthony
@Sigma-anthony
Nov 24 2015 10:16
sure
```
[2015-11-24 09:35:29,430: ERROR/MainProcess] Task redash.tasks.refresh_schemas[cda1a24f-d0f9-4347-833e-b9734193c1b9] raised unexpected: OperationalError('could not connect to server: Connection refused\n\tIs the server running on host "localhost" (::1) and accepting\n\tTCP/IP connections on port 5432?\ncould not connect to server: Connection refused\n\tIs the server running on host "localhost" (127.0.0.1) and accepting\n\tTCP/IP connections on port 5432?\n',)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/opt/redash/current/redash/tasks.py", line 24, in __call__
    return super(BaseTask, self).__call__(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 437, in __protected_call__
    return self.run(*args, **kwargs)
  File "/opt/redash/current/redash/tasks.py", line 231, in refresh_schemas
    ds.get_schema(refresh=True)
  File "/opt/redash/current/redash/models.py", line 294, in get_schema
    schema = sorted(query_runner.get_schema(), key=lambda t: t['name'])
  File "/opt/redash/current/redash/query_runner/pg.py", line 93, in get_schema
    results, error = self.run_query(query)
  File "/opt/redash/current/redash/query_runner/pg.py", line 116, in run_query
    _wait(connection)
  File "/opt/redash/current/redash/query_runner/pg.py", line 34, in _wait
    state = conn.poll()
OperationalError: could not connect to server: Connection refused
    Is the server running on host "localhost" (::1) and accepting
    TCP/IP connections on port 5432?
could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting
    TCP/IP connections on port 5432?
```
it's basically a long repetition of this error
Arik Fraimovich
@arikfr
Nov 24 2015 10:20
You have a data source configured to use local postgres. It has nothing to do with the database it uses for storing its metadata.
It also probably has nothing to do with the slowness.
anthony
@Sigma-anthony
Nov 24 2015 10:21
but how can I track down which data source is causing this?
when you say data source, you mean one of the data sources you add in the redash front end?
Arik Fraimovich
@arikfr
Nov 24 2015 10:22
yes
just check which one is defined to use local postgres
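One way to check, if the UI doesn't make it obvious, is to query the metadata database directly (a sketch; host, user, and database name are placeholders, and it assumes the default data_sources table):
```
# The options column stores each data source's connection details
# (host, port, dbname as JSON), so anything pointing at localhost
# will show up here:
psql -h postgres -U redash -d redash \
    -c "SELECT id, name, type, options FROM data_sources;"
```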
anthony
@Sigma-anthony
Nov 24 2015 10:25
Well I actually removed all the data sources earlier on
I was trying to figure out if one of my data sources was causing the newfound slowness, so I removed them and have no data sources configured
Arik Fraimovich
@arikfr
Nov 24 2015 10:31
what do you mean by "slowness"?
anthony
@Sigma-anthony
Nov 24 2015 10:36
well, if you recall, I said earlier my redash setup was running fine for a couple of weeks, then just became slow; when I checked the logs I found the error above
Arik Fraimovich
@arikfr
Nov 24 2015 10:36
but what became slow - the time to load the UI? time to run a query? time to load query results?
anthony
@Sigma-anthony
Nov 24 2015 10:36
time to load the ui and queries
and the thing that's most confusing is that no changes were made to the underlying setup when this happened; no new data sources were added
Arik Fraimovich
@arikfr
Nov 24 2015 10:41
the last time things were slow for you, was due to lack of resources to the postgres container. is it possible it's happening again?
anthony
@Sigma-anthony
Nov 24 2015 10:44
no, I don't think so. The last time it was slow it was a bower issue, and it was running wonderfully fast until a few days ago. To rule it out I'll double the RAM on the postgres container and add another half CPU, but the issue is more that all of a sudden redash was looking locally for postgres, when manage.py check_settings, .env, and running env all point to the postgres and redis containers. Even after removing all the data sources.
I also start redash with gunicorn so that it reads the envs
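For reference, a sketch of how those limits can be set when recreating the postgres container (image tag and values are hypothetical):
```
# Double the memory cap and raise the relative CPU weight;
# docker of this era uses --memory and --cpu-shares:
docker run -d --name postgres \
    --memory="2g" \
    --cpu-shares=1536 \
    postgres:9.4
```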
Arik Fraimovich
@arikfr
Nov 24 2015 11:20
do you have any monitoring on the containers, to see CPU / RAM usage?
anthony
@Sigma-anthony
Nov 24 2015 12:45
yes, I can view cpu/ram usage for the containers
postgres:
```
CONTAINER           CPU %               MEM USAGE/LIMIT     MEM %               NET I/O
98fa26a8f0ef        0.01%               40.2 MB/1.074 GB    3.74%               2.799 MB/1.891 MB
```
and for redash:
```
CONTAINER           CPU %               MEM USAGE/LIMIT     MEM %               NET I/O
448897e20c0a        0.8%                0 B/2.147 GB        0.00%               4.564 kB/3.944 kB
```
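(For anyone reproducing this, output like the above comes from docker stats, e.g.:)
```
# Live per-container CPU / memory / network usage; Ctrl-C to stop.
docker stats 98fa26a8f0ef 448897e20c0a
```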
Arik Fraimovich
@arikfr
Nov 24 2015 12:51
strange, 0B memory usage?
anthony
@Sigma-anthony
Nov 24 2015 13:01
sorry, just saw that the instance wasn't up yet, was restarting.
I'm gonna merge with the latest master on github and rebuild the image. I'll come back if I get the same error, or if I manage to figure out what was causing the issue where redash looks for postgres locally despite all the settings to the contrary.
Arik Fraimovich
@arikfr
Nov 24 2015 13:04
you still see that message in the log (with a recent timestamp) even after removing the data source?
because it has nothing to do with your settings.
and I suggest you understand why it's slow before changing the image
because otherwise you're just adding another change which will make it harder to understand what happened