Hey friends! I'm currently writing a traffic replay framework, using locust as the base.
My current issue is this:
I want to be able to spawn locust workers, have them make requests asynchronously, and then report back to the master. But I need them to NOT replay the same traffic. Essentially, I need a centralized entity that publishes events telling each worker which requests to make.
I thought the obvious choice there was to have the locust master manage the messaging for sending the requests.
So I have two questions:
1. Can I piggyback on the messaging the master already uses internally (the channel behind the slave_report event)? As far as I can tell that event only flows worker -> master, and I need master -> worker.
2. Or should I just stand up an external broker? Spinning up Redis (helm install stable/redis) isn't a huge deal. There's a minimal sketch of what I mean right after this list.
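For option 2, here's a rough sketch of the shape I have in mind, using the redis-py client; the channel name, payload format, and helper names are all made up:

import json
import redis  # redis-py client, talking to the broker from helm install stable/redis

r = redis.Redis(host="localhost", port=6379)

# Master side: publish one replay instruction per captured request.
def publish_request(method, url):
    r.publish("replay-requests", json.dumps({"method": method, "url": url}))

# Worker side: block on the channel and replay whatever arrives.
def consume_requests(handle):
    pubsub = r.pubsub()
    pubsub.subscribe("replay-requests")
    for msg in pubsub.listen():
        if msg["type"] == "message":
            handle(json.loads(msg["data"]))

One thing I realize with plain pub/sub: every subscribed worker gets every message, which is exactly the duplicate traffic I'm trying to avoid. For "each request goes to exactly one worker" semantics, a Redis list consumed with BLPOP (a work queue) would probably be the better fit.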
Side note on the request_success hook: its name argument carries either the custom request name or the URI, but not both. Anyway, here's the handler I use to push KPIs to Zabbix:

import subprocess
from locust import events

def kpi_zabbix_handler(request_type, name, response_time, response_length, **kw):
    print("Successfully fetched: %s in %s" % (name, response_time))
    key = name.replace(" ", "_")
    # One discovery row plus one measurement, fed to zabbix_sender via stdin ("-i -").
    payload = [
        "- kpiMeasurement.discovery { \"data\": [{ \"{#KPINAME}\": \"%s\" }] }" % key,
        "- kpiMeasurement[%s] %s" % (key, int(response_time)),
    ]
    with subprocess.Popen(["/usr/bin/zabbix_sender", "-c", "/etc/zabbix/zabbix_agentd.conf", "-i", "-"], stdin=subprocess.PIPE) as proc:
        proc.communicate("\n".join(payload).encode("utf-8"))  # write, close stdin, wait

events.request_success += kpi_zabbix_handler
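One caveat I'm aware of: in distributed mode request_success fires on the workers, not on the master, so zabbix_sender and its config have to be present on every worker box.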
Separate question: running with --master, is there a way to get the aggregated stats written out split into success_req_stats.csv + failure_req_stats.csv, or do I need to collect that myself?
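In case I do have to roll it myself, here's a rough sketch using the same event hooks; the file names match the ones above, and the column layout is just a guess at what I'd want:

import csv
from locust import events

# One row appended per request; note these hooks fire on each worker,
# so in distributed mode every worker writes its own local files.
success_log = open("success_req_stats.csv", "a", newline="")
failure_log = open("failure_req_stats.csv", "a", newline="")
success_csv = csv.writer(success_log)
failure_csv = csv.writer(failure_log)

def log_success(request_type, name, response_time, response_length, **kw):
    success_csv.writerow([request_type, name, response_time, response_length])
    success_log.flush()

def log_failure(request_type, name, response_time, exception, **kw):
    failure_csv.writerow([request_type, name, response_time, exception])
    failure_log.flush()

events.request_success += log_success
events.request_failure += log_failure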
One last thing: my master keeps logging instance/INFO/locust.runners: Discarded report from unrecognized worker [server]. From what I can tell this happens when the master is restarted while the workers keep running, so they report under ids the new master never registered; restarting the workers makes it go away.