Vinay Dwivedi
@0biwanken0bi
@buger Awesome. Many thanks for getting back.
Christopher Bradshaw
@Christopher-Bradshaw
Looking at the rate-limiting options in gor, it appears there are two choices: 1) a percentage of captured traffic, and 2) a max requests per second. Would you be interested in a PR adding a third: a fixed requests per second? It would be similar to max requests per second, which according to https://github.com/buger/gor/wiki/Rate-limiting means "staging.server will not get more than X requests per second", except it would instead mean "staging.server will always get X requests per second". This would make testing much easier, as we could specify a request rate directly rather than hunt for it by tuning the percentage.
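For context, the two existing limiters being discussed are both written as a `|` modifier on the output address, per the Rate-limiting wiki page (host name hypothetical):

```
# Absolute limit: staging gets at most 10 requests per second
gor --input-raw :80 --output-http "http://staging.example|10"

# Percentage limit: staging gets roughly 50% of captured traffic
gor --input-raw :80 --output-http "http://staging.example|50%"
```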
Leonid Bugaev
@buger
Frankly, I made the "max" strategy because it was just much easier to implement, and in my view it should be replaced by a more advanced technique that maintains a stable RPS.
@Christopher-Bradshaw I guess it may require gathering some stats about existing traffic, to know the current RPS, and skipping requests in a more stable way based on that.
So yes, I would love to see such a PR. Just briefly explain how you want to implement it.
Christopher Bradshaw
@Christopher-Bradshaw
Thanks @buger. I will think about it and get back to you
Nikita
@nikitasherman
Hi,
I would like to ask if somebody has used gor as a "production" tool
for broadcasting important traffic to another production server, rather than as part of integration testing (replaying data to a test env)?
Leonid Bugaev
@buger
Hello! Gor does not guarantee that it will capture all requests, due to the nature of traffic interception, so I do not recommend relying on it for backups. It is definitely possible on low traffic, but when network usage is high, the chances of losing some packets grow.
I have some future plans to add a "proxy" mode, but that would mean putting Gor into your traffic pipeline. Do you think that is acceptable for your purpose?
Nikita
@nikitasherman
What we need now is something we hope not to keep for a long time, some kind of patch due to time constraints.
Losing packets/requests is the less acceptable part for us. If you are asking about the "pipeline" in general, it sounds like an acceptable solution.
Thank you for your answer!
Christopher Bradshaw
@Christopher-Bradshaw
We use gor on our production servers to store requests for later use in replay testing. These servers get > 1 million requests per day. Not quite what you are asking, since we are not forwarding to other servers, just to file, but it has performed without errors for us.
Eugene
@pegasd
Hey, guys! This is probably a stupid question, but if I write a request log using --input-raw :80 --output-file file.log --output-file-append, and then replay it to 2 separate instances using --input-file file.log --output-http, is it guaranteed that both instances will receive the exact same requests? Probably worth mentioning that the instance I'm getting the input from is under heavy load, which is why I'm writing the results to a log first.
It doesn't really matter to me whether I'll capture all of the requests. But it does matter that the other two instances I'm replaying requests to will receive the exact same ones.
Leonid Bugaev
@buger
@pegasd hello! Missed your message. Totally get what you mean, and yes: if you replay from the file, both servers receive exactly the same requests. Worth noting that this does not mean they receive exactly the same streams. For example, if in the original traffic the first user made 10 requests (in one TCP session) and the second user made 2 requests (in one TCP session), the GoReplay community version does not guarantee the replay will use 2 TCP sessions; most likely it will be something like 5 sessions with 2 requests each. GoReplay PRO handles this and replays them with exactly the same number of TCP sessions.
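The record-then-fan-out setup being discussed can be sketched as two commands (hosts and ports hypothetical):

```
# On the loaded instance: capture port 80 traffic to a file
gor --input-raw :80 --output-file requests.gor --output-file-append

# Later, replay the file; each --output-http receives a copy of every request
gor --input-file requests.gor \
    --output-http "http://instance1:8000" \
    --output-http "http://instance2:8000"
```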
Vincent Spiewak
@vspiewak
Hello, I want to split traffic to N servers. For instance, a POST on localhost would be POSTed to server1 & server2.
Do I have to launch a server beside gor, or can gor do this alone?
Leonid Bugaev
@buger
@vspiewak it depends. You can just add multiple --output-http args and it will duplicate requests to every output.
worldtiki
@worldtiki
Hey @buger, I raised this PR to bump the version of gor to 16.1 in the Dockerfile:
buger/goreplay#532
Marcin Kuźmiński
@marcinkuzminski
Hi there, gor is an awesome tool! But I've got a question: is it possible to replay captured operations from a file to an endpoint at a certain concurrency, e.g. 20x at once?
Marcin Kuźmiński
@marcinkuzminski
Or rather, it's a bit unclear how --input-file-loop works together with rate limiting.
Leonid Bugaev
@buger
@marcinkuzminski when you replay from a file you can set the replay rate as a percentage, like this: --input-file "requests.gor|2000%", so it will be 20x the speed. Note that this is not concurrency, but faster replay. If you really need concurrency, I would recommend just adding as many --input-file arguments as you want.
Marcin Kuźmiński
@marcinkuzminski
@buger ahh, thanks for that explanation. So it means if I want 10x concurrency I add the --input-file argument 10 times?
@buger, what exactly does "faster replay" mean?
Sorry, this isn't clear to me ;(
Leonid Bugaev
@buger
So basically, if you had 2 requests in your file and the time period between them was 100ms, then if you set 2000% replay speed (20x), GoReplay will shorten that time by 20x as well, to 5ms; in other words, it will replay faster.
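The speed modifier described above, spelled out as a command (host name hypothetical):

```
# Replay requests.gor at 20x the recorded speed
# (inter-request gaps are divided by 20)
gor --input-file "requests.gor|2000%" --output-http "http://staging.example:8000"
```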
Marcin Kuźmiński
@marcinkuzminski
@buger thanks, this makes it clear. You should put that last sentence in the docs; it clears things up a lot :)
Mike Juarez
@mjuarez
Greetings! I'm working on creating some middleware to record HTTP responses (particularly image files). I'm trying to follow along with the docs and the proto spec, but I've been noticing some odd behavior with the response bodies. It looks like they might be output as framed chunks?
Besides the \r\n\r\n delimiter between headers and body, the body appears to be separated into chunks delimited by \r\n, plus some extra data.
Are there any more details or specs around that formatting?
Mike Juarez
@mjuarez
Also of note, looks like the Buy link for the Pro version doesn't work :-)
Leonid Bugaev
@buger

Also of note, looks like the Buy link for the Pro version doesn't work

Looks like the bug happens when you go to the PRO page from the home page; if you visit the PRO page directly, it works. Thank you for the report!

@mjuarez regarding the chunks you see: that is so-called HTTP chunked transfer encoding. GoReplay can transform it into one single chunk for you if you pass the --prettify-http option. Hope it helps!
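For anyone hitting the same framing: it is standard HTTP/1.1 chunked transfer encoding, where each chunk is a hex length line, CRLF, the data, CRLF, terminated by a zero-length chunk. A minimal illustrative decoder (dechunk is a sketch for understanding the format, not a GoReplay API):

```python
def dechunk(body: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked transfer-encoded body."""
    out = bytearray()
    pos = 0
    while True:
        nl = body.index(b"\r\n", pos)
        # The size line may carry extensions after ';' -- ignore them.
        size = int(body[pos:nl].split(b";")[0], 16)
        if size == 0:  # zero-length chunk terminates the body
            break
        start = nl + 2
        out += body[start:start + size]
        pos = start + size + 2  # skip the trailing \r\n after the data
    return bytes(out)

raw = (b"HTTP/1.1 200 OK\r\nTransfer-Encoding: chunked\r\n\r\n"
       b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")
headers, _, body = raw.partition(b"\r\n\r\n")
print(dechunk(body))  # b'Wikipedia'
```

In practice, passing --prettify-http as buger suggests avoids having to do this by hand.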
Mike Juarez
@mjuarez
It did indeed help! Many thanks
fanlushuai
@fanlushuai
gor cannot get the params of requests from the iOS client, but Android works well.
Has anybody had the same experience?
(two screenshots attached)
Leonid Bugaev
@buger
That's weird, for sure.
I wonder if it's because of different Accept-Encoding headers.
Can you try to validate this theory?
Rocky Breslow
@rbreslow
@buger What do the stats mean? output_http reports latest, mean, max, count, count/second, gcount. What are these metrics?
What is gcount, for example?
letao100
@letao100
@buger Hi Leonid, why does entering the command always just return the GoReplay version, such as "Version: 0.16.1"? Is that because there was no real traffic on the server at that time? Thanks.
letao100
@letao100
(screenshot attached)
letao100
@letao100
@buger I have the request file, but it's a log file with a .log suffix. Can I convert it to a requests.gor file and then replay it? I tried to find out how to change the file format, but haven't found anything so far. Could you give some advice, please? Thanks.
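One possible approach is to generate the .gor file by hand. As far as I can tell from files produced by --output-file, each payload starts with a header line "1 <id> <timestamp>" (1 = request), followed by the raw HTTP request, and payloads are separated by an emoji line; verify this against a file recorded by your own gor version before relying on it. A hypothetical sketch:

```python
import time
import uuid

# Payload separator observed in GoReplay's file format (verify for your version).
SEPARATOR = "\n🐵🙈🙉\n"

def make_payload(raw_request: str) -> str:
    # Header line: payload type (1 = request), unique id, timestamp in nanoseconds.
    req_id = uuid.uuid4().hex
    ts = int(time.time() * 1e9)
    return f"1 {req_id} {ts}\n{raw_request}"

# Raw HTTP requests reconstructed from your .log file (illustrative examples).
requests = [
    "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n",
    "GET /about HTTP/1.1\r\nHost: example.com\r\n\r\n",
]

with open("requests.gor", "w", encoding="utf-8") as f:
    f.write(SEPARATOR.join(make_payload(r) for r in requests) + SEPARATOR)
```

The resulting file could then be replayed with something like gor --input-file requests.gor --output-http "http://staging.example".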
Tanvir Hassan
@thassanauny_twitter

@buger Hi, I'm using gor to replay prod requests to a prelive env, but I see the rate of requests is less than 50%. Do you have any idea why?
This is the command: gor --input-raw :2801 --output-http abc.com:2801 --http-allow-method POST
RONAK BAID
@ronak319
@buger Is it possible to allow multiple URLs in a single command? https://github.com/buger/goreplay/wiki/Request-filtering. The wiki only shows single URLs.
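If I'm reading the flag handling right, --http-allow-url takes a regular expression and can be repeated, with each added filter OR-ed in; a single regexp with alternation should also work. Host and paths hypothetical:

```
# Repeat the flag for each pattern...
gor --input-raw :80 --output-http "http://staging.example" \
    --http-allow-url /api/v1 --http-allow-url /checkout

# ...or use one regular expression with alternation
gor --input-raw :80 --output-http "http://staging.example" \
    --http-allow-url "^/(api/v1|checkout)"
```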
RONAK BAID
@ronak319
@buger Does it support replaying HTTPS requests?