and not the actual request sent out
hi team, can anyone please help?
Eric Satterwhite
the 2nd-to-last bullet point says that Kong has to write every request + response to disk. Is this still accurate?
Avinash K
I ran into an issue with Kong migrations from 2.0.5 to 2.4.1. I did a blue-green deployment of Kong with a Postgres DB. After I ran "kong migrations up", I was able to point the 2.4.1 version of Kong Gateway at the same DB and access the already-created routes. But when I ran "kong migrations finish" to complete the migration process, it didn't complete. Even though "kong migrations finish" finished executing without any errors, I was still not able to fetch the existing Kong entities via the Kong 2.4.1 Admin API. Can anyone help with this issue?
Priyansh Jain
Hey folks!
I wanted to know how the Kong API gateway works with plugins.
For example, if I had a plugin that connects to Redis by creating a connection object, does this mean that a new connection object will be created for every request? Or does the Lua runtime handle it differently?
Team - we are running Kong on K8s backed by a Postgres DB. Does Kong hit the DB for every call from consumers to validate OAuth tokens and Basic Auth / Key Auth credentials?
Narendra Patel
Hi All, need some help around [Kong/kong#7608]. Any leads would be highly appreciated.
Team, I have one more query on how Kong interacts with postgres.
We have two DataCenters.
DC - A -> Kong on K8s with PG DB. We run master (read and write) PG and replica (read only) instance here.
DC - B -> Kong on K8s listening to master PG on DC -A. We also have one more replica (read only) here.
Both DCs are active, and to avoid latency in DC-B (since it would otherwise have to talk to the master in DC-A), we configured the pg_ro_host property in DC-B, alongside pg_host, to point at DC-B's replica for read-only calls. This helped us reduce the latency.
But if for some reason we bring down DC-B's replica, pg_ro_host becomes unreachable and that eventually brings down Kong itself, even though the master in DC-A is active.
So question is, can Kong switch to use pg_host property if the host configured in pg_ro_host is not reachable?
Rishabh Gupta
@Presto412: Redis connections are managed by the Redis driver you use within the plugin. If you use the standard OpenResty driver, lua-resty-redis, it keeps finished connections in a per-worker keepalive pool, which takes care of connection pooling for you.
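A minimal sketch of the usual lua-resty-redis pattern (not from the thread; names like `conf.redis_host` are placeholders): connect per request, then return the socket to the keepalive pool instead of closing it, so subsequent requests in the same worker reuse the connection.

```lua
-- access-phase sketch for a hypothetical plugin using lua-resty-redis
local redis = require "resty.redis"

local function query_redis(conf)
  local red = redis:new()
  red:set_timeout(1000)  -- 1s connect/send/read timeout

  -- connect() transparently reuses an idle connection from the
  -- keepalive pool when one is available for this host:port
  local ok, err = red:connect(conf.redis_host, conf.redis_port)
  if not ok then
    return nil, "failed to connect to Redis: " .. err
  end

  local value, get_err = red:get("some-key")

  -- return the connection to the pool (10s idle timeout, max 100
  -- pooled sockets) instead of calling red:close()
  red:set_keepalive(10000, 100)

  return value, get_err
end

return query_redis
```

So the connection object is created per request, but the underlying TCP socket is pooled per nginx worker; this only runs inside OpenResty/Kong, not plain Lua.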
Ben Cheung

Hi everyone, when using the request-transformer plugin, one of my headers whose value is "" (empty) is transformed to " " (a single space character).

Going into the source code, I found the following. My expectation is that the header value should not be changed when I set nothing in the header part.

in access.lua, line 248,

which calls set_headers() in request.lua, line 361:

request.set_headers = function(headers)

    if type(headers) ~= "table" then
      error("headers must be a table", 2)
    end

    -- Check for type errors first

    -- Now we can use ngx.req.set_header without pcall

    for k, v in pairs(headers) do
      if string_lower(k) == "host" then
        ngx.var.upstream_host = v
      end
      ngx.req.set_header(k, normalize_multi_header(v))
    end
  end

and then that calls normalize_multi_header() in checks.lua, line 13:

function checks.normalize_multi_header(value)
  local tvalue = type(value)

  if tvalue == "string" then
    return value == "" and " " or value
  end

  if tvalue == "table" then
    local new_value = {}
    for i, v in ipairs(value) do
      new_value[i] = v == "" and " " or v
    end
    return new_value
  end

  -- header is a number or boolean
  return tostring(value)
end
when the value is "", it will return " "

I have Kong running on K8s backed by a Postgres DB. We have a requirement where we need to build plugins in Go and JavaScript. Can someone please guide me on how to build a pluginserver for these languages in the above-mentioned setup?
Kong - 2.5.0 and KIC - 1.3.1
We have a requirement where we need to dynamically generate the backend service endpoint based on the incoming URI from the consumer. Any suggestions? Ours is Kong running on K8s with a Postgres DB.
I am trying to use the nokia/kong-oidc plugin to authenticate my users. We do not have usernames/passwords; we authenticate using certificates. Has anyone run into this use case? I am having some difficulty passing the certs to the Keycloak openid-token endpoint.
Pugazhendhi Thanikasalam

Broken pipe error with a large payload (223.86 KB)

When sending an HTTP request with a large payload, I get a broken pipe error in my JS plugin when reading the request body for transformation.

Postman ---------------------> Kong Gateway [ JS Plugin that decrypt the request ] ------------------------- > Micro Service

Code :
if (!this.isWhiteList) {
  let requestRaw = await kong.request.getRawBody()  // ------> exception here
}

Logs:
... server: kong, request: "POST /paotang/v1/registration/grant HTTP/1.1", host: "localhost:9000"
2021/07/29 20:43:57 [error] 44#0: *234 send() failed (32: Broken pipe), client:, server: kong, request: "POST /paotang/v1/registration/grant HTTP/1.1", host: "localhost:9000"
2021/07/29 20:43:57 [notice] 43#0: signal 17 (SIGCHLD) received from 51
2021/07/29 20:43:57 [error] 44#0: *234 [kong] mp_rpc.lua:308 [decrypt] broken pipe, client:, server: kong, request: "POST /paotang/v1/registration/grant HTTP/1.1", host: "localhost:9000"
2021/07/29 20:43:57 [notice] 43#0: *18 [kong] process.lua:258 external pluginserver 'js' terminated: exit 1, context: ngx.timer
2021/07/29 20:43:57 [notice] 43#0: *18 [kong] process.lua:248 Starting js, context: ngx.timer

Temur Saidkhodjaev
Hi! I'm trying to configure an anonymous consumer on Kong Enterprise in DB-less mode and use it as a fallback for the OpenID Connect plugin. I tried setting the consumer username in config.anonymous of the OIDC plugin, but it says it requires a UUID, which is unavailable before Kong is run. I tried doing the same through the Kong Admin API, but a DB is required to change the plugin config. It seems like issue Kong/kong#5551 was already fixed in Kong 2.0, but it doesn't work for me in 2.4. Any ideas/suggestions? Is anonymous access even supported in DB-less mode? Thanks.
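In case it helps, one approach that should work in DB-less mode is to declare the consumer with a fixed id in the declarative config and reference that same UUID from config.anonymous. A sketch (the UUID below is a placeholder you generate yourself, and the plugin/field names assume the Kong Enterprise OIDC plugin):

```yaml
_format_version: "2.1"

consumers:
  # choose any UUID yourself and keep it stable across deploys
  - username: anonymous
    id: 0b1f5bb0-0000-4000-8000-000000000000

plugins:
  - name: openid-connect
    config:
      # reference the consumer by the UUID declared above
      anonymous: 0b1f5bb0-0000-4000-8000-000000000000
```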
I get a log 2021/08/16 01:01:34 [error] 86#0: 21580818...
May I know the meaning of "86#0" and of the number that follows it?
Konstantin Smolyakov
Hi! Is there any simple way to run some mock HTTP endpoints when integration-testing plugins? E.g. my plugin accepts an HTTP URL in its config and I need to mock its endpoints during tests. I tried Mockbin but it seems a bit overkill and doesn't handle routing well. Thanks.
Hi All,
Can someone help with the regex for the "Request Transformer" plugin to replace the URI, so that a frontend request to
www.google.com/xyz/openapi is sent to the backend as
Hi all, we just noticed that the access log entry is missing in some cases when the upstream times out. I think the expectation here is that Kong should still log the request in the access log with a 504 status code. Has anyone noticed the same error? Is there a fix available?
2021/08/30 04:12:54 [error] 20932#0: *2003976052 upstream timed out (110: Connection timed out) while reading response header from upstream
I'm using the request-transformer plugin as well as the oauth2 plugin. I want to transform the body first, then have the oauth2 plugin use the transformations I add. How do I control the order of plugin execution?
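For context (not from the thread): in Kong 2.x, plugin execution order within a phase is determined by each plugin's static PRIORITY field, with higher values running earlier, so the usual workaround is a small custom plugin whose handler declares a higher priority than the plugin that must run after it. A minimal handler sketch with a hypothetical plugin name and priority value:

```lua
-- handler.lua of a hypothetical "early-transformer" custom plugin
local EarlyTransformer = {
  VERSION  = "0.1.0",
  -- pick a PRIORITY higher than the plugin that should run after us;
  -- Kong orders each phase's handlers by descending PRIORITY
  PRIORITY = 1100,
}

function EarlyTransformer:access(conf)
  -- the body/header transformation would go here,
  -- e.g. via the kong.service.request PDK functions
end

return EarlyTransformer
```

Reordering the bundled plugins themselves is not configurable in OSS Kong 2.x without patching their priorities.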
Hi all, I have a microservice with a WebSocket endpoint (ws) and a Kong gateway with an ingress using Let's Encrypt TLS.
A request to wss://api.example.com/subscription returns "Unexpected server response: 101".
What am I missing?
Ramy Abadlia
hello guys


I am currently playing around with the Kong API Gateway and I would like to use it to validate the authentication and the authorization of users at the gateway and restrict access to services if the user is not logged in properly.

I already have an existing Django authentication microservice which issues JWTs whenever a user logs in, but I can't share these tokens with the other microservices because each of them has its own database.

So I would now like to know: can I share the JWT secret produced by Django and use it with the JWT plugin of the Kong API gateway? Also, do I need to create a consumer for every user, and what are the right steps to achieve that?

Any hint would be highly appreciated.
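For what it's worth, a sketch of how this is usually wired up, in declarative-config style (the key and secret values are placeholders): create a consumer per user, or a shared one if you don't need per-user policies, and attach a JWT credential whose secret matches the one Django signs with and whose key matches the token's iss claim.

```yaml
_format_version: "2.1"

consumers:
  - username: alice

jwt_secrets:
  - consumer: alice
    # `key` must match the `iss` claim Django puts in the token
    key: django-auth-service
    # the HS256 secret shared with the Django signer (placeholder)
    secret: "change-me-shared-secret"
    algorithm: HS256
```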

Liu Chuan

Hi, I want to know: do I have to install Vagrant to run Kong's unit tests? I'm getting an error when trying to run the unit tests for Kong. My Kong version is 2.2.2, and I run this command: "busted --lpath=/usr/local/openresty/lualib/?.lua ./03-plugins/09-key-auth/01-api_spec.lua", but I got this error:
0 successes / 0 failures / 1 error / 0 pending : 0.007311 seconds

Error → /usr/local/share/lua/5.1/kong/tools/utils.lua @ 36

What do I have to do to get the unit tests to run properly? Can anyone help me?
@swapnilpotnis hi, have you found how to do that?
(strip part of the URI)

Hi all, I have a Kong Docker container used as a load balancer + proxy server, and my server application container is running on the same host. I am trying to create 200k WebSocket connections.

I am unable to achieve more than 85k connections when the server application and Kong containers are on the same host.
I am able to achieve 200k connections when I bypass Kong and establish direct connections to the server.
I am also able to achieve 200k connections when I deploy the Kong container on one host and my server application on a different host.
I have tried changing cache, send, and read timeout limits but nothing worked for me. I am curious what's limiting the number of connections through Kong when it runs on the same host as the server application, and which parameters need to be changed.

any help here would be appreciated, please

Jakub Kądzielawa

Hello :) I have two different Kubernetes clusters (EKS on AWS) with TEST/PROD environments. The Tomcat version, application configuration, and application version are the same in both environments.

However, on the TEST cluster I have a Classic Load Balancer in front of Kong (which is based on nginx), and for some reason SSE notifications are not working: requesting the specific URL closes the connection, because I am getting headers like Connection: close and the connection is not kept alive as expected, therefore I am not getting any SSE notifications.

On the PROD cluster (Application Load Balancer) everything works as expected: after requesting the URL the connection is kept open and notifications are received. Any ideas what's wrong with that?

Here is some of the nginx configuration in Kong:

location / {
    set $kong_proxy_mode 'http';

    proxy_http_version        1.1;
    proxy_buffering           off;
    proxy_request_buffering   off;
    proxy_cache               off;
    chunked_transfer_encoding off;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
}

Hi everyone. I'm using Kong without any response-size-limiting plugins.
When trying to download a response of over 1 GB, the connection abruptly cuts off and I get this error:
curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
Any ideas?
I am deploying an RTSP service behind Kong; "stream_listen" is configured and works fine.
I wonder how to make my custom plugin support "tcp": the schema only accepts "grpc, grpcs, http, https", and the plugin is not invoked at all.
Jakub Kądzielawa
Hello, is it possible to add Kong Routes or Services via some object in Kubernetes, or in some other programmatic way?
It would be great if we could add routes/services in some better way than via the Admin CLI or via Konga directly to the database.

Hello All,

select * from oauth2_tokens order by created_at desc limit 1;

id                   | 58676128-5f9b-485f-9822-7e8b56b73209
created_at           | 2021-10-21 10:02:05+00
credential_id        | e6768c02-8cf6-4175-8656-d606bfe28a55
service_id           |
access_token         | 5XitRZ114RXIiefI9uXGVCq7LoBoO013
refresh_token        |
token_type           | bearer
expires_in           | 7200
authenticated_userid |
scope                | phone
ttl                  | 2021-11-04 10:02:05+00
ws_id                | 71b2746d-815b-4cd5-8d3b-0b0281688e0c

The service_id field is empty and the token is not getting associated with a service. Can you please let me know the fix for this?

Hello, I am using Kong 2.1
in DB-less mode.
I'm suddenly getting a DNS resolver error: it's unable to resolve the upstream target.
Can anyone please help with a quick fix for it?
Harmohan Parmar
Can Kong serve a website (HTML/CSS/JS, etc.) as an upstream?
We are using Kong 2.2.2.
How do we support WebSockets in Kong? I searched for plugins but was not able to find one.
Please suggest how to support WebSockets.
Hi guys, is it possible to include the upstream service in Kong logs on Kubernetes?
Hi guys, do we have any solution for a Postgres master-slave setup? I have a requirement to scale the Postgres DB.
@tsn77130 Are you talking about the service endpoint (svc.servicename)?
Chris Castro
Hi, is there a way to make Kong reconnect/retry upstream connections while keeping client downstream connections alive?
Our backends use stream connections to communicate with clients; we would like to be able to redeploy these backends without interrupting client connections. Is that possible?


I am new to Kong. I am trying to deploy Kong API Gateway into my Kubernetes cluster in DB-less mode.

What I did was create a Dockerfile which I push to an ECR repo.

It looks like this:

FROM kong:2.5

ENV KONG_PROXY_ERROR_LOG=/dev/stderr \
    KONG_ADMIN_ERROR_LOG=/dev/stderr

Then I pull the image down from ECR in my deployment.yaml. I have loaded kong.yaml into a ConfigMap, but ran into this error:

nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:525: error parsing declarative config file /app/kong.yaml:
/app/kong.yaml: No such file or directory
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/kong/init.lua:525: in function 'init'
init_by_lua:3: in main chunk

Has anyone come across this?
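In case it's the usual cause: the declarative file has to actually exist inside the container at the path KONG_DECLARATIVE_CONFIG points to, which for a ConfigMap means mounting it there. A deployment.yaml sketch (resource names are placeholders):

```yaml
# excerpt: container spec + pod spec (sketch; names are placeholders)
env:
  - name: KONG_DATABASE
    value: "off"
  - name: KONG_DECLARATIVE_CONFIG
    value: /app/kong.yaml
volumeMounts:
  - name: kong-declarative
    mountPath: /app
volumes:                      # pod-level, a sibling of `containers`
  - name: kong-declarative
    configMap:
      name: kong-config       # ConfigMap holding the key `kong.yaml`
```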

1 reply

I just installed Kong API gateway and the Konga UI on Google Cloud. I configured a simple backend service and a UI page with a route to the outside; everything is working as expected. I have added a route and service for the deployed UI application.

My question is whether there is any plugin for Kong that can keep the original browser URL after a redirect.

For example:
A client sends a request to www.koko.example.com and Kong redirects them to www.fofo.example.com. What happens now is that the user sees www.fofo.example.com in the browser URL address line, and I want to keep the www.koko.example.com address there.

Is there any option to do it?