Hey folks 👋
I'm using Sidekiq (4.2.10) and I have this configuration:
Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add MyModule::SidekiqLoggingMiddleware
  end
end
to add this SidekiqLoggingMiddleware as a middleware. This middleware relies on another class (FilterArguments):
module MyModule
  class SidekiqLoggingMiddleware
    def call(worker_class, msg, *)
      yield
    ensure
      filtered_args = FilterArguments.new(msg['args']).filter
      Rails.logger.info "Enqueued #{worker_class}##{msg['jid']} with args: #{filtered_args}"
    end
  end
end
This works fine, but in development, when I change some code, something (probably the hot reload) crashes and I get this error:
ArgumentError in Api::ApplicationPlansController#masterize A copy of MyModule::SidekiqLoggingMiddleware has been removed from the module tree but is still active!
and I need to restart my server.
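One workaround that may help (a sketch, assuming the class currently lives under app/ where Rails autoloads it): code reloading swaps out autoloaded constants, but Sidekiq's middleware chain still holds a reference to the old class. Loading the middleware from a path Rails does not reload keeps that reference valid. The file location below is an assumption:

# config/initializers/sidekiq.rb
# Assumes the middleware was moved to lib/, which Rails does not reload,
# so the chain's reference to the class survives code reloads.
require Rails.root.join("lib/my_module/sidekiq_logging_middleware").to_s

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add MyModule::SidekiqLoggingMiddleware
  end
end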
Hello, I've been encountering an unusual problem. Scheduled jobs are being removed from the scheduled set but never get pushed to their respective queue. Here's an example from my redis monitor logs:
$ cat redis.log.16 | grep -a -i dd9bb66082f6a4a34a03e9e1
1576007694.107665 [0 127.0.0.1:49956] "sadd" "b-IMhn1NRyOCqBYQ-jids" "dd9bb66082f6a4a34a03e9e1"
1576007694.107711 [0 127.0.0.1:49956] "zadd" "schedule" "1576007994.0583317" "{\"class\":\"WorkerClass\",\"args\":[72654,4611361],\"retry\":true,\"queue\":\"first_queue\",\"backtrace\":true,\"jid\":\"dd9bb66082f6a4a34a03e9e1\",\"created_at\":1576007694.058427,\"bid\":\"IMhn1NRyOCqBYQ\"}"
1576007996.291126 [0 127.0.0.1:49926] "zrem" "schedule" "{\"class\":\"WorkerClass\",\"args\":[72654,4611361],\"retry\":true,\"queue\":\"first_queue\",\"backtrace\":true,\"jid\":\"dd9bb66082f6a4a34a03e9e1\",\"created_at\":1576007694.058427,\"bid\":\"IMhn1NRyOCqBYQ\"}"
I've ruled out latency issues, since there are no connection errors in the Sidekiq logs or my Rails log. Is this a bug in Sidekiq? I noticed that the code for enqueuing scheduled jobs has this comment:
# We need to go through the list one at a time to reduce the risk of something
# going wrong between the time jobs are popped from the scheduled queue and when
# they are pushed onto a work queue and losing the jobs.
Is this just something "going wrong"? I'm running 10 Sidekiq processes, but this bug only seems to appear with scheduled jobs. Would appreciate any tips.
Sidekiq 6.0.3, Sidekiq Pro 5.0.1, Redis 5.0.7.
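For context, the open-source scheduler works roughly like the sketch below (a simplification, not the actual Pro code): due jobs are pulled from the schedule sorted set one at a time, and a job is pushed to its queue only if this process's zrem succeeded, so exactly one process enqueues it:

# Simplified sketch of the scheduled-job poller (assumed shape, not Sidekiq's exact code)
require "sidekiq"
require "json"

now = Time.now.to_f.to_s
Sidekiq.redis do |conn|
  loop do
    # Grab one job whose run-at score has already passed
    due = conn.zrangebyscore("schedule", "-inf", now, limit: [0, 1])
    break if due.empty?

    job = due.first
    # zrem acts as the claim: it succeeds only for the one process that
    # actually removed the member, and only that process pushes the job.
    if conn.zrem("schedule", job)
      Sidekiq::Client.push(JSON.parse(job))
    end
  end
end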
require "sidekiq/web"
# Basic authentication:
#
# require "kemal-basic-auth"
# basic_auth "username", "password"
Kemal.config do |config|
  # To enable SSL termination:
  #   ./web --ssl --ssl-key-file your_key_file --ssl-cert-file your_cert_file
  #
  # For more options, including changing the listening port:
  #   ./web --help
end
Kemal::Session.config.secret = "my_super_secret"
# Exact same configuration for the Client API as above
Sidekiq::Client.default_context = Sidekiq::Client::Context.new
Kemal.run
redis://:some-password@url:6379/0
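For reference, a URL in that form (redis://:password@host:port/db) is usually handed to Sidekiq like this; the configure blocks below are a sketch, not from the original message:

# Sketch: pointing both the server and the client at a password-protected Redis
Sidekiq.configure_server do |config|
  config.redis = { url: "redis://:some-password@url:6379/0" }
end

Sidekiq.configure_client do |config|
  config.redis = { url: "redis://:some-password@url:6379/0" }
end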
batch.jobs, but why does it need a batch block when it's already inside the overall batch block?
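For anyone following along, here's a sketch of the pattern in question (worker names are made up; based on the Sidekiq Pro batch API): jobs are attached to a batch only inside a batch.jobs block, so even when a job reopens its own batch it still needs the block to tell Sidekiq which pushes belong to that batch:

# Hypothetical workers, for illustration only
batch = Sidekiq::Batch.new
batch.jobs do
  ParentWorker.perform_async(42)
end

class ParentWorker
  include Sidekiq::Worker

  def perform(id)
    # Reopen the batch this job belongs to; `bid` is the current batch id.
    reopened = Sidekiq::Batch.new(bid)
    # The batch.jobs block delimits exactly which pushes join the batch,
    # which is why it is required even "inside" the overall batch.
    reopened.jobs do
      ChildWorker.perform_async(id)
    end
  end
end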
Hi there! Our company has been using Sidekiq to great effect.
By the way, I'm wondering how Sidekiq picks job data out of Redis.
As I understand it, Sidekiq pushes job data to Redis using commands like SADD and LPUSH, then several methods run and the poller is started.
Once the poller has started, does it get the job data via BRPOP while polling the jobs accumulated in Redis?
I'm confused about exactly how Sidekiq gets job data from Redis. 🥺
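For what it's worth, here is a simplified sketch of both sides (an approximation, not Sidekiq's actual processor code; the worker name and jid are made up): enqueuing is an LPUSH of the JSON payload onto a queue list, and each processor blocks on BRPOP, so jobs are handed to a waiting worker rather than polled:

require "sidekiq"
require "json"

# Push side: enqueuing is roughly an LPUSH of the JSON job payload
Sidekiq.redis do |conn|
  payload = JSON.generate("class" => "HardWorker", "args" => [1], "jid" => "abc123")
  conn.lpush("queue:default", payload)
end

# Fetch side: a processor blocks on BRPOP across its queues (here just one),
# waiting up to 2 seconds for a job to arrive
Sidekiq.redis do |conn|
  queue, payload = conn.brpop("queue:default", timeout: 2)
  if payload
    job = JSON.parse(payload)
    puts "Fetched #{job["class"]} from #{queue} with args #{job["args"].inspect}"
  end
end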