`len(connected_websockets)` would indicate how many connected websockets you have
@pgjones sorry for the late reply, but here's my code:
```python
from quart import Quart, websocket
from functools import wraps
import asyncio

app = Quart(__name__)
connected_websockets = set()

def collect_websocket(func):
    @wraps(func)
    async def wrapper(*args, **kwargs):
        global connected_websockets
        queue = asyncio.Queue()
        connected_websockets.add(queue)
        print(connected_websockets)
        try:
            return await func(queue, *args, **kwargs)
        finally:
            connected_websockets.remove(queue)
    return wrapper

async def broadcast(message):
    for queue in connected_websockets:
        queue.put_nowait("New connection")

@app.websocket('/')
@collect_websocket
async def ws(queue):
    while True:
        data = await websocket.receive()
        print(data)
        await broadcast(data)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```
You can ignore the host="0.0.0.0", that's just there because it's on replit.
"New connection"
(ignoring the data) on a queue, but you never read the data from the queue. The connected_websockets
set will contain a queue for each websocket connection though. What are you hoping will happen?
Hi, everyone.
I've just started using Quart for my tests. I'm using this simple code:

```python
def start_app():
    from keyboard import press
    import quart

    app = quart.Quart(__name__)

    @app.route("/api", methods=["POST"])
    async def json():
        return {"hello": "quart world"}

    app.run(host='myip', port=5000)

start_app()
```
and it's working fine, except that I always get these unwanted messages when it starts:
```text
- Serving Quart app 'main'
- Environment: production
- Please use an ASGI server (e.g. Hypercorn) directly in production
- Debug mode: False
- Running on http://myhost:5000 (CTRL + C to quit)
[2021-04-05 18:19:07,784] Running on http://myhost:5000 (CTRL + C to quit).
```
Can anyone please tell me how to disable them?
I've tried

```python
logging.getLogger('werkzeug').setLevel(logging.CRITICAL)
logging.getLogger('app.serving').setLevel(logging.CRITICAL)
logging.getLogger('quart.serving').setLevel(logging.CRITICAL)
logging.getLogger('app.serving').disabled = True
logging.getLogger('quart.serving').disabled = True
logging.getLogger('app.serving').propagate = False
logging.getLogger('quart.serving').propagate = False
```
nothing helps
thank you, it worked in combination with
```python
sys.stdout = open(os.devnull, 'w')
sys.stderr = open(os.devnull, 'w')
```
but I actually wanted to start this function in the background via this code:

```python
import multiprocess as mp

proc = mp.Process(target=start_app, args=())
proc.start()
```

and that still doesn't work, because besides those lines I also get an empty line in the console (the app waiting for requests, I guess).
Is there a way to start/stop the quart app in background?
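Incidentally, reassigning `sys.stdout`/`sys.stderr` swallows all later output for good. A slightly safer variant (a standard-library-only sketch) scopes the redirection with `contextlib`, so the streams come back once the call returns:

```python
import contextlib
import os

def run_silently(fn, *args, **kwargs):
    # Temporarily send stdout/stderr to os.devnull, restoring both afterwards
    with open(os.devnull, "w") as devnull, \
         contextlib.redirect_stdout(devnull), \
         contextlib.redirect_stderr(devnull):
        return fn(*args, **kwargs)

result = run_silently(print, "this line is swallowed")
```

In the background-process case, the child could call `run_silently(start_app)` as its target so the stray console output stays out of the parent's terminal.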
```python
import mock_service_app
import multiprocessing as mp

proc = mp.Process(target=mock_service_app.start_app, args=())
proc.start()
```

```python
proc = mp.Process(target=start_app, args=(), initializer=mute)
```
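On starting/stopping in the background: a `Process` can be stopped explicitly once you are done with it. A minimal sketch with a stand-in target (in the real case the target would be `start_app`, or a silenced wrapper around it):

```python
import multiprocessing as mp
import time

def serve_forever():
    # Stand-in for start_app(); the real target would call app.run(...)
    while True:
        time.sleep(0.1)

if __name__ == "__main__":
    proc = mp.Process(target=serve_forever, daemon=True)
    proc.start()       # the "server" now runs in the background
    time.sleep(0.5)    # ...make requests against it here...
    proc.terminate()   # stop the background process
    proc.join()
```

`terminate()` is abrupt (no cleanup runs in the child); for throwaway test servers that is usually acceptable, and the `daemon=True` flag makes sure the child dies with the parent either way.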
configure_error_handlers is a standard block that appears in all my Flask / Quart projects, but without jsonify it's even cleaner. Does it respect the JSON encoder I set here without jsonify: https://github.com/openbikebox/connect/blob/master/webapp/common/filter.py#L13 ?
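As far as I know, a dict return value goes through jsonify internally, which serializes with the encoder registered on the app, so a custom encoder should still be respected; I haven't checked every Quart version though. A stdlib-only sketch of the kind of encoder involved (the date handling here is a hypothetical example, not the linked code):

```python
import json
from datetime import date

# Hypothetical project-level encoder: extends the default JSON
# encoder with one extra type
class AppJSONEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, date):
            return o.isoformat()
        return super().default(o)

# With Quart this would be registered as app.json_encoder = AppJSONEncoder;
# dict return values are then serialized through it.
print(json.dumps({"day": date(2021, 4, 5)}, cls=AppJSONEncoder))  # → {"day": "2021-04-05"}
```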
hey everyone!
hope you are all well
I am trying to learn how to make an ML-based web app using Quart.
I want to build an application that can handle prediction of a batch of images. I want to divide the batch into small chunks, make predictions, and return the response to the user, repeating until the last chunk is processed.
I thought of making an async recursive API call. All I want is for it to return the response recursively.
Any help is appreciated.
This is the function that I want to call recursively:
```python
@app.route('/predict', methods=['POST'])
async def predict():
    if request.method == 'POST':
        has_files = (await request.files).get("file", None)
        if has_files:
            files = (await request.files).getlist("file")
            responses = {}
            # split files into small chunks to place here!
            for file, i in zip(files, range(len(files))):
                img_bytes = file.read()
                class_id, class_name = await get_prediction(
                    model=model,
                    image_bytes=img_bytes,
                    imagenet_class_index=imagenet_class_index)
                responses[i] = {'class_id': class_id, 'class_name': class_name}
                print(responses[i])
            return await render_template("results.html", text=str(responses))
        else:
            return await render_template("results.html", text="please upload image!")
```
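Rather than a recursive call back into the route, the loop itself can walk the batch in fixed-size chunks. A sketch of just the slicing (the `get_prediction` call would go inside the inner loop):

```python
from typing import Iterator, List

def chunks(items: List, size: int) -> Iterator[List]:
    # Yield successive slices of at most `size` items
    for start in range(0, len(items), size):
        yield items[start:start + size]

# e.g. a batch of 7 uploaded files, predicted 3 at a time
batch = list(range(7))
for chunk in chunks(batch, 3):
    # run get_prediction(...) for each file in `chunk`
    # and collect the results into `responses`
    print(chunk)
```

If each chunk's result should reach the user as soon as it is ready, Quart can stream a response incrementally (e.g. by returning an async generator from the route) instead of rendering one template at the end; otherwise accumulating into `responses` as above is enough.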