(Official Discord: https://discord.gg/NWpN5mmg3x) | Async actor framework for Rust. | https://actix.rs/book/actix/
tokio_serial (4.3) on actix (0.10): when I call (and await) tokio_serial::Serial::pair inside a tokio::Runtime::block_on inside a new thread, there are no issues and everything works as expected. But when I ctx::spawn it inside an Actor::start_in_arbiter closure, then instead of 1) getting a panic about executors, 2) it working, or 3) it hanging unpolled, 4) it seems to create the Serial just fine, but calling read on it returns Ok(0) instantly, i.e. it's now closed.
How do I implement ResponseError for bb8_tiberius::Error?
```rust
pub async fn ws_index(r: HttpRequest, stream: web::Payload) -> Result<HttpResponse, Error> {
    let app_state = r.app_data::<web::Data<AppState>>().unwrap();
    println!("Shared State: {:?}", app_state.configuration.name);
    println!("{:?}", r);
    // clone the field; it can't be moved out of the shared app state
    let res = ws::start(MyWebsocket::new(app_state.ws_clients.clone()), &r, stream);
    println!("{:?}", res);
    res
}
```
I get the "App data is not configured" error. My impl of actix_web::error::ResponseError for ServerError:
```rust
impl actix_web::error::ResponseError for ServerError {
    fn error_response(&self) -> HttpResponse {
        match self {
            Self::Argonautica(_) => HttpResponse::InternalServerError().json("Argonautica Error"),
            Self::Database(e) => {
                HttpResponse::InternalServerError().json(format!("Database Error: {:?}", e))
            }
            […]
```
```rust
impl actix_web::error::ResponseError for ServerError {
    fn error_response(&self) -> HttpResponse {
        match self {
            Self::Argonautica(_) => {
                //eprintln!("Argonautica Error happened: {:?}", e);
                HttpResponse::InternalServerError().json("test2")
            }
            _ => {
                //eprintln!("Argonautica Error happened: {:?}", e);
                HttpResponse::InternalServerError().json("test")
            } /*
            Self::Database(e) => {
                //eprintln!("Database Error happened: {:?}", e);
                HttpResponse::InternalServerError().json(&Self::render_error(
                    "Server Error",
                    "Database could not be accessed!",
                ))
            } */
        }
    }
}
```
.data() - but that error could really be better!
Why does tokio block actix? When I use tokio I can't connect to the server via websockets. The tokio code is the time_now() function below.
actix code:
```rust
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    time_now();
    let port = env::var("PORT")
        .unwrap_or_else(|_| "8081".to_string())
        .parse()
        .expect("PORT must be a number");
    println!("Server is working on {:?}", port);
    let chat_server = Lobby::default().start(); // create and spin up a lobby
    HttpServer::new(move || {
        App::new()
            .service(start_connection_route)
            .data(chat_server.clone())
    })
    .bind(("0.0.0.0", port))
    .expect("Can not bind to port")
    .run()
    .await
}
```
tokio code:
```rust
pub async fn time_now() {
    let mut time: currentTime = currentTime {
        dayTime: "".to_string(),
        seconds: 0,
        minutes: 0,
        hours: 0,
        days: 0,
        shift: 0,
    };
    let one_second = Duration::from_secs(1);
    actix::spawn(async move {
        loop {
            std::thread::sleep(one_second);
            time.seconds += 1;
            if time.seconds == 60 { time.minutes += 1; time.seconds = 0 }
            if time.minutes == 60 { time.hours += 1; time.minutes = 0 }
        }
    });
}
```
Hi,
I want to achieve an async sender where the client receives data progressively, in time.
The only way I have found to achieve it is with actix_utils::mpsc::channel(); it's based on actix/actix-web#1066.
As far as I understand the code, in each loop we provide the sender a Result<web::Bytes, actix_web::Error>; each loop copies the data into a VecDeque shared between the Sender and the Receiver.
The code is here: https://github.com/actix/actix-net/blob/1.0/actix-utils/src/mpsc.rs
Now what I would like to achieve is: the whole thing could be arranged by writing a custom Receiver<T> and Sender<T>, just like in mpsc; they seem quite simple.
After digging deeper, however, I've realized that Receiver<T> implements futures::Stream, which produces Items via poll_next that I bet are consumed by the next entities (body::BodyStream).
So do I correctly assume that there is currently no way to send data to the client progressively (received at a steady pace) with a constant-size buffer?
```rust
#[post("/library/{id}/device")]
async fn post_library_device(
    pool: web::Data<PgPool>,
    web::Path(id): web::Path<i32>,
) -> Result<impl Responder, Error> {
    ...
}
```
FYI, to avoid questions: ab is quite compact and eats very few resources compared to siege/wrk, and most certainly compared to jmeter ;)
Assuming a simple server that outputs "blah" 10 times:
```rust
use actix_web::dev::HttpResponseBuilder;
use actix_web::{middleware, web, App, HttpRequest, HttpResponse, HttpServer};

struct LiveStream {
    offset: i64,
}

impl LiveStream {
    pub fn new() -> LiveStream {
        LiveStream { offset: 10 }
    }
}

// https://gitter.im/actix/actix-web?at=5f94900e7be0d67d2794c0df
impl futures::stream::Stream for LiveStream {
    // error type must convert into actix_web::Error for .streaming()
    type Item = Result<web::Bytes, actix_web::Error>;

    fn poll_next(
        mut self: std::pin::Pin<&mut Self>,
        _cx: &mut futures::task::Context<'_>,
    ) -> futures::task::Poll<Option<Self::Item>> {
        if self.offset <= 0 {
            return futures::task::Poll::Ready(None);
        }
        self.offset -= 1; // count down so the stream ends after 10 chunks
        let bytes = web::Bytes::from("blah");
        futures::task::Poll::Ready(Some(Ok(bytes)))
    }
}

#[actix_web::get("/foo/{path}")]
async fn foo(_: HttpRequest) -> HttpResponse {
    HttpResponseBuilder::new(actix_web::http::StatusCode::OK)
        .header(actix_web::http::header::CONTENT_LENGTH, 10 * 4 as usize)
        .streaming(LiveStream::new())
}

#[actix_web::get("/bar/{path}")]
async fn bar(_: HttpRequest) -> HttpResponse {
    let (tx, rx_body) =
        actix_utils::mpsc::channel::<actix_web::Result<web::Bytes, actix_web::Error>>();
    println!("starting response");
    actix_web::rt::spawn(async move {
        let mut offset: i64 = 10;
        while offset > 0 {
            offset -= 1;
            if let Err(_) = tx.send(Ok::<_, actix_web::Error>(web::Bytes::from("blah"))) {
                return;
            }
            futures_timer::Delay::new(std::time::Duration::from_micros(1)).await; // required
        }
        println!("response finished");
        drop(tx);
    });
    HttpResponseBuilder::new(actix_web::http::StatusCode::OK).streaming(rx_body)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    if std::env::var("RUST_LOG").is_err() {
        std::env::set_var("RUST_LOG", "actix_web=debug");
    }
    env_logger::init();

    let plain_server = HttpServer::new(|| {
        App::new()
            .wrap(middleware::Logger::default())
            .service(bar)
            .service(foo)
    })
    .keep_alive(16)
    .max_connection_rate(512)
    .max_connections(40000)
    .bind("0.0.0.0:8080")?
    .workers(600)
    .run();

    let servers = vec![plain_server];
    futures::future::join_all(servers).await;
    Ok(())
}
```
Let's say you run:
```shell
ab -k -c 1 -n 20 "http://127.0.0.1:8080/bar/xxx"
ab -k -c 1 -n 20 "http://127.0.0.1:8080/foo/xxx"
```
we receive this log:
```text
starting response
response finished
[2021-04-09T17:32:59Z INFO actix_web::middleware::logger] 127.0.0.1:56404 "GET /bar/xxx HTTP/1.0" 200 40 "-" "ApacheBench/2.3" 0.001095
starting response
response finished
[2021-04-09T17:32:59Z INFO actix_web::middleware::logger] 127.0.0.1:56404 "GET /bar/xxx HTTP/1.0" 200 40 "-" "ApacheBench/2.3" 0.000665
starting response
response finished
[2021-04-09T17:33:15Z INFO actix_web::middleware::logger] 127.0.0.1:56416 "GET /bar/xxx HTTP/1.0" 200 40 "-" "ApacheBench/2.3" 0.001046
starting response
response finished
[2021-04-09T17:33:15Z INFO actix_web::middleware::logger] 127.0.0.1:56416 "GET /bar/xxx HTTP/1.0" 200 40 "-" "ApacheBench/2.3" 0.000665
starting response
response finished
[2021-04-09T17:33:31Z INFO actix_web::middleware::logger] 127.0.0.1:56440 "GET /bar/xxx HTTP/1.0" 200 40 "-" "ApacheBench/2.3" 0.001854
...
```
ab seems to time out after the 2nd response. A similar handler based on impl futures::stream::Stream with poll_next has identical behaviour.
Hello guys, new to Rust and actix.
I want to get some guidance on how to go about this situation. What I want to use is this:
```rust
struct DomainSettingProvider {
    realm: Domain,
    settings: Arc<HashMap<Domain, RefCell<Settings>>>,
}
```
In Java, I would have the provider injected through DI as a singleton. Under high load, these settings would be held in memory, cached, and updated every so often. This is to avoid hitting the db on every request, and to avoid the memory burden of creating and garbage-collecting those objects.
How would I go about this in Rust? Is this the right approach? How would I ensure it can only be updated in one place and acts like a cache?
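One caveat with the struct above: RefCell is not Sync, so `Arc<HashMap<_, RefCell<_>>>` can't be shared across actix's worker threads. A minimal sketch of the usual shape instead, an `Arc<RwLock<...>>` behind a provider type (the `Domain`/`Settings` definitions here are invented stand-ins, and the `realm` field is omitted): readers take cheap shared locks on every request, and a single background task owns the refresh path.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Hypothetical stand-ins for the Domain and Settings types from the question.
type Domain = String;

#[derive(Clone, Debug, PartialEq)]
struct Settings {
    max_users: u32,
}

#[derive(Clone)]
struct DomainSettingProvider {
    settings: Arc<RwLock<HashMap<Domain, Settings>>>,
}

impl DomainSettingProvider {
    fn new() -> Self {
        Self { settings: Arc::new(RwLock::new(HashMap::new())) }
    }

    // Many readers in parallel; clone the small value out of the guard.
    fn get(&self, domain: &str) -> Option<Settings> {
        self.settings.read().unwrap().get(domain).cloned()
    }

    // The single place allowed to update the cache, e.g. a periodic refresh task.
    fn refresh(&self, fresh: HashMap<Domain, Settings>) {
        *self.settings.write().unwrap() = fresh;
    }
}
```

Cloning the provider clones only the Arc, so handing one clone to the refresh task and another to app state gives you the "updated in one place" property by construction. Under heavy write contention in async handlers you'd reach for tokio's async RwLock or an atomic-swap crate instead, but std's RwLock is fine for read-mostly settings.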
hello! I am trying to implement a POST handler for a webhook. I want to get the POST request body but I am facing the following error.
....
```rust
#[post("/mutate")]
async fn index(body: AdmissionReview<DynamicObject>) -> impl Responder {
    "Welcome!"
}
```
...
```text
#[post("/mutate")]
| ^^^^^^^^^^^^^^^^^^ the trait `FromRequest` is not implemented for `AdmissionReview<DynamicObject>`
```
How can I fix this issue?
```rust
async fn get_hash(
    data: web::Data<AppState>,
    web::Path(data_id): web::Path<String>,
) -> impl Responder {
    let hash_map = data.hash.lock().unwrap();
    match hash_map.get(&data_id) {
        // clone out of the mutex guard; `.body()` needs an owned value
        Some(info) => HttpResponse::Ok().body(info.clone()),
        // `.finish()` so both match arms have the same type
        None => HttpResponse::NotFound().finish(),
    }
}

async fn add_data(data: web::Data<AppState>, text: String) -> impl Responder {
    let mut hash_map = data.hash.lock().unwrap();
    hash_map.insert("apple".to_string(), text);
    HttpResponse::Ok().body("done")
}
```
```rust
let connector = awc::Connector::new().rustls(Arc::new(cfg)).finish();
let client = awc::ClientBuilder::new().connector(connector).finish();
```
The above is my code; I am trying to use awc to create a websocket client with TLS encryption. I just updated to the latest betas and I get the following error:
```text
error[E0308]: mismatched types
  --> router/src/lib.rs:382:58
    |
382 |     let client = awc::ClientBuilder::new().connector(connector).finish();
    |                                                      ^^^^^^^^^ expected struct `Connector`, found struct `actix_http::client::connector::ConnectorServicePriv`
```
Removing .wrap(middleware::Logger::default()) still shows the logs from actix_web plus my own logs.
```rust
#[actix_web::main]
async fn main() -> Result<(), anyhow::Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .init();
    ...
    HttpServer::new(|| {
        App::new()
            .wrap(middleware::Logger::default())
            .service(mutate)
            .service(health)
    ....
```
```text
Apr 14 12:08:56.061  INFO image_constraint: Started http server: 127.0.0.1:8443
Apr 14 12:08:56.063  INFO actix_server::builder: Starting 1 workers
Apr 14 12:08:56.063  INFO actix_server::builder: Starting "actix-web-service-0.0.0.0:8443" service on 0.0.0.0:8443
```
Hi all,
I don't know if this is the right place to discuss a feature request, but I have something I think might be an enhancement to actix-web.
Oftentimes I use the fn x() -> Result<(), Error> pattern in my code for handling errors in functions that otherwise return nothing, e.g.
```rust
#[actix_web::main]
async fn main() -> Result<(), std::io::Error> { ... }
```
I haven't found anything in the documentation about whether this pattern has a name, but it is something I frequently encounter when working with Rust.
I'd like to use the same pattern in my handlers, but actix_web::Responder is not implemented for (), so instead I use the more verbose async fn handler() -> Result<HttpResponse, Error>, which returns Ok(HttpResponse::NoContent().finish()). Unfortunately Rust does not allow me to implement Responder for () in my own crates, so it would have to be implemented in actix-web directly. Would that be something other actix users would be interested in as well?
Kind regards,
Jonas
Getting:
```text
Compiling actix-files v0.6.0-beta.4
error[E0053]: method `error_response` has an incompatible type for trait
  --> /home/bjorn/.cargo/registry/src/github.com-1ecc6299db9ec823/actix-files-0.6.0-beta.4/src/error.rs:19:5
```
when attempting to depend on actix-web 4.0.0-beta.6 and actix-files 0.6.0-beta.4. Aren't those supposed to go together?
I'm looking at actix-session to see if I can update it to support actix-web 4.0.0-beta.6. When looking at the insert method of Session and the InsertError definition, it seems like the error is meant to carry whatever value the user was trying to insert if something went wrong, but the insert method only requires value: impl Serialize. Does anyone have thoughts on whether the value parameter should be constrained, or whether the error shouldn't carry the value which caused the error?
```rust
pub fn init() -> App {
    let frontend_origin = env::var("FRONTEND_ORIGIN").ok();
    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    let database_pool = new_pool(database_url).expect("Failed to create pool.");
    let message_handler = SyncArbiter::start(num_cpus::get(), move || MessageHandler {
        db_connection_pool: database_pool.clone(),
    });
    let state = AppState {
        message_handler: message_handler.clone(),
    };
    App::new()
        .data(state)
        .wrap(Logger::default())
        .configure(route::routes)
}
```