Ethin Probst
@ethindp

Not that I know of. It happens in the core crate. The error message is this:
error: could not compile core

Caused by:
process didn't exit successfully: rustc --crate-name core --edition=2018 C:\Users\ethin\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C panic=abort -C embed-bitcode=no -C codegen-units=64 -C debuginfo=2 -C metadata=9503ede77df8d2d8 -C extra-filename=-9503ede77df8d2d8 --out-dir C:\Users\ethin\source\kernel\target\x86_64-kernel-none\debug\deps --target \\?\C:\Users\ethin\source\kernel\x86_64-kernel-none.json -Z force-unstable-if-unmarked -L dependency=C:\Users\ethin\source\kernel\target\x86_64-kernel-none\debug\deps -L dependency=C:\Users\ethin\source\kernel\target\debug\deps --cap-lints allow -C target-feature=+sse,+sse2 (exit code: 0xc0000409, STATUS_STACK_BUFFER_OVERRUN)
warning: build failed, waiting for other jobs to finish...
error: build failed

My target file looks like this:

{
    "llvm-target": "x86_64-kernel-none",
    "data-layout": "e-m:e-i64:64-f80:128-n8:16:32:64-S128",
    "arch": "x86_64",
    "target-endian": "little",
    "target-pointer-width": "64",
    "target-c-int-width": "32",
    "os": "none",
    "executables": true,
    "linker-flavor": "ld.lld",
    "linker": "rust-lld",
    "panic-strategy": "abort",
    "disable-redzone": true,
    "features": "-mmx,+sse,+sse2,+soft-float"
}

And my .cargo/config.toml looks like:

[unstable]
build-std = ["core", "compiler_builtins", "alloc"]

[build]
target = "x86_64-kernel-none.json"
rustflags = ["-C", "target-feature=+sse,+sse2"]

[target.'cfg(target_os = "none")']
runner = "bootimage runner"
Ethin Probst
@ethindp
I'm trying to enable SSE/SSE2 in particular crates because (for some reason) my memory allocation routines get really, really slow (to the point where the kernel is halted, waiting for them to finish) when I allocate a 4 KiB page of RAM for my disk controller. I've no idea precisely where the problem is; my theory is that it's in the frame allocator, where I repeatedly loop through the usable frames over and over and then extract one (e.g. this code), but I'm not sure how to optimize it:
unsafe impl FrameAllocator<Size4KiB> for GlobalFrameAllocator {
    fn allocate_frame(&mut self) -> Option<PhysFrame> {
        self.pos += 1;
        // Rebuilds the whole iterator chain over the memory map on every call ...
        let regions = self.memory_map.iter();
        let usable_regions = regions.filter(|r| r.region_type == MemoryRegionType::Usable);
        let addr_ranges = usable_regions.map(|r| r.range.start_addr()..r.range.end_addr());
        let frame_addresses = addr_ranges.flat_map(|r| r.step_by(4096));
        // ... and then walks it from the start to reach frame number `pos`,
        // so each allocation is O(pos).
        frame_addresses
            .map(|addr| PhysFrame::containing_address(PhysAddr::new(addr)))
            .nth(self.pos)
    }
}
GDB is pretty much no help either, and I can't add a trace! macro call in there because it'll get ridiculously spammy. The slowdown happens in both debug and release builds.
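For reference, a possible fix, sketched under assumptions: the allocator can remember where the previous allocation left off instead of re-walking the memory map and calling .nth(self.pos) every time. The type and field names (ResumingFrameAllocator, region_index, next_addr) are hypothetical, and a bootloader-0.9-style MemoryMap is assumed.

use bootloader::bootinfo::{MemoryMap, MemoryRegionType};
use x86_64::structures::paging::{FrameAllocator, PhysFrame, Size4KiB};
use x86_64::PhysAddr;

pub struct ResumingFrameAllocator {
    memory_map: &'static MemoryMap,
    region_index: usize, // usable region currently being drained
    next_addr: u64,      // next physical address to hand out in that region
}

impl ResumingFrameAllocator {
    /// Unsafe because the caller must guarantee the memory map is valid
    /// and that all frames marked Usable are really unused.
    pub unsafe fn init(memory_map: &'static MemoryMap) -> Self {
        ResumingFrameAllocator { memory_map, region_index: 0, next_addr: 0 }
    }
}

unsafe impl FrameAllocator<Size4KiB> for ResumingFrameAllocator {
    fn allocate_frame(&mut self) -> Option<PhysFrame> {
        loop {
            // O(number of regions) at worst, instead of O(frames handed out).
            let region = self.memory_map.iter().nth(self.region_index)?;
            if region.region_type != MemoryRegionType::Usable {
                self.region_index += 1;
                continue;
            }
            let start = region.range.start_addr().max(self.next_addr);
            if start + 4096 <= region.range.end_addr() {
                // Hand out this frame and remember where to continue next time.
                self.next_addr = start + 4096;
                return Some(PhysFrame::containing_address(PhysAddr::new(start)));
            }
            // Region exhausted: advance to the next one and reset the cursor.
            self.region_index += 1;
            self.next_addr = 0;
        }
    }
}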
Ethin Probst
@ethindp
So any thoughts about how I can fix things? I can't really continue development until I get this fixed somehow.
Scott McWhirter
@konobi
Isn't sse/sse2 just available under x86_64?
Ethin Probst
@ethindp
You still have to enable it.
Isaac Woods
@IsaacWoods
it'll get ridiculously spammy
Sometimes you just have to wade through it
Philipp Oppermann
@phil-opp

@ethindp

I can't really continue development until I get this fixed somehow.

You mentioned two problems: the stack buffer overrun error when compiling and the slow memory allocation. Which of these issues do you mean by "this"?

For the compilation error, I would try to reduce it as much as possible to find the cause. Start by copying your project and then deleting as much stuff as possible while preserving the issue. If you have a small enough example that still throws this error when compiling, I would recommend opening an issue in the rust repository since this is not something that should happen.
Philipp Oppermann
@phil-opp
@ethindp I just saw that there is an open rustc issue where the build process hangs: rust-lang/rust#76980
It seems to be caused by an LLVM upgrade. You could try the nightly before this upgrade and see whether it is also causing your build issue.
Ethin Probst
@ethindp
@phil-opp I meant the memory allocation by "this". I don't necessarily need SSE or SSE2 in my kernel; if I did use it, I think my kernel would be the first in history to need it, which may or may not be a good sign. @IsaacWoods It's hard to wade through that much info. If I add a trace! macro call I get more than a thousand messages within the first 30 seconds of boot (and it slows down the boot process like you wouldn't believe). I might make a special function, emitted only in debug builds, that allows memory tracing without tracing every single memory allocation that the kernel ever makes.
Ethin Probst
@ethindp
Okay, so I've no idea what's going wrong with this. My NVMe driver allocates a 16 KiB ring buffer for the controller to write to. In this case, it was at 277FC5BC4000. However, my kernel (somehow) just went off the rails: I let it run, and in that time it allocated 5,653 memory frames, all the way up to address 277FC71D9000, which is far beyond the end of the ring buffer I originally wanted. I compute the size from start and end addresses, e.g. to allocate a single page you call allocate_phys_range(addr, addr + 4096). Is this not the correct way to express how large I want the allocation to be? It seems to work in every other instance except this one, where the allocator (seemingly) goes haywire trying to allocate all of the RAM in the machine.
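As a sanity check on that convention (a sketch; frame_count is a hypothetical helper, not code from the kernel in question): with a half-open range [start, end), end = addr + 4096 describes exactly one 4 KiB frame, and a 16 KiB buffer should come out to exactly 4 frames. If far more frames come out, the loop bound inside the allocator is a natural first suspect.

fn frame_count(start: u64, end: u64) -> u64 {
    // Half-open range [start, end): `end` is one past the last byte.
    assert!(end > start && (end - start) % 4096 == 0);
    (end - start) / 4096
}

// frame_count(addr, addr + 4096) == 1        (a single page)
// frame_count(addr, addr + 16 * 1024) == 4   (a 16 KiB ring buffer)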
Rob Gries
@robert-w-gries
Hey Phil, I'm thinking of restructuring my scheduling code to use futures like your async/await post does. Before going down that route, I was curious if you could share some high-level thoughts on how you were planning to use Rust's futures with a preemptive scheduler. For my kernel, I am currently using a timer interrupt to call scheduler.resched() every 20 ms, which invokes the costly context switch you mentioned in your post. Were you hoping to do something similar, where tasks are swapped out at a consistent interval? Overall, I really like how tasks are represented in your kernel, and I think their flexibility would improve the ergonomics of my kernel a lot
Ethin Probst
@ethindp
I like the way tasks are represented as well, but the cooperative nature of the scheduler gives no benefit. My code uses the scheduler, but it's akin to me just putting function calls one after another. It's a start, though, and I might be doing something wrong.
Philipp Oppermann
@phil-opp
@ethindp The difference is what happens when you do I/O or need to wait for something else (e.g. input on a message channel). A normal function call blocks until it completes, while an async function can be paused at that point so that other functions can run during the wait time.
Philipp Oppermann
@phil-opp

@robert-w-gries My basic idea is to define a maximum execution time for tasks, perhaps depending on the number of waiting tasks in the system. If a task runs longer than that, it gets preempted by the timer interrupt to give other tasks the chance to run. This could be implemented by dynamically spawning more threads when encountering long-running tasks. This way, we could avoid most context switches as long as most tasks are I/O bound. The difference to traditional threads, which also block when waiting for I/O, is that no register state needs to be saved, as the state machine itself saves the minimal state required to continue.

For multi-core systems, we could extend this approach by defining different allowed execution times for different cores. For example, one core could be the batch processing core with a much higher allowed execution time. Tasks that are not latency-critical could be spawned there through an additional spawn_batched method on our executor and avoid the context switch cost this way. Similarly, we could extend our executor with support for latency-critical tasks by lowering the maximum execution time on some CPU cores dynamically.

Of course, things get a bit more complicated when you want to schedule user-space programs too. While you could expose a futures-like interface at the system call level and let the kernel do all the task scheduling, you would lose some of the performance benefits because a system call would be required for every userspace task, even very small ones. So it's probably better to let the user-space program define its own cooperative scheduler that cooperates with the kernel-level scheduler. For example, the kernel-level scheduler could set a "please finish in the next 10ms" flag, which the user-level scheduler could use to pause at a convenient point. Of course, user-level programs might not cooperate (they are not trusted), so you still have to implement a traditional preemption mechanism in case a program does not respect the flag, but at least you give user programs the chance of improved performance.

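A minimal sketch of that cooperation flag, assuming purely illustrative names (FINISH_SOON, should_pause):

use core::sync::atomic::{AtomicBool, Ordering};

// Set by the kernel (e.g. from the timer interrupt handler) to ask the
// current user-level scheduler to pause at its next convenient point.
pub static FINISH_SOON: AtomicBool = AtomicBool::new(false);

// Polled by a cooperative user-level scheduler between tasks; clears the
// flag so each request is handled exactly once.
pub fn should_pause() -> bool {
    FINISH_SOON.swap(false, Ordering::Relaxed)
}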
Rob Gries
@robert-w-gries
Thanks for the great info! I'm a little Rust-y with kernel dev right now, but I'm hoping that implementing your cooperative scheduler in my kernel will help get me back up to speed. I'll probably ask more questions when I'm re-implementing the preemptive scheduler, if that's alright with you
Philipp Oppermann
@phil-opp
Sure!
Ethin Probst
@ethindp
@phil-opp Right, but I do I/O all the time (MMIO and Port IO) and everything is still sequential. Unless the scheduler just isn't complete yet, I mean.
Or I'm not doing something right...
Philipp Oppermann
@phil-opp
If you just use multiple awaits in a single function, the subsequent ones are not executed until the previous ones are finished. Try the future::join method to wait on multiple futures at once. Alternatively, you can spawn the futures as separate tasks in the executor and communicate through channels.
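For illustration, a sketch of that pattern using futures_util (assuming the crate is available with default features disabled for no_std use; read_disk and read_net are placeholder futures):

use futures_util::future;

async fn read_disk() -> u64 { /* ... */ 0 }
async fn read_net() -> u64 { /* ... */ 0 }

async fn both() -> u64 {
    // Both futures are polled as part of the same task; while one is
    // pending, the other can still make progress.
    let (a, b) = future::join(read_disk(), read_net()).await;
    a + b
}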
Comcx
@Comcx
Hi, I'm a CS student and I really love your articles! If I want to develop a file system to help me better understand OS theory and Rust, is there anything I have to change with the bootloader crate? Right now I just assume the bootloader will automatically create a suitable image for me.
rybot666
@rybot666
@Comcx The bootloader is currently only equipped to output a flat binary with no filesystem. We're working on a rewrite which will probably load from a FAT filesystem of some sort
Comcx
@Comcx
@rybot666 Great to hear that, thank you very much!
Rob Gries
@robert-w-gries

Really impressed with the flexibility of Tasks using your async/await system. The following is valid code inside my kernel_main function:

pub async fn rxinu_main() { kprintln!("TEST TEST TEST") }

let y = 42;
let test = async move |x| { kprintln!("{} {}", x, y); };

executor.spawn(task::Task::new(rxinu_main()));
executor.spawn(task::Task::new(test("The answer is")));

Output:

TEST TEST TEST
The answer is 42
Well done @phil-opp !
Philipp Oppermann
@phil-opp
Great to hear that you like it! I still have a few ideas for improving it further. For example, I plan to use From/Into to avoid the Task::new boilerplate. Also, I think it would be useful if the spawn method (or a separate method) returned the result of the future, similar to thread::spawn in the standard library.
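A sketch of the From idea, assuming a Task type like the one from the async/await post (which boxes a pinned future); the Into-taking spawn signature is an assumption, not the post's actual API:

use alloc::boxed::Box;
use core::future::Future;
use core::pin::Pin;

pub struct Task {
    future: Pin<Box<dyn Future<Output = ()>>>,
}

impl<F> From<F> for Task
where
    F: Future<Output = ()> + 'static,
{
    fn from(future: F) -> Self {
        Task { future: Box::pin(future) }
    }
}

// With a spawn method taking `impl Into<Task>`, callers could then write
// executor.spawn(rxinu_main()) instead of executor.spawn(Task::new(rxinu_main())).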
Rob Gries
@robert-w-gries

Sounds good! By the way, I found async-std has a 'yield_now' function that works with Futures, and I was able to implement it for my kernel. The usage looks like this:

// do some work
task::yield_now().await;
// do more work

Here's a test case if you're interested

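For readers who want to implement this themselves, a minimal sketch of such a yield_now future (not Rob's actual implementation): it returns Pending exactly once, waking its own waker so the executor reschedules the task behind any other ready tasks.

use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll};

pub fn yield_now() -> YieldNow {
    YieldNow { yielded: false }
}

pub struct YieldNow {
    yielded: bool,
}

impl Future for YieldNow {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            // Wake immediately so the executor re-polls this task later.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}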
Rob Gries
@robert-w-gries
@phil-opp Have you looked into the async-std crate at all? It looks very similar in terms of the Task primitive and its executors, resulting in potentially re-usable code. They have also implemented nice features such as JoinHandles, task.cancel(), and thread-local executors
Philipp Oppermann
@phil-opp
I have looked at it, but only for std-dependent crates. Is it usable (and useful) on no_std as well?
Philipp Oppermann
@phil-opp
For my work projects, I mostly used the smol runtime instead of async-std, which seems to be a bit more modular. One of its sub-crates is async-task, which seems to have no_std support.
Rob Gries
@robert-w-gries
It supports no_std, but after looking into it more, a lot of the task functionality is feature-gated behind the default features, which require std. I'm surprised, because most of the task functionality doesn't need std and would suffice with just alloc. Maybe we can build our own no_std async runtime that extends async-task and keep it under rust-osdev
Philipp Oppermann
@phil-opp
Sounds good! I think the async-executor crate should be quite easy to port for our use case.
Scott McWhirter
@konobi
Would rtic.rs be of any relevance, or is it too tied to ARM?
Philipp Oppermann
@phil-opp
Ah, this used to be called RTFM, right? It certainly looks useful for real time applications, but I think it's not general enough for an OS kernel. Also, I personally don't like how it hides the control flow through macros.
Voodlaz
@Voodlaz
Are there gonna be more lessons about multitasking?
Philipp Oppermann
@phil-opp
Yes, there will be posts about threading and multi-core processing. However, I'm currently working on adding UEFI, framebuffer, and APIC support, so it will take me some time until I get to writing new multitasking posts.
Vinay Chandra
@vinaychandra
@phil-opp Is a separate stack required for every single type of interrupt? Once in user mode, any interrupt can potentially corrupt the user stack. So, should we have a blanket dedicated stack for all interrupts?
Philipp Oppermann
@phil-opp
The CPU automatically switches the stack when switching from user to kernel mode, e.g. to process interrupts. The mechanism for this is the interrupt stack table we already used in the double faults post. See https://os.phil-opp.com/double-fault-exceptions/#the-ist-and-tss
Vinay Chandra
@vinaychandra
Yes, this was used for double faults. The question is: do we usually use a dedicated stack for all interrupts, not just double faults?
Philipp Oppermann
@phil-opp
The TSS structure has multiple fields. We used its "interrupt stack table" field for setting a double fault stack. What I meant above is the "privilege stack table" field, which we haven't used yet. This field allows defining a stack that is used whenever the CPU switches from user to kernel mode, including for syscalls and exceptions/interrupts.
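A sketch of setting that field with the x86_64 crate, following the pattern from the double-fault post (the static stack, its size, and the function name are placeholders):

use x86_64::structures::tss::TaskStateSegment;
use x86_64::VirtAddr;

const STACK_SIZE: usize = 4096 * 5;
static mut PRIVILEGE_STACK: [u8; STACK_SIZE] = [0; STACK_SIZE];

pub fn create_tss() -> TaskStateSegment {
    let mut tss = TaskStateSegment::new();
    // Used by the CPU whenever execution switches from ring 3 to ring 0.
    tss.privilege_stack_table[0] = {
        let stack_start = VirtAddr::from_ptr(unsafe { &PRIVILEGE_STACK });
        stack_start + STACK_SIZE // stacks grow downwards; store the end address
    };
    tss
}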
Luuk van Oijen
@Lucky4Luuk
Hey guys, I was looking into getting to ring 3, but when I add user_code_selector and user_data_selector entries to my GDT and load them afterwards, my kernel immediately starts bootlooping. Does anyone have an example of how to properly add these?
My code: https://gist.github.com/Lucky4Luuk/d04957d8470030d015a2aedd2c487af2
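For comparison, the x86_64 crate ships ready-made ring-3 descriptors; a sketch of adding them (selector handling and TSS setup are left out). Note that loading the user code selector into CS while still running in ring 0 will fault, since a direct jump to a DPL-3 code segment isn't allowed at CPL 0 (ring 3 is normally entered via iretq), which can manifest as an instant-reset boot loop.

use x86_64::structures::gdt::{Descriptor, GlobalDescriptorTable};

fn build_gdt() -> GlobalDescriptorTable {
    let mut gdt = GlobalDescriptorTable::new();
    let _kernel_code = gdt.add_entry(Descriptor::kernel_code_segment());
    // DPL-3 descriptors for user mode:
    let _user_data = gdt.add_entry(Descriptor::user_data_segment());
    let _user_code = gdt.add_entry(Descriptor::user_code_segment());
    gdt
}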
Scott McWhirter
@konobi
@Lucky4Luuk The osdev wiki (which I'm not sure if you've already referred to) has a section on ring 3. It mentions that x86 is pretty quirky about how to achieve it: "The only way to get to ring 3 is to fool the processor into thinking it was already in ring 3 to start with": https://wiki.osdev.org/Getting_to_Ring_3
>.<
ergpopler
@ergpopler
Is there like a list of upcoming things?
Luuk van Oijen
@Lucky4Luuk
@konobi Thanks for the link, but I had indeed already read it. Sorry for the late reply, I never use gitter so I didn't realize I had been pinged here haha
linuss
@linuss
Hey guys! I'm working through the blog, enjoying it a great deal so far :)
I'm currently trying to get the VGA writer to work, but I'm running into an issue. After adding the impl fmt::Write for Writer and adding a write!(writer, "foobar") call, I get this error: no method named `write_fmt` found for struct `Writer` in the current scope. Do I have to implement that function too, or am I missing something?
linuss
@linuss
Oh! Using use core::fmt::Write fixed it :)
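For anyone hitting the same error, the fix in context (writer being any value of a type that implements core::fmt::Write):

// The write! macro expands to a write_fmt call, which is a provided method
// of the core::fmt::Write trait; the trait must be in scope for the method
// to resolve on your Writer type.
use core::fmt::Write;

write!(writer, "foobar").unwrap();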