Rian Quinn
@rianquinn
Yeah. The function method doesn’t work well with Boxy since it creates additional vCPUs and doesn’t provide its own versions of those, so I would stick to the inheritance method for now.
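(For context, the "inheritance method" here means extending Bareflank's base vCPU type by subclassing it, rather than registering standalone handler functions. A minimal sketch of the idea, with illustrative names that are not Bareflank's exact API:)

```cpp
// Illustrative sketch of the inheritance method: the extension
// subclasses the base vcpu and overrides its behavior. The class
// and member names are placeholders, not Bareflank's actual API.

class vcpu                        // stands in for the base vcpu type
{
public:
    virtual ~vcpu() = default;
    virtual void run() { /* base VM-entry loop */ }
};

class boxy_vcpu : public vcpu     // extension-specific vCPU
{
public:
    void run() override
    {
        // Boxy-specific setup for the additional vCPUs it creates
        // would go here, before deferring to the base implementation.
        vcpu::run();
    }
};
```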
Rian Quinn
@rianquinn
Out of surgery. It went well. In lots of pain but all is good. I plan to spend the next several hours working on Boxy
Jk. Hahahahaha.
Stewart Sentanoe
@ssentanoe
Glad to hear that.
I will try later on
Tamas K Lengyel
@tklengyel
does boxy have a virtual interrupt system implemented yet (like xen's event channels)?
Connor Davis
@connojd
No not yet
Rian Quinn
@rianquinn
That’s not true. It does. I just haven’t upstreamed it yet. It’s really simple though. Event channels are needlessly complicated
I implemented the vclock using them and they work fine
If you want to see the patch just let me know. The vIRQ part was easy
The vclock needs changes to the base, which is what I am waiting on. Just been too busy to finish that work, but I am close
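(For readers unfamiliar with the mechanics: on Intel VT-x, delivering a virtual interrupt like the vIRQ described here typically comes down to programming the VM-entry interruption-information field before the next VM entry. A hedged sketch follows; the field encoding is from the Intel SDM, but the `vmwrite()` helper is a placeholder, not Bareflank's API:)

```cpp
// Sketch of injecting a virtual interrupt on Intel VT-x. The field
// encoding follows the Intel SDM; vmwrite() stands in for whatever
// VMCS accessor the VMM actually provides.
#include <cstdint>

constexpr uint64_t vm_entry_interruption_info = 0x4016ULL;

constexpr uint32_t intr_valid    = 1U << 31;  // bit 31: injection valid
constexpr uint32_t intr_external = 0U << 8;   // bits 10:8: external interrupt

extern void vmwrite(uint64_t field, uint64_t value);

void inject_virq(uint8_t vector)
{
    // On the next VM entry, the CPU delivers 'vector' to the guest
    // as if an external interrupt had arrived.
    vmwrite(vm_entry_interruption_info,
            intr_valid | intr_external | vector);
}
```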
Tamas K Lengyel
@tklengyel
So Stewart should prob see that example; he needs a way to have the hypervisor let the boxy guest know that there is a new vmi event it should process
Otherwise the boxy vm would have to continuously spin to check if there is a new one on the shared buffer
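(The pattern Tamas describes, a shared ring drained by an interrupt-driven consumer instead of a spinning one, might look roughly like this on the guest side. All names here are placeholders, not Boxy's actual interfaces:)

```cpp
// Hypothetical guest-side sketch: rather than spinning on the shared
// event buffer, the guest registers a handler for the vIRQ and only
// drains the buffer when the hypervisor injects the interrupt.
#include <atomic>
#include <cstdint>

struct vmi_event { uint64_t data; };

struct shared_ring {
    std::atomic<uint32_t> head{0};   // written by the hypervisor
    std::atomic<uint32_t> tail{0};   // written by the guest
    vmi_event events[256];
};

shared_ring *g_ring;                 // mapped shared page, set up elsewhere

void handle_vmi_event(const vmi_event &e);  // consumer-defined

// Invoked by the guest kernel's IRQ layer when the vIRQ fires.
void virq_handler()
{
    auto tail = g_ring->tail.load(std::memory_order_relaxed);
    while (tail != g_ring->head.load(std::memory_order_acquire)) {
        handle_vmi_event(g_ring->events[tail % 256]);
        g_ring->tail.store(++tail, std::memory_order_release);
    }
}
```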
Rian Quinn
@rianquinn
Ok, I will get that setup by Friday. That patch set is a giant mess, so it would actually be a good thing for me to take a moment and get it cleaned up, and even attempt to upstream the Boxy changes. We have a MASSIVE overhaul coming to Bareflank in an attempt to enforce API (not ABI) stability moving forward, complete with CI enforcement (i.e. changes to the public API would result in a PR failure on GitHub, and require admin rights and an extra step to merge), and some of the changes that my Boxy patch set needed for Suspend/Resume are in that patch set
The reality is, I can strip that part and keep it simple for now, and I think Suspend/Resume will have to be done a different way anyways to support VT-d, so that code can just be stripped for now
I should have something in upstream for him by Friday
Tamas K Lengyel
@tklengyel
Sounds good
Stewart Sentanoe
@ssentanoe
thanks :D
Tamas K Lengyel
@tklengyel
and yes, API stability would be nice :P :D
Stewart Sentanoe
@ssentanoe
but, take your time, next week I will be in the honeynet workshop anyway
drinking with Tamas probably :P
Tamas K Lengyel
@tklengyel
haha, yes indeed :)
Rian Quinn
@rianquinn
Yeah, I apologize for the insanity on the project WRT the API. Adding guest support really put the APIs to the test, and now that we have a good idea of what is needed, we had a lot of debate internally on how to ensure the APIs are in a state that we are comfortable maintaining for years to come, which as it turns out, is not an easy thing to do.
Stewart Sentanoe
@ssentanoe
no need to apologize, I guess these things happen from time to time, it was cool work (bareflank/boxy)
Rian Quinn
@rianquinn
Lol... our API has changed sooooo much in the past year or two, and some of our patches are simply far too large because of it. Forcing a stable API will fix both, which is a day I really look forward to.
Stewart Sentanoe
@ssentanoe
hahaha, looking forward to use the new APIs for the VMI stuffs LOL
Rian Quinn
@rianquinn
They are pretty awesome. The build system is drastically simpler (it adds on top of Connor's already amazing fixes, but takes it further so that extensions only have to use add_subproject(); no other macros are needed). The public APIs are fully documented, and in their own location, so you can read through all of the APIs without having to read through code as well, which also means that we were able to remove unused documentation on internal code. We also got rid of x64:: as we have a better way to add amd64 support without duplication, so the ambiguity is gone. The entire C++ logic is all gone from the project and will be in its own project, so when you use Bareflank, you will only see Bareflank-specific logic, and the ELF loader has also been dramatically simplified and moved to its own project as well. We no longer use NASM as we can compile the same code using Clang with Intel syntax, and the BFSDK folder has been dramatically simplified to just what is needed. In general, there is far less code, things are a lot easier to find, and the APIs can be forced to remain stable. It’s pretty awesome.
Oh... and it just runs a lot faster.
Stewart Sentanoe
@ssentanoe
Haha, all in all, sounds awesome
Hopefully it can simplify stuffs (for example: my life) 😂
Rian Quinn
@rianquinn
yeah, I imagine that it will make your life suck at first since you will have to port some code and learn the APIs, but once you are past that hurdle, life will be a lot better
Rian Quinn
@rianquinn
@ssentanoe / @tklengyel The PRs for the changes to both the base and Boxy are in. This doesn't provide full support for suspend/resume as I plan to do this differently than what I was originally planning (mainly to support VT-d), but it does add vIRQ support in Linux and it adds a vClock which provides support for high-resolution timers so performance should be a hell of a lot better.
This doesn't add vIRQ support to the host, just to the guest VMs. Adding support to the host will take a bit more work as we would have to add the Linux patches to whatever host kernel you are using, and find a way to implement vIRQ support in Windows, which I still have to figure out.
Let me know if that helps. Once the PRs pass our tests, I will merge
wrathofodin
@wrathofodin
add AMD support :P
I'm gonna buy ryzen 3xxx
Rian Quinn
@rianquinn
hahahaha.... @dark2201 I will be personally adding support near the end of the year as I too am getting a system. AMD support is coming
Connor Davis
@connojd
Good place to start an implementation
Rian Quinn
@rianquinn
Agreed. That’s the one I am using already. The console is the first device that I am working on
Connor Davis
@connojd
The auto industry is pushing for virtio as the standard with the ability to swap out hypervisors underneath: https://at.projects.genivi.org/wiki/display/DIRO/Virtual+Platform+definition+for+Automotive+Hypervisor+Environments
Rian Quinn
@rianquinn
I saw something like this as well. I didn't realize there was a spec for it though. That is awesome. Nice find
wrathofodin
@wrathofodin
virtio is like hypervisor abstraction?
doesn't look very friendly, I think I would still come up with something custom
looks like far from completed as well
Rian Quinn
@rianquinn
All good points. We might start with custom and then move onto virtio. Just depends on where the industry moves. If virtio ends up being the standard, we may not have a choice but to support it at some point. Even Xen is considering it. But yeah. A custom set of devices would likely work better.
Connor Davis
@connojd
@rianquinn have we ever tested shared pointers in the vmm?
Rian Quinn
@rianquinn
you mean a std::shared_ptr? Yes, at one point that was all we used until I realized we should have been using a std::unique_ptr
why.... is it broken?
Connor Davis
@connojd
I used it a little yesterday and it didn't seem to work. The use_count at the time of destruction was ~2048 when I expected it to be 1.
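(For reference, with a single owner `std::shared_ptr::use_count()` should report 1 right before the last reference goes away, so a count near 2048 suggests either a corrupted control block in the VMM's runtime or unintended copies, not normal shared_ptr behavior. A minimal standalone check:)

```cpp
#include <cstdio>
#include <memory>

int main()
{
    auto p = std::make_shared<int>(42);

    // Exactly one owner exists, so use_count() should print 1 here.
    // A wildly larger value, as described above, would point at the
    // environment rather than at shared_ptr semantics.
    std::printf("use_count: %ld\n", p.use_count());
    return 0;
}
```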