Tim Dettrick
@tjdett
However I agree that adding 20 minutes to your current workflow is something we really want to avoid if we can.
Tim Dettrick
@tjdett
@DamienIrving Are you able to upload one of your files to AARNet CloudStor? I would suggest doing so over a wired connection at UniMelb, not wifi.
Damien Irving
@DamienIrving
Sure. I'm away from my desk now until about 3pm, but I can do it then.
Tim Dettrick
@tjdett
No problem. For the moment I'll use a randomly generated file of approximately the same size.
BTW, it turns out data write speed in the Melbourne-only NeCTAR cell is a lot faster than I expected. It wrote a random 12GB file in 35s.
Damien Irving
@DamienIrving
nice
Tim Dettrick
@tjdett
It turns out the download speeds I was seeing may have been somewhat inaccurate. The problem was that the entire 600MB archive Paul & Louise are using took only 5s to download, which didn't make for a good average-speed calculation.
Download took ~90s for a 12GB file. That is impressively fast.
Tim Dettrick
@tjdett
I see where I went wrong: somehow the tool I was using thought "MB/s" was Megabits/s, not Megabytes/s.
Also, the "small" 600MB file I was using originally low-balled the download speed by, well, a lot.
Downloading a 12GB file in 90s requires an average of over 1 Gb/s. (i.e. the original, final-phase, fibre-to-your-doorstep NBN could not have downloaded the file faster.)
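For the record, here's that arithmetic as a quick sketch (decimal units assumed, which is why the figure is approximate):

```python
# Sanity check: average throughput for a 12 GB file downloaded in 90 s.
size_gigabytes = 12      # file size (decimal gigabytes)
duration_seconds = 90    # observed download time

# Bytes -> bits (x8), then divide by elapsed time to get gigabits/second.
rate_gbps = size_gigabytes * 8.0 / duration_seconds
print(round(rate_gbps, 2))  # -> 1.07, i.e. just over a gigabit per second
```

That factor of eight between bits and bytes is exactly the confusion the tool introduced above.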

BTW:

Ah, yes, 20, not 2 minutes

You had it right, not me. Sorry about that. I should have done it in my head rather than relying on an online tool.

Damien Irving
@DamienIrving
@tjdett To get my data onto CloudStor, it looks like I'd have to download it to my own machine first and then upload to CloudStor using their web interface. Is there any way to cut out the middle step and download directly to CloudStor? The place I'm getting the data from provides a series of csh scripts for downloading in a unix environment, and it would be nice to use those to download straight to CloudStor.
Tim Dettrick
@tjdett
@DamienIrving Not easily. The closest I can offer is downloading to a machine that has a good connection to CloudStor. What's the upload speed like from your uni desktop?
Damien Irving
@DamienIrving
Not sure. Will test it out (probably tomorrow) and let you know.
Tim Dettrick
@tjdett
I can set you up with NeCTAR VM access if that would be easier. You're out and about today, I take it?
Damien Irving
@DamienIrving
I'm out and about this afternoon, but I think my desktop at uni should be fine. I'll give it a try tomorrow morning.
Tim Dettrick
@tjdett
OK. Let me know how you go. I can provision a NeCTAR VM with sufficient space if you need it.
Tim Dettrick
@tjdett
@DamienIrving Excluding temporary files, what do you think would be a good upper limit for a DIT4C container size?
I'm provisioning a compute node for you to do this long-term research test on, and I'm trying to put in place the sorts of resource constraints I'm expecting we'll need later.
Damien Irving
@DamienIrving
Hard to say. What size becomes a problem for you? I'm envisaging that a researcher would only keep the finished products of their workflows (e.g. small summary data files, images) and all intermediary files would be temporary, so you might be able to get away with as little as 500 MB?
(Apologies for the delay in getting data up on Cloudstor. This week is looking a little hectic for me, so it might be next week before I get a chance.)
Tim Dettrick
@tjdett
The only Docker backend that supports limiting disk usage (!) is "devicemapper", and it has a default total container size of 10GB. I'm wondering if I could drop that to 5GB without causing problems.
Damien Irving
@DamienIrving
I think 5GB would be plenty
Tim Dettrick
@tjdett
Our biggest images, like Slicer, have yet to reach 2.5GB in size, so that still leaves over 2GB of space for extra packages and files.
OK, we'll go with that for your test environment. If you run up against the limit, I'll manually save your container, change the size, and then reprovision it.
Tim Dettrick
@tjdett
OK, DIT4C running Autodesk Maya is our heaviest install yet, but it only reaches 2.7GB. That was my biggest worry for a 5GB limit.
@pmignone How big do Maya files get? Is it conceivable that your project files could be bigger than 2GB?
Paul Mignone
@pmignone
It is possible to get some scene files in the GB range, though I wouldn't worry about it for now.
Damien Irving
@DamienIrving
@tjdett When I'm on DIT4C, how do I access CloudStor?
Tim Dettrick
@tjdett
The best way is through the WebDAV interface. You'll need to set your sync password first:
https://cloudstor.aarnet.edu.au/plus/index.php/settings/personal
Then you can use the WebDAV endpoint:
https://cloudstor.aarnet.edu.au/plus/remote.php/webdav/
Tim Dettrick
@tjdett
In terms of WebDAV clients, cadaver can be installed with sudo yum install -y cadaver and is relatively easy to use.
EasyWebDAV looks like a good WebDAV library for Python, and is available through pip.
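For example, a minimal EasyWebDAV sketch for uploading a file to that endpoint (the username, sync password, and file names below are placeholders):

```python
# Minimal sketch: upload a file to CloudStor over WebDAV with EasyWebDAV.
# Credentials are placeholders -- use your AARNet login and the sync
# password set via the settings page linked above.
import easywebdav

webdav = easywebdav.connect(
    'cloudstor.aarnet.edu.au',
    protocol='https',
    path='plus/remote.php/webdav',
    username='you@unimelb.edu.au',   # hypothetical account name
    password='your-sync-password',   # the sync password, not your main one
)

webdav.upload('results.csv', 'results.csv')   # local path, then remote path
print(webdav.ls('.'))                         # list the remote directory
```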
Damien Irving
@DamienIrving
@tjdett Getting funny behaviour on resbaz.cloud.edu.au at the moment. When I try to switch the state of my work container (on the long-term-sandbox), it immediately switches itself off again.
Tim Dettrick
@tjdett
Hmm... it's doing it for my container too.
Tim Dettrick
@tjdett
@DamienIrving The problem appears to have been caused by my using a different Docker backend for this machine, one which allows container size limiting.
Disturbingly, this "devicemapper" backend is considered to be the most "stable" of the lot.
I'll let you know when it's fixed.
Damien Irving
@DamienIrving
No rush
Tim Dettrick
@tjdett
Looks like I've lost all the containers somehow. I'll admit I was trying new things, but I didn't think this was quite that dicey. I'm going to get things working and then see if I can identify what went wrong.
Tim Dettrick
@tjdett
Well, that bug was not nice. Apparently I was getting corruption of the thin pool on reboot of the CoreOS host. That wasn't the reason the containers weren't turning on, but it would have meant losing all the containers later anyway.
I have switched to the slower devicemapper configuration, which doesn't use LVM thin pools, and it seems to be working well.
Tim Dettrick
@tjdett
@DamienIrving Apologies for losing your container. Everything should be right to use again now.
bmeade
@bmeade
@tjdett Could you please give us a hand to set up the URL "Showcase2015.3dprinting.edu.au" and direct it to "digitalfabrication.unimelb.edu.au/3dshowcase/"?
Tim Dettrick
@tjdett
Hmm... you realise that will require a server to do a 3xx redirect, right? It's not a straight DNS job.
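To illustrate what "a server" has to do here: something must answer HTTP requests for the showcase hostname and reply with a redirect. A minimal Python sketch (not a production setup; a registrar redirection service or a web-server rewrite rule would do the same job):

```python
# Minimal sketch of a 3xx redirect server using only the standard library.
# Every GET is answered with a 301 pointing at the real page.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = 'http://digitalfabrication.unimelb.edu.au/3dshowcase/'

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)               # permanent redirect
        self.send_header('Location', TARGET)  # browser follows this header
        self.end_headers()

# Port 8080 for testing; the real service would listen on 80/443.
HTTPServer(('', 8080), RedirectHandler).serve_forever()
```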
bmeade
@bmeade
Oh, OK. I'm not really familiar with the process. What do we need to do/provide? I can set up a server, but it would be preferable to have the content sit in the UniMelb CMS.
Tim Dettrick
@tjdett
As noted with details in private chat, I don't have access to domainname.edu.au, so I can't make any of the required changes. It's possible the registrar has a URL redirection service for just this purpose, but I won't know until I can log in again.
Tim Dettrick
@tjdett
@bmeade OK, so now I have DNS access. Before we go any further though, do you realise that http://digitalfabrication.unimelb.edu.au/3dshowcase/ requires a UniMelb login?
bmeade
@bmeade
Sorry, that is because it isn't active yet - still under construction in the CMS. Do you need it to be live?