Yes for the Dex. Quit and restarted the app on the phone. With the previous uploader, it was the uploader that would stop doing anything until I pressed the center Dex button, which the Dex would take into account immediately. Here, it's the Dex itself that seems frozen. Had it twice today and twice yesterday (my son seems to be going low 6-7 hours after his Lantus).
Has anyone reported a bug with the way the REST API uploader uploads the devicestatus for the battery indicator on Pebble? I found that the devicestatus documents are populated differently, and when the Pebble is pointed to an API site, the battery remains at 99%, even though the document shows the accurate battery level, i.e. 31%. Below is a paste of the devicestatus document for the URI and the API respectively.
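For anyone wanting to compare the two shapes, here's a minimal sketch of a battery extractor that tolerates both. The exact field names are my assumption about how the two uploaders differ (a flat `uploaderBattery` vs. a nested `uploader.battery`); check your actual pasted documents against it:

```python
def uploader_battery(doc):
    """Pull the uploader battery % out of a Nightscout devicestatus document.

    Assumption: one uploader writes a flat 'uploaderBattery' field, the
    other nests it as {'uploader': {'battery': ...}}. A reader that only
    checks one shape would miss the other and fall back to a stale value.
    """
    if "uploaderBattery" in doc:
        return doc["uploaderBattery"]
    uploader = doc.get("uploader")
    if isinstance(uploader, dict):
        return uploader.get("battery")
    return None  # neither shape present

# e.g. both of these should yield 31:
#   uploader_battery({"uploaderBattery": 31})
#   uploader_battery({"uploader": {"battery": 31}})
```

If the Pebble-facing endpoint only reads one of the two shapes, that would explain the stuck 99% even while the stored document shows the real level.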
Just verified my two uploaders. The URI is pointing to my Mongos on Rackspace, and the API is going to the site that points to AWS's Mongos. So the only thing I can think of is that an early setup may have uploaded incorrect info, but I would have thought that would just clear out after 48 hours or something. It has been like this for at least a month or more, since around when the last big Azure/Mongo outage occurred on a weekend night a while back.
Yeah - I've got a few more than most, with a couple of DR/test sites. I'm also testing UptimeRobot, and there was an alert on one of the MongoDB ports on AWS that I was monitoring the other day; it was down for 10-15 minutes at about 6am and I totally missed the emails. Want to see if that truly will detect a MongoDB issue/outage. Maybe I'll switch to text alerts, but I just don't care to wake the Mrs. in the middle of the night with a non-relevant text message. Maybe I'll do that on the main one that we use for regular home "production" use.
I realistically could do two watch faces, couldn't I? It would be good to try that on all three of our watches; it would be a good check of two different datasets, and also good for an outage scenario. Hmm... once the Pebble debacle smooths out I'll give that a whirl.
It's interesting seeing the response times to Mongo on the 3 carriers. I'm impressed that the Mongos on Rackspace DFW are at 0-3 milliseconds, while AWS US-EAST-1 and Azure EASTUS hover around 31, though AWS is more stable around 30-31 than Azure. RS is quick.
But what I don't know is where UptimeRobot is sending the monitors from.
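As a sanity check independent of UptimeRobot's probe locations, a port monitor is basically just a timed TCP connect. Here's a minimal sketch of one you could run from your own box to measure connect latency to each Mongo host (host/port values are placeholders, not anyone's real setup):

```python
import socket
import time

def check_tcp(host, port, timeout=5.0):
    """Attempt a plain TCP connect (what a port monitor does) and time it.

    Returns (reachable, seconds). No Mongo handshake is performed; this
    only tells you the port accepts connections and how long that took.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start

# e.g. check_tcp("my-mongo.example.com", 27017)  # hypothetical host
```

Running this from home vs. from a cloud VM would show how much of the 0-3 ms vs. ~31 ms spread is just network distance from wherever the probe originates.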
MongoLab had a bit of downtime for the reboot of their AWS servers. I first thought it was because of the Bash issue, but it was for the Xen issue that wasn't public yet at that time. They did announce the downtime on their status page, but not the precise time. I run the Node server on one of my local servers in Belgium and can run MongoDB there as well, but on the whole MongoLab is the most reliable piece of the puzzle IMHO.