Re: wrench, did you see cloudspannerecosystem/wrench#21?
lgruen/yo. Could you reconfirm it?
Is all that's needed this set of environment variables?
If you could send another description snippet for the website, that would be great!
@skuruppu No problem at all.
Most of our teams employ some sort of distributed processing in our services, and distributed locking is quite a common technique in this area. We use some of the better-known systems such as Zookeeper and Consul. These systems are quite difficult to maintain on their own, though, so any option that is easier to maintain is welcome to us. Redis is also common in our backends, and I even have a Redis-based distributed locking library, https://github.com/flowerinthenight/kettle, that we use as well. But the Redis version is not really that reliable, as described here. This leads me to:
We use Spanner heavily as one of our main databases, among others. So if I could do distributed locking with Spanner itself, I wouldn't have to maintain a separate Zookeeper cluster just for locking. I think TrueTime is crucial here and may even make this more reliable than the Redis version; time will tell. So far, our Zookeeper clusters have been the most reliable, but I think this library can be just as reliable, and so far it has been.
Would it be possible to break up the Run function into a couple of separate functions? The current nesting of a combination of functions and loops is quite deep, and that makes it difficult to grasp exactly what is going on there.
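As a sketch of what that refactoring could look like (all names here are hypothetical, since the actual Run implementation isn't quoted in this thread), each step of the loop body can be pulled into a small named function so the loop itself stays flat:

```go
package main

import (
	"fmt"
	"time"
)

// tryAcquire stands in for the INSERT-based acquisition attempt.
// Placeholder logic only; the real version would talk to Spanner.
func tryAcquire() bool { return true }

// heartbeat stands in for the periodic timestamp update by the owner.
func heartbeat() {}

// runOnce is a single iteration of the main loop: attempt to take the
// lock if we don't hold it, otherwise heartbeat. Naming each step keeps
// the nesting shallow and each piece testable on its own.
func runOnce(held bool) bool {
	if !held {
		return tryAcquire()
	}
	heartbeat()
	return true
}

func run(iterations int, interval time.Duration) bool {
	held := false
	for i := 0; i < iterations; i++ {
		held = runOnce(held)
		time.Sleep(interval)
	}
	return held
}

func main() {
	fmt.Println("held at exit:", run(3, time.Millisecond))
}
```

The point is only the shape: a top-level loop that delegates to named helpers, rather than several levels of inline closures and nested loops.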
@skuruppu For #2, yes, I was looking into the possible use of the emulator for tests but haven't had the time to try to implement it yet.
@olavloite Thanks very much. One record equals a single lock. Since the name column is the primary key, it should be unique. Throughout the duration of the lock lease, that single record (the one grabbed during the initial attempt) is used, with the heartbeat and timestamp columns being updated periodically. During the initial attempt, it is important to use INSERT, not InsertOrUpdate, to make sure that only one client/process succeeds and all the others fail. The same goes for the succeeding attempts to grab the lock: they need to use INSERT, not InsertOrUpdate. The cleanup part is actually not necessary; it's just there to keep the table contents small, and it is done by the current lock owner. If it were removed and we had, say, a lock duration of 1 second, a new record would be added to the table every second. This bit could probably use a better implementation.
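A minimal in-memory sketch of why INSERT (which fails on a duplicate primary key) rather than InsertOrUpdate yields exactly one winner. The Spanner table is simulated here with a mutex-guarded map, and all names are made up for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errAlreadyExists = errors.New("row already exists")

// lockTable simulates the Spanner table: the lock name is the primary
// key, so a plain INSERT fails when a row with that name already exists.
type lockTable struct {
	mu   sync.Mutex
	rows map[string]string // lock name -> current owner
}

// insert mimics Spanner's INSERT mutation: it returns AlreadyExists
// when the key is taken, so at most one client can win the lock.
func (t *lockTable) insert(name, owner string) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.rows[name]; ok {
		return errAlreadyExists
	}
	t.rows[name] = owner
	return nil
}

// insertOrUpdate mimics InsertOrUpdate: it always succeeds, which is
// exactly why it cannot be used for acquisition -- every client would
// "win" and silently overwrite the current owner.
func (t *lockTable) insertOrUpdate(name, owner string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.rows[name] = owner
}

func main() {
	table := &lockTable{rows: map[string]string{}}

	// Five clients race to insert the same lock row concurrently.
	var wg sync.WaitGroup
	var winners sync.Map
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			if err := table.insert("mylock", fmt.Sprintf("client-%d", id)); err == nil {
				winners.Store(id, true)
			}
		}(i)
	}
	wg.Wait()

	count := 0
	winners.Range(func(_, _ any) bool { count++; return true })
	fmt.Println("winners:", count) // exactly one client acquires the lock
}
```

With insertOrUpdate in place of insert, all five clients would report success, which is the failure mode the comment above is warning about.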
For the second point, yes, you're right; it could do with some breaking down. I'll make some improvements when I can find spare time. I just have to make sure the changes won't affect our current production setup.
Thanks @flowerinthenight, we would like to have some tests before we publish it in the repo.
For the publishing process, I think we can create a repo in the ecosystem and then do a repo transfer. I will have to go through some internal processes to create the repo. Note that you and anyone else who has committed to the repo will need to sign a Google CLA (https://cla.developers.google.com/about/google-individual). Would this be OK with you and the other contributors?