Saturday, April 4, 2020

Distributed Locking from Python while running on AWS

In this day and age of eventually consistent web-scale applications the concept of locking may seem very archaic. However, in some instances attempting to obtain a lock, and failing fast when it cannot be obtained within a limited window, can prevent dogpile effects for expensive server-side operations, or prevent overwriting the work of already-executing long-running tasks such as ETL processes.
I have used three basic approaches to create distributed locks on AWS with the help of built-in services, accessing them via Python, which is what I build most of my software in.

File locks upgraded to EFS

File-based locks on UNIX filesystems are very common. They are typically created using the flock system call, available in Python via the OS-specific fcntl.flock API. Also check out the platform-independent filelock library. This is well and good for a VM or a single application instance. For distributed locking we will need EFS as the filesystem on which these locks are held; the Linux kernel and NFS will use byte-range locks to simulate locally attached filesystem-style locks. However, if the client loses connectivity the NFS lock state cannot be determined, so run that EFS setup with enough redundancy to ensure connectivity.
File locking this way is very useful if we are using EFS to hold large files and process data anyway.
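As a rough sketch of this approach, the snippet below wraps a non-blocking exclusive flock in a try/except pair; the lock-file path is an assumption, and on EFS it would live on the shared mount:

```python
import fcntl
import os
import tempfile

# Hypothetical lock file; on EFS this would sit on the shared mount.
LOCK_PATH = os.path.join(tempfile.gettempdir(), "etl-job.lock")

def try_lock(path=LOCK_PATH):
    """Try to take an exclusive, non-blocking flock on `path`.

    Returns an open file descriptor on success, or None if another
    holder (a process, or another client on the NFS/EFS mount) has it."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        os.close(fd)
        return None

def unlock(fd):
    """Drop the lock and close the descriptor."""
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
```

The non-blocking flag is what gives the fail-fast behaviour: a worker that cannot get the lock simply skips the task instead of piling up behind it.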

Redis locks upgraded to ElastiCache

Another popular pattern for holding locks in Python is using Redis. In the cloud-hosted scenario this can be upgraded to ElastiCache for Redis, which pairs well with the python-redis-lock library.
Using Redis requires a bit of setup and is subject to network vagaries similar to EFS. It makes sense when already using Redis as an in-memory cache for acceleration, or as a broker/results mechanism for Celery. Having data encrypted at rest and in transit may require running an stunnel proxy.
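Under the hood, libraries like python-redis-lock build on Redis's SET with the NX and PX options. A minimal sketch of that pattern is below; it works with any redis-py-compatible client, and the lock name is an assumption. Note the release path shown here is simplified; the real libraries do it atomically with a Lua script:

```python
import uuid

def acquire(client, name, ttl_ms=30_000):
    """Try to acquire a lock by writing a unique token with SET NX PX.

    `client` is any redis-py compatible client. Returns the token on
    success (keep it for release), or None if the lock is already held.
    The PX expiry ensures a crashed holder cannot wedge the lock forever."""
    token = uuid.uuid4().hex
    if client.set(name, token, nx=True, px=ttl_ms):
        return token
    return None

def release(client, name, token):
    """Release the lock only if we still own it.

    Caveat: this get-then-delete is racy; production code (and
    python-redis-lock) performs the check-and-delete atomically
    via a Lua script."""
    value = client.get(name)
    if isinstance(value, bytes):
        value = value.decode()
    if value == token:
        client.delete(name)
        return True
    return False
```

A second worker calling `acquire` on the same name while the lock is live gets `None` back and can skip the task, which is exactly the dogpile-prevention behaviour described above.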

An AWS-only Method - DynamoDB

A while ago AWS published an article on creating and holding locks on DynamoDB using a Java lock client. This client creates the lock and keeps it alive using heartbeats while the relevant code section executes. It has since been ported to Python, and I maintain my own fork.
It works well and helps scale out singleton processes run as Lambdas to multiple Lambdas in a serverless fashion, with a given Lambda quickly skipping over a task another Lambda is holding a lock on. I have also used it on EC2-based systems where I was already using DynamoDB for other purposes. This is possibly the easiest and cheapest method for achieving distributed locking. Testing this technique locally is also quite easy using DynamoDB Local in a Docker container.
Feel free to ping me with other distributed locking solutions that work well on AWS and I will try them out.

Friday, January 24, 2020

Testing the OrangeCrab r0.1

After hassling Greg Davill for a while on Twitter and admiring his OrangeCrab hardware, I managed to catch up with him in person in Adelaide. I was away in Nairobi until October last year, then spent a brief few days in Adelaide before coming over to Canberra to take up a position at Geoscience Australia. The new gig demands much less time than the start-up world and hopefully will allow more time for blogging and board bring-ups like this one.

I caught up with Greg at a Japanese restaurant in Rundle Mall and was treated to his now-trademark LED cube and LED icosahedron. They are insanely detailed pieces of work and deserve staring at. However, I am most grateful for the care package he left me: an OrangeCrab r0.1. This is an ECP5 board in the Feather form factor, with an ADC built in to respect the Analog In pins on the Feather. My aim for this board is to host some energy-monitoring code on the FPGA with a fast (4 Msps or so) ADC, perform power/energy calculations in some of the LUTs/DSPs, and have a soft CPU push the data out.
Greg also left me a home-made FTDI-based board to use as a JTAG programmer. The whole setup requires three USB cables:

  • To plug in and power the FPGA board (eventually it should also be programmable via this port)
  • To attach a USB-serial converter and watch the console when the gateware comes up
  • To program the board over JTAG using an FTDI chip


Getting firmware compiled is getting easier these days, but Greg had done his initial testing with Lattice Diamond. I managed to install it in WSL and promptly ran into a tonne of issues, the weirdest being its close coupling to bash; Ubuntu actually uses dash as its default shell. You can get a Diamond licence and help support the integration of Diamond into litex-buildenv.
I was about to give up when Greg got it working with the open-source toolchain and NextPNR-ECP5. By then I had set up litex-buildenv to support the OrangeCrab, so getting some gateware on was relatively easy. Then I got stuck on a RAM timing bug in LiteX until a few hours ago, when I tested out some new gateware.

Also check out Greg's foboot fork and help make programming over USB possible, reducing the setup by one USB cable. More work on this, including getting MicroWatt running, is coming soon.