August 5, 2021
The React app part was pretty blissful in comparison. I ran create-react-app, I npm installed some components and I created front-end magic. I pointed it at my localhost and it could consume my APIs with `fetch`. I was displaying loading animations while the API worked, I used some deep-seated CSS muscle memory to lay stuff out - I had colours, it was fun! It was incredibly hard fun, but it was fun. It was nothing to show a front-end developer, but it was certainly better than the WordPress version I could have made. I was entranced. I’d update code and React would hot-reload it for me; I’d compile down to static HTML and React offered to webserve it to me. It looked as awful on my seven-inch tablet as it did on my UWHD monitor. It was really great.
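That loading-animation-while-the-API-works dance is a pattern more than anything else. A minimal sketch of it, outside React - the endpoint and the callback names here are my own invention, not anything from the real code:

```javascript
// Minimal sketch of the fetch-plus-loading-state pattern: show a spinner,
// call the API, hand the result (or the failure) back, hide the spinner.
// The URL and callback names are invented for illustration.
async function loadItems(apiUrl, { onLoading, onData, onError }) {
  onLoading(true); // show the loading animation while the API works
  try {
    const res = await fetch(apiUrl);
    if (!res.ok) throw new Error(`API responded ${res.status}`);
    onData(await res.json());
  } catch (err) {
    onError(err);
  } finally {
    onLoading(false); // hide the animation whether we succeeded or not
  }
}
```

In an actual React component the same shape appears as a `useState` loading flag flipped either side of the `fetch`.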
This is the part of any project built with newfangled local mocking and tooling where you start to wonder whether it can actually exist ‘up there’ in the Cloud - the inevitable part of the journey where you have to down tools again, pick up the inevitable Cloud spade and ladder, and start shifting things into some sort of cloud home, where you can find out how FUBAR the whole thing was from the start. At least you think so.
At Cutover, we practise Infrastructure as Code about as much as any similar-sized company - i.e. there are still some bits that aren't.
We’re still jerry-rigging things into production that could have been sliced out five years ago, that sort of thing. It’s normal.
But I have spent five years preaching, again, and not necessarily practising. With home projects it’s easier to be half asleep, clicking a mouse around and experimenting wildly. Again, most of the terraform on my home machine was from completing interview exercises, occasionally getting involved in open source, and very rarely (not recently) git cloning something from work to fix.
I had to sit there, look at my original whiteboard, look at what I’d done, and work out how it would all be in AWS.
It turns out knowing how the software works really helps with this. I think I needed four short .tf files and it was up. Working from local Lambda/SAM/API through terraform to AWS was a breeze. DynamoDB took another few minutes. I compiled the HTML and dragged and dropped it into AWS S3, into a (terraformed) static website bucket. I just had to work out how CORS worked, redeploy the APIs a few times, and my site was live.
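The CORS part boils down to the API sending back the right `Access-Control-Allow-Origin` headers, or the browser refuses to hand your front end the response. A sketch of what that looks like on the Lambda side with API Gateway proxy integration - the origin and the payload here are placeholders, not the real ones:

```javascript
// Placeholder origin - in practice this would be the site's own domain.
const ALLOWED_ORIGIN = "https://example.org";

// Lambda proxy-integration handler (wired up as exports.handler in AWS).
async function handler(event) {
  const corsHeaders = {
    "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
  };
  // Browsers send a preflight OPTIONS request before the real call;
  // it needs the same headers back, with no body.
  if (event.httpMethod === "OPTIONS") {
    return { statusCode: 204, headers: corsHeaders, body: "" };
  }
  return {
    statusCode: 200,
    headers: corsHeaders,
    body: JSON.stringify({ ok: true }),
  };
}
```

The "redeploy a few times" part is usually forgetting one of these headers, or forgetting that the preflight OPTIONS route needs to exist at all.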
It was amazing - one of the proud moments of my life where you think ‘wow I actually can do things’.
It was around the real-life-internet part of the project where I started using only real data and started worrying yet more about the integrity of it all. The container in Fargate was spinning around like clockwork but it would time out after 20 seconds of Chrome not finding the element. And that happened a lot. Fargate was also utilizing the equivalent of a t3.micro, and in Fargate that isn’t Free Tier.
Come to think of it, if they abstract the server away from you but still charge you for it, it’s not really any better than running the t3.micro for free - especially if you use Fargate so much that it barely powers anything down.
Serverless, schmerverless, I was bastardising the idea again.
I had my front end live at a domain, though, and clicking around between my now-live portal and the GivenGain website I was thinking that there must be a way to get the two to talk to each other. If there is no open API, then what is there? With my pentesting hat back on, I started really rinsing the packet inspection, first trying to fake the auth token, or anything else that would get around the calls back to GivenGain’s home base. But then I gave up, half because, well, charity, and half because I’d found a WebSocket.
Here was another grey area that I just hadn’t covered at the school of hard dox. What even was this thing people had been saying at work for 15 years? A what? I kept thinking of WinSock on Windows 3.1 and the big animated flag. That was what I had been daydreaming about for 15 years while people talked shop about WS://.
I did a Google and found out that if you have a WebSocket client and a server, they can hold an open, two-way conversation. Instant messaging software might use them. Your application’s frontend, backend and other services might be sending notifications via them. Hold on. That sounded very similar to loads of protocols and services I DO know about. Where had they been all my life? Right there, in the traffic.
By subscribing to this and listening for messages - suddenly - you become a server.
Even if it’s a Lambda or a Fargate container doing the waiting, it’s still waiting.
It’s still up and it’s still a server. Keeping a Lambda on 24/7 goes against the point, and it isn’t even cheap - the same goes for ECS or EKS in any implementation - it’s cheaper to run a t3.micro, and maybe better for the environment.
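The client half of that subscription really is only a few lines in a browser. A sketch - the URL and the message shape are invented here, the real ones belonged to GivenGain’s traffic:

```javascript
// Decide whether a frame from the socket is a donation event we care about.
// The {type, id, amount} shape is invented for illustration.
function handleMessage(raw) {
  const msg = JSON.parse(raw);
  return msg.type === "donation" ? { id: msg.id, amount: msg.amount } : null;
}

// Browser-side wiring: open the socket and listen for as long as the page lives.
function subscribe(url, onDonation) {
  const socket = new WebSocket(url);
  socket.addEventListener("message", (event) => {
    const donation = handleMessage(event.data);
    if (donation) onDonation(donation);
  });
  return socket;
}
```

The catch is exactly the one above: `subscribe` only works while something stays up to hold the socket open, which is what drags a server back into a "serverless" design.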
I did the math: because of the seven-second rule I could run a Lambda every six seconds for 1.5 seconds and I’d get all the messages - I think - but it still felt damn weird. In fact I did it for a night, and the table is still there, full of nasty not-really-serverless-feeling data I never used.
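For what it’s worth, the back-of-envelope version of that math, assuming a 128 MB function and a 30-day month:

```javascript
// Cost of polling a Lambda every 6 seconds, each run lasting 1.5 seconds.
// Assumes a 128 MB function and a 30-day month.
const intervalSec = 6;
const durationSec = 1.5;
const memoryGB = 128 / 1024;
const secondsPerMonth = 60 * 60 * 24 * 30; // 2,592,000

const invocationsPerMonth = secondsPerMonth / intervalSec; // 432,000
const gbSecondsPerMonth = invocationsPerMonth * durationSec * memoryGB; // 81,000

// AWS Lambda's always-free tier: 1M requests and 400,000 GB-seconds a month.
const withinFreeTier =
  invocationsPerMonth <= 1_000_000 && gbSecondsPerMonth <= 400_000;
console.log(invocationsPerMonth, gbSecondsPerMonth, withinFreeTier);
```

By that arithmetic the polling would actually have squeaked inside Lambda’s always-free tier - the objection was architectural more than financial.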
I gave up on this socket and went back to the SMTP checker which, although quaint, was fundamentally serverlesser, and, never mind.
So there I was - my full stack existed. People would go to my domain and see the flashing lights and pretty colours and click some components that would lead them off to GivenGain to pay money. Once back to my domain they’d be able to swap their Donation ID for one of my items, which would be removed from availability and change colour. It was a lot of fun and fairly amateur. I was extremely happy and the donations started rolling in. People commented on the Geocities aesthetic and occasionally got a 3/4/500 error from somewhere. GivenGain one day changed the formatting of the email and my whole backend fell over. This happened the week of the threatening email and I assumed they were linked and called the project closed.
I was relieved to shut down as I’d run out of the energy it required to maintain and update a website - but it had done its job and I had learned a hell of a lot that I never would have during my day job. All using tools I’d learned at/for work.
To top it off I’d raised around $1k USD for the charity. And hardly spent a thing.
Something something serverless.
It turned out I had made a schoolboy error when creating the DynamoDB resources in terraform. AWS offers Free Tier on provisioned capacity - capacity that is free only for however long your Free Tier lasts, then billed whether you use it or not. Mine ran out, without me realising, in January 2021.
Before I’d worked out what was going on (some Cloud Engineer, right!) I’d been charged $3.50 a day for the 10 or so tables I had kicking about. For 10 or so days while I rested over New Year.
I had billing alerts turned on and I’d checked when I had my first alert but I couldn’t tell why it was costing me money. I found an old EC2 t3.micro jumpbox I’d left in eu-west-2 and closed that down.
It wasn’t that.
I looked at the DynamoDB charge and scratched my head - was someone ragging the endpoints, causing untold amounts of I/O? I couldn’t find any evidence. I switched the tables to on-demand pricing and the charges stopped.
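The fix itself is one attribute on the table resource. A sketch in terraform - the table name and key here are invented - swapping the AWS provider’s default `PROVISIONED` billing mode for on-demand:

```hcl
# Hypothetical table from the stack - the fix was the billing_mode line.
resource "aws_dynamodb_table" "donations" {
  name         = "donations"
  billing_mode = "PAY_PER_REQUEST" # on-demand: pay per read/write, nothing while idle
  hash_key     = "donation_id"

  attribute {
    name = "donation_id"
    type = "S"
  }
}
```

With `PAY_PER_REQUEST` you also drop the `read_capacity`/`write_capacity` settings, which is exactly the provisioned capacity that keeps billing after the Free Tier ends.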
January 2021 was one of those long months where you don’t get paid from mid-December until the end of January, and I put off paying Amazon the 41 dollars until today - the threatening emails finally stopped.
I will, one day, develop for Cloud again, but next time I’ll read the docs. Finally.