Cloud Consolidation Part Two, AWS Boogaloo

April 13, 2020

Part one discussed the reasoning behind consolidating; part two looks at the current configuration.

When I reviewed the post that’s now part one, I realised I never gave a proper technical overview of what’s currently running on AWS. So, I decided to write a part two covering the current footprint and my plans for where I want to go with AWS. It’s also a nice sanity check to be sure I remember everything I wanted to do cloud-wise!

Configuration Pre-Centralisation

As I hinted at in part one, I had some existing components already configured on AWS. Most of this was from when I was primarily self-employed, but when that wrapped up, I kept the lights on since by that stage I could pay the bills myself and I was in a fierce learning mood. I’ll start from the bottom up and explain what’s what.

  • Regions – Everything I run lives in the Ireland region, for the natural reason that I’m based in Ireland. If I ever need cross-regional assets, Frankfurt or Stockholm would be the second choice.
  • Networking – I had one Virtual Private Cloud (VPC) configured for pretty much all my resources. It contained six subnets in total: one public and one private for each of the three availability zones within the Ireland region. I chose NAT instances over NAT gateways for handling address translation, entirely for cost reasons: there’s a tiny amount of traffic in my VPCs, so I can run t3.nano instances at a fraction of the cost of a NAT gateway. Security groups also live here, and I generally categorise them into Personal and External. Personal means only I need access, so they can be firewalled to my own IP ranges. External means the wider Internet needs to reach them; but since I use Cloudflare, that security group is firewalled to Cloudflare’s IP addresses so nobody can bypass the proxy and hit the origin directly (see the first sketch after this list).
  • Compute – As mentioned, there are three t3.nano NAT instances running 24/7/365. I also tend to run Minecraft servers for my friends and me when we get the itch to play. Vanilla Minecraft usually runs on a t3.small, while a modpack gets a t3.medium. I was also running another t3.small for Visual Studio Code Remote SSH, but that job has since been passed to my XPS 13. Reserved Instances were bought for anything long-term, instances that needed them were given Elastic IP addresses, and anything very experimental usually ran as a Spot Instance. Remember, the majority of my external-facing websites are not yet on AWS – they’re still handled by DOK8s.
  • Storage – EC2 instances obviously have their EBS volumes, but for other object-based storage I’m naturally using S3. There’s not a lot to it; it just works, as they say!
  • DNS – Cloudflare handles my DNS, but health checking of external websites is done by Route 53 today (see the second sketch after this list). I’ve used it extensively in the past, and it’s another example of something that just works.
  • Developer Tools – I mostly use GitHub Actions these days, but I have prior experience with the AWS Developer Tools suite, including CodeBuild and CodePipeline. As we’ll see in today’s configuration, I’m beginning to reuse CodeBuild with EKS. As I move towards developing Lambda functions, I expect to use more of the AWS Developer Tools suite, since these things tend to play nicely together out of the box.
  • Monitoring – Today I just use basic CloudWatch metrics, but in the future I want to move all of this towards an Elastic Stack deployment.
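
To make the “External” security group idea concrete, here is a minimal boto3 sketch of locking a group down to Cloudflare’s published IPv4 ranges. The group ID is a placeholder and the ranges are a truncated illustration, not the full set; the authoritative list lives at https://www.cloudflare.com/ips/.

```python
# Sketch: restricting an "External" security group to Cloudflare's ranges.
# The group ID and the CIDR list below are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

EXTERNAL_SG_ID = "sg-0123456789abcdef0"  # hypothetical security group ID

# Subset of Cloudflare's published IPv4 ranges, truncated for brevity.
CLOUDFLARE_RANGES = ["173.245.48.0/20", "103.21.244.0/22", "104.16.0.0/13"]

ec2.authorize_security_group_ingress(
    GroupId=EXTERNAL_SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": cidr, "Description": "Cloudflare"}
                for cidr in CLOUDFLARE_RANGES
            ],
        }
    ],
)
```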
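
And for the Route 53 side, a rough sketch of creating a health check against an external website with boto3. The domain here is a placeholder, not one of my real sites.

```python
# Sketch: a Route 53 health check for an external website over HTTPS.
import uuid

import boto3

route53 = boto3.client("route53")

route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # must be unique per request
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "example.com",  # placeholder domain
        "Port": 443,
        "ResourcePath": "/",
        "RequestInterval": 30,  # seconds between checks
        "FailureThreshold": 3,  # consecutive failures before unhealthy
    },
)
```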

It’s funny how much is still there when you see it in text form! I look at it with a sense of pride though; all of it has served me well, and it makes me happy to start building again and expanding my footprint. All in all, what I’m running today doesn’t change a whole lot. Remember, I was just trying to shift my Kubernetes workloads.

Configuration Post Centralisation

  • Networking – My Kubernetes cluster was failing to create successfully within my existing VPC for reasons beyond my comprehension. As a result, a new VPC was created, pretty much identical to the existing one save for CIDR changes.
  • Compute – My compute footprint expanded with the new NAT instances and the new worker nodes for my cluster. I’m currently running two t3.medium instances, for a total of four vCPUs and eight GB of RAM. I’m still in an evaluation phase for my compute, so this may change before I commit to Reserved Instances.
  • Containers – An EKS control plane is now running in the new VPC, using managed node groups for worker nodes (see the first sketch after this list). Given the cost associated with it, this should be the only control plane I run, save for experimental clusters that live for a maximum of eight to 24 hours.
  • Container Registry – I moved from GitHub Package Registry to Elastic Container Registry (ECR) as part of this move. It’s just easier and means I’m not dealing with extra secrets for image pulls within Kubernetes, since the worker nodes’ IAM role grants pull access (see the second sketch after this list). I also avoid any data transfer out charges from GitHub for the Package Registry.
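
For the curious, a managed node group like the one above could be created with a boto3 call along these lines. Every name, subnet ID and role ARN here is a placeholder, and this is a sketch rather than my exact setup.

```python
# Sketch: creating an EKS managed node group. All identifiers below are
# placeholders, not my real resources.
import boto3

eks = boto3.client("eks", region_name="eu-west-1")

eks.create_nodegroup(
    clusterName="consolidated-cluster",  # placeholder cluster name
    nodegroupName="general-workers",
    scalingConfig={"minSize": 2, "maxSize": 2, "desiredSize": 2},
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # private subnets
    instanceTypes=["t3.medium"],  # matches my current sizing
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",  # placeholder
)
```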
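
On the ECR side, the worker nodes pull images via their IAM role, so no Kubernetes secrets are involved. For local docker pushes you still need a short-lived token, which boto3 can fetch; a sketch:

```python
# Sketch: fetching a short-lived ECR auth token for local docker pushes.
# On the EKS worker nodes this step isn't needed at all, since the node
# IAM role grants pull access directly.
import base64

import boto3

ecr = boto3.client("ecr", region_name="eu-west-1")

auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = (
    base64.b64decode(auth["authorizationToken"]).decode().split(":")
)
registry = auth["proxyEndpoint"]

# In real usage, pipe the password into docker rather than printing it.
print(f"docker login --username {username} {registry}")
```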

All in all, the move was painful. I’ll probably write a separate post about it, but EKS proved very troublesome, and I still struggle today with IAM and with interacting with the cluster in a manner that doesn’t involve using my personal user directly. I think I’ve found a solution though, so you’ll probably see another blog post detailing exactly what I’m doing here in terms of Kubernetes and automation.
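
As a hint at the direction I’m leaning, one plausible shape for that solution is a dedicated IAM role that gets assumed for cluster access instead of my personal user. A sketch, with a placeholder role ARN; the role would also need to be mapped in the cluster’s aws-auth ConfigMap.

```python
# Sketch: assuming a dedicated IAM role for cluster access rather than
# using long-lived personal credentials. The role ARN is a placeholder.
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/eks-admin",  # placeholder
    RoleSessionName="kubectl-session",
)["Credentials"]

# These temporary credentials can then back `aws eks get-token` or
# `aws eks update-kubeconfig` instead of my personal access keys.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```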

Configuration in The Future?

This overall footprint will change massively if I complete all the side projects I want to complete. Naturally, I hope to write and talk about them all. Things that come to mind include a dedicated monitoring service built on the Elastic Stack, and large-scale orchestration for when I want to write applications distributed across Lambda functions. All in all, the aim is a setup where any new application I want to create or work on can be built, monitored and deployed with relative ease. Yes, there are numerous solutions out there that will do this for me, but I want to learn from the ground up and build it myself, since that’s what I love to do!

Thank you!

You could have consumed content on any website, but you went ahead and consumed mine, so I'm very grateful! If you liked this, then you might like this other piece of content I worked on.

My frustrations with AWS

Photographer

I've no real claim to fame when it comes to good photos, which is why the header photo for this post was shot by Taylor Vick. You can find more photos from them on Unsplash. Unsplash is a great place to source photos for your website, presentations and more! But it wouldn't be anything without the photographers who put in the work.

Find Them On Unsplash

Support what I do

I write out of the love and passion I have for technology. Just reading and sharing my articles is more than enough, but if you want to offer more direct support, you can help with the running costs of my website by donating via Stripe. Only do so if you feel I have truly delivered value; as I said, your readership is more than enough already. Thank you :)

Support My Work
