Why do you have servers?
Well it’s not to run the occasional dusty old blog… for that we have a server. Just one. But multiple servers? We need them so our portals, platforms, and apps can run, and keep on running – even when usage peaks.
That’s why we have servers.
But what do they really do for us?
When you strip away all the complexity, the basic thing that application servers do for us is to run our code. The big buzzwords bandied about in boardrooms (highly available, scalable, resilient, fault-tolerant, etc.) are just fancy ways of saying that we want to run our code and keep it running. Consistently. And no matter the state of the network or the growth in the number of users.
So we get multiple servers. We hire sysadmins and DevOps engineers to manage them. We put them behind load balancers. We rebuild our systems with zero affinity. We make them stateless. We split them into microservices. We containerise the microservices. We establish rules and thresholds that alert sysadmins to deploy more servers proactively. Our DevOps engineers build funky automation chains so these new servers join our cluster, register with the load balancers, and start serving. We run container management tools to scale our ever-growing swarm of containers across all our servers. And eventually, if we are very, very clever, we virtualise completely and replace around half of our huge team of sysadmins and DevOps engineers with orchestration tools that deploy new servers for us automatically.
And that’s just the application servers! We go through a similar exercise for our database servers, file storage, messaging, and more. Each brings its own raft of challenges and each needs its own teams of experts. But we bite the bullet and within a few years of paying these salaries we get it done! Phew!
We finally have a fully scalable system! Our uptime is above 99% (excluding maintenance windows). A majority of our sysadmins sleep most nights. We’ve done great, haven’t we?
Well no. Not a bit. Not even a little.
You think you need servers? Think again. All the tooling, the staff, the automation... that stuff is hard. But it is a solved problem. And it is not your problem.
Set yourself free.
Huh? Why is a technology post spouting aphorisms? Because that is truly what it feels like.
Let me explain.
I have a reasonable background in IT. Come September, I’ll have been building technology solutions commercially for 30 years. But I’m not “old guard” – my work is all about best practice, so I’ve tended to stay closer to the front of my profession… not always the forefront, because that carries its own risks, but perhaps just behind – the point at which bleeding edge becomes leading edge.
I’ve always built systems to the goals of being resilient, secure, scalable, and highly available… in theory. In practice, however, many past clients could not afford the sheer engineering load required to attain those goals, so we got as close as we could within the budget we had. Clients walked away happy with their shiny new app or website, but without a clear understanding of the wide-ranging implications should their product be wildly successful. This has always left me a little hollow and dissatisfied.
No more!
I decided that enough was enough. In 2017 my brother and I started App Factory Store with the goal of addressing this problem in an affordable way, without compromising anything else.
As Lead Solutions Architect, every day I design solutions that tick all these boxes (and some new ones) but at a price point that is nearly as affordable as doing things “the old way”.
We don’t have an army of system administrators, a bunch of DevOps engineers, or any DBAs. We simply don’t need them. We just have developers. The developers write code, and that code runs. And it keeps on running regardless of load.
Best of all, our clients don’t need any engineering staff to keep these apps and websites running. Yes, you read that right. We build zero-management solutions that simply keep running on their own. But what happens, you ask, if the product is featured on TechCrunch, or CNN, and there are 300 million new users in 3 hours? Well, nothing much happens… Our clients’ infrastructure costs may go up some. Their income will go up a lot. But that’s about it. The system just keeps running.
This is where I lose a lot of people. They shake their heads and walk away, believing that it is just a marketing claim, because it cannot possibly be true. But it is true. We build these systems every day.
The landscape has changed. The tools have changed. The game has changed. There is a road to scalability. And it is now paved.
So how do we do it?
We achieve our goals of scalability and high availability by leveraging so-called “serverless” technologies. The name “serverless” doesn’t mean there are no servers; it just means they are not our problem. Nor our clients’ problem.
At the core of “serverless” is the ability to execute code in the cloud. We have no servers to manage, only blocks of code. Under the hood (and invisible to us), whenever needed, a containerised environment is established for a block of code, and the code is executed. When it is done running that chunk of code, the same environment can handle the next request. Or if things go quiet, then after a bit of idle time the environment is destroyed. If things get busy, more containers are spun up to handle additional requests. In this way we can do pretty much anything we want, to whatever scale we want.
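To make that concrete, here’s a minimal sketch (in Python) of the shape such a block of code takes on a function-as-a-service platform – AWS Lambda, which we’ll meet below, uses exactly this handler style. The function name and payload are purely illustrative. Code at module level runs once when a container is created; the handler runs once per request, so a warm container reuses that setup until the platform recycles it.

```python
import time

# Module-level code runs once per container (the "cold start").
# While the container stays warm, later requests reuse it.
CONTAINER_STARTED = time.time()
invocation_count = 0

def handler(event, context):
    """Entry point the platform calls for every request.

    'event' carries the request payload; 'context' carries runtime
    metadata such as the request id and remaining execution time.
    """
    global invocation_count
    invocation_count += 1
    return {
        "containerAgeSeconds": round(time.time() - CONTAINER_STARTED, 1),
        "invocationsOnThisContainer": invocation_count,
    }
```

Call it a few times in quick succession and the counter will usually climb, because the same warm container served the requests; under heavy concurrency more containers are spun up, each with its own counter, and when things go quiet they quietly disappear.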
One of the largest cloud vendors, Amazon Web Services (AWS), has an impressive fleet of serverless products, so I’ll use them in the example below.
Consider a typical mobile app. It has a front-end, which runs on the user’s mobile device, and a back-end, which the front-end sends requests to. The back-end receives and handles those requests, stores or processes data, and returns a response that the front-end often displays to the user. The more popular the app, the more concurrent requests hit the back-end. If the back-end cannot scale to handle the additional load, the front-end won’t get the answers it needs in good time (or at all) and the app becomes useless. When that happens, users will likely ditch it and not bother with it again.
We build our scalable back-ends out of the following AWS products, none of which need ANY server management whatsoever:
- Lambda: This is where we keep our code, which is executed on demand and scaled on demand. No servers to manage. The basic unit in Lambda is a code project, typically deployed as a collection of code files (see the first sketch after this list).
- DynamoDB: We use this for storing data. It is a NoSQL database, but not a centralised one, and there are no servers for us to manage. The basic unit is a data table, and access to it can scale on demand.
- Aurora Serverless: Aurora is Amazon’s managed relational database product. The serverless option decouples compute from storage, so that our MySQL-compatible databases can scale on demand.
- API Gateway: This provides an entry point for our app. Requests are sent here and then routed to the appropriate Lambdas. Here we can implement authentication, throttling, and other security measures. It can scale on demand.
- S3: Amazon’s Simple Storage Service is for storing and retrieving files. Our Lambdas can put files here, and so can the app front-end. It is durable and scalable, and the basic unit is a file (within a collection called a “bucket”).
- Streams: These capture data changes as sequences of events that trigger Lambdas. Streams allow us to make our solutions event-driven and to separate larger projects into microservices.
- SNS: The Simple Notification Service extends our event-driven architecture, letting Lambdas notify other Lambdas, external processes, or even humans.
- SQS: The Simple Queue Service lets us queue events or requests for later processing. Lambdas can queue a request and immediately return, so the app doesn’t have to wait; the queued action then takes place in its own time (the second sketch after this list shows this pattern).
- SES: Amazon’s Simple Email Service lets us send significant amounts of email without running and tweaking our own email servers, which are usually a huge pain to manage.
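To show how the first few of these fit together, here’s a hypothetical sketch of a Lambda sitting behind API Gateway and writing to a DynamoDB table. The table name (“Notes”), the route, and the field names are made up for illustration; the calls are the standard boto3 AWS SDK for Python.

```python
import json
import uuid

import boto3

# One DynamoDB handle per container, reused across warm invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Notes")  # hypothetical table name

def create_note(event, context):
    """Invoked by API Gateway for a hypothetical POST /notes route.

    API Gateway passes the HTTP request in as 'event'; we store the
    note in DynamoDB and hand back an HTTP-style response dict.
    """
    body = json.loads(event.get("body") or "{}")
    note = {
        "noteId": str(uuid.uuid4()),
        "text": body.get("text", ""),
    }
    table.put_item(Item=note)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(note),
    }
```

There is no web server anywhere in that picture: API Gateway turns the HTTP request into the event dictionary and turns the returned dictionary back into an HTTP response, and DynamoDB takes the write without us provisioning a database server.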
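And here’s a second sketch showing the event-driven side: one Lambda drops a job onto an SQS queue and returns immediately, and another Lambda, triggered from that queue, sends a welcome email through SES. The queue URL and email addresses are placeholders, not real endpoints.

```python
import json

import boto3

sqs = boto3.client("sqs")
ses = boto3.client("ses")

# Placeholder queue URL and sender address, for illustration only.
QUEUE_URL = "https://sqs.ap-southeast-2.amazonaws.com/123456789012/welcome-emails"
SENDER = "hello@example.com"

def enqueue_welcome_email(event, context):
    """Front-line Lambda: queue the work and return straight away."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"to": event["email"], "name": event["name"]}),
    )
    return {"queued": True}

def send_welcome_email(event, context):
    """Worker Lambda: triggered by the SQS queue, sends mail via SES."""
    for record in event["Records"]:
        msg = json.loads(record["body"])
        ses.send_email(
            Source=SENDER,
            Destination={"ToAddresses": [msg["to"]]},
            Message={
                "Subject": {"Data": "Welcome!"},
                "Body": {"Text": {"Data": f"Hi {msg['name']}, thanks for signing up."}},
            },
        )
```

The app gets its answer as soon as the job is queued; the email goes out a moment later, and if a thousand sign-ups arrive at once the queue simply absorbs the burst.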
That’s an impressive array of products, none of which are our responsibility to manage. We simply use them as needed. And at a surprisingly low price point (sometimes free) as you’ll see below…
There are no rules of architecture for a castle in the clouds.
— Gilbert K. Chesterton
Great, but what does it all cost?
This is possibly the most exciting bit.
In the case of App Factory Store, we stitch all these pieces together into a coherent back-end which – once it goes live on AWS – you only pay for as you use it.
If you’re a small startup launching slowly without much marketing, you can expect to pay very little. It may even be free. Then as your app grows in popularity your back-end costs will go up. But because a bigger user base earns you more revenue, the back-end remains affordable, no matter your growth.
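To put some very rough numbers on that, here’s a hypothetical back-of-envelope estimate for Lambda alone, using AWS’s published pay-per-use rates at the time of writing (roughly $0.20 per million requests and $0.0000166667 per GB-second, with a monthly free tier of one million requests and 400,000 GB-seconds). Figures vary by region and change over time, and other services such as API Gateway and DynamoDB are billed separately.

```python
# Back-of-envelope Lambda cost estimate using indicative published
# rates; check current AWS pricing before relying on these numbers.
requests_per_month = 3_000_000
avg_duration_seconds = 0.2    # 200 ms per invocation
memory_gb = 0.128             # 128 MB of memory

gb_seconds = requests_per_month * avg_duration_seconds * memory_gb
request_cost = max(requests_per_month - 1_000_000, 0) / 1_000_000 * 0.20
compute_cost = max(gb_seconds - 400_000, 0) * 0.0000166667

print(f"{gb_seconds:,.0f} GB-seconds used -> ~${request_cost + compute_cost:.2f} per month")
```

Three million requests a month at 128 MB and 200 ms each stays inside the compute free tier, so the whole thing comes to about 40 cents – which is why an early-stage app can genuinely cost next to nothing to run.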
And don’t forget, there’s nothing to manage, so there’s no need to hire the team of sysadmins, DevOps engineers, and DBAs that you usually would.
Yes, you got that right: no staff costs for infrastructure, and your costs keep pace with your growth!
This is like a holy grail! Why isn't everyone doing this?
Well, the cool kids are doing it. What’s your excuse?
Feel free to get in touch to see how we can help you ditch all those servers for good.
Geoff Ellison
Geoff Ellison is Lead Solutions Architect and CEO at App Factory Store. He has been designing and executing innovative technology solutions for 30 years. Geoff lives in paradise (Pottsville Beach, NSW) with his wife and two children. He also plays guitar (in theory), and tells appalling dad jokes.
Geoff is always open for work and new projects, particularly in web/app development, cloud DevOps, security & encryption, and consulting. Get in touch.