Serverless Computing with Lambda
Before we dive in, let’s look at what is meant by Serverless.
Serverless is a cloud-native development model that allows developers to build and run applications without having to manage servers.
Servers still exist in serverless, but they are abstracted away from application development. The provider handles the routine work of provisioning, maintaining, and scaling the server infrastructure; developers simply deploy their code.
Once deployed, serverless apps automatically scale up and down as needed. Serverless offerings from most providers are usually metered on-demand through an event-driven execution model. As a result, when a serverless function is sitting idle, it doesn’t cost anything.
Now, on to Lambda.
Lambda is a compute service provided by Amazon Web Services (AWS) that executes your code in the cloud. As in any serverless system, it hides away the complexities of managing the cloud infrastructure underneath.
So how do Lambda functions work?
Each Lambda function runs in its own container. When a function is created, Lambda packages it into a new container and then executes that container on a cluster of machines managed by AWS. Before a function starts running, its container is allocated the memory you configure, along with proportional CPU capacity. Customers are then charged based on the allocated memory and the time the function takes to run.
On a very basic level, serverless applications are made up of two or three components: event sources, functions, and/or services. An event source is anything that can invoke a function, such as an upload to an S3 bucket, a change in data state, or a request to an endpoint. When one of the specified events occurs, the Lambda function runs in its own container. Once the request is completed, your Lambda function either returns a result to the invocation source or a connected service, or it makes changes to a connected service (such as a database).
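To make those pieces concrete, here is a minimal sketch of a handler that reacts to an S3 upload event and returns a result to the invoker. The field names follow the standard S3 event notification shape; everything else is illustrative.

```python
import json

def lambda_handler(event, context):
    # Each record describes one S3 object whose upload triggered this invocation.
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")

    # Return a result to the invocation source (or hand it on to a connected service).
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```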
Is it Secure?
For Lambda, AWS follows the Shared Responsibility Model: AWS manages the servers, which means the hardware, the hypervisor, the sandbox, the runtime, and their hardening, while you manage your code, third-party libraries, configuration, and the IAM policies applied to your Lambda functions.
Lambda functions are fairly secure by default. Your function can't talk to other services, nor can it be invoked by any client, until you explicitly enable it to do so. The permissions involved fall into two categories: execution policies and resource-based policies.
Execution policies determine which services and resources a Lambda function has access to, as well as which event sources can trigger a Lambda function to run. Resource-based policies grant other accounts and AWS services access to your Lambda resources. These include functions, versions, aliases and layer versions.
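As a rough sketch of the resource-based side, the snippet below uses boto3 to allow S3 to invoke a function whenever events arrive from a particular bucket. The function name, statement ID, and bucket ARN are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Resource-based policy: grant the S3 service permission to invoke this function,
# restricted to events coming from one specific bucket.
lambda_client.add_permission(
    FunctionName="my-function",            # placeholder function name
    StatementId="allow-s3-invoke",         # unique ID for this policy statement
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::my-bucket",    # placeholder bucket ARN
)
```

The execution policy, by contrast, lives on the IAM role attached to the function itself; its statements decide which services the function's own code may call.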
So what are the advantages?
You save the time and effort of creating and maintaining your own infrastructure. That time can go toward bringing your application to market sooner, your team gains the agility to move faster, and more time is left for important tasks such as bug fixes and new features.
You only pay per use. Essentially, you pay for the compute time your functions consume plus any network traffic they generate. This is generally more cost effective for workloads whose load varies significantly over the course of the day.
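As a back-of-envelope illustration of that pricing model, the sketch below estimates a monthly bill from request count, average duration, and memory size. The per-request and per-GB-second rates are roughly the published on-demand rates at the time of writing and vary by region and architecture, so treat them as placeholders and check the current AWS pricing page (the free tier is ignored here).

```python
# Illustrative on-demand rates; check current AWS pricing for your region.
PRICE_PER_REQUEST = 0.20 / 1_000_000     # USD per request
PRICE_PER_GB_SECOND = 0.0000166667       # USD per GB-second

def estimate_monthly_cost(requests_per_month, avg_duration_ms, memory_mb):
    # Compute charge = requests * duration (s) * memory (GB), billed per GB-second.
    gb_seconds = requests_per_month * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests_per_month * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Example: 3 million requests a month, 200 ms average duration, 512 MB of memory.
print(f"~${estimate_monthly_cost(3_000_000, 200, 512):.2f} per month")
```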
The underlying architecture is managed by AWS, so you don't need to worry about the servers it runs on. This can result in significant savings on operational tasks such as upgrading the operating system or managing the network layer.
AWS Lambda creates instances of your function as they are requested. There is no pre-scaled pool, no scale levels to worry about, and no settings to tune, yet your functions are available whenever the load increases or decreases.
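To illustrate that there is nothing to configure, the sketch below simply fires a burst of asynchronous invocations; Lambda provisions as many instances as the burst needs and winds them down afterwards. The function name is a placeholder.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Fire 100 asynchronous invocations; no scaling settings are involved anywhere.
for i in range(100):
    lambda_client.invoke(
        FunctionName="my-function",        # placeholder function name
        InvocationType="Event",            # asynchronous: the call returns immediately
        Payload=json.dumps({"job_id": i}),
    )
```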
Tight integration with other AWS products. AWS Lambda integrates with services like DynamoDB, S3 and API Gateway, allowing you to build functionally complete applications around your Lambda functions.
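As one hedged example of such an integration, the handler below could sit behind an API Gateway endpoint and write each incoming order to a DynamoDB table. The table name and request shape are assumptions made for illustration.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")           # placeholder table name

def lambda_handler(event, context):
    # With API Gateway proxy integration, the request body arrives as a JSON string.
    order = json.loads(event["body"])
    table.put_item(Item={"order_id": order["id"], "amount": order["amount"]})
    return {"statusCode": 201, "body": json.dumps({"status": "created"})}
```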
It’s not all sunshine and rainbows though.
Cold Starts. When a function is started in response to an event, there may be a small amount of latency between the event and when the function runs. If your function hasn’t been used in the last 15 minutes, the latency can be as high as 5-10 seconds, making it hard to rely on Lambda for latency-critical applications.
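One practical consequence is worth showing: anything done at module scope (outside the handler) runs once per container and is part of the cold start, while warm invocations reuse it. The sketch below records how long that one-time setup took; the client creation stands in for any expensive initialisation.

```python
import time
import boto3

_start = time.time()
s3_client = boto3.client("s3")             # one-time setup, paid during the cold start
INIT_SECONDS = time.time() - _start

def lambda_handler(event, context):
    # Warm invocations reuse the already-initialised client, so only the handler
    # body contributes latency here.
    return {"cold_start_init_seconds": INIT_SECONDS}
```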
Execution time limit. A Lambda function will time out after running for 15 minutes. There is no way to change this limit. If running your function typically takes more than 15 minutes, AWS Lambda might not be a good solution for your task.
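If a job might brush up against that limit, the handler can check how much time remains and stop cleanly instead of being cut off mid-task. A minimal sketch, assuming the work arrives as a batch of independent items:

```python
def do_work(item):
    # Placeholder: the real per-item processing would go here.
    pass

def lambda_handler(event, context):
    items = event.get("items", [])          # hypothetical batch of work items
    processed = 0
    for item in items:
        # Leave a 10-second safety margin before the function is forcibly stopped.
        if context.get_remaining_time_in_millis() < 10_000:
            break
        do_work(item)
        processed += 1
    return {"processed": processed, "remaining": len(items) - processed}
```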
Concurrency. By default, concurrent executions for all AWS Lambda functions within a single AWS account are limited to 1,000 per region. (You can request an increase to this limit by contacting AWS Support.)
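The concurrency limit can be inspected and carved up per function. The sketch below checks the account-level limit with boto3 and then reserves a slice of it for one (placeholder) function so that a traffic spike elsewhere can't starve it.

```python
import boto3

lambda_client = boto3.client("lambda")

# The account-wide concurrent execution limit (1,000 by default, per region).
settings = lambda_client.get_account_settings()
print(settings["AccountLimit"]["ConcurrentExecutions"])

# Reserve up to 100 concurrent executions for this function; this also caps the
# function so it can't consume the whole account limit on its own.
lambda_client.put_function_concurrency(
    FunctionName="my-function",            # placeholder function name
    ReservedConcurrentExecutions=100,
)
```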
In a nutshell,
With AWS Lambda, you pay only for your functions' actual runtime (plus associated charges such as network traffic). This can produce significant cost savings for certain usage patterns, for example cron jobs or other on-demand tasks. However, when the load on your application increases, the AWS Lambda cost increases proportionally and might end up higher than the cost of similar infrastructure on AWS EC2 or another cloud provider.