Serverless architecture is still new and its design patterns are still evolving. But most of them are not new: they are carried over from patterns used in distributed systems and in microservices.
A serverless system tends to be more complex. But the hardest part is forgetting rules you learned in the past and adjusting to new principles. Once you do, everything seems easy, because the system is divided into microservices that can be managed separately.
Yesterday's Best Practice Is Today's Antipattern
Building a system used to be about data consistency, data normalization, transactions, fast immediate responses, and high-performance infrastructure that rarely fails. None of this is common in serverless, and most of these things are antipatterns now. What do you mean? If we do not rely on this foundation, the system turns to chaos. But actually, by using the correct design patterns, you build a system that is more reliable and more resilient than before, and it can scale almost without limit.
In most cases, you can still use your traditional patterns. You can even run a Node.js Express application in Lambda and use a classical SQL database. But such a system would not be at its optimum in terms of performance, reliability, scalability, or price.
Yesterday's best practice is today's antipattern in serverless
Lambda at its core is stateless. A container that runs Lambda can be reused, but you cannot hold state in the container, because the container can be stopped at any time, and you never know which container will process the next request. You can still store state in a database or another system, but not in a function.
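A minimal sketch of this rule: a counter held in the container (module scope) resets whenever a cold container handles the request, while a counter in an external store survives. Here a plain dict stands in for DynamoDB or Redis; this is an illustration, not AWS code.

```python
call_count = 0  # container-local state: lost when the container is recycled

def bad_handler(event):
    # A different (cold) container starts again from 0, so this number
    # is meaningless across invocations.
    global call_count
    call_count += 1
    return call_count

store = {}  # stand-in for an external store such as DynamoDB

def good_handler(event):
    # Read-modify-write against the external store survives container churn.
    count = store.get("visits", 0) + 1
    store["visits"] = count
    return count
```

In real Lambda code, `bad_handler`'s counter would work only as long as warm invocations happen to hit the same container, which you cannot rely on.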
Lambda is the core of the serverless system and the glue between other services you use. Lambda responds to events. Events can be an HTTP request, a write to a database (DynamoDB), storage (S3), a queue (SQS), a notification (SNS), a schedule (CloudWatch), and so on. By responding to events, you connect systems together. You use the principles of event-driven programming.
Microservices and Nanoservices
You should design your serverless system as microservices and nanoservices: you break the system into small autonomous pieces. The purpose of splitting into microservices or nanoservices is to get small independent units that can be individually managed, monitored, and scaled. That does not mean you should separate the system into the smallest possible pieces and then call one function from another. You can have common code that is shared between functions, or you can use a newer feature called Layers. Calling a function from another function is valid and even preferred in some patterns, like fan-out. But in other cases it is redundant, and you just pay double — once for the function that is waiting and again for the function that is executing. You also make debugging more complex and lose the value of the isolation of your functions.
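The fan-out pattern mentioned above can be sketched in a few lines: one dispatcher triggers many workers asynchronously and returns immediately, so no function sits idle waiting for another. Here threads simulate asynchronous Lambda invocations (`invoke_async` is a made-up helper for the example, not an AWS API).

```python
import threading

results = []

def worker(chunk):
    # Each worker processes its slice of the data independently.
    results.append(sum(chunk))

def invoke_async(fn, payload):
    # Simulates a fire-and-forget Lambda invoke with a thread.
    t = threading.Thread(target=fn, args=(payload,))
    t.start()
    return t

def dispatcher(data, chunk_size=2):
    # Split the work, trigger the workers, and return without waiting.
    return [invoke_async(worker, data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]
```

The dispatcher pays only for the split-and-trigger step; it never blocks on a worker's result, which is what makes fan-out cheaper than one function synchronously calling another.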
Prefer Asynchronous Operations
In a synchronous process, you wait for the result; in an asynchronous process, you finish the operation without waiting for it. If a user is involved in an asynchronous process, that means you do not give the user a direct response with the finished result. You notify them that the data is processing, and later notify them about the result if necessary. This is much better because a request can be queued, retried, and you can optimize system capacity. Asynchronous tasks are also usually not very time sensitive.
Asynchronous operations are preferred in serverless, because a request can be queued, retried, and you can optimize system capacity
Synchronous processing is more appropriate for lightweight functions, like an API call that gets information from a database or another fast-responding system. Asynchronous processing is better for a process that takes longer, calls other services with a slow response, or does more processing. Use synchronous processing when a real-time response is critical.
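The asynchronous flow described above can be sketched as two pieces: an API handler that only enqueues the request and acknowledges it, and a worker that drains the queue later. A `deque` stands in for SQS here; in AWS the worker would be an SQS-triggered Lambda.

```python
from collections import deque

jobs = deque()  # stand-in for an SQS queue

def api_handler(request):
    # Queue the work instead of doing it; respond immediately.
    jobs.append(request)
    return {"status": "accepted"}

def worker():
    # Drains the queue; the slow processing happens here, decoupled
    # from the user-facing request.
    results = []
    while jobs:
        request = jobs.popleft()
        results.append(f"processed {request['id']}")
    return results
```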
A serverless system is an inherently distributed system. Data is moved from service to service, from microservice to microservice, from event to event, and from one part of an application to another.
Prefer Eventual Consistency
Because data is flowing between services, there is no strong (immediate) consistency of the data. Consistency is ensured eventually. Until consistency is reached for a particular piece of data, you should design some kind of apology system that informs the user that the data is processing, return the last known data, or implement a similar solution. That does not mean there cannot be strong consistency, but a design that embraces eventual consistency is, in most cases, a better fit for serverless systems.
In a serverless system, data should be flowing between services. Design for no strong (immediate) consistency of the data. Consistency is ensured eventually.
This is another reason besides scalability why NoSQL databases are a better fit for serverless. They also prefer eventual consistency.
As strong consistency is not the primary focus, the same goes for data normalization, which is a foundation of relational databases. In NoSQL databases, data denormalization is the norm: you duplicate data so you can read it faster. For example, all customer details can be saved with an order, so you can read them faster. When you update the entity, you must ensure you update all instances and manually maintain consistency. A change of data can trigger a notification, and that notification can trigger functions that synchronize the data.
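The customer-details-on-an-order example can be sketched like this. Plain dicts stand in for DynamoDB tables, and the change "event" is a direct function call; in AWS it would typically be a DynamoDB Stream triggering a synchronization Lambda.

```python
# Stand-ins for two DynamoDB tables: customer details are duplicated
# (denormalized) into each order for fast single-read access.
customers = {"c1": {"name": "Ann", "city": "Oslo"}}
orders = {"o1": {"customer_id": "c1", "customer_name": "Ann", "item": "book"}}

def on_customer_updated(customer_id, new_data):
    customers[customer_id] = new_data
    # Eventual consistency: fan the change out to every duplicated copy.
    # In AWS this function would be triggered by the table's change stream.
    for order in orders.values():
        if order["customer_id"] == customer_id:
            order["customer_name"] = new_data["name"]
```

Between the customer update and the loop finishing, an order can briefly show the old name — that window is exactly the eventual-consistency gap the apology pattern covers.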
Do as Little as Possible in Lambda
If you use Lambda, do as little as possible in it, and make each function do one thing only. This differs from a traditional system. If you are doing a lot of things in one Lambda, you could be doing it wrong. A lot of things can be done without Lambda, like downloading or uploading a file from or to S3. You can even write to DynamoDB, SQS, SNS, Kinesis, and so on, without Lambda, through API Gateway.
Design for Failure
As in any system, errors can occur. This is especially true for a distributed system such as serverless, where you depend on a lot of pieces. There can be server crashes, resource limits, throttling, third-party issues, network outages, versioning conflicts, or even just bad code. You should take these errors into account when you design a system. That does not mean serverless services are not reliable.
You should gracefully handle errors and retry tasks if necessary. In some cases, the system retries them automatically. And here we come to a very important term: idempotent. Idempotent means that you can run the same task multiple times and get the same result; multiple identical requests have the same effect as a single request. Repeating a task must not have side effects. A task is repeated because it did not complete, because of concurrency, or for other reasons. For example, repeated processing of an order must not result in the shipment of two items. This means you have to perform checks at the beginning of processing and before outputting results.
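The order example can be sketched as an idempotent handler: a processed-keys check makes a retried or duplicated event harmless. Here a set stands in for a durable deduplication record (in DynamoDB this would typically be a conditional write on the order ID).

```python
processed = set()   # stand-in for a durable deduplication table
shipments = []      # the side effect we must not repeat

def handle_order(event):
    order_id = event["order_id"]
    if order_id in processed:
        # Duplicate or retried event: same effect as a single request,
        # no second shipment.
        return "skipped"
    processed.add(order_id)
    shipments.append(order_id)  # side effect happens exactly once
    return "shipped"
```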
You should take special care to avoid accidental recursive calls. For example, if you set up an event that is triggered when a new image is dropped into S3, and the function shrinks the image and uploads it back to the same bucket, you will trigger the same event again and cause recursion. It is better to drop the new file into another bucket. This happened to this guy. Or another example: if you process web pages and follow links, make sure that you do not reprocess the same page over and over again. These are just examples of recursive calls you can make by mistake and self-DoS your system.
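A simple guard against the S3 recursion described above: only process objects from the source bucket (or outside the output prefix) and write results somewhere the trigger does not fire. The bucket names are made up for the example.

```python
def should_process(event_record):
    bucket = event_record["bucket"]
    key = event_record["key"]
    # Never process our own output: results go to a separate bucket
    # (or at least a separate prefix), so the trigger cannot loop.
    return bucket == "uploads-bucket" and not key.startswith("thumbnails/")

def handle_image(event_record):
    if not should_process(event_record):
        return None  # ignore events caused by our own writes
    # ...shrink the image here..., then write to the *other* bucket:
    return {"bucket": "thumbnails-bucket", "key": event_record["key"]}
```

Even with separate buckets, the explicit `should_process` check is cheap insurance against a misconfigured trigger.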
When designing a serverless system, design for failure. That does not mean serverless is not reliable.
Connection to Other Systems
Lambda and other serverless services can scale almost infinitely. That does not mean the other systems your service depends on can scale accordingly. If you are concerned about their capacity, use asynchronous communication if possible and put a queue (SQS) or Kinesis Data Streams between them to smooth the load.
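The buffering idea can be sketched as producers that only enqueue, and a consumer that forwards work downstream in small batches the fragile system can absorb. The `deque` stands in for SQS, and `MAX_BATCH` is a made-up capacity limit for the example.

```python
from collections import deque

MAX_BATCH = 2   # pretend the downstream system absorbs only 2 writes at a time
queue = deque() # stand-in for SQS between Lambda and the downstream system

def producer(item):
    # Lambdas at any scale just enqueue; they never hit the downstream directly.
    queue.append(item)

def consumer():
    # Pulls at most MAX_BATCH items per run, so load reaching the
    # downstream system is capped regardless of how fast producers scale.
    batch = []
    while queue and len(batch) < MAX_BATCH:
        batch.append(queue.popleft())
    return batch
```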
Connecting to systems that demand a live connection, which is expensive to set up and limited in number, can present a problem. SQL databases are an example of such a system; connecting to Redis or MongoDB can also be a challenge. Because Lambda is stateless, you cannot hold a connection, and it can be really problematic if you do. Read more about communicating with other systems here.
In serverless there can be issues communicating with other systems
Low Power Hardware
Functions run in containers. With the memory setting, you set how powerful your hardware is. It can have from 128 MB to 3008 MB of memory, but you will usually choose 512 MB or 1024 MB. That is not very much, but you do not need more to run one function. If more than one function is needed at the same time, a new container with its own resources is started. For each function, you can optimize the hardware for what is needed: if you need a fast response, you pick a more powerful setting; for low-priority batch operations, you choose a cheaper one. That means you can optimize your costs, which you could not do before. You can read about one such ultra-smart optimization here.
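A back-of-envelope sketch of the cost argument above: Lambda billing is roughly proportional to GB-seconds, so cost scales linearly with the memory setting. The per-GB-second price below is illustrative only (check current AWS pricing); the point is the trade-off, not the exact number.

```python
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative rate, not current pricing

def invocation_cost(memory_mb, duration_ms):
    # Cost is (memory in GB) x (duration in seconds) x (price per GB-second).
    gb = memory_mb / 1024
    seconds = duration_ms / 1000
    return gb * seconds * PRICE_PER_GB_SECOND

# Doubling memory doubles the rate, but since CPU scales with memory,
# a run that finishes in half the time costs about the same:
low = invocation_cost(512, 200)    # 512 MB for 200 ms
high = invocation_cost(1024, 100)  # 1024 MB for 100 ms
```

This is why the "cheaper setting" is not always cheaper: for CPU-bound work, more memory can pay for itself through shorter duration, while for low-priority waiting-heavy work the small setting wins.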
These were just the base principles of serverless patterns. Here is my favorite collection of resources:
- Serverless Microservice Patterns for AWS by Jeremy Daly (central point for serverless patterns)
- The 5 Best Use Cases for the Serverless Beginner
- Serverless Architectural Patterns by Eduardo Romero
- AWS re:Invent 2018: The Future of Enterprise Applications is Serverless
- NDC Oslo 2018 Serverless Architectural Patterns - Yan Cui
- AWS Builders' Day, Serverless Architectural Patterns, Danilo Poccia