What is Serverless?

Serverless computing (or serverless for short) is an execution model in which the cloud provider dynamically manages and allocates resources, removing the need for you to provision or maintain infrastructure. Resources are allocated based on the real-time, as-needed usage of your application or website, and you are charged only for the resources your code actually consumes.

Everything that is “served up” from a serverless platform runs in stateless compute containers that are event-triggered. These triggering events are the same ones that would run on your ordinary server: HTTP requests, database events, monitoring alerts, cron jobs, and so forth.

In most cases, the code sent to the cloud provider for execution takes the form of a function. Because of this, serverless is often called "Functions as a Service," or "FaaS," a term you have most likely encountered as well. There are several considerations to be aware of before transitioning to a serverless environment.

In this article, we cover some of the basics of serverless computing: what serverless is, what it is used for, and what some of its pros and cons are.
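To make the "function" idea concrete, here is a minimal sketch of an event-triggered function, modeled loosely on the AWS Lambda handler signature (an event dictionary in, a response dictionary out). The event shape and field names here are illustrative assumptions, not any specific provider's API.

```python
import json

def handler(event, context=None):
    """Respond to an HTTP-style event with a JSON body."""
    # Pull an optional query parameter out of the event payload.
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The platform invokes `handler` for you whenever the triggering event fires; there is no server process for you to start or manage.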

Considerations of Transitioning to Serverless

Microservices

Your application should be constructed in the form of functions. The majority of developers deploy their applications as a single monolithic application (a Rails app, for example), but in serverless, the code is adapted to a microservice architecture. You can run an entire application as a group of separate functions, but it is not recommended.

Stateless Functions

As stated earlier, functions run inside stateless containers. Be aware that your functions will most likely be invoked in a fresh container every time, because no state persists in the container after the event has completed.
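The practical consequence is that in-memory state cannot be trusted across invocations. This small sketch (an assumed, generic handler, not a specific provider's API) shows why: a module-level counter survives only while the same container stays warm and resets to zero on every cold start.

```python
# Reset to 0 every time a new container cold-starts this module.
invocation_count = 0

def handler(event, context=None):
    global invocation_count
    invocation_count += 1
    # Anything you need beyond this request must live in external
    # storage (a database, object store, or cache), never in
    # process memory.
    return {"count_in_this_container": invocation_count}
```

Two calls landing in the same warm container see 1 then 2; a call landing in a new container starts over at 1.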

Cold Starts

Because our functions run in stateless containers, a container may need to be spun up before a function can respond to a triggered event. Every time this happens, a small amount of latency occurs, which is why it is called a "Cold Start."

When a function completes, its container stays active for a short while before being decommissioned. If another event is triggered while the container is still running, the container responds more quickly. This is called a "Warm Start." How long cold starts last depends entirely on the cloud provider and the programming language used to write the function.
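Warm starts are also the standard place to recover setup cost: expensive initialization done at module scope runs once per cold start and is reused by every warm invocation. A minimal sketch, where `connect_to_database` is a hypothetical placeholder for any slow setup:

```python
import time

def connect_to_database():
    # Stand-in for slow setup work (opening connections, loading config).
    time.sleep(0.05)
    return {"connected": True}

# Module scope: runs once per cold start, reused on warm invocations.
db = connect_to_database()

def handler(event, context=None):
    # Warm invocations skip the setup cost entirely.
    return {"db_connected": db["connected"]}
```

This is why the first request after a quiet period is slower than the ones that follow it.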

Now that we know how serverless works, let's review some of the pros and cons of a serverless architecture.

Pros of Serverless Architecture

  • Lower costs - The main advantage of serverless architecture is that you are not paying for hardware, nor for the time when your services are not being used.
  • Minimal patching and updating - Updating the LAMP stack and applying security patches is a thing of the past. With serverless architecture, this is entirely in the hands of your cloud provider.
  • Faster revisions - A serverless architecture can reduce the time needed to go to market. Instead of a complex deployment procedure to roll out changes, developers can add, modify, or remove code in small stages.
  • Simplified code - Using FaaS, developers can make simple functions that perform independently to achieve a single role.
  • Scalability - Say you build an application, and after a marketing campaign, its popularity explodes! Nothing to worry about there. Serverless architecture scales automatically depending on the current traffic volume.

Cons of Serverless Architecture

  • Limited mobility - Changing cloud providers can be a difficult task. A complete overhaul of the application may be needed because another provider's architecture or services may not be compatible with your application or functions.
  • Cold starts - If your functions are not invoked often, a significant performance drop can be expected. This can be mitigated by using small, precise functions, but that is not easy, and inefficiencies can creep in.
  • Complexity - The learning curve for serverless can be steep. Because this is a relatively new technology, developers need time to test an application to ensure that the functions work with the data as expected. For example, certain providers have time restrictions that limit a function's execution to five minutes. If you have a function that needs more time than that to run, you will need smaller functions, or you will need to rewrite the code to accommodate the limitation.
  • Logging & Monitoring - Currently, the extent to which logging and monitoring are well supported in many serverless environments is limited at best. However, work in this arena continues to expand.
  • Testing - Some serverless platforms make remote testing possible, but usually only at the component (function) level, not at the application level. Testing a complex application can be difficult without setting up a separate account with the provider so that testing does not impact production.
  • Debugging - Debugging distributed serverless apps is often difficult because of their stateless aspect. An option is to use a runtime debugger that allows line-by-line examination.
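The execution time limit mentioned above is usually worked around by processing in resumable chunks: stop with some margin before the deadline and hand the unfinished work to a follow-up invocation or queue message. A sketch under assumed names, where the `Deadline` class stands in for a provider context object (for example, Lambda's `context.get_remaining_time_in_millis()`):

```python
import time

class Deadline:
    """Stand-in for a provider-supplied remaining-time API."""
    def __init__(self, budget_seconds):
        self.end = time.monotonic() + budget_seconds

    def remaining_ms(self):
        return max(0, (self.end - time.monotonic()) * 1000)

def process_batch(items, deadline, margin_ms=500):
    """Process items until the time budget nearly runs out,
    returning any leftovers for a follow-up invocation."""
    done, remaining = [], list(items)
    while remaining and deadline.remaining_ms() > margin_ms:
        done.append(remaining.pop(0) * 2)  # stand-in for real work
    return {"processed": done, "remaining": remaining}
```

If `remaining` comes back non-empty, the function re-enqueues it (for example, onto a queue that triggers the same function again) instead of running past the limit.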

Now that we have reviewed some basics of serverless and appraised several pros and cons, let's look into some best-practice methodologies for deploying serverless apps.

15 Best Practices

  • Observability - Observe your serverless architecture to analyze how the system behaves during various events and functions.
  • Minimal privilege - Grant each function access only to what it requires.
  • Limit permission scope - Enable only the permissions needed to achieve the required objective.
  • Limit code revisions - Never rewrite code wholesale. Observe the behavior and resource usage of your functions, and then optimize.
  • Limit function parameters - To improve performance, reduce the number of tasks your code has to execute.
  • Test locally - Do not push changes straight to the live environment. Test tweaks and changes locally first.
  • Employ API security - Use API gateways as a means of security by employing them as a reverse proxy.
  • Limit vulnerabilities - Sanitize input events to limit injection attempts.
  • Monitoring - Pre-establish monitoring parameters during the planning stage.
  • Logging - Pre-establish logging functions during the planning phase.
  • Enforce code integrity - Adhere to standard coding conventions for application security.
  • Restrict deployments - Limit new code deployments to off hours, and never deploy right before a weekend.
  • Transport security - Use known methods to authenticate data in transit.
  • Confinement & isolation - Ensure secrets are stored in a secure location and accessible only to the functions that require them.
  • Granular deployment - Practice granular changes. Limit deploying functions in bulk to prevent abuse if code is compromised.
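As one concrete example, the "limit vulnerabilities" item above usually starts with sanitizing the incoming event before any business logic runs. A minimal sketch, where the allowed field names and the username pattern are illustrative assumptions for this example only:

```python
import re

# Hypothetical allow-list of event fields this function accepts.
ALLOWED_FIELDS = {"username", "email"}
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def sanitize_event(event):
    """Keep only expected fields and reject malformed values."""
    # Drop any field we did not explicitly ask for.
    clean = {k: v for k, v in event.items() if k in ALLOWED_FIELDS}
    # Validate values against a strict pattern to block injection payloads.
    if not USERNAME_RE.match(clean.get("username", "")):
        raise ValueError("invalid username")
    return clean
```

Rejecting malformed input at the edge keeps injection attempts from ever reaching the function's database or downstream services.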

Conclusion

I hope that this article was useful in getting a sense of what serverless is and how it traditionally works. Because it is a newer technology that has only been on the market for three years or so, serverless architecture is still fairly complex for most developers, and just like anything else, it has some flaws. Even so, we are sure that serverless will become mainstream in hosting.

Because we pride ourselves on being The Most Helpful Humans In Hosting™, our support staff is always available to assist 24 hours a day, 7 days a week 365 days a year.

If you are a Fully Managed VPS, Cloud Dedicated, VMware Private Cloud, Private Parent, or Dedicated server owner and you are uncomfortable with implementing any of the ideas outlined above, we are available via our ticketing system at support@liquidweb.com, by phone at 800-580-4986, or via LiveChat to assist.

We work hard for you so you can relax.

About the Author: Dean Conally

I am a Linux enthusiast and console gamer, dog lover, and amateur photographer. I've been working at Liquid Web for a bit less than two years. Always looking for knowledge to expand my expertise, thus tackling new technologies and solutions one day at a time.
