
What is Serverless Computing?

 

Serverless computing takes the pain of provisioning and managing infrastructure away from developers and engineers. Without serverless computing, here’s what a typical software development, deployment, and management lifecycle looks like:

  1. The developers write the source code.

  2. The ops team evaluates the performance and scalability requirements of the application, and prepares a proposed infrastructure plan.

  3. The plan is shared with the cloud services provider, and a specific number of servers are purchased.

  4. The application runs smoothly for a week before it crashes because of high memory usage.

  5. The developers and ops engineers debug the issue, and end up purchasing a few extra servers to distribute the load.

  6. Other issues keep occurring intermittently, and the developers spend half their work days debugging infrastructure issues.

  7. The ops team is under constant pressure from the finance department about the rise in cloud costs, forcing them to always look for optimization avenues.

With serverless computing, the developers’ job finishes at step 1. The cloud provider automatically allocates, manages, and scales the infrastructure required to run the application. No need to do infrastructure planning, no need to monitor server health checks, and no need to worry about being overcharged. Since the cloud provider automatically adds or removes resources based on application demands, you are only billed for what you use. Nothing more, nothing less.

An important thing to remember here is that serverless doesn’t mean that servers are not being used. They very much are, but the usual tasks associated with servers, like infrastructure planning, management, and scalability, are hidden away from the developers.

How does serverless computing work?

In a serverless environment, once a developer has finished implementing the business logic, they can expect the cloud provider to handle the rest. The cloud provider could run their application inside a virtual machine or a container; the developer doesn’t have to know or care. The bottom line is that the application will run, using the bare minimum of resources and scaling whenever needed.
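To make that concrete, here is a rough sketch of what the developer-facing side can look like: a single handler function in the style of AWS Lambda’s Python runtime. The event shape and response format here are illustrative assumptions; the point is that everything below this function (provisioning, scaling, patching) is the provider’s job.

```python
# A minimal, illustrative handler in the style of AWS Lambda's Python runtime.
# The event shape and response format are assumptions made for this example.
import json


def handler(event, context):
    """Entry point the platform calls; there is no server code to write."""
    name = event.get("name", "world")            # input arrives as a plain dict
    body = {"message": f"Hello, {name}!"}        # business logic only
    return {"statusCode": 200, "body": json.dumps(body)}
```

There is no code for binding ports, managing processes, or running health checks; the platform decides where and when to run this function.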

Diagram: serverless computing

Serverless vs. Function-as-a-Service (FaaS)

Function-as-a-Service (FaaS) is the serverless way of building micro-service applications. With FaaS, developers write small units of code that are executed in response to specific events, e.g. when a user clicks a button on a web application, when a new message is received in a message queue, or when someone sends a request to your HTTP server.

Wait, what are micro-services?

The micro-service architecture is an implementation approach that allows developers to split their business logic across multiple, loosely-coupled services. This has several advantages:

  • Smaller services are much easier to test and maintain.

  • Business logic spread across multiple services prevents a single point of failure.

  • Faster deployment.
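
To make this concrete, here is a minimal sketch of the same idea; the service names, fields, and the direct function call standing in for inter-service communication are all illustrative assumptions.

```python
# Illustrative only: two loosely-coupled services, each small enough to test
# and deploy on its own. How they talk (queue, HTTP, events) is a deployment
# detail; here a plain function call stands in for it.

def order_service(order: dict) -> dict:
    """Validates an order; knows nothing about payments."""
    if order.get("quantity", 0) <= 0:
        raise ValueError("quantity must be positive")
    return {"order_id": 123, **order}            # order_id is a made-up example value


def payment_service(order: dict) -> dict:
    """Charges for an order; knows nothing about order validation."""
    amount = order["quantity"] * order["unit_price"]
    return {"order_id": order["order_id"], "charged": amount}


# Each service can be tested, deployed, and fail independently.
confirmed = order_service({"quantity": 2, "unit_price": 9.99})
receipt = payment_service(confirmed)
print(receipt)   # {'order_id': 123, 'charged': 19.98}
```

In a FaaS setup, each of these would typically be deployed as its own function and wired together through events or HTTP calls.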

You can think of FaaS as a subset of serverless. Serverless typically covers all service categories (compute, databases, API gateways, storage, and so on), abstracting away the planning, management, and billing from developers. FaaS, on the other hand, focuses solely on event-driven computing, where developers trigger application code based on events. Another important benefit of FaaS is that there is no need to use a single programming language or framework to build your application; you can write the functions driving your logic in any number of different languages or frameworks. Let’s consider an example.

In your cloud environment, you have connected your message queue to a function. Whenever a message is received in the queue (the event), the function calls your REST API, which then executes your business logic. Similarly, you can write other functions to connect your authentication service with your database, and your email provider with other parts of your system.

Diagram: API flow
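
Here is a minimal sketch of that queue-to-API flow as an event-driven function. The event structure ("records"/"body"), the endpoint URL, and the batching behavior are assumptions for illustration; each provider defines its own event format.

```python
# Illustrative sketch of the scenario above: a function triggered by a
# message queue that forwards each message to a REST API. The event
# structure, field names, and URL are assumptions made for the example.
import json
import urllib.request


def on_queue_message(event, context):
    """Invoked by the platform whenever messages land in the queue."""
    for record in event.get("records", []):          # one event may batch several messages
        payload = json.loads(record["body"])          # the queued message
        request = urllib.request.Request(
            "https://api.example.com/orders",          # placeholder REST endpoint
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            print("REST API responded with status", response.status)
```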

Serverless vs. Platform-as-a-Service (PaaS)

Serverless and Platform-as-a-Service are similar in the sense that they both keep the infrastructure invisible to developers, who only have to worry about writing code. However, there are several differences between the two.

  • For PaaS, the applications have to be manually configured to scale. On the other hand, serverless automatically scales applications based on need.

  • With serverless, code executes only when invoked. For example, a serverless application or function may shut down completely when there is no activity, and then restart (in milliseconds) to respond to an event. PaaS applications can’t scale up or down at that speed.

  • With PaaS, you have more control over your infrastructure and deployments. However, serverless abstracts away just about everything.

Serverless vs. Backend-as-a-Service (BaaS)

Most software applications have a frontend and a backend. The frontend usually includes the user-facing interface, along with the client-side logic. For example, the frontend of a web application is what you see in your browser. The backend comprises everything that happens behind the scenes. When you log in, your username and password are received by the frontend, but they are sent to the backend server for verification. Similarly, the user information collected during signup is stored in a database, which is also part of the backend. The server on which all the backend components (authentication service, databases, APIs) run is part of the backend, too.

Backend-as-a-Service (BaaS) takes the responsibility of developing and managing the backend away from developers. They can focus on the frontend of the application and connect it to the backend using APIs and SDKs (pre-built software development kits).

BaaS and serverless are similar in that they both hide the backend from developers. However, with BaaS, you don’t get the on-demand scalability you get with serverless. Moreover, just like PaaS, BaaS applications don’t scale up or down as quickly as serverless ones.

Serverless computing example scenario

As the image below shows, in a traditional setup we would probably have two servers running two applications. This is a safe approach, because we have provisioned an individual server for each application, but it’s also wasteful: say, 30% of the time, the applications don’t receive any requests at all. And if the load drastically increases, there is no way to scale the applications automatically.

In contrast, on the left side, the serverless architecture runs four applications on just two servers, because, as per current demand, that’s enough. There is no need to provision more servers at this point. However, as soon as the platform detects a need to scale, it will add more servers as necessary. It may also decommission a server that’s no longer needed.

Diagram: traditional vs. serverless
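
A rough back-of-the-envelope comparison shows why that idle time matters. All the numbers below (hourly server rate, per-hour execution price, the 30% idle share) are made-up illustrative figures, not real provider pricing.

```python
# Back-of-the-envelope comparison of the two setups above. All numbers
# (hourly server rate, execution price, idle share) are illustrative
# assumptions, not real provider pricing.
HOURS_PER_MONTH = 730

# Traditional: two dedicated servers billed around the clock.
server_rate = 0.10                                   # assumed $/hour per server
traditional = 2 * server_rate * HOURS_PER_MONTH      # paid even when idle

# Serverless: billed only for the ~70% of the time requests actually arrive.
busy_share = 0.70                                    # apps are idle ~30% of the time
compute_rate = 0.12                                  # assumed $/hour of actual execution
serverless = compute_rate * busy_share * HOURS_PER_MONTH

print(f"traditional: ${traditional:.2f}/month")      # traditional: $146.00/month
print(f"serverless:  ${serverless:.2f}/month")       # serverless:  $61.32/month
```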

Is serverless computing secure?

  • As your infrastructure is defined, maintained, and scaled by your cloud provider, its security is predominantly handled by them as well. You no longer have to do OS hardening or implement firewalls.

  • Another aspect of serverless computing that makes it harder to attack is its ephemeral nature. Serverless applications, functions, and containers start and stop based on need, which significantly reduces the opportunity for long-term attacks.

  • With serverless, you have the ability to move to a micro-services approach. This allows you to define per-service policies, which is much easier when the service is small.

With that said, some of the responsibility for security still rests on your shoulders. Remember:

  • Your cloud provider may patch OS-level dependencies for you, but you will still need to ensure that your application dependencies are patched and up-to-date.

  • When granting access to your servers/functions, use the principle of least privilege, i.e., give each user the bare-minimum level of access required to perform their duties.

  • If using event-driven functions, ensure that your input data is sanitized to avoid injection attacks (SQL injection, etc.); see the sketch after this list.

  • Lastly, ensure that your application code is secure, following the relevant best practices for the programming language(s) or framework(s) you use.
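
As a sketch of the input-sanitization point above (the event shape, the table, and the use of SQLite are illustrative assumptions), the key habit is to pass untrusted values as query parameters instead of concatenating them into SQL.

```python
# Illustrative sketch: an event-driven function writing untrusted input to a
# database. The event shape and the SQLite table are assumptions; the point
# is the basic input validation and the parameterized query.
import sqlite3


def on_signup_event(event, context):
    """Stores a username taken from an untrusted event payload."""
    username = str(event.get("username", "")).strip()[:64]   # validate: type, trim, length cap
    conn = sqlite3.connect("users.db")
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
        # The placeholder lets the driver escape the value; never build the
        # SQL string by concatenating untrusted input.
        conn.execute("INSERT INTO users (name) VALUES (?)", (username,))
        conn.commit()
    finally:
        conn.close()
```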

Pros and cons of serverless computing

Pros

  • Increased developer productivity, since developers can focus on what they do best: writing code.

  • No need for any formal infrastructure planning.

  • Decreased costs because of the pay-as-you-go model.

  • Enhanced, automated scalability.

Cons

  • Cold starts (the delay while a function or server that shut down due to inactivity starts back up) can hinder performance for some serverless products.

  • It is usually very hard to migrate your serverless environment from one provider to another.

  • May cost more for long-running tasks. Serverless is ideal for workloads that don’t need to run continuously, since you aren’t billed while the function/application is idle; for tasks that run indefinitely, it can end up more expensive.

  • Some users may experience a steep learning curve. Building applications in serverless environments is fundamentally different from traditional approaches.

The future of serverless computing

One major concern that most people have regarding serverless is cold starts. As we discussed above, when an application in a serverless environment hasn’t been invoked in a while, it goes dormant. As soon as a new request is received, the application becomes active again. For some serverless products, this startup time can add noticeable latency. However, cloud providers are trying to address this issue by minimizing the time it takes to spin up an application/server/function.
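
On the developer’s side, a common way to soften cold starts is to do expensive setup outside the handler so that warm invocations reuse it. The sketch below is illustrative; the “client” is just a stand-in for a real database or SDK connection.

```python
# Illustrative cold-start mitigation: expensive setup runs once, when the
# platform starts a new instance; warm invocations reuse it. The "client"
# dict is a stand-in assumption for a real database or SDK connection.
import time

START = time.monotonic()                      # runs only during a cold start
EXPENSIVE_CLIENT = {"connected_at": START}    # e.g., open connections, load config here


def handler(event, context):
    """Warm invocations skip the setup above entirely."""
    warm_for = time.monotonic() - START       # how long this instance has been reused
    return {"warm_for_seconds": round(warm_for, 3)}
```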

Since serverless has so much in store for organizations of all types and sizes, we expect that its adoption will increase over the next few years.
