
What Is Serverless Architecture? Advantageous Cases and Precautions

28-02-2026

The cloud has moved from renting infrastructure (IaaS) to entrusting the platform (PaaS), and now to consuming only units of execution. Traffic has become unpredictable, service launches must be faster, and operations staff are constantly in short supply. At the intersection of these conditions, serverless is not simply a technology option; it is a turning point in operational strategy.


The concept of serverless

Serverless is a model that consumes computing resources per code-execution unit, without directly managing servers. Developers don't have to worry about provisioning, patching, or scaling servers. Instead, they focus solely on "which code runs when which event occurs."

The key point is that, contrary to the name, the servers are not gone, but the responsibility for operating them has been completely transferred to the cloud provider.

Typical components

  • Function as a Service (FaaS)
  • Event triggers (HTTP requests, message queues, file uploads, etc.)
  • Managed Database and Storage
  • Automatic scaling and billing
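The components above can be illustrated with a minimal event-driven function. The sketch below follows the AWS Lambda-style handler convention in Python, but the event shape shown is a simplified assumption, not a definitive implementation:

```python
import json

def handler(event, context=None):
    """Hypothetical FaaS entry point: runs only when an event arrives.

    The platform (not this code) creates the execution environment,
    invokes the function once per event, and scales copies as needed.
    """
    # An HTTP-style trigger typically delivers the request as a dict;
    # query parameters may be absent, so fall back to an empty dict.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Simulated invocation: in production the cloud provider calls handler().
response = handler({"queryStringParameters": {"name": "serverless"}})
```

Note that the function holds no server lifecycle code at all; everything outside "event in, response out" is the provider's responsibility.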

 

Key Features of Serverless

  • Event-driven execution: Execute code only when requested.
  • Auto-scaling: Responds to traffic spikes with little or no configuration.
  • Usage-based billing: You pay only for execution time and number of invocations.
  • Minimal operational burden: Far less server management, patching, and monitoring.

Serverless is not an “always-on server,” but rather an execution environment that wakes up only when needed.
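Usage-based billing can be made concrete with a short back-of-envelope calculation. The rates below are illustrative assumptions (roughly in line with published per-GB-second FaaS pricing, which changes over time), not a quote:

```python
def monthly_cost(invocations, avg_ms, mb_memory,
                 price_per_gb_s=0.0000166667, price_per_million_req=0.20):
    """Estimate a month's bill for a usage-billed function.

    Compute cost is billed in GB-seconds (memory size x duration);
    a separate small fee applies per million requests. Rates here
    are assumptions for illustration only.
    """
    gb_seconds = invocations * (avg_ms / 1000) * (mb_memory / 1024)
    compute = gb_seconds * price_per_gb_s
    requests = (invocations / 1_000_000) * price_per_million_req
    return compute + requests

# Example workload: 2 million requests/month, 120 ms average, 128 MB memory.
cost = monthly_cost(2_000_000, 120, 128)
```

A workload like this lands under a dollar a month, and zero traffic costs exactly zero, which is the property fixed servers cannot offer.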

 

In what cases is it advantageous to introduce it?

1. Services with high traffic volatility

  • Event Campaign Page
  • Promotion API
  • Backend functions that are called intermittently

You can respond to rapid traffic changes without fixed server costs.

2. API-centric backend service

  • Mobile/Web API
  • BFF (Backend for Frontend) architecture
  • Some features of microservices

It goes well with services separated into small functional units.
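A BFF endpoint maps naturally onto one small function. The sketch below is hypothetical: the two `fetch_*` helpers stand in for calls to separate backend services, and the event shape mimics an API-gateway path parameter:

```python
def fetch_user(user_id):
    # Stand-in for a call to a user service.
    return {"id": user_id, "name": "Kim"}

def fetch_orders(user_id):
    # Stand-in for a call to an order service.
    return [{"order_id": 1, "total": 12000}]

def bff_handler(event, context=None):
    """One small function aggregates exactly what one frontend
    screen needs -- the BFF pattern in serverless form."""
    user_id = event["pathParameters"]["user_id"]
    return {
        "statusCode": 200,
        "body": {
            "user": fetch_user(user_id),
            "orders": fetch_orders(user_id),
        },
    }

resp = bff_handler({"pathParameters": {"user_id": "42"}})
```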

3. MVP and new service experiments

  • Services requiring rapid market validation
  • Startups and organizations with limited operational staff

It is suitable for the strategy of “make first, grow later.”

4. Internal automation and deployment tasks

  • Log processing
  • Data conversion
  • Notifications and integration tasks

Ideal for tasks that don't need to be running all the time.
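A typical automation shape is a storage-upload trigger. The sketch below assumes an S3-style event record layout for illustration; the bucket keys are invented:

```python
def on_upload(event, context=None):
    """Hypothetical storage-trigger handler: runs once per uploaded
    file, then the environment goes away -- no always-on worker."""
    processed = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # Only log files are of interest; other uploads are ignored.
        if key.endswith(".log"):
            processed.append(key)
    return {"processed": processed}

# Simulated trigger payload with two uploaded objects.
result = on_upload({"Records": [
    {"s3": {"object": {"key": "app/2026-02-28.log"}}},
    {"s3": {"object": {"key": "images/banner.png"}}},
]})
```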

 

Things to watch out for when introducing serverless

1. Cold Start

Extra latency occurs when a function is invoked after a period with no requests, because the platform must first spin up a new execution environment. This can affect services that require real-time responses.
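One common mitigation is to place expensive initialization at module scope, so it runs once per cold environment and is reused by every warm invocation of that instance. A minimal sketch (the "expensive" setup is simulated):

```python
import time

# Module-scope work runs once when the environment is created ("cold"),
# then is reused by subsequent "warm" invocations of the same instance.
_start = time.perf_counter()
EXPENSIVE_CONFIG = {"db_pool": "initialized"}  # stand-in for real setup
INIT_SECONDS = time.perf_counter() - _start

def handler(event, context=None):
    # Per-request work only; the config above is already loaded.
    return {"config": EXPENSIVE_CONFIG, "init_cost_paid_once": True}

first = handler({})
second = handler({})  # warm call: no re-initialization happens
```

Providers also offer pre-warmed capacity options (e.g., provisioned concurrency) for latency-sensitive paths, at extra cost.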

2. Vendor Lock-in

Serverless relies heavily on the cloud provider's event model, runtime, and permission structure. Migrating to another platform later can therefore be far more difficult and expensive than anticipated.

3. Difficulty in debugging and local testing

Problem reproduction and tracking are more complex than in traditional server environments. Observability (logs, tracing, and metrics) design is essential.
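A basic observability habit is to emit structured, correlated log lines from every function so traces can be stitched together afterwards. A sketch using only the standard library (the `request_id` field name is an assumption, not a platform convention):

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fn")

def handler(event, context=None):
    """Emit one structured log line per stage so that scattered
    per-invocation logs can be joined on request_id later."""
    request_id = event.get("request_id") or str(uuid.uuid4())
    log.info(json.dumps({"request_id": request_id, "stage": "start"}))
    result = {"ok": True, "request_id": request_id}
    log.info(json.dumps({"request_id": request_id, "stage": "done"}))
    return result

resp = handler({"request_id": "req-123"})
```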

4. Not suitable for long-term work

Because functions have a hard execution time limit (typically measured in minutes), serverless is a poor fit for:

  • Large-scale batch jobs
  • Long-running processes (e.g., persistent connections, streaming)

5. Cost reversal phenomenon

If invocations are very frequent and execution times are long, serverless can end up costing more than a fixed server.
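The reversal point can be estimated with the same per-GB-second arithmetic as billing. All figures below are assumptions for illustration (the fixed-server price especially), not real quotes:

```python
def serverless_cost(invocations, avg_ms, mb,
                    gb_s_price=0.0000166667, req_price=0.20):
    """Monthly usage-billed cost; rates are illustrative assumptions."""
    gb_seconds = invocations * (avg_ms / 1000) * (mb / 1024)
    return gb_seconds * gb_s_price + (invocations / 1_000_000) * req_price

# Assumed monthly price of one always-on instance (hypothetical figure).
FIXED_SERVER_MONTHLY = 70.0

# At low volume serverless wins; at sustained high volume it can flip.
low = serverless_cost(1_000_000, 200, 512)     # ~a couple of dollars
high = serverless_cost(300_000_000, 200, 512)  # well past the fixed price
```

The takeaway: run this comparison against your own traffic profile before committing either way, since the break-even depends on memory size and duration as much as on call count.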

 

Serverless vs. Traditional Servers: What's the Difference?

Category             Serverless      Traditional servers
How it works         Event-driven    Always running
Scaling              Automatic       Manual/semi-automatic
Billing              Usage-based     Instance-based
Operational burden   Very low        High
Control              Limited         High

 

Strategic Use from a Corporate Perspective

  • Core services: traditional servers or IaaS/PaaS
  • Auxiliary functions/extensions: Serverless
  • Experimentation and Automation Area: Serverless First

Serverless isn't a solution that replaces everything, but rather a piece of the puzzle that makes architecture more flexible.

 

Insight Summary

  • Serverless is not about having no servers, but rather a structure without operational responsibility.
  • Powerful when traffic volatility, rapid experimentation, and operational efficiency are key.
  • Understanding cold starts, vendor lock-in, and cost structures is essential.
  • The correct answer is “Serverless for the right parts” rather than “Serverless for everything.”