Introduction to Cloud Native and considerations for designing cloud native applications.
A Paradigm Shift in Application Development
Application development has recently shifted toward a paradigm that emphasizes flexibility, scalability, sustainability, and maintainability, and that separates the concerns of service delivery and feature development. This change is not simply a matter of following technology trends; it has become an essential survival strategy in rapidly changing business environments.
Traditional monolithic architectures integrated all functionality into a single massive codebase. While this approach offered advantages such as simple initial development and deployment, over time it revealed limitations: deployment risk, scalability constraints, technology-stack rigidity, and difficult team collaboration. Microservices Architecture (MSA) emerged to address these problems. However, MSA cannot exist in isolation; it only became practical once the supporting foundational technologies were in place.
Cloud Computing
Cloud computing made it possible to allocate hardware-level resources dynamically, in contrast to the traditional on-premise (bare-metal) approach of provisioning and managing all resources up front based on predicted usage, and thereby improved resource utilization at the infrastructure level. By partitioning and distributing physical resources at a finer granularity, utilization rates could be raised significantly. Traditional monolithic applications, however, could not benefit much from this efficiency, because the only way to absorb surging traffic was to spin up entire new instances of the whole application.
Microservices Architecture (MSA)
The container concept popularized by Docker freed applications from dependencies on specific hardware and operating systems. When Kubernetes then appeared to manage containerized services effectively, cloud computing's dynamic resource allocation could be applied down to the level of individual application services, and applications began to be split along service domains. These divided services are called microservices, and designs based on this principle are called MSA.
Service-oriented design approaches existed before MSA. Service-Oriented Architecture (SOA) was the predecessor: it shares the principle of dividing an application by service domain, but differs in the scope at which the design is applied. SOA operates at enterprise scale to avoid building the same service twice, whereas MSA divides services at the application level. Unlike SOA, which aims to manage common services enterprise-wide through an ESB (Enterprise Service Bus), MSA targets availability and scalability by delivering services as independent units.
| Aspect        | SOA         | MSA                 |
|---------------|-------------|---------------------|
| Scope         | Enterprise  | Application-level   |
| Communication | ESB         | Direct (REST, gRPC) |
| Governance    | Centralized | Decentralized       |

Difference between SOA and MSA
Cloud Native
“Divide and conquer is an algorithm design paradigm based on multi-branched recursion. A divide-and-conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same or related type until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.” – Wikipedia
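To make the quoted principle concrete, here is a minimal divide-and-conquer sketch in Go, a recursive merge sort. It is only an illustration of the paradigm itself, not of anything specific to Cloud Native.

```go
package main

import "fmt"

// mergeSort recursively splits the slice until the pieces are trivially
// sorted, then merges the solved sub-problems back together.
func mergeSort(xs []int) []int {
	if len(xs) <= 1 {
		return xs // simple enough to be "solved directly"
	}
	mid := len(xs) / 2
	left := mergeSort(xs[:mid])  // divide
	right := mergeSort(xs[mid:]) // divide
	return merge(left, right)    // combine
}

// merge combines two sorted slices into one sorted slice.
func merge(a, b []int) []int {
	out := make([]int, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		if a[i] <= b[j] {
			out = append(out, a[i])
			i++
		} else {
			out = append(out, b[j])
			j++
		}
	}
	out = append(out, a[i:]...)
	out = append(out, b[j:]...)
	return out
}

func main() {
	fmt.Println(mergeSort([]int{5, 2, 9, 1, 7})) // [1 2 5 7 9]
}
```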
A Red Hat blog post describes Cloud Native as adopting this divide-and-conquer principle. By splitting up the services that used to be bundled into a single application, application updates can be decoupled from service delivery, and services under heavy demand can be served without delay through per-service dynamic resource allocation. This secures service availability, improves development efficiency, reduces overall cost, and goes a long way toward improving the application user experience. Cloud Native refers to the entire ecosystem for operating applications built this way effectively, including:
- Monitoring and controlling services and resources
- Continuous Integration & Delivery (CI/CD)
- Container orchestration and networking
- Self-healing, circuit breakers, test automation, rollout/rollback, bin packing, etc.
However, the growing number of objects needed to define a single microservice (pod, deployment, replica set, service, ingress, job, service account, role, and so on) increases application complexity. Because a deep understanding of the entire system is now required of operations staff as well as developers, a new team structure called DevOps emerged; with dedicated security management added, it is called DevSecOps.
Cloud Native is still evolving, and third-party tooling that helps build Cloud Native environments is being developed just as actively. The CNCF (Cloud Native Computing Foundation), part of the non-profit Linux Foundation, serves as an open-source, vendor-neutral hub with the goal of popularizing Cloud Native. It leads or supports many projects, including Argo (Continuous Integration & Delivery), Cilium (Cloud Native Network), Harbor (Container Registry), Helm (Application Definition & Image Build), Istio (Service Mesh), Prometheus (Observability), Kubernetes (Scheduling & Orchestration), and many others.
Application Operating System?
As services became separated, certain elements became essential in Cloud Native environments. While examining them, I noticed that they map remarkably well onto the composition of a particular piece of software: Android OS.
| Role                  | Cloud Native                | Android OS               |
|-----------------------|-----------------------------|--------------------------|
| Messaging             | Message Broker (e.g. Kafka) | Broadcasting             |
| Routing / Gateway     | API Gateway                 | Intent Filter            |
| Lifecycle Management  | Container Orchestration     | Activity Manager         |
| Service Communication | Service Mesh (e.g. Istio)   | Binder IPC               |
| Monitoring            | Health Checks / Monitoring  | Watchdog / ANR Detection |
| Fault Tolerance       | Circuit Breaker             | Process Isolation        |
| Resource Management   | Auto Scaling                | Memory Management        |

Conceptual mapping of responsibilities between Cloud Native and Android OS
Cloud Native and Android OS share a foundation in distributed design patterns, and the similarities extend well beyond the items in the table. Despite having different goals (scalability and fault tolerance for Cloud Native, device optimization for Android OS), the two have converged in remarkably similar directions. Could Cloud Native, then, be regarded as an operating system for running applications more efficiently?
Core Design Considerations for MSA
Building Cloud Native Applications: Why & How? – KANINI
MSA-based application design must be approached from a different perspective than traditional design. As the similarity to Android OS suggests, Cloud Native behaves like a dedicated operating system for applications, which creates a somewhat paradoxical dependency of the application on the Cloud Native platform itself. Design must therefore account for compatibility with the Cloud Native environment. The considerations break down as follows:
1. Infrastructure Architecture
Defining the Cloud Native environment in which the application will run comes first.
- Centralized Configuration Management: a single place to manage physical resources, externally integrated services, cloud resources, and so on.
- Observability: monitoring systems that expose resource usage, service health, and similar signals (a minimal health-endpoint sketch follows this list).
- Security: network policies, ingress and egress rules, firewall settings, and the definition and management of permissions.
- Inter-service Communication: defining how services talk to each other, synchronously (REST/gRPC) or asynchronously (message queue/event streaming).
- Service Discovery & Load Balancing: mechanisms for locating service instances and distributing load in a dynamic environment.
2. Microservice Definition
Based on the defined Cloud Native environment, components of a single microservice can be identified.
- Data Management: giving each service an independent data store and establishing policies for maintaining consistency across distributed data sources.
- API Design: designing concrete communication contracts that follow the chosen service communication approach, including versioning for each service's API.
- Resilience Patterns: timeout, retry, and error-handling policies, plus fault-response policies such as circuit breakers (a minimal sketch follows below).
Based on these elements a single microservice is assembled. Each service defines its boundaries and context according to Domain-Driven Design (DDD) principles and is designed with communication formats and with network and firewall security policies appropriate to its environment.
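To make the resilience patterns above concrete, the following is a deliberately simplified circuit breaker sketch in Go. It is not the implementation of any particular library (production services usually rely on a battle-tested library or a service-mesh policy); it only shows the core idea: after a configurable number of consecutive failures the breaker opens and rejects calls immediately, then lets a trial call through once a cool-down period has passed.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// ErrOpen is returned when the breaker rejects a call without running it.
var ErrOpen = errors.New("circuit breaker is open")

// Breaker trips open after maxFailures consecutive failures and stays
// open for cooldown, after which a single trial call is allowed through.
type Breaker struct {
	mu          sync.Mutex
	maxFailures int
	cooldown    time.Duration
	failures    int
	openedAt    time.Time
	open        bool
}

func NewBreaker(maxFailures int, cooldown time.Duration) *Breaker {
	return &Breaker{maxFailures: maxFailures, cooldown: cooldown}
}

// Call runs fn unless the breaker is open and still cooling down.
func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.open {
		if time.Since(b.openedAt) < b.cooldown {
			b.mu.Unlock()
			return ErrOpen // fail fast, protect the struggling dependency
		}
		b.open = false // half-open: allow a trial call through
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.open = true
			b.openedAt = time.Now()
		}
		return err
	}
	b.failures = 0 // a success closes the breaker again
	return nil
}

func main() {
	b := NewBreaker(3, 2*time.Second)
	flaky := func() error { return errors.New("downstream timeout") }

	// The first three calls fail normally; the rest are rejected fast.
	for i := 0; i < 5; i++ {
		fmt.Println(b.Call(flaky))
	}
}
```

In a real service the wrapped function would also carry a context with a timeout, and retries would be limited to idempotent operations.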
3. Others
Beyond these points, applying MSA to an application requires change in many other areas: CI/CD pipelines that allow services to be updated without interrupting delivery, the construction of development, testing, and deployment environments, and the organizational and cultural shifts that come with adopting DevOps.
Case Studies
Numerous examples of migration to Cloud Native applications can already be found. Among them, the CNCF Case Studies primarily introduce cases built on CNCF projects.
- Netflix: consistently cited as a Cloud Native success story, Netflix describes the benefits it gained by replacing its internally developed RPC (Remote Procedure Call) technology with gRPC and expanding its use to improve inter-service communication.
- Kakao: the Korean company Kakao appears as well. Kakao ran into network problems when adding kube-proxy and Nginx Ingress to its Kubernetes platform, but by adopting Cilium as its CNI (Container Network Interface) it reduced network costs and eliminated the need to introduce kube-proxy.
Conclusion
“The best way to implement complex systems is to use a series of simple, loosely coupled components.” – Martin Fowler
Cloud Native and MSA represent not simply technical choices, but a philosophical change in software development and operations. Just as Android OS created an ecosystem where countless apps operate independently yet harmoniously on a single platform, Cloud Native presents a new paradigm where microservices each have their own lifecycle while delivering unified business value.
This journey is never easy. It involves not only technical complexity but also demands fundamental changes in organizational culture and development processes. However, if these changes are implemented successfully, they secure a sustainable competitive advantage in rapidly changing business environments.