Microservice architecture: PaaS cloud platform architecture design based on microservices and Docker container technology (microservice architecture implementation principles)


The goal of building a PaaS cloud platform based on microservice architecture and Docker container technology is to give our developers a set of processes for rapid service development, deployment, operations management, and continuous integration and continuous delivery. The platform provides resources such as infrastructure, middleware, data services, and cloud servers. Developers only need to write business code, submit it to the platform's code repository, and make a few necessary configurations; the system then builds and deploys automatically, achieving agile development and rapid iteration of applications. Architecturally, the PaaS cloud platform rests on three pillars: microservice architecture, Docker container technology, and DevOps. This article focuses on the implementation of the microservice architecture.

Implementing microservices from scratch requires substantial engineering resources to build the infrastructure, which is unrealistic for many companies. Fortunately, the industry already has excellent open source frameworks to draw on. The more mature microservice frameworks currently include Netflix OSS, Spring Cloud, and Alibaba's Dubbo. Spring Cloud is a complete microservice framework built on Spring Boot: it provides the components required for developing microservices, and used together with Spring Boot it makes developing cloud services with a microservice architecture very convenient. Spring Cloud contains many sub-projects, one of which is Spring Cloud Netflix, and our microservice architecture design uses many of its components. The Spring Cloud Netflix project has not been around for long and related documentation is scarce; when I studied this framework I had to wade through a great deal of English documentation, which was painful. If you are just getting started with the framework and want to build a microservice application architecture, you may not know where to begin. Below, I walk through how we built our microservice architecture and which frameworks and components are needed to support it.

To show the composition and principles of the microservice architecture clearly, I drew the following system architecture diagram:

As can be seen from the figure above, the general path of microservice access is: external request → load balancing → service gateway (GateWay) → microservice → data service/message service. Both the service gateway and microservice use service registration and discovery to call other dependent services, and each service cluster can obtain configuration information through the configuration center service.

Service Gateway (GateWay)

The gateway is the door between external systems (client browsers, mobile devices, etc.) and the enterprise's internal systems: all client requests reach backend services through the gateway. To cope with high concurrent access, the service gateway is deployed as a cluster, which means load balancing is required in front of it. We use Amazon EC2 as virtual cloud servers and ELB (Elastic Load Balancing) for load balancing. EC2 supports automatic capacity scaling: when user traffic peaks, it can automatically add capacity to maintain the performance of the virtual hosts. ELB automatically distributes incoming application traffic across multiple instances. For security, client requests are protected with HTTPS, which requires SSL termination; we use Nginx to offload the encrypted requests. After ELB load balancing, an external request is routed to one GateWay service in the GateWay cluster, which forwards it to the target microservice. As the boundary of the internal system, the service gateway has the following basic capabilities:

1. Dynamic routing: Dynamically route requests to the required backend service cluster. Although the internal structure is a complex distributed microservice mesh, the external system looks like a whole service from the gateway, and the gateway shields the complexity of the backend service.

2. Rate limiting and fault tolerance: Allocate capacity for each type of request, and when the request count exceeds the threshold, discard the excess external requests to limit traffic and protect backend services from being overwhelmed. When internal services fail, generate responses directly at the boundary and handle the faults centrally, rather than forwarding requests into the internal cluster, to preserve a good user experience.

3. Identity authentication and security control: Perform user authentication on each external request, reject requests that have not passed authentication, and implement anti-crawler functions through access pattern analysis.

4. Monitoring: The gateway can collect meaningful data and statistics to provide data support for background service optimization.

5. Access logging: The gateway can collect access log information: which service was accessed, how the request was processed (including any exceptions) and with what result, and how long it took. Analyzing these logs helps further optimize the backend system.

We use Zuul, an open source component of the Spring Cloud Netflix framework, to implement the gateway service. Zuul uses a series of different types of filters, and by rewriting the filters, we can flexibly implement various functions of the gateway.
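
To make the filter mechanism concrete, here is a minimal plain-Java sketch of the filter-chain idea behind Zuul. This is illustrative only, not the real Zuul API; the `FilterChainSketch` class and its methods are hypothetical. Each filter declares a type ("pre", "route", "post") and an order; the gateway runs all pre-filters, then route-filters, then post-filters, each group sorted by ascending order.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Minimal sketch of a Zuul-style filter chain (illustrative, NOT the real Zuul API).
public class FilterChainSketch {

    interface GatewayFilter {
        String filterType();   // "pre", "route" or "post"
        int filterOrder();     // lower runs first within its type
        String name();         // stands in for real filter logic
    }

    static GatewayFilter filter(String type, int order, String name) {
        return new GatewayFilter() {
            public String filterType() { return type; }
            public int filterOrder() { return order; }
            public String name() { return name; }
        };
    }

    // Runs filters grouped by type, each group sorted by order;
    // returns the execution order as a space-separated string.
    static String execute(List<GatewayFilter> filters) {
        StringBuilder log = new StringBuilder();
        for (String type : new String[] {"pre", "route", "post"}) {
            filters.stream()
                   .filter(f -> f.filterType().equals(type))
                   .sorted(Comparator.comparingInt(GatewayFilter::filterOrder))
                   .forEach(f -> log.append(f.name()).append(' '));
        }
        return log.toString().trim();
    }

    public static void main(String[] args) {
        List<GatewayFilter> filters = new ArrayList<>();
        filters.add(filter("post", 1, "collect-metrics"));
        filters.add(filter("route", 1, "forward-to-backend"));
        filters.add(filter("pre", 2, "rate-limit"));
        filters.add(filter("pre", 1, "authenticate"));
        System.out.println(execute(filters));
        // authenticate rate-limit forward-to-backend collect-metrics
    }
}
```

In real Zuul, a filter extends ZuulFilter and implements filterType(), filterOrder(), shouldFilter(), and run(); the sketch above only illustrates how typed, ordered filters compose into gateway behavior such as authentication, rate limiting, routing, and metrics collection.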

Service Registration and Discovery

Since a microservice architecture is a mesh of fine-grained services with single responsibilities that communicate through lightweight mechanisms, it introduces the problem of service registration and discovery: a service provider must register and report its address, and a service caller must be able to discover the target service. Our microservice architecture uses the Eureka component for this. All microservices register with the Eureka server (by configuring the Eureka service information) and send heartbeats periodically as health checks. By default, Eureka clients send a heartbeat every 30 seconds to indicate the service is still alive; the interval is configurable. If the Eureka server receives no heartbeat from a service instance for 90 seconds after the last one (the default, also configurable), it judges the service dead, that is, three consecutive heartbeats have been missed, and, when self-preservation mode is off, clears the service's registration. Self-preservation mode exists for network partitions: if Eureka loses too many services in a short period, it assumes a partition rather than mass failure and stops evicting instances that have not sent heartbeats for a long time. Self-preservation mode is enabled by default and can be turned off via configuration.
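
The heartbeat and eviction mechanism described above can be sketched as follows. This is an illustrative model, not Eureka's actual code; `LeaseRegistrySketch` and its methods are hypothetical names, and self-preservation is simplified to a plain on/off switch.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of Eureka-style lease expiry (NOT Eureka's actual implementation).
// An instance renews its lease with each heartbeat (default every 30 s); the server
// evicts it once no heartbeat has arrived for 90 s (three missed heartbeats),
// unless self-preservation mode is on.
public class LeaseRegistrySketch {
    static final long EVICTION_MS = 90_000;   // lease duration: 3 x 30 s heartbeat

    private final Map<String, Long> lastHeartbeat = new HashMap<>();
    private boolean selfPreservation = false; // simplified: a plain on/off switch

    void renew(String instanceId, long nowMs) { lastHeartbeat.put(instanceId, nowMs); }

    void setSelfPreservation(boolean on) { selfPreservation = on; }

    // Remove every instance whose lease has expired, unless self-preservation is on.
    void evictExpired(long nowMs) {
        if (selfPreservation) return;
        lastHeartbeat.values().removeIf(last -> nowMs - last > EVICTION_MS);
    }

    boolean isRegistered(String instanceId) { return lastHeartbeat.containsKey(instanceId); }

    public static void main(String[] args) {
        LeaseRegistrySketch registry = new LeaseRegistrySketch();
        registry.renew("order-service-1", 0);
        registry.evictExpired(60_000);  // only 60 s of silence: still within the lease
        System.out.println(registry.isRegistered("order-service-1")); // true
        registry.evictExpired(120_000); // 120 s of silence: lease expired, evicted
        System.out.println(registry.isRegistered("order-service-1")); // false
    }
}
```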

The Eureka service is deployed as a cluster (how to deploy an Eureka cluster is described in detail in another of my articles). All Eureka nodes in the cluster automatically synchronize microservice registration information with each other at regular intervals, so the registry stays consistent across all nodes. How, then, does an Eureka node discover the other nodes in the cluster? We use a DNS server to establish the association among all Eureka nodes, so in addition to deploying the Eureka cluster, we also need to set up a DNS server.

When the gateway forwards an external request, or backend microservices call each other, the caller looks up the target service's registration information on the Eureka server, discovers the target service, and calls it; this completes the registration-and-discovery loop. Eureka has a large number of configuration parameters (upwards of a hundred), which I will explain in detail in another article.

Microservice deployment

Microservices are a series of fine-grained services with single responsibilities. They split our business into independent service units with good scalability and low coupling. Different microservices can be developed in different languages, and each service handles a single piece of business. Microservices can be divided into front-end services (also called edge services) and back-end services (also called middle services). Front-end services aggregate and tailor back-end services as needed and are exposed to different external devices (PC, phone, etc.). All services register with the Eureka server on startup, and there are complex dependencies between them. When the gateway service forwards an external request to a front-end service, it finds the target service by querying the service registry; a front-end service calling a back-end service works the same way, and a single request may involve calls among several services. Since every microservice is deployed as a cluster, load balancing is needed whenever one service calls another, so each service has an LB component to achieve load balancing.
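
The per-service LB component mentioned above can be sketched as a simple client-side load balancer. This is illustrative only, not Ribbon's API; `RoundRobinSketch` is a hypothetical name, and the instance addresses are made up. The caller fetches the target service's instance list (in our architecture, from Eureka) and picks one instance per request, here with plain round-robin.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of client-side load balancing in the Ribbon style
// (illustrative, NOT Ribbon's actual API).
public class RoundRobinSketch {
    private final List<String> instances; // e.g. addresses discovered via Eureka
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinSketch(List<String> instances) { this.instances = instances; }

    // Pick the next instance in rotation; thread-safe via the atomic counter.
    String choose() {
        int i = Math.abs(position.getAndIncrement() % instances.size());
        return instances.get(i);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch(
                List.of("10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"));
        for (int i = 0; i < 4; i++) System.out.println(lb.choose());
        // cycles through the three instances, then wraps back to the first
    }
}
```

Ribbon itself supports several pluggable rules (round-robin, weighted response time, availability filtering, and so on); round-robin is shown here because it is the simplest to reason about.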

Microservices run as images inside Docker containers. Docker container technology makes our service deployment simple and efficient. The traditional approach requires installing the runtime environment on every server: with a large number of servers that is an extremely arduous task, and once the runtime environment changes, everything must be reinstalled, which is simply disastrous. With Docker, we only need to build a new image from the required base image (JDK, etc.) plus the microservice, and deploy that final image in a Docker container. This approach is simple and efficient and lets us deploy services quickly. Each Docker container can run multiple microservices. Containers are deployed in clusters and managed with Docker Swarm. We create an image repository to store all base images and the generated final delivery images, and manage all images there.

Service Fault Tolerance

There are intricate dependencies between microservices: a single request may rely on multiple backend services, and in production those services may fail or respond slowly. In a high-traffic system, once one service lags, it can exhaust system resources in a short time and bring down the entire system; a service whose failures cannot be isolated and tolerated is therefore itself a disaster. Our microservice architecture uses the Hystrix component for fault tolerance. Hystrix is an open source Netflix component that provides elastic fault-tolerance protection for services through mechanisms such as the circuit breaker pattern, isolation, fallback, and rate limiting, ensuring system stability.

1. Circuit breaker pattern: The principle is similar to an electrical circuit breaker: when a short circuit occurs, the breaker trips to protect the circuit from catastrophic damage. When a service is failing or responding with high latency, the caller trips the breaker once the tripping conditions are met, executes the fallback logic, and returns directly, rather than continuing to call the service and dragging the system down further. By default, the breaker trips when the error rate of service calls exceeds the 50% threshold. After the service has been isolated for a period of time, the breaker enters a half-open state in which a small number of trial requests are allowed through: if they still fail, the breaker returns to the open state; if they succeed, the breaker closes again.
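
The closed / open / half-open cycle described above can be sketched as a small state machine. This is an illustrative model, not Hystrix itself; `CircuitBreakerSketch` is a hypothetical name, and the minimum request volume of 4 is chosen for brevity (Hystrix's default is 20 requests in the rolling window).

```java
// Minimal circuit-breaker state machine in the Hystrix style (illustrative sketch,
// NOT Hystrix). CLOSED: calls pass through and outcomes are counted; once the error
// rate reaches the threshold (Hystrix defaults to 50%), the breaker OPENs and calls
// fail fast. After a sleep window it goes HALF_OPEN and allows trial calls:
// success closes the breaker again, failure re-opens it.
public class CircuitBreakerSketch {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final double errorThreshold;   // e.g. 0.5 = 50%
    private final long sleepWindowMs;      // how long to stay open before a trial
    private State state = State.CLOSED;
    private int total = 0, failures = 0;
    private long openedAtMs = 0;

    CircuitBreakerSketch(double errorThreshold, long sleepWindowMs) {
        this.errorThreshold = errorThreshold;
        this.sleepWindowMs = sleepWindowMs;
    }

    boolean allowRequest(long nowMs) {
        if (state == State.OPEN && nowMs - openedAtMs >= sleepWindowMs) {
            state = State.HALF_OPEN;       // sleep window elapsed: allow a trial call
            return true;
        }
        return state != State.OPEN;        // open = fail fast
    }

    void recordSuccess() {
        if (state == State.HALF_OPEN) { state = State.CLOSED; total = failures = 0; }
        else total++;
    }

    void recordFailure(long nowMs) {
        if (state == State.HALF_OPEN) { trip(nowMs); return; }
        total++; failures++;
        // minimum volume of 4 kept small for the sketch (Hystrix default: 20)
        if (total >= 4 && (double) failures / total >= errorThreshold) trip(nowMs);
    }

    private void trip(long nowMs) { state = State.OPEN; openedAtMs = nowMs; }

    State state() { return state; }

    public static void main(String[] args) {
        CircuitBreakerSketch breaker = new CircuitBreakerSketch(0.5, 5_000);
        breaker.recordSuccess(); breaker.recordSuccess();
        breaker.recordFailure(0); breaker.recordFailure(0); // error rate hits 50%
        System.out.println(breaker.state());                // OPEN
        System.out.println(breaker.allowRequest(1_000));    // false: fail fast
        System.out.println(breaker.allowRequest(6_000));    // true: half-open trial
        breaker.recordSuccess();
        System.out.println(breaker.state());                // CLOSED
    }
}
```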

2. Isolation: Hystrix uses thread isolation by default: different services use different thread pools and do not affect each other. When a failing service exhausts its own thread pool, the other services continue to run normally, achieving a bulkhead effect. For example, we can configure a service to use a thread pool named TestThreadPool via andThreadPoolKey, isolating it from thread pools with other names.
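
The bulkhead effect can be sketched with one bounded thread pool per dependency key. This is illustrative only, not Hystrix's implementation; `BulkheadSketch` and the pool names are hypothetical. Saturating one dependency's pool does not affect calls routed to another pool.

```java
import java.util.Map;
import java.util.concurrent.*;

// Sketch of thread-pool (bulkhead) isolation, illustrative, NOT Hystrix's code:
// each dependency gets its own small bounded pool, keyed by name (Hystrix does
// this via andThreadPoolKey). A saturated pool rejects work instead of letting
// one slow dependency consume every thread in the process.
public class BulkheadSketch {
    private final Map<String, ExecutorService> pools = new ConcurrentHashMap<>();

    // One bounded pool per dependency key, e.g. "TestThreadPool".
    private ExecutorService poolFor(String key) {
        return pools.computeIfAbsent(key, k -> new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2),            // small queue: fail fast when full
                new ThreadPoolExecutor.AbortPolicy())); // reject instead of piling up
    }

    // Throws RejectedExecutionException when the named pool is saturated.
    <T> Future<T> submit(String poolKey, Callable<T> task) {
        return poolFor(poolKey).submit(task);
    }

    void shutdown() { pools.values().forEach(ExecutorService::shutdownNow); }

    public static void main(String[] args) throws Exception {
        BulkheadSketch bulkhead = new BulkheadSketch();
        // Fill the slow dependency's pool (2 running + 2 queued)...
        for (int i = 0; i < 4; i++) {
            bulkhead.submit("SlowServicePool", () -> { Thread.sleep(5_000); return "slow"; });
        }
        // ...other dependencies are unaffected because they use their own pool.
        System.out.println(bulkhead.submit("FastServicePool", () -> "fast").get());
        bulkhead.shutdown();
    }
}
```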

3. Fallback: The fallback mechanism is a way to tolerate service failure, similar in spirit to exception handling in Java. You simply extend HystrixCommand and override the getFallback() method, writing your degradation logic there: throw an exception directly (fail fast), return null or a default value, or return backup data. When a service call fails, execution switches to getFallback(). Fallback is triggered in the following situations:

1) The program throws an exception other than HystrixBadRequestException. When HystrixBadRequestException is thrown, it propagates to the caller, which can catch it, and no fallback is triggered; any other exception triggers the fallback;

2) The call times out;

3) The circuit breaker is open;

4) The thread pool is full.

4. Rate limiting: Rate limiting means restricting concurrent access to a service: set a cap on concurrent requests per unit of time, and reject and fall back requests that exceed the limit, preventing backend services from being overwhelmed.
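
The reject-and-fall-back behavior can be sketched with a counting semaphore, similar in spirit to Hystrix's semaphore isolation. This is an illustrative sketch, not Hystrix's implementation; `ConcurrencyLimitSketch` is a hypothetical name.

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Sketch of concurrency limiting (illustrative, NOT Hystrix's code): at most N
// requests may be in flight at once; a request beyond the limit is rejected
// immediately and served by the fallback instead of queueing up and exhausting
// resources.
public class ConcurrencyLimitSketch {
    private final Semaphore permits;

    ConcurrencyLimitSketch(int maxConcurrent) { permits = new Semaphore(maxConcurrent); }

    String call(Supplier<String> service, Supplier<String> fallback) {
        if (!permits.tryAcquire()) return fallback.get(); // over the limit: fall back
        try {
            return service.get();
        } finally {
            permits.release(); // always return the permit
        }
    }

    public static void main(String[] args) {
        ConcurrencyLimitSketch limiter = new ConcurrencyLimitSketch(2);
        System.out.println(limiter.call(() -> "handled", () -> "rejected")); // handled
    }
}
```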

Hystrix uses the command pattern, HystrixCommand, to wrap dependency-call logic so that every call automatically receives Hystrix's elastic fault-tolerance protection. The caller extends HystrixCommand, writes the call logic in run(), and triggers execution of run() with execute() (synchronous, blocking) or queue() (asynchronous, non-blocking).
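
The command pattern above can be sketched in plain Java. `CommandSketch` mirrors but is not the real HystrixCommand class: call logic lives in run(), degradation logic in getFallback(), execute() runs synchronously, queue() returns a Future, and a "bad request" exception propagates to the caller instead of triggering the fallback, mirroring Hystrix's treatment of HystrixBadRequestException.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

// Illustrative sketch of the HystrixCommand pattern (NOT the real Hystrix classes).
public abstract class CommandSketch<T> {

    // Marker for caller errors that should NOT trigger the fallback.
    public static class BadRequestException extends RuntimeException {
        public BadRequestException(String msg) { super(msg); }
    }

    protected abstract T run() throws Exception;    // the protected dependency call

    protected T getFallback() {                     // default: no fallback defined
        throw new UnsupportedOperationException("no fallback");
    }

    public T execute() {                            // synchronous, blocking
        try {
            return run();
        } catch (BadRequestException e) {
            throw e;                                // caller's fault: let it propagate
        } catch (Exception e) {
            return getFallback();                   // failure: degrade gracefully
        }
    }

    public Future<T> queue() {                      // asynchronous, non-blocking
        return CompletableFuture.supplyAsync(this::execute);
    }

    public static void main(String[] args) {
        CommandSketch<String> cmd = new CommandSketch<String>() {
            protected String run() { throw new RuntimeException("service down"); }
            protected String getFallback() { return "cached-default"; }
        };
        System.out.println(cmd.execute()); // cached-default
    }
}
```

Real HystrixCommand additionally handles timeouts, circuit-breaker checks, and thread-pool isolation around run(); the sketch keeps only the run/fallback/execute/queue skeleton.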

Dynamic Configuration Center

Microservices depend on many configuration parameters, and some may need to change while the service is running, for example adjusting the circuit breaker threshold dynamically based on traffic. With the traditional approach of putting configuration in files such as XML or YML and packaging it with the application, every change means resubmitting code, rebuilding and repackaging, generating a new image, and restarting the service: far too inefficient to be reasonable. We therefore build a dynamic configuration center to support dynamic configuration of microservices, using Spring Cloud's Config Server. Our microservice code lives in private repositories on a git server, and all configuration files that need dynamic configuration are stored on the git server and served by the config server (the configuration center, itself a microservice deployed in a container). The microservices deployed in Docker containers read configuration dynamically from the git server through the config server. When someone modifies configuration in a local git repository and pushes it to the git server, a git server hook (post-receive, invoked automatically after the server finishes updating the code) detects whether any configuration file changed; if so, the git server sends a message through the message queue to the configuration center, telling it to refresh the corresponding configuration file. In this way, microservices obtain the latest configuration information and achieve dynamic configuration.
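
The refresh flow above can be sketched as a config holder that keeps the latest snapshot fetched from a source and re-fetches when a refresh notification arrives. This is an illustrative model, not Spring Cloud Config's implementation; `DynamicConfigSketch` and the property key are hypothetical, and the message-queue notification is reduced to a direct refresh() call.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative sketch of the dynamic-configuration idea (NOT Spring Cloud Config).
// The holder keeps a snapshot fetched from a source (standing in for the config
// server backed by git); a refresh notification, e.g. delivered via a message
// queue after a git push, makes it re-fetch, so services pick up new values
// without a rebuild or restart.
public class DynamicConfigSketch {
    private final Supplier<Map<String, String>> source;  // stands in for the config server
    private volatile Map<String, String> snapshot;

    DynamicConfigSketch(Supplier<Map<String, String>> source) {
        this.source = source;
        refresh();
    }

    // Called when the refresh notification arrives.
    final void refresh() { snapshot = new ConcurrentHashMap<>(source.get()); }

    String get(String key, String defaultValue) {
        return snapshot.getOrDefault(key, defaultValue);
    }

    public static void main(String[] args) {
        Map<String, String> remote = new ConcurrentHashMap<>();
        remote.put("circuitBreaker.errorThreshold", "50");
        DynamicConfigSketch config = new DynamicConfigSketch(() -> remote);
        System.out.println(config.get("circuitBreaker.errorThreshold", "?")); // 50

        remote.put("circuitBreaker.errorThreshold", "60"); // a "git push" changes the value
        config.refresh();                                   // notification triggers refresh
        System.out.println(config.get("circuitBreaker.errorThreshold", "?")); // 60
    }
}
```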

The above frameworks or components are the core supporting the implementation of microservice architecture. In actual production, we will also use many other components, such as log service components, message service components, etc., and choose to use them according to business needs. In our microservice architecture implementation case, we refer to and use many open source components of the Spring Cloud Netflix framework, mainly including Zuul (service gateway), Eureka (service registration and discovery), Hystrix (service fault tolerance), Ribbon (client load balancing), etc. These excellent open source components provide us with a shortcut to implement microservice architecture.

The above covers the basic principles of the microservice architecture. Some of the finer details, such as the meaning of Eureka's many configuration parameters and the process of building the dynamic configuration center, I will explain in detail in other articles for your reference.
