Nginx Concepts
Nginx is a high-performance HTTP and reverse proxy server. It is notable for its low memory footprint and strong concurrency; in fact, Nginx handles concurrency better than most web servers of its kind.
Nginx was developed with performance as the primary design goal. It focuses on efficiency, stands up well under heavy load, and is reported to support up to 50,000 concurrent connections.
Under highly concurrent connections, Nginx is a good alternative to the Apache service; in the United States it is one of the software platforms most often chosen by virtual-hosting providers.
Reverse proxy
Before discussing the reverse proxy, let's talk about what a proxy and a forward proxy are.
Proxy
A proxy is essentially an intermediary: A and B could connect directly, but a C is inserted in the middle as a go-between. In the beginning, proxies mostly helped intranet clients (on a LAN) access extranet servers. Then came the reverse proxy, where "reverse" means the opposite direction: the proxy forwards requests from external clients to internal servers, from the outside in.
Forward proxy
A forward proxy is a proxy for the client. It acts on the client's behalf, so the server does not know which client actually initiated the request.
A forward proxy works like a jump host: it accesses external resources on the client's behalf.
For example, suppose we are in China and cannot reach Google directly. We can use a forward proxy server: we send the request to the proxy, the proxy can reach Google, so it accesses Google for us, fetches the returned data, and passes it back. In this way we can visit Google.

Reverse proxy
A reverse proxy is a proxy for the server. It acts on the server's behalf, so the client does not know which server actually provides the service.
The client is unaware that a proxy server even exists.
The proxy server accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the servers' results to the clients on the Internet that requested the connections. A proxy server working this way is a reverse proxy server.
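As a concrete sketch of the idea above, a reverse proxy in nginx boils down to a proxy_pass directive. The domain name and internal address here are illustrative placeholders, not taken from this article:

```nginx
# Minimal reverse-proxy sketch: nginx listens publicly and forwards
# every request to an internal server the client never sees.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://192.168.1.10:8080;  # internal backend (placeholder)
    }
}
```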

Load balancing
Let's start with an example:
Most of us have taken the subway during the morning rush hour, and there is always one entrance that is the most crowded. Usually a subway staff member A stands there with a megaphone shouting, "If you're in a hurry, please use entrance B; B has fewer people and emptier cars." That staff member A is doing load balancing.
To improve a website's capabilities in all respects, we generally form a cluster of multiple machines to serve external requests. However, the website exposes a single entry point, such as www.taobao.com. So when a user types www.taobao.com into the browser, how do we distribute that user's requests across the different machines in the cluster? This is exactly what load balancing does.
Load balancing (Load Balance) means spreading load (work tasks, access requests) evenly across multiple operating units (servers, components) for execution. It is the classic solution for high performance, single points of failure (high availability), and extensibility (horizontal scaling).

Nginx mainly provides three ways to do load balancing: round-robin, weighted round-robin, and IP hash.
Round-robin
Round-robin is nginx's default; every server's weight defaults to 1, and the servers handle requests in order: ABCABCABCABC...
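A minimal sketch of the default round-robin setup. The upstream name "myservers" and the backend addresses are placeholders, not from the original article:

```nginx
http {
    upstream myservers {
        server 192.168.0.1:8080;   # A
        server 192.168.0.2:8080;   # B
        server 192.168.0.3:8080;   # C
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myservers;  # requests rotate A, B, C, A, B, C...
        }
    }
}
```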

Weighted round-robin
Requests are distributed to the servers in proportion to their configured weights. If no weight is set, it defaults to 1. For example, with weights of 1, 2, and 3 for servers A, B, and C, the request order is ABBCCCABBCCC...
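A sketch of an upstream block producing that ABBCCC pattern (addresses are placeholders):

```nginx
upstream myservers {
    server 192.168.0.1:8080 weight=1;  # A: 1 of every 6 requests
    server 192.168.0.2:8080 weight=2;  # B: 2 of every 6 requests
    server 192.168.0.3:8080 weight=3;  # C: 3 of every 6 requests
}
```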

ip_hash
ip_hash performs a hash operation on the client's request IP and, based on the result, distributes all requests from the same client IP to the same server. This can solve the problem of sessions not being shared across servers.
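A sketch of an ip_hash upstream (addresses are placeholders): requests from the same client IP always land on the same backend, so in-memory sessions keep working:

```nginx
upstream myservers {
    ip_hash;                      # pin each client IP to one backend
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
}
```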

Dynamic and static separation
The difference between dynamic and static pages
- Static resources: no matter how many times a user accesses the resource, its source never changes (e.g. HTML, JavaScript, CSS, and image files).
- Dynamic resources: when a user accesses the resource repeatedly, its output may change (e.g. .jsp files, servlets, etc.).
What is dynamic and static separation
- Dynamic and static separation means using certain rules to distinguish the resources of a dynamic website that rarely change from those that change frequently. Once the static resources have been split out, we can cache them according to their characteristics; this is the core idea behind making a website static.
- In short, dynamic and static separation is the separation of dynamic files from static files.
Why use dynamic and static separation
To speed up the site, dynamic resources and static resources can be parsed by different servers. This speeds up processing and reduces the pressure on any single server.
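One common way to express this in nginx (paths and the backend address are illustrative assumptions, not from the article): static files are served straight from disk and cached by clients, while dynamic requests are proxied to an application server:

```nginx
server {
    listen 80;

    location /static/ {
        root /data;        # serves files under /data/static/ directly
        expires 3d;        # let browsers cache static resources
    }

    location / {
        proxy_pass http://127.0.0.1:8080;  # dynamic requests go to the app server
    }
}
```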

Nginx installation
Installing on Windows
1. Download nginx
Download the stable version from nginx.org/en/download… Taking nginx/Windows-1.20.1 as an example, download nginx-1.20.1.zip directly. Unzip it after downloading; the extracted directory looks like this:

2. Start nginx
- Double-click nginx.exe directly; a black console window will flash by.
- Or open a cmd window, change into the directory where nginx was unzipped, and run the command nginx.exe.
3. Check that nginx started successfully
Enter http://localhost:80 in the browser's address bar and press Enter. If the following page appears, the startup was successful!

Installing nginx with Docker
A previous article of mine covered the installation steps on Linux as well; here I will install with Docker, which is simple.
Related link: Docker (3): Deploying Nginx and Tomcat with Docker
1. List all images on the local host with the command docker images

2. Create an nginx container and start it with the command docker run -d --name nginx01 -p 3344:80 nginx

3. List the running containers with the command docker ps

Then access server-ip:3344 in the browser; if the page below appears, the installation and startup succeeded.
Note: if you cannot connect, check whether the ports are open in the Alibaba Cloud security group, and whether the server firewall has the ports open!

Installing on Linux
1. Install gcc
Installing nginx requires compiling the source code downloaded from the official site, and compiling depends on a gcc environment. If gcc is not available, install it first (typically `yum install -y gcc-c++`).
2. Install PCRE (pcre-devel)
PCRE (Perl Compatible Regular Expressions) is a Perl-compatible regular-expression library. nginx's http module uses pcre to parse regular expressions, so the pcre library must be installed on Linux; pcre-devel is the development library built on pcre, and nginx needs it too. Command (typically): `yum install -y pcre pcre-devel`.
3. Install zlib
The zlib library provides many ways to compress and decompress. nginx uses zlib to gzip the contents of HTTP responses, so the zlib library must be installed on CentOS (typically `yum install -y zlib zlib-devel`).
4. Install OpenSSL
OpenSSL is a robust Secure Sockets Layer cryptographic library that includes the main cryptographic algorithms, common key and certificate management functions, and the SSL protocol, and it provides a rich set of applications for testing and other purposes. nginx supports not only HTTP but also HTTPS (that is, HTTP transmitted over SSL), so OpenSSL must be installed on CentOS (typically `yum install -y openssl openssl-devel`).
5. Download the package
Manually download the .tar.gz package from: nginx.org/en/download…

Download it and upload it to /root on the server.
6. Unpack
For example: `tar -zxvf nginx-1.20.1.tar.gz`.
7. Configure
To use the default configuration, run `./configure` in the nginx source root directory, then `make` and `make install`.
Find the install path with: whereis nginx

8. Start nginx
Run `./nginx` from the sbin directory of the install path.

After a successful start, access the page at ip:80.

Nginx common commands
Note: to use the Nginx commands below, you must first change into the Nginx directory /usr/local/nginx/sbin
1. Check the Nginx version: ./nginx -v

2. Start Nginx: ./nginx

3. Stop Nginx: ./nginx -s stop or ./nginx -s quit

4. Reload the configuration file: ./nginx -s reload

5. View nginx processes: ps -ef | grep nginx

The Nginx configuration file
Location of the Nginx configuration file: /usr/local/nginx/conf/nginx.conf

The Nginx configuration file consists of 3 parts:

1. The global block
This runs from the start of the configuration file to the events block and mainly sets directives that affect the running of the nginx server as a whole, for example worker_processes 1.
This is a key setting for Nginx's concurrent processing: the larger the value of worker_processes, the more concurrency can be supported, subject to constraints from hardware, software, and other equipment. It is generally set equal to the number of CPU cores.
2. The events block
The events block contains directives that mainly affect the network connections between the Nginx server and its users, for example worker_connections 1024, which means each worker process supports a maximum of 1024 connections. This part of the configuration has a large impact on Nginx's performance and should be tuned flexibly in practice.
3. The http block
This can be considered the most frequently configured part of the Nginx server.
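Putting the three parts together, a skeleton of nginx.conf looks roughly like this (the values are the defaults mentioned above):

```nginx
# ---- global block: affects the server as a whole ----
worker_processes  1;           # usually set to the number of CPU cores

# ---- events block: network connections ----
events {
    worker_connections  1024;  # max connections per worker process
}

# ---- http block: the part changed most often ----
http {
    server {
        listen       80;
        server_name  localhost;
    }
}
```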
Demo examples
Reverse proxy / load balancing

We will demonstrate on Windows. First, create two Spring Boot projects, listening on ports 9001 and 9002, as follows:

What we want is for localhost:80 to proxy the two services localhost:9001 and localhost:9002, with requests polled between them in round-robin order.
The nginx configuration is as follows:
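The configuration screenshot from the original post did not survive; a sketch consistent with the setup described (two local services on 9001 and 9002, round-robin, upstream name assumed) might look like this:

```nginx
http {
    upstream myservers {
        server localhost:9001;   # Spring Boot project 1
        server localhost:9002;   # Spring Boot project 2
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myservers;  # poll the two services in turn
        }
    }
}
```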
Package the projects into jar files, start them from the command line, and then access localhost in the browser to reach the two projects. I also print a log line in each project; looking at the results, the two projects are accessed in turn.


As you can see, when accessing localhost, the two projects are polled in turn.
Next, change the weights to the following settings:
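The original weight screenshot did not survive either; a sketch consistent with the roughly 3:1 result reported below would be:

```nginx
upstream myservers {
    server localhost:9001 weight=1;  # 1 of every 4 requests
    server localhost:9002 weight=3;  # 3 of every 4 requests
}
```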
Reload the nginx configuration file: nginx -s reload
Once loading has finished, access localhost again and observe the proportion of visits:


The results show that the number of accesses to port 9002 versus port 9001 is roughly 3:1.
Dynamic and static separation
1. Put the static resources into a newly created local folder. For example, create a new folder named data on the D drive, and inside data create two more folders: an img folder for images and an html folder for html files, as shown below:

2. Create a new file a.html in the html folder, with the following contents:
3. Put a photo in the img folder, as follows:
4. Configure nginx's nginx.conf file:
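The configuration from the original post did not survive; a sketch matching the demo (serving D:\data\html and D:\data\img from disk, with a directory listing for /img/ assumed for step 6) might be:

```nginx
server {
    listen 80;
    server_name localhost;

    # With root, the location prefix is appended to the root path,
    # so /html/a.html maps to D:/data/html/a.html on disk.
    location /html/ {
        root D:/data;
    }

    location /img/ {
        root D:/data;
        autoindex on;   # list the directory contents for /img/
    }
}
```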
5. Start nginx and access the file path: enter http://localhost/html/a.html in the browser, as follows:

6. Enter http://localhost/img/ in the browser

How Nginx works
master & worker

After the master receives a signal, it assigns the task to a worker to execute; there can be multiple workers.

How workers work
After a client sends a request to the master, the mechanism by which workers obtain the task is neither direct assignment nor polling but competition: a worker "grabs" the task and then executes it, that is, it selects the target server (a tomcat instance, etc.) and then returns the result.

worker_connections
The maximum concurrency for ordinary static access is worker_connections * worker_processes / 2; if HTTP is used as a reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4, because as a reverse proxy each concurrent request establishes a connection with the client and another with the backend server, occupying two connections.
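A worked example of those formulas, with illustrative values (a 4-core server is an assumption):

```nginx
worker_processes  4;            # e.g. one worker per CPU core

events {
    worker_connections  1024;   # max connections per worker
}

# Static access:   4 * 1024 / 2 = 2048 concurrent clients
# Reverse proxy:   4 * 1024 / 4 = 1024 concurrent clients
```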
Of course, more workers is not necessarily better; it is best to set the number of workers equal to the number of server CPUs.
Advantages
You can use nginx -s reload for hot deployment. Each worker is an independent process, so if one worker runs into a problem, the others keep competing for requests and carrying them out, and the service is not interrupted.
Summary
This article has given a detailed description of Nginx's basic concepts, installation, configuration, usage examples, and how it works. I hope it has been helpful.