Interview Essentials: Sorting Out Nginx Knowledge

PHP Development Engineer 2021-09-15 09:58:48

Preface

Complete examples are available for reference and learning. Address: http://github.crmeb.net/u/defu

Nginx Concept

Nginx is a high-performance HTTP and reverse proxy server. Its hallmarks are low memory usage and strong concurrency; in fact, Nginx's concurrency is among the best of web servers of its kind.

Nginx was developed with performance optimization as the top priority. It focuses on efficiency, withstands high load well, and reportedly supports up to 50,000 concurrent connections.

Under highly concurrent connections, Nginx is a good substitute for Apache: in the United States it is one of the software platforms most often chosen by virtual-hosting providers.

Reverse proxy

Before talking about reverse proxies, let's first clarify what a proxy and a forward proxy are.

Proxy

A proxy is essentially an intermediary. A and B could connect directly, but a C is inserted between them as a go-between. Initially, proxies mostly helped intranet clients (on a LAN) access extranet servers. Later came the reverse proxy, where "reverse" means the opposite direction: the proxy forwards requests from external clients to internal servers, from the outside in.

Forward proxy

A forward proxy is a proxy for the client: it acts on the client's behalf, so the server does not know which client actually initiated the request.

A forward proxy works like a jump host: the proxy accesses external resources on the client's behalf.

For example, we cannot access Google directly from China, but we can use a forward proxy server: we send the request to the proxy, which can reach Google; the proxy fetches the response from Google and returns it to us, so we can effectively visit Google.
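As a rough illustration, nginx itself can act as a plain-HTTP forward proxy with a configuration like the sketch below. The port and resolver here are assumptions for illustration; proxying HTTPS traffic requires extra modules and is not covered here.

```nginx
# Minimal sketch of nginx as a plain-HTTP forward proxy.
server {
    listen 8888;                           # hypothetical proxy port
    resolver 8.8.8.8;                      # DNS resolver for upstream hosts

    location / {
        # Forward each request to whatever host the client originally asked for.
        proxy_pass http://$host$request_uri;
    }
}
```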

Reverse proxy

A reverse proxy is a proxy for the server: it acts on the server's behalf, so the client does not know which server actually provides the service.

The client is unaware that a proxy server is even involved.

In other words, the proxy server accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the servers' results to the Internet clients that requested the connection; the proxy server is then acting as a reverse proxy.
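A minimal reverse-proxy sketch looks like this: nginx listens on port 80 and forwards every request to an internal backend. The domain name and backend address are assumptions for illustration.

```nginx
# Reverse proxy sketch: clients only ever see this server, never the backend.
server {
    listen      80;
    server_name example.com;                      # hypothetical domain

    location / {
        proxy_pass http://192.168.8.1:7070;       # internal backend, hidden from clients
        proxy_set_header Host $host;              # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client IP upstream
    }
}
```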

Load balancing

Let's start with an example:

Most of us have taken the subway during the morning rush hour, when one entrance is always the most crowded. Usually a staff member A stands there with a megaphone shouting, "If you're in a hurry, please use entrance B; entrance B has fewer people and emptier cars." Staff member A is doing load balancing.

To improve a website's capacity in every respect, we generally build a cluster of multiple machines to provide service. The website, however, exposes a single access point, such as www.taobao.com. When users type www.taobao.com into the browser, distributing their requests across the different machines in the cluster is exactly what load balancing does.

Load balancing (Load Balance) means balancing and distributing load (work tasks, access requests) across multiple operating units (servers, components) for execution. It is the classic answer to high performance, single points of failure (high availability), and extensibility (horizontal scaling).

Nginx mainly provides three load-balancing strategies: round robin, weighted round robin, and IP hash.

Round robin

Round robin is nginx's default: all weights default to 1, and the servers handle requests in order: ABCABCABCABC...

upstream mysvr {
    server 192.168.8.1:7070;
    server 192.168.8.2:7071;
    server 192.168.8.3:7072;
}
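The ABCABC... dispatch order above can be sketched in a few lines of Python. This is a toy illustration of the strategy, not nginx's actual implementation:

```python
from itertools import cycle

# Toy round-robin dispatcher: each request goes to the next server in turn.
servers = ["A", "B", "C"]
dispatch = cycle(servers)

order = [next(dispatch) for _ in range(6)]
print(order)  # ['A', 'B', 'C', 'A', 'B', 'C']
```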

Weighted round robin

Requests are distributed to the servers in proportion to their configured weights; an unset weight defaults to 1. With the configuration below, the request order is ABBCCCABBCCC...

upstream mysvr {
    server 192.168.8.1:7070 weight=1;
    server 192.168.8.2:7071 weight=2;
    server 192.168.8.3:7072 weight=3;
}
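The ABBCCC... order can be sketched with a naive expansion in Python. Note this is only an illustration of the proportions: nginx's real algorithm (smooth weighted round robin) interleaves the servers more evenly, but the long-run ratios are identical.

```python
from itertools import cycle

# Naive weighted round robin: repeat each server according to its weight, then cycle.
weights = {"A": 1, "B": 2, "C": 3}
expanded = [s for s, w in weights.items() for _ in range(w)]
dispatch = cycle(expanded)

order = "".join(next(dispatch) for _ in range(12))
print(order)  # ABBCCCABBCCC
```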

ip_hash

ip_hash hashes the client's IP address and, based on the result, always sends requests from the same client IP to the same server. This solves the problem of sessions not being shared across servers.

upstream mysvr {
    ip_hash;
    server 192.168.8.1:7070;
    server 192.168.8.2:7071;
    server 192.168.8.3:7072;
}
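The idea behind ip_hash can be sketched as a stable hash of the client IP picking a backend. This is a toy model: nginx actually hashes only part of the IPv4 address, so the code below just demonstrates the "same IP, same server" property.

```python
import hashlib

servers = ["192.168.8.1:7070", "192.168.8.2:7071", "192.168.8.3:7072"]

def pick_server(client_ip: str) -> str:
    # A deterministic hash of the IP always maps to the same backend index.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client IP lands on the same backend every time.
print(pick_server("10.0.0.7") == pick_server("10.0.0.7"))  # True
```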

Dynamic and static separation

The difference between dynamic and static pages

  • Static resources: no matter how many times a user accesses the resource, its source never changes (e.g. HTML, JavaScript, CSS, and image files).
  • Dynamic resources: when a user accesses the resource repeatedly, its source may change (e.g. .jsp, servlet, etc.).

What is dynamic and static separation

  • Dynamic/static separation means having a dynamic website distinguish, by certain rules, the resources that never change from those that change frequently. Once the two are separated, we can cache static resources according to their characteristics, which is the core idea of static site processing.

  • Put simply, dynamic/static separation means separating dynamic files from static files.

Why use dynamic and static separation

To speed up the site, dynamic and static resources can be parsed by different servers. This accelerates parsing and reduces the pressure on any single server.
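Once static resources are served by nginx directly, they can also be cached aggressively in the browser. A minimal sketch, where the file types, path, and expiry values are assumptions for illustration:

```nginx
# Serve static assets straight from disk and let browsers cache them.
location ~* \.(html|js|css|png|jpg|gif)$ {
    root /data/static;              # hypothetical static-asset directory
    expires 7d;                     # cache in the browser for 7 days
    add_header Cache-Control "public";
}
```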

Installing Nginx

Installing on Windows

1. Download nginx

Download the stable version from nginx.org/en/download… Taking nginx/Windows-1.20.1 as an example, download nginx-1.20.1.zip directly. After downloading, extract it; the result looks like this:

2. Start nginx

  • Double-click nginx.exe directly; a black console window will flash by.

  • Alternatively, open a cmd window, switch to the nginx extraction directory, and enter the command nginx.exe.

3. Check that nginx started successfully

Enter http://localhost:80 in the browser address bar and press Enter. If the following page appears, the startup was successful!

Installing Nginx with Docker

I covered the Linux installation steps in a previous article; this time I'll install with Docker, which is simple.

The related link is: Docker (3): Deploying Nginx and Tomcat with Docker

1. View all images on the local host with the command docker images

2. Create an nginx container and start it with the command docker run -d --name nginx01 -p 3344:80 nginx

3. View the running containers with the command docker ps

Access the server's ip:3344 in a browser; if you see the following, the installation and startup succeeded.

Note: if you cannot connect, check whether the Alibaba Cloud security group has the port open, and whether the server firewall does!

Installing on Linux

1. Install gcc

Installing nginx requires compiling the source code downloaded from the official website, which depends on a gcc environment. If gcc is not present, install it:

yum install gcc-c++

2. Install PCRE and pcre-devel

PCRE (Perl Compatible Regular Expressions) is a Perl-compatible regular expression library. nginx's http module uses pcre to parse regular expressions, so the pcre library must be installed on Linux; pcre-devel is the development library built on pcre, and nginx needs it as well. Command:

yum install -y pcre pcre-devel

3. Install zlib

The zlib library provides many ways to compress and decompress. nginx uses zlib to gzip the contents of http responses, so the zlib library must be installed on CentOS.

yum install -y zlib zlib-devel
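Once zlib is available, gzip compression can be enabled in nginx.conf. A sketch with commonly used values; the exact MIME types and compression level are assumptions, not requirements:

```nginx
# Enable gzip compression for text-based responses.
gzip on;
gzip_comp_level 5;                   # moderate CPU/size trade-off
gzip_types text/plain text/css application/json application/javascript;
```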

4. Install OpenSSL

OpenSSL is a robust Secure Sockets Layer cryptographic library that includes the major cryptographic algorithms, common key and certificate management functions, and the SSL protocol, along with a rich set of applications for testing and other purposes. nginx supports not only http but also https (i.e. http transmitted over ssl), so the OpenSSL library must be installed on CentOS.

yum install -y openssl openssl-devel
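With OpenSSL present, nginx can be built with https support and configured roughly like this. The domain and certificate paths below are placeholders, not real files:

```nginx
# Minimal https server block; certificate paths are hypothetical.
server {
    listen      443 ssl;
    server_name example.com;                          # hypothetical domain
    ssl_certificate     /etc/nginx/certs/example.crt; # hypothetical certificate
    ssl_certificate_key /etc/nginx/certs/example.key; # hypothetical private key

    location / {
        root  html;
        index index.html index.htm;
    }
}
```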

5. Download the installation package

Manually download the .tar.gz installation package from: nginx.org/en/download…

Download it and upload it to /root on the server.

6. Extract

tar -zxvf nginx-1.20.1.tar.gz
cd nginx-1.20.1

7. Configure

Use the default configuration; execute the following in the nginx source directory:

./configure
make
make install

Find the installation path with: whereis nginx

8. Start nginx

./nginx

After a successful launch, access the page at ip:80.

Common Nginx Commands

Note: to use the Nginx commands below, you must first enter the Nginx directory /usr/local/nginx/sbin.

1. Check the Nginx version: ./nginx -v

2. Start Nginx: ./nginx

3. Stop Nginx: ./nginx -s stop or ./nginx -s quit

4. Reload the configuration file: ./nginx -s reload

5. View the nginx processes: ps -ef | grep nginx

The Nginx Configuration File

The Nginx configuration file is located at /usr/local/nginx/conf/nginx.conf.

The Nginx configuration file consists of three parts:

1. Global block

This spans from the start of the configuration file to the events block and mainly contains directives that affect the running of the nginx server as a whole, for example: worker_processes 1.

This is a key setting for Nginx's concurrent processing: the larger worker_processes is, the more concurrency can be supported, but it is constrained by hardware, software, and other factors. It is generally set equal to the number of CPU cores.

2. events block

The directives in the events block mainly affect the network connections between the Nginx server and its users, for example: worker_connections 1024.

This means each worker process supports at most 1024 connections. This part of the configuration has a large impact on Nginx's performance and should be tuned to the actual workload.
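Putting the two together, a typical sketch of the global and events blocks looks like this (the values are common defaults, not recommendations for every machine; `auto` is supported on modern nginx versions):

```nginx
worker_processes auto;        # one worker per CPU core (auto-detected)

events {
    worker_connections 1024;  # max simultaneous connections per worker
}
```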

3. http block

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;

    server {
        listen      80;         # listening port
        server_name localhost;  # domain name

        location / {
            root  html;
            index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

This can be considered the most frequently configured part of the Nginx server.

Demo Examples

Reverse proxy / Load balancing

We'll demonstrate on Windows. First, create two Spring Boot projects listening on ports 9001 and 9002, as follows:

What we want is for localhost:80 to proxy the two services localhost:9001 and localhost:9002, accessing them in round-robin fashion.

The nginx configuration is as follows:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;

    upstream jiangwang {
        server 127.0.0.1:9001 weight=1;  # round-robin weights default to 1
        server 127.0.0.1:9002 weight=1;
    }

    server {
        listen      80;
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            root  html;
            index index.html index.htm;
            proxy_pass http://jiangwang;
        }
    }
}

We package the two projects as jars, start them from the command line, and then access localhost in the browser; I also print a log line in each project. Looking at the results, the two projects are indeed accessed in round-robin fashion.

As you can see, when accessing localhost, the two projects are polled alternately.

Next, change the weights to the following:

upstream jiangwang {
    server 127.0.0.1:9001 weight=1;
    server 127.0.0.1:9002 weight=3;
}

Reload the nginx configuration file: nginx -s reload

After reloading, access localhost again and observe the proportions:

The results show that the ratio of accesses to port 9002 versus port 9001 is roughly 3:1.

Dynamic and static separation

1. Put the static resources into a newly created local directory. For example, create a data directory on the D drive, and inside it create two folders: an img folder for images and an html folder for html files, as shown below:

2. Create a new a.html file in the html folder with the following contents:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>Html file</title>
</head>
<body>
    <p>Hello World</p>
</body>
</html>

3. Put a photo into the img folder, as follows:

4. Configure nginx's nginx.conf file:

location /html/ {
    root  D:/data/;
    index index.html index.htm;
}

location /img/ {
    root D:/data/;
    autoindex on;  # list all contents of the directory
}

5. Start nginx and access the file path: type http://localhost/html/a.html in the browser, as follows:

6. Type http://localhost/img/ in the browser:

How Nginx Works

master & worker

After receiving a signal, the master assigns the task to a worker to execute; there can be multiple workers.

How workers work

After a client sends a request to the master, workers obtain the task neither by direct allocation nor by polling but by competition: a worker "grabs" the task and then performs it, i.e. it selects the target server (tomcat, etc.) and returns the result.

worker_connections

Serving a request occupies two or four of a worker's connections.

The maximum concurrency for ordinary static access is worker_connections * worker_processes / 2; when HTTP is used as a reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4. Of course, more workers is not necessarily better; the worker count is usually set equal to the server's CPU count.
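The two formulas can be checked with a quick calculation, using example values (worker_processes=4, worker_connections=1024 are assumptions for illustration):

```python
# Maximum concurrency estimates from the formulas above.
worker_processes = 4
worker_connections = 1024

static_max = worker_connections * worker_processes // 2  # static serving: 2 connections per request
proxy_max = worker_connections * worker_processes // 4   # reverse proxy: 4 connections per request

print(static_max, proxy_max)  # 2048 1024
```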

Advantages

You can use nginx -s reload for hot deployment. Workers are independent processes: if one worker runs into a problem, the others keep competing for and handling requests, so service is not interrupted.

Summary

This article has covered Nginx's basic concepts, installation, configuration, usage examples, and working principles in detail. I hope it has been helpful.

If you found this article useful, please give our open source project a star: http://github.crmeb.net/u/defu. Thank you!

Please include a link to the original when reprinting. Thanks!