It's Mid-Autumn Festival. Can you optimize Chrome rendering performance?

Bicentric Fe 2021-09-15 10:22:10

Chrome eats memory


There are plenty of jokes and memes online about Chrome eating memory. After reading this article, you should have a rough idea of why Chrome consumes so much memory, and how to make it consume less.

How Chrome renders a page


The picture above shows the full process Chrome goes through to render one frame. First, JavaScript code executes; then come style calculation (Style), layout (Layout), painting (Paint), and finally layer compositing (Composite), producing the frame shown on screen. This process is also called the pixel pipeline. Note that the pipeline is blocking: if an earlier stage has not finished, the later stages cannot run.

So if visual changes on the page are to reach 60 fps (60 frames output per second), all the work of one pass through the pixel pipeline must finish within about 16.7 ms (1000 ms / 60). Exceeding that budget drops frames, which the user perceives as jank. Rendering performance optimization revolves around these five stages; the sections below give optimizations for each one.

Performance analysis with the Performance panel

Before diving into specific optimizations, you need to know how to measure the time spent in each stage so you can find the performance bottleneck.

Chrome provides a powerful tool for performance analysis: the Performance panel. The picture below shows the panel after a recording:


As the picture shows, the tool records FPS, CPU usage, network activity, a screenshot of each frame, memory usage, the JS call stack, DOM node count, and more during rendering. These snapshots let you analyze the page's performance at any moment.

Here we focus on the flame chart at the bottom. From left to right it shows the order in which the browser executed scripts; from top to bottom, the nesting of function calls. Because its shape resembles an inverted flame, it is called a flame chart. When reading it, look first for the task with the longest span, i.e. the longest horizontal bar: that is the most time-consuming task and therefore the performance bottleneck. The browser automatically flags time-consuming tasks (with a red mark in the upper-right corner). Fixing the most time-consuming tasks is how you reach the goal of better performance.


Optimizations for each stage

Next, optimizations are given one by one for the five stages of the pixel pipeline.

JavaScript optimization

From the above we know the pixel pipeline is blocking: the stages run one after another, so poor performance in any stage lengthens the whole pipeline. The core logic of performance optimization is therefore to avoid long-running tasks in every stage.

First, let's look at the means of avoiding long tasks during JavaScript execution. There are three options:

  1. Use a Web Worker to offload pure computation: move purely computational work (anything that does not need DOM access), such as data manipulation and traversal, into a Web Worker. The main thread (usually responsible for UI interaction) then stays smooth instead of being blocked or slowed down. For more on Web Workers, see Ruan Yifeng's article.
  2. Use Time Slicing: cut a long task into many tasks with very short execution times and run them frame by frame inside requestAnimationFrame callbacks. React 16's Fiber architecture uses exactly this optimization. Why doesn't React 16 Fiber use a Web Worker instead? Because React needs to operate on the DOM when updating the page, and a Web Worker cannot access the DOM. The figure below shows an example of splitting a long task:


  3. Use WebAssembly: a low-level assembly-like language with a compact binary format. Because it ships as binary bytecode, it runs faster and is smaller, which suits scenarios such as video players, audio transcoding tools, web games, and encryption; see the MDN introduction.

We will not expand on the technical details of these three options here, but we can summarize their optimization logic: a Web Worker processes JS tasks in parallel on another thread; Time Slicing splits a time-consuming task and executes it in slices; WebAssembly genuinely improves the execution efficiency of a task, shortening its execution time. All three achieve the goal of avoiding long tasks.
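As a minimal sketch of the Time Slicing idea (not React's actual scheduler), here is a generic chunked runner. The function names are illustrative, and the scheduler is injectable so the same logic can be exercised outside the browser; in a page it defaults to requestAnimationFrame:

```javascript
// Minimal time-slicing sketch: process `items` in small chunks so no
// single task blocks the frame budget. `schedule` defaults to
// requestAnimationFrame in the browser; it is injectable for testing.
function processInChunks(items, handle, chunkSize = 100, schedule) {
  schedule = schedule ||
    (typeof requestAnimationFrame === 'function'
      ? requestAnimationFrame
      : (cb) => setTimeout(cb, 0));
  let index = 0;
  function runChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      handle(items[index]); // one short unit of work
    }
    if (index < items.length) {
      schedule(runChunk); // yield to the browser, continue next frame
    }
  }
  schedule(runChunk);
}
```

Between chunks the browser is free to run the rest of the pixel pipeline, which is exactly how a long task stops causing dropped frames.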

Style calculation optimization

Style calculation is the process of computing each element's final style from the CSS. It is a parse-and-match process, so the simpler the CSS selectors and the smaller the DOM, the faster style calculation runs. The three selectors in the figure below achieve the same effect, but each is slower than the one before: an ID selector beats a class selector, and a class selector beats a pseudo-class selector. Note, however, that this stage usually executes very quickly and rarely becomes a bottleneck, so avoid premature, low-value optimization here. Refactoring the styles of an entire project is far less cost-effective for the project than optimizing JS execution.

Layout optimization

Let's first understand what layout does, using the next pipeline stage, painting, as a contrast:

As in the picture below, layout is like staking out the position and size of each field, and painting is like planting each field. If the size of one field changes, the surrounding fields have to be staked out and planted again.


Continuing the farming analogy: layout computes how much space elements occupy and where they sit on the screen. Because web pages use flow layout, the size and position of one element can affect the layout of other elements, so layout happens often.

This stage is the one most likely to become a performance bottleneck, so you should minimize both the number of layouts and the area each layout affects.

Reduce the number of layouts

When you must modify CSS properties to achieve a visual effect, try to modify only paint-only properties, such as background images, text color, or shadows. Changing these properties does not affect page layout, so the browser skips the layout stage and goes straight to painting. Note that different browser rendering engines may trigger different pipeline stages for the same CSS property change; you can check the CSS Triggers site.

Reduce the area affected by the layout

Here we need to introduce a new term: the layout boundary. Any layout change inside a layout-boundary element requires only a "partial reflow". By constructing sensible layout boundaries, you can shrink the area affected by layout. With a few small CSS adjustments we can force layout boundaries in the document; the figure below shows the conditions for creating one:


The term "layout boundary" may sound unfamiliar at first, but everyone has used it in real development without realizing it. For example, the horizontal scroll area inside the red box in the figure below forms a layout boundary because it has a fixed height; changes to its inner elements only trigger a reflow inside the red box, never outside it.
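As a sketch, the scroll-area pattern from the figure can be written like this. Fixed dimensions plus a non-visible overflow are the classic boundary conditions; the explicit `contain` property is a modern addition, and the class name is illustrative:

```css
/* A layout boundary: fixed size + overflow other than `visible`
   means reflows of children cannot affect layout outside this box. */
.message-scroll-area {
  width: 300px;     /* explicit width */
  height: 400px;    /* explicit height */
  overflow: auto;   /* must not be `visible` */
  contain: layout;  /* explicit modern opt-in (optional) */
}
```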


Avoid forced synchronous layout

When JS changes a geometric property of a DOM element and then immediately reads a geometric property back, the browser must perform layout at that very moment to answer the question ("what is the width?"). Layout performed synchronously like this is called forced synchronous layout. The following code triggers it:

// Write: set the width
el.style.width = '100px';
// Read: immediately ask for it back, forcing a synchronous layout
const width = el.offsetWidth;

If code like this sits inside a loop, the performance problem becomes very pronounced.

How to solve it: simply reverse the order of the two operations, doing reads before writes.
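A sketch of the read-then-write pattern in a loop. The element objects here are plain stand-ins for DOM nodes, and the function names are illustrative; the point is the ordering:

```javascript
// Bad: interleaving writes and reads forces a synchronous layout
// on every loop iteration.
function resizeInterleaved(boxes) {
  for (const box of boxes) {
    box.style.width = '100px';       // write (invalidates layout)
    box.lastWidth = box.offsetWidth; // read (forces sync layout)
  }
}

// Good: batch all reads first, then all writes. Every read is answered
// from the same still-valid layout, and the writes are flushed together
// in the next layout pass.
function resizeBatched(boxes) {
  const widths = boxes.map(box => box.offsetWidth); // all reads
  boxes.forEach((box, i) => {                       // all writes
    box.lastWidth = widths[i];
    box.style.width = '100px';
  });
}
```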

Painting optimization

The concept of painting is easy to understand: it is the process of filling in pixels, which are finally composited onto the user's screen. But painting does not always draw into a single image in memory; when needed, painting happens on multiple compositor layers (Layers). Layers are similar to layers in Photoshop: the browser paints each Layer and then merges them into the output. Note that these are not the stacking layers created by z-index; they are what you see in the DevTools Layers panel. The figure below shows an example of multiple layers:


The ideas for optimizing painting are similar to those for layout above: reduce the painted area and the number of paints.

Reduce the painted area

If the page is split into layers, then when the page changes only the changing part needs repainting; the unchanged parts are left alone, and the paint is confined to a single Layer. This is how splitting reduces the painted area.

So how do you create a new layer? Add the following CSS properties to the element:

.moving-element {
  will-change: transform;
  transform: translateZ(0);
}

Reduce the number of paints

When implementing animations, try to use only transform and opacity. Modifying these two properties triggers neither layout nor paint; the browser skips both stages and performs only compositing. The pixel pipeline then executes only the steps shown in the figure below, which is the ideal pipeline:


This matters most at high-pressure points in the application lifecycle, such as animations and scrolling, so prefer these two properties for animation.
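As a sketch, a fade-and-slide entrance animation that stays on this composite-only path (the class and keyframe names are illustrative):

```css
/* Animating only transform and opacity skips layout and paint;
   will-change promotes the element to its own compositor layer. */
.toast-enter {
  will-change: transform, opacity;
  animation: slide-in 300ms ease-out;
}

@keyframes slide-in {
  from { transform: translateY(20px); opacity: 0; }
  to   { transform: translateY(0);    opacity: 1; }
}
```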

Layer compositing optimization

We just saw how to create new layers with CSS to reduce the painted area when the page changes. So why not simply promote every element to its own layer? That doesn't work: layering must be done judiciously, because each layer costs memory and management overhead. This is the key to compositing optimization: do not create too many layers. Painting and compositing are really a trade-off; analyze each concrete scenario to find the best performance balance.

A specific product requirement

Let's walk through a concrete product requirement and practice performance optimization. The feature: a voice chat room that displays users' real-time messages and automatically scrolls the message list to show the newest ones.


The junior programmer implemented the chat room messages; after running for a while, phones got laggy and hot, and users complained.

The senior programmer then did performance optimization; after release it ran stably, and phones no longer overheated.

Let's see what optimizations the balding senior programmer made.

Message list optimization scheme

An experienced programmer will realize what is happening: as the app runs, messages keep accumulating, and rendering the ever-growing message list gets slower and slower, causing the performance problem.

The senior programmer did the following:

  1. Cap the number of displayed messages: show at most 200, discarding old messages beyond that. This keeps the page's DOM count from growing without bound and reduces layout and paint pressure.
  2. Message batching: every 200 ms, consume the newest accumulated messages and render them as one batch instead of one by one. This reduces the number of layouts and reflows.
  3. Virtual scrolling: further reduce the DOM count and the layout/paint pressure; details below.
  4. Image resource reuse: reduce the page's memory footprint; details below.
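Items 1 and 2 can be sketched together: a buffer that accumulates incoming messages, flushes them in one batch at most every 200 ms, and caps the retained list at 200 entries. The names and the `render` callback are illustrative; in real code, one flush would mean one batched DOM update:

```javascript
// Sketch of capped, batched message rendering. `render` receives the
// whole retained list once per flush instead of once per message.
class MessageBuffer {
  constructor(render, { maxMessages = 200, interval = 200 } = {}) {
    this.render = render;
    this.maxMessages = maxMessages;
    this.interval = interval;
    this.pending = [];   // messages received since the last flush
    this.messages = [];  // retained, capped list
    this.timer = null;
  }

  push(message) {
    this.pending.push(message);
    // Schedule at most one flush per interval.
    if (this.timer === null) {
      this.timer = setTimeout(() => this.flush(), this.interval);
    }
  }

  flush() {
    if (this.timer !== null) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    this.messages.push(...this.pending);
    this.pending = [];
    // Discard old messages beyond the cap.
    if (this.messages.length > this.maxMessages) {
      this.messages = this.messages.slice(-this.maxMessages);
    }
    this.render(this.messages); // one batched update
  }
}
```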

The first two are easy to understand; let's focus on the last two optimizations.

Virtual scrolling

In this message list, many list elements are no longer visible on the page, yet they still undergo reflow and repaint; that consumption is redundant, and virtual scrolling exists to eliminate it. In short, virtual scrolling renders only the visible area of the list, with empty DOM nodes or padding holding the space outside the viewport. This reduces the performance cost of scrolling and of list changes. Space does not allow a fuller treatment here.

There are many open-source virtual scrolling libraries on GitHub, such as vue-virtual-scroll-list, react-tiny-virtual-list, and vue-virtual-scroller.
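The core calculation behind these libraries can be sketched as a pure function for the simple fixed-row-height case: given the scroll offset, viewport height, and row height, it returns which slice of items to actually render and how much padding stands in for the rest. The `overscan` of extra rows avoids flicker at the edges; all names are illustrative:

```javascript
// Sketch of fixed-row-height virtual scrolling: only rows intersecting
// the viewport (plus a small overscan) are rendered; top and bottom
// padding preserve the scrollbar geometry for the skipped rows.
function visibleSlice(scrollTop, viewportHeight, rowHeight, total, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    total,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return {
    start: first,                          // index of first rendered row
    end: last,                             // one past the last rendered row
    padTop: first * rowHeight,             // spacer above rendered rows
    padBottom: (total - last) * rowHeight, // spacer below rendered rows
  };
}
```

On each scroll event, the component re-runs this calculation and re-renders only `items.slice(start, end)` inside the two spacers.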

Image cropping with size-ladder reuse

The third-party CDN used by the author's company can crop images via query-string parameters, so a user's original avatar can be cropped for the specific business scenario before being sent to the client, reducing network consumption.

The page contains avatars of many different sizes. Should we simply request each image cropped to its exact display size? No. The author added a ladder-reuse step here: images within a certain size range are all cropped to the same size. For example, images displayed at 20px to 25px all use the 25px crop. Images of similar size thus share a single network request, and only one copy lives in memory.

With images reused rung by rung, the difference in memory consumption after a long run is still significant.
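A sketch of the ladder mapping: a display size is rounded up to the next rung so that nearby sizes share one cropped asset and one cache entry. The rung values and the CDN query parameters here are hypothetical illustrations, not the author's actual CDN API:

```javascript
// Sketch of size-ladder reuse: snap an avatar's display size up to the
// nearest rung so similar sizes share one cropped image (one network
// request, one copy in memory). Rungs and URL format are illustrative.
const SIZE_LADDER = [25, 50, 100, 200]; // px rungs, ascending

function ladderSize(displayPx) {
  for (const rung of SIZE_LADDER) {
    if (displayPx <= rung) return rung;
  }
  return SIZE_LADDER[SIZE_LADDER.length - 1]; // cap at the largest rung
}

function avatarUrl(baseUrl, displayPx) {
  const size = ladderSize(displayPx);
  // Hypothetical crop parameters; real CDNs use their own query syntax.
  return `${baseUrl}?w=${size}&h=${size}`;
}
```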

Here I want to mention another Chrome memory behavior: Chrome keeps the resources a page has used in memory, regardless of whether they will be used again or whether the DOM that used them has been unloaded. They are not released unless a system-level memory reclaim is triggered. Moreover, this memory is not part of the JS heap, so the Performance panel cannot measure it; you need Activity Monitor or Chrome's own Task Manager to see the real memory consumption. The following experiment demonstrates this behavior.

Open Google Images and search for any picture; record this moment as "Start". Scroll the page to load more pictures; record as "End". Then manually select the body element in DevTools and delete it; record as "After deleting DOM". Memory behaved as follows:

Timing                JS heap memory    Task Manager memory
Start                 20-30 MB          119 MB
End                   22-23 MB          217 MB
After deleting DOM    20-21 MB          210 MB

As the numbers show, Chrome keeps all the resources the page has used in memory: even though the DOM that used those resources has been unloaded, that memory is not released.

Other environment-specific Chrome optimization suggestions

The basic optimization techniques are universal; combine them with the specifics of each runtime environment:

  1. Under Electron:

    1. Put non-native dependencies in devDependencies rather than dependencies;
    2. Call webFrame.clearCache() to release memory manually.
  2. Mini programs:

A mini program's distinguishing feature is that the rendering thread is separated from the JS thread. The advantage of this design is that form animations stay smooth; the disadvantage is that frequent data exchange between the rendering thread and the JS thread can become a performance bottleneck. If you have cross-thread data exchange, watch the size of the data being exchanged.

Please include a link to the original when reprinting. Thanks!