Multithreading (II) -- advanced topics

\u674e\u80b2\u6b22 2021-11-25 14:36:53

One. CAS

1. Summary

(1) Technical background
When the task a thread performs is small, using synchronized for thread safety (multiple threads competing for the same object lock at the same time) is inefficient: threads that lose the contention switch rapidly between the blocked and awakened states.
(2) Prerequisite
The code in the critical section executes very quickly.
(3) Purpose
Improve efficiency while keeping safety. Typical scenario: modifying a shared variable in a thread-safe way.
(4) Principle
Use CAS; no thread ever blocks (every thread keeps running).

2. What is CAS?

The full name is Compare And Swap: compare and exchange.
When multiple threads perform a CAS operation on the same resource at the same time, only one thread succeeds, but the others are not blocked; they merely receive a signal that the operation failed. CAS is therefore an optimistic lock.
Implementation source (decompiled from sun.misc.Unsafe):
Implementation source code

public final class Unsafe {

    public final int getAndSetInt(Object var1, long var2, int var4) {
        int var5;
        do {
            // read the latest value from main memory
            var5 = this.getIntVolatile(var1, var2);
            // retry until the CAS succeeds
        } while (!this.compareAndSwapInt(var1, var2, var5, var4));
        return var5;
    }
}

Notes:

  • It is a native method
  • It is non-blocking
  • CAS works with three values: the original value (the variable's real value in main memory), the expected value (the copy in the thread's working memory), and the new value to write (for n++, the new value is the expected value plus 1)

The first thread modifies successfully: the variable's value in main memory becomes 1.
The second thread's modification fails: compareAndSwapXXX() does not block, it simply returns false, and the do block runs again, fetching the latest value and retrying.
In other words, CAS checks whether the current value still equals the expected value: if not, it does not modify and returns false; if it does, it writes the new value directly.
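This retry loop is exactly what the java.util.concurrent.atomic classes do internally. A minimal sketch using AtomicInteger.compareAndSet (the class and method names below are illustrative, not from the original text):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // each thread bumps the counter with a CAS retry loop: on failure it
    // re-reads the latest value and tries again, without ever blocking
    static int countWith(int nThreads, int perThread) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] ts = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    int old;
                    do {
                        old = counter.get();
                    } while (!counter.compareAndSet(old, old + 1));
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWith(4, 1000)); // 4000: no lost updates, no locks
    }
}
```

No increment is lost even though no lock is ever taken; failed threads simply loop again with the fresh value.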

3. Problems

The ABA problem
Cause: while the current thread copies the value from main memory into working memory and modifies it, other threads may change the value in main memory from A to B and back to A. The value has effectively been modified, but the current thread cannot tell, and its CAS still succeeds.
Solution: keep the optimistic-lock design but introduce a version number; every modification bumps the version, so A -> B -> A becomes detectable.
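The JDK's AtomicStampedReference implements exactly this version-number idea. A small sketch (values are kept within the Integer cache range, since AtomicStampedReference compares references with ==; all names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    // attempt a CAS with a stale stamp after another thread has done A -> B -> A
    static boolean tryStaleSwap() {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int staleStamp = ref.getStamp(); // 0

        // another thread changes A(100) -> B(101) -> A(100), bumping the stamp each time
        ref.compareAndSet(100, 101, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(101, 100, ref.getStamp(), ref.getStamp() + 1);

        // the value is A again, but the stamp is now 2, so the stale CAS fails
        return ref.compareAndSet(100, 102, staleStamp, staleStamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(tryStaleSwap()); // false: the version number exposes ABA
    }
}
```

A plain AtomicInteger CAS would have succeeded here; the stamp is what makes the intermediate modification visible.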

4. APIs in the JDK implemented with CAS:

  • java.util.concurrent.atomic, the atomic classes of the concurrency package
  • synchronized: when multiple threads execute a synchronized block at different points in time, the JDK's optimizations use CAS
  • Others, such as the JDK 1.8 ConcurrentHashMap implementation: in put, when the bucket's node is empty, CAS is used

5. How CAS is applied

Spin: repeatedly attempt to set the value until the CAS succeeds.

Two. Common lock strategies

Design-level ideas for achieving thread safety

1. Optimistic locking vs. pessimistic locking

(1) Optimistic locking
The design optimistically assumes that in most cases no other thread modifies the data concurrently and conflicts are rare. Thread safety is handled with a version number: the user checks the version and deals with conflicts when they occur.
(2) Pessimistic locking
The design pessimistically assumes that other threads are always modifying concurrently, so every operation takes a lock.
Problem with pessimistic locks: they always contend for the lock, which causes thread switching and suspends other threads, so performance is low.
Problem with optimistic locks: they cannot handle every situation, so they introduce some system complexity.

2. Spin locks (Spin Lock)

Implementation principle:

  • Busy-wait in a loop
  • Can be made interruptible via interrupt()
  • Can count the loop iterations and give up when a threshold is reached
  • Can track the total spin time and give up when a threshold is reached
while (grabLock(lock) == FAILURE) {
    // spin: retry immediately
}

Drawbacks

  • Premise: the critical section must execute quickly. If it does not, the spinning thread keeps running and looping on CAS, which wastes a lot of CPU
  • With many threads the premise may not hold, and the CPU switches among many spinning threads, which is also expensive
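A spin lock along these lines can be sketched with an AtomicReference holding the owning thread (SpinLock is a hypothetical class, not a JDK one; Thread.onSpinWait() requires Java 9+):

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // busy-wait (spin) until the CAS from null -> current succeeds;
        // the thread stays runnable instead of blocking
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // hint to the CPU that we are spinning (Java 9+)
        }
    }

    public void unlock() {
        // only the owning thread's CAS from current -> null succeeds
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Because the waiter never blocks, this is only appropriate when critical sections are very short, exactly as the premise above states.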

3. Read-write lock

Technical background:
There is no thread-safety problem between readers, but writers must be mutually exclusive with each other and with readers. If both scenarios use the same lock, performance suffers badly, which is why read/write locks exist.
Read-write lock: the lock operation must additionally declare whether the intent is to read or to write. Multiple readers are not mutually exclusive with each other, while a writer is mutually exclusive with everyone.

4. Reentrant lock

A reentrant lock allows the same thread to acquire the same lock multiple times. For example, if a recursive function takes a lock, that lock does not block the thread during the recursion; such a lock is called reentrant (also known as a recursive lock).
In Java, locks whose names start with Reentrant are all reentrant. In fact, every Lock implementation class provided by the JDK, as well as the synchronized keyword, is reentrant.
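Reentrancy can be observed directly with ReentrantLock.getHoldCount(): a recursive method re-acquires the lock it already holds instead of deadlocking. A small sketch (class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    // a recursive method that re-acquires the lock it already holds;
    // with a non-reentrant lock the second lock() call would self-deadlock
    static int factorial(int n) {
        LOCK.lock();
        try {
            System.out.println("hold count = " + LOCK.getHoldCount());
            return n <= 1 ? 1 : n * factorial(n - 1);
        } finally {
            LOCK.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(factorial(3)); // 6; hold count prints 1, 2, 3
    }
}
```

Each nested lock() increments the hold count, and each unlock() decrements it; the lock is only released when the count returns to zero.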

Three. How synchronized works (key)

1. Implementation principle

Locking is implemented through the object header.
The monitor mechanism: when a synchronized block is compiled to bytecode, the compiler emits monitorenter and monitorexit instructions.
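For reference, a synchronized block such as the one below compiles to a monitorenter on the lock object, the guarded code, and a monitorexit (plus a second monitorexit on the exception-handler path, so the lock is released either way). The class here is an illustrative sketch:

```java
public class MonitorDemo {
    private static int count;
    private static final Object LOCK = new Object();

    static int countWith(int nThreads, int perThread) throws InterruptedException {
        count = 0;
        Thread[] ts = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    synchronized (LOCK) { // bytecode: monitorenter ... monitorexit
                        count++;
                    }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWith(2, 1000)); // 2000: the monitor serializes the increments
    }
}
```

Running javap -c on the compiled class shows the paired monitorenter/monitorexit instructions around the increment.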

2. Lock states in the object header

(1) Unlocked
The resource is not locked. All threads can access and try to modify the same resource, but at any moment only one thread's modification succeeds; threads whose modification fails simply retry until they succeed.
(2) Biased lock
The lock is biased toward the first thread that acquires it. That thread never actively releases the biased lock; it is only revoked when another thread starts competing for it.
Biased lock revocation:
At a point when no bytecode is executing, the JVM first pauses the thread that holds the biased lock, then checks whether the lock object is still locked. If the holding thread is not active, the object header is set back to the unlocked state and the biased lock is revoked.
If the thread is still active, the lock is upgraded to a lightweight lock.
(3) Lightweight lock
When the lock is a biased lock and a second thread B accesses it, the biased lock is upgraded to a lightweight lock. Thread B tries to acquire the lock by spinning, so it does not block, which improves performance.
While there is only one waiting thread, it waits by spinning. But once it spins more than a certain number of times, or while one thread holds the lock and a second is spinning a third thread arrives, the lightweight lock is upgraded to a heavyweight lock.
(4) Heavyweight lock
Once one thread holds the lock, all other threads waiting for it block.
Note: the Java virtual machine must guarantee that the lock can be released on both the normal execution path and the exception path.

3. How the JVM optimizes synchronized

Different locking mechanisms are used for different scenarios.
Note: the lock levels below go from low to high.
(1) Biased lock:

  • The same thread re-acquires an object lock it already holds
  • The most optimistic kind of lock: from start to finish only one thread ever requests the lock
  • Implementation: CAS

(2) Lightweight lock:

  • Most likely, only one thread requests the object lock at any given point in time
  • Implementation: CAS
  • Drawback: waiting threads spin, consuming CPU

(3) Heavyweight lock:

  • Most likely, multiple threads compete for the same object lock at the same time
  • Implementation: the operating system's mutex lock
  • Drawback: it involves OS scheduling and user-mode to kernel-mode switches, which are very expensive; threads block and must be woken up

Note: a synchronized lock can only be upgraded, never downgraded (to keep acquiring and releasing the lock efficient).
Other optimizations:

  • Lock coarsening: when a thread repeatedly acquires and releases the lock of the same object, with no other affected code in between, the JIT can merge them into a single lock acquisition
public class Test {

    public static void main(String[] args) {
        StringBuffer sb = new StringBuffer();
        // each append() locks sb; the JIT can coarsen these
        // three acquisitions into one acquire/release pair
        sb.append("a");
        sb.append("b");
        sb.append("c");
    }
}
  • Lock elimination: if the object in a critical section never escapes the current thread, the operation is inherently thread safe, so the JIT can remove the lock entirely
public class Test {

    public static void main(String[] args) {
        // sb is a local object that never escapes this thread,
        // so StringBuffer's internal locks can be eliminated
        StringBuffer sb = new StringBuffer();
        sb.append("a").append("b").append("c");
    }
}

Four. Deadlock

1. Summary

A deadlock is a situation where multiple threads are blocked at the same time, each waiting for a resource held by another, so the threads block indefinitely and the program cannot terminate normally.

2. How it arises

The essence of synchronization is that one thread waits for another to finish before it can continue. But if the related threads all wait on each other, a deadlock results.
In other words:
There are at least two threads, each holding an object lock the other has requested, so they wait on each other and neither can proceed.

Four necessary conditions

  • Mutual exclusion: while a resource is used (held) by one thread, other threads cannot use it
  • No preemption: a requester cannot forcibly take a resource from its holder; only the holder can release it
  • Hold and wait: a thread requests additional resources while keeping the ones it already holds
  • Circular wait: there is a waiting cycle, e.g. p1 holds a resource p2 wants, p2 holds one p3 wants, and p3 holds one p1 wants

A deadlock forms only when all four conditions hold at once; break any one of them and the deadlock disappears.

3. Consequence

Threads block waiting and can make no further progress.

4. Solutions

  • Allocate all resources at once (breaks hold-and-wait)
  • Make resources preemptible: when certain conditions are met, a thread releases the resources it holds (breaks no-preemption)
  • Allocate resources in a fixed order: the system numbers each resource type, every thread requests resources in increasing order of that number and releases in the opposite order (breaks circular wait)
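The third point, ordered allocation, can be sketched as follows: both threads take the two locks in the same global order (A then B), so a circular wait can never form (class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking {
    private static final ReentrantLock LOCK_A = new ReentrantLock();
    private static final ReentrantLock LOCK_B = new ReentrantLock();

    // every thread acquires the locks in the same fixed order: A, then B;
    // this breaks the circular-wait condition, so deadlock is impossible
    static void transfer(String name) {
        LOCK_A.lock();
        try {
            LOCK_B.lock();
            try {
                System.out.println(name + " holds both locks");
            } finally {
                LOCK_B.unlock();
            }
        } finally {
            LOCK_A.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> transfer("t1"));
        Thread t2 = new Thread(() -> transfer("t2"));
        t1.start(); t2.start();
        t1.join(); t2.join(); // always terminates
    }
}
```

If one thread instead took B before A, the classic two-lock deadlock (each thread holding one lock and waiting for the other) would become possible.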

5. Detecting deadlocks

  • Use the JDK monitoring tools (jconsole, or jstack to inspect thread states)
  • The banker's algorithm

6. The banker's algorithm

How can a banker safely lend limited funds to several clients, so that every client can finish what they borrowed the money to do, while the banker eventually recovers all the money and never goes bankrupt?
The banker's algorithm must guarantee four points:
1. A client is accepted only if their maximum demand for funds does not exceed the banker's total funds;
2. A client may borrow in installments, but the total borrowed can never exceed their stated maximum demand;
3. When the banker's remaining funds cannot satisfy the amount a client still needs, the loan may be deferred, but the client is guaranteed to receive it within finite time;
4. Once a client has received all the funds they need, they return everything within finite time.
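The core of the banker's algorithm is a safety check: lend only if some order exists in which every client can still finish and repay. A single-resource (money only) sketch, with all names illustrative:

```java
public class BankersAlgorithm {
    // returns true if the state is safe, i.e. there exists an order in
    // which every client can finish with the currently available funds
    static boolean isSafe(int available, int[] allocated, int[] maxDemand) {
        int n = allocated.length;
        boolean[] finished = new boolean[n];
        int free = available;
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                // a client can finish if its remaining need fits the free funds
                if (!finished[i] && maxDemand[i] - allocated[i] <= free) {
                    free += allocated[i]; // the client finishes and repays everything
                    finished[i] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false;
        return true;
    }

    public static void main(String[] args) {
        // banker has 2 free units; clients hold {4, 2, 4} of maximums {8, 3, 9}
        System.out.println(isSafe(2, new int[]{4, 2, 4}, new int[]{8, 3, 9})); // true
    }
}
```

In the example, client 1 (needing 1 more) finishes first and repays 2, which frees enough for client 0, and then for client 2; a request that would leave no such order is refused or deferred.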

Five. The Lock framework

1. Brief introduction

The JDK provides a locking mechanism besides synchronized: an explicit lock object is defined and operated on.
Compared with synchronized:
Lock loses the convenience of synchronized's implicit locking and unlocking, but gains explicit control over acquiring and releasing the lock, plus features synchronized does not have, such as interruptible lock acquisition and acquisition with a timeout.

Lock lock = new ReentrantLock();
// set the current thread's synchronization state, saving the thread and its
// state in a queue; on success execution continues, on failure lock() blocks
lock.lock();
try {
    // ......
} finally {
    lock.unlock();
}

Note: synchronized releases its lock automatically when the block completes or an exception is thrown, while Lock must be released with an explicit unlock() call, which is why the lock is released in finally.

2. How Lock works: AQS

AQS (AbstractQueuedSynchronizer), the queue synchronizer.

  • Implementation principle: a double-ended queue stores the waiting threads and their synchronization state, and CAS-based methods set that state. For example, in the ReentrantLock implementation, calling lock.lock() keeps trying to set the thread's synchronization state
  • About the queue: (1) it is double-ended; (2) AQS stores the queue's head and tail nodes
  • The template methods AQS provides fall into 3 classes:
    1. Exclusive acquisition and release of the synchronization state
    2. Shared acquisition and release of the synchronization state
    3. Querying the waiting threads in the synchronization queue
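The AQS javadoc itself sketches a minimal exclusive lock built this way. The version below (a hypothetical Mutex class) shows the two template methods a subclass overrides, with the queueing and blocking handled entirely by AQS:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    // synchronization state 0 = unlocked, 1 = locked; AQS queues waiting
    // threads and parks them, exactly as described above
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1); // CAS the synchronization state
        }
        @Override
        protected boolean tryRelease(int arg) {
            setState(0);
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }  // blocks (parks) until acquired
    public void unlock() { sync.release(1); }  // wakes the next queued thread
}
```

acquire(1) first calls tryAcquire; if the CAS fails, AQS enqueues the thread and parks it, and release(1) unparks the successor node.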

3. Features of Lock

(1) Fair and unfair locks
Whether threads are granted the lock in queue (arrival) order: when multiple threads request the lock, is it granted in chronological order?
(2) AQS supports exclusive and shared synchronization state (exclusive locks and shared locks)
Exclusive: only one thread may hold the lock at a time
Shared: a certain number of threads share the lock
(3) The Lock-package APIs whose names carry Reentrant are reentrant locks
They allow the same thread to acquire the same Lock object multiple times
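ReentrantLock exposes both fairness policies through its constructor; a quick sketch (the class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // unfair (the default): a newly arriving thread may barge in and
        // grab the lock ahead of threads already queued
        ReentrantLock unfair = new ReentrantLock();
        // fair: the lock is granted in FIFO (arrival) order
        ReentrantLock fair = new ReentrantLock(true);
        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```

Unfair locks usually give higher throughput (barging avoids a wakeup), while fair locks prevent starvation.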

4. The read/write lock API in the Lock framework: ReentrantReadWriteLock

  • Use case: multithreaded operations where read-read may run concurrently / in parallel, but read-write and write-write may not, e.g. multiple threads reading and writing a file
  • Between the read lock and the write lock, only downgrading is allowed, never upgrading (write lock -> read lock)
public class ReadWriteTest {

    private static final ReadWriteLock LOCK = new ReentrantReadWriteLock();
    private static final Lock READ_LOCK = LOCK.readLock();
    private static final Lock WRITE_LOCK = LOCK.writeLock();

    public static void readFile() {
        READ_LOCK.lock(); // lock before try: unlock() must only run if lock() succeeded
        try {
            // IO: read the file
        } finally {
            READ_LOCK.unlock();
        }
    }

    public static void writeFile() {
        WRITE_LOCK.lock();
        try {
            // IO: write the file
        } finally {
            WRITE_LOCK.unlock();
        }
    }

    public static void main(String[] args) {
        // 20 threads read the file
        for (int i = 0; i < 20; i++) {
            new Thread(ReadWriteTest::readFile).start();
        }
        // 20 threads write the file
        for (int i = 0; i < 20; i++) {
            new Thread(ReadWriteTest::writeFile).start();
        }
    }
}

Advantage: reads execute concurrently, which improves throughput.

Condition: inter-thread communication

public static Lock LOCK = new ReentrantLock();
public static Condition CONDITION = LOCK.newCondition();

public static void t3() throws InterruptedException {
    LOCK.lock(); // equivalent to entering a synchronized block
    try {
        while (inventoryAtUpperLimit()) { // hypothetical predicate
            CONDITION.await();   // equivalent to lockObject.wait()
        }
        System.out.println("t3");
        CONDITION.signal();      // equivalent to lockObject.notify()
        CONDITION.signalAll();   // equivalent to lockObject.notifyAll()
    } finally {
        LOCK.unlock();
    }
}
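Putting Condition to work, here is a small bounded-buffer (producer/consumer) sketch in the same style; BoundedBuffer and its capacity are illustrative choices, not from the original text:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity = 2;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public void put(int x) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await(); // inventory at the upper limit: wait
            }
            items.addLast(x);
            notEmpty.signal(); // wake a waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await(); // nothing to take: wait
            }
            int x = items.removeFirst();
            notFull.signal(); // wake a waiting producer
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```

Two separate Condition objects on one Lock let producers and consumers wait on different queues, which synchronized's single wait set cannot express.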

Six. AQS implementations / applications

1. CountDownLatch

  • Use case: at some point in thread A, block and wait until a group of threads has finished executing, then run A's subsequent code
  • Note: only a counter-decrement operation is provided

Example: the entry thread runs the test method and blocks until all child threads have finished.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.junit.Test;

public class Main {

    // Approach 1: yield()
    @Test
    public void t1() {
        for (int i = 0; i < 20; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName());
            }).start();
        }
        while (Thread.activeCount() > 1) {
            Thread.yield();
        }
        System.out.println("execution complete: " + Thread.currentThread().getName());
    }

    // Approach 2: join()
    @Test
    public void t2() throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            Thread t = new Thread(() -> {
                System.out.println(Thread.currentThread().getName());
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("execution complete: " + Thread.currentThread().getName());
    }

    // Approach 3: CountDownLatch
    @Test
    public void t3() throws InterruptedException {
        CountDownLatch cdl = new CountDownLatch(20); // initial counter value
        for (int i = 0; i < 20; i++) {
            Thread t = new Thread(() -> {
                System.out.println(Thread.currentThread().getName());
                cdl.countDown(); // decrement the counter by 1
            });
            t.start();
        }
        cdl.await(); // the current thread blocks until the counter reaches 0
        System.out.println("execution complete: " + Thread.currentThread().getName());
    }
}

2. Semaphore

A counting semaphore, mainly used to control how many threads may access a shared pool of resources.
(1) Use cases:

  • The same scenario as CountDownLatch (wait for a group of threads to finish)
  • Limiting multithreaded access to a finite resource

(2) Common API

  • new Semaphore(int permits); // sets the initial number of permits (there is no no-argument constructor)
  • release(int n); // increases the permit count by n (release() with no argument releases 1)
  • acquire(int n); // the current thread takes n permits from the Semaphore: on success the count drops and the thread continues; if not enough permits are available, the thread blocks and waits (acquire() with no argument takes 1)
import java.util.concurrent.Semaphore;
import org.junit.Test;

public class SemaphoreTest {

    // block until a group of threads has finished, then run the follow-up task
    @Test
    public void t1() throws InterruptedException {
        Semaphore s = new Semaphore(0);
        for (int i = 0; i < 20; i++) {
            Thread t = new Thread(() -> {
                System.out.println(Thread.currentThread().getName());
                s.release(); // release a permit (1 when called with no argument)
            });
            t.start();
        }
        s.acquire(20); // succeeds only after all child threads have released; blocks otherwise
        System.out.println("execution complete: " + Thread.currentThread().getName());
    }

    // limit access to a finite resource
    // simulate a server accepting client http requests: at most 1000 concurrent
    // once 1000 client tasks are in flight, further requests block and wait
    @Test
    public void t2() throws InterruptedException {
        Semaphore s = new Semaphore(1000);
        for (;;) {
            Thread t = new Thread(() -> {
                try {
                    s.acquire(); // take a permit; block if none is available
                    // simulate handling a client http request
                    System.out.println(Thread.currentThread().getName());
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    s.release(); // give the permit back
                }
            });
            t.start();
        }
    }
}

Seven. ThreadLocal

1. Concept

ThreadLocal provides thread-local variables. In a multithreaded environment it guarantees that each thread's copy of the variable is independent of every other thread's; in other words, ThreadLocal creates a separate copy of the variable for each thread, like a per-thread private static variable.
ThreadLocal's effect is somewhat the opposite of the synchronization mechanisms: synchronization guarantees the consistency of shared data across threads, while ThreadLocal guarantees the independence of data across threads.

2. Use case

Isolating variables between threads, ensuring each thread works on its own copy of the variable.

public class ThreadLocalTest {

    private static final ThreadLocal<String> HOLDER = new ThreadLocal<>();

    @Test
    public void t1() {
        // the values are bound to threads and isolated per thread
        // HOLDER.get();      // get the value of this ThreadLocal for the current thread
        // HOLDER.remove();   // remove the value of this ThreadLocal for the current thread
        // HOLDER.set("abc"); // set the value of this ThreadLocal for the current thread
        try {
            HOLDER.set("abc");
            for (int i = 0; i < 20; i++) {
                final int j = i;
                new Thread(() -> {
                    try {
                        HOLDER.set(String.valueOf(j));
                        if (j == 5) {
                            Thread.sleep(500);
                            System.out.println(HOLDER.get());
                        }
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    } finally {
                        HOLDER.remove();
                    }
                }).start();
            }
            while (Thread.activeCount() > 1) {
                Thread.yield();
            }
            System.out.println(HOLDER.get());
        } finally {
            // once a thread sets a ThreadLocal value, it must remove it before the thread ends
            HOLDER.remove();
        }
    }
}

Note: once a thread sets a ThreadLocal value, it must call remove() before the thread ends.
Recommended pattern:
Define a class variable: static ThreadLocal<T> threadLocal = new ThreadLocal<>(); (T is the type of the stored data)
All threads use the same ThreadLocal object, but the value inside is bound to each thread; threads do not affect each other.

Whenever a thread sets a value, remove it before the thread ends:
new Thread(() -> {

    try {
        threadLocal.set(value); // the value to bind to this thread
        // ... use the value ...
    } finally {
        threadLocal.remove();
    }
}).start();

3. How it works

Each Thread object has its own ThreadLocalMap. Calling set(value), get(), or remove() on a ThreadLocal object operates on the current thread's ThreadLocalMap, which is why the variables in each thread are isolated.

4. Memory leaks

Why Entry's key is a weak reference: a ThreadLocalMap lives as long as its thread. If Entry held a strong reference to the ThreadLocal key, then even after every outside reference to the ThreadLocal object was gone, the map would still strongly reference both the ThreadLocal and its value, so neither could be collected while the thread lived on: a memory leak. Making the key a weak reference lets the ThreadLocal object itself be collected, which reduces the probability of a leak.
However, the value is still strongly referenced, so a leak remains possible. ThreadLocalMap accounts for this with protective measures: its get(), set(), and remove() methods clear stale entries whose key is null, setting the value reference to null so the value becomes collectible. This only reduces the probability of a leak, which is why, whenever we use a ThreadLocal, we should call remove() after each use to clear the data and prevent leaks.
Suppose instead that the key K were a strong reference and the thread ran for a long time: the ThreadLocal object could never be collected, the value V would be unreachable for use yet unreclaimable, and memory would leak.
Benefit of making K a weak reference: on every garbage collection, as long as no other strong reference points to the ThreadLocal object, it is collected; the ThreadLocalMap implementation then finds entries whose key is null and sets their V to null, so nothing points at V and it can be reclaimed.

Eight. ConcurrentHashMap (key)

1. HashMap

Not thread safe. JDK 1.7: array + linked list; JDK 1.8: array + linked list + red-black tree.
(1) JDK 1.7:
transient Entry<K,V>[] table

final float loadFactor; // load factor
static class Entry<K,V> implements Map.Entry<K,V> {

    final K key;
    V value;
    Entry<K,V> next; // linked list (singly linked)
    int hash;
}
  • There are multiple constructors; you may pass in an initial capacity (the array length) or omit it and use the defaults (array length 16, load factor 0.75)
  • Underlying data structure: array + linked list
  • When storing data, once the number of entries reaches current array length * load factor, the array doubles in size; resizing is very expensive (rehashing, copying data, etc.)
  • When hash collisions are severe, the linked list on a bucket grows longer and longer and query efficiency keeps dropping; the time complexity is O(n)
  • Space for time: to speed up key lookups you can lower the load factor or raise the initial capacity, reducing the probability of hash collisions
  • Operating on the data from multiple threads is unsafe, and a circular linked list can form, causing an infinite loop (resizing reverses the order of elements in a bucket because of head insertion; with an unlucky interleaving, thread 1 points node1.next at node2, then thread 2 points node2.next at node1)

(2) JDK 1.8

static class Node<K,V> implements Map.Entry<K,V> {

    final int hash;
    final K key;
    V value;
    Node<K,V> next;
}
  • Underlying data structure: array + linked list + red-black tree
  • When storing data, once a bucket's linked list reaches a threshold length it is converted to a red-black tree
  • Query efficiency in a red-black tree is O(log N)
  • Its iterators are fail-fast: after an iterator is created, a structural modification makes it throw ConcurrentModificationException

2. Hashtable

Thread safe. In both 1.7 and 1.8 it is array + linked list, with every method locked via synchronized, which is very inefficient.

3. ConcurrentHashMap

Thread safe, and many scenarios support concurrent operation, improving efficiency.
(1) JDK 1.7

static final class Segment<K,V> extends ReentrantLock implements Serializable {

    transient volatile HashEntry<K,V>[] table;
    final float loadFactor;

    static final class HashEntry<K,V> {

        final int hash;
        final K key;
        volatile V value;
        volatile HashEntry<K,V> next;
    }
}
  • The underlying data structure is still array + linked list; HashEntry is the linked-list node
  • Segment-based lock striping: when multiple threads update concurrently, updates to the same Segment synchronize on that Segment's lock, keeping the data safe, while writes to different Segments proceed concurrently
  • Synchronization is built on the ReentrantLock mechanism (Segment extends ReentrantLock)
  • Like HashMap, it still suffers from slow linked-list queries when hash collisions occur

(2) JDK 1.8

static class Node<K,V> implements Map.Entry<K,V> {

    final int hash;
    final K key;
    volatile V val;
    volatile Node<K,V> next;
}
  • The underlying data structure is the same as HashMap 1.8: array + linked list + red-black tree
  • Concurrent operation is supported; the implementation uses CAS + synchronized to make updates safe
  • put stores an element as follows: compute the array index from the key's hashCode; if there is no Node there, try to insert with CAS, spinning and retrying on failure until the insert succeeds; if a Node exists, synchronize on that Node (the head of the linked list / red-black tree) and then insert
(3) In both JDK 1.7 and 1.8:

  • The key and value iterators are weakly consistent: after an iterator is created, the map may still be updated without the iterator failing
  • Reads take no lock; value is declared volatile, which guarantees visibility, so reads are safe
  • Separating reads from writes improves efficiency: inserts/deletes on different Nodes/Segments run concurrently or in parallel, writes to the same Node/Segment are mutually exclusive, and all reads are lock-free and can run concurrently / in parallel

(4) Summary:

  • JDK 1.7:
    Array + linked list, built on Segment lock striping. Segment extends ReentrantLock; threads can operate on different Segments concurrently, while access within the same Segment is serialized by the Lock.
  • JDK 1.8:
    Array + linked list + red-black tree, using synchronized (plus CAS) for thread safety. Threads can operate on different nodes concurrently: in put, if the target bucket is empty CAS is used, and if it is not, synchronized is taken on the head node.
  • Effect in both 1.7 and 1.8:
    Read/write separation improves efficiency: writes to different Nodes/Segments run concurrently or in parallel, writes to the same Node/Segment are mutually exclusive, and all reads are lock-free.
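A quick sketch of these concurrency guarantees in action, using merge() for an atomic per-key update (the class name and counts are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    // many threads bump the same key; merge() applies each update atomically
    static int countWith(int nThreads, int perThread) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] ts = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    // updates to the same bucket are serialized by the map,
                    // while different buckets can be updated in parallel
                    map.merge("hits", 1, Integer::sum);
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return map.get("hits");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWith(4, 1000)); // 4000: no lost updates
    }
}
```

With a plain HashMap the same test would lose updates (or worse); with Hashtable it would be correct but every call would contend for one global lock.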

4. HashMap / Hashtable / ConcurrentHashMap comparison

(comparison table shown as an image in the original post)
