Monitoring MongoDB Slow Queries

1. Use mongostat to Monitor MongoDB's Overall State

mongostat is the status-monitoring tool that ships with MongoDB and is run from the command line. At a fixed interval it samples MongoDB's current running state and prints the result.
If the database suddenly slows down or shows other problems, the first thing to do is run mongostat and check what state MongoDB is in.

mongostat --host localhost:27017 -uroot -p123456 --authenticationDatabase admin

Parameter description :
--host: specify the IP address and port, or just the IP and then use --port to give the port number
-u: if authentication is enabled, the username follows this flag
-p: password
--authenticationDatabase: if authentication is enabled, the authentication database follows this flag (the database against which the account above authenticates)
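
A common way to run it day to day is with a polling interval and a row limit; this is a minimal sketch using the same credentials as above (the trailing number is the sampling interval in seconds, and --rowcount limits how many samples are printed before mongostat exits):

# print 30 samples, one every 2 seconds, then exit
mongostat --host localhost:27017 -uroot -p123456 --authenticationDatabase admin --rowcount 30 2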

mongostat output fields

insert/s : number of objects inserted into the database per second; on a secondary the value is prefixed with *, meaning a replicated operation
query/s : number of query operations per second
update/s : number of update operations per second
delete/s : number of delete operations per second
getmore/s : number of getmore operations on cursors per second
command : number of commands executed per second; on a replica set member two values are shown (e.g. 3|0), meaning local | replicated commands
note : a bulk insert counts as a single command, so this number by itself does not mean much
dirty : WiredTiger engine only; the percentage of the cache holding dirty bytes
used : WiredTiger engine only; the percentage of the cache in use
flushes :
for the WiredTiger engine : the number of checkpoints triggered during the polling interval
for the MMAPv1 engine : the number of fsyncs per second writing data to disk
note : it is usually 0 and occasionally 1; by measuring the time between two 1s you get a rough idea of how often a flush happens. Flushes are expensive, so if they occur frequently it is worth finding out why
vsize : virtual memory usage in MB (as of the last mongostat poll)
res : physical memory usage in MB (as of the last mongostat poll)
note : this matches what you see in top; vsize generally does not change much, while res rises slowly. If res often drops suddenly, check whether another program is eating memory
qr : length of the queue of clients waiting to read data from the MongoDB instance
qw : length of the queue of clients waiting to write data to the MongoDB instance
ar : number of active clients performing read operations
aw : number of active clients performing write operations
note : if these values are large, the DB is blocked and is processing requests more slowly than they arrive. Check for expensive slow queries first; if the queries are fine and it really is load, add machines
netIn : network traffic into the MongoDB instance
netOut : network traffic out of the MongoDB instance
note : these two fields reflect network bandwidth pressure and are generally not a bottleneck
conn : total number of open connections (the clients counted by qr, qw, ar and aw all hold these connections)
note : MongoDB creates a thread for each connection, and creating and releasing threads has its own overhead, so configure the connection-limit startup parameter (maxIncomingConnections) appropriately; an engineer at Alibaba suggests keeping it below 5000, which covers most scenarios (a config sketch follows this list)
set : the name of the replica set
repl : the replication state of the node
M --- master
SEC --- secondary
REC --- recovering
UNK --- unknown
SLV --- slave
RTR --- mongos process ("router")
ARB --- arbiter
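
As a companion to the maxIncomingConnections note above, here is a minimal mongod.conf fragment (YAML format; net.maxIncomingConnections is the documented option name, and 5000 is simply the value suggested above):

# mongod.conf (YAML format) -- only the relevant fragment
net:
  port: 27017
  maxIncomingConnections: 5000   # cap on simultaneous incoming connections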

2. Use Profiling to Capture Slow Queries

Similar to MySQL's slow log, MongoDB can record all slow queries. The tool for this is Profiling, which collects information about MongoDB write operations, cursors, database commands and so on. It can be turned on at the database level or at the instance level. Everything it collects is written to the system.profile collection, which is a capped collection. Profiling does affect performance, but not severely, because it records into system.profile, and system.profile is a capped collection; such collections come with some restrictions on how they can be operated on, but they are quite efficient.
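
You can confirm from the mongo shell that system.profile really is capped, and inspect its size (a small illustrative check; isCapped() and stats() are standard shell helpers):

# is system.profile a capped collection, and what are its limits?
db.system.profile.isCapped()
db.system.profile.stats()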

2.1 Slow Query Analysis Workflow

1. Set a time threshold, for example 200ms
2. Find the statements in profiling (system.profile) that took more than 200ms
3. Look at execStats and analyze the execution plan
4. Based on the analysis, decide whether an index needs to be added (a shell sketch of this workflow follows)
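
A minimal mongo shell sketch of the workflow above (the 200ms threshold matches the example; the test collection and the field name are placeholders for your own collection and query):

# 1. set the threshold: record operations slower than 200ms
db.setProfilingLevel(1, 200)
# 2. find the recorded statements that took more than 200ms
db.system.profile.find({millis: {$gt: 200}}).sort({ts: -1}).pretty()
# 3. analyze the execution plan of a suspect query
db.test.find({field: "value"}).explain("executionStats")
# 4. if the plan shows COLLSCAN, consider adding an index
db.test.createIndex({field: 1})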

2.2 Basic Profiling Operations

These commands are run from the mongo shell (or another client such as MongoChef):

# Check the current status: level and time threshold

PRIMARY> db.getProfilingStatus()

{ "was" : 1, "slowms" : 200 }

# Check the level only
PRIMARY> db.getProfilingLevel()

# Level description :
# 0: off, collect no data
# 1: collect slow-query data; the default threshold is 100 milliseconds
# 2: collect all data

# Set the level
PRIMARY> db.setProfilingLevel(2)
{ "was" : 1, "slowms" : 100, "ok" : 1 } # "was" reports the previous setting

# Set the level and the threshold
PRIMARY> db.setProfilingLevel(1,200)
{ "was" : 2, "slowms" : 100, "ok" : 1 } # "was" reports the previous setting

# Turn Profiling off
PRIMARY> db.setProfilingLevel(0)
{ "was" : 1, "slowms" : 200, "ok" : 1 } # "was" reports the previous setting

# Empty system.profile or change its size
# First turn Profiling off
PRIMARY> db.setProfilingLevel(0)
{ "was" : 0, "slowms" : 200, "ok" : 1 }
# Drop the system.profile collection
PRIMARY> db.system.profile.drop()
true
# Recreate the system.profile collection --- 4MB capped
PRIMARY> db.createCollection( "system.profile", { capped: true, size:4000000 } )
{ "ok" : 1 }
# Re-enable Profiling
PRIMARY> db.setProfilingLevel(1,200)
{ "was" : 0, "slowms" : 200, "ok" : 1 }

# In a replica set environment, to change the size of system.profile on a secondary, the member must first be removed from the replica set, then the steps above are performed, and finally the member is added back.

# Profiling can also be turned on when MongoDB starts
mongod --profile=1 --slowms=200
# or added to the configuration file
profile = 1
slowms = 200
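
Newer MongoDB versions use a YAML configuration file, where the equivalent settings live under operationProfiling; here is the corresponding minimal fragment (option names as documented, values matching the example above):

# mongod.conf (YAML format)
operationProfiling:
  mode: slowOp              # off | slowOp | all, corresponding to levels 0 / 1 / 2
  slowOpThresholdMs: 200    # threshold in milliseconds for what counts as slow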

3. Handy Profiling Queries for Daily Use

# Return the 10 most recent records
db.system.profile.find().limit(10).sort({ts:-1}).pretty()
# Return all operations except those of type command
db.system.profile.find({op: {$ne:'command'}}).pretty()
# Return records for a specific collection
db.system.profile.find({ns:'mydb.test'}).pretty()
# Return slow operations that took more than 5 milliseconds
db.system.profile.find({millis:{$gt:5}}).pretty()
# Return records within a specific time range
db.system.profile.find(
{
ts : {
$gt : new ISODate("2015-10-18T03:00:00Z"),
$lt : new ISODate("2015-10-19T03:40:00Z")
}
}
).pretty()
# Within a time range, exclude the user field from the output, and sort by time consumed
db.system.profile.find(
{
ts : {
$gt : new ISODate("2015-10-12T03:00:00Z") ,
$lt : new ISODate("2015-10-12T03:40:00Z")
}
},
{ user : 0 }
).sort( { millis : -1 } )
# View the most recent profile record
db.system.profile.find().sort({$natural:-1}).limit(1)
# Show the five most recent events
show profile
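
Beyond looking at individual records, it can be useful to summarise the profile data, for example grouping slow operations by namespace. This aggregation is an illustrative sketch rather than part of the original list; millis, ns and op are the standard system.profile fields used above:

# which collections produce the most slow operations, and how slow are they on average?
db.system.profile.aggregate([
  { $match: { op: { $ne: "command" } } },
  { $group: { _id: "$ns", count: { $sum: 1 }, avgMillis: { $avg: "$millis" }, maxMillis: { $max: "$millis" } } },
  { $sort: { count: -1 } }
])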

4. Case Study

4.1 Capturing the Slow Query

# The query below excludes a few large collections because they basically cannot be optimized (the logic has to change on the development side), so they were filtered out; the projection keeps only the fields I consider important, which makes analysis and quick diagnosis easier

db.system.profile.find({"ns":{"$not":{"$in":["F10data3.f10_4_4_1_gsgg_content", "F10data3.f10_5_1_1_gsyb_content"]}}}, {"ns":1,"op":1, "query":1,"keysExamined":1,"docsExamined":1,"numYield":1, "planSummary":1,"responseLength":1,"millis":1,"execStats":1}).limit(10).sort({ts:-1}).pretty()

# Below is a query statement that took more than 200ms
{
"op" : "query", # operation type: insert, query, update, remove, getmore or command
"ns" : "F10data3.f10_2_8_3_jgcc",
"query" : { # the actual query, including the filter conditions, the limit and the sort fields
"filter" : {
"jzrq" : {
"$gte" : ISODate("2017-03-31T16:00:00.000+0000"),
"$lte" : ISODate("2017-06-30T15:59:59.000+0000")
},
"jglxfldm" : 10.0
},
"ntoreturn" : 200.0,
"sort" : { # if there is a sort, the sort fields are shown; here it is RsId
"RsId" : 1.0
}
},
"keysExamined" : 0.0, # number of index keys examined; this is a full scan with no index, so it is 0
"docsExamined" : 69608.0, # number of documents examined; this is a full scan, so it equals the total number of documents in the collection
"numYield" : 546.0, # number of times the operation yielded so that other operations could run. Generally an operation yields when the data it needs is not yet fully in memory, letting MongoDB serve other in-memory operations while the data is read in
"locks" : { # lock information; R: global read lock, W: global write lock, r: database-level read lock, w: database-level write lock
"Global" : {
"acquireCount" : {
"r" : NumberLong(1094) # number of times the operation acquired the lock at this level
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(547)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(547)
}
}
},
"nreturned" : 200.0, # number of documents returned
"responseLength" : 57695.0, # length of the response in bytes; if this is large, consider projecting only the required fields
"millis" : 264.0, # time consumed (milliseconds)
"planSummary" : "COLLSCAN, COLLSCAN", # plan summary; this shows it is a full collection scan
"execStats" : { # detailed execution stats, omitted here; they can be analyzed later with explain
},
"ts" : ISODate("2017-08-24T02:32:49.768+0000"), # time the command executed
"client" : "10.3.131.96", # IP or hostname of the client
"allUsers" : [ ],
"user" : ""
}

4.2 Analyzing the Slow Query

1. If millis is large, the query needs optimizing.
2. If docsExamined is large, or close to the total number of records (documents), the query probably did not use an index and is a full collection scan.
3. If keysExamined is 0, that also suggests no index was used.
4. Combined with planSummary, which in the example above is "COLLSCAN, COLLSCAN", we can confirm it is a full scan.
5. If keysExamined is much higher than nreturned, the database had to scan many documents to find the target documents; consider creating an index to improve efficiency.
6. The index keys can be chosen from the query output: in the example above the filter contains jzrq and jglxfldm, and the results are sorted by RsId, so our index can be built like this (an explain check follows below):
db.f10_2_8_3_jgcc.ensureIndex({jzrq:1, jglxfldm:1, RsId:1})   # in newer versions, createIndex replaces ensureIndex
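
To verify that the new index is actually used, the original query can be re-run through explain; this is an illustrative check re-using the filter and sort from the captured document above:

db.f10_2_8_3_jgcc.find({
  jzrq: { $gte: ISODate("2017-03-31T16:00:00Z"), $lte: ISODate("2017-06-30T15:59:59Z") },
  jglxfldm: 10
}).sort({ RsId: 1 }).explain("executionStats")
# in the winning plan, IXSCAN (plus FETCH) should now appear instead of COLLSCAN,
# and executionStats.totalKeysExamined should be close to nReturned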

4.3 Execution Plan Stage Types

COLLSCAN # full collection scan; avoid
IXSCAN # index scan; can be improved further by choosing a more efficient index
FETCH # fetching the specified documents via the index
SHARD_MERGE # merging results returned from each shard; avoid cross-shard queries where possible
SORT # the sort was done in memory (equivalent to scanAndOrder:true in older versions); sorts should be backed by an index
LIMIT # limit was used to cap the number of returned documents; there should be a limit, and LIMIT+(FETCH+IXSCAN) is optimal
SKIP # skip was used; avoid unreasonable skips
IDHACK # query on _id; recommended, _id is the default primary key and is fast to query
SHARDING_FILTER # querying shard data through mongos; SHARDING_FILTER+IXSCAN is optimal
COUNT # a count operation such as db.coll.explain().count()
COUNTSCAN # stage returned when count does not use an index; avoid, an index is recommended in this case
COUNT_SCAN # stage returned when count uses an index; recommended
SUBPLAN # stage returned for an $or query that does not use an index; avoid
TEXT # stage returned when a full-text index is used for the query
PROJECTION # stage returned when the returned fields are restricted; selecting only the data you need is recommended, PROJECTION+IXSCAN
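
A quick way to see which of these stages a query hits is to look at the winning plan in its explain output (illustrative; coll and field are placeholders for your own collection and field):

db.coll.find({field: 1}).explain().queryPlanner.winningPlan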
