MongoDB: Defragmentation testing

[dbversit@dbversity bin]$ ps -ef | grep mongo | grep -v grep
dbversit 6471 1 0 04:55 ? 00:00:45 ./mongod --configsvr --dbpath /tmp/mongodb/config1 --logpath /tmp/mongodb/logs/config.log --port 30001 --config /etc/mongod.conf
dbversit 7165 1 3 05:00 pts/2 00:07:47 ./mongod --shardsvr --replSet rs --dbpath /tmp/mongodb/data/ --logpath /tmp/mongodb/logs/rs1.log --port 27010
dbversit 7249 1 1 05:00 pts/2 00:04:10 ./mongos --configdb 10.11.12.01:30001,10.11.12.02:30002,10.11.12.03:30003 --logpath /tmp/mongodb/logs/router.log --port 10000
[dbversit@dbversity bin]$
[sn55756@vm-c935-3307 bin]$ ps -ef | grep mongo | grep -v grep
sn55756 18006 15808 0 04:58 pts/0 00:00:55 ./mongod --configsvr --dbpath /tmp/mongodb/config2 --logpath /tmp/mongodb/logs/config2.log --port 30002
sn55756 18022 15808 0 04:58 pts/0 00:00:55 ./mongod --configsvr --dbpath /tmp/mongodb/config3 --logpath /tmp/mongodb/logs/config3.log --port 30003
sn55756 18325 15808 2 05:00 pts/0 00:06:32 ./mongod --shardsvr --replSet rs --dbpath /tmp/mongodb/data/ --logpath /tmp/mongodb/logs/rs2.log --port 27011
[sn55756@vm-c935-3307 bin]$
DB Stats Before test cases :-
—————————————

[dbversit@dbversity bin]$ ./mongo --port 27010
MongoDB shell version: 2.4.5
connecting to: 127.0.0.1:27010/test
rs:PRIMARY>
rs:PRIMARY> use fragmentation
switched to db fragmentation
rs:PRIMARY>
rs:PRIMARY> db.stats()
{
"db" : "fragmentation",
"collections" : 3,
"objects" : 2047848,
"avgObjSize" : 525.7855934620147,
"dataSize" : 1076728976,
"storageSize" : 1382903808,
"numExtents" : 21,
"indexes" : 1,
"indexSize" : 84498960,
"fileSize" : 4226809856,
"nsSizeMB" : 16,
"dataFileVersion" : {
"major" : 4,
"minor" : 5
},
"ok" : 1
}
rs:PRIMARY>
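The dataSize/storageSize gap in the stats above is a quick fragmentation gauge: in MMAPv1, storageSize counts all allocated extents while dataSize counts only live records (plus their padding). A one-liner to turn those two figures into a percentage (values hard-coded from the db.stats() output above; awk is assumed to be available):

```shell
# Rough fragmentation estimate from db.stats():
# storageSize = allocated extents, dataSize = live records (incl. padding).
dataSize=1076728976
storageSize=1382903808
awk -v d="$dataSize" -v s="$storageSize" \
    'BEGIN { printf "fragmentation: %.1f%%\n", (1 - d / s) * 100 }'
# prints: fragmentation: 22.1%
```

By this measure roughly 22% of the allocated space holds no live data at this point, and since dataSize also includes per-record padding, this is a lower bound on the waste.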

[dbversit@dbversity bin]$
[dbversit@dbversity bin]$
[dbversit@dbversity bin]$
[dbversit@dbversity bin]$ free -m
total used free shared buffers cached
Mem: 3832 3615 216 0 200 2833
-/+ buffers/cache: 581 3251
Swap: 0 0 0
[dbversit@dbversity bin]$
[dbversit@dbversity bin]$

[dbversit@dbversity bin]$ free -m
total used free shared buffers cached
Mem: 3832 3696 136 0 31 3068
-/+ buffers/cache: 595 3237
Swap: 0 0 0
[dbversit@dbversity bin]$
[dbversit@dbversity bin]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 50G 15G 33G 31% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/sda1 194M 47M 137M 26% /boot
[dbversit@dbversity bin]$
[dbversit@dbversity bin]$
Data imported to DB:-
———————————

[dbversit@dbversity bin]$ head -2 /home/dbversit/1_queries.js
db.fragcol.insert({ "id" : 0 , "name" : "Can you do some research and or testing on how or whether fragmentation in the data files reduces the amount of actual data cached in memory I believe MongoDB only memory maps chunks of the data files but if we have a database run for a long time with out of place updates, when would be the right time to do a compact or possibly defragment by creating a new secondary node: 0", "iteration" : "iteration-0" })
db.fragcol.insert({ "id" : 1 , "name" : "Can you do some research and or testing on how or whether fragmentation in the data files reduces the amount of actual data cached in memory I believe MongoDB only memory maps chunks of the data files but if we have a database run for a long time with out of place updates, when would be the right time to do a compact or possibly defragment by creating a new secondary node: 1", "iteration" : "iteration-1" })
[dbversit@dbversity bin]$
[dbversit@dbversity bin]$
[dbversit@dbversity bin]$ wc -l /home/dbversit/1_queries.js
1248953 /home/dbversit/1_queries.js
[dbversit@dbversity bin]$
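The generator behind the 1,248,953-line 1_queries.js is not shown in the post; a loop like the following (output path, filler text, and line count are illustrative, not the originals) produces the same one-insert-per-line shape:

```shell
# Hypothetical generator for a 1_queries.js-style load file.
N=1000                       # the original file held ~1.25M statements
for i in $(seq 0 $((N - 1))); do
  printf 'db.fragcol.insert({ "id" : %d , "name" : "filler text %d", "iteration" : "iteration-%d" })\n' "$i" "$i" "$i"
done > /tmp/1_queries.js
wc -l < /tmp/1_queries.js    # 1000
```

A file like this can then be replayed against the shard with something like `./mongo --port 27010 fragmentation /tmp/1_queries.js`.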
Queries used to create the Fragmentation :
—————————————

db.fragcol.update({"iteration" : /.*iteration-173.*/},{$set: {"iteration": "-173_UPDATED_FOR_FRAGMENTATION_PURPOSE"}},{multi:1})
db.fragcol.remove({"iteration" : /.*iteration-4.*/})
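These two statement shapes exploit how MMAPv1 reuses space: a multi-update that resizes documents past their allocated records forces out-of-place moves, and removes leave deleted-record holes that may never be refilled. A loop like this (iteration tags chosen arbitrarily; the post only shows tags 173 and 4) emits one update/remove pair per tag:

```shell
# Emit a resize-update plus a remove per iteration tag, mirroring the
# two fragmentation statements above (tags are arbitrary examples).
for i in 4 100 173; do
  printf 'db.fragcol.update({"iteration" : /.*iteration-%d.*/},{$set: {"iteration": "-%d_UPDATED_FOR_FRAGMENTATION_PURPOSE"}},{multi:1})\n' "$i" "$i"
  printf 'db.fragcol.remove({"iteration" : /.*iteration-%d.*/})\n' "$i"
done > /tmp/frag_queries.js
wc -l < /tmp/frag_queries.js   # 6
```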
DB Stats after creating test cases :-
——————————-

rs:PRIMARY> db.stats()
{
"db" : "fragmentation",
"collections" : 3,
"objects" : 2047848,
"avgObjSize" : 525.7855934620147,
"dataSize" : 1076728976,
"storageSize" : 1382903808,
"numExtents" : 21,
"indexes" : 1,
"indexSize" : 84498960,
"fileSize" : 4226809856,
"nsSizeMB" : 16,
"dataFileVersion" : {
"major" : 4,
"minor" : 5
},
"ok" : 1
}
rs:PRIMARY>

rs:SECONDARY> db.stats()
{
"db" : "fragmentation",
"collections" : 3,
"objects" : 2047848,
"avgObjSize" : 526.0200639891242,
"dataSize" : 1077209136,
"storageSize" : 1382903808,
"numExtents" : 21,
"indexes" : 1,
"indexSize" : 84498960,
"fileSize" : 4226809856,
"nsSizeMB" : 16,
"dataFileVersion" : {
"major" : 4,
"minor" : 5
},
"ok" : 1
}
rs:SECONDARY>
After Compacting the collection :-
——————————-

rs:SECONDARY> rs.slaveOk()
rs:SECONDARY>
rs:SECONDARY> show collections
fragcol
system.indexes
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY> db.fragcol.runCommand("compact")

{ "ok" : 1 }
rs:RECOVERING>

With resync :-
——————–

[sn55756@vm-c935-3307 bin]$ ps -ef | grep mongo
sn55756 18006 15808 0 04:58 pts/0 00:01:00 ./mongod --configsvr --dbpath /tmp/mongodb/config2 --logpath /tmp/mongodb/logs/config2.log --port 30002
sn55756 18022 15808 0 04:58 pts/0 00:00:59 ./mongod --configsvr --dbpath /tmp/mongodb/config3 --logpath /tmp/mongodb/logs/config3.log --port 30003
sn55756 18325 15808 2 05:00 pts/0 00:07:00 ./mongod --shardsvr --replSet rs --dbpath /tmp/mongodb/data/ --logpath /tmp/mongodb/logs/rs2.log --port 27011
sn55756 22293 15808 0 09:03 pts/0 00:00:00 grep mongo
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$ ./mongo --port 27011
MongoDB shell version: 2.4.5
connecting to: 127.0.0.1:27011/test
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY> use admin
switched to db admin
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY> db.shutdownServer()
Wed Mar 12 09:04:04.262 DBClientCursor::init call() failed
server should be down…
Wed Mar 12 09:04:04.262 trying reconnect to 127.0.0.1:27011
Wed Mar 12 09:04:04.263 reconnect 127.0.0.1:27011 failed couldn't connect to server 127.0.0.1:27011
>
bye
[3]+ Done numactl --interleave=all ./mongod --shardsvr --replSet rs --dbpath /tmp/mongodb/data/ --logpath /tmp/mongodb/logs/rs2.log --port 27011
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$ ps -ef | grep mongo
sn55756 18006 15808 0 04:58 pts/0 00:01:00 ./mongod --configsvr --dbpath /tmp/mongodb/config2 --logpath /tmp/mongodb/logs/config2.log --port 30002
sn55756 18022 15808 0 04:58 pts/0 00:00:59 ./mongod --configsvr --dbpath /tmp/mongodb/config3 --logpath /tmp/mongodb/logs/config3.log --port 30003
sn55756 22339 15808 0 09:04 pts/0 00:00:00 grep mongo
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$ cd /tmp/mongodb/data/
[sn55756@vm-c935-3307 data]$
[sn55756@vm-c935-3307 data]$
[sn55756@vm-c935-3307 data]$ ls -lhtr
total 8.3G
-rw------- 1 sn55756 mongod 64M Mar 12 05:01 local.0
-rw------- 1 sn55756 mongod 16M Mar 12 05:13 test.ns
-rw------- 1 sn55756 mongod 128M Mar 12 05:13 test.1
-rw------- 1 sn55756 mongod 64M Mar 12 05:13 test.0
-rw------- 1 sn55756 mongod 16M Mar 12 08:43 local.ns
-rw------- 1 sn55756 mongod 2.0G Mar 12 08:43 local.1
-rw------- 1 sn55756 mongod 2.0G Mar 12 09:00 fragmentation.6
-rw------- 1 sn55756 mongod 256M Mar 12 09:00 fragmentation.2
-rw------- 1 sn55756 mongod 512M Mar 12 09:00 fragmentation.3
-rw------- 1 sn55756 mongod 16M Mar 12 09:00 fragmentation.ns
-rw------- 1 sn55756 mongod 2.0G Mar 12 09:00 fragmentation.5
-rw------- 1 sn55756 mongod 1.0G Mar 12 09:00 fragmentation.4
drwxr-xr-x 2 sn55756 mongod 4.0K Mar 12 09:00 _tmp
-rw------- 1 sn55756 mongod 128M Mar 12 09:00 fragmentation.1
-rw------- 1 sn55756 mongod 64M Mar 12 09:00 fragmentation.0
-rwxr-xr-x 1 sn55756 mongod 0 Mar 12 09:04 mongod.lock
drwxr-xr-x 2 sn55756 mongod 4.0K Mar 12 09:04 journal
[sn55756@vm-c935-3307 data]$
[sn55756@vm-c935-3307 data]$
[sn55756@vm-c935-3307 data]$ cd ..
[sn55756@vm-c935-3307 mongodb]$
[sn55756@vm-c935-3307 mongodb]$
[sn55756@vm-c935-3307 mongodb]$ mv data/ data_old
[sn55756@vm-c935-3307 mongodb]$ mkdir data
[sn55756@vm-c935-3307 mongodb]$
[sn55756@vm-c935-3307 mongodb]$
[sn55756@vm-c935-3307 mongodb]$
[sn55756@vm-c935-3307 mongodb]$
[sn55756@vm-c935-3307 mongodb]$ cd /tmp/mongodb/bin/
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$ numactl --interleave=all ./mongod --shardsvr --replSet rs --dbpath /tmp/mongodb/data/ --logpath /tmp/mongodb/logs/rs2.log --port 27011
all output going to: /tmp/mongodb/logs/rs2.log
log file [/tmp/mongodb/logs/rs2.log] exists; copied to temporary file [/tmp/mongodb/logs/rs2.log.2014-03-12T13-04-58]
^Z
[3]+ Stopped numactl --interleave=all ./mongod --shardsvr --replSet rs --dbpath /tmp/mongodb/data/ --logpath /tmp/mongodb/logs/rs2.log --port 27011
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$ bg
[3]+ numactl --interleave=all ./mongod --shardsvr --replSet rs --dbpath /tmp/mongodb/data/ --logpath /tmp/mongodb/logs/rs2.log --port 27011 &
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$ disonw
bash: disonw: command not found
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$ disown
[sn55756@vm-c935-3307 bin]$
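Condensed, the resync-based defragmentation performed in this session is four steps. Paths and ports are copied from the transcript above; --fork is an assumption for convenience (the session instead backgrounded the process with ^Z, bg, and disown). This is a summary of the procedure, not something to run outside a test rig like this one:

```shell
# Resync a secondary from scratch: initial sync rewrites the data
# files compactly, discarding the fragmentation.
mongo --port 27011 admin --eval 'db.shutdownServer()'    # 1. stop the secondary
mv /tmp/mongodb/data /tmp/mongodb/data_old               # 2. set the old files aside
mkdir /tmp/mongodb/data                                  # 3. recreate an empty dbpath
numactl --interleave=all ./mongod --shardsvr --replSet rs \
    --dbpath /tmp/mongodb/data/ --logpath /tmp/mongodb/logs/rs2.log \
    --port 27011 --fork                                  # 4. restart; initial sync begins
```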
[sn55756@vm-c935-3307 bin]$
[sn55756@vm-c935-3307 bin]$ ./mongo --port 27010
MongoDB shell version: 2.4.5
connecting to: 127.0.0.1:27010/test
Wed Mar 12 09:05:17.520 JavaScript execution failed: Error: couldn't connect to server 127.0.0.1:27010 at src/mongo/shell/mongo.js:L114
exception: connect failed
[sn55756@vm-c935-3307 bin]$ ./mongo --port 27011
MongoDB shell version: 2.4.5
connecting to: 127.0.0.1:27011/test
>
>
>
>
>
>
rs:RECOVERING>
rs:RECOVERING>
rs:RECOVERING>
rs:RECOVERING>

At Primary :-

rs:PRIMARY> rs.status()
Wed Mar 12 09:05:28.574 Socket recv() errno:104 Connection reset by peer 127.0.0.1:27010
Wed Mar 12 09:05:28.590 SocketException: remote: 127.0.0.1:27010 error: 9001 socket exception [1] server [127.0.0.1:27010]
Wed Mar 12 09:05:28.597 DBClientCursor::init call() failed
Wed Mar 12 09:05:28.599 JavaScript execution failed: Error: error doing query: failed at src/mongo/shell/query.js:L78
Wed Mar 12 09:05:28.599 trying reconnect to 127.0.0.1:27010
Wed Mar 12 09:05:28.599 reconnect 127.0.0.1:27010 ok
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:PRIMARY>
rs:PRIMARY> rs.status()
{
"set" : "rs",
"date" : ISODate("2014-03-12T13:05:40Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "dbversity:27010",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 14731,
"optime" : Timestamp(1394628231, 876),
"optimeDate" : ISODate("2014-03-12T12:43:51Z"),
"self" : true
},
{
"_id" : 1,
"name" : "10.11.12.02:27011",
"health" : 1,
"state" : 3,
"stateStr" : "RECOVERING",
"uptime" : 20,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2014-03-12T13:05:40Z"),
"lastHeartbeatRecv" : ISODate("2014-03-12T13:05:40Z"),
"pingMs" : 41,
"lastHeartbeatMessage" : "initial sync need a member to be primary or secondary to do our initial sync",
"syncingTo" : "dbversity:27010"
}
],
"ok" : 1
}
rs:PRIMARY>
rs:PRIMARY>
rs:PRIMARY>
rs:PRIMARY>
rs:PRIMARY> rs.status()
{
"set" : "rs",
"date" : ISODate("2014-03-12T13:07:03Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "dbversity:27010",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 14814,
"optime" : Timestamp(1394628231, 876),
"optimeDate" : ISODate("2014-03-12T12:43:51Z"),
"self" : true
},
{
"_id" : 1,
"name" : "10.11.12.02:27011",
"health" : 1,
"state" : 3,
"stateStr" : "RECOVERING",
"uptime" : 103,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2014-03-12T13:07:02Z"),
"lastHeartbeatRecv" : ISODate("2014-03-12T13:07:02Z"),
"pingMs" : 32,
"lastHeartbeatMessage" : "initial sync cloning db: fragmentation",
"syncingTo" : "dbversity:27010"
}
],
"ok" : 1
}
rs:PRIMARY>

At Secondary :-
rs:SECONDARY> use fragmentation
switched to db fragmentation
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY>
rs:SECONDARY> db.stats()

{
"db" : "fragmentation",
"collections" : 3,
"objects" : 2047848,
"avgObjSize" : 467.92043159453243,
"dataSize" : 958229920,
"storageSize" : 1164926976,
"numExtents" : 20,
"indexes" : 1,
"indexSize" : 55662208,
"fileSize" : 4226809856,
"nsSizeMB" : 16,
"dataFileVersion" : {
"major" : 4,
"minor" : 5
},
"ok" : 1
}
rs:SECONDARY>
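Comparing the secondary's stats before and after the resync, with the same 2,047,848 objects: storageSize fell from 1,382,903,808 to 1,164,926,976 bytes and indexSize from 84,498,960 to 55,662,208. The arithmetic (values hard-coded from the two db.stats() outputs):

```shell
# Space reclaimed by the resync, from the before/after db.stats() figures.
awk 'BEGIN {
  s_before = 1382903808; s_after = 1164926976;   # storageSize
  i_before = 84498960;   i_after = 55662208;     # indexSize
  printf "storage reclaimed: %d bytes (%.1f MB)\n", s_before - s_after, (s_before - s_after) / 1048576
  printf "index reclaimed:   %d bytes (%.1f MB)\n", i_before - i_after, (i_before - i_after) / 1048576
}'
# prints:
# storage reclaimed: 217976832 bytes (207.9 MB)
# index reclaimed:   28836752 bytes (27.5 MB)
```

dataSize also dropped (1,077,209,136 to 958,229,920 bytes), likely because initial sync rewrites each record without the padding the collection had accumulated under the update/remove workload.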
