How to remove a shard from a MongoDB sharded cluster

The removeShard command removes a shard from a sharded cluster.

When you run removeShard, MongoDB first moves the shard’s chunks to other shards in the cluster. Then MongoDB removes the shard.
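At a high level, the procedure looks like the minimal sketch below. It assumes the balancer is enabled, since draining only proceeds while the balancer can migrate chunks, and uses "<shardName>" as a placeholder for the shard's name.

sh.getBalancerState()                              // should return true before you start draining
use admin
db.runCommand( { removeShard : "<shardName>" } )   // re-run the same command later to check progress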

[root@dbversity bin]# ./mongo --port 10000
MongoDB shell version: 2.4.11
connecting to: 127.0.0.1:10000/test
mongos>
mongos>
mongos>
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("5677c1513fcc7d6d2d23103d")
}
shards:
{ "_id" : "rs1", "host" : "rs1/dbversity:27010,dbversity:27011" }
{ "_id" : "rs2", "host" : "rs2/dbversity:27020,dbversity:27021" }
{ "_id" : "rs3", "host" : "rs3/dbversity:27030,dbversity:27031" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "shrdb", "partitioned" : true, "primary" : "rs2" }
shrdb.mycol
shard key: { "user_id" : 1 }
chunks:
rs1 2
rs3 4
rs2 2
{ "user_id" : { "$minKey" : 1 } } -->> { "user_id" : 1 } on : rs1 Timestamp(3, 0)
{ "user_id" : 1 } -->> { "user_id" : 4854 } on : rs1 Timestamp(5, 0)
{ "user_id" : 4854 } -->> { "user_id" : 144192 } on : rs3 Timestamp(4, 1)
{ "user_id" : 144192 } -->> { "user_id" : 284215 } on : rs2 Timestamp(6, 1)
{ "user_id" : 284215 } -->> { "user_id" : 420152 } on : rs2 Timestamp(5, 2)
{ "user_id" : 420152 } -->> { "user_id" : 552441 } on : rs3 Timestamp(6, 2)
{ "user_id" : 552441 } -->> { "user_id" : 684190 } on : rs3 Timestamp(6, 4)
{ "user_id" : 684190 } -->> { "user_id" : { "$maxKey" : 1 } } on : rs3 Timestamp(6, 5)

mongos>
mongos>

From the mongo shell, the removeShard operation resembles the following:

mongos> use admin
switched to db admin
mongos>
mongos> db.runCommand( { removeShard : "rs3" } )
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "rs3",
"ok" : 1
}
mongos>
Check the Status of the Migration

To check the progress of the migration at any stage in the process, run sh.status() (the shard being removed is now flagged with "draining" : true) or run removeShard from the admin database again. For example, for a shard named rs3:

mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("5677c1513fcc7d6d2d23103d")
}
shards:
{ "_id" : "rs1", "host" : "rs1/dbversity:27010,dbversity:27011" }
{ "_id" : "rs2", "host" : "rs2/dbversity:27020,dbversity:27021" }
{ "_id" : "rs3", "draining" : true, "host" : "rs3/dbversity:27030,dbversity:27031" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "shrdb", "partitioned" : true, "primary" : "rs2" }
shrdb.mycol
shard key: { "user_id" : 1 }
chunks:
rs1 3
rs2 2
rs3 3
{ "user_id" : { "$minKey" : 1 } } -->> { "user_id" : 1 } on : rs1 Timestamp(3, 0)
{ "user_id" : 1 } -->> { "user_id" : 4854 } on : rs1 Timestamp(5, 0)
{ "user_id" : 4854 } -->> { "user_id" : 144192 } on : rs1 Timestamp(7, 0)
{ "user_id" : 144192 } -->> { "user_id" : 284215 } on : rs2 Timestamp(6, 1)
{ "user_id" : 284215 } -->> { "user_id" : 420152 } on : rs2 Timestamp(5, 2)
{ "user_id" : 420152 } -->> { "user_id" : 552441 } on : rs3 Timestamp(7, 1)
{ "user_id" : 552441 } -->> { "user_id" : 684190 } on : rs3 Timestamp(6, 4)
{ "user_id" : 684190 } -->> { "user_id" : { "$maxKey" : 1 } } on : rs3 Timestamp(6, 5)

mongos>
mongos> db.runCommand( { removeShard : "rs3" } )
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(4),
"dbs" : NumberLong(0)
},
"ok" : 1
}
mongos>
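Rather than re-running the command by hand, you can poll it from the mongos shell until draining finishes. The loop below is only a rough sketch; it assumes you are already on the admin database and that the shard being removed is rs3.

var res;
do {
    res = db.runCommand( { removeShard : "rs3" } );
    printjson( res.remaining );      // prints "undefined" once draining has completed
    sleep( 10000 );                  // wait 10 seconds between checks
} while ( res.state != "completed" );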

You cannot remove more than one shard at a time:
mongos> db.runCommand( { removeShard : "rs2" } )
{ "ok" : 0, "errmsg" : "Can't have more than one draining shard at a time" }
mongos>
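If you are not sure whether a drain is already in progress, a quick way to check (through the mongos) is to look for a shard flagged as draining in the config database:

mongos> db.getSiblingDB("config").shards.find( { draining : true } )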
Determine the Name of the Shard to Remove

To determine the name of the shard to remove, connect to a mongos instance with the mongo shell and either run sh.status() or use the listShards command, as in the following:

mongos> db.adminCommand( { listShards: 1 } )
{
"shards" : [
{
"_id" : "rs1",
"host" : "rs1/dbversity:27010,dbversity:27011"
},
{
"_id" : "rs2",
"host" : "rs2/dbversity:27020,dbversity:27021"
},
{
"_id" : "rs3",
"draining" : true,
"host" : "rs3/dbversity:27030,dbversity:27031"
}
],
"ok" : 1
}
mongos>
Also, the draining shard should not remain the primary shard for any database. The primary shard is the shard that holds all of a database's un-sharded collections and their data.
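A quick way to see which databases would block the drain is to list the databases whose primary shard is the shard being removed, for example (assuming the shard is rs3):

mongos> db.getSiblingDB("config").databases.find( { primary : "rs3" } )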
See the illustration below.
mongos> use newdb
switched to db newdb
mongos>
mongos> for(var i=1; i <= 100000 ; i++){db.newdb_mycol.insert({ "user_id" : i, "iteration#" : i, company : "dbfry", data: "dummy data", occurance : i })}
mongos>
mongos>
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("5677c1513fcc7d6d2d23103d")
}
shards:
{ "_id" : "rs1", "host" : "rs1/dbversity:27010,dbversity:27011" }
{ "_id" : "rs2", "host" : "rs2/dbversity:27020,dbversity:27021" }
{ "_id" : "rs3", "draining" : true, "host" : "rs3/dbversity:27030,dbversity:27031" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "shrdb", "partitioned" : true, "primary" : "rs2" }
shrdb.mycol
shard key: { "user_id" : 1 }
chunks:
rs1 4
rs2 4
rs3 2
{ "user_id" : { "$minKey" : 1 } } -->> { "user_id" : 1 } on : rs1 Timestamp(3, 0)
{ "user_id" : 1 } -->> { "user_id" : 4854 } on : rs1 Timestamp(5, 0)
{ "user_id" : 4854 } -->> { "user_id" : 144192 } on : rs1 Timestamp(7, 0)
{ "user_id" : 144192 } -->> { "user_id" : 284215 } on : rs2 Timestamp(6, 1)
{ "user_id" : 284215 } -->> { "user_id" : 420152 } on : rs2 Timestamp(5, 2)
{ "user_id" : 420152 } -->> { "user_id" : 486296 } on : rs2 Timestamp(8, 0)
{ "user_id" : 486296 } -->> { "user_id" : 552441 } on : rs1 Timestamp(9, 0)
{ "user_id" : 552441 } -->> { "user_id" : 618315 } on : rs2 Timestamp(10, 0)
{ "user_id" : 618315 } -->> { "user_id" : 684190 } on : rs3 Timestamp(10, 1)
{ "user_id" : 684190 } -->> { "user_id" : { "$maxKey" : 1 } } on : rs3 Timestamp(6, 5)
{ "_id" : "newdb", "partitioned" : false, "primary" : "rs2" }

mongos>
mongos>
mongos>
Changing the primary shard for a particular database:
mongos> use admin
switched to db admin
mongos>
mongos> db.runCommand( { movePrimary : "newdb", to : "rs3" })
{ "primary " : "rs3:rs3/dbversity:27030,dbversity:27031", "ok" : 1 }
mongos>
mongos>
mongos>
mongos>
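After a movePrimary, it can also be worth refreshing the routing table cached by every mongos (or simply restarting the mongos processes) so they pick up the new primary, for example:

mongos> db.adminCommand( { flushRouterConfig : 1 } )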
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("5677c1513fcc7d6d2d23103d")
}
shards:
{ "_id" : "rs1", "host" : "rs1/dbversity:27010,dbversity:27011" }
{ "_id" : "rs2", "host" : "rs2/dbversity:27020,dbversity:27021" }
{ "_id" : "rs3", "draining" : true, "host" : "rs3/dbversity:27030,dbversity:27031" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "shrdb", "partitioned" : true, "primary" : "rs2" }
shrdb.mycol
shard key: { "user_id" : 1 }
chunks:
rs1 4
rs2 4
rs3 2
{ "user_id" : { "$minKey" : 1 } } -->> { "user_id" : 1 } on : rs1 Timestamp(3, 0)
{ "user_id" : 1 } -->> { "user_id" : 4854 } on : rs1 Timestamp(5, 0)
{ "user_id" : 4854 } -->> { "user_id" : 144192 } on : rs1 Timestamp(7, 0)
{ "user_id" : 144192 } -->> { "user_id" : 284215 } on : rs2 Timestamp(6, 1)
{ "user_id" : 284215 } -->> { "user_id" : 420152 } on : rs2 Timestamp(5, 2)
{ "user_id" : 420152 } -->> { "user_id" : 486296 } on : rs2 Timestamp(8, 0)
{ "user_id" : 486296 } -->> { "user_id" : 552441 } on : rs1 Timestamp(9, 0)
{ "user_id" : 552441 } -->> { "user_id" : 618315 } on : rs2 Timestamp(10, 0)
{ "user_id" : 618315 } -->> { "user_id" : 684190 } on : rs3 Timestamp(10, 1)
{ "user_id" : 684190 } -->> { "user_id" : { "$maxKey" : 1 } } on : rs3 Timestamp(6, 5)
{ "_id" : "newdb", "partitioned" : false, "primary" : "rs3" }

mongos>
mongos>
mongos>
mongos>
mongos>
bye
Now, look at the data on shard rs3:

[root@dbversity bin]# ./mongo --port 27030
MongoDB shell version: 2.4.11
connecting to: 127.0.0.1:27030/test
rs3:PRIMARY>
rs3:PRIMARY> use newdb
switched to db newdb
rs3:PRIMARY>
rs3:PRIMARY>
rs3:PRIMARY> show collections
newdb_mycol
system.indexes
rs3:PRIMARY>
rs3:PRIMARY> db.newdb_mycol.count()
100000
rs3:PRIMARY>
rs3:PRIMARY>
rs3:PRIMARY> db.newdb_mycol.find()
{ "_id" : ObjectId("5677d1cb56975513949d03ee"), "user_id" : 1, "iteration#" : 1, "company" : "dbfry", "data" : "dummy data", "occurance" : 1 }
{ "_id" : ObjectId("5677d1cb56975513949d03ef"), "user_id" : 2, "iteration#" : 2, "company" : "dbfry", "data" : "dummy data", "occurance" : 2 }
{ "_id" : ObjectId("5677d1cb56975513949d03f0"), "user_id" : 3, "iteration#" : 3, "company" : "dbfry", "data" : "dummy data", "occurance" : 3 }
{ "_id" : ObjectId("5677d1cb56975513949d03f1"), "user_id" : 4, "iteration#" : 4, "company" : "dbfry", "data" : "dummy data", "occurance" : 4 }
{ "_id" : ObjectId("5677d1cb56975513949d03f2"), "user_id" : 5, "iteration#" : 5, "company" : "dbfry", "data" : "dummy data", "occurance" : 5 }
{ "_id" : ObjectId("5677d1cb56975513949d03f3"), "user_id" : 6, "iteration#" : 6, "company" : "dbfry", "data" : "dummy data", "occurance" : 6 }
{ "_id" : ObjectId("5677d1cb56975513949d03f4"), "user_id" : 7, "iteration#" : 7, "company" : "dbfry", "data" : "dummy data", "occurance" : 7 }
{ "_id" : ObjectId("5677d1cb56975513949d03f5"), "user_id" : 8, "iteration#" : 8, "company" : "dbfry", "data" : "dummy data", "occurance" : 8 }
{ "_id" : ObjectId("5677d1cb56975513949d03f6"), "user_id" : 9, "iteration#" : 9, "company" : "dbfry", "data" : "dummy data", "occurance" : 9 }
{ "_id" : ObjectId("5677d1cb56975513949d03f7"), "user_id" : 10, "iteration#" : 10, "company" : "dbfry", "data" : "dummy data", "occurance" : 10 }
{ "_id" : ObjectId("5677d1cb56975513949d03f8"), "user_id" : 11, "iteration#" : 11, "company" : "dbfry", "data" : "dummy data", "occurance" : 11 }
{ "_id" : ObjectId("5677d1cb56975513949d03f9"), "user_id" : 12, "iteration#" : 12, "company" : "dbfry", "data" : "dummy data", "occurance" : 12 }
{ "_id" : ObjectId("5677d1cb56975513949d03fa"), "user_id" : 13, "iteration#" : 13, "company" : "dbfry", "data" : "dummy data", "occurance" : 13 }
{ "_id" : ObjectId("5677d1cb56975513949d03fb"), "user_id" : 14, "iteration#" : 14, "company" : "dbfry", "data" : "dummy data", "occurance" : 14 }
{ "_id" : ObjectId("5677d1cb56975513949d03fc"), "user_id" : 15, "iteration#" : 15, "company" : "dbfry", "data" : "dummy data", "occurance" : 15 }
{ "_id" : ObjectId("5677d1cb56975513949d03fd"), "user_id" : 16, "iteration#" : 16, "company" : "dbfry", "data" : "dummy data", "occurance" : 16 }
{ "_id" : ObjectId("5677d1cb56975513949d03fe"), "user_id" : 17, "iteration#" : 17, "company" : "dbfry", "data" : "dummy data", "occurance" : 17 }
{ "_id" : ObjectId("5677d1cb56975513949d03ff"), "user_id" : 18, "iteration#" : 18, "company" : "dbfry", "data" : "dummy data", "occurance" : 18 }
{ "_id" : ObjectId("5677d1cb56975513949d0400"), "user_id" : 19, "iteration#" : 19, "company" : "dbfry", "data" : "dummy data", "occurance" : 19 }
{ "_id" : ObjectId("5677d1cb56975513949d0401"), "user_id" : 20, "iteration#" : 20, "company" : "dbfry", "data" : "dummy data", "occurance" : 20 }
Type "it" for more
rs3:PRIMARY>
rs3:PRIMARY>
bye
Now, if we run removeShard for rs3 again, the response includes the note "you need to drop or movePrimary these databases", because newdb currently has the draining shard as its primary:

mongos> use admin
switched to db admin
mongos>
mongos>
mongos>
mongos> db.runCommand( { removeShard : "rs3" } )
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(3),
"dbs" : NumberLong(1)
},
"note" : "you need to drop or movePrimary these databases",
"dbsToMove" : [
"newdb"
],
"ok" : 1
}
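The dbsToMove array lists exactly which databases still need a new primary shard before the drain can finish. A rough sketch that moves each of them follows; the destination "rs1" here is an arbitrary choice.

var status = db.runCommand( { removeShard : "rs3" } );
if ( status.dbsToMove ) {
    status.dbsToMove.forEach( function( dbName ) {
        printjson( db.runCommand( { movePrimary : dbName, to : "rs1" } ) );
    } );
}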
mongos>
mongos> db.runCommand( { movePrimary : "newdb", to : "rs1" })
{ "primary " : "rs1:rs1/dbversity:27010,dbversity:27011", "ok" : 1 }
mongos>
mongos>
mongos>
mongos>
mongos> db.runCommand( { removeShard : "rs3" } )
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(3),
"dbs" : NumberLong(0)
},
"ok" : 1
}
mongos>
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("5677c1513fcc7d6d2d23103d")
}
shards:
{ "_id" : "rs1", "host" : "rs1/dbversity:27010,dbversity:27011" }
{ "_id" : "rs2", "host" : "rs2/dbversity:27020,dbversity:27021" }
{ "_id" : "rs3", "draining" : true, "host" : "rs3/dbversity:27030,dbversity:27031" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "shrdb", "partitioned" : true, "primary" : "rs2" }
shrdb.mycol
shard key: { "user_id" : 1 }
chunks:
rs1 5
rs2 5
rs3 3
{ "user_id" : { "$minKey" : 1 } } -->> { "user_id" : 1 } on : rs1 Timestamp(3, 0)
{ "user_id" : 1 } -->> { "user_id" : 4854 } on : rs1 Timestamp(5, 0)
{ "user_id" : 4854 } -->> { "user_id" : 144192 } on : rs1 Timestamp(7, 0)
{ "user_id" : 144192 } -->> { "user_id" : 284215 } on : rs2 Timestamp(6, 1)
{ "user_id" : 284215 } -->> { "user_id" : 420152 } on : rs2 Timestamp(5, 2)
{ "user_id" : 420152 } -->> { "user_id" : 486296 } on : rs2 Timestamp(8, 0)
{ "user_id" : 486296 } -->> { "user_id" : 552441 } on : rs1 Timestamp(9, 0)
{ "user_id" : 552441 } -->> { "user_id" : 618315 } on : rs2 Timestamp(10, 0)
{ "user_id" : 618315 } -->> { "user_id" : 684190 } on : rs1 Timestamp(11, 0)
{ "user_id" : 684190 } -->> { "user_id" : 763142 } on : rs2 Timestamp(12, 0)
{ "user_id" : 763142 } -->> { "user_id" : 842095 } on : rs3 Timestamp(12, 1)
{ "user_id" : 842095 } -->> { "user_id" : 1000000 } on : rs3 Timestamp(11, 5)
{ "user_id" : 1000000 } -->> { "user_id" : { "$maxKey" : 1 } } on : rs3 Timestamp(11, 3)
{ "_id" : "newdb", "partitioned" : false, "primary" : "rs1" }
{ "_id" : "test", "partitioned" : false, "primary" : "rs2" }

mongos>
mongos>

mongos> db.runCommand( { removeShard : "rs3" } )
{
"msg" : "removeshard completed successfully",
"state" : "completed",
"shard" : "rs3",
"ok" : 1
}
mongos>
mongos>
mongos>
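As an extra check, you can confirm that no chunks are still assigned to the removed shard by querying the config database; the count below should be 0:

mongos> db.getSiblingDB("config").chunks.count( { shard : "rs3" } )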
mongos> db.adminCommand( { listShards: 1 } )
{
"shards" : [
{
"_id" : "rs1",
"host" : "rs1/12d4-dl585-04:27010,12d4-dl585-04:27011"
},
{
"_id" : "rs2",
"host" : "rs2/12d4-dl585-04:27020,12d4-dl585-04:27021"
}
],
"ok" : 1
}
mongos>
mongos>
mongos>
mongos>
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("5677c1513fcc7d6d2d23103d")
}
shards:
{ "_id" : "rs1", "host" : "rs1/12d4-dl585-04:27010,12d4-dl585-04:27011" }
{ "_id" : "rs2", "host" : "rs2/12d4-dl585-04:27020,12d4-dl585-04:27021" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "shrdb", "partitioned" : true, "primary" : "rs2" }
shrdb.mycol
shard key: { "user_id" : 1 }
chunks:
rs1 7
rs2 7
{ "user_id" : { "$minKey" : 1 } } -->> { "user_id" : 1 } on : rs1 Timestamp(3, 0)
{ "user_id" : 1 } -->> { "user_id" : 4854 } on : rs1 Timestamp(5, 0)
{ "user_id" : 4854 } -->> { "user_id" : 144192 } on : rs1 Timestamp(7, 0)
{ "user_id" : 144192 } -->> { "user_id" : 284215 } on : rs2 Timestamp(6, 1)
{ "user_id" : 284215 } -->> { "user_id" : 420152 } on : rs2 Timestamp(5, 2)
{ "user_id" : 420152 } -->> { "user_id" : 486296 } on : rs2 Timestamp(8, 0)
{ "user_id" : 486296 } -->> { "user_id" : 552441 } on : rs1 Timestamp(9, 0)
{ "user_id" : 552441 } -->> { "user_id" : 618315 } on : rs2 Timestamp(10, 0)
{ "user_id" : 618315 } -->> { "user_id" : 684190 } on : rs1 Timestamp(11, 0)
{ "user_id" : 684190 } -->> { "user_id" : 763142 } on : rs2 Timestamp(12, 0)
{ "user_id" : 763142 } -->> { "user_id" : 842095 } on : rs1 Timestamp(13, 0)
{ "user_id" : 842095 } -->> { "user_id" : 921047 } on : rs2 Timestamp(14, 0)
{ "user_id" : 921047 } -->> { "user_id" : 1000000 } on : rs1 Timestamp(15, 0)
{ "user_id" : 1000000 } -->> { "user_id" : { "$maxKey" : 1 } } on : rs2 Timestamp(16, 0)
{ "_id" : "newdb", "partitioned" : false, "primary" : "rs1" }
{ "_id" : "test", "partitioned" : false, "primary" : "rs2" }
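rs3 no longer appears in the shards list, and the 14 chunks of shrdb.mycol are now balanced evenly across the remaining shards, 7 on rs1 and 7 on rs2.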

mongos>
mongos>
mongos>
bye
[root@12d4-dl585-04 2]#
[root@12d4-dl585-04 2]# ll -lhtr /data/2/shard3_*
/data/2/shard3_1:
total 7.1G
-rwxr-xr-x 1 root root 6 Dec 21 04:04 mongod.lock
-rw------- 1 root root 512M Dec 21 04:10 local.9
-rw------- 1 root root 512M Dec 21 04:10 local.8
-rw------- 1 root root 512M Dec 21 04:10 local.7
-rw------- 1 root root 512M Dec 21 04:10 local.6
-rw------- 1 root root 512M Dec 21 04:10 local.5
-rw------- 1 root root 512M Dec 21 04:10 local.4
-rw------- 1 root root 512M Dec 21 04:10 local.11
-rw------- 1 root root 512M Dec 21 04:10 local.10
-rw------- 1 root root 512M Dec 21 04:10 local.3
-rw------- 1 root root 512M Dec 21 04:10 local.2
-rw------- 1 root root 16M Dec 21 04:12 local.0
-rw------- 1 root root 512M Dec 21 04:53 shrdb.5
drwxr-xr-x 2 root root 4.0K Dec 21 05:20 _tmp
-rw------- 1 root root 32M Dec 21 06:12 shrdb.1
drwxr-xr-x 2 root root 4.0K Dec 21 06:15 journal
-rw------- 1 root root 16M Dec 21 06:15 local.ns
-rw------- 1 root root 16M Dec 21 06:15 shrdb.ns
-rw------- 1 root root 64M Dec 21 06:15 shrdb.2
-rw------- 1 root root 16M Dec 21 06:15 shrdb.0
-rw------- 1 root root 256M Dec 21 06:15 shrdb.4
-rw------- 1 root root 128M Dec 21 06:15 shrdb.3
-rw------- 1 root root 512M Dec 21 06:15 local.1
-rw------- 1 root root 512M Dec 21 06:15 local.12

/data/2/shard3_2:
total 6.1G
-rwxr-xr-x 1 root root 6 Dec 21 04:04 mongod.lock
-rw------- 1 root root 512M Dec 21 04:12 local.9
-rw------- 1 root root 512M Dec 21 04:12 local.8
-rw------- 1 root root 512M Dec 21 04:12 local.7
-rw------- 1 root root 512M Dec 21 04:12 local.6
-rw------- 1 root root 512M Dec 21 04:12 local.5
-rw------- 1 root root 512M Dec 21 04:12 local.4
-rw------- 1 root root 512M Dec 21 04:12 local.3
-rw------- 1 root root 512M Dec 21 04:12 local.2
-rw------- 1 root root 16M Dec 21 04:12 local.0
-rw------- 1 root root 512M Dec 21 04:53 shrdb.5
drwxr-xr-x 2 root root 4.0K Dec 21 05:20 _tmp
drwxr-xr-x 2 root root 4.0K Dec 21 06:11 journal
-rw------- 1 root root 32M Dec 21 06:12 shrdb.1
-rw------- 1 root root 16M Dec 21 06:15 shrdb.ns
-rw------- 1 root root 16M Dec 21 06:15 local.ns
-rw------- 1 root root 512M Dec 21 06:15 local.10
-rw------- 1 root root 64M Dec 21 06:15 shrdb.2
-rw------- 1 root root 16M Dec 21 06:15 shrdb.0
-rw------- 1 root root 256M Dec 21 06:15 shrdb.4
-rw------- 1 root root 128M Dec 21 06:15 shrdb.3
-rw------- 1 root root 512M Dec 21 06:15 local.1
[root@12d4-dl585-04 2]#
[root@12d4-dl585-04 2]#
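Note that removing a shard from the cluster does not delete its data files: the local.* and shrdb.* files are still on disk under /data/2/shard3_*. This stale copy of shrdb is also what makes the addShard attempt fail further below when we try to re-add rs3.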
[root@12d4-dl585-04 2]# /opt/mongodb/bin/mongo --port 10000
MongoDB shell version: 2.4.11
connecting to: 127.0.0.1:10000/test
mongos>
mongos>
mongos> show dbs
admin (empty)
config 0.046875GB
newdb 0.125GB
shrdb 1.99951171875GB
mongos>
mongos>
mongos> use shrdb
switched to db shrdb
mongos>
mongos>
mongos> show collections
mycol
system.indexes
mongos>
mongos>
mongos> db.mycol.count()
2000000
mongos>
mongos>
bye
[root@12d4-dl585-04 2]#
[root@12d4-dl585-04 2]# /opt/mongodb/bin/mongo --port 27010
MongoDB shell version: 2.4.11
connecting to: 127.0.0.1:27010/test
rs1:PRIMARY>
rs1:PRIMARY> use shrdb
switched to db shrdb
rs1:PRIMARY>
rs1:PRIMARY> db.mycol.count()
868234
rs1:PRIMARY>
rs1:PRIMARY>
bye
[root@12d4-dl585-04 2]# /opt/mongodb/bin/mongo --port 27020
MongoDB shell version: 2.4.11
connecting to: 127.0.0.1:27020/test
rs2:PRIMARY>
rs2:PRIMARY> use shrdb
switched to db shrdb
rs2:PRIMARY>
rs2:PRIMARY> db.mycol.count()
1131766
rs2:PRIMARY>
rs2:PRIMARY>
bye
[root@12d4-dl585-04 2]#
[root@12d4-dl585-04 2]# /opt/mongodb/bin/mongo --port 27030
MongoDB shell version: 2.4.11
connecting to: 127.0.0.1:27030/test
rs3:PRIMARY>
rs3:PRIMARY> use shrdb
switched to db shrdb
rs3:PRIMARY>
rs3:PRIMARY> db.mycol.count()
0
rs3:PRIMARY>
rs3:PRIMARY>
rs3:PRIMARY>
bye
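The per-shard counts add up to the total seen through mongos: 868,234 documents on rs1 plus 1,131,766 on rs2 equals 2,000,000, while rs3 now holds 0, confirming that every chunk was migrated off the removed shard.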
[root@12d4-dl585-04 2]# /opt/mongodb/bin/mongo --port 10000
MongoDB shell version: 2.4.11
connecting to: 127.0.0.1:10000/test
mongos>
mongos>
mongos> sh.addShard("rs3/12d4-dl585-04:27030,12d4-dl585-04:27031")
{
"ok" : 0,
"errmsg" : "can't add shard rs3/12d4-dl585-04:27030,12d4-dl585-04:27031 because a local database 'shrdb' exists in another rs2:rs2/12d4-dl585-04:27020,12d4-dl585-04:27021"
}
mongos>
mongos>

mongos> use admin
switched to db admin
mongos>
mongos>
mongos> db.runCommand( { movePrimary : "shrdb", to : "rs3" })
{
"code" : 13129,
"ok" : 0,
"errmsg" : "exception: can't find shard for: rs3"
}
mongos>
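The movePrimary attempt fails because rs3 is no longer a shard in the cluster, so it cannot be a movePrimary target. One way to resolve the addShard error (not shown in the transcript above, so treat it as an assumption) is to connect directly to the rs3 primary, drop the stale local copy of shrdb, and then re-add the shard:

rs3:PRIMARY> use shrdb
rs3:PRIMARY> db.dropDatabase()

mongos> sh.addShard("rs3/12d4-dl585-04:27030,12d4-dl585-04:27031")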
