[MongoDB]: Tag-aware / data-centre-aware sharding

MongoDB supports tagging a range of shard key values to associate that range with a shard or group of shards.
Those shards receive all inserts within the tagged range.

The balancer obeys tagged range associations, which enables the following deployment patterns:

> Isolate a specific subset of data on a specific set of shards.
> Ensure that the most relevant data reside on shards that are geographically closest to the application servers.

This document describes the behavior, operation, and use of tag-aware sharding in MongoDB deployments.
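
At its core, tag-aware sharding needs only two operations: tag one or more shards, then tag a range of shard key values with the same tag. A minimal sketch (the shard name, namespace, and key values below are placeholders, not part of this walkthrough):

mongos> sh.addShardTag("shard0000", "NYC")
mongos> sh.addTagRange("records.users", { zipcode: "10001" }, { zipcode: "11201" }, "NYC")

Once both pieces are in place, the balancer keeps chunks whose shard key values fall in the tagged range on the tagged shard(s).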

Considerations
--------------

Shard key range tags are distinct from replica set member tags.
Hash-based sharding does not support tag-aware sharding.
Shard ranges are always inclusive of the lower value and exclusive of the upper boundary.
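
For example, a range defined like the following (hypothetical namespace and values) includes its lower bound but not its upper bound:

mongos> // covers score >= 0 and score < 50; a document with score exactly 50 is NOT in this range
mongos> sh.addTagRange("test.grades", { score: 0 }, { score: 50 }, "LOW")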

Behavior and Operations
-----------------------

The balancer migrates chunks of documents in a sharded collection to the shards associated with a tag whose shard key range has an upper bound greater than the chunk's lower bound.
During balancing rounds, if the balancer detects that any chunks violate configured tags, the balancer migrates chunks in tagged ranges to shards associated with those tags.

After you configure a tag with a shard key range and associate it with a shard or shards, the cluster may take some time to balance the data among the shards.
This depends on the division of chunks and the current distribution of data in the cluster.
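
If you want to watch the rebalancing happen, you can check whether the balancer is currently active and where a collection's chunks sit at any point in time, for example (using the namespace from the walkthrough below):

mongos> sh.isBalancerRunning()
mongos> use config
mongos> db.chunks.find({ ns: "dbversity.nosql" }, { shard: 1, _id: 0 }).sort({ shard: 1 })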

To walk through this exercise, we first create a three-shard cluster using the script below.
We're running the whole exercise on a single server, with each process on a different port.

[root@dbversity.com bin]# cat shard_creation_script.sh
##### Killing the existing Mongo processes ################
for i in `ps -ef | egrep 'shardsvr|configsvr|replSet|configdb' | grep -v egrep | awk -F" " '{print $2}'`; do kill -9 $i; done
##### Creating Mongo data & log files ################
mkdir -p /opt/mongodb/data /opt/mongodb/logs
rm -rf /opt/mongodb/data/*
rm -rf /opt/mongodb/logs/*
cd /opt/mongodb/data/
mkdir -p shard1_1 shard1_2 shard2_1 shard2_2 shard3_1 shard3_2 config1 config2 config3 arbiter1 arbiter2 arbiter3 router
cd /opt/mongodb/bin
##### Starting the Mongo Config,Shard,Arbiter & Router services ################
## Config Servers #####
numactl --interleave=all ./mongod --configsvr --dbpath /opt/mongodb/data/config1 --logpath /opt/mongodb/logs/config1.log --port 39000 --config /etc/mongod.conf &
numactl --interleave=all ./mongod --configsvr --dbpath /opt/mongodb/data/config2 --logpath /opt/mongodb/logs/config2.log --port 39001 --config /etc/mongod.conf &
numactl --interleave=all ./mongod --configsvr --dbpath /opt/mongodb/data/config3 --logpath /opt/mongodb/logs/config3.log --port 39002 --config /etc/mongod.conf &
## Replica Set 1 ######
numactl --interleave=all ./mongod --shardsvr --replSet rs1 --dbpath /opt/mongodb/data/shard1_1 --logpath /opt/mongodb/logs/shard1_1.log --port 27010 --config /etc/mongod.conf &
numactl --interleave=all ./mongod --shardsvr --replSet rs1 --dbpath /opt/mongodb/data/shard1_2 --logpath /opt/mongodb/logs/shard1_2.log --port 27011 --config /etc/mongod.conf &
## Replica Set 2 ######
numactl --interleave=all ./mongod --shardsvr --replSet rs2 --dbpath /opt/mongodb/data/shard2_1 --logpath /opt/mongodb/logs/shard2_1.log --port 27020 --config /etc/mongod.conf &
numactl --interleave=all ./mongod --shardsvr --replSet rs2 --dbpath /opt/mongodb/data/shard2_2 --logpath /opt/mongodb/logs/shard2_2.log --port 27021 --config /etc/mongod.conf &
## Replica Set 3 ######
numactl --interleave=all ./mongod --shardsvr --replSet rs3 --dbpath /opt/mongodb/data/shard3_1 --logpath /opt/mongodb/logs/shard3_1.log --port 27030 --config /etc/mongod.conf &
numactl --interleave=all ./mongod --shardsvr --replSet rs3 --dbpath /opt/mongodb/data/shard3_2 --logpath /opt/mongodb/logs/shard3_2.log --port 27031 --config /etc/mongod.conf &
## Arbiters ####
numactl --interleave=all ./mongod --replSet rs1 --dbpath /opt/mongodb/data/arbiter1 --logpath /opt/mongodb/logs/arbiter1.log --port 27012 --config /etc/mongod.conf &
numactl --interleave=all ./mongod --replSet rs2 --dbpath /opt/mongodb/data/arbiter2 --logpath /opt/mongodb/logs/arbiter2.log --port 27022 --config /etc/mongod.conf &
numactl --interleave=all ./mongod --replSet rs3 --dbpath /opt/mongodb/data/arbiter3 --logpath /opt/mongodb/logs/arbiter3.log --port 27032 --config /etc/mongod.conf &
sleep 10
echo -e "\n\n Config, Shard, Router & Arbiter services are initiating ... \n\n"
numactl --interleave=all ./mongos --configdb xx.xx.xx.xx:39000,xx.xx.xx.xx:39001,xx.xx.xx.xx:39002 --logpath /opt/mongodb/logs/router.log --port 10000 &
sleep 10
./mongo dbversity.com:27010/admin --eval "rs.initiate()"
./mongo dbversity.com:27020/admin --eval "rs.initiate()"
./mongo dbversity.com:27030/admin --eval "rs.initiate()"
sleep 30
echo -e "\n\n Replica sets are added. \n\n"
./mongo dbversity.com:27010/admin --eval "rs.add(\"dbversity.com:27011\")"
./mongo dbversity.com:27020/admin --eval "rs.add(\"dbversity.com:27021\")"
./mongo dbversity.com:27030/admin --eval "rs.add(\"dbversity.com:27031\")"
./mongo dbversity.com:27010/admin --eval "rs.addArb(\"dbversity.com:27012\")"
./mongo dbversity.com:27020/admin --eval "rs.addArb(\"dbversity.com:27022\")"
./mongo dbversity.com:27030/admin --eval "rs.addArb(\"dbversity.com:27032\")"
sleep 20
./mongo dbversity.com:10000/admin --eval "sh.addShard(\"rs1/dbversity.com:27010,dbversity.com:27011\")"
./mongo dbversity.com:10000/admin --eval "sh.addShard(\"rs2/dbversity.com:27020,dbversity.com:27021\")"
./mongo dbversity.com:10000/admin --eval "sh.addShard(\"rs3/dbversity.com:27030,dbversity.com:27031\")"
[root@dbversity.com bin]#
[root@dbversity.com bin]# sh shard_creation_script.sh
about to fork child process, waiting until server is ready for connections.
forked process: 3908
all output going to: /opt/mongodb/logs/config2.log
about to fork child process, waiting until server is ready for connections.
about to fork child process, waiting until server is ready for connections.
forked process: 3912
forked process: 3914
all output going to: /opt/mongodb/logs/shard2_2.log
about to fork child process, waiting until server is ready for connections.
all output going to: /opt/mongodb/logs/arbiter1.log
about to fork child process, waiting until server is ready for connections.
about to fork child process, waiting until server is ready for connections.
forked process: 3921
forked process: 3923
all output going to: /opt/mongodb/logs/shard2_1.log
about to fork child process, waiting until server is ready for connections.
about to fork child process, waiting until server is ready for connections.
forked process: 3927
all output going to: /opt/mongodb/logs/shard1_2.log
forked process: 3929
forked process: 3930
about to fork child process, waiting until server is ready for connections.
all output going to: /opt/mongodb/logs/shard3_2.log
all output going to: /opt/mongodb/logs/config3.log
all output going to: /opt/mongodb/logs/config1.log
forked process: 3940
all output going to: /opt/mongodb/logs/shard1_1.log
about to fork child process, waiting until server is ready for connections.
forked process: 3944
all output going to: /opt/mongodb/logs/arbiter2.log
about to fork child process, waiting until server is ready for connections.
forked process: 3958
all output going to: /opt/mongodb/logs/arbiter3.log
about to fork child process, waiting until server is ready for connections.
forked process: 3964
all output going to: /opt/mongodb/logs/shard3_1.log
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
child process started successfully, parent exiting
Config, Shard, Router & Arbiter services are initiating …
all output going to: /opt/mongodb/logs/router.log
MongoDB shell version: 2.4.11
connecting to: dbversity.com:27010/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:27020/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:27030/admin
[object Object]
Replica sets are added.
MongoDB shell version: 2.4.11
connecting to: dbversity.com:27010/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:27020/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:27030/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:27010/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:27020/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:27030/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:10000/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:10000/admin
[object Object]
MongoDB shell version: 2.4.11
connecting to: dbversity.com:10000/admin
[object Object]
[root@dbversity.com bin]#
[root@dbversity.com bin]# ps -ef | grep mongo
root 3908 1 0 02:04 ? 00:00:00 ./mongod --configsvr --dbpath /opt/mongodb/data/config2 --logpath /opt/mongodb/logs/config2.log --port 39001 --config /etc/mongod.conf
root 3912 1 0 02:04 ? 00:00:00 ./mongod --replSet rs1 --dbpath /opt/mongodb/data/arbiter1 --logpath /opt/mongodb/logs/arbiter1.log --port 27012 --config /etc/mongod.conf
root 3914 1 0 02:04 ? 00:00:00 ./mongod --shardsvr --replSet rs2 --dbpath /opt/mongodb/data/shard2_2 --logpath /opt/mongodb/logs/shard2_2.log --port 27021 --config /etc/mongod.conf
root 3921 1 0 02:04 ? 00:00:00 ./mongod --shardsvr --replSet rs1 --dbpath /opt/mongodb/data/shard1_2 --logpath /opt/mongodb/logs/shard1_2.log --port 27011 --config /etc/mongod.conf
root 3923 1 0 02:04 ? 00:00:00 ./mongod --shardsvr --replSet rs2 --dbpath /opt/mongodb/data/shard2_1 --logpath /opt/mongodb/logs/shard2_1.log --port 27020 --config /etc/mongod.conf
root 3927 1 0 02:04 ? 00:00:00 ./mongod --configsvr --dbpath /opt/mongodb/data/config1 --logpath /opt/mongodb/logs/config1.log --port 39000 --config /etc/mongod.conf
root 3929 1 0 02:04 ? 00:00:00 ./mongod --configsvr --dbpath /opt/mongodb/data/config3 --logpath /opt/mongodb/logs/config3.log --port 39002 --config /etc/mongod.conf
root 3930 1 0 02:04 ? 00:00:00 ./mongod --shardsvr --replSet rs3 --dbpath /opt/mongodb/data/shard3_2 --logpath /opt/mongodb/logs/shard3_2.log --port 27031 --config /etc/mongod.conf
root 3940 1 0 02:04 ? 00:00:00 ./mongod --shardsvr --replSet rs1 --dbpath /opt/mongodb/data/shard1_1 --logpath /opt/mongodb/logs/shard1_1.log --port 27010 --config /etc/mongod.conf
root 3944 1 0 02:04 ? 00:00:00 ./mongod --replSet rs2 --dbpath /opt/mongodb/data/arbiter2 --logpath /opt/mongodb/logs/arbiter2.log --port 27022 --config /etc/mongod.conf
root 3958 1 0 02:04 ? 00:00:00 ./mongod --replSet rs3 --dbpath /opt/mongodb/data/arbiter3 --logpath /opt/mongodb/logs/arbiter3.log --port 27032 --config /etc/mongod.conf
root 3964 1 0 02:04 ? 00:00:00 ./mongod --shardsvr --replSet rs3 --dbpath /opt/mongodb/data/shard3_1 --logpath /opt/mongodb/logs/shard3_1.log --port 27030 --config /etc/mongod.conf
root 4422 1 0 02:04 pts/0 00:00:00 ./mongos --configdb xx.xx.xx.xx:39000,xx.xx.xx.xx:39001,xx.xx.xx.xx:39002 --logpath /opt/mongodb/logs/router.log --port 10000
root 4945 31457 0 02:05 pts/0 00:00:00 grep mongo
[root@dbversity.com bin]#
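(Optional) Before going through mongos, you could sanity-check that each replica set came up cleanly, reusing the same --eval style as the script; for example:

./mongo dbversity.com:27010/admin --eval "printjson(rs.status())"
./mongo dbversity.com:27020/admin --eval "printjson(rs.status())"
./mongo dbversity.com:27030/admin --eval "printjson(rs.status())"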
[root@dbversity.com bin]# ./mongo --port 10000
MongoDB shell version: 2.4.11
connecting to: 127.0.0.1:10000/test
mongos>
mongos>
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"version" : 3,
"minCompatibleVersion" : 3,
"currentVersion" : 4,
"clusterId" : ObjectId("545b1d6c7532482f2245e98d")
}
shards:
{ "_id" : "rs1", "host" : "rs1/dbversity.com:27010,dbversity.com:27011" }
{ "_id" : "rs2", "host" : "rs2/dbversity.com:27020,dbversity.com:27021" }
{ "_id" : "rs3", "host" : "rs3/dbversity.com:27030,dbversity.com:27031" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
mongos>
mongos> use dbversity
switched to db dbversity
mongos> sh.enableSharding("dbversity")
{ "ok" : 1 }
mongos>
mongos> sh.shardCollection("dbversity.nosql", {mongodb:1});
{ "collectionsharded" : "dbversity.nosql", "ok" : 1 }
mongos> sh.shardCollection("dbversity.rdbms", {oracle:1});
{ "collectionsharded" : "dbversity.rdbms", "ok" : 1 }
mongos> sh.shardCollection("dbversity.newsql", {memsql:1});
{ "collectionsharded" : "dbversity.newsql", "ok" : 1 }
mongos>
mongos> // add data
mongos> for (var i=0; i<100000; i++) { db["nosql"].insert({mongodb: Math.random(), count: i, time: new Date()}); }
mongos> for (var i=0; i<100000; i++) { db["rdbms"].insert({oracle: Math.random(), count: i, time: new Date()}); }
mongos> for (var i=0; i<100000; i++) { db["newsql"].insert({memsql: Math.random(), count: i, time: new Date()}); }
mongos>
mongos> db.nosql.count()
100000
mongos> db.rdbms.count()
100000
mongos> db.newsql.count()
100000
mongos>
mongos> db.nosql.find()
{ “_id” : ObjectId(“545b2364abf3fe56bba4ecfc”), “mongodb” : 0.999929167330265, “count” : 3395, “time” : ISODate(“2014-11-06T07:29:40.783Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4dfd7”), “mongodb” : 0.04660748760215938, “count” : 30, “time” : ISODate(“2014-11-06T07:29:40.599Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4dfb9”), “mongodb” : 0.06472490564920008, “count” : 0, “time” : ISODate(“2014-11-06T07:29:40.596Z”) }
{ “_id” : ObjectId(“545b2366abf3fe56bba52a81”), “mongodb” : 0.9999975571408868, “count” : 19144, “time” : ISODate(“2014-11-06T07:29:42.200Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4dff5”), “mongodb” : 0.06193767231889069, “count” : 60, “time” : ISODate(“2014-11-06T07:29:40.601Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4dfba”), “mongodb” : 0.2638449161313474, “count” : 1, “time” : ISODate(“2014-11-06T07:29:40.596Z”) }
{ “_id” : ObjectId(“545b236babf3fe56bba553f3”), “mongodb” : 0.999967097537592, “count” : 29754, “time” : ISODate(“2014-11-06T07:29:47.303Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4e017”), “mongodb” : 0.058666468830779195, “count” : 94, “time” : ISODate(“2014-11-06T07:29:40.604Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4dfbb”), “mongodb” : 0.6324741698335856, “count” : 2, “time” : ISODate(“2014-11-06T07:29:40.596Z”) }
{ “_id” : ObjectId(“545b236dabf3fe56bba586d9”), “mongodb” : 0.9999306299723685, “count” : 42784, “time” : ISODate(“2014-11-06T07:29:49.724Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4e01a”), “mongodb” : 0.060032202396541834, “count” : 97, “time” : ISODate(“2014-11-06T07:29:40.604Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4dfbc”), “mongodb” : 0.2646367633715272, “count” : 3, “time” : ISODate(“2014-11-06T07:29:40.596Z”) }
{ “_id” : ObjectId(“545b236fabf3fe56bba5dca2”), “mongodb” : 0.9999547791667283, “count” : 64745, “time” : ISODate(“2014-11-06T07:29:51.750Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4e01e”), “mongodb” : 0.0580502413213253, “count” : 101, “time” : ISODate(“2014-11-06T07:29:40.605Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4dfbd”), “mongodb” : 0.801704294513911, “count” : 4, “time” : ISODate(“2014-11-06T07:29:40.596Z”) }
{ “_id” : ObjectId(“545b2371abf3fe56bba5fcec”), “mongodb” : 0.9999664565548301, “count” : 73011, “time” : ISODate(“2014-11-06T07:29:53.814Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4e033”), “mongodb” : 0.048614636762067676, “count” : 122, “time” : ISODate(“2014-11-06T07:29:40.606Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4dfbe”), “mongodb” : 0.8228244979400188, “count” : 5, “time” : ISODate(“2014-11-06T07:29:40.596Z”) }
{ “_id” : ObjectId(“545b2371abf3fe56bba60644”), “mongodb” : 0.9999797125346959, “count” : 75403, “time” : ISODate(“2014-11-06T07:29:53.885Z”) }
{ “_id” : ObjectId(“545b2364abf3fe56bba4e03d”), “mongodb” : 0.04186769598163664, “count” : 132, “time” : ISODate(“2014-11-06T07:29:40.607Z”) }
Type “it” for more
mongos>
mongos> db.rdbms.find()
{ “_id” : ObjectId(“545b238eabf3fe56bba666ad”), “oracle” : 0.020482106832787395, “count” : 84, “time” : ISODate(“2014-11-06T07:30:22.709Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba66659”), “oracle” : 0.4306581960991025, “count” : 0, “time” : ISODate(“2014-11-06T07:30:22.702Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba666c1”), “oracle” : 0.00039984448812901974, “count” : 104, “time” : ISODate(“2014-11-06T07:30:22.711Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba6665a”), “oracle” : 0.20070368680171669, “count” : 1, “time” : ISODate(“2014-11-06T07:30:22.702Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba66700”), “oracle” : 0.0014638984575867653, “count” : 167, “time” : ISODate(“2014-11-06T07:30:22.716Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba6665b”), “oracle” : 0.49023133306764066, “count” : 2, “time” : ISODate(“2014-11-06T07:30:22.702Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba66705”), “oracle” : 0.007067275466397405, “count” : 172, “time” : ISODate(“2014-11-06T07:30:22.717Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba6665c”), “oracle” : 0.09193867188878357, “count” : 3, “time” : ISODate(“2014-11-06T07:30:22.702Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba6671b”), “oracle” : 0.012777357595041394, “count” : 194, “time” : ISODate(“2014-11-06T07:30:22.718Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba6665d”), “oracle” : 0.8412913086358458, “count” : 4, “time” : ISODate(“2014-11-06T07:30:22.703Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba6671f”), “oracle” : 0.03296106238849461, “count” : 198, “time” : ISODate(“2014-11-06T07:30:22.719Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba6665e”), “oracle” : 0.8105593263171613, “count” : 5, “time” : ISODate(“2014-11-06T07:30:22.703Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba66723”), “oracle” : 0.012037476524710655, “count” : 202, “time” : ISODate(“2014-11-06T07:30:22.719Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba6665f”), “oracle” : 0.6317445852328092, “count” : 6, “time” : ISODate(“2014-11-06T07:30:22.703Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba66724”), “oracle” : 0.03140151943080127, “count” : 203, “time” : ISODate(“2014-11-06T07:30:22.719Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba66660”), “oracle” : 0.10321825044229627, “count” : 7, “time” : ISODate(“2014-11-06T07:30:22.703Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba66729”), “oracle” : 0.016137028811499476, “count” : 208, “time” : ISODate(“2014-11-06T07:30:22.720Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba66661”), “oracle” : 0.5830355044454336, “count” : 8, “time” : ISODate(“2014-11-06T07:30:22.703Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba6672b”), “oracle” : 0.019615979166701436, “count” : 210, “time” : ISODate(“2014-11-06T07:30:22.720Z”) }
{ “_id” : ObjectId(“545b238eabf3fe56bba66662”), “oracle” : 0.248803136870265, “count” : 9, “time” : ISODate(“2014-11-06T07:30:22.703Z”) }
Type “it” for more
mongos>
mongos> db.newsql.find()
{ “_id” : ObjectId(“545b23aeabf3fe56bba80c90”), “memsql” : 0.9999854390043765, “count” : 8087, “time” : ISODate(“2014-11-06T07:30:54.941Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ecf9”), “memsql” : 0.028616901952773333, “count” : 0, “time” : ISODate(“2014-11-06T07:30:54.487Z”) }
{ “_id” : ObjectId(“545b23afabf3fe56bba81ba1”), “memsql” : 0.9999952062498778, “count” : 11944, “time” : ISODate(“2014-11-06T07:30:55.074Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ecfa”), “memsql” : 0.5529389099683613, “count” : 1, “time” : ISODate(“2014-11-06T07:30:54.487Z”) }
{ “_id” : ObjectId(“545b23b5abf3fe56bba8ba96”), “memsql” : 0.9999968854244798, “count” : 52637, “time” : ISODate(“2014-11-06T07:31:01.764Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ecfb”), “memsql” : 0.6903098535258323, “count” : 2, “time” : ISODate(“2014-11-06T07:30:54.487Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ecfc”), “memsql” : 0.8088215559255332, “count” : 3, “time” : ISODate(“2014-11-06T07:30:54.487Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ecfd”), “memsql” : 0.08257138938643038, “count” : 4, “time” : ISODate(“2014-11-06T07:30:54.487Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ecfe”), “memsql” : 0.10492610442452133, “count” : 5, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ecff”), “memsql” : 0.867301288060844, “count” : 6, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed00”), “memsql” : 0.2785170767456293, “count” : 7, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed01”), “memsql” : 0.7362539491150528, “count” : 8, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed02”), “memsql” : 0.9679773005191237, “count” : 9, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed03”), “memsql” : 0.5766658072825521, “count” : 10, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed04”), “memsql” : 0.8718549513723701, “count” : 11, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed05”), “memsql” : 0.7328023710288107, “count” : 12, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed06”), “memsql” : 0.5615942764561623, “count” : 13, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed07”), “memsql” : 0.49184474581852555, “count” : 14, “time” : ISODate(“2014-11-06T07:30:54.488Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed08”), “memsql” : 0.3296324305702001, “count” : 15, “time” : ISODate(“2014-11-06T07:30:54.489Z”) }
{ “_id” : ObjectId(“545b23aeabf3fe56bba7ed09”), “memsql” : 0.10970728122629225, “count” : 16, “time” : ISODate(“2014-11-06T07:30:54.489Z”) }
Type “it” for more
mongos>
Now we have three shards and three sharded collections in the dbversity database. If you look at where the chunks are (in the chunks collection of the config database), you should see that they're spread fairly evenly across the shards:

mongos> db.chunks.find({ns: "dbversity.nosql"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs1" }
{ "shard" : "rs1" }
{ "shard" : "rs1" }
mongos>
mongos> db.chunks.find({ns: "dbversity.rdbms"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs2" }
{ "shard" : "rs2" }
{ "shard" : "rs2" }
mongos>
mongos> db.chunks.find({ns: "dbversity.newsql"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs3" }
{ "shard" : "rs3" }
{ "shard" : "rs3" }
mongos>
Our goal:

Shard   Namespace
rs1     "dbversity.nosql"
rs2     "dbversity.rdbms"
rs3     "dbversity.newsql"

To accomplish this, we'll use tags.

A tag describes a property of a shard, and it can be any property (they're very flexible). You might tag a shard as "fast", "slow", "east coast", or "rackspace".
In this example, we want to mark each shard as belonging to a particular collection in the dbversity database, so we'll use short nicknames for the collections as tags.

mongos>
mongos> sh.addShardTag("rs1", "MON")
mongos> sh.addShardTag("rs2", "ORA")
mongos> sh.addShardTag("rs3", "NEW")
mongos>
mongos>
mongos> db.shards.find()
{ "_id" : "rs1", "host" : "rs1/dbversity.com:27010,dbversity.com:27011", "tags" : [ "MON" ] }
{ "_id" : "rs2", "host" : "rs2/dbversity.com:27020,dbversity.com:27021", "tags" : [ "ORA" ] }
{ "_id" : "rs3", "host" : "rs3/dbversity.com:27030,dbversity.com:27031", "tags" : [ "NEW" ] }
mongos>

This says, "put any chunks tagged 'MON' on rs1."
The second thing we have to do is make a rule: "For all chunks created in the dbversity.nosql collection, give them the tag 'MON'." To do this, we can use the sh.addTagRange helper:

mongos> sh.addTagRange("dbversity.nosql", {mongodb:MinKey}, {mongodb:MaxKey}, "MON")

This says, "Mark every chunk in dbversity.nosql with the 'MON' tag" (MinKey is negative infinity, MaxKey is positive infinity, so all of the chunks fall in this range).
Now let’s do the same thing for the other two collections:

mongos> sh.addTagRange("dbversity.rdbms", {oracle:MinKey}, {oracle:MaxKey}, "ORA")
mongos> sh.addTagRange("dbversity.newsql", {memsql:MinKey}, {memsql:MaxKey}, "NEW")

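If you want to confirm the ranges were recorded, they live in the tags collection of the config database; a quick check looks like this:

mongos> use config
mongos> db.tags.find()

Each document shows the namespace, the min and max of the range, and the tag it maps to.
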
Now wait a couple of minutes (it takes a little while for the balancer to act), then switch to the config database and look at the chunks for these collections:

use config

mongos> db.chunks.find({ns: "dbversity.nosql"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs1" }
{ "shard" : "rs1" }
{ "shard" : "rs1" }
mongos>
mongos> db.chunks.find({ns: "dbversity.rdbms"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs2" }
{ "shard" : "rs2" }
{ "shard" : "rs2" }
mongos>
mongos> db.chunks.find({ns: "dbversity.newsql"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs3" }
{ "shard" : "rs3" }
{ "shard" : "rs3" }
mongos>

On disk, each shard's data directory now holds the dbversity data files:

/opt/mongodb/data/shard2_1:
total 1.3G
-rwxr-xr-x 1 root root 5 Nov 6 02:04 mongod.lock
-rw------- 1 root root 64M Nov 6 02:05 local.0
drwxr-xr-x 2 root root 4.0K Nov 6 02:28 journal
-rw------- 1 root root 128M Nov 6 02:29 dbversity.1
drwxr-xr-x 2 root root 4.0K Nov 6 02:30 _tmp
-rw------- 1 root root 16M Nov 6 02:53 local.ns
-rw------- 1 root root 16M Nov 6 02:53 dbversity.ns
-rw------- 1 root root 64M Nov 6 02:53 dbversity.0
-rw------- 1 root root 1.0G Nov 6 02:53 local.1

/opt/mongodb/data/shard2_2:
total 1.3G
-rwxr-xr-x 1 root root 5 Nov 6 02:04 mongod.lock
-rw------- 1 root root 64M Nov 6 02:05 local.0
drwxr-xr-x 2 root root 4.0K Nov 6 02:28 journal
-rw------- 1 root root 128M Nov 6 02:29 dbversity.1
drwxr-xr-x 2 root root 4.0K Nov 6 02:30 _tmp
-rw------- 1 root root 16M Nov 6 02:53 local.ns
-rw------- 1 root root 1.0G Nov 6 02:53 local.1
-rw------- 1 root root 16M Nov 6 02:53 dbversity.ns
-rw------- 1 root root 64M Nov 6 02:53 dbversity.0

/opt/mongodb/data/shard1_1:
total 1.6G
-rwxr-xr-x 1 root root 5 Nov 6 02:04 mongod.lock
-rw------- 1 root root 64M Nov 6 02:05 local.0
drwxr-xr-x 2 root root 4.0K Nov 6 02:28 journal
-rw------- 1 root root 256M Nov 6 02:30 dbversity.2
drwxr-xr-x 2 root root 4.0K Nov 6 02:30 _tmp
-rw------- 1 root root 16M Nov 6 02:57 dbversity.ns
-rw------- 1 root root 16M Nov 6 02:57 local.ns
-rw------- 1 root root 128M Nov 6 02:57 dbversity.1
-rw------- 1 root root 64M Nov 6 02:57 dbversity.0
-rw------- 1 root root 1.0G Nov 6 02:57 local.1

/opt/mongodb/data/shard1_2:
total 1.6G
-rwxr-xr-x 1 root root 5 Nov 6 02:04 mongod.lock
-rw------- 1 root root 64M Nov 6 02:05 local.0
drwxr-xr-x 2 root root 4.0K Nov 6 02:28 journal
-rw------- 1 root root 256M Nov 6 02:30 dbversity.2
drwxr-xr-x 2 root root 4.0K Nov 6 02:30 _tmp
-rw------- 1 root root 16M Nov 6 02:57 dbversity.ns
-rw------- 1 root root 16M Nov 6 02:57 local.ns
-rw------- 1 root root 128M Nov 6 02:57 dbversity.1
-rw------- 1 root root 64M Nov 6 02:57 dbversity.0
-rw------- 1 root root 1.0G Nov 6 02:57 local.1

/opt/mongodb/data/shard3_1:
total 1.6G
-rwxr-xr-x 1 root root 5 Nov 6 02:04 mongod.lock
-rw------- 1 root root 64M Nov 6 02:05 local.0
drwxr-xr-x 2 root root 4.0K Nov 6 02:28 journal
-rw------- 1 root root 256M Nov 6 02:53 dbversity.2
drwxr-xr-x 2 root root 4.0K Nov 6 02:53 _tmp
-rw------- 1 root root 16M Nov 6 02:53 local.ns
-rw------- 1 root root 16M Nov 6 02:53 dbversity.ns
-rw------- 1 root root 128M Nov 6 02:53 dbversity.1
-rw------- 1 root root 1.0G Nov 6 02:53 local.1
-rw------- 1 root root 64M Nov 6 02:53 dbversity.0

/opt/mongodb/data/shard3_2:
total 1.6G
-rwxr-xr-x 1 root root 5 Nov 6 02:04 mongod.lock
-rw------- 1 root root 64M Nov 6 02:05 local.0
drwxr-xr-x 2 root root 4.0K Nov 6 02:28 journal
-rw------- 1 root root 256M Nov 6 02:53 dbversity.2
drwxr-xr-x 2 root root 4.0K Nov 6 02:53 _tmp
-rw------- 1 root root 16M Nov 6 02:53 local.ns
-rw------- 1 root root 16M Nov 6 02:53 dbversity.ns
-rw------- 1 root root 128M Nov 6 02:53 dbversity.1
-rw------- 1 root root 1.0G Nov 6 02:53 local.1
-rw------- 1 root root 64M Nov 6 02:53 dbversity.0

[root@dbversity.com bin]#
Scaling with Tags
-----------------
Suppose the rdbms data no longer fits comfortably under this arrangement. We can move the nosql and newsql collections onto a single shard and expand rdbms ("ORA") across two shards simply by manipulating the tags, as shown below.

mongos>
mongos>
mongos> db.shards.find()
{ "_id" : "rs1", "host" : "rs1/dbversity.com:27010,dbversity.com:27011", "tags" : [ "MON" ] }
{ "_id" : "rs2", "host" : "rs2/dbversity.com:27020,dbversity.com:27021", "tags" : [ "ORA" ] }
{ "_id" : "rs3", "host" : "rs3/dbversity.com:27030,dbversity.com:27031", "tags" : [ "NEW" ] }
mongos>
// move newsql to rs1
mongos> sh.addShardTag("rs1", "NEW")
mongos> sh.removeShardTag("rs3", "NEW")
mongos>
mongos> db.shards.find()
{ "_id" : "rs2", "host" : "rs2/dbversity.com:27020,dbversity.com:27021", "tags" : [ "ORA" ] }
{ "_id" : "rs3", "host" : "rs3/dbversity.com:27030,dbversity.com:27031", "tags" : [ ] }
{ "_id" : "rs1", "host" : "rs1/dbversity.com:27010,dbversity.com:27011", "tags" : [ "MON", "NEW" ] }
mongos>
mongos> // expand rdbms to rs3
mongos> sh.addShardTag("rs3", "ORA")
mongos>
mongos> db.shards.find()
{ "_id" : "rs2", "host" : "rs2/dbversity.com:27020,dbversity.com:27021", "tags" : [ "ORA" ] }
{ "_id" : "rs3", "host" : "rs3/dbversity.com:27030,dbversity.com:27031", "tags" : [ "ORA" ] }
{ "_id" : "rs1", "host" : "rs1/dbversity.com:27010,dbversity.com:27011", "tags" : [ "MON", "NEW" ] }
mongos>

Now if you wait a couple of minutes and look at the chunks again, you'll see that the rdbms collection is distributed across two shards while the other two collections sit on rs1.

mongos> db.chunks.find({ns: "dbversity.rdbms"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs2" }
{ "shard" : "rs2" }
{ "shard" : "rs2" }
mongos>
mongos>
mongos> db.chunks.find({ns: "dbversity.nosql"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs1" }
{ "shard" : "rs1" }
{ "shard" : "rs1" }
mongos> db.chunks.find({ns: "dbversity.newsql"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs1" }
{ "shard" : "rs1" }
{ "shard" : "rs1" }
mongos>
mongos> db.chunks.find({ns: "dbversity.rdbms"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs2" }
{ "shard" : "rs2" }
{ "shard" : "rs2" }
mongos>
mongos> sh.isBalancerRunning()
true
mongos>
mongos> sh.isBalancerRunning()
false
mongos> db.chunks.find({ns: "dbversity.rdbms"}, {shard:1, _id:0}).sort({shard:1})
{ "shard" : "rs2" }
{ "shard" : "rs3" }
{ "shard" : "rs2" }
mongos>
Suppose instead you want one shard to act as the "fast" shard and another as the "slow" one.
Then we can divide up the traffic: send 50% of ORA's writes to an SSD-backed shard and 50% to a spinning-disk shard.
First, we'll add tags to the shards describing them:

mongos> sh.addShardTag("rs2", "spinning")
mongos> sh.addShardTag("rs3", "ssd")

The value of the "oracle" field is between 0 and 1, so we want to say, "If oracle < .5, send this to the spinning disk. If oracle >= .5, send it to the SSD."

mongos> sh.addTagRange("dbversity.rdbms", {oracle:MinKey}, {oracle:.5}, "spinning")
mongos> sh.addTagRange("dbversity.rdbms", {oracle:.5}, {oracle:MaxKey}, "ssd")
mongos>
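
As before, the new ranges are stored in config.tags, and once the balancer has had a chance to run, the rdbms chunks should end up split between the two shards. A quick verification, reusing the commands shown earlier:

mongos> use config
mongos> db.tags.find({ns: "dbversity.rdbms"})
mongos> db.chunks.find({ns: "dbversity.rdbms"}, {shard:1, _id:0}).sort({shard:1})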
