CentOS 7 – install a MongoDB cluster with a 3-member replica set

This post describes how to set up a MongoDB database in clustered mode. I chose to deploy a 3-member replica set on 3 different bare-metal servers running CentOS 7.

You will find the different steps to get this configuration running, to secure it in a VLAN, and to activate authentication.

I also added some elements on how to back it up. Feel free to propose enhancements and links in the comments.

The starting point: 3 CentOS 7 servers, each with a 50GB LVM drive mounted on /opt/mongo, mongo accessible on a private VLAN on 192.168.0.X.

  • Add an entry for each member in the /etc/hosts file
192.168.0.6 mongo_1_1
192.168.0.7 mongo_1_2
192.168.0.8 mongo_1_3
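The same three entries must be present on every node. A small sketch that appends them idempotently (HOSTS_FILE is a temp file here for demonstration; point it at /etc/hosts on the real servers):

```shell
# Append the replica-set members to a hosts file, skipping entries
# that are already present (HOSTS_FILE is a temp file for this demo;
# use /etc/hosts on the actual nodes)
HOSTS_FILE=$(mktemp)
for entry in "192.168.0.6 mongo_1_1" "192.168.0.7 mongo_1_2" "192.168.0.8 mongo_1_3"; do
  grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```

Because of the grep guard, the loop can safely be re-run without duplicating lines.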
  • Create the yum repository file /etc/yum.repos.d/mongodb-org-3.6.repo with the following content:

[mongodb-org-3.6]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.6/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc
  • Install mongo

sudo yum install -y mongodb-org

  • Add the SELinux permission (if SELinux is enabled)
semanage port -a -t mongod_port_t -p tcp 27017
  • Create a firewall zone for eth1 on 192.168.0.X
# firewall-cmd --permanent --new-zone=vlan
# firewall-cmd --reload
# firewall-cmd --permanent --zone=vlan --change-interface=eth1

The last line forces the creation of the zone setting in the eth1 configuration script. Check that the zone has been written to the file corresponding to the network card attached to the VLAN, /etc/sysconfig/network-scripts/ifcfg-Wired_connection_1; it must contain the new zone:

...
ZONE=vlan

Then restart the network service:

# systemctl restart network

We should then have the following:

# firewall-cmd --get-active-zones
vlan
  interfaces: eth1
public
  interfaces: eth0
  • Open the firewall ports for mongo
# firewall-cmd --permanent --zone=vlan --add-port=27017/tcp
# firewall-cmd --permanent --zone=vlan --add-service=ssh 
# firewall-cmd --permanent --zone=vlan --add-service=dhcpv6-client
# firewall-cmd --reload

Get the details:
# firewall-cmd --permanent --zone=vlan --list-all
  • Create the data and log directories in /opt/mongo
cd /opt/mongo; mkdir db log; chown mongod:mongod log db
  • Configure /etc/mongod.conf
systemLog:
   destination: file
   logAppend: true
   path: /opt/mongo/log/mongod.log
storage:
   dbPath: /opt/mongo/db
...
net:
   bindIp: 192.168.0.6   # this node's VLAN address
...
replication:
   replSetName: "mongo_1"  # all members must use the same replica-set name
...
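Putting the pieces together, a full /etc/mongod.conf for the first node could look like this (a sketch; adapt bindIp on each member, and the processManagement values are the defaults shipped by the CentOS package):

```yaml
systemLog:
  destination: file
  logAppend: true
  path: /opt/mongo/log/mongod.log
storage:
  dbPath: /opt/mongo/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid
net:
  port: 27017
  bindIp: 192.168.0.6      # use 192.168.0.7 / 192.168.0.8 on the other nodes
replication:
  replSetName: "mongo_1"
```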
  • Restart mongo
# systemctl enable mongod
# systemctl restart mongod
  • Connect to mongo and initiate the replica set (rs.initiate() on the first node only)
# mongo --host 192.168.0.6
> rs.initiate()
> rs.add("mongo_1_2")
> rs.add("mongo_1_3")
> rs.status()
  • Create a registered user to protect the databases against public access

The following commands must be executed on the primary of the cluster, not on a secondary.

# mongo --host 192.168.0.6
> use admin 
> db.createUser(
{  user: "adminUser",
   pwd: "adminPassword",
   roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})

Create a keyfile for the internal authentication between the cluster members

# openssl rand -base64 741 > /opt/mongo/mongodb.key
# chmod 600 /opt/mongo/mongodb.key
# chown mongod:mongod /opt/mongo/mongodb.key

Then copy this keyfile to all the cluster members, with the same path, owner and permissions.

Edit /etc/mongod.conf and activate security by adding the following lines:

security:
  authorization: enabled
  keyFile: /opt/mongo/mongodb.key

Then restart mongod :

# systemctl restart mongod

Log in to mongo and add the clusterAdmin role to the user

# mongo --host 192.168.0.6
> use admin
> db.auth("adminUser", "adminPassword" )
> db.grantRolesToUser("adminUser",["clusterAdmin"])

You can verify your user settings with the following command

> db.system.users.find()

Now we can create a database in the cluster and add a user able to access it (despite its name, myNewCollection is a database here)

> use myNewCollection
> db.createUser({user:"newUser", pwd:"newUserPassword", 
    roles:[{role:"readWrite", db:"myNewCollection"}] })

You can quit and try to log in on the different cluster nodes

> use myNewCollection
> db.auth("newUser", "newUserPassword" )

The last step is to connect from your application. As an example with Spring Boot, if you want to use the MongoDB cluster you can set the property file like this:

spring.data.mongodb.uri=mongodb://newUser:newUserPassword@192.168.0.6:27017,192.168.0.7:27017,192.168.0.8:27017/myNewCollection

The hosts executing the Spring Boot application need an /etc/hosts file that knows the mongodb cluster members:

192.168.0.6 mongo_1_1 
192.168.0.7 mongo_1_2 
192.168.0.8 mongo_1_3
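The driver will discover the topology from the seed list, but you can also pin the replica-set name with the replicaSet URI option. A sketch using the same credentials as above (the option value must match the replSetName from mongod.conf):

```properties
spring.data.mongodb.uri=mongodb://newUser:newUserPassword@mongo_1_1:27017,mongo_1_2:27017,mongo_1_3:27017/myNewCollection?replicaSet=mongo_1
```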

Add NFS Backup

To back up the Mongo data you only need to back up one of the nodes, since each member holds a full copy of the data.

The NFS tools can be installed on all the mongo nodes

# yum -y install nfs-utils bzip2

On the backup server the following services need to be enabled and started

# systemctl enable nfs-server.service
# systemctl start nfs-server.service

We are going to create a backup directory that we will mount on the different mongo nodes

# mkdir -p /opt/backup/storage
# chown nfsnobody:nfsnobody /opt/backup/storage/
# chmod 755 /opt/backup/storage

Now we can configure the filesystem export by adding the following line to the file /etc/exports, then reload the export table with exportfs -ra

/opt/backup/storage        192.168.0.0/24(rw,sync,no_subtree_check)

And authorize the NFS services on the firewalled vlan zone:

# firewall-cmd --permanent --zone=vlan --add-service=nfs
# firewall-cmd --permanent --add-service=mountd --zone=vlan
# firewall-cmd --permanent --add-service=rpc-bind --zone=vlan
# firewall-cmd --reload

On each of the mongo nodes (NFS clients) we create the mount point with mkdir -p /opt/backup/storage, add the following entry in the /etc/fstab file (here backup resolves to the backup server), then mount it with mount /opt/backup/storage

backup:/opt/backup/storage /opt/backup/storage nfs rw,sync,hard,intr 0 0

Now we can back up a mongodb database into the shared file system

mongodump --host mongo_1_3 -u newUser -p 'newUserPassword' --db databaseToBackup --out /opt/backup/storage >/dev/null
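To make the dump run unattended, a cron entry on one node could look like this (a sketch: the schedule and the archive layout are assumptions, and note that % must be escaped in crontabs):

```shell
# /etc/cron.d/mongo-backup -- nightly dump at 02:00 into a date-stamped
# directory on the NFS share, then compressed with bzip2 (tar -j)
0 2 * * * root mongodump --host mongo_1_3 -u newUser -p 'newUserPassword' --db databaseToBackup --out /opt/backup/storage/$(date +\%F) && tar -cjf /opt/backup/storage/$(date +\%F).tar.bz2 -C /opt/backup/storage $(date +\%F)
```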

Restore a backup

To restore a backup, your admin user needs access to the database being restored:

> db.grantRolesToUser("adminUser",[{role:"readWrite", db:"restoredDB"}])

Then you can use the restore tool like this

mongorestore --host 192.168.0.8 -u adminUser -p 'adminPassword' \
      --authenticationDatabase admin --db dbNameToRestore \
      pathToTheBackupFile

 

Basic performance test

The underlying filesystem performance is a critical point for mongo; here are some basic tests you can perform.

  • Read Test
# hdparm -Tt /dev/mapper/pool--mongo-disk0

This reports the read performance, cached and buffered. As a reference, I get 61 MB/s on a virtual disk running on a SATA drive, and 219 MB/s on a high-speed disk on OVH public cloud.

  • Write Test

A write test can be performed with dd, writing 800 MB of zeros:

# dd if=/dev/zero of=/opt/mongo/output bs=8k count=100k; rm -f /opt/mongo/output

As a reference, I got 311 MB/s on my virtual drive and only 104 MB/s on the OVH public cloud high-speed disk.
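Note that plain dd to a file largely measures the page cache; adding conv=fdatasync includes the final flush in the timing and gives a number closer to real disk throughput. A variant of the test (OUT is a temp file in this sketch; use /opt/mongo/output on the node):

```shell
# Write 800 MB of zeros and include the final fdatasync in dd's timing,
# so the reported rate reflects disk throughput rather than cache speed
OUT=$(mktemp)
dd if=/dev/zero of="$OUT" bs=8k count=100k conv=fdatasync
rm -f "$OUT"
```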

Advanced operations on database

Once authentication is activated, some of the most critical operations are no longer authorized, such as executing an $eval (which locks the db during the operation and is used, for example, for collection copies). To grant this right to a user you first need to create a new role:

> db.createRole( 
{ 
  role: "executeFunctions", 
  privileges: [ 
    { resource: { anyResource: true }, 
      actions: [ "anyAction" ] 
    } 
  ], 
  roles: [] 
} )

Then this role can be assigned to a superuser

 > db.grantRolesToUser("adminUser", [ { role: "executeFunctions", db: "admin" } ])

To give rights on all the databases you can also extend the rights of the adminUser like this:

> db.grantRolesToUser("adminUser",[{role:"dbOwner", db:"admin"}])

 
