Uninstallation process of CDH (Cloudera Hadoop packages)
Here are the uninstallation steps:
1. Stop ALL services:
a. service cloudera-scm-server stop
b. service cloudera-scm-server-db stop
c. service cloudera-scm-agent hard_stop
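Before moving on, it can help to confirm everything actually reports stopped (a quick sanity check using the same service names as above):
# Each of these should report the service as stopped
service cloudera-scm-server status
service cloudera-scm-server-db status
service cloudera-scm-agent status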
2. Uninstall the CDH and Cloudera Manager packages. First, confirm the installed version:
[srinivas@dbversity.com ~]# hadoop version
Hadoop 2.0.0-cdh4.5.0
Subversion file:///var/lib/jenkins/workspace/CDH4.5.0-Packaging-Hadoop/build/cdh4/hadoop/2.0.0-cdh4.5.0/source/hadoop-common-project/hadoop-common -r 30821ec616ee7a21ee8447949b7c6208a8f1e7d8
Compiled by jenkins on Wed Nov 20 14:35:49 PST 2013
From source with checksum 9848b0f85b461913ed63fa19c2b79ccc
This command was run using /data/6/Hadoop-CDH45/hadoop-2.0.0-cdh4.5.0/share/hadoop/common/hadoop-common-2.0.0-cdh4.5.0.jar
[srinivas@dbversity.com ~]#
[srinivas@dbversity.com ~]# rpm -e --allmatches $(rpm -qa | grep -e hadoop -e cloudera -e hue -e oozie -e hbase -e hcatalog -e flume -e hive -e sqoop -e sqoop2 -e pig -e mahout -e webhcat -e bigtop -e whirr -e zookeeper)
[srinivas@dbversity.com ~]#
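Since rpm -e is destructive, it is safer to preview what the grep patterns will match before erasing anything (a read-only sketch using the same patterns as above):
# List the packages the erase command would remove; makes no changes
rpm -qa | grep -e hadoop -e cloudera -e hue -e oozie -e hbase -e hcatalog \
  -e flume -e hive -e sqoop -e sqoop2 -e pig -e mahout \
  -e webhcat -e bigtop -e whirr -e zookeeper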
3. Clean the YUM CACHE
[srinivas@dbversity.com ~]# yum clean all
Repository soe6iiproducts is listed more than once in the configuration
Repository soe6local is listed more than once in the configuration
Cleaning repos: soe-bigdata soe-bigdata-cm soe6gdeproducts soe6local soe6products soe6u5
Cleaning up Everything
[srinivas@dbversity.com ~]#
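After the clean, a quick check that the cache directory has actually been emptied (a simple sanity check; the exact size left over may vary by system):
# The yum cache should be close to empty after "yum clean all"
du -sh /var/cache/yum/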
4. On all Agent hosts, remove all Cloudera Manager data.
[srinivas@dbversity.com ~]#
[srinivas@dbversity.com ~]# rm -Rf /usr/share/{cmf,hue} /var/lib/cloudera* /var/cache/yum/cloudera*
[srinivas@dbversity.com ~]#
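To confirm the removal took effect, you can list the same paths again (each should now report "No such file or directory"):
# Verify the Cloudera Manager data directories are gone
ls -d /usr/share/cmf /usr/share/hue /var/lib/cloudera* 2>&1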
5. On all Agent hosts, kill any running Cloudera Manager and Hadoop processes.
Note: This step should not be necessary if you stopped all the services and the Cloudera Manager agent correctly.
[srinivas@dbversity.com ~]# for u in hdfs mapred cloudera-scm hbase hue zookeeper oozie hive; do sudo kill $(ps -u $u -o pid=); done
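Afterwards, you can verify nothing is still running under the Hadoop and Cloudera Manager service accounts (same user list as the kill loop above):
# Each ps call should list no processes for a fully stopped host
for u in hdfs mapred cloudera-scm hbase hue zookeeper oozie hive; do ps -u $u; done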
6. Remove the Cloudera Manager lock file.
[srinivas@dbversity.com ~]#
[srinivas@dbversity.com ~]# rm /tmp/.scm_prepare_node.lock
rm: remove regular empty file `/tmp/.scm_prepare_node.lock'? y
[srinivas@dbversity.com ~]#
7. Remove data files and directories:
[srinivas@dbversity.com ~]# rm -rf /etc/cloudera-scm-agent;
[srinivas@dbversity.com ~]# rm -rf /dfs;
[srinivas@dbversity.com ~]#
# (Assumes /dfs is where HDFS data is stored.)
# For our main cluster, which uses /data as the storage location, run these commands on each data node:
[srinivas@dbversity.com ~]# rm -rf /data/1/dfs/dn/*;
[srinivas@dbversity.com ~]# rm -rf /data/2/dfs/dn/*;
[srinivas@dbversity.com ~]# rm -rf /data/3/dfs/dn/*;
[srinivas@dbversity.com ~]# rm -rf /data/4/dfs/dn/*;
[srinivas@dbversity.com ~]# rm -rf /data/5/dfs/dn/*;
[srinivas@dbversity.com ~]# rm -rf /data/6/dfs/dn/*;
[srinivas@dbversity.com ~]# rm -rf /data/7/dfs/dn/*;
[srinivas@dbversity.com ~]# rm -rf /data/8/dfs/dn/*;
[srinivas@dbversity.com ~]# rm -rf /data/9/dfs/dn/*;
[srinivas@dbversity.com ~]# rm -rf /data/10/dfs/dn/*;
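Equivalently, if the data disks follow the same /data/1 through /data/10 layout as above, a single loop covers all of them (a sketch; adjust the range to match your mount points):
# Same cleanup as the ten commands above, one mount point per iteration
for i in $(seq 1 10); do rm -rf /data/${i}/dfs/dn/*; done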
If this step is omitted and only done after reinstalling, then you'll also need to re-create the HDFS /tmp directory and the HBase srinivas directory:
HDFS
In CM, go to the HDFS service and refresh the node list;
then run the action to create the /tmp directory;
then, from the physical server command line, run: hadoop namenode -format
HBASE
In CM, go to the HBase service and run the action to create the srinivas directory;
then push the client configuration to all nodes and restart the service.
To remove individual CDH components one at a time instead, run the matching command:
Mahout                                $ yum remove mahout
Whirr                                 $ yum remove whirr
Hue                                   $ yum remove hue
Pig                                   $ yum remove pig
Sqoop                                 $ yum remove sqoop
Flume                                 $ yum remove flume
Oozie client                          $ yum remove oozie-client
Oozie server                          $ yum remove oozie
Hive                                  $ yum remove hive hive-metastore hive-server hive-server2
HBase                                 $ yum remove hadoop-hbase
ZooKeeper server                      $ yum remove hadoop-zookeeper-server
ZooKeeper client                      $ yum remove hadoop-zookeeper
ZooKeeper Failover Controller (ZKFC)  $ yum remove hadoop-hdfs-zkfc
HDFS HA Journal Node                  $ yum remove hadoop-hdfs-journalnode
Hadoop repository packages            $ yum remove cloudera-cdh3
HttpFS                                $ yum remove hadoop-httpfs
Hadoop core packages                  $ yum remove hadoop-0.20
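Once everything above is done, a final check that no CDH or Cloudera Manager packages remain on the host (reusing the same patterns as the earlier rpm query):
# Should return no output on a fully cleaned host
rpm -qa | grep -i -e hadoop -e cloudera -e hbase -e hive -e zookeeper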