Enable the root account
Edit /etc/ssh/sshd_config, then restart the service
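The directive that typically needs changing (a sketch; adjust to your own access policy):
PermitRootLogin yes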
systemctl restart sshd.service
Change the hostname to FQDN format (see the reference below)
http://blog.naver.com/yehyang0512/221407967289
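On CentOS 7 the FQDN can be set with hostnamectl; a sketch (run the matching command on each node):
hostnamectl set-hostname hadoop01.co.kr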
/etc/hosts
172.26.11.237 hadoop01.co.kr hadoop01   # master
172.26.10.99 hadoop02.co.kr hadoop02
172.26.3.17 hadoop03.co.kr hadoop03
172.26.10.80 hadoop04.co.kr hadoop04
vi ~/allnodes — list every node, one entry per line (see the example below)
vi ~/nodes — same list, but excluding the master
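Presumably the files just hold the node IPs, e.g.:
~/allnodes:
172.26.11.237
172.26.10.99
172.26.3.17
172.26.10.80
~/nodes:
172.26.10.99
172.26.3.17
172.26.10.80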
yum install -y epel-release
yum install -y pssh
yum install -y clustershell
[root@hadoop01 ~]# pscp.pssh -h ~/nodes /etc/hosts /etc/hosts
[1] 16:34:33 [SUCCESS] 172.26.10.99
[2] 16:34:33 [SUCCESS] 172.26.10.80
[3] 16:34:33 [SUCCESS] 172.26.3.17
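To spot-check the copy, something like this should work (-i prints each host's output inline):
pssh -h ~/nodes -i 'grep hadoop /etc/hosts'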
Disable SELinux
1. Disable immediately at runtime:
[root@hadoop01 ~]# pssh -h ~/allnodes 'setenforce 0'
[1] 15:00:10 [SUCCESS] 172.26.10.99
[2] 15:00:11 [SUCCESS] 172.26.10.80
[3] 15:00:11 [SUCCESS] 172.26.11.237
[4] 15:00:11 [SUCCESS] 172.26.3.17
2. Disable persistently so it survives a reboot:
vi /etc/sysconfig/selinux
Change SELINUX=enforcing to SELINUX=disabled.
[root@hadoop01 ~]# pscp.pssh -h ~/nodes /etc/sysconfig/selinux /etc/sysconfig/selinux
[1] 15:01:33 [SUCCESS] 172.26.10.80
[2] 15:01:33 [SUCCESS] 172.26.10.99
[3] 15:01:33 [SUCCESS] 172.26.3.17
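A quick way to confirm on every node (a sketch):
pssh -h ~/allnodes -i 'getenforce'
Until the reboot at the end this should report Permissive; after the reboot, Disabled.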
Set swappiness
[root@hadoop01 ~]# pssh -h ~/allnodes 'sysctl -w vm.swappiness=0'
[1] 16:45:33 [SUCCESS] 172.26.11.237
[2] 16:45:33 [SUCCESS] 172.26.3.17
[3] 16:45:33 [SUCCESS] 172.26.10.99
[4] 16:45:33 [SUCCESS] 172.26.10.80
[root@hadoop01 ~]# echo 'vm.swappiness=0' >> /etc/sysctl.conf
[root@hadoop01 ~]# pscp.pssh -h ~/nodes /etc/sysctl.conf /etc/sysctl.conf
[1] 16:47:24 [SUCCESS] 172.26.10.99
[2] 16:47:24 [SUCCESS] 172.26.10.80
[3] 16:47:24 [SUCCESS] 172.26.3.17
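To verify the current value on every node (a sketch):
pssh -h ~/allnodes -i 'cat /proc/sys/vm/swappiness'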
Configure transparent_hugepage
[root@hadoop01 ~]# pssh -h ~/allnodes echo never > /sys/kernel/mm/transparent_hugepage/defrag
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
The command fails because the unquoted > is interpreted by the local shell, so the redirection never runs on the remote hosts. The following approach (appending the commands to rc.local) worked instead:
[root@hadoop01 ~]# echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.d/rc.local
[root@hadoop01 ~]# echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.d/rc.local
[root@hadoop01 ~]# echo 'sysctl -w vm.swappiness=1' >> /etc/rc.d/rc.local
Distribute to all nodes
[root@hadoop01 ~]# pscp.pssh -h /root/allnodes /etc/rc.d/rc.local /etc/rc.d/rc.local
[1] 13:47:33 [SUCCESS] 172.26.10.99
[2] 13:47:33 [SUCCESS] 172.26.11.237
[3] 13:47:33 [SUCCESS] 172.26.10.80
[4] 13:47:33 [SUCCESS] 172.26.3.17
[root@hadoop01 ~]# pssh -h /root/allnodes "chmod +x /etc/rc.d/rc.local"
[1] 13:47:56 [SUCCESS] 172.26.10.99
[2] 13:47:56 [SUCCESS] 172.26.10.80
[3] 13:47:56 [SUCCESS] 172.26.11.237
[4] 13:47:56 [SUCCESS] 172.26.3.17
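To apply the THP setting right away without waiting for the reboot, the same commands can be run through pssh as long as the whole command is quoted so the redirection happens on the remote host (a sketch):
pssh -h ~/allnodes "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
pssh -h ~/allnodes "echo never > /sys/kernel/mm/transparent_hugepage/defrag"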
NTP synchronization
[root@hadoop01 ~]# pssh -h ~/allnodes yum install -y ntp
[1] 13:59:15 [SUCCESS] 172.26.11.237
[2] 13:59:18 [SUCCESS] 172.26.10.80
[3] 13:59:18 [SUCCESS] 172.26.3.17
[4] 13:59:22 [SUCCESS] 172.26.10.99
[root@hadoop01 ~]# cat <<EOT >> /etc/ntp.conf
> server 0.kr.pool.ntp.org
> server 3.asia.pool.ntp.org
> server 2.asia.pool.ntp.org
> EOT
[root@hadoop01 ~]# pscp.pssh -h ~/nodes /etc/ntp.conf /etc/ntp.conf
[1] 14:10:12 [SUCCESS] 172.26.10.80
[2] 14:10:12 [SUCCESS] 172.26.10.99
[3] 14:10:13 [SUCCESS] 172.26.3.17
[root@hadoop01 ~]# pssh -h ~/allnodes service ntpd stop
[1] 14:11:14 [SUCCESS] 172.26.10.99
[2] 14:11:14 [SUCCESS] 172.26.10.80
[3] 14:11:14 [SUCCESS] 172.26.3.17
[4] 14:11:14 [SUCCESS] 172.26.11.237
[root@hadoop01 ~]# pssh -h ~/allnodes ntpdate kr.pool.ntp.org
[1] 14:11:51 [SUCCESS] 172.26.10.99
[2] 14:11:51 [SUCCESS] 172.26.3.17
[3] 14:11:51 [SUCCESS] 172.26.10.80
[4] 14:11:51 [SUCCESS] 172.26.11.237
[root@hadoop01 ~]# pssh -h ~/allnodes service ntpd start
[1] 14:12:34 [SUCCESS] 172.26.10.99
[2] 14:12:34 [SUCCESS] 172.26.11.237
[3] 14:12:34 [SUCCESS] 172.26.3.17
[4] 14:12:34 [SUCCESS] 172.26.10.80
[root@hadoop01 ~]# pssh -h ~/allnodes chkconfig ntpd on
[1] 14:13:01 [SUCCESS] 172.26.10.99
[2] 14:13:01 [SUCCESS] 172.26.10.80
[3] 14:13:01 [SUCCESS] 172.26.11.237
[4] 14:13:01 [SUCCESS] 172.26.3.17
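To confirm each node is actually syncing against its configured servers (a sketch):
pssh -h ~/allnodes -i 'ntpq -p'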
Increase the file descriptor limit
[root@hadoop01 ~]# cat <<EOT >> /etc/security/limits.conf
> * hard nofile 131072
> * soft nofile 131072
> root hard nofile 131072
> root soft nofile 131072
> EOT
[root@hadoop01 ~]# pscp.pssh -h ~/nodes /etc/security/limits.conf /etc/security/limits.conf
[1] 14:16:11 [SUCCESS] 172.26.10.80
[2] 14:16:11 [SUCCESS] 172.26.10.99
[3] 14:16:11 [SUCCESS] 172.26.3.17
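The new limit only applies to sessions opened after the change (the reboot below covers it); a quick check from a fresh session (a sketch):
pssh -h ~/allnodes -i 'ulimit -n'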
Reboot all servers
[root@hadoop01 ~]# pssh -h ~/allnodes reboot
Install OpenJDK
yum install java-1.8.0-openjdk-devel
Check the installed OpenJDK path
# which javac
Resolve the javac symlink to find the actual installation path
# readlink -f /bin/javac
[root@hadoop01 ~]# vi /etc/profile
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.275.b01-0.el7_9.x86_64
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH="."
Apply the environment variables immediately, without a reboot
[root@hadoop01 ~]# source /etc/profile
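A quick sanity check afterwards (a sketch):
echo $JAVA_HOME
java -version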
Install OpenJDK on all nodes
[root@hadoop01 ~]# pssh -h ~/nodes "yum install -y java-1.8.0-openjdk-devel"
[1] 14:48:40 [SUCCESS] 172.26.10.99
[2] 14:48:57 [SUCCESS] 172.26.3.17
[3] 14:48:58 [SUCCESS] 172.26.10.80
Copy the environment configuration file
[root@hadoop01 ~]# pscp.pssh -h ~/nodes /etc/profile /etc/profile
[1] 14:50:24 [SUCCESS] 172.26.10.99
[2] 14:50:24 [SUCCESS] 172.26.10.80
[3] 14:50:24 [SUCCESS] 172.26.3.17
Apply the copied configuration (note: source run through pssh only affects that one remote session; /etc/profile is read automatically at each node's next login anyway)
[root@hadoop01 ~]# pssh -h ~/nodes source /etc/profile
[1] 14:51:55 [SUCCESS] 172.26.10.99
[2] 14:51:55 [SUCCESS] 172.26.10.80
[3] 14:51:55 [SUCCESS] 172.26.3.17
Install Cloudera Manager
# wget http://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin
# chmod u+x cloudera-manager-installer.bin
# ./cloudera-manager-installer.bin
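Once the installer finishes, the Cloudera Manager Admin Console should come up on port 7180 of the master (http://hadoop01.co.kr:7180); for CM 5 the default login is admin/admin.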