Note: the public IP, virtual IP, and SCAN IP must all be on the same subnet.
Each of my virtual machines has two network adapters: one NAT (192.168.88.X) and one host-only (192.168.94.X).
node1.localdomain        node1         public ip   192.168.88.191
node1-vip.localdomain    node1-vip     virtual ip  192.168.88.193
node1-priv.localdomain   node1-priv    private ip  192.168.94.11
node2.localdomain        node2         public ip   192.168.88.192
node2-vip.localdomain    node2-vip     virtual ip  192.168.88.194
node2-priv.localdomain   node2-priv    private ip  192.168.94.12
scan-cluster.localdomain scan-cluster  SCAN ip     192.168.88.203
dg.localdomain                                     192.168.88.212
DNS server ip: 192.168.88.11
2. Install Oracle Linux 6.10
The installation itself is omitted here. During installation, configure the public IP and private IP on node1 and node2, and a single IP on dg.
After installation, verify that node1 can ping node2 and dg, and that node2 can ping node1 and dg.
On node1:
ping 192.168.88.192
ping 192.168.94.12
ping 192.168.88.212
On node2:
ping 192.168.88.191
ping 192.168.94.11
ping 192.168.88.212
3. Configure hostnames and /etc/hosts
Configure the same /etc/hosts entries on both node1 and node2:
127.0.0.1   localhost
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#node1
192.168.88.191  node1.localdomain       node1
192.168.88.193  node1-vip.localdomain   node1-vip
192.168.94.11   node1-priv.localdomain  node1-priv
#node2
192.168.88.192  node2.localdomain       node2
192.168.88.194  node2-vip.localdomain   node2-vip
192.168.94.12   node2-priv.localdomain  node2-priv
#scan-ip
192.168.88.203  scan-cluster.localdomain scan-cluster
Test:
On node1:
ping node2
ping node2-priv
On node2:
ping node1
ping node1-priv
4. Install and configure the DNS server (192.168.88.11)
Install the DNS packages:
[root@feiht Packages]# rpm -ivh bind-9.8.2-0.68.rc1.el6.x86_64.rpm
warning: bind-9.8.2-0.68.rc1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:bind                   ########################################### [100%]
[root@feiht Packages]# rpm -ivh bind-chroot-9.8.2-0.68.rc1.el6.x86_64.rpm
warning: bind-chroot-9.8.2-0.68.rc1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:bind-chroot            ########################################### [100%]
Configure /etc/named.conf:
Notes:
Change every 127.0.0.1 and localhost in this file to any, keeping a space on each side of the value.
Back up the original file first, and use the -p option so that file ownership and permissions are preserved; otherwise, if you later restore a backup with the wrong permissions, the DNS service will fail to start.
[root@feiht /]# cd /etc
[root@feiht etc]# cp -p named.conf named.conf.bak
After modification:
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { any; };
        recursion yes;

        dnssec-enable yes;
        dnssec-validation yes;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";

        managed-keys-directory "/var/named/dynamic";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
Configure /etc/named.rfc1912.zones:
[root@feiht /]# cd /etc
[root@feiht etc]# cp -p named.rfc1912.zones named.rfc1912.zones.bak
Append the following to the end of /etc/named.rfc1912.zones:
zone "localdomain" IN {
        type master;
        file "localdomain.zone";
        allow-update { none; };
};

zone "88.168.192.in-addr.arpa" IN {
        type master;
        file "88.168.192.in-addr.arpa";
        allow-update { none; };
};
The file after modification:
zone "localhost.localdomain" IN {
        type master;
        file "named.localhost";
        allow-update { none; };
};

zone "localdomain" IN {
        type master;
        file "localdomain.zone";
        allow-update { none; };
};

zone "localhost" IN {
        type master;
        file "named.localhost";
        allow-update { none; };
};

zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
        type master;
        file "named.loopback";
        allow-update { none; };
};

zone "1.0.0.127.in-addr.arpa" IN {
        type master;
        file "named.loopback";
        allow-update { none; };
};

zone "0.in-addr.arpa" IN {
        type master;
        file "named.empty";
        allow-update { none; };
};

zone "88.168.192.in-addr.arpa" IN {
        type master;
        file "88.168.192.in-addr.arpa";
        allow-update { none; };
};
Configure the forward and reverse zone database files:
[root@feiht ~]# cd /var/named/
Create the forward and reverse zone files:
[root@feiht named]# cp -p named.localhost localdomain.zone
[root@feiht named]# cp -p named.localhost 88.168.192.in-addr.arpa
Append the following to the end of the forward zone file localdomain.zone:
scan-cluster IN A 192.168.88.203
After modification:
$TTL 1D
@       IN SOA  @ rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1
scan-cluster    A       192.168.88.203
Append the following to the end of the reverse zone database file 88.168.192.in-addr.arpa:
203 IN PTR scan-cluster.localdomain.
After modification:
$TTL 1D
@       IN SOA  @ rname.invalid. (
                                        1997022700      ; serial
                                        28800           ; refresh
                                        1400            ; retry
                                        3600000         ; expire
                                        86400 )         ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1
203     IN      PTR     scan-cluster.localdomain.
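A reverse-zone record name is simply the IP address's octets reversed under in-addr.arpa, which is why 192.168.88.203 appears as record 203 in the zone 88.168.192.in-addr.arpa. A quick illustrative check (not part of the setup itself):

```python
def ptr_name(ip: str) -> str:
    """Build the full PTR record name for an IPv4 address."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

# The SCAN IP used in this guide:
print(ptr_name("192.168.88.203"))  # 203.88.168.192.in-addr.arpa
```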
If you hit permission problems (I forgot to capture a screenshot of the exact error), change the ownership of both zone database files to named:
[root@feiht named]# chown -R named:named localdomain.zone
[root@feiht named]# chown -R named:named 88.168.192.in-addr.arpa
Edit /etc/resolv.conf on the DNS server, then make it immutable so it cannot be rewritten automatically:
[root@feiht etc]# cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.88.11
[root@feiht named]# chattr +i /etc/resolv.conf
Turn off the firewall on the DNS server:
[root@feiht etc]# service iptables stop
[root@oradb ~]# chkconfig iptables off
Start the DNS service:
[root@feiht named]# /etc/rc.d/init.d/named status
[root@feiht named]# /etc/rc.d/init.d/named start
[root@feiht named]# /etc/rc.d/init.d/named stop
[root@feiht named]# /etc/rc.d/init.d/named restart
Then add the following configuration to /etc/resolv.conf on each RAC node, node1 and node2:
search localdomain
nameserver 192.168.88.11
Verify on node1 that the SCAN IP resolves:
[root@node1 etc]# nslookup 192.168.88.203
Server:         192.168.88.11
Address:        192.168.88.11#53

203.88.168.192.in-addr.arpa     name = scan-cluster.localdomain.

[root@node1 etc]# nslookup scan-cluster.localdomain
Server:         192.168.88.11
Address:        192.168.88.11#53

Name:   scan-cluster.localdomain
Address: 192.168.88.203

[root@node1 etc]# nslookup scan-cluster
Server:         192.168.88.11
Address:        192.168.88.11#53

Name:   scan-cluster.localdomain
Address: 192.168.88.203

Test node2 in the same way.
5. Pre-installation preparation
5.1 Create users, set passwords, and edit user profiles on node1 and node2
User and group plan:
GroupName  GroupID  GroupInfo        OracleUser(1101)  GridUser(1100)
oinstall   1000     Inventory Group  Y                 Y
dba        1300     OSDBA Group      Y
oper       1301     OSOPER Group     Y
asmadmin   1200     OSASM                              Y
asmdba     1201     OSDBA for ASM    Y                 Y
asmoper    1202     OSOPER for ASM                     Y
Shell script (node1):
Note: when running this script on node2, change the grid user's ORACLE_SID to +ASM2, the oracle user's ORACLE_SID to devdb2, and the ORACLE_HOSTNAME variable to node2.localdomain.
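The three per-node differences can also be applied with sed instead of editing by hand. A sketch that demonstrates the substitutions on throwaway copies in /tmp (the real targets would be /home/grid/.bash_profile and /home/oracle/.bash_profile on node2):

```shell
# Illustrative copies; on node2 point these at the real .bash_profile files.
printf 'export ORACLE_SID=+ASM1\n' > /tmp/grid_profile
printf 'export ORACLE_SID=devdb1\nexport ORACLE_HOSTNAME=node1.localdomain\n' > /tmp/oracle_profile

# grid user: +ASM1 -> +ASM2
sed -i 's/+ASM1/+ASM2/' /tmp/grid_profile
# oracle user: devdb1 -> devdb2, node1 -> node2
sed -i -e 's/devdb1/devdb2/' -e 's/node1\.localdomain/node2.localdomain/' /tmp/oracle_profile

cat /tmp/grid_profile /tmp/oracle_profile
```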
echo "Now create 6 groups named 'oinstall','dba','asmadmin','asmdba','asmoper','oper'"
echo "Plus 2 users named 'oracle','grid', also setting their environment"
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "grid Infrastructure Owner" grid
echo "grid" | passwd --stdin grid
echo 'export PS1="`/bin/hostname -s`-> "' >> /home/grid/.bash_profile
echo "export TMP=/tmp" >> /home/grid/.bash_profile
echo 'export TMPDIR=$TMP' >> /home/grid/.bash_profile
echo "export ORACLE_SID=+ASM1" >> /home/grid/.bash_profile
echo "export ORACLE_BASE=/u01/app/grid" >> /home/grid/.bash_profile
echo "export ORACLE_HOME=/u01/app/11.2.0/grid" >> /home/grid/.bash_profile
echo "export ORACLE_TERM=xterm" >> /home/grid/.bash_profile
echo "export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'" >> /home/grid/.bash_profile
echo 'export TNS_ADMIN=$ORACLE_HOME/network/admin' >> /home/grid/.bash_profile
echo 'export PATH=/usr/sbin:$PATH' >> /home/grid/.bash_profile
echo 'export PATH=$ORACLE_HOME/bin:$PATH' >> /home/grid/.bash_profile
echo 'export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib' >> /home/grid/.bash_profile
echo 'export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib' >> /home/grid/.bash_profile
echo "export EDITOR=vi" >> /home/grid/.bash_profile
echo "export LANG=en_US" >> /home/grid/.bash_profile
echo "export NLS_LANG=AMERICAN_AMERICA.AL32UTF8" >> /home/grid/.bash_profile
echo "umask 022" >> /home/grid/.bash_profile
groupadd -g 1300 dba
groupadd -g 1301 oper
useradd -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
echo "oracle" | passwd --stdin oracle
echo 'export PS1="`/bin/hostname -s`-> "' >> /home/oracle/.bash_profile
echo "export TMP=/tmp" >> /home/oracle/.bash_profile
echo 'export TMPDIR=$TMP' >> /home/oracle/.bash_profile
echo "export ORACLE_HOSTNAME=node1.localdomain" >> /home/oracle/.bash_profile
echo "export ORACLE_SID=devdb1" >> /home/oracle/.bash_profile
echo "export ORACLE_BASE=/u01/app/oracle" >> /home/oracle/.bash_profile
echo 'export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1' >> /home/oracle/.bash_profile
echo "export ORACLE_UNQNAME=devdb" >> /home/oracle/.bash_profile
echo 'export TNS_ADMIN=$ORACLE_HOME/network/admin' >> /home/oracle/.bash_profile
echo "export ORACLE_TERM=xterm" >> /home/oracle/.bash_profile
echo 'export PATH=/usr/sbin:$PATH' >> /home/oracle/.bash_profile
echo 'export PATH=$ORACLE_HOME/bin:$PATH' >> /home/oracle/.bash_profile
echo 'export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib' >> /home/oracle/.bash_profile
echo 'export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib' >> /home/oracle/.bash_profile
echo "export EDITOR=vi" >> /home/oracle/.bash_profile
echo "export LANG=en_US" >> /home/oracle/.bash_profile
echo "export NLS_LANG=AMERICAN_AMERICA.AL32UTF8" >> /home/oracle/.bash_profile
echo "export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'" >> /home/oracle/.bash_profile
echo "umask 022" >> /home/oracle/.bash_profile
echo "The groups and users have been created"
echo "The environment for grid and oracle has also been set successfully"
Check that the users and their home directories were created:
[root@node1 shell]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
[root@node1 shell]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
[root@node1 shell]# groups oracle
oracle : oinstall asmdba dba oper
[root@node1 shell]# groups grid
grid : oinstall asmadmin asmdba asmoper
[root@node1 home]# ll /home
total 8
drwx------. 3 grid   oinstall 4096 Feb  5 17:18 grid
drwx------. 3 oracle oinstall 4096 Feb  5 17:18 oracle
5.2 Create directories and set ownership and permissions
Directory and environment plan:
Environment Variable  Grid User             Oracle User
ORACLE_BASE           /u01/app/grid         /u01/app/oracle
ORACLE_HOME           /u01/app/11.2.0/grid  /u01/app/oracle/product/11.2.0/db_1
ORACLE_SID [node1]    +ASM1                 devdb1
ORACLE_SID [node2]    +ASM2                 devdb2
Shell script:
echo "Now create the necessary directories for the oracle and grid users and set their ownership..."
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0
chmod -R 775 /u01
echo "The directories for the oracle and grid users have been created and their ownership set"
5.3 Edit /etc/security/limits.conf to configure shell limits for the oracle and grid users
Shell script:
echo "Now modify /etc/security/limits.conf, backing it up first as /etc/security/limits.conf.bak"
cp /etc/security/limits.conf /etc/security/limits.conf.bak
echo "oracle soft nproc 2047" >> /etc/security/limits.conf
echo "oracle hard nproc 16384" >> /etc/security/limits.conf
echo "oracle soft nofile 1024" >> /etc/security/limits.conf
echo "oracle hard nofile 65536" >> /etc/security/limits.conf
echo "grid soft nproc 2047" >> /etc/security/limits.conf
echo "grid hard nproc 16384" >> /etc/security/limits.conf
echo "grid soft nofile 1024" >> /etc/security/limits.conf
echo "grid hard nofile 65536" >> /etc/security/limits.conf
echo "Modifying /etc/security/limits.conf succeeded."
5.4 Edit /etc/pam.d/login
Shell script:
echo "Now modify /etc/pam.d/login, with a backup named /etc/pam.d/login.bak"
cp /etc/pam.d/login /etc/pam.d/login.bak
echo "session required /lib/security/pam_limits.so" >> /etc/pam.d/login
echo "session required pam_limits.so" >> /etc/pam.d/login
echo "Modifying /etc/pam.d/login succeeded."
5.5 Edit /etc/profile
Shell script:
echo "Now modify /etc/profile, with a backup named /etc/profile.bak"
cp /etc/profile /etc/profile.bak
echo 'if [ $USER = "oracle" ]||[ $USER = "grid" ]; then' >> /etc/profile
echo 'if [ $SHELL = "/bin/ksh" ]; then' >> /etc/profile
echo 'ulimit -p 16384' >> /etc/profile
echo 'ulimit -n 65536' >> /etc/profile
echo 'else' >> /etc/profile
echo 'ulimit -u 16384 -n 65536' >> /etc/profile
echo 'fi' >> /etc/profile
echo 'fi' >> /etc/profile
echo "Modifying /etc/profile succeeded."
5.6 Edit the kernel parameters in /etc/sysctl.conf
Shell script:
echo "Now modify /etc/sysctl.conf, with a backup named /etc/sysctl.conf.bak"
cp /etc/sysctl.conf /etc/sysctl.conf.bak
echo "fs.aio-max-nr = 1048576" >> /etc/sysctl.conf
echo "fs.file-max = 6815744" >> /etc/sysctl.conf
echo "kernel.shmall = 2097152" >> /etc/sysctl.conf
echo "kernel.shmmax = 1054472192" >> /etc/sysctl.conf
echo "kernel.shmmni = 4096" >> /etc/sysctl.conf
echo "kernel.sem = 250 32000 100 128" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range = 9000 65500" >> /etc/sysctl.conf
echo "net.core.rmem_default = 262144" >> /etc/sysctl.conf
echo "net.core.rmem_max = 4194304" >> /etc/sysctl.conf
echo "net.core.wmem_default = 262144" >> /etc/sysctl.conf
echo "net.core.wmem_max = 1048586" >> /etc/sysctl.conf
echo "net.ipv4.tcp_wmem = 262144 262144 262144" >> /etc/sysctl.conf
echo "net.ipv4.tcp_rmem = 4194304 4194304 4194304" >> /etc/sysctl.conf
echo "Modifying /etc/sysctl.conf succeeded."
echo "Now make the changes take effect....."
sysctl -p
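For a sense of how the two shared-memory values above relate: kernel.shmmax caps the size of a single shared-memory segment (here just under 1 GiB), while kernel.shmall caps total shared memory, counted in pages. A quick check, assuming the usual 4 KiB page size:

```python
page_size = 4096          # bytes per page (typical x86 value; an assumption)
shmall_pages = 2097152    # kernel.shmall from the script above
shmmax_bytes = 1054472192 # kernel.shmmax from the script above

total_bytes = shmall_pages * page_size
print(total_bytes // 2**30)            # total shared memory allowed, in GiB
print(round(shmmax_bytes / 2**30, 2))  # largest single segment, in GiB
```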
5.7 Stop the ntp service
[root@node1 /]# service ntpd status
ntpd is stopped
[root@node1 /]# chkconfig ntpd off
[root@node1 etc]# ls | grep ntp
ntp
ntp.conf
[root@node1 etc]# cp -p /etc/ntp.conf /etc/ntp.conf.bak
[root@node1 etc]# ls | grep ntp
ntp
ntp.conf
ntp.conf.bak
[root@node1 etc]# rm -rf /etc/ntp.conf
[root@node1 etc]# ls | grep ntp
ntp
ntp.conf.bak
[root@node1 etc]#
6. Repeat step 5 on node2 to configure that node
Note: when running the scripts on node2, change the grid user's ORACLE_SID to +ASM2, the oracle user's ORACLE_SID to devdb2, and the ORACLE_HOSTNAME variable to node2.localdomain.
7. Configure SSH user equivalence for the oracle and grid users
node1:
[root@node1 etc]# su - oracle
node1-> env | grep ORA
ORACLE_UNQNAME=devdb
ORACLE_SID=devdb1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOSTNAME=node1.localdomain
ORACLE_TERM=xterm
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
node1-> pwd
/home/oracle
node1-> mkdir ~/.ssh
node1-> chmod 700 ~/.ssh
node1-> ls -al
total 36
drwx------. 4 oracle oinstall 4096 Feb  5 18:53 .
drwxr-xr-x. 4 root   root     4096 Feb  5 17:10 ..
-rw-------. 1 oracle oinstall  167 Feb  5 18:16 .bash_history
-rw-r--r--. 1 oracle oinstall   18 Mar 22  2017 .bash_logout
-rw-r--r--. 1 oracle oinstall  823 Feb  5 17:18 .bash_profile
-rw-r--r--. 1 oracle oinstall  124 Mar 22  2017 .bashrc
drwxr-xr-x. 2 oracle oinstall 4096 Nov 20  2010 .gnome2
drwx------. 2 oracle oinstall 4096 Feb  5 18:53 .ssh
-rw-------. 1 oracle oinstall  651 Feb  5 18:16 .viminfo
node1-> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
b9:97:e7:1b:1c:e4:1d:d9:31:47:e1:d1:90:7f:27:e7 oracle@node1.localdomain
The key's randomart image is:
node1-> ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
b7:70:24:43:ab:90:74:b0:49:dc:a9:bf:e7:19:17:ef oracle@node1.localdomain
The key's randomart image is:

Run the same commands on node2.
Back on node1:
node1-> id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
node1-> pwd
/home/oracle
node1-> cd ~/.ssh
node1-> ll
total 16
-rw-------. 1 oracle oinstall  668 Feb  5 18:55 id_dsa
-rw-r--r--. 1 oracle oinstall  614 Feb  5 18:55 id_dsa.pub
-rw-------. 1 oracle oinstall 1675 Feb  5 18:54 id_rsa
-rw-r--r--. 1 oracle oinstall  406 Feb  5 18:54 id_rsa.pub
node1-> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
node1-> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
node1-> ll
total 20
-rw-r--r--. 1 oracle oinstall 1020 Feb  5 19:05 authorized_keys
-rw-------. 1 oracle oinstall  668 Feb  5 18:55 id_dsa
-rw-r--r--. 1 oracle oinstall  614 Feb  5 18:55 id_dsa.pub
-rw-------. 1 oracle oinstall 1675 Feb  5 18:54 id_rsa
-rw-r--r--. 1 oracle oinstall  406 Feb  5 18:54 id_rsa.pub
node1-> ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'node2 (192.168.88.192)' can't be established.
RSA key fingerprint is cd:fd:bd:72:7d:2f:54:b3:d7:32:30:de:67:bb:6f:8b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.88.192' (RSA) to the list of known hosts.
oracle@node2's password:
node1-> ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@node2's password:
node1-> scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
oracle@node2's password:
authorized_keys
node1->
Verify on node1 that SSH equivalence is configured:
node1-> ssh node1 date
The authenticity of host 'node1 (192.168.88.191)' can't be established.
RSA key fingerprint is b2:a4:19:c0:85:b5:df:f2:8d:16:d8:b2:83:5b:21:19.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.88.191' (RSA) to the list of known hosts.
Fri Feb  5 19:15:02 CST 2021
node1-> ssh node2 date
Fri Feb  5 19:15:57 CST 2021
node1-> ssh node1-priv date
...output omitted...
node1-> ssh node2-priv date
...output omitted...
node1-> ssh node1.localdomain date
...output omitted...
node1-> ssh node2.localdomain date
...output omitted...
node1-> ssh node1-priv.localdomain date
...output omitted...
node1-> ssh node2-priv.localdomain date
...output omitted...
Run the same commands on node2 to verify SSH equivalence there.
Finally, run the commands again on both node1 and node2; if no password is requested, the equivalence between node1 and node2 is configured correctly:
node1:
node1-> ssh node1 date
Fri Feb  5 19:36:22 CST 2021
node1-> ssh node2 date
Fri Feb  5 19:36:26 CST 2021
node1-> ssh node1-priv date
Fri Feb  5 19:36:34 CST 2021
node1-> ssh node2-priv date
Fri Feb  5 19:36:38 CST 2021
node1-> ssh node1.localdomain date
Fri Feb  5 19:37:51 CST 2021
node1-> ssh node2.localdomain date
Fri Feb  5 19:37:54 CST 2021
node1-> ssh node2-priv.localdomain date
Fri Feb  5 19:38:01 CST 2021
node1-> ssh node1-priv.localdomain date
Fri Feb  5 19:38:08 CST 2021

node2:
node2-> ssh node1 date
Fri Feb  5 19:49:20 CST 2021
node2-> ssh node2 date
Fri Feb  5 19:49:23 CST 2021
node2-> ssh node1-priv date
Fri Feb  5 19:49:29 CST 2021
node2-> ssh node2-priv date
Fri Feb  5 19:49:32 CST 2021
node2-> ssh node1.localdomain date
Fri Feb  5 19:49:40 CST 2021
node2-> ssh node2.localdomain date
Fri Feb  5 19:49:43 CST 2021
node2-> ssh node2-priv.localdomain date
Fri Feb  5 19:49:50 CST 2021
node2-> ssh node1-priv.localdomain date
Fri Feb  5 19:49:55 CST 2021
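Instead of typing the eight ssh commands by hand on each node, the check can be looped. A sketch (the ssh call is replaced by echo here so the loop itself is safe to run anywhere; on a real node, substitute the commented line):

```shell
hosts="node1 node2 node1-priv node2-priv \
node1.localdomain node2.localdomain node1-priv.localdomain node2-priv.localdomain"
for h in $hosts; do
  # On a real cluster node, run instead:  ssh "$h" date
  echo "would check: ssh $h date"
done
```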
SSH user equivalence for the oracle user is now complete.
8. Repeat step 7, this time switching to the grid user (su - grid), to configure SSH equivalence for grid.
9. Configure the shared disks
Create the shared disks on one node first, then add them to the other node as existing disks. Here the disks are created on node2 and then added to node1.
Create four disks in total: two 500 MB disks for the GRIDDG disk group, dedicated to the OCR and the Voting Disk (Voting Disks are normally placed on an odd number of disks); one 3 GB disk for the DATA disk group, which holds the database; and one 3 GB disk for the FLASH disk group, used as the flash recovery area.
Steps to create the first shared disk on node2:
① First shut down node 2 (RAC2), then right-click RAC2 and choose Settings.
② In the virtual machine settings dialog, click Add, choose Hard Disk, then Next.
③ Create a new virtual disk.
④ Specify the size of the shared disk.
⑤ Choose where to store the shared disk file.
⑥ After the disk is created, select the new disk, click "Advanced", and in the dialog pay special attention to the virtual device node: it must be set to 1:0.
⑦ Repeat steps ①-⑥ to create the second disk, 0.5 GB in size, with virtual device node 1:1.
⑧ Repeat steps ①-⑥ to create the third disk, 3 GB in size, with virtual device node 2:0.
⑨ Repeat steps ①-⑥ to create the fourth disk, 3 GB in size, with virtual device node 2:1.
Shut down node1, then add the disks to node1:
The steps are exactly the same as for node2 above, except that when selecting the disk you must choose "Use an existing virtual disk", as shown in the figure below:
Edit the virtual machine (.vmx) files of node1 and node2:
First shut down node2; hovering the mouse over RAC2 shows the file backing the virtual machine:
After modification the content is as follows (the added lines are the ones shown in red in the original screenshot):
... lines omitted ...
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi2.present = "TRUE"
scsi2.virtualDev = "lsilogic"
scsi2.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:1.present = "TRUE"
scsi2:0.present = "TRUE"
scsi2:1.present = "TRUE"
scsi1:0.fileName = "H:\sharedisk\OCR.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"
scsi1:1.fileName = "H:\sharedisk\VOTING.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:1.deviceType = "disk"
scsi2:0.fileName = "H:\sharedisk\DATA.vmdk"
scsi2:0.mode = "independent-persistent"
scsi2:0.deviceType = "disk"
scsi2:1.fileName = "H:\sharedisk\FLASH.vmdk"
scsi2:1.mode = "independent-persistent"
scsi2:1.deviceType = "disk"
floppy0.present = "FALSE"
scsi1:1.redo = ""
scsi1:0.redo = ""
scsi2:0.redo = ""
scsi2:1.redo = ""
scsi1.pciSlotNumber = "38"
scsi2.pciSlotNumber = "39"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
disk.EnableUUID = "TRUE"
usb:0.present = "TRUE"
usb:0.deviceType = "hid"
usb:0.port = "0"
usb:0.parent = "-1"
Shut down node1 and modify its virtual machine file in the same way.
10. Configure the ASM disks
The shared disks for both RAC nodes were configured in step 9 above; next they need to be partitioned and then turned into ASM disks with asmlib, to hold the OCR, the Voting Disk, and the database.
Note: the disks only need to be partitioned on one of the nodes; node1 is used here. The ASM disks are created with the asmlib package rather than raw devices; from 11gR2 onward, the OUI graphical installer no longer supports raw devices.
① View the shared disk information
As root, run fdisk -l on both node1 and node2 to view the existing partition information:
The partition information on the two nodes is identical: /dev/sda holds the operating system, and the four disks /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde have no partitions yet.
② Partition the shared disks
As root on node1, partition /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde:

[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-500, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-500, default 500):
Using default value 500

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]#
Notes: fdisk /dev/sdb partitions the /dev/sdb disk, and the keystrokes entered mean:
n — create a new partition;
p — choose primary partition as the partition type;
1 — number the partition 1;
accept the default first and last cylinders, i.e. 1 and 500;
w — write the new partition table to disk.
③ Repeat step ② as root on node1 for the other three disks.
④ After partitioning, both node1 and node2 show the following:
node1:
[root@node1 ~]# fdisk -l
... preceding lines omitted ...
Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000229dc

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        3407    27360256   83  Linux
/dev/sda2            3407        3917     4096000   82  Linux swap / Solaris

Disk /dev/sdb: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xab9b0998

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         500      511984   83  Linux

Disk /dev/sdc: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcca413ef

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         512      524272   83  Linux

Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x886442a7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         391     3140676   83  Linux

Disk /dev/sde: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa9674b78

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         391     3140676   83  Linux
[root@node1 ~]#
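The Blocks column can be cross-checked from the printed geometry. For /dev/sdb1 (64 heads, 32 sectors/track, cylinders 1-500), classic DOS-compatible fdisk leaves the first 32-sector track for the MBR, which accounts for the reported 511984 blocks. A quick check:

```python
heads, sectors_per_track, cylinders = 64, 32, 500  # geometry of /dev/sdb1 above
sector_bytes = 512

total_sectors = heads * sectors_per_track * cylinders   # sectors in cylinders 1-500
usable_sectors = total_sectors - sectors_per_track      # first track reserved for the MBR
blocks = usable_sectors * sector_bytes // 1024          # fdisk reports 1 KiB blocks
print(blocks)  # 511984, matching the /dev/sdb1 row
```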