Four scripts must be run during a RAC installation:
1) $ORACLE_BASE/oraInventory/orainstRoot.sh (run at the end of the Clusterware installation)
2) $CRS_HOME/root.sh (run at the end of the Clusterware installation)
3) $CRS_HOME/bin/vipca.sh (called automatically when $CRS_HOME/root.sh is run on the second node)
4) $ORACLE_HOME/root.sh (run after the database software is installed)

1. The orainstRoot.sh script

1.1 Running orainstRoot.sh

root@node2 # /oracle/oraInventory/orainstRoot.sh
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete

1.2 Contents of orainstRoot.sh

root@node1 # more /oracle/oraInventory/orainstRoot.sh
#!/bin/sh
if [ ! -d /var/opt/oracle ]; then
    mkdir -p /var/opt/oracle;
fi
if [ -d /var/opt/oracle ]; then
    chmod 755 /var/opt/oracle;
fi
if [ -f /oracle/oraInventory/oraInst.loc ]; then
    cp /oracle/oraInventory/oraInst.loc /var/opt/oracle/oraInst.loc;
    chmod 644 /var/opt/oracle/oraInst.loc;
else
    INVPTR=/var/opt/oracle/oraInst.loc
    INVLOC=/oracle/oraInventory
    GRP=oinstall
    PTRDIR=`dirname $INVPTR`;
    # Create the software inventory location pointer file
    if [ ! -d $PTRDIR ]; then
        mkdir -p $PTRDIR;
    fi
    echo "Creating the Oracle inventory pointer file ($INVPTR)";
    echo inventory_loc=$INVLOC > $INVPTR
    echo inst_group=$GRP >> $INVPTR
    chmod 644 $INVPTR
    # Create the inventory directory if it doesn't exist
    if [ ! -d $INVLOC ]; then
        echo "Creating the Oracle inventory directory ($INVLOC)";
        mkdir -p $INVLOC;
    fi
fi
echo "Changing permissions of /oracle/oraInventory to 770.";
chmod -R 770 /oracle/oraInventory;
if [ $? != 0 ]; then
    echo "OUI-35086:WARNING: chmod of /oracle/oraInventory to 770 failed!";
fi
echo "Changing groupname of /oracle/oraInventory to oinstall.";
chgrp oinstall /oracle/oraInventory;
if [ $? != 0 ]; then
    echo "OUI-10057:WARNING: chgrp of /oracle/oraInventory to oinstall failed!";
fi
echo "The execution of the script is complete"

From the script we can see that it mainly creates the /var/opt/oracle directory (if it does not already exist) and writes the oraInst.loc file into it; that file records the location and owning group of the oraInventory. Finally it changes the permissions and group of the oraInventory directory.

root@node2 # ls -rlt /var/opt/oracle/
total 2
-rw-r--r-- 1 root root 55 Apr 2 14:42 oraInst.loc
root@node2 # more oraInst.loc
inventory_loc=/oracle/oraInventory
inst_group=oinstall

Run the same script on the other node:

root@node1 # /oracle/oraInventory/orainstRoot.sh
Changing permissions of /oracle/oraInventory to 770.
Changing groupname of /oracle/oraInventory to oinstall.
The execution of the script is complete

2. The root.sh script

2.1 Running root.sh

root@node2 # /oracle/crs/root.sh
WARNING: directory /oracle is not owned by root
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory /oracle is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 0: node2 node2-priv node2
node 1: node1 node1-priv node1
Creating OCR keys for user root, privgrp root..
Operation successful.
Now formatting voting device: /oracle/ocrcfg1
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
  node2
CSS is inactive on these nodes.
  node1
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

From the output we can see that this script performs the CRS configuration: it formats the OCR disk, updates the /etc/inittab file, starts the CSS processes, and creates the ocr.loc file plus the scls_scr and oprocd directories under /var/opt/oracle/.
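At this point the CRS stack is only up on node2. As a quick sanity check before running root.sh on the remaining node, the daemon state can be confirmed, for example (a minimal sketch, assuming the 10gR2 crsctl utility under the CRS home /oracle/crs used above):

# Sketch: confirm the CRS/CSS daemons on node2 after the first root.sh run
/oracle/crs/bin/crsctl check crs      # reports the health of the CSS, CRS and EVM daemons
ps -ef | grep crs | grep -v grep      # the init-spawned daemons should be visible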
2.2 Checking the CRS processes and the /etc/inittab file shows what has changed on the node.

root@node2 # ps -ef | grep crs | grep -v grep
oracle 18212 18211 0 14:47:28 ? 0:00 /oracle/crs/bin/ocssd.bin
oracle 18191 18180 0 14:47:28 ? 0:00 /oracle/crs/bin/oclsmon.bin
oracle 17886 1 0 14:47:27 ? 0:00 /oracle/crs/bin/evmd.bin
oracle 18180 18092 0 14:47:28 ? 0:00 /bin/sh -c cd /oracle/crs/log/node2/cssd/oclsmon; ulimit -c unlimited; /ora
root 17889 1 0 14:47:27 ? 0:00 /oracle/crs/bin/crsd.bin reboot
oracle 18211 18093 0 14:47:28 ? 0:00 /bin/sh -c ulimit -c unlimited; cd /oracle/crs/log/node2/cssd; /oracle/crs

root@node2 # ls -rlt /var/opt/oracle/
total 8
-rw-r--r-- 1 root root 55 Apr 2 14:42 oraInst.loc
drwxrwxr-x 5 root root 512 Apr 2 14:47 oprocd
drwxr-xr-x 3 root root 512 Apr 2 14:47 scls_scr
-rw-r--r-- 1 root oinstall 48 Apr 2 14:47 ocr.loc

Note that ocr.loc, scls_scr and oprocd have been newly created, but /var/opt/oracle/oratab does not exist yet.

root@node1 # more inittab
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# The /etc/inittab file controls the configuration of init(1M); for more
# information refer to init(1M) and inittab(4). It is no longer
# necessary to edit inittab(4) directly; administrators should use the
# Solaris Service Management Facility (SMF) to define services instead.
# Refer to smf(5) and the System Administration Guide for more
# information on SMF.
#
# For modifying parameters passed to ttymon, use svccfg(1m) to modify
# the SMF repository. For example:
#
# # svccfg
# svc:> select system/console-login
# svc:/system/console-login> setprop ttymon/terminal_type = "xterm"
# svc:/system/console-login> exit
#
#ident "@(#)inittab 1.41 04/12/14 SMI"
ap::sysinit:/sbin/autopush -f /etc/iu.ap
sp::sysinit:/sbin/soconfig -f /etc/sock2path
smf::sysinit:/lib/svc/bin/svc.startd >/dev/msglog 2<>/dev/msglog </dev/console
p3:s1234:powerfail:/usr/sbin/shutdown -y -i5 -g0 >/dev/msglog 2<>/dev/msglog
h1:3:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:3:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:3:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null

root@node1 # ls -rlt /etc/inittab*
-rw-r--r-- 1 root root 1072 Nov 2 12:39 inittab.cssd
-rw-r--r-- 1 root root 1206 Mar 21 17:15 inittab.pre10203
-rw-r--r-- 1 root root 1006 Mar 21 17:15 inittab.nocrs10203
-rw-r--r-- 1 root root 1040 Apr 2 14:50 inittab.orig
-rw-r--r-- 1 root root 1040 Apr 2 14:50 inittab.no_crs
-rw-r--r-- 1 root root 1240 Apr 2 14:50 inittab
-rw-r--r-- 1 root root 1240 Apr 2 14:50 inittab.crs

The script copies the original inittab to inittab.no_crs and keeps a copy of the modified inittab as inittab.crs.
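Since the pre-CRS and post-CRS copies of inittab are both preserved, a simple way to see exactly which entries root.sh added is to diff them (a sketch, assuming the backup file names shown in the listing above):

# Sketch: show the entries root.sh appended to /etc/inittab
diff /etc/inittab.no_crs /etc/inittab.crs
# The difference should be the three respawn entries for init.evmd, init.cssd and init.crsd.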
2.3 Run $CRS_HOME/root.sh on the other node.

root@node1 # /oracle/crs/root.sh
WARNING: directory /oracle is not owned by root
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory /oracle is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 0: node2 node2-priv node2
node 1: node1 node1-priv node1
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
  node2
  node1
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.

3. When root.sh is run on the second node, it performs one extra task compared with the first node: it runs $CRS_HOME/bin/vipca.sh.

vipca.sh mainly configures the VIPs and starts the default CRS resources (six by default before any database is created), which brings up three additional background processes.

root@node1 # ps -ef | grep crs | grep -v grep
oracle 18347 17447 0 14:51:06 ? 0:00 /oracle/crs/bin/evmlogger.bin -o /oracle/crs/evm/log/evmlogger.info -l /oracle/
oracle 17447 1 0 14:50:47 ? 0:00 /oracle/crs/bin/evmd.bin
oracle 17763 17756 0 14:50:48 ? 0:00 /oracle/crs/bin/ocssd.bin
oracle 17756 17643 0 14:50:48 ? 0:00 /bin/sh -c ulimit -c unlimited; cd /oracle/crs/log/node1/cssd; /oracle/crs
oracle 21216 1 0 14:52:28 ? 0:00 /oracle/crs/opmn/bin/ons -d
oracle 21217 21216 0 14:52:28 ? 0:00 /oracle/crs/opmn/bin/ons -d
oracle 17771 17642 0 14:50:48 ? 0:00 /bin/sh -c cd /oracle/crs/log/node1/cssd/oclsmon; ulimit -c unlimited; /ora
oracle 17773 17771 0 14:50:48 ? 0:00 /oracle/crs/bin/oclsmon.bin
root 17449 1 0 14:50:47 ? 0:01 /oracle/crs/bin/crsd.bin reboot

root@node2 # ps -ef | grep crs | grep -v grep
oracle 18212 18211 0 14:47:28 ? 0:00 /oracle/crs/bin/ocssd.bin
oracle 27467 27466 0 14:52:25 ? 0:00 /oracle/crs/opmn/bin/ons -d
oracle 25252 17886 0 14:51:16 ? 0:00 /oracle/crs/bin/evmlogger.bin -o /oracle/crs/evm/log/evmlogger.info -l /oracle/
oracle 27466 1 0 14:52:25 ? 0:00 /oracle/crs/opmn/bin/ons -d
oracle 18191 18180 0 14:47:28 ? 0:00 /oracle/crs/bin/oclsmon.bin
oracle 17886 1 0 14:47:27 ? 0:00 /oracle/crs/bin/evmd.bin
oracle 18180 18092 0 14:47:28 ? 0:00 /bin/sh -c cd /oracle/crs/log/node2/cssd/oclsmon; ulimit -c unlimited; /ora
root 17889 1 0 14:47:27 ? 0:00 /oracle/crs/bin/crsd.bin reboot
oracle 18211 18093 0 14:47:28 ? 0:00 /bin/sh -c ulimit -c unlimited; cd /oracle/crs/log/node2/cssd; /oracle/crs

Comparing with the earlier listing on node2, three more background processes appear once vipca.sh has completed.

root@node1 # crs_stat -t
Name           Type         Target    State     Host
------------------------------------------------------------
ora....c03.gsd application  ONLINE    ONLINE    node1
ora....c03.ons application  ONLINE    ONLINE    node1
ora....c03.vip application  ONLINE    ONLINE    node1
ora....c04.gsd application  ONLINE    ONLINE    node2
ora....c04.ons application  ONLINE    ONLINE    node2
ora....c04.vip application  ONLINE    ONLINE    node1
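Besides crs_stat -t, the nodeapps that vipca created can also be queried per node with srvctl (a sketch, assuming the 10gR2 srvctl syntax and the node names used above):

# Sketch: check the VIP/GSD/ONS nodeapps on each node
/oracle/crs/bin/srvctl status nodeapps -n node1
/oracle/crs/bin/srvctl status nodeapps -n node2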
4. When installing the database software (binaries), $ORACLE_HOME/root.sh is run as the final step.

root@node2 # $ORACLE_HOME/root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/10g
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file dbhome already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file oraenv already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file coraenv already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Creating /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

This script creates dbhome, oraenv and coraenv in the specified directory (default /usr/local/bin) and creates the oratab file in /var/opt/oracle/.

root@node2 # ls -rlt /usr/local/bin
total 18
-rwxr-xr-x 1 oracle root 2428 Apr 2 15:07 dbhome
-rwxr-xr-x 1 oracle root 2560 Apr 2 15:07 oraenv
-rwxr-xr-x 1 oracle root 2857 Apr 2 15:07 coraenv

root@node2 # ls -rlt /var/opt/oracle/
total 10
-rw-r--r-- 1 root root 55 Apr 2 14:42 oraInst.loc
drwxrwxr-x 5 root root 512 Apr 2 14:47 oprocd
drwxr-xr-x 3 root root 512 Apr 2 14:47 scls_scr
-rw-r--r-- 1 root oinstall 48 Apr 2 14:47 ocr.loc
-rw-rw-r-- 1 oracle root 678 Apr 2 15:07 oratab

Run the same script on the other node:

root@node1 # /oracle/10g/root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /oracle/10g
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file dbhome already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file oraenv already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file coraenv already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Creating /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

Reposted from: https://www.cnblogs.com/tianlesoftware/archive/2010/02/22/3610253.html