I. Oracle Clusterware Architecture
The basic Oracle clustering architecture is shown in the following graphic and includes these high-level components:
- Clustered servers using Oracle Database 11g Release 2 Clusterware
- Clusterware Networking
- Private Interconnect
- Public Interface
- Clusterware Processes and Services
- Clusterware High Availability
- ASM
- OCR and voting files
- Grid Plug and Play
- Shared storage
- The RAC Database(s)
II. OS and Oracle passwords:
The same on both machines.

OS user | Password
grid    | grid
root    | N
oracle  | oracle
III. Hardware
System Model: IBM,8203-E4A
Machine Serial Number: 1080AB5
Processor Type: PowerPC_POWER6
Number Of Processors: 1
Memory Size: 4096 MB
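On AIX, this information can be read with prtconf, for example:

# Show model, serial number, processor type, processor count and memory
prtconf | grep -E "System Model|Machine Serial Number|Processor Type|Number Of Processors|Memory Size"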
IV. AIX 6.1 system prerequisites

Category                 | IBM parameter / item                          | Minimum value / check
Asynchronous I/O         | aio_maxreqs                                   | 65536
RAM                      | Physical memory                               | 4 GB
Swap                     | Swap space                                    | 1.5 x RAM or more
Virtual memory           | minperm%                                      | 3
                         | maxperm%                                      | 90
                         | maxclient%                                    | 90
                         | lru_file_repage (recommendation)              | 0
                         | strict_maxclient                              | 1
                         | strict_maxperm                                | 0
                         | page_steal_method (recommendation)            | 1
                         | minpout / maxpout                             | 4096 / 8193
OS block allocation      | ARG_MAX (ncargs)                              | 128
Network ports            | tcp_ephemeral_low                             | 32768
                         | tcp_ephemeral_high                            | 65535
                         | udp_ephemeral_low                             | 32768
                         | udp_ephemeral_high                            | 65535
Frames and network       | ipqmaxlen                                     | 512
                         | rfc1323                                       | 1
                         | sb_max                                        | 4194304
                         | tcp_recvspace                                 | 65536
                         | tcp_sendspace                                 | 65536
                         | udp_recvspace                                 | 10 x udp_sendspace, but less than sb_max
                         | udp_sendspace                                 | 65536 or (DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB
Packages                 | bos.adt.base                                  |
                         | bos.adt.lib                                   |
                         | bos.adt.libm                                  |
                         | bos.perf.libperfstat                          |
                         | bos.perf.perfstat                             |
                         | bos.perf.proctools                            |
                         | rsct.basic.rte                                |
                         | rsct.compat.clients.rte                       |
                         | xlC.aix61.rte 10.1.0.0 (or later)             |
Max processes per user   | maxuproc                                      | 16384
Core dumps enabled       | fullcore                                      | unlimited for ulimit -f and -c
grid user                | capabilities                                  | (see VII.1)
Patches                  | IZ41855 or later version                      | instfix -i -k IZ41855
                         | IZ51456 or later version                      |
                         | IZ52319 or later version                      |
JDK                      | Java 6 64-bit 6.0.0.50 IZ30726 (SR2)          | smitty
Pro*FORTRAN              | IBM XL Fortran v. 11.1                        |
ADA                      | OC Systems PowerAda 5.4d                      |
C/C++ compiler           | IBM XL C/C++ Enterprise Edition for AIX, V9.0 and gcc 3.45 |
Pro*COBOL                | IBM COBOL for AIX version 3.1                 |
                         | Micro Focus Server Express 5.1                |
Oracle Messaging Gateway | mqm.Client.Bnd                                |
                         | mqm.Server.Bnd                                |
Items in blue are optional.
Items in yellow are parameters I cannot check myself; they are therefore handled 100% by the system administrator. Most of the other values can be checked with the commands sketched below.
Warning:
Check the ulimit settings of the root user.
ln -s /usr/sbin/lsattr /etc/lsattr
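A sketch of how most of these values can be checked and, where needed, set with the standard AIX tools (vmo, no, ioo, chdev, lslpp, instfix). The exact tunables available depend on the AIX 6.1 technology level, so treat this as a starting point rather than a definitive script; persistent (-p) changes should be agreed with the system administrator.

# Asynchronous I/O (on AIX 6.1 the AIO tunables are handled by ioo)
ioo -a | grep aio_maxreqs

# VMM tunables
vmo -a | grep -E "minperm%|maxperm%|maxclient%|lru_file_repage|strict_maxclient|strict_maxperm|page_steal_method"
vmo -p -o minperm%=3                 # example of a persistent change

# Network tunables
no -a | grep -E "tcp_ephemeral_low|tcp_ephemeral_high|udp_ephemeral_low|udp_ephemeral_high|ipqmaxlen|rfc1323|sb_max|tcp_recvspace|tcp_sendspace|udp_recvspace|udp_sendspace"
no -p -o tcp_ephemeral_low=32768     # example of a persistent change

# I/O pacing, ARG_MAX (ncargs, in 4 KB blocks), maxuproc and fullcore are sys0 attributes
lsattr -El sys0 -a minpout -a maxpout -a ncargs -a maxuproc -a fullcore
chdev -l sys0 -a minpout=4096 -a maxpout=8193 -a ncargs=128 -a maxuproc=16384 -a fullcore=true

# Required filesets and APARs
lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat bos.perf.perfstat \
      bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix61.rte
instfix -i -k IZ41855
instfix -i -k IZ51456
instfix -i -k IZ52319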
V. Also check the hostname
uname -a on both nodes must match what is expected.
In our case:
node 1: aix_rac1
node 2: rac2
These must be changed for consistency.
Preferably no "_" and no "-" in the name.
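A quick check on each node (changing the hostname itself is normally the system administrator's job; the chdev line below is only an illustration):

# Name currently returned by the OS on this node
uname -n
hostname

# Illustration only: set the hostname persistently through the inet0 device
# chdev -l inet0 -a hostname=aixrac1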
VI. Disks and file systems:
The following file systems are created on each node of the RAC:
/u01/app          3 GB
/u01/app/oracle   4 GB (in fact this should be 8 GB)
/u01/app/grid     13 GB

FS                  | Home                   | FS size in GB per RAC node
/u01/app/grid       | /u01/app/grid/11.2.0   | 13
/u01/app/oracle     | /u01/app/oracle/11.2.0 | 3
/tmp                | /tmp                   | 8
/u01/app/oracledata | standalone DB          | 8
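A sketch of how these file systems could be created with crfs, assuming they live in rootvg (the volume group, sizes and mount order are taken from the list and table above and may need adjusting):

# JFS2 file systems, mounted automatically at boot (-A yes); run on each node
crfs -v jfs2 -g rootvg -m /u01/app -a size=3G -A yes
mount /u01/app
crfs -v jfs2 -g rootvg -m /u01/app/oracle -a size=8G -A yes
crfs -v jfs2 -g rootvg -m /u01/app/grid -a size=13G -A yes
mount /u01/app/oracle
mount /u01/app/grid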
VI.1 Preparing the ASM layout:
New in ASM 11gR2: the OCR and voting files can be stored in ASM.
The idea is to create 3 disk groups: CLWARE, DATA and FRA.
The DATA and CLWARE disk groups will each have 2 failure groups, so they will be created with normal redundancy.
Distribution of the database files:
Files              | Storage | Diskgroup    | Redundancy
datafiles/ctl/redo | ASM     | DATA         | normal (RAID 10 managed by Oracle: extent mirroring)
Datafiles          | ASM     | DATA         | normal
Online redo logs   | ASM     | DATA and FRA | normal
FRA                | ASM     | FRA          | external (no RAID)
OCR / voting disk  | ASM     | CLWARE       | normal
Disk group sizes and redundancy:

ASM diskgroup                        | Raw device | Size (GB) | Redundancy | FG
DATA (2 failgroups, 2-way mirroring) | raw dev1   | 5         | normal     | FG1
                                     | raw dev2   | 5         | normal     | FG1
                                     | raw dev3   | 5         | normal     | FG2
                                     | raw dev4   | 5         | normal     | FG2
FRA                                  | raw dev5   | 4         | external   |
CLWARE                               | raw dev6   | 0.8       | normal     |
                                     | raw dev7   | 0.8       | normal     |
                                     | raw dev8   | 0.8       | normal     |
To build this configuration, at least 2 volume groups visible from both nodes are required.
The command to create the DATA disk group:
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP RAC_FG1 DISK '/dev/rlv_raw1', '/dev/rlv_raw2'
  FAILGROUP RAC_FG2 DISK '/dev/rlv_raw3', '/dev/rlv_raw4';
Note that each of the two volume groups (RAC_FG1 and RAC_FG2) is created with:
· PP size: 32 MB
· Concurrent: Enhanced-Capable
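For reference, a sketch of the equivalent statements for the CLWARE and FRA disk groups, using the raw devices listed in VI.2. This assumes they are created manually from the grid user once an ASM instance is available (in practice the installer or asmca can create them); with normal redundancy and no FAILGROUP clause, ASM puts each of the three CLWARE disks in its own failure group.

# Run as the grid user, with the environment pointing at the ASM instance
sqlplus -S "/ as sysasm" <<'EOF'
-- OCR / voting disk group over the three small raw devices
CREATE DISKGROUP CLWARE NORMAL REDUNDANCY
  DISK '/dev/rlv_raw6', '/dev/rlv_raw7', '/dev/rlv_raw8';

-- Fast Recovery Area, external redundancy (no mirroring done by ASM)
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
  DISK '/dev/rlv_raw5';
EOF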
VI.2 Raw devices:

VG      | LV name | PPs (32 MB) | Size (MB)
RAC_FG1 | lv_raw1 | 160         | 5120
        | lv_raw2 | 160         | 5120
        | lv_raw6 | 25          | 800
        | lv_raw7 | 25          | 800
RAC_FG2 | lv_raw3 | 160         | 5120
        | lv_raw4 | 160         | 5120
        | lv_raw5 | 128         | 4096
        | lv_raw8 | 25          | 800
The raw device lv_raw5 is used for the FRA (no redundancy).
Permissions on the raw devices:
#ls -l /dev | grep rlv_
crw-rw---- 1 root system 35, 1 Jan 19 23:39 rlv_raw1
crw-rw---- 1 root system 35, 2 Jan 19 23:39 rlv_raw2
crw-rw---- 1 root system 36, 1 Jan 19 23:45 rlv_raw3
crw-rw---- 1 root system 36, 2 Jan 19 23:45 rlv_raw4
crw-rw---- 1 root system 36, 3 Jan 19 23:45 rlv_raw5
Changing the ownership of the raw devices, on both nodes:
chown grid:oinstall /dev/rlv_raw*
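A sketch of how the volume groups and raw logical volumes of VI.2 could be created (the hdisk names are placeholders; -C creates an enhanced-concurrent-capable VG, which needs the bos.clvm.enh fileset, and -T O is the usual recommendation for ASM so that the LVCB does not occupy the first block; both options should be confirmed for your AIX level):

# Volume groups with 32 MB physical partitions, enhanced concurrent capable
mkvg -C -s 32 -y RAC_FG1 hdisk2
mkvg -C -s 32 -y RAC_FG2 hdisk3

# Raw logical volumes (number of PPs taken from the table above)
mklv -y lv_raw1 -T O RAC_FG1 160
mklv -y lv_raw2 -T O RAC_FG1 160
mklv -y lv_raw6 -T O RAC_FG1 25
mklv -y lv_raw7 -T O RAC_FG1 25
mklv -y lv_raw3 -T O RAC_FG2 160
mklv -y lv_raw4 -T O RAC_FG2 160
mklv -y lv_raw5 -T O RAC_FG2 128
mklv -y lv_raw8 -T O RAC_FG2 25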
VII. Users and environment:
VII.1 Creating the users and groups:
The group and user IDs must be identical on both nodes.
2 users:
grid for ASM and the clusterware,
oracle for the Oracle database binaries.
mkgroup -'A' id='1000' adms='root' oinstall
mkgroup -'A' id='1031' adms='root' dba
mkgroup -'A' id='1032' adms='root' asmdba
mkgroup -'A' id='1033' adms='root' asmadmin
mkgroup -'A' id='1034' adms='root' asmoper
mkuser id='1101' pgrp='oinstall' groups='dba,asmdba,asmadmin,asmoper' home='/home/grid' grid
mkuser id='1102' pgrp='oinstall' groups='dba,asmdba' home='/home/oracle' oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
/usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
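A short verification sketch, to be run on both nodes, to confirm that the IDs, group memberships and capabilities really match:

# User and group IDs must be identical on both nodes
id grid
id oracle
lsgroup -a id oinstall dba asmdba asmadmin asmoper

# Capabilities assigned to the grid user
lsuser -a capabilities grid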
VII.2 The .profile files of the oracle and grid users
grid user:
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/11.2.0
export GRID_HOME=/u01/app/grid/11.2.0
export CRS_HOME=/u01/app/grid/11.2.0
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
PS1="[`uname -n`:\$LOGNAME:\$ORACLE_SID:\$PWD]
#"
set -o vi
oracle user:
ORACLE_SID is set to the instance name.
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/11.2.0
export ORACLE_SID=RACTEST1
export GRID_HOME=/u01/app/grid/11.2.0
export CRS_HOME=/u01/app/grid/11.2.0
export ORACLE_UNQNAME=RACTEST
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
PS1="[`uname -n`:\$LOGNAME:\$ORACLE_SID:\$PWD]
#"
set -o vi
VII.3 Ulimit
The root, oracle and grid users must be added to the limits file.
Below are the lines to add to /etc/security/limits:
grid:
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
oracle:
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
root:
fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1
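The same values can be checked, or set per user, with lsuser/chuser instead of editing the file by hand (a new login is needed before they take effect):

# Current limits as stored in /etc/security/limits
lsuser -a fsize core cpu data rss stack nofiles grid oracle root

# Equivalent chuser call for one user (repeat for oracle and root)
chuser fsize=-1 core=2097151 cpu=-1 data=-1 rss=-1 stack=-1 nofiles=-1 grid

# After logging in again as that user, verify the effective limits
ulimit -a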
VII.4 SSH:
The grid and oracle OS users must be able to connect from one server to the other over SSH without a password.
See My Oracle Support (Metalink) note 300548.1.
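A minimal sketch of the manual setup (the 11gR2 installer can also configure SSH equivalence itself). Run it as grid and then as oracle, on each node, replacing aix-rac2 with the name of the remote node:

# Generate an RSA key pair without a passphrase (accept the default file)
ssh-keygen -t rsa

# Append the local public key to the remote node's authorized_keys
cat ~/.ssh/id_rsa.pub | ssh aix-rac2 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys; chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys'

# Test: a date must come back with no password prompt, in both directions
ssh aix-rac2 date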
VII.5 Display
/etc/ssh/sshd_config must be modified on both nodes:
from
#X11Forwarding no
to
X11Forwarding yes
Restart sshd afterwards so the change takes effect (for example: stopsrc -s sshd then startsrc -s sshd).
Then, as root, run the following command:
xauth list
PuTTY: for the host, the "Enable X11 forwarding" option must be checked.
Xming: start it from a command prompt with: Xming :0 -multiwindow -clipboard
Warning: all the Xming fonts are required, otherwise dbca will hang.
Close the root session and open a new one.
The following message should appear on the screen:
Then, to get the display, as root:
xhost +
su - grid
export DISPLAY=<IP of my workstation>:0.0
xclock
VIII. Network configuration:
VIII.1 Manual IP management:
IP Address Requirements for Manual Configuration (excerpt from the Oracle documentation):
If you do not enable GNS, then the public and virtual IP addresses for each node must be
static IP addresses, configured before installation for each node, but not currently in use.
Public and virtual IP addresses must be on the same subnet.
Oracle Clusterware manages private IP addresses in the private subnet on interfaces you
identify as private during the installation interview.
The cluster must have the following addresses configured:
- A public IP address for each node, with the following characteristics:
  - Static IP address
  - Configured before installation for each node, and resolvable to that node before installation
  - On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses
- A virtual IP address for each node, with the following characteristics:
  - Static IP address
  - Configured before installation for each node, but not currently in use
  - On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses
- A Single Client Access Name (SCAN) for the cluster, with the following characteristics:
  - Three static IP addresses configured on the domain name server (DNS) before installation,
    so that the three IP addresses are associated with the name provided as the SCAN, and all
    three addresses are returned in random order by the DNS to the requestor
  - Configured before installation in the DNS to resolve to addresses that are not currently in use
  - Given a name that does not begin with a numeral
  - On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses
  - Conforms with the RFC 952 standard, which allows alphanumeric characters and hyphens ("-"),
    but does not allow underscores ("_")
- A private IP address for each node, with the following characteristics:
  - Static IP address
  - Configured before installation, but on a separate, private network, with its own subnet,
    that is not resolvable except by other cluster member nodes
Assume that the interfaces are organized as follows:
en1: private and static, on both nodes
en3: host (public) address of the machine, on both nodes
Initial state of the addresses on node 1:
en1: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 172.16.22.180 netmask 0xffffff00 broadcast 172.16.22.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en2: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 192.168.100.180 netmask 0xfffffc00 broadcast 192.168.103.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
en3: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 172.16.21.180 netmask 0xffffff00 broadcast 172.16.21.255
tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
The hosts file (/etc/hosts):
127.0.0.1 loopback localhost # loopback (lo0) name/address
172.16.22.180 rac1-priv #interconnect rac1
172.16.22.181 rac2-priv #interconnect rac2
172.16.22.202 172.16.22.202
192.168.100.180 aix_rac1
192.168.100.181 aix_rac2
192.168.100.102 aix-vip1
192.168.100.103 aix-vip2
192.168.100.204 aix-scan1
#192.168.100.205 aix-scan2
#192.168.100.206 aix-scan3
ping -s 8192 172.16.21.181
ping -s 8192 172.16.21.180
This must succeed for all the IPs, and in both directions (a loop for this check is sketched below).
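A small loop, run from each node, to go through the public and private names of the /etc/hosts file above (the VIP and SCAN addresses are not expected to answer before the clusterware is installed):

# Every name must answer from both nodes
for h in aix_rac1 aix_rac2 rac1-priv rac2-priv
do
    ping -c 2 $h
done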
./runcluvfy.sh stage -pre crsinst -n aix_rac1,aix_rac2 -r 11gR2 -verbose
This does not work, as if the two nodes could not see each other.
Below is an extract of the log:
Performing pre-checks for cluster services setup
Checking node reachability...
aix_rac1: aix_rac1
Check: Node reachability from node "null"
Destination Node Reachable?
------------------------------------ ------------------------
aix-rac1 no
aix-rac2 no
Result: Node reachability check failed from node "null"
ERROR:
Verification cannot proceed
After modification (the node names now use hyphens):
./runcluvfy.sh comp nodereach -n aix-rac1,aix-rac2 -verbose
Verifying node reachability
Checking node reachability...
Check: Node reachability from node "aix-rac1"
Destination Node Reachable?
------------------------------------ ------------------------
aix-rac1 yes
aix-rac2 yes
Result: Node reachability check passed from node "aix-rac1"
Configuration without GNS: all the IPs are in the /etc/hosts files of the servers:
Network identity | Node                          | IP              | Defined name
host address 1   | aix_rac1                      | 192.168.100.180 | aix-rac1
private 1        | aix_rac1                      | 172.16.21.180   | rac1-priv
VIP address 1    |                               | 192.168.100.102 | rac-vip1
host address 2   | aix_rac2                      | 192.168.100.181 | aix-rac2
private 2        | aix_rac2                      | 172.16.21.181   | rac2-priv
VIP address 2    |                               | 192.168.100.103 | rac-vip2
SCAN VIP 1       | managed by the Oracle cluster | 192.168.100.204 | Rac_scan1
SCAN VIP 2       | managed by the Oracle cluster | 192.168.100.205 | Rac_scan2
SCAN VIP 3       | managed by the Oracle cluster | 192.168.100.206 | Rac_scan3
VIII.2 IP configuration for use with GNS:

Network identity    | Node                          | IP              | Defined name | Type    | Assigned by | Resolved by
GNS VIP             | managed by the Oracle cluster |                 |              | virtual | static      | DNS
public host address | aix_rac1                      | 192.168.100.180 | aix-rac1     | public  | static      | GNS
private 1           | aix_rac1                      | 172.16.21.180   | rac1-priv    | private | static      | GNS
VIP address 1       | managed by the Oracle cluster |                 | rac-vip      | virtual | DHCP        | GNS
public host address | aix_rac2                      | 192.168.100.181 | aix-rac2     | public  | static      | GNS
private 2           | aix_rac2                      | 172.16.21.181   | rac2-priv    | private | static      | GNS
VIP address 2       | managed by the Oracle cluster |                 |              | virtual | DHCP        | GNS
SCAN VIP 1          | managed by the Oracle cluster |                 |              | virtual | DHCP        | GNS
SCAN VIP 2          | managed by the Oracle cluster |                 |              | virtual | DHCP        | GNS
SCAN VIP 3          | managed by the Oracle cluster |                 |              | virtual | DHCP        | GNS