
Korea Oracle User Group

Install/Configuration

Oracle 21c RAC Installation with Oracle Linux 8.2 - 2 (Grid Infrastructure)

 

Reference: 19c RAC installation posts

Oracle 19c RAC Installation with Oracle Linux 8.2 - 1 (OS and Storage Setup)

Oracle 19c RAC Installation with Oracle Linux 8.2 - 2 (Grid Infrastructure)

Oracle 19c RAC Installation with Oracle Linux 8.2 - 3 (Creating the Disk Groups to Use)

Oracle 19c RAC Installation with Oracle Linux 8.2 - 4 (Installing the Database Software)

Oracle 19c RAC Installation with Oracle Linux 8.2 - 5 (Creating the Database with DBCA)

 

 

1. Create the installation directories (nodes 1 and 2)

2. Unzip the Grid Infrastructure installation file

3. Install the cvuqdisk RPM as root

4. Run runcluvfy

5. Run the GI (Grid Infrastructure) installer

6. Check the Grid Infrastructure status

 

1. Create the installation directories (nodes 1 and 2)

mprac1.localdomain@root:/root> mkdir -p /u01/app/21c/grid
mprac1.localdomain@root:/root> mkdir -p /u01/app/grid
mprac1.localdomain@root:/root> mkdir -p /u01/app/oracle/product/21c/db_1
mprac1.localdomain@root:/root> chown -R grid:oinstall /u01
mprac1.localdomain@root:/root> chown -R oracle:oinstall /u01/app/oracle
mprac1.localdomain@root:/root> chmod -R 775 /u01/
 

 

2. Unzip the Grid Infrastructure installation file

mprac1.localdomain@root:/root> mv LINUX.X64_213000_grid_home.zip /u01/app/21c/grid/
mprac1.localdomain@root:/root> chown grid:oinstall /u01/app/21c/grid/LINUX.X64_213000_grid_home.zip 
mprac1.localdomain@root:/root> su - grid
mprac1.localdomain@grid:+ASM1:/home/grid> cd /u01/app/21c/grid/
mprac1.localdomain@grid:+ASM1:/u01/app/21c/grid> unzip /u01/app/21c/grid/LINUX.X64_213000_grid_home.zip 
 

 

3. Install the cvuqdisk RPM as root

mprac1.localdomain@grid:+ASM1:/u01/app/21c/grid> exit
logout
mprac1.localdomain@root:/root> cd /u01/app/21c/grid/cv/rpm
mprac1.localdomain@root:/u01/app/21c/grid/cv/rpm> cp -p cvuqdisk-1.0.10-1.rpm /tmp
mprac1.localdomain@root:/u01/app/21c/grid/cv/rpm> scp -p cvuqdisk-1.0.10-1.rpm mprac2:/tmp
cvuqdisk-1.0.10-1.rpm                               100%   12KB  10.7MB/s   00:00    
mprac1.localdomain@root:/u01/app/21c/grid/cv/rpm> rpm -Uvh /tmp/cvuqdisk-1.0.10-1.rpm 
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Using default group oinstall to install package
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
mprac1.localdomain@root:/u01/app/21c/grid/cv/rpm> 
 

 

Install the package on node 2 as well.

 

mprac2.localdomain@root:/root> rpm -Uvh /tmp/cvuqdisk-1.0.10-1.rpm 
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Using default group oinstall to install package
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
 

 

4. Run runcluvfy

1) While runcluvfy runs, files such as resolv.conf must be copied from node 2 to node 1 via scp, and a known OpenSSH 8.x issue can make those copies fail with errors. Apply the following workaround first, running the steps below as root.

 

mprac1.localdomain@root:/root> cp -p /usr/bin/scp /usr/bin/scp-original
mprac1.localdomain@root:/root> echo "/usr/bin/scp-original -T \$*" > /usr/bin/scp
mprac1.localdomain@root:/root> cat /usr/bin/scp
/usr/bin/scp-original -T $*
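What the wrapper actually changes can be replayed against a stub binary. Everything below is an illustrative stand-in (the stub just echoes its arguments); only the `-T \$*` wrapper line mirrors the real workaround. OpenSSH 8.x made `scp` stricter about server-side filename checking, and `-T` disables that strict check, which is why the installer's copies start working.

```shell
# Stand-in demo of the scp wrapper workaround. A stub plays the role of
# the renamed /usr/bin/scp-original and just echoes what it was invoked
# with, so the injected -T flag is visible.
DIR="$(mktemp -d)"
printf '#!/bin/sh\necho "scp-original got: $*"\n' > "$DIR/scp-original"
chmod +x "$DIR/scp-original"
# Same one-line wrapper as above (a shebang is added so the demo runs
# regardless of which shell executes it):
printf '#!/bin/sh\n%s -T $*\n' "$DIR/scp-original" > "$DIR/scp"
chmod +x "$DIR/scp"
"$DIR/scp" file.txt mprac2:/tmp
# → scp-original got: -T file.txt mprac2:/tmp
```

Once the installation is finished, the commonly recommended step is to restore the original binary: `mv /usr/bin/scp-original /usr/bin/scp`.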
 

 

2) Run runcluvfy

 

mprac1.localdomain@root:/root> su - grid
mprac1.localdomain@grid:+ASM1:/home/grid> oh
mprac1.localdomain@grid:+ASM1:/u01/app/21c/grid> ./runcluvfy.sh stage -pre crsinst -n mprac1,mprac2 -method root -verbose
"ROOT" 비밀번호 입력:
 
Performing following verification checks ...

  Physical Memory ...
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  mprac2        9.4452GB (9903960.0KB)    8GB (8388608.0KB)         passed    
  mprac1        9.4452GB (9903960.0KB)    8GB (8388608.0KB)         passed    
  Physical Memory ...PASSED
  Available Physical Memory ...
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  mprac2        8.3254GB (8729828.0KB)    50MB (51200.0KB)          passed    
  mprac1        8.102GB (8495524.0KB)     50MB (51200.0KB)          passed    
  Available Physical Memory ...PASSED
  Swap Size ...
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  mprac2        20GB (2.0971516E7KB)      9.4452GB (9903960.0KB)    passed    
  mprac1        20GB (2.0971516E7KB)      9.4452GB (9903960.0KB)    passed    
  Swap Size ...PASSED
  Free Space: mprac2:/usr,mprac2:/var,mprac2:/etc,mprac2:/sbin,mprac2:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              mprac2        /             45.1582GB     25MB          passed      
  /var              mprac2        /             45.1582GB     5MB           passed      
  /etc              mprac2        /             45.1582GB     25MB          passed      
  /sbin             mprac2        /             45.1582GB     10MB          passed      
  /tmp              mprac2        /             45.1582GB     1GB           passed      
  Free Space: mprac2:/usr,mprac2:/var,mprac2:/etc,mprac2:/sbin,mprac2:/tmp ...PASSED
  Free Space: mprac1:/usr,mprac1:/var,mprac1:/etc,mprac1:/sbin,mprac1:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              mprac1        /             34.2573GB     25MB          passed      
  /var              mprac1        /             34.2573GB     5MB           passed      
  /etc              mprac1        /             34.2573GB     25MB          passed      
  /sbin             mprac1        /             34.2573GB     10MB          passed      
  /tmp              mprac1        /             34.2573GB     1GB           passed      
  Free Space: mprac1:/usr,mprac1:/var,mprac1:/etc,mprac1:/sbin,mprac1:/tmp ...PASSED
  User Existence: grid ...
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  mprac2        passed                    exists(1001)            
  mprac1        passed                    exists(1001)            

    Users With Same UID: 1001 ...PASSED
  User Existence: grid ...PASSED
  Group Existence: asmadmin ...
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  mprac2        passed                    exists                  
  mprac1        passed                    exists                  
  Group Existence: asmadmin ...PASSED
  Group Existence: asmdba ...
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  mprac2        passed                    exists                  
  mprac1        passed                    exists                  
  Group Existence: asmdba ...PASSED
  Group Existence: oinstall ...
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  mprac2        passed                    exists                  
  mprac1        passed                    exists                  
  Group Existence: oinstall ...PASSED
  Group Membership: asmdba ...
  Node Name         User Exists   Group Exists  User in Group  Status          
  ----------------  ------------  ------------  -------------  ----------------
  mprac2            yes           yes           yes            passed          
  mprac1            yes           yes           yes            passed          
  Group Membership: asmdba ...PASSED
  Group Membership: asmadmin ...
  Node Name         User Exists   Group Exists  User in Group  Status          
  ----------------  ------------  ------------  -------------  ----------------
  mprac2            yes           yes           yes            passed          
  mprac1            yes           yes           yes            passed          
  Group Membership: asmadmin ...PASSED
  Group Membership: oinstall(Primary) ...
  Node Name         User Exists   Group Exists  User in Group  Primary       Status      
  ----------------  ------------  ------------  -------------  ------------  ------------
  mprac2            yes           yes           yes            yes           passed      
  mprac1            yes           yes           yes            yes           passed      
  Group Membership: oinstall(Primary) ...PASSED
  Run Level ...
  Node Name     run level                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  mprac2        5                         3,5                       passed    
  mprac1        5                         3,5                       passed    
  Run Level ...PASSED
  Users With Same UID: 0 ...PASSED
  Current Group ID ...PASSED
  Root user consistency ...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  mprac2                                passed                  
  mprac1                                passed                  
  Root user consistency ...PASSED
  Host name ...PASSED
  Node Connectivity ...
    Hosts File ...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  mprac1                                passed                  
  mprac2                                passed                  
    Hosts File ...PASSED

Interface information for node "mprac2"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 ens192 192.168.45.202  192.168.45.0    0.0.0.0         192.168.100.1   00:0C:29:45:59:34 1500  
 ens224 1.1.1.202       1.1.1.0         0.0.0.0         192.168.100.1   00:0C:29:45:59:3E 9000  

Interface information for node "mprac1"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 ens192 192.168.45.201  192.168.45.0    0.0.0.0         192.168.100.1   00:0C:29:B7:12:49 1500  
 ens224 1.1.1.201       1.1.1.0         0.0.0.0         192.168.100.1   00:0C:29:B7:12:53 9000  

Check: MTU consistency on subnet "1.1.1.0".

  Node              Name          IP Address    Subnet        MTU             
  ----------------  ------------  ------------  ------------  ----------------
  mprac2            ens224        1.1.1.202     1.1.1.0       9000            
  mprac1            ens224        1.1.1.201     1.1.1.0       9000            

Check: MTU consistency on subnet "192.168.45.0".

  Node              Name          IP Address      Subnet        MTU             
  ----------------  ------------  --------------  ------------  ----------------
  mprac2            ens192        192.168.45.202  192.168.45.0  1500            
  mprac1            ens192        192.168.45.201  192.168.45.0  1500            

  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  mprac1[ens224:1.1.1.201]        mprac2[ens224:1.1.1.202]        yes             

  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  mprac1[ens192:192.168.45.201]   mprac2[ens192:192.168.45.202]   yes             
    Check that maximum (MTU) size packet goes through subnet ...PASSED
    Subnet mask consistency for subnet "1.1.1.0" ...PASSED
    Subnet mask consistency for subnet "192.168.45.0" ...PASSED
  Node Connectivity ...PASSED
  Multicast or broadcast check ...
Checking subnet "1.1.1.0" for multicast communication with multicast group "224.0.0.251"
  Multicast or broadcast check ...PASSED
  Network Time Protocol (NTP) ...PASSED
  Same core file name pattern ...PASSED
  User Mask ...
  Node Name     Available                 Required                  Comment   
  ------------  ------------------------  ------------------------  ----------
  mprac2        0022                      0022                      passed    
  mprac1        0022                      0022                      passed    
  User Mask ...PASSED
  User Not In Group "root": grid ...
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  mprac2        passed                    does not exist          
  mprac1        passed                    does not exist          
  User Not In Group "root": grid ...PASSED
  Time zone consistency ...PASSED
  Path existence, ownership, permissions and attributes ...
    Path "/var" ...PASSED
    Path "/dev/shm" ...PASSED
  Path existence, ownership, permissions and attributes ...PASSED
  Time offset between nodes ...PASSED
  resolv.conf Integrity ...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  mprac1                                passed                  
  mprac2                                passed                  

Checking response for name "mprac2" from each of the name servers specified in "/etc/resolv.conf"

  Node Name     Source                    Comment                   Status    
  ------------  ------------------------  ------------------------  ----------
  mprac2        192.168.45.105            IPv4                      passed    

Checking response for name "mprac1" from each of the name servers specified in "/etc/resolv.conf"

  Node Name     Source                    Comment                   Status    
  ------------  ------------------------  ------------------------  ----------
  mprac1        192.168.45.105            IPv4                      passed    
  resolv.conf Integrity ...PASSED
  DNS/NIS name service ...PASSED
  Domain Sockets ...PASSED
  Daemon "avahi-daemon" not configured and running ...
  Node Name     Configured                Status                  
  ------------  ------------------------  ------------------------
  mprac2        no                        passed                  
  mprac1        no                        passed                  

  Node Name     Running?                  Status                  
  ------------  ------------------------  ------------------------
  mprac2        no                        passed                  
  mprac1        no                        passed                  
  Daemon "avahi-daemon" not configured and running ...PASSED
  Daemon "proxyt" not configured and running ...
  Node Name     Configured                Status                  
  ------------  ------------------------  ------------------------
  mprac2        no                        passed                  
  mprac1        no                        passed                  

  Node Name     Running?                  Status                  
  ------------  ------------------------  ------------------------
  mprac2        no                        passed                  
  mprac1        no                        passed                  
  Daemon "proxyt" not configured and running ...PASSED
  User Equivalence ...PASSED
  RPM Package Manager database ...PASSED
  /dev/shm mounted as temporary file system ...PASSED
  File system mount options for path /var ...PASSED
  DefaultTasksMax parameter ...PASSED
  zeroconf check ...PASSED
  ASM Filter Driver configuration ...PASSED
  Systemd login manager IPC parameter ...PASSED

Pre-check for cluster services setup was successful.

CVU operation performed:      stage -pre crsinst
Date:                         Dec 2, 2021 1:24:41 PM
CVU home:                     /u01/app/21c/grid
User:                         grid
Operating system:             Linux5.4.17-2136.301.1.3.el8uek.x86_64
 

 

5. Run the GI (Grid Infrastructure) installer

mprac1.localdomain@grid:+ASM1:/u01/app/21c/grid> ./gridSetup.sh
 

 

Installer startup screen

 

20211202_133403.jpg

 

Oracle Enterprise Linux 8.2 does not appear to be certified yet.

Continuing past this warning causes no installation issues.

Proceed with the Oracle Grid Infrastructure installation.

1) Step 1/9

20211202_133422.jpg

Select "Configure Oracle Grid Infrastructure for a New Cluster".

 

2) Step 2/9

20211202_133739.jpg

Select "Configure an Oracle Standalone Cluster".

 

3) Step 3/17

20211202_133812.jpg

Select "Create Local SCAN".

The fields are already pre-filled at this point.

 

4) Step 4/17

20211202_133922.jpg

Not all nodes are listed yet, so click the Add button to add the missing node.

5) Step 4/17

20211202_134003.jpg

Enter node 2's information and continue.

6) Step 4/17

20211202_134058.jpg

Run the SSH connectivity test with the grid account.

7) Step 5/17

20211202_134140.jpg

Check the network interface assignments.

8) Step 6/17

20211202_134207.jpg

Select "Use Oracle Flex ASM for storage" and continue.

9) Step 7/17

20211202_134228.jpg

Select "Do not use a GIMR database".

10) Step 8/16

20211202_134319.jpg

Create a disk group named CRS to hold the OCR and voting disks.

Click "Change Discovery Path" and select the crs1, crs2, and crs3 disks prepared in part 1.

11) Step 8/16

20211202_134348.jpg

 

12) Step 8/16

20211202_134509.jpg

 

13) Step 9/16

20211202_134554.jpg

Select "Use same passwords for these accounts" and enter a password.

 

14) Step 10/16

20211202_134614.jpg

Select "Do not use IPMI".

15) Step 11/18

20211202_134628.jpg

Leave the EM registration option unchecked and move on.

16) Step 12/18

20211202_134645.jpg

Check the configured group assignments.

17) Step 13/18

20211202_134703.jpg

Check the grid user's ORACLE_BASE and ORACLE_HOME locations.

18) Step 14/19

20211202_134730.jpg

Check the inventory location and group information.

19) Step 15/19

20211202_134750.jpg

Select "Automatically run configuration scripts" and enter the root password under "Use root user credential".

20) Step 16/18

20211202_134806.jpg

Review the installation summary.

21) Step 17/18

20211202_134843.jpg

The installation runs.

22) Step 18/18

20211202_141211.jpg

The installation completes.

6. Check the Grid Infrastructure status

mprac1.localdomain@grid:+ASM1:/home/grid> crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       mprac1                   STABLE
               ONLINE  ONLINE       mprac2                   STABLE
ora.chad
               ONLINE  ONLINE       mprac1                   STABLE
               ONLINE  ONLINE       mprac2                   STABLE
ora.net1.network
               ONLINE  ONLINE       mprac1                   STABLE
               ONLINE  ONLINE       mprac2                   STABLE
ora.ons
               ONLINE  ONLINE       mprac1                   STABLE
               ONLINE  ONLINE       mprac2                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       mprac1                   STABLE
      2        ONLINE  ONLINE       mprac2                   STABLE
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       mprac1                   STABLE
      2        ONLINE  ONLINE       mprac2                   STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       mprac1                   STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       mprac1                   STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       mprac2                   STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       mprac1                   Started,STABLE
      2        ONLINE  ONLINE       mprac2                   Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       mprac1                   STABLE
      2        ONLINE  ONLINE       mprac2                   STABLE
ora.cdp1.cdp
      1        ONLINE  ONLINE       mprac1                   STABLE
ora.cdp2.cdp
      1        ONLINE  ONLINE       mprac1                   STABLE
ora.cdp3.cdp
      1        ONLINE  ONLINE       mprac2                   STABLE
ora.cvu
      1        ONLINE  ONLINE       mprac1                   STABLE
ora.mprac1.vip
      1        ONLINE  ONLINE       mprac1                   STABLE
ora.mprac2.vip
      1        ONLINE  ONLINE       mprac2                   STABLE
ora.qosmserver
      1        ONLINE  ONLINE       mprac1                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       mprac1                   STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       mprac1                   STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       mprac2                   STABLE
--------------------------------------------------------------------------------
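A listing like the one above is long enough that a problem is easy to miss by eye. A small grep over saved `crsctl status res -t` output flags any state line containing OFFLINE. The snippet below is a fabricated sample for illustration only, with one resource deliberately set OFFLINE so the scan has something to find; the real cluster output above is fully ONLINE.

```shell
# Scan saved `crsctl status res -t` output for OFFLINE state lines.
# The sample here is a made-up snippet with one OFFLINE entry so the
# grep has something to match.
OUT="$(mktemp)"
cat > "$OUT" <<'EOF'
ora.LISTENER.lsnr
               ONLINE  ONLINE       mprac1                   STABLE
               ONLINE  ONLINE       mprac2                   STABLE
ora.cvu
      1        ONLINE  OFFLINE      mprac1                   STABLE
EOF
# Prints each OFFLINE line with its line number:
grep -n 'OFFLINE' "$OUT"
```

On a healthy cluster the same grep against real `crsctl status res -t` output prints nothing and exits non-zero, which also makes it usable in a monitoring script.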
 

 

 

 

 

 
