Oracle 12c Data Guard Deployment and Maintenance

Contents

  • 1. Environment Deployment
  • 2. Maintenance and Usage
    • 2.1 Check archive log sequence numbers on the primary and standby
    • 2.2 Check the standby's archive log sync status and apply metrics
    • 2.3 Check the primary and standby modes
    • 2.4 Archive log maintenance
    • 2.5 Handling insufficient disk space on the standby
    • 2.6 Rebuilding the standby
  • 3. Decommissioning the Data Guard Environment
    • 3.1 Standby Database Alone
    • 3.2 Detaching the primary from Data Guard

 

The Data Guard deployment process in 12c is much the same as in 11g, and the 11g procedure can be followed.

1. Environment Deployment

                Primary                               Standby
DB Version      12.1.0.2                              12.1.0.2
Hostname        primary                               standby
Host IP         172.16.4.51                           172.16.4.52
DB_NAME         orcl                                  orcl
DB_UNIQUE_NAME  orcl                                  stdorcl
Instance_Name   orcl                                  stdorcl
DB Listener     1521                                  1521
DB Storage      FileSystem                            FileSystem
DB files        /u01/app/oracle/oradata/orcl/         /u01/app/oracle/oradata/stdorcl/
LOG files       /u01/app/oracle/oradata/orcl/         /u01/app/oracle/oradata/stdorcl/
ORACLE_HOME     /u01/app/oracle/product/12.1.0/db_1/  /u01/app/oracle/product/12.1.0/db_1/
OS              CentOS 7.5                            CentOS 7.5

2. Add standby redo log files on the primary (if there are N online redo log groups, N+1 standby redo log groups are needed).

Confirm the number of groups on the primary:

SQL> select group#,thread#,members,status from v$log;

    GROUP#    THREAD#    MEMBERS STATUS
---------- ---------- ---------- ----------------
         1          1          1 INACTIVE
         2          1          1 CURRENT
         3          2          1 INACTIVE
         4          2          1 CURRENT

Add the standby log files:

alter database add standby logfile group 5 '/u01/app/oracle/oradata/orcl/stdredo1.log' size 50m;
alter database add standby logfile group 6 '/u01/app/oracle/oradata/orcl/stdredo2.log' size 50m;
alter database add standby logfile group 7 '/u01/app/oracle/oradata/orcl/stdredo3.log' size 50m;
alter database add standby logfile group 8 '/u01/app/oracle/oradata/orcl/stdredo4.log' size 50m;
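The four ADD STANDBY LOGFILE statements above follow one pattern, so they can be generated rather than typed. A small illustrative sketch (not from the original article; the path template and size are assumptions matching the commands above):

```python
def standby_logfile_sql(start_group, count, path_template, size="50m"):
    """Generate ALTER DATABASE ADD STANDBY LOGFILE statements.

    path_template is formatted with a 1-based file index, e.g.
    '/u01/app/oracle/oradata/orcl/stdredo{}.log'.
    """
    stmts = []
    for i in range(count):
        group = start_group + i
        path = path_template.format(i + 1)
        stmts.append(
            f"alter database add standby logfile group {group} "
            f"'{path}' size {size};"
        )
    return stmts

# Reproduce the four statements from the deployment above.
for s in standby_logfile_sql(5, 4, "/u01/app/oracle/oradata/orcl/stdredo{}.log"):
    print(s)
```

This makes it easy to regenerate the DDL if the group count or directory layout changes.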
On the standby host, create the required directories:

mkdir -p /u01/app/oracle/diag/orcl
mkdir -p /u01/app/oracle/oradata/stdorcl/
mkdir -p /u01/arch
mkdir -p /u01/rman
mkdir -p /u01/app/oracle/oradata/stdorcl/pdbseed/
mkdir -p /u01/app/oracle/oradata/stdorcl/pdb/

5. With archive log mode already enabled on the primary, set the following parameters:

alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,stdorcl)' scope=both sid='*';
alter system set log_archive_dest_1='LOCATION=/u01/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl' scope=spfile;
alter system set LOG_ARCHIVE_DEST_2='SERVICE=stdorcl LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stdorcl' scope=both sid='*';
alter system set fal_client='orcl' scope=both sid='*';
alter system set FAL_SERVER='stdorcl' scope=both sid='*';
alter system set standby_file_management=AUTO scope=both sid='*';
alter system set DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/stdorcl/','/u01/app/oracle/oradata/orcl/' scope=spfile sid='*';
alter system set LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/stdorcl/','/u01/app/oracle/oradata/orcl/' scope=spfile sid='*';
alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';
alter system set remote_login_passwordfile='EXCLUSIVE' scope=spfile;
alter system set PARALLEL_EXECUTION_MESSAGE_SIZE=8192 scope=spfile;

Parameter notes:

LOG_ARCHIVE_CONFIG: lists the DB_UNIQUE_NAMEs of the primary and standby. Defining it ensures the two databases can identify each other.
LOG_ARCHIVE_DEST_1: the local archive destination. Primary and standby each define their own path.
LOG_ARCHIVE_DEST_2: sets the redo transport mode (SYNC or ASYNC) and the transport target (the standby apply node). Each site points at the other database's db_unique_name, and the SERVICE value must match the tnsnames.ora alias.
FAL_CLIENT: (Fetch Archive Log) used to resolve archive gaps; identifies the client that fetches the missing logs. The value is an Oracle Net service name.
FAL_SERVER: takes effect when the primary becomes a standby. The server that sends missing logs to the FAL_CLIENT over the network; the value is an Oracle Net service name. In this setup, when the primary becomes a standby it fetches redo or archive logs from the other site.
STANDBY_FILE_MANAGEMENT: takes effect when the primary becomes a standby. Controls whether tablespace and datafile additions on the primary are propagated automatically to the physical standby. AUTO: tablespace creation on the primary is replayed on the physical standby. MANUAL (the default when unset): newly created datafiles must be copied to the physical standby server by hand.
DB_FILE_NAME_CONVERT: takes effect when the primary becomes a standby. Maps datafile directories between the primary and standby when the directory structures differ; list each mapping pair. Note: on the primary this parameter only takes effect after a switchover, and the format should stay consistent on both sites.
LOG_FILE_NAME_CONVERT: takes effect when the primary becomes a standby. Maps the log file paths between primary and standby.
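The key point in these notes is the symmetry between the two sites: FAL_SERVER/FAL_CLIENT and the *_CONVERT pairs on the standby are mirror images of the primary's. As a hedged sketch (names and paths taken from this deployment; the function is illustrative, not an Oracle API), each site's settings can be derived from the same template:

```python
def mirror_dg_params(local, remote, local_dir, remote_dir, arch_dir="/u01/arch"):
    """Sketch: derive the Data Guard init parameters for one site.

    Swapping (local, remote) and (local_dir, remote_dir) yields the
    other site's settings, which is the symmetry described above.
    """
    return {
        "log_archive_config": f"DG_CONFIG=({local},{remote})",
        "log_archive_dest_1": f"LOCATION={arch_dir} "
                              f"VALID_FOR=(ALL_LOGFILES,ALL_ROLES) "
                              f"DB_UNIQUE_NAME={local}",
        "log_archive_dest_2": f"SERVICE={remote} LGWR ASYNC "
                              f"VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) "
                              f"DB_UNIQUE_NAME={remote}",
        "fal_client": local,
        "fal_server": remote,
        # convert parameters map the REMOTE site's paths to LOCAL paths
        "db_file_name_convert": (remote_dir, local_dir),
        "log_file_name_convert": (remote_dir, local_dir),
    }

primary = mirror_dg_params("orcl", "stdorcl",
                           "/u01/app/oracle/oradata/orcl/",
                           "/u01/app/oracle/oradata/stdorcl/")
standby = mirror_dg_params("stdorcl", "orcl",
                           "/u01/app/oracle/oradata/stdorcl/",
                           "/u01/app/oracle/oradata/orcl/")
```

Comparing `primary` and `standby` shows that one site's fal_server is the other's fal_client, and the convert pairs are reversed, exactly as in the ALTER SYSTEM commands above.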

6. Edit tnsnames.ora on both the primary and the standby:

orcl =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = primary)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

stdorcl =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = stdorcl)
    )
  )

7. Generate a parameter file for the standby on the primary:

SQL> create pfile from spfile;

9. Edit the parameter file on the standby:

*.audit_trail='NONE'
*.compatible='12.1.0.2.0'
*.control_files='/u01/app/oracle/oradata/stdorcl/control01.ctl','/u01/app/oracle/oradata/stdorcl/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_unique_name='stdorcl'
*.diagnostic_dest='/u01/app/oracle'
*.enable_pluggable_database=true
*.log_archive_format='%t_%s_%r.arc'
*.open_cursors=300
*.pga_aggregate_target=
*.processes=1000
*.remote_login_passwordfile='exclusive'
*.sga_target=
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(stdorcl,orcl)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/u01/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stdorcl'
*.LOG_ARCHIVE_DEST_2='SERVICE=orcl LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl'
*.FAL_SERVER='orcl'
*.fal_client='stdorcl'
*.DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/orcl/','/u01/app/oracle/oradata/stdorcl/'
*.LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/orcl/','/u01/app/oracle/oradata/stdorcl/'
*.standby_file_management=AUTO

10. Add a static listener entry on the standby:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1)
      (SID_NAME = stdorcl)
    )
  )

Remember to start the standby listener: lsnrctl start

11. Start the standby instance with the modified parameter file:

startup nomount; 

12. Connect with RMAN from the primary:

rman target sys/oracle@orcl auxiliary sys/oracle@stdorcl

Run the duplicate:

run {
  allocate channel c1 type disk;
  allocate channel c2 type disk;
  allocate channel c3 type disk;
  allocate AUXILIARY channel c4 type disk;
  allocate AUXILIARY channel c5 type disk;
  allocate AUXILIARY channel c6 type disk;
  DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER NOFILENAMECHECK;
  release channel c1;
  release channel c2;
  release channel c3;
  release channel c4;
  release channel c5;
  release channel c6;
}

Output:

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=53 device type=DISK
allocated channel: c2
channel c2: SID=62 device type=DISK
allocated channel: c3
channel c3: SID=55 device type=DISK
allocated channel: c4
channel c4: SID=23 device type=DISK
allocated channel: c5
channel c5: SID=24 device type=DISK
allocated channel: c6
channel c6: SID=25 device type=DISK
Starting Duplicate Db at 2018/11/24 06:41:11
current log archived

contents of Memory Script:
{
   backup as copy reuse
   targetfile '/u01/app/oracle/product/12.1.0/db_1/dbs/orapworcl'
   auxiliary format '/u01/app/oracle/product/12.1.0/db_1/dbs/orapwstdorcl';
}
executing Memory Script

Starting backup at 2018/11/24 06:41:11
Finished backup at 2018/11/24 06:41:12

contents of Memory Script:
{
   restore clone from service 'orcl' standby controlfile;
}
executing Memory Script

Starting restore at 2018/11/24 06:41:12
channel c4: starting datafile backup set restore
channel c4: using network backup set from service orcl
channel c4: restoring control file
channel c4: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/stdorcl/control01.ctl
output file name=/u01/app/oracle/oradata/stdorcl/control02.ctl
Finished restore at 2018/11/24 06:41:13

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script
sql statement: alter database mount standby database

Using previous duplicated file /u01/app/oracle/oradata/stdorcl/system01.dbf for datafile 1 with checkpoint SCN of
Using previous duplicated file /u01/app/oracle/oradata/stdorcl/sysaux01.dbf for datafile 3 with checkpoint SCN of
Using previous duplicated file /u01/app/oracle/oradata/stdorcl/undotbs01.dbf for datafile 4 with checkpoint SCN of
Using previous duplicated file /u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf for datafile 5 with checkpoint SCN of
Using previous duplicated file /u01/app/oracle/oradata/stdorcl/users01.dbf for datafile 6 with checkpoint SCN of

contents of Memory Script:
{
   set newname for tempfile 1 to "/u01/app/oracle/oradata/stdorcl/temp01.dbf";
   set newname for tempfile 2 to "/u01/app/oracle/oradata/stdorcl/pdbseed/pdbseed_temp012018-11-24_12-05-19-AM.dbf";
   set newname for tempfile 3 to "/u01/app/oracle/oradata/stdorcl/pdb/pdb_temp012018-11-24_12-15-56-AM.dbf";
   switch clone tempfile all;
   set newname for datafile 1 to "/u01/app/oracle/oradata/stdorcl/system01.dbf";
   set newname for datafile 3 to "/u01/app/oracle/oradata/stdorcl/sysaux01.dbf";
   set newname for datafile 4 to "/u01/app/oracle/oradata/stdorcl/undotbs01.dbf";
   set newname for datafile 5 to "/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf";
   set newname for datafile 6 to "/u01/app/oracle/oradata/stdorcl/users01.dbf";
   set newname for datafile 7 to "/u01/app/oracle/oradata/stdorcl/pdbseed/sysaux01.dbf";
   set newname for datafile 8 to "/u01/app/oracle/oradata/stdorcl/pdb/system01.dbf";
   set newname for datafile 9 to "/u01/app/oracle/oradata/stdorcl/pdb/sysaux01.dbf";
   set newname for datafile 10 to "/u01/app/oracle/oradata/stdorcl/pdb/SAMPLE_SCHEMA_users01.dbf";
   set newname for datafile 11 to "/u01/app/oracle/oradata/stdorcl/pdb/example01.dbf";
   restore from service 'orcl' clone datafile 7, 8, 9, 10, 11;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /u01/app/oracle/oradata/stdorcl/temp01.dbf in control file
renamed tempfile 2 to /u01/app/oracle/oradata/stdorcl/pdbseed/pdbseed_temp012018-11-24_12-05-19-AM.dbf in control file
renamed tempfile 3 to /u01/app/oracle/oradata/stdorcl/pdb/pdb_temp012018-11-24_12-15-56-AM.dbf in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting restore at 2018/11/24 06:41:18
channel c4: starting datafile backup set restore
channel c4: using network backup set from service orcl
channel c4: specifying datafile(s) to restore from backup set
channel c4: restoring datafile 00007 to /u01/app/oracle/oradata/stdorcl/pdbseed/sysaux01.dbf
channel c5: starting datafile backup set restore
channel c5: using network backup set from service orcl
channel c5: specifying datafile(s) to restore from backup set
channel c5: restoring datafile 00008 to /u01/app/oracle/oradata/stdorcl/pdb/system01.dbf
channel c6: starting datafile backup set restore
channel c6: using network backup set from service orcl
channel c6: specifying datafile(s) to restore from backup set
channel c6: restoring datafile 00009 to /u01/app/oracle/oradata/stdorcl/pdb/sysaux01.dbf
channel c5: restore complete, elapsed time: 00:00:38
channel c5: starting datafile backup set restore
channel c5: using network backup set from service orcl
channel c5: specifying datafile(s) to restore from backup set
channel c5: restoring datafile 00010 to /u01/app/oracle/oradata/stdorcl/pdb/SAMPLE_SCHEMA_users01.dbf
channel c4: restore complete, elapsed time: 00:00:54
channel c4: starting datafile backup set restore
channel c4: using network backup set from service orcl
channel c4: specifying datafile(s) to restore from backup set
channel c4: restoring datafile 00011 to /u01/app/oracle/oradata/stdorcl/pdb/example01.dbf
channel c5: restore complete, elapsed time: 00:00:16
channel c6: restore complete, elapsed time: 00:00:54
channel c4: restore complete, elapsed time: 00:00:35
Finished restore at 2018/11/24 06:42:48

sql statement: alter system archive log current
current log archived

contents of Memory Script:
{
   restore clone force from service 'orcl' archivelog from scn ;
   catalog clone datafilecopy
     "/u01/app/oracle/oradata/stdorcl/system01.dbf",
     "/u01/app/oracle/oradata/stdorcl/sysaux01.dbf",
     "/u01/app/oracle/oradata/stdorcl/undotbs01.dbf",
     "/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf",
     "/u01/app/oracle/oradata/stdorcl/users01.dbf";
   switch clone datafile 1 to datafilecopy "/u01/app/oracle/oradata/stdorcl/system01.dbf";
   switch clone datafile 3 to datafilecopy "/u01/app/oracle/oradata/stdorcl/sysaux01.dbf";
   switch clone datafile 4 to datafilecopy "/u01/app/oracle/oradata/stdorcl/undotbs01.dbf";
   switch clone datafile 5 to datafilecopy "/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf";
   switch clone datafile 6 to datafilecopy "/u01/app/oracle/oradata/stdorcl/users01.dbf";
   switch clone datafile all;
}
executing Memory Script

Starting restore at 2018/11/24 06:42:51
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=20
channel c5: starting archived log restore to default destination
channel c5: using network backup set from service orcl
channel c5: restoring archived log
archived log thread=1 sequence=21
channel c6: starting archived log restore to default destination
channel c6: using network backup set from service orcl
channel c6: restoring archived log
archived log thread=1 sequence=22
channel c4: restore complete, elapsed time: 00:00:04
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=23
channel c5: restore complete, elapsed time: 00:00:04
channel c5: starting archived log restore to default destination
channel c5: using network backup set from service orcl
channel c5: restoring archived log
archived log thread=1 sequence=24
channel c6: restore complete, elapsed time: 00:00:04
channel c6: starting archived log restore to default destination
channel c6: using network backup set from service orcl
channel c6: restoring archived log
archived log thread=1 sequence=25
channel c4: restore complete, elapsed time: 00:00:05
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=26
channel c5: restore complete, elapsed time: 00:00:05
channel c5: starting archived log restore to default destination
channel c5: using network backup set from service orcl
channel c5: restoring archived log
archived log thread=1 sequence=27
channel c6: restore complete, elapsed time: 00:00:05
channel c6: starting archived log restore to default destination
channel c6: using network backup set from service orcl
channel c6: restoring archived log
archived log thread=1 sequence=28
channel c4: restore complete, elapsed time: 00:00:00
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=29
channel c5: restore complete, elapsed time: 00:00:00
channel c5: starting archived log restore to default destination
channel c5: using network backup set from service orcl
channel c5: restoring archived log
archived log thread=1 sequence=30
channel c4: restore complete, elapsed time: 00:00:00
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=31
channel c6: restore complete, elapsed time: 00:00:01
channel c5: restore complete, elapsed time: 00:00:01
channel c4: restore complete, elapsed time: 00:00:00
Finished restore at 2018/11/24 06:43:02

cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/system01.dbf RECID=7 STAMP=
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/sysaux01.dbf RECID=8 STAMP=
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/undotbs01.dbf RECID=9 STAMP=
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf RECID=10 STAMP=
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/users01.dbf RECID=11 STAMP=

datafile 1 switched to datafile copy
input datafile copy RECID=7 STAMP= file name=/u01/app/oracle/oradata/stdorcl/system01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=8 STAMP= file name=/u01/app/oracle/oradata/stdorcl/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=9 STAMP= file name=/u01/app/oracle/oradata/stdorcl/undotbs01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=10 STAMP= file name=/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=11 STAMP= file name=/u01/app/oracle/oradata/stdorcl/users01.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=12 STAMP= file name=/u01/app/oracle/oradata/stdorcl/pdbseed/sysaux01.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=13 STAMP= file name=/u01/app/oracle/oradata/stdorcl/pdb/system01.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=14 STAMP= file name=/u01/app/oracle/oradata/stdorcl/pdb/sysaux01.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=15 STAMP= file name=/u01/app/oracle/oradata/stdorcl/pdb/SAMPLE_SCHEMA_users01.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=16 STAMP= file name=/u01/app/oracle/oradata/stdorcl/pdb/example01.dbf

contents of Memory Script:
{
   set until scn ;
   recover standby clone database delete archivelog;
}
executing Memory Script
executing command: SET until clause

Starting recover at 2018/11/24 06:43:03
starting media recovery

archived log for thread 1 with sequence 25 is already on disk as file /u01/arch/1_25_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u01/arch/1_26_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u01/arch/1_27_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u01/arch/1_28_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u01/arch/1_29_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u01/arch/1_30_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u01/arch/1_31_.arc
archived log file name=/u01/arch/1_25_.arc thread=1 sequence=25
archived log file name=/u01/arch/1_26_.arc thread=1 sequence=26
archived log file name=/u01/arch/1_27_.arc thread=1 sequence=27
archived log file name=/u01/arch/1_28_.arc thread=1 sequence=28
archived log file name=/u01/arch/1_29_.arc thread=1 sequence=29
archived log file name=/u01/arch/1_30_.arc thread=1 sequence=30
archived log file name=/u01/arch/1_31_.arc thread=1 sequence=31
media recovery complete, elapsed time: 00:00:00
Finished recover at 2018/11/24 06:43:05
Finished Duplicate Db at 2018/11/24 06:43:09
released channel: c1
released channel: c2
released channel: c3
released channel: c4
released channel: c5
released channel: c6
select SWITCHOVER_STATUS,DATABASE_ROLE,PROTECTION_MODE,OPEN_MODE FROM v$database; 

The default protection mode should be MAXIMUM PERFORMANCE.

15. Enable real-time apply on the standby:

In 12c the USING CURRENT LOGFILE clause is no longer needed for real-time apply:

SQL> alter database recover managed standby database disconnect;
SQL> alter database open;    (if not already open)
SQL> SELECT NAME,DATABASE_ROLE,PROTECTION_MODE,OPEN_MODE FROM V$DATABASE;

NAME  DATABASE_ROLE    PROTECTION_MODE      OPEN_MODE
----- ---------------- -------------------- --------------------
RAC   PHYSICAL STANDBY MAXIMUM PERFORMANCE  READ ONLY WITH APPLY
SQL> select process,status from v$managed_standby;

PROCESS   STATUS
--------- ------------
ARCH      CLOSING
ARCH      CONNECTED
RFS       IDLE
RFS       IDLE
MRP0      APPLYING_LOG

Switch log files several times on the primary, then check that the primary and standby are consistent:

SQL> alter system switch logfile;
SQL> select thread#,max(sequence#) from v$log_history group by thread#;
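The comparison implied by that query, same max(sequence#) per thread on both sites, can be sketched in a few lines. This is illustrative only; in practice the rows would come from two database connections, and the tuple format is an assumption:

```python
def max_seq_per_thread(rows):
    """rows: iterable of (thread#, sequence#) pairs from v$log_history."""
    out = {}
    for thread, seq in rows:
        out[thread] = max(seq, out.get(thread, 0))
    return out

def in_sync(primary_rows, standby_rows):
    """True when every thread's max sequence matches on both sites."""
    return max_seq_per_thread(primary_rows) == max_seq_per_thread(standby_rows)
```

If `in_sync` returns False, the per-thread dictionaries show exactly which thread is lagging.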

The following query checks Data Guard status:

SQL> select thread#,sequence#,archived,applied from v$archived_log order by sequence# desc; 
On the primary:

# su - oracle
$ sqlplus / as sysdba
SQL> create table test as select * from dba_objects where rownum < 101;

On the standby:

# su - oracle
$ sqlplus / as sysdba
SQL> select count(*) from test;

  COUNT(*)
----------
       100

A result of 100 shows real-time synchronization is working. Then clean up on the primary:

SQL> drop table test purge;

2. Maintenance and Usage

2.1 Check archive log sequence numbers on the primary and standby

Check the log sequence on the primary:

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            +ARCH
Oldest online log sequence     159
Next log sequence to archive   160
Current log sequence           160
Check the log sequence on the standby:

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/arch_std
Oldest online log sequence     159
Next log sequence to archive   0
Current log sequence           160

Verify that the Current log sequence is the same on the primary and the standby.

2.2 Check the standby's archive log sync status and apply metrics

Run on the standby:

Check log apply status:

SQL> select thread#,sequence#,first_time,next_time,applied from v$archived_log where applied='YES';
SQL> select thread#,sequence#,first_time,next_time,applied from v$archived_log where applied='NO';
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS, DELAY_MINS FROM V$MANAGED_STANDBY;

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS DELAY_MINS
--------- ------------ ---------- ---------- ---------- ---------- ----------
ARCH      CLOSING               1        168          1          1          0
ARCH      CLOSING               1        169          1         77          0
ARCH      CONNECTED             0          0          0          0          0
ARCH      CLOSING               2         96          1         31          0
RFS       IDLE                  0          0          0          0          0
RFS       IDLE                  0          0          0          0          0
RFS       IDLE                  0          0          0          0          0
RFS       IDLE                  2         97       4557          1          0
RFS       IDLE                  0          0          0          0          0
RFS       IDLE                  0          0          0          0          0
RFS       IDLE                  1        170       3917          1          0
MRP0      APPLYING_LOG          2         97       4555                     0

Check the standby's apply rate:

select * from V$RECOVERY_PROGRESS; 

 

The dynamic performance views can also be queried:

SQL> select message from V$DATAGUARD_STATUS order by TIMESTAMP;

This view shows how much redo has been generated on the primary but not yet applied on the standby:

MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC2: Becoming the heartbeat ARCH
ARC2: Becoming the active heartbeat ARCH
ARC3: Archival started
RFS[1]: Assigned to RFS process 1723
RFS[2]: Assigned to RFS process 1725
RFS[3]: Assigned to RFS process 1720
ARC3: Beginning to archive thread 1 sequence 100 (-)
ARC3: Completed archiving thread 1 sequence 100 (0-0)
Primary database is in MAXIMUM PERFORMANCE mode
ARC0: Completed archiving thread 1 sequence 105 (0-0)
ARC0: Beginning to archive thread 1 sequence 105 (-)
RFS[5]: Assigned to RFS process 1742
RFS[4]: Assigned to RFS process 1740
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery starting Real Time Apply
Media Recovery Waiting for thread 2 sequence 18

The last line shows recovery waiting for the thread 2 log to be applied.
Notes:
1. When checking log synchronization between primary and standby, APPLIED=YES means the log has been applied and NO means it has not; judge against the primary's current sequence and the time window.
2. With real-time apply, MRP is usually in APPLYING_LOG; under Active Data Guard it is usually WAIT_FOR_LOG.
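Note 1 above can be scripted: from the (thread#, sequence#, applied) rows of v$archived_log, list the sequences that have not yet been applied. A minimal sketch, assuming rows arrive as plain tuples from a database cursor:

```python
def unapplied(rows):
    """rows: (thread#, sequence#, applied) tuples from v$archived_log.

    Returns {thread#: sorted list of sequences with APPLIED != 'YES'}.
    Threads with no backlog are omitted.
    """
    gaps = {}
    for thread, seq, applied in rows:
        if applied != "YES":
            gaps.setdefault(thread, []).append(seq)
    return {t: sorted(seqs) for t, seqs in gaps.items()}
```

An empty result means every archived log known to the standby has been applied.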

2.3 Check the primary and standby modes

Check the protection mode:

On the primary:

SQL> SELECT NAME,DATABASE_ROLE,PROTECTION_MODE,OPEN_MODE FROM V$DATABASE;

NAME      DATABASE_ROLE    PROTECTION_MODE      OPEN_MODE
--------- ---------------- -------------------- --------------------
ORCLDB    PRIMARY          MAXIMUM PERFORMANCE  READ WRITE

The default is maximum performance mode.
On the standby:

SQL> SELECT NAME,DATABASE_ROLE,PROTECTION_MODE,OPEN_MODE FROM V$DATABASE;

NAME      DATABASE_ROLE    PROTECTION_MODE      OPEN_MODE
--------- ---------------- -------------------- --------------------
ORCLDB    PHYSICAL STANDBY MAXIMUM PERFORMANCE  READ ONLY WITH APPLY

The standby is in maximum performance mode with redo apply enabled.

Check the apply mode (whether real-time apply is enabled):

Query the v$archive_dest_status view; when real-time apply is on, recovery_mode is MANAGED REAL TIME APPLY. For example:

On the standby:
SQL> select recovery_mode from v$archive_dest_status where DEST_NAME='LOG_ARCHIVE_DEST_1';

RECOVERY_MODE
-----------------------
MANAGED REAL TIME APPLY

On the primary:
SQL> select recovery_mode from v$archive_dest_status where DEST_NAME='LOG_ARCHIVE_DEST_2';

RECOVERY_MODE
-----------------------
MANAGED REAL TIME APPLY

2.4 Archive log maintenance

Archive log cleanup script (the RMAN commands in the heredoc were cut off in the original; a typical DELETE is shown):

#!/bin/bash
export ORACLE_SID=orcl
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export ORACLE_BASE=/u01/app/oracle
time=`date +"20%y%m%d%H%M%S"`
logdir="/home/oracle/arch_clean_log"
find $logdir -type f -name "*.log" -mtime +7 -exec rm -rf {} \;
$ORACLE_HOME/bin/rman target / log=$logdir/archClean_$time.log <<EOF
delete noprompt archivelog all completed before 'sysdate-7';
exit;
EOF
  
Schedule it in cron:

0 22 * * * /bin/bash /home/oracle/clean_arch.sh >/dev/null 2>&1
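The script keeps roughly a week of logs via `find -mtime +7`, which counts age in whole 24-hour periods. A sketch of that cutoff rule (an approximation of find's semantics, shown for illustration):

```python
import time

def older_than_days(mtime, days=7, now=None):
    """Approximate `find -mtime +days`: True when the file's age,
    measured in completed 24-hour periods, strictly exceeds `days`."""
    now = time.time() if now is None else now
    age_periods = int((now - mtime) // 86400)
    return age_periods > days

# A file 8 full days old is deleted; one just under 7 days old is kept.
now = 1_000_000_000
assert older_than_days(now - 8 * 86400, 7, now)
assert not older_than_days(now - 6 * 86400, 7, now)
```

Note that a file aged exactly 7 periods is kept, matching find's `+7` (strictly more than 7).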

2.5 Handling insufficient disk space on the standby

When the primary creates a new datafile and the standby has no space for it, first expand the standby's storage, then switch the standby's file management to manual and create the corresponding datafiles by hand.

1. Create a new tablespace test on the primary, adding datafiles 7 and 8:

SQL> select file#,name from v$datafile;

     FILE# NAME
---------- --------------------------------------------------
         1 +ORADATA/orcl/datafile/system.273.
         2 +ORADATA/orcl/datafile/sysaux.272.
         3 +ORADATA/orcl/datafile/undotbs1.271.
         4 +ORADATA/orcl/datafile/undotbs2.269.
         5 +ORADATA/orcl/datafile/users.268.
         6 +ORADATA/orcl/datafile/admin.260.
         7 +ORADATA/orcl/datafile/test.259.
         8 +ORADATA/orcl/datafile/test.276.

2. Query the datafiles on the standby:

SQL> select file#,name from v$datafile;

     FILE# NAME
---------- --------------------------------------------------
         1 /u01/app/oracle/oradata/orcl/datafile/system.273.
         2 /u01/app/oracle/oradata/orcl/datafile/sysaux.272.
         3 /u01/app/oracle/oradata/orcl/datafile/undotbs1.271.
         4 /u01/app/oracle/oradata/orcl/datafile/undotbs2.269.
         5 /u01/app/oracle/oradata/orcl/datafile/users.268.
         6 /u01/app/oracle/oradata/orcl/datafile/admin.260.
         7 /u01/app/oracle/product/11.2.0/db_1/dbs/UNNAMED00007
         8 /u01/app/oracle/product/11.2.0/db_1/dbs/UNNAMED00008

Datafiles 7 and 8 were never actually created on the standby; only placeholder (UNNAMED) files were registered.

3. Once the standby has enough disk space (e.g. a new disk has been attached), manually create the datafiles to match the primary:

-- File management defaults to automatic; switch it to manual
SQL> alter system set standby_file_management='MANUAL';
-- /u01 is out of space, so create the datafiles under /u02
SQL> alter database create datafile '/u01/app/oracle/product/11.2.0/db_1/dbs/UNNAMED00007' as '/u02/oradata/test_1.dbf';
SQL> alter database create datafile '/u01/app/oracle/product/11.2.0/db_1/dbs/UNNAMED00008' as '/u02/oradata/test_2.dbf';
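The rename pattern in step 3, each UNNAMED placeholder becomes a real file under the new mount point, is mechanical, so the DDL can be generated. An illustrative sketch (the `test_N.dbf` naming scheme matches the commands above but is otherwise an assumption):

```python
import os

def create_datafile_sql(unnamed_paths, target_dir, prefix="test"):
    """Generate ALTER DATABASE CREATE DATAFILE statements that map
    UNNAMED placeholder files to real files under target_dir."""
    stmts = []
    for i, src in enumerate(unnamed_paths, start=1):
        dst = os.path.join(target_dir, f"{prefix}_{i}.dbf")
        stmts.append(f"alter database create datafile '{src}' as '{dst}';")
    return stmts
```

Feeding it the two UNNAMED paths from step 2 reproduces the two statements shown above.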

4. Switch standby file management back to automatic:

SQL> alter system set standby_file_management='AUTO'; 
5. Restart redo apply:

SQL> alter database recover managed standby database using current logfile disconnect;

6. Check that the new tablespace's data is in sync between the primary and the standby.

Note: during synchronization the system first creates a placeholder file named like /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/UNNAMED000XX. If there is enough space, the sync succeeds and the file is renamed to its proper name and path.

2.6 Rebuilding the standby

1. Drop the standby database:

SQL> shutdown immediate;
SQL> startup restrict mount;    -- only SYSDBA users can connect
SQL> select name from v$database;

NAME
---------
ORCLDB

SQL> drop database;

2. Start the standby to nomount again using the standby pfile.

3. Re-create the standby from the primary with RMAN duplicate.

3. Decommissioning the Data Guard Environment

3.1 Standby Database Alone

1. Remove the Data Guard parameters from the standby's (s)pfile:

LOG_ARCHIVE_CONFIG
DB_FILE_NAME_CONVERT
LOG_FILE_NAME_CONVERT
LOG_ARCHIVE_DEST_STATE_2
STANDBY_FILE_MANAGEMENT
FAL_SERVER
FAL_CLIENT

2. Activate the standby so it leaves the Data Guard configuration:

SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP

3. Check the database status:

SQL> select database_role,switchover_status,open_mode from v$database;

DATABASE_ROLE    SWITCHOVER_STATUS    OPEN_MODE
---------------- -------------------- --------------------
PRIMARY          NOT ALLOWED          READ WRITE

The database has left the Data Guard configuration.

4. Test reads and writes on the former standby.

See MOS Doc ID .1

 

3.2 Detaching the primary from Data Guard

1. Downgrade the protection mode to maximum performance:

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

2. Remove the Data Guard parameters from the (S)PFILE:

LOG_ARCHIVE_CONFIG
DB_FILE_NAME_CONVERT
LOG_FILE_NAME_CONVERT
LOG_ARCHIVE_DEST_n (pointing to the standby database and valid for STANDBY_LOGFILES)
LOG_ARCHIVE_DEST_STATE_n
DG_BROKER_START
DG_BROKER_CONFIG_FILE1
DG_BROKER_CONFIG_FILE2
STANDBY_ARCHIVE_DEST
STANDBY_FILE_MANAGEMENT
FAL_SERVER
FAL_CLIENT
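When the instance runs on an spfile, clearing each of these comes down to a series of ALTER SYSTEM RESET statements. A sketch that turns a parameter list into the SQL (the LOG_ARCHIVE_DEST_n entries must be numbered concretely; the list below is an assumption based on this deployment):

```python
def reset_sql(params, scope="spfile"):
    """Generate ALTER SYSTEM RESET statements for spfile parameters."""
    return [f"alter system reset {p} scope={scope} sid='*';" for p in params]

# Concrete parameter names for this deployment (LOG_ARCHIVE_DEST_2 is the
# standby destination here; adjust the numbering to your own setup).
dg_params = [
    "log_archive_config", "db_file_name_convert", "log_file_name_convert",
    "log_archive_dest_2", "log_archive_dest_state_2", "dg_broker_start",
    "dg_broker_config_file1", "dg_broker_config_file2",
    "standby_file_management", "fal_server", "fal_client",
]
```

Running `reset_sql(dg_params)` yields one RESET statement per parameter, ready to paste into SQL*Plus.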

3. Drop all standby redo log groups on the primary:

SQL> SELECT GROUP# FROM V$STANDBY_LOG;
SQL> alter database drop standby logfile group 3;

4. Remove the Data Guard broker configuration files:

SQL> show parameter DG_BROKER_CONFIG;

By default these are $ORACLE_HOME/dbs/dr1<db_unique_name>.dat and $ORACLE_HOME/dbs/dr2<db_unique_name>.dat.
See MOS Doc ID .1
Publisher: 全栈程序员-站长. Original link: https://javaforall.net/223957.html