2009-12-15

Oracle LogMiner (3)

Environment

Oracle 10g (10.2.0.4) on Solaris 10

Purpose

Extract data changes for a given time interval and filter.
Time interval:
from 2009-12-09 10:35 to 2009-12-15 11:10
Filter:
table_space='TABLES' and operation in ('UPDATE','INSERT','DELETE');

Solution

Switch logs to make sure that the latest data changes are in archived logs.
alter system switch logfile;

Find needed logs:
select * from v$archived_log
where completion_time > to_date('2009-12-09 10:35','yyyy-mm-dd hh24:mi')
order by first_time;

Create SQL script to add logs to LogMiner session:
select 'DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '''||name||''');' from v$archived_log
where first_time > to_date('2009-12-06 10:35','yyyy-mm-dd hh24:mi')
order by first_time;

After adding the DBMS_LOGMNR.NEW option to the first log, the script looks like this:

begin
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/archive/DHMERR03/disk/1_628_689683506.dbf',OPTIONS => DBMS_LOGMNR.NEW);
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/archive/DHMERR03/disk/1_629_689683506.dbf');
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/archive/DHMERR03/disk/1_630_689683506.dbf');
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/archive/DHMERR03/disk/1_631_689683506.dbf');
end;

Alternatively, you can define only the first log and use the CONTINUOUS_MINE option to add the following logs automatically.
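For example, a sketch of that variant (not tested here), using the same first log and time window:
begin
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/archive/DHMERR03/disk/1_628_689683506.dbf',OPTIONS => DBMS_LOGMNR.NEW);
DBMS_LOGMNR.START_LOGMNR(startTime => to_date('2009-12-09 10:35','yyyy-mm-dd hh24:mi'), endTime => to_date('2009-12-15 11:10','yyyy-mm-dd hh24:mi'), OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.CONTINUOUS_MINE);
end;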

Start LogMiner:
begin
DBMS_LOGMNR.START_LOGMNR(startTime => to_date('2009-12-09 10:35','yyyy-mm-dd hh24:mi'), endTime => to_date('2009-12-15 11:10','yyyy-mm-dd hh24:mi'), OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
end;

Since PK supplemental logging was used in this case, the ROWID could be excluded by adding the DBMS_LOGMNR.NO_ROWID_IN_STMT option to the previous statement. The OPTIONS clause would then look like:
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.NO_ROWID_IN_STMT)

Return the data changes using the given filter:
select * from V$LOGMNR_CONTENTS
where table_space='TABLES' and operation in ('UPDATE','INSERT','DELETE');

If necessary, you can record the result into a table.
create table alexz.LOGMINER_CONTENTS_20091215 tablespace users
as select * from V$LOGMNR_CONTENTS
where table_space='TABLES' and operation in ('UPDATE','INSERT','DELETE');

grant select on alexz.LOGMINER_CONTENTS_20091215 to public;

Finish the LogMiner session:
EXECUTE DBMS_LOGMNR.END_LOGMNR;

2009-12-01

Oracle LogMiner (2) supplemental logging

Environment

Oracle 10g (10.2.0.4) on Solaris 10

Purpose

Test LogMiner with different supplemental logging modes.
Compare the actual SQL with the SQL reconstructed by LogMiner and choose the proper supplemental logging mode for re-applying application changes.

Prepare database

CREATE TABLE "ALEXZ"."TBL1"
("ID" NUMBER,
 "N1" NUMBER,
 "V1" VARCHAR2(77),
 "D1" DATE, "D2" DATE,
PRIMARY KEY ("ID") VALIDATE);

Select log_mode, SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
LOG_MODE     SUPPLEME SUP SUP
------------ -------- --- ---
NOARCHIVELOG NO       NO  NO

Startup mount;
alter database archivelog;
alter database add supplemental log data (primary key) columns;
alter database open;

Logminer session

Choose the current log file.
select * from v$log;
select * from v$logfile;
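To get only the current group's member in one step, you can join the two views, for example:
select f.member
from v$log l, v$logfile f
where l.group# = f.group# and l.status = 'CURRENT';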

Start the LogMiner session.
EXECUTE sys.DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/redo/01/DHMERT01/DHMERT01redo01.dbf', OPTIONS => sys.DBMS_LOGMNR.NEW);
EXECUTE sys.DBMS_LOGMNR.START_LOGMNR(OPTIONS => sys.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + sys.DBMS_LOGMNR.COMMITTED_DATA_ONLY);

select * from V$LOGMNR_CONTENTS
where seg_name ='TBL1' order by CSCN;

Task 1

PK supplemental logging
Actual SQL:
insert into tbl1 values (1,1,'aa',sysdate,sysdate);
update tbl1 set d2=sysdate where id=1;

SQL_REDO from Logminer:
insert into "ALEXZ"."TBL1"("ID","N1","V1","D1","D2") values ('1','1','aa',TO_DATE('2009-12-01 12:29', 'yyyy-mm-dd hh24:mi'),TO_DATE('2009-12-01 12:29', 'yyyy-mm-dd hh24:mi'));

update "ALEXZ"."TBL1" set "D2" = TO_DATE('2009-12-01 12:29', 'yyyy-mm-dd hh24:mi') where "ID" = '1' and "D2" = TO_DATE('2009-12-01 12:29', 'yyyy-mm-dd hh24:mi') and ROWID = 'AAAip0AADAAA6GqAAA';

Task 2

All columns supplemental logging
alter database drop supplemental log data (primary key) columns;
alter database add supplemental log data (all) columns;
Select log_mode, SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;

LOG_MODE     SUPPLEME SUP SUP
------------ -------- --- ---
ARCHIVELOG   IMPLICIT NO  NO

Actual SQL:
insert into tbl1 values (2,2,'aa',sysdate,sysdate);
update tbl1 set d2=sysdate+1 where id=2;

SQL_REDO from Logminer:
insert into "ALEXZ"."TBL1"("ID","N1","V1","D1","D2") values ('2','2','aa',TO_DATE('2009-12-01 12:35', 'yyyy-mm-dd hh24:mi'),TO_DATE('2009-12-01 12:35', 'yyyy-mm-dd hh24:mi'));

update "ALEXZ"."TBL1" set "D2" = TO_DATE('2009-12-02 12:36', 'yyyy-mm-dd hh24:mi') where "ID" = '2' and "N1" = '2' and "V1" = 'aa' and "D1" = TO_DATE('2009-12-01 12:35', 'yyyy-mm-dd hh24:mi') and "D2" = TO_DATE('2009-12-01 12:35', 'yyyy-mm-dd hh24:mi') and ROWID = 'AAAip0AADAAA6GqAAB';

Task 3

All columns + PK supplemental logging (it seems to work like "all columns")
alter database add supplemental log data (primary key) columns;
LOG_MODE     SUPPLEME SUP SUP
------------ -------- --- ---
ARCHIVELOG   IMPLICIT YES NO

insert into tbl1 values (3,3,'aa',sysdate,sysdate);
update tbl1 set d2=sysdate+1 where id=3;

SQL_REDO from Logminer:

insert into "ALEXZ"."TBL1"("ID","N1","V1","D1","D2") values ('3','3','aa',TO_DATE('2009-12-01 12:37', 'yyyy-mm-dd hh24:mi'),TO_DATE('2009-12-01 12:37', 'yyyy-mm-dd hh24:mi'));

update "ALEXZ"."TBL1" set "D2" = TO_DATE('2009-12-02 12:38', 'yyyy-mm-dd hh24:mi') where "ID" = '3' and "N1" = '3' and "V1" = 'aa' and "D1" = TO_DATE('2009-12-01 12:37', 'yyyy-mm-dd hh24:mi') and "D2" = TO_DATE('2009-12-01 12:37', 'yyyy-mm-dd hh24:mi') and ROWID = 'AAAip0AADAAA6GqAAC';

Task 4

Update without PK
PK supplemental logging
alter database drop supplemental log data (all) columns;
alter database drop supplemental log data (primary key) columns;
LOG_MODE     SUPPLEME SUP SUP
------------ -------- --- ---
ARCHIVELOG   NO       NO  NO

alter database add supplemental log data (primary key) columns;
LOG_MODE     SUPPLEME SUP SUP
------------ -------- --- ---
ARCHIVELOG   IMPLICIT YES NO

update tbl1 set v1='bb' where n1<3;

SQL_REDO from Logminer:
update "ALEXZ"."TBL1" set "V1" = 'bb' where "ID" = '1' and "V1" = 'aa' and ROWID = 'AAAip0AADAAA6GqAAA';
update "ALEXZ"."TBL1" set "V1" = 'bb' where "ID" = '2' and "V1" = 'aa' and ROWID = 'AAAip0AADAAA6GqAAB';

Task 5

Update without PK
Minimal supplemental logging
alter database drop supplemental log data (primary key) columns;
alter database add supplemental log data;
LOG_MODE     SUPPLEME SUP SUP
------------ -------- --- ---
ARCHIVELOG   YES      NO  NO

update tbl1 set v1='cc' where n1<3;

SQL_REDO from Logminer:
update "ALEXZ"."TBL1" set "V1" = 'cc' where "V1" = 'bb' and ROWID = 'AAAip0AADAAA6GqAAA';
update "ALEXZ"."TBL1" set "V1" = 'cc' where "V1" = 'bb' and ROWID = 'AAAip0AADAAA6GqAAB';

2009-11-20

Restoring 10g database to another location

Restoring M02 (10.2.0.4) from a cold RMAN backup of M03 (10.2.0.3).
1.    stop all for M02
2.    delete M02 data files and redo
3.    change owner to ora10P1 (10.2.0.3 owner) for oradata and redo directories
/oracle/oradata/01/DHMERM02
/oracle/redo/DHMERM02
4.    copy M03 init file to new home (10.2.0.3)
5.    change $ORACLE_HOME and $ORACLE_SID to new home and SID=M03
6.    startup nomount as ora10P1
7.    restore control file
restore controlfile from '/oracle/oradata/01/DHMERM02/backup/DHMERM03/2009-09-17/dhmerm03_cf_1ukpfn6a_1_1';
8.    mount database
9.    copy the backup files (or mount them over NFS)
10.    catalog the backup pieces
catalog backuppiece '/oracle/oradata/01/DHMERM02/backup/DHMERM03/2009-09-17/dhmerm03_cf_1ukpfn6a_1_1';
catalog backuppiece '/oracle/oradata/01/DHMERM02/backup/DHMERM03/2009-09-17/dhmerm03_db_1mkpfesv_1_1';
11.    create soft links for the oradata and redo directories so that the original M03 paths point to the M02 directories.
Alternatively, you can use SET NEWNAME FOR DATAFILE|TEMPFILE to rename the files (see the sketch at the end of this procedure).
12.    restore database with RMAN
run
{
ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
ALLOCATE CHANNEL d2 DEVICE TYPE DISK;
restore database;
}
alter database open resetlogs;
13.    prepare a CREATE CONTROLFILE script to rename the database and files
14.    shutdown M03 and startup nomount M02
15.    create control file and open M02 with resetlogs
16.    add file to TEMP tablespace
17.    shutdown M02
18.    change ownership of oradata and redo directories and files to ora10M2 (10.2.0.4 owner)
19.    upgrade M02 to 10.2.0.4
lsnrctl start listener_name
sqlplus / as sysdba
startup upgrade
@?/rdbms/admin/utlu102i.sql
@?/rdbms/admin/catupgrd.sql
shutdown immediate
startup
@?/rdbms/admin/utlrp.sql
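
A sketch of the SET NEWNAME alternative from step 11 (the datafile names and paths here are examples only; one SET NEWNAME is needed per datafile):
run
{
ALLOCATE CHANNEL d1 DEVICE TYPE DISK;
set newname for datafile '/oracle/oradata/01/DHMERM03/system01.dbf' to '/oracle/oradata/01/DHMERM02/system01.dbf';
set newname for datafile '/oracle/oradata/01/DHMERM03/undotbs01.dbf' to '/oracle/oradata/01/DHMERM02/undotbs01.dbf';
restore database;
switch datafile all;
}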

2009-09-13

Convert database from Solaris to AIX on destination server

Oracle RDBMS 10.2.0.3

1. Make sure the source and destination Oracle homes are the same version.

2. Get the destination platform name:
select PLATFORM_NAME from V$DB_TRANSPORTABLE_PLATFORM;

3. Open the database read only and check whether it is OK to convert.
startup mount;
alter database open read only;

set serveroutput on
declare
db_ready boolean;
begin
db_ready := dbms_tdb.check_db('AIX-Based Systems (64-bit)',0);
end;
/
-- if no message is returned, it is OK to convert the database

set serveroutput on
declare
external boolean;
begin
/* value of external is ignored, but with SERVEROUTPUT set to ON
* dbms_tdb.check_external displays report of external objects
* on console */
external := dbms_tdb.check_external;
end;
/

output:
The following directories exist in the database:
SYS.WORK_DIR, SYS.ADMIN_DIR, SYS.DATA_PUMP_DIR

4. Run the RMAN command to create the conversion scripts and the init.ora file:
FORMAT defines the location of the init.ora file.
DB_FILE_NAME_CONVERT defines the final datafile location on the destination server.
If the source database is not available (only a backup), you can skip this step; the convert and transport scripts can be created manually.

CONVERT DATABASE ON TARGET PLATFORM
CONVERT SCRIPT '/oracle/oradata/01/DHMERM03/testdb/convertscript.rman'
TRANSPORT SCRIPT '/oracle/oradata/01/DHMERM03/testdb/transportscript.sql'
new database 'dhmerm10'
FORMAT '/oracle/oradata/01/DHMERM03/testdb/%U'
DB_FILE_NAME_CONVERT =('DHMERM03/testdb/dhmerm09','DHMERM04/testdb/dhmerm10');

5. Prepare the environment file. Unset LD_LIBRARY_PATH and LD_LIBRARY_PATH_64 for AIX.
6. Copy the init.ora file to $ORACLE_HOME/dbs on the destination server, then rename and edit it.
Make sure the *_dump destination directories exist.

7. Copy convertscript.rman and transportscript.sql to the destination server.


8. Identify datafiles that contain undo data by running the following query:

select distinct(file_name)
from dba_data_files a, dba_rollback_segs b
where a.tablespace_name=b.tablespace_name;

Copy the unconverted datafiles (system, undo) into the stage area on the destination server.
Copy the rest of the datafiles to their final location on the destination server.

9. Edit the convert script and keep the CONVERT DATAFILE commands only for the datafiles that contain undo data.
CONVERT DATAFILE points to the unconverted files (local stage area or NFS to the source file system).
FORMAT defines the final destination.
You can also convert all datafiles, especially if the source database is not available and you are not sure about the undo segments.

RUN {
CONVERT DATAFILE '/oracle/oradata/01/DHMERM04/testdb/stage/system01.dbf'
FROM PLATFORM 'Solaris[tm] OE (64-bit)'
FORMAT '/oracle/oradata/01/DHMERM04/testdb/dhmerm10/system01.dbf';

CONVERT DATAFILE '/oracle/oradata/01/DHMERM04/testdb/stage/undotbs01.dbf'
FROM PLATFORM 'Solaris[tm] OE (64-bit)'
FORMAT '/oracle/oradata/01/DHMERM04/testdb/dhmerm10/undotbs01.dbf';
}

10. Create a script for a temporary control file, create_temp_cf.sql. The datafile paths must point to the current, unconverted files.
You can create it from the transport script or from a control file trace backup (alter database backup controlfile to trace).

CREATE CONTROLFILE REUSE SET DATABASE "DHMERM10" RESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/oracle/oradata/01/DHMERM03/testdb/dhmerm10/redo01.log' SIZE 50M,
GROUP 2 '/oracle/oradata/01/DHMERM03/testdb/dhmerm10/redo02.log' SIZE 50M,
GROUP 3 '/oracle/oradata/01/DHMERM03/testdb/dhmerm10/redo03.log' SIZE 50M
DATAFILE
'/oracle/oradata/01/DHMERM04/testdb/stage/system01.dbf',
'/oracle/oradata/01/DHMERM04/testdb/stage/undotbs01.dbf',
'/oracle/oradata/01/DHMERM04/testdb/dhmerm10/sysaux01.dbf',
'/oracle/oradata/01/DHMERM04/testdb/dhmerm10/users01.dbf'
CHARACTER SET AL32UTF8
;

11. Check environment variables:
echo $ORACLE_SID
echo $ORACLE_HOME

12. Start up the instance on the destination server and create the temporary control file.
sqlplus / as sysdba
startup nomount
@create_temp_cf.sql

13. Run conversion:
rman target /
@convertscript.rman
shutdown immediate;

14. Configure listener.
15. Create password file.
16. Edit the transport script, change the names of the log and temp files, and check the other paths.
sqlplus / as sysdba
@transportscript.sql

Transport script:


STARTUP NOMOUNT
CREATE CONTROLFILE REUSE SET DATABASE "DHMERM10" RESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/oracle/oradata/01/DHMERM04/testdb/dhmerm10/redo01.log' SIZE 50M,
GROUP 2 '/oracle/oradata/01/DHMERM04/testdb/dhmerm10/redo02.log' SIZE 50M,
GROUP 3 '/oracle/oradata/01/DHMERM04/testdb/dhmerm10/redo03.log' SIZE 50M
DATAFILE
'/oracle/oradata/01/DHMERM04/testdb/dhmerm10/system01.dbf',
'/oracle/oradata/01/DHMERM04/testdb/dhmerm10/undotbs01.dbf',
'/oracle/oradata/01/DHMERM04/testdb/dhmerm10/sysaux01.dbf',
'/oracle/oradata/01/DHMERM04/testdb/dhmerm10/users01.dbf'
CHARACTER SET AL32UTF8
;
ALTER DATABASE OPEN RESETLOGS;
ALTER TABLESPACE TEMP ADD TEMPFILE '/oracle/oradata/01/DHMERM04/testdb/dhmerm10/temp01.dbf' SIZE 20971520 AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
@@ ?/rdbms/admin/utlirp.sql
SHUTDOWN IMMEDIATE
STARTUP
-- The following step will recompile all PL/SQL modules.
@@ ?/rdbms/admin/utlrp.sql

Useful info:
Metalink 415884.1 Cross Platform Database Conversion with same Endian
Metalink 414878.1 Cross-Platform Migration on Destination Host Using Rman Convert Database
Metalink 732053.1 Avoid Datafile Conversion during Transportable Database
Metalink 417455.1 Datafiles are not converted in parallel for transportable database
Platform Migration using Transportable Database
RMAN Cross-Platform Transportable Databases and Tablespaces


2009-08-28

OAS port conflict

Some commands to help with port conflicts:

#Solaris
lsof -i |grep :1854
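If lsof is not installed, netstat can at least confirm whether the port is in use:
netstat -an | grep 1854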

cd $ORACLE_HOME/sysman
find . -name "*.xml" -exec grep "<rmi-server port=" {} \;

emctl config iasconsole port [portNumber]
emctl config iasconsole rmiport [portNumber]
emctl config agent port [portNumber]

emctl status agent
emctl status iasconsole

$ORACLE_HOME/opmn/bin/opmnctl status -l

2009-08-27

Oracle password file

Just a syntax reminder.

RDBMS 11.1
orapwd FILE=filename [ENTRIES=numusers] [FORCE={Y|N}] [IGNORECASE={Y|N}] [NOSYSDBA={Y|N}]
Unix file name: orapwsid
Windows file name: PWDsid.ora

RDBMS 10.2 for UNIX-Based Operating Systems
orapwd file=orapwsid password=password entries=max_users force=y/n
If FILE is not a full path, the file is created in the current directory.
The password file should be in the $ORACLE_HOME/dbs directory.
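For example (the SID and password here are placeholders):
cd $ORACLE_HOME/dbs
orapwd file=orapwORCL password=secret entries=5 force=y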


RDBMS 10.2 for Microsoft Windows
orapwd FILE=PWDsid.ora PASSWORD=password ENTRIES=max_users FORCE=y/n
If FILE is not a full path, the file is created in the current directory.
Oracle Database looks in the registry for the value of parameter ORA_SID_PWFILE. If no value is specified,
then it looks in the registry for the value of parameter ORA_PWFILE, which points to a file containing usernames,
passwords, and privileges. If that is not set, then it uses the default.
So the lookup order is:
ORA_SID_PWFILE
ORA_PWFILE
ORACLE_BASE\ORACLE_HOME\DATABASE\PWDsid.ORA

2009-07-01

Find Oracle product version

APEX:
select version_no from apex_release;

Oracle Application Server (OAS 10.1.2 and 10.1.3):
grep -i version $ORACLE_HOME/config/ias.properties

OAS Java version (JDK)
cd $ORACLE_HOME/jdk/bin
java -version

OAS 10.1.3 "System Properties"
Use Application Server Control.
From the main page "Cluster Topology", go to the OC4J instance "home". Click the "Administration" tab. Go to "Server Properties". Scroll down and click the "System Properties" link.

Oracle Database Server (RDBMS):
1.
select * from v$version;
2.
cd $ORACLE_HOME/bin
file oracle
If the output of the "file oracle" command does not say 64-bit explicitly, then you are running 32-bit Oracle.
3.
Check the SQL*Plus banner.
4.
Check for lib, lib32
$ORACLE_HOME/lib32
$ORACLE_HOME/lib
If there is only an $ORACLE_HOME/lib directory, then it is a 32-bit installation.
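For example:
ls -d $ORACLE_HOME/lib*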

modified on 2011-04-15

2009-06-24

change sysdate

environment: Oracle 10.2

To change sysdate for testing purposes in a single database,
create a new scheduler job with, for example, a 1-minute interval.

The following example code uses a +7 days offset.
The fixed_date parameter changes only sysdate; it does not change systimestamp.

begin
EXECUTE IMMEDIATE ( 'alter system set fixed_date='
|| ''''
|| to_char(systimestamp+7,'YYYY-MM-DD-HH24:MI:SS')
|| '''');
end;
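
A sketch of the scheduler job itself (the job name, interval and offset are just examples; the job owner needs the ALTER SYSTEM privilege):

begin
DBMS_SCHEDULER.CREATE_JOB(
  job_name        => 'SET_FIXED_DATE_JOB',
  job_type        => 'PLSQL_BLOCK',
  job_action      => q'[begin
    execute immediate ( 'alter system set fixed_date='
      || '''' || to_char(systimestamp+7,'YYYY-MM-DD-HH24:MI:SS') || '''');
  end;]',
  repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
  enabled         => TRUE);
end;

To switch back to the real clock, drop the job and run alter system set fixed_date=none;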

2009-01-23

clone oracle 10g database software on AIX

Environment
Oracle 10.2.0.4, CPU Jan 2009, recommended bundle #3
UNIX AIX 5.3.0.0 (both source and target hosts, command oslevel)
source unix user orax104
target unix user orax115

Cloning
1. Install the Oracle software, recommended patches and CPU on the source host.
2. Oracle recommends shutting down any databases, listeners, agents, etc. before copying files.
3. Prepare the exclusion list
export CLONE=/oracle/export/install
cd $ORACLE_HOME
find . -name "*.log" >$CLONE/ExcludeList.txt
find . -name "*.err" >>$CLONE/ExcludeList.txt
# network/admin can be useful for next installation
#shrept.lst is needed for patch 7226548 (lock_SGA on AIX)
#find ./network/admin >>$CLONE/ExcludeList.txt
find ./oc4j/j2ee/OC4J_DBConsole_* >>$CLONE/ExcludeList.txt
find `hostname |cut -d "." -f 1`*_* >>$CLONE/ExcludeList.txt
# DBCA templates, if you have them
find ./assistants/dbca/templates >>$CLONE/ExcludeList.txt
cat $CLONE/ExcludeList.txt

4. Archive the Oracle home
cd $ORACLE_HOME
tar -X $CLONE/ExcludeList.txt -cvf - . | gzip -c >$CLONE/oracle_home.tar.gz

5. Copy the tar.gz file to the target host
6. Extract the Oracle home
cd /oracle/export/install
# file was renamed
gunzip rdbms_10.2.0.4_CPUJAN2009_generic_bundle3.tar.gz
# as root to preserve permissions
# cd $ORACLE_HOME
cd /oracle/rdbms/10.2.0/orax115
tar xvf /oracle/export/install/rdbms_10.2.0.4_CPUJAN2009_generic_bundle3.tar
# change owner. Owner group is the same (oinstall)
find . -user orax104 -exec chown orax115 {} \;

7. On the target host, as the oracle user
cd $ORACLE_HOME/rdbms/lib
mv config.o config.o_backup
cd $ORACLE_HOME/oui/bin
#detach oracle home if it exists
./runInstaller -detachHome ORACLE_HOME="/oracle/rdbms/10.2.0/orax115"
./runInstaller -clone -silent -ignorePreReq ORACLE_HOME="/oracle/rdbms/10.2.0/orax115" ORACLE_HOME_NAME="db_home_10g2_1"

8. (optional) Let other users use the Oracle home.
cd $ORACLE_HOME/install
./changePerm.sh

Useful links:
  1. Metalink DocID 300062.1 How To Clone An Existing RDBMS Installation Using OUI
  2. Metalink DocID 558478.1 Cloning A Database Home And Changing The User/Group That Owns It
  3. Oracle Software Cloning Using Oracle Universal Installer
  4. http://www.pythian.com/blogs/1041/oracle-silent-mode-part-3-cloning-software-and-databases