Date Published: December 14, 2017

Create a physical copy of your on-premises database in Oracle Cloud Infrastructure without taking an intermediate backup.

The focus of this article is on the cloud-specific steps rather than repeating steps that are already well documented elsewhere. Its purpose is only to illustrate a use case; the setup should not be used in a production environment.

The high level setup:

Source:
Two nodes (VirtualBox), Oracle Linux 7.
Oracle Grid Infrastructure 12.2.
Oracle RAC 12.2, ASM storage.

Target (to be created) in Oracle cloud:
Single node Oracle Linux 7.
Oracle GI/Restart 12.2.
Oracle 12.2, ASM storage.

You can find further details of the instances' configuration (pfiles, tns/listener entries, RMAN scripts, ssh config, etc.) here.

For simplicity's sake, all communication will go over SSH tunnels; a sketch of the tunnel commands follows the diagram below. All ports used are the defaults. Our source database is tiny (about 3 GB), so it serves the purpose. OCI offers enterprise-level network connectivity: you can connect your premises to the cloud via VPN using Juniper, Cisco and other popular vendors, and if you need guaranteed bandwidth, Oracle has agreements with certain ISPs. There are real-world examples of backing up a 1 TB database (250 GB backup set size) to the cloud within 40 minutes.

--------------------------------------------------------------------------------

+------------+ 
| RAC NODE1  | 
| ASM1       |                                              +-----------------+
+------------+                +-------------+               |  Physical copy  |
              <= SSH tunnel=> |  BASTION    |<= SSH tunnel=>|  Oracle restart |
+------------+                |     SERVER  |               |       ASM       |
| RAC NODE2  |                +-------------+               +-----------------+
|      ASM2  |              
+------------+

---------------------------------------------------------------------------------
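The tunnels themselves can be as simple as plain SSH port forwards. Below is a minimal sketch, assuming hypothetical host names (bastion for the bastion server, racnode1 for the first on-premises RAC node) and the default listener port; the actual ssh configuration used in this setup is in the linked files.

# On the cloud instance zdb02: expose the on-premises listener locally on
# port 1522 by forwarding through the bastion server (which can reach racnode1).
[oracle@zdb02 ~]$ ssh -f -N -L 1522:racnode1:1521 opc@bastion
# -f: go to background, -N: no remote command, -L: local port forward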

VCN

OCI offers a friendly interface for defining our objects in the cloud. We are going to create a VCN (virtual cloud network) with standard definitions and we'll open TCP port 1521 for the listener that will be running on the host in the cloud.

From the top menu, navigate to Networking, then Virtual Cloud Network.

Press Create Virtual Cloud Network. Select the compartment where the VCN should be created.
Enter a descriptive name and select Create VCN Plus Related Resources; this way Oracle takes care of subnets, routing, gateways, etc.
Remember to keep all objects created under the same compartment and same availability domain.

Once ready, press the Create Virtual Cloud Network button and you will get a summary of what has been created.
To allow incoming connections on the instance's default listener port 1521 we need to add an ingress rule: navigate to Networking, then Virtual Cloud Networks, select the VCN created above (zVCN01) and open its Security Lists.
Then click on Edit All Rules:

Next to the existing ingress rule for TCP port 22 at the top, add an equivalent rule for port 1521 and save the changes.

Compute instance

Here we create the instance; it will use the smallest available shape, VM.Standard1.1: 7 GB of memory and 1 OCPU. You can find further details on the shapes here.
Navigate to Compute, then Instances:

Press Launch Instance:

Fill in the name, select the availability domain, the Oracle image "Oracle Linux 7.4", the shape VM.Standard1.1, the latest image build and the VCN created above. Check the box to get a public IP assigned; this way Oracle will connect the instance to the Internet. As mentioned earlier, this is only for test purposes; later on we are also going to force Oracle Net encryption between the nodes.
In the last box, paste (or point to the file containing) the public key of a key pair generated earlier. This article shows how to generate and use the keys; a minimal example follows.
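If you do not have a key pair yet, one can be generated locally; a minimal sketch (the file name oci_key is arbitrary):

$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/oci_key
$ cat ~/.ssh/oci_key.pub    # paste this content into the SSH keys box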

OS configuration

At this stage we have a box with Oracle Linux 7 installed. Note that the user to connect as is "opc", with no password: use the private key corresponding to the public key supplied at instance creation.
Please follow the instructions here to install and configure the required OS packages, users, groups, kernel/memory parameters and user limits, and to disable SELinux. A few representative commands are sketched below.
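For orientation only (the linked instructions remain the reference; the group/user layout and kernel parameter shown here are assumptions matching this setup):

[opc@zdb02 ~]$ sudo groupadd oinstall
[opc@zdb02 ~]$ sudo groupadd dba
[opc@zdb02 ~]$ sudo useradd -g oinstall -G dba oracle
[opc@zdb02 ~]$ sudo useradd -g oinstall -G dba grid
# Kernel parameters and user limits go to /etc/sysctl.d/ and
# /etc/security/limits.d/ as per the installation guide, for example:
[opc@zdb02 ~]$ echo "fs.file-max = 6815744" | sudo tee /etc/sysctl.d/97-oracle.conf
[opc@zdb02 ~]$ sudo sysctl --system
# Disable SELinux (takes full effect after a reboot):
[opc@zdb02 ~]$ sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config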

Custom setup following the OS config:

[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/grid 
[opc@zdb02 ~]$ sudo chown -R oracle:oinstall /u01/app 
[opc@zdb02 ~]$ sudo chmod 775 /u01/app 
[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/grid/product/12.2.0/gridhome 
[opc@zdb02 ~]$ sudo chown -R grid:oinstall /u01/app/grid 
[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/oracle 
[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/oracle/product/12.2.0/dbhome_1 
[opc@zdb02 ~]$ sudo chown -R oracle:oinstall /u01/app/oracle 
[opc@zdb02 ~]$ sudo mkdir -pv /u01/app/oraInventory 
[opc@zdb02 ~]$ sudo chown grid:oinstall /u01/app/oraInventory 
[opc@zdb02 ~]$ sudo chmod 770 /u01/app/oraInventory  

[opc@zdb02 ~]$ sudo -u grid vi /home/grid/.bash_profile 
export ORACLE_BASE=/u01/app/grid 
export ORACLE_HOME=/u01/app/grid/product/12.2.0/gridhome 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH 
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID="+ASM"
export LISTENER_NAME="LISTENERASM"
export ORAENV_ASK=NO

[opc@zdb02 ~]$ sudo -u oracle vi /home/oracle/.bash_profile 
export ORACLE_BASE=/u01/app/oracle 
export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH 
export PATH=$ORACLE_HOME/bin:$PATH 
export ORAENV_ASK=NO 
export ORACLE_SID=cdb1s1  

Download the Oracle Grid Infrastructure and Database software from https://edelivery.oracle.com.
Unzip the grid software as the grid user directly into the grid home:

[opc@zdb02 ~]$sudo su - grid  
[grid@zdb02 ~]$unzip /home/grid/install/V840012-01.zip -d /u01/app/grid/product/12.2.0/gridhome  

As the oracle user, unzip the database software into a temporary location:

[opc@zdb02 ~]$sudo su - oracle  
[oracle@zdb02 ~]$unzip V839960-01.zip -d /home/oracle/install/  

Open the listener port in the OS firewall:

[opc@zdb02 install]$ sudo firewall-cmd --zone=public --permanent --add-port=1521/tcp 
success
[opc@zdb02 ~]$ sudo firewall-cmd --reload 
success  
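If you want to double-check, the ports open in the running configuration can be listed:

[opc@zdb02 ~]$ sudo firewall-cmd --zone=public --list-ports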

Attaching block storage

I am going to attach a 100 GB block volume to zdb02 that will be used by the ASM instance.
Current state:

[opc@zdb02]$ lsscsi -i
[2:0:0:0] storage IET Controller 0001 - -
[2:0:0:1] disk ORACLE BlockVolume 1.0 /dev/sda 36035e5779ab0470e999f5048f8da5a09

[opc@zdb02 install]$ lsblk 
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT 
sda 8:0 0 46.6G 0 disk 
├─sda1 8:1 0 512M 0 part /boot/efi 
├─sda2 8:2 0 8G 0 part [SWAP] 
└─sda3 8:3 0 38.1G 0 part /  

Navigate to Storage, then Block Volumes, and press Create Block Volume:

Select the same compartment and availability domain where the instance resides. Enter a descriptive name (zdb02_asm_data01) and the required size, then confirm the block volume creation.

Once it is provisioned we can attach it to the instance. Navigate ComputeInstances and select zDB02.

Press Attach Block Volume, select the volume we created earlier from the drop-down menus and press Attach:

Now we need to configure zdb02 to “see” the attached storage. Oracle provides all the required commands next to each attached volume:

Select iSCSI Commands & Information and copy all the commands in the Attach Commands pane:

Execute the commands as the opc user:

[opc@zdb02]$ sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:77e80837-9b66-48ca-88e4-e9483dde9acc -p 169.254.2.2:3260 
[opc@zdb02]$ sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:77e80837-9b66-48ca-88e4-e9483dde9acc -n node.startup -v automatic 
[opc@zdb02]$ sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:77e80837-9b66-48ca-88e4-e9483dde9acc -p 169.254.2.2:3260 -l  

For comparison, now we can see the sdb device:


[opc@zdb02]$ lsscsi -i
[2:0:0:0] storage IET Controller 0001 - -
[2:0:0:1] disk ORACLE BlockVolume 1.0 /dev/sda 36035e5779ab0470e999f5048f8da5a09
[3:0:0:0] storage IET Controller 0001 - -
[3:0:0:1] disk ORACLE BlockVolume 1.0 /dev/sdb 360d0c86e73b74b6f826f979ec7a73153
[opc@zdb02]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 46.6G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 8G 0 part [SWAP]
└─sda3 8:3 0 38.1G 0 part /
sdb 8:16 0 100G 0 disk

Partition the device:

[opc@zdb02]$ sudo su - 
[root@zdb02 ~]# parted /dev/sdb mklabel gpt  
[root@zdb02 ~]# parted /dev/sdb mkpart primary 0% 100%  

Create a udev rule, using the ID returned by lsscsi above:


[root@zdb02 ~]# vi /etc/udev/rules.d/99-oracleasm.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="360d0c86e73b74b6f826f979ec7a73153", SYMLINK+="oracleasm/asm-disk01", OWNER="grid", GROUP="dba", MODE="0660"

Refresh OS partition information and reload udev rules:

[root@zdb02 ~]#partprobe /dev/sdb 
[root@zdb02 ~]#partprobe /dev/sdb1  
[root@zdb02 ~]#udevadm control --reload-rules  

You should get something like this:


[root@zdb02 ~]# ls -l /dev/oracleasm/
total 0
lrwxrwxrwx. 1 root root 7 Dec 8 08:42 asm-disk01 -> ../sdb1
[root@zdb02 ~]# ls -l /dev/sdb1
brw-rw---- 1 grid dba 8, 17 Dec 11 13:14 /dev/sdb1

Make sure the partition created (sdb1 in our case) is owned by the user who will own the GI installation.

Grid setup

[grid@zdb02 gridhome]$cd /u01/app/grid/product/12.2.0/gridhome
[grid@zdb02 gridhome]$ ./gridSetup.sh -silent -force -responseFile /home/grid/install/grid_standalone.rsp
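The grid_standalone.rsp referenced above is not reproduced in this article. For orientation only, an excerpt of the kind of entries such a response file would contain for this setup (parameter names are from the standard 12.2 gridSetup response file template; the values below are assumptions matching the paths and disk group used here):

oracle.install.option=HA_CONFIG
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSASM=dba
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.disks=/dev/oracleasm/asm-disk01
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/*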

As root:

[root@zdb02~]#/u01/app/oraInventory/orainstRoot.sh  
[root@zdb02~]#/u01/app/grid/product/12.2.0/gridhome/root.sh  

As grid user:

[grid@zdb02 gridhome]$ /u01/app/grid/product/12.2.0/gridhome/gridSetup.sh -executeConfigTools -responseFile /home/grid/install/grid_standalone.rsp -silent  

[grid@zdb02 gridhome]$ crsctl stat res -t  
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       zdb02                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       zdb02                    STABLE
ora.asm
               ONLINE  ONLINE       zdb02                    Started,STABLE
ora.ons
               OFFLINE OFFLINE      zdb02                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       zdb02                    STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       zdb02                    STABLE
--------------------------------------------------------------------------------

Database software only

As oracle user:


[oracle@zdb02]$ cd /home/oracle/install/database
[oracle@zdb02]$ ./runInstaller -silent -force -responseFile /home/oracle/install/12.2db.rsp

As root:

[root@zdb02~]#/u01/app/oracle/product/12.2.0/dbhome_1/root.sh

Start the "auxiliary" instance in nomount state using a simple init file:

[oracle@zdb02 dbs]$ vi $ORACLE_HOME/dbs/initcdb1s1.ora 
db_name=cdb1 
db_unique_name=cdb1s1 
db_domain=sub12080721000.zvcn01.oraclevcn.com 
sga_target=2g 
pga_aggregate_target=512m  


[oracle@zdb02 dbs]$ sqlplus / as sysdba  

SQL*Plus: Release 12.2.0.1.0 Production on Sun Dec 99 14:36:52 2017  
Copyright (c) 1982, 2016, Oracle. All rights reserved.  
Connected to an idle instance.  

SQL> startup nomount 
ORACLE instance started.  
Total System Global Area 2147483648 bytes 
Fixed Size 8622776 bytes 
Variable Size 503319880 bytes 
Database Buffers 1627389952 bytes 
Redo Buffers 8151040 bytes  

 SQL> exit  

Oracle Net Services

A reminder that we are using an SSH tunnel from the cloud to our premises; in the other direction a tunnel is not required. The SSH config setup for the zdb02 host can be found here.
The Oracle Net Services configuration, including transport encryption, is available here.

In addition, the primary and the copy databases share the same password file for authentication. Please note that static listener registration is enabled on purpose, because during the RMAN clone the auxiliary database has to be bounced. A sketch of the relevant entries follows.
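As a rough sketch of what this implies on zdb02 (the actual files are in the linked configuration; the port 1522 for the on-premises side matches the SSH tunnel example earlier and, like the simplified service names, is an assumption):

# Static registration of the auxiliary instance in the grid home listener.ora:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = cdb1s1)
      (ORACLE_HOME = /u01/app/oracle/product/12.2.0/dbhome_1)
      (SID_NAME = cdb1s1)
    )
  )

# tnsnames.ora entries used by RMAN on zdb02; cdb1 is reached through the
# SSH tunnel, cdb1s1 through the local listener:
cdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    (CONNECT_DATA = (SERVICE_NAME = cdb1))
  )
cdb1s1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cdb1s1))
  )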

RMAN DUPLICATE


[oracle@zdb02 dbs]$ rman target sys@cdb1 auxiliary sys@cdb1s1 @duplicate_cdb1.spf.rman
RMAN> run {
2> allocate channel p1 type disk;
3> allocate auxiliary channel s1 type disk;
4>
5> duplicate target database
6> for standby
7> from active database
8> dorecover
9> spfile
10> set instance_name='cdb1s1'
11> set db_unique_name='cdb1s1'
12> set db_domain='sub12080721000.zvcn01.oraclevcn.com'
13> set sga_target='2g'
14> set pga_aggregate_target='512m'
15> reset sga_max_size
16> reset audit_trail
17> reset audit_file_dest
18> reset dispatchers
19> reset local_listener
20> reset cluster_database
21> nofilenamecheck;
22> }

The complete output log of the above execution is here.

At this stage you have a clone of your source database in the cloud that is a few steps away from becoming a standalone primary or a Data Guard standby, depending on your needs.
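For example, run on the copy, one of the following (a hedged sketch; keeping it as a standby assumes redo transport from the primary is configured):

SQL> alter database recover managed standby database disconnect from session;  -- keep the clone as a standby and apply redo

SQL> alter database activate standby database;  -- or convert the clone into a standalone, writable database
SQL> alter database open;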
