Backup and Recovery Interview Questions

Sarat

1) What pre-checks do you perform before doing a point-in-time recovery?
2) There is no previous level 0 backup available. Can we take a level 1 backup?
3) My database has 500+ datafiles, and I lost 89 of them. How will you identify which files need recovery?
4) Can you explain the internal mechanism of a transportable tablespace restore and recovery?
5) The database is up and a level 0 backup has been taken. Is that backup consistent or inconsistent?
6) How do you use a physical standby backup to restore the primary database?
7) A database has been running in ARCHIVELOG mode for the last month. A datafile was added to the database last week and many objects were created in it.
After one week this datafile gets damaged before you can take any backup. Can you recover this datafile when you don't have any backups?
8) You lost some datafiles, you don't have any full backup, and the database was running in NOARCHIVELOG mode. What can you do now?
9) You lose one datafile and it does not contain important objects. The important objects are in other datafiles, which are intact. How do you proceed in this situation?
10) What is the significance of the FAST_START_MTTR_TARGET parameter?

Suraj

11) How do you improve tape backup performance?
12) Are there any statistics on how long it takes to back up 1 TB of data to tape, and what is the best way to configure tape backups?
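A hedged starting point for question 11: throughput to tape usually scales with the number of SBT channels and with multiplexing, so a first-pass configuration might look like the sketch below. The PARMS string is media-manager specific; NB_ORA_POLICY is only a NetBackup-style placeholder value.

RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 4;
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt PARMS 'ENV=(NB_ORA_POLICY=placeholder_policy)';
RMAN> BACKUP DEVICE TYPE sbt FILESPERSET 4 DATABASE;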

Gracy

13) How do BACKUP AS COPY and the roll-forward of image copies work in an RMAN incremental-merge strategy?
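A sketch of the commonly used pattern (the tag name is arbitrary): an image copy of the database is kept on disk and rolled forward each run with a level 1 incremental. On the first run the BACKUP command creates the level 0 image copy; on later runs the RECOVER command merges the previous level 1 into that copy.

RMAN> RUN {
        RECOVER COPY OF DATABASE WITH TAG 'incr_merge';
        BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_merge' DATABASE;
      }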

Shivraj

14) Can we take an SCN-based (not current) backup if no archivelogs are available?
15) How does RMAN maintain the consistency of a backup that runs for long hours?
16) Is it possible to restore specific tablespaces to bring back specific tables in 11g? If yes, what are the steps?
17) What are the 12c new features with regard to backup and recovery?
18) How do you identify block corruption, and what are the options to fix it with RMAN?
19) Is block change tracking mandatory for incremental backups?
20) How do we recover a redo log group if we lose it? (See the sketch below.)
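For question 20, a minimal sketch: an INACTIVE, already-archived group can simply be cleared; if it was not yet archived, clearing it breaks recoverability until a fresh backup is taken.

SQL> alter database clear logfile group 3;
SQL> alter database clear unarchived logfile group 3;  -- if the group was not yet archived
SQL> -- take a full backup immediately afterwards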

Sandip

21) Suppose there is corruption in datafile 92. If I take a full backup of the DB using RMAN, it will fail unless I include a parameter. What is that parameter?
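Presumably the parameter being hinted at is SET MAXCORRUPT, which lets RMAN tolerate a stated number of corrupt blocks per datafile (a sketch):

RMAN> RUN {
        SET MAXCORRUPT FOR DATAFILE 92 TO 10;
        BACKUP DATAFILE 92;
      }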

22) Suppose archivelog sequences 5 and 6 are missing and I take an archivelog backup. Will it be successful? If not, what needs to be done?

23) How do you tune the RMAN buffers?

Oracle User Role and Privilege Interview Questions

 Answer Credit:-

https://www.linkedin.com/in/skantali/

https://www.linkedin.com/in/hemant-shahi-59921762/

  • How can we execute a PL/SQL object without having any privilege to change it?

  • What are the default users created when Oracle is installed?

  • Can you create a passwordless user in Oracle? If yes, how?

  • How can we see a procedure/package body without having the privilege to execute it?

  • How can we create a password file in ASM?

  • What is the difference between a user and a schema?Ans:-A user has no objects of his own and always accesses other users' objects, while a schema is a collection of objects owned by a user.

  • What is the disadvantage of the RESOURCE role?Ans:-It overrides any tablespace quota granted to the user and makes it unlimited, because granting RESOURCE has historically also granted the UNLIMITED TABLESPACE privilege.
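    A quick way to see this (a sketch; USER_A is a hypothetical account):

    SQL> grant resource to user_a;
    SQL> select privilege from dba_sys_privs where grantee = 'USER_A';
    -- in pre-12c releases, UNLIMITED TABLESPACE appears alongside the role grant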

  • How do you create a user and grant a permission in a single command?Ans:-grant create session to user_a identified by <password>;

  • You got a requirement to run a script as user "SCOTT" and you don't have SCOTT's password. What is your next step?Ans:-a. We will ask application support. If they cannot be reached, we can save the password hash stored in the data dictionary, change the password, do the work, and then set the password back using the saved hash (see the test case below).

          b. Oracle proxy user – http://oracle-help.com/oracle-database/oracle-proxy-users/
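          A minimal proxy-user sketch (PROXY_USR and its password are hypothetical); the test case below demonstrates option (a):

          SQL> alter user scott grant connect through proxy_usr;
          SQL> connect proxy_usr[scott]/proxy_password
          SQL> show user     -- reports "SCOTT"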

Test Case:-
SYS does not know the password of a schema called 'NEW_HEMANT', but without asking the owner, SYS can perform any operation inside the schema with that user's access.
Here the user is "NEW_HEMANT" with the password "password".

SQL> conn new_hemant/password
Connected.

SQL> sho user;

USER is "NEW_HEMANT"

SQL> create table data_drop as select * from data_t;
Table created.

SQL> select * from data_drop;
EMP_ID NAME       SALARY
------ ------ ----------
1 mr. a 4561234
2 mr. b 3451930
3 mr. c 2030300
4 mr. d 2130380
5 mr. e 2131380
6 mr. f 2131380
6 mr. g 3131380
6 mr. h 3134380
6 mr. i 3134380
7 mr. l 9134380
1 mr. a 4561234
2 mr. b 3451930
3 mr. c 2030300
4 mr. d 2130380
14 rows selected.

SQL> conn / as sysdba
Connected.

SQL> sho user;
USER is "SYS"

SQL> select owner,table_name from dba_tables where table_name='DATA_DROP';
OWNER TABLE_NAME
—————————— ——————————
NEW_HEMANT DATA_DROP

SQL> drop table data_drop;
drop table data_drop
*
ERROR at line 1:
ORA-00942: table or view does not exist
// the unqualified name resolves to SYS.DATA_DROP, which does not exist; the owner new_hemant granted no privileges to anyone.
// but SYS wants to do the drop in a way that makes it look like it came from the owner.

SQL> select username,password from dba_users where username='NEW_HEMANT';
USERNAME PASSWORD
—————————— ——————————
NEW_HEMANT

// oops! nothing.
// this is a security feature from 11g onwards: the data dictionary view DBA_USERS no longer shows the password hash of any user.
// instead, 11g stores it in the base table SYS.USER$, not in the PASSWORD column but in the SPARE4 column.

SQL> select name,password,spare4 from sys.user$ where name='NEW_HEMANT';
NAME PASSWORD SPARE4
—————————— ——————————————————————————————————————-
NEW_HEMANT 95110033565548B2 S:E727ECB1B6CD195798730F93DC0ECD55E934825812C9F2AA1013580F8A8D

SQL> alter user new_hemant identified by temp_password;
User altered.

SQL> conn new_hemant/temp_password
Connected.

SQL> drop table data_drop;
Table dropped.

// SYS changed the password of new_hemant, logged in through that ID, and dropped the table.
// now it is time to revert the password; failing to do so would tell the user that his password was changed.
// so let's revert the password.

SQL> conn new_hemant/password
ERROR:
ORA-01017: invalid username/password; logon denied
// the old password no longer works.

SQL> conn / as sysdba
Connected.

SQL> sho user;
USER is "SYS"

SQL> select name,password,spare4 from sys.user$ where name='NEW_HEMANT';
NAME PASSWORD SPARE4
—————————— ——————————————————————————————————————-
NEW_HEMANT 95110033565548B2 S:E323ECD1D6CD105398339F93DC9ECD55E934825812C9F2AA1013589F8A8D

// notice that the PASSWORD column has not changed even though we changed the password, so authentication does not depend on the PASSWORD column (or any combination of it with SPARE4) but solely on the SPARE4 column.
// now put the old value of the SPARE4 column back and see the magic.

SQL> alter user new_hemant identified by values 'S:E727ECB1B6CD195798730F93DC0ECD55E934825812C9F2AA1013580F8A8D';
User altered.
SQL> conn new_hemant/temp_password
ERROR:
ORA-01017: invalid username/password; logon denied
SQL> conn new_hemant/password
Connected.
SQL> exit;

// the old password is set back, and the user never even knows what happened.

  • What are SNIPED sessions?Ans:-These are sessions terminated by the database for exceeding the IDLE_TIME limit of their profile. The disadvantage is that even though the session is cleared at the Oracle level, the OS-level process may still occupy resources, which is a burden on the server.
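    A minimal sketch of how sessions end up SNIPED (the profile name is hypothetical; RESOURCE_LIMIT must be TRUE for IDLE_TIME to be enforced):

    SQL> alter system set resource_limit = true;
    SQL> create profile idle_prof limit idle_time 30;   -- minutes
    SQL> alter user scott profile idle_prof;
    SQL> select sid, serial#, status from v$session where status = 'SNIPED';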

  • Which grant is needed to select all dictionary views?Ans:-SELECT ANY DICTIONARY (a system privilege) or SELECT_CATALOG_ROLE (a role).

  • Why should we not grant resource to any user?

  • How can we create a read-only user?

  • What is the difference between the SYS and SYSTEM users?

  • Can you create a user without specifying a default tablespace?

  • Can you install Oracle Database without the OS user oracle?

  • What is OS authentication?

  • Can I install Oracle without the oracle OS user?

  • Why shouldn't we grant privileges directly to users?

  • User creation without default tablespace

  • How can audit information be written to syslog?

  • Can you restrict user sessions in Oracle?

  • How do you restrict a user in Oracle?

  • How do you check which users have been granted SYSDBA?
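    For password-file authentication, one common check is the V$PWFILE_USERS view (a sketch):

    SQL> select username, sysdba, sysoper from v$pwfile_users;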

  • How did auditing for roles/privileges change in 12c from earlier versions?

  • What is LDAP authentication and external directory authentication in Oracle?

  • How can you centralize user security in Oracle?

  • How will you create the user?

  • What are the pre-checks before creating a user? And the post-creation checks?

  • How do you check a user's/schema's size?

  • How can we prevent a specific IP from connecting to the database?

  • What are invoker's rights in PL/SQL?

  • Is it a good practice to grant the DBA role to the GoldenGate user (GGUSER)?

 

 

Database Architecture Interview Questions

  1. What are base tables? When and how they will get created?

    Ans:-These are read-only data dictionary tables that hold information about the database. Base tables are owned by SYS.

    When and how they will get created?

    They are created when the database is built (during database creation, by the sql.bsq script).

    https://docs.oracle.com/cd/B19306_01/server.102/b14220/datadict.htm

  2. Why do we execute the catproc.sql script?

    Ans:-It is used to create and recreate the database's internal views/procedures/packages. It is primarily run after applying a patch/patchset to the database.

    1. http://www.dba-oracle.com/t_data_dictionary_catalog_scripts.htm

  3. Explain the phases of SQL execution

    Ans:- 1. Syntax check (is it a valid SELECT, UPDATE, INSERT, etc.)

    2. Semantic check against the dictionary (are the tables spelled correctly, do they exist, etc.)

    3. Creation of the CBO decision tree of possible plans

    4. Generation of the lowest-cost execution plan

    5. Binding of the execution plan

    6. Execution of the query and fetching of the rows

     For Example:-

    1. I write a SQL statement (like entering your destination and starting point in a GPS).
    2. I provide hints (extra constraints, like "take the bus as much as possible").
    3. The query hits the server process (the GPS server takes the input).

    Parse phase:-

    Oracle saves the cursor and bind variables specific to the user in the private SQL area of the PGA.
    The stack is like your bind variables and the heap is like your SQL area in the PGA (the GPS must store all
    user inputs in some memory area separate from its map database).

    4. Syntactic check (the GPS checks whether your query is valid).
    5. Semantic check (your query is valid, but the GPS checks whether your destination exists in its database).
    6. Shared pool check:

    a. Hard parse:- your SQL statement is not in the SGA, so the database accesses the library cache and data
    dictionary cache numerous times to check the data dictionary. When the database accesses these areas, it uses a serialization device called a latch on the required objects so that their definitions do not change.
    (The GPS has pre-computed best routes based on common selections, but your input does not match any
    pre-computed route, so the GPS must recalculate the path, which is time and resource consuming.)

    In this phase Oracle calculates cost based on statistics. (The GPS calculates the best route based on your input;
    if the GPS database does not have much information about road conditions (statistics), it may give a route that is not
    very cost effective.)

    b. Soft parse:- a soft parse is any parse that is not a hard parse. If the submitted statement is the
    same as a reusable SQL statement in the shared pool, then Oracle Database
    reuses the existing code. This reuse of code is also called a library cache hit.
    (The GPS already has the best route for your input.)

    Execute phase:-

    Oracle executes your query (the GPS has all the information now and carries out its operation).

    Fetch phase:-

    Oracle fetches data while executing a SELECT statement (the GPS now shows you the result).

    Explain plan:- the road map shown by the GPS.
    Execution plan:- you travelled the route, found a problem along the way, and changed your road map
    a bit; the route actually travelled is the execution plan.

    The optimizer may change the plan on the fly after cardinality feedback, as part of adaptive optimization, so the
    execution plan and the explain plan can come out different.

  4. What is the difference between physical and logical read?

    Ans:-1. LIO = logical I/O = a read from the buffer cache. It may trigger a PIO in order to get the block into the cache in the first place.

    2. PIO = physical I/O = a disk read.

    3. Each consistent get takes a latch, and latches = contention. Over-used indexes can be a killer for scaling.
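    The system-level counters make the distinction visible (a sketch):

    SQL> select name, value from v$sysstat
         where name in ('db block gets', 'consistent gets', 'physical reads');
    -- logical reads = db block gets + consistent gets; physical reads hit the disk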

  5. Why maintain a copy of data in the database buffer cache?Ans:-Accessing data from memory is much faster than accessing it from disk (direct path).

    The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles. All users concurrently connected to the instance share access to the database buffer cache.
    The first time an Oracle Database user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a datafile on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than data access through a cache miss.

    The allocation of blocks into and out of the Buffer Cache is achieved by means of a Least Recently Used (LRU) algorithm.

    As you'd expect, the data blocks for a table are only cached as they are required.
    With a small, frequently used reference table, you can expect it to be fully cached fairly soon after the application is started.
    Once it is cached, the way the LRU algorithm works should keep its data blocks at the hot end of the cache.

    Refer below for eg:

    https://mikesmithers.wordpress.com/2016/06/23/oracle-pinning-table-data-in-the-buffer-cache/
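    To cache such a reference table more deliberately, a KEEP pool can be used (a sketch; REF_DATA is a hypothetical table and DB_KEEP_CACHE_SIZE must be sized first):

    SQL> alter system set db_keep_cache_size = 64m;
    SQL> alter table ref_data storage (buffer_pool keep);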

  6. What are the different statuses of a block in the database buffer cache? Explain each.

    Ans:-1. Write list – holds "dirty buffers": buffers that have been modified but not yet written to disk.

    2. Least recently used (LRU) list – holds three types of buffers: free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list.

    3. Most recently used (MRU) end – buffers that are read often.

    When an Oracle Database process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list. As more buffers are continually moved to the MRU end of the LRU list, dirty buffers age toward the LRU end of the LRU list.

    The first time an Oracle Database user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a datafile on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than data access through a cache miss.

    Before reading a data block into the cache, the process must first find a free buffer. The process searches the LRU list, starting at the least recently used end of the list. The process searches either until it finds a free buffer or until it has searched the threshold limit of buffers.

    If the user process finds a dirty buffer as it searches the LRU list, it moves that buffer to the write list and continues to search. When the process finds a free buffer, it reads the data block from disk into the buffer and moves the buffer to the MRU end of the LRU list.

    If an Oracle Database user process searches the threshold limit of buffers without finding a free buffer, the process stops searching the LRU list and signals the DBW0 background process to write some of the dirty buffers to disk.

    From <https://docs.oracle.com/cd/B28359_01/server.111/b28318/memory.htm#BABIJIIF>

  7. What is write-ahead protocol?

    Ans:-Before DBWn can write a modified buffer, all redo records associated with the changes to the buffer must be written to disk (the write-ahead protocol). If DBWn finds that some redo records have not been written, it signals LGWR to write the redo records to disk and waits for LGWR to complete writing the redo log buffer before it can write out the data buffers.

    From <https://docs.oracle.com/cd/B19306_01/server.102/b14220/process.htm>

  8. What is OFA? Do you recommend implementing it?

    Ans:-Optimal Flexible Architecture. Highly recommended, especially for big databases, for all the reasons listed in the Oracle documentation:

    https://docs.oracle.com/cd/B19306_01/install.102/b15660/app_ofa.htm#i633068

  9. I connected to the database as SYSDBA, but it is not allowing me to shut down the database. What may be the reason?

  10. What is the default size of SGA?

  11. What is hard parsing and soft parsing?Ans:-During the parse, the database performs a shared pool check to determine whether it can skip the resource-intensive steps of statement processing.
    To this end, the database uses a hashing algorithm to generate a hash value for every SQL statement. The statement hash value is the SQL ID shown in V$SQL.SQL_ID. This hash value is deterministic within a version of Oracle Database, so the same statement in a single instance or in different instances has the same SQL ID. When a user submits a SQL statement, the database searches the shared SQL area to see if an existing parsed statement has the same hash value. The hash value of a SQL statement is distinct from the following values:
    • Memory address of the statement: Oracle Database uses the SQL ID to perform a keyed read in a lookup table; in this way, the database obtains possible memory addresses of the statement.
    • Hash value of an execution plan for the statement: a SQL statement can have multiple plans in the shared pool, and typically each plan has a different hash value. If the same SQL ID has multiple plan hash values, then the database knows that multiple plans exist for this SQL ID.

    Hard parse:- If Oracle Database cannot reuse existing code, then it must build a new executable version of the application code. This operation is known as a hard parse, or a library cache miss. During the hard parse, the database accesses the library cache and data dictionary cache numerous times to check the data dictionary. When the database accesses these areas, it uses a serialization device called a latch on the required objects so that their definitions do not change. Latch contention increases statement execution time and decreases concurrency.

    Soft parse:- A soft parse is any parse that is not a hard parse. If the submitted statement is the same as a reusable SQL statement in the shared pool, then Oracle Database reuses the existing code. This reuse of code is also called a library cache hit. Soft parses can vary in how much work they perform: for example, configuring the session shared SQL area can sometimes reduce the amount of latching in soft parses, making them "softer." In general, a soft parse is preferable to a hard parse because the database skips the optimization and row source generation steps, proceeding straight to execution.

  12. What are the mandatory parameters in architecture?

  13. How do you multiplex the control files, using a pfile and an spfile?
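    With an spfile, a sketch of the usual sequence (paths are hypothetical):

    SQL> alter system set control_files = '/u01/oradata/db/control01.ctl', '/u02/oradata/db/control02.ctl' scope = spfile;
    SQL> shutdown immediate
    -- copy the existing control file to the new location at the OS level, then:
    SQL> startup

    With a pfile, edit the CONTROL_FILES parameter in the init.ora directly, copy the file, and restart the same way.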

  14. How do you check the database size?Ans:-Query the total number and size of the datafiles in v$datafile:
    SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024) AS DB_GB FROM v$datafile;
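    For the full on-disk footprint, temp files and redo logs can be added (a sketch):

    SELECT ROUND((df.bytes + tf.bytes + lg.bytes) / 1024 / 1024 / 1024) AS total_gb
    FROM (SELECT SUM(bytes) bytes FROM v$datafile) df,
         (SELECT SUM(bytes) bytes FROM v$tempfile) tf,
         (SELECT SUM(bytes * members) bytes FROM v$log) lg;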

  15. What is the use of the large pool, and what is its default size?

  16. What is the difference between SGA and PGA?Ans:-1. SGA – The System Global Area (SGA) and the set of database processes constitute an Oracle Database instance. Oracle Database automatically allocates memory for an SGA when you start an instance, and the operating system reclaims the memory when you shut down the instance. Each instance has its own SGA.
    The SGA is read/write. All database background processes and all server processes that execute on behalf of users can read information contained within the instance’s SGA, and several processes write to the SGA during database operation.
    Part of the SGA contains general information about the state of the database and the instance, which the background processes need to access. This is called the fixed SGA. No user data is stored here. The SGA also includes information communicated between processes, such as locking information.
    If the system uses shared server architecture, then the request and response queues and some contents of the PGA are in the SGA.

    The most important SGA components are the following:
    • Database Buffer Cache
    • Redo Log Buffer
    • Shared Pool
    • Large Pool
    • Java Pool
    • Streams Pool

    2. PGA – Oracle Database allocates a program global area (PGA) for each server process. The PGA is used to process SQL statements and to hold logon and other session information. For the purposes of memory management, the collection of all PGAs is known as the instance PGA. Using an initialization parameter, you set the size of the instance PGA, and the database distributes memory to individual PGAs as needed.
    Note:
    Background processes also allocate their own PGAs. This discussion focuses on server process PGAs only.

    Content of the PGA
    The content of the PGA memory varies, depending on whether or not the instance is running the shared server option. Generally speaking, the PGA memory is divided into the following areas:
    • Session Memory
    • Private SQL Area
    Session Memory
    Session memory is the memory allocated to hold a session’s variables (logon information) and other information related to the session. For a shared server, the session memory is shared and not private.

    https://docs.oracle.com/cd/B28359_01/server.111/b28318/memory.htm#CNCPT1237

  17. What are the mandatory background process in 11g?Ans:-The mandatory background processes are present in all typical database configurations. These processes run by default in a database instance started with a minimally configured initialization parameter file
    The mandatory background processes are:
    • Process Monitor Process (PMON)
    • System Monitor Process (SMON)
    • Database Writer Process (DBWn)
    • Log Writer Process (LGWR)
    • Checkpoint Process (CKPT)
    • Manageability Monitor Processes (MMON and MMNL)
    • Recoverer Process (RECO)

    From <https://docs.oracle.com/cd/E25054_01/server.1111/e25789/process.htm#BABHDAGI>

  18. How are blocks written into the database buffer cache?

  19. What are MMON and MMAN?MMON (Manageability Monitor) is a background process that gathers memory statistics (snapshots) and stores this information in the AWR (Automatic Workload Repository). MMON is also responsible for issuing alerts for metrics that exceed their thresholds. This process was introduced in Oracle 10g.

    MMAN (Memory Manager) is a background process that manages the dynamic resizing of SGA memory areas as the workload increases or decreases.
    This process was introduced in Oracle 10g.

  20. The database buffer cache is full and there are no free buffers; what will happen?

  21. What is high watermark in oracle ?The high water mark (HWM) is the point in a segment beyond which data blocks are unformatted and have never been used.

    Above the HWM: These blocks are unformatted and have never been used.

    Below the HWM: These blocks are in one of the following states:
    >Allocated, but currently unformatted and unused
    >Formatted and contain data
    >Formatted and empty because the data was deleted

    If there are many DMLs on a table, there can be a lot of free space below the high water mark.
    However, you can only shrink a datafile down to the high water mark, even when free space exists below it.
    So in such cases (where there were too many deletes), to reclaim more space from the datafile, first reorganize the table to reset the high water mark, and then shrink the datafile.
    This way we can reclaim more space at the disk level.

    The HWM specifies the limit to which you can resize a datafile.

    Script for MAX-Shrink:-

    set verify off
    column file_name format a50 word_wrapped
    column smallest format 999,990 heading "Smallest|Size|Poss."
    column currsize format 999,990 heading "Current|Size"
    column savings  format 999,990 heading "Poss.|Savings"
    break on report
    compute sum of savings on report
    column value new_val blksize
    select value from v$parameter where name = 'db_block_size'
    /
    select file_name,
    ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
    ceil( blocks*&&blksize/1024/1024) currsize,
    ceil( blocks*&&blksize/1024/1024) -
    ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
    from dba_data_files a,
    ( select file_id, max(block_id+blocks-1) hwm
    from dba_extents
    group by file_id ) b
    where a.file_id = b.file_id(+) order by savings desc
    /
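    Once the report identifies a shrinkable file, the reorg-then-resize sequence might look like this (a sketch; the table and datafile names are hypothetical):

    SQL> alter table big_tab enable row movement;
    SQL> alter table big_tab shrink space;        -- resets the segment's HWM
    SQL> alter database datafile '/u01/oradata/db/users01.dbf' resize 500m;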

  22. What happens if you delete the alert log while the database is running?Ans:-It will be regenerated: a new alert log file is created at the next alert entry, e.g. the next log switch, archive generation, tablespace creation, datafile addition, or error.
    The alert log can be renamed or deleted while an Oracle instance is up and running. After you rename or delete the alert log, a new alert log is created when the next alert entry is ready to be written to the log file.

    Caution: Oracle recommends that you be able to retrieve the alert log for debugging purposes. If you choose to delete the alert log, back it up to tape prior to deleting it.

    Reference :-     Renaming or Deleting the Alert Log While an Oracle Instance is Up & Running (Doc ID 74966.1)

  23. What is the difference between PGA and UGA?

  24. How does a DML statement work?

  25. What is Local Inventory and Global Inventory?Ans:-Overview of Inventory

    The inventory is a very important part of the Oracle Universal Installer. This is where OUI keeps all information regarding the products installed on a specific machine.
    There are two inventories with the newer releases of OUI (2.x and higher):

    The inventory in the ORACLE_HOME (Local Inventory)
    The central inventory directory outside the ORACLE_HOME (Global Inventory)

    At startup, the Oracle Universal Installer first looks for the key that specifies where the global inventory is located at (this key varies by platform).

    /var/opt/oracle/oraInst.loc (typical)
    /etc/oraInst.loc (AIX and Linux)
    HKEY_LOCAL_MACHINE -> Software -> Oracle -> INST_LOC (Windows platforms)

    If this key is found, the directory within it will be used as the global inventory location.

    If the key is not found, the inventory path defaults to the following:

    UNIX: ORACLE_BASE/oraInventory
    WINDOWS: C:\Program Files\Oracle\Inventory

    If the ORACLE_BASE environment variable is not defined, the inventory is created at the same level as the first Oracle home. For example, if your first Oracle home is at /private/ORACLEHome1, then, the inventory is at /private/oraInventory.

    With Oracle Applications 11i the inventory contains information about both the iAS and RDBMS ORACLE_HOMEs

    About the Oracle Universal Installer Inventory

    The Oracle Universal Installer inventory is the location for the Oracle Universal Installer’s bookkeeping. The inventory stores information about:

    All Oracle software products installed in all Oracle homes on a machine
    Other non-ORACLE_HOME specific products, such as the Java Runtime Environment (JRE)

    Starting with Oracle Universal Installer 2.1, the information in the Oracle Universal Installer inventory is stored in Extensible Markup Language (XML) format. The XML format allows for easier diagnosis of problems and faster loading of data. Any secure information is not stored directly in the inventory. As a result, during deinstallation of some products, you may be prompted for required secure information, such as passwords.

    By default, the Universal Installer inventory is located in a series of directories at /Program Files/Oracle/Inventory on Windows computers and in the <Oracle Home>/Inventory directory on UNIX computers.

    Local Inventory

    There is one Local Inventory per ORACLE_HOME. It is physically located inside the ORACLE_HOME at $ORACLE_HOME/inventory and contains the detail of the patch level for that ORACLE_HOME.
    The Local Inventory gets updated whenever a patch is applied to the ORACLE_HOME, using OUI.

    If the Local Inventory becomes corrupt or is lost, this is very difficult to recover, and may result in having to reinstall the ORACLE_HOME and re-apply all patchsets and patches.

    Global Inventory

    The Global Inventory is the part of the XML inventory that contains the high level list of all oracle products installed on a machine. There should therefore be only one per machine. Its location is defined by the content of oraInst.loc.
    The Global Inventory records the physical location of Oracle products installed on the machine, such as ORACLE_HOMES (RDBMS and IAS) or JRE. It does not have any information about the detail of patches applied to each ORACLE_HOMEs.
    The Global Inventory gets updated every time you install or de-install an ORACLE_HOME on the machine, be it through OUI Installer, Rapid Install, or Rapid Clone.

    Note: If you need to delete an ORACLE_HOME, you should always do it through the OUI de-installer in order to keep the Global Inventory synchronized.

    OUI 2.x (11.5.7 and later):

    Global inventory becomes a list of pointers to each Local Inventory
    Global and local inventories are in text based XML format
    XML conversion is validated and performed if needed by Rapid Clone

    Prior to OUI 2.x (11.5.5 and earlier):

    Global inventory was in binary format
    Global inventory contained all software install information
    No local inventories in ORACLE_HOMEs

    If the Global Inventory is lost or becomes corrupt but the Local Inventory remains, it may be possible to recreate the Global Inventory by following the note referenced below.
    References:-    Global and Local Inventory explained (Doc ID 360079.1)

  26. Which process registers the listener in the database?From 12c onwards, the Listener Registration process (LREG) performs the listener registration.

    The listener registration process (LREG) registers information about the database instance and dispatcher processes with the Oracle Net listener (see “The Oracle Net Listener”). When an instance starts, LREG polls the listener to determine whether it is running. If the listener is running, then LREG passes it relevant parameters. If it is not running, then LREG periodically attempts to contact it.

    Note: In releases before Oracle Database 12c, PMON performed the listener registration.

    In version 11.2, if there are numerous services (100+) and numerous listeners, there is a possibility that the PMON process spends more time on service registration to listeners due to the sheer number of services and listeners. In version 12c this possibility is eliminated, as the LREG process performs service registration and PMON is freed from listener registration.

    LREG:

    registers instance information with the listener.
    is a critical background process for each database instance (if it is killed, Oracle goes down).
    does everything PMON used to do and report: service_update, service_register, service_died in the listener.log.

    LREG can be traced using the same methods as one would trace PMON:

    Enabling an Oracle Net server side sqlnet tracing will invoke a trace for LREG on instance startup.
    The old PMON trace command now traces LREG: alter system set events = '10257 trace name context forever, level 5';
    Listener registration information can also be dumped into the ora_lreg trace file: alter system set events = 'immediate trace name listener_registration level 3';
    LREG can be traced dynamically.

  27. When will a select statement on a table bypass the buffer cache and be read straight into the PGA?

  28. How do you add and drop redo log groups online?
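    A sketch of the usual commands (the group number and path are hypothetical; a group must be INACTIVE and archived before it can be dropped):

    SQL> alter database add logfile group 4 ('/u01/oradata/db/redo04.log') size 200m;
    SQL> select group#, status from v$log;
    SQL> alter system switch logfile;     -- if the group to drop is still CURRENT
    SQL> alter database drop logfile group 4;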

  29. Which process writes data from data files to database buffer cache?

  30. What happens internally during instance recovery?When a database shuts down abruptly, perhaps due to a sudden system crash or a SHUTDOWN ABORT, the DB is left in an inconsistent state: all ongoing transactions (committed or not) were not completed cleanly. A DB MUST be in a consistent state before it can be opened.
    To bring the DB to consistency, Oracle performs instance recovery.
    Instance recovery has 2 steps: roll forward (cache recovery) and rolling back (transaction recovery). It re-applies committed changes, then rolls back uncommitted transactions. Roll forward (cache recovery) involves reapplying the relevant changes recorded in the online redo log to the data files. Because rollback data (the old copy) is also recorded in the online redo log, rolling forward also regenerates the corresponding undo segments. This brings the DB forward in time.
    The data files could also contain uncommitted changes that were either saved to the data files before the failure,
    or were recorded in the online redo log and introduced during the roll forward step above.
    After the roll forward, any changes that were not committed must be undone. Oracle uses the checkpoint position,
    which guarantees that every committed change with an SCN lower than the checkpoint SCN is saved on disk. Oracle Database applies undo blocks to roll back uncommitted changes in data blocks that were written before the failure or introduced during cache recovery. This phase is called rolling back, or transaction recovery.
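    The positions instance recovery would start from can be observed while the database is open (a sketch):

    SQL> select checkpoint_change#, current_scn from v$database;
    SQL> select recovery_estimated_ios, estimated_mttr, target_mttr from v$instance_recovery;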

  31. What is the main purpose of ‘CHECKPOINT’ in oracle database? How do you automatically force the oracle to perform a checkpoint?

  32. How do we benefit from a temporary tablespace group?

  33. What is the purpose of running root.sh post installation?

  34. When are the base tables created, and by which script?

  35. Can a SELECT query make DBWn write?Ans:-Yes. There is a very detailed discussion and explanation of this by Thomas Kyte here:
    https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:44798632736844
    Does select generate undo ?
    https://asktom.oracle.com/pls/apex/f?p=100:11:0::NO::P11_QUESTION_ID:580570100346596181
    why the select keep generating redo?

  36. Can select increase SCN of the database?

    Ans:-

    The system change number (SCN) is Oracle’s clock – every time we commit, the clock increments. The SCN just marks a consistent point in time in the database.
    A session takes note of the current SCN at various critical moments (when a transaction starts, when a transaction commits, when a query starts) and the current SCN is written to datablocks in various places (control files, data file headers, a couple of places in data block headers, ITL entries). A session is constantly comparing the current SCN, or one of its remembered SCNs, with the SCNs it sees stored in the database to ensure that it is looking at a safe, correct, and appropriate version of the data.
    Can a Select statement also increase SCN value (though it is not committing anything). ????
    SQL> select distinct instance_name,status,open_mode from v$instance,v$database;
    INSTANCE_NAME STATUS OPEN_MODE
    —————- ———— ——————–
    xe OPEN READ WRITE
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    current_scn
    ———–
    7047016
    SQL> /
    current_scn
    ———–
    7047017 -- the select query increased the SCN of the database.
    SQL> /
    current_scn
    ———–
    7047020
    SQL> select current_scn from v$database;
    CURRENT_SCN
    ———–
    7047025
    Now in the below steps testing the definition given by most of the experts — SCN increased at transaction commit.
    SQL> create table tab1 (n1 number);
    Table created.
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    current_scn
    ———–
    7047069
    SQL> begin
    2 for i in 1 .. 1000
    3 loop
    4 insert into tab1 values (i);
    5 end loop;
    6 end;
    7 /
    PL/SQL procedure successfully completed.
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    current_scn
    ———–
    7047094
    /// inserted 1000 rows into the table tab1 with no commit; still the SCN increased (by 25 in this example)
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    current_scn
    ———–
    7047099
    /// now testing with commit;
    SQL>
    SQL> begin
    2 for i in 1 .. 1000
    3 loop
    4 insert into tab1 values (i);
    5 commit;
    // 1000 times committed inside the anonymous block.
    6 end loop;
    7 end;
    8 /
    PL/SQL procedure successfully completed.
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    current_scn
    ———–
    7048884
    // the difference is more than 1000 (1000 commit statements + other operations on the DB which increase the SCN).
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    SQL> shu immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 1071333376 bytes
    Fixed Size 1388352 bytes
    Variable Size 645923008 bytes
    Database Buffers 419430400 bytes
    Redo Buffers 4591616 bytes
    Database mounted.
    Database opened.
    SQL> select distinct instance_name,status,open_mode from v$instance,v$database;
    INSTANCE_NAME STATUS OPEN_MODE
    —————- ———— ——————–
    xe OPEN READ WRITE
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    current_scn
    ———–
    7049237
    // so we can see that when the DB is open read/write, the SCN increases all the time, whether or not any user session is committing or performing any operation.
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    current_scn
    ———–
    7049260
    SQL> alter database close;
    Database altered.
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    select dbms_flashback.get_system_change_number "current_scn" from dual
    *
    ERROR at line 1:
    ORA-00904: "DBMS_FLASHBACK"."GET_SYSTEM_CHANGE_NUMBER": invalid identifier
    SQL> select distinct instance_name,status,open_mode from v$instance,v$database;
    INSTANCE_NAME STATUS OPEN_MODE
    —————- ———— ——————–
    xe MOUNTED MOUNTED
    SQL> select current_scn from v$database;
    CURRENT_SCN
    ———–
    0
    // in mount state the SCN is 0
    /// now last thing remained to check is – check the SCN when Database is open in read only mode.
    SQL> startup mount;
    ORACLE instance started.
    Total System Global Area 1071333376 bytes
    Fixed Size 1388352 bytes
    Variable Size 645923008 bytes
    Database Buffers 419430400 bytes
    Redo Buffers 4591616 bytes
    Database mounted.
    SQL> select current_scn from v$database;
    CURRENT_SCN
    ———–
    0
    SQL> alter database open read only;
    Database altered.
    SQL> select distinct instance_name,status,open_mode from v$instance,v$database;
    INSTANCE_NAME STATUS OPEN_MODE
    —————- ———— ——————–
    xe OPEN READ ONLY
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    current_scn
    ———–
    7049489
    SQL> select dbms_flashback.get_system_change_number "current_scn" from dual;
    current_scn
    ———–
    7049489
    SQL> select current_scn from v$database;
    CURRENT_SCN
    ———–
    7049489
    SQL> /
    CURRENT_SCN
    ———–
    7049489 // NO change in the SCN.
    Conclusion-
    The database ALWAYS has transactions going on, ALWAYS. SMON and many other background processes are always doing work, the database (unless it is opened read only) is always doing transactions.
    So basically our select query was not increasing the SCN; it is the background processes that were continuously performing transactions followed by commits.
  37. A user did a transaction in the database (say, updated 1000 records), did not commit, left for the day, and then subsequently left this world as well. DBWn has flushed the modified blocks to the data files. How does Oracle keep track of such transactions?Ans:-Explanation by Arup Nanda:
    Part 1: Myth of Commit Causing Buffer to be Flushed to the Disk
    http://arup.blogspot.in/2010/12/100-things-you-probably-didnt-know.html

  38. Does instance recovery commit your uncommited transactions?Ans:-

    OK, let’s say at 9am, the power (and hence your server) goes out.

    Leading up to that point in time, your database probably had

    – plenty of committed changes from completed transactions
    – a batch of un-committed changes from active transactions

    There are two things to keep in mind here:

    1) Some of those committed changes, may NOT be reflected in the datafiles
    2) Some of those uncommitted changes may BE reflected in the datafiles.

    The "datafiles" are not actually a reflection of the exact state of your database; it's the redo logs (or the sum total of every redo log entry ever made) that are. It's almost as if datafiles are a "performance enhancement" designed to save you from reading every redo log in the history of the database in order to see the current state.

    So when your server powers back up again, we need use the redo logs to get our datafiles to a consistent state.

    So we roll the redo changes forward to bring our datafiles to a point where all the committed and potentially uncommitted changes are applied. In doing so, we have also brought forward the files we use for our UNDO tablespace (which stores the information on how to undo a transaction).

    Now we need to back out (rollback) those uncommitted changes. So we can use the freshly recovered undo information, to now reverse out those uncommitted changes.

    Thus at the end of the recovery, we have our desired consistent state – just those changes that committed successfully before the power was lost.


  39. What does the PGA_AGGREGATE_LIMIT in 12c signify?http://clouddba.co/oracle-pga-program-global-area_12c/

  40. What is Transparent HugePages and why is it recommended to be set to 'never'?Starting with RHEL6/OL6, Transparent HugePages are implemented and enabled by default. They are meant to improve memory management by allowing
    HugePages to be allocated dynamically by the "khugepaged" kernel thread, rather than at boot time like conventional HugePages.
    That sounds like a good idea, but unfortunately Transparent HugePages do not play well with Oracle databases: they are associated with node reboots in RAC installations and with performance problems on both single-instance and RAC installations. As a result, Oracle recommends disabling Transparent HugePages on all servers running Oracle databases, as described in this MOS note: ALERT: Disable Transparent HugePages on SLES11, RHEL6, OEL6 and UEK2 Kernels [ID 1557478.1]

  41. What is the difference between log sequence number and checkpoint id? How it is used for instance recovery?

    log sequence number:
    A number that uniquely identifies a set of redo records in a redo log file. When Oracle fills one online redo log file and switches to a different one, Oracle automatically assigns the new file a log sequence number. For example, if you create a database with two online log files, then the first file is assigned log sequence number 1. When the first file fills and Oracle switches to the second file,it assigns log sequence number 2; when it switches back to the first file, it assigns log sequence number 3, and so forth.

    A checkpoint is the act of flushing modified, cached database blocks to disk. Normally, when you make a change to a block,
    the modification is made to a memory copy of the block. When you commit, the block is not written, but the redo log is (which makes it possible to "replay" your transaction in the event of a failure). Eventually, the system will checkpoint your modified blocks to disk. There is no fixed relationship between a checkpoint and a log sequence number, and instance recovery does not imply a checkpoint; rather, a checkpoint reduces the amount of time it takes to perform instance recovery.

  42. Which process is responsible for bringing up the instance?
    SMON

  43. Life cycle of a user session?

    Session lifecycle settings can be defined using the Oracle Access Management Console. The WebLogic Scripting Tool does not include options for session management. The lifecycle of a session refers to the period of user activity from the start of a session to its end.

    Session lifecycle states include:

    Active: A session starts when the user is authenticated by Access Manager. The session remains active as long as the user makes requests for Access Manager-protected content, and provided that the session has not expired.
    Inactive: A session becomes inactive when the user does not access Access Manager-protected content for the period defined by the Idle Timeout attribute in the session lifecycle configuration.
    Expired: The duration of the session has exceeded the period defined by the Session Lifetime attribute.

    An active session becomes inactive when the user is inactive for the defined Idle Timeout period. A session expires when it exceeds the defined Session Lifetime period. The Session Management Engine maintains a list of inactive sessions. When an active session becomes inactive, or expires, the user must re-authenticate. Data for expired sessions is automatically deleted from in-memory caches (or the optional SME database).

  44. When a table is 10 GB and the maximum SGA is 5 GB, how does Oracle handle 'select * from tab1'?I believe the table will never be kept in the buffer cache, and Oracle does physical I/O. Keeping a huge table in the buffer cache would also defeat the purpose of the caching mechanism, as it would flush other objects' data out of the cache. Also, whenever a table is accessed by a full table scan (among other calculations), the table's blocks are placed at the LRU end of the LRU chain, so that the blocks get flushed soon, giving way to other blocks. Based on the table size, direct path read is enabled even for serial table scans of tables that are considered "large": the read of a large table goes directly into the PGA instead of flooding the buffer cache, and you will see the 'direct path read' wait event.

    ---------------------- RAW trace ----------------------

    WAIT #140293705208536: nam='SQL*Net message from client' ela= 104 driver id=1650815232 #bytes=1 p3=0 obj#=92275 tim=14190555758807
    WAIT #140293705208536: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=92275 tim=14190555758828
    WAIT #140293705208536: nam='direct path read' ela= 1307 file number=6 first dba=289297 block cnt=15 obj#=92275 tim=14190555760193
    FETCH #140293705208536:c=87,e=1446,p=15,cr=11,cu=0,mis=0,r=15,dep=0,og=1,plh=4122059633,tim=14190555760267
    WAIT #140293705208536: nam='SQL*Net message from client' ela= 208 driver id=1650815232 #bytes=1 p3=0 obj#=92275 tim=14190555760508
    WAIT #140293705208536: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=92275 tim=14190555760558
    FETCH #140293705208536:c=0,e=38,p=0,cr=11,cu=0,mis=0,r=15,dep=0,og=1,plh=4122059633,tim=14190555760587

  45. What is the command to find the databases already present on a host before connecting?ps -ef | grep smon (or pmon)

  46. Based on which parameter can the PGA extend: PGA_AGGREGATE_TARGET or PGA_AGGREGATE_LIMIT? And which components does it affect?Ans:-http://clouddba.co/oracle-pga-program-global-area_12c/

  47. When we recreate the control file, where is the SCN information fetched from, given that the datafile headers might hold skewed or inconsistent information?It is read from the datafile headers.

  48. During a direct path read, which portion of memory is used?The PGA.

  49. Dedicated vs shared sever mode, and what is the role of dispatcher in it?Ans:-

    Dedicated server connection:

    A dedicated server process has a one-to-one relationship with the user process. A user connected through a dedicated server connection holds that resource whether or not the user is doing work; the connection remains allocated to that user. Thus, if your user is idle and network resources are scarce, this user blocks another user from connecting, or the other user's requests get queued. This connection type is preferable when you have plenty of resources, or for administrative users such as SYS (for administrative work and for batch jobs).

    Shared server connection:

    The shared server architecture consists of the listener process, dispatcher processes, a request queue, shared server processes, and response queues. The network listener process listens for user process requests. If a request requires a dedicated server process, the listener starts one. If the request can be handled by a shared server process, the listener forwards it to a dispatcher process. A shared server configuration requires at least one dispatcher process. The dispatcher places the request on the request queue, which is created in the SGA and shared by all dispatcher processes. One of the free shared server processes picks up the request from the request queue. After processing the request, the shared server process places the result on the response queue. Each dispatcher process has its own response queue in the SGA. The dispatcher knows which user process placed the request, so the response is returned to that user process.

    https://docs.oracle.com/cd/B28359_01/server.111/b28310/manproc001.htm#ADMIN11168

  50. What is the significance of the STATISTICS_LEVEL parameter?Ans:-STATISTICS_LEVEL specifies the level of collection for database and operating system statistics. Oracle Database collects these statistics for a variety of purposes, including making self-management decisions. The default setting of TYPICAL ensures collection of all major statistics required for database self-management functionality and provides the best overall performance; it should be adequate for most environments. When STATISTICS_LEVEL is set to ALL, additional statistics are added to the set collected with the TYPICAL setting: timed OS statistics and plan execution statistics. Setting STATISTICS_LEVEL to BASIC disables the collection of many important statistics required by features such as AWR and ADDM.
    SELECT statistics_name, session_status, system_status,
    activation_level, session_settable FROM v$statistics_level ORDER BY statistics_name;

  51. What is the best way to schedule backups (full as well as incremental)?Best way: a level 0 backup twice a week + level 1 incremental backups on the remaining days + an archivelog backup every 30 minutes (the interval depends on the number
    of archivelogs generated per hour).

  52. What is LRU list and ITL?Ans:-

    LRU – a linked-list data structure maintained inside the database buffer cache. It has a hot end and a cold end: the least-used data blocks sit at the cold end of the list, while frequently used data blocks are moved to the hot end. When there is a space crunch in the buffer cache, blocks are aged out from the cold end of the LRU list. It follows the LRU algorithm; however, in recent versions of Oracle the LRU list is managed internally by a touch-count algorithm.

    ITL – the Interested Transaction List is a data structure in the header of a data block which contains transaction-specific information. Oracle uses it internally to identify whether a transaction is committed or not. The ITL contains information such as the start time of the transaction and the row-level locking details of the transaction, covering both committed and uncommitted changes. The ITL internally points to the undo segment, which actually stores the timing of the changes made to the database. During table creation, the INITRANS parameter defines how many slots are initially created in the ITL. INITRANS is a block-level storage parameter which can be specified while creating an object (table). The INITRANS and MAXTRANS parameters are used to control concurrent access to the same block.

  53. What is Result_cache and how does it help?http://www.dba-oracle.com/oracle11g/oracle_11g_result_cache_sql_hint.htm

  54. How do you take a database from read-write to the mount state without shutting it down?Set job_queue_processes to zero,
    kill all the active/inactive sessions in the database,
    then ALTER DATABASE CLOSE (to bring it to the mount state),
    and ALTER DATABASE OPEN (to bring it back up; note that reopening after ALTER DATABASE CLOSE typically requires an instance restart).

  55. Whenever an RMAN backup is initiated, it first takes a snapshot of the control file, and the backup proceeds based on that snapshot. So if changes happen during the backup window, are they considered?No, they will not be considered.

  56. If a query is updating a table while an RMAN hot backup is running, will the RMAN backup include that update?

  57. Will a database go down if someone kills any one of the background processes? If not, how do you check which processes, when killed, will bring the database down?Ans:-The database will go down if we kill any mandatory process. There is an environment variable named skgp_hidden_args which indicates the same. https://jhdba.wordpress.com/tag/skgp_hidden_args

  58. What does db_nk_cache_size signify ? How is it different from  db_block_size ?Ans:-

    The DB_BLOCK_SIZE is used to specify the standard block size which is used for the SYSTEM and TEMPORARY tablespaces. All subsequent tablespaces will also be assigned this block size unless specified otherwise.

    Db_nK_cache_size is for tablespaces which is created with different block size other than specified by db_block_size param.
    An individual buffer cache must be defined for each non-standard block size used. These are set up with the following parameters which default to a size of 0M.

    DB_2K_CACHE_SIZE = 0M
    DB_4K_CACHE_SIZE = 0M
    DB_8K_CACHE_SIZE = 0M
    DB_16K_CACHE_SIZE = 0M
    DB_32K_CACHE_SIZE = 0M

    It is worth noting that you can create tablespaces with different block sizes depending on the use case of the data: for example, in a data warehouse reading large contiguous data you benefit from a higher block size (32 KB), while in most transaction systems reading single rows you benefit from a lower block size (4 KB). The DB_BLOCK_SIZE in the initialization parameter file determines the standard block size of the database. So, before you create a non-standard block size tablespace, you must configure the corresponding non-standard sub-cache using the DB_nK_CACHE_SIZE initialization parameter. The bottom line is that each unique block size has its own buffer cache area, and the default, keep, and recycle pools only cache blocks of the default block size. https://www.toadworld.com/platforms/oracle/w/wiki/10960.knowing-the-buffer-cache-in-oracle-database-to-improve-performance
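    A sketch of the sequence (sizes and the datafile path are hypothetical; assumes the standard block size is not already 16K):

    SQL> alter system set db_16k_cache_size = 64m;
    SQL> create tablespace ts_16k datafile '/u01/oradata/db/ts_16k01.dbf' size 100m blocksize 16k;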

  59. What is a locally managed tablespace?Ans:-

    A tablespace that can manage extent allocation by itself is called locally managed tablespace. These tablespaces maintain a bitmap in each datafile to keep track of the freed or used status of blocks in that datafile. Each bit in the bitmap corresponds to a block or a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables in the data dictionary (except for special cases such as tablespace quota information).

    The storage parameters NEXT, PCTINCREASE, MINEXTENTS, MAXEXTENTS, and DEFAULT STORAGE are not valid for extents that are managed locally!

  60. What is a redo vector? What are the files in a database where the SCN information is stored?Ans:-The following post by Riyaj gives a brilliant description of a redo change vector: https://orainternals.wordpress.com/tag/redo-record/. SCNs are stored in datafile headers, redo logs, archivelogs, and control file headers.

  61. What is RBA?Ans:-

    Redo Byte Address (RBA) is a 10-byte address that identifies a particular location in the physical online redo log. It has three components, namely:
    i) the log file sequence number (4 bytes)
    ii) the log file block number (4 bytes)
    iii) the byte offset into the block at which the redo record starts (2 bytes)

    There are different types of RBA namely :-

    Low RBA : the address of the first redo change applied to a dirty buffer. We can check the low RBA from x$bh.

    High RBA : the address of the last and most recent redo change applied to a dirty buffer. We can check the high RBA from x$bh.

    Checkpoint RBA : the point up to which DBWR has written buffers from the checkpoint queue while incremental checkpointing is enabled. This RBA is copied into the control file’s checkpoint progress record. When instance recovery occurs, it starts from the checkpoint RBA recorded in the control file. We can check this RBA from x$targetrba (sometimes from x$kccrt).

    On-disk RBA : the RBA up to which redo has been flushed into the online redo log file on disk. This RBA is recorded in the control file record section. We can check the on-disk RBA from x$kcccp (sometimes from x$targetrba).

    Working mechanism of RBA :- CKPT records checkpoint information in the control file to maintain bookkeeping information such as checkpoint progress. Each instance checkpoint refers to some RBA (called the checkpoint RBA); all redo prior to this RBA has been written to disk. Hence recovery time is the distance between the checkpoint RBA and the end of the redo log.

    Given a checkpoint RBA, DBWR writes buffers from the head of the queue until the low RBA of the buffer at the head of the checkpoint queue is greater than the checkpoint RBA. At this point, CKPT can record this checkpoint progress record in the control file (phase 3).
    PHASE (1): the process initiating the checkpoint marks the checkpoint RBA (the current RBA, that is, the RBA of the last change made to a buffer) at the time the request is initiated.
    PHASE (2): DBWR writes all required buffers, that is, all buffers that have been modified at RBAs less than or equal to the checkpoint RBA.
    PHASE (3): after all required buffers have been written, the CKPT process records the completion of the checkpoint in the control file.

    The checkpoint RBA is copied into the checkpoint progress record of the controlfile by the checkpoint heartbeat once every 3 seconds. Instance recovery, when needed, begins from the checkpoint RBA recorded in the controlfile. The target RBA is the point up to which DBWn should seek to advance the checkpoint RBA to satisfy instance recovery objectives.

    The term sync RBA is sometimes used to refer to the point up to which LGWR is required to sync the thread. However, this is not a full RBA — only a redo block number is used at this point.

    Source :- http://neeraj-dba.blogspot.in/2011/04/what-is-rba.html
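
    As an illustration, the low RBA of dirty buffers can be inspected from x$bh as SYS; the column names below are as commonly documented in internals articles and may vary by version, so treat them as an assumption:

        -- assumption: LRBA_SEQ/LRBA_BNO expose the low RBA components in x$bh
        SELECT lrba_seq, lrba_bno
          FROM x$bh
         WHERE lrba_seq <> 0;   -- buffers with a non-zero low RBA sit on the checkpoint queue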

  62. What is the relationship between checkpoint and SCN?Ans:-https://mgrvinod.wordpress.com/2011/03/14/scn-and-checkpoint/

  63. What are the important kernel parameters to be considered while sizing the SGA?

    shmmax / shmall
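
    A hedged Linux example of how these might be set in /etc/sysctl.conf (the values are hypothetical and must be sized to the server’s RAM and page size), applied with sysctl -p:

        # kernel.shmmax: maximum size in bytes of a single shared memory segment
        kernel.shmmax = 68719476736
        # kernel.shmall: total shared memory pages allowed system-wide
        kernel.shmall = 16777216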

  64. How do you check uptime at OS and database level?Ans:-In the DB: select to_char(startup_time,'DD-MM-YYYY HH24:MI:SS') from v$instance; On Linux/AIX we can use the 'uptime' command to check uptime from the O/S end.

  65. What happens if you try to drop an undo tablespace that contains active rollback segments?Ans:-https://airydba.wordpress.com/2016/05/23/ora-01548-active-rollback-segment-_syssmu1_3780397527-found-terminate-dropping-tablespace/

  66. What is the minimum # of init.ora parameters required for instance startup/db creation in 11g? https://www.relationaldbdesign.com/database-creation-architecture/module3/file-parameters.php
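
    For illustration, DB_NAME is commonly cited as the only strictly required parameter; a minimal pfile sketch might look like this (values hypothetical, with parameters that are normally added in practice shown commented out):

        # minimal init.ora sketch
        db_name = orcl
        # commonly added in practice:
        # control_files = ('/u01/oradata/orcl/control01.ctl')
        # memory_target = 1G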

  67. Can we create dictionary managed tablespace in 12c?Ans:-Yes, we can create a dictionary managed tablespace in Oracle 12c, but it is deprecated in 12c.
    Ref: https://docs.oracle.com/database/121/SQLRF/statements_7003.htm#i2153401
    The DICTIONARY keyword is deprecated. It is still supported for backward compatibility. However, Oracle recommends that you create locally managed tablespaces. Locally managed tablespaces are much more efficiently managed than dictionary-managed tablespaces.

  68. Db_files vs maxdatafiles?Ans:-DB_FILES specifies the maximum number of datafiles that can be opened for a particular database. We can increase the value of the DB_FILES parameter, but doing so requires a database restart.
    In a Data Guard environment, the primary and all standby servers should have the same DB_FILES value. MAXDATAFILES represents the maximum number of datafiles that can be recorded in the control file. However, if you attempt to add a new file whose number is greater than MAXDATAFILES, but less than or equal to DB_FILES, the control file will expand automatically so that the datafile section can accommodate more files.
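
    A brief sketch of checking and raising DB_FILES (the new value is hypothetical and only takes effect after a restart):

        SHOW PARAMETER db_files;
        ALTER SYSTEM SET db_files = 400 SCOPE = SPFILE;   -- applied when the instance restarts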

  69. In which scenarios does a DB hang, and how would you go about troubleshooting it?

  70. Execution flow of update statement?

  71. What is Database transaction – ACID properties?Ans:-A: stands for atomicity. An atomic transaction is an indivisible and irreducible series of database operations such that either all occur, or nothing occurs. Nothing will be partial, as the transaction is either committed or rolled back.
    C: stands for consistency. A transaction either creates a new and valid state of data, or, if any failure occurs, returns all data to its state before the transaction was started.
    I: stands for isolation. It signifies that concurrent transactions do not interfere with one another; the intermediate, uncommitted changes of one transaction are not visible to others.
    D: refers to durability. It means that committed data is saved by the system such that, even in the event of a failure and system restart, the data is available in its correct state.
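
    A small illustrative sketch of atomicity (the table and values are hypothetical): both updates persist together on COMMIT, or neither persists on ROLLBACK.

        UPDATE accounts SET balance = balance - 100 WHERE id = 1;   -- hypothetical transfer out
        UPDATE accounts SET balance = balance + 100 WHERE id = 2;   -- hypothetical transfer in
        COMMIT;   -- both changes become durable together; ROLLBACK here would undo both atomically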

  72. What is SDU and how does it help in connectivity?Ans:-Session Data Unit (SDU) is the size of the buffer Oracle Net uses to hold data before transmitting it across the network. It is mainly defined in tnsnames.ora and listener.ora. As per the official docs, the amount of data provided to Oracle Net to send at any one time is referred to as the message size. Oracle Net assumes by default that the message size will normally vary between 0 and 8192 bytes, and infrequently be larger than 8192 bytes. If this assumption is true, then most of the time the data is sent using one SDU buffer.
    The SDU size can range from 512 bytes to 65535 bytes. The default SDU for the client and a dedicated server is 8192 bytes. The default SDU for a shared server is 65535 bytes.
    The actual SDU size used is negotiated between the client and the server at connect time and is the smaller of the client and server values. Configuring an SDU size different from the default requires configuring the SDU on both the client and server computers, unless you are using shared servers. For shared servers, only the client value must be changed because the shared server defaults to the maximum value.
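
    A hedged configuration sketch (host, service name and value are hypothetical); SDU can be set per connect descriptor in tnsnames.ora, and a node-wide default can be set with DEFAULT_SDU_SIZE in sqlnet.ora:

        # tnsnames.ora: SDU set inside the connect descriptor
        ORCL =
          (DESCRIPTION =
            (SDU = 32767)
            (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
            (CONNECT_DATA = (SERVICE_NAME = orcl))
          )

        # sqlnet.ora: node-wide default
        DEFAULT_SDU_SIZE = 32767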

  73. What is small table threshold?Ans:-_small_table_threshold is a hidden parameter in Oracle which the server process uses to decide whether to perform a direct path read or to read through the buffer cache. The unit of this parameter is a number of blocks, and the default value is 2% of the buffer cache size.
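
    Its current value can be checked as SYS with the usual hidden-parameter query against x$ksppi and x$ksppcv:

        SELECT a.ksppinm AS parameter, b.ksppstvl AS value
          FROM x$ksppi a, x$ksppcv b
         WHERE a.indx = b.indx
           AND a.ksppinm = '_small_table_threshold';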

  74. What is /dev/shm?Ans:-/dev/shm is an in-memory (tmpfs) file system that does not persist across server reboots. Starting from 11g, if Automatic Memory Management is enabled, the size of /dev/shm should be greater than the sum of MEMORY_TARGET/MEMORY_MAX_TARGET for all the instances running on the same server. If we set MEMORY_TARGET/MEMORY_MAX_TARGET higher than /dev/shm, instance startup will be halted with “ORA-00845: MEMORY_TARGET not supported on this system”.
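
    A quick sketch of checking and resizing /dev/shm on Linux (the size is hypothetical; make it persistent via /etc/fstab):

        df -h /dev/shm                         # current tmpfs size and usage
        mount -o remount,size=16g /dev/shm     # resize online, as root
        # /etc/fstab entry to persist across reboots:
        # tmpfs  /dev/shm  tmpfs  defaults,size=16g  0 0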