DBA Interview Questions 3

What is Row Migration and Row Chaining?
When the data for a row is too large to fit into a single data block, Oracle must store it across more than one block. This can happen in two ways: row chaining or row migration.
Chaining: Occurs when the row is too large to fit into one data block when it is first inserted. In this case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of data type LONG, LONG RAW, LOB, etc. Row chaining in these cases is unavoidable.
Migration: Occurs when a row that originally fitted into one data block is updated so that the overall row length increases, and the block’s free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in a new block. Oracle preserves the original row piece of a migrated row to point to the new block containing the migrated row: the rowid of a migrated row does not change. When a row is chained or migrated, performance associated with this row decreases because Oracle must scan more than one data block to retrieve the information for that row.
  1. INSERT and UPDATE statements that cause migration and chaining perform poorly, because they perform additional processing.
  2. SELECTs that use an index to select migrated or chained rows must perform additional I/Os.
Detection: Migrated and chained rows in a table or cluster can be identified by using the ANALYZE command with the LIST CHAINED ROWS option. This command collects information about each migrated or chained row and places this information into a specified output table. To create the table that holds the chained rows,
execute script UTLCHAIN.SQL.
SQL> ANALYZE TABLE scott.emp LIST CHAINED ROWS;
SQL> SELECT * FROM chained_rows;
You can also detect migrated and chained rows by checking the ‘table fetch continued row’ statistic in the v$sysstat view.
SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';
Although migration and chaining are two different things, internally they are represented by Oracle as one. When detecting migration and chaining of rows you should analyze carefully what you are dealing with.
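One common, hedged way to get rid of migrated rows is to rebuild the affected table and its indexes; the sketch below reuses SCOTT.EMP from the earlier example and assumes the PK_EMP index name from the demo schema. Note that this removes migration only; rows that are genuinely too large for one block remain chained.
-- repopulate CHAINED_ROWS, then see how many rows are affected
ANALYZE TABLE scott.emp LIST CHAINED ROWS;
SELECT count(*) FROM chained_rows WHERE table_name = 'EMP';
-- rewriting the table removes migrated rows; rowids change, so indexes
-- become UNUSABLE and must be rebuilt afterwards
ALTER TABLE scott.emp MOVE;
ALTER INDEX scott.pk_emp REBUILD;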
What is Ora-01555 - Snapshot Too Old error and how do you avoid it?
ORA-01555 is raised when a long-running query needs undo data that has already been overwritten, so a read-consistent image of the data can no longer be built. Ways to avoid it:
1. Increase the size of the rollback (undo) segments.
2. Process a range of data rather than the whole table.
3. Add a big rollback segment and assign your transaction to this RBS.
4. Avoid setting OPTIMAL, which can cause the RBS to shrink during the life of the query.
5. Avoid frequent commits inside the long-running operation.
6. Investigate other causes, such as an undo retention that is too low (see the sketch below).
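Assuming automatic undo management (rather than manual rollback segments), a minimal sketch for checking the undo configuration and raising the retention target looks like this; the tablespace name UNDOTBS1 and the one-hour value are only illustrative:
show parameter undo

SELECT tablespace_name, bytes/1024/1024 AS mb
  FROM dba_data_files
 WHERE tablespace_name = 'UNDOTBS1';          -- UNDOTBS1 is an assumed tablespace name

ALTER SYSTEM SET undo_retention = 3600 SCOPE = BOTH;   -- keep committed undo for roughly one hour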
What is a locally Managed Tablespace?
A Locally Managed Tablespace is a tablespace that manages its own extents by maintaining a bitmap in each data file to keep track of the free or used status of blocks in that data file. Each bit in the bitmap corresponds to a block or a group of blocks. When extents are allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables in the data dictionary (except for tablespace quota information), unlike the default method of Dictionary-Managed Tablespaces.
Following are the major advantages of locally managed tablespaces (a sample CREATE TABLESPACE statement follows the list):
• Reduced contention on data dictionary tables 
• No rollback generated 
• No coalescing required 
• Reduced recursive space management.
Can you audit SELECT statements?
Yes, we can audit SELECT statements. Check out the example below, which uses fine-grained auditing (FGA):
SQL> show parameter audit
NAME                  TYPE        VALUE
--------------------- ----------- ------------------------------
audit_file_dest       string      E:\ORACLE\PRODUCT\10.2.0\DB_2\ADMIN\SRK\ADUMP
audit_sys_operations  boolean     FALSE
audit_trail           string      NONE
SQL> begin
       dbms_fga.add_policy ( object_schema   => 'SCOTT',
                             object_name     => 'EMP2',
                             policy_name     => 'EMP_AUDIT',
                             statement_types => 'SELECT' );
     end;
     /
PL/SQL procedure successfully completed.
SQL> select * from dba_fga_audit_trail;

no rows selected
In HR schema:
SQL> create table bankim (
       name varchar2(10),
       roll number(20));

Table created.

SQL> insert into bankim values ('bankim', 10);

1 row created.

SQL> insert into bankim values ('bankim2', 20);

1 row created.

SQL> select * from bankim;

NAME             ROLL
---------- ----------
bankim             10
bankim2            20

SQL> select name from bankim;

NAME
----------
bankim
bankim2
In sys schema:
SQL> set head off
SQL> select sql_text from dba_fga_audit_trail;
select count(*) from emp2
select * from emp2
select * from emp3
select count(*) from bankim
select * from bankim
select name from bankim
What does DBMS_FGA package do?
The DBMS_FGA package is the central mechanism for fine-grained auditing (FGA); all the FGA APIs are defined in this package. Typically, a user other than SYS is given the responsibility of maintaining these policies. Following the convention used earlier, we will use the user SECUSER, who is entrusted with much of the security work. The following statement grants SECUSER enough authority to create and maintain the auditing facility.
GRANT EXECUTE ON dbms_fga TO secuser;
The biggest problem with this package is that the policies are not like regular objects with owners. While a user with execute permission on this package can create policies, he or she can also drop policies created by another user. This makes it extremely important to secure this package and limit its use to only a few users who are responsible for defining the policies, such as SECUSER, the special user used in these examples.
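As a complement to the ADD_POLICY call shown earlier, a minimal sketch of removing that policy (object and policy names reused from the example above):
begin
  dbms_fga.drop_policy ( object_schema => 'SCOTT',
                         object_name   => 'EMP2',
                         policy_name   => 'EMP_AUDIT' );
end;
/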
What is Cost Based Optimization?
The cost-based optimizer (CBO) is used to build an execution plan for a SQL statement. The CBO takes the statement, weighs the different ways (plans) in which it could be executed, assigns a cost to each plan, and chooses the plan with the smallest cost.
The cost is calculated roughly as: physical I/O + logical I/O / 1000 + net I/O.
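To see the plan and cost the CBO has chosen for a statement, a minimal sketch (the table and predicate come from the demo schema and are only illustrative):
EXPLAIN PLAN FOR SELECT * FROM scott.emp WHERE deptno = 10;
SELECT * FROM TABLE(dbms_xplan.display);   -- the COST column shows the optimizer's estimate per step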
How often you should collect statistics for a table?
The CBO needs statistics in order to assess the cost of the different access plans. These statistics include:
size of tables, size of indexes, number of rows in the tables, number of distinct keys in an index, number of levels in a B*-tree index, average number of blocks for a value, and average number of leaf blocks in an index.
These statistics can be gathered with dbms_stats and the monitoring feature.
How do you collect statistics for a table, schema and Database?
Statistics are gathered using the DBMS_STATS package. The DBMS_STATS package can gather statistics on tables and indexes, as well as individual columns and partitions of tables. When you generate statistics for a table, column, or index, if the data dictionary already contains statistics for the object, then Oracle updates the existing statistics. The older statistics are saved and can be restored later if necessary. When statistics are updated for a database object, Oracle invalidates any currently parsed SQL statements that access the object. The next time such a statement executes, it is re-parsed and the optimizer automatically chooses a new execution plan based on the new statistics.
Collect Statistics on Table Level
sqlplus scott/tiger
exec dbms_stats.gather_table_stats ( -
     ownname          => 'SCOTT', -
     tabname          => 'EMP', -
     estimate_percent => dbms_stats.auto_sample_size, -
     method_opt       => 'for all columns size auto', -
     cascade          => true, -
     degree           => 5)
Collect Statistics on Schema Level
sqlplus scott/tiger
exec dbms_stats.gather_schema_stats ( -
     ownname          => 'SCOTT', -
     options          => 'GATHER', -
     estimate_percent => dbms_stats.auto_sample_size, -
     method_opt       => 'for all columns size auto', -
     cascade          => true, -
     degree           => 5)

Collect Statistics on Other Levels
DBMS_STATS can collect optimizer statistics at the following levels (see the Oracle documentation for details); a database-level sketch follows the list:
GATHER_DATABASE_STATS
GATHER_DICTIONARY_STATS
GATHER_FIXED_OBJECTS_STATS
GATHER_INDEX_STATS
GATHER_SCHEMA_STATS
GATHER_SYSTEM_STATS
GATHER_TABLE_STATS
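A hedged database-level sketch, with parameters chosen only for illustration:
exec dbms_stats.gather_database_stats ( -
     estimate_percent => dbms_stats.auto_sample_size, -
     cascade          => true)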
Can you make collection of Statistics for tables automatic?
Yes, statistics collection can be scheduled, but in some situations automatic statistics gathering may not be adequate, particularly for databases whose objects are modified heavily during the day. Because the automatic statistics gathering runs during an overnight batch window, the statistics on tables which are significantly modified during the day may become stale.
There may be two scenarios in this case:                           
·         Volatile tables that are being deleted or truncated and rebuilt during the course of the day.
·         Objects which are the target of large bulk loads which add 10% or more to the object’s total size.
So you may wish to gather statistics for those objects manually so that the optimizer can choose the best execution plan. There are two ways to gather statistics:
  1. Using DBMS_STATS package.
  2. Using ANALYZE command
How can you use ANALYZE statement to collect statistics?
ANALYZE TABLE emp ESTIMATE STATISTICS FOR ALL COLUMNS;
ANALYZE INDEX inv_product_ix VALIDATE STRUCTURE;
ANALYZE TABLE customers VALIDATE REF UPDATE;
ANALYZE TABLE orders LIST CHAINED ROWS INTO chained_rows;
ANALYZE TABLE customers VALIDATE STRUCTURE ONLINE;
To delete statistics:
ANALYZE TABLE orders DELETE STATISTICS;
To get the analyze details:
SELECT owner_name, table_name, head_rowid, analyze_timestamp FROM chained_rows;
On which columns you should create Indexes?
The following list gives guidelines for choosing columns to index (a sample concatenated index follows the list):
  • You should create indexes on columns that are used frequently in WHERE clauses.
  • You should create indexes on columns that are used frequently to join tables.
  • You should create indexes on columns that are used frequently in ORDER BY clauses.
  • You should create indexes on columns that contain mostly unique values (few duplicates), i.e. highly selective columns.
  • You should not create indexes on small tables (tables that use only a few blocks) because a full table scan may be faster than an indexed query.
  • If possible, choose a primary key that orders the rows in the most appropriate order.
  • If only one column of the concatenated index is used frequently in WHERE clauses, place that column first in the CREATE INDEX statement.
  • If more than one column in a concatenated index is used frequently in WHERE clauses, place the most selective column first in the CREATE INDEX statement.
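For instance, following the last guideline, a hedged sketch with hypothetical table and column names places the most selective column first in the concatenated index:
-- customer_id is assumed to be the more selective of the two columns
CREATE INDEX orders_cust_status_ix ON orders (customer_id, status);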
What types of indexes are available in Oracle?
  • B-tree indexes: the default and the most common.
  • B-tree cluster indexes: defined specifically for cluster.
  • Hash cluster indexes: defined specifically for a hash cluster.
  • Global and local indexes: relate to partitioned tables and indexes.
  • Reverse key indexes: most useful for Oracle Real Application Clusters.
  • Bitmap indexes: compact; work best for columns with a small set of values
  • Function-based indexes: contain the pre-computed value of a function/expression.
  • Domain indexes: specific to an application or cartridge.
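As an illustration of a function-based index, a minimal sketch (the EMP table and ENAME column come from the demo schema; the index name is made up):
CREATE INDEX emp_upper_ename_ix ON emp (UPPER(ename));
-- the index can now serve queries such as:
-- SELECT * FROM emp WHERE UPPER(ename) = 'KING';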
What is B-Tree Index?
B-tree is an indexing technique most commonly used in databases and file systems, where pointers to data are placed in a balanced tree structure so that any piece of data can be reached in roughly the same time. It is a tree data structure that keeps data sorted so that searching, inserting and deleting can be done in logarithmic amortized time.
A table has only a few rows; should you create indexes on this table?
You should not create indexes on small tables (tables that use only a few blocks) because a full table scan may be faster than an indexed query.
A column has many repeated values; which type of index should you create on this column?
A B-tree index is suitable when the indexed column has high cardinality (many distinct values). For a column with many repeated values (low cardinality), a bitmap index is more useful, but bitmap indexes are very expensive to maintain under heavy DML, so they are best suited to read-mostly environments.
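A hedged sketch of a bitmap index on a low-cardinality column; the table and column names are invented for illustration:
CREATE BITMAP INDEX customers_gender_bix ON customers (gender);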
When should you rebuild indexes?
There is no rule of thumb for when you should rebuild an index; according to experts, it depends on the situation in your database:
Consider a rebuild when the data in the index is sparse (lots of holes caused by deletes or updates) and your queries are mostly range based, or when BLEVEL is greater than 3 (the BLEVEL column can be checked in DBA_INDEXES; see the sketch below).
Keep in mind that rebuilding indexes itself degrades database performance while it runs.
In fact, a B-tree index can never become unbalanced; B-tree performance is good for both small and large tables and does not degrade as the table grows.
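A minimal sketch for spotting rebuild candidates by BLEVEL and rebuilding one of them; the schema and index name in the ALTER statement are placeholders:
SELECT owner, index_name, blevel
  FROM dba_indexes
 WHERE blevel > 3;

ALTER INDEX scott.emp_name_ix REBUILD;   -- emp_name_ix is a made-up index name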
Can you build indexes online?
Yes, we can build indexes online, which allows DML operations on the base table during index creation. You can use the statements:
CREATE INDEX ONLINE and DROP INDEX ONLINE.
ALTER INDEX REBUILD ONLINE is used to rebuild the index online.
A table lock is required on the index's base table at the start of the CREATE or REBUILD process to register the DDL, and another lock is required at the end of the process to merge the changes made during the build into the final index structure.
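A minimal sketch, using a made-up index name on the demo EMP table:
CREATE INDEX emp_ename_ix ON emp (ename) ONLINE;   -- DML on EMP is still allowed while this runs
ALTER INDEX emp_ename_ix REBUILD ONLINE;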
A table is created with the following storage settings:
                storage (initial 200k
                   next 200k
                   minextents 2
                   maxextents 100
                   pctincrease 40)
What will be the size of the 4th extent?
PCTINCREASE allows the segment to grow at an increasing rate.
The first two extents will be of a size determined by the INITIAL and NEXT parameters (200k each).
The third extent will be 1 + PCTINCREASE/100 times the second extent (1.4 * 200k = 280k).
The 4th extent will be 1 + PCTINCREASE/100 times the third extent (1.4 * 280k = 392k), and so on.
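To verify the actual extent sizes once the segment has grown, a hedged sketch (the owner and segment name are placeholders):
SELECT extent_id, bytes/1024 AS kb
  FROM dba_extents
 WHERE owner = 'SCOTT'
   AND segment_name = 'MY_TABLE'   -- placeholder segment name
 ORDER BY extent_id;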
Can you Redefine a table Online?
Yes. We can perform online table redefinition with the Enterprise Manager Reorganize Objects wizard or with the DBMS_REDEFINITION package.
It provides a mechanism to modify a table's structure without significantly affecting its availability: while a table is being redefined online, it remains accessible to both queries and DML.
Purpose of Table Redefinition
·         Add, remove, or rename columns of a table
·         Convert a non-partitioned table to a partitioned table and vice versa
·         Switch a heap-organized table to an index-organized table and vice versa
·         Modify storage parameters
·         Add or remove parallel support
·         Reorganize (defragment) a table
·         Transform data in a table
Restrictions for Table Redefinition:
·         One cannot redefine Materialized Views (MViews) and tables with MViews or MView Logs defined on them.
·         One cannot redefine Temporary and Clustered Tables
·         One cannot redefine tables with BFILE, LONG or LONG RAW columns
·         One cannot redefine tables belonging to SYS or SYSTEM
·         One cannot redefine Object tables
·         Table redefinition cannot be done in NOLOGGING mode (watch out for heavy archiving)
·         Cannot be used to add or remove rows from a table
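A hedged outline of the DBMS_REDEFINITION flow; the schema, table, and interim table names are placeholders, and the interim table (created beforehand with the desired new structure) is assumed to exist:
begin
  -- raises an error if the table cannot be redefined online
  dbms_redefinition.can_redef_table('SCOTT', 'EMP_BIG');
  -- start copying rows into the interim table EMP_BIG_INT
  dbms_redefinition.start_redef_table('SCOTT', 'EMP_BIG', 'EMP_BIG_INT');
  -- indexes, triggers, constraints and grants would normally be copied here
  -- (COPY_TABLE_DEPENDENTS) before finishing
  dbms_redefinition.finish_redef_table('SCOTT', 'EMP_BIG', 'EMP_BIG_INT');
end;
/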
Can you assign Priority to users?
Yes, we can do this through the Database Resource Manager, which gives database administrators more control over resource management decisions, so that resource allocation can be aligned with an enterprise's business objectives.
With the Oracle Database Resource Manager, an administrator can:
  • Guarantee certain users a minimum amount of processing resources regardless of the load on the system and the number of users
  • Distribute available processing resources by allocating percentages of CPU time to different users and applications.
  • Limit the degree of parallelism of any operation performed by members of a group of users
  • Create an active session pool. This pool consists of a specified maximum number of user sessions allowed to be concurrently active within a group of users. Additional sessions beyond the maximum are queued for execution, but you can specify a timeout period, after which queued jobs terminate.
  • Allow automatic switching of users from one group to another group based on administrator-defined criteria. If a member of a particular group of users creates a session that runs for longer than a specified amount of time, that session can be automatically switched to another group of users with different resource requirements.
  • Prevent the execution of operations that are estimated to run for a longer time than a predefined limit
  • Create an undo pool. This pool consists of the amount of undo space that can be consumed by a group of users.
  • Configure an instance to use a particular method of allocating resources. You can dynamically change the method, for example, from a daytime setup to a nighttime setup, without having to shut down and restart the instance.
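A hedged sketch of prioritizing one group of users over everyone else; the plan name, consumer group name, and percentages are invented, and CPU_P1 is the 10g-era parameter (newer releases use MGMT_P1):
begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group('OLTP_GROUP', 'high-priority users');
  dbms_resource_manager.create_plan('DAYTIME_PLAN', 'daytime resource plan');
  dbms_resource_manager.create_plan_directive(
      plan => 'DAYTIME_PLAN', group_or_subplan => 'OLTP_GROUP',
      comment => '75% of CPU at level 1', cpu_p1 => 75);
  dbms_resource_manager.create_plan_directive(
      plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_GROUPS',
      comment => 'remaining CPU', cpu_p1 => 25);
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/
-- activate the plan
ALTER SYSTEM SET resource_manager_plan = 'DAYTIME_PLAN';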
