Oracle9i Database Fundamentals 1Z1-031: Managing the Database Structure
Lesson 1. Course Introduction
*Course 3: Managing the Database Structure
This course introduces Oracle's physical disk resources, including the data dictionary, control files, and redo log files.
- The data dictionary tracks every object in a database. It has two key components: base tables that store the data and user-accessible views that let you view the data.
- Control files keep track of the location of all the files that Oracle needs. Without control files, Oracle would be unable to find its physical components.
- Redo logs are disk resources that store the data changes made by Oracle users.
This course presents information relevant to the following Oracle 1Z1-031 Oracle9i DBA Fundamentals I exam objectives:
Data Dictionary Content and Usage
- Identify key data dictionary components
- Identify the contents and uses of the data dictionary
- Query the data dictionary
- Explain the uses of the control file
- List the contents of the control file
- Multiplex and manage the control file
- Manage the control file with OMF
- Obtain control file information
- Explain the purpose of online redo log files
- Describe the structure of online redo log files
- Control log switches and checkpoints
- Multiplex and maintain online redo log files
- Manage online redo log files with OMF
Lesson 2. Constructing Data Dictionary Views
Every object in the database is tracked in some fashion by the Oracle data dictionary. The data dictionary is the first set of database objects the DBA should create after issuing the create database command. The data dictionary can be created automatically or manually.
After completing this lesson, you should be able to:
- Explain the purpose of the data dictionary
- Describe how the data dictionary is created
Oracle generally creates the data dictionary at database creation time through the use of the catalog.sql and catproc.sql scripts. This can be done without any intervention from the DBA.
If you're using Database Configuration Assistant to create the database, the tool runs these scripts. If you're manually creating the database, be sure to run these scripts soon after the database is created.
*The catalog.sql Script
The first script to run, catalog.sql, creates the objects that comprise the data dictionary. The data dictionary supports virtually every aspect of Oracle database operation, from finding information about objects to performance tuning, and everything in between.
To create a data dictionary, run the catalog.sql script from within SQL*Plus while connected with the administrative privilege sysdba.
This script performs a laundry list of create view statements. It also executes a series of other scripts to create other data dictionary views in special areas, and to create special public synonyms for those views.
Within the catalog.sql script are calls to several other scripts. This table lists these scripts and their functions.
Script | Function |
---|---|
cataudit.sql | Creates the SYS.AUD$ dictionary table that tracks all audit trail information generated by Oracle's auditing feature. |
catldr.sql | Creates views for the SQL*Loader tool, which is used to process large-volume data loads from one system to another. |
catexp.sql | Creates views used by the Oracle IMPORT/EXPORT utilities. |
catpart.sql | Creates views that support Oracle9i's partitioning option. |
catadt.sql | Creates views that support the user-defined types and object components of Oracle9i's object features. |
standard.sql | Creates the STANDARD package, which stores all Oracle scalar or simple datatypes such as VARCHAR2 and BLOB. STANDARD also contains built-in SQL functions such as decode( ). |
The catalog.sql script calls these other scripts automatically. All the scripts can be found in the rdbms/admin directory under the Oracle software home directory. The following code demonstrates the commands necessary to run the catalog.sql file on UNIX:
# cd $ORACLE_HOME/rdbms/admin
# sqlplus
Oracle SQL*Plus Release 9.0.1.0.0 - Production
(c)Copyright 2001, Oracle Corporation. All Rights Reserved.
Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning and Objects options
SQL> @catalog
*The catproc.sql Script
The second script to run, catproc.sql, creates procedural options and utilities for PL/SQL. It contains references to scripts in the rdbms/admin directory, such as dbmsutil.sql and dbmssql.sql.
Two different types of scripts are run by catproc.sql:
- Scripts ending in .sql are package specifications for Oracle server packages. A package specification contains the procedure, function, type, and constant definitions available in the package, but no actual code.
- Scripts ending in .plb, such as prvtutil.plb, contain PL/SQL code that has been obfuscated with Oracle's wrap utility so you can't read the application code logic.
The catproc.sql script calls other scripts automatically. All the scripts can be found in the rdbms/admin directory under the Oracle software home directory. The following code demonstrates the commands necessary to run the catproc.sql file on UNIX:
/home/oracle/app/oracle/product/9.0.1/rdbms/admin> sqlplus
Oracle SQL*Plus Release 9.0.1.0.0 - Production
(c)Copyright 2001, Oracle Corporation. All Rights Reserved.
Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning and Objects options
SQL> @catproc
As you've learned, you must run the catalog.sql and catproc.sql scripts to create the data dictionary. You can't create the dictionary views unless you've already created the database.
Because you run the scripts while connected as user sys as sysdba, the SYS user will own the database objects that comprise the data dictionary, and these objects are stored in the SYSTEM tablespace. Neither of these exist until you issue the create database statement.
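If you want a quick, informal check that the scripts completed, you can count the objects owned by SYS. On a freshly created Oracle9i database the count should be several thousand once catalog.sql and catproc.sql have run (the exact number varies by release and installed options):
SQL> select count(*) from dba_objects where owner = 'SYS';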
In addition to the catalog.sql and catproc.sql scripts, other scripts can be executed to create different options. These scripts will be described as they pertain to matters tested on the exam.
The Oracle documentation has a complete list of scripts.
In this lesson, you learned that the Oracle data dictionary tracks every object in the database. The data dictionary is the first set of database objects the DBA should create after issuing the create database command.
You learned that the data dictionary can be created automatically with the Database Configuration Assistant.
You also learned that you can create the data dictionary manually by running the catalog.sql and catproc.sql scripts from within SQL*Plus while you're connected with the sysdba administrative privilege.
Lesson 3. Tables and Views
Oracle's data dictionary has two components: base tables and user-accessible views. This lesson covers the differences between these components.
After completing this lesson, you should be able to:
- Describe the purpose of base tables
- Describe the purpose of user-accessible dictionary views
Oracle's data dictionary has base tables and user-accessible views. You should always employ the user-accessible views in the data dictionary. Never access the base tables directly. This lesson examines aspects of base tables and views you need to understand for the exam.
*Base Tables
Data in the data dictionary is stored in a set of tables generated during the initial stages of database creation. These are the base tables.
Base tables are internal SYS-owned tables (such as OBJ$, TAB$, and USER$) created in the SYSTEM tablespace, whose sole purpose is to store the data dictionary base tables. Oracle creates these tables when you issue the create database statement, before catalog.sql is run.
Only user SYS has the privileges necessary to access base tables directly. You should avoid logging into Oracle as SYS whenever possible to avoid damaging the dictionary base tables, and you should never grant direct access to base tables to others using the Oracle database.
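As an illustration (a hypothetical session that assumes the standard SCOTT demonstration account exists), an ordinary user who tries to query a SYS-owned base table such as OBJ$ on a default installation is simply told the table doesn't exist:
SQL> connect scott/tiger
Connected.
SQL> select count(*) from sys.obj$;
ERROR at line 1:
ORA-00942: table or view does not exist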
*User-Accessible Views
As the base tables are created and populated, the catalog.sql script creates views that let users see the data dictionary data. The data dictionary that we normally refer to is a set of views, not tables. This course will focus on the views available in the data dictionary rather than on the base tables.
Oracle creates public synonyms on many data dictionary views so users can access them easily. The use of the data dictionary is where the exam will test your knowledge.
Oracle software uses the data dictionary extensively. Among other things, the dictionary is used for the following tasks:
- Validating user connections coming into the database
- Verifying the existence of tables that are being queried
- Looking for table indexes to improve transaction performance
A wealth of information about the objects and data in your database can be found in a relatively small number of tables owned by the special privileged user called SYS.
Although Oracle prevents you from looking at these tables directly, several views are available for you to access this information. These views comprise the data dictionary.
Data dictionary views let you avoid referring to the tables of the data dictionary directly. This safeguard is important for two reasons:
- First, it underscores the sensitivity of the SYS-owned tables that store dictionary data. If something happens to those tables, causing data to be lost or a table to be removed, the effects could seriously damage your Oracle database — possibly even rendering it completely unusable!
- Second, dictionary views distill the information in the data dictionary into highly understandable and useful formats.
In this lesson, you learned that the data dictionary has base tables which store the data.
You also learned that you shouldn't access data via the base tables, but through user-accessible views. The term "data dictionary" normally refers to a set of views rather than the tables they're based on.
Lesson 4. Data Dictionary Contents
Every data dictionary view has a scope and a topic.
The scope refers to how much of the topic a user can see with a view.
The topic refers to what part of the data dictionary is included in the view.
After completing this lesson, you should be able to:
- Describe the three categories used to define the scope of a view
- Specify the topic a view covers
- Describe the dynamic performance views
The code shown here displays the contents of a data dictionary view called USER_TABLES. The name of this view, USER_TABLES, implies two vitally important aspects of this view in the data dictionary:
- The view's topic (in this case, tables)
- The view's scope (in this case, tables owned by the user)
SQL> select table_name from user_tables;
TABLE_NAME
------------------------------
BONUS
DEPARTMENT
DEPT
DUMMY
EMP
EMPLOYEE
EXAMPLE_1
PRICES
SALGRADE
TESTER
*The Scope of a View
Dictionary views are divided into general categories, or scopes. The scope defines how much of the related topic the database user querying the view can see. The first part of any dictionary view's name identifies the scope of that view. There are three scopes:
- USER
- ALL
- DBA
*USER
Views that employ the USER scope let you see only the database objects that you own.
These views have the narrowest scope. They display only the database objects in your own schema.
Suppose user SCOTT owns a table called EMP, and user JASON logs into Oracle and issues the command select * from USER_TABLES.
JASON won't see the EMP table listed among the output, because EMP belongs to SCOTT.
*ALL
Views that employ the ALL scope let you see all the database objects you can access, even if you don't own some of those objects.
ALL views have a wider scope than USER views because they include every relevant object you can access, regardless of who owns the object.
The scope is still limited to you, the user. If you don't have access to an object, even just to look at it without modifying it, you won't see it.
To access a database object, you must meet one of the following conditions:
- You created the object, so you own it.
- You were granted access by the object owner.
- The PUBLIC user was granted access to the object by the owner. PUBLIC is a special user that represents the access privileges every user has. When an object owner creates a table and grants access to that table to user PUBLIC, every user in the database has access privileges to the table.
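To illustrate (a hypothetical sketch that assumes SCOTT owns EMP and JASON initially has no privileges on it), once SCOTT grants access, EMP shows up in JASON's ALL_TABLES even though it never appears in his USER_TABLES:
SQL> connect scott/tiger
SQL> grant select on emp to jason;

Grant succeeded.

SQL> connect jason
Enter password:
SQL> select owner, table_name from all_tables where owner = 'SCOTT';

OWNER      TABLE_NAME
---------- ------------------------------
SCOTT      EMP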
*DBA
Views that employ the powerful DBA scope let you see all relevant database objects in the entire database, whether or not they're owned by or accessible to you.
These views are incredibly handy for DBAs (and sometimes for developers) who need information about every database object.
You can grant a special role to other users, called SELECT_CATALOG_ROLE, to let those users look at DBA views.
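For example, a DBA could give the user JASON access to the DBA views like this:
SQL> grant select_catalog_role to jason;

Grant succeeded.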
*The Topic of a View
The second part of any dictionary view's name identifies the topic of that view. So the topic of the USER_TABLES view is tables, and the topic of the ALL_INDEXES view is indexes.
The views that correspond to areas discussed in this series are listed below. For any given topic, a view has the same function regardless of its scope. These are just some of the views available in the Oracle data dictionary.
View | Description |
---|---|
USER_OBJECTS ALL_OBJECTS DBA_OBJECTS | Give information about database objects owned by the current user, objects available to the current user, or all objects in the database. |
USER_TABLES ALL_TABLES DBA_TABLES | Display information about tables owned by or available to the current user, or all tables in the database. |
USER_INDEXES ALL_INDEXES DBA_INDEXES | Display information about indexes owned by or available to the current user, or all indexes in the database. |
USER_VIEWS ALL_VIEWS DBA_VIEWS | Display information about views owned by or available to the current user, or all views in the database (including dictionary views). |
USER_SEQUENCES ALL_SEQUENCES DBA_SEQUENCES | Display information about sequences owned by or available to the current user, or about all sequences. |
USER_USERS ALL_USERS DBA_USERS | Display information about the current user or about all users. |
USER_CONSTRAINTS ALL_CONSTRAINTS DBA_CONSTRAINTS | Display information about constraints owned by or available to the current user, or all constraints. |
USER_CONS_COLUMNS ALL_CONS_COLUMNS DBA_CONS_COLUMNS | Display information about table columns that have constraints owned by or available to the current user, or all constrained table columns in the database. |
USER_IND_COLUMNS ALL_IND_COLUMNS DBA_IND_COLUMNS | Display information about table columns with indexes owned by or available to the current user, or all columns in tables with indexes. |
USER_TAB_COLUMNS ALL_TAB_COLUMNS DBA_TAB_COLUMNS | Display information about columns in tables owned by or available to the current user, or all columns in all tables in Oracle. |
USER_ROLES ALL_ROLES DBA_ROLES | Display information about roles owned by or available to the current user, or all roles in the database. |
USER_TAB_PRIVS ALL_TAB_PRIVS DBA_TAB_PRIVS | Display information about object privileges on objects owned by the user or available to the current user, or all object privileges available to all Oracle users. |
USER_SYS_PRIVS ALL_SYS_PRIVS DBA_SYS_PRIVS | Display information about system privileges granted to the current user, or all system privileges granted to Oracle users. |
USER_SOURCE ALL_SOURCE DBA_SOURCE | Display the source code for PL/SQL programs owned by the user or available to the current user, or all PL/SQL source code in the database. |
USER_TRIGGERS ALL_TRIGGERS DBA_TRIGGERS | Display information about triggers owned by the user or available to the current user, or all triggers. |
ROLE_TAB_PRIVS ROLE_SYS_PRIVS ROLE_ROLE_PRIVS | Display information about object privileges, system privileges, or roles granted to roles in the database. |
DBA_TABLESPACES DBA_TS_QUOTAS | Display information about all Oracle tablespaces, or space quotas assigned to users in each tablespace. |
DBA_DATA_FILES DBA_SEGMENTS DBA_EXTENTS DBA_FREE_SPACE | Display information about datafiles in the database, or segments, extents, or free space in each datafile. |
DBA_PROFILES | Display information about user profiles. Profiles let the DBA restrict the physical resources of the host system (such as process memory allocation and CPU cycles) that users may involve in Oracle processing. |
*Dynamic Performance Views
Dynamic performance views, or "V$ views," aren't intrinsic parts of the Oracle dictionary, but they can be useful management tools.
The dynamic performance views are constantly updated with important data about the operation of your database. Some dynamic performance views you might find helpful are listed here. Notice that these views don't include a scope in the manner of dictionary views.
This View... | Provides Information About... |
---|---|
V$DATABASE | The database mounted to your instance |
V$SYSSTAT | The performance of your database |
V$SESSION V$SESSTAT | The performance of individual user sessions |
V$LOG V$LOGFILE | Online redo logs |
V$DATAFILE | Datafiles |
V$CONTROLFILE | Control files |
Views don't actually contain any data. They're merely select statements stored as objects in Oracle.
Every time you refer to a view in your own queries, Oracle executes the view's underlying select statement to obtain the contents of that view.
Dictionary view definitions can be quite complex. The code here shows the definition of the ALL_TABLES view in Oracle.
This view contains a column called TEXT defined as the LONG datatype, so the set long 9999 command was included so that SQL*Plus displays the full view text:
SQL> SET LONG 9999;
SQL> SELECT text FROM all_views WHERE view_name = 'ALL_TABLES';
TEXT
--------------------------------------------------------------------------------
select u.name, o.name, ts.name, co.name,
t.pctfree$, t.pctused$,
t.initrans, t.maxtrans,
s.iniexts * ts.blocksize, s.extsize * ts.blocksize,
s.minexts, s.maxexts, s.extpct,
decode(s.lists, 0, 1, s.lists), decode(s.groups, 0, 1, s.groups),
decode(bitand(t.modified,1), 0, 'Y', 1, 'N', '?'),
t.rowcnt, t.blkcnt, t.empcnt, t.avgspc, t.chncnt, t.avgrln,
lpad(decode(t.spare1, 0, '1', 1, 'DEFAULT', to_char(t.spare1)), 10),
lpad(decode(mod(t.spare2, 65536), 0, '1', 1, 'DEFAULT',
to_char(mod(t.spare2, 65536))), 10),
lpad(decode(floor(t.spare2 / 65536), 0, 'N', 1, 'Y', '?'), 5),
decode(bitand(t.modified, 6), 0, 'ENABLED', 'DISABLED')
from sys.user$ u, sys.ts$ ts, sys.seg$ s,
sys.obj$ co, sys.tab$ t, sys.obj$ o
where o.owner# = u.user#
and o.obj# = t.obj#
and t.clu# = co.obj# (+)
and t.ts# = ts.ts#
and t.file# = s.file# (+)
and t.block# = s.block# (+)
and (o.owner# = userenv('SCHEMAID')
or o.obj# in
(select oa.obj#
from sys.objauth$ oa
where grantee# in ( select kzsrorol from x$kzsro))
or /* user has system privileges */
exists (select null from v$enabledprivs
where priv_number in (-45 /* LOCK ANY TABLE */,
-47 /* SELECT ANY TABLE */,
-48 /* INSERT ANY TABLE */,
-49 /* UPDATE ANY TABLE */,
-50 /* DELETE ANY TABLE */)))
If you want to obtain a full listing of all data dictionary views available in Oracle, execute the following command:
select * from DICTIONARY
Oracle will list all the dictionary views.
Comments on the use of each dictionary view are offered in DICTIONARY as well.
DICT is a public synonym for DICTIONARY. (The similarly named CAT and CATALOG objects are based on USER_CATALOG and list the objects in your own schema rather than the dictionary views.)
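For example, to see Oracle's comments on just the dictionary views whose names begin with DBA_TAB, you could query DICTIONARY like this:
SQL> select table_name, comments
  2  from dictionary
  3  where table_name like 'DBA_TAB%';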
In this lesson, you learned that a data dictionary view is defined by its scope and topic.
You learned that the scope may be limited to those objects owned by a single user, expanded to include any objects the user can access, or provide access to all relevant objects in the entire database.
You learned that the topic refers to what part of the data dictionary is included in the view. Topics include (but aren't limited to) indexes, tables, columns, privileges, and users.
You also learned that dynamic performance views can provide important information about database operation.
Lesson 5. Querying the Data Dictionary
Given your knowledge of the importance of the data dictionary, as well as its scopes and topics, you can write queries to locate information in the dictionary.
After completing this lesson, you should be able to:
- Query different views of the data dictionary
This lesson looks at examples of querying the dictionary to illustrate how useful the data dictionary can be.
The views in the examples that follow use the DBA scope.
*Querying the DBA_SOURCE View
You can use the describe command on data dictionary views just as if they were tables. The code here shows the result with DBA_SOURCE. Remember that the SOURCE topic displays the source code for PL/SQL programs.
SQL> describe dba_source
Name Null? Type
------------------------ -------- ----------------
OWNER NOT NULL VARCHAR2(30)
NAME NOT NULL VARCHAR2(30)
TYPE VARCHAR2(12)
LINE NOT NULL NUMBER
TEXT VARCHAR2(4000)
*Querying the DBA_INDEXES View
The DBA_INDEXES view contains information about the indexes on tables available to the user.
Some information in this view details the features of the index, such as whether all values in the indexed column are unique.
Other information in the view identifies the storage parameters of the index and where the index is stored. The code here provides an example. This code includes commands to format the output.
SQL> column owner format a10
SQL> column index_name format a15
SQL> column table_name format a12
SQL> column uniqueness format a10
SQL> select owner, index_name, table_name, uniqueness
2 from dba_indexes
3 where owner = 'SCOTT';
OWNER INDEX_NAME TABLE_NAME UNIQUENESS
---------- --------------- ------------ ----------
SCOTT PK01 EXAMPLE_1 UNIQUE
SCOTT SYS_C00905 DEPARTMENT UNIQUE
SCOTT UK_EMPLOYEE_01 EMPLOYEE UNIQUE
*Querying the DBA_USERS View
The DBA_USERS view gives the current user of the database information about all users known to the Oracle database. An example is shown here.
SQL> SELECT username, user_id, created from dba_users;
USERNAME USER_ID CREATED
------------------------------ --------- ---------
SYS 0 23-JUN-99
SYSTEM 5 23-JUN-99
OUTLN 11 23-JUN-99
DBSNMP 18 23-JUN-99
AURORA$ORB$UNAUTHENTICATED 23 23-JUN-99
JASON 27 18-JUL-00
STUDENT2 46 30-OCT-00
STUDENT1 45 30-OCT-00
SPANKY 43 30-OCT-00
JASON2 48 31-OCT-00
SCOTT 52 19-MAR-01
JASON3 49 16-NOV-00
JASON10 50 18-NOV-00
GIANT 51 08-DEC-00
*Querying Combined Views
The DBA_CONSTRAINTS view displays information about the constraints (primary key, foreign key, unique, NOT NULL, and check constraints) defined in a database. The DBA_CONS_COLUMNS view displays all columns in integrity constraints in Oracle.
Suppose that you can't remember whether you've given shared columns in multiple tables the same name. You can still determine the referenced column through the use of DBA_CONSTRAINTS and DBA_CONS_COLUMNS. This example shows how you can combine the contents of these two views.
SQL> column table_name format a12
SQL> column column_name format a12
SQL> select a.table_name, b.column_name, c.table_name, c.column_name
2 from dba_constraints a, dba_cons_columns b, dba_cons_columns c
3 where a.constraint_name = b.constraint_name
4 and a.r_constraint_name = c.constraint_name;
TABLE_NAME COLUMN_NAME TABLE_NAME COLUMN_NAME
------------ ------------------ ------------ --------------
EMPLOYEE DEPARTMENT_NUM DEPARTMENT DEPARTMENT_NUM
*Querying the DBA_IND_COLUMNS View
Information similar to the contents of DBA_CONS_COLUMNS can be found in DBA_IND_COLUMNS. But the DBA_IND_COLUMNS view will contain columns used in constraint-related indexes as well as columns used in other types of indexes. The code here shows this.
SQL> create index ix_employee_01 on employee (lastname);
Index created.
SQL> select index_name, table_name, column_name, column_position
2 from dba_ind_columns
3 where table_name = 'EMPLOYEE';
INDEX_NAME TABLE_NAME COLUMN_NAME COLUMN_POSITION
--------------- ------------ ------------------ ---------------
PK_EMPLOYEE_01 EMPLOYEE EMPID 1
UK_EMPLOYEE_01 EMPLOYEE GOVT_ID 1
IX_EMPLOYEE_01 EMPLOYEE LASTNAME 1
In this lesson, you learned how to perform queries of the DBA_SOURCE, DBA_INDEXES, DBA_USERS, DBA_CONSTRAINTS, and other views of the data dictionary.
Lesson 6. Using Control Files
Control files keep track of all the files Oracle needs, remembering where those files are located on the host machine. The control files also contain information about the redo log member filenames and where they're located in the filesystem. Without control files, the Oracle database server would be unable to find its physical components.
After completing this lesson, you should be able to:
- Describe the purpose of control files
- Explain the importance and use of the CONTROL_FILES parameter
When you enter SQL*Plus to bring a database online, Oracle looks in the control file to find all the components it needs to start that database.
For example, if the control file lists three datafiles and only two of them are available, Oracle will determine that the third file is missing and won't open your database.
After database startup, control files will be modified or used by Oracle in the following situations:
- When new physical disk resources (such as tablespaces) are created
- When an existing disk resource is modified in some way (for example, when a datafile is added to a tablespace)
- When LGWR stops writing one online redo log and starts writing to another (log switch)
*The CONTROL_FILES Parameter
The CONTROL_FILES parameter in the init.ora file defines the location of your control files on the database server and indicates where Oracle will look on instance startup to find its control files.
When you start the instance before creating a database or when you're migrating from one version of Oracle to another, Oracle will create control files based on the filenames and locations you provide in the CONTROL_FILES parameter.
In subsequent instance startups, if Oracle doesn't find the control files it expects to find based on the content of the CONTROL_FILES parameter, Oracle won't start.
If you create Oracle-Managed control files (OMF is discussed in another lesson), there's no need for the CONTROL_FILES parameter; Oracle will look in the default directory or directories for the control files.
By default in both Windows and UNIX environments, the Oracle Database Configuration Assistant (DBCA) will create three control files and put them in the oradata/database_name directory under the Oracle base directory.
DBCA gives these control files the name controlnn.dbf, where nn is a number between 01 and 03 (the number could be operating system-specific). Sometimes DBCA may use the extension .ctl or .ora instead.
You can follow whatever naming convention you like when you define your own control files. You're also not restricted to placing the control files in $ORACLE_HOME/dbs. You can put them wherever you want.
Oracle recommends that you use multiple control files placed on separate disks. Be sure to include the absolute pathname for the location of your control file when you define values for the CONTROL_FILES parameter in init.ora.
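A typical init.ora entry that multiplexes three control files across three disks might look like this (the paths shown are only an illustration):
CONTROL_FILES = ('/u01/oradata/db1/control01.ctl',
                 '/u02/oradata/db1/control02.ctl',
                 '/u03/oradata/db1/control03.ctl')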
In this lesson, you learned that Oracle uses control files to keep track of all the files it needs. Without control files, the Oracle database server would be unable to find its physical components.
You also learned that the CONTROL_FILES parameter in the init.ora file defines the location of your control files. This parameter specifies where Oracle will look on instance startup to find its control files.
Lesson 7. Control File Contents
Control files contain a number of items that describe your database, the redo logs, and backups.
After completing this lesson, you should be able to:
- Describe the contents of a control file
- Explain how to recreate a control file
You can't just open a control file in your favorite text editor. The control file is written in binary, and only the Oracle database can understand its contents. The control file contains the following items:
- Database name and identifier information
- Database creation date and time information
- Datafile and redo log filesystem locations supplied when you created the database and when you added datafiles or redo logs
- Tablespace names and the associations between tablespaces and datafiles supplied when you created the database or tablespaces, or when you added more datafiles to existing tablespaces
- A history of when archive logs were taken, generated by Oracle
- Information about when backups were taken
- The current online redo log sequence number
- Current checkpoint information generated by Oracle
*Creating New Control Files
Sometimes you need to re-create control files. You might like to rename the database because you're making a copy of it on the same machine that hosts the original database (Oracle won't let two databases have the same name on the same host system).
Or you might need to change certain database settings you set when you created the database, such as maxlogfiles. Or you may have even lost the control file.
Whatever your reason, the method for recreating control files is the same. Issue the following statement:
alter database backup controlfile to trace
The trace keyword tells Oracle to generate a script containing a create controlfile command and store it in the trace directory identified in the USER_DUMP_DEST parameter of the init.ora file.
A sample control file creation script generated by the create controlfile command is displayed here. This script illustrates the correct syntax for a create controlfile statement.
# The following commands create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional
# logs may be required for media recovery of offline data
# files. Use this only if the current version of all online
# logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORGDB01" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/oracle/disk_01/log1a.dbf',
'/oracle/disk_02/log1b.dbf') SIZE 30M,
GROUP 2 ('/oracle/disk_03/log2a.dbf',
'/oracle/disk_04/log2b.dbf') SIZE 30M
DATAFILE
'/oracle/disk_05/system01.dbf',
'/oracle/disk_05/system02.dbf'
;
# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
The next few pages describe the procedure you'd take to re-create control files if you want to clone an existing database.
*Step 1: Generate the Control File Creation Script
From SQL*Plus, issue the alter database backup controlfile to trace command. Oracle generates a script containing the create controlfile command.
SQL> alter database backup controlfile to trace;
*Step 2: Shut Down the Database
Shut your existing database down from SQL*Plus using the shutdown normal or shutdown immediate commands. Don't use shutdown abort.
SQL> shutdown normal
*Step 3: Copy Datafiles and Logs
Use operating system commands to copy all datafiles and redo logs to an alternate filesystem location (preferably using OFA conventions).
*Step 4: Modify the Script
Modify the control file creation script by changing the reuse database "name" clause to set database "newDBName". Substitute the appropriate new name for the cloned database you want to create.
# The following commands create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional
# logs may be required for media recovery of offline data
# files. Use this only if the current version of all online
# logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "newDBName" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/oracle/disk_01/log1a.dbf',
'/oracle/disk_02/log1b.dbf') SIZE 30M,
GROUP 2 ('/oracle/disk_03/log2a.dbf',
'/oracle/disk_04/log2b.dbf') SIZE 30M
DATAFILE
'/oracle/disk_05/system01.dbf',
'/oracle/disk_05/system02.dbf'
;
# Recovery is required if any of the datafiles are restored
# backups, or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
*Step 5: Modify File Locations
In the script, review and modify the filesystem location of every online redo log listed in the logfile clause of the create controlfile command to reflect the new location of the files you copied.
Review and modify the filesystem location of every datafile listed in the datafile clause.
Remove all commented lines (lines that begin with #). You won't necessarily encounter an error if some datafiles weren't copied properly or if Oracle didn't find the datafiles where it expected. So it's very important that you verify the filesystem locations for all datafiles and redo logs in your script before proceeding.
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "newDBName" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/pathname/log1a.dbf',
'/pathname/log1b.dbf') SIZE 30M,
GROUP 2 ('/pathname/log2a.dbf',
'/pathname/log2b.dbf') SIZE 30M
DATAFILE
'/pathname/system01.dbf',
'/pathname/system02.dbf'
;
RECOVER DATABASE
ALTER DATABASE OPEN;
*Step 6: Remove Unneeded Commands
Remove or comment out the recover database and alter database open commands. You'll execute these commands manually. A correctly modified control file script is shown here.
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "newDBName" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/pathname/log1a.dbf',
'/pathname/log1b.dbf') SIZE 30M,
GROUP 2 ('/pathname/log2a.dbf',
'/pathname/log2b.dbf') SIZE 30M
DATAFILE
'/pathname/system01.dbf',
'/pathname/system02.dbf'
;
# The commands that follow aren't needed.
# RECOVER DATABASE
# ALTER DATABASE OPEN;
*Step 7: Copy init.ora
Copy the init.ora file for the existing database to a new location. Use a text editor to change the value set for the CONTROL_FILES parameter in your init.ora file to the new filesystem location.
Modify other init.ora settings, such as DB_NAME, as if you're creating a new database. If you don't perform this step, Oracle won't let you rename your new database and may corrupt your old one.
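As a sketch (directory names are placeholders, in keeping with the script above), the edited init.ora for the clone might contain entries such as:
DB_NAME = newDBName
CONTROL_FILES = ('/pathname/control01.ctl',
                 '/pathname/control02.ctl')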
*Step 8: Start the Instance
From SQL*Plus, start the Oracle instance with startup nomount, making sure the instance picks up the new database's init.ora (for example, by setting ORACLE_SID to the new database name before starting SQL*Plus, or by naming the parameter file explicitly).
SQL> startup nomount pfile='/pathname/initnewDBName.ora'
*Step 9: Run the Script
Run the modified script with the @ (or start) command in SQL*Plus, giving the name under which you saved the edited trace file (shown here as create_control.sql).
This step creates the new control file.
SQL> @create_control.sql
*Step 10: Mount the New Database
Mount the new database with the alter database mount command.
SQL> alter database mount;
*Step 11: Open the New Database
Open the new database with the alter database open command.
SQL> alter database open;
*Step 12: Open the Old Database
To verify that it's still operational, open the old database with the startup open command. If you can open both the original database and the clone, you've been successful. You've re-created the control files Oracle needs for the new database.
SQL> startup open oldDBName
Topic 7.1 Exercises
* Exercise 1
Try re-creating control files for a database you want to copy in Oracle9i.
Step | Action |
---|---|
1 | Please Note: For the exercises in this course, you should set up an isolated Oracle9i system for practice. Do not use your production Oracle9i system for these practice exercises. Generate the control file creation script. |
2 | Shut down the existing database. |
3 | Copy datafiles and redo log files to the appropriate location. |
4 | Modify the script to reflect the name of the database copy and the location of the datafiles and redo log files. |
5 | Copy the init.ora file and modify init.ora parameters. |
6 | Start the Oracle instance and run the script. |
7 | Mount and open the new database. |
In this lesson, you learned that control files contain important information such as the name and identifier of your database, the date and time it was created, database and redo log file locations, tablespace names, checkpoint information, and backup information.
You also learned that it's sometimes necessary to recreate control files, and you learned how to use the create controlfile statement.
Lesson 8. Using Oracle-Managed Files
The Oracle-Managed Files (OMF) feature can minimize the amount of filesystem handling an Oracle DBA must do to create and manage the Oracle database.
After completing this lesson, you should be able to:
- Describe the init.ora OMF parameters used to define Oracle-Managed Files
- Explain the interaction between the CONTROL_FILES parameter and the OMF parameters
- Describe how OMF affects Oracle naming conventions
Employing the Oracle-Managed Files (OMF) feature will affect the operation of your control files. Recall that in order to set up OMF, you must configure appropriate values for two init.ora OMF parameters:
- DB_CREATE_FILE_DEST
- DB_CREATE_ONLINE_LOG_DEST_N
*DB_CREATE_FILE_DEST
The DB_CREATE_FILE_DEST parameter defines the location of the default filesystem directory where Oracle will create datafiles.
Control files and redo logs will also be stored in this directory if the DB_CREATE_ONLINE_LOG_DEST_N parameter is not set.
The code here shows an OFA-compliant example that sets this parameter for a database called DB1.
DB_CREATE_FILE_DEST = '/u01/oradata/db1'
*DB_CREATE_ONLINE_LOG_DEST_N
The DB_CREATE_ONLINE_LOG_DEST_N parameter defines the location of the default filesystem directory for online redo log files and OMF control file creation.
The N value can range from 1 to 5, letting you define five different multiplex copies.
When you specify multiple locations for online redo logs, Oracle will place control files in those redo log directories. You'll learn more about control file multiplexing elsewhere.
The code here shows an OFA-compliant example that sets this parameter for a database called DB1, using two directory locations:
DB_CREATE_ONLINE_LOG_DEST_1 = '/u01/oradata/db1'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u02/oradata/db1'
The values you set for the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_N parameters can be dynamically changed with the following command:
alter system set parameter = value
In this command, parameter is the parameter whose settings you want to change; value is the directory you want to change that OMF parameter to. The code here shows an example.
SQL> alter system set DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata/db1';
System altered.
*Using the CONTROL_FILES and OMF Parameters
Having both the CONTROL_FILES and the OMF parameters provides several options for specifying the location of control files.
If you... | Then... |
---|---|
Specify a value for CONTROL_FILES, but not for either of the OMF parameters | Oracle will create control files in the locations you define with CONTROL_FILES. These files won't be managed by OMF, though Oracle will write any changes in database structure to those control files. |
Specify a value for DB_CREATE_FILE_DEST but not for DB_CREATE_ONLINE_LOG_DEST_N | Oracle will place a single OMF-managed control file in the directory assigned by DB_CREATE_FILE_DEST. |
Specify a value for DB_CREATE_ONLINE_LOG_DEST_N but not for DB_CREATE_FILE_DEST | Oracle places an OMF-managed control file in each directory specified in DB_CREATE_ONLINE_LOG_DEST_N. |
Don't specify values for the CONTROL_FILES parameter or either of the OMF parameters | Oracle's response depends on the operating system in use. |
When you use OMF, you must let Oracle name the files according to its own conventions. The Oracle-managed control file uses a default naming convention of ora_%u.ctl, where %u is a unique name generated by Oracle.
This name won't necessarily correlate with the name of the database or with parameter settings in init.ora such as DB_NAME. For example, your database may be named orgdb01, but Oracle might call the OMF control file ora_cmr3u45r.ctl.
Under OMF, many files that should be placed on separate disks for performance reasons end up on the same disk. So most DBAs probably won't use OMF for production systems. But OMF is great for an organization just getting started with Oracle databases that needs to get up and running very quickly.
In this lesson, you learned that to set up the Oracle-Managed Files (OMF) feature, you must include two OMF parameters in init.ora: DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_N
You also learned that OMF will affect the operation and naming of your control files.
Lesson 9. Viewing Control File Information
Control file information can be found in several V$ performance views.
After completing this lesson, you should be able to:
- Obtain control filename and availability information
- Find information about control file contents
- Display specific database information
From time to time, you might need to obtain information about the control files in your Oracle database. You can do so using some important dynamic performance and data dictionary views available to DBAs on an Oracle database. Let's take a look at these views in more detail.
*The V$CONTROLFILE View
The main view available in the Oracle data dictionary for control file use and management is the V$CONTROLFILE view. This view has only two columns: STATUS and NAME. Their contents are as follows:
STATUS Displays INVALID if the control filename cannot be determined; otherwise, it will be NULL.
NAME Gives the absolute path location of the file on your host machine as well as the control filename.
The information in the V$CONTROLFILE view corresponds to the values set for the initialization parameter CONTROL_FILES. The code here shows the SQL statement used to obtain information from the V$CONTROLFILE view about the control files for Oracle on a Windows machine, as well as the output:
SQL> select * from v$controlfile;
STATUS NAME
------ ----------------------------------------
D:\ORACLE\DATABASE\CTL1D704.ORA
E:\ORACLE\DATABASE\CTL2D704.ORA
F:\ORACLE\DATABASE\CTL3D704.ORA
*The V$DATABASE View
You can find other information about your control files from the V$DATABASE view. This dynamic performance view gives information that Oracle normally also stores within the control file. You should be aware that several columns in this view give information about your control file:
CONTROLFILE_TYPE The type of control file in use (for example, CURRENT, BACKUP, or CLONE)
CONTROLFILE_CREATED When the current control file was created
CONTROLFILE_SEQUENCE# The current sequence number for the database, which is recorded in the control file
CONTROLFILE_CHANGE# The current system change number for the database, which is recorded in the control file
CONTROLFILE_TIME The last time the control file was updated
The code here shows the SQL statement used to obtain information from the V$DATABASE view about the control files on a Windows machine, as well as the output:
SQL> select * from v$database;
DBID NAME CREATED RESETLOGS_CHANGE# RESETLOGS
---------- --------- --------- ----------------- ---------
PRIOR_RESETLOGS_CHANGE# PRIOR_RES LOG_MODE CHECKPOINT_CHANGE#
----------------------- --------- ------------ ------------------
ARCHIVE_CHANGE# CONTROL CONTROLFI CONTROLFILE_SEQUENCE#
--------------- ------- --------- ---------------------
CONTROLFILE_CHANGE# CONTROLFI OPEN_RESETL VERSION_T
------------------- --------- ----------- ---------
1674500680 ORGDB01 21-JAN-00 33409 21-JAN-00
1 06-OCT-98 NOARCHIVELOG 736292
716268 CURRENT 21-JAN-00 4412
736292 NOT ALLOWED 21-JAN-00
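Because select * returns many columns, you may prefer to query only the columns that describe the control file. For example, the following statement returns just the control file type, creation time, and sequence number:
SQL> select controlfile_type, controlfile_created, controlfile_sequence#
  2  from v$database;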
*The V$CONTROLFILE_RECORD_SECTION View
A final view available for displaying control file information is the V$CONTROLFILE_RECORD_SECTION view. A working control file is divided into several sections, each storing different information about the database in action.
For example, there is a section in the control file that keeps track of the sequence number of the current online redo log, a section that contains information about the physical disk file layout of the Oracle database, and so on. This view displays information about each of those sections, such as the size of each record in the control file for that section, the total number of records allocated to each section, and so on.
The code here shows output from the V$CONTROLFILE_RECORD_SECTION view:
SQL> select * from v$controlfile_record_section where rownum < 6;
TYPE RECORD_SIZE RECORDS_TOTAL RECORDS_USED FIRST_INDEX LAST_INDEX
------------- ----------- ------------- ------------ ----------- ----------
LAST_RECID
----------
DATABASE 316 1 1 0 0
0
CKPT PROGRESS 2036 1 0 0 0
0
REDO THREAD 228 1 1 0 0
0
REDO LOG 72 30 5 0 0
5
DATAFILE 428 400 18 0 0
20
*The V$PARAMETER View
You can also find the names of your control files by issuing the following statement:
select value from V$PARAMETER where name = 'control_files'
Be sure that the parameter name is in lowercase.
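The SQL*Plus show parameter command offers a quicker way to see the same setting:
SQL> show parameter control_files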
Topic 9.1 Exercises
* Exercise 1
Try obtaining information about the control files in your Oracle database.
Step | Action |
---|---|
1 | Please Note: For the exercises in this course, you should set up an isolated Oracle9i system for practice. Do not use your production Oracle9i system for these practice exercises. Display the status and name of your control file using the V$CONTROLFILE view. |
2 | Determine when the current control file was created using the V$DATABASE view. |
3 | Find the size of each record in the control file using the V$CONTROLFILE_RECORD_SECTION view. |
In this lesson, you learned how dynamic performance and data dictionary views are used by DBAs to obtain information about control files. You learned that V$CONTROLFILE provides basic information about the status and location of control files.
You also learned that the V$DATABASE view provides information about when the control file was created and the most recent sequence number and SCN recorded in the control file. Finally, you learned that V$CONTROLFILE_RECORD_SECTION shows information stored in different sections of the control file.
Lesson 10. Multiplexing Control Files
Multiplexing, or mirroring, control files reduces the risk of losing these important files by maintaining multiple copies on different disks.
After completing this lesson, you should be able to:
- Define multiplexing
- Create copies of control files
- Back up control files
If you have multiple disk drives available for Oracle, you should store copies of the control files on different disks to minimize the risk of losing these important physical disk resources. If you stick with the default creation of control files, Oracle recommends that you move these control files to different disk resources and set the CONTROL_FILES parameter to let Oracle know that there are multiple copies of the control file to be maintained. This is called multiplexing, or mirroring, the control file.
Multiplexing control files reduces Oracle's dependence on any one disk available on the host machine. In the event of a failure, the database is more recoverable because multiple copies of the control file have been maintained. In no case should you ever use only one control file for an Oracle database, because of the difficulty in recovering a database when the control file is lost. Having several copies of the control file and parameter file on different disks will minimize the possibility of one disk failure rendering your database inoperable.
The actual process of making additional copies of your control file and moving them to different disk resources is something you handle outside of Oracle. You can create a duplicate copy of the control file by simply using the operating system copy command. In Windows, that command is copy, whereas in UNIX it is cp.
However, that file will be unusable unless you follow these steps:
- In SQL*Plus, execute the shutdown normal, shutdown immediate, or shutdown transactional command to shut down the instance and close the database.
- Copy the control file to another disk using your operating system's file copy command.
- Modify the CONTROL_FILES parameter in init.ora to include the additional control file.
- Restart the instance in SQL*Plus with the startup open command. Oracle will now maintain an additional copy of the control file.
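Here's a minimal sketch of that sequence on UNIX (the filesystem locations are illustrative only):
SQL> shutdown immediate

$ cp /u01/oradata/db1/control01.ctl /u02/oradata/db1/control02.ctl

Then edit init.ora so that it reads:

CONTROL_FILES = ('/u01/oradata/db1/control01.ctl',
                 '/u02/oradata/db1/control02.ctl')

SQL> startup open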
By specifying multiple control files in the init.ora file before database creation, you will start your database administration on that database on the right foot, making the database easy to maintain.
You back up your control file using the alter database backup controlfile to trace command. This command generates a script that can be used to re-create the control file later in the event of a problem.
You can also use the alter database backup controlfile command to make a copy of the actual control file to some alternate location. Once backed up, however, Oracle will not maintain that control file when new datafiles or tablespaces or redo logs are added.
Thus, you cannot simply make a backup copy of the control file if you want Oracle to maintain that copy with new information about the database's physical structure.
The code here shows an example of how to use the alter database backup controlfile command for making backups of actual control files:
SQL> alter database backup controlfile to '/u05/backup/db1/control01.ctl';
Database altered.
In this lesson, you learned that Oracle can maintain multiple copies of a control file for redundancy purposes. This is called multiplexing.
You also learned that you can create a duplicate of a control file with the copy command in Windows, or the cp command in UNIX. And you learned the steps necessary to ensure that the file is usable and easy to maintain.
Finally, you learned how to back up control files using the alter database backup controlfile command. However, remember that Oracle will not maintain a backup when new datafiles, tablespaces, or redo logs are added.
Lesson 11. Redo Log Files
Redo logs are disk resources that store the data changes made by Oracle users.
After completing this lesson, you should be able to:
- Explain how online redo log files are processed
- Identify the two database archive modes
- Switch between archive modes
This discussion covers two official exam objectives listed on the candidate guide for the Oracle9i DBA Fundamentals I exam:
- Explain the purpose of online redo log files
- Describe the structure of online redo log files
Oracle uses redo logs to track data changes users make to the database such as changes made to data segment blocks of the tables or indexes. Each user process that makes such a change generates a redo log entry, which identifies the change that was made. This redo log entry is placed in the area of the SGA called the redo log buffer, which is covered in another course. The LGWR process writes those changes to files on disk called online redo log files.
Oracle requires a minimum of two online redo log groups. Each online redo log file belongs to a group, and Oracle lets you mirror the files within each group for the sake of redundancy. The mirrored files are called the members of the group.
The operation of online redo logs works like this: when the redo log buffer fills with redo entries from user processes, or when an active transaction issues the commit command, LGWR writes the contents of the redo log buffer to each member of the current group.
This is sometimes referred to as flushing the redo buffer. The flushing of the entire redo log buffer happens even if a single active transaction commits.
The group being written is considered the current group because LGWR is currently writing into it. LGWR writes redo log entries to the current group until the group is full, at which point LGWR switches to writing redo entries to the next redo log group. This is referred to as a log switch.
When the other group fills, LGWR will then switch to the next available group until it reaches the last redo log file. After filling the last log group, LGWR loops back to the first group and continues writing redo entries.
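You can watch this cycle through the V$LOG dynamic performance view, which reports each group's sequence number, number of members, and whether it is the CURRENT group LGWR is writing:
SQL> select group#, sequence#, members, status from v$log;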
What happens to the first group depends on the archiving mode your Oracle database is running in. A database could be running either in ARCHIVELOG or NOARCHIVELOG mode.
When running in NOARCHIVELOG mode, LGWR writes to each of the log groups in turn and then loops back to the first group, overwriting what it contains. In the event of a failure, this mode cannot recover the database to the point of failure.
In ARCHIVELOG mode, the redo entries written into the redo groups are copied to a different location by an archiver process (ARCn).
To speed up archiving, you can activate more than one archiver process. While running in this mode, the database can be recovered to the point of failure by applying the archived redo log entries to a restored backup.
At this point, less-experienced DBAs often wonder: if Oracle lets you recover to the point of database failure when the database runs in ARCHIVELOG mode, why would anyone run a database in NOARCHIVELOG mode?
The answer is simple. An Oracle database running in ARCHIVELOG mode runs slower than a database running in NOARCHIVELOG mode, although the performance difference is minimal when the database has been tuned properly.
Also, not every Oracle database needs to have all its redo archived. For example, a read-only database used for data warehouse queries wouldn't need to run in ARCHIVELOG mode because once the data has been loaded, users wouldn't be able to make changes anyway.
Another situation where you might not care about archiving user data changes is on development systems. Unlike production systems, where every user change is important, development systems usually are only utilized by developers coding new programs or enhancing existing systems. The developer may only care that the database structure is the same in development as in production. The developer may populate the development tables with junk data for the purposes of testing.
When testing is finished, the developer may want to blow away all the changes and start again from scratch. In this case, archiving all data changes made by the developer would be unnecessary and potentially even counterproductive. Thus, Oracle lets you decide whether or not to archive redo information based on the business needs of the system the database supports.
In order to switch between these two modes, you must follow these steps:
- In SQL*Plus, shut down the database you want to change archiving mode on using the shutdown normal or shutdown immediate command.
- Start the instance and mount but do not open the database. You can use the startup mount command to set up the database in the proper state.
- Change the archiving status of the database using the alter database archivelog or alter database noarchivelog command. This command will modify the contents of the control file. If you restart the database using the startup open command, Oracle will bring up the database in the mode you specify here.
- You should shut down the database at this point and take a complete offline backup. If you don't perform this step and later have to recover the database from datafiles backed up before you changed the archive mode, then the recovered database will not reflect the change in archiving mode.
Additionally, if you switched from NOARCHIVELOG mode to ARCHIVELOG mode and didn't back up your database afterward, you won't be able to recover to the point of database failure. This is because the datafiles you would use for recovery still reflect the fact that Oracle was running in NOARCHIVELOG mode when the backup was taken.
- Reopen the database for normal use with the alter database open command.
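Putting the steps together, a minimal session for switching a database into ARCHIVELOG mode might look like this (the offline backup is taken with operating system commands and isn't shown in detail):
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> archive log list
SQL> shutdown immediate
(take a complete offline backup of the database files)
SQL> startup open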
Topic 11.1 Exercises
* Exercise 1
Try switching the archive mode of your Oracle database.
Step | Action |
---|---|
1 | Please Note: For the exercises in this course, you should set up an isolated Oracle9i system for practice. Do not use your production Oracle9i system for these practice exercises. Shut down the database. |
2 | Start the instance and mount but do not open the database. |
3 | Use the alter database command to change the archive status of the database. |
4 | Shut down the database and take a complete offline backup. |
5 | Reopen the database. |
6 | Execute a query on the database. |
7 | View the archived log of the query. |
In this lesson, you learned that online redo logs are the physical files that store the information from the redo log buffer in the SGA, and you learned how online redo log files operate. You also learned that online redo log files can be mirrored. Each set of files containing the same redo data is called a redo log group, and the mirrored files within a group are called redo log members.
You also learned that an Oracle database can run in either ARCHIVELOG or NOARCHIVELOG mode. In ARCHIVELOG mode, the data in the online redo log files is archived to the archived redo log files as soon as a redo log file fills up. NOARCHIVELOG mode runs faster and overwrites the contents of the log group rather than archiving them.
Finally, you learned how to switch between the two archive modes.
Lesson 12. Controlling Log Switches and Checkpoints
A checkpoint occurs every time a log switch occurs. Both can be forced to occur more frequently by a DBA.
After completing this lesson, you should be able to:
- Identify the events that occur at a log switch
- Define the term checkpoint
- Specify checkpoint frequency by volume and time
- Force checkpoints or log switches to occur
Redo logs are written sequentially. As Oracle users make changes, the server process generates redo to re-create the changes made within the user's transaction. The redo gets written to the redo buffer.
From there, LGWR writes the change to an online redo log. Redo logs are finite in size. When one fills, LGWR has to start writing to the next one in the sequence.
A log switch occurs at the point at which LGWR completely fills the online redo log group and switches to start writing into the next group.
At every log switch, several events occur. Oracle generates a new sequence number for the online redo log LGWR is about to write. Oracle also performs a checkpoint. A checkpoint occurs every time a log switch occurs. Checkpoints can also occur more often than log switches.
During a checkpoint, the checkpoint background process CKPT signals DBWn to flush the dirty buffers to the datafiles and updates the headers of all datafiles and the control files to record that the checkpoint has completed. The number of buffers written by DBWn is influenced by the parameter FAST_START_IO_TARGET (or FAST_START_MTTR_TARGET), if specified.
The frequency with which checkpoints occur affects the amount of time Oracle requires for instance recovery. If an instance experiences failure, the dirty blocks that haven't been written to disk must be recovered from redo logs. Even though this instance recovery is handled automatically by SMON, the amount of time it takes to recover depends on the time difference between the checkpoints.
The events that occur at a log switch are as follows.
First, LGWR stops writing the redo log it filled. Second, CKPT signals the DBWn to flush the dirty buffers into the data files. Finally, CKPT updates the control files and the data file headers with the checkpoint information. With the checkpointing tasks completed, the LGWR will be allowed to start writing into the next redo log group with a new sequence number.
The DBA has only a small amount of control over log switches. Because users will always change data, there is little the DBA can do to stop redo information from being written.
With that said, you can control how often log switches occur by changing the size of the online redo log members, or you can force a log switch manually with the alter system switch logfile command. Larger member files make log switches less frequent, whereas smaller member files make log switches more frequent.
A checkpoint happens not just at the time of a log switch; CKPT can also be configured to perform checkpoints at regular intervals, as described in the following section. When configured this way, checkpoints happen at those regular intervals in addition to the checkpoints triggered by redo log switches.
If your database redo logs are very large, you should set up the database so that checkpoints happen more often than just at log switches.
You can specify more frequent checkpoints with LOG_CHECKPOINT_INTERVAL or LOG_CHECKPOINT_TIMEOUT in the init.ora file.
These two parameters reflect two different principles on which checkpoint frequency can be based: volume-based intervals and time-based intervals.
LOG_CHECKPOINT_INTERVAL sets checkpoint intervals to occur on a volume basis. When LGWR writes as much information to the redo log as is specified by LOG_CHECKPOINT_INTERVAL, the checkpoint occurs.
Periods of high transaction volume require flushing the dirty buffer write queue more often; conversely, periods of low transaction volume require fewer redo log entries to be written, therefore fewer checkpoints are needed.
The effect of using LOG_CHECKPOINT_INTERVAL is much the same as using smaller redo logs, but it also eliminates the additional overhead of a log switch, such as the archiving of the redo log.
In versions of Oracle prior to Oracle8i, the value you set for LOG_CHECKPOINT_INTERVAL is the number of operating system blocks LGWR should write to the redo log (after a log switch) before a checkpoint should occur.
However, this definition changed a little bit with Oracle8i. When LOG_CHECKPOINT_INTERVAL is specified, the target for the checkpoint position cannot lag the end of the log more than the number of redo log blocks specified by this parameter. This ensures that no more than a fixed number of redo blocks will need to be read during instance recovery.
The other way of specifying checkpoint frequency is to use a time-based interval. This is defined with the LOG_CHECKPOINT_TIMEOUT init.ora parameter. Time-based checkpoint intervals are far simpler to configure than volume-based ones, although they make checkpoints occur at uniform intervals regardless of the transaction volume on the system.
When LOG_CHECKPOINT_TIMEOUT is specified, it sets the target for the checkpoint position to the place in the log file where the end of the log was that many seconds ago. This ensures that no more than the specified number of seconds' worth of redo blocks needs to be read during recovery. The behavior is the same in Oracle8i and Oracle9i; only the wording of the definition changed.
To disable time-based checkpoints, set the LOG_CHECKPOINT_TIMEOUT to zero.
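As a hedged illustration, the two parameters might be set in init.ora as follows; the values shown are arbitrary examples, not recommendations.
# checkpoint when the checkpoint position lags the end of the log by 10,000 redo blocks
LOG_CHECKPOINT_INTERVAL = 10000
# checkpoint so that no more than 1,800 seconds' worth of redo needs to be read during recovery
LOG_CHECKPOINT_TIMEOUT = 1800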
Also, recall the mention of the new parameter FAST_START_IO_TARGET. This parameter improves the performance of crash and instance recovery. The smaller the value of this parameter, the better the recovery performance, because fewer blocks need to be recovered. When the parameter is set, the DBWn writes dirty buffers out more aggressively.
One concern you may have when specifying checkpoints to occur at regular intervals is that a checkpoint may occur just before a log switch. In order to avoid log switches causing checkpoints to occur in rapid succession, determine the average time it takes the redo log to fill and specify a time interval that factors in the checkpoint that happens at log switches. To do so, review the trace file generated by LGWR in the directory specified by the BACKGROUND_DUMP_DEST parameter.
Finally, you can force checkpoints to occur either by forcing a log switch or by forcing a checkpoint. Both can be done with the alter system command. To force a log switch, issue the alter system switch logfile command. To force a checkpoint, issue the alter system checkpoint command. Checkpoints that occur without a corresponding log switch are called fast checkpoints, whereas checkpoints involving log switches are full, or complete checkpoints.
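For example, assuming you hold the alter system privilege, either statement can be issued from SQL*Plus:
alter system switch logfile;
alter system checkpoint;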
Question 39
Question 40
Question 41
Question 42
Question 43
In this lesson, you learned about the events that occur at every log switch, the point at which LGWR completely fills the online redo log group and switches to start writing into the next group. A checkpoint is one of the events that occurs every time a log switch occurs, although a checkpoint can happen more often.
You also learned how to configure a checkpoint to happen at regular intervals. This should be done if your redo logs are very large, so that checkpoints happen more often than just at log switches. You learned that the two parameters in the init.ora file that can be set to specify checkpoint frequency are LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT. The former sets a volume-based interval and the latter a time-based one.
Finally, you learned that you can force a log switch or a checkpoint by using the alter system command.
Lesson 13. Maintaining Redo Log Files
Multiplexing your redo logs is an important step in configuring the redo log files of a database.
After completing this lesson, you should be able to:
- Explain why multiplexing redo logs is important
- Add and remove redo logs and members
- Rename redo log files
In order to improve recoverability in the event of disk failure, the DBA should configure Oracle to multiplex or store each redo log member in a group on different disk resources. This means that Oracle will maintain two or more members for each redo log group. The diagram illustrates the concept of multiplexing redo log members.
By multiplexing redo log members, you keep multiple copies of the redo log available to LGWR. If LGWR has a problem with a disk that holds the redo log (for example, if the disk controller fails), the entire instance will continue running because another redo log member is available on a different disk.
If the redo log group has only one member, or if multiple online redo log members are not multiplexed, and the same failure occurs, LGWR will not be able to write redo log entries and the Oracle instance will fail. This is because LGWR must write redo log entries to disk in order to clear space in the redo log buffer so that user processes can continue making changes to the database. If LGWR cannot clear the space in memory by writing the redo log entries to disk, no further user changes are allowed.
Multiplexing redo logs on separate disks benefits the database in other ways, too. The archiver process (ARCn) handles the archiving of redo logs automatically when it is enabled via ARCHIVELOG mode. When ARCn is running, it automatically copies filled redo logs to the archive destination specified by the LOG_ARCHIVE_DEST_n parameter in the init.ora file every time a log switch occurs.
If redo log groups are on one disk, contention can arise at log switch time when ARCn tries to copy the filled redo log to the archive destination at the same time that LGWR tries to write to the next redo log group. If redo log members and the archive log destination are on different disks, there is little possibility for ARCn and LGWR to contend, because ARCn can work on one disk while LGWR continues on another.
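As a sketch, and assuming the directory path shown is hypothetical, the archive destination might be placed on a disk that holds no online redo log members using init.ora parameters such as:
# archived redo logs go to a disk separate from the online redo log members
LOG_ARCHIVE_DEST_1 = 'LOCATION=/u03/oradata/db1/arch'
# start the ARCn process automatically when the instance starts
LOG_ARCHIVE_START = true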
A redo log group must have at least one member. To add additional members, use the following command:
alter database add logfile member 'filename' to group grpnum
where filename is the full path and name of the member file being added and grpnum is the number of the group to which you are adding the member.
You can also add new online redo log groups with the alter database add logfile group grpnum 'filename' statement. Finally, if you have more than two redo log groups, you can remove redo logs, provided at least two logs will remain and the one you want to remove is not currently being written by LGWR. The statement used to remove an online redo log from Oracle is alter database drop logfile group grpnum.
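For example, assuming group 3 already exists and that the file names and size shown are hypothetical:
alter database add logfile group 4 ('/u02/oradata/db1/redo04a.log') size 10M;
alter database add logfile member '/u03/oradata/db1/redo04b.log' to group 4;
alter database drop logfile group 3;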
Note that dropping the redo log group does not remove the actual file from your host machine. The control files are updated to drop the members of the group from the database structure.
Group number and status information for online redo logs can be obtained from V$LOG, as described in another lesson.
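For instance, a quick check of group numbers and status might use a query such as:
select group#, members, bytes, archived, status from v$log;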
You can rename redo log member files as well. This functionality can also be used to move an online redo log to another disk location on the machine hosting the Oracle database.
In addition, be aware that by moving a redo log, you are also making changes to the control file. If you don't have a backup copy of your control file, it may be difficult to back out of this change to your database later. Thus, you should make a copy of your database (including the control file) before renaming a redo log file.
Assuming you've made a backup, the following steps can be used to rename a redo log member file; a command sketch appears after the list:
- Issue the shutdown command. The database cannot be open while you rename a redo log file.
- Copy the redo log files from their old location to the new location desired using operating system commands.
- Start up the instance and mount but do not open the database.
- Within SQL*Plus, issue the alter database rename file command to alert Oracle to the new location of your log file. After issuing this command, you can remove your copy of the redo log from its old (but not its new) location.
- Open the database.
- Make a backup copy of your new control file.
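A minimal sketch of the sequence, assuming the old and new file paths shown are hypothetical:
shutdown immediate
-- copy the redo log file to its new location using operating system commands
startup mount
alter database rename file '/u01/oradata/db1/redo01b.log' to '/u02/oradata/db1/redo01b.log';
alter database open;
alter database backup controlfile to '/u01/backup/control01.bkp';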
Tip: If the database complains about a corrupted redo log file, you can reinitialize the redo log without having to shut down the database. The alter database clear logfile group n command will do the trick.
Question 44
Question 45
Question 46
Question 47
In this lesson, you learned the importance of multiplexing redo log members so that multiple copies of the redo log are available to LGWR. If there is a problem with the disk that holds the redo log, this will prevent a failure of the Oracle instance because another redo log member is available on a different disk.
You also learned that the archive log destination should be on a separate disk from the redo log groups to prevent contention between ARCn and LGWR.
You learned how to add and remove redo log groups and members using alter database commands. Finally you learned how to rename log member files and how to move an online redo log to another disk location.
Lesson 14. Using OMF to Manage Redo Log Files
Managing Oracle-Managed redo log files is exactly the same as managing Oracle-Managed control files.
After completing this lesson, you should be able to:
- Describe where redo log files are stored
- Explain the naming format for Oracle-Managed redo log files
Oracle-Managed redo log files use the same init.ora parameters and behave in the same way as Oracle-Managed control files. We will discuss only the salient points of Oracle-Managed redo log files. Recall that your redo logs are created when you create the database. You can create new redo logs by issuing the alter database add logfile group command.
When the database is configured to use OMF by specifying either or both of the init.ora parameters for OMF (DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n), then there is no need to specify the location or names of the redo log files during the creation of the database.
Oracle automatically creates them in the location specified by the OMF parameters, using a standard naming convention.
The same is true while creating new redo log groups or members.
The value n in parameter DB_CREATE_ONLINE_LOG_DEST_n specifies the location for a multiplexed copy of the online redo log or control file. You can specify up to five multiplexed copies.
If only the DB_CREATE_FILE_DEST parameter is specified, then an Oracle-Managed redo log file member is created in the directory specified in the parameter.
If both DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n are specified, then the latter takes precedence for redo logs: the rest of the Oracle-Managed datafiles are created in the location specified by DB_CREATE_FILE_DEST, while an Oracle-Managed log file member is created in each directory specified by DB_CREATE_ONLINE_LOG_DEST_n (up to MAXLOGMEMBERS for the database).
The default size of an Oracle-Managed redo log file is 100MB. This value can be overridden by specifying the size of the redo log files in the logfile clause of the create database or alter database commands. The Oracle-Managed redo log file will have the following name format: ora_%g_%u.log, where %g is the online redo log file group number and %u is a unique eight-character string.
For example, a redo log file could be
/u01/oradata/db1/ora_1_xyz12345.log
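A hedged sketch of creating Oracle-Managed redo logs, assuming the directories shown are hypothetical and already exist. In init.ora:
DB_CREATE_ONLINE_LOG_DEST_1 = /u01/oradata/db1
DB_CREATE_ONLINE_LOG_DEST_2 = /u02/oradata/db1
Then, in SQL*Plus, a new group can be added without naming any files:
alter database add logfile;
Oracle creates one member in each destination, 100MB by default, named in the ora_%g_%u.log format.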
As in the case of any Oracle-Managed Files, when an Oracle-Managed online redo log file is dropped, its corresponding OMFs on the host are removed automatically.
Question 48
Question 49
Question 50
In this lesson, you learned that redo logs are created when you create the database and that you can create new redo logs by issuing the alter database add logfile group command.
You also learned that when the database is configured to use OMF, there is no need to specify the location or names of redo log files during creation of the database. Oracle will create them in a location specified in the OMF parameters with a standard naming convention.
Finally, you learned the name format for an Oracle-Managed redo log file: ora_%g_%u.log. As with any Oracle-Managed Files, when an Oracle-Managed online redo log file is dropped, its corresponding OMFs on the host are removed automatically.
Lesson 15. Course In Review
This lesson includes questions and material covering the entire course. You can use it to review and study for the exam.
Topic 15.1 Review Notes
*Review Notes
1. | The data dictionary keeps track of every aspect of the database — from tables being created to performance information of the database. |
2. | Executing the catalog.sql and catproc.sql scripts soon after creation of the database creates the data dictionary. |
3. | The data dictionary is owned by SYS and created in the SYSTEM tablespace. |
4. | Know what the data dictionary is. Be sure you can distinguish between dictionary views and the SYS-owned tables underlying those views. |
5. | Understand how to identify the topic and scope of a dictionary view based on the name of that view. Also, be sure you can identify all the views defined in this course. |
6. | Be sure you can identify the data dictionary views that will list all the dictionary views available in the Oracle database. |
7. | Be sure you can develop queries against dictionary views. |
8. | Control files are used to keep track of all the physical components of the database such as location of the data files, redo logs, name of the database, and so on. |
9. | Normally, Oracle expects to see the control files at the location indicated by CONTROL_FILES parameter. Starting with Oracle9i, this isn't true if you're using Oracle-Managed Files (OMF). For our purposes, we'll assume you're not using OMF, but be mindful that some exam questions may take OMFs into account. |
10. | Control files are the most important files of a database from a filesystem layout and management perspective. They contain information related to the physical location of every datafile and redo log file in your database. |
11. | Control files are created the first time you create your Oracle database. If no control file exists already in the location specified by the CONTROL_FILES parameter, then Oracle creates a new control file automatically when the create database command is issued. You can re-create a control file for an associated database later using the create controlfile command. |
12. | Control files are opened and read every time the database is opened. The contents of control files are modified every time the structure of the database is changed, such as by adding a tablespace, datafile, or online redo log. The control file is also changed when you archive your redo logs, back up the database, and whenever a log switch happens. |
13. | Oracle-Managed Files are created by specifying two init.ora parameters: DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n. |
14. | When one of these two types of parameters is specified but not the other, then Oracle-Managed control files are generated in the filesystem location identified by the parameter that was specified. |
15. | If both parameters are specified, then Oracle-Managed control files are created in the locations specified by DB_CREATE_ONLINE_LOG_DEST_n parameter. |
16. | The values set for these parameters can be dynamically changed using the alter system command. |
17. | Control file information can be found in several V$ performance views. |
18. | V$CONTROLFILE provides the basic information about the status and location of the control files. |
19. | The V$DATABASE view provides information such as when the control file was created, and the most recent sequence number and SCN number recorded in the control file. |
20. | V$CONTROLFILE_RECORD_SECTION shows information stored in different sections of the control file. |
21. | Oracle can maintain multiple copies of a control file for redundancy purposes. This is called multiplexing. |
22. | Make sure all your databases have multiplexed control files placed on different physical disks. Their location should be specified in the CONTROL_FILES parameter. |
23. | You can take regular backups either of the actual control file or of a script to re-create the control file using the alter database backup controlfile command. |
24. | Online redo log files are the physical files storing the information from the redo log buffers in the SGA. |
25. | Changes made to the data blocks in the database are stored as redo entries in the redo log buffer in the SGA. |
26. | When transactions are committed or this buffer gets full, the redo log entries are flushed into online redo log files. |
27. | The online redo log files may have mirrored files. Each set of files containing the same data is called a redo log group while the mirrored files in a group are called redo log members. |
28. | If the database is operating in ARCHIVELOG mode, then the data in online redo log files is archived in the archived redo log files as soon as the online redo log gets filled up. This can be done either manually or by using an automatic archiver process. |
29. | If the database is operating in NOARCHIVELOG mode, then you do not have to worry about archiving the online redo logs. |
30. | A checkpoint is a regular event in the database that signals the database writer to flush the dirty buffers to the data files and to update the control files and the file headers with checkpoint information. This checkpoint frequency is one of the factors that influence the time required for the database to recover from an unexpected failure. |
31. | Checkpoints occur at least as often as log switches. They can also occur at more frequent intervals when specified using init.ora parameters. |
32. | If the checkpoint interval exceeds the size of the redo log file, then checkpoints occur only during the redo log switch. |
33. | Just like the control file, online redo log files are important for the functioning of the database. To prevent their accidental loss, Oracle enables mirroring of the redo log files. You should definitely mirror the members of a redo log group on different disks. |
34. | If you are archiving the online redo logs, try to alternate the locations of the online redo groups between different disks so that LGWR and ARCn do not face disk contention. |
35. | There will be times when you need to add log groups or log members, drop log groups or log members, or even rename log files. Oracle enables you to do all of these using alter database commands. |
36. | Online redo logs are created during database creation and can later be created or dropped using the alter database command. To create them as Oracle-Managed redo log files, one or both of the init.ora parameters DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n need to be specified. |
37. | If only DB_CREATE_FILE_DEST is specified, then all OMFs including redo log files will be created at the location specified by this parameter. Your online redo logs will not be multiplexed in this scenario. |
38. | When DB_CREATE_ONLINE_LOG_DEST_n is specified, regardless of whether DB_CREATE_FILE_DEST is specified or not, the Oracle-Managed redo log files will be created in the locations specified by DB_CREATE_ONLINE_LOG_DEST_n. |
39. | If multiple locations are specified by DB_CREATE_ONLINE_LOG_DEST_n parameter(s), then Oracle will create the mirror images of redo logs in each of these locations. |
Question 51
Question 52
Question 53
Question 54
Question 55
Question 56
Question 57
Question 58
Question 59
Question 60
Question 61
Question 62
Question 63
Question 64
Topic 15.2 Exam Preparation
This page contains a bulleted list of fast facts to review, or crib notes for the days leading up to the exam.
- The Oracle data dictionary contains a host of views to be aware of, which contain information about the contents of and ongoing activity in the Oracle database.
- Additionally, there are a host of dynamic performance views that contain information about the ongoing performance of Oracle software.
- The catalog.sql script creates the data dictionary. Run it after creating a database while connected to Oracle administratively through SQL*Plus.
- The catproc.sql script creates the Oracle-supplied packages used often in PL/SQL development. Run it after creating a database while connected to Oracle administratively through SQL*Plus.
- Understand all Oracle physical disk resources — they are control files, redo logs, and datafiles.
- Control files are used to tell the Oracle instance where to find the other files it needs for normal operation.
- The contents of a control file can be found in the script to create it, which Oracle generates with an alter database backup controlfile to trace command (a one-line example appears after this list). This file is then found in the directory specified by the USER_DUMP_DEST initialization parameter.
- You will find information about control files, such as where they are located on your host machine, in V$CONTROLFILE, V$CONTROLFILE_RECORD_SECTION, and V$DATABASE.
- It is important to multiplex control files in order to reduce dependency on any single disk resource in the host machine. This is done using the CONTROL_FILES parameter in init.ora.
- The Oracle redo log architecture consists of the following components: redo log buffer to store redo entries from user processes, LGWR to move redo entries from memory onto disk, and online redo logs on disk to store redo entries taken out of memory.
- Online redo logs are referred to as groups. The group has one or more files, called members, where LGWR writes the redo log entries from memory. There must be at least two online redo log groups for the Oracle instance to start.
- Checkpoints are events in which DBWn is signaled to write all changed blocks to disk. They occur during log switches, which are when LGWR stops writing the filled log and starts writing a new one. At a checkpoint, CKPT also records the new checkpoint and redo log sequence information in the datafile headers and in the control file.
- Understand the process LGWR uses to write redo data from one log to another and then back again, what happens when archiving is used, what the role of the ARCn process is, and how LGWR can contend with ARCn.
- Understand how to multiplex redo logs using both the create database and alter database statements, and why it is important to do so.
- The OMF feature in Oracle can be used to manage the placement of control files and redo logs on disk. Be sure you understand the use of the appropriate parameters for configuring OMF and the standard filename formats Oracle employs when OMF is used.
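For example, as noted above, the control file re-creation script can be generated from SQL*Plus while connected administratively; the resulting trace file appears in the directory named by USER_DUMP_DEST.
alter database backup controlfile to trace;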