*Course 6: Managing Tables
Tables store data in Oracle. This course explores the table from an administrative perspective, as a collection of bits and bytes inside a file whose storage must be planned and managed carefully.
This course presents information relevant to the following Oracle 1Z1-031 Oracle9i DBA Fundamentals I exam objectives:
Managing Tables
After completing this lesson, you should be able to:
Tables store data in Oracle. You may think of a table as an object similar to a spreadsheet — a table has columns and rows for data.
All information stored in a column will have the same datatype.
A row is a collection of single values defined for every column.
How do tables differ from data segments? The data users place in tables is stored in segments and extents inside datafiles.
That's precisely what makes tables different — tables offer a conceptual way to think of the data inside a segment or extent that also lets you reference that data, manipulate it, add new data, or even take data away.
In other words, tables are the constructs that let users access the data that DBAs think of as being stored in segments and extents.
Oracle supports several types of tables for storing user data in an Oracle segment. There are regular tables for housing permanent data and temporary tables for temporary data. You'll probably find that these two types of tables are all you need.
There are also many exotic variations of tables that Oracle has introduced over the years; each serves an important niche purpose. These include partitioned tables, IOTs, and clustered tables. The pages that follow describe the various types of tables.
*Regular (Permanent) Tables
Regular tables (also called permanent tables) are the most common type of table used to store data. When you execute the create table command, the default table that Oracle creates is a regular table.
Oracle doesn't guarantee that row data will be stored in a particular order in a regular table. This type of data storage is sometimes called heap-organized. Data is stored in an unordered collection, or heap. You can impose order later by including various clauses in SQL queries.
*Temporary Tables
Suppose you want to create a complex report in which data must be run through a series of complex processing steps. To simplify processing, it would help to store data in an intermediate table.
The problem is that you don't want users to access that intermediate information during or after the report's execution, for fear of the confusion that may result.
You could simply use a standard Oracle table, but then you have to remember to clean out the table when you're done.
Temporary tables let you retain session-private data for only the duration of the transaction or session.
Because the data is available only to the session that put it there, there's no worry that other users will be able to see your data before you want them to.
Because the data stored in temporary tables is temporary in nature, there's no need for you to worry about eliminating the contents of the table when you don't need the data anymore. At the end of the transaction or session, the data will disappear.
Even though Oracle keeps each session's temporary table data private, every current session can use a temporary table once it's created, and the table's definition continues to exist in the data dictionary even after its data is eliminated.
*Partitioned Tables
Partitioned tables are just like regular tables except for one important feature — they let you reference the individual segments (partitions) that directly support a larger table.
A partition isn't considered a separate database object, but rather a subobject that you happen to be able to reference directly.
Partitioned tables give you more control over data distribution across multiple disks in the machine hosting the Oracle database. You might find partitioning used to scale extremely large tables, such as those found in data warehouses.
Every disk in a machine has a channel, or hardware mechanism, designed to facilitate access to that disk. But that channel may not provide enough bandwidth for the many processes that need to read the disk during peak periods of activity. This is where partitioning factors in.
You can partition a large table and spread the data stored in that table across many disks. Because each disk spins independently and may have a channel available for direct access, partitioning supports a greater degree of parallel processing for busy databases than regular tables.
In a partitioned table, data is stored in each partition according to a partition key, or column, that defines which range of row data goes into which partition. Each partition can then be stored in different tablespaces. Every partition in a table must have the same columns.
Partitions provide several benefits, including increased data availability and the potential for parallel data-change operations operating on different partitions simultaneously.
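To make this concrete, here's a minimal sketch of a range-partitioned table. The table name, partition names, and tablespaces are hypothetical; the SALE_DATE column serves as the partition key.

CREATE TABLE SALES
(sale_id NUMBER(10),
sale_date DATE,
amount NUMBER(10,2))
PARTITION BY RANGE (sale_date)
(PARTITION sales_q1 VALUES LESS THAN
  (TO_DATE('01-APR-2002','DD-MON-YYYY')) TABLESPACE data01,
PARTITION sales_q2 VALUES LESS THAN
  (TO_DATE('01-JUL-2002','DD-MON-YYYY')) TABLESPACE data02);

Each partition lands in its own tablespace, which is how the table's data can be spread across different disks.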
*Index-Organized Tables
In regular tables, data is stored in heap-organized fashion, so data isn't stored in any ordered way. In an index-organized table, Oracle stores the data in a B-tree index structure rather than in an unstructured heap. The data is stored in the order of the primary key of the table.
The rows in an index-organized table are not only sorted by their primary key values — each row also contains the primary key column values along with the non-key column values. There's no separate index structure storing the primary key in an index-organized table, which reduces storage overhead, especially for tables that have few columns.
Index-organized tables are particularly useful for data that's mostly retrieved based on primary key. An example of this sort of data would be a lookup table, where important terms are associated with short alphanumeric codes.
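As a sketch, a lookup table like the one described might be created as an index-organized table with the organization index clause (the table and column names here are hypothetical):

CREATE TABLE COUNTRY_CODE
(code CHAR(2),
country_name VARCHAR2(40),
CONSTRAINT pk_country_code PRIMARY KEY (code))
ORGANIZATION INDEX;

The primary key is required — it determines the order in which Oracle stores the rows in the B-tree structure.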
*Cluster Tables
You might have a set of tables that are queried for data at the same time. For example, an employee expense reimbursement system might contain two tables used in a parent/child fashion.
The parent table might contain key facts about each expense reimbursement request submitted by an employee. This table might store one row for each employee reimbursement request, containing column values defined for the employee's name or ID number, the expense request ID number, and the mailing address where the reimbursement check must be sent.
The child table contains line items for each reimbursement request. For example, if the employee took a flight to Singapore for a trade show that lasted several days, the child table would store several rows, each corresponding to a particular expense (airfare, hotel, car, conference registration, meals, and so on), along with a charge-to account number for each expense.
Cluster tables (also simply called clusters) might be handy in this parent/child context. Even though the data is stored as two separate tables, employees who want to see their expense reimbursement requests will almost invariably request data from both tables at the same time.
Clusters let you store data from several tables inside a single segment so users can retrieve data from those two tables together very quickly.
To store two or more tables together inside a cluster segment, the tables must have at least one column in common. This column becomes the cluster key.
Because a cluster stores related rows of different tables in the same physical data segments, it offers two benefits: disk I/O is reduced for joins of the clustered tables, and the cluster key value is stored only once for all the rows that share it, saving storage.
Clusters work best when the data is static or read-only.
The rows of a table stored in cluster segment blocks are laid out so they're close to the associated rows from the other tables. If the size of a row increases too dramatically after Oracle performs the initial row layout in the cluster, data can be shifted around inside the blocks of the cluster, negating the performance increase the cluster would otherwise provide.
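A minimal sketch of the expense example as a cluster might look like this — you create the cluster segment first, keyed on the shared column, then create the tables inside it (all names and the SIZE setting are hypothetical):

CREATE CLUSTER expense_cluster (request_id NUMBER(10))
SIZE 512;
CREATE TABLE expense_request
(request_id NUMBER(10),
emp_id NUMBER(10),
mail_address VARCHAR2(100))
CLUSTER expense_cluster (request_id);
CREATE TABLE expense_item
(request_id NUMBER(10),
expense_desc VARCHAR2(40),
amount NUMBER(10,2))
CLUSTER expense_cluster (request_id);

For an index cluster like this one, you'd also create an index on the cluster key (CREATE INDEX ... ON CLUSTER expense_cluster) before inserting any data.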
Question 1
Question 2
Question 3
Question 4
Question 5
In this lesson, you learned about different types of tables:
The built-in datatypes include scalar, collection, and reference datatypes.
The classes of user-defined datatypes you can create are endless.
After completing this lesson, you should be able to:
Each column in a table stores data of a particular datatype. Every row can contain values for columns of different datatypes, but individual columns can house only one type of data.
Oracle doesn't have as many different datatypes as you might be used to. For example, Oracle doesn't have a currency datatype because currency is nothing more than a number with a currency format mask.
Oracle instead provides only one datatype for numbers, and gives you a robust mechanism for applying whatever format masks you might need to give your numbers meaning.
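For example (the table and column are hypothetical), a plain NUMBER column can be rendered as currency with a format mask in a query:

SQL> select TO_CHAR(salary, '$999,999.99') from employee;

The value is stored once as a number; the mask controls only how it's displayed.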
Oracle substantially reorganized the available datatypes between versions 7.3 and 8.0. There are two basic categories of datatypes in Oracle: built-in types and user-defined types.
Within the built-in types, there are three basic classes of datatypes available: scalar, collection, and reference datatypes.
Within the user-defined types, the datatypes you can define for your applications are endless.
*Scalar Datatypes
A table column defined as a scalar datatype holds a single item in each cell. You can choose from a variety of scalar datatypes:
*CHAR(L) and NCHAR(L)
CHAR(L) and NCHAR(L) are fixed-length text string scalar datatypes, where the data is stored with blanks padded out to the full width of the column (represented by L).
CHAR and NCHAR values can be up to 2,000 bytes in length in Oracle9i. In Oracle7, the limit was 255 bytes.
*VARCHAR2(L) and NVARCHAR2(L)
VARCHAR2(L) and NVARCHAR2(L) are variable-length text string datatypes, where data is stored using only the number of bytes required to store the actual value, which can vary in length for each row.
VARCHAR2(L) and NVARCHAR2(L) values can be up to 4,000 bytes long.
NCHAR and NVARCHAR2 are NLS datatypes that enable the storage of either fixed-width or variable-width character sets. You can also use them for nonmultibyte character sets, but that's not common. For all intents and purposes, NCHAR is CHAR's NLS multibyte equivalent type, and NVARCHAR2 plays the same role for VARCHAR2.
NLS stands for National Language Set. NLS lets Oracle be used in languages other than American English.
The English language requires only one byte to store a character, such as the letter A. Languages with large character sets (such as Japanese, Chinese, or Korean) or other substantial differences from English (such as being read from right to left, like Arabic or Hebrew) need multiple bytes to store one character. NLS provides the translation.
*NUMBER(L,P)
The NUMBER(L,P) datatype stores fixed- and floating-point numbers of up to 38 digits of precision in a variable-length format. Values are stored in scientific notation — one byte stores the exponent, one byte is used for every two significant digits of the number's mantissa, and one byte is used for negative numbers if the number of significant digits is less than 38.
You can specify precision (number of digits) and scale (number of digits right of the decimal point).
Tip: In this context, the mantissa is the significant-digit portion of a number expressed in scientific notation. Oracle stores the mantissa and the exponent separately so the number takes up less space.
*DATE
The DATE datatype stores a fixed-length 7-byte field. You can apply formatting masks to render the date in the manner appropriate for your situation. By default, Oracle shows date information as DD-MON-YY: a two-digit date, a three-letter month abbreviation, and a two-digit year.
The DATE format includes time as well as date. The time is stored internally as a number.
Internally, Oracle stores the date as a number that includes four-digit year references. The two-digit default representation of the year is just a formatting mask.
You can change this format at the system or session level with the following command:
alter [system|session] set nls_date_format = 'mask'
Substitute your format mask for mask.
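For instance, to display full four-digit years along with times for the rest of your session, you might issue something like:

SQL> alter session set NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

The change affects only how dates are displayed and parsed; the stored values are unchanged.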
*RAW(L)
The RAW(L) datatype holds a small amount of binary data. There are no conversions performed on raw data in Oracle. The raw data is simply stored as is, and isn't interpreted by Oracle.
Oracle can house up to 2,000 bytes in a RAW column.
*ROWID
The ROWID datatype stores ROWID information. A ROWID is a 10-byte string that identifies the location of row data in a datafile. Another lesson has more to say about ROWID.
*LONG and LOB (Large Object)
Oracle9i has scalar datatypes that let you store very large amounts of information in columns. Some of these datatypes are provided mainly for backward compatibility:
Several key differences between LONG and LOB types make LOB types more versatile and helpful for large object management:
*Collection Datatypes
We've covered the scalar datatypes. Now let's move on to collection datatypes. A collection is a gathering of like-defined elements.
There are two collection datatypes available in Oracle:
*VARRAY
A variable-length array (VARRAY) is an ordered list of objects, all of the same datatype.
The VARRAY datatype has two special attributes (in addition to those attributes within the objects the VARRAY contains): a count for the number of elements in the VARRAY and the limit for the maximum number of elements that can appear in a VARRAY. Although the VARRAY can have any number of elements, the limit must be predefined.
Each element in a VARRAY has an index number corresponding to the position of the element in the array.
Constraints and default values may not be created for VARRAY elements.
Once the VARRAY is created, a user can refer only to an individual element in a VARRAY with PL/SQL (although SQL can be used to access the entire VARRAY).
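As a brief sketch (the type and table names are hypothetical), a VARRAY type is created with a fixed limit and then used as a column datatype:

CREATE TYPE phone_list AS VARRAY(5) OF VARCHAR2(20);
CREATE TABLE contact
(contact_id NUMBER(10),
phones phone_list);

Here 5 is the predefined limit — each CONTACT row can hold at most five phone numbers in its PHONES column.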
*Nested Tables
The nested table datatype can be thought of as a table within a table. This architecture is exceptionally suited for applications that have parent/child tables with referential integrity. A nested table is an unordered list of row records, each having the same structure. These rows are usually stored out of line, away from the parent table, with a reference pointer from the corresponding row in the parent table to the nested table.
Like VARRAYs, nested tables can have any number of elements, with the added bonus that you don't need to predetermine a maximum limit.
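A nested table column requires an element type plus a storage clause naming the segment where the nested rows live. This sketch uses hypothetical names:

CREATE TYPE expense_line AS OBJECT
(expense_desc VARCHAR2(40),
amount NUMBER(10,2));
CREATE TYPE expense_line_tab AS TABLE OF expense_line;
CREATE TABLE expense_request
(request_id NUMBER(10),
lines expense_line_tab)
NESTED TABLE lines STORE AS expense_lines_nt;

The NESTED TABLE ... STORE AS clause names the out-of-line segment (here EXPENSE_LINES_NT) that holds the nested rows.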
*Reference Datatypes
The final built-in datatype is the reference datatype.
Developers can use the reference datatype to define a foreign key relationship between two objects.
The reference datatype can reference all columns in the table for a particular row — it's a pointer to a particular object, not the object itself.
*User-Defined Datatypes
User-defined types are abstract datatypes or compositions of existing scalar or other types you can define to serve highly specialized purposes.
User-defined datatypes may be composed of scalar datatypes, collection datatypes, or other user-defined types.
Question 6
Question 7
Question 8
Question 9
Question 10
Question 11
In this lesson, you learned about the Oracle datatypes used to define the data in each column of a table. There are two kinds of datatypes: built-in and user-defined.
The built-in datatypes include scalar, collection, and reference datatypes.
The user-defined datatypes you can create are endless. They can be derived from scalar and collection datatypes, as well as from other user-defined datatypes.
There are two ROWID formats: the older restricted ROWID and the extended ROWID in use since Oracle 8.0.3. Both can be used in Oracle9i.
After completing this lesson, you should be able to:
Oracle ROWIDs identify the location of a row inside a database. ROWIDs aren't addresses in memory or on disk; they're simply identifiers Oracle uses. Locating a table row using the ROWID is the fastest way to find a row in a table.
Although ROWID information can be queried like other columns in a table, a ROWID isn't stored explicitly as a column value. When users add new rows to a database, Oracle generates a ROWID to identify that row's unique database location.
The ROWID Oracle generates for a row depends on the following components:
*Extended ROWIDs
Oracle's extended ROWID format is 10 bytes (80 bits) in size.
The extended ROWID has four components: an object number (32 bits), a relative file number (10 bits), a block number (22 bits), and a row (slot) number (16 bits).
This format was introduced in Oracle 8.0.3 to overcome the limited amount of disk space prior versions of Oracle could address. With extended ROWIDs, Oracle can address rows in tables in a manner that lets the database grow to a virtually limitless size.
Extended ROWIDs are displayed as 18-character representations of the location of data in the database. Each character is represented in a base-64 format consisting of A - Z, a - z, 0 - 9, +, and /.
The first six characters correspond to the data object number, the next three are the relative file number, the next six are the block number, and the last three are the row number.
The code below shows how to query a database for ROWIDs.
SQL> select name, ROWID from employee;
NAME ROWID
---------- ------------------
DURNAM AAAA3kAAGAAAAGsAAA
BLANN AAAA3kAAGAAAAGsAAB
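You can also break an extended ROWID into its four components with the Oracle-supplied DBMS_ROWID package, as in this sketch against the same hypothetical EMPLOYEE table:

SQL> select DBMS_ROWID.ROWID_OBJECT(rowid) obj,
  2  DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) fno,
  3  DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) blk,
  4  DBMS_ROWID.ROWID_ROW_NUMBER(rowid) rw
  5  from employee;

Each function returns one component — the data object number, relative file number, block number, and row number, respectively.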
*Restricted ROWIDs
Historically, Oracle used a 6-byte ROWID that's now considered restricted. This restricted ROWID doesn't store the object number for the table the row will be stored in.
This format worked because earlier versions of Oracle required all datafiles to have a unique file number within the database, regardless of the tablespace the file belonged to. Oracle8i and later releases number datafiles relative to the tablespace they belong to.
Restricted ROWIDs are displayed as 18 characters in base-16 format. The first 8 characters represent the block number, characters 10 - 13 are the row number, and characters 15 - 18 are the (absolute) file number. Characters 9 and 14 are static separator characters.
Restricted ROWIDs are rarely used now, with one exception:
Restricted ROWID format is employed to locate rows in nonpartitioned indexes for nonpartitioned tables where all index entries refer to rows within the same segment.
This eliminates any uncertainty about relative file numbers, because a segment can be stored in one and only one tablespace.
Here's how you can remember the ROWIDs:
Question 12
Question 13
Question 14
Question 15
In this lesson, you learned how ROWIDs point to the location of a row.
There are two types of ROWIDs. The restricted ROWID format is older, but it's still in limited use.
The extended ROWID format lets Oracle increase the amount of space a database can employ.
After completing this lesson, you should be able to:
A cornerstone of storage management is Oracle's ability to let you manage the space in a data block. The size of a data block is determined by the DB_BLOCK_SIZE parameter set for the instance when you create the database.
Data block size is almost always a multiple of operating system block size. Operating systems usually employ block sizes of 512 or 1,024 bytes.
An Oracle block can be anywhere from 2,048 to 16,384 bytes in size (or even larger for some platforms). 8,192 bytes is a common size for most Oracle databases.
*Data Block Components
There are several different components inside every data block in your Oracle database:
- Block header and directory information — information about the table that owns the block and the row data the block contains.
- Free space — space reserved for the growth of existing rows in the block, determined by the pctfree setting.
- Space occupied by existing rows — every time a row is added, Oracle places that row in a block. The space available for rows is determined by the setting for DB_BLOCK_SIZE, minus the space occupied by the block header, minus the space reserved by the setting for pctfree.
*Row Components
There are several different components inside every row contained in a data block:
- Row header — stores information about the number of columns in the row, chaining information, and the current lock status for the row.
- Row data — the actual data of the row, divided into columns, or column data.
- Column data — stored in the order in which the columns were defined for the table.
Column width is stored in a field along with the non-NULL column data. If a column value is NULL for a row, no width field or column value is stored in the block for that column, so NULL column values use no storage at all. This is one reason Oracle needs to maintain some free space via pctfree — a row grows when a NULL column is later given a value.
The column width field is 1 byte if the value for the column in this row is under 250 bytes, and it's 3 bytes if the value for the column in this row is 250 bytes or more. Oracle stores a number identifying the width of the column value only when the column value isn't NULL.
Question 16
Question 17
Question 18
Question 19
In this lesson, you learned about the structure of Oracle data blocks and the rows contained in those blocks.
You learned that data blocks consist of a block header with directory information, free space, and occupied space containing rows.
Each row has a row header and row data that's divided into column data.
You create tables with the create table command and add optional clauses to control the characteristics of the table and Oracle's operation.
After completing this lesson, you should be able to:
You create a permanent or regular table in Oracle with the create table command. This lesson reviews that command and discusses how you can manage storage considerations when you create regular tables.
Objects created without a storage clause inherit the default storage settings of the tablespace they're placed in.
This lesson explains how you can define the storage clause when you create an object.
This lesson assumes you're placing a regular table in a dictionary-managed tablespace to cover the use of the storage clause. The code below shows an example of the create table command.
The clauses within this command are described on the next pages.
CREATE TABLE EMPLOYEE
(empid NUMBER(10),
lastname VARCHAR2(25),
firstname VARCHAR2(25),
salary NUMBER(10,4),
CONSTRAINT pk_employee_01
PRIMARY KEY (empid))
TABLESPACE data
PCTFREE 20 PCTUSED 50
INITRANS 1 MAXTRANS 255
NOCACHE LOGGING
STORAGE ( INITIAL 100K NEXT 150K
MINEXTENTS 4 MAXEXTENTS 300
PCTINCREASE 20 );
The tablespace keyword indicates which tablespace Oracle should create the table in.
If you don't specify this clause, Oracle will put the table in the default tablespace you were assigned to when your userid was created.
There are a number of clauses for space utilization.
The pctfree keyword specifies the space Oracle is to leave free when inserting rows to accommodate later growth.
The pctused option specifies a threshold: the percentage of the block occupied by row data must fall below this value before Oracle considers the block free again for new row inserts.
The next two clauses control Oracle's capability to make concurrent updates to a data block.
initrans specifies the initial number of transactions that can update the rows in a data block concurrently. The default initrans value for regular tables is 1. For clustered tables, the default is 2.
maxtrans specifies the maximum number of transactions that can update the rows in a data block concurrently. The default maxtrans value for tables is 255.
You shouldn't normally change the default values for these options.
The nocache clause specifies that Oracle shouldn't make these blocks persist in the buffer cache if a select statement on the table results in a full table scan.
In this case, select * from EMPLOYEE would still load blocks into the buffer cache, but those blocks won't persist there for long.
If you want the table to stay cached in the buffer cache when you issue select * from EMPLOYEE, specify the cache keyword instead.
The default nocache specifies that the blocks retrieved for this table are placed at the least-recently-used end of the LRU list in the buffer cache after a full table scan.
The logging clause tells Oracle to track table creation in the redo log. If a disk fails, a logged table can be recovered. logging is the default.
This can be changed to nologging so redo isn't generated — handy for operations such as certain types of bulk data loads, after which the DBA plans to take a backup.
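For example (a sketch — the staging table is hypothetical), you might create a load target with nologging and switch it back to logging once the load and backup are done:

CREATE TABLE EMPLOYEE_STAGE
(empid NUMBER(10),
lastname VARCHAR2(25))
NOLOGGING;
ALTER TABLE EMPLOYEE_STAGE LOGGING;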
Finally, you can specify storage clauses for table creation. These will override the default storage settings of the tablespace you create the object in.
The only tablespace default a storage clause won't override is minimum extent.
*Rules of Thumb for Table Storage
Observe the following rules of thumb when you create regular tables:
Question 20
Question 21
Question 22
Question 23
* Exercise 1
Try creating a regular table in Oracle9i.
In this lesson, you learned how to create regular, permanent tables with the create table command.
You learned about the clauses you can use with create table, such as tablespace, initrans, nocache, and logging.
Finally, you learned some rules for table storage.
Since temporary tables are ephemeral, you need to understand how they differ from regular, permanent tables.
After completing this lesson, you should be able to:
You create temporary tables with the command shown here.
The tbldef table definition can use any of the column or constraint definitions a permanent table uses. Associated temporary indexes will be generated to support primary or unique keys.
create global temporary table tblname ( tbldef )
on commit [delete|preserve] rows
The data in a temporary table is stored in memory, inside the sort area. If more space is required, temporary segments in your temporary tablespace will be used.
When you specify on commit delete rows, the data in a temporary table (along with data in any associated index) is purged after the transaction is completed.
When you specify on commit preserve rows, rows will be stored in the temporary table until the user who created the table terminates the session. For that period, you can use the temporary table the same way you'd use any other table.
If you don't specify an on commit clause, Oracle uses on commit delete rows by default.
create global temporary table tblname ( tbldef )
on commit [delete|preserve] rows
In this example, SCOTT creates a temporary table called MYTEMP.
Since on commit delete rows is specified, this table will purge rows after a transaction is completed. Remember that this is the default action.
The first select statement SCOTT issues yields a result, but the second select doesn't. The data has been purged by the commit.
SQL> connect scott/tiger
Connected.
SQL> create global temporary table mytemp
2 (col1 number,
3 (col2 varchar2(30))
4 on commit delete rows;
Table created.
SQL> insert into mytemp values (1,'JUNK');
1 row created.
SQL> select * from mytemp;
COL1 COL2
--------- ------------------------------
1 JUNK
SQL> commit;
Commit complete.
SQL> select * from mytemp;
no rows selected
Here the data in temporary table T_TABLE will persist for the duration of the session instead of being purged after a transaction is completed.
The first select statement SCOTT issues yields a result, and so does the second. The data will be preserved until the session is terminated.
SQL> connect scott/tiger
Connected.
SQL> create global temporary table t_table
2 (col1 number,
3 col2 varchar2(20))
4 on commit preserve rows;
Table created.
SQL> insert into t_table values (1, 'JUNK');
1 row created.
SQL> select * from t_table;
COL1 COL2
--------- --------------------
1 JUNK
SQL> commit;
Commit complete.
SQL> select * from t_table;
COL1 COL2
--------- --------------------
1 JUNK
Other users can employ a temporary table SCOTT creates, but they won't see SCOTT's data. And SCOTT won't see their data. Even though data remains until SCOTT terminates the session, in this example CLARK can't access data SCOTT inserted into T_TABLE. If SCOTT issues the command truncate table t_table, only SCOTT's data will be removed. Other users' data will still be in the table.
SQL> connect scott/tiger
Connected.
SQL> create global temporary table t_table
2 (col1 number,
3 col2 varchar2(20))
4 on commit preserve rows;
Table created.
SQL> insert into t_table values (1, 'JUNK');
1 row created.
SQL> select * from t_table;
COL1 COL2
--------- --------------------
1 JUNK
SQL> connect clark/cloth
Connected.
SQL> select * from t_table;
no rows selected
You can join a temporary table with permanent tables in queries, and you can create objects like views, indexes, and triggers that form object dependencies on the temporary table.
However, you can't create a view that contains a join between temporary and permanent tables.
The create global temporary table statement doesn't log any redo, so temporary table data changes aren't recoverable.
Data changes made to temporary tables do generate rollback information, so Oracle can roll back a transaction — and with it the temporary table data — in the event the instance crashes.
The TEMPORARY (T) and DURATION columns in the DBA_TABLES view indicate whether a table is a temporary table and how long the data in the table will persist.
Oracle stores temporary table data in temporary segments in a temporary tablespace.
Because temporary tablespace segment allocations are managed uniformly by Oracle, you typically don't need to worry about specifying a storage clause when you create a temporary table.
Instead, focus your attention on whether or not the data in the temporary table should persist for the duration of a transaction or a session.
Question 24
Question 25
Question 26
Question 27
Question 28
Question 29
* Exercise 1
Try creating a temporary table in Oracle9i.
In this lesson, you learned how to create temporary tables with the create global temporary table command.
You learned how to use the commit clause to purge or retain the data in a temporary table.
Finally, you learned how temporary tables operate, and you learned some factors you need to consider when you employ temporary tables.
After completing this lesson, you should be able to:
Oracle allocates new extents for a table automatically when more data is added than the current allocation will hold. You can add more extents manually with the statement
alter table tblname allocate extent (size num[K|M] datafile 'filename')
num is the size of the extent you want to allocate (subject to the tablespace limit set by minimum extent) and filename is the absolute path and filename of the datafile where you want to store the extent.
Both the size and datafile clauses are optional in the alter table tblname allocate extent statement.
If size isn't used, Oracle uses the size specified in the table's next storage parameter.
If datafile is excluded, Oracle manages placement itself.
You could use this command to control the distribution of extents before performing bulk data loads, as shown in these code examples that modify the EMPLOYEE table.
ALTER TABLE EMPLOYEE ALLOCATE EXTENT;
ALTER TABLE EMPLOYEE ALLOCATE EXTENT ( SIZE 200K );
ALTER TABLE EMPLOYEE ALLOCATE EXTENT
(DATAFILE '/u10/oradata/oracle/data05.dbf' );
*Using the High-Water Mark
How does Oracle maintain knowledge about table size? A special marker called the high-water mark indicates the last block used to hold the table's data.
As insert statements fill data blocks, Oracle moves the high-water mark farther and farther out to indicate the last block used.
The high-water mark is stored in a table segment header. Its location tells Oracle where to stop reading blocks during full table scans.
You can find the high-water mark for your table using the unused_space( ) procedure from the Oracle-supplied DBMS_SPACE package, or in the DBA_TABLES dictionary view after you've run the analyze command on your table.
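As a sketch of calling unused_space( ) from PL/SQL (the SCOTT schema and EMPLOYEE table are examples; substitute your own objects):

```sql
-- Report the space above the high-water mark for SCOTT.EMPLOYEE
DECLARE
  l_total_blocks   NUMBER;
  l_total_bytes    NUMBER;
  l_unused_blocks  NUMBER;
  l_unused_bytes   NUMBER;
  l_file_id        NUMBER;
  l_block_id       NUMBER;
  l_last_block     NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => 'SCOTT',
    segment_name              => 'EMPLOYEE',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  DBMS_OUTPUT.PUT_LINE('Blocks above the high-water mark: ' || l_unused_blocks);
END;
/
```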
To eliminate unused space allocated to a table, issue the statement
alter table tblname deallocate unused keep num [K|M]
In this statement, keep is an optional clause specifying that num bytes of space above the high-water mark should be retained rather than deallocated.
If the command is used without the keep clause, Oracle will deallocate all unused space above the high-water mark.
If the high-water mark is at an extent less than the value of minextents, Oracle will release extents above minextents.
Examples of this statement are shown in the code below.
ALTER TABLE EMPLOYEE DEALLOCATE UNUSED;
ALTER TABLE EMPLOYEE DEALLOCATE UNUSED KEEP 10K;
*Row Migration
If pctfree is set too low for the blocks in a table, an update statement may increase a row's size beyond the free space remaining in its block. Oracle then has to migrate the row to another block where it fits, leaving behind a pointer to the row's new location.
This row migration degrades performance: when the server process reads the row's original block, it finds only the pointer and must perform an additional read to fetch the row from the block it moved to.
*Chaining
Chaining occurs when data for a single row is stored in multiple blocks. This is common in tables that use columns defined as the LONG datatype, because LONG data is stored inline with the rest of the row.
Chaining is detrimental to database performance because the server process must piece together the row of data with multiple disk reads. And there's more degradation when DBWR has to perform multiple disk writes for a single row of data.
*Analyzing Table Integrity
Typically, Oracle handles the validation of data block structure automatically whenever a block is read into the buffer cache.
There are two actions you can take to validate the structural integrity of data blocks on your own:
Setting the initsid.ora parameter DB_BLOCK_CHECKSUM to TRUE makes the DBWR process calculate a checksum on every block it writes, regardless of tablespace. Oracle also calculates checksums for blocks written to the online redo logs.
When DB_BLOCK_CHECKSUM is set to FALSE, the DBWR process calculates checksums only when writing blocks to the SYSTEM tablespace.
The process of calculating checksums adds overhead to normal database processing. You must balance the additional performance overhead with your need for ensuring that data blocks aren't corrupt.
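As a sketch, the parameter goes in the initsid.ora file; in releases where it's dynamically modifiable, you can also change it with alter system:

```sql
-- In initsid.ora:
--   DB_BLOCK_CHECKSUM = TRUE
-- Or dynamically, if your release permits:
ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TRUE;
```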
The analyze table tblname validate structure command can help you check table integrity. The optional cascade clause in this statement validates the structure of blocks in indexes associated with the table. From SQL*Plus, you issue the analyze command on one table at a time.
One of the main uses for the analyze command is to determine performance statistics for a cost-based optimization of processing SQL statements.
This command will also detect row migration.
There are two basic clauses for the analyze command.
The compute statistics clause calculates statistics collection for the table based on every row in the table:
analyze table tblname compute statistics;
The estimate statistics clause estimates statistics collection for the table based on a sample size of data you can specify with the sample num [rows|percent] clause. If you don't specify a sample clause, Oracle uses 1,064 rows:
analyze table tblname estimate statistics;
estimate statistics is almost as accurate as compute statistics, yet takes less time because it measures a smaller sample.
Once statistics are generated, the CHAIN_CNT column in the DBA_TABLES dictionary view contains the number of chained and migrated rows estimated or computed in the table. If you feel this number is high, you might want to save the table data, drop the table, re-create it, and reload the data to eliminate the problem.
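For example, you could check CHAIN_CNT after analyzing the table. The analyze ... list chained rows form goes further, recording the ROWID of each chained or migrated row in a CHAINED_ROWS table, which you create beforehand by running the Oracle-supplied utlchain.sql script (table names here are illustrative):

```sql
ANALYZE TABLE employee COMPUTE STATISTICS;

SELECT table_name, num_rows, chain_cnt
FROM   dba_tables
WHERE  table_name = 'EMPLOYEE';

-- Identify the specific chained/migrated rows
-- (requires the CHAINED_ROWS table created by utlchain.sql)
ANALYZE TABLE employee LIST CHAINED ROWS INTO chained_rows;
SELECT head_rowid FROM chained_rows WHERE table_name = 'EMPLOYEE';
```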
Remember that some chaining is to be expected, especially when your rows are wide (for example, if you have many VARCHAR2(4000) columns or a LONG column).
If you want to validate integrity on an ongoing basis as part of a PL/SQL application, you can develop code that calls the analyze_schema( ) procedure in the DBMS_UTILITY package or the analyze_object( ) procedure in DBMS_DDL.
You can also employ the DBMS_STATS package in place of the analyze command for gathering statistics. This package offers some advantages over the use of the analyze command, particularly when you want to save statistics and reuse those statistics in different databases.
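A sketch of gathering table statistics with DBMS_STATS in place of analyze (the schema and table names are examples):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMPLOYEE',
    estimate_percent => 10,      -- sample 10 percent of the rows
    cascade          => TRUE);   -- gather statistics on the table's indexes too
END;
/
```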
*Using Views
Several data dictionary views provide information about tables. These views base their content either on tables in the database or on dynamic performance information about tables collected while the instance is running. Views that provide helpful table information are described below.
Consider the ways you can gain information about tables from views:
Question 30
Question 31
Question 32
Question 33
Question 34
Question 35
* Exercise 1
Try managing and analyzing tables in Oracle9i.
In this lesson, you learned how Oracle uses the high-water mark to indicate the last block in a table that holds data. You learned how to locate the high-water mark, and how to use it to eliminate unused space in a table.
You also learned how row migration and chaining can degrade the performance of a database, and you learned how to avoid these conditions.
You learned how to use the analyze command to check the integrity of a table.
Finally, you reviewed the data dictionary views that provide information about tables.
After completing this lesson, you should be able to:
Moving a table into a different tablespace used to be a complex task. You used the EXPORT tool to create a dump file containing the rows of the table, the table's definition, and indexes or constraints.
You dropped the table from the Oracle database after exporting it, then you re-created the table in another tablespace.
You then ran IMPORT. You set the IGNORE parameter to Y so IMPORT wouldn't fail when it saw the table already existed.
You could have used SQL*Loader or the create table as select statement with the storage and tablespace clauses explicitly defined. Either method required a long time to execute.
It's easier to relocate and reorganize tables in Oracle9i. You don't need any of the old time-consuming techniques. Instead, you simply use the alter table move statement:
alter table tblname move tablespace tblspcname
The alter table move statement lets you relocate a table to another tablespace or reorganize a table to resize the initial segment. This statement has three optional clauses:
In Oracle Enterprise Edition, alter table move enables the online keyword. This permits the movement or reorganization of a table while the original version is still available for use.
*Relocating Tables
The pages that follow show how you employ alter table move to relocate and reorganize tables.
This first example moves a table called WORK_TABLE from the USER_DATA tablespace to the LMTAB tablespace.
The initial select statement shows the original tablespace where the table was located. The final select statement shows the new location.
SQL> select owner, table_name, tablespace_name
2 from dba_tables
3 where table_name = 'WORK_TABLE';
OWNER TABLE_NAME TABLESPACE_NAME
---------- ---------- ---------------
SCOTT WORK_TABLE USER_DATA
SQL> alter table work_table move tablespace lmtab;
Table altered.
SQL> select owner, table_name, tablespace_name
2 from dba_tables
3 where table_name = 'WORK_TABLE';
OWNER TABLE_NAME TABLESPACE_NAME
---------- ---------- ---------------
SCOTT WORK_TABLE LMTAB
Although Oracle preserves a table's associated constraints, object privileges, and triggers when the table is moved from one tablespace to another, the alter table move statement does not move indexes associated with the table. The code below, in which a table with an index is moved from tablespace SYSTEM to tablespace USER_DATA, illustrates this. The index remains in the original SYSTEM tablespace.
SQL> create table tab_w_indexes
2 (col1 number primary key)
3 tablespace system;
Table created.
SQL> select owner, table_name, index_name,
tablespace_name
2 from dba_indexes
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INDEX_NAME TABLESPACE_NAME
---------- -------------- ------------ ---------------
SCOTT TAB_W_INDEXES SYS_C00953 SYSTEM
SQL> alter table tab_w_indexes
2 move tablespace user_data;
Table altered.
SQL> select owner, table_name, index_name,
tablespace_name
2 from dba_indexes
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INDEX_NAME TABLESPACE_NAME
---------- -------------- ------------ ---------------
SCOTT TAB_W_INDEXES SYS_C00953 SYSTEM
To move an index into a new tablespace, use the statement
alter index idxname rebuild tablespace tblspcname
The code below illustrates moving both a table and the table's index from the USER_DATA tablespace to the LMTAB tablespace.
SQL> select owner, table_name, index_name,
tablespace_name
2 from dba_indexes
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INDEX_NAME TABLESPACE_NAME
---------- -------------- ------------ ---------------
SCOTT TAB_W_INDEXES SYS_C00953 USER_DATA
SQL> alter table tab_w_indexes
2 move tablespace lmtab;
Table altered.
SQL> alter index sys_c00953
2 rebuild tablespace lmtab;
Index altered.
SQL> select owner, table_name, tablespace_name
2 from dba_tables
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME TABLESPACE_NAME
---------- ------------- --------------
SCOTT TAB_W_INDEXES LMTAB
SQL> select owner, table_name, index_name,
tablespace_name
2 from dba_indexes
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INDEX_NAME TABLESPACE_NAME
---------- -------------- ------------ ---------------
SCOTT TAB_W_INDEXES SYS_C00953 LMTAB
*Reorganizing Tables
Instead of moving a table to a completely new tablespace, you could just reorganize the table in its original tablespace. You can use the alter table move statement to do that. You simply omit the tablespace clause.
Oracle places the table in a new segment within the original tablespace to rebuild the storage allocation according to your specifications.
The code below reorganizes a table.
SQL> alter table tab_w_indexes
2 move storage (initial 20K next 20K);
Table altered.
SQL> select owner, table_name, initial_extent,
next_extent
2 from dba_tables
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INITIAL_EXTENT NEXT_EXTENT
---------- ------------- -------------- -----------
SCOTT TAB_W_INDEXES 20480 20480
If you need to reorganize a large table and want to improve the performance of the operation, include the nologging keyword in the alter table move statement.
You'll improve performance because the changes made by Oracle9i won't be written to the online redo log.
But remember that, without logging, the changes won't be recoverable later.
The operation of the alter table move statement requires enough space for two copies of the table to exist until the process is completed and Oracle9i can drop the table.
While the table is being moved, users can issue select statements to see data in the table, but they can't change data in the table.
*Truncating Tables
Suppose you issue a delete statement on a table with many hundreds of thousands or millions of rows, and commit it. Then you issue a select count(*) statement. A count of zero rows is returned, but the query takes a surprisingly long time. What happened? Oracle didn't reset the high-water mark after the delete statement, and what's more, it never does! A full table scan still reads every block up to the high-water mark, even though those blocks now hold no rows.
To get rid of the allocated extents that are now empty, and reset the high-water mark, while still preserving the table definition, you need to use the truncate table command. You can include an optional drop storage clause, if needed:
TRUNCATE TABLE EMPLOYEE;
TRUNCATE TABLE EMPLOYEE DROP STORAGE;
Once a table is truncated, you can't issue a rollback command to magically get the data back. Any change made to minextents after table creation will be applied to the table unless you specify the optional reuse storage clause. This clause keeps the current storage allocation assigned to the table (the empty extents aren't deallocated), although the rows are still removed:
TRUNCATE TABLE EMPLOYEE REUSE STORAGE;
Any associated indexes will also be truncated when you use truncate table, and any optional drop storage or reuse storage clauses will be applied to associated indexes.
Despite your inability to roll back a table truncation, Oracle does acquire a rollback segment for the job. Why?
Because if you terminate the truncate table command, or if some failure occurs, the rollback segment stores the changes made for the duration of the truncate operation to enable crash recovery.
*Dropping Tables
To rid yourself of a table entirely and give all allocated space back to the tablespace, issue the drop table statement. For example, to drop the EMPLOYEE table, you'd issue the following statement:
DROP TABLE EMPLOYEE;
If other tables have foreign key constraints that reference this table, you must include the cascade constraints clause:
DROP TABLE EMPLOYEE CASCADE CONSTRAINTS;
Question 36
Question 37
Question 38
Question 39
Question 40
Question 41
* Exercise 1
Try relocating, truncating, and dropping a table in Oracle9i.
In this lesson, you learned how to use the move option with the alter table command to relocate a table to a different tablespace.
You also learned how to use the truncate table command to reallocate empty extents.
Finally, you learned how to use the drop table command to completely eliminate a table.
After completing this lesson, you should be able to:
Suppose you want to eliminate a table column that's no longer needed. In earlier versions of Oracle, you dumped the contents of the table to a flat file, dropped the table, re-created the table without the subsidiary column, and used SQL*Loader or Pro*C to reload the table records to your newly created table. Depending on the size of the table, this process could take a long time.
Oracle9i lets you drop unused columns with the alter table statement. It's easy and quick.
There are two ways to drop a column in Oracle9i:
*Marking a Column Unused
To delete a column logically but not physically, you mark the column unused. The column data is still in the table, but it can't be accessed.
Users of a table can't see an unused column. Information about the unused column doesn't appear in the output of the describe command, and you can't query data in an unused column.
The syntax for marking a column as unused is
alter table tblname set unused column colname
To see which tables have unused columns, you can query the dictionary view DBA_UNUSED_COL_TABS. This view provides a list of all tables with unused columns.
The COUNT column in this view indicates how many unused columns there are for each table.
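For example, to list every table that currently has unused columns, along with the count for each:

```sql
-- The COUNT column reports the number of unused columns per table
SELECT * FROM dba_unused_col_tabs;
```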
*Dropping a Column
To physically drop a column from a table, issue the statement
alter table tblname drop column colname
This statement removes all data from the column and eliminates the column from the table definition.
This operation takes more time than marking a column as unused. Oracle has to go through all blocks of the table and actually remove the column data to reclaim the space used by that column.
*Marking and Then Dropping a Column
You might want to mark columns unused and then drop them later. To do this, first issue the statement
alter table tblname set unused column colname
Then, at your leisure, issue the statement
alter table tblname drop unused columns
Marking a column as unused and then dropping the unused column lets you take away column access quickly and immediately. When you have DBA maintenance time scheduled later, you can remove the column.
The code below provides an example of dropping columns in Oracle9i through the alter table statement.
This example immediately drops the SUBSIDIARY column and all its contents from the EMPLOYEE table.
SQL> alter table employee drop column subsidiary;
Table altered.
This example marks a specific column in a table as unused and then drops all unused columns.
The first statement instructs Oracle to mark the SUBSIDIARY column unused. No information will be removed from the column — Oracle simply pretends the column isn't there.
The second statement (presumably issued at some later time) drops any columns (such as SUBSIDIARY) which have been marked as unused in the table. The unused columns and their data will be deleted.
SQL> alter table employee set unused column subsidiary;
Table altered.
SQL> alter table employee drop unused columns;
Table altered.
You can add three optional clauses to the alter table tblname drop column colname statement. These clauses are added after colname:
Question 42
Question 43
Question 44
Question 45
Question 46
* Exercise 1
Try dropping columns from a table in Oracle9i.
In this lesson, you learned two techniques for removing columns from tables. Both use the alter table command.
In one method, you mark a column unused. This removes access to the column immediately while deferring the time-consuming physical removal of its data.
The second method physically removes a column from a table, so the data contained in that column is gone.
You also learned about optional clauses you can add to the alter table command when you drop a column. These clauses let you drop foreign keys, invalidate related objects, and set a checkpoint.
Question 47
Question 48
Question 49
Question 50
Question 51
Question 52
Question 53
Question 54
Question 55
Question 56
Question 57
Question 58
This page contains a bulleted list of fast facts to review or crib notes for the days leading up to the exam.
Tables store data in Oracle. This course explores the table from an administrative perspective, as a collection of bits and bytes inside a file whose storage must be planned and managed carefully.
- There are regular (permanent) tables, temporary tables, partitioned tables, index-organized tables, and clustered tables. Tables may include a variety of datatypes.
- Every row in a table has a well-defined structure. ROWIDs keep track of where individual rows are stored in a table.
- Proper management of tables includes avoiding row migration and chaining, relocating tables, and dropping unused columns or tables.
This course presents information relevant to the following Oracle 1Z1-031 Oracle9i DBA Fundamentals I exam objectives:
Managing Tables
- Identify the various methods of storing data
- Describe Oracle data types
- Distinguish between extended and restricted ROWIDs
- Describe the structure of a row
- Create regular and temporary tables
- Manage storage structures within a table
- Reorganize, truncate, and drop a table
- Drop a column within a table
Lesson 2. Storing Data
The most common type of table you'll create in Oracle9i is a regular table. But Oracle has different types of tables to suit different types of tasks. In addition to regular (permanent) tables, you can employ temporary tables, partitioned tables, index-organized tables, and cluster tables.
After completing this lesson, you should be able to:
- Describe the five types of Oracle tables
Tables store data in Oracle. You may think of a table as an object similar to a spreadsheet — a table has columns and rows for data.
All information stored in a column will have the same datatype.
A row is a collection of single values defined for every column.
How does a table differ from data segments? The data users place in tables is stored in segments and extents inside datafiles.
That's precisely what makes tables different — tables offer a conceptual way to think of the data inside a segment or extent that also lets you reference that data, manipulate it, add new data, or even take data away.
In other words, tables are the constructs that let users access the data that DBAs think of as being stored in segments and extents.
Oracle supports several types of tables for storing user data in an Oracle segment. There are regular tables for housing permanent data and temporary tables for temporary data. You'll probably find that these two types of tables are all you need.
There are also many exotic variations of tables that Oracle has introduced over the years; each serves an important niche purpose. These include partitioned tables, IOTs, and clustered tables. The pages that follow describe the various types of tables.
*Regular (Permanent) Tables
Regular tables (also called permanent tables) are the most common type of table used to store data. When you execute the create table command, the default table that Oracle creates is a regular table.
Oracle doesn't guarantee that row data will be stored in a particular order in a regular table. This type of data storage is sometimes called heap-organized. Data is stored in an unordered collection, or heap. You can impose order later by including various clauses in SQL queries.
*Temporary Tables
Suppose you want to create a complex report in which data must be run through a series of complex processing steps. To simplify processing, it would help to store data in an intermediate table.
The problem is that you don't want users to access that intermediate information during or after the report's execution, for fear of the confusion that may result.
You could simply use a standard Oracle table, but then you have to remember to clean out the table when you're done.
Temporary tables let you retain session-private data for only the duration of the transaction or session.
Because the data is available only to the session that put it there, there's no worry that other users will be able to see your data before you want them to.
Because the data stored in temporary tables is temporary in nature, there's no need for you to worry about eliminating the contents of the table when you don't need the data anymore. At the end of the transaction or session, the data will disappear.
Even though Oracle supports session privacy with respect to data in a temporary table, all temporary tables in Oracle are available for all current sessions to use, and the definition of the temporary table will continue to exist in the data dictionary even after its data is eliminated.
*Partitioned Tables
Partitioned tables are just like regular tables except for one important feature: the table's data is divided into partitions, and you can reference the individual segments that support each partition directly.
Each partition isn't considered a separate database object, but rather a subobject of the table that you happen to be able to reference directly.
Partitioned tables give you more control over data distribution across multiple disks in the machine hosting the Oracle database. You might find partitioning used to scale extremely large tables, such as those found in data warehouses.
Every disk in a machine has a channel or hardware mechanism designed to facilitate access to that disk. But that channel might not be able to provide enough bandwidth for many processes to access the disk fast enough to provide data for users during peak periods of activity. This is where partitioning factors in.
You can partition a large table and spread the data stored in that table across many disks. Because each disk spins independently and may have a channel available for direct access, partitioning supports a greater degree of parallel processing for busy databases than regular tables.
In a partitioned table, data is stored in each partition according to a partition key, or column, that defines which range of row data goes into which partition. Each partition can then be stored in different tablespaces. Every partition in a table must have the same columns.
Partitions provide several benefits, including increased data availability and the potential for parallel data-change operations operating on different partitions simultaneously.
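As a sketch, a range-partitioned table in Oracle9i might be created like this, with each partition assigned to its own tablespace (the table, column, and tablespace names are illustrative):

```sql
-- Partition key: sale_date; each row is routed to a partition by date range
CREATE TABLE sales_history
  (sale_date  DATE,
   amount     NUMBER(10,2))
PARTITION BY RANGE (sale_date)
 (PARTITION p_q1  VALUES LESS THAN (TO_DATE('01-APR-2002','DD-MON-YYYY'))
    TABLESPACE ts_q1,
  PARTITION p_q2  VALUES LESS THAN (TO_DATE('01-JUL-2002','DD-MON-YYYY'))
    TABLESPACE ts_q2,
  PARTITION p_max VALUES LESS THAN (MAXVALUE)
    TABLESPACE ts_rest);
```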
*Index-Organized Tables
In regular tables, data is stored in heap-organized fashion, so data isn't stored in any ordered way. In an index-organized table, Oracle stores the data in a B-tree index structure rather than in an unstructured heap. The data is stored in the order of the primary key of the table.
The rows in an index-organized table are not only sorted by their primary key values, but each row contains the primary key column values and other non-key column values. There's no separate index structure to store the primary key in index-organized tables. This reduces storage overhead, especially for tables that have few columns.
Index-organized tables are particularly useful for data that's mostly retrieved based on primary key. An example of this sort of data would be a lookup table, where important terms are associated with short alphanumeric codes.
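A minimal index-organized lookup table might be created as follows (the names are illustrative). The organization index clause, combined with a primary key, is what makes the table index-organized:

```sql
CREATE TABLE country_codes
  (code  VARCHAR2(3) PRIMARY KEY,   -- rows are stored in primary key order
   name  VARCHAR2(40))
ORGANIZATION INDEX;
```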
*Cluster Tables
You might have a set of tables that are queried for data at the same time. For example, an employee expense reimbursement system might contain two tables used in a parent/child fashion.
The parent table might contain key facts about each expense reimbursement request submitted by an employee. This table might store one row for each employee reimbursement request, containing column values defined for the employee's name or ID number, the expense request ID number, and the mailing address where the reimbursement check must be sent.
The child table contains line items for each reimbursement request. For example, if the employee took a flight to Singapore for a trade show that lasted several days, the child table would store several rows, each corresponding to a particular expense (airfare, hotel, car, conference registration, meals, and so on), along with a charge-to account number for each expense.
Cluster tables (also simply called clusters) might be handy in this parent/child context. Even though the data is stored as two separate tables, employees who want to see their expense reimbursement requests will almost invariably request data from both tables at the same time.
Clusters let you store data from several tables inside a single segment so users can retrieve data from those two tables together very quickly.
To store two or more tables together inside a cluster segment, the tables must have at least one column in common. This column becomes the cluster key.
Because a cluster stores related rows of different tables in the same physical data segments, it offers two benefits:
- Disk I/O is reduced and access time improves for joins of clustered tables.
- Common column(s) need to be stored only once for all the tables in a cluster.
Clusters work best when the data is static or read-only.
The rows of a table stored in cluster segment blocks are laid out so they're close to the associated rows from the other tables. If the size of a row increases too dramatically after Oracle performs the initial row layout in the cluster, data can be shifted around inside the blocks of the cluster, negating the performance increase the cluster would otherwise provide.
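A sketch of creating an indexed cluster and placing two tables in it, following the expense example above (all names and the SIZE value are illustrative). The cluster key, expense_id, is stored only once per key value:

```sql
-- Create the cluster segment with its cluster key
CREATE CLUSTER expense_cluster (expense_id NUMBER)
  SIZE 512 TABLESPACE user_data;

-- Parent table: one row per reimbursement request
CREATE TABLE expense_request
  (expense_id NUMBER PRIMARY KEY,
   emp_id     NUMBER,
   mail_addr  VARCHAR2(80))
CLUSTER expense_cluster (expense_id);

-- Child table: one row per expense line item
CREATE TABLE expense_item
  (expense_id NUMBER,
   item_desc  VARCHAR2(40),
   amount     NUMBER(10,2))
CLUSTER expense_cluster (expense_id);

-- An index on the cluster key is required before any DML
CREATE INDEX expense_cluster_idx ON CLUSTER expense_cluster;
```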
Question 1
Question 2
Question 3
Question 4
Question 5
In this lesson, you learned about different types of tables:
- Regular (permanent) tables are the most common type. Regular tables are the default table type.
- Temporary tables let you work with session-private data for only the duration of the session.
- Partitioned tables let you reference individual table segments. They're used to spread work across multiple disks.
- Index-organized tables store data in the order of the table's primary key.
- Cluster tables let you store data from several tables within a single segment for faster retrieval.
Lesson 3. Distinguishing Oracle Datatypes
Datatypes define the data in each column of a table. Oracle doesn't have many different datatypes — in fact, it has only two basic kinds: built-in and user-defined. The built-in datatypes include scalar, collection, and reference datatypes.
The classes of user-defined datatypes you can create are endless.
After completing this lesson, you should be able to:
- Describe the various scalar datatypes
- Explain how collection datatypes are used
- Explain how the reference datatype is used
- Describe user-defined datatypes
Each column in a table stores data of a particular datatype. Every row can contain values for columns of different datatypes, but individual columns can house only one type of data.
Oracle doesn't have as many different datatypes as you might be used to. For example, Oracle doesn't have a currency datatype because currency is nothing more than a number with a currency format mask.
Oracle instead provides only one datatype for numbers, and gives you a robust mechanism for applying whatever format masks you might need to give your numbers meaning.
Oracle substantially reorganized the available datatypes between versions 7.3 and 8.0. There are two basic categories of datatypes in Oracle: built-in types and user-defined types.
Within the built-in types, there are three basic classes of datatypes available: scalar, collection, and reference datatypes.
Within the user-defined types, the datatypes you can define for your applications are endless.
*Scalar Datatypes
A table column defined as a scalar datatype holds a single item in each cell. You can choose from a variety of scalar datatypes:
- CHAR(L)
- NCHAR(L)
- VARCHAR2(L)
- NVARCHAR2(L)
- NUMBER(L,P)
- DATE
- RAW(L)
- ROWID
- LONG
- LONG RAW
- LOB
*CHAR(L) and NCHAR(L)
CHAR(L) and NCHAR(L) are fixed-length text string scalar datatypes, where the data is stored with blanks padded out to the full width of the column (represented by L).
CHAR and NCHAR values can be up to 2,000 bytes in length in Oracle9i. In Oracle7, the limit was 255 bytes.
*VARCHAR2(L) and NVARCHAR2(L)
VARCHAR2(L) and NVARCHAR2(L) are variable-length text string datatypes, where data is stored using only the number of bytes required to store the actual value, which can vary in length for each row.
VARCHAR2(L) and NVARCHAR2(L) values can be up to 4,000 bytes long.
NCHAR and NVARCHAR2 are NLS datatypes that enable the storage of either fixed-width or variable-width character sets. You can also use them for nonmultibyte character sets, but that's not common. For all intents and purposes, NCHAR is CHAR's NLS multibyte equivalent type, and NVARCHAR2 plays the same role for VARCHAR2.
NLS stands for National Language Set. NLS lets Oracle be used in languages other than American English.
The English language requires only one byte to store a character, such as the letter A. Languages with large character sets (such as Japanese, Chinese, or Korean) or other substantial differences from English (such as being read from right to left, like Arabic or Hebrew) need multiple bytes to store one character. NLS provides the translation.
*NUMBER(L,P)
The NUMBER(L,P) datatype stores fixed and floating-point numbers up to 38 digits of precision in a variable-length format. Values are stored in scientific notation — one byte stores the exponent, one byte is used for every two significant digits of the number's mantissa, and one byte is used for negative numbers if the number of significant digits is less than 38 digits.
You can specify precision (number of digits) and scale (number of digits right of the decimal point).
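For example, a column defined as NUMBER(7,2) holds up to 7 significant digits, 2 of them to the right of the decimal point; values with more decimal digits are rounded to the defined scale. The TEST_NUM table here is hypothetical:

SQL> create table test_num (amt number(7,2));

Table created.

SQL> insert into test_num values (12345.678);

1 row created.

SQL> select amt from test_num;

      AMT
---------
 12345.68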
Tip: In this context, the mantissa is the string of significant digits of the number. Storing values in scientific notation (a mantissa plus an exponent) lets Oracle use less space than a fixed-width binary format would.
*DATE
The DATE datatype stores a fixed-length 7-byte field. You can apply formatting masks to render the date in the manner appropriate for your situation. By default, Oracle shows date information as DD-MON-YY: a two-digit date, a three-letter month abbreviation, and a two-digit year.
The DATE format includes time as well as date. The time is stored internally as a number.
Internally, Oracle stores the date as a number that includes four-digit year references. The two-digit default representation of the year is just a formatting mask.
You can change this format at the system or session level with the following command:
alter [system|session] set nls_date_format = 'mask'
Substitute your format mask for mask.
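For example, to display full four-digit years and times for the current session (the format mask shown is just one possibility; your output will show the current date and time):

SQL> alter session set nls_date_format = 'YYYY-MM-DD HH24:MI:SS';

Session altered.

SQL> select sysdate from dual;

SYSDATE
-------------------
2002-06-15 14:30:05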
*RAW(L)
The RAW(L) datatype holds a small amount of binary data. Oracle performs no conversions on raw data. The raw data is simply stored as is, and isn't interpreted by Oracle.
Oracle can house up to 2,000 bytes in a RAW column.
*ROWID
The ROWID datatype stores ROWID information. A ROWID is a 10-byte string that identifies the location of row data in a datafile. Another lesson has more to say about ROWID.
*LONG and LOB (Large Object)
Oracle9i has scalar datatypes that let you store very large amounts of information in columns. Some of these datatypes are provided mainly for backward compatibility:
- LONG stores up to 2 GB of text data.
- LONG RAW stores up to 2 GB of binary data such as graphics, sound, documents, or arrays.
- BLOB stores up to 4 GB of binary data such as graphics, sound, documents, or arrays.
- CLOB and NCLOB store up to 4 GB of text data. NCLOB is a large fixed-width NLS datatype.
- BFILE stores 4 GB of unstructured data in operating system files.
Several key differences between LONG and LOB types make LOB types more versatile and helpful for large object management:
- There can be only one LONG column in a table, because the LONG column data is stored inline. This means that all data in the LONG column for each row in the table is stored in contiguous data blocks inside the segment used for storing the table's data.
- There can be many LOB columns in a table, because when the LOB value is over 4,000 bytes, only a locator for the LOB type is stored inline with the table data. No LOB will ever require more than 4,000 bytes of space inline with other table data. The rest of the data in the LOB columns is stored in an overflow segment.
- SELECT statements on LONG columns return the actual data. The same statement on a LOB column returns only the locator.
- Oracle supports the use of LOB types (except NCLOB) within object types, whereas LONG can't be used in object types.
- LOBs can be larger than LONGs — 4 GB for LOBs, 2 GB for LONGs.
- LOB data can be accessed piecewise; LONG access is sequential. Only the entire value in a LONG column can be obtained, whereas parts of a value in a LOB column can be retrieved.
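Because LOB columns over 4,000 bytes store only a locator inline, one table can contain several of them. This hypothetical DOCUMENT table mixes CLOB and BLOB columns in a way that wouldn't be possible with multiple LONG columns:

CREATE TABLE DOCUMENT
(doc_id NUMBER(10),
title VARCHAR2(100),
body CLOB,
photo BLOB);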
*Collection Datatypes
We've covered the scalar datatypes. Now let's move on to collection datatypes. A collection is a gathering of like-defined elements.
There are two collection datatypes available in Oracle:
- Variable-length arrays (VARRAY)
- Nested tables
*VARRAY
A variable-length array (VARRAY) is an ordered list of objects, all of the same datatype.
The VARRAY datatype has two special attributes (in addition to those attributes within the objects the VARRAY contains): a count for the number of elements in the VARRAY and the limit for the maximum number of elements that can appear in a VARRAY. Although the VARRAY can have any number of elements, the limit must be predefined.
Each element in a VARRAY has an index number corresponding to the position of the element in the array.
Constraints and default values may not be created for VARRAY elements.
Once the VARRAY is created, a user can refer to an individual element of the VARRAY only with PL/SQL (although SQL can be used to access the VARRAY as a whole).
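Here's a hypothetical example. The PHONE_ARRAY type allows up to five elements, and that limit of 5 must be declared up front:

CREATE TYPE phone_array AS VARRAY(5) OF VARCHAR2(20);
/
CREATE TABLE contact
(contact_id NUMBER(10),
name VARCHAR2(50),
phones phone_array);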
*Nested Tables
The nested table datatype can be thought of as a table within a table. This architecture is exceptionally suited for applications that have parent/child tables with referential integrity. A nested table is an unordered list of row records, each having the same structure. These rows are usually stored away from the table, with a reference pointer from the corresponding row in the parent table to the child table.
Like VARRAYs, nested tables can have any number of elements, with the added bonus that you don't need to predetermine a maximum limit.
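A hypothetical parent/child sketch: each CUST_ORDER row holds a nested table of ORDER_ITEM rows, and the store as clause names the segment where the child rows are stored away from the parent table:

CREATE TYPE order_item AS OBJECT
(item_no NUMBER(4),
qty NUMBER(6));
/
CREATE TYPE order_item_tab AS TABLE OF order_item;
/
CREATE TABLE cust_order
(order_id NUMBER(10),
items order_item_tab)
NESTED TABLE items STORE AS order_items_nt;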
*Reference Datatypes
The final built-in datatype is the reference datatype.
Developers can use the reference datatype to define a foreign key relationship between two objects.
The reference datatype can reference all columns in the table for a particular row — it's a pointer to a particular object, not the object itself.
*User-Defined Datatypes
User-defined types are abstract datatypes or compositions of existing scalar or other types you can define to serve highly specialized purposes.
User-defined datatypes may be composed of scalar datatypes, collection datatypes, or other user-defined types.
In this lesson, you learned about the Oracle datatypes used to define the data in each column of a table. There are two kinds of datatypes: built-in and user-defined.
The built-in datatypes include scalar, collection, and reference datatypes.
The user-defined datatypes you can create are endless. They can be derived from scalar and collection datatypes, as well as from other user-defined datatypes.
Lesson 4. Extended and Restricted ROWIDs
ROWIDs point to the location of a row. Using ROWID is the fastest way to find a row. There are two ROWID formats: the older restricted ROWID and the extended ROWID in use since Oracle 8.0.3. Both can be used in Oracle9i.
After completing this lesson, you should be able to:
- Explain the purpose and use of ROWIDs
- Describe the difference between restricted and extended ROWIDs
Oracle ROWIDs identify the location of a row inside a database. ROWIDs aren't addresses in memory or on disk; they're simply identifiers Oracle uses. Locating a table row using the ROWID is the fastest way to find a row in a table.
Although ROWID information can be queried like other columns in a table, a ROWID isn't stored explicitly as a column value. When users add new rows to a database, Oracle generates a ROWID to identify that row's unique database location.
The ROWID Oracle generates for a row depends on the following components:
- The datafile storing the table that particular row is added to
- The segment corresponding to the object the row will be stored in
- The block inside the segment that will house the row
- The slot in an Oracle block where the row will be stored
*Extended ROWIDs
Oracle's extended ROWID format is 10 bytes (80 bits) in size.
The extended ROWID has four components: an object number (32 bits), a relative file number (10 bits), a block number (22 bits), and a row (slot) number (16 bits).
This format was introduced in Oracle 8.0.3 to overcome the limited amount of disk space prior versions of Oracle could address. With extended ROWIDs, Oracle can address rows in tables in a manner that lets the database grow to a virtually limitless size.
Extended ROWIDs are displayed as 18-character representations of the location of data in the database. Each character is represented in a base-64 format consisting of A - Z, a - z, 0 - 9, +, and /.
The first six characters correspond to the data object number, the next three are the relative file number, the next six are the block number, and the last three are the row number.
The code below shows how to query a database for ROWIDs.
SQL> select name, ROWID from employee;
NAME ROWID
---------- ------------------
DURNAM AAAA3kAAGAAAAGsAAA
BLANN AAAA3kAAGAAAAGsAAB
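If you'd rather see the four components as numbers than decode the base-64 characters yourself, the Oracle-supplied DBMS_ROWID package can extract them. This sketch assumes the EMPLOYEE table from the example above:

SQL> select dbms_rowid.rowid_object(ROWID) obj,
  2         dbms_rowid.rowid_relative_fno(ROWID) rfile,
  3         dbms_rowid.rowid_block_number(ROWID) blk,
  4         dbms_rowid.rowid_row_number(ROWID) slot
  5  from employee;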
*Restricted ROWIDs
Historically, Oracle used a 6-byte ROWID that's now considered restricted. This restricted ROWID doesn't store the object number for the table the row will be stored in.
This format worked because earlier versions of Oracle required all datafiles to have a unique file number within the database, regardless of the tablespace the file belonged to. Oracle8i and later releases number datafiles relative to the tablespace they belong to.
Restricted ROWIDs are displayed as 18 characters in base-16 format. The first 8 characters represent the block number, characters 10 - 13 are the row number, and characters 15 - 18 are the (absolute) file number. Characters 9 and 14 are static separator characters.
Restricted ROWIDs are rarely used now, with one exception:
Restricted ROWID format is employed to locate rows in nonpartitioned indexes for nonpartitioned tables where all index entries refer to rows within the same segment.
This eliminates any uncertainty about relative file numbers, because a segment can be stored in one and only one tablespace.
Here's how you can remember the ROWIDs:
- Restricted ROWIDs have a block ID, row number, and file number, which form the acronym BRF.
- Extended ROWIDS have an object ID, block ID, row number, and relative file number, which form OBRRF.
In this lesson, you learned how ROWIDs point to the location of a row.
There are two types of ROWIDs. The restricted ROWID format is older, but it's still in limited use.
The extended ROWID format lets Oracle increase the amount of space a database can employ.
Lesson 5. Row Structure
Every Oracle table is divided into data blocks that contain rows. To administer a database, you need to understand how data blocks and rows are constructed. After completing this lesson, you should be able to:
- Describe the structure of a data block
- Describe the structure of a row
A cornerstone of storage management is Oracle's ability to let you manage the space in a data block. The size of a data block is determined by the DB_BLOCK_SIZE parameter set for the instance when you create the database.
Data block size is almost always a multiple of operating system block size. Operating systems usually employ block sizes of 512 or 1,024 bytes.
An Oracle block can be anywhere from 2,048 to 16,384 bytes in size (or even larger for some platforms). 8,192 bytes is a common size for most Oracle databases.
*Data Block Components
There are several different components inside every data block in your Oracle database:
- Block header and directory information. Each block has a header and directory that include information about the table that owns the block and the row data the block contains.
- Free space. This is the space reserved for the growth of existing rows in the block. It's determined by the pctfree setting.
- Space occupied by existing rows. Every time a row is added, Oracle places that row in a block. The space available for rows is determined by the setting for DB_BLOCK_SIZE, minus the space occupied by the block header, minus the space reserved by the setting for pctfree.
*Row Components
There are several different components inside every row contained in a data block:
- Row header. The row header stores information about the number of columns in the row, chaining information, and the current lock status for the row.
- Row data. Row data consists of the actual data of each table row. Row data is divided into columns, or column data.
- Column data. Column data for each row is stored in the order in which the columns were defined for the table.
Column width is stored in a field along with the non-NULL column data. If a column value is NULL for a row, no width field or column value is stored in the block for that row, so NULL column values consume no storage at all. Because a NULL column can later be updated to a non-NULL value, the row can grow; that's one reason Oracle needs to maintain some free space via pctfree for row growth.
The column width field is 1 byte if the column value in the row is 250 bytes or less, and 3 bytes if the value is longer than 250 bytes. Oracle stores a number identifying the width of the column value only when the column value isn't NULL.
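You can observe NULL storage with the VSIZE function, which returns the number of bytes Oracle uses to store a value. The TEST_NULL table is hypothetical, and a single-byte character set is assumed:

SQL> create table test_null (a varchar2(10), b varchar2(10));

Table created.

SQL> insert into test_null values ('ABC', NULL);

1 row created.

SQL> select vsize(a), vsize(b) from test_null;

 VSIZE(A)  VSIZE(B)
--------- ---------
        3

VSIZE returns 3 bytes for the three-character value and NULL (displayed as blank) for the NULL column, which occupies no storage.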
In this lesson, you learned about the structure of Oracle data blocks and the rows contained in those blocks.
You learned that data blocks consist of a block header with directory information, free space, and occupied space containing rows.
Each row has a row header and row data that's divided into column data.
Lesson 6. Creating Permanent Tables
Regular, permanent tables are probably the most common type of table you'll create in a database. You create tables with the create table command and add optional clauses to control the characteristics of the table and Oracle's operation.
After completing this lesson, you should be able to:
- Create a regular, permanent table
- Add optional clauses to the create table command
You create a permanent or regular table in Oracle with the create table command. This lesson reviews that command and discusses how you can manage storage considerations when you create regular tables.
Objects created without a storage clause inherit the default storage settings of the tablespace they're placed in.
This lesson explains how you can define the storage clause when you create an object.
This lesson assumes you're placing a regular table in a dictionary-managed tablespace to cover the use of the storage clause. The code below shows an example of the create table command.
The clauses within this command are described on the next pages.
CREATE TABLE EMPLOYEE
(empid NUMBER(10),
lastname VARCHAR2(25),
firstname VARCHAR2(25),
salary NUMBER(10,4),
CONSTRAINT pk_employee_01
PRIMARY KEY (empid))
TABLESPACE data
PCTFREE 20 PCTUSED 50
INITRANS 1 MAXTRANS 255
NOCACHE LOGGING
STORAGE ( INITIAL 100K NEXT 150K
MINEXTENTS 4 MAXEXTENTS 300
PCTINCREASE 20 );
The tablespace keyword indicates which tablespace Oracle should create the table in.
If you don't specify this clause, Oracle will put the table in the default tablespace you were assigned to when your userid was created.
There are a number of clauses for space utilization.
The pctfree keyword specifies the space Oracle is to leave free when inserting rows to accommodate later growth.
The pctused option specifies the threshold percentage of a block the actual contents of row data must fall below before Oracle will consider the block free for new row inserts.
The next two clauses control Oracle's capability to make concurrent updates to a data block.
initrans specifies the initial number of transactions that can update the rows in a data block concurrently. The default initrans value for regular tables is 1. For clustered tables, the default is 2.
maxtrans specifies the maximum number of transactions that can update the rows in a data block concurrently. The default maxtrans value for tables is 255.
You shouldn't normally change the default values for these options.
The nocache clause specifies that Oracle shouldn't make these blocks persist in the buffer cache if a select statement on the table results in a full table scan.
In this case, select * from EMPLOYEE performs a full table scan, so Oracle loads the table's blocks into the buffer cache in a way that ensures those blocks won't persist for long.
If you want the table to stay cached in the buffer cache when you issue select * from EMPLOYEE, specify the cache keyword instead.
The default nocache specifies that the blocks retrieved for this table are placed at the least-recently-used end of the LRU list in the buffer cache after a full table scan.
The logging clause tells Oracle to track table creation in the redo log. If a disk fails, a logged table can be recovered. logging is the default.
This can be changed to nologging so that redo isn't logged. That's handy in situations such as certain bulk data loads, where the DBA plans to take a backup as soon as the data is loaded into Oracle.
Finally, you can specify storage clauses for table creation. These will override the default storage settings of the tablespace you create the object in.
The only tablespace default a storage clause won't override is minimum extent.
*Rules of Thumb for Table Storage
Observe the following rules of thumb when you create regular tables:
- In Oracle9i, tablespaces are locally managed by default, so tables are typically created in locally-managed tablespaces. In previous versions of Oracle, the default was dictionary-managed tablespaces.
- Don't place tables in the same tablespace as undo segments, temporary segments, or index segments, and never place tables in the SYSTEM tablespace.
- To minimize fragmentation in the tablespace, use a small collection of standard, complementary extent sizes for your tables, or use uniform extent allocation. Uniform allocation is the Oracle9i default, though Oracle will allocate multiple extents of uniform size to honor the initial or next settings in the storage clause of create table.
- The cache clause makes blocks read into the buffer cache via full table scans persist much longer than they would otherwise. If you have a small lookup table that's accessed frequently, you may want to keep it in memory by specifying the cache clause or by issuing alter table lookup_tblname cache.
Topic 6.1 Exercises
* Exercise 1
Try creating a regular table in Oracle9i.
Step | Action |
---|---|
1 | Please Note: For the exercises in this course, you should set up an isolated Oracle9i system for practice. Do not use your production Oracle9i system for these practice exercises. Create a regular table. Include the columns, constraints, and primary key you desire. |
2 | Specify the tablespace where the table will be created. |
3 | Specify the space Oracle should leave free for later growth. |
4 | Specify the percentage threshold that row data contents must not exceed for a block to be free for row inserts. |
5 | Specify that the default values will apply for the initial number and maximum number of transactions that can update data block rows concurrently. |
6 | Specify that data blocks should not persist in the buffer cache. |
7 | Track table creation in the redo log. |
In this lesson, you learned how to create regular, permanent tables with the create table command.
You learned about the clauses you can use with create table, such as tablespace, initrans, nocache, and logging.
Finally, you learned some rules for table storage.
Lesson 7. Creating Temporary Tables
Temporary tables let you work with session-private data other users can't access. A temporary table lasts only as long as the transaction or session. Since temporary tables are ephemeral, you need to understand how they differ from regular, permanent tables.
After completing this lesson, you should be able to:
- Create a temporary table
- Add clauses to preserve or purge data in a temporary table
You create temporary tables with the command shown here.
The tbldef table definition can use any of the column or constraint definitions a permanent table uses. Associated temporary indexes will be generated to support primary or unique keys.
create global temporary table tblname ( tbldef )
on commit [delete|preserve] rows
The data in a temporary table is stored in memory, inside the sort area. If more space is required, temporary segments in your temporary tablespace will be used.
When you specify on commit delete rows, the data in a temporary table (along with data in any associated index) is purged after the transaction is completed.
When you specify on commit preserve rows, rows will be stored in the temporary table until the user who created the table terminates the session. For that period, you can use the temporary table the same way you'd use any other table.
If you don't specify an on commit clause, Oracle uses on commit delete rows by default.
create global temporary table tblname ( tbldef )
on commit [delete|preserve] rows
In this example, SCOTT creates a temporary table called MYTEMP.
Since on commit delete rows is specified, this table will purge rows after a transaction is completed. Remember that this is the default action.
The first select statement SCOTT issues yields a result, but the second select doesn't. The data has been purged by the commit.
SQL> connect scott/tiger
Connected.
SQL> create global temporary table mytemp
2 (col1 number,
3 col2 varchar2(30))
4 on commit delete rows;
Table created.
SQL> insert into mytemp values (1,'JUNK');
1 row created.
SQL> select * from mytemp;
COL1 COL2
--------- ------------------------------
1 JUNK
SQL> commit;
Commit complete.
SQL> select * from mytemp;
no rows selected
Here the data in temporary table T_TABLE will persist for the duration of the session instead of being purged after a transaction is completed.
The first select statement SCOTT issues yields a result, and so does the second. The data will be preserved until the session is terminated.
SQL> connect scott/tiger
Connected.
SQL> create global temporary table t_table
2 (col1 number,
3 col2 varchar2(20))
4 on commit preserve rows;
Table created.
SQL> insert into t_table values (1, 'JUNK');
1 row created.
SQL> select * from t_table;
COL1 COL2
--------- --------------------
1 JUNK
SQL> commit;
Commit complete.
SQL> select * from t_table;
COL1 COL2
--------- --------------------
1 JUNK
Other users can employ a temporary table SCOTT creates, but they won't see SCOTT's data. And SCOTT won't see their data. Even though data remains until SCOTT terminates the session, in this example CLARK can't access data SCOTT inserted into T_TABLE. If SCOTT issues the command truncate table t_table, only SCOTT's data will be removed. Other users' data will still be in the table.
SQL> connect scott/tiger
Connected.
SQL> create global temporary table t_table
2 (col1 number,
3 col2 varchar2(20))
4 on commit preserve rows;
Table created.
SQL> insert into t_table values (1, 'JUNK');
1 row created.
SQL> select * from t_table;
COL1 COL2
--------- --------------------
1 JUNK
SQL> connect clark/cloth
Connected.
SQL> select * from t_table;
no rows selected
You can join a temporary table with permanent tables and create objects like views, indexes, and triggers that form object dependencies on the temporary table.
You can't create a view that contains a join between temporary and permanent tables.
The create global temporary table statement doesn't log any redo, so temporary table data changes aren't recoverable.
Data changes made to temporary tables will generate rollback information to let Oracle roll back a transaction and thus, the temporary table data, in the event the instance crashes.
The TEMPORARY (T) and DURATION columns in the DBA_TABLES view indicate whether a table is a temporary table and how long the data in the table will persist.
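For instance, after creating the T_TABLE example above with on commit preserve rows, a query like this should report Y in the TEMPORARY column and SYS$SESSION in the DURATION column (the expected values here are what session-duration temporary tables typically show):

SQL> select table_name, temporary, duration
  2  from dba_tables
  3  where table_name = 'T_TABLE';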
Oracle stores temporary table data in temporary segments in a temporary tablespace.
Because temporary tablespace segment allocations are managed uniformly by Oracle, you typically don't need to worry about specifying a storage clause when you create a temporary tablespace.
Instead, focus your attention on whether or not the data in the temporary table should persist for the duration of a transaction or a session.
Topic 7.1 Exercises
* Exercise 1
Try creating a temporary table in Oracle9i.
Step | Action |
---|---|
1 | Please Note: For the exercises in this course, you should set up an isolated Oracle9i system for practice. Do not use your production Oracle9i system for these practice exercises. Create a temporary table. Include whatever column or constraint definitions or keys you like. Insert rows of data. |
2 | Specify that data is to be purged after a transaction. |
3 | Display data with select, then commit the transaction. |
4 | Try to select the data again. |
5 | Use the same data to create a second temporary table that won't purge rows until the session is terminated. |
6 | Display the data for that table with select. |
7 | Commit the transaction, then try to select the data again. |
8 | Connect as a different user. Try to select the same data. |
In this lesson, you learned how to create temporary tables with the create global temporary table command.
You learned how to use the commit clause to purge or retain the data in a temporary table.
Finally, you learned how temporary tables operate, and you learned some factors you need to consider when you employ temporary tables.
Lesson 8. Managing Storage Structures
It's important to keep a database working as efficiently as possible. You should remove unused space, but leave sufficient space for normal operation. You should avoid storing data across data blocks. And you should periodically use the tools supplied with Oracle to evaluate the integrity of your database. After completing this lesson, you should be able to:
- Use the high-water mark to allocate space
- Avoid row migration and chaining
- Analyze the integrity of a table
- Use views to gain information about tables
Oracle allocates new extents for a table automatically when more data is added than the current allocation will hold. You can add more extents manually with the statement
alter table tblname allocate extent (size num[K|M] datafile 'filename')
num is the size of the extent you want to allocate (subject to the tablespace limit set by minimum extent) and filename is the absolute path and filename of the datafile where you want to store the extent.
Both the size and datafile clauses are optional in the alter table tblname allocate extent statement.
If size isn't used, Oracle uses the size specified in the next storage option for the table.
If datafile is excluded, Oracle manages placement itself.
You could use this command to control the distribution of extents before performing bulk data loads, as shown in these code examples that modify the EMPLOYEE table.
ALTER TABLE EMPLOYEE ALLOCATE EXTENT;
ALTER TABLE EMPLOYEE ALLOCATE EXTENT ( SIZE 200K );
ALTER TABLE EMPLOYEE ALLOCATE EXTENT
(DATAFILE '/u10/oradata/oracle/data05.dbf' );
*Using the High-Water Mark
How does Oracle maintain knowledge about table size? A special marker called the high-water mark indicates the last block used to hold the table's data.
As insert statements fill data blocks, Oracle moves the high-water mark farther and farther out to indicate the last block used.
The high-water mark is stored in a table segment header. Its location tells Oracle where to stop reading blocks during full table scans.
You can find the high-water mark for your table using the unused_space( ) procedure in the Oracle-supplied DBMS_SPACE package, or in the DBA_TABLES dictionary view after you've run the analyze command on your table.
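This sketch calls unused_space( ) for the EMPLOYEE table; the owner name SCOTT is assumed, and dbms_output displays the result (issue set serveroutput on first in SQL*Plus):

DECLARE
  l_total_blocks   NUMBER;
  l_total_bytes    NUMBER;
  l_unused_blocks  NUMBER;
  l_unused_bytes   NUMBER;
  l_lue_file_id    NUMBER;
  l_lue_block_id   NUMBER;
  l_last_block     NUMBER;
BEGIN
  dbms_space.unused_space('SCOTT', 'EMPLOYEE', 'TABLE',
    l_total_blocks, l_total_bytes, l_unused_blocks, l_unused_bytes,
    l_lue_file_id, l_lue_block_id, l_last_block);
  dbms_output.put_line('Unused blocks above high-water mark: ' || l_unused_blocks);
END;
/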
To eliminate unused space allocated to a table, issue the statement
alter table tblname deallocate unused keep num [K|M]
In this statement, keep is an optional clause that lets you retain num amount of the unused space. The keep clause specifies the number of bytes above the high-water mark that should be retained.
If the command is used without the keep clause, Oracle will deallocate all unused space above the high-water mark.
If the high-water mark is at an extent less than the value of minextents, Oracle will release extents above minextents.
Examples of this statement are shown in the code below.
ALTER TABLE EMPLOYEE DEALLOCATE UNUSED;
ALTER TABLE EMPLOYEE DEALLOCATE UNUSED KEEP 10K;
*Row Migration
If pctfree is set too low for blocks in a table, an update statement may increase the size of a row, only to find there isn't enough room in the block to fit the change. Oracle then has to move the row to another block where it will fit.
This row migration degrades performance when the server process tries to locate the migrated row, only to find that the row has moved.
*Chaining
Chaining occurs when data for a row is stored in multiple blocks. This is common in tables that use columns defined as the LONG datatype, because LONG data is stored inline with the rest of the table.
Chaining is detrimental to database performance because the server process must piece together the row of data with multiple disk reads. And there's more degradation when DBWR has to perform multiple disk writes for a single row of data.
*Analyzing Table Integrity
Typically, Oracle handles the validation of data block structure automatically whenever a block is read into the buffer cache.
There are two actions you can take to validate the structural integrity of data blocks on your own:
- Use the DB_BLOCK_CHECKSUM initsid.ora parameter.
- Issue the analyze table tblname validate structure command.
Setting the initsid.ora parameter DB_BLOCK_CHECKSUM to TRUE makes the DBWR process calculate a checksum on every block it writes, regardless of tablespace. Oracle also calculates checksums for blocks written to the online redo logs.
When DB_BLOCK_CHECKSUM is set to FALSE, the DBWR process calculates checksums only when writing blocks to the SYSTEM tablespace.
The process of calculating checksums adds overhead to normal database processing. You must balance the additional performance overhead with your need for ensuring that data blocks aren't corrupt.
The analyze table tblname validate structure command can help you check table integrity. The optional cascade clause in this statement validates the structure of blocks in indexes associated with the table. From SQL*Plus, you issue the analyze command on one table at a time.
One of the main uses of the analyze command is to gather the statistics the cost-based optimizer uses when processing SQL statements.
The command also detects row migration.
There are two basic clauses for the analyze command.
The compute statistics clause calculates exact statistics for the table by reading every row:
analyze table tblname compute statistics;
The estimate statistics clause estimates statistics for the table from a sample of data, whose size you can specify with the sample num [rows|percent] clause. If you don't specify a sample clause, Oracle samples 1,064 rows:
analyze table tblname estimate statistics;
estimate statistics is almost as accurate as compute statistics, yet takes less time because it measures a smaller sample.
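For instance, against a hypothetical EMPLOYEE table, you might control the sample size like this:

```sql
-- Estimate statistics from a 20 percent sample of the table
analyze table employee estimate statistics sample 20 percent;

-- Or estimate from a fixed number of rows
analyze table employee estimate statistics sample 5000 rows;
```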
Once statistics are generated, the CHAIN_CNT column in the DBA_TABLES dictionary view contains the number of chained and migrated rows estimated or computed in the table. If you feel this number is high, you might want to save the table data, drop the table, re-create it, and reload the data to eliminate the problem.
Remember that some chaining is to be expected, especially when your rows are wide (for example, if you have many VARCHAR2(4000) columns or a LONG column).
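A quick check of CHAIN_CNT after gathering statistics might look like this (EMPLOYEE is a hypothetical table):

```sql
-- How many rows are chained or migrated, out of how many rows total?
select table_name, num_rows, chain_cnt
from dba_tables
where table_name = 'EMPLOYEE';
```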
If you want to validate integrity on an ongoing basis as part of a PL/SQL application, you can develop code that calls the analyze_schema procedure in the DBMS_UTILITY package or the analyze_object procedure in DBMS_DDL.
You can also employ the DBMS_STATS package in place of the analyze command for gathering statistics. This package offers some advantages over the use of the analyze command, particularly when you want to save statistics and reuse those statistics in different databases.
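A minimal DBMS_STATS equivalent of analyze ... estimate statistics might look like the following sketch, assuming a SCOTT.EMPLOYEE table:

```sql
begin
  dbms_stats.gather_table_stats(
    ownname          => 'SCOTT',
    tabname          => 'EMPLOYEE',
    estimate_percent => 20);   -- sample 20 percent of the rows
end;
/
```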
*Using Views
Several data dictionary views provide information about tables. These views base their content either on tables in the database or on dynamic performance information about tables collected while the instance is running. Views that provide helpful table information are described below.
View | Description |
---|---|
DBA_SEGMENTS | Information about the name, tablespace location, and owner of segments containing table data |
DBA_OBJECTS | Information about the object ID number used to determine ROWID for rows in the table, and the table creation timestamp for determining dependencies |
DBA_TABLES | Information about all storage settings for the table, and statistics collected as part of the analyze operation |
DBA_EXTENTS | Information about the number of extents allocated to a table, the datafiles in which they're stored, and the size of each extent |
DBA_TAB_COLUMNS | Information about every column in every table, including datatype, size, precision, column position in each row, and nullability |
DBA_TAB_COMMENTS | Comment information for every table |
DBA_COL_COMMENTS | Comment information for every column in every table |
Consider the ways you can gain information about tables from views:
- If you want to determine whether an extremely large table is clumped in one datafile, you can query the DBA_EXTENTS view.
- If you want to determine what rows are chained or migrated in a table, you can query the DBA_TABLES view.
- If you want to see whether there's a great deal of unused space in a table, you can query the DBA_TABLES view.
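For example, the first query above — finding how a large table's extents are spread across datafiles — might be written like this (EMPLOYEE is a hypothetical table):

```sql
-- Count the table's extents and bytes in each datafile
select f.file_name, count(*) as ext_count, sum(e.bytes) as total_bytes
from dba_extents e, dba_data_files f
where e.file_id = f.file_id
and e.segment_name = 'EMPLOYEE'
group by f.file_name;
```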
Topic 8.1 Exercises
* Exercise 1
Try managing and analyzing tables in Oracle9i.
Step | Action |
---|---|
1 | Please Note: For the exercises in this course, you should set up an isolated Oracle9i system for practice. Do not use your production Oracle9i system for these practice exercises. Manually add extents to a table. Use the extent size that suits your situation. Place the extents in a datafile of your choice. |
2 | Eliminate all but 10K of the unused space from a table. |
3 | Modify initsid.ora so Oracle will calculate a checksum for every block it writes. |
4 | Analyze the integrity of a table, first by estimating statistics and then by computing statistics. |
5 | Use the DBA_TABLES view to determine the number of chained and migrated rows in a table. |
In this lesson, you learned how Oracle uses the high-water mark to indicate the last block in a table that holds data. You learned how to locate the high-water mark, and how to use it to eliminate unused space in a table.
You also learned how row migration and chaining can degrade the performance of a database, and you learned how to avoid these conditions.
You learned how to use the analyze command to check the integrity of a database.
Finally, you reviewed the data dictionary views that provide information about tables.
Lesson 9. Managing Tables
Managing tables is important for maintaining your database in proper working order.
After completing this lesson, you should be able to:
- Relocate a table to a new tablespace or reorganize a table in the same tablespace
- Truncate empty extents from a table
- Drop a table from a tablespace
Moving a table into a different tablespace used to be a complex task. You used the EXPORT tool to create a dump file containing the rows of the table, the table's definition, and indexes or constraints.
You dropped the table from the Oracle database after exporting it, then you re-created the table in another tablespace.
You then ran IMPORT. You set the IGNORE parameter to Y so IMPORT wouldn't fail when it saw the table already existed.
You could have used SQL*Loader or the create table as select statement with the storage and tablespace clauses explicitly defined. Either method required a long time to execute.
It's easier to relocate and reorganize tables in Oracle9i. You don't need any of the old time-consuming techniques. Instead, you simply use the alter table move statement:
alter table tblname move tablespace tblspcname
The alter table move statement lets you relocate a table to another tablespace or reorganize a table to resize the initial segment. This statement has three optional clauses:
Clause | Function |
---|---|
tablespace | Lets you identify the tablespace location where you want the table placed. If omitted, Oracle9i rebuilds the table in a new segment in the same tablespace. |
storage | Lets you reconfigure aspects of the table's storage, such as the initial extent or percentage increase of subsequent extents. If no storage clause is specified, Oracle9i will re-create the table with the storage attributes used when the table was initially created. Useful only for placement in dictionary-managed tablespaces, not locally managed tablespaces. |
logging|nologging | The default logging tells Oracle9i to write alter table move changes to the online redo logs for recoverability. nologging tells Oracle to not write changes to the log. |
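Combining the clauses, a sketch of a full relocation (the table, tablespace, and extent sizes are hypothetical) might be:

```sql
-- Relocate EMPLOYEE to USER_DATA, resize its extents (effective only in a
-- dictionary-managed tablespace), and skip redo logging for speed
alter table employee
  move tablespace user_data
  storage (initial 1M next 1M)
  nologging;
```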
In Oracle Enterprise Edition, alter table move enables the online keyword. This permits the movement or reorganization of a table while the original version is still available for use.
*Relocating Tables
The pages that follow show how you employ alter table move to relocate and reorganize tables.
This first example moves a table called WORK_TABLE from the USER_DATA tablespace to the LMTAB tablespace.
The initial select statement shows the original tablespace where the table was located. The final select statement shows the new location.
SQL> select owner, table_name, tablespace_name
2 from dba_tables
3 where table_name = 'WORK_TABLE';
OWNER TABLE_NAME TABLESPACE_NAME
---------- ---------- ---------------
SCOTT WORK_TABLE USER_DATA
SQL> alter table work_table move tablespace lmtab;
Table altered.
SQL> select owner, table_name, tablespace_name
2 from dba_tables
3 where table_name = 'WORK_TABLE';
OWNER TABLE_NAME TABLESPACE_NAME
---------- ---------- ---------------
SCOTT WORK_TABLE LMTAB
Although Oracle preserves a table's associated constraints, object privileges, and triggers when the table is moved from one tablespace to another, the alter table move statement does not move indexes associated with the table. The code below, in which a table with an index is moved from tablespace SYSTEM to tablespace USER_DATA, illustrates this. The index remains in the original SYSTEM tablespace.
SQL> create table tab_w_indexes
2 (col1 number primary key)
3 tablespace system;
Table created.
SQL> select owner, table_name, index_name, tablespace_name
2 from dba_indexes
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INDEX_NAME TABLESPACE_NAME
---------- -------------- ------------ ---------------
SCOTT TAB_W_INDEXES SYS_C00953 SYSTEM
SQL> alter table tab_w_indexes
2 move tablespace user_data;
Table altered.
SQL> select owner, table_name, index_name, tablespace_name
2 from dba_indexes
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INDEX_NAME TABLESPACE_NAME
---------- -------------- ------------ ---------------
SCOTT TAB_W_INDEXES SYS_C00953 SYSTEM
To move an index into a new tablespace, use the statement
alter index idxname rebuild tablespace tblspcname
The code below illustrates moving both a table and the table's index from the USER_DATA tablespace to the LMTAB tablespace.
SQL> select owner, table_name, index_name, tablespace_name
2 from dba_indexes
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INDEX_NAME TABLESPACE_NAME
---------- -------------- ------------ ---------------
SCOTT TAB_W_INDEXES SYS_C00953 USER_DATA
SQL> alter table tab_w_indexes
2 move tablespace lmtab;
Table altered.
SQL> alter index sys_c00953
2 rebuild tablespace lmtab;
Index altered.
SQL> select owner, table_name, tablespace_name
2 from dba_tables
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME TABLESPACE_NAME
---------- ------------- --------------
SCOTT TAB_W_INDEXES LMTAB
SQL> select owner, table_name, index_name, tablespace_name
2 from dba_indexes
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INDEX_NAME TABLESPACE_NAME
---------- -------------- ------------ ---------------
SCOTT TAB_W_INDEXES SYS_C00953 LMTAB
*Reorganizing Tables
Instead of moving a table to a completely new tablespace, you could just reorganize the table in its original tablespace. You can use the alter table move statement to do that. You simply omit the tablespace clause.
Oracle places the table in a new segment within the original tablespace to rebuild the storage allocation according to your specifications.
The code below reorganizes a table.
SQL> alter table tab_w_indexes
2 move storage (initial 20K next 20K);
Table altered.
SQL> select owner, table_name, initial_extent, next_extent
2 from dba_tables
3 where table_name = 'TAB_W_INDEXES';
OWNER TABLE_NAME INITIAL_EXTENT NEXT_EXTENT
---------- ------------- -------------- -----------
SCOTT TAB_W_INDEXES 20480 20480
If you need to reorganize a large table and want to improve the performance of the operation, include the nologging keyword in the alter table move statement.
You'll improve performance because the changes made by Oracle9i won't be written to the online redo log.
But remember that, without logging, the changes won't be recoverable later.
The operation of the alter table move statement requires enough space for two copies of the table to exist until the process is completed and Oracle9i can drop the table.
While the table is being moved, users can issue select statements to see data in the table, but they can't change data in the table.
*Truncating Tables
Suppose you issue a delete statement that removes many hundreds of thousands or millions of rows from a table, and commit it. Then you issue a select count(*) statement, which takes a surprisingly long time to return a count of zero rows. What happened? The full scan had to read every allocated block up to the high-water mark, because Oracle didn't reset the high-water mark after the delete statement — and what's more, it never does!
To get rid of the allocated extents that are now empty, and reset the high-water mark, while still preserving the table definition, you need to use the truncate table command. You can include an optional drop storage clause, if needed:
TRUNCATE TABLE EMPLOYEE;
TRUNCATE TABLE EMPLOYEE DROP STORAGE;
Once a table is truncated, you can't issue a rollback command to magically get the data back. Any change made to minextents after table creation will be applied to the table unless you specify the optional reuse storage clause, which preserves the current storage allocation and doesn't reset the high-water mark for the table:
TRUNCATE TABLE EMPLOYEE REUSE STORAGE;
Any associated indexes will also be truncated when you use truncate table, and any optional drop storage or reuse storage clauses will be applied to associated indexes.
Despite your inability to roll back a table truncation, Oracle does acquire a rollback segment for the job. Why?
Because if the truncate table command is interrupted, or some failure occurs, the rollback segment holds the changes made during the truncate operation so that crash recovery can complete.
*Dropping Tables
To rid yourself of a table entirely and give all allocated space back to the tablespace, issue the drop table statement. For example, to drop the EMPLOYEE table, you'd issue the following statement:
DROP TABLE EMPLOYEE;
If other tables have foreign key constraints that reference this table, you must include the cascade constraints clause:
DROP TABLE EMPLOYEE CASCADE CONSTRAINTS;
Topic 9.1 Exercises
* Exercise 1
Try relocating, truncating, and dropping a table in Oracle9i.
Step | Action |
---|---|
1 | Please Note: For the exercises in this course, you should set up an isolated Oracle9i system for practice. Do not use your production Oracle9i system for these practice exercises. Move a table into a different tablespace. Log the move process. If you're working with a dictionary-managed tablespace, modify the table's extents as you desire. |
2 | Truncate a table to remove empty extents. |
3 | Drop a table and any associated integrity constraints. |
In this lesson, you learned how to use the move option with the alter table command to relocate a table to a different tablespace.
You also learned how to use the truncate table command to reallocate empty extents.
Finally, you learned how to use the drop table command to completely eliminate a table.
Lesson 10. Dropping Columns
Oracle provides two methods for removing unneeded columns from your tables. Both employ the alter table command. One method retains the data while making the column inaccessible. The other method actually removes the column.
After completing this lesson, you should be able to:
- Mark a column unused
- Physically remove a column
Suppose you want to eliminate a table column that's no longer needed. In earlier versions of Oracle, you dumped the contents of the table to a flat file, dropped the table, re-created the table without the unneeded column, and used SQL*Loader or Pro*C to reload the records into your newly created table. Depending on the size of the table, this process could take a long time.
Oracle9i lets you drop unused columns with the alter table statement. It's easy and quick.
There are two ways to drop a column in Oracle9i:
- You can mark a column as unused. This logical method removes no data but makes the column inaccessible.
- You can physically remove a column from a table. This deletes the column and the column data.
*Marking a Column Unused
To delete a column logically but not physically, you mark the column unused. The column data is still in the table, but it can't be accessed.
Users of a table can't see an unused column. Information about the unused column doesn't appear in the output of the describe table command, and you can't query data in an unused column.
The syntax for marking a column as unused is
alter table tblname set unused column colname
To see which tables have unused columns, you can query the dictionary view DBA_UNUSED_COL_TABS. This view provides a list of all tables with unused columns.
The COUNT column in this view indicates how many unused columns there are for each table.
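A quick look at this view might be:

```sql
-- List every table that has unused columns, and how many each has
select * from dba_unused_col_tabs;
```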
*Dropping a Column
To physically drop a column from a table, issue the statement
alter table tblname drop column colname
This statement removes all data from the column and eliminates the column from the table definition.
This operation takes more time than marking a column as unused. Oracle has to go through all blocks of the table and actually remove the column data to reclaim the space used by that column.
*Marking and Then Dropping a Column
You might want to mark columns unused and then drop them later. To do this, first issue the statement
alter table tblname set unused column colname
Then, at your leisure, issue the statement
alter table tblname drop unused columns
Marking a column as unused and then dropping the unused column lets you take away access to the column immediately. Then, when you have DBA maintenance time scheduled, you can remove the column.
The code below provides an example of dropping columns in Oracle9i through the alter table statement.
This example immediately drops the SUBSIDIARY column and all its contents from the EMPLOYEE table.
SQL> alter table employee drop column subsidiary;
Table altered.
This example marks a specific column in a table as unused and then drops all unused columns.
The first statement instructs Oracle to mark the SUBSIDIARY column unused. No information will be removed from the column — Oracle simply pretends the column isn't there.
The second statement (presumably issued at some later time) drops any columns (such as SUBSIDIARY) which have been marked as unused in the table. The unused columns and their data will be deleted.
SQL> alter table employee set unused column subsidiary;
Table altered.
SQL> alter table employee drop unused columns;
Table altered.
You can add three optional clauses to the alter table tblname drop column colname statement. These clauses are added after colname:
Clause | Function |
---|---|
cascade constraints | Causes any foreign keys referring to the column being dropped, or any constraints on the column, to be eliminated along with that column. |
invalidate | Causes any objects related to the table to be marked invalid. Objects that relate to a table in this fashion include triggers, views, and PL/SQL blocks that refer to a table. |
checkpoint num | Lets you reduce the amount of space used in a rollback segment by having Oracle9i perform a checkpoint every num number of rows. For the duration of the alter table drop column operation, the table shows a status of INVALID. If the operation terminates abnormally, Oracle can roll back only to the most recent checkpoint and the table will remain in an INVALID state. However, you can resume the removal of the column after instance recovery is made or when you reconnect using the alter table tblname drop columns continue statement. |
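Putting these clauses together, here is a sketch that drops the lesson's SUBSIDIARY column along with any constraints or foreign keys on it, checkpointing every 10,000 rows:

```sql
alter table employee drop column subsidiary
  cascade constraints
  checkpoint 10000;
```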
Topic 10.1 Exercises
* Exercise 1
Try dropping columns from a table in Oracle9i.
Step | Action |
---|---|
1 | Please Note: For the exercises in this course, you should set up an isolated Oracle9i system for practice. Do not use your production Oracle9i system for these practice exercises. Mark several columns in a table as unused. |
2 | Use a data dictionary view to identify the number of unused columns in your tables. |
3 | Drop the unused columns from the table. |
4 | Drop columns which are still in active use. |
In this lesson, you learned two techniques for removing columns from tables. Both use the alter table command.
In one method, you mark a column unused. This retains the data of the column for later use.
The second method physically removes a column from a table, so the data contained in that column is gone.
You also learned about optional clauses you can add to the alter table command when you drop a column. These clauses let you drop foreign keys, invalidate related objects, and set a checkpoint.
Lesson 11. Course In Review
This lesson includes questions and material covering the entire course. You can use it to review and study for the exam.
Topic 11.1 Review Notes
*Review Notes
1. | Regular tables are great for general data storage needs. They're the most flexible database object in Oracle for storing data. |
2. | Temporary tables are useful for housing the intermediate datasets of complex processing for the duration of a transaction or session. |
3. | Partitioned tables let you reference individual segments of data in a larger table directly, as though they were a miniature copy of the table containing only a specific range of that table's data. |
4. | To get at data quickly, you can create indexes on a table. Index-organized tables house data in an index structure to reduce storage overhead; they don't separate the table from the index. |
5. | Usually a segment houses information for only one table. Clusters permit the storage of more than one table's data inside a single segment. This is useful for situations in which users must access two or more tables simultaneously. |
6. | There are two general categories for datatypes — built-in and user-defined. Built-in types include scalar, reference, and collection datatypes. User-defined types are specialized composite datatypes you construct from existing built-in types. |
7. | Be sure you can describe the difference between the LONG datatype and LOB datatypes available in Oracle for the storage of very large amounts of data in a single column. LOB types offer several advantages and are the preferred type. The LONG type is provided for backward compatibility. |
8. | ROWIDs point to the location of a row on disk with respect to a file, block, row number, and data object number. |
9. | Versions of Oracle prior to 8.0.3 use a restricted ROWID format. Current versions still use this format for very limited purposes. |
10. | Starting with Oracle 8.0.3, Oracle has employed an extended ROWID format which lets the database address more space. This dramatically increases the size an Oracle database can achieve. |
11. | Blocks consist of three components — a block header, row data, and free space left over to accommodate existing row growth. |
12. | Rows consist of a header, column data, and a field indicating the width of each column. |
13. | If no data is stored in the column for a particular row, Oracle doesn't waste space in the block indicating that fact. |
14. | Be sure you can create tables with appropriate storage settings. |
15. | Be able to describe the concepts underlying temporary tables. Know the purposes they serve with respect to session-private storage of information, global availability for all users, and the retention of data for either the length of a transaction or the length of a session. |
16. | Understand how to create temporary tables in Oracle9i, including how to define the period of time the data in the temporary table will be retained. |
17. | Know how to manually allocate more extents to a table. |
18. | Know where to look to determine if rows are chained or migrated in a table. |
19. | Be sure you understand how a high-water mark is set. |
20. | Understand what fragmentation is. |
21. | Know the statement used for moving a table to another tablespace. Understand how this statement is similar in function to the create table as select statement. |
22. | Be sure you know how the alter index rebuild tablespace statement is used to move indexes associated with tables. |
23. | Be sure you know the space considerations involved in relocating or reorganizing a table. Know the trade-offs inherent in the use of the nologging keyword in the alter table move statement. |
24. | Understand both uses of the alter table statement for dropping columns — one using the set unused column and drop unused columns syntax and the other with the drop column syntax. |
Topic 11.2 Exam Preparation
*Exam Preparation
This page contains a bulleted list of fast facts to review — crib notes for the days leading up to the exam.
- There are four types of tables: regular tables, partitioned tables, cluster tables, and index-organized tables.
- There are two categories of datatypes: user-defined and built-in.
- There are three classes of built-in types: scalar, collection, and relationship types.
- The regular-size scalar types include CHAR, NCHAR, VARCHAR2, NVARCHAR2, DATE, RAW, ROWID, and NUMBER.
- The large-size scalar types include LONG and LONG RAW from Oracle 7, and CLOB, NCLOB, BLOB, and BFILE.
- The collection types include VARRAY, which is a variable-length array, and TABLE, which is a nested table type.
- The relationship type REF is a pointer to data in another table.
- Collection and relationship types require the object option.
- The components of a ROWID can be described by BRF or OBRRF.
- Row migration occurs when a row becomes too large for a block.
- Chaining occurs when a row is stored in many blocks.