Saturday, August 28, 2010

Ensuring Table With Only One Row in Oracle Using Virtual Column

There was a discussion on the OTN General Database forum in which the OP asked how to create a table and restrict it to just one row. Here is my attempt at it.

I created a table with two columns, where the second column is a virtual column that always evaluates to the constant 1, and then created a unique index on it. On every insertion this column evaluates to 1, so the unique index (which is effectively a function-based index on the virtual column) ensures that no more than one row can exist in the table.


oracle@test # sqlplus /nolog
 
SQL*Plus: Release 11.1.0.7.0 - Production on Sat Aug 28 19:09:16 2010
 
Copyright (c) 1982, 2008, Oracle.  All rights reserved.
 
idle> conn test/test
Connected.
test@test> create table t1
 (c1 number, c2 generated always as (1) virtual);
 
Table created.
 
test@test> create unique index idx1 on t1(c2);
 
Index created.
 
test@test> insert into t1(c1) values (1);
 
1 row created.
 
test@test> commit;
 
Commit complete.
 
test@test> insert into t1(c1) values (1);
insert into t1(c1) values (1)
*
ERROR at line 1:
ORA-00001: unique constraint (TEST.IDX1) violated
 
 
test@test> insert into t1(c1) values (2);
insert into t1(c1) values (2)
*
ERROR at line 1:
ORA-00001: unique constraint (TEST.IDX1) violated

Friday, August 27, 2010

Who is a Successful DBA?

What are the traits of a successful database administrator? How do we describe a successful DBA?

A DBA who

  • keeps everything secret
  • exaggerates his troubleshooting
  • considers users and developers inferior and disregards management
  • likes mystery and non-transparency
  • likes to brag about hidden parameters and esoteric rules of thumb
  • doesn't share knowledge and experience
  • takes credit for successes and, in case of failure, blames the network or the developers
  • refuses to learn new concepts and techniques and likes the status quo
Or a DBA who

  • works as a bridge between the network team, developers, users, and management
  • facilitates the end users
  • believes in documentation, auditing, and transparency
  • is really passionate about his data and the people who manipulate it
  • enables developers to exploit the full potential of the database
  • shares knowledge, takes part in the community, and helps others
  • admits blunders and doesn't hesitate to ask others for help
  • is daring, visionary, and change-oriented

If you want to add something, please feel free.

Friday, August 20, 2010

ORA-29701: unable to connect to Cluster Manager

Just imagine: it's your third day at a new job, and you come to know that somebody has deleted some of the Oracle binaries from the Oracle Home's bin directory.

It's a 6 TB data warehouse for a telco, growing rapidly by about 20 GB daily. It is a 10.2.0.3 database on Solaris 10 on a SPARC machine using ASM. I manage this database remotely, as it is at the client site in another country. The database was running when the rm command was inadvertently run in the bin directory, so only a few of the binaries got deleted. The resilient Oracle kept running without a frown.

I wanted to restart the instance and the whole server to see whether the database would come up, but the client didn't let me do that. They also didn't have any spare server from which I could simply copy the binaries to check whether they would work. I pressed hard for a restart, but in vain.

If something can go wrong, it will go wrong. Murphy was so right.

A couple of days ago, there was a power breakdown at the client's data center, and the server got shut down. I mentally prepared myself for the worst. I had the list of missing binaries, so on my laptop I installed Solaris 10 on VMware, installed the Oracle 10g base release, and got the binaries, though I was not sure whether Oracle 10g binaries from x86 Solaris would work on SPARC Solaris.

After the server restart, I first tried to bring up ASM and the database without the copied binaries. When I tried to start ASM, it gave me this error:

ORA-29701: unable to connect to Cluster Manager

The logs showed that it was searching for the crsctl binary in the bin directory, and it wasn't there.

I copied crsctl from the x86 installation into the ORACLE_HOME/bin directory on the SPARC box and tried starting the ASM instance again.

It started like a charm :)

Thursday, August 19, 2010

How Oracle Enabled Me to Help Flood Victims in Pakistan

Pakistan, where I live, is going through the worst of floods. Floods in Pakistan are not routine, and these floods have no precedent. More than 20 million people have been displaced, thousands have died, thousands are injured, and the material loss is beyond estimation.

The flooded area is about 70 km from my home, so fortunately I am at a safe distance. Yesterday I arranged a small vehicle loaded with mineral water, biscuits, some tents, and old clothes and went to the affected area. The scene at the flood site was heart-wrenching. I have seen natural disasters like the 2005 earthquake in Pakistan, but this calamity is unprecedented.

I was able to help my countrymen because I earn my living as an Oracle DBA, so Oracle is indirectly doing a huge service to far-flung areas of the world.

Thank you Oracle.

Tuesday, August 17, 2010

Why I Joined Battle Against Any Guess (BAAG)

From the Pythian blog, I came to know about the Battle Against Any Guess (BAAG).

I instantly wanted to join the BAAG. When you register for BAAG, you are asked a question:

Why do you want to join the BAAG?

My reply was:

From experience I have come to know that methodical diagnosis is the key to resolving performance problems in databases. Just as we cannot prescribe medicine to humans on a mere hunch, we shouldn't apply solutions to database problems by guesswork.

When I started my career back in 2001, performance tuning was voodoo. In voodoo (Kala Jadoo in Pakistan), everything is a mystery. You follow haphazard procedures, you perform lots of mysterious steps, you tinker with lots of elements to see which one might work, and so on.

That was what Oracle performance tuning was all about. The voodoo master, the DBA, would check the ratios and then, based on hunches, create some indexes, drop some indexes, recreate some indexes, increase or decrease the sizes of caches and buffers, apply some unknown hidden parameters, add more log files and resize them, set parameters like FREELISTS, PCTUSED, PCTFREE, and INITRANS all at once, and then ask the user whether the database was fast now.

If the user complained that the database was still slow, the DBA would put all the blame on the user's ignorance, the network team, or the server's hardware. And if the user said that things were fast this time, the DBA would just smile and wonder which change had done the trick.

Thursday, August 12, 2010

Deletion of LOB data Doesn't Free the Space

Just learned that in order to reclaim space after deleting LOB data, you have to do:


alter table lob_test modify lob (data) (shrink space);


Also came to know that the undo data for LOBs is stored in the LOB segment itself.

Another method is to use the following command (with placeholders for your table, LOB column, and the amount of space to keep):

ALTER TABLE <table_name> MODIFY LOB (<lob_column>) (DEALLOCATE UNUSED KEEP <n>M);

If your LOB table is huge, so that the command takes ages to complete and consumes lots of CPU, you can break it down into phases by lowering the KEEP value step by step, deallocating the unused space in stages.
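A minimal sketch of that phased approach, reusing the lob_test table and its data column from the example above (the KEEP sizes are purely illustrative and would need to match your real segment size):

-- Deallocate unused space in stages by lowering KEEP each time,
-- so each statement has a smaller amount of space to release.
alter table lob_test modify lob (data) (deallocate unused keep 2000M);
alter table lob_test modify lob (data) (deallocate unused keep 1000M);
alter table lob_test modify lob (data) (deallocate unused keep 500M);
-- Final pass: release everything above the LOB segment's high-water mark.
alter table lob_test modify lob (data) (deallocate unused);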

Sysdba

The most powerful credential in an Oracle database is SYSDBA. Any user connecting as SYSDBA is effectively SYS. These most powerful credentials are not stored within the Oracle database itself.

This is not because Oracle doesn't trust itself with its most powerful credentials, but because they are needed to access the instance and database when the database is not up and open.

There are two types of external authentication for SYSDBA:

1. OS authentication - the OS user logged on to the server is a member of a particular group on the OS (usually dba or oradba). If you are logged in as such a user, you can connect to the instance without even specifying a password.

SQL> connect / as sysdba
Connected.
SQL> show user
USER is "SYS"

2. The second option is password file authentication.

SQL> connect sys/password@orcl as sysdba
Connected.
SQL> show user
USER is "SYS"
SQL> alter user sys identified by secure;

User altered.

SQL> connect sys/secure@orcl as sysdba
Connected.
SQL> show user
USER is "SYS"

Changing the SYS password changes the password in the password file, and users wanting to log on have to know the new password.

We can also add new users to the password file:

SQL> grant sysdba to hr;

Grant succeeded.

SQL> connect hr/hr@orcl as sysdba
Connected.
SQL> show user
USER is "SYS"
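To double-check who is in the password file, you can query V$PWFILE_USERS, which lists each user along with their SYSDBA and SYSOPER flags (a quick sketch, run from a suitably privileged session):

SQL> select username, sysdba, sysoper from v$pwfile_users;

After the grant above, both SYS and HR should appear in the output.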

Wednesday, August 11, 2010

Oracle Data Warehouse DBA Top Skills

Please see if you could add anything to the following skill set for a DBA who has to manage a data warehouse in Oracle.

  • Dimensional Modeling
  • Concepts of Oracle architecture
  • Partitioning
  • Compression
  • Indexing
  • External Tables
  • SQL*Loader
  • Parallelism
  • DBMS_STATS
  • SCHEDULER
  • Resource Manager
  • ASM
  • RMAN
  • Materialized Views
  • Data Pump
  • Tuning of ETL, e.g. ensuring set-based operations, NOLOGGING, parallelism, etc. (a sketch follows after this list)
  • Ability to properly assess hardware (RAM, I/O, CPU) according to workload and data volume
If you think anything else should be there, please add your take in comments.
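For the ETL tuning item above, here is a minimal sketch of a set-based, direct-path, parallel load (the staging and fact table names are purely illustrative):

-- Direct-path (APPEND), parallel, set-based insert into a NOLOGGING target.
alter table sales_fact nologging;
alter session enable parallel dml;

insert /*+ append parallel(f, 8) */ into sales_fact f
select /*+ parallel(s, 8) */ sale_date, product_id, customer_id, amount
from   sales_staging s;

commit;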

Why AMM Doesn't work with HugePages in Oracle 11g

Memory in computers is managed by the operating system in terms of pages. On 32-bit systems, the default page size is 4 KB.

On 64-bit systems there are also HugePages, which allow a page size of 2 MB.

It's generally better to use HugePages for an Oracle database, because more of the SGA can be crammed into fewer pages (and the page tables stay much smaller).

But the Automatic Memory Management (AMM) feature of Oracle 11g is incompatible with HugePages.

If it comes to choosing between HugePages and AMM, I always opt for HugePages, because it is more efficient in memory management.

By default, the OS mounts tmpfs on /dev/shm, and AMM allocates the SGA through /dev/shm.

For HugePages, ramfs is mounted over /dev/shm, and that is not supported by AMM for now.
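If HugePages wins that trade-off, AMM has to be disabled and the traditional SGA/PGA targets used instead. A minimal sketch, assuming a move to ASMM (the sizes are purely illustrative and should match the HugePages pool configured on the server):

-- Disable AMM and fall back to ASMM so the SGA can be backed by HugePages.
alter system set memory_target = 0 scope = spfile;
alter system set sga_target = 8G scope = spfile;
alter system set pga_aggregate_target = 2G scope = spfile;
-- Bounce the instance for the change to take effect.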

A very fine discussion is here.

Tuesday, August 10, 2010

Sohaib Abbasi, A Former SVP of Oracle from Pakistan

Yes, he is an Informatica guy now, but he spent 20 years at Oracle and led Oracle Tools and Oracle Education with a bang. He joined Oracle when it had US $4 million in annual revenue, and when he left, Oracle's annual revenue was US $9.5 billion.

It was gut-wrenching for me to learn that he came back to Pakistan to contribute to education, but went back due to red tape and lack of transparency. Understandably, that is the biggest reason why Pakistan's education system is not something to write home about.

Here is a fine Wikipedia article about the great Sohaib Abbasi, with some very good external links.

I came to know about him while reading this article.

Saturday, August 7, 2010

Key Highlights of Oracle Exadata

Oracle has been doing databases for more than three decades now. Over the course of those decades, they have been busy accumulating hundreds of years of collective experience from the industry. The culmination of their hard work, innovation, and vision seems to be the Exadata database machine.

Some of the key highlights of Exadata are:

Exadata is database servers plus storage servers.

The load is divided between the database servers and the storage servers.

Storage in Exadata is no longer a dumb dump of bits. Processing is also done on the storage side by the storage servers, so only the results of queries are returned to the database servers and presented to the user. Caching provides added performance, as the caches in Exadata are huge and very fast.

Exadata's real beauty is that it exploits the hardware resources very smartly using world-class software. The Exadata software does column projection, predicate filtering, and Bloom filtering during joins to make retrieval of data lightning fast. Here 'lightning fast' is not a marketing term; I literally mean it.

Another blockbuster feature is storage indexes. Yes, you still need indexes even in Exadata, but these are perhaps the 'true' indexes, as they fulfill the index definition completely: they make data fetching fast, they don't take much space, and their maintenance is not a headache. In Exadata storage, space is divided into 1 MB regions, and the storage indexes keep the low and high values of the columns in each region. The index is therefore aware of the data distribution at a very fine level, and regions that cannot contain the requested values are simply skipped, so the data comes back in a jiffy.

Just as a mention of the bride is incomplete without the bridal dress, a discussion of Exadata is incomplete without mentioning Hybrid Columnar Compression (HCC). Compression is one of the most lovable weapons of DBAs who manage static data. Compression not only saves space on disk and in the buffer cache, it also reduces I/O. HCC groups rows into compression units and stores the data column by column within each unit, which compresses far better than row-by-row storage.

An intelligent, non-volatile, database-aware cache, the Exadata Flash Cache works hand in hand with the database buffer cache. Due to its huge size (up to 5 TB), the flash cache often satisfies queries on its own.

There are many, many other working and proven features of Exadata. Larry Ellison is not excited without reason; he really has a point there. If enterprises and corporations were individuals, then Exadata would be their iPhone and iPad: cool, useful, expensive, and they have to have it.

Thursday, August 5, 2010

_smm_max_size and _pga_max_size

I always try not to tinker with hidden parameters and only use them if Oracle Support asks me to do so.

But I just stumbled upon a great blog post by one of my favorite Oracle bloggers, Charles Hooper, in which he discusses two valuable and worth-knowing hidden parameters.

_smm_max_size= Maximum workarea size for one session

_pga_max_size= Maximum PGA size for one session

As PGA_AGGREGATE_TARGET is a soft limit across all sessions, any runaway session can consume and cross that limit. Also, whenever the sort area within the PGA is exhausted, the temporary tablespace is used.

By using _smm_max_size, you can restrict how much memory a session may use for sorting within its PGA.

By using _pga_max_size, you can restrict how much memory a session may use for its whole PGA. That keeps PGA memory available for other sessions too.
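To see what these two parameters are currently set to, one common sketch is to query the underlying x$ tables as SYS, since hidden parameters don't appear in V$PARAMETER:

-- Run as SYS: hidden parameters are visible only through the x$ tables.
select a.ksppinm  as parameter,
       c.ksppstvl as instance_value,
       a.ksppdesc as description
from   x$ksppi a, x$ksppsv c
where  a.indx = c.indx
and    a.ksppinm in ('_smm_max_size', '_pga_max_size');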