You are a new Oracle DBA and you have to delete data from a production table. You have all the necessary change-management approvals (if you are not familiar with change management, look up the ITIL concepts), and your boss agrees. Then you discover that the table is really huge, with billions of records. What would you do? You have a couple of options.
Option 1: Go ahead and execute the DELETE statement. After many hours of execution you might get an error that the undo tablespace is full, or some other space error; the DELETE fails, it then takes hours to roll back all the changes, and you have to explain to your boss what you did for the last three days.
Option 2: Before starting the DELETE, query the DBA_SEGMENTS view to find the size of the table. If the table is big, run a SELECT to find how many records your DELETE would remove, and compare that with the total number of records in the table. If you are going to delete almost all the records, it is better to copy the good records into a temporary table, truncate the original table, and then copy the good records back.
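A minimal sketch of option 2, assuming a hypothetical table `big_table` with a `created_date` column and a "keep the last 30 days" rule (both are illustrative assumptions, not from the original post):

```sql
-- Copy the rows you want to keep into a work table.
CREATE TABLE big_table_keep AS
  SELECT * FROM big_table
  WHERE created_date >= SYSDATE - 30;

-- TRUNCATE is DDL: it is fast and generates almost no undo,
-- but it is also irreversible, so verify the copy first.
TRUNCATE TABLE big_table;

-- Direct-path insert the kept rows back.
INSERT /*+ APPEND */ INTO big_table
  SELECT * FROM big_table_keep;
COMMIT;

DROP TABLE big_table_keep;
```

Note that TRUNCATE cannot be rolled back, and during the window between the truncate and the insert the table is empty, so this approach needs an outage or an exclusive maintenance window on the table.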
Option 3: Write a PL/SQL procedure and use the FORALL bulk-delete feature to delete 1000, 2000, or 3000 rows at a stretch. You definitely need time for this, so explain to your manager that the table is very big and that you have to write a PL/SQL procedure. Test it on your test server first, then execute it on production.
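A sketch of option 3, again assuming the hypothetical `big_table` / `created_date` names; the batch size of 2000 is just one of the values mentioned above:

```sql
-- Batched bulk delete: fetch ROWIDs in chunks, delete with FORALL,
-- and commit each batch so undo usage stays bounded.
DECLARE
  TYPE t_rowid_tab IS TABLE OF ROWID;
  l_rowids t_rowid_tab;
  CURSOR c_old IS
    SELECT rowid FROM big_table
    WHERE created_date < SYSDATE - 30;
BEGIN
  OPEN c_old;
  LOOP
    FETCH c_old BULK COLLECT INTO l_rowids LIMIT 2000;
    EXIT WHEN l_rowids.COUNT = 0;

    FORALL i IN 1 .. l_rowids.COUNT
      DELETE FROM big_table WHERE rowid = l_rowids(i);

    COMMIT;  -- keeps each transaction small
  END LOOP;
  CLOSE c_old;
END;
/
```

One caveat: committing across fetches from an open cursor can raise ORA-01555 ("snapshot too old") on a busy system, which is one more reason to rehearse this on the test server before touching production.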
The discussion below illustrates these points.
Hello all,
The problem of deleting a large amount of data from the database has come up several times in the forum, but none of those threads matches my problem.
I have a table to which ca. 1,700,000 records are appended every day. Every day roughly the same number of the oldest records should also be deleted, so the total number of records in the table stays constant at ca. 0.6 billion. There are no dependencies between this table and any other Oracle object, and the only operations performed on it are adding new records, removing the oldest ones, and selecting. One column of the table is indexed, so adding or removing data takes some time to update the index, but that is not the issue.
The problem is deleting the oldest records. When I simply issue a DELETE command I get a message that the undo tablespace is too small, transaction rollback would not be possible, and therefore the records cannot be deleted. This is a normal and expected effect, because Oracle's general philosophy is full data recovery. But in my particular case this recovery policy is not needed: I simply want to irrecoverably delete a large amount of data. If something fails during the transaction, that is not a problem, because such a failed operation can easily be corrected at the application (not Oracle) level.
I couldn't find on the web how to "switch off" rollback, undo, or whatever it is called. Oracle gurus told me it is probably not possible and that the only solution is to increase the undo tablespace. But I can't believe that Oracle has no mechanism for ultimate, irrecoverable data deletion. Deleting records in smaller sets, of 100,000 for example, takes too long because the indexes must be updated after every commit. Ideally, the oldest records would be deleted, the new ones added, and only then would the index be updated and the whole operation committed.
Do you have any idea how to achieve this? Or should I organize my data in a different way?
Any hints will be greatly appreciated.
Greg