Good Oracle DBA book
From: 14/05/2001 03:22:56
To: Christian Cote (Les Logiciels Onoma Inc., Longueuil, Quebec, Canada), 11/05/2001 23:09:09
Forum: Oracle
Category: Other / Miscellaneous
Thread ID: 00506563
Message ID: 00506729
A few suggestions:

1. You may want to copy out the rows you want to keep, truncate the table, and then reinsert them.

i.e.:
-- copy the rows you want to keep
create table mytable_bu as select * from mytable where blah=blah;
-- truncate releases the space immediately instead of logging row-by-row deletes
truncate table mytable;
-- put the kept rows back and clean up
insert into mytable select * from mytable_bu;
drop table mytable_bu;

The problem is likely that repeated imports and deletes have fragmented
the data across the disks.

2. Make an export file (exp) containing only the rows you need.
Then you can drop the table, recreate it, and import your rows again;
this will release the fragmented disk space created by the deletes.

You might set things up this way so that on each load cycle you can simply truncate the table and import the rows you want; a rough sketch of the commands follows.
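A minimal sketch of that round trip, assuming a Unix shell and the classic exp/imp utilities; scott/tiger, mytable, and the blah=blah filter are placeholders for your own connection, table, and criteria (the query parameter requires exp from 8i onwards, and a parameter file avoids shell quoting problems):

# keep.par contains one line:  query="where blah=blah"
exp scott/tiger tables=mytable file=mytable.dmp parfile=keep.par
# then, from SQL*Plus:  drop table mytable;
# imp recreates the table from the dump and loads the exported rows back
imp scott/tiger file=mytable.dmp tables=mytable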

You may also see a benefit if you index the column(s) in your DELETE's WHERE clause,
if you have not already done so. For example:
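(The column and index names here are made up:)

create index mytable_blah_idx on mytable(blah);
-- the delete can now locate rows through the index instead of a full table scan
delete from mytable where blah = 'some_value';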

In cases like this I often just drop the table and import it again, which is much faster.


>Hi,
>I am on a project that uses Oracle as the backend. I need more knowledge of how to administer an Oracle server on Unix. The server we use for development is not well optimised, and I would like to help the DBA in his task of fine-tuning it.
>
>The project consists of loading a lot of rows (100-500K) into tables several times. After all the loads, these tables will contain several million rows of data. The problem we face right now is that it takes an eternity to delete rows that we previously loaded. Since we are not doing any OLTP (we are actually doing ETL for a data warehouse), we need to be able to delete the loaded rows very fast, and NO, we cannot TRUNCATE the tables since we do not want to delete all the data in them.
>
>I created a procedure that deletes 20K rows at a time and COMMITs every 20K rows. Deleting 4 million rows takes something like 10-12 hours; I assume that is not normal :-). Our server has 2 processors and 2GB of RAM, and only 4-5 users query it. Anyway, if somebody knows where I could find any information (or a forum more specialized than this one, which I like very much), or has book suggestions, I would really appreciate it.
>
>Thank you for your help,
>Christian
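
(For reference, a batched delete like the one described above usually looks something like this in PL/SQL; the table, filter, and batch size are placeholders:)

begin
  loop
    delete from mytable
     where blah = 'some_value'
       and rownum <= 20000;      -- remove at most 20K rows per pass
    exit when sql%rowcount = 0;  -- stop once nothing is left to delete
    commit;                      -- commit each batch to keep rollback segments small
  end loop;
  commit;
end;
/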