>Sounds like you need to split your database into two databases: one for transactional data, the other for reporting. The transactional DB has almost no indexes and is used simply for CRUD; it is optimized for speed. You can set up a job that reads updated data from the transactional DB and applies it to the reporting DB. Updates there can be slower, and additional indexes can be added to speed up reporting.
>
>I will say that 145 indexes sounds like too many. I can't imagine a scenario where that many are needed.
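The split the quoted post suggests can be sketched with a periodic sync job that copies rows modified since the last run from the lean transactional store into the index-heavy reporting store. This is a minimal sketch using SQLite and a hypothetical `orders` table with an `updated_at` watermark column; the table and index names are made up for illustration:

```python
import sqlite3

# Two separate databases: transactional (lean) and reporting (indexed).
txn = sqlite3.connect(":memory:")
rpt = sqlite3.connect(":memory:")

txn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, updated_at INTEGER)")
rpt.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, updated_at INTEGER)")
# The reporting side carries the extra indexes; the transactional side stays lean.
rpt.execute("CREATE INDEX idx_orders_total ON orders (total)")

txn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 10.0, 100), (2, 25.0, 105), (3, 7.5, 110)])

def sync(last_seen: int) -> int:
    """Copy rows updated after last_seen into the reporting DB; return the new watermark."""
    rows = txn.execute(
        "SELECT id, total, updated_at FROM orders WHERE updated_at > ?",
        (last_seen,),
    ).fetchall()
    rpt.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    rpt.commit()
    return max((r[2] for r in rows), default=last_seen)

watermark = sync(0)                 # first run copies everything
print(watermark)                    # prints: 110
print(rpt.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # prints: 3
```

In practice the watermark would be persisted between runs, and the copy batched, but the shape of the job is the same: read past the watermark, upsert into reporting, advance the watermark.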
It's a huge nationwide database of insurance claims, with very large tables and a few hundred fields per table. Users search on all of those fields, and worse, on several of them at the same time, which requires compound indexes for the most commonly used combinations.
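To illustrate why multi-field searches push toward compound indexes: a single index over the combined columns lets the planner satisfy the whole filter in one index lookup. A minimal sketch with SQLite, using a hypothetical `claims` table and index name (not the actual schema from the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE claims (
        claim_id INTEGER PRIMARY KEY,
        state TEXT,
        policy_type TEXT,
        claim_date TEXT,
        amount REAL
    )
""")
# One compound index covering the most common multi-field filter
# (state AND policy_type, optionally narrowed by claim_date),
# instead of three separate single-column indexes.
cur.execute("CREATE INDEX idx_state_type_date ON claims (state, policy_type, claim_date)")

cur.execute("INSERT INTO claims VALUES (1, 'TX', 'auto', '2024-01-15', 1200.0)")
cur.execute("INSERT INTO claims VALUES (2, 'TX', 'home', '2024-02-01', 5400.0)")
conn.commit()

# EXPLAIN QUERY PLAN confirms the combined filter is served by the compound index.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM claims WHERE state = ? AND policy_type = ?",
    ("TX", "auto"),
).fetchall()
print(plan)  # the plan detail names idx_state_type_date
```

The trade-off the post describes is exactly this: every such index speeds the searches but adds write cost, and with hundreds of field combinations the index count climbs fast.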
We're talking about 400 tables, 5,000 fields, 500 lists, 50 million hits per day, etc.
Overall, the writes are not what would be killing the system, more or less, but I was curious about techniques that could ease the load in that direction.
The application requires real-time access, as an enormous amount of data is being changed every second. I'm still not sure splitting the database would be applicable.