I have found a way that works very well.
I adjusted the framework to recycle records from a table by first collecting the primary keys, then using a For/Next loop to delete them one by one. I should have done that from the very beginning; instead, we kept trying to work around the massive delete, and that turned out not to be the real factor. The code deletes about 1,000 records per second, so deleting 5,000 records every 15 minutes takes about 5 seconds.
1. I did not have to do anything about the fragmentation
2. I did not have to reindex from scratch
3. I did not have to apply any change to the database from SSMS to clear the backlog
4. I ran the task as is, and on the first run it collected 1.9 million primary keys in 10 seconds
5. This has no impact on the Web site or on other commands run from SSMS
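For anyone curious, here is a minimal sketch of the pattern, using Python's sqlite3 in place of SQL Server just to keep it self-contained; the table and column names are made up for illustration:

```python
import sqlite3

# In-memory database standing in for the real SQL Server table;
# "log_records" and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO log_records (payload) VALUES (?)",
    [("row %d" % i,) for i in range(5000)],
)
conn.commit()

# Step 1: collect the primary keys of the records to recycle.
keys = [row[0] for row in conn.execute("SELECT id FROM log_records")]

# Step 2: delete them one by one, each as its own small statement,
# so the engine never has to process one massive multi-row delete.
for key in keys:
    conn.execute("DELETE FROM log_records WHERE id = ?", (key,))
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM log_records").fetchone()[0]
print(remaining)  # 0
```

Each delete touches a single row by its primary key, which is why the per-statement cost stays low and predictable.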
This is much safer for the SQL Server core engine: it is barely CPU-intensive, and the engine never has to handle thousands of records in a single delete and grow the transaction log accordingly. I see no difference compared to before; the overall execution timeframe is pretty much the same.
Thanks for your help; it got me to consider various possibilities and to look at the related metrics.