I initially wrote this post with a great deal of information, but it came across as snarky and condescending, which was not my intention. I've rewritten it in the hope that it's succinct and distilled down to the minimal amount of information needed to make an informed decision. This post is specifically geared towards B-Tree indexes, as In-Memory indexes are only rebuilt at startup (or when a new one is added), and Columnstore indexes have their own set of rules, for which Niko Neugebauer has a fantastic write-up.
One of the issues I end up having to explain to DBAs and developers alike is that index fragmentation really doesn't mean a whole lot. In fact, it's somewhat asinine to rebuild or reorganize indexes on a nightly basis, even when that's driven by something like fragmentation level, whether we're considering fragmentation at the filesystem layer or pages that have a fair amount of unused space on them. Yes, this means that implementing things such as the built-in index maintenance tasks in SQL Server 2014+, Adaptive Index Defrag, and Ola's Maintenance Scripts is pretty much worthless at best and downright disastrous at worst.
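For context, this is the kind of fragmentation-driven decision logic those maintenance tools typically implement. A minimal sketch, using the standard `sys.dm_db_index_physical_stats` DMV; the 5%/30% thresholds are the commonly cited defaults, not recommendations from this post:

```sql
-- Classic "rebuild or reorganize based on fragmentation" logic.
-- This is exactly the pattern this post argues against applying blindly.
SELECT OBJECT_NAME(ips.object_id)          AS table_name,
       i.name                              AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'LEAVE ALONE'
       END                                 AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.index_id > 0;  -- skip heaps
```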
Index fragmentation at a filesystem level really only hurts two things. The first is read-ahead, which functions best when pages are contiguous on disk. That's all well and good if you're using read-ahead, which, for query performance, should only be needed a handful of times unless memory thrashing is occurring. Note that there are many other places in the engine where the scatter-gather API is used. The second place contiguity can help is when the LazyWriter [1,2] is invoked, since it can do scattered reads or gathered writes; hopefully this isn't happening too often on your systems.
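Before worrying about read-ahead, it's worth checking whether it's even in play for your workload. One simple way is `SET STATISTICS IO`, which reports read-ahead reads separately in the message output (the table name below is hypothetical):

```sql
-- "read-ahead reads N" in the Messages tab shows pages brought in
-- by the read-ahead mechanism for this statement.
SET STATISTICS IO ON;

SELECT COUNT(*)
FROM dbo.SomeLargeTable;  -- hypothetical table

SET STATISTICS IO OFF;
```

If read-ahead reads are consistently zero or near-zero because the working set fits in the buffer pool, contiguity on disk buys you very little.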
Beyond the internal workings, the other upsides include lower disk and buffer pool (memory) usage. This is all well and good, but does it really make sense to churn I/O and CPU for gains that are sometimes minimal, if there are any at all?
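The disk and memory savings come from page density, i.e. how much of each page is actually used. You can measure that directly rather than inferring it from a fragmentation percentage; a sketch using the `SAMPLED` mode of the same DMV (the density column is not populated in `LIMITED` mode):

```sql
-- avg_page_space_used_in_percent is the in-page measure of wasted space;
-- low values here are what actually costs you buffer pool and disk.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       ips.index_id,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
WHERE ips.index_id > 0
ORDER BY ips.page_count DESC;
```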
The downsides to all this maintenance can be negligible if you don't have large databases, have beefy servers with robust I/O, and have plenty of idle time during the day or night in which to run these server-killing maintenance routines.
This doesn't just affect the server executing the maintenance; it can have disastrous downstream effects. Since the on-disk structures change entirely, every changed extent must be marked so that the next differential backup (if one is used) captures the changes. Log usage will also soar, affecting any downstream services: log shipping, replication, AlwaysOn Availability Groups, Change Data Capture, backups (all types), recovery (AG failover, FCI failover, database restore), database snapshots, etc. As you can see, this is non-trivial.
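You can put rough numbers on this downstream cost after a rebuild. A hedged sketch: `modified_extent_page_count` (available in SQL Server 2016 SP2 and later) shows how many pages will land in the next differential backup, and `sys.dm_db_log_space_usage` shows what the rebuild did to the log:

```sql
-- Pages changed since the last full backup; drives differential backup size.
-- Requires SQL Server 2016 SP2+ for the modified_extent_page_count column.
SELECT file_id,
       modified_extent_page_count
FROM sys.dm_db_file_space_usage;

-- Current transaction log size and usage for this database.
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

Run these before and after a maintenance window and the difference is, in effect, the bill your backups, log shipping, and replicas will have to pay.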
In the worst cases, which happen quite often, it can bring the entire server to its knees: long blocking waits, a full transaction log, remote AG replicas falling behind, among a long list of other problems. This has real and painful consequences.
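If you run Availability Groups, the send and redo queues make the "replicas falling behind" problem visible during a mass rebuild. A minimal sketch against the standard DMV:

```sql
-- log_send_queue_size: KB of log on the primary not yet sent to a secondary.
-- redo_queue_size: KB of log received by a secondary but not yet redone.
-- Both ballooning during index maintenance = your replicas are falling behind.
SELECT database_id,
       replica_id,
       log_send_queue_size,
       redo_queue_size
FROM sys.dm_hadr_database_replica_states;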
TL;DR: Stop rebuilding indexes based on fragmentation level. If anything, keep up with your statistics updates and rebuild an index only when it makes sense, such as after deleting many rows, after changing the schema of a table, or before taking a final backup to archive a database (not a comprehensive list). Stop killing your servers and causing production-down scenarios because of a fragmentation value.
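To make that concrete, here's a minimal sketch of the alternative approach (object names are hypothetical): keep statistics fresh routinely, and reserve rebuilds for the situations above:

```sql
-- Routine maintenance: keep statistics current instead of rebuilding indexes.
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;  -- hypothetical table

-- Or sweep all out-of-date stats in the database.
EXEC sys.sp_updatestats;

-- Occasional, deliberate maintenance: rebuild only when it genuinely makes
-- sense, e.g. after a mass delete or a schema change on dbo.SomeTable.
ALTER INDEX ALL ON dbo.SomeTable REBUILD;
```

Note that `ALTER INDEX ... REBUILD` updates that index's statistics with a full scan as a side effect, so rebuilding *and then* updating statistics on the same index is redundant work.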