SQL Server requires minimal intervention to manage indexes. As rows are inserted, updated, and deleted, the indexes are adjusted accordingly, and by default the statistics used by the optimizer are generated automatically. Sometimes, however, you might want to drop and re-create indexes yourself. When loading large amounts of data, it is often more efficient to drop the indexes, load the data, and then re-create the indexes. This spares the bulk load operation from having to load the table and update the indexes at the same time.
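The drop-load-re-create pattern can be sketched in T-SQL as follows. The table, index, and file path here are hypothetical placeholders, and BULK INSERT is shown as just one of several possible load mechanisms:

```sql
-- Hypothetical table and index names; adjust to your schema.
-- 1. Drop the nonclustered index before the bulk load.
DROP INDEX IX_Sales_OrderDate ON dbo.Sales;

-- 2. Perform the bulk load (BULK INSERT shown as one option).
BULK INSERT dbo.Sales
    FROM 'C:\data\sales.dat'
    WITH (TABLOCK);

-- 3. Re-create the index once the load completes.
CREATE NONCLUSTERED INDEX IX_Sales_OrderDate
    ON dbo.Sales (OrderDate);
```

Note that this applies most cleanly to nonclustered indexes; dropping a clustered index forces the table into a heap and is a more disruptive operation.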
Indexes can also become fragmented. This happens when there is no room to insert or update a row on a data page and the page "splits" into two pages. The new page will probably not be physically contiguous with the original page, which can degrade performance. Detecting and repairing index fragmentation is covered in detail in Chapter 34.
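As a brief preview of what Chapter 34 covers, fragmentation can be inspected with the sys.dm_db_index_physical_stats dynamic management function and addressed with ALTER INDEX. The table and index names below are hypothetical:

```sql
-- Report fragmentation for the indexes on a hypothetical dbo.Sales table.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Sales'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id  = ps.index_id;

-- Repair options: REORGANIZE for light fragmentation,
-- REBUILD for heavy fragmentation.
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales REORGANIZE;
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales REBUILD;
```

REORGANIZE is an online operation that compacts and reorders leaf pages in place, whereas REBUILD drops and re-creates the index structure entirely.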