Do we need to file-level defragment Exchange database drives?

Every so often there is a question: "Should we run file-level defragmentation software on Exchange servers?"
Usually this comes from the assumption that file system defragmentation must help Exchange because, well, Exchange databases get fragmented too.
Exchange database fragmentation is a completely different story, though: it concerns the "white space", or empty database pages, inside the Exchange database itself. There are two types of Exchange database defragmentation:
ONLINE defragmentation - this happens as part of online maintenance, which by default runs nightly. Here Exchange rearranges the data (database pages, really) within the database to create more contiguous white space. Typically you will want to make sure that your backup schedule does NOT overlap the online maintenance schedule, because starting an online backup stops the online defragmentation. A way to verify that the nightly pass actually completes is sketched after these two types.
OFFLINE defragmentation - this is what happens when you run the ESEUTIL utility with the /d switch, which means you must take the database offline to do it. This is typically done only when there is a specific reason for it - such as reclaiming a large amount of hard drive space, being instructed to do so by Support Services when troubleshooting a specific problem, or after a database hard repair (which is another thing that we should never do). A minimal command-line sketch follows below.
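To verify that the nightly online pass is completing (and not being cut short by a backup), you can watch the Application log for the ESE online defragmentation events. Here is a minimal sketch using the eventquery.vbs script included with Windows Server 2003; event IDs 700 (full pass beginning) and 701 (full pass completed) are the commonly cited ESE values, but confirm the IDs and source name against your own Application log first:

REM Sketch only - event ID 701 and source "ESE" are assumptions
REM to verify against your own Application log.
cscript //nologo %SystemRoot%\system32\eventquery.vbs /L Application /FI "Source eq ESE" /FI "ID eq 701"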
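And for reference, here is roughly what the offline route looks like. This is a sketch only: the database path is illustrative, the store must be dismounted first, and you need free disk space roughly the size of the database for the temporary copy (the optional /t switch lets you put that temporary database on another volume):

REM Sketch only - dismount the store and take a backup first.
REM The paths below are illustrative, not from this post.
eseutil /d "D:\Exchsrvr\MDBDATA\priv1.edb" /tE:\Temp\Tempdfrg.edb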
So - that being said - what about file system defragmentation?
I would never do it on running production server databases. The reason is actually simple: file system defragmentation is a very I/O-intensive operation, so the disk will be very busy. I have seen cases here in Support Services where our database engine actually started logging warnings that a write to the disk was successful but took "unusually long" to complete, suggesting that the hardware might be at fault. Sure enough, a disk defragmentation had kicked off just before this started happening, as witnessed by the Application log. That right there is enough reason for me not to do it in real life.
The bottom line really is: you do not HAVE to file-level defragment the Exchange database drives. Exchange reads from and writes to its databases in a very random fashion; large sequential reads and writes benefit far more from file system defragmentation than Exchange databases do. But if you really WANT to do it, I would do it the old-fashioned way: move the databases off to some other volume, file system defragment the drive, and then move the databases back... Or at least make sure you have a good backup, dismount the databases, and then defragment the drive (a sketch of that route follows).
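If you take the dismount-and-defragment route, the era-appropriate commands look roughly like this (a sketch assuming Windows Server 2003's built-in defrag.exe and that D: is the dismounted database volume; dismount the stores in Exchange System Manager first and remount them only after the pass finishes):

REM Analyze first to see whether the volume is even fragmented enough to bother.
defrag D: -a -v
REM Run the actual defragmentation pass.
defrag D: -v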
A few related things to read:
328804 How to Defragment Exchange Databases
http://support.microsoft.com/?id=328804
192185 XADM: How to Defragment with the Eseutil Utility (Eseutil.exe)
http://support.microsoft.com/?id=192185
256352 Online Defragmentation Does Not Reduce Size of .edb Files
http://support.microsoft.com/?id=256352
