How to delete the default database in Exchange 2010

You’ll probably want to delete the default mailbox database that is created when you build a new Exchange 2010 server. So you’ve moved all the mailboxes that you can see to another database, and then you try to remove the default one.
But when you do, you’ll get the following message:

Key Error Messages

The mailbox database ‘Mailbox Database ’ cannot be deleted.
This mailbox database contains one or more mailboxes or arbitration mailboxes.
Cause

The database still contains hidden system (arbitration) mailboxes, which don’t appear in the standard mailbox listing.
Solution

Here’s how to find them, move them and then remove the database.
Find them

You’ll need to use the Exchange Management Shell (EMS) for this.

  1. Use the Get-Mailbox -Database command against the default database. To see the hidden arbitration mailboxes, run it again with the -Arbitration switch.
  2. You’ll see the SystemMailbox entries. You may see more than one mailbox in your listing.
  3. Copy the mailbox name(s) to Notepad.
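
The steps above can be sketched in EMS as follows. This is a sketch only; “Mailbox Database 0123456789” is a placeholder for your own default database name, so substitute the name shown on your server:

```powershell
# List the regular mailboxes still on the default database
Get-Mailbox -Database "Mailbox Database 0123456789"

# Arbitration (system) mailboxes are hidden from the default listing;
# the -Arbitration switch reveals them
Get-Mailbox -Database "Mailbox Database 0123456789" -Arbitration
```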


Move them

Still within the EMS…

  1. Use the New-MoveRequest command, pasting in the mailbox name you copied in the Find them steps above.
  2. If you need to move your arbitration/system mailboxes to a specific database, use the New-MoveRequest command but add the following switch:
    -TargetDatabase “Database_Name”
  3. When you submit your move request, Exchange will queue the move. In the background, Exchange 2010 will perform the move, just as it does for a move started from the EMC.
  4. Repeat step 1 or 2 if you have more mailboxes like this to move.
  5. Use Get-MoveRequest to check that the move has worked.

    You could check this within Move Request in the Exchange Management Console GUI, but there you wouldn’t be able to confirm which database the mailbox had moved to.
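
Putting the move steps together, a minimal EMS sketch looks like this. The SystemMailbox name (including its GUID) and the target database “DB02” are hypothetical examples; use the name you copied earlier and your own target database:

```powershell
# Queue a move for the hidden arbitration mailbox to a specific database
New-MoveRequest -Identity "SystemMailbox{e0dc1c29-89c3-4034-b678-e6c29d823ed9}" -TargetDatabase "DB02"

# Check progress; Status will show Queued, InProgress, or Completed
Get-MoveRequest

# Once everything shows Completed, clear the finished move requests
Get-MoveRequest | Where-Object { $_.Status -eq "Completed" } | Remove-MoveRequest
```

Clearing completed move requests matters here: Exchange won’t let you delete the database while move-request records still reference it.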

Remove the database

Remove the database in the usual way in the Exchange Management Console.
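
If you prefer to stay in the shell, the EMS equivalent is a one-liner. Again, the database name is a placeholder for your own:

```powershell
# Remove the now-empty default database from EMS
Remove-MailboxDatabase -Identity "Mailbox Database 0123456789"
```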
I hope this has helped you.
