
RAID Levels: RAID 0, RAID 1, RAID 5 ...

RAID stands for Redundant Array of Independent Disks. It involves combining two or more drives to improve performance and fault tolerance, and it also provides improved reliability and larger data volumes. A RAID array distributes data across several disks, and the operating system sees the array as a single disk.


RAID Levels
Several different arrangements are possible, and a number of standard schemes have evolved, each representing a trade-off between capacity, speed and protection against data loss.

Some of the common RAID levels are -
RAID 0
RAID 0 uses data striping: the data is broken into fragments (stripes) while being written, and the fragments are written to the member disks simultaneously. While reading, the data is read from the drives in parallel, so this arrangement offers high bandwidth.

The trade-off with RAID 0 is that a single disk failure destroys the entire array: it offers no fault tolerance and implements no error checking.
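
To make the striping idea concrete, here is a minimal Python sketch (not a real RAID implementation; the 4-byte stripe unit and two-disk layout are assumptions for illustration) that splits data into fixed-size chunks and distributes them round-robin across a list of "disks":

    # Minimal RAID 0 striping sketch: split data into fixed-size chunks
    # and distribute them round-robin across the member "disks".

    CHUNK_SIZE = 4  # stripe unit size in bytes (illustrative only)

    def stripe(data: bytes, num_disks: int) -> list[list[bytes]]:
        disks = [[] for _ in range(num_disks)]
        chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
        for index, chunk in enumerate(chunks):
            disks[index % num_disks].append(chunk)  # round-robin placement
        return disks

    def read_back(disks: list[list[bytes]]) -> bytes:
        # Interleave the chunks again in the order they were written.
        out = []
        for stripe_index in range(max(len(d) for d in disks)):
            for disk in disks:
                if stripe_index < len(disk):
                    out.append(disk[stripe_index])
        return b"".join(out)

    disks = stripe(b"ABCDEFGHIJKLMNOP", num_disks=2)
    print(disks)             # chunks alternate between the two disks
    print(read_back(disks))  # b'ABCDEFGHIJKLMNOP'

Because both disks can service their share of the chunks at the same time, reads and writes scale with the number of members, which is where the bandwidth gain comes from.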

RAID 1
RAID 1 uses mirroring: the same data is written to two or more drives. It offers fault tolerance against disk failures, and the array continues to operate as long as at least one drive is functioning properly.

The trade-off with RAID 1 is the cost of the additional disks needed to store the mirrored copies.
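
A rough sketch of the mirroring behaviour (again a hypothetical illustration, not a real driver): every write goes to all healthy member disks, and a read can be served by any disk that is still working.

    # Minimal RAID 1 mirroring sketch: every write is duplicated to all
    # member disks; a read is served by the first disk that still works.

    class MirroredArray:
        def __init__(self, num_disks: int = 2):
            # Each "disk" is a dict mapping block number -> data.
            self.disks = [{} for _ in range(num_disks)]
            self.failed = set()

        def write(self, block: int, data: bytes) -> None:
            for i, disk in enumerate(self.disks):
                if i not in self.failed:
                    disk[block] = data  # identical copy on every healthy disk

        def read(self, block: int) -> bytes:
            for i, disk in enumerate(self.disks):
                if i not in self.failed:
                    return disk[block]  # any surviving mirror can answer
            raise IOError("all mirrors have failed")

    array = MirroredArray()
    array.write(0, b"payload")
    array.failed.add(0)   # simulate losing the first disk
    print(array.read(0))  # b'payload', served by the surviving mirror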

RAID 2
RAID 2 uses Hamming codes for error correction. The disks are synchronized and striped in very small stripes, and multiple dedicated parity disks are required.

RAID 3
This level uses byte-level striping with a dedicated parity disk instead of rotated parity stripes, offering improved performance and fault tolerance. The benefit of the dedicated parity disk is that the array can continue operating without parity if the parity drive fails during operation.
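
The parity stored on the dedicated disk is typically a bytewise XOR of the corresponding data on the data disks. A small illustrative sketch (assuming three data disks plus one parity disk, sizes chosen arbitrarily):

    # Sketch of a dedicated parity disk (RAID 3/4 style): the parity block
    # is the bytewise XOR of the corresponding blocks on the data disks.

    def xor_blocks(blocks: list[bytes]) -> bytes:
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return bytes(parity)

    data_disks = [b"AAAA", b"BBBB", b"CCCC"]  # three equal-size data blocks
    parity_disk = xor_blocks(data_disks)      # stored on the dedicated parity disk
    print(parity_disk.hex())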

RAID 4
RAID 4 is similar to RAID 3, but it uses block-level striping instead of byte-level striping, so a single file can be stored across blocks. RAID 4 allows multiple I/O requests to be served in parallel, but the data transfer speed is lower. Block-level parity is used for error detection.

RAID 5
RAID 5 uses block-level striping with distributed parity, and it requires all drives but one to be present to operate. When a drive fails, reads are reconstructed from the distributed parity, so a single drive failure does not destroy the array. However, the array will lose data if a second drive fails before the first is replaced and rebuilt.
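
The same XOR relationship used to compute parity is what makes the rebuild possible: XOR-ing all the surviving blocks of a stripe (data plus parity) reproduces the missing block. A hypothetical, self-contained sketch:

    # Sketch of RAID 5-style recovery: a missing block equals the XOR of
    # all surviving blocks in the same stripe (data blocks + parity).

    from functools import reduce

    def xor_blocks(blocks: list[bytes]) -> bytes:
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    stripe = [b"AAAA", b"BBBB", b"CCCC"]  # data blocks in one stripe
    parity = xor_blocks(stripe)           # in real RAID 5, parity rotates across disks

    # Simulate losing the second data disk and rebuilding its block.
    surviving = [stripe[0], stripe[2], parity]
    rebuilt = xor_blocks(surviving)
    print(rebuilt == stripe[1])           # True: the lost block is recovered

This is also why a second failure is fatal: with two blocks of a stripe missing, the single parity block no longer contains enough information to reconstruct either of them.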

The standard RAID levels above can be combined in different ways to create nested RAID levels, which offer improved performance. Some of the known nested RAID levels are -

RAID 0+1
RAID 1+0
RAID 3+0
RAID 0+3
RAID 10+0
RAID 5+0
RAID 6+0
