
At first glance, Network Attached Storage (NAS) is not much different from a Storage Area Network (SAN). They both attach to a network, and they both provide storage to computers on that network. There are some major differences between the two storage roles, however, even though the two are becoming more and more alike. Today we will discuss the difference between a storage area network and network attached storage (SAN vs NAS).

Differences in Appearance

The first thing to look at when deciding whether you are using a SAN or a NAS is how the operating system sees the storage. Does the operating system see the storage as being on a remote computer? Or does the operating system see the storage as being local? If the operating system and/or programs know the storage is not local, you are probably working with a NAS.

A good example of this is Microsoft Windows network drives. If you map a network drive in Windows you get a drive letter, but Windows shows this drive as a network drive. Only the user who mapped it has access to it, and Windows will not let you use this drive for many functions.

If you connect to a SAN, Windows can’t tell the storage is actually somewhere on the network. It treats the storage as if it were connected directly to the server. It is accessible to any user logged into the system, and you can use it just like any other drive.

NAS Protocols

When working with a SAN vs a NAS there are different protocols involved. When you connect to a NAS you will typically be working with the Network File System (NFS) or the Common Internet File System (CIFS). In Windows, if you map a network drive to an NFS or CIFS volume, it will be treated as I stated above: it is usable by the one user, and you are limited in what you can do.

In Linux/Unix, network drives are treated differently. When you mount an NFS or CIFS volume it is treated much as if it were a local disk and is available to all users on the system, unless the file system permissions do not allow it.

SAN Protocols

When working with a SAN the most common protocols are iSCSI and Fibre Channel. Typically when working with the iSCSI protocol you will operate over an Ethernet network, and when working with Fibre Channel you will operate over a fiber optic network. However, this is not always the case. There is another protocol called Fibre Channel over Ethernet (FCoE), and there is nothing stopping you from using iSCSI over a fiber optic network.

FCoE operates a lot like iSCSI. iSCSI is an implementation of the SCSI protocol where the SCSI commands are wrapped in TCP/IP packets and sent over the network. When the storage system receives a packet, it extracts the SCSI command and executes it on the local storage. It then takes the result, wraps it in another TCP/IP packet, and sends it back to the client machine. FCoE does much the same thing, except it carries Fibre Channel traffic inside Ethernet frames instead of wrapping SCSI commands in TCP/IP.
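To make the encapsulation idea concrete, here is a toy Python sketch of the request/response cycle described above. It is not the real iSCSI PDU format; the field layout, block size, and the in-memory "disk" are all invented for illustration (only the 0x28 READ(10) opcode is borrowed from real SCSI).

```python
import struct

BLOCK_SIZE = 4  # toy block size; real disks use 512-byte or 4 KiB blocks

# A bytearray standing in for the storage system's local disk.
disk = bytearray(b"AAAABBBBCCCCDDDD")

def client_build_read_request(start_block, num_blocks):
    """Pack a toy READ command into bytes, like a SCSI command wrapped for transport."""
    return struct.pack("!BII", 0x28, start_block, num_blocks)

def storage_handle_request(packet):
    """The storage system unpacks the command, executes it locally, and replies."""
    opcode, start, count = struct.unpack("!BII", packet)
    if opcode == 0x28:  # READ
        offset = start * BLOCK_SIZE
        return bytes(disk[offset:offset + count * BLOCK_SIZE])
    raise ValueError("unsupported opcode")

request = client_build_read_request(1, 2)   # ask for blocks 1 and 2
response = storage_handle_request(request)  # the reply "travels" back to the client
print(response)  # b'BBBBCCCC'
```

In real iSCSI the `request` bytes would travel inside a TCP/IP packet; here the function call stands in for the network hop.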

How do they Operate?

When working with NAS storage, the client machine operates at the file level. This means when you want to access mycoolpicture.jpg, your computer sends a message to the NAS over the network asking for mycoolpicture.jpg, and the NAS responds by sending the file. When using a SAN you are operating at the block level.

This means the client machine can’t simply ask for mycoolpicture.jpg. The client machine needs to tell the SAN where on the volume the file is. The operating system sends a message to the SAN asking for specific blocks on the file system. For example, if the picture is stored in blocks 5555 through 5577, the client system would ask for blocks 5555 through 5577, and the SAN would read those blocks and send them over the network. The SAN does not know what it is reading; it is simply following orders.
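A short Python sketch of the difference: the helper below does block-style reads by number, the way a client addresses a SAN. The block size, the temp-file "volume", and the block range are all made up for the example; NAS-style access would instead be an ordinary `open("mycoolpicture.jpg")` with the server doing the name-to-blocks translation.

```python
import os
import tempfile

BLOCK_SIZE = 512

def read_blocks(volume_path, first_block, last_block):
    """SAN-style access: read a range of blocks by number.
    The storage side never knows what file, if any, these blocks belong to."""
    with open(volume_path, "rb") as dev:
        dev.seek(first_block * BLOCK_SIZE)
        return dev.read((last_block - first_block + 1) * BLOCK_SIZE)

# Stand-in for a raw volume: ten blocks, each filled with its own block number.
volume = tempfile.NamedTemporaryFile(delete=False)
for i in range(10):
    volume.write(bytes([i]) * BLOCK_SIZE)
volume.close()

chunk = read_blocks(volume.name, 5, 7)  # "give me blocks 5 through 7"
print(len(chunk))                       # 1536 bytes = 3 blocks
os.unlink(volume.name)
```

Note that `read_blocks` returns raw bytes with no idea of their meaning; interpreting them as a picture, a document, or file-system metadata is entirely the client's job.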

Shared access to a SAN or a NAS

Due to the nature of how a NAS works, it is not a problem to connect multiple servers to the same share on your NAS. You have to be more careful when connecting the same volume on a SAN to more than one client system.

This limitation comes from the way the SAN operates. Since a NAS shares files, the NAS device can handle things such as file locks and consistency checking. A SAN operates at the block level and just trusts the operating system to know what it is doing with the blocks. Unless your file system is set up to allow for simultaneous access from more than one device you are asking for trouble.
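As a sketch of the kind of arbitration a file server can provide, the snippet below uses an advisory `flock` lock on a file: the second "client" is refused access while the first holds the lock. This is a local, Unix-only illustration of the concept, not how any particular NAS implements its locking.

```python
import fcntl
import os
import tempfile

# Two handles to the same file stand in for two clients of a file server.
path = tempfile.mkstemp()[1]

writer = open(path, "w")
fcntl.flock(writer, fcntl.LOCK_EX | fcntl.LOCK_NB)  # first client takes the lock

reader = open(path, "w")
try:
    # Second client tries for the same exclusive lock without blocking...
    fcntl.flock(reader, fcntl.LOCK_EX | fcntl.LOCK_NB)
    blocked = False
except BlockingIOError:
    blocked = True  # ...and is told to wait its turn

print(blocked)  # True

fcntl.flock(writer, fcntl.LOCK_UN)
writer.close()
reader.close()
os.unlink(path)
```

A SAN has no equivalent service to offer: it never sees files, only block numbers, so any coordination has to come from the file system running on the clients.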

For example, let’s assume you have a SAN with a single volume, and you connect it to two servers running Windows Server 2003 Standard. Both systems see the volume and try to use it. When the first server writes files to the volume, everything works fine. Then the other server modifies the files in some way; perhaps it just reads the files and updates the date-accessed attribute. NTFS on each server is looking at the blocks of the file system and sees changes it did not expect. NTFS at this point may think there is something wrong with that block and take some corrective action. At the same time, the other server will see something strange happening and take the same action. In the end, you have a corrupt file.
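The core of the failure is a lost update: each server caches a block, changes its own copy, and writes it back with no coordination. A toy Python model of that race (the dict "volume", block number, and contents are invented; this is not real NTFS behavior):

```python
# A shared 'volume' and two servers that each cache a block,
# change it independently, and write it back with no coordination.
volume = {7: b"original metadata"}  # pretend block 7 holds file-system metadata

# Both servers read block 7 into their own cache.
server_a_cache = volume[7]
server_b_cache = volume[7]

# Server A writes a file and updates the metadata block on the SAN.
server_a_cache = b"A: file written  "
volume[7] = server_a_cache

# Server B, unaware of A's change, updates its *stale* copy (say, an
# access-time bump) and writes it back, silently destroying A's update.
server_b_cache = server_b_cache.replace(b"original", b"B-atime ")
volume[7] = server_b_cache

print(volume[7])  # A's update is gone; repeat this on real metadata and the volume corrupts
```

A cluster-aware file system avoids this by making each server take a lock (over the network) before touching a shared block, so a stale cached copy is never written back.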

Some file systems are designed for simultaneous access to the same SAN volume, for example, the VMware File System (VMFS). In a purely Windows environment, you want to make sure you never connect the same SAN volume to more than one server at a time unless you are using clustering.

When using Microsoft Cluster Service (MSCS), the cluster service knows you have a volume connected to more than one server and, assuming it is properly configured, ensures that the volume is only mounted on a single server within the cluster at any given time. This does not protect you from mounting that volume on another server outside the cluster, which would be a bad idea.

Cluster Aware File Systems

As I explained in the previous section, you have to be careful which file system you use when sharing storage using iSCSI or similar protocols with more than one server at a time. A cluster-aware file system allows you to safely connect the file system to more than one server at a time.

Below is a list of a few cluster-aware file systems:

OCFS2 – Oracle Cluster File System (part of the Linux kernel)
VMFS – VMware File System
CXFS – Clustered XFS

Below is a list of file systems which I know are not cluster-aware:

NTFS – NT File System
EXT2 – Linux
EXT3 – Linux
FAT16/32 – File Allocation Table (used in DOS and Windows)
XFS – Linux

The above lists by no means cover every file system, but they do give a few examples of what you may be using. In a future article, I will go over some of the good and bad elements of different file systems.