Rutgers, The State University of New Jersey

iSCSI experiments

I've been thinking about how to set up disk service from my Mac at home. I have the first generation Intel iMac. External disks are handicapped by a 400 Mbps Firewire connection. I'm wondering how iSCSI over Gbps Ethernet would do.

This records the results of an experiment using Solaris 11 (Solaris Express Community Edition 104) as an iSCSI target and a Mac as the initiator. I believe recent versions of Solaris 10 would work the same way.

Solaris target


First start the iSCSI target service if it isn't already started:

svcadm enable iscsitgt
If the target software isn't installed, you may first have to do "pkg install SUNWiscsitgt". If you're running a firewall, make sure TCP port 3260 is open.
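If the firewall in question is Solaris's own IP Filter, a rule along these lines in /etc/ipf/ipf.conf would open the port. This is only a sketch: the 192.168.1.0/24 subnet is an assumption, so substitute your own private subnet.

```
# /etc/ipf/ipf.conf: allow inbound iSCSI (TCP 3260) from the local subnet only
pass in quick proto tcp from 192.168.1.0/24 to any port = 3260 keep state
```

Reload the rules afterward, e.g. with "svcadm refresh network/ipfilter".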

I did this with ZFS, although I have no reason to think that is necessary. I used a partition that I wasn't already using for anything else. First, create a ZFS pool on that partition:

zpool create -f tank c3d0s2
zpool list

The -f was needed because slice 2 (which is the whole partition) overlaps with slice 8. "tank" is just a name, from an example. You can use anything.

Create an actual target for this ZFS pool. Unfortunately I lost the exact commands I typed, but I think I did

zfs create -V 20g tank/iscsi0
zfs set shareiscsi=on tank/iscsi0
iscsitadm create target -b /dev/zvol/dsk/tank/iscsi0 iscsi0
This creates a target called iscsi0. The data itself is stored in a ZFS volume, which the create -V command creates. The volume doesn't show up as a mounted filesystem; it appears only as a device under /dev/zvol.

To look at the results do one of

iscsitadm list target
iscsitadm list target -v

Depending upon your needs for reliability you may want to do

iscsitadm modify admin -f enable
This enables fast-ack mode. Without it, writes aren't acknowledged until they actually complete. That is safe, but slow: I saw 1 MB/sec writes without it, which is absurd. With it, I saw 4 MB/sec, but see below for why that is probably limited by my network card.

Fast acks are only supposed to be used if your system has a UPS. Otherwise the client could think you had acknowledged writes that haven't actually occurred. Of course if the client's power fails at the same time it's not clear how serious this is. In home situations you may well want to set this.

You're now set up with a working iSCSI target. Note that at this point there's no authentication. I was working on a private subnet.

Mac initiator

Get the initiator software. It comes as a zipped .dmg file.

Install the software. It adds an extension and a control panel entry. You'll have to reboot.

Open the control panel. In "Portals" click the + and add the IP address of your target (in this case the Solaris machine).

Now go to the "Targets" section. You should see the target. (I actually saw two of them; I used the one with iscsi0 at the end.)

Select the target and hit "Log on". At the moment I'm not using Authentication, so I didn't fill out any of the data.

Go to "Sessions", select the session and click "Info". In the bottom section of the display it will show you a BSD disk name, e.g. disk4.

You can now use Disk Utility to create a single partition on it and format it HFS+. You should then see it mounted.
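The same partitioning can be done from the command line with diskutil. This is a sketch: disk4 is the BSD name from the Sessions Info pane in my case, and the volume name "iSCSI" is arbitrary; adjust both to your setup. Be careful, since this erases the disk you name.

```shell
# Create a GPT partition map with a single Journaled HFS+ volume named "iSCSI"
# on the iSCSI LUN. disk4 is the BSD name shown in the Sessions Info pane.
diskutil partitionDisk disk4 GPT JHFS+ iSCSI 100%
# The new volume should then mount automatically under /Volumes/iSCSI.
```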


Performance results

My results are limited by the speed of the NIC on my PC, which is only 100 Mbps. Mac to Solaris runs about 4 MB/sec; Solaris to Mac about 11 MB/sec.

The reads seem to be about at the limit of the network. The writes are not. However I noted that write speeds ranged from 0 to 10 MB/sec in a kind of sawtooth. This is clearly not optimal behavior. But I'm hoping that with 1 Gbps interfaces I could still get to disk speed.

If you actually want to do this with 100 Mbps networking, increasing the read buffer size on Solaris can speed things up.

iscsitadm modify target --maxrecv 200000 tank/iscsi0
With this I went from about 4 MB/sec write speed to 8.5 MB/sec. Higher values didn't seem to help. The value needed depends upon how the application writes. Using dd with the default of 512-byte writes, a value of 65536 gave 9.5 MB/sec; no doubt smaller numbers would also have worked. With dd using a block size of 100000, 200000 seemed the minimum needed. With a 1 Gbps network, I expect this number will be less important.
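The write tests above can be reproduced with a few commands like the following. This is a sketch: in real use FILE would point at a file on the mounted iSCSI volume (e.g. under /Volumes/iSCSI); /tmp is used here only so it runs anywhere.

```shell
# Sequential-write throughput test, in the spirit of the dd runs above.
FILE=/tmp/ddtest.bin
BS=1000000        # 1 MB blocks, as used in the text
COUNT=100         # 100 MB total

START=$(date +%s)
dd if=/dev/zero of="$FILE" bs="$BS" count="$COUNT" 2>/dev/null
END=$(date +%s)

ELAPSED=$((END - START))
if [ "$ELAPSED" -eq 0 ]; then ELAPSED=1; fi   # avoid divide-by-zero on fast disks
MB=$((BS * COUNT / 1000000))
echo "wrote $MB MB in ${ELAPSED}s: $((MB / ELAPSED)) MB/sec"
rm -f "$FILE"
```

Varying BS (512, 65536, 100000) shows the block-size sensitivity described above.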

I tried this again on my new configuration, with a gigabit network and a very fast disk. I got 33 MB/sec, but with periods of dropout, so depending upon file size the total would be between 20 and 33 MB/sec. It's hard to know which side needs tuning; I suspect it's the Mac, because it's hard to see what more Solaris could do. With 2 and 3 simultaneous copies I got up to 60 MB/sec total, though again with brief periods of nothing. While the NFS test results are better, that's with the ideal test: writing long sequential files. I suspect that in practice there would be little difference; in both cases I was near the limits of the disk and network bandwidth. It's increasingly clear that even gigabit Ethernet is not enough for anything beyond a single 7200 rpm disk. Sun's new high-end storage systems use multiple 10 gigabit Ethernet interfaces.

The obvious test would be Solaris to Solaris, but at the moment I don't have a second Solaris machine at home that can do gigabit Ethernet. I'll work on it.


NFS

Since the results reported above I've experimented a bit with NFS. These tests used an iMac as client and an Acer X1200 as server running OpenSolaris, connected by gigabit Ethernet through an Apple Airport Extreme acting as a switch. (The Acer X1200 is a low-cost consumer desktop. I chose it because it has eSATA, Firewire, and gigabit Ethernet, and is very compact. Its disk performance is significantly better than its business equivalent, the L460: it has a faster disk, and Solaris sees its disk as SATA, whereas it sees the L460's as IDE.)

I used writes for the test, because on reads it's hard to be sure that caching isn't interfering. Surprisingly, the 7200 rpm SATA disk on the Acer can do sustained writes at 100 MB/sec (dd if=/dev/zero bs=1000000). I can get about 90 MB/sec over NFS. To do that with version 3 of NFS I have to mount async,rw=1000000; with just one of those it's more like 50 MB/sec. With version 4 of NFS only the async parameter is needed. (Solaris supports version 4 of NFS. The Mac has an alpha-level implementation of the client, although not the server.)
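Spelled out on the Mac side, the mounts would look something like the following. This is a sketch: "server" and /tank/share are placeholders for your host and export, and I'm assuming the "rw=1000000" above refers to the read/write transfer sizes (rsize/wsize in mount_nfs terms).

```
# NFSv4 mount: only async is needed
sudo mount -t nfs -o vers=4,async server:/tank/share /private/nfs

# NFSv3 equivalent, with large transfer sizes
sudo mount -t nfs -o vers=3,async,rsize=1000000,wsize=1000000 server:/tank/share /private/nfs
```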

I tried 2 and 3 threads doing the same thing. There is some interference: the total was more like 70 MB/sec. I haven't done any analysis, but I wonder if the issue is that the disk heads are moving more.

Incidentally, the Acer has surprisingly good performance with USB. I can get 50 MB/s to a USB disk. That's good, because as of this writing, Solaris is too unreliable with Firewire to use.


SMB

I also tried SMB (the Windows file-sharing protocol, a.k.a. CIFS). Enabling it proved challenging. First, you have to load a couple of packages from the repository; see Getting Started with the Solaris CIFS Service for more information. In addition to those steps, you need to establish name mappings, e.g.

idmap add "winuser:hedrick@WORKGROUP" "unixuser:hedrick"
There are also wildcards, and you can use Unix groups. See Identity Mapping Administration for details.
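For example, a wildcard rule mapping every Windows user in the workgroup to the Unix account of the same name might look like this (a sketch, using the WORKGROUP domain from the example above):

```
idmap add "winuser:*@WORKGROUP" "unixuser:*"
idmap list      # show the current mapping rules
```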

Finally, you have to get the mount right. mount_smbfs in OS X didn't seem to use the current user as the default; it used "nobody". So I had to specify the user explicitly, as in \\hedrick@host\SERVICE.
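As an actual command that would be something like the following; host and SERVICE are placeholders from the text, and the mount point is arbitrary.

```
# Mount the CIFS share with an explicit user, since "nobody" is the default.
mkdir -p /Volumes/smbshare
mount_smbfs //hedrick@host/SERVICE /Volumes/smbshare
```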

With all of this I got about 40 MB/sec on my write test. This could be an implementation issue; I didn't see the kind of tuning I could do with NFS, but I may be missing it. Also, files are created with mode 0, i.e. no one can read or write them. Again, there's probably some way to change that. For the moment I don't see any reason to continue: for going from a Mac, NFS seems superior in both performance and administration. NFS version 4 has the advantage that it doesn't use all the auxiliary ports, so the client doesn't need to open holes in its firewall. With version 3, the client has to allow connections to statd and a few other things.


For more information, contact
Last updated: Sunday, 18-Jan-2009 14:12:26 EST
© 2009 Rutgers, The State University of New Jersey. All rights reserved.