NFS performance

Posted by habibie 
NFS performance
May 11, 2016 07:20AM
I would like to know what software you use to test NFS performance on your client (Linux) computer when mounting an NFS share from your PogoPlug Pro. Also, what maximum R/W throughputs do you get?
Re: NFS performance
May 11, 2016 11:01AM
Until someone knowledgeable gets here, my basic research indicates performance heavily depends on block size and number of threads handling it.

=========
-= Cloud 9 =-
Re: NFS performance
May 11, 2016 03:52PM
use ioperf and iperf

then grab your calculator...

https://en.wikipedia.org/wiki/Iperf
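For example, iperf reports in Mbits/sec by default, so divide by 8 to compare against disk figures: a full gigabit link tops out at 1000 / 8 = 125 MBytes/sec before protocol overhead, and NFS can never beat whatever iperf measures on your wire.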
Re: NFS performance
May 12, 2016 12:40AM
I just tried iperf. On my server (PogoPlug Pro 02) running LEDE, I get the following result and I don't really know how good/bad the performance is, TBH.
root@lede:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.0.20 port 5001 connected with 10.0.0.100 port 38194
------------------------------------------------------------
Client connecting to 10.0.0.100, TCP port 5001
TCP window size: 43.8 KByte (default)
------------------------------------------------------------
[  6] local 10.0.0.20 port 55466 connected with 10.0.0.100 port 5001
Waiting for server threads to complete. Interrupt again to force quit.
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec   195 MBytes   163 Mbits/sec
[  6]  0.0-10.0 sec   517 MBytes   433 Mbits/sec
root@lede:~#
I used the following setup on my Debian Wheezy Linux desktop computer:
[root@debian:/opt/google/src/debian 70%] # iperf -c 10.0.0.20 -fM -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.0.0.20, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[  5] local 10.0.0.100 port 38194 connected with 10.0.0.20 port 5001
[  4] local 10.0.0.100 port 5001 connected with 10.0.0.20 port 55466
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   196 MBytes  19.5 MBytes/sec
[  4]  0.0-10.0 sec   518 MBytes  51.7 MBytes/sec
0.024u+2.944s=0:10.06e(29.4%) TDSavg=0k+0k+0k max=1444k 0+0io 0pf+0sw
[root@debian:/opt/google/src/debian 71%] #
When I plugged a 2 TB Seagate 2.5" SATA HDD into the SATA port, the dmesg excerpt shown below indicates the SATA link came up at 3.0 Gbps. I thought the SATA port on a PogoPlug Pro could do 6.0 Gbps.
[  405.725168] sata_oxnas: resetting SATA core
[  405.739604] ata1: exception Emask 0x10 SAct 0x0 SErr 0x20000 action 0xe frozen
[  405.746836] ata1: hard resetting link
[  406.663664] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[  406.676372] ata1.00: ATA-8: ST2000LM003 HN-M201RAD, 2BC10005, max UDMA/133
[  406.683225] ata1.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 0/32)
[  406.696716] ata1.00: configured for UDMA/133
[  406.700994] ata1: EH complete
[  406.704719] scsi 0:0:0:0: Direct-Access     ATA      ST2000LM003 HN-M 0005 PQ: 0 ANSI: 5
[  406.714557] sd 0:0:0:0: Attached scsi generic sg1 type 0
[  406.715130] sd 0:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[  406.715143] sd 0:0:0:0: [sdb] 4096-byte physical blocks
[  406.727953] sd 0:0:0:0: [sdb] Write Protect is off
[  406.727971] sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[  406.728181] sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[  406.955417]  sdb: sdb1 sdb2 sdb3
[  406.963112] sd 0:0:0:0: [sdb] Attached SCSI disk
Anyway, hdparm output is shown below and I am not really impressed.
root@lede:~# hdparm -t /dev/sdb3

/dev/sdb3:
Timing buffered disk reads:  300 MB in 3.00 seconds = 102386 kB/s
root@lede:~# hdparm -t /dev/sdb3

/dev/sdb3:
Timing buffered disk reads:  335 MB in 3.00 seconds = 114337 kB/s
root@lede:~# hdparm -t /dev/sdb3

/dev/sdb3:
Timing buffered disk reads:  337 MB in 3.00 seconds = 114810 kB/s
root@lede:~# hdparm -t /dev/sdb3

/dev/sdb3:
Timing buffered disk reads:  329 MB in 3.00 seconds = 112042 kB/s
root@lede:~#
Using the f3 utility, I get R/W of about 27/18 MB/s on my NFS-mounted storage, as shown below. AFAICT, it is pretty slow considering the output from hdparm shows the HDD can do about 100 MB/s (reads).
[debian@debian:/opt/openwrt-git-trunk 9%] ~ f3write /mnt/downloads
Free space: 1.74 TB
Creating file 1.h2w ... OK!                            
Creating file 2.h2w ... OK!                            
Creating file 3.h2w ... OK!                            
Creating file 4.h2w ... 0.17% -- 18.14 MB/s -- 27:37:52^C
6.412u+10.008s=2:53.86e(9.4%) TDSavg=0k+0k+0k max=908k 0+6441336io 0pf+0sw
[debian@debian:/opt/openwrt-git-trunk 10%] ~ f3read /mnt/downloads
                  SECTORS      ok/corrupted/changed/overwritten
Validating file 1.h2w ... 2097152/        0/      0/      0
Validating file 2.h2w ... 2097152/        0/      0/      0
Validating file 3.h2w ... 2097152/        0/      0/      0
Validating file 4.h2w ...  148632/        0/      0/      0

  Data OK: 3.07 GB (6440088 sectors)
Data LOST: 0.00 Byte (0 sectors)
	       Corrupted: 0.00 Byte (0 sectors)
	Slightly changed: 0.00 Byte (0 sectors)
	     Overwritten: 0.00 Byte (0 sectors)
Average reading speed: 27.27 MB/s
6.828u+4.668s=1:55.59e(9.9%) TDSavg=0k+0k+0k max=884k 6440408+0io 0pf+0sw
[debian@debian:/opt/openwrt-git-trunk 11%] ~
If you can post yours for a comparison, I sure will appreciate that.
Re: NFS performance
May 12, 2016 04:00AM
The read/write performance is slow compared to what could be achieved.

There is a lot to consider when performance-tuning a Linux system on a low-resource device. It's not as simple as adding a HDD and slapping some packages on.

The limit of a pogov3 in your setup is the following:
[  406.663664] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[  406.676372] ata1.00: ATA-8: ST2000LM003 HN-M201RAD, 2BC10005, max UDMA/133
[  406.683225] ata1.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 0/32)
[  406.696716] ata1.00: configured for UDMA/133

More notably, this:
[  406.696716] ata1.00: configured for UDMA/133

So your theoretical maximum performance is only ever going to be 133 MB/s, and that is at the disk interface level under ideal circumstances. The fact that you see ~100 MB/s is good, but remember that is a best-case sequential buffered-read benchmark, not a real workload.

You also have the limitation of memory and processor: the CPU and RAM on a plug are not enough to reach these theoretical maximums to begin with. @Joey above is partially right when he mentions "performance heavily depends on block size". Filesystem block size depends on what you intend to use the disk for, e.g. a database or a file store; each has a "preferred" block size, although this also impacts how much data can actually be stored on the disk. There is also the version of NFS you are advertising and/or configuring for: NFSv3 vs. NFSv4.
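For illustration only (re-creating a filesystem destroys its data; the device name is a placeholder), the ext4 block size is fixed when the filesystem is made:

mkfs.ext4 -b 4096 /dev/sdXY   # 4 KiB blocks; pick a size to suit the workload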


To keep this simple: if you are using the default block size for ext4, haven't turned off journalling, and haven't set the "atime" parameters etc., then don't expect much beyond what you see above. With tuning you could enjoy burst rates close to 30 MB/s+ depending on system load etc.
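As a quick illustration of the "atime" part (safe to try; the mount point here is just an example), you can remount an existing filesystem without access-time updates so reads stop generating metadata writes:

mount -o remount,noatime /mnt/SATA

Making it permanent means adding noatime to the relevant /etc/fstab entry.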



some reading for you
http://www.slashroot.in/how-do-linux-nfs-performance-tuning-and-optimization

http://www.techrepublic.com/blog/linux-and-open-source/tuning-nfs-for-better-performance/

http://nfs.sourceforge.net/nfs-howto/ar01s05.html
Re: NFS performance
May 12, 2016 09:59AM
Gravelrash Wrote:
-------------------------------------------------------
> The read/write performance is slow compared to
> what could be achieved.
>
> there is a lot to consider when performance tuning
> a Linux system on a low resource device. It's not
> as simple as adding a HDD and slapping some
> packages on.
>
> The limit of a pogov3 in your setup is the
> following
>
> [  406.663664] ata1: SATA link up 3.0 Gbps
> (SStatus 123 SControl 300)
> [  406.676372] ata1.00: ATA-8: ST2000LM003
> HN-M201RAD, 2BC10005, max UDMA/133
> [  406.683225] ata1.00: 3907029168 sectors, multi
> 0: LBA48 NCQ (depth 0/32)
> [  406.696716] ata1.00: configured for UDMA/133
>
>
> more notably this
>
> [  406.696716] ata1.00: configured for UDMA/133
>
>

I think this is the bottleneck. It looks like this UDMA/133 limit is set in the HDD. Is there a way to change that? I also noticed this when I plugged my 2 TB Seagate HDD into the SATA port on my Debian Wheezy Linux desktop computer.

What about the following two lines? Do they indicate any problem with the SATA HDD?
[  405.725168] sata_oxnas: resetting SATA core
[  405.739604] ata1: exception Emask 0x10 SAct 0x0 SErr 0x20000 action 0xe frozen
[  405.746836] ata1: hard resetting link
>
> You also have the limitation of memory and
> processor, the processor and memory on a plug is
> not enough to reach these theoretical maximums to
> begin with. @Joey above is partially right when he
> mentions "performance heavily depends on block
> size". File system block size depends on what you
> intend to use the disk for - eg database or file
> store, each have a "preferred" block size, however
> this also impacts how much data can actually be
> stored on the disk. There is also the version of
> NFS you are advertising and or configuring for.
> NFSv3 / NFSv4
>

My configuration is pretty basic. On my PogoPlug Pro 02 server, I just needed to modify the /etc/exports file whose content is shown below:
root@lede:~# cat /etc/exports 
/opt	10.0.0.100/255.0.0.0(rw,insecure,no_subtree_check,sync)
root@lede:~#

On my Debian Wheezy Linux desktop computer, I just mount it manually as follows, yet cat /proc/mounts still reports rsize=16384,wsize=16384 (see below).
mount 10.0.0.20:/opt /mnt -o rsize=32768,wsize=32768,intr,noatime,nolock

Can you and/or anyone please post here your configuration as well as the throughputs you get?

>
> to keep this simple.
> If you are using the default block size for ext4
> and haven’t turned off journalling and also not
> set the "atime" parameters etc., then don’t
> expect much beyond what you see above. with tuning
> you could enjoy burst rates close to 30MB+
> depending on system load etc.
>

How do you do that? I checked the Linux kernel source .config file and I don't see this option as shown below.

 .config - Linux/arm 4.4.10 Kernel Configuration
 > File systems ────────────────────────────────────────────────────────────
  ┌──────────────────────────── File systems ────────────────────────────┐
  │  Arrow keys navigate the menu.  <Enter> selects submenus ---> (or    │  
  │  empty submenus ----).  Highlighted letters are hotkeys.  Pressing   │  
  │  <Y> includes, <N> excludes, <M> modularizes features.  Press        │  
  │  <Esc><Esc> to exit, <?> for Help, </> for Search.  Legend: [*]      │  
  │ ┌──────────────────────────────────────────────────────────────────┐ │  
  │ │    < > Second extended fs support                                │ │  
  │ │    < > The Extended 3 (ext3) filesystem                          │ │  
  │ │    <*> The Extended 4 (ext4) filesystem                          │ │  
  │ │    [*]   Use ext4 for ext2 file systems                          │ │  
  │ │    [*]   Ext4 POSIX Access Control Lists                         │ │  
  │ │    [*]   Ext4 Security Labels                                    │ │  
  │ │    <*>   Ext4 Encryption                                         │ │  
  │ │    [ ]   EXT4 debugging support                                  │ │  
  │ │    [ ] JBD2 (ext4) debugging support                             │ │  
  │ │    < > Reiserfs support                                          │ │  
  │ └────v(+)──────────────────────────────────────────────────────────┘ │  
  ├──────────────────────────────────────────────────────────────────────┤  
  │       <Select>    < Exit >    < Help >    < Save >    < Load >       │  
  └──────────────────────────────────────────────────────────────────────┘

>
>
> some reading for you
> http://www.slashroot.in/how-do-linux-nfs-performance-tuning-and-optimization
>

I perused the above link to learn more about NFS. Unfortunately, the tuning suggestions I tried got me nowhere. For instance, I could not even change the size of the RPC data chunks using rsize=32768 & wsize=32768, as shown below. I also followed this link (under the Performance / Tuning section) and used echo 32768 > /proc/fs/nfsd/max_block_size to set the rsize/wsize, to no avail. Perhaps I did something wrong here.
[root@debian:/opt/google/src/debian 135%] # mount 10.0.0.20:/opt /mnt -o rsize=32768,wsize=32768,intr,noatime,nolock
[root@debian:/opt/google/src/debian 136%] # cat /proc/mounts 
10.0.0.20:/opt /mnt nfs rw,relatime,vers=3,rsize=16384,wsize=16384,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.0.20,mountvers=3,mountport=32780,mountproto=udp,local_lock=all,addr=10.0.0.20 0 0
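(One guess at what I did wrong: from what I have read, /proc/fs/nfsd/max_block_size only accepts a write while no nfsd threads are running, so on the server I presumably should have done something like the following, assuming LEDE ships an /etc/init.d/nfsd script:)
/etc/init.d/nfsd stop
echo 32768 > /proc/fs/nfsd/max_block_size
/etc/init.d/nfsd start
(and then re-mounted on the client and re-checked /proc/mounts.)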

> http://www.techrepublic.com/blog/linux-and-open-source/tuning-nfs-for-better-performance/
>

Not much I can do here to enhance the performance, unfortunately.

> http://nfs.sourceforge.net/nfs-howto/ar01s05.html

This would probably be more relevant if my server were based on a Linux kernel v2.6 or older. Nevertheless, I ran some of the tests suggested in the above link and the results are shown below.
[debian@debian:/opt/openwrt-git-trunk 39%] ~ time dd if=/dev/zero of=/mnt/downloads/testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 13.3754 s, 20.1 MB/s
0.008u+0.428s=0:13.56e(3.0%) TDSavg=0k+0k+0k max=880k 0+524288io 0pf+0sw
[debian@debian:/opt/openwrt-git-trunk 40%] ~ time dd if=/mnt/downloads/testfile of=/dev/null bs=16k
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 9.19802 s, 29.2 MB/s
0.004u+0.388s=0:09.42e(4.0%) TDSavg=0k+0k+0k max=884k 524288+0io 0pf+0sw
[debian@debian:/opt/openwrt-git-trunk 41%] ~
Re: NFS performance
May 12, 2016 12:28PM
here are my exports and fstab settings

Server
/etc/fstab
# Self Mounted
LABEL=DATA      /mnt/SATA       ext4    suid,errors=continue,dev,noatime,exec

/etc/exports
# SATA exports
/mnt/SATA	192.168.168.0/255.255.255.0(async,insecure,rw,nohide)

root@smurfville:~# cat  /proc/fs/nfsd/max_block_size
131072

Workstation
/etc/fstab
#NFS
192.168.168.100:/mnt/SATA/DATA/  /home/gravelrash/nfsville  nfs auto 0 0


# cat /proc/mounts | grep .100
192.168.168.100:/mnt/SATA/DATA /home/gravelrash/nfsville nfs4 rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.168.4,local_lock=none,addr=192.168.168.100 0 0

To disable journalling on the disk, perform the following on the "Server":

tune2fs -O ^has_journal /dev/sdXY

where sdXY is the identity of the disk partition holding the filesystem, e.g. /dev/sdb1, /dev/sda1, etc.


Whatever you do, make a backup before you play with filesystems.

You might have to use the -f (force) parameter.
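To double-check afterwards (the first command is a read-only query; run the check on the unmounted filesystem):

tune2fs -l /dev/sdXY | grep has_journal   # no output means the journal is gone
e2fsck -f /dev/sdXY                       # force a full check before re-mounting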




Re: NFS performance
May 12, 2016 12:40PM
The lower tests show you are getting circa 20/29 MB/s write/read across your NFS mount; that in itself is not bad.

You also appear to be running NFSv3, whereas I am running NFSv4.
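If your LEDE build's nfsd was compiled with v4 support (worth checking), a minimal client-side sketch to request it, reusing the paths from your post, would be:

mount -t nfs -o vers=4,noatime 10.0.0.20:/opt /mnt

The negotiated rsize/wsize will still be capped by the server's max_block_size.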



Re: NFS performance
May 14, 2016 08:53AM
Gravelrash Wrote:
-------------------------------------------------------
> here are my exports and fstab settings
>
> Server
>
> /etc/fstab
> # Self Mounted
> LABEL=DATA      /mnt/SATA       ext4   
> suid,errors=continue,dev,noatime,exec
> 
> /etc/exports
> # SATA exports
> /mnt/SATA	192.168.168.0/255.255.255.0(async,insecu
> re,rw,nohide)
> 
> root@smurfville:~# cat 
> /proc/fs/nfsd/max_block_size
> 131072
>  
> 
> 
>
>
> Workstation
>
> /etc/fstab
> #NFS
> 192.168.168.100:/mnt/SATA/DATA/ 
> /home/gravelrash/nfsville  nfs auto 0 0
> 
> 
> # cat /proc/mounts | grep .100
> 192.168.168.100:/mnt/SATA/DATA
> /home/gravelrash/nfsville nfs4
> rw,relatime,vers=4.0,rsize=131072,wsize=131072,na
> mlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2
> ,sec=sys,clientaddr=192.168.168.4,local_lock=none,
> addr=192.168.168.100 0 0
> 
> 
>
>

You are right that we have different versions of NFS. BTW, for a comparison, can you please post the maximum throughputs of your NFS connection? IIRC, your PogoPlug Pro runs Debian ARM, doesn't it?

> To disable journalling on the disk perform the
> following on the "Server"

>
> tune2fs -O ^has_journal /dev/sdXY
>
> where sdXY is the identity of the disk, i.e.
> /dev/sdb , /dev/sda etc etc.
>
>
> Whatever you do, make a backup
> before you play with filesystems.
>
> You might have to use the -f (force)
> parameter.


Yes and I will have to be very careful with this.

BTW, I have been reading this Upgrading and Repairing PCs: The ATA/IDE Interface and noticed the log shows my 2 TB Seagate SATA HDD is an ATA-8 device ([ 406.676372] ata1.00: ATA-8: ST2000LM003 HN-M201RAD, 2BC10005, max UDMA/133). Despite the UDMA/133, it should be able to deliver a whopping 600 MB/s throughput (according to the ATA-8 specs on the link), right? Yet, I can only get as much as 105 MB/s. So, do you and/or does anyone out here know if it is possible to tweak my 2 TB HDD to give a throughput much higher than 105 MB/s?
Re: NFS performance
May 14, 2016 10:56AM
habibie Wrote:
-------------------------------------------------------

> You are right that we have different versions of
> NFS. BTW and for a comparison, can you please post
> the maximum throughputs of your NFS connection?
> IIRC, your PogoPlug Pro runs on a debian ARM,
> doesn't it?

At the moment I'm still nowhere near my ARM boxes. I do, however, run the same setup across all my devices, so the outputs I provided above are consistent with what I run.

> So, do you and/or does anyone out
> here know if it is possible to tweak my 2 TB HDD
> to give a throughput much higher than 105 MBps?

Therein lies the million-dollar question.
You should be aware that even though the kernel has support for the SATA interface and utilises its driver, you still need to find a valid disk driver / kernel module to actually drive your disk at its maximum capability (much like M$), and these may not even exist for your disk.

You can, however, utilise hdparm to tweak the settings on your disk (these will differ for every disk) and re-run your tests. I believe the biggest gain you will make will be to use NFSv4. Remember you are also processor-bound, so you will never get the maximum throughput of your disk.
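A safe starting point (read-only queries; the device name here is just an example):

hdparm -I /dev/sda | grep -i udma   # list the UDMA modes the drive advertises (the active one is starred)
hdparm -tT /dev/sda                 # compare cached vs. buffered read timings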

As an example, here are the disks that I used before I migrated my servers to ARM:

Intel
Core2Duo, 8GB RAM, SATA II interface and a SATA II 3.5" drive >= 100 MB/s

ARM device(s)
Allwinner : Dual Core 1.2GHz, 1GB RAM, SATA II interface and a SATA III 2.5" drive ~ 60 MB/s
Oxnas : Dual Core 750MHz, 128MB RAM, SATA I interface and a SATA II 3.5" drive ~ 30 MB/s
Marvell : Single Core 800MHz, 256MB RAM, SATA I interface and a SATA II 2.5" drive ~ 30 MB/s

These are real-world figures I experienced.
Re: NFS performance
May 14, 2016 11:00PM
Gravelrash Wrote:
-------------------------------------------------------
> Oxnas : Dual Core 750Mhz, 128MB Ram, SATA
> I interface and a SATA II 3.5" drive ~ 30MB

I gathered this performance was from a PogoPlug Pro. If so, then it is comparable to mine.
Re: NFS performance
May 24, 2016 07:33PM
I found this post, which claims about 22 MB/s write to the SATA HDD.