My Pogo v3 and v4 Speed Test Results

Posted by JeffS
December 16, 2016 01:48PM
I'm testing the HDD read/write speeds when transferring a large media file across my local network. I have everything plugged into a gigabit switch, and I'm using iotop to get the numbers. Both Pogos are set up to run their OSes off the attached HDDs.

The v3 Pro is set up with Debian:
[root@PogoV3oxnas ~]# uname -a
Linux PogoV3oxnas 4.4.38-oxnas-tld-5 #1 SMP PREEMPT Sun Dec 11 17:32:48 PST 2016 armv6l GNU/Linux

The hdd is a 3.5" 500mb WD connected to the onboard sata connector.
[root@PogoV3oxnas ~]# lsblk -afm
NAME      FSTYPE LABEL  MOUNTPOINT NAME        SIZE OWNER GROUP MODE
sda                                sda       465.8G root  disk  brw-rw---T
|-sda1    ext3   rootfs /          |-sda1       10G root  disk  brw-rw---T
|-sda2    swap   swap   [SWAP]     |-sda2      512M root  disk  brw-rw---T
`-sda3    ext4   backup /hdd/sda3  `-sda3    455.3G root  disk  brw-rw---T
loop0                              loop0            root  disk  brw-rw---T
loop1                              loop1            root  disk  brw-rw---T
loop2                              loop2            root  disk  brw-rw---T
loop3                              loop3            root  disk  brw-rw---T
loop4                              loop4            root  disk  brw-rw---T
loop5                              loop5            root  disk  brw-rw---T
loop6                              loop6            root  disk  brw-rw---T
loop7                              loop7            root  disk  brw-rw---T
mtdblock0                          mtdblock0    14M root  disk  brw-rw---T
mtdblock1                          mtdblock1   114M root  disk  brw-rw---T

The HDD mount is the /hdd/sda3 line in the output below:
[root@PogoV3oxnas ~]# cat /proc/self/mountinfo
14 19 0:14 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
15 19 0:4 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
16 19 0:6 / /dev rw,relatime - devtmpfs udev rw,size=10240k,nr_inodes=14726,mode=755
17 16 0:12 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620
18 19 0:15 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=12104k,mode=755
19 0 8:1 / / rw,relatime - ext3 /dev/disk/by-label/rootfs rw,data=ordered
20 18 0:16 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,size=5120k
21 18 0:17 / /run/shm rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,size=129060k
22 19 0:18 / /tmp rw,relatime - tmpfs tmpfs rw
23 19 8:3 / /hdd/sda3 rw,relatime - ext4 /dev/sda3 rw,data=ordered
24 19 0:19 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs rpc_pipefs rw


The v4 mobile is running Arch.
[root@JeffsPogo2 ~]# uname -a
Linux JeffsPogo2 4.4.34-1-ARCH #1 PREEMPT Tue Nov 22 02:02:24 MST 2016 armv5tel GNU/Linux

The hdd is a Seagate 1TB Slim+ (USB 3 capable), connected to the USB 2 port.
[root@JeffsPogo2 ~]# lsblk -afm
NAME   FSTYPE LABEL          UUID                                 MOUNTPOINT NAME     SIZE OWNER GROUP MODE
sda                                                                          sda    931.5G root  disk  brw-rw----
|-sda1 ext3   Seagate-Root   b77d6532-1b83-4afc-a766-108ce4976de5 /          |-sda1   4.4G root  disk  brw-rw----
|-sda2 swap   swap           c01a8152-20c4-410d-b143-dc1a659639c3 [SWAP]     |-sda2   512M root  disk  brw-rw----
`-sda3 ext4   Seagate-Backup 969fdb6c-d4f1-4219-a605-e1c8d49fed29 /backup    `-sda3 926.4G root  disk  brw-rw----
loop0                                                                        loop0         root  disk  brw-rw----
loop1                                                                        loop1         root  disk  brw-rw----
loop2                                                                        loop2         root  disk  brw-rw----
loop3                                                                        loop3         root  disk  brw-rw----
loop4                                                                        loop4         root  disk  brw-rw----
loop5                                                                        loop5         root  disk  brw-rw----
loop6                                                                        loop6         root  disk  brw-rw----
loop7                                                                        loop7         root  disk  brw-rw----

The HDD mount is the /backup line in the output below:
[root@JeffsPogo2 ~]# cat /proc/self/mountinfo
15 0 8:1 / / rw,relatime shared:1 - ext3 /dev/root rw,stripe=8191,data=ordered
16 15 0:6 / /dev rw,relatime shared:2 - devtmpfs devtmpfs rw,size=59848k,nr_inodes=14962,mode=755
17 15 0:15 / /sys rw,nosuid,nodev,noexec,relatime shared:5 - sysfs sysfs rw
18 15 0:4 / /proc rw,nosuid,nodev,noexec,relatime shared:9 - proc proc rw
19 17 0:16 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:6 - securityfs securityfs rw
20 16 0:17 / /dev/shm rw,nosuid,nodev shared:3 - tmpfs tmpfs rw
21 16 0:13 / /dev/pts rw,nosuid,noexec,relatime shared:4 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
22 15 0:18 / /run rw,nosuid,nodev shared:10 - tmpfs tmpfs rw,mode=755
23 17 0:19 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:7 - tmpfs tmpfs ro,mode=755
24 23 0:20 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:8 - cgroup cgroup rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
25 23 0:21 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:11 - cgroup cgroup rw,freezer
26 23 0:22 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:12 - cgroup cgroup rw,cpuset
27 23 0:23 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:13 - cgroup cgroup rw,devices
28 23 0:24 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,blkio
29 23 0:25 / /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,net_cls,net_prio
30 23 0:26 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,perf_event
31 23 0:27 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,memory
32 23 0:28 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:18 - cgroup cgroup rw,cpu,cpuacct
33 18 0:29 / /proc/sys/fs/binfmt_misc rw,relatime shared:19 - autofs systemd-1 rw,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct
34 17 0:7 / /sys/kernel/debug rw,relatime shared:20 - debugfs debugfs rw
35 17 0:30 / /sys/fs/fuse/connections rw,relatime shared:21 - fusectl fusectl rw
36 16 0:14 / /dev/mqueue rw,relatime shared:22 - mqueue mqueue rw
61 15 0:31 / /tmp rw,nosuid,nodev shared:23 - tmpfs tmpfs rw
60 15 8:3 / /backup rw,noatime shared:24 - ext4 /dev/sda3 rw,stripe=8191,data=ordered
175 22 0:34 / /run/user/0 rw,nosuid,nodev,relatime shared:133 - tmpfs tmpfs rw,size=11996k,mode=700


For my initial testing, I used the Thunar file browser in sftp mode. Thunar lets me connect to remote computers on my local network in a nice GUI environment to manipulate files. Unfortunately, this method limited file transfer speeds to around 4 M/s for both read and write according to iotop. These results were pretty disappointing, so I dug in to try to figure out why it was so slow. It turns out that file transfers over ssh put a lot of overhead on the CPU for encryption, and there are some serious bottlenecks built into ssh itself.
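(Side note: a quick way to see which cipher a given ssh session actually negotiates is to run a throwaway command with verbose output — a sketch, with my pogo's IP as a placeholder:)

[jeff@Arch2014p9 ~]$ ssh -v root@192.168.2.160 true 2>&1 | grep -i cipher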

A few small samples from the v4 using sftp:

Downloading:
  526 be/4 jeff        0.00 B/s    2.38 M/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        0.00 B/s    4.54 M/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        0.00 B/s    4.43 M/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        0.00 B/s    4.44 M/s  0.00 %  0.10 % sftp-server
  526 be/4 jeff        0.00 B/s    4.44 M/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        0.00 B/s    4.44 M/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        0.00 B/s    4.39 M/s  0.00 %  0.08 % sftp-server
  526 be/4 jeff        0.00 B/s    4.31 M/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        0.00 B/s    4.34 M/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        0.00 B/s    4.26 M/s  0.00 %  3.38 % sftp-server

Uploading:
  526 be/4 jeff        4.82 M/s    0.00 B/s  0.00 %  7.63 % sftp-server
  526 be/4 jeff        4.12 M/s    0.00 B/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        4.14 M/s    0.00 B/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        4.14 M/s    0.00 B/s  0.00 %  0.25 % sftp-server
  526 be/4 jeff        4.01 M/s    0.00 B/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        4.05 M/s    0.00 B/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        4.05 M/s    0.00 B/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        3.96 M/s    0.00 B/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        4.08 M/s    0.00 B/s  0.00 %  0.00 % sftp-server
  526 be/4 jeff        4.22 M/s    0.00 B/s  0.00 %  0.00 % sftp-server

These next results were created after making a few changes: switching to a less CPU-intensive encryption algorithm, arcfour, and using secure copy (scp) from the command line for file transfers. Debian allows arcfour out of the box, whereas Arch required me to enable it. This made a big difference, although the numbers still indicate pretty slow transfers. I think both Pogos are still being bottlenecked by the encryption overhead. The v3 seems to have a slight edge in speed, possibly because its dual-core processor is more powerful than the v4's.
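(For anyone on Arch wanting to do the same: enabling arcfour amounts to adding it to the server's accepted cipher list — a minimal sketch, assuming an OpenSSH version that still ships arcfour:)

# /etc/ssh/sshd_config on the pogo -- append arcfour to the accepted ciphers
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour
# then restart the daemon so it takes effect
systemctl restart sshd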

I used this to transfer a file from my main x86 box to the pogo v3:
[jeff@Arch2014p9 ~]$ scp -C -c arcfour /mnt/1TB-WDHD/1TB-MOVIES/Boyhood.2014.mp4 root@192.168.2.92:/hdd/sda3

A sample of the results:
2845 be/4 root        0.00 B/s    8.78 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s    8.81 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s    9.40 M/s  0.00 %  0.08 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s    9.64 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   11.60 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s    9.95 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   10.39 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   10.74 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   10.36 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   11.25 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   10.96 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   10.81 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   10.56 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   10.86 M/s  0.00 %  0.63 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   10.81 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   11.17 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   10.21 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s    9.99 M/s  0.00 %  0.07 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s    9.91 M/s  0.00 %  0.00 % scp -t /hdd/sda3
 2845 be/4 root        0.00 B/s   11.32 M/s  0.00 %  0.00 % scp -t /hdd/sda3

So based on what I have so far, it seems encryption overhead is still an issue. Switching to a weaker encryption algorithm sped things up by nearly 3 times. How would eliminating it altogether affect speed? I spent some time trying to use a lower-overhead method that would allow network file transfers with no encryption. Linux remote copy, or rcp, would seem to be the perfect solution if not for being somewhat deprecated and lacking documentation on setup, at least that I could find. I couldn't get it working in the end. I should also say rcp should only be used on a local network, since it has zero security built in.

There are many options available for moving files around a network in Linux. I've intentionally avoided SMB/Samba because I don't have any Windows boxes, and Samba seems like something band-aided onto Linux for Windows.

With all that said, I will be using rsync for remote backups to one of the Pogos. It's slow, but speed really doesn't matter in this case, and after the first backup, only the changes will be sent.
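(A sketch of the kind of rsync invocation I mean — the paths are placeholders; -a preserves attributes, -z compresses in transit, and repeat runs only send the deltas:)

[jeff@Arch2014p9 ~]$ rsync -az --delete /home/jeff/important/ root@192.168.2.92:/hdd/sda3/backup/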

A NAS box is a whole different situation: faster is better. What are you Linux users using for a network file transfer protocol, and any details?

--------------------------------------------------------------
Blog: www.jeffstory.org
Re: My Pogo v3 and v4 Speed Test Results
December 16, 2016 02:31PM
My transfer numbers are near identical on Gigabit LAN copying files between (1) a Pogoplug E02, (2) a router with a USB 3 external WD 1TB & fast EVO SD card, and (3) this Windows ThinClient. In every direction. My Pogoplug SAMBA is probably unencrypted.

JeffS, are you using a router in the middle? -update- ahh, you're using a switch. Okay, my router includes a dedicated switch, so now we know it's probably not that. Many times on Windows machines the problem is antivirus scanning everything and slowing it down, but I turn it off for some file transfers, as I did this time.

The area I'm most blindsided by concerns USB devices in general. It seems USB requires processor meddling, and this Thin Client is a 1.3GHz unicore.

JeffS, are you in a position to try speed experiments with no USB at all? This would require copying your USB stick to a partition on the SATA drive and using something other than a USB device at the end of the chain. My strictly-intuitive hunch is that the USB protocol requires more CPU.

=========
-= Cloud 9 =-



Edited 1 time(s). Last edit at 12/16/2016 03:42PM by JoeyPogoPlugE02.
Re: My Pogo v3 and v4 Speed Test Results
December 16, 2016 02:40PM
JeffS,

> With all that said, I will be using rsync for
> remote backups to one of the Pogos. It's slow but
> speed really doesn't matter for in this case, and
> after the first backup, only the changes will be
> sent.
>
> A NAS box is a whole different situation, faster
> is better. What are you Linux users using for
> network file transfer protocol and any details?

If you need speed, use NFS. And of course, use NFS inside your local network only. And rsync or copy files across.

Don't use Samba or other protocols. Samba is good for compatibility with Windows, and that's the only reason we should be using Samba at all.


Update:

What Joey said. The true test is only valid if you have a 1 Gbit switch in the middle, and no other router/switch in between.

-bodhi
===========================
Forum Wiki
bodhi's corner (buy bodhi a beer)



Edited 1 time(s). Last edit at 12/16/2016 02:43PM by bodhi.
Re: My Pogo v3 and v4 Speed Test Results
December 16, 2016 05:30PM
JoeyPogoPlugE02 Wrote:
-------------------------------------------------------
> My transfer numbers are near identical on Gigabit
> LAN copying files between (1. Pogoplug E02, (2.
> router with USB 3 external WD 1TB & fast EVO SD
> card and (3. this Windows ThinClient. In every
> direction. My Pogoplug SAMBA is probably
> unencrypted.
>
Did you mean you could attain about a 1Gb/s transfer rate copying files between your computer and your storage attached to a USB3 port on your router? That's blazing fast.
Re: My Pogo v3 and v4 Speed Test Results
December 16, 2016 05:34PM
@JeffS: What does "M/s" stand for, i.e. Mbps or MBps? Also, you may wanna run 'top' utility to see what eats the CPU cycles while doing the transfer.
Re: My Pogo v3 and v4 Speed Test Results
December 16, 2016 05:50PM
> 1Gb/s transfer rate

:)) ROTFL.

-bodhi
===========================
Forum Wiki
bodhi's corner (buy bodhi a beer)
Re: My Pogo v3 and v4 Speed Test Results
December 16, 2016 06:21PM
habibie Wrote:
-------------------------------------------------------
> Did you mean you could attain about 1Gb/s transfer
> rate in copying files between your computer and
> your storage attached to a USB3 port on your
> router? That's blazing fast.

Absolutely not. In fact, it's rare I've exceeded USB 2.0 speeds from the router, which is a D-Link DIR-857 that I believe includes an Atheros AR8327N switch.

Between two PCs with 3.5" hard drives, yes, I can get gigabit (over 100MB/sec).

A lot of this stuff is magic to me, but it goes to show it's a good idea to have multiple OSes in order to diagnose hardware bottlenecks. I had no idea how important the CPU is to USB 3 and SATA speeds. Conclusion: efficient small CPUs = slow transfer speed. Quadcore+ = much faster.

So before I get too far off topic, that's one reason I'm interested in seeing a USB 3.0 port on a Pogoplug 3. If it has a spiffy controller maybe it can break free of possible CPU restrictions. Just thinking out loud so I'll slip away and let this topic be about JeffS's configurations.

=========
-= Cloud 9 =-
Re: My Pogo v3 and v4 Speed Test Results
December 16, 2016 06:47PM
JoeyPogoPlugE02 Wrote:
-------------------------------------------------------
> Between two PCs with 3.5" hard drives yes I can
> get gigabit (over 100MB/sec).
>
Wow, and that is a good rate. IIRC, I can only get as much as 15 MBps between two Linux computers connected on a Gigabit LAN using the FTP protocol. Right now, I get about 10.5 MBps download speed from the OpenSuSE server (just downloaded the OpenSuSE 42.4 DVD).
Re: My Pogo v3 and v4 Speed Test Results
December 16, 2016 07:05PM
I have a DSL modem/router in [router bypass mode] plugged into a D-Link router, with a 5-port gigabit switch attached to it. The devices I'm speed testing are all directly connected [using gig-rated network cables] to the gigabit switch.

To write to the Pogo device: Media starts at my x86 quad core Linux box, through the gigabit switch, to a pogo device.

To read from the Pogo device: Media starts at the Pogo device, through the gigabit switch, to the x86 quad core Linux box.

I have not tried NFS, but will give it a try next.

I feel there is a lot more file transfer speed potential in the Pogos. I just need to stay away from any protocol that uses ssh, with its high CPU overhead from encryption and decryption. I don't need that level of security on my local network. At the same time, I'd like to make sure whatever protocol is best for speed is also user-friendly. Not having to use the command line for every file transfer would be high on my list of priorities.

--------------------------------------------------------------
Blog: www.jeffstory.org
Re: My Pogo v3 and v4 Speed Test Results
December 16, 2016 07:16PM
JeffS,

> protocol is best for speed, is also user friendly
> to use. Not having to use the command line for
> every file transfer would be high on my list of
> priories.

Sure. NFS is supported on all Linux distros and Mac OSX, so friendliness is not an issue with those. All GUI-based stuff.

See the Wiki thread:
http://forum.doozan.com/read.php?2,23630

Quote

NFS

NFS - HowTo set up NFS shares (and boot NFS rootfs)
Boot your Dockstar (and other plugs) using NFS rootfs

-bodhi
===========================
Forum Wiki
bodhi's corner (buy bodhi a beer)



Edited 1 time(s). Last edit at 12/16/2016 07:18PM by bodhi.
Re: My Pogo v3 and v4 Speed Test Results
December 17, 2016 12:01AM
Hey bodhi, do nice settings affect anything to do with whatever you need most for transferring?

For instance, in Windows there are ways to prioritize what's needed, and other ways to automate it. Then there are jumbo frame settings, if everything supports it, which for large files make it blaze.

Maybe this is a good time to ask: in Linux is it possible to make a shortcut that changes a bunch of nice settings to prioritize transfers and another shortcut that reverts back? In Windows this would be a batch file (*.bat) where you list 20 things at once and it can have a considerable effect.

=========
-= Cloud 9 =-
Re: My Pogo v3 and v4 Speed Test Results
December 17, 2016 12:16AM
habibie Wrote:
-------------------------------------------------------
> @JeffS: What does "M/s" stand for, i.e. Mbps or
> MBps? Also, you may wanna run 'top' utility to see
> what eats the CPU cycles while doing the transfer.

I ran htop on the v3 while transferring files using sftp. It showed 100% usage of one CPU and ~80% on the other. As for what is using it, I'd have to test again. I didn't record it and don't recall off the top of my head.

I'm honestly not sure, and was hoping someone here would know what "M/s" stands for. I saw iotop recommended somewhere here on the forums; I have not looked into it much yet, and the man page gives only a clue at best. Here's the command I used, with the explanation from the man page:

# iotop -o -b -qqq
Quote
man iotop
-o, --only
Only show processes or threads actually doing I/O, instead of showing all processes or threads. This can be dynamically toggled by pressing o.

-b, --batch
Turn on non-interactive mode. Useful for logging I/O usage over time.

-q, --quiet
suppress some lines of header (implies --batch). This option can be specified up to three times to remove header lines.
-q

column names are only printed on the first iteration,

-qq

column names are never printed,

-qqq

the I/O summary is never printed.


A clue to the M/s question.

Quote
iotop man
-k, --kilobytes
Use kilobytes instead of a human friendly unit. This mode is useful when scripting the batch mode of iotop. Instead of choosing the most
appropriate unit
iotop will display all sizes in kilobytes.
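(So re-running the capture with -k added would sidestep the unit question entirely, since everything is then reported in kilobytes per second:)

# iotop -o -b -k -qqq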

Another clue to the M/s question. This is transferring a media file from one HDD to another on my 5+ year old x86 quad core (with SATA 2, I think).

19916 be/4 jeff        4.92 M/s    4.71 M/s  0.00 %  2.63 % Thunar --daemon [pool]
19916 be/4 jeff      171.05 M/s  171.05 M/s  0.00 % 60.98 % Thunar --daemon [pool]
19916 be/4 jeff      170.54 M/s  170.62 M/s  0.00 % 58.92 % Thunar --daemon [pool]
19916 be/4 jeff      169.42 M/s  169.35 M/s  0.00 % 58.17 % Thunar --daemon [pool]
19916 be/4 jeff      170.54 M/s  170.54 M/s  0.00 % 59.20 % Thunar --daemon [pool]
19916 be/4 jeff      128.82 M/s  128.91 M/s  0.00 % 45.45 % Thunar --daemon [pool]
19916 be/4 jeff      128.98 M/s  128.88 M/s  0.00 % 66.35 % Thunar --daemon [pool]
19916 be/4 jeff      169.75 M/s  169.75 M/s  0.00 % 59.05 % Thunar --daemon [pool]
19916 be/4 jeff      169.92 M/s  169.92 M/s  0.00 % 60.28 % Thunar --daemon [pool]
19916 be/4 jeff      114.68 M/s  114.74 M/s  0.00 % 72.37 % Thunar --daemon [pool]
19916 be/4 jeff       84.53 M/s   84.58 M/s  0.00 % 77.85 % Thunar --daemon [pool]
19916 be/4 jeff       76.85 M/s   76.85 M/s  0.00 % 78.86 % Thunar --daemon [pool]
19916 be/4 jeff       69.02 M/s   68.92 M/s  0.00 % 83.25 % Thunar --daemon [pool]
19916 be/4 jeff       47.14 M/s   47.13 M/s  0.00 % 62.14 % Thunar --daemon [pool]
19916 be/4 jeff       77.18 M/s   77.25 M/s  0.00 % 80.84 % Thunar --daemon [pool]
19916 be/4 jeff       22.79 M/s   22.95 M/s  0.00 % 32.38 % Thunar --daemon [pool]

Based on the results of my x86 and the info below, I think it's safe to say that "M/s" as used in iotop is actually MB/s, and not Mb/s. 170 MB/s = 1360 Mbps.

Quote
sata specs
SATA II (revision 2.x) interface, formally known as SATA 3Gb/s, is a second generation SATA interface running at 3.0 Gb/s. The bandwidth throughput, which is supported by the interface, is up to 300MB/s.


--------------------------------------------------------------
Blog: www.jeffstory.org



Edited 2 time(s). Last edit at 12/17/2016 12:48AM by JeffS.
Re: My Pogo v3 and v4 Speed Test Results
December 17, 2016 02:28AM
Joey,

> Hey bodhi do nice settings affect anything
> to do with whatever you need most for
> transferring?

It does if there are other jobs running. If there are no other user-space jobs running, then it should not matter much. The kernels I built are preemptive, so nice will be quite useful if you usually have 2 or more things running and want to set the priority of a particular application. The "nicest" application, i.e. the one with a lower positive priority value (0-19), will be assigned all the time it needs, whenever it needs it.

> Then there's jumbo frame settings if
> everything supports it, which for large files make
> it blaze.

FYI, jumbo frames are NIC-dependent. These small plugs don't have them.

> Maybe this is a good time to ask: in Linux is it
> possible to make a shortcut that changes a bunch
> of nice settings to prioritize transfers and
> another shortcut that reverts back? In Windows
> this would be a batch file (*.bat) where you list
> 20 things at once and it can have a considerable
> effect.

The Linux equivalent of a Windows .bat file is a shell script (and a shell script is much more powerful). So sure, one shell script would do those 2 things easily.
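(A minimal sketch of what such a script could look like — the process name and priority values are just examples:)

#!/bin/sh
# boost-transfer.sh: bump CPU and I/O priority of the transfer daemon (run as root)
for pid in $(pgrep -x sshd); do
    renice -n -10 -p "$pid"      # nice runs from -20 (highest priority) to 19 ("nicest")
    ionice -c 2 -n 0 -p "$pid"   # best-effort I/O class, highest level
done
# a companion script with "renice -n 0" and "ionice -n 4" would revert the change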

-bodhi
===========================
Forum Wiki
bodhi's corner (buy bodhi a beer)
Re: My Pogo v3 and v4 Speed Test Results
December 17, 2016 07:35AM
JeffS Wrote:
-------------------------------------------------------
> I ran htop on the v3 while transferring files using sftp. It showed 100% usage of one CPU and ~80% on the other. As for what is using it, I'd have to test again. I didn't record it and don't recall off the top of my head.
>
Sounds like you are right that encryption ate up all the CPU cycles.

> Another clue to the M/s question. This is transferring a media file from one HDD to another on my 5+ year old x86 quad core (with SATA 2, I think).
>
> [16 lines of iotop output snipped -- quoted in full above]
>
> Based on the results of my x86 and the info below, I think it's safe to say that "M/s" as used in iotop is actually MB/s, and not Mb/s. 170 MB/s = 1360 Mbps
>
I would think so. Otherwise, your 4 M/s throughput would equate to about 500 KBps.

I dug up my old posts about NFS performance on a PogoPlug and found the R/W throughputs were about 27/18 MBps, respectively, using the f3 utility. I was rather disappointed and am hoping it can be improved significantly.
Re: My Pogo v3 and v4 Speed Test Results
December 17, 2016 12:29PM
This is getting more interesting as I dig in!

Hardware encryption acceleration on the Marvell Kirkwood

https://linuxengineering.wordpress.com/2014/08/03/performance-tuning-with-pogoplug-v4/

--------------------------------------------------------------
Blog: www.jeffstory.org
Re: My Pogo v3 and v4 Speed Test Results
December 17, 2016 03:02PM
JeffS,

Also see in the Wiki thread:

Hardware Cryptography 

Marvell CESA (also see correction post in this thread) 
Marvell CESA in kernel 4.4 performance

-bodhi
===========================
Forum Wiki
bodhi's corner (buy bodhi a beer)
Re: My Pogo v3 and v4 Speed Test Results
December 17, 2016 06:39PM
Those are great reads, you guys; they explained the areas I didn't know and had to interpolate.

There might be another trick with the Pogo Mobile - because it's Kirkwood, I was able to copy my E02 Debian stick and it booted with few hassles in the P4-Mobile. Bodhi then gave me a tiny bit of code that made it keep working perfectly. That's where I left off. I don't know if that code touched NAND for booting prowess or identified Debian better.

So I might try that with my E02 Arch stick in the coming days. At the moment I'm trying to talk myself out of it, because I'd need to copy to an SD card in order to test the USB slot, or solder that SATA cable internally. And get a tackle box for the USB sticks, labeled and categorized.

These boxes are SO cool though! I just got done downloading 189 Reaper tutorial videos at 23.2 GB on a slow line, and the real-world educational value is in the thousands of dollars compared to not having this information. True, the videos are free, but time not wasted is priceless. And to think I griped about paying $19 for a new E02... :-)

=========
-= Cloud 9 =-



Edited 2 time(s). Last edit at 12/17/2016 06:50PM by JoeyPogoPlugE02.
Re: My Pogo v3 and v4 Speed Test Results
December 18, 2016 02:59AM
OK, I got the encryption hardware working in Arch, but I have not been able to connect via scp using it from the command line of my x86 desktop. It seems to have something to do with the difference between "aes-128-cbc" (the name used by OpenSSL on the pogo) and "aes128-cbc" (the name used by OpenSSH).


This is what happened initially. I thought I just needed to add the cipher to sshd.
[jeff@Arch2014p9 ~]$ scp -c aes128-cbc /mnt/1TB-WDHD/1TB-MOVIES/Django.Unchained.2012.mp4 root@192.168.2.160:/backup/test
Unable to negotiate with 192.168.2.160 port 22: no matching cipher found. Their offer: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,none
lost connection

This is what I got after adding the aes128-cbc cipher to sshd_config and restarting sshd.
[jeff@Arch2014p9 ~]$ scp -c aes128-cbc /mnt/1TB-WDHD/1TB-MOVIES/Django.Unchained.2012.mp4 root@192.168.2.160:/backup/test
Connection closed by 192.168.2.160 port 22
lost connection

This is what I got when I tried to use the exact same cipher name I tested on the pogo below. If I add aes-128-cbc to sshd_config, sshd refuses to start.
[jeff@Arch2014p9 ~]$ scp -c aes-128-cbc /mnt/1TB-WDHD/1TB-MOVIES/Django.Unchained.2012.mp4 root@192.168.2.160:/backup/test
Unknown cipher type 'aes-128-cbc'
lost connection
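(The two spellings really do belong to two different programs; you can check each program's own list — a sketch, assuming OpenSSL 1.0.2's command layout:)

[root@JeffsPogo2 ~]# openssl list-cipher-algorithms | grep -i AES-128-CBC
[root@JeffsPogo2 ~]# ssh -Q cipher | grep aes128-cbc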


This test confirms the HW encryption is working. The CPU-time numbers were in the 3.00s range prior to implementing the changes; with the crypto engine doing the work, the measured CPU time drops toward zero, which is also why the 8192-byte figure below overflows to "infk".
[jeff@JeffsPogo2 ~]$ openssl speed -evp aes-128-cbc
Doing aes-128-cbc for 3s on 16 size blocks: 45456 aes-128-cbc's in 0.08s
Doing aes-128-cbc for 3s on 64 size blocks: 42876 aes-128-cbc's in 0.03s
Doing aes-128-cbc for 3s on 256 size blocks: 36253 aes-128-cbc's in 0.06s
Doing aes-128-cbc for 3s on 1024 size blocks: 22090 aes-128-cbc's in 0.02s
Doing aes-128-cbc for 3s on 8192 size blocks: 5557 aes-128-cbc's in 0.00s
OpenSSL 1.0.2h  3 May 2016
built on: reproducible build, date unspecified
options:bn(64,32) rc4(ptr,char) des(idx,cisc,16,long) aes(partial) idea(int) blowfish(ptr) 
compiler: gcc -I. -I.. -I../include  -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -DHAVE_CRYPTODEV -DHASH_MAX_LEN=64 -Wa,--noexecstack -D_FORTIFY_SOURCE=2 -march=armv5te -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -Wl,-O1,--sort-common,--as-needed,-z,relro -O3 -Wall -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DAES_ASM -DBSAES_ASM -DGHASH_ASM
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc       9091.20k    91468.80k   154679.47k  1131008.00k         infk

Here are the results of trying to run the above test using aes128-cbc (the OpenSSH spelling).
[root@JeffsPogo2 ~]# openssl speed -evp aes128-cbc
aes128-cbc is an unknown cipher or digest

Here's a current list of ciphers available in ssh.
[jeff@JeffsPogo2 ~]$ ssh -Q cipher localhost
3des-cbc
blowfish-cbc
cast128-cbc
arcfour
arcfour128
arcfour256
aes128-cbc
aes192-cbc
aes256-cbc
rijndael-cbc@lysator.liu.se
aes128-ctr
aes192-ctr
aes256-ctr
aes128-gcm@openssh.com
aes256-gcm@openssh.com
chacha20-poly1305@openssh.com


I gave up on trying to figure out how to connect using the HW encryption for the time being and focused on the hpn-ssh patch. This patch allows turning ssh encryption off. I got this working too, with rather disappointing results. It helped, but the CPU still maxes out at 100%, limiting speeds. It was about a wash, speed-wise, with the arcfour algorithm. So this test confirms there is more going on than just the ssh encryption overhead. I'm not sure what it is at this point. The info I've read on this is mostly over my head, but I'm a pretty persistent bugger! Htop shows sshd as the culprit; I need to narrow it down a bit more though. I could have set up NFS, had the speed, and put this all to bed, but what fun would that be compared to this? There is a bunch of tuning available for ssh that comes with the hpn-ssh patch, though it sounds like most of it will not apply to a short local network hop as in our use case.


CPU[||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||100.0%]   Tasks: 20, 1 thr; 3 running
  Mem[||||||||||||||||||||||||||||||||||||||||||||||||||||||||||21.8M/117M]   Load average: 1.36 1.06 0.51 
  Swp[                                                             0K/512M]   Uptime: 00:27:45

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
  295 root       20   0 16544  3788   824 R 85.4  3.2  4:51.28 sshd: root@notty
  306 root       20   0  4496  3048  2208 R  1.9  2.5  0:01.84 htop
  279 root       20   0  4076  2068  1732 S  0.0  1.7  0:00.08 bash
    1 root       20   0  9272  1696   712 S  0.0  1.4  0:04.51 /sbin/init
  135 root       20   0 39596  1676  1308 S  0.0  1.4  0:00.99 /usr/lib/systemd/systemd-journald
  271 root       20   0 13080  1568   364 S  0.0  1.3  0:00.00 (sd-pam)
  262 root       20   0 14476  1308   532 S  0.0  1.1  0:00.69 sshd: root@pts/0
  264 root       20   0  8920  1292   624 S  0.0  1.1  0:00.35 /usr/lib/systemd/systemd --user
  209 root       20   0  7424  1068   700 S  0.0  0.9  0:00.16 /usr/lib/systemd/systemd-logind
  207 systemd-n  20   0 15900   976   576 S  0.0  0.8  0:01.16 /usr/lib/systemd/systemd-networkd
  204 systemd-t  20   0 17448   960   544 S  0.0  0.8  0:00.07 /usr/lib/systemd/systemd-timesyncd
  201 systemd-t  20   0 17448   960   544 S  0.0  0.8  0:00.21 /usr/lib/systemd/systemd-timesyncd
  290 root       20   0  7404   940   532 S  0.0  0.8  0:00.06 /usr/bin/sshd -D
  217 systemd-r  20   0  7504   888   536 S  0.0  0.7  0:00.18 /usr/lib/systemd/systemd-resolved
  205 dbus       20   0  6808   792   420 S  0.0  0.7  0:00.45 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
  156 root       20   0 13680   788   420 S  0.0  0.7  0:00.45 /usr/lib/systemd/systemd-udevd
  278 root       20   0  6916   668   328 S  0.0  0.6  0:00.03 su
  275 root       20   0  4076   624   296 S  0.0  0.5  0:00.05 -bash
  302 root       20   0  2500   464   348 R 10.8  0.4  0:37.93 scp -t /backup/test/
  223 root       20   0  2480   400   284 S  0.0  0.3  0:00.02 /sbin/agetty --noclear tty1 linux
  222 root       20   0  2480   396   284 S  0.0  0.3  0:00.01 /sbin/agetty --keep-baud 115200,38400,9600 ttyS0 vt220
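(For reference, this is roughly how the no-encryption mode gets switched on with the HPN patch — a sketch, assuming the stock HPN-SSH option names:)

# /etc/ssh/sshd_config on the HPN-patched pogo
NoneEnabled yes

# client side: request the none cipher for the bulk data (authentication stays encrypted)
[jeff@Arch2014p9 ~]$ scp -oNoneEnabled=yes -oNoneSwitch=yes bigfile root@192.168.2.160:/backup/test/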

--------------------------------------------------------------
Blog: www.jeffstory.org
Re: My Pogo v3 and v4 Speed Test Results
December 18, 2016 11:53PM
JeffS Wrote:
-------------------------------------------------------
The info I've
> read on this is mostly over my head, but I'm a
> pretty persistent bugger!

That's me to a "T" when some factors are familiar, but in this case I'm reading it and hoping it sinks in.

So what would happen if you tried another network adapter? Like in the Pogo 3: take the mPCIe slot that holds the wireless and drop in an mPCIe-to-ethernet adapter? Actually bodhi soon, and me later, can try that, at least with a USB-to-ethernet adapter, but no telling if that can bypass encryption.

There's another school of thought I just remembered. The Banana Pi is said to have Gigabit LAN, but the fine print says that in actuality it's the same speed as USB 2.0 because of the internal bus. I wonder if there is a lowest common denominator that forces the speed lower in Pogoplugs?

* hey are there hardware schematics around (or thereabouts) to see if there's either an impasse or a workaround?

=========
-= Cloud 9 =-
Re: My Pogo v3 and v4 Speed Test Results
December 19, 2016 12:32AM
> There another school of thought I just remembered.
> In Banana Pis, it's said to have Gigabit LAN, but
> in the fine print it says in actuality it's the
> same speed as USB 2.0 because of the internal bus.
> I wonder if there is a lowest common denominator
> that forces the speed lower in Pogoplugs?

No. All Pogoplug (Kirkwood or OXNAS) Gbits are real Gbits. In general, the Pis are limited to USB speed, but the later-generation rPi seems to have eliminated this limitation.

-bodhi
===========================
Forum Wiki
bodhi's corner (buy bodhi a beer)
Re: My Pogo v3 and v4 Speed Test Results
December 19, 2016 11:34PM
What about the storage format this is being written to and read from? I don't know anything about Linux formats: Ext2 or 3 (3 is journaled though, right?). But if there are any hand-me-downs from Windows, I recall two watershed moments with Windows formats and file access.

One was when Windows stopped time-stamping every file read and written to. This is also a retro-hack that makes XP fly. I don't know if there's a Linux equivalent, or if it's advisable. The other thing is indexing. Now if I have a Windows install, the first thing I do is right-click a drive and un-tick "Allow files on this drive to have contents indexed in addition to file properties".

I wish there were a book that shows the similarities/happy coincidences between Linux and Windows, because many concepts would then be approachable. Major things like how efficiency is lost to encryption (or anything that's unnecessary), or, as with Windows indexing, features some people like me don't use or appreciate... worth looking into.

=========
-= Cloud 9 =-
Re: My Pogo v3 and v4 Speed Test Results
December 20, 2016 07:22AM
JoeyPogoPlugE02 Wrote:
-------------------------------------------------------
> I wish there were a book that shows
> similarities/happy coincidences between Linux and
> Windows, because many concepts would be
> approachable. But major things like how efficiency
> is lost to encryption (anything that's
> unnecessary) or like Windows, Indexing that some
> people like me don't use nor appreciate... worth
> looking into.
>
I don't know anything about a Windows FS. But if you have some interest in a Linux FS, then you may wanna read more about an i-node (intro or this). Once you understand a Linux FS, you can understand more about their similarities and/or differences.
Re: My Pogo v3 and v4 Speed Test Results
December 20, 2016 10:00AM
habibie Wrote:
-------------------------------------------------------
> Once
> you understand about a Linux FS, then you can
> understand more about their similarities and/or
> differences.

Perfect for my level of understanding. I scrounged up Linux For Dummies, thinking that ought to be my skill level, and it's:
Q: What kind of bird is the mascot for Linux?
A: I don't know but it's got a yellow beak, Dummy.

Thanks Habibie, permanently bookmarked :-)

=========
-= Cloud 9 =-



Edited 1 time(s). Last edit at 12/20/2016 10:09AM by JoeyPogoPlugE02.
Re: My Pogo v3 and v4 Speed Test Results
December 20, 2016 01:29PM
I've put trying to speed up sftp and scp, which utilize ssh, on the back burner for now. I read, "If you want fast network file transfers, don't use ssh." The fact is, ssh, or secure shell, was designed to be secure, not fast. I really like sftp because I've used it for years, it's dead simple to use with very little setup, and my file browser, Thunar, seamlessly integrates sftp and remote file browsing.

I may focus on rcp and rsh (remote copy and remote shell) sometime down the road when I have more time. The problem with these is that they are said to be "somewhat deprecated" at this point, likely because the protocol is so insecure and secure (but slower) options are readily available today. I wasn't willing to spend enough time to get them set up and working, and I don't know if they're file browser compatible or a CLI-only deal.

Currently, I've set up NFS on my Arch boxes. I used v4, which runs without the need for rpcbind. My thinking is this may remove some overhead on the pogos, but it's just a hunch. As I understand it, NFS is basically a "server setup" that serves a file system to be mounted by a remote computer for access. Once mounted, file browsers have access to it as an addition to the local file system. This initially seemed like a really inefficient, complex way to enable access to remote file systems compared to the other options, but it has proven to be obviously faster than anything I've tried so far. Benchmarking with iotop has been problematic though; it's erroneously reporting numbers in the triple digits, so I won't bother posting them. With that said, imagine how exotic and awesome NFS was in 1984!
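(Roughly what that looks like with my paths — the export options are examples, assuming nfs-utils on both ends:)

# /etc/exports on the pogo
/backup  192.168.2.0/24(rw,async,no_subtree_check)
# reload the export table after editing
exportfs -ra

# on the x86 client: vers=4 avoids the rpcbind dependency
mount -t nfs -o vers=4 192.168.2.160:/backup /nfs/dir.192.168.2.160/backup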


Off Topic:
Computer technology is a fast-moving target for standards, protocols, programmers, users, etc. The guts of Linux support this idea. In the big picture, readily available, cheap, home (and homemade) NAS servers are newcomers to the scene. My thought is that a simple, fast solution for home NAS has not yet materialized for Linux. With cloud storage becoming more common and even free, and with faster internet access becoming available and cheaper, home NAS may become obsolete before it has a chance to mature over decades, as many other technologies have.

It seems to me the future is headed towards the home personal computer as we know it being replaced by something more similar to, and as locked down as, a smartphone, rather than what our community of computer geeks prefers to use now.

--------------------------------------------------------------
Blog: www.jeffstory.org



Edited 1 time(s). Last edit at 12/20/2016 01:35PM by JeffS.
Re: My Pogo v3 and v4 Speed Test Results
December 20, 2016 03:07PM
> Computer technology is a fast moving target for
> standards, protocols, programmers, users, etc...
> The guts of Linux support this idea. In the big
> picture, readily available, cheap, home (and home
> made) NAS servers are a newcomer to the scene. My
> thoughts are a simple, fast solution to utilize
> home NAS has not yet materialized for Linux. With
> cloud online storage becoming more common and even
> free, and as faster internet access becomes
> available and cheaper, home NAS may become
> obsolete before it has a chance to mature over
> decades, as many other technologies have had.
>
> It seems to me the future is headed more towards
> the home personal computer as we know it, to be
> replaced by something more similar and locked down
> to a smart phone, that what our community of
> computer geeks prefer to use now.

My thinking is different. With the pervasive data collection by corporations (Google, MS, ...) and governments (fill in the blanks ...), privacy becomes more and more important. As long as home NAS boxes are cheap and user-friendly (e.g. accessible from the net using an app), people won't give up this personal storage. Personally, the only thing I would store in the cloud is technical data for public consumption.

-bodhi
===========================
Forum Wiki
bodhi's corner (buy bodhi a beer)
Re: My Pogo v3 and v4 Speed Test Results
December 20, 2016 03:44PM
You have a good point there, bodhi, that I had really not thought about. I'd honestly be scared to store any entertainment media (music and movies) in the "cloud" for fear of potential issues... I also think the more hardcore computer users will always prefer a system similar to what we use now. It might just cost more if it becomes really uncommon and low-volume production.

--------------------------------------------------------------
Blog: www.jeffstory.org
Re: My Pogo v3 and v4 Speed Test Results
December 21, 2016 11:24PM
I duplicated this test procedure, http://forum.doozan.com/read.php?2,28829 (except I'm not using samba), to benchmark my pogo v4 mobile.

I agree with the referenced post: for benchmark numbers to be meaningful for comparison purposes, the same procedure should be followed. I ran this several times and came up with VERY close numbers. That seems to indicate a sound, repeatable process.


Creating bigfile on my x86 box.
[root@Arch2014p9 Desktop]# dd if=/dev/urandom of=bigfile bs=512 count=1000000
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB, 488 MiB) copied, 6.08787 s, 84.1 MB/s

This is the v4 pogo kernel info.
[root@JeffsPogo2 ~]# uname -a
Linux JeffsPogo2 4.4.38-1-ARCH #1 PREEMPT Sat Dec 10 21:36:23 MST 2016 armv5tel GNU/Linux

This is the push from my x86 box hdd, which would be the pogo hdd write.
[root@Arch2014p9 Desktop]# time cp bigfile /nfs/dir.192.168.2.160/backup/test/

real	0m28.783s
user	0m0.000s
sys	0m0.797s

I deleted bigfile from my desktop prior to copying it back, but still got inaccurate results (the read is being served from the client's page cache rather than over the network). I'm posting this as a heads up for anyone repeating this test.
[root@Arch2014p9 Desktop]# rm bigfile
[root@Arch2014p9 Desktop]# time cp /nfs/dir.192.168.2.160/backup/test/bigfile /home/jeff/Desktop

real	0m0.620s
user	0m0.007s
sys	0m0.577s

I renamed bigfile on the pogo hdd prior to copying it back and got accurate-looking results.
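(An alternative to renaming would be flushing the client's page cache before the pull — run as root on the client:)

sync
echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries, and inodes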


Renamed bigfile
[root@Arch2014p9 Desktop]# mv /nfs/dir.192.168.2.160/backup/test/bigfile /nfs/dir.192.168.2.160/backup/test/bigfile.pull

These are the pull from the pogo, which would be the pogo hdd read.
[root@Arch2014p9 Desktop]# time cp /nfs/dir.192.168.2.160/backup/test/bigfile.pull /home/jeff/Desktop

real	0m23.259s
user	0m0.000s
sys	0m1.400s
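(Converting the times to throughput, with bigfile being 512,000,000 bytes: 512 MB / 28.8 s ≈ 17.8 MB/s for the pogo write, and 512 MB / 23.3 s ≈ 22.0 MB/s for the pogo read.)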

It'll be interesting to compare these numbers to my pogo pro when I run the same test, with its SATA interface and slightly more powerful dual-core processor. I would think the USB 2 is a bottleneck for this device.

I'd also be interested in the numbers from a pogo v4 with a hdd connected to the SATA or USB 3, for comparison to my mobile with the hdd connected to the USB 2 port.






In case anyone else is interested, I copied and pasted the test procedure from http://forum.doozan.com/read.php?2,28829 below.

Quote
Gravelrash Wrote:
-------------------------------------------------------

As an addendum to bodhi's request: if you want to perform the test yourself and would like a
standard way of doing it so we can all compare and have a baseline set to work with, ideally you
should be connected at Gbit speeds on a Gbit switch - not on wireless or a 100Mbit switch.

Do the following
1)..Start with a massive and compressed file... This should be a file close to 500MB.
2)..Then find your architecture and kernel version
3)..Then mount your samba/nfs share and copy the file to and from your device (PUSH - PULL)

Create the file - called "bigfile"
dd if=/dev/urandom of=bigfile bs=512 count=1000000

Find your architecture and kernel details
uname -ar

PUSH the file to the location
time cp bigfile /mnt/samba/server/share #where /mnt/samba/server/share = your mounted share
time cp bigfile /mnt/nfs/server/share #where /mnt/nfs/server/share = your mounted share
The "time" command will tell you how long it took.

PULL the file from the location
time cp /mnt/samba/server/share/bigfile .           #where /mnt/samba/server/share = your mounted share
time cp /mnt/nfs/server/share/bigfile .           #where /mnt/nfs/server/share = your mounted share
The "time" command will tell you how long it took.

Post the results you get; bonus points if you do it like post 6 in this thread.


--------------------------------------------------------------
Blog: www.jeffstory.org



Edited 1 time(s). Last edit at 12/22/2016 08:42PM by JeffS.
Re: My Pogo v3 and v4 Speed Test Results
December 22, 2016 06:15PM
Pogo v3 Pro, Debian / Linux 4.4.38, 7200rpm, 3.5", 500GB SATA hdd:

[root@PogoV3oxnas etc]# uname -a
Linux PogoV3oxnas 4.4.38-oxnas-tld-5 #1 SMP PREEMPT Sun Dec 11 17:32:48 PST 2016 armv6l GNU/Linux

Push from x86, pogo hdd write.
[root@Arch2014p9 Desktop]# time cp bigfile /nfs/dir.192.168.2.92/

real	0m25.753s
user	0m0.003s
sys	0m0.853s


Pull from the pogo, pogo hdd read.
[root@Arch2014p9 Desktop]# time cp /nfs/dir.192.168.2.92/bigfile.pull /home/jeff/Desktop

real	0m21.891s
user	0m0.000s
sys	0m1.563s
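(That works out to roughly 512 MB / 25.8 s ≈ 19.9 MB/s write and 512 MB / 21.9 s ≈ 23.4 MB/s read, versus ≈17.8/22.0 MB/s on the v4 mobile.)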

These results are noticeably faster than my v4 mobile's. I'd guess this is because of the SATA hdd interface on this device, compared to USB 2 on the mobile. I'm not sure if both of these devices have 1Gbit ethernet hardware.

I'll be posting results here for my v2 E-02 after I get it set up with an OS and NFS.

--------------------------------------------------------------
Blog: www.jeffstory.org
Re: My Pogo v3 and v4 Speed Test Results
December 22, 2016 06:51PM
Jeff,

>
> These results are noticeably faster than my v4
> mobile. I'd guess this is because of the SATA hdd
> interface on this compared to USB 2 on the mobile.
> I'm not sure if both these devices have 1Gbit
> ethernet hardware.

They do. Both have 1Gbit NIC. The SATA HDD is surely faster than USB 2 in this use case.

-bodhi
===========================
Forum Wiki
bodhi's corner (buy bodhi a beer)
Re: My Pogo v3 and v4 Speed Test Results
December 27, 2016 05:22AM
bodhi Wrote:
-------------------------------------------------------
> My thinking is different. With the pervasive data
> collection by corporations (Google, MS, ...) and
> government (fill in the blanks ...), privacy
> become more and more important. As long as home
> NAS are cheap and user-friendly (e.g. accessible
> from the net using an app), people won't give up
> this personal sorage. Personally, the only thing I
> would store in the cloud is technical data for
> public consumption.

as google would put it... +1 from me :)))))