While I did add the read/write methods to the ongoing future issues item, it struck me that a few things are fundamentally wrong with the dd method:
1: It's writing to a file system. File systems have different overheads, so the result isn't actually a property of the disk per se, but of the file system ON the disk. You could see this by comparing, say, ext3, ext4, xfs, and zfs partitions on the same disk to see what the actual variances are, but they will certainly not map 1 to 1 between file systems. Some read faster, some write faster, some delete faster, etc.
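For what it's worth, that comparison would look roughly like this as a shell sketch; the mount points /mnt/ext4 and /mnt/xfs are hypothetical stand-ins for partitions of different filesystems on the same disk, and conv=fdatasync makes dd flush before reporting so the page cache doesn't inflate the number:

```shell
# Hypothetical mount points; adjust to whatever is actually mounted.
for mnt in /mnt/ext4 /mnt/xfs; do
    echo "== $mnt =="
    # conv=fdatasync: flush data to disk before dd reports its speed,
    # so we time the filesystem+disk, not the page cache
    dd if=/dev/zero of="$mnt/ddtest" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
    rm -f "$mnt/ddtest"
done
```

Same command, same disk, different numbers per filesystem, which is exactly the problem.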
2: It would only apply to a single partition. For spinning disks this matters, since where the data physically sits on the disk actually impacts the speeds, if I remember right. So on a purely technical level this one doesn't really work in terms of getting true disk write speeds, nor can I see any way to get that data, since dd always has to write to a file system, and thus you are measuring the disk's raw throughput minus the filesystem overhead and read/write behavior.
3: You'd need to know the native block size of the disk and of the file system (I think; not positive about the latter), which gets messy. This data can actually be found in /sys; I was looking at that for an advanced disk size feature. In the future it would be part of an --admin -D disk report I think, if it appears.
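For the record, pulling those block sizes out of /sys can be sketched like this; the sysfs queue paths are standard on Linux, while using GNU stat -f for the filesystem side is my assumption about what the filesystem half of the question would need:

```shell
# Logical/physical block sizes for each disk, straight from sysfs
for dev in /sys/block/*/queue; do
    name=$(basename "$(dirname "$dev")")
    log=$(cat "$dev/logical_block_size" 2>/dev/null) || continue
    phys=$(cat "$dev/physical_block_size" 2>/dev/null) || continue
    echo "$name: logical ${log}B physical ${phys}B"
done
# Filesystem block size for the current directory (GNU stat, -f = filesystem)
stat -f -c 'fs block size: %s' .
```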
4: The bright side, however, is that if you reversed the test (that is, create the file, get the write speeds, then time how long it takes to read the file back from disk, then delete it), you'd end up with a quite accurate live performance metric, without having to use hdparm or root at all.
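A rough sketch of that reversed test as I understand it; oflag=dsync and iflag=direct are my choices for keeping the page cache out of both legs (O_DIRECT reads depend on filesystem support), and the 64 MiB size is arbitrary:

```shell
f=./ddtest.$$   # scratch file on the partition being tested
# Write leg: dsync forces each block to disk, so we time the disk, not the cache
dd if=/dev/zero of="$f" bs=1M count=64 oflag=dsync 2>&1 | tail -n 1
# Read leg: O_DIRECT bypasses the page cache, where the filesystem supports it
dd if="$f" of=/dev/null bs=1M iflag=direct 2>&1 | tail -n 1
rm -f "$f"
```

Both legs run fine as a regular user, which is the whole appeal over hdparm.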
But it is a neat trick anyway, which I thought was worth keeping, since it can be run as a regular user. It would need to know many things, though, like which disk the partition actually sits on. So, again, it's unlikely to be added, since it's not really related to Disk or Partition directly, but to both; it's actually something else.
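That partition-to-disk mapping can be sketched as below; df is coreutils, and the sysfs layout (a partition node living inside its parent disk's directory) is standard Linux, though I'm assuming a plain /dev/sdXN setup rather than LVM/RAID stacks, which is part of why it gets messy:

```shell
dev=$(df --output=source . | tail -n 1)   # e.g. /dev/sda1
name=${dev#/dev/}
# In sysfs a partition directory sits inside its parent disk's directory,
# so the parent of the resolved symlink names the whole disk
sys=$(readlink -f "/sys/class/block/$name" 2>/dev/null)
if [ -n "$sys" ]; then
    echo "partition $name is on disk $(basename "$(dirname "$sys")")"
else
    echo "no sysfs entry for $name (stacked or virtual device?)"
fi
```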
https://www.thomas-krenn.com/en/wiki/Li ... s_using_dd that's a good technical overview of dd re write speeds. One thing you'll notice is the radical speed difference between 1 big block written and 1000 small blocks, which again goes to show that a write speed number means almost nothing until you know what the actual method used was, and the filesystem, and block size, etc. But that also suggests a much more granular test approach: very large blocks to get bulk write speeds, and very small ones repeated many times to see more realistic normal-operation write speeds, which would probably also say as much about the filesystem as about the disk itself.
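That granular approach could be sketched as two runs on the same partition, one bulk and one many-small-writes; the block sizes and counts here are just illustrative choices, not anything the page prescribes:

```shell
f=./ddtest.$$
# Bulk write: big 1 MiB blocks, one flush at the end (best-case throughput)
dd if=/dev/zero of="$f" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$f"
# Small writes: 4 KiB blocks, each synced to disk individually (dsync),
# closer to normal operation and usually dramatically slower
dd if=/dev/zero of="$f" bs=4k count=1000 oflag=dsync 2>&1 | tail -n 1
rm -f "$f"
```

The gap between the two numbers is largely the filesystem and sync overhead, which is the point: neither number alone is "the" disk speed.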