Analysing Storage Performance

A critical view on the storage subsystem with DiskSpd

When it comes to SharePoint performance, fast storage is key. So how do we measure storage performance, and how do we apply our own usage pattern to the test?

Microsoft has superseded SQLIO with DiskSpd since its release on 12/14/2015.

So we are dealing with DiskSpd from now on. In comparison to SQLIO, DiskSpd brings a few (to me) interesting features to the table.

 

New features:

  • Consumable XML output for automation support, e.g. scheduled analysis runs throughout the day powered by PowerShell (see the sketch after this list)
  • Custom CPU affinity options
  • Synchronisation and tracking functionality
  • Ability to also target physical disks
  • Variable read/write ratio
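
The XML report is what makes unattended runs practical. As a rough sketch (the install path, output folder and file names below are my assumptions; -Rxml is the documented switch that turns on the XML report), a scheduled PowerShell run could look like this:

$stamp  = Get-Date -Format 'yyyyMMdd-HHmmss'
$outDir = 'C:\diskspd\results'
New-Item -ItemType Directory -Path $outDir -Force | Out-Null
# -Rxml switches the report from plain text to XML so it can be parsed later
& C:\diskspd\diskspd.exe -b64K -d60 -o8 -t8 -h -r -w25 -L -Z1G -c1G -Rxml C:\io.perf |
    Out-File "$outDir\run-$stamp.xml"

Wrap this in a scheduled task and you get comparable snapshots of your storage behaviour throughout the day.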

Purpose of DiskSpd

With DiskSpd we are simulating a workload – specifically for SQL.
We are generating lots of IOPS – some might say “Ayoub’s” here, which is my name and actually sounds quite funny.

To have clean tests:

  • If you are using iSCSI LUNs or SMB shares, you depend on the network – make sure you are “alone”
  • If you are using a SAN, make sure you don’t have any other systems consuming the shared resources – reduce the noise as much as possible.


 

So let’s get our brains working with some more parameters and their meanings. Strap in your seatbelt – I am about to decrypt a few things and put them in context with the real world.

What’s likely your setup?

You are running your servers on top of a virtualisation layer, e.g. ESX / Hyper-V, and your underlying storage could be anything. It doesn’t really matter to us, as we don’t want to dig around in the storage architecture corner. But we need to know a few things from the storage engineers.

  • What is the block/stripe unit size on the storage?
  • What is the block size on the guest?

Got the feedback? The block size on the guests and on the storage should be the same. Either take the block size of the guest or get the disks re-provisioned. Oops.

Alright, that’s it… but you can check it yourself, to be sure.

Run fsutil fsinfo ntfsinfo d: in any elevated administrative shell.

The Bytes Per Cluster value in the output is the allocation unit size – i.e. the block size – of the volume.
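
If you prefer to script the check, a small sketch like this works (the drive letter is an assumption; adjust it to your volumes):

# Read the NTFS allocation unit (block) size for D: out of the fsutil report
$line = fsutil fsinfo ntfsinfo d: | Select-String 'Bytes Per Cluster'
if ($line -and $line.Line -match ':\s*(\d+)') {
    $blockSize = [int]$Matches[1]
    "Block size on D: is $blockSize bytes ($($blockSize / 1KB) KB)"
}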

 

Ideally you have a 64 KB block size on both the storage and the guest.

If you are dealing with SQL and you use iSCSI LUNs, format them with a 64 KB allocation unit size, attach separate LUNs, and keep OS, data files and log files separated.
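
As a minimal sketch of the 64 KB formatting step (the drive letter and label are assumptions, and the LUN is assumed to be already online and initialised):

# Format the data volume with a 64 KB allocation unit size
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'SQL-Data' -Confirm:$false

Repeat the same for the log-file LUN so every SQL volume ends up with the same allocation unit size.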

Hint: If you are on Hyper-V or VMware with live migration / vMotion enabled, make sure your anti-affinity rules are doing their job and prevent your virtual disks from eventually sitting altogether on one LUN.

Let’s get started and download DiskSpd here.

Put it on drive C and use the following parameters.

  • -h disables software caching and hardware write caching – similar to how SQL Server accesses its data files
  • -t8 number of threads per target – adjust this if you know the code and how the app is talking to your storage. If you have a chatty black box, leave it at 8 or even increase it
  • -c1G size of the generated test file. Leave it at 1G if you are dealing with SharePoint, for example
  • -w25 25% writes vs. 75% reads – you are invited to play with this value
  • -o8 queue depth per thread, i.e. the number of outstanding I/O requests kept in flight
  • -b64K block size of each I/O – match the block size of your disks
  • -r random I/O instead of sequential
  • -d60 duration of the test in seconds
  • -Z1G size of the write source buffer that supplies random data for our write operations
  • -L capture latency statistics – we really want this

Hint

Before you do anything on the systems in your corporation, align with the sys admins first: tell them what you are doing and let them know the impact of the testing. They will likely schedule the run with you for off-hours if live systems would be affected.

.\diskspd.exe -b64K -d60 -o8 -t8 -h -r -w25 -L -Z1G -c1G c:\io.perf

Et voilà – have fun with the data.

 

You are interested in:

  • Latency
  • I/O per second (read & write)
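
To put the IOPS figure in perspective, throughput is just IOPS times block size. A tiny illustration with made-up numbers:

# Hypothetical figures: 5,000 IOPS at a 64 KB block size
$iops = 5000; $blockKB = 64
'{0:N1} MB/s' -f ($iops * $blockKB / 1024)   # 312.5 MB/s

Latency is still the number to watch most closely, though – high IOPS with poor latency will still hurt SQL and SharePoint.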