What is an IOP? The Truth About Storage Performance

November 4, 2015

The Origins of the IOP

If you’ve ever been in the market for a storage solution, you’ve no doubt heard the term IOP. What does this nebulous term mean, and where did it come from? By definition, IOPS is a measurement of input and output operations per second, or the number of complete transactions that occur in one second. “Well… what kind of transactions?” you ask? Good question!

Let’s say you pull into a parking lot somewhere on the border of the US and Canada. You spot a '69 Chevelle SS exactly like yours. The conversation goes something like this:

You: "How fast will she go?"

Owner: "Almost touched 200 last night on the track"

You: "Woah! That’s screaming! Way faster than my ride. I might get 150 rolling downhill."

You probably know where I'm going with this. You’re talking miles, while the Canuck is talking klicks, so the Canadian Chevelle that goes “200” is actually slower than the American one that goes “150.” Storage conversations anchored around IOPS can feel exactly the same way.

So, an IOP is certainly a measurement, but it's a measurement of transactions of a particular size. For block storage, transaction sizes may average in the hundreds of kilobytes. In high-density VM environments, especially VDI, averages can be as low as 4K or less. This “block size” is the payload of the IOP we are counting: an 8K IOP carries twice the data of a 4K IOP.

So, what is an IOP? It's a marketing term coined by storage companies, intended to be a conversation starter. The gross misuse of the term IOP/s in the storage market has created a lot of confusion and mistakes. It’s time to get real about performance.

The 3 Components of an IOP

If we’re going to use this term, let’s make sure we understand what it means. In order to understand the components that impact an IOPS measurement, it’s helpful to think of a three-legged stool. The legs are latency, block size, and data-rate (also known as throughput or bandwidth), and all three are required factors in any IOPS calculation.
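The three legs hang together arithmetically: data-rate is IOPS times block size, and with a single outstanding IO, IOPS is capped by latency. Here's a minimal sketch of those relationships; the function names and the steady-state simplifications are mine, not vendor math:

```python
# Back-of-the-envelope relationships between the three "legs" of the stool.
# These are simplified steady-state approximations for illustration only.

def data_rate_mb_s(iops: int, block_size_kb: int) -> float:
    """data-rate = IOPS x block size."""
    return iops * block_size_kb / 1024  # MB/s

def iops_from_latency(latency_ms: float, queue_depth: int = 1) -> float:
    """With queue_depth outstanding IOs, IOPS is bounded by depth / latency."""
    return queue_depth * 1000 / latency_ms

# A high IOPS number at small blocks can move far less data
# than a lower IOPS number at large blocks:
print(data_rate_mb_s(30_000, 4))    # 117.1875 MB/s
print(data_rate_mb_s(10_000, 64))   # 625.0 MB/s
print(iops_from_latency(2.0))       # 500.0 IOPS at queue depth 1
```

Note how the "bigger" IOPS number loses on data-rate the moment block size enters the picture.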

As is the case with everything in IT, any solution’s performance is limited by the weakest link in the chain. The same goes for storage, and every turn an IO takes in your stack is an opportunity to impact the three legs of our stool.

The life of an IO operation:

Geek Alert - This process is intentionally simplified for conceptual purposes.

You fill out a web form and hit submit. Let’s say that this generates 256KB of data that needs to be written to a database somewhere. Before you can get to the confirmation screen, the application needs acknowledgement that the data arrived intact. When the web server passes the data over to the database server, it might assemble the information into 64KB blocks, transmit, and wait for acknowledgements before sending the next piece.

So in our oversimplified example, if I’m moving 256K, the operation would take 4 x 64K IOs, or 32 x 8K IOs, or 64 x 4K IOs. You get the point.
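The arithmetic above is just a division of payload by block size. A one-liner makes it concrete (the function name is mine, and it assumes the transfer divides evenly into blocks):

```python
def ops_needed(total_kb: int, block_kb: int) -> int:
    """How many IO operations a transfer takes at a given block size."""
    return total_kb // block_kb  # assumes total divides evenly into blocks

# The 256K write from the example, at three different block sizes:
print(ops_needed(256, 64))  # 4
print(ops_needed(256, 8))   # 32
print(ops_needed(256, 4))   # 64
```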

What makes a “write operation” take longer on some storage than others? Everything in the block’s path. Here are the components that impact performance:

  1. The first stop is the initiator (host or VM), which transmits the data to be stored. The initiator issues a write.
  2. Now this ‘write block’ needs to get from the initiator to the storage controller. To get there, it travels over a storage network until it reaches the array, also known as the “target.”
  3. Now that the information has arrived, the CPU has to deal with it, perhaps performing parity or compression operations, then sending it to devices in the back-end.
  4. In order for this block to get to those disks, it has to traverse the backplane to reach each of its devices. This backplane is typically 6Gb or 12Gb SAS.
  5. Okay, now the disk has the request, along with instructions on where the data should be written. This is a fraction of the process, but it owns 90% of all storage conversations.
  6. The short version is that solid state drives offer factors of improvement in latency over magnetic drives, while magnetic drives offer better data-rates than many SSDs.
  7. Once a device has written our block of data, it sends an acknowledgement message…
  8. …which then travels back through the backplane, through the CPU, over the SAN, and back to the initiator.
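The steps above can be sketched as a simple sum: each hop on the path contributes its own delay, and the write isn't "done" until the acknowledgement makes the full round trip. The per-hop numbers below are purely hypothetical placeholders, not measured values:

```python
# Hypothetical per-hop latencies in microseconds (illustrative only).
# The device write dominates on magnetic media, matching step 5's point
# that the disk itself owns most of the storage conversation.
round_trip_us = {
    "initiator_issue": 50,
    "storage_network": 100,
    "controller_cpu": 200,
    "backplane_sas": 50,
    "device_write": 1500,
    "ack_return_path": 100,
}

total_ms = sum(round_trip_us.values()) / 1000
print(f"one write round trip: {total_ms} ms")  # one write round trip: 2.0 ms
```

Swap the `device_write` figure for an SSD-class number and the total collapses, which is exactly the latency improvement step 6 describes.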

Rinse and Repeat!

In our example, we’re moving a total of 256K, so with 4K blocks, the above operation would need to happen 64 times. With 64K blocks, this process would happen 4 times. In most cases we don't get to dictate all of the block sizes, and your storage system is exposed to more than just this workload. So operations may have to wait in queue at any point along the process. They could queue in a switch buffer, in controller DRAM, or even in the flux capacitor. All of this adds latency.
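Queueing compounds the per-operation cost: the transfer time is the number of operations times the per-op latency plus whatever time each op spends waiting. A small sketch, with names and numbers of my own choosing:

```python
def transfer_ms(total_kb: int, block_kb: int,
                service_ms: float, queue_ms: float = 0.0) -> float:
    """Total time to move total_kb in block_kb chunks, one op at a time.

    service_ms is the per-operation latency; queue_ms is the average
    extra wait per op (switch buffer, controller DRAM, etc.).
    """
    ops = total_kb // block_kb
    return ops * (service_ms + queue_ms)

# 256K in 4K blocks at 2ms per op, with and without 1ms of queueing:
print(transfer_ms(256, 4, 2.0))       # 128.0 ms
print(transfer_ms(256, 4, 2.0, 1.0))  # 192.0 ms
```

A modest 1ms of queueing per op turns into 64ms of extra wall-clock time at 4K blocks, because the penalty is paid 64 times.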

How to judge storage performance in the real world

You have back-to-back storage demonstrations from two different vendors, and they spout off their performance capabilities:

The WhizBang Storage Company says their array can do 30,000 IOP/s. And you can have this amazing solution for $100,000!

The ShaBang Storage Company says their array can do 10,000 IOP/s. Their solution also costs $100,000!

Performance being your primary objective, which array do you buy?

If you answered "WhizBang all the way!", you just made a pretty common mistake. We now know IOPS are a product of latency, block size, and data-rate, and that every storage component influences latency, which is the sum of all the parts we just illustrated.

Let’s put some duct tape on the marketing person, and ask the vendor to be more specific about their performance… Looking at the datasheets, WhizBang is capable of 30,000 4K random write IOP/s with 2ms of latency.

ShaBang’s solution is capable of 10,000 64K random write IOP/s with 6ms latency.

Now which array is "Faster?"

Big Giant Disclaimer!!! Context is the mother of all measurements when it comes to storage performance. The first and most correct answer is “it depends.” In this case, let’s say we’re talking about heavy database workloads that average around 64K block sizes.

Knowing the workload, the winner is ShaBang! Think of the 256K chunk of data we mentioned. If WhizBang has 2ms latency on 4K IO operations, then delivering 256K in 4K blocks would take 128ms (64 x 4K IOs at 2ms each). If ShaBang has 6ms latency on 64K IO operations, the same 256K chunk would take just 24ms (4 x 64K IOs at 6ms each).
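Reproducing that comparison as arithmetic makes the gap hard to miss, using only the figures from the two datasheets above:

```python
# 256K chunk, delivered serially one IO at a time:
# WhizBang: 4K blocks at 2ms per op; ShaBang: 64K blocks at 6ms per op.
whizbang_ms = (256 // 4) * 2    # 64 ops x 2ms = 128 ms
shabang_ms = (256 // 64) * 6    # 4 ops x 6ms = 24 ms

print(whizbang_ms, shabang_ms)  # 128 24
```

The array with three times the headline IOPS number is more than five times slower on this workload.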

So when someone tells me they can do 1,000,000,000 IOP/s, how can I know what that actually means to me and my business? How do I level the playing field?

Get the Latency and Block Size numbers for the IOP/s claim. Not all vendors have latency numbers available.

Vendors tend to choose a metric that’s most favorable for their solution. If you can't confirm the exact architecture that produced the vendor's results, then it's likely their highest-performance configuration. Preferably, your vendor (or more likely your reseller, for a more objective customer experience) will have use-case performance data for you that’s even more useful. Vology suggests leveraging information provided by SPEC (Storage Performance Evaluation Corporation, www.spec.org) for unbiased third-party benchmark data if a Proof of Concept isn’t a viable option.

If limited data is available, and you can’t perform a PoC, then we may have to rely on a few primitive methods.