Basics – Change and Growth

I assume you’ve heard of the wheat and the chessboard problem. It often gets presented as part of the history of chess, or as some fable to teach the importance of understanding what you’re agreeing to. The way I heard it when I was a child was along the following lines:

A king was out for a ride one day when he passed an old lady beside a bridge, begging for alms. Ignoring her, he pressed on, but as he was riding over the bridge, it collapsed, and he fell into the river and would have drowned had the old lady not jumped to his aid. When she got him to the banks of the river, he said, “You’ve saved my life. Whatever you want, you will have.” To which she replied, “Put one grain of rice on the first square of a chess board. Then double that for the second square, double the second square for the third, and so on, until all the squares are filled.”

Wheat and Chessboard Problem

Of course, the wheat and chessboard problem fundamentally teaches about exponential growth, and can equally be used to help understand compound growth, too.
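To put a number on the fable’s doubling, here’s a quick back-of-the-envelope calculation (purely illustrative):

```python
# Square n of the chessboard holds 2**(n - 1) grains; there are 64 squares in total.
final_square = 2 ** 63
total_grains = sum(2 ** (n - 1) for n in range(1, 65))  # equivalently 2**64 - 1

print(f"Grains on the 64th square: {final_square:,}")   # 9,223,372,036,854,775,808
print(f"Grains on the whole board: {total_grains:,}")   # 18,446,744,073,709,551,615
```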

Understanding Change and Growth Rates

In backup and recovery systems in particular, it’s rather critical to understand your change rates and growth rates, particularly when you’re planning a refresh of your environment. Obviously, since I work in pre-sales, this is something I deal with on a regular basis, and my guidance on this is as follows:

  1. Garbage in, garbage out: the more accurate your understanding of your change and growth rates, the more accurate the solution sizing/design will be.
  2. Not all workloads in your environment will have the same daily change rate.
  3. Not all workloads in your environment will have the same annual growth rate.
  4. Your daily change rates are probably lower than you think.
  5. Your annual growth rates are also probably lower than you think.

Number 1, above, is to me an immutable truism. If the data regarding your workloads – type, size, change and growth – are not correct, the only way a solution is going to be correct is sheer dumb luck.

Items 2 and 3, above, I’m willing to posit, are almost invariably truisms as well. I’d normally expect to see different workload change and growth rates based not only on the business function, but also on the workload type. So even if a customer-facing service utilises both a database and a fileserver, those two systems, despite supporting the same function, might have radically different change and growth rates.

Items 4 and 5 are usually correct, though they’re the ones with greater flexibility. If you have servers that simply accumulate data on a daily basis (e.g., video feeds, data warehousing, etc.), there’s a greater chance you’ll see bigger change and growth rates than we would normally expect. Generally speaking, though, it’s not unusual to see relatively low change and growth rates across large numbers of datasets within an environment.

Those change rates and growth rates will clearly have a significant impact on the overall solution requirements. To see what I’m talking about, let’s consider some varying data set sizes and a variety of daily change rates.

| Data Size (GB) | 1% Change | 2% Change | 4% Change | 5% Change | 10% Change | 20% Change |
|---|---|---|---|---|---|---|
| 50 | 0.5 | 1 | 2 | 2.5 | 5 | 10 |
| 250 | 2.5 | 5 | 10 | 12.5 | 25 | 50 |
| 500 | 5 | 10 | 20 | 25 | 50 | 100 |
| 1000 | 10 | 20 | 40 | 50 | 100 | 200 |
| 1500 | 15 | 30 | 60 | 75 | 150 | 300 |
| 2000 | 20 | 40 | 80 | 100 | 200 | 400 |
| 5000 | 50 | 100 | 200 | 250 | 500 | 1000 |
| 10000 | 100 | 200 | 400 | 500 | 1000 | 2000 |

As you can see, even relatively small daily change rates generate a reasonable amount of new data each day. I’m often told there’s a 20% daily change rate, but when we extrapolate out what those numbers mean, it usually turns out the figure was a guess rather than a hard number.
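For reference, the figures in the table above are simply dataset size multiplied by daily change rate; a minimal sketch that reproduces them (the variable names are my own):

```python
# Daily new/changed data (GB) = dataset size (GB) x daily change rate.
sizes_gb = [50, 250, 500, 1000, 1500, 2000, 5000, 10000]
change_rates = [0.01, 0.02, 0.04, 0.05, 0.10, 0.20]

for size in sizes_gb:
    row = [round(size * rate, 1) for rate in change_rates]
    print(size, row)

# e.g. a 1000GB dataset at a 4% daily change rate produces 40GB of new/changed data per day.
```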

It’s equally the case that annual growth rates are misunderstood for a lot of data sets. Let’s look at those same dataset sizes with a variety of annual growth rates, compounded over 3 years:

| Data Size (GB) | 1% YoY | 2% YoY | 5% YoY | 10% YoY | 20% YoY | 50% YoY |
|---|---|---|---|---|---|---|
| 50 | 51.5 | 53.1 | 57.9 | 66.6 | 86.4 | 168.8 |
| 250 | 257.6 | 265.3 | 289.4 | 332.8 | 432.0 | 843.8 |
| 500 | 515.2 | 530.6 | 578.8 | 665.5 | 864.0 | 1687.5 |
| 1000 | 1030.3 | 1061.2 | 1157.6 | 1331.0 | 1728.0 | 3375.0 |
| 1500 | 1545.5 | 1591.8 | 1736.4 | 1996.5 | 2592.0 | 5062.5 |
| 2000 | 2060.6 | 2122.4 | 2315.3 | 2662.0 | 3456.0 | 6750.0 |
| 5000 | 5151.5 | 5306.0 | 5788.1 | 6655.0 | 8640.0 | 16875.0 |
| 10000 | 10303.0 | 10612.1 | 11576.3 | 13310.0 | 17280.0 | 33750.0 |
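The growth figures above are plain compound growth: the starting size multiplied by (1 + annual growth rate) once for each year. A minimal sketch of that calculation (the function name is mine, just for illustration):

```python
def projected_size(start_gb: float, annual_growth: float, years: int) -> float:
    """Compound an initial dataset size forward at a constant annual growth rate."""
    return start_gb * (1 + annual_growth) ** years

# Spot checks against the 3-year table above:
print(round(projected_size(1000, 0.05, 3), 1))  # 1157.6
print(round(projected_size(5000, 0.20, 3), 1))  # 8640.0
```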

I find predicting growth over 3 years is about as accurate as you’ll get within a solution view. Beyond three years, unless you’ve got extremely measurable and linear data change, the likelihood that extrapolated growth for, say, 5 years (a common request) is accurate is actually pretty minimal. (Which returns us to truism #1: garbage in, garbage out.) To see what I mean, consider the same growth rates, now extrapolated out to 5 years:

| Data Size (GB) | 1% YoY | 2% YoY | 5% YoY | 10% YoY | 20% YoY | 50% YoY |
|---|---|---|---|---|---|---|
| 50 | 52.6 | 55.2 | 63.8 | 80.5 | 124.4 | 379.7 |
| 250 | 262.8 | 276.0 | 319.1 | 402.6 | 622.1 | 1898.4 |
| 500 | 525.5 | 552.0 | 638.1 | 805.3 | 1244.2 | 3796.9 |
| 1000 | 1051.0 | 1104.1 | 1276.3 | 1610.5 | 2488.3 | 7593.8 |
| 1500 | 1576.5 | 1656.1 | 1914.4 | 2415.8 | 3732.5 | 11390.6 |
| 2000 | 2102.0 | 2208.2 | 2552.6 | 3221.0 | 4976.6 | 15187.5 |
| 5000 | 5255.1 | 5520.4 | 6381.4 | 8052.6 | 12441.6 | 37968.8 |
| 10000 | 10510.1 | 11040.8 | 12762.8 | 16105.1 | 24883.2 | 75937.5 |

As you can see, when we stretch to 5 years of annual growth, those workload sizes get quite large, even for the smallest starting points. So if a workload is actually 50GB rather than the assumed 250GB, using the wrong dataset size at the start compounds into quite a serious difference in sizing requirements by the end of the 5 years. (While it’s common to see RFPs, for instance, issued with an assumption of 5 years’ growth, I honestly think a more sensible approach for the most part is to size for 3 years’ growth, with a requirement to accommodate extra growth in years 4 and 5.)
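To see how an error in the starting size flows through, here’s a quick sketch (the assumed-250GB/actual-50GB split and the 20% growth rate are simply taken from the figures above for illustration):

```python
def projected_size(start_gb, annual_growth, years):
    return start_gb * (1 + annual_growth) ** years

# Assumed 250GB vs an actual 50GB, both grown at 20% per year.
assumed_gb, actual_gb, growth = 250, 50, 0.20
for year in range(1, 6):
    gap_gb = projected_size(assumed_gb, growth, year) - projected_size(actual_gb, growth, year)
    print(f"Year {year}: oversizing of roughly {gap_gb:,.0f} GB")

# The initial 200GB input error has grown to roughly 500GB of oversizing by year 5.
```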

There’s an old saying, “measure twice, cut once”, that applies to anything involving clothes, upholstery, and so on. It’s that saying that brings me to the point of this post: if you’re looking at refreshing or changing your environment, it’s worth spending the time and effort to gather as much data as is practicable.

Sometimes, you might have this data available. You might have a strong capacity monitoring and management process within your environment that can chart, on a system-by-system or dataset-by-dataset basis, what your daily change and annual growth rates are. If you’ve got those details, that’s exactly the information you need to get a solution sized with the greatest accuracy. In my experience, most environments don’t have this information to hand to that degree, and so the best (and most likely) option is to run a comprehensive assessment of your environment. That’s where tools such as LiveOptics come in: LiveOptics can not only review the current state of your environment to gather significant amounts of dataset information, but also run continuously over a defined period to help gather information such as change rates.

In a lot of senses, developing a view of the required size of a solution is a fairly straightforward mathematical process (I say this as someone who still counts on my fingers). It’s not the solution sizing that’s rocket science (though it can be a long process, depending on the number of datasets to be evaluated); it’s gathering the input data to get the sizing right that takes the most careful consideration. Once you get that right, you can evaluate a refresh or change to your environment with relative ease.
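By way of illustration, here’s a minimal sketch of that “straightforward mathematical process”, assuming each dataset comes with its own measured daily change and annual growth rates (the datasets and figures below are entirely made up):

```python
# Each entry: (name, current size in GB, measured daily change rate, measured annual growth rate).
datasets = [
    ("database",   2000, 0.05, 0.10),
    ("fileserver", 5000, 0.02, 0.05),
    ("video feed", 1000, 0.20, 0.20),
]

horizon_years = 3
for name, size_gb, daily_change, annual_growth in datasets:
    future_size = size_gb * (1 + annual_growth) ** horizon_years
    future_daily_change = future_size * daily_change
    print(f"{name}: ~{future_size:,.0f} GB at year {horizon_years}, "
          f"~{future_daily_change:,.0f} GB of new/changed data per day")
```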


If you found this interesting, be sure to check out Data Protection: Ensuring Data Availability.
