# Back to School: Elementary Physics for DBAs

This blog post was originally released on the Pythian blog.

“Hello World!” I guess that’s the most appropriate way to start my 1st blog post under the pythian.com domain. I’m going to start slow, but hopefully I will pick up speed and have at least a couple of posts each month to share with you. Until now I’ve been blogging at http://appsdbalife.wordpress.com/, and I haven’t decided yet what the future will be for my previous blog. I wouldn’t like it to become some kind of zombie page that’s been long dead but still wanders around the internet.

Enough intros, let’s get to business! I hope this blog post doesn’t get lost in the huge amount of posts related to OOW 2011.

A few days ago I was asked to estimate how much space needed to be added to the ASM diskgroup to handle the database growth for one year, without the need to add disks again later. Obviously, to estimate the disk space to be added I had to know what the DB size would be one year from now. It looked like an easy task, as I knew we were recording the size of the database every day.

I checked the size of the database – it was 887.3Gb, and a month ago it had been 856.9Gb, so it had increased by 30.4Gb in one month. 30.4Gb multiplied by 12 months equals 364.8Gb, so the database would be ~1252Gb after one year. That was easy, right?
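The naive linear estimate above can be written as a few lines of Python (the figures are the ones from this post; Gb throughout):

```python
# Naive estimate: assume the DB keeps growing at a constant monthly rate.
current_size = 887.3                              # DB size now, Gb
size_month_ago = 856.9                            # DB size one month ago, Gb
monthly_growth = current_size - size_month_ago    # 30.4 Gb/month

# Extrapolate 12 months ahead at a constant rate.
size_in_one_year = current_size + monthly_growth * 12
print(round(size_in_one_year, 1))                 # ~1252.1 Gb
```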

But wait a second! Wouldn’t it be logical that as the DB becomes bigger (there is more activity; there are more users, more transactions, more everything), it grows quicker too? So I checked how much the DB had increased during one month exactly one year ago – 21.2Gb (the DB size was 740.3Gb at that time). Looking at this number, I instantly knew the DB size I had calculated earlier was not correct.

This was the moment where physics kicked in. Velocity, distance and time could easily be substituted with monthly DB growth, DB size and time. Also, as the “velocity” had increased over the year, we were dealing with some acceleration too. I graduated high school 10 years ago, so I had to look up the formulas. First I needed to calculate the acceleration, and then, assuming the acceleration stayed the same, I would be able to calculate the “distance” (how much the DB would grow in one year). I used the following formulas (where s = distance, u = initial velocity, v = final velocity, a = uniform acceleration, t = time):

- v^2 = u^2 + 2*a*s
- s = u*t + (a*t^2)/2

I applied the 1st formula on the variables I had (v=30.4 Gb/month, u=21.2 Gb/month, t=12 months, s=887.3-740.3=147 Gb), so the results were:

- v^2 = u^2 + 2*a*s
- 30.4^2 = 21.2^2 + 2*a*147
- 924.16 = 449.44 + 294*a
- 474.72 = 294*a
- a = ~1.6147 (Gb/month^2)
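The same rearrangement, a = (v^2 - u^2) / (2*s), takes only a couple of lines in Python:

```python
# Solve v^2 = u^2 + 2*a*s for a, the "acceleration" of DB growth.
u = 21.2             # growth rate one year ago, Gb/month
v = 30.4             # growth rate now, Gb/month
s = 887.3 - 740.3    # size gained over that year: 147 Gb

a = (v**2 - u**2) / (2 * s)
print(round(a, 4))   # ~1.6147 Gb/month^2
```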

As I had the acceleration of the DB growth I could calculate the estimated growth of the database during one year (u=30.4 Gb/month, t=12 months, a = ~1.6147 Gb/month^2), and the result was:

- s = u*t + (a*t^2)/2
- s = 30.4*12 + (1.6147*12^2)/2
- s = 364.8 + 1.6147*144/2
- s = 364.8 + 116.2584
- s = ~481.1 Gb
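And the projection step in Python, plugging the acceleration back into the second formula:

```python
# Project one year of growth with s = u*t + (a*t^2)/2.
u = 30.4        # current growth rate, Gb/month
a = 1.6147      # acceleration from the previous step, Gb/month^2
t = 12          # months

s = u * t + (a * t**2) / 2
print(round(s, 1))   # ~481.1 Gb of growth over the year
```

Adding that to the current 887.3Gb gives the estimated size of ~1368.4Gb.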

Based on these calculations, the size of the DB would be 887.3 + 481.1 = 1368.4Gb, which is quite different from the 1252Gb I calculated before. In the end, I used the estimated DB size of 1368.4Gb to calculate how much disk space needed to be added to the ASM diskgroup.

Of course, the acceleration of the DB growth might not be uniform in reality, but I think it’s still better to account for it, as the result is likely to be more accurate than one based on the assumption that the DB growth is constant. You can do the same calculations for different types of data – the technique is not bound to databases; use it to estimate the total size of Apache logs, or the total size of concurrent request output files if you’re running e-Business Suite. Keep in mind that estimates are only estimates: don’t rely on them 100%, and they will be more accurate if the input data is accurate.

I hope you enjoyed reading this. See you next time!