To be honest, I'm writing about statistics now after too many failed attempts to explain the necessity of statistical analysis to the world around me.
First, I'll use one of the classic examples from grade school: the game show Let's Make A Deal. The main game here is that the host shows you three doors and tells you that two of them hide goats and one hides a Ferrari (Ferrari $\gg$ goats, in case that was unclear). You guess a door at random, but, before the host opens your door, he opens one of the other two and shows you that a goat is inside. You then get to choose whether to keep your original door or switch to the remaining one.
The surprising result (if you haven't heard it before) is that if you switch, you are twice as likely to end up driving a Ferrari as riding a goat. A quick Google search will turn up web applets where you can play this out and see for yourself.
Not that bad of a prize really.
Let's look at why switching is better than staying. First, note that which door you originally pick is of no consequence. Nobody knows anything at this point. Except for the host. And the staff. And the pretty lady opening the doors. Okay, YOU don't know anything. So say you pick a door.
Now, note that, while you don't know it yet, you have either picked the right door or the wrong door [so many possibilities!]. There is a 1/3 chance that your door is the winning door, and a 2/3 chance of goat times. They then show you a goat, and you have two doors left. Looks like your odds of winning are 1/2, right? One of the doors has a goat and one has the car.
But remember that your first door has a 2/3 chance of hiding a goat, and the host's goat reveal tells you nothing new about your own door, so that 2/3 doesn't budge; it all piles onto the one remaining door, which therefore has a 2/3 chance of hiding the car. It doesn't matter that you don't know what is behind your door: unless they're pulling fast ones on you backstage, switching wins 2/3 of the time and staying wins only 1/3 of the time.
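If the argument still feels slippery, a quick simulation settles it empirically. Below is a minimal sketch in Python (the function name, door labels, and 100,000-trial count are just illustrative choices, not anything from the show): it plays the game many times with each strategy and reports the win rate.

\begin{verbatim}
import random

def play(switch, trials=100_000):
    """Play many rounds of the three-door game and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the Ferrari
        pick = random.randrange(3)   # contestant's initial guess
        # host opens a door that is neither the pick nor the car (always a goat)
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # move to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # comes out near 1/3
print("switch:", play(switch=True))    # comes out near 2/3
\end{verbatim}

Run it a few times: the staying strategy hovers around 1/3 and the switching strategy around 2/3, exactly as the argument above predicts.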
As a side note, the game takes advantage of contestants' attachment to their original guesses.
So, statistics is good for game shows (and probably casinos and such too), but what else? There were statistics majors at my college! If beating video poker were their only incentive for exhaustive studies of confusing subtleties, they would have lost funding ages ago.
Statistics is needed in any experiment. Scientists attempt to measure reality, but there is always some error in that measurement.
Suppose you want to measure your arm
and you record that it is twenty inches long. What does that mean? The length of your bones? From somewhere on your shoulder to somewhere around your wrist? Even with a standard definition of "length of arm", that still doesn't tell you whether your arm is exactly twenty inches. Your ruler probably only goes down to 16ths or 32nds of an inch. Plus you're measuring by eyeballing it. How accurate is that?
Unfortunately, this problem doesn't go away with fancy special equipment, either.
Let's scale back a moment though, to a more practical example.
Suppose a friend brings you a die and claims that someone has been cheating and weighted the die towards the six (Risk anyone?) and wants you, an expert on rolling dice and such, to confirm or deny this belief. What would you do?
Assuming that it looks and feels normal, you would probably roll it a whole lot of times and record what you get. Maybe you roll it 100 times and get 100 sixes. Whoops, cheater exposed!
What happens if you only get 98 sixes? Still probably a weighted die. On the other hand, 16 sixes [the expected value is $16.66\bar6$] suggests a non-weighted die. But what about values in between? When do you change your mind from "bad luck" to "cheating-friend-we're-never-talking-to-again-because-of-a-really-important-Risk-game"? Well, you could always roll the die more times. After all, it's not that hard, and it's apparently quite important to get it right. If the fraction of sixes doesn't approach 1/6 as the rolls pile up, we know someone's cheating.
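To put rough numbers on that intuition, nothing fancier than the binomial distribution is needed. Under the fair-die hypothesis, the number of sixes in 100 rolls has mean $100/6 \approx 16.7$ and standard deviation $\sqrt{100 \cdot \frac{1}{6} \cdot \frac{5}{6}} \approx 3.7$. So 16 sixes sits right at the mean, 98 sixes is more than twenty standard deviations above it, and even a middling-looking 30 sixes is already about 3.6 standard deviations out, something an honest die essentially never produces. The fuzzy region a couple of standard deviations from the mean is exactly where the formal machinery earns its keep.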
Perhaps a more relevant scenario is the same one as above, but now suppose that it costs \$50 million per roll of the die.
All of a sudden, rolling it as many times as you want is no longer an option. If you're given an operating budget for three rolls and have to return an answer, what then? Three sixes sounds like a cheater, but Yahtzee players know that this happens. What about no sixes? Sounds like the die passed the test. But what if it wasn't weighted that much and just got an (un)lucky set of rolls? Not to mention any ground in between. It's not like you can just redo the experiment, and yet you have to report your results. How confident can you be that three sixes implies a cheater?
Luckily, statistics can help. In fact, statistics makes quite clear statements on "confidence levels". For example, if we roll three out of three sixes, we can be $>99.5\%$ sure that the die isn't normal.
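That figure comes straight from asking how often an honest die would embarrass you:
\[
P(\mbox{three sixes} \mid \mbox{fair die}) = \left(\frac{1}{6}\right)^{3} = \frac{1}{216} \approx 0.46\%,
\qquad
1 - \frac{1}{216} \approx 99.5\% .
\]
Declaring "weighted" after three straight sixes falsely accuses a fair die less than half a percent of the time, which is where the better-than-$99.5\%$ confidence comes from.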
Moreover, statistics can be used a priori to determine things like how many rolls are needed to decide, at a given confidence level, whether the die is weighted. Regardless, though, you can never be $100\%$ sure.
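For the "how many rolls do we need" question, here is one way the a priori calculation can go, sketched in Python with SciPy. All the numbers are assumptions for illustration: I pretend a cheating die shows a six $1/3$ of the time, cap the chance of falsely accusing an honest die at $1\%$, and ask to catch the cheat at least $95\%$ of the time.

\begin{verbatim}
import numpy as np
from scipy.stats import binom

FAIR, WEIGHTED = 1/6, 1/3   # assumed six-probabilities: honest die vs. cheating die
ALPHA, POWER = 0.01, 0.95   # tolerated false-accusation rate, desired detection rate

for n in range(1, 1001):                          # candidate number of rolls
    ks = np.arange(n + 1)
    tail_fair = binom.sf(ks - 1, n, FAIR)         # P(at least k sixes | fair die)
    reject = ks[tail_fair <= ALPHA]               # counts a fair die reaches with prob <= ALPHA
    if reject.size == 0:
        continue
    k = reject[0]                                 # smallest such count: the accusation threshold
    if binom.sf(k - 1, n, WEIGHTED) >= POWER:     # does the weighted die clear it often enough?
        print(f"{n} rolls: cry foul at {k} or more sixes")
        break
\end{verbatim}

The point of the sketch is the shape of the calculation, not the particular answer: how many rolls you need depends entirely on how heavily you assume the die is weighted, and a die that only slightly favors six takes far more rolls to unmask.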
That's statistics.