The second major hardware question is: How much memory? Fortunately, this is very simple to answer. When in doubt, buy as much memory as your budget allows and your server can accommodate. No other hardware component so generally, easily, or quickly provides such dramatic performance improvements. Moreover, memory is by far the easiest hardware component to augment incrementally as time goes on. If you don't get enough memory at the start, don't hesitate to add more. Just make sure that you adjust your database and application parameters to take full advantage of all that memory.
So, do you just order as much memory as the server can hold or the budget can afford? What if you have multiple servers? Fortunately, you can be a bit more scientific with your estimates. In fact, you can use the same criteria from the CPU needs analysis to derive a more meaningful memory need estimate.
If we assume Oracle consumes about 50 MB of memory per business intelligence query process (based in large part upon proper INIT.ORA parameter settings, covered in Chapter 5), and that we must have at least 1 GB of RAM, we arrive at the following formula:
GB RAM = Round(Max(Concurrent Users × Avg. Concurrent Queries per User × PQO Degree × 50, 1000) / 1000, 0)
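The formula above can be sketched as a small helper function. The workload numbers in the example call are hypothetical, purely for illustration:

```python
def gb_ram_estimate(concurrent_users, avg_queries_per_user, pqo_degree,
                    mb_per_query=50, min_mb=1000):
    """Estimate total RAM in GB per the chapter's formula:
    Round(Max(users * queries * degree * 50 MB, 1000 MB) / 1000, 0)."""
    total_mb = max(concurrent_users * avg_queries_per_user * pqo_degree * mb_per_query,
                   min_mb)  # enforce the 1 GB floor
    return round(total_mb / 1000)

# Hypothetical workload: 100 users, 2 concurrent queries each, parallel degree 4
print(gb_ram_estimate(100, 2, 4))  # 40 (GB)

# A tiny workload still hits the 1 GB floor
print(gb_ram_estimate(5, 1, 1))   # 1 (GB)
```

Note how the 1 GB floor dominates only for very small workloads; for anything sizable, the parallel degree is the multiplier that drives the estimate up fastest.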
The results are shown in Figure 3-6.
Keep in mind that you'll need to spread all that memory evenly across however many nodes you end up with. Also remember that most SMP and NUMA machines can hold up to 64 GB of RAM, so they can meet the needs of all but the largest data warehouses (i.e., the bottom-right corner cell of Figure 3-6).
So what does this graph show us? Well, even at parallel degree one (i.e., serial query processing), we need at least 2 GB of memory for just a few queries. In fact, the graph suggests 6 to 26 GB for mid-size data warehouses, and 60+ GB (often the limit) for the largest data warehouses. And don't forget to factor in your data-loading memory needs; they might be above and beyond what this graph suggests.