It's hard to explain, and harder still to remember, but here goes.
I was analyzing electricity time-of-use metering data for 11 different commercial sites. The meters spit out a consumption reading every 30 minutes, so I had 48 'trading periods' in the day, every day of the year, for each site: approx 200,000 rows of data in total (11 sites x 48 periods x 365 days).
What I needed to do was plot coincident peak demand values so I could look for overall trends, and see how usage changed in a particular half-hourly trading period and a particular month. Coincident peak demand means the highest combined load of a group of sites on a particular day at a particular time.
From memory, getting the max coincident peak demand in each trading period in a particular month was easy enough from the pivot table: I had to summarise the data by trading period and month, and change the value field setting to Max. But getting the average coincident demand was problematic, because the pivot table calculated averages from the disaggregated data (i.e. the individual site readings), whereas I needed it to first lump the data up into daily totals across all sites, then average those daily figures.
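The distinction is easier to see in code than in a pivot table dialog. Here's a minimal pandas sketch of the same aggregation logic; the column names and numbers are invented for illustration, not taken from the original workbook:

```python
import pandas as pd
import numpy as np

# Hypothetical reconstruction of the layout: one row per site per
# half-hourly trading period. 11 sites x 48 periods x 365 days is
# roughly the 200,000 rows mentioned above.
rng = np.random.default_rng(0)
dates = pd.date_range("2010-01-01", "2010-12-31", freq="D")
df = pd.DataFrame(
    [(s, d, p) for s in range(11) for d in dates for p in range(1, 49)],
    columns=["site", "date", "period"],
)
df["date"] = pd.to_datetime(df["date"])
df["kW"] = rng.uniform(5, 50, len(df))
df["month"] = df["date"].dt.month

# Coincident demand: combine all sites into one total per day and period.
coincident = df.groupby(["date", "month", "period"], as_index=False)["kW"].sum()

# Max coincident demand per (month, period) -- the easy pivot-table case.
peak = coincident.groupby(["month", "period"])["kW"].max()

# Average coincident demand: the mean of those daily totals.
avg = coincident.groupby(["month", "period"])["kW"].mean()

# The trap: averaging the raw disaggregated rows instead gives the
# mean per individual site, roughly 1/11th of the figure needed.
wrong = df.groupby(["month", "period"])["kW"].mean()
```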
I could do this by pivoting the data to aggregate it up across all sites, then copying the aggregated totals to a separate sheet so that I could pick out coincident maximums and also average readings over time.
But with the data as a table (i.e. database-style filtering applied to the raw range) and a few SUBTOTAL functions, I could get exactly what I wanted straight from the raw data, plus drop specific sites at will.
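The trick that makes this work in Excel is that SUBTOTAL (e.g. SUBTOTAL(9, ...) for a sum, SUBTOTAL(4, ...) for a max) skips rows hidden by the filter, so unticking a site recalculates everything from only the visible rows. As a rough analogue in the pandas sketch above (drop_sites and the column names are still assumptions), the same drop-a-site-and-recompute step looks like this:

```python
import pandas as pd

def coincident_stats(df: pd.DataFrame, drop_sites=()) -> pd.DataFrame:
    """Peak and average coincident demand per (month, period),
    recomputed over whichever sites survive the filter -- roughly
    what SUBTOTAL does over the visible rows of a filtered table."""
    kept = df[~df["site"].isin(drop_sites)]
    # daily coincident totals across the remaining sites
    daily = kept.groupby(["date", "month", "period"], as_index=False)["kW"].sum()
    # then the max and mean of those daily totals
    return daily.groupby(["month", "period"])["kW"].agg(peak="max", average="mean")

# e.g. coincident_stats(df, drop_sites=[3, 7]), with df as in the sketch above
```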