I've got a workbook containing several million rows of data extracted by a SQL query. I've written some measures that clearly require a lot of calculation. What I am struggling to understand is this: if I have some pivot tables with some of these big measures already in place and I create a new pivot table with a very simple measure (like the sum of a field), it takes hours to populate this new pivot table, with "Reading data" showing in the bottom corner. The implication is that it is recalculating all the expensive measures even though they already exist. Even if I use slicers to pre-select a small subset for the new simple pivot table, it takes just as long.
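To be clear about what I mean by "simple", the new measure is nothing more than a plain sum along these lines (table and column names here are made up for illustration):

```
Simple Total := SUM ( FactTable[Amount] )
```

No CALCULATE, no iterators, no relationships being traversed, yet it still takes hours to show up in the new pivot table.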
Is this the behavior I should expect, where the time to do any calculation is dependent on the slowest calculation in the workbook? If so, it seems a nuts implementation. I know 2013 is rubbish because it recalculates everything whenever you change data in the PowerPivot window, unless you use the OLAP extension to disable the auto refresh (who designed that!!!). So is that what is happening here: it refreshes everything any time you make the slightest change to any pivot table? If so it is basically unusable.
I'd appreciate any clarity on how the calculations are performed, in case I'm being stupid.
Mike