Memory: Statistical Results

Another crucial point where memory comes into play is the calculation of GLM statistics. The different approaches use quite different amounts of RAM. Let's assume we stick to the VTC case with a 3 x 3 x 3 mm resolution. For the multi-study approach we assume a total of 20 subjects, each having one VTC file (scan) with 25 single-study predictors.

Hence, the number of values stored for one time point (matching the number of values for one statistic) is

58 x 40 x 46 values = 106720 values

which means 213440 bytes (2 bytes per value) for one VTC volume and 426880 bytes (4 bytes per value) for a statistical result volume. For MTCs, the GLM is usually done on a per-hemisphere basis, leading to 40962 values and thus 163848 bytes per surface-based map.
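These numbers can be verified with a short Python sketch (illustrative only, not BrainVoyager code). It assumes, as above, the standard 58 x 40 x 46 VTC grid at 3 x 3 x 3 mm, 2 bytes per VTC value, 4 bytes per statistical map value, and 40962 vertices per hemisphere:

```python
# Illustrative sketch, not BrainVoyager code: reproduces the byte counts above.
vtc_values = 58 * 40 * 46        # values per VTC volume at 3x3x3 mm
print(vtc_values)                # 106720 values
print(vtc_values * 2)            # 213440 bytes per VTC volume (2-byte values)
print(vtc_values * 4)            # 426880 bytes per statistical result volume (4-byte values)

mtc_vertices = 40962             # standard vertex count per hemisphere
print(mtc_vertices * 4)          # 163848 bytes per surface-based map
```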

Single-study GLM

For a single-study GLM, two maps are calculated for each predictor (a beta map and a map holding the explained variance). Additionally, three maps are stored: the correlation of the predictors with the data, an R value map, and the variance, leading to 2 * 25 + 3 = 53 maps in total. The design matrix X and its inverse are also stored, but in the single-study case their size is negligible.

(2 * 25 + 3) * 4 * 106720 bytes ≈ 21.6 MB for the VTC-based GLM
(2 * 25 + 3) * 4 * 40962 bytes ≈ 8.3 MB for a one-hemisphere MTC-based GLM
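The map count and the resulting sizes can be reproduced as follows (the function name glm_maps is purely illustrative):

```python
# Illustrative sketch: single-study GLM storage, following the map count above.
def glm_maps(n_predictors):
    # 2 maps per predictor (beta, explained variance) plus 3 additional maps
    return 2 * n_predictors + 3

maps = glm_maps(25)                   # 53 maps
print(maps * 4 * 106720 / 2**20)      # ~21.6 MB for the VTC-based GLM
print(maps * 4 * 40962 / 2**20)       # ~8.3 MB for a one-hemisphere MTC-based GLM
```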

Multi-study FFX GLM

There are two approaches: one where you concatenate the functional data over time, and another where you separate the predictors across studies and subjects. If you choose to concatenate the runs (after normalizing the data with the appropriate option, of course), the number of maps does not change between the single-study and multi-study GLM, since the number of predictors essentially stays the same.

If you separate the predictors, the number of core predictors is calculated as

NrOfCorePredictors = NrOfSubjects * NrOfSingleSubjectPredictors + NrOfTimeCourseFiles

which in this case evaluates to

20 * 25 + 20 = 520 core predictors.

For this number, the same calculation as above applies:

(2 * 520 + 3) * 4 * 106720 bytes ≈ 424.6 MB for the VTC-based GLM
(2 * 520 + 3) * 4 * 40962 bytes ≈ 163 MB for a one-hemisphere MTC-based GLM
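The separated-predictors case follows the same pattern, now with the core-predictor formula plugged in (again an illustrative sketch with made-up function names):

```python
# Illustrative sketch: separated-predictors FFX GLM storage.
def core_predictors(n_subjects, preds_per_subject, n_timecourse_files):
    return n_subjects * preds_per_subject + n_timecourse_files

n = core_predictors(20, 25, 20)           # 520 core predictors
print((2 * n + 3) * 4 * 106720 / 2**20)   # ~424.6 MB for the VTC-based GLM
print((2 * n + 3) * 4 * 40962 / 2**20)    # ~163 MB for a one-hemisphere MTC-based GLM
```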

Please keep in mind that the additionally stored design matrix X requires some space in this case as well:

4 * (20 * 25 + 20) * 250 = 508 KB (4 bytes per value, with 250 being the number of time points in this example)
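The same estimate in Python, assuming (as read from the formula above) 250 time points and 4-byte values:

```python
# Illustrative sketch: storage for the design matrix X.
n_predictors = 20 * 25 + 20      # 520 core predictors
n_timepoints = 250               # assumed number of time points in this example
print(4 * n_predictors * n_timepoints / 1024)   # ~508 KB
```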

Although this does not seem like much, this number further increases with the number of subjects and predictors. Eventually, it might become the straw that breaks the camel's back!

Multi-study RFX GLM

For an RFX GLM, each predictor is estimated just as with the fixed-effects statistics. However, since the calculation is done on a per-subject basis first (reducing the memory required for storing the resulting maps at any one time!), the maps are represented differently in the program, allowing memory to be allocated in a different way. So, while the resulting GLM file will be of comparable size, machines that fail to compute an FFX GLM might still be able to run an RFX GLM without problems!
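To make the difference concrete, here is a simplified illustration (an assumption about the per-subject scheme, not a description of BrainVoyager internals): during per-subject estimation, only the single-subject map count needs to be held at once, rather than the full multi-subject count:

```python
# Illustrative sketch: peak map storage, per-subject RFX vs. separated FFX.
# This simplification assumes only one subject's maps are held at a time.
per_subject_maps = 2 * 25 + 3                   # 53 maps per subject
ffx_maps = 2 * (20 * 25 + 20) + 3               # 1043 maps for the separated FFX GLM
print(per_subject_maps * 4 * 106720 / 2**20)    # ~21.6 MB held per subject (VTC)
print(ffx_maps * 4 * 106720 / 2**20)            # ~424.6 MB for the FFX case (VTC)
```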
