Dear Experts,

Thank you very much for this very useful and elegant library!
I have a question about how the time series is, or should be, scaled before it is input into the analysis.
Since the fMRI signal is unitless, the various tools people use for fMRI GLMs (FSL, SPM, etc.) each have their own way of scaling the time series data before fitting the GLM. For example, FSL scales the data so that the average time series value across the whole brain equals 10,000, whereas SPM sets this mean to 100.
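The "grand mean scaling" described above can be sketched in a few lines. This is only an illustration of the idea (rescale so the overall mean hits a fixed target); the actual FSL and SPM implementations differ in detail, and the function name here is just for this example.

```python
def grand_mean_scale(data, target=10_000.0):
    """Rescale a flat list of signal values so their mean equals `target`
    (10,000 in FSL's convention, 100 in SPM's)."""
    mean = sum(data) / len(data)
    return [v * target / mean for v in data]

# Toy whole-brain signal with mean 1000
signal = [980.0, 1020.0, 1000.0, 1000.0]

fsl_style = grand_mean_scale(signal, target=10_000.0)
spm_style = grand_mean_scale(signal, target=100.0)

print(sum(fsl_style) / len(fsl_style))  # 10000.0
print(sum(spm_style) / len(spm_style))  # 100.0
```

Because the fit is linear, this global rescaling changes the units of the GLM betas but not the shape of the estimated responses.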
Recently, I have also found that tools like FSL's featquery and SPM's MarsBaR have fairly involved ways of calculating the % signal change in response to a condition or a contrast from a GLM, while taking the earlier scaling into account.
Besides what is done in these toolboxes, when working with region-of-interest data, different people seem to do different things. For example, some people scale the raw ROI signal to "percent signal change": scaled_signal = signal / mean * 100. Others use z-scores: scaled_signal = (signal - mean) / standard deviation.
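For concreteness, the two ROI-scaling conventions above can be written out as follows. This is a minimal sketch of the formulas as stated; in practice, percent signal change is often computed relative to a baseline period rather than the full-run mean.

```python
from statistics import mean, stdev

def percent_signal_change(signal):
    """scaled = signal / mean * 100: the series averages 100, so a
    deviation reads directly as a percentage of the mean."""
    m = mean(signal)
    return [v / m * 100 for v in signal]

def z_score(signal):
    """scaled = (signal - mean) / SD: unitless, with mean 0 and SD 1."""
    m, s = mean(signal), stdev(signal)
    return [(v - m) / s for v in signal]

roi = [98.0, 102.0, 100.0, 104.0, 96.0]
print(percent_signal_change(roi))  # mean of output is 100
print(z_score(roi))                # mean 0, SD 1
```

Note that the two conventions answer different questions: percent signal change preserves the relative amplitude of the response, while z-scoring equalizes variance across ROIs and discards amplitude information.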
My question to you is: what scaling is done in nideconv before fitting the GLM? Also, does the convolved response to a condition have units such as "% signal change"? If so, how are they calculated? I ask because I noticed that your documentation references "percent signal change" in the plot here: https://nideconv.readthedocs.io/en/latest/tutorials/plot_what_is_deconvolution.html , but it is never used again in the subsequent plots or tutorials.
Similarly, I would like to know whether any scaling is done in the functions you provide for extracting the BOLD time series, before fitting the GLM.
If the tool does not do any scaling, is there one that you would recommend?
Thank you very much!
Yours sincerely,
Leonardo Tozzi
For the time course, it is percent signal change, unless you specifically ask for t-values with the function get_t_value_timecourse. You might wanna look into the ResponseFitter class (and the corresponding get_timecourse method) for more on how it is calculated.