Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Saturday, December 1, 2012

Review: Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI


Disclaimer: I'm afraid I haven't done a very good job reviewing the entirety of this paper because the stats/processing part was pretty much opaque to me. I've done my best to glean what I can out of it, and then I've focused as much as I can on the acquisition, since that is one part where I can penetrate the text and offer some useful commentary. Perhaps someone with better knowledge of stats/ICA/processing will review those sections elsewhere.


The last paper I reviewed used a bias field map to attempt to correct for some of the effects of subject motion in time series EPI. A different approach is taken by Prantik Kundu et al. in another recently published study. In their paper, Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI, Kundu et al. set out to differentiate signal changes that have a plausible neurally driven BOLD origin from those that are likely to have been modulated by something other than neuronal activity. In the latter category we have cardiac and respiratory fluctuations and, of course, subject motion.

The method involves sorting BOLD-like from spurious changes using an independent component analysis (ICA) and then "de-noising" the time series before applying connectivity analysis. For resting state fMRI in particular, the lack of any sort of ground truth, and the absence of the independent knowledge one has with task-based fMRI, makes disambiguating neurally driven signal changes from artifacts a major problem. Kundu et al. use a relatively simple philosophical approach to the separation:
"We hypothesized that if TE-dependence could be used to differentiate BOLD and non-BOLD signals, non-BOLD signal could be removed to denoise data without conventional noise modeling. To test this hypothesis, whole brain multi-echo data were acquired at 3 TEs and decomposed with Independent Components Analysis (ICA) after spatially concatenating data across space and TE. Components were analyzed for the degree to which their signal changes fit models for R2* and S0 change, and summary scores were developed to characterize each component as BOLD-like or not BOLD-like."

And, noting again the caveat that there is an absence of ground truth, the approach seems to work:
"These scores clearly differentiated BOLD-like “functional network” components from non BOLD-like components related to motion, pulsatility, and other nuisance effects. Using non BOLD-like component time courses as noise regressors dramatically improved seed-based correlation mapping by reducing the effects of high and low frequency non-BOLD fluctuations."


What does it mean to be BOLD-like?

BOLD contrast is achieved via changes of T2* in and around the venous blood. In a typical fMRI experiment the TE of a T2*-sensitive image acquisition such as EPI is set approximately equal to the T2* of gray matter, thereby producing a contrast mechanism that is maximally sensitive to changes in T2*. (See Note 1.) Instead, Kundu et al. acquire three differently T2*-weighted images per slice, i.e. three successive images are acquired at different TEs, allowing a fit for T2* for each voxel in the brain that contains signal. (In the paper the authors usually work with the relaxation rate, R2* = 1/T2*, rather than the relaxation time constant, T2*, but the two are clearly interchangeable.)
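For the terminally curious, the per-voxel fit boils down to something like this minimal sketch, assuming the mono-exponential model and using made-up signal values (the TEs are those from the acquisition described below):

```python
import numpy as np

# Mono-exponential model: S(TE) = S0 * exp(-TE * R2*). Taking logs gives a
# straight line, ln S = ln S0 - TE * R2*, so a simple least-squares line fit
# returns both S0 and R2* for a voxel. Signal values here are hypothetical.
TEs = np.array([15.0, 39.0, 63.0])      # ms, as used in the study
S = np.array([820.0, 410.0, 205.0])     # arbitrary units, one voxel, one time point

slope, intercept = np.polyfit(TEs, np.log(S), 1)
R2star = -slope                          # per ms
S0 = np.exp(intercept)

print(f"S0 ~ {S0:.0f}, R2* ~ {R2star:.4f} /ms, T2* ~ {1.0 / R2star:.1f} ms")
```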

Multi-echo acquisition has been used before to characterize BOLD changes, or to combat dropout effects that arise from the distribution of T2* values across a brain. (See the paper's Introduction for a review.) What's new in this work is the use of ICA to separate R2*-dependent components for each voxel's time series, where the goodness-of-fit to an R2* model can be used to characterize whether a particular time series component is more likely to be neurally-driven, i.e. BOLD-like, than a non-BOLD change, such as could arise from head motion.

So, the method relies upon the ability to model BOLD-like signal changes appropriately and thereby separate them from everything else. What does it mean to be BOLD-like? Kundu et al. explain it in detail in their Theory section, but in brief it means that a signal has a mono-exponential TE dependence that is consistent with small changes in magnetic susceptibility due to small changes in oxygenation in the (venous) blood. For BOLD-like modulation, then, the change of signal level from a baseline state to an activated state, cast as ΔS/S, is linearly dependent on acquisition TE in the limit of small changes in T2*, as shown at bottom-right in the figure below. Non-BOLD-like modulations don't fit this model. Instead, the ΔS/S is TE-invariant, as shown at bottom-left:



Mono-exponentiality is assumed in the BOLD versus non-BOLD characterization. Is that fair? I think it probably is, for voxels that aren't significantly larger than about 4 mm on a side. But the assumption is likely to be better the smaller one can make the voxels. There are only three TEs with which to characterize the TE dependence anyway, so for the time being that would seem to be a limitation that we are required to live with.

The other assumption is that only small changes in magnetic susceptibility are expected in BOLD-like fluctuations. The concentration of deoxyhemoglobin in venous blood varies by a few percent as the upstream neural activity varies; we're not expecting huge shifts of T2*. Large changes in magnetic susceptibility across a voxel can accompany movement, however. But in the case of head movement the dephasing across a voxel leads to changes in signal level that will be reflected in the term ΔS0/S0 as well. Thus, even if movement also causes a change in the TE dependence, we can still separate movement from BOLD-driven signals, because a BOLD-like component should show a change in the TE dependence only, with no accompanying change in ΔS0/S0.
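To make the dichotomy concrete, here's a toy comparison of the two models - my own construction, not the paper's actual statistics - for a fractional signal change measured at the three TEs:

```python
import numpy as np

# Toy comparison of the two TE-dependence models (my construction, not the
# paper's statistics). A BOLD-like change grows linearly with TE,
# dS/S = -dR2* * TE, whereas a non-BOLD change is flat, dS/S = dS0/S0.
TEs = np.array([15.0, 39.0, 63.0])                  # ms
dS_over_S = np.array([0.006, 0.0155, 0.025])        # hypothetical, ~linear in TE

# BOLD-like fit: a line through the origin (least-squares slope).
dR2star = -np.sum(TEs * dS_over_S) / np.sum(TEs**2)
resid_bold = dS_over_S - (-dR2star) * TEs

# Non-BOLD fit: a constant offset across TE.
dS0_over_S0 = dS_over_S.mean()
resid_s0 = dS_over_S - dS0_over_S0

print("BOLD-model residual power:    ", float(np.sum(resid_bold**2)))
print("non-BOLD-model residual power:", float(np.sum(resid_s0**2)))
# Here the BOLD model fits far better; a TE-invariant change would flip the result.
```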


The practical stuff

The immediate problem is obtaining data suitable to fit R2*. This isn't trivial because it takes tens of milliseconds to acquire a single image, a problem that manifests in routine EPI as distortion in the phase encoding dimension. On their 3 T GE scanner, Kundu et al. were able to acquire three different T2* weightings (TE = 15, 39 and 63 ms) using relatively large voxels (3.75x3.75 mm in-plane, 4.2 mm slice thickness) by employing SENSE parallel imaging with an acceleration factor of two. Even so the TR was 2500 ms for 31 slices (0.3 mm gap) to cover the whole brain.
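As a trivial sanity check on that timing (my arithmetic, not the paper's):

```python
# With TR = 2500 ms covering 31 slices, the per-slice time budget is about
# 80 ms - roughly what three back-to-back EPI readouts at these TEs, plus
# excitation and fat saturation, would be expected to consume.
TR_ms, n_slices = 2500, 31
print("time per slice ~", round(TR_ms / n_slices, 1), "ms")
```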

Pulse and respiration data were recorded separately and allowed different types of de-noising to be compared. I won't get into the comparisons, both for brevity and because I'm only interested in the multi-echo data characterized by ICA. For the ICA pipeline, then, conventional slice timing correction was applied, followed by motion correction (a.k.a. realignment) using the central TE image for each time point. I don't know why the central TE image was selected, but perhaps it was because it's the median value and thus might be expected to minimally bias the time series towards BOLD-like or non-BOLD-like changes. I think I would have been tempted to use the first TE images because they exhibit less dropout, so there should be more brain signal for the realignment algorithm to work with.

At this point some sort of magic happens. As previously mentioned, I have zero knowledge of ICA and only very slightly greater knowledge about stats in general. Apologies for glossing over this part and assuming that the fitting and statistical procedures were valid; I'm not qualified to do anything else. Anyway, ICA was applied to the time series data by spatially concatenating the data across space and TE, effectively treating the three TE images at each time point as an additional spatial variable for spatial ICA. Each ICA component was then analyzed for its TE dependence: BOLD-like or non-BOLD-like, as established by the TE-dependence criteria mentioned previously. The magic thus produces two new variables, κ and ρ, to characterize each independent component of the time series data:
"High κ indicated strong ΔR2*-like character (BOLD-like), and high ρ indicated strong ΔS0-like character (non BOLD-like)."


But will it blend?

By this point I feel like my head has been in a Blendtec. (This is the last time I try to review a heavy stats/modeling paper!) However, the results are compelling (for someone who has no knowledge of ICA or the actual processing used in the paper). For example, here is a comparison of a BOLD-like (top panel) versus a non-BOLD component (bottom panel):


The TE dependence of the images in the lower-left quadrant suggests that the non-BOLD modulation isn't driven by neurons; instead, the peripheral signal changes resemble classic head movement artifacts. Thus, for the non-BOLD (artifact) component there is a corresponding ring on the periphery of the brain having high percent ΔS0 (bottom-right corner image); head movement primarily modulates signal level without a TE dependence. In contrast, BOLD-like modulations show very little change in percent ΔS0, instead showing strong, localized changes in ΔR2* (top-right corner image). For the BOLD-like component shown here, κ was 184 and ρ was 15, whereas for the artifact component κ was 22 and ρ was 90.

The generation of κ and ρ permitted automated sorting of the BOLD-like wheat from the non-BOLD-like chaff:
"The ICA components were rank-ordered based on their κ and ρ scores. These two rank orderings (κ-spectrum and ρ-spectrum) were used to differentiate BOLD components from non-BOLD components. Both κ and ρ spectra were found to be L-curves with well-defined elbows distinguishing high score and low score regimes. This inherent separation was used to identify BOLD components in an automated procedure. First, the elbows of κ and ρ spectra were identified. The spectra were scanned from right to left to identify an abruptly high score following a series of similarly valued low scores. The κ and ρ scores marking abrupt changes were used as thresholds. Those components with κ greater than the κ threshold and ρ less than the ρ threshold were considered BOLD components. All other components were considered non-BOLD components. These were used as noise regressors in time course de-noising."

The elbows in the L-shaped rank plots of independent components seemed to be clearer for κ (BOLD-like) than for ρ (non-BOLD) values, but the features were certainly complementary:



Maps corresponding to high κ matched resting state networks seen in other studies. I'm not sure if that is, by itself, a good thing but let's move on. Maps of the highest ρ tended to feature brain edges or CSF-filled spaces, a strong indication that they would be motion-related artifacts. Maps of components near the elbows of the κ and ρ spectra were more difficult to interpret, but tended to be more suggestive of artifact than "proper" BOLD networks, according to the authors' interpretation.

Which brings us to the final proof: connectivity analysis. Time series correlation using seeds in hippocampus and brain stem showed that the de-noising using multi-echo ICA yielded spatial patterns that were more consistent across subjects than when standard de-noising techniques (such as RETROICOR) were used instead:
"The group T-maps based on low κ de-noising showed much higher T-statistics for connected regions than the group T-maps based on standard de-noising. This indicated that (Z-transformed) correlation coefficients based on ME-ICA were more consistent across subjects than Z-transformed correlation coefficients based on standard de-noising."

That seems like a good finding. One naively assumes that brains are connected with greater similarity than dissimilarity when examined with our relatively coarse fMRI tools.
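For completeness, the de-noising-plus-seed-correlation step amounts to something like the following generic sketch (standard nuisance regression and Pearson correlation, with random numbers standing in for data; not the authors' code):

```python
import numpy as np

# T time points, V voxels; Y, noise_tc and seed are stand-ins for real data.
T, V = 200, 5000
rng = np.random.default_rng(1)
Y = rng.standard_normal((T, V))            # voxel time series
noise_tc = rng.standard_normal((T, 8))     # rejected (non-BOLD) component time courses
seed = rng.standard_normal(T)              # seed-region time course

# Project the nuisance time courses (plus an intercept) out of the data and
# the seed by ordinary least squares.
X = np.column_stack([np.ones(T), noise_tc])
proj = np.eye(T) - X @ np.linalg.pinv(X)   # residual-forming matrix
Y_clean = proj @ Y
seed_clean = proj @ seed

# Seed-based correlation map, Fisher z-transformed for group statistics.
Yz = (Y_clean - Y_clean.mean(0)) / Y_clean.std(0)
sz = (seed_clean - seed_clean.mean()) / seed_clean.std()
r = (Yz * sz[:, None]).mean(0)             # Pearson r per voxel
z = np.arctanh(np.clip(r, -0.999999, 0.999999))
print("correlation map:", z.shape)
```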

But why the hippocampal and brain stem seeds?
"Studying functional connectivity of subcortical regions is challenging due to low functional contrast-to-noise due to CSF and blood flow pulsatility and distance from receiver elements. Where standard de-noising showed no clear correlation patterns for the hippocampal and brain stem seeds, ME-ICA de-noising revealed robust correlation patterns. The brain stem seed was localized to the anterior pons that contains corticospinal (pyramindal) tracts connecting to premotor, parietal, and motor regions (Kiernan, 2009). This pattern of anatomical connectivity agrees well with the pattern of functional connectivity exposed after ME-ICA de-noising. The hippocampus seed was localized to the head of the right hippocampus that has anatomical connectivity to sensory regions via temporal and entorhinal cortices (Kiernan, 2009). The pattern of functional connectivity exposed after ME-ICA denoising agreed with this pattern of anatomical connectivity."

This also seems to be good news.


Limitations of the study and ME-ICA

Assuming that the statistical evaluation works as presented, what are the limitations of using ME-ICA for de-noising fMRI data? My first concern is the use of three TEs, two of which (39 ms and 63 ms) may not offer very much signal in important brain regions having short T2* (below perhaps 20 ms); namely, portions of the frontal and temporal lobes. There is the danger of a 2-point or even a 1-point fit to the TE dependence in these regions. How does the method fare when the SNR is very low? I would want to assess regional variations in the de-noising before pushing for widespread adoption.
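A quick bit of arithmetic illustrates the worry (pure back-of-envelope, no real data):

```python
import numpy as np

# How much signal survives at each echo for a short-T2* region (~20 ms, e.g.
# parts of frontal and temporal lobes) versus a "good" region (~40 ms)?
TEs = np.array([15.0, 39.0, 63.0])
for T2s in (20.0, 40.0):
    print(f"T2* = {T2s:.0f} ms -> fractions of S0 at TE = 15/39/63 ms:",
          np.round(np.exp(-TEs / T2s), 2))
# For T2* ~ 20 ms the second and third echoes retain only ~14% and ~4% of S0,
# so the fit in such regions leans almost entirely on the first echo.
```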

And talking of validation, de-noising doesn't turn a bad experiment into a good one. I assume - because the paper gives no indication to the contrary - that all eight subjects were compliant. Perhaps they were all experienced fMRI subjects, too. Thus, I would put the data acquired in these experiments into the "good" bin; the subjects probably didn't move much compared to your typical, off-the-street fMRI volunteers, or kids, or elderly subjects with a medical condition. How does ME-ICA fare when movement is higher than for these eight subjects? Does the model always capture the non-BOLD components and leave the same BOLD-like networks? I would want to see some failure tests before moving to global use of ME-ICA. It would be very useful to have the same subjects scanned under different intentional movement regimes, for example.

I also have a minor concern about how the method fares with very small amounts of subject motion. Very small head movements could cause small T2* changes with minuscule concomitant signal intensity changes, through small shifts in magnetic susceptibility gradients across tissue boundaries, for instance. I wonder, then, whether the method might characterize very small movements as being BOLD-like. That would be perverse. Again, it would be important to test the method under movement extremes to see how it works (and fails) in practice.

My final concern is whether the method would be adopted based on the acquisition parameters presented. The acquisition requires multiple TEs for each slice, a temporally expensive thing to do. In fact, getting down to the TEs of 15, 39 and 63 ms used in the study required parallel imaging (SENSE with an acceleration factor of two), an option that isn't without its own penalties of increased motion sensitivity and decreased SNR. And even so, it was only possible to acquire 31 slices in a TR of 2500 ms. I can see a lot of people turning their noses up at that performance. It might be feasible to decrease the TEs, thereby increasing the number of slices/TR, by using even higher acceleration factors, but then the SNR goes down further and the motion sensitivity gets larger still. More on the prospects for pulse sequence developments in the final section, below.

What I do like about the proposed method, however, is the principle. This paper tries to develop a conceptual framework for discerning noise components, using a simple model of how a signal should behave with TE in order to be considered BOLD-like. It's a step up from the T2*-weighted acquisitions everyone else uses for connectivity. I note that nobody does plain diffusion-weighted imaging any more; we've moved to diffusion models, such as tensors, to fit and interpret the anisotropic motion-encoded signals that we acquire. Yet we haven't made that step en masse for fMRI. We're still doing the same T2*-weighted imaging that we were doing in the early nineties. A move towards principled evaluations - something more quantitative, as presented in this paper - would seem to be a worthwhile advance, and I welcome it.


What developments might benefit multi-echo acquisitions?

As I mentioned, I suspect the biggest hurdle facing ME-ICA isn't the complexity of the analysis or the lack of rigorous failure analysis when the method is applied to wiggly subjects, but rather the temporal overhead associated with the multiple echo acquisition. I did a quick back-of-the-envelope calculation and I reckon it is possible to get the same TE = 15, 39, 63 ms performance without parallel imaging (no SENSE or GRAPPA) by using 6/8ths partial Fourier acquisition instead, assuming 0.5 ms echo spacing for a 64x64 matrix over a 220x220 mm field-of-view. That eliminates the increased motion sensitivity of parallel imaging, but it doesn't get the data into the bag any faster.
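Here's that envelope, unfolded (idealized timing: the stated matrix, partial Fourier and echo spacing, with the three readouts played back-to-back and no gaps):

```python
# Back-of-the-envelope echo timing for a 64x64 matrix, 6/8 partial Fourier,
# 0.5 ms echo spacing, three EPI readouts played back-to-back (idealized).
matrix = 64
echo_spacing = 0.5                        # ms per k-space line
lines = int(matrix * 6 / 8)               # 48 lines acquired per image
to_center = lines - matrix // 2           # 16 lines before the k-space center
readout = lines * echo_spacing            # 24 ms per image readout

te1 = 15.0                                # first TE, set by excitation/fat-sat overhead
start = te1 - to_center * echo_spacing    # readout 1 begins 7 ms after excitation
tes = []
for _ in range(3):
    tes.append(start + to_center * echo_spacing)
    start += readout                      # the next readout follows immediately
print("achievable TEs (ms):", tes)        # -> [15.0, 39.0, 63.0]
```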

The authors suggest solutions to the speed issue in their Discussion section: use multi-band (MB) imaging in the slice direction to speed things up. I've recently started tinkering with MB-EPI and the acceleration is really quite amazing. With the University of Minnesota's variant (developed as part of the Human Connectome Project) it is possible to get whole brain, 2 mm isotropic voxels with a BOLD-optimal TE of 38 ms in a TR of 1300 ms using a 6-fold acceleration in the slice dimension. Damn, that's fast. Now, before you all rush off to use MB-EPI for all of your experiments I would caution you that there are likely similar motion sensitivities in MB-EPI as there are for in-plane GRAPPA - they use similar principles applied orthogonally. So, there is a good chance that motion may hamper MB-EPI performance a lot more than you'd like. The validations haven't yet been presented. But if we assume that those clever pulse sequence folk can maintain a reasonable degree of robustness to motion then what might a multi-echo, MB-EPI acquisition look like?

Avoiding in-plane parallel imaging and sticking to the partial Fourier scheme I suggested above, it would be reasonable to obtain TE = 15, 39, 63 ms images at each slice position using a multi-band factor of between two and six, and expect to get a corresponding improvement in the number of slices/TR. We really only need a factor of two to get full brain coverage when the slices are approximately 3 mm thick. But this assumes in-plane resolution comparable to that used in the current study - between 3 and 4 mm. Higher resolution necessarily increases the echo train length of the EPI readout and would thus extend the TEs well beyond the 15-70 ms range we need to fit T2* at 3 T.

What are the options for pushing the in-plane resolution substantially below 3 mm? The 2 mm voxels for the MB-EPI sequence I've tested had a minimum TE of 38 ms. The echo train length per image, using 6/8ths partial Fourier, is already some 38 ms long, too. I don't think many parts of the brain would fit T2* very well with TE = 38, 76, 114 ms images, even with the reduced dephasing (via a modest extension of T2*) that one gets from smaller voxels. To reduce the TEs to values that could be expected to fit T2* for most brain regions at 3 T requires either faster gradients or in-plane parallel imaging. With respect to gradient speed, we are already at the limit set by cardiac stimulation when using a whole-body gradient set, so that's off the table at the present time. Perhaps in-plane parallel imaging can be used profitably? Doubtless someone will be able to show that in-plane GRAPPA and high resolution can be made to work with multi-band imaging under certain circumstances - good subjects not prone to moving very much, perhaps - but at the risk of being boring I don't think these are the sorts of sequences that should be applied in routine practice. Motion would have to be very low indeed or the image quality would be awful.

Another option might be to move away from T2* BOLD and instead try to do the same sort of ICA decomposition for T2 BOLD, using multiple spin echoes in conjunction with multi-band encoding. Such an approach has its own limitations, of course: power deposition goes up a lot, maybe prohibitively, while overall BOLD sensitivity (at 3 T) is diminished by about 50%. Or, perhaps asymmetric spin echo images could be obtained, using the spin echo to extend the lifetime of the signal but offsetting the center of each image readout to encode some T2* rather than T2. We have options to explore, perhaps trading sensitivity for specificity. And that goal - specificity - is the main lesson of the paper, I think. It's where we should be aiming. Doing the same basic T2*-weighted resting state acquisitions over and over isn't getting us very far. I'm glad that Kundu et al. are suggesting ways to push through our present inertia.

In sum, then, I don't see everyone switching to multi-echo EPI acquisitions by this time next year. Best case, I suspect some hardy types might try ME-ICA as a way to validate what others are seeing; to use it as a yardstick. We still don't have ground truth, but I would put more faith into a network derived from ME-ICA than I would from the coincidental findings of a hundred "standard" resting state fMRI studies. 

_________________




Notes:

1.  It is well known that different regions of the brain have different T2*, thus requiring different TEs for optimal BOLD contrast. The voxel resolution, neuroanatomical variations, and magnetic susceptibility gradients arising from the skull and sinuses all interact to produce a complicated T2* dependence across the brain. Therefore, a compromise value of TE is usually set to achieve sufficient BOLD contrast in "good" regions of the brain, such as the parietal and occipital lobes, where T2* is 30-50 ms at 3 T, as well as in "bad" regions of the brain, such as the frontal and temporal lobes, where T2* is generally below 25 ms. Echo times in the range 25-35 ms are thus typical for 3-4 mm voxels. The TE for optimal BOLD contrast can sometimes be increased when voxels smaller than about 3 mm are used. An example of adjusting TE with voxel resolution is given in this paper, on amygdala fMRI.
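A quick numerical illustration of why TE ~ T2* is the sweet spot: for S = S0*exp(-TE/T2*), the signal change produced by a small change in R2* is proportional to TE*exp(-TE/T2*), which peaks at TE = T2*. (My arithmetic, with a made-up T2* value.)

```python
import numpy as np

# BOLD sensitivity ~ |dS/dR2*| = S0 * TE * exp(-TE/T2*); find its peak for a
# hypothetical T2* of 35 ms (a parietal/occipital-ish value at 3 T).
T2star = 35.0
TE = np.linspace(1, 100, 1000)
sensitivity = TE * np.exp(-TE / T2star)   # S0 dropped (constant scale factor)
print("TE of peak BOLD sensitivity ~", round(float(TE[np.argmax(sensitivity)]), 1), "ms")
```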



References and Links:

Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI.
P Kundu, SJ Inati, JW Evans, W-M Luh and PA Bandettini. NeuroImage 60 (3), 1759-70 (2012).
http://www.sciencedirect.com/science/article/pii/S1053811911014303
http://dx.doi.org/10.1016/j.neuroimage.2011.12.028
PMID: 22209809


8 comments:

  1. Thanks a lot for sharing this blog and your thoughts. It is very instructive and plainly explained for everybody.

    That said, a question came to mind while I was reading this article. I know that ICA noise cancellation exists for audio. In that case, two or more microphones near each other are used to decompose the audio sources. The results are spectacular, and some phone brands have started using the technology to let people speak from noisy environments. Now, the ME-ICA method explained here seems much like source separation in audio. In practice, multiple recordings (that is, multiple TEs) of the same environment (here the brain) are used to separate sources. My question is how similar these methods really are. Do they do a voxel-based ICA (trying to find the composing sources from a single voxel) or do they put a spatial factor in the game to make ICA also assess the distribution of sources?

    The first one looks very neat and nice to implement, but the second I don't understand yet. Trying to separate sources of BOLD with multiple TEs may be not just a motion correction but the fMRI analysis itself. Does this make sense?

    Dorian

  2. Hi Dorian, I'm afraid I have no ICA knowledge to speak of, so I'm not well placed to answer your question. Having said that, however, I can make one clarification that might help. Although multiple TEs are acquired, these are separate measurements only insofar as they are used to fit for T2*. Then, the T2* fits are used to separate the sources. So if I follow your analogy, this method only fits a single "microphone" to the data, one that can't be acquired in a single image but that requires three successive images to create. (I think of it as a slow discrete sampling process, compared to the faster discrete sampling process of acquiring a single T2*-weighted EPI.)

    If I come across good articles on ICA for fMRI I'll forward them. I need to learn a lot more about it myself!

    Got it. It's not exactly audio source separation then. Besides, in audio the mics should be in sync, while the three TE images are acquired in sequence, thus registering the mix at different points in time.

    Thanks, and congrats again on the blog.

  4. Hi Everyone

    In addition to PractiCalfMRI's concerns regarding these ME methods for removing motion-related signal, I have a couple more. I am concerned about the interaction of slice time correction (temporal interpolation) with the ME methods. I am also concerned about the source of the motion-related signal that remains after motion correction (registration across the time series). There are definitely sources to be concerned about, and we have highlighted one in this paper:

    http://arxiv.org/abs/1210.3633

    but I think that motion correction by registration methods alone is probably just inadequate. But anyhow, on to the specifics of my criticisms of the ME method.

    According to Nyquist, slice time correction is not expected to work well for frequency components greater than 1/(2*TR). So unless the subject motion is on a time scale of 2*TR (or longer), the motion component of the signal will have large interpolation errors. So I can't really expect the interpolated motion components to resemble the time course of the actual motion. This tells me that I should expect difficulties separating components due to motion from BOLD components. Nevertheless, the ME method does appear to pick out motion components. But does it pick out all of the motion components? Should we be comfortable with what remains?

    And what was the characteristic frequency of subject motion in this study anyhow? The paper does not say. I am assuming that nothing can be said about the characteristic frequency because the subjects were not asked to move, let alone to move at some prescribed frequency. So I am left wondering if the ability to pick out what look like motion components of the signal was due to the TR and the characteristic frequency of these particular subjects' motion satisfying the Nyquist sampling criterion.

    Leaving the temporal interpolation question aside, in the Kundu paper the ICA components that were eliminated look a lot like the product of inadequate image registration over the time series. In fact the article states that: "The non BOLD-like component has a high frequency time course with localization along the brain edges." So I think we can assume that image registration over the time series was not adequate (and probably is not adequate in general).

    Since there are many ways that motion should be expected to contaminate the image-space signal even in the presence of perfect motion correction - for example, temporal variations due to scanner-fixed contrast - it looks like the effects of motion in this study are a mixed bag of image registration error and scanner-fixed contrast variations. Which makes me think we need better motion correction, and better means to eliminate or explicitly account for the other motion-based error and noise.

    Of course, there are many such sources of motion-related noise, which is the reason that papers like this are being written. If we can't individually eliminate or successfully reduce motion-based error due to all these sources then perhaps we have to try to separate the BOLD from the not-BOLD by methods like the ME method. At this point, however, I am not convinced that the ME methods are doing much more than pointing out that we must first do better at image registration over the time series before we can move on to dealing with the other motion-based error.

    And I have not even mentioned the problem of separating the slice timing correction from the image registration (motion correction).

    What do you think?

    DS

  5. I should also mention that, to my knowledge, none of the ME methods papers applied prescan normalization (PN). PN should, at least in part, remove motion-related signal due to receive field contrast in the motion-corrected (the registration method applied in all these papers) time series. So I have to wonder how much motion-related signal would be left to clean up with these ME methods if something as simple as PN were done in the first place.

  6. Hi, great post. I too read the Kundu paper with great interest (and I'll keep an eye out for any future posts here that demystify ICA). If you're still in the mood for more literature on the topic, may I humbly offer another paper? (disclaimer: yup, I wrote it)

    Bright and Murphy (2013) Removing motion and physiological artifacts from intrinsic BOLD fluctuations using short echo data. Neuroimage 64:526-37.
    http://www.ncbi.nlm.nih.gov/pubmed/23006803

    Here, we collect only one extra echo (3.3 ms) and use it to denoise the BOLD-weighted data. Although it's a less elegant approach than ME-ICA, several of the limitations you list are covered: it is "free" to implement and appears to remove variance related to both very small and very large amounts of head motion in resting-state data with otherwise standard acquisition parameters. In a perfect world, I'd probably go down an ME-ICA route. With our imperfect acquisition abilities, a simpler technique might still help avoid the motion-related false positives that are affecting connectivity studies.



  7. @BrightMG: Thanks Molly, much appreciated! As it happens, I'd already read your paper and was planning a possible review at some point in the future. I just glanced over my notes from a couple of months ago and nothing jumps out, except one note observing that you must be doing spiral-out, yes? If you were doing spiral in-out, would there still be extra "free" time for the short-TE echo? And any thoughts on whether it would be worth adding an extra echo - which would likely necessitate an extended main TE - for an EPI acquisition? (I know it's not your problem that we all use EPI, just wondering how widespread the adoption might be!)

    Keep up the good work! We need a lot more investigations of motion like yours. Proposals for fixing motion are most welcome, too! Any new results on the horizon for ISMRM or HBM, perhaps?

  8. You are correct, we were using spiral-out. I haven't had enough experience with all the possible spiral combinations, but I would guess that spiral in-out might cause some trouble for our simple dual-echo regression method. In the paper, we compared using a 10 ms instead of a 3 ms start of the spiral-out, and this slight difference in acquisition resulted in a lot more activation-related variance being regressed out. So if relying on only two echoes for noise correction, the first one really must be as early as possible. Acquiring more than two echoes and doing some fitting before regression sounds like a good (and doable) way forward, and I would think slightly longer "BOLD-weighted" echoes wouldn't compromise very much; perhaps it comes back to what sort of activation you are trying to measure.

    I must admit that when this study finished I went quickly back to collect some EPI data... brains look like brains again! However, we have also tried flipping around this correction method for looking at dual-echo ASL data, and those results will be available as e-poster #3345 at ISMRM. It's a tricky application, due to the shared tag and the coupling between CBF and BOLD, but might have its uses.

    In the meantime, I'm working on ways to explain to scan volunteers *just how still* I want them to be.
