Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Monday, April 15, 2024

Core curriculum - Cell biology: taxonomy

 

Most of the biology we need to learn can be treated as orthogonal to the mathematics, whereas the mathematics underlies all the physics and engineering to come. As a change of pace, then, I'm going to start covering some of the biology so I can jump back and forth between two separate tracks. One track will involve Mathematics, then Physics, then Engineering; the other will cover Cell Biology, Anatomy, Physiology and then Biochemistry.

 

Let's begin with a simple overview of cell structure:

 https://www.youtube.com/watch?v=0xe1s65IH0w

The owner prohibits embedding this video in other media so you'll have to click through the link to watch.


Next, a little more detail on what's in a typical mammalian cell:


All well and good, but we are primarily interested in the types of cells found in neural tissue, whether central nervous system (CNS) or peripheral nervous system (PNS):


A little more taxonomy before we get into the details of neurons and astrocytes. In this video, we start to encounter the chemical and electrical signaling properties in cells, something we will get into in more detail in a later post. Still, it's timely to introduce the concepts.


As we move towards the neural underpinnings of fMRI signals, we need to know a lot more about neurons and astrocytes. Let's do neurons first.


While this next video repeats a lot of what you've already seen, there is enough unique information to make it worth watching.


Finally, a little more taxonomy that relates types of neurons to parts of the body, something that could be very important for fMRI when we are considering an entire organism.


To conclude this introduction to cell biology and types of neural cells, let's look at glial cells in more detail.



Another simple introduction to reinforce the main points:


And a nice review to wrap up.


We will look far more closely at astrocytes in a later video, once we've learned more about blood flow and control. For now, just remember that those astrocyte end feet are going to be extremely important for the neurovascular origin of fMRI signals.

 

That will do for this primer. The next post in this series will concern the resting and action potentials, signaling and neurotransmission.

_________________



Thursday, April 11, 2024

Coffee Break with practiCal fMRI

 A new podcast on YouTube


We all know the best science at a conference happens either during the coffee breaks or in the pub afterwards. This being the case, practiCal fMRI and a guest sit down for coffee (or something stronger) to discuss some aspect of functional neuroimaging in what we hope is an illuminating, honest fashion. It's not a formal presentation. It's not even vaguely polished. It’s simply a frank, open discussion like you might overhear during a conference coffee break.

In the inaugural Coffee Break, I sit down with Ravi Menon to discuss two recent papers refuting the existence of a fast neuronal response named DIANA that was proposed in 2022. Ravi was a co-author on one of the two refutations. (The other comes from the lab of Alan Jasanoff at MIT.) We then digress into a brief discussion about the glymphatic system and sleep, and finally some other bits and pieces of shared interest. I've known Ravi for three decades and it's been a couple of years since we had a good natter, so we actually chatted on for another hour after I stopped recording. Sorry you don't get to eavesdrop on that conversation. It was all science, zero gossip and the subject of expensive Japanese whisky versus Scotch and bourbon did not feature, honest guv.

 


All the links to the papers and some items mentioned in our discussion can be found in the description under the video on YouTube. 

What's next for Coffee Break? I have a fairly long list of subject matter and potential guests. I'm hoping to follow some sort of slightly meandering theme, but no promises. I'm also hoping to get new episodes out about once every couple of weeks. But again, no promises.

(PS The series of posts on the core fMRI syllabus will resume shortly with a new branch on biology, starting with basic cell biology.)

______________

Saturday, March 9, 2024

Core curriculum - Mathematics: Linear algebra VI

 

A13. Eigenvectors and eigenvalues

Let's end this section on linear algebra with a brief exploration of eigenvectors and their eigenvalues. An eigenvector of a transformation is simply a vector whose direction is unchanged by that transformation: it is only scaled by some constant. The constant factor (scalar) by which the eigenvector is scaled is called its eigenvalue. If the eigenvalue is negative then the vector is reversed in direction as well as scaled.

 Curious about the terminology? Eigen means "proper" or "characteristic" in German. So if you're struggling to understand or remember what eigenvectors are all about, perhaps it helps to rename them "characteristic vectors" instead.
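If you happen to have Python and NumPy handy (my assumption, not a requirement for anything that follows), you can watch the definition at work on a toy matrix of my own choosing:

```python
import numpy as np

# A symmetric 2x2 matrix whose eigenvectors turn out to be the diagonals y = x and y = -x.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # 3.0 and 1.0 (order may vary)
print(eigenvectors)   # the columns are the unit eigenvectors

# Check the defining property: A v = (eigenvalue) v for the first eigenvector.
v = eigenvectors[:, 0]
print(np.allclose(A @ v, eigenvalues[0] * v))   # True
```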

Here's a nice introduction to the concepts. Pay close attention to the symmetry arguments. It turns out eigenvectors represent things like axes of rotational symmetry and the like:

 

 

And with some of the insights under your belt, here's a tutorial on the mechanics of finding eigenvalues and eigenvectors:

 


________________






Thursday, February 22, 2024

Core curriculum - Mathematics: Linear algebra V

 

With some understanding of basic matrix manipulations, we're ready to begin using matrices to solve systems of linear equations. In this post, you'll learn a few standard tools for solving small systems - systems defined by a small number of equations - by hand. Naturally, for the larger systems found in fMRI we will use computers to solve the equations, but you should understand what's going on when you push the buttons.


A11. Elementary row operations and elimination

 
This is just your standard algebraic manipulation for solving multiple simultaneous equations, e.g. dividing both sides of an equation by some constant to simplify it, but with the equations represented as matrices:
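For the keyboard-inclined, here's a small NumPy sketch (toy numbers of my own, assuming you have Python and NumPy installed) that applies elementary row operations to an augmented matrix by hand:

```python
import numpy as np

# Solve the system  2x + y = 5
#                    x - y = 1  via row operations on the augmented matrix [A | b].
M = np.array([[2.0,  1.0, 5.0],
              [1.0, -1.0, 1.0]])

M[0] = M[0] / 2.0           # R1 -> R1 / 2        gives [1, 0.5, 2.5]
M[1] = M[1] - M[0]          # R2 -> R2 - R1       gives [0, -1.5, -1.5]
M[1] = M[1] / -1.5          # R2 -> R2 / -1.5     gives [0, 1, 1]
M[0] = M[0] - 0.5 * M[1]    # R1 -> R1 - 0.5*R2   gives [1, 0, 2]

print(M)   # reduced form: x = 2, y = 1

# The same answer, letting NumPy do the elimination for us:
A = np.array([[2.0, 1.0], [1.0, -1.0]])
b = np.array([5.0, 1.0])
print(np.linalg.solve(A, b))   # [2. 1.]
```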

 


A12. Cramer's Rule for solving small linear systems

According to Wikipedia:

In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations.
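As a quick sketch (again assuming Python and NumPy, and reusing my toy 2 x 2 system from above), Cramer's rule amounts to a few determinants and their ratios:

```python
import numpy as np

# The system  2x + y = 5,  x - y = 1  in matrix form A x = b.
A = np.array([[2.0,  1.0],
              [1.0, -1.0]])
b = np.array([5.0, 1.0])

det_A = np.linalg.det(A)   # -3.0

# Replace each column of A by b in turn, then take the ratio of determinants.
A_x = A.copy(); A_x[:, 0] = b   # column for x replaced by b
A_y = A.copy(); A_y[:, 1] = b   # column for y replaced by b

x = np.linalg.det(A_x) / det_A
y = np.linalg.det(A_y) / det_A
print(x, y)   # 2.0 1.0 (up to floating-point error)
```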

 



________________




Sunday, February 18, 2024

Core curriculum - Mathematics: Linear algebra IV

 

Before getting back to the lectures from 3Blue1Brown, try this part review, part preview:



Now let's get back into the meaning with a little more detail.

 

A9. The dot (or scalar) product 

The dot product is a way to measure how much two vectors interact in a common dimension. If the vectors are orthogonal to each other, they don't interact in any common dimension, so their dot product is zero. This is like asking how much north-south movement is involved in an east-west heading: none. But if two vectors are perfectly parallel then it is as if both lie on the same number line and we can use our standard (scalar) multiplication rules. In between, a little trigonometry determines their (dot) product: a · b = |a||b| cos θ, where θ is the angle between the two vectors.
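Here's a tiny NumPy sketch (my own example vectors, nothing to do with the videos) covering the three cases - orthogonal, parallel and in between:

```python
import numpy as np

east = np.array([1.0, 0.0])
north = np.array([0.0, 1.0])
northeast = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit vector at 45 degrees

print(np.dot(east, north))       # 0.0   : orthogonal, no common dimension
print(np.dot(east, east))        # 1.0   : parallel unit vectors, plain multiplication
print(np.dot(east, northeast))   # ~0.707: the in-between case, cos(45 degrees)

# Recovering the angle from the trigonometric definition a . b = |a||b| cos(theta):
a = np.array([3.0, 0.0])
b = np.array([2.0, 2.0])
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.degrees(np.arccos(cos_theta)))   # 45.0
```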

 


Still lacking an intuition? This excellent summary from Better Explained (slogan: "Learn Right, Not Rote") should do the trick.


A10. The cross (or vector) product

Both the dot and cross products affect dimensionality. With the dot product, we find how much two vectors interact in one common dimension. The cross product of two vectors is a new vector perpendicular to them both, telling us how much rotation arises in a third dimension; its magnitude is |a × b| = |a||b| sin θ, where θ is again the angle between the two vectors.
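A quick perpendicularity check in NumPy (again, my own toy vectors):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])   # the x-axis
b = np.array([0.0, 1.0, 0.0])   # the y-axis

c = np.cross(a, b)
print(c)                            # [0. 0. 1.] : the z-axis, perpendicular to both inputs
print(np.dot(c, a), np.dot(c, b))   # 0.0 0.0    : dot products confirm the perpendicularity

# Swapping the order flips the direction: b x a = -(a x b)
print(np.cross(b, a))               # [ 0.  0. -1.]
```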





A useful real-world example of the cross product is the torque vector. Torque is the rotating force generated by pulling or pushing on a lever, such as a wrench or a bicycle crank. The lever moves in one plane but produces a rotation orthogonal to that plane.
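As a sketch with made-up numbers (a hypothetical 0.2 m wrench and a 50 N push, both lying in the x-y plane), torque is just the cross product of the lever arm and the force:

```python
import numpy as np

r = np.array([0.2, 0.0, 0.0])    # lever arm in metres, lying along x
F = np.array([0.0, 50.0, 0.0])   # applied force in newtons, along y

torque = np.cross(r, F)
print(torque)   # [ 0.  0. 10.] : 10 N.m about the z-axis,
                # a rotation orthogonal to the plane containing the lever and the force
```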

 

 

Torque is also fundamental to the origins of the MRI signal. We will encounter it later in the physics section. Can you take a guess how torque might be relevant to the MRI signal? Hint: it has to do with the interaction of a nuclear magnet (the protons in H atoms) with an applied magnetic field.

This article from Cuemath covers the rules for computing dot and cross products. And here are a couple of useful visualizations:

 


 

________________



 

 

Saturday, February 17, 2024

Core curriculum - Mathematics: Linear algebra III

 

Now we start to think about transformations between dimensions, e.g. taking a 2D vector into a 3D space. Non-square matrices come up frequently in engineering and research applications, including fMRI analysis, so you'll want a good understanding of their meaning. 

 

 A8. Non-square matrices

Let's look at a simple physical interpretation of changing the number of dimensions.
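In NumPy terms (a toy example of mine, not taken from the video), a non-square matrix simply maps between spaces with different numbers of dimensions:

```python
import numpy as np

# A 3x2 matrix maps 2D input vectors to 3D output vectors.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

v = np.array([2.0, 3.0])   # a point in the 2D input space
print(A @ v)               # [2. 3. 5.] : the image lives in 3D,
                           # but only ever on a 2D plane within that 3D space
```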



We previously saw how to invert a square matrix. But how do we invert a non-square matrix?
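One standard answer - and I'm hedging here, since the video may take a different route - is the Moore-Penrose pseudoinverse, which NumPy provides directly:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])     # 3x2, so there is no ordinary inverse

A_pinv = np.linalg.pinv(A)     # the Moore-Penrose pseudoinverse, 2x3
print(A_pinv @ A)              # ~ the 2x2 identity: the 2D input is recovered exactly

b = np.array([2.0, 3.0, 5.0])  # a 3D point that A can actually reach
print(A_pinv @ b)              # [2. 3.] : the 2D vector that maps onto b
```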



 

________________



Friday, February 9, 2024

Core curriculum - Mathematics: Linear algebra II


Continuing the series on linear algebra using the lectures from 3Blue1Brown, we are getting into some of the operations that will become mainstays of fMRI processing later on. It's entirely possible to do the processing steps in rote fashion as an fMRI practitioner, but understanding the foundations should help you recognize the limits of different approaches.


A4. Matrix multiplication as composition

In this video we see how to treat more than one transformation on a space, and how the order of transformations is important.
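You can verify the order-dependence for yourself with two simple 2 x 2 transformations (my own choices, assuming Python and NumPy):

```python
import numpy as np

rotation = np.array([[0.0, -1.0],    # rotate 90 degrees counterclockwise
                     [1.0,  0.0]])
shear = np.array([[1.0, 1.0],        # shear horizontally
                  [0.0, 1.0]])

v = np.array([1.0, 0.0])

# Matrices apply right-to-left: shear @ rotation means "rotate first, then shear".
print(shear @ rotation @ v)    # [1. 1.]
print(rotation @ shear @ v)    # [0. 1.]  : a different result - order matters
```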

 



Q: While brains come in all shapes and sizes, we often seek to interpret neuroimaging results in some sort of "average brain" space, or template. We need to account for the variable position and size of anatomical structures. However, we also have the variability of where that brain was located in the scanner, e.g. because of different amounts and types of padding, operator error, and so on. When do you think it makes the most sense to correct for translations and rotations in the scanner: before or after trying to put individual brain anatomy into an "average brain" space? Or does it not matter?


 A5. Three-dimensional linear transformations

 Now we're going to move on from 2D to 3D spaces. Same basic rationale, just more numbers to track!
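Exactly the same machinery in NumPy, just with 3 x 3 matrices and 3D vectors (a toy rotation of my own):

```python
import numpy as np

# Rotation by 90 degrees about the z-axis, now as a 3x3 matrix.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])

v = np.array([1.0, 0.0, 2.0])
print(Rz @ v)   # [0. 1. 2.] : x and y rotate into each other, z is untouched
```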

 


A6. The determinant 
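If you want a numerical companion to the video (which, as I understand it, frames the determinant as the factor by which a transformation scales areas or volumes), try this quick check with toy matrices of mine:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 2.0]])    # stretch x by 3 and y by 2
print(np.linalg.det(A))       # 6.0 : areas grow by a factor of 6

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # the second column is just twice the first
print(np.linalg.det(B))       # ~0  : the plane collapses onto a line, so areas vanish
```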

 



A7. Inverse matrices, column space and null space

 


 

Perhaps it's not fully clear why we might need the inverse matrix. It turns out to be the way to achieve the equivalent of division using matrices. To cement this insight, let's look at the concept of an inverse matrix for solving an equation without division. Leaving aside the slightly goofy intro, it's a useful tutorial on the mechanics of determining an inverse matrix.
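In NumPy (toy numbers of my own), "dividing" by a matrix looks like this:

```python
import numpy as np

# Solve A x = b without any division: multiply both sides by the inverse of A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.inv(A) @ b
print(x)                      # [2. 3.]

# In practice np.linalg.solve is preferred - it avoids forming the inverse explicitly -
# but it gives the same answer.
print(np.linalg.solve(A, b))  # [2. 3.]
```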



________________