My Work at the University of Michigan Psychiatry Department in 2017, Part 1

After finishing my undergraduate degree in Mathematical Sciences and Statistics at the University of Michigan in 2017, I worked in the University of Michigan Psychiatry Department with Daniel Kessler, Dr. Eunjee Lee, Dr. Chandra Sripada, and Mike Angstadt. This was back in 2017, so thank you for understanding if my vocabulary isn’t up to par. I’d appreciate any corrections from the fMRI community in the comments section.

In summary, Dr. Eunjee Lee had developed an advanced Bayesian model that took connectivity matrices representing voxel-wise activation from resting-state fMRI and determined their association with neurological diseases. The lab figured the model would run faster if we implemented the MCMC sampler, a parameter-expanded Gibbs sampler, in C++. To complete this project, they needed someone who knew enough about MCMC, had worked with statistical models, and was a good C++ programmer. That person was me. I learned a lot about fMRI and fMRI experimental design on the job.

The implementation is here:

This is a bit of an over-simplification to make this bloggable, but here’s what the data looked like. It was resting-state fMRI using BOLD imaging, where the magnetic field at each voxel is disrupted by brain areas absorbing oxygen from the blood, which is used as a proxy for activation. A patient would lie in the scanner for a pre-specified amount of time, and at certain time points the magnetic field would be realigned, letting us measure disruptions in the field. After all of these time points were collected, the time series were correlated across voxels. After some preprocessing, re-aligning the data into a “mask” so everything was the same shape, the voxels were reorganized so that each patient’s brain was represented as a mathematical graph. In Dr. Eunjee Lee’s notation, such a graph can be represented as a Symmetric Positive Definite (SPD) matrix; in this case, the entries were positive reals. These matrices are also Hermitian, since they’re real symmetric, and a Hermitian matrix equals its conjugate transpose. In math, here’s what they look like. Let V be the number of voxels (nodes) in each mask.

L_i \in \mathbb{R}^{V \times V}

with (L_i)_{jk} > 0 and (L_i)_{jk} = (L_i)_{kj} for each pair of voxels j, k
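As a concrete, purely hypothetical illustration, here is how a toy connectivity matrix of this shape might be built from simulated BOLD time series with NumPy. The dimensions and signal model are made up for the example, and note that a real correlation matrix can have negative entries; this is a sketch, not the lab’s actual preprocessing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper): V regions, T time points.
V, T = 8, 200

# Simulate BOLD-like time series: a few shared latent signals plus noise,
# so that some regions end up correlated with each other.
latent = rng.normal(size=(2, T))
mixing = rng.normal(size=(V, 2))
bold = mixing @ latent + 0.5 * rng.normal(size=(V, T))

# Correlate each region's time series with every other region's.
# np.corrcoef treats rows as variables, so this yields a V x V matrix.
L = np.corrcoef(bold)

# Symmetric with ones on the diagonal, like the L_i above.
assert np.allclose(L, L.T)
```

In practice the preprocessing, masking, and correlation steps are far more involved than this, but the end product per patient has the same V × V symmetric shape.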

Each L_i is just a collection of edge weights between voxels. In psychiatry and fMRI, it’s useful to interpret the brain as a collection of networks, i.e. collections of edges that activate simultaneously. One example of such a network, not necessarily a psychiatry application but easy to think about, is the visual cortex. This network is mostly spatially colocated, but that isn’t always true of brain networks.

In Dr. Eunjee Lee’s case, during her PhD work in the statistics department at the University of North Carolina at Chapel Hill, she was given fMRI activation maps, demographic covariates, and indicators of whether each patient was a healthy control (HC), had Alzheimer’s Disease (AD), or had mild cognitive impairment (MCI). Whether a patient had AD, MCI, or was an HC was diagnosed clinically. The goal was to determine which of these clinical diagnoses was associated with which effects in brain network connectivity.

One way to obtain a network from a graph is dimension reduction. Common techniques include the singular value decomposition (SVD) and principal component analysis (PCA). In my last blog post, I used the SVD and the spectral gaps of the first singular vector to find clusters in data. Let’s take a look at the SVD factorization again. Take a matrix A \in \mathbb{R}^{m \times n}. We can factor it as:

A = U\Sigma V^T

where U \in \mathbb{R}^{m \times m} and V \in \mathbb{R}^{n \times n} are unitary (orthogonal, in the real case), and \Sigma \in \mathbb{R}^{m \times n} is diagonal. There’s a large literature on how to compute this factorization, but I won’t go into detail. Suppose instead that L is symmetric positive definite (SPD) as above. In the SPD case, the SVD coincides with the eigendecomposition, and we can write it as follows.

L = B \Lambda B^T

where B is unitary. In the fMRI case, L is a third-order tensor rather than a matrix, with dimensions N \times V \times V, where N is the number of patients. Eunjee had a clever solution: a generalized eigenvalue decomposition with group-level eigenmaps and patient-level, reduced-dimension connectivity matrices, which represent the reduced-dimension graph structure the investigators were looking for. In mathematics, we can write this as follows.

L_i = B \Lambda_i B^T

Notice that L_i and \Lambda_i are indexed by i, for each patient, but B is not. That is, B is common across all subjects.
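To build intuition for this shared-basis structure, here is a small NumPy sketch. It constructs synthetic subject matrices that exactly share a common B, then recovers a shared basis by eigendecomposing the average matrix across subjects. That recovery heuristic is my own toy illustration for noiseless data, not Dr. Lee’s Bayesian estimator, and all the dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
N, V = 5, 6  # subjects and voxels/regions (toy sizes, not the paper's)

# Build a common orthogonal B and subject-specific diagonal Lambda_i,
# then form each subject's SPD matrix L_i = B Lambda_i B^T.
B, _ = np.linalg.qr(rng.normal(size=(V, V)))
lambdas = rng.uniform(1.0, 5.0, size=(N, V))
L = np.stack([B @ np.diag(lam) @ B.T for lam in lambdas])  # N x V x V

# A crude way to recover a shared basis in this noiseless toy setting
# (NOT the paper's estimator): eigendecompose the average matrix.
B_hat = np.linalg.eigh(L.mean(axis=0))[1]

# B_hat then diagonalizes every subject's matrix simultaneously.
for i in range(N):
    D = B_hat.T @ L[i] @ B_hat
    off_diag = D - np.diag(np.diag(D))
    assert np.allclose(off_diag, 0.0, atol=1e-8)
```

With real, noisy data no single basis diagonalizes every subject exactly, which is part of why a full statistical model like Dr. Lee’s is needed rather than a one-shot decomposition.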

I’m wrapping this up for today, but I’ll slowly work through this paper as I have time. For those interested in reading ahead, the paper, sampler, and model are here:

As always, any comments or corrections are appreciated.

