BRAINS IN BRIEFS


Scroll down to see new briefs about recent scientific publications by neuroscience graduate students at the University of Pennsylvania. Or search for your interests by key terms below (e.g., sleep, Alzheimer’s, autism).

NGG GLIA

The inner workings of a rare childhood disease

or technically,
Altered lipid homeostasis is associated with cerebellar neurodegeneration in SNX14 deficiency
[See original abstract on Pubmed]

Vanessa Sanchez was the lead author on this study. As a scientist, Vanessa is passionate about understanding the cell biology of neurons, especially in the context of pediatric neurodevelopmental and degenerative disorders. Outside of lab, you can find Vanessa tending to her garden, trying new recipes, rock climbing, or hanging out with her cat, Jiji!


Authors of the study: Yijing Zhou, Vanessa B Sanchez, Peining Xu, Thomas Roule, Marco Flores-Mendez, Brianna Ciesielski, Donna Yoo, Hiab Teshome, Teresa Jimenez, Shibo Liu, Mike Henne, Tim O'Brien, Ye He, Clementina Mesaros, Naiara Akizu

Neurons are special cells in our bodies that communicate with one another to help us do everyday things like eat, think and walk. Amazingly, for most healthy people, the neurons that we are born with will last our lifetimes and support us as we navigate the world. However, in some rare and unfortunate diseases, neurons die prematurely. These kinds of diseases are called neurodegenerative diseases. There are many different types of neurodegenerative disease, each targeting different groups of neurons and resulting in different symptoms. In 2014, a new and extremely rare neurodegenerative disease was discovered called SCAR20. SCAR20 was found to negatively affect newborn children by causing intellectual disability and impairing motor functions, like the ability to walk. Researchers were quickly able to identify the culprit of the disease: the total lack of a protein called SNX14. Since little is known about SNX14 and how its absence causes SCAR20, Vanessa Sanchez, a current NGG student, and her collaborators designed a study to learn more about the nature of this disease, with the hope that one day there might be a cure or treatment.

Figure 1. Key takeaways from Vanessa and colleagues’ experiments investigating the underlying causes of the SCAR20 disease.

To begin their investigation, Vanessa and her collaborators used genetic tools to remove the SNX14 protein from mice. Genetically modified mice are immensely useful in neuroscience research as they allow scientists to study the underlying causes of disease in detail. In this case, since the researchers removed a protein, the genetically modified mice are referred to as a knockout mouse model. After they generated their new knockout mice, Vanessa and her colleagues tested these mice to make sure that they had all of the symptoms that the children experienced. This was an important step in their study because they wanted to be sure that any discoveries they made using the knockout mouse model would be directly relevant for human patients. Vanessa and her colleagues compared the knockout mice to normal healthy mice and found a few convincing results (Figure 1, Healthy mouse vs. Knockout mouse). First, they found that knockout mice had a complete lack of SNX14 in their brains - the direct cause of SCAR20 in humans. Next, they found that knockout mice were smaller in size and had a structurally abnormal face - two known symptoms of SCAR20 in humans. Finally, they found that the knockout mice had worse social memory and motor ability compared to healthy mice - again, a clear-cut sign of SCAR20 in humans. Given these results, Vanessa and her colleagues were convinced that they had developed a good mouse model of the SCAR20 disease and could now investigate how the disease develops.

In order to gain insight into the underlying causes of the disease, Vanessa and her colleagues needed to narrow down their focus to a single brain area. In human patients, SCAR20 seems to preferentially kill neurons in a brain area known as the cerebellum. This brain area is typically thought to be involved in motor control and coordination, which might explain why SCAR20 patients have severe motor disability. Vanessa and her colleagues discovered that, just as in human SCAR20, the knockout mouse model also showed a preferential negative effect on the cerebellum of the mice. They found that both the number of neurons and the overall size of the cerebellum were reduced in the knockout mice compared to healthy mice, once again validating the model for the study of SCAR20 and identifying a key brain area to home in on.

At this point, Vanessa and her colleagues had all the tools they needed to study the inner workings of the disease. They performed a very important experiment in which they extracted neurons from the cerebellum of knockout mice before the neurons were killed by the disease, and looked for differences compared to the neurons in the cerebellum of healthy mice (Figure 1, Healthy neuron vs. Knockout neuron). By testing various cell properties, they discovered that one key cell property was disrupted in the neurons of knockout mice compared to the neurons of healthy mice. This key cell property is called lipid homeostasis, which is important for regulating lipids, the building blocks of fat, inside the cell. Despite what you may expect, fats play an essential role in cell biology. Disrupting the total amount of fats inside of the cell can be toxic, resulting in cell death. In fact, Vanessa and her colleagues discovered that knockout neurons had trouble removing fats from the cell, resulting in a build-up. They went on to show that this disruption in lipid homeostasis is most likely the root cause of neuron death in SCAR20, which underlies the known symptoms of the disease.

This important research by Vanessa and her colleagues sheds light on the inner workings of a new disease that severely impacts the well-being of newborn children. Although there is still much to learn about the nature of this disease, such as how it affects neurons in other brain areas, the findings from Vanessa’s experiments offer a strong foundation for the possible development of treatments for this debilitating disease. Finally, research like Vanessa’s is invaluable as it contributes to our basic understanding of how neurons work and what causes them to die prematurely - knowledge that is fundamental for all neurodegenerative diseases.

About the brief writer: Jafar Bhatti

Jafar Bhatti is a PhD Candidate in the lab of Dr. Long Ding / Dr. Josh Gold. He is broadly interested in brain systems involved in sensory decision-making.

Interested in reading more? See the full paper here!


Why some people wake up under anesthesia and others don’t (Hint: it’s your hormones)

or technically,
Hormonal basis of sex differences in anesthetic sensitivity
[See original abstract on Pubmed]

Dr. Andrzej (Andi) Wasilczuk was the lead author on this study. A recent graduate from Penn Bioengineering, Andi is captivated by the brain’s intricate ability to shift between states of consciousness. His research uses general anesthetics to uncover the neuronal circuits responsible for these shifts, aiming to understand how the brain sustains or disrupts consciousness. By identifying these critical networks, Andi is paving the way for personalized anesthesia and offering new insights into arousal state transitions.


Authors of the study: Andrzej Z Wasilczuk, Cole Rinehart, Adeeti Aggarwal, Martha E Stone, George A Mashour, Michael S Avidan, Max B Kelz, Alex Proekt, ReCCognition Study Group

Waking up during surgery sounds like a nightmare, and did you know that females might be at higher risk for this than males? Through medication, general anesthesia makes a patient unconscious, which allows doctors to perform surgical procedures without the patient’s awareness or discomfort. General anesthesia puts the patient in a sleep-like state, and doing this is an involved process. Anesthesiologists must be highly trained to determine the best course of treatment. When creating a safe treatment plan, anesthesiologists take into account many factors, such as the patient’s body weight or pre-existing conditions. The sex of the patient, however, has not historically been considered an equally important factor in delivering a safe course of anesthesia.

Previous research about the link between sex and response to anesthesia was ambiguous and conflicting. Some early clinical trials suggested that females were more likely to wake up under anesthesia, while others found no significant difference between males and females. These clinical trials, however, had diverse patient populations and non-standardized anesthetic protocols, which made it hard to directly compare anesthetic conditions between patients. Nevertheless, a more recent analysis of many of these studies provided clear-cut evidence that females are more resistant to the anesthetic state (Braithwaite et al., 2023). The question of how this sex difference arose, however, remained unanswered.

Dr. Andi Wasilczuk, a former Penn Bioengineering PhD student, and his team wanted to understand why females and males respond differently to anesthesia. To do this, they decided to focus on the hypothalamus, a structure in the brain heavily involved in both sleep-wake regulation and anesthetic-induced unconsciousness. The hypothalamus is regulated by hormones, the body’s chemical messengers, and the researchers knew that hormone levels typically differ between males and females. For example, males typically have much higher levels of the hormone testosterone, whereas females typically have higher levels of the hormone estrogen.

With these differences in mind, Dr. Wasilczuk wanted to know: Could hormonal differences across sexes alter the effectiveness of general anesthesia? He framed “effectiveness of general anesthesia” using the idea of “anesthetic sensitivity.” Individuals who are more sensitive to anesthesia need less of the drug to fall and stay unconscious, and wake up smoothly after surgery. On the contrary, individuals with less anesthetic sensitivity, or anesthetic resistance, require more anesthetic to fall and stay asleep, and wake up sooner once the anesthetic is removed.

Recognizing this gap in the research, Dr. Wasilczuk’s research group sought to test the influence of sex and sex hormones on anesthetic sensitivity in mice. First, the researchers compared the dosage of anesthetic required for the mice to be initially anesthetized (induction), and to wake up from anesthesia (emergence). They found that, across all four anesthetics the group tested, female mice required a much higher dose for induction, and were more likely to emerge at higher doses than males. Next, the researchers compared the time, given the same dosage, for female versus male mice to be induced and to emerge from anesthesia. Female mice took significantly longer to be induced than males, and were also much quicker to emerge. These experiments indicated that female mice were indeed more resistant to anesthesia than male mice.

Yet the reason for these results remained unclear: Were these effects due to sex hormone differences? To find out, the researchers changed the mice’s hormone levels by surgically removing the testicles (castration) in male mice or the ovaries (oophorectomy) in female mice after puberty. They repeated the experiments, this time using castrated males and oophorectomized females, then compared these mice to the untreated males and females tested before.

The results were striking. In both experiments, castrated males and oophorectomized females showed a similar resistance to anesthesia as untreated females. Oophorectomy did not change a female mouse’s anesthetic sensitivity. Castration, however, produced a female-like anesthetic sensitivity in males. Eliminating male sex hormones, therefore, seemed to remove the sex differences in response to anesthesia!

The researchers also directly measured the effect of testosterone. Under a steady dose of anesthetic, untreated males and castrated males were injected with testosterone, and continually tested for responsiveness using the righting reflex. Testosterone administration increased anesthetic sensitivity for both groups of mice in a dose-dependent manner. This finding could explain why males, who typically have higher testosterone, are more sensitive to general anesthetics, and therefore are at lower risk of waking up under anesthesia than females.

Intrigued, the researchers wondered: Can these sex differences be seen in brain activity? The conventional measure of anesthetic depth (how unconscious someone is in response to anesthesia) during surgery is the electroencephalogram (EEG). EEG measures electrical brain activity through electrodes attached to the scalp. The researchers found that sex differences were not reflected in the EEG of the mice they tested. Similar conclusions were reached when re-analyzing human data from another study, in which female volunteers displayed resistance to general anesthesia based on assessments of behavior and cognitive function, but not based on information gathered from the EEG.

Looking at the activity of individual neurons, however, clearly revealed sex differences. They looked for elevated levels of the protein c-Fos, an indicator of neuronal activity, throughout the whole brain. Compared to anesthetized male mice, anesthetized female mice had fewer neurons expressing c-Fos in sleep-promoting hypothalamic cells. In other words, anesthesia activates fewer sleep-promoting circuits in females than males, correlating with females’ greater resistance to anesthetics. 

Compared to untreated male mice, castrated male mice also had reduced c-Fos expression in similar hypothalamic structures. Fewer sleep-promoting circuits were activated in castrated males (which displayed a similar anesthetic sensitivity to females) than in untreated males. Thus, sex-dependent activity patterns in hypothalamic structures reflected anesthetic sensitivity trends!

Dr. Wasilczuk’s groundbreaking paper reveals why researching sex differences is incredibly important: females may need different anesthetic management than males due to their higher resistance to anesthesia. After years of standardized anesthesia administration to millions of patients, with EEG as the measure of anesthetic depth, Dr. Wasilczuk’s findings have huge clinical implications and support personalized anesthetic care.

About the brief writer: Sydney Liu

Sydney is a guest writer for Brains in Briefs! She is a Penn undergraduate in Dr. Shinjae Chung’s lab researching what makes us sleep and how the brain transitions between sleep states. She is a junior majoring in neuroscience, and is interested in teaching. In her free time, she likes to draw!

Citations:

  1. E. Braithwaite et al., Impact of female sex on anaesthetic awareness, depth, and emergence: A systematic review and meta-analysis. Br. J. Anaesth. 131, 510–522 (2023).

Interested in learning more about how anesthetic sensitivity is different in males and females? Check out Andi’s paper here!


Finding the patterns of white matter growth that support children’s cognitive development

or technically,
Development of white matter fiber covariance networks supports executive function in youth
[See original abstract on Pubmed]

Joëlle Bagautdinova was the lead author on this study. Joëlle is broadly interested in brain development and how this may go awry in psychiatric disorders. For her PhD in Dr. Ted Satterthwaite’s lab, Joëlle is using neuroimaging to study the mechanisms underlying brain development, cognition and psychiatric disorders. She is particularly interested in understanding the potential role of sleep as a risk factor in the emergence of mental illness.


Authors of the study: Joëlle Bagautdinova, Josiane Bourque, Valerie J. Sydnor, Matthew Cieslak, Aaron F. Alexander-Bloch, Maxwell A. Bertolero, Philip A. Cook, Raquel E. Gur, Ruben C. Gur, Fengling Hu, Bart Larsen, Tyler M. Moore, Hamsanandini Radhakrishnan, David R. Roalf, Russel T. Shinohara, Tinashe M. Tapera, Chenying Zhao, Aristeidis Sotiras, Christos Davatzikos, and Theodore D. Satterthwaite

Recently, many neuroscientists have been trying to uncover the developmental “blueprint” of the brain’s gray matter, or the specific ways in which brain regions grow and change over the course of adolescence. However, less attention has been paid to the brain’s white matter: the insulated, wire-like “tracts” that connect one brain region to another. NGG student Joëlle Bagautdinova and her colleagues in the Satterthwaite lab filled this gap by investigating white matter’s structural development in MRI scans from almost 1000 people ages 8 to 22 years.

While it famously does NOT imply causation, correlation can show that parts of the brain have similar structures and, therefore, might be following the same developmental blueprint. So, Joëlle and her colleagues decided to cluster every point along the brain’s white matter tracts (Figure 1) into groups with similar structures (Figure 2). Specifically, they grouped points with similar fiber density, or how many “wires” are packed together to make the tract, and cross-section, or how thick the tract is (Figure 1); they refer to the combination of these measurements as “FDC”. They also tested how each group’s FDC values changed across adolescence.

Figure 1. White matter tracts can be measured by their density and cross-section.

Figure 2. Points of white matter can be grouped by how similar their FDC (fiber density and cross-section) values are.

Usually, researchers assume that all points along a tract will develop similarly; however, because Joëlle determined her groups based on how similar the points are, different points along the same tract could be put into different groups, while points from more than one tract could be lumped together. This allowed her to uncover brand new relationships between different white matter tracts, as well as unique subsections that develop differently from the rest of their tract. For instance, she found that FDC in the lower part of the corticospinal tract, which connects the brain and spinal cord, was different from the FDC in the upper corticospinal tract, and each portion had its own unique growth trajectory. All in all, the researchers found 14 different groups of similarly-structured white matter regions, 12 of which showed significant structural changes across this period of adolescent development.
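For curious readers who want to see the grouping idea in action, here is a tiny, hypothetical Python sketch. It is not the authors’ actual analysis (they used a more sophisticated covariance-based method); it simply uses plain k-means clustering, a common stand-in, to group made-up “points” with similar FDC-like values, the way the study groups points along white matter tracts by structural similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: 300 "points" along white matter tracts, each described by
# two numbers standing in for fiber density and cross-section (FDC).
# The points come from three structurally similar groups.
true_centers = np.array([[1.0, 1.0], [5.0, 1.0], [3.0, 6.0]])
fdc = np.concatenate(
    [c + 0.3 * rng.standard_normal((100, 2)) for c in true_centers]
)

def kmeans(data, k, n_iter=50):
    """Plain k-means: group points whose feature values are similar."""
    # Deterministic farthest-point initialization: start from the first
    # point, then repeatedly add the point farthest from all chosen means.
    means = [data[0]]
    for _ in range(k - 1):
        dists = np.min(((data[:, None] - np.array(means)) ** 2).sum(-1), axis=1)
        means.append(data[np.argmax(dists)])
    means = np.array(means)
    for _ in range(n_iter):
        # Assign each point to its nearest cluster mean...
        labels = np.argmin(((data[:, None] - means) ** 2).sum(-1), axis=1)
        # ...then move each mean to the center of its assigned points.
        means = np.array([data[labels == j].mean(axis=0) for j in range(k)])
    return labels, means

labels, means = kmeans(fdc, k=3)
```

Just as in the study, nothing forces points from the same “tract” (here, the same true group) to end up together; they cluster together only because their feature values are similar.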

The age at which each white matter group developed most also seems to follow a pattern. Specifically, they found that the white matter in the lower back area of the brain matures earlier in adolescence, while the white matter in the upper front area of the brain doesn’t mature until a bit later. These early-maturing white matter tracts tend to connect parts of the brain that do what scientists call “lower-order” functions like vision processing, basic movement, and emotions - all things that children can do pretty well. Meanwhile, the later-maturing white matter tracts tend to connect brain regions that do “higher-order” functions like complex reasoning. Overall, the fact that white matter maturation seems to progress from “basic” to “complex” tracts suggests that white matter may play a big role in the brain’s development across adolescence.

Finally, Joëlle and her colleagues wanted to see if these white matter structures support kids’ executive function, one of these “higher-order” cognitive abilities that includes planning, organizing, and impulse control. They found that, after removing the effects of age, kids with better executive function tend to have higher FDC in all but one white matter group. This suggests that white matter tracts that are thicker and/or more tightly packed do a better job of sending signals between brain regions, especially those in the front of the brain that are responsible for cognition, and that this enhanced signaling may allow children to have stronger executive functions.

By using new, cutting-edge analyses, Joëlle and her collaborators were able to: uncover brand-new, biologically-based relationships between white matter areas; chart how these areas develop over adolescence; and show which white matter structures seem to help with cognitive function. All in all, this work fills in important gaps in our understanding of how the brains we’re born with mature into the brains of capable, full-grown adults.

About the brief writer: Margaret Gardner

Margaret is a PhD student in the Brain-Gene-Development Lab working with Dr. Aaron Alexander-Bloch. She is interested in studying how different biological and demographic factors influence people’s brain development and their risk for mental illnesses.

Want to learn more about this exciting research? Check out Joëlle’s paper here!


A bark or a grunt? A look at what makes animal calls sound different

or technically,
Time as a supervisor: temporal regularity and auditory object learning
[See original abstract on Pubmed]

Ron DiTullio was the lead author on this study. He is a computational neuroscientist who studies how the brain transforms information about the outside world into efficient and useful neural representations. His current work focuses on spatial navigation and memory, audition, and social cognition, with future work planned to focus on vision.


Authors of the study: Ronald W. DiTullio, Chetan Parthiban, Eugenio Piasini, Pratik Chaudhari, Vijay Balasubramanian, Yale E. Cohen

As you walk through the parking lot it probably doesn’t take much effort to recognize the laughter of your friends behind you, the gentle hum of a car engine rolling down the neighboring aisle, or the clatter of wheels rolling on someone’s shopping cart. Each one of these sounds is what neuroscientists call an auditory object. While we can often effortlessly recognize auditory objects, determining exactly how we do so is difficult. It might be obvious to us that a particular sound is a car engine or a laugh, but we would be hard-pressed to name exactly what features of the sound distinguish one from another. That’s why recent NGG graduate Dr. Ronald DiTullio and a team of researchers at the University of Pennsylvania set out to figure out what makes auditory objects sound different.

 Identifying the features that define auditory objects has at least two important applications in neuroscience and beyond. First, it might give experimenters clues as to how the brain learns to differentiate sounds. For example, once researchers have identified features that separate human voices from the sound of kitchen appliances, they can look at brain activity and ask whether the brain learns the same features. Second, being able to measure and quantify useful features of sounds allows scientists to build artificial systems that can distinguish them as well. For example, your Google Home or Amazon Alexa system might benefit from looking for features that will help it learn to distinguish your voice from a tea kettle boiling in the background.

 DiTullio and his team aimed to find the features that differentiate auditory objects by studying the different types of vocalizations that rhesus macaque monkeys make. They chose to study macaque vocalizations because they have structure, just like human speech, and vocalizations can be grouped into a small number of types. In this case, analyzing the differences between macaque vocalizations is easier than looking at something as complicated as spoken language. “There is a general pattern in nature,” DiTullio explained. “We know that macaque vocalizations should share similar structure with human vocalizations.” Because sounds in nature share similar features, the team hopes that what they learn about the differences between macaque vocalizations will apply to other types of sounds.

The research team started with a key observation: the sounds that make up auditory objects change slowly and smoothly over time. This is a property called temporal regularity. We can understand the concept of temporal regularity by thinking about playing notes on the piano. When you play and hold a note, the note fades away slowly and the components of the sound are largely the same over time. This example has high temporal regularity. However, if you randomly play and release a note, you quickly switch between silence and the note, and the components of the sound change a lot from moment to moment. Thus, this example has low temporal regularity. That is one way DiTullio and his team think we can distinguish one sound from another. Temporal regularity is a known feature of many auditory objects, including macaque vocalizations. “The underlying idea is to find a pattern we think exists in natural sounds and that the brain could get useful information from,” said DiTullio. “Our main motivation was to ask, ‘is temporal regularity that pattern?’”

 To test their idea, the team first demonstrated that temporal regularities can be used to distinguish auditory objects. To do this, they took recordings of four different types of macaque vocalizations (coo, harmonic arch, shrill bark, and grunt) and three different types of noise and used statistical methods to quantify different features of each audio clip. One of these statistical methods, called slow feature analysis, looked for temporal regularities in the audio clips, whereas the other methods relied on different types of features. The group found that the temporal regularities identified by slow feature analysis did a better job of distinguishing auditory objects than the features identified by the other statistical methods. This showed that temporal regularities are in fact an important feature of macaque vocalizations that can be used to differentiate them.
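For readers who want a feel for what “looking for temporal regularities” means in practice, here is a small, hypothetical Python sketch of linear slow feature analysis. It is not the team’s actual code; it only illustrates the core idea: whiten a multi-channel signal, then find the direction that changes most slowly over time. In this toy example, a slow sine wave mixed with a fast one across two channels is recovered as the slowest feature.

```python
import numpy as np

def slow_feature_analysis(x, n_features=1):
    """Linear SFA sketch: find projections of a signal that vary most slowly.
    x has shape (time, channels)."""
    # 1. Center and whiten so every direction has unit variance.
    x = x - x.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(x.T))
    whitener = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
    z = x @ whitener
    # 2. Approximate the time derivative with finite differences.
    dz = np.diff(z, axis=0)
    # 3. The slowest features are the directions in which the derivative
    #    has the least variance: eigenvectors with the smallest eigenvalues.
    _, dvec = np.linalg.eigh(np.cov(dz.T))
    return z @ dvec[:, :n_features]  # slowest feature(s) first

# Toy example: a slow sine and a fast sine mixed into two channels.
t = np.linspace(0, 10, 2000)
slow = np.sin(2 * np.pi * 0.2 * t)
fast = np.sin(2 * np.pi * 20.0 * t)
x = np.stack([slow + 0.5 * fast, slow - 0.5 * fast], axis=1)
recovered = slow_feature_analysis(x, n_features=1)
# The recovered feature tracks the slow sine (up to sign and scale).
```

The design choice mirrors the intuition in the brief: a slowly, smoothly changing component of a sound is exactly what this analysis pulls out first.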

 The team next applied their idea to another challenge for our auditory systems: the ability to recognize auditory objects in the presence of noise. For example, most people will have no problem distinguishing the statement “I have plans” from “I have plants” in a quiet room, but that becomes much more difficult when having a conversation at a crowded cocktail party. To test whether temporal regularities might help solve this problem, the team applied the same statistical methods discussed previously to audio clips of the same types of macaque vocalizations, but with noisy backgrounds. They found that the temporal regularities identified by the slow feature analysis did a better job than any other features at distinguishing macaque vocalizations with noisy backgrounds. The team showed that temporal regularities are a useful way the brain might solve the problem of identifying sounds in the noisy environments we encounter in our everyday life.

 Taken together, these experiments identify an important feature of auditory objects that is useful for distinguishing them: temporal regularity. This opens the door to lots of exciting future work. One open question is whether the brain uses the same features when learning the differences between auditory objects. “We hope that more people will look for these kinds of signals in the brain,” said DiTullio. “People know that temporal regularities are an important part of audition, but they don’t necessarily look for how... the brain might learn these [features].” Another interesting direction for future work would be to investigate different types of sounds, like spoken language. DiTullio’s team has already done follow-up work applying these ideas to bird and human vocalizations. It seems like this is only the start of what temporal regularities may be able to teach us about what makes the sounds we encounter every day sound so different.

About the brief writer: Catrina Hacker is a PhD candidate working in Dr. Nicole Rust’s Lab. She is broadly interested in the neural correlates of cognitive processes and is currently studying how we remember what we see.

Learn more about how the team determined what distinguishes different types of sounds in the original paper.


Neurons in the brainstem promote REM sleep and trigger brainwaves that might cause dreaming

or technically,
A medullary hub for controlling REM sleep and pontine waves
[See original abstract on Pubmed]

Dr. Amanda (Mandy) Schott was the lead author on this study. As a researcher, Mandy is most interested in defining neural circuits, and how specific populations of cells communicate to generate essential human behaviors such as sleep.


Authors of the study: Amanda Schott, Justin Baik, Shinjae Chung & Franz Weber

Rapid eye movement (REM) sleep is the sleep state that most people associate with dreaming; however, REM sleep has many other essential functions. While REM makes up only about 20-25% of our nightly sleep, it is vitally important for memory, emotional processing, and other functions we have yet to understand. This is true not just for humans, but for all mammals, and maybe even birds and reptiles! To facilitate all these functions of REM, the brain is highly active during this sleep state. In fact, during REM sleep, brain signals look more similar to wake than to non-REM sleep. Because of this, REM sleep is sometimes called paradoxical sleep: paradoxically, the brain is highly active during rest.

Surprisingly, we still know very little about how the brain switches from low-activity non-REM sleep to high-activity REM sleep. Moreover, during REM sleep there are sometimes sporadic brain waves that seem to be important for normal brain function but whose precise role is still not totally clear. P-waves are one such waveform that is caused by lots of synchronous neuronal activity in the back of the brain, in a brainstem region called the pons. From the pons, P-waves travel forwards in the brain to brain regions important for forming and storing memories, and also areas involved in visual processing. These P-waves are interesting because they occur only during REM sleep, and are proposed to be involved in dreaming and the memory functions of REM sleep. A paper by recent NGG graduate Dr. Amanda Schott investigated two major unknowns in REM sleep research: 1) What neurons and brain regions are involved in generating REM sleep, and 2) What neurons and brain regions are involved in generating P-waves. Is it possible that one set of neurons could do both? 

While we know of several brain regions in the brainstem that regulate REM sleep, most of them consist of inhibitory neurons, meaning they “turn off” other brain regions to promote REM sleep. Dr. Schott, however, found a highly unusual group of excitatory neurons in a part of the brainstem called the dorsal medial medulla (dmM). These excitatory neurons can “turn on” other neurons they make connections with. These dmM excitatory neurons were only active during REM sleep, suggesting they may be involved in promoting REM sleep. In addition, dmM neurons project their axons and send signals to the part of the pons that is known to generate P-waves. In fact, the dmM neurons were active at the same time the P-waves occurred, suggesting that the dmM excitatory neurons could be involved in the generation of P-waves too! Dr. Schott next wanted to directly manipulate the activity of these neurons to see if they could cause transitions to REM sleep or trigger P-waves.

Using a modern neuroscience technique called optogenetics, Dr. Schott was able to make the neurons in the dmM fire when laser light was shined over them through an optic fiber. She simultaneously determined whether the mouse was awake, asleep, or in REM sleep by measuring the mouse’s brain waves using electroencephalography, or EEG. She found that stimulating these neurons caused the mouse to enter REM sleep and also increased the length of REM sleep episodes. Shining the laser light also triggered a p-wave about 60-100% of the time when the mouse was sleeping. Experimentally reducing the activity of the dmM neurons decreased both the amount of REM sleep and the number of p-waves. Dr. Schott interpreted these findings as evidence that dmM excitatory neurons are critical for normal amounts of REM sleep to occur, and for triggering p-waves. 
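
To make the "about 60-100% of the time" figure concrete: analyses like this typically count how often a p-wave follows a laser pulse within a short time window. Below is a toy sketch of that calculation; the event times, the 0.1-second window, and the function name are all invented for illustration and are not taken from the paper.

```python
# Toy sketch: estimating how often laser stimulation evokes a p-wave.
# Times are in seconds. The data and the 0.1 s response window are
# invented for illustration, not the paper's actual values.

def evoked_probability(laser_times, pwave_times, window=0.1):
    """Fraction of laser pulses followed by a p-wave within `window` seconds."""
    hits = 0
    for t in laser_times:
        # A "hit" means at least one p-wave landed in (t, t + window].
        if any(t < p <= t + window for p in pwave_times):
            hits += 1
    return hits / len(laser_times)

laser_times = [1.0, 2.0, 3.0, 4.0, 5.0]
pwave_times = [1.05, 3.02, 4.08, 5.01]   # no p-wave after the 2.0 s pulse
prob = evoked_probability(laser_times, pwave_times)
print(prob)  # → 0.8
```

Here four of the five hypothetical pulses are followed by a p-wave, giving an evoked probability of 0.8, squarely in the range Dr. Schott reported.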

Overall, Dr. Schott’s work adds an important piece to the puzzle of which brain regions can promote REM sleep. Her findings are an important first step in understanding which neurons generate p-waves, which is ultimately necessary to understand p-wave function. This work will provide a foundation on which others (including the author of this piece!) can study the role of p-waves in REM sleep, and move closer to finally understanding how and why we dream.

About the brief writer: Emily Pickup

Emily is a 4th year PhD candidate in Dr. Franz Weber’s lab. She is interested in the biological functions of sleep. Specifically, she is interested in understanding the function of REM-specific p-waves, the large pontine waveforms implicated in memory consolidation discussed in the brief above.

Interested in learning more about REM sleep and p-waves? See the original paper here.


Does zip code contribute to racial health disparities in aging?

or technically,
Contributions of neighborhood social environment and air pollution exposure to Black-White disparities in epigenetic aging
[See original abstract on Pubmed]

Isabel Yannatos was the lead author on this study. They studied racial disparities in aging, examining how the external environment is internalized to influence health outcomes among Black and White Americans. Currently, they are pursuing a career in health policy to move towards health equity.

Authors of the study: Isabel Yannatos, Shana Stites, Rebecca Brown, Corey McMillan

What’s the first thing that comes to mind when you think of health? A doctor’s office? A waiting room? A pharmacy? Whatever image you conjured, your house and your surrounding neighborhood probably didn’t make the cut, but the environments in which we live, work, and age are actually huge determinants of our health. 

These “environments” encompass not just the environment in a traditional sense (exposure to severe weather or air pollution), but also the social (violence/crime, sense of community, and wealth) and physical (walkability or access to green spaces) elements of your neighborhood. All of these factors play a critical role in how healthy you are and how well you age [1]. However, not all environments are created equal, and long-standing racism and exclusionary policies directly influence how these important health determinants, like access to healthy food or exposure to pollutants, are distributed. 

For instance, it is well established that Black Americans have fewer socioeconomic resources and higher exposure to unhealthy conditions in their neighborhoods [2]. In turn, these environments create and cement a number of racial health inequities, such as increased prevalence of age-related diseases like Alzheimer’s [1-3]. This disparity in aging is not due to genetic differences but rather to differences in socioeconomic risk factors, inadequate healthcare access (e.g., decreased management of conditions like high blood pressure and diabetes), and the cumulative impact of social stress and political marginalization [3]. 

Motivated to further understand how neighborhood discrimination influences health and aging, former Neuroscience Graduate Group (NGG) student Dr. Isabel Yannatos and colleagues from the Bioinformatics in Neurodegenerative Disease lab leveraged a large (2,960 participants!) dataset collected by the National Institute on Aging. This Health and Retirement Study consisted of phone or in-person surveys collected from a representative population of Americans 50 years and older. 

In addition to filling out these surveys, which asked individuals about their race/ethnicity, age, and social and physical environment, a subset of participants gave a blood sample for measurement of DNA methylation. Methylation is a process by which the body adds little chemical tags to specific parts of DNA molecules. Typically, these methylation tags prevent part of the DNA molecule from being used by the body. That is to say, methylation can change the activity of the DNA molecule without changing the DNA sequence. Accumulating DNA methylation tags is a normal part of aging. It’s one way our bodies, and our DNA, change as we get older. In fact, scientists can use the pattern of methylation present on a person’s DNA to calculate their biological age. The difference between this biological age (i.e., DNA methylation-based age) and chronological age (i.e., the number of years someone has been alive) is one way to measure the pace at which someone is aging [4]. If biological age comes out higher than chronological age, it’s a sign that the body is aging more quickly than expected.
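
For readers who want to see the arithmetic, the aging measure described above reduces to a simple difference. The sketch below is purely illustrative: the function name and the numbers are invented for the example, and real epigenetic clocks estimate biological age from hundreds of methylation sites rather than taking it as a given.

```python
# Illustrative sketch: "age acceleration" is the gap between a DNA
# methylation-based (biological) age and chronological age.
# The numbers below are hypothetical, not from the study.

def age_acceleration(biological_age: float, chronological_age: float) -> float:
    """Positive values suggest the body is aging faster than expected."""
    return biological_age - chronological_age

# A hypothetical 60-year-old whose methylation pattern looks like
# that of a typical 65-year-old:
accel = age_acceleration(biological_age=65.0, chronological_age=60.0)
print(accel)  # → 5.0
```

In the study’s framework, a group whose average score sits above zero appears to be aging faster than its chronological age alone would predict.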

Isabel and colleagues used measures of DNA methylation to calculate a biological age for each participant. They found that Black participants appeared biologically older than White participants of the same chronological age, suggesting that Black Americans are aging more quickly than their White counterparts. Could this difference in aging between Black and White Americans be due to racial disparities in neighborhood environments? Using survey responses and additional data on air pollution from the Environmental Protection Agency, Isabel concluded that the Black Americans surveyed had lower neighborhood socioeconomic resources, higher levels of social deprivation, and increased exposure to fine particulate matter (PM2.5, the same pollutant given off by wildfire smoke). Moreover, these factors did contribute to the difference in DNA methylation-based age between Black and White participants, suggesting that neighborhood discrimination and deprivation are associated with accelerated aging among Black Americans.

Interestingly, Isabel found differences not only in the amount of PM2.5 exposure between Black and White participants, but also in the extent to which this exposure to air pollution affected aging. In particular, the analysis revealed that Black individuals appeared more vulnerable to the effects of air pollution. This increased vulnerability was a result of fewer socioeconomic resources, which affect things like proximity to major sources of pollution and the quality of air filtration available. In other words, the neighborhood environments of Black participants not only put them at higher risk of exposure, but also made them more susceptible to exposure’s damaging effects. 

This study has numerous important implications. For one, Isabel’s work provides evidence that working to eliminate the racial gaps in neighborhood social and economic conditions would go a long way toward alleviating the accelerated biological aging and increased vulnerability to environmental exposure present among Black Americans. It also highlights just how important information about neighborhood conditions can be for health and aging, inviting future exploration of how local environments contribute to our health across the lifespan.

About the brief writer: Kara McGaughey

Kara is a PhD candidate in Josh Gold’s lab studying how we make decisions in the face of uncertainty and instability. Combining electrophysiology and computational modeling, she’s investigating the neural mechanisms underlying this adaptive behavior.

Citations:

  1. Diez Roux, A. V. (2016). Neighborhoods and Health: What Do We Know? What Should We Do? American Journal of Public Health, 106(3), 430–431. https://doi.org/10.2105/AJPH.2016.303064

  2. Tessum, C. W., Apte, J. S., Goodkind, A. L., Muller, N. Z., Mullins, K. A., Paolella, D. A., Polasky, S., Springer, N. P., Thakrar, S. K., Marshall, J. D., & Hill, J. D. (2019). Inequity in consumption of goods and services adds to racial–ethnic disparities in air pollution exposure. Proceedings of the National Academy of Sciences, 116(13), 6001–6006. https://doi.org/10.1073/pnas.1818859116

  3. Lennon, J. C., Aita, S. L., Bene, V. A. D., Rhoads, T., Resch, Z. J., Eloi, J. M., & Walker, K. A. (2022). Black and White individuals differ in dementia prevalence, risk factors, and symptomatic presentation. Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association, 18(8), 1461–1471. https://doi.org/10.1002/alz.12509


Interested in reading more? You can check out Isabel’s paper here!


Sex matters: Exploring sex differences in opioid withdrawal mechanisms

or technically,
Sex differences in VTA GABA transmission and plasticity during opioid withdrawal
[See original abstract on Pubmed]

Dan Kalamarides was the lead author on this study. Dan’s research interests are rooted in neuropsychopharmacology, that is, the intersection of brain physiology, drugs (both “good” and “bad”), and behavior. This interest has been applied in the context of preclinical models for several therapeutic areas including substance use disorders, pain, neuroinflammation, and depression. Dan is currently planning to transition to industry where he can leverage his neuroscience expertise in the pharmaceutical world to enhance treatment strategies for mental health disorders and brain diseases.

Authors of the study: Daniel Kalamarides, Aditi Singh, Shannon Wolfmann, John Dani

Scientific research has long been biased on the basis of sex. From cells and tissues to animals and people, there is a long history of scientists including more male subjects in their studies. As a result, we don’t understand how female bodies respond differently to diseases or to treatments, and the quality of healthcare has suffered. The National Institutes of Health (NIH) and several scientific journals have started requiring researchers to consider sex in their science, but the progress towards equal representation of males and females has been slow.

Opioids - including heroin, fentanyl, morphine, and others - are one of many classes of drugs that affect men and women differently. For example, compared to men, women are less responsive to the painkilling effects of opioids but more sensitive to the effects the drugs have on respiration. This difference makes it a lot harder to safely and effectively treat women with opioids in the clinical setting, and it can make recreational opioid use more dangerous. Despite these differences in people, basic science research into the effects and mechanisms of opioids in females is still lacking compared to our understanding of the drugs in males.

One area of opioid research that still has a lot of unanswered questions - both related to sex differences and more generally - is opioid withdrawal. Scientists, including recent NGG graduate Daniel Kalamarides, want to better understand opioid withdrawal so that they can treat it, help people feel better, and make it easier for people to stop using opioids. In his paper, Daniel and his fellow researchers wanted to learn more about how the brain changes during opioid withdrawal, while keeping in mind that these changes could look different in males and females. Specifically, he was curious about a brain region called the ventral tegmental area (VTA), which contains neurons responsible for releasing dopamine into another brain region (the striatum) involved in reward.

Previous studies have shown that the active effects of opioids (think the “high”) are in part caused by an increase in dopamine release from those neurons in the VTA. This happens because opioids remove a natural brake on the dopamine system. In an opioid-free brain, other inhibitory neurons in the VTA – known as GABAergic neurons because they release the neurotransmitter GABA – decrease the release of dopamine from the dopaminergic neurons. Opioids remove this brake by decreasing the activity of the inhibitory neurons. This makes the system go faster, or, more specifically, release more dopamine.

Your brain adapts if opioids are in the body for an extended period of time. In the VTA, this means that those inhibitory neurons amp up their control of the dopamine-releasing neurons so that, even in the presence of an opioid, a relatively normal amount of dopamine is released. This is fine until the opioids are removed. Now you have an overactive brake, and there’s not enough dopamine released into the reward-related brain regions.

Researchers have found, in male mice only, that the inhibitory neurons’ control of the dopamine-releasing neurons increases in withdrawal because the connections between them grow stronger. This increase in connectivity is known as long-term potentiation (LTP), or plasticity, and it’s one of the primary mechanisms by which the brain changes depending on how it’s used and what it’s exposed to. Knowing that the effects of opioids can differ between males and females, Daniel explored whether a similar phenomenon occurs in female mice.

Daniel first induced opioid withdrawal in mice by giving them morphine for a week, then studied the properties of neurons in the VTA when the mice were in withdrawal. He used patch-clamp electrophysiology, a technique which allowed him to measure the electrical current flowing into or out of the neuron as he manipulated the voltage. By using this technique, he was able to learn about the strength of the connection between the inhibitory neurons and the dopaminergic neurons and compare that connection between male and female mice.

Daniel measured how likely the inhibitory neurons were to release GABA – and thus inhibit the dopamine-releasing neurons – spontaneously and when electrically stimulated. He found that, in male mice, morphine withdrawal increased the probability of GABA release (or increased the strength of that brake). This was a great result because previous studies had also found this phenomenon, which means that this science is replicable. When he looked at female mice, however, he didn’t see any difference between the morphine treated mice and the control mice. That’s a surprise!
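
One common way electrophysiologists infer release probability from recordings like these is the paired-pulse ratio: two stimuli are delivered in quick succession, and the size of the second response is divided by the size of the first. A lower ratio suggests a higher probability of neurotransmitter release on the first pulse. The sketch below is a generic illustration of that measure, not necessarily the exact analysis in Daniel’s paper, and the amplitudes are made up.

```python
# Generic paired-pulse ratio (PPR) sketch. Synapses with a high probability
# of release deplete more vesicles on the first pulse, so the second
# response is relatively smaller (PPR < 1). Amplitudes (in pA) are invented.

def paired_pulse_ratio(amp1: float, amp2: float) -> float:
    """Second evoked response amplitude divided by the first."""
    return amp2 / amp1

# Hypothetical recordings from a control cell and a cell in withdrawal:
control_ppr = paired_pulse_ratio(amp1=100.0, amp2=90.0)
withdrawal_ppr = paired_pulse_ratio(amp1=100.0, amp2=60.0)

# A lower PPR in withdrawal is consistent with a higher GABA release
# probability - the strengthened "brake" described above.
```

In this made-up example the withdrawal cell’s ratio (0.6) is lower than the control cell’s (0.9), the pattern you would expect if withdrawal had strengthened the inhibitory brake in males.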

Daniel also tried to experimentally force LTP to occur in the brains of mice in morphine withdrawal so that he could learn more about how the probability of GABA release was changing. He stimulated the inhibitory neurons with high-frequency electrical current, which would cause LTP in a typical neuron. He found that he could cause plasticity in the female mice, but he couldn’t in the males. This result suggested that the increase in the probability of GABA release in males was due to LTP: the molecular components needed for LTP were already used up in the males, so Daniel couldn’t create more. In the females, on the other hand, those components were still available, so Daniel was able to stimulate the neurons and cause LTP.

To be thorough, Daniel also asked whether the male and female mice were experiencing a similar level of morphine withdrawal. If the female mice were going through less withdrawal, that might explain the sex differences in plasticity in the VTA. Daniel measured the strength of withdrawal by counting physical signs of morphine withdrawal, and he found that males and females displayed a similar number of signs. After all of these experiments, we still don’t know for sure that the opioid withdrawal mechanisms in male and female mice are entirely different. If Daniel had used a different dose of morphine, or studied the brains at a different point in withdrawal, he might have observed the same plasticity in female mice that he saw in male mice. However, by running this control experiment, he was able to strengthen the argument that there is a true difference in how male and female mouse brains change in opioid withdrawal.

This research by Daniel and his fellow scientists reinforced the fact that opioids affect males and females differently, and they showed that we still don’t understand how female brains change in opioid withdrawal. Hopefully, this evidence will push other scientists to continue thinking about sex differences in opioid research and in neuroscience broadly. In the meantime, Daniel has led us a step closer towards developing treatments for opioid use disorder, and he’s contributed to reducing bias in science.

About the brief writer: Lyndsay Hastings

Lyndsay is a first year NGG PhD student broadly interested in the relationship between neurocircuitry and behavior.

Interested in learning more about how opioid withdrawal is different in males and females? Check out Daniel’s paper here!


Weight loss drugs can also be leveraged to curb nicotine use

or technically,
Liraglutide attenuates nicotine self-administration as well as nicotine seeking and hyperphagia during withdrawal in male and female rats
[See original abstract on Pubmed]

Rae Herman was the lead author on this study. Rae is a 5th year PhD Candidate in the lab of Dr. Heath Schmidt. Her research investigates the potential of repurposing current medications for obesity/type II diabetes as novel treatments for substance use disorders. She also explores neural basis of drug seeking with a focus on nicotine and fentanyl use disorder.

Authors of the study: R J Herman, M R Hayes, J Audrain-McGovern, R L Ashare, & H D Schmidt

Nicotine use has long plagued humanity, even though we know it negatively impacts our health. Indeed, numerous studies have shown that daily smoking leads to an increased risk of cancers, lung disease, and heart disease [1]. However, despite this knowledge, most nicotine users are unable to stop smoking over the long term. Only 10% of smokers are successful when they try to quit [2], and individuals who do manage to quit often develop strong cravings for highly palatable foods (i.e., foods that taste sweet or savory with high fat and/or sugar content) [3]. Weight gain during nicotine abstinence is a major concern for people trying to quit smoking and is one of the most significant barriers to quitting for good [4]. Currently approved treatments for nicotine use, like varenicline and bupropion, are only modestly effective and do not prevent weight gain after quitting [5].

However, there might be a pharmacological therapy that provides some hope for nicotine addiction. A class of drugs known as glucagon-like peptide-1 receptor (GLP-1R) agonists have recently been approved by the FDA to treat obesity. These drugs have been tremendously successful, with individuals suffering from obesity losing approximately 20% of their weight after long-term use [6]. The success of the drug has captured the attention of many people. For example, there are numerous ads on TV or social media posts relaying stories of people using the GLP-1R-based drugs for weight loss. Science magazine even named the development of this drug class as the “Breakthrough of the Year” in 2023 [7]. With the success of these drug treatments, users noticed that they had reduced cravings for both palatable foods and drugs such as nicotine and alcohol [8].

This report was intriguing to Dr. Heath Schmidt, a professor at the University of Pennsylvania (Penn). The study was led by Rae Herman, a graduate student in the Neuroscience Graduate Group at Penn. They hypothesized that this class of miracle weight loss drugs might reduce nicotine use and simultaneously prevent the cravings for highly palatable foods that arise after quitting smoking - effectively using one drug to treat two conditions, or killing two birds with one stone.

To test their hypothesis, the researchers used a rodent model of drug taking. Specifically, they inserted an intravenous catheter into the jugular vein of rats. By doing this, they could train the rats to press a lever and receive an infusion of nicotine into the bloodstream. The rats repeatedly self-administered nicotine, which mimics nicotine use throughout the day in humans. Once the rats became addicted to nicotine, the experimenters treated half of the rats with the GLP-1R agonist liraglutide, while the other half received no treatment. Strikingly, after rats were treated with liraglutide, they self-administered fewer nicotine infusions. They also made fewer nicotine-seeking lever presses in reinstatement tests, an animal model of relapse to drug taking. These results provide evidence that GLP-1R drugs can be repurposed from their current use as weight loss drugs to reduce nicotine use and relapse.

Next, the researchers evaluated how nicotine withdrawal affects food intake and body weight. First, rats self-administered nicotine every day for 21 days. Then, the rats went through nicotine withdrawal, where they no longer had access to nicotine. During withdrawal, the rats were given a highly palatable food source, which was high in fat and sugar. During nicotine withdrawal, rats ate more of the highly palatable food and gained more weight than a control group of rats that had no nicotine experience. This experiment confirmed that rats, like humans, eat more of highly palatable foods and gain weight during nicotine withdrawal. In parallel with this experiment, a separate group of nicotine-addicted rats received daily treatment with GLP-1R agonist liraglutide during the nicotine withdrawal period. Strikingly, although these rats were also given the highly palatable food, they consumed less food and did not gain weight compared to control rats that had no nicotine experience. Therefore, liraglutide was able to stop nicotine withdrawal from causing increased palatable food intake and body weight gain.

This exciting research suggests a novel role for GLP-1R-based drugs to curb nicotine use and prevent nicotine withdrawal-induced weight gain. Future studies can now determine the effectiveness of GLP-1R drugs at reducing nicotine use in humans, and if other drugs can be leveraged, in combination with GLP-1R drugs, to further help nicotine users achieve long-term abstinence.

About the brief writer: Aaron McKnight

Aaron is a PhD Candidate in Amber Alhadeff’s lab. The Alhadeff Lab is focused on understanding how different macronutrients are detected within the gastrointestinal tract and how this information is relayed to hypothalamic agouti-related protein (AgRP)-expressing neurons (a group of neurons that drive feeding behavior!).

Citations:

1.     CDC (2020) Smoking Cessation: A Report of the Surgeon General. Centers for Disease Control and Prevention.

2.     Babb S, Malarcher A, Schauer G, Asman K, Jamal A (2017) Quitting Smoking Among Adults - United States, 2000-2015. MMWR Morb Mortal Wkly Rep 65:1457–1464

3.     Chao AM, Wadden TA, Ashare RL, Loughead J, Schmidt HD (2019) Tobacco Smoking, Eating Behaviors, and Body Weight: A Review. Curr Addict Rep 6:191–199

4.     Benowitz NL (2009) Pharmacology of nicotine: addiction, smoking-induced disease, and therapeutics. Annu Rev Pharmacol Toxicol 49:57–71

5.     Mills EJ, Wu P, Lockhart I, Thorlund K, Puhan M, Ebbert JO (2012) Comparisons of high-dose and combination nicotine replacement therapy, varenicline, and bupropion for smoking cessation: a systematic review and multiple treatment meta-analysis. Ann Med 44:588–597

6.     Wilding JPH, Batterham RL, Calanna S, Davies M, Van Gaal LF, Lingvay I, McGowan BM, Rosenstock J, Tran MTD, Wadden TA, Wharton S, Yokote K, Zeuthen N, Kushner RF; STEP 1 Study Group (2021) Once-Weekly Semaglutide in Adults with Overweight or Obesity. N Engl J Med 384:989

7.     Couzin-Frankel J (2023) Obesity Meets Its Match. Science 382(6676).

8.     Blum D (2023) People on Drugs Like Ozempic Say Their “Food Noise” Has Disappeared. New York Times.

Find the complete article here!


Understanding the brain during mindfulness

or technically,
Mindful attention promotes control of brain network dynamics for self-regulation and discontinues the past from the present
[See original abstract on Pubmed]

Dale Zhou was the lead author on this study. Dale is interested in how the brain network compresses and reconstructs information as network structure changes across the lifespan. He aims to account for computations of memory and reward as network functions of dimensionality reduction and expansion using experimental, naturalistic, and clinical data.

Authors of the study: Dale Zhou, Yoona Kang, Danielle Cosme, Mia Jovanova, Xiaosong He, Arun Mahadevan, Jeesung Ahn, Ovidia Stanoi, Julia K. Brynildsen, Nicole Cooper, Eli J. Cornblath, Linden Parkes, Peter J. Mucha, Kevin N. Ochsner , David M. Lydon-Staley, Emily B. Falk, and Dani S. Bassett

In recent years, the practice of meditation has received a lot of attention for its health benefits, both physical and mental. One popular form of meditation, mindfulness meditation, teaches individuals to focus on and attend to the present moment. This ability to shift focus depends on the brain’s ability to orchestrate shifts in neural activity, a capacity known as executive function. While the benefits of mindfulness meditation are widely recognized, what’s going on in the brain during mindfulness is much less clear.

In order to understand how mindfulness is represented in the brain, Dale Zhou, a recent NGG graduate, and his collaborators recruited healthy college students who identified as social drinkers and asked them to rate, from 1 to 5, how much they craved an alcoholic drink presented to them on a computer screen. Dale simultaneously measured the activity patterns in participants’ brains using functional magnetic resonance imaging, or fMRI, while they completed this task. One group of participants was instructed to practice mindfulness while rating their cravings by “mentally distancing themselves by observing the situation and their response to it with a more impartial, nonjudgmental, or curious mindset, and without getting caught up in the situation or response”. The other group was instructed to rate their cravings based on their natural gut reaction to the drink. For some trials, participants in the mindful group were asked to switch to their gut reaction instead, allowing Dale and his colleagues to compare which brain areas were simultaneously active or quiet during the different reactions. This allowed them to draw some interesting conclusions about how the brain represents mindfulness.

Figure 1: Simplified representation of brain states. In this example, the brain has only two areas and the brain state is defined by the activity of region 1 and region 2.

Before Dale analyzed the results from the experiment, he first asked how mindfulness can be measured in the brain and whether the “amount” of mindfulness in our brains impacts our day-to-day behaviors. To answer this question, he used average brain activity from the participants’ scans to calculate a measure of executive function called controllability. To understand controllability, it is helpful to think of the brain as having different “brain states” (Figure 1). When a person is doing some activity, like walking, the brain exists in a particular brain state - some brain areas are very active and some are quiet. When the same person is doing a different activity, like eating, the brain exists in a different brain state - a different set of brain areas are active and quiet. Dale and his colleagues defined controllability as how readily the brain can switch into any possible brain state. By calculating controllability for each participant and tracking their drinking behavior for weeks after the brain scan, Dale found that participants with higher controllability tended to have fewer drinks than those with lower controllability, suggesting that perhaps mindfulness does impact our day-to-day behaviors in a positive manner. 
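
For curious readers, “controllability” comes from a branch of engineering called network control theory. The sketch below is a generic, simplified illustration of one common variant (average controllability, computed as the trace of a controllability Gramian): it asks how strongly activity injected at one region can push the whole network into different states. The toy three-region network, the normalization, and the time horizon are all assumptions made for this example, not the study’s actual fMRI pipeline.

```python
import numpy as np

# Toy sketch of "average controllability" from network control theory.
# A is a hypothetical region-by-region connectivity matrix; the real
# study derived its networks from fMRI data, not from this toy example.

def average_controllability(A: np.ndarray, node: int, horizon: int = 50) -> float:
    """Trace of the controllability Gramian with input only at `node`.

    Larger values mean activity injected at that region can more easily
    drive the network into many different brain states."""
    n = A.shape[0]
    # Scale A so the dynamics x(t+1) = A x(t) + B u(t) are stable.
    A = A / (1.0 + np.max(np.abs(np.linalg.eigvals(A))))
    B = np.zeros((n, 1))
    B[node, 0] = 1.0  # input enters at a single region
    gramian = np.zeros((n, n))
    At = np.eye(n)
    for _ in range(horizon):
        gramian += At @ B @ B.T @ At.T
        At = At @ A
    return float(np.trace(gramian))

# A small symmetric toy network of three regions.
A = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
scores = [average_controllability(A, i) for i in range(3)]
```

Because the first two regions are wired symmetrically in this toy network, their scores come out equal; in real brain networks the scores differ across regions, and measures like this summarize how easily the brain can be steered between states.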

Now back to the experiment. Dale asked whether there were differences in controllability, and therefore brain activity, between the two groups. To do this, he calculated the amount of effort, or control, it took for participants in each group to enter either a mindful state or a gut-reaction state while reacting to the alcohol cue. He found that participants instructed to react mindfully needed more effort to enter this brain state after being prompted than participants instructed to react naturally needed to enter their gut-reaction brain state. This was exactly what they expected to see, since it is known that achieving a state of mindfulness initially requires more thought and brain activity. However, he also found that when participants from both groups were instructed to react naturally, those who had previously reacted mindfully still required more effort to enter this gut-reaction brain state than those who had not. This suggests that practicing mindfulness might keep our attention in a more effortful state, even when we are not actively trying or instructed to be mindful. 

Finally, Dale found that brain areas that use more effort had shorter episodes of neural activity. These shorter episodes suggested that there was less influence of the past in these areas. Furthermore, these quick episodes were typically found in brain areas that help us sense the world around us rather than areas that help us think about past experiences or plan for the future. Practicing mindfulness, therefore, may put us in a more effortful state of attention which is more focused on the present moment rather than on the past or future. 

In conclusion, Dale’s hard work on this project has allowed us to take a glimpse at the brain during mindfulness and how it might be benefiting our behavior. His work reminds us that, although the brain is composed of many different brain areas, human behavior is a product of these various areas interacting with one another, producing unique states of mind such as mindfulness. Work similar to his will hopefully lead the way to a better understanding of some of the brain’s other complex functions.

About the brief writer: Jafar Bhatti

Jafar is a PhD Candidate in Long Ding and Josh Gold’s lab. He is broadly interested in brain systems involved in sensory decision-making. 

Want to learn more about how these researchers study mindfulness? You can find Dale’s paper here!


A new method for looking through the (cyto)skeletons in the closet

or technically,
A solution to the long-standing problem of actin expression and purification
[See original abstract on pubmed]

Rachel Dvorak was the lead author on this study. She is interested in studying the biochemical mechanisms by which mutations in gamma smooth muscle actin cause visceral myopathy. Patients with visceral myopathy present with severe abdominal distension, intractable constipation, feeding intolerance, and growth delays. Because there are no targeted therapies for this disease, many patients die in adolescence. She hopes that by understanding how disease-causing mutations alter actin biochemistry we can develop treatments for patients with visceral myopathy and other rare conditions caused by actin mutations.

or technically,

A solution to the long-standing problem of actin expression and purification

[See Original Abstract on Pubmed]

Authors of the study: Rachel H. Ceron, Peter J. Carman, Grzegorz Rebowski, Malgorzata Boczkowska, Robert O. Heuckeroth, and Roberto Dominguez

Take a moment to picture the last evening you spent with friends. Now, think about something that happened over 10 years ago.

Throughout both of those events, and in fact throughout every day of your life, you’ve had the same exact cells living in your brain, helping you navigate each situation. These cells are called neurons, and your body can’t replace them; when they’re gone, they’re gone for good. This means it’s especially important for neurons to be well-built to withstand the everyday challenges that cells face, and well-equipped to respond to emergencies that could threaten their existence.

One contributor to neurons’ ability to last a lifetime lies in how they’re built; much like buildings that are constantly exposed to the elements, neurons need structural integrity to last a lifetime. To do this, there are several types of miniature “skeletons” that exist within the cell. Making up these miniature cellular skeletons, collectively called the cytoskeleton, are proteins. Proteins are large molecules that perform specific jobs intended to keep cells alive. One type of protein is called actin. Similar to links in a chain, actin proteins can attach to each other to form a long structure called a filament. As part of the cytoskeleton, actin filaments are important for helping neurons and many other types of cells keep their shape. Correspondingly, neurons are only as structurally sound as their components; issues within actin filament structures cause severe defects in overall brain structure. This includes lissencephaly, a condition where the brain lacks the grooves that normally sprawl across its surface, leaving it completely smooth. By understanding how actin behaves, both on its own and in the presence of other proteins, scientists hope to develop better treatments for diseases where actin isn’t acting the way it normally would.

Rachel Dvorak, a PhD candidate in Dr. Roberto Dominguez’s lab at the University of Pennsylvania, wanted to develop a robust way to study how actin works. Specifically, she was interested in studying how actin interacts with itself and other proteins; much like people, proteins can interact with each other in ways that influence their behavior in the cell. To do this, she aimed to isolate actin from cells. Isolating a single type of protein is a common method to study the way it functions. The inside of a cell is a bustling place, and it can be hard to tease out the specific interactions that proteins have with one another when they exist within that cellular environment.

This is far from a simple undertaking. Inside an actual cell, there are many actin proteins, and they are not all identical. Instead, there are several possible varieties of actin. One way to understand this is by comparing it to different flavors of ice cream. Although chocolate and vanilla are made of slightly different ingredients and paired with different foods, they are both still ice cream. Actin also comes in different “flavors” called isoforms. While each isoform’s structure is slightly different, and each might be best suited for use in different scenarios, they are all still considered actin due to their overall similarities and jobs in the cell. Moreover, even two actin proteins that are the same isoform can be slightly different, because the cell can modify actin by attaching other molecules to it. These attachments are called post-translational modifications, and they also influence actin’s behavior. However, many of these important post-translational modifications are usually lost during the process of isolating actin to study it.

Another challenge of isolating specific isoforms of actin lies in getting cells to produce large quantities of the actin isoform of interest. Making actin is an involved, multi-step process for the cell that requires a lot of molecular machinery. Because of this, most scientists use only one type of actin isolated from muscle cells to conduct experiments outside of the cellular environment. This severely limits our ability to study whether different isoforms of actin behave in slightly different ways. It also means that oftentimes, our out-of-the-cell reconstructions of cellular events are created using actin filaments that lack much of the nuance they would have in cells. This limits the accuracy of this model and makes it impossible to study how actin that is incorrectly produced causes human disease.

To tackle this, Rachel decided to use modified human kidney cells to produce actin. This meant that the cells would already contain the machinery unique to humans that is necessary to carry out production of actin. This is in contrast with existing methods that use cells like bacteria or insect cells; these cell types can also be used to churn out proteins for isolation, but lack the machinery to make some proteins native to human cells or add post-translational modifications the way that human cells would. Rachel was able to introduce genetic instructions that caused the human kidney cells she worked with to essentially become actin-making factories, synthesizing large quantities of whichever actin isoform she was interested in.

However, the actin the kidney cells made based on the genetic instructions that Rachel provided was a bit different than any other actin in the cells; each of these actin proteins was also attached to two other man-made proteins called tags. Tags are helpful because scientists have materials that can grab onto them, allowing for the separation of the tags (and anything attached to them) from everything else in the cell. Most tags don’t occur naturally, so only the protein (or, in this case, the actin isoform) that you’ve instructed the cell to produce will have this unique feature. To further ensure she was isolating only the type of actin she wanted, Rachel also used a different protein her lab had engineered to grab and release actin based on how much calcium is in the environment. By using a combination of materials that grab the tags attached to actin in tandem with the protein that grabs onto actin itself, Rachel was able to isolate specific isoforms of actin with post-translational modifications from the rest of the contents of the kidney cells.

Rachel and her colleagues ultimately invented a new method for isolating actin exactly as it would exist in the cell. This is important because it means scientists now have a new way to study the exact things that are going awry in diseases that involve issues with interactions between actin and other proteins, such as the one that causes lissencephaly. The more we understand about how actin functions differently in diseases like this, the better our ability to develop effective treatments.

About the brief writer: Julia Riley

Julia is a PhD candidate in Dr. Erika Holzbaur’s lab studying the consequences that damaged mitochondria (the powerhouses of the cell) have on the function of astrocytes, a cell type found in the brain. This is important for understanding diseases like Parkinson’s, where we know mitochondrial damage occurs but don’t fully understand how it impacts the health of brain cells.

Interested in reading more about actin? Find the full paper here!

Read More
NGG GLIA NGG GLIA

Little kids, big insights: What childhood can teach us about how the brain supports cognition

or technically,
The age of reason: Functional brain network development during childhood
[See original abstract on Pubmed]

Ursula Tooley was the lead author on this study. Ursula is a postdoctoral research scholar at Washington University in St. Louis. Her research examines functional brain network development in neonates and toddlers, with a focus on the pace of brain maturation and how neuroplasticity changes across development. She received her Ph.D. in Neuroscience in 2022 from the University of Pennsylvania, under the direction of Dr. Allyson Mackey and Dr. Dani Bassett, where she studied functional brain network development in children and adolescents. She received her B.S. in Neuroscience from the University of Arizona, where she conducted research on sleep disruption in children with Down syndrome.

or technically,

The age of reason: Functional brain network development during childhood

[See Original Abstract on Pubmed]

Authors of the study: Ursula A. Tooley, Anne T. Park, Julia A. Leonard, Austin L. Boroshok, Cassidy L. McDermott, Matthew A. Tisdall, Dani S. Bassett, and Allyson P. Mackey.

Early and middle childhood (4-10 years old) are full of developmental milestones. How children speak, move, learn, and play is constantly evolving and improving. Kids build social networks, become better able to control their attention, and begin to develop cognitive skills, like reasoning. However, despite the rapid cognitive development happening during this early childhood period, neuroscientists have very little information about how brain function is changing. 

This is because getting a clear picture of brain activity, like getting a clear picture of anything, requires that the subject stays almost perfectly still. If you’ve ever watched a 4-year-old sit at the dinner table, it comes as no surprise that they don’t make the best neuroimaging subjects. Functional magnetic resonance imaging (fMRI) scanners, which take many consecutive snapshots of brain activity, are even more sensitive to motion than cameras. The tiniest movements, even just a few millimeters, can blur the images and make it impossible for neuroscientists to tell what brain activity belongs to which brain region. So, most neuroimaging work to date uses subjects over 7 years old, which means that while researchers work to understand how the brain develops to support cognition, they’re missing many of the first pieces of the puzzle. 

Here’s where recent Neuroscience Graduate Group alumna Ursula Tooley and collaborators from the Robust Methods for Magnetic Resonance group stepped in. The team engineered a way to monitor and correct for head motion inside the scanner, allowing them to collect high-quality neuroimaging data from wiggly subjects during this critical early childhood period. Specifically, this motion-tracking technology gave the researchers a way to record exactly how much and in which direction kids were moving at any given point during the scan. Ursula could then use this information to correct (think: realign) the images of brain activity or exclude the child from the study if they moved too much. The ability to precisely monitor head position in real time also created an opportunity for kids to practice the correct behavior. Before the scanning session, children came to the lab to watch a movie while lying in a mock scanner that made the same whirring noises and beeps as the real deal. Each time they moved their head more than 1 millimeter, the movie paused. Incorporating this period of exploring the scanner and the scanning expectations meant that most of the kids who enrolled in the study stayed still enough for usable images of brain activity to be collected. This is a huge feat for Ursula and the team as well as a huge win for neuroscience, making it possible to take an earlier look at the developing brain.

Over the course of the study, Ursula and her colleagues scanned a diverse group of 92 children ages 4-10 from the Philadelphia community. Each child completed an fMRI scan as well as a series of cognitive tests (which they did outside of the scanner) designed to measure the strength of their cognitive reasoning abilities. What is cognitive reasoning? Reasoning is an umbrella term describing the ability to process information, problem solve, and make predictions based on pattern recognition (Fig. 1). Successful cognitive reasoning involves much of the brain and improves dramatically during early and middle childhood. Research suggests that how kids perform on cognitive reasoning tasks is predictive of their academic achievement — even years down the road! By combining a child's cognitive reasoning ability with information about their brain activity, Ursula was able to ask whether and how changes in brain function might support this shift in cognitive performance.

Figure 1

An example question from the cognitive reasoning test, which was administered at different difficulty levels to children in the study depending on their age. Here, we see the red rectangle switches from the background (left) to the foreground (right). To answer the question correctly, the child has to understand this spatial relationship for the rectangles and extend it to the pentagons.

Ursula used resting-state fMRI data (data collected while the kids lay “at rest” in the scanner) to explore the brain’s functional organization. In other words, she inferred how much different brain regions talk to each other based on how their activity fluctuates together over time. As such, regions with activity that rises and falls together are likely functionally connected. These groups of connected brain regions are called “systems.” The brain has many of these functionally connected systems, and neuroscience research shows that they can rewire and reconfigure themselves. For example, another neuroimaging study of older kids and young adults (ages 8-22) from Philadelphia showed that the organization of these brain systems changes with age [1]. Specifically, our brain systems become more segregated and more modular as we move towards adulthood, with weaker connections between systems and stronger connections within systems (Fig. 2). Ursula found the same trends in her data, with older kids tending to have more segregated brain systems than younger kids, suggesting that our brain’s functional architecture is flexible and continues to refine as we age.
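The core idea of inferring functional connectivity from co-fluctuating activity can be sketched with a toy simulation. This is only an illustration of the general correlation-based approach, not the study’s actual analysis; the signals, system sizes, and noise levels below are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "resting-state" signals: two systems of two regions each.
# Regions in the same system share a common fluctuation, so their
# activity rises and falls together (all values are illustrative).
t = 200                          # number of time points
shared_a = rng.normal(size=t)    # fluctuation shared by system A
shared_b = rng.normal(size=t)    # fluctuation shared by system B
noise = lambda: rng.normal(scale=0.7, size=t)

regions = np.array([
    shared_a + noise(),          # system A, region 1
    shared_a + noise(),          # system A, region 2
    shared_b + noise(),          # system B, region 1
    shared_b + noise(),          # system B, region 2
])

# Functional connectivity = correlation between each pair of time series.
fc = np.corrcoef(regions)

within = (fc[0, 1] + fc[2, 3]) / 2   # same-system pairs
between = fc[:2, 2:].mean()          # cross-system pairs
print(f"within-system r = {within:.2f}, between-system r = {between:.2f}")
```

Regions built from the same shared fluctuation end up strongly correlated, while cross-system pairs hover near zero. In the segregation story above, development pushes real brains in this direction: within-system correlations strengthen and between-system correlations weaken.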

Figure 2

As we age and develop, our brain systems (red, green, and blue ovals) reorganize, moving from more integrated (e.g., many connections between systems) to more modular (e.g., more connections within systems and fewer connections between systems). Ursula’s work shows that this brain system separation supports the development of cognitive skills, like reasoning.

Do some systems remodel more than others? Ursula found that changes in connectivity were largest in brain systems involved in abstract cognition, visual processing, and attention. As it turns out, these are the same systems involved in cognitive reasoning. For instance, reasoning is supported by the brain’s visual areas taking in information from the world, while attention systems focus the brain’s resources on what’s important to the task at hand and ignore distractors. Given what we know about the blossoming of cognitive reasoning during childhood, Ursula wondered if there could be a relationship between these changes in brain connectivity and cognitive ability. To test this, Ursula compared the patterns of brain system connectivity for each child with their scores on the cognitive reasoning test (Fig. 1). She found that the remodeling of cognition, visual processing, and attention systems was associated with increased cognitive ability! In other words, kids who had more mature patterns of brain system connectivity were better equipped to reason about the world and their place in it.

Taken together, Ursula’s work suggests that the massive restructuring of brain systems as kids age might be happening to support the rapid development of cognitive abilities emerging during these early and middle childhood years. Beyond offering a new perspective on healthy brain development, this relationship between brain organization and brain function offers new ways to think about, and potentially treat, various neurodevelopmental or neurological disorders.


About the brief writer: Kara McGaughey

Kara is a PhD candidate in Josh Gold’s lab studying how we make decisions in the face of uncertainty and instability. Combining electrophysiology and computational modeling, she’s investigating the neural mechanisms that may underlie this adaptive behavior.

Citations:

  1. Baum, G.L., Ciric R., Roalf, D.R., Betzel, R.F., Moore, T.M., Shinohara, R.T., … & Satterthwaite, T.D. (2017). Modular segregation of structural brain networks supports the development of executive function in youth. Current Biology, 27(11). doi: 10.1016/j.cub.2017.04.051.

Want to learn more about how brain function supports the development of cognitive reasoning during childhood? You can find Ursula’s full paper here!

Read More
NGG GLIA NGG GLIA

Keeping your brain's symphony in sync

or technically,
Weakly correlated local cortical state switches under anesthesia lead to strongly correlated global states
[See original abstract on Pubmed]

Dr. Brenna Shortal was one of the two lead authors of this publication. Her graduate and undergraduate research focused on understanding the neurological mechanisms of consciousness, and she has published a number of papers on the topic. While she was a student at UPenn, Dr. Shortal was the director of Brains in Briefs, and her passion for science communication led her to pursue a career as a medical writer for Red Nucleus following her graduation in 2021. She hopes to continue working to communicate and advocate for scientific research to broad audiences.

or technically,

Weakly correlated local cortical state switches under anesthesia lead to strongly correlated global states

[See Original Abstract on Pubmed]

Authors of the study: Ethan B Blackwood, Brenna P Shortal, Alex Proekt

The most complicated piece of machinery you will ever encounter is sitting right between your ears: your brain. Our ability to move, sense, and think is thanks to billions of individual neurons that interact in varied and complicated ways. With this level of complexity, it’s miraculous that our brains work at all, let alone as well or as long as they do. Even more impressively, when our brain gets knocked off track, like from a seizure or anesthesia, it can quickly go back to typical patterns of activity. How does such a complex thing keep itself in sync?

Ethan Blackwood is a fifth-year neuroscience graduate student in the lab of Dr. Alex Proekt. Before coming to Penn and as a rotation student with Dr. Proekt, his research focused on how neural oscillations ("brain waves") change over time or with stimulation and what this means for behavior. More recently, he has been zooming in to the individual neuron level and studying how the firing of large groups of neurons changes during learning.

Neuroscience PhD student Ethan Blackwood and Drs. Brenna Shortal and Alex Proekt at the University of Pennsylvania sought to answer this question by studying brain activity in rats under anesthesia. Anesthesia is a useful way to study the coordination of brain activity because it is easy to put animals under anesthesia in the lab and because researchers already know a lot about the patterns of brain activity that occur when people are under anesthesia. The team studied this phenomenon in rats because they were able to directly record the activity of the neurons in the rat’s brain, something that is rarely possible in the human brain.  

The team had two ideas about how the brain keeps itself in sync. Their ideas are easiest to understand if we think of the brain as a symphony with your neurons as the musicians. Just like an orchestral piece comes together because the musicians move in sync from one part of the music to the next, so too do the groups of neurons in your brain. The researchers’ first idea about how the brain might keep its symphony together was that there is a conductor who dictates how all the groups of neurons behave. The second possibility was that there is no conductor, but nearby neurons listen to each other so that the whole orchestra stays together.

To distinguish between these two possibilities, the team recorded a kind of brain signal called a local field potential in two parts of the rat brain. They did this by placing electrodes in the rat’s brain and listening to the activity of nearby neurons. This is like listening to a few microphones placed in the cello and violin sections to understand how the whole orchestra works. Each microphone captures sound produced by several nearby musicians, but it can’t capture the whole orchestra’s sound.

The team started by identifying the musical melodies that each electrode recorded, which they call brain states, and noting when the nearby neurons switched from one state to the next. By doing this for all the electrodes, they showed that there were only a small number of brain states that the neurons played, and the same states appeared in different rats. The relatively small number of brain states they found is something other neuroscientists have observed, and it’s key to how the brain keeps itself in sync. If every musician in the orchestra played their own tune, it would be hard to make sense of what was going on. However, by moving through different sections of the same piece of music in sync, the instruments create a beautiful piece of music together. The same is true of your brain’s symphony. Rather than coordinating billions of songs, each sung by different neurons, your brain’s symphony sings just a few, transitioning between a small number of brain states over time.

Now that they had their brain recordings, the team could see which of their two proposals about how the neural symphony stays in sync was true. If their first prediction, that there is a conductor that signals when to transition from one state to another, was true, the team expected to see all the groups of neurons transitioning between states at similar times. On the other hand, if their second prediction was true, that the neural symphony stays in sync by listening to nearby neurons, the researchers would expect to see groups of nearby neurons transitioning between themes mostly together, with nearby neurons more likely to move together than neurons that are further apart. When they measured the neurons’ activity, they found that transitions between states measured on different electrodes corresponded only weakly to each other, but that the closer the electrodes were, the more the state transitions were related. This supported their second prediction, that neurons listen to their neighbors to decide when to transition from one state to another.

This is an exciting step toward understanding how the brain coordinates the movements between states that help keep our complex brains in sync. Understanding this process is important because it can help us develop therapies that mimic it for patients whose brain activity can’t always maintain healthy patterns, such as people with seizure disorders. Beyond medical uses, understanding nature’s elegant solution to managing the complexity of brain signaling can teach us how to build computer systems and models that can handle increasingly more complexity to do things like power robots. And if none of these applications excite you, hopefully you can appreciate the wonder of understanding a little more about what makes us tick and how our neural symphonies stay in sync.

About the brief writer: Catrina Hacker

Catrina Hacker is a PhD candidate working in Dr. Nicole Rust’s lab. She is broadly interested in the neural correlates of cognitive processes and is currently studying how we remember what we see. She also co-directs PennNeuroKnow.

Want to learn more about how our brain activity changes during anesthesia? Read this paper to learn more!

Read More
NGG GLIA NGG GLIA

Scientists use zebrafish to understand how the brain makes decisions!

or technically,
The calcium-sensing receptor (CaSR) regulates zebrafish sensorimotor decision making via a genetically defined cluster of hindbrain neurons
[See original abstract on Pubmed]

Dr. Hannah Shoenhard was the lead author on this study. Hannah hopes to one day run a lab that connects the “small picture” (molecules, genes, and proteins) to the “big picture” (circuits and behaviors) in neuroscience. After earning her PhD in the Granato lab using zebrafish as a model system to study decision making, she moved to do a postdoc in the Sehgal lab studying sleep and memory in fruit flies.

or technically,

The calcium-sensing receptor (CaSR) regulates zebrafish sensorimotor decision making via a genetically defined cluster of hindbrain neurons

[See Original Abstract on Pubmed]

Authors of the study: Hannah Shoenhard, Roshan A. Jain, Michael Granato

How we make decisions is a question that scientists and philosophers have considered for ages. But did you know that there are different types of decision making? The type that we are most familiar with involves decisions that we make in our everyday lives: Should I walk to school or take the bus? Should I have pasta or salad for dinner? But the brain is actually responsible for lots of different kinds of decisions - some of which we don’t even think about! One type of decision making that is commonly studied in the field of neuroscience is called sensorimotor decision making. In this form of decision making, the brain takes in sensory information from the world, processes the information while considering past experiences, and then produces a behavioral response. 

Figure 1:

Image of a zebrafish used in scientific research. 

Source: https://news.mit.edu/2022/smarter-zebrafish-study-1118

To understand more about this type of decision making, Dr. Hannah Shoenhard, a recent Penn Neuroscience PhD graduate, and her lab used zebrafish, a common animal model that is used in neuroscience research. Her lab had previously found that when fish are presented with a sudden quiet sound, they respond with a “reorientation” response: the fish slowly turn their bodies. But if the fish are presented with a sudden loud sound, they respond with an “escape” response: the fish rapidly turn their bodies. Having learned about this fascinating behavioral phenomenon, Hannah was interested in how different proteins may be involved in this sensorimotor decision making process. Through whole-genome sequencing (a fancy way of scanning for important genes) in the zebrafish, the lab identified a protein named CaSR that is essential for sensorimotor decision making. When the lab removed CaSR from the zebrafish, the fish produced the wrong response to a loud sound, reorienting instead of trying to escape.

Given that CaSR is important for normal sensorimotor decision making, Hannah next wanted to know which part of the zebrafish brain uses CaSR to perform this behavior. She first looked at the neurons that drive the escape response. When she reintroduced CaSR into these escape neurons, she found that it did not restore the correct escaping response. This meant that CaSR had to be acting elsewhere.

To find the location where CaSR is acting, Hannah developed a novel experimental strategy that combined behavior and brain imaging. Hannah expressed CaSR in random sets of neurons in zebrafish that didn’t have any CaSR of their own. Some of these fish displayed normal decision-making, meaning CaSR had been expressed in the “correct” neurons, and some displayed impaired decision-making, meaning the “correct” neurons had been missed. Hannah then compared which neurons had CaSR in zebrafish that displayed normal versus abnormal decision-making. Using this novel strategy, Hannah found a brain region in the zebrafish called DCR6, which is located in the hindbrain, near both the escape and reorientation neurons. The hindbrain controls many reflexive behaviors in both fish and humans. To validate her findings and test if this region is actually involved in sensorimotor decision making, she drove extra CaSR expression in DCR6 neurons and found that this was sufficient to drive escape responses in zebrafish exposed to quiet noises: the opposite of what happens when CaSR is missing. Additionally, she used the original zebrafish strain that lacked CaSR and restored CaSR only in DCR6 neurons. Hannah found that these fish performed reorientations in response to quiet sounds and escapes in response to loud sounds, just as we expect healthy zebrafish to do!

Thus far, Hannah’s experiments have pointed to two major findings: 1) CaSR is important for normal sensorimotor decision making and 2) CaSR acts locally in DCR6 neurons, but not reorientation or escape neurons, to enable normal sensorimotor decision making. Given these findings, Hannah asked an important follow-up question: are there connections between DCR6 and reorientation or escape neurons? To answer this, she used a unique zebrafish strain that labels DCR6 neurons and escape neurons. Hannah found that DCR6 neurons do connect to escape neurons but found no connections with reorientation neurons. Hannah and her colleagues were excited to find this direct link between the decision-making region and the escape circuit.

Hannah’s amazing work in the zebrafish underscores that it is important to examine the brain both at a large scale (i.e., behavior and decision making) and at a small scale (i.e., individual neurons and proteins, like CaSR) in order to more fully understand how it works. Secondly, her work tells us that decisions are the result of distinct parts of the brain working together to perform a behavior. When you decide to have a salad for dinner, there is one part of your brain that controls your muscles and allows you to eat the salad. There is a different part of your brain that helps in deciding to eat the salad in the first place! In the example of the zebrafish, reorientation/escape neurons allow the fish to perform the actions, but the decision making site is elsewhere: namely, in a brain region known as DCR6. On a final note, Hannah’s research reminds us of the incredible value and insight that animal models, like the zebrafish, bring to us. They allow us to study behaviors that seem very human (like decision making) in very deliberate and precise ways!

About the brief writer: Jafar Bhatti

Jafar is a PhD Candidate in Maria Geffen’s lab. He is broadly interested in brain networks involved in auditory processing and decision-making.

Want to learn more about how these researchers study decision making in zebrafish? You can find Hannah’s paper here!

Read More
NGG GLIA NGG GLIA

A case of leaky brain barrier: how missing a piece of chromosome 22 can lead to schizophrenia

or technically,
Disruption of the blood-brain barrier in 22q11.2 deletion syndrome
[See original abstract on Pubmed]

Alexis Crockett was the lead author on this study. She is interested in understanding how the rest of the body affects the brain to change behavior. One way the body signals to the brain and changes its function is through activation of the immune system. Her research focuses on how the immune system can become activated, and tries to understand how this inflammation is able to bypass all the barriers that are supposed to protect the brain from this inflammation. She is currently continuing this line of study in her postdoctoral fellowship at the Cleveland Clinic in the laboratory of Dr. Dimitrios Davalos.

or technically,

Disruption of the blood-brain barrier in 22q11.2 deletion syndrome

[See Original Abstract on Pubmed]

Authors of the study: Alexis M Crockett, Sean K Ryan, Adriana Hernandez Vásquez, Caroline Canning, Nickole Kanyuch, Hania Kebir, Guadalupe Ceja, James Gesualdi, Elaine Zackai, Donna McDonald-McGinn, Angela Viaene, Richa Kapoor, Naïl Benallegue, Raquel Gur, Stewart A Anderson, Jorge I Alvarez

Our brains are like car radios: they tune into different stations for various thoughts and experiences. Sometimes, though, the station changes without anyone touching the knob, and a person hears sounds or voices that are not real and that they cannot control. Imagine you are on a road trip with your friends, listening to a carefully curated Taylor Swift soundtrack, when all of a sudden you hear only Kanye West rapping, while your friends insist that Kanye hasn’t been playing at all! Hearing something that no one else does is confusing and frightening, especially because these stations that only you are tuned into can be ominous: rather than Kanye rapping, you might hear someone who sounds like a scary character from a horror movie. Alternatively, what if you suddenly have zero interest in listening to Taylor Swift despite being known as her biggest fan for years? Such sudden disconnects from reality, along with a loss of interest and emotion, are experienced by people with schizophrenia, a chronic mental illness that can seriously interfere with daily life. Medicine and therapy can help manage the symptoms of schizophrenia, but there is currently no cure. One reason is that we have yet to fully pinpoint the causes of this disorder, which makes it difficult to design therapeutic strategies that target those causes directly.

Scientists have identified many different genetic mutations that are linked to schizophrenia diagnoses. However, these mutations are not found in all individuals with schizophrenia, and people who carry them do not necessarily develop the disorder. Instead, a complex combination of genetic, environmental, and lifestyle factors contributes to the development of this disorder. Diseases with strong genetic drivers tend to have more well-defined biological mechanisms, which makes them easier to study. One of the strongest genetic risk factors for schizophrenia is the deletion of a segment of chromosome 22, herein referred to as the 22q11.2 deletion, which results in the loss of 40-50 genes. Strikingly, approximately 25% of people carrying the 22q11.2 deletion are diagnosed with schizophrenia, putting them at much higher risk than the general population. Hence, deciphering what individuals with the 22q11.2 deletion have in common might help us better understand the disease mechanism(s). Dr. Alexis Crockett, a former Neuroscience Graduate Group student in the Alvarez lab at the University of Pennsylvania, set out to explore how the 22q11.2 deletion alters the brain in ways that might cause schizophrenia.

Unlike most organs in the body, the brain is extremely delicate, with limited ability to regenerate if it is damaged. To protect the brain, access of substances in the bloodstream to the brain is therefore tightly controlled by a special filter, referred to as the blood-brain barrier. This barrier is critical for keeping harmful particles such as bacteria, viruses, and environmental toxins out of the brain. It is built from densely packed endothelial cells -- the specialized cells that make up blood vessels -- and the many proteins between them, which fit together like bricks and mortar. Only select substances are allowed to pass through: those small enough to slip through the barrier’s tiny pores, or those transported by specific proteins from the blood-facing side of a cell to its brain-facing side. This tight barrier is further reinforced by astrocytes, a type of brain cell. Given that several of the genes deleted in the 22q11.2 region encode proteins that make up this brain barrier, Dr. Crockett and colleagues hypothesized that the barrier is leaky in patients with the 22q11.2 deletion.

To explore this hypothesis, they employed a mouse model carrying a deletion similar to the human 22q11.2 deletion. Two bloodstream proteins that are normally kept out of the brain were instead found in the brain tissue of these mice. Furthermore, they observed a marked increase in the amount of ICAM-1, a protein that helps immune cells stick to and migrate across the endothelial cell layer. An intact brain barrier normally restricts the entry of immune cells into the brain to avoid uncontrolled inflammation. However, in the brains of mice with the 22q11.2 deletion, there was an increased level of inflammatory proteins in astrocytes. Together, this evidence indicated a breach of the brain barrier, along with brain inflammation, in the mouse model of the 22q11.2 deletion.

Although mice are a valuable animal model for biomedical research, there are important differences between mice and humans. For instance, laboratory mice are genetically quite similar to one another, which fails to reflect the genetic complexity of patients with schizophrenia. To study the 22q11.2 deletion in human cells, Dr. Crockett and colleagues obtained cells from patients with this deletion. They then used established methods to convert these cells into cells resembling the endothelial cells that make up the brain’s barrier, allowing them to examine the integrity of the human brain barrier in a dish. Compared to endothelial-like cells derived from healthy individuals, those derived from patients with the 22q11.2 deletion were leakier. As in the mice, the human endothelial-like cells with the 22q11.2 deletion also showed a higher level of the adhesion protein ICAM-1. Indeed, human immune cells readily crossed the endothelial-like cell layer, consistent with the known effect of high ICAM-1 levels on immune cell migration.

Together, the work led by Dr. Crockett demonstrated that in the context of 22q11.2 deletion, the brain barrier is dysfunctional, permitting the entry of prohibited particles, and subsequently triggering inflammation in the brain. Interestingly, impaired function of the brain barrier has been reported in other cases of schizophrenia without clear genetic mutations, suggesting that a leaky brain barrier might be one of the underlying mechanisms contributing to the development of schizophrenia. Dr. Crockett's findings not only help us further understand the complex origins of this devastating disease, but also may lead to better treatment strategies for schizophrenia by targeting the brain’s barrier.

About the brief writer: Phuong Nguyen

Phuong is a PhD Candidate in Dr. Katy Wellen’s lab at Penn. Her research journey started during her undergraduate studies at Drexel University, when she performed a drug screen on a fruit fly model of Alzheimer’s disease. She then decided to pursue her PhD training in Neuroscience at Penn, where she set out to characterize brain function in a novel mouse model lacking Acly, an enzyme important for lipid synthesis and various metabolic processes. Interestingly, the brain demonstrated remarkable resilience to the loss of this enzyme, while the skin of those mice was severely damaged, a defect associated with fat loss and premature death. Her work revealed a crosstalk among the skin, fat tissue, and dietary lipids. She hopes to continue studying the complex metabolic crosstalk between organs, especially the brain, and how nutrition impacts that crosstalk.

Curious to learn more about what Dr. Crockett and colleagues discovered? Check out the details of this work here.


Can we improve how we age by using therapies that reduce brain stress?

or technically,
Reducing ER stress with chaperone therapy reverses sleep fragmentation and cognitive decline in aged mice
[See Original Abstract on Pubmed]

Dr. Jennifer Hafycz was the lead author on this study. Jennifer is interested in understanding how molecular signaling affects the brain and, ultimately, behavior. Specifically, she’s interested in discovering the molecular signaling changes that occur during aging to improve our function as we age!

Authors of the study: Jennifer M Hafycz, Ewa Strus, Nirinjini Naidoo

As we get older, our brains don’t work as well as they used to. For example, we are not able to complete complicated tasks quickly or learn new material easily - symptoms called “cognitive decline.” Why and how this decline in brain function occurs perplexed scientists for years - that is, until a recent study by Penn NGG student Jennifer Hafycz found that cellular stress is a key driver of cognitive decline in aging.

The brain is made up of small machines, called neurons, which perform a variety of functions that allow us to feel, interpret, and respond to our environment (among many other things)! To complete these tasks, neurons must constantly make and recycle molecules called proteins. This process involves folding proteins so that they can function appropriately, akin to a scooter needing to be propped up in order to be used! Sometimes this process goes awry and proteins don’t fold properly, causing neurons to accumulate non-functioning proteins that stress the cell. Young neurons can easily deal with this stress by making helper proteins (called chaperones) that fix the poorly formed proteins. Old neurons, however, lose the ability to do this well, which results in cellular stress and an eventual loss of function.

Penn NGG student Jennifer Hafycz theorized that treating aged mice with a lab-made chemical chaperone called PBA would enable old neurons to deal with cellular stress, allow them to keep functioning well, and prevent age-related cognitive decline.

After confirming that old mice exhibited markedly more cellular stress than young mice, Jennifer treated them with PBA to examine whether chaperone treatment reduced neuronal stress. Strikingly, PBA treatment improved neuronal health and old neurons looked almost as stress-free as young neurons! She then wondered whether these changes would affect learning and memory in old mice. Using two well-described tasks that test these functions in mice, she found that PBA-treated old mice performed significantly better on these tasks than untreated old mice. In fact, the PBA-treated old mice performed just as well as young mice! In other words, reducing cellular stress can prevent cognitive decline!

To figure out exactly how PBA treatment was improving cognitive function in old mice, Jennifer tested whether PBA increased the levels of proteins normally found in neurons that play crucial roles in cognition. Interestingly, she found that binding immunoglobulin protein (BiP, a chaperone normally found in mice) and p-CREB (a protein implicated in learning and memory) were dramatically increased in PBA-treated mice but not in untreated mice. Jennifer next theorized that PBA treatment might work by increasing BiP levels. To test whether BiP alone could improve cognitive function, she used genetic tools to increase BiP levels in old neurons and found that this was enough to improve cognition in mice! In other words, treating old mice with a chemical chaperone, such as PBA, prompted neurons to make their own chaperone, which resulted in improved learning and memory!

In sum, Jennifer found a new therapy for dealing with cellular stress in aging and identified how it may work, marking an advance towards improving our quality of life as we age!

About the brief writer: Sai is a 2nd year PhD Candidate in Chris Bennett’s Lab. Sai is interested in understanding the molecular mechanisms underlying immune dysfunction in Krabbe disease. Ultimately, he wants to develop safer immunotherapies for this devastating pediatric disease.


How brain waves might help us see

or technically,
Visual evoked feedforward-feedback traveling waves organize neural activity across the cortical hierarchy in mice
[See original abstract on Pubmed]

Dr. Adeeti Aggarwal was the lead author on this study. Her ultimate career goal is to become an academic ophthalmologist whose clinical insights motivate her research in visual processing, and whose research also translates back to patient care. She is fascinated by how cortical networks transform visual sensory information into perception and how defects in sensory processing may alter or abolish perception, as in hallucinations or blindness. This interest drove her research in graduate school, and she hopes to continue studying how visual processing pathways participate in perceptual experience as her career progresses.

Authors of the study: Adeeti Aggarwal, Connor Brennan, Jennifer Luo, Helen Chung, Diego Contreras, Max B. Kelz, Alex Proekt

Modern cameras do an amazing job of turning the photons of light in the world into pixels on our phone or laptop screen that faithfully capture that moment in time. The fact that we all walk around with the technology to do this sitting in our pockets is the result of decades of innovation and technological advancement. But even with everything that your smartphone’s camera can capture, we have an even more elegant piece of machinery doing all that and more sitting between our ears all day: our brains.

How is our ability to see different from a camera’s? To start, there’s the obvious difference in materials. Cameras are made of hard, man-made materials, whereas your brain is filled with comparatively squishier biological material. But even more importantly, a camera and your brain are trying to accomplish two different things. The goal of a camera is to recreate the world exactly as it is. The goal of your visual system is to use what you see to interact with the world. Unlike a camera, you need to do things like pay attention to one thing over another, predict what’s coming next, or change your behavior according to what you see.

We can think of the brain as needing to accomplish two things: 1) build up a representation of what is in the world, and 2) integrate that into our current understanding of the world and intended actions to accomplish something. One popular idea, or hypothesis, is that the brain accomplishes the first goal of building up a representation of the world by sending neural signals through several brain regions moving from the back of your head toward the front, termed feedforward communication. The second goal is then accomplished by integrating those signals with neural activity in other brain regions and then passing a signal backwards through the same regions from front to back, which is called feedback. These “traveling waves” of brain activity could coordinate brain activity across different parts of the brain and integrate the two goals of the visual system.

Figure 1

Illustration of the hypothesized direction of the flow of brain activity for feedforward waves (yellow) and feedback waves (blue). Figure made with biorender.com.

Testing this hypothesis has been difficult because it requires looking at brain activity across large portions of the brain as it changes very quickly, and the tools to do this were only recently developed. Until then, scientists used what tools were available to study feedforward and feedback activity, but they could only capture small snapshots of evidence of these waves. Last year, however, a team of researchers at the University of Pennsylvania led by Dr. Adeeti Aggarwal, a former PhD student in the Neuroscience Graduate Group, used new technology to visualize these waves of activity across the mouse brain for the first time.

To do this, Dr. Aggarwal and her team recorded brain activity across several areas of the mouse brain while they flashed a green light in front of the mouse’s eye. By using a special kind of analysis that allowed them to get a cleaner look at the data, they were able to see the two kinds of brain waves that the hypothesis predicted. The first feedforward wave fluctuated quickly and moved from the back to the front of the brain, while the second feedback wave fluctuated more slowly and moved from the front to the back of the brain. Importantly, the team found that both waves of activity spread equally far across the brain, despite the feedforward wave fluctuating faster than the feedback wave. Through this and other observations the team concluded that the two waves of brain activity interact and integrate to form a cohesive wave of brain activity that could be combining the information about what the mouse is seeing with other brain signals.

This was exciting evidence that the kinds of feedforward and feedback waves that neuroscientists thought could coordinate visual information are actually present in the brain, but how might they help a mouse to see?  Your brain cells, called neurons, communicate with each other by sending a kind of signal called an action potential, or spike. Whether and how a neuron produces spikes is what ultimately influences what you see and how you behave. To demonstrate that these waves of brain activity could shape these important brain signals, Dr. Aggarwal and her team looked at whether the waves of brain activity had an impact on whether and how neurons produced spikes.  They found that neurons were more likely to produce spikes at the peaks of the slow oscillation than at the lower points. This links the waves of brain activity that they observed directly to spikes, which suggests that these waves are capable of coordinating brain information about what the mouse is seeing with other kinds of signals.

Dr. Aggarwal and her team’s paper provides exciting new evidence for how different parts of the brain can be coordinated through waves of activity, and future work will continue to determine how these waves can be linked to behavior and whether they can be seen in human brains as well. Understanding how the brain coordinates activity across brain regions to turn sight into action could be helpful in many ways. For one, this information could help to engineer better visual prosthetics for people who are blind. If these waves are necessary to coordinate brain activity across parts of the brain, it may be necessary for visual prosthetics to produce signals that work in the same way. Beyond direct human applications, incorporating similar principles into the design of robotic systems that need to coordinate information about the world with a set of goals or actions could produce robots that can better interact with the world to accomplish their goals. As with all scientific advancements, Dr. Aggarwal’s study is one exciting piece in many bigger puzzles.

About the brief writer: Catrina Hacker

Catrina Hacker is a PhD candidate working in Dr. Nicole Rust’s Lab. She is broadly interested in the neural correlates of cognitive processes and is currently studying how we remember what we see. She also co-directs PennNeuroKnow.

Interested in learning more about Adeeti’s work? Check out the full paper here!


Does the size of your social network predict how big certain parts of your brain are?

or technically,
Social connections predict brain structure in a multidimensional free-ranging primate society
[See original abstract on PubMed]

Camille was the lead author on this study. She is a 5th year graduate student at the University of Pennsylvania with Dr. Michael Platt. She wants to understand the evolution and neurobiological mechanisms of social relationships in primates using approaches from behavioral ecology of primates in the wild to single cell electrophysiology in the lab!

Authors of the study: Camille Testard, Lauren J. N. Brent, Jesper Andersson, Kenneth L. Chiou, Josue E. Negron-Del Valle, Alex R. DeCasien, Arianna Acevedo-Ithier, Michala K. Stock, Susan C. Antón, Olga Gonzalez, Christopher S. Walker, Sean Foxley, Nicole R. Compo, Samuel Bauman, Angelina V. Ruiz-Lambides, Melween I. Martinez, J. H. Pate Skene, Julie E. Horvath, Cayo Biobank Research Unit, James P. Higham, Karla L. Miller, Noah Snyder-Mackler, Michael J. Montague, Michael L. Platt, Jérôme Sallet

When I think of neuroscience, I think of scientists in white lab coats examining brains under a microscope. While it’s true that neuroscience these days typically takes place in a laboratory environment, some would argue that this isn’t the best way to study the brain. If we want to study how the brain works naturally, why would we study it in an artificial environment, such as a lab?

While some topics are of course better suited to labs -- like how individual neurons in the brain function and work together -- topics like social behavior, which is what Camille and her colleagues were interested in, may benefit from more naturalistic experimental conditions. In particular, Camille and her colleagues wanted to know how the size of an individual’s social network affects their brain structure and function. To find out, they studied the behavior and brains of rhesus macaque monkeys living in a semi-free-ranging colony on Cayo Santiago Island in Puerto Rico.

In their paper, the researchers examined the behavior of a single social group composed of 103 individual monkeys, 39 male and 64 female. For each monkey in the colony, the researchers looked at two measures of social behavior. The first was the monkey’s social network size, based on the number of grooming interactions a given monkey had with other monkeys: the more grooming partners a monkey had, the larger its network. The second was the monkey’s social status, based on the aggressive interactions (threats, chases, submissions, etc.) that a given monkey gave and received.

Camille and her team observed each monkey’s behavior for 3 months prior to measuring their brain structure using a technique known as MRI, or magnetic resonance imaging. With this technique, they were able to determine the size of different brain areas in each monkey. Then, they looked for relationships between a given monkey’s social behavior and the size of any part of its brain.

Interestingly, the researchers found a positive correlation between the social network size (i.e., number of grooming partners) of a monkey and the size of two specific brain regions (see Figure 1). The first brain region is called the mid superior temporal sulcus (mid-STS, for short). In previous studies, the mid-STS has been found to be involved in responding to social scenes. This region is also thought to be involved in deciding whether to cooperate or compete with a partner. The second brain region is called the ventral dysgranular insula (vd-insula, for short). In previous studies, this region has been found to be involved in grooming behavior in macaques and empathy in humans!

Because social interactions between monkeys are multi-faceted, just as in humans, Camille also looked at several other nuances of the monkeys’ social networks to see if they predicted the size of these brain regions. For example, they looked at “betweenness” (was a given monkey able to bridge connections between distant members of the colony?) and “closeness” (how close was a given monkey to every other monkey in the colony?). These other measures did not correlate with the size of any brain region. The researchers therefore took a closer look at social network size, the measure that did correlate with brain structure. Since this measure was determined by grooming interactions, they were curious whether the direction of grooming mattered: whether the monkey actively groomed other individuals or was itself being groomed. When they looked at the data this way, they found that the number of individuals that groomed a given monkey predicted the size of these brain regions more closely than the number it groomed.

Finally, the researchers wondered if the relationship they found between social network size and brain structure in adult monkeys also held for infant monkeys. Infants are too young to form complex social networks, so the researchers instead used the social networks of the infants’ mothers. They reasoned that they might still see a relationship, because previous studies showed that an infant macaque’s social network mimics that of its mother. However, they found no clear relationship between a mother’s social network and her infant’s brain structure. The authors suggested that the infants’ brains were perhaps not yet developed enough for any size differences to be observable. These results led the researchers to believe that the brain-size differences they see in adult macaques are due to the increased sociability that occurs during development.

In summary, Camille’s research offers incredible insight into how the size of specific brain regions is related to the ability of mammals to form large social networks in their natural environment. Her team determined the social network size of each monkey in the colony and found a significant correlation with two socialization-related brain regions, the mid-STS and the vd-insula. Furthermore, this relationship was not found in infant monkeys, leading the team to believe that increased sociability during development leads to the differences in brain structure seen in adult monkeys. Camille’s work is important because her discoveries in free-ranging monkeys emphasize that complex social forces, such as those in human societies, can powerfully drive the physical expansion of socially related areas of the brain.

About the brief writer: Jafar Bhatti

Jafar is a PhD Candidate in Maria Geffen’s lab. He is broadly interested in brain networks involved in auditory processing and decision-making.

Want to learn more about how these researchers study the social behavior and brains of free ranging monkeys? You can find Camille’s full paper here!


How different levels of brain development help adolescent cognition - or don’t

or technically,
Dissociable multi-scale patterns of development in personalized brain networks
[See original abstract on PubMed]

Adam Pines was the lead author on this study. Adam is a postdoctoral fellow in the Stanford PanLab for Precision Psychiatry and Translational Neuroscience. He completed his Ph.D. in Neuroscience at UPenn in 2022. His other research interests include developmental neuroscience, brain-environment interactions, and adaptive plasticity in the brain.

Authors of the study: Adam R. Pines, Bart Larsen, Zaixu Cui, Valerie J. Sydnor, Maxwell A. Bertolero, Azeez Adebimpe, Aaron F. Alexander-Bloch, Christos Davatzikos, Damien A. Fair, Ruben C. Gur, Raquel E. Gur, Hongming Li, Michael P. Milham, Tyler M. Moore, Kristin Murtha, Linden Parkes, Sharon L. Thompson-Schill, Sheila Shanmugan, Russell T. Shinohara, Sarah M. Weinstein, Danielle S. Bassett, Yong Fan & Theodore D. Satterthwaite

You don’t need to be a scientist to know that kids get smarter as they grow up - they get better at things like problem-solving, thinking flexibly, and remembering information. But what exactly is changing in the brain to make these cognitive skills, which researchers call “executive function,” easier?

Like instruments in a band, different areas of the human brain have different roles and perform together in different combinations to do everything from processing what your eyes see, to controlling your muscles, to solving a crossword, to feeling emotions. A group of brain regions that work together is called a functional brain network. Some functional brain networks perform easier, or “lower-order,” tasks, like sensing pain when you get a cut. Others perform harder, more complex, “higher-order” tasks, like solving physics equations or learning a language.

Dr. Adam Pines, who recently graduated from the Neuroscience Graduate Group, wanted to know how all these functional networks mature as kids age and how this pattern of development relates to kids’ improving executive function. To study this, Adam faced two challenges. First, we don’t know how many functional networks there “really” are in the brain; you can divide the brain up into different numbers of chunks and still do a good job of grouping regions that activate together and separating those that don’t (Figure 1, Columns). Second, the layout of everyone’s functional networks is a tiny bit different: one network may take up a little more space in one person, for instance, or the parts of the brain that do a certain task in one person may sit a little bit more to the left in another (Figure 1, Rows). Therefore, Adam made personalized functional networks (PFNs), which are maps of a person’s unique functional network layout, for every subject in the study. He also tried grouping the brain into different numbers of networks to see whether this would change his results.

Figure 1: Illustration of personalized functional networks mapped for varying numbers of networks.

Adam mapped the unique functional networks of each person in the study (PFNs), as shown in the rows. He also divided the brain’s activity into different numbers of networks, with maps of 4, 7, and 13 networks pictured. Different colors show that the brain regions are part of different functional networks.

To make personalized functional networks (PFNs) for each subject (Figure 1, Rows), Adam and his colleagues mapped the layout of every functional network in the average person and mathematically tweaked the layout to fit each participant’s unique pattern of brain activation. Then, they repeated this step using different numbers of networks in their baseline map (Figure 1, Columns) and labeled whether each network did lower- or higher-order functions. In the end, they had 29 brain maps for each person (each dividing brain activity into 2 to 30 functional networks), that they could compare to each participant’s age and score on a test of executive function.

First, Adam compared PFNs across participants ages 8 through 23 and found that lower- and higher-order networks tended to develop differently. Lower-order networks (each of which does an easier task) became more interconnected over the course of adolescence, while higher-order networks (each of which does a harder task) became less interconnected. Next, he tested how these PFN patterns were related to kids’ executive function. Interestingly, he found executive function tends to be better when very low-order and very high-order networks are distinct, but networks that fall in the middle (ones that do medium-complexity tasks) are more interconnected. Dividing the brain into a greater number of PFNs, Adam saw this effect grow stronger, especially in lower-order networks.

Taken together, Adam’s results are surprising because, while aging makes higher-order networks more distinct (which is better for executive function), lower-order networks actually become more interconnected (which is worse for executive function)! This may mean that while increasingly distinct higher-order networks allow kids’ executive function to improve as they grow up, their brains’ lower-order networks are already starting to decline. These findings will be important for future scientists studying how kids’ executive function develops and may help uncover why some kids struggle with cognitive development.

About the brief writer: Margaret Gardner

Margaret is a PhD student in the Brain-Gene-Development Lab working with Dr. Aaron Alexander-Bloch. She is interested in studying how different biological and demographic factors influence people’s brain development and their risk for mental illnesses.

Want to read Adam’s work for yourself? You can find the full article (complete with equations and pretty brain pictures) here!


Can we use maps of how brain regions are connected to better target brain stimulation?

or technically,
Cortical-subcortical structural connections support transcranial magnetic stimulation engagement of the amygdala
[See Original Abstract on Pubmed]

Valerie Sydnor was the lead author on this study. Valerie is a PhD candidate in Ted Satterthwaite’s lab studying how brain plasticity changes throughout neurodevelopment. Valerie aims to uncover how developmental programs contribute to the emergence of youth psychiatric disorders.

Authors of the study: Valerie J. Sydnor, Matthew Cieslack, Romain Duprat, Joseph Delusi, Matthew W. Flounders, Hannah Long, Morgan Scully, Nicholas L. Balderson, Yvette Sheline, Dani S. Bassett, Theodore D. Satterthwaite, and Desmond J. Oathes

In 2019, the World Health Organization estimated that 1 in every 8 people (that’s 970 million people around the world) were living with a mental health disorder. Over the course of the COVID-19 pandemic, as we experienced tremendous uncertainty, isolation, and loss, the prevalence of disorders like anxiety and depression increased by more than 25%. Although effective treatments for mental health conditions are available, they don’t provide adequate relief for a substantial percentage of people with debilitating symptoms. In a recent collaboration between the Oathes and Satterthwaite labs, Neuroscience Graduate Group student Valerie Sydnor explores how brain stimulation might offer a promising alternative treatment.

As neuroscience and its technologies advance, it is becoming possible to more precisely design mental health treatments that target specific brain regions strongly linked to symptoms. For anxiety and depression, one key region is the amygdala, a place where the brain processes things like threats and negative experiences and controls how we respond (both emotionally and behaviorally). In people with anxiety and depression, the amygdala is often extra active. This means that the brain and the body can respond very strongly to scary, upsetting, or stressful situations and remain on high alert even after things have calmed down. We can think of the amygdala like a knob on the stove. If we crank up the heat for a prolonged period, symptoms of anxiety and depression begin to bubble up and boil over. If we were able to reach into the brain and turn the knob back down, perhaps we could provide some relief. 

New technologies, like brain stimulation, allow clinicians to do just that -- toggle brain activity in particular areas using magnetic fields, electrical currents, or even ultrasonic waves. Stimulation can be done even without reaching inside the brain. Techniques like transcranial magnetic stimulation (TMS) are non-invasive, meaning that the treatment (in this case a magnetic field designed to change brain activity) is safely applied using a device placed on the scalp. However, the stimulation weakens quickly as it travels through the skull and brain tissue, so this non-invasive technology can only directly target regions on the brain’s surface. The amygdala, buried deep within the brain, sits out of reach.

In an attempt to extend the reach of non-invasive brain stimulation technology, Valerie, Dr. Desmond Oathes, and colleagues wondered if they could make use of the connections between brain regions. You see, the brain isn’t a collection of separate, independent parts. Rather, each brain region is connected to many other regions, forming a sprawling series of pathways that allow activity in one place to easily travel somewhere else. Conveniently, one of these neural pathways directly connects an area on the brain’s surface -- the ventrolateral prefrontal cortex (vlPFC), located on the side of your forehead -- to the amygdala. Just as you might pass through a city or two in order to get to your final destination, Valerie and Desmond figured that if they stimulated vlPFC, some of the activity evoked by the stimulation might pass through, continuing along the connecting pathway and ultimately affecting the amygdala.

There was some solid evidence that this amygdala-targeting strategy would work. Studies show that stimulating the vlPFC increases emotional regulation, reduces negative emotions, and improves mood. Valerie and Desmond speculated that these beneficial effects of brain stimulation applied to vlPFC may actually stem from engagement of the pathway connecting vlPFC to amygdala and from subsequent reductions in amygdala activity. In other words, the vlPFC functions like our stove operator, using its connection to the amygdala to turn down activity when it’s getting too hot and emotionally charged.

To test this theory, Valerie, Desmond, and colleagues designed a clever (and difficult) experiment that allowed them to both non-invasively stimulate the brain and measure how its activity changed in specific regions. While 45 healthy individuals lay in a functional MRI (fMRI) scanner, the research team applied TMS to the vlPFC by placing a magnetic coil against the scalp above the brain region. After each pulse of brain stimulation, they used the fMRI machine to take a quick snapshot of brain activity. This allowed them to examine how activity in the amygdala changed as a result of vlPFC stimulation, and to directly test whether stimulation effects traveled along the connecting neural pathway.

Excitingly, the team found that stimulation applied to the scalp above vlPFC was able to decrease activity in the amygdala in 30 out of 45 participants. Given that the amygdala’s position deep within the brain was thought to be unreachable by non-invasive brain stimulation, this was a huge feat. Interestingly, amygdala activity tended to decrease by different amounts in different individuals. Wondering why, Valerie used an additional neuroimaging approach to create a map of the structural fibers (a more technical term for a neural pathway) connecting vlPFC and the amygdala for each person. Just as a highway with more lanes allows more traffic to pass through, could a denser connection (a thicker bundle of fibers) allow more stimulation to travel between regions? As it turns out, this was exactly the case! For a given individual, the extent to which neurostimulation was able to spread beyond the brain’s surface and affect amygdala activity depended on the density of their vlPFC-amygdala structural connection. Put simply, the stronger the connection between the vlPFC and the amygdala, the more easily the knob on the stove can be adjusted.
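For readers curious about the logic of this individual-differences analysis, it can be sketched in a few lines of code. This is a hypothetical illustration, not the study’s actual data or pipeline -- the variable names and toy numbers below are invented. The idea is simply to pair, for each participant, the density of their vlPFC-amygdala fiber bundle with the stimulation-evoked change in their amygdala activity, and ask whether the two are correlated:

```python
import numpy as np

# Hypothetical toy values for six participants (NOT the study's data):
# density of each person's vlPFC-amygdala fiber bundle (arbitrary units)
fiber_density = np.array([0.2, 0.5, 0.8, 1.1, 1.4, 1.7])

# stimulation-evoked change in amygdala activity (negative = decrease)
amygdala_change = np.array([-0.1, -0.2, -0.5, -0.6, -0.9, -1.1])

# Pearson correlation: does a denser connection predict a bigger decrease?
r = np.corrcoef(fiber_density, amygdala_change)[0, 1]
print(f"r = {r:.2f}")  # strongly negative: denser pathway, larger decrease
```

A negative correlation like this is what "the stronger the connection, the more easily the knob can be adjusted" looks like as a statistic.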

Even though these results came from an experiment conducted with healthy participants, the ability of non-invasive brain stimulation to both target and decrease amygdala activity has clear implications for mental health treatment. Given the close link between amygdala activity and symptoms of anxiety and depression, brain stimulation represents an exciting new opportunity for people failing to find relief from existing medications and conventional talk therapy. More broadly, this work by Valerie, Desmond, and colleagues demonstrates -- for the first time -- that we can use the brain’s web of connections as a map to target specific brain regions for treatment purposes. Now, not only can we stimulate the amygdala in patients with anxiety and depression, but we can likely reach additional target regions throughout the brain with links to other mental health disorders.

About the brief writer: Kara McGaughey

Kara is a PhD candidate in Josh Gold’s lab studying how we make decisions in the face of uncertainty and instability. Combining electrophysiology and computational modeling, she’s investigating the neural mechanisms that may underlie this adaptive behavior.

Want to learn more about the potential for treating mental health conditions with brain stimulation? You can find Valerie’s full paper here! A list of nationally available resources for mental health and mental illness can also be found below.

Resources for Mental Health and Mental Illness:

National Institute of Mental Health: Information on Mental Disorders

https://www.nimh.nih.gov/health/topics/  

This web link will bring you to a page where you can learn more information about individual psychiatric disorders. Information on disorder symptoms, risk factors, available treatments/therapies, and relevant research is provided. Access this information for anxiety disorders, ADHD, autism spectrum disorder, bipolar disorder, depression, eating disorders, obsessive-compulsive disorder, PTSD, schizophrenia, substance use disorders, and others by clicking on the relevant link under “Mental Disorders and Related Topics”. 

 

National Suicide Prevention Lifeline

Call 1-800-273-TALK (8255); En español 1-888-628-9454

The Suicide Prevention Lifeline provides free, confidential emotional support to people in suicidal crisis or emotional distress. You can call above or use the chat below.

Use Lifeline Chat on the web (https://suicidepreventionlifeline.org/chat/)

The Lifeline is a free, confidential crisis service that is available to everyone 24 hours a day, seven days a week. The Lifeline connects people to the nearest crisis center. These centers provide crisis counseling and mental health referrals.

 

Crisis Text Line

Text “HELLO” to 741741 for free, 24/7 crisis counseling

The Crisis Text Line is available 24 hours a day, seven days a week throughout the U.S. It serves anyone, in any type of crisis, painful emotional experience, or time when you need support, connecting you with a crisis counselor who can provide support and information. When you text the line, a live crisis counselor receives the text and responds from a secure, online platform, typically within 5 minutes.

Substance Abuse and Mental Health Services Administration (SAMHSA)

For general information on mental health and to locate treatment services in your area, call the SAMHSA Treatment Referral Helpline at 1-800-662-HELP (4357). SAMHSA also has a Behavioral Health Treatment Locator on its website that can be searched by location. Navigate to the website and click the “Find Treatment” tab. The “Public Messages” tab also has useful information.

Health Resources and Services Administration (HRSA):

HRSA works to improve access to health care. The HRSA website has information on finding affordable healthcare.

Anxiety and Depression Association of America (https://adaa.org/)

Depression and Bipolar Support Alliance (https://www.dbsalliance.org/)

Apps for Therapy

Talkspace: Assessment and therapy provided online or via app. Provides online therapy, teen therapy, couples therapy, and medication management for psychiatric disorders.

BetterHelp: This app offers professional help from licensed therapists. You can message your therapist any time and schedule live sessions. The app is free to download, but therapy sessions cost money.

Apps for Coping with Stress, Anxiety, and Depression

Sanvello: Clinically validated techniques for reducing stress and treating anxiety and depression (free premium access during COVID-19 pandemic).

Depression CBT Self-Help Guide: Free app for helping to understand depression, factors that contribute to depression symptoms, and how to manage symptoms using cognitive-behavioral therapy.

Shine: Personalized self-care toolkit and community support, developed specifically for individuals of color.

WhatsUp: A free app that uses cognitive behavioral therapy and acceptance and commitment therapy methods to help with depression, anxiety, and stress. Includes a positive and negative habit tracker and helps identify thinking patterns.

Happify: Some free content; stress reduction and cognitive techniques for anxiety.

MindShift CBT: Free content, including cognitive behavioral therapy strategies to address general worry, social anxiety, and panic. Designed for teens and young adults.

COVID Coach: Created for everyone, including veterans and service members, to support self-care and overall mental health during the coronavirus pandemic.

Apps for Eating Disorder Support

Recovery Road: Free app for helping with eating disorder recovery and body positivity.

Apps for OCD Support

nOCD: Uses exposure response prevention treatment and mindfulness-based treatments to help with symptoms of OCD.

Apps for LGBTQ+ Mental Health

Pride Counseling: This app offers 1-on-1 sessions with a licensed counselor as well as group therapy and webinars for LGBTQ+ individuals. You can message your counselor any time and schedule phone, video, or chat sessions. You pay monthly.

Apps for Meditation and Relaxation

Headspace: Two-week free trial for the general public.

Calm: Seven-day free trial. A meditation, sleep, and relaxation app that also provides resources specifically for coping with COVID-19 anxiety.

Stop, Breathe & Think: Always free, and for kids too.

Insight Timer: Always free. This is not a daily app, but rather a great library where you can search for various types of meditations and lengths by excellent teachers.


Can a single neuron in the brain really solve complicated problems all by itself?

Ilenna Jones was the lead author on these studies. She is a Neuroscience Ph.D. candidate in Dr. Konrad Kording’s lab at Penn.

or technically,

Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree? & Do biological constraints impair dendritic computation?

See Original Abstracts on Pubmed: Paper 1 Paper 2

Authors of the studies: Ilenna Simone Jones & Konrad Kording

Figure 1: A Purkinje neuron found exclusively in the cerebellum. Illustration by Ramon y Cajal.

In the late 1800s, a scientist named Ramon y Cajal turned his microscope to the brain and helped establish that it is built from individual cells called neurons. Photography at the time could not capture the fine detail he saw through the microscope, so instead he drew it. He compiled a collection of beautiful illustrations of the many different shapes and variations of neurons, which are still cited and referenced to this day (see Figure 1). In doing so he gave birth to the field of modern neuroscience.

Cajal’s drawings demonstrated the anatomical complexity and variety of neurons throughout the brain. He observed that neurons are composed of several parts, including branched fibers called dendrites that converge onto a cell body, and a single thin fiber that departs the cell body called an axon. Since Cajal’s time, neuroscientists have learned that neurons receive electrical activity from other neurons through their dendrites and send electrical activity through their axons. These electrical signals form the basis of brain activity and allow us to sense, interpret, and respond to cues in our environment.  

Much of neuroscience research has focused on the activity of populations and networks of neurons, but how much can a single neuron do? Does a neuron’s extensive tree of dendrites allow it to perform complex calculations and send new information to other neurons? Or does a neuron simply act like a relay station that transfers the signals it receives without analyzing them? These are the questions that Neuroscience Graduate Group student Ilenna Jones wanted to answer.

In her first paper, Ilenna used a computerized version of a neuron and asked it to perform various complex tasks. By modifying the number and organization of dendrites on her “virtual neuron,” she found that neurons with complex branching patterns performed tasks better than neurons with simpler branching patterns. This finding suggests that the shape of a neuron actually influences how much it can do! Neurons with densely layered, tree-like dendritic structures can perform sophisticated calculations that neurons with simpler dendritic structures cannot match.
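To get a feel for why branching matters, here is a minimal sketch of the idea -- not Ilenna’s actual model, which is far more detailed, and with hand-picked weights invented purely for illustration. A “flat” neuron that just thresholds one weighted sum of its inputs cannot solve the classic XOR problem (respond to either input alone, but not both). A “branched” neuron, where each dendritic branch applies its own threshold before the cell body combines the branch outputs, can:

```python
import numpy as np

def point_neuron(x, w, theta):
    """'Flat' neuron: one weighted sum of inputs, then a threshold at the soma."""
    return int(x @ w > theta)

def tree_neuron(x, branch_w, branch_theta, soma_w, soma_theta):
    """'Branched' neuron: each dendritic branch thresholds its own inputs
    first; the soma then thresholds the combined branch outputs."""
    branches = np.array([int(x @ w > t) for w, t in zip(branch_w, branch_theta)])
    return int(branches @ soma_w > soma_theta)

# XOR: fire for one input alone, but not for both together.
inputs = [np.array([0, 0]), np.array([0, 1]), np.array([1, 0]), np.array([1, 1])]

# A flat neuron with positive weights fires whenever enough input arrives,
# so it also fires for both inputs together -- it can't do XOR:
flat = [point_neuron(x, np.array([1, 1]), 0.5) for x in inputs]
print(flat)  # [0, 1, 1, 1] -- wrongly fires for both inputs at once

# Two "branches" (hypothetical weights): branch 1 detects "input 1 but not
# input 2"; branch 2 detects the reverse. The soma fires if either does.
branch_w = [np.array([1, -1]), np.array([-1, 1])]
branch_theta = [0.5, 0.5]
outputs = [tree_neuron(x, branch_w, branch_theta, np.array([1, 1]), 0.5)
           for x in inputs]
print(outputs)  # [0, 1, 1, 0] -- the branched neuron solves XOR
```

The two-stage, tree-like computation is what gives the single cell its extra power -- the same intuition behind the finding that complex branching improves task performance.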

In her second paper, Ilenna wondered whether making her “virtual neuron” more realistic would change how it performed the same tasks. To do this she included even more of the biological properties found in real neurons, including how dendrites receive and respond to electrical signals from other neurons. She expected that ‘humanizing’ her virtual neuron would impair its ability to perform complex calculations, leading to worse task performance. This is a reasonable prediction because in many cases adding more rules for a computer model to follow pushes it farther from the ‘idealized case’ where it performs very well. But to her surprise, adding these new, realistic characteristics to her neuron actually improved performance in many cases!

Thanks to Ilenna, we now know that dendritic complexity can allow individual neurons to act as mini-computers that receive information, perform calculations on it, and send new information to many other neurons. Moreover, because neurons come in many shapes and sizes across the brain, it’s likely that different types of neurons can perform completely different calculations depending on their shape. Her findings are significant because they open up a whole new perspective on how neurons process information. Understanding what individual neurons are capable of will help neuroscientists study the brain more closely and ultimately help us understand how the brain works!

Want to learn more about the details of Ilenna’s computational modeling of neurons? You can check out the full papers here and here!

About the brief writer: Joe Stucynski

Joe is a graduate student in Dr. Franz Weber’s and Dr. Shinjae Chung’s labs at Penn. He is broadly interested in what makes us sleep and how the brain transitions between sleep states.

Citations:

  1. Purkinje Neuron Picture: https://upload.wikimedia.org/wikipedia/commons/b/bb/PurkinjeCellCajal.gif
