Monday, November 25, 2019

Chemistry's Evil Twin

American English has many metaphors for the member of the family who seems just a little different from the others: the black sheep, the odd duck, the bad apple. One heard specifically about twins is the "evil twin." The idea of an evil twin is ancient, appearing in some of the oldest surviving mythologies, including Native American creation myths and those of the Mandinka people of southern Mali, but for the most part it has always been more of a fictional device than a reflection of reality. The one major exception (because there's always one) is in chemistry.
An approximate visualization of what the glucose molecule looks like.
PC: Jessica Noviello
Every chemical compound has a formula that shows how much of each element it contains. For example, the formula for glucose, a basic sugar used by cells for energy, is C6H12O6. This formula tells me, a definitive non-chemist, that there are 6 atoms of carbon (C), 12 atoms of hydrogen (H), and 6 atoms of oxygen (O) in every molecule of glucose. But the formula is only half of the story when talking about chemistry.
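Reading the element counts out of a formula like this is mechanical enough that a computer can do it. Here's a minimal sketch in Python (it handles simple formulas without parentheses; a real chemistry library would do much more):

```python
import re

def parse_formula(formula):
    """Split a simple chemical formula (no parentheses) into element counts."""
    counts = {}
    # Match an element symbol (capital letter + optional lowercase letter)
    # followed by an optional count; a missing count means 1.
    for symbol, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] = counts.get(symbol, 0) + (int(num) if num else 1)
    return counts

print(parse_formula("C6H12O6"))  # -> {'C': 6, 'H': 12, 'O': 6}
```

Run it on glucose's formula and you get back exactly the tally above: 6 carbons, 12 hydrogens, 6 oxygens.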

As small as atoms are, they are still three dimensional, which means the molecules they create are also three dimensional. Every molecule has a specific arrangement of atoms that gives it a particular shape. It's hard to see in the two-dimensional approximation of the glucose molecule that I made (above), but glucose has a slight twist in the molecule, and the loosely connected carbon to the top-right sits more underneath the rest of the molecule than I've shown here.

There are many different molecular structures. Some shapes look like a triangular pyramid (tetrahedral), a starfish (seesaw), or even two x's merged together (octahedral), but these are only some of the many shapes that molecules can have. Sometimes the only thing that's different between two molecules is the shape. The same chemical formula but a different molecular shape means that the two molecules are totally different. In special cases, two molecules are the same in formula and almost identical in structure, but are mirror images of each other. These molecules are called enantiomers.
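You can see the "mirror image" idea directly with coordinates: reflecting every atom across a plane keeps the formula and all the interatomic distances exactly the same, yet for a chiral arrangement no rotation can turn the reflection back into the original. A small sketch in Python, with made-up coordinates standing in for four different groups around a central atom:

```python
import math

def reflect_x(points):
    """Mirror a set of 3D points across the yz-plane (negate x)."""
    return [(-x, y, z) for x, y, z in points]

def pairwise_distances(points):
    """All distances between pairs of points, sorted for easy comparison."""
    return sorted(
        math.dist(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )

# Four distinct "atoms" in an asymmetric (chiral) arrangement
original = [(1.0, 1.0, 1.0), (1.0, -1.0, -1.0), (-1.0, 1.0, -1.0), (0.0, 0.0, 2.0)]
mirrored = reflect_x(original)

# The mirror image keeps every "bond length" and distance identical...
print(pairwise_distances(original) == pairwise_distances(mirrored))  # True
# ...but the mirrored molecule is not the same set of points as the original.
```

Same distances, different molecule: that's an enantiomer in miniature.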

To visualize how enantiomers look, just look down at your hands. They each have five digits, palms, and fingernails. They both do the same thing (more or less) and allow you to pick things up, move things around, write, type, hold, and touch. Even though they look very similar, you have a distinct left and a distinct right hand, and they are not the same. They are mirror images of each other, and though you may try to put one on top of the other, it will never be a perfect fit.
These "two" pictures are actually only a single picture! I took a
picture of my left hand, copied it, and flipped the image. It looks
pretty much like my actual right hand! PC: Jessica Noviello.
Chemical enantiomers are the same way, and chemists even refer to them as "right" or "left" handed molecules. Another word for this is chirality. There are many known chiral pairs in chemistry, and the differences in chemical effects between the two members of the pairs range from relatively harmless, such as things smelling different, to downright dangerous.

Oranges smell like oranges because of (+)-limonene, but
its cousin would make them smell like lemons!
PC: Wikipedia, used under the Creative Commons license.
On the harmless side, there are plenty of good examples in the food world. One chemical, limonene, smells different depending on the handedness of the molecules being smelled. The right-handed (+) limonene molecule smells like orange, but its left-handed (-) counterpart smells like, as you may have guessed from the name, lemon. Another example, and one that we had at our November Science on Main table just a couple of weeks ago, is another chemical called carvone. L-carvone (-) smells like spearmint and is often used in essential oil products, and its counterpart D-carvone (+) smells like caraway seeds or rye.

A pack of thalidomide capsules. PC: Steven C. Dickson.
Used under the Creative Commons license.
On the more dangerous side, the best example is also a tragic one, a true chemical evil twin. Before chirality and enantiomers were understood, pharmaceutical companies and scientists didn't know that the almost-identical chemicals could produce very different outcomes. In the late 1950s, many doctors in Europe prescribed pregnant women a drug called thalidomide to ease morning sickness, anxiety, and trouble sleeping. It really did a great job doing all of that, but it also caused serious birth defects in babies whose mothers used the drug. 40% of these babies died at birth.

Doctors made the connection between thalidomide and the birth defects in 1961, and the drug was immediately pulled from the market. It turns out that one of the enantiomers acted as the sedative that relieved the symptoms of pregnancy, while the other caused the birth defects, and both were present in the medicine prescribed. Worse, separating the two doesn't fully solve the problem: thalidomide's enantiomers can convert into each other inside the body, so even a purified dose becomes a mixture again. Today, thalidomide is still used to manage and treat serious illnesses such as cancer, tuberculosis, leprosy, and HIV/AIDS, but under strict safeguards. It is a drug with a terrible history and is still risky to take, but it is handled far more carefully than it was in 1961.

This Thanksgiving, be thankful that your family isn't as bad as chemistry's evil twin!

Monday, November 18, 2019

Arizona's Copper History

Pure copper, element #29 on the periodic table.
PC: Jonathan Zander 2009, used under the Creative
Commons license CC BY-SA 3.0.
One of Arizona's most important exports is the element copper. As of 2007, Arizona was the leading copper-producing state in the U.S., accounting for 60% of the country's total. According to the Arizona Geological Survey at the University of Arizona, prospecting and mining in Arizona began as far back as 1583, though the early targets were gold and silver rather than copper. In the 1850s, shortly after the land that became Arizona was acquired by the U.S. through the Treaty of Guadalupe Hidalgo, hardrock mining in the area helped build up local economies, towns, and cities, many of which still exist today. By the time Arizona became a state in 1912, "there were 445 active mines, 72 concentrating facilities, and 11 smelters with a gross value of nearly $67 million," an amount equivalent to $1.76 billion in 2018 dollars. Not too shabby for the 29th element!
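The inflation comparison in that quote is just arithmetic. Here's a quick sanity check in Python using only the two figures given above, so the multiplier is the one implied by the article rather than an official CPI number:

```python
value_1912 = 67e6    # gross value of Arizona mining output in 1912 (from the quote)
value_2018 = 1.76e9  # the stated 2018-dollar equivalent

# The implied price multiplier between 1912 and 2018
multiplier = value_2018 / value_1912
print(round(multiplier, 1))  # -> 26.3, i.e. prices roughly 26x higher
```

A dollar in 1912 went about 26 times further than one in 2018, at least by this measure.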

Copper mining is part of Arizona's recent history too, and its roots in the community stretch deep. In November 2018, the Magma smelter at the Resolution Copper site in Superior, AZ was demolished as part of a larger effort to reduce the environmental impact of mining and to remove toxins from the area. The smelter was named after the old Magma Copper Co., which began mining the area around Superior in 1911. The smelter opened in 1924 and immediately got to work melting down ore to extract the valuable copper. In 1971 Magma Copper switched to another smelter, and the old one was left standing in honor of its place as a literal pillar of the community. Eventually it became too dangerous and unstable to keep, and the choice was made to demolish it safely. It came down on November 10, 2018.

PC: Mark Henle/The Republic, c/o the Arizona Republic.
On a larger scale, what is it about Arizona that makes it so wonderful for copper mining? To answer this I first need to explain where the copper comes from. Copper must be extracted from copper-rich ores, the natural rock materials that contain valuable metals or minerals. Copper-bearing rocks usually have a blue or green hue. When copper-rich rocks interact with weathering agents such as water and Earth's atmosphere, the copper reacts with other chemicals and can form new minerals. For example, if there is a lot of carbonate (an ion made of carbon and oxygen) in the water that touches the rocks, then famous Arizona minerals such as malachite and azurite can form. Chrysocolla is another ore of copper. And though it is generally thought of as a phosphate-rich mineral, turquoise also contains copper and is found in Arizona and New Mexico.

The rocks that are richest in copper in Arizona are igneous rocks, meaning they formed either when a volcano erupted and deposited lava or ash on Earth's surface, or deep underground in a magma chamber. The rocks around Superior are a type of volcanic rock called tuff (specifically the Apache Leap Tuff). Tuff forms from ash that erupted from a vent during a volcanic eruption and later solidified. The Apache Leap Tuff is estimated to have formed 20 million years ago, which is plenty of time to interact with groundwater and form copper-rich minerals. Other volcanic rocks in the Superior area are rhyolite, a silica-rich igneous rock, and porphyry, a general term for rocks that have large crystals in them.

The copper rich minerals azurite (blue) and malachite
(green). PC: BYU Geology Dept.
Eastern and southeastern Arizona between 55 and 5 million years ago was an incredibly volcanically active place. A major geologic event called an orogeny, or mountain-building episode, was under way as the ancient Farallon tectonic plate slowly slid underneath the larger North American plate. This subduction led to major eruptions that spread enormous amounts of ash all over what later became Arizona. The ash eventually solidified and became tuff, the rock we're already familiar with. These tuff deposits formed large mountain ranges like the Superstitions, Galiuros, Chiricahuas, and Tumacacoris. The tuff reacted with groundwater and formed the copper-bearing minerals and ores that are mined today.

All of the volcanic deposits in Arizona make it a rich environment for copper mining. The unique geology here is the reason this state is the most copper-rich in the country, which has important economic implications for its continued success. From a science perspective, studying and understanding the environments where certain minerals form is an important step in learning more about Earth's history.

Resources: 
http://azgs.arizona.edu/minerals/mining-arizona


https://www.azcentral.com/story/news/local/pinal/2018/11/10/cheers-tears-historic-copper-smelter-superior-demolished/1808363002/


USGS Geologic map of the Superior, AZ quadrangle: https://ngmdb.usgs.gov/Prodesc/proddesc_2120.htm

https://en.wikipedia.org/wiki/Copper_mining_in_Arizona

Monday, October 28, 2019

Scuttlebutts

Out of all the iconic Halloween animals, one in particular is so creepy-crawly, it never loses its special spookiness at any point during the year: the spider. They come in all kinds of different sizes, colors, abilities, and, for some species in certain parts of the world, even flavors. (Yes, spiders are edible. No, I probably will never eat one.) But something I wondered about was how these animals move.

A tarantula found while hiking up the Mogollon Rim in late
October, 2018. PC: Jessica Noviello.
One of the most basic things about spiders is that they are arachnids, the biological class that also includes scorpions, ticks, mites, harvestmen (aka "daddy longlegs"), and camel spiders (which are not true spiders, despite the name). Spiders make up the largest order of arachnids. A usual rule of thumb for telling an arachnid from the more common insects is the number of legs, since arachnids have eight and insects usually have six. But as so often happens in science, the answer is a bit more complex. Some mites, which are classified as arachnids, actually have six legs, and some mite species even have four! There are also arachnids that start their lives with six legs and grow more as they molt.

Another fact about spiders is that they have an exoskeleton, a characteristic they share with insects. It's made of chitin, a flexible material similar to the keratin of human fingernails and hair. An exoskeleton is a support system that sits on the outside of an insect's or arachnid's body, keeping all of the squishy parts inside. We humans have an endoskeleton: our bones. Bones serve as the base for our muscles, organs, nervous system, and every other system in the body. Our bodies move when our muscles contract, pulling on tendons that move our bones. Our movements can be big, like a jump, or small, like picking a piece of lint off of our clothes.

Because of their exoskeletons, spiders' muscles are arranged very differently from ours, and their legs are missing some muscles entirely. So how do they move?

The answer is that most spiders use a combination of primitive muscles and hydraulic (fluid) pressure, similar to what many powerful machines use to distribute and lift heavy weight in the human world. First, the muscles. In humans, flexor and extensor muscles bend and straighten our joints, respectively. Spiders only have flexor muscles in parts of their legs, which can only make the joints curl in. This is why spider legs curl inwards when the spider dies.

To make up for the lack of extensor muscles, the spider uses a hydraulic system. The fluid that spiders and other arachnids (and insects too) use is called hemolymph, which is similar to the blood of animals with endoskeletons. Hemolymph is mostly made up of watery plasma, certain chemicals like amino acids, and hemocytes, which are part of a spider's immune system. Rather than being confined to an enclosed circulatory system like ours, the hemolymph directly surrounds the spider's organs; this arrangement is called an open circulatory system.

The inside structure of a spider. Used under the Wikimedia Commons License.
A spider does have a heart, though it's not nearly as good at pumping quickly as a human heart. The heart pumps hemolymph into arteries that simply end, spilling the fluid out so that it surrounds the spider's organs. To get the fluid down to the legs, the front section of the spider's body, called the cephalothorax, pressurizes it and pumps it down the arteries in the legs, working like a bellows or an accordion. This is how spiders crawl, scuttle, dart, climb, and jump!
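That hydraulic trick follows the same rule as the man-made hydraulic machines mentioned above, Pascal's principle: pressure applied to an enclosed fluid is transmitted throughout it, so the force it exerts scales with the area it pushes on. A quick sketch in Python (the numbers are made up for illustration, not measured spider values):

```python
def output_force(input_force, input_area, output_area):
    """Pascal's principle: pressure (force / area) is the same throughout the fluid,
    so the output force is that shared pressure times the output area."""
    pressure = input_force / input_area
    return pressure * output_area

# Hypothetical numbers: a small squeeze over a small area becomes
# a much larger push over a larger area.
f_out = output_force(input_force=1.0, input_area=0.5, output_area=5.0)
print(f_out)  # -> 10.0
```

Real spider legs are far messier than two pistons, of course, but this pressure-times-area idea is what lets a spider extend its legs (and even jump) without extensor muscles.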

A female zebra jumping spider. Used under the Wikimedia
Commons License.
A final fact about spider locomotion is that spiders are not limited to the ground. Some spiders use their silk to make "parachutes" that can carry them enormous distances. The special silk they use, called dragline silk, is so fine that even a small breeze can tangle it up, forming a "balloon" that the spider can ride. Every gust of wind carries the spider further until it eventually lands somewhere, possibly as much as 200 miles away! This is how arachnologists (scientists who study spiders) think spiders travel among islands, often becoming the first animals to inhabit newly formed volcanic islands. There are still no mathematical models that accurately describe how far a spider can travel using this method.

Turns out there's a lot to say about how spiders move, which is usually what I discover when I start writing a blog article. Here's a list of fun facts about spiders in general that are guaranteed to shock and inform people at any Halloween party:

1) There are 1,000 named species of tarantula worldwide, and the ones found in the Americas often have hair that is irritating if touched. The hair is used to scare off potential predators and curious humans. Tarantulas from other parts of the world generally have more potent venom in place of the irritating hair.
2) As of July 2019, there are over 48,200 spider species named by taxonomists, but how these are classified is still up for debate, as there have been 20+ different classifications proposed since 1900.
3) Spider legs have seven joints!

The seven joints of a spider's leg. PC: InfiniteSpider.com/Eky.edu
4) One species of jumping spider found in Central America, Bagheera kiplingi, is the only known herbivorous spider. All others are predatory, killing about 400–800 million tons of prey per year. Most of that prey is insects.
5) Almost 1,000 species of spider have been described in the fossil record. The oldest spider found preserved in amber is 130 million years old, and the oldest web fossil (which is a thing?!) is 100 million years old.
6) While most spiders live at most two years (unless killed or eaten, of course), tarantulas can live for decades. There is at least one story of a tarantula befriending a researcher who visited her burrow for years, which suggests they can recognize individuals.
7) Female spiders are generally larger and more venomous than male spiders, but it depends on the species. This is an example of sexual dimorphism, where males and females of a species appear physically different.

Sources:
https://www.youtube.com/watch?v=FlKago05Lxg
https://asknature.org/strategy/leg-uses-hydraulics-and-muscle-flex/
https://www.newscientist.com/article/dn9536-how-do-spiders-travel-such-epic-distances/
https://infinitespider.com/spider-legs-work/
https://en.wikipedia.org/wiki/Spider

Monday, October 21, 2019

Paper...That Glows?!

On October 11, your favorite neighborhood team of scientists was out on Main Street in Mesa, AZ for the monthly Second Friday event. Every time we set up our booth we try to have a demonstration of some kind to entice people to come over and visit us. Personally I think science is a lot more fun when we can touch it (which is definitely part of why I'm a geologist), and it's good to have a focal point for conversation to put people at ease when talking to scientists. This month, one of our members, Don Balanzat, brought phosphorescent paper to our booth. What ensued was a lot of experimentation and qualitative quantum mechanics!

Phosphorescent paper displays a property called photoluminescence, which is a fancy way of saying "glowing." The "photo-" part of the word refers to photons, the particles that carry light. Most of us have heard of the electromagnetic spectrum, which is a way of classifying different types of radiation based on their wavelengths (the distance between two wave crests) and their frequencies (the number of waves that pass a certain point each second, measured in Hertz/Hz). One fun fact: wavelength is inversely proportional to frequency, and multiplying the two together always gives the speed of light. That's a great thing to know if you like trivia.
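That trivia fact is one line of arithmetic: wavelength = speed of light / frequency. A quick sketch in Python, using a rounded value for the speed of light:

```python
C = 3.0e8  # speed of light in m/s (rounded)

def wavelength_m(frequency_hz):
    """Wavelength in meters from frequency in Hz: lambda = c / f."""
    return C / frequency_hz

# A wave at 5.0e14 Hz (visible light) has a wavelength of about 600 nanometers:
print(round(wavelength_m(5.0e14) * 1e9))  # -> 600
```

Double the frequency and the wavelength halves, which is exactly what "inversely proportional" means.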

The electromagnetic spectrum. From: GSFC/NASA.
In physics, light is considered both a particle and a wave. The particle is the photon, and the wave is how we model the photon's path and velocity. This is also why, if you ever look at quantum mechanics equations, you'll see a lot of sine and cosine terms; it turns out they are good base models for describing how light moves. Trigonometry is good for more than just triangles!

We did some qualitative (no equations, but still figuring out what's going on) experiments on the paper with some custom 3D printed Halloween cookie cutters that Don brought along with the paper. See the progression of 1) before we shined light onto it, 2) while we shined the blue flashlight onto the paper and cookie cutters, and 3) after we turned off the light and removed the cookie cutters.

Picture 1: Before we shined the blue light on the paper.
PC: Jessica Noviello.
Picture 2: While we shined the blue light on the paper.
PC: Jessica Noviello.
Picture 3: After we shined the blue light on the paper.
PC: Jessica Noviello.
Do you notice the shape of the cookie cutters is left behind? That's because the cookie cutters blocked the blue light from reaching the paper. The molecules underneath didn't absorb any photons, so their electrons didn't get excited. The glow fades back to normal after ~20 seconds, but it's cool to see quantum mechanics at work!

What's going on here? As the photons hit the paper, the molecules that make up the paper absorb the light. All materials do this, by the way, even if it's rare to see something glow afterwards. The difference with the phosphorescent paper is, of course, the glowing after the light is removed. Different materials are sensitive to different wavelengths of light. The paper we had was very sensitive to blue light, which is higher-energy (shorter-wavelength) light. It barely responded to the yellow-ish light of the nearby streetlamps and didn't respond at all to red light (considered low energy due to its longer wavelengths).*
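"Higher energy" can be made concrete with the Planck relation E = hc/wavelength: the shorter the wavelength, the more energy each photon carries. A quick sketch (450 nm and 650 nm are typical blue and red wavelengths, not measurements of our actual lights):

```python
H = 6.626e-34  # Planck's constant, J*s
C = 3.0e8      # speed of light, m/s

def photon_energy_j(wavelength_m):
    """Energy of a single photon in joules: E = h * c / wavelength."""
    return H * C / wavelength_m

e_blue = photon_energy_j(450e-9)  # ~450 nm, blue light
e_red = photon_energy_j(650e-9)   # ~650 nm, red light
print(e_blue > e_red)  # True: each blue photon carries more energy than a red one
```

That extra per-photon kick is why the blue flashlight could excite the paper's electrons while the red light couldn't.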

When the light source is removed, the molecules are slow to release the energy they absorbed. And when they do emit photons, those photons don't have the same energy or wavelength as the ones absorbed; they come out at slightly lower energy. This is due to something called energy states in atoms, the building blocks that make up molecules. An atom that absorbs a photon becomes "excited": one of its electrons jumps up to a higher energy state. It's like someone perking up after a cup of afternoon coffee. As the caffeine wears off, the person's energy level goes back to its normal, lower, more stable state. The same thing happens with electrons when they give up their extra energy. In phosphorescent materials, it happens on a timescale we can observe.

Here are some more pictures from one of our experiments with another one of our props, a dinosaur.
PC: Jessica Noviello
PC: Jessica Noviello
PC: Jessica Noviello
*Author's Note: Even though I wrote "short" and "long," I am only talking about the light that we can see in the visible part of the spectrum. There's a lot more and, presumably, the paper would respond to things like UV and X-ray light, but we can't see that so we can't measure it without special tools, and we didn't have any of those with us. They are expensive!

Monday, October 14, 2019

The Need for Blood

It's another Monday in October, which means it's time for another blog post on something spooky! This week's topic builds on last week's post about the discovery of blood types and the risky and definitely gross history of blood transfusion. I definitely suggest reading that one first, but you won't miss anything (except an awesome story) by skipping it. 

Last week we talked about how there are four main blood type groups: A, B, AB, and O. Each of these blood types has particular markers called antigens on its red blood cells. Blood also has an additional antigen called the Rh factor, which can be either positive or negative. We'll only be talking about these two antigen systems in this post, but doctors and blood scientists have identified over 600(!) other antigens. The presence or absence of these antigens is what makes a person's blood a certain type.

When incompatible blood types mix, the reaction between them makes the blood clump. In a scientific lab this isn't a big problem, but if someone has two incompatible types of blood in their body, it can lead to life-threatening complications. This is part of why blood transfusions before 1910 were so dangerous, and why blood transfusion was regarded as a last-ditch effort to save a life. The Rh factor matters too: Rh-negative patients should only receive Rh-negative blood, though Rh-positive patients can receive either. Because O-negative blood has no A, B, or Rh antigens on its red blood cells, it can go to a patient of any other blood type without problems, making it the universal donor. AB-positive blood has both A and B antigens (plus the Rh factor) on its red blood cells, which means those patients can receive any blood type, hence why AB+ is called the universal receiver.
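The compatibility rules in this paragraph reduce to one test: a transfusion is safe only if the donor's red cells carry no antigen (A, B, or Rh) that the recipient's own cells lack. A sketch in Python (types written as strings like "O-" or "AB+"; real crossmatching checks many more antigens than these three):

```python
def antigens(blood_type):
    """Antigens on the red blood cells for an ABO/Rh type like 'AB+' or 'O-'."""
    letters, rh = blood_type[:-1], blood_type[-1]
    result = set() if letters == "O" else set(letters)  # 'A', 'B', or both
    if rh == "+":
        result.add("Rh")
    return result

def can_receive(recipient, donor):
    """Safe only if the donor's cells carry no antigen the recipient lacks."""
    return antigens(donor) <= antigens(recipient)

print(can_receive("AB+", "O-"))  # True: O- carries nothing foreign to anyone
print(can_receive("O-", "A+"))   # False: A and Rh antigens are foreign to O-
```

Run `can_receive` for every donor/recipient pair and you reproduce the Red Cross chart below: O- can give to everyone, and AB+ can take from everyone.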

Safe blood transfusions between donors and recipients. Notice that the O blood type can be donated to any other blood type, but can only receive O blood. This is why O blood is called the universal donor. AB types can receive any kind of blood as the universal receiver type.
Diagram from: https://www.redcrossblood.org/donate-blood/blood-types.html
A person's blood type is controlled by genetics (specifically a gene at chromosome position 9q34.2, if you want to get technical). Each person inherits half of their genetic material from each parent. Sometimes both parents' versions of a gene are expressed simultaneously, but sometimes only one is. The variant that dominates is called the dominant trait, and the other is called the recessive trait. This is the case with blood types: the A and B variants are dominant, whereas the O variant is recessive. For completeness, Rh+ is dominant over Rh-; this is important for later.

How is a person's blood type decided? Each parent has two alleles, or variants of a gene, that they inherited from their own parents. Which of the two gets passed down to the child is completely random. The child thus receives two alleles, one from each parent, and their blood type is determined by whichever is dominant. If an A or B allele is present, the child will be that blood type. Only if both parents supply the recessive O allele will the child have O blood. When a parent with A blood has a child with a parent with B blood, the child could have AB type blood, where both alleles are expressed. This is a special case of genetics called codominance, when both traits are expressed at the same time.
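The allele rules above are simple enough to enumerate in code. A sketch in Python (ABO only, ignoring the Rh factor; the allele pairs like ('A', 'O') are hypothetical examples):

```python
from itertools import product

def abo_type(allele1, allele2):
    """Blood type expressed by a pair of alleles ('A', 'B', or 'O')."""
    expressed = {a for a in (allele1, allele2) if a != "O"}  # O is recessive
    # Both non-O alleles show at once (codominance); two O's give type O.
    return "".join(sorted(expressed)) or "O"

def possible_child_types(parent1, parent2):
    """All blood types a child could have, given each parent's allele pair."""
    return {abo_type(a, b) for a, b in product(parent1, parent2)}

# An A-type parent carrying O and a B-type parent carrying O
# can have a child of any ABO type:
print(sorted(possible_child_types(("A", "O"), ("B", "O"))))  # ['A', 'AB', 'B', 'O']
```

This matches the parent-vs-child chart below: the hidden recessive O alleles are why two parents with A and B blood can still have an O-type child.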

The blood type possibilities of parents vs. child.
Chart from: https://www.redcrossblood.org/donate-blood/blood-types.html
The O- blood type is relatively rare because both the O and the Rh- traits are recessive. Across racial demographics in the United States, 8% of Caucasians, 4% of African-Americans, 1% of Asian-Americans, and 4% of Latinx-Americans have O- blood; overall, about 7% of the U.S. population does. The most common blood type is O+ (38% of U.S. adults), followed by A+ (34% of U.S. adults). Of the eight most common blood types (the ABO and Rh factor combinations), the rarest is AB-, which only 0.6% of U.S. adults have.

Percentage of people with the O- blood type. Image credit:
https://www.redcrossblood.org/donate-blood/blood-types.html
O- is also the only blood type that any person can receive without any problems. In an emergency where a person is in dire need of blood and there is no time for laboratory tests, doctors and surgeons give that person O- blood until the patient's blood type is known. On any given day, about 35,000 pints of blood are given to people for emergencies, scheduled operations, and routine transfusions. That means someone needs blood roughly every 2 seconds.

Even with all of modern medicine's tools, blood is something that cannot be made in a laboratory. The only way to get more blood is for people to donate it. According to the American Red Cross organization, while 50% of U.S. adults are able to give blood, only 5% do. When combined with the low rate of O- blood in the U.S. population, the result is that O- blood is usually in short supply, even though it is most needed.

To help provide more blood for medical use without forcing people to donate, a team led by Stephen Withers, a chemical biologist at the University of British Columbia in Vancouver, Canada, has found a way to change type A blood into blood that can be used by anyone. The researchers screened DNA from human gut microbes and found bacterial enzymes that can strip the A antigens off red blood cells, making the cells behave like O-type blood. Right now the team is doing more testing to make sure the formerly-A blood is safe to use in transfusions, but if it is, this discovery could potentially double the amount of universal donor blood available. That would help meet the need for blood in the U.S. and save more lives.
Blood shortages usually peak in the summer, when need is greatest
but supply falls. Giving blood addresses a critical need in most
communities. Image credit: American Red Cross.

In the meantime, if you can donate blood, we ask that you seriously consider doing so, especially during the month of October. Yes, needles are scary, and I admit that even I don't like that part. I do it because I know that my blood will go to someone who needs it. Plus, the cookies I get afterwards are tasty. My blood will be replaced in a few weeks. For so many, that isn't guaranteed.

Sources:
https://www.sciencemag.org/news/2019/06/type-blood-converted-universal-donor-blood-help-bacterial-enzymes
https://www.redcrossblood.org/donate-blood/blood-types.html
https://www.mayoclinic.org/tests-procedures/blood-transfusion/expert-answers/universal-blood-donor-type/faq-20058229
https://ghr.nlm.nih.gov/gene/ABO
https://en.wikipedia.org/wiki/Blood_type_distribution_by_country

Monday, October 7, 2019

The History of Blood Types

Blood has always had a mystical, life-giving quality to it, even in ancient societies. It is something worth protecting. Even today there are many terrifying stories across cultures of bloodsucking animals who prey on innocent humans in the night, either killing them outright or changing them into wicked creatures. Other real animals, mainly mosquitoes, are known to spread diseases such as malaria and West Nile virus via their bloody bites.

But this blog post isn't about blood diseases, though they are certainly worth a mention. It's instead about how science gained the basic understanding of blood and blood types and used it to save millions of lives.

A lab technician examining blood samples.
PC: iStock.com/Arindam Ghosh
Today, most people are aware that there are four blood types: A, B, AB, and O. Each person has one blood type, and for 99% of people, it's one of these. Some people are also aware of something called the Rh factor, named for a protein first found in a rhesus monkey in a laboratory (more on that later). About 85% of people are Rh positive (Rh+), and the rest are Rh negative (Rh-). Except in very rare cases and in some pregnancies, the Rh factor has no impact on a person's health. We describe our blood by putting the ABO type and the Rh factor together. For example, my blood type is A+, while my best friend's is O+. What makes my blood different from his?

It all comes down to two things: antigens, which are sugars that sit on every single one of my red blood cells, and antibodies, proteins that float in my plasma, the clear fluid that carries my red blood cells along with the other proteins and sugars my body needs. Different blood types have different combinations of the two. My type A blood has A antigens on its red blood cells and anti-B antibodies in the plasma. A person with B blood has the opposite arrangement: B antigens on the red blood cells, anti-A antibodies in the plasma. Someone with AB blood has both A and B antigens on their red blood cells and neither antibody in their plasma, and someone with O blood has neither antigen on their blood cells but both anti-A and anti-B antibodies in the plasma.

Visualization of blood types.
From: https://www.redcrossblood.org/donate-blood/blood-types.html
Cool information, I guess, but what good does any of that knowledge do? Quite a bit, as it turns out! If someone gets a blood transfusion (a medical procedure where a patient receives blood from another person) and the blood type is not the same as theirs, there can be a potentially fatal reaction. Blood clumps when antibodies in the plasma meet an antigen they do not recognize. That's because foreign antigens trigger a response from the immune system, the system in our bodies that fights infection and sickness. Instead of helping the person, the new blood could actually kill them. Before 1900 and the discovery of blood types, a blood transfusion was a last-ditch effort in dying patients because it so often resulted in the patient's death. No one knew precisely why.

The first blood transfusion was conducted in 1667 on a 15-year-old French boy by the physician Jean-Baptiste Denys. His early transfusions used animal blood instead of human blood, most often from sheep but sometimes from dogs. Patients who received large quantities of animal blood usually died after multiple transfusions, and today we can probably understand why: animals and humans have incompatible blood. Back then, though, it was assumed that all blood was the same, and sick people were willing to try anything to stay alive. Transfusion quickly became so dangerous and controversial that in 1668 the French government and the Royal Society of London banned the procedure in their respective countries, and the Vatican condemned it in 1670. Transfusions were taboo for 150 years.

Physician James Blundell.
PC: engraving by John Cochran,
public domain.
In 1818, British physician James Blundell used a blood transfusion to treat a woman with uncontrollable bleeding after childbirth. He may have been desperate to save her, even though the procedure was technically illegal. It worked, and he decided to keep trying the technique on other patients. Of the ten transfusions he performed between 1825 and 1830, five kept the patient alive. Even by the standards of the 1800s this was a poor success rate, and the medical community viewed transfusions as risky and not medically sound. The procedure even made its way into horror literature: in Bram Stoker's Dracula, the character Lucy receives blood transfusions from her suitors to replenish the blood Dracula has sucked away, but dies anyway. Her death is attributed to the titular vampire, but I wonder whether the transfusions themselves didn't hasten it.

It wasn't until 1900 that Austrian physician Karl Landsteiner noticed that blood from different humans would often clump together when mixed (he also saw that human blood clumped when mixed with animal blood, which, given how many people had died from transfusions, probably didn't surprise him). This was the first evidence that not all blood is the same. He didn't yet know whether the differences were an inherent characteristic of each individual or the result of an infection acquired at some point in life. His experiments in 1901 showed that a given person's blood would not clump with some people's blood but would always clump with others'. In this way he identified three blood groups, which he initially named A, B, and C. Group C was eventually renamed after the German word ohne, meaning "without," and became what we call it today: O. Two of his students discovered the fourth main blood type, AB, in 1902.
Dr. Karl Landsteiner, blood type discoverer.
PC: The Rockefeller Archive Center.

Though today we might think of blood types as basic information, especially given how often they come up in crime shows and medical dramas, this was a groundbreaking discovery in the early 1900s. Landsteiner refined his theory of blood groups and published it, and the number of deaths from blood transfusions dropped dramatically once doctors learned to test blood before putting it into someone. Today transfusion is one of the most common medical procedures, saving up to 4.5 million lives annually in the United States alone. For this discovery, and in recognition of the lives it had saved, Landsteiner was awarded the 1930 Nobel Prize in Physiology or Medicine. He went on to co-discover the human Rh factor in 1937 by studying the similar antigen in the rhesus monkey.

Turns out there's a lot more to say about blood types and where they come from, so I'll continue that in next week's post. Until then, be safe and keep your blood where it belongs!

Monday, September 30, 2019

The Coolest Clouds You've (Probably) Never Seen

Most people have seen clouds. Most people who have seen clouds have also noticed that no two are alike. The fluffy white ones are almost mandatory on a sunny summer day, the grey ones signal snow might be on the way, and the dark, low clouds suggest rain or a thunderstorm in the near future. For millennia people have used clouds to forecast immediate changes in local weather, a skill that has kept people safe from lightning strikes, flash floods, and hypothermia, among other perils of dangerous weather. Though today satellites and radar track large weather patterns, clouds are still excellent indicators of weather.

Generally clouds can only be seen during the day, unless a particularly bright moon reflects the sun's light onto the night sky. Other times clouds are detected only because the stars cannot be seen through them. What's being seen then is really the absence of light blocked by the clouds, rather than the clouds themselves.

So when bright clouds are seen at night, what the heck does that mean about the weather?
Noctilucent clouds. Matthias Süßen/Wikimedia Commons  
These are called noctilucent, or "night-shining," clouds. Physically, they are collections of small water-ice crystals surrounding tiny particles of dust suspended very high in the atmosphere. In fact, noctilucent clouds are the highest clouds of all! While most clouds hang out in the troposphere, the bottom-most layer of Earth's atmosphere that extends 5-6 miles above the ground, noctilucent clouds form around 50 miles above the surface, in a layer called the mesosphere, just below where the famous aurora borealis glows.

Structure of Earth's atmosphere:
https://scied.ucar.edu/atmosphere-layers
These clouds are typically only visible at high latitudes in both hemispheres, between about 50 and 70 degrees. For reference in North America, that means going into Canada, at least as far north as Vancouver. They are only visible during twilight hours in summer. After the sun has set below the horizon, its rays linger for a little while, yielding the lovely colors associated with sunsets. For a while the Earth blocks the part of the sun's rays that would illuminate the lower atmosphere, so the rays reach only the highest parts of the atmosphere, where the noctilucent clouds are. This lasts only until the Earth blocks the sunlight entirely, creating full night. Viewing noctilucent clouds takes luck and a lot of patience to wait for the right moment, and a camera never hurts!
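Curious readers can estimate that twilight window with a little geometry. In a simple spherical-Earth model (my own back-of-the-envelope assumption, ignoring atmospheric refraction), Earth's shadow reaches a height h directly overhead once the sun has dipped acos(R/(R+h)) below the horizon:

```python
import math

R_EARTH_KM = 6371.0   # mean Earth radius

def shadow_depression_deg(cloud_height_km: float) -> float:
    """Degrees the sun must sit below the horizon before Earth's shadow
    reaches a given altitude overhead (spherical Earth, no refraction)."""
    return math.degrees(math.acos(R_EARTH_KM / (R_EARTH_KM + cloud_height_km)))

# Noctilucent clouds sit ~50 miles (~80 km) up: they stay sunlit until the
# sun is roughly 9 degrees below the horizon, long after ordinary clouds a
# few kilometers up have gone dark.
print(round(shadow_depression_deg(80.0), 1))   # ~9.0
print(round(shadow_depression_deg(5.0), 1))    # ~2.3
```

That gap between roughly 2 and 9 degrees of solar depression is exactly the window in which the high clouds glow against an otherwise dark sky.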

These clouds are a relatively recent arrival in the scientific record, though they've probably existed far longer. There is no written mention of them until 1885. Otto Jesse of Germany began to study them extensively; he coined the name "noctilucent" in 1887, and his notes are the first to mention them at all. He had already been studying changes in sunsets since the historic eruption of the Krakatoa volcano in 1883, and it was perhaps accidental that he noticed these unusual clouds at all. Subsequent observations by Jesse and his colleagues at the Berlin Observatory through 1896 determined the clouds' heights and general visibility patterns. Little science was done on them after Jesse's death in 1901 until a satellite, NASA's OGO-6, observed them from orbit for the first time in 1972.
Geometry of the sky needed for viewing noctilucent clouds.
Credit: NASA

Even today, these clouds are "not fully understood," which is just scientist speak for "we're really not sure what conditions make these clouds, what they mean, or how they vary with different latitudes, altitudes, dust concentration, and ice crystal size." This is really exciting news! It means there's a lot left to discover for anyone interested in studying them, and it's practically a new branch of meteorology. And there are always new mysteries around the corner to learn about.

For example, what has really puzzled everyone from expert meteorologists to casual stargazers is why these clouds have recently been observed as far south as northern New Mexico (36 degrees N latitude). Noctilucent clouds also appear to have become more frequent and brighter over the past decade. Atmospheric scientists point to increased levels of carbon dioxide and methane as potential causes. These gases rise into the upper atmosphere and interact with other gases, creating water vapor that can form additional high-altitude clouds. Stronger, more frequent storms on Earth can also supply more water vapor to the high atmosphere, further driving noctilucent cloud formation. Scientists are actively testing these hypotheses now to understand whether the clouds signal a major shift in Earth's atmosphere.

Until we know more, there is not much to do but enjoy their beauty on a quiet summer night.
Credit: https://scied.ucar.edu/imagecontent/noctilucent-clouds
Sources:
https://en.wikipedia.org/wiki/Noctilucent_cloud
https://www.skyandtelescope.com/observing/noctilucent-clouds-3/
https://www.sciencenews.org/article/night-shining-noctilucent-clouds-have-crept-south-summer
https://scied.ucar.edu/

Monday, March 18, 2019

Color-changing octopuses: How do they do it??

Christina Forbes is a postdoctoral researcher in the School of Molecular Sciences at Arizona State University. She specializes in organic chemistry and biochemistry.

Octopuses are incredibly fascinating creatures: they are intelligent and clever, have a strange organ configuration (to me, at least), and possess some really exotic forms of sensory perception.

Octopuses are cephalopods, along with their cousins the squid, cuttlefish, and nautiluses. Fossil impressions show that these creatures have been around for more than 200 million years, so they have had plenty of time to evolve some really unique adaptations!


No octopus to see here!
(Flickr: 808_Diver)
Some species of octopus have incredible abilities that make them masters of camouflage, where their skin can change color and texture. Check out this short TEDtalk if you want to see some examples of this amazing ability! So how does an octopus do this?

The skin of an octopus has several layers of cells that work together during color changes. One layer contains chromatophores, tiny fluid-filled sacs holding pigments of different colors. If the muscles around a sac squeeze it shut, the pigment is hidden; if the sac expands, the color shows through. Opening and closing these sacs gives an octopus a wide range of skin colors (a little like a digital pixel display). Another layer contains iridophores, which reflect surrounding light, especially greens and blues. The base layer contains leucophores, which act as a white background for all the other colors. Some octopuses also have photophores, which give them bioluminescent abilities.
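The "pixel display" analogy can be made concrete with a tiny sketch in Python. The pigment color and the linear-blending rule here are my own illustrative assumptions, not measured octopus values; the point is just that one expansion number per sac is enough to produce a whole range of shades over the white leucophore base:

```python
# Toy model of a single chromatophore: an expansion fraction from 0.0
# (squeezed shut) to 1.0 (fully open) blends its pigment over the white
# leucophore background. Colors/values are invented for illustration.

def blend(pigment, expansion):
    """Linearly mix a pigment RGB color over a white background."""
    white = (255, 255, 255)
    return tuple(round(w + (p - w) * expansion) for p, w in zip(pigment, white))

BROWN = (101, 67, 33)   # a hypothetical pigment color

print(blend(BROWN, 0.0))   # sac shut: pure white shows through
print(blend(BROWN, 1.0))   # sac fully open: the full pigment color
print(blend(BROWN, 0.5))   # half open: a lighter brown
```

A patch of skin would then be a grid of many such sacs, each with its own expansion value, exactly as a display is a grid of independently driven pixels.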


A close-up of squid skin, where the wide-open chromatophores reveal brownish colors
(Creative Commons, via wikipedia)

More deeply (as my neighbor asked me): does an octopus think about it and "try" to look like its surroundings, or does an octopus innately change color and texture without thinking about it?

Without being able to directly ask an octopus whether its camouflaging is conscious or subconscious, some experts have tried to study the question. One study found that an octopus tends to choose a particular object to imitate (a rock or a piece of coral) rather than its entire surroundings, suggesting there is some thought behind camouflage, and that at least some octopuses use their vision to match their surroundings.

On the other hand, in addition to seeing light with their eyes, octopuses may perceive light or color with their skin as well. Octopus skin cells contain opsin proteins, special sensors that allow the skin to detect color and light. That's not to say octopuses can "see" with their skin, but the skin may pick up enough information about the surroundings that the animal can change shape and color to mimic its environment without involving its brain.
 
So maybe camouflaging is a little more innate than conscious!

So can octopus skin change color without the octopus thinking about it? Actually, yes, sort of: skin that has been removed from an octopus (ouch!) can change color on its own, but much more slowly than usual, suggesting that neural input from the brain assists in color changing and camouflaging.

As if all this wasn't crazy enough, some species of octopus are color-blind. Or are they? A recent paper suggested that while an octopus might have only one photoreceptor type (and so see only in black-and-white), its unique U-shaped pupil could allow it to interpret color through chromatic aberration. However, this is only one possible model and remains to be proven.

If only we could just ask an octopus.


Octopuses have been found to build and carry shelter materials. They can also open doors and screw-top containers. Don't ever underestimate an octopus's abilities.
(Creative commons, via pxhere)

Other awesome references:

https://www.nationalgeographic.com/science/phenomena/2014/08/18/adaptive-colour-changing-sheet-inspired-by-octopus-skin/

https://ocean.si.edu/ocean-life/invertebrates/how-octopuses-and-squids-change-color

https://www.nature.com/scitable/topicpage/cephalopod-camouflage-cells-and-organs-of-the-144048968

Thursday, March 14, 2019

IT'S PI DAY!!!

Pi is the famously irrational number representing the ratio of a circle's circumference to its diameter. It is often approximated by the fraction 22/7, but it's far more commonly written as the start of its literally endless string of digits: 3.14159.... To date, pi has been calculated out to 31.4 trillion digits. The digits go on forever without ever settling into a repeating pattern, but we can compute as many of them as we care to. Frankly, I'm glad computers can do that for us.

Pi as a concept is ancient, though its accepted value has varied over time. Ancient Babylonians usually approximated it as 3, though one tablet puts it at 3.125. Ancient Egypt used a slightly different value, about 3.16 (more precisely, 256/81), which comes to us from an ancient scroll called the Rhind Mathematical Papyrus (c. 1650 BCE).

The Egyptian Rhind Mathematical Papyrus.
Photo credit: The BBC
The famous Greek mathematician Archimedes of Syracuse (287–212 BCE) tried his hand at calculating pi, but was only able to place limits on its value rather than compute an exact number. He bounded it between 3 10/71 and 3 1/7, that is, between 3.140845... and 3.142857.... He wasn't wrong, just not as precise as later mathematicians. About 700 years later, the Chinese astronomer and mathematician Zu Chongzhi calculated the ratio of circumference to diameter to be 355/113, or 3.1415929.... The details of his work are unknown because his book has been lost to time, but historians believe it was computationally intense, "involving hundreds of square roots carried out to 9 decimal places."
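Archimedes' squeeze can be replayed in a few lines of Python. The recurrence below (a harmonic mean, then a geometric mean, at each doubling of the polygon's sides) is a standard modern reconstruction of his method, not his actual hand calculation, which ground through many square roots:

```python
import math

# Bound pi between the semiperimeters of polygons circumscribed about and
# inscribed in a unit circle, doubling the side count each step.
outer = 2 * math.sqrt(3)   # circumscribed hexagon (6 sides)
inner = 3.0                # inscribed hexagon (6 sides)

for _ in range(4):                                 # 6 -> 12 -> 24 -> 48 -> 96 sides
    outer = 2 * outer * inner / (outer + inner)    # harmonic mean tightens the upper bound
    inner = math.sqrt(inner * outer)               # geometric mean tightens the lower bound

# After four doublings we have Archimedes' 96-gon bounds:
print(inner, outer)   # roughly 3.14103 < pi < 3.14271
assert inner < math.pi < outer
```

Those two numbers are exactly his 3 10/71 and 3 1/7 to the precision shown, which is a pleasing way to check a 2,200-year-old computation.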

It wasn't until 1706 that the Greek letter "pi" was used to represent this irrational but indispensable number. It was introduced by William Jones, mathematician, Fellow of the Royal Society, and friend of Sir Isaac Newton, but it was the Swiss mathematician Leonhard Euler who popularized it beginning in 1737.
Leonhard Euler, painted by Jakob Emanuel
Handmann, 1753.

In honor of its first three digits, 3.14, we celebrate PI DAY on the fourteenth day of every March (3/14 in the MM/DD format). It is by far the most glorious of all March holidays (in the opinion of this author, at least), and I'm far from the only one who thinks so! In true mathematical style, MIT releases its undergraduate admissions decisions on this date every year, with the emails sent at precisely 1:59 PM Eastern time. In 2009, a non-binding resolution in the House of Representatives declared March 14, 2009 National Pi Day in an effort to engage students in mathematics. Also, it's an excuse to bring pie into work and share it with your coworkers. What could be more fun?

Honestly, the answer to that question might be having participated in the very first Pi Day celebration. Pi Day as a holiday isn't an old idea; the first known large-scale celebration of the arithmetical anomaly was in 1988 at the Exploratorium in San Francisco. Dr. Larry Shaw was a physicist who worked there, and it seems he used the day as an excuse to get the staff and public together to eat pie. After walking around one of the building's circular spaces, everyone stopped and devoured the many fruit pies around the room. The Exploratorium still holds Pi Day celebrations to this day, now far more extensive and engaging, with games, talks, activities, and even a band! Fortunately, pie eating is still involved.

Pi actually has a rival for best circle constant. Tau relates a circle's circumference to its radius rather than its diameter, so Tau is just 2*Pi, or 6.283185.... Tau has its own holiday on June 28th (6/28), but it is less well known, probably because it lacks delicious baked goods. Supporters of Tau argue that it removes confusion when converting between radians and degrees, since one full turn is exactly one Tau, and that the trigonometric functions have a period of one Tau instead of 2 Pi, which is easier to grasp conceptually. Pi is winning the popularity contest for now, but that could change.
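Tau has even made it into programming languages: Python's standard math module has shipped math.tau since version 3.6, and a few lines are enough to check the claims above:

```python
import math

# tau is defined as exactly twice pi.
assert math.tau == 2 * math.pi

# With tau, angles read naturally as fractions of a full turn:
quarter_turn = math.tau / 4
assert math.isclose(math.degrees(quarter_turn), 90.0)

# ...and the trig functions repeat after exactly one tau.
x = 1.234
assert math.isclose(math.sin(x + math.tau), math.sin(x))
```

Whichever constant wins the popularity contest, the math is the same; the argument is purely about which one makes formulas easier to read.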

Regardless, Pi Day is a fun time to celebrate the math in your life and learn about the history of the number we all know by name.
I brought an apple pie into work today to celebrate
Pi Day. It was delicious!

Monday, March 11, 2019

Accelerating Your Particles: So Where Does My Beam Come From Anyway?

This week's guest blogger is Kellen McGee, 1st year graduate student in Physics (specializing in nuclear and accelerator physics) at the National Superconducting Cyclotron Laboratory/Facility for Rare Isotope Beams at Michigan State University. 

Big particle accelerators and colliders have long been some of the most visible physics experiments. Many of us going through graduate school these days remember when the Large Hadron Collider at CERN, the 27-km-circumference circular particle collider near Geneva, Switzerland, turned on. The event brought great excitement over what the new experiment was about to uncover (notably the much-anticipated detection of the Higgs boson), and also anxiety about whether it would accidentally create a black hole. It is but the latest example of how particle acceleration technologies have, since the turn of the 20th century, been almost entirely responsible for what we have figured out and validated about the laws of the physical world at the subatomic level.


Physicists, both theorists and experimentalists, drove the hunt for the particles and laws of the Standard Model: physicists' current, and beautifully tested, understanding of the fundamental particles and three of the fundamental forces: the weak, the strong, and the electromagnetic. Gravity is, alas, not included; if you'd like to try to incorporate gravity into the Standard Model, there's at least a PhD in it for you, if not a Nobel Prize. The particles of the Standard Model are all the known fundamental particles that can't be built out of one another: the various flavors of quarks and leptons (including the neutrinos), plus force carriers such as the gluon, which together make up every other particle we know of.

However, quite apart from the physics that tells you what the particles are, how they behave, and how they can be identified in detectors, you also need physicists who know how to speed particles up to the energies needed to collide them, or smash them into targets, to learn what's inside them, be they protons, electrons, neutrons, or even whole atomic nuclei.

How does this work?

The history of particle-accelerating devices is a long and interesting one. You might not know that you may have a particle accelerator in your very own house. If you have an older-model television or computer monitor (the kind that isn't flat), the screen is illuminated by something called a cathode-ray tube (CRT). This is essentially an electron accelerator: a voltage source accelerates electrons that zoom off and hit a phosphorescent screen, making it glow wherever they land. It is the simplest type of what's called an "electrostatic" accelerator: two plates, one negatively charged and the other positively charged, cause electrons to fly off one plate, accelerate in the electric field between them, and land on the other.

This type of accelerator works because electrons are charged particles and get pushed ("kicked," in accelerator jargon) by an electric field. Now, it's cool to zoom electrons around, but there's only so much you can do with them: they're light, and to get them going fast enough that smashing them into things creates interesting particles (or other interesting physics effects), you have to give them more energy than is practical (really long accelerators being really expensive and energy-hungry). Fortunately for us, other particles carry charge too. Protons, for example, are positively charged, and were smashed into each other at the LHC in the famous Higgs experiments.

While the fame has historically gone to colliders and accelerators chasing higher and higher energies, nuclear physicists have been turning to accelerator experiments as well. These are the people interested in the structure and behavior of nuclei: their interrelationships across the periodic table of elements, their relative stabilities (how easily their clusters of protons and neutrons fall apart), and other properties. Accelerators, either linear or circular (cyclotrons), are also used to create medical isotopes for cancer treatment. Your local hospital might have one in its basement, and be looking for newer and more efficient ways of making these medically critical materials.

Nuclei that are missing electrons have a net positive charge, and thus can be kicked along by an electric field just like the electrons in a CRT or the protons at the LHC. By accelerating whole nuclei and smashing them into carefully engineered targets, nuclear physicists can start addressing questions like those above. However, once you get beyond electrons at slow speeds, acceleration becomes much harder than setting up two plates at positive and negative voltage. The only way to make the electrons go faster in that setup is to increase the voltage, which will always, eventually, lead to an electric discharge (a boring name for a shock!) before the electrons reach interesting speeds.

If you can't make one "kick" really strong, your next option is to line up a series of kicks that each increase the charged particle's energy by a certain amount. Imagine a whitewater rafter going down a series of waterfalls: something similar happens as a charged particle travels across a series of kicks, gaining more and more energy regardless of how fast it's already going (before relativity starts kicking in, of course!).

Suppose we want to build a nuclear-science accelerator that brings ions to interesting energies, about 65% of the speed of light. (CERN, a high-energy physics facility, aims for 99.999...% of the speed of light, for comparison.) To do this, we have to figure out how to line up enough kicks to get the particles to the speed we want to study. This is exactly the problem currently being tackled by FRIB, the Facility for Rare Isotope Beams at Michigan State University.
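To get a feel for the numbers, here is a quick relativistic back-of-the-envelope in Python. The ~295 MeV-per-nucleon figure below is back-calculated from the 65%-of-c goal mentioned above, not an official FRIB specification:

```python
import math

# Relativistic speed of an ion from its kinetic energy per nucleon,
# using KE = (gamma - 1) * m * c^2 with m = one atomic mass unit.
AMU_MEV = 931.494   # rest energy of one atomic mass unit, in MeV

def beta_from_ke(ke_mev_per_u: float) -> float:
    """Return v/c for a given kinetic energy per nucleon (in MeV/u)."""
    gamma = 1.0 + ke_mev_per_u / AMU_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Roughly 295 MeV per nucleon, accumulated over many small kicks,
# gets an ion to about 65% of the speed of light.
print(round(beta_from_ke(295.0), 3))   # ~0.65
```

Divide that energy by a 2-5 MV-per-meter gradient times the ion's charge state and you see why the linac needs so many cryomodules in a row.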

In the picture below is a plan of the FRIB linear accelerator ("linac"), currently under construction. Each of the little boxes along the straight sections makes the particles (the nuclei) go faster by a certain amount. These boxes, called cryomodules, are in real life taller than a person and several meters long. They house the true engines of the accelerator: the structures that set up and maintain many extremely strong (2-5 megavolt-per-meter) electric fields that kick the nuclei along.


The best way to set up these electric fields is still very much an area of active development. For FRIB's application, making a number of different kinds of ions go fast, the facility chose pure-niobium superconducting RF (radio frequency) cavities. That's a mouthful, so let's break the words down, a little out of order.

RF: radio frequency, an oscillating electric and magnetic field. It might at first seem counterintuitive to use RF, since "oscillating" means the direction of the fields flips 180 degrees every half period. Physically, if we used RF at 650 megahertz (650 million cycles per second) to create an electric field in one direction, half the time that field would point the other way (backwards). The problem is handled by timing: the particles are arranged to be inside an accelerating gap while its field points forward, and out of it while the field points backward.
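That timing trick fixes the geometry of the cavity: successive gaps must sit about beta*lambda/2 apart, the distance the particle covers during one half RF period, so it always crosses a gap while the field points forward. A quick sketch in Python, reusing the illustrative 650 MHz figure from above (the numbers are for intuition, not an FRIB design value):

```python
# Cell spacing for a linac: the particle must travel from one accelerating
# gap to the next in exactly half an RF period.
C = 299_792_458.0   # speed of light, m/s

def cell_length_m(beta: float, freq_hz: float) -> float:
    """Distance a particle at speed beta*c covers in half an RF period
    (the classic beta*lambda/2 cell spacing)."""
    return beta * C / (2.0 * freq_hz)

# An ion at 65% of c in a 650 MHz structure needs gaps about 15 cm apart.
print(round(cell_length_m(0.65, 650e6), 3))   # ~0.15 m
```

Notice the dependence on beta: slower particles need shorter cells, which is why the cavities near the start of a linac look different from those near the end.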

Cavities: this RF has to live somewhere. Cavities are metal, cylindrical objects with special geometric properties; you can stick an antenna inside one and pump RF into it.

Image copyright FRIB, P. Ostroumov, SRF Group et al.

This is a model of a five-cell prototype cavity for FRIB. The upper half shows the distribution of the magnetic field (strongest in the red regions) and the bottom half shows the distribution of the electric field. The series of kicks is the series of orange-yellow regions: an isotope entering the tube sees five electric fields in succession, each making it go faster. The gaps (blue spaces along the axis of the pipe) are where the particle travels while the field in the orange regions points backwards.
 
Superconducting niobium: the rainbow pictures, again, depict what happens to the electric and magnetic fields when an antenna is put into the cavity and RF is piped in at certain frequencies. The RF currents running along the cavity walls meet electrical resistance, which must be dissipated as heat; this drains power from the RF, so less energy goes into moving the particle forward and more into heating the walls. Ideally, we want a very low-resistance material. Fortunately, niobium is a relatively workable metal and becomes superconducting at about 9.3 degrees above absolute zero. Thus FRIB and many similar facilities engineer their superconducting cavities from either pure niobium or a niobium compound, and cool them inside the cryomodules to superconducting temperatures using liquid nitrogen and liquid helium.

Though the above is only the briefest description, it shows how accelerators demand a variety of specialists: you can earn a PhD in any number of accelerator-related subfields, including cryogenic systems, superconducting RF, particle beam dynamics, diagnostic equipment design and implementation, controls programming...the list goes on and on. The simple-sounding task of accelerating particles, or nuclei, for science is thus, itself, a science.