Monday, March 18, 2019

Color-changing octopuses: How do they do it??

Christina Forbes is a postdoctoral researcher in the School of Molecular Sciences at Arizona State University. She specializes in organic chemistry and biochemistry.

Octopuses are incredibly fascinating creatures: intelligent and clever, with a strange organ configuration (to me at least), and some really exotic forms of sensory perception.

Octopuses are part of the cephalopod class, along with their cousins the squid, cuttlefish, and nautiluses. Fossil impressions of octopuses show evidence that these creatures have been around for more than 200 million years, so they have had plenty of time to evolve some really unique adaptations!


No octopus to see here!
(Flickr, 808_Diver)
Some species of octopus have incredible abilities that make them masters of camouflage, where their skin can change color and texture. Check out this short TEDtalk if you want to see some examples of this amazing ability! So how does an octopus do this?

The skin of an octopus has several layers of cells that work together during color changes. One layer contains chromatophores, tiny fluid-filled sacs of colored pigment. When the muscles around a sac squeeze it shut, the pigment is hidden; when the sac expands, the color shows through. Opening and closing these different sacs of color gives an octopus a wide range of skin colors (a little like a digital pixel display). Another layer contains iridophores, which reflect surrounding light, especially greens and blues. The base layer contains leucophores, which act like a white background for all the other colors. Some octopuses also have photophores, which let them produce their own light (bioluminescence).
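To make the "digital pixel display" analogy concrete, here's a tiny toy model (in Python): each chromatophore acts like one pixel whose expansion blends its pigment over the white leucophore background. Everything here, names and numbers alike, is illustrative, not real biology.

```python
# Toy model: one chromatophore as a "pixel" over the white leucophore layer.
# expansion = 0.0 means the sac is squeezed shut (pigment hidden);
# expansion = 1.0 means the sac is fully open (pigment fully visible).

def displayed_color(pigment, expansion, background=(255, 255, 255)):
    """Blend an RGB pigment over the white background by how open the sac is."""
    return tuple(
        round(expansion * p + (1 - expansion) * b)
        for p, b in zip(pigment, background)
    )

brown = (139, 69, 19)
print(displayed_color(brown, 0.0))  # (255, 255, 255) -- sac shut, white skin
print(displayed_color(brown, 1.0))  # (139, 69, 19)   -- sac open, full brown
print(displayed_color(brown, 0.5))  # (197, 162, 137) -- halfway, lighter brown
```

An octopus doing this with thousands of sacs in parallel, each at its own expansion level, is effectively repainting its whole "screen" many times a second.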


A close-up of squid skin, where the wide-open chromatophores reveal brownish colors
(Creative Commons, via wikipedia)

A deeper question (as my neighbor asked me): does an octopus think about it and "try" to look like its surroundings, or does it change color and texture innately, without thinking about it?

Without being able to directly ask an octopus whether its camouflaging ability is conscious or subconscious, some experts have tried to study this. One study found that an octopus tends to choose a particular object to imitate (a rock or a piece of coral) rather than its entire surroundings, suggesting there is some thought behind camouflage, and that at least some octopuses use their vision to match their surroundings.

On the other hand, in addition to seeing light with their eyes, octopuses may perceive light or color with their skin as well. Octopus skin cells contain opsin proteins, special sensors that allow the skin to detect light and color. That's not to say that octopuses can "see" with their skin, but their skin may pick up enough information about their surroundings for them to change shape and color to mimic their environment without involving their brains.
 
So maybe camouflaging is a little more innate than conscious!

So can octopus skin change color without the octopus thinking about it? Actually, yes, sort of: skin that has been removed from an octopus (ouch!) can change color on its own, but it's much slower than usual, suggesting that neural input from the brain assists in color changing and camouflaging.

As if all this wasn't crazy enough, some species of octopus are color-blind. Or are they? A recent paper suggested that while an octopus might have only one type of photoreceptor (and so should only see in black and white), its unique U-shaped pupil could allow it to interpret color through chromatic aberration. However, this is only one possible model, and it remains to be proven.

If only we could just ask an octopus.


Octopuses have been found to build and carry shelter materials. They can also open doors and screw-topped containers. Don't ever underestimate an octopus's abilities.
(Creative commons, via pxhere)

Other awesome references:

https://www.nationalgeographic.com/science/phenomena/2014/08/18/adaptive-colour-changing-sheet-inspired-by-octopus-skin/

https://ocean.si.edu/ocean-life/invertebrates/how-octopuses-and-squids-change-color

https://www.nature.com/scitable/topicpage/cephalopod-camouflage-cells-and-organs-of-the-144048968

Thursday, March 14, 2019

IT'S PI DAY!!!

Pi is the famously irrational number that represents the ratio of a circle's circumference to its diameter. It is often approximated by the fraction 22/7, but it's far more commonly represented as the literally endless string of digits: 3.14159.... To date, Pi has been calculated out to 31.4 trillion digits. The digits go on forever without ever settling into a repeating pattern, but we can still calculate as many of them as we like. Frankly, I'm glad computers can do that for us.
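In case you're curious how a computer grinds out those digits, here is a short sketch in Python using Machin's 1706 formula, pi = 16·arctan(1/5) − 4·arctan(1/239), with the standard-library decimal module doing the high-precision arithmetic (the 60-digit precision is just an arbitrary choice for illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # work with 60 significant digits

def arctan_inv(x):
    """arctan(1/x) via its alternating Taylor series, for an integer x > 1."""
    power = Decimal(1) / x                     # current power: 1/x^(2k+1)
    total = Decimal(0)
    eps = Decimal(10) ** -getcontext().prec    # stop once terms are negligible
    k = 0
    while power > eps:
        term = power / (2 * k + 1)
        total += term if k % 2 == 0 else -term  # alternating signs
        power /= x * x
        k += 1
    return total

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
machin_pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
print(str(machin_pi)[:17])  # prints 3.141592653589793
```

Record-setting computations use much fancier formulas, but the principle is the same: a series you can push as far as your patience (and hardware) allows.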

Pi as a concept is ancient, though its estimated value has varied over time. The ancient Babylonians approximated the value as 3, and one surviving tablet puts it at 3.125. Ancient Egypt had a slightly different value for pi, about 3.16 (more precisely, 256/81). This value comes from an ancient papyrus scroll called the Rhind Mathematical Papyrus (c. 1650 BCE).

The Egyptian Rhind Mathematical Papyrus.
Photo credit: The BBC
The famous Greek mathematician Archimedes of Syracuse (287–212 BCE) even tried his hand at calculating pi, but was only able to place limits on its value rather than calculate an exact number. He bounded it between 3 10/71 and 3 1/7, or between 3.140845... and 3.142857.... He wasn't wrong, but he wasn't as precise as later mathematicians were. About 700 years later, the Chinese astronomer and mathematician Zu Chongzhi calculated the ratio between circumference and diameter to be 355/113, or 3.1415929.... The details of his work are unknown because his book has been lost to time, but historians believe it was computationally complex, "involving hundreds of square roots carried out to 9 decimal places."
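For fun, here's how those historical estimates stack up against the value of pi that Python ships with (the fraction labels just restate the numbers from the text):

```python
import math

# Historical approximations of pi, compared against Python's math.pi.
approximations = {
    "Babylonian tablet (25/8)": 3.125,
    "Rhind Papyrus (256/81)": 256 / 81,
    "Archimedes' lower bound (3 10/71)": 3 + 10 / 71,
    "Archimedes' upper bound (3 1/7)": 3 + 1 / 7,
    "Zu Chongzhi (355/113)": 355 / 113,
}
for name, value in approximations.items():
    print(f"{name}: {value:.7f}  (off by {abs(value - math.pi):.7f})")
```

Zu Chongzhi's 355/113 is remarkable: it's off by less than three ten-millionths, and no better fraction exists with a denominator under four digits.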

It wasn't until 1706 that the Greek letter "pi" was used to represent this intractable but necessary number. While initially introduced by William Jones, mathematician, Fellow of the Royal Society, and friend of Sir Isaac Newton, it was Swiss mathematician Leonhard Euler who popularized it in 1737.
Leonhard Euler, painted by Jakob Emanuel
Handmann, 1753.

In honor of the number's first three digits, 3.14, we celebrate PI DAY on the fourteenth day of every March (3/14 in the MM/DD format). It is by far the most glorious of all March holidays (in the opinion of this author, at least). I'm far from the only one who thinks this, though! In true mathematical style, MIT releases the decisions of its undergraduate freshman class on this date every year, and the emails are sent out at precisely 1:59 PM Eastern time. In 2009, a non-binding resolution in the House of Representatives declared March 14, 2009 National Pi Day in an effort to engage students in mathematics. Also, it's an excuse to bring pie into work and share it with your coworkers. What could be more fun?

Honestly, the answer to that question could be having participated in the very first Pi Day celebrations. Pi Day as a fun holiday isn't an old idea; the first known large-scale celebration of the arithmetical anomaly was in 1988 at the Exploratorium in San Francisco. Dr. Larry Shaw was a physicist who worked there, and it seems he used the day as an excuse to get the staff and public together in the building to eat pie. After walking around one of the museum's circular spaces, everyone stopped and began to devour the many fruit pies around the room. The Exploratorium still holds Pi Day celebrations to this day, though they are more extensive and engaging now, with games, talks, activities, and even a band! Fortunately, pie eating is still involved.

Pi actually has a rival circle constant. Tau is the ratio of a circle's circumference to its radius rather than its diameter. Effectively, Tau is just 2*Pi, or 6.283185.... Tau also has a holiday, celebrated on June 28th (6/28), but it is less well-known, probably because it doesn't have delicious baked goods associated with it. Supporters of Tau argue that it makes angles in radians more intuitive (one full turn is exactly Tau radians), and that trigonometric functions would have a period of Tau instead of 2 Pi, which is easier to understand conceptually. Pi is winning the popularity contest for now, but that could change in the future.
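Tau even made it into Python's standard library, so you can check the relationship yourself:

```python
import math

# Python ships both constants; tau is exactly twice pi.
print(math.tau)                 # 6.283185307179586
print(math.tau == 2 * math.pi)  # True

# One full turn is tau radians, so sine repeats every tau:
print(math.isclose(math.sin(1.0), math.sin(1.0 + math.tau)))  # True
```

Whether you write the period of sine as Tau or as 2 Pi, the math is the same; the argument is purely about which is easier to think with.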

Regardless, Pi Day is a fun time to celebrate the math in your life and learn about the history of the number we all know by name.
I brought an apple pie into work today to celebrate
Pi Day. It was delicious!

Monday, March 11, 2019

Accelerating Your Particles: So Where Does My Beam Come From Anyway?

This week's guest blogger is Kellen McGee, 1st year graduate student in Physics (specializing in nuclear and accelerator physics) at the National Superconducting Cyclotron Laboratory/Facility for Rare Isotope Beams at Michigan State University. 

Big particle accelerators and colliders have long been some of the most visible physics experiments. Many of us going through graduate school these days remember when the Large Hadron Collider (LHC), the 27-km circular particle collider at CERN in Geneva, Switzerland, turned on. This event brought great excitement over what the new experiment was about to uncover (notably the much-anticipated detection of the Higgs boson), and also anxiety about whether or not it would accidentally create a black hole. This is but the latest example of how various particle acceleration technologies have, since the turn of the last century, been almost entirely responsible for us having figured out and validated as much as we have about the laws of the physical world at the subatomic level.


Physicists, both theorists and experimentalists, drove the hunt for the particles and laws of the Standard Model: physicists' current, and beautifully validated, understanding of the fundamental particles and three of the four fundamental forces. These forces are the weak, the strong, and the electromagnetic forces. Gravity is, alas, not included; if you'd like to try to incorporate gravity into the Standard Model, there's at least a PhD in it for you, if not a Nobel Prize. The particles of the Standard Model are all the known fundamental particles that can't be made up of each other: the various flavors of quarks and leptons (including the neutrinos), plus force carriers like the gluon, which together make up all other particles we know of.

However, quite apart from knowing the physics that tells you about the particles, how they behave, and how they are identified in detectors, you also need physicists who know how to speed the particles up to the energies needed to collide or smash them into detectors to learn about their insides, be they protons, electrons, neutrons, or even whole atomic nuclei.

How does this work?

The history of particle-accelerating devices is a long and interesting one. You might not know that you have a few particle accelerators in your very own house. If you have an older-model television or computer, for example (the kind that isn't flat), the screen is illuminated by something called a cathode-ray tube (CRT). This is essentially an electron accelerator, using a voltage source to accelerate electrons that then zoom off and hit a phosphorescent screen, causing it to glow wherever the electrons land. This is an example of the simplest type of what's called an "electrostatic" accelerator: two plates, one negatively charged and the other positively charged, cause electrons to fly off of one, accelerate in the electric field, and land on the other plate.

This type of accelerator works because electrons are charged particles, and will get pushed ("kicked," in accelerator jargon) by an electric field. Now, it's cool to zoom electrons around, but there's only so much you can do with them because they're light, and to get them going fast enough to smash into things and expect any interesting particles to be created (or other interesting physics effects), you have to give them more energy than we really know how to (really long accelerators being really expensive and energy-hungry). Fortunately for us, other particles have charge too. Protons, for example, have positive charge, and were smashed into each other at the LHC in its famous Higgs experiments.

While there has historically been fame in colliders and particle accelerators aiming for higher and higher energies, nuclear physicists have been turning to accelerator experiments as well. These are people interested in the structure and behavior of nuclei: their interrelationships in the periodic table of elements, their relative stabilities (how easily their clusters of protons and neutrons fall apart), and other properties. Accelerators, either linear (linacs) or circular (cyclotrons), are also used to create medical isotopes for cancer treatment. Your local hospital might just have one of these in its basement, and be looking for newer and more efficient ways of making these medically critical materials.

Nuclei that are missing electrons have a net positive charge, and thus can be kicked along by an electric field just like the electrons in a CRT or the protons at the LHC. By accelerating whole nuclei and smashing them into carefully engineered targets, nuclear physicists can start addressing some of the questions above. However, once you get beyond electrons at slow speeds, acceleration becomes much harder than just setting up two plates, one at positive and one at negative voltage. The only way to make the electrons go faster in that setup would be to increase the voltage, which will always, eventually, lead to an electric discharge (a boring name for a shock!) before you get your electrons up to interesting speeds.

If you can't make one "kick" really strong, your next option is to line up a series of kicks that each increase the speed of the charged particle by a certain amount. Imagine being a whitewater rafter going down a series of waterfalls: this is similar to what happens when a charged particle travels across a series of kicks. It gains more and more energy with each one, regardless of how fast it's already going (before relativity starts kicking in, of course!).
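A toy version of that "series of kicks" can be written in a few lines: each accelerating gap adds the same amount of energy (charge times effective gap voltage), no matter how fast the particle already is. The numbers below are made up for illustration, not FRIB specifications.

```python
# Toy "series of kicks": each gap adds charge_state * gap_voltage of energy.
# Working in megavolts, the energy gain comes out directly in MeV.

def energy_after_kicks(charge_state, gap_voltage_megavolts, n_gaps):
    """Total energy gained, in MeV, after n identical accelerating gaps."""
    return charge_state * gap_voltage_megavolts * n_gaps

# e.g. a hypothetical ion stripped to charge +20, crossing 100 gaps of 1 MV each:
print(energy_after_kicks(20, 1.0, 100))  # 2000.0 MeV gained
```

This additive picture is why "lining up enough kicks" works: energy gains stack, even once the particle is too fast for any single gap to speed it up much.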

We want to build a nuclear science accelerator that accelerates ions to interesting energies, around 65% of the speed of light. (The LHC, a high-energy physics machine, aims for 99.999...% of the speed of light, for comparison.) To do this, we have to figure out how to line up enough kicks to accelerate the particles to the speed we want to study. This is exactly the problem currently being tackled by FRIB, the Facility for Rare Isotope Beams at Michigan State University.
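To get a feel for what "65% of the speed of light" means in energy terms, here's a quick back-of-the-envelope calculation using the relativistic kinetic energy formula KE = (gamma − 1)·mc², with the rest energy of one atomic mass unit (about 931.494 MeV). This is a generic physics estimate, not a statement of FRIB's actual beam specifications.

```python
import math

AMU_REST_ENERGY_MEV = 931.494  # rest energy of one atomic mass unit, in MeV

def kinetic_energy_per_nucleon(beta):
    """Relativistic kinetic energy in MeV per nucleon at speed beta * c."""
    gamma = 1 / math.sqrt(1 - beta**2)
    return (gamma - 1) * AMU_REST_ENERGY_MEV

print(f"{kinetic_energy_per_nucleon(0.65):.0f} MeV per nucleon")    # ~294
print(f"{kinetic_energy_per_nucleon(0.99999):.0f} MeV per nucleon") # vastly more
```

Note how steep the climb gets: going from 65% to 99.999% of light speed multiplies the energy per nucleon by roughly a factor of 700, which is part of why high-energy colliders are so much bigger and more expensive.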

In the picture below, you see a plan of the FRIB linear accelerator ("linac") under construction. Each of the little boxes along the straight sections makes the particles (the nuclei) go faster by a certain amount. The little boxes are called cryomodules, and in real life they are taller than a person and several meters long. These house the true engines of the accelerator: the structures that allow us to set up and maintain many extremely strong (2-5 megavolt-per-meter) electric fields that kick the nuclei along.


The best way to set up these electric fields is still very much an open area of development. For its application (making a number of different kinds of ions go fast), FRIB decided to use pure-niobium superconducting RF (radio frequency) cavities. This is a mouthful, so let's break those words down, a little out of order.

RF: radio frequency, an oscillating electric and magnetic field. It might at first seem counterintuitive to use RF, since "oscillating" means the direction the fields point reverses every half period. Physically, this means that if we used RF at 650 megahertz (650 million cycles per second) to make an electric field in one direction, half the time the electric field would point in the other direction (backwards). This problem is handled by timing the particles so that they are in the electric field when it points forwards and out of it when it points backwards.
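One common way that timing is arranged in multi-cell cavities is to make each accelerating cell (beta × wavelength / 2) long, so a particle moving at beta·c crosses one cell in exactly half an RF period, arriving at the next cell just as the field flips back to "forwards." The quick calculation below uses the 650 MHz figure from the text and the 65%-of-light-speed target as example inputs; it's a textbook-style estimate, not a specific FRIB cavity design.

```python
# Cell length for a "pi-mode" style cavity: L = beta * wavelength / 2, so the
# particle spends exactly half an RF period in each cell.

C = 299_792_458  # speed of light, m/s

def cell_length(beta, frequency_hz):
    """Accelerating cell length in meters for speed beta*c and RF frequency."""
    wavelength = C / frequency_hz
    return beta * wavelength / 2

length = cell_length(0.65, 650e6)
print(f"cell length: {length * 100:.1f} cm")  # ~15 cm per cell
```

This is also why cavity geometry depends on particle speed: a slower beam needs shorter cells to stay in step with the same RF frequency.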

Cavities: this RF has to live somewhere. Cavities are metal, cylindrical objects with special geometric properties that you can stick an antenna inside of and pump full of RF.

Image copyright FRIB, P. Ostroumov, SRF Group, et al.

This is a model of a five-cell prototype cavity for FRIB. The upper half shows the distribution of the magnetic field (strongest in the red regions) and the bottom half shows the distribution of the electric field. The series of kicks is the series of orange-yellow regions: the isotope we want to accelerate enters the tube and sees five electric fields in succession, each making it go faster. The gaps (blue spaces along the axis of the pipe) are where the particle travels during the time the electric field in the orange regions points backwards.
 
Superconducting niobium: the rainbow pictures, again, depict what happens to the electric and magnetic fields when an antenna is put into the cavity and RF is piped in at certain frequencies. The RF currents flowing along the walls of the cavity encounter resistance, which has to be dissipated as heat and also constitutes a power drain: less energy goes into moving the particle forward, and more goes into heating the cavity walls. Ideally, we want a very low-resistance material. Fortunately, niobium is a relatively workable metal and becomes superconducting at about 9.3 kelvin above absolute zero. Thus FRIB and many similar facilities have chosen to engineer their superconducting cavities from either pure niobium or a niobium compound, and to cool them inside the cryomodules to superconducting temperatures using liquid nitrogen and liquid helium.

Though the above is only the briefest description, it shows how accelerators demand a variety of specialists: you can receive a PhD in any number of accelerator-related subfields, including cryogenic systems, superconducting RF, particle beam dynamics, diagnostic equipment design and implementation, controls programming...the list goes on and on. The simple task of accelerating particles, or nuclei, for science is thus, itself, a science.