Monday, October 7, 2013
Dream Souls. Photo By David Hanjani
www.photographyofdavidhanjani.tumblr.com

Thursday, October 4, 2012

Anonymous asked: What are the 9 types of energy

Hey there,

Why don’t you check out this earlier post? It should give you a brief intro to the types of energy, the law of conservation of energy, efficiency, and Sankey diagrams!

Hopefully that will help you out and thank you for the question!

As always, feel free to submit any other inquiries to our ask.

Wednesday, August 8, 2012

Anonymous asked: Hi! I stumbled upon your tumblr, and I'd like to start off by saying how amazing it is, and thank you for making this tumblr! Moreon to my issue, i'm currently studying crude oil in Chemistry. Could you please help me understand"cracking" in terms of crude oil? From what I understand, 'cracking' is the CHEMICAL process of breaking down large molecules into smaller ones. And they 'crack' crude oil to refine it into petroleum; fractional distillation being a PHYSICAL process. More info please?

It sounds like you’re a bit confused between fractional distillation and cracking. It’s true that cracking is a chemical process and fractional distillation is a physical process, but the point is that they are two entirely different processes.

When crude oil is first extracted from the ground, it is made up of a variety of different hydrocarbons (chemical compounds that consist only of carbon and hydrogen), some very short (such as ethane) and some long (such as decane), and it is of little use in this state. Hydrocarbons can be separated into two groups: alkanes and alkenes. An alkane is saturated, meaning it holds as many hydrogen atoms as possible, whereas an alkene is unsaturated and contains a carbon-carbon double bond.

Fractional distillation serves to separate the longer hydrocarbons from the shorter hydrocarbons by their boiling points. This works because the longer the hydrocarbon, the higher the boiling point and viscosity and the lower the flammability.

Fractional distillation takes place as follows:

  1. Crude oil is vapourised and fed into the bottom of the fractionating column.
  2. As the vapour rises up the column, the temperature falls.
  3. Fractions with different boiling points condense at different levels of the column and can be collected.
  4. The fractions with high boiling points (long-chain hydrocarbons) condense and are collected at the bottom of the column.
  5. Fractions with low boiling points (short-chain hydrocarbons) rise to the top of the column, where they condense and are collected.

To see a diagram of the fractional distillation process, click here.

Cracking, on the other hand, breaks long alkanes down into shorter, more useful alkane and alkene molecules. It requires a catalyst (a substance that speeds up a chemical reaction without being consumed by it) and a high temperature. This is done mainly to meet the high industrial demand for the shorter molecules: the alkenes are typically converted into polymers (plastics), while the alkanes are sought after as fuels. Cracking is an example of a thermal decomposition reaction.
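As a concrete example, decane (a long alkane) can be cracked into octane, which is useful in petrol, and ethene, which can be polymerised into poly(ethene):

C₁₀H₂₂ → C₈H₁₈ + C₂H₄

Notice that the carbon and hydrogen atoms balance on both sides, as they must in any cracking equation.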

I hope that helps clear up some of your confusion.

Monday, July 30, 2012
Demons in the History of Science

Part one of two: Laplace’s Demon

Some might say that modern-day physicists have it easy; they can appeal to the public with stories of eleven-dimensional universes, time travel, and a quantum world that is stranger than fiction. But the basis of that appeal remains what the appeal of pursuing science has always been and always will be: a greater understanding of our environment, of ourselves, and of knowledge itself.

Like Schrödinger’s cat, the popular thought experiment of the physicist Erwin Schrödinger, Laplace’s Demon and Maxwell’s Demon are thought experiments that matter for what they reveal about our understanding of the universe. If nothing else, they are worth learning about for the philosophical relevance and beauty that science has always sought to provide.

Jim Al-Khalili, author of Quantum: A Guide for the Perplexed, states that fate as a scientific idea was disproved three-quarters of a century ago, referring, of course, to the discoveries of quantum mechanics. But what does he mean by this? Prior to those discoveries it was still reasonable to argue for a deterministic universe: a world in which one specific input must result in one specific output, so that the sum of all these actions and their consequences would “determine” the overall outcome, or fate, of such a world.

Pierre-Simon Laplace, born on March 23, 1749, was a French mathematician and astronomer whose work largely founded the statistical interpretation of probability now known as Bayesian probability. He lived before Heisenberg’s uncertainty principle and chaos theory, and so he was free to imagine just such a deterministic universe:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

Laplace, A Philosophical Essay on Probabilities

Laplace thought about what it would be like if it were possible to know the positions, masses, and velocities of all the atoms in existence, and hypothesized a being, later known as Laplace’s Demon, which would know all of this information and could thus calculate all future events.

With our current knowledge of physics, in particular the Heisenberg uncertainty principle and chaos theory, such a being could not exist, because the required information about atoms cannot be observed with enough precision to calculate and predict future events. (By the way, “enough” precision means infinite precision!) This might be good news for those who believe in free will, since free will would have no place in a deterministic universe governed by Laplace’s demon.
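As a minimal illustration of the chaos-theory side of this claim, the short Python sketch below iterates the logistic map (a standard toy model of chaotic dynamics, not anything specific to Laplace) from two starting points that differ by only one part in ten billion. Within a few dozen steps the trajectories bear no resemblance to each other, which is exactly why a finite-precision demon loses its predictive power.

# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x).
# Two starting points differing by 1e-10 diverge completely within ~40 iterations.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Return the list of iterates of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.3e}")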

Interestingly enough, the Heisenberg uncertainty principle and chaos theory are not the only restrictive challenges that scientists have faced in trying to understand the properties and bounds of our universe. The Second Law of Thermodynamics is also of concern to scientists and philosophers alike, as we will see with the birth of another mind-boggling demon.

Thursday, July 5, 2012

Two days ago, the CERN team announced that they had found a new particle whose properties are consistent with those of the long sought-after Higgs boson. Whether or not it is the elusive boson, however, is still to be determined by further research. To read more about the event, follow this link to the BBC News article.

If you have no clue what this is about, the above video is a quick and nice introduction to the Higgs Boson submitted by one of our followers, the lovely oh-yeah-and-what. Thanks for the awesome submission!
SIWS loves feedback from followers and we’ll do our best to respond. If you have any questions, ideas, or concerns, feel free to drop us a message, email us at sayitwithscience@gmail.com or like and post on our Facebook page. You can even make a submission post and we might publish it and credit you, like we did with this one!
Take care and happy science-ing! 

Thursday, June 28, 2012
Maximum Entropy Distributions

Entropy is an important topic in many fields; it has well-known uses in statistical mechanics, thermodynamics, and information theory. The classical formula for entropy is S = −Σᵢ pᵢ log pᵢ, where pᵢ is the probability that the system is found in microstate i. But what is this probability distribution? How must the likelihood of states be configured so that we observe the appropriate macrostates?

In accordance with the second law of thermodynamics, we wish for the entropy to be maximized. In the limit of a large number of states we can treat the distribution as a continuous density φ(x) and write the entropy as S[φ] = −∫dx φ ln φ. Here, S is called a functional (which is, essentially, a function that takes another function as its argument). How can we maximize S? We will proceed using the methods of calculus of variations and Lagrange multipliers.

First we introduce three constraints. We require normalization, so that ∫dx φ = 1. This is a condition that any probability distribution must satisfy, so that the total probability over the domain of possible values is unity (since we’re asking for the probability of any possible event occurring). We require symmetry, so that the expected value of x is zero (it is equally likely to be in microstates to the left of the mean as it is to be in microstates to the right — note that this derivation is treating the one-dimensional case for simplicity). Then our constraint is ∫dx x·φ = 0. Finally, we will explicitly declare our variance to be σ², so that ∫dx x²·φ = σ².

Using Lagrange multipliers, we will instead maximize the augmented functional J[φ] = ∫dx (−φ ln φ + λ₀φ + λ₁xφ + λ₂x²φ). Here, the integrand is just the sum of the integrands above, weighted by the Lagrange multipliers λₖ for which we’ll be solving.

Applying the Euler-Lagrange equation and solving for φ gives φ = exp(−1 + λ₀ + λ₁x + λ₂x²). From here, our symmetry condition forces λ₁ = 0, and evaluating the other integral conditions fixes the remaining λ’s so that φ = (1/2πσ²)^½ · exp(−x²/2σ²), which is just the normal (or Gaussian) distribution with mean 0 and variance σ². This remarkable distribution appears in many descriptions of nature, in no small part due to the Central Limit Theorem.
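As a quick numerical sanity check on this result (a sketch of our own, not part of the derivation), the Python snippet below compares the differential entropy −∫ φ ln φ dx of three distributions that all have mean 0 and variance 1: the Gaussian, the uniform, and the Laplace distribution. As the derivation predicts, the Gaussian comes out on top.

import numpy as np

def differential_entropy(pdf, lo, hi, n=200001):
    """Numerically estimate -integral of pdf(x) ln pdf(x) dx over [lo, hi]."""
    x = np.linspace(lo, hi, n)
    p = pdf(x)
    # Where p = 0 the integrand p ln p tends to 0, so substitute 1 inside the log.
    integrand = -p * np.log(np.where(p > 0, p, 1.0))
    return float(np.sum(integrand) * (x[1] - x[0]))

sigma = 1.0

# All three densities below have mean 0 and variance sigma**2 = 1.
gaussian = lambda x: np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
uniform = lambda x: np.where(np.abs(x) <= np.sqrt(3), 1.0 / (2 * np.sqrt(3)), 0.0)
laplace = lambda x: np.exp(-np.abs(x) * np.sqrt(2) / sigma) / (sigma * np.sqrt(2))

for name, pdf in [("gaussian", gaussian), ("uniform", uniform), ("laplace", laplace)]:
    print(name, round(differential_entropy(pdf, -20.0, 20.0), 4))

# Expected: gaussian ≈ 1.4189 (= 0.5 ln(2πe)), laplace ≈ 1.3466, uniform ≈ 1.2425.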

Friday, January 13, 2012
Charge, Parity and Time Reversal (CPT) Symmetry

From our everyday experience, it is easy to conclude that nature obeys the laws of physics with absolute consistency. However, several experiments have revealed cases where these laws are not the same for particles and their antiparticles. A symmetry, in physics, means that the laws remain the same under a certain transformation. Three important discrete symmetries are charge conjugation (C), parity (P), and time reversal (T), and violations of these symmetries cause nature to behave differently under the corresponding transformation. If C symmetry is violated, then the laws of physics are not the same for particles and their antiparticles. P symmetry violation implies that the laws of physics are different for particles and their mirror images (for example, ones spinning in the opposite direction). Violation of T symmetry indicates that if you run time backwards, the laws governing the particles change.

In 1956, two physicists, Tsung-Dao Lee and Chen Ning Yang, suggested that the weak interaction violates P symmetry. This was confirmed shortly afterwards by an experiment, led by Chien-Shiung Wu, in which radioactive cobalt-60 atoms were aligned by a magnetic field so that their nuclear spins all pointed in the same direction; the electrons from their decay came out preferentially in one direction, which would be impossible if parity were conserved. It was also found that the weak force does not obey C symmetry. Oddly enough, the weak force did appear to obey the combined CP symmetry, so the laws of physics would be the same for a particle and the mirror image of its antiparticle.

Surprise, surprise! That conclusion did not hold up. A few years later, in 1964, it was discovered that the weak force actually violates CP symmetry as well. The experiment was conducted by two physicists, James Cronin and Val Fitch, who studied the decay of neutral kaons: mesons composed of a down quark and a strange antiquark (or the corresponding antiquark and quark). Neutral kaons come in two ‘species’ with nearly identical masses but very different lifetimes: the long-lived kaons decay into three pions, while the short-lived kaons decay into only two pions. Cronin and Fitch used a 57-foot beamline, long enough that only the long-lived kaons should have survived to the end of the beam tube (the short-lived ones decay away well before that, even allowing for relativistic time dilation). To their astonishment, about one out of every 500 decays observed at the end of the beamline was into two pions, the mode forbidden by CP symmetry. The experiment thus showed that the weak force causes a small CP violation that can be seen in kaon decay.

(Source: aps.org)

Thursday, December 22, 2011
Refraction

Light waves are part of the electromagnetic (EM) spectrum. When light moves through an optical medium (e.g. air, glass, etc.), the E field of the wave drives the electrons within the medium into oscillation; the waves these electrons re-radiate superpose with the original wave, and the net effect is that the light travels through the medium more slowly. Its speed is always less than the speed of light in a vacuum (v < c). Materials are characterized by how much they slow down (and hence bend) light, which is quantified by the refractive index (n).

n = c / v = (speed of light in a vacuum) / (speed of light in the medium)

n = 1 in a vacuum
n > 1 in all other media
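As a quick worked example: in glass with n ≈ 1.5, light travels at v = c/n ≈ (3.0 × 10⁸ m/s) / 1.5 ≈ 2.0 × 10⁸ m/s.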

Refraction itself occurs when light passes across an interface between two media with different refractive indices. As a general rule (which can be derived from Snell’s law, below), light refracts towards the normal when passing into a medium with a higher refractive index, and away from the normal when passing into a medium with a lower refractive index.

Snell’s Law:

n₁sinα = n₂sinβ

where n₁ and n₂ are the refractive indices of the first and second media, α is the angle of incidence, and β is the angle of refraction (both measured from the normal)
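As a minimal sketch of how Snell’s law is applied (the indices below are just typical round values for air and glass), the Python snippet computes the refraction angle for a given pair of media and reports total internal reflection when sin β would exceed 1.

import math

def refraction_angle(n1, n2, alpha_deg):
    """Return the refraction angle in degrees, or None if total internal reflection occurs."""
    # Snell's law: n1 sin(alpha) = n2 sin(beta)
    sin_beta = n1 * math.sin(math.radians(alpha_deg)) / n2
    if abs(sin_beta) > 1.0:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(sin_beta))

# Air (n ≈ 1.00) into glass (n ≈ 1.50): the ray bends towards the normal.
print(refraction_angle(1.00, 1.50, 45.0))   # ≈ 28.1 degrees

# Glass into air at a steep angle: total internal reflection.
print(refraction_angle(1.50, 1.00, 60.0))   # None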

Reflection

One of the properties of a boundary between optical media is that some of the light that’s approaching the interface at the angle of incidence (α) is reflected back into the first medium, while the rest continues on into the second medium at the angle of refraction (β).

Angle of incidence = angle of reflection

Saturday, December 17, 2011
The Hamilton-Jacobi Equation

This blog has posted more than a few times in the past about classical mechanics. Luckily, classical mechanics can be approached in several ways. This approach, which uses the Hamilton-Jacobi equation (HJE), is one of the most elegant and powerful methods.
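For reference, the equation in question, the time-dependent Hamilton-Jacobi equation, can be written in its standard form as

H(q, ∂S/∂q, t) + ∂S/∂t = 0,

where S = S(q, t) is Hamilton’s principal function. For a single particle with H = p²/2m + V(q), for instance, this becomes (1/2m)(∂S/∂q)² + V(q) + ∂S/∂t = 0.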

Why is the HJE so powerful? Consider a dynamical system with a Hamiltonian H=H(q,p,t). Suppose we knew of a canonical transformation (CT) that generated a new Hamiltonian K=K(Q,P,t) which (on a local chart of phase space) vanishes identically. Then the canonical equations would imply that the transformed coordinates (Q,P) are constant in this region. How easy it would be to solve a system where you know that most of the important quantities are constant!

The rub is in finding such a canonical transformation. Sometimes it can’t even be done analytically, but nevertheless this is the goal of the Hamilton-Jacobi method of solving mechanical systems. In the equation given above, S is the generating function of the CT. Coincidentally, it often comes out to just equal the classical action up to an additive constant! This is due to the connection between canonical transformations and mechanical gauge transformations; it turns out that the additive function used to define the latter is the generating function of the former. In general the HJE is a partial differential equation that might be solvable by additive separation of variables… but don’t get too hopeful! Oftentimes the value of the HJE comes not in finding the actual equations of motion but in revealing symmetry and conservation properties of the system.

Monday, November 14, 2011
Variable Star Astronomy

Variable stars are stars whose brightness changes because of physical changes within the star. More than 30,000 variable stars are known in the Milky Way alone. Variable star astronomy is a popular part of astronomy because amateur astronomers play a key role: they have submitted thousands of observations, and these data are logged in a database. American readers can find more information on the American Association of Variable Star Observers page.

One such class of variable stars is the Cepheids. Cepheids are pulsating variable stars: they undergo a “repetitive expansion and contraction of their outer layers” [1]. In Cepheids, the star’s period of variation (about 1-70 days) is related to its luminosity: the longer the period, the higher the luminosity. In fact, when graphed (luminosity against the logarithm of the period), the relationship is close to a straight line (as can be seen in the title image). Henrietta Swan Leavitt, an American astronomer, first discovered this relationship and understood its significance. Combined with a measurement of the star’s apparent magnitude (a previously written post on this subject can be found here), astronomers can use this information to find a star’s distance from Earth. Cepheids are famous for their usefulness in finding distances to far-away galaxies and other deep-sky objects. Leavitt died of cancer before Professor Mittag-Leffler of the Swedish Academy of Sciences could nominate her for the Nobel Prize in Physics.

Edwin Hubble used Leavitt’s discovery to prove that the Andromeda Galaxy (M31) is not part of the Milky Way and to estimate its distance (modern measurements put it at roughly 2.5 million light years). His first calculation came out too small (about 900,000 light years) because the two types of Cepheids had not yet been distinguished: the Cepheids he observed were Type I (classical) Cepheids, which are brighter, younger Population I stars, whereas Type II Cepheids (also called W Virginis stars) are smaller, dimmer Population II stars. Once the two types were properly separated, more accurate distances could be calculated.

To determine the star’s distance, use the inverse square law of light brightness. 
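Here is a rough sketch of that calculation in Python (the period-luminosity coefficients below are illustrative round numbers, not a precise calibration): it turns a Cepheid’s period into an absolute magnitude, then uses the distance-modulus form of the inverse square law to get a distance.

import math

def cepheid_distance_parsecs(period_days, apparent_magnitude):
    """Estimate the distance to a classical Cepheid from its period and apparent magnitude."""
    # Illustrative period-luminosity relation (absolute visual magnitude vs. log period);
    # real calibrations differ slightly and carry uncertainties.
    absolute_magnitude = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5*log10(d) - 5  =>  d = 10**((m - M + 5) / 5) parsecs.
    return 10 ** ((apparent_magnitude - absolute_magnitude + 5.0) / 5.0)

# A hypothetical Cepheid with a 10-day period that appears at magnitude 20:
d_pc = cepheid_distance_parsecs(10.0, 20.0)
print(f"distance ≈ {d_pc:.3g} pc ≈ {d_pc * 3.26:.3g} light years")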

A similar type of star is the RR Lyrae variable. RR Lyrae stars are smaller than Cepheids and have a much shorter period (from a few hours to about a day), but they are far more common, and they can likewise be used to find distances. Low-mass stars live longer, so Cepheids, being more massive, are generally younger stars.

Both Cepheids and RR Lyrae Variable stars are referred to as standard candles: objects with known luminosity. If you’ve ever wondered how astronomers came to those enormous figures when describing how far away galaxies and stars are from us, you can now better understand why and how.