Monday, July 30, 2012
Demons in the History of Science
Part one of two: Laplace’s Demon
Some might say that modern-day physicists have it easy: they can appeal to the public with stories of eleven-dimensional universes, time travel, and a quantum world stranger than fiction. But the basis of that appeal remains what it always has been and always will be: a greater understanding of our environment, ourselves, and knowledge itself.
Like Schrödinger’s cat, the famous thought experiment of physicist Erwin Schrödinger, Laplace’s Demon and Maxwell’s Demon are thought experiments that matter for what they reveal about our understanding of the universe. If nothing else, learning about them reinforces the philosophical relevance and beauty that science has always sought to provide.
Jim Al-Khalili, author of Quantum: A Guide for the Perplexed, asserts that fate as a scientific idea was disproved three-quarters of a century ago, referring, of course, to the discoveries of quantum mechanics. But what does he mean by this? Prior to those discoveries, it was still tenable to argue for a deterministic universe: a world in which one specific input must result in one specific output, so that the sum of all these actions and their consequences would “determine” the overall outcome, or fate, of such a world.
Pierre-Simon Laplace, born on March 23, 1749, was a French mathematician and astronomer whose work largely founded the statistical interpretation of probability known as Bayesian probability. He lived in a world before Heisenberg’s uncertainty principle and chaos theory, and so he was free to imagine such a deterministic universe:

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Laplace, A Philosophical Essay on Probabilities

Laplace considered what it would mean if it were possible to know the positions, masses, and velocities of all the atoms in existence, and hypothesized a being, later known as Laplace’s Demon, that would know all such information and could thus calculate all future events.
Given our current knowledge of physics, in particular the Heisenberg uncertainty principle and chaos theory, such a being could not exist, because information about atoms cannot be observed with enough precision to calculate and predict future events. (By the way, “enough” precision means infinite precision!) This might be good news for those who believe in free will, since free will would have no place in a deterministic universe governed by Laplace’s Demon.
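Chaos theory’s point is easy to see in action: even in a perfectly deterministic system, any finite error in the initial conditions grows until prediction fails. Here is a minimal sketch (our own illustration, using the logistic map as a standard toy model of chaos; the starting values are arbitrary):

# Sensitive dependence on initial conditions: two deterministic
# trajectories of the logistic map, started 1e-12 apart, disagree
# completely within about 50 steps.
r = 4.0                      # fully chaotic regime of the logistic map
x, y = 0.3, 0.3 + 1e-12      # two nearly identical starting points
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))   # the gap grows by orders of magnitude

Every step is fully determined, yet no finite-precision measurement of the starting point lets you predict the long-run behavior.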
Interestingly enough, the Heisenberg uncertainty principle and chaos theory are not the only restrictive challenges scientists have faced in trying to understand the properties and bounds of our universe. The Second Law of Thermodynamics also concerns scientists and philosophers alike, as we will learn with the birth of another mind-boggling demon.

Thursday, July 5, 2012

Two days ago, the CERN team announced that they had found a new particle whose properties are consistent with those of the long sought-after Higgs boson. Whether or not it is the elusive boson, however, is still to be determined by further research. To read more about the event, follow this link to the BBC news article.

If you have no clue what this is about, the above video is a nice, quick introduction to the Higgs boson, submitted by one of our followers, the lovely oh-yeah-and-what. Thanks for the awesome submission!
SIWS loves feedback from followers, and we’ll do our best to respond. If you have any questions, ideas, or concerns, feel free to drop us a message, email us at sayitwithscience@gmail.com, or like and post on our Facebook page. You can even make a submission post and we might publish it and credit you, like we did with this one!
Take care and happy science-ing! 

Friday, January 13, 2012
Charge, Parity and Time Reversal (CPT) Symmetry
From our everyday experience, it is easy to conclude that nature obeys the laws of physics with absolute consistency. However, several experiments have revealed cases where these laws are not the same for all particles and their antiparticles. A symmetry, in physics, means that the laws remain the same under a certain transformation. There are three such fundamental symmetries: charge conjugation (C), parity (P), and time reversal (T). Violations of these symmetries cause nature to behave differently. If C symmetry is violated, then the laws of physics are not the same for particles and their antiparticles. P symmetry violation implies that the laws of physics differ for particles and their mirror images (those that spin in the opposite direction). Violation of T symmetry indicates that if you run time backwards, the laws governing the particles change.
Two physicists, Tsung-Dao Lee and Chen Ning Yang, suggested that the weak interaction violates P symmetry. This was confirmed by an experiment in which radioactive cobalt-60 atoms were aligned by a magnetic field to ensure that they were all spinning in the same direction. It was also found that the weak force does not obey C symmetry. Oddly enough, the weak force did appear to obey the combined CP symmetry: the laws of physics would be the same for a particle and its antiparticle with opposite spin.
Surprise, surprise! That conclusion turned out to have a flaw. A few years later, it was discovered that the weak force actually violates CP symmetry as well. The decisive experiment was conducted by two physicists, Cronin and Fitch, who studied the decay of neutral kaons: mesons composed of a down quark (or antiquark) and a strange antiquark (or quark). These particles have two decay modes, one much faster than the other, even though the particles have identical masses. The longer-lived kaons decay into three pions (denoted π⁰), while the shorter-lived ‘species’ decays into only two pions. Cronin and Fitch used a 57-foot beamline and expected to see only the slowly decaying particles at the end of the beam tube: even traveling at relativistic speeds, with the time dilation that entails, the short-lived kaons should all have decayed long before reaching the end. To their astonishment, one out of every 500 decays at the end of the tube was from the short-lived species. The experiment thus showed that the weak force causes a small CP violation, visible in kaon decay.

(Source: aps.org)

Thursday, December 22, 2011
Refraction
Light waves are part of the electromagnetic (EM) spectrum. When moving through an optical medium (air, glass, etc.), the E field of the wave excites the electrons within the medium, causing them to oscillate; as a result, the light wave slows down slightly, losing some of its kinetic energy. Its new speed is always less than the speed of light in a vacuum (v < c). Materials are characterized by their ability to bend as well as slow down light, which is quantified by the refractive index (n).
n = c / v = (speed of light in a vacuum) / (speed of light in the medium)

n = 1 in a vacuum
n > 1 in all other media
Refraction itself occurs when light passes across an interface between two media with different indices of refraction. As a general rule (which can be derived from Snell’s law below), light refracts toward the normal when passing into a medium with a higher refractive index, and away from the normal when moving into a medium with a lower refractive index.
Snell’s Law:
n₁sinα = n₂sinβ
where n₁ and n₂ are the refractive indices of the first and second media, and α and β are the angles of incidence and refraction, measured from the normal.
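To see the rule in numbers, here is a minimal sketch (the function name and the sample indices are our own, chosen for illustration):

import math

def refraction_angle(n1, n2, alpha_deg):
    # Solve n1*sin(alpha) = n2*sin(beta) for beta, in degrees.
    s = n1 * math.sin(math.radians(alpha_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection: no refracted ray exists
    return math.degrees(math.asin(s))

# Air (n = 1.00) into glass (n = 1.50): the ray bends toward the normal.
print(refraction_angle(1.00, 1.50, 30.0))  # ~19.5 degrees
# Glass into air: the ray bends away from the normal.
print(refraction_angle(1.50, 1.00, 30.0))  # ~48.6 degrees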
Reflection
One of the properties of a boundary between optical media is that some of the light approaching the interface at the angle of incidence (α) is reflected back into the first medium, while the rest continues into the second medium at the angle of refraction (β).
Angle of incidence = angle of reflection

Saturday, December 17, 2011
The Hamilton-Jacobi Equation

This blog has posted more than a few times in the past about classical mechanics. Luckily, classical mechanics can be approached in several ways. This approach, which uses the Hamilton-Jacobi equation (HJE), is one of the most elegant and powerful methods.

Why is the HJE so powerful? Consider a dynamical system with a Hamiltonian H=H(q,p,t). Suppose we knew of a canonical transformation (CT) that generated a new Hamiltonian K=K(Q,P,t) which (for a local chart on phase space) vanishes identically. Then the canonical equations would give that the transformed coordinates (Q,P) are constant in this region. How easy it would be to solve a system where you know that most of the important quantities are constant!
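For reference, the HJE for a generating function S(q, t) reads:

∂S/∂t + H(q, ∂S/∂q, t) = 0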

The rub is in finding such a canonical transformation. Sometimes it can’t even be done analytically, but nevertheless this is the goal of the Hamilton-Jacobi method of solving mechanical systems. In the equation given above, S is the generating function of the CT. Coincidentally, it often comes out to equal the classical action up to an additive constant! This is due to the connection between canonical transformations and mechanical gauge transformations; it turns out that the additive function used to define the latter is the generating function of the former. In general the HJE is a partial differential equation that might be solvable by additive separation of variables… but don’t get too hopeful! Oftentimes the value of the HJE comes not in finding the actual equations of motion but in revealing symmetry and conservation properties of the system.

Monday, November 14, 2011
Variable Star Astronomy
Variable stars are stars whose brightness changes because of physical changes within the star. More than 30,000 variable stars are known in the Milky Way alone. Variable star astronomy is a popular part of astronomy because amateur astronomers play a key role: they have submitted thousands of observations, which are logged in a database. American readers can find information on it on the American Association of Variable Star Observers page.
One such class of variable stars is the Cepheids. Cepheids are pulsating variable stars: they undergo a “repetitive expansion and contraction of their outer layers” [1]. In Cepheids, the star’s period of variation (about 1-70 days) is related to its luminosity; the longer the period, the higher the luminosity. In fact, when graphed, the relationship is a straight line (as can be seen in the title image). Henrietta Swan Leavitt, an American astronomer, first discovered this and understood its significance. Combined with the star’s apparent magnitude (a previously written post on this subject can be found here), astronomers can use this information to find a star’s distance from Earth. Cepheids are famous for their usefulness in finding distances to far-away galaxies and other deep-sky objects. Leavitt died early from cancer but was to be nominated for the Nobel Prize in Physics by Professor Mittag-Leffler (Swedish Academy of Sciences).
Edwin Hubble used Leavitt’s discovery to prove that the Andromeda Galaxy (M31) is not part of the Milky Way, and to find the distance to it (between 2-9 million light years away). At first his calculation was incorrect (900,000 light years) because he observed Type I (classical) Cepheids, which are brighter, newer Population I stars. Hubble later used Type II Cepheids (also called W Virginis stars), which are smaller, dimmer Population II stars, and was able to make more accurate calculations.

To determine the star&#8217;s distance, use the inverse square law of light brightness. 
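In magnitude form, the inverse square law becomes the distance modulus, m − M = 5 log₁₀(d / 10 pc). Here is a minimal sketch (the function name and the Cepheid’s numbers are hypothetical, for illustration only):

def distance_parsecs(m_apparent, M_absolute):
    # Distance modulus: m - M = 5*log10(d / 10 pc), solved for d.
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

# Hypothetical Cepheid: the period-luminosity relation gives M = -3.5,
# and we measure an apparent magnitude of m = 18.5.
print(distance_parsecs(18.5, -3.5))  # ~2.5e5 parsecs, roughly 820,000 light years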


A similar type of star is the RR Lyrae variable. RR Lyrae stars are smaller than Cepheids and have a much shorter period (from a few hours to about a day), but they are far more common. They too can be used to solve for distances. Low-mass stars live longer, so Cepheids, being more massive, are generally younger stars.
Both Cepheids and RR Lyrae variable stars are referred to as standard candles: objects with known luminosity. If you’ve ever wondered how astronomers came to those enormous figures when describing how far away galaxies and stars are from us, you can now better understand why and how.

Friday, October 28, 2011
The Virial Theorem

In the transition from classical to statistical mechanics, are there familiar quantities that remain constant? The virial theorem defines a law for how the total kinetic energy of a system behaves under the right conditions, and it is equally valid for a one-particle system or a mole of particles.

Rudolf Clausius, the man responsible for the first mathematical treatment of entropy and for one of the classic statements of the second law of thermodynamics, defined a quantity G (now called the virial of Clausius):

G ≡ Σᵢ (pᵢ · rᵢ)

where the sum is taken over all the particles in a system. You may want to satisfy yourself (it’s a short derivation) that taking the time derivative gives:

dG/dt = 2T + Σᵢ (Fᵢ · rᵢ)

where T is the total kinetic energy of the system (Σ ½mv²) and dp/dt = F. Now for the theorem: the virial theorem states that if the time average of dG/dt is zero, then the following holds (we use angle brackets ⟨·⟩ to denote time averages):

2⟨T⟩ = −⟨Σᵢ (Fᵢ · rᵢ)⟩

which may not be surprising. If, however, all the forces can be written as power laws so that the potential is V = arⁿ (with r the inter-particle separation), then

2⟨T⟩ = n⟨V⟩

which is pretty good to know! (Here, V is the total potential energy of the particles in the system, not the single-pair potential function V = arⁿ.) For an inverse square law (like the gravitational or Coulomb forces), F ∝ 1/r² ⇒ V ∝ 1/r, so n = −1 and 2⟨T⟩ = −⟨V⟩.

Try it out on a simple harmonic oscillator (like a mass on a spring with no gravity) to see for yourself. The potential V ∝ kx², so n = 2 and the time average of the potential energy should equal the time average of the kinetic energy. Indeed, if x = A sin(√[k/m] · t), then v = A√[k/m] cos(√[k/m] · t); then x² ∝ sin² and v² ∝ cos², and the time averages (over an integral number of periods) of sine squared and cosine squared are both ½. Thus the virial theorem reduces to

2 · ½m · (A²k/2m) = 2 · ½k · (A²/2)

which is easily verified. This doesn’t tell us much about the simple harmonic oscillator; in fact, we had to find the equations of motion before we could even use the theorem! (Try plugging the force term F = −kx into the first form of the virial theorem, without assuming that the potential is a power law, and verify that the result is the same.) But the theorem scales to much larger systems where finding the equations of motion is impossible (unless you want to solve an Avogadro’s number of differential equations!), and just knowing the potential energy of particle interactions in such systems can tell us a lot about the total energy or temperature of the ensemble.
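As a quick numerical sanity check, here is a minimal sketch (our own illustration, with arbitrary parameter values) that time-averages T and V over one period of the oscillator:

import math

# Virial theorem check for a simple harmonic oscillator: V ∝ x², so n = 2
# and the theorem predicts 2<T> = 2<V>, i.e. <T> = <V>.
m, k, A = 1.0, 4.0, 0.5            # mass, spring constant, amplitude (arbitrary)
omega = math.sqrt(k / m)
period = 2 * math.pi / omega

N = 100_000                        # time steps over one full period
dt = period / N
T_avg = V_avg = 0.0
for i in range(N):
    t = i * dt
    x = A * math.sin(omega * t)    # the equation of motion found above
    v = A * omega * math.cos(omega * t)
    T_avg += 0.5 * m * v * v / N   # accumulate the time averages
    V_avg += 0.5 * k * x * x / N

print(T_avg, V_avg)                # both ~ k*A²/4 = 0.25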

Tuesday, October 18, 2011

∑ F = ma

… is a differential equation:
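Written out, with the displacement differentiated twice:

∑ F = m a(t) = m dv/dt = m d²s/dt²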

where acceleration a(t), velocity v(t), and displacement s(t) are all vectors and functions of time. The equation is second-order in position because the highest derivative is the second time derivative of position. Combined with the right boundary conditions, s(t) (also called the trajectory: the path through space and time) can be determined.

This differential equation can be solved one component, or dimension, at a time. Let us focus on one of these, and call it the x component. The equations for y and z can be found exactly the same way.

Constant acceleration

If the graph of a(t), the acceleration in the x direction, is constant at some value a₀

then the graph of v(t), the velocity in the x direction, is a straight line with slope a₀

and the graph of x(t), the position along the x axis, is a parabola

The acceleration, the initial velocity, or the initial position may also be negative. Thus the displacement (projectile motion) formula is derived.
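Integrating the constant acceleration twice (the straight line and the parabola described above) gives:

v(t) = v₀ + a₀t
x(t) = x₀ + v₀t + ½a₀t²

where x₀ and v₀ are the (possibly negative) initial position and velocity.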

Friday, October 14, 2011
Going superfluid!
A liquid goes superfluid when it suddenly loses all internal friction and gains near-infinite thermal conductivity. The combination of zero viscosity but nonzero surface tension allows a superfluid to creep up walls and back down the outside to drip from the bottom of open containers, or to completely cover the inner surface of sealed containers. Lack of viscosity also allows a superfluid to leak through any surface that is porous to any degree, because the molecules can slip through even microscopic holes. Superfluids furthermore exhibit a thermo-mechanical effect whereby they flow from colder to warmer regions, exactly the opposite of heat flow as dictated by the laws of thermodynamics! This implies the remarkable property that superfluids carry zero entropy. Because of this, a perpetual fountain can be set up by shining light on a superfluid bath just below a vertical open capillary tube, causing the fluid to shoot up through and beyond the tube until its contact with the air causes it to cease being a superfluid and fall back down into the bath, where it will cool back into the superfluid state and repeat the process.
So how does superfluidity work, exactly?
Makings of a superfluid
Physicists first got an inkling of something stranger than the norm when, around 1940, they cooled liquid helium (specifically, the 4He isotope) down to 2.17 K and it started exhibiting the above-mentioned properties. Since the chemical makeup of the helium didn’t change (it was still helium), the transformation to a superfluid state is a physical change, a phase transition, just like ice melting into liquid water. Perhaps for cold-matter researchers, this transition to a new phase of matter makes up for the fact that helium doesn’t solidify even at 0 K except under large pressure, whereas ALL other substances solidify above 10 K.
[Phase diagram of 4He, source]
Helium is truly the only substance that never solidifies under its own vapor pressure.
Instead, when the temperature reaches the transition or lambda point, quantum physics takes hold and a fraction of the liquid particles drop into the same ground-energy quantum state. They move in lock-step, behaving identically and never getting in each other’s way. Thus we come to see that superfluidity is a kind of Bose-Einstein condensation, the general phenomenon of a substance’s particles simultaneously occupying the lowest-energy quantum state.
Read more: “This Month in Physics History: Discovery of Superfluidity, January 1938”. APS News: January 2006
Based on a project by Barbara Bai, Frankie Chan, and Michele Silverstein at Cornell University.

Wednesday, October 12, 2011
Hypercubes
What is a hypercube (also referred to as a tesseract), you say? Well, let’s start with what you know already. We know what a cube is: it’s a box! But how else could you describe a cube? A cube is 3-dimensional. Its 2-dimensional cousin is a square.
A hypercube is to a cube what a cube is to a square. A hypercube is 4-dimensional! (Actually, to clarify: hypercubes can refer to cubes of all dimensions. “Normal” cubes are 3-dimensional, squares are 2-dimensional “cubes,” etc. This is because a hypercube is an n-dimensional figure whose edges are aligned in each of the space’s dimensions, perpendicular to each other and of the same length. A tesseract is specifically a 4-d cube.)

[source]
Another way to think about this can be found here:

Start with a point. Make a copy of the point, and move it some distance away. Connect these points. We now have a segment. Make a copy of the segment, and move it away from the first segment in a new (orthogonal) direction. Connect corresponding points. We now have an ordinary square. Make a copy of the square, and move it in a new (orthogonal) direction. Connect corresponding points. We now have a cube. Make a copy and move it in a new (orthogonal, fourth) direction. Connect corresponding points. This is the tesseract.
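That copy-and-connect construction doubles the vertex count at every step, and it is easy to check by brute force. Here is a minimal sketch (our own illustration, not part of the original post):

from itertools import product

def hypercube(n):
    # Vertices of the n-dimensional unit hypercube: all 0/1 tuples of length n.
    verts = list(product((0, 1), repeat=n))
    # Two vertices share an edge when they differ in exactly one coordinate.
    edges = [(u, v) for i, u in enumerate(verts) for v in verts[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return verts, edges

for n in range(1, 5):
    verts, edges = hypercube(n)
    print(n, len(verts), len(edges))  # 2^n vertices, n * 2^(n-1) edges
# n = 4 (the tesseract): 16 vertices and 32 edges.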

If a tesseract were to enter our world, we would only see it in our three dimensions: we would see forms of a cube doing funny things and spinning on its axes. This is called a cross-section of the tesseract. Similarly, if we as 3-dimensional bodies were to enter a 2-dimensional world, its 2-dimensional citizens would “observe” us as 2-dimensional cross-sections as well! It would only be possible for them to see cross-sections of us.
Why is this significant? In math, we work with multiple dimensions very often. While it may seem as though a mathematician must then work with 3 dimensions often, that is not necessarily true. The mathematician deals with these dimensions only mathematically. These dimensions need not correspond to anything in reality; 3 dimensions are nothing ordinary or special.
Yet, through modern mathematics and physics, researchers do consider the existence of other (spatial) dimensions. What might be an example of such a theory? String theory is a model of the universe which supposes there may be many more than the usual 4 spacetime dimensions (3 for space, 1 for time). Perhaps understanding these dimensions, though seemingly impossible to visualize, will come in handy.
Carl Sagan also explains what a tesseract is. 
Image: Peter Forakis, Hyper-Cube, 1967, Walker Art Center, Minneapolis
