Say It With Science is an educational blog that teaches readers about science in general, with a particular focus on physics and mathematics.
It is run by members who each have their own specialty and interests in science and mathematics. Some of us are high school students and some of us are university students, which lets us provide high-quality information on both introductory and advanced subjects.
You may contact us via ask box or at email@example.com
We welcome questions, feedback, and submissions, and we are happy to clarify any concepts for readers.
Some might say that modern-day physicists have it easy: they can appeal to the public with stories of eleven-dimensional universes, time travel, and a quantum world stranger than fiction. But the basis of that appeal remains what the appeal of pursuing science always was and always will be: a greater understanding of our environment, ourselves, and knowledge itself.
Just like Schrödinger’s cat, the popular thought experiment by the famous physicist Erwin Schrödinger, Laplace’s Demon and Maxwell’s Demon are thought experiments that matter for what they reveal about our understanding of the universe. If nothing else, learning about them reinforces the philosophical relevance and beauty that science has always sought to provide.
Jim Al-Khalili, author of Quantum: A Guide for the Perplexed, affirms that fate as a scientific idea was disproved three-quarters of a century ago, referring, of course, to the discoveries of quantum mechanics. But what does he mean by this? Prior to those discoveries, it was still tenable to argue for a deterministic universe: scientists could consider a world in which one specific input must produce one specific output, so that the sum of all these actions and their consequences could “determine” the overall outcome, or fate, of such a world.
Pierre-Simon Laplace, born on March 23, 1749, was a French mathematician and astronomer whose work largely founded the statistical interpretation of probability known as Bayesian probability. He lived before Heisenberg’s Uncertainty Principle and Chaos Theory, and so he was free to imagine such a deterministic universe:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Laplace, A Philosophical Essay on Probabilities
Laplace considered what it would be like if it were possible to know the positions, masses, and velocities of all the atoms in existence, and hypothesized a being, later known as Laplace’s Demon, which would know all of this information and could thus calculate all future events.
Given our modern knowledge of physics, namely the Heisenberg Uncertainty Principle and Chaos Theory, such a being could not exist: information about atoms cannot be observed with enough precision to calculate and predict future events. (By the way, “enough” precision here means infinite precision!) This might be good news for those who believe in free will, since free will would have no place in a deterministic universe governed by Laplace’s demon.
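To see how quickly a deterministic system can defeat a would-be demon armed with merely finite precision, here is a small illustrative sketch (the starting values are made up) using the logistic map, a classic example from Chaos Theory:

```python
# A tiny illustration of why Laplace's demon needs *infinite* precision:
# the logistic map x -> r*x*(1 - x) is perfectly deterministic, yet two
# starting points differing by one part in a billion soon disagree entirely.

def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.200000000)
b = logistic_orbit(0.200000001)   # a 1e-9 change in the "measurement"

# After 50 steps the two "predictions" are macroscopically different.
print(abs(a[-1] - b[-1]))
```

The tiny initial error roughly doubles at every step, so after a few dozen iterations the two trajectories are completely uncorrelated; no finite measurement survives.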
Interestingly enough, The Heisenberg Uncertainty Principle and Chaos Theory are not the only restrictive challenges that scientists have faced in trying to understand the properties and bounds of our universe. The Second Law of Thermodynamics is also of concern to scientists and philosophers alike, as we will learn with the birth of another mind-boggling demon.
Two days ago, the CERN team announced that they had found a new particle whose properties are consistent with those of the long-sought Higgs Boson. Whether or not it is the elusive boson, however, is still to be determined by further research. To read more about the event, follow this link to the BBC news article.
If you have no clue what this is about, the above video is a quick and nice introduction to the Higgs Boson submitted by one of our followers, the lovely oh-yeah-and-what. Thanks for the awesome submission!
SIWS loves feedback from followers and we’ll do our best to respond. If you have any questions, ideas, or concerns, feel free to drop us a message, email us at firstname.lastname@example.org or like and post on our Facebook page. You can even make a submission post and we might publish it and credit you, like we did with this one!
Entropy is an important topic in many fields; it has well-known uses in statistical mechanics, thermodynamics, and information theory. The classical formula for entropy is S = −Σi(pi log pi), where pi = p(xi) is a probability density function describing the likelihood that the system assumes microstate i. But what is this probability density function? How must the likelihood of states be configured so that we observe the appropriate macrostates?
In accordance with the second law of thermodynamics, we want the entropy to be maximized. In the limit of large N we can treat the entropy with calculus as S[φ] = −∫dx φ ln φ. Here, S is called a functional (which is, essentially, a function that takes another function as its argument). How can we maximize S? We will proceed using the calculus of variations and Lagrange multipliers.
First we introduce three constraints. We require normalization, so that ∫dx φ = 1. This is a condition that any probability distribution must satisfy, so that the total probability over the domain of possible values is unity (since we’re asking for the probability of any possible event occurring). We require symmetry, so that the expected value of x is zero (it is equally likely to be in microstates to the left of the mean as it is to be in microstates to the right — note that this derivation is treating the one-dimensional case for simplicity). Then our constraint is ∫dx x·φ = 0. Finally, we will explicitly declare our variance to be σ², so that ∫dx x²·φ = σ².
Using Lagrange multipliers, we will instead extremize the augmented functional S̃[φ] = ∫dx (−φ ln φ − λ0φ − λ1xφ − λ2x²φ). Here, the integrand is just the sum of the integrands above, weighted by Lagrange multipliers λk (whose signs are a free choice) for which we’ll be solving.
Setting the variation to zero (the Euler-Lagrange equation, trivial here since the integrand has no φ′ dependence) and solving for φ gives φ = 1/exp(1+λ0+xλ1+x²λ2). From here, our symmetry condition forces λ1=0, and evaluating the other integral conditions fixes the remaining λ’s so that φ = (1/2πσ²)^½ · exp(−x² / 2σ²), which is just the Normal (or Gaussian) distribution with mean 0 and variance σ². This remarkable distribution appears in many descriptions of nature, in no small part due to the Central Limit Theorem.
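As a numerical sanity check of this result (not part of the derivation itself), the sketch below compares the entropy −∫ φ ln φ dx of a Gaussian against a uniform distribution with the same variance; the Gaussian should come out on top:

```python
import math

def differential_entropy(pdf, lo, hi, n=100000):
    """Approximate -∫ p ln p dx with a simple midpoint Riemann sum."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        p = pdf(x)
        if p > 0:
            total -= p * math.log(p) * dx
    return total

sigma = 1.0
gaussian = lambda x: math.exp(-x * x / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

# A uniform distribution on [-w/2, w/2] has variance w^2/12, so choosing
# w = sqrt(12)*sigma matches the Gaussian's variance.
w = math.sqrt(12) * sigma
uniform = lambda x: 1.0 / w if abs(x) <= w / 2 else 0.0

h_gauss = differential_entropy(gaussian, -10, 10)
h_unif = differential_entropy(uniform, -10, 10)

print(h_gauss, h_unif)   # ~1.419 vs ~1.242: the Gaussian has more entropy
```

The closed forms are ½ ln(2πeσ²) for the Gaussian and ln w for the uniform; the numbers agree, and the Gaussian wins, as the maximum-entropy argument demands.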
Variable stars are stars whose brightness changes because of physical changes within the star. More than 30,000 variable stars are known in the Milky Way alone. Variable star astronomy is a popular part of astronomy because amateur astronomers play a key role: they have submitted thousands of observations, which are logged in a database. Readers can find more information on the American Association of Variable Star Observers (AAVSO) page.
One such class of variable stars is the Cepheids. Cepheids are pulsating variable stars: they undergo a “repetitive expansion and contraction of their outer layers”. In Cepheids, the star’s period of variation (about 1-70 days) is related to its luminosity; the longer the period, the higher the luminosity. In fact, when graphed, the relationship forms a straight line (as can be seen in the title image). Henrietta Swan Leavitt, an American astronomer, first discovered this relation and understood its significance. Combined with a star’s apparent magnitude (a previously written post on this subject can be found here), astronomers can use this information to find the star’s distance from Earth. Cepheids are famous for their usefulness in finding distances to far-away galaxies and other deep-sky objects. Leavitt died early from cancer, before Professor Mittag-Leffler of the Swedish Academy of Sciences could nominate her for the Nobel Prize in Physics.
Edwin Hubble used Leavitt’s discovery to prove that the Andromeda Galaxy (M31) is not part of the Milky Way by finding its distance. His first estimate (about 900,000 light years) was far too small: the stars he observed were Type I (classical) Cepheids, which are brighter, younger Population I stars, but the period-luminosity relation then in use had been calibrated on Type II Cepheids (also called W Virginis stars), which are smaller, dimmer Population II stars. Once astronomers learned to distinguish the two types, the calibration was corrected and much more accurate distances became possible (M31 is now placed at roughly 2.5 million light years).
A similar type of star is the RR Lyrae variable. RR Lyrae stars are smaller than Cepheids and have much shorter periods (from a few hours to about a day), but they are far more common, and they too can be used to solve for distances. Because low-mass stars live longer and Cepheids are more massive, Cepheid stars are generally younger.
Both Cepheids and RR Lyrae Variable stars are referred to as standard candles: objects with known luminosity. If you’ve ever wondered how astronomers came to those enormous figures when describing how far away galaxies and stars are from us, you can now better understand why and how.
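As a rough sketch of how a standard candle is used in practice (the magnitudes below are made up for a hypothetical Cepheid), the distance modulus m − M = 5·log10(d / 10 pc) can be inverted to get the distance:

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical Cepheid: suppose its period-luminosity relation implies an
# absolute magnitude M = -4.0, and we measure apparent magnitude m = 16.0.
d = distance_parsecs(16.0, -4.0)
print(d)   # 100000.0 parsecs, i.e. 100 kpc
```

The period tells you the luminosity (hence M), the telescope gives you m, and the gap between the two is a direct measure of distance; that is the whole trick behind standard candles.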
In the transition from classical to statistical mechanics, are there familiar quantities that carry over? The Virial theorem describes how the total kinetic energy of a system behaves under the right conditions, and it is equally valid for a one-particle system or a mole of particles.
Consider the quantity G = Σi(pi · ri), where the sum is taken over all the particles in a system (pi is the momentum of particle i and ri its position). You may want to satisfy yourself (it’s a short derivation) that taking the time derivative gives:
dG/dt = 2T + Σi(Fi · ri)
Where T is the total kinetic energy of the system (Σ ½mv²) and dp/dt = F. Now for the theorem: the Virial Theorem states that if the time average of dG/dt is zero, then the following holds (we use angle brackets ⟨·⟩ to denote time averages):
2⟨T⟩ = −⟨Σi(Fi · ri)⟩
Which may not be surprising. If, however, all the forces can be written as power laws so that the potential is V=arn (with r the inter-particle separation), then
2⟨T⟩ = n⟨V⟩
Which is pretty good to know! (Here, ⟨V⟩ is the time average of the total potential energy of the system, not the single-pair potential function V=arⁿ.) For an inverse square law (like the gravitational or Coulomb forces), F∝1/r² ⇒ V∝1/r, so n = −1 and 2⟨T⟩ = −⟨V⟩.
Try it out on a simple harmonic oscillator (like a mass on a spring with no gravity) to see for yourself. The potential is V = ½kx², so n = 2 and the theorem says the time average of the potential energy should equal the time average of the kinetic energy (2⟨T⟩ = 2⟨V⟩). Indeed, if x = A sin( √[k/m] · t ), then v = A√[k/m] cos( √[k/m] · t ); then x² ∝ sin² and v² ∝ cos², and the time averages (over an integral number of periods) of sine squared and cosine squared are both ½. Thus the Virial theorem reduces to
2 · ½m·(A²k/2m) = 2 · ½k(A²/2)
Which is easily verified. This doesn’t tell us much about the simple harmonic oscillator; in fact, we had to find the equations of motion before we could even use the theorem! (Try plugging the force term F=-kx into the first form of the Virial theorem, without assuming that the potential is a power law, and verify that the result is the same.) But the theorem scales to much larger systems where finding the equations of motion is impossible (unless you want to solve Avogadro’s number of differential equations!), and just knowing the potential energy of particle interactions in such systems can tell us a lot about the total energy or temperature of the ensemble.
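For the skeptical, here is a quick numerical check of the simple harmonic oscillator case (the mass, spring constant, and amplitude are arbitrary), time-averaging T and V over one period:

```python
import math

# Numerically time-average T and V for x = A sin(w t) over one period and
# confirm <T> = <V>, the n = 2 case of the Virial theorem.
m, k, A = 2.0, 8.0, 1.5            # arbitrary mass, spring constant, amplitude
w = math.sqrt(k / m)
period = 2 * math.pi / w

n = 100000
dt = period / n
avg_T = avg_V = 0.0
for i in range(n):
    t = (i + 0.5) * dt
    x = A * math.sin(w * t)
    v = A * w * math.cos(w * t)
    avg_T += 0.5 * m * v * v * dt / period   # kinetic energy contribution
    avg_V += 0.5 * k * x * x * dt / period   # potential energy contribution

print(avg_T, avg_V)   # both ≈ k*A²/4 = 4.5 for these values
```

Both averages land on kA²/4, exactly as the ½-averages of sin² and cos² predict; changing m, k, or A moves the number but never breaks the equality.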
What is a hypercube (also referred to as a tesseract), you ask? Well, let’s start with what you know already. We know what a cube is: it’s a box! But how else could you describe a cube? A cube is 3-dimensional. Its 2-dimensional cousin is a square.
A hypercube is to a cube just what a cube is to a square. A hypercube is 4-dimensional! (Actually, to clarify, “hypercube” can refer to cubes of any dimension: “normal” cubes are 3-dimensional, squares are 2-dimensional “cubes”, and so on. This is because a hypercube is an n-dimensional figure whose edges are aligned with each of the space’s dimensions, perpendicular to each other and of the same length. A tesseract is specifically a 4-dimensional cube.)
Another way to think about this can be found here:
Start with a point. Make a copy of the point, and move it some distance away. Connect these points. We now have a segment. Make a copy of the segment, and move it away from the first segment in a new (orthogonal) direction. Connect corresponding points. We now have an ordinary square. Make a copy of the square, and move it in a new (orthogonal) direction. Connect corresponding points. We now have a cube. Make a copy and move it in a new (orthogonal, fourth) direction. Connect corresponding points. This is the tesseract.
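The copy-and-connect construction above can be sketched in a few lines of code: the vertices of the n-cube are the 0/1 tuples of length n, and “connecting corresponding points” links vertices that differ in exactly one coordinate:

```python
from itertools import product

def hypercube_edges(n):
    """Vertices of the n-cube are all 0/1 tuples of length n; two vertices
    share an edge exactly when they differ in a single coordinate, which is
    the 'copy and connect corresponding points' construction."""
    vertices = list(product((0, 1), repeat=n))
    edges = [(u, v) for i, u in enumerate(vertices)
                    for v in vertices[i + 1:]
                    if sum(a != b for a, b in zip(u, v)) == 1]
    return vertices, edges

for n in range(5):
    vs, es = hypercube_edges(n)
    print(n, len(vs), len(es))   # 2^n vertices, n * 2^(n-1) edges

# The tesseract (n = 4) has 16 vertices and 32 edges.
```

Each “make a copy and move it” step doubles the vertex count and adds one new edge per copied vertex, which is exactly where the 2ⁿ and n·2ⁿ⁻¹ formulas come from.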
If a tesseract were to enter our world, we would only see it in our three dimensions: we would see a cube-like form doing funny things as the tesseract moves and spins on its axes. What we see is a cross-section of the tesseract. Similarly, if we as 3-dimensional bodies were to enter a 2-dimensional world, its 2-dimensional citizens would “observe” us only as 2-dimensional cross-sections.
Why is this significant? In mathematics, we work with multiple dimensions very often, but it does not follow that a mathematician must often work with 3 dimensions in particular. The mathematician deals with dimensions abstractly; they need not correspond to anything in physical reality, and 3 dimensions are neither ordinary nor special.
Yet, through modern mathematics and physics, researchers do consider the existence of other (spatial) dimensions. What might be an example of such a theory? String theory is a model of the universe which supposes there may be many more than the usual 4 spacetime dimensions (3 for space, 1 for time). Perhaps understanding these dimensions, though seemingly impossible to visualize, will come in handy.
a shift toward longer wavelengths of the spectral lines emitted by a celestial object that is caused by the object moving away from the earth.
If you can understand that, great! But for those of us who cannot, consider the celestial bodies which make up our night sky. Did you think they were still, adamant, everlasting constants? They may seem to stick around forever, but…
Boy, you were wrong. I’ll have you know that stars are born and, at some point, they die. They move, they change. Have you heard about variable stars? Stars undergo changes, sometimes in their luminosity. (We are, indeed, made of the same stuff as stars).
So, stars move. All celestial bodies do, actually. You might have heard about some mysterious, elusive thing called dark energy. Dark energy is thought to be what drives the universe to expand at a growing rate. The expansion itself is what produces the redshift of distant galaxies; if dark energy is confirmed, it will explain why that expansion is accelerating.
Maybe you can understand redshift by studying a visual:
These are spectral lines from an object. How do the redshifted and blueshifted lines differ from the unshifted, “normal” emission lines?
The redshifted line is observed as if everything is “shifted” a bit to the right— towards the red end of the spectrum; whereas the blueshifted line is moved to the left towards the bluer end of the spectrum.
Imagine if you were standing here on earth and some many lightyears away, a hypothetical “alien” was standing on their planet. With this image in mind, consider a galaxy in between the two of you that is moving towards the alien. You would then observe redshift (stretched out wavelength) and the alien would observe blueshift (shortened wavelength).
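As a small sketch of the arithmetic (the observed wavelength below is made up for illustration), redshift is quantified as z = (λ_observed − λ_rest)/λ_rest, and for small z the recession velocity is approximately c·z:

```python
# Hydrogen-alpha line: emitted at 656.3 nm; suppose we observe it at 662.9 nm.
# (The observed value here is made up for illustration.)
c = 299792.458            # speed of light, km/s

lambda_rest = 656.3       # nm
lambda_obs = 662.9        # nm

z = (lambda_obs - lambda_rest) / lambda_rest
v = c * z                 # non-relativistic approximation, valid for small z
print(round(z, 5), round(v), "km/s receding")
```

A positive z means redshift (the source is receding); a negative z would mean blueshift, which is what the alien on the far side of that galaxy would measure.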
A simple, everyday example of this concept can be observed if you stand by the side of a road. As a car (one without a silencer) drives by, the pitch you hear changes. This is known as the Doppler effect. Watch this quick YouTube video titled “Example of Dopper Shift using car horn”:
(You may not be able to view it from the dashboard, only by opening this post on the actual blog page. You can watch the video by clicking this link).
Notice how as the car drives past the camera man, the sound changes drastically.
Understanding redshift is important to scientists, especially astronomers and astrophysicists, who must account for this observable difference to draw the right conclusions. Redshift is one of the concepts which helped scientists determine that celestial bodies are actually moving away from us at an accelerating rate.
Fractal Geometry is beautiful. Clothes are designed from it and you can find fractal calendars for your house. There’s just something about that endlessly repeating pattern that intrigues the eye, and the brain.
Fractals are “geometric shapes which can be split into parts which are a reduced-size copy of the whole” (source: Wikipedia). They exhibit a property called self-similarity, in which parts of the figure resemble the figure as a whole. In theory, a fractal can be magnified indefinitely and remain self-similar at every scale.
One simple fractal which can easily display self-similarity is the Sierpinski Triangle. You can look at the creation of such a fractal:
What do you notice? Each triangle is self-similar: they are all equilateral triangles. Each new triangle’s side length is half that of the triangle before it, and its area is one quarter. This pattern repeats again, and again.
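The pattern is easy to track in a few lines: after k steps there are 3^k filled triangles, each side is (1/2)^k of the original, and the total filled area has shrunk to (3/4)^k of where it started:

```python
# Track the Sierpinski construction: at each step every filled triangle is
# replaced by 3 half-size copies, so the filled area shrinks by a factor 3/4.
def sierpinski_stats(k, side=1.0):
    triangles = 3 ** k                 # number of filled triangles
    side_len = side / 2 ** k           # side length of each one
    area_fraction = (3 / 4) ** k       # fraction of the original area left
    return triangles, side_len, area_fraction

for k in range(5):
    print(k, *sierpinski_stats(k))

# As k grows, the number of triangles explodes while the filled area -> 0:
# a figure with infinite boundary detail but zero area.
```

This is the quantitative side of self-similarity: the counting rule is the same at every level of magnification.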
The Drake Equation is an equation for predicting the number of civilizations in the Milky Way Galaxy capable of interstellar communication.
Short descriptions of what the variables of the equation represent can be found here.
The variables represent the average rate of star formation per year in our galaxy, the fraction of those stars which have planets, the average number of planets that can potentially support life per star with planets, the fraction of those which actually go on to develop life, the fraction of those which go on to develop intelligent life, the fraction of those which release detectable signals of their existence, and (finally) the length of time for which these civilizations release such signals.
That all seems like a mess, but you get the idea.
According to Drake’s parameters:
50% of new stars develop planets
each such star has, on average, 0.4 habitable planets
90% of habitable planets develop life
10% of new instances of life develop intelligence
10% of such life develops interstellar communications
These civilizations might, on average, last 10,000 years.
To be fair, we are not sure of the actual figures. Drake’s values give an answer of about 10, meaning that roughly 10 such civilizations in our galaxy should currently be able to communicate.
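The multiplication is easy to reproduce. Note that the list above omits the rate of star formation, so the sketch below assumes a hypothetical 1 new star per year; with that assumption these particular figures multiply out to 18, the same order of magnitude as the quoted 10:

```python
# Multiply Drake's factors together. The star-formation rate R_star is not
# listed above, so we assume a hypothetical 1 new star per year.
R_star = 1.0      # new stars per year (assumed, not from the list above)
f_p = 0.5         # fraction of stars with planets
n_e = 0.4         # habitable planets per star with planets
f_l = 0.9         # fraction of habitable planets that develop life
f_i = 0.1         # ...that develop intelligence
f_c = 0.1         # ...that develop interstellar communication
L = 10_000        # years a civilization keeps transmitting

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)          # 18.0 with these figures: the same ballpark as N ≈ 10
```

The point is not the exact answer; changing any one factor by a little swings N enormously, which is exactly why each variable is a research question in its own right.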
But the importance of Drake’s equation is not necessarily the numerical value. It lies in all the questions that the equation led him to ask. Who knows exactly how many stars there are, and so on? These figures are yet to be pinned down.
So next time you look above, remember to always question. You’re not alone in questioning and you don’t know where these questions can lead you. Like Drake, you might be led to discover companions from different worlds.
It is easy to recognize octaves because the frequency of an octave above a certain pitch is exactly twice the frequency of that pitch. Octaves harmonize so well that they almost sound identical, so we call these notes by the same name: an octave above or below middle C is another C; an octave above or below concert A, 440 Hz, is another A (880 or 220 Hz). Mathematically, if a certain note H has frequency f then a note with frequency 2nf, where n is an integer, is n octaves above H (if n is negative, it is a positive power of 1/2 and represents |n| octaves below H).
Not alone in their ability to harmonize well, octaves are joined by all the intervals that make up a major or minor scale (in the Western music system), notably including perfect fifths (fifth note of a scale, 3/2 times the frequency of the starting note) and major or minor thirds (third note of a scale, respectively 5/4 or 6/5 times the frequency of the starting note). All of these intervals correspond to frequency ratios of relatively small whole numbers, which contributes to the harmony of the notes just as the ratio 2/1 does for octaves. The simpler the frequency ratio, the purer the harmony achieved by an interval when played out loud. The only requirement is for the ratio to be a (positive) rational number, able to be written with whole numbers for the numerator and denominator.
However, suppose you tuned a piano perfectly according to one of the scales. Then you could play that scale and it would be perfectly in tune, but the harmony of all the other scales gets thrown off! For example, E is both the third note of a C major scale and the second note of a D major scale. By tuning the piano to the C major scale, you guarantee that an E has frequency 5/4 times that of a C (C to E is a major third). In a perfect C scale, D has frequency 9/8 times that of C. Call these frequencies fC, fD, and fE.
fE = (5/4)·fC and fD = (9/8)·fC ⇒ fC = (8/9)·fD ⇒ fE = (5/4)(8/9)·fD = (10/9)·fD
This is still a relatively simple rational number ratio, but it’s the wrong ratio. In a perfect D major scale, E has frequency 9/8 that of D. The relative error when tuning to C is
|10/9 - 9/8| = |80-81|/72 = 1/72.
In the first days of the harpsichord and piano (keyboard instruments), tuners chose one scale to tune to, sacrificing the harmony of the other scales. Interestingly, some of the music from that era took that into account; on one hand some scales were considered “sweeter” than others based on common tuning practices, and on the other some songs were purposely written in one of the sour-sounding scales for their dissonant harmonies.
Today’s most common tuning, or temperament, is called equal temperament. Each scale sounds equally good (or equally bad, depending on your tolerance for imperfection), and the only interval which is perfectly preserved is the octave. Since, in the Western music system, there are 12 semitones from octave to octave (12 white and black keys from a note to an octave above it), each key is assigned a frequency exactly the twelfth root of 2 times that of the key preceding it. What’s great about that, of course, is that this is a completely egalitarian system: no scale is sweeter- or sourer-sounding than any other. Yet the cost is the complete destruction of the rational-number harmonies: the twelfth root of 2 is as irrational as they come, and could never in any number theorist’s wildest dreams be written as a ratio of whole numbers.
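To see just how close the equal-tempered intervals come to the pure whole-number ratios, here is a short sketch comparing them:

```python
# Compare equal-tempered intervals (powers of 2**(1/12)) against the
# "pure" whole-number ratios they approximate.
semitone = 2 ** (1 / 12)

pure = {
    "octave (2/1)":        (12, 2.0),   # (semitone steps, pure ratio)
    "perfect fifth (3/2)": (7, 1.5),
    "major third (5/4)":   (4, 1.25),
}

for name, (steps, ratio) in pure.items():
    tempered = semitone ** steps
    print(f"{name}: tempered {tempered:.5f}, pure {ratio}, "
          f"error {abs(tempered - ratio) / ratio:.4%}")
```

Only the octave comes out exact; the tempered fifth is off by about a tenth of a percent and the tempered major third by nearly one percent, which is the small, evenly distributed dissonance we have all agreed to live with.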