Welcome! You may find that the most interesting posts were posted first, that is, at the bottom of the page, starting in June 2014.

April 23, 2016

Physical Units Factor Tables / Large Print PDF

Here's a version of PUFT with larger text, making it more legible when used as a poster. Unfortunately, making the text this large meant removing the equations for each unit type, which makes this version a little cryptic at first glance. The length factors of each unit type are indicated by the scales on the top and bottom, the time factors are shown on the left, and the factors common to the units on a table are shown in the colored box to its left. Here's a link to the PDF.

March 28, 2016

Physical Units Factor Tables (PUFT)

PUFT Copyright 2014 - 2016 Enon Harris

Link to full-size PUFT picture

[Edit: Link to PDF]

I drafted the Physical Units Factor Tables (PUFT) a bit over a year ago and meant to send it off to publishers as a poster chart for physics classrooms, similar to the periodic table in chemistry classes, but I somehow never got around to it.

The Physical Units Factor Tables organize 50 types of physical units by their factors of length, time, mass, and charge so that the mathematical relationships between physical units are easy to see.

The Physical Units Factor Tables enable anyone who can multiply and divide simple fractions to deduce equations in mechanics and electromagnetics.

The single-page document is also marginally legible when printed in color on a single sheet of letter-size paper, but students and teachers with access to computers will likely find the electronic version easier on the eyes.

Each move left represents multiplication by length, and each move down represents division by time (equivalently, multiplication by frequency). Likewise, each move right represents division by length, and each move up represents multiplication by time. These factors are the same in all tables in the stack, with each lower table having an additional factor:
light blue table = * mass
green table = * 1/charge
pink table = * mass/charge
purple table = * mass/charge^2
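The bookkeeping the tables encode can be sketched in a few lines of Python (my own illustration, not part of PUFT itself): hold a unit type as a tuple of exponents of (length, time, mass, charge), and let each move on a table shift one exponent.

```python
# Sketch of PUFT navigation: a unit type is a tuple of exponents of
# (length, time, mass, charge); each move on a table shifts one exponent.

def move(unit, direction):
    """Shift a (length, time, mass, charge) exponent tuple one cell."""
    L, T, M, Q = unit
    if direction == "left":
        L += 1      # multiply by length
    elif direction == "right":
        L -= 1      # divide by length
    elif direction == "down":
        T -= 1      # divide by time (multiply by frequency)
    elif direction == "up":
        T += 1      # multiply by time
    else:
        raise ValueError(direction)
    return (L, T, M, Q)

velocity = (1, -1, 0, 0)                 # length/time
acceleration = move(velocity, "down")    # divide by time again
assert acceleration == (1, -2, 0, 0)

# Dropping to the light-blue table multiplies by mass:
L, T, M, Q = acceleration
force = (L, T, M + 1, Q)
assert force == (1, -2, 1, 0)            # mass*length/time^2, i.e. force
```

Deriving the force unit this way, by walking from velocity down one row and onto the mass table, is exactly the kind of deduction the printed tables make visible.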

The names of the unit types are taken from Alan Eliasen's wonderful calculator and physically-typed programming language, Frink. (Except for the ones whose top line is in parentheses; these names aren't listed in Frink, though it can easily compute using such quantities.)

The original was done in an OpenOffice spreadsheet, then saved as a PDF file. Among several other versions, I also have one that is more legible from a distance for use as a poster.

If any publishers or science teachers are interested in using PUFT, please let me know.

Converting IQ at a Given Age to an Absolute (Rasch) Measure of Intelligence

Rasch measure of intelligence, ages 2-25, +/- 3 s.d., from the norming of the Woodcock-Johnson IQ test block rotation subtest. This is my remake, with the scale changed to years, text replaced, and a grid added, of a figure from Kevin McGrew's slideshow "Applied Psych Test Design: Part C - Use of Rasch scaling technology", slide 19 (2009), which had the original caption: "Block Rotation: Final Rasch norming test; n = 37 norming items; n = 4722 norm subjects; item map with 'steps' displayed for items; red area represents the complete range (including extremes) of sample Block Rotation W-scores; good test scale coverage for complete range of population."

Rasch measures of intelligence are an interesting and important part of psychometrics, as they provide an absolute measure of intelligence: not only an "equal interval" scale (as with Fahrenheit and Celsius) but one with a proper zero (as with Kelvin), also known as a ratio scale (not to be confused with the mental/chronological age ratio used in early IQ tests). Because they are ratio measures, Rasch measures allow all arithmetic operations (*, /, +, -, rather than at most + and - for IQ) and form the basis for item response theory (IRT) in general. (See the letter following this post for more.) Rasch measures also have the interesting property of putting item difficulties and test-taker abilities on the same scale, so that if a person with a certain ability score tries an item with the same difficulty score, he has a 50% chance of success.
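That 50% property falls straight out of the standard dichotomous Rasch model, in which success probability depends only on the difference between ability and difficulty on a shared logit scale (scores like the WJ's W-score are linear rescalings of such a scale). A minimal sketch:

```python
import math

# Dichotomous Rasch model: P(success) depends only on ability minus
# difficulty, both expressed on the same logit scale.
def p_success(ability, difficulty):
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

assert p_success(2.0, 2.0) == 0.5   # equal ability and difficulty -> 50%
assert p_success(3.0, 2.0) > 0.5    # item is easy for this person
assert p_success(1.0, 2.0) < 0.5    # item is hard for this person
```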
The above graph was adapted from one used in the block rotation subtest norming of the Woodcock-Johnson IQ test (WJ), a product of Riverside Publishing (a division of Houghton Mifflin Harcourt). The Stanford-Binet (SB5), also published by Riverside, uses the same scale (the "change-sensitive score" or "CSS"), whose only arbitrary choice is setting the CSS for an average 10-year-old equal to 500.

The paper Assessment Service Bulletin Number 3: Use of the SB5 in the Assessment of High Abilities has on page 12 of the PDF (table 4) a reprint from the SB5 interpretive manual of the average full-scale CSS scores for different ages, which closely matches the average line in the graph above, so the block rotation subtest average scores vs. age should be a reasonable proxy for the full scale (though there is reason to think the standard deviations on the WJ block rotation subtest shown in the graph are somewhat smaller than for the full-scale score of the SB5). (See the end of this post for table 4 in usable form.) Unfortunately, Riverside seems reluctant to publish the average age vs. CSS or W-score graphs for either full test, let alone for different standard deviations, so using the BR subtest as a proxy for the full scale is the best we can do.
Using a horizontal straightedge on the graph allows equating a given CSS score to z-scores at different ages (z-scores = standard deviations; one z-score is equivalent to 15 IQ points). The Mk. I eyeball gives a pretty decent estimate of fractional z-scores falling between the s.d. lines, but one can use the line or measurement tool in a decent paint program such as Paint.NET or GIMP to get better measurements of the z-score that equates to a given CSS at a given age. (Adding a T-square on a movable transparent layer is also useful.) This allows comparing the absolute intelligence of people of different ages and z-scores.
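The arithmetic behind the straightedge method is just a pair of linear conversions. In the sketch below, the only anchored fact is that the average 10-year-old's CSS is 500; the s.d. values and the adult mean are hypothetical placeholders standing in for numbers read off the graph.

```python
# Conversions behind the straightedge method. The CSS-500-at-age-10 anchor
# is real; the s.d. values and adult mean below are HYPOTHETICAL
# placeholders for values measured off the graph.

def css_from_z(z, mean_css, sd_css):
    """CSS score of a person z standard deviations from the age mean."""
    return mean_css + z * sd_css

def z_from_css(css, mean_css, sd_css):
    """z-score that a given CSS corresponds to at a given age."""
    return (css - mean_css) / sd_css

# A z = +2 ten-year-old, assuming an s.d. of 12 CSS points at that age:
child_css = css_from_z(2.0, 500.0, 12.0)
assert child_css == 524.0

# The same absolute CSS compared against a hypothetical adult mean of 510
# with an s.d. of 10 CSS points:
adult_z = z_from_css(child_css, 510.0, 10.0)
assert abs(adult_z - 1.4) < 1e-9
```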

July 19, 2014

A Curious Way to Represent Numbers: Ternary Factor Tree Representation

This post is only likely to be of interest to those interested in the hidden backwaters of math, and maybe not too many of them. It also has little to do with the other posts here so far.

The conventional system of number representation cannot exactly represent most numbers. Fractions whose denominators have prime factors not shared with the number system's base (factors other than 2 and 5 for base 10, for example) have infinite repeating representations. Square roots and other irrational numbers have infinite, non-repeating representations.
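Concretely, a fraction p/q (in lowest terms) has a finite representation in base b exactly when every prime factor of q also divides b. A quick check:

```python
from math import gcd

def terminates(p, q, base):
    """True if p/q has a finite representation in the given base."""
    q //= gcd(p, q)              # reduce to lowest terms
    g = gcd(q, base)
    while g > 1:                 # strip prime factors shared with the base
        q //= g
        g = gcd(q, base)
    return q == 1                # anything left can't divide the base

assert terminates(1, 8, 10)      # 0.125
assert not terminates(1, 3, 10)  # 0.333... repeats forever in base 10
assert terminates(1, 3, 3)       # but is exactly 0.1 in base 3
```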

In the late 1990s I came up with an alternative system that can exactly represent any rational or irrational number as well as most transcendental numbers using only a finite number of bits. This type of representation is based upon the idea of factored representations of integers extended in a logical way to a nearly universal system of describing number structure.
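I won't reproduce the full ternary-tree scheme here, but the underlying idea of extending factored representations can be sketched. Storing a number as a finite map from primes to exponents gives every positive rational exactly; allowing fractional exponents also captures roots like sqrt(2) = {2: 1/2}. (The function names below are my own illustration, not the notation of the actual system.)

```python
from fractions import Fraction

def factor(n):
    """Prime-exponent map of a positive integer, e.g. 12 -> {2: 2, 3: 1}."""
    exps, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            exps[d] = exps.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        exps[n] = exps.get(n, 0) + 1
    return exps

def rational_exps(num, den):
    """Exponent map of num/den: subtract the denominator's exponents."""
    exps = factor(num)
    for p, e in factor(den).items():
        exps[p] = exps.get(p, 0) - e
    return {p: e for p, e in exps.items() if e != 0}

# 9/10 is stored exactly as a finite map, no repeating digits needed:
assert rational_exps(9, 10) == {3: 2, 2: -1, 5: -1}

# sqrt(18) = 2^(1/2) * 3, exactly representable with fractional exponents:
sqrt18 = {2: Fraction(1, 2), 3: Fraction(1, 1)}
```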

Pretty but unrelated

July 2, 2014

Universe, Physics and Simulation

[Another post from the archives, this time from late March 2014. I overlooked it or I would have posted it earlier.]

The chief business of science, particularly physics, is modeling of the universe's phenomena. Modeling phenomena is also a definition of simulation. The universe may not be information, but all we can know of it is information. The universe may not be simulation but all the theories that we can make about it can only be tested through simulation and comparison of the information from the simulation with that from the universe.

The universe may in fact have characteristics of a simulation; it seems likely that models designed to resemble the universe will do so, and therefore that the universe will resemble the models just as well as the reverse – sometimes in unforeseen ways. Some of the characteristics of models that are commonly thought to be artificial or mere approximations may be capable of telling us secrets of how the universe really works.

Physicists spend a great deal of time with equations, but it is only when actual numbers representing a given situation are plugged in that these equations can be said to represent anything in the physical world. In fact, representing fields in any but the simplest situations demands iterating the equations with numbers for every point in space and all velocities or wavelengths, which means plugging astronomical numbers of coefficients into the equations even for a crude approximation.

How to minimize the number of computations for a given level of accuracy and complexity is the central concern of simulation. For instance, space is often divided into a mesh which is sparse where the situation is simple and dense in more complicated regions. Another commonly used technique is, rather than storing enormous matrices with most entries zero, to store only the few entries containing informative numbers. Other types of compression are also used whenever possible, and compression itself is a rich subject.
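The sparse-matrix trick is simple enough to sketch in a few lines: a dictionary-of-keys matrix stores only its nonzero entries, so memory cost scales with the informative entries rather than the full size.

```python
# Dictionary-of-keys sparse matrix: zeros are implicit, so a huge
# mostly-zero matrix costs memory proportional to its nonzero entries.
class SparseMatrix:
    def __init__(self, rows, cols):
        self.shape = (rows, cols)
        self.data = {}                   # (row, col) -> value

    def __setitem__(self, key, value):
        if value == 0:
            self.data.pop(key, None)     # keep zeros implicit
        else:
            self.data[key] = value

    def __getitem__(self, key):
        return self.data.get(key, 0)

m = SparseMatrix(1_000_000, 1_000_000)   # a trillion cells, near-zero memory
m[3, 7] = 2.5
assert m[3, 7] == 2.5
assert m[0, 0] == 0
assert len(m.data) == 1                  # only one entry actually stored
```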

If the universe resembles a simulation, it too should show evidence of compression techniques. One obvious one is to only store one copy of identical items, and just use a pointer to that copy wherever another such item appears. A bit more advanced is to only store the differences between near-identical items, together with a single prototype copy as above. This is essentially a programmer's sort of platonism. Even beyond that is compression of analogous structures more generally, which can quickly become quite complex.

This compression effect would also potentially seem able to account for some of the observations that led Rupert Sheldrake to propose the existence of morphic fields and morphic resonance. Once a prototype structure exists, it takes much less computation and storage for the universe to support similar structures, assuming the universe is simulation-like in compressing form. Crystals of new substances should be easier to create again once they have first formed elsewhere. Biological structures, behaviors, and for want of better words what I'll call “plots” and “tropes” should be similarly primed if the compression algorithm can handle such subtle and complex analogies.

More speculatively, if the universe's resemblance to a simulation is not mere appearance, then teleological questions arise of who is running the simulation and for what purpose. The biggest potential increment of efficiency for such an entity in simulating a universe would come from only accurately simulating the regions and events of interest, with sparse and approximate methods used in other regions. This could lead to glitches in the simulation such as are often seen in video games: lag and other time discontinuities, failure to load sections of the simulation properly, changes in item prototypes, non-player characters not performing when the simulation does not register that an NPC's non-action is perceptible to the player characters, conflicts between versions of the simulation when different PCs' high-detail regions come into contact, more radically different simulations coming into contact, continuity errors, objects and characters failing to load or loading twice, violations of physical law, cheat codes, hacking, and so on. All these, or similar effects, have been reported numerous times by different people. (See accounts on the "Glitch in the Matrix" subreddit.) It is often the case that the reporters' brains are glitching rather than the exterior simulation, but this sometimes seems to be ruled out by corroboration from other witnesses or by physical evidence. Sometimes these glitches seem purposeful, as when avoiding certain death or when missing items reappear in response to a request. Often, though, they seem to be true glitches: either mistakes, or with no apparent purpose other than perhaps revealing the simulated nature of things.

It could also be possible that the universe is natural (more or less), with the simulation-like aspects being not artifacts but implicit in the universe's necessary informational self-consistency. Nevertheless, conscious beings arising in the natural universe could learn to hack it from the inside, causing glitches and intimations of purposefulness for other, less adept residents of the universe. The general rule of self-consistency is likely only relative to a given branch of implications of occurrences; inconsistencies define other branches of possibilities. (Perhaps in the beginning was the inconsistency 0 = 1: the big bang followed because all propositions and their opposites can be derived from a single contradiction -- but there are branching patterns in the successive derivations of implications from that initial seed.)

Also see Daniel Burfoot's quite readable book on ArXiv, “Notes on a New Philosophy of Empirical Science”, particularly pages 8 to 29. (arXiv:1104.5466 [cs.LG], version 1 April 2011)

June 30, 2014

A Hand-waving Introduction to Geometric Algebra and Applications

Geometric Algebra (GA, real-valued Clifford algebras, a.k.a. hypercomplex numbers) gives the only mostly-comprehensible-to-me account not only of higher spatial/temporal dimensions, but of physics in general. I have been studying GA for over ten years now. One of the best things about it is that nearly every paper using GA explains it from first principles before going on to use it for physics or computer science. Most physics papers in other fields seem to take a positive joy in obscure math and impenetrable jargon. I'll try here to give an even less mathematically difficult account of some of GA's implications than most GA papers.

Given a set of n mutually orthogonal basis vectors, one vector for each independent dimension, a space of 2^n quantities results from considering all possible combinations of these basis vectors multiplied together. For instance, taking pairs of vectors from a 5D space gives 10 possible planes of rotation, a 4D space gives 6 planes of rotation, while in 3D there are only 3 independent planes of rotation. (The numbers of other combinations for n dimensions go as the n-th row of Pascal's triangle, i.e. the binomial coefficients.)
Sums of all 2^n elements, each weighted by a different scale factor, give "multivectors", which are generalizations of complex numbers.
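These counts can be checked directly: the planes of rotation are the C(n, 2) pairs of basis vectors, and the grades of a multivector follow row n of Pascal's triangle, summing to 2^n components.

```python
from math import comb

# Planes of rotation = pairs of basis vectors = C(n, 2):
assert comb(5, 2) == 10     # 5D: 10 planes of rotation
assert comb(4, 2) == 6      # 4D: 6
assert comb(3, 2) == 3      # 3D: 3

# The k-vector parts (scalar, vectors, bivectors, ...) follow row n of
# Pascal's triangle and together give the 2^n components of a multivector:
for n in (3, 4, 5):
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
```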

Each of the basis vectors has a positive or negative square. (Vectors' squares are always scalars, that is, real numbers.) In conventional relativity the signs of the basis vectors' squares, also called the "signature", are (+ - - -) or (+ + + -), with the sign that differs from the others belonging to time. When plugged into the Pythagorean theorem, the square of time can cancel out the squares of the spatial dimensions, giving a distance of zero when the spatial distance equals the time interval (time multiplied by c to put all units in meters). This happens for anything moving at the speed of light. The zero interval is the amount of perceived or "proper" time for a light wave traveling between any two points. This light-speed type of path is also called a "null geodesic". To the photons of the microwave background, no time has passed since they were emitted, supposedly shortly after the universe began.
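The cancellation is easy to see numerically with the (+ - - -) signature, choosing units so that c = 1 (time measured in meters):

```python
# Spacetime interval with signature (+ - - -), units chosen so c = 1.
# The interval is zero for anything moving at light speed.
def interval_sq(t, x, y, z):
    return t * t - (x * x + y * y + z * z)

# A flash covers 3 m in x and 4 m in y (spatial distance 5 m) in 5 m of time:
assert interval_sq(5.0, 3.0, 4.0, 0.0) == 0.0   # null geodesic
assert interval_sq(6.0, 3.0, 4.0, 0.0) > 0.0    # slower than light: timelike
```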

Now it is possible, and actually quite useful for computer graphics, to add a pair of dimensions with signature (+ -) to the usual spatial ones (+ + +). The sum and difference of the extra dimensions give an alternate basis for these two dimensions, but with the basis vectors squaring to zero (0 0). These "null dimensions" are called "origin" and "infinity". A projection from this augmented space down to 3D allows many other structures besides points and directions to be represented by vectors in the 5D space. For instance, multiplying 3 points gives a circle passing through those points, and 4 points give a sphere. If one of those points is the point at infinity, the product is a line or a plane, respectively. The other advantages of this way of doing things are too many to list here. This "conformal" scheme is actually quite easy to visualize and learn to use without getting into abstruse math, using the free GAViewer visualization software and its tutorials.
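That the sum and difference of a (+ -) pair really do square to zero can be sanity-checked with a tiny 2x2 matrix representation (the particular matrices are my own choice; any anticommuting pair with squares +1 and -1 would do):

```python
# Verify the null "origin" and "infinity" basis vectors of conformal GA
# using a 2x2 matrix representation of the (+ -) pair.

def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

e_plus = [[1, 0], [0, -1]]    # squares to +identity
e_minus = [[0, 1], [-1, 0]]   # squares to -identity; anticommutes with e_plus

# origin = (e_plus + e_minus)/2, infinity = e_minus - e_plus:
origin = [[(e_plus[i][j] + e_minus[i][j]) / 2 for j in range(2)]
          for i in range(2)]
infinity = [[e_minus[i][j] - e_plus[i][j] for j in range(2)]
            for i in range(2)]

assert matmul(e_plus, e_plus) == [[1, 0], [0, 1]]      # square +1
assert matmul(e_minus, e_minus) == [[-1, 0], [0, -1]]  # square -1
assert matmul(origin, origin) == [[0, 0], [0, 0]]      # null
assert matmul(infinity, infinity) == [[0, 0], [0, 0]]  # null
```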

One fellow at Intel extended this to three pairs of extra dimensions, nine dimensions in total, so that general ellipsoids rather than just spheres could be specified, but the idea has not become popular, since each multivector in it has 2^9 = 512 parts. The 32 parts of regular conformal 3D/5D multivectors are hard enough to convince people to use. The 11 dimensions of superstring theory are not as well defined as conformal dimensions, since seven of the string dimensions are said to be curled up small, "compactified" in some complicated and unspecified fashion.

An interesting thing about the (+++, +-) signature algebra is that it is the same as one proposed by José B. Almeida as an extension of the usual 3D+t (+++-) "Minkowski space" of relativity, augmenting the usual external time (-) with a second sort of time having positive square and describing internal or "proper" time (which in relativity will be measured differently by a moving external observer). But if it is assumed that everything in the universe is about the same age, then everything has comparable proper-time coordinates, so proper time can be used as a universal coordinate corresponding to the universe's temporal radius. This gives a sort of preferred reference frame for the universe, which is ordinarily considered impossible. In this 5D scheme, not just light but also massive particles follow null geodesics; from that single assumption relativity, quantum mechanics, and electromagnetism can be deduced, and in addition dark matter, the big bang, and the spatial expansion of the universe seem to be illusions.

The math is also easier than the usual warped-space general relativity, instead using flat Euclidean space and having light, etc. move more slowly near mass, that is, treating gravitational fields as regions of higher refractive index than regular space. This is also the case in gauge-theory gravity (GTG), which also uses Geometric Algebra, though sticking to the usual 4D Minkowski space. GTG is the only alternative to general relativity that is in agreement with experiment, and it is also more consistent, simpler, and, unlike GR, deals with black holes correctly. GTG is also much easier to reconcile with quantum mechanics, which is itself much easier to visualize using GA. For instance, the behavior of the electron can be described fully by treating it as a point charge moving in a tight helix at light speed around its average path (a "jittery motion", in German "Zitterbewegung"). The handedness of the helix is the electron spin, the curvature of the helix is the mass, and the angle of the particle around the helix is the phase.

Geometric Algebra is useful in all areas of physics and computer modelling of physics. GA has been successfully applied to robot path planning, electromagnetic field simulation, image processing for object recognition and simulation, signal processing, rigid body dynamics, chained rotations in general, and many other applications. It gives clear, terse, generally applicable, and practically useful descriptions in diverse areas using a single notation and body of techniques.