tag:blogger.com,1999:blog-12311853570323653602024-03-12T22:52:40.749-04:00Mindspace & Minds' BasisE. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.comBlogger19125tag:blogger.com,1999:blog-1231185357032365360.post-3350217866500764592023-02-15T12:17:00.001-05:002023-02-15T12:17:57.635-05:00Decision Theory in a Nutshell<p> </p><p>The
central issue of forecasting is reasoning correctly about
probability, which is largely a solved problem, yet very few
forecasters really apply consistent reasoning.
</p>
<p>The essence of probability and decision theory can be stated in
just a page, though there are many additional wrinkles. Terse and
mathematical as it is, I think some people making very important
decisions will find this synopsis useful:</p>
<p>Synopsis of Ed Jaynes’ <i>Probability Theory</i></p>
<p><b>Probability notation</b><br />
AB = A and B<br />
A + B = A or B (and/or, not exclusive-or)<br />
a = not A, b = not B<br />
(A|B) = probability of A given B<br />
AA = A;<br />
A(B+C) = AB+AC;<br />
AB+a = ab+B;<br />
D = ab -> (implies) d = A+B</p>
<p>Different chains of reasoning must not disagree; if they do then
at least one chain of reasoning is invalid.
</p>
<p>The same state of knowledge in different problems must lead to
assigning the same probabilities.</p>
<p>Consequently:<br />
1.) (AB|C) = (A|BC)(B|C) = (B|AC)(A|C)<br />
2.) (A|B) + (a|B) = 1 , [ probability = 1 = true ]<br />
3.) (A+B|C) = (A|C) + (B|C) – (AB|C)<br />
4.) If {A_1…A_n} is a mutually exclusive and exhaustive set of
possible outcomes, and the information B is indifferent and
uninformative for predicting the outcome, then:<br />
(A_i|B) = 1/n for i = 1 … n</p>
<p>From rule 1, <b>Bayes’ Theorem</b>:<br />
(A|BC) = (A|C) (B|AC) / (B|C)</p>
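Rules 1–3 and Bayes’ theorem can be verified numerically on any small joint distribution. A minimal Python sketch; the card-deck example is my own illustration, not from the text:

```python
# Check rules 1-3 and Bayes' theorem on a toy joint distribution.
# Background C: draw one card from a standard deck.
# A = "card is red", B = "card is a face card" (J, Q, K).
p = {
    (True, True): 6 / 52,    # red face card
    (True, False): 20 / 52,  # red non-face card
    (False, True): 6 / 52,   # black face card
    (False, False): 20 / 52, # black non-face card
}

def pr(event):
    """Probability that event(a, b) holds under the joint distribution."""
    return sum(v for (a, b), v in p.items() if event(a, b))

pA, pB = pr(lambda a, b: a), pr(lambda a, b: b)
pAB = pr(lambda a, b: a and b)
pA_given_B, pB_given_A = pAB / pB, pAB / pA

assert abs(pAB - pA_given_B * pB) < 1e-12                      # rule 1
assert abs(pA + pr(lambda a, b: not a) - 1) < 1e-12            # rule 2
assert abs(pr(lambda a, b: a or b) - (pA + pB - pAB)) < 1e-12  # rule 3
assert abs(pA_given_B - pA * pB_given_A / pB) < 1e-12          # Bayes
```

Any joint table that sums to 1 passes the same checks; the rules are identities, not properties of this example.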
<p>From rule 3, if {A_1…A_n} are mutually exclusive:<br />
(A_1 + … + A_n | B) = SUM[ (A_i | B) ]</p>
<p>If the A_i are also exhaustive, the law of total probability
follows from rules 1 and 3:<br />
(B|C) = SUM[ (BA_i | C) ] = SUM[ (B | A_i C) (A_i | C) ]
</p>
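A quick numeric illustration of this expansion, with hypothetical numbers: three machines (the exclusive, exhaustive A_i) make a part, and B is "the part is defective".

```python
# Law of total probability over three exclusive, exhaustive cases.
# All numbers are invented for illustration.
p_machine = [0.5, 0.3, 0.2]        # (A_i | C): which machine made the part
p_def_given = [0.01, 0.02, 0.05]   # (B | A_i C): defect rate per machine

# (B|C) = SUM[ (B | A_i C) (A_i | C) ]
pB = sum(pa * pd for pa, pd in zip(p_machine, p_def_given))
print(pB)  # 0.021
```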
<p><b>Continuous distributions:</b><br />
If x is continuously variable, the probability, given A, that x lies
in the range (x, x+dx) is:<br />
(dx|A) = (x|A)dx<br />
Rule 1 and Bayes’ theorem remain the same; summations become
integrations.</p>
<p><b>Prior probabilities: </b><br />
The initial information is X;<br />
(A|X) is the prior probability of A. Use rule 4 when there is no
other information, MaxEnt otherwise.</p>
<p><b>Principle of maximum entropy (MaxEnt)</b>:<br />
choose the (A_i | X) so as to maximize the entropy<br />
H = – SUM[ p_i * log[p_i] ]<br />
subject to the constraints of X.<br />
For continuous distributions:<br />
H = – ∫ p[x] * log[ p[x]/m[x] ] dx<br />
where the measure m[x] is a weighting or normalizing function which
does not change the probabilities given the prior information.</p>
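A sketch of MaxEnt in practice, using Jaynes’ die example: maximize H over outcomes 1…6 subject to a mean constraint. The maximum-entropy solution has the exponential (Gibbs) form p_i ∝ exp(λ·x_i); the multiplier λ is found numerically here, and the target mean of 4.5 is illustrative:

```python
import math

# MaxEnt with a mean constraint: over outcomes x = 1..6 (a die),
# find p maximizing H = -sum p_i log p_i subject to E[x] = 4.5.
# The solution is p_i proportional to exp(lam * x_i); solve for lam by
# bisection, since the implied mean increases monotonically with lam.
xs = [1, 2, 3, 4, 5, 6]
target_mean = 4.5

def mean_for(lam):
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return sum(x * wi for x, wi in zip(xs, w)) / z

lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

w = [math.exp(lam * x) for x in xs]
z = sum(w)
p = [wi / z for wi in w]
print([round(pi, 4) for pi in p])  # probabilities rise with x when mean > 3.5
```

With the mean pulled above the uniform value of 3.5, the distribution tilts smoothly toward the high faces rather than, say, putting all the excess weight on 6; that smoothness is exactly what maximizing entropy buys.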
<p>Using new evidence E and Bayes’ theorem gives the <b>posterior
probability</b>:<br />
(A|EX), often written (A|E).</p>
<p>Odds: O(A|EX) = (A|X)/(a|X) * (E|AX)/(E|aX) = O(A|X) *
(E|AX)/(E|aX)</p>
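The odds form reduces updating to one multiplication by the likelihood ratio. A sketch with invented numbers, cross-checked against Bayes’ theorem applied directly:

```python
# Odds-form Bayes update. Hypothetical numbers: prior P(A) = 0.2,
# likelihoods P(E|A) = 0.9 and P(E|not A) = 0.3.
pA, pE_A, pE_a = 0.2, 0.9, 0.3

prior_odds = pA / (1 - pA)
posterior_odds = prior_odds * (pE_A / pE_a)   # O(A|E) = O(A) * likelihood ratio
posterior = posterior_odds / (1 + posterior_odds)

# Cross-check against Bayes' theorem in probability form:
direct = pA * pE_A / (pA * pE_A + (1 - pA) * pE_a)
assert abs(posterior - direct) < 1e-12
print(round(posterior, 4))  # 0.4286
```

With several independent pieces of evidence, the odds form just multiplies in one likelihood ratio per piece, which is why forecasters often work in odds (or log-odds) rather than probabilities.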
<p><b>Decision theory:</b><br />
Given possible decisions D_1…D_n and a loss function L( D_i , θ_j ),
the loss from choosing D_i when θ_j is the true state of nature,
choose the D_i that minimizes the expected loss<br />
<L_i> = SUM_j [ L( D_i , θ_j ) * ( θ_j |EX) ]<br />
over the posterior distribution of θ_j.</p>
<p>The above rules apply to inductive inference in general, whether
or not a frequency in a random process is involved.</p>
<p><b>General decision theory</b>:<br />
1. Enumerate the states of nature θ_j, discrete or continuous<br />
2. Assign prior probabilities ( θ_j|X) which maximize the entropy
subject to whatever information you have<br />
3. Digest any additional evidence E using Bayes’ theorem to obtain
the posterior probabilities ( θ_j|EX)<br />
4. Enumerate the possible decisions D_i<br />
5. Specify the loss function L( D_i , θ_j ) that tells you what you
want to accomplish<br />
6. Make the decision D_i which minimizes the expected loss<br />
<L_i> = SUM_j [ L( D_i , θ_j ) * ( θ_j|EX) ]</p>
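The six steps can be sketched end to end in a few lines. Every number below is invented for illustration (a toy umbrella decision), not taken from the text:

```python
# Hypothetical end-to-end run of steps 1-6: carry an umbrella or not,
# given the evidence E = "dark clouds this morning".
states = ["rain", "dry"]                 # step 1: states of nature
prior = {"rain": 0.3, "dry": 0.7}        # step 2: priors from X
likelihood = {"rain": 0.8, "dry": 0.2}   # P(E | state), invented

# Step 3: posterior via Bayes' theorem.
z = sum(prior[s] * likelihood[s] for s in states)
posterior = {s: prior[s] * likelihood[s] / z for s in states}

decisions = ["carry", "leave"]           # step 4: possible decisions
loss = {("carry", "rain"): 1, ("carry", "dry"): 1,   # step 5: loss function
        ("leave", "rain"): 10, ("leave", "dry"): 0}

# Step 6: choose the decision minimizing expected posterior loss.
expected = {d: sum(loss[(d, s)] * posterior[s] for s in states)
            for d in decisions}
best = min(expected, key=expected.get)
print(expected, best)
```

The same skeleton scales up directly: replace the dictionaries with arrays over many states and decisions, and the spreadsheet version the post recommends is just this table with labeled rows and columns.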
<p>The Kelly Criterion extends decision theory to allocating money in
betting and investment: it maximizes the expected logarithm of
wealth, and with it the long-run growth rate of capital. For
details, see Ed Thorp’s paper: "The Kelly Criterion in Blackjack, Sports Betting and the Stock Market" (45pp. <a href="http://www.eecs.harvard.edu/cs286r/courses/fall12/papers/Thorpe_KellyCriterion2007.pdf">PDF</a>)<br /></p>
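For the simplest case covered in Thorp’s paper, a binary bet at net odds b-to-1 with win probability p, the standard Kelly fraction is f* = p − (1−p)/b. The example probabilities below are my own:

```python
# Kelly fraction for a binary bet: win probability p, net odds b-to-1.
# f* = p - (1 - p)/b maximizes the expected log of wealth.
def kelly_fraction(p, b):
    return p - (1 - p) / b

print(kelly_fraction(0.6, 1.0))   # even-money bet, 60% win prob -> bet 0.2
print(kelly_fraction(0.55, 2.0))  # 2-to-1 payoff, 55% win prob -> bet 0.325
```

A negative f* means the edge is against you and the Kelly answer is to bet nothing (or take the other side if possible).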
<p>None of this works unless you use it. A spreadsheet is the easiest
way (label everything if you want to understand your calculations
later).
</p>
E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-42817926617086878002022-04-26T11:18:00.000-04:002022-04-26T11:18:50.614-04:00<p> Here's a corrected version of the <a href="http://mindsbasis.blogspot.com/2021/05/this-graph-comes-from-data-from.html">graph of intelligence vs. age and IQ</a> from my last post: <a href="https://drive.google.com/file/d/1aDvyyYyM1TNNLQa7QrjFtZ_Tw9EExCXZ/view?usp=sharing" target="_blank">Rasch measure Intelligence vs Age.pdf [Google Drive, single-page PDF (71kB)]</a>. The standard deviation for age 5 was a bit too low in the earlier version. I updated the PDF file linked there as well.</p><p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieoD20JLL9E_8cmiNn9vWFmOGHeTLTbUztuAgLnvuF7GB3jDHD8vJRmOWzz-fSS9pTkIM9bWRWgzzyeeV7Ws6IYFweKtHEStXGDYREhx4NbtiUf3px2OvHjTlGlA5IfDw1uhEqDTDqcDtbmWB1FxKDawijKMzETOON0v_H0g4OH76StW2KYJBRzODluw/s1246/Intelligence%20by%20Age%20and%20IQ.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1246" data-original-width="964" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieoD20JLL9E_8cmiNn9vWFmOGHeTLTbUztuAgLnvuF7GB3jDHD8vJRmOWzz-fSS9pTkIM9bWRWgzzyeeV7Ws6IYFweKtHEStXGDYREhx4NbtiUf3px2OvHjTlGlA5IfDw1uhEqDTDqcDtbmWB1FxKDawijKMzETOON0v_H0g4OH76StW2KYJBRzODluw/w496-h640/Intelligence%20by%20Age%20and%20IQ.png" width="496" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Woodcock-Johnson IV Absolute General Intellectual Ability by Age and IQ Score</td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;">Enon Harris 2022<br /></td></tr></tbody></table><br /> </p><p> </p><p>Here's a guide to 
how the "W-score" measure of intelligence, which is "absolute", in the sense it can be compared across different ages, converts to adult IQ levels, and to percentile scores for different occupations. This was adapted from <a href="https://www.iqcomparisonsite.com/Occupations.aspx">this chart</a> on iqcomparisonsite.com<br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB6LfH-lkCWwWI_d9G3hL2LmVDx6GoOVhEszJYuaO6K4yag6HEIfaaykFblUJcrVLt7ce8zBkVvZuvCx02PUYLzNIhv0bjPU9Wk9RY0_dm0hNMdcihUEu0AhEYFY-eVq_nOLX9J-ABa7BlDaqv5cGWZ_9dc387bJwsO-aGRLlOwAOfH9KHl-QXWbLZ4w/s950/IQ%20W-score%20by%20occupation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="950" data-original-width="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB6LfH-lkCWwWI_d9G3hL2LmVDx6GoOVhEszJYuaO6K4yag6HEIfaaykFblUJcrVLt7ce8zBkVvZuvCx02PUYLzNIhv0bjPU9Wk9RY0_dm0hNMdcihUEu0AhEYFY-eVq_nOLX9J-ABa7BlDaqv5cGWZ_9dc387bJwsO-aGRLlOwAOfH9KHl-QXWbLZ4w/s16000/IQ%20W-score%20by%20occupation.jpg" /></a></div><br /><p>It says on the source site: </p><p></p><blockquote>This graph was adapted from Figure 12 of Hauser, Robert M. 2002.
"Meritocracy, cognitive ability, and the sources of occupational
success." CDE Working Paper 98-07 (rev). Center for Demography and
Ecology, The University of Wisconsin-Madison, Madison,
Wisconsin. The figure is labelled "Wisconsin Men's Henmon-Nelson
IQ Distributions for 1992-94 Occupation Groups with 30 Cases or
More"</blockquote><p>The author of iqcomparisonsite, Rodrigo de la Jara, used it with permission, but I didn't; he has copyright to this image and I don't. A link to a PDF of the referenced paper is given at iqcomparisonsite (see link above).</p><p></p>E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-44785513705228050032021-05-11T14:29:00.006-04:002021-06-23T15:25:07.197-04:00<div class="separator" style="clear: both;"><a href="https://1.bp.blogspot.com/-yvIOHVbUHlk/YJrH1rWgOHI/AAAAAAAAC-o/E--y5tgFm1kPAF4yE2sP1alb2p-AtwCZQCNcBGAsYHQ/s1618/absolute%2Bintelligence%2Bage%2Bdistribution%2Bchart.png" style="display: block; padding: 1em 0px; text-align: center;"><img alt="" border="0" data-original-height="1013" data-original-width="1618" src="https://1.bp.blogspot.com/-yvIOHVbUHlk/YJrH1rWgOHI/AAAAAAAAC-o/E--y5tgFm1kPAF4yE2sP1alb2p-AtwCZQCNcBGAsYHQ/s600/absolute%2Bintelligence%2Bage%2Bdistribution%2Bchart.png" width="600" /></a></div><p>
This graph comes from data from the <a href="https://web.archive.org/web/20210511182047/https://pdf4pro.com/cdn/1588280-wjiv-techman-cd-insert-wjscore-com-59b916.pdf" target="_blank">Woodcock-Johnson IV Technical Manual</a>, p. 279-280. (large pdf, link opens in new window.). </p><p>The Woodcock-Johnson is one of the top IQ tests, which uses a Rasch measure of intelligence, called a W-score. (The Stanford-Binet uses the same scale, but calls it “change-sensitive scale”, CSS). Rasch measures are absolute measures of ability, in the same way that lengths are absolute measures of distance – they have a true zero, so one can say this question is 10% harder than that, or equivalently, that one person is 10% more intelligent. The difficulties of questions and the abilities of test-takers are computed all together in a matrix with rows being test-takers and columns being questions, with each matrix entry being 1 if that person got that question right, and 0 if they got it wrong. Using matrix math, difficulties and abilities are computed simultaneously. A bootstrapping procedure can update the matrix with additional questions and test results, giving an ongoing update to test norming. After validating questions (a complex but not arbitrary process), the only free choice in making a Rasch measure is choosing a reference score that sets the scale, which for the W-score and CSS was chosen to make the average, 100 IQ 10.0 year-old’s score 500. </p><p>I’ve made a graph of full test-scale (FSIQ) W-scores vs. age with additional lines for +/- 1 to 3 standard deviations. This allows comparing the absolute intelligence of people with different ages. So a 145 IQ (+3 s.d.) 8-year old can be seen to be best placed with a 130 IQ class of 10-year-olds, or a 115 IQ class of 13-year-olds. 
</p><p>I’ve found a highly accurate curve fit for W-scores ages 5 to 17: </p><p> </p><p style="margin-left: 40px; text-align: left;"><span style="font-family: helvetica;"><span style="font-size: large;">W-score = 547- 473/age </span></span></p><p style="margin-left: 40px; text-align: left;"><span style="font-size: large;"> </span></p><p> Above 17, the average CSS rises to 520 by age 22 and remains 520-521 through at least age 35.
Generally, the W-score standard deviation in adulthood is 10.5;
in childhood it falls from about 12 at ages 5-8 to 9.5 at ages 9-17.
Assuming an s.d. of 10 is good enough for most calculations, since the s.d. data is quite noisy.</p><p>*</p><p>Update June 5, 2021: </p><p>Here is an improved version:</p><br />And here is a one-page PDF of it which can be scaled without loss of resolution: <a href="https://drive.google.com/file/d/1aDvyyYyM1TNNLQa7QrjFtZ_Tw9EExCXZ/view?usp=sharing" target="_blank">Absolute intelligence age distribution chart PDF</a><br /><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-k9-JLSflZrk/YNOJ3-I-MMI/AAAAAAAADBQ/s29yY_z-BX4LiCLDw8ZLsWIbMG3Q2JFywCNcBGAsYHQ/s1284/WJ4%2Babsolute%2Bintelligence%2Bage%2Bdistribution%2Bchart%2B2.4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1284" data-original-width="924" height="320" src="https://1.bp.blogspot.com/-k9-JLSflZrk/YNOJ3-I-MMI/AAAAAAAADBQ/s29yY_z-BX4LiCLDw8ZLsWIbMG3Q2JFywCNcBGAsYHQ/s320/WJ4%2Babsolute%2Bintelligence%2Bage%2Bdistribution%2Bchart%2B2.4.png" /></a></div><br />E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-68323775313019438462016-04-23T23:56:00.002-04:002016-04-24T20:43:40.280-04:00Physical Units Factor Tables / Large Print PDF<iframe src="https://drive.google.com/file/d/0B8jlAwfWJx-sQ1I4Rk5qNVJkZm8/preview" width="600" height="770"></iframe>
<br />
Here's a version of PUFT which has larger text, making it more legible when used as a poster. Unfortunately, to get the text this large meant removing the equations for each unit type, which makes this version a little cryptic at first glance. The length factors of each unit type are indicated by the scales on the top and bottom, the time factors are shown on the left, and the factors common to the units on a table are shown in the colored box to its left. Here's a <a href="https://drive.google.com/file/d/0B8jlAwfWJx-sQ1I4Rk5qNVJkZm8/view?usp=sharing" target="_blank">link to the PDF</a>.E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-86986031401824234142016-03-28T16:42:00.002-04:002016-04-24T00:08:26.786-04:00Physical Units Factor Tables (PUFT)<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-UdLMGMBYj8c/VvmU2LK1N0I/AAAAAAAAAMg/u0hO1UiHKwgaWPcNw2TlIVEj9OgbY-1Uw/s1600/PUFT-1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="PUFT Copyright 2014 - 2016 Enon Harris" border="0" height="640" src="https://2.bp.blogspot.com/-UdLMGMBYj8c/VvmU2LK1N0I/AAAAAAAAAMg/u0hO1UiHKwgaWPcNw2TlIVEj9OgbY-1Uw/s640/PUFT-1.png" title="Physical Units Factor Tables (PUFT)" width="494" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i><span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;">Physical Units Factor Tables (PUFT)</span></i><br />
<h3>
<a href="https://2.bp.blogspot.com/-UdLMGMBYj8c/VvmU2LK1N0I/AAAAAAAAAMg/u0hO1UiHKwgaWPcNw2TlIVEj9OgbY-1Uw/s1600/PUFT-1.png" target="_blank"><b><span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;"><span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;"><span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;">Link to f</span>ull<span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;">-</span>size <span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;">PUFT picture</span></span></span></b></a></h3>
<h3>
<b><span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;"><span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;"><span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;">[Ed<span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;">it: <a href="https://drive.google.com/open?id=0B8jlAwfWJx-sM1UwWUhVSGUxYm8" target="_blank">Link t</a><span style="font-family: "helvetica neue" , "arial" , "helvetica" , sans-serif;"><a href="https://drive.google.com/open?id=0B8jlAwfWJx-sM1UwWUhVSGUxYm8" target="_blank">o PDF</a>]</span></span> </span></span></span></b></h3>
</td></tr>
</tbody></table>
I drafted the Physical Units Factor Tables (PUFT) a bit over a year ago and meant to send it off to publishers as a poster chart for physics classrooms, similar to the periodic table in chemistry classes, but I somehow never got around to it.<br />
<br />
The Physical Units Factor Tables organize 50 types of physical units by
their factors of length, time, mass and charge so that the mathematical
relationships between physical units are easy to see. <br />
<br />
The Physical Units Factor Tables encourage anyone who can multiply and divide simple fractions to deduce equations in mechanics and electromagnetics.<br />
<br />
The single-page document is also marginally
legible when printed in color on a single sheet of letter-size paper,
but students and teachers with access to computers will likely find the
electronic version easier on the eyes.<br />
<br />
Each move left represents multiplication by length, each move down is division by time ( = multiplication by frequency). Similarly, moving right represents division by length and each move up represents multiplying by time. These factors are the same in all tables in the stack, with each lower table having an additional factor:<br />
<blockquote class="tr_bq">
light blue table = * mass<br />
green table = * 1/charge<br />
pink table = * mass/charge<br />
purple table = * mass/charge^2</blockquote>
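The bookkeeping behind these moves amounts to adding exponent vectors. A sketch of the idea; the unit names and exponents below are standard physics choices of mine, not read off the chart itself:

```python
# A unit type as a vector of exponents over (length, time, mass, charge).
# Multiplying two quantities adds their exponent vectors, which is what
# a "move" on the factor tables does.
from collections import namedtuple

Dim = namedtuple("Dim", "length time mass charge")

def mul(a, b):
    """Multiply two unit types: exponents add component-wise."""
    return Dim(*(x + y for x, y in zip(a, b)))

velocity = Dim(1, -1, 0, 0)       # m/s: one move "left", one move "down"
mass = Dim(0, 0, 1, 0)            # drop to the light blue (* mass) table
momentum = mul(mass, velocity)    # kg*m/s
energy = mul(momentum, velocity)  # kg*m^2/s^2

print(momentum, energy)
```

Reading an equation off the tables is the reverse: find two cells whose exponent vectors sum to the target cell.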
<br />
The names of the unit types are taken from Alan Eliasen's wonderful calculator and physically-typed programming language, <a href="https://frinklang.org/" target="_blank">Frink</a>. (Except for the ones whose top line is in parentheses; these names aren't listed in Frink, though it can easily compute using such quantities.)<br />
<br />
The original was done in an Open Office spreadsheet, then saved as a PDF file. Among several other versions, I also have one that is more legible from a distance for use as a poster.<br />
<br />
If any publishers or science teachers are interested in using PUFT, please let me know.<br />
<br />
<a name='more'></a><br />
<div style="text-align: center;">
<span style="font-size: large;">* </span></div>
<h4>
Unit types included:</h4>
time,<br />
volume,<br />
area,<br />
length,<br />
dimensionless,<br />
linear spatial frequency,<br />
area spatial frequency,<br />
volume spatial frequency,<br />
flow,<br />
area per second,<br />
velocity,<br />
frequency,<br />
specific energy,<br />
acceleration,<br />
frequency squared,<br />
moment of inertia,<br />
kilogram-meters,<br />
mass,<br />
linear mass density,<br />
areal mass density,<br />
mass density,<br />
angular momentum,<br />
momentum,<br />
mass flow,<br />
viscosity,<br />
energy,<br />
force,<br />
surface tension,<br />
pressure,<br />
power,<br />
heat flow density,<br />
magnetic field strength,<br />
current,<br />
electrical dipole moment,<br />
moment of inertia per coulomb,<br />
mass per charge,<br />
magnetic flux,<br />
vector potential,<br />
magnetic flux density,<br />
electric potential,<br />
electric field strength,<br />
inductance,<br />
permeability,<br />
electric resistivity,<br />
electric resistance,<br />
inverse permittivity,<br />
inverse capacitance,<br />
electric current density,<br />
magnetic moment,<br />
charge,<br />
linear charge density,<br />
surface charge density,<br />
electric charge density,<br />
temperature,<br />
thermal resistance,<br />
thermal resistivity,<br />
thermal insulance,<br />
thermal inductance,<br />
entropy,<br />
heat capacity,<br />
enthalpy,<br />
specific heat capacity,<br />
molar heat capacity,<br />
thermal capacitance,<br />
entropy flow,<br />
thermal conductivityE. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-65837034086281126982016-03-28T15:13:00.000-04:002016-03-28T16:20:02.629-04:00Converting IQ at a Given Age to an Absolute (Rasch) Measure of Intelligence<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-tXLt9eo5s6Q/VvlvcmScDkI/AAAAAAAAAMU/Vx12AxqJYcYiGwwNSxFp7jA9mbfqNLcDA/s1600/rasch%2Bintelligence%2Bscore%2Bvs%2Bage%2Bremake%2B1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="409" src="https://2.bp.blogspot.com/-tXLt9eo5s6Q/VvlvcmScDkI/AAAAAAAAAMU/Vx12AxqJYcYiGwwNSxFp7jA9mbfqNLcDA/s640/rasch%2Bintelligence%2Bscore%2Bvs%2Bage%2Bremake%2B1.png" width="640" /></a></td></tr>
<tr align="justify"><td class="tr-caption"><div class="notranslate slideshow-title-text" itemprop="headline">
Rasch measure of intelligence, ages 2-25, +/- 3 s.d., from the norming of the Woodcock-Johnson IQ test block rotation subtest. This is my remake of the source, with the scale changed to years, text replaced, and a grid added: Kevin McGrew slideshow "<a href="http://www.slideshare.net/iapsych/applied-psych-test-design-part-c-use-of-rasch-scaling-technology" target="_blank">Applied Psych Test Design: Part C - Use of Rasch scaling technology</a>" Slide 19 (2009), which had the original caption: <i>Block Rotation: Final Rasch with norming test. n = 37 norming items, n = 4722 norm subjects. Item map with “steps” displayed for items. Red area represents the complete range (including extremes) of sample Block Rotation W-scores. Good test scale coverage for complete range of population.</i></div>
</td></tr>
</tbody></table>
<span id="goog_1757158187"></span><span id="goog_1757158188"></span><br />
<span id="bc_0_9b+seedGLgnD" kind="d">Rasch measures of intelligence are an interesting and important part of psychometrics, as they provide an absolute measure of intelligence, not only an "equal interval" scale (as with Fahrenheit and Celsius) but one with a proper zero (as with Kelvin), also known as a ratio scale (not to be confused with the mental/chronological age ratio used in early IQ tests). Because they are ratio measures, Rasch measures </span><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d">allow all arithmetic operations (
*,/,+,-, rather than at most + and - for IQ) and form the basis for item
response theory (IRT) in general. (See the letter following this post for more.) Rasch measures also have the interesting property of putting </span>item difficulties and
test-taker abilities on the same scale, so that if a person with a certain ability score tries an item with the same difficulty score, then he has a 50% chance of success.</span>
<span id="bc_0_9b+seedGLgnD" kind="d"> </span><span id="bc_0_9b+seedGLgnD" kind="d"> </span><br />
<span id="bc_0_9b+seedGLgnD" kind="d">The above graph was adapted from one used in the block rotation subtest norming of the Woodcock-Johnson IQ test (WJ), a product of Riverside Publishing, (a division of </span><span id="bc_0_9b+seedGLgnD" kind="d">Houghton Mifflin Harcourt.) The Stanford-Binet (SB5), also published by Riverside uses the same scale
("change-sensitive" score or scale "CSS"), which has as its only
arbitrary choice setting the CSS for an average 10-year old equal to
500.</span><br />
<br />
<span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d">The </span></span><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d">paper: <a href="https://secure.riversidepublishing.com/products/sb5/pdf/SB5_ASB_3.pdf" rel="nofollow">Assessment Service Bulletin Number 3: Use of the SB5 in the Assessment of High Abilities</a>,</span> has on page 12 of the PDF </span></span><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d">(table 4) </span></span>a reprint from the SB5
interpretive manual of the average full-scale CSS scores for different ages,
which closely matches the average line in the graph above, so the block
rotation subtest average scores vs. age should be a reasonable proxy for
the full scale, (though there is reason to think the standard deviations on the WJ block rotation subtest shown in the graph are
likely somewhat smaller than for the full scale score of the SB5). (See the end of this post for table 4 in usable form.) </span></span><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d">Unfortunately Riverside seems reluctant to publish the
average age- vs. CSS or W-score graphs for either full test, let alone for
different standard deviations, so using the BR subtest as a proxy for the full scale is the best we can do.</span></span> </span><br />
<span id="bc_0_9b+seedGLgnD" kind="d"> </span><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"> </span></span><br />
<span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d">Using a horizontal
straightedge on the graph allows equating a given CSS score to z-scores
at different ages. (</span></span><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"> z-scores = standard deviations, equivalent to 15 IQ points)</span></span> The Mk.I eyeball gives a pretty decent estimate of
fractional z-scores falling between the s.d. lines, but one can use the
line or measurement tool in a decent paint program such as Paint.NET or Gimp to get better measurements of the z-score that equates to a
given CSS at a given age. (Adding a </span></span><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d">T-square on</span></span> a moveable transparent layer is also useful.) - </span></span><span id="bc_0_9b+seedGLgnD" kind="d"><b><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d">This
allows comparing the absolute intelligence of people with different ages
and z-scores.</span></span></b> </span><br />
<br />
<a name='more'></a><br /><br />
<span id="bc_0_9b+seedGLgnD" kind="d">Since division is a valid operation on scores on this scale, one
can say that in an absolute sense, the average adult with a score of 510
to 515 is only 2 or 3% more intelligent than the average 10 year-old,
and less than 10% smarter than the average 5 year old with a score of
470.</span><br />
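Because CSS/W-scores sit on a ratio scale, the percentage claims above reduce to simple division. A quick Python check using the scores quoted in this post:

```python
# Ratio comparisons on the CSS/W ratio scale, using values from the post.
css_adult_lo, css_adult_hi = 510, 515  # average adult range
css_age10 = 500                        # scale anchor: average 10-year-old
css_age5 = 470                         # average 5-year-old

def pct(a, b):
    """Percent by which score a exceeds score b (valid on a ratio scale)."""
    return 100 * (a / b - 1)

print(round(pct(css_adult_lo, css_age10), 1))  # 2.0  -> "2 or 3% more"
print(round(pct(css_adult_hi, css_age10), 1))  # 3.0
print(round(pct(css_adult_hi, css_age5), 1))   # 9.6  -> "less than 10% smarter"
```

The same division is meaningless for ordinary IQ scores, which is the whole point of the ratio-scale distinction discussed below.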
<br />
<span id="bc_0_9b+seedGLgnD" kind="d">The
25 CSS point difference between +3s.d. adults (~534 </span><span id="bc_0_9b+seedGLgnD" kind="d"><span id="bc_0_9b+seedGLgnD" kind="d">CSS</span>) and average adults (509 CSS) is a point larger than the difference between
average adults and average 5 year olds (~483 CSS). </span><br />
<br />
<span id="bc_0_9b+seedGLgnD" kind="d">The 512 CSS score of a +3 s.d. 5 year
old is about the same as a +0.5 s.d. 22 year-old, which would be about
what I would expect the typical graduating psychology major to score.
There are many other such comparisons; I have enjoyed hours playing with
that chart. </span><br />
<span id="bc_0_9b+seedGLgnD" kind="d"><br /><b>I'd be very interested in finding similar charts for a full-scale test, fluid / crystallized scales or any other sub-tests.</b></span><br />
<span id="bc_0_9b+seedGLgnD" kind="d"><br /> </span><br />
<span id="bc_0_9b+seedGLgnD" kind="d">*</span><br />
<blockquote class="tr_bq">
<span id="bc_0_9b+seedGLgnD" kind="d">Here is a quote from a <a href="http://miyaguchi.4sigma.org/gradytowers/about_rasch.html" target="_blank">letter</a> written in 1999 by the late Prometheus Society member Grady Towers :</span><span id="bc_0_9b+seedGLgnD" kind="d"></span></blockquote>
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<span id="bc_0_9b+seedGLgnD" kind="d">There are four levels of measurement generally acknowledged by statisticians. From weakest to strongest, these are nominal, ordinal, interval and ratio. These are important because they determine what kind of statistical procedure can be used. Any statistical procedure using a given level of measurement can be used only on that level. But tests of lesser strength can also be used for the same data. Nominal strength data, for example, can use only tests and procedures appropriate for nominal data. Interval strength data can be tested with interval level tests, but they can also be tested with ordinal level tests and nominal level tests. There's a tradeoff. The lower the level of statistical test used, the fewer assumptions need to be made about the data (normality, symmetry, homoscedasticity, etc), but the larger the sample has to be to reject the null hypothesis.</span><br />
<blockquote class="tr_bq">
<blockquote class="tr_bq">
<br />
<span id="bc_0_9b+seedGLgnD" kind="d"><b>Nominal scale</b>: numbers are used to name, identify or classify. Telephone numbers are a nominal scale. The correct/incorrect responses used on the items from mental ability tests are also on a nominal scale. Only the statistical techniques based on counting are permitted.</span><br />
<br />
<span id="bc_0_9b+seedGLgnD" kind="d"><b>Ordinal scale</b>: numbers represent rank or order. The numbers used to represent the hardness of minerals, from diamond as 10 and talc as 1, represent an ordinal scale. Some people believe that mental abilities represent at most an ordinal scale. Only statistical procedures based on counting, and on greater than or less than are permitted.</span><br />
<br />
<span id="bc_0_9b+seedGLgnD" kind="d"><b>Interval scale</b>: intervals between numbers are presumed to be equal. IQ tests are thought to be approximately on an interval scale. They have been described as rubber rulers. Only statistical techniques based on counting, and greater than and less than, and addition and subtraction are permitted.</span><br />
<br />
<span id="bc_0_9b+seedGLgnD" kind="d"><b>Ratio scale</b>: all numbers are thought to represent a distance from zero. Weight and distance are ratio scales. All statistical (arithmetic) procedures are permitted, including multiplication and division. This is called the ratio scale because it's permitted to say that one measuremement is twice as large as another. Ten feet is twice as long as five feet. This is not permitted on an interval scale. It is not permitted to say that an IQ of 140 is twice as great as an IQ of 70.</span></blockquote>
</blockquote>
</blockquote>
<blockquote class="tr_bq">
Do you get the idea? </blockquote>
</blockquote>
<blockquote class="tr_bq">
<blockquote>
<div align="left">
Rasch scores are not rubber rulers! They are on a
rigid interval scale. But what is truly apocalyptic about them is
that there is a mathematical transformation that will put them on
a ratio scale. For the first time in history, it is possible to
say that one person is twice as intelligent as another. For the
first time in history, it's possible to construct an intelligence
scale with amoebas at one end and Jehovah at the other.</div>
</blockquote>
It's also worth browsing the other psychometric and ultrahigh-IQ society miscellanea at Darryl Miyaguchi's archived <a href="http://miyaguchi.4sigma.org/" target="_blank">site</a>. </blockquote>
*<br />
SB5 CSS averages (100 IQ) for age:<br />
<table border="0" cellspacing="4">
<tbody>
<tr>
<th align="RIGHT">CSS</th>
<th align="RIGHT">Age</th>
</tr>
<tr><td align="RIGHT">510</td><td align="RIGHT">16.17</td></tr>
<tr><td align="RIGHT">505</td><td align="RIGHT">12.75</td></tr>
<tr><td align="RIGHT">500</td><td align="RIGHT">10.00</td></tr>
<tr><td align="RIGHT">495</td><td align="RIGHT">8.67</td></tr>
<tr><td align="RIGHT">490</td><td align="RIGHT">7.67</td></tr>
<tr><td align="RIGHT">485</td><td align="RIGHT">6.83</td></tr>
<tr><td align="RIGHT">480</td><td align="RIGHT">6.08</td></tr>
<tr><td align="RIGHT">475</td><td align="RIGHT">5.50</td></tr>
<tr><td align="RIGHT">470</td><td align="RIGHT">5.00</td></tr>
<tr><td align="RIGHT">465</td><td align="RIGHT">4.50</td></tr>
<tr><td align="RIGHT">460</td><td align="RIGHT">4.00</td></tr>
<tr><td align="RIGHT">455</td><td align="RIGHT">3.67</td></tr>
<tr><td align="RIGHT">450</td><td align="RIGHT">3.25</td></tr>
<tr><td align="RIGHT">445</td><td align="RIGHT">2.92</td></tr>
<tr><td align="RIGHT">440</td><td align="RIGHT">2.58</td></tr>
<tr><td align="RIGHT">435</td><td align="RIGHT">2.25</td></tr>
</tbody>
</table>
<br />
<br />
<br />E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com5tag:blogger.com,1999:blog-1231185357032365360.post-3666933514216805792014-07-30T14:20:00.005-04:002014-07-30T14:22:20.605-04:00Old family photo posts have been moved to a new page.Old family photo posts have been moved to a <a href="http://mindsbasis.blogspot.com/p/blog-page.html" target="_blank">new page</a>.E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com5tag:blogger.com,1999:blog-1231185357032365360.post-25226897059352061582014-07-19T20:54:00.003-04:002014-07-19T20:58:15.806-04:00A Curious Way to Represent Numbers: Ternary Factor Tree Representation
This post is likely to interest only those drawn to the hidden backwaters of math, and maybe not too many of them. It also has little to do with the other posts here so far. <br />
<br />
The conventional system of number representation cannot exactly
represent most numbers. Fractions whose denominators contain prime
factors not dividing the number system's base (for base 10, any factor
other than 2 and 5) have infinite repeating decimal representations.
Square roots and other irrational numbers have infinite, non-repeating
representations.
<br />
<br />
<br />
In the late 1990s I came up with an alternative system that can
exactly represent any rational number and a broad class of irrational
and even transcendental numbers using only a finite number of bits.
This type of representation is based upon the idea of factored
representations of integers, extended in a logical way to a nearly
universal system for describing number structure.<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-ytAQVvnYmMI/U8sPoR9saDI/AAAAAAAAAF0/EuOt7BMt_cQ/s1600/Mandala5.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-ytAQVvnYmMI/U8sPoR9saDI/AAAAAAAAAF0/EuOt7BMt_cQ/s1600/Mandala5.jpg" height="400" width="391" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Pretty but unrelated</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<br />
<a name='more'></a><br />
<br />
<br />
If each numerical place is assigned a value corresponding with
each successively larger prime number, then any square-free integer
can be represented in factored form as a binary string with 1s in the
spots that are factors and zeros elsewhere. For instance <span style="font-family: "Courier New",Courier,monospace;">2*3*5*11 =
330</span> would be represented <span style="font-family: "Courier New",Courier,monospace;">[1,1,1,0,1]</span> (smallest factor on the left.)
This is equivalent to raising each prime to the power of the
corresponding binary entry in the string: [2^1, 3^1, 5^1, 7^0, 11^1].
By extending this to entries in {-1,0,1}, (ternary, “trits”) then
rational fractions can be represented so long as both their numerator
and denominator have no squared factors. So 2/3 would be represented
<span style="font-family: "Courier New",Courier,monospace;">[1,-1]</span>, and 6/7 =<span style="font-family: "Courier New",Courier,monospace;"> [1,1,0,-1]</span><br />
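The square-free mapping just described is easy to sketch in Python; the function names and the use of Python's exact Fraction type here are my own choices, not part of the original scheme:

```python
from fractions import Fraction

def primes(n):
    """Return the first n primes, smallest first (simple trial division)."""
    ps, cand = [], 2
    while len(ps) < n:
        if all(cand % p for p in ps):
            ps.append(cand)
        cand += 1
    return ps

def trits_to_fraction(trits):
    """Interpret a trit string over successive primes: 1 puts the prime
    in the numerator, -1 in the denominator, 0 omits it."""
    result = Fraction(1)
    for p, t in zip(primes(len(trits)), trits):
        result *= Fraction(p) ** t
    return result

# [1,1,1,0,1] -> 330, [1,-1] -> 2/3, [1,1,0,-1] -> 6/7
```

Only square-free numerators and denominators are reachable at this stage, exactly as stated above.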
<br />
<br />
To allow representation of numbers with squares or higher powers
of the prime factors, the same scheme can be applied to each entry in
the list of base factors. So if:<br />
6 = 2*3 = [1,1], then
<br />
<span style="font-family: "Courier New",Courier,monospace;">6^6 = 46656 =
</span><br />
<span style="font-family: "Courier New",Courier,monospace;">
</span><span style="font-family: "Courier New",Courier,monospace;">[2^(2*3) * 3^(2*3)] =
</span><br />
<span style="font-family: "Courier New",Courier,monospace;">
</span><span style="font-family: "Courier New",Courier,monospace;">[[1,1],[1,1]].
</span><br />
<br />
<br />
To allow representing negative numbers and zero, a leading entry
is needed, a “one's place”, so that positive numbers begin with
1, negative numbers with -1, and zero with 0. Applying this leading
entry to the exponents allows representing all negative exponents
and rational exponents, thus all rational roots of rational numbers.
The examples above would all have leading “1”s added, so that 6^6
would become: <span style="font-family: "Courier New",Courier,monospace;">[1,[1,1,1],[1,1,1]].
</span><br />
<span style="font-family: "Courier New",Courier,monospace;">
</span><span style="font-family: "Courier New",Courier,monospace;">-(6^6) = [-1,[1,1,1],[1,1,1]]</span><br />
<span style="font-family: "Courier New",Courier,monospace;">
</span><span style="font-family: "Courier New",Courier,monospace;">6^(-6) = [1,[-1,1,1],[-1,1,1]]</span><br />
<span style="font-family: "Courier New",Courier,monospace;">
</span><span style="font-family: "Courier New",Courier,monospace;">6^(1/6) = [1,[1,-1,-1],[1,-1,-1]]</span><br />
<span style="font-family: "Courier New",Courier,monospace;">
</span><span style="font-family: "Courier New",Courier,monospace;">-(6^(-1/6)) = [-1,[-1,-1,-1],[-1,-1,-1]]</span><br />
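A minimal Python analogue of the scheme so far — leading sign trit, nested exponent lists — might look like the sketch below. It is my own illustration, restricted to exponents that evaluate to integers, since fractional exponents give irrational values that exact arithmetic cannot return:

```python
from fractions import Fraction

def first_primes(n):
    """First n primes via trial division."""
    ps, cand = [], 2
    while len(ps) < n:
        if all(cand % p for p in ps):
            ps.append(cand)
        cand += 1
    return ps

def tftr_value(node):
    """Evaluate a TFTR node: either a bare trit in {-1, 0, 1} or a list
    [sign, e1, e2, ...] where each e_i is itself a node giving the
    exponent of the i-th prime."""
    if isinstance(node, int):
        return Fraction(node)
    sign = Fraction(node[0])
    if sign == 0:
        return Fraction(0)              # any string starting with 0 is zero
    value = sign
    for p, sub in zip(first_primes(len(node) - 1), node[1:]):
        e = tftr_value(sub)
        if e.denominator != 1:
            raise ValueError("fractional exponent: value is irrational")
        value *= Fraction(p) ** int(e)
    return value

# tftr_value([1, [1,1,1], [1,1,1]]) -> 46656 = 6^6
```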
<br />
<br />
<br />
<br />
The representation can be applied recursively to allow exponents
that themselves have squares, etc.:<br />
<span style="font-family: &quot;Courier New&quot;,Courier,monospace;">4 = [1,[1,1]] , 16 = [1,[1,[1,1]]] , 65536 = [1,[1,[1,[1,1]]]] ,
2^65536 = [1,[1,[1,[1,[1,1]]]]]</span><br />
Deeply embedded lists are powers of powers of powers, and so on; very
large numbers can be compactly represented.<br />
<br />
<br />
This scheme also allows representing irrational roots, e.g.
<span style="font-family: "Courier New",Courier,monospace;">2^(2^(1/2)) = [1,[1,[1,-1]]]</span>. More deeply embedded roots are “more
transcendental”.<br />
<br />
<br />
The representation scheme can be visualized by taking the exponent
strings to be at right angles to their parent strings, so that an
n-level deep embedding is seen as an n-dimensional tree structure.
Square-free numbers are 1-D, numbers with square-free exponents to
some factors in the base string are 2-D, numbers with some factors
raised to e.g. powers of 4 are 3-D, with factors raised to powers of
16 are 4-D, and higher dimension integers are generally <i>extremely</i>
large.<br />
<br />
<br />
Therefore I call this type of numerical representation “ternary
factor tree representation”, TFTR for short.<br />
<br />
<br />
Addition and subtraction are not possible in this representation
without converting to a conventional representation, performing the
operation, and converting back. This works for rational numbers, but
not so well for irrational numbers – for instance, <span style="font-family: &quot;Courier New&quot;,Courier,monospace;">1
+ sqrt(3)</span> has no exact representation of its own, though of course approximations
can be constructed.<br />
Multiplication and division are somewhat easier in this scheme
than in conventional representations; just add the exponents for each
place in the base string together. This will seldom require a
difficult conversion back and forth since exponents are generally
small. Exponentiation and roots are even easier, just requiring
multiplying the exponents together.<br />
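To see the multiply-by-adding-exponents rule concretely, it helps to flatten each number to a map from primes to ordinary integer exponents instead of nested trits — a deliberate simplification for illustration, not the full TFTR:

```python
from fractions import Fraction

def factor_map(q):
    """Prime -> integer exponent map of a positive Fraction."""
    exps = {}
    for part, step in ((q.numerator, 1), (q.denominator, -1)):
        n, p = abs(part), 2
        while n > 1:
            while n % p == 0:
                exps[p] = exps.get(p, 0) + step
                n //= p
            p += 1
    return exps

def multiply(a, b):
    """Multiply factored numbers by adding exponents place by place."""
    out = dict(a)
    for p, e in b.items():
        out[p] = out.get(p, 0) + e
        if out[p] == 0:
            del out[p]
    return out

def power(a, k):
    """Exponentiation just multiplies every exponent by k."""
    return {p: e * k for p, e in a.items()}

# 6 * (2/3) = 4: the exponent of 3 cancels, the exponent of 2 doubles.
```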
<br />
<br />
By considering patterns of trits in TFTR, many interesting types
of numbers can be specified.<br />
Primorials, (the equivalent of factorials, but multiplying
together only prime numbers) are the easiest pattern to specify:
<span style="font-family: "Courier New",Courier,monospace;">[1,1,1,1,1,1, ....]</span>. Considering infinite sequences, there is a
unique largest square-free infinite number, the infinite primorial
represented by an infinite string of 1s. It would have some curious
properties for an infinite number: it would have a decimal
representation ending in a single 0 since it is divisible by 10, and
the sum of its digits would be an infinite integer divisible by 3,
but not 9, since the primorial itself is divisible by 3 but not 9.
Yet at the same time the digits would generally be uncomputable since
they all depend on infinite factors.<br />
<br />
<br />
Dividing the infinite primorial by successively larger prime
numbers yields an infinite sequence of infinite numbers, each
infinitely smaller than the last. (Note that each place value in the
infinite primorial can be paired with the same place in the number to
which it is identical (except for a single 0 in the spot
corresponding to the prime by which it was divided), so Cantor
diagonal arguments do not apply in this case.)
<br />
<br />
<br />
The power set of representations of infinite square-free integers,
that is, the set of all infinite bit strings would seem to provide a
transfinite number of infinite numbers. These would seem to not be
totally orderable because the largest factors in each number would
nearly always be of varying infinities. The subset of these that have
no 0s - no missing factors – above some place-value should still be
orderable, as should any that have the same knowable repeating
patterns of 1s and 0s above some place-value. (Even patterns with
very long cycles, such as from pseudo-random number generators.)<br />
<br />
<br />
The category of numbers (including finite, infinite, fractions,
roots, etc.) representable by computable patterns of factors is the
grand arena of number theory. (Or at least as grand as I can now
conceive.)<br />
<br />
<br />
By considering exponents in the one's place, I speculate that
complex roots of unity and -1 may be specified, and perhaps
quaternions as well. The significance of doing this in the one's
places of the prime factors and exponents' prime factors is unclear,
but would seem to allow specifying “phases” for each factor at
each level of exponents, perhaps analogous to having the branches of
the tree at angles other than 90 degrees. There are less confusing
areas to explore first.<br />
<br />
<br />
Another peculiarity of TFTRs is that there are as many
representations of zero as there are of the positive or negative
numbers. (Any string starting with 0 is equal to zero.) This applies
to exponents as well, so there are an infinite number of
representations of each prime raised to the 0<sup>th</sup> power, and
so an infinite number of representations of 1. With these exceptions,
though, each different pattern of trits represents a unique number.<br />
<br />
<br />
Compressed representations of TFTRs may be of practical interest,
particularly compressing long strings of zeros. One scheme I have
considered uses the integer exponent of a zero term in the exponent's
parent string to specify a long string of zeros in the parent string.
This may conceptually muck up the representation, though, so perhaps
applying conventional compression outside the representation is a
better way to handle it. Coding could certainly be more efficient
using an alphabet of {-1, 0, 1, [, ]}, (requiring three bits per
symbol rather than 8 for ASCII) though other options may be even
better, for instance: {[-1, 0, [1, ]}, which fits in 2 bits, though
it requires using a “]” after each non-zero entry.<br />
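The five-symbol serialization can be sketched as follows; the particular three-bit code assignments are invented here just to make the bit-counting concrete, and are not a proposal from the text:

```python
# Five symbols suffice to write any nested trit list; three bits cover them.
CODES = {'-': 0b000, '0': 0b001, '1': 0b010, '[': 0b011, ']': 0b100}

def symbols(node):
    """Flatten a nested trit list into the 5-symbol alphabet."""
    if isinstance(node, int):
        return {-1: '-', 0: '0', 1: '1'}[node]
    return '[' + ''.join(symbols(n) for n in node) + ']'

def bit_cost(node):
    """(bits at 3 per symbol, bits at 8 per ASCII character)."""
    s = symbols(node)
    assert all(ch in CODES for ch in s)
    return len(s) * 3, len(s) * 8

# 6^6 = [1,[1,1,1],[1,1,1]] serializes to "[1[111][111]]": 13 symbols,
# so 39 bits instead of 104 bits of ASCII.
```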
<br />
<br />
I wrote a preliminary version of a function to translate TFTR
numbers to decimal. Exponents in the one's place do not work in this
version, and irrational numbers are converted to floating-point,
which of course loses precision. It is written in the interpreted JVM
language <a href="http://futureboy.us/frinkdocs/" target="_blank">Frink</a>, which has some very useful built-in functions for the
purpose, especially the nextPrime[n] function, which is blazingly
fast, taking about 6s to find a 1024-bit prime. The factor[n]
function will also be useful for the conversion from decimal back to
TFTR; it is quite fast, factoring the first 10M integers in less than
25s on my machine, and taking only 10s to factor the product of two
50-bit primes and 1000s for two 56-bit primes. I use Frink for nearly
all my daily exploratory calculations since it keeps track of all
physical units, but it also has unlimited-precision rational number
support which makes it quite useful for this TFTR converter.
<br />
<br />
<br />
Frink can do many other things, too, from currency conversions to
dimensioned graphics to human language translation. Frink is
available free and runs on any device with a recent version of Java.
It can also be embedded in web pages, so if you do not want to
install it you can go to such a page and use it online.
<br />
<br />
<br />
<br />
<span style="font-size: x-small;"><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">Working version 1.0:</span></span><br />
<pre style="font-size: x-small;">//primeArray[place] returns an array of length place, starting [1,2,3,5,...],
//with a leading 1 at index 0, so the kth prime is at index k

//test for positive integers, boolean return
positiveInteger[x] := (x&gt;=1) &amp;&amp; (x % 1 == 0)

primeArray[place is [positiveInteger]] :=
{
   ret = [1]
   for cnt = 1 to (place-1)
      ret@cnt = nextPrime[ret@(cnt-1)]   // nextPrime[] is a Frink built-in function
   return ret
}

// Ternary Factor Tree Representation to decimal.
// Input must be an array, possibly multidimensional, whose elements are
// either in {-1,0,1} or are arrays whose elements are in {-1,0,1}
// when recursion bottoms out (leaf nodes). THIS IS NOT CHECKED!
// Arrays starting with -1 are negative. Later negative elements are in the denominator.
// Elements which are themselves arrays represent exponents, whether positive,
// negative, integer or rational. Deeper levels of arrays can specify irrational
// exponents, super-exponents, etc.

TFTRtoDec[x1] :=
{
   x1 = toArray[x1]   // cast to array allows recursion, since input=array, output=number
   ret = [x1@0]       // first element of input (“one's place”) acts as sign or zero trit

   if length[x1] &lt;&gt; 1
   {
      cnt = 0
      b = primeArray[length[x1]]   // array of the first primes, b@0 = 1

      // raise each prime to the power of the corresponding input element
      while cnt &lt; length[x1]
      {
         ret@(cnt+1) = b@(cnt)^TFTRtoDec[x1@(cnt)]   // recursion
         cnt = cnt + 1
      }
   }
   return product[ret]   // product of the 1-D array's elements: a number, not an array
}</pre><br />
<br />
<br />
<br />
E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-64471236847502368172014-07-02T17:30:00.002-04:002014-07-02T17:31:54.650-04:00Universe, Physics and Simulation[<i>Another post from the archives, this time from late March 2014. I overlooked it or I would have posted it earlier</i>.]<br />
<br />
The chief business of science, particularly physics, is modeling
of the universe's phenomena. Modeling phenomena is also a definition
of simulation. The universe may not be information, but all we can
know of it is information. The universe may not be simulation but all
the theories that we can make about it can only be tested through
simulation and comparison of the information from the simulation with
that from the universe.<br />
<br />
<br />
The universe may in fact have characteristics of a simulation; it
seems likely that models designed to resemble the universe will do
so, and therefore that the universe will resemble the models just as
well as the reverse – sometimes in unforeseen ways. Some of the
characteristics of models that are commonly thought to be artificial
or mere approximations may be capable of telling us secrets of how
the universe really works.<br />
<br />
<br />
Physicists spend a great deal of time with equations, but it is
only when actual numbers representing a given situation are plugged
in that these equations can be said to represent anything in the
physical world; in fact, representing any kind of field in any but
the simplest situations demands iterating the equations with numbers
for every point in space and all velocities or wavelengths, which
means plugging astronomical numbers of coefficients into the
equations even for a crude approximation.
<br />
<br />
<br />
How to minimize the number of computations for a given level of
accuracy and complexity is the central concern of simulation. For
instance, often space is divided into a mesh which is sparse where
the situation is simple and dense in more complicated regions.
Another commonly used technique, when a matrix is enormous but has
mostly zero entries, is to store only the few entries containing
informative numbers. Other types of compression are also used
whenever possible, and compression itself is a rich subject.
<br />
<br />
<br />
If the universe resembles a simulation, it too should show
evidence of compression techniques. One obvious one is to only store
one copy of identical items, and just use a pointer to that copy
wherever another such item appears. A bit more advanced is to only
store the differences between near-identical items, together with a
single prototype copy as above. This is essentially a programmer's
sort of platonism. Even beyond that is compression of analogous
structures more generally, which can quickly become quite complex.<br />
<br />
<br />
This compression effect would also potentially seem able to
account for some of the observations that led Rupert Sheldrake to
propose the existence of morphic fields and morphic resonance. Once a
prototype structure exists, it takes much less computation and
storage for the universe to support similar structures, assuming the
universe is simulation-like in compressing form. Crystals of new
substances should be easier to create again once they have first
formed elsewhere. Biological structures, behaviors, and for want of
better words what I'll call “plots” and “tropes” should be
similarly primed if the compression algorithm can handle such subtle
and complex analogies.<br />
<br />
<br />
More speculatively, if the universe's resemblance to a simulation
is not mere appearance, then teleological questions of who is running
the simulation and for what purpose arise. The biggest potential
increment of efficiency for such an entity in simulating a universe
would come from only accurately simulating the regions and events of
interest, with sparse and approximate methods used in other regions.
This could lead to glitches in the simulation such as are often seen
in video-games: lag and other time discontinuities, failure to load
sections of the simulation properly, changes in item prototypes,
non-player characters not performing when the simulation does not
register that an NPC's inaction is perceptible to the player
characters, conflicts between versions of the simulation when
different PCs' high-detail regions come into contact, more radically
different simulations coming into contact, continuity errors, objects
and characters failing to load or loading twice, violations of
physical law, cheat codes, hacking, and so on. All these, or similar,
effects have been reported numerous times by different people. (See
accounts on the “Glitch in the Matrix” subreddit.) It is often the case
that their brains are glitching rather than the exterior simulation,
but this sometimes seems to be ruled out by corroboration from other
witnesses or by physical evidence. Sometimes these glitches seem
purposeful, as when avoiding certain death or when missing items
reappear in response to a request. Often, though, they seem to be
true glitches, either mistakes or with no apparent purpose other than
perhaps revealing the simulated nature of things.
<br />
<br />
<br />
It could also be possible that the universe is natural
(more-or-less), with the simulation-like aspects being not artifacts
but implicit in the universe's necessary informational
self-consistency. Nevertheless, conscious beings arising in the
natural universe could learn to hack it from the inside, causing
glitches and intimations of purposefulness for other, less adept
residents of the universe. The general rule of self-consistency is
likely only relative to a given branch of implications of
occurrences; inconsistencies define other branches of possibilities.
(Perhaps in the beginning was the inconsistency 0=1 : the big bang
followed because all propositions and their opposites can be derived
from a single contradiction -- but there are branching patterns in
the successive derivations of implications from that initial seed.)<br />
<br />
<br />
Also see Daniel Burfoot's quite readable book on ArXiv, “Notes
on a New Philosophy of Empirical Science”, particularly pages 8 to
29. (<a href="http://arxiv.org/abs/1104.5466" target="_blank">arXiv:1104.5466 </a>[cs.LG], version 1 April 2011)E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com2tag:blogger.com,1999:blog-1231185357032365360.post-83997308335524343452014-06-30T20:35:00.001-04:002014-07-02T17:10:03.861-04:00A Hand-waving Introduction to Geometric Algebra and ApplicationsGeometric Algebra (GA, real-valued Clifford Algebras, a.k.a. hypercomplex numbers) gives the only mostly-comprehensible-to-me account not only of higher spatial / temporal dimensions, but of physics in general. I have been studying GA now for over ten years. One of the best things about it is that nearly every paper using GA <a href="http://www.mrao.cam.ac.uk/~clifford/pages/introduction.htm">explains it from first principles</a> before going on to use it for physics or computer science. Most physics papers in other fields seem to take a positive joy in obscure math and impenetrable jargon. I'll try here to give an even less mathematically difficult account of some of GA's implications than most GA papers.<br />
<br />
Given a set of n mutually orthogonal basis vectors, one vector for each independent dimension, a space of 2^n quantities results from considering all possible combinations of these basis vectors multiplied together. For instance, taking pairs of vectors from a 5D space gives 10 possible planes of rotation, a 4D space 6 planes of rotation, while in 3D there are only 3 independent planes of rotation. (The numbers of other combinations for n dimensions go as the n-th row of Pascal's triangle, i.e. the binomial coefficients.)<br />
Sums of all the 2^n elements, each weighted by a different scale factor give "multivectors", which are generalizations of complex numbers. <br />
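The counts in the previous paragraph are just binomial coefficients and can be checked directly — grade k of an n-dimensional algebra has C(n, k) basis elements, and grade 2 gives the planes of rotation:

```python
from math import comb

def blade_counts(n):
    """Basis elements of each grade in n dimensions: the n-th row of
    Pascal's triangle, summing to 2**n multivector components."""
    return [comb(n, k) for k in range(n + 1)]

# Grade-2 elements are the independent planes of rotation.
assert blade_counts(5)[2] == 10   # 5D: ten planes of rotation
assert blade_counts(4)[2] == 6    # 4D: six
assert blade_counts(3)[2] == 3    # 3D: three
assert sum(blade_counts(5)) == 32 # a 5D multivector has 2^5 = 32 parts
```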
<br />
Each of the basis vectors will have a positive or negative square. (Vectors' squares are always scalars, that is, real numbers.) In conventional relativity the basis vectors squares' signs, also called "signatures" are (+ - - - ) or (+ + + -), with the different sign from the others belonging to time. When plugging into the Pythagorean theorem, the square of time can cancel out the squares of the spatial dimensions, giving a distance of zero when the spatial distance equals the time interval (time multiplied by c to give all units in meters). This happens for anything moving at the speed of light. The zero interval is the amount of perceived or "proper" time for a light wave traveling between any two points. This light-speed type of path is also called a "null geodesic". To the photons of the microwave background, no time has passed since they were emitted, supposedly shortly after the universe began.<br />
<br />
Now it is possible and actually quite useful for computer graphics to add a pair of dimensions with signature (+ -) to the usual spatial ones (+ + +). The sum and difference of the extra dimensions give an alternate basis for these two dimensions, but with the basis vectors squaring to zero (0 0). These "null dimensions" are called "origin" and "infinity". A projection from this augmented space down to 3D allows many other structures besides points and directions to be represented by vectors in the 5D space. For instance, multiplying 3 points gives a circle passing through those points, 4 points gives a sphere. If one of those points is the point at infinity, then the product is a line or a plane respectively. The other advantages of this way of doing things are too many to list here. This "conformal" scheme is actually quite easy to visualize and learn to use without getting into abstruse math by using the free <a href="http://www.science.uva.nl/ga/viewer/content_viewer.html">GAViewer</a> visualization software and its tutorials.<br />
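The "null dimensions" can be verified numerically: under a diagonal (+ + + + −) metric, the sum of the two extra basis vectors and half their difference both square to zero. (Scaling conventions for "origin" and "infinity" vary between authors; this sketch uses one common choice.)

```python
# Diagonal metric: three spatial (+) dimensions plus the extra (+, -) pair.
METRIC = [1, 1, 1, 1, -1]

def dot(u, v):
    """Inner product of two vectors under the diagonal metric."""
    return sum(g * a * b for g, a, b in zip(METRIC, u, v))

e_plus  = [0, 0, 0, 1, 0]   # squares to +1
e_minus = [0, 0, 0, 0, 1]   # squares to -1

n_inf    = [a + b for a, b in zip(e_plus, e_minus)]        # "infinity"
n_origin = [(b - a) / 2 for a, b in zip(e_plus, e_minus)]  # "origin"

assert dot(e_plus, e_plus) == 1 and dot(e_minus, e_minus) == -1
assert dot(n_inf, n_inf) == 0        # null: the +1 and -1 squares cancel
assert dot(n_origin, n_origin) == 0  # null as well
assert dot(n_inf, n_origin) == -1    # the usual normalization in this scaling
```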
<br />
One fellow at Intel <a href="http://www.gaalop.de/wp-content/uploads/134-1061-Zamora.pdf">extended this</a> to having three pairs of extra dimensions, for a total of nine, so that general ellipsoids rather than just spheres could be specified, but the idea has not become popular since each multivector in it has 2^9 = 512 parts. The 32 parts of regular conformal 3D / 5D multivectors is hard enough to convince people to use. The 11 dimensions of superstring theory are not so well defined as conformal dimensions since seven of the string dimensions are said to be curled up small, "compactified" in some complicated and unspecified fashion.<br />
<br />
An interesting thing about the ( +++, +- ) signature algebra is that it is the same as one that has been <a href="http://arxiv.org/abs/physics/0601194">proposed</a> by José B. Almeida as an extension of the usual 3D+t (+++-) "Minkowski space" of relativity, augmenting the usual external time (-) with a second sort of time having positive square and describing internal or "proper time" (which in relativity will be measured differently by a moving external observer). But if it is assumed that everything in the universe is about the same age, then all observers have comparable proper time coordinates, so proper time can be used as a universal coordinate corresponding to the universe's temporal radius. This gives a sort of preferred reference frame for the universe, which is ordinarily considered impossible. In this 5D scheme, not just light but also massive particles follow null geodesics, and from that single assumption can be deduced relativity, quantum mechanics, and electromagnetism; in addition, dark matter, the big bang and the spatial expansion of the universe seem to be illusions. <br />
<br />
The math is also easier than the usual warped-space general relativity, instead using flat euclidean space and having light, etc. move more slowly near mass, that is, treating gravitational fields as being regions of higher refractive index than regular space. This is also the case in <a href="http://arxiv.org/abs/gr-qc/0405033">gauge-theory gravity</a>, (GTG) which also uses Geometric Algebra, though sticking to the usual 4D Minkowski space. GTG is the only alternative to general relativity that is in agreement with experiment, but GTG also is more consistent, easier, allows dealing with black holes correctly, unlike GR, and is much easier to reconcile with quantum mechanics, which is also much much easier to visualize using GA. For instance, the behavior of the electron can be described fully by treating it as a point charge moving in a tight helix at light speed around its average path (a "jittery motion", or in German: "zitterbewegung"). The
handedness of the helix is the electron spin, the curvature of the
helix is the mass, the angle of the particle around the
helix is the phase. <br />
<br />
Geometric Algebra is useful in all areas of physics and computer modelling of physics. GA has been successfully applied to robot path planning, electromagnetic field simulation, image processing for object recognition and simulation, signal processing, rigid body dynamics, chained rotations in general and many other applications. It gives very clear, terse and generally applicable, practically useful descriptions in diverse areas using a single notation and body of techniques.E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-62888368322137023662014-06-30T17:50:00.002-04:002014-07-02T17:13:17.224-04:00Outline of Relation of Quantum Mechanics and Information Theory<i>[From an outline in an October 2004 email to one of the members of the Ultranet@topica.com email list. Connections to the ideas and terminology of Christopher Langan's "Cognitive-Theoretic Model of the Universe" (CTMU) have been edited out for clarity, as have some mathematical speculations.]</i><br />
<ol>
<li><span style="font-style: normal;">Quantum Mechanics requires
information theory</span><br />
<ol>
<li><div style="font-style: normal;">
Theories, measurements are just
information</div>
<ol>
<li><div style="font-style: normal;">
Distinguishable states must
differ by >=1 bit</div>
<ol>
<li><div style="font-style: normal;">
No outside agency besides the 2
minimally differing states can do the distinguishing between
themselves.
</div>
<ol>
<li><div style="font-style: normal;">
Otherwise the theory would
have to explain how the 3rd thing distinguishes not only the 2
original entities from each other but also how it distinguishes
itself from the other two as well.</div>
</li>
<li><div style="font-style: normal;">
This requisite ability to
distinguish is logically part of every distinguishable entity.
</div>
</li>
<li><div style="font-style: normal;">
This logical nature, this
ability to distinguish information, is not just the basis for
consciousness but a basic form of consciousness itself.
</div>
</li>
</ol>
</li>
<li><div style="font-style: normal;">
Every system starts with a
pattern reducible to some number of bits and must in any frame of
reference be seen through interaction to have the same or greater
number of bits as time goes on. (Thermodynamics.)</div>
</li>
</ol>
</li>
<li><div style="font-style: normal;">
The distinguishing of one thing
from another creates another entity in the relation between them
which is a new embedded subspace within the two which in turn will
self-embed itself.</div>
</li>
<li><div style="font-style: normal;">
The increasing dimensions of the
binary information vectors' phase space are expanding into an
implicit unfilled possibility space which is dynamically infinite
in extent.</div>
<ol>
<li><div style="font-style: normal;">
The informational expansion
into the larger possibility space may show why branches are more
frequent than rejoins at the quantum level and thus why time
seems to run from a big bang through increasingly entropic
(information-containing or perceiving) developments.
</div>
</li>
</ol>
</li>
</ol>
</li>
</ol>
</li>
</ol>
E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-31554922929391873042014-06-30T17:18:00.000-04:002014-07-02T17:13:17.218-04:00How Big Is a Photon? The Conceptual Foundation of Quantum Mechanics[From an October 2004 post on the Ultranet@topica.com email list.]<br />
<br />
Daniel asked:<br />>Finally, here's the question: what's the width, if any, of a photon of <br />>visible light?<br /><br />I wondered myself in classical electrodynamics what the physical location of the<br />Poynting vector was within a wave - how is the energy distributed within the wave?<br />In experiments the energy interaction happens at one spot which can be localized<br />to any degree required so long as the phenomenon used to register the interaction<br />propagates with a small enough wavelength. When the "screen" used to register<br />interactions has more resolution than the waves of the particles it is measuring,<br />the individual interactions are each well localized on the screen but appear randomly<br />within the region defined by the impinging waves.<br /><br />Photons, like all quanta, are not objects but interactions. Elementary entities are<br />waves when they are going someplace and particles when they get there. All individual<br />observations are of particles; the wave propagation can only be inferred statistically<br />from the distribution of particle interactions. Interactions, since they are not<br />objects, do not themselves have a size. Everything about a given interaction is <br />specified by what happened in a single instant and therefore what happened at a <br />single point*, and the amount of this knowledge is restricted by the initial information<br />content of the interacting particles and the fundamentally limited information conveyed<br />by the particles resulting from the interaction. <br /><br />The informational limits imposed by the necessity that two states must either be<br />distinguishable or indistinguishable with respect to any given interaction lead <br />to the necessity of quantization - things that differ must do so by at least one<br />bit. 
This limited information conveyed by the particles resulting from the interaction<br />also requires that a given system must have a maximum information capacity and if<br />one attempts to get more information out of an interaction with the system then <br />the results of the interaction become fundamentally unpredictable. For some reason<br />this unpredictability varies regularly in complex waves and their interference. <br />Since a system may have multiple parts, it is possible for the information content<br />of some multi-part system with respect to some interaction to still be only one <br />bit. Independent, separated interactions with different parts of the composite system<br />with respect to that bit will logically be interdependent - this is entanglement.<br />The bits transferred by an interaction are all there is ever to be known about it.**<br /><br />One cannot go back to a given individual interaction and measure it again; such<br />measurements would be interactions distinct from the original. Therefore observers<br />cannot compare independent measurements of a given interaction and the notion of<br />its size is undefined beyond whatever bits were gained from the particles resulting<br />from the interaction.<br /><br />_________<br /><br />*The point-junctions of the particle world-lines in Feynman graphs are being replaced<br />by junctions with extent in more dimensions, similar to pipe junctions, for example.<br />I think the gradualness of the separation of two such pipes (superstring world-sheets)<br />is equivalent to the rate of decoherence of a state into two incompatible possibilities.<br />This might be regarded as the "size" of the particle interaction.<br /><br />**See "<a href="https://vcq.quantum.at/fileadmin/Publications/1999-10.pdf">A Foundational Principle for Quantum Mechanics</a>" by Anton ZeilingerE. 
Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-88738143164372975332014-06-30T17:06:00.000-04:002014-06-30T17:20:30.576-04:00Zitterbewegung Interpretation of Quantum Mechanics
<i>[From an April 2013 comment on an article about "time crystals" at (billionaire mathematician James Harris Simons') Simons Foundation: <a href="https://www.simonsfoundation.org/quanta/20130425-perpetual-motion-test-could-amend-theory-of-time/">Perpetual Motion Test Could Amend Theory of Time</a>]</i><br />
<br />
Continuous periodic motion is implied by basic quantum mechanics.
The simplest interpretation of the quantum numbers of an electron in
a hydrogen atom is that the electron really does orbit the proton. De
Broglie’s matter waves were conceived as circular motions of a
point particle, with the frequency found by noticing that energy is
equivalent to both mass and frequency, the former scaled by c-squared
and the latter by Planck’s constant. Schrodinger worked out the
implications for the Dirac (electron) equation, calling the
phenomenon “zitterbewegung”, meaning “trembling motion”. It
is of very high frequency – about 2.5E20 Hz (an angular frequency of
1.6E21 radians per second), double that of the De Broglie wave of an
electron.<br />
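As a sanity check on these figures, a few lines of Python (a sketch using standard CODATA constants, not anything taken from Hestenes' papers) recover the zitter frequency from the electron mass:

```python
import math

# Sanity check of the zitterbewegung frequency quoted above,
# using standard CODATA constants (SI units).
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
h = 6.62607015e-34       # Planck constant, J*s
hbar = h / (2 * math.pi)

# De Broglie / Compton frequency of the electron: m c^2 = h * nu
nu_compton = m_e * c**2 / h           # about 1.24e20 Hz

# Zitterbewegung is double that frequency; expressed as an angular
# frequency it is the often-quoted ~1.6e21 radians per second.
nu_zbw = 2 * nu_compton               # about 2.5e20 Hz
omega_zbw = 2 * m_e * c**2 / hbar     # about 1.55e21 rad/s

print(f"Compton frequency:        {nu_compton:.3e} Hz")
print(f"Zitter frequency:         {nu_zbw:.3e} Hz")
print(f"Zitter angular frequency: {omega_zbw:.3e} rad/s")
```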
<br />
As Oersted Medal winner David Hestenes worked out (using his
marvelously clear applied, real-valued Clifford algebras, or
“Geometric Algebra”, a lingua franca for mathematical physics)
zitterbewegung at its simplest is a helical, light-speed motion of a
point charge around its average path.* Further he found that the
orientation of the helix is the electron spin, the curvature of the
helix is the electron mass, the angle of the particle around the
helix is the electron phase, and the helical motion creates a static
magnetic dipole and a rotating electric dipole. This is far more
comprehensible than the usual explanations (insofar as there are any
usual explanations!). His interpretation was borne out by the
discovery of an anomalous absorption of 81.1 MeV electrons in silicon crystals, due
to the spatial period of the zitter motion and its electric dipole lining up, at
that energy, with the spatial period of the crystal lattice. Before
Hestenes’ explanation, the experimental results were so unexpected
as to be implausible to most of the reviewers at the journal Physical
Review Letters.<br />
<br />
So the eternal and intrinsic helical motion of electrons in any
state, including the ground state, is an established fact. How does
that differ from the proposed time crystals?<br />
<br />
*[It seems to me that this actually makes time simpler - if both
light and electrons are constrained to move at c, then time for
either sort of particle, rather than being some mysterious quantity
with a square opposite in sign to the other dimensions
( (ct)^2 - (x^2 + y^2 + z^2) = (ct)^2 (1 - (v/c)^2) ) becomes instead simply a distance, the
hypotenuse in x^2 + y^2 + z^2 = (ct)^2. Also it should be noted that
electrons can move in more complicated ways than single simple
helices, and superpositions are possible. ] See Hestenes
essay: <a href="http://www.fqxi.org/data/essay-contest-files/Hestenes_Electron_time_essa.pdf">Electron time, mass and zitter</a> at FQXi for more information.E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-80174790276923522152014-06-28T21:27:00.002-04:002014-07-02T17:13:59.110-04:00Compression, Entanglement and a Possible Basis for Morphic Fields<i>[From a June 26, 2014 draft of a letter to Rupert Sheldrake:]</i><br />
<br />
So the universe is analogous to a class of computational
processes, some more efficient than others,<br />
<blockquote class="tr_bq">
<div style="text-align: left;">
<i><span style="font-size: small;"><span style="font-family: inherit;"> ... at which point I'd like to pause and point out that this
doesn't mean that the universe is a computation, or that it
isn't, but that it obeys certain rules of consistency that are just
like those in some computations, and equally that some computations
are also exactly analogous to the rules of the universe, so that <b>if
the most efficient way of doing the computational process has certain
methods or characteristics, then the operation of the universe will
also have analogous characteristics</b>. </span></span></i></div>
<div style="text-align: left;">
<i><span style="font-size: small;"><span style="font-family: inherit;"><br /></span></span></i></div>
<div style="text-align: left;">
<i><span style="font-size: small;"><span style="font-family: inherit;">Compression is the essence of
the operation of the computational processes that are analogous to
the universe. </span></span></i></div>
</blockquote>
<div style="text-align: left;">
<br /></div>
Back to the presentation:<br />
<br />
<br />
So the universe is analogous to a class of computational
processes, some more efficient than others, with the most efficient
being heavily favored as representations, which compress natural
patterns of evolution of matter and fields so that required resources
are minimized to model or instantiate the universe. These compressed
representations of patterns have a supra-physical, informational
component encoded in the thermal radiations of all matter
and fields; those radiations cause a cascade of entanglements which in turn carry
the history of the universe's changing patterns within them.
The entanglement of the particles in new patterns with those of past
patterns requires the new pattern to be consistent with all the
quantum informational constraints of the past patterns. The only
consistent universes are those where all the past information from
all past patterns is still implicit in each and every new pattern,
sub-pattern and interaction. So the past patterns can serve as
templates for later patterns, with a size-dependent degree of
clarity, as with parts of a hologram, and allow effective compression
of all similar situations in the past to each local region of the
universe. The thermal radiation information field compresses <u>all</u>
similar past situations because it is not truly possible to erase
information, but only to turn it into “heat” which is basically
just information that one has decided to ignore. Everything in the
universe that “stays happened” (as opposed to quantum eraser-type
situations) is on the permanent, ineradicable record.<br />
<br />
<br />
These templates are patterns in both space and time, allowing for
example the progressive elaboration of structures in the development
of embryos, and so can most effectively be modeled by generative
programs which produce the evolving state of the simulation or
instantiation, rather than just static data, that is, efficiency
implies not just compressibility but minimizing the Kolmogorov
complexity of the computational processes analogous to the physical
situations. This allows not just physical structures but patterns of
behavior and modes of development to be optimized for their
analogous computational processes' equivalents of memory space and
processing power. It thus gives not just a memory but a super
pattern-recognition capability to every part of the universe, which
can read a developing situation and compare it with everything in the
past light-cone, compressing it so that only its
new, original information content need be added to the
thermal motions and radiations that communicate past interactions and
patterns among the parts of the universe through quantum phases and
entanglements' implications. The past patterns it embodies are
already in the information field, but each repetition and close
variation makes them “stronger”, or more compressed and
efficient.<br />
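The point about generative programs versus static data can be put in a toy sketch (the rule and sizes here are invented for illustration; a general-purpose compressor gives only an upper bound on Kolmogorov complexity):

```python
import zlib

# A short generative rule versus the static data it produces.
rule = "''.join(str(i % 10) for i in range(100000))"   # the "program"
data = ''.join(str(i % 10) for i in range(100000)).encode()

packed = zlib.compress(data, 9)   # a compressor's best effort on the pattern

print(len(rule))     # tens of bytes of generative rule
print(len(data))     # 100,000 bytes of static data
print(len(packed))   # in between: the compressor finds the repetition,
                     # but the rule is still the shortest description
```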
<br />
<br />
Effectively this is like compression with unlimited side
information available. The information capacity of thermal radiation
is enormous given it has about 10^19 to 10^21 photons per joule. Even
the milli-atto-joules characteristic of the smallest molecular
motions give rise to photons. To see the potential power of this sort
of compression, movies would be very easy to “compress” to send
over a wire if the sender and all viewers already had a copy of every
movie ever made as “side information” – only a serial number or
tag code would need to be transmitted to “transmit” gigabytes of
movie. (But in such a large data set as the universe's
information field there is probably a shortage of short tag codes,
codes shorter than the patterns they represent, even if the codes
are context-dependent.)<br />
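The movie example can be sketched in a few lines (the names and data are invented stand-ins): when sender and receiver share the same library as side information, transmitting a short tag is as good as transmitting the whole item, and the noted shortage of short tags shows up as the possibility of collisions in the tag scheme.

```python
import hashlib

# Toy model of compression with shared side information.
# Both parties hold the same library, so only a short tag
# needs to cross the wire.
library = {
    "movie_a": b"A" * 1_000_000,   # stand-ins for gigabytes of film
    "movie_b": b"B" * 1_000_000,
}

def tag(data, length=8):
    """Short identifier for an item (truncated hash; collisions possible)."""
    return hashlib.sha256(data).hexdigest()[:length]

# Receiver's index, built once from the shared library.
index = {tag(data): name for name, data in library.items()}

def send(name):
    return tag(library[name])          # all that is actually transmitted

def receive(code):
    return library[index[code]]        # full item recovered locally

code = send("movie_a")
assert receive(code) == library["movie_a"]
print(f"transmitted {len(code)} bytes in place of {len(library['movie_a']):,}")
```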
<br />
<br />
The information field's heat diffusion is the same as Schrodinger's
wave equation with time replaced by imaginary time, implying that its
dynamics occur in imaginary time, which is like a small cylindrical
manifold with a particle that changes phase as it spirals along it
helically, as in electron zitter motion, rather than staying at one
angle on the cylinder as in normal time. (See Zitterbewegung comment
on the article on “time crystals” on the Simons Foundation site, reposted <a href="http://mindsbasis.blogspot.com/2014/06/zitterbewegung-interpretation-of.html">here</a>.)
It is recurrent time, cyclical time, perhaps not time but eternity.
And among the compressed patterns in the information field are all
the people who ever lived and every thought and action they ever had
or did. Not just the dead ones, either, nor just the distant past,
but the past that starts a nanosecond ago, even a yoctosecond ago.
In fact, the parts of the future that are implied by the past are
already in the field, so it's really somewhat atemporal or eternal.
<br />
<br />
<br />
So the afterlife, precognition, remote viewing and telepathy are
implications of this view. It even suggests how it is possible to
give a remote viewing target with only an arbitrary code number. The
code and the target are physically associated on the envelope or in
the computer and the target information is sent via the code in the
same way that the movies were “sent” in the example of
compression with unlimited side information.<br />
<br />
<br />
<div style="font-style: normal;">
See Daniel Burfoot's book: “<a href="http://arxiv.org/abs/1104.5466">Notes on a New Philosophy of Empirical Science</a>” (arXiv:1104.5466v1 [cs.LG]
28 Apr 2011 ) for more down-to-earth applications of the idea of
treating science as a data compression problem, compressing vast
quantities of experience and experiment down to pithy theories and
equations.</div>
E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-50136977569160549352014-06-28T21:17:00.000-04:002014-07-02T17:13:59.108-04:00Notes Toward a Theory of Eternity<i>Yet more from November 2013, this time some unfinished notes I intend to come back to later:</i><br />
<br />
<div style="font-style: normal;">
Notes for expansion:</div>
<ul>
<li><span style="font-style: normal;">Shell of thermal radiation
expanding from Earth during and following a life. </span></li>
<ul>
<li><span style="font-style: normal;">Rotating
searchlight of a life's radiation as the Earth rotates,</span></li>
<li><span style="font-style: normal;">Holes in the radiation shell due to absorption by the Sun, planets, stars;
re-emission, scattering, gravitational lensing; </span></li>
<li><span style="font-style: normal;">Transactional
interpretation of QM demands both an absorber for every emitted
photon and a potential for re-emission, thus literal eternity. </span>
</li>
</ul>
<li>“<i>Omnia quae sunt, lumina sunt.”</i><span style="font-style: normal;">
“All that is, is light”</span></li>
<li>All that can be known is information.
</li>
<li>Probability is always and only a
measure of ignorance.</li>
<li>Choices must be real to be real choices
– morality cannot coexist with determinism, but requires multiple
potential realities, universes.</li>
<li>Internal dimensions for each entity are
rotated by relative motion (Lorentz boosts), but quantum-correlated
through all occurring in the same overall 4-space (pseudoscalar).</li>
</ul>
E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-88643680373507853942014-06-28T21:11:00.004-04:002014-07-02T17:11:51.836-04:00Heat as Sound, Neural Impulses as Sound<i>Another bit from November 2013:</i><br />
<br />
<i></i><br />
<div style="font-style: normal;">
Bill Beatty wrote a thought-provoking
<a href="http://www.amasci.com/miscon/a-rant.txt" target="_blank">essay</a> showing the unity of sound, heat, and electromagnetic
radiation. Heat is a form of sound, of very high frequency and wide
bandwidth. This is acknowledged in the theory explaining
superconductivity, which treats heat and sound as being composed of
quantum pseudo-particles called phonons. Heimburg et al. <a href="http://www.pnas.org/content/102/28/9790.long" target="_blank">showed</a> that neural impulses are primarily sound-like, accounting for their
low speed and, because of their solitonic form, their lack of energy
dissipation. The electrical effects associated with the impulses are
due to the thickening of the neural membrane during the pulse
increasing the separation between the charges on the inside and
outside of the membrane, resulting in a transient decrease in
capacitance which increases the voltage across the membrane.
Tree-like structures such as neurons have a rich spectrum of
mechanical resonances, largely due indirectly to the form of the
cytoskeleton, which determines the form and stiffness of the neuron.
Neural pulses also <a href="http://arxiv.org/pdf/quantph/0007088.pdf" target="_blank">resonate</a> with parts of the cytoskeleton and may
change its form, as the microtubules together with the layers of
ordered water surrounding them have non-linear ferroelectric and
topological quantum properties which are linked to discrete shape
changes of the microtubules which in turn affect the shape of axons,
dendrites and thus the neurons' mechanical resonances<i>.</i></div>
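The capacitance argument can be put in toy numbers (illustrative values, not measurements from Heimburg's work): at roughly fixed charge, V = Q/C, so a transient drop in capacitance raises the magnitude of the membrane voltage.

```python
# Toy illustration of the membrane-capacitance effect described above.
# Illustrative numbers, not measured values.
C_rest = 1.0e-6    # membrane capacitance, F/cm^2 (typical order: ~1 uF/cm^2)
V_rest = -0.070    # resting potential, volts

Q = C_rest * V_rest          # charge per unit area, held roughly fixed

C_pulse = 0.98 * C_rest      # a transient ~2% drop as the membrane thickens
V_pulse = Q / C_pulse        # V = Q / C grows in magnitude as C falls

shift_mV = (abs(V_pulse) - abs(V_rest)) * 1000
print(f"voltage magnitude rises by about {shift_mV:.2f} mV")
```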
E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com3tag:blogger.com,1999:blog-1231185357032365360.post-19824136019873301772014-06-28T21:09:00.000-04:002014-07-02T17:38:03.960-04:00Thermodynamics, Information and the Afterlife
<i>This was written around the end of November 2013: </i><br />
<br />
<br />
The 2<sup>nd</sup> law of thermodynamics states that in any closed
system, entropy never decreases. The two apparent loopholes in this
law, that entropy can decrease in open systems and can remain the
same in either type of system, are not of interest here.
<br />
<br />
<br />
Entropy is equivalent to both information and disorder. Attempts
to define information as <i>negative</i> entropy are wrong, as is
defining information as order. The more orderly the arrangement, no
matter what the context, the less information that can be embodied in
the arrangement. Information is a measure of improbability, as is
entropy. The term “entropy” is often abused to mean not the
information embodied in a specific arrangement, but the class of all
possible arrangements which look similar from a distance, or averaged
together by coarse measurements. Nevertheless, all real cups of hot
tea, even those indistinguishable to any macroscopic measurements and
composed of absolutely identical constituent molecules, are at the
molecular level entirely different in their components' positions and
velocities, just as much as two identical pieces of paper with
identical amounts of ink, one showing a humorous picture of a cat
and the other a budget summary for the Wolverhampton waterworks.<br />
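The relation claimed here between order and information capacity can be illustrated with Shannon entropy (a standard sketch; the strings are arbitrary examples):

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(s):
    """Shannon entropy of a string's symbol frequencies, in bits per symbol."""
    n = len(s)
    return sum((c / n) * log2(n / c) for c in Counter(s).values())

# The more orderly the arrangement, the less information per symbol
# it can embody; a uniform mix of symbols carries the most.
print(entropy_bits_per_symbol("a" * 64))          # 0.0 bits: perfectly orderly
print(entropy_bits_per_symbol("ab" * 32))         # 1.0 bit: two symbols
print(entropy_bits_per_symbol("abcdefgh" * 8))    # 3.0 bits: eight symbols
```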
<br />
<br />
Every event at the molecular (or any other) level which has
lasting consequences creates information. That information is almost
always encoded in heat. It gradually diffuses, becoming more and more
entangled with other bits of information so that one would have to
know about more and more to have any hope of determining the original
causes behind the motions. Creation and transmission of information
require no energy dissipation in general, but erasing information
does. The entropy of the information “erased” is not really
destroyed, but moved to the outside environment, in the same way as
heat is moved to the outside of a refrigerator. Even dropping
information into a black hole does not destroy it, but instead it is
very gradually re-emitted in a scrambled form as the black hole
evaporates. All molecular events with consequences that are in
principle distinguishable from some other hypothetical course of
events leave a permanent but increasingly scrambled record in thermal
motions and thermal radiation. This permanent record is the physical
substrate for what has been called the “Akashic Records”. There
is a potential for long-range correlations to emerge in the detailed
patterns of thermal motion which in turn could lead to macroscopic
correlations of pattern through a type of chaotic sensitivity to
initial conditions which leads not just to variations in the location
of systems on a given attractor, but to correlations of the type of
attractor. [<i>Perhaps, but I now think a sort of consistency filter arising from quantum entanglements is a more likely mechanism</i>.]<br />
<br />
<br />
Fourier is now best known for his “Fourier series”, which allow
representing almost any signal as a sum of sinusoidal waves of varying
frequencies, amplitudes, and phases and which are the basis of
essentially all digital audio-visual techniques, but during his life
he was best known for his work on heat diffusion. It turns out that
the equations for heat diffusion are exactly the same as for quantum
mechanics, except that heat diffuses in Euclidean time while
relativistic quantum mechanics demands a Minkowski space, meaning
that time has a square opposite in sign to the squares of the spatial
dimensions. The two can be converted by using “imaginary time”,
that is, time multiplied by the square root of -1, <i>i.</i><span style="font-style: normal;">
(Or some other entity that squares to -1, of which it turns out there
are several in Geometric Algebra / real-valued Clifford Algebras.)
This procedure, known as a “Wick rotation”, converts the
Schrodinger equation into the heat (diffusion) equation. This only makes sense
for massive particles – for light-like particles, time has no
independent meaning apart from the distance traveled, time is the
hypotenuse in the Pythagorean theorem. (x</span><sup><span style="font-style: normal;">2</span></sup><span style="font-style: normal;">
+ y</span><sup><span style="font-style: normal;">2</span></sup><span style="font-style: normal;">
+ z</span><sup><span style="font-style: normal;">2</span></sup><span style="font-style: normal;">
= (ct)</span><sup><span style="font-style: normal;">2</span></sup><span style="font-style: normal;">)
(There are experimental results indicating that massive particles, or
at least electrons, have no real mass but move in light-speed helices
which give them the appearance of having mass. This is David
Hestenes' Zitterbewegung interpretation of QM.)</span>
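The conversion described in this paragraph can be written out explicitly (standard textbook form for the free-particle case, not the author's own notation):

```latex
% Free-particle Schrodinger equation:
i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi
% Substitute t = -i\tau (Wick rotation), so that
% \partial/\partial t = i\,\partial/\partial\tau. The factors of i cancel,
% leaving the heat (diffusion) equation with diffusivity \hbar/2m:
\frac{\partial \psi}{\partial \tau} = \frac{\hbar}{2m}\nabla^2 \psi
```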
<br />
<div style="font-style: normal;">
<br />
</div>
<div style="font-style: normal;">
The relation between the eternal time
of the thermal record and the Minkowski time of everyday experience
is thus a Wick rotation.</div>
E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-82649754679180658592014-06-28T20:59:00.002-04:002014-07-02T17:13:59.113-04:00A First Approximation to Mindspace<i>The first few posts will be from past writings of mine. Here is most of a letter I wrote to a friend who had moved from being a pillar of the counterculture to espousing ultra-traditional Christian / Calvinist theology and young-Earth Creationism</i>:<br />
<br />
Imagine Indra's net, filling all space and time with a web whose
intersections are jewels, each reflecting all of the others. The
jewels may also be seen through other schemata – Leibnizian monads,
vertexes in Feynman diagrams, atomic perceptions/ perceivers of
varied potentialities and probabilities. What seem from the physical
point of view as particles (interactions) are seen from another point
of view as perceptions whose collective patterns are thoughts. God is
immanent in the totality of the net, these atomic perceptions are
collectively a basis (in the mathematical sense) for the power set of
all their possible permutations. This power set can be viewed in turn
as the total mindspace, which has all possible perceptions and
thoughts implicit in it. Parts of it are human thoughts (all the
possible human thoughts), thoughts of particular groups, individuals,
etc. Most of it is outside the region of human thought. Most of it is
far less than God, the totality of the mindspace (plus parts of which
we cannot speak) – yet also far more than human. A given
jewel/interaction may be parts of beings at all levels: God, gods,
angels, humans, animals, plants and cells. Because mindspace is not
like physical space, these entities which are categorizations of sets
and power sets of jewels may overlap like Venn diagrams.<br />
<br />
<br />
Viewing the net as Feynman diagrams expressing physical reality,
in particular thermodynamics/information theory (which are the same
thing), a direction of time emerges from the tendency of information
(entropy) to accumulate in the form of heat (2<sup>nd</sup> law).
Viewed as a process in time the net expands from a low entropy state
to fill more and more of the implicit unfilled space of
possibilities. It never contracts again, it can only expand. It seems
in physics that information cannot be destroyed, only scrambled,
turned into heat and diffused across the universe, but any event
which has had even a microscopic effect is on the permanent record of
the universe. This heat/information diffusion is mathematically the
same as Schrodinger's equation, but with ordinary time replaced by
imaginary time. (turning t into <i>i</i>t is called Wick rotation).
The equivalence works both ways. By rotating our view of the
imaginary time of the universe considered as heat diffusion into the
ordinary time of the wave equation, the scrambled record of the
universe, including all the people who have ever lived, is translated
back into ordinary time. This is the basis for the reality of the
afterlife. The permanence of information in the heat diffusion/
imaginary time view is eternity.
<br />
<br />
<br />
Morality does not enter on a particle level but in the aggregates.
Morality, ethics, emotions, thoughts and so forth exist in the
mindspace schema, not the physical schema, in the same way that my
words exist in the application layer of the OSI model; on the
physical layer there are only patterns of charges and spins suitable
for interpretation by the higher layers. The difference is that in
the physical view of the net, interactions/particles are not only
atomic perceptions but equally atomic acts of will. The universe is
alive down to the lowest level, not only alive but perceptive and
willful. From these more complicated and subtle patterns are composed
which are the thoughts and wills of more complicated and subtle
beings. Their perceptions and intentions are of varying likeness to
other, more general regions of human mindspace which we label
imperfectly as “good”, “evil”, etc. These regions of
mindspace itself are not changed by our labels on our maps of it,
they exist objectively, yet our perceptions of them are necessarily
subjective. Our perceptions of them are not the things themselves,
though the “things” are ultimately made of atomic perceptions.
Other levels of being have their own accordingly larger or smaller
regions of mindspace that they perceive as good or evil. There are
other categories beyond the basic two, and higher beings are more
able to discern them as well.
<br />
<br />
<br />
God, I suspect, finds these limited regions of mindspace that
humans regard as good and evil as being not particularly more
interesting than the rest of Himself, and scarcely a drop in the
bucket of his All. On the other hand, God the absolute can have no
experience in the way we do; being All, with nothing outside himself,
there can be no separation of subject and object. To have such
experiences, he must limit his point of view, and to that extent he
becomes less than absolute. The full range of experiences demands all
possible types of splits between self/selves and other(s). Thus all
our and others' views of the net and each other are a consequence of
God's need to limit himself so that he can be not only everything but
each thing. No matter what we do or don't do as our limited selves,
for God nothing is undone, all is complete.<br />
<br />
<br />
<br />E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0tag:blogger.com,1999:blog-1231185357032365360.post-33395982848910432642014-06-28T20:50:00.002-04:002014-07-02T17:14:28.047-04:00Welcome to Mindspace & Minds' BasisWelcome! This blog is to share some ideas I have been kicking around for a long time.<br />
<br />
The first is "mindspace": mindspace is the mathematical space of all possible minds, which has all possible perceptions and
thoughts implicit in it.<br />
<br />
Parts (subspaces) of mindspace are human thoughts (all the
possible human thoughts), thoughts of particular groups, individuals, qualia, motifs, tropes and so on. Most of it is outside the region of human thought. I wish to explore its nature, dimensions, units, operations, and applications.<br />
<br />
The second idea that I'll be exploring here is that the second law of thermodynamics implies that information from any event cannot be destroyed but only becomes more and more entangled among vast numbers of thermal photons and phonons (molecular vibrations), and that these may form a basis for mindspace as well as a potential physical and informational mechanism behind most other "paranormal" phenomena such as "morphic resonance", remote viewing, precognition, out-of-body experiences and the afterlife.<br />
<br />
A third area of interest is in "Geometric Algebra" (GA), as reintroduced by David Hestenes. GA is also known as real-valued Clifford Algebra, but goes beyond mathematics by adding physical geometrical interpretations. GA is a fantastically flexible and concise language which unifies most areas of physics using a single notation.<br />
<br />
I'll also likely post miscellaneous opinions and items about technology, economics, and whatever else of interest I happen to find.<br />
E. Harrishttp://www.blogger.com/profile/00403451420766076432noreply@blogger.com0