Professor Leandro Augusto Frata Fernandes, in 2013, posted a very nice set of notes as an introduction to geometric algebra, including his algorithms for calculating wedge, regressive, left contraction and right contraction products. His regressive product agrees with that of Eric Lengyel, increasing my confidence that their formulas are the true regressive product. An implementation of his algorithm and results are in the C file linked below.
2D Associative Demonstration Code
3D Associative Demonstration Code
4D Associative Demonstration Code
Hermann Grassmann's wedge product has been widely adopted in physics and mathematics, but the related regressive product is rarely presented. Eric Lengyel, in Foundations of Game Engine Development, Volume 1, interprets the regressive product as a complement to the wedge product, and provides a geometrical view of the regressive product as a measure of intersection of the multiplied elements. Eric's most important insight is that the noncommutative nature of wedge multiplication leads to two potentially different complements, the right versus the left complement. His antiwedge product uses both complements, and achieves associativity.
This antiwedge product is associative, unlike Grassmann's and Hestenes's variations, which gives me confidence that this is a correct implementation. This note will be revised as I gain in understanding.
The AntiWedge Product of Eric Lengyel
David Hestenes, in his 1986 paper "Universal Geometric Algebra", provides an approach to the regressive product which emphasizes the need to restrict each blade product to the dimensionality spanned by the two product terms. This note presents multiplication tables and equation sets for the Hestenes regressive product in two, three and four dimensions.
This implementation has negative signs associated with the pseudoscalar in three and four dimensions which concern me, and the product is nonassociative. I want to see if he has different definitions in more recent work.
The Regressive Product Per David Hestenes (1986)
The Regressive Product Per David Hestenes (1986) software implementation
The Regressive Product Per David Hestenes (1986) is nonassociative demonstration software
A possible variation on the regressive product requires the forefactor to be of equal or superior rank compared to the postfactor; otherwise the product is zero. The next set of programs implements this variation, again with nonassociative results.
The Regressive Product Per David Hestenes (1986) with rank requirement software implementation
Hermann Grassmann's wedge product has been widely adopted in physics and mathematics. However, the related regressive product is rarely presented, due to varying interpretations and prescriptions for implementing it. Instead, various methods are presented for unions and intersections, joins and meets. This note presents a simple set of products based upon AND, OR and XOR logic operations applied to geometric elements. These products are associative, in contrast with the common regressive products. The program Regressive_AND_OR_XOR.cp is provided for associativity verification.
The Wedge, AND, OR and XOR Products
The AND, OR and XOR Products Validation Code
The AND, OR and XOR Equation Generator Code
I am fond of the Frenet-Serret formulas for the curvature of trajectories, and I like extending these formulas to geometric algebra. Here are some simple calculations using geometric algebra and the wedge product to calculate curvature and torsion for the simple cases of a line, a circle, and a spiral.
Frenet Formula Examples Using Geometric Algebra
Using real matrix implementations of three dimensional Euclidean geometric algebra and Minkowski geometric algebra, we can define the determinant of a multivector, which can be used as a measure of magnitude. Likewise, we can define unary operators which change the sign of various multivector components in a fashion similar to complex conjugation, which preserve the value of the determinant, but zero out substantial portions of the multivector product with the original multivector. This note documents the determinants in Euclidean and Minkowski space, lists the interesting unary operators which preserve determinant, then documents efficient calculation of the determinant using these unary operators.
Unary Operators and Determinants
I've been through many variations trying to clearly understand the Dirac equation, and quantum mechanics in general. This particular variation has me quite pleased. The standard Dirac treatment appears to only partially occupy the fourspace and fivespace standard implementations. I decided to drop down to threespace only, and see what I could come up with.
Using Euclidean, threespace geometric algebra, I find the Dirac equation maps into a threespace, eight component multivector, as compared to the four component complex spinors. More importantly, these eight components have a nice geometric interpretation. The scalar equation corresponds to energy terms. We find three equations for linear momentum, three equations for angular momentum (curvature), and one equation for torsion. I find that the wavefunction corresponds to a unit magnitude multivector representing a local Frenet frameset, psi = (1 + u + un + unb), where the tangent u provides flowlines, and the curvature term un indicates normals to the flowlines (akin to equipotentials in EM field plots).
I further find that these equations, in the macroscopic view (where hbar multiplied terms are sent to zero), provide coupling between linear momentum and angular terms which is not part of our standard technology. I expect these terms to manifest as an electrogyroscopic effect, and intend to wind toroidal coils to realize these effects in hardware shortly.
Dirac Equation in 3D! (Drafts in Revision)
In Minkowski geometric algebra, I commonly use the anticommuting vector basis (x, y, z, t). I found early on that the quadvector xyzt also anticommutes with the vector basis to make a set of five anticommuting members. Curious as to whether there were any other sets in Minkowski spacetime, I quickly found six total sets of five anticommuting elements, which could be used to factor the Dirac equation without sources, for example.
Knowing that several sets exist, the question arises of how many sets exist, and what their members are. To answer this question, I have written two small programs to identify and enumerate all anticommuting sets in four dimensional Minkowski spacetime, and five dimensional Dirac spacetime.
This note documents the anticommuting sets in Minkowski and Dirac geometric algebras. I find 60 pairs, 80 triads, 30 tetrads and 6 pentads in Minkowski geometric algebra with a signature of (+,+,+,-). I find 240 pairs, 640 triads, 480 tetrads and 192 pentads in Dirac geometric algebra with a signature of (+,+,+,+,-).
Anticommuting Basis Elements in Minkowski Geometric Algebra
Anticommuting Basis Elements in Dirac Geometric Algebra
Geometric algebras are associative, noncommutative algebras which can be mapped to matrices. As such, geometric algebras have matrix related operations, such as determinants, eigenvalues and eigenvectors. This note documents and provides formulas for matrix related operations for the specific case of Minkowski geometric algebra, using an east coast metric of (+,+,+,-).
Matrix Operations In Minkowski Geometric Algebra
This project began as a simple effort to cast the Dirac equation into geometric algebra terms following the example of David Hestenes, but using real 4x4 matrices as a geometric algebra representation (with metric (+,+,+,-)), rather than the complex algebra matrices used by Dirac. Each attempt (of many) to eliminate complex numbers failed. Upon admitting defeat in fourspace, I realized that the complexified Minkowski fourspace describes a five dimensional geometric algebra, with signature (+,-,-,-,+). Having made this realization, I quickly found that many others, such as Jose Almeida in Portugal, N. Redington and M.A.K. Lodhi (USA), and the grand master Dirac himself (1935), have made five dimensional models for the Dirac electron.
This project is finally beginning to make me happy. At this point, the draft paper derives a set of eight real (or four complex) component equations for the Dirac wavefunction using the standard Bjorken and Drell matrices. I then set up a five dimensional geometric algebra with signature (+,-,-,-,+) matching the Bjorken and Drell matrices. Corresponding to the column wavefunction, I have a left ideal multivector. I then recover the same set of eight equations found in the standard approach.
Demonstration code, including matrix and multivector implementation and utilities is at Dirac_vs_Ginac.cp
Quantum Reannotated: The Dirac Equation
David Delphenich has translated the 1929 paper On a possible geometric interpretation of relativistic quantum theory by V. Fock and D. Ivanenko. This paper provides a geometric interpretation of the Dirac equation, and points out that the expected velocity of the electron is always c. This note is an extract of their work.
Fock and Ivanenko's Luminal Electron
Four two-by-two matrices can be used to implement two dimensional Euclidean geometric algebra. The pairwise direct product of these matrices provides sixteen real four-by-four matrices implementing four dimensional Minkowski geometric algebra with (+,+,+,-) signature. These sixteen geometric algebra elements can be arranged in a four-by-four grid sharing many of the symmetries of magic squares, with the substitution of the geometric product in place of addition. Seventy-two mappings in magic square format and geometric algebra basis format are listed. I suspect that these mappings may enable practical sigma deritrinitation.
The Magic Square at the Heart of Spacetime
Magic squares are usually presented as a mathematical recreation. These squares have a sequence of integers, arranged such that the sum across each row, sum down each column, and the sum across each diagonal equals the same magic constant. Within the family of 7040 4x4 magic squares is a subset of 384 with an even greater degree of symmetry. These pandiagonal 4x4 magic squares sum to the magic constant along every broken diagonal, glide path and 3x3 cell spacing.
This note describes and provides source code for finding the 549504 semimagic 4x4 squares, where we only require matching sums across and down. The note then filters this list to identify the 7040 4x4 magic squares, which add the requirement that the two main diagonals sum to the magic number, as well as the rows and columns. After that, this note filters down to the 384 panmagic squares, which have 52 different formulas summing to the magic number. Having found the 384 squares, we then look at Grogono's bit plane decomposition (magic carpets), finding a set of eight planar bit patterns which can be derived from any one member by rolling and rotating operations.
Sixteen 4x4 matrices can be used to represent the basis for Minkowski geometric algebra. These matrices are outer products of a set of 2x2 planar Euclidean geometric algebra matrices. This note lists the 4x4 matrices, and their originating 2x2 matrices.
Direct Product Factors for Minkowski Matrices
Inside Minkowski Geometric Algebra with signature (+,+,+,-) are six subspaces which are isomorphic to three dimensional Euclidean geometric algebra. As each of these subspaces has the same mathematical structure as three space, we can use our standard tools, such as translations, scaling, rotations, cylindrical coordinates, spherical coordinates, and so forth in these subspaces.
A program demonstrating and verifying these six representations is
Six_Euclideans.ginac.cp
Furthermore, within Minkowski geometric algebra are twelve nontrivial remappings, allowing our additional fourspace standard tools of rotations, boosts, and hyperbolic geometry to be used in novel forms, as well.
A program demonstrating and verifying these twelve representations is
Twelve_Minkowskis.ginac.cp
Symmetries In Minkowski Geometric Algebra
Geometric Algebra in 3D Euclidean space can be implemented at least six ways using 4x4 real matrices, using half the available degrees of freedom. Digging into the multiple representations, we find 3D Euclidean geometric algebra to be an even grade subset of a 4D Minkowski space with signature (+,+,+,-). Usually, we stop here, and get excited about 3D space as a spinor subspace of 4D. However, I wanted to dig deeper, looking further into the symmetries associated with this 4x4 matrix set of representations.

What I came to realize is that we have nine elements which square to unity, which can be arranged into a 3x3 grid such that across each row we have a set of anticommuting elements which can be used as a linear basis, and down each column, likewise, we find sets which can be used as a linear basis. In the ordinary geometric algebra implementation, we indeed use one of these sets for our spatial basis and associated trivector. We use another set for our spacetime bivectors, and the third set for three trivectors. However, the symmetries involved give no special assignment among our six candidate arrangements.

Looking at this from the point of view of modelling an electron as a general purpose multivector, I find a neat opportunity. Instead of viewing this 4x4 basis set as a Minkowski spacetime, I look at it as a conventional three space Euclidean algebra, augmented with two more displacements and associated trivectors. These displacements may be independent deltas from a particle center, or may be internal degrees of freedom somewhat like isospin. I prefer the real space delta, at this time. Given a three component model for the electron of a luminal charge between two real magnetic monopoles of opposite polarity, this mathematical reinterpretation is exciting.
The Dirac matrices can be interpreted as a set of basis elements for geometric algebra in a Minkowski metric space. The imaginary factors in Dirac's gamma matrices have always annoyed me. In this note, I present twelve alternatives to the Dirac matrices, using simple real numbers, for a metric (+,+,+,-). Supporting code is at Minkowski.c
David Hestenes, Chris Doran, Anthony Lasenby, and others have implemented the Dirac equation for the relativistic electron in geometric algebra. David Hestenes, in particular, has shown how the geometric algebra view provides insight into electron motion. This note starts with Dirac's development, then repeats the geometric algebra translation using a (-,-,-,+) signature. Finally, the same translation is carried out using a (+,+,+,-) signature. The result is a purely real multivector Dirac equation with no imaginaries.
The Dirac Equation in Geometric Algebra
The Stern-Gerlach experiment demonstrates separation of components of a molecular beam via the force on magnetic dipoles in a magnetic field gradient. Most presentations of the Stern-Gerlach experiment are simple cartoons or pleasant animations, which leave the real student hungry for detail. Fortunately, several schools, such as MIT, University of Wisconsin, Singapore National University, University of Potsdam, University of Zurich, and RWTH Aachen, have made replication of the Stern-Gerlach experiment part of the standard physics curriculum. MIT and University of Wisconsin have locally produced replications, while the other schools I looked at were using variations on a trainer made by Phywe GmbH. (Phywe link requires cookies to be active.)
There are two major differences between the original Stern-Gerlach apparatus and the modern replications.
First, the modern replications use potassium, rather than silver, as the atomic source. Like silver, potassium has a single unpaired electron in the outer shell. Potassium (63.4 C) has a much lower melting point than silver (962 C), with the result that a 12 W power supply is adequate for the furnace, and third degree burns are harder to obtain. However, potassium is more chemically reactive than silver, and requires attention in handling, operations and cleanup. (Glauss' work also has a nice picture of the magnetic pole faces.)
The second major difference is that instead of a passive witness plate, as used in the original experiment, Phywe uses a Langmuir-Taylor surface ionization sensor. In this detector, neutral potassium atoms hit a heated tungsten filament, and become ionized. These ions are accelerated by a 50 V field, and impact a collector. Picoamp beam currents are routinely measured over a small area. This device does not provide a two dimensional display of beam intensity. Instead, a horizontal traverse is made by moving the sensor. This one dimensional slice is only available in one plane.
The University of Zurich has a nice writeup, also showing the potassium oven as well as the Langmuir-Taylor probe.
A few more links.
A pleasant, slightly technical article from the Lindau council about Stern and Gerlach. This article has photos of the original apparatus, as well as some technical dimensions.
Another pleasant popular article, this from Physics Today, with some technical information.
Lisa Felker's Senior Project at RWTH Aachen, done concurrently with Benjamin Glauss, referenced above. Like the lab manual from Phywe, and Benjamin Glauss' senior project, this has nice mathematical detail concerning the beam interaction with the divergent field, and nice quality curves for beam current profiles at different magnet current settings. From the data presented, these various authors calculate the magnetic moment for the potassium atom within a few percent of standard values.
In the geometric algebra presentation of classical electromagnetics, we have a multivector whose vector portion is the electric field, whose bivector portion is the magnetic field, and whose scalar portion is a measure of deviation from the Lorentz gauge. In this note, I propose viewing this Lorentz field Phi as having equal status with the electric and magnetic fields. I provide wave equations for E, B, and Phi, and point out the coalescent nature of the Lorentz field. This presentation is not done in geometric algebra, but rather in conventional vector notation, to allow easy comparison to standard texts, such as Griffiths.
Lorentz Field in Electromagnetics
This is the second set of notes looking at fundamental equations of quantum mechanics.
Wolfgang Pauli was adamant about keeping the spin axis distinct from the coordinate space. Perhaps he was influenced by considerations of four dimensional space, where we can have two independent, orthogonal planes of rotation. Perhaps he viewed the electron as a point particle, and felt that a zero dimensional point particle, lacking extent, could not rotate, generate a magnetic field or possess angular momentum. Unlike Louis de Broglie, Pauli was not open to the idea of an electron in continuous motion (action), self-interacting, following a curved path even in the absence of external magnetic fields. Regardless of how he felt, the spin matrices he used in the development of the Pauli-Schrodinger equations are a direct match for geometric algebra.
Interestingly enough, all traces of the sigma matrices disappear from the actual set of wave equations.
In effect, the sigma matrices and the geometric algebra were but a scaffolding to incorporate spin. Having served its purpose, it is quietly removed. From this point of view, I think I can better understand Wolfgang Pauli. Slightly restating his position, he made no assumptions about the nature of spin. It could be local geometric circulation. It could be some isospace separate from our experience. From the point of view of the wavefunctions, the origin does not matter.
Pauli Equation and Geometric Algebra
I'm starting a series of notes following the steps of Hestenes, Doran and others in writing quantum mechanics in geometric algebra form. This first note is rather simple. The spinless Schrodinger equation is merely a complex field, scalar plus trivector. I think the real value of this particular note is the step by step translation to operator format for a single particle in an electromagnetic field.
Schrodinger's Equation in Geometric Algebra Format
These notes explore the extension of electromagnetism by use of the geometric and wedge products with multivector potentials.
Extending Electromagnetism via Geometric and Wedge Products
These are crib notes covering the use of the wedge product with derivative and differential terms. I am fascinated by the fact that the dimensionality of space interacting with the wedge product limits the complexity we can have in our mathematical descriptions.
Derivative and Differential Wedge Multivector Products.
As I gain more experience with geometric algebra, I revisit my formula summary sheets, and update them with my current understanding. This is an updated formula and code dump for three dimensional Euclidean geometry. Code is at Tests.GA3DE_3_0_0.ginac.cp.
Geometric Product, Wedge Product, Regressive Product and Various Unary Operators.
Geometric algebra defines composite structures of scalars, vectors, bivectors and higher order terms. When two multivectors are multiplied, if the internal units are different, such as meters for the vectors, meters squared for the bivectors, and so on, then the resulting product has mixed units in some of the product terms. By contrast, when using the wedge product, the resulting product has the same spatial unit arrangement as the two incoming factors. Executive summary: the geometric product mixes units across grades; the wedge product does not.
Mixed Units in Geometrical Products
This is a posting looking at duality in three dimensional electromagnetism, from the point of view of Geometric Algebra. I present the standard Maxwell Equations, followed by a brief discussion of parity, and the axial versus polar vector problems. I then present the standard dualities of electromagnetism, leading to the complex number format for the Maxwell equations. Next is a presentation of geometric algebra in three dimensional Euclidean space. I then finish with the Maxwell equations translated into multivector format, and discussion of duality within this format.
Electromagnetic Duality in 3D Geometric Algebra
This note follows Garret Sobczyk's introductory material in Chapter 1 of New Foundations in Mathematics, ISBN 9780817683849. I provide snippets of C code, and a step by step walkthrough of the process of obtaining the idempotents and some nilpotents in a modular arithmetic system.
Idempotents and Nilpotents Illustrated Using Modular Numbers
A C program demonstrating these techniques is Modular_Numbers_7.c
This is a set of online notes and exercises with Nilpotents and Idempotents. Some items of interest to me are generating different nilpotents by using sandwich products around a multivector, as in z_{new} = z*M*z, generating nilpotents using commuting terms in geometric algebra, such as z_{new} = z*(a + b e_x e_y e_z), and identifying factors of zero, as in P_+ P_- = 0 => P_+ = A z, P_- = z B.
Exercises with Nilpotents and Idempotents
Idempotents (mathematical objects which square to themselves) and Nilpotents (mathematical objects which square to zero) are powerful mathematical tools found in quantum mechanics, as well as in algebraic descriptions of spacetime. This note discusses idempotents and nilpotents in the context of fundamentals of mathematics, and illustrates their utility in finite dimensional spacetime, as well as their role in linearizing complex operations.
Extreme Utility of Nilpotents and Idempotents
I present general formulas for Nilpotents (nonzero expressions which square to zero) and Idempotents (expressions which square to themselves) in 2D and 3D Euclidean Geometries. Rather than being a simple number/vector combination, it turns out that idempotents can easily have scalar, vector and bivector components. Formulas are provided for general idempotent annihilator pairs in 2D and 3D.
Nilpotents and Idempotents in 2D and 3D Euclidean Geometric Algebra
Using matrices to represent geometric algebras is known, but not necessarily the best practice. While I have used small computer programs to scan through candidate matrices to find representations, I have not been able to derive such matrices from first principles. Garret Sobczyk, however, has. I repeat his derivation here, for 2x2 matrix representation of the 2D geometric algebra, including more intermediate steps for easier understanding.
Garret Sobczyk's 2x2 Matrix Derivations
A funny thing happened on the way to the fourth dimension. . .
So I have been thinking about the relationship between Clifford Algebras and matrix representations of these algebras. My initial belief has been that the matrix representations are a useful, but coincidental mapping. However, looking at the two dimensional case, I began to suspect that there may be more than coincidence in the mapping. Specifically, the two dimensional case showed a relationship between rotary transformations and linear translations, where each of the four algebra elements was both an element and an operator.
As I examined the three dimensional Geometric (Clifford) Algebra, it became clear that 3x3 matrices could not represent 3D Euclidean Clifford Algebra due to the absence of bivectors which square to negative one. However, in 4x4 matrices, six fundamental representations, and at least 48 in total, were readily available. Now, in each of the representations, each element has a nonzero determinant, which is inherently necessary for the higher order multivector products to have nonzero determinant. This in turn is related to the highest order multivector term having a hypervolume equal to the determinant of its representation.
In effect, from the unit scalar, through vectors, through bivectors, including the trivector, each representation for the three dimensional elements is algebraically a quadvector in fourspace. From the coincidental, but useful mapping point of view, this is no big deal. From the literal realities point of view, this is interesting indeed. Even more fun is the categorization of the six fundamental representations as two independent sets of three, with the two sets dual to each other. Regardless of set identification, the idea of three dimensional reality as a sort of downward projection of a four dimensional greater reality is fascinating.
As I scan the matrix space for 4x4 matrices, I find 21 matrix sets (+/- factors between pairs) with zero trace and nonzero determinant which square to one. In a similar fashion, I find only six pairs (+/- factors between pairs) which square to negative one. In Euclidean geometric algebra, vectors and quadvectors square to one, while bivectors and trivectors square to negative one. Given the single scalar, four vectors, six bivectors, four trivectors, and single quadvector structure of this algebra, we see that faithful representation is impossible using 4x4 matrices, as we need ten independent elements which square to negative one, but only six are available.
(Updated Jan 20, 2016.) We have an interesting set of 4x4 matrices, listed below, which implements Minkowski spacetime.
In the matrices below, I have 16 mutually orthogonal matrices. Ten of these, being the unit matrix and nine others, square to +1. The nine nonunit matrices are mutually anticommutative, and their pairwise products produce six other orthogonal matrices, which square to -1.
[Rank order matrix printout: the unit matrix q; the vectors x, y, z, t; the bivectors yz, zx, xy, tz, ty, tx; the trivectors xyz, xyt, zxt, yzt; and the quadvector xytz. The full matrices are generated by the source code below.]
Source code Study_The_Nine.c
My current plan is to do some comparisons between the 2D Euclidean Algebra, the 4x4 simple representation, and the 2x2 'folded' representation. The fact that the 2x2 matrix representations require nonzero determinants seems to put each of the representation matrices on the same footing, as far as dimensional units are concerned.
After this, I intend to ponder the many Minkowski implementations in 16x16 straightforward matrix representation, as well as in the 4x4 representations. I would also like to better understand why 4x4 matrices don't support Euclidean four space, but do support Minkowski.
Finally, I want to spend some more time pondering the possibility of Minkowski space being an assembly error, where a bivector was mistakenly used for time. At the very least, I should be able to get a nice sci-fi short story. At the best, there may be something really good here.
[Six sets of 4x4 matrices representing the 3D Euclidean basis elements 1, x, y, z, xy, zx, yz, xyz; see the linked note below for the full listing.]

To identify vectors, which square to positive one, I wrote a program treating each cell of the 4x4 matrix as a location which could be -1, 0 or +1. In effect, this defines each possible matrix as a sixteen digit trinary number. In this fashion, I identified 5436 matrices which square to one. To be more selective, I then further required each candidate to have a nonzero determinant and twelve zeroes. This reduced the vector list to 76, but looking at the list, there were a number of entries with trace 2 or -2, which would not be orthogonal to the unit matrix. Consequently, I further restricted the list by requiring trace = 0, which led to the short list of 42 vector candidates. (Douglas Adams fans should take note.) Of these 42 entries, half are negatives of the other half, so we really have 21 unique entries. Next, I formed all 21 x 21 products, looking for antisymmetric pairs whose product squared to negative one. These products can be either bivectors or trivectors, depending upon the context of the basis vectors chosen. Examining the feeders for each bivector/trivector candidate, I identified six sets of positive representations for 3D Euclidean Clifford algebras.
More details are in
3D Euclidean Geometric Algebra and Matrix Representation.
I am currently reexamining GA with an eye toward what the basis vectors *really are*, rather than what they do. As a tool in this investigation, I am examining the matrix representations of the basis elements, and treating the implied coordinate transformations as a reality, as opposed to a bookkeeping convenience.
There is something of a chicken-and-egg circularity in this situation.
Because geometric algebra is associative, and has a multiplication table, we can
find a matrix representation for the elements of this algebra. Now that I am
examining these matrix elements, I begin to suspect things are the other way around.
Each basis matrix has a clear geometrical meaning, usually involving reflections or
rotations. Reflections, in turn, look like a rotation through 180 degrees in a transverse
higher dimensional space. I suspect that the elements of GA are these fundamental
rotations potentially projected from a higher space. I suspect that the associative
nature of GA is the result of the matrix representation of these fundamental rotational
elements.
2D Euclidean Geometric Algebra and Matrix Representation
These programs are written in C to create generic geometric multivector products in 4D, with a choice of +1/0/-1 for each metric term, a choice of +/-1 for each multivector component convention, and a nice printout of the generic multivector product, as well as the standard collection of vector*vector, vector*bivector, vector*trivector, vector*quadvector, bivector*trivector and so on expressions.
A demonstration program in C is Cliff_4E.c
The fuller collection of programs and outputs is
Clifford Equation Makers.tar.gz
Usually, practitioners of GA (Geometric Algebra) avoid component level work, preferring symbolic multiplication, in the same manner as practitioners of vector arithmetic prefer to avoid components, using vector symbols instead.
I find for myself, however, that I need to be able to explain concepts to a computer to be sure that I understand these concepts myself. Accordingly, I need to be able to write code working with GA at the component level.
Below is a multiplication table and set of component equations for the Euclidean 3D multivector products. Each bit in the index numbers below indicates a directional basis, so the directional unit basis vectors are e.1, e.2 and e.4 in the table below. The scalar term is e.0, which commutes with all other terms. The bivector terms (directed areas) are e.3, e.5 and e.6, and the pseudoscalar (volume element) is e.7.
         e.0    e.1    e.2    e.3    e.4    e.5    e.6    e.7
  e.0    e.0    e.1    e.2    e.3    e.4    e.5    e.6    e.7
  e.1    e.1    e.0    e.3    e.2    e.5    e.4    e.7    e.6
  e.2    e.2   -e.3    e.0   -e.1    e.6   -e.7    e.4   -e.5
  e.3    e.3   -e.2    e.1   -e.0    e.7   -e.6    e.5   -e.4
  e.4    e.4   -e.5   -e.6    e.7    e.0   -e.1   -e.2    e.3
  e.5    e.5   -e.4   -e.7    e.6    e.1   -e.0   -e.3    e.2
  e.6    e.6    e.7   -e.4   -e.5    e.2    e.3   -e.0   -e.1
  e.7    e.7    e.6   -e.5   -e.4    e.3    e.2   -e.1   -e.0

c.e0 = + a.e0*b.e0 + a.e1*b.e1 + a.e2*b.e2 - a.e3*b.e3 + a.e4*b.e4 - a.e5*b.e5 - a.e6*b.e6 - a.e7*b.e7 ;
c.e1 = + a.e0*b.e1 + a.e1*b.e0 - a.e2*b.e3 + a.e3*b.e2 - a.e4*b.e5 + a.e5*b.e4 - a.e6*b.e7 - a.e7*b.e6 ;
c.e2 = + a.e0*b.e2 + a.e1*b.e3 + a.e2*b.e0 - a.e3*b.e1 - a.e4*b.e6 + a.e5*b.e7 + a.e6*b.e4 + a.e7*b.e5 ;
c.e3 = + a.e0*b.e3 + a.e1*b.e2 - a.e2*b.e1 + a.e3*b.e0 + a.e4*b.e7 - a.e5*b.e6 + a.e6*b.e5 + a.e7*b.e4 ;
c.e4 = + a.e0*b.e4 + a.e1*b.e5 + a.e2*b.e6 - a.e3*b.e7 + a.e4*b.e0 - a.e5*b.e1 - a.e6*b.e2 - a.e7*b.e3 ;
c.e5 = + a.e0*b.e5 + a.e1*b.e4 - a.e2*b.e7 + a.e3*b.e6 - a.e4*b.e1 + a.e5*b.e0 - a.e6*b.e3 - a.e7*b.e2 ;
c.e6 = + a.e0*b.e6 + a.e1*b.e7 + a.e2*b.e4 - a.e3*b.e5 - a.e4*b.e2 + a.e5*b.e3 + a.e6*b.e0 + a.e7*b.e1 ;
c.e7 = + a.e0*b.e7 + a.e1*b.e6 - a.e2*b.e5 + a.e3*b.e4 + a.e4*b.e3 - a.e5*b.e2 + a.e6*b.e1 + a.e7*b.e0 ;

A simple program to build these tables and relations is Clifford_Product.c, while a sanity check on the associative properties is Test_Euclidean.c.
As I gain more experience with geometric algebra, I am forming more opinions about style.
A short program demonstrating the generic CL3 product, as well as products of vector*vector, vector*bivector,
vector*trivector, and bivector*trivector is CL3.ginac.cp.
Terry Pratchett and Stephen Baxter have released "The Long Mars", the third book in the Long Earth series. This book is a fine bus ride companion, being little snippets easily interrupted and resumed, but overall it is nothing exciting. The best insight from this book is that stepping spaces are gravitationally coupled, and that the stepspace for Earth versus the stepspace for Mars do not necessarily align.
In the Universe as a Simulation theme, the Baryon Imbalance number is the ratio mismatch of the initial number of matter versus antimatter particles in the initial universe. According to Ning Bao and Prashant Saraswat, this ratio is between 6.5 and 5.9 E-10. Now, having taught a few computer courses, I recognise this number as being very close to the imbalance between positive and negative values in 32 bit computers. We have a slight asymmetry in signed, two's complement numbers, where we can go one count more negative than positive. For 32 bit numbers, the range is -2147483648 to +2147483647, and the imbalance is 1/2^32 = 2.32830643654e-10. Numerically scaling by e, to get a predesired result, has e/2^32 = 6.32899307753e-10, within the range quoted above. My scifi alter ego loves the idea that we are in a simulation, where a vendor substituted a cheaper binary processor in place of the specified trinary processor, and we are the accidental result of this unauthorized substitution.
The concept of our universe being the interior of a rotating black hole in 5 space has gained significant traction
during the last twenty years. I am very appreciative of "The Universe is Only Spacetime" by John A. Macken for many
of the insights he presents. I especially like his treatment of the connection between gravitational potential,
and escape velocity as the paradigm to look at gravity, rather than the point of view of acceleration and
gravitation potential gradient. He points out that the escape velocity from the center of the earth is higher than
the escape velocity from surface of the earth at the north pole, which in turn is higher than the escape velocity
at the equator. Let's use this approach to the formation of a black hole in fivespace, and how it would appear to
the observers inside at later times. Being a scifi fan, I use the analogy of an illegal trash dump, where our villain
has continued to dump refuse. At the center of the dump, the escape velocity is increasing with each load until, uh oh,
we hit criticality. Treating lightspeed as a limit, the innermost radius begins to bubble outward, with the interior
empty due to the prohibition of crossing the luminal barrier. This is the big bang initiation for this new island universe.
The wavefront propagates outward at increasing speed as overlayers fall into the event horizon. This is the inflationary
epoch. Once the event horizon has consumed the mass of the illegal garbage dump, inflation stops, being starved of new
mass, and the system begins a new life as a quasi stable black hole on the outside, brand new universe on the inside.
Interior properties of the membrane are set by the mass, charge and rotation of the black hole; this will generally be different
for each universe. As the universe really is the membrane of the black hole, dimensionality is down by one. Our five dimensional
black hole hosts a four dimensional universe. Our four dimensional universe has its own black hole decendents which
have three dimensional universes, and so on. Given that the black holes have angular momentum, one of the directional
degrees of freedom will not be symmetrical, such as our time, versus r, theta and phi. Some fun here, is that
true four dimensional space can support two orthogonal rotations spaces. . .
Eric Dollard posed a transmission line puzzle.

************* Original Posting ***********************
I have a D.C. transmission line, the conductors are 2 inches in diameter, spacing is 18 feet. How many ounces of force are developed upon a 600 foot span of this line, for the following:
1. 1000 ampere line current.
2. 1000 KV line potential?

Here is my answer.
Like any major event, there have been prophets who correctly called reality. In this case, George Sudarshan, Chodos, Hauser, Kostelecky, Ehrlich and Eue Jin Jeong are some names to look for.
Chodos, I believe, makes the correct assertion that the left-hand-only helicity of neutrinos, and the right-hand-only helicity of antineutrinos, guarantees luminal or tachyonic speeds, and that the presence of neutrino flavor oscillation rules out exactly luminal speeds, leaving strictly superluminal speeds for neutrinos. Because his argument is so neat, I've spent some time understanding his points, and I hope to be able to communicate his arguments.
Neutrinos have an inherent spin, and consequently can be thought of as following a spiral path as they propagate. A good mental picture is the tips of a propeller on an airplane. As the plane flies, the tips of the propeller trace a spiral path. Helicity is a measure of the torsion of a spiraling curve. If we are stationary, watching a plane advance toward us, counterclockwise rotation of the propeller and the closing radial distance trace out a right hand thread, in the sense of a screw. This is positive torsion, positive helicity. Now, assume we change our speed from stationary to faster than the airplane. The airplane is now separating from us, as we leave the airplane behind. From our point of view, the trajectory of the propeller tips has changed from a counterclockwise motion approaching to a counterclockwise motion receding. The apparent pitch of the spiral, from our point of view, has become negative, and is described as a left handed screw with negative torsion (from our point of view).
We can directly measure high energy neutrinos, and we indirectly infer high and low energy neutrinos when looking at particle decays. The experimental fact is that we see only left handed neutrinos, and only right handed antineutrinos. If neutrinos travelled at subluminal speeds, a change of reference velocity would guarantee a mix of left and right handed neutrinos. The lower the energy, the closer to a 50/50 randomized mix we should see. Because we see *none*, we know neutrinos have to be luminal or beyond.
Now for the fun stuff. Ordinarily, when we work with mass, we are dealing with stable particles (think electrons, protons, etc.). Mass for these particles is a real number, corresponding to an inverse spatial distance in natural units. Unstable particles, on the other hand (think muons, neutrons, etc.), get an imaginary component of mass proportional to the inverse particle lifetime. Particles with mass, even imaginary mass, cannot propagate at light speed. Consequently, when the solar neutrino paradox of the 1/3 neutrino flux came up and physicists proposed neutrino flavor oscillations, this implied neutrino mass, and that, in turn, ruled out luminal speeds. (Neutrino flavor oscillations have been experimentally verified using reactor generated neutrinos, emitted as electron neutrinos, with time correlated detection of electron and muon neutrinos at remote detectors. Japanese Kat experiment, Minnesota experiment.)
Chodos' argument from 1985 is thus: neutrinos can't be subluminal, can't be luminal, so they must be superluminal.
The first verification was supernova 1987A, where neutrinos were detected prior to optical spotting. (Found in retrospect.) Arguments about the delayed photons propagating from the supernova core prevented this observation from being conclusive, but it certainly provided an indication. Consequently, experiments which generated time resolved, spatially resolved neutrino beams began to make time of flight measurements. Fermilab MINOS measured superluminal speeds, but the statistical significance of the measurements was less than six standard deviations, and consequently they were not deemed definitive. CERN, in turn, has followed up, and reduced measurement uncertainties to the six sigma standard.
Implications for future supernova detection: supernova events have a large neutrino pulse at fairly constant energy during the collapsing phase transformation (flash), followed by rapidly cooling neutrinos from the hot neutron core. As high energy neutrinos travel slower than low energy neutrinos (think of proximity to light speed as the high energy condition), we will see the sequence time reversed: a rising sizzle, then the flash, for supernova events. Being specific, if we see an increasing neutrino flux coming from Betelgeuse or Eta Carinae, we would then later see the high energy neutrino flash followed by the optical event.
This would be a very interesting verification, to say the least.
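The energy ordering used above, with high energy tachyonic neutrinos slower than low energy ones, follows from the standard tachyonic dispersion relation (stated here in my own notation, with |m| the magnitude of the imaginary mass):

```latex
% Tachyonic dispersion: a real energy E requires v > c.
E = \frac{|m| c^2}{\sqrt{v^2/c^2 - 1}}
\qquad\Longrightarrow\qquad
\frac{v}{c} = \sqrt{1 + \left(\frac{|m| c^2}{E}\right)^{2}}
```

As E grows, v approaches c from above, so the highest energy neutrinos are the slowest and arrive last.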
Some references:
The neutrino as a tachyon, A. Chodos, A. I. Hauser and V. A. Kostelecky, Phys. Lett. B150 (1985) 431.
Neutrino mass^2 inferred from the cosmic ray spectrum and tritium beta decay, R. Ehrlich, Phys. Lett. B493 (2000) 229-232, arXiv:hep-ph/0009040.
Eue Jin Jeong, arXiv:hep-ph/9704311 v4, 1997.
Edit 23 January 2015. John Hanniball of http://anachrocomputer.blogspot.com/ noted that the previously posted version of flyby.c did not correctly use divide-by-z for the perspective function. I had incorrectly posted one of my experiments using radial scaling, as with projection onto a sphere rather than a rectangular plane. I have left the radial lines commented out in the code, and corrected the perspective function for conventional z scaling. Thanks John!
(Compile with gcc, command gcc flyby.c -o flyby -lX11 -lXdmcp -lXau -lm )
My tasks here are to
Conclusion - This really great magnetic material is not hematite. It happens to be a really great magnetic material, probably barium-strontium ferrite ceramic.
Future work - I want to make samples of magnetite via the Massart method, both for ferrofluid fun and for characterization of magnetite's magnetic properties.
The encoding of the basis vectors as binary numbers, and the product base being given by XOR is worth illustrating numerically, as some interesting interpretations can be made about dimensionality and multiplication. For traditional quaternions, we have numbers and three spatial dimensions. Borrowing notation from spacetime, I'll call t=00 as the numbers (scalars), i=01 as one space axis, j=10 as another space axis, and k=11 as a third. The multiplication table is
When we extend to traditional octonions, we have
Right Hand Quaternion Unit Vector Multiplication Table

    Prefactor | Postfactor          | Binary format
              |  1    i    j    k   |
    ----------+---------------------+----------------------
        1     |  1    i    j    k   | t*i = 00^01 = 01 = i
        i     |  i   -1    k   -j   | t*j = 00^10 = 10 = j
        j     |  j   -k   -1    i   | t*k = 00^11 = 11 = k
        k     |  k    j   -i   -1   | i*j = 01^10 = 11 = k
                                    | i*k = 01^11 = 10 = j
                                    | j*k = 10^11 = 01 = i
The interesting interpretation of the above, seen in Clifford algebra and geometric algebra, is that the quaternion table above is *not* a four dimensional structure, but rather a two dimensional structure, where the multiplication terms involving i and j give rise to an areal term k. In a similar fashion, complex numbers are really dealing with a one dimensional space, and octonions with a three dimensional space. To get a real spacetime (four true dimensions) will require sedenions.

Left Hand Octonion Unit Vector Multiplication Table
    Prefactor | Postfactor
              |  1    i    j    k    E    I    J    K
    ----------+-----------------------------------------
        1     |  1    i    j    k    E    I    J    K      t = 000 Scalar
        i     |  i   -1   -k    j   -I    E   +K   -J      i = 001 Vector
        j     |  j    k   -1   -i   -J   -K    E    I      j = 010 Vector
        k     |  k   -j    i   -1   -K    J   -I    E      k = 011 Area
        E     |  E    I    J    K   -1   -i   -j   -k      E = 100 Vector
        I     |  I   -E    K   -J    i   -1    k   -j      I = 101 Area
        J     |  J   -K   -E    I    j   -k   -1    i      J = 110 Area
        K     |  K    J   -I   -E    k    j   -i   -1      K = 111 Volume

Product base formed by XOR of two factors. Example: i*K => 001^111 = 110 = J.
Polarity (sign) determined separately.
Knowing that the basis multiplication table can be encoded as binary numbers XORed together, and seeing the power-of-two number of solutions, I decided that I should examine the solutions as a digital logic problem. Given that the basis logic was XOR based, I was pleased to find Reed-Muller XOR implementations of the sign or polarity logic found above.
Having found digital logic solutions for normed algebras in two, four and eight dimensions, my next target was 16 dimensions. Sedenions are known not to be normed. While I can find numerical special cases where two integer sixteen-vectors and a sixteen-vector product satisfy the norm relations (based upon any integer being the sum of four squares), there is no general formula. Doing the sixteen bit digital exercise, despite knowing success was unlikely, led to an interesting result. In my approach, I used one bit to determine an active high/active low default state for a bit. I then used higher order bits to determine participation of free variables in the sign of the term in the multiplication table. The interesting result is that while I had conflicting definitions for the active high/active low default bit states, I had a consistent set of definitions for how the default state would be modified by 42 free variables.
This result has me very excited. I've always wanted to find a simple explanation for quantum superposition. My hope is to find a simple, mathematical analog to the ring oscillator or logic paradox, where a feedback path with an odd number of inversions, coupled with a propagation delay through the logic creates an inherently oscillating system. A multiplication table which is inherently oscillatory, gives rise to an oscillatory metric, which in turn justifies much of our experience with quantum multivalued weirdness. Philosophically, an inherently oscillating metric structure of space is a good model for Planck scale quantum foam, and in the bigger picture, justification for 'free will', or nonpredestination, on the quantum scale.
So, what do I know? I know that the base definition is inconsistent, and that there are flaws (inherent conflicts) in my model for the multiplication table logic. However, I also know, that once a suitable basis is defined, I can give 2^42 new variations on a successful basis.
My current task is to reexamine fundamentals of division algebras. I am reevaluating the works by Hamilton, Cayley, Kirkland, Clifford, and other great mathematicians from the 1840-1890s, as well as Sylvester (1867), Hadamard (1893) and Walsh (1923) in more recent times. My most recent influence is the geometric algebra interpretation of Clifford algebras by David Hestenes.
My current working assumptions are