Given a module written in the C'dent subset of a supported input language, C'dent can export it to any of the current output languages: the input languages plus Perl 5, Python 3, Java, Ruby and PHP.

There are obviously many issues that will be interesting to address, such as dealing with concurrency and differences in object models, but I think the philosophy of the project is awesome: Acmeism:

We have plans to add Parrot Intermediate Representation (PIR) as an output and maybe an input language at the PDX Hackathon tonight. Come on by! PDX Hackathon has always been an Acmeist gathering, and it has been said that "Beer is our Bytecode."


Grep returns a non-zero (false) exit status when it cannot find a match, which triggers the echo of the filename to STDOUT. The PATTERN can be any valid regular expression for your system's flavor of grep. This can save tons of time if you have dozens or hundreds of files and only a few are missing something. To edit these files in vim, it is nothing more than:
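A sketch of the idea, using hypothetical demo files and a hypothetical pattern (adapt the glob and PATTERN to your situation):

```shell
# set up a couple of demo files (hypothetical names/contents)
dir=$(mktemp -d)
printf 'use strict;\n' > "$dir/good.pl"
printf 'no pragmas here\n' > "$dir/bad.pl"

# print every file that does NOT contain the pattern:
# grep -q exits non-zero when there is no match, which triggers the echo
for f in "$dir"/*.pl; do
    grep -q 'use strict' "$f" || echo "$f"
done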

Happy Hacking!


What if your number is really big and someone else does not believe you that it is prime? How do you prove it to them?

If a shadowy figure emerges from a dark alley and shows you a strip of paper:

$latex x =742376437546875526384762834762837468273648992$

and says "I'll sell you this big fat prime for

But if you are trying to be an upstanding prime-number-selling business and therefore want to go the extra mile and prove that the numbers you sell are indeed prime, you will quickly go out of business. I guess that is why there are still sheisty guys selling primes in trench coats...

The reason is that to prove that $latex x=N$ is a prime number, you are actually making a statement about $latex x=N$ as well as saying that all numbers $latex y$ such that $latex y < N$ are not divisors of $latex N$.

Currently it takes at most a few seconds to prove that large numbers are composite, but to prove that a similarly-sized number is prime could take thousands of CPU-hours!!! For those that like specific numbers, it took 6 CPU-years to prove that the 20,562-digit Mills' prime was indeed prime.

Even for numbers in the range of current cryptography systems (a few hundred digits), proving something is a prime is just too damn slow. Factoring integers is often a subalgorithm of many number theory and cryptography algorithms, so it has to be as fast as possible. When a software algorithm factors a number, it often does some kind of check that all of the numbers it is returning are indeed prime, but it cannot afford to prove it rigorously.

Modern primality testing started with Pierre de Fermat, but Chinese mathematicians described the beginnings of these algorithms 2000 years before Fermat was on the scene. Numbers which "fake out" the Fermat Primality Test are called Carmichael numbers, and in 1994 a group of mathematicians proved that there are infinitely many of them, i.e. there are not just a few isolated numbers that "fake out" the test: the more you search for them, the more you find.
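To see the Carmichael phenomenon concretely, here is a small Python demonstration (the function name is my own): 561, the smallest Carmichael number, passes the Fermat test for every base coprime to it, despite being composite.

```python
from math import gcd

def fermat_probable_prime(n, a):
    """Fermat's little theorem: if n is prime and gcd(a, n) == 1,
    then a**(n-1) % n == 1.  Composites that pass for a given base
    are called Fermat pseudoprimes to that base."""
    return pow(a, n - 1, n) == 1

# 561 = 3 * 11 * 17 is the smallest Carmichael number: it "fakes out"
# the Fermat test for EVERY base coprime to it.
n = 561
fooled = all(fermat_probable_prime(n, a)
             for a in range(2, n) if gcd(a, n) == 1)
print(fooled)   # True
```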

Some software packages may still use the Fermat Primality test for some algorithms, but it has been mostly superseded by the Rabin-Miller primality test.

This test allows you to run $latex k$ repetitions of the algorithm and on average, the probability of it incorrectly concluding that a composite number is prime is $latex P = 4^{-k}$. For $latex k=20$ this is roughly a chance of one in a trillion (1000000000000, a 1 with 12 zeros) that it will be wrong. This ain't bad.
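For concreteness, here is a hedged Python sketch of the Rabin-Miller test with random bases (variable names are my own):

```python
import random

def miller_rabin(n, k=20):
    """Rabin-Miller probabilistic primality test (a sketch).
    Returns False if n is definitely composite, True if n is probably
    prime; with random bases the error probability is at most 4**-k."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):           # handle small cases directly
        if n % p == 0:
            return n == p
    # write n - 1 = 2**s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a witnesses that n is composite
    return True

print(miller_rabin(2**31 - 1))   # True: 2147483647 is a Mersenne prime
print(miller_rabin(561))         # False: 561 = 3 * 11 * 17
```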

But, as it turns out, there is a better way.

The better way is called the Baillie-PSW primality test (BPSW), which is actually just four independent steps that can be parallelized quite easily. BPSW first checks that the number is not 1, 2 or an even number, then checks to see if it has any small prime divisors, where small is defined as $latex p < 1000 $. It then performs a single Rabin-Miller primality test in base 2, followed by a Lucas primality test, which is another primality test that I have not described; it uses Lucas numbers (very closely related to Fibonacci numbers) as the basis for a test.
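The structure is easy to sketch. Below are the first three steps in Python (names are my own); the Lucas step is considerably more involved and is omitted, so this is deliberately *not* a complete BPSW implementation:

```python
def small_primes(limit=1000):
    """Primes below `limit` by trial division (fine at this scale)."""
    primes = []
    for n in range(2, limit):
        if all(n % p for p in primes if p * p <= n):
            primes.append(n)
    return primes

def bpsw_first_steps(n):
    """Steps 1-3 of BPSW: trivial cases, trial division by primes
    p < 1000, then a single Rabin-Miller test in base 2.  The final
    Lucas test is NOT implemented here, so True only means
    'passed the first three steps'."""
    if n < 2:
        return False
    for p in small_primes():
        if n % p == 0:
            return n == p
    # Rabin-Miller in base 2: write n - 1 = 2**s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    x = pow(2, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False
```

Composites like $latex 2047 = 23 \cdot 89 $, the smallest strong pseudoprime to base 2, are exactly why the later steps exist (2047 itself happens to be caught by the trial-division step, but strong pseudoprimes with only large factors would slip through to the Lucas test).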

According to MathWorld "the multiple Rabin-Miller test in bases 2 and 3 combined with a Lucas pseudoprime test as the primality test have been used by the function

So why is the BPSW test so cool? For starters, people have looked hard and still cannot find a single BPSW pseudoprime. According to recent findings, there are no BPSW pseudoprimes less than $latex N=10^{15} $. As for software that uses the Miller-Rabin test (for instance the mpz_probab_prime_p function in GMP), Thomas R. Nicely (of Pentium FDIV bug fame) has done extensive research, discovered many pseudoprimes in GMP, and provided his data openly before publication, which is really cool! An example of GPL software that uses BPSW is the

The problem with using software that uses Miller-Rabin tests is this: the bases and number of repetitions used internally may change from one version of the software to the next, which means that different versions of your software may disagree about whether the same number is prime or composite. This is obviously Very Bad. If you always specify the bases and number of repetitions to use in the Miller-Rabin test then this will not occur, but that is not the case for most existing software.

So it would almost seem that BPSW is the perfect primality test: it is pretty fast, and perhaps there is no number that can fake out all four steps of the test. All is not peaches and cream, though. Carl Pomerance, the P in BPSW and one of the mathematicians that proved there are infinitely many Carmichael numbers, published a small article in 1984 giving an informal proof that BPSW pseudoprimes should nevertheless exist.

So what is the moral of the story?


Bill DeRouchey (@billder) gave the opening presentation at CyborgCamp, and it was wonderfully entertaining and informative. Here are some quick notes I took:

- How did the "Play/Pause" button become universally understood?
- Typewriters chose the winning "letter symbols"
- Symbols evolve.
- Are we losing some types/modes of literacy? Is that such a bad thing?
- @ - commerce symbol "each at", then location, then identity
- # - means a group on twitter "context, topic"
- Mouse pointer ~1984
- Future directions in the evolution of tech languages?
- Splintered groups of tech languages
- Emotional bandwidth
- Gatekeepers of knowledge, screening out less knowledgeable people

If we want to calculate the length of a parameterized curve $latex x^r = x^r(u) $ where $latex u $ is a parameter with respect to some coordinate system, then we can write an infinitesimal displacement element as $latex dx^r = p^r(u) du $ . The length of this displacement is $latex ds = \sqrt{g_{mn} p^m p^n} du $ and the length of the curve from $latex u=u_1 $ to $latex u=u_2 $ is $latex L = \int_{u_1}^{u_2} ds = \int_{u_1}^{u_2} \sqrt{g_{mn} p^m p^n} du $ .
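To make the formula concrete, here is a hedged numerical sketch in Python (the setup and names are mine): integrating $latex ds $ for the unit circle in polar coordinates $latex (r, \theta) $, where the metric is $latex g = \mathrm{diag}(1, r^2) $, should give $latex 2\pi $.

```python
import math

def arc_length(x, dx, g, u1, u2, steps=10000):
    """Midpoint-rule integration of ds = sqrt(g_mn p^m p^n) du."""
    h = (u2 - u1) / steps
    total = 0.0
    for i in range(steps):
        u = u1 + (i + 0.5) * h
        p = dx(u)                # tangent components p^m = dx^m/du
        G = g(x(u))              # metric evaluated on the curve
        ds = math.sqrt(sum(G[m][n] * p[m] * p[n]
                           for m in range(len(p))
                           for n in range(len(p))))
        total += ds * h
    return total

circle  = lambda u: (1.0, u)                      # (r, theta) on the curve
tangent = lambda u: (0.0, 1.0)                    # (dr/du, dtheta/du)
metric  = lambda x: [[1.0, 0.0], [0.0, x[0]**2]]  # g = diag(1, r^2)

L = arc_length(circle, tangent, metric, 0.0, 2 * math.pi)
print(round(L, 6))   # 6.283185
```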

So we need the metric tensor to define distance along a curve when we are in non-cartesian coordinate systems, such as spherical or toroidal. From the metric tensor one can then start to study the "curvature" of a coordinate system. More soon!

In differential equation theory, the Jacobian matrix plays a key role in defining the stability of solutions. As a simple example, consider the matrix ordinary differential equation $latex \dot{\mathbf{x}} = A \mathbf{x} $ where $latex A = \left(\begin{array}{cc}a&b\\c&d\end{array} \right) $ . Because this is a linear system, the solution will always be a linear combination of exponentials $latex \mathbf{x} = \mathbf{v_1} e^{\lambda_1 t} + \mathbf{v_2} e^{\lambda_2 t } $ where $latex \mathbf{v_{1,2}} $ are the eigenvectors, $latex \lambda_{1,2} $ are the eigenvalues, and $latex t $ is time, which is positive. Since $latex t > 0 $ , we must have $latex Re(\lambda_{1,2}) < 0 $ for the solution to decay to a steady state. If this is violated for any $latex \lambda_i $ then that exponential "blows up" as $latex t \rightarrow \infty $ , since it is an exponential with an ever-increasing positive argument. Note that if the eigenvalues are complex then the imaginary part of the eigenvalue is related to the oscillatory part of the solution.

Such a $latex \lambda_i $ is called an unstable eigenvalue of the system, and it forms part of the vector space called the unstable manifold. Instability is defined as the tendency for a system to shoot away from a certain state when it is slightly disturbed (perturbed) from that state. Stability is the feature of a system to come back to a certain state if it is slightly perturbed from that state.
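For a 2x2 system, the eigenvalue criterion is easy to check directly; a small Python sketch (function names are my own):

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Roots of lambda^2 - tr(A)*lambda + det(A) = 0
    for A = [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_stable(a, b, c, d):
    """Stable iff both eigenvalues have negative real part."""
    return all(lam.real < 0 for lam in eigenvalues_2x2(a, b, c, d))

# damped oscillator x'' + x' + x = 0 as a first-order system:
# eigenvalues are (-1 +/- i*sqrt(3))/2, so Re(lambda) = -0.5 < 0 -> stable
print(is_stable(0, 1, -1, -1))   # True
# a saddle: eigenvalues +1 and -1, one positive real part -> unstable
print(is_stable(1, 0, 0, -1))    # False
```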

Stability has important consequences for applications, because it can determine whether your chemical/combustion reaction will go out of control, or if the species in your mathematical model go extinct, etc., depending on what the differential equations describe. Whatever your equations describe, knowing if the solutions are stable is pretty important and usually the first step of an analysis. If you have stable solutions, then you can reasonably trust numerics, but if you are trying to numerically simulate an unstable solution of an equation, you must be much more careful. In these situations you want to use specialized integration algorithms that preserve certain properties of your solutions, such as symplecticity. More about this later!

LaTeX makes math jokes so much more fun:

$latex \int{ \frac{1}{cabin} } = \log{ cabin} + C$

Don't forget the $latex + C $, it's important.

One chapter of my thesis was accepted and published in Communications in Nonlinear Science and Numerical Simulation. Here is the abstract. The actual print publication date is Volume 14, Issue 5, May 2009, Pages 1999-2005, but it is "pre-published" online.

Next on my list is to get LaTeX rendering to work in MovableType.

#! /Users/leto/git/factor/factor -script

USING: gsl gsl.sf.result gsl.sf.bessel gsl.sf.log kernel tools.test ;

"gsl.sf" test

Note that the space after the ! is necessary because every "word" in Factor must be separated by a space. The practice of running factor as a non-interactive script is not encouraged, but in certain situations it wins the day. The above script could be factored to take an argument of a subsystem to test, and then I would make a key-binding in vim (or emacs or whatever) so that when I am editing a file and I type ,t it shows me the results of that subsystem's tests.

All of this stuff can be done in the interactive UI, but I think people will fiddle with Factor more if they can easily bolt it onto their current workflow. People tend not to even try things that require totally changing their current tool chain. For me, I am editing the source of the GSL bindings in VIM, so having that script is invaluable. I hack on the source and whenever I change anything important I run the tests by hitting 2 keys. It is really important to make running and writing tests as easy as possible, or they won't get written at all.

Today ...... is a good day to hack.

git clone http://leto.net/code/factor.git

The GSL bindings are now in "extra/gsl", which is the extra.gsl namespace in Factor. I have bindings to all of GSL's special functions; I am currently working on documenting the Bessel functions and writing tests for the functions which take/return gsl_sf_result structures. If you would like to see another subsystem, just ask and I will let you know if it is doable. There are almost 50 subsystems, so if you want to help: clone, hack, commit and push!

- HTTP server + JSON document DB (key/value pairs), RESTful
- incremental map/reduce views
- peer-based replication
- multi-master, OR push all changes to one write-master plus read slaves
- concurrency over serial speed
- Ex: lotsofwords.com = 120GB database, 200ms response
- written in Erlang
- sharding via hash functions
- documents vs. relations
- each doc has a "revision"
- first in wins
- standalone apps via _external servers
- p2p replication - this is really cool and powerful
- _external servers - API to parse the request as JSON, filter it and then return
- rollup = reduce
- re-reduce phase =~ aggregate

Click on the link above to see a screenshot of how to make plots of functions in Factor. I am showing off the regular Bessel function J0(x), which is called gsl_sf_bessel_J0 in GSL. In the background I also have the source of the documentation.
