It's been ages since I've posted anything of a serious but non-political nature, so I thought I'd spice things up a bit by posting the following proof from a Bulletin of the AMS review of some books on profinite groups.
It appears the 47th International Mathematical Olympiad has already come and gone without my being aware of it, but fortunately it's never too late to try the problems on one's own. Here are the links for the problems from day 1 and from day 2: why not give them a go and see how well you stack up against some of the world's smartest teenagers? The results for the top finishers this year can be found here.
See the prizewinner page here. This year the award goes to Andrei Okounkov, Terence Tao, Wendelin Werner and Grigori Perelman. The last-mentioned is likely the only awardee a general audience will have heard anything about, if only because of the concerted efforts of ignorant journalists to milk his story for all the freakshow content it's worth; even so, Perelman is actually one of the less interesting names on the list* as far as I'm concerned, as I've never had much of an interest in his branch of mathematics beyond putting it to use in the service of number theory. Yes, I'm well aware that the settlement of the Poincaré conjecture is important - and unlike most of the people writing about it for a lay audience, I don't need the dumbed-down version to get what it's about - but I'm no differential geometer by inclination, and understanding Ricci flow just doesn't do anything to get my pulse going.
This'll probably sail over the heads of all but the vanishingly small portion of my readers who are mathematicians or physicists, but I still think it's cool enough to be worth noting: Alain Connes has put online a freely downloadable pdf version of the book in which he virtually single-handedly invented the subject. You might also want to check out this AMS Notices review of the book for a high-level survey of its subject matter, while this page contains a plethora of links to additional related resources.
Now, who says all I write about nowadays is politics and pretty girls*?
I can't count the number of times I've run into some peak oil nut or Paul Ehrlich-style doomster uttering nonsense to the effect that we're using up some finite resource or other "faster than nature created it", as if this retort were an airtight argument before which any sensible person obviously had to surrender. I say this is nonsense because it rests on nothing more than a sleight of hand: "finite" encompasses an arbitrarily large range of values, and just because we're used to dealing with quantities on the smaller end of that range doesn't mean "nature" is obliged to play along. To say that something is "finite" tells us absolutely nothing about how long it'll last before we use it all up; to illustrate what I'm talking about, consider the following hypothetical scenario.
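To make the arithmetic behind this point concrete, here is a toy calculation (the numbers are my own illustrative inventions, not anyone's actual resource estimates): the very same finite stock can last ten billion years or a few centuries depending entirely on the consumption rate and its growth, which is precisely why "finite" by itself proves nothing.

```python
import math

def years_until_exhausted(stock, annual_use, growth=0.0):
    """How many years a finite stock lasts at a given consumption rate,
    with consumption optionally growing by `growth` per year."""
    if growth == 0:
        # Flat consumption: simple division.
        return math.ceil(stock / annual_use)
    years = 0
    while stock > 0:
        stock -= annual_use
        annual_use *= 1 + growth
        years += 1
    return years

# The same "finite" stock of 1e20 units, two wildly different lifetimes:
print(years_until_exhausted(1e20, 1e10))        # flat use: 10 billion years
print(years_until_exhausted(1e20, 1e10, 0.05))  # 5% yearly growth: a few centuries
```

The first case outlasts the Sun; only the growth assumption, not the finiteness, makes the second case short.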
The recently arrested "boss of bosses" of the Sicilian Mafia, Bernardo Provenzano, wrote notes using an encryption scheme similar to the one used by Julius Caesar more than 2,000 years ago, according to a biography of Italy's most wanted man.
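The classic Caesar cipher the article refers to is simple enough to sketch in a few lines: each letter is shifted a fixed number of places through the alphabet. (Provenzano's reported variant mapped letters to numbers as well, but the underlying idea, a fixed shift, is the same, which is why it offers no real security today.)

```python
def caesar(text, shift):
    """Shift each letter of `text` by `shift` places, wrapping around Z."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)

msg = caesar("ATTACK AT DAWN", 3)   # -> "DWWDFN DW GDZQ"
print(msg)
print(caesar(msg, -3))              # shifting back recovers the plaintext
```

With only 25 possible shifts, the scheme falls to exhaustive trial in milliseconds, which is roughly the level of protection Provenzano was relying on.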
Following up on my immediately preceding post about guessing number sequences, I'd like to link to a very worthwhile paper which touches on an issue raised in the comments section of my last post, to the effect that "simplicity" offers the best means of logically distinguishing between possible explanations: this is nothing other than the principle of Occam's razor lightly disguised, but as much as this principle is trumpeted as if it were an unassailable guideline for reasoning, the truth is that there's actually no rigorous justification for it.
Abstract. Many KDD systems incorporate an implicit or explicit preference for simpler models, but this use of “Occam’s razor” has been strongly criticized by several authors (e.g., Schaffer, 1993; Webb, 1996). This controversy arises partly because Occam’s razor has been interpreted in two quite different ways. The first interpretation (simplicity is a goal in itself) is essentially correct, but is at heart a preference for more comprehensible models. The second interpretation (simplicity leads to greater accuracy) is much more problematic. A critical review of the theoretical arguments for and against it shows that it is unfounded as a universal principle, and demonstrably false. A review of empirical evidence shows that it also fails as a practical heuristic.
Anyone interested in game theory should check out this website which is operated by a guy called Mike Shor. It's stuffed with all sorts of links and materials on the subject, and its lecture notes section should prove particularly interesting to those looking to study on their own, whether they be at undergraduate, MBA or PhD level. If it were up to me it would be a requirement for graduating from any university that students passed at least an elementary course on game theory, as much of the naiveté and outright foolishness to be seen on display in the political sphere would easily be seen for what it is with such knowledge (any game theorist could have predicted what would happen to agricultural output after communist collectivization campaigns, for instance, or that the public housing projects championed by Le Corbusier and the like were bound to decay into high-rise ghettoes).
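The free-rider logic behind a failure like collectivized agriculture is just the Prisoner's Dilemma in work clothes, and the prediction takes only a few lines to verify. (The payoff numbers below are my own illustrative choices, not anything from Shor's site.)

```python
# Each farmer chooses to "work" or "shirk"; his payoff depends on his own
# move and on what the rest of the collective does. With these (hypothetical)
# payoffs, shirking strictly dominates working, so the equilibrium is
# universal shirking -- and output collapses, just as game theory predicts.

payoffs = {                      # my payoff for (my move, others' move)
    ("work",  "work"):  3,
    ("work",  "shirk"): 0,       # I toil while the others free-ride
    ("shirk", "work"):  4,       # I free-ride on the others' toil
    ("shirk", "shirk"): 1,
}

def best_response(others_move):
    """Return my payoff-maximizing move given what everyone else does."""
    return max(["work", "shirk"], key=lambda me: payoffs[(me, others_move)])

print(best_response("work"))    # -> shirk
print(best_response("shirk"))   # -> shirk (dominant strategy either way)
```

Since shirking is the best response regardless of what others do, no appeal to revolutionary enthusiasm changes the equilibrium, which is the point an elementary course would drive home.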
The announcement is here. Following are some details:
Sieving has been done on 80 2.2 GHz Opteron CPUs and took 3 months. The matrix step was performed on a cluster of 80 2.2 GHz Opterons connected via a Gigabit network and took about 1.5 months.
Calendar time for the factorization (without polynomial selection) was 5 months.
As usual nowadays, the factoring was done using the General Number Field Sieve, the best description of which can be found in this article by Carl Pomerance (note that some knowledge of algebraic number theory is required); prior to the discovery of the GNFS, the Quadratic Sieve was the fastest known factoring method (and a heck of a lot better than ye olde Sieve of Eratosthenes).
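Both the Quadratic Sieve and the GNFS descend from Fermat's old difference-of-squares idea: if x² − n = y², then n = (x − y)(x + y). The sieve methods industrialize this by combining many congruences x² ≡ y² (mod n) rather than hunting for a single lucky x, but the ancestral method itself fits in a few lines:

```python
import math

def fermat_factor(n):
    """Factor an odd composite n by searching for x with x*x - n a
    perfect square; then n = (x - y) * (x + y)."""
    x = math.isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1

print(fermat_factor(5959))   # -> (59, 101)
```

Fermat's method is only fast when the two factors are close together; the genius of the sieve methods lies in removing that restriction, at the cost of the heavy machinery Pomerance's article describes.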
What are the security implications of this announcement? That RSA-640 took a mere 80 Opteron machines and about five months of calendar time to crack says to me that no RSA key with a length of 768 bits or less is worth anything against any security agency worth a damn, and even 1024-bit keys don't offer more than illusory security where the likes of the NSA or GCHQ are concerned - for one thing, they probably know faster ways of attacking the problem than have been published, and the NSA in particular has a lavish enough budget to pay for some truly incredible hardware. Any outfit which hasn't already moved to using 2048-bit or longer keys and isn't in the process of doing so is simply being stupid.
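To see why a factorization record translates directly into broken keys, here is a toy RSA round-trip with absurdly small primes (the standard textbook numbers, not a real key): everything about the private key follows mechanically once you can factor the public modulus n, which is exactly what the RSA-640 team did at 640-bit scale.

```python
# Toy RSA with tiny primes -- purely illustrative; real keys should use
# 2048-bit or larger moduli, per the point above.
p, q = 61, 53
n = p * q                   # public modulus (an attacker sees only n and e)
phi = (p - 1) * (q - 1)     # computable only if you can factor n
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent, via modular inverse (Python 3.8+)

m = 42                      # a message, encoded as a number < n
c = pow(m, e, n)            # encryption: c = m^e mod n
assert pow(c, d, n) == m    # decryption with d recovers m
```

Whoever factors n gets phi, hence d, hence every message; so the only thing standing between a key and its attacker is the cost of the factorization, which this announcement has just lowered again.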