From a number cruncher's perspective the November 2008 issue of Linux Journal is actually quite interesting. In addition to
an article on the petaflop
Roadrunner supercomputer at LANL, there are two articles on number crunching with GPUs (one by
Robert Farber, the other by
Michael Wolfe) and one article on
numpy/
scipy by
Joey Bernard.
Farber's article is really an advert for
NVIDIA's CUDA, and unfortunately doesn't actually show any examples of doing anything with it (this is something of a trend in LJ, as it seems to drift away from showing you how to actually code things and toward merely describing point-and-click mega-code-projects). Wolfe's article shows snippets of matrix multiplication using both CUDA and
Brook, but is really more of a discussion about how to write a compiler that would automatically parallelize for
GPGPU work.
Joey Bernard's article on numpy does try to scratch the surface with simple worked examples, including the use of matplotlib for plotting (or "ipython -pylab").
For a moment after reading his article I was terrified that I'd totally messed up using numpy in my Python projects, as Bernard states that array multiplication in numpy (e.g., a3=a1*a2) is handled as a
matrix multiplication!
However, it's pretty easy to verify that unless you specifically create matrix objects (which his example did not do), a1*a2 is an element-wise array multiplication.
To do matrix-like multiplication on numpy array objects you need to specifically do something like "a4=numpy.dot(a1,a2)" or "a5=numpy.mat(a1)*numpy.mat(a2)."
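Here's a quick sketch of the check I did; the array values are just made up for illustration, but it shows the element-wise result from "*" on plain arrays versus the true matrix product from numpy.dot or matrix objects.

import numpy

a1 = numpy.array([[1.0, 2.0], [3.0, 4.0]])
a2 = numpy.array([[5.0, 6.0], [7.0, 8.0]])

# '*' on plain ndarrays multiplies element by element...
a3 = a1 * a2
print(a3)                       # [[ 5. 12.]
                                #  [21. 32.]]

# ...while numpy.dot (or converting to matrix objects) gives the matrix product.
a4 = numpy.dot(a1, a2)
a5 = numpy.mat(a1) * numpy.mat(a2)
print(a4)                       # [[19. 22.]
                                #  [43. 50.]]
print(numpy.allclose(a4, a5))   # True

So a1*a2 clearly isn't a matrix multiplication unless you've wrapped the arrays with numpy.mat first.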
Anyway, I hope this issue is a sign that Linux Journal may get back to publishing more "hands-on" articles on general programming topics interspersed between all the Web 2.0 and database articles.