by piotr5 » Sat May 18, 2013 12:09 am
As far as I understand it, the reality is much more provisional than that.
Theoretically it would be possible to run an OS kernel on those 16/64 cores and thereby use the thread pool as usual (some additional programming would be needed, for example saving the large number of registers on every context switch). The problem is just that this way you'd basically have a 16/64-core 10MHz computer to run your bash scripts, with the possibility of speeding up a C program 100-fold by copying it into a special position in RAM -- if that program can cope with 32KB. Well, at least that OS would then run in parallel to the OS running on the ARM. So in terms of speed we're back in the 80s, but with cheaper hardware and less electricity. Same in terms of RAM if speed is important. The difference to a GPU is that on Parallella you have 1GB of RAM at your fingertips, and a rich set of machine-code instructions. But you can only use 32KB of RAM at full speed; all other RAM gets transported to each core at a much slower speed. You have the choice: do you want a fast computer with only 32KB of RAM for each of the many cores, or 1GB of RAM on a 100-times-slower computer with the same number of cores? So far only the former possibility exists (no one really wants the latter), even though that means giving up the memory-hungry option of multiple threads per core...
In other words, the platform is so young that a lot of programming must still be done to reuse already existing software solutions. Yes, like with a GPU you must recompile all your software, since the Epiphany uses a different machine code than the ARM. But this doesn't mean you couldn't program in your favourite language and use your favourite libs. What compiles in gcc will run, if the tight memory constraints are fulfilled...