Except for the similarity to a GPU, Parallella is a completely new idea. Well, actually, a coprocessor isn't a new idea at all; the x86 platform has had those since its early days. But instead of taking advantage of the parallel processing power, more and more processing units were added, and most of the time they sit unoccupied. I do wonder why Parallella believes this will be any different. The straightforward use is to let the cores do matrix math and the like through a customized math library.
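For instance, here is a minimal sketch of how such a math library might partition a matrix-vector multiply across the cores. The NCORES count and the matvec_band helper are made up for illustration; the point is that each core only ever touches one band of rows, which is what lets the working set fit into a small local memory:

```c
#include <stddef.h>

#define NCORES 16  /* hypothetical core count of the coprocessor grid */

/* Each core multiplies one contiguous band of rows of A, so the only
 * data it needs locally is its band of A plus the shared vector x. */
void matvec_band(const float *A, const float *x, float *y,
                 size_t n, int core_id)
{
    size_t rows  = n / NCORES;              /* assume NCORES divides n */
    size_t first = (size_t)core_id * rows;
    for (size_t i = first; i < first + rows; i++) {
        float acc = 0.0f;
        for (size_t j = 0; j < n; j++)
            acc += A[i * n + j] * x[j];
        y[i] = acc;
    }
}
```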
Honestly, that's all I can imagine happening as long as we're restricted to high-level programming languages and 8 KB assembler programs. But that isn't all that's possible. An interesting idea would be to rewrite the Linux kernel so the cores are used as little helpers, emulating hardware or running actual mini-tasks in the background. That way, however, programmers won't have many cores left for the actually interesting stuff. So this approach would require good management software on the main CPU that allows fine-tuning how much is in use by kernel space and, if requested, frees up everything.
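A minimal host-side sketch of what that manager might look like, assuming a simple bitmask over 16 cores; the names cores_reserve_for_kernel and cores_release_all are hypothetical, not any existing kernel API:

```c
#include <stdint.h>

#define NCORES 16
static uint16_t kernel_cores;  /* bit i set => core i busy with kernel work */

/* Kernel asks for a helper core; returns its id, or -1 if all are taken. */
static int cores_reserve_for_kernel(void)
{
    for (int i = 0; i < NCORES; i++)
        if (!(kernel_cores & (1u << i))) {
            kernel_cores |= (1u << i);
            return i;
        }
    return -1;
}

/* User space wants full control back: the kernel drops all its claims. */
static void cores_release_all(void)
{
    kernel_cores = 0;
}
```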
Another interesting approach would be to allow in-kernel translation between machine code for the main processor and the Parallella cores: if a thread is small enough to fit into the 8 KB, translate it and run it there. All of that could even be done at the level of libthread and ld.so and such. It would require a new binary format for libraries and executables, but since we're talking about a completely new architecture anyway...
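A rough sketch of the decision such a loader hook would have to make, assuming the 8 KB figure from above; the struct thread_image and its fields are invented purely for illustration:

```c
#include <stddef.h>
#include <stdbool.h>

#define CORE_MEM (8 * 1024)   /* the per-core local store assumed above */

/* Everything the thread needs, translated code included, has to fit
 * into one core's local memory before it is worth migrating. */
struct thread_image {
    size_t text_size;    /* translated machine code */
    size_t data_size;    /* statics the thread touches */
    size_t stack_size;   /* worst-case stack bound */
};

static bool fits_on_core(const struct thread_image *t)
{
    return t->text_size + t->data_size + t->stack_size <= CORE_MEM;
}
```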
What the major parallel programming approaches are missing is making use of predictable calculation speed. One could minimize communication by forcing two cores to calculate the same thing simultaneously. Sometimes a good prediction of what the other cores are doing lets the more idle cores send data over DMA before it is even requested. Data transfer could also be optimized by sending data over the edge, through the main processors, to where it belongs (after all, the topology is shaped like a ball or doughnut). For example, on a 64-core chip the upper-left core could send a message to the lower-right core by messaging the main CPU, which hands the message over, hopefully in far fewer cycles than the 14 hops it would otherwise need across the mesh. It would also be interesting to optimize programs by aligning calculations to time frames and making sure no other core will block the communication path at that moment.
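To make that trade-off concrete, here is a toy cost model assuming X-Y routing on an 8x8 mesh, which is where the 14 hops corner to corner come from (7 + 7). HOST_COST is a made-up constant that would have to come from real measurements:

```c
#include <stdlib.h>

#define MESH_DIM  8
#define HOST_COST 10   /* hypothetical round-trip cost via the main CPU */

/* X-Y routing on a mesh costs one hop per row/column step. */
static int mesh_hops(int r0, int c0, int r1, int c1)
{
    return abs(r1 - r0) + abs(c1 - c0);
}

/* Detour through the host only when it beats the on-chip path,
 * e.g. mesh_hops(0, 0, 7, 7) == 14 for the corner-to-corner case. */
static int route_via_host(int r0, int c0, int r1, int c1)
{
    return HOST_COST < mesh_hops(r0, c0, r1, c1);
}
```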
People keep complaining that 8 KB is too little for a sensible program, but that's only true when you're using shared libs. When you statically link your program, you save on administrative code, and only code that actually gets used ends up in this limited space. So how about a dynamic linker doing the same kind of trimming at run time?
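The static-link half of this already exists in standard toolchains: with per-function sections, the linker can garbage-collect everything unreachable. A small demo, assuming plain GCC/binutils rather than anything Epiphany-specific:

```c
/* size_demo.c -- unused() is dropped from the final static binary.
 * Build (standard GCC/binutils flags, nothing Epiphany-specific):
 *   gcc -Os -static -ffunction-sections -fdata-sections \
 *       -Wl,--gc-sections -o size_demo size_demo.c
 */
#include <stdio.h>

void unused(void) { puts("never reached, never linked in"); }

int main(void)
{
    puts("only reachable code survives --gc-sections");
    return 0;
}
```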
Those Parallella cores are obviously a security issue in a normal Linux kernel if the user can access them freely. However, I can imagine the situation being turned around if the cores took over the work of hardware otherwise used for security: starting with encrypted disks and network monitoring, right down to watching over the very code the main cores are executing, thereby effectively preventing those cores from messing with the Parallella cores. Additionally, branch prediction and various other complex duties of a processor could be emulated, increasing speed as a by-product of all the monitoring. Virtual machines could also hand the actual machine-code-to-machine-code translation to the extra cores, making Parallella emulate any platform you want, with several security add-ons on top. A great platform to research and develop a computer virus, or its antidote.
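As a taste of the code-watching idea, here is a sketch of the loop one core could run over a region of host code, flagging any change against a known-good hash. FNV-1a is chosen only because it is tiny; a serious monitor would want a cryptographic hash:

```c
#include <stdint.h>
#include <stddef.h>

/* 64-bit FNV-1a over an arbitrary memory region. */
static uint64_t fnv1a(const uint8_t *p, size_t n)
{
    uint64_t h = 0xcbf29ce484222325ULL;    /* FNV offset basis */
    while (n--) {
        h ^= *p++;
        h *= 0x100000001b3ULL;             /* FNV prime */
    }
    return h;
}

/* Returns nonzero if the monitored region no longer matches its
 * known-good hash, i.e. someone patched the code behind our back. */
static int code_tampered(const uint8_t *region, size_t len, uint64_t good)
{
    return fnv1a(region, len) != good;
}
```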
There are many projects that give meaning to idle cores; all it takes is altering the code so it runs efficiently on Parallella. That would also serve as a good testing ground for various IDE improvements and such. I think those projects should be looked into early on, too. So whoever buys a Parallella will have at least one advantage: better success in those projects...
Anybody have some other wild ideas that might not fit into the "what will you do" category? My ideas alone are too much for a single person to handle, so any takers for maintaining some large projects in these directions?