1.1. Erlang Many-Core Accelerator Support
Summary: Erlang support for many-core accelerators would be a significant boost for Erlang as a high-performance computing language/framework. The project would use the Parallella hardware platform and many-core programming infrastructure as a test and development vehicle.
Expected results: support within Erlang/OTP that allows tasks to be offloaded from the host processor to many-core accelerators like the Epiphany processor on the Parallella platform.
Knowledge prerequisite: C/C++, Erlang.
Mentor: Yaniv Sapir, Adapteva.
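The host/accelerator split these projects share can be modeled in a language-neutral way. The sketch below (all names hypothetical) simulates the accelerator with a thread pool standing in for Epiphany cores; in a real implementation the kernel would run on the accelerator itself, launched from the Erlang VM via a NIF or port driver:

```python
from concurrent.futures import ThreadPoolExecutor

def accelerator_kernel(task):
    """Stand-in for a kernel running on one Epiphany core (hypothetical)."""
    op, data = task
    if op == "square":
        return [x * x for x in data]
    raise ValueError("unsupported op: %s" % op)

def offload(tasks, cores=4):
    """Host-side dispatch: farm tasks out to 'accelerator' cores,
    gathering results in submission order."""
    with ThreadPoolExecutor(max_workers=cores) as pool:
        return list(pool.map(accelerator_kernel, tasks))
```

The essential design point is the same in any host language: the host partitions work into tasks, hands them to accelerator cores, and collects results asynchronously.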
1.2. Go Language Many-Core Accelerator Support
Summary: Go support for many-core accelerators would be a significant performance/energy-efficiency boost for applications. The project would use the Parallella hardware platform and many-core programming infrastructure as a test and development vehicle.
Expected results: support that allows Go applications running on the host to offload tasks to many-core accelerators like the Epiphany processor on the Parallella platform.
Knowledge prerequisite: C/C++, OpenCL, Go.
Mentor: Yaniv Sapir, Adapteva.
1.3. NumPy/SciPy Many-Core Accelerator Support
Summary: NumPy/SciPy support for many-core accelerators would provide a significant performance boost for Python high-performance computing applications that make use of those libraries. The project would use the Parallella hardware platform and many-core programming infrastructure as a test and development vehicle.
Expected results: support that allows Python applications to offload NumPy/SciPy functions to many-core accelerators like the Epiphany processor on the Parallella platform.
Knowledge prerequisite: C/C++, OpenCL, Python, NumPy/SciPy.
Mentor: Ed Hartley, Meta Metal.
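One plausible shape for this project is a dispatch layer that routes supported library calls to an accelerated backend and falls back to the host implementation otherwise. The sketch below is a hypothetical illustration of that pattern, using plain Python stand-ins (a real version would wrap NumPy functions and launch OpenCL kernels):

```python
# Host (reference) implementations standing in for NumPy functions.
_HOST = {
    "dot": lambda a, b: sum(x * y for x, y in zip(a, b)),
}

# Accelerator-backed implementations (hypothetical; computed on the host
# here so the sketch stays self-contained).
_ACCELERATED = {}

def accelerated(name):
    """Decorator registering an accelerator-backed version of a function."""
    def register(fn):
        _ACCELERATED[name] = fn
        return fn
    return register

@accelerated("dot")
def _dot_kernel(a, b):
    # In the real project this would launch an OpenCL kernel on the Epiphany.
    return sum(x * y for x, y in zip(a, b))

def dispatch(name, *args, accelerator_available=True):
    """Use the accelerated backend when present, else fall back to the host."""
    if accelerator_available and name in _ACCELERATED:
        return _ACCELERATED[name](*args)
    return _HOST[name](*args)
```

The fallback path matters: applications keep working unmodified on hosts without an accelerator, and functions can be migrated to the accelerator one at a time.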
1.4. Hadoop Many-Core Accelerator Support
Summary: many-core accelerator support in Hadoop would provide a significant performance/energy-efficiency boost for applications built using the framework. The project would use the Parallella hardware platform and many-core programming infrastructure as a test and development vehicle.
Expected results: offload of MapReduce operations from Hadoop applications running in a JVM on the host processor to many-core accelerators like the Epiphany processor on the Parallella platform.
Knowledge prerequisite: C/C++, Java, OpenJDK development, Hadoop.
Mentor: Yaniv Sapir, Adapteva.
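The MapReduce offload being described can be sketched as follows. This is a minimal, hypothetical model: the map phase is partitioned across simulated accelerator cores (a thread pool here), while shuffle and reduce stay on the host, which is one plausible split for a small-memory accelerator:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict
from itertools import chain

def map_reduce(records, map_fn, reduce_fn, cores=4):
    """Run the map phase on simulated accelerator cores; shuffle and
    reduce on the host."""
    # Partition input records across cores, as an offloaded map phase would.
    parts = [records[i::cores] for i in range(cores)]
    with ThreadPoolExecutor(max_workers=cores) as pool:
        mapped = pool.map(
            lambda part: [kv for rec in part for kv in map_fn(rec)], parts)
    # Shuffle: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapped):
        groups[key].append(value)
    # Reduce phase back on the host.
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count, the canonical MapReduce example.
def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(word, counts):
    return sum(counts)
```

A real implementation would marshal records across the JVM/accelerator boundary (e.g. via JNI and OpenCL buffers) rather than Python data structures.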
1.5. MPI Lite (MPI for Many-Core Accelerators)
Summary: MPI Lite will allow the MPI programming model to be used with massively parallel systems made up of processors with a small amount of local memory and where traditional MPI would prove too heavy. The project would use the Parallella hardware platform and many-core programming infrastructure as a test and development vehicle.
Expected results: a lightweight message-passing framework, built on top of OpenCL, that is suitable for use with many-core accelerators like the Epiphany processor on the Parallella platform.
Knowledge prerequisite: C/C++, OpenCL, and MPI or a similar message-passing framework.
Mentor: David Richie, Brown Deer Technology.
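The core of such a framework is a small send/receive layer with one mailbox per rank. The sketch below (names hypothetical) models this in Python with queues and threads standing in for cores; a real MPI Lite would map mailboxes onto each core's small local memory and OpenCL buffers:

```python
import queue
import threading

class MpiLite:
    """Minimal message-passing layer: one mailbox per rank (hypothetical)."""
    def __init__(self, nranks):
        self.nranks = nranks
        self.mailboxes = [queue.Queue() for _ in range(nranks)]

    def send(self, dest, src, payload):
        self.mailboxes[dest].put((src, payload))

    def recv(self, rank):
        """Blocking receive; returns (source rank, payload)."""
        return self.mailboxes[rank].get()

def ring(comm, rank):
    """Each rank forwards an incrementing token around a ring;
    rank 0 starts the token and returns its final value."""
    if rank == 0:
        comm.send(1 % comm.nranks, 0, 1)
        _, token = comm.recv(0)
        return token
    _, token = comm.recv(rank)
    comm.send((rank + 1) % comm.nranks, rank, token + 1)

def run_ring(nranks):
    """Run the ring exchange with one thread per non-zero rank."""
    comm = MpiLite(nranks)
    workers = [threading.Thread(target=ring, args=(comm, r))
               for r in range(1, nranks)]
    for t in workers:
        t.start()
    token = ring(comm, 0)
    for t in workers:
        t.join()
    return token
```

With 4 ranks the token is incremented by ranks 1-3, so rank 0 receives 4 back. The "lite" aspect is precisely this restriction to blocking point-to-point mailboxes, avoiding the buffering and datatype machinery that makes full MPI too heavy for a few kilobytes of local memory per core.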
Software-defined Radio
2.1. GNU Radio Scheduler for Many-Core Accelerators
Summary: a custom scheduler for GNU Radio will allow it to efficiently schedule execution of signal processing blocks on many-core accelerators. The project would use the Parallella hardware platform and many-core programming infrastructure as a test and development vehicle.
Expected results: a GNU Radio scheduler for heterogeneous systems that enables efficient scheduling of signal processing blocks across a large number of cores and provides features such as processor affinity.
Knowledge prerequisite: C/C++, DSP.
Mentor: Tommy Tracy II, High Performance Low Power Lab, University of Virginia.
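A simple starting point for such a scheduler is affinity-aware greedy placement: blocks pinned by the user go to their requested cores first, and the remainder are balanced by estimated cost. The sketch below is a hypothetical illustration (block names and cost estimates are invented), not GNU Radio's actual scheduler API:

```python
def schedule(blocks, ncores, affinity=None):
    """Assign signal processing blocks to cores, honoring pinned
    affinities and otherwise balancing estimated load.

    blocks: list of (name, cost) pairs; affinity: optional {name: core}.
    Returns {name: core}.
    """
    affinity = affinity or {}
    load = [0.0] * ncores
    placement = {}
    # Place pinned blocks first so their cores' load is accounted for.
    for name, cost in blocks:
        if name in affinity:
            core = affinity[name]
            placement[name] = core
            load[core] += cost
    # Greedily place the rest on the least-loaded core, heaviest first.
    rest = sorted((b for b in blocks if b[0] not in affinity),
                  key=lambda b: -b[1])
    for name, cost in rest:
        core = min(range(ncores), key=load.__getitem__)
        placement[name] = core
        load[core] += cost
    return placement
```

A production scheduler would also account for inter-block buffer placement and host/accelerator data movement, which greedy load balancing alone ignores.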
2.2. GNU Radio DSP Block Porting
Summary: porting signal processing blocks to the Parallella platform will allow GNU Radio to take advantage of the Epiphany many-core accelerator.
Expected results: signal processing blocks that can be targeted to Epiphany cores and assembled into flow graphs using Python scripts running on the host processor.
Knowledge prerequisite: C/C++, DSP.
Mentor: Tommy Tracy II, High Performance Low Power Lab, University of Virginia.
2.3. OpenBTS DSP Many-Core Accelerator Support
Summary: OpenBTS support for many-core accelerators would make it possible to create multi-channel software-defined radio GSM base stations whose power consumption is considerably lower than when using a general-purpose processor (GPP) alone. The project would use the Parallella hardware platform and many-core programming infrastructure as a test and development vehicle.
Expected results: OpenBTS DSP code ported to the Parallella platform, enabling support for at least four simultaneous radio channels (ARFCN) when using a 16-core accelerator.
Knowledge prerequisite: C/C++, DSP.
Mentor: TBC.
Distributed Computing
3.1. BOINC Many-Core Accelerator Support
Summary: BOINC support for many-core accelerators would provide a significant performance/energy-efficiency boost for volunteer computing applications built using the framework. The project would use the Parallella hardware platform and many-core programming infrastructure as a test and development vehicle.
Expected results: support for the Parallella platform within BOINC, and at least one open source volunteer computing project that uses the framework.
Knowledge prerequisite: C/C++, OpenCL, BOINC.
Mentor: Yaniv Sapir, Adapteva.