64core version

Postby jlambrecht » Wed Mar 18, 2015 1:54 pm

Hi,

I kind of got the impression that the 64-core version has been cancelled. Is this correct? I could not find any definitive post about it on the Adapteva website.

Personally, I think the next step should be a 128- or 256-core version; I am sure it would prove more than profitable.
jlambrecht
 
Posts: 41
Joined: Wed Nov 13, 2013 7:57 pm

Re: 64core version

Postby hmiranda » Fri Mar 27, 2015 6:02 pm

I think Parallella is now well known: what it is, what the company is doing, and the good job the company has done delivering the chips to its customers. If a new project asked for funding, I am sure it would get the funds.

The question is: are there any plans to raise funds for a new chip? I would love to be one of the people who put money into the project.
--
Best regards,
Horacio Miranda
hmiranda
 
Posts: 4
Joined: Tue Aug 06, 2013 7:09 am

Re: 64core version

Postby jlambrecht » Tue Mar 31, 2015 12:43 pm

Yes, indeed, I also think many people would love to see a many-core version with at least 64 cores.
jlambrecht
 
Posts: 41
Joined: Wed Nov 13, 2013 7:57 pm

Re: 64core version

Postby sebraa » Wed Apr 01, 2015 6:51 pm

That is exactly Adapteva's problem: Many people would like to see the 64-core version (or a 256-core version), but nobody wants to buy it. Or at least nobody is willing to pay up front to have them produced in the first place.
sebraa
 
Posts: 495
Joined: Mon Jul 21, 2014 7:54 pm

Re: 64core version

Postby piotr5 » Wed Apr 01, 2015 9:53 pm

I have to agree, though the wording might have been a bit cynical. However, at the Erlang talk Adapteva basically said that even though people would like to see many cores, those chips and computers would just be ornaments in somebody's room, without real use. First we need a working infrastructure for parallel coding: starting with students who learn to think the right way, through various libraries that make use of massive numbers of coprocessors, right down to development tools fit for this task. If that condition is fulfilled, Adapteva will at least try to compete for creating the first chip with several thousand cores by 2018... ;)
piotr5
 
Posts: 230
Joined: Sun Dec 23, 2012 2:48 pm

Re: 64core version

Postby aolofsson » Wed Apr 01, 2015 10:12 pm

The design of a 1024-core chip will start the day a customer puts up money to build it. It could happen this month, this year, or never... the "build it and they will come" phase of Adapteva is over.

Andreas
aolofsson
 
Posts: 1005
Joined: Tue Dec 11, 2012 6:59 pm
Location: Lexington, Massachusetts,USA

Re: 64core version

Postby nickoppen » Sun Apr 05, 2015 5:13 am

My 2c worth...

I'd prefer to see more memory than more processors. Then, if filling the memory becomes a problem... more bandwidth.

I totally agree with the approach Adapteva is taking at the moment. I don't believe the limits of the 16-core Epiphany have been reached yet, and the path to greater penetration is improving the software stack.

Also, product development based on "wouldn't it be cool to have..." is a great way of going out of business. Look at the Raspberry Pi. They sold upwards of 4 million units before they released version 2. What did they do in the meantime? They made the software rock solid and built the community.

nick
Sharing is what makes the internet Great!
nickoppen
 
Posts: 266
Joined: Mon Dec 17, 2012 3:21 am
Location: Sydney NSW, Australia

Re: 64core version

Postby dobkeratops » Wed Jun 10, 2015 5:11 pm

>> first we need a working infrastructure of parallell-coding. starting with students who learn thinking the right way,

It's a great shame Sony stopped supporting Linux on the PS3. That machine needed the same tools Parallella needs.

In the end it was just done 'the hard way', and no such tools appeared within its lifetime (roughly seven-plus years of focus).
I do personally believe such tools are possible; it's just a chicken-and-egg situation. And I do personally believe a throughput-oriented, many-core, general-purpose processor is a better idea than GPGPU. The PS3 was actually a kludgy halfway house: the original intention was that the Cell would assist with rendering. I still think such cores would be better for the job now done by vertex shaders, but once they got an off-the-shelf GPU (with dedicated vertex shaders) that concept became a liability. Fast forward to 2015 and unified shader pipelines are standard, so imagine a machine with 1-4 low-latency-optimized CPU cores, then some larger number of SPU/Phi/Parallella-like units that can handle vertices, recognition tasks, AI and physics, plus a conventional pixel/texture-oriented GPU.

Consoles sometimes live on with hobbyist/homebrew communities, but I personally haven't been motivated to keep coding for the PS3; mobiles subsequently appeared as 'the next big thing', so I wanted to focus on mainstream portable multicore+SIMD+shader code.

I think some of the things I wished I could have done back then would be relevant to the Parallella, but I'm discovering some assumptions I seem to have made, e.g. about the extent to which the coprocessors can use 'system RAM'/'host RAM'. (I gather there is some potential to improve that through the FPGA. If it can work with a large dataset, that opens up more application areas. Let me read around: how far have people got with ray tracers already? Would this also be important for voice recognition, i.e. being able to compare against a large sample set? I guess for other 'input'-oriented tasks the small window isn't a problem.)

I also wonder if you could do something useful for the Parallella and mainstream hardware with a 'Single System Image' setup: multiple conventional machines with virtual memory shared across a network, and, as with any NUMA system, the ability to deal explicitly with resources local to one node, although networks are usually much slower than DMA.

One sticking point is the difference in software model between DMA and a cache. Although both rely on locality of reference, they allow different approaches.
We have better ways of abstracting things now with lambdas: writing kernels plugged into internal iterators can be done more pleasantly than before. What's going through my mind is a C++ framework that could be applied, with a recompile, to both DMA and multicore/NUMA.
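
To make that concrete, here is a rough sketch of the kind of thing I mean (all names are hypothetical, and memcpy stands in for a real DMA transfer into local store):

[code]
// Hypothetical sketch (not an existing Parallella API): one internal iterator,
// two backends selected at compile time. The kernel is a lambda applied to a
// tile of elements; only the backend decides whether the tile is the original
// cached memory or a local scratch copy filled by a DMA-like transfer.
#include <cstddef>
#include <cstring>
#include <cstdio>
#include <vector>

constexpr std::size_t TILE = 256;   // elements per tile (assumed to fit local store)

#ifdef USE_DMA_BACKEND
// "DMA" flavour: stage each tile into a local buffer (memcpy stands in for a
// DMA transfer), run the kernel on the local copy, then write the tile back.
template <typename Kernel>
void for_each_tile(float* data, std::size_t n, Kernel k) {
    float local[TILE];
    for (std::size_t base = 0; base < n; base += TILE) {
        std::size_t len = (n - base < TILE) ? (n - base) : TILE;
        std::memcpy(local, data + base, len * sizeof(float));   // "DMA in"
        k(local, len);                                          // compute on local store
        std::memcpy(data + base, local, len * sizeof(float));   // "DMA out"
    }
}
#else
// Cache/NUMA flavour: just hand the kernel slices of the original array and
// let the cache hierarchy handle locality.
template <typename Kernel>
void for_each_tile(float* data, std::size_t n, Kernel k) {
    for (std::size_t base = 0; base < n; base += TILE) {
        std::size_t len = (n - base < TILE) ? (n - base) : TILE;
        k(data + base, len);
    }
}
#endif

int main() {
    std::vector<float> v(1000, 2.0f);
    // The user-visible code is identical for both backends.
    for_each_tile(v.data(), v.size(), [](float* tile, std::size_t len) {
        for (std::size_t i = 0; i < len; ++i) tile[i] = tile[i] * tile[i] + 1.0f;
    });
    std::printf("v[0]=%f v[999]=%f\n", v[0], v[999]);
}
[/code]

The user code only ever sees for_each_tile and the lambda; whether the tile lives in cached system RAM or in a core-local scratch buffer is decided by which backend gets compiled in.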
dobkeratops
 
Posts: 189
Joined: Fri Jun 05, 2015 6:42 pm
Location: uk

Re: 64core version

Postby piotr5 » Wed Jun 10, 2015 10:23 pm

With C++ I noticed another problem: how can disk access be transformed so that it looks like memory access? On the Parallella this problem is nearly unsolvable; you'd need to communicate with the FPGA in some way in order to initiate the disk transfer. Mind you, I'm not talking about swap memory here, just the idea that some objects are located in memory and some objects are loaded on demand from disk, without any kind of size restriction...
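
The closest I can picture in plain C++ is something like the following, a purely hypothetical host-side sketch (it does nothing for the Epiphany side, where the read would still have to be requested from the ARM):

[code]
// Purely hypothetical sketch of "loaded on demand from disc": a disc_ptr<T>
// behaves like a pointer, but only reads the object from the file the first
// time it is dereferenced. Only sensible for trivially copyable T.
#include <cstdio>
#include <memory>
#include <string>

template <typename T>
class disc_ptr {
    std::string path_;
    long offset_;
    mutable std::unique_ptr<T> cached_;   // empty until first access
public:
    disc_ptr(std::string path, long offset)
        : path_(std::move(path)), offset_(offset) {}

    const T& operator*() const {
        if (!cached_) {                              // load on demand
            cached_ = std::make_unique<T>();
            if (FILE* f = std::fopen(path_.c_str(), "rb")) {
                std::fseek(f, offset_, SEEK_SET);
                std::fread(cached_.get(), sizeof(T), 1, f);
                std::fclose(f);
            }
        }
        return *cached_;
    }
    const T* operator->() const { return &**this; }
};

struct Sample { float features[64]; };

int main() {
    // Looks like an ordinary memory access, but nothing is read until use.
    disc_ptr<Sample> s("samples.bin", 0);
    std::printf("first feature: %f\n", s->features[0]);
}
[/code]
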

I'm bringing this up because, as far as I know, a data set of 1 GB isn't sufficient for AI-powered speech-to-text, and I'm not even sure 4 GB (the most a 32-bit architecture can address) is sufficient...
piotr5
 
Posts: 230
Joined: Sun Dec 23, 2012 2:48 pm

Re: 64core version

Postby dobkeratops » Thu Jun 11, 2015 8:35 am

Perhaps that just needs to involve the ARM: keep a queue of file requests in the 32 MB shared area, and have a process on the ARM service it.
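
Roughly what I have in mind, as a purely hypothetical sketch (the slot layout and names are made up, and mapping the shared segment via the eHAL is left out; the ARM side would spin on something like this):

[code]
// Hypothetical sketch of the "file request queue in shared RAM" idea.
// An eCore fills in a slot and sets state = REQUESTED; this ARM-side loop
// performs the read into the shared buffer and flips the state back.
#include <cstdio>
#include <cstdint>
#include <cstring>

enum ReqState : uint32_t { FREE = 0, REQUESTED = 1, DONE = 2 };

struct FileRequest {
    volatile uint32_t state;      // FREE / REQUESTED / DONE
    char     path[128];           // file to read (null-terminated)
    uint32_t offset;              // byte offset into the file
    uint32_t length;              // bytes wanted (clamped to sizeof data)
    uint8_t  data[4096];          // result buffer, also in shared RAM
};

// One pass over a small table of request slots; call this in a loop on the ARM.
void service_requests(FileRequest* slots, int nslots) {
    for (int i = 0; i < nslots; ++i) {
        FileRequest& r = slots[i];
        if (r.state != REQUESTED)
            continue;
        uint32_t len = (r.length > sizeof r.data) ? sizeof r.data : r.length;
        if (FILE* f = std::fopen(r.path, "rb")) {
            std::fseek(f, r.offset, SEEK_SET);
            std::fread(r.data, 1, len, f);
            std::fclose(f);
        }
        r.state = DONE;           // the eCore polls for this and copies data in
    }
}

int main() {
    // Stand-in for the shared segment so the sketch compiles on its own.
    static FileRequest shm[8] = {};
    std::strcpy(shm[0].path, "samples.bin");
    shm[0].offset = 0;
    shm[0].length = 512;
    shm[0].state  = REQUESTED;
    service_requests(shm, 8);
    std::printf("slot 0 state: %u\n", (unsigned)shm[0].state);
}
[/code]
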

Also, could something in the FPGA initiate transfers *into* Epiphany local stores (given that the eLink interface is there for Epiphany chips to DMA to/from each other)? A custom DMA-to-system-RAM / DMA-from-system-RAM mechanism could theoretically use bigger addresses in a different space...
dobkeratops
 
Posts: 189
Joined: Fri Jun 05, 2015 6:42 pm
Location: uk
