interesting mobile vision app (iPhone)

Posted: Wed Jan 06, 2016 12:30 pm
by dobkeratops
https://www.youtube.com/watch?v=XMdct-5bERQ

Now this is the sort of thing that should be running on a 1000+ core Epiphany... I suspect it could handle this type of convolutional neural net better than GPUs, because it could explicitly keep the data-flow between layers on chip (and could probably keep each layer's kernels within the scratchpads). At best a GPU would be throwing temporaries in and out of the L2 cache, which is still quite a long journey.
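
To make that concrete, here's a rough plain-C sketch of what I mean by keeping a layer in scratchpad (tile and kernel sizes are purely illustrative, not tuned for a real core's local memory, and the function names are my own):

    /* Sketch: a 3x3 convolution whose kernel and tiles fit easily in a
       core's scratchpad. 9 weights = 36 bytes; one 32x32 float tile = 4 KB. */
    #include <stdio.h>

    #define TILE_W 32          /* hypothetical per-core tile width  */
    #define TILE_H 32          /* hypothetical per-core tile height */

    static float kernel[3][3] = {
        {  0.0f, -1.0f,  0.0f },
        { -1.0f,  4.0f, -1.0f },   /* plain old edge-detection (Laplacian) */
        {  0.0f, -1.0f,  0.0f },
    };

    static void conv3x3(float in[TILE_H][TILE_W], float out[TILE_H][TILE_W])
    {
        for (int y = 1; y < TILE_H - 1; ++y)
            for (int x = 1; x < TILE_W - 1; ++x) {
                float acc = 0.0f;
                for (int ky = -1; ky <= 1; ++ky)
                    for (int kx = -1; kx <= 1; ++kx)
                        acc += kernel[ky + 1][kx + 1] * in[y + ky][x + kx];
                out[y][x] = acc;   /* stays on chip for the next layer */
            }
    }

    int main(void)
    {
        static float in[TILE_H][TILE_W], out[TILE_H][TILE_W];
        in[TILE_H / 2][TILE_W / 2] = 1.0f;   /* single bright pixel */
        conv3x3(in, out);
        printf("centre response: %f\n", out[TILE_H / 2][TILE_W / 2]);
        return 0;
    }

The point is just that the weights, the input tile, and the output tile all live in local memory, so one layer's result can feed the next without a round trip off chip.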

Whilst this app is usable enough to show the potential, it also shows the demand for more power... there are so many ways it could be extended.
Basically, imagine an Epiphany in someone's pocket, connected to cameras, with voice in/out. Even for someone sighted, you can imagine more 'situational awareness': eyes in the back of your head, an 'eye for detail'...

It seems like an absolutely perfect fit on paper. Supposedly ever-deeper nets give better results, which means even more on-chip data-flow...

(One minor point: supposedly the next Nvidia chip will come with FP16 support ('mixed precision'); they say this will be an efficient trade-off for nets. I wonder if they could do something similar for a future Epiphany. Or perhaps, on a Parallella-like setup, you could do the first layer (RGB->YUV? plain old edge-detection?) on the FPGA for a leg-up. In the past, GPUs used to have separate vertex and pixel pipelines with different precision.)
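
For the FPGA-first-layer idea, the RGB->YUV case really is trivial: just a constant 3x3 matrix per pixel. A sketch in plain C (standard BT.601 coefficients; the function name is mine):

    /* Sketch: BT.601 RGB -> YUV, a fixed-coefficient per-pixel transform --
       exactly the sort of streaming arithmetic an FPGA front-end could do
       before the cores ever see the data. */
    #include <stdio.h>

    static void rgb_to_yuv(float r, float g, float b,
                           float *y, float *u, float *v)
    {
        *y =  0.299f * r + 0.587f * g + 0.114f * b;
        *u = -0.147f * r - 0.289f * g + 0.436f * b;
        *v =  0.615f * r - 0.515f * g - 0.100f * b;
    }

    int main(void)
    {
        float y, u, v;
        rgb_to_yuv(1.0f, 0.5f, 0.25f, &y, &u, &v);
        printf("Y=%.3f U=%.3f V=%.3f\n", y, u, v);
        return 0;
    }

That kind of fixed-function pre-processing in the FPGA would free the cores for the actual net.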

Supposedly Nvidia are building a multi-GPU setup aimed at self-driving cars... yet again a case where the Epiphany's scalability would be rather nice.