interesting mobile vision app (iPhone)


Postby dobkeratops » Wed Jan 06, 2016 12:30 pm

https://www.youtube.com/watch?v=XMdct-5bERQ

Now this is the sort of thing that should be running on a 1000+ core Epiphany... I suspect it could handle this type of convolutional neural net better than GPUs, because it could explicitly keep the data-flow between layers on chip (and could probably keep each layer's kernels within the scratchpads with ease). At best a GPU would be throwing temporaries in and out of the L2 cache, which is still quite a long journey.
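To put a rough number on the "kernels within scratchpads" claim, here's a back-of-envelope sketch. The 32 KB per-core local memory is the Epiphany-III figure, and 2-byte FP16 weights are an assumption for illustration, not something measured on real hardware:

```python
# Rough sketch: can one conv layer's weights fit in an Epiphany core's
# scratchpad? Assumes 32 KB local memory per core (the Epiphany-III
# figure) and 2-byte FP16 weights -- both assumptions for illustration.

SCRATCHPAD_BYTES = 32 * 1024
BYTES_PER_WEIGHT = 2  # FP16

def kernel_bytes(in_channels, out_channels, k=3):
    """Weight footprint of a k x k convolution layer."""
    return in_channels * out_channels * k * k * BYTES_PER_WEIGHT

for cin, cout in [(3, 16), (16, 32), (32, 64), (64, 64)]:
    b = kernel_bytes(cin, cout)
    fits = "fits" if b <= SCRATCHPAD_BYTES else "too big"
    print(f"{cin:3d} -> {cout:3d} channels: {b / 1024:6.1f} KB ({fits})")
```

Early layers fit comfortably in a single core's scratchpad; the wider layers would need their weights partitioned across several cores, which is exactly the kind of spatial layout a many-core mesh invites.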

Whilst this app is usable enough to show the potential, it also shows the demand for more power... there are so many ways it could be extended.
Basically, imagine an Epiphany in someone's pocket, connected to cameras, with voice in/out. Even for someone sighted, you can imagine more 'situational awareness': eyes in the back of your head, an 'eye for detail'...

Seems like an absolutely perfect fit on paper. Supposedly ever-deeper nets give better results, which means more on-chip data-flow...

(One minor point: supposedly the next Nvidia chip will come with FP16 support ('mixed precision'); they say this will be an efficient tradeoff for nets. I wonder if they could do something similar for a future Epiphany. Or perhaps on a Parallella-like setup you could do the first layer (RGB->YUV? plain old edge detection?) on the FPGA for a leg-up. In the past, GPUs used to have separate vertex & pixel pipelines with different precision.)
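For concreteness, here is a Python model of the arithmetic such a first "layer" on the FPGA might perform: BT.601 RGB-to-luma conversion followed by a 3x3 Sobel edge filter. A real FPGA version would be fixed-point HDL; this just shows the math being offloaded:

```python
# Python model of the arithmetic a first "layer" on the FPGA might do:
# BT.601 RGB -> luma (the Y of YUV), then a 3x3 Sobel edge magnitude.
# A real FPGA implementation would be fixed-point HDL; this is a sketch.

def rgb_to_luma(r, g, b):
    """BT.601 luma (the Y of YUV)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    """Approximate gradient magnitude |Gx| + |Gy| at pixel (x, y)."""
    gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return abs(gx) + abs(gy)

# Tiny test image: a vertical black-to-white step edge in luma.
img = [[rgb_to_luma(0, 0, 0)] * 2 + [rgb_to_luma(255, 255, 255)] * 2
       for _ in range(4)]
print(sobel_magnitude(img, 1, 1))  # strong response on the edge
```

Both stages are pure per-pixel / small-window operations with fixed coefficients, which is why they map so naturally onto FPGA fabric as a pre-processing step before the net proper.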

Supposedly Nvidia are building a multi-GPU setup aimed at self-driving cars... yet again a case where the Epiphany's scalability would be rather nice.
dobkeratops
 
Posts: 189
Joined: Fri Jun 05, 2015 6:42 pm
Location: uk
