by Onni » Tue Dec 18, 2012 8:37 am
I will try to implement multithreaded code for the open-source Point Cloud Library (PCL) and attach a Kinect. The goal is a robust, high-FPS, portable 3D scanner. According to my crude calculations, a modern cellphone is not quite up to the task. This math is well suited to parallel computing: the heavy lifting consists of asking the question "does this 3D object fit with this one if I rotate them like this?" over and over, and that question can now be asked several times at once.
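To make the "ask it several times at once" idea concrete, here is a toy sketch in Python (not PCL's actual C++ API; all function names here are my own invention): each candidate rotation of one point cloud is scored against a target cloud independently, so a thread pool can evaluate many hypotheses in parallel.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def rotate_z(points, angle):
    """Rotate a list of (x, y, z) points around the z-axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def score_rotation(source, target, angle):
    """Lower is better: mean nearest-neighbour distance after rotating.
    Brute force here; a real scanner would use a k-d tree as PCL does."""
    rotated = rotate_z(source, angle)
    total = sum(min(math.dist(p, q) for q in target) for p in rotated)
    return total / len(rotated)

def best_rotation(source, target, angles):
    """Score every candidate angle in parallel and keep the best fit."""
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda a: score_rotation(source, target, a),
                               angles))
    best = min(range(len(angles)), key=scores.__getitem__)
    return angles[best], scores[best]

if __name__ == "__main__":
    target = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
    # The same cloud pre-rotated by -90 degrees: the search should find ~90.
    source = rotate_z(target, -math.pi / 2)
    angles = [i * math.pi / 180 for i in range(180)]
    angle, score = best_rotation(source, target, angles)
    print(round(math.degrees(angle)), round(score, 6))  # prints: 90 0.0
```

This only searches one rotation axis for brevity; the real problem searches a full 6-DOF pose, which is exactly why many independent cores help so much.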
When this is done, I see two awesome projects:
Turning a quadcopter into an area 3D scanner. This is also the basis for almost all robotics: knowing where I am and where the object I'm supposed to pick up is. When PCL + Kinect + Parallella is more or less plug and play for the DIY robot community, cool things will happen.
I will put this fashionable contraption on my head and wear Google glasses or similar. If I know exactly where the table is in relation to my eyes, I can put an animated 3D pink little hippo on it. And if I know where my hand is, I can interact with it. What can be done with this technology? I predict it will be the next big thing in gadgets and gaming within 5-10 years. Imagine putting arbitrary 3D in your everyday environment.
*sigh* So many projects and only a few spare hours.