Featured

WATCH: Researchers at MIT develop a bizarre shape-shifting table surface that will blow your mind

In the future, you’ll be able to touch things through your computer. MIT Media Lab has created a tangible interface that reproduces, in real time, a physical version of whatever you put under its sensors.

Called inFORM, it uses cameras to capture objects in 3-D, processes that information on a computer, and renders the result with solid rods that rise and fall on a table surface, where they can interact with physical objects.

The MIT team then added a screen that shows a person manipulating the objects remotely, giving the activity a sense of human presence.

The project is a step towards the researchers’ stated ultimate goal of a “material user interface in which all digital information has a physical manifestation that people can interact with directly.”

Check it out:

15 Comments

  1. Christine Boles

    November 17, 2013 at 5:47 am

    I don’t understand how this is possible from another location. I understand the actuators and linkages are interacting with the pins to make them move, but how is this possible from another location through a computer or tablet device? Are there any engineering or tech geniuses out there that can explain this to me? This is fascinating, so I really want to understand the intricacies of how this works.

  2. Edwin Firmage

    November 17, 2013 at 5:49 am

    Coming to an Apple computer someday soon?!

  3. George Karpel

    November 17, 2013 at 1:34 pm

    Could be a great help for some physically handicapped vets.

  4. Pingback: Researchers at MIT develop a bizarre and amazing shape-shifting table surface | Boardmad

  5. Pingback: Researchers at MIT develop a bizarre and amazing shape-shifting table surface | Enjoying The Moment

  6. Adam

    November 19, 2013 at 2:21 am

In a nutshell: on the side that receives the input, the 3D object is translated into a matrix, where each cell corresponds to the height a given pin should be raised to. So if a hand is placed under the sensor, a matrix is sent to the machine in which every cell in 2D space holds a height value mirroring the hand’s height above the table. In principle, that’s all that needs to be done.

    To me the amazing part is making the mechanics of the rising and falling pins work the way they do.
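The matrix idea Adam describes amounts to downsampling a depth image into a small grid of target pin heights. A minimal sketch in Python, assuming a hypothetical 30×30 pin grid and 100 mm of pin travel (illustrative numbers, not inFORM’s actual specs):

```python
import numpy as np

GRID = 30              # hypothetical 30x30 pin grid
MAX_HEIGHT_MM = 100.0  # hypothetical maximum pin travel

def depth_to_pin_heights(depth_mm):
    """Downsample a depth image (mm from sensor) into a GRID x GRID
    matrix of pin heights: nearer objects raise pins higher."""
    h, w = depth_mm.shape
    # Crop so the image divides evenly into grid cells, then average
    # the depth inside each cell.
    cells = depth_mm[: h - h % GRID, : w - w % GRID]
    cells = cells.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
    # Smaller depth (closer to the sensor) -> taller pin.
    heights = MAX_HEIGHT_MM * (1.0 - cells / cells.max())
    return heights.clip(0.0, MAX_HEIGHT_MM)

frame = np.full((480, 600), 2000.0)  # flat background 2 m from the sensor
frame[200:300, 200:350] = 1000.0     # a "hand" 1 m from the sensor
pins = depth_to_pin_heights(frame)
print(pins.shape)  # (30, 30)
```

Each cell of the resulting matrix is exactly the per-pin height value that would be streamed to the table.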

  7. NB

    November 19, 2013 at 3:12 am

The short version is that each pin has a location, and there’s a camera on the other end mapping each location. Think of it as a continuous, automated, fast-moving game of Battleship: the camera on the other end calls out locations (including heights), and the board raises or lowers pins in response.

  8. jedihacker

    November 19, 2013 at 3:14 am

The same way you read this article and watch this video: the information about which pins to move, and by how much, is encoded into digital form and transmitted over the internet. It actually wouldn’t be very much information, less than watching a video on YouTube, since there are far fewer rods on that table than pixels in a video.
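The bandwidth point is easy to check with back-of-the-envelope numbers (the pin count and update rate below are assumptions for illustration, not inFORM’s published figures):

```python
# Rough comparison of the data rate needed to drive the pin table versus
# a video stream. All numbers are illustrative assumptions.
PINS = 30 * 30     # hypothetical pin grid
BYTES_PER_PIN = 1  # one byte = 256 height levels per pin
FPS = 30           # hypothetical update rate

table_bps = PINS * BYTES_PER_PIN * FPS * 8  # bits per second
video_bps = 2_000_000                       # a modest 2 Mbit/s video stream

print(f"table: {table_bps // 1000} kbit/s, video: {video_bps // 1000} kbit/s")
# table: 216 kbit/s, video: 2000 kbit/s
```

Even with generous assumptions, the pin stream is an order of magnitude lighter than the video feed.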

  9. Luke Stanley

    November 19, 2013 at 3:25 am

Making an elegant system that works with few kinks is hard, but remotely controlling a servo or motor from a Raspberry Pi is fairly easy. They seem to be getting 3D input from something like an Xbox Kinect depth camera, then using each voxel area to control a set of servos (only nicely and quickly). I didn’t play the audio because my girlfriend is asleep, so I may be missing something. You might want to Google: “Raspberry Pi servo”.
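The servo side of this boils down to mapping a target pin height onto a hobby-servo PWM pulse. A minimal sketch, assuming the standard 50 Hz / 1–2 ms hobby-servo convention and a hypothetical 100 mm height range:

```python
# Convert a target pin height into a PWM duty cycle for a hobby servo.
# The 50 Hz frame and 1-2 ms pulse range are standard for hobby servos;
# the height range is an assumption for illustration.
MAX_HEIGHT_MM = 100.0
FRAME_MS = 20.0       # 50 Hz servo frame
MIN_PULSE_MS = 1.0    # pulse width at fully retracted
MAX_PULSE_MS = 2.0    # pulse width at fully extended

def height_to_duty_cycle(height_mm):
    """Map a pin height (mm) to a PWM duty cycle in percent."""
    h = min(max(height_mm, 0.0), MAX_HEIGHT_MM)  # clamp to travel range
    pulse = MIN_PULSE_MS + (MAX_PULSE_MS - MIN_PULSE_MS) * h / MAX_HEIGHT_MM
    return 100.0 * pulse / FRAME_MS

print(height_to_duty_cycle(0))    # 5.0  (1 ms pulse)
print(height_to_duty_cycle(100))  # 10.0 (2 ms pulse)
```

On a Raspberry Pi, the returned duty cycle could be fed to something like RPi.GPIO’s `PWM.ChangeDutyCycle`, one channel per servo.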

  10. Jeff Andersen

    November 19, 2013 at 3:37 am

    There are cameras watching the remote operator’s hands. As they move, the data is transmitted to the computer controlling the table, and it figures out how to make the pins move to match the remote operator’s movements. See 0:44 in the video for a look at the operator’s setup.

  11. Herb Ahderp

    November 19, 2013 at 5:06 am

    This could help handicapped people in general, not just ‘murican war criminals.

  12. pat

    November 19, 2013 at 6:07 am

    not real….clearly not real….clearly stop action animation

  13. Paulo

    November 19, 2013 at 9:14 am

A camera could be used as the sensor. With the appropriate software, motion under the camera could be detected and transmitted as an electronic signal to the machine, and each actuator would then be driven to the position needed to replicate the initial input.

  14. Jibin Rajan Varghese

    November 20, 2013 at 1:50 pm

    We need to increase the pixel density

  15. Pingback: mcm bags
