Press

Robotic gripper with soft, sensitive fingers developed at MIT CSAIL can handle cables with unprecedented dexterity.

Full story at URL.

In recent years there have been exciting breakthroughs in wearable technologies, like smartwatches that can monitor your breathing and blood oxygen levels. But what about a wearable that can detect how you move as you do a physical activity or play a sport, and could potentially even offer feedback on how to improve your technique? 

Full story at URL.

Training a neural network to perform NASCAR-style driving maneuvers purely by looking at a sequence of images taken from a two-player racing game.

Full story at URL.

A faster way to estimate uncertainty in AI-assisted decision-making could lead to safer outcomes.

Full story at URL.

Two teams enable better sensing and perception for soft robotic grippers.

Full story at URL.

Using a photorealistic simulation engine, vehicles learn to drive in the real world and recover from near-crash scenarios.

Full story at URL.

Computer model of face processing could reveal how the brain produces richly detailed visual representations so quickly.

Full story at URL.

System from MIT CSAIL sizes up drivers as selfish or selfless. Could this help self-driving cars navigate in traffic?

Full story at URL.

Model alerts driverless cars when it’s safest to merge into traffic at intersections with obstructed views.

Full story at URL.

By sensing tiny changes in shadows, a new system identifies approaching objects that may cause a collision. 

Full story at URL.
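
For intuition, here is a minimal, hypothetical sketch of the underlying idea: amplify tiny deviations from a static background in registered video of a floor patch so that a faint moving shadow becomes detectable. The function name, the simple mean-background model, and the noise stand-in are illustrative assumptions, not the published system.

    import numpy as np

    def shadow_motion_energy(frames, amplify=20.0):
        """Score a stack of floor images for a faint moving shadow.

        frames: (T, H, W) grayscale crops of the same floor patch,
        already registered.  Illustrative only: the real system
        classifies the amplified signal rather than thresholding it.
        """
        background = frames.mean(axis=0)            # static-scene estimate
        residual = (frames - background) * amplify  # boost tiny changes
        # A moving shadow leaves structured residual energy; a score above
        # the sensor-noise baseline flags an approaching object.
        return np.abs(residual).mean(axis=(1, 2)).max()

    # Usage sketch: compare against a noise baseline measured offline.
    frames = np.random.rand(30, 64, 64) * 0.01      # stand-in for video
    print(shadow_motion_energy(frames))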

 

A NOVA documentary premiered October 23, 2019, on PBS: https://www.pbs.org/wgbh/nova/video/look-whos-driving/

Research aims to make it easier for self-driving cars, robotics, and other applications to understand the 3D world.

Full story at URL.

In “semiautonomous” cars, older drivers may need more time to take the wheel when responding to the unexpected.

Full story at URL.

Every year, thousands of fatal car accidents result from distracted driving. But a team from MIT says that for years the public has been distracted by the wrong distracted-driving narrative.

Full story at URL.

Autonomous control system “learns” to use simple maps and image data to navigate new, complex routes.

Full story at URL.

In simulations, robots move through new environments by exploring, observing, and drawing from learned experiences.

Full story at URL.

Model learns to pick out objects within an image, using spoken descriptions.

Full story at URL.

Improved design may be used for exploring disaster zones and other dangerous or inaccessible environments.

Full story at URL.

Algorithm computes “buffer zones” around autonomous vehicles and reassesses them on the fly.

Full story at URL.

Today’s autonomous vehicles require hand-labeled 3-D maps, but CSAIL’s MapLite system enables navigation with just GPS and sensors. MapLite uses perception sensors to plan a safe path, including LIDAR to determine the approximate location of the edges of the road.

Full story at URL.
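
As a rough illustration of the road-edge step, the sketch below keeps near-ground LIDAR returns just ahead of the vehicle and treats their lateral extremes as approximate road edges, yielding a centered waypoint. All names, thresholds, and the flat-ground assumption are hypothetical, not MapLite’s code.

    import numpy as np

    def estimate_road_edges(points, ground_z=0.0, z_tol=0.15):
        """Roughly locate road edges from one LIDAR sweep.

        points: (N, 3) array of (x, y, z) returns in the vehicle
        frame (x forward, y left, z up).  Illustrative only: a real
        system fits the ground plane and models road curvature.
        """
        # Candidate road surface: returns lying near the ground plane.
        road = points[np.abs(points[:, 2] - ground_z) < z_tol]

        # Look at a band a few meters ahead of the vehicle.
        ahead = road[(road[:, 0] > 2.0) & (road[:, 0] < 10.0)]
        if len(ahead) == 0:
            return None

        # Approximate the edges as the lateral extremes of that band.
        left, right = ahead[:, 1].max(), ahead[:, 1].min()

        # A simple safe waypoint: centered between the edges.
        return left, right, 0.5 * (left + right)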

Using computer models, Horn and his team determined that to suppress the traffic instabilities known as phantom traffic jams, drivers need to keep equal spacing between the cars in front of and behind them.

Video of the interview: URL.

We’ve all experienced “phantom traffic jams” that arise without any apparent cause. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently showed that we’d have fewer of them if we made one small change to how we drive: no more tailgating.

Full story at URL.
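
A minimal simulation of that equal-spacing idea (often called bilateral control) is sketched below; the gains, time step, and 20-car setup are illustrative assumptions, not the team’s published controller. Each middle car accelerates toward the midpoint between its neighbors and damps its speed difference with them, so a braking disturbance dissipates instead of amplifying into a phantom jam.

    import numpy as np

    def step(x, v, dt=0.1, kd=0.5, kv=0.5):
        """One bilateral-control update for positions x and speeds v."""
        a = np.zeros_like(v)
        for i in range(1, len(x) - 1):
            gap_ahead = x[i + 1] - x[i]
            gap_behind = x[i] - x[i - 1]
            rel_v = 0.5 * (v[i + 1] + v[i - 1]) - v[i]
            # Push toward equal spacing front and back, damp speed gaps.
            a[i] = kd * (gap_ahead - gap_behind) + kv * rel_v
        return x + v * dt, v + a * dt

    # 20 cars, 10 m apart; one car briefly slower seeds a disturbance.
    x = np.arange(20) * 10.0
    v = np.full(20, 20.0)
    v[10] = 15.0
    for _ in range(500):
        x, v = step(x, v)
    print(np.round(np.diff(x), 2))  # gaps relax back toward uniform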

By building computer systems that begin to approximate these capacities, the researchers believe they can help answer questions about what information-processing resources human beings use at what stages of development. Along the way, the researchers might also generate some insights useful for robotic vision systems.

Full story at URL.

System for performing “tensor algebra” offers 100-fold speedups over previous software packages.

Full story at URL.
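
For a sense of what “tensor algebra” means here, the sketch below shows a representative compound kernel (the MTTKRP operation common in tensor factorization), written with NumPy’s einsum; the shapes and names are illustrative. Systems like the one described compile such expressions into a single fused loop nest, avoiding large intermediate tensors, which is part of where such speedups can come from, especially on sparse data.

    import numpy as np

    # A[i, j] = sum over k, l of B[i, k, l] * C[k, j] * D[l, j]
    B = np.random.rand(30, 40, 50)   # a 3-way data tensor
    C = np.random.rand(40, 8)
    D = np.random.rand(50, 8)

    # One fused tensor-algebra expression, evaluated in a single pass.
    A = np.einsum('ikl,kj,lj->ij', B, C, D)
    print(A.shape)  # (30, 8)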

Light lets us see the things that surround us, but what if we could also use it to see things hidden around corners? It sounds like science fiction, but that’s the idea behind a new algorithm out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) — and its discovery has implications for everything from emergency response to self-driving cars.

Full story at URL.

ComText, from the Computer Science and Artificial Intelligence Laboratory, allows robots to understand contextual commands.

Full story at URL.

Being able to both walk and take flight is typical in nature — many birds, insects, and other animals can do both. If we could program robots with similar versatility, it would open up many possibilities: Imagine machines that could fly into construction areas or disaster zones that aren’t near roads and then squeeze through tight spaces on the ground to transport objects or rescue people.

Full story at URL.

GelSight technology lets robots gauge objects’ hardness and manipulate small tools.
 
Full story at URL.