Decision Making for Parallel Autonomy in Clutter: Addressing Intent, Interactions, Rules of the Road, and Safety
Crossing the Vision-Language Boundary
The Car Can Explain!
Differentiable computer graphics for training and verification of machine perception
Uhura - A Driver’s Personal Coach for Managing Risk
Geordi - A Driver's Assistant for Risk-Bounded Maneuvering
Driver-Friendly Bilateral Control for Suppressing Traffic Instabilities
Formal Verification Meets Big Data Intelligence in the Trillion Miles Challenge
Machines that can Introspect
Driver Perception and the Car-to-Driver Handoff
Exploring the World of High Definition Touch
Understanding Human Gaze
Analysis by Synthesis Revisited: Visual Scene Understanding by Integrating Probabilistic Programs and Deep Learning
Using Deep Learning to Speed Up Deep Learning
A Parallel Autonomy System: Data-Driven and Model-Based Parallel Autonomy with Robustness and Safety Guarantees
WiFi-Based Obstacle Detection for Robot Navigation
Tools and Data to Revolutionize Driving
Sensible Deep Learning for 3D Data
The Robotic Manipulation Data Engine
Dense, freeform tactile feedback for manipulation and control
A Safety Interlock for Self-driving Cars
Inner Vision: Camera Based Proprioception for Soft Robots
All Terrain Mobility and Navigation
Automation for Everyone
A CAD tool for designing superintelligent human-computer groups

The Toyota-CSAIL Joint Research Center aims to advance autonomous vehicle technologies, with the goal of reducing traffic casualties and, ultimately, developing a vehicle incapable of getting into an accident.

Led by CSAIL director Daniela Rus, the center focuses on developing advanced decision-making algorithms and systems that allow vehicles to perceive and navigate their surroundings safely, without human input. Researchers tackle challenges spanning computer vision and perception, planning, and control.