We’re still not at the point where autonomous vehicle systems can best human drivers in all scenarios, but the hope is that eventually, the technology being incorporated into self-driving cars will be capable of things humans can’t even fathom – like seeing around corners. A lot of work and research has gone into this concept over the years, but MIT’s newest system uses relatively affordable and readily available technology to pull off this seemingly magic trick.

MIT researchers, in a project backed by Toyota Research Institute, created a system that uses minute changes in shadows to predict whether a moving object is about to come around a corner. It could prove useful not only in self-driving cars, but also in robots that navigate shared spaces with humans – like autonomous hospital attendants, for instance.

The system employs standard optical cameras and monitors changes in the strength and intensity of light, applying a series of computer vision techniques to determine whether a shadow is being cast by a moving or a stationary object, and what that object’s path might be.
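To make that idea concrete, here is a minimal sketch of the general approach – not MIT’s actual pipeline – in which a fixed camera watches a patch of floor near a corner and flags the shadow there as “moving” when its average intensity shifts from frame to frame. The region of interest and threshold below are hypothetical placeholders.

```python
# Minimal sketch of shadow-change detection by frame differencing.
# This is an illustration of the general idea, not MIT's algorithm.
import cv2
import numpy as np

ROI = (100, 200, 300, 400)   # hypothetical (y0, y1, x0, x1) patch of floor near the corner
THRESHOLD = 1.5              # hypothetical mean intensity change treated as "moving"

def roi_intensity(frame):
    """Return a smoothed grey-level image of the monitored floor patch."""
    y0, y1, x0, x1 = ROI
    grey = cv2.cvtColor(frame[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(grey, (9, 9), 0).astype(np.float32)

def classify(prev, curr):
    """Label the shadow as cast by a moving or stationary object."""
    change = np.mean(cv2.absdiff(prev, curr))
    return "moving" if change > THRESHOLD else "stationary"

cap = cv2.VideoCapture(0)    # any fixed optical camera
ok, frame = cap.read()
prev = roi_intensity(frame)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    curr = roi_intensity(frame)
    print(classify(prev, curr))
    prev = curr
cap.release()
```

A real system would need to amplify far subtler intensity changes than simple differencing can catch and to estimate the hidden object’s likely path, but the sketch shows why ordinary cameras suffice: the signal is just light falling on visible surfaces.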

In testing so far, this method has actually bested comparable systems already in use that employ LiDAR imaging in place of optical cameras and that can’t see around corners. In fact, it beats the LiDAR approach by over half a second, which is a long time in the world of self-driving vehicles, and could mean the difference between avoiding an accident and, well, not.

For now, though, the experiment is limited: it has only been tested in indoor lighting conditions, for instance, and the team still has quite a bit of work to do before the system can handle higher-speed movement and highly variable outdoor lighting. Still, it’s a promising step, and it might eventually help autonomous vehicles better anticipate, as well as react to, the movement of pedestrians, cyclists and other cars on the road.