Seeing like humans is not enough.
Like the human eye, a camera can perceive a scene in its field of view. Unlike humans, machines can have more than two eyes, pointing in multiple directions. Seeing 360 degrees around a machine makes it possible to generate very accurate, dense depth maps at short, mid, and far ranges, lessening or even eliminating the dependency on active sensing technologies. Leveraging commercially available, high-throughput cameras, with flexible baselines and camera positions, provides optimal perception of depth, texture, and optical flow for a given application or use case.
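To illustrate why flexible baselines matter for depth at range, here is a minimal sketch using the standard rectified pinhole-stereo model. All parameter values below are illustrative assumptions, not figures from any specific camera or product:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px: float, baseline_m: float, depth_m: float,
                disparity_err_px: float = 1.0) -> float:
    """Depth uncertainty for a given disparity error.

    Differentiating Z = f * B / d shows the error grows as
    Z**2 / (f * B): quadratically with distance, and inversely
    with the baseline B.
    """
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Hypothetical comparison at 50 m with a one-pixel disparity error:
narrow = depth_error(focal_px=1000, baseline_m=0.12, depth_m=50)  # short baseline
wide = depth_error(focal_px=1000, baseline_m=0.50, depth_m=50)    # long baseline
print(round(narrow, 2), round(wide, 2))  # wider baseline -> smaller error
```

The sketch shows the geometric reason a configurable baseline helps: stretching the baseline directly tightens depth uncertainty at far range, which is where passive stereo is traditionally weakest.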
Make it so easy that one would forget about the technology.
Providing end-to-end, seamless integration into in-vehicle platforms and removing the need for maintenance is critical to eliminating the pain currently experienced with sensing technologies. A calibration system that readily accommodates various, customized baselines and adapts to unfavourable environments opens a new door, one that would make anyone forget about the hard problems that had to be solved.
Detection should not rely on recognition.
AI is deeply transforming the way machines see and interact with their environments. Models enable machines to detect, track, and classify features, as well as to determine scene depth. However, models are only as good as the data they were trained on, and they often need the support of other sensors for increased precision and reliability.
Measuring depth - not inferring it - is critical to building extremely safe applications based on recognition.