Technology

Autonomous Navigation

Jon Neilon
5 Minutes

How do autonomous robots navigate complex environments?

They learn from human demonstrations, but can they handle environments they have never seen? That was the question researchers from Harvard University, the University of California, and Imperial College London set out to answer.

In a recent study, these researchers presented a method for training robust flight navigation agents that can carry out vision-based fly-to-target tasks in a variety of environments. They did this using liquid neural networks, a class of brain-inspired, continuous-time neural models that are causal and adaptable to changing conditions. By distilling the task from visual inputs and disregarding irrelevant features, the liquid agents were able to transfer their learned navigation skills to new environments efficiently.
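To make the continuous-time idea concrete, here is a minimal sketch of a single liquid time-constant (LTC) cell, the kind of unit liquid neural networks are built from, integrated with a simple Euler step. The layer sizes, parameter names, and toy input stream are illustrative assumptions for this sketch, not the study's actual flight controller.

```python
import numpy as np

# Sketch of a liquid time-constant (LTC) cell:
#   dx/dt = -[1/tau + f(x, u)] * x + f(x, u) * A
# The gating nonlinearity f depends on both state and input, so each
# neuron's effective time constant adapts to changing conditions.

rng = np.random.default_rng(0)
n_state, n_input = 8, 4                      # illustrative sizes

W_in = rng.normal(scale=0.5, size=(n_state, n_input))
W_rec = rng.normal(scale=0.5, size=(n_state, n_state))
bias = np.zeros(n_state)
tau = np.ones(n_state)                       # base time constants
A = rng.normal(size=n_state)                 # per-neuron attractor

def f(x, u):
    """Learned gating nonlinearity (here: a random, untrained sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(W_rec @ x + W_in @ u + bias)))

def ltc_step(x, u, dt=0.05):
    """One explicit-Euler step of the LTC ordinary differential equation."""
    gate = f(x, u)
    dxdt = -(1.0 / tau + gate) * x + gate * A
    return x + dt * dxdt

# Roll the cell forward on a toy input stream.
x = np.zeros(n_state)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 0.0, 1.0])
    x = ltc_step(x, u)
print("final hidden state:", np.round(x, 3))
```

Because the time constant is state- and input-dependent, the same cell can react quickly to abrupt changes and settle smoothly otherwise, which is one intuition behind the robustness claims above.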

The researchers compared the performance of liquid networks against several state-of-the-art deep agents and found that liquid networks were uniquely robust in their decision-making, in both their differential-equation and closed-form representations.
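For reference, the closed-form representation mentioned above is the closed-form continuous-time (CfC) approximation of the liquid ODE due to Hasani and colleagues. A common statement of it looks roughly like the following; the exact form here is an assumption drawn from that line of work, so treat it as a sketch rather than the study's precise formula.

```latex
% Approximate closed-form solution of the liquid time-constant ODE (CfC):
% x_0 is the initial state, A the bias attractor, w_tau the base time
% constant, I the input, and f a learned nonlinearity with parameters theta.
x(t) \approx \left( x_0 - A \right) \, e^{-\left[ w_\tau + f(x, I; \theta) \right] t} \, f(-x, -I; \theta) + A
```

The practical point is that this form can be evaluated directly, avoiding a numerical ODE solver at inference time.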

Deep neural networks

This study draws inspiration from natural brains and how they learn to make sense of their environment and manipulate it to accomplish their goals. The researchers found that neural circuits in brains are much more robust to perturbations and distribution shifts than deep neural networks, while also being more flexible in tackling uncertain events. This is because natural brains deploy both unconscious and conscious processes for decision-making.

Developing learning-based solutions

The researchers aimed to develop learning-based solutions to robot flight control that are robust and transferable to novel environments. They did this by studying the flight hiking task, in which a quadrotor controller is trained by imitation learning to recognize and fly to a target. The resulting policy is then deployed to recognize and move to the target iteratively in different environments. Achieving good out-of-distribution performance requires that trained models learn representations that are causally associated with the task, compositionally robust, and independent of the environmental context.
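As a rough illustration of that imitation-learning setup, the sketch below trains a vision-based policy by behavior cloning: regressing the expert's control commands from camera frames. The network architecture, dataset shapes, action dimensions, and hyperparameters are all illustrative assumptions, not the study's actual pipeline.

```python
import torch
import torch.nn as nn

# Behavior cloning: fit a policy pi(image) -> control command to
# (image, expert action) pairs collected from human demonstrations.

class Policy(nn.Module):
    def __init__(self, n_actions=4):  # e.g. roll, pitch, yaw rate, thrust (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_actions),
        )

    def forward(self, img):
        return self.net(img)

# Toy stand-in for a demonstration dataset of 64x64 RGB frames.
images = torch.randn(256, 3, 64, 64)
expert_actions = torch.randn(256, 4)

policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    pred = policy(images)            # predicted control commands
    loss = loss_fn(pred, expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```

At deployment, the trained policy is run in a loop on live camera frames; out-of-distribution robustness then hinges on whether the learned representation captures the target itself rather than the training environment's background.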

While there has been extensive research on improving the generalization performance of few-shot, one-shot, and zero-shot imitation learning agents through augmentation strategies, human interventions, goal conditioning, reward conditioning, task embedding, and meta-learning, this study shows that brain-inspired neural dynamics improve the robustness of the decision-making process in autonomous agents, leading to better transferability and generalization in new settings under the same training distribution.

To summarize, the study shows that liquid neural networks have the potential to make autonomous robots more adaptable and flexible in navigating varied environments. By taking inspiration from natural brains, researchers can develop more robust and transferable learning-based solutions for autonomous agents, paving the way for more advanced and sophisticated robotics technology.
