Many autonomous vehicle (AV) use cases focus primarily on interactions between a single robot and humans or human-controlled vehicles. However, as AVs and autonomous mobile robots (AMRs) continue to be developed and expand into new industries, a growing number of use cases involve multi-robot environments (Figure 1). There is a growing need for ways to test the safety and reliability of these systems — testing that covers not only each robot's individual autonomy stack but also the mission planners that coordinate robots solving problems in the same operating area.
These fleets of robots have a wide range of applications, from homogeneous autonomous systems (each agent runs the same software stack) to heterogeneous autonomous systems (mixed systems and functionally different devices operate in the same area). Simulation is particularly powerful for capturing the variety of interactions between these agents and makes it possible to quickly test long-running scenarios within an operating area. Individual metrics for each agent, as well as metrics for overall fleet management using multi-robot task allocation (MRTA) systems, can be tracked to measure performance improvement efficiently over long time scales.
In this blog, we discuss examples of commercial applications for autonomous multi-robot systems, as well as testing and validation considerations that should be taken into account to ensure the safety and advancement of these systems' algorithms.
Industries with promising applications for fleets of autonomous systems include long-haul trucking, warehousing and logistics, agriculture, construction, and mining. Each presents a unique set of tasks to automate and requires testing approaches and pass/fail criteria that are more complex than those for a single-robot system.
In a warehouse setting, for instance, forklifts may be automated to transfer goods from one area to another (Figure 2). Each forklift performs identical tasks, resulting in a homogeneous environment. The performance of the autonomous system may be measured against a set of success requirements such as "is it able to pick up the target at the start?", "is it able to maneuver around other agents and avoid collisions?", or "is it able to drop off the target at the destination?" Metrics may be tracked for each autonomous agent over the course of the simulation, and isolating the specific circumstances of a failure helps develop and improve the autonomous system. Different parameters of the system may also be tested within this environment to see how changing the behavior of a subset of agents affects the rest. For example, modifying one agent's routing module to be purely greedy may cause it to cut off efficient routes, making it much more difficult for the other agents to route within the warehouse.
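To make this concrete, here is a minimal sketch of how per-agent success requirements might be checked against simulation results. The result schema, agent names, and criteria names are hypothetical, chosen only to mirror the three questions above:

```python
from dataclasses import dataclass


@dataclass
class AgentResult:
    """Outcomes recorded for one forklift over a simulation run (assumed schema)."""
    agent_id: str
    picked_up_target: bool
    collisions: int
    dropped_off_target: bool


def evaluate_agent(result: AgentResult) -> dict:
    """Map each success requirement to a pass/fail flag."""
    return {
        "pickup_ok": result.picked_up_target,
        "no_collisions": result.collisions == 0,
        "dropoff_ok": result.dropped_off_target,
    }


results = [
    AgentResult("forklift_1", True, 0, True),
    AgentResult("forklift_2", True, 1, False),
]

# Isolate failing agents and the specific criteria they failed,
# so the circumstances of each failure can be investigated.
failures = {
    r.agent_id: [name for name, ok in evaluate_agent(r).items() if not ok]
    for r in results
    if not all(evaluate_agent(r).values())
}
print(failures)  # {'forklift_2': ['no_collisions', 'dropoff_ok']}
```

Tracking criteria per agent like this is what allows a failure (here, a collision and a missed drop-off for one forklift) to be isolated from an otherwise healthy fleet.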
On top of these individual requirements for each agent, the overall system's MRTA strategy must use each robot efficiently, and the throughput of the fleet should be measured. Each stack version can then be checked against previously defined benchmarks to ensure it maintains or improves on the overall metrics. In the greedy routing example above, changing the behavior of a few agents might cause slowdowns for the others and decrease the warehouse's overall output. This flexibility in success metrics allows performance to be measured at different levels, focusing on individual agents while still capturing the efficacy of the system as a whole.
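A fleet-level benchmark check can be sketched as follows. The baseline value, tolerance, and task counts are illustrative assumptions, not numbers from a real deployment:

```python
def fleet_throughput(completed_tasks: int, sim_hours: float) -> float:
    """Tasks completed per simulated hour across the whole fleet."""
    return completed_tasks / sim_hours


BASELINE_TASKS_PER_HOUR = 40.0  # previously recorded benchmark (assumed value)


def meets_benchmark(throughput: float, baseline: float, tolerance: float = 0.02) -> bool:
    """Pass if the new stack version maintains or improves on the baseline,
    allowing a small tolerance for simulation noise."""
    return throughput >= baseline * (1.0 - tolerance)


# A purely greedy routing change might lower fleet output even if one agent
# individually routes faster.
current = fleet_throughput(completed_tasks=376, sim_hours=10.0)  # 37.6 tasks/hour
print(meets_benchmark(current, BASELINE_TASKS_PER_HOUR))  # False: regression vs. baseline
```

Running this check for every candidate stack version makes regressions in overall warehouse output visible even when each agent's individual criteria still pass.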
On the other hand, complicated processes such as gathering crops in the agricultural industry require a series of functionally different autonomous vehicles, resulting in a heterogeneous environment. One important part of this chain is the transfer of crops between a harvester, which gathers the crops, and a truck, which picks them up when the harvester is close to its maximum capacity. Each of these two vehicles has slightly different objectives, so it is important to be able to define specific success metrics for each type of autonomy stack within a scenario. Running the two stacks in parallel can quickly expose shortcomings in their interaction and show whether modifying the function of one severely affects the other. The overall system may also be measured by the throughput of the harvesting process and evaluated to determine whether it still meets expected performance after modifications to individual stack behaviors.
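One way to express per-type success metrics for such a heterogeneous scenario is sketched below. The dispatch threshold, log fields, and check definitions are assumptions for illustration only:

```python
def should_dispatch_truck(load_kg: float, capacity_kg: float,
                          threshold: float = 0.9) -> bool:
    """Request a truck pickup once the harvester nears its maximum capacity
    (an assumed 90% fill threshold)."""
    return load_kg / capacity_kg >= threshold


# Each stack type gets its own success checks, evaluated over a
# (hypothetical) scenario log for that vehicle.
CHECKS = {
    "harvester": lambda log: log["crop_gathered_kg"] > 0 and not log["overfilled"],
    "truck": lambda log: log["arrived_before_full"] and log["transfer_complete"],
}

harvester_log = {"crop_gathered_kg": 950.0, "overfilled": False}
truck_log = {"arrived_before_full": True, "transfer_complete": True}

print(should_dispatch_truck(load_kg=950.0, capacity_kg=1000.0))  # True: 95% full
print(CHECKS["harvester"](harvester_log), CHECKS["truck"](truck_log))  # True True
```

Keeping the checks keyed by stack type makes it straightforward to change one vehicle's behavior (say, the dispatch threshold) and observe whether the other type's criteria start failing.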
As different industries continue to adopt autonomous technology, we expect to see a growing number of fleet-based use cases. Within logistics, homogeneous environments for transporting goods provide an opportunity to automate repetitive tasks. The agriculture and mining industries also involve complex procedures in which many functional parts of a system must work together seamlessly to accomplish an end task, resulting in heterogeneous stack environments within the same operating area.
In simulation, each possible combination of agents may be tested for reliability and safety. A mix of human-controlled and autonomous agents may be placed in the same operating environment to ensure stack flexibility during interactions with different types of agents. This can inform warehouse development by helping find the right mix of human-controlled and autonomous agents. For production testing, the simulated environment can reproduce the exact ratio and versions of the different vehicle stacks to be tested.
Prior to any software upgrade, fleets may be tested in mixed environments running different versions of the same stack to ensure that vehicles still interact well with older versions. Each of these environments can be run against a standard set of simulated test cases collected from past problems seen in simulation or in the real world. Simulation also makes it easy to run long-lasting tests and verify that fleet systems will not get stuck in edge cases over long periods of time.
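Such a mixed-version sweep can be sketched as a small test matrix. The version names, scenario identifiers, and fleet size below are illustrative placeholders, and the simulator call is a stand-in:

```python
import itertools

STACK_VERSIONS = ["v1.3", "v1.4-rc"]  # old stack and upgrade candidate (hypothetical)
SCENARIOS = ["dock_congestion", "narrow_aisle_pass", "charger_contention"]
FLEET_SIZE = 3


def run_scenario(fleet_versions: list, scenario: str) -> dict:
    """Stand-in for a simulator invocation; records the configuration exercised."""
    return {"fleet": fleet_versions, "scenario": scenario}


# Every unordered mix of old and new stacks in a three-robot fleet:
# (3 old), (2 old + 1 new), (1 old + 2 new), (3 new).
fleets = list(itertools.combinations_with_replacement(STACK_VERSIONS, FLEET_SIZE))
runs = [run_scenario(list(f), s) for f in fleets for s in SCENARIOS]
print(len(runs))  # 4 fleet mixes x 3 scenarios = 12 runs
```

Because the matrix grows with fleet size and scenario count, sweeps like this are natural candidates for the cloud-scale automation described below.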
Applied’s simulation platform supports testing multiple agents from homogeneous and heterogeneous multi-robot systems in an enclosed environment. Within the platform, a flexible suite of metrics allows success criteria to be measured at the individual agent level as well as for the overall success of the fleet in accomplishing a wider goal. Regression testing over these metrics enables systems to be updated and deployed with confidence, and these tests may be automated in the cloud as they increase in scale. If you’re interested in learning more about simulation tests for multi-robot systems, contact our engineering team!