The AV industry is rightly focused on “edge” or “corner” cases as significant challenges to AV safety and market success. Unfortunately, these are not truly edge or corner cases; they are common driving scenes that AVs will need to navigate as seamlessly as a rural highway. The key to unlocking AV capabilities is mimicking the most advanced sensor suite we know – the human driver. When peripheral vision or hearing captures new data, the brain dynamically directs sensory resources to interrogate the aspects of the driving scene that resolve ambiguity, enabling it to act safely and successfully. This talk will explore how the AV stack can mimic these human sensing capabilities through dynamic allocation of radar resources, drawing on immense knowledge repositories and other real-time data sources to resolve ambiguity in the driving scene. The talk will include data collected from multiple driving scenes demonstrating how advanced radar imaging resources solve edge and corner cases.