The focus of AI version 2 will be its detection script, as the core elements of the brain and animation systems seem to be working fine. First, I tried to identify the cause of the crashes that occurred when a sound event was triggered. Through debugging, I traced the error to how the AIBrain received updated information for its sound checks, and managed to fix it.
However, while doing this I noticed a fundamental flaw in my sound detection system when dealing with multiple levels. Sound detection works through the navAgent: given an origin point, it checks the distance the sound would need to travel to reach the AI. My oversight was that in levels like the one shown above, the path had no way to bridge the gaps between floors, meaning the AI wouldn't register sounds on floors out of its reach, even if they were nearby.
Alternative hearing concepts
Left: navMesh without jump and drop points; right: navMesh with them
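The distance check described above can be sketched conceptually as a shortest-path query over the walkable space. This is a minimal Python sketch of the idea, not the project's actual code (which goes through Unity's navAgent); the node names and positions are purely illustrative. The key behaviour is the failure case: when no path bridges the floors, the travel distance is infinite and the sound is never heard, even though the straight-line distance is tiny.

```python
import math
import heapq

def path_length(graph, positions, start, goal):
    """Dijkstra over a walkable-surface graph: returns the distance a
    sound must travel through navigable space, or inf if no route exists
    (mirroring a nav path that cannot reach the listener)."""
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, math.inf):
            continue
        for nbr in graph.get(node, ()):
            nd = d + math.dist(positions[node], positions[nbr])
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return math.inf  # no path: the check reports the sound as out of reach

# Two floors directly above each other: straight-line distance is small,
# but no edge bridges the floors, so the path distance is infinite
positions = {"ai": (0, 0), "hall": (5, 0), "player": (0, 3)}  # (x, height)
graph = {"ai": ["hall"], "hall": ["ai"]}

print(math.dist(positions["ai"], positions["player"]))  # 3.0 — nearby
print(path_length(graph, positions, "ai", "player"))    # inf — never heard
```

This is exactly the flaw in the left navMesh above: without jump and drop points there is simply no edge between the floors for the distance query to follow.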
The solution was a navMesh that could span these different levels. Initially I looked into having multiple navMeshes in a scene, such as one that included both walls and floors, or some form of custom navMesh I could build purely for sound. I had little success with this, however; I found the documentation and customizability of the nav system to be limited. So I finally settled on remaking the scene navMesh, adding drop and jump points to bridge the gaps.
Since I don't currently imagine my patrollers jumping, this solution works for now: when the script checks for sounds, it activates the jump area mask and then disables it once the check is done. This does, however, prevent me from adding jump features in the future, so I am not too keen on this solution and will keep looking for alternatives.
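The mask-toggling trick can be sketched in Python as keeping the jump/drop links in a separate edge set that is only merged into the graph while a hearing query runs. Again this is an illustrative sketch, not the project's code; in Unity terms it corresponds to enabling the jump area in the area mask for the duration of the sound check and disabling it afterwards, so normal patrol pathing never routes through those links.

```python
from collections import defaultdict, deque

# Illustrative edge sets, not the project's data: walk_edges is the normal
# patrol graph, jump_edges holds the drop/jump links added to the navMesh
walk_edges = {("ai", "ledge"), ("upper", "player")}
jump_edges = {("ledge", "upper")}  # bridges the lower and upper floors

def edges_for(hearing_check):
    # analogous to enabling the jump area mask only while the sound
    # check runs, then disabling it again afterwards
    return walk_edges | jump_edges if hearing_check else set(walk_edges)

def reachable(edges, start, goal):
    """Breadth-first search: does any sound path connect the two points?"""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

print(reachable(edges_for(False), "ai", "player"))  # False: patrol pathing
print(reachable(edges_for(True), "ai", "player"))   # True: hearing check
```

The downside is visible in the sketch too: because the patrol graph and the hearing graph share one structure, any future jump feature for the patrollers would clash with links that exist only for sound.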