Let’s take a look at our programming subsystem.
The programming system in Barunastra's ASV manages critical functions such as object detection, obstacle avoidance, localization, mapping, and control, with notable advancements this year in deep learning integration, high-performance inference, and enhanced maneuverability through advanced thruster control. These improvements serve a single goal: a highly autonomous, reliable ASV that excels in precision navigation and adapts to mission demands. Compared to last year, the team addressed communication challenges by running programs natively on the ASV's computer and streamlined the software architecture into modular components, ensuring better performance, easier debugging, and greater overall system efficiency.
Barunastra ITS employs advanced software subsystems to enhance performance in RoboBoat 2025. For computer vision, the YOLOv8 deep learning model detects and identifies objects such as buoys and gates, extracts regions of interest (ROI), and uses OpenCV for color detection, optimized with OpenVINO for high-performance CPU-based inference. Obstacle avoidance is achieved with a 3D Velodyne LiDAR, leveraging point cloud data and Braitenberg principles to calculate obstacle distances. Localization and mapping also utilize the Velodyne LiDAR, with Direct LiDAR-Inertial Odometry (DLIO) fusing LiDAR and IMU data to build accurate maps and improve navigation. *See our TDR, Appendix D, for more detail.
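As a rough illustration of the Braitenberg-style avoidance idea above, the sketch below turns 2D obstacle points into a yaw command: nearer obstacles are weighted more strongly, and obstacles on one side push the boat toward the other. The function name, boat-frame convention, and parameters are illustrative assumptions, not the team's actual implementation, which operates on full Velodyne point clouds.

```python
import numpy as np

def braitenberg_steering(points, fov_deg=90.0, max_range=10.0):
    """Sketch: compute a normalized yaw command from obstacle points.

    points: (N, 2) array of (x, y) obstacle positions in the boat frame,
    x forward, y to port. Braitenberg-style crossed wiring: obstacles
    to port push the command to starboard, and vice versa.
    """
    angles = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    dists = np.linalg.norm(points, axis=1)
    front = (np.abs(angles) < fov_deg / 2) & (dists < max_range)
    if not front.any():
        return 0.0  # nothing ahead: hold course
    # Inverse-distance weighting: closer points repel more strongly.
    weight = (max_range - dists[front]) / max_range
    port = weight[angles[front] > 0].sum()
    stbd = weight[angles[front] <= 0].sum()
    # Positive command = turn to port; port-side obstacles push starboard.
    return float(np.clip(stbd - port, -1.0, 1.0))
```

The same repulsion idea extends to the real point cloud by binning points into angular sectors before weighting.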
This year's ASV control system uses four thrusters: two bow thrusters fixed at ±45° and two azimuth stern thrusters rotating up to 60°, giving high maneuverability for tasks like station-keeping, precision navigation, and quick rotation. Control modes including Azimuth and X-Drive enable efficient yaw control and omnidirectional movement. PID controllers maintain accurate heading and positioning, compensating for environmental disturbances like wind and currents. The design translates navigation commands (surge, sway, yaw) into precise thruster actions and lays the groundwork for future upgrades with additional sensors and field testing. *See our TDR, Appendix C, for more detail.
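To make the command-to-thruster translation concrete, here is a minimal least-squares allocation sketch for a four-thruster X-Drive-style configuration. The geometry numbers, variable names, and the pseudo-inverse approach are illustrative assumptions only; the team's actual thruster layout and control code are documented in Appendix C.

```python
import numpy as np

# Illustrative thruster geometry (not the team's actual layout):
# four thrusters at the hull corners, here frozen at ±45° to show
# the X-Drive mode. POS rows: (x, y) position [m]; ANG: thrust angle.
POS = np.array([[1.0, 0.4], [1.0, -0.4], [-1.0, 0.4], [-1.0, -0.4]])
ANG = np.radians([-45.0, 45.0, 45.0, -45.0])

# Each column of B maps one thruster's unit thrust to (Fx, Fy, Mz).
dirs = np.stack([np.cos(ANG), np.sin(ANG)])        # (2, 4) force directions
moms = POS[:, 0] * dirs[1] - POS[:, 1] * dirs[0]   # z component of r x F
B = np.vstack([dirs, moms])                        # (3, 4) allocation matrix
B_pinv = np.linalg.pinv(B)

def allocate(surge, sway, yaw):
    """Least-squares thrust allocation: map a (surge, sway, yaw)
    command onto the four thrusters, clipped to [-1, 1]."""
    return np.clip(B_pinv @ np.array([surge, sway, yaw]), -1.0, 1.0)
```

For example, a pure surge command yields four equal forward thrusts, while a pure yaw command drives diagonally opposite thrusters in opposite directions.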
The team’s software architecture comprises three main modules: Perception, which processes external stimuli; Cognition, which uses the processed data to make decisions; and Behavior, which executes those decisions through control actions and hardware commands. An additional Xtras module holds utilities, interfaces, and logging. To address challenges in ground control, especially communication loss, the team opted to run programs natively on the ASV's computer (Ares) rather than relying on the ground control computer. Secure Shell (SSH) connections are used solely to attach terminal sessions, enabling UI applications to display directly on Ares. This approach enhances system reliability and minimizes disruptions during operations. *See our TDR, Appendix F, for more detail.
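The module split can be sketched as a single control-loop tick passing data from Perception through Cognition to Behavior. All class and field names below are illustrative placeholders, not the actual codebase.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """Minimal stand-in for fused sensor output."""
    gate_bearing_deg: float   # e.g. from the vision pipeline
    obstacle_ahead: bool      # e.g. from the LiDAR pipeline

class Perception:
    def sense(self, raw):
        # The real module would fuse camera and LiDAR streams here.
        return Percept(**raw)

class Cognition:
    def decide(self, percept):
        # Decision logic: avoid obstacles first, otherwise track the gate.
        if percept.obstacle_ahead:
            return ("avoid", 0.0)
        return ("track_gate", percept.gate_bearing_deg)

class Behavior:
    def act(self, decision):
        mode, bearing = decision
        # The real module would issue thruster commands; we just report.
        return f"{mode}: steer {bearing:+.1f} deg"

def tick(raw):
    """One iteration of the Perception -> Cognition -> Behavior loop."""
    return Behavior().act(Cognition().decide(Perception().sense(raw)))
```

Keeping the three stages behind narrow interfaces like this is what makes the modules independently testable and debuggable, as noted above.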