World Aquaculture Magazine - September 2025

Why AI?

Human intervention in aquaculture production is invaluable, but the meticulous work of biomass calculation and disease diagnosis can be eased with the help of AI. AI and machine vision provide a powerful, non-invasive tool for monitoring aquaculture (Martínez-Vázquez et al. 2019). Neural network models, which have proven useful for gathering behavioral data and individual size measurements from cultured fish populations, have recently been applied in aquaculture. This allows us to avoid the intrusive handling and sampling techniques traditionally required for tasks such as behavioral analysis (Fan et al. 2023). Integrated morphometric monitoring and analysis systems of this kind have been created for the eggs, larvae, and adults of zebrafish (Danio rerio), a useful model animal in biological study domains such as developmental biology, ecotoxicology, genetics, and fish biology (Li et al. 2023; Nguyen et al. 2023).

YOLO (You Only Look Once), developed in 2015 by Joseph Redmon and Ali Farhadi, is a real-time object detection system that uses a Convolutional Neural Network (CNN) to forecast the locations and likelihoods of objects in an input image (Redmon et al. 2016). Building on YOLO, I have created an object identification and analysis model called Zid-AI, based on narrow AI technology, that can be applied to zebrafish morphometric and behavioral analysis. Several variants of the YOLO model exist for object detection, and the detector can be combined with other OpenCV features for use in morphometric and behavioral analysis applications. These AI systems do have limitations, however: a powerful graphics card and a high-end computer are needed to run and train the model, and high-resolution cameras are necessary for precise interpretation of results. The goal of this experiment is to develop a model that reduces the need for costly equipment, so that videos taken with a cell-phone camera can be used to monitor the size and behavior of zebrafish.

How the Model is Built

To create the object detection model, 3,607 zebrafish photos were gathered from copyright-free sources, particularly the Roboflow repository. The model uses these photos as training data to recognize zebrafish in a holding tank. The photos were then augmented, that is, enhanced with transformations such as tilting, hue alteration, and saturation variation, to improve the model's learning and comprehension (a minimal sketch of this step appears below, after the annotation example). Augmentation expanded the dataset to 13,999 photos, and this diversity helps the model perform better. All 13,999 photos were then annotated: each zebrafish in an image was marked with a bounding box and labelled "Zebrafish." The annotations were saved in a separate file in the COCO format, which records the image names, the coordinate locations of each bounding box, and numerical class designations; the Zebrafish label, for example, is designated Class 1. These annotations serve as the foundation for model training. The entire process is shown in simplified form in Figure 1.
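To make the annotation format concrete, the sketch below shows what one COCO-style record could look like. The file name, image size, and box coordinates are hypothetical illustrations, not values from the actual Zid-AI dataset; COCO stores each bounding box as [x, y, width, height] in pixels.

```python
# Hypothetical COCO-style annotation record (illustrative values only).
# COCO bounding boxes are stored as [x_min, y_min, width, height] in pixels.
coco_example = {
    "images": [
        {"id": 1, "file_name": "tank_frame_0001.jpg", "width": 1280, "height": 720}
    ],
    "categories": [
        {"id": 1, "name": "Zebrafish"}  # Class 1, as described in the article
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,               # links the box to tank_frame_0001.jpg
            "category_id": 1,            # links the box to the "Zebrafish" class
            "bbox": [412, 233, 96, 38],  # x, y, width, height in pixels
        }
    ],
}
```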
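And here is the promised sketch of the augmentation idea, written in Python with OpenCV. It illustrates the transformations named above (tilting, hue alteration, saturation variation); it is not the exact pipeline used to build the dataset, and the file names are placeholders.

```python
import cv2
import numpy as np

def augment(image_bgr, angle_deg=10, hue_shift=8, sat_scale=1.2):
    """Apply a small rotation ("tilt") plus hue and saturation changes."""
    h, w = image_bgr.shape[:2]
    # Rotate about the image center to imitate a tilted camera.
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    tilted = cv2.warpAffine(image_bgr, m, (w, h))
    # Shift hue and scale saturation in HSV space.
    hsv = cv2.cvtColor(tilted, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] = (hsv[..., 0] + hue_shift) % 180            # OpenCV hue range is 0-179
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_scale, 0, 255)   # saturation channel
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

# Example: create one augmented variant of a training photo.
frame = cv2.imread("zebrafish.jpg")          # hypothetical file name
augmented = augment(frame)
cv2.imwrite("zebrafish_aug.jpg", augmented)
```

Note that in a real dataset build the bounding-box annotations must be transformed along with the pixels whenever an image is tilted; repositories such as Roboflow handle that bookkeeping automatically.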
The model was constructed in the YOLO V9 CNN architecture and environment and trained on the annotated photos to recognize the labels. Training proceeds in complete learning cycles called epochs; one epoch means the model has worked through every labelled image in the dataset once. We utilized 75 epochs in this instance, meaning the model learned from the 13,999 photos 75 times in total. Figure 2 shows how the loss function is calculated in each cycle, which makes it easy to evaluate the model's performance.

The model training process is made up of three phases: training, validation, and testing. Of the photos, 75 percent were used for training, 13 percent for validation, and 12 percent for testing. With this partition the model learns from 75 percent of the photos, uses 13 percent to validate what it has learned, and uses the remaining 12 percent to evaluate accuracy and other performance indicators. Upon completion of training, the model is saved in a .h5 file, which may subsequently be used to create an application. As of right now, the trained model can recognize zebrafish in any input video or image it has never seen before. (Minimal code sketches of the dataset split, the training step, and an ArUco-based measurement appear at the end of this section.)

Method to Validate the Model: Man vs. Machine Learning

To verify the accuracy of the AI model, I created an experimental setup that allows a simultaneous comparison of measurements taken by the AI and measurements taken by hand.

FIGURE 4. Diagrammatic explanation of how neural networks work.

FIGURE 5. ArUco markers of various dimensions (4x4, 5x5, 6x6, 7x7 matrices). Each marker's length and breadth correspond to its matrix; for instance, a 4x4 marker measures 4 cm in length and 4 cm in breadth. Source: Siki and Takacs (2021).

FIGURE 6. Precision-recall curve illustrating YOLO V9 model performance during evaluation. X-axis: recall (true positive rate, TP/(TP+FN)); Y-axis: precision (positive predictive value, TP/(TP+FP)).
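The first sketch below illustrates the 75/13/12 partition described above. It assumes Python and a plain list of image paths; on the 13,999-photo dataset it yields roughly 10,499 training, 1,819 validation, and 1,681 test images.

```python
import random

def split_dataset(paths, train=0.75, val=0.13, seed=0):
    """Shuffle image paths and split them ~75/13/12 into train/val/test."""
    paths = paths[:]                     # copy so the caller's list is untouched
    random.Random(seed).shuffle(paths)   # fixed seed for a reproducible split
    n_train = int(len(paths) * train)
    n_val = int(len(paths) * val)
    return (paths[:n_train],                       # 75% for training
            paths[n_train:n_train + n_val],        # 13% for validation
            paths[n_train + n_val:])               # remaining ~12% for testing
```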
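As one possible illustration of the training and inference steps, here is a minimal sketch using the Ultralytics Python package, which provides YOLO V9 model definitions. This is an assumed toolchain, not necessarily the exact environment used for Zid-AI: the dataset YAML, weight file, and video names are hypothetical, and Ultralytics saves weights as .pt files, so exporting to another format such as .h5 would be a separate step.

```python
from ultralytics import YOLO  # assumed toolchain; pip install ultralytics

# Start from pretrained YOLO V9 weights and fine-tune on the zebrafish set.
model = YOLO("yolov9c.pt")

# "zebrafish.yaml" is a hypothetical dataset file pointing at the
# train/val/test image folders (the 75/13/12 split described above)
# with a single class name, "Zebrafish".
model.train(
    data="zebrafish.yaml",
    epochs=75,        # 75 full passes over the 13,999 images
    imgsz=640,        # training resolution
)

# Run the trained detector on a phone video the model has never seen.
model.predict("phone_video.mp4", save=True, conf=0.5)
```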
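Finally, the article does not spell out exactly how the ArUco markers of Figure 5 enter the measurement, but a common approach, and a plausible reading of the figure, is to use a marker of known physical size as a scale reference for converting pixels to centimeters. The sketch below assumes that approach, uses the OpenCV 4.7+ ArUco API, and treats the file name and fish bounding box as hypothetical values.

```python
import cv2
import numpy as np

MARKER_CM = 4.0  # physical side length of a 4x4 ArUco marker, per Figure 5

# Set up a detector for the 4x4 marker dictionary (OpenCV >= 4.7 API).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("tank_frame.jpg")  # hypothetical frame from a phone video
corners, ids, _ = detector.detectMarkers(frame)

if ids is not None:
    # Average side length of the first detected marker, in pixels.
    c = corners[0].reshape(4, 2)
    side_px = np.mean([np.linalg.norm(c[i] - c[(i + 1) % 4]) for i in range(4)])
    px_per_cm = side_px / MARKER_CM

    # Convert a detected fish bounding box (hypothetical values) to cm.
    # The scale is only valid if marker and fish sit at a similar
    # distance from the camera, e.g. against the same tank wall.
    x1, y1, x2, y2 = 412, 233, 508, 271   # e.g. from the YOLO detector
    length_cm = (x2 - x1) / px_per_cm
    print(f"Estimated body length: {length_cm:.2f} cm")
```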
