Aquaculture 2022

February 28 - March 4, 2022

San Diego, California

IMPACT OF IMAGE DATASET SIZE AND QUALITY ON A CONVOLUTIONAL NEURAL NETWORK MODEL ACCURACY FOR IN-TANK FISH DETECTION IN RECIRCULATING AQUACULTURE SYSTEMS

Rakesh Ranjan*, Kata Sharrer, and Scott Tsukuda

 

*Freshwater Institute,

The Conservation Fund

1098 Turner Rd, Shepherdstown, WV 25443

Email: rranjan@conservationfund.org

 



Recirculating aquaculture systems (RAS), a land-based intensive aquaculture technology, are being adopted globally as a sustainable alternative to wild-capture fishing. Because RAS grow fish in a controlled environment, precision technologies can be conveniently adopted to improve system performance and reliability and to assist growers with important fish management decisions. Recent advancements in computer vision and artificial intelligence (AI) have significantly improved the reliability, repeatability, and accuracy of such models and have drawn the interest of the aquaculture industry and research community. Convolutional neural network (CNN)-assisted image classification and object detection models are being developed in the aquaculture industry for fish management, including feed optimization, biomass and yield estimation, fish health monitoring, and waste management. However, machine learning approaches are data-intensive, and model precision and accuracy depend primarily on data quality. When imaging underwater, challenges including turbidity, fish density, and distortions caused by the underwater environment are expected to impede feature identification. Therefore, this study was conducted to investigate the effect of the number and quality of images, imaging conditions, and pre-processing operations on the fish detection accuracy of an object detection model.

An underwater sensing platform was developed with four commercially available imaging sensors [Raspberry Pi camera (model: Pi 4 HQ, Raspberry Pi Foundation, Cambridge, UK); GoPro (model: HERO9, GoPro, Inc., California, USA); OAK-D (model: OAK-D Depth AI, Luxonis, Colorado, USA); Ubiquiti security camera (model: G3, Ubiquiti Inc., New York, USA)], customized and deployed in a RAS tank stocked with rainbow trout. Images from all sensors were first collected under ambient LED lighting at 5-second intervals; supplemental LED lighting was later added above the tank to acquire imagery for comparison with the ambient-lighting condition. The images acquired from the various sensors under the different lighting conditions were divided into batches of 100 images and annotated as partial or whole fish. The annotated images were split into training, validation, and test datasets in a 70:20:10 ratio and used to train a custom YOLOv5 model in Roboflow software (Roboflow, Inc., Des Moines, Iowa, USA) for fish detection. The effects of sensor-specific image quality, number of images, lighting conditions, and image augmentation on fish detection accuracy are being investigated, and the results will be presented in terms of precision, mean average precision, and recall.
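As context for the workflow described above, the sketch below shows one way an annotated image set could be partitioned 70:20:10 into training, validation, and test folders in the directory layout expected by the public YOLOv5 repository. It is a minimal illustration only: the directory names, file extensions, random seed, and the training command in the trailing comment are assumptions for this example, not the authors' Roboflow-based pipeline.

"""
Minimal sketch of a 70:20:10 train/validation/test split for YOLO-format data.
Paths, extensions, and the seed are illustrative assumptions, not the exact
pipeline used in this study (which relied on Roboflow).
"""
import random
import shutil
from pathlib import Path

SOURCE_DIR = Path("annotated_images")   # hypothetical folder of images + YOLO .txt labels
OUTPUT_DIR = Path("dataset")


def split_dataset(seed: int = 42) -> None:
    images = sorted(SOURCE_DIR.glob("*.jpg"))
    random.Random(seed).shuffle(images)

    n = len(images)
    bounds = {
        "train": (0, int(0.7 * n)),          # 70% for training
        "valid": (int(0.7 * n), int(0.9 * n)),  # 20% for validation
        "test": (int(0.9 * n), n),           # 10% for testing
    }
    for split, (lo, hi) in bounds.items():
        for sub in ("images", "labels"):
            (OUTPUT_DIR / split / sub).mkdir(parents=True, exist_ok=True)
        for img in images[lo:hi]:
            shutil.copy(img, OUTPUT_DIR / split / "images" / img.name)
            label = img.with_suffix(".txt")  # YOLO-format annotation file
            if label.exists():
                shutil.copy(label, OUTPUT_DIR / split / "labels" / label.name)


if __name__ == "__main__":
    split_dataset()
    # Training with the public YOLOv5 repository would then look roughly like:
    #   python train.py --img 640 --batch 16 --epochs 100 \
    #       --data fish.yaml --weights yolov5s.pt
    # where fish.yaml (hypothetical) lists the two classes ("partial fish",
    # "whole fish") and the train/valid image directories created above.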

Keywords: precision aquaculture, deep neural network, artificial intelligence, RAS, machine learning