Output list
Journal article
Morphology-based weed type recognition using Siamese network
Published 2025
European Journal of Agronomy, 163, 127439
Automatic weed detection and classification can significantly reduce weed management costs and improve crop yields and quality. Weed detection in crops from imagery is inherently challenging: weeds and crops are of similar colour (green on green), their growth habits and textures are often alike, and weed populations vary with crop, geographical location, season and even weather patterns. This study proposes a novel approach that combines object detection and meta-learning techniques for generalised weed detection, transcending the limitations of varying field contexts. Instead of classifying weeds by species, this study classifies them by morphological family, in line with farming practices. An object detector, e.g., a YOLO (You Only Look Once) model, is employed for plant detection, while a Siamese network, leveraging state-of-the-art deep learning models as its backbone, is used for weed classification. This study repurposed three publicly available datasets, namely Weed25, Cotton weed and Corn weed. Each dataset contains multiple weed species, which this study grouped into three classes based on weed morphology. YOLOv7 achieved the best result as the plant detector, and VGG16 was the best feature extractor for the Siamese network. Moreover, the models were trained on one dataset (Weed25) and applied to the other datasets (Cotton weed and Corn weed) without further training. The study also observed that the classification accuracy of the Siamese network improved when the cosine similarity function was used for calculating the contrastive loss. The YOLOv7 model obtained a mAP of 91.03% on the Weed25 dataset, which was used for training; the mAPs on the unseen datasets were 84.65% and 81.16%. The classification accuracies with the best combination were 97.59%, 93.67% and 93.35% for the Weed25, Cotton weed and Corn weed datasets, respectively. This study also compared the classification performance of the proposed technique with state-of-the-art convolutional neural network models. The proposed approach advances weed classification accuracy and presents a viable solution for dataset-independent, i.e., site-independent, weed detection, fostering sustainable agricultural practices.
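A minimal sketch (not the authors' code) of the classification stage described above: a Siamese network with a VGG16 backbone whose embeddings are compared with cosine similarity inside a contrastive loss. The embedding size, margin and batch shapes are illustrative assumptions; in the paper the inputs would be plant crops produced by the YOLO detector.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SiameseVGG16(nn.Module):
    def __init__(self, embedding_dim=256):
        super().__init__()
        backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = backbone.features          # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.embed = nn.Sequential(nn.Flatten(), nn.Linear(512 * 7 * 7, embedding_dim))

    def forward_once(self, x):
        return self.embed(self.pool(self.features(x)))

    def forward(self, x1, x2):
        return self.forward_once(x1), self.forward_once(x2)

def cosine_contrastive_loss(z1, z2, label, margin=0.5):
    """label = 1 for same morphological class, 0 for different.
    Uses cosine distance (1 - cosine similarity) instead of Euclidean distance."""
    dist = 1.0 - F.cosine_similarity(z1, z2)
    same = label * dist.pow(2)
    diff = (1 - label) * F.relu(margin - dist).pow(2)
    return (same + diff).mean()

# Example forward pass on random tensors standing in for cropped plant detections.
model = SiameseVGG16()
a = torch.randn(4, 3, 224, 224)
b = torch.randn(4, 3, 224, 224)
y = torch.tensor([1.0, 0.0, 1.0, 0.0])
z1, z2 = model(a, b)
loss = cosine_contrastive_loss(z1, z2, y)
```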
Journal article
Object-level benchmark for deep learning-based detection and classification of weed species
Published 2024
Crop Protection, 177, 106561
Weeds can decrease crop yields and quality. Detection, localisation and classification of weeds in crops are crucial for developing efficient weed control and management systems. Deep learning (DL) based object detection techniques have been applied in various domains, but they generally need appropriate datasets. Most available weed datasets only offer image-level annotation, i.e., each image is labelled with one weed species. In practice, however, one image can contain multiple weed (and crop) species and/or multiple instances of one species. Consequently, the lack of instance-level annotations in existing weed datasets constrains the applicability of powerful DL techniques. In this research, we construct an instance-level labelled weed dataset. The images are sourced from a publicly available weed dataset, the Corn weed dataset, which has 5997 images of corn plants and four types of weeds. We annotated each instance with a bounding box and labelled it with the appropriate crop or weed species. The images contain about three bounding box annotations on average, while some images have over fifty. To establish the benchmark, we evaluated the dataset using several DL models, including YOLOv7, YOLOv8 and Faster-RCNN, to locate and classify weeds in crops. The models were compared on inference time and detection accuracy. YOLOv7 and its variant YOLOv7-tiny achieved the highest mean average precision (mAP) of 88.50% and 88.29%, taking 2.7 ms and 1.43 ms, respectively, to classify crop and weed species in an image. YOLOv8m, a variant of YOLOv8, detected the plants in 2.2 ms with a mAP of 87.75%. Data augmentation to address the class imbalance in the dataset improved the mAP to 89.93% for YOLOv7 and 89.39% for YOLOv8. The detection accuracy and inference times of the YOLOv7 and YOLOv8 models indicate that these techniques can be used to develop an automatic field-level weed detection system.
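A minimal sketch of reading instance-level annotations in the common YOLO text format (one "class x_center y_center width height" line per bounding box, coordinates normalised to [0, 1]), which is one typical way such a dataset is consumed. The file name and class list are illustrative assumptions; the released dataset's exact format and class ordering may differ.

```python
from pathlib import Path

# Assumed class ordering for illustration only.
CLASSES = ["corn", "weed_1", "weed_2", "weed_3", "weed_4"]

def load_yolo_boxes(label_path, img_w, img_h):
    """Parse one YOLO-format label file into pixel-space bounding boxes."""
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        cls, xc, yc, w, h = line.split()
        xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
        x1 = (xc - w / 2) * img_w                 # top-left corner in pixels
        y1 = (yc - h / 2) * img_h
        boxes.append((CLASSES[int(cls)], x1, y1, w * img_w, h * img_h))
    return boxes                                  # list of (species, x, y, width, height)

# e.g. load_yolo_boxes("labels/img_0001.txt", img_w=800, img_h=600)
```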
Journal article
Image patch-based deep learning approach for crop and weed recognition
Published 2023
Ecological Informatics, 78, 102361
Accurate classification of weed species among crop plants plays a crucial role in precision agriculture by enabling targeted treatment. Recent studies show that deep learning (DL) models offer promising solutions. However, several challenges limit their performance, such as a lack of adequate training data, inter-class similarity between weed species, and intra-class dissimilarity between images of the same weed species at different growth stages or under other conditions (e.g., variations in lighting, image capturing mechanisms and agricultural field environments). In this research, we propose an image patch-based weed classification pipeline in which one patch of an image is considered at a time to improve performance. We first enhance the images using generative adversarial networks. The enhanced images are divided into overlapping patches, a subset of which is used for training the DL models. To select the most informative patches, we use the variance of the Laplacian and the mean frequency of the Fast Fourier Transform. At test time, the model's patch-level outputs are fused using a weighted majority voting technique to infer the class label of an image. The proposed pipeline was evaluated using 10 state-of-the-art DL models on four publicly available crop weed datasets: DeepWeeds, Cotton weed, Corn weed and Cotton Tomato weed. Our pipeline achieved significant performance improvements on all four datasets. DenseNet201 achieved the top performance, with F1 scores of 98.49%, 99.83% and 100% on the DeepWeeds, Corn weed and Cotton Tomato weed datasets, respectively. The highest F1 score on the Cotton weed dataset was 98.96%, obtained by InceptionResNetV2. Moreover, the proposed pipeline addressed the issues of intra-class dissimilarity and inter-class similarity in the DeepWeeds dataset and more accurately classified the minority weed classes in the Cotton weed dataset. This performance indicates that the proposed pipeline can be used in farming applications.
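A minimal sketch (assumptions noted) of the patch-scoring idea described above: patches are scored by the variance of the Laplacian (a sharpness measure) and by the mean frequency of the Fourier spectrum, and the highest-scoring ones are kept. Patch size, stride, the number of kept patches and the equal score weighting are illustrative assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

def patch_scores(gray_patch):
    lap_var = cv2.Laplacian(gray_patch, cv2.CV_64F).var()        # sharpness / detail
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_patch)))
    freqs = np.hypot(*np.meshgrid(                               # radial frequency grid
        np.fft.fftshift(np.fft.fftfreq(gray_patch.shape[1])),
        np.fft.fftshift(np.fft.fftfreq(gray_patch.shape[0]))))
    mean_freq = float((spectrum * freqs).sum() / spectrum.sum()) # spectral centroid
    return lap_var, mean_freq

def select_patches(gray_image, size=128, stride=64, keep=8):
    patches, scores = [], []
    for y in range(0, gray_image.shape[0] - size + 1, stride):
        for x in range(0, gray_image.shape[1] - size + 1, stride):
            p = gray_image[y:y + size, x:x + size]
            lap_var, mean_freq = patch_scores(p)
            patches.append(p)
            # Assumed equal weighting; in practice the two scores would be normalised.
            scores.append(0.5 * lap_var + 0.5 * mean_freq)
    order = np.argsort(scores)[::-1][:keep]
    return [patches[i] for i in order]
```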
Journal article
Weed recognition using deep learning techniques on class-imbalanced imagery
Published 2022
Crop and Pasture Science, 74, 6, CP21626
Context: Most weed species can adversely impact agricultural productivity by competing for nutrients required by high-value crops. Manual weeding is not practical for large cropping areas, so many studies have sought to develop automatic weed management systems for agricultural crops. A major task in this process is recognising weeds from images. However, weed recognition is challenging because weed and crop plants can be similar in colour, texture and shape, which can be exacerbated by the imaging, geographic or weather conditions when the images are recorded. Advanced machine learning techniques can be used to recognise weeds from imagery.
Aims: In this paper, we have investigated five state-of-the-art deep neural networks, namely VGG16, ResNet-50, Inception-V3, Inception-ResNet-v2 and MobileNetV2, and evaluated their performance for weed recognition.
Methods: We used several experimental settings and multiple dataset combinations. In particular, we constructed a large weed-crop dataset by combining several smaller datasets, mitigated class imbalance through data augmentation, and used this dataset to benchmark the deep neural networks. We also investigated transfer learning by preserving the pre-trained weights for feature extraction and fine-tuning the networks on the crop and weed images (see the sketch after this abstract).
Key results: We found that VGG16 performed better than others on small-scale datasets, while ResNet-50 performed better than other deep networks on the large combined dataset.
Conclusions: This research shows that data augmentation and fine-tuning techniques improve the performance of deep learning models for classifying crop and weed images.
Implications: This research evaluates the performance of several deep learning models, offers directions for selecting the most appropriate models, and highlights the need for a large-scale benchmark weed dataset.
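A minimal transfer-learning sketch in the spirit of the Methods above: load an ImageNet-pre-trained ResNet-50, freeze the convolutional weights as a feature extractor, and fine-tune a new classification head on crop/weed images. The number of classes, optimiser and learning rate are illustrative assumptions, not the study's exact configuration.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

NUM_CLASSES = 10                         # assumed number of crop/weed classes

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():         # freeze pre-trained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ...standard training loop over augmented crop/weed batches goes here...
```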
Journal article
A survey of deep learning techniques for weed detection from images
Published 2021
Computers and Electronics in Agriculture, 184, 106067
Rapid advances in deep learning (DL) techniques have enabled fast detection, localisation and recognition of objects in images and videos, and DL techniques are now used in many applications related to agriculture and farming. Automatic detection and classification of weeds can play an important role in weed management and so contribute to higher yields. Weed detection in crops from imagery is inherently challenging because weeds and crops have similar colours (‘green-on-green’) and their shapes and textures can be very similar during growth; moreover, a plant considered a crop in one setting can be considered a weed in another. In addition to detection, the recognition of specific weed species is essential so that targeted control mechanisms (e.g., appropriate herbicides at correct doses) can be applied. In this paper, we review existing deep learning-based weed detection and classification techniques. We cover the literature on four main procedures: data acquisition; dataset preparation; DL techniques employed for the detection, localisation and classification of weeds in crops; and evaluation metrics. We found that most studies applied supervised learning techniques, that high classification accuracy can be achieved by fine-tuning pre-trained models on plant datasets, and that high accuracy generally requires a large amount of labelled data.
Journal article
Jonaki - An mLearning Tool to Reduce Illiteracy in Bangladesh
Published 2015
International journal of computer applications, 128, 17, 21 - 25
Bangladesh is a densely populated country with a high rate of illiteracy, and the adult population contributes most to this rate. Given the widespread adoption of mobile phones, mobile phone-based adult literacy tools may help address the country's literacy issues. This paper introduces one such mobile application, "Jonaki", a simple yet powerful self-learning application that teaches people how to read and write Bengali. With the help of audio and video, the application creates a friendly learning environment for illiterate learners. Informed by surveys of mobile phone usage among people with no formal education, the user interface was designed to be easy to use. If adopted by the government and the mobile operators, Jonaki could substantially reduce illiteracy in Bangladesh within a short span of time.
Journal article
Implementation of Shamir's Secret Sharing on Proactive Network
Published 2013
International Journal of Applied Information Systems, 6, 2, 17 - 22
Journal article
Contour Based Face Recognition Process
Published 2013
International Journal of Science, Engineering and Computer Technology, 3, 7, 244
This paper describes a contour-based face recognition system that provides a user-friendly environment for recognising faces. We emphasise the relevant and crucial parts of the face detection algorithm. The proposed system uses contour matching to identify faces. The advantage of contour matching is that the structure of the face is strongly represented in its description, and its algorithmic and computational simplicity makes it suitable for both hardware and software implementation. Because the complete process is a large undertaking, we did not finish it in full; instead, we developed an algorithm that finds the contour of a human face and a matching algorithm so that faces can be recognised more efficiently. The outcomes of this research provide data that other researchers can use to work further on contour-based face recognition.
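A minimal sketch of contour extraction and matching with OpenCV, in the spirit of the approach above (not the authors' implementation): extract the largest contour from each face image and compare the two contours using Hu-moment shape matching. The Canny thresholds and match threshold are illustrative assumptions.

```python
import cv2

def largest_contour(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)                       # edge map of the face
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)               # keep the dominant contour

def faces_match(path_a, path_b, threshold=0.1):
    score = cv2.matchShapes(largest_contour(path_a),
                            largest_contour(path_b),
                            cv2.CONTOURS_MATCH_I1, 0.0)     # lower score = more similar
    return score < threshold

# e.g. faces_match("face_db/person1.png", "query/capture.png")
```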