Detecting weeds and weed seeds

Deep convolutional neural networks for image-based Convolvulus sepium detection in sugar beet fields

Convolvulus sepium (hedge bindweed) detection in sugar beet fields remains a challenging problem due to variation in plant appearance, illumination changes, foliage occlusion, and the presence of different growth stages under field conditions. Current approaches to weed and crop recognition, segmentation and detection rely predominantly on conventional machine-learning techniques that require a large set of hand-crafted features for modelling, and such features may fail to generalize across different fields and environments.

Results

Here, we present an approach that develops a deep convolutional neural network (CNN) based on the tiny YOLOv3 architecture for C. sepium and sugar beet detection. We generated 2271 synthetic images and combined them with 452 field images to train the developed model. YOLO anchor box sizes were calculated from the training dataset using a k-means clustering approach. The resulting model was tested on 100 field images, showing that combining synthetic and original field images for training improved the mean average precision (mAP) from 0.751 to 0.829 compared to using the collected field images alone. We also compared the performance of the developed model with the YOLOv3 and Tiny YOLO models. The developed model achieved a better trade-off between accuracy and speed: the average precisions (AP@0.5) of C. sepium and sugar beet were 0.761 and 0.897 respectively, with an inference time of 6.48 ms per image (800 × 1200) on an NVIDIA Titan X GPU.
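The abstract notes that anchor box sizes were derived from the labelled training boxes with k-means. Below is a minimal sketch of the standard YOLO-style procedure, clustering (width, height) pairs under a 1 − IoU distance; the distance metric and k = 6 (the conventional anchor count for tiny YOLOv3) are our assumptions, since the text does not state them.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between (w, h) pairs, treating all boxes as if they share a corner."""
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    """Cluster box sizes with the 1 - IoU distance commonly used for YOLO anchors."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # minimising (1 - IoU) is the same as maximising IoU
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids[np.argsort(centroids.prod(axis=1))]  # sorted by box area

# usage (hypothetical): boxes = np.array([[w1, h1], [w2, h2], ...]) in pixels
# anchors = kmeans_anchors(boxes, k=6)
```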

Conclusion

The developed model has the potential to be deployed on an embedded mobile platform like the Jetson TX for online weed detection and management thanks to its high-speed inference. We recommend using synthetic images together with empirical field images during training to improve model performance.

Background

Sugar beet (Beta vulgaris ssp. vulgaris var. altissima) is very vulnerable to weed competition due to its slow growth and low competitive ability at the beginning of vegetation [1]. The yield loss caused by weed competition can be significant, so effective weed management in the early stages is critical if a high yield is to be achieved. In modern agriculture, herbicides are widely used to control weeds in crop fields [2], typically by spraying chemicals uniformly across the whole field. However, the overuse of chemicals in this approach has increased the cost of crop protection and promoted the evolution of herbicide-resistant weed populations [3], which hinders the development of sustainable agriculture.

Site-specific weed management (SSWM) refers to a spatially variable weed management strategy that minimizes the use of herbicides [4]. The main technical challenge in implementing SSWM lies in developing a reliable and accurate weed detection system under field conditions [5]. As a result, various automated weed monitoring approaches are being developed based on unmanned aerial vehicles or on-ground platforms [6,7,8]. Among them, image-based methods integrating machine learning algorithms are considered a promising approach for crop/weed classification, detection and segmentation. Previous studies [7] utilized shape, texture and colour features with a random forest classifier for weed classification. Ahmad et al. [9] developed a real-time selective herbicide sprayer system to discriminate two weed species based on visual features and an AdaBoost classifier. Spectral features from multispectral or hyperspectral images can also be exploited for weed recognition [10, 11]. Although the works mentioned above show good results on weed/crop segmentation, classification and detection, challenges such as plant species variation, growth differences, foliage occlusion and interference from changing outdoor conditions must still be overcome in order to develop real-time, robust models for agricultural fields.

Deep learning, a subset of machine learning, enables the learning of hierarchical representations and the discovery of potentially complex patterns from large data sets [12]. It has driven impressive advances on various problems in natural language processing and computer vision, and the performance of deep convolutional neural networks (CNNs) on image classification, segmentation and detection is of particular note. Deep learning is also growing in popularity in the agricultural domain. Kamilaris et al. [13] found that more than 40 studies have applied deep learning to agricultural problems such as plant disease and pest recognition [14, 15], crop planning [16] and plant stress phenotyping [17]. Pound et al. [18] demonstrated that deep learning can achieve state-of-the-art results (> 97% accuracy) for plant root and shoot identification and localization. Polder et al. [19] adapted a fully convolutional network (FCN) for potato virus Y detection based on field hyperspectral images. Specifically for crop/weed detection and segmentation, Sa et al. [20, 21] developed the WeedNet and WeedMap architectures to analyse aerial images from an unmanned aerial vehicle (UAV) platform, and Lottes et al. [8, 22] carried out related studies on weed/crop segmentation in field images (RGB + NIR) obtained from BoniRob, an autonomous field robot platform. All these studies demonstrate the effectiveness of deep learning, reporting very good results.

In practice, farmers usually plow fields before sowing to give crop seeds the best chance of germination and growth. This procedure also buries, and thereby kills, part of the pre-emergent weeds. However, Convolvulus sepium (hedge bindweed) can emerge both from seeds and from rhizome segments remaining underground. This leads to staggered emergence of C. sepium, so that multiple growth stages, from the first unfolded leaves to stem elongation, are present in a single field, and the appearance of C. sepium varies accordingly. In the early growth stages, some C. sepium plants can also have colour features similar to those of young sugar beet plants. All these factors make it challenging to develop a robust system for C. sepium detection under field conditions. To the best of our knowledge, no studies have attempted to detect C. sepium in sugar beet fields using a deep learning approach.

In our study, we first develop an image generation pipeline to produce synthetic images for model training. We then design a deep neural network to detect C. sepium and sugar beet in field images. The major objectives of the present study are (i) to assess the feasibility of using a deep neural network for C. sepium detection in sugar beet fields; (ii) to explore whether the use of synthetic images can improve the performance of the developed model; and (iii) to discuss the possibility of implementing our model on mobile platforms for SSWM.

Methods

A digital single-lens reflex (DSLR) camera (Nikon D7200) was used to manually collect field images from two sugar beet fields in the province of West Flanders, Belgium, under different lighting conditions (from morning to afternoon, in sunny and cloudy weather). Most sugar beet plants had six unfolded leaves, while the growth stages of C. sepium plants varied widely, from seedling to pre-flowering. The camera was hand-held and images were captured at random positions in the fields. The distance between the camera and the soil surface was around 1 m, but was deliberately not fixed in order to create more variation in the images. The ISO value was 1600, and the exposure time was 1 ms in sunny weather and 1.25 ms in cloudy weather. The resolution of the raw images is 4000 × 6000 pixels. In total, 652 images taken under different lighting conditions were manually labelled with bounding boxes. Of these, 100 images were randomly selected as a test dataset and another 100 as a validation dataset; the remaining 452 images were used as the training dataset. All images were resized to 800 × 1200 pixels, which preserves their aspect ratio and makes them suitable for training given our computational resources.
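As a minimal sketch of the split-and-resize step, assuming the labelled images sit in a single folder (the actual file organisation is not described in the text):

```python
import random
from pathlib import Path
from PIL import Image

random.seed(42)  # fixed seed so the random split is reproducible

image_paths = sorted(Path("field_images").glob("*.jpg"))  # hypothetical folder
random.shuffle(image_paths)
assert len(image_paths) == 652

splits = {"test": image_paths[:100],
          "val": image_paths[100:200],
          "train": image_paths[200:]}  # the 452 remaining images

for name, paths in splits.items():
    out_dir = Path("dataset") / name
    out_dir.mkdir(parents=True, exist_ok=True)
    for p in paths:
        img = Image.open(p)
        # 6000 x 4000 -> 1200 x 800 (PIL size is (width, height)),
        # so the aspect ratio is preserved
        img.resize((1200, 800), Image.LANCZOS).save(out_dir / p.name)
```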

Synthetic image generation

Training a deep neural network to adequate performance generally requires a large amount of data, which is labour-intensive and time-consuming to collect and label. To overcome this problem, we generated synthetic images based on the training portion of the collected field images. The process of synthetic training image generation is depicted in Fig. 1. Seventy-seven images were selected as original source images; each contained either a sugar beet (51 images) or a C. sepium object (26 images). Their excess green (ExG) vegetation index [23] grayscale images were obtained using Eqs. (1) and (2), where Eq. (1) computes ExG = 2g − r − b and Eq. (2) normalizes the R, G and B channels (e.g. g = G/(R + G + B)). Next, we converted the ExG grayscale images into binary mask images with Otsu's algorithm [24].

Afterwards, the object images and their masks were transformed using a set of randomly chosen parameters: rotation (0 to 360° in 15° steps), zoom (0.5× to 1.5× in 0.1 steps), shift (−100 to 100 pixels in 15-pixel steps, both horizontally and vertically) and flips (horizontal or vertical). The base images and their corresponding masks were subjected only to flips (horizontal or vertical), limited rotation (0 or 180°) and limited zoom (1× to 1.8× in 0.1 steps) in order to preserve the soil background information. The object mask image (Boolean data type) was used as a logical control image: wherever the mask value is true, the pixel in the base image was replaced by the corresponding pixel from the object image; elsewhere the base image was left unchanged. After all the pixels from the object images had been added to the base images, the brightness of the composites was adjusted using gamma correction [25], with gamma values from 0.5 to 1.5 in 0.2 steps.

In total, we generated 2271 synthetic images: 1326 (51 × 26) containing a sugar beet and a C. sepium plant, 676 (26 × 26) containing two C. sepium plants, and 269 containing two sugar beet plants. These synthetic images were used only for training the deep neural networks. Fewer images with two sugar beet plants (269) were generated than of the other two types (1326 and 676) because most field images in the training dataset already contain sugar beet plants, and keeping the numbers of sugar beet and C. sepium objects balanced benefits training. Examples of real field images and synthetic images are shown in Fig. 2. There is no occlusion in the base and object images, but a synthetic image can contain overlapping plants (see Fig. 2, bottom-right image) because objects were placed at random positions in the base images, better reflecting real field conditions.
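As an illustration, here is a minimal sketch of the core masking-and-compositing steps (ExG, Otsu thresholding, pixel replacement and gamma correction) using OpenCV and NumPy. The geometric transforms are omitted for brevity, the file names are hypothetical, and base and object images are assumed to have the same size.

```python
import cv2
import numpy as np

def exg_mask(img_bgr):
    """Excess-green (ExG) index on normalised channels, then Otsu's threshold."""
    b, g, r = cv2.split(img_bgr.astype(np.float32))
    s = b + g + r + 1e-6                 # avoid division by zero on dark pixels
    r_n, g_n, b_n = r / s, g / s, b / s  # Eq. (2): normalised chromatic coordinates
    exg = 2.0 * g_n - r_n - b_n          # Eq. (1): ExG = 2g - r - b
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask.astype(bool)             # True where the plant object is

def compose(base_bgr, obj_bgr, gamma=1.0):
    """Paste the masked plant onto a soil base image, then apply gamma correction."""
    mask = exg_mask(obj_bgr)
    out = base_bgr.copy()
    out[mask] = obj_bgr[mask]            # replace base pixels where the mask is True
    lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(out, lut)             # simple power-law gamma correction

# usage (hypothetical file names):
# base = cv2.imread("soil_base.jpg")
# obj = cv2.imread("sugar_beet_object.jpg")
# synth = compose(base, obj, gamma=np.random.choice([0.5, 0.7, 0.9, 1.1, 1.3, 1.5]))
```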

Quality over quantity

A single Palmer amaranth plant can produce half a million seeds, grow 2-4 inches in a day and cause near total loss of crop yield. Palmer varieties in other regions of the country have already developed resistance to five major herbicide classes.

“Economically, when you think of increased inputs for control, and you think of yield loss potential, it’s highly devastating,” said Jeff Gunsolus, a University of Minnesota Extension weed scientist involved in the project.

It is no surprise that growers and regulators don't want to miss even a single Palmer amaranth seed coming into the state. The invasive weed has not yet taken widespread hold in Minnesota; it was first introduced in 2016 via contaminated native seed mix used for conservation plantings.

It’s almost impossible to tell the difference between a Palmer amaranth seed and that of other pigweed species or Palmer’s close cousin, waterhemp. Seed inspectors test native seed mixes at the molecular level to confirm Palmer amaranth presence.

Right now, the test developed by Brusa and his team can identify Palmer amaranth in a sample of 20 visually identical pigweed and waterhemp seeds. A single genetic marker achieves 99.7% accuracy, already better than some medical diagnostic tests, but Brusa said they want to layer two additional markers into the test to improve its reliability even further.

“If one marker fails, the other two should catch it,” said Brusa.
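The article doesn't spell out how the three markers would be combined, but a back-of-the-envelope calculation shows why redundancy helps. The sketch below assumes independent markers, each 99.7% accurate, combined by a simple majority vote; both assumptions are ours, not the article's.

```python
# Hypothetical: three independent markers, each with a 0.3% error rate,
# combined by majority vote (these assumptions are not from the article).
p = 0.003  # per-marker error rate (99.7% accuracy)

# The vote is wrong only when at least two of the three markers are wrong.
majority_error = 3 * p**2 * (1 - p) + p**3

print(f"single-marker error rate: {p:.4%}")               # 0.3000%
print(f"majority-vote error rate: {majority_error:.6%}")  # about 0.0027%
```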

Efficiency is also expected to grow as the team works towards increasing the test size to 50 seeds. Accuracy will always trump quantity, however.

“The highest priority is making sure that we don’t make bad calls,” he said. “If we let something through, why are we even doing it?”

The test could become available commercially as soon as next year. Long-term, the team hopes to adapt it for in-field use with screening for herbicide resistance.

Colorimetric assay for detecting mechanical damage to weed seeds

Weed seeds with mechanical damage are more susceptible to mortality in soil than nondamaged seeds. In this study we introduce a colorimetric assay to distinguish mechanically damaged weed seeds from nondamaged weed seeds. Our objectives were to 1) compare steepates from mechanically damaged seeds against steepates from nondamaged seeds for their capacities to reduce resazurin—a nontoxic, water-soluble dye that changes color and light absorbance properties in response to pH; and 2) use light absorbance data from steepate-resazurin solutions to create classification trees for distinguishing damaged from nondamaged weed seeds. Species in this study included barnyardgrass, curly dock, junglerice, kochia, oakleaf datura, Palmer amaranth, spurred anoda, stinkgrass, tall morningglory, and yellow foxtail. Seeds of each species were subjected to mechanical damage treatments that collectively represented a range of damage severities. Damaged and nondamaged seeds were individually soaked in water to produce steepates that were combined with resazurin. Light absorbance properties of steepate-resazurin solutions indicated that for all species except kochia, damaged seeds reduced resazurin to greater extents than nondamaged seeds. Prediction accuracy rates for classification trees that used absorbance values as predictor variables were conditioned by species and damage type. Prediction accuracy rates were relatively low (66% to 86% accurate) for lightly damaged seeds, especially grass weed seeds. Prediction accuracy rates were high (91% to 99% accurate) for severely damaged seeds of specific broadleaf and grass weeds. Steepate-resazurin solutions that successfully separated seeds took no more than 32 h to produce. The results of this study indicate that the resazurin assay is a method for quickly distinguishing damaged from nondamaged weed seeds. Because rapid assessments of seed intactness may accelerate the development of tactics for reducing the number of weed seeds in soil, we advocate further development of resazurin assays by laboratories studying methods for weed seedbank depletion.
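To make the classification-tree step concrete, here is a minimal sketch using scikit-learn; the absorbance features and their distributions are hypothetical stand-ins, since the study's actual predictor variables and data are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Simulated absorbance readings of steepate-resazurin solutions (hypothetical):
# damaged seeds reduce resazurin more, shifting absorbance at two wavelengths.
n = 200
damaged = rng.normal(loc=0.65, scale=0.08, size=(n, 2))
nondamaged = rng.normal(loc=0.45, scale=0.08, size=(n, 2))
X = np.vstack([damaged, nondamaged])
y = np.array([1] * n + [0] * n)  # 1 = damaged, 0 = nondamaged

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {tree.score(X_te, y_te):.1%}")
```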