Michigan State University

Smart Agriculture Sensing

Smart agriculture has enormous potential to improve crop and livestock production while reducing environmental impact. We are exploring sensing algorithms that we believe will play important roles in realizing this potential.

Automated Annotation for Smeared Point Removal

Smeared pixels are a problem with many consumer depth sensors used in building 3D models. We propose a self-annotated approach to training a smeared pixel removal algorithm that eliminates the need for manual annotation. Our 2024 WACV paper is available.

Label Efficient Learning in Agriculture

This is a comprehensive review of how to learn more with fewer annotations in plant agriculture. A pre-print (2023) is available.

Automated Systems for Animal Monitoring

In work led by Janice Siegford, we analyze what is needed when building datasets to support automated behavior analysis of animals; see: https://doi.org/10.1016/j.applanim.2023.106000 (2023).

Public Datasets for Computer Vision in Precision Livestock Farming

Our paper identifying public datasets that support computer vision in precision livestock farming was presented at USPLF 2023.

2022 Innovation of the Year Award

The team of Madonna Benjamin, Daniel Morris, Steven Yik, and Michael Lavagnino was awarded the 2022 MSU Innovation of the Year award for our work on using 3D scanning to advance swine welfare.

Maize Phenotype Estimation from Lidar Scans

Drone-based phenotyping has potential to speed the development of new crop breeds that improve yield and other desired characteristics. In this work we explore how well eleven different phenotypes can be estimated using a UAS-mounted Lidar to scan a maize diversity panel periodically over the growing season. High correlations with manual measurements are obtained for six phenotypes; see our pre-print (2022).

Voxel model of single maize plot showing growth over the season at seven time intervals
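As a side note on how agreement between sensor-derived and manual phenotypes is typically scored: the Pearson correlation can be computed directly from paired measurements. The plant-height values below are hypothetical, purely for illustration, and are not from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired plant heights in meters: lidar-derived vs. manual
lidar_heights = [1.52, 1.80, 2.10, 2.35, 2.60]
manual_heights = [1.50, 1.78, 2.15, 2.30, 2.65]
r = pearson_r(lidar_heights, manual_heights)
```

A correlation near 1.0 indicates the lidar-derived phenotype tracks the manual ground truth closely.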

Seed Coat Genetic Analysis

Analysis of the genetics and color characteristics of beans requires careful color extraction. Traditionally this was done through manual segmentation, but with the advent of convolutional neural networks we can train a network to segment the regions of interest, greatly speeding up the processing of large numbers of beans. The paper on seed coat color genetics (2021) is a collaboration with plant scientists Rie Sadohara and Karen Cichy.

Seedcoats (left), segmentation by CNN (center), extracted colored regions (right)
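Once a segmentation mask is available, per-region color extraction reduces to averaging the pixels under the mask. A minimal sketch with a toy image and mask (illustrative only, not the pipeline from the paper):

```python
def mean_color(image, mask):
    """Mean RGB color over pixels where mask is 1.

    image: H x W nested list of (r, g, b) tuples
    mask:  H x W nested list of 0/1 values (e.g. a CNN segmentation)
    """
    total = [0, 0, 0]
    count = 0
    for img_row, mask_row in zip(image, mask):
        for (r, g, b), m in zip(img_row, mask_row):
            if m:
                total[0] += r
                total[1] += g
                total[2] += b
                count += 1
    return tuple(t / count for t in total)

# Toy 1x2 image: one red seed-coat pixel, one background pixel
image = [[(255, 0, 0), (0, 0, 0)]]
mask = [[1, 0]]
color = mean_color(image, mask)  # averages only the masked pixel
```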

Pig Detection and Tracking

Computer vision techniques have the potential to transform Precision Livestock Farming. Our new methods for semi-supervised deep learning demonstrate accurate detection and posture estimation of pigs in depth images. Details are in our paper from IROS 2020. Congratulations to Steven on the paper being a best-paper finalist for Agri-Robotics. Here is an image illustrating the precision our method achieves, with 2-sigma ellipses around estimated joints/features. Click on it to see a video with mesh model reconstructions.

Sow Joint Detections
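A 2-sigma ellipse around a detected joint summarizes a 2x2 position covariance: its axes and orientation come from the covariance eigendecomposition. A minimal sketch of that computation (a standard construction, not the code from the paper):

```python
import math

def two_sigma_ellipse(cov):
    """Semi-axis lengths and orientation of the 2-sigma uncertainty
    ellipse for a 2x2 covariance matrix [[a, b], [b, c]].

    Returns (major_semi_axis, minor_semi_axis, angle_radians).
    """
    (a, b), (_, c) = cov
    # Closed-form eigenvalues of a symmetric 2x2 matrix
    mean = (a + c) / 2.0
    diff = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lam1, lam2 = mean + diff, mean - diff  # lam1 >= lam2
    # Orientation of the major axis
    angle = 0.5 * math.atan2(2.0 * b, a - c)
    # 2-sigma: scale each standard deviation (sqrt of eigenvalue) by 2
    return 2.0 * math.sqrt(lam1), 2.0 * math.sqrt(lam2), angle

# Toy covariance: 4x the variance along x than along y
major, minor, angle = two_sigma_ellipse([[4.0, 0.0], [0.0, 1.0]])
```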

Dry Bean Quality Analysis

Dry beans are an important agricultural product and source of nutrition. Ongoing research seeks to develop new varieties with improved nutritional properties and resistance to insects, disease, and drought, while maintaining or improving bean quality. An important quality measure is bean intactness after cooking and canning. Splits in the seed coats, such as those below, detract from bean quality.

Bean Splits

Our 2019 paper at CVPPP describes a proposed Bean Split Ratio as a measure for quantifying bean quality related to splits, and an automated algorithm for calculating it. If used to select new varieties, this can contribute towards improving bean quality for everyone! More details and the dataset are available here.
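A split ratio of this kind can be illustrated as a simple pixel-area ratio between a split mask and a bean mask. Note this is a hedged sketch of the idea; the precise Bean Split Ratio definition is in the CVPPP paper.

```python
def bean_split_ratio(bean_mask, split_mask):
    """Fraction of bean area covered by splits, as a pixel-count ratio.

    bean_mask, split_mask: flat lists of 0/1 over the same image grid
    (e.g. flattened outputs of a segmentation algorithm).
    """
    bean_area = sum(bean_mask)
    # Count split pixels only where they lie on a bean
    split_area = sum(s for s, b in zip(split_mask, bean_mask) if b)
    return split_area / bean_area if bean_area else 0.0

# Toy 4-pixel bean with one split pixel -> ratio 0.25
ratio = bean_split_ratio([1, 1, 1, 1], [1, 0, 0, 0])
```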

A Pyramid CNN for Dense-Leaves Segmentation

Segmenting leaves in dense foliage is a difficult problem. We have made recent progress, as shown below. On the left is an image of dense foliage containing leaves with large internal variations and texture. Our algorithm automatically estimates boundaries for the individual leaves, as shown on the right.

Densely packed leaves with strong occlusions   Automatic segmentation
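One common way to turn a predicted boundary map into individual leaf segments is connected-component labelling of the non-boundary pixels. The sketch below illustrates that general idea only; the grouping step in the Dense-Leaves paper is more sophisticated.

```python
from collections import deque

def label_segments(boundary):
    """Label connected regions of non-boundary pixels with distinct ids.

    boundary: H x W nested list, 1 on predicted leaf boundaries, 0 inside.
    Returns an H x W label map; pixels separated by boundaries get
    different positive labels, boundary pixels stay 0.
    """
    h, w = len(boundary), len(boundary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if boundary[sy][sx] or labels[sy][sx]:
                continue
            next_label += 1
            # Breadth-first flood fill of one segment
            queue = deque([(sy, sx)])
            labels[sy][sx] = next_label
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not boundary[ny][nx] and not labels[ny][nx]):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
    return labels

# Toy 3x3 map: a vertical boundary separates two "leaves"
segments = label_segments([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
```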

Dense-Leaves Dataset is now available!

The 2018 paper and pre-print describing the method are available.

Obstacles and Foliage Discrimination for Lidar

Off-road mobile robot navigation can require discriminating foliage from non-traversable obstacles. We are developing object discrimination technologies that can find navigation hazards, such as tree trunks, rocks, and cones, that may be partially occluded by foliage. For more details see: Obstacles and Foliage Discrimination for Lidar, D.D. Morris, in proc. SPIE 9837, Unmanned Systems Technology XVIII, 98370E (May 13, 2016); doi:10.1117/12.2224545.

  • Left: cluttered off-road environment with navigation hazards and foliage
  • Bottom left: automatic classification of Lidar obstacle pixels (red)
  • Bottom right: foliage pixels (green) shown along with obstacle pixels

Obstacle pixels

Obstacle and foliage pixels
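A rough intuition for why foliage and solid obstacles are separable in Lidar data: porous foliage mixes near and far returns within a small angular window, while a solid surface returns smoothly varying ranges. The toy classifier below thresholds local range variance to capture that intuition; the window size and threshold are illustrative assumptions, and this is not the algorithm from the paper.

```python
import statistics

def classify_returns(ranges, window=5, var_thresh=0.05):
    """Label each lidar return 'foliage' or 'obstacle' using the
    range variance in a sliding angular window.

    ranges: list of range measurements (meters) from one scan line.
    High local variance -> porous foliage; low variance -> solid surface.
    """
    half = window // 2
    labels = []
    for i in range(len(ranges)):
        lo, hi = max(0, i - half), min(len(ranges), i + half + 1)
        local_var = statistics.pvariance(ranges[lo:hi])
        labels.append('foliage' if local_var > var_thresh else 'obstacle')
    return labels

# A smooth wall-like return vs. a jittery bush-like return
wall = classify_returns([5.0] * 7)
bush = classify_returns([5.0, 2.0, 5.1, 1.9, 5.2, 2.1, 5.0])
```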

Plant photosynthesis distribution

Experiments on crop breeding, selection, and modification require knowledge of photosynthesis rates. Innovative sensor development and processing is needed to assess photosynthesis and its distribution over plant bodies.

Multi-modal plant dataset

Growing Depth Image Superpixels for Foliage Modeling, D.D. Morris, S.M. Imran, J. Chen, D.M. Kramer, in proc. Canadian Conf. Computer and Robot Vision, Jun 2016.

Multi-modality Imagery Database for Plant Phenotyping, J. Cruz, X. Yin, X. Liu, S.M. Imran, D.D. Morris, D.M. Kramer, J. Chen, in Machine Vision and Applications, pp. 1-15, November 2015.