
3D Vision Lab


The 3D Vision Lab at MSU explores the boundaries of Computer Vision in a dynamically changing three-dimensional world. It asks: how can we estimate properties of moving and stationary objects given the inherent ambiguities, noise, and resolution limits of our sensors? We employ a variety of two- and three-dimensional sensors, as well as fusion across sensors. Below are some of the problems we have worked on or are working on.

AutoDrive Challenge

MSU Autodrive Team   Start of the Lane-following challenge

The MSU AutoDrive Challenge team integrated sensing and control hardware, as well as software, into a GM Bolt. At the end of the first year we demonstrated lane following, lane switching, and obstacle avoidance.

Research in our lab includes: accurate pose estimation via 3D registration, Lidar-based target tracking, and Lidar-video fusion.
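The registration component can be illustrated with a minimal sketch (a generic illustration in numpy, not the team's actual pipeline): given matched 3D points from two scans, the rigid pose follows in closed form from an SVD. This Kabsch-style solution is the core step inside ICP-based 3D registration.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Rigid transform (R, t) aligning src to dst, both (N, 3) arrays of
    matched points, via the SVD of the cross-covariance (Kabsch solution)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation and translation of a point cloud
rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true
R_est, t_est = best_fit_transform(pts, moved)
```

In a full ICP loop this closed-form step alternates with re-estimating point correspondences between the scans.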

A Pyramid CNN for Dense-Leaves Segmentation

Segmenting leaves in dense foliage is a difficult problem. We have made recent progress, as shown below. On the left is an image of dense foliage containing leaves with large internal variations and texture. Our algorithm automatically estimates boundaries for the individual leaves, as shown on the right.

Densely packed leaves with strong occlusions   Automatic segmentation

Dense-Leaves Dataset is now available!

Paper describing our method: A Pyramid CNN for Dense-Leaves Segmentation
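A pyramid CNN processes the image at several resolutions so that both fine leaf boundaries and large-scale context are visible. The multi-scale input itself can be sketched with a simple average-pooling image pyramid (a generic illustration in numpy, not the paper's architecture):

```python
import numpy as np

def downsample2(img):
    """One pyramid level down: 2x2 average pooling of a 2D image."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
          + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def image_pyramid(img, levels=3):
    """List of progressively coarser views of the input image."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(downsample2(pyr[-1]))
    return pyr

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
pyr = image_pyramid(img, levels=3)   # shapes: 64x64, 32x32, 16x16
```

In a pyramid CNN, each level would be fed through convolutional layers and the per-level predictions merged back to full resolution.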

Plant photosynthesis distribution

Plant photosynthesis

Experiments on crop breeding, selection, and modification require knowledge of photosynthesis rates. Innovative sensor development and processing are needed to assess photosynthesis and its distribution over plant bodies.

Multi-modal plant dataset

Growing Depth Image Superpixels for Foliage Modeling, D.D. Morris, S.M. Imran, J. Chen, D.M. Kramer, in proc. Canadian Conf. Computer and Robot Vision, Jun 2016.

Multi-modality Imagery Database for Plant Phenotyping, J. Cruz, X. Yin, X. Liu, S.M. Imran, D.D. Morris, D.M. Kramer, J. Chen, in Machine Vision and Applications, pp. 1-15, November 2015.

Obstacles and Foliage Discrimination for Lidar

Off-road mobile robot navigation can require discriminating foliage from non-traversable obstacles. We are developing object discrimination technologies that can find navigation hazards, such as tree trunks, rocks, and cones, that may be partially occluded by foliage. For more details see: Obstacles and Foliage Discrimination for Lidar, D.D. Morris, in proc. SPIE 9837, Unmanned Systems Technology XVIII, 98370E (May 13, 2016); doi:10.1117/12.2224545.


  • Left: cluttered off-road environment with navigation hazards and foliage
  • Bottom left: automatic classification of Lidar obstacle pixels (red)
  • Bottom right: foliage pixels (green) shown along with obstacle pixels

Obstacle pixels


Obstacle and foliage pixels
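The foliage/obstacle distinction can be illustrated with a standard local-shape feature (a generic sketch, not the classifier from the paper): the sorted eigenvalues of each point neighborhood's covariance separate planar, solid surfaces such as trunks and rocks from the volumetric scatter typical of foliage.

```python
import numpy as np

def scatter_features(neighborhood):
    """Eigenvalue features of a local Lidar neighborhood, an (N, 3) array.
    With sorted eigenvalues l1 >= l2 >= l3 of the 3x3 covariance:
      linearity (l1-l2)/l1, planarity (l2-l3)/l1, scatter l3/l1.
    High scatter is typical of foliage; high planarity of solid surfaces."""
    cov = np.cov(neighborhood.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

rng = np.random.default_rng(1)
# A flat patch (e.g. a rock face) vs. an isotropic blob (e.g. foliage)
plane = rng.standard_normal((200, 3)) * [1.0, 1.0, 0.01]
blob  = rng.standard_normal((200, 3))
_, plan_p, scat_p = scatter_features(plane)
_, plan_b, scat_b = scatter_features(blob)
```

Features like these are typically computed per point over a fixed-radius neighborhood and fed to a classifier.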

Autonomous Vehicles: CANVAS

MSU recently acquired a drive-by-wire vehicle outfitted with a large array of heterogeneous sensors as part of the CANVAS program. We are working with other faculty on novel perception systems to enable robust driving in all conditions.

MSU CANVAS vehicle

Object detection and tracking with Lidar

Kinematic models for vehicle tracking

Autonomous vehicles of the future will need precise sensing of the world around them. LIDAR is a promising sensor that provides 3D point clouds of the world. Within these point clouds we seek objects such as people and vehicles, track them, and predict their trajectories. Components of this problem include clustering 3D points into objects, rejecting clutter, developing appropriate shape and motion models, and accounting for self-occlusions and scene occlusions.
Visual Classification of Coarse Vehicle Orientation using Histogram of Oriented Gradients Features
Paul Rybski, Daniel Huber, Daniel D. Morris, and Regis Hoffman 2010 IEEE Intelligent Vehicles Symposium, June, 2010.

A View-Dependent Adaptive Matched Filter for Ladar-Based Vehicle Tracking, Daniel D. Morris, Regis Hoffman, and Paul Haley, Proc. of 14th IASTED Int. Conf. on Robotics and Applications, November, 2009.
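A minimal kinematic tracking model can be sketched as a constant-velocity Kalman filter over object centroids (a textbook illustration, not the filters used in the papers above): the motion model predicts, and each Lidar-derived position measurement corrects.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter for object centroids.
    State x = [px, py, vx, vy]; only position [px, py] is measured."""
    def __init__(self, dt=0.1, q=0.001, r=0.1):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt       # position += velocity * dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0      # measure position only
        self.Q = q * np.eye(4)                 # process noise
        self.R = r * np.eye(2)                 # measurement noise
        self.x = np.zeros(4)
        self.P = 10.0 * np.eye(4)              # large initial uncertainty

    def step(self, z):
        # Predict with the kinematic model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured centroid z = [px, py]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x.copy()

# Target moving at 1 m/s in x, with noisy centroid measurements
kf = ConstantVelocityKF(dt=0.1)
rng = np.random.default_rng(2)
for k in range(100):
    z = np.array([0.1 * k, 0.0]) + 0.05 * rng.standard_normal(2)
    est = kf.step(z)   # est[0:2] = position, est[2:4] = velocity
```

The estimated velocity converges toward the true 1 m/s, which is what makes short-horizon trajectory prediction possible.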

Object classification using LIDAR

LIDAR points

While range measurements from LIDAR are precise, they are sparse at long range. As a result, determining object shape and category can be difficult. We develop 3D shape-based object categorization methods to classify object types.

Person tracking and motion analysis for medical applications

New RGBD sensors enable precise tracking of human motion. This can be used for medical applications such as home care.

Human skeleton from a Kinect

Rough Terrain and Ground Segmentation

Rough terrain and ground segmentation in cluttered environments

An important initial step in local scene understanding is estimating the ground surface. In flat, open areas this is straightforward, but in cluttered environments and rough terrain it can be challenging to separate ground surfaces from other objects. We have recently developed a new robust ground-measurement cost function that accounts for occlusions and clutter. When modeled with a Markov Random Field and optimized with Loopy Belief Propagation, it produces high-quality ground segmentations of LIDAR data; see:
Ground Segmentation based on Loopy Belief Propagation for Sparse 3D Point Clouds, Mingfang Zhang, Daniel D. Morris, Rui Fu, Proceedings 3DV 2015.
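For contrast with the MRF + loopy-BP formulation above, the simplest ground-segmentation baseline is a single RANSAC plane fit (a generic sketch, not the paper's method; it ignores occlusion reasoning and assumes one flat ground plane):

```python
import numpy as np

def ransac_ground_plane(pts, iters=200, thresh=0.1, seed=0):
    """Label points within `thresh` of the best-fitting plane as ground.
    pts is an (N, 3) point cloud; returns a boolean inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(pts @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic scene: a noisy ground plane plus an elevated obstacle cluster
rng = np.random.default_rng(3)
ground = np.column_stack([rng.uniform(-5, 5, 500), rng.uniform(-5, 5, 500),
                          0.02 * rng.standard_normal(500)])
obstacle = rng.uniform(0, 1, (100, 3)) + [2.0, 2.0, 0.5]
pts = np.vstack([ground, obstacle])
mask = ransac_ground_plane(pts)
```

This baseline fails exactly where the MRF formulation helps: rough, non-planar terrain and heavy clutter.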

Want to be involved?

Contact Dr. Morris if you are an undergraduate, PhD student, or prospective postdoc looking to solve exciting Computer Vision problems.

