
Contact Dr. Cang Ye

Phone: (804) 828-0346

Google Voice: (501) 237-1818

Fax: (804) 828-2771   

Email: cye@vcu.edu

Laboratory

Dept. of Computer Science

Virginia Commonwealth University

401 West Main Street, E2264

Richmond, VA 23284-3019

Office

Dept. of Computer Science

Virginia Commonwealth University

401 West Main Street, E2240

Richmond, VA 23284-3019


Robotics Laboratory

Department of Computer Science, College of Engineering

Virginia Commonwealth University

© 2016 Dr. Ye's Lab. All rights reserved.


3D Computer Vision Methods for a Portable Blind Navigational Device

Funded by a grant from the Robust Intelligence Program of the National Science Foundation, the project's objective is to devise computer vision methods that enhance the navigation functions of a white cane for the visually impaired. The computer-vision-enhanced cane is called a smart cane. The enhanced navigation functions include device positioning (i.e., 6-DOF device pose estimation), object recognition, and obstacle detection. Some objects, such as a stairway or a doorway, may be used as waypoints to enter/exit a room or reach the next floor, making a navigation task easier.

The 1st smart cane prototype

Diagram of the smart cane: pose estimation and object recognition

System Overview

Publications

  1. C. Ye, S. Hong, X. Qian, and W. Wu, "A Co-robot Cane for the Visually Impaired," to appear in IEEE Systems, Man, and Cybernetics Magazine, 2015.
  2. C. Ye, S. Hong, and A. Tamjidi, "6-DOF Pose Estimation of a Robotic Navigation Aid by Tracking Visual and Geometric Features," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 4, pp. 1169-1180, 2015.
  3. X. Qian and C. Ye, "NCC-RANSAC: A Fast Plane Extraction Method for 3D Range Data Segmentation," IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2771-2783, 2014.
  4. S. Hong and C. Ye, "A Pose Graph based Visual SLAM Algorithm for Robot Pose Estimation," in Proceedings of 2014 World Automation Congress, Big Island, HI.
  5. C. Ye, S. Hong and X. Qian, "A Co-Robotic Cane for Blind Navigation," in Proceedings of 2014 IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA.
  6. X. Qian and C. Ye, "3D Object Recognition by Geometric Context and Gaussian-Mixture-Model-Based Plane Classification," in Proceedings of 2014 IEEE International Conference on Robotics and Automation, Hong Kong, China.
  7. S. Hong and C. Ye, "A Fast Egomotion Estimation Method based on Visual Feature Tracking and Iterative Closest Point," in Proceedings of 2014 IEEE International Conference on Networking, Sensing and Control, Miami, FL. (received Best Student Paper Finalist Award)
  8. A. Tamjidi, C. Ye, and S. Hong, "An Extended Kalman Filter for Pose Estimation of a Portable Navigation Aid for the Visually Impaired," in Proceedings of the IEEE International Symposium on Robotic and Sensors Environments, 2013, pp. 178-183. (received Best Student Paper Award)
  9. X. Qian and C. Ye, "NCC-RANSAC: A Fast Plane Extraction Method for Navigating a Smart Cane for the Visually Impaired," Proceedings of the IEEE International Conference on Automation Science and Engineering, 2013, pp. 267-273.
  10. C. Ye, "Data processing in current 3D robotic perception systems," chapter in Contemporary Issues in System Science and Engineering, edited by IEEE/Wiley.
  11. S. Hong and C. Ye, "Performance evaluation of a pose estimation method based on the Swissranger SR4000," Proceedings of IEEE International Conference on Mechatronics and Automation, 2012, pp. 499-504.
  12. G. M. Hegde and C. Ye, "A recursive planar feature extraction method for 3D range data segmentation," in Proceedings of 2011 IEEE International Conference on Systems, Man, and Cybernetics, Anchorage, AK.
  13. C. Ye, "Navigating a Portable Robotic Device by a 3D Imaging Sensor," Proceedings of IEEE Sensors Conference, Big Island, HI, 2010, pp. 1005-1010.
  14. G. M. Hegde, C. Ye and G. T. Anderson, "Robust planar feature extraction from SwissRanger SR-3000 range images by an extended normalized cuts method," Proceedings of IEEE International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 2010, pp. 1190-1195.
  15. C. Ye and G. M. Hegde, "Robust Edge Extraction of Swissranger SR-3000 Range Images by Singular Value Decomposition Filtering and Hough Transform," International Journal of Intelligent Control and Systems, vol. 15, no. 4, pp. 41-48, 2010.

VR-Odometry for Pose Change Measurement; Extended Kalman Filter for Pose Estimation

Environment for experiment

Position error

Absolute error in Y (vertical) axis

Plane Extraction for Object Recognition

The VR-Odometry (VRO) method uses the 3D data points of the SIFT feature correspondences between two image frames to compute the pose change. VRO-based dead reckoning may be used to determine the device pose over time. We developed an Extended Kalman Filter (EKF) that tracks the SIFT features over time to reduce the VRO dead-reckoning error. Experiments were performed to test the performance of the proposed EKF estimation method, and the results were compared with those of the VRO dead-reckoning method. The results demonstrate that the EKF method has a much smaller position error. Taking the above experiment as an example, the total length of the path is 44 meters. The final position errors (as a percentage of path length) of the VRO dead reckoning and the EKF are 4.96% and 1.26%, respectively. The EKF also has a much smaller error along the Y axis at each step.
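The core of the pose-change step above is estimating a rigid transform from two sets of matched 3D points. The sketch below uses the classical SVD-based least-squares solution (Arun et al.); it is a generic illustration of that step, not the authors' VRO implementation, which additionally handles SIFT matching and outlier rejection. The function name and interface are illustrative.

```python
import numpy as np

def pose_change(p, q):
    """Estimate the rigid transform (R, t) with q_i ~= R @ p_i + t from two
    sets of matched 3D points, using the SVD method of Arun et al.
    Illustrative sketch only: the actual VRO pairs points via SIFT
    correspondences and rejects outliers, which is omitted here."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    # Center both point sets on their centroids.
    mp, mq = p.mean(axis=0), q.mean(axis=0)
    # The SVD of the cross-covariance matrix yields the optimal rotation.
    H = (p - mp).T @ (q - mq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection (det(R) = -1) in degenerate cases.
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    # With R fixed, the translation aligns the centroids.
    t = mq - R @ mp
    return R, t
```

Chaining such frame-to-frame transforms gives the dead-reckoned pose, whose accumulated drift the EKF is designed to reduce.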

We proposed the NCC-RANSAC method, which segments a scene (a 3D point cloud) into planar surfaces for object/scene recognition.
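For context, the baseline that NCC-RANSAC improves on is plain RANSAC plane fitting. The sketch below shows that generic baseline only; it does not include the normal-coherence check and recursive inlier clustering that distinguish NCC-RANSAC (see the publications above). Function names and thresholds are illustrative.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.02, seed=0):
    """Fit a single plane to a 3D point cloud with plain RANSAC.
    Generic baseline sketch, not the NCC-RANSAC method itself."""
    pts = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts), bool)
    for _ in range(n_iter):
        # Hypothesize a plane from 3 randomly sampled points.
        i = rng.choice(len(pts), 3, replace=False)
        a, b, c = pts[i]
        n = np.cross(b - a, c - a)
        nn = np.linalg.norm(n)
        if nn < 1e-12:
            continue  # degenerate (collinear) sample
        n /= nn
        # Score the hypothesis by point-to-plane distance.
        d = np.abs((pts - a) @ n)
        inliers = d < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine with a least-squares plane over the inlier set:
    # the normal is the singular vector of the smallest singular value.
    sel = pts[best_inliers]
    centroid = sel.mean(axis=0)
    normal = np.linalg.svd(sel - centroid)[2][-1]
    return normal, centroid, best_inliers
```

A plane extractor of this kind can be run repeatedly, removing each plane's inliers, to decompose a range image into the planar patches used for object recognition.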

Pseudo code for NCC-RANSAC

Comparison of CC-RANSAC and NCC-RANSAC

Intensity image of a lab scene

Segmentation result of NCC-RANSAC