LANE LINES DETECTION ALGORITHM FOR SELF-DRIVING CARS
DOI: https://doi.org/10.65405/.v10i37.682

Keywords: Lane detection, Advanced Driver Assistance Systems (ADAS), Canny Edge Detection, Hough Transform, Computer Vision, Autonomous Vehicles.

Abstract
Many traffic accidents are caused by driver distraction and speeding. To assist drivers in the daily task of operating an automobile, there has been significant research into developing Advanced Driver Assistance Systems (ADAS). These systems observe and react to their surroundings, helping to prevent collisions and to navigate difficult situations. One of the most important capabilities of ADAS is recognizing lanes on a roadway, which is key to lane departure warning, autonomous vehicle navigation, and collision avoidance.
This paper presents a robust lane detection algorithm that utilizes an optimized version of the Canny edge detector and the Hough Transform to accurately identify lane boundaries under a variety of environmental conditions. The proposed methodology consists of three main phases: pre-processing, feature extraction, and lane recognition. In the pre-processing phase, the raw roadway image captured by a front-facing camera is converted to grayscale and smoothed with a Gaussian filter. In the feature extraction phase, the Canny edge detection process, which applies Sobel gradient filters to the smoothed image, isolates the edges associated with the lane markings; a Region of Interest (ROI) is then defined to reduce processing time by eliminating irrelevant parts of the image, such as the sky and roadside objects. In the final phase, the Hough Transform is applied inside the ROI to locate and map the detected lane markings.
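The pipeline described above maps onto a handful of standard computer-vision operations. Below is a minimal sketch in Python using OpenCV, assuming a BGR frame from a front-facing camera; the parameter values (blur kernel size, Canny thresholds, ROI polygon, Hough settings) are illustrative placeholders rather than the paper's tuned values.

import cv2
import numpy as np

def detect_lane_lines(frame):
    # Pre-processing: grayscale conversion followed by Gaussian smoothing.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Feature extraction: Canny edge detection (which internally applies
    # Sobel gradient filters to the smoothed image). Thresholds are assumed.
    edges = cv2.Canny(blurred, 50, 150)

    # Region of Interest: keep a trapezoid covering the road ahead and
    # discard the sky and roadside objects. The vertices are assumptions.
    h, w = edges.shape
    roi = np.array([[(0, h), (w // 2 - 50, int(0.6 * h)),
                     (w // 2 + 50, int(0.6 * h)), (w, h)]], dtype=np.int32)
    mask = np.zeros_like(edges)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Lane recognition: probabilistic Hough Transform inside the ROI.
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)

    # Overlay the detected segments on a copy of the original frame.
    overlay = frame.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(overlay, (x1, y1), (x2, y2), (0, 255, 0), 5)
    return overlay

In a full system, the detected segments are typically separated by slope into left and right candidates and averaged into one line per lane boundary; that aggregation step is omitted here for brevity.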
License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.








