Seminars in 2010
    A driver support system should provide assistance and security to the driver. For navigation tasks it is necessary to determine the position of the ego vehicle relative to the road. One of the principal approaches is to detect road boundaries and lanes using a vision system in the vehicle. Within the European research project "Secure Propulsion using Advanced Redundant Control (SPARC)", different approaches to lane detection are developed to meet the needs of real traffic situations. The vision module presented here is based on several image filters that provide diverse information about the environment. A set of hypotheses about the state of the system is generated by a probabilistic particle filter. Assuming a predefined model of the road, the particles are tested against the image filters to infer the most likely vehicle position. Emphasis was placed on extracting relevant information from the scene and on efficient testing. In particular, a new testing module based on the Canny edge filter and the Hough transform increased the accuracy and robustness of the estimation. Performance of the vision module was tested under various real-road conditions.
    Attached files: A lane detection vision module for driver assistance.pdf
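    As a quick illustration of the particle-filter estimation loop the abstract describes, here is a minimal sketch on a toy one-dimensional lane-offset state. The state model, noise levels, and measurement form are illustrative assumptions, not the SPARC module's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_noise=0.05, meas_noise=0.1):
    """One predict-weight-resample cycle for a 1D lane-offset state."""
    # Predict: diffuse the hypotheses with motion noise.
    particles = particles + rng.normal(0.0, motion_noise, size=particles.shape)
    # Weight: likelihood of the measurement under each hypothesis.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy run: true lateral offset 0.3 m, noisy measurements.
particles = rng.uniform(-1.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
for _ in range(20):
    z = 0.3 + rng.normal(0.0, 0.1)
    particles, weights = particle_filter_step(particles, weights, z)
estimate = particles.mean()
```

    In the actual module, the weighting step would test each lane hypothesis against the image filters rather than a scalar measurement.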
    Various types of sensors are available to implement distance measurement for mobile robots. Scene recognition and path planning point to the use of optical imaging systems and machine vision approaches. For middle-size robots, such as those used in the robotic football league, reduced weight and volume are mandatory, and a single camera fixed on the robot is the usual choice. 3D localization of objects with such a simple system is impossible unless some knowledge of the environment and/or objects is available. Localization in 3D space needs three coordinates. The common central projection used in linear imaging produces a 2D image, from which only two coordinates can be extracted. In a central projection system, any point on a straight line through the lens optical centre has the same image pixel representation. The distance from the object to the optical centre is the unknown coordinate to be obtained by processing other information. In the case of mobile robots, the movement is usually on a plane surface, meaning that height and camera orientation remain constant. If the object is also at a fixed height, its possible positions define a horizontal plane, and its particular location can be obtained by intersecting this plane with the straight line defined by the corresponding image point on the sensor and the lens optical centre. This work presents both a calculation method and a calibration procedure for this setup. This paper concerns the use of linear optical systems, where angles to the optical axis are maintained for both image and object sides. For optical systems with radial distortions, such as barrel and pincushion types, a one-dimensional function relating object- and image-side angles can be used to compensate the non-linearity and allow this approach to be used. Also in vision systems where mirrors are combined with normal lenses to allow 360° viewing, similar image-object angular relationships can be used to compute target positions.
    Attached files: 04078011_Angle_Invariance_for_Distance_Measurements_Using_a_Single_Camera.pdf
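    The plane-ray intersection described above is short enough to sketch directly; the camera pose and ray below are made-up numbers, not the paper's calibration.

```python
import numpy as np

def locate_on_plane(pixel_ray, camera_pos, object_height):
    """Intersect the viewing ray with the horizontal plane z = object_height.

    pixel_ray: unit direction from the lens optical centre through the image point.
    camera_pos: optical centre position (x, y, z) in world coordinates.
    """
    t = (object_height - camera_pos[2]) / pixel_ray[2]
    return camera_pos + t * pixel_ray

# Camera 1 m above the ground, looking slightly downward; object lies on the floor (z = 0).
cam = np.array([0.0, 0.0, 1.0])
ray = np.array([0.0, 2.0, -1.0])
ray = ray / np.linalg.norm(ray)
point = locate_on_plane(ray, cam, 0.0)
# The object sits 2 m in front of the camera: point = (0, 2, 0).
```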
    I would like to invite you to my reference seminar this Saturday. This is my seminar paper abstract: On the basis of vehicle license plate location, an image grey vertical projection segmentation approach based on distribution-character segmentation is proposed in this paper. A two-stage approach consisting of coarse and accurate segmentation is adopted. It increases the accuracy of the segmentation and has good segmentation speed. In the recognition stage, character features are extracted from the segmentation results, and an improved template matching method is used to identify each character exactly. Experimental results show that the character segmentation method is efficient and quick and the recognition algorithm is applicable. I hope everybody will put forward valuable opinions.
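    The vertical-projection idea in the abstract can be sketched in a few lines: sum the foreground pixels per column of a binarized plate image and split at the blank columns. The toy image and the minimum-width parameter are illustrative assumptions.

```python
import numpy as np

def segment_characters(binary_plate, min_width=2):
    """Split a binarized plate image into character regions using the
    vertical projection (per-column count of foreground pixels)."""
    projection = binary_plate.sum(axis=0)
    in_char, start, segments = False, 0, []
    for col, count in enumerate(projection):
        if count > 0 and not in_char:
            in_char, start = True, col
        elif count == 0 and in_char:
            in_char = False
            if col - start >= min_width:
                segments.append((start, col))
    if in_char and len(projection) - start >= min_width:
        segments.append((start, len(projection)))
    return segments

# Toy plate: three "characters" separated by blank columns.
plate = np.zeros((5, 12), dtype=int)
plate[:, 1:3] = 1
plate[:, 5:7] = 1
plate[:, 9:11] = 1
segs = segment_characters(plate)   # [(1, 3), (5, 7), (9, 11)]
```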
    Abstract-This paper proposes a method which combines the Sobel edge detection operator and soft-threshold wavelet de-noising to perform edge detection on images that contain white Gaussian noise. In recent years, many edge detection methods have been proposed. The commonly used methods, which combine mean de-noising with the Sobel operator or median filtering with the Sobel operator, cannot remove salt-and-pepper noise very well. In this paper, we first use soft-threshold wavelet de-noising to remove the noise, then use the Sobel edge detection operator to perform edge detection on the image. This method is mainly intended for images containing white Gaussian noise. The experimental results show very clearly that, compared with traditional edge detection methods, the method proposed in this paper has a more pronounced edge detection effect.
    Attached files: An Improved Sobel Edge Detection.pdf
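    The two building blocks the abstract names, Sobel gradients and soft thresholding, are easy to sketch. This is a minimal numpy version (not the paper's wavelet pipeline); the soft threshold here is applied to raw coefficients rather than a full wavelet decomposition.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(image, kernel):
    """Valid-mode 2D correlation, which is how Sobel kernels are applied."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def sobel_magnitude(image):
    gx = filter2d(image, SOBEL_X)
    gy = filter2d(image, SOBEL_Y)
    return np.hypot(gx, gy)

# Vertical step edge: the gradient magnitude peaks along the step.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)

# Soft thresholding shrinks small (noise-like) coefficients to zero.
shrunk = soft_threshold(np.array([3.0, -0.5]), 1.0)
```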
    A new recognition method for vehicle license plates based on a neural network is presented in this paper. Because the Back Propagation (BP) neural network often becomes trapped in local minima during training, a Genetic Neural Network (GNN), GABP, was constructed by combining the Genetic Algorithm (GA) with a BP neural network. The training of the GABP neural network is carried out in two steps. The GA is first used to search the global space thoroughly for the weights and thresholds of the neural network, which ensures they fall into the neighborhood of the global optimal solution. Then, to improve the convergence precision, the gradient method is used to finely train the network and find the global optimum or a second-best solution with good performance. In addition, feature extraction is also important for improving the recognition rate of the network, so both structural features and statistical features are used in this paper, including the mesh feature, the direction line element feature, and Zernike moment features. Experimental results show that the proposed method saves network training time and achieves a high recognition rate.
    Attached files: A New Recognition Method of Vehicle License Plate Based on Genetic Neural Network.pdf
    Abstract—In this paper, a new algorithm for vehicle logo recognition on the basis of an enhanced scale-invariant feature transform (SIFT)-based feature-matching scheme is proposed. This algorithm is assessed on a set of 1200 logo images that belong to ten distinctive vehicle manufacturers. A series of experiments are conducted, splitting the 1200 images into a training set and a testing set. It is shown that the enhanced matching approach proposed in this paper boosts the recognition accuracy compared with the standard SIFT-based feature-matching method. The reported results indicate a high recognition rate on vehicle logos and a fast processing time, making the method suitable for real-time applications.
    Attached files: Vehicle Logo Recognition Using a SIFT-Based.pdf
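    The core of standard SIFT feature matching is nearest-neighbour search with Lowe's ratio test, which the paper's scheme builds on. Here is a minimal sketch on synthetic descriptors (the descriptors and ratio value are made up for illustration; real SIFT descriptors are 128-dimensional).

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptors from set A to set B, keeping a match only when the
    nearest neighbour is clearly closer than the second nearest (ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Synthetic 4-D descriptors: A[0] matches B[1] unambiguously; A[1] is
# ambiguous (B[2] and B[3] are equally close), so the ratio test rejects it.
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 5.0, 5.0],
              [1.0, 0.05, 0.0, 0.0],
              [0.0, 1.0, 0.1, 0.0],
              [0.0, 1.0, 0.0, 0.1]])
matches = ratio_test_matches(A, B)   # [(0, 1)]
```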
    This paper proposes a novel method for detecting nonconforming trajectories of objects as they pass through a scene. Existing methods mostly use spatial features to solve this problem. Using only spatial information is not adequate; we need to take into consideration velocity and curvature information of a trajectory along with the spatial information for an elegant solution. Our method has the ability to distinguish between objects traversing spatially dissimilar paths, or objects traversing spatially proximal paths but having different spatio-temporal characteristics. The method consists of a path building training phase and a testing phase. During the training phase, we use graph-cuts for clustering the trajectories, where the Hausdorff distance metric is used to calculate the edge weights. Each cluster represents a path. An envelope boundary and an average trajectory are computed for each path. During the testing phase we use three features for trajectory matching in a hierarchical fashion. The first feature measures the spatial similarity while the second feature compares the velocity characteristics of trajectories. Finally, the curvature features capture discontinuities in velocity, acceleration, and position of the trajectory. We use real-world pedestrian sequences to demonstrate the practicality of our method.
    Attached files: Multi Feature Path Modeling for Video Surveillance.pdf Multi Feature Path Modeling for Video Surveillance.ppt
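    The Hausdorff distance used for the edge weights in the clustering step above is compact enough to sketch; the trajectories below are synthetic examples.

```python
import numpy as np

def hausdorff(traj_a, traj_b):
    """Symmetric Hausdorff distance between two trajectories (N x 2 and M x 2 point sets)."""
    d = np.linalg.norm(traj_a[:, None, :] - traj_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two parallel straight paths 1 unit apart, and one diverging path.
t = np.linspace(0.0, 10.0, 50)
path1 = np.stack([t, np.zeros_like(t)], axis=1)
path2 = np.stack([t, np.ones_like(t)], axis=1)
path3 = np.stack([t, t], axis=1)

h12 = hausdorff(path1, path2)   # distance between the parallel paths: 1.0
h13 = hausdorff(path1, path3)   # the diverging path is much farther
```

    As the abstract argues, this spatial measure alone cannot separate paths with the same shape but different velocity profiles, which is why the method adds velocity and curvature features.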
    This paper presents a road modeling strategy for video-based driver assistance systems. It is based on real-time estimation of the vanishing point in sequences captured with forward-looking cameras located near the rear-view mirror of a vehicle. The vanishing point is used for many purposes in video-based driver assistance systems, such as computing linear models of the road, extracting calibration parameters of the camera, stabilizing sequences, etc. This work is based on the use of an adaptive steerable filter bank which enhances lane markings according to their expected orientations.
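    Once lane markings have been enhanced and fitted as lines, the vanishing point can be estimated as their least-squares intersection. A minimal sketch (the line representation and the test geometry are illustrative assumptions, not the paper's filter-bank pipeline):

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of lines given as rows (a, b, c) with
    a*x + b*y + c = 0, where (a, b) is a unit normal."""
    A = lines[:, :2]
    b = -lines[:, 2]
    vp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return vp

# Three lane-marking lines that all pass through (2, 3) in image coordinates.
lines = []
for angle in (0.2, 0.8, 1.4):
    n = np.array([np.cos(angle), np.sin(angle)])   # unit normal
    c = -n @ np.array([2.0, 3.0])                  # force the line through (2, 3)
    lines.append([n[0], n[1], c])
vp = vanishing_point(np.array(lines))   # recovers (2.0, 3.0)
```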
    Existing tone reproduction schemes are generally based on a single image and are therefore unable to accurately recover the local details and colors of a scene, owing to the limited available information. Accordingly, the proposed tone reproduction system utilizes two images with different exposures (one low and one high) to capture the local detail and color information of the low- and high-luminance regions of the scene, respectively. An adaptive local region is derived for each pixel in order to appropriately reveal the details and maintain the overall impression of the scene. Our system implements local tone mapping and color mapping based on this adaptive local region, taking the low-exposure image as the basis and referencing the information of the high-exposure image. The local tone mapping compresses the luminance range of the image and enhances the local contrast to reveal the details, while the local color mapping maps the precise color information from the high-exposure image to the low-exposure image. Finally, a fusion process is proposed to mix the local tone mapping and local color mapping results into the output image. A multi-resolution approach is also developed to reduce the time cost. The experimental results confirm that the system generates realistic reproductions of HDR scenes.
    Attached files: Photography Enhancement Based on the Fusion of.pdf
    Overtaking is a complex and hazardous driving maneuver for intelligent vehicles. When to initiate overtaking and how to complete overtaking are critical issues for an overtaking intelligent vehicle. We propose an overtaking control method based on the estimation of the conflict probability. This method uses the conflict probability as the safety indicator and completes overtaking by tracking a safe conflict probability. The conflict probability is estimated by the future relative position of intelligent vehicles, and the future relative position is estimated by using the dynamics models of the intelligent vehicles. The proposed method uses model predictive control to track a desired safe conflict probability and synthesizes decision making and control of the overtaking maneuver. The effectiveness of this method has been validated in different experimental configurations, and the effects of some parameters in this control method have also been investigated.
    Attached files: Conflict-Probability-Estimation-Based Overtaking for Intelligent Vehicles.pdf
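    The conflict probability the method tracks can be illustrated with a simple Monte Carlo estimate: sample the predicted relative position under its uncertainty and count the samples that fall inside a safety radius. The Gaussian uncertainty model, the numbers, and the circular conflict zone are illustrative assumptions, not the paper's vehicle dynamics model.

```python
import numpy as np

rng = np.random.default_rng(1)

def conflict_probability(rel_pos_mean, rel_pos_cov, safety_radius, n_samples=20000):
    """Estimate the probability that the predicted relative position of two
    vehicles falls inside the safety radius, by Monte Carlo sampling of the
    (assumed Gaussian) prediction uncertainty."""
    samples = rng.multivariate_normal(rel_pos_mean, rel_pos_cov, size=n_samples)
    return np.mean(np.linalg.norm(samples, axis=1) < safety_radius)

# Predicted gap of 10 m with 2 m std dev per axis and a 5 m safety radius:
# conflict is unlikely.
p_far = conflict_probability([10.0, 0.0], np.diag([4.0, 4.0]), 5.0)
# Predicted gap of only 1 m with the same uncertainty: conflict almost certain.
p_near = conflict_probability([1.0, 0.0], np.diag([4.0, 4.0]), 5.0)
```

    A model predictive controller would then choose inputs that keep this probability at or below a desired safe level over the prediction horizon.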
    Pedestrians are the most vulnerable participants in urban traffic. The first step toward protecting pedestrians is to reliably detect them. We present a new approach for standing- and walking-pedestrian detection, in urban traffic conditions, using grayscale stereo cameras mounted on board a vehicle. Our system uses pattern matching and motion for pedestrian detection. Both 2-D image intensity information and 3-D dense stereo information are used for classification. The 3-D data are used for effective pedestrian hypothesis generation, scale and depth estimation, and 2-D model selection. The scaled models are matched against the selected hypotheses using high-performance matching based on the Chamfer distance. Kalman filtering is used to track detected pedestrians. A subsequent validation, based on the motion field's variance and the periodicity of tracked walking pedestrians, is used to eliminate false positives.
    Attached files: StereoBasedPedestrianDetection.pdf
    Dear Professor and lab members, This weekend (May 15, 2010) I am going to present a paper titled "High accuracy navigation using laser ranger sensors in outdoor applications". This paper proposes simultaneous localization and map building using dead reckoning sensors and laser range and bearing information. The abstract of this paper is as follows: This paper presents the design of a high accuracy outdoor navigation system based on standard dead reckoning sensors and laser range and bearing information. The data validation problem is addressed using laser intensity information. Beacon design aspects and the location of landmarks are also discussed in relation to the desired accuracy and required area of operation. The results are important for Simultaneous Localization and Map building applications, since feature extraction and validation are resolved at the sensor level using laser intensity. This facilitates the use of additional natural landmarks to improve the accuracy of the localization algorithm. Experimental results in outdoor environments are also presented. I am looking forward to seeing you! Best regards, Nguyen Van Thuan
    Attached files: Seminar2010-05-15.pdf
    This paper presents a new and robust method for extracting and matching visual vertical features between images taken by an omnidirectional camera. Matching robustness is achieved by creating a descriptor which is unique and distinctive for each feature. Furthermore, the proposed descriptor is invariant to rotation. The robustness of the approach is validated through real experiments with a wheeled robot equipped with an omnidirectional camera. We show that vertical lines are very well extracted and tracked during the robot motion. At the end, we also present an application of our algorithm to robot simultaneous localization and mapping in an unknown environment.
    Attached files: Robust Feature Extraction and Matching for Omnidirectional Images,_FSR2007_scaramuzza.pdf
    Dear professor and colleagues, I would like to announce that I will present a reference paper this Saturday (April 24, 2010). This paper discusses how to choose a subset of features from a pool of many potential variables, a common problem in pattern classification as well as pattern recognition. The abstract of this paper is as follows. Sequential forward selection (SFS) and sequential backward elimination (SBE) are two commonly used search methods in feature subset selection. In the present study, we derive orthogonal forward selection (OFS) and orthogonal backward elimination (OBE) algorithms for feature subset selection by incorporating Gram–Schmidt and Givens orthogonal transforms into forward selection and backward elimination procedures, respectively. The basic idea of the orthogonal feature subset selection algorithms is to find an orthogonal space in which to express features and to perform feature subset selection. After selection, the physically meaningless features in the orthogonal space are linked back to the same number of input variables in the original measurement space. The strength of employing orthogonal transforms is that features are decorrelated in the orthogonal space, hence individual features can be evaluated and selected independently. The effectiveness of our algorithms in dealing with real-world problems is finally demonstrated. See you all there! Best regards, Le My Ha IS-Lab
    Attached files: Orthogonal Forward Selection and Backward Elimination.pdf
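    A minimal sketch of the Gram–Schmidt flavour of orthogonal forward selection: at each step, pick the candidate feature whose component orthogonal to the already-selected features correlates most strongly with the target, then deflate the remaining features. The scoring rule and synthetic data are illustrative simplifications of the paper's algorithm.

```python
import numpy as np

def orthogonal_forward_selection(X, y, k):
    """Greedy forward selection in an orthogonal space (Gram-Schmidt OFS)."""
    selected = []
    residual_X = X.astype(float).copy()
    for _ in range(k):
        scores = []
        for j in range(X.shape[1]):
            if j in selected:
                scores.append(-np.inf)
                continue
            v = residual_X[:, j]
            norm = np.linalg.norm(v)
            # Correlation of the orthogonalized feature with the target.
            scores.append(abs(v @ y) / norm if norm > 1e-12 else -np.inf)
        best = int(np.argmax(scores))
        selected.append(best)
        q = residual_X[:, best] / np.linalg.norm(residual_X[:, best])
        # Deflate all remaining features against the new basis vector.
        residual_X -= np.outer(q, q @ residual_X)
    return selected

# y depends on features 0 and 2; feature 1 is a near-copy of feature 0,
# so after the first pick its orthogonal residual carries no information.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=200)
y = 2.0 * X[:, 0] + 1.0 * X[:, 2]
picked = orthogonal_forward_selection(X, y, 2)
```

    The decorrelation step is what lets each feature be scored independently, as the abstract emphasizes.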
    The paper presents a solution for lane estimation in difficult scenarios based on the particle filtering framework. The solution employs a novel technique for pitch detection based on the fusion of two stereovision-based cues, a novel method for particle measurement and weighting using multiple lane-delimiting cues extracted by grayscale and stereo data processing, and a novel method for deciding upon the validity of the lane estimation results.
    High Dynamic Range (HDR) imaging is a future trend for digital imaging. With the excessive spatial resolution of today's digital cameras, plenty of room remains in the dynamic domain to enhance image quality. To produce an HDR image using conventional devices, multiple captures with different exposure settings are performed and combined. However, multiple-exposure systems require a static scene. In this paper, a Spatial Varying Exposure (SVE) system is proposed. By altering the exposure settings in the spatial domain, it is possible to capture an HDR image of an instantaneous scene by trading off spatial resolution. Moreover, a specific demosaicking algorithm is designed to conceal the color pixels assigned to different exposure fields.
    Attached files: SVE-2.ppt High Dynamic Range image capturing by Spatial Varying Exposed.pdf
    Dear Colleagues, This weekend I am going to present a paper titled "Application of Freeman Chain Codes: An Alternative Recognition Technique for Malaysian Car Plates", published in the International Journal of Computer Science and Network Security (IJCSNS), Vol. 9, No. 11, Nov. 2009. Summary: Various car plate recognition systems have been developed using various methods and techniques by researchers all over the world. The applications developed were only suitable for a specific country, owing to the standard specification endorsed by that country's transport department. The Road Transport Department of Malaysia has also endorsed a specification for car plates, including the font and size of characters, that must be followed by car owners. However, there are cases where this specification is not followed. Several applications have been developed in Malaysia to overcome this problem; however, there are still problems in achieving 100% recognition accuracy. This paper focuses on an experiment using the chain code technique to perform recognition of the different types of fonts used in Malaysian car plates. Kaushik
    Attached files: 20091132.pdf
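    Freeman chain codes encode a contour as a sequence of direction numbers between successive boundary pixels. A minimal sketch, assuming the common 8-direction numbering with 0 = east and counter-clockwise ordering (conventions vary between papers):

```python
# 8-direction Freeman codes in (row, col) deltas; rows grow downward,
# so direction 2 (north) is (-1, 0).
DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(boundary):
    """Encode a sequence of adjacent boundary pixels (row, col) as Freeman codes."""
    codes = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:]):
        codes.append(DIRECTIONS.index((r1 - r0, c1 - c0)))
    return codes

# Boundary of a 2 x 2 pixel square traced clockwise from its top-left pixel.
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
codes = chain_code(square)   # [0, 6, 4, 2]
```

    A recognizer then compares such code sequences (or statistics derived from them) against stored templates for each character font.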
    This paper presents a system to segment and track multiple body parts of interacting humans in the presence of mutual occlusion and shadow. The color image sequence is processed at three levels: pixel level, blob level, and object level. A Gaussian mixture model is used at the pixel level to train and classify individual pixel colors. A Markov Random Field (MRF) framework is used at the blob level to merge the pixels into coherent blobs and to register inter-blob relations. A coarse model of the human body is applied at the object level as empirical domain knowledge to resolve ambiguity due to occlusion and to recover from intermittent tracking failures. A two-fold tracking scheme is used which consists of blob-to-blob matching in consecutive frames and blob-to-body-part association within a frame. The tracking scheme resembles a multi-target, multi-assignment framework. The result is a tracking system that simultaneously segments and tracks multiple body parts of interacting people. Example sequences illustrate the success of the proposed paradigm.
    Attached files: Segmentation and Tracking of Interacting Human Body Parts under Occlusion and Shadowing.pdf
    This paper presents a new design of an augmented extended Kalman filter (AEKF) for real-time simulation of mobile robots. A Simulink® model is developed for simultaneous localization and odometry calibration of mobile robots in a real-time manner. Starting from the encoder readings, and assuming an absolute measurement is available, the AEKF provides the local reconstruction of the mobile robot's position and orientation with on-line odometry calibration. The simulation results verify the effectiveness of the proposed method and suggest it as a promising way for real-time implementations of augmented Kalman filters.
    Attached files: seminar.pdf
    Feature-based stereo matching is an effective way to perform 3D building reconstruction. However, in urban scenes, the cluttered background and varied building structures may interfere with the performance of building reconstruction. In this paper, we propose a novel method to robustly reconstruct buildings on the basis of rectangle regions. Firstly, we propose a multi-scale linear feature detector to obtain the salient line segments on the object contours. Secondly, candidate rectangle regions are extracted from the salient line segments based on their local information. Thirdly, stereo matching is performed with the list of matching line segments, which are boundary edges of the corresponding rectangles from the left and right images. Experimental results demonstrate that the proposed method achieves better accuracy in the reconstructed result than pixel-level stereo matching.
    Attached files: Rectangle Region Based Stereo Matching for.pdf