Seminars in 2008
2008-12-20
    Traditionally, Markov random field (MRF) models have been used in low-level image analysis. This correspondence presents an MRF-based scheme to perform object delineation. The proposed edge-based approach involves extracting straight lines from the edge map of an image. Then, an MRF model is used to group these lines to delineate buildings in aerial images.
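    As a rough illustration of the first step only (straight-line extraction from an edge map), here is a minimal Python/OpenCV sketch; the MRF grouping stage is not shown, and the Canny/Hough parameters are my own assumptions, not values from the paper.
        import cv2
        import numpy as np

        def extract_line_segments(gray):
            """Extract straight line segments from the edge map of a grayscale aerial image."""
            edges = cv2.Canny(gray, 50, 150)                         # edge map
            # Probabilistic Hough transform returns segments as (x1, y1, x2, y2).
            segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                       threshold=60, minLineLength=30, maxLineGap=5)
            return [] if segments is None else [tuple(s[0]) for s in segments]

        # Usage with a hypothetical file name:
        # gray = cv2.imread("aerial.png", cv2.IMREAD_GRAYSCALE)
        # lines = extract_line_segments(gray)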
2008-12-06
    I'll present the paper "Road detection and classification in urban environments using conditional random field models" this weekend. It was published in the Intelligent Transportation Systems Conference, pp. 963-967, 2006. The authors are Jyun-Fan Tsai, Shih-Shinh Huang, Yi-Ming Chan, Chan-Yu Huang, Li-Chen Fu, and Pei-Yung Hsiao. In the paper, the authors describe analyzing the road scene structure by classifying pixels into three types: road surface, lane markings, and non-road objects. They integrate several ad hoc methods under the conditional random field framework. If you need more information, please check the attached paper. Sincerely, Kim, Dae-Nyeon
2008-11-15
    Dear Colleagues, On this weekend I will present a paper titled "A smart access control using an efficient license plate location and recognition approach", published in Expert Systems with Applications 34 (2008) 256-265. ABSTRACT: Nowadays license plate recognition has become a key technique for many automated systems such as road traffic monitoring, automated payment of tolls on highways or bridges, security access, and parking lot access control. Most of the previous license plate locating (LPL) approaches are not robust in the case of low-quality images. Some difficulties result from illumination variance, noise, and complex, dirty backgrounds. This paper presents a real-time and robust method for license plate location and recognition. Edge features of the car image are very important, and edge density and background color can be used to successfully detect a number plate location according to the characteristics of the number plate. The proposed algorithm can efficiently determine and adjust the plate rotation in skewed images. LP quantization and equalization are applied as an important step for successful decryption of the LP, finding the optimal adaptive threshold corresponding to the intensity image obtained after adjusting the image intensity values. An efficient character segmentation algorithm is used to segment the characters in the binary license plate image. An optical character recognition (OCR) engine is then proposed; it includes digit dilation, contour adjustment and resizing, and each digit is resized to standard dimensions according to a neural network dataset. The back-propagation neural network (BPNN) is selected as a powerful tool to perform the recognition process. Experiments have been conducted to corroborate the efficiency of the proposed method, and the results show excellent performance even for low-quality images or images exhibiting illumination effects and noise. If you need more info, please contact me by mail. Sincerely
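    To make the edge-density idea concrete, here is a minimal Python/OpenCV sketch (assuming OpenCV 4) that marks plate-like candidate regions from vertical edge density, thresholding and morphology; the window sizes, thresholds and aspect-ratio limits are assumptions for illustration, not the paper's values, and the colour, deskewing and OCR stages are omitted.
        import cv2
        import numpy as np

        def plate_candidates(gray):
            """Return bounding boxes of license-plate candidates from edge density."""
            # Plate regions are rich in vertical edges (dense character strokes).
            sobel_x = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
            density = cv2.blur(sobel_x, (25, 5))                     # local edge density
            density = cv2.normalize(density, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            _, mask = cv2.threshold(density, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # Close the gaps between characters so a plate forms one blob.
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            boxes = []
            for c in contours:
                x, y, w, h = cv2.boundingRect(c)
                if h > 0 and 2.0 < w / h < 6.0:                      # plate-like aspect ratio
                    boxes.append((x, y, w, h))
            return boxes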
2008-11-22
    I'll present a paper named "Adaptive Background Estimation Based on Robust Statistics" this weekend. This paper was published in Systems and Computers in Japan, Vol. 38, No. 7, pp. 98-108, 2007; it is a translation of the paper in Denshi Joho Tsushin Gakkai Ronbunshi, Vol. J86-D-II, No. 6, pp. 796-806, 2003. The authors are Hiroyuki Shimai, Takio Kurita, Shinji Umeyama, Masaru Tanaka, and Taketoshi Mishima. In the paper, the authors describe how to estimate an adaptive and robust background using robust estimation methods such as an advanced M-estimation, robust template matching, and adjustment of the adaptation rate. It is a fairly old paper, so the experimental results look dated and the processing time is long; the key point, however, is robust estimation and decision making based on it. If you have any questions about this paper, please check the attached paper and discuss it with me. I welcome discussion any time, with anybody, anywhere.
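    As a small illustration of the robust-estimation idea (not the authors' exact estimator), the sketch below updates a running-average background with an M-estimator-style weight, so pixels that deviate strongly from the background pull it only weakly; the weight function and its scale are my assumptions.
        import numpy as np

        def update_background(bg, frame, alpha=0.02, c=30.0):
            """Robustly weighted running-average background update (illustrative sketch)."""
            frame = frame.astype(np.float32)
            r = frame - bg                                  # residual per pixel
            w = 1.0 / (1.0 + (r / c) ** 2)                  # Cauchy-style robust weight in (0, 1]
            return bg + alpha * w * r                       # outliers barely move the background

        # bg = first_frame.astype(np.float32)
        # for frame in frames: bg = update_background(bg, frame)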
2008-11-08
    Authors: S. Cvetković, Member, IEEE, J. Klijn, P.H.N. de With, Fellow, IEEE. Source: IEEE Transactions on Consumer Electronics, Vol. 54, No. 2, May 2008. Abstract: For real-time imaging with digital video cameras and high-quality display on TV systems, good tonal rendition of video is important to ensure high visual comfort for the user. Besides local contrast improvements, High Dynamic Range (HDR) scenes require adaptive gradation correction (a tone-mapping function), which should enable good visualization of details at lower brightness. We discuss how to construct and control improved tone-mapping functions that enhance visibility of image details in the dark regions while not excessively compressing the image in the bright parts. The result of this method is a 21-dB expansion of the dynamic range thanks to improved SNR from multiple-exposure techniques. The new algorithm was successfully evaluated in HW and outperforms existing algorithms by 11 dB. The new scheme can be applied to cameras and TV systems to improve their contrast.
2008-11-01
    Abstract. A fast method for the recognition and classification of informational traffic signs is presented in this paper. The aim is to provide an efficient framework which could easily be used in inventory and guidance systems. The process consists of several steps, which include image segmentation, sign detection and reorientation, and finally traffic sign recognition. In a first stage, a static HSI colour segmentation is performed so that possible traffic signs can be easily isolated from the rest of the scene; secondly, shape classification is carried out so as to detect square blobs in the segmented image; next, each object is reoriented through the use of a homography transformation matrix and its potential axial deformation is corrected. Finally, a recursive adaptive segmentation and an SVM-based recognition framework allow us to extract each possible pictogram, icon or symbol and classify the type of the traffic sign via a voting scheme.
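    A minimal sketch of the first and third steps (colour segmentation and reorientation) in Python/OpenCV is given below; it uses HSV as a stand-in for HSI, and the blue-sign threshold range and corner ordering are assumptions rather than the paper's values.
        import cv2
        import numpy as np

        def segment_blue_signs(bgr):
            """Static colour thresholding (HSV used here as a stand-in for HSI)."""
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            return cv2.inRange(hsv, (100, 80, 50), (130, 255, 255))  # assumed blue range

        def reorient_sign(bgr, corners, size=64):
            """Warp a detected quadrilateral sign to a frontal square view via a homography."""
            src = np.float32(corners)                                # 4 corners, clockwise from top-left
            dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
            H = cv2.getPerspectiveTransform(src, dst)
            return cv2.warpPerspective(bgr, H, (size, size))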
2008-10-25
    I would like to introduce my presentation for this week. The title is "Foreground Segmentation in Surveillance Scenes Containing a Door" and the authors are Andrew Miller and Mubarak Shah. As the title suggests, this paper is about background subtraction in a specific situation: it proposes a method for performing accurate background subtraction in scenes containing a door. In that case, all of the pixels in the image depend on the position of the door, so the authors use the joint probability over all of them to estimate the maximum-likelihood position of the door.
2008-10-18
    This paper was presented at IROS 2006. It proposes a method that uses stereo vision to obtain a point cloud which includes the ceiling, and extracts the edges in the ceiling. The authors use a drop-down and rise-up method for detecting walls. Abstract - This paper presents an effective and real-time approach for detecting walls in indoor environments. The approach relies on the fact that what is behind an opaque wall is not visible. Thus, to detect the walls in an indoor environment, a set of hypothetical walls, based on the ceiling edges or ground-level edges, is considered, and their validity is checked using a point cloud generated by a sensor. A certainty factor is calculated for each detected wall and updated continuously based on newly gathered sensory information. Furthermore, the certainty of the walls can be updated using other sources of information for better and more reliable wall detection. The novelty of this approach is its capability to handle environments with texture-less walls in real time. The algorithm has been implemented in simulation and tested in a real environment, and has shown effective, reliable and real-time performance.
2008-10-11
    This paper is about line detection in aerial images with complex scenes. Line detection is important for understanding objects; in particular, man-made objects such as buildings, books, bookshelves and so on have rectangular shapes. The authors use the wavelet transform for line detection.
2008-10-04
    On this weekend, I am going to present a paper entitled as "Colour Texture Segmentation by Region-Boundary Cooperation." This paper was presented in "8th European Conference on Computer Vision, pp. 250-261, 2004" by J. Freixenet, X. Munoz, J. Marti, and X. Llado. This paper described on a colour texture segmentation method by unifier of region and boundary information. They proposed a segmentation method which uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. Colour texture of regions is modelled by the conjunction of nonparametric techniques of kernel density estimation. Sincerely Dae-Nyeon Kim
2008-09-06
    I'll present a paper named "Robust Foreground Detection in Video Using Pixel Layers" on this weekend. This paper was published in IEEE Trans. on PATTERN ANALYSIS and MACHINE INTELLIGENCE, Vol. 30, No.4, April, 2008. There main interest is almost same as mine. They first generate automatically updated background and detect moving object. In their process, they use Sampling-Expectation(SE) algorithm to generate "layer-candidate" and "background-candidate". Then they extract multiple layers from background. Finally they extract moving object using background subtraction. I just simply describe their ideas above. Therefore if you need more infomation about it, PLZ check the attached paper or contact to me.
2008-08-30
    Dear Colleagues, On this weekend, I will be present a paper which tittle is " New Methods for Automatic Reading of Vehicle License Plates ", published in Int. Conf. on SPPRA. ABSTRACT In a former work, we built a VLP (Vehicle License Plates) reader using computer vision. To test other methods, we decided to implement a whole new system from scratch. In this communication, we summarize the methods that we have tested and the results we have obtained. Plate location is based on mathematical morphology and character recognition is implemented using Hausdorff distance. Results are comparable to those obtained with other methods If you need more info. Please contact me by mail. Sincerely
2008-08-23
    Dear friends, This week I'm going to present a paper by Rafal Mantiuk, Karol Myszkowski, and Hans-Peter Seidel entitled "A Perceptual Framework for Contrast Processing of High Dynamic Range Images". Abstract: Image processing often involves an image transformation into a domain that is better correlated with visual perception, such as the wavelet domain, image pyramids, multi-scale contrast representations, contrast in retinex algorithms, and chroma, lightness and colorfulness predictors in color appearance models. Many of these transformations are not ideally suited for image processing that significantly modifies an image. For example, the modification of a single band in a multi-scale model leads to an unrealistic image with severe halo artifacts. Inspired by gradient domain methods, we derive a framework that imposes constraints on the entire set of contrasts in an image for a full range of spatial frequencies. This way, even severe image modifications do not reverse the polarity of contrast. The strengths of the framework are demonstrated by aggressive contrast enhancement and a visually appealing tone mapping which does not introduce artifacts. Additionally, we perceptually linearize contrast magnitudes using a custom transducer function. The transducer function has been derived especially for the purpose of HDR images, based on contrast discrimination measurements for high contrast stimuli.
2008-07-26
    On this weekend, I am going to present a paper entitled as "Book boundary detection and title extraction for automatic bookshelf inspection". This paper was presented in "Frontier compurter vision, 2004" by Eiji Taira, Seiichi Uchida, Hiroaki Sakoe. This paper described the method of book recognition. This use DP(dynamic programming) for line detection instead of Hough transform and finite state diagram. The method of title extraction is projection histogram. If you need more information, please check the attached paper. Sincerely Kang, Suk-Ju
2008-07-12
    On this weekend, I am going to present a paper entitled as "Probabilistic Spatial Context Models for Scene Content Understanding". This paper was presented in "IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 235~241, 2003" by Amit Singhal, Jiebo Luo and Weiyu Zhu. This paper described on a holistic approach to determining scene content, based on a set of individual material detection algorithms, as well as probabilistic spatial context models. Also, the authors have developed a spatial context-aware material detection system that reduces misclassification by constraining the beliefs to conform to the probabilistic spatial context models. If you need more information, please check the attached paper. Sincerely Kim, Dae-Nyeon
2008-06-14
    Dear Colleagues, On this weekend I will present a paper titled "An efficient method of license plate location", published in Pattern Recognition Letters 26 (2005) 2431-2438. The authors present a real-time and robust method of license plate location. Abstract: License plate location is an important stage in vehicle license plate recognition for automated transport systems. The license plate area contains rich edge and texture information. We first extract the vertical edges of the car image using image enhancement and the Sobel operator, then remove most of the background and noise edges with an effective algorithm, and finally search for the plate region with a rectangular window in the residual edge image and segment the plate out from the original car image. Experimental results demonstrate the great robustness and efficiency of our method. If you need more info, please contact me by mail. Sincerely, Kaushik
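    The sketch below is a crude stand-in for the search described in the abstract: it builds a vertical-edge map with the Sobel operator and slides a plate-sized rectangular window over it, keeping the window with the highest edge energy; the window size and stride are assumptions, and the edge-filtering step is omitted.
        import cv2
        import numpy as np

        def best_plate_window(gray, win=(40, 130), stride=8):
            """Return (x, y, w, h) of the plate-sized window with the most vertical-edge energy."""
            edges = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))   # vertical edges
            h, w = edges.shape
            wh, ww = win
            ii = cv2.integral(edges)                                     # integral image, shape (h+1, w+1)
            best, best_box = -1.0, None
            for y in range(0, h - wh, stride):
                for x in range(0, w - ww, stride):
                    s = ii[y + wh, x + ww] - ii[y, x + ww] - ii[y + wh, x] + ii[y, x]
                    if s > best:
                        best, best_box = s, (x, y, ww, wh)
            return best_box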
2008-06-07
    Dear friends, This Saturday I'm going to present a paper by Tam P. Cao, Darrell M. Elton and John Devlin, entitled "Real-time Linear Projection Speed Sign Detection System in Low Light Conditions". It was presented at the International Conference on Computer and Information Science (ICIS 2007). This paper proposes a shape-based detection of speed limit signs (which are circles). I am also going to extend my presentation with ideas from another paper entitled "A Shape Detection Method based on the Radial Symmetry Nature and Direction-discriminated Voting", because its authors used similar ideas to detect not only circular shapes but also triangles and rectangles. Abstract: In this paper, a real-time algorithm for detecting speed signs in low light conditions is presented. This application-oriented algorithm starts by applying the popular Difference of Gaussian (DoG) filter as a pre-processing step. After that, pixels are thresholded and classified into predefined classes. The paper introduces an efficient voting method called linear projection, in which each pixel that belongs to any predefined class continuously votes on an incremental image along its gradient vector to identify the centres of speed sign candidates. The algorithm is fast, reliable, and suitable for real-time hardware (FPGA) implementation. A Matlab simulation model was built to verify the algorithm's operation in nighttime driving conditions. An initial FPGA prototype of the algorithm has been implemented and showed promising results.
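    To give a feel for the DoG pre-processing and the gradient-direction voting (without the class tables or the FPGA mapping), here is a NumPy/OpenCV sketch; the sigmas, the edge threshold and the single fixed voting radius are assumptions for illustration only.
        import cv2
        import numpy as np

        def circle_centre_votes(gray, radius=20):
            """Accumulate votes along gradient directions; peaks indicate circle centres."""
            g = gray.astype(np.float32)
            dog = cv2.GaussianBlur(g, (0, 0), 1.0) - cv2.GaussianBlur(g, (0, 0), 2.0)
            gx = cv2.Sobel(dog, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(dog, cv2.CV_32F, 0, 1, ksize=3)
            mag = np.hypot(gx, gy)
            votes = np.zeros_like(mag)
            ys, xs = np.nonzero(mag > mag.mean() + 2 * mag.std())    # strong-edge pixels only
            for y, x in zip(ys, xs):
                # Step 'radius' pixels along the unit gradient vector and cast a vote there.
                cx = int(round(x + radius * gx[y, x] / mag[y, x]))
                cy = int(round(y + radius * gy[y, x] / mag[y, x]))
                if 0 <= cy < votes.shape[0] and 0 <= cx < votes.shape[1]:
                    votes[cy, cx] += 1
            return votes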
2008-05-31
    I'll present 'Identification of degraded traffic sign symbols by a generative learning method' this Saturday. This paper was presented at the International Conference on Pattern Recognition (ICPR'06). The authors are Hiroyuki Ishida, Tomokazu Takahashi, Ichiro Ide, Yoshito Mekada, and Hiroshi Murase; they belong to the Graduate School of Information Science, Nagoya University, Japan. The paper presents a training method for recognizing traffic sign symbols undergoing image degradations. In order to cope with the degradations, it is desirable to use similarly degraded images as training data, so their method artificially generates such data from an original image in accordance with the actual degradations. If you need more information, please check the paper or contact me. Best wishes
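    The sketch below shows the general flavour of generating degraded training samples from a clean template (blur, loss of resolution, sensor noise); the degradation model and its parameters are my assumptions, whereas the paper calibrates the generation against the actual degradations.
        import cv2
        import numpy as np

        def degrade(template, scale=0.25, blur_sigma=1.5, noise_sigma=8.0, rng=None):
            """Generate one synthetically degraded sample from a clean grayscale template."""
            rng = np.random.default_rng() if rng is None else rng
            img = template.astype(np.float32)
            img = cv2.GaussianBlur(img, (0, 0), blur_sigma)                 # optical blur
            small = cv2.resize(img, None, fx=scale, fy=scale,
                               interpolation=cv2.INTER_AREA)                # resolution loss
            img = cv2.resize(small, (template.shape[1], template.shape[0]),
                             interpolation=cv2.INTER_LINEAR)                # back to template size
            img = img + rng.normal(0.0, noise_sigma, img.shape)             # sensor noise
            return np.clip(img, 0, 255).astype(np.uint8)

        # training_set = [degrade(clean_sign) for _ in range(100)]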
2008-05-24
    On this weekend, I would like to present a paper related to human detection systems using multiple cameras. The title is "Fixed Point Probability Field for Complex Occlusion Handling" and the authors are Francois Fleuret, Richard Lengagne and Pascal Fua. This paper was presented in the Proceedings of the IEEE International Conference on Computer Vision, 2005. The authors propose an effective method of handling occlusion in a multiple-camera context: they use an occupancy probability in the top view together with a generative model. For the experiments, two image sequences were obtained by four cameras in a square room. In the first, 4000-frame sequence, four people enter the room; in the second, 1000-frame sequence, six people move about the room. If you want to know more, I would like to show you more related references. Please feel free to talk to me.
2008-05-17
    This paper was published at DAGM 2002. The author presents methods for reconstruction and navigation, using a monocular camera and the vertical edges and corners in wall regions for 3D reconstruction of indoor scenes. Abstract. A new methodology to realise automatic exploration of an indoor environment using single-view sequences from a camera mounted on an autonomously moving vehicle is presented. The method includes geometric reconstruction and acquisition of texture information using image rectification. The algorithm of wall edge detection and position determination is given as the heart of the methodology. Navigation planning and self-localisation by the moving vehicle are realised, whereas obstacles are neglected. Examples of 3D models and accuracy results are presented and discussed.
2008-05-10
    Abstract: This paper presents an extension to category classification with bag-of-features, which represents an image as an orderless distribution of features. We propose a method to exploit spatial relations between features by utilizing object boundaries provided during supervised training. We boost the weights of features that agree on the position and shape of the object and suppress the weights of background features, hence the name of our method, "spatial weighting". The proposed representation is thus richer and more robust to background clutter. Experimental results show that our approach improves the results of one of the best current image classification techniques. Furthermore, we propose to apply the spatial model to object localization. Initial results are promising. Note: 1) Presented paper: M. Marszalek and C. Schmid, "Spatial Weighting for Bag-of-Features", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006. 2) A key reference of the presented paper: J. Willamowski, D. Arregui, G. Csurka, C. R. Dance, and L. Fan, "Categorizing nine visual classes using local appearance descriptors", IWLAVS, 2004. 3) The presented paper is one of the key references of: J. Zhang, M. Marszalek, S. Lazebnik and C. Schmid, "Local Features and Kernels for Classification of Texture and Object Categories: A Comprehensive Study", International Journal of Computer Vision 73(2), 213-238, 2007.
2008-05-03
    On this weekend, I am going to present a paper entitled as "Segmentation and description of natural outdoor scenes." This paper was presented in "Journal of Image and Vision Computing, vol. 25, pp. 727-740, 2007" by A. Bosch, X. Munoz and J. Freixent. This paper described on a scene description and segmentation system which is capable of recognising natural objects (e.g. sky, trees, grass) under different outdoor conditions. This paper focuses on a probabilistic object classifier for outdoor scene analysis. - First step: solving the problem of scene context generation. - Second step: a stage of general segmentation provides the segmentation of unknown regions. - Finally, the last stage tries to perform a region fusion of known and unknown segmented objects. The result is both a segmentation of the image and recognition of each segment as a given object class or as an unknown segmented object. If you need more information, please check the attached paper. Sincerely Kim, Dae-Nyeon
2008-04-19
    The title of this week's reference paper is "3-D Depth Reconstruction from a Single Still Image", IJCV 2008, vol. 76, no. 1, pp. 53-69. The authors estimate 3-D depth from a single still image and reconstruct the scene; furthermore, they try to avoid obstacles in image sequences using monocular camera cues. Here is their abstract, and if you need more info, just let me know ^^. Abstract: We consider the task of 3-d depth estimation from a single still image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured indoor and outdoor environments which include forests, sidewalks, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the value of the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a hierarchical, multiscale Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models the depths and the relation between depths at different points in the image. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps. We further propose a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.
2008-04-12
    This Saturday, I would like to present a paper again. The title is "Multi-View Face Recognition by Nonlinear Dimensionality Reduction and Generalized Linear Models" and it was presented at FGR'06. The authors are Bisser Raytchev, Ikushi Yoda and Katsuhiko Sakaue of AIST, Japan. This paper proposes a multi-view face recognition system for unconstrained real-world conditions, based on a novel nonlinear dimensionality reduction method, IsoScale, and Generalized Linear Models (GLMs). For their experiments, four stereo cameras were installed in each corner of the room. If you want to know more, I would like to show you more related references and discuss them together. Please feel free to talk to me.
2008-04-05
    On this weekend, I will be present a paper which tittle is " A Fast Algorithm for License Plate Detection " This paper was presented in Vahid Abolghasemi and Alireza Ahmadyfard Visual 2007, The 9th International Conference on Visual Information Systems (ICIEA 2007) , 28-29 June 2007, Shanghai, China ABSTRACT In this paper we propose a method for detection of the car license plates in 2D gray images. In this method we first estimate the density of vertical edges in the image. The regions with high density vertical edges are good candidates for license plates. In order to filter out clutter regions possessing similar feature in the edge density image, we design a match filter which models the license plate pattern. By applying the proposed filter on the edge density image followed by a thresholding procedure, the locations of license plate candidates are detected. We finally extract the boundary of license plate(s) using the morphological operations. The result of experiments on car images (taken under different imaging conditions especially complex scenes) confirms the ability of the method for license plate detection. As the complexity of the proposed algorithm is low, it is considerably fast. If you need more info. Please contact me by mail or leave message on our web-board. Sincerely Kaushik Deb
2008-03-29
    Dear friends, This week I'm going to present a paper by Mark A. Robertson, Sean Borman, and Robert L. Stevenson entitled "Dynamic Range Improvement Through Multiple Exposures". It is not a recent paper; however, it contains the basic ideas of HDR imaging. Abstract: This paper presents an approach for improving the effective dynamic range of cameras by using multiple photographs of the same scene taken with different exposure times. Using this method enables the photographer to accurately capture scenes that contain a high dynamic range, i.e., scenes that have both very bright and very dark regions. The approach requires an initial calibration, where the camera response function is determined. Once the response function for a camera is known, high dynamic range images can be computed easily. The high dynamic range output image consists of a weighted average of the multiply-exposed input images, and thus contains information captured by each of the input images. From a computational standpoint, the proposed algorithm is very efficient, and requires little processing time to determine a solution.
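    As a compact illustration of the weighted-average fusion (with a plain power-law response standing in for the calibrated camera response, and a simple hat-shaped weighting function, both my assumptions), here is a NumPy sketch:
        import numpy as np

        def fuse_exposures(images, exposure_times, gamma=2.2):
            """Fuse differently exposed 8-bit images into one HDR radiance map."""
            num = np.zeros(images[0].shape, np.float64)
            den = np.zeros_like(num)
            for img, t in zip(images, exposure_times):
                z = img.astype(np.float64) / 255.0
                weight = 1.0 - np.abs(2.0 * z - 1.0)        # hat function: trust mid-range pixels most
                radiance = (z ** gamma) / t                 # assumed inverse response, scaled by exposure
                num += weight * radiance
                den += weight
            return num / np.maximum(den, 1e-6)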
2008-03-22
    I'll present "A Novel Feature Extraction Technique for the Recognition of Segmented Handwritten Characters" on this saturday. This paper was presented in "International Conference on Document Analysis and Recognition ICDAR 2003". The authors are Michael Blumenstein, Brijesh Verma, Hasan Basli. They belong the institute for integrated and intelligent systems in Griffith University, Australia. This paper describes describes neural network-based techniques for segmented character ecognition that may be applied to the segmentation and recognition components of an off-line handwritten word recognition system. They use CEDAR database that is Center of Excellence for Document Analysis and Recognition. The database is that researchers at CEDAR have created databases to aid in their research. If you need more information, please check the paper or contact to me. Best wishes
2008-03-15
    This Saturday, I would like to present a paper related to my interest in face recognition systems. The title is "Multi-View Face Recognition by Nonlinear Dimensionality Reduction and Generalized Linear Models" and the authors are Bisser Raytchev, Ikushi Yoda and Katsuhiko Sakaue of AIST, Japan. This paper was presented in the Proceedings of the 7th International Conf. on Automatic Face and Gesture Recognition, 2006. The authors propose a multi-view face recognition system for unconstrained real-world conditions, based on a novel nonlinear dimensionality reduction method, IsoScale, and Generalized Linear Models (GLMs). In their implementation, four stereo cameras were installed in each corner of the room. If you want to know more, I would like to show you more related references. Please feel free to talk to me.
2008-03-08
    This paper was published at the Conference on Articulated Motion and Deformable Objects (AMDO) 2006. It introduces a method for rectifying perspective images. The authors describe four steps for mapping: first, acquiring the stereo images; second, computing the disparity; third, geometrical rectification; and finally, mapping. Abstract: In this paper we present a method for mapping 3D unknown environments from stereo images. It is based on a dense disparity image obtained by a process of window correlation. To each image in the sequence a geometrical rectification process is applied, which is essential to remove the conical perspective of the images obtained with a photographic camera. This process corrects the errors in coordinates x and y to obtain a better matching for the map information. The mapping method is an application of the geometrical rectification and the 3D reconstruction, whose main purpose is to obtain a realistic appearance of the scene.
2008-03-01
    Dear Professor and ISLab members! I am sorry for the late announcement. This Saturday, I am going to present the following paper: K. Lee, Y. Kim, S.I. Cho and K. Choi, "Building Recognition in Augmented Reality Based Navigation System," in LNCS of the Int'l Conf. on Multi-Media Modeling (MMM) 2007, Vol. 4352, June 2007, pp. 544-551. 1) Goal: building detection in video captured from an in-vehicle camera. 2) Method: a) edge segments in small blocks are used to remove background objects such as trees, and the remaining region is considered a building-region candidate; b) an edge-tracing technique then determines the building area. 3) Results: the average detection rate is 88.9% for the general case, and 95.9% for the case without trees and signs.
2008-02-23
    This week I'm going to present a paper by Kevin Murphy, Antonio Torralba and William T. Freeman that was presented at NIPS 2004. The paper, entitled "Using the Forest to See the Trees: A Graphical Model Relating Features, Objects, and Scenes", shows how to combine global and local image features to solve the tasks of object detection and scene recognition. - Abstract - Standard approaches to object detection focus on local patches of the image, and try to classify them as background or not. We propose to use the scene context (the image as a whole) as an extra source of (global) information, to help resolve local ambiguities. We present a conditional random field for jointly solving the tasks of object detection and scene classification. Dae-Nyeon Kim
2008-02-16
    For this weekend's reference seminar, I'll present "Adaptive Background Estimation: Computing a Pixel-Wise Learning Rate from Local Confidence and Global Correlation Values". It was published in IEICE Trans. Inf. & Syst., 2004. Unfortunately, I couldn't find the original PDF file; I'll attach it as soon as I get it. Abstract --- Adaptive background techniques are useful for a wide spectrum of applications, ranging from security surveillance and traffic monitoring to medical and space imaging. With a properly estimated background, moving or new objects can be easily detected and tracked. Existing techniques are not suitable for real-world implementation, either because they are slow or because they do not perform well in the presence of frequent outliers or camera motion. We address the issue by computing a learning rate for each pixel, as a function of a local confidence value that estimates whether a pixel is an outlier or not, and a global correlation value that detects camera motion. After discussing the role of each parameter, we report experimental results showing that our technique is fast but efficient, even in a real-world situation. Furthermore, we show that the same method applies equally well to a 3-camera stereoscopic system for depth perception.
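    The sketch below mimics the idea of a pixel-wise learning rate driven by a local confidence value and a global correlation value; the exponential confidence model, the correlation threshold and the boost factor are stand-ins I chose for illustration, not the formulas from the paper.
        import numpy as np

        def update(bg, frame, base_alpha=0.05):
            """Background update with a per-pixel learning rate (illustrative stand-in)."""
            f = frame.astype(np.float32)
            conf = np.exp(-np.abs(f - bg) / 20.0)           # confidence the pixel is not an outlier
            g = np.corrcoef(f.ravel(), bg.ravel())[0, 1]    # global frame/background correlation
            boost = 1.0 if g > 0.9 else 5.0                 # adapt faster when camera motion is suspected
            alpha = np.clip(base_alpha * boost * conf, 0.0, 1.0)
            return (1.0 - alpha) * bg + alpha * f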
2008-02-09
    On this weekend, I will be present a paper which tittle is " A Rapid Locating Method of Vehicle License Plate Based on Characteristics of Characters�Connection and Projection " This paper was presented in Cheng Zhang, Guangmin Sun, Deming Chen, Tianxue Zhao The 2nd IEEE Conference on Industrial Electronics and Applications (ICIEA 2007) , 23-25 May 2007, Harbin, China ABSTRACT License plate location is the key component of automatic License Plate Recognition (LPR) system. In this paper a hybrid license plate location method based on characteristics of characters connection and projection is presented. This method can efficiently cope with the images of complex background. The experimental results show that the algorithm has high accuracy and robustness. If you need more info. Please contact me by mail or leave message on our web-board. Sincerely Kaushik Deb
2008-02-02
    Dear friends, This week I'm going to present a paper by Alexey Lukin which was presented at Graphicon 2007. The paper, entitled "Tips & Tricks: Fast Image Filtering Algorithms", shows some ideas for optimizing such time-consuming operations as Gaussian filtering and median filtering. It can be useful for you if you deal with real-time image processing. Abstract: This paper highlights some fast algorithms for image filtering, specifically box and Gaussian smoothing, Hann filtering, median filtering, and morphological operations. It is shown that some of these algorithms can be implemented with computational cost independent of the filter radius.
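    The key trick behind radius-independent box (and, by repeated application, approximately Gaussian) smoothing is the running-sum window; the 1-D sketch below shows it, and applying it along rows and then columns gives the 2-D box filter. This is a generic illustration of the technique, not code from the paper.
        import numpy as np

        def box_filter_1d(x, radius):
            """1-D mean filter whose per-sample cost is O(1), independent of the radius."""
            x = np.asarray(x, dtype=np.float64)
            n, w = len(x), 2 * radius + 1
            padded = np.pad(x, radius, mode="edge")
            out = np.empty(n)
            s = padded[:w].sum()                            # sum of the first window
            out[0] = s / w
            for i in range(1, n):
                s += padded[i + w - 1] - padded[i - 1]      # one add, one subtract per step
                out[i] = s / w
            return out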
2008-01-19
    I'll present "Objects Recognition by Means of Projective Invariants Considering Corner-Points" on this saturday. This paper was presented in "International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2002". The authors are M. A. Vicente, P. Gil, O. Reinoso, F. Torres in University Miguel Hernandez(Spain). This paper presents an object recognition technique based on projective geometry for industrial pieces that satisfy geometric properties. First at all, they consider some methods of corner detection which are useful for the extraction of interest points in digital images. They present a method that allows to reduce the points extracted by different corner detection techniques. Secondly, groups of points are used to build projective invariants which allow them to distinguish one object from another. If you need more information, please check the paper or contact to me. Best wishes
2008-01-12
    I will present a reference paper about face recognition this weekend. The title is "Use of depth and colour eigenfaces for face recognition" and the authors are F. Tsalakanidou, D. Tzovaras, and M.G. Strintzis. The paper was published in Pattern Recognition Letters, 2003. They propose a face recognition technique using depth and colour information. The main objective of the paper is to evaluate three different approaches (colour, depth, and the combination of colour and depth). The proposed method is based on Principal Component Analysis (PCA) and the extraction of depth and colour eigenfaces. If you need more information, ask me.
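    A minimal NumPy sketch of the eigenface computation is given below; the same procedure can be run separately on colour and depth images and the resulting projections compared or combined, in the spirit of the paper's three approaches. The data layout and the number of components are assumptions.
        import numpy as np

        def eigenfaces(images, k=20):
            """Top-k eigenfaces from a stack of aligned faces of shape (n, height, width)."""
            n = images.shape[0]
            X = images.reshape(n, -1).astype(np.float64)
            mean = X.mean(axis=0)
            # SVD of the mean-centred data; rows of Vt are the principal components (eigenfaces).
            _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
            return mean, Vt[:k]

        def describe(face, mean, basis):
            """Project a single face image onto the eigenface basis."""
            return basis @ (face.reshape(-1).astype(np.float64) - mean)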