Seminars in 2009
2009-12-26
    I would like to introduce my presentation for this week. Title: Segmentation and Tracking of Interacting Human Body Parts under Occlusion and Shadowing. Authors: Sangho Park and J.K. Aggarwal. Abstract: This paper presents a system to segment and track multiple body parts of interacting humans in the presence of mutual occlusion and shadow. The color image sequence is processed at three levels: pixel level, blob level, and object level. A Gaussian mixture model is used at the pixel level to train and classify individual pixel colors. A Markov Random Field (MRF) framework is used at the blob level to merge the pixels into coherent blobs and to register inter-blob relations. A coarse model of the human body is applied at the object level as empirical domain knowledge to resolve ambiguity due to occlusion and to recover from intermittent tracking failures. A two-fold tracking scheme is used, which consists of blob-to-blob matching in consecutive frames and blob-to-body-part association within a frame. The tracking scheme resembles a multi-target, multi-assignment framework. The result is a tracking system that simultaneously segments and tracks multiple body parts of interacting people. Example sequences illustrate the success of the proposed paradigm. Byung-Seok Woo
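    As a rough illustration of the pixel-level stage only, the sketch below trains one Gaussian mixture per class on labeled pixel colors and classifies new pixels by likelihood. The class names, number of components, and the sklearn-based pipeline are my assumptions, not the authors' implementation.

        # Minimal sketch of pixel-level color classification with Gaussian mixtures.
        # Assumes labeled training pixels per class (e.g. "skin", "shirt", "background").
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def train_pixel_models(training_pixels, n_components=3):
            """training_pixels: dict mapping class name -> (N, 3) array of RGB samples."""
            models = {}
            for label, pixels in training_pixels.items():
                gmm = GaussianMixture(n_components=n_components, covariance_type='full')
                gmm.fit(pixels)
                models[label] = gmm
            return models

        def classify_pixels(image, models):
            """image: (H, W, 3) array. Returns an (H, W) array of class labels."""
            h, w, _ = image.shape
            flat = image.reshape(-1, 3).astype(np.float64)
            labels = list(models.keys())
            # Log-likelihood of every pixel under every class model.
            scores = np.stack([models[l].score_samples(flat) for l in labels], axis=1)
            return np.array(labels)[np.argmax(scores, axis=1)].reshape(h, w)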
2009-12-19
    Dear Professor, dear Islab members, this weekend I am going to present a paper by Denis Wolf and Gaurav S. Sukhatme, in the proceedings of the Intl. Conf. on Robotics and Automation (ICRA), New Orleans, Louisiana, April 2004. Abstract: We propose an on-line algorithm for simultaneous localization and mapping of dynamic environments. Our algorithm is capable of differentiating static and dynamic parts of the environment and representing them appropriately on the map. Our approach is based on maintaining two occupancy grids. One grid models the static parts of the environment, and the other models the dynamic parts of the environment. The union of the two provides a complete description of the environment over time. We also maintain a third map containing information about static landmarks detected in the environment. These landmarks provide the robot with localization. Results in simulation and with physical robots show the efficiency of our approach and show how the differentiation of dynamic and static entities in the environment and SLAM can be mutually beneficial. Nguyen Van Thuan, IS-LAB
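    A toy sketch of the two-grid idea: cells whose occupancy is persistent end up in the static map, cells that are only occasionally occupied end up in the dynamic map. The hit/visit counting scheme and thresholds are my own simplifications, not the authors' update rules.

        # Toy sketch: maintain separate static and dynamic occupancy grids.
        import numpy as np

        class DualOccupancyGrid:
            def __init__(self, shape, static_thresh=0.8, dynamic_thresh=0.2):
                self.hits = np.zeros(shape)     # times a cell was seen occupied
                self.visits = np.zeros(shape)   # times a cell was observed at all
                self.static_thresh = static_thresh
                self.dynamic_thresh = dynamic_thresh

            def update(self, observed_mask, occupied_mask):
                """observed_mask: boolean cells covered by the current scan;
                occupied_mask: boolean cells the scan reports as occupied."""
                self.visits[observed_mask] += 1
                self.hits[observed_mask & occupied_mask] += 1

            def _ratio(self):
                return np.divide(self.hits, self.visits,
                                 out=np.zeros_like(self.hits), where=self.visits > 0)

            def static_map(self):
                return self._ratio() > self.static_thresh        # persistently occupied

            def dynamic_map(self):
                r = self._ratio()
                return (r > 0) & (r < self.dynamic_thresh)        # occasionally occupied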
2009-12-12
    Abstract: Stereo matching is fundamental to applications such as 3-D visual communications and measurements. There are several different approaches towards this objective, including feature-based methods, block-based methods, and pixel-based methods. Most approaches use regularization to obtain reliable fields. Generally speaking, when smoothing is applied to the estimated depth field, it results in a bias towards surfaces that are parallel to the image plane. This is called the fronto-parallel bias. Recent pixel-based approaches claim that no disparity smoothing is necessary. In their approach, occlusions and objects are explicitly modeled. However, these models interfere with each other in the case of slanted objects and result in a fragmented disparity field. In this paper we propose a disparity estimation algorithm with explicit modeling of object orientation and occlusion. The algorithm incorporates adjustable resolution and accuracy. Smoothing can be applied without introducing the fronto-parallel bias. The experiments show that the algorithm is very promising.
2009-08-08
    This Saturday, I am going to give a presentation about door detection. This paper was presented at the 2nd From Sensors to Human Spatial Concepts Workshop, held jointly with the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, USA, pages 41-48, 2007. The abstract to be presented in this seminar is below. Abstract: An important component of human-robot interaction is the capability to associate semantic concepts with encountered locations and objects. This functionality is essential for visually guided navigation as well as place and object recognition. In this paper we focus on the problem of door detection using visual information only. Doors are frequently encountered in structured man-made environments and function as transitions between different places. We adopt a probabilistic approach to the problem, using model-based Bayes inference to detect the door. Different from previous approaches, the proposed model captures both the shape and appearance of the door. This is learned from a few training examples, exploiting additional assumptions about the structure of indoor environments. After the learning stage, we describe a hypothesis generation process and several approaches to evaluate the probability of each generated hypothesis. The new proposal is tested on numerous examples of indoor environments, showing good performance as long as enough features are encountered.
2009-08-01
    Dear Professor, dear Islab members, this weekend I am going to present a paper: Jingang Huang, Bin Kong, Bichun Li, Fei Zheng, "A New Method of Unstructured Road Detection Based on HSV Color Space and Road Features", Proceedings of the 2007 International Conference on Information Acquisition, pp. 596-601, July 9-11, 2007, Jeju City, Korea. Abstract: A novel unstructured road detection approach based on HSV color space and road features is proposed. The method mainly uses the Hue component as the estimation standard, since it is insensitive to shadows and water areas. Sometimes the Hue component of the road may be unstable, so we combine the Saturation and Value components. This arrangement ensures the robustness of the approach. We do not deal with all the frame pixels one by one; instead, we only select some of them, so the processing speed of the method is guaranteed. Finally, credible edge points are chosen and then fitted by either a straight line or a curve. The experiments carried out show that the method detects the lane area effectively and is robust against noise, shadows, and illumination variations. MY HA, IS-LAB
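    A minimal OpenCV sketch of the hue-first idea: threshold around a reference road hue sampled near the bottom of the frame, with a saturation/value fallback when hue is unstable. The sampling region, tolerances, and fallback rule are assumptions for illustration, not the paper's exact method.

        # Rough sketch of hue-based road segmentation with an S/V fallback.
        import cv2
        import numpy as np

        def detect_road_hsv(bgr_frame, hue_tol=10, sat_max=80, val_min=60):
            hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
            h, w = hsv.shape[:2]
            # Sample a small patch in front of the vehicle as the road reference.
            patch = hsv[int(0.85 * h):, int(0.4 * w):int(0.6 * w)]
            ref_hue = np.median(patch[:, :, 0])
            hue = hsv[:, :, 0].astype(np.int16)
            hue_mask = np.abs(hue - ref_hue) < hue_tol      # ignores hue wrap-around
            # Fallback cue when hue is unstable: low saturation, reasonable brightness.
            sv_mask = (hsv[:, :, 1] < sat_max) & (hsv[:, :, 2] > val_min)
            return ((hue_mask | sv_mask) * 255).astype(np.uint8)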
2009-07-25
    Dear Professor, dear Islab members, this weekend I am going to present a paper: C. J. Tsai and A. K. Katsaggelos, "Dense Disparity Estimation with a Divide-and-Conquer Disparity Space Image Technique", IEEE Transactions on Multimedia, Vol. 1, No. 1, March 1999. Abstract: A new divide-and-conquer technique for disparity estimation is proposed in this paper. This technique performs feature matching following the high-confidence-first principle, starting with the strongest feature point in the stereo pair of scanlines. Once the first matching pair is established, the ordering constraint in disparity estimation allows the original intra-scanline matching problem to be divided into two smaller subproblems. Each subproblem can then be solved recursively until there is no reliable feature point within the subintervals. This technique is very efficient for dense disparity map estimation for stereo images with rich features. For general scenes, this technique can be paired up with the disparity-space image (DSI) technique to compute dense disparity maps with integrated occlusion detection. In this approach, the divide-and-conquer part of the algorithm handles the matching of stronger features, and the DSI-based technique handles the matching of pixels in between feature points and the detection of occlusions. An extension to the standard disparity-space technique is also presented to complement the divide-and-conquer algorithm. Experiments demonstrate the effectiveness of the proposed divide-and-conquer DSI algorithm. Best regards, Hoang-Hon Trinh
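    The recursive structure (highest-confidence match first, then split the interval under the ordering constraint) can be sketched as below; the feature "strength" and the matching cost are stand-ins, not the paper's actual definitions.

        # Sketch of divide-and-conquer matching on one scanline pair.
        # left_feats / right_feats: lists of (x_position, strength), sorted by x.
        def match_interval(left_feats, right_feats, matches, min_strength=10.0):
            if not left_feats or not right_feats:
                return
            # Pick the strongest left feature in this interval (high confidence first).
            li, (lx, ls) = max(enumerate(left_feats), key=lambda e: e[1][1])
            if ls < min_strength:
                return
            # Match it to the most similar right feature (placeholder cost: strength gap).
            ri, (rx, rs) = min(enumerate(right_feats), key=lambda e: abs(e[1][1] - ls))
            matches.append((lx, rx))
            # Ordering constraint: recurse independently on the two sub-intervals.
            match_interval(left_feats[:li], right_feats[:ri], matches, min_strength)
            match_interval(left_feats[li + 1:], right_feats[ri + 1:], matches, min_strength)

        matches = []
        match_interval([(12, 40.0), (55, 25.0), (90, 33.0)],
                       [(10, 41.0), (50, 24.0), (87, 30.0)], matches)
        print(matches)   # matched (left_x, right_x) pairs; disparity = left_x - right_x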
2009-07-18
    This weekend, I am going to present a paper entitled "An Omnidirectional Vision System for Outdoor Mobile Robots". This paper was presented at the Intl. Conf. on Simulation, Modeling and Programming for Autonomous Robots, Venice (Italy), pp. 273-284, 2008, by Wen Lik Dennis Lui and Ray A. Jarvis. Abstract: The advancements made in microprocessor and image sensor technology have made inexpensive, fast and robust computers and high-resolution cameras widely available. This opens up many new possibilities, as researchers can now take advantage of the rich visual information of the environment provided by the vision system. However, conventional cameras have a limited field of view, which is a constraint for certain applications in computational vision. As a result, the use of omnidirectional vision systems has become more prevalent within the robotics community in recent years. In this paper, a novel variable multibaseline stereo omnidirectional vision system and its algorithms intended for outdoor navigation will be presented. If you need more information, please check the attached paper. Sincerely, Dae-Nyeon Kim
2009-07-11
    This paper gives a broad survey of how to overlay images that were taken under different conditions but contain the same scene. I hope it will be a good basis for our research. If you are curious about the contents of this paper, please read it first and we can discuss. Paper info: authors: Barbara Zitová, Jan Flusser; journal: Image and Vision Computing, Vol. 21, No. 11, pp. 977-1000, Oct. 2003. Abstract: This paper aims to present a review of recent as well as classic image registration methods. Image registration is the process of overlaying images (two or more) of the same scene taken at different times, from different viewpoints, and/or by different sensors. The registration geometrically aligns two images (the reference and sensed images). The reviewed approaches are classified according to their nature (area-based and feature-based) and according to the four basic steps of the image registration procedure: feature detection, feature matching, mapping function design, and image transformation and resampling. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of image registration and an outlook for future research are discussed as well. The major goal of the paper is to provide a comprehensive reference source for researchers involved in image registration, regardless of particular application areas.
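    The four steps named in the survey (feature detection, feature matching, mapping function design, transformation and resampling) map directly onto a standard OpenCV pipeline; the ORB + RANSAC homography combination below is just one concrete instance among the many methods the survey covers, not the survey's recommendation.

        # One concrete instance of the feature-based registration pipeline.
        import cv2
        import numpy as np

        def register(sensed, reference):
            """Warp the grayscale `sensed` image into the frame of `reference`."""
            orb = cv2.ORB_create(2000)                               # 1. feature detection
            k1, d1 = orb.detectAndCompute(sensed, None)
            k2, d2 = orb.detectAndCompute(reference, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)  # 2. matching
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)     # 3. mapping function
            h, w = reference.shape[:2]
            return cv2.warpPerspective(sensed, H, (w, h))            # 4. transform + resample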
2009-06-20
    I would like to introduce my presentation for this week. The title of the paper is "A unifying theory for central panoramic systems and practical implications". The paper was published at the European Conference on Computer Vision (ECCV) 2000. The authors are Christopher Geyer and Kostas Daniilidis at the University of Pennsylvania. This paper proposes a unifying model for central catadioptric (omnidirectional) vision systems. If you have any questions, please feel free to talk to me. -- Abstract: In this paper, we provide a unifying theory for all central catadioptric systems. We show that all of them are isomorphic to projective mappings from the sphere to a plane with a projection center on the perpendicular to the plane. Subcases are the stereographic projection, equivalent to parabolic projection, and the central planar projection, equivalent to every conventional camera. We define a duality among projections of points and lines as well as among different mappings. This unification is novel and has a significant impact on the 3D interpretation of images. We present new invariances inherent in parabolic projections and a unifying calibration scheme from one view. We describe the implied advantages of catadioptric systems and explain why images arising in central catadioptric systems contain more information than images from conventional cameras.
2009-06-13
    Abstract—A method based on genetic algorithms (GA) for fusing multiple images of a static scene into an image with maximum information content is introduced. It partitions the image domain into uniform blocks and for each block selects the image that contains the most information within that block. The selected images are then blended together using rational Gaussian blending functions that are centered at the blocks. In this paper, we employ GA for optimizing both the block size and width of the blending functions. We also examine the effectiveness of our scheme by checking the fitness function in GA, which includes both factors related to information and human vision. Keywords—Multiexposure, Image Fusion, GA, Rational Gaussian Blend.
2009-06-06
    I would like to introduce my presentation for this week. The title of the paper is "Inter-Camera Color Calibration by Correlation Model Function", and it was published at the International Conference on Image Processing (ICIP) 2003. The author is Fatih Porikli of MERL (Mitsubishi Electric Research Laboratories). This paper proposes a color calibration method for solving the correspondence problem between different cameras with non-overlapping views. If you need more information, please feel free to talk to me.
2009-05-16
    Dear Prof. and colleagues, I'll present a paper named "Korean Manual Alphabet Recognition Based on Template Matching" on this weekend. Sign language is complex visual-spatial language most used in deaf society, and is a representative example of hand gesture with linguistic structure. Korean manual alphabet is a manual alphabet that augments the vowel and consonant of Korean sign language. This paper presents a system which recognizes the Korean manual alphabet(KMA) using a USB camera and translates into a normal Korean character. The system captures images from a camera and extracts skin color regions from an image, and finds a hand region. The system detects a hand without particular cloth in a complex background. We use 31 KMA hand shape as template image, and the system compares hand shape with template image using a Euclidean distance and Correlation coefficient method.
2009-05-09
    Dear Prof. and colleagues, I'll present a paper named "Road Boundary Detection in Complex Urban Environment based on Low-Resolution Vision " this weekend. In this paper, we proposed a real-time road boundary detection method in complex urban road environment. The detction difficulty lies in road wear, both exitence of marked and unmarked boundary and low-resolution vision. The idea of the algorithm is to extract the road surface firstly using improved region growung method based on edge enhancement. The road boundary is then estimated by fitting the edge of the extracted road surface. A Bezier splines algorithm with optimization of control point is proposed to estimate the road boundary. The algorithm is implemented on the video collected in BEIJING urban streets and achieves good performance. Author: Qunghua Wen, Zehong Yang, Yixu Song, Peifa Jia 11th Joint Conference on Information Sciences (2008)
2009-05-02
    Dear Prof. and colleagues, I'll present a paper named "Vision-Based Road Detection by Adaptive Region Segmentation and Edge Constraint " this weekend. In this paper, A novel vision-based road detection method was proposed which is combination OTSU algorithm and Canny edge detection. Firstly they use adaptive threshold algorithm OTSU to detect road region against its background and then road boundary can be recognized by Canny detection. This method was robust against strong shadows, surface dilapidation and illumination variations. Author: Yanqing Wang, Deyun Chen, Chaoxia Shi Second International Symposium on Intelligent Information Technology Application-2008
2009-04-11
    I'll present a paper named "Real-Time Gesture REcognition Using the Shape Information of MHI" this weekend. In the paper, the authors decribe how to recognize human gestures using simple idea of MHI(Motion History Image). MHI is generated by static camera and calculating intensity gradient from consecutive images. After all, they extract shape information using Shape context which is one of the feature descriptor presented by Serge Belongie & Jitendra Malik. Extracted shape context will be devided by SVM and organized to the Motion Data Base. In the image recongnition progress, they use images which includes single/multiple persons' gestures.
2009-04-04
    Dear Colleagues, this weekend I will present a paper titled "An intelligent strategy for checking the annual inspection status of motorcycles based on license plate recognition", published in Expert Systems with Applications 36 (2009) 9260-9267. Abstract: License plate recognition techniques have been successfully applied to the management of stolen cars, management of parking lots and traffic flow control. This study proposes a license-plate-based strategy for checking the annual inspection status of motorcycles from images taken along the roadside and at designated inspection stations. Both a UMPC (Ultra Mobile Personal Computer) with a web camera and a desktop PC are used as hardware platforms. The license plate locations in images are identified by means of integrated horizontal and vertical projections that are scanned using a search window. Moreover, a character recovery method is exploited to enhance the success rate. Character recognition is achieved using both a back-propagation artificial neural network and feature matching. The identified license plate can then be compared with entries in a database to check the inspection status of the motorcycle. Experiments yield recognition rates of 95.7% and 93.9% based on roadside and inspection station test images, respectively. It takes less than 1 s on a UMPC (Celeron 900 MHz with 256 MB memory) and about 293 ms on a PC (Intel Pentium 4, 3.0 GHz with 1 GB memory) to correctly recognize a license plate. Challenges associated with recognizing license plates from roadside and designated inspection station images are also discussed. If you need more info, please contact me by mail. Sincerely, Kaushik Deb
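    The projection-based localization can be sketched roughly as below: emphasize vertical edges, then scan the row sums with a window for the band with the densest edge activity, and use the column sums inside that band to bound the plate. Window size and thresholds are assumptions, and the character recovery and recognition stages are omitted.

        # Rough sketch of locating a plate band via horizontal/vertical projections.
        import cv2
        import numpy as np

        def locate_plate_band(bgr, band_height=40):
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            # Vertical edges respond strongly on plate characters.
            edges = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3))
            _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            row_proj = binary.sum(axis=1)                  # horizontal projection
            # Slide a window over the rows and keep the densest band.
            scores = [row_proj[y:y + band_height].sum()
                      for y in range(len(row_proj) - band_height)]
            y0 = int(np.argmax(scores))
            band = binary[y0:y0 + band_height]
            col_proj = band.sum(axis=0)                    # vertical projection in the band
            xs = np.where(col_proj > 0.3 * col_proj.max())[0]
            return y0, y0 + band_height, int(xs.min()), int(xs.max())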
2009-03-21
    I'll present "Rectangular Traffic Sign Recognition" on next Saturday. This paper was presented in "International Conference on Image Analysis and Processing (ICIAP2005)". The authors are Roberto Ballerini, Luigi Cinque, Luca Lombardi, and Roberto Marmo. They describe the automatic detection and classification of rectangular road sign. Abstract In this research the problem of the automatic detection and classification of rectangular road sign has been faced. The first step concerns the robust identification of the rectangular sign, through the search of gray level discontinuity on the image and Hough transform. Due to variety of rectangular road signs, we first recognize the guide sign and then we consider advertising the other rectangular signs. The classification is based on analysis of surface color and arrows direction of the sign. We have faced different problems, primarily: shape alterations of the sign owed to the perspective, shades, different light conditions, occlusions. The obtained results show the feasibility of the system.
2009-03-14
    This weekend, I will present a paper about correspondence problems in multiple camera systems, so I will briefly introduce a summary of the paper. The title is "Global Color Model Based Object Tracking in the Multi-Camera Environment", and it was published in Proc. of the 2006 IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems (IROS), 2006. The authors are Kazuyuki Morioka, Xuchu Mao and Hideki Hashimoto. This paper is part of the Intelligent Space work of the Hashimoto Lab., University of Tokyo. In this paper, they address two problems: (1) corresponding objects from frame to frame over time in an image sequence and (2) corresponding objects among different cameras. To solve these problems, a global color model built from the color histogram of each object is generated in advance. Furthermore, they use eigenspace theory to reduce the memory size for an effective representation. -- Abstract: The research field of intelligent environments, which consist of many distributed sensors and robots, has been expanding. Intelligent environments require the ability to extract and track multiple objects seamlessly in order to provide appropriate services to users in multi-camera environments. Some tracking systems prepare the color models of the objects in advance. It is difficult to adapt these models to the tracking of objects that change their color appearance according to their pose. However, model-based tracking is efficient for tracking in crowded environments, and the object color models have to be valid for seamless tracking in multi-camera environments. We propose a color-histogram-based object model configuration. This model, termed the global model, is efficient for object tracking and matching. The eigenspace of color histograms, configured from the color appearance of the object in several poses, is utilized for the configuration of the global model. Color histogram models needed for tracking objects that change their appearance can be rebuilt from the global model. In the first experiments, object correspondence among the different cameras was achieved empirically, and it was shown that the proposed object model is effective for seamless tracking by distributed cameras.
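    The eigenspace part can be sketched as a PCA over color histograms collected from several poses of each object; the histogram binning, number of components, and reconstruction-error matching score below are my assumptions, not the paper's configuration.

        # Sketch: build a compact "global color model" as the PCA eigenspace of
        # color histograms gathered from several poses of an object.
        import cv2
        import numpy as np
        from sklearn.decomposition import PCA

        def color_histogram(bgr_patch, bins=8):
            hist = cv2.calcHist([bgr_patch], [0, 1, 2], None,
                                [bins, bins, bins], [0, 256, 0, 256, 0, 256])
            hist = hist.flatten()
            return hist / (hist.sum() + 1e-9)

        def build_global_model(pose_patches, n_components=5):
            """pose_patches: list of BGR patches of the same object in different poses."""
            H = np.stack([color_histogram(p) for p in pose_patches])
            pca = PCA(n_components=min(n_components, len(pose_patches)))
            pca.fit(H)
            return pca        # new appearances can be projected / reconstructed from it

        def match_score(pca, observed_patch):
            h = color_histogram(observed_patch).reshape(1, -1)
            recon = pca.inverse_transform(pca.transform(h))
            return -np.linalg.norm(h - recon)   # closer to the eigenspace = better fit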
2009-03-07
    This paper was published in Robotics and Autonomous Systems, Vol. 44, Issue 1, 2003. The paper is similar to my research subject: it uses vertical lines for mapping. The authors use inertial sensors to detect vertical lines and a homography for correspondence. Finally, they build a metric map. If you have any questions or comments, you can contact me anytime.
2009-02-28
    In this letter, we propose a novel approach to detecting and tracking apartment buildings for the development of a video-based navigation system that provides an augmented reality representation of guidance information on live video sequences. For this, we propose a building detector and tracker. The detector is based on an AdaBoost classifier followed by hierarchical clustering. The classifier uses modified Haar-like features as the primitives. The tracker is a motion-adjusted tracker based on a pyramid implementation of the Lucas-Kanade tracker, which periodically confirms and consistently adjusts the tracking region. Experiments show that the proposed approach yields robust and reliable results and is far superior to conventional approaches.
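    A minimal sketch of the tracking half only: pyramidal Lucas-Kanade optical flow on corner points inside the detected building box, with the box shifted by the median point motion. The AdaBoost detector, Haar-like features, and the confirmation/adjustment policy are not reproduced here, and the parameters are illustrative.

        # Sketch of tracking a detected region with pyramidal Lucas-Kanade optical flow.
        import cv2
        import numpy as np

        def track_region(prev_gray, curr_gray, box):
            """box = (x, y, w, h) in prev_gray. Returns the shifted box in curr_gray."""
            x, y, w, h = box
            mask = np.zeros_like(prev_gray)
            mask[y:y + h, x:x + w] = 255
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                          qualityLevel=0.01, minDistance=5, mask=mask)
            if pts is None:
                return box
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                                          winSize=(21, 21), maxLevel=3)
            good = status.ravel() == 1
            if not good.any():
                return box
            shift = np.median((new_pts[good] - pts[good]).reshape(-1, 2), axis=0)
            return (int(x + shift[0]), int(y + shift[1]), w, h)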
2009-02-21
    I'll present a paper "color-based road detection in urban traffic scenes" this weekend. This paper was published in "IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 4, pp. 309-318, 2004." The authors are Yinghua He, Hong Wang, and Bo Zhang. In the paper, the authors present a road-area detection algorithm based on color images. This algorithm is composed of two modules as boundaries areas detection using intensity images and road areas detection using color images. If you need more information, please check the attached paper. Sincerely Kim, Dae-Nyeon
2009-02-07
    Abstract: This paper presents a novel algorithm for performing integrated segmentation and 3D pose estimation of a human body from multiple views. Unlike other state-of-the-art methods which focus on either segmentation or pose estimation individually, our approach tackles these two tasks together. Our method works by optimizing a cost function based on a Conditional Random Field (CRF). This has the advantage that all information in the image (edges, background and foreground appearances), as well as the prior information on the shape and pose of the subject, can be combined and used in a Bayesian framework. Optimizing such a cost function would previously have been computationally infeasible; however, our recent research in dynamic graph cuts allows this to be done much more efficiently than before. We demonstrate the efficacy of our approach on challenging motion sequences. Although we target the human pose inference problem in the paper, our method is completely generic and can be used to segment and infer the pose of any rigid, deformable or articulated object. P.S. The original PDF file is compressed into two files with AlZip. If you want to decompress them, please use AlZip or ask other colleagues.
2009-01-31
    Abstract: Near-Infrared (NIR) images of natural scenes usually have better contrast and contain rich texture details that may not be perceived in visible-light photographs (VIS). In this paper, we propose a novel method to enhance a photograph by using the contrast and texture information of its corresponding NIR image. More precisely, we first decompose the NIR/VIS pair into average and detail wavelet subbands. We then transfer the contrast in the average subband and transfer the texture in the detail subbands. We built a special camera mount that optically aligns two consumer-grade digital cameras, one of which was modified to capture NIR. Our results exhibit higher visual quality than tone-mapped HDR images, showing that NIR imaging is useful for computational photography.
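    The subband transfer can be sketched with PyWavelets: decompose both images, keep the VIS average subband's mean brightness while borrowing the NIR variation, and take the stronger detail coefficients. These exact transfer rules are simplifications of the paper's method, intended only to show the structure.

        # Simplified sketch of NIR->VIS transfer with a single-level wavelet decomposition.
        # Works on one channel (e.g. the luminance of the VIS photo) aligned with the NIR image.
        import numpy as np
        import pywt

        def enhance_with_nir(vis_luma, nir, wavelet='haar'):
            cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(vis_luma.astype(np.float64), wavelet)
            cA_n, (cH_n, cV_n, cD_n) = pywt.dwt2(nir.astype(np.float64), wavelet)
            # Contrast transfer in the average subband: keep the VIS mean,
            # borrow the NIR variation (a crude stand-in for the paper's transfer).
            cA_out = cA_v.mean() + (cA_n - cA_n.mean()) * (cA_v.std() / (cA_n.std() + 1e-9))
            # Texture transfer in the detail subbands: take the stronger coefficient.
            pick = lambda a, b: np.where(np.abs(b) > np.abs(a), b, a)
            details = (pick(cH_v, cH_n), pick(cV_v, cV_n), pick(cD_v, cD_n))
            return pywt.idwt2((cA_out, details), wavelet)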
2009-01-24
    Dear Colleagues, this weekend I will present a paper titled "A Macao License Plate Recognition System", published at the Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, 18-21 August 2005. Abstract: A license plate recognition (LPR) system plays an important role in numerous applications, such as parking accounting systems, traffic law enforcement, road monitoring and security systems. This paper proposes an experimental license plate recognition system for Macao-style license plates. The system uses morphological operations and a projection searching algorithm for the extraction of license plates. The recognition result is obtained from character recognition based on template matching. The proposed work in this paper is the first attempt towards Macao-style license plates. If you need more info, please contact me by mail. Kaushik
2009-01-20
    I'll present "Robust Reorientation of 2D Shapes Using The Orientation Indicator Index the Orientation Indicator Index" on this saturday. This paper was presented in "International Conference on Signal Porcessing (ICASSP2005)". The authors are Victor H. S. Ha and Jose M. F. Moura. Victor H. S. Ha work at Digital Media Solutions Lab of Smasung Information Systems in America. Jose M. F. Moura is a Professor of Electrical and Computer Engineering at Carnegie Mellon University . Abstract Shape reorientation is a critical step in many image processing and computer vision applications such as registration, detection, identification, and classification. Shape reorientation is a needed step to restore the correct orientation of a shape when its image is subject to an arbitrary rotation and reflection. In this paper, we present a robust method to determine the standard "Normalized" orientation of twodimensional (2D) shapes in a blind manner, i.e., without any other information other than the given input shape. We introduce a set of orientation indicator indices (OII) that use low order central moments of the shape to monitor the orientational characteristics of the shape. Because these OII's use only low (up to third) order moments, they are robust to noise and errors. We show with examples how we bring consistently a given shape with an unknown arbitrary orientation to its standard normalized orientation using the OII.
2009-01-10
    I would like to introduce my presentation for this week. The title is "Modeling Inter-Camera Space-Time and Appearance Relationships for Tracking across Non-Overlapping Views", and the authors are Omar Javed, Khurram Shafique, Zeeshan Rasheed and Mubarak Shah at the University of Central Florida, USA. This paper was published in Computer Vision and Image Understanding, 2008. The focus of this paper is multi-camera tracking in a system of non-overlapping cameras.
2009-01-03
    This paper was published in IEICE (The Institute of Electronics, Information and Communication Engineers) Trans. on Information and Systems, Vol. E89-D, No. 7, July 2006. The paper presents a new stereo line segment matching algorithm. The algorithm consists of two stages. Stage 1 builds a candidate table using several cues: the search area, LNO (Line Normalized Overlap), OD (Orientation Difference) and the contrast sign. Stage 2 is a disambiguation process, in which the authors consider three cases: one-to-one matching, one-to-many matching, and many-to-many matching. Finally, the algorithm achieves a 96% matching rate. Abstract: In this paper, a new stereo line segment matching algorithm is presented. The main purpose of this algorithm is to increase efficiency, i.e., increasing the number of correctly matched lines while avoiding an increase in mismatches. In this regard, the reasons for the elimination of correct matches as well as the existence of erroneous ones in some existing algorithms have been investigated. An attempt was also made to make efficient use of the photometric, geometric and structural information through the introduction of new constraints, criteria, and procedures. Hence, in the candidate determination stage of the designed algorithm, two new constraints were employed in addition to the reliable epipolar, maximum and minimum disparity, and orientation similarity constraints. The process of disambiguation and final match selection, being the main problem of the matching issue, is a completely new development with respect to the employed constraints, criterion function and its optimization. The algorithm was applied to images of several indoor scenes, and its high efficiency is illustrated by the correct matching of 96% of the line segments with no mismatches.