
Abstract
This proposal describes the methodology for developing an object detection and classification system to solve the problem stated below. The system is built with Spyder from the Python(x,y) distribution.

Keywords – image processing; object detection and classification
Introduction
Image processing is a way of performing operations on an image in order to obtain an enhanced image or to retrieve important information from it. It is a type of signal processing in which the input is an image and the output can be either another image or characteristics retrieved from the image. Image processing can be divided into three steps:
1. import the image with the help of image acquisition tools;
2. analyze the image and retrieve useful information from it;
3. output the result, which can be a manipulated image based on the analysis made.
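The three steps above can be sketched in code. This is a minimal illustration, not the proposed system: the image is a hand-made NumPy array, and the analysis and output operations (mean brightness, grayscale conversion) are assumed stand-ins for whatever a real pipeline would compute.

```python
import numpy as np

def acquire_image():
    # Step 1: image acquisition (here a tiny synthetic 2x2 RGB image;
    # a real system would import it with an acquisition library).
    return np.array([[[200, 10, 10], [10, 200, 10]],
                     [[10, 10, 200], [255, 255, 255]]], dtype=np.uint8)

def analyze(image):
    # Step 2: retrieve information, e.g. per-channel mean brightness.
    return image.reshape(-1, 3).mean(axis=0)

def output(image):
    # Step 3: produce a manipulated image, e.g. a grayscale version
    # using the standard luma weights.
    weights = np.array([0.299, 0.587, 0.114])
    return (image @ weights).astype(np.uint8)

img = acquire_image()
stats = analyze(img)
gray = output(img)
print(stats, gray.shape)
```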

Object detection and classification
Object detection is a technology related to image processing that helps to detect certain objects, such as humans, buildings and cars, in an image. The difference from object classification lies in the variable part: the output of object detection is variable in length, because the objects detected may vary from one image to another. The concept of object detection is that every object has distinctive features that help the system detect it. For example, to detect a square-shaped object, the system looks for objects whose sides are equal in length and perpendicular at the corners.
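The square example can be made concrete. The helper below is hypothetical (its name and tolerance are not from the proposal): given four corner points in order, it tests exactly the two properties mentioned above, equal side lengths and perpendicular adjacent sides.

```python
import math

def is_square(pts, tol=1e-6):
    # Side vectors between consecutive corners (wrapping around).
    sides = [(pts[(i + 1) % 4][0] - pts[i][0],
              pts[(i + 1) % 4][1] - pts[i][1]) for i in range(4)]
    # Property 1: all four sides have equal length.
    lengths = [math.hypot(dx, dy) for dx, dy in sides]
    equal_sides = max(lengths) - min(lengths) < tol
    # Property 2: adjacent sides are perpendicular (dot product is zero).
    perpendicular = all(
        abs(sides[i][0] * sides[(i + 1) % 4][0] +
            sides[i][1] * sides[(i + 1) % 4][1]) < tol
        for i in range(4))
    return equal_sides and perpendicular

print(is_square([(0, 0), (1, 0), (1, 1), (0, 1)]))   # square
print(is_square([(0, 0), (2, 0), (2, 1), (0, 1)]))   # rectangle, not square
```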


Object classification is also a technology related to image processing. It uses a database of predefined patterns that are compared with a detected object in order to assign the object to the correct category. It is an important and difficult task in many application domains, such as biometrics and object tracking. Object classification works on measured pixels, the smallest units used to represent an image. Besides that, object classification is a subtask of object detection.

The object classification process consists of the following steps:
1. Pre-processing. Also known as image restoration, this is a technique for enhancing the image. Pre-processing can remove low-frequency background noise and can greatly boost the reliability of optical inspection.

2. Detection and extraction of objects. This includes detecting the pixel positions and other features of a moving object chosen from the video. Extraction reduces the amount of resources required to describe a large set of data.

3. Classification. The detected objects are classified into predefined classes using a suitable classification method. Classification depends mainly on the detected objects' features; these features are essential for both the classification and the recognition part. Efficient feature extraction yields the feature information that is important for image analysis and classification.
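The three steps can be sketched as a toy pipeline. Everything here is an assumption for illustration: the class names, template values and features (mean and maximum intensity) are made up, and the "suitable classification method" is taken to be nearest-template matching by Euclidean distance.

```python
import math

TEMPLATES = {                      # predefined patterns, one per class
    "car":    [0.9, 0.2],
    "person": [0.3, 0.8],
}

def preprocess(pixels):
    # Step 1: crude noise suppression by clipping values to [0, 1].
    return [min(max(p, 0.0), 1.0) for p in pixels]

def extract_features(pixels):
    # Step 2: describe the object with a small feature vector
    # (mean intensity and maximum intensity).
    return [sum(pixels) / len(pixels), max(pixels)]

def classify(features):
    # Step 3: assign the class whose template is closest in
    # Euclidean distance.
    return min(TEMPLATES, key=lambda c: math.dist(features, TEMPLATES[c]))

obj = preprocess([1.2, 0.8, 0.9, 1.0])
label = classify(extract_features(obj))
print(label)
```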

Problem statement
We are creating a new image processing system with object detection, classification and other functions. Existing image processing systems are quite outdated: they either cannot produce the image clearly, cannot extract the important features from the image, or cannot both extract the important features and classify them. The purpose of creating this new system is to let the system itself scan and process the image and detect and classify the objects found in it.

Methodology
One of the methods used in image processing is mapping, a geometric transformation process whose result in general cannot be expressed in a closed form. Two mapping methods can be used: forward mapping and reverse mapping.

Figure 1.1: Forward mapping
Let the original point be (u, v). In Figure 1.1, the original point (u, v) of a pixel is shown in the source image. The pixel is then mapped through a mapping function T(u, v) to another point with coordinates (x, y) in the destination image. Since the mapped coordinates are real numbers, they generally do not land exactly on a destination pixel, so the mapped value is assigned to one or more of the closest pixels. If the mapped value influences several pixels, the maximum distance from the mapping site must first be specified; second, a weight given to each of those pixels must be determined based on the distance.

For each pixel of the destination image, a range must be defined so that any value mapped to a point within that range affects the pixel, weighted according to its influence. This raises two difficulties. First, a destination pixel may receive no mapped values at all, which is likely to cause dark spots in the image. Second, because the number of points mapped into the influence range of each pixel is uncertain, determining the weightings is difficult. Therefore, each destination pixel must maintain a buffer of all the points mapped into its influence range, from which the actual value of the pixel is determined at the end.

The advantage of forward mapping is that it does not need to import the whole image before processing can start. However, implementing this method carries a very high processing cost, so the system takes longer to process the image, and it also occupies more memory space.
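A minimal sketch of forward mapping, assuming a simple T (a pure integer translation, chosen here only so the example stays short): each source pixel (u, v) is pushed through T and rounded to the nearest destination pixel. Destination pixels that receive no value stay at a sentinel of 0, illustrating the dark spots mentioned above; the influence-range buffering is omitted.

```python
import numpy as np

def forward_map(src, shift=(1, 1)):
    # Unmapped destination pixels keep the value 0 ("dark spots").
    dst = np.zeros_like(src)
    rows, cols = src.shape
    for u in range(rows):
        for v in range(cols):
            x, y = u + shift[0], v + shift[1]       # T(u, v)
            xi, yi = int(round(x)), int(round(y))   # nearest pixel
            if 0 <= xi < rows and 0 <= yi < cols:
                dst[xi, yi] = src[u, v]
    return dst

src = np.arange(9, dtype=np.uint8).reshape(3, 3)
print(forward_map(src))
```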

The second method used in image processing is reverse mapping. Reverse mapping scans the destination image pixel by pixel and calculates the corresponding value in the source image. Figure 1.2 below shows reverse mapping. A pixel of the destination image with coordinates (x, y) is mapped through the inverse function T⁻¹ to the point with coordinates (u, v) in the source image. In this figure, the value of the source pixel nearest to the mapped point is taken as the destination pixel's value.
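Reverse mapping can be sketched the same way, again assuming a hypothetical translation so the inverse transform is trivial: each destination pixel (x, y) is pulled back through the inverse of T and takes the value of the nearest source pixel, so every destination pixel receives exactly one value.

```python
import numpy as np

def reverse_map(dst_shape, src, shift=(1, 1)):
    dst = np.zeros(dst_shape, dtype=src.dtype)
    rows, cols = src.shape
    for x in range(dst_shape[0]):
        for y in range(dst_shape[1]):
            u, v = x - shift[0], y - shift[1]       # inverse of T
            ui, vi = int(round(u)), int(round(v))   # nearest source pixel
            if 0 <= ui < rows and 0 <= vi < cols:
                dst[x, y] = src[ui, vi]
    return dst

src = np.arange(9, dtype=np.uint8).reshape(3, 3)
print(reverse_map(src.shape, src))
```

Unlike the forward version, every destination pixel is visited exactly once, which is why reverse mapping avoids the dark-spot and weighting problems described above.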

Figure 1.2: Reverse mapping
