@misc{ fernando-vectorization,
author = "Fernando",
title = "A Vectorization Algorithm of Closed Regions in Raster Images",
url = "citeseer.ist.psu.edu/557642.html" }
@misc{ evaluation-sparse,
title = "Sparse Pixel Vectorization: An Algorithm and Its Performance Evaluation",
url = "citeseer.ist.psu.edu/544647.html" }
5. Detection and Enhancement of Line Structures in an Image by Anisotropic Diffusion
3. On Image Analysis by the Methods of Moments
1. Adaptable Vectorization System based on Strategic Knowledge and XML representation use
7. Stable and Robust Vectorization: How to make the right choices
Info of Paper
GREC'99
Tombre, Karl
Ah-Soon, Christian
Dosch, Philippe
Springer LNCS 1941, pp. 3-18, 2000
Abstract
In this paper, we discuss the elements to be taken into account when
choosing one's vectorization method. The paper is extensively based on our
own implementations and tests, and concentrates on methods designed to have
few, if any, parameters.
Summary
An ideal vectorization system should be sufficiently stable and robust. One
important factor of robustness is to minimize the number of parameters and
thresholds needed in the vectorization process. They work on an approach
combining several methods, each of which has no or very few parameters.
Four steps are involved in vectorization:
1. First find the lines in the original raster image. Whereas the most
common approach for this is to compute the skeleton of the image, a number
of other methods have been proposed.
2. Next approximate the lines found into a set of vectors. This is
performed by some polygonal approximation method, and there are many
around, with different approximation criteria.
3. It is then necessary to perform some post-processing: find better
positions for the junction points, merge some vectors and remove some
others, etc.
4. Find the circular arcs. This step is not explained in this paper.
When finding lines, several approaches are used:
One method is to compute the medial axis, i.e., skeletonization (a minimal
sketch of this step is given after this list).
The second method is based on matching the opposite sides of the line. This
method is better at positioning the junction points, but tends to rely too
much on heuristics and thresholds when the drawing becomes complex.
Some sparse-pixel approaches are also used in this paper. The general idea
is not to examine all the pixels in the image, but to use appropriate
sub-sampling methods which give a broader view of the line.
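A minimal sketch of the skeletonization step, assuming scikit-image is
available and the drawing is already a binary array with ink pixels set to
True. This is generic thinning, not any of the specific algorithms compared
in the paper:

    import numpy as np
    from skimage.morphology import skeletonize

    def find_line_skeleton(binary_drawing: np.ndarray) -> np.ndarray:
        """Reduce a binary line drawing (ink == True) to a 1-pixel-wide skeleton.

        The skeleton approximates the medial axis of each line and is the usual
        input to the subsequent polygonal-approximation step.
        """
        return skeletonize(binary_drawing)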
From lines to segments
If simplicity of the resulting set of vectors is important, the best choice
is probably an iterative method. It will give a number of segments closest
to the number in the original drawing. However, it is not optimal with
respect to the positioning of these segments.
If precision is the most important criterion, Rosin & West's method seems
to be a good choice. It also does not require any explicit threshold or
parameter.
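To illustrate what a polygonal approximation does, here is a sketch of the
classic recursive-splitting (Ramer-Douglas-Peucker) scheme. It still needs
one deviation threshold epsilon, so it only illustrates the idea; it is not
Rosin & West's threshold-free method:

    import numpy as np

    def rdp(points: np.ndarray, epsilon: float) -> np.ndarray:
        """Recursive-splitting polygonal approximation (Ramer-Douglas-Peucker).

        points  : (N, 2) array of ordered chain/skeleton points.
        epsilon : maximum allowed point-to-chord deviation (the single
                  threshold this simple variant needs).
        Returns the retained vertices as an (M, 2) array.
        """
        points = np.asarray(points, dtype=float)
        if len(points) < 3:
            return points
        start, end = points[0], points[-1]
        dx, dy = end - start
        chord_len = np.hypot(dx, dy)
        rel = points - start
        if chord_len == 0.0:
            # Degenerate chord (closed chain): fall back to point distances.
            dists = np.hypot(rel[:, 0], rel[:, 1])
        else:
            # Perpendicular distance of every point to the chord start-end.
            dists = np.abs(dx * rel[:, 1] - dy * rel[:, 0]) / chord_len
        split = int(np.argmax(dists))
        if dists[split] > epsilon:
            # Keep the farthest point and recurse on both halves.
            left = rdp(points[: split + 1], epsilon)
            right = rdp(points[split:], epsilon)
            return np.vstack([left[:-1], right])
        return np.vstack([start, end])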
Post-processing
Although it is difficult to provide a universal measure for assessing the
performance of vectorization, it is believed that the elements of choice
given here can be complementary to statistical performance evaluation
processes.
6. Improving the Accuracy of Skeleton-Based Vectorization
Info of Paper
GREC2002
Hilaire, Xavier
Tombre, Karl
Springer LNCS 2390, pp. 273-288, 2002
Abstract
Summary
5. Detection and Enhancement of Line Structures in an Image by Anisotropic Diffusion
Info of Paper
Lecture Notes in Computer Science
Abstract:
Summary:
An image may have both local and global structure; this paper focuses on
the global structure. To enhance the global line structure of a gray-level
image, one needs some technique to ignore the small local structures. A
Gaussian filter is often used to blur away such small structures, but if
the local lines are not isolated in the image, the result of the filtering
is not good. The reason for the poor performance is that the neighbouring
structures influence each other. If some way can be found to diffuse the
local lines only along the direction of the line structure, the global
result may be better.
Some research shows that a Gaussian filter can have different sizes in
different directions, i.e., the shape of the Gaussian filter can be
anisotropic. So one can smooth out the lines if the direction of the line
structure is known. This paper proposes a method to determine the proper
parameters of the Gaussian filter so as to smooth out only small local
structures and enhance global line structures, adaptively to a given image
and to each position in the image. Their approach can be described in
several steps:
1. Multi-resolution image analysis
This step is a kind of preprocessing. By viewing the image at various
resolutions, they can find a critical moment at which the global line
structure appears clearly (if the purpose is to recognize the global shape
of the figure). They need to find the factor t in u(x,y,t) which gives this
critical moment.
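A minimal sketch of how the family u(x,y,t) can be produced with isotropic
Gaussian blurring (the standard linear scale-space); choosing the critical
t is left to the later line-likeness evaluation. The function name and the
sigma = sqrt(2t) convention are my assumptions, not taken from the paper:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def scale_space(image: np.ndarray, ts):
        """Return the family u(x, y, t) for the requested scales t.

        Each u(., ., t) is the image blurred with a Gaussian of
        sigma = sqrt(2 * t), i.e. the standard linear (isotropic) scale-space.
        The 'critical' t at which only the global line structure survives is
        chosen afterwards.
        """
        return {t: gaussian_filter(image, sigma=np.sqrt(2.0 * t)) for t in ts}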
2. Evaluation of line-likeness
In the neighbourhood of a line structure, the gradients of the image
gray-level f(x,y) share the same direction, toward the center of the line.
So, if the gradients of the gray levels have nearly equal directions in a
small neighbouring region, the image is defined to have a line structure at
that point. Line-likeness is defined by how similar the directions of the
gradients of u(x,y,t) are in the neighbourhood of the image point.
The gradient space, the structural-analysis tensor, and its eigenvectors
and eigenvalues are introduced. The line-likeness S(x,y) at a position
(x,y) can be calculated, and its value lies between 0 and 1. If S(x,y) is
close to 1, the gray level around (x,y) has a line-like structure; if
S(x,y) is close to 0, it does not.
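A sketch of one common way to turn the structure tensor's eigenvalues into
a [0,1] line-likeness map. The paper's exact S(x,y) formula and its
parameter p may differ; the neighbourhood size rho below plays a role
similar to p, so treat this only as an illustrative stand-in:

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def line_likeness(u: np.ndarray, rho: float = 2.0) -> np.ndarray:
        """Coherence-style line-likeness S(x, y) in [0, 1].

        Builds the gradient structure tensor (gradient outer products averaged
        over a neighbourhood of size ~rho) and measures how strongly the
        smoothed gradients share one direction:
        S = ((l1 - l2) / (l1 + l2))^2, with l1 >= l2 the tensor eigenvalues.
        S near 1 means a line-like (one dominant orientation) neighbourhood.
        """
        ux = sobel(u, axis=1)                 # gradient components
        uy = sobel(u, axis=0)
        jxx = gaussian_filter(ux * ux, rho)   # tensor entries, averaged
        jxy = gaussian_filter(ux * uy, rho)   # over the neighbourhood
        jyy = gaussian_filter(uy * uy, rho)
        # Eigenvalues of a symmetric 2x2 matrix in closed form.
        trace = jxx + jyy
        diff = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
        l1, l2 = (trace + diff) / 2.0, (trace - diff) / 2.0
        eps = 1e-12                           # avoid division by zero in flat regions
        return ((l1 - l2) / (l1 + l2 + eps)) ** 2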
3. Multi-scale evaluation
There is a parameter p in S(x,y), and different p lead to different S(x,y).
They detect the global line structure of the original image by blurring it
with the p/2 which makes S(x,y) maximal.
4. Anisotropic diffusion to enhance line structure
The previous steps help to find a global line structure, but the image is a
blurred one and the detected line structure is faded. This step smooths out
gray-level changes only within the line structure and enhances it with
clear contour edges. To blur an image only along a specific direction,
anisotropic diffusion has been proposed. This paper proposes determining a
suitable diffusion tensor to enhance the line structure by using the
line-structure evaluation S(x,y).
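The paper's diffusion is driven by a tensor built from S(x,y). As a simpler
stand-in that still shows the discrete diffusion loop, here is a scalar,
Perona-Malik-style sketch (gradient-dependent diffusivity instead of a full
tensor; parameters are illustrative and assume gray levels roughly in [0, 1]):

    import numpy as np

    def perona_malik(u: np.ndarray, n_iter: int = 20,
                     kappa: float = 0.1, dt: float = 0.2) -> np.ndarray:
        """Edge-preserving (Perona-Malik) diffusion, a scalar stand-in for the
        tensor-driven diffusion used in the paper.

        Gradient-dependent diffusivity g = exp(-(d / kappa)^2) blocks smoothing
        across strong edges while blurring homogeneous regions. Borders wrap
        around (np.roll), which is fine for a sketch.
        """
        u = u.astype(float).copy()
        for _ in range(n_iter):
            # One-sided differences towards the four neighbours.
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            g = lambda d: np.exp(-(d / kappa) ** 2)
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u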
In short, this paper proposed a technique to emphasize the global line
structure of a gray-level image. First, by changing the resolution, they
obtain a proper resolution that makes the global line structure clear.
Then, they get the direction of the line and smooth out only in this
direction. The global line structures are thus enhanced.
Info of Paper
R. J. Prokop and A. P. Reeves.
CVGIP: Graphical Models and Image Processing.
Full Text (pdf): Coming soon.
3. On Image Analysis by the Methods of Moments
Info of paper
On Image Analysis by the Methods of Moments
Cho-Huak Teh, Roland T. Chin
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 10,
No. 4, 1988
Summary
This paper discussed several moment-based analysis approaches in image
processing and compared them. In the past decades, researchers proposed
several kinds of moments, for example, regular moments (geometric moments),
Zernike moments (orthogonal), Legendre moments (orthogonal), pseudo-Zernike
moments (orthogonal), complex moments, etc. This paper discussed their
sensitivity to image noise, their information redundancy, and their
capability for image representation, etc.
Regarding the noise analysis, it shows that higher-order moments are more
vulnerable to noise, and that the number of coefficients (and hence the set
of moments up to a certain order) for optimal image representation can be
determined under a given noise condition. In terms of information
redundancy, the orthogonal moments (i.e., Legendre, Zernike, and
pseudo-Zernike) are better than the other types of moments. In terms of
overall performance, Zernike and pseudo-Zernike moments outperform the
others.
My view of this paper
Since our diagrams do not have much noise, regular moments may be just fine
for our work.
Not a paper, just something from the web about Moment Analysis
Hu (M. K. Hu) has proved that it is possible to completely represent an
image by its moments. And researchers have found that a small number of
moments, e.g., the first 30 or so, suffice to describe an object with
useful accuracy.
In a typical imaging application, segmentation of the image is first
performed to generate a binary image, with object (foreground) pixels
labeled as such and non-object (background) pixels set to some appropriate,
different value, for example 0. The moments of the foreground pixels are
then computed and used to characterize the object in each image. These
images can thus be represented as a FEATURE VECTOR composed of all the
moments up to some order. Comparison of images, and thus of the objects
they contain, is reduced to a numerical measure of the distance between
these corresponding feature vectors.
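A sketch of that workflow with plain regular/central moments in NumPy; the
moment order, the function names, and the plain Euclidean distance are
illustrative choices, not prescribed by the sources above:

    import numpy as np

    def central_moments(binary: np.ndarray, max_order: int = 3) -> np.ndarray:
        """Feature vector of central moments mu_pq, p + q <= max_order,
        computed over the foreground (non-zero) pixels of a binary image."""
        ys, xs = np.nonzero(binary)
        xbar, ybar = xs.mean(), ys.mean()   # centroid (m10/m00, m01/m00)
        feats = []
        for p in range(max_order + 1):
            for q in range(max_order + 1 - p):
                feats.append(np.sum((xs - xbar) ** p * (ys - ybar) ** q))
        return np.asarray(feats, dtype=float)

    def moment_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
        """Compare two segmented objects by the distance between their moment
        feature vectors (no scale/rotation normalisation here)."""
        return float(np.linalg.norm(central_moments(img_a) - central_moments(img_b)))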
1. Adaptable Vectorization System based on Strategic Knowledge and XML representation use
Info of this paper
Delalandre Mathieu, Saidali Youssouf, Ogier Jean-Marc, Trupin Eric
PAI lab, Univ. of Rouen, France
L3I lab, Univ. of La Rochelle, France
GREC'03
1. Summary
This paper presents a vectorization system. The system has two parts: a processing library and a graphical user interface. The processing library includes image pre-processing and vectorization tools. The pre-processing tools deal with noisy images. The vectorization tools are of high granularity in order to use them in a strategic approach. The GUI allows constructing and executing different vectorization scenarios. This makes it easy to test different strategies according to the recognition goals, and to adapt the system to new applications. XML is used to represent data for data manipulation.
Vectorization is a complex process that may rely on many different methods. Some methods first extract object graphs and then transform the object lists into mathematical object lists. Other methods perform vectorization directly.
Vectorization systems basically use two types of knowledge: descriptive knowledge and strategic knowledge. The first concerns the objects in documents and the relations between them. The second concerns the image processing tools used to construct the objects and the chaining relations between these tools. In this paper, a strategic-knowledge-based vectorization system is implemented.
This paper explains the system in the following sequence: processing library, GUI, and XML. This summary follows that sequence.
Image processing library (Image pre-processing and Vectorization)
Image pre-processing
This step is designed to deal with noisy images, and there are several ways
to implement it. This paper does it as follows: First they use grey-level
filtering methods, such as a median filter and a mean filter, on the
scanned images. Second, they binarize these images. Then they reduce the
noise on the obtained binary images, using two methods. The first one is a
method based on a blob colouring algorithm which uses an automatic or
user-defined surface (area) threshold. The second method uses mathematical
morphology operations like dilation, erosion, opening and closing. Finally,
they use distance computation functions between images to test the
pre-processing scenarios. See Fig. 1 for an example.
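A sketch of one such pre-processing scenario (median filtering, Otsu
binarization, small-blob removal, morphological closing) using SciPy and
scikit-image; the thresholds and structuring-element sizes are my guesses,
not the paper's:

    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.filters import threshold_otsu
    from skimage.morphology import remove_small_objects, binary_closing, disk

    def preprocess(gray: np.ndarray, min_blob_area: int = 20) -> np.ndarray:
        """Grey-level filtering -> binarization -> noise reduction."""
        smoothed = median_filter(gray, size=3)          # grey-level filtering
        ink = smoothed < threshold_otsu(smoothed)       # dark ink on light paper
        ink = remove_small_objects(ink, min_size=min_blob_area)  # blob-like cleanup
        return binary_closing(ink, disk(1))             # morphological closing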
Vectorization
The vectorization processing is based on various approaches (skeletonisation, contouring, region and run decomposition, direct vectorization). They decompose the classical vectorization chain into granular processing steps in order to use them within a strategic approach: the image level, the structured data level, and the boundary level, which sits between the image data and the structured data.
Vectorization's Image Level
In this level, they use two classical image processing methods: contouring and skeletonisation.
Vectorization's Boundary Level
They use six different methods to extract structured data from images. Using Direct Contouring, they extract the internal and external contours of shapes and organize them into chains. By searching the contour chains, the inclusion relations are extracted. This method gives global/local descriptions of the image's shapes. The Direct Vectorization method works like this: after finding an entry point, a point element advances from this entry point along the middle of the line according to contour following; the displacement's length is proportional to the line's thickness. The Run Decomposition method first divides images into runs. The runs are organized into run graphs, either horizontal or vertical. For each of these runs, the contours and skeleton are extracted (a sketch of run extraction is given after this paragraph). The Region Decomposition method is based on wave aggregation. It first analyzes the image to find an entry point. Then it searches the neighbouring points and labels, aggregates, and stores these neighbours into a wave object. Successively, the previous waves are used for the new aggregation processes. Where a wave breaks or stops, the region boundaries are defined. The boundaries are then used to create entry waves for the new region search. Examples of the above four methods are shown in Fig. 3.
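A sketch of the run-extraction step underlying the Run Decomposition method
(horizontal runs only; building the run graph is omitted):

    import numpy as np

    def horizontal_runs(binary: np.ndarray):
        """Yield (row, start_col, end_col) for each maximal horizontal run of
        foreground pixels; these runs are the nodes of a run graph."""
        for row in range(binary.shape[0]):
            line = binary[row].astype(bool)
            # Detect run starts/ends as transitions in the zero-padded row.
            padded = np.concatenate(([False], line, [False]))
            changes = np.flatnonzero(padded[1:] != padded[:-1])
            for start, end in zip(changes[::2], changes[1::2]):
                yield row, int(start), int(end - 1)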
Vectorization's Structured Data Level
The structured data level is the central part of a vectorization scenario. Its goal is to add semantic information to the basic graphs obtained by the boundary level's processing. One way to do it is list processing. List processing can be used for interiority-degree segmentation and mathematical approximation. For the interiority-degree segmentation, a thickness segmentation threshold is applied, based on a simple test of the thickness's variation. Information on the pixels' interiority degree is obtained by successive calls to the skeletonisation tools.
GUI
The GUI is used for strategic knowledge acquisition and operation. Users can construct scenarios according to the document context, for the purpose of document image recognition. After the user defines some contexts for the analyzed images, a set of processings is proposed. Each processing represents a scenario stage. The user oversees the process and can at any time return to any previous stage in order to modify the parameters, change the processing stage, seek help, or display some examples. The user can also save scenario examples in the scenario base, and can search the base with two search tools: a query language or a graph matching tool. See Fig. 7.
XML
XML is used for better knowledge representation. In the processing library, XML is used for the structured data output of the processing, while the GUI uses it for storing scenarios in an XML base.
2. My view of this paper
This paper did a good job on a vectorization system in which many related technologies are applied. They pre-process the image to reduce noise, decompose the vectorization into three levels so as to use them with semantic information and strategic approaches, and implement a GUI to make the system easy to use.
My question about this paper concerns the strategic approach, described in part four. The authors say it is a highlight of the paper. It seems to me that it focuses on the GUI, not on the idea of how to do the vectorization. So I don't think it is technically important, if I understand the paper correctly.
3. Its possible contribution to our research
It provides some different approaches at several stages of vectorization, though many of them are standard methods. We can consult these methods when we work on SVP. They also provide good references.