Jeremy S. De Bonet : Publications






Data Compression Techniques for Branch Prediction

1999
J. S. De Bonet

abstract

Without special handling, branch instructions would disrupt the smooth flow of instructions into the microprocessor pipeline. To eliminate this disruption, many modern systems attempt to predict the outcome of branch instructions, and use this prediction to fetch, decode and even evaluate future instructions. Recently, researchers have realized that the task of branch prediction for processor optimization is similar to the task of symbol prediction for data compression. Substantial progress has been made in developing approximations to asymptotically optimal compression methods, while respecting the limited resources available within the instruction prefetching phase of the processor pipeline. Not only does the infusion of data compression ideas provide a theoretical fortification of branch prediction, it also yields real and significant empirical improvements in performance. We present an overview of branch prediction, from early techniques through more recent data-compression-inspired schemes. A new approach is described which uses a non-parametric probability density estimator similar to the LZ77 compression scheme (Ziv and Lempel, 1977). Results are presented comparing the branch prediction accuracy of several schemes with that achieved by our new approach.
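For illustration only, here is a minimal Python sketch of compression-style branch prediction (not the estimator used in the paper): each branch is predicted from the longest previously seen suffix of the branch history. All names, the maximum context length, and the fallback rule are illustrative.

from collections import defaultdict

class SuffixMatchPredictor:
    def __init__(self, max_context=8):
        self.max_context = max_context
        # counts[context][outcome] = times `outcome` followed `context`
        self.counts = defaultdict(lambda: [0, 0])
        self.history = []

    def predict(self):
        # Try the longest matching context first, fall back to shorter ones.
        for k in range(min(self.max_context, len(self.history)), 0, -1):
            ctx = tuple(self.history[-k:])
            not_taken, taken = self.counts[ctx]
            if taken + not_taken > 0:
                return 1 if taken >= not_taken else 0
        return 1  # default: predict taken

    def update(self, outcome):
        for k in range(1, min(self.max_context, len(self.history)) + 1):
            ctx = tuple(self.history[-k:])
            self.counts[ctx][outcome] += 1
        self.history.append(outcome)

# Example: a loop branch that is taken three times, then falls through.
trace = [1, 1, 1, 0] * 50
p = SuffixMatchPredictor()
correct = 0
for outcome in trace:
    correct += (p.predict() == outcome)
    p.update(outcome)
print(f"accuracy: {correct / len(trace):.2f}")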



Poxels: Probabilistic Voxelized Volume Reconstruction

1999
J. S. De Bonet and P. Viola
Proceedings of ICCV 1999

abstract

This paper examines the problem of reconstructing a voxelized representation of 3D space from a series of images. An iterative algorithm is used to find the scene model which jointly explains all the observed images by determining which region of space is responsible for each of the observations. The current approach formulates the problem as one of optimization over estimates of these responsibilities. The process converges to a distribution of responsibility which accurately reflects the constraints provided by the observations, the positions and shape of both solid and transparent objects, and the uncertainty which remains. Reconstruction is robust, and gracefully represents regions of space in which there is little certainty about the exact structure due to limited, non-existent, or contradicting data. Rendered images of voxel spaces recovered from synthetic and real observation images are shown.
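A minimal EM-style sketch of the responsibility idea, assuming known ray/voxel geometry and Gaussian pixel noise, and deliberately omitting the occlusion and transparency handling that the paper develops; all names are illustrative.

import numpy as np

def reconstruct(rays, observations, n_voxels, n_iters=20, sigma=0.1):
    """rays: list of index arrays, rays[i] = voxels intersected by pixel i.
    observations: array of observed pixel intensities, one per ray."""
    colors = np.full(n_voxels, 0.5)            # current voxel intensity estimates
    for _ in range(n_iters):
        weight_sum = np.zeros(n_voxels)
        value_sum = np.zeros(n_voxels)
        for vox_idx, obs in zip(rays, observations):
            # E-step: responsibility of each voxel on this ray for the pixel,
            # proportional to how well its current color explains the observation.
            lik = np.exp(-0.5 * ((colors[vox_idx] - obs) / sigma) ** 2)
            resp = lik / lik.sum()
            # Accumulate responsibility-weighted observations for the M-step.
            weight_sum[vox_idx] += resp
            value_sum[vox_idx] += resp * obs
        seen = weight_sum > 0
        colors[seen] = value_sum[seen] / weight_sum[seen]   # M-step
    return colors

# Tiny example: 4 voxels in a row, three "pixels" that each see two voxels.
rays = [np.array([0, 1]), np.array([1, 2]), np.array([2, 3])]
obs = np.array([0.9, 0.9, 0.1])
print(reconstruct(rays, obs, n_voxels=4))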




Flexible Histograms: A Multiresolution Target Discrimination Model

1998
J. S. De Bonet, P. Viola, and J. W. Fisher III
Proceedings of SPIE 1998

abstract

In previous work we have developed a methodology for texture recognition and synthesis that estimates and exploits the dependencies across scale that occur within images (see DeBonet97a, DeBonet98a). In this paper we discuss the application of this technique to synthetic aperture radar (SAR) vehicle classification. Our approach measures characteristic cross-scale dependencies in training imagery; targets are recognized when these characteristic dependencies are detected. We present classification results over a large public database containing SAR images of vehicles. Classification performance is compared to the Wright Patterson baseline classifier (see Velten98). These preliminary experiments indicate that this approach has sufficient discrimination power to perform target detection/classification in SAR.



Structure-driven SAR image registration

1998
J. S. De Bonet and A. Chao
Proceedings of SPIE 1998

abstract

We present a fully automatic method for precise and robust alignment of SAR images. A multiresolution SAR image matching metric is first used to automatically determine tie-points, which are then used to perform coarse-to-fine image alignment. A formalism is developed for the automatic determination of tie-point regions that contain sufficiently distinctive structure to provide strong constraints on alignment. The coarse-to-fine refinement of the alignment estimate both improves computational efficiency and yields robust and consistent image alignment.
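A hedged sketch of coarse-to-fine translational alignment: the shift is estimated at the coarsest pyramid level and refined by a small local search at each finer level. The paper's multiresolution SAR matching metric and tie-point selection are not reproduced; a plain sum-of-squared-differences score and a decimating downsampler stand in, and all names are illustrative.

import numpy as np

def downsample(img):
    return img[::2, ::2]

def local_search(ref, mov, guess, radius):
    """Search shifts within `radius` of `guess` that best overlay mov on ref."""
    best, best_err = guess, np.inf
    for dy in range(guess[0] - radius, guess[0] + radius + 1):
        for dx in range(guess[1] - radius, guess[1] + radius + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def coarse_to_fine_align(ref, mov, levels=3, radius=2):
    # Build image pyramids, coarsest level last.
    pyr_ref, pyr_mov = [ref], [mov]
    for _ in range(levels - 1):
        pyr_ref.append(downsample(pyr_ref[-1]))
        pyr_mov.append(downsample(pyr_mov[-1]))
    shift = (0, 0)
    for lvl in range(levels - 1, -1, -1):        # coarsest -> finest
        shift = local_search(pyr_ref[lvl], pyr_mov[lvl], shift, radius)
        if lvl > 0:
            shift = (shift[0] * 2, shift[1] * 2)  # carry estimate to finer level
    return shift

# Example: recover a known shift of a smooth synthetic image.
yy, xx = np.mgrid[0:64, 0:64]
ref = np.sin(xx / 5.0) + np.cos(yy / 7.0)
mov = np.roll(np.roll(ref, -5, axis=0), 3, axis=1)
print(coarse_to_fine_align(ref, mov))            # expected (5, -3)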




Texture Recognition Using a Non-parametric Multi-Scale Statistical Model

1998
J. S. De Bonet and P. Viola
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 1998

abstract

We describe a technique for using the joint occurrence of local features at multiple resolutions to measure the similarity between texture images. Though superficially similar to a number of "Gabor"-style techniques, which recognize textures through the extraction of multi-scale feature vectors, our approach is derived from an accurate generative model of texture, which is explicitly multi-scale and non-parametric. The resulting recognition procedure is similarly non-parametric, and can model complex non-homogeneous textures. We report results on publicly available texture databases. In addition, experiments indicate that this approach may have sufficient discrimination power to perform target detection in synthetic aperture radar (SAR) images.
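As an illustration of comparing textures through the joint distribution of multi-scale per-pixel features, the sketch below uses a crude box-filter pyramid in place of the paper's filter bank and a coarse joint histogram with a chi-squared distance in place of its similarity measure; all names and parameters are illustrative.

import numpy as np

def blur_downsample(img):
    # 2x2 box filter followed by decimation (a crude pyramid step).
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def parent_vectors(img, levels=3):
    """For each finest-level pixel, stack its ancestors' values across levels."""
    h, w = img.shape
    feats, cur = [], img.astype(float)
    for lvl in range(levels):
        # Replicate coarse pixels back to the finest grid so vectors align.
        up = np.kron(cur, np.ones((2 ** lvl, 2 ** lvl)))[:h, :w]
        feats.append(up.ravel())
        cur = blur_downsample(cur)
    return np.stack(feats, axis=1)               # shape: (h*w, levels)

def histogram_distance(a, b, bins=4):
    edges = np.linspace(0.0, 1.0, bins + 1)
    ha, _ = np.histogramdd(a, bins=[edges] * a.shape[1])
    hb, _ = np.histogramdd(b, bins=[edges] * b.shape[1])
    ha, hb = ha / ha.sum(), hb / hb.sum()
    return 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-9))   # chi-squared distance

# Example: a fine checkerboard matches itself better than a smooth ramp.
yy, xx = np.mgrid[0:32, 0:32]
checker = ((xx + yy) % 2).astype(float)
ramp = xx / 31.0
print(histogram_distance(parent_vectors(checker), parent_vectors(ramp)))
print(histogram_distance(parent_vectors(checker), parent_vectors(checker)))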



Novel Statistical Multiresolution Techniques for Image Synthesis, Discrimination, and Recognition

May 1997
J. S. De Bonet

abstract

By treating images as samples from probabilistic distributions, the fundamental problems in vision -- image similarity and object recognition -- can be posed as statistical questions. Within this framework, the crux of visual understanding is to accurately characterize the underlying distribution from which each image was generated. Developing good approximations to such distributions is a difficult and, in the general case, unsolved problem. A series of novel techniques is discussed for modeling images by attempting to approximate such distributions directly. These techniques provide the foundations for texture synthesis, texture discrimination, and general image classification systems.




Multiresolution Sampling Procedure for Analysis and Synthesis of Texture Images

1997
J. S. De Bonet
ACM SIGGRAPH Computer Graphics 1997

abstract

This paper outlines a technique for treating input texture images as probability density estimators from which new textures, with similar appearance and structural properties, can be sampled. In a two-phase process, the input texture is first analyzed by measuring the joint occurrence of texture discrimination features at multiple resolutions. In the second phase, a new texture is synthesized by sampling successive spatial frequency bands from the input texture, conditioned on the similar joint occurrence of features at lower spatial frequencies. Textures synthesized with this method more successfully capture the characteristics of input textures than do previous techniques.
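A toy coarse-to-fine sampler in the same spirit: each level is sampled from the input conditioned on agreement with the coarser level already synthesized. Only a single ancestor level is conditioned on and a plain box pyramid replaces the paper's feature pyramid, so this is a sketch rather than the published algorithm; names and tolerances are illustrative.

import numpy as np

rng = np.random.default_rng(1)

def box_downsample(img):
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def synthesize(texture, levels=3, tol=0.05):
    # Input pyramid, finest level first.
    pyr = [texture.astype(float)]
    for _ in range(levels - 1):
        pyr.append(box_downsample(pyr[-1]))

    # Start from a random permutation of the coarsest input level.
    coarse = pyr[-1]
    out = rng.permutation(coarse.ravel()).reshape(coarse.shape)

    # Refine level by level, conditioning each pixel on its synthesized parent.
    for lvl in range(levels - 2, -1, -1):
        fine_in, coarse_in = pyr[lvl], pyr[lvl + 1]
        h, w = fine_in.shape
        new = np.zeros((h, w))
        parents_in = coarse_in.ravel()
        for i in range(h):
            for j in range(w):
                target = out[i // 2, j // 2]          # synthesized parent value
                cand = np.flatnonzero(np.abs(parents_in - target) < tol)
                if cand.size == 0:
                    cand = np.array([np.abs(parents_in - target).argmin()])
                k = rng.choice(cand)
                ci, cj = divmod(k, coarse_in.shape[1])
                # Copy one of that parent's four children from the input.
                new[i, j] = fine_in[2 * ci + i % 2, 2 * cj + j % 2]
        out = new
    return out

# Example: synthesize a new 32x32 patch from a noisy block-checkerboard input.
yy, xx = np.mgrid[0:32, 0:32]
tex = ((xx // 4 + yy // 4) % 2) + 0.1 * rng.standard_normal((32, 32))
print(synthesize(tex).shape)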



A Non-parametric Multi-Scale Statistical Model for Natural Images

1997
J. S. De Bonet and P. Viola
Advances in Neural Information Processing Systems 1997

abstract

The observed distribution of visual images is far from uniform. On the contrary, images have complex and important structure that can be used for image processing, recognition and analysis. There have been many proposed approaches to the principled statistical modeling of images, but each has been limited in either the complexity of the models or the complexity of the images. We present a non-parametric multi-scale statistical model for images that can be used for recognition, image de-noising, and in a "generative mode" to synthesize high quality textures.




Structure Driven Image Database Retrieval

1997
J. S. De Bonet and P. Viola
Advances in Neural Information Processing Systems 1997

abstract

A new algorithm is presented which approximates the perceived visual similarity between images. The images are initially transformed into a feature space which captures visual structure, texture and color using a tree of filters. Similarity is the inverse of the distance in this perceptual feature space. Using this algorithm we have constructed an image database system which can perform example-based retrieval on large image databases. Using carefully constructed target sets, which limit variation to only a single visual characteristic, retrieval rates are quantitatively compared to those of standard methods.
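A hedged sketch of example-based retrieval: map each image to a feature vector, define similarity as the inverse of feature-space distance, and rank the database against a query. A few multi-scale summary statistics stand in for the paper's tree-of-filters feature space; names are illustrative.

import numpy as np

def box_downsample(img):
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def features(img, levels=3):
    feats, cur = [], img.astype(float)
    for _ in range(levels):
        feats += [cur.mean(), cur.std()]          # per-level summary statistics
        cur = box_downsample(cur)
    return np.array(feats)

def similarity(a, b):
    return 1.0 / (1.0 + np.linalg.norm(features(a) - features(b)))

def retrieve(query, database):
    scores = [similarity(query, img) for img in database]
    return np.argsort(scores)[::-1]               # best matches first

# Example: the query's near-duplicate should rank first.
rng = np.random.default_rng(0)
query = rng.random((32, 32))
yy, xx = np.mgrid[0:32, 0:32]
database = [xx / 31.0,                                     # smooth ramp
            query + 0.01 * rng.standard_normal((32, 32)),  # near-duplicate
            np.zeros((32, 32))]                            # blank image
print(retrieve(query, database))                  # expected order starts with 1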



MIMIC: Finding Optima by Estimating Probability Densities

1996
J. S. De Bonet, C. Isbell, and P. Viola
Advances in Neural Information Processing Systems 1996

abstract

In many optimization problems the structure of solutions reflects complex relationships between the different input parameters. Any search of the cost landscape should take advantage of these relations. For example, experience may tell us that certain parameters are closely related and should not be explored independently. Similarly, experience may establish that a subset of parameters must take on particular values. We present a framework in which we analyze the structural relationships of the optimization landscape. A novel and efficient algorithm for the estimation of this structure is derived. We use knowledge of this structure to guide a randomized search through the solution space. Our technique obtains significant speed gains over other randomized optimization procedures.
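A minimal MIMIC-style sketch for binary strings: keep the better half of the population, fit a chain of pairwise dependencies ordered by (conditional) entropy, and sample the next population from that chain. The cost function and constants here are illustrative, not from the paper.

import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def fit_chain(samples):
    """Greedy chain: start at the lowest-entropy bit, repeatedly append the
    bit with lowest conditional entropy given the one just added."""
    n = samples.shape[1]
    remaining = set(range(n))
    order = [min(remaining, key=lambda i: entropy(samples[:, i].mean()))]
    remaining.remove(order[0])
    cond = {}                                   # cond[i] = P(bit i = 1 | parent = 0/1)
    while remaining:
        prev = order[-1]
        def cond_entropy(i):
            h = 0.0
            for v in (0, 1):
                rows = samples[samples[:, prev] == v]
                if len(rows):
                    h += len(rows) / len(samples) * entropy(rows[:, i].mean())
            return h
        nxt = min(remaining, key=cond_entropy)
        remaining.remove(nxt)
        cond[nxt] = [samples[samples[:, prev] == v][:, nxt].mean()
                     if len(samples[samples[:, prev] == v]) else 0.5
                     for v in (0, 1)]
        order.append(nxt)
    return order, samples[:, order[0]].mean(), cond

def sample_chain(order, head_p, cond, size):
    out = np.zeros((size, len(order)), dtype=int)
    out[:, order[0]] = rng.random(size) < head_p
    for prev, nxt in zip(order, order[1:]):
        p = np.where(out[:, prev] == 1, cond[nxt][1], cond[nxt][0])
        out[:, nxt] = rng.random(size) < p
    return out

def mimic(cost, n_bits=16, pop=200, iters=15):
    samples = rng.integers(0, 2, size=(pop, n_bits))
    for _ in range(iters):
        costs = np.array([cost(s) for s in samples])
        elite = samples[costs <= np.median(costs)]    # keep the better half
        order, head_p, cond = fit_chain(elite)
        samples = sample_chain(order, head_p, cond, pop)
    return samples[np.argmin([cost(s) for s in samples])]

# Illustrative cost: penalize disagreement between adjacent bits.
cost = lambda s: np.sum(s[:-1] != s[1:])
print(mimic(cost))    # typically an all-zeros or all-ones string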



Reconstructing Rectangular Polyhedra From Hand-Drawn Wireframe Sketches

1995
J. S. De Bonet

abstract

Human observers are capable of interpreting hand-drawn sketches as three-dimensional objects, despite inconsistencies in lengths, variability in angles, and unconnected vertices. The current system is an attempt to achieve such robust performance in the limited domain of sketches of wireframe rectangular polyhedra. The latest version of this system reconstructs three-dimensional objects from perfect drawings, in which all angles and line junctions are consistent with projections of a rectangular polyhedron. Ambiguities which are inherent in such drawings are avoided by choosing a line grammar which yields only a single interpretation. Next, reconstruction from imperfect drawings, in which all the line segments were randomly perturbed, was achieved by grouping line endpoints into vertices while simultaneously restricting lines to particular orientations, and recovering three-dimensional form from the corrected line drawing. Finally, when actual hand-drawn sketches were used as input, we found that to successfully perform reconstruction the constraints on line orientations had to be replaced with constraints on segment lengths, and an additional three-dimensional point clustering process was needed.
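A toy sketch of one step described above, grouping nearby line-segment endpoints into shared vertices by greedy distance-threshold clustering and snapping segments to the cluster centroids; thresholds and names are illustrative, and the orientation constraints and 3D recovery stages are not shown.

import numpy as np

def cluster_endpoints(segments, radius=5.0):
    """segments: list of ((x1, y1), (x2, y2)) endpoint pairs from a sketch."""
    points = np.array([p for seg in segments for p in seg], dtype=float)
    labels = -np.ones(len(points), dtype=int)
    centroids = []
    for i, p in enumerate(points):
        # Join the nearest existing cluster if it is close enough...
        if centroids:
            d = np.linalg.norm(np.array(centroids) - p, axis=1)
            j = int(d.argmin())
            if d[j] < radius:
                labels[i] = j
                members = points[labels == j]
                centroids[j] = members.mean(axis=0)   # update running centroid
                continue
        # ...otherwise start a new cluster (vertex).
        labels[i] = len(centroids)
        centroids.append(p.copy())
    snapped = [(tuple(centroids[labels[2 * k]]), tuple(centroids[labels[2 * k + 1]]))
               for k in range(len(segments))]
    return snapped, centroids

# Example: three strokes that should meet at one corner near (10, 10).
strokes = [((0, 0), (9, 11)), ((11, 9), (20, 0)), ((10, 12), (10, 30))]
snapped, vertices = cluster_endpoints(strokes)
print(len(vertices))   # expected 4: the shared corner plus three free ends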






Jeremy S. De Bonet
jsd@debonet.com

Page last modified on 2006-05-27
Copyright © 1997-2024, Jeremy S. De Bonet. All rights reserved.