
Subsections

Search Tree

search_tree<T, allocator>

Consider a random variable $Z(\mathbf{u})$ which can take $K$ different values $z_1,\dots,z_K$. The aim of a search tree is to infer the ccdf of $Z(\mathbf{u})$ from a known realization of $Z(\mathbf{u})$, $z(\mathbf{u}_{\alpha_1}),\dots,z(\mathbf{u}_{\alpha_N})$, called the ``training image''.

Call $T = \{\mathbf{h}_1,\dots,\mathbf{h}_t\}$ the family of vectors defining a geometric template (or ``window'') of $t$ locations. These vectors are also called the nodes of the template. A data event implied by $T$, at location $\mathbf{u}$, is the sequence of values:

$$D_T(\mathbf{u}) = \{Z(\mathbf{u}+\mathbf{h}_1),\dots,Z(\mathbf{u}+\mathbf{h}_t)\}$$

$\mathbf{u}$ is called the central node of the template. Provided the training image is stationary, the same data event (i.e. the same sequence of values) can be observed at different locations.
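As a purely illustrative sketch (not GsTL code), the following C++ fragment shows how a data event could be extracted from a 2-D training image for a given template. The TrainingImage and Offset types and the data_event helper are assumptions made for this example only.

#include <vector>

// Purely illustrative types -- not part of GsTL.
struct Offset { int di, dj; };                    // one template vector h = (di, dj)

struct TrainingImage {
    int ni, nj;                                   // grid dimensions
    std::vector<int> values;                      // categorical values, stored row-major
    int at(int i, int j) const { return values[i * nj + j]; }
    bool contains(int i, int j) const { return i >= 0 && i < ni && j >= 0 && j < nj; }
};

// Data event D_T(u) = { Z(u + h_1), ..., Z(u + h_t) } at central node (i, j).
// Returns false when the template extends outside the grid.
bool data_event(const TrainingImage& ti, int i, int j,
                const std::vector<Offset>& templ, std::vector<int>& event) {
    event.clear();
    for (const Offset& h : templ) {
        if (!ti.contains(i + h.di, j + h.dj)) return false;
        event.push_back(ti.at(i + h.di, j + h.dj));
    }
    return true;
}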

Guardiano, and later Strebelle, proposed to model the probability

$$P\big(Z(\mathbf{u}) = z_k \;\big|\; \{z(\mathbf{u}+\mathbf{h}_1),\dots,z(\mathbf{u}+\mathbf{h}_t)\}\big)$$

by the frequency of occurrence in the training image of the event

$$z(\mathbf{u}_\alpha) = z_k \;\big|\; \{z(\mathbf{u}_\alpha+\mathbf{h}_1),\dots,z(\mathbf{u}_\alpha+\mathbf{h}_t)\}$$

($\mathbf{u}_\alpha$ is a location of the training image): if, for a given data event $d_T$, there are $n$ locations $\mathbf{u}_i$ in the training image such that

$$D_T(\mathbf{u}_i) = d_T, \qquad i = 1,\dots,n,$$

and among these $n$ locations, $n_k$ are such that the central pixel value $z(\mathbf{u}_j) = z_k$ ($j = 1,\dots,n_k$), then the probability $P\big(Z(\mathbf{u}) = z_k \mid d_T\big)$ is modeled by

$$P\big(Z(\mathbf{u}) = z_k \mid d_T\big) = \frac{n_k}{n}.$$
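For concreteness, here is a brute-force sketch of this frequency count obtained by scanning the whole training image (the search tree described below avoids such repeated scans). It reuses the hypothetical TrainingImage, Offset and data_event helpers from the previous sketch.

#include <map>
#include <vector>

// Illustration only: estimate P(Z(u) = z_k | d_T) by exhaustive scanning.
std::map<int, double> ccdf_from_scan(const TrainingImage& ti,
                                     const std::vector<Offset>& templ,
                                     const std::vector<int>& d_T) {
    std::map<int, int> n_k;                       // n_k: occurrences of each category z_k
    int n = 0;                                    // n: locations whose data event equals d_T
    std::vector<int> event;
    for (int i = 0; i < ti.ni; ++i)
        for (int j = 0; j < ti.nj; ++j)
            if (data_event(ti, i, j, templ, event) && event == d_T) {
                ++n;
                ++n_k[ti.at(i, j)];               // category observed at the central node
            }
    std::map<int, double> proba;                  // P(Z(u) = z_k | d_T) = n_k / n
    for (const auto& kv : n_k) proba[kv.first] = double(kv.second) / n;
    return proba;
}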



In some cases, data event $d_T$ cannot be found in the training image. Call $T_{-1}$ the subset of $T$ obtained by dropping one of the vectors (nodes) of $T$, and similarly $T_{-j}$ the subset of $T$ obtained after dropping $j$ vectors of $T$, for any $j \in \{0,\dots,t-1\}$, with $T_{-0} = T$. If data event $d_T$ cannot be found in the training image, template $T$ is recursively simplified into $T_{-1}, T_{-2},\dots$ until a data event $d_{T_{-j}}$ can be found. Typically, the nodes are dropped according to the amount of information they bring to the estimation of the probability distribution of $Z(\mathbf{u})$. The probability $P\big(Z(\mathbf{u}) = z_k \mid d_T\big)$ is then approximated by

$$P\big(Z(\mathbf{u}) = z_k \mid d_T\big) \simeq P\big(Z(\mathbf{u}) = z_k \mid d_{T_{-j}}\big)$$
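The recursive simplification could be sketched as follows, assuming the template vectors are ordered so that the least informative node comes last, and reusing the hypothetical ccdf_from_scan helper above. This is only an illustration of the idea, not the GsTL implementation.

// Illustration only: recursively drop template nodes until the reduced data
// event d_{T_{-j}} is found in the training image, then use its frequencies.
std::map<int, double> ccdf_with_dropping(const TrainingImage& ti,
                                         std::vector<Offset> templ,
                                         std::vector<int> d_T) {
    while (!templ.empty()) {
        std::map<int, double> proba = ccdf_from_scan(ti, templ, d_T);
        if (!proba.empty()) return proba;         // d_{T_{-j}} occurs in the training image
        templ.pop_back();                         // drop one node: T_{-j} -> T_{-(j+1)}
        d_T.pop_back();                           // and the corresponding conditioning datum
    }
    return {};                                    // no conditioning data could be retained
}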



A search tree is a data structure that stores all the data events $d_{T_{-j}}$ ($j = 0,\dots,t-1$) present in the training image, along with the corresponding frequencies of occurrence of each $z_k$ ($k = 1,\dots,K$) at the central node.
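Conceptually, such a structure can be pictured as a trie keyed by the successive template-node values, each node holding the counts $n_k$ of central values observed together with that prefix. The sketch below is schematic only and does not reflect the actual GsTL data structure; all names in it are assumptions.

#include <map>
#include <memory>
#include <vector>

// Schematic sketch of the search-tree idea (not the GsTL data structure).
// Each tree level corresponds to one template node h_j; the path from the
// root spells a data-event prefix, and every node keeps the counts n_k of
// central values observed together with that prefix in the training image.
struct SearchTreeNode {
    std::map<int, int> center_counts;                         // n_k for each category z_k
    std::map<int, std::unique_ptr<SearchTreeNode>> children;  // keyed by the value z(u + h_j)
};

// Record one training-image occurrence: the central value and its data event.
void insert_event(SearchTreeNode& root, int center_value,
                  const std::vector<int>& event) {
    SearchTreeNode* node = &root;
    ++node->center_counts[center_value];          // root: counts with no conditioning data
    for (int v : event) {                         // descend one level per template node
        std::unique_ptr<SearchTreeNode>& child = node->children[v];
        if (!child) child.reset(new SearchTreeNode);
        node = child.get();
        ++node->center_counts[center_value];      // counts for the prefix d_{T_{-j}}
    }
}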



Where Defined

In header file <cdf_estimators.h>



Template Parameters

T   is the type used to represent a category. It is a ``discrete type'', e.g. int, bool, ...
allocator   is an object that manages memory allocation for the search tree. Two models of allocator are available: a pool allocator and a standard allocator. The standard allocator allocates memory only when it is needed. This keeps memory usage as low as possible, but may not be efficient in terms of speed: memory allocation is a slow operation, and a significant speed improvement can be achieved by reducing the number of times memory is allocated. This is what the pool allocator does: memory is allocated in large chunks, or pools, and when new space is needed, it is taken from the pool without requiring the system to allocate new memory. The standard allocator should be used when memory is an issue, while the pool allocator is useful to improve performance. The pool allocator is the default.
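To illustrate the trade-off, here is a toy chunk-based pool, not GsTL's pool allocator: storage is carved out of large pre-allocated arrays, so most allocations amount to a pointer bump instead of a system allocation. The class name and chunk size are arbitrary choices for this sketch.

#include <cstddef>
#include <vector>

// Toy illustration of the pool idea (not GsTL's pool allocator).
template <typename Node>
class toy_pool {
public:
    explicit toy_pool(std::size_t chunk_size = 4096) : chunk_size_(chunk_size) {}
    toy_pool(const toy_pool&) = delete;
    toy_pool& operator=(const toy_pool&) = delete;
    ~toy_pool() { for (Node* chunk : chunks_) delete[] chunk; }

    // Hand out storage for one Node, allocating a new chunk only when the
    // current one is exhausted.
    Node* allocate() {
        if (chunks_.empty() || next_ == chunk_size_) {
            chunks_.push_back(new Node[chunk_size_]);
            next_ = 0;
        }
        return &chunks_.back()[next_++];
    }

private:
    std::size_t chunk_size_;
    std::size_t next_ = 0;
    std::vector<Node*> chunks_;
};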



Model of

Single Variable Cdf Estimator



Members

